One&Only: maximising outcomes, not just minimising cost

Nell Walker

Suwannee Asawanopakun, Director, Procurement & Projects at One&Only Reethi Rah Maldives, digs deep into the true function of procurement, and how a luxury hospitality business is thriving as we move past the worst of the pandemic…

An award-winning luxury hotel business, One&Only Reethi Rah Maldives offers its customers a level of bespoke service that's become synonymous with its name. In a reflection of its gorgeous island location, the operations at the One&Only Reethi Rah resort have to be absolutely pristine at all times, meaning procurement is very much at the core of every request and decision, no matter how big or small – even during a global pandemic.

For the Director, Procurement & Projects, Suwannee Asawanopakun, the work One&Only Reethi Rah has done to turn the resort into what it is today has been the crown jewel in an exciting career path towards luxury hospitality. After completing a degree in business administration, Asawanopakun began her career in sales in Thailand. Three years later, she took on an export sales position with a leading hotel supplies manufacturing company. She was soon promoted to a Regional Sales Manager position in charge of the Maldives, Mauritius, and the Seychelles, at which point she relocated to the Maldives. After five years of supplying hundreds of resorts, Asawanopakun decided she wanted to be on the other side of the conversation, having been drawn into the glamorous world of hospitality. So, she took on a role as pre-opening Purchasing Manager for an ultra-luxury Maldives resort. "From there, my procurement journey started," she says. "It was an amazing experience, to see an empty, sandy island become an ultra-luxury resort. 
I took a bigger role as the Director of Supply Chain Management for a group of international hotel chains in the Maldives, and soon after, I had the opportunity to be part of the number one resort in the Maldives – One&Only Reethi Rah. There have been so many exciting resort development projects since I started, all while leading a dynamic and enthusiastic team. "Procurement isn't just buying and bringing goods to the island – it's become a part of my life, sharpening my thinking and decision-making."

Procurement: the reality

This is something Asawanopakun is passionate about – the perception of procurement. More specifically, the incorrect perception of procurement. "Traditionally, procurement professionals are thought of as 'number crunchers' who work extremely hard to achieve the highest-quality product or service for the lowest cost," she explains. "However, procurement has become a more strategic function within the organisation, focused on adding value, increasing margin, and maximising the outcome – not just minimising cost."

Put simply, procurement is far broader than some people think – it touches every part of the organisation it operates within, rather than simply being a background function which ticks along at a single speed. However, by Asawanopakun's own admission, the role of procurement in a resort or hotel operation isn't immediately apparent. "Procurement occupies a place of singular importance," she says. "It serves not only to supply the organisation efficiently, but to produce value through optimal quality of goods and services as a function of internal and external customer service. Procurement costs are a core part of the profitability performance of a hospitality business." Essentially, procurement is the function which keeps hospitality running smoothly and, ultimately, gives the end-user the luxurious, customisable service they're there for.

The One&Only experience

And this is, of course, the case for One&Only Reethi Rah. 
The procurement team needs to constantly forecast demand for everything at the resort; for example, it has hundreds of varieties of dishes in the restaurant, and procurement needs to ensure all ingredients are available and fresh. Additionally, it needs rapid turnaround for any special requests, because at a luxury resort, saying 'no' to a guest is not an option. "Far from the image of the cost-killer, the procurement function plays a growing strategic role in the resort," says Asawanopakun. "Procurement management has become one of the company's main performance drivers."

Flexibility under pressure

Despite all the pressures of maintaining a resort like One&Only Reethi Rah, procurement is almost always smooth sailing. According to Asawanopakun, if all stakeholders work as a team and things are well-planned, there are no challenges besides those few circumstances which are out of anyone's control. The main potential issue is geography: the Maldives is a low-lying garland of tropical islands, and much of the food the resort needs is sourced from all over the world. Sometimes, bad weather can interfere, and that's when procurement has to move swiftly to solve the supply issue at minimum cost. Being able to operate in such a swift and agile way is thanks to the way One&Only Reethi Rah's procurement is structured from the very core. "Procurement at One&Only Reethi Rah is simple yet complex," Asawanopakun explains. "It starts by identifying and planning the business needs, the stakeholders, timeline, and budget. We engage with our Heads of Department to identify their needs in terms of products, specification, budget, and required period. The procurement team will then source all the suitable products at the best cost from reliable sources across the world. Evaluation is done and agreed upon, and negotiation is another important process for achieving the best total cost."
Shipping modes are decided based on cost and transit time, and import documentation must be prepared as per customs regulations. Then everything is unloaded and checked, before the procurement process is analysed using an established system of KPIs. This helps in assessing the efficacy, cost efficiency, speed, and overall success of the whole process.

Collaboration and communication

"Procurement isn't just a simple, routine job – it's a customised routine job which we need to focus on and pay attention to, while loving what we do." For Asawanopakun, the twin keys to keeping this moving like clockwork are collaboration and communication. These ensure each departmental team understands exactly what purchasing does and how to work together effectively. It goes far, far beyond finding specialised products or the best price – it also includes presenting and agreeing a strategy, working closely with internal customers, drawing on the Heads of Department's expertise, calculating risk, health and safety – the list is endless. "People deal with people, so the key is to gain respect from the teams. They must see that our knowledge and strategy will drive value to the business while making their job easier." There's always room for improvement. In fact, One&Only Reethi Rah is consistently striving for better, because that's how a business remains agile. And a major element of that is the procurement function. "Procurement is changing faster than ever in every way," says Asawanopakun. "The key drivers for our continuous improvement and innovation are attention to detail, focus on quality, IT infrastructure, brand consistency and compliance, cost management, and a two-way partnership. We build lasting and trusting relationships with all stakeholders. We encourage innovation and proactive cultures in our internal customers and procurement team. We're also focused on CSR and sustainability, which compel our resort and suppliers to find new and innovative working approaches." 
Much of this comes down to synergistic thinking through the best possible communication across all the One&Only Reethi Rah teams. Each team gets to apply its valuable skills, and to be proactive and self-motivated in speaking up with its own ideas, because this is a business that values collaborative thinking. "We never stop identifying the opportunities for improvement, wherever we can," says Asawanopakun. And synergy is more than what happens within the organisation and with the end-users – synergy with suppliers is vital, especially when, as mentioned before, supply to the Maldives can be disrupted by the elements. One&Only Reethi Rah has to have strong relationships with all parts of the logistics network to make sure problems don't arise. "Suppliers are effectively our partners in shared success," says Asawanopakun. "The more effective our supplier relationships are, the better the chances we'll be able to take advantage of valuable opportunities. We make our suppliers feel that they are part of our team and success. We work with all partners with openness, fairness, and clarity."

With this clearly being a success story, you may be wondering how COVID-19 factors into what One&Only Reethi Rah has been doing for the last 18 months. The hospitality industry largely ground to a halt across the world for much of 2020 and into 2021 – so how has the resort been affected? "When we first learned we were heading into lockdown, supply chains felt the pressure," Asawanopakun admits. "We had our in-house guests staying with us during lockdown, and service couldn't be compromised. It reached a point where all movements were frozen." One&Only Reethi Rah had to get a special permit to send its cargo boat to the capital of the Maldives to collect goods, as flights were cut off. Supplies were limited, and sourcing had to be shuffled around to suit the small number of countries that could still supply the area. 
The result was less movement, longer transit times, and higher costs – but, as soon as lockdown was eased, business not only went back to normal, it was busier than ever. The downside was that there were still shortages, but Asawanopakun and the procurement team learned a lot. "We learned to be alert all the time, keep updated with the daily situation locally and worldwide, think outside the box, plan ahead, work as a team, and never take anything for granted." As well as strengthening the way the One&Only Reethi Rah teams operate, there's been another big benefit to the events of the pandemic: "We know who our trustworthy partners are," Asawanopakun says. "Most of our supplier relationships have become stronger; we're communicating more and they have been working hard to give us the most accurate information. This truly was a situation that no-one expected and I believe everyone tried their best to maintain a decent business relationship." Through thick and thin, and as the world slowly opens up once more, One&Only Reethi Rah is continuing to supply incredible service in a stunning location – and it's stronger than ever. 
The list of biographies includes all persons who have an article in the German-language Wikipedia. This is a partial list with 29 entries of persons whose names begin with the letters "Petu". Petu Petub Petubastis I, ancient Egyptian pharaoh Petubastis II, ancient Egyptian pharaoh Petubastis III, ancient Egyptian nomarch Petubastis IV Seheribre, Late Period Egyptian ruler Petuc Petuchow, Alexander (* 1985), Kazakh football goalkeeper Petuchow, Alexei Jewgenjewitsch (* 1983), Russian cross-country skier Petuchow, Jegor (* 1994), Russian-Kazakh ice hockey player Petuchow, Sergei Alexandrowitsch (* 1983), Russian sprinter Petuchow, Stanislaw Afanassjewitsch (* 1937), Soviet ice hockey player Petuchowa, Iraida Georgijewna (1926–2018), Soviet architect Petuchowski, Jakob Josef (1925–1991), American liberal rabbi and university professor Petue Petuel, Karolina (1873–1956), Munich social benefactor Petuel, Ludwig jun. (1870–1951), Munich businessman Petuel, Ludwig sen. 
(1839–1911), Munich businessman Petuel, Thomas (1797–1847), businessman from Freising Petuely, Friedrich (1922–1994), Austrian pediatrician and biochemist Petuely, Kevin (* 2004), Austrian footballer Petur Pétur Guðmundsson (* 1958), Icelandic basketball player Pétur Guðmundsson (* 1962), Icelandic shot putter Pétur Gunnarsson (* 1947), Icelandic author and translator Pétur Haraldsson Blöndal (1944–2015), Icelandic politician (Independence Party) Pétur Ormslev (* 1958), Icelandic footballer Pétur Pétursson (* 1959), Icelandic footballer Pétur Sigurðsson (1928–2002), Icelandic track and field athlete Pétur Sigurgeirsson (1919–2010), Icelandic clergyman, head of the Church of Iceland Pétursson, Magnús (* 1940), Icelandic phonetician Petus Petuschkow, Anton Andrejewitsch (* 1992), Russian natural track luger Petuschkowa, Jelena Wladimirowna (1940–2007), Russian dressage rider, sports official, and biochemist Petut Petutschnig, Lorenz (* 1993), Austrian beach volleyball player
\section{Introduction} How fast a quantum system can evolve is usually referred to as the problem of the quantum speed limit in quantum foundations. This problem is important as it is naturally related to the uncertainty relations and other fundamental properties of quantum mechanics. For instance, the first bound on the quantum speed limit, given by Mandelstam and Tamm in 1945~\cite{Mandelstam1945}, was derived from the uncertainty relations. Nowadays, the quantum speed limit has gone far beyond quantum foundations and attracted much attention from the quantum information and quantum technology community, because noise is the major obstacle preventing most quantum information processing tasks from providing true quantum advantages in practice, and fast evolution can be a very useful way to reduce the effect of noise in these processes and help reveal the quantum advantage. The historical development of the quantum speed limit is essentially the development of its mathematical tools. Until now, various tools have been developed for different scenarios~\cite{Deffner2017}, including unitary dynamics~\cite{Mandelstam1945, Margolus1998,Giovannetti2004}, open systems~\cite{Taddei2013,Campo2013,Deffner2013, Sun2015,Marvian2015,Mirkin2016,Campo2019,Campaioli2018,Campaioli2019,Chenu17,Beau17b, Cai2017,Villamizar2015,Sun2019,Meng2015,Mirkin2020,Zhang2014,Liu2015,Wu2018}, quantum metrology~\cite{Giovannetti2003,Giovannetti2006,Beau17a}, quantum control~\cite{Caneva2009,Hegerfeldt2013,Funo17,Campbell2017,Poggi2019}, quantum phase transitions~\cite{Heyl2017,Shao2020}, quantum information processing~\cite{Epstein2017, Girolami2019,Ashhab2012}, quantum resources~\cite{Marvian2016}, the geometry of quantum mechanics~\cite{Pires2016,Bukov2019}, and even classical systems~\cite{Margolus11,Shanahan2018,Okuyama2018,Amari16}. 
Most existing tools in this field can be divided into the Mandelstam-Tamm type and the Margolus-Levitin type, which originate from the Mandelstam-Tamm bound $\pi/(2\Delta H)$~\cite{Mandelstam1945} and the Margolus-Levitin bound $\pi/(2\langle H\rangle)$~\cite{Margolus1998}, where $\langle H\rangle$ is the expected value of the Hamiltonian $H$ and $\Delta H:=\sqrt{\langle H^2\rangle-\langle H\rangle^2}$ is the corresponding deviation. The major difference between these two types is that the former is depicted by the deviation while the latter only uses the expected value. Different from these two types, an operational definition of the quantum speed limit was proposed in 2020~\cite{Shao2020} based on the optimization over states that can fulfill the target. One advantage of this operational definition is its independence of the quantum states, which means it is the systematic minimum time scale for this target and is determined only by the Hamiltonian structure. However, in quantum technology, some specific quantum states, like N00N states, cat states or certain types of entangled states, may be more worth studying than a general one in some scenarios, and it is also possible that one does not care about the systematic minimum time scale, but is more interested in the time scale of these specific states. In such cases, state-dependent tools could be more handy than the operational definition. In the meantime, since the operational definition only includes the information of the systematic minimum time scale and the corresponding optimal states, it cannot reflect the closeness of the time scales between the concerned states and the optimal ones. Hence, state-dependent tools, especially those that naturally connect to the operational definition, would be very helpful to reveal this closeness and thus more useful in practice. Searching for such state-dependent tools is a major motivation of this paper. 
In mathematics, for a set of bounded real numbers $\{x_i\}$ with the expected value $\bar{x}=\frac{1}{n}\sum_{i}x_i$ ($n$ is the number of elements), Bhatia and Davis provided a very useful upper bound on the variance $\mathrm{var}(x)=\frac{1}{n}\sum_{i}(x_i-\bar{x})^2$ in 2000~\cite{Bhatia2000}, i.e., \begin{equation} \mathrm{var}(x)\leq (M-\bar{x})(\bar{x}-m), \end{equation} where $m$, $M$ are the lower and upper bounds of the set, namely, $m\leq x_i\leq M$ for any element $x_i$ in the set. This bound can be naturally extended to statistics and further to quantum mechanics. In quantum mechanics, the Bhatia-Davis inequality can be rewritten as $\Delta^2 H\leq(E_{\max}-\langle H\rangle) (\langle H\rangle-E_{\min})$ with $E_{\max}$ and $E_{\min}$ the maximum and minimum energies with respect to $H$. Compared to the Mandelstam-Tamm bound, it is obvious that \begin{equation} \frac{\pi}{2\Delta H}\geq\frac{\pi}{2\sqrt{(E_{\max}-\langle H\rangle) (\langle H\rangle-E_{\min})}}, \end{equation} namely, the Bhatia-Davis inequality provides a lower bound for the Mandelstam-Tamm bound. However, since the Mandelstam-Tamm bound itself is only attainable for two-level pure states~\cite{Deffner2017,Levitin2009}, the Bhatia-Davis bound above is even more difficult to saturate, and thus lacks practicability. Nevertheless, things are more complicated in the Bloch representation, which gives us a chance to introduce a similar formula by replacing $\pi$ with a general target angle $\Theta$ defined in the Bloch representation, and to thoroughly study its role in the quantum speed limit. Throughout this paper, this formula will be referred to as the Bhatia-Davis formula. The connection between this formula and the operational definition will be studied, along with its behaviors and roles in both multilevel and few-level systems from the aspect of the quantum speed limit. 
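The original Bhatia-Davis inequality is easy to check numerically. The following sketch (our own illustration, not part of the paper's derivation) draws a bounded random sample and verifies $\mathrm{var}(x)\leq (M-\bar{x})(\bar{x}-m)$, taking the observed minimum and maximum as the bounds $m$ and $M$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A bounded sample; any valid lower/upper bounds work, here the observed min/max.
x = rng.uniform(-3.0, 5.0, size=1000)
mean = x.mean()
var = np.mean((x - mean) ** 2)      # population variance, as in the text
m, M = x.min(), x.max()

bd_bound = (M - mean) * (mean - m)  # Bhatia-Davis upper bound on var(x)
holds = var <= bd_bound
```

Tightening the bounds $m$, $M$ toward the actual extremes of the sample makes the bound sharper, which mirrors the quantum version with $E_{\min}$ and $E_{\max}$.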
\section{Bhatia-Davis formula} \subsection{Upper bound of the OQSL} The Bloch representation is a common geometric approach for quantum states, which is widely applied in many topics in quantum information. In this representation, an $N$-dimensional density matrix $\rho$ can be expressed by \begin{equation} \rho=\frac{1}{N}\left(\openone+\sqrt{\frac{N(N-1)}{2}}\vec{r}\cdot\vec{\lambda}\right), \end{equation} where $\vec{r}$ is a real vector referred to as the Bloch vector, $\openone$ is the identity matrix and $\vec{\lambda}$ is the $(N^2-1)$-dimensional vector of SU(N) generators. Throughout this paper, the target angle is defined by the angle between the Bloch vectors of the initial state $\vec{r}$ and its evolved state $\vec{r}(t)$~\cite{Campaioli2018,Campaioli2019,Shao2020} \begin{equation} \theta(t,\vec{r})=\arccos\left(\frac{\vec{r}\cdot\vec{r}(t)}{|\vec{r}| |\vec{r}(t)|}\right), \label{eq:Bloch_angle} \end{equation} where $\theta(t, \vec{r})\in [0, \pi]$. In the Bloch representation, we define the general form of Bhatia-Davis formula as \begin{equation} \tau_{\mathrm{BD}}:=\frac{\Theta}{2\sqrt{(E_{\mathrm{max}}-\langle H\rangle) (\langle H\rangle-E_{\min})}}, \label{eq:tauBD} \end{equation} where $E_{\mathrm{max}}$ and $E_{\min}$ are the highest and lowest energies of the Hamiltonian $H$ and $\langle H\rangle=\mathrm{Tr}(\rho H)$ is the expected value with respect to the state $\rho$. $\Theta$ is a fixed target angle. In the entire paper, we denote $E_k$ ($|E_k\rangle$) as the $k$th eigenvalue (eigenstate) of $H$ with $k\in[0,N-1]$. Without loss of generality, we assume $E_k\leq E_j$ for $k<j$ and there exist at least two different energy values in $H$, namely, not all the equalities can be achieved simultaneously. \begin{figure*}[tp] \includegraphics[width=14cm]{oat_final.pdf} \caption{(a) The OQSL $\tau$ as a function of $|\delta|$ for particle number $n=4$ (solid red line), $n=8$ (dash-dotted blue line) and $n=20$ (dashed black line). 
(b) The Bhatia-Davis formula $\tau_{\mathrm{BD}}$ and the OQSL $\tau$ as a function of $\phi$ for different values of $\delta$. The solid red, dash-dotted green and circled yellow lines represent $\tau_{\mathrm{BD}}$ for $\delta=1.0,4.0,8.0$, and the dashed black, dotted blue and squared cyan lines represent $\tau$ for $\delta=1.0,4.0,8.0$, respectively. (c) The difference between $\tau_{\mathrm{BD}}$ and $\tau$ (on a log scale) as a function of $\phi$ and $\delta$ for $n=10$. (d) The regimes in (c) in which the relative difference $R=(\tau_{\mathrm{BD}}-\tau) /\tau<1\%$ (white area), $1\%<R<10\%$ (light gray area) and $R>10\%$ (dark gray area). $\chi$ is set to 1 in all panels. \label{fig:oat}} \end{figure*} Recently, an operational definition of the quantum speed limit (OQSL) was provided and discussed~\cite{Shao2020}. The OQSL is defined via the set of states that can reach the target angle, $\mathcal{S}:=\{\vec{r}|\theta(t,\vec{r})=\Theta, \exists t\}$. With this set, the OQSL (denoted by $\tau$) is defined as~\cite{Shao2020} \begin{eqnarray} \tau &=& \min_{\vec{r}\in\mathcal{S}} t \nonumber \\ & & \mathrm{subject}~\mathrm{to}~~\theta(t,\vec{r})=\Theta. \end{eqnarray} The Bhatia-Davis formula has a natural connection with the OQSL due to the following theorem. \begin{theorem} \label{theorem:upperbound} For time-independent Hamiltonians under unitary evolution, the Bhatia-Davis formula is an upper bound for the OQSL, i.e., \begin{equation} \tau_{\mathrm{BD}} \geq \tau. \end{equation} \end{theorem} The proof is given in Appendix~\ref{sec:apx_tauBD_tau}. The equality can be attained when the average energy is half of the sum of the highest and lowest energies. This theorem indicates that $\tau_{\mathrm{BD}}$ can reflect the closeness between the time scale of specific states and the systematic minimum time scale when the equality is attainable. 
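As a numerical illustration of the theorem, the sketch below compares $\tau_{\mathrm{BD}}$ with the OQSL for a random Hermitian Hamiltonian and a random pure state, assuming the closed form $\tau=\Theta/(E_{\max}-E_{\min})$ for time-independent Hamiltonians quoted later from Ref.~\cite{Shao2020}:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Theta = 5, np.pi / 2

# Random Hermitian Hamiltonian and random pure state.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)

evals = np.linalg.eigvalsh(H)
E_min, E_max = evals[0], evals[-1]
mean_H = float(np.real(psi.conj() @ H @ psi))

# Bhatia-Davis formula vs. the assumed OQSL closed form.
tau_BD = Theta / (2 * np.sqrt((E_max - mean_H) * (mean_H - E_min)))
tau = Theta / (E_max - E_min)
```

The inequality $\tau_{\mathrm{BD}}\geq\tau$ holds here because $(E_{\max}-\langle H\rangle)(\langle H\rangle-E_{\min})\leq\frac{1}{4}(E_{\max}-E_{\min})^2$ for any $\langle H\rangle$ between $E_{\min}$ and $E_{\max}$, with equality exactly when $\langle H\rangle=\frac{1}{2}(E_{\max}+E_{\min})$.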
In the following we take the generalized one-axis twisting model~\cite{Kitagawa1993,Ma2011,Jin2009} as an example to discuss this theorem. The Hamiltonian for this model is \begin{equation} H=\chi J^2_z+\delta J_z, \end{equation} where $J_z=\frac{1}{2}\sum^{n}_{i=1}\sigma^{(i)}_z$ with $\sigma^{(i)}_z$ the Pauli Z matrix for the $i$th spin and $\chi$, $\delta$ the coefficients. The Dicke state $|J=n/2,m\rangle$ ($m=0,\pm 1,\cdots,\pm J$ when $n$ is even and $m=\pm 1/2, \pm 3/2,\cdots,\pm J$ when $n$ is odd) is the eigenstate of $J_z$ with the corresponding eigenvalue $m$. For the sake of simplicity, here we assume $n$ is even and $\chi>0$. According to Ref.~\cite{Shao2020}, the OQSL in this case can be expressed by $\tau=\Theta/(E_{\mathrm{max}}-E_{\min})$. The highest energy \begin{equation} E_{\max}=\frac{1}{4}\left(\chi n^2+2|\delta| n\right), \end{equation} and the lowest energy \begin{equation} E_{\min}=\begin{cases} \chi \mathcal{R}^2(\frac{\delta}{2\chi})-\delta\mathcal{R}(\frac{\delta}{2\chi}), & \mathrm{for}~|\delta|/\chi \leq n, \\ \frac{1}{4}(\chi n^2-2|\delta|n), & \mathrm{for}~|\delta|/\chi > n, \end{cases} \end{equation} where $\mathcal{R}(\cdot)$ represents rounding to the nearest integer. The OQSL can then be obtained correspondingly. When $\delta=0$, the Hamiltonian is a standard one-axis twisting one, and the OQSL reduces to a simple form \begin{equation} \tau_{0}=\frac{4\Theta}{\chi n^2}, \end{equation} which decreases quadratically with the growth of particle number $n$. As a matter of fact, compared to $\tau_{0}$, the linear term $\delta J_z$ can facilitate the reduction of the OQSL, as shown in Fig.~\ref{fig:oat}(a). For example, in the case of a small $\delta$, $\tau\propto 1/(\chi n^2+2|\delta|n)$. 
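The closed forms of $E_{\max}$ and $E_{\min}$ above can be checked directly against the Dicke spectrum $E_m=\chi m^2+\delta m$. A minimal sketch (the function names are ours, not from the text):

```python
import numpy as np

def oqsl_oat(n, chi, delta, Theta):
    """OQSL Theta/(E_max - E_min) for H = chi*J_z^2 + delta*J_z from the spectrum."""
    m = np.arange(-n // 2, n // 2 + 1)       # J_z eigenvalues on Dicke states (n even)
    energies = chi * m**2 + delta * m
    return Theta / (energies.max() - energies.min())

def oqsl_oat_closed(n, chi, delta, Theta):
    """The same quantity from the closed-form E_max and E_min quoted in the text."""
    E_max = 0.25 * (chi * n**2 + 2 * abs(delta) * n)
    if abs(delta) / chi <= n:
        R = round(delta / (2 * chi))         # round to the nearest integer
        E_min = chi * R**2 - delta * R
    else:
        E_min = 0.25 * (chi * n**2 - 2 * abs(delta) * n)
    return Theta / (E_max - E_min)
```

For $\delta=0$ the closed form reproduces $\tau_0=4\Theta/(\chi n^2)$; exact half-integer minimizers of $\chi m^2+\delta m$ are ties, so either rounding direction gives the same $E_{\min}$.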
However, with the increase of $|\delta|$, when it is larger than $\chi n$, the OQSL becomes \begin{equation} \tau_{1}=\frac{\Theta}{|\delta|n}, \end{equation} which shows that the OQSL in this regime is not as sensitive to $n$ as $\tau_0$. In this model, a widely used state is the coherent spin state $\exp(\zeta J_{+} -\zeta^{*}J_{-})|J,J\rangle$ ($J_{\pm}=J_x\pm i J_y$). Since $\zeta$ can be rewritten as $\zeta=-\frac{\phi}{2}\exp(-i\varphi)$ with $\phi\in[0,\pi]$ and $\varphi\in [0,2\pi]$, the coherent spin state can also be denoted by $|\phi,\varphi\rangle$. For this state, the Bhatia-Davis formula $\tau_{\mathrm{BD}}$ can be obtained by noticing that the mean energy (details in Appendix~\ref{sec:apx_tauBD_tau}) is \begin{equation} \langle H\rangle=\frac{1}{4}\left(2\delta n\cos\phi+\chi n^2\cos^2\phi +\chi n\sin^2\phi\right), \end{equation} which indicates that $\tau_{\mathrm{BD}}$ is not affected by $\varphi$. Figure~\ref{fig:oat}(b) shows the values of $\tau_{\mathrm{BD}}$ and $\tau$ as a function of $\phi$ for different values of $\delta$. In the case of $\delta=1.0$, two regimes of $\phi$, around $\pi/4$ and $3\pi/4$, are optimal for $\tau_{\mathrm{BD}}$ to attain $\tau$. However, with the increase of $\delta$ ($4.0$ and $8.0$ in the plot), the optimal regime around $\pi/4$ moves to the right and the optimal regime around $3\pi/4$ vanishes completely. To provide a complete picture of the attainable states, the difference between $\tau_{\mathrm{BD}}$ and $\tau$ (on a log scale) is given in Fig.~\ref{fig:oat}(c) as a function of $\phi$ and $\delta/\chi$, which confirms that the attainable regime for a large $\phi$ vanishes with the growth of $\delta$ when it is positive. As a matter of fact, most coherent spin states have a chance to be attainable states when $\delta$ is tuned to proper values. In particular, the required optimal values of $\delta$ are very small when $\phi$ is less than $\pi/4$ or larger than $3\pi/4$. 
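The quoted mean energy can be verified by constructing the coherent spin state explicitly in the Dicke basis. A sketch (assuming the standard ladder convention $J_+|J,m\rangle=\sqrt{J(J+1)-m(m+1)}|J,m+1\rangle$; the function names are ours):

```python
import numpy as np

def coherent_spin_state(n, phi, varphi):
    """|phi,varphi> = exp(zeta*J+ - zeta^* J-)|J,J> in the Dicke basis, J = n/2."""
    J, dim = n / 2, n + 1
    m = np.arange(J, -J - 1, -1)                 # basis order |J,J>, ..., |J,-J>
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):                      # J+|J,m> = sqrt(J(J+1)-m(m+1))|J,m+1>
        Jp[k - 1, k] = np.sqrt(J * (J + 1) - m[k] * (m[k] + 1))
    zeta = -0.5 * phi * np.exp(-1j * varphi)
    A = zeta * Jp - np.conj(zeta) * Jp.T         # anti-Hermitian generator
    w, V = np.linalg.eigh(-1j * A)               # A = i*B with B Hermitian
    U = V @ np.diag(np.exp(1j * w)) @ V.conj().T
    ket = np.zeros(dim, dtype=complex)
    ket[0] = 1.0                                 # |J, J>
    return U @ ket, m

def mean_H_exact(n, chi, delta, phi, varphi=0.3):
    """<H> for H = chi*J_z^2 + delta*J_z, which is diagonal in the Dicke basis."""
    psi, m = coherent_spin_state(n, phi, varphi)
    return float(np.sum(np.abs(psi) ** 2 * (chi * m**2 + delta * m)))

def mean_H_closed(n, chi, delta, phi):
    """The closed-form mean energy quoted in the text (independent of varphi)."""
    return 0.25 * (2 * delta * n * np.cos(phi)
                   + chi * n**2 * np.cos(phi) ** 2
                   + chi * n * np.sin(phi) ** 2)
```

The agreement follows from $\langle J_z\rangle=\frac{n}{2}\cos\phi$ and $\langle J_z^2\rangle=\frac{n^2}{4}\cos^2\phi+\frac{n}{4}\sin^2\phi$ for a coherent spin state.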
The states around $\phi=\pi/2$ are more difficult to be attainable states since they require large values of $\delta$. However, although $\tau_{\mathrm{BD}}$ for these states is not optimal, when $\delta/\chi$ is larger than, for example, around $10.0$, there is still a large regime around $\pi/2$ (on the left for $\delta/\chi>0$ and right for $\delta/\chi<0$) in which the relative difference $R=(\tau_{\mathrm{BD}}-\tau)/\tau$ is less than $10\%$, as shown in Fig.~\ref{fig:oat}(d). In the meantime, the area in the plot for the regime where $R<1\%$ is around $7.9\%$ ($R<10\%$ is around $16.4\%$) of the total area, indicating that in this case, the time scale for the coherent spin states to reach the target could be very close to the systematic minimum time for a loose range of $\delta$. Another interesting phenomenon is that the behavior of the difference between $\tau_{\mathrm{BD}}$ and $\tau$ is dramatically different for positive and negative signs of $\delta/\chi$ when it is not very large. $\tau_{\mathrm{BD}}$ is much closer to $\tau$ for a negative (positive) $\delta/\chi$ when $\phi$ is small (large), which is due to the fact that $\langle H\rangle$ is closer to $\frac{1}{2}(E_{\max}+E_{\min})$ for a negative value of $\delta/\chi$. \subsection{Largest target angle $\Theta=\pi$} \label{sec:multi} In the study of the quantum speed limit, the largest target angle $\Theta=\pi$ is worth particular attention, as in the Mandelstam-Tamm bound~\cite{Mandelstam1945} and Margolus-Levitin bound~\cite{Margolus1998}, since it indicates the highest distinguishability. In the spirit of the operational definition, the set of states that can fulfill the target, i.e., the set $\mathcal{S}$, should be studied first, as the state-dependent tools cannot reveal this information. Considering the unitary evolution, we have the following observations on the set $\mathcal{S}$ for the largest target $\Theta=\pi$. 
\begin{theorem} \label{theorem:multiB} For any finite-level Hamiltonian, there always exist states that fulfill the target $\Theta=\pi$, i.e., the set $\mathcal{S}$ cannot be an empty set. Furthermore, the set \begin{equation*} \mathcal{S}_0\!:=\!\{\vec{r}\,|r^2_{j^2+2k-1}\!+\!r^2_{j^2+2k}\!=\!|\vec{r}|^2, \forall j\!\in\![1,\!N\!-\!1],k\!\in\![0,j\!-\!1]\} \end{equation*} is always a subset of $\mathcal{S}$, i.e., \begin{equation} \mathcal{S}_0\subseteq \mathcal{S}. \end{equation} \end{theorem} Here $r_i$ is the $i$th entry of the Bloch vector $\vec{r}$. This theorem means that any state in $\mathcal{S}_0$ can fulfill the target regardless of the Hamiltonian structure. In the density matrix representation, the states in $\mathcal{S}_0$ take the form \begin{equation} \left(\begin{array}{cccccc} \frac{1}{N} & 0 & \cdots & \cdots & \cdots & 0 \\ 0 & \frac{1}{N} & \vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \ddots & \vdots & \rho_{kj} & \vdots\\ \vdots & \rho_{kj}^{*} & \vdots &\ddots & \vdots & \vdots\\ \vdots & \vdots & \vdots & \vdots & \frac{1}{N} & \vdots\\ 0 & \cdots & \cdots & \cdots & \cdots & \frac{1}{N} \end{array}\right) \label{eq:state_S0} \end{equation} in the energy basis $\{|E_k\rangle\}$, where all the diagonal entries are $1/N$, and the only nonzero off-diagonal entries are the $kj$th and $jk$th ones with $j\in[1,N-1]$ and $k\in[0,j-1]$. Here $|\rho_{kj}|\in(0,1/N]$. As a matter of fact, the theorem above also indicates that $\mathcal{S}_0$ is the minimum set for $\mathcal{S}$, which raises an interesting question: what kind of Hamiltonians possess the minimum set $\mathcal{S}_0$ as their $\mathcal{S}$? By denoting $\mathcal{E}_{\mathrm{d}}$ as the set of all the values of energy differences, namely, \begin{equation} \mathcal{E}_{\mathrm{d}}=\{E_j-E_k|\forall j\in[1,N-1],k\in[0,j-1]\}, \end{equation} this question is answered by the corollary below. 
\begin{corollary} \label{corollary:S0_1} For the target $\Theta=\pi$, if a Hamiltonian satisfies that the ratio between any two elements in $\mathcal{E}_{\mathrm{d}}$ cannot be written as the ratio between two odd numbers, i.e., \begin{equation} \frac{E_{j_1}-E_{k_1}}{E_{j_2}-E_{k_2}}\neq \frac{2m_1+1}{2m_2+1} \label{eq:Ed_cond} \end{equation} with $m_1,m_2$ any two non-negative integers for any two different groups of subscripts $(j_1,k_1)$ and $(j_2,k_2)$, then \begin{equation} \mathcal{S}=\mathcal{S}_0. \end{equation} \end{corollary} Here two different groups of subscripts mean that $j_1=j_2$ and $k_1=k_2$ cannot hold simultaneously. The proofs of the theorem and corollary above are given in Appendix~\ref{sec:apx_multilevel}. A natural Hamiltonian structure satisfying Eq.~(\ref{eq:Ed_cond}) is one in which all the elements in $\mathcal{E}_{\mathrm{d}}$ are non-commensurable with each other, which leads to the next corollary. \begin{corollary} \label{corollary:S0_2} For the target $\Theta=\pi$ and the Hamiltonians with non-commensurable energy differences, $\mathcal{S}=\mathcal{S}_0$. \end{corollary} Corollary~\ref{corollary:S0_1} leads to the following no-go corollary for multilevel systems (with at least three energy levels). \begin{corollary} \label{corollary:nopure} For multilevel systems with Hamiltonians stated in Corollary~\ref{corollary:S0_1}, no pure state can fulfill the target $\Theta=\pi$. \end{corollary} In practice, quantum systems are inevitably exposed to the environment and therefore suffer from noise. Hence, the set $\mathcal{S}$ is in general affected by the noise. The target $\Theta=\pi$ might be the most sensitive case, as it requires a large rotation of the Bloch vector, which may not be possible under some types of noise. 
For example, if there exists a steady state for some noisy dynamics, it is very possible that the states whose angle with the steady state is less than $\pi$ can never reach the target during the evolution for a large enough decay rate. Hence, the number of states in the reachable set $\mathcal{S}$ could be very limited in such cases, and the set may even become empty. For a more intuitive understanding, here we take the damped five-level system as an example. The decoherence is described by the master equation~\cite{Breuer2007} \begin{align} \partial_t\rho &=-i[H,\rho]+\gamma_0 (\bar{n}+1)\left(a\rho a^{\dagger} -\frac{1}{2}a^{\dagger}a\rho-\frac{1}{2}\rho a^{\dagger}a\right) \nonumber \\ &+\gamma_0\bar{n}\left(a^{\dagger}\rho a-\frac{1}{2}aa^{\dagger}\rho -\frac{1}{2}\rho aa^{\dagger}\right), \end{align} where $a$ ($a^{\dagger}$) is the lowering (raising) operator, $\gamma_0$ is a constant decay rate, and $\bar{n}=(e^{\omega_0/(k_{\mathrm{B}}T)}-1)^{-1}$ is the Planck distribution with $k_{\mathrm{B}}$ the Boltzmann constant and $T$ the temperature. Now consider two groups of energies $\{E_k\}=\{1.0,2.1,4.5,8.3,11.0\}$ (denoted by $H_1$) and $\{E_k\}=\{1.0,2\sqrt{7},6\sqrt{2},6\sqrt{3},6\sqrt{5}\}$ (denoted by $H_2$). According to Corollaries~\ref{corollary:S0_1} and~\ref{corollary:S0_2}, only the states in the form of Eq.~(\ref{eq:state_S0}), namely those in the set $\mathcal{S}_0$, can reach the target $\Theta=\pi$ under the unitary dynamics. To show the influence of noise on $\mathcal{S}$, $5000$ random states in $\mathcal{S}_0$ are used to test the attainability of the target $\Theta=\pi$, as given in Fig.~\ref{fig:S_gamma}, for $H_1$ (red circles) and $H_2$ (blue squares). $\bar{n}$ is set to 1 in the figure. In the absence of noise ($\gamma_0=0$), all the states can reach the target, just as Theorem~\ref{theorem:multiB} states. With the increase of the decay rate, the number of states capable of reaching the target decreases approximately exponentially. 
When $\gamma_0=0.05$, only a very limited number of states can still reach the target, and as $\gamma_0$ increases further, eventually no state can reach the target. \begin{figure}[tp] \centering \includegraphics[width=8cm]{S_gamma.pdf} \caption{The number of states capable of reaching the target $\Theta=\pi$ as a function of the decay rate $\gamma_0$ for the energy structures $\{1.0,2.1, 4.5, 8.3, 11.0\}$ ($H_1$, red circles) and $\{1.0, 2\sqrt{7}, 6\sqrt{2}, 6\sqrt{3}, 6\sqrt{5}\}$ ($H_2$, blue squares). $\bar{n}=1$ in the plot.} \label{fig:S_gamma} \end{figure} With the knowledge of $\mathcal{S}$, we can further study the Bhatia-Davis formula. In general, the Bhatia-Davis formula is not a valid lower bound for the evolution time to reach an arbitrary target $\Theta$ due to Theorem~\ref{theorem:upperbound}. For Hamiltonians for which the equality in Theorem~\ref{theorem:upperbound} cannot be attained, $\tau_{\mathrm{BD}}$ is always larger than $\tau$. In this case, $\tau_{\mathrm{BD}}$ fails to be a valid lower bound for the states that attain the OQSL, and is hence not a lower bound in general. However, for the Hamiltonians and states for which the equality can hold, $\tau_{\mathrm{BD}}$ might still be a valid lower bound. One useful scenario is demonstrated as follows. \begin{theorem} \label{theorem:tauBD_pi} For a finite-level Hamiltonian whose energies are symmetric about $\langle H\rangle$, the Bhatia-Davis formula is a valid lower bound for the evolution time to reach the target $\Theta=\pi$, and for the states in $\mathcal{S}$, it reduces to the OQSL. \end{theorem} The proof is given in Appendix~\ref{sec:apx_multilevel}. For such a symmetric energy structure, $\mathcal{S}$ must be larger than $\mathcal{S}_0$ since $E_{N-1-k}-E_{N-1-j}=E_j-E_k$ for any $k<j\leq\left\lfloor\frac{N-1}{2}\right\rfloor$ with $\lfloor\cdot\rfloor$ the floor function. 
In this case, apart from the states in Eq.~(\ref{eq:state_S0}), the following states \begin{equation*} \left(\begin{array}{ccccccc} \frac{1}{N} & 0 & \cdots & \cdots & \cdots & \cdots & 0\\ 0 & \frac{1}{N} & \vdots & \vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \ddots & \rho_{kj} & \vdots & \vdots & \vdots\\ \vdots & \vdots & \rho^*_{kj} & \ddots & \rho_{N-1-j,N-1-k} & \vdots & \vdots\\ \vdots & \vdots & \vdots & \rho^*_{N-1-j,N-1-k} & \ddots & \vdots & \vdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & \cdots & \cdots & \cdots & \cdots & \cdots & \frac{1}{N} \end{array}\right) \end{equation*} are also always capable of reaching the target. A typical form of this symmetric structure is the equally spaced structure, i.e., $E_{k+1}-E_k$ is a constant for any legitimate $k$. Hence, one can immediately obtain the following corollary. \begin{corollary} \label{corollary:equal_spaced} For an equally spaced finite-level Hamiltonian, the Bhatia-Davis formula is a valid lower bound for the evolution time to reach the target $\Theta=\pi$, and for the states in $\mathcal{S}$, it reduces to the OQSL. \end{corollary} \begin{figure}[tp] \centering \includegraphics[width=8.5cm]{tauBD_test.pdf} \caption{The difference between the evolution time to reach the target ($t$) and the Bhatia-Davis formula ($\tau_{\mathrm{BD}}$) for $2\times 10^{5}$ pairs of randomly generated five-level Hamiltonians and random states in $\mathcal{S}_0$. } \label{fig:tauBD_test} \end{figure} Although the Bhatia-Davis formula is not a valid lower bound in general, the extent of its validity remains unexplored. To this end, we numerically test whether $\tau_{\mathrm{BD}}$ is a valid lower bound for randomly generated five-level Hamiltonians and random initial states with the target $\Theta=\pi$. 
Since most randomly generated Hamiltonians satisfy the condition (\ref{eq:Ed_cond}) in Corollary~\ref{corollary:S0_1}, most random initial states cannot fulfill the target except for those in $\mathcal{S}_0$. This means $\tau_{\mathrm{BD}}$ is trivially a lower bound for such states, as the true evolution time to reach the target is infinite. Hence, we only pick the random states in $\mathcal{S}_0$ for the test. $2\times 10^{5}$ random pairs of Hamiltonians and initial states in $\mathcal{S}_0$ are generated and tested, as shown in Fig.~\ref{fig:tauBD_test}. It can be seen that, even restricted to the set $\mathcal{S}_0$, the difference between the evolution time $t$ (to reach the target) and $\tau_{\mathrm{BD}}$ is positive for most states (around $90\%$), and therefore $\tau_{\mathrm{BD}}$ is a valid lower bound for these states. \section{Few-level systems} \subsection{Two-level systems} \label{sec:twolevel} The two-level system is the best-studied model in the field of quantum speed limits, and the only system in which the well-known tools like the Mandelstam-Tamm bound~\cite{Mandelstam1945} and the Margolus-Levitin bound~\cite{Margolus1998} can be attained~\cite{Deffner2017}. In this system, denoting the ground and excited energies by $E_0$ and $E_1$ with the corresponding eigenstates $|E_0\rangle$ and $|E_1\rangle$, we have the following theorem. \begin{theorem} \label{theorem:tauBD_2level} For a two-level system under unitary evolution, the Bhatia-Davis formula is a valid lower bound for the evolution time to reach any target angle $\Theta$, and it can be attained by the states $(|E_0\rangle+e^{i\phi}|E_1\rangle)/\sqrt{2}$ with $\phi\in [0,2\pi)$. \end{theorem} The proof of this theorem is given in Appendix~\ref{sec:apx_twolevel}. The attainability condition above comes from the requirement $\langle H\rangle =(E_{0}+E_{1})/2$. A corollary with respect to the OQSL can be immediately obtained from this attainability condition. 
\begin{corollary} For a two-level system under unitary evolution, the Bhatia-Davis formula reduces to the OQSL when it is attainable. \end{corollary} In this case, a widely used tool for the quantum speed limit is~\cite{Giovannetti2003,Giovannetti2004} \begin{equation} \tau_{\mathrm{C}}=\max\left\{\frac{\mathcal{A}}{\Delta H}, \frac{2\mathcal{A}^2} {\pi\langle H\rangle}\right\}, \end{equation} in which the target angle is defined via the Bures angle $\mathcal{A}=\arccos f$ with $f=\mathrm{Tr}\sqrt{\sqrt{\rho_0}\rho_1\sqrt{\rho_0}}$ the fidelity between two quantum states $\rho_0$ and $\rho_1$. In the meantime, another tool based on the Bures angle is~\cite{Taddei2013} \begin{equation} \tau_{\mathrm{F}}=\frac{2\mathcal{A}}{\sqrt{F}}, \end{equation} with $F$ the quantum Fisher information with respect to the evolution time $t$. It is defined by $F=\mathrm{Tr}(\rho L^2)$ with $L$ the symmetric logarithmic derivative. Here $L$ is determined by the equation $\partial_t \rho=(\rho L+L\rho)/2$. In the Bloch sphere ($|E_1\rangle$ as the north pole), the density matrix is $\rho=(\openone+\vec{r}\cdot\vec{\sigma})/2$ with $\vec{\sigma}=(\sigma_x, \sigma_y, \sigma_z)$ the vector of Pauli matrices. Then $\tau_{\mathrm{F}}$ can be expressed by \begin{equation} \tau_{\mathrm{F}}=\frac{2\mathcal{A}}{(E_1-E_0)\sqrt{|\vec{r}|^2-r^2_z}}, \end{equation} which is larger than $\mathcal{A}/\Delta H$ since here $\Delta H=(E_1-E_0) \sqrt{1-r^2_z}/2$ and $|\vec{r}|\leq 1$. They are equivalent when the initial state is pure. Combining several tools to construct a tighter bound for the quantum speed limit is a common strategy in previous studies in this field. Using this strategy, the following quantity \begin{equation} \tau_{\mathrm{m}}=\max\left\{\tau_{\mathrm{BD}},\tau_{\mathrm{F}}, \frac{2\mathcal{A}^2(\Theta)}{\pi\langle H\rangle}\right\} \end{equation} is a valid lower bound for the evolution time to reach the target in the case of two-level systems. 
Here $\mathcal{A}(\Theta)$ means the target is still defined via Eq.~(\ref{eq:Bloch_angle}) and the value of $\mathcal{A}$ is calculated via $\Theta$ and the initial state. As a matter of fact, the fidelity between two qubits can be expressed by $f^2=\mathrm{Tr}(\rho_0\rho_1)+2\sqrt{\det(\rho_0) \det(\rho_1)}$ with $\det(\cdot)$ the determinant~\cite{Hubner1992,Hubner1993}. For unitary evolutions, it can be rewritten with the Bloch vectors as $f^2=1-\frac{1}{2}|\vec{r}|^2(1-\cos\theta)$, where $|\vec{r}|$ is the norm of the Bloch vector of the initial state. Hence, $\mathcal{A}(\Theta)=\arccos\sqrt{1-\frac{1}{2} |\vec{r}|^2(1-\cos\Theta)}$, which indicates $\mathcal{A}\leq \Theta/2$, with equality for pure states. Because of this property, the following corollary holds. \begin{figure}[tp] \includegraphics[width=8cm]{qubit_comparison.pdf} \caption{The behaviors of different tools as a function of $\alpha$. The dashed black, solid blue, dotted green, and dash-dotted red lines represent the OQSL $\tau$, the bound based on the quantum Fisher information $\tau_{\mathrm{F}}$, the Bhatia-Davis formula $\tau_{\mathrm{BD}}$, and the combined formula $\tau_{\mathrm{m}}$, respectively. The parameters are set as $E_1=2.0$, $E_0=1.0$, $|\vec{r}|=0.8$ and the target angle $\Theta=\pi/2$. \label{fig:qubit}} \end{figure} \begin{corollary} The Bhatia-Davis formula is equivalent to $\tau_{\mathrm{F}}$ for two-level pure states. \end{corollary} With this corollary, and noticing that $\tau_{\mathrm{F}}=\mathcal{A}/\Delta H$ for two-level pure states, it is easy to see that $\tau_\mathrm{m}$ reduces to $\tau_{\mathrm{C}}$ for two-level pure states. For mixed states, the relation between $\tau_{\mathrm{BD}}$, $\tau_{\mathrm{F}}$ and $2\mathcal{A}^2/(\pi\langle H\rangle)$ is undetermined. 
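The interplay between these quantities can also be checked numerically. The explicit definition of $\tau_{\mathrm{BD}}$ is given earlier in the paper; in the sketch below (ours) we assume the closed form $\tau_{\mathrm{BD}}=\Theta/\big(2\sqrt{(E_{\max}-\langle H\rangle)(\langle H\rangle-E_{\min})}\big)$, which is consistent with the relations quoted in this section (attainability at $\langle H\rangle=(E_0+E_1)/2$ and equivalence with $\tau_{\mathrm{F}}$ for pure qubits), and use the parameters of Fig.~\ref{fig:qubit}.

```python
import numpy as np

E0, E1, r, Theta = 1.0, 2.0, 0.8, np.pi / 2    # parameters of the figure
omega = E1 - E0
alpha = np.linspace(Theta / 2, np.pi - Theta / 2, 201)  # states that can reach Theta
rz = r * np.cos(alpha)                         # z component of the Bloch vector

Havg = (E0 + E1) / 2 + rz * omega / 2          # <H> for rho = (1 + r.sigma)/2
tau_BD = Theta / (2 * np.sqrt((E1 - Havg) * (Havg - E0)))   # assumed closed form
A = np.arccos(np.sqrt(1 - 0.5 * r**2 * (1 - np.cos(Theta))))  # Bures angle A(Theta)
tau_F = 2 * A / (omega * np.sqrt(r**2 - rz**2))
tau = Theta / omega                            # OQSL, attained in the xy plane

mid = len(alpha) // 2                          # index of alpha = pi/2
print(np.all(tau_BD >= tau))                   # tau_BD upper-bounds the OQSL
print(tau_BD[mid] - tau)                       # equality in the xy plane
print(tau_F[mid] < tau)                        # tau_F dips below tau there
```

This reproduces the qualitative features of Fig.~\ref{fig:qubit}: $\tau_{\mathrm{BD}}$ never falls below $\tau$ and meets it at $\alpha=\pi/2$, whereas $\tau_{\mathrm{F}}$ drops below $\tau$ for mixed states near the $xy$ plane.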
However, in many cases, for instance when $(E_1+E_0)/(E_1-E_0) >\sqrt{2}|\vec{r}|$ is satisfied, $\tau_{\mathrm{F}}$ is always larger than $2\mathcal{A}^2 /(\pi\langle H\rangle)$, and $\tau_{\mathrm{m}}$ is then the larger of $\tau_{\mathrm{F}}$ and $\tau_{\mathrm{BD}}$. More calculation details can be found in Appendix~\ref{sec:apx_twolevel}. An example is shown in Fig.~\ref{fig:qubit} for a more intuitive understanding of $\tau_{\mathrm{m}}$. Here $\alpha$ is the angle between the Bloch vector $\vec{r}$ and the $z$ axis. The states in the regime $\alpha\in[\Theta/2,\pi-\Theta/2]$ can fulfill the target~\cite{Shao2020}. This plot first confirms that $\tau_{\mathrm{BD}}$ (dotted green line) is an upper bound for the OQSL $\tau$ (dashed black line). However, $\tau_{\mathrm{F}}$ (solid blue line) and $\tau_{\mathrm{BD}}$ do not have a fixed relation: $\tau_{\mathrm{F}}$ is less than $\tau_{\mathrm{BD}}$ for the states close to the $xy$ plane. Hence, $\tau_{\mathrm{m}}$ (dash-dotted red line) equals $\tau_{\mathrm{BD}}$ in this regime, and it is indeed the tightest bound for the evolution time. $2\mathcal{A}^2/(\pi\langle H\rangle)$ is not plotted due to the fact that it is much smaller than the others in this case. Another advantage of using $\tau_{\mathrm{BD}}$ to construct the bound is that $\tau_{\mathrm{m}}$ is never smaller than $\tau$ due to Theorem~\ref{theorem:upperbound}, and it reduces to $\tau$ in the $xy$ plane because of the attainability of $\tau_{\mathrm{BD}}$; this means $\tau_{\mathrm{m}}$ can reflect the fact that $\tau$ is the systematic minimum time to reach the target even when $\tau$ itself is not attainable. In the meantime, $\tau_{\mathrm{F}}$ only shows this capability for some states (light gray area), and it fails to do so for the states close to the $xy$ plane (dark gray area), as it is smaller than $\tau$ in this regime. 
Therefore, in the case where $\tau$ is not known, $\tau_{\mathrm{F}}$ cannot be used to estimate the true minimum time scale. \subsection{Three-level systems} \begin{figure}[tp] \includegraphics[width=8.5cm]{S_3level.pdf} \caption{Regimes of $x$, $y$ (gray areas) in the reachable state set $\mathcal{S}$ for equally spaced three-level Hamiltonians with different values of $|\vec{r}|^2$ and $\Theta$. For the states in the red areas, $\tau_{\mathrm{BD}}$ is always a valid lower bound for any target $\Theta$. $x=r^2_3+r^2_4$ and $y=r^2_0+r^2_1+r^2_5+r^2_6$ in the plot.} \label{fig:S_3level} \end{figure} Three-level systems are very common in the study of quantum optics and quantum information. As in the general case, the Bhatia-Davis formula is not a valid lower bound in a general three-level system. However, Corollary~\ref{corollary:equal_spaced} shows that $\tau_{\mathrm{BD}}$ is indeed a valid lower bound in equally spaced three-level systems for $\Theta=\pi$. To find out whether $\tau_{\mathrm{BD}}$ remains a valid bound for a general target in this case, we need to study the set $\mathcal{S}$ first. Define $x=r^2_3+r^2_4$ and $y=r^2_0+r^2_1+r^2_5+r^2_6$ with $r_i$ ($i=0,\dots,6$) an entry of the Bloch vector; then for the states in $\mathcal{S}$, $x$ and $y$ must lie in one of the following two regimes \begin{equation} \begin{cases} y \geq 4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x, \\ y\leq 4x, \\ y\leq |\vec{r}|^2-x, \end{cases} \label{eq:regime1} \end{equation} and \begin{equation} \begin{cases} y \geq 4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x, \\ y\geq 4x, \\ y\geq |\vec{r}|^2\sin^2\left(\frac{\Theta}{2}\right), \\ y\leq |\vec{r}|^2-x. \end{cases} \label{eq:regime2} \end{equation} The thorough derivation is given in Appendix~\ref{sec:apx_3level}. The full regime is illustrated in Fig.~\ref{fig:S_3level} (gray areas) for different values of $|\vec{r}|$ and $\Theta$. 
For the same $\Theta$ (columns in the plot), the shapes of the regime basically coincide for different values of $|\vec{r}|$, and its area shrinks with the reduction of $|\vec{r}|$. On the other hand, for the same $|\vec{r}|$ (rows in the plot), the regime becomes narrower as $\Theta$ increases. These behaviors are also reflected in the range of $x$ ($y$) along the $x$ ($y$) axis, which is $[|\vec{r}|^2\sin^2(\frac{\Theta}{2}),|\vec{r}|^2]$. The target $\Theta$ only affects the lower bound of this range, which increases with $\Theta$. Hence, the area of the full regime becomes smaller when $\Theta$ gets larger. In the meantime, both bounds of this range are affected by $|\vec{r}|^2$, and the largest range is attained at $|\vec{r}|^2=1$, indicating that there exist more choices of $x$, $y$ for pure states to reach the target. Another interesting fact is that the full regime is connected for targets smaller than $2\pi/3$, and it splits into two areas for targets larger than $2\pi/3$. This phenomenon is due to the fact that the minimum difference between $|\vec{r}|^2-x$ and $4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x$ with respect to $x$ is $|\vec{r}|^2\left[1-\frac{4}{3}\sin^2\left(\frac{\Theta}{2}\right)\right]$. When $\Theta\leq\frac{2}{3}\pi$, this minimum value is non-negative, indicating that all points on the line $y=4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x$ are feasible points in $\mathcal{S}$. By contrast, when $\Theta>\frac{2}{3}\pi$, only part of the points on this line are feasible, and the full regime then splits into two areas. With the information of $\mathcal{S}$, we then provide the following theorem on the Bhatia-Davis formula. 
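Before stating it, the splitting criterion just derived can be double-checked symbolically (our addition, with $s=\sin(\Theta/2)$ and $r=|\vec{r}|$): the gap between the upper border $y=|\vec{r}|^2-x$ and the lower border $y=4|\vec{r}|s\sqrt{x}-4x$ is $g(x)=|\vec{r}|^2+3x-4|\vec{r}|s\sqrt{x}$.

```python
import sympy as sp

x, r, s = sp.symbols('x r s', positive=True)   # s = sin(Theta/2), r = |r|
g = r**2 + 3*x - 4*r*s*sp.sqrt(x)              # gap between the two borders

xstar = sp.solve(sp.diff(g, x), x)[0]          # stationary point (a minimum,
gmin = sp.simplify(g.subs(x, xstar))           #  since d2g/dx2 = r*s*x**(-3/2) > 0)
print(xstar)                                   # 4*r**2*s**2/9
print(gmin)                                    # equals r**2*(1 - 4*s**2/3)
```

The minimum $|\vec{r}|^2(1-\frac{4}{3}s^2)$ vanishes exactly at $s^2=3/4$, i.e., $\Theta=2\pi/3$, reproducing the threshold at which the regime splits.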
\begin{theorem} \label{lamma_threelevel} For an equally spaced three-level system with a gap $\Delta$, the Bhatia-Davis formula is bounded above by $\pi/(2\Delta)$, i.e., \begin{equation} \tau_{\mathrm{BD}}\leq\frac{\pi}{2\Delta}, \end{equation} for any $|\vec{r}|\in(0,1]$ and $\Theta\in(0,\pi]$. \end{theorem} The derivation is given in Appendix~\ref{sec:apx_3level}. Since $\tau$ is bounded above by $\tau_{\mathrm{BD}}$ according to Theorem~\ref{theorem:upperbound}, this result directly implies $\tau\leq\pi/(2\Delta)$, which is consistent with the fact that $\tau=\Theta/(2\Delta)$ in this case~\cite{Shao2020}. Next we provide a theorem showing when $\tau_{\mathrm{BD}}$ is a valid lower bound for a general target. \begin{figure}[tp] \includegraphics[width=8.5cm]{borderline.pdf} \caption{(a) Regimes of $x$, $y$ (red areas) in which $\tau_{\mathrm{BD}}$ is always a valid lower bound in the case $|\vec{r}|^2=1$ and $\Theta=\pi/3$. $x=r^2_3+r^2_4$ and $y=r^2_0+r^2_1+r^2_5+r^2_6$ in the plot. (b) The borderline given in Eq.~(\ref{eq:borderline}) for different values of $|\vec{r}|^2$ and $\Theta$. } \label{fig:3level_border} \end{figure} \begin{theorem} \label{theorem:3level} For an equally spaced three-level system with a gap $\Delta$, the Bhatia-Davis formula is a valid lower bound for the evolution time to reach any target $\Theta\in(0,\pi]$ at least for the states in the regimes \begin{equation} \begin{cases} y \geq 4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x, \\ y\geq 4x, \\ y\geq |\vec{r}|^2\sin^2\left(\frac{\Theta}{2}\right), \\ y\leq |\vec{r}|^2-x, \end{cases} \end{equation} and \begin{equation} \begin{cases} y \geq 4|\vec{r}|\sin\left(\frac{\Theta}{2}\right)\sqrt{x}-4x, \\ y\leq 4x, \\ y\leq |\vec{r}|^2-x, \\ y\sin^2(\frac{\Delta\tau_{\mathrm{m}}}{2})\leq |\vec{r}|^2\sin^2\!(\frac{\Theta}{2}) -x\sin^2(\Delta \tau_{\mathrm{m}}). 
\end{cases} \end{equation} \end{theorem} In this theorem, $\tau_{\mathrm{m}}$ is defined by \begin{equation} \tau_{\mathrm{m}}:=\frac{\Theta}{2\Delta\sqrt{1-\frac{4}{3}(|\vec{r}|^2-x-y)}} \label{eq:max_1} \end{equation} for $x+y\geq |\vec{r}|^2-1/3$ and \begin{equation} \tau_{\mathrm{m}}:=\frac{\Theta}{2\Delta\sqrt{1\!-\!\left[\frac{1}{2} +\sqrt{\frac{1}{3}(|\vec{r}|^2\!-\!x\!-\!y\!-\!\frac{1}{4})}\right]^{2}}} \label{eq:max_2} \end{equation} for $x+y\leq |\vec{r}|^2-1/3$. The regime of states given in the above theorem is illustrated for the case $|\vec{r}|^2=1$ and $\Theta=\pi/3$ (red area in Fig.~\ref{fig:3level_border}(a)). Outside this regime (gray area), there exist states for which the Bhatia-Davis formula fails to be a valid lower bound. An interesting fact is that $\tau_{\mathrm{BD}}$ is a valid lower bound on most edges, for example, $x=0$ and $y\neq 0$, i.e., non-diagonal states with $\rho_{12}=0$, and $x+y=|\vec{r}|^2$, which means $r_2=r_7=0$, i.e., states with all diagonal entries equal to $1/3$. The borderline between the red and gray regimes reads \begin{equation} x\sin^2(\Delta \tau_{\mathrm{m}})+y\sin^2\left(\frac{\Delta\tau_{\mathrm{m}}}{2}\right) =|\vec{r}|^2\sin^2\left(\frac{\Theta}{2}\right). \label{eq:borderline} \end{equation} As shown in Fig.~\ref{fig:3level_border}(b), the area of the violation regime (inside the line) grows with the increase of $|\vec{r}|^2$ or the decrease of $\Theta$, indicating that $\tau_{\mathrm{BD}}$ is a valid lower bound for most mixed states, especially when $\Theta$ is large. As a matter of fact, apart from the case $|\vec{r}|^2=1, \Theta=\pi/3$, this regime is negligible for the other examples given in Fig.~\ref{fig:S_3level}. Hence, in equally spaced three-level systems, $\tau_{\mathrm{BD}}$ is indeed a valid lower bound for most states, especially mixed states with large target angles. 
\section{Conclusion} Inspired by the Bhatia-Davis theorem in mathematics and statistics, in this paper we construct a formula, referred to as the Bhatia-Davis formula, for the characterization of the quantum speed limit in the Bloch representation. In a general multilevel system, we first prove that the Bhatia-Davis formula is an upper bound for the operational definition of the quantum speed limit, and that it reduces to the operational definition when the average energy is half of the sum of the highest and lowest energies. The behaviors of both the operational definition and the Bhatia-Davis formula are discussed in the generalized one-axis twisting model as an example. In the case of the largest target angle, the reachable state set is first studied, and the Bhatia-Davis formula is then proved to be a valid lower bound for the evolution time to reach the target angle in systems with symmetric energy structures. With respect to few-level scenarios, two-level systems are studied first: there the Bhatia-Davis formula is proved to be a valid lower bound, which reduces to the operational definition when attainable. An alternative state-dependent bound is also constructed using the Bhatia-Davis formula, which is tighter than the bound given by the quantum Fisher information. In the case of equally spaced three-level systems, the regime in which the Bhatia-Davis formula remains a valid lower bound is given. Even though it is not valid in general, the violation becomes very insignificant for mixed states, especially when the target angle is large. Therefore, it can approximately be treated as a valid lower bound for most mixed states with large target angles in this type of system. \begin{acknowledgments} The authors would like to thank M. Zhang and J. Qin for helpful discussions. 
This work was supported by the National Natural Science Foundation of China (Grants No.\,11805073, No.\,12088101, No.\,11935012, No.\,11875231 and No.\,62003113), the NSAF (Grant No.\,U1930403), and the National Key Research and Development Program of China (Grants No.\,2017YFA0304202 and No.\,2017YFA0205700). J.L. and Z.M. contributed equally to this work. \end{acknowledgments}
Oswald Külpe [ˌˀɔsv̥alt ˈkʰʏlpʿə] (Kandava, 1862 - Munich, 1915) was a Baltic German philosopher and psychologist, noted for his work in experimental psychology. In philosophy, he regarded knowledge as the conjunction of thought and experience.
\section{Introduction} Axions are probably the best solution to the strong CP problem of the Standard Model (SM). This problem originates from the observation that the electroweak and the QCD sectors, secluded from each other by construction, must conspire to cancel each other's contribution to the electric dipole moment of the neutron to an impressive precision of about one part in $10^{10}$~\cite{Abel:2020gbr}. The fundamental building blocks of all axion models are, first, a spontaneously broken $U(1)_{PQ}$ symmetry, the Peccei-Quinn (PQ) symmetry, and second, some colored chiral fermions charged under that symmetry~\cite{PQ}. This makes the symmetry anomalous, and ensures that the Goldstone boson~\cite{Weinberg:1977ma,Wilczek:1977pj} arising from the $U(1)_{PQ}$ breaking, the axion, is anomalously coupled to gluons. Subsequently, out of this gluonic coupling, non-perturbative QCD effects create an effective potential for the axion field, such that the strong CP puzzle disappears precisely when the axion settles at the minimum. In the process, the axion acquires a small mass, typically well below the eV scale~\cite{Bardeen:1977bd,Kim:1986ax}. Both its mass and its couplings are thus controlled by the single scale $f_{a}$ of the spontaneous symmetry breaking of $U(1)_{PQ}$. Constraints from astrophysics and particle physics call for this scale to be much larger than the electroweak scale, $f_a > 10^{9}$~GeV~\cite{Dicus:1979ch}. Naturally, it is tempting to identify the PQ breaking scale with the scale of grand unification, using the same field to break both symmetries, either in non-supersymmetric or supersymmetric contexts. The first attempt at embedding the axion in a GUT context was proposed by Dine, Fischler, and Srednicki in Ref.~\cite{axionDFS} and by Wise, Georgi and Glashow in Ref.~\cite{Wise:1981ry}, both in the context of the minimal non-supersymmetric $SU(5)$ model of Ref.~\cite{Georgi:1974sy}. 
Since then, additional studies of the embedding of the axion in non-supersymmetric grand unified theories have been proposed, see e.g. Refs.~\cite{FileviezPerez:2019fku,FileviezPerez:2019ssf} for $SU(5)$, or Refs.~\cite{Reiss:1981nd,Mohapatra:1982tc,Holman:1982tb,Bajc:2005zf,Bertolini:2012im,Altarelli:2013aqa,Babu:2015bna,Ernst:2018bib,Ernst:2018rod,DiLuzio:2020xgc} for $SO(10)$. The PQ scale also looks suspiciously close to the seesaw scale, which is relevant for explaining the small neutrino masses. It therefore appears desirable to describe the three mechanisms (gauge coupling unification, small neutrino masses, and the axion solution to the strong CP puzzle) in a unified setting. The purpose of the present paper is to study such constructions in the non-supersymmetric context. There have been a few attempts along this line, most notably Ref.~\cite{Bajc:2006ia}, where a Majorana fermion in the adjoint representation of $SU(5)$ is used (see also Refs.~\cite{Bajc:2007zf,DiLuzio:2013dda,DiLuzio:2018gqe,Dorsner:2005fq}). Our purpose here is to study as systematically as possible the models based on $SU(5)$ that include both an axion and a seesaw mechanism. In the present analysis, we will rely extensively on the results of our recent studies in Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx}. Indeed, we have shown there, in a non-GUT context, that whenever the SM fermions carry non-trivial PQ charges, the PQ symmetry becomes entangled with the accidental $U(1)$ symmetries of the SM, corresponding to the conserved baryon ($\mathcal{B}$) and lepton ($\mathcal{L}$) numbers. This leads to ambiguities in the PQ charges of the fermions. Those have no phenomenological consequences, but become crucial to accommodate additional $\Delta\mathcal{B}$ and/or $\Delta\mathcal{L}$ effects, and in particular, to allow for a seesaw mechanism. 
Further, since accounting for these effects simply fixes some ambiguous, thus unphysical, parameters, it is immediately clear that the axion phenomenology is totally unaffected. In this way, seemingly different axion models can be shown to be impossible to distinguish phenomenologically. In an $SU(5)$ context, the entanglement of $U(1)_{PQ}$ with $U(1)_{\mathcal{B},\mathcal{L}}$ must take a particular form since $\mathcal{B}+\mathcal{L}$ does not survive as a global symmetry. So, our first goal will be to precisely identify the PQ charge ambiguity present in models where only $\mathcal{B}-\mathcal{L}$ is active. Then, we will check whether this ambiguity permits accommodating $\mathcal{L}$-breaking seesaw mechanisms of various types. Further, we will follow this strategy both for the minimal $SU(5)$ and for the flipped $SU(5)$ GUT model~\cite{Barr:1981qv,Derendinger:1983aj,Antoniadis:1987dx, Ellis:1988tx}, and consider various alternative embeddings of the axion in $SU(5)$ representations. All these seemingly very different models, in which the fermions do have very different PQ charges, will be shown to be mere particular cases of the $\mathcal{B}-\mathcal{L}$ preserving models, with the ambiguity fixed to some particular value. As a result, and despite their very different appearance in terms of effective interactions, those models cannot be distinguished at low energy. The paper is organized as follows. To set the stage, we start in the next section by briefly summarizing a few relevant features of the PQ axion model and the DFSZ axion model (more details can be found in Refs.~\cite{Quevillon:2019zrd, Quevillon:2020hmx}). Then in Section~\ref{sec3}, we study the minimal $SU(5)$ axion models compatible with lepton number violation, by introducing various mechanisms that generate neutrino masses but also baryon number violation. 
In Section~\ref{sec4} we investigate the flipped $SU(5)$ axion models, characterize the particular way in which $\mathcal{B}-\mathcal{L}$ is broken (or not), and study two different DFSZ implementations of the axion. Finally, our results are summarized in Section~\ref{Ccl}. A few remarks about how these models necessarily have to be modified to reproduce realistic fermion masses are collected in the Appendix. \section{Brief overview of the PQ and DFSZ axions} In the presence of two Higgs doublets $\Phi_{1,2}$, the whole Lagrangian can be required to be invariant under a global $U(1)_{1}\otimes U(1)_{2}$ symmetry, corresponding to the independent rephasing of each doublet, $\Phi_{k}\rightarrow \exp (i \alpha_k)\Phi_{k}$. This imposes some restrictions on the scalar potential and on the Yukawa couplings, which we take to be of Type II,% \begin{equation} \mathcal{L}_{\text{Yukawa}}=-\bar{u}_{R}\mathbf{Y}_{u}q_{L}\Phi_{1}-\bar {d}_{R}\mathbf{Y}_{d}q_{L}\Phi_{2}^{\dagger}-\bar{e}_{R}\mathbf{Y}_{e}\ell _{L}\Phi_{2}^{\dagger}+h.c.\;.\label{YukQuark}% \end{equation} Because these couplings are also invariant under the global baryon and lepton number symmetries, $U(1)_{\mathcal{B}}$ and $U(1)_{\mathcal{L}}$, the pattern of symmetry breaking is% \begin{align} G_{THDM} & =U(1)_{\mathcal{B}}\otimes U(1)_{\mathcal{L}}\otimes U(1)_{1}\otimes U(1)_{2}\otimes SU(2)_{L}\otimes SU(3)_{C}\nonumber\\ & \rightarrow U(1)_{\mathcal{B}}\otimes U(1)_{\mathcal{L}}\otimes U(1)_{em}\otimes SU(3)_{C} \ , \label{PatternTHDM}% \end{align} where $U(1)_Y\subset U(1)_{1}\otimes U(1)_{2}$ is gauged. When the doublets acquire their vacuum expectation values, $\langle 0|\operatorname{Re}\Phi_{i}|0\rangle=v_{i}$ with $v_{1}^{2}+v_{2}^{2}\equiv v^{2}\approx\left( 246\,\text{GeV}\right) ^{2}$ and $v_{2}/v_{1}\equiv x\equiv1/\tan\beta$, the symmetry $U(1)_{1}\otimes U(1)_{2}\otimes SU(2)_{L}$ is broken down to $U(1)_{em}$. 
There are two electrically-neutral Goldstone bosons~\cite{Weinberg:1977ma,Wilczek:1977pj}: the would-be Goldstone boson eaten by the $Z^{0}$ and the massless axion. Importantly, these states are only defined once $U(1)_{Y}\otimes SU(2)_{L}$ is broken, and are $v_{i}$-dependent linear combinations of $\operatorname{Im}\Phi_{1}$ and $\operatorname{Im}\Phi_{2}$. The PQ charges of the Higgs doublets are also functions of the VEVs, and the orthogonality of the Goldstone bosons imposes% \begin{equation} PQ(\Phi_{1},\Phi_{2})=\left( x\ ,\ -\frac{1}{x}\ \right) \ . \end{equation} These charges fix those of the fermions, up to a two-parameter ambiguity originating in the $U(1)_{\mathcal{B}}\otimes U(1)_{\mathcal{L}}$ invariance, which we denote $\alpha$ and $\beta$:% \begin{equation} PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})=(\alpha,\alpha+x,\alpha+\frac{1}{x}% ,\beta,\beta+\frac{1}{x})\ \ .\label{PQferm}% \end{equation} As detailed in Refs.~\cite{Quevillon:2019zrd,Quevillon:2020hmx}, the freedom in the PQ charges of the fermions has no observable consequence. Yet, some theoretical quantities depend on $\alpha$ and $\beta$. 
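As a quick consistency check, these charges leave each Yukawa coupling of Eq.~(\ref{YukQuark}) invariant for any $\alpha$ and $\beta$, since every term carries a vanishing total PQ charge:
\begin{equation*}
\bar{u}_{R}q_{L}\Phi_{1}:\ -(\alpha+x)+\alpha+x=0\ ,\qquad
\bar{d}_{R}q_{L}\Phi_{2}^{\dagger}:\ -\left(\alpha+\frac{1}{x}\right)+\alpha+\frac{1}{x}=0\ ,\qquad
\bar{e}_{R}\ell_{L}\Phi_{2}^{\dagger}:\ -\left(\beta+\frac{1}{x}\right)+\beta+\frac{1}{x}=0\ ,
\end{equation*}
where each barred right-handed field contributes minus its PQ charge, and $\Phi_{2}^{\dagger}$ contributes $+1/x$.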
In particular, the divergence of the PQ current takes the form \begin{equation} \partial_{\mu}J_{PQ}^{\mu}=\frac{N_{f}}{16\pi^{2}}\left\{ \mathcal{N}% _{C}g_{s}^{2}G_{\mu\nu}^{a}\tilde{G}^{a,\mu\nu}+\mathcal{N}_{L}g^{2}W_{\mu\nu }^{i}\tilde{W}^{i,\mu\nu}+\mathcal{N}_{Y}g^{\prime2}B_{\mu\nu}\tilde{B}% ^{\mu\nu}\right\} \;,\label{dJPQTHDM}% \end{equation} with \begin{subequations} \label{JPQTHDM}% \begin{align} \mathcal{N}_{C} & =\sum_{\ \ \ \psi=q_{L}^{\dagger},u_{R},d_{R}\ \ \ }% d_{L}(\psi)C_{C}(\psi)PQ(\psi)=\frac{1}{2}\left( x+\frac{1}{x}\right) \ ,\\ \mathcal{N}_{L} & =\sum_{\ \ \ \ \ \psi=q_{L}^{\dagger},\ell_{L}^{\dagger }\ \ \ \ \ }d_{C}(\psi)C_{L}(\psi)PQ(\psi)=-\frac{1}{2}(3\alpha+\beta)\ ,\\ \mathcal{N}_{Y} & =\sum_{\psi=q_{L}^{\dagger},u_{R},d_{R},\ell_{L}^{\dagger },e_{R}}d_{L}(\psi)d_{C}(\psi)C_{Y}(\psi)PQ(\psi)=\frac{1}{2}\left( 3\alpha+\beta\right) +\frac{4}{3}\left( x+\frac{1}{x}\right) \ , \end{align} where $d_{L(C)}$, $C_{L(C)}$ are the $SU(2)_{L}$ ($SU(3)_{C}$) dimension and quadratic Casimir invariants, and $C_{Y}=Y^{2}/4$. With $\mathcal{N}_{L}+\mathcal{N}_{Y}=\mathcal{N}_{em}$, the QED and QCD terms~\cite{PQ} in $\partial_{\mu}J_{PQ}^{\mu}$ are physical but the electroweak term is ambiguous due to the presence of the free parameters, $\alpha$ and $\beta$. With the axion emerging from the Higgs doublets, its couplings to SM particles are tuned by the electroweak VEV, and are phenomenologically too large. To circumvent this, the idea of the DFSZ model~\cite{axionDFS,axionZ} is to embed the axion dominantly in a separate complex scalar field $\phi$, whose VEV $v_{s}$ is much larger than the electroweak one. Technically, the introduction of the complex scalar field does not enlarge the $U(1)_{1}\otimes U(1)_{2}$ symmetry thanks to the presence of a coupling $\phi^{2}\Phi_{1}^{\dagger}\Phi_{2}$ entangling the charges of all the scalars. This also prevents $\phi$ from coupling to fermions. 
The axion emerges as essentially $\operatorname{Im}\phi$, with small $\mathcal{O}(v/v_{s})$ components along $\operatorname{Im}\Phi_{1,2}$. Since all the couplings to SM particles stem from these suppressed components, the axion couplings are all rescaled by $v/v_{s}$. The PQ charges of the doublets are not modified by the presence of $\phi$. Since it has no weak hypercharge, it does not enter the WBG of the $Z^{0}$ to which the axion must be orthogonal, and thus: \end{subequations} \begin{equation} PQ(\Phi_{1},\Phi_{2},\phi)=\left( x\ ,\ -\frac{1}{x}\ ,\ \frac{1}{2}\left( x+\frac{1}{x}\right) \right) \ .\label{PQdfsz}% \end{equation} Also, the SM fermion PQ charges remain those of Eq.~(\ref{PQferm}) since the Yukawa couplings are the same. \section{Minimal SU(5) axion models\label{sec3}} Since phenomenological constraints push the invisible axion scale well above the electroweak scale, it could be related to other new physics scales, in particular to the seesaw scale of the neutrino sector or the grand unification scale suggested by the RG evolution of the SM gauge couplings. With these contexts in mind, the goal would actually be to relate these three scales, and this section is devoted to such constructions. At first sight, the PQ and DFSZ axions look incompatible with unification because the PQ charges for the fermions embedded in the same $SU(5)$ representation are different. Yet, as we will explain here, this line of reasoning is flawed, and the very idea of the DFSZ axion can be transposed quite naturally in $SU(5)$. One may also consider a KSVZ-like axion~\cite{KSVZ}, but having to embed the heavy fermion into a complete $SU(5)$ representation requires extending the matter content quite extensively. This will not be explored here. In the next subsection, the minimal $SU(5)$ model is briefly presented, focusing on those elements that will play a role in the following. Then, we extend this model in various ways to include the axion. 
In that description, particular emphasis is laid on global symmetries and the breaking chains. Indeed, in a GUT setting, $\mathcal{B}$ and $\mathcal{L}$ are not exact symmetries at the GUT scale, but only emerge at low scales. As we will see, this means the PQ symmetry should be defined similarly if it is to be compatible with $\mathcal{B}$ and/or $\mathcal{L}$ violating effects, as required for example to allow for a Majorana neutrino mass term. \subsection{Brief overview of the minimal SU(5) model} The simplest GUT model, due to Georgi and Glashow~\cite{Georgi:1974sy}, is based on $SU(5)$. Fermions are embedded in the representations $\psi_{\mathbf{\bar{5}}}\sim\mathbf{\bar{5}}$ and $\chi_{\mathbf{10}}\sim\mathbf{10}$, while gauge bosons are in the adjoint, $A^{\mu}\sim\mathbf{24}$. Two Higgs multiplets are necessary to break $SU(5)$ down to $SU(3)_{C}\otimes U(1)_{em}$: a set of real scalar fields $\mathbf{H}_{\mathbf{24}}\sim\mathbf{24}$ responsible for $SU(5)\rightarrow SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}$ at the GUT scale $v_{24}$, and a complex fiveplet $h_{\mathbf{5}}\sim\mathbf{5}$ for the EW symmetry breaking at the scale $v_{5}\approx 246\,$GeV. 
The most general scalar potential, assuming a $\mathbf{H}_{\mathbf{24}}\rightarrow-\mathbf{H}_{\mathbf{24}}$ symmetry to get rid of cubic interactions, is% \begin{align} V(h_{\mathbf{5}},\mathbf{H}_{\mathbf{24}}) & =-\frac{\mu^{2}}{2}% \langle\mathbf{H}_{\mathbf{24}}^{2}\rangle+\frac{a}{4}\langle\mathbf{H}% _{\mathbf{24}}^{2}\rangle^{2}+\frac{b}{2}\langle\mathbf{H}_{\mathbf{24}}% ^{4}\rangle\nonumber\\ & -\frac{\mu^{\prime2}}{2}h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}}% +\frac{\lambda}{4}(h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}})^{2}+\alpha (h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}})\langle\mathbf{H}_{\mathbf{24}}% ^{2}\rangle+\beta h_{\mathbf{5}}^{\dagger}\mathbf{H}_{\mathbf{24}}% ^{2}h_{\mathbf{5}}\;, \end{align} and it can achieve the desired symmetry breaking chain for some appropriate choices of the parameters. Two features of the minimal model need to be emphasized. First, the fermion masses are not correctly predicted by the minimal model. From the Yukawa couplings% \begin{equation} \mathcal{L}_{\text{Yukawa}}^{\mathbf{5}}=-\frac{1}{4}\varepsilon_{ABCDE}(\bar{\chi }_{\mathbf{10}}^{c})^{AB}\mathbf{Y}_{10}(\chi_{\mathbf{10}}% )^{CD}h_{\mathbf{5}}^{E}+\sqrt{2}(\bar{\psi}_{\mathbf{\bar{5}}}^{c% })_{A}\mathbf{Y}_{5}(\chi_{\mathbf{10}})^{AB}(h_{\mathbf{5}}^{\dagger}% )_{B}+h.c.\;, \end{equation} one derives% \begin{equation} \mathbf{Y}_{u}=\mathbf{Y}_{10}=\mathbf{Y}_{10}^{T}\;,\;\;\mathbf{Y}% _{d}=\mathbf{Y}_{e}^{T}=\mathbf{Y}_{5}\;.\label{GUTminmass}% \end{equation} To cure this, either effective non-renormalizable operators are added, or a non-minimal Higgs multiplet is included. These constructions are briefly described in the Appendix~\ref{AppMass}. A second important feature of the minimal $SU(5)$ model is the existence of a global $U(1)_{X}$ symmetry, with charges \begin{equation} X(h_{\mathbf{5}})=-2X(\chi_{\mathbf{10}})=X(\chi _{\mathbf{10}})+X(\psi_{\mathbf{\bar{5}}})\;. \end{equation} The $\mathbf{H}_{\mathbf{24}}$ and gauge bosons are neutral. 
This symmetry has an interesting property. It is a subgroup of the $U(1)_{5}\otimes U(1)_{10}$ symmetry corresponding to the separate rephasing of $\psi_{\mathbf{\bar{5}}}$ and $\chi_{\mathbf{10}}$. Both the $U(1)_{5}$ and $U(1)_{10}$ singlet currents are chiral and thus anomalous: \begin{equation} \left( \begin{array} [c]{c}% \partial_{\mu}J_{\bar{5}}^{\mu}\\ \partial_{\mu}J_{10}^{\mu}% \end{array} \right) =-N_{f}\frac{g_{5}^{2}}{16\pi^{2}}\left( \begin{array} [c]{c}% 1/2\\ 3/2 \end{array} \right) A_{\mu\nu}^{A}\tilde{A}^{A,\mu\nu}\;,\label{AnoSU5}% \end{equation} where $C(\mathbf{\bar{5}})=1/2$, $C(\mathbf{10})=3/2$, $N_{f}=3$ the number of fermion generations, and $\tilde{A}^{A,\mu\nu}=1/2\varepsilon^{\mu\nu\rho\sigma}A_{\rho\sigma}^{A}$. However, the combination $J_{X}^{\mu}\equiv J_{10}^{\mu}-3J_{\bar{5}}^{\mu}$ corresponding to the fermionic current of the $U(1)_{X}$ symmetry is anomaly-free. The $U(1)_{X}$ symmetry is not broken by the VEV of $\mathbf{H}_{\mathbf{24}}$ since that state is neutral. It is only broken at the electroweak scale, along with $U(1)_{Y}$, once $h_{\mathbf{5}}$ develops its VEV. There is no associated Goldstone boson because this actually corresponds to a partial breaking, with a reordering of the $U(1)$s:% \begin{equation} SU(2)_{L}\otimes U(1)_{Y}\otimes U(1)_{X}\rightarrow U(1)_{em}\otimes U(1)_{\mathcal{B}-\mathcal{L}}\ . \end{equation} In other words, there is no Goldstone boson associated with the $U(1)_{X}$ breaking because it gets replaced by another exact global symmetry~\cite{Mohapatra:1982xz}. To see that the surviving $U(1)$ is actually that for $\mathcal{B}-\mathcal{L}$, first note that the $h_{\mathbf{5}}$ breaks both $U(1)_{Y}$ and $U(1)_{X}$, but not the combination% \begin{equation} Z=\frac{1}{5}X+\frac{2}{5}Y\;, \end{equation} if we normalize the $X$ charge as $X(h_{\mathbf{5}})=-2$. It is then a simple exercise to check that $Z$ charges coincide with $\mathcal{B}-\mathcal{L}$ for the fermionic $SU(5)$ fields. 
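Explicitly, with the normalization $X(h_{\mathbf{5}})=-2$, the fermion charges are $X(\chi_{\mathbf{10}})=1$ and $X(\psi_{\mathbf{\bar{5}}})=-3$, and using the usual hypercharges (in the convention $Q=T_{3}+Y/2$), the $Z$ charges of the fermion components are
\begin{equation*}
Z(q_{L})=\frac{1}{5}+\frac{2}{5}\cdot\frac{1}{3}=\frac{1}{3}\ ,\quad
Z(u_{R}^{c})=\frac{1}{5}-\frac{2}{5}\cdot\frac{4}{3}=-\frac{1}{3}\ ,\quad
Z(e_{R}^{c})=\frac{1}{5}+\frac{2}{5}\cdot2=1\ ,\quad
Z(d_{R}^{c})=-\frac{3}{5}+\frac{2}{5}\cdot\frac{2}{3}=-\frac{1}{3}\ ,\quad
Z(\ell_{L})=-\frac{3}{5}-\frac{2}{5}=-1\ ,
\end{equation*}
which are indeed the $\mathcal{B}-\mathcal{L}$ charges, while the electroweak doublet in $h_{\mathbf{5}}$ has $Z=-2/5+2/5=0$ and can thus acquire a VEV without breaking $U(1)_{Z}$.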
Some of the GUT scale bosons also carry $\mathcal{B}-\mathcal{L}$ charges: the leptoquarks $X^{\mu}$ and $Y^{\mu}$ have $\mathcal{B}-\mathcal{L}=2/3$ and the colored states in $h_{\mathbf{5}}$ have $\mathcal{B}-\mathcal{L}=-2/3$. \subsection{The PQ-SU(5) model} There is no room for the axion in the minimal model, where no new matter fields are introduced. The simplest way to introduce it is to apply the PQ recipe~\cite{PQ} and add a second Higgs fiveplet. The Yukawa couplings are then% \begin{equation} \mathcal{L}_{\text{Yukawa}}=\frac{1}{4}\bar{\chi}_{\mathbf{10}}^{c% }\mathbf{Y}_{10}\chi_{\mathbf{10}}h_{1,\mathbf{5}}+\sqrt{2}\bar{\psi }_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{5}\chi_{\mathbf{10}% }h_{2,\mathbf{5}}^{\dagger}+h.c.\;.\label{SU5THDM}% \end{equation} We want the Lagrangian to be invariant under $U(1)_{1}\otimes U(1)_{2}$ corresponding to the independent phase redefinitions of the two Higgs fiveplets $h_{k,\mathbf{5}}\rightarrow\exp(i\alpha_{k})h_{k,\mathbf{5}}$, so the potential is restricted to% \begin{align} V(h_{1,\mathbf{5}},h_{2,\mathbf{5}},\mathbf{H}_{\mathbf{24}}) & =-\frac {\mu^{2}}{2}\langle\mathbf{H}_{\mathbf{24}}^{2}\rangle+\frac{a}{4}% \langle\mathbf{H}_{\mathbf{24}}^{2}\rangle^{2}+\frac{b}{2}\langle \mathbf{H}_{\mathbf{24}}^{4}\rangle\nonumber\\ & \ \ \ -\sum_{i=1,2}\frac{\mu_{i}^{2}}{2}h_{i,\mathbf{5}}^{\dagger }h_{i,\mathbf{5}}+\alpha_{i}(h_{i,\mathbf{5}}^{\dagger}h_{i,\mathbf{5}% })\langle\mathbf{H}_{\mathbf{24}}^{2}\rangle+\beta_{i}h_{i,\mathbf{5}% }^{\dagger}\mathbf{H}_{\mathbf{24}}^{2}h_{i,\mathbf{5}}+\frac{\lambda_{i}}% {2}(h_{i,\mathbf{5}}^{\dagger}h_{i,\mathbf{5}})^{2}\nonumber\\ & \ \ \ +\lambda_{3}(h_{1,\mathbf{5}}^{\dagger}h_{1,\mathbf{5}}% )(h_{2,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}})+\lambda_{4}(h_{1,\mathbf{5}% }^{\dagger}h_{2,\mathbf{5}})(h_{2,\mathbf{5}}^{\dagger}h_{1,\mathbf{5}})\ . \end{align} The symmetry breaking proceeds similarly as in the minimal model. 
The parameters $\mu$, $a$, and $b$ can be chosen such that $\langle0|\mathbf{H}_{\mathbf{24}}|0\rangle=v_{24}\,\operatorname{diag}(1,1,1,-3/2,-3/2)$. Then, provided $\beta_{i}$ are negative, the terms $\beta_{i}h_{i,\mathbf{5}}^{\dagger}\langle0| \mathbf{H}_{\mathbf{24}}|0\rangle^{2}h_{i,\mathbf{5}}$ tilt the potential so that minima of the form $\langle0|h_{i,\mathbf{5}}|0\rangle\sim(0,0,0,v_{51},v_{52})$ are lower than color-breaking ones. The $\beta_{i}$ terms also correct the initial $SU(5)$ breaking, with generically $\langle0|\mathbf{H}_{\mathbf{24}}|0\rangle=v_{24}\, \operatorname*{diag}(1,1,1,-(3+\epsilon)/2,-(3-\epsilon)/2)$, but this has a negligible impact at the EW scale. Neglecting all $\mathcal{O}(v_{5}/v_{24})$ corrections, the EW symmetry breaking induced by $\langle0|h_{i,\mathbf{5}}|0\rangle$ proceeds exactly as in the THDM. Broadly speaking, one can see that $\mu_{i}^{2}$, $\alpha_{i}$, and $\beta_{i}$ combine into a quadratic term for $h_{i,\mathbf{5}}$, while the $\lambda$ parameters simply match onto those of the THDM. The axion arises from the phases of $h_{i,\mathbf{5}}$. If we denote $\langle0|h_{i,\mathbf{5}}|0\rangle\sim(0,0,0,0,v_{i})^{T}$ with $v_{1}^{2}+v_{2}^{2}=v_{5}^{2}$ and $x=v_{2}/v_{1}$, the identification of the $U(1)_{Y}$ Goldstone boson and of the axion proceeds exactly as in the THDM, and the fiveplets have the PQ charges:% \begin{equation} PQ(h_{1,\mathbf{5}})=x\ \ ,\ \ PQ(h_{2,\mathbf{5}})=-\frac{1}{x}% \ .\label{PQSU5naif1}% \end{equation} If we interpret those charges at the level of the $SU(5)$ invariant Yukawa couplings of Eq.~(\ref{SU5THDM}), we find% \begin{equation} PQ(\chi_{\mathbf{10}})=-\frac{x}{2}\ ,\ \ PQ(\psi_{\mathbf{\bar{5}}})=\frac {x}{2}-\frac{1}{x}\ .\label{PQSU5naif2}% \end{equation} The charge of $\chi_{\mathbf{10}}$ is fixed because of the Majorana nature of the $\mathbf{Y}_{10}$ coupling, and this then leaves no freedom for $\psi_{\mathbf{\bar{5}}}$. 
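Explicitly, the invariance of the two Yukawa couplings in Eq.~(\ref{SU5THDM}) requires
\begin{equation*}
2\,PQ(\chi_{\mathbf{10}})+PQ(h_{1,\mathbf{5}})=0\ ,\qquad
PQ(\psi_{\mathbf{\bar{5}}})+PQ(\chi_{\mathbf{10}})-PQ(h_{2,\mathbf{5}})=0\ ,
\end{equation*}
which, given the fiveplet charges of Eq.~(\ref{PQSU5naif1}), reproduces Eq.~(\ref{PQSU5naif2}).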
Note that if we start from the PQ charges of the fermions in the THDM, Eq.~(\ref{PQferm}), and require the equality of the charges of the fermions of the $\mathbf{10}$ and $\mathbf{\bar{5}}$, we recover the above result with \begin{equation} \alpha=-\frac{x}{2}\ ,\ \ \beta=\frac{x}{2}-\frac{1}{x}\ .\label{UnifPQ}% \end{equation} It may seem that the $SU(5)$ symmetry unambiguously fixes all the PQ charges, but this is suspicious. First, these charges are functions of $x=v_{2}/v_{1}$, which is clearly defined only once the fiveplets acquire their VEVs, that is, at the very end of the breaking chain. Further, since the axion state knows about $U(1)_{Y}$ and its breaking, and since the hypercharge is not constant over $SU(5)$ multiplets, it is far from clear that the PQ charge should be constant over whole $SU(5)$ multiplets. To get a better understanding, let us analyze the breaking chain in more detail. In the THDM, the Lagrangian is invariant under $U(1)_{1}\otimes U(1)_{2}\otimes U(1)_{\mathcal{B}}\otimes U(1)_{\mathcal{L}}$, with $U(1)_{Y}$ hidden inside $U(1)_{1}\otimes U(1)_{2}$. With the breaking chain of Eq.~(\ref{PatternTHDM}), there are two Goldstone bosons: one is the axion corresponding to $U(1)_{PQ}$, and the other is eaten by the $Z$ boson. The third $U(1)$ remains exact, $U(1)_{em}\subset U(1)_{Y}\otimes SU(2)_{L}$. Thus, in the THDM, there is a two-parameter freedom in the PQ charges of the fermions because $U(1)_{\mathcal{B}}\otimes U(1)_{\mathcal{L}}$ is always exact and separate. The situation is quite different in $SU(5)$. 
Adding the second Higgs fiveplet extends $U(1)_{X}$ to $U(1)_{1}\otimes U(1)_{2}$, and the breaking chain is% \begin{align} G_{SU(5)} & \sim U(1)_{1}\otimes U(1)_{2}\otimes SU(5)\nonumber\\ & \rightarrow U(1)_{1}\otimes U(1)_{2}\otimes U(1)_{Y}\otimes SU(2)_{L}% \otimes SU(3)_{C}\nonumber\\ & \sim U(1)_{X^{\prime}}\otimes U(1)_{X}\otimes U(1)_{Y}\otimes SU(2)_{L}\otimes SU(3)_{C}\nonumber\\ & \rightarrow U(1)_{\mathcal{B}-\mathcal{L}}\otimes U(1)_{em}\otimes SU(3)_{C}\ .\label{SSB5}% \end{align} In the second stage of breaking, the $U(1)_{X}\subset U(1)_{1}\otimes U(1)_{2}$ mixes with $U(1)_{Y}\subset SU(5)$ to generate the unbroken $U(1)_{\mathcal{B}-\mathcal{L}}$, while $U(1)_{X^{\prime}}=U(1)_{1}\otimes U(1)_{2}\backslash U(1)_{X}$ mixes with $U(1)_{Y}$ to generate the broken PQ symmetry. Thus, only one $U(1)\subset U(1)_{1}\otimes U(1)_{2}$ is spontaneously broken and only one additional Goldstone boson emerges. At the same time, though $U(1)_{\mathcal{B}-\mathcal{L}}$ is not explicitly present above the GUT scale, there is always a separate global symmetry that is active, and thus, we do expect the PQ charges of the fermions to exhibit a one-parameter freedom. This contradicts the naive Eq.~(\ref{UnifPQ}), where the PQ charges are unambiguously fixed. To pinpoint this freedom, let us start at the $SU(5)$ level. What matters there is the $U(1)_{1}\otimes U(1)_{2}$ symmetry, whose charges are% \begin{equation}% \begin{tabular} [c]{ccccc}\hline & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$\\\hline $\ \ U(1)_{1}\ \ $ & $1$ & $0$ & $-1/2$ & $1/2$\\ $\ \ U(1)_{2}\ \ $ & $0$ & $1$ & $0$ & $1$\\\hline \end{tabular} \ \label{U1U2charges}% \end{equation} On the other hand, both PQ and $\mathcal{B}-\mathcal{L}$ charges are only defined once $SU(5)$ is broken because of the mixing with $U(1)_{Y}$. 
Let us impose the ansatz:% \begin{align} PQ & =\frac{2}{5}(\zeta_{1}^{PQ}U_{1}+\zeta_{2}^{PQ}U_{2}+\zeta_{Y}% ^{PQ}Y)\ ,\label{PQU1U2Y}\\ \mathcal{B}-\mathcal{L} & =\frac{2}{5}(\zeta_{1}^{\mathcal{B}-\mathcal{L}% }U_{1}+\zeta_{2}^{\mathcal{B}-\mathcal{L}}U_{2}+\zeta_{Y}^{\mathcal{B}% -\mathcal{L}}Y)\ ,\label{BLU1U2Y}% \end{align} for some coefficients $\zeta^{PQ}_{i}$ and $\zeta_{i}^{\mathcal{B}-\mathcal{L}}$. The factors $2/5$ are introduced for convenience. Remember that implicitly, PQ charges assume that $U(1)_{Y}$ is spontaneously broken since the PQ Goldstone boson is defined as that orthogonal to the WBG of the $Z^{0}$ boson. This fixes the PQ charges of the $SU(2)_{L}$ doublets as (compare with Eq.~(\ref{PQSU5naif1}))% \begin{equation} PQ(\Phi_{1}\subset h_{1,\mathbf{5}})=x\ ,\ \ PQ(\Phi_{2}\subset h_{2,\mathbf{5}})=-\frac{1}{x} \ , \end{equation} but the color triplets in $h_{i,\mathbf{5}}$ may have different PQ charges. Then, imposing Eq.~(\ref{PQferm}) and solving for $\alpha$, $\beta$, and $\zeta_{1,2,Y}^{PQ}$, a one-parameter freedom remains: \begin{equation} \zeta_{1}^{PQ}=\beta+2x+\frac{1}{x}\ ,\ \ \zeta_{2}^{PQ}=\beta-\frac{x}% {2}-\frac{3}{2x}\ ,\ \ \zeta_{Y}^{PQ}=\frac{x}{2}-\frac{1}{x}-\beta \ ,\label{PQU1U2Y2}% \end{equation} with $\beta$ arbitrary, together with% \begin{equation} 3\alpha+\beta=-\left( x+\frac{1}{x}\right) \equiv2\mathcal{N}_{SU(5)}% \ .\label{NSU5fixed}% \end{equation} So, we represent the PQ charge of the fermion keeping track of the one-parameter freedom as% \begin{equation} PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})=\left( \frac{2\mathcal{N}_{SU(5)}-\beta }{3},\frac{2\mathcal{N}_{SU(5)}-\beta}{3}+x,\frac{2\mathcal{N}_{SU(5)}-\beta }{3}+\frac{1}{x},\beta,\beta+\frac{1}{x}\right) \ .\label{PQFinalCharge}% \end{equation} Written in this form, it is clear that the freedom $\beta$ corresponds to $\mathcal{B}-\mathcal{L}$ remaining exact. 
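Indeed, shifting $\beta\rightarrow\beta+\delta$ in Eq.~(\ref{PQFinalCharge}) changes the fermion charges by
\begin{equation*}
\delta PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})
=\delta\times\left(-\frac{1}{3},-\frac{1}{3},-\frac{1}{3},1,1\right)
=-\delta\times(\mathcal{B}-\mathcal{L})(q_{L},u_{R},d_{R},\ell_{L},e_{R})\ ,
\end{equation*}
that is, by a multiple of the conserved $\mathcal{B}-\mathcal{L}$ charges, which cannot affect any observable.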
So, and quite expectedly, the only difference compared to the THDM is that the freedom corresponding to $\mathcal{B}+\mathcal{L}$ no longer appears. Actually, it is the specific way in which the $SU(5)$ gauge interactions break $\mathcal{B}+\mathcal{L}$ that freezes the combination $3\alpha+\beta$ to the specific value $2\mathcal{N}_{SU(5)}$. Four features of this solution are remarkable. First, solving Eq.~(\ref{BLU1U2Y}) produces \begin{equation} \zeta_{Y}^{\mathcal{B}-\mathcal{L}}=-\zeta_{1}^{\mathcal{B}-\mathcal{L}% }=-\zeta_{2}^{\mathcal{B}-\mathcal{L}}=1\ .\label{BLU1U2Y2}% \end{equation} So, while there remains a one-parameter freedom for the PQ charges, corresponding to choices for $\beta$, no such freedom exists for $\mathcal{B}-\mathcal{L}$ charges. Second, the ambiguity in the PQ charge of the fermions cannot have any dynamical consequence~\cite{Quevillon:2019zrd, Quevillon:2020hmx}. The simplest way to see this is to adopt a linear representation for the Higgs multiplets. Their couplings with fermions are then uniquely defined, and so are those of the axion. Third, it is still possible to attain $SU(5)$-invariant PQ charges with the value of $\beta$ quoted in Eq.~(\ref{UnifPQ}). We now understand this value as that for which $\zeta_{Y}^{PQ}=0$, that is, the value which removes $Y$ from the PQ charge in Eq.~(\ref{PQU1U2Y}). Of course, this is compulsory for them to be $SU(5)$ invariant. Yet, it is crucial for the following to realize that at the level of the minimal model, this is nothing more than a choice. In the presence of explicit $\mathcal{B}$ and/or $\mathcal{L}$ violating couplings, other values of $\beta$ may be compulsory. Finally, the fact that $3\alpha+\beta$ is fixed, see Eq.~(\ref{NSU5fixed}), is particularly interesting. 
Looking back at Eq.~(\ref{U1U2charges}), the two global $U(1)$s have the anomalies% \begin{equation} \left( \begin{array} [c]{c}% \partial_{\mu}J_{1}^{\mu}\\ \partial_{\mu}J_{2}^{\mu}% \end{array} \right) =-\frac{N_{f}g_{5}^{2}}{16\pi^{2}}\left( \begin{array} [c]{c}% -1/2\\ +1/2 \end{array} \right) A_{\mu\nu}^{A}\tilde{A}^{A,\mu\nu}\ . \end{equation} At the low-energy scale, once $SU(5)$ is broken down to the THDM gauge group, these anomalies can only be matched with that of the PQ current since both $U(1)_{\mathcal{B}-\mathcal{L}}$ and $U(1)_{Y}$ remain anomaly-free. Specifically, the anomaly of the PQ current calculated at the $SU(5)$ level, that is from Eq.~(\ref{PQU1U2Y}) with $\zeta_{1,2}^{PQ}$ in Eq.~(\ref{PQU1U2Y2}), is \begin{equation} \partial_{\mu}J_{PQ}^{\mu} =-\frac{N_{f}g_{5}^{2}}{16\pi^{2}}\frac{\zeta_{2}^{PQ}-\zeta_{1}^{PQ}}{5} A_{\mu\nu}^{A}\tilde{A}^{A,\mu\nu} =-\frac{N_{f}g_{5}^{2}}{16\pi^{2}}\mathcal{N}_{SU(5)} A_{\mu\nu}^{A}\tilde{A}^{A,\mu\nu} \ . \end{equation} The $\beta$ parameter entering $\zeta_{1,2}^{PQ}$ cancels out, leaving no parametric freedom. This result matches that computed directly at the level of the THDM, after the $SU(5)$ breaking. Given the charges in Eq.~(\ref{PQFinalCharge}), we find% \begin{equation} \partial_{\mu}J_{PQ}^{\mu}=\frac{N_{f}}{16\pi^{2}}\left\{ \mathcal{N}% _{C}g_{s}^{2}G_{\mu\nu}^{a}\tilde{G}^{a,\mu\nu}+\mathcal{N}_{L}g^{2}W_{\mu\nu }^{i}\tilde{W}^{i,\mu\nu}+\mathcal{N}_{Y}g^{\prime2}B_{\mu\nu}\tilde{B}% ^{\mu\nu}\right\} \;, \label{unifano1} \end{equation} with, from Eq.~(\ref{JPQTHDM}), \begin{equation} \mathcal{N}_{C}=\mathcal{N}_{L}=\frac{3}{5}\mathcal{N}_{Y}=-\mathcal{N}% _{SU(5)}\ . \label{unifano2} \end{equation} Magically, all the coefficients match, independently of the free parameter $\beta$ in Eq.~(\ref{PQFinalCharge}). The ratio $3/5$ is the usual rescaling factor for the EW-scale hypercharge in terms of the diagonal hypercharge generator of $SU(5)$. 
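As a cross-check, plugging $3\alpha+\beta=2\mathcal{N}_{SU(5)}=-\left(x+1/x\right)$ into Eq.~(\ref{JPQTHDM}) gives
\begin{equation*}
\mathcal{N}_{C}=\frac{1}{2}\left(x+\frac{1}{x}\right)\ ,\qquad
\mathcal{N}_{L}=-\frac{1}{2}(3\alpha+\beta)=\frac{1}{2}\left(x+\frac{1}{x}\right)\ ,\qquad
\mathcal{N}_{Y}=\left(-\frac{1}{2}+\frac{4}{3}\right)\left(x+\frac{1}{x}\right)=\frac{5}{6}\left(x+\frac{1}{x}\right)\ ,
\end{equation*}
from which Eq.~(\ref{unifano2}) follows, with the $\beta$ dependence dropping out of all three coefficients.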
Thus, even if there remains some freedom in the PQ charges, there is no ambiguity in the anomalous coefficients because they have to match those of the $SU(5)$ global anomalies of the $U(1)_{1}\otimes U(1)_{2}$ symmetry (see Ref.~\cite{Ernst:2018bib} for a similar observation in the $SO(10)$ context). Inverting the argument, requiring the matching of the anomalies at the various levels of the symmetry breaking chain provides tight constraints on the fermion PQ charges, though not enough to fix them all. \subsubsection{Lepton-number violation} To account for the very light neutrino masses, the standard approach is to implement a seesaw mechanism. In $SU(5)$, as in the SM, this starts by adding a flavor triplet of right-handed neutrinos neutral under the gauge group, $\psi_{\mathbf{1}}=(\nu_{R})^{c}$. Two new couplings are then allowed in the Lagrangian:% \begin{equation} \mathcal{L}_{\nu}=-\frac{1}{2}\bar{\psi}_{\mathbf{1}}^{c}% \mathbf{M}_{R}\psi_{\mathbf{1}}+(\bar{\psi}_{\mathbf{\bar{5}}}^{c% })_{A}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}(h_{i,\mathbf{5}})^{A}% +h.c.\;,\label{Seesaw}% \end{equation} with either $i=1$ or $2$. The important point is that, $\psi_{\mathbf{1}}$ being neutral, a Majorana mass term is permitted at the GUT scale. This is where the careful analysis of the $U(1)$ symmetries performed previously pays off. Indeed, if one naively assigns PQ charges to whole $SU(5)$ multiplets, as in Eqs.~(\ref{PQSU5naif1}) and~(\ref{PQSU5naif2}), then no matter the choice of Higgs fiveplet for the neutrino Yukawa coupling, the singlet state cannot be neutral,% \begin{align} \label{Majorana1} \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{1,\mathbf{5}} & :PQ(\psi_{\mathbf{1}})=\frac{1}{x}-\frac{3x}{2}\ ,\\ \label{Majorana2} \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}} & :PQ(\psi_{\mathbf{1}})=\frac{2}{x}-\frac{x}{2}\ . \end{align} In turn, this would mean that the Majorana mass term breaks the PQ symmetry. 
So, it may appear that either $\mathbf{M}_{R}$ is forbidden, in which case the axion remains a massless Goldstone boson but there is no seesaw mechanism, or $\mathbf{M}_{R}$ is an explicit PQ-symmetry breaking term and the axion cannot remain massless. In that latter case, it would presumably no longer be able to solve the strong CP puzzle since neutrino masses cannot all be negligible compared to the instanton-induced QCD mass term. Actually, the axion remains as a Goldstone boson even in the presence of a Majorana mass term because one is not forced to assign $SU(5)$ invariant PQ charges. As detailed in the previous section, in the absence of $\psi _{\mathbf{1}}$, there was some freedom in how to assign PQ charges. Turning $\mathbf{M}_{R}$ on may use up some of that freedom, but it need not forbid the PQ symmetry. Let us see how this proceeds in detail. Looking back at the charge assignments in Eq.~(\ref{U1U2charges}), neither $U(1)_{1}$ nor $U(1)_{2}$ survives in the presence of $\mathcal{L}_{\nu}$. Yet, there is still a global $U(1)$ symmetry active in the Lagrangian, provided the two fiveplets have related charges. Solving for the two possible Yukawa couplings involving the singlet, we find:% \begin{equation}% \begin{tabular} [c]{cccccc}\hline $U(1)_{W}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$ & $\psi_{\mathbf{1}}$\\\hline $\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi _{\mathbf{1}}h_{1,\mathbf{5}}\ $ & $1$ & $-3/2$ & $-1/2$ & $-1$ & $0$\\ $\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}}$ & $1$ & $-1/4$ & $-1/2$ & $1/4$ & $0$\\\hline \end{tabular} \label{U1Wcharges}% \end{equation} The $U(1)_{W}$ symmetry is spontaneously broken when the fiveplets acquire their vacuum expectation values. In some sense, it replaces the $U(1)_{X}$ symmetry of the minimal $SU(5)$ model. 
However, and contrary to $U(1)_{X}$, it does lead to a Goldstone boson because there is no global symmetry remaining after the EW symmetry breaking. The $U(1)_{\mathcal{B}-\mathcal{L}}$ is no longer active at the low energy scale because it is broken in the neutrino sector. Thus, the detailed breaking chain is now% \begin{align} G_{SU(5)} & \sim U(1)_{W}\otimes SU(5)\nonumber\\ & \rightarrow U(1)_{W}\otimes U(1)_{Y}\otimes SU(2)_{L}\otimes SU(3)_{C}% \nonumber\\ & \rightarrow U(1)_{em}\otimes SU(3)_{C} \ . \label{SSBW}% \end{align} There are again two Goldstone bosons from the spontaneous breaking of $U(1)_{W}\otimes U(1)_{Y}$: one will be the WBG of the $Z$, and the other will be the axion. Exactly as in the THDM, the physical axion state is defined only once the fiveplets acquire their VEVs, from its orthogonality to the WBG of the $Z^{0}$. This again fixes the PQ charges of the $SU(2)_{L}$ doublets in $h_{1,\mathbf{5}}$ and $h_{2,\mathbf{5}}$ to $x$ and $-1/x$, respectively. In a way similar to Eq.~(\ref{PQU1U2Y}), the PQ charges can be expressed as linear combinations of the $U(1)_{Y}$ and $U(1)_{W}$ charges,% \begin{equation} PQ=\frac{2}{5}(\xi_{Y}Y+\xi_{W}W)\ .\label{YWcharges}% \end{equation} However, contrary to Eq.~(\ref{PQU1U2Y}), it is not possible to have $\xi_{Y}=0$ here since $U(1)_{W}$ is not aligned with $U(1)_{PQ}$. Thus, the PQ charge cannot be $SU(5)$ symmetric. The linear combination depends on which fiveplet couples to right-handed neutrinos, and both $\xi_{Y}$ and $\xi_{W}$ depend on the ratio of the vacuum expectation values of the fiveplets since this equation has to reproduce the PQ charges of the doublets $\Phi_{1}\subset h_{1,\mathbf{5}}$ and $\Phi_{2}\subset h_{2,\mathbf{5}}$ given the $W$ charges in Eq.~(\ref{U1Wcharges}). 
Specifically, by asking in addition that $PQ(\nu_{R})=0$, we find% \begin{align} \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{1,\mathbf{5}} & :\xi_{Y}=\frac{3}{2}x-\frac{1}{x}\ ,\ \xi_{W}=x+\frac {1}{x}\ ,\\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}} & :\xi_{Y}=\frac{1}{2}x-\frac{2}{x}\ ,\ \xi_{W}=2 \left( x+\frac{1}{x}\right) \ . \end{align} The PQ charges of the fermions can then be calculated. Unsurprisingly, they correspond to a specific choice for $\beta$ in Eq.~(\ref{PQFinalCharge}): \begin{subequations} \label{PQseesaw}% \begin{align} \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{1,\mathbf{5}} & :PQ(\nu_{R})=\beta+x=0\nonumber\\ & \Rightarrow PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})=\left( -\frac{1}% {3x},x-\frac{1}{3x},\frac{2}{3x},-x,\frac{1}{x}-x\right) \ \ ,\\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}} & :PQ(\nu_{R})=\beta-\frac{1}{x}=0\nonumber\\ & \Rightarrow PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})=\left( -\frac{x}{3}% -\frac{2}{3x},\frac{2x}{3}-\frac{2}{3x},\frac{1}{3x}-\frac{x}{3},\frac{1}% {x},\frac{2}{x}\right) \ \ . \end{align} \end{subequations} This also means that the THDM free parameters $\alpha$ and $\beta$ still satisfy $3\alpha+\beta=2\mathcal{N}_{SU(5)}$, as they should since introducing a gauge singlet does not change the anomaly coefficients. Thus, here also, the anomalies are constant through the SSB chain, and this is the only manifestation of the underlying $SU(5)$ dynamics on the fermion PQ charges. \begin{figure}[t] \centering\includegraphics[width=0.35\textwidth]{MR.jpg} \caption{Skeleton of the $U(1)_{1}\otimes U(1)_{2}$ symmetry-breaking fermion loop contributing to the effective scalar potential, and corresponding to the effective interactions in Eq.~(\ref{EffScal5h}). 
Additional derivatives and/or gauge interactions are understood to break the symmetry of the diagram.}% \label{Fig1}% \end{figure} There is still one remaining question to be addressed. The $U(1)_{1}\otimes U(1)_{2}$ symmetry is broken in the fermion sector, leaving only $U(1)_{W}$, but the scalar potential is still invariant under the full $U(1)_{1}\otimes U(1)_{2}$. Only that sector defines the Goldstone boson content of the theory, so one may wonder why the breaking pattern in Eq.~(\ref{SSBW}) is selected and not that in Eq.~(\ref{SSB5}). To understand this, it is necessary to go to the effective potential level. Radiative corrections must break $U(1)_{1}\otimes U(1)_{2}$, leaving only $U(1)_{W}$ as an active symmetry. Further, these radiative corrections must encode the whole fermion sector of the model, since it is the combined presence of the three Yukawa couplings and the Majorana mass term that prevents a $U(1)_{1}\otimes U(1)_{2}$ charge assignment. With this in mind, one can identify the simplest symmetry breaking term as arising from a fermion loop with the generic structure shown in Fig.~\ref{Fig1}, corresponding to \begin{subequations} \begin{align} \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{1,\mathbf{5}} & :\mathcal{L}_{scalar}^{eff}=\langle\mathbf{Y}% _{10}\mathbf{Y}_{5}^{\dagger}\mathbf{Y}_{1}\mathbf{M}_{R}^{-1}\mathbf{Y}% _{1}\mathbf{Y}_{5}^{\dagger}\rangle\varepsilon_{ABCDE}h_{1,\mathbf{5}}% ^{A}h_{1,\mathbf{5}}^{B}h_{1,\mathbf{5}}^{C}h_{2,\mathbf{5}}^{D}% h_{2,\mathbf{5}}^{E}+h.c.\ ,\\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}} & :\mathcal{L}_{scalar}^{eff}=\langle\mathbf{Y}% _{10}\mathbf{Y}_{5}^{\dagger}\mathbf{Y}_{1}\mathbf{M}_{R}^{-1}\mathbf{Y}% _{1}\mathbf{Y}_{5}^{\dagger}\rangle\varepsilon_{ABCDE}h_{1,\mathbf{5}}% ^{A}h_{2,\mathbf{5}}^{B}h_{2,\mathbf{5}}^{C}h_{2,\mathbf{5}}^{D}% h_{2,\mathbf{5}}^{E}+h.c.\ . 
\end{align}
\label{EffScal5h}
\end{subequations}
As written, these couplings vanish due to the antisymmetric contraction, so additional derivative insertions are understood. Yet, they do break $U(1)_{1}\otimes U(1)_{2}$ but preserve their respective $U(1)_{W}$ symmetry. Also, clearly, none of them is able to generate a mass term for the pseudoscalar states embedded in the fiveplets, so the WBG of the $Z$ boson and the axion do emerge exactly as in the THDM. One way to see that these couplings can never contribute to a pseudoscalar mass term is to note that if they could, there would be a way to generate a Majorana mass term for the neutrinos out of the Higgs fields via the diagram obtained by cutting out the Majorana mass term in the fermion loop, see Fig.~\ref{Fig1}. Clearly, this is not possible. Rather, since $\mathcal{B}-\mathcal{L}$ is active in the rest of the $SU(5)$ model, the $\Delta\mathcal{L}=2$ Majorana mass is compensated by a $\Delta\mathcal{B}=2$ combination of colored Higgs states, for example $\varepsilon^{ijk}h_{1,\mathbf{5}}^{i}h_{1,\mathbf{5}}^{j}h_{1,\mathbf{5}}^{k}$, and such a combination can never acquire a VEV, otherwise QCD would be broken.
\subsubsection{Baryon-number violation}
As we have seen, the ambiguity in the fermion PQ charges can be fixed by adopting a specific neutrino mass model. Once this is done, the PQ symmetry has no more room to accommodate further $\mathcal{B}$ and/or $\mathcal{L}$ violating effects. So, let us briefly investigate what type of violation is allowed, and what happens if we go beyond that. First, as in the usual $SU(5)$ model, gauge and Higgs states do carry non-trivial $\mathcal{B}-\mathcal{L}$ charges, which are conserved quantum numbers, allowing for $\mathcal{B}+\mathcal{L}$ violating effects. What we want to show now is that the presence of the $U(1)_{1}\otimes U(1)_{2}$ symmetry imposes some restrictions on those effects.
To see this, let us start with the dimension-six four-fermion operators \begin{equation} \mathcal{H}_{eff}^{\dim6}=\frac{1}{\Lambda^{2}}(c_{1}\ell_{L}q_{L}^{3}% +c_{2}e_{R}u_{R}^{2}d_{R}+c_{3}e_{R}u_{R}q_{L}^{2}+c_{4}\ell_{L}q_{L}% d_{R}u_{R})+h.c.\ , \end{equation} where $\Lambda \sim v_{24}$. With $\alpha$ and $\beta$ fixed as in Eq.~(\ref{NSU5fixed}), only the last two are allowed by the PQ symmetry% \begin{equation} PQ(e_{R}u_{R}q_{L}^{2}) = PQ(\ell_{L}q_{L}d_{R}u_{R}) = 3\alpha+\beta+\left( x+\frac{1}{x}\right) =0 \ . \label{AllowedOps} \end{equation} This is understandable since those two operators can arise from leptoquark gauge interactions, and the PQ symmetry has to be compatible with the gauge structure of the model. Said differently, it is precisely because these operators have to be allowed that $3\alpha+\beta = 2\mathcal{N}_{SU(5)}$. On the other hand, the first two operators are forbidden~\cite{Hisano}: \begin{align} PQ(\ell_{L}q_{L}^{3}) & = 3\alpha + \beta = -\left( x+\frac{1}{x}\right)\neq 0 \ ,\\ PQ(e_{R}^{\dagger}u_{R}^{\dagger2}d_{R}^{\dagger}) & = -3\alpha-\beta-2\left( x+\frac{1}{x}\right) = -\left( x+\frac{1}{x}\right) \neq 0 \ . \end{align} Basically, the reason for this is the presence of two fiveplets with the Yukawa couplings of Eq.~(\ref{SU5THDM}), which prevents the operator $\bar{\chi}_{\mathbf{10}}^{c}\chi_{\mathbf{10}} \bar{\chi}_{\mathbf{10}}^{c}\psi_{\mathbf{\bar{5}}}$ at tree-level. Instead, operators with that fermionic field content first arise at the dimension-eight level, at which point one can use a quartic Higgs coupling to connect $\bar{\chi}_{\mathbf{10}}^{c}\chi_{\mathbf{10}}$ to $\bar{\chi}_{\mathbf{10}}^{c}\psi_{\mathbf{\bar{5}}}$, so that \begin{equation} \mathcal{H}_{eff}^{\dim8}=\frac{1}{\Lambda^{4}}(\bar{\chi}_{\mathbf{10}% }^{c}\chi_{\mathbf{10}})(\bar{\chi}_{\mathbf{10}}^{c}% \psi_{\mathbf{\bar{5}}})(h_{1,\mathbf{5}}h_{2,\mathbf{5}}^{\dagger})+...\ . 
\end{equation}
The effective operator is then neutral under PQ, as it should be. Yet, phenomenologically, these operators end up suppressed by $v_{5}^{2}/\Lambda^{2}\sim v_{5}^{2}/v_{24}^{2}$ compared to the leading dimension-six operators of Eq.~(\ref{AllowedOps}).\footnote{A similar reasoning holds for $\mathcal{B}+\mathcal{L}$ violating operators involving $\psi_{\mathbf{1}}$, which cannot arise from gauge interactions. Depending on which fiveplet enters in the Yukawa coupling $\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}$, either $\nu_{R}d_{R}q_{L}^{2}$ or $\nu_{R}u_{R}d_{R}^{2}$ arises at the dimension-six level, the other then being of dimension eight.} The $SU(5)$ gauge interactions not only break $\mathcal{B}+\mathcal{L}$ via perturbative leptoquark exchanges, but also via non-perturbative instanton interactions. What is peculiar in $SU(5)$ is that these two breaking effects are not aligned: they impose incompatible restrictions on the PQ symmetry. Indeed, while leptoquark interactions ask for $3\alpha+\beta=2\mathcal{N}_{SU(5)}$, the existence of the instanton effective interactions requires instead $3\alpha+\beta=0$ since
\begin{equation}
PQ((\chi_{\mathbf{10}}^{3}\psi_{\mathbf{\bar{5}}})^{3})=3(3\alpha+\beta)\ . \label{SU5inst}
\end{equation}
There is no way to reconcile these constraints since they involve the same combination $3\alpha+\beta$, reflecting the fact that both effects break $\mathcal{B}+\mathcal{L}$ but not $\mathcal{B}-\mathcal{L}$. This incompatibility was already identified in Ref.~\cite{Quevillon:2020hmx}, where it was remarked that the $SU(2)_{L}$ instanton interaction alone, $(\ell_{L}q_{L}^{3})^{3}$, requires setting $3\alpha+\beta=0$, while the unification of the anomaly coefficients in Eqs.~(\ref{unifano1}) and~(\ref{unifano2}) rather asks for $3\alpha+\beta=2\mathcal{N}_{SU(5)}$. In this respect, Eq.~(\ref{SU5inst}) provides an interesting new perspective.
Indeed, the $SU(5)$ instanton interaction $(\chi_{\mathbf{10}}^{3}\psi_{\mathbf{\bar{5}}})^{3}$ not only contains the electroweak $(\ell_{L}q_{L}^{3})^{3}$ interactions, but also the QCD one, $q_{L}^{6}d_{R}^{\dagger3}u_{R}^{\dagger3}$. This latter interaction must have a non-zero PQ charge since it is another guise of the QCD axial anomaly, $G_{\mu\nu}\tilde{G}^{\mu\nu}$, ultimately responsible for the $\eta^{\prime}$ mass when run down to the low energy scale. So, the very idea of unification requires electroweak instantons to carry non-trivial PQ charges, and one should not expect $3\alpha+\beta=0$ to hold.
\begin{figure}[t]
\centering\includegraphics[width=0.40\textwidth]{Inst.jpg}\caption{Effective scalar interaction built on the fermionic instanton interaction of Eq.~(\ref{SU5inst}), and corresponding to the naive estimate of Eq.~(\ref{InstScalar}).}
\label{Fig2}
\end{figure}
Phenomenologically, instanton effects are tiny in the perturbative regime. As presented in Ref.~\cite{Quevillon:2020hmx}, a very rough estimate of their impact on the axion mass can be obtained by looking at the effective potential term induced by fermion loops built on the instanton interaction, see Fig.~\ref{Fig2}. The flower-like diagram becomes an effective coupling among Higgs fiveplets:
\begin{equation}
\mathcal{L}_{scalar}^{eff}\ni\frac{1}{\Lambda^{2}}c_{inst}(h_{1,\mathbf{5}}h_{2,\mathbf{5}}^{\dagger})^{3}\ . \label{InstScalar}
\end{equation}
With $c_{inst}\sim\exp(-4\pi/g^{2})$, the induced axion mass is certainly much smaller than the QCD contribution. Yet, as explained in Ref.~\cite{Quevillon:2020hmx}, this interaction could have cosmological implications. At high temperature, electroweak instantons may represent the main force driving the alignment of the axion in a specific direction, and the strong CP puzzle would be resolved only at a later stage once hadronic effects start to kick in.
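To make the selection rules of this section fully explicit, they can be cross-checked against the type-I charges of Eq.~(\ref{PQseesaw}). Taking for instance the first realization, a direct evaluation gives
\begin{align}
PQ(e_{R}u_{R}q_{L}^{2}) & =\left( \frac{1}{x}-x\right) +\left( x-\frac{1}{3x}\right) -\frac{2}{3x}=0\ ,\\
PQ(\ell_{L}q_{L}d_{R}u_{R}) & =-x-\frac{1}{3x}+\frac{2}{3x}+x-\frac{1}{3x}=0\ ,\\
PQ(\ell_{L}q_{L}^{3}) & =-x-\frac{3}{3x}=-\left( x+\frac{1}{x}\right) \neq0\ ,
\end{align}
in agreement with the allowed and forbidden patterns derived above.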
\subsection{The DFSZ-SU(5) extensions}
As proposed in the original paper, Ref.~\cite{axionDFS}, the axion of the PQ-$SU(5)$ model can be made invisible by following the same recipe as for the THDM. Adding a complex $SU(5)$ singlet $\phi_{\mathbf{1}}$, and allowing for the couplings
\begin{equation}
V(\phi_{\mathbf{1}})=\frac{\lambda}{2}(\phi_{\mathbf{1}}^{\dagger}\phi_{\mathbf{1}})^{2}+\phi_{\mathbf{1}}^{\dagger}\phi_{\mathbf{1}}\left( \mu^{2}+\alpha_{1}\langle\mathbf{H}_{\mathbf{24}}^{2}\rangle+\alpha_{2}h_{1,\mathbf{5}}^{\dagger}h_{1,\mathbf{5}}+\alpha_{3}h_{2,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}\right) -\left[ \lambda_{12}\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}+h.c.\right] \ ,\label{SU5DFSZ}
\end{equation}
the $U(1)_{1}\otimes U(1)_{2}$ symmetry is preserved provided $\phi_{\mathbf{1}}$ is also charged:
\begin{equation}
\begin{tabular}[c]{ccccccc}\hline
& $\phi_{\mathbf{1}}$ & $\mathbf{H}_{\mathbf{24}}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$\\\hline
$\ \ U(1)_{1}\ \ $ & $1/2$ & $0$ & $1$ & $0$ & $-1/2$ & $1/2$\\
$\ \ U(1)_{2}\ \ $ & $-1/2$ & $0$ & $0$ & $1$ & $0$ & $1$\\\hline
\label{DFSZSU5}
\end{tabular}
\end{equation}
Another immediate observation from this potential is that even starting with $\mu^{2}>0$, the GUT symmetry breaking alone is able to induce that of the $U(1)_{1}\otimes U(1)_{2}$ symmetry whenever $\alpha_{1}<0$. Setting $\mu$ to zero, the VEV $v_{s}$ of the singlet is automatically set by the GUT scale, $2\lambda v_{s}^{2}\approx15|\alpha_{1}|v_{24}^{2}$.
To proceed, it suffices to plug the polar representations \begin{equation} \phi_{\mathbf{1}}=\frac{1}{\sqrt{2}}\exp(i\eta_{s}/v_{s})(v_{s}+\sigma_{s}) \ , \ \ h_{i,\mathbf{5}}=\frac{1}{\sqrt{2}}\exp (i\eta_{i}/v_{i}) (h_{i,\mathbf{5},1}^{*},h_{i,\mathbf{5},2}^{*},h_{i,\mathbf{5},3}^{*},h_{i,\mathbf{5}}^{+}, v_i + \operatorname{Re} h_{i,\mathbf{5}}^{0}) \ , \end{equation} in the scalar potential, and set all fields but $\eta_{1,2,s}$ to zero. Only the $-\lambda_{12}\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger }h_{2,\mathbf{5}}$ coupling contributes and% \begin{equation} V_{\text{Scalar}}(\eta_{1,2,s})=-\frac{1}{2}\lambda_{12}v_{1}v_{2}v_{s}% ^{2}\cos\left( \frac{\eta_{1}}{v_{1}}-\frac{\eta_{2}}{v_{2}}-\frac{2\eta_{s}% }{v_{s}}\right) \ . \label{VscalarDFSZ} \end{equation} This is exactly the same potential as in the usual THDM version of the DFSZ model. The mixing matrix is thus \begin{equation} \left( \begin{array} [c]{c}% G^{0}\\ a^{0}\\ \pi^{0}% \end{array} \right) =\left( \begin{array} [c]{ccc}% \sin\beta & \cos\beta & 0\\ v_5\cos\beta\sin2\beta/\omega & -v_5\sin\beta\sin2\beta/\omega & v_{s}/\omega\\ v_{s}\cos\beta/\omega & -v_{s}\sin\beta/\omega & -v_5\sin2\beta/\omega \end{array} \right) \left( \begin{array} [c]{c}% \eta_{1}\\ \eta_{2}\\ \eta_{s}% \end{array} \right) \ ,\label{DFSZmixing}% \end{equation} with $\omega^{2}=v_{s}^{2}+v_{5}^{2}\sin^{2}2\beta$. 
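As a quick consistency check, note that this matrix is orthogonal. For instance, the norm of the $a^{0}$ row is
\begin{equation}
\frac{v_{5}^{2}\sin^{2}2\beta\left( \cos^{2}\beta+\sin^{2}\beta\right) +v_{s}^{2}}{\omega^{2}}=\frac{v_{5}^{2}\sin^{2}2\beta+v_{s}^{2}}{\omega^{2}}=1\ ,
\end{equation}
while its scalar product with the $\pi^{0}$ row is proportional to $v_{5}v_{s}\sin2\beta\,(\cos^{2}\beta+\sin^{2}\beta-1)=0$, so the three states are correctly normalized and mutually orthogonal.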
The state $G^{0}=\sin\beta\,\eta_{1}+\cos\beta\,\eta_{2}$ is the WBG of the $Z^{0}$ boson; the $\pi^{0}$ is the pseudoscalar state occurring, once properly normalized, in the argument of the cosine function in Eq.~(\ref{VscalarDFSZ}), and has the mass $M_{\pi^{0}}^{2}\approx\lambda_{12}v_{s}^{2}/\sin2\beta$; while the axion is orthogonal to both $G^{0}$ and $\pi^{0}$, hence
\begin{equation}
a^{0}=\eta_{s}+\frac{v_5}{v_{s}}(\cos\beta\,\eta_{1}-\sin\beta\,\eta_{2})\sin2\beta+\mathcal{O}(v_5^{2}/v_{s}^{2})\ .\label{DFSZaxion}
\end{equation}
The PQ charges of the scalars can be read off the second line of the mixing matrix in Eq.~(\ref{DFSZmixing}), and are the same as in Eq.~(\ref{PQdfsz}), while those of the fermions are again given by Eq.~(\ref{PQFinalCharge}).
\subsubsection{Neutrino masses in DFSZ models}
There are many ways to account for neutrino masses in the singlet DFSZ case, so we consider only the simplest realizations: a separate seesaw of type I, the $\nu$DFSZ model, and a seesaw of type II. Let us briefly describe these three constructions.
\paragraph{Type I seesaw scenario:} Adding the seesaw terms of Eq.~(\ref{Seesaw}), i.e., \begin{equation} \mathcal{L}_{\nu}=-\frac{1}{2}\bar{\psi}_{\mathbf{1}}^{c}% \mathbf{M}_{R}\psi_{\mathbf{1}}+(\bar{\psi}_{\mathbf{\bar{5}}}^{c% })_{A}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}(h_{i,\mathbf{5}})^{A}+h.c.\;, \end{equation} the active global symmetry is the $U(1)_{W}$ of Eq.~(\ref{U1Wcharges}), under which $\phi_{\mathbf{1}}$ is now charged because of the $\lambda_{12}$ coupling in Eq.~(\ref{SU5DFSZ}):% \begin{equation}% \begin{tabular} [c]{cccccccc}\hline $U(1)_{W}$ & $\phi_{\mathbf{1}}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$ & $\psi_{\mathbf{1}}$ & $\beta$\\\hline $\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi _{\mathbf{1}}h_{1,\mathbf{5}}\ $ & $5/4$ & $1$ & $-3/2$ & $-1/2$ & $-1$ & $0$ & $-x$\\ $\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}% }h_{2,\mathbf{5}}$ & $5/8$ & $1$ & $-1/4$ & $-1/2$ & $1/4$ & $0$ & $\frac {1}{x}$\\\hline \end{tabular} \label{TypeIU1W}% \end{equation} The presence of the singlet does not change the PQ charges of the fermions, still given as in Eq.~(\ref{PQseesaw}). Those charges also correspond to Eq.~(\ref{PQFinalCharge}) with the specific values of $\beta$ quoted in the last column. \paragraph{$\nu$DFSZ scenario:} The presence of the singlet opens a new path to generate neutrino masses. The $SU(5)$ version of the $\nu$DFSZ model of Ref.~\cite{Clarke:2015bea}, first discussed in Ref.~\cite{Boucenna:2017fna}, replaces the Majorana mass term in Eq.~(\ref{Seesaw}) by the coupling% \begin{equation} \mathcal{L}_{\nu}=-\frac{1}{2}\bar{\psi}_{\mathbf{1}}^{c}% \mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}}+(\bar{\psi}_{\mathbf{\bar{5}% }}^{c})_{A}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}(h_{i,\mathbf{5}}% )^{A}+h.c.\;. \label{LagrnuDFSZ} \end{equation} Equivalently, $\phi_{\mathbf{1}}^{\dagger}$ could occur for the first coupling. 
Altogether, there are thus four possible realizations of the $U(1)_{W}$ symmetry. Normalizing $W(h_{1,\mathbf{5}})\equiv1$, the other fields have the charges
\begin{equation}
\begin{tabular}[c]{ccccccccc}\hline
\multicolumn{2}{c}{$U(1)_{W}$} & $\phi_{\mathbf{1}}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$ & $\psi_{\mathbf{1}}$ & $\beta$\\\hline
$\ \bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}h_{1,\mathbf{5}}\ $ & $\bar{\psi}_{\mathbf{1}}^{c}\mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}}$ & $1$ & $1$ & $-1$ & $-1/2$ & $-1/2$ & $-1/2$ & $\frac{1}{4x}-\frac{3x}{4}$\\
$\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}h_{2,\mathbf{5}}$ & $\bar{\psi}_{\mathbf{1}}^{c}\mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}}$ & $5/9$ & $1$ & $-1/9$ & $-1/2$ & $7/18$ & $-5/18$ & $\frac{5}{4x}+\frac{x}{4}$\\
$\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}h_{1,\mathbf{5}}$ & $\bar{\psi}_{\mathbf{1}}^{c}\mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}}^{\dagger}$ & $5/3$ & $1$ & $-7/3$ & $-1/2$ & $-11/6$ & $5/6$ & $-\frac{1}{4x}-\frac{5x}{4}$\\
$\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}h_{2,\mathbf{5}}$ & $\bar{\psi}_{\mathbf{1}}^{c}\mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}}^{\dagger}$ & $5/7$ & $1$ & $-3/7$ & $-1/2$ & $1/14$ & $5/14$ & $\frac{3}{4x}-\frac{x}{4}$\\\hline
\end{tabular}
\label{TablenuDFSZ}
\end{equation}
Interestingly, the first realization, which is that of Ref.~\cite{Boucenna:2017fna}, is the only one encountered up to now for which $U(1)_{W}$ could be compatible with an underlying $SO(10)$ structure, in which all the fermions are embedded into a $\mathbf{16}$ representation of definite $U(1)_{W}$ charge $-1/2$. At the electroweak scale, the PQ charges of the scalar doublets in $h_{1,\mathbf{5}}$ and $h_{2,\mathbf{5}}$ are still fixed to $x$ and $-1/x$, respectively.
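As a consistency check of these assignments (with the barred conjugate fields contributing the listed charges), one can verify that every coupling is neutral. In the first realization, for instance, the $\lambda_{12}$ coupling of Eq.~(\ref{SU5DFSZ}) and the two couplings of Eq.~(\ref{LagrnuDFSZ}) carry the $U(1)_{W}$ charges
\begin{align}
\lambda_{12}\,\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}} & :2W(\phi_{\mathbf{1}})-W(h_{1,\mathbf{5}})+W(h_{2,\mathbf{5}})=2-1-1=0\ ,\\
\bar{\psi}_{\mathbf{\bar{5}}}^{c}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}}h_{1,\mathbf{5}} & :W(\psi_{\mathbf{\bar{5}}})+W(\psi_{\mathbf{1}})+W(h_{1,\mathbf{5}})=-\frac{1}{2}-\frac{1}{2}+1=0\ ,\\
\bar{\psi}_{\mathbf{1}}^{c}\mathbf{Y}_{M}\psi_{\mathbf{1}}\phi_{\mathbf{1}} & :2W(\psi_{\mathbf{1}})+W(\phi_{\mathbf{1}})=-1+1=0\ ,
\end{align}
and similarly for the $SU(5)$ Yukawa couplings involving $\chi_{\mathbf{10}}$.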
For each realization in Eq.~(\ref{TablenuDFSZ}), the PQ symmetry thus corresponds to a specific linear combination of $W$ and $Y$, see Eq.~(\ref{YWcharges}). Once applied to the SM fermions, the PQ charges are unambiguously fixed and correspond to Eq.~(\ref{PQFinalCharge}) with $\beta$ fixed to the values quoted in Eq.~(\ref{TablenuDFSZ}). Note that these charges are quite complicated, with, for example, the first realization predicting
\begin{equation}
PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R},\nu_{R})=\left( -\frac{x}{12}-\frac{5}{12x},\frac{11x}{12}-\frac{5}{12x},\frac{7}{12x}-\frac{x}{12},\frac{1}{4x}-\frac{3x}{4},\frac{5}{4x}-\frac{3x}{4},\frac{1}{4x}+\frac{x}{4}\right) \ .
\end{equation}
The fact that the PQ symmetry necessarily arises in part from $U(1)_{Y}$ has completely obscured the underlying $SU(5)$ or $SO(10)$ pattern.
\paragraph{Type II seesaw scenario:}
As a final alternative, let us discuss the $SU(5)$ version of the type II seesaw mechanism~\cite{Dorsner:2005fq}. Instead of right-handed neutrinos, a scalar $\Delta_{\mathbf{15}}$ in the $\mathbf{15}$ representation is added, whose branching rule contains an electroweak triplet, $\mathbf{15}\supset(\mathbf{1}\otimes\mathbf{3})_{2}$. This state couples to fermions via
\begin{equation}
\mathcal{L}_{\nu}=-(\bar{\psi}_{\mathbf{\bar{5}}}^{c})_{A}\mathbf{Y}_{\nu}(\psi_{\mathbf{\bar{5}}})_{B}(\Delta_{\mathbf{15}})^{AB}\ ,
\end{equation}
where $(\Delta_{\mathbf{15}})^{AB}=(\Delta_{\mathbf{15}})^{BA}$.
In parallel, to entangle the charges of the scalar states, we add to the scalar potential
\begin{align}
V(\Delta_{\mathbf{15}}) & =\langle\Delta_{\mathbf{15}}^{\dagger}\Delta_{\mathbf{15}}\rangle\left( \mu_{\Delta}^{2}+\beta_{1}\phi_{\mathbf{1}}^{\dagger}\phi_{\mathbf{1}}+\beta_{2}\langle\mathbf{H}_{\mathbf{24}}^{2}\rangle+\beta_{3}h_{1,\mathbf{5}}^{\dagger}h_{1,\mathbf{5}}+\beta_{4}h_{2,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}\right) \nonumber\\
& +\lambda_{15}\langle\Delta_{\mathbf{15}}^{\dagger}\Delta_{\mathbf{15}}\rangle^{2}+\lambda_{15}^{\prime}\langle\Delta_{\mathbf{15}}^{\dagger}\Delta_{\mathbf{15}}\Delta_{\mathbf{15}}^{\dagger}\Delta_{\mathbf{15}}\rangle\nonumber\\
& -\left[ \lambda_{\nu1}\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}+\lambda_{\nu2}\phi_{\mathbf{1}}h_{1,\mathbf{5}}h_{2,\mathbf{5}}\Delta_{\mathbf{15}}^{\dagger}+\mu_{\Delta}\lambda_{\nu3}h_{1,\mathbf{5}}h_{2,\mathbf{5}}\Delta_{\mathbf{15}}^{\dagger}+h.c.\right] \ .
\end{align}
Because of the last two couplings in the last line, which become $\Delta_{\mathbf{15}}$ tadpoles after the symmetry breaking, that field develops a small VEV even when the terms in the first line sum up to a large and positive mass term. When $\beta_{2}\sim\mathcal{O}(1)$ and $v_{24}>v_{s},\mu_{\Delta}$,
\begin{equation}
v_{15}\sim\frac{v_{1}v_{2}}{\beta_{2}v_{24}^{2}}\left( v_{s}\lambda_{\nu2}+\mu_{\Delta}\lambda_{\nu3}\right) \ .
\end{equation}
As detailed in Ref.~\cite{Quevillon:2020hmx}, if the three $\lambda_{\nu i}$ couplings are present, there is no global symmetry and no axion. If $\lambda_{\nu1}=0$, then there is a global symmetry but $\phi_{\mathbf{1}}$ is neutral and decouples. This means that the axion is embedded into $h_{1,\mathbf{5}}$ and $h_{2,\mathbf{5}}$, with a small $\Delta_{\mathbf{15}}$ component, and couples too strongly to SM particles.
The two viable DFSZ-like scenarios are those with either $\lambda_{\nu2}=0$ or $\lambda_{\nu3}=0$, and the latter scenario further exists in two incarnations depending on whether $\phi_{\mathbf{1}}$ or $\phi_{\mathbf{1}}^{\dagger}$ occurs in the $\lambda_{\nu2}$ coupling. For those three scenarios, there is enough room for a global $U(1)_{W}$ symmetry, with charges
\begin{equation}
\begin{tabular}[c]{ccccccccc}\hline
\multicolumn{2}{c}{$U(1)_{W}$} & $\phi_{\mathbf{1}}$ & $\Delta_{\mathbf{15}}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$ & $\beta$\\\hline
$\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}$ & $\phi_{\mathbf{1}}h_{1,\mathbf{5}}h_{2,\mathbf{5}}\Delta_{\mathbf{15}}^{\dagger}$ & $1$ & $1$ & $1$ & $-1$ & $-1/2$ & $-1/2$ & $\frac{1}{4x}-\frac{3x}{4}$\\
$\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}\ $ & $\phi_{\mathbf{1}}^{\dagger}h_{1,\mathbf{5}}h_{2,\mathbf{5}}\Delta_{\mathbf{15}}^{\dagger}$ & $5/7$ & $-1/7$ & $1$ & $-3/7$ & $-1/2$ & $1/14$ & $\frac{3}{4x}-\frac{x}{4}$\\
$\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}$ & $h_{1,\mathbf{5}}h_{2,\mathbf{5}}\Delta_{\mathbf{15}}^{\dagger}$ & $5/6$ & $1/3$ & $1$ & $-2/3$ & $-1/2$ & $-1/6$ & $\frac{1}{2x}-\frac{x}{2}$\\\hline
\end{tabular}
\label{TypeIIU1W}
\end{equation}
From this, the PQ charges are trickier to find because part of $\operatorname{Im}\Delta_{\mathbf{15}}^{55}$ is eaten by the $Z^{0}$ boson, and the axion has to be orthogonal to that state. However, we showed in Ref.~\cite{Quevillon:2020hmx} that to leading order in $v_{15}/v_{5}$, the PQ charges of $h_{1,\mathbf{5}}$ and $h_{2,\mathbf{5}}$ do remain equal to $x$ and $-1/x$. With this information, the pattern of fermion charges is found to match again Eq.~(\ref{PQFinalCharge}), up to negligible $\mathcal{O}(v_{15}/v_{5})$ corrections, with $\beta$ given in the last column of Eq.~(\ref{TypeIIU1W}).
Note that the corresponding PQ charges of the weak triplet in $\Delta_{\mathbf{15}}$ for the three scenarios are $3x/2-1/2x$, $x/2-3/2x$, and $x-1/x$, respectively, in agreement with Ref.~\cite{Quevillon:2020hmx}.
\subsubsection{The adjoint DFSZ model}
In the previous section, the DFSZ extension was performed with the help of a complex singlet field, whose presence has no other motivation than moving the axion scale up to the GUT scale. A somewhat simpler model, first proposed in Ref.~\cite{Wise:1981ry}, is obtained by instead using the scalar fields already present at the GUT scale, that is, the $\mathbf{H}_{\mathbf{24}}$. Indeed, by making this field complex, the $Z_{2}$ symmetry of the minimal model is extended to a continuous global $U(1)_{24}$ symmetry, corresponding to the phase redefinitions $\mathbf{H}_{\mathbf{24}}\rightarrow\exp(i\alpha)\mathbf{H}_{\mathbf{24}}$. When $\mathbf{H}_{\mathbf{24}}$ acquires its VEV, the $U(1)_{24}$ symmetry is spontaneously broken at the GUT scale, and an additional Goldstone boson arises. For this field to be the axion, we must force the $U(1)_{24}$ symmetry to be anomalous, which requires fermions to transform non-trivially under $U(1)_{24}$. Since $\mathbf{H}_{\mathbf{24}}$ does not directly couple to fermions, this has to proceed through charging the Higgs fiveplets under $U(1)_{24}$ first. It is at this stage that two fiveplets are required. Indeed, putting back the $SU(5)$ indices, the $\mathbf{H}_{\mathbf{24}}$ can only communicate its charge via $(\mathbf{H}_{\mathbf{24}})_{B}^{A}$ or $(\mathbf{H}_{\mathbf{24}})_{C}^{A}(\mathbf{H}_{\mathbf{24}})_{B}^{C}$. With only one fiveplet, contracting the indices necessarily requires an equal number of $(h_{\mathbf{5}})^{B}$ and $(h_{\mathbf{5}}^{\dagger})_{A}$, and these couplings end up forbidden by the $U(1)_{24}$ symmetry.
With two fiveplets, on the other hand, the contractions% \begin{equation} V_{scalar}(h_{1,\mathbf{5}},h_{2,\mathbf{5}},\mathbf{H}_{\mathbf{24}}% )\supset\gamma_{1}(h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}})\langle \mathbf{H}_{\mathbf{24}}\mathbf{H}_{\mathbf{24}}\rangle+\gamma_{2}% (h_{1,\mathbf{5}}^{\dagger}\mathbf{H}_{\mathbf{24}}\mathbf{H}_{\mathbf{24}% }h_{2,\mathbf{5}})+h.c.\ ,\label{SU5coupl}% \end{equation} are allowed provided $h_{1,\mathbf{5}}$ and $h_{2,\mathbf{5}}$ have different $U(1)_{24}$ charges. So, the symmetry of this model is again $U(1)_{1}\otimes U(1)_{2}\sim U(1)_{X}\otimes U(1)_{24}$, with $\mathbf{H}_{\mathbf{24}}$ charged under both $U(1)_{1}$ and $U(1)_{2}$: \begin{equation}% \begin{tabular} [c]{cccccc}\hline & $\mathbf{H}_{\mathbf{24}}$ & $h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$\\\hline $\ \ U(1)_{1}\ \ $ & $1/2$ & $1$ & $0$ & $-1/2$ & $1/2$\\ $\ \ U(1)_{2}\ \ $ & $-1/2$ & $0$ & $1$ & $0$ & $1$\\\hline \end{tabular} \label{H24sym}% \end{equation} At leading order in $v_{5}/v_{24}$, the axion is entirely embedded in $\mathbf{H}_{\mathbf{24}}$ and does not couple to fermions. Indeed, at the EW level, there is no remaining symmetry beyond the local SM gauge symmetries, hence no new Goldstone boson (besides that eaten by the $Z^{0}$). This is evident since $(h_{1,\mathbf{5}}^{\dagger})_{A}(\mathbf{v}_{\mathbf{24}})_{C}^{A}(\mathbf{v}_{\mathbf{24}})_{B}^{C}(h_{2,\mathbf{5}})^{B}$ breaks the $U(1)_{1}\otimes U(1)_{2}$ symmetry explicitly. But, as in the DFSZ model, this picture gets modified at $\mathcal{O}(v_{5}/v_{24})$, once both breaking stages are combined, since the coupling Eq.~(\ref{SU5coupl}) is also an explicit $U(1)_{24}$ breaking term when $h_{i,\mathbf{5}}$ acquire their vacuum expectation values. 
Thus, the true Goldstone boson arising at the low scale is a combination of the pseudoscalar $\operatorname{Im}\mathbf{H}_{\mathbf{24}}^{0}$ and $\operatorname{Im}h_{i,\mathbf{5}}^{0}$, with the latter components suppressed by $v_{i}/v_{24}$. The most general scalar potential involves many couplings because $\mathbf{H}_{\mathbf{24}}^{\dagger}$ and $\mathbf{H}_{\mathbf{24}}$ transform in the same way, and will not be written down explicitly. Let us assume it supports the breaking chain of Eq.~(\ref{SSB5}). To identify the pseudoscalar states, the simplest way is to use the polar representations
\begin{equation}
\mathbf{H}_{\mathbf{24}}=\frac{1}{\sqrt{2}}\exp(i\eta_{24}/v_{24})\mathbf{v}_{\mathbf{24}}+...\ ,\ h_{i,\mathbf{5}}=\frac{1}{\sqrt{2}}\exp(i\eta_{i}/v_{i})\mathbf{v}_{i,\mathbf{5}}+...\ ,
\end{equation}
with $\mathbf{v}_{\mathbf{24}}=v_{24}\operatorname{diag}(1,1,1,-(3+\varepsilon)/2,-(3-\varepsilon)/2)$ and $\mathbf{v}_{i,\mathbf{5}}=(0,0,0,0,v_{i})^{T}$, and we have not written the other states explicitly. Plugging this in the potential, and restricting to the $\eta_{i}$ fields, only the couplings in Eq.~(\ref{SU5coupl}) survive because all the rest is invariant under $U(1)_{1}\otimes U(1)_{2}\otimes U(1)_{24}$, and the $\eta_{i}$ fields cancel out. Thus,
\begin{equation}
V_{scalar}(\eta_{1},\eta_{2},\eta_{24})=\frac{1}{8}v_{1}v_{2}v_{24}^{2}\left( 2\gamma_{1}(15+\varepsilon^{2})+\gamma_{2}(3-\varepsilon)^{2}\right) \cos\left( \frac{\eta_{1}}{v_{1}}-\frac{\eta_{2}}{v_{2}}-\frac{2\eta_{24}}{v_{24}}\right) \ .
\end{equation}
Apart from the prefactor, the cosine dependence on the pseudoscalars is exactly the same as in the DFSZ model. Thus, the mixing matrix is that in Eq.~(\ref{DFSZmixing}), with $v_{s}\rightarrow v_{24}$, and the pattern of PQ charge is not modified.
For instance, scalars have the charges
\begin{equation}
PQ(\Phi_{1}\subset h_{1,\mathbf{5}},\Phi_{2}\subset h_{2,\mathbf{5}},\phi\subset\mathbf{H}_{\mathbf{24}})=\left( x\ ,\ -\frac{1}{x}\ ,\ \frac{1}{2}\left( x+\frac{1}{x}\right) \right) \ \ , \label{PQaDFSZ}
\end{equation}
where $\phi$ is the extra electroweak singlet arising when $\mathbf{H}_{\mathbf{24}}$ is made complex. With this, the SM fermions retain their PQ charges as given in Eq.~(\ref{PQFinalCharge}), including the one-parameter ambiguity. Note that if we assign to the singlet in $\mathbf{H}_{\mathbf{24}}$ twice the above PQ charge by replacing the $\gamma_{1,2}$ couplings of Eq.~(\ref{SU5coupl}) by $\gamma(h_{1,\mathbf{5}}^{\dagger}\mathbf{H}_{\mathbf{24}}h_{2,\mathbf{5}})$, the PQ charges of the doublets and SM fermions stay the same. Concerning neutrinos, the type I seesaw (with right-handed neutrinos $\psi_{\mathbf{1}}$) and the type II seesaw (with the scalar $\Delta_{\mathbf{15}}$) can again be constructed, with the $U(1)_{W}$ symmetry identified as in Eqs.~(\ref{TypeIU1W}) and~(\ref{TypeIIU1W}), with $\phi_{\mathbf{1}}\rightarrow\mathbf{H}_{\mathbf{24}}$. The final PQ charges obviously stay the same.\footnote{In this respect, it should be mentioned that the adjoint DFSZ model, with and without the type I seesaw, has been presented in Ref.~\cite{FileviezPerez:2019ssf}, but the precise identification of the symmetry breaking chain, including the entanglement with $U(1)_{\mathcal{B}-\mathcal{L}}$, and the actual PQ charges were not discussed there.} The only difference is that no equivalent of the $\nu$DFSZ model exists with an adjoint field, since there is no way to couple $\mathbf{H}_{\mathbf{24}}$ directly to a pair of right-handed neutrinos.
\section{Flipped SU(5) axion models\label{sec4}}
In the minimal $SU(5)$ model, $\mathcal{B}-\mathcal{L}$ emerges from the global $U(1)_{X}$ symmetry.
Because a global $U(1)$ is active throughout the SSB chain, we ended up with a one-parameter freedom in the definition of the PQ charges of the fermions. This freedom was then used to make the PQ solution compatible with various seesaw mechanisms, which explicitly break the global symmetry. Since the $U(1)_{X}$ symmetry is not anomalous, nothing prevents it from becoming local. This opens an alternative realization of $SU(5)$ because the SM hypercharge $U(1)_{Y}$ need not be entirely generated by the generator $T_{24}$ of $SU(5)$. Instead, it could emerge as an unbroken linear combination of $U(1)_{X}$ and $U(1)_{Y^{\prime}}\subset SU(5)$. This is called the flipped $SU(5)$ model~\cite{Barr:1981qv,Derendinger:1983aj,Antoniadis:1987dx,Ellis:1988tx}. Because the $X$ symmetry, and therefore also $\mathcal{B}-\mathcal{L}$, is realized differently, our goal is to analyze the consequences on the PQ or DFSZ mechanism, once embedded in this framework.
\subsection{Brief overview of the flipped SU(5) model}
Before entering into the discussion of axionic aspects, let us briefly summarize the construction of the flipped model. Starting with $Y=\alpha Y^{\prime}+\beta X$, and requiring that the fermion charges sum up to the appropriate $Q=T^{3}+Y/2$, there are only two solutions for $\alpha$ and $\beta$: the minimal model, with $\alpha=1$, $\beta=0$, and the flipped one, with $\alpha=-1/5$ and $\beta=2/5$.
In that latter case, the fermions are embedded into $U(1)_{X}\otimes SU(5)$ multiplets as
\begin{subequations}
\label{FlippedF}
\begin{align}
(\mathbf{1})_{5} & =(\mathbf{1}\otimes\mathbf{1})_{2}\Rightarrow\chi_{\mathbf{1}}=e_{R}^{-c}\ ,\\
(\mathbf{5}^{\ast})_{-3} & =(\mathbf{\bar{3}}\otimes\mathbf{1})_{-4/3}\oplus(\mathbf{1}\otimes\mathbf{2})_{-1}\Rightarrow\psi_{\mathbf{\bar{5}}}=u_{R}^{c}\oplus\ell_{L}\ ,\\
(\mathbf{10})_{1} & =(\mathbf{\bar{3}}\otimes\mathbf{1})_{2/3}\oplus(\mathbf{3}\otimes\mathbf{2})_{1/3}\oplus(\mathbf{1}\otimes\mathbf{1})_{0}\Rightarrow\chi_{\mathbf{10}}=d_{R}^{c}\oplus q_{L}\oplus\nu_{R}^{c}\ .
\end{align}
\end{subequations}
Altogether, the fermion representations are thus obtained from those of the minimal model by flipping the right-handed components, $e_{R}^{-c}\leftrightarrow\nu_{R}^{c}$ and $u_{R}^{c}\leftrightarrow d_{R}^{c}$, and leaving the left-handed components in place. The $X$ charges are fixed up to an overall normalization. To parametrize this freedom, we write the $SU(5)\otimes U(1)_{X}$ covariant derivative as
\begin{equation}
D^{\mu}=\partial^{\mu}+i\sqrt{\frac{3}{5}}g_{5}\frac{Y^{\prime}}{2}B^{\prime\mu}+ig_{X}\frac{X}{2}X^{\mu}+...\ , \label{CovFlip}
\end{equation}
where the $SU(5)$ gauge boson associated with $U(1)_{Y^{\prime}}$ is denoted $B_{\mu}^{\prime}$, while that of $U(1)_{X}$ by $X_{\mu}$. In the minimal case, $\sqrt{5/3}g^{\prime}=g_{5}=g$, from which $\sin^{2}\theta_{W}=3/8$. In the flipped case, this relation is altered because $U(1)_{Y}$ has two uncorrelated origins and the normalization of $g_{X}$ relative to $g_{5}$ is free. Fixing the $X$ charges as in Eq.~(\ref{FlippedF}), the normalization freedom resurfaces in the coupling constants, and we write $g_{X}=Kg_{5}$ for some constant $K$. The SM field $B^{\mu}$ is a linear combination of $X^{\mu}$ and $B^{\prime\mu}$, say $X_{\mu}=\sin\theta_{X}B_{\mu}^{X}+\cos\theta_{X}B_{\mu}$ and $B_{\mu}^{\prime}=\cos\theta_{X}B_{\mu}^{X}-\sin\theta_{X}B_{\mu}$.
Plugging this in the covariant derivative of Eq.~(\ref{CovFlip}), and requiring that it collapses to $D^{\mu}=\partial^{\mu}+ig^{\prime}(Y/2)B^{\mu}+...$ with $Y=\alpha Y^{\prime}+\beta X$ gives
\begin{equation}
\tan\theta_{X}=\sqrt{5/12}K\ \ ,\ \ \ 15g_{5}^{2}=\left( 1+\frac{12}{5K^{2}}\right) g^{\prime2}\ ,\label{FlipGaugeC2}
\end{equation}
where we have set $\alpha=-1/5$ and $\beta=2/5$. With this, and since $g_{5}=g$ at the GUT scale as $SU(2)_{L}\subset SU(5)$, the prediction for $\sin^{2}\theta_{W}$ is modified to
\begin{equation}
\sin^{2}\theta_{W}=\frac{3}{8}\frac{50K^{2}}{20K^{2}+3}\ .
\end{equation}
Originally, the idea of $SU(5)\otimes U(1)_{X}$ was that it could emerge from $SO(10)$. In that case, $g_{X}=g_{5}=g_{10}$ at the unification scale, but the $X$ charges are not normalized to $X_{\mathbf{10}}=1$, $X_{\mathbf{\bar{5}}}=-3$, and $X_{\mathbf{1}}=5$. To find the correct relative normalization under the assumption that $SO(10)\rightarrow SU(5)\otimes U(1)_{X}$, one can use the fact that both $Y^{\prime2}$ and $X^{2}$ must sum to the same value over a complete $SO(10)$ representation. Since $\mathbf{1}\oplus\mathbf{5}^{\ast}\oplus\mathbf{10}=\mathbf{16}$, and with the $X$ charges of Eq.~(\ref{FlippedF}), we find $K^{2}=1/10$ and $\sin^{2}\theta_{W}=3/8$, precisely as in the minimal model. To achieve $SU(5)\otimes U(1)_{X}\rightarrow SU(3)_{C}\otimes SU(2)_{L}\otimes U(1)_{Y}$, both $SU(5)$ and $U(1)_{X}$ must be broken simultaneously at the GUT scale. This can be achieved only by giving a vacuum expectation value to a state with $Y^{\prime}\neq0$, $X\neq0$ but $Y=0$. The usual $\mathbf{H}_{\mathbf{24}}$ cannot be used since all its states have $Y^{\prime}=0$.
Instead, the simplest representations able to induce a consistent symmetry breaking are a $\mathbf{H}_{\mathbf{10}}$ with $X_{\mathbf{10}}=+1$ or a $\mathbf{H}_{\mathbf{50}}$ with $X_{\mathbf{50}}=-2$, since
\begin{align}
\mathbf{10}_{1} & =(\mathbf{\bar{3}}\otimes\mathbf{1})_{2/3}\oplus
(\mathbf{3}\otimes\mathbf{2})_{1/3}\oplus(\mathbf{1}\otimes\mathbf{1}%
)_{0}\;,\\
\mathbf{50}_{-2} & =(\mathbf{8}\otimes\mathbf{2})_{-1}\oplus(\mathbf{\bar{6}%
}\otimes\mathbf{3})_{-2/3}\oplus(\mathbf{6}\otimes\mathbf{1})_{-4/3}%
\nonumber\\
& \ \ \ \ \oplus(\mathbf{\bar{3}}\otimes\mathbf{2})_{-1/3}\oplus
(\mathbf{3}\otimes\mathbf{1})_{-2/3}\oplus(\mathbf{1}\otimes\mathbf{1})_{0}\;.
\end{align}
The symmetry breaking pattern is the same in both cases, the only difference being $M_{B_{X}}^{2}/M_{X,Y}^{2}=K^{2}+12/5$ when using the $\mathbf{H}_{\mathbf{10}}$, and twice that using the $\mathbf{H}_{\mathbf{50}}$. For the electroweak symmetry breaking, if only one multiplet is introduced, it must transform as a fiveplet with charge $X_{\mathbf{5}}=-2$. Indeed, once $e_{R}$ is a singlet, lepton masses can only come from a term $\bar{\psi}_{\mathbf{\bar{5}}}^{c}\psi_{\mathbf{1}}h$, thus $h$ must transform as a fiveplet. The states are
\begin{equation}
(\mathbf{5})_{-2}=(\mathbf{3}\otimes\mathbf{1})_{-2/3}\oplus(\mathbf{1}%
\otimes\mathbf{2})_{-1}\Rightarrow h_{\mathbf{5}}=h_{i}^{\ast}\oplus\left(
\begin{array}
[c]{c}%
h^{0\ast}\\
-h^{-}%
\end{array}
\right) \ .
\end{equation}
Notice that compared to the fermions in $\psi_{\mathbf{\bar{5}}}$, the charge-conjugate $SU(2)_{L}$ spinor appears, and the colored $h_{i}^{\ast}$ states end up being the same as in the minimal $SU(5)$ model.
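The decompositions above can be checked mechanically (a standalone sketch with our own labels): the component dimensions must sum to the dimension of the multiplet, and since $\mathrm{Tr}\,Y^{\prime}=0$ over a complete $SU(5)$ representation, $Y=-Y^{\prime}/5+2X/5$ forces the hypercharges to trace to $(2/5)\,X\dim$:

```python
from fractions import Fraction as F

# Components of H_10 (X=+1) and H_50 (X=-2): (SU(3) dim * SU(2) dim, hypercharge Y)
H10 = [(3 * 1, F(2, 3)), (3 * 2, F(1, 3)), (1 * 1, F(0))]
H50 = [(8 * 2, F(-1)), (6 * 3, F(-2, 3)), (6 * 1, F(-4, 3)),
       (3 * 2, F(-1, 3)), (3 * 1, F(-2, 3)), (1 * 1, F(0))]

dim10 = sum(d for d, _ in H10)
dim50 = sum(d for d, _ in H50)

# Tr Y = (2/5) * X * dim over each multiplet, since Tr Y' vanishes.
trY10 = sum(d * y for d, y in H10)   # expect (2/5)*(+1)*10 = 4
trY50 = sum(d * y for d, y in H50)   # expect (2/5)*(-2)*50 = -40
```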
The most general potential involving $h_{\mathbf{5}}$ and $\mathbf{H}_{\mathbf{10}}$ is%
\begin{align}
V(h_{\mathbf{5}},\mathbf{H}_{\mathbf{10}}) & =-\frac{\mu^{2}}{2}%
\langle\mathbf{H}_{\mathbf{10}}^{\dagger}\mathbf{H}_{\mathbf{10}}\rangle
+\frac{a}{4}\langle\mathbf{H}_{\mathbf{10}}^{\dagger}\mathbf{H}_{\mathbf{10}%
}\rangle^{2}+\frac{b}{2}\langle\mathbf{H}_{\mathbf{10}}^{\dagger}%
\mathbf{H}_{\mathbf{10}}\mathbf{H}_{\mathbf{10}}^{\dagger}\mathbf{H}%
_{\mathbf{10}}\rangle\nonumber\\
& \ \ \ \ -\frac{\nu^{2}}{2}(h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}}%
)+\frac{\lambda}{4}(h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}})^{2}+\alpha
(h_{\mathbf{5}}^{\dagger}h_{\mathbf{5}})\langle\mathbf{H}_{\mathbf{10}%
}^{\dagger}\mathbf{H}_{\mathbf{10}}\rangle+\beta h_{\mathbf{5}}^{\dagger
}\mathbf{H}_{\mathbf{10}}^{\dagger}\mathbf{H}_{\mathbf{10}}h_{\mathbf{5}%
}\nonumber\\
& \ \ \ \ +\left( \gamma\varepsilon_{ABCDE}\mathbf{H}_{\mathbf{10}}%
^{AB}\mathbf{H}_{\mathbf{10}}^{CD}h_{\mathbf{5}}^{E}+h.c.\right)
\;.\label{PotH10H10h5}%
\end{align}
We will not analyze this potential in detail, but comment on a few salient features. First, in a combined treatment of the GUT and electroweak symmetry breaking, there is no modification of $\langle0|\mathbf{H}_{\mathbf{10}}|0\rangle$ since it transforms as an electroweak singlet. Second, the antisymmetric $\gamma$ coupling does not play any role in the symmetry breaking, and vanishes at the minimum. However, it directly couples the colored components $(\mathbf{3}\otimes\mathbf{1})_{-2/3}\subset h_{\mathbf{5}}$ to those of the $\mathbf{H}_{\mathbf{10}}$. With $\gamma$ of $\mathcal{O}(1)$, these fields naturally end up at the GUT scale. This alleviates the doublet-triplet problem of the minimal model, since there is here no need to fine-tune the $\alpha$ and $\beta$ couplings in Eq.~(\ref{PotH10H10h5}) to make the $h_{i}^{\ast}$ heavy enough while maintaining $\langle0|h_{\mathbf{5}}|0\rangle$ at the electroweak scale.
The situation with $\mathbf{H}_{\mathbf{50}}$ is similar, though with a quartic term $\varepsilon^{ABGHI}(\mathbf{H}_{\mathbf{50}})_{ABCD}(\mathbf{H}_{\mathbf{50}}^{\dagger})^{CDEF}(\mathbf{H}_{\mathbf{50}})_{EFGH}(h_{\mathbf{5}}^{\dagger})_{I}$ in place of the cubic $\varepsilon_{ABCDE}\mathbf{H}_{\mathbf{10}}^{AB}\mathbf{H}_{\mathbf{10}}^{CD}h_{\mathbf{5}}^{E}$ interaction. Given the $U(1)_{X}$ charges of the fermions and of the electroweak Higgs boson $h_{\mathbf{5}}$, the possible Yukawa couplings are the same as in the minimal model: \begin{align} \mathcal{L}_{\text{Yukawa}} & =-\frac{1}{4}\varepsilon_{ABCDE}(\bar{\chi }_{\mathbf{10}}^{c})^{AB}\mathbf{Y}_{10}(\chi_{\mathbf{10}}% )^{CD}h_{\mathbf{5}}^{E}+\sqrt{2}(\bar{\psi}_{\mathbf{\bar{5}}}^{c% })_{A}\mathbf{Y}_{5}(\chi_{\mathbf{10}})^{AB}(h_{\mathbf{5}}^{\dagger}% )_{B}\nonumber\\ & \ \ \ \ +(\bar{\psi}_{\mathbf{\bar{5}}}^{c})_{A}\mathbf{Y}_{1}^{T}% \psi_{\mathbf{1}}(h_{\mathbf{5}})^{A}+h.c.\;.\label{YukFlip}% \end{align} But since fermions are flipped, the mass relation of the minimal model is replaced by% \begin{equation} \mathbf{Y}_{d}=\mathbf{Y}_{10}=\mathbf{Y}_{10}^{T}\;,\;\;\mathbf{Y}% _{u}=\mathbf{Y}_{\nu}^{T}=\mathbf{Y}_{5}\;,\ \ \mathbf{Y}_{e}=\mathbf{Y}% _{1}\ .\label{MassFlip}% \end{equation} The annoying relationship between $m_{d}$ and $m_{e}$ is relaxed, but at the cost of a totally inconsistent neutrino sector. Contrary to the situation in the minimal model, neutrino masses must be addressed for the flipped construction to make sense. Actually, all the necessary ingredients are already present to automatically solve the neutrino problem. As in the minimal model, the gauge dynamics breaks $\mathcal{B}+\mathcal{L}$. However, since $U(1)_{X}$ is also spontaneously broken, neither $\mathcal{B}$ nor $\mathcal{L}$ survives below the GUT scale. 
Looking back at Eq.~(\ref{FlippedF}), it is evident that the state $(\mathbf{1}\otimes\mathbf{1})_{0}\subset\mathbf{H}_{\mathbf{10}}$ developing a vacuum expectation value carries $\mathcal{L}=1$. Similarly, the state $(\mathbf{1}\otimes\mathbf{1})_{0}\subset\mathbf{H}_{\mathbf{50}}$ carries $\mathcal{L}=-2$, as can be guessed from $\mathbf{10}_{-1}^{\ast}\otimes\mathbf{10}_{-1}^{\ast}=\mathbf{50}_{-2}\oplus\mathbf{45}_{-2}\oplus\mathbf{5}_{-2}$. Thus, the symmetry breaking at the GUT scale can induce $\Delta\mathcal{L}$ effects, and in particular, it can be used to generate a Majorana mass for the neutrinos. With the $\mathbf{H}_{\mathbf{50}}$, this is trivial to achieve since we should actually include in $\mathcal{L}_{\text{Yukawa}}$ the coupling%
\begin{equation}
\frac{1}{2}(\bar{\chi}_{\mathbf{10}}^{c})^{AB}Y_{\mathbf{50}}%
(\chi_{\mathbf{10}})^{CD}(\mathbf{H}_{\mathbf{50}})_{AB,CD}\rightarrow
-\frac{v_{50}}{\sqrt{2}}\bar{\nu}_{R}^{c}\nu_{R}\ .\label{Majo50}%
\end{equation}
From there, the seesaw mechanism proceeds as usual, and neutrino masses end up being proportional to%
\begin{equation}
m_{\nu}=\frac{v_{5}^{2}}{v_{50}}\mathbf{Y}_{u}^{T}(\mathbf{Y}_{\mathbf{50}%
})^{-1}\mathbf{Y}_{u}\ .
\end{equation}
Note that $v_{5}^{2}/v_{50}\sim10^{-2}~$eV when $v_{50}$ is of $\mathcal{O}(10^{16}$ GeV$)$, thus $\mathbf{Y}_{\mathbf{50}}$ needs to be somewhat suppressed to ensure one neutrino state is heavy enough to account for the atmospheric mass splitting $\Delta m_{atm}^{2}\approx2.5\times10^{-3}\,$eV$^{2}$.
\begin{figure}[t]
\centering\includegraphics[width=0.32\textwidth]{Witten.jpg}
\caption{The mechanism of Ref.~\cite{Witten:1979nr} adapted to the flipped $SU(5)$ model~\cite{Rodriguez:2013rma}, leading to a GUT-scale Majorana mass term for $\nu_R$ when $\mathbf{H}_{\mathbf{10}}$ acquires its VEV, see Eqs.~(\ref{MajoH10}) and~(\ref{MajoH10b}).
This diagram exists thanks to the cubic interaction $\gamma\mathbf{H}_{\mathbf{10}}\mathbf{H}_{\mathbf{10}}h_{\mathbf{5}}$, which can thus be interpreted as the coupling through which the $\mathcal{B}-\mathcal{L}$ symmetry is explicitly broken.}
\label{Fig3}%
\end{figure}
With the $\mathbf{H}_{\mathbf{10}}$, the situation is a bit more complicated. Naively, we would like to replace $\mathbf{H}_{\mathbf{50}}$ in Eq.~(\ref{Majo50}) by $(\mathbf{H}_{\mathbf{10}}^{\dagger})^{2}$, which contains a component transforming as $\mathbf{50}$. But since $\mathbf{H}_{\mathbf{10}}$ does not couple to fermions at the renormalizable level, we cannot couple $\chi_{\mathbf{10}}\chi_{\mathbf{10}}\rightarrow\mathbf{H}_{\mathbf{10}}\mathbf{H}_{\mathbf{10}}$ directly. At an intermediate stage, a combination of fields coupled to $\chi_{\mathbf{10}}$ and transforming as a $\mathbf{50}$ is needed. By inspection, the simplest such combination is $\varepsilon_{VWABZ}h_{\mathbf{5}}^{Z}\mathbf{A}_{C}^{B}\mathbf{A}_{F}^{A}$, which can then be coupled to $\mathbf{H}_{\mathbf{10}}^{2}$ thanks to the cubic $\varepsilon_{ABCDE}\mathbf{H}_{\mathbf{10}}^{AB}\mathbf{H}_{\mathbf{10}}^{CD}h_{\mathbf{5}}^{E}$ interaction. Altogether, the two-loop process $\chi_{\mathbf{10}}\chi_{\mathbf{10}}\rightarrow\mathbf{A}_{\mathbf{24}}\mathbf{A}_{\mathbf{24}}h_{\mathbf{5}}^{\dagger}\rightarrow\mathbf{H}_{\mathbf{10}}\mathbf{H}_{\mathbf{10}}$ can thus proceed, see Fig.~\ref{Fig3}, and generates the effective action term%
\begin{equation}
\mathcal{L}_{\dim5}^{eff}=\frac{C_{\dim5}}{v_{10}}(\bar{\chi}_{\mathbf{10}%
}^{c})^{AB}(\mathbf{H}_{\mathbf{10}}^{\dagger})_{AB}(\mathbf{H}_{\mathbf{10}%
}^{\dagger})_{CD}(\chi_{\mathbf{10}})^{CD}\ ,\ \ C_{\dim5}\sim\frac{\gamma
\mathbf{Y}_{10}g^{4}}{(4\pi)^{4}v_{10}}\ ,\label{MajoH10}%
\end{equation}
with the scale set by the GUT symmetry breaking.
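To get a feel for the scales involved in the two mechanisms, one can plug in illustrative numbers (the inputs $v_{5}\approx246$ GeV, $g\approx0.5$, GUT-scale VEVs of $10^{16}$ GeV and $\gamma\sim v_{10}$ are our assumptions, not fitted values):

```python
import math

GeV = 1.0
eV = 1e-9 * GeV

v5 = 246 * GeV        # electroweak VEV (illustrative)
v50 = 1e16 * GeV      # VEV of H_50 (illustrative)
v10 = 1e16 * GeV      # VEV of H_10 (illustrative)
g = 0.5               # unified gauge coupling (illustrative)

# Tree-level seesaw scale with H_50: m_nu ~ v5^2 / v50 for O(1) Yukawas.
m_nu_tree = v5**2 / v50                    # a few 1e-3 eV

# Two-loop induced Majorana scale with H_10, from C_dim5 above with gamma ~ v10:
# M_R ~ g^4 / (4 pi)^4 * v10, strongly suppressed relative to v10.
M_R_loop = g**4 / (4 * math.pi)**4 * v10   # ~ 1e10 GeV

# The heaviest neutrino must reach the atmospheric splitting.
m_atm = math.sqrt(2.5e-3) * eV             # ~ 0.05 eV
```

The tree-level estimate falls below $\sqrt{\Delta m_{atm}^{2}}$, which is why $\mathbf{Y}_{\mathbf{50}}$ must be somewhat suppressed, while the loop factor pushes the $\mathbf{H}_{\mathbf{10}}$-induced Majorana scale some six orders of magnitude below the GUT scale.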
Once the $\mathbf{H}_{\mathbf{10}}$ acquires its expectation value, we arrive at a Majorana mass term for the right-handed neutrinos%
\begin{equation}
\mathcal{L}_{10}^{eff}\sim\mathcal{O}(1)\times\frac{g^{4}}{(4\pi)^{4}}%
v_{10}\times\bar{\nu}_{R}^{c}\nu_{R} \ . \label{MajoH10b}
\end{equation}
This is the flipped $SU(5)$ version~\cite{Rodriguez:2013rma} of the Witten $SO(10)$ mechanism~\cite{Witten:1979nr}. Overall, the right-handed neutrinos are significantly lighter than with $\mathbf{H}_{\mathbf{50}}$, and $\mathbf{Y}_{10}$ need not be suppressed to account for the measured neutrino mass splittings.

\subsection{PQ-flipped axion model}

There is one particular aspect of the symmetries of the flipped model that is somewhat obscured in the above description, but that will play an important role now. In the absence of the $\gamma$ coupling in the potential of Eq.~(\ref{PotH10H10h5}), the theory actually has a larger symmetry. It is invariant under two separate $U(1)$s, since nothing relates the $U(1)_{X}\rightarrow U(1)^{\mathbf{10}}$ under which $\mathbf{H}_{\mathbf{10}}$ is charged to the $U(1)_{X}\rightarrow U(1)^{\mathbf{5}}$ exhibited by the Yukawa couplings of Eq.~(\ref{YukFlip}). The gauged symmetry is then embedded in the product, $U(1)_{X}\subset U(1)^{\mathbf{10}}\otimes U(1)^{\mathbf{5}}$, exactly as the gauged hypercharge is embedded in the $U(1)_{1}\otimes U(1)_{2}$ symmetry in the THDM, see Eq.~(\ref{PatternTHDM}). Though the symmetry is enlarged when $\gamma=0$, there is no extra Goldstone boson because without $\gamma$, $\mathcal{B}-\mathcal{L}$ emerges as an exact global symmetry at the low scale.\ Indeed, since $\mathbf{H}_{\mathbf{10}}$ is neutral under $U(1)^{\mathbf{5}}$, that part of the symmetry stays unbroken until the electroweak scale, and $U(1)_{\mathcal{B}-\mathcal{L}}$ arises from $U(1)^{\mathbf{5}}\otimes U(1)_{Y}$ after the fiveplet acquires its VEV, exactly as in the minimal $SU(5)$ model.
Note, by the way, that the $\nu _{R}\leftrightarrow e_{R}$ and $u_{R}\leftrightarrow d_{R}$ interchanges are $\mathcal{B}-\mathcal{L}$ invariant, so the dimension-six leptoquark interactions of the flipped model still conserve $\mathcal{B}-\mathcal{L}$. Depending on $\gamma$, the patterns of the $U(1)$ symmetry breaking in the flipped models are thus \begin{equation} \begin{array} [c]{rcc}% \gamma=0: & U(1)^{\mathbf{10}}\otimes U(1)^{\mathbf{5}}\otimes U(1)_{Y}% \rightarrow U(1)_{\mathcal{B}-\mathcal{L}} & :2~\text{Goldstone bosons\ ,}\\ \gamma\neq0: & U(1)_{X}\otimes U(1)_{Y}\rightarrow\varnothing & :2~\text{Goldstone bosons\ .}% \end{array} \end{equation} Note that strictly speaking, $U(1)_{Y}$ is not entirely broken since the unbroken $U(1)_{em}$ emerges from a combination of $U(1)_{Y}$ and $U(1)\subset SU(2)_{L}$, but this is not essential. In both cases, the two Goldstone bosons are the WBG of the $X^{0}$ and $Z^{0}$. So, one should really understand the $\gamma$ coupling as the source of lepton-number violation. When $\gamma\neq 0$, by enforcing $U(1)^{\mathbf{10}}\otimes U(1)^{\mathbf{5}}\rightarrow U(1)_{X}$, the fermionic symmetry from which the $\mathcal{B}-\mathcal{L}$ symmetry would emerge becomes that broken by $\mathbf{H}_{\mathbf{10}}$, and thus never arises. That is the key to induce the $\Delta\mathcal{L}=2$ Majorana mass term dynamically. The same reasoning can be made with the $\mathbf{H}_{\mathbf{50}}$ instead of the $\mathbf{H}_{\mathbf{10}}$. One must introduce either the $\mathbf{H}_{\mathbf{50}}\mathbf{H}_{\mathbf{50}% }^{\dagger}\mathbf{H}_{\mathbf{50}}h_{\mathbf{5}}^{\dagger}$ quartic coupling or directly the Majorana coupling $\bar{\chi}_{\mathbf{10}}^{c% }Y_{\mathbf{50}}\chi_{\mathbf{10}}\mathbf{H}_{\mathbf{50}}$ to force $U(1)^{\mathbf{50}}\otimes U(1)^{\mathbf{5}}\rightarrow U(1)_{X}$, otherwise the $\mathcal{B}-\mathcal{L}$ symmetry again arises at the low scale and the seesaw mechanism is forbidden. 
With the above symmetries in mind, to introduce the axion, we proceed as in the PQ model and add a second Higgs fiveplet:%
\begin{align}
\mathcal{L}_{\text{Yukawa}} & =-\frac{1}{4}\varepsilon_{ABCDE}(\bar{\chi
}_{\mathbf{10}}^{c})^{AB}\mathbf{Y}_{10}(\chi_{\mathbf{10}}%
)^{CD}h_{2,\mathbf{5}}^{E}\\
& +\sqrt{2}(\bar{\psi}_{\mathbf{\bar{5}}}^{c})_{A}\mathbf{Y}_{5}%
(\chi_{\mathbf{10}})^{AB}(h_{1,\mathbf{5}}^{\dagger})_{B}+(\bar{\psi
}_{\mathbf{\bar{5}}}^{c})_{A}\mathbf{Y}_{1}^{T}\psi_{\mathbf{1}%
}(h_{2,\mathbf{5}})^{A}+h.c.\;.
\end{align}
There is some freedom in deciding which fiveplet contributes to each coupling. The above choice, which differs from that of the minimal model in Eq.~(\ref{SU5THDM}), makes it possible to recover the THDM of type II after the $SU(5)$ breaking:%
\begin{equation}
\mathcal{L}_{\text{Yukawa}}\rightarrow-\bar{u}_{R}\mathbf{Y}_{5}q_{L}\Phi
_{1}-\bar{d}_{R}\mathbf{Y}_{10}q_{L}\Phi_{2}^{\dagger}-\bar{e}_{R}%
\mathbf{Y}_{1}\ell_{L}\Phi_{2}^{\dagger}-\bar{\nu}_{R}\mathbf{Y}_{5}^{T}\ell_{L}%
\Phi_{1}+h.c.\ .\label{YukFlippedTHDM}%
\end{equation}
With appropriate charges, these Yukawa couplings are invariant under the independent rephasing of the Higgs fiveplets. Thus, at this level, the $U(1)^{\mathbf{5}}$ symmetry is enlarged into $U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}$.
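The independent-rephasing claim can be made concrete (a standalone sketch; the phase-balance conditions simply track which fields enter each Yukawa term): for arbitrary fiveplet phases $(\theta_{1},\theta_{2})$, compensating fermion rephasings always exist, so the two $U(1)^{\mathbf{5}}$ factors are genuinely independent:

```python
from fractions import Fraction as F

def fermion_phases(theta1, theta2):
    """Compensating rephasings (a, b, c) of (chi_10, psi_5bar, psi_1) for
    fiveplet phases h_{1,5} -> exp(i*theta1) h_{1,5}, h_{2,5} -> exp(i*theta2) h_{2,5}:
        chi10-chi10-h2       : 2a + theta2    = 0
        psi5bar-chi10-h1^dag : a + b - theta1 = 0
        psi5bar-psi1-h2      : b + c + theta2 = 0
    """
    a = -theta2 / 2
    b = theta1 - a
    c = -theta2 - b
    return (a, b, c)

# A solution exists for ANY (theta1, theta2); the two unit rephasings give the
# fermion charges under the two separate U(1)^5 factors.
u1_fermions = fermion_phases(F(1), F(0))
u2_fermions = fermion_phases(F(0), F(1))
```

The two solutions, $(0,1,-1)$ and $(-1/2,1/2,-3/2)$, match the fermion columns of the $\gamma_{1}=\gamma_{2}=0$ charge table of the first scenario.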
Let us assume the scalar potential shares this symmetry, except for the two possible $\gamma$ couplings:% \begin{equation} V\supset\varepsilon_{ABCDE}\mathbf{H}_{\mathbf{10}}^{AB}\mathbf{H}% _{\mathbf{10}}^{CD}(\gamma_{1}h_{1,\mathbf{5}}^{E}+\gamma_{2}h_{2,\mathbf{5}% }^{E})+h.c.\ .\label{GammaCoup}% \end{equation} Depending on which of these couplings is non-zero, various symmetry-breaking patterns are realized: \[% \begin{array} [c]{rcc}% \gamma_{1}=\gamma_{2}=0: & U(1)^{\mathbf{10}}\otimes U(1)_{1}^{\mathbf{5}% }\otimes U(1)_{2}^{\mathbf{5}}\otimes U(1)_{Y}\rightarrow U(1)_{\mathcal{B}% -\mathcal{L}} & :3~\text{Goldstone bosons\ ,}\\ \gamma_{1}\neq0\text{ or }\gamma_{2}\neq0: & U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}\otimes U(1)_{Y}\rightarrow\varnothing & :3~\text{Goldstone bosons\ ,}\\ \gamma_{1}\neq0\text{ and }\gamma_{2}\neq0: & U(1)_{X}\otimes U(1)_{Y}% \rightarrow\varnothing & :2~\text{Goldstone bosons\ .}% \end{array} \] For the first two scenarios, though the number of Goldstone bosons is the same as in the usual DFSZ model, the situation is very different. Indeed, first, all three electrically neutral pseudoscalar degrees of freedom $\eta_{10}$, $\eta_{1,5}$, $\eta_{2,5}$ are massless, and second, two combinations have to be used to form the WBG of the $X^{0}$ and $Z^{0}$ gauge bosons:% \begin{equation} G_{Y}^{0}\sim v_{1}\eta_{1,5}+v_{2}\eta_{2,5}\ ,\ \ G_{X}^{0}\sim-2v_{1}% \eta_{1,5}-2v_{2}\eta_{2,5}+v_{10}\eta_{10}\ .\label{GXGY}% \end{equation} Those states are not immediately orthogonal because of the mixing induced by $Y=-Y^{\prime}/5+2X/5$, but this does not matter. The point is that the remaining Goldstone boson must be orthogonal to both these WBG, and the only orthogonal direction available is \begin{equation} a^{0}\sim\frac{\eta_{1,5}}{v_{1}}-\frac{\eta_{2,5}}{v_{2}}% \ .\label{FlipEWaxion}% \end{equation} Thus, the axion cannot have a $\eta_{10}$ component, and its dynamics entirely take place at the electroweak scale. 
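The orthogonality argument behind Eq.~(\ref{FlipEWaxion}) is elementary linear algebra and can be checked directly (a standalone sketch; the VEV values are arbitrary illustrative inputs):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Basis (eta_{1,5}, eta_{2,5}, eta_10); illustrative VEVs.
v1, v2, v10 = 0.7, 1.3, 50.0

G_Y = [v1, v2, 0.0]              # WBG of Z^0, Eq. (GXGY)
G_X = [-2 * v1, -2 * v2, v10]    # WBG of X^0, Eq. (GXGY)
a0 = [1 / v1, -1 / v2, 0.0]      # candidate axion, Eq. (FlipEWaxion)

# a0 is orthogonal to both would-be Goldstones even though G_Y and G_X are not
# orthogonal to each other, so no eta_10 admixture is required.
overlap_Y = dot(a0, G_Y)         # 1 - 1, vanishes
overlap_X = dot(a0, G_X)         # -2 + 2, vanishes
mixing_YX = dot(G_Y, G_X)        # -2 (v1^2 + v2^2), nonzero
```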
This is confirmed by looking at the third scenario, with $\gamma_{1}\neq0$ and $\gamma_{2}\neq0$.\ In that case, both fiveplets have the same $X$ and $Y$ charges, and thus nothing prevents a $h_{1,\mathbf{5}}h_{2,\mathbf{5}}^{\dagger}$ coupling in the potential. Even if it is not initially present, it is radiatively induced by the $\gamma_{1}$ and $\gamma_{2}$ couplings. But such a $h_{1,\mathbf{5}}h_{2,\mathbf{5}}^{\dagger}$ coupling generates the pseudoscalar potential term (in the polar representation)%
\begin{equation}
V(\eta_{1},\eta_{2},\eta_{10})\sim\cos\left( \frac{\eta_{1,5}}{v_{1}}%
-\frac{\eta_{2,5}}{v_{2}}\right) \ .
\end{equation}
Unsurprisingly, the state becoming massive is the axion since the WBG of the $X^{0}$ and $Z^{0}$ gauge bosons cannot do so. The symmetry breaking patterns are confirmed by working out the charges explicitly. For the first scenario, with $\gamma_{1}=\gamma_{2}=0$, the Yukawa couplings impose%
\begin{equation}%
\begin{tabular}
[c]{ccccccc}\hline
$\gamma_{1}=\gamma_{2}=0$ & $\mathbf{H}_{\mathbf{10}}$ & $h_{1,\mathbf{5}}$ &
$h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ & $\psi_{\mathbf{\bar{5}}}$ &
$\psi_{\mathbf{1}}$\\\hline
$U(1)^{\mathbf{10}}$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$\\
$U(1)_{1}^{\mathbf{5}}$ & $0$ & $1$ & $0$ & $0$ & $1$ & $-1$\\
$U(1)_{2}^{\mathbf{5}}$ & $0$ & $0$ & $1$ & $-1/2$ & $1/2$ & $-3/2$\\\hline
\end{tabular}
\end{equation}
The PQ charges of the doublets $\Phi_{1}\subset h_{1,\mathbf{5}}^{\dagger}$ and $\Phi_{2}\subset h_{2,\mathbf{5}}^{\dagger}$ are the same as before (note the $\dagger$ though), $x$ and $-1/x$, and so are the PQ charges derived from Eq.~(\ref{YukFlippedTHDM}) which match Eq.~(\ref{PQferm}).
This means that from the GUT-scale $U(1)$s, we can construct%
\begin{align}
X & =U_{1}^{\mathbf{10}}-2U_{1}^{\mathbf{5}}-2U_{2}^{\mathbf{5}}\ ,\\
\mathcal{B}-\mathcal{L} & =-2U_{1}^{\mathbf{5}}-2U_{2}^{\mathbf{5}}-2Y\ ,\\
PQ & =\zeta_{10}^{PQ}U_{1}^{\mathbf{10}}+\zeta_{1}^{PQ}U_{1}^{\mathbf{5}%
}+\zeta_{2}^{PQ}U_{2}^{\mathbf{5}}+\zeta_{Y}^{PQ}Y\ ,
\end{align}
with%
\begin{equation}
\zeta_{10}^{PQ}=0\ ,\ \ \zeta_{1}^{PQ}=2\beta+x-\frac{1}{x}\ ,\ \ \zeta
_{2}^{PQ}=2\beta+2x\ ,\ \ \zeta_{Y}^{PQ}=2\beta+2x-\frac{1}{x}\ . \label{PQflippedSol}
\end{equation}
The PQ charges of the fermions are the same as in the minimal model, Eq.~(\ref{PQFinalCharge}), together with $PQ(\nu_{R})=\beta+x$. This reflects the fact that the Yukawa couplings remain those of a THDM of type II. Also, they exhibit the same one-parameter ambiguity originating in the global $\mathcal{B}-\mathcal{L}$ invariance. Note that when this symmetry is active, and only in that case, it is possible to choose $SU(5)$ invariant PQ charges by setting $\zeta_{Y}^{PQ} = 0$, i.e.,
\begin{equation}
\beta = -x+\frac{1}{2x}\ .
\label{PQflipSU5}
\end{equation}
If only one $\gamma$ is non-zero, the patterns of charges are%
\begin{equation}%
\begin{tabular}
[c]{ccccccc}\hline
$\gamma_{1}=0$ or $\gamma_{2}=0$ & $\mathbf{H}_{\mathbf{10}}$ &
$h_{1,\mathbf{5}}$ & $h_{2,\mathbf{5}}$ & $\chi_{\mathbf{10}}$ &
$\psi_{\mathbf{\bar{5}}}$ & $\psi_{\mathbf{1}}$\\\hline
$U(1)_{1}^{\mathbf{5}}$ & $-1/2\delta_{\gamma_{2}}^{0}$ & $1$ & $0$ & $0$ &
$1$ & $-1$\\
$U(1)_{2}^{\mathbf{5}}$ & $-1/2\delta_{\gamma_{1}}^{0}$ & $0$ & $1$ & $-1/2$ &
$1/2$ & $-3/2$\\\hline
\end{tabular}
\end{equation}
Though $\mathbf{H}_{\mathbf{10}}$ is charged under $U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}$, the PQ charge of its electrically-neutral component vanishes since the axion has no $\eta_{10}$ component, while the PQ charges of $\Phi_{1}\subset h_{1,\mathbf{5}}^{\dagger}$ and $\Phi_{2}\subset h_{2,\mathbf{5}}^{\dagger}$ are the same as before. These charges can be rearranged into
\begin{align}
X & =-2U_{1}^{\mathbf{5}}-2U_{2}^{\mathbf{5}}\ ,\\
\mathcal{B}-\mathcal{L} & =-2U_{1}^{\mathbf{5}}-2U_{2}^{\mathbf{5}}-2Y\ ,\\
PQ & =\zeta_{1}^{PQ}U_{1}^{\mathbf{5}}+\zeta_{2}^{PQ}U_{2}^{\mathbf{5}}%
+\zeta_{Y}^{PQ}Y\ .
\end{align}
For $\mathcal{B}-\mathcal{L}$, the above combination reproduces the charges of the fermions, but not those of the scalars since $\mathbf{H}_{\mathbf{10}}$ ends up with $\mathcal{B}-\mathcal{L} = 1$. This shows explicitly that this symmetry is broken by the $\gamma$ couplings. The coefficients $\zeta_{i}^{PQ}$ are now uniquely defined:%
\begin{equation}%
\begin{array}
[c]{llll}%
\gamma_{1}=0: & \zeta_{1}^{PQ}=-x-\dfrac{1}{x}\ , & \zeta_{2}^{PQ}=0\ ,\ &
\zeta_{Y}^{PQ}=-\dfrac{1}{x}\ ,\\
\gamma_{2}=0: & \zeta_{1}^{PQ}=0\ ,\ \ & \zeta_{2}^{PQ}=x+\dfrac{1}{x}\ , &
\zeta_{Y}^{PQ}=x\ .
\end{array}
\end{equation}
Consequently, the PQ charges of the fermions are unambiguously defined, and correspond to Eq.~(\ref{PQFinalCharge}) with either
\begin{equation}%
\gamma_{1}=0: \beta=-x \ , \ \ \ \gamma_{2}=0: \beta=-\frac{1}{2} \left( x-\frac{1}{x}\right) \ . \label{betagammaPQ}
\end{equation}
Concerning neutrinos, since $PQ(\nu_{R})=\beta+x$, we find $PQ(\nu_{R})=0$ when $\gamma_{1}=0$, and $PQ(\nu_{R})=1/2(x+1/x)$ when $\gamma_{2}=0$. This means that the mechanism in Eq.~(\ref{MajoH10}) to induce a Majorana neutrino mass term works only when $\gamma_{2}\neq0$. This is not surprising looking at Fig.~\ref{Fig3}: the cubic scalar coupling has to involve the same fiveplet as that coupled to $\bar{\chi}_{\mathbf{10}}^{c}\chi_{\mathbf{10}}$. If that is not the case, a Majorana mass term can still be induced since $\mathcal{B}-\mathcal{L}$ is broken. But with $\gamma_{2}=0$, one has to rely on more complicated processes involving the quartic $\alpha(h_{1,\mathbf{5}}^{\dagger}h_{1,\mathbf{5}}) (h_{2,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}})$. Instead of Eq.~(\ref{MajoH10}), a Majorana mass term then arises at the dimension-seven level%
\begin{equation}
\mathcal{L}_{\dim7}^{eff}=\frac{C_{\dim7}}{v_{10}^{3}}\bar{\chi}_{\mathbf{10}%
}^{c}(\mathbf{H}_{\mathbf{10}}^{\dagger})^{2}h_{1,\mathbf{5}}^{\dagger
}h_{2,\mathbf{5}}\chi_{\mathbf{10}}+h.c.\ . \label{MajoDim7}
\end{equation}
With the Majorana mass scale $M_{R}\sim v_{5}^{2}/v_{10}$, this scenario is not physically viable because $M_{R}$ is not large enough to compensate the neutrino Dirac mass term in Eq.~(\ref{YukFlippedTHDM}).

\subsection{DFSZ-flipped axion models}

The flipped model needs additional GUT-scale scalars to move the axion up from the electroweak scale. The simplest solutions, in terms of additional fields, are either a singlet or another $\mathbf{10}$. Let us analyze these two scenarios in turn.

\subsubsection{Singlet DFSZ}

With a singlet, several new couplings can occur in the scalar potential.
To investigate the possible realizations of the DFSZ mechanism, and besides the couplings immediately invariant under the separate rephasing of the fields, let us add the following mixing terms% \begin{equation} \mathcal{L}_{DFSZ}\supset\lambda\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}% ^{\dagger}h_{2,\mathbf{5}}+\varepsilon_{ABCDE}\mathbf{H}_{\mathbf{10}}% ^{AB}\mathbf{H}_{\mathbf{10}}^{CD}(\gamma_{1}^{\phi}h_{1,\mathbf{5}}^{E}% \phi_{\mathbf{1}}^{\dagger}+\gamma_{2}^{\phi}h_{2,\mathbf{5}}^{E}% \phi_{\mathbf{1}})+h.c.\ . \label{FlipSingDFS} \end{equation} Also, we assume that the dimensionless $\gamma_{i}^{\phi}$ couplings replace those in Eq.~(\ref{GammaCoup}). The reasons for choosing these particular couplings will be apparent shortly. There are four patterns of $U(1)$ symmetry breaking, depending on which couplings are present. Let us discuss each case in turn. \begin{itemize} \item $\mathbf{\lambda = \gamma_{1}^{\phi} =\gamma_{2}^{\phi}=0 }$: There are many active $U(1)$s in this case, \begin{equation} U(1)^{\mathbf{1}}\otimes U(1)^{\mathbf{10}}\otimes U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}% }\otimes U(1)_{Y}\rightarrow U(1)_{\mathcal{B}-\mathcal{L}} \ , \label{FlipSinglet} \end{equation} and four Goldstone bosons. Two of them are eaten by the $X^{0}$ and $Z^{0}$ gauge bosons. The one embedded in the singlet decouples entirely since there is nothing relating $\phi_{\mathbf{1}}$ to the rest of the dynamics. The axion remains at the electroweak scale, see Eq.~(\ref{FlipEWaxion}), and no Majorana mass term is allowed. The PQ charges are the same as in the PQ model of the previous section, with the parameter $\beta$ free. \item $\mathbf{\lambda=0}$\textbf{ and either }$\mathbf{\gamma_{1}^{\phi}=0}$\textbf{ or }$\mathbf{\gamma_{2}^{\phi}=0}$: The symmetry breaking chain is now \begin{equation} U(1)^{\mathbf{10}}\otimes U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}% }\otimes U(1)_{Y}\rightarrow\varnothing \ . 
\end{equation} Because the $\gamma_{i}^{\phi}$ couplings do not contribute to pseudoscalar masses, the Goldstone bosons are the same as in the previous case, and the axion remains the electroweak-scale state of Eq.~(\ref{FlipEWaxion}). The only difference is that the pseudoscalar state in $\phi_{\mathbf{1}}$ can now be identified as a pure, PQ-neutral majoron. Indeed, with the presence of $\phi_{\mathbf{1}}$ in the $\gamma_{1}^{\phi}$ or $\gamma_{2}^{\phi}$ coupling, the equivalent of Fig.~\ref{Fig3} leads to a $\nu_{R}$ Majorana mass term only once $\phi_{\mathbf{1}}$ acquires its VEV. Overall, the PQ charges are the same as in the PQ model since $PQ(\phi_{\mathbf{1}})=0$, and depending on which $\gamma_{i}^{\phi}$ is non-zero, $\beta$ is fixed as in Eq.~(\ref{betagammaPQ}). \item $\mathbf{\lambda \neq 0}$\textbf{ with }$\mathbf{\gamma_{1}^{\phi}=\gamma_{2}^{\phi}=0}$: Since the $\gamma_{i}^{\phi}$ couplings are absent, $\mathcal{B}-\mathcal{L}$ is active and the symmetry-breaking chain is \begin{equation} U(1)^{\mathbf{10}}\otimes U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}\otimes U(1)_{Y}\rightarrow U(1)_{\mathcal{B}-\mathcal{L}} \ . \end{equation} There are thus three Goldstone bosons. The role of the $\lambda$ coupling is to make one pseudoscalar state massive \begin{equation} \pi^{0}\sim -\frac{\eta_{1,5}}{v_{1}}+\frac{\eta_{2,5}}{v_{2}}+2\frac{\eta_{s}% }{v_{s}}\ , \end{equation} where $v_{s}$ is the VEV of the singlet $\phi_{\mathbf{1}}$, and $\eta_{s}$ its pseudoscalar component. Then, the only state orthogonal to $\pi^{0}$, $G_{X}^{0}$ and $G_{Y}^{0}$ (given in Eq.~(\ref{GXGY})) is necessarily% \begin{equation} a^{0}\sim\eta_{s}+\frac{v\sin2\beta}{v_{s}}(\cos\beta\eta_{1,5}-\sin\beta \eta_{2,5})\ ,\label{FlipDFSZaxion}% \end{equation} where $v_{1}=v_{5}\sin\beta$, $v_{2}=v_{5}\cos\beta$, exactly as in Eq.~(\ref{DFSZaxion}). 
The PQ charges of the scalars are then uniquely defined, with $PQ(\Phi_{1},\Phi_{2})=(x,-1/x)$, while those of the fermions are again given by Eq.~(\ref{PQFinalCharge}), including the $\beta$ ambiguity since $\mathcal{B}-\mathcal{L}$ is active.

\item \textbf{At least two non-vanishing couplings among }$\mathbf{\lambda}$\textbf{ and }$\mathbf{\gamma_{1,2}^{\phi}}$: All these cases are equivalent because whichever two couplings are present, the third can be induced radiatively. Thus, the symmetry breaking pattern is the same in all these cases:
\begin{equation}
U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}\otimes U(1)_{Y}\rightarrow\varnothing \ . \label{flipnDFSZ}
\end{equation}
The Goldstone bosons are not affected by the $\gamma_{1,2}^{\phi}$ couplings, the axion remains as in Eq.~(\ref{FlipDFSZaxion}), and the PQ charges of all the scalars but $\mathbf{H}_{\mathbf{10}}$ are unchanged. For fermions, however, the PQ charge ambiguity is removed and $\beta = -1/4(3x-1/x)$. This value is precisely that which ensures that the Majorana mass operator arising from the analogue of Fig.~\ref{Fig3} is allowed by the PQ symmetry:
\begin{equation}
\mathcal{L}_{\dim6}^{eff}=\frac{C_{\dim6}}{v_{10}^{2}}\bar{\chi}_{\mathbf{10}%
}^{c}(\mathbf{H}_{\mathbf{10}}^{\dagger})^{2}\phi_{\mathbf{1}}^{\dagger
}\chi_{\mathbf{10}}+h.c.\ .
\end{equation}
Phenomenologically, when $v_s \approx v_{10}$, this dimension-six operator is equivalent to the dimension-five one of Eq.~(\ref{MajoH10}). Overall, this scenario can be viewed as the flipped analogue of the $\nu$DFSZ of the minimal $SU(5)$ model, see Eq.~(\ref{LagrnuDFSZ}). Here also, the axion is identified with the majoron.
\end{itemize}
Note, finally, that we can now understand the specific choice of couplings made in Eq.~(\ref{FlipSingDFS}). For the axion to remain massless, they have to be compatible among themselves.
For example, if one adds to $\gamma_{1}^{\phi}$ and $\gamma_{2}^{\phi}$ the coupling $\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}h_{2,\mathbf{5}}^{\dagger}$ instead of $\phi_{\mathbf{1}}^{2}h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}}$, then the symmetry pattern collapses to $U(1)_{X}\otimes U(1)_{Y}\rightarrow\varnothing$, with both $\pi^{0}$ and $a^{0}$ massive. Thus, the set in Eq.~(\ref{FlipSingDFS}) is one example that ensures viable scenarios do exist. Though there are other possible sets of couplings that could produce an acceptable axion state, the symmetry breaking patterns would be very similar, and only the value of $\beta$ could be different.

\subsubsection{Fundamental DFSZ}

If instead of a singlet one introduces a second $\mathbf{H}_{\mathbf{10}}$, with the same gauge quantum numbers, the possible mixing terms in the scalar potential are%
\begin{equation}
\mathcal{L}_{DFSZ}\supset\gamma^{ijk}\mathbf{H}%
_{i,\mathbf{10}}\mathbf{H}_{j,\mathbf{10}}h_{k,\mathbf{5}}
+\alpha^{ijkl}(h_{i,\mathbf{5}}^{\dagger}h_{j,\mathbf{5}})\langle
\mathbf{H}_{k,\mathbf{10}}^{\dagger}\mathbf{H}_{l,\mathbf{10}}\rangle
+\beta^{ijkl}h_{i,\mathbf{5}}^{\dagger}\mathbf{H}_{k,\mathbf{10}}^{\dagger
}\mathbf{H}_{l,\mathbf{10}}h_{j,\mathbf{5}}\ , \label{EntangFund}
\end{equation}
where $i,j,k,l=1,2$. Whenever $i\neq j$ and/or $k\neq l$, the $U(1)$ charges of the scalar states get entangled. From this point of view, the $\alpha^{ijkl}$ and $\beta^{ijkl}$ couplings have exactly the same effect, so it is sufficient to consider only the former and we set $\beta^{ijkl}=0$. To set the stage, consider the situation without any of these couplings. The scalar potential is invariant under $U(1)_{1}^{\mathbf{10}}\otimes U(1)_{2}^{\mathbf{10}}\otimes U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}$, and the $U(1)_{\mathcal{B}-\mathcal{L}}$ symmetry emerges at low energy.
There are thus two extra Goldstone bosons besides those eaten by the gauge bosons, which are now%
\begin{equation}
G_{Y}^{0}\sim v_{1}\eta_{1,5}+v_{2}\eta_{2,5}\ ,\ \ G_{X}^{0}\sim-2v_{1}%
\eta_{1,5}-2v_{2}\eta_{2,5}+v_{1,10}\eta_{1,10}+v_{2,10}\eta_{2,10}\ .
\end{equation}
In this case, the axion $a_{EW}^{0}$ is given by Eq.~(\ref{FlipEWaxion}), and is accompanied by another massless state $a_{GUT}^{0}$, decoupled from the SM fermions,
\begin{equation}
a_{EW}^{0}\sim\frac{\eta_{1,5}}{v_{1}}-\frac{\eta_{2,5}}{v_{2}}\ ,\ \ a_{GUT}%
^{0}\sim\frac{\eta_{1,10}}{v_{1,10}}-\frac{\eta_{2,10}}{v_{2,10}%
}\ .\label{aEWGUT}%
\end{equation}
Neither of these states can be a viable axion candidate, so some of the mixing terms have to be turned on. Consider first the quartic terms in the potential, setting the $\gamma^{ijk}$ to zero. The trick to implement the DFSZ mechanism is to turn on just enough of these couplings to make a linear combination of $a_{EW}^{0}$ and $a_{GUT}^{0}$ massive.\ By orthogonality, the axion will then have a component in the $\eta_{i,10}$ direction, and this will move it up to the GUT scale. Clearly, turning on any of the $\alpha^{ijkl}$ with $i=j$ or $k=l$ does not work because the state becoming massive is either $a_{EW}^{0}$ or $a_{GUT}^{0}$, but not a combination of them. Instead, turning on $\alpha^{1212}$ or $\alpha^{2121}$ produces the massive%
\begin{equation}
\pi^{0}\sim\frac{\eta_{1,5}}{v_{1}}-\frac{\eta_{2,5}}{v_{2}}+\frac{\eta
_{1,10}}{v_{1,10}}-\frac{\eta_{2,10}}{v_{2,10}}\ ,
\end{equation}
and by orthogonality, the massless axion is then%
\begin{equation}
a^{0}\sim(-\cos\alpha \ \eta_{1,10}+\sin\alpha\ \eta_{2,10})\sin2\alpha+\frac
{v_{5}\sin2\beta}{v_{10}}(\cos\beta\ \eta_{1,5}-\sin\beta\ \eta_{2,5})\ ,
\end{equation}
where $\tan\alpha=v_{1,10}/v_{2,10}$ and $\tan\beta=v_{1}/v_{2}$. Comparing with Eq.~(\ref{DFSZaxion}) or~(\ref{FlipDFSZaxion}), this is precisely what we were after.
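That the quoted massless direction is indeed orthogonal to $\pi^{0}$, $G_{X}^{0}$ and $G_{Y}^{0}$ can be verified numerically (a standalone sketch; the angles and VEVs are arbitrary illustrative inputs, with our parametrization $v_{1}=v_{5}\sin\beta$, $v_{2}=v_{5}\cos\beta$, $v_{1,10}=v_{10}\sin\alpha$, $v_{2,10}=v_{10}\cos\alpha$):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

v5, v10 = 1.0, 40.0                  # illustrative VEVs
beta, alpha = 0.4, 1.1               # illustrative angles
v1, v2 = v5 * math.sin(beta), v5 * math.cos(beta)
v110, v210 = v10 * math.sin(alpha), v10 * math.cos(alpha)

# Basis (eta_{1,5}, eta_{2,5}, eta_{1,10}, eta_{2,10})
G_Y = [v1, v2, 0.0, 0.0]
G_X = [-2 * v1, -2 * v2, v110, v210]
pi0 = [1 / v1, -1 / v2, 1 / v110, -1 / v210]

s2a, s2b = math.sin(2 * alpha), math.sin(2 * beta)
a0 = [v5 * s2b / v10 * math.cos(beta), -v5 * s2b / v10 * math.sin(beta),
      -s2a * math.cos(alpha), s2a * math.sin(alpha)]

# The axion direction has no overlap with the massive pi0 nor with the two
# would-be Goldstone bosons, for any choice of angles.
max_overlap = max(abs(dot(a0, w)) for w in (G_Y, G_X, pi0))
```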
From this expression, the corresponding PQ charges are immediately found to be, upon proper normalization, \begin{equation} PQ(\Phi_{1}\subset h_{1,\mathbf{5}}^{\dagger},\Phi_{2}\subset h_{2,\mathbf{5}}^{\dagger},\phi_{1}\subset\mathbf{H}_{1,\mathbf{10}},\phi_{2}\subset\mathbf{H}_{2,\mathbf{10}})=\left( x,-\frac{1}{x},\left( x+\frac{1}{x}\right) \cos^{2}\alpha,-\left( x+\frac{1}{x}\right) \sin^{2}\alpha\right) \ . \label{FlipFund} \end{equation} Therefore, the PQ charges of the fermions are still given by Eq.~(\ref{PQFinalCharge}), including the $\beta$ ambiguity since $\mathcal{B}-\mathcal{L}$ remains active when $\gamma^{ijk}=0$. The situation is similar when turning on $\alpha^{1221}$ or $\alpha^{2112}$ instead, with only the PQ charges of $\phi_{i}\subset\mathbf{H}_{\mathbf{10}}^{i}$ changing sign. However, if, say, $\alpha^{1212}$ and $\alpha^{1221}$ are simultaneously present, then the $a_{EW}^{0}$ and $a_{GUT}^{0}$ states of Eq.~(\ref{aEWGUT}) both become massive. Concerning the $\gamma^{ijk}$ couplings, note that none of them can directly generate a mass term for the pseudoscalars. Thus, if a single $\gamma^{ijk}$ coupling is present, but $\alpha^{ijkl}=0$, there are again the two extra Goldstone bosons of Eq.~(\ref{aEWGUT}). Indeed, the initial symmetry has one less $U(1)$ because of the $\gamma^{ijk}$ coupling, but $U(1)_{\mathcal{B}-\mathcal{L}}$ no longer emerges at low energy. In this case, $a_{GUT}^{0}$ is a pure majoron state, and the axion is stuck at the electroweak scale.
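A minimal sanity check of these charges, with illustrative values of $x$ and $\alpha$: the coupling $\alpha^{1212}(h_{1,\mathbf{5}}^{\dagger}h_{2,\mathbf{5}})\langle\mathbf{H}_{1,\mathbf{10}}^{\dagger}\mathbf{H}_{2,\mathbf{10}}\rangle$ that was turned on must carry zero total PQ charge:

```python
import math

x, alpha = 1.7, 0.6   # hypothetical values of x and the GUT-scale angle alpha

# PQ charges of Eq. (FlipFund); note the first two are quoted for h^dagger
PQ_h1dag = x
PQ_h2dag = -1 / x
PQ_H1 = (x + 1 / x) * math.cos(alpha) ** 2
PQ_H2 = -(x + 1 / x) * math.sin(alpha) ** 2

# PQ charge of (h_1^dag h_2)(H_1^dag H_2); conjugation flips the sign
charge = PQ_h1dag + (-PQ_h2dag) + (-PQ_H1) + PQ_H2
assert abs(charge) < 1e-12  # the coupling is PQ invariant
```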
\begin{table}[t] \centering \begin{tabular}[c]{ccccc}\hline & \multicolumn{2}{c}{$\alpha^{1212}$} & \multicolumn{2}{c}{$\alpha^{1221}$}\\ & $\beta$ & $(l,m,n,p)$ & $\beta$ & $(l,m,n,p)$\\\hline
\multicolumn{1}{l}{$\gamma^{111}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-\dfrac{3x}{2}-\dfrac{1}{2x}$} & \multicolumn{1}{l}{$(1,0,1,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}+\dfrac{x}{2}+\dfrac{3}{2x}$} & \multicolumn{1}{l}{$(3,0,0,1)$}\\
\multicolumn{1}{l}{$\gamma^{112}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-2x-\dfrac{1}{x}$} & \multicolumn{1}{l}{$(2,0,0,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}+\dfrac{1}{x}$} & \multicolumn{1}{l}{$(2,0,0,0)$}\\
\multicolumn{1}{l}{$\gamma^{121}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-x$} & \multicolumn{1}{l}{$(0,0,2,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}+\dfrac{1}{x}$} & \multicolumn{1}{l}{$(2,0,0,0)$}\\
\multicolumn{1}{l}{$\gamma^{122}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-\dfrac{3x}{2}-\dfrac{1}{2x}$} & \multicolumn{1}{l}{$(1,0,1,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-\dfrac{x}{2}+\dfrac{1}{2x}$} & \multicolumn{1}{l}{$(1,0,1,0)$}\\
\multicolumn{1}{l}{$\gamma^{221}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-\dfrac{x}{2}+\dfrac{1}{2x}$} & \multicolumn{1}{l}{$(0,1,3,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-\dfrac{x}{2}+\dfrac{1}{2x}$} & \multicolumn{1}{l}{$(1,0,1,0)$}\\
\multicolumn{1}{l}{$\gamma^{222}$} & \multicolumn{1}{l}{$\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-x$} & \multicolumn{1}{l}{$(0,0,2,0)$} & \multicolumn{1}{l}{$-\left( x+\dfrac{1}{x}\right) s_{\alpha}^{2}-x$} & \multicolumn{1}{l}{$(0,0,2,0)$}\\\hline
\end{tabular} \caption{Values of $\beta$ for various viable implementations of the DFSZ mechanism using two $\mathbf{H}_{\mathbf{10}}$ multiplets, where $s_{\alpha}\equiv\sin\alpha$. In each case, the set of numbers refers to the exponents of the seesaw operator in Eq.~(\ref{FundSee}).}\label{FundDFSZ} \end{table} The simplest viable scenarios where $\mathcal{B}-\mathcal{L}$ is broken use either one of the $\gamma^{ijk}$ couplings together with one $\alpha^{lmnp}$ with $l\neq m$ and $n\neq p$, or two of the $\gamma^{ijk}$ couplings. These two situations are actually equivalent, because a pair of $\gamma^{ijk}$ couplings radiatively induces an effective $\alpha^{lmnp}$-like coupling, and a $\gamma^{ijk}$ coupling together with $\alpha^{lmnp}$ radiatively induces an effective $\gamma^{ijk}$-like coupling. By this we mean that though these effective couplings can be quite complicated, and may also involve the other quartic couplings of the scalar potential, they impose the same entanglement of the scalar charges as one of the $\gamma^{ijk}$ or $\alpha^{lmnp}$ couplings. So, to cover all the cases, it is sufficient to consider only one of the $\gamma^{ijk}$ couplings together with one of the $\alpha^{lmnp}$ couplings. The axion state and PQ charges are the same as with only the $\alpha^{lmnp}$ coupling, except that the ambiguity parameter $\beta$ is fixed, see Table~\ref{FundDFSZ}. Clearly, all these scenarios have the same axion phenomenology since they differ only in the parameter $\beta$, which now depends on both the GUT and EW-scale ratios of VEVs $v_{1,10}/v_{2,10}$ and $v_{1}/v_{2}$. The same is true in the neutrino sector.
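The exponent sets $(l,m,n,p)$ quoted in Table~\ref{FundDFSZ} all satisfy the selection rule $l-m+n-p=2$ of the seesaw operator Eq.~(\ref{FundSee}); a short enumeration (purely illustrative) confirms this and lists the low-order solutions:

```python
from itertools import product

# selection rule obeyed by the seesaw operator of Eq. (FundSee)
rule = lambda l, m, n, p: l - m + n - p == 2

# all exponent sets with at most four decuplet fields
solutions = {t for t in product(range(5), repeat=4) if rule(*t) and sum(t) <= 4}

# exponent sets quoted in Table (FundDFSZ)
table_sets = {(1, 0, 1, 0), (3, 0, 0, 1), (2, 0, 0, 0), (0, 0, 2, 0), (0, 1, 3, 0)}
assert table_sets <= solutions
assert min(sum(t) for t in solutions) == 2  # leading operators need two decuplets
```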
We give in Table~\ref{FundDFSZ} the sets $(l,m,n,p)$ corresponding to the leading operator giving rise to the Majorana mass term for the right-handed neutrinos, i.e., \begin{equation} \mathcal{L}_{eff}=\frac{1}{v_{10}^{l+m+n+p-1}}\bar{\chi}_{\mathbf{10}}^{\mathrm{C}}(\mathbf{H}_{1,\mathbf{10}}^{\dagger})^{l}(\mathbf{H}_{1,\mathbf{10}})^{m}(\mathbf{H}_{2,\mathbf{10}}^{\dagger})^{n}(\mathbf{H}_{2,\mathbf{10}})^{p}\chi_{\mathbf{10}}\ , \label{FundSee} \end{equation} with $l-m+n-p = 2$. These operators are determined by requiring invariance under the $U(1)_{1}^{\mathbf{5}}\otimes U(1)_{2}^{\mathbf{5}}$ symmetry. Contrary to the PQ model, see Eq.~(\ref{MajoDim7}), it is always possible to construct them using only the GUT-scale scalar decuplets, even when the $\gamma$ coupling does not involve the same fiveplet as $\mathbf{Y}_{10}$, because the $\alpha$ coupling can be used to switch from one fiveplet to the other (see Fig.~\ref{Fig3}). Once the decuplets acquire their VEVs, and except for extreme values of $v_{1,10}/v_{2,10}$, all these operators are clearly equivalent phenomenologically. \section{Conclusions\label{Ccl}} Axion models are based on the spontaneous breaking of an extra $U(1)$ symmetry. If this symmetry has a strong anomaly, the axion, the associated Goldstone boson, ends up coupled to gluons, and this drives the strong CP violation to zero in the non-perturbative regime of QCD. A characteristic feature of axion models is that the true $U(1)_{PQ}$ symmetry corresponding to the axion is not trivial to identify, because of the presence of several other $U(1)$ symmetries acting on the same fields: baryon number $\mathcal{B}$, lepton number $\mathcal{L}$, and weak hypercharge. As a consequence, the PQ charges can only be defined after $U(1)_{Y}$ is spontaneously broken, and even then, those of the fermions remain ambiguous whenever baryon or lepton number is conserved~\cite{Quevillon:2020hmx}.
Specifically, given Yukawa couplings to two electroweak Higgs doublets of type II (see Eq.~(\ref{YukQuark})), the PQ charges of the SM fermions are expressed in terms of two free parameters, $\alpha$ and $\beta$, as: \begin{equation} PQ(q_{L},u_{R},d_{R},\ell_{L},e_{R})=\left( \alpha,\alpha+x,\alpha+\frac{1}{x},\beta,\beta+\frac{1}{x} \right)\ \ , \end{equation} where $x = v_2/v_1$ and $v_i$ is the VEV of each Higgs doublet. Our purpose was to study how these ambiguities manifest themselves in the $SU(5)$ GUT setting, to see when they can be lifted, and how they make room for a Majorana mass term for the neutrinos. Our main results are: \begin{itemize} \item In a GUT setting, one of the two ambiguities immediately disappears, and \begin{equation} 3\alpha+\beta=-\left( x+\frac{1}{x}\right) \equiv2\mathcal{N}_{SU(5)}\ . \end{equation} This can be understood either as a consequence of the $SU(5)$ gauge interactions breaking $\mathcal{B}+\mathcal{L}$, or because the anomalous couplings of the axion to all the SM gauge bosons must originate from the single anomaly coefficients of the global $SU(5)$ chiral currents, see Eqs.~(\ref{unifano1}) and~(\ref{unifano2}). There thus remains only one free parameter in the fermion PQ charges, $\beta$, corresponding to the conserved $\mathcal{B}-\mathcal{L}$ symmetry. In Table~\ref{Tablesum}, we summarise the status of this parameter for the models studied in detail in this paper. \item $\mathcal{B}$ and $\mathcal{L}$ are not exact symmetries at the GUT scale, but only emerge at the low scale. This means the PQ symmetry has to be defined similarly if it is to be compatible with $\mathcal{B}$ and/or $\mathcal{L}$ violating effects, as required for example to allow for a Majorana neutrino mass term. As a corollary, this means there is no reason to expect PQ charges to be invariant over $SU(5)$ multiplets, and indeed in most cases, they are not.
It is only in the absence of neutrino masses, when $\mathcal{B}-\mathcal{L}$ is active, that $SU(5)$-invariant PQ charges can be defined, see Table~\ref{Tablesum}. As an aside, we also clarified the status of $\mathcal{B}$ and $\mathcal{L}$ in the flipped $SU(5)$ model, putting it on a par with the minimal model. \item Table~\ref{Tablesum} shows that many different implementations were studied, but they all reproduce the same PQ charges, up to the value of $\beta$. This is true for both the minimal and flipped $SU(5)$ models, for various embeddings of the axion in DFSZ-like models, and in the presence of a seesaw mechanism of type I or II, or when the DFSZ singlet also plays the role of the majoron. Though this fact can be understood, since the orthogonality conditions among the Goldstone bosons stay essentially the same, as do the low-energy Yukawa couplings, it is often obscured by the normalization of the PQ charges. Yet, this is remarkable because it means the low-energy phenomenology of the axion is the same in all these models, since, as shown in Ref.~\cite{Quevillon:2019zrd}, its couplings to fermions and gauge bosons are independent of the value of $\beta$. \end{itemize} The strategy to construct viable embeddings of the axion within grand unified scenarios is thus clear. One must first identify precisely the GUT-scale global and local symmetries. Then, the PQ symmetry, along with $\mathcal{B}$ and/or $\mathcal{L}$ if not explicitly broken, arises as a combination of the generators of all the $U(1)$ symmetries active at the GUT scale, including the weak hypercharge. This combination is fixed, up to some possible ambiguities, by the orthogonality requirement among the pseudoscalar states, including both the massive states and the would-be Goldstone bosons. As a final step, knowing the PQ charges of the scalars, one can derive those of the fermions.
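As an arithmetic illustration of the constraint $3\alpha+\beta=-(x+1/x)\equiv2\mathcal{N}_{SU(5)}$ (the colour-anomaly bookkeeping below is standard two-doublet counting, assumed here rather than spelled out in the text): the per-generation colour anomaly computed from the fermion PQ charges is independent of $\alpha$ and reproduces $2\mathcal{N}_{SU(5)}$:

```python
# Hypothetical numerical values of x = v_2/v_1 and the free parameter alpha.
x, a = 2.3, 0.4
beta = -(x + 1 / x) - 3 * a   # GUT constraint: 3*alpha + beta = -(x + 1/x)

PQ = {'qL': a, 'uR': a + x, 'dR': a + 1 / x, 'lL': beta, 'eR': beta + 1 / x}

# Per-generation colour anomaly: left- minus right-handed colour triplets,
# with the quark doublet counted twice (its two SU(2) components).
N_colour = 2 * PQ['qL'] - PQ['uR'] - PQ['dR']

assert abs(N_colour - (3 * a + beta)) < 1e-12   # equals 2 N_SU(5)
assert abs(N_colour + (x + 1 / x)) < 1e-12      # independent of alpha
```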
The main advantage of proceeding in this way, instead of first fixing the PQ charges of the scalars and fermions, is that enough room is automatically left to accommodate possible explicit violations of $\mathcal{B}$ and/or $\mathcal{L}$, since one starts from the full symmetry content of the GUT-scale model. The results of this paper should have implications in other settings. For instance, it is well known that the difference between the number density of baryons and that of antibaryons is about $10^{-10}$ when normalized to the entropy density of the Universe. To achieve this imbalance, some $\mathcal{B}$ and/or $\mathcal{L}$ violation appears compulsory. Whether it comes from non-trivial electroweak field configurations or from explicit $\mathcal{B}$ and/or $\mathcal{L}$ violating interactions, axions should be expected to participate, since most of these mechanisms are not PQ-neutral. This is particularly true in GUT models, where the $\mathcal{B}+\mathcal{L}$ electroweak instanton interactions necessarily carry the same PQ charge as their strong counterparts used to solve the strong CP puzzle.
\begin{table}[p] \begin{tabular} [c]{llc}\hline $\mathcal{B}-\mathcal{L}$ & Minimal $SU(5)$ models & $\beta$\\\hline Exact & $\left\{ \begin{array}[c]{l} \text{PQ~(\ref{PQFinalCharge})}\\ \text{Singlet DFSZ~(\ref{DFSZSU5})}\\ \text{Adjoint DFSZ~(\ref{PQaDFSZ})}% \end{array}\right.$ & Free\\ & $\rightarrow\ $SU(5)-invariant PQ charges~(\ref{PQSU5naif2}): & $\dfrac {x}{2}-\dfrac{1}{x}\vspace{0.2007pc}$\\\hline Broken & Type I seesaw $\left\{ \begin{array}[c]{l} \text{PQ~(\ref{PQseesaw})}\\ \text{Singlet DFSZ~(\ref{TypeIU1W})}\\ \text{Adjoint DFSZ~(\ref{TypeIU1W})}% \end{array} \right.$ & $-x$, $\dfrac{1}{x}$\\ & Type II seesaw, PQ~(\ref{TypeIIU1W}) & $\dfrac{1}{2x}-\dfrac{x}{2}\vspace{0.2014pc}\vspace{0.2022pc}$\\ & Type II seesaw, $\left\{ \begin{array}[c]{l} \text{Singlet DFSZ~(\ref{TypeIIU1W})}\\ \text{Adjoint DFSZ~(\ref{TypeIIU1W})}% \end{array} \right.$ & $\dfrac{1}{2x}-\dfrac{x}{2}$, $\dfrac{1}{4x}-\dfrac{3x}{4}$, $\dfrac {3}{4x}-\dfrac{x}{4}\vspace{0.2022pc}\vspace{0.203pc}$\\ & $\nu$DFSZ~(\ref{TablenuDFSZ}) & $\dfrac{1}{4x}-\dfrac{3x}{4}$, $\dfrac{5}% {4x}+\dfrac{x}{4}$, $-\dfrac{1}{4x}-\dfrac{5x}{4}$, $\dfrac{3}{4x}% -\dfrac{x}{4}\vspace{0.2022pc}\vspace{0.203pc}$\\\hline $\mathcal{B}-\mathcal{L}$ & Flipped $SU(5)$ models & $\beta$\\\hline Exact & $\left\{ \begin{array}[c]{l}% \text{PQ}~(\ref{PQflippedSol})\\ \text{Singlet DFSZ~(\ref{FlipSinglet})}\\ \text{Fundamental DFSZ~(\ref{FlipFund})}% \end{array} \right.$ & Free\\ & $\rightarrow\ $SU(5)-invariant PQ charges~(\ref{PQflipSU5}): & $-x+\dfrac{1}{2x}\vspace{0.2014pc}% $\\\hline Broken & \multicolumn{1}{l}{PQ~(\ref{betagammaPQ})} & $-x\vspace{0.2022pc}\vspace{0.203pc}$\\ & Singlet DFSZ~(\ref{flipnDFSZ}) & $\dfrac{1}{4x}-\dfrac{3x}{4}\vspace{0.2022pc}\vspace{0.203pc}$\\ & Fundamental DFSZ~(\ref{FundDFSZ}) & Many possibilities\\\hline \end{tabular} \caption{Summary of the values of the $\beta$ parameter, needed to compute the fermion PQ charges for various models analysed in the text.} \label{Tablesum} 
\end{table} \newpage
If you're yearning for the days of self-driving cars, Amazon looks like it may be joining the cause. According to a newly granted patent, Amazon seems to be working on developing its own self-driving cars. The patent was filed in November of 2015 and was granted this Tuesday. The patent addresses the issue of reversible lanes, which can change direction depending on the flow of traffic. These reversible lanes are usually used to manage commuter traffic in and out of busy cities. While Amazon hasn't yet made an official announcement explicitly about developing its own self-driving vehicle, the patent certainly hints at its plans to join the autonomous car market. Unconfirmed rumors suggest that Amazon has been developing self-driving vehicles out of its drone division, Prime Air. Amazon has also been making some news as it's moved into the trucking logistics space, buying up its own fleet of trucks while simultaneously developing an app that would make shipping processes much more efficient. The patent describes a network where vehicles can communicate with each other so that they can react to changes in the flow of traffic. The proposed roadway management system would also assign lanes to each vehicle depending on which direction the car is heading and what would best help traffic. The speed of each self-driving car and the number of occupants per vehicle are additional factors that will be considered. "The roadway management system can identify a period of time and a particular lane of the roadway that is best suited to assign to the autonomous vehicle while taking into account an outcome directive," an Amazon tech expert told Geekwire. This news comes after Amazon filed a patent for massive floating warehouses to store its squadron of delivery drones. Stored above high-density areas, these drone warehouses could deliver packages at a much faster rate to accommodate demand.
package berlin.jentsch.modelchecker.akka import akka.actor.ActorPath import akka.actor.typed.Behavior import akka.actor.typed.mc.{BehaviorsEquals, IsDeferredBehavior} import akka.actor.typed.scaladsl.Behaviors private[akka] object Atoms { def apply(property: Property): Set[Map[ActorPath, ActorState] => Boolean] = atoms(property).map { case actorIs: ActorIs => state: Map[ActorPath, ActorState] => BehaviorsEquals.areEquivalent( state .get(actorIs.path) .fold[Behavior[_]](Behaviors.stopped)(_.behavior), actorIs.behavior ) case ProgressIsPossible => state: Map[ActorPath, ActorState] => state.values.forall(actor => actor.messages.isEmpty && IsDeferredBehavior(actor.behavior) ) } /** * Returns a set of the atomic properties */ private def atoms(property: Property): Set[Atomic] = property match { case actorIs: ActorIs => Set(actorIs) case ProgressIsPossible => Set(ProgressIsPossible) case AlwaysEventually(property) => atoms(property) case AlwaysGlobally(property) => atoms(property) case AlwaysUntil(during, until) => atoms(during) ++ atoms(until) case ExistsEventually(property) => atoms(property) case ExistsGlobally(property) => atoms(property) case ExistsUntil(during, until) => atoms(during) ++ atoms(until) case True => Set.empty case Not(property) => atoms(property) case And(property1, property2) => atoms(property1) ++ atoms(property2) case Or(property1, property2) => atoms(property1) ++ atoms(property2) case Show(property) => atoms(property) } }
\section{Introduction} It is hard to overstate the power of communication in today's society, which enjoys the benefits of technological advances due to telecommunication and the internet. These advances are a result of \textit{reliable} and \textit{efficient} classical communication protocols, which have been facilitated by decades of studies on data compression, error correction and physics of data transmission. As our technologies enter the quantum age, we have similarly started facing the question of how to make \textit{quantum communication} reliable and efficient. Quantum communication is central to the important tasks of quantum key distribution \cite{BennettB14, Ekert91}, the transfer of quantum states \cite{CiracZKM97} and the design of large scale quantum computers \cite{BrownKM16, MonroeRRBMDK14}. While the proposals and experimental implementations of quantum communication have made great strides in recent years \cite{AzumaTL15, AzumaTM15, DuanLCZ01, Kimble2008, Ma2012, Liao2017}, the range of communication is still limited to about a few hundred kilometers \cite{PirandaloB2016, Ma2012, Liao2017} in ground-based experiments. Some of the key challenges are the probabilistic nature (as well as decoherence) in optics-based models \cite{PirandaloB2016, Ma2012, Takeoka2014, Pirandola2015} and fast decoherence in matter-based models \cite{PirandaloB2016, BurkardKD04}. This strongly motivates the problem of finding quantum protocols that efficiently achieve certain tasks with small communication, or that fight noise to reliably communicate a given amount of information. The efficiency of a quantum communication protocol is typically captured by two quantities: the number of qubits communicated and the amount of additional resource, such as quantum entanglement, needed in the protocol.
Since the foundational works of Holevo, Schumacher and Westmoreland \cite{Schumacher95, SchuW97, Holevo98}, great progress has been made in the understanding of the optimal amount of communication and additional resources needed in a large family of quantum communication tasks. Well known results on quantum channel coding \cite{Holevo98, SchuW97, lloyd97, Shor02, BennettSST02, Devetak05private, HaydenHWY08}, quantum source coding \cite{Schumacher95}, quantum state merging \cite{HorodeckiOW05, HorodeckiOW07} and quantum state redistribution \cite{Devatakyard, YardD09} have discovered a powerful collection of tools for quantum information processing. These tools have found applications in disciplines beyond quantum communication, such as quantum thermodynamics \cite{LindenPSW09, RioARDV11} and black hole physics \cite{Page93, HaydenP07}. One such tool that takes center stage in our work is that of quantum decoupling. Notably, the aforementioned works in quantum information theory are set in the asymptotic and i.i.d. (independent and identically distributed) framework of Shannon \cite{Shannon}, which allows the protocol to run over many independent instances of the input system. In practice, however, one typically does not have access to such independent instances, limiting the scope of these results. The field of one-shot information theory addresses this problem, by constructing protocols that run on one instance of the input system. This leads to a generalization of the asymptotic and i.i.d. theory and brings information processing tasks to a more practical domain. However, unlike the asymptotic and i.i.d. theory of quantum information, the understanding of optimal communication and additional resources is still lacking in one-shot quantum information theory.
Even for the very basic task of entanglement-assisted quantum channel coding \cite{BennettSST02}, state-of-the-art \cite{DattaH13, DattaTW2016, AnshuJW17CC} one-shot protocols fail to simultaneously achieve the optimal communication capacity and the optimal amount of initial entanglement. The aim of this work is to introduce new methods that make progress on this problem and exponentially improve upon the amount of initial entanglement needed in a family of one-shot protocols that achieve the best known communication for the above tasks. In many cases, the resulting protocols have the additional property that either the encoding or the decoding operation is a quantum circuit of small depth. In order to lay the groundwork for our results, we revisit the existing techniques of decoupling and the more recent convex-split and position-based decoding. Decoupling (see Figure \ref{decoupling}) refers to the process of applying some quantum operation on one of two given systems (which share quantum correlations), so as to make the two systems independent of each other. This idea has been applied in the aforementioned tasks of quantum state merging \cite{HorodeckiOW05, HorodeckiOW07, ADHW09, Berta09, Renner11}, quantum state redistribution \cite{Devatakyard, YardD09, DattaHO16, BertaCT16} and quantum channel coding \cite{Devetak05private, DupuisHL10, DattaH13, DattaTW2016}, as well as randomness extraction \cite{Renner05, Berta13, BertaFW14}. The central approach in many of these works is to perform a random unitary operation \cite{HorodeckiOW05, HorodeckiOW07} and then discard a part of the system. This technique has been expanded upon in various works such as \cite{Frederic10, Szehr11, DupuisBWR14}. Due to the importance of the decoupling technique and the limitation that random unitaries cannot be implemented with a quantum circuit of small size, there is great interest in finding efficient circuits that achieve the same performance as a random unitary.
Existing methods to make decoupling efficient involve replacing random unitaries with unitary 2-designs \cite{DankertCEL09, DivincenzoLT02, Chau05, CleveLLC16}, which can be simulated by Clifford circuits of small depth, random quantum circuits of small depth \cite{BrownF15} and random unitaries diagonal in the Pauli-$\mathsf{X}$ and Pauli-$\mathsf{Z}$ bases \cite{NakataHMW17}. To elaborate, suppose we are given a quantum state $\Psi_{RC}$ on two registers $R$ and $C$, and we need to make $C$ independent of $R$ by acting on $C$. We must further ensure that the size of the discarded system, which is the cost of the decoupling operation (see Figure \ref{decoupling}), is small enough \footnote{The number of qubits of the discarded system translates to the quantum communication cost of a quantum protocol that employs decoupling. This motivates the question of minimizing the size of the discarded system.}, ruling out the operation that discards all of $C$. The work \cite{CleveLLC16} shows that a quantum circuit of size $\mathcal{O}(\log|C|\log\log|C|)$ and depth $\mathcal{O}(\log\log|C|)$ suffices for this purpose, achieving the same cost as that of a random unitary. A similar circuit size of $\mathcal{O}(\log|C|\log^2\log|C|)$ and depth $\mathcal{O}(\log^3\log|C|)$ is obtained in \cite{BrownF15}, using elementary gates that mimic real-world quantum processes. While the circuit size achieved by the above results is impressive, the gates used in the circuit are highly quantum. More precisely, for a choice of preferred basis such as the computational basis, the gates convert any basis vector into a superposition over these vectors. Can the construction of a decoupling operation be further simplified, by only using gates that are classical (taking basis vectors to basis vectors)?
While being useful for practical implementation, such a construction would also lead to a surprising theoretical simplification: it would leave no conceptual difference between quantum decoupling and its classical counterpart of randomness extraction \cite{NisanZ96, RadhakrishnanT00, Trevisan01}. \begin{figure}[!h] \center \includegraphics[width=10cm]{Decoupling_sysdis.pdf} \\ \includegraphics[width=10cm]{Decoupling_mixunit.pdf} \caption{{\small The decoupling method refers to removing the quantum correlations between two registers $R$ and $C$ by means of quantum operations. The cost of performing a decoupling operation is characterized by the size of the register that must be discarded in order to implement the operation. In $a)$, the discarded register is $T'$ and the operation performed on $CTT'$ is a global unitary $U$. In $b)$, the register $J$ (that is eventually discarded) is maximally mixed to begin with and the operation performed is a controlled unitary. Thus, $J$ can be viewed as classical noise \cite{GroismanPW05}. While the operation in $b)$ is a special kind of operation in $a)$, the following equivalence holds due to the duality between teleportation \cite{Teleportation93} and superdense coding \cite{BennettW92}. For every operation in $a)$ with $\log|T'|$ qubits that are discarded, there is an operation in $b)$ with $2\log|T'|$ bits of noise. Moreover, for every operation in $b)$ with $\log|J|$ bits of noise, there is an operation in $a)$ where $\frac{1}{2}\log|J|$ qubits are discarded.}} \label{decoupling} \end{figure} Random permutation is a canonical classical operation known to perform randomness extraction and also decouple classical-quantum systems \cite{Renner05, Berta13, BertaFW14}. In \cite{DupuisDT14} (see also \cite{Szehr11}) the authors used permutations to derive an analogue of the decoupling theorem that, however, only removes quantum and not classical correlations between $R$ and $C$.
While the remaining classical correlation could also be removed by random permutations, the overall cost of decoupling would be larger than the cost of decoupling by a random unitary. This indicates that a decoupling method, which matches the random unitary decoupling in its cost, can only involve operations that are not classical. This is shown not to be true by the convex-split lemma \cite{AnshuDJ14}, which expresses a relation of the following form \begin{equation} \label{convsplit} \Phi_{RCE} \approx \sum_i p_i \Phi^{(i)}_{RCE}, \end{equation} showing how to view a given quantum state $\Phi_{RCE}$ as a convex combination of (more desirable) quantum states $\Phi^{(i)}_{RCE}$ in order to achieve an information-theoretic task. It implies decoupling (of the type in Figure \ref{decoupling}, $(b)$) when the quantum state on the left hand side (that is, $\Phi_{RCE}$) is a product state across $R$ and $CE$. In particular, it was shown in \cite{AnshuDJ14} that given $\Psi_{RC}$, if we add the quantum state $\sigma_{C_1}\otimes \ldots \sigma_{C_N}$ (for some large enough $N$) and randomly swap the register $C$ with one of the registers $C_1, \ldots C_N$, then the register $R$ becomes independent of all the other registers \footnote{Expressed mathematically via Equation \ref{convsplit}, we set $E=C_1 C_2\ldots C_N$, $\Phi_{RCE}= \Psi_R\otimes \sigma_C\otimes \sigma_{C_1}\otimes \ldots \sigma_{C_N}$, $\Phi^{(i)}_{RCE}=\Psi_{RC_i}\otimes \sigma_C\otimes \sigma_{C_1}\otimes\ldots \sigma_{C_{i-1}}\otimes \sigma_{C_{i+1}}\otimes\ldots \sigma_{C_N}$ and $p_i=\frac{1}{N}$.}; leading to decoupling with the classical operation of permutation of registers. In this work we will solely be interested in quantum tasks where decoupling is the same as constructing an appropriate convex-split, and hence we will use the two terms interchangeably. 
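The swap-based convex split can be illustrated entirely classically. In the sketch below (toy distributions with hypothetical numbers), the register $C$ is hidden in a uniformly random slot among $N$ registers whose remaining slots hold i.i.d. copies of $\sigma$, and the total variation distance to the ideal decoupled state shrinks as $N$ grows:

```python
import itertools
from math import prod

# Toy classical analogue (hypothetical numbers): a correlated pair (R, C)
# and the catalyst distribution sigma on C.
p_rc = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
sigma = {0: 0.5, 1: 0.5}
p_r = {r: sum(p_rc[(r, c)] for c in sigma) for r in (0, 1)}

def tv_after_split(n):
    """Total variation distance between the convex-split state (C hidden in a
    uniformly random slot among n registers, the rest i.i.d. sigma) and the
    ideal decoupled state p_R x sigma^{(x)n}."""
    dist = 0.0
    for r in p_r:
        for cs in itertools.product(sigma, repeat=n):
            mixed = sum(p_rc[(r, cs[i])] * prod(sigma[cs[j]] for j in range(n) if j != i)
                        for i in range(n)) / n
            ideal = p_r[r] * prod(sigma[c] for c in cs)
            dist += abs(mixed - ideal)
    return dist / 2

# decoupling improves as the number of catalyst registers grows
assert tv_after_split(8) < tv_after_split(4) < tv_after_split(1)
```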
However, we highlight that the convex-split method is more general and can be used even in situations where no decoupling exists: such as in classical or classical-quantum communication tasks \cite{AnshuJW17CC, AnshuJW17MC, AGHY16} and resource theoretic tasks \cite{AnshuJH18, BertaM18, LiuW19}. Since the process of swapping two registers is a `classical' operation (that is, it takes basis vectors to basis vectors), the convex-split lemma of \cite{AnshuDJ14} gives a classical unitary for performing quantum decoupling. Unfortunately, the value of $N$ can be as large as $\mathcal{O}(|C|)$, where $|C|$ is the dimension of the register $C$. Hence swapping the register $C$ with a random register $C_i$ requires a circuit of depth $\mathcal{O}(|C|)$, which is exponential in the number of qubits of register $C$. Even an alternate implementation of the swap operation, by placing the registers on a three-dimensional grid, would require $\mathcal{O}(|C|^{1/3})$ operations. Thus, it has so far been unknown if one can achieve quantum decoupling by efficient classical operations. Recent works have shown several applications of the convex-split method in one-shot quantum information theory, along with the dual method of position-based decoding \cite{AnshuJW17CC}. The methods have been used to obtain near-optimal communication for one-shot entanglement-assisted quantum channel coding \cite{AnshuJW17CC}, near-optimal communication for one-shot quantum state splitting \cite{AnshuDJ14} (with a slight improvement of the additive $\log\log|C|$ factor over \cite{Renner11}, for communicating the register $C$) and the smallest known communication for one-shot quantum state redistribution \cite{AnshuJW17SR}. As mentioned earlier, all these protocols use a large amount of entanglement.
Other known protocols, \cite{BennettSST02, DattaH13, DattaTW2016} for entanglement-assisted quantum channel coding and \cite{BertaCT16, DattaHO16} for quantum state redistribution, that do not rely on these two methods use exponentially small entanglement, but their communication is not known to be near-optimal. This motivates the question of finding a scheme that achieves the best of both lines of work. \vspace{0.1in} \section{Our results} We show how to achieve near-optimal communication, with the size of the initial entanglement at most a constant factor away from optimal, in all the aforementioned quantum communication tasks. We further show that, in several cases, the implementation of either the encoding or the decoding operation in the protocol can be made efficient. Our results are obtained by two new methods that we outline below. \vspace{0.1in} \noindent{\bf Efficient decoupling procedures (Method $A$):} As mentioned earlier, the quantity of interest in a decoupling procedure is the number of bits or qubits that are discarded to achieve the decoupling. There are two models under which decoupling is performed, see Figure \ref{decoupling}. The first model involves adding a quantum state, applying a global unitary (without involving the register $R$) and then discarding some quantum system. The second model also involves adding a quantum state followed by a unitary, but the system that is discarded is classical and the unitary acts in a classical-quantum manner \cite{GroismanPW05}. The two models can be converted into each other by a Clifford circuit of depth $1$, and the number of qubits/bits discarded is the same up to a factor of $2$, due to the well known duality between teleportation \cite{Teleportation93} and super-dense coding \cite{BennettW92}. Additional quantum systems that are not discarded act as a catalyst for the decoupling process \cite{Renner11, AnshuDJ14, MajenzBDRC17, AnshuJH18, BertaM18}.
For example, the randomness used in the process of decoupling via a unitary $2$-design acts as a catalyst. In principle, this randomness can be fixed by standard derandomization arguments, but doing so leads to a loss of efficient implementation. In this work, we consider the second model of decoupling. We construct two new convex-split lemmas which immediately lead to efficient decoupling procedures for a quantum state $\Psi_{RC}$ (recall the discussion following Equation \ref{convsplit}). One of these lemmas solves the aforementioned problem of decoupling via an efficient classical operation. \begin{itemize} \item {\bf Method $A.1$:} A set of unitaries $\{V_{\ell}\}_{\ell=1}^{|C|^2}$ on a register $C$ forms a $1$-design if $$\frac{1}{|C|^2}\sum_{\ell}V_{\ell}\rho_C V^{\dagger}_{\ell}= \frac{\id_C}{|C|}, \quad \forall \text{ quantum state } \rho_C.$$ A canonical example of a unitary $1$-design is $\mathcal{P}_{\log|C|}$, the set of tensor products of Pauli $\mathsf{X}$ and $\mathsf{Z}$ operators, if the register $C$ admits a qubit decomposition. Our first procedure shows how to achieve decoupling using a mixture of a small number, $\approx \log|C| - \hmin{C}{R}_{\Psi}$, of unitaries from any $1$-design. Here $\Psi_{RC}$ is the quantum state on registers $R$ and $C$ and $\hmin{C}{R}$ is the conditional min-entropy. The additional randomness used to choose the unitaries is $4\log|C|$ bits. We highlight that this is in stark contrast with many of the previous constructions for decoupling, which required unitaries from a $2$-design. Details appear in Subsection \ref{subsec:1design}. \item {\bf Method $A.2$:} The second decoupling procedure enlarges the Hilbert space $\mathcal{H}_C\otimes \mathcal{H}_C$ in such a manner that the resulting Hilbert space $\mathcal{H}_{\ensuremath{G}}$ has prime dimension $|\ensuremath{G}|\leq 2|C|^2$. This is possible due to Bertrand's postulate \cite{Chebysev1852}, which says that there is a prime between any natural number and twice that number. 
It also introduces a register $L$ of size approximately $N\ensuremath{ \stackrel{\mathrm{def}}{=} } \log|C| - \hmin{C}{R}_\Psi$. A preferred basis on $\mathcal{H}_C$ (such as the computational basis in the qubit representation of the registers) is chosen, which gives a basis $\{\ket{i}_G\}_{i=0}^{|G|-1}$ on $\mathcal{H}_G$. Similarly, a preferred basis $\{\ket{\ell}\}_{\ell=1}^N$ is chosen on $\mathcal{H}_L$. Following this, a unitary operation $U=\sum_{\ell=1}^NU_\ell\otimes \ketbra{\ell}_L$ is applied, where $U_\ell$ acts on two registers $\ensuremath{G}, \ensuremath{G}'\equiv \ensuremath{G}$ as \begin{equation} \label{Uellunits} U_\ell\ket{i}_{\ensuremath{G}}\ket{j}_{\ensuremath{G}'} = \ket{i+(j-i)\ell \mmod{|\ensuremath{G}|}}_{\ensuremath{G}}\ket{j+(j-i)\ell \mmod{|\ensuremath{G}|}}_{\ensuremath{G}'}. \end{equation} Upon tracing out the register $L$, the register $R$ becomes independent of $\ensuremath{G}\ensuremath{G}'$. Furthermore, the final state on the registers $\ensuremath{G}\ensuremath{G}'$ is maximally mixed and the register $\ensuremath{G}'$ is returned in its original state. As can be seen, the unitaries $U_\ell$ are `classical', as they take basis vectors to basis vectors and perform addition and multiplication modulo $|\ensuremath{G}|$. This makes the construction of $U$ efficient, with circuit depth $\mathcal{O}(\log\log|C|)$ and size $\mathcal{O}(\log|C|\log\log|C|)$, due to well-known results in modular arithmetic \cite{McLaughlin04}. Details appear in Subsections \ref{subsec:classicalunit} (proof of decoupling) and \ref{unitimp} (circuit complexity). In the other direction, our result shows that the reversible or quantum circuit complexity (such as depth or size) of integer multiplication modulo a prime is lower bounded by the reversible or quantum circuit complexity of the `best' decoupling method. This holds since integer multiplication is the most expensive step in Equation \ref{Uellunits}. 
We highlight that a super-linear lower bound on the circuit complexity of integer multiplication is an outstanding open question in complexity theory \cite{SchonS71, Furer09}. The aforementioned connection may suggest attacking this problem via an entirely different avenue connected to decoupling \cite{HaydenP07}: the scrambling of quantum information in black holes \cite{LashkariSHOH13}. \end{itemize} \vspace{0.1in} \noindent{\bf Exponential improvement in entanglement (Method $B$):} A \textit{flattening} procedure, which realizes any classical distribution as a marginal of a uniform distribution in a larger space, has been used in the context of classical correlated sampling in several works \cite{Broder97, Charikar2002, KleinbergT02, Holenstein2007, BarakHHRRS08, BravermanRao11, AnshuJW17classical}. A counterpart of this procedure for quantum states was considered in \cite{AJMSY16}. Let the eigendecomposition of $\sigma_C$ be $\sigma_C=\sum_i p_i \ketbra{i}_C$. Append a new register $E$ through the transformation $$\ketbra{i}_C\rightarrow \ketbra{i}_C\otimes\left(\frac{1}{Kp_i}\sum_{j=1}^{Kp_i}\ketbra{j}_E\right),$$ where $K$ is a large enough real such that the $\{Kp_i\}_i$ are all integers \footnote{The existence of such a $K$ can be ensured, for example, by an arbitrarily small perturbation of $\{p_i\}_i$, so that they are all rationals.}. As a result, the quantum state $\sigma_C$ transforms to \begin{equation} \label{flatext} \sigma_C\rightarrow \frac{1}{K}\sum_{i,j: j\leq Kp_i} \ketbra{i}_C\otimes \ketbra{j}_E, \end{equation} which is uniform on a subspace. However, \cite{AJMSY16} did not provide a unitary operation realizing the above extension of $\sigma_C$. We show that this extension can be constructed in a unitary manner using embezzling states \cite{DamH03}. 
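The classical core of the flattening map in Equation \ref{flatext} can be sketched in a few lines; the distribution and the value of $K$ below are arbitrary choices for illustration only:

```python
from fractions import Fraction

def flatten(p, K):
    """Map a distribution {p_i} to a joint distribution over pairs (i, j),
    with j ranging over K*p_i values, so that the joint is uniform on its
    support and the marginal over i recovers p (cf. Equation (flatext))."""
    assert all((K * q).denominator == 1 for q in p), "K*p_i must be integers"
    support = [(i, j) for i, q in enumerate(p) for j in range(int(K * q))]
    return {pair: Fraction(1, int(K)) for pair in support}

# Example: p = (1/2, 1/4, 1/4) with K = 4.
p = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
joint = flatten(p, Fraction(4))
# The joint distribution is uniform on its support ...
assert set(joint.values()) == {Fraction(1, 4)}
# ... and the marginal over the first index recovers p.
marginal = [sum(v for (i, j), v in joint.items() if i == k) for k in range(3)]
assert marginal == p
```

The unitary realization via embezzling states, which is the actual contribution here, is of course not captured by this classical sketch.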
If the basis $\{\ket{i}\}_i$ can be efficiently prepared from the computational basis and the eigenvalues $\{p_i\}_i$ are easy to compute, then the flattening procedure is also computationally efficient. Details appear in Section \ref{sec:maxmutdec}. The consequences of this method are as follows, with all the tasks appearing below summarized in Figure \ref{qcomtasks}. \begin{figure}[!h] \center \includegraphics[width=12cm]{ptop.pdf} \\ \includegraphics[width=12cm]{stateredist.pdf} \caption{{\small The first figure depicts the task of entanglement-assisted quantum channel coding, where the register $M$ holds a message $m\in \{1,2, \ldots, 2^R\}$. The goal is to maximize the value of $R$, while keeping the error in decoding small. The second figure shows the task of quantum state redistribution with entanglement assistance. The goal is to ensure that the register $C$ is obtained by Bob using as little communication $\log|M|$ as possible, while ensuring that $\Psi'\approx \ketbra{\Psi}$.}} \label{qcomtasks} \end{figure} \begin{itemize} \item {\bf Entanglement-assisted classical communication over a quantum channel:} Consider a quantum channel $\mathcal{N}_{A\to B}$, over which we wish to communicate a message from the set $\{1,2,\ldots, 2^R\}$, with small error. The work \cite{BennettSST02} considered the asymptotic and i.i.d. setting for this task, involving the channel $\mathcal{N}_{A\to B}^{\otimes n}$ for large enough $n$. It was shown that the rate of communication $\frac{R}{n}$ converges to $$\max_{\ket{\Psi}_{AA'}}\mutinf{A'}{B}_{\mathcal{N}_{A\to B}(\Psi_{AA'})},$$ where $\mutinf{A'}{B}$ is the quantum mutual information. The number of qubits of entanglement in the protocol from \cite{BennettSST02} was approximately $nS(\Psi_A)$ (the von Neumann entropy) and the rate of communication was shown to be optimal. The work \cite{DattaTW2016} obtained a one-shot version of their protocol, with $\log |A|$ qubits of pre-shared entanglement. 
Their communication was characterized by the \textit{quantum hypothesis testing relative entropy} between the quantum state $\mathcal{N}_{A\to B}(\Psi_{AA'})$ and a separable state derived from $\Psi_{AA'}$, which may not be optimal. The work \cite{AnshuJW17CC} introduced the position-based decoding method, showing how to achieve a communication characterized by the quantum hypothesis testing relative entropy between $\mathcal{N}_{A\to B}(\Psi_{AA'})$ and $\mathcal{N}_{A\to B}(\Psi_{A})\otimes \Psi_{A'}$. The achievable communication is near-optimal, due to the converse given in \cite{MatthewsW14}. However, the protocol in \cite{AnshuJW17CC} required $\mathcal{O}(|A|)$ qubits of entanglement. Using our flattening procedure on the quantum state $\ket{\Psi}_{AA'}$, we show how to achieve the same near-optimal communication with $\mathcal{O}(\log|A|)$ qubits of entanglement. If the flattening procedure is efficient, then the encoding by Alice is efficient as well. Details appear in Subsection \ref{subsec:chancode}. The work \cite{AnshuJW17CC} also studied entanglement-assisted classical communication through various quantum networks, shown to be near-optimal in \cite{AnshuJW19}. Our technique also exponentially improves upon the amount of entanglement in these protocols, while maintaining the achievable communication. \item {\bf Quantum state splitting and quantum state redistribution:} The task of quantum state redistribution \cite{Devatakyard, YardD09} considers a quantum state $\ket{\Psi}_{RABC}$, where the register $R$ is inaccessible, registers $A,C$ are with Alice and register $B$ is with Bob. It is required that, after communication from Alice to Bob, the register $C$ should be held by Bob. Its special cases of quantum state splitting \cite{ADHW09} and quantum state merging \cite{HorodeckiOW05} are equivalent to each other (up to reversal of the protocol); quantum state splitting considers the case where register $B$ is trivial. 
The work \cite{Renner11} obtained a one-shot protocol for quantum state splitting achieving near-optimal communication up to an additive factor of $\mathcal{O}(\log\log|C|)$. This was improved in \cite{AnshuDJ14} through a near-optimal protocol with communication tight up to an additive factor of $\mathcal{O}(1)$. While the protocol in \cite{Renner11} required $\mathcal{O}(\log|C|)$ qubits of pre-shared entanglement, the protocol in \cite{AnshuDJ14} required a much larger number, $\mathcal{O}(|C|)$, of qubits. Here, we show how to improve the number of qubits of pre-shared entanglement to $\mathcal{O}(\log|C|)$, retaining the communication cost in \cite{AnshuDJ14}. Again, we use the flattening procedure, the efficiency of which ensures the efficiency of the decoding operation by Bob. The work \cite{AnshuJW17SR} gave a protocol for quantum state redistribution with the smallest known quantum communication, improving upon the prior work \cite{BertaCT16}. But the number of qubits of pre-shared entanglement required was exponentially larger than that in \cite{BertaCT16}. Similar to the aforementioned results, here we give a protocol that has quantum communication similar to \cite{AnshuJW17SR} and a number of qubits of entanglement similar to \cite{BertaCT16}. Details appear in Subsection \ref{subsec:stateredist}. \end{itemize} \section{Proof outline} The proofs of the results presented in Method $A$ crucially rely on the following simple identity, which was first shown in \cite{AnshuDJ14}. Below, $\relent{.}{.}$ is the quantum relative entropy \cite{umegaki1954}. $$\relent{\sum_i p_i \rho_i}{\theta} = \sum_i p_i \left(\relent{\rho_i}{\theta} - \relent{\rho_i}{\rho}\right), \quad \text{where } \rho \ensuremath{ \stackrel{\mathrm{def}}{=} } \sum_i p_i \rho_i.$$ This relation allows us to decompose the convex combination in Equation \ref{convsplit} into individual components. In addition, the proof of the decoupling result in Method $A.1$ also uses the notion of pairwise independent random variables to reduce the size of the additional randomness, inspired by \cite{AnshuJW17MC}. 
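The identity above can be checked numerically on random full-rank density matrices; the sketch below (base-2 logarithms, dimensions and seed chosen arbitrarily for illustration) verifies it to machine precision:

```python
import numpy as np

def mlog(m):
    """Matrix logarithm (base 2) of a full-rank Hermitian matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.log2(w)) @ v.conj().T

def relent(rho, sigma):
    """Quantum relative entropy D(rho||sigma) in bits (full-rank states)."""
    return np.trace(rho @ (mlog(rho) - mlog(sigma))).real

def rand_state(d, rng):
    """Random full-rank density matrix of dimension d."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T + 1e-3 * np.eye(d)
    return m / np.trace(m).real

rng = np.random.default_rng(0)
d, n = 3, 4
p = rng.dirichlet(np.ones(n))                  # mixing probabilities p_i
rhos = [rand_state(d, rng) for _ in range(n)]  # components rho_i
theta = rand_state(d, rng)
rho = sum(pi * s for pi, s in zip(p, rhos))    # rho = sum_i p_i rho_i

lhs = relent(rho, theta)
rhs = sum(pi * (relent(s, theta) - relent(s, rho)) for pi, s in zip(p, rhos))
assert abs(lhs - rhs) < 1e-8   # the identity holds exactly
```

The identity is exact because the $\operatorname{Tr}\rho_i\log\rho_i$ terms cancel between the two relative entropies on the right-hand side.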
The proof of the decoupling result in Method $A.2$ is more subtle, as it requires us to find a collection of unitaries that form an appropriate representation of the cyclic group. Our construction, which is based on modular arithmetic, is inspired by explicit constructions of pairwise independent random variables \cite{Lovettnotes, KCN13}. To implement the flattening procedure in Method $B$, we show new relationships for quantum embezzlement. Let $\xi_D\ensuremath{ \stackrel{\mathrm{def}}{=} } \frac{1}{S}\sum_{j=1}^n\frac{1}{j}\ketbra{j}_D$ be the marginal of the embezzling state from \cite{DamH03}, for some integer $n$, with $S$ the normalization factor. Let $\rho_E\ensuremath{ \stackrel{\mathrm{def}}{=} } \frac{1}{b}\sum_{e=1}^b\ketbra{e}_E$ be uniform on a support of size $b$. We show the existence of a unitary $U_b$ such that $$\dmax{U_b\left(\xi_D\otimes \ketbra{1}_E\right)U^{\dagger}_b}{\xi_D \otimes \rho_E} \leq \delta,$$ whenever $n> b^{\frac{1}{\delta}}$. Here $\dmax{.}{.}$ is the quantum max-relative entropy \cite{Datta09, Jain:2009}. Thus, it is possible to embezzle certain states with an error guarantee in max-relative entropy, improving upon the error guarantee in fidelity \cite{DamH03}. We crucially use this in our proofs, as a small max-relative entropy allows us to bound other one-shot information-theoretic terms. \section{Discussion} Method $A.1$ is reminiscent of the derandomizing unitaries constructed in \cite{AmbainisS04}, which also use a unitary $1$-design, for quantum encryption. But there is a difference between our setting and that of \cite{AmbainisS04}, since the number of unitaries that we use depends on the conditional min-entropy of the quantum state. On the other hand, the authors of \cite{AmbainisS04} only aim to decouple the maximally entangled state. 
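To make the $1$-design property concrete: for $n$ qubits, averaging over the $4^n$ tensor products of powers of Pauli $\mathsf{X}$ and $\mathsf{Z}$ sends every state to the maximally mixed state. A small numerical check (illustrative only, not the construction used in Method $A.1$ itself):

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_family(n):
    """All 4^n unitaries X^a Z^b, tensored over n qubits."""
    singles = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
               for a, b in product(range(2), repeat=2)]
    for combo in product(singles, repeat=n):
        U = np.array([[1.0 + 0j]])
        for u in combo:
            U = np.kron(U, u)
        yield U

def rand_state(d, rng):
    """Random density matrix of dimension d."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

n = 2
d = 2 ** n
rho = rand_state(d, np.random.default_rng(1))
avg = sum(U @ rho @ U.conj().T for U in pauli_family(n)) / 4 ** n
assert np.allclose(avg, np.eye(d) / d)   # 1-design: output is maximally mixed
```

This is the standard Pauli twirl; the point of Method $A.1$ is that a much smaller, min-entropy-dependent mixture of such unitaries already suffices for decoupling.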
We may also compare Method $A.1$ with the unitaries in \cite{NakataHMW17}, which shows how to perform decoupling with random unitaries diagonal in either the $\mathsf{X}$ or $\mathsf{Z}$ basis. Our construction also yields a unitary diagonal in either the $\mathsf{X}$ or $\mathsf{Z}$ basis, but it is explicit (that is, not a random unitary) and uses some additional catalytic randomness. As mentioned earlier, the construction in Method $A.2$ is efficient, with circuit depth $\mathcal{O}(\log\log|C|)$ and size $\mathcal{O}(\log|C|\log\log|C|)$. This already achieves the performance of circuits based on unitary $2$-designs \cite{CleveLLC16} and improves upon the performance of \cite{BrownF15}, with an arguably simpler construction. The unitaries $\{U_\ell\}_{\ell}$, as defined in Equation \ref{Uellunits}, have the interesting property that they act as a representation of the cyclic group, reflecting the property of the permutation operations in the convex-split method. In the language of the resource theory of coherence, both decoupling procedures in Method $A$ belong to the class of Physically Incoherent Operations \cite{StreltsovAP17}. Thus, an immediate implication of our results is that quantum decoupling can be performed by incoherent unitaries. These decoupling procedures perform the same as decoupling via a random unitary \cite{Frederic10, Berta13, DupuisBWR14}, when we consider the size of the discarded system. None of these results (those in Method $A$ and decoupling via a random unitary) are optimal, due to the additional effort put into making the decoupled register $C$ uniform. Indeed, it is known that the optimum cost of decoupling is characterized by the max-mutual information, rather than the conditional min-entropy \cite{Renner11, AnshuDJ14, MajenzBDRC17}. Method $B$ leads to a decoupling procedure achieving this, as it reduces the task to the case of a uniform (or flat) marginal. 
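The cyclic-group property of the unitaries in Equation \ref{Uellunits} can be verified directly, since each $U_\ell$ is a permutation of basis vectors; the sketch below uses a small prime dimension chosen for illustration:

```python
def U(ell, p):
    """The permutation (i,j) -> (i+(j-i)l mod p, j+(j-i)l mod p), cf. Eq. (Uellunits)."""
    return {(i, j): ((i + (j - i) * ell) % p, (j + (j - i) * ell) % p)
            for i in range(p) for j in range(p)}

p = 7  # a small prime dimension |G|
for ell in range(p):
    perm = U(ell, p)
    # each U_l permutes the basis (it is a 'classical' unitary) ...
    assert sorted(perm.values()) == sorted(perm.keys())
    # ... and the family is a representation of the cyclic group Z_p:
    for m in range(p):
        composed = {k: U(m, p)[v] for k, v in U(ell, p).items()}
        assert composed == U((ell + m) % p, p)
```

The composition law $U_m U_\ell = U_{\ell+m}$ follows because the difference $j-i$ is invariant under each $U_\ell$, so the shifts $(j-i)\ell$ and $(j-i)m$ simply add.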
As shown in Equation \ref{flatext}, the central idea behind Method $B$ is to flatten a non-uniform quantum state, and use resource-efficient protocols for the flattened state. The work \cite{Renner11} used a different technique for flattening the eigenvalues of a quantum state. Their technique was to distribute the eigenvalues into bins $[2^{-i-1}, 2^{-i}]$ and run a protocol within each bin (on a high level, the protocols in \cite{BennettSST02, DattaTW2016} also place the eigenvalues into uniform bins). While this method can be used for quantum state splitting (with a loss of communication of $\approx \log\log|C|$, required for transmitting the information about the bin), it is not clear how it can be used to construct a near-optimal entanglement-assisted protocol for quantum channel coding or quantum state redistribution. Our method does not face this limitation and can be uniformly applied to all the quantum communication scenarios. Further, our use of embezzling states in both quantum state splitting and entanglement-assisted quantum channel coding further highlights the duality between the two tasks \cite{BennettDHSW14, Renner11}. We end this section with some open questions. Our first question is whether there exists an analogue of Method $B$ that does not require embezzling states to achieve near-optimal decoupling. An efficient scheme could lead to new protocols with an even smaller number of qubits of pre-shared entanglement in quantum communication tasks. Another important question is whether the number of bits of additional randomness used in Method $A$ can be reduced further. It is known that the seed size in randomness extraction in the presence of quantum side information can be very small \cite{DePVR12} (based on Trevisan's construction \cite{Trevisan01}). Since our construction treats classical side information and quantum side information in a similar manner, we can hope to have similar results even in the case of quantum decoupling. 
\subsection*{Acknowledgment} This work was completed when A.A. was at the Centre for Quantum Technologies, National University of Singapore, Singapore. This work is supported by the Singapore Ministry of Education through the Tier 3 Grant ``Random numbers from quantum processes'' MOE2012-T3-1-009 and VAJRA Grant, Department of Science and Technology, Government of India. \bibliographystyle{naturemag}
\section{Introduction} \label{sec:intro} Planet formation is a process that is still far from having a complete, robust, and widely accepted theory. Multiple theories aim to explore the way planets form \citep[e.g.][]{Benz2014prpl.conf..691B,Kratter2016ARA&A..54..271K,Johansen2017AREPS..45..359J}. To understand planet formation, high-resolution observations and large surveys of the birthplaces of planets, the protoplanetary disks (PPDs), are essential. In recent years, the Atacama Large Millimeter/Submillimeter Array (ALMA) has not only provided a sizable number of highly resolved observations of PPDs but, due to its high sensitivity, has also enabled several large, mid-resolution (of the order of \SI{100}{mas}) surveys of different star-forming regions \citep[for a review see][and references therein]{Andrews2020}, providing crucial information on population properties such as distributions of disk sizes, fluxes, or spectral indices for disks across stars of different masses and ages and across star-forming regions in different environments. Highly resolved observations and large mid-resolution surveys are complementary and are both essential for understanding the connections between key properties of PPDs. One of the important diagnostics is the continuum luminosity ($\mathrm{L_{mm}}$) at (sub-)millimeter (hereafter ``mm'') wavelengths, since this emission is produced by the solid grains and can thus trace the dust mass of the disk \citep{Beckwith1990AJ.....99..924B}, i.e. the amount of material available to form planets. Assuming a constant dust-to-gas ratio (usually $\rm{0.01}$ based on observations of the interstellar medium, see \citealp{Bohlin1978}), the dust mass can be converted to the total disk mass (dust and gas). 
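The standard optically thin conversion from millimeter flux to dust mass \citep{Beckwith1990AJ.....99..924B} can be sketched as follows; the flux, distance, opacity, and temperature below are illustrative values, not those of any particular source:

```python
import numpy as np

# cgs constants
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
M_sun, pc = 1.989e33, 3.086e18

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass(F_nu, d, nu, kappa_nu, T):
    """Optically thin dust mass: M_dust = F_nu d^2 / (kappa_nu B_nu(T))."""
    return F_nu * d**2 / (kappa_nu * planck(nu, T))

# Illustrative values: 100 mJy at 1.3 mm (230 GHz), d = 140 pc,
# kappa_nu = 2.3 cm^2/g, T = 20 K.
M = dust_mass(F_nu=0.1e-23, d=140 * pc, nu=230e9, kappa_nu=2.3, T=20.0)
print(M / M_sun)  # of order 1e-4 solar masses of dust
```

Note how directly the inferred mass inherits the assumed opacity and temperature, which is the source of uncertainty discussed below.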
Surveys that have measured the disk dust mass ($\mathrm{M_{dust}}$) have been used to correlate this property with the mass of the host star ($\mathrm{M_{\star}}$), and a linear relation has been found between them \citep{Andrews2013} that appears to steepen with time \citep[][]{Pascucci2016ApJ...831..125P,Ansdell2017AJ....153..240A,Barenfeld2016ApJ...827..142B}. Furthermore, a steeper-than-linear relationship has been observed between $\mathrm{L_{mm}}$ and $\mathrm{M_{\star}}$ \citep[][]{Andrews2013,Pascucci2016ApJ...831..125P,Ansdell2016ApJ...828...46A}. Recently, theoretical studies have started to explain these correlations \citep[e.g.][]{Pascucci2016ApJ...831..125P,stammler2019,pinilla2020A&A...635A.105P} using numerical dust evolution models that include grain growth, radial drift, fragmentation of dust particles, and particle traps \citep[e.g.][]{Birnstiel2010,BKE2012,Krijt2016}. These observations of dust thermal continuum emission are crucial for the characterization of disk evolution. However, the methods described above that relate dust emission to physical quantities like total disk masses carry uncertainty arising from multiple assumptions, such as the dust opacity and the disk temperature, that may vary for every disk \citep[e.g.][]{Hendler2017,Ballering2019}. The grain opacity depends on the unknown particle size distribution, composition, and particle structure \citep[e.g.][and references therein]{Birnstiel_DSHARP}, although considerable efforts have been undertaken in modeling \citep[e.g.][]{Wada2008,Wada2009,Okuzumi2009,Seizinger2013a,Seizinger2013b} and experimental studies \citep[e.g.][]{Blum2008,Guttler2010,Gundlach2015} of aggregation and in the computation of optical properties of aggregates \citep[e.g.][]{Kataoka2014,Min2016,Tazaki2016}. Another important property for the characterization of a disk population is the disk size. 
Viscous theory \citep{LBP1974} predicts that a fraction of the disk mass keeps moving outwards, the so-called \textit{viscous spreading}, suggesting the disk size should increase with time. In principle, a measurement of the disk size as a function of time could trace this evolution, testing viscous theory and quantifying its efficiency. The most readily available tracer of the disk size is the continuum, as gas tracers suffer from uncertain abundances (due to freeze-out, dissociation, and other effects) and sensitivity constraints. Since the disk does not have a clear outer edge, we need to introduce an effective radius and express the size as a function of the total continuum emission. This size metric is called the emission size or \textit{effective radius} ($\mathrm{r_{eff}}$) \citep{tripathi2017millimeter}. However, the dust component behaves differently from the gas, mainly due to an effect termed radial drift \citep[e.g.][]{Whipple1972,Weidenschilling1977MNRAS.180...57W, Takeuchi2002ApJ...581.1344T}. The dust particles interact with the sub-Keplerian gas disk via aerodynamic drag forces, leading them to migrate towards the star. As an observational implication, the dust emission is less extended than the gas emission \citep[e.g.][]{Andrews2012,Isella2012ApJ...747..136I,Andrews2016ApJ...820L..40A,Cleeves2016ApJ...832..110C}, as predicted by \citet{Birnstiel2014ApJ...780..153B} (but see \citealp{Trapman2020}). Radial drift is also heavily dependent on the grain size; therefore, grain growth \citep[e.g.][]{BKE2012} has to be included in numerical studies that aim to use the dust disk radius. \citet{rosotti2019time_evolution} studied theoretically how the dust disk radius evolves with time in a viscously evolving disk and addressed whether its evolution is set by viscous spreading or by dust processes such as grain growth and radial drift. 
They found that viscous spreading influences the dust and leads to the dust disk expanding with time. Many surveys have been performed to explore the relation between the two diagnostics \citep[e.g.][]{ Andrews2010ApJ...723.1241A,Pietu2014A&A...564A..95P,Hendler2020ApJ...895..126H}. Recently, a sub-arcsecond resolution survey of 50 nearby protoplanetary disks, conducted with the Submillimeter Array (SMA) by \citet{tripathi2017millimeter}, showed a strong size-luminosity relation (hereafter ``SLR'') within the observed population. The follow-up program in \citet{Andrews2018a}, a combined analysis of the \citet{tripathi2017millimeter} data and ALMA data from the Lupus disk sample (105 disks in total), confirmed the scaling relations between $\mathrm{r_{eff}}$ and both $\mathrm{L_{mm}}$ and $\mathrm{M_{\star}}$. However, not all studied star-forming regions show the same correlation; it appears to vary with the age of the region \citep{Hendler2020ApJ...895..126H}. In recent years, due to its unprecedented sensitivity and resolution, ALMA has provided a plethora of groundbreaking images of protoplanetary disks. Most of these disks do not show a smooth and monotonically decreasing surface density profile, but are instead composed of single or multiple symmetric annular sub-structures, e.g., HL Tauri \citep{ALMA_partnership2015ApJ...808L...3A}, TW Hya \citep[][]{Andrews2016ApJ...820L..40A,Tsukagoshi2016ApJ...829L..35T}, HD 163296 \citep{Isella2016PhRvL.117y1101I}, HD 169142 \citep{Fedele2017A&A...600A..72F}, AS 209 \citep{Fedele2018A&A...610A..24F}, HD 142527 \citep{Casassus2013Natur.493..191C} and many more in the recent DSHARP survey \citep{Andrews_DSHARP_2018ApJ...869L..41A}. Moreover, non-axisymmetric features like spiral arms \citep[e.g.][]{Perez2016Sci...353.1519P,Huang2018III} and lopsided rings \citep[e.g.][]{Nienke2013Sci...340.1199V} have been observed. 
Many ideas have been put forward to explain the origin of these ring-like sub-structures, but one of the most favored explanations is the formation of gaps in the gas surface density due to the presence of planets. A massive planet ($\rm{\geq 0.1 M_{jup}}$) \citep{Zhang2018DSHARPApJ...869L..47Z} is able to open a gap in the surrounding gaseous disk, thereby generating a pressure maximum. The dust particles then migrate towards the local pressure bump due to radial drift \citep{Weidenschilling1977MNRAS.180...57W,Nakagawa1986Icar...67..375N}, leading to the annular shape \citep[e.g.][]{Rice2006MNRAS.373.1619R,Pinilla2012A&A...545A..81P}. These narrow rings may be optically thick or moderately optically thick, but in between these features the material is approximated as optically thin \citep{Dullemond2018DSHARP}. These rings, however, can contain large amounts of dust, which can increase the total luminosity of a disk and shift its position with respect to the SLR. The SLR might contain crucial information about disk evolution and planet formation theory. Our goal is to explore the physical origins of the SLR from \citet{tripathi2017millimeter} and \citet{Andrews2018a} by performing a large population study of models with gas and dust evolution. We aim to characterize the key properties of disks that reproduce the observational results. We explore the differences in the SLR of disks that have a smooth surface density profile and disks that contain weak and strong sub-structures. In \autoref{sec:methods}, we discuss the methods we used to carry out our computational models. The results of this analysis are presented in \autoref{sec:results}, where we explain the global effect of every parameter on the population of disks and we present the general properties that disks should have to follow the SLR. In \autoref{sec:discussion} we discuss the theoretical and observational implications of our results. We draw our conclusions in \autoref{sec:conclusions}. 
\section{Methods} \label{sec:methods} We carry out 1-D gas and dust evolution simulations using a slightly modified version of the two-population model (two-pop-py) by \citet[][]{BKE2012,Birnstiel2015ApJ...813L..14B}, while we also mimic the presence of planets. As a post-processing step, we calculate the intensity profile and the disk continuum emission. With the purpose of running a population study, we use a large grid of parameters (see \autoref{tabel:tbl_params}), so that we can explore the differences that occur due to the different initial conditions. In the next sections, we explain the procedure in more detail. \subsection{Disk evolution} The gas follows the viscous evolution equation. For the disk evolution, we use the turbulent effective viscosity as in \citet{shakura1973black}, \begin{equation} \mathrm{\nu = \alpha_{gas} \frac{c_{s}^{2}}{\Omega_{K}}}\,, \label{eq:visc} \end{equation} and the dust diffusion coefficient \begin{equation} \mathrm{D = \alpha_{dust} \frac{c_{s}^{2}}{\Omega_{K}}}\,, \label{eq:diff} \end{equation} with $\mathrm{\alpha_{gas}}$ being the turbulence parameter, $\mathrm{c_{s}}$ the sound speed and $\mathrm{\Omega_{K}}$ the Keplerian frequency. The latter expression lacks the term $\rm{\frac{1}{1+St^{2}}}$, where $\rm{St}$ is the Stokes number, but we can ignore it since the Stokes number is always $\rm{<1}$ in our simulations \citep{Youdin2007Icar..192..588Y}. We split the $\alpha$ parameter into two different values: one for the gas, $\alpha_\mathrm{gas}$, and one for the dust, $\alpha_\mathrm{dust}$, since we later mimic planetary gaps by locally varying the viscosity (see \autoref{subsec:planets}). In the smooth case $\alpha_\mathrm{dust} = \alpha_\mathrm{gas}$. The dust is described by the two-population model of \citet{BKE2012}, which evolves the dust surface density under the assumption that the small dust is tightly coupled to the gas, while the large particles can decouple from the gas and drift inwards. 
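For orientation, \autoref{eq:visc} can be evaluated for typical disk conditions; the numbers below (a solar-mass star, $T=\SI{200}{K}$ at $\SI{1}{au}$, mean molecular weight $2.3$) are illustrative assumptions, not values from our grid:

```python
import numpy as np

# cgs constants
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, au = 1.989e33, 1.496e13

def viscosity(alpha, T, r, M_star, mu=2.3):
    """Shakura-Sunyaev viscosity nu = alpha * c_s^2 / Omega_K (cf. Eq. visc)."""
    c_s2 = k_B * T / (mu * m_p)           # isothermal sound speed squared
    omega_K = np.sqrt(G * M_star / r**3)  # Keplerian frequency
    return alpha * c_s2 / omega_K

# alpha = 1e-3, T = 200 K at 1 au around a 1 Msun star (illustrative)
nu = viscosity(alpha=1e-3, T=200.0, r=au, M_star=M_sun)
```

The diffusion coefficient of \autoref{eq:diff} has the same form with $\alpha_{dust}$ in place of $\alpha_{gas}$.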
The initial dust growth phase uses the current dust-to-gas ratio, instead of the initial value as in \citet{BKE2012}. We set the initial gas surface density according to the self-similar solution of \citet{LBP1974}, \begin{equation} \mathrm{\Sigma_{g}(r)} =\mathrm{\Sigma_{0} \left(\frac{r}{r_{c}}\right) ^{-\gamma} \exp\left[-\left(\frac{r}{r_{c}}\right)^{2-\gamma}\right]}\,, \label{eq:surf_dens} \end{equation} where $\mathrm{\Sigma_{0} = (2-\gamma) M_{d} / (2 \pi \ensuremath{r_\mathrm{c}}\xspace^{2})}$ is the normalization parameter, which is set in every simulation by the disk mass $\mathrm{M_{d}}$. The other parameters are $\mathrm{\gamma}=1$, the viscosity exponent, which is not varied throughout our models, and \ensuremath{r_\mathrm{c}}\xspace, the characteristic radius of the disk (see \autoref{tabel:tbl_params}). When $r \ll \ensuremath{r_\mathrm{c}}\xspace$, $\mathrm{\Sigma_{g}}$ follows a power law, and when $\mathrm{r \geq \ensuremath{r_\mathrm{c}}\xspace}$, $\mathrm{\Sigma_{g}}$ is dominated by the exponential factor. The initial dust distribution follows the gas distribution with a constant dust-to-gas ratio of $\mathrm{\Sigma_{d}}$/$\mathrm{\Sigma_{g}}$=0.01. The initial grain size (= monomer grain size) is $a_\mathrm{min} = \SI{0.1}{\mu m}$. This monomer size stays constant in time and space, while the representative size of the large grains increases with time as particles grow. The particle bulk density is $\rm{\rho_{s}}=\SI{1.7}{g/cm^3}$ for the standard opacity model from \citet{Ricci_2010} and $\rm{\rho_{s}}=\SI{1.675}{g/cm^3}$ for the DSHARP \citep{Birnstiel_DSHARP} opacity, but decreases for different values of porosity (see \autoref{sub:Observables}). We evolve the disks up to $\mathrm{\SI{10}{Myr}}$ to study the long-term evolution, but in the following analysis we only show results from $\rm{\SI{300}{kyr}}$ to $\SI{3}{Myr}$ (see \autoref{sub:survival_frequency}). 
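A quick numerical check of the normalization in \autoref{eq:surf_dens}: integrating $2\pi r\,\Sigma_g(r)$ over the radial grid should recover the disk mass $\mathrm{M_{d}}$ (the values of $\mathrm{M_{d}}$ and \ensuremath{r_\mathrm{c}}\xspace below are arbitrary picks from the grid ranges):

```python
import numpy as np

M_sun, au = 1.989e33, 1.496e13

def sigma_gas(r, M_d, r_c, gamma=1.0):
    """Self-similar profile of Lynden-Bell & Pringle (cf. Eq. surf_dens)."""
    sigma0 = (2 - gamma) * M_d / (2 * np.pi * r_c**2)
    return sigma0 * (r / r_c)**(-gamma) * np.exp(-(r / r_c)**(2 - gamma))

M_d, r_c = 0.05 * M_sun, 60 * au
r = np.logspace(np.log10(0.05 * au), np.log10(2000 * au), 4000)
integrand = 2 * np.pi * r * sigma_gas(r, M_d, r_c)
mass = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
assert abs(mass / M_d - 1) < 5e-3   # the profile integrates to M_d
```

The small residual comes from the mass inside the inner grid edge at $\SI{0.05}{au}$; the outer truncation at $\SI{2000}{au}$ is negligible.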
Our $\mathrm{1-D}$ radial grid ranges from $\rm{0.05}$ to $\rm{\SI{2000}{au}}$, and the grid cells are spaced logarithmically. We use an adaptive temperature that depends on the luminosity of each star. Since the stellar mass changes in our grid, the stellar luminosity changes too. We follow the temperature profile \begin{equation} \mathrm{T = \left(\phi \frac{L_{\star}}{4\pi\sigma_{SB}r^{2} }+(\SI{10}{K})^{4}\right)^{1/4}}\,, \label{eq:Temp} \end{equation} as in \citet{Kenyon_Hartmann_1996}. In this equation, $\mathrm{L_{\star}}$ is the stellar luminosity, $\mathrm{\phi} =0.05$ is the flaring angle, $\mathrm{\sigma_{SB}}$ is the Stefan-Boltzmann constant and $\mathrm{r}$ the radius. The term $(\SI{10}{K})^{4}$ is a floor value, so that we do not allow the disk temperature to drop below $\SI{10}{K}$ in the outer parts of the disk. We use the evolutionary tracks of \citet{Siess_2000A&A...358..593S} to obtain the luminosity of a $\mathrm{\SI{1}{Myr}}$ old star of the given mass. The stellar luminosity and effective temperature are not evolved in our simulations. However, the luminosity of a \SI{1}{M_{\odot}} star would decrease from $\rm{\sim \SI{2.4}{L_{\odot}}}$ at \SI{1}{Myr} to $\rm{\sim \SI{1}{L_{\odot}}}$ at \SI{3}{Myr}. In \autoref{disc:limitations} we explore how a change in stellar luminosity affects our results. \subsection{Population study} We use an extended parameter grid, varying the initial values of the turbulence parameter ($\mathrm{\alpha_{gas}}$), disk mass ($\mathrm{M_{d}}$), stellar mass ($\mathrm{M_{\star}}$), characteristic radius (\ensuremath{r_\mathrm{c}}\xspace) and fragmentation velocity ($\mathrm{v_{frag}}$). For every parameter, we pick $\mathrm{10}$ values specified in \autoref{tabel:tbl_params}, taking all possible combinations between them, leading to a total of $100{,}000$ simulations. 
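\autoref{eq:Temp} can be sketched directly; the check below verifies the irradiation scaling at small radii and the $\SI{10}{K}$ floor far out (the stellar luminosity is the illustrative \SI{1}{M_{\odot}} value quoted above):

```python
import numpy as np

sigma_SB, L_sun, au = 5.670e-5, 3.828e33, 1.496e13  # cgs

def temperature(r, L_star, phi=0.05, T_floor=10.0):
    """Passively irradiated disk temperature with a 10 K floor (cf. Eq. Temp)."""
    return (phi * L_star / (4 * np.pi * sigma_SB * r**2) + T_floor**4) ** 0.25

T_in = temperature(1 * au, 2.4 * L_sun)      # ~1 Msun star at 1 Myr
T_out = temperature(2000 * au, 2.4 * L_sun)
assert T_in > 100.0              # irradiation-dominated in the inner disk
assert abs(T_out - 10.0) < 1.0   # the floor value dominates far out
```

In the irradiation-dominated regime this profile falls off as $T \propto r^{-1/2}$, before saturating at the floor.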
\begin{table} \caption{Grid parameters of the model} \begin{center} \centering \begin{tabular}{l l l } \toprule Parameter & Description & Value-Range \\ [0.3ex] \hline \hline $\mathrm{\Sigma_{d}/\Sigma_{g}}$ & initial dust-to-gas ratio & $\rm{0.01}$ \\ [0.3ex] $\mathrm{\rho_{s}}$ \hfill [$\rm{g/cm^{3}}$]& particle bulk density & $\rm{1.7, 1.675}$ \\ [0.3ex] $\mathrm{\gamma}$ & viscosity exponent & $\rm{1}$ \\ [0.3ex] $r$ \hfill [$\mathrm{au}$] & logarithmic grid extent & $\rm{0.05 - 2000}$ \\ [0.3ex] $\mathrm{n_{r}}$ \hfill [cells] & grid resolution & $\rm{400}$ \\ [0.3ex] $\mathrm{t}$ \hfill [years] & duration of each simulation & $\rm{10^{7}}$ \\ [0.3ex] \bottomrule \end{tabular} \end{center} \label{tabel:input_param} \end{table} \begin{table} \caption{Variables of the model} \begin{center} \centering \begin{tabular}{l l l } \toprule Parameter & Description & Values \\ [0.3ex] \hline \hline $\mathrm{\alpha}$ & viscosity parameter & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-4}}$ \\ [0.3ex] & & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-3}}$\\ [0.3ex] & & $\mathrm{\{1,2.5\}\cdot10^{-2}}$ \\ [0.3ex] $\mathrm{M_{d}}$ \hfill [M$_{\star}$] & initial disk mass & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-3}}$ \\ [0.3ex] & & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-2}}$\\ [0.3ex] & & $\mathrm{\{1,2.5\}\cdot10^{-1}}$ \\ [0.3ex] $\mathrm{M_\star}$ \hfill [M$_{\odot}$] & stellar mass & $\mathrm{0.2, 0.4, 0.6, 0.8, 1.0}$ \\ [0.3ex] & & $\mathrm{1.2, 1.4, 1.6, 1.8, 2.0}$\\ [0.3ex] \ensuremath{r_\mathrm{c}}\xspace \hfill [$\mathrm{au}$] & characteristic radius & $\mathrm{10, 30, 50, 80, 100}$ \\ [0.3ex] & & $\mathrm{130, 150, 180, 200, 230}$\\ [0.3ex] $\mathrm{v_{f}}$ \hfill [$\mathrm{cm/s}$]& fragmentation velocity & $\mathrm{200, 400, 600, 800, 1000}$ \\ [0.3ex] & & $\mathrm{1200, 1400, 1600, 1800, 2000}$\\ [0.3ex] q & planet/star mass ratio & $3\cdot10^{-4}$, $10^{-3}$, $3\cdot10^{-3}$ \\ [0.3ex] r$_{p}$ \hfill [r$_{c}$]& planet position & $1/3$, $2/3$ \\ [0.3ex] \bottomrule \end{tabular} \end{center}
\label{tabel:tbl_params} \end{table} \subsection{Planets} \label{subsec:planets} A large planet embedded in a disk produces a co-orbital gap in the gas density. To mimic gap opening by planets in our simulations, we altered the $\mathrm{\alpha_{gas}}$ turbulence parameter. Since in steady state $\mathrm{\alpha_{gas}}$ $\mathrm{\cdot}$ $\mathrm{\Sigma_{g}}$ is constant, the $\mathrm{\alpha}$ parameter and the surface density $\mathrm{\Sigma_{g}}$ are inversely proportional quantities, so a bump in the $\mathrm{\alpha_{gas}}$ profile leads to a gap in the surface density profile. The reason for modifying $\mathrm{\alpha_{gas}}$ and not $\mathrm{\Sigma_{g}}$ directly is that the surface density evolves with time, starting from \autoref{eq:surf_dens}. By inserting the bump in $\mathrm{\alpha_{gas}}$, $\mathrm{\Sigma_{g}}$ still evolves viscously while at the same time developing a planetary gap shape. Following the prescription from \citet{Kanagawa2016}, we mimic the effect of planets with different planet/star mass ratios $q$ (see \autoref{tabel:tbl_params}). For reference, $q=10^{-3}$ represents a Jupiter-mass planet around a solar-mass star. This way, we can study the effect of planetary gaps and rings on the observable properties of the disk and extract the key observables in a computationally efficient way, avoiding the need to run expensive hydrodynamic simulations for each combination of parameters. Choosing the appropriate profile that mimics a planetary gap is not trivial, so we performed hydrodynamical simulations using FARGO-3D \citep{FARG03D_2015ascl.soft09006B} and compared the effect on the observable quantities. The \citet{Kanagawa2016} profile is an analytical approximation of the gap depth and width, but it does not necessarily represent the pressure bump that is caused by the planet. Therefore we tested how strongly this assumption affects the properties of the dust in the trap by comparing against proper hydrodynamical solutions and disk evolution.
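For intuition on the gap depths involved, the gap-depth scaling of \citet{Kanagawa2016}, $\mathrm{\Sigma_{min}/\Sigma_{0} = 1/(1+0.04\,K)}$ with $\mathrm{K = q^{2}(h/r)^{-5}\alpha^{-1}}$, can be sketched as follows. This is illustrative only: the aspect ratio $h/r$ below is an assumed value (not one of our grid parameters), and our simulations impose the gap through a bump in $\mathrm{\alpha_{gas}}$ rather than by evaluating this formula directly:

```python
def gap_depth(q, h_over_r, alpha):
    """Kanagawa et al. gap-depth scaling:
    Sigma_min / Sigma_0 = 1 / (1 + 0.04 K), K = q^2 (h/r)^-5 / alpha.
    Illustrative sketch; h_over_r is an assumed aspect ratio."""
    K = q**2 * h_over_r**(-5) / alpha
    return 1.0 / (1.0 + 0.04 * K)

# A Jupiter-mass planet around a solar-mass star (q = 1e-3), assuming h/r = 0.05:
depth = gap_depth(1e-3, 0.05, 1e-3)  # the gas surface density drops by ~99%
```

The scaling makes the trends explicit: more massive planets and lower viscosities carve deeper gaps.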
We found that the depth of the gap is not as important for the evolution of the disk on the SLR; the width is the dominant factor. As long as the planet is massive enough to create a strong pressure maximum and thus stop the particles, the position of the pressure maximum is more important than a precise value of the gap depth. In summary, what matters most is not the gap depth itself but the amplitude and location of the associated pressure maximum. We should mention that the precise amount of trapping in the bumps should still matter for e.g. planetesimal formation, but for our results this is less relevant. We provide comparison plots and more details in \autoref{app:gap_profiles}. We define the position of the planets in the disk, $\mathrm{r_{p}}$, as a fraction of the characteristic radius \ensuremath{r_\mathrm{c}}\xspace (see \autoref{tabel:tbl_params}). We locate them either at $\mathrm{2/3}$ or at $\mathrm{1/3}$ of \ensuremath{r_\mathrm{c}}\xspace. In our simulations we used none, one or two planets in these positions. We refer the reader to \autoref{subsusb:planet_tracks} for the effect of the planet location and mass on the simulations. \subsection{Observables} \label{sub:Observables} Since the disk size is not one of the parameters that we measure directly, using the characteristic radius \ensuremath{r_\mathrm{c}}\xspace as a size metric is problematic \citep{rosotti2019time_evolution}. For this reason we define an observed disk radius using the calculated surface brightness profile. Following \citet{tripathi2017millimeter}, we define an effective radius ($\mathrm{r_{eff}}$) as the radius which encloses a fixed fraction of the total flux, $\mathrm{f_{\nu}(r_{eff}) = x\,F_{\nu}}$. We choose $\mathrm{x=68\%}$ of the total disk flux as a suitable intermediate value to define $\mathrm{r_{eff}}$, as it is comparable to a standard deviation in the approximation of a Gaussian profile.
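The effective-radius definition above reduces to inverting a cumulative flux profile; a minimal sketch for an axisymmetric model intensity profile (illustrative only, with the Gaussian test case chosen to have a known analytic answer):

```python
import numpy as np

def effective_radius(r, I_nu, x=0.68):
    """Radius enclosing the fraction x of the total flux,
    f_nu(r_eff) = x * F_nu, for an axisymmetric intensity profile I_nu(r)."""
    dF = 2.0 * np.pi * r * I_nu  # flux per unit radius
    F = np.concatenate(([0.0], np.cumsum(0.5 * (dF[1:] + dF[:-1]) * np.diff(r))))
    return np.interp(x * F[-1], F, r)

# Check against a Gaussian disk, I ~ exp(-r^2 / 2 sigma^2): the enclosed flux
# is 1 - exp(-r^2 / 2 sigma^2), so r_eff(68%) = sigma * sqrt(-2 ln 0.32) ~ 1.51 sigma.
sigma_r = 50.0
r = np.linspace(0.01, 500.0, 2000)
r_eff = effective_radius(r, np.exp(-r**2 / (2.0 * sigma_r**2)))
```

The Gaussian check also motivates the choice $\mathrm{x=68\%}$: the recovered radius is of order one standard deviation of the profile.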
We calculate the mean intensity $\rm{J_{\nu}}$ profile by using the scattering solution from \citet{Miyake1993Icar..106...20M} of the radiative transfer equation \begin{equation} \frac{J_{\nu}(\tau_{\nu})}{B_{\nu}(T(r))} = 1-b\left(e^{-\sqrt{3\epsilon_{\nu}^{eff}}\left(\frac{1}{2}\Delta\tau-\tau_{\nu}\right)}+e^{-\sqrt{3\epsilon_{\nu}^{eff}}\left(\frac{1}{2}\Delta\tau+\tau_{\nu}\right)}\right)\,, \end{equation} with \begin{equation} b=\left[\left(1-\sqrt{\epsilon_{\nu}^{eff}}\right)e^{-\sqrt{3\epsilon_{\nu}^{eff}}\Delta\tau}+1+\sqrt{\epsilon_{\nu}^{eff}}\right]^{-1}\,, \end{equation} where $\mathrm{B_{\nu}}$ is the Planck function and \begin{equation} \tau_{\nu}=\left(\kappa_{\nu}^{\rm abs}+\kappa_{\nu}^{\rm sca,eff}\right) \Sigma_{d} \end{equation} is the optical depth, with $\mathrm{\kappa_{\nu}^{\rm abs}}$ the dust absorption opacity and $\mathrm{\kappa_{\nu}^{\rm sca,eff}}$ the effective scattering opacity, which is obtained from \citet{Ricci_2010} or \citet{Birnstiel_DSHARP} (see below). The effective scattering opacity is \begin{equation} \kappa_{\nu}^{\rm sca,eff}=(1-g_{\nu})\kappa_{\nu}^{sca}\,, \end{equation} where $\rm{g_{\nu}}$ is the forward-scattering parameter. $\rm{\Delta \tau}$ is \begin{equation} \rm{\Delta \tau=\Sigma_{d}\kappa_{\nu}^{\rm tot} \Delta z}\,, \end{equation} while \begin{equation} \epsilon_{\nu}^{eff}=\frac{\kappa_{\nu}^{abs}}{\kappa_{\nu}^{abs}+\kappa_{\nu}^{sca,eff}} \end{equation} is the effective absorption probability. To calculate the intensity $\rm{I_{\nu}^{out}}$ we follow the modified Eddington-Barbier approximation as in \citet{Birnstiel_DSHARP}: \begin{equation} \label{eq:intensity} I_{\nu}^{out}\simeq \left(1-e^{-\Delta \tau/\mu} \right)S_{\nu}\left(\left(\frac{1}{2}\Delta\tau-\tau_{\nu}\right)/\mu=2/3\right)\,, \end{equation} where $\rm{\mu=\cos\theta}$ and \begin{equation} S_{\nu}(\tau_{\nu})=\epsilon_{\nu}^{eff}B_{\nu}(T_{d})+(1-\epsilon_{\nu}^{eff})J_{\nu}(\tau_{\nu}) \end{equation} is the source function.
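The solution above can be condensed into a short function for a vertically isothermal slab seen at a single $\mu$. The sketch below (with $\mathrm{B_{\nu}}$ normalized to unity) is illustrative only, but it reproduces the well-known behavior that scattering lowers the emergent intensity of an optically thick disk below the Planck function:

```python
import numpy as np

def emergent_intensity(B_nu, sigma_d, k_abs, k_sca_eff, mu=1.0):
    """Modified Eddington-Barbier intensity with scattering
    (Miyake & Nakagawa 1993 solution; Birnstiel et al. 2018 style).
    sigma_d in g/cm^2, opacities in cm^2/g; B_nu sets the units."""
    dtau = sigma_d * (k_abs + k_sca_eff)  # total optical depth of the slab
    eps = k_abs / (k_abs + k_sca_eff)     # effective absorption probability
    # evaluate S_nu where (dtau/2 - tau)/mu = 2/3, clipped at the midplane
    tau = np.maximum(dtau / 2.0 - 2.0 * mu / 3.0, 0.0)
    sq = np.sqrt(3.0 * eps)
    b = 1.0 / ((1.0 - np.sqrt(eps)) * np.exp(-sq * dtau) + 1.0 + np.sqrt(eps))
    J = B_nu * (1.0 - b * (np.exp(-sq * (dtau / 2.0 - tau))
                           + np.exp(-sq * (dtau / 2.0 + tau))))
    S = eps * B_nu + (1.0 - eps) * J      # source function
    return (1.0 - np.exp(-dtau / mu)) * S

I_no_scat = emergent_intensity(1.0, 10.0, 1.0, 0.0)  # optically thick, no scattering
I_scat = emergent_intensity(1.0, 10.0, 0.5, 0.5)     # same dtau, albedo 0.5
```

Without scattering the optically thick limit recovers $\mathrm{I_{\nu}\to B_{\nu}}$, while an albedo of $0.5$ suppresses the emergent intensity by roughly $13\%$ in this configuration.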
Two-pop-py evolves only the dust and gas surface densities and the maximum particle size\footnote{In the rest of the manuscript we often refer to "grain size" or "particle size" rather than "maximum grain size"}, thereby implicitly assuming a particle size distribution. The grain size at each radius is set by the maximum size possible in either the fragmentation- or the drift-limited regime, whichever is lower \citep[see][for details]{BKE2012}. To compute the optical properties of the dust, we therefore considered a population of grains with a power-law size distribution, $\mathrm{n(a)}$ $\mathrm{\propto}$ $\mathrm{a^{-q}}$, with an exponent $\mathrm{q = 2.5}$, for $\mathrm{a_{min}}$ $\mathrm{\leq}$ $\mathrm{a}$ $\mathrm{\leq}$ $\mathrm{a_{max}}$. This choice follows \citet{BKE2012}, where the size distribution is closer to $\rm{q=2.5}$ for disks that are in the drift limit, while for fragmentation-limited disks $\rm{q=3.5}$ would be more suitable. Since the disk mass is dominated by the large grains, the choice of the smaller exponent does not alter our results significantly, though it matters for the details. As the smooth simulations are mostly drift limited, the choice of $\rm{q=2.5}$ fits these disks better. Moreover, if a disk is fragmentation limited, it is so mostly in the inner part, whereas the bulk of the disk that defines the luminosity lies in the outer part. Therefore the luminosity will still depend mainly on the drift-limited regime. The disks with sub-structures can be fragmentation limited further out in the formed rings, but considering that these rings are mostly optically thick, the difference between exponents is much smaller than for the smooth disks. The grain composition consists of $\mathrm{10\%}$ silicates, $\mathrm{30\%}$ carbonaceous materials, and $\mathrm{60\%}$ water ice by volume. For a direct comparison with observations \citep[][]{tripathi2017millimeter,Andrews2018a}, we calculate the opacity in band 7 (i.e.
at 850 $\mathrm{\mu m}$). Afterwards, we use the absorption opacity to calculate the continuum intensity profile. We also examined the effect of different opacity models and different grain porosities. As a base model we used the composition from the \citet{Ricci_2010} opacities but for compact grains (i.e. without porosity), as in \citet{rosotti2019millimetre} (this model is noted as R10-0 throughout the paper). Furthermore, we used the DSHARP opacity model \citep{Birnstiel_DSHARP} (DSHARP) and we altered the grain porosity to $\mathrm{10\%}$ (slightly porous, DSHARP-10), $\mathrm{50\%}$ (semi-porous, DSHARP-50) and $\mathrm{90\%}$ (very porous, DSHARP-90). The particle bulk densities for the different porous grains are $\rm{\rho_{s}}=\SI{1.508}{g/cm^3}$, $\rm{\rho_{s}}=\SI{0.838}{g/cm^3}$ and $\rm{\rho_{s}}=\SI{0.168}{g/cm^3}$, respectively. An important feature of the opacity models that we used is the so-called \textit{opacity cliff}. As \textit{opacity cliff}, we refer to the sharp drop in the opacity at $\SI{850}{\mu m}$, at a maximum particle size around $\SI{0.1}{mm}$, as defined in \citet{rosotti2019millimetre} (see \autoref{fig:Opacities}). In all the figures that are shown in this paper, the opacity model from \citet{Ricci_2010} R10-0 is used, unless it is explicitly stated otherwise. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/Opacity_q25.pdf} \caption[]{Comparison between the opacity models that we used at 850 $\mathrm{\mu m}$ as a function of the maximum particle size and for a power-law size distribution with an exponent of $\rm{q=2.5}$. We mark on the figure the \textit{opacity cliff} (where the opacity steeply drops by one order of magnitude over a small range of variation in grain size) at a wavelength of $\SI{850}{\mu m}$ (blue and orange shaded regions).
The blue line refers to the opacity model from \citet{Ricci_2010} with compact grains (marked as R10-0) and the orange to \citet{Birnstiel_DSHARP} with compact grains (marked as DSHARP). Green, purple and red lines refer to $\mathrm{10\%}$, $\mathrm{50\%}$ and $\mathrm{90\%}$ porous grains in the DSHARP model. The R10-0 and DSHARP opacities differ by a factor of $\mathrm{\sim 8.5 }$ at the position of the opacity cliff. As we increase the porosity of the DSHARP model, we observe that the opacity cliff starts to flatten out, until it completely disappears for very porous grains ($\mathrm{90\%}$). Moreover, the location of the cliff shifts to larger particle sizes as it diminishes with increasing porosity. The black dashed line marks a particle size of $\mathrm{\SI{1}{mm}}$. The value for the R10-0 model at this size corresponds to $\mathrm{\kappa_{\nu}^{R10-0}=\SI{9.8}{cm^{2}/g}}$, while for the DSHARP to $\mathrm{\kappa_{\nu}^{DSHARP}=\SI{4}{cm^{2}/g}}$.} \label{fig:Opacities} \end{figure} \subsection{Matching simulations} \label{sub:survival_frequency} The behavior of the simulations on the size-luminosity diagram ($\mathrm{L_{mm}-r_{eff}}$ plane, hereafter \textit{SL Diagram}) depends on the time evolution of the disks. In \citet{Andrews2018a}, a linear regression of the joint data of \citet{tripathi2017millimeter} and \citet{Andrews2018a} gave a relation between the disk size and the $\SI{340}{GHz}$ luminosity. The effective radius $\mathrm{r_{eff}}$ and the luminosity $\mathrm{L_{mm}}$ are correlated as \begin{equation} \log \mathrm{r_{eff}} = (2.10^{+0.06}_{-0.03}) + (0.49^{+0.05}_{-0.03}) \log \mathrm{L_{mm}}\,, \label{eq:size-lum} \end{equation} with a Gaussian scatter perpendicular to that scaling with a standard deviation (1$\mathrm{\sigma}$) of $\mathrm{0.20^{+0.02}_{-0.01}}$ $\rm{dex}$ (where $\mathrm{r_{eff}}$ is in $\rm{au}$ and $\mathrm{L_{mm}}$ is in $\rm{Jy}$ at a distance of $\SI{140}{pc}$).
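For reference, \autoref{eq:size-lum} and a simple $\mathrm{1\sigma}$ membership test can be written as follows. Note that, for simplicity, this sketch measures the scatter vertically in $\log \mathrm{r_{eff}}$, whereas the quoted scatter is perpendicular to the relation:

```python
import numpy as np

def slr_radius(L_mm):
    """SLR of Andrews et al. (2018): log r_eff = 2.10 + 0.49 log L_mm
    (r_eff in au, L_mm in Jy at 140 pc; central values only)."""
    return 10.0**(2.10 + 0.49 * np.log10(L_mm))

def on_slr(r_eff, L_mm, sigma_dex=0.20):
    """True if a point lies within 1 sigma of the SLR (vertical scatter in
    log r_eff; the paper uses the perpendicular distance instead)."""
    return abs(np.log10(r_eff) - np.log10(slr_radius(L_mm))) <= sigma_dex
```

For example, a \SI{1}{Jy} disk is predicted to have $\mathrm{r_{eff}\approx\SI{126}{au}}$, and a disk twice that size at the same luminosity would fall outside the $\mathrm{1\sigma}$ band.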
In \autoref{fig:param_effects} (top left), we show an evolution track: the path that a simulation follows on the luminosity-radius diagram over a chosen time-span. Generically, tracks move from the top right (higher luminosity and radius) towards the bottom left as the disk evolves. We choose to plot the evolution tracks from $\SI{300}{kyr}$ to $\SI{3}{Myr}$. The reasoning is that at roughly $\SI{300}{kyr}$ our disks reach a quasi-steady state. In more detail, the drift speed in the drift limit is $\mathrm{V_{r} = \epsilon V_{K}}$, with $\mathrm{\epsilon}$ being the dust-to-gas ratio and $\mathrm{V_{K}}$ the Keplerian velocity; hence, once at least one order of magnitude of the dust is lost, the evolutionary timescale becomes very long. We refer the reader to \autoref{sec:discussion}, where we show the evolution of the disk dust-to-gas ratio as a function of time for different cases. Longer evolutionary times than the ones we are exploring here do not alter our results significantly, and we therefore exclude them to simplify the discussion. We leave them as a topic of future research, since at later stages disks are more strongly affected by dispersal. Moreover, our chosen time-span covers the observed disks from the \citet{Andrews2018a} and \citet{tripathi2017millimeter} joint sample. In order to filter our simulations, we divide them into categories. Every simulation that, at all times in our chosen time-span, lies within $\mathrm{1\sigma}$ of the SLR (\autoref{eq:size-lum}; blue shaded region in \autoref{fig:param_effects}, top left) is considered as \textit{matching} (see \autoref{fig:param_effects}, the green evolution track). On the other hand, if at any time a simulation does not lie within this area, it is considered as \textit{discrepant}. The \textit{discrepant} simulations can be further divided into two sub-categories.
One that is above the SLR (the purple and the yellow tracks in \autoref{fig:param_effects}, top left) and one that is below it (red track in the same figure). With this classification, we can investigate which parameters drive a simulation to a certain location on the SL Diagram. It is worth mentioning that a fraction ($\mathrm{\sim32\%}$) of the observational data points lie outside the 1-$\mathrm{\sigma}$ region by definition. This highlights that the definition of the matching simulations is conservative with respect to the observational data. Later on, we define the term \textit{matching fraction} as the percentage of matching simulations among the total number of simulations performed with a certain initial condition (see \autoref{sub:evolution_tracks}). \section{Results} \label{sec:results} In this section we present the main results of this analysis. In \autoref{sub:evolution_tracks} we explain the effect of every parameter on the path of the disk on the SL Diagram. In \autoref{sub:heatmaps} we present the general properties that disks should have to follow the SLR, and we derive a theoretical SLR for disks with sub-structures in \autoref{subsub:scaling_relation}. In \autoref{app:corner} we present an additional analysis of the results discussed below. \subsection{Evolution tracks} \label{sub:evolution_tracks} In \autoref{sub:survival_frequency}, we explained what an evolution track is and showed some examples in \autoref{fig:param_effects} (top left). Every track is affected by the chosen initial conditions and by the presence or absence of a planet. In the following sections we explore the effect of the most important parameters of our grid model and show how every parameter affects the evolution track on the SL Diagram in \autoref{fig:param_effects}, \autoref{fig:mass_acc_1e-3} and \autoref{fig:parameter_tracks}.
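The matching/discrepant classification and the matching fraction defined in \autoref{sub:survival_frequency} amount to a simple rule. In the sketch below, the `within_slr` argument is a placeholder for the $\mathrm{1\sigma}$ criterion, and the toy tracks are invented for illustration:

```python
def classify(track, within_slr):
    """A track (list of (L_mm, r_eff) samples between 300 kyr and 3 Myr) is
    'matching' only if it lies within 1 sigma of the SLR at all times."""
    return "matching" if all(within_slr(L, r) for L, r in track) else "discrepant"

def matching_fraction(tracks, within_slr):
    """Percentage of matching simulations among all tracks."""
    n = sum(classify(t, within_slr) == "matching" for t in tracks)
    return 100.0 * n / len(tracks)

# Toy example: accept radii between 10 and 100 (stand-in for the 1-sigma band).
ok = lambda L, r: 10.0 <= r <= 100.0
tracks = [[(1.0, 80.0), (0.5, 60.0)],    # stays inside -> matching
          [(1.0, 120.0), (0.5, 60.0)]]   # starts outside -> discrepant
frac = matching_fraction(tracks, ok)
```

The "at all times" requirement is what makes the criterion conservative: a single excursion outside the band marks the whole track as discrepant.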
\begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/survivability.pdf} \includegraphics[width=0.49\linewidth]{plots/planet_effect.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/alpha_effect_23rc.pdf} \includegraphics[width=0.49\linewidth]{plots/rc_effect_smooth.pdf} \caption[]{Evolution tracks and explanation of matching and discrepant simulations. Examples of a disk with the same initial conditions, varying only one parameter at a time. \textbf{Top left}: The SLR according to \citet{Andrews2018a} and examples of evolution tracks. The black points correspond to the observational data, and the black dashed line is \autoref{eq:size-lum}, which shows the relation between the luminosity and the effective radius. Finally, the blue shaded region is the area within $1-\sigma$ of the SLR, where we consider our simulations as \textit{matching}. The green track noted with the number 1 is considered a \textit{matching} simulation, since it starts and ends inside the SLR. The rest of the tracks are considered \textit{discrepant}. Each track begins at the empty bullet (top right) and ends where its number is printed (lower left). \textbf{Top right}: Varying only the presence and the position of a Jupiter-mass planet. The red line (number 1) shows the smooth disk, the green line (number 3) a disk with a Jupiter-mass planet at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace, while the purple (number 2) corresponds to a planet at 1/3 of the \ensuremath{r_\mathrm{c}}\xspace, and the yellow (number 4) to a simulation with two planets at the positions mentioned before. \textbf{Bottom left}: Varying only the turbulence parameter $\mathrm{\alpha}$. Higher $\mathrm{\alpha}$ values lead to higher luminosity. \textbf{Bottom right}: Varying only the characteristic radius \ensuremath{r_\mathrm{c}}\xspace.
Higher \ensuremath{r_\mathrm{c}}\xspace values lead to larger and more luminous disks.} \label{fig:param_effects} \end{figure*} Since the grid consists of $\mathrm{100{,}000}$ simulations, single evolution tracks do not show the preferred initial conditions that allow a simulation to stay in the SLR, but only a representative case. In order to identify trends between the initial conditions and the matching fraction of every disk, we construct histograms, where on the y-axis we have the matching fraction (i.e. the percentage of simulations that stay on the SLR for the chosen time-span) and on the x-axis we have the value of each parameter in our grid model. Different colors represent different simulation grids, with or without planets and with varying planetary masses or positions (e.g. \autoref{fig:Histograms}). With black color we show the simulations where we used a smooth surface density profile as in \citet{rosotti2019millimetre}. With green color we show the case where the planet/star mass ratio is $\mathrm{q=1\cdot10^{-3}}$ at a location of 1/3 of the \ensuremath{r_\mathrm{c}}\xspace (inner planet), with red the same planet at a distance of 2/3 of the \ensuremath{r_\mathrm{c}}\xspace (outer planet) and with blue two planets of $\mathrm{q=1\cdot10^{-3}}$ at 1/3\ensuremath{r_\mathrm{c}}\xspace and 2/3\ensuremath{r_\mathrm{c}}\xspace. With the white hatched bars, we show the same cases but using the DSHARP opacity model \citep{Birnstiel_DSHARP}. \subsubsection{Effect of planetary parameters} \label{subsusb:planet_tracks} In a smooth disk, the evolution track will evolve towards smaller radii and smaller luminosity as the dust drifts inward, so the emission (and size) decreases. Moreover, the radius in the disk where the maximum particle size $\mathrm{a_{max}}$ reaches the opacity cliff moves further in, and the value of the peak opacity decreases due to radial drift and grain growth, as in \citet{rosotti2019time_evolution}.
In contrast, when a planet is present, the pressure bump that forms stops the dust from drifting towards the host star, delaying the evolution of the disk on the SL Diagram and thus keeping the tracks on the SLR for longer times. With this in mind, we expect a less extended evolution track when we include planets, since planets are included early in the disk evolution. In \autoref{fig:param_effects} (top right), we show an example of the evolution tracks of a disk with the same initial conditions, varying only the presence and the position of a planet. The red line represents the evolution track of a disk with a smooth surface density profile. If we include a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ (Jupiter-mass in this case) at a location that is close to the characteristic radius of the disk, in this case at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace, we see with the green line (number 3) that both the effective radius and the luminosity increase relative to the planet-less case, while the evolution track is shorter over the same time-span, as in all planet cases. This is expected, since the pressure bump traps particles and the dust mass is retained; therefore the luminosity does not decrease as quickly. At the same time, the fixed position of the pressure bump causes the effective radius to remain the same. Together this means that the track on the SLR comes to a halt. On the other hand, if we place the planet close to the star, in this case at 1/3 of the \ensuremath{r_\mathrm{c}}\xspace, we observe a much longer track, similar to the one with the smooth profile, as seen by the purple line. The size and the luminosity of the disk change only slightly. The smaller radius compared to the outer-planet case arises because the dust now stops at the inner pressure bump, while the luminosity is roughly the same for the two cases.
This could lead to the conclusion that a planet close to the star will not dramatically affect the disk's position on the SL Diagram, but this is not true for all cases (see \autoref{subsusb:rc_tracks}). Instead, the evolution track here is similar to the smooth one because the disk is too large and massive, and the inner planet cannot affect the evolution track much. As a last point, we included two planets, one at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace and one at 1/3 (yellow line). We observe a similar evolution track as in the case where we have only one planet close to the outer radius, with the disk being slightly more luminous. This is explained by the fact that the dust from the outer disk stops at the outer pressure bump, while the dust that exists inside the outer planet stops at the pressure bump of the inner planet. Since most of the dust mass initially resides outside the outer planet, the dust that is trapped between the two planets contributes only partially to the total luminosity. When two or more planets are present, the location of the outermost planet dominates the evolution track. \subsubsection{Effect of the turbulence $\mathrm{\alpha}$-parameter} \label{subsusb:alpha_tracks} The effect of the turbulence parameter $\mathrm{\alpha}$ on the evolution tracks is straightforward: higher $\mathrm{\alpha}$ leads to higher luminosity. In \autoref{fig:param_effects} (bottom left), we show the evolution tracks of a disk with the same initial conditions, varying only the $\mathrm{\alpha}$-parameter. In this case, we choose a disk where we have also inserted a Jupiter-mass planet at 2/3 of the characteristic radius (\ensuremath{r_\mathrm{c}}\xspace), since the effect is more prominent in these disks. To understand the trends in \autoref{fig:param_effects} (bottom left), it is instructive to consider \autoref{fig:mass_acc_1e-3}, where we show an example of how different $\mathrm{\alpha}$-values affect the efficiency of trapping.
In the top panel we show the local dust mass flow rate $\rm{\dot M_{acc,d}(r)}$ $[\mathrm{M_\oplus/yr}]$ as a function of radius. For low $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}}$ (red line), $\mathrm{5\cdot10^{-4}}$ (blue) and $\mathrm{1\cdot10^{-3}}$ (green), the mass is flowing towards the bump and the local flow rate for $\mathrm{r < r_{p}}$ is small ($\mathrm{\leq10^{-8} M_\oplus/yr}$), meaning that the trapping is efficient enough to stop the dust from drifting towards the star. On the other hand, for the large $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}}$ (yellow) and $\mathrm{1\cdot10^{-2}}$ (purple), the local dust mass flow rate stays almost constant ($\mathrm{\leq 10^{-5} M_\oplus/yr}$) throughout the whole disk, i.e. the bump is not trapping the particles, just locally slowing them down. In the bottom panel we show the cumulative mass of the disk, integrated from the inside out, as a function of radius. For the low $\rm{\alpha}$-values, most of the mass is located in the bump, while for $\rm{\alpha}=\rm{5\cdot10^{-3}}$ and $\rm{\alpha}=\rm{1\cdot10^{-2}}$ it increases with radius, meaning that the bump allows more grains to escape and therefore does not contain a significant fraction of the disk mass. Returning to \autoref{fig:param_effects} (bottom left), disks with low to medium $\mathrm{\alpha}$-values ($10^{-4}$, $5\cdot10^{-4}$, $10^{-3}$) are less luminous than disks with higher $\mathrm{\alpha}$-values (red, blue and green tracks). This is because the ring that forms at the pressure bump becomes narrow and optically thick. The total flux emitted by an optically thick ring of a given temperature is just a function of the emitting area. A lower $\mathrm{\alpha}$ leads to a narrower ring \citep{Dullemond2018DSHARP}, hence less emitting area and therefore smaller luminosity, independently of the amount of mass in the ring.
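The area argument for an optically thick ring can be made concrete: the flux is just the Planck function times the solid angle of the annulus, independent of the ring mass. A sketch in cgs units, for band 7 and a source at \SI{140}{pc} (the ring radius, width and temperature below are invented illustrative values):

```python
import numpy as np

H, C, K_B = 6.62607015e-27, 2.99792458e10, 1.380649e-16  # Planck, c, Boltzmann (cgs)
AU, PC = 1.495978707e13, 3.085677581e18                  # au and parsec in cm

def ring_flux_jy(r_au, width_au, T, nu=340e9, d_pc=140.0):
    """Flux [Jy] of an optically thick annulus: F = B_nu(T) * Omega_ring.
    A narrower ring means less emitting area and hence less flux."""
    B = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))  # Planck function
    omega = 2.0 * np.pi * r_au * width_au * AU**2 / (d_pc * PC)**2
    return 1e23 * B * omega  # erg s^-1 cm^-2 Hz^-1 -> Jy

f_wide = ring_flux_jy(50.0, 10.0, 30.0)
f_narrow = ring_flux_jy(50.0, 5.0, 30.0)
```

Halving the ring width exactly halves the flux, which is the reason narrow rings at low $\mathrm{\alpha}$ produce fainter disks regardless of how much dust they trap.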
On the other hand, high $\mathrm{\alpha}$-values work against trapping in various ways, leading to more luminous disks. A higher $\mathrm{\alpha}$-value: \begin{itemize} \item decreases the particle size in the fragmentation limit. Smaller particles are less efficiently trapped \citep[e.g.][]{Zhu2012DustFiltration}. \item increases the diffusivity, which allows more grains to escape the bump. \item increases the viscosity in the same way, so more grain sizes travel along with the accreting gas. \item smears out the pressure peak, causing less efficient trapping by radial drift \citep{Pinilla2012}. \end{itemize} Moreover, with a high $\alpha$-value, a higher dust-to-gas ratio is retained because grain growth is impeded by fragmentation, hence radial drift is much slower. If we consider the two cases where the $\mathrm{\alpha}$-value is high ($5\cdot10^{-3}$, $10^{-2}$), the planet cannot efficiently trap the dust and the disk evolves further along the SLR to lower luminosities. As a matter of fact, a disk that contains a planet with $\mathrm{\alpha=10^{-2}}$ behaves the same as a smooth disk without a planet on the SL Diagram, implying that in this case one would need a very massive planet (several Jupiter masses) to significantly affect disk evolution. The high turbulence also causes the dust grains to become smaller, since the turbulent gas velocity is larger and the collisions between grains are more destructive. Consequently, the gap becomes shallower, while there is also more diffusion. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/mass_acc_cum_1e-3_at_13.pdf} \caption[]{\textbf{Top panel}: Local flow rate of the dust mass in $\mathrm{M_\oplus/year}$ as a function of radius for a disk with a Jupiter-mass planet at $\SI{31}{au}$.
For the low $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}}$ (red line), $\mathrm{5\cdot10^{-4}}$ (blue) and $\mathrm{1\cdot10^{-3}}$ (green), we observe that the bump outside the planet location is strong enough to keep the dust from drifting towards the star, while for larger $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}}$ (yellow), $\mathrm{10^{-2}}$ (purple), there is only a weak accumulation outside the planetary gap. \textbf{Bottom panel}: Cumulative dust mass contained within a radius $r$, as a function of radius. For $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}, 5\cdot10^{-4}, 1\cdot10^{-3}}$, roughly all the mass of the disk is inside the bump that is created by the planet. For larger $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}, 1\cdot10^{-2}}$, the bump is minimal and unable to keep the dust from drifting.} \label{fig:mass_acc_1e-3} \end{figure} In \autoref{fig:Histograms} (top left), we show the dependence of the matching fraction on the $\mathrm{\alpha}$-viscosity parameter. As expected, all simulations tend to favor low values of the turbulence parameter, $10^{-4}\leq \mathrm{\alpha} \leq10^{-3}$. Smooth simulations show a clear tendency towards low $\rm{\alpha}$-values, because when they are drift dominated they remain in the SLR. On the other hand, sub-structured disks show a preference towards $5\cdot10^{-4}\leq \mathrm{\alpha} \leq10^{-3}$. There, the dust trapping is efficient enough to allow the disks to retain their mass, but moving to higher $\mathrm{\alpha}$-values ($\mathrm{\alpha > 2.5\cdot 10^{-3}}$) the trapping stops being efficient, leading to higher chances that the evolution track will leave the SLR in the selected time-span. If $\rm{\alpha}<5\cdot10^{-4}$, the dust rings become narrow and the luminosity is not large enough to place them on the SLR, and therefore the matching fraction decreases.
\subsubsection{Effect of the characteristic radius - \ensuremath{r_\mathrm{c}}\xspace} \label{subsusb:rc_tracks} In \autoref{fig:param_effects} (bottom right), we plot the evolution tracks of a smooth disk with the same initial conditions while varying only the characteristic radius ($\rm{\ensuremath{r_\mathrm{c}}\xspace=\SIlist{10;50;100;180}{au}}$). The same trend applies to disks where a planet is included, but for a smooth disk the evolution tracks are longer and the effect is more easily visible. The effect of the characteristic radius on the evolution tracks is straightforward: the larger the \ensuremath{r_\mathrm{c}}\xspace, the more the evolution track moves towards the top right of the plot, meaning that for a larger disk we expect a larger luminosity. In more detail, increasing the characteristic radius (going from the red to the green line) directly increases the effective radius, while the luminosity increases because the total disk mass remains fixed in all these simulations. This result is consistent with the SLR from \citet{Andrews2018a}. Taking a look at \autoref{fig:Histograms} (middle left), we do not observe a continuous pattern as in the other histograms. Smooth disks do not seem to depend on the characteristic radius, as disks of all sizes can reproduce the SLR. On the other hand, sub-structured disks with small \ensuremath{r_\mathrm{c}}\xspace ($\SI{10}{au}$) are mostly above the correlation because they have high luminosity relative to their size (as explained by our size-luminosity estimate in \autoref{disc:opacity}) and they are unable to enter the correlation in time. For large radii ($>\SI{150}{au}$), the disks can become too large but with low luminosity, and can end up below the correlation at the very right part of the SL Diagram (see \autoref{fig:heatmaps_ricci}).
Therefore we observe a peak towards a specific characteristic radius, around $\mathrm{80-\SI{130}{au}}$ when a planet is located at 1/3 of the \ensuremath{r_\mathrm{c}}\xspace (green color) and around $\mathrm{30-\SI{80}{au}}$ when a planet is located at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace (red color). An inner planet constrains the disk to a small size but with relatively high luminosity, placing it above the SLR before $\mathrm{\SI{300}{kyr}}$, while the opposite effect occurs when an outer planet exists. When two planets are included, we observe a mix of the two single-planet cases: the two pressure bumps compete with each other, and each contributes in one of the ways described above. \subsubsection{Effect of the disk mass - $\rm{M_{d}}$} \label{subsusb:disk_mass_tracks} In \autoref{fig:parameter_tracks} (top left), we plot the evolution tracks of a smooth disk with the same initial conditions, varying only the disk mass ($\mathrm{M_{d}}$). We choose a smooth disk to show the effect more clearly, but the same principle applies to the majority of disks. The disk mass contributes to both luminosity and radius. Higher disk masses lead to higher luminosities, both at the beginning and at the end of the track: for a fixed \ensuremath{r_\mathrm{c}}\xspace, disks with higher $\mathrm{M_{d}}$ have higher $\mathrm{L_{mm}}$, since more material leads to more emission. By the end of the evolution tracks, the less massive disks have left the SLR. The sharp turn away from the SLR for the lowest $\mathrm{M_{d}}$ case ($\mathrm{M_{d}=5\cdot 10^{-3}M_{\star}}$) occurs because all grain sizes become smaller than the opacity cliff. If we choose a disk that contains a planet, the massive disks ($\mathrm{M_{d}\geq5\cdot 10^{-2}} M_{\star}$) will still evolve towards lower radii and luminosities on the SLR, but the less massive ones will have shorter tracks.
The pressure bump will trap all the material outside of the planet position, so the emission and the effective radius will both remain almost constant. The only case where the track of a low-mass disk can be long is if the planet mass is small and the pressure bump is not large enough to retain the dust. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/Md_effect_smooth.pdf} \includegraphics[width=0.49\linewidth]{plots/Opacity_tracks.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/Mstar_effect_smooth.pdf} \includegraphics[width=0.49\linewidth]{plots/vfrag_effect_smooth.pdf} \caption[]{Evolution tracks with the same initial conditions, varying only one parameter at a time. \textbf{Top left}: Varying the disk mass of a smooth disk. \textbf{Top right}: Varying the opacity model and the porosity of the DSHARP model for a smooth disk. \textbf{Bottom left}: Varying the stellar mass of a disk that contains a planet. \textbf{Bottom right}: Varying the fragmentation velocity of a smooth disk.} \label{fig:parameter_tracks} \end{figure*} In \autoref{fig:Histograms} (top right), we plot the matching fraction for the disk mass values. The tendency here is that a higher disk mass leads to more simulations inside the SLR. The reason is that a high initial disk mass places the disks above the SLR until they reach a stable state. While the dust is drifting towards the host star, the luminosity decreases, allowing them to reach the SLR within our chosen time-span and stay there for the remaining time. Since most of the dust is in the trap at this point, the remaining evolution time is set by the trap lifetime. The difference is noticeable between the smooth and the planet(s) cases. As we see from the yellow bars, a disk with a smooth surface density profile must be initially massive ($\rm{M_{d}\geq\SI{0.025}{M_{\star}}}$) to remain in the SLR. The probability of a smooth simulation matching is then even larger than in the cases where a planet is included.
Some of these results, though, are an effect of the opacity model used and the chosen time-span, as we will discuss in \autoref{subsusb:porosity_tracks}. \subsubsection{Effect of different opacity/grain porosity} \label{subsusb:porosity_tracks} In \autoref{fig:parameter_tracks} (top right), we show the representative behavior of several similar tracks, varying only the opacity model. The simulations with the \citet{Ricci_2010} (R10-0, blue line) opacity model produce more luminous and larger disks than the ones with the DSHARP (orange line) model, due to the higher value of the opacity: the R10-0 opacity is 8.5 times higher than the DSHARP one at the peak of the opacity cliff. If we use slightly porous grains (DSHARP-10, green line) by altering the DSHARP opacity, we observe that the effect is insignificant, as the shape of the opacity is roughly the same. On the contrary, for semi- and very porous grains (DSHARP-50 and DSHARP-90, purple and red lines) the opacity cliff starts to flatten out \citep{Kataoka2014}, leading to a disk with low luminosity and no significant change in disk size. In all the histogram figures (\autoref{fig:Histograms}), the same trend holds for both the R10-0 opacity \citep{Ricci_2010} (solid color bars) and the DSHARP one \citep{Birnstiel_DSHARP} (hatched bars). The difference is that more simulations match when the R10-0 opacity model is used as opposed to the DSHARP model. Disks with the DSHARP opacity are generally less bright because they do not become optically thick in the rings, and end up below the SLR; they would need more dust (i.e. stronger traps) to be luminous enough. Especially in the smooth case (yellow bars), there are only a few matching simulations, hence the hatched bars are barely visible. For smooth disks the total matching fraction is $\rm{29.6\%}$ with the R10-0 opacity, while it is $\rm{0.8\%}$ with the DSHARP one.
For disks with an inner planet the matching fractions are $\rm{30.2\%}$ and $\rm{15.9\%}$, respectively. We refer the reader to \autoref{disc:opacity}, where we explore the overall impact of porous grains for the entire grid of models. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/alpha_hist_q25.pdf} \includegraphics[width=0.49\linewidth]{plots/Mdisk_hist_q25.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/rc_hist_q25.pdf} \includegraphics[width=0.49\linewidth]{plots/Mstar_hist_q25.pdf}\\ \includegraphics[width=0.49\linewidth, left]{plots/vfrag_hist_q25.pdf} \caption[]{Histograms of the matching fraction for the $\alpha$-value, disk mass, characteristic radius, stellar mass and fragmentation velocity. The matching fraction shows the percentage of the simulations that remained on the SLR for the chosen time-span ($\rm{\SI{300}{kyr} - \SI{3}{Myr}}$). \textbf{Top left}: Dependence on the $\alpha$-value. There is a preference for low $\alpha$-values $\rm{(10^{-4}\leq \alpha \leq 10^{-3})}$. \textbf{Top right}: Dependence on the disk mass. There is a preference for high disk masses $\mathrm{(0.025 \leq \frac{M_{d}}{M_{\star}} \leq 0.25)}$. \textbf{Middle left}: Dependence on the characteristic radius. Smooth disks do not depend on the \ensuremath{r_\mathrm{c}}\xspace. \textbf{Middle right}: Dependence on the stellar mass. There is a small preference for larger values. \textbf{Bottom left}: Dependence on the fragmentation velocity. There is a small preference for larger values.} \label{fig:Histograms} \end{figure*} \subsubsection{Effect of the stellar mass - $\mathrm{M_\star}$} \label{subsub:Mstar_tracks} In our models, the stellar mass is directly correlated with the disk mass: we varied the stellar mass but always kept the disk-to-star mass ratio constant. Therefore, a larger stellar mass implies a larger total dust mass, leading to larger continuum luminosities.
In \autoref{fig:parameter_tracks} (bottom left), we plot the evolution tracks of a disk that contains a planet with the same initial conditions, varying only the stellar mass for $\mathrm{M_\star=\SIlist{0.2;0.6;1.0;2.0}{M_{\odot}}}$. As expected, the largest value of $\mathrm{M_\star=2.0M_{\odot}}$ leads to the largest and most luminous disk (green line), while the opposite happens for $\mathrm{M_\star=0.2M_{\odot}}$ (red line). The behavior is similar for smooth disks, but in that case the radius of each disk will be much smaller due to radial drift. Furthermore, the stellar mass is the least important parameter in determining whether a simulation matches or not. The histogram in \autoref{fig:Histograms} (middle right) confirms this. Although the trend shows a higher matching fraction for higher stellar masses, this is because the stellar mass scales with the disk mass, and a higher disk mass leads to more matching simulations (see \autoref{subsusb:disk_mass_tracks}). This scaling implies that the luminosity ($\mathrm{L_{mm}}$) scales with the stellar mass ($\mathrm{M_{\star}}$). Our models follow a relation that is not as steep as the observed $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ in \citet{Andrews2018a}, because there is no correlation between disk size and stellar mass in our simulations. We refer the reader to \autoref{disc:Lmm-Mstar} and \autoref{fig:L_Mstar}, where we explore this relation further. \subsubsection{Effect of the fragmentation velocity - $\mathrm{v_{frag}}$} \label{subsub:vfrag_tracks} In \autoref{fig:parameter_tracks} (bottom right), we plot the evolution tracks of a smooth disk with the same initial conditions, varying only the fragmentation velocity for values of \SIlist{200;600;1000;2000}{cm/s}. We observe that for medium and large values of $\mathrm{v_{frag}}$ (in this case for $\mathrm{v_{frag}} \geq \SI{600}{cm/s}$), the evolution tracks overlap.
Since most of our simulations are drift limited, we expect the fragmentation velocity to have no effect for these values: particles reach the drift barrier before fragmentation becomes relevant. An effect only arises when the fragmentation velocity becomes too low; particles then do not grow large enough to drift, so more mass remains at large radii to produce more emission, leading to a higher luminosity in the first snapshots. Moreover, if a disk is fragmentation limited, it is so mostly in the inner part. Considering that the main bulk of the disk is in the outer part, the emission that defines the luminosity will still depend on the drift-limited regime, hence the overlapping tracks. The effect of a planet on these tracks is minimal, and we expect a behavior similar to the case shown here. This can be validated in \autoref{fig:Histograms} (bottom left), where we plot the matching fraction against the fragmentation velocity; the tendency is towards higher fragmentation velocities in all cases. More specifically, if $\mathrm{v_{frag}}\geq \SI{600}{cm/s}$ there is a large number of matching simulations, and it only gets larger with increasing fragmentation velocity. In this range, the simulations are mostly drift limited. For low values of $\mathrm{v_{frag}}$, most of the simulations are fragmentation limited and they lose luminosity relatively quickly, moving them out of the SLR. Low values of $\mathrm{v_{frag}}$ lead to smaller particles and less efficient trapping; those disks therefore lose their solids too quickly. This is analyzed in more detail in \autoref{app:corner}. For reference, following recent lab experiments \citep{Wurm2018SSRv..214...52B}, $\SI{100}{cm/s} \leq \mathrm{v_{frag}} \leq \SI{1000}{cm/s}$ is considered consistent with lab work and $\mathrm{v_{frag}>\SI{1000}{cm/s}}$ a high fragmentation velocity.
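To illustrate the interplay of the two barriers, the sketch below compares order-of-magnitude estimates of the fragmentation- and drift-limited grain sizes of the kind used in two-population models; prefactors of order unity are omitted, and all disk numbers are assumed values, not taken from our grid.

```python
import numpy as np

# Illustrative two-population barrier estimates (order-unity prefactors omitted).
def a_frag(sigma_gas, rho_s, alpha, v_frag, c_s):
    """Fragmentation-limited grain size: grows with v_frag^2, shrinks with alpha."""
    return (2.0 / (3.0 * np.pi)) * sigma_gas / (rho_s * alpha) * (v_frag / c_s) ** 2

def a_drift(sigma_dust, rho_s, v_K, c_s, gamma=2.75):
    """Drift-limited grain size; gamma = |dln P / dln r| is the pressure-gradient exponent."""
    return (2.0 / np.pi) * sigma_dust / rho_s * (v_K / c_s) ** 2 / gamma

# Conditions loosely representative of the outer disk -- assumed values.
sigma_gas, sigma_dust = 10.0, 0.1   # surface densities [g / cm^2]
rho_s, alpha = 1.7, 1e-4            # material density [g / cm^3], turbulence
c_s, v_K = 4e4, 4e5                 # sound speed and Keplerian speed [cm / s]

for v_frag in [200.0, 600.0, 1000.0, 2000.0]:
    frag = a_frag(sigma_gas, rho_s, alpha, v_frag, c_s)
    drift = a_drift(sigma_dust, rho_s, v_K, c_s)
    # the smaller barrier is the one that limits growth
    limit = "fragmentation" if frag < drift else "drift"
    print(f"v_frag = {v_frag:6.0f} cm/s -> {limit}-limited")
```

With these (assumed) numbers, only the lowest value, $\SI{200}{cm/s}$, is fragmentation limited, while the three larger values are drift limited, mirroring the overlap of the tracks for $\mathrm{v_{frag}} \geq \SI{600}{cm/s}$.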
\subsection{Heat-maps} \label{sub:heatmaps} Single evolution tracks give us an idea of how an individual simulation evolves relative to the SLR, but with a sample as large as ours it is not easy to extract global results from them. Since there are too many tracks to over-plot all of them on a single diagram, we treat the position of every simulation at every snapshot as an independent sample and plot them on a heat map. In these figures, we plot the position of every simulation for a specific case (smooth, and planet at different locations) for three different snapshots ($\mathrm{\SI{300}{kyr}}$, $\mathrm{\SI{1}{Myr}}$, $\mathrm{\SI{3}{Myr}}$). In \autoref{fig:heatmaps_ricci}, we plot three different cases: in the first column the smooth case, in the second a planet with a planet/star mass ratio $\mathrm{q=10^{-3}}$ at 1/3 of the \ensuremath{r_\mathrm{c}}\xspace, and in the last one $\mathrm{q=10^{-3}}$ at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace. Each row (from top to bottom) refers to the snapshots \SI{300}{kyr}, \SI{1}{Myr} and \SI{3}{Myr}, respectively. The solid white line refers to the SLR from \citet{Andrews2018a} and the dashed white lines show the $\mathrm{1\sigma}$ deviation (the same as the blue shaded region on the single evolution tracks). The color-bar to the right shows the number of simulations in a single cell. The red line is our prediction for the cases where we include a planet (see \autoref{subsub:scaling_relation}): instead of following the relation from \citet{Andrews2018a}, these disks seem to follow a relation $\mathrm{L_{mm}\propto r_{eff}^{5/4}}$. In \autoref{subsub:width} and \autoref{subsub:scaling_relation} we perform a more detailed analysis of this topic. We observe that in all cases, most of the disks start inside and above the correlation (first row, at $\mathrm{\SI{300}{kyr}}$).
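Conceptually, each heat-map is a two-dimensional histogram over all (simulation, snapshot) samples; a minimal sketch with synthetic samples:

```python
import numpy as np

# Synthetic stand-in for the simulation grid: one (r_eff, L_mm) point per
# simulation per snapshot, treated as independent samples as in the text.
log_r = np.linspace(1.0, 2.5, 200)                       # log10 r_eff [au]
log_L = 2.0 * log_r - 3.5 + 0.3 * np.sin(20.0 * log_r)   # SLR-like trend with scatter

# Bin all samples onto a fixed grid: each cell then counts simulations,
# which is what the heat-map color bar shows.
counts, r_edges, L_edges = np.histogram2d(
    log_r, log_L, bins=[30, 30], range=[[1.0, 2.5], [-2.0, 2.0]],
)
print(counts.shape, int(counts.sum()))  # -> (30, 30) 200
```

Because the grid and range are fixed, the per-snapshot panels can be compared cell by cell, exactly as in the figures.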
In the smooth case, the disks lose much of their luminosity relatively quickly and also shrink in size, moving them to lower radii due to radial drift. We end up with a large number \textbf{($\rm{29.6\%}$)} of simulations occupying and following the SLR. As we explain later in \autoref{subsub:scaling_relation}, the slope is expected from \citet{rosotti2019millimetre}, but the normalization depends on the choice of opacities. On the other hand, when we include a planet, as time increases the disks do not decrease in size but mainly in luminosity. This is due to the formation of the pressure bump that keeps the dust from drifting further in, keeping the effective radius the same. The consequence is that if these disks leave the SLR, it is due to a luminosity decrease, since they move vertically on the diagram. The clustering in $\mathrm{r_{eff}}$ that forms on the plots with a planet is an artifact of our parameter grid: a randomly chosen value of \ensuremath{r_\mathrm{c}}\xspace and planet position would result in a continuous (non-clustered) distribution. Comparing the two cases where a planet is included, there are more matching simulations when the planet is in the inner part of the disk \textbf{($\rm{30.2\%}$)}. Having a planet in the outer part leads the disks to the right (large radii) and bottom part of the diagram and consequently leaves them outside of the relation ($\rm{20.6\%}$ of the disks match). The difference becomes striking when we use the dust opacities from DSHARP in \autoref{fig:heatmaps_dsharp}. Since the DSHARP opacity is lower than the R10-0 one (see \autoref{fig:Opacities}), many of the smooth disks start below the SLR. This leads the majority of them outside of the relation by the last snapshot ($\rm{\SI{3}{Myr}}$), and only $\rm{0.8\%}$ of the disks match. The same holds for the cases where a planet is included.
The disks have a lower luminosity in the first snapshot, but the pressure bump formed is big enough to keep them in the relation for the remaining time; $>\rm{11.1\%}$ of the disks match, depending on the model. Similar behaviour for the cases with strong sub-structures holds for the opacity model with semi-porous grains, DSHARP-50, in \autoref{fig:heatmaps_d50}. Even though there are fewer matching simulations in total, if the sub-structures are large enough, they are able to keep a significant number of simulations in the clusters on the SLR \textbf{($>\rm{10.7\%}$)}. The same argument cannot be made for simulations with a smooth surface density profile: with semi-porous grains there is only a small fraction of matching simulations \textbf{($\rm{0.7\%}$)}. The absence of a strong opacity cliff in the opacity profile leads to low luminosities and consequently places all the simulations below the SLR. An almost identical point holds for the heat-map (\autoref{fig:heatmaps_d90}) for the case where very porous grains are used (DSHARP-90). The complete absence of the opacity cliff does not allow a considerable fraction of smooth disks to enter the SLR \textbf{($\rm{1.3\%}$)}, while a similar fraction of sub-structured disks match as in the DSHARP-50 case \textbf{($>\rm{10.2\%}$)}. This heat-map is included in \autoref{app:heatmap}. From these heat-maps we can extract three important results. \begin{itemize} \item Disks with strong traps (i.e. massive planets) follow a different SLR than smooth disks, while smooth disks are more consistent with the observed SLR in terms of the shape of the relation. \item Whether a smooth disk matches or not depends heavily on the opacity model. The \citet{Birnstiel_DSHARP} DSHARP opacities produce significantly fewer simulations in the SLR than the \citet{Ricci_2010} R10-0 model, and only a small fraction of simulations match with the semi- and very porous models DSHARP-50 and DSHARP-90.
Therefore, the porosity should be smaller than $\mathrm{50}\%$ when the \citet{Birnstiel_DSHARP} opacities are used. However, the distribution of simulations is significantly tighter than the observed correlation for the smooth disks with the R10-0 opacity. As discussed later (\autoref{sec:discussion}), the observed correlation can be a mixture of smooth and sub-structured disks that adds scatter to the simulated SLR. \item A bright disk (top right on the SL diagram) is more likely to remain in the SLR if a pressure bump has formed within the first $\mathrm{1Myr}$, regardless of the opacity model. \end{itemize} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_ricci_q25.pdf} \caption[]{Heat-maps of simulations with the \citet{Ricci_2010} opacities. The three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$ and a planet at $\mathrm{2/3r_{c}}$, respectively. The rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$ and $\mathrm{3Myr}$, respectively. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included. The color-bar shows the number of simulations in a single cell. The blue dash-dotted line shows the lower limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) down to which observational results are available.} \label{fig:heatmaps_ricci} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_dsharp_q25.pdf} \caption[]{Heat-maps of simulations with the \citet{Birnstiel_DSHARP} opacities. The three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$ and a planet at $\mathrm{2/3r_{c}}$, respectively. The rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$ and $\mathrm{3Myr}$, respectively. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included.
The color-bar shows the number of simulations in a single cell. The blue dash-dotted line shows the lower limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) down to which observational results are available.} \label{fig:heatmaps_dsharp} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_D50_q25.pdf} \caption[]{Heat-maps of simulations with the \citet{Birnstiel_DSHARP} D-50 opacities with 50\% porosity. The three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$ and a planet at $\mathrm{2/3r_{c}}$, respectively. The rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$ and $\mathrm{3Myr}$, respectively. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included. The color-bar shows the number of simulations in a single cell. The blue dash-dotted line shows the lower limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) down to which observational results are available.} \label{fig:heatmaps_d50} \end{figure*} \subsubsection{Width of the pressure maxima} \label{subsub:width} To understand the overall shape of the heat-map for the case with massive planets (i.e. the red lines in \autoref{fig:heatmaps_ricci} and \autoref{fig:heatmaps_dsharp}), we derive a theoretical estimate in the following. This estimate depends on the width and position of the pressure maximum that forms outside the position of the gap-opening planet. We therefore first derive a relation for the gas width as a function of radius $r$, planet/star mass ratio $q$ and scale height $h$, using hydrodynamical simulations of planet-disk interaction with the FARGO-3D code \citep{FARG03D_2015ascl.soft09006B}. In the subsequent \autoref{subsub:scaling_relation}, we estimate the SLR based on these empirically determined widths. For the complete derivation of both sections, we refer the reader to \autoref{app:width_slr_derivation} and \autoref{app:scaling_relation}.
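The width fit of \autoref{eq:power_law} below reduces to a linear least-squares problem in log space. A minimal sketch, using synthetic widths generated from the quoted exponents (the parameter ranges and the normalization $C$ are assumptions):

```python
import numpy as np

# Synthetic "measurements": widths generated exactly from the quoted exponents
# sigma_g = C * h^0.81 * q^0.14 * alpha^0.05, with an assumed C = 1.5.
rng = np.random.default_rng(1)
n = 24                                   # number of hydro simulations in the text
h = rng.uniform(2.0, 15.0, n)            # scale height at the bump [au], assumed range
q = rng.uniform(3e-4, 3e-3, n)           # planet/star mass ratio, assumed range
alpha = 10 ** rng.uniform(-4, -2, n)     # turbulence parameter, assumed range
sigma_g = 1.5 * h**0.81 * q**0.14 * alpha**0.05

# Taking logs turns the power law into a linear model:
# log sigma_g = log C + p log h + k log q + l log alpha
A = np.column_stack([np.ones(n), np.log(h), np.log(q), np.log(alpha)])
coeffs, *_ = np.linalg.lstsq(A, np.log(sigma_g), rcond=None)
logC, p, k, l = coeffs
print(f"p={p:.2f}, k={k:.2f}, l={l:.2f}")   # recovers 0.81, 0.14, 0.05
```

With real, noisy width measurements the recovered exponents carry uncertainties, but the fitting machinery is the same.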
In addition to our two-pop-py models, we performed 24 hydrodynamical simulations with FARGO-3D \citep{FARG03D_2015ascl.soft09006B} for different planet/star mass ratios, planet locations and $\mathrm{\alpha}$-values (see \autoref{tabel:gaps_param} in \autoref{app:gap_profiles}). We used these simulations to calculate the width of the outer pressure bump caused by the planet in the gas. The surface density maximum is locally well fitted by a Gaussian, which allows us to measure the width (i.e. the standard deviation) using the curvature at the maximum. Having measured all the widths of our hydrodynamical simulations, we fit a multiple power law to determine how they scale with the scale height, the mass ratio and the $\mathrm{\alpha}$-parameter: \begin{equation} \sigma_{g} = C \cdot h^{p} \cdot q^{k} \cdot \alpha^{l}\, \end{equation} where $\mathrm{C}$ is a constant, $h$ is the scale height, $q$ the mass ratio and $\mathrm{\alpha}$ the turbulence parameter. We found that the width in the measured range scales approximately as: \begin{equation} \label{eq:power_law} \sigma_{g} \propto h^{0.81} \cdot q^{0.14} \cdot \alpha^{0.05}\, \end{equation} \subsubsection{Size-luminosity relation of disks with companions} \label{subsub:scaling_relation} In \autoref{fig:heatmaps_ricci} we show a red line that scales differently from the SLR when we include planets. We predict that this line follows a correlation $\mathrm{L_{mm}\propto r_{eff}^{5/4}}$. If we assume that all the luminosity of a disk comes from rings that are approximately optically thick, we can approximate: \begin{equation} L \simeq A \cdot B_{\nu}\, \end{equation} where $\mathrm{A}$ is the emitting area and $\mathrm{B_{\nu}}$ is the Planck function.
If we assume the Rayleigh-Jeans approximation, the Planck function becomes proportional to the temperature and the equation becomes: \begin{equation} L \propto A \cdot T\, \end{equation} We assume that the area of the pressure bump scales as $\mathrm{A} \propto r \cdot \sigma_{d}$, where $\mathrm{r}$ is the radius and $\mathrm{\sigma_{d}}$ is the width of the pressure bump in the dust. The width of the dust ring depends on the width of the gas as \begin{equation} \sigma_{d} \propto \sigma_{g} \cdot \sqrt{\frac{\alpha}{St}}\, \end{equation} following \citet{Dullemond2018DSHARP}, where $\mathrm{St}$ is the Stokes number; this reflects the balance between dust trapping and turbulent diffusion in the ring. Using the gas width relation measured in \autoref{subsub:width} and the approximately linear scaling of $\mathrm{\sigma_{d}}$ with $\mathrm{h}$, we find that the luminosity ($\mathrm{L_{mm}}$) scales with the radius as: \begin{equation} \label{eq:luminosity_hydro} L_{mm} \propto r_{eff}^{5/4} \end{equation} which is the relation plotted as the red line in \autoref{fig:heatmaps_ricci}, \autoref{fig:heatmaps_dsharp}, \autoref{fig:heatmaps_d50} and \autoref{fig:heatmaps_d90}. We find that this theoretical estimate explains well the size-luminosity scaling seen when strong sub-structure is present. However, towards larger radii, this relation slightly over-predicts the luminosity; for example, one infers a shallower slope from the heat-map, because towards large radii our fitting line lies above the main bulk of the simulations. For the complete derivation of both sections, we refer the reader to \autoref{app:width_slr_derivation} and \autoref{app:scaling_relation}. \section{Discussion} \label{sec:discussion} To summarize the results discussed above, we have explored the observed trend between the (sub-)mm disk continuum luminosity ($\mathrm{L_{mm}}$) and the $\mathrm{68\%}$ effective radius ($\mathrm{r_{eff}}$) of protoplanetary disks.
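For reference, the $\mathrm{68\%}$ effective radius can be computed directly from the cumulative flux of an intensity profile; the toy ring profile below is purely illustrative:

```python
import numpy as np

# Toy intensity profile: a single Gaussian ring at 60 au with width 10 au
# (illustrative numbers, not a model output).
r = np.linspace(0.1, 300.0, 3000)                    # radius [au]
intensity = np.exp(-0.5 * ((r - 60.0) / 10.0) ** 2)

# Cumulative flux ~ integral of I(r) * 2 * pi * r dr, normalized to 1.
flux_cum = np.cumsum(intensity * 2.0 * np.pi * r)
flux_cum /= flux_cum[-1]

# r_eff: the radius that encloses 68% of the total flux.
r_eff = r[np.searchsorted(flux_cum, 0.68)]
print(f"r_eff = {r_eff:.1f} au")
```

For a ring-dominated disk the effective radius sits just outside the ring center, which is why planet-induced bumps pin $\mathrm{r_{eff}}$ near the bump location.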
Following the size-luminosity relation (SLR) obtained from \citet{tripathi2017millimeter} and \citet{Andrews2018a}, $\mathrm{L_{mm}\propto r_{eff}^{2}}$, we have shown which initial conditions are favorable for a disk to remain on the SLR for a time-span of \SI{300}{kyr} - \SI{3}{Myr}. We explored the effect of every parameter on the disk evolution tracks in the SL Diagram, obtained a visual representation of how the disk population moves on the same diagram, and found relations between the parameters (\autoref{app:corner}). We present a different correlation for disks that are dominated by strong sub-structures compared to disks that have a monotonically decreasing surface density profile. Moreover, we investigated the effect of different opacity models with compact or porous grains, and we conclude that the opacity model is a major factor in reproducing the observational results. In the following sections we briefly recap these results and discuss some of the implications in detail. \subsection{Dominant parameters} \label{disc:dominant_params} In summary, our results imply that the most dominant parameters for the evolution of disks are the viscosity parameter $\mathrm{\alpha}$, the initial disk mass $\mathrm{M_{d}}$, the location of a giant planet if present, and the opacity model that is used to derive the continuum intensity. The disks that match the SLR are characterized by low turbulence ($\mathrm{\alpha} \leq 10^{-3}$) and high disk mass ($\mathrm{M_{d}} \geq 2.5\cdot10^{-2} M_{\star}$), and are affected strongly by the existence of a strong trap (in this study caused by a giant planet). Turbulence values larger than $\mathrm{\alpha} = 10^{-3}$ lead to smaller grains due to fragmentation and consequently to less luminous disks that do not enter the SLR. Moreover, particles are then diffused more efficiently and, due to their smaller size, trapped less efficiently. Finally, the dust trap is not as pronounced if the $\mathrm{\alpha}$-viscosity is higher.
All of this acts in concert to make dust trapping ineffective, and causes the disks to behave as if they were smooth (see \autoref{fig:mass_acc_1e-3} in \autoref{sec:results}). If the fragmentation velocity is large enough, there can be simulations that stay on the SLR, but we consider these disks to have unrealistic initial conditions according to the known literature. A high initial disk mass places the disks initially either inside or above the SLR (i.e. too bright for the given size) until they reach a quasi-steady state. This allows them to move to lower luminosities while they evolve up to $\SI{3}{Myr}$ and still remain in the SLR. This effect is aided by the right choice of opacity model and grain porosity, as compact grains shift the position of disks to larger luminosity (see \autoref{disc:opacity}). On the other hand, most of the less massive and smaller disks end up below the correlation at smaller luminosities, characterizing them as discrepant. Planets can alter the evolution path of a disk on the SL Diagram significantly. An effectively trapping planet causes the disk to quickly settle to a quasi-steady state on the SL Diagram, leading to a shorter track and thus delaying the evolution of the disk towards lower luminosity and radius. Disks with a massive planet in the inner part of the disk ($\mathrm{1/3 \ensuremath{r_\mathrm{c}}\xspace}$) have more extended evolutionary tracks and are overall less luminous. In contrast, disks with a planet in the outer part ($\mathrm{2/3 \ensuremath{r_\mathrm{c}}\xspace}$) have shorter tracks and are more luminous, if all the other parameters remain the same. This can be explained by the planet trapping a large part of the disk solids at large radii. When two planets are included, both of them contribute to the luminosity, while the outer one defines the effective radius of the disk.
Overall, the existence of planets increases the fraction of matching simulations on the SLR, but this result is also a function of the opacity (see \autoref{disc:opacity}). \subsection{Position along the SLR} The position of a disk along the SLR is determined mainly by the disk mass $\mathrm{M_{d}}$ and the disk size \ensuremath{r_\mathrm{c}}\xspace, as can be seen in \autoref{fig:param_effects} and \autoref{fig:parameter_tracks}. More massive and larger disks are located in the top right part of the SL diagram, while smaller and less massive ones lie in the middle and left parts. In \autoref{fig:lum_R10}, we show the kernel-density estimate (kde) of the luminosity of all matching simulations for four different cases and three different snapshots, with the observational kde from \citet{Andrews2018a} plotted in black. The brightest disks that stay on the SLR are the ones that contain planets located in the outer part of the disk (2/3 of the \ensuremath{r_\mathrm{c}}\xspace, yellow and green lines). A planet in the outer region leads to larger and more luminous disks, as explained in \autoref{subsec:planets} and \autoref{disc:dominant_params}. Massive planets at 1 and \SI{3}{Myr} reproduce the peak of the observed brightness distribution, but overall produce too many bright and too few faint disks. The peak of the luminosity distribution for smooth disks is generally at much lower luminosities. Given these results, it is conceivable that the observed sample consists of two distinct categories of disks: a brighter and larger category shaped by massive outer planets that trap the dust, and a second category in which the planets are not massive enough to trap the dust effectively, so that those disks evolve similarly to a smooth disk.
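A kernel-density estimate like the one shown in \autoref{fig:lum_R10} can be built directly from the matching luminosities; the sketch below uses synthetic samples and an assumed bandwidth:

```python
import numpy as np

# Synthetic luminosity samples mimicking two populations of matching disks;
# the sample sizes, means and bandwidth are assumptions, not model output.
rng = np.random.default_rng(42)
log_L = np.concatenate([
    rng.normal(-0.5, 0.3, 200),   # mock bright, planet-hosting population
    rng.normal(-1.5, 0.3, 100),   # mock faint, smooth population
])

def kde(grid, samples, bandwidth=0.15):
    """Sum of Gaussian kernels centered on each sample, normalized to a pdf."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(-3.0, 1.0, 400)
density = kde(grid, log_L)
peak = grid[np.argmax(density)]
print(f"distribution peaks near log L = {peak:.2f}")
```

With more bright than faint samples, the global peak sits at the bright mode, which is how a planet-dominated population would shape the kde peak in the figure.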
\begin{figure*} \centering \includegraphics[width=\textwidth]{plots/Luminosity_R10-0_q25.pdf} \caption[]{Kernel-density distribution of the luminosity of all matching simulations using the \citet{Ricci_2010} R10-0 opacity model, for four different cases and three different snapshots, from $\mathrm{300kyr-3Myr}$. The black line refers to the disks from the \citet{Andrews2018a} sample. Disks with planets have a larger luminosity, while smooth disks have a low luminosity at $\mathrm{3Myr}$. When two planets are included, the luminosity is larger than with a single planet.} \label{fig:lum_R10} \end{figure*} \subsection{Is there a preferable opacity model?} \label{disc:opacity} We used different opacity models for our study. As a base model, we made use of the composition of the \citet{Ricci_2010} opacities, but for compact grains (i.e. no porosity) as in \citet{rosotti2019millimetre} (noted as R10-0 throughout the paper). Moreover, we used the non-porous \citet{Birnstiel_DSHARP} opacities (DSHARP) and varied the porosity between $10\%$ (DSHARP-10), $50\%$ (DSHARP-50) and $90\%$ (DSHARP-90). We find that, independently of the model used, relatively compact grains (porosity $\mathrm{<50\%}$) are preferred over highly porous grains. With compact grains, the initial position of the disks on the SL Diagram is shifted towards higher luminosity, giving them more time to evolve on the SLR within our chosen time-span. Disks with the DSHARP opacity are generally less bright and end up below the SLR. We remind the reader that the opacity at our wavelength is a factor of $\mathrm{\sim 8.5}$ larger in the R10-0 case compared to DSHARP at the opacity cliff location ($\mathrm{\sim 0.1-1mm}$), with the difference mainly stemming from the choice of carbonaceous material (\citealp{Zubko1996MNRAS.282.1321Z} versus \citealp{Henning1996}; see the comparison in \citealp{Birnstiel_DSHARP}).
However, this point holds only for smooth disks and disks with weak sub-structures (where a disk behaves as if smooth). If this is the case, then only compact grains can explain the SLR, while when sub-structures are strong, any of the opacity models and porosities tested in this work can explain the sub-structured SLR. The latter applies because most of the sub-structures become optically thick. One can argue that alternative compositions that also exhibit a strong opacity cliff and a high opacity would be equally suitable. \subsection{Which disks populate the SLR?} We have shown that when strong traps (i.e., massive planets) are included, disks follow a different SLR than smooth ones. Based on measurements of the width of the pressure maxima formed by the planet in hydrodynamical simulations (see \autoref{app:width_slr_derivation}), we derived a theoretical prediction for disks with sub-structures (\autoref{subsub:scaling_relation}). Smooth disks with compact or slightly porous grains seem to follow the \citet{Andrews2018a} relation $\mathrm{L_{mm} \propto r_{eff}^{2}}$, while disks with massive planets follow a relation $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. This result does not imply that the observed disks from the \citet{tripathi2017millimeter} sample are all free of substructure, but rather that they might not show sub-structures as strong and optically thick as those of, e.g., AS 209 or HD 163296 \citep[see][]{Huang2018}. \autoref{fig:heatmaps_ricci} shows how smooth disks follow the SLR, while disks with strong sub-structures follow a different relation that intersects the SLR at the bright end (top right part of the SLR). In contrast, the less luminous disks follow the SLR if they are smooth, but disks of the same effective size with substructure are too luminous (cf. bottom left part of the SLR). In \citet[][Fig.
6]{Hendler2020ApJ...895..126H}, disks seem to follow a universal relation (close to the SLR) in all star-forming regions (Ophiuchus, Tau/Aur, Lupus, Cha I) except USco, the oldest region ($\mathrm{\sim 10\,Myr}$). The observed SLR therefore might flatten with the age of the region. We could in principle examine this trend, since we evolve our simulations to $\rm{10\,Myr}$, but our models do not include photo-evaporation, which would lead to uncertainties in the results. For example, at the age of USco the detectable disk fraction is $\rm{<20\%}$, while in our models it would be $\rm{100\%}$. This raises a question: if a planet is not massive at early times, but around \SI{1}{Myr} reaches a planet/star mass ratio of $\mathrm{q=10^{-3}}$, will the disk follow the observed SLR or the SLR with strong sub-structures? According to the analysis of the evolution tracks in \autoref{sub:evolution_tracks}, most of the small and less luminous disks that do not initially have a giant planet drift towards lower radii and luminosities, and even below the SLR. Therefore, strong sub-structures need to form in the first $\sim$\SI{1}{Myr} for the disk to follow the $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$ relation. This might imply that in most star-forming regions, strong sub-structures do not form early enough for small disks, or that the sub-structure is weak. On the other hand, bright and large disks can very well show strong sub-structures and follow the SLR at the same time. This is indeed the case for the DSHARP sample \citep{Andrews_DSHARP_2018ApJ...869L..41A}, which is biased towards bright disks and shows significant sub-structures in every source. The latter can be confirmed from \autoref{fig:lum_R10}, as discussed in the previous section: the brightest disks that stay on the SLR are the ones that contain planets located in the outer part of the disk (yellow and green lines). The SLR can thus be explained if there is a mixture of both smooth and strongly sub-structured disks.
Smooth disks always follow the SLR, as shown already in \autoref{fig:heatmaps_ricci}, while the bright sub-structured disks populate the upper right part of the SLR (Figures \ref{fig:heatmaps_ricci}, \ref{fig:heatmaps_dsharp}, and \ref{fig:heatmaps_d90}). Disks with sub-structures that have large \ensuremath{r_\mathrm{c}}\xspace and low disk mass $\mathrm{M_{d}}$ populate the lower right part of the plot, under the SLR. Such disks are not favored by \citet{Andrews2018a}, who find a tentative positive correlation between the mass of the star (or the disk) and the size of the disk. If the disks that violate this mass-size correlation are excluded from the plot, then the SLR could be reproduced by both sub-structured disks that occupy the upper right part and smooth (or weakly sub-structured) disks that occupy the lower left part of the SLR. Our results seem to be in agreement with the observational classification of \citet{VanDerMarel2021AJ....162...28V}, who suggest that all bright disks should have sub-structures formed by giant planets. Moreover, the SLR for the sub-structured disks is independent of the opacity model, but it slightly over-predicts the luminosity for the very large disks (see \autoref{disc:opacity} for more about the opacity). \subsection{$\mathrm{L_{mm} - M_{\star}}$ relation} \label{disc:Lmm-Mstar} In \autoref{subsub:Mstar_tracks} we discussed that the stellar mass ($\mathrm{M_{\star}}$) is directly correlated with the disk mass ($\mathrm{M_{d}}$) and that the disk temperature is only a weak function of $\mathrm{L_{\star}}$ (and therefore $\mathrm{M_{\star}}$). The fact that the disk mass scales with the stellar mass implies that the luminosity ($\mathrm{L_{mm}}$) scales with the stellar mass as well. In \autoref{fig:L_Mstar} the $\mathrm{L_{mm}} - M_{\star}$ relation is shown for three different models for all matching simulations.
The smooth case (yellow lines), a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ at $\mathrm{1/3\,\ensuremath{r_\mathrm{c}}\xspace}$ (green lines), and a planet with the same mass ratio at $\mathrm{2/3\,\ensuremath{r_\mathrm{c}}\xspace}$ (red lines) are shown. The markers define the median value of the luminosity at \SI{1}{Myr}, and the error bars span the 25th to 75th percentiles. The blue line is the $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ correlation from \citet{Andrews2018a}, a correlation that is consistent with those found in previous continuum surveys of comparable size and age \citep[][]{Andrews2013, Ansdell2016ApJ...828...46A, Pascucci2016ApJ...831..125P}. For all of our models, the correlation is not as steep as that of \citet{Andrews2018a}, but the cases with strong sub-structures have a steeper profile than the smooth one. The reason is that no correlation between disk size and stellar mass was imposed in the parameter grid. If a size-mass correlation as inferred by \citet{Andrews2018a} were imposed, the mass-luminosity relation would be expected to steepen, as disks with optically thick sub-structures would be larger and therefore brighter. However, reproducing the observed mass-luminosity trend will be part of a future population synthesis study. A similar manifestation of the same trend can be seen in \autoref{fig:corner_13rc}. In the $\mathrm{\ensuremath{r_\mathrm{c}}\xspace-M_{d}}$ panel, the white dots mark the mean value of the characteristic radius for every disk mass. In order for the correlation to reproduce the observations, more massive disks should initially have been larger. In other words, large, low-luminosity disks would be expected in the lower right of the SL diagram, but are not observed. Preliminary results indicate that these disks need to have low disk mass $\rm{(M_{d}<10^{-2}\,M_{\star})}$ and be large in size $(\rm{\ensuremath{r_\mathrm{c}}\xspace>\SI{150}{au}})$.
Moreover, the turbulence parameter should be relatively small ($\alpha \rm{\leq10^{-3}}$), otherwise the disk behaves as smooth and follows the SLR. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/L-M_ricci_1Myr_q25.pdf} \caption[]{$\mathrm{L_{mm}} - M_{\star}$ relation at $\mathrm{1\,Myr}$ for three different cases for the matching simulations: the smooth case (yellow lines), a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ at $\mathrm{1/3\,\ensuremath{r_\mathrm{c}}\xspace}$ (green lines), and a planet with the same ratio at $\mathrm{2/3\,\ensuremath{r_\mathrm{c}}\xspace}$ (red lines). The points define the median value of the luminosity at $\mathrm{1\,Myr}$, and the error bars span the 25th to 75th percentiles. The blue line is the $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ correlation from \citet{Andrews2018a}.} \label{fig:L_Mstar} \end{figure} \subsection{Scattering} Scattering is included in our simulations as introduced in \autoref{sub:Observables}. Compared to the case where only the absorption opacity is used, the difference is minimal and can be observed only in a few cases. With the inclusion of scattering, the originally brightest disks (above the SLR) tend to move towards lower luminosity (down in the SL diagram). This happens for disks that are optically thick, hence for the ones that contain planets. This effect favors the SLR and allows slightly more disks ($\rm{\sim 2\%}$) to enter the selected region. However, for moderately optically thick disks the emission is larger, and a small fraction of disks moves up (towards larger luminosity) in the SL diagram (Fig. 4 in \citealp{Birnstiel_DSHARP}). This happens because the derived intensity (\autoref{eq:intensity}) does not saturate to the Planck function, but to a slightly smaller value for a non-zero albedo. This is the well-known effect that scattering makes objects appear cooler than they really are.
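The limiting behavior of \autoref{eq:intensity} can also be checked numerically. The short sketch below (with illustrative slab values, not taken from our parameter grid) demonstrates the small-optical-depth limit, where the emergent intensity becomes independent of the scattering opacity:

```python
import numpy as np

# Physical constants (cgs)
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16

def planck_nu(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# Illustrative slab (assumed values), optically thin in total optical depth
sigma_d = 1e-4          # dust surface density [g cm^-2]
kappa_abs = 4.0         # absorption opacity [cm^2 g^-1]
kappa_sca = 2.0         # scattering opacity [cm^2 g^-1]
T_d, mu = 20.0, 1.0     # dust temperature [K], cos(inclination)
nu = c / 850e-4         # frequency at lambda = 850 micron

dtau = (kappa_abs + kappa_sca) * sigma_d          # total vertical optical depth
eps_eff = kappa_abs / (kappa_abs + kappa_sca)     # effective absorption probability

# Optically thin limit with scattering: I -> eps_eff * B_nu * dtau / mu ...
I_thin = eps_eff * planck_nu(nu, T_d) * dtau / mu
# ... which equals the pure-absorption result (kappa_sca = 0, kappa_abs unchanged)
I_abs = planck_nu(nu, T_d) * kappa_abs * sigma_d / mu
print(I_thin / I_abs)   # -> 1.0
```

The product $\epsilon_{\nu}^{\mathrm{eff}} \Delta\tau$ reduces to $\kappa_{\nu}^{\mathrm{abs}}\Sigma_{\mathrm{d}}$, which is why scattering drops out of the thin limit.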
On the other hand, for small optical depths ($\rm{\tau\ll1}$) the effect of scattering is insignificant, because the intensity (\autoref{eq:intensity}) approaches $\rm{I_{\nu}^{out}\longrightarrow \epsilon_{\nu}^{eff}B_{\nu}(T_{d})\Delta \tau /\mu}$, which is identical to the solution when $\rm{\kappa_{\nu}^{sca}}$ is set to zero while $\rm{\kappa_{\nu}^{abs}}$ is kept unchanged, as shown also in \citet{Birnstiel_DSHARP}. The effect of scattering also depends on the albedo ($\rm{\eta_{\nu}=1-\epsilon_{\nu}^{eff}}$). For the compositions we use, the maximum effective albedo is $\rm{0.57}$ for R10-0 and $\rm{0.82}$ for DSHARP, while it can reach up to $\rm{\sim 0.97}$ for DSHARP-90. For these compositions the effect of scattering is never more than a factor of $\rm{\sim 1.7}$ at a particle size of $\rm{1\,mm}$. The effect of scattering is stronger if we increase the albedo, but for any plausible composition it is effectively negligible. However, we obtain different results compared to \citet{Zhu2019ApJ...877L..18Z}. In that work it is discussed that a completely optically thick disk with high albedo ($\rm{0.9}$) can be constructed, which then lies along the SLR with the right normalization (because the high albedo and high optical depth lower the luminosity). Our findings, however, show that we cannot reach this configuration from an evolutionary perspective. For smooth disks the dust drifts to the inner part of the disk and the disk is no longer optically thick. On the other hand, disks with sub-structures create only optically thick rings rather than being completely optically thick everywhere. \subsection{Prediction for longer wavelengths} A recent study \citep{Tazzari2021MNRAS.506.2804T} showed a flatter SLR at $\rm{3.1\,mm}$ ($\rm{L_{mm} \propto r_{eff}^{1.2}}$), confirming that emission at longer wavelengths becomes increasingly optically thin. We performed a series of simulations at $\rm{\SI{3.1}{mm}}$ to compare with these results.
Disks are fainter and smaller at $\rm{\SI{3.1}{mm}}$. Two effects contribute to this. First, the value of the opacity decreases and the opacity cliff moves to larger particle sizes, which makes the disks optically thinner in comparison to the $\rm{\SI{850}{\mu m}}$ case. Second, the intensity at $\rm{\SI{3.1}{mm}}$ is lower according to the Planck spectrum. Therefore the luminosity is smaller. In terms of the SLR, the slope for the smooth disks does not change, since these disks are never optically thick; all disks simply move towards lower luminosity and smaller radii. On the other hand, the sub-structured disks that cover the SLR do not change in slope either, but the large and faint disks (right part of the heat-map in \autoref{fig:heatmaps_ricci}) show a larger spread in luminosity compared to the shorter wavelength. Disks that are very optically thick and moderately optically thick have the same luminosity at $\rm{\lambda=850\,\mu m}$, but at $\rm{\SI{3.1}{mm}}$, because of the decrease in opacity, the former category is still optically thick while the latter no longer is, leading to a decrease in luminosity. Therefore the SLR can become flatter if we take into account these disks that do not belong to the SLR. With our models, the flatter relation of \citet{Tazzari2021MNRAS.506.2804T} could be explained by sub-structured and large smooth disks. The heat-map in \autoref{app:heatmap} confirms this: there we plot the simulations at $\rm{\SI{3.1}{mm}}$, using the R10-0 opacity model, and over-plot the SLR from \citet{Tazzari2021MNRAS.506.2804T}, $\rm{L_{mm} \propto r_{eff}^{1.2}}$. Sub-structured disks can very well explain this relation, since it is similar to the scaling relation we calculated for disks with strong substructures in \autoref{subsub:scaling_relation}.
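The contribution of the Planck spectrum alone to this dimming can be quantified with a short estimate; the dust temperature of \SI{20}{K} used below is an assumed, representative value rather than one from our grid:

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_nu(lam_cm, T):
    """Planck function B_nu(T) evaluated at wavelength lam_cm [cm]."""
    nu = c / lam_cm
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

T_d = 20.0  # assumed representative dust temperature [K]
ratio = planck_nu(0.085, T_d) / planck_nu(0.31, T_d)
print(f"B_nu(850 um) / B_nu(3.1 mm) = {ratio:.1f}")  # ~9.5 at 20 K
```

So even before the opacity drop is considered, the Planck factor alone dims the emission by roughly an order of magnitude between the two wavelengths.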
Small and smooth disks, on the other hand, cannot enter the relation because they are too faint, since their particles cannot grow to a size where the opacity cliff is at $\rm{\SI{3.1}{mm}}$. We note the possibility that the flatter relation is due to an observational bias towards large disks, which tend to be sub-structured. If small and faint disks are included in the sample, the observed SLR could be steeper and closer to the SLR from \citet{Andrews2018a}. Future observational surveys should further investigate this possibility. \subsection{Limitations} \label{disc:limitations} It is important to keep in mind the limitations of this paper. The time-span used for the simulations displayed in this paper and the figures is from $\mathrm{\SI{300}{kyr}}$ to $\mathrm{\SI{3}{Myr}}$. This does not exclude the possibility that some disks with high disk mass evolve substantially on the SL diagram between $\mathrm{\SI{10}{Myr}}$ and $\mathrm{\SI{20}{Myr}}$. Disk dissipation has not been modeled in this paper and is left for future work. In \autoref{fig:d2g_R10} we show the kernel density estimate (kde) of the global dust-to-gas ratio\footnote{The dust-to-gas ratio in the disk changes with radius and time; this quantity is simply $\rm{M_{dust}/M_{gas}}$.} for three snapshots between $\mathrm{\SI{300}{kyr}}$ and $\mathrm{\SI{3}{Myr}}$ and for three different cases. Smooth disks lose dust relatively quickly due to radial drift, while disks with planets retain a much higher dust-to-gas ratio because of the strong trap. In the second panel there are cases where the dust-to-gas ratio increases above the initial $\mathrm{0.01}$. These are sub-structured disks with intermediate $\mathrm{\alpha}$-values, high fragmentation velocity ($\mathrm{>\SI{1000}{cm/s}}$), and small size ($\mathrm{<\SI{60}{au}}$). The gas is removed more quickly than the dust, leading to a larger dust-to-gas ratio.
A large $\mathrm{\alpha}$ would lead to less trapping and the dust would drift as usual, while a low $\mathrm{\alpha}$ would mean that the disk does not evolve significantly. As mentioned in \autoref{sec:methods}, the stellar luminosity is not evolved in the simulation. If it were, the disk luminosity would approximately scale linearly with the stellar luminosity, further modulated by the resulting changes in the dust evolution. We therefore expect a general shift of the disks towards lower luminosities, but with the trends that we have explored in \autoref{sec:results} remaining the same. Since most of the simulations would need to be brighter to remain on the SLR, a change in the luminosity favors higher $\rm{\alpha}$-values and smaller fragmentation velocities than the ones shown before. An example of a heatmap is shown in \autoref{app:heatmap}. In our models the planets are already included at the beginning of the simulations and they open a gap in the initially smooth surface density profile relatively quickly. Realistically, the time-scales in the outer part of the disk are much longer, and the time-scale for planet formation changes with the distance to the star \citep{Johansen2017AREPS..45..359J}. We would therefore expect the inner planet to form first and the outer one later, as has been suggested by, e.g., \citet{pinilla2015A&A...580A.105P}. Since both planets start at the same time in our models, the inner one might in reality trap more of the total disk mass, and the outer bump might be less bright than in our models. This will be addressed in future work by introducing the outer planet later in the simulation. \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/d2g_R10-0_q25.pdf} \caption[]{Evolution of the global disk dust-to-gas ratio of all matching simulations with the \citet{Ricci_2010} R10-0 opacity model, for three different cases and three different snapshots, from $\mathrm{\SI{300}{kyr}}$ to $\mathrm{\SI{3}{Myr}}$.
From left to right: the smooth case, a disk with a planet at $\mathrm{1/3\,\ensuremath{r_\mathrm{c}}\xspace}$, and a disk with a planet at $\mathrm{2/3\,\ensuremath{r_\mathrm{c}}\xspace}$. Different limits are used on the x-axis to highlight the evolution of the dust-to-gas ratio. The initial dust-to-gas ratio is $\mathrm{0.01}$. For the smooth case the dust-to-gas ratio decreases by three orders of magnitude up to $\mathrm{\SI{3}{Myr}}$. When a planet is included, the disk dust mass is retained, leading to a much higher dust-to-gas ratio. In the case where the planet is in the inner part of the disk (middle column), there are cases at $\mathrm{\SI{3}{Myr}}$ where the ratio is larger than $\mathrm{0.01}$; there, the gas is removed faster than the dust.} \label{fig:d2g_R10} \end{figure*} \section{Conclusions} \label{sec:conclusions} In this paper we have performed a large population study of 1D models of gas and dust evolution in protoplanetary disks, to study how the effective radius and the disk continuum emission evolve with time. We varied a range of initial parameters and included both smooth disks and disks that contain planets. We compared our results with the observed trend between continuum sizes and luminosities from \citet{Andrews2018a} and were able to constrain the initial conditions. Our findings are as follows. \begin{enumerate} \item Disks with strong traps (i.e., massive planets) follow a different SLR than smooth disks. Smooth disks follow the \citet{Andrews2018a} relation $\mathrm{L_{mm} \propto r_{eff}^{2}}$, as shown by \citet{rosotti2019millimetre}, while disks with massive planets follow $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. This could mean that not all disks in the \citet{tripathi2017millimeter} and \citet{Andrews2018a} joint sample have sub-structure as significant as, e.g., HD 163296.
We explained this result with a simple analytical derivation: if the gas width scales as we measured it from FARGO-3D, and if the dust width scales as we expect from trapping and fragmentation, then theoretically the luminosity scales as $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. \item Whether disks follow the SLR depends heavily on the opacity model. When the DSHARP \citep{Birnstiel_DSHARP} opacity is used, disks are not as luminous in the first $\rm{\SI{300}{kyr}}$ and the majority of them end up below the SLR. Especially for smooth disks, the DSHARP opacities produce a much lower number of simulations on the SLR compared to models using the \citet{Ricci_2010} R10-0 opacities ($\rm{0.8\%}$ with DSHARP versus $\rm{29.6\%}$ with R10-0). Therefore, with this opacity model, only disks with sub-structures can populate the SLR. On the other hand, R10-0 opacities can reproduce both disks with and without sub-structures, since the absolute value of the opacity at $\SI{850}{\mu m}$ is $\rm{\sim 8.5}$ times larger than DSHARP for particle sizes around $\mathrm{\sim \SI{0.1}{mm}}$ (the position of the opacity cliff), and the disks become luminous enough to enter the relation. \item The SLR is more widely populated when sub-structures are included, in contrast to a tight correlation for smooth disks. Sub-structured disks cover mostly the upper right part (large and bright disks) of the SL diagram, while the lower left (small and faint) is covered by smooth disks. This is an indication that the SLR can be explained if there is a mixture of both smooth and strongly sub-structured disks. \item The grain porosity can drastically affect the evolution track of a disk. Throughout our models, relatively compact grains ($\mathrm{<50\%}$ porosity) are preferred for simulations that follow the SLR. If we use slightly porous grains ($\mathrm{10\%}$ porosity) by altering the DSHARP opacity, the effect is insignificant, as the shape of the opacity cliff remains roughly the same.
On the contrary, for semi-porous ($\mathrm{50\%}$) and porous ($\mathrm{90\%}$) grains the opacity cliff flattens out, leading to disks with low luminosity. Only compact grains can explain the SLR for smooth disks, while any porosity can explain it when strong sub-structures are included. \item A high initial disk mass gives a higher probability for a simulation to follow the SLR. In this case, the disk starts above the SLR (bright) until it reaches a stable state at around $\mathrm{\sim \SI{300}{kyr}}$. By this time it enters the relation and, depending on the other initial conditions, it either remains there (and is considered a matching simulation) or leaves it. \item There is a preference towards low $\mathrm{\alpha}$-values (smaller than $\mathrm{10^{-3}}$). This result is in line with other, more direct methods of determining $\mathrm{\alpha}$ \citep[e.g.,][]{flaherty_2018ApJ...856..117F}. There are multiple reasons for this tendency. For $\mathrm{\alpha}\geq 2.5\cdot 10^{-3}$, disks tend to be more fragmentation dominated; the particle size decreases and the particles are consequently not trapped by the pressure bump (if any), placing the disks outside the relation. Moreover, the diffusivity increases and the peak of the pressure bump smears out, leading to inefficient trapping. On the other hand, if $\mathrm{\alpha}$ is small, the ring that forms becomes too narrow and the disks tend to have lower luminosity. \item The location of the planet as a function of the characteristic radius plays a major role in the final outcome. If a planet is included in the inner part of the disk ($\mathrm{1/3\,r_{c}}$), the disk has to be significantly larger in order to retain the correct ratio of luminosity and effective radius to stay on the SLR. In contrast, when an outer planet ($\mathrm{2/3\,r_{c}}$) is included, the disk tends to be smaller in size.
When two planets are included, the location of the outermost one defines the size of the disk, but the combination of the two defines the luminosity. These results are also affected by the opacity model. \item We expect a less extended evolution track when sub-structure is included. The pressure bump halts the dust from drifting further in, thereby constraining the size of the disk and not allowing it to evolve further along the SLR. Furthermore, when two planets are included, there is an indication that the inner planet should form first, since otherwise there would not be a large enough reservoir of material for it to form. \item We are not able to construct optically thick disks with high albedo ($\rm{0.9}$) that lie along the SLR through an evolutionary procedure, as opposed to \citet{Zhu2019ApJ...877L..18Z}. Smooth disks are not optically thick due to radial drift, while disks with sub-structure create only optically thick rings rather than a uniformly optically thick distribution. \item We chose different gap profiles based on \citet{Kanagawa2016} and compared them against hydrodynamical simulations. We conclude that the depth of the gap does not play an important role in the evolution of the disk on the SLR, as long as the planet is massive enough to stop the particles from drifting; the width of the gap is the important parameter instead (see \autoref{app:gap_profiles}, where we compare the different profiles for different parameters). \end{enumerate} \begin{acknowledgements} T.B. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 714769 and funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 325594231 under Ref no. FOR 2634/1. Furthermore, this research was supported by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311. G.R.
acknowledges support from the Netherlands Organisation for Scientific Research (NWO, program number 016.Veni.192.233) and from an STFC Ernest Rutherford Fellowship (grant number ST/T003855/1). \end{acknowledgements} \bibliographystyle{aa} \section{Introduction} \label{sec:intro} Planet formation is a process that is still far from having a complete, robust, and widely accepted theory. Multiple theories aim to explain the way planets form \citep[e.g.,][]{Benz2014prpl.conf..691B,Kratter2016ARA&A..54..271K,Johansen2017AREPS..45..359J}. To understand planet formation, high resolution observations and large surveys of the birth places of planets, the protoplanetary disks (PPDs), are essential. In recent years the Atacama Large Millimeter/submillimeter Array (ALMA) has not only provided a sizable number of highly resolved observations of PPDs, but due to its high sensitivity it has also enabled several large intermediate resolution (on the order of \SI{100}{mas}) surveys of different star-forming regions \citep[for a review see][and references therein]{Andrews2020}, providing crucial information on population properties such as distributions of disk sizes, fluxes, or spectral indices for disks across stars of different masses and ages and across star-forming regions in different environments. Highly resolved observations and large intermediate resolution surveys are complementary, and both are essential for understanding the connections between key properties of PPDs. One of the important diagnostics is the continuum luminosity ($\mathrm{L_{mm}}$) at (sub-)millimeter wavelengths, since it can trace the mass in solid grains \citep{Beckwith1990AJ.....99..924B}, i.e., the amount of material available to form planets. Assuming a constant dust-to-gas ratio (usually $\rm{0.01}$, based on observations of the interstellar medium; see \citealp{Bohlin1978}), the dust mass can be converted to the total disk mass (dust and gas).
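Under the standard optically thin assumption, this conversion follows $\mathrm{M_{dust} = d^{2} F_{\nu} / (\kappa_{\nu} B_{\nu}(T_{d}))}$. The sketch below evaluates it for illustrative numbers; the opacity, temperature, flux, and distance are assumed values chosen for demonstration, not taken from any particular source:

```python
import numpy as np

# cgs constants
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16
pc, M_sun = 3.086e18, 1.989e33

def planck_nu(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass(F_mJy, d_pc, kappa_nu=3.5, T_d=20.0, lam_cm=0.085):
    """Optically thin dust mass M_dust = d^2 F_nu / (kappa_nu B_nu(T_d)).
    kappa_nu [cm^2 g^-1] and T_d [K] are illustrative assumptions."""
    F_nu = F_mJy * 1e-26             # mJy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * pc                    # distance [cm]
    return d**2 * F_nu / (kappa_nu * planck_nu(c / lam_cm, T_d))

# e.g., a hypothetical 100 mJy source at 140 pc:
m_d = dust_mass(100.0, 140.0) / M_sun
print(f"M_dust ~ {m_d:.1e} M_sun")   # total disk mass ~ 100x this for d2g = 0.01
```

With these assumed values the dust mass comes out at a few times $\mathrm{10^{-5}\,M_{\odot}}$, a typical order of magnitude for a bright Class II disk.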
Surveys that measure the disk dust mass ($\mathrm{M_{dust}}$) have been used to correlate this property with the mass of the host star ($\mathrm{M_{\star}}$), and a linear relation has been found between them \citep{Andrews2013} that appears to steepen with time \citep[][]{Pascucci2016ApJ...831..125P,Ansdell2017AJ....153..240A,Barenfeld2016ApJ...827..142B}. Furthermore, a steeper than linear relationship has been observed between $\mathrm{L_{mm}}$ and $\mathrm{M_{\star}}$ \citep[][]{Andrews2013,Pascucci2016ApJ...831..125P,Ansdell2016ApJ...828...46A}. Recently, theoretical studies have started to explain these correlations \citep[e.g.,][]{Pascucci2016ApJ...831..125P,stammler2019,pinilla2020A&A...635A.105P} using numerical dust evolution models that include grain growth, radial drift, fragmentation of dust particles, and particle traps \citep[e.g.,][]{Birnstiel2010,BKE2012,Krijt2016}. These observations of dust thermal continuum emission are crucial for the characterization of disk evolution. However, the methods described above that relate dust emission to physical quantities like total disk masses carry uncertainty arising from multiple assumptions, such as the dust opacity and the disk temperature, which may vary for every disk \citep[e.g.,][]{Hendler2017,Ballering2019}. The grain opacity depends on the unknown particle size distribution, composition, and particle structure \citep[e.g.,][and references therein]{Birnstiel_DSHARP}, although considerable efforts have been undertaken in the modeling \citep[e.g.,][]{Wada2008,Wada2009,Okuzumi2009,Seizinger2013a,Seizinger2013b} and experimental study \citep[e.g.,][]{Blum2008,Guttler2010,Gundlach2015} of aggregation and in the computation of the optical properties of aggregates \citep[e.g.,][]{Kataoka2014,Min2016,Tazaki2016}. Another important property for the characterization of a disk population is the disk size.
Viscous theory \citep{LBP1974} predicts that a fraction of the disk mass keeps moving outward, a process known as viscous spreading, suggesting that the disk size should increase with time. In principle, a measurement of the disk size as a function of time could trace this evolution, testing viscous theory and measuring its efficiency. The most readily available tracer of the disk size is the continuum, as gas tracers suffer from uncertain abundances (due to freeze-out and dissociation, among others) and sensitivity constraints. Since the disk does not have a clear outer edge, we need to introduce an effective radius and express the size as a function of the total continuum emission. This size metric is called the emission size or effective radius ($\mathrm{r_{eff}}$) \citep{tripathi2017millimeter}. However, the dust component does not behave in the same way as the gas, mainly due to an effect termed radial drift \citep[e.g.,][]{Whipple1972,Weidenschilling1977MNRAS.180...57W, Takeuchi2002ApJ...581.1344T}. The dust particles interact with the sub-Keplerian gas disk via aerodynamic drag forces, leading them to migrate toward the star. As an observational implication, the dust emission is less extended than the gas emission \citep[e.g.,][]{Andrews2012,Isella2012ApJ...747..136I,Andrews2016ApJ...820L..40A,Cleeves2016ApJ...832..110C}, as predicted by \citet{Birnstiel2014ApJ...780..153B} (but see \citealp{Trapman2020}). Radial drift is also heavily dependent on the grain size; therefore, grain growth \citep[e.g.,][]{BKE2012} has to be included in numerical studies that make use of the dust disk radius. \citet{rosotti2019time_evolution} studied theoretically how the dust disk radius evolves in a viscously evolving disk and addressed whether its evolution is set by viscous spreading or by dust processes such as grain growth and radial drift. They found that viscous spreading influences the dust and leads to the dust disk expanding with time.
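The effective radius of \citet{tripathi2017millimeter} is the radius enclosing a fixed fraction (typically 68\%) of the total continuum flux. A minimal sketch of this metric for an arbitrary axisymmetric intensity profile (the Gaussian profile below is purely illustrative):

```python
import numpy as np

def effective_radius(r, I_nu, frac=0.68):
    """Radius enclosing `frac` of the total flux of an axisymmetric
    intensity profile I_nu(r), via trapezoidal cumulative integration."""
    dF = 2.0 * np.pi * r * I_nu                                 # flux integrand
    F_cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (dF[1:] + dF[:-1]) * np.diff(r))]
    )
    return np.interp(frac * F_cum[-1], F_cum, r)

# Illustrative Gaussian profile with a 50 au scale radius
r = np.linspace(0.0, 300.0, 3001)       # radius grid [au]
I_nu = np.exp(-0.5 * (r / 50.0) ** 2)   # arbitrary intensity units
r_eff = effective_radius(r, I_nu)
print(f"r_eff = {r_eff:.1f} au")        # ~75 au for this profile
```

For this Gaussian the analytic answer is $r\,{=}\,\sigma\sqrt{2\ln(1/0.32)}\approx 75$ au, which the numerical estimate reproduces.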
Many surveys have been performed to explore the relation between these two diagnostics \citep[e.g.,][]{ Andrews2010ApJ...723.1241A,Pietu2014A&A...564A..95P,Hendler2020ApJ...895..126H}. Recently, a subarcsecond resolution survey of 50 nearby protoplanetary disks, conducted with the Submillimeter Array (SMA) by \citet{tripathi2017millimeter}, showed a strong size-luminosity relation (SLR) within the observed population. The follow-up program of \citet{Andrews2018a}, a combined analysis of the \citet{tripathi2017millimeter} data and the ALMA data from the Lupus disk sample (105 disks in total), confirmed the scaling relations between $\mathrm{r_{eff}}$ and both $\mathrm{L_{mm}}$ and $\mathrm{M_{\star}}$. However, not all studied star-forming regions show the same correlation; it appears to vary with the age of the region \citep{Hendler2020ApJ...895..126H}. In recent years, due to its unprecedented sensitivity and resolution, ALMA has provided a plethora of groundbreaking images of protoplanetary disks. Most of these disks do not show a smooth and monotonically decreasing surface density profile, but instead are composed of single or multiple symmetric annular substructures, for example HL Tauri \citep{ALMA_partnership2015ApJ...808L...3A}, TW Hya \citep[][]{Andrews2016ApJ...820L..40A,Tsukagoshi2016ApJ...829L..35T}, HD 163296 \citep{Isella2016PhRvL.117y1101I}, HD 169142 \citep{Fedele2017A&A...600A..72F}, AS 209 \citep{Fedele2018A&A...610A..24F}, HD 142527 \citep{Casassus2013Natur.493..191C}, and many more in the recent DSHARP survey \citep{Andrews_DSHARP_2018ApJ...869L..41A}. Moreover, non-axisymmetric features like spiral arms \citep[e.g.,][]{Perez2016Sci...353.1519P,Huang2018III} and lopsided rings \citep[e.g.,][]{Nienke2013Sci...340.1199V} have been observed. Many ideas for the origin of these ring-like substructures have been explored, but one of the most favored explanations is the formation of gaps in the gas surface density due to the presence of planets.
A massive planet ($\rm{\geq 0.1 M_{jup}}$) \citep{Zhang2018DSHARPApJ...869L..47Z} is able to open a gap in the surrounding gaseous disk, thereby generating a pressure maximum. The dust particles then migrate toward the local pressure bump, due to radial drift \citep{Weidenschilling1977MNRAS.180...57W,Nakagawa1986Icar...67..375N}, which consequently leads to the annular shape \citep[e.g.,][]{Rice2006MNRAS.373.1619R,Pinilla2012A&A...545A..81P}. These narrow rings may be optically thick or moderately optically thick, but between these features the material is approximated as optically thin \citep{Dullemond2018DSHARP}. However, the rings can contain large amounts of dust, which can increase the total luminosity of a disk and change its position with respect to the SLR. The SLR might therefore contain crucial information about disk evolution and planet formation theory. Our goal is to explore the physical origins of the SLR from \citet{tripathi2017millimeter} and \citet{Andrews2018a} by performing a large population study of models with gas and dust evolution. We aim to characterize the key properties of disks that reproduce the observational results. We explore the differences in the SLR of disks that have a smooth surface density profile and disks that contain weak and strong substructures. In \autoref{sec:methods} we discuss the methods we used to carry out our computational models. The results of this analysis are presented in \autoref{sec:results}, where we explain the global effect of every parameter on the population of disks and we present the general properties that disks should have to follow the SLR. In \autoref{sec:discussion} we discuss the theoretical and observational implications of our results. We draw our conclusions in \autoref{sec:conclusions}.
\section{Methods} \label{sec:methods} We carry out 1D gas and dust evolution simulations using a slightly modified version of the two-population model (two-pop-py) by \citet[][]{BKE2012,Birnstiel2015ApJ...813L..14B}, while we also mimic the presence of planets. As a post-processing step, we calculate the intensity profile and the disk continuum emission. With the purpose of running a population study, we use a large grid of parameters (see \autoref{tabel:input_param}), so that we can explore the differences that occur due to the different initial conditions. In the next sections, we explain the procedure in more detail. \subsection{Disk evolution} The gas follows the viscous evolution equation. For the disk evolution, we use the turbulent effective viscosity as in \citet{shakura1973black}, \begin{equation} \mathrm{\nu = \alpha_{gas} \frac{c_{s}^{2}}{\Omega_{K}}}\,, \label{eq:visc} \end{equation} and the dust diffusion coefficient as \begin{equation} \mathrm{D = \alpha_{dust} \frac{c_{s}^{2}}{\Omega_{K}}}\,, \label{eq:diff} \end{equation} with $\mathrm{\alpha_{gas}}$ being the turbulence parameter, $\mathrm{c_{s}}$ the sound speed, and $\mathrm{\Omega_{K}}$ the Keplerian frequency. The diffusion coefficient lacks the factor $\rm{\frac{1}{1+St^{2}}}$, where $\rm{St}$ is the Stokes number, but we can ignore it since the Stokes number is always $\rm{<1}$ in our simulations \citep{Youdin2007Icar..192..588Y}. We split the $\alpha$ parameter into two values, one for the gas, $\alpha_\mathrm{gas}$, and one for the dust, $\alpha_\mathrm{dust}$, since we later mimic planetary gaps by locally varying the viscosity (see \autoref{subsec:planets}). In the smooth case $\alpha_\mathrm{dust} = \alpha_\mathrm{gas}$. The dust is described by the two-population model of \citet{BKE2012}, which evolves the dust surface density under the assumption that the small dust is tightly coupled to the gas, while the large particles can decouple from the gas and drift inward.
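As an illustration, \autoref{eq:visc} and \autoref{eq:diff} can be evaluated directly. The following minimal sketch (not part of two-pop-py itself) assumes cgs units, a mean molecular weight of 2.3, and an isothermal sound speed:

```python
import numpy as np

G = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16  # Boltzmann constant [erg/K]
m_p = 1.673e-24  # proton mass [g]
mu = 2.3         # mean molecular weight (assumption)

def nu_and_D(r, T, M_star, alpha_gas, alpha_dust):
    """Turbulent viscosity and dust diffusivity, nu = alpha c_s^2 / Omega_K."""
    c_s2 = k_B * T / (mu * m_p)           # isothermal sound speed squared
    omega_K = np.sqrt(G * M_star / r**3)  # Keplerian frequency
    nu = alpha_gas * c_s2 / omega_K
    D = alpha_dust * c_s2 / omega_K
    return nu, D
```

In the smooth case, where $\alpha_\mathrm{dust} = \alpha_\mathrm{gas}$, the two coefficients coincide.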
The initial dust growth phase uses the current dust-to-gas ratio instead of the initial value as in \citet{BKE2012}. We set the initial gas surface density according to the self-similar solution of \citet{LBP1974}, \begin{equation} \mathrm{\Sigma_{g}(r)} =\mathrm{\Sigma_{0} \left(\frac{r}{r_{c}}\right) ^{-\gamma} \exp\left[-\left(\frac{r}{r_{c}}\right)^{2-\gamma}\right]}\,, \label{eq:surf_dens} \end{equation} where $\mathrm{\Sigma_{0} = (2-\gamma) M_{d} / 2 \pi \ensuremath{r_\mathrm{c}}\xspace^{2}}$ is the normalization parameter, which is set in every simulation by the disk mass $\mathrm{M_{d}}$. The other parameters are the viscosity exponent $\mathrm{\gamma}=1$, which is not varied throughout our models, and the characteristic radius \ensuremath{r_\mathrm{c}}\xspace of the disk (see \autoref{tabel:input_param}). When r $\ll$ \ensuremath{r_\mathrm{c}}\xspace, $\mathrm{\Sigma_{g}}$ follows a power law, and when $\mathrm{r \geq \ensuremath{r_\mathrm{c}}\xspace}$, $\mathrm{\Sigma_{g}}$ is dominated by the exponential factor. The initial dust distribution follows the gas distribution with a constant dust-to-gas ratio of $\mathrm{\Sigma_{d}}$/$\mathrm{\Sigma_{g}}$=0.01. The initial grain size (= monomer grain size) is $a_\mathrm{min} = \SI{0.1}{\mu m}$. This monomer size stays constant in time and space, while the representative size of the large grains increases with time as the particles grow. The particle bulk density is $\rm{\rho_{s}}=\SI{1.7}{g/cm^3}$ for the standard opacity model from \citet{Ricci_2010} and $\rm{\rho_{s}}=\SI{1.675}{g/cm^3}$ for the DSHARP \citep{Birnstiel_DSHARP} opacity, but decreases for different values of porosity (see \autoref{sub:Observables}). We evolve the disks up to $\mathrm{\SI{10}{Myr}}$ to study the long-term evolution, but in the following analysis we only show results from $\rm{\SI{300}{kyr}}$ to $\SI{3}{Myr}$ (see \autoref{sub:survival_frequency}).
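The initial condition of \autoref{eq:surf_dens} can be sketched as follows. This is an illustrative snippet in dimensionless units, not the two-pop-py implementation; the grid used for the consistency check is an assumption:

```python
import numpy as np

def sigma_gas(r, M_disk, r_c, gamma=1.0):
    """Self-similar surface density profile (Lynden-Bell & Pringle 1974)."""
    sigma0 = (2.0 - gamma) * M_disk / (2.0 * np.pi * r_c**2)  # normalization
    return sigma0 * (r / r_c)**(-gamma) * np.exp(-(r / r_c)**(2.0 - gamma))

def sigma_dust(r, M_disk, r_c, eps=0.01, gamma=1.0):
    """Initial dust follows the gas with a constant dust-to-gas ratio of 0.01."""
    return eps * sigma_gas(r, M_disk, r_c, gamma)
```

A useful consistency check is that integrating $2\pi r\,\Sigma_{g}$ over the full radial range recovers the disk mass $\mathrm{M_{d}}$.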
Our $\mathrm{1D}$ radial grid ranges from $\rm{0.05}$ to $\rm{\SI{2000}{au}}$ and the grid cells are spaced logarithmically. We use an adaptive temperature profile, which depends on the luminosity of each star. Since the stellar mass changes in our grid, the stellar luminosity changes as well. We follow the temperature profile, \begin{equation} \mathrm{T = \left(\phi \frac{L_{\star}}{4\pi\sigma_{SB}r^{2} }+(\SI{10}{K})^{4}\right)^{1/4}}\,, \label{eq:Temp} \end{equation} as in \citet{Kenyon_Hartmann_1996}. In this equation $\mathrm{L_{\star}}$ is the stellar luminosity, $\mathrm{\phi} =0.05$ is the flaring angle, $\mathrm{\sigma_{SB}}$ is the Stefan-Boltzmann constant, and $\mathrm{r}$ is the radius. The term $\mathrm{(\SI{10}{K})^{4}}$ acts as a lower limit so that the disk temperature does not drop below $\SI{10}{K}$ in the outer parts of the disk. We use the evolutionary tracks of \citet{Siess_2000A&A...358..593S} to get the luminosity of a $\mathrm{\SI{1}{Myr}}$ old star of the given mass. The stellar luminosity and effective temperature are not evolved in our simulations. However, the luminosity of a \SI{1}{M_{\odot}} star would decrease from $\rm{\sim \SI{2.4}{L_{\odot}}}$ at \SI{1}{Myr} to $\rm{\sim \SI{1}{L_{\odot}}}$ at \SI{3}{Myr}. In \autoref{disc:limitations} we explore how a change in stellar luminosity affects our results. \subsection{Population study} We use an extended parameter grid, varying the initial values of the turbulence parameter ($\mathrm{\alpha_{gas}}$), disk mass ($\mathrm{M_{d}}$), stellar mass ($\mathrm{M_{\star}}$), characteristic radius (\ensuremath{r_\mathrm{c}}\xspace), and fragmentation velocity ($\mathrm{v_{frag}}$). For every parameter, we pick the ten values specified in \autoref{tabel:tbl_params}, taking all the possible combinations between them, leading to a total of $\mathrm{100{,}000}$ simulations.
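The temperature profile of \autoref{eq:Temp} is straightforward to evaluate; a minimal sketch in cgs units (the constants below are standard values, the grid of radii is up to the user):

```python
import numpy as np

sigma_SB = 5.670e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
L_sun = 3.828e33     # solar luminosity [erg/s]
au = 1.496e13        # astronomical unit [cm]

def disk_temperature(r, L_star, phi=0.05, T_floor=10.0):
    """Irradiation temperature with a 10 K floor (Kenyon & Hartmann 1996)."""
    return (phi * L_star / (4.0 * np.pi * sigma_SB * r**2) + T_floor**4)**0.25
```

At large radii the profile flattens to the \SI{10}{K} floor, while in the irradiation-dominated region it falls off as $r^{-1/2}$.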
\begin{table} \caption{Grid parameters of the model} \begin{center} \centering \begin{tabular}{l l l } \toprule Parameter & Description & Value-Range \\ [0.3ex] \hline \hline $\mathrm{\Sigma_{d}/\Sigma_{g}}$ & initial dust-to-gas ratio & $\rm{0.01}$ \\ [0.3ex] $\mathrm{\rho_{s}}$ \hfill [$\rm{g/cm^{3}}$]& particle bulk density & $\rm{1.7, 1.675}$ \\ [0.3ex] $\mathrm{\gamma}$ & viscosity exponent & $\rm{1}$ \\ [0.3ex] $r$ \hfill [$\mathrm{au}$] & logarithmic grid extent & $\rm{0.05 - 2000}$ \\ [0.3ex] $\mathrm{n_{r}}$ \hfill [cells] & grid resolution & $\rm{400}$ \\ [0.3ex] $\mathrm{t}$ \hfill [years] & duration of each simulation & $\rm{10^{7}}$ \\ [0.3ex] \bottomrule \end{tabular} \end{center} \label{tabel:input_param} \end{table} \begin{table} \caption{Variables of the model} \begin{center} \centering \begin{tabular}{l l l } \toprule Parameter & Description & Values \\ [0.3ex] \hline \hline $\mathrm{\alpha}$ & viscosity parameter & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-4}}$ \\ [0.3ex] & & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-3}}$\\ [0.3ex] & & $\mathrm{\{1,2.5\}\cdot10^{-2}}$ \\ [0.3ex] $\mathrm{M_{d}}$ \hfill [M$_{\star}$] & initial disk mass & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-3}}$ \\ [0.3ex] & & $\mathrm{\{1,2.5,5,7.5\}\cdot10^{-2}}$\\ [0.3ex] & & $\mathrm{\{1,2.5\}\cdot10^{-1}}$ \\ [0.3ex] $\mathrm{M_\star}$ \hfill [M$_{\odot}$] & stellar mass & $\mathrm{0.2, 0.4, 0.6, 0.8, 1.0}$ \\ [0.3ex] & & $\mathrm{1.2, 1.4, 1.6, 1.8, 2.0}$\\ [0.3ex] \ensuremath{r_\mathrm{c}}\xspace \hfill [$\mathrm{au}$] & characteristic radius & $\mathrm{10, 30, 50, 80, 100}$ \\ [0.3ex] & & $\mathrm{130, 150, 180, 200, 230}$\\ [0.3ex] $\mathrm{v_{f}}$ \hfill [$\rm{cm/s}$]& fragmentation velocity & $\mathrm{200, 400, 600, 800, 1000}$ \\ [0.3ex] & & $\mathrm{1200, 1400, 1600, 1800, 2000}$\\ [0.3ex] $q$ & planet/star mass ratio & $3\cdot10^{-4}$, $10^{-3}$, $3\cdot10^{-3}$ \\ [0.3ex] $\mathrm{r_{p}}$ \hfill [$\mathrm{r_{c}}$]& planet position & $1/3$, $2/3$ \\ [0.3ex] \bottomrule \end{tabular} \end{center}
\label{tabel:tbl_params} \end{table} \subsection{Planets} \label{subsec:planets} A large planet embedded in a disk produces a co-orbital gap in the gas density. To mimic gap opening by planets in our simulations, we altered the $\mathrm{\alpha_{gas}}$ turbulence parameter. Since in steady state $\mathrm{\alpha_{gas}}$ $\mathrm{\cdot}$ $\mathrm{\Sigma_{g}}$ is constant, the $\mathrm{\alpha}$ parameter and the surface density $\mathrm{\Sigma_{g}}$ are inversely proportional quantities, so a bump in the $\mathrm{\alpha_{gas}}$ profile leads to a gap in the surface density profile. The reason for changing $\mathrm{\alpha_{gas}}$ rather than $\mathrm{\Sigma_{g}}$ directly is that the surface density must keep evolving viscously: by inserting the bump in $\mathrm{\alpha_{gas}}$, the $\mathrm{\Sigma_{g}}$ still evolves viscously and at the same time develops a planetary gap shape. Following the prescription from \citet{Kanagawa2016}, we mimic the effect of planets with different planet/star mass ratios $q$ (see \autoref{tabel:tbl_params}). For reference, $q=10^{-3}$ represents a Jupiter-mass planet around a solar-mass star. This way, we can study the effect of planetary gaps and rings on the observable properties of the disk and extract the key observables in a computationally efficient way, avoiding the need to run expensive hydrodynamic simulations for each combination of parameters. Choosing the appropriate profile that mimics a planetary gap is tricky, so we performed hydrodynamical simulations using FARGO-3D \citep{FARG03D_2015ascl.soft09006B} and compared the effect on the observable quantities. The \citet{Kanagawa2016} profile is an analytical approximation of the gap depth and width, but does not necessarily represent the pressure bump that is caused by the planet. Therefore, we tested how strongly this assumption affects the properties of the dust in the trap by comparing them against proper hydrodynamical solutions and disk evolution.
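For illustration, the gap depth from \citet{Kanagawa2016}, $\Sigma_{\rm min}/\Sigma_{0} = 1/(1+0.04K)$ with $K = q^{2}(h_{p}/r_{p})^{-5}\alpha^{-1}$, can be turned into an inverted bump in $\mathrm{\alpha_{gas}}$. The Gaussian bump shape, its width, and the aspect ratio below are illustrative assumptions of this sketch, not the exact profile used in our models:

```python
import numpy as np

def kanagawa_depth(q, aspect_ratio, alpha):
    """Gap depth Sigma_min/Sigma_0 from the Kanagawa et al. (2016) scaling."""
    K = q**2 * aspect_ratio**-5 / alpha
    return 1.0 / (1.0 + 0.04 * K)

def alpha_profile(r, r_p, q, aspect_ratio, alpha0, width_frac=0.25):
    """Bump in alpha_gas mimicking a planetary gap: since alpha*Sigma is
    constant in steady state, alpha is raised where Sigma should drop.
    Gaussian shape and width_frac are illustrative assumptions."""
    depth = kanagawa_depth(q, aspect_ratio, alpha0)
    bump = (1.0 / depth - 1.0) * np.exp(-(r - r_p)**2 / (2.0 * (width_frac * r_p)**2))
    return alpha0 * (1.0 + bump)
```

At the planet location the profile reaches $\alpha_{0}\,\Sigma_{0}/\Sigma_{\rm min}$, while far from the planet it returns to the background value.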
We found that the depth of the gap is not as important for the evolution of the disk on the SLR; the width is the dominant factor. As long as the planet is massive enough to create a strong pressure maximum and thus stop the particles, the position of the pressure maximum is more important than a precise value of the gap depth. In summary, what matters most is not the gap depth but the amplitude and location of the associated pressure maximum. We should mention that the precise amount of trapping in the bumps should still matter, for example for planetesimal formation, but for our results this is less relevant. We provide comparison plots and more details in \autoref{app:gap_profiles}. We define the position of the planets in the disk, $\mathrm{r_{p}}$, as a function of the characteristic radius \ensuremath{r_\mathrm{c}}\xspace (see \autoref{tabel:tbl_params}). We locate them either at $\mathrm{2/3}$ or at $\mathrm{1/3}$ of \ensuremath{r_\mathrm{c}}\xspace. In our simulations we used zero, one, or two planets at these positions. We refer the reader to \autoref{subsusb:planet_tracks} for the effect of the planet location and mass in the simulations. \subsection{Observables} \label{sub:Observables} Since the disk size is not one of the parameters that we measure directly, using the characteristic radius \ensuremath{r_\mathrm{c}}\xspace as a size metric is problematic \citep{rosotti2019time_evolution}. For this reason we define an observed disk radius using the calculated surface brightness profile. Following \citet{tripathi2017millimeter}, we define an effective radius ($\mathrm{r_{eff}}$) as the radius that encloses a fixed fraction of the total flux, $\mathrm{f_{\nu}(r_{eff}) = x\,F_{\nu}}$. We choose $\mathrm{x=68\%}$ of the total disk flux as a suitable intermediate value to define $\mathrm{r_{eff}}$, as it is comparable to a standard deviation in the approximation of a Gaussian profile.
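The effective radius defined above can be computed from a model intensity profile; the cumulative-flux routine below is an illustrative sketch, not the exact implementation:

```python
import numpy as np

def effective_radius(r, intensity, x=0.68):
    """Radius enclosing a fraction x of the total flux:
    f_nu(r_eff) = x * F_nu (Tripathi et al. 2017 definition)."""
    dF = 2.0 * np.pi * r * intensity  # flux per unit radius (azimuthal symmetry)
    # cumulative flux via trapezoidal integration, starting at zero
    F_cum = np.concatenate(([0.0], np.cumsum(0.5 * (dF[1:] + dF[:-1]) * np.diff(r))))
    return np.interp(x * F_cum[-1], F_cum, r)
```

For a 2D Gaussian brightness profile of width $\sigma$, the $68\%$ radius lies at $\approx 1.51\,\sigma$, comparable to a standard deviation as stated above.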
We calculate the mean intensity $\rm{J_{\nu}}$ profile by using the scattering solution from \citet{Miyake1993Icar..106...20M} of the radiative transfer equation, \begin{equation} \frac{J_{\nu}(\tau_{\nu})}{B_{\nu}(T(r))} = 1-b\left(e^{-\sqrt{3\epsilon_{\nu}^{eff}}\left(\frac{1}{2}\Delta\tau-\tau_{\nu}\right)}+e^{-\sqrt{3\epsilon_{\nu}^{eff}}\left(\frac{1}{2}\Delta\tau+\tau_{\nu}\right)}\right)\,, \end{equation} with \begin{equation} b=\left[\left(1-\sqrt{\epsilon_{\nu}^{eff}}\right)e^{-\sqrt{3\epsilon_{\nu}^{eff}}\Delta\tau}+1+\sqrt{\epsilon_{\nu}^{eff}}\right]^{-1} ,\end{equation} where $\mathrm{B_{\nu}}$ is the Planck function and \begin{equation} \tau_{\nu}=\left(\kappa_{\nu}^{\rm abs}+\kappa_{\nu}^{\rm sca,eff}\right) \Sigma_{d} \end{equation} is the optical depth, with $\mathrm{\kappa_{\nu}^{\rm abs}}$ the dust absorption opacity and $\mathrm{\kappa_{\nu}^{\rm sca,eff}}$ the effective scattering opacity, obtained from \citet{Ricci_2010} or \citet{Birnstiel_DSHARP} (see below). The effective scattering opacity is \begin{equation} \kappa_{\nu}^{\rm sca,eff}=(1-g_{\nu})\kappa_{\nu}^{sca} ,\end{equation} where $\rm{g_{\nu}}$ is the forward-scattering parameter. The total optical depth is \begin{equation} \rm{\Delta \tau=\Sigma_{d}\kappa_{\nu}^{\rm tot} \Delta z} ,\end{equation} while \begin{equation} \epsilon_{\nu}^{eff}=\frac{\kappa_{\nu}^{abs}}{\kappa_{\nu}^{abs}+\kappa_{\nu}^{sca,eff}} \end{equation} is the effective absorption probability. To calculate the intensity $\rm{I_{\nu}^{out}}$ we follow the modified Eddington-Barbier approximation as in \citet{Birnstiel_DSHARP}, \begin{equation} \label{eq:intensity} I_{\nu}^{out}\simeq \left(1-e^{-\Delta \tau/\mu} \right)S_{\nu}\left(\left(\frac{1}{2}\Delta\tau-\tau_{\nu}\right)/\mu=2/3\right) ,\end{equation} where $\rm{\mu=\cos\theta}$ and \begin{equation} S_{\nu}(\tau_{\nu})=\epsilon_{\nu}^{eff}B_{\nu}(T_{d})+(1-\epsilon_{\nu}^{eff})J_{\nu}(\tau_{\nu}) \end{equation} is the source function.
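Collecting the expressions above, the emergent intensity of a slab with total optical depth $\Delta\tau$ can be sketched as follows. This is a scalar illustration (in units of the Planck function); the clip of the evaluation depth to the slab interior is an extra assumption of the sketch:

```python
import numpy as np

def emergent_intensity(B_nu, dtau, eps_eff, mu=1.0):
    """Modified Eddington-Barbier intensity with scattering
    (Miyake & Nakagawa 1993; Birnstiel et al. 2018)."""
    sq = np.sqrt(3.0 * eps_eff)
    b = 1.0 / ((1.0 - np.sqrt(eps_eff)) * np.exp(-sq * dtau) + 1.0 + np.sqrt(eps_eff))

    def J(tau):  # mean intensity inside the slab
        return B_nu * (1.0 - b * (np.exp(-sq * (0.5 * dtau - tau))
                                  + np.exp(-sq * (0.5 * dtau + tau))))

    tau_eb = 0.5 * dtau - 2.0 * mu / 3.0     # depth where (dtau/2 - tau)/mu = 2/3
    tau_eb = np.clip(tau_eb, -0.5 * dtau, 0.5 * dtau)  # stay inside the slab (assumption)
    S = eps_eff * B_nu + (1.0 - eps_eff) * J(tau_eb)   # source function
    return (1.0 - np.exp(-dtau / mu)) * S
```

In the pure-absorption limit ($\epsilon_{\nu}^{eff}=1$) this reduces to the familiar $(1-e^{-\Delta\tau/\mu})B_{\nu}$, while scattering lowers the emergent intensity of optically thick regions below the Planck function.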
Two-pop-py evolves only the dust and gas surface densities and the maximum particle size,\footnote{In the rest of the manuscript we refer to grain size or particle size rather than maximum grain size.} thereby implicitly assuming a particle size distribution. The grain size at each radius is set by the maximum size possible in the fragmentation- or drift-limited regime, whichever is lower \citep[see][for details]{BKE2012}. To compute the optical properties of the dust, we therefore considered a population of grains with a power-law size distribution, $\mathrm{n(a)}$ $\mathrm{\propto}$ $\mathrm{a^{-q}}$, with an exponent $\mathrm{q = 2.5}$, for $\mathrm{a_{min}}$ $\mathrm{\leq}$ $\mathrm{a}$ $\mathrm{\leq}$ $\mathrm{a_{max}}$. This choice follows \citet{BKE2012}, where the size distribution is closer to $\rm{q=2.5}$ for disks that are in the drift limit, while for fragmentation-limited disks a choice of $\rm{q=3.5}$ would be more suitable. Since the disk mass is dominated by the large grains, the choice of the smaller exponent does not alter our results significantly, although it matters for the details. Because the smooth simulations are mostly drift-limited, the choice of $\rm{q=2.5}$ fits these disks better. Moreover, if a disk is fragmentation-limited, it is so mostly in the inner part, whereas the bulk of the disk that defines the luminosity is the outer part. Therefore, the luminosity will still depend mainly on the drift-limited regime. The disks with substructures can be fragmentation-limited farther out, in the formed rings, but considering that these rings are mostly optically thick, the difference between exponents is much smaller than for the smooth disks. The grain composition consists of $\mathrm{10\%}$ silicates, $\mathrm{30\%}$ carbonaceous materials, and $\mathrm{60\%}$ water ice by volume.
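To see why the mass is dominated by the large grains for $q<4$, note that the mass per size interval scales as $a^{3-q}\,\mathrm{d}a$, so the cumulative mass grows as $a^{4-q}$. A short sketch (the size limits below are illustrative values, not our model grid):

```python
def mass_fraction_in_largest_decade(a_min, a_max, q):
    """Fraction of dust mass carried by grains within a factor of 10 of a_max,
    for n(a) da ∝ a^{-q} da (mass per size bin ∝ a^{3-q} da; assumes q < 4)."""
    p = 4.0 - q  # exponent of the cumulative mass integral
    total = a_max**p - a_min**p
    top = a_max**p - (a_max / 10.0)**p
    return top / total
```

For $q=2.5$ nearly all the mass sits in the top decade of sizes, whereas for $q=3.5$ the small grains carry a substantially larger share; this is why the exponent choice matters only for the details.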
For a direct comparison with observations \citep[][]{tripathi2017millimeter,Andrews2018a}, we calculate the opacity in band 7 (i.e., at 850 $\mathrm{\mu m}$). Afterward, we use the absorption opacity to calculate the continuum intensity profile. We also examined the effect of different opacity models and different grain porosities. As a base model we used the composition from the \citet{Ricci_2010} opacities (hereafter denoted R10-0), but for compact grains (i.e., without porosity) as in the model of \citet{rosotti2019millimetre}. Furthermore, we used the DSHARP opacity model \citep{Birnstiel_DSHARP} and altered the grain porosity to $\mathrm{10\%}$ (slightly porous, DSHARP-10), $\mathrm{50\%}$ (semi-porous, DSHARP-50), and $\mathrm{90\%}$ (very porous, DSHARP-90). The particle bulk densities for the different porous grains are $\rm{\rho_{s}}=\SI{1.508}{g/cm^3}$, $\rm{\rho_{s}}=\SI{0.838}{g/cm^3}$, and $\rm{\rho_{s}}=\SI{0.168}{g/cm^3}$, respectively. An important feature of the opacity models that we used is the \textit{opacity cliff}, which refers to the sharp drop in the opacity at $\SI{850}{\mu m}$ at a maximum particle size around $\SI{0.1}{mm}$, as defined in \citet{rosotti2019millimetre} (see \autoref{fig:Opacities}). In all the figures shown in this paper the R10-0 opacity model from \citet{Ricci_2010} is used, unless explicitly stated otherwise. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/Opacity_q25.pdf} \caption[]{Comparison between the opacity models that we used at 850 $\mathrm{\mu m}$ as a function of the maximum particle size, for a power-law size distribution with an exponent of $\rm{q=2.5}$. Shown is the \textit{opacity cliff} (see text for details) at a wavelength of $\SI{850}{\mu m}$ (blue and orange shaded regions).
The blue line refers to the opacity model from \citet{Ricci_2010} with compact grains (labeled R10-0) and the orange line to that of \citet{Birnstiel_DSHARP} with compact grains (labeled DSHARP). The green, purple, and red lines refer to $\mathrm{10\%}$, $\mathrm{50\%,}$ and $\mathrm{90\%}$ porous grains in the DSHARP model. The R10-0 and DSHARP opacity values differ by a factor of $\mathrm{\sim 8.5 }$ at the position of the opacity cliff. As the porosity of the DSHARP model increases, the opacity cliff starts to flatten out, until it completely disappears for very porous grains ($\mathrm{90\%}$). The location of the cliff also shifts to larger particle sizes as it diminishes with increasing porosity. The black dashed line marks a particle size of $\mathrm{\SI{1}{mm}}$. The value for the R10-0 model at this size corresponds to $\mathrm{\kappa_{\nu}^{R10-0}=\SI{9.8}{cm^{2}/g}}$, while for DSHARP it corresponds to $\mathrm{\kappa_{\nu}^{DSHARP}=\SI{4}{cm^{2}/g}}$.} \label{fig:Opacities} \end{figure} \subsection{Matching simulations} \label{sub:survival_frequency} The behavior of the simulations on the size-luminosity diagram ($\mathrm{L_{mm}-r_{eff}}$ plane, hereafter \textit{SL Diagram}) depends on the time evolution of the disks. According to \citet{Andrews2018a}, the linear regression of the joint data from \citet{tripathi2017millimeter} and \citet{Andrews2018a} gives a relation between the disk size and the $\SI{340}{GHz}$ luminosity. The effective radius $\mathrm{r_{eff}}$ and the luminosity $\mathrm{L_{mm}}$ are correlated as \begin{equation} \log \mathrm{r_{eff}} = (2.10^{+0.06}_{-0.03}) + (0.49^{+0.05}_{-0.03}) \log \mathrm{L_{mm}}\,, \label{eq:size-lum} \end{equation} with a Gaussian scatter perpendicular to that scaling with a standard deviation (1$\mathrm{\sigma}$) of $\mathrm{0.20^{+0.02}_{-0.01}}$ $\rm{dex}$ (where $\mathrm{r_{eff}}$ is in $\rm{au}$ and $\mathrm{L_{mm}}$ is in $\rm{Jy}$ at a distance of $\SI{140}{pc}$).
In \autoref{fig:param_effects} (top left) we show an evolution track: the path that a simulation follows on the luminosity-radius diagram over a chosen time span. Simulations move from the top right (higher luminosity and radius) to the bottom left as the disk evolves. We plot the evolution tracks from $\SI{300}{kyr}$ to $\SI{3}{Myr}$. The reasoning behind this is that at roughly $\SI{300}{kyr}$ our disks reach a quasi-steady state. Specifically, the drift speed in the drift limit is $\mathrm{V_{r} = \epsilon V_{K}}$, with $\mathrm{\epsilon}$ being the dust-to-gas ratio and $\mathrm{V_{K}}$ the Keplerian velocity, so once the disk has lost at least one order of magnitude in dust, the evolutionary timescale becomes too long for significant further change. We refer the reader to \autoref{sec:discussion}, where we show the evolution of the disk dust-to-gas ratio as a function of time for different cases. Evolutionary times longer than those explored here do not alter our results significantly and are not included, to simplify the discussion; they are left as a topic of future research, since at later stages disks are more strongly affected by dispersal. Moreover, our chosen time span covers the observed disks from the \citet{Andrews2018a} and \citet{tripathi2017millimeter} joint sample. In order to filter our simulations, we divided them into categories. A simulation that lies within $\mathrm{1\sigma}$ of the SLR (\autoref{eq:size-lum}; blue shaded region in \autoref{fig:param_effects}, top left) at all times in our chosen time span is considered \textit{matching} (see \autoref{fig:param_effects}, green evolution track). On the other hand, if at any time a simulation does not lie within this area, it is considered \textit{discrepant}. The \textit{discrepant} simulations can be further divided into two subcategories: one above the SLR (see \autoref{fig:param_effects}, purple and yellow tracks, top left) and one below it (red track).
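This matching criterion can be sketched as follows, using the best-fit coefficients of \autoref{eq:size-lum} and the perpendicular distance of each epoch of a track from the regression line (a simplified illustration of our classification):

```python
import numpy as np

A, B_SLOPE, SIGMA = 2.10, 0.49, 0.20  # Andrews et al. (2018) fit; r_eff in au, L_mm in Jy

def classify_track(L_mm, r_eff):
    """Classify an evolution track as 'matching', 'above', or 'below' the SLR."""
    x, y = np.log10(np.asarray(L_mm)), np.log10(np.asarray(r_eff))
    # signed perpendicular distance from the line y = A + B*x, in dex
    d = (y - A - B_SLOPE * x) / np.sqrt(1.0 + B_SLOPE**2)
    if np.all(np.abs(d) <= SIGMA):
        return "matching"
    return "above" if d[np.argmax(np.abs(d))] > 0 else "below"
```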
With this classification we can identify the main parameters that drive a simulation to a certain location on the SL Diagram. It is worth mentioning that a fraction ($\mathrm{\sim32\%}$) of the observational data points lie outside the 1$\mathrm{\sigma}$ region by definition. This highlights that our definition of matching simulations is conservative with respect to the observational data. Later on we define the \textit{matching fraction} as the percentage of matching simulations out of the total number of simulations performed with a certain initial condition (see \autoref{sub:evolution_tracks}). \section{Results} \label{sec:results} In this section we present the main results of this analysis. In \autoref{sub:evolution_tracks} we explain the effect of every parameter on the path of the disk in the SL Diagram. In \autoref{sub:heatmaps} we present the general properties that disks should have to follow the SLR, and we derive a theoretical SLR for disks with substructures in \autoref{subsub:scaling_relation}. In \autoref{app:corner} we present an additional analysis of the results discussed below. \subsection{Evolution tracks} \label{sub:evolution_tracks} In \autoref{sub:survival_frequency} we explain what an evolution track is, and we show some examples in \autoref{fig:param_effects} (top left). Every track is affected by the initial conditions chosen and by the presence or absence of a planet. In the following sections we explore the effect of the most important parameters of our grid model, and we show how every parameter affects the evolution track on the SL Diagram in \autoref{fig:param_effects}, \autoref{fig:mass_acc_1e-3}, and \autoref{fig:parameter_tracks}.
\begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/Survivability.pdf} \includegraphics[width=0.49\linewidth]{plots/planet_effect.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/alpha_effect_23rc.pdf} \includegraphics[width=0.49\linewidth]{plots/rc_effect_smooth.pdf} \caption[]{Evolution tracks and illustration of matching and discrepant simulations. Each panel shows examples of a disk with the same initial conditions, varying only one parameter at a time. \textbf{Top left}: SLR according to \citet{Andrews2018a} and examples of evolution tracks. The black points correspond to the observational data, and the black dashed line is \autoref{eq:size-lum}, which relates the luminosity and the effective radius. The blue shaded region is the area within $1\sigma$ of the SLR where simulations are considered \textit{matching}. The green track labeled number 1 is a \textit{matching} simulation since it starts and ends inside the SLR region. The other tracks are considered \textit{discrepant}. Each track begins at the empty bullet (top right) and ends at the number (bottom left). \textbf{Top right}: Varying only the presence and the position of a Jupiter-mass planet. The red line (number 1) is for a smooth disk and the green line (number 3) for a disk with a Jupiter-mass planet at 2/3 of \ensuremath{r_\mathrm{c}}\xspace; the purple line (number 2) corresponds to a planet at 1/3 of \ensuremath{r_\mathrm{c}}\xspace and the yellow line (number 4) to a simulation with two planets at the positions mentioned above. \textbf{Bottom left}: Varying only the turbulence parameter $\mathrm{\alpha}$. Higher $\mathrm{\alpha}$ values lead to higher luminosity. \textbf{Bottom right}: Varying only the characteristic radius \ensuremath{r_\mathrm{c}}\xspace.
Higher \ensuremath{r_\mathrm{c}}\xspace values lead to larger and more luminous disks.} \label{fig:param_effects} \end{figure*} Since the grid consists of 100,000 simulations, single evolution tracks do not reveal the preferred initial conditions that allow a simulation to stay on the SLR; they only show representative cases. In order to identify trends between the initial conditions and the matching fraction of every disk, we constructed histograms where the y-axis shows the matching fraction (i.e., the percentage of simulations that stay on the SLR for the chosen time span) and the x-axis shows the value of each parameter in our grid model. Different colors represent different simulation grids, with or without planets and with varying planetary masses or positions (e.g., \autoref{fig:Histograms}). In black we show the simulations where we used a smooth surface density profile, as in \citet{rosotti2019millimetre}. In green we show the case where a planet with planet/star mass ratio $\mathrm{q=1\cdot10^{-3}}$ is located at 1/3 of \ensuremath{r_\mathrm{c}}\xspace (inner planet), in red the same planet at 2/3 of \ensuremath{r_\mathrm{c}}\xspace (outer planet), and in blue two planets of $\mathrm{q=1\cdot10^{-3}}$ at 1/3\ensuremath{r_\mathrm{c}}\xspace and 2/3\ensuremath{r_\mathrm{c}}\xspace. The white hatched bars show the same cases, but using the DSHARP opacity model \citep{Birnstiel_DSHARP}. \subsubsection{Effect of planetary parameters} \label{subsusb:planet_tracks} In a smooth disk, the evolution track moves toward smaller radii and lower luminosity as the dust drifts inward, so the emission (and size) decreases. Moreover, the opacity cliff moves farther in, since the radius in the disk where the maximum particle size $\mathrm{a_{max}}$ corresponds to the peak opacity decreases due to radial drift and grain growth, as in \citet{rosotti2019time_evolution}.
In contrast, when a planet is present, the pressure bump that forms stops the dust from drifting toward the host star, delaying the evolution of the disk on the SL Diagram and thus keeping the tracks on the SLR for longer times. With this in mind, we expect a less extended evolution track when we include planets, since planets are inserted early in the disk evolution. In \autoref{fig:param_effects} (top right) we show an example of the evolution tracks of a disk with the same initial conditions, varying only the presence and the position of a planet. The red line represents the evolution track of a disk with a smooth surface density profile. If we include a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ (a Jupiter mass in this case) at a location close to the characteristic radius of the disk, in this case at 2/3 of \ensuremath{r_\mathrm{c}}\xspace, we see from the green line (number 3) that both the effective radius and the luminosity increase relative to the planet-less case, while the evolution track is shorter over the same time span, as in all cases with planets. This is expected, since the pressure bump traps particles and the dust mass is retained; therefore, the luminosity does not decrease as quickly. At the same time, the fixed position of the pressure bump causes the effective radius to remain the same. Together this means that the track on the SLR comes to a halt. On the other hand, if we place the planet close to the star, in this case at 1/3 of \ensuremath{r_\mathrm{c}}\xspace, we observe a much longer track, similar to the one with the smooth profile (purple line). The size and the luminosity of the disk change only slightly. The smaller radius, compared to the case with an outer planet, is explained by the dust now stopping at the inner pressure bump, while the luminosity is roughly the same in the two cases.
This could lead to the conclusion that a planet close to the star does not dramatically affect the disk's position on the SL Diagram, but this is not true in all cases (see \autoref{subsusb:rc_tracks}). Instead, the evolution track here is similar to the smooth one because the disk is too large and massive, and the inner planet cannot affect the evolution track much. As a last point, we included two planets, one at 2/3 of \ensuremath{r_\mathrm{c}}\xspace and one at 1/3. We observe an evolution track similar to the case with only one planet close to the outer radius, with the disk being slightly more luminous. This is explained by the fact that the dust from the outer disk stops at the outer pressure bump, while the dust inside the orbit of the outer planet stops at the pressure bump of the inner planet. Since most of the dust mass initially resides outside the outer planet, the dust trapped between the two planets contributes only partially to the total luminosity. When two or more planets are present, the location of the outermost planet dominates the evolution track. \subsubsection{Effect of the turbulence $\mathrm{\alpha}$-parameter} \label{subsusb:alpha_tracks} The effect of the turbulence parameter $\mathrm{\alpha}$ on the evolution track is straightforward: higher $\mathrm{\alpha}$ leads to higher luminosity. In \autoref{fig:param_effects} (bottom left) we show the evolution tracks of a disk with the same initial conditions, varying only the $\mathrm{\alpha}$-parameter. In this case, we choose a disk in which we have also inserted a Jupiter-mass planet at 2/3 of the characteristic radius (\ensuremath{r_\mathrm{c}}\xspace), since the effect is more prominent in these disks. To understand the trends in \autoref{fig:param_effects} (bottom left), it is instructive to consider \autoref{fig:mass_acc_1e-3}, where we show an example of how different $\mathrm{\alpha}$-values affect the efficiency of trapping.
In the top panel we show the local dust mass flow rate $\rm{\dot M_{acc,d}(r)}$ $[\mathrm{M_\oplus/yr}]$ as a function of radius. For the low $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}}$ (red line), $\mathrm{5\cdot10^{-4}}$ (blue), and $\mathrm{1\cdot10^{-3}}$ (green), the mass flows toward the bump and the local flow rate for $\mathrm{r < r_{p}}$ is low ($\mathrm{\leq10^{-8} M_\oplus/yr}$), meaning that the trapping is efficient enough to stop the dust from drifting toward the star. On the other hand, for the high $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}}$ (yellow) and $\mathrm{1\cdot10^{-2}}$ (purple), the local dust mass flow rate stays almost constant ($\mathrm{\leq 10^{-5} M_\oplus/yr}$) throughout the whole disk, meaning that the bump does not trap the particles but only slows them down locally. In the bottom panel we show the cumulative mass of the disk, integrated from the inside out, as a function of radius. For the low $\rm{\alpha}$-values, most of the mass is located in the bump, while for $\rm{\alpha}=\rm{5\cdot10^{-3}}$ and $\rm{\alpha}=\rm{1\cdot10^{-2}}$ it increases with radius, meaning that the bump allows more grains to escape and therefore does not contain a significant fraction of the disk mass. Returning to \autoref{fig:param_effects} (bottom left), disks with low to medium $\mathrm{\alpha}$-values ($10^{-4}$, $5\cdot10^{-4}$, $10^{-3}$) are less luminous than disks with higher $\mathrm{\alpha}$-values. This is because the ring that forms at the pressure bump becomes narrow and optically thick. The total flux emitted by an optically thick ring of a given temperature is just a function of the emitting area. Lower $\mathrm{\alpha}$ leads to a narrower ring \citep{Dullemond2018DSHARP}, and thus less emitting area and therefore lower luminosity, independently of the amount of mass in the ring. On the other hand, high $\mathrm{\alpha}$-values work against trapping in various ways, leading to more luminous disks.
A higher $\mathrm{\alpha}$-value decreases the particle size in the fragmentation limit, and smaller particles are less efficiently trapped against radial drift (e.g., \citealt{Zhu2012DustFiltration}). It increases the diffusivity, which allows more grains to escape the bump, and it increases the viscosity in the same way, so grains of more sizes travel with the accreting gas. Furthermore, it smears out the pressure peak, causing less efficient trapping \citep{Pinilla2012}. Moreover, with high $\alpha$-values a higher dust-to-gas ratio is retained because grain growth is impeded by fragmentation, hence radial drift is much slower. If we consider the two cases where the $\mathrm{\alpha}$-value is high ($5\cdot10^{-3}$, $10^{-2}$), the planet cannot efficiently trap the dust and the disk evolves farther along the SLR to lower luminosities. The high turbulence also makes the dust grains smaller, since the turbulent gas velocities increase and collisions become more destructive, while the gap becomes shallower and diffusion stronger. A disk that contains a planet with $\mathrm{\alpha=10^{-2}}$ behaves the same as a smooth disk without a planet on the SL Diagram, implying that in this case a very massive planet (several Jupiter masses) would be needed to significantly affect disk evolution. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/mass_acc_cum_1e-3_at_13.pdf} \caption[]{\textbf{Top panel}: Local flow rate of the dust mass in $\mathrm{M_\oplus/year}$ as a function of radius for a disk with a Jupiter mass planet at $\SI{31}{au}$.
For the low $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}}$ (red line), $\mathrm{5\cdot10^{-4}}$ (blue) and $\mathrm{1\cdot10^{-3}}$ (green), we observe that the bump outside the planet location is strong enough to keep the dust from drifting toward the star, while for larger $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}}$ (yellow), $\mathrm{10^{-2}}$ (purple), there is only a weak accumulation outside the planetary gap. \textbf{Bottom panel}: Cumulative dust mass contained within a radius $r$, as a function of radius. For $\mathrm{\alpha}$-values $\mathrm{1\cdot10^{-4}, 5\cdot10^{-4}, 1\cdot10^{-3}}$, roughly all the mass of the disk is inside the bump that is created by the planet. For larger $\mathrm{\alpha}$-values $\mathrm{5\cdot10^{-3}, 1\cdot10^{-2}}$, the bump is too weak to keep the dust from drifting.} \label{fig:mass_acc_1e-3} \end{figure} In \autoref{fig:Histograms} (top left), we show the dependence of the matching fraction on the $\mathrm{\alpha}$-viscosity parameter. As expected, all simulations tend to favor low values of the turbulence parameter $10^{-4}\leq \mathrm{\alpha} \leq10^{-3}$. Smooth simulations show a clear tendency toward low $\rm{\alpha}$-values because when they are drift dominated they remain in the SLR. On the other hand, substructured disks show a preference for $5\cdot10^{-4}\leq \mathrm{\alpha} \leq10^{-3}$. There the dust trapping is efficient enough to allow the disks to retain their mass, but moving to higher $\mathrm{\alpha}$-values ($\mathrm{\alpha > 2.5\cdot 10^{-3}}$), the trapping stops being efficient and the evolution track is more likely to leave the SLR within the selected time span. If $\rm{\alpha}<5\cdot10^{-4}$ the dust rings become narrow and the luminosity is not large enough to place them in the SLR, and therefore the matching fraction decreases.
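The scaling behind this trend can be illustrated with a short sketch. In the two-population picture underlying two-pop-py, the fragmentation-limited Stokes number scales as $\mathrm{St_{frag}\propto v_{frag}^{2}/(\alpha c_{s}^{2})}$, while the trapping efficiency is governed by $\mathrm{St/\alpha}$, so increasing $\mathrm{\alpha}$ both shrinks the grains and weakens the trap. A minimal Python sketch (the sound speed and the pre-factor $\mathrm{f_{f}}$ are illustrative assumptions, not values from our grid):

```python
import numpy as np

# Fragmentation-limited Stokes number, St_frag ~ f_f * v_frag^2 / (3 alpha c_s^2),
# as in the two-population model underlying two-pop-py.
# c_s and f_f are illustrative assumptions, not values from our grid.
def stokes_frag(v_frag, alpha, c_s, f_f=0.37):
    return f_f * v_frag**2 / (3.0 * alpha * c_s**2)

c_s = 4.0e4      # sound speed [cm/s] at a few tens of au (assumed)
v_frag = 1.0e3   # fragmentation velocity [cm/s]
for alpha in [1e-4, 5e-4, 1e-3, 5e-3, 1e-2]:
    st = stokes_frag(v_frag, alpha, c_s)
    # St/alpha controls how well the pressure bump holds the grains:
    print(f"alpha={alpha:.0e}  St_frag={st:.2e}  St/alpha={st/alpha:.1f}")
```

Since $\mathrm{St_{frag}/\alpha\propto\alpha^{-2}}$, one order of magnitude in $\mathrm{\alpha}$ costs two orders of magnitude in this trapping figure of merit, consistent with the sharp transition seen between $\mathrm{\alpha=10^{-3}}$ and $\mathrm{\alpha=5\cdot10^{-3}}$.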
\subsubsection{Effect of the characteristic radius \ensuremath{r_\mathrm{c}}\xspace} \label{subsusb:rc_tracks} In \autoref{fig:param_effects} (bottom right), we plot the evolution tracks of a smooth disk with the same initial conditions while varying only the characteristic radius ($\rm{\ensuremath{r_\mathrm{c}}\xspace=\SIlist{10;50;100}{au}}$, and $\SI{180}{au}$). The same trend applies to disks where a planet is included, but for a smooth disk the evolution tracks are longer and the effect is more easily visible. The effect of the characteristic radius on the evolution tracks is straightforward: the larger the \ensuremath{r_\mathrm{c}}\xspace, the more the evolution track moves toward the top right of the plot, meaning that for a larger disk we expect higher luminosity. In more detail, increasing the characteristic radius (going from the red to the green line) directly increases the effective radius, while the luminosity increases because the total disk mass remains fixed in all these simulations. This result is consistent with the SLR from \citet{Andrews2018a}. Taking a look at \autoref{fig:Histograms} (middle left), we do not observe a continuous pattern as in the other histograms. Smooth disks do not seem to depend on the characteristic radius, as disks of all sizes can reproduce the SLR. On the other hand, substructured disks with small \ensuremath{r_\mathrm{c}}\xspace ($\SI{10}{au}$) are mostly above the correlation because they have high luminosity relative to their size (as explained by our size-luminosity estimate in \autoref{disc:opacity}) and they are unable to enter the correlation in time. For large radii ($>\SI{150}{au}$) the disks can become too large but with low luminosity, and can end up below the correlation at the far right of the SL diagram (see \autoref{fig:heatmaps_ricci}).
Therefore, we observe a peak toward a specific characteristic radius, around $\mathrm{80-\SI{130}{au}}$ when a planet is at a location of 1/3 of the \ensuremath{r_\mathrm{c}}\xspace and around $\mathrm{30-\SI{80}{au}}$ for the case where a planet is at a location of 2/3 of the \ensuremath{r_\mathrm{c}}\xspace. The inner planet constrains the disk to a small size but with relatively high luminosity, placing it above the SLR before $\mathrm{\SI{300}{kyr}}$, while the opposite effect occurs when an outer planet exists. When two planets are included, we observe a mixture of the two single-planet cases. The reason is that the two pressure bumps compete with each other and each contributes in one of the ways described above. \subsubsection{Effect of the disk mass - $\rm{M_{d}}$} \label{subsusb:disk_mass_tracks} In \autoref{fig:parameter_tracks} (top left) we plot the evolution tracks of a smooth disk with the same initial conditions, varying only the disk mass ($\mathrm{M_{d}}$). We choose a smooth disk to show the effect more clearly, but the same principle applies to most disks. The disk mass contributes to both luminosity and radius. Higher disk masses lead to higher luminosities, both at the beginning and at the end of the track. For a fixed \ensuremath{r_\mathrm{c}}\xspace, more material simply produces more emission, so disks with higher $\mathrm{M_{d}}$ have higher $\mathrm{L_{mm}}$, and vice versa. By the end of the evolution tracks, the less massive disks have left the SLR. The dramatic curvature of the evolution track for the lowest $\mathrm{M_{d}}$ case ($\mathrm{M_{d}=5\cdot 10^{-3}M_{\star}}$) occurs because all grain sizes become smaller than the opacity cliff. If we choose a disk that contains a planet, the massive disks ($\mathrm{M_{d}\geq5\cdot 10^{-2}} M_{\star}$) will still evolve toward lower radii and luminosities on the SLR, but the less massive ones will have shorter tracks.
The pressure bump will trap all the material outside of the planet position, so the emission and the effective radius will both remain almost constant. The only case where the track of a low-mass disk can be long is if the planet mass is small and the pressure bump is not strong enough to retain dust. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/Md_effect_smooth.pdf} \includegraphics[width=0.49\linewidth]{plots/Opacity_tracks.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/Mstar_effect_smooth.pdf} \includegraphics[width=0.49\linewidth]{plots/vfrag_effect_smooth.pdf} \caption[]{Evolution tracks with the same initial conditions, varying only one parameter at a time. \textbf{Top left}: Varying the disk mass of a smooth disk. \textbf{Top right}: Varying the opacity model and the porosity of the DSHARP model for a smooth disk. \textbf{Bottom left}: Varying the stellar mass of a disk that contains a planet. \textbf{Bottom right}: Varying the fragmentation velocity of a smooth disk.} \label{fig:parameter_tracks} \end{figure*} In \autoref{fig:Histograms} (top right) we plot the matching fraction for the disk mass values. The tendency here is that higher disk mass leads to more simulations inside the SLR. The reason for this is that a high initial disk mass places the disks above the SLR until they reach a stable state. While the dust drifts toward the host star the luminosity decreases, allowing them to reach the SLR within our chosen time span and stay there for the remaining time. Since most of the dust is in the trap at this point, the remaining evolution time is set by the trap lifetime. The difference between the smooth and the planet(s) cases is noticeable. As we see from the yellow bars, a disk with a smooth surface density profile must be initially massive ($\rm{M_{d}\geq\SI{0.025}{M_{\star}}}$) to remain in the SLR. The probability of a smooth simulation matching is then even greater than in the cases where a planet is included.
Some of these results, though, are an effect of the opacity model used and the chosen time span, as we discuss in \autoref{subsusb:porosity_tracks}. \subsubsection{Effect of different opacity and grain porosity} \label{subsusb:porosity_tracks} In \autoref{fig:parameter_tracks} (top right) we show the behavior of several similar tracks, varying only the opacity model. The simulations with the \citet{Ricci_2010} opacity model (R10-0, blue line) produce more luminous and larger disks than those with the DSHARP model (orange line) due to the higher value of the opacity. The R10-0 opacity is 8.5 times higher than the DSHARP opacity at the peak of the opacity cliff. If we use slightly porous grains (DSHARP-10, green line) by altering the DSHARP opacity, we observe that the effect is insignificant, as the shape of the opacity is roughly the same. In contrast, for semi-porous and very porous grains (DSHARP-50 and DSHARP-90, purple and red line, respectively) the opacity cliff starts to flatten out \citep{Kataoka2014}, leading to a disk with low luminosity and no significant change in disk size. In all the histogram figures (\autoref{fig:Histograms}) the same trend holds for both the R10-0 opacity (solid color bars) and the DSHARP opacity \citep{Birnstiel_DSHARP} (hatched bars). The difference is that more simulations match when the R10-0 opacity model is used as opposed to the DSHARP model. Disks with the DSHARP opacity are generally less bright because they do not become optically thick in the rings and end up below the SLR. Therefore, they would need more dust (i.e., stronger traps) to be luminous enough. Especially for the smooth case (yellow bars), there are only a few simulations that match, hence the hatched bars are barely visible. For smooth disks the total matching fraction is $\rm{29.6\%}$ with the R10-0 opacity, while it is $\rm{0.8\%}$ with the DSHARP opacity.
For disks with an inner planet the matching fractions are $\rm{30.2\%}$ and $\rm{15.9\%}$, respectively (see \autoref{disc:opacity}, where we explore the overall impact of porous grains for the entire grid of models). \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{plots/alpha_hist_q25.pdf} \includegraphics[width=0.49\linewidth]{plots/Mdisk_hist_q25.pdf}\\ \includegraphics[width=0.49\linewidth]{plots/rc_hist_q25.pdf} \includegraphics[width=0.49\linewidth]{plots/Mstar_hist_q25.pdf}\\ \includegraphics[width=0.49\linewidth, left]{plots/vfrag_hist_q25.pdf} \caption[]{Histograms of the matching fraction for the $\alpha$-parameter, disk mass, characteristic radius, stellar mass, and fragmentation velocity. The matching fraction shows the percentage of the simulations that remained on the SLR for the chosen time span ($\rm{\SI{300}{kyr} - \SI{3}{Myr}}$). \textbf{Top left}: Dependence on the $\alpha$-value. There is a preference for low $\alpha$-values $\rm{(10^{-4}\leq \alpha \leq 10^{-3})}$. \textbf{Top right}: Dependence on the disk mass. There is a preference for high disk masses $\mathrm{(0.025 \leq \frac{M_{d}}{M_{\star}} \leq 0.25)}$. \textbf{Middle left}: Dependence on the characteristic radius. Smooth disks do not depend on the \ensuremath{r_\mathrm{c}}\xspace. \textbf{Middle right}: Dependence on the stellar mass. There is a slight preference for higher values. \textbf{Bottom left}: Dependence on the fragmentation velocity. There is a slight preference for higher values.} \label{fig:Histograms} \end{figure*} \subsubsection{Effect of the stellar mass - $\mathrm{M_\star}$} \label{subsub:Mstar_tracks} In our models the stellar mass is directly correlated with the disk mass because we varied the stellar mass while always keeping the disk-to-star mass ratio constant. Therefore, a higher stellar mass implies a higher total mass of the dust, leading to higher continuum luminosities.
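The linear dependence of continuum luminosity on dust mass in the optically thin limit follows from the standard relation $\mathrm{F_{\nu} = \kappa_{\nu}\, B_{\nu}(T)\, M_{d} / d^{2}}$. A minimal sketch (the opacity, temperature, distance, and frequency below are illustrative assumptions, not values from our grid):

```python
import numpy as np

# Optically thin continuum flux: F_nu = kappa_nu * B_nu(T) * M_dust / d^2.
# kappa_nu, T, d, and nu are illustrative assumptions, not grid values.
h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def planck(nu, T):
    """Planck function B_nu(T) in cgs."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def flux_thin(M_dust, kappa_nu=2.0, T=20.0, d=140 * 3.086e18, nu=230e9):
    """Flux density [erg s^-1 cm^-2 Hz^-1] of optically thin dust of mass M_dust [g]."""
    return kappa_nu * planck(nu, T) * M_dust / d**2

M_earth = 5.972e27                                   # g
print(flux_thin(20 * M_earth) / flux_thin(10 * M_earth))  # -> 2.0, linear in M_dust
```

Doubling the dust mass doubles the flux, which is why, at fixed disk-to-star mass ratio, a higher stellar mass translates directly into a brighter disk in our grid.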
In \autoref{fig:parameter_tracks} (bottom left) we plot the evolution tracks of a disk that contains a planet with the same initial conditions, varying only the stellar mass over $\SI{0.2}{M_{\odot}}$, $\SI{0.6}{M_{\odot}}$, $\SI{1.0}{M_{\odot}}$, and $\SI{2.0}{M_{\odot}}$. As expected, the highest value of $\mathrm{M_\star=2.0M_{\odot}}$ leads to the largest and most luminous disk (green line), while the opposite is true for $\mathrm{M_\star=0.2M_{\odot}}$ (red line). Smooth disks behave similarly, but in that case the radius of each disk is much smaller due to radial drift. Furthermore, the stellar mass is the least important parameter in determining whether a simulation matches. The histogram in \autoref{fig:Histograms} (middle right) confirms this. Even though the trend shows that a higher stellar mass yields a greater matching fraction, this occurs because the stellar mass scales with the disk mass, and a higher disk mass leads to more matching simulations (see \autoref{subsusb:disk_mass_tracks}). This scaling implies that the luminosity ($\mathrm{L_{mm}}$) scales with the stellar mass ($\mathrm{M_{\star}}$). Our models follow a relation that is not as steep as the observed $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ in \citet{Andrews2018a}, because there is no correlation between disk size and stellar mass in our simulations (see \autoref{disc:Lmm-Mstar} and \autoref{fig:L_Mstar}, where we explore this relation further). \subsubsection{Effect of the fragmentation velocity - $\mathrm{v_{frag}}$} \label{subsub:vfrag_tracks} In \autoref{fig:parameter_tracks} (bottom right) we plot the evolution tracks of a smooth disk with the same initial conditions, varying only the fragmentation velocity over values of $\SI{200}{cm/s}$, $\SI{600}{cm/s}$, $\SI{1000}{cm/s}$, and $\SI{2000}{cm/s}$. We observe that for medium and high values of $\mathrm{v_{frag}}$ (in this case for $\mathrm{v_{frag}} \geq \SI{600}{cm/s}$), the evolution tracks overlap.
Since most of our simulations are drift-limited, we expect the fragmentation velocity to have no effect for these values. An effect only arises when the fragmentation velocity becomes too low: particles then do not grow big enough to drift, so more mass remains at large radii to produce more emission, which leads to a higher luminosity in the first snapshots. Moreover, if a disk is fragmentation-limited, it is so mostly in the inner part. Considering that the main bulk of the disk is in the outer part, the emission that defines the luminosity will still be set by the drift-limited regime, hence the overlapping tracks. The effect of a planet on these tracks is minimal, and we expect a behavior similar to the case shown here. This can be validated in \autoref{fig:Histograms} (bottom left), where we plot the matching fraction as a function of the fragmentation velocity; the tendency is toward higher fragmentation velocities in all cases. More specifically, if $\mathrm{v_{frag}\geq 600cm/s}$, there is a large number of matching simulations, and it only gets larger with increasing fragmentation velocity. In this range the simulations are mostly drift-limited. For low values of $\mathrm{v_{frag}}$, most of the simulations are fragmentation-limited and lose luminosity relatively quickly, moving them out of the SLR. Low values of $\mathrm{v_{frag}}$ lead to smaller particles and less efficient trapping; therefore, those disks lose their solids too quickly. This is analyzed in more detail in \autoref{app:corner}. For reference, in the recent review of laboratory experiments by \citet{Wurm2018SSRv..214...52B}, $\SI{100}{cm/s} \leq \mathrm{v_{frag}} \leq \SI{1000}{cm/s}$ is considered consistent with lab work and $\mathrm{v_{frag}>\SI{1000}{cm/s}}$ a high fragmentation velocity.
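Which limit applies where can be estimated with the standard two-population size limits: the fragmentation limit $\mathrm{St_{frag}\propto v_{frag}^{2}/(\alpha c_{s}^{2})}$ and the drift limit $\mathrm{St_{drift}\propto \epsilon\,(v_{K}/c_{s})^{2}}$, with the smaller of the two setting the actual grain size. A minimal sketch (all disk parameters below are illustrative assumptions, not values from our grid):

```python
import numpy as np

# Two-population size limits: the smaller of St_frag and St_drift sets the
# grain size. All disk parameters below are illustrative assumptions.
def stokes_frag(v_frag, alpha, c_s, f_f=0.37):
    return f_f * v_frag**2 / (3.0 * alpha * c_s**2)

def stokes_drift(eps, v_k, c_s, gamma=2.75, f_d=0.55):
    # eps: dust-to-gas ratio, v_k: Keplerian speed, gamma: |dlnP/dlnr|
    return f_d * eps * (v_k / c_s)**2 / gamma

alpha, v_frag, eps = 1e-3, 1000.0, 1e-3       # velocities in cm/s
r_au = np.array([1.0, 10.0, 100.0])
v_k = 2.98e6 / np.sqrt(r_au)                  # Keplerian speed, 1 M_sun star
c_s = 1.0e5 * r_au**-0.25                     # sound speed for T ~ r^(-1/2)

limit = np.where(stokes_frag(v_frag, alpha, c_s)
                 < stokes_drift(eps, v_k, c_s), "frag", "drift")
print(limit)  # fragmentation-limited inner disk, drift-limited outer disk
```

Because $\mathrm{St_{drift}/St_{frag}}$ grows toward the star for these profiles, the fragmentation limit takes over in the inner disk, in line with the behavior described above.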
\subsection{Heat maps} \label{sub:heatmaps} Individual evolution tracks give us an idea of how a single simulation evolves on the SLR, but with a sample of simulations as large as ours it is not easy to extract global results this way. Since there are too many tracks to plot in a single diagram, we treat the position of every simulation at every snapshot as an independent sample, and we plot them on a heat map. In these figures we plot the position of every simulation for a specific case (smooth and planet in different locations), for three different snapshots ($\mathrm{\SI{300}{kyr}}$, $\mathrm{\SI{1}{Myr}}$, $\mathrm{\SI{3}{Myr}}$). In \autoref{fig:heatmaps_ricci} we plot three different cases: in Col. 1 we show the smooth case, in Col. 2 the case with a planet/star mass ratio $\mathrm{q=10^{-3}}$ at 1/3 of the \ensuremath{r_\mathrm{c}}\xspace, and in the last column $\mathrm{q=10^{-3}}$ at 2/3 of the \ensuremath{r_\mathrm{c}}\xspace. Each row shows one of the three snapshots, together with the SLR and its standard deviation. The red line is our prediction for the cases where we include a planet (see \autoref{subsub:scaling_relation}). Instead of following the relation from \citet{Andrews2018a}, these disks seem to follow a relation of $\mathrm{L_{mm}\propto r_{eff}^{5/4}}$. In \autoref{subsub:width} and \autoref{subsub:scaling_relation} we perform a more detailed analysis of this topic. We observe that most of the disks start inside and above the correlation (first row at $\mathrm{\SI{300}{kyr}}$). In the smooth case the disks lose a great deal of their luminosity relatively quickly and also shrink in size, moving them toward lower radii due to radial drift. We end up with a large number \textbf{($\rm{29.6\%}$)} of simulations occupying and following the SLR. As we explain in \autoref{subsub:scaling_relation}, the slope is expected from \citet{rosotti2019millimetre}, but the normalization depends on the choice of opacities.
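The construction of these heat maps can be sketched in a few lines: every $(\mathrm{r_{eff}}, \mathrm{L_{mm}})$ pair of every simulation at a given snapshot is one sample in a two-dimensional histogram over the logarithmic size-luminosity plane. The mock points below are a stand-in for the actual grid output, not our simulation data:

```python
import numpy as np

# Every (r_eff, L_mm) pair of every simulation at a given snapshot is treated
# as one independent sample in a 2-D histogram over the log size-luminosity
# plane. The random points below are a stand-in for the actual grid output.
rng = np.random.default_rng(0)
log_r = rng.uniform(1.0, 2.5, size=1000)                  # log10(r_eff / au)
log_L = 2.0 * log_r - 3.0 + rng.normal(0.0, 0.3, 1000)    # mock SLR: L ~ r^2

H, r_edges, L_edges = np.histogram2d(
    log_r, log_L, bins=[30, 30], range=[[0.5, 3.0], [-3.0, 4.0]])

print(int(H.sum()))  # -> 1000: every sample lands in exactly one cell
```

Plotting `H` with a color scale then directly gives the number of simulations per cell, as shown in the color bars of the figures.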
On the other hand, when we include a planet, the disks do not decrease in size as time increases, but mainly in luminosity. This is due to the formation of the pressure bump that keeps the dust from drifting farther in, thus keeping the effective radius the same. The consequence is that if the disks leave the SLR, it is due to a luminosity decrease, since they move vertically in the diagram. The clustering in $\mathrm{r_{eff}}$ that forms in the plots with a planet is an artifact of our parameter grid; a randomly chosen value of \ensuremath{r_\mathrm{c}}\xspace and planet position would result in a continuous (non-clustered) distribution. Comparing the two cases where a planet is included, there are more matching simulations when the planet is in the inner part of the disk \textbf{($\rm{30.2\%}$)}. Having a planet in the outer part leads the disks to the right (large radii) and bottom part of the diagram and consequently leaves them outside of the relation ($\rm{20.6\%}$ of the disks match). The difference becomes striking when we use the dust opacities from DSHARP in \autoref{fig:heatmaps_dsharp}. Since the DSHARP opacity is lower than the R10-0 one (see \autoref{fig:Opacities}), many of the smooth disks start below the SLR. This leads the majority of them outside of the relation by the last snapshot ($\rm{\SI{3}{Myr}}$), and only $\rm{0.8\%}$ of the disks match. The same holds for the cases where a planet is included: the disks have lower luminosity in the first snapshot, but the pressure bump that forms is strong enough to keep them in the relation for the remaining time ($>\rm{11.1\%}$ of the disks match, depending on the model). A similar behavior for the cases with strong substructures is seen for the opacity model with semi-porous grains, DSHARP-50, in \autoref{fig:heatmaps_d50}.
Even though there are fewer matching simulations in total, if the substructures are strong enough they are able to keep a significant number of simulations in the clusters on the SLR \textbf{($>\rm{10.7\%}$)}. The same argument cannot be made for simulations with the smooth surface density profile: with semi-porous grains there is only a small fraction of matching simulations \textbf{($\rm{0.7\%}$)}. The absence of a strong opacity cliff in the opacity profile leads to low luminosities and consequently places all the simulations below the SLR. An almost identical picture emerges from the heat map (\autoref{fig:heatmaps_d90}) for the case where very porous grains are used (DSHARP-90). The complete absence of the opacity cliff does not allow a considerable fraction of smooth disks to enter the SLR \textbf{($\rm{1.3\%}$)}, while a similar fraction of substructured disks match as in the DSHARP-50 case \textbf{($>\rm{10.2\%}$)}. This heat map is included in \autoref{app:heatmap}. From these heat maps we can extract three important results: \begin{itemize} \item Disks with strong traps (i.e., massive planets) follow a different SLR than smooth disks, while smooth disks are more consistent with the observed SLR in terms of the shape of the relation. \item Whether a smooth disk matches or not depends heavily on the opacity model. The \citet{Birnstiel_DSHARP} DSHARP opacities produce significantly fewer simulations in the SLR than the \citet{Ricci_2010} R10-0 model, and only a small fraction of the simulations match for the semi-porous and very porous grain models, DSHARP-50 and DSHARP-90. Therefore, the porosity should be lower than $\mathrm{50}\%$ when the \citet{Birnstiel_DSHARP} opacities are used. However, the distribution of simulations is significantly tighter than the observed correlation for the smooth disks with the R10-0 opacity.
As discussed in \autoref{sec:discussion}, the observed correlation can be a mixture of smooth and substructured disks, which adds scatter to the simulated SLR. \item A bright disk (top right on the SL diagram) is more likely to remain in the SLR if a pressure bump forms within the first $\mathrm{1Myr}$, regardless of the opacity model. \end{itemize} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_ricci_q25.pdf} \caption[]{Heat maps of simulations with the R10-0 opacities. From left to right, the three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$, and a planet at $\mathrm{2/3r_{c}}$. From top to bottom, the rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$, and $\mathrm{3Myr}$. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included. The color bar shows the number of simulations in a single cell. The blue dash-dotted line shows the minimum limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) where observational results are available.} \label{fig:heatmaps_ricci} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_dsharp_q25.pdf} \caption[]{Heat maps of simulations with the \citet{Birnstiel_DSHARP} opacities. From left to right, the three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$, and a planet at $\mathrm{2/3r_{c}}$. From top to bottom, the rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$, and $\mathrm{3Myr}$. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included. The color bar shows the number of simulations in a single cell.
The blue dash-dotted line shows the minimum limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) where observational results are available.} \label{fig:heatmaps_dsharp} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/heatmaps_D50_q25.pdf} \caption[]{Heat maps of simulations with the \citet{Birnstiel_DSHARP} DSHARP-50 opacities with 50\% porosity. From left to right, the three columns represent the smooth case, a planet at $\mathrm{1/3r_{c}}$, and a planet at $\mathrm{2/3r_{c}}$. From top to bottom, the rows represent three different snapshots at $\mathrm{300kyr}$, $\mathrm{1Myr}$, and $\mathrm{3Myr}$. The white solid line is the SLR from \citet{Andrews2018a} and the red solid line our fit for the cases where a planet is included. The color bar shows the number of simulations in a single cell. The blue dash-dotted line shows the minimum limit ($\mathrm{r_{eff} \sim \SI{10}{au}}$) where observational results are available.} \label{fig:heatmaps_d50} \end{figure*} \subsubsection{Width of the pressure maxima} \label{subsub:width} To understand the overall shape of the heat map for the case with massive planets (i.e., the red lines in \autoref{fig:heatmaps_ricci} and \autoref{fig:heatmaps_dsharp}), we derive a theoretical estimate in the following. This estimate depends on the width and position of the pressure maximum formed outside the position of the gap-opening planets. We therefore first derive a relation for the gas width as a function of radius $r$, planet/star mass ratio $q$, and scale height $h$, using hydrodynamical simulations of planet-disk interaction with the FARGO-3D code \citep{FARG03D_2015ascl.soft09006B}. In \autoref{subsub:scaling_relation} we estimate the SLR based on these empirically determined widths. For a complete derivation of the two sections, we refer the reader to \autoref{app:width_slr_derivation} and \autoref{app:scaling_relation}.
In addition to our two-pop-py models, we performed 24 hydrodynamical simulations with FARGO-3D \citep{FARG03D_2015ascl.soft09006B} for different planet/star mass ratios, planet locations, and $\mathrm{\alpha}$-values (see \autoref{tabel:gaps_param}). We used these simulations to calculate the width of the outer pressure bump caused by the planet in the gas. The surface density maximum is locally well fitted by a Gaussian, which allows us to measure the width (i.e., the standard deviation) using the curvature at the maximum. After measuring the widths in all our hydrodynamical simulations, we fit them with a multiple power law to determine how they scale with the scale height, the planet/star mass ratio, and the $\mathrm{\alpha}$-parameter, \begin{equation} \sigma_{g} = C \cdot h^{p} \cdot q^{k} \cdot \alpha^{l}\, ,\end{equation} where $\mathrm{C}$ is a constant, $h$ is the scale height, $q$ the mass ratio, and $\mathrm{\alpha}$ the turbulence parameter. We find that the width in the measured range scales approximately as \begin{equation} \label{eq:power_law} \sigma_{g} \propto h^{0.81} \cdot q^{0.14} \cdot \alpha^{0.05}\, .\end{equation} \subsubsection{Size-luminosity relation of disks with companions} \label{subsub:scaling_relation} In \autoref{fig:heatmaps_ricci} we show a red line that scales differently from the SLR when we include planets. We predict that this line follows a correlation $\mathrm{L_{mm}\propto r_{eff}^{5/4}}$. If we assume that all the luminosity of a disk comes from rings that are approximately optically thick, we can approximate the luminosity as \begin{equation} L \simeq A \cdot B_{\nu}\, ,\end{equation} where $\mathrm{A}$ is the emitting area and $\mathrm{B_{\nu}}$ is the Planck function.
If we use the Rayleigh--Jeans approximation, in which the Planck function is proportional to the temperature, the equation becomes \begin{equation} L \propto A \cdot T\, .\end{equation} We assume that the area of the pressure bump scales as $\mathrm{A} \propto r \cdot \sigma_{d}$, where $\mathrm{r}$ is the radius and $\mathrm{\sigma_{d}}$ is the width of the pressure bump in the dust, and that $\mathrm{\sigma_{d}}$ scales linearly with $\mathrm{h}$. The width of the dust ring depends on the width of the gas, \begin{equation} \sigma_{d} \propto \sigma_{g} \cdot \sqrt{\frac{\alpha}{St}}\, ,\end{equation} as in \citet{Dullemond2018DSHARP}, where $\mathrm{St}$ is the Stokes number and the ring width is set by the balance between dust trapping and diffusion. Using the gas width measured previously in \autoref{subsub:width}, we find that the luminosity ($\mathrm{L_{mm}}$) scales with the radius as \begin{equation} \label{eq:luminosity_hydro} L_{mm} \propto r_{eff}^{5/4}\, ,\end{equation} which is the relation that we plot with the red line in \autoref{fig:heatmaps_ricci}, \autoref{fig:heatmaps_dsharp}, \autoref{fig:heatmaps_d50}, and \autoref{fig:heatmaps_d90}. We find that this theoretical estimate nicely explains the size-luminosity scaling seen when strong substructure is present. However, toward larger radii this relation slightly overpredicts the luminosity: the heat map suggests a slightly shallower slope, since toward large radii our fitted line lies above the main bulk of the simulations. For a complete derivation, we refer the reader to \autoref{app:width_slr_derivation} and \autoref{app:scaling_relation}. \section{Discussion} \label{sec:discussion} To summarize the results discussed above, we explored the observed trend between the (sub-)mm disk continuum luminosity ($\mathrm{L_{mm}}$) and the $\mathrm{68\%}$ effective radius ($\mathrm{r_{eff}}$) of protoplanetary disks.
Following the size-luminosity relation (SLR) obtained by \citet{tripathi2017millimeter} and \citet{Andrews2018a}, $\mathrm{L_{mm}\propto r_{eff}^{2}}$, we showed which initial conditions are favorable for a disk to remain on the SLR for a time span of \SI{300}{kyr} - \SI{3}{Myr}. We explored the effect of every parameter on the disk evolution tracks in the SL Diagram, we obtained a visual representation of how the disk population moves on the same diagram, and we found relations between the parameters (\autoref{app:corner}). We present a different correlation for disks that are dominated by strong substructures compared to disks that have a monotonically decreasing surface density profile. Moreover, we investigated the effect of different opacity models with compact or porous grains, and we conclude that the opacity is a major factor in reproducing the observational results. In the following sections we briefly recap these results and discuss some of the implications in detail. \subsection{Dominant parameters} \label{disc:dominant_params} In summary, our results imply that the most dominant parameters for the evolution of disks are the viscosity parameter $\mathrm{\alpha}$, the initial disk mass $\mathrm{M_{d}}$, the location of a giant planet if present, and the opacity model that is used to derive the continuum intensity. The disks that match the SLR are characterized by low turbulence ($\mathrm{\alpha} \leq 10^{-3}$) and high disk mass ($\mathrm{M_{d}} \geq 2.5\cdot10^{-2} M_{\star}$), and they are affected strongly by the existence of a strong trap (in this study caused by a giant planet). Turbulence $\mathrm{\alpha}$-values greater than $\rm{10^{-3}}$ lead to smaller grains due to fragmentation, and consequently to less luminous disks that do not enter the SLR. Moreover, particles are diffused more efficiently and, due to their size, are less efficiently trapped. Finally, the dust trap is not as pronounced if the $\mathrm{\alpha}$-viscosity is higher.
All of this acts in concert to make dust trapping ineffective, and causes the disks to behave as if they were smooth (see \autoref{fig:mass_acc_1e-3} in \autoref{sec:results}). If the fragmentation velocity is high enough, some simulations can stay on the SLR, but we consider these disks to have unrealistic initial conditions according to the known literature. A high initial disk mass places a disk initially either inside or above the SLR (i.e., too bright for the given size) until it reaches a quasi-steady state. This allows such disks to migrate to lower luminosities while they evolve up to $\SI{3}{Myr}$ and still remain on the SLR. This effect is aided by the right choice of opacity model and grain porosity. Compact grains shift the position of disks to higher luminosity (see \autoref{disc:opacity}). On the other hand, most of the less massive and smaller disks end up below the correlation at lower luminosities, characterizing them as discrepant. Planets can alter the evolution path of the disk on the SL diagram significantly. An effectively trapping planet causes the disk to quickly settle into a quasi-steady state on the SL diagram, leading to a shorter track and thus delaying the evolution of the disk toward lower luminosity and radius. Disks with a massive planet in the inner part of the disk ($\mathrm{1/3 \ensuremath{r_\mathrm{c}}\xspace}$) have more extended evolutionary tracks and are overall less luminous. In contrast, disks with a planet in the outer part ($\mathrm{2/3 \ensuremath{r_\mathrm{c}}\xspace}$) have shorter tracks and are more luminous if all the other parameters remain the same. This can be explained by the planet trapping a large part of the disk solids at large radii. When two planets are included, both of them contribute to the luminosity, while the outer one defines the effective radius of the disk.
Overall, the presence of planets increases the fraction of matching simulations on the SLR, but this result is also a function of the opacity (see \autoref{disc:opacity}). \subsection{Position along the SLR} The position of a disk along the SLR is determined mainly by the disk mass $\mathrm{M_{d}}$ and the disk size \ensuremath{r_\mathrm{c}}\xspace, as can be seen in \autoref{fig:param_effects} and \autoref{fig:parameter_tracks}. More massive and larger disks are located in the top right part of the SL diagram, while smaller and less massive disks lie in the middle and left parts. In \autoref{fig:lum_R10} we show the kernel density estimate (KDE) of the luminosity of all matching simulations for four different cases and three different snapshots, and also plot the observational KDE from \citet{Andrews2018a}. The brightest disks that stay on the SLR are those that contain planets located in the outer part of the disk (2/3 of \ensuremath{r_\mathrm{c}}\xspace, yellow and green lines). A planet in the outer region leads to larger, more luminous disks, as explained in \autoref{subsec:planets} and \autoref{disc:dominant_params}. Massive planets at 1 and \SI{3}{Myr} reproduce the peak of the observed brightness distribution, but overall produce too many bright and too few faint disks. The peak of the luminosity distribution for smooth disks is generally at much lower luminosities. Given these results, it is conceivable that the observed sample consists of two distinct categories of disks: a brighter and larger category shaped by massive outer planets that trap the dust, and a second category in which the planets are not massive enough to trap the dust effectively, so that these disks evolve similarly to smooth disks.
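A kernel density estimate of the kind described above can be produced in a few lines. The sketch below uses mock log-luminosities (invented values, not our simulation output) together with SciPy's Gaussian KDE:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Mock sample of log10 luminosities; stands in for one set of matching
# simulations at a single snapshot (the values here are invented).
rng = np.random.default_rng(seed=42)
log_lmm = rng.normal(loc=0.0, scale=0.5, size=500)

kde = gaussian_kde(log_lmm)        # Gaussian kernels, Scott's-rule bandwidth
grid = np.linspace(-2.0, 2.0, 200)
density = kde(grid)                # evaluate the density on the grid

# Peak of the estimated luminosity distribution (in log10 units).
print(grid[np.argmax(density)])
```

The density is normalized, so it integrates to approximately unity over a sufficiently wide grid; comparing such curves for different model cases against the observed sample is what the figure does.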
\begin{figure*} \centering \includegraphics[width=\textwidth]{plots/Luminosity_R10-0_q25.pdf} \caption[]{Kernel density distribution of the luminosity of all matching simulations using the \citet{Ricci_2010} R10-0 opacity model, for four different cases and three different snapshots, from \SI{300}{kyr} to \SI{3}{Myr}. The black line refers to the disks from the \citet{Andrews2018a} sample. Disks with planets have higher luminosity, while smooth disks have low luminosity at \SI{3}{Myr}. When two planets are included, the luminosity is higher than with a single planet.} \label{fig:lum_R10} \end{figure*} \subsection{Opacity model preferences} \label{disc:opacity} We used different opacity models for our study. As a base model, we made use of the composition of the \citet{Ricci_2010} opacities (R10-0), but for compact grains (i.e., no porosity), as in \citet{rosotti2019millimetre}. Moreover, we used the non-porous \citet{Birnstiel_DSHARP} opacities (DSHARP) and varied the porosity between $10\%$ (DSHARP-10), $50\%$ (DSHARP-50), and $90\%$ (DSHARP-90). Overall, we find that, independent of the model used, relatively compact grains ($\mathrm{<50\%}$ porosity) are preferred over highly porous grains. When compact grains are used, the initial position of the disks on the SL diagram is shifted toward higher luminosity, giving them more time to evolve on the SLR within our chosen time span. Disks with the DSHARP opacity are generally less bright and end up below the SLR. We recall that the opacity at our wavelength is a factor of $\mathrm{\sim 8.5}$ higher in the R10-0 case compared to DSHARP at the opacity cliff location ($\mathrm{\sim 0.1-1mm}$), with the difference mainly stemming from the choice of carbonaceous material (\citealp{Zubko1996MNRAS.282.1321Z} versus \citealp{Henning1996}; see the comparison in \citealp{Birnstiel_DSHARP}). However, this point holds only for smooth disks and disks with weak substructures (where a disk behaves as smooth).
If this is the case, then only compact grains can explain the SLR; instead, when substructures are strong, any of the opacity models and porosities tested in this work can explain the substructured SLR. The latter applies because most of the substructures become optically thick. It can be argued that alternative compositions that also exhibit a strong opacity cliff and a high opacity would be equally suitable. \subsection{Types of disks on the SLR} We show that when strong traps (i.e., massive planets) are included, disks follow a different SLR than smooth ones. Based on measurements of the width of the pressure maxima formed by the planet in hydrodynamical simulations (see \autoref{app:width_slr_derivation}), we derived a theoretical prediction for disks with substructures (\autoref{subsub:scaling_relation}). Smooth disks with compact or slightly porous grains seem to follow the \citet{Andrews2018a} relation $\mathrm{L_{mm} \propto r_{eff}^{2}}$, while disks with massive planets follow the relation $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. This result does not imply that the observed disks from the \citet{tripathi2017millimeter} sample are all free of substructure, but their substructures might not be as strong and optically thick as those of AS209 or HD163296 \citep[see][]{Huang2018}. \autoref{fig:heatmaps_ricci} shows how smooth disks follow the SLR, while disks with strong substructures follow a different relation but intersect the SLR at the bright end (top right part of the SLR). In contrast, the less luminous disks follow the SLR if they are smooth, but disks of the same effective size with substructure are too luminous (cf. bottom left part of the SLR). In \citet[][Fig. 6]{Hendler2020ApJ...895..126H}, disks seem to follow a universal relation (close to the SLR) in all star-forming regions (Ophiuchus, Tau/Aur, Lupus, Chal) except for USco, the oldest region ($\mathrm{\sim 10Myr}$).
The observed SLR therefore might flatten with the age of the region. In principle we could examine this, since we evolve our simulations to $\rm{10Myr}$, but our models do not include photo-evaporation, which would make such results uncertain. For example, at the age of USco the detectable disk fraction is $\rm{<20\%}$, while in our models it would be $\rm{100\%}$. This raises a question: if a planet is not massive at early times, but around \SI{1}{Myr} reaches a planet/star mass ratio of $\mathrm{q=10^{-3}}$, will the disk follow the observed SLR or the SLR with strong substructures? According to the analysis of the evolution tracks in \autoref{sub:evolution_tracks}, most of the small and less luminous disks that do not initially have a giant planet drift toward lower radii and luminosities, and even below the SLR. Therefore, strong substructures need to form within the first $\sim$\SI{1}{Myr} for the disk to follow the $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$ relation. This may imply that in most star-forming regions, strong substructures did not form early enough in small disks, or that their substructure is weak. On the other hand, bright and large disks can very clearly show strong substructures and follow the SLR at the same time. This is indeed the case for the DSHARP sample \citep{Andrews_DSHARP_2018ApJ...869L..41A}, which is biased toward bright disks and shows significant substructures in every source. The latter can be confirmed from \autoref{fig:lum_R10} in the previous section: the brightest disks that stay on the SLR are those that contain planets located in the outer part of the disk (yellow and green lines). The SLR can thus be explained if there is a mixture of both smooth and strongly substructured disks.
Smooth disks always follow the SLR, as shown in \autoref{fig:heatmaps_ricci}, while the bright substructured disks populate the upper right part of the SLR (Figures \ref{fig:heatmaps_ricci}, \ref{fig:heatmaps_dsharp}, and \ref{fig:heatmaps_d90}). Disks with substructures that have a large \ensuremath{r_\mathrm{c}}\xspace and a low disk mass $\mathrm{M_{d}}$ populate the lower right part of the plot, below the SLR. These disks are not favored by \citet{Andrews2018a}, who find a tentative positive correlation between the mass of the star (or the disk) and the size of the disk. If massive small disks are excluded from the plot, the SLR could be reproduced by both substructured disks that occupy the upper right part and smooth (or weakly substructured) disks that occupy the lower left part of the SLR. Our results seem to be in agreement with the observational classification of \citet{VanDerMarel2021AJ....162...28V}, who suggest that all bright disks should have substructures formed by giant planets. Moreover, the SLR for the substructured disks is independent of the opacity model, but it slightly overpredicts the luminosity for the very large disks (see \autoref{disc:opacity} for more about the opacity). \subsection{$\mathrm{L_{mm} - M_{\star}}$ relation} \label{disc:Lmm-Mstar} In \autoref{subsub:Mstar_tracks} we discussed that the stellar mass ($\mathrm{M_{\star}}$) is directly correlated with the disk mass ($\mathrm{M_{d}}$) and that the disk temperature is only a weak function of $\mathrm{L_{\star}}$ (and therefore $\mathrm{M_{\star}}$). The fact that the disk mass scales with the stellar mass implies that the luminosity ($\mathrm{L_{mm}}$) scales with the stellar mass.
In \autoref{fig:L_Mstar} the $\mathrm{L_{mm}} - M_{\star}$ relation is shown for three different models for all matching simulations: the smooth case (yellow lines), a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ at $\mathrm{1/3\ensuremath{r_\mathrm{c}}\xspace}$ (green lines), and a planet with the same mass ratio at $\mathrm{2/3\ensuremath{r_\mathrm{c}}\xspace}$ (red lines). The markers show the median value of the luminosity at \SI{1}{Myr}, and the error bars extend to the $\mathrm{75\%}$ percentile toward the upper and lower values. The blue line is the $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ correlation from \citet{Andrews2018a}, a correlation that is consistent with those found in previous continuum surveys of comparable size and age \citep[][]{Andrews2013, Ansdell2016ApJ...828...46A, Pascucci2016ApJ...831..125P}. None of our models yields a correlation as steep as that of \citet{Andrews2018a}, but the cases with strong substructures have steeper profiles than the smooth cases. The reason is that no correlation between disk size and stellar mass was imposed in the parameter grid. If a size-mass correlation as inferred by \citet{Andrews2018a} were imposed, the mass-luminosity relation would be expected to steepen, as disks with optically thick substructures would be larger and therefore brighter. However, reproducing the observed mass-luminosity trend will be part of a future population synthesis study. A similar manifestation of the same trend can be seen in \autoref{fig:corner_13rc}: in the $\mathrm{\ensuremath{r_\mathrm{c}}\xspace-M_{d}}$ panel, the white dots show the mean value of the characteristic radius for every disk mass. In order for the correlation to reproduce the observations, more massive disks should initially have been larger. In other words, large low-luminosity disks would be expected in the lower right of the SL diagram, but are not observed.
Preliminary results indicate that these disks need to have a low disk mass $\rm{(M_{d}<10^{-2}M_{\star})}$ and be large in size $(\rm{\ensuremath{r_\mathrm{c}}\xspace>\SI{150}{au}})$. Moreover, the turbulence parameter should be relatively small ($\alpha \rm{\leq10^{-3}}$), otherwise the disk would behave as smooth and would follow the SLR. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/L-M_ricci_1Myr_q25.pdf} \caption[]{$\mathrm{L_{mm}} - M_{\star}$ relation at \SI{1}{Myr} for three different cases for the matching simulations: the smooth case (yellow lines), a planet with planet/star mass ratio $\mathrm{q=10^{-3}}$ at $\mathrm{1/3\ensuremath{r_\mathrm{c}}\xspace}$ (green lines), and a planet with the same ratio at $\mathrm{2/3\ensuremath{r_\mathrm{c}}\xspace}$ (red lines). The points show the median value of the luminosity at \SI{1}{Myr}, and the error bars extend to the $\mathrm{75\%}$ percentile toward the upper and lower values. The blue line is the $\mathrm{L_{mm}\propto M_{\star}^{1.5}}$ correlation from \citet{Andrews2018a}.} \label{fig:L_Mstar} \end{figure} \subsection{Scattering} Scattering is included in our simulations, as introduced in \autoref{sub:Observables}. Compared to the case where only the absorption opacity is used, the difference is minimal and can be observed only in a few cases. With the inclusion of scattering, the originally brightest disks (above the SLR) tend to move toward lower luminosity (move down in the SL diagram). This happens for disks that are optically thick, hence for those that contain planets. This effect favors the SLR and allows slightly more disks ($\rm{\sim 2\%}$) to enter the selected region. However, for moderately optically thick disks the emission is larger, and a small fraction of disks move up (toward higher luminosity) in the SL diagram (Fig. 4 in \citealt{Birnstiel_DSHARP}).
This happens because the derived intensity (\autoref{eq:intensity}) does not saturate to the Planck function, but to a slightly smaller value for a non-zero albedo. This is the well-known effect that scattering makes objects appear cooler than they really are. On the other hand, for small optical depths ($\rm{\tau\ll1}$) the effect of scattering is insignificant, because the intensity (\autoref{eq:intensity}) approaches $\rm{I_{\nu}^{out}\longrightarrow \epsilon_{\nu}^{eff}B_{\nu}(T_{d})\Delta \tau /\mu}$, which is identical to the solution obtained when $\rm{\kappa_{\nu}^{sca}}$ is set to zero while $\rm{\kappa_{\nu}^{abs}}$ is kept unchanged, as also shown in \citet{Birnstiel_DSHARP}. The effect of scattering also depends on the albedo ($\rm{\eta_{\nu}=1-\epsilon_{\nu}^{eff}}$). For the compositions we use, the maximum effective albedo is $\rm{0.57}$ for R10-0 and $\rm{0.82}$ for DSHARP, while it can reach $\rm{\sim 0.97}$ for DSHARP-90. For these compositions the effect of scattering is never more than a factor of $\rm{\sim 1.7}$ at a particle size of $\rm{1mm}$. The effect of scattering grows if we increase the albedo, but for any plausible composition it is effectively negligible. However, we obtain different results compared to \citet{Zhu2019ApJ...877L..18Z}. In that work the authors show that a completely optically thick disk with a high albedo ($\rm{0.9}$) can be constructed that lies along the SLR with the right normalization (because the high albedo and high optical depth lower the luminosity). Our findings, however, show that such disks cannot be reached from an evolutionary perspective. For smooth disks the dust drifts to the inner part of the disk and the disk is no longer optically thick, while disks with substructures create only optically thick rings rather than being completely optically thick everywhere.
\subsection{Predictions for longer wavelengths} A recent study \citep{Tazzari2021MNRAS.506.2804T} showed a flatter SLR at $\rm{3.1mm}$ ($\rm{L_{mm} \propto r_{eff}^{1.2}}$), confirming that emission at longer wavelengths becomes increasingly optically thin. We performed a series of simulations at $\rm{\SI{3.1}{mm}}$ for a comparison with these results. The disks are fainter and smaller at $\rm{\SI{3.1}{mm}}$. Two effects contribute to this. First, the value of the opacity decreases and the opacity cliff moves to larger particle sizes, which makes the disks optically thinner in comparison to the $\rm{\SI{850}{\mu m}}$ case. Second, the Planck spectrum yields a lower intensity at $\rm{\SI{3.1}{mm}}$, so the luminosity is lower. In terms of the SLR, the slope for the smooth disks does not change, since these disks are never optically thick; all disks simply move toward lower luminosities and smaller radii. On the other hand, the substructured disks that cover the SLR do not change in terms of slope, but the large and faint disks (right part of the heat map in \autoref{fig:heatmaps_ricci}) show a larger spread in luminosity compared to the shorter wavelength. Disks that are very optically thick and disks that are moderately optically thick have the same luminosity at $\rm{\lambda=850\mu m}$, but at $\rm{\SI{3.1}{mm}}$, because of the decrease in opacity, the former remain optically thick while the latter no longer are, leading to a decrease in luminosity. Therefore, the SLR can become flatter once these disks, which do not belong to the SLR, are taken into account. With our models the flatter relation from \citet{Tazzari2021MNRAS.506.2804T} could be explained by substructured and large smooth disks. The heat map in \autoref{app:heatmap} confirms this: in this figure we plot the simulations at $\rm{\SI{3.1}{mm}}$ using the R10-0 opacity model and overplot the SLR from \citet{Tazzari2021MNRAS.506.2804T}, $\rm{L_{mm} \propto r_{eff}^{1.2}}$.
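The drop in Planck intensity between the two observing wavelengths mentioned above can be quantified with a short back-of-the-envelope sketch. We assume a representative dust temperature of 20 K here purely for illustration (not a value taken from the text):

```python
import math

# Physical constants (SI).
h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m / s]
k = 1.380649e-23     # Boltzmann constant [J / K]

def planck_nu(lam, T):
    """Planck spectral radiance B_nu for wavelength lam [m] and temperature T [K]."""
    nu = c / lam
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k * T))

T_dust = 20.0  # assumed representative dust temperature [K]
ratio = planck_nu(850e-6, T_dust) / planck_nu(3.1e-3, T_dust)
print(ratio)   # B_nu at 850 um is roughly an order of magnitude higher
```

Under this assumption the Planck function alone is already about an order of magnitude lower at 3.1 mm, on top of the reduced opacity, consistent with the fainter disks at the longer wavelength.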
Substructured disks can explain this relation very well, since it is similar to the scaling relation we calculated for disks with strong substructures in \autoref{subsub:scaling_relation}. Small and smooth disks, on the other hand, cannot enter the relation because they are too faint, since their particles cannot grow to the sizes probed by the opacity cliff at $\rm{\SI{3.1}{mm}}$. We should mention the possibility that the flatter relation could be due to an observational bias toward large disks, which tend to be substructured. If small and faint disks are included in the sample, the observed SLR could be steeper and closer to the SLR from \citet{Andrews2018a}. Future observational surveys should investigate this possibility further. \subsection{Limitations} \label{disc:limitations} It is important to keep in mind the limitations of this paper. The time span used for the simulations displayed in this paper and the figures is from $\mathrm{\SI{300}{kyr}}$ to $\mathrm{\SI{3}{Myr}}$. This does not exclude the possibility that some disks with high disk mass might evolve a great deal on the SL diagram over $\mathrm{\SI{10}{Myr}}$-$\mathrm{\SI{20}{Myr}}$. Disk dissipation has not been modeled in this paper, but it will be considered in future work. In \autoref{fig:d2g_R10} we show the KDE of the global dust-to-gas ratio\footnote{The dust-to-gas ratio in the disk changes with radius and time, and this quantity is simply $\rm{M_{dust}/M_{gas}}$.} for three different snapshots at $\mathrm{\SI{300}{kyr}}$, $\mathrm{\SI{1}{Myr}}$, and $\mathrm{\SI{3}{Myr}}$, and for three different cases. Smooth disks lose dust relatively quickly due to radial drift, while disks with planets retain a much higher dust-to-gas ratio because of the strong trap. In the second panel there are cases where the dust-to-gas ratio increases above the initial $\mathrm{0.01}$.
These are substructured disks with intermediate $\mathrm{\alpha}$-values, high fragmentation velocities ($\mathrm{>\SI{1000}{cm/s}}$), and small sizes ($\mathrm{<\SI{60}{au}}$). The gas is removed more quickly than the dust, leading to a higher dust-to-gas ratio. Large values of $\mathrm{\alpha}$ would lead to less trapping, and the dust would drift as usual, while a low $\mathrm{\alpha}$ would mean that the disk does not evolve significantly. As mentioned in \autoref{sec:methods}, the stellar luminosity is not evolved in the simulation. If it were, the disk luminosity would scale approximately linearly with the stellar luminosity and would be further modulated by the resulting changes in the dust evolution. We therefore expect a general shift of the disks toward lower luminosities, with the trends that we have explored in \autoref{sec:results} remaining the same. Since most of the simulations need to be brighter to remain on the SLR, such a change in the luminosity favors higher $\rm{\alpha}$-values and lower fragmentation velocities than the values shown before. An example of a heat map is shown in \autoref{app:stellar_luminosity}. In our models the planets are already included at the beginning of the simulations, and they open a gap in the initially smooth surface density profile relatively quickly. Realistically, the timescales in the outer part of the disk are much longer, and the timescale for planet formation changes with the distance to the star \citep{Johansen2017AREPS..45..359J}. Therefore, we would expect the inner planet to form first and the outer planet later, as has been suggested (e.g., \citealt{pinilla2015A&A...580A.105P}). Since both planets start at the same time in our models, in reality the inner one might trap more of the total disk mass, and the outer bump might be less bright than in our models. This will be addressed in future work by inserting the outer planet later in the simulation.
\begin{figure*} \centering \includegraphics[width=\textwidth]{plots/d2g_R10-0_q25.pdf} \caption[]{Evolution of the global disk dust-to-gas ratio of all matching simulations with the \citet{Ricci_2010} R10-0 opacity model, for three different cases and three different snapshots, from $\mathrm{\SI{300}{kyr}}$ to $\mathrm{\SI{3}{Myr}}$. From left to right: the smooth case, a planet at $\mathrm{1/3\ensuremath{r_\mathrm{c}}\xspace}$, and a planet with the same planet/star mass ratio at $\mathrm{2/3\ensuremath{r_\mathrm{c}}\xspace}$. Different limits are used on the x-axis to highlight the evolution of the dust-to-gas ratio. The initial dust-to-gas ratio is $\mathrm{0.01}$. For the smooth case the dust-to-gas ratio decreases by three orders of magnitude up to $\mathrm{\SI{3}{Myr}}$. When a planet is included, the disk dust mass is retained, leading to a much higher dust-to-gas ratio. In the case where the planet is in the inner part of the disk (middle column), there are cases at $\mathrm{\SI{3}{Myr}}$ where the ratio is higher than $\mathrm{0.01}$; the gas is removed faster than the dust in this case.} \label{fig:d2g_R10} \end{figure*} \section{Conclusions} \label{sec:conclusions} In this paper we performed a large population study of 1D models of gas and dust evolution in protoplanetary disks to study how the effective radius and disk continuum emission evolve with time. We varied a range of initial parameters and included both smooth disks and disks that contain planets. We compared our results with the observed trend between continuum sizes and luminosities from \citet{Andrews2018a} and used it to constrain the initial conditions. Our findings are as follows: \begin{enumerate} \item Disks with strong traps (i.e., massive planets) follow a different SLR than smooth disks.
Smooth disks follow the \citet{Andrews2018a} relation, $\mathrm{L_{mm} \propto r_{eff}^{2}}$, as shown by \citet{rosotti2019millimetre}, while disks with massive planets follow $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. This could mean that not all disks in the \citet{tripathi2017millimeter} and \citet{Andrews2018a} joint sample have substructure as significant as that of, for example, HD163296. We explained this result with a simple analytical derivation: if the gas width scales as we measured it from FARGO-3D, and if the dust width scales as expected from trapping and fragmentation, then theoretically the luminosity scales as $\mathrm{L_{mm} \propto r_{eff}^{5/4}}$. \item Whether disks follow the SLR depends heavily on the opacity model. When the DSHARP \citep{Birnstiel_DSHARP} opacity is used, disks are not as luminous in the first $\rm{\SI{300}{kyr}}$ and the majority of them end up below the SLR. Especially for smooth disks, the DSHARP opacities produce far fewer matching simulations on the SLR than models using the \citet{Ricci_2010} R10-0 opacities ($\rm{0.8\%}$ with DSHARP versus $\rm{29.6\%}$ with R10-0). Therefore, with this opacity model, only disks with substructures can populate the SLR. On the other hand, the R10-0 opacities can reproduce disks both with and without substructures, since the absolute value of the opacity at $\SI{850}{\mu m}$ is $\rm{\sim 8.5}$ times higher than DSHARP for particle sizes around $\mathrm{\sim \SI{0.1}{mm}}$ (the position of the opacity cliff), and the disks become luminous enough to enter the relation. \item The SLR is more widely populated when substructures are included, in contrast to the tight correlation for smooth disks. Substructured disks cover mostly the upper right part (large and bright disks) of the SL diagram, while the lower left (small and faint) is covered by smooth disks. This is an indication that the SLR can be explained if there is a mixture of both smooth and strongly substructured disks.
\item The grain porosity can drastically affect the evolution track of the disk. Throughout our models, relatively compact grains ($\mathrm{<50\%}$ porosity) are preferred for simulations that follow the SLR. If we use slightly porous grains ($\mathrm{10\%}$) by altering the DSHARP opacity, the effect is insignificant, as the shape of the opacity cliff remains roughly the same. In contrast, for semi-porous ($\mathrm{50\%}$) and porous grains ($\mathrm{90\%}$) the opacity cliff flattens out, leading to disks with low luminosity. Only compact grains can explain the SLR for smooth disks, while any porosity can explain it when strong substructures are included. \item A high initial disk mass gives a higher probability for a simulation to follow the SLR. In this case, the disk initially starts above the SLR (bright) until it reaches a stable state at around $\mathrm{\sim \SI{300}{kyr}}$. By this time it enters the relation and, depending on the other initial conditions, either remains there and is considered a matching simulation, or leaves it. \item There is a preference toward low $\mathrm{\alpha}$-values (lower than $\mathrm{10^{-3}}$). This result is in line with other more direct methods of determining $\mathrm{\alpha}$ (e.g., \citealt{flaherty_2018ApJ...856..117F}). There are multiple reasons for this tendency. For $\mathrm{\alpha}\geq 2.5\cdot 10^{-3}$, disks tend to be more fragmentation dominated, the particle size decreases, and consequently the particles are not trapped by the pressure bump (if any), leaving these disks outside the relation. Moreover, the diffusivity increases and the peak of the pressure bump smears out, leading to inefficient trapping. On the other hand, if $\mathrm{\alpha}$ is low, the ring that forms becomes too narrow and the disks tend to have lower luminosity. \item The location of the planet as a function of the characteristic radius plays a major role in the final outcome.
If a planet is included in the inner part of the disk ($\mathrm{1/3 r_{c}}$), the disk has to be significantly larger in order to retain the correct ratio of luminosity to effective radius and stay on the SLR. Instead, when an outer planet ($\mathrm{2/3 r_{c}}$) is included, the disk tends to be smaller in size. When two planets are included, the location of the outermost one defines the size of the disk, but a combination of the two defines the luminosity. These results are also affected by the opacity model. \item We expect a less extended evolution track when substructure is included. The pressure bump halts the dust from drifting farther in, thereby constraining the size of the disk and not allowing it to evolve further along the SLR. Furthermore, when two planets are included, there is an indication that the inner planet should form first, since otherwise there would not be a large enough reservoir of material for it to form. \item We are not able to construct optically thick disks with a high albedo ($\rm{0.9}$) that lie along the SLR through an evolutionary procedure, as opposed to \citet{Zhu2019ApJ...877L..18Z}. Smooth disks are not optically thick due to radial drift, while disks with substructure create only optically thick rings rather than a uniform optically thick distribution. \item We chose different gap profiles based on \citet{Kanagawa2016} and compared them to hydrodynamical simulations. We conclude that the depth of the gap does not play an important role in the evolution of the disk on the SLR, as long as the planet is massive enough to stop the particles from drifting. Instead, the width of the gap is the important parameter (see \autoref{app:gap_profiles}, where we compare the different profiles for different parameters). \end{enumerate} This study shows how the combination of observed and simulated populations allows us to put constraints on crucial unknowns, such as the disk turbulence or the dust opacity.
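As a practical aside, the exponents quoted in the conclusions (2 for smooth disks, 5/4 with strong traps) are what a straight-line fit in log-log space recovers from a disk population. A minimal sketch on mock data (invented values, not our simulation grid):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Mock population obeying L ~ r^2 with lognormal scatter (values invented
# purely for illustration).
r_eff = 10 ** rng.uniform(1.0, 2.5, size=300)           # effective radii [au]
l_mm = r_eff**2 * 10 ** rng.normal(0.0, 0.1, size=300)  # luminosities [arb.]

# A power-law exponent is the straight-line slope in log-log space.
slope, intercept = np.polyfit(np.log10(r_eff), np.log10(l_mm), deg=1)
print(round(slope, 2))  # close to the input exponent of 2
```

Fitting the two sub-populations separately is the kind of test that would distinguish a slope of 2 from a slope of 5/4 in an observed sample.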
Future work is required to also investigate the effects of disk build-up and dissipation, as well as planet migration and planetesimal formation. \begin{acknowledgements} T.B. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 714769 and funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 325594231 under Ref no. FOR 2634/1. Furthermore, this research was supported by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311. G.R. acknowledges support from the Netherlands Organisation for Scientific Research (NWO, programme number 016.Veni.192.233) and from an STFC Ernest Rutherford Fellowship (grant number ST/T003855/1). \end{acknowledgements} \bibliographystyle{aa}
Spasm (English for "cramp") is a Czech grindcore band from Přerov in the Olomouc Region, founded in January 2000, originally as a side project of members of the bands Psychopathia and Romantic Love. After Psychopathia disbanded later that year, Spasm became a regular band. It presents its musical style as "drum 'n' bass gigolo goregrind". Notably, apart from a brief stint at the start of its career, the band does not use a guitarist. Its debut studio album, Lust for Feculent Orgasm, was released in 2005 on the Czech independent label Copremesis Records, whose owner, Radim Týn, later became the band's vocalist. At concerts the band performs in Borat-style swimsuits. As of 2022, Spasm has released a total of five full-length albums.

Discography

Demos
Spasmatic Secretion (2001)
Promo (2004) – promo recording on CD

Studio albums
Lust for Feculent Orgasm (2005)
Paraphilic Elegies (2008)
Taboo Tales (2011)
Pussy De Luxe (2015)
Mystery of Obsession (2021)

Compilations
Grind Over Sofia 2019 (2019) – 3 tracks on a digital compilation album recorded live in Sofia, Bulgaria

Splits
Spasm / Mizar (2008) – split with the Slovak band Mizar
Spasm / Gutalax (2017) – split with the Czech band Gutalax

External links
Spasm in the Encyclopaedia Metallum database
Spasm in the Discogs database
Spasm on Last.fm
Spasm on Metal Music Archives
Spasm on Bandzone.cz

Categories: Czech grindcore musical groups; Musical groups established in 2000; Musical groups 2000–2009; Musical groups 2010–2019; Musical groups 2020–2029
Q: Running .pkg installer fails through Script Editor, but works in Terminal

I'm trying to use Script Editor to create a clickable app that downloads a script, runs the script (which uninstalls a piece of software that isn't working properly), downloads a new .pkg installer, and then runs that .pkg. I'm trying to use terminal commands through AppleScript to do this. Here's the AppleScript code:

do shell script "cd ~/Downloads;curl -O https://my.url/jumpcloud-fix.sh;sudo sh jumpcloud-fix.sh" with administrator privileges
do shell script "cd ~/Downloads;curl --no-progress-meter -O https://my.url2/jumpcloud-agent.pkg;sudo /usr/sbin/installer -pkg jumpcloud-agent.pkg -target /" with administrator privileges

When I test it out by pressing the play button in Script Editor to run the code, everything works perfectly right up until the last step of running the installer. The installer appears to run, but then fails with the following message:

error "installer: Package name is JumpCloud Agent
installer: Upgrading at base path /
installer: The upgrade failed. (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance. An error occurred while extracting files from the package "jumpcloud-agent.pkg".)" number 1

However, when I actually open Terminal and input the commands below to run the exact same .pkg that was downloaded, it works and the GUI for the installer launches.

cd ~/Downloads
sudo /usr/sbin/installer -pkg jumpcloud-agent.pkg -target /

Any ideas why the installer isn't working through the AppleScript code in Script Editor, but seems to be working just fine with the same commands in Terminal?
Plastic pollution is a growing problem around the world and right here in London. Plastic, particularly single-use plastic, threatens the health of our rivers, lakes, oceans, marine life and, eventually, human life. We want to create a cleaner, healthier environment to benefit all of us, and we would like your help to make this happen. The Plastic Ocean Festival will showcase a series of events from April to September 2017 in London, incorporating film screenings of the A Plastic Ocean documentary, marine and riverine clean-ups, stand-up paddle boarding, and educational talks by scientists. Find out about the festival, our mission, the film, and why we're doing what we're doing. Join a film screening of A Plastic Ocean, clean up the Thames, have fun stand-up paddle boarding, and find out more about plastic in our waterways.
The Chair is a 2016 American horror film directed by Chad Ferrin. It is based on the graphic novel of the same name by Peter Simeti and Erin Kohut. The film premiered in October 2016 at the Northeast Wisconsin Horror Festival.

Plot: Richard Sullivan was wrongly convicted and is awaiting execution in prison. Sullivan witnesses the warden and his staff killing inmates in sadistic and cruel ways. Soon the guards want to murder Sullivan too, so he decides to escape from prison; however, he is plagued by terrible memories from his childhood, as he was abused by his mother.

Production and release: Chad Ferrin directed, and Erin Kohut wrote the screenplay. The producers were Timothy Morse, Jes Pececita Joule and Craig Walendziak. Douglas Edward composed the music, and Christian Janss was responsible for cinematography. Art direction was handled by Devynne Lauchner and Kristen Wair. Jahad Ferif was responsible for editing. The Chair first appeared on 18 October 2016 at the Northeast Wisconsin Horror Festival and was later released directly to DVD on 31 October 2017. The main cast featured Bill Oberst Jr. as the warden, Roddy Piper as Murphy, Noah Hathaway as Alvarez and Zach Galligan as Riley. The film had a budget of 200,000 US dollars.

Reception: On IMDb, the film received a rating of 4.5 out of 10.
ENCINITAS — Council ordered a special election for the "right-to-vote" initiative to take place June 18, instead of adopting it outright at Tuesday night's meeting. Under the initiative, increasing density or building heights beyond 30 feet would require a majority vote of the public. Additionally, changing the zoning type of a parcel in some circumstances would also need voter approval. The initiative aims to strip council of its power to increase height or density and change zoning type with a four-out-of-five council member vote. Council members agreed that they, and future councils, shouldn't have the ability to "up-zone" with a four-fifths vote. But they also had some reservations with the initiative. Councilwoman Kristin Gaspar said she's concerned that the initiative, if approved by voters, might also need the go-ahead from the California Coastal Commission. About 80 percent of the city falls under the coastal commission's jurisdiction. Should the initiative pass with the voters and the coastal commission deny it, it would put most of the city on "one zoning track and the rest on another," Gaspar said. "What it says to me is that the projects that incorporate more intense uses get shoved to the areas that have the more lenient zoning," Gaspar said. Councilman Mark Muir said the initiative is a package deal. Council members can't pick and choose what they like from it. The initiative will include an impartial description and an argument for and against it when it goes before voters. In sending the initiative to a special election, council had the option of writing the argument against it. Council agreed the opportunity should be used to list the pros and cons of the initiative, or what Mayor Teresa Barth called an argument that's "kinda, sorta against." A subcommittee will present that language to council for approval March 27. 
During the public comments section, Ian Thompson, the husband of the late Councilwoman Maggie Houlihan, said the initiative will give residents power over influential development interests. "Unfortunately for the past 12-plus years, Encinitas City Council development decisions have been dominated by special interests, inside and outside of our community," said Thompson referring to actions of previous councils. Because of this, much of the development in the city has been incompatible with Encinitas' slow-growth philosophy, he said. As a result of council direction, the law firm Rutan and Tucker issued an analysis of the initiative last week. The report lists a host of issues. Chiefly, the report states that the city would have a difficult time complying with state-housing requirements if major zoning decisions are put in the hands of voters. Every eight years, the state says a certain number of housing units must be built in Encinitas, and other cities, based on population trends and other factors. To accommodate these units, the city will likely have to rezone properties or plan for increased density, triggering a public vote. If voters reject the units, developers could sue the city for not having a housing element in place, which is against state law. But two of the 28 public speakers pointed out that Encinitas has never certified a housing element. Further, the initiative isn't "anti-growth," but rather about growth people can live with, several residents said. A similar proposition was passed in Escondido more than a decade ago. Escondido has rejected some developments, but the city has never faced a lawsuit from developers due to the proposition, according to Jerry Harmon, a past Escondido councilman. "Citizens do pay attention; they do like to be at the table," Harmon said. 
Everett Delano, a lawyer who helped author the right-to-vote initiative, said Rutan and Tucker's report is full of "dooms-speak," specifically the claim that the coastal commission must green light the initiative. His reasoning: the initiative seeks to overturn certain land use elements that aren't subject to coastal commission approval. But on the off chance the coastal commission does get involved, only parts of the initiative are in question, rather than the whole thing, Delano said. Not all the speakers were in favor of the initiative. Some were plainly opposed. They said the initiative would cripple development and send election costs sky high. Keith Harrison said he too is concerned with community character. But he noted the initiative could negatively impact "specific plans" like the one in downtown Encinitas. Under the specific plans, some of the buildings within them are taller or denser than normally allowed under city standards. In certain circumstances, raising density makes sense, and it's not fair to hold up specific plans that take the context of the neighborhood into account, Harrison said. "The La Paloma Theatre is one of the most beloved buildings we have in our downtown; it's 40 feet tall," Harrison said. "That tells you right there that a 30-foot-height limit doesn't establish community character," Harrison added. Council also threw around the idea of a public workshop to discuss the pros and cons of the initiative, but didn't finalize plans for it. At least 5,700 signatures for the initiative were deemed valid, qualifying it for a special election. The special election will cost the city an estimated $350,000 to $400,000.
package org.apache.cassandra.db.partitions; import java.util.function.LongPredicate; import java.util.function.Predicate; import org.apache.cassandra.db.*; import org.apache.cassandra.db.rows.*; import org.apache.cassandra.db.transform.Transformation; public abstract class PurgeFunction extends Transformation<UnfilteredRowIterator> { private final DeletionPurger purger; private final int nowInSec; private final boolean enforceStrictLiveness; private boolean isReverseOrder; public PurgeFunction(int nowInSec, int gcBefore, int oldestUnrepairedTombstone, boolean onlyPurgeRepairedTombstones, boolean enforceStrictLiveness) { this.nowInSec = nowInSec; this.purger = (timestamp, localDeletionTime) -> !(onlyPurgeRepairedTombstones && localDeletionTime >= oldestUnrepairedTombstone) && localDeletionTime < gcBefore && getPurgeEvaluator().test(timestamp); this.enforceStrictLiveness = enforceStrictLiveness; } protected abstract LongPredicate getPurgeEvaluator(); // Called at the beginning of each new partition protected void onNewPartition(DecoratedKey partitionKey) { } // Called for each partition that had only purged infos and are empty post-purge. protected void onEmptyPartitionPostPurge(DecoratedKey partitionKey) { } // Called for every unfiltered. Meant for CompactionIterator to update progress protected void updateProgress() { } @Override protected UnfilteredRowIterator applyToPartition(UnfilteredRowIterator partition) { onNewPartition(partition.partitionKey()); isReverseOrder = partition.isReverseOrder(); UnfilteredRowIterator purged = Transformation.apply(partition, this); if (purged.isEmpty()) { onEmptyPartitionPostPurge(purged.partitionKey()); purged.close(); return null; } return purged; } @Override protected DeletionTime applyToDeletion(DeletionTime deletionTime) { return purger.shouldPurge(deletionTime) ? 
DeletionTime.LIVE : deletionTime; } @Override protected Row applyToStatic(Row row) { updateProgress(); return row.purge(purger, nowInSec, enforceStrictLiveness); } @Override protected Row applyToRow(Row row) { updateProgress(); return row.purge(purger, nowInSec, enforceStrictLiveness); } @Override protected RangeTombstoneMarker applyToMarker(RangeTombstoneMarker marker) { updateProgress(); boolean reversed = isReverseOrder; if (marker.isBoundary()) { // We can only skip the whole marker if both deletion time are purgeable. // If only one of them is, filterTombstoneMarker will deal with it. RangeTombstoneBoundaryMarker boundary = (RangeTombstoneBoundaryMarker)marker; boolean shouldPurgeClose = purger.shouldPurge(boundary.closeDeletionTime(reversed)); boolean shouldPurgeOpen = purger.shouldPurge(boundary.openDeletionTime(reversed)); if (shouldPurgeClose) { if (shouldPurgeOpen) return null; return boundary.createCorrespondingOpenMarker(reversed); } return shouldPurgeOpen ? boundary.createCorrespondingCloseMarker(reversed) : marker; } else { return purger.shouldPurge(((RangeTombstoneBoundMarker)marker).deletionTime()) ? null : marker; } } }
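The nested conditions inside the `purger` lambda are easier to read when spelled out. A paraphrase in Python for readability (a sketch of the predicate's logic only, not Cassandra's actual API):

```python
def should_purge(timestamp, local_deletion_time,
                 gc_before, oldest_unrepaired_tombstone,
                 only_purge_repaired_tombstones, purge_evaluator):
    """Mirror of PurgeFunction's DeletionPurger lambda: a tombstone is
    purgeable only if (a) it is not an unrepaired tombstone while we are
    restricted to repaired ones, (b) it expired before gcBefore, and
    (c) the caller-supplied timestamp predicate agrees."""
    if only_purge_repaired_tombstones and \
            local_deletion_time >= oldest_unrepaired_tombstone:
        return False
    return local_deletion_time < gc_before and purge_evaluator(timestamp)
```

The same three-way conjunction governs every call site above: partition-level deletions, rows, and both halves of range-tombstone boundary markers.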
Q: Pumping system / Plant Convergence issue

The following configuration of cooling loops in EnergyPlus works with just a few warnings, which I will mention (CoolingLoad operation control and two setpoints, and the HX has OperationSchemeModulated control):

Now I want my pumps and chillers to have different sizes, but HeaderedPumps:ConstantSpeed gives identical pumps, so I decided to go with the following configuration with Pump:ConstantSpeed (the only change):

In a one-day simulation, I got plenty of these warnings at the time the setpoint rises from 3 to 6.7:

************* ** Warning ** HeatExchanger:FluidToFluid named HX - Iteration Limit exceeded calculating demand side loop flow rate continues.
************* ** ~~~ ** This error occurred 29 total times;
************* ** ~~~ ** during Warmup 0 times;
************* ** ~~~ ** during Sizing 0 times.
************* ** ~~~ ** Max=74.812813 Min=0.616959

It seems that when the setpoint goes from 3 to 6.7, the plant has fluid on the supply side with a temperature lower than the new setpoint (6.7), so it stops working while the heat exchanger is demanding load. I don't know if it's a bug or something I'm doing wrong. I would be grateful for any suggestion to resolve this issue.

Update: I'm trying an EMS (Python) solution like the one below, but it doesn't work:

If demand > 0 and PLR1 == 0 and PLR2 == 0:
    pump1 (actuator) = ON
    pump2 (actuator) = ON
    pump1 flow (actuator) = max (internal variable)
    pump2 flow (actuator) = max (internal variable)

The conditions are right, but the pumps' flow does not change. I don't know why!
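The update's pseudocode can first be checked as plain Python logic, separate from the actuator plumbing (a sketch; the names are taken from the question, and the actual EnergyPlus Python API handle lookups and actuator writes are where such overrides commonly fail silently, e.g. an invalid handle or a write made during warmup):

```python
def pump_override(demand, plr1, plr2, max_flow1, max_flow2):
    """Decide pump actuator writes per the question's pseudocode:
    if there is demand but both chillers are idle (PLR == 0), force
    both pumps on at their maximum mass flow rates.
    Returns None when the plant should be left to its normal control."""
    if demand > 0 and plr1 == 0 and plr2 == 0:
        return {"pump1_on": 1, "pump2_on": 1,
                "pump1_flow": max_flow1, "pump2_flow": max_flow2}
    return None
```

If this pure logic behaves as expected on sampled values, the remaining suspects are the actuator side: whether the handles resolve to valid IDs and whether the writes happen at a calling point where the plant manager does not immediately overwrite them.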
Q: How to do regression on a time series by learning from historical time series?

I have a data set of customer purchases from the day of their registration to 120 days. There is a time series for each customer. However, some new customers do not have a history of 120 days yet. I want to predict how many purchases they will make by the time their history reaches 120 days.

I have created a feature set including frequency of purchase, recency and monetary value, and product category (https://en.wikipedia.org/wiki/RFM_(customer_value)).

How can I train the model from the time series to make a regression for each customer?

• It seems you already have a good start. You have the variables and you have the model (a regression), so what is missing? I suppose you "train" your model by estimating it; is there a problem there? What could be improved upon is accounting for seasonality and allowing for time series patterns in model residuals by fitting regression with ARMA errors instead of plain regression. This can be done with functions arima ("stats" package) or auto.arima ("forecast" package) in R. – Richard Hardy Jul 30 '16 at 12:40
• (I edited your question a bit. Please check whether I have not changed the meaning. You may roll back or edit further.) – Richard Hardy Jul 30 '16 at 12:46

A: There is no need to do a regression for every customer. You just need one model to do everything. You can construct the training data as follows:

First, select the same time period for every customer (you can choose those customers who purchased for at least 120 days).

Second, do some feature engineering, like last month's purchase moving average, day of week, weekend, holiday, and so on.

Finally, arrange the training data like this: Customer-Id, Purchases, timestamp, day-of-week, moving-average terms, ...

Then you can do some regression on it, but I would suggest doing some feature selection using tree-based models first.

A: 1) Here's an approach that will work if you want predictions for days other than the 120th in addition to working for the 120th. If you want to do a true time series regression, you need features to account for trend and seasonality (this essentially acts as the "differencing" you'd need to do if you were making a non-stationary time series stationary).

To do this, add a feature "customer_age_in_days," where you index each and every customer's activity by the number of days since his/her first activity. If a customer starts on 1/1/12, his age in days on 1/2/12 should be 2 (don't zero-index; it could mess things up). If another customer starts on 1/7/14, his age in days on 1/9/14 is 3.

Then, graph this time feature versus your dependent variable (number of purchases) and see what the trend looks like; it might not be linear. Play around with what transformations it might follow (sqrt, log, square, cube, etc.). It could even be a combination of some.

For seasonality, add dummy variables for which day of the week it is: isMon, isTues, ..., isFri, where the variable = 1 if it is that day of the week, and 0 if it is not. Delete the one with the least correlation with your dependent variable so as to avoid perfect multicollinearity.

You can then run a regression with customer_age_in_days and your isMon...isFri variables, along with your other features. To get your prediction, put in the data that corresponds to the 120th day.

2) You could do a regression independent of the continuous approach described above if you just want the 120th day. You could have a lot of other features such as the ones you described, with your dependent variable still being the number of purchases made by day 120. Then you just regress on all these other features without having time or seasonality as features. You could add lagged features such as "number of purchases by day x" for x in [10, 20, ...]. The limitation is that x would have to be less than or equal to the minimum age in days of all your customers (since if one customer is 40 days old and you have a feature of "purchases by day 50," that column will be NaN for that customer and mess everything up).

3) Do a traditional time series model. auto.arima() is good, and you could look into Facebook Prophet as well.
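Approach (1) can be sketched end to end on toy data (a minimal sketch: the column names, the sqrt trend, and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.integers(1, 121, size=n)   # customer_age_in_days, 1-indexed
dow = rng.integers(0, 7, size=n)     # day of week of the observation
# toy target: purchases grow like sqrt(age) with a weekday bump plus noise
y = 2.0 * np.sqrt(age) + 0.5 * (dow < 5) + rng.normal(0, 0.3, n)

# design matrix: intercept, sqrt-transformed trend, 6 day-of-week dummies
# (one day dropped to avoid perfect multicollinearity, as the answer advises)
dummies = np.eye(7)[dow][:, :6]
X = np.column_stack([np.ones(n), np.sqrt(age), dummies])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# predict purchases on day 120 for a customer observed on day-of-week 0
x_new = np.concatenate([[1.0, np.sqrt(120)], np.eye(7)[0][:6]])
pred = x_new @ beta
```

The same design-matrix layout carries over to approach (2): drop the trend column, keep the lagged "purchases by day x" features, and regress on the day-120 totals directly.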
IJoSS Vol. II / No. 2. Counsellor and Practicum Supervisor Critical Incidents in the Development of Multicultural and Social Justice Competency. Collins, S., Arthur, N., & Brown, C. Cultural influences on the identities of clients and counsellors, and how those identities intersect, have a significant effect on the counselling process. The experiences and worldviews of clients from diverse cultural backgrounds influence presenting concerns, case conceptualization, and intervention strategies. Clients with non-dominant identities (ethnicity, gender, sexual orientation, ability, socioeconomic status, age, or religion) more often encounter experiences of social injustice, including discrimination and cultural oppression, that significantly impact psychosocial wellbeing. With increased globalization and cultural diversity in many countries, there is a call for increased attention to these challenges. Counsellors are expected to engage in social justice action, with or on behalf of clients, to effect change in organizations, communities, or broader social systems. To prepare counsellors for these challenges, graduate counsellor education programs must incorporate competency development in both multicultural counselling and social justice. However, research is lacking on the effectiveness of current curricula and the application of learning to practice contexts. The existing literature suggests that counsellors are not fully prepared to meet these complex challenges, particularly in the area of social justice. The purpose of this study was to examine how well selected counsellor education programs in Canada are preparing counsellors for both multicultural counselling and social justice. Most research has focused on curriculum content; less attention has been paid to how that content is taught and the efficacy of those learning processes in facilitating competency. 
The critical incident technique was used to solicit examples of effective and less effective learning processes from both practicum supervisors and counsellors in the field. Twenty-five practicum supervisors from two graduate programs and 48 counsellors from national and provincial counselling organizations participated in the study through an online survey; a portion of these provided the detailed critical incidents discussed in this paper. The qualitative data were analyzed to isolate, cluster, and relate emergent concepts. A critical psychology lens facilitated contextualization of the data within full transcripts and within the power structures of education, the profession, and society, to examine both overt and covert meanings. Several themes emerged from the detailed analysis of these critical incidents. The strongest theme was the lack of graduate multicultural education and, even more markedly, of a focus on social justice. This gap in learning was itself a critical incident, particularly as participants encountered the demands of culturally diverse work environments. For many, their competency evolved post-education through self-study and direct contact with diverse populations and, in some cases, through observations of cultural oppression in their work contexts. Those who had graduate multicultural counselling coursework highlighted critical readings, experiential learning activities, exposure to cultural diversity (sometimes through instructors and peers), open discussions, and opportunities to engage in direct service or applied practice. A statement by one participant reflects the conclusion that combining theory and practice optimizes learning: "I think that we need to engage fully in experiences which help us understand others at a deeper level, and this does not occur through reading some book on cultural differences." Recommendations for teaching and educational practices will be highlighted. 
http://www.iises.net/?p=8317 Collins, S., Arthur, N., & Brown, C. (2013). Counsellor and Practicum Supervisor Critical Incidents in the Development of Multicultural and Social Justice Competency. International Journal of Social Sciences, II(2), 16–32.
\begin{ack} John Canny is associated with both Google Research and the University of California, Berkeley. \end{ack} \bibliographystyle{plain} \section{Conclusion} \label{sec:conclusion} We introduced compressed versions of two state-of-the-art self-supervised algorithms, SimCLR~\cite{chen2020simple} and BYOL~\cite{grill2020bootstrap}, using the Conditional Entropy Bottleneck (CEB)~\cite{fischer2020ceb}. Our extensive experiments verified our hypothesis that compressing the information content of self-supervised representations yields consistent improvements in both accuracy and robustness to domain shifts. These findings were consistent for both SimCLR and BYOL across different network backbones, datasets and training schedules. Furthermore, we presented an alternative theoretical explanation of why C-SimCLR models are more robust, in addition to the information-theoretic view~\cite{fischer2020conditional, fischer2020ceb, achille2017emergence, achille2018information}, by connecting Lipschitz continuity to compression. \iffalse We introduced compressed versions of SimCLR and BYOL using CEB. Our extensive experimental results verified our hypothesis that compressed self-supervised representations yield substantially better performance and are robust to domain shifts and perturbation. In particular, under the linear evaluation protocol on ImageNet with a ResNet-50, CEB compression improves BYOL accuracy from 74.3\% to 75.0\%, further reducing the gap with respect to the supervised ResNet-50 (76.0\%) \cite{chen2020simple} from 1.7\% to 1.0\%. The improvement is consistently significant in SimCLR and BYOL across different network sizes and numbers of training epochs. Furthermore, we gave an alternative theoretical explanation, in addition to \cite{fischer2020conditional}, for why CEB-compressed SimCLR models are more robust by connecting Lipschitz continuity to compression. 
\fi \paragraph{Limitations.} We note that using CEB often requires explicit and restricted distributions. This adds certain constraints on modeling choices. It also requires additional effort to identify or create required random variables, and find appropriate distributions for them. Although we did not need additional trainable parameters for C-SimCLR, we did for C-BYOL, where we added a linear layer to the online encoder, and a 2-layer MLP to create $b(\cdot|y)$. It was, however, not difficult to observe that the von Mises-Fisher distribution corresponds to the loss functions of BYOL and SimCLR, as well as other recent InfoNCE-based contrastive methods \cite{caron2020unsupervised,chen2020improved,he2020momentum}. \paragraph{Potential Negative Societal Impact.} Our work presents self-supervised methods for learning effective and robust visual representations. These representations enable learning visual classifiers with limited data (as shown by our experiments on ImageNet with 1\% or 10\% training data), and thus facilitate applications in many domains where annotations are expensive or difficult to collect. Image classification systems are a generic technology with a wide range of potential applications. We are unaware of all potential applications, but are cognizant that each application has its own merits and societal impacts depending on the intentions of the individuals building and using the system. We also note that training datasets contain biases that may render models trained on them unsuitable for certain applications. It is possible that people use classification models (intentionally or not) to make decisions that impact different groups in society differently. \section{Related Work} \label{sec:related_work} Most methods for learning visual representations without additional annotation can be roughly grouped into three families: generative, discriminative, and bootstrapping. 
Generative approaches build a latent embedding that models the data distribution, but suffer from the expensive image generation step \cite{vincent2008extracting,rezende2014stochastic,goodfellow2014generative,hinton2006fast,kingma2013auto}. While many early discriminative approaches used heuristic pretext tasks~\cite{doersch2015unsupervised, noroozi2016unsupervised}, multi-view contrastive methods are among the recent state-of-the-art \cite{chen2020simple,he2020momentum,chen2020improved,chen2020big,oord2018representation,henaff2020data,li2021prototypical,tian2019contrastive,caron2020unsupervised}. Some previous contributions in the multi-view contrastive family \cite{tian2020makes, zbontar2021barlow,sridharan2008information,dubois2021lossy} can be connected to the information bottleneck principle \cite{tishby2000information,tishby2015deep,alemi2016deep}, but in the form of unconditional compression, as they are agnostic of the prediction target, i.e. the target view in multiview contrastive learning. As discussed in~\cite{fischer2020conditional, fischer2020ceb}, CEB performs conditional compression that directly optimizes for the information relevant to the task, and is shown to be theoretically and empirically better \cite{fischer2020conditional,fischer2020ceb}. A multi-view self-supervised formulation of CEB, which C-SimCLR can be linked to, was described in \cite{fischer2020conditional}. Federici~{et al}.\@ \cite{federici2020learning} later proposed a practical implementation of that, leveraging either label information or data augmentations. In comparison to \cite{federici2020learning}, we apply our methods with large ResNet models to well-studied large-scale classification datasets like ImageNet and study improvements in robustness and generalization, rather than using two-layer MLPs on smaller-scale tasks. This shows that compression can still work using state-of-the-art models on challenging tasks. 
Furthermore, we use the vMF distribution rather than Gaussians in high-dimensional spaces, and extend beyond contrastive learning with C-BYOL. Among the bootstrapping approaches \cite{guo2020bootstrap,caron2018deep,grill2020bootstrap} to which BYOL \cite{grill2020bootstrap} belongs, BYORL \cite{gowal2021selfsupervised} modified BYOL to leverage Projected Gradient Descent \cite{madry2017towards} and learn a more adversarially robust encoder. Its focus, however, differs from ours, as we concentrate on improving the generalization gap and robustness to domain shifts. A variety of theoretical work has established that compressed representations yield improved generalization, including \cite{shamir2008learning,vera2018role,dubois2020learning}. Our work demonstrates that these results hold in practice, for important problems like ImageNet, even in the setting of self-supervised learning. Our theoretical analysis linking Lipschitz continuity to compression also gives a different way of viewing the relationship between compression and generalization, since smoother models have been found to generalize better (e.g., \cite{bruna2013invariant}). Smoothness is particularly important in the adversarial robustness setting~\cite{weng2018evaluating,fazlyab2019efficient,yang2020closer}, although we do not study that setting in this work.

\section{Introduction}
\label{sec:intro}

Individuals develop mental representations of the surrounding world that generalize over different views of a \textit{shared context}. For instance, a shared context could be the identity of an object, which does not change when the object is viewed from different perspectives or under different lighting conditions. This ability to represent views by distilling information about the \textit{shared context} has motivated a rich body of self-supervised learning work \cite{ oord2018representation, bachman2019learning, chen2020simple, grill2020bootstrap, he2020momentum, lee2020predictive}. As a concrete example, we can consider an image from the ImageNet training set \cite{russakovsky2015imagenet} as a shared context, and generate different views of it by repeatedly applying different data augmentations. Finding stable representations of a shared context corresponds to learning a minimal high-level description, since not all information is relevant or persistent. This explicit requirement of learning a concise representation leads us to prefer objectives that are \textit{compressive} and only retain the relevant information.
Recent contrastive approaches to self-supervised visual representation learning aim to learn representations that maximally capture the mutual information between two transformed views of an image~\cite{oord2018representation,bachman2019learning, chen2020simple,he2020momentum,hjelm2018learning}. The premise of these approaches is that this mutual information corresponds to a general shared context that is invariant to various transformations of the input, and that such invariant features will be effective for downstream higher-level tasks. However, although existing contrastive approaches maximize mutual information between augmented views of the same input, they do not necessarily compress away the irrelevant information in these views, nor do they capture relevant compression in their objectives~\cite{chen2020simple, he2020momentum}. As shown in~\cite{fischer2020conditional,fischer2020ceb}, retaining irrelevant information often leads to less stable representations and to failures in robustness and generalization, hampering the efficacy of the learned representations. An alternative state-of-the-art self-supervised learning approach is BYOL~\cite{grill2020bootstrap}, which uses a slow-moving average network to learn consistent, view-invariant representations of the inputs. However, it also does not explicitly compress irrelevant information in its objective. In this work, we modify SimCLR~\cite{chen2020simple}, a state-of-the-art contrastive representation learning method, by explicitly adding information compression using the Conditional Entropy Bottleneck (CEB)~\cite{fischer2020ceb}. Furthermore, we show how BYOL~\cite{grill2020bootstrap} representations can also be compressed using CEB. Using CEB, we are able to both measure and control the amount of information compression in the learned representation~\cite{fischer2020conditional}, and observe its impact on downstream tasks.
We empirically demonstrate that our compressive variants of SimCLR and BYOL, which we name C-SimCLR and C-BYOL, significantly improve accuracy and robustness to domain shifts across a number of scenarios. Our primary contributions are: \begin{itemize}[leftmargin=1em] \item Reformulations of SimCLR and BYOL such that they are compatible with information-theoretic compression using the Conditional Entropy Bottleneck \cite{fischer2020conditional}. \item An exploration of the relationship between Lipschitz continuity, SimCLR, and CEB compression, as well as a simple, tractable lower bound on the Lipschitz constant. This provides an alternative explanation, in addition to the information-theoretic view \cite{fischer2020conditional, fischer2020ceb,achille2017emergence, achille2018information}, for why CEB compression improves SimCLR model robustness. \item Extensive experiments supporting our hypothesis that adding compression to state-of-the-art self-supervised representation methods like SimCLR and BYOL can significantly improve their performance and robustness to domain shifts across multiple datasets. In particular, the linear evaluation accuracies of C-BYOL are competitive with the supervised baselines considered by SimCLR~\cite{chen2020simple} and BYOL~\cite{grill2020bootstrap}: C-BYOL reaches 76.0\% and 78.8\% with ResNet-50 and ResNet-50 2x respectively, whereas the corresponding supervised baselines reach 76.5\% and 77.8\%. \end{itemize}

\section{Experimental Evaluation}
\label{sec:experiments}
We first describe our experimental set-up in Sec.~\ref{sec:experiments_setup}, before evaluating the image representations learned by our self-supervised models in both linear evaluation and semi-supervised experiments in Sec.~\ref{sec:experiments_imagenet}. We then analyse the robustness and generalization of our self-supervised representations by evaluating model accuracy across a wide range of domain and distributional shifts in Sec.~\ref{sec:experiments_robustness}. Finally, we present further ablations in Sec.~\ref{sec:experiments_ablations}.

\subsection{Experimental Set-up}
\label{sec:experiments_setup}
\paragraph{Implementation details} Our implementation is based on the public code of SimCLR~\cite{chen2020simple}, and we will release our own code upon acceptance. We summarise the most important details here; exhaustive implementation details are provided in the appendix.
\paragraph{Image augmentations} We use the same image augmentations as BYOL~\cite{grill2020bootstrap}, which are similar to those of SimCLR~\cite{chen2020simple}; the full augmentation parameters are listed in the appendix.
\paragraph{Network architecture} We use ResNet-50 encoders, with batch normalisation synchronised across different workers, following~\cite{chen2020simple, grill2020bootstrap}.
\paragraph{Training} We train with the LARS optimizer~\cite{you2017large} and a cosine learning rate schedule, for 300 epochs in our ablations and for 1000 epochs for our main results. For BYOL experiments, we follow the exponential-moving-average schedule of~\cite{grill2020bootstrap}. We use a batch size of 4096, split over 64 Cloud TPU v3 accelerators; learning rates and remaining hyperparameters are given in the appendix.
\paragraph{Evaluation protocol} We first train models self-supervised on the ImageNet training set, without using any labels, and then train a linear classifier on top of the frozen representation. The accuracy of this classifier is our final performance metric. As our approach builds on SimCLR~\cite{chen2020big} and BYOL~\cite{grill2020bootstrap}, we follow the same evaluation protocols.

\subsection{Linear evaluation of self-supervised representations}
\label{sec:experiments_imagenet}
\paragraph{Linear evaluation on ImageNet~\cite{russakovsky2015imagenet}}
\input{tables/imagenet_linear_eval.tex}
We first evaluate the representations learned by our models by training a linear classifier on top of frozen features, following standard practice~\cite{chen2020simple, grill2020bootstrap}.
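As a minimal sketch of this linear evaluation protocol (with synthetic Gaussian features standing in for the frozen encoder outputs, since the real encoder and data pipeline are not reproduced here), the probe is simply a linear softmax classifier trained on fixed representations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen encoder features: two Gaussian blobs that
# differ along one dimension. In the real protocol these come from the
# frozen self-supervised backbone.
n, d = 400, 32
labels = rng.integers(0, 2, size=n)
feats = rng.normal(size=(n, d))
feats[:, 0] += 3.0 * labels  # class-dependent shift on one dimension

# Train a linear (softmax) classifier on the frozen features by gradient
# descent; the features themselves are never updated.
W, b = np.zeros((d, 2)), np.zeros(2)
onehot = np.eye(2)[labels]
for _ in range(300):
    logits = feats @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    W -= 0.5 * feats.T @ (p - onehot) / n
    b -= 0.5 * (p - onehot).mean(axis=0)

acc = ((feats @ W + b).argmax(axis=1) == labels).mean()
assert acc > 0.8  # a linear probe should fit separable frozen features well
```

The quality of the frozen representation is then read off as the accuracy of this probe; no gradients ever flow into the encoder.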
As shown in Tab.~\ref{tab:linear_eval_r50}, our compressed objectives provide consistent improvements to both SimCLR~\cite{chen2020simple} and BYOL~\cite{grill2020bootstrap} across ResNet architectures of varying widths (and thus numbers of parameters). We note that these improvements are significant, as SimCLR and BYOL are two recent, state-of-the-art methods, and our reproduction of the SimCLR baseline outperforms that of the original paper (our uncompressed model's Top-1 accuracy is 69.0\% compared to \TODO{X} in~\cite{chen2020simple}). Similarly, our implementation of BYOL, which obtains a Top-1 accuracy of 72.9\%, closely matches that of~\cite{grill2020bootstrap} (\TODO{X}). Current self-supervised methods benefit from longer training schedules~\cite{chen2020simple, chen2020big, grill2020bootstrap}, and Tab.~\ref{tab:linear_eval_r50} shows that our improvements remain consistent for both the 300-epoch schedule and the longer 1000-epoch schedule, which achieves the best results. \TODO{In addition to the Top-1 and Top-5 accuracies, we also report the Brier score~\cite{}, which measures model calibration. As with predictive accuracy, we observe that our compressed models obtain small but consistent improvements.} \paragraph{Semi-supervised training on ImageNet} \input{tables/imagenet_semi_supervised.tex} After self-supervised pretraining on ImageNet, we finetune the final linear classification layer of the network on a small subset (1\% or 10\%) of the ImageNet training set, this time using the class labels, following the standard protocol of~\cite{chen2020simple, grill2020bootstrap}. We expect that with strong feature representations, we should be able to learn an effective classifier from limited training examples. Table~\ref{tab:semi} shows that our compressed models once again outperform their SimCLR and BYOL counterparts.
The largest improvements are observed in the low-data regime: we improve upon the state-of-the-art BYOL by 3.1\% and SimCLR by 1.8\% when using only 1\% of the ImageNet labels. Moreover, these self-supervised representations significantly outperform a fully-supervised ResNet-50 baseline, which overfits severely in this low-data scenario. \paragraph{State-of-the-art comparison} \input{tables/imagenet_linear_eval_sota.tex} Finally, Tab.~\ref{tab:linear_eval_sota} compares against the state-of-the-art for linear evaluation accuracy on ImageNet. We report accuracies for models trained for 1000 epochs (except for MoCo v2~\cite{chen2020improved}, where the authors trained for 800 epochs). Consistent with Tab.~\ref{tab:linear_eval_r50}, we achieve state-of-the-art results with both ResNet-50 and ResNet-50 2x architectures. \TODO{Note that we report our accuracies averaged over three pretraining trials, and observe minimal variance in the final results of each run.} \subsection{Robustness evaluation} \label{sec:experiments_robustness} \label{sec:robust} We take the models and linear classifiers from the previous experiments and evaluate them on a suite of robustness benchmarks. Each benchmark shares the ImageNet label set, meaning that we can evaluate our networks without any modification; we report Top-1 accuracy, where higher is better. In \Cref{tab:robustness}, we compare the robustness of SimCLR and BYOL models trained with and without CEB compression, and at various area lower bound parameters. For essentially every task in the suite of robustness benchmarks, the compressed models outperform the corresponding uncompressed models.
This is what we hypothesized in the SimCLR setting, based on the Lipschitz continuity argument in \Cref{sec:lipschitz}. The ImageNet-A~\cite{hendrycks2021natural} setting is interesting because its adversarial examples are all valid ``ImageNet-style'' images, but in many of them the background texture is unusual for the image class, as we show in \Cref{fig:imageneta}. The SimCLR and BYOL models, whether compressed or not, do very poorly at this task, as seen in \Cref{tab:robustness}. For comparison, in \cite{fischer2020ceb}, the uncompressed ResNet-50 model gets 3.2\% accuracy and the CEB ResNet-50 model gets 5.1\% accuracy (see Table 1 of that work), so the self-supervised models we consider here perform several times worse than what can be achieved by the same architecture in the supervised setting. This leads us to suspect that the way SimCLR and BYOL are trained encourages the models to focus on image texture more than image content, even more so than in fully supervised ImageNet training. With SimCLR's simple cropping augmentation strategy, a given $(x, x')$ pair will frequently have no overlapping pixels at all, and often one or both augmented inputs will primarily contain background rather than object pixels, so the only way for the representations of both inputs to become better aligned is for them to focus mostly on the background pixels. We briefly describe each benchmark below. ImageNet-A~\cite{hendrycks2021natural} (``natural adversarial examples'') consists of images of ImageNet classes on which a ResNet-50 classifier failed, with human verification that the classifier's mistakes were egregious. ImageNet-C~\cite{hendrycks2019benchmarking} applies 15 corruptions to ImageNet images, each at 5 levels of severity.
We report average accuracy over all corruptions and severity levels. ImageNet-R (``ImageNet Renditions'')~\cite{hendrycks2020many} contains naturally occurring distribution changes in image style, camera operation and geographic location. ImageNet-v2~\cite{recht2019imagenet} is a new test set for ImageNet, collected following the original protocol; its authors posit that the collected images are more ``difficult'', and consistently observed accuracy drops across various models. ObjectNet~\cite{barbu2019objectnet} is a more challenging test set for ImageNet, in which the authors control for different viewpoints, backgrounds and rotations. Finally, ImageNet-Vid and YouTube-BB evaluate the robustness of image classifiers to natural perturbations arising in video, using the additional annotations added by~\cite{shankar2019image} to the ImageNet-Vid~\cite{russakovsky2015imagenet} and YouTube-BB~\cite{real2017youtubeboundingboxes} datasets.

\begin{table}
\scriptsize
\caption{Top-1 accuracy on a suite of robustness benchmarks, for models trained for 1000 epochs.}
\label{tab:robustness}
\centering
\begin{tabular}{lccccccc}
\toprule
Method & ImageNet-A & ImageNet-C & ImageNet-R & ImageNet-v2 & ImageNet-Vid & YouTube-BB & ObjectNet \\
\midrule
SimCLR & 1.3 & 35.0 & 18.3 & 57.4 & 63.4 & 57.1 & 18.6 \\
Compressed SimCLR & 1.5 & 36.8 & 19.6 & 59.0 & 65.3 & 58.3 & 20.8 \\
\midrule
BYOL & 1.6 & 42.3 & 24.2 & 62.1 & 67.3 & 61.8 & 23.4 \\
Compressed BYOL & 1.7 & 43.8 & 24.5 & 62.9 & 68.2 & 62.1 & 23.5 \\
Compressed BYOL & 1.9 & 45.1 & 24.0 & 62.8 & 69.0 & 60.7 & 24.2 \\
\bottomrule
\end{tabular}
\end{table}

\begin{table}
\scriptsize
\caption{Top-1 accuracy on a suite of robustness benchmarks, for models trained for 300 epochs while sweeping the crop area lower bound.}
\label{tab:robustness_300}
\centering
\begin{tabular}{lccccccc}
\toprule
Method (crop area lower bound) & ImageNet-A & ImageNet-C & ImageNet-R & ImageNet-v2 & ImageNet-Vid & YouTube-BB & ObjectNet \\
\midrule
\textit{SimCLR} \\
0.08 & 1.0 & 33.2 & 17.9 & 56.2 & 60.4 & 55.4 & 17.9 \\
0.16 & 1.5 & 34.2 & 19.3 & 56.1 & 62.7 & 57.4 & 18.1 \\
0.32 & 1.1 & 33.6 & 20.6 & 55.8 & 62.4 & 58.1 & 17.9 \\
0.5 & 1.1 & 29.6 & 21.3 & 51.2 & 57.9 & 60.2 & 14.4 \\
\textit{Compressed SimCLR} \\
0.08 & 1.0 & 35.7 & 19.5 & 57.2 & 63.0 & 56.7 & 19.3 \\
0.16 & 1.3 & 35.6 & 20.9 & 57.0 & 64.3 & 58.5 & 19.4 \\
0.32 & 1.2 & 35.3 & 22.0 & 56.6 & 63.5 & 59.2 & 19.0 \\
0.5 & 1.3 & 31.1 & 22.4 & 52.5 & 60.8 & 59.2 & 15.0 \\
\bottomrule
\end{tabular}
\end{table}

\subsection{Ablations}
\label{sec:experiments_ablations}
\begin{table}
\caption{BYOL ablation: the effect of each of our modifications, measured by linear evaluation accuracy.}
\label{tab:byol_ab}
\centering
\begin{tabular}{ll}
\toprule
Method & acc. \\
\midrule
BYOL & 72.5 \\
BYOL 5x loss & 72.9 \\
BYOL 5x loss + 256-d linear layer & 72.8 \\
BYOL 5x loss + 256-d linear layer + sampling & 72.9 \\
Compressed BYOL & 73.6 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Sweeping the crop area range lower bound (linear evaluation accuracy).}
\label{tab:area_range_lower_bound}
\centering
\begin{tabular}{lllll}
\toprule
Method & 8\% & 16\% & 25\% & 50\%\\
\midrule
SimCLR & 68.7 & 68.6 & 67.6 & 61.4 \\
Compressed SimCLR & 70.0 & 70.0 & 68.9 & 64.3 \\
\bottomrule
\end{tabular}
\end{table}
As discussed in the InfoMin work~\cite{tian2020makes}, augmentation strength controls how much information is shared between different views. Changing the crop area lower bound is one way to constrain the information shared between views, which is why this hyperparameter is important. The loss function is another way to constrain this information, which is why our compressed models are less sensitive to this hyperparameter and perform better at high lower bounds.
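To make the role of this hyperparameter concrete, the following is a minimal sketch of Inception-style random crop sampling with a configurable area lower bound (pure Python; the function name, retry count, and fallback are illustrative choices, not taken from our training code):

```python
import math
import random

def random_crop_params(height, width, area_lb, area_ub=1.0,
                       ratio=(3 / 4, 4 / 3), rng=random.Random(0)):
    """Sample (top, left, h, w) for an Inception-style random resized crop
    whose area fraction lies (approximately) in [area_lb, area_ub]."""
    area = height * width
    for _ in range(10):  # retry if the sampled box does not fit
        target_area = rng.uniform(area_lb, area_ub) * area
        log_lo, log_hi = math.log(ratio[0]), math.log(ratio[1])
        aspect = math.exp(rng.uniform(log_lo, log_hi))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            top = rng.randint(0, height - h)
            left = rng.randint(0, width - w)
            return top, left, h, w
    return 0, 0, height, width  # fallback: return the whole image

top, left, h, w = random_crop_params(224, 224, area_lb=0.08)
# Up to integer rounding, the crop covers at least the requested fraction.
assert 0.05 <= (h * w) / (224 * 224) <= 1.0
```

Raising `area_lb` forces the two sampled views to cover larger, more overlapping regions, directly limiting how much view-specific information the encoder can exploit.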
\paragraph{Hyperparameters $\beta$, $\kappa_e$, $\kappa_b$} By default, we set $\beta=1$, $\kappa_e=1024$, and $\kappa_b=10$. \begin{table} \caption{Sweeping $\beta$, fixing $\kappa_e=1024$ and $\kappa_b=10$.} \label{tab:beta_sweep} \centering \begin{tabular}{lllll} \toprule Method & $\beta=0$ & $\beta=0.1$ & $\beta=1$ & $\beta=2$ \\ \midrule Compressed SimCLR \\ Compressed BYOL \\ \bottomrule \end{tabular} \end{table} \section{Methods} \label{sec:methods} In this section, we describe the components that allow us to make distributional, compressible versions of SimCLR and BYOL. This involves switching to the Conditional Entropy Bottleneck (CEB) objective, noting that the von Mises-Fisher distribution is the exponential-family distribution that corresponds to the cosine similarity loss function used by SimCLR and BYOL, and carefully identifying the random variables and variational distributions needed for CEB in SimCLR and BYOL. We also note that SimCLR and CEB together encourage learning models with a smaller Lipschitz constant, although they do not explicitly enforce that the Lipschitz constant be small. \subsection{The Conditional Entropy Bottleneck} \label{sec:ceb} In order to test our hypothesis that compression can improve visual representation quality, we need to be able to measure and control the amount of compression in our visual representations. To achieve this, we use the Conditional Entropy Bottleneck (CEB)~\citep{fischer2020conditional}, which has been shown to improve both test accuracy and robustness in various settings for classification models~\citep{fischer2020ceb}. CEB is an objective function in the Information Bottleneck (IB)~\citep{tishby2000information} family.
Given an observation $X$, a prediction target $Y$, and a learned representation $Z$ of $X$, CEB can be written as: \begin{align} CEB &\equiv \min_Z \beta I(X;Z|Y) - I(Y;Z) \\ &= \min_Z \beta (H(Z) - H(Z|X) - H(Z) + H(Z|Y)) - H(Y) + H(Y|Z) \\ &= \min_Z \beta(-H(Z|X) + H(Z|Y)) + H(Y|Z) \label{eq:ceb} \end{align} where $H(\cdot)$ and $H(\cdot|\cdot)$ denote entropy and conditional entropy respectively. We can drop the $H(Y)$ term because it is constant with respect to $Z$. $I(Y;Z)$ is the useful information: the information relevant to the prediction target $Y$. $I(X;Z|Y)$ is the \emph{residual information} that $Z$ captures about $X$ when we already know $Y$, which we aim to minimize. Compression strength increases as $\beta$ increases. We define $e(z|x)$ to be the true encoder distribution, from which $z$ is sampled; $b(z|y)$ to be a variational approximation conditioned on $y$; and $d(y|z)$ to be the decoder distribution (also a variational approximation), which predicts $y$ conditioned on $z$. As shown in~\cite{fischer2020conditional}, CEB can be upper-bounded variationally with these distributions: \begin{align} vCEB &\equiv \min_{e(z|x),b(z|y),d(y|z)} \mathbb{E}_{x,y \sim p(x,y), z \sim e(z|x)} \beta(\log e(z|x) - \log b(z|y)) - \log d(y|z) \label{eq:vceb} \end{align} There is no explicit requirement that all three distributions have learned parameters. At one limit, a model's parameters can be restricted to any one of the three distributions; at the other limit, all three distributions can have learned parameters. If $e(z|x)$ has learned parameters, its distributional form may be restricted, as we must be able to take gradients through the $z$ samples. For example, $e(z|x)$ could not generally be a mixture distribution: sampling from a mixture has a discrete component, and we cannot easily take gradients through discrete samples.
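The decomposition in Eq.~\eqref{eq:ceb} relies on the identity $I(X;Z|Y) = H(Z|Y) - H(Z|X)$, which holds because $Z$ is generated from $X$ alone, so that $H(Z|X,Y) = H(Z|X)$. This is easy to verify numerically on a small discrete toy problem; the distributions below are arbitrary illustrative choices, not part of the method:

```python
import itertools
import math

# Arbitrary illustrative distributions (binary X, Y, Z).
p_x = [0.3, 0.7]
p_y_x = [[0.9, 0.1], [0.2, 0.8]]   # p(y|x): the "target view"
e_z_x = [[0.6, 0.4], [0.1, 0.9]]   # stochastic encoder e(z|x)

# Joint p(x, y, z) under the Markov chain Z <- X -> Y.
joint = {(x, y, z): p_x[x] * p_y_x[x][y] * e_z_x[x][z]
         for x, y, z in itertools.product(range(2), repeat=3)}

def H_cond(group_fn):
    """H(Z | G) in nats, where group_fn maps (x, y, z) to the key G."""
    pg, pgz = {}, {}
    for (x, y, z), v in joint.items():
        g = group_fn(x, y, z)
        pg[g] = pg.get(g, 0.0) + v
        pgz[(g, z)] = pgz.get((g, z), 0.0) + v
    return -sum(v * math.log(v / pg[g])
                for (g, z), v in pgz.items() if v > 0)

H_z_x  = H_cond(lambda x, y, z: x)         # H(Z|X)
H_z_y  = H_cond(lambda x, y, z: y)         # H(Z|Y)
H_z_xy = H_cond(lambda x, y, z: (x, y))    # H(Z|X,Y)

# Markov structure: H(Z|X,Y) = H(Z|X), so the residual information
# I(X;Z|Y) = H(Z|Y) - H(Z|X,Y) reduces to H(Z|Y) - H(Z|X), i.e. the
# compression term of Eq. (eq:ceb).
residual = H_z_y - H_z_xy
assert abs(H_z_xy - H_z_x) < 1e-9
assert abs(residual - (H_z_y - H_z_x)) < 1e-9
print(f"I(X;Z|Y) = {residual:.4f} nats")
```

The residual term is strictly positive here because $Z$ still carries information about $X$ beyond what $Y$ explains; driving it toward zero is exactly what the $\beta$-weighted term in Eq.~\eqref{eq:ceb} does.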
The only requirement on the distributional form of $b(z|y)$ and $d(y|z)$ is that we must be able to take gradients through their log probability functions. \paragraph{InfoNCE.} As shown in~\cite{fischer2020conditional}, besides directly parameterizing $d(y|z)$, it is possible to reuse $b(z|y)$ to obtain a variational bound on the $H(Y|Z)$ term. Since $I(Y;Z) = H(Y) - H(Y|Z)$ and $H(Y)$ is constant with respect to $Z$: \begin{align} \label{eq:infonce} H(Y|Z) \leq -\mathbb{E}_{x,y \sim p(x,y), z \sim e(z|x)} \log \frac{b(z|y)}{\sum_{k=1}^K b(z|y_k)} \end{align} where $K$ is the number of examples in a minibatch. Eq.~\eqref{eq:infonce} is also known as the contrastive \textit{InfoNCE} bound~\cite{oord2018representation,vmibounds}. The inner term, \begin{align} d(y|z) \equiv \frac{b(z|y)}{\sum_{k=1}^K b(z|y_k)}, \label{eq:catgen} \end{align} is a valid variational approximation of the true but unknown $p(y|z)$. Fischer~\cite{fischer2020conditional} calls Eq.~\eqref{eq:catgen} the \textit{CatGen} decoder, because it is a categorical distribution over the minibatch that approximates the generative decoder distribution. \subsection{C-SimCLR: Compressed SimCLR} \label{sec:simclr} Many contrastive visual representation methods use the InfoNCE bound \cite{oord2018representation} to capture the shared context between different views of an image as a self-supervised objective \cite{chen2020simple,chen2020big,he2020momentum,chen2020exploring,hjelm2018learning}. In this work, we show how to compress the SimCLR \cite{chen2020simple} model, but the method we discuss is generally applicable to other InfoNCE-based models. SimCLR applies randomized augmentations to an image to create two different views, $x$ and $y$ (which we also refer to as $x'$), and encodes both of them with a shared encoder, producing representations $r_x$ and $r_y$. Both $r_x$ and $r_y$ are $l_2$-normalized.
The SimCLR version of the InfoNCE objective has the following form: \begin{align} \label{eq:simclr_nce} L_{NCE}(r_x, r_y) = -\log \frac{e^{\frac{1}{\tau}r_y^Tr_x}}{\sum_{k=1}^Ke^{\frac{1}{\tau}r_{y_k}^Tr_x}} \end{align} where $\tau$ is a temperature parameter and $K$ is the number of views in a minibatch. SimCLR further makes its InfoNCE objective \textit{bidirectional}, such that the final objective becomes \begin{align} L_{NCE}(r_x, r_y) + L_{NCE}(r_y, r_x) = -\log \frac{e^{\frac{1}{\tau}r_y^Tr_x}}{\sum_{k=1}^Ke^{\frac{1}{\tau}r_{y_k}^Tr_x}} - \log \frac{e^{\frac{1}{\tau}r_x^Tr_y}}{\sum_{k=1}^Ke^{\frac{1}{\tau}r_{x_k}^Tr_y}} \end{align} We can observe the following correspondence: $\exp(\frac{1}{\tau} r_y^Tr_x)$ in Eq.~\eqref{eq:simclr_nce} corresponds to the unnormalized $b(z|y)$ in Eq.~\eqref{eq:infonce}; $e(\cdot|x)$ generates $z=r_x$, whilst $r_y$ and $r_{y_k}$ are the distribution parameters of $b(\cdot|y)$ and $b(\cdot|y_k)$ respectively; and $e(\cdot|x)$ and $b(\cdot|y)$ share model parameters. \paragraph{von Mises-Fisher Distributional Representations.} \label{sec:vmf} The cosine-similarity-based loss (Eq.~\eqref{eq:simclr_nce}) is commonly used in contrastive learning, and can be connected to choosing the von Mises-Fisher (vMF) distribution for $e(\cdot|x)$ and $b(\cdot|y)$ \cite{hasnat2017mises,wang2020understanding}. The vMF is a distribution on the $(n-1)$-dimensional hyper-sphere, with probability density function $f_n(z, \mu, \kappa) = C_n(\kappa)e^{\kappa \mu^Tz}$, where $\mu$ and $\kappa$ denote the mean direction and the concentration parameter respectively. We assume $\kappa$ is a constant. The normalization term $C_n(\kappa) = \frac{\kappa^{n/2-1}}{(2\pi)^{n/2} I_{n/2-1}(\kappa)}$ is a function of $\kappa$ alone, where $I_v$ denotes the modified Bessel function of the first kind at order $v$.
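As an illustrative sketch (not our training code; the embeddings below are random stand-ins for encoder outputs), the bidirectional objective can be written in a few lines of NumPy. The final assertion also previews why unnormalized densities suffice: shifting every logit by a constant, such as a log-normalizer $\log C_n(\kappa)$, leaves the loss unchanged.

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def log_softmax(a):
    a = a - a.max(axis=1, keepdims=True)  # numerical stability
    return a - np.log(np.exp(a).sum(axis=1, keepdims=True))

def info_nce(r_x, r_y, tau=0.1, shift=0.0):
    """Mean L_NCE(r_x, r_y) over a batch: row i of r_x is matched with
    row i of r_y against all other rows. `shift` adds a constant to every
    logit (e.g. a log-normalizer), which cancels in the softmax."""
    logits = r_x @ r_y.T / tau + shift            # [K, K] similarities
    return -np.mean(np.diag(log_softmax(logits)))

rng = np.random.default_rng(0)
r_x = l2_normalize(rng.normal(size=(8, 16)))                  # view 1
r_y = l2_normalize(r_x + 0.1 * rng.normal(size=(8, 16)))      # view 2
loss = info_nce(r_x, r_y) + info_nce(r_y, r_x)                # bidirectional

# Matched pairs score far better than shuffled ones...
assert info_nce(r_x, r_y) < info_nce(r_x, r_y[::-1])
# ...and a constant per-logit shift does not change the loss.
assert np.isclose(info_nce(r_x, r_y), info_nce(r_x, r_y, shift=-3.21))
```

The illustrative temperature `tau=0.1` and batch size are placeholders; in the distributional view developed next, $1/\tau$ plays the role of the concentration $\kappa_b$.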
By setting the mean direction $\mu$ to $r_y$, the concentration $\kappa_b$ of $b(\cdot|y)$ to $1/\tau$, and $z$ to $r_x$, we can connect the SimCLR objective (Eq.~\eqref{eq:simclr_nce}) to the distributional form of InfoNCE (Eq.~\eqref{eq:infonce}): \begin{align} \frac{e^{\frac{1}{\tau}r_y^Tr_x}}{\sum_{k=1}^Ke^{\frac{1}{\tau}r_{y_k}^Tr_x}} = \frac{C_n(\kappa_b)e^{\kappa_b r_y^Tr_x}}{\sum_{k=1}^K C_n(\kappa_b) e^{\kappa_b r_{y_k}^Tr_x}} = \frac{f_n(r_x, r_y, \kappa_b)}{\sum_{k=1}^K f_n(r_x, r_{y_k}, \kappa_b)} = \frac{b(r_x|y)}{\sum_{k=1}^K b(r_x|y_k)} \end{align} $z=r_x$ is a deterministic unit-length vector, so we can view $e(\cdot|x)$ as a spherical delta distribution, which is equivalent to a vMF with mean direction $r_x$ and $\kappa_e \rightarrow \infty$. We can further extend the forward encoder to a finite $\kappa_e$, which results in a stochastic $z$. This puts SimCLR in a distributional form with explicit distributions $e(\cdot|x)$ and $b(\cdot|y)$, satisfying the requirements of CEB discussed in Sec.~\ref{sec:ceb}. \begin{figure} \centering \includegraphics[keepaspectratio, width=0.8\textwidth]{fig/SimCLR_fig_v3.pdf} \caption{C-SimCLR explicitly defines encoder distributions $e(\cdot|x)$ and $b(\cdot|y)$, where $x$ and $y$ are two augmented views of an image. $y$ is also referred to as $x'$. The upper and lower encoder outputs are used to specify the mean directions of $e$ and $b$, and the two encoders share parameters. $r_x, r_y$ are $l_2$-normalized. Our modifications to SimCLR are highlighted in blue. No new parameters are added.} \label{fig:ceb_simclr} \end{figure} \paragraph{Compressing SimCLR with Bidirectional CEB.} \Cref{fig:ceb_simclr} illustrates the Compressed SimCLR (C-SimCLR) model. By switching to CEB, the model learns a compressed representation of a view $X$ that only preserves information relevant to predicting a different view $Y$. As can be seen in Eq.~\eqref{eq:ceb}, the CEB objective treats $X$ and $Y$ asymmetrically.
However, as shown in~\cite{fischer2020conditional}, it is possible to learn a single representation $Z$ of both $X$ and $Y$ by having the forward and backward encoders act as variational approximations of each other: \begin{align} CEB_{\text{bidir}} &\equiv \min_Z \beta_X I(X;Z|Y) - I(Y;Z) + \beta_Y I(Y;Z|X) - I(X;Z) \\ &\equiv \min_Z \beta_X(-H(Z|X) + H(Z|Y)) + H(Y|Z) \\ &~~~~~~~~~~~~ + \beta_Y(-H(Z|Y) + H(Z|X)) + H(X|Z) \nonumber \\ &\leq \min_{e(\cdot|\cdot),b(\cdot|\cdot),c(\cdot|\cdot),d(\cdot|\cdot)} \mathbb{E}_{x,y \sim p(x,y)} \Big[ \label{eq:ceb_bidir} \\ &~~~~~~~~~~~~~~~~~~ \mathbb{E}_{z_x \sim e(z_x|x)} \big[ \beta_X(\log e(z_x|x) - \log b(z_x|y)) - \log d(y|z_x) \big] \nonumber \\ &~~~~~~~~~~~~~~~~~~ + \mathbb{E}_{z_y \sim e(z_y|y)} \big[ \beta_Y(\log e(z_y|y) - \log b(z_y|x)) - \log c(x|z_y) \big] \Big] \nonumber \end{align} where $d(\cdot|\cdot)$ and $c(\cdot|\cdot)$ are the InfoNCE variational distributions of $b(\cdot|\cdot)$ and $e(\cdot|\cdot)$ respectively. $e$ and $b$ use the same encoder to parameterize the mean direction in the SimCLR setting. Since SimCLR is trained with a bidirectional InfoNCE objective, Eq.~\eqref{eq:ceb_bidir} gives an easy way to compress its learned representation. As in SimCLR, the deterministic $h_x$ (in Fig.~\ref{fig:ceb_simclr}) is still the representation used for downstream classification tasks. \subsection{C-BYOL: Compressed BYOL} \label{sec:byol} \begin{figure} \centering \includegraphics[keepaspectratio, width=1.0\textwidth]{fig/CBYOL_fig_v2.pdf} \caption{C-BYOL. The upper online encoder path takes an augmented view $x$ as input and produces $e(\cdot|x)$ and $d(\cdot|z)$. The lower two paths use the same target encoder (shaded), which is a moving average of the online encoder (Conv + Projection). The target encoder maps $x$ and another view $x'$ to $r_t$ and $r_t'$. $sg(r_t)$ ($sg$: stop gradients) is our target $y$. $y$ leads to $b(\cdot|y)$. $sg(r_t')$ is our perturbed target $y'$.
$r_t, r_t', \mu_e, \mu_b, \hat{y}$ are $l_2$-normalized. These yield the components required by CEB. We highlight changes to BYOL in blue.} \label{fig:byol} \end{figure} In this section we describe how to modify BYOL to make it compatible with CEB, as summarized in Fig.~\ref{fig:byol}. BYOL~\cite{grill2020bootstrap} learns an online encoder that takes $x$, an augmented view of a given image, as input and predicts the outputs of a target encoder which encodes $x'$, a different augmented view of the same image. The target encoder's parameters are updated not by gradients but as an exponential moving average of the online encoder's parameters. The loss function is simply the mean squared error which, since both the online encoder output $\mu_e$ and the target encoder output $y'$ are $l_2$-normalized, reduces to an affine function of their cosine similarity: \begin{align} L_{byol} = ||\mu_e - y'||^2_2 = \mu_e^T\mu_e + {y'}^T{y'} - 2 \mu_e^Ty' = 2 - 2 \mu_e^Ty' \label{eq:byol_loss} \end{align} This iterative ``latent bootstrapping'' allows BYOL to learn a view-invariant representation. In contrast to SimCLR, BYOL does not rely on other samples in a batch and does not optimize the InfoNCE bound. It is a simple regression task: given input $x$, predict $y'$. To make BYOL CEB-compatible, we need to identify the random variables $X$, $Y$, $Z$, define encoder distributions $e(z|x)$ and $b(z|y)$, and define the decoder distribution $d(y|z)$ (see \Cref{eq:vceb}). We define $e(z|x)$ to be a vMF distribution parameterized by $\mu_e$, and sample $z$ from $e(z|x)$: \begin{align} e(z|x) = C_n(\kappa_e)e^{\kappa_e z^T\mu_e} \label{eq:byol_ezx} \end{align} We use the target encoder to encode $x$ and output $r_t$, an $l_2$-normalized vector. We choose $r_t$ to be $y$. We then add a 2-layer MLP on top of $y$ and $l_2$-normalize the output, which gives $\mu_b$.
We denote this transformation as $\mu_b=m(y)$ and define $b(z|y)$ to be the following vMF parameterized by $\mu_b$: \begin{align} b(z|y) = C_n(\kappa_b)e^{\kappa_b z^T\mu_b} \label{eq:byol_bzy} \end{align} For $d(y|z)$, we add a linear transformation on $z$ with $l_2$-normalization, $\hat{y}=l(z)$, and define a vMF parameterized by $\hat{y}$: \begin{align} d(y|z) = C_n(\kappa_d)e^{\kappa_d y^T\hat{y}} \label{eq:byol_dyz} \end{align} In the deterministic case where $z$ is not sampled, this corresponds to adding a linear layer with $l_2$-normalization on $\mu_e$, which changes neither the model capacity nor the empirical performance. In principle, we can use any stochastic function of $Z$ to generate $Y$. In our implementation, we replace the generative decoder $\log d(y|z)$ with $\log d(y'|z)$, where we use the target encoder to encode $x'$ and output $y'$. Given that $X \rightarrow X'$ is a stochastic transformation and both $X$ and $X'$ go through the same target encoder function, $Y \rightarrow Y'$ is also a stochastic transformation. $d(y'|z)$ can thus be viewed as a stochastically perturbed version of $d(y|z)$. Our vCEB objective becomes \begin{align} L_{cbyol}(x, x') = \beta(\log e(z|x) - \log b(z|y)) - \log d(y'|z). \end{align} We empirically observed the best results with this design choice. $d(y'|z)$ can be directly connected to the standard BYOL regression objective: when $\kappa_d=2$, $-\log d(y'|z) = -\kappa_d y'^T\hat{y} -\log C_n(\kappa_d)$ is equivalent to Eq.~\eqref{eq:byol_loss} up to constants. Although it may seem that, compared to BYOL, we additionally apply the target encoder to $x$, this does not increase the computational cost in practice. As in BYOL, the learning objective is applied symmetrically in our implementation: $L_{cbyol}(x, x') + L_{cbyol}(x', x)$. The target encoder therefore has to be applied to both $x$ and $x'$ in both BYOL and C-BYOL.
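The stated equivalence between the $\kappa_d = 2$ vMF decoder and the BYOL regression loss is easy to verify numerically. The following is only a sanity check on random unit vectors (not training code), taking the deterministic case where $\hat{y} = \mu_e$:

```python
import numpy as np

rng = np.random.default_rng(0)
l2n = lambda v: v / np.linalg.norm(v)

mu_e = l2n(rng.normal(size=16))  # online prediction y_hat (identity l, deterministic z)
y_p = l2n(rng.normal(size=16))   # perturbed target y' = sg(r_t')

# Squared error between unit vectors is affine in their cosine similarity.
byol_mse = float(np.sum((mu_e - y_p) ** 2))

# vMF decoder NLL with kappa_d = 2, dropping the constant log C_n(kappa_d).
kappa_d = 2.0
decoder_nll = float(-kappa_d * (y_p @ mu_e))
```

The decoder NLL differs from the BYOL loss only by the constant $2$ (plus the dropped log-normalizer), so the two objectives have identical gradients.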
Finally, note that, as in BYOL, $h$ (Fig.~\ref{fig:byol}) is the deterministic representation used for downstream tasks. \subsection{Lipschitz Continuity and Compression} \label{sec:lipschitz} Lipschitz continuity provides a way of measuring how smooth a function is. For some function $f$ and a distance measure $D(f(x_1), f(x_2))$, Lipschitz continuity defines an upper bound on how quickly $f$ can change as $x$ changes: \begin{align} L ||\Delta x|| \geq D(f(x), f(x + \Delta x)), \end{align} where $L$ is the Lipschitz constant, $\Delta x$ is the vector change in $x$, and $||\Delta x|| > 0$. If we define $f(x)$ to be our encoder distribution $e(z|x)$ (which is a vMF and always positive), and the distance measure, $D$, to be the absolute difference of the logs of the functions, we obtain, for each $z$, a lower bound on the Lipschitz constant: \begin{align} \label{eq:encoder_lipschitz_main} L(z) \geq \frac{1}{||\Delta x||} |\log e(z|x) - \log e(z|x + \Delta x)| \end{align} As detailed in Sec.~\ref{sec:complete_lipschitz}, by taking expectations with respect to $z$, we can obtain a lower bound on the encoder \emph{distribution}'s squared Lipschitz constant:\footnote{% Note that by taking an expectation we get a KL divergence, which violates the triangle inequality, even though we started from a valid distance metric.
Squaring the Lipschitz constant addresses this in the common case where the $\operatorname{KL}$ divergence grows quadratically in $||\Delta x||$, as detailed in Section~\ref{sec:complete_lipschitz}. } \begin{align} \label{eq:lipschitz_log_bound1} L^2 \geq \frac{1}{||\Delta x||^2} \max \Big( \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ],\, \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \Big) \end{align} To guarantee smoothness of the encoder distribution, we would like to have an upper bound on $L$, rather than a lower bound. Minimizing a lower bound does not directly yield any optimality guarantees relative to the bounded quantity. However, in this case, minimizing the symmetric $\operatorname{KL}$ below is \emph{consistent} with learning a smoother encoder function: \begin{align} \label{eq:min_symkl1} \inf_{e(z|\cdot)} \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ] + \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \end{align} By \emph{consistent}, we mean that, if we could minimize this symmetric KL at every pair $(x, x + \Delta x)$ in the input domain, we would have smoothed the model. In practice, for high-dimensional input domains, that is not possible, but minimizing Eq.~\eqref{eq:min_symkl1} at a subset of the input domain still improves the model's smoothness, at least at that subset. The minimization in Eq.~\eqref{eq:min_symkl1} corresponds almost exactly to the CEB compression term in the bidirectional SimCLR models. We define $y = x + \Delta x$. At samples of the augmented observed variables, $X, Y$, the C-SimCLR models minimize upper bounds on both of the following residual information terms: \begin{align} \label{eq:bidir_residual1} I(X;Z|Y) + I(Y;Z|X) \leq \mathbb{E}_{x,y \sim p(x,y)} \operatorname{KL}[ e(z|x) || e(z|y) ] + \operatorname{KL}[ e(z|y) || e(z|x) ] \end{align} The only caveat to this is that we use $b(z|y)$ instead of $e(z|y)$ in C-SimCLR. $b$ and $e$ share weights but have different $\kappa$ values in their vMF distributions. 
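As an aside supporting this point, for vMF distributions with a shared concentration $\kappa$ the KL divergence has a simple closed form (a standard vMF identity, not derived in the text; $A_n(\kappa) = I_{n/2}(\kappa)/I_{n/2-1}(\kappa)$ denotes the mean resultant length, so that $\mathbb{E}_{z \sim f_n(\cdot, \mu_1, \kappa)}[z] = A_n(\kappa)\mu_1$):
\begin{align}
\operatorname{KL}[f_n(z, \mu_1, \kappa) \,\|\, f_n(z, \mu_2, \kappa)]
= \kappa (\mu_1 - \mu_2)^T \mathbb{E}_{z \sim f_n(\cdot, \mu_1, \kappa)}[z]
= \kappa A_n(\kappa) \left(1 - \mu_1^T \mu_2\right),
\end{align}
since the log-normalizers $C_n(\kappa)$ cancel. Because $1 - \mu_1^T\mu_2 = \tfrac{1}{2}||\mu_1 - \mu_2||^2$ for unit vectors, the symmetric $\operatorname{KL}$ in this shared-$\kappa$ case is exactly quadratic in the distance between the mean directions, and scaling $\kappa$ rescales the attainable $\operatorname{KL}$ values without changing where they are minimized.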
However, these are hyperparameters, so they are not part of the trained model parameters. They simply change the minimum attainable $\operatorname{KL}$s in Eq.~\eqref{eq:bidir_residual1}, thereby adjusting the minimum achievable Lipschitz constant for the models (see Sec.~\ref{sec:complete_lipschitz}). Directly minimizing \Cref{eq:lipschitz_log_bound1} would require normalizing the symmetric $\operatorname{KL}$ per-example by $||\Delta x||^2$. The symmetric CEB loss does not do this. However, the residual information terms in \Cref{eq:bidir_residual1} are multiplied by a hyperparameter $\beta \leq 1$. Under a simplifying assumption that the $||\Delta x||$ values generated by the sampling procedure are typically of similar magnitude, we can absorb the average $\frac{1}{||\Delta x||^2}$ into the hyperparameter $\beta$. We note that in practice using per-example values of $||\Delta x||^2$ would encourage the optimization process to smooth the model more strongly at observed $(x, \Delta x)$ pairs where it is least smooth, but we leave such experiments to future work. Due to Eq.~\eqref{eq:bidir_residual1}, we should expect the C-SimCLR models to be locally smoother around the observed data points. We reiterate, though, that this is not a proof of increased global Lipschitz smoothness, as we are minimizing a lower bound on the Lipschitz constant, rather than minimizing an upper bound. It is still theoretically possible to learn highly non-smooth functions using CEB in this manner. It would be surprising, however, if the C-SimCLR models were somehow \emph{less} smooth than the corresponding SimCLR models. The Lipschitz continuity property is closely related to model robustness to perturbations \cite{bruna2013invariant}, including the extreme case of robustness to adversarial examples \cite{weng2018evaluating, fazlyab2019efficient, yang2020closer}.
Therefore, we would expect to see that the C-SimCLR models are more robust than SimCLR models on common robustness benchmarks. It is more difficult to make the same theoretical argument for the C-BYOL models, as they do not use exactly the same encoder for both $x$ and $y$. Thus, the equivalent conditional information terms from Eq.~\eqref{eq:bidir_residual1} are not directly minimizing a lower bound on the Lipschitz constant of the encoder. Nevertheless, we empirically explore the impact of CEB on both SimCLR and BYOL models next in Sec.~\ref{sec:experiments}. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{See \Cref{sec:conclusion}.} \item Did you discuss any potential negative societal impacts of your work?
\answerYes{See \Cref{sec:conclusion}.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{See \Cref{sec:complete_lipschitz}} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerNo{As mentioned in Section~\ref{sec:experiments_setup}, we will release code and models upon acceptance. All data is already public.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{See \Cref{sec:experiments_setup} and \Cref{sec:impl_detail}} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{See \Cref{sec:experiments_setup}. As in \cite{grill2020bootstrap}, we report our accuracies averaged over three pretraining trials for the main results but omit error bars in reporting for readability because we found the maximum absolute difference between the best and worst trials to be less than 0.15 for 1000-epoch models and 0.27 for 300 epoch models.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{As described in \Cref{sec:experiments_setup}, we used Google Cloud TPU v3 accelerators, with 64 cores per experiment.} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? 
\answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \section{Experimental Evaluation} \label{sec:experiments} We first describe our experimental set-up in Sec.~\ref{sec:experiments_setup}, before evaluating the image representations learned by our self-supervised models in linear evaluation settings in Sec.~\ref{sec:experiments_imagenet}. We then analyse the robustness and generalization of our self-supervised representations by evaluating model accuracy across a wide range of domain and distributional shifts in Sec.~\ref{sec:experiments_robustness}. Finally, we analyse the effect of compression strength in Sec.~\ref{sec:experiments_ablations}. Additional experiments and ablations can be found in the Appendix. \subsection{Experimental Set-up} \label{sec:experiments_setup} \paragraph{Implementation details.} Our implementation of SimCLR, BYOL, and their compressed versions is based on the public implementation of SimCLR~\cite{chen2020simple}. Our implementation consistently reproduces BYOL results from \cite{grill2020bootstrap} and outperforms the original SimCLR, as detailed in Sec.~\ref{sec:impl_detail}.
We use the same set of image augmentations as in BYOL \cite{grill2020bootstrap} for both BYOL and SimCLR, and also use BYOL's (4096, 256) two-layer projection head for both methods. Following SimCLR and BYOL, we use the LARS optimizer~\cite{you2017large} with a cosine decay learning rate schedule \cite{loshchilov2016sgdr} over 1000 epochs with a warm-up period, as detailed in Sec.~\ref{sec:detail_training}. For ablation experiments we train for 300 epochs instead. As in SimCLR and BYOL, we use a batch size of 4096 split over 64 Cloud TPU v3 cores. Except for ablation studies of compression strength, $\beta$ is set to $1.0$ for both C-SimCLR and C-BYOL. We follow SimCLR and BYOL in their hyperparameter choices unless otherwise stated, and provide exhaustive details in Sec.~\ref{sec:impl_detail}. Pseudocode can be found in Sec.~\ref{sec:pseudocode}. \paragraph{Evaluation protocol.} We assess the performance of representations pretrained on the ImageNet training set \cite{russakovsky2015imagenet} without using any labels. We then train a linear classifier on top of the frozen representation on different labeled datasets. The final performance metric is the accuracy of these classifiers. As our approach builds on SimCLR~\cite{chen2020big} and BYOL~\cite{grill2020bootstrap}, we follow the same evaluation protocols. Further details are in Sec.~\ref{sec:detail_eval}. \subsection{Linear Evaluation of Self-supervised Representations} \label{sec:experiments_imagenet} \paragraph{Linear evaluation on ImageNet.} We first evaluate the representations learned by our models by training a linear classifier on top of frozen features on the ImageNet training set, following standard practice~\cite{chen2020simple, grill2020bootstrap, kolesnikov2019revisiting, kornblith2019better}.
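The linear evaluation protocol amounts to multinomial logistic regression on frozen features. The following NumPy stand-in is purely illustrative (the actual protocol uses SGD at ImageNet scale, and all names here are ours):

```python
import numpy as np

def linear_probe(feats, labels, n_classes, steps=200, lr=0.5):
    """Fit a linear classifier on frozen features by full-batch softmax regression.

    feats are treated as fixed; only the classifier weights W, b are learned.
    """
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        g = (p - onehot) / n                                    # cross-entropy gradient
        W -= lr * (feats.T @ g)
        b -= lr * g.sum(axis=0)
    return W, b

def accuracy(feats, labels, W, b):
    return float(((feats @ W + b).argmax(axis=1) == labels).mean())
```

The reported metric is simply `accuracy` of the fitted classifier on the evaluation split of interest.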
As shown in \Cref{tab:linear_eval_r50}, our compressed objectives provide strong improvements to state-of-the-art SimCLR~\cite{chen2020simple} and BYOL~\cite{grill2020bootstrap} models across different ResNet architectures \cite{he2016deep} of varying widths (and thus number of parameters) \cite{zagoruyko2016wide}. Our reproduction of the SimCLR baseline (70.7\% top-1 accuracy) outperforms that of the original paper (69.3\%). Our implementation of BYOL, which obtains a mean Top-1 accuracy of 74.2\% (averaged over three trials) matches that of~\cite{grill2020bootstrap} within a standard deviation. Current self-supervised methods benefit from longer training schedules~\cite{chen2020simple, chen2020big, grill2020bootstrap, chen2020exploring,he2020momentum}. \Cref{tab:linear_eval_r50} shows that our improvements remain consistent for both 300 epochs, and the longer 1000 epoch schedule which achieves the best results. In addition to the Top-1 and Top-5 accuracies, we also compute the Brier score~\cite{brier1950verification} which measures model calibration. Similar to the predictive accuracy, we observe that our compressed models obtain consistent improvements. \input{tables/imagenet_linear_eval.tex} \paragraph{Learning with a few labels on ImageNet.} \input{tables/imagenet_semi_supervised.tex} After self-supervised pretraining on ImageNet, we learn a linear classifier on a small subset (1\% or 10\%) of the ImageNet training set, using the class labels this time, following the standard protocol of~\cite{chen2020simple, grill2020bootstrap}. We expect that with strong feature representations, we should be able to learn an effective classifier with limited training examples. \Cref{tab:semi} shows that the compressed models once again outperform the SimCLR and BYOL counterparts. The largest improvements are observed in the low-data regime, where we improve upon the state-of-the-art BYOL by 5.1\% and SimCLR by 1.8\%, when using only 1\% of the ImageNet labels. 
Moreover, note that self-supervised representations substantially outperform a fully-supervised ResNet-50 baseline which overfits significantly in this low-data scenario. \paragraph{Comparison to other methods.} \Cref{tab:linear_eval_sota} compares C-SimCLR and C-BYOL to other recent self-supervised methods from the literature (in the standard setting of using two augmented views) on ImageNet linear evaluation accuracy. We present accuracy for models trained for 800 and 1000 epochs, depending on what the original authors reported. C-BYOL achieves the best results compared to other state-of-the-art methods. Moreover, we can improve C-BYOL with ResNet-50 even further to 76.0\% Top-1 accuracy when we train it for 1500 epochs. \paragraph{Comparison to supervised baselines.} As shown in \Cref{tab:linear_eval_sota}, SimCLR and BYOL use supervised baselines of 76.5\% for ResNet-50 and 77.8\% for ResNet-50 2x~\cite{chen2020simple,grill2020bootstrap} respectively. In comparison, the corresponding compressed BYOL models achieve 76.0\% for ResNet-50 and 78.8\% for ResNet-50 2x, effectively matching or surpassing reasonable supervised baselines.\footnote{ We note that comparing supervised and self-supervised methods is difficult, as the comparison can only be made system-wise. Various complementary techniques can be used to further improve evaluation results in both settings. For example, the appendix of~\cite{grill2020bootstrap} reports that various techniques improve supervised model accuracies, whilst~\cite{grill2020bootstrap, kolesnikov2019revisiting} report various techniques to improve evaluation accuracy of self-supervised representations. We omit these in order to follow the common supervised baselines and standard evaluation protocols used in prior work.
} The results in \Cref{tab:linear_eval_r50,tab:semi,tab:linear_eval_sota} support our hypothesis that compression of SSL techniques can improve their ability to generalize in a variety of settings. These results are consistent with theoretical understandings of the relationship between compression and generalization~\cite{shamir2008learning,vera2018role,dubois2020learning}, as are the results in \Cref{tab:ceb_ablation} that show that performance improves with increasingly strong compression (corresponding to higher values of $\beta$), up to some maximum amount of compression, after which performance degrades again. \subsection{Evaluation of Model Robustness and Generalization} \label{sec:experiments_robustness} In this section, we analyse the robustness of our models to various domain shifts. Concretely, we use the models, with their linear classifier, from the previous experiment, and evaluate them on a suite of robustness benchmarks that have the same label set as the ImageNet dataset. We use the public robustness benchmark evaluation code of~\cite{djolonga2020robustness, djolonga2020robustnesscode}. As a result, we can evaluate our network and report Top-1 accuracy, as shown in \Cref{tab:robustness}, without any modifications to the network. We consider ``natural adversarial examples'' with ImageNet-A~\cite{hendrycks2019benchmarking} which consists of difficult images which a ResNet-50 classifier failed on.
ImageNet-C~\cite{hendrycks2019benchmarking} adds synthetic corruptions to the ImageNet validation set, ImageNet-R~\cite{hendrycks2020many} considers other naturally occurring distribution changes in image style, while ObjectNet~\cite{barbu2019objectnet} presents a more difficult test set for ImageNet where the authors control for different parameters such as viewpoint and background. ImageNet-Vid and YouTube-BB~\cite{shankar2019image} evaluate the robustness of image classifiers to natural perturbations arising in video. Finally, ImageNet-v2~\cite{recht2019imagenet} is a new validation set for ImageNet where the authors attempted to replicate the original data collection process. Further details of these robustness benchmarks are in \Cref{sec:robustness_benchmark_details}. \input{tables/robustness} \Cref{tab:robustness} shows that SimCLR and BYOL models trained with CEB compression consistently outperform their uncompressed counterparts across all seven robustness benchmarks. This is what we hypothesized in the SimCLR setting based on the Lipschitz continuity argument in Sec.~\ref{sec:lipschitz} and the appendix. All models performed poorly on ImageNet-A, but this is not surprising given that ImageNet-A was collected by~\cite{hendrycks2019benchmarking} from images that a ResNet-50 classifier trained with full supervision on ImageNet misclassified, and we also evaluate ResNet-50 models. \subsection{The Effect of Compression Strength} \label{sec:experiments_ablations} \input{tables/ceb_beta_ablation} \Cref{tab:ceb_ablation} studies the effect of the compression strength $\beta$ on linear evaluation accuracy on ImageNet, as well as on the same suite of robustness datasets. We observe that $\beta = 0$, which corresponds to no explicit compression but a stochastic representation, already achieves improvements across all datasets. Further improvements are observed by increasing the compression strength $\beta$, with $\beta = 1$ obtaining the best results. However, overly strong compression is harmful: large values of $\beta$ can cause training to collapse, which we observed for $\beta = 2$. \section{Implementation details and hyperparameters} \label{sec:impl_detail} In this section, we further describe our implementation details. Our implementation is based on the public implementation of SimCLR~\cite{chen2020simple}. In general, we closely follow the design choices of BYOL~\cite{grill2020bootstrap} for both of our SimCLR and BYOL implementations.
Despite having different objectives, BYOL and SimCLR share many components, including image augmentations, network architectures, and optimization settings. As explained in the original paper~\cite{grill2020bootstrap}, BYOL itself may be considered a modification of SimCLR that adds a slowly-updated moving-average target network and a predictor network, and replaces the InfoNCE learning target with a regression loss. Therefore, many of the design choices and hyperparameters are applicable to both. As explained in \Cref{sec:experiments_setup}, we align SimCLR with BYOL on the choices of image augmentations, network architecture, and optimization settings in order to reduce the number of variables in comparison.

\subsection{Image augmentations}
\label{sec:detail_aug}

During self-supervised training, we use the set of image augmentations from BYOL~\cite{grill2020bootstrap} for all our models.
\begin{itemize}
\item Random cropping: randomly select a patch of the image, with an area uniformly sampled between 8\% and 100\% of the original image, and an aspect ratio logarithmically sampled between $3/4$ and $4/3$. The patch is then resized to $224 \times 224$ using bicubic interpolation.
\item Left-to-right flipping: randomly flip the patch horizontally.
\item Color jittering: the brightness, contrast, saturation and hue of an image are shifted by a uniformly random offset. The order in which these adjustments are applied is randomly selected for each patch.
\item Color dropping: RGB pixel values of an image are converted to grayscale according to $0.2989r + 0.5870g + 0.1140b$.
\item Gaussian blurring: we use a $23 \times 23$ kernel to blur the $224 \times 224$ image, with a standard deviation uniformly sampled over $[0.1, 2.0]$.
\item Solarization: a color transformation $x = x \cdot \textbf{1}_{\{x < 0.5\}} + (1-x) \cdot \textbf{1}_{\{x\geq0.5\}}$ for pixels with values in $[0, 1]$ (we convert all pixel values to floats in $[0, 1]$).
\end{itemize} As described in Sec.~\ref{sec:methods}, we use augmentation functions $t$ and $t'$ to transform an image into two views. $t$ and $t'$ are compositions of the above image augmentations in the listed order, each applied with a predefined probability. The image augmentation parameters to generate $t$ and $t'$ are listed in Table~\ref{tab:image_aug}. During evaluation, we perform center cropping, as done in \cite{chen2020simple,grill2020bootstrap}. Images are resized to 256 pixels along the shorter side, after which a $224 \times 224$ center crop is applied. During both training and evaluation, we normalize image RGB values by subtracting the average color and dividing by the standard deviation, computed on ImageNet, after applying the augmentations. \paragraph{Differences from the original SimCLR~\cite{chen2020simple}.} Since the image augmentation parameters that BYOL~\cite{grill2020bootstrap} uses are different from the original SimCLR, we list the original SimCLR parameters in the last column of \Cref{tab:image_aug}, which are the same for $t$ and $t'$, to clarify the differences. Additionally, the original SimCLR samples the aspect ratio of cropped patches uniformly, instead of logarithmically, between $[3/4, 4/3]$. \begin{table} \caption{Image augmentation parameters. We use the hyperparameter values from BYOL~\cite{grill2020bootstrap}, and include the values from the original SimCLR~\cite{chen2020simple} as reference.} \label{tab:image_aug} \centering \begin{tabular}{lccc} \toprule Parameter & $t$ & $t'$ & Orig. 
SimCLR \cite{chen2020simple} \\
\midrule
Random crop probability & 1.0 & 1.0 & 1.0 \\
Flip probability & 0.5 & 0.5 & 0.5 \\
Color jittering probability & 0.8 & 0.8 & 0.8 \\
Brightness adjustment max strength & 0.4 & 0.4 & 0.8 \\
Contrast adjustment max strength & 0.4 & 0.4 & 0.8 \\
Saturation adjustment max strength & 0.2 & 0.2 & 0.8 \\
Hue adjustment max strength & 0.1 & 0.1 & 0.2 \\
Color dropping probability & 0.2 & 0.2 & 0.2 \\
Gaussian blurring probability & 1.0 & 0.1 & 1.0 \\
Solarization probability & 0.0 & 0.2 & 0.0 \\
\bottomrule
\end{tabular}
\end{table}

\subsection{Network architecture}
\label{sec:detail_net_arch}

Following \cite{chen2020simple,grill2020bootstrap}, we use ResNet-50 \cite{he2016deep} as our backbone convolutional encoder (the ``Conv'' part in \Cref{fig:ceb_simclr} and \Cref{fig:byol}). We vary the ResNet width \cite{zagoruyko2016wide} (and thus the number of parameters) from 1$\times$ to 2$\times$. In Sec.~\ref{sec:cbyol_w_deeper_resnets}, we additionally report C-BYOL results with different ResNet depths, from 50 to 152. The representations $h_x, h_y$ in SimCLR and $h, h_t, h_t'$ in BYOL correspond to the 2048-dimensional (for ResNet-50 1$\times$) final average pooling layer output. These representations are projected to a smaller space by an MLP (called ``projection'' in \Cref{fig:ceb_simclr} and \Cref{fig:byol}). As in \cite{grill2020bootstrap}, this MLP consists of a linear layer with output size 4096, followed by batch normalization~\cite{ioffe2015batch}, ReLU, and a final linear layer with output dimension 256. $q(\cdot)$ in BYOL/C-BYOL (Fig.~\ref{fig:byol}) is called the predictor. The predictor $q(\cdot)$ is also a two-layer MLP which shares the same architecture as the projection MLP \cite{grill2020bootstrap}.
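As an illustrative sketch (not our actual TensorFlow code), the projection head just described can be written in NumPy as follows. The shapes follow the text (2048-d input, 4096-d hidden layer with batch normalization and ReLU, 256-d output); the random weights and the simple train-mode batch statistics are stand-ins, not trained parameters:

```python
import numpy as np

def projection_mlp(h, w1, b1, gamma, beta, w2, b2, eps=1e-5):
    """Sketch of the projection head: linear(4096) -> batch norm -> ReLU -> linear(256)."""
    x = h @ w1 + b1                            # (B, 4096)
    mean, var = x.mean(axis=0), x.var(axis=0)  # train-mode batch statistics
    x = gamma * (x - mean) / np.sqrt(var + eps) + beta
    x = np.maximum(x, 0.0)                     # ReLU
    return x @ w2 + b2                         # (B, 256)

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 2048))                 # a batch of backbone representations
w1, b1 = 0.01 * rng.normal(size=(2048, 4096)), np.zeros(4096)
gamma, beta = np.ones(4096), np.zeros(4096)
w2, b2 = 0.01 * rng.normal(size=(4096, 256)), np.zeros(256)
z = projection_mlp(h, w1, b1, gamma, beta, w2, b2)
print(z.shape)  # (8, 256)
```

The predictor $q(\cdot)$ in BYOL/C-BYOL has the same 4096-hidden, 256-output structure.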
\paragraph{Differences from the original SimCLR~\cite{chen2020simple}.} The original SimCLR \cite{chen2020simple} uses a 2048-d hidden layer and a 128-d output layer for the projection MLP, after which an additional batch normalization is applied to the 128-d output. Neither BYOL \cite{grill2020bootstrap} nor our work applies this batch normalization to the last layer. We did not observe a significant change in performance for the uncompressed models, and found it to be harmful for the compressed models.

\subsection{von Mises-Fisher Distributions}

We use the vMF implementation in the public TensorFlow Probability (TFP) library \cite{dillon2017tensorflow}, specifically the current TFP version 0.13. We have found that sampling and computing log probabilities in high dimensions with the current TFP version is sufficiently stable and fast to train all of the models in our paper.\footnote{Previous versions of TFP were unstable for sampling from vMF distributions with more than 5 dimensions, and at the time of writing, the authors of the library have not updated the documentation to indicate that this is no longer the case.}

\subsection{Optimization}
\label{sec:detail_training}

We follow BYOL \cite{grill2020bootstrap} for our optimization settings. During self-supervised training, we use the LARS optimizer~\cite{you2017large} with a cosine decay learning rate schedule \cite{loshchilov2016sgdr} over 1000 epochs and a linear warm-up period at the beginning of training. The linear warm-up period is 10 epochs in most cases. We increase it to 20 epochs for BYOL and C-BYOL with larger ResNets (ResNet-50 2$\times$, ResNet-101, ResNet-152), as we found this helpful for preventing mode collapse and improving performance. In most cases, we set the base learning rate to 0.2 and scale it linearly by batch size ($\text{LearningRate} = 0.2 \times \text{BatchSize}/256$). For C-BYOL, we increase the base learning rate to 0.26 for better performance.
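A minimal sketch of this schedule, i.e.\ the linearly-scaled base rate with a linear warm-up followed by cosine decay; the batch size and step counts below are illustrative, not our training configuration:

```python
import math

def learning_rate(step, total_steps, warmup_steps, base_lr=0.2, batch_size=4096):
    """Linear warm-up to base_lr * batch_size / 256, then cosine decay to 0."""
    scaled_lr = base_lr * batch_size / 256
    if step < warmup_steps:
        return scaled_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return scaled_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# The peak rate is reached at the end of warm-up and decays to 0 by the final step.
peak = learning_rate(100, 1000, 100)    # 0.2 * 4096 / 256 = 3.2
final = learning_rate(1000, 1000, 100)  # 0.0
```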
For careful comparison, we extensively searched over the base learning rate for BYOL, but did not find a configuration better than the 0.2 used in the original work~\cite{grill2020bootstrap}. We use a weight decay of $1.5\times10^{-6}$. For the BYOL/C-BYOL target network, the exponential moving average update rate $\alpha$ starts from $\alpha_{\text{base}}=0.996$ and ramps up to 1 with a cosine schedule, $\alpha \triangleq 1-(1-\alpha_{\text{base}})(\cos(\pi k/K)+1)/2$, where $k$ is the current training step and $K$ is the total number of training steps.

For 300-epoch models used in ablations, we set the base learning rate to 0.3 in most cases, and increase it to 0.35 for C-BYOL. We use a weight decay of $10^{-6}$. For BYOL and C-BYOL, the base exponential moving average update rate $\alpha_{\text{base}}$ is set to 0.99. We note that there is a small chance that both BYOL and C-BYOL end up with collapsed solutions, but this mostly happens in the early phase of training and is easy to detect, as the learning objective spikes or reaches NaN.

\paragraph{Differences from the original SimCLR~\cite{chen2020simple}.} The optimization settings of the original SimCLR are very similar, but for 1000-epoch training they use a base learning rate of 0.3 and a weight decay of $10^{-6}$.

\subsection{SimCLR and C-SimCLR details}
\label{sec:detail_simclr}

As described in \Cref{sec:detail_aug}, \Cref{sec:detail_net_arch} and \Cref{sec:detail_training}, we made minor modifications to the original SimCLR to align with BYOL on the choices of image augmentations, network architecture, and optimization settings. With these modifications, our SimCLR baseline reproduction outperforms the original (top-1 accuracy 70.6\% versus 69.3\%). For C-SimCLR, we use $\kappa_e=1024$ for the true encoder $e(\cdot|x)$ and $\kappa_b=10$ for the backward encoder $b(\cdot|y)$, where $e(\cdot|x)$ and $b(\cdot|y)$ are von Mises-Fisher distributions. The compression factor $\beta$ that we use for C-SimCLR is $1.0$.
Note that the original SimCLR has temperature $\tau=0.1$ which is equivalent to having $\kappa_b=10$, since $\kappa_b=1/\tau$. \subsection{BYOL and C-BYOL details} \label{sec:detail_byol} \begin{table} \caption{Ablation study on BYOL models trained for 300 epochs. Top-1 denotes the linear evaluation Top-1 accuracy on ImageNet.} \label{tab:byol_300e_ablation} \centering \begin{tabular}{lc} \toprule Method & Top-1 \\ \midrule BYOL $w_{\text{byol}}=1.0$ & 72.5 \\ BYOL $w_{\text{byol}}=5.0$ & 72.8 \\ BYOL $w_{\text{byol}}=5.0$ + 256-d linear layer + $l_2$-normalization & 72.8 \\ BYOL $w_{\text{byol}}=5.0$ + 256-d linear layer + $l_2$-normalization + sampling & 72.8 \\ C-BYOL $w_{\text{byol}}=5.0$ & 73.6 \\ \bottomrule \end{tabular} \caption{The effect of loss weights on BYOL models trained for 1000 epochs. Top-1 denotes the linear evaluation Top-1 accuracy on ImageNet.} \label{tab:byol_1000e_loss_w} \centering \begin{tabular}{lc} \toprule Method & Top-1 \\ \midrule BYOL $w_{\text{byol}}=1.0$ & 74.2 \\ BYOL $w_{\text{byol}}=2.0$ & 74.2 \\ BYOL $w_{\text{byol}}=5.0$ & 74.2 \\ \bottomrule \end{tabular} \end{table} As shown in \Cref{tab:byol_300e_ablation} and \Cref{tab:byol_1000e_loss_w}, our BYOL implementation stably reproduces results comparable to \cite{grill2020bootstrap} with 300 and 1000 epochs of training. An interesting behavior we observed is that, for shorter training with 300 epochs, scaling the BYOL regression loss can improve performance. Specifically we add a weight multiplier $w_{\text{byol}} = \kappa_d/2$ to the BYOL loss Eq.~\eqref{eq:byol_loss}. \begin{align} L_{\text{byol}} = w_{\text{byol}}||\mu_e - y'||^2_2 \label{eq:scaled_byol_loss} \end{align} \Cref{tab:byol_300e_ablation} shows that multiplying the loss by five increases the linear evaluation accuracy from 72.5\% to 72.8\%. This improvement is consistent across multiple runs. Therefore, we choose $w_{\text{byol}}=5$ for 300-epoch BYOL/C-BYOL. 
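As a sketch, the scaled regression loss of Eq.~\eqref{eq:scaled_byol_loss} on $l_2$-normalized vectors can be written as below; for unit-norm inputs it equals $2\,w_{\text{byol}}(1 - \text{cosine similarity})$, so the multiplier simply rescales the gradient magnitude. This is an illustration of the loss formula, not our training code:

```python
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def scaled_byol_loss(mu_e, y_prime, w_byol=5.0):
    """w_byol * ||mu_e - y'||^2 on l2-normalized vectors."""
    mu_e, y_prime = l2_normalize(mu_e), l2_normalize(y_prime)
    return w_byol * sum((a - b) ** 2 for a, b in zip(mu_e, y_prime))

# Identical directions give zero loss; orthogonal ones give 2 * w_byol.
assert scaled_byol_loss([1.0, 0.0], [2.0, 0.0]) == 0.0
assert abs(scaled_byol_loss([1.0, 0.0], [0.0, 1.0]) - 10.0) < 1e-12
```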
However, we do not see the same improvement for 1000-epoch models. \Cref{tab:byol_1000e_loss_w} shows that $w_{\text{byol}}$ makes little difference for 1000-epoch BYOL models. We still choose $w_{\text{byol}}=2$ for all 1000-epoch BYOL and C-BYOL models, since it tends to work better than $w_{\text{byol}}=1$ for the compressed models and for models with larger ResNets. Furthermore, \Cref{tab:byol_300e_ablation} verifies that the additional linear layer with $l_2$-normalization that we added for C-BYOL, and $z$ sampling (both described in \Cref{sec:byol}), do not result in a difference in performance. The improvement happens only when CEB compression is used. We set $\kappa_e=16384.0$, $\kappa_b=10.0$, and the compression factor $\beta=1.0$ for C-BYOL, if not specified otherwise.

\section{Linear evaluation protocol on ImageNet}
\label{sec:detail_eval}

As is common in the self-supervised learning literature~\cite{grill2020bootstrap,chen2020simple,kolesnikov2019revisiting,chen2020improved}, we assess the performance of our representations learned on the ImageNet training set (without labels) by training a linear classifier on top of the frozen representations using the labeled data. For training this linear classifier, we only apply the random cropping and flipping image augmentations. We optimize the cross-entropy loss using SGD with Nesterov momentum over 40 epochs. We use a batch size of 1024 and a momentum of 0.9, without weight decay, and sweep the base learning rate over $\{0.4, 0.3, 0.2, 0.1, 0.05\}$ to choose the best learning rate on a validation set, following \cite{grill2020bootstrap}. We perform center cropping during evaluation, as done in \cite{chen2020simple,grill2020bootstrap}. Images are resized to 256 pixels along the shorter side, after which the $224 \times 224$ center crop is selected.
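The evaluation-time preprocessing just described (resize so the shorter side is 256 pixels, then take the central $224 \times 224$ crop) can be sketched as follows; the nearest-neighbor indexing here is a simple stand-in for the actual resizing kernel:

```python
import numpy as np

def center_crop(image, resize_to=256, crop=224):
    """Resize the shorter side to `resize_to` (nearest-neighbor stand-in),
    then take a central `crop` x `crop` patch."""
    h, w = image.shape[:2]
    scale = resize_to / min(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    resized = image[rows][:, cols]
    top, left = (new_h - crop) // 2, (new_w - crop) // 2
    return resized[top:top + crop, left:left + crop]

out = center_crop(np.zeros((480, 640, 3)))
print(out.shape)  # (224, 224, 3)
```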
During both training and evaluation, we normalize image RGB values by subtracting the average color and dividing by the standard deviation, computed on ImageNet, after applying the augmentations. \paragraph{Learning with a few labels} In \Cref{sec:experiments_imagenet} we described learning the linear classifier on 1\% and 10\% of the ImageNet training set with labels. We performed this experiment with the same 1\% and 10\% splits from \cite{chen2020simple}. \section{Transfer to other classification tasks} \label{sec:transfer_classification_tasks} \begin{table} \scriptsize \caption{ Transfer to other classification tasks, by performing linear evaluation. The backbone network is ResNet-50, pretrained in a self-supervised fashion for 1000 epochs. } \label{tab:classification_transfer} \centering \scalebox{0.925}{ \begin{tabular}{lcccccccccccc} \toprule Method & Food101 & CIFAR10 & CIFAR100 & Flowers & Pet & Cars & Caltech-101 & DTD & SUN397 & Aircraft & Birdsnap \\ \midrule SimCLR & 72.5 & 91.1 & 74.4 & 88.4 & 83.5 & 49.7 & 89.5 & 72.5 & 61.8 & 51.6 & 35.4 \\ C-SimCLR & \textbf{73.0} & \textbf{91.6} & \textbf{75.2} & \textbf{89.0} & \textbf{84.0} & \textbf{52.7} & \textbf{91.2} & \textbf{73.0} & \textbf{62.3} & \textbf{53.5} & \textbf{38.2} \\ \bottomrule \end{tabular} } \end{table} We analyze the effect of compression on transfer to other classification tasks in Table~\ref{tab:classification_transfer}. This allows us to assess whether the compressive representations learned by our method are generic and transfer across image domains. 
\paragraph{Datasets.} We perform the transfer experiments on the Food-101 dataset~\cite{bossard2014food}, CIFAR-10 and CIFAR-100 \cite{krizhevsky2009learning}, Birdsnap~\cite{berg2014birdsnap}, SUN397~\cite{xiao2010sun}, Stanford Cars~\cite{krause2013collecting}, FGVC Aircraft~\cite{maji2013fine}, the Describable Textures Dataset (DTD) \cite{cimpoi2014describing}, Oxford-IIIT Pets~\cite{parkhi2012cats}, Caltech-101~\cite{fei2004learning}, and Oxford 102 Flowers~\cite{nilsback2008automated}. We carefully follow the standard evaluation protocol for these datasets, i.e.\ we report top-1 accuracy for Food-101, CIFAR-10, CIFAR-100, Birdsnap, SUN397, Stanford Cars, and DTD; and mean per-class accuracy for FGVC Aircraft, Oxford-IIIT Pets, Caltech-101, and Oxford 102 Flowers. These datasets are also used by~\cite{chen2020simple, grill2020bootstrap, kornblith2019better}. More exhaustive details about the train, validation, and test splits of these datasets can be found in Section D of~\cite{grill2020bootstrap} (arXiv v3).

\paragraph{Transfer via linear classifier.} To demonstrate the effectiveness of compressed representations, we compare SimCLR and C-SimCLR representations as an example. We freeze the representations of our model and train an $\ell_2$-regularized multinomial logistic regression classifier on top of these frozen representations. We minimize the cross-entropy objective using the L-BFGS optimizer. As in \cite{grill2020bootstrap,chen2020simple}, we select the $\ell_2$ regularization parameter from a range of 45 logarithmically spaced values between $[10^{-6}, 10^5]$. We observe in Table~\ref{tab:classification_transfer} that our Compressed SimCLR model consistently outperforms the uncompressed SimCLR baseline on each of the 11 datasets we tested. We note absolute improvements in accuracy ranging from 0.5\% (CIFAR-10, SUN397) to 3\% (Stanford Cars).
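The regularization sweep above can be generated directly; a one-line sketch of the 45 logarithmically spaced candidates between $10^{-6}$ and $10^{5}$:

```python
import numpy as np

# 45 logarithmically spaced l2-regularization candidates in [1e-6, 1e5],
# one of which is selected on a held-out validation split.
l2_candidates = np.logspace(-6, 5, num=45)
```

Each candidate would then be used to fit the multinomial logistic regression (e.g.\ with an L-BFGS-based solver), keeping the value with the best validation accuracy.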
These experiments suggest that the representations learned by the compressed model are generic, and transfer beyond the ImageNet domain on which they were learned.

\section{Extra C-BYOL results with Deeper ResNets}
\label{sec:cbyol_w_deeper_resnets}

\begin{table}[t]
\caption{C-BYOL and BYOL trained for 1000 epochs with different ResNet depths. We report ImageNet Top-1 accuracy from linear evaluation, averaged over 3 trials.}
\label{tab:cbyol_w_deeper_resnets}
\centering
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{C-BYOL} & \multicolumn{2}{c}{BYOL \cite{grill2020bootstrap}} \\
Architecture & Top-1 & Top-5 & Top-1 & Top-5 \\
\midrule
ResNet-50 & \textbf{75.6} & \textbf{92.7} & 74.3 & 91.6 \\
ResNet-101 & \textbf{77.8} & \textbf{93.9} & 76.4 & 93.0 \\
ResNet-152 & \textbf{78.7} & \textbf{94.4} & 77.3 & 93.7 \\
\bottomrule
\end{tabular}
\end{table}

In \Cref{tab:cbyol_w_deeper_resnets}, we additionally report results of C-BYOL and BYOL trained for 1000 epochs with ResNet-101 and ResNet-152, as it could be of interest to demonstrate improvements over the state-of-the-art BYOL on these deeper ResNet models. It can be observed that C-BYOL gives significant gains across ResNets of different depths.

\section{Additional Ablations}
\label{sec:further_ablations}

The hyperparameter and architecture choices of SimCLR and BYOL have been investigated in the original works \cite{chen2020simple,grill2020bootstrap}. Here we focus on analysing the hyperparameters specific to C-SimCLR and C-BYOL. \Cref{tab:kappa_e_csimclr}, \Cref{tab:kappa_e_cbyol} and \Cref{tab:kappa_b} show how varying $\kappa_e$ and $\kappa_b$ affects the results. We also investigate the effect of the compression factor $\beta$ for C-BYOL (\Cref{tab:cbyol_beta}), in addition to the compression analysis for SimCLR in Sec.~\ref{sec:experiments_ablations}.
Similar to C-SimCLR, as the compression strength ($\beta$) increases, the linear evaluation result improves, with $\beta=1.0$ obtaining the best results; overly strong compression again leads to a drop in performance.

Finally, we conduct a preliminary exploration of the interplay between CEB compression and image augmentations, using the cropping area ratio as an example in \Cref{tab:area_range_lower_bound}. As described in \Cref{sec:detail_aug}, we follow \cite{grill2020bootstrap,chen2020simple} and randomly crop an image to an area between 8\% and 100\% of the original image. We refer to this 8\% as the ``area lower bound'', which is the most aggressive cropping area ratio that can occur. As the area lower bound decreases, we reduce the amount of information that can be shared between the two representations, because there is less and less mutual information between the two images: $I(X;X')$ gets smaller the more we reduce the area lower bound \cite{tian2020makes}. Thus, smaller area lower bounds should force the model to be more compressed. What we see in \Cref{tab:area_range_lower_bound} is that the SimCLR models are much more sensitive to changes in the area lower bound than the C-SimCLR models are. We speculate that this is because the compression done by the C-SimCLR objective overlaps to some extent with the compression given by varying the area lower bound. Regardless, the compression due to the area lower bound hyperparameter appears to be insufficient to adequately compress away irrelevant information in the SimCLR model, which is why the C-SimCLR models continue to outperform the SimCLR models at all area lower bound values.

\begin{table}[t]
\caption{The effect of varying $\kappa_e$ for C-SimCLR models.
We report ImageNet Top-1 accuracy from linear evaluation.} \label{tab:kappa_e_csimclr} \centering \begin{tabular}{rcccccc} \toprule $\kappa_e$ & 256 & 512 & 1024 & 2048 & 4096 & 8192 \\ \midrule ImageNet Top-1 accuracy & 69.8 & 69.8 & 70.2 & 69.8 & 69.6 & 69.6 \\ \bottomrule \end{tabular} \caption{The effect of varying $\kappa_e$ for C-BYOL models. We report ImageNet Top-1 accuracy from linear evaluation.} \label{tab:kappa_e_cbyol} \centering \begin{tabular}{rcccc} \toprule $\kappa_e$ & 4096 & 8192 & 16384 & 32768 \\ \midrule ImageNet Top-1 accuracy & 73.0 & 73.3 & 73.6 & 73.2 \\ \bottomrule \end{tabular} \caption{The effect of varying $\kappa_b$ for C-SimCLR and C-BYOL models. We report ImageNet Top-1 accuracy from linear evaluation.} \label{tab:kappa_b} \centering \begin{tabular}{lccccc} \toprule Method & $\kappa_b=$1 & 3 & 10 & 15 & 20 \\ \midrule C-SimCLR & 65.0 & 68.5 & 70.2 & 69.1 & 68.6 \\ C-BYOL & 73.1 & 73.3 & 73.6 & 73.4 & 73.2 \\ \bottomrule \end{tabular} \caption{The effect of $\beta$ on C-BYOL. Note that \Cref{tab:ceb_ablation} in the main paper studied this effect on C-SimCLR. The final column is the uncompressed BYOL baseline. } \label{tab:cbyol_beta} \centering \begin{tabular}{rccccc} \toprule $\beta$ & 1.5 & 1.0 & 0.1 & 0.01 & BYOL \\ \midrule ImageNet Top-1 accuracy & 73.4 & 73.6 & 73.1 & 73.0 & 72.8 \\ \bottomrule \end{tabular} \caption{The effect of varying the area range lower rounds for SimCLR and Compressed SimCLR. We report the ImageNet Top-1 accuracy from linear evaluation. Note how the baseline SimCLR model is much more sensitive to this data-augmentation hyperparameter. 
}
\label{tab:area_range_lower_bound}
\centering
\begin{tabular}{lcccc}
\toprule
Method & 8\% & 16\% & 25\% & 50\% \\
\midrule
SimCLR & 69.0 & 68.6 & 67.6 & 61.4 \\
C-SimCLR & 70.2 & 70.0 & 68.9 & 64.3 \\
\bottomrule
\end{tabular}
\end{table}

\section{Robustness benchmark details}
\label{sec:robustness_benchmark_details}

In this section, we provide some additional details on each of the datasets used in our robustness evaluations. Note that we use the public robustness benchmark evaluation code of~\cite{djolonga2020robustness, djolonga2020robustnesscode}.\footnote{\url{https://github.com/google-research/robustness_metrics}}

ImageNet-A~\cite{hendrycks2021natural}: This dataset of ``natural adversarial examples'' consists of images of ImageNet classes on which a ResNet-50 classifier failed. The dataset authors performed manual, human verification to ensure that the predictions of the ResNet-50 model were indeed incorrect and egregious~\cite{hendrycks2021natural}.

ImageNet-C~\cite{hendrycks2019benchmarking}: This dataset adds 15 corruptions to ImageNet images, each at 5 levels of severity. We report the average accuracy over all the corruptions and severity levels.

ImageNet-R~\cite{hendrycks2020many}: This dataset, with the full name ``ImageNet Rendition'', captures naturally occurring distribution changes in image style, camera operation and geographic location.

ImageNet-v2~\cite{recht2019imagenet}: This is a new test set for ImageNet, collected following the same protocol as the original ImageNet dataset. The authors posit that the collected images are more ``difficult'', and observed consistent accuracy drops across a wide range of models trained on the original ImageNet.

ObjectNet~\cite{barbu2019objectnet}: This is a more challenging test set for ImageNet, where the authors control for different viewpoints, backgrounds and rotations. Note that ObjectNet has a vocabulary of 313 object classes, of which 113 are common with ImageNet.
Following~\cite{djolonga2020robustnesscode}, we evaluate only on the images in the dataset which have one of the 113 ImageNet labels. Our network is still able to predict any of the 1000 ImageNet classes, however.

ImageNet-Vid and YouTube-BB~\cite{shankar2019image} evaluate the robustness of image classifiers to natural perturbations arising in video. These datasets were created by~\cite{shankar2019image} by augmenting the ImageNet-Vid~\cite{russakovsky2015imagenet} and YouTube-BB~\cite{real2017youtubeboundingboxes} datasets with additional annotations.

\section{Analysis of Lipschitz Continuity and Compression}
\label{sec:complete_lipschitz}

In this section, we provide a more detailed explanation of the relation between Lipschitz continuity and SimCLR with CEB compression, introduced in \Cref{sec:lipschitz}. Lipschitz continuity provides a way of measuring how smooth a function is. For some function $f$ and a distance measure $D(f(x_1), f(x_2))$, Lipschitz continuity defines an upper bound on how quickly $f$ can change as $x$ changes:
\begin{align}
L ||\Delta x|| \geq D(f(x), f(x + \Delta x))
\end{align}
where $L$ is the Lipschitz constant, $\Delta x$ is the vector change in $x$, and $||\Delta x|| > 0$. Frequently, the choice of $D$ is the absolute difference function: $|f(x_1) - f(x_2)|$. However, we can use a multiplicative distance rather than an additive distance by considering the absolute difference of the logs of the functions:
\begin{align}
\label{eq:multiplicative_distance}
D(f(x_1), f(x_2)) \equiv | \log f(x_1) - \log f(x_2) |
\end{align}
It is straightforward to see that \Cref{eq:multiplicative_distance} obeys the triangle inequality, which follows from the scalar triangle inequality:
\begin{align}
\label{eq:triangle_ineq}
|a + b| \leq |a| + |b|
\end{align}
\Cref{eq:triangle_ineq} is true for any scalars $a$ and $b$. Setting $a = \log f(x_1) - \log f(x_2)$ and $b = \log f(x_2) - \log f(x_3)$ yields $D(f(x_1), f(x_3)) \leq D(f(x_1), f(x_2)) + D(f(x_2), f(x_3))$, given that $f(\cdot)$ is a positive, scalar-valued function.
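A quick numerical sanity check (on arbitrary positive scalars, not on actual encoder densities) that the multiplicative distance of \Cref{eq:multiplicative_distance} obeys the triangle inequality:

```python
import math
import random

def log_distance(u, v):
    """Multiplicative distance |log u - log v| for positive scalars."""
    return abs(math.log(u) - math.log(v))

random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(1e-3, 1e3) for _ in range(3))
    # D(a, c) <= D(a, b) + D(b, c), up to floating-point slack.
    assert log_distance(a, c) <= log_distance(a, b) + log_distance(b, c) + 1e-9
```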
For $D(\cdot)$ to be a valid distance metric, $f(x)$ must also satisfy the identity of indiscernibles requirement: $f(x_1) = f(x_2) \Leftrightarrow x_1 = x_2$. If that requirement is violated, then $D(\cdot)$ becomes a pseudometric, which is inconsistent with Lipschitz continuity. Noting that $|a - b| \equiv \max(a - b, b - a)$, we will simplify the analysis by considering the two arguments to the implicit $\max$ in \Cref{eq:multiplicative_distance} one at a time, starting with: \begin{align} L &\geq \frac{1}{||\Delta x||} ( \log f(x) - \log f(x + \Delta x) ) \end{align} If we define $f(x)$ to be our encoder distribution, $e(z|x)$, we get a function of $z$ of Lipschitz value:\footnote{% Note that if we choose an encoder distribution where the density ever goes to 0 or $\infty$, \Cref{eq:encoder_lipschitz} will have a maximum value of $\infty$. Of course, it's generally easy to avoid this situation by choosing ``well-behaved'' distributions like the Gaussian or von Mises-Fisher distributions, whose densities are non-zero on the entire domain, and to parameterize them with variance or concentration parameters that don't go to 0 or $\infty$, respectively. } \begin{align} \label{eq:encoder_lipschitz} L(z) \geq \frac{1}{||\Delta x||} ( \log e(z|x) - \log e(z|x + \Delta x) ) \end{align} Note that the encoder distribution must not violate the identity of indiscernibles property: $\forall z: e(z|x_1) = e(z|x_2) \Leftrightarrow x_1 = x_2$. This is not the case in general, but for reasonable distribution families, the sets of $z$ that violate this property for any $(x_1, x_2)$ pair will have measure zero. In the case that $e(z|\cdot)$ is parameterized by some function $f(\cdot)$, such as a neural network, $f$ must also not violate the identity of indiscernibles property. This argues in favor of using invertible networks for $f$, or at least not using activation functions like relu that are likely to cause $f$ to map multiple $x$ values to some constant. 
We note that, in practice, this does not seem to matter, as shown empirically in \Cref{sec:experiments}. As $e(z|x)$ is parameterized by the output of our model, \Cref{eq:encoder_lipschitz} captures the semantically relevant smoothness of the model. For example, if our encoder distribution is a Gaussian with learned mean and variance, the impact of the model parameters on the means is semantically distinct from the impact of the model parameters on the variance. In that setting, naively using the parameter vectors themselves in a Lipschitz formulation like $L_{\text{naive}} ||\Delta x|| \geq || f_\theta(x) - f_\theta(x + \Delta x) ||$, where $f_\theta$ outputs concatenated mean and variance parameters, would clearly fail to correctly capture the model's smoothness. Our formulation using the encoder \emph{distribution} directly does not have this flaw, and thus generalizes to capture a notion of smoothness for any choice of distribution parameterization. Note that this notion of smoothness of the distribution still depends directly on the smoothness of the underlying function that generates the distribution's parameters, while also capturing the smoothness of the distribution itself.

We can remove the dependence of \Cref{eq:encoder_lipschitz} on $z$ by taking the expectation over $z$ with respect to $e(z|x)$. This gives us an \emph{expected} Lipschitz value:
\begin{align}
\mathbb{E}_{z \sim e(z|x)} L(z) &\geq \frac{1}{||\Delta x||} \mathbb{E}_{z \sim e(z|x)} \left[ \log e(z|x) - \log e(z|x + \Delta x) \right] \\
&= \frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ] \label{eq:lipschitz_kl}
\end{align}
It is important to note that \Cref{eq:lipschitz_kl} no longer obeys the triangle inequality, due to the $\operatorname{KL}$ divergence: it is easy to find three distributions $p, q, r$ such that $\operatorname{KL}[p||q] > \operatorname{KL}[p||r] + \operatorname{KL}[r||q]$.
We could also have computed the expectation over $z \sim e(z|x + \Delta x)$, yielding:
\begin{align}
\mathbb{E}_{z \sim e(z|x + \Delta x)} L(z) &\geq -\frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ]
\end{align}
But this is trivially true, since $L(z)$ is non-negative and the negated $\operatorname{KL}$ term is non-positive, so we can ignore this term here. However, when we consider the second argument to the implicit $\max$ in \Cref{eq:multiplicative_distance}, the negative and positive $\operatorname{KL}$ terms are swapped, and we are left with:
\begin{align}
\mathbb{E}_{z \sim e(z|x + \Delta x)} L(z) &\geq \frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ]
\end{align}
When we take the expectations over $z$, the resulting $\operatorname{KL}$ divergences have an underlying quadratic growth in $||\Delta x||$: as $||\Delta x||$ increases linearly, the $\operatorname{KL}$ divergences increase quadratically.\footnote{This is easiest to see with Gaussian distributions whose means are parameterized by an identity map of $x$ and $x + \Delta x$: the $\operatorname{KL}$ divergence is quadratic in the difference of the means, which is $||\Delta x||$.} This is why the $\operatorname{KL}$ divergence violates the triangle inequality, and also why it is problematic for measuring Lipschitz continuity: in general, $L$ will be unbounded when measured by the $\operatorname{KL}$, even when the underlying function $f(x)$ parameterizing the distributions has a bounded Lipschitz constant, since the $\operatorname{KL}$ will always grow faster than $||\Delta x||$.
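The footnote's Gaussian example can be checked directly. Under that identity-map assumption, $\operatorname{KL}[\mathcal{N}(\mu_1, 1)\,||\,\mathcal{N}(\mu_2, 1)] = (\mu_1 - \mu_2)^2 / 2$, which grows quadratically in the mean separation and violates the triangle inequality:

```python
def kl_unit_gaussians(mu1, mu2):
    """KL[N(mu1, 1) || N(mu2, 1)] = (mu1 - mu2)^2 / 2."""
    return (mu1 - mu2) ** 2 / 2.0

# Quadratic growth: doubling the mean separation quadruples the divergence.
assert kl_unit_gaussians(0.0, 2.0) == 4 * kl_unit_gaussians(0.0, 1.0)
# Triangle-inequality violation: the direct divergence (2.0) exceeds the
# sum of divergences through the midpoint (0.5 + 0.5).
assert kl_unit_gaussians(0.0, 2.0) > kl_unit_gaussians(0.0, 1.0) + kl_unit_gaussians(1.0, 2.0)
```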
We can address this by instead considering the squared Lipschitz constant:
\begin{align}
L^2 ||\Delta x||^2 \geq \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ]
\quad \text{and} \quad
L^2 ||\Delta x||^2 \geq \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ]
\end{align}
which is equivalent to:
\begin{align}
L^2 \geq \frac{1}{||\Delta x||} \mathbb{E}_{z \sim e(z|x)} L(z)
\quad \text{and} \quad
L^2 \geq \frac{1}{||\Delta x||} \mathbb{E}_{z \sim e(z|x + \Delta x)} L(z)
\end{align}
Finally, we note the following relationship:
\begin{align}
L^2 = \max_{x,\Delta x} \max\left( \frac{1}{||\Delta x||^2} \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ], \frac{1}{||\Delta x||^2} \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \right)
\end{align}
In words, the true squared Lipschitz constant of the encoder is equal to the least smooth $(x,\Delta x)$ pair, as measured by the greater of the two $\operatorname{KL}$ divergences at that pair. Putting all of this together, we observe that the following two $\operatorname{KL}$ divergences together give a lower bound on the encoder's Lipschitz constant:
\begin{align}
\label{eq:lipschitz_log_bound}
L^2 \geq \frac{1}{||\Delta x||^2} \max \Big( \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ],\, \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \Big)
\end{align}
Thus, taking the pointwise maximum across pairs of inputs in any dataset gives a valid estimate of the maximum lower bound of the encoder's Lipschitz constant. \Cref{eq:lipschitz_log_bound} can be evaluated directly on any pair of valid inputs $(x, x + \Delta x)$. \Cref{eq:lipschitz_log_bound} is the same as \Cref{eq:lipschitz_log_bound1} used in \Cref{sec:lipschitz}.

\iffalse
Taking expectations over $z$, we get: \begin{align} \mathbb{E}_{z \sim e(z|x)} L_{x,\Delta x}(z) &\geq \frac{1}{||\Delta x||} \mathbb{E}_{z \sim e(z|x)} \log e(z|x) - \log e(z|x + \Delta x) \\ &= \frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ] \end{align} We could also have computed the expectation over $z \sim e(z|x + \Delta x)$, yielding: \begin{align} \mathbb{E}_{z \sim e(z|x + \Delta x)} L_{x,\Delta x}(z) &\geq -\frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \end{align} But this is trivially true due to $L(z)$ being non-negative and the negative $\operatorname{KL}$ term being non-positive, so we can ignore this term here. However, when we consider the second argument to the implicit $\max$ in \Cref{eq:multiplicative_distance}, the negative and positive $\operatorname{KL}$ terms are swapped, and we are left with: \begin{align} \mathbb{E}_{z \sim e(z|x + \Delta x)} L_{x,\Delta x}(z) &\geq \frac{1}{||\Delta x||} \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \end{align} Finally, we note the following relationship: \begin{align} L \geq \max_z L_{x,\Delta x}(z) \end{align} In words, the true Lipschitz constant of the encoder is lower-bounded by the ``least smooth'' $z$ sample from the encoder at any particular $(x, x + \Delta x)$ pair. Putting all of this together, we observe that the following two $\operatorname{KL}$ divergences together give a lower bound on the encoder's Lipschitz constant: \begin{align} \label{eq:lipschitz_log_bound} L \geq \frac{1}{||\Delta x||} \max \Big( \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ],\, \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \Big) \end{align} Thus, taking the pointwise maximum across pairs of inputs in any dataset gives a valid estimate of the maximum lower bound of the model's Lipschitz constant. \Cref{eq:lipschitz_log_bound} can be evaluated directly on any pair of valid inputs $(x, x + \Delta x)$. 
\Cref{eq:lipschitz_log_bound} is the same as Equation (19) used in Section 2.4. Note that because the $\operatorname{KL}$ divergence is an expectation rather than a max, \Cref{eq:lipschitz_log_bound} is not in general a tight lower bound. \fi \iffalse Of course, to guarantee smoothness of the encoder distribution, we would like to have an upper bound on $L$ that we can minimize, rather than a lower bound. Minimizing a lower bound does not directly yield any optimality guarantees relative to the bounded quantity. However, in this case, minimizing the symmetric $\operatorname{KL}$ below is \emph{consistent} with learning a smoother encoder function: \begin{align} \label{eq:min_symkl} \min_{e(z|\cdot)} \operatorname{KL}[ e(z|x) || e(z|x + \Delta x) ] + \operatorname{KL}[ e(z|x + \Delta x) || e(z|x) ] \end{align} By \emph{consistent}, we mean that, if we could minimize this symmetric KL at every pair $(x, x + \Delta x)$ in the input domain, we would have smoothed the model. In practice, for high-dimensional input domains, that isn't possible, but minimizing \Cref{eq:min_symkl} at a subset of the input domain still improves the model's smoothness, at least at that subset. The minimization in \Cref{eq:min_symkl} corresponds almost exactly to the CEB compression term in the bidirectional SimCLR models. At samples of the augmented observed variables, $X, X'$, the CEB SimCLR models minimize upper bounds on both of the following residual information terms: \begin{align} \label{eq:bidir_residual} I(X;Z|X') + I(X';Z|X) \leq \mathbb{E}_{x,x' \sim p(x,x')} \operatorname{KL}[ e(z|x) || e(z|x') ] + \operatorname{KL}[ e(z|x') || e(z|x) ] \end{align} The only caveat to this is that $e(z|x)$ and $e(z|x')$ have different $\kappa$ values in their von Mises-Fischer distributions. However, these are hyperparameters, so they are not part of the trained model parameters. 
They simply change the minimum attainable $\operatorname{KL}$s in \Cref{eq:bidir_residual}, thereby adjusting the minimum achievable Lipschitz constant for the models. Due to \Cref{eq:bidir_residual}, we should expect that the CEB SimCLR models are locally more smooth around the observed data points. We reiterate, though, that this is not a proof of increased global Lipschitz smoothness, as we are minimizing a lower bound on the Lipschitz constant, rather than minimizing an upper bound. It is still theoretically possible to learn highly non-smooth functions using CEB in this manner. It would be surprising, however, if the CEB models were somehow \emph{less} smooth than the corresponding uncompressed SimCLR models. Since various types of model robustness are largely determined by the model's Lipschitz constant \iansf{CITE STUFF}, we would expect to see that the compressed CEB SimCLR models are more robust than the uncompressed SimCLR models on common robustness benchmarks. We explore this hypothesis in \Cref{sec:experiments}. Note that it is more difficult to make the same theoretical argument for the CEB BYOL models, as they do not use exactly the same encoder for both $x$ and $x'$. Thus, the equivalent conditional information terms from \Cref{eq:bidir_residual} are not directly minimizing a lower bound on the Lipschitz constant of the encoder. Nevertheless, we also explore the impact of CEB on the robustness of BYOL models in \Cref{sec:experiments}. \fi \paragraph{Example: the von Mises-Fisher distribution.} An exponential family distribution has the form: \begin{align} h(z) \exp(\eta^T T(z)-A(\eta)) \end{align} where $T(z)$ is the sufficient statistic, $\eta$ is the canonical parameter, and $A(\eta)$ is the cumulant. For the von Mises-Fisher distribution, which has the form: \begin{align} C_n(\kappa)\exp(\kappa \mu^T z) \end{align} we have $h(z)=1$, $T(z)=z$ and $A(\eta)$ is the negative log of the normalizing constant $C_n(\kappa)$. 
Instead of a general parameter vector $\eta$, the standard von Mises-Fisher distribution uses a unit vector $\mu=\eta/||\eta||$ and a scale or concentration parameter $\kappa=||\eta||$. If $e(z|x)$ is parameterized by a deterministic neural network for the von Mises-Fisher canonical parameter denoted $\overline{e}(x)$, then we have: \begin{align} e(z|x)=C_n(||\overline{e}(x)||)\exp(\overline{e}(x)^T z) \end{align} and $\operatorname{KL}[ e(z|x) || e(z|y) ]$ (define $y = x + \Delta x$) is: \begin{align} \label{eq:vmf_kl} (\overline{e}(x)-\overline{e}(y))^T \overline{z}(x) + \log C_n(||\overline{e}(x)||) - \log C_n(||\overline{e}(y)||) \end{align} where $\overline{z}(x)$ is the mean direction function of the distribution ($\overline{e}(x) = ||\overline{e}(x)||\overline{z}(x)$). The symmetric KL-divergence ($\operatorname{KL}[ e(z|x) || e(z|y) ] + \operatorname{KL}[ e(z|y) || e(z|x) ]$) is then: \begin{align} (\overline{e}(x)-\overline{e}(y))^T (\overline{z}(x)-\overline{z}(y)) \end{align} which is closely related to the $L_2^2$ norm of the vector $\overline{e}(x)-\overline{e}(y)$. \iffalse \begin{align} ( c u + d v)^T(w) = cu^Tw + dv^Tw \end{align} \begin{align} &(||e_x||z_x - ||e_y||z_y)^T(z_x - z_y) \\ &= ||e_x||z_x^T(z_x - z_y) - ||e_y||z_y^T(z_x - z_y) \end{align} which is the $L_2$-distance between $\overline{e}(x)$ and $\overline{e}(y)$. Thus, minimizing \Cref{eq:min_symkl1} with a standard von Mises-Fisher distribution as the encoder is directly minimizing the commonly-used $L_2$ formulation of Lipschitz continuity: \begin{align} L ||\Delta x|| \geq ||f(x) - f(x + \Delta x)||_2 \end{align} \fi Furthermore, we can choose $\kappa=||\overline{e}(x)||$ as a hyperparameter and just parameterize $e(z|x)$'s unit length mean direction $\overline{z}(x)$. Apart from choosing different $\kappa$ hyperparameters, this is exactly what we do in the C-SimCLR setting described in \Cref{sec:simclr}. 
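As a numerical sanity check on the symmetric $\operatorname{KL}$ expression above (a sketch, not from the paper's code; the random vectors stand in for the canonical parameters $\overline{e}(x)$ and $\overline{e}(y)$): the $\log C_n(\cdot)$ terms cancel when the two divergences are summed, so summing the two data terms of \Cref{eq:vmf_kl} reproduces $(\overline{e}(x)-\overline{e}(y))^T (\overline{z}(x)-\overline{z}(y))$ exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
e_x, e_y = rng.normal(size=16), rng.normal(size=16)  # canonical parameters
z_x = e_x / np.linalg.norm(e_x)                      # mean direction of e(z|x)
z_y = e_y / np.linalg.norm(e_y)                      # mean direction of e(z|y)

# Data terms of KL[e(z|x)||e(z|y)] and KL[e(z|y)||e(z|x)]; the log C_n
# normalizers cancel in the symmetric sum.
sym_kl = (e_x - e_y) @ z_x + (e_y - e_x) @ z_y
ident = (e_x - e_y) @ (z_x - z_y)
print(np.isclose(sym_kl, ident))  # True
```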
Specifically, in \Cref{sec:simclr}, minimizing the residual information term $I(X;Z|Y)$ corresponds to minimizing $\operatorname{KL}[ e(z|x) || b(z|y) ]$ instead of $\operatorname{KL}[ e(z|x) || e(z|y) ]$, where $b$ and $e$ have the same mean direction parameterization but different $\kappa$ hyperparameters, say $\kappa_e$ and $\kappa_b$. We can show that the two $\operatorname{KL}$s are actually consistent as learning objectives. With $\kappa_e, \kappa_b$ as hyperparameters, $\operatorname{KL}[ e(z|x) || e(z|y) ]$ (\Cref{eq:vmf_kl}) can be written as \begin{align} \kappa_e(\overline{z}(x)-\overline{z}(y))^T \overline{z}(x) + \log C_n(\kappa_e) - \log C_n(\kappa_e), \end{align} and $\operatorname{KL}[ e(z|x) || b(z|y) ]$ can be written as \begin{align} (\kappa_e\overline{z}(x)-\kappa_b\overline{z}(y))^T \overline{z}(x) + \log C_n(\kappa_e) - \log C_n(\kappa_b) \\ = (\kappa_e - \kappa_b) + \kappa_b(\overline{z}(x)-\overline{z}(y))^T \overline{z}(x) + \log C_n(\kappa_e) - \log C_n(\kappa_b). \end{align} It is not difficult to see that the two $\operatorname{KL}$s differ only by a scale factor and an additive constant, and thus are consistent as learning objectives. As we claimed in \Cref{sec:lipschitz} (after \Cref{eq:bidir_residual1}), the use of different constant hyperparameters $\kappa_e$ and $\kappa_b$ in the encoders of $x$ and $y$ only changes the minimum achievable $\operatorname{KL}$ divergences. We can reach the same conclusion for the residual information in the other direction, $I(Y;Z|X)$. Thus, whether or not $\kappa_e$ and $\kappa_b$ are the same, we are still minimizing the Lipschitz constant of our encoder function at each observed $(x,y)$ pair when we minimize the residual information terms in the bidirectional CEB objective (\Cref{eq:ceb_bidir}).
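The scale-and-constant relationship between the two $\operatorname{KL}$s is easy to verify numerically. Below is a sketch (not from the paper's code) in which random unit vectors stand in for the mean directions $\overline{z}(x)$ and $\overline{z}(y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
z_x, z_y = unit(rng.normal(size=8)), unit(rng.normal(size=8))
kappa_e, kappa_b = 1024.0, 10.0

# Data term of KL[e(z|x) || b(z|y)] ...
kl_eb = (kappa_e * z_x - kappa_b * z_y) @ z_x
# ... equals a constant plus (kappa_b / kappa_e) times the data term of
# KL[e(z|x) || e(z|y)], so either one drives z_bar(x) toward z_bar(y).
kl_ee = kappa_e * ((z_x - z_y) @ z_x)
print(np.isclose(kl_eb, (kappa_e - kappa_b) + (kappa_b / kappa_e) * kl_ee))  # True
```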
\begin{figure}[t] \centering \includegraphics[keepaspectratio, width=1.0\textwidth]{fig/lipschitz_simclr_train_10000_new.pdf} \caption{% Histograms of \Cref{eq:encoder_lipschitz_main} (also \Cref{eq:encoder_lipschitz} in this section) on 10,000 training images. Each local estimate is \Cref{eq:encoder_lipschitz} with an $(x, x + \Delta x)$ pair. Here $x$ is the original image and $x + \Delta x$ is the augmented image. SimCLR is in orange. C-SimCLR is in blue. More mass at lower $x$-axis values is better. We also report the mean ($\mu$) values. C-SimCLR consistently outperforms SimCLR. } \label{fig:lipschitz_train} \end{figure} \begin{figure}[t] \centering \includegraphics[keepaspectratio, width=1.0\textwidth]{fig/lipschitz_simclr_val_50000_new.pdf} \caption{% The same as \Cref{fig:lipschitz_train}, but on 50,000 validation images. } \label{fig:lipschitz_val} \end{figure} \paragraph{Estimating the local Lipschitz constant.} We can evaluate \Cref{eq:encoder_lipschitz_main} (also \Cref{eq:encoder_lipschitz} in this section) on any $(x, x + \Delta x)$ pair to estimate how smooth our model is at that point, and to compare the relative smoothness of different models. Here, we consider $(x,x + \Delta x)$ pairs where $x$ is taken either from the training or the validation set (using only a center crop in both cases), and $x + \Delta x$ is generated by either increasing or decreasing exactly one of: brightness, contrast, saturation, or hue. The absolute changes use the maximum adjustment strength of our image augmentations defined in \Cref{tab:image_aug} (e.g. for increasing brightness, we increase by 0.4). In \Cref{fig:lipschitz_train,fig:lipschitz_val}, we compare the histograms of \Cref{eq:encoder_lipschitz_main} on the SimCLR ResNet-50 model and the corresponding C-SimCLR ResNet-50 model.
On both datasets and all eight augmentations, the C-SimCLR models have substantially more mass in the lower values of the local Lipschitz estimates for those image pairs, and have lower mean values computed over the dataset. Additionally, the mean C-SimCLR results on the \emph{validation} set are almost all lower or equal to the mean SimCLR results on the \emph{training} set, so the smoothness improvements from adding compression to SimCLR appear to be substantial. The only exceptions are for `brightness+' (SimCLR training mean: 0.088, C-SimCLR validation mean: 0.089) and `brightness-' (SimCLR training mean: 0.147, C-SimCLR validation mean: 0.162). \section{Pseudocode} \label{sec:pseudocode} Listings~\ref{code:csimclr} and~\ref{code:cbyol} show Tensorflow pseudocode for C-SimCLR and C-BYOL respectively. \begin{listing}[ht] \begin{minted}[fontsize=\small]{python} tfd = tensorflow_probability.distributions def simclr_ceb_loss(x, y, f_enc, kappa_e=1024.0, kappa_b=10.0, beta=1.0): """Compute a Contrastive version of CEB loss for C-SimCLR model. In practice, we follow SimCLR to apply this loss in a bidirectional manner as loss = simclr_ceb_loss(x, y) + simclr_ceb_loss(y, x). We use the same notation as the main paper. Args: x: An augmented image view. The expected shape is [B, H, W, C]. y: An augmented image view. The expected shape is [B, H, W, C]. f_enc: An image encoder (Conv + Projection in Fig. 1). kappa_e: A float. Concentration parameter of distribution e. kappa_b: A float. Concentration parameter of distribution b. beta: CEB beta for controlling compression strength (Equation 1). Returns: A tensor `loss`. The loss is per-sample. """ # Obtain unit-length mean direction vectors with expected shape [B, r_dim]. 
r_x = tf.math.l2_normalize(f_enc(x), -1) r_y = tf.math.l2_normalize(f_enc(y), -1) batch_size = tf.shape(r_x)[0] labels_idx = tf.range(batch_size) # Labels are pseudo-labels which mark corresponding positives in a batch labels = tf.one_hot(labels_idx, batch_size) mi_upper_bound = tf.math.log(tf.cast(batch_size, tf.float32)) e_zx = tfd.VonMisesFisher(r_x, kappa_e) b_zy = tfd.VonMisesFisher(r_y, kappa_b) z = e_zx.sample() log_e_zx = e_zx.log_prob(z) log_b_zy = b_zy.log_prob(z) i_xzy = log_e_zx - log_b_zy # residual information I(X;Z|Y) logits_ab = b_zy.log_prob(z[:, None, :]) # broadcast # The following categorical corresponds to c(y|z) and d(x|z) in Equation 12: cat_dist_ab = tfd.Categorical(logits=logits_ab) h_yz = -cat_dist_ab.log_prob(labels_idx) i_yz = mi_upper_bound - h_yz loss = beta * i_xzy - i_yz return loss \end{minted} \caption{Tensorflow pseudocode of C-SimCLR.} \label{code:csimclr} \end{listing} \begin{listing} \begin{minted}[fontsize=\small]{python} tfd = tensorflow_probability.distributions def byol_ceb_loss(x, x_prime, f_enc, f_enc_target, q_net, l_net, m_net, kappa_e=16384.0, kappa_b=10.0, beta=1.0, byol_loss_weight=2.0): """Compute loss for C-BYOL model. The notation corresponds to Section 2.3 and Figure 2 of the paper. This code presents an updated version of C-BYOL as described in the general response. In practice, we follow BYOL to apply this loss in a bidirectional manner as loss = byol_ceb_loss(x, x_prime, ...) + byol_ceb_loss(x_prime, x, ...). We use the same notation as the main paper. Args: x: An augmented image view. The expected shape is [B, H, W, C]. x_prime: An augmented image view. The expected shape is [B, H, W, C]. f_enc: An image encoder (Conv + Projection in Fig. 2). f_enc_target: The target image encoder. A slow moving-average of f_enc. q_net: The BYOL predictor, which is a two-layer MLP. l_net: A transformation function. We choose a linear layer in this work. m_net: A transformation function. We choose a two-layer MLP in this work. 
kappa_e: A float. Concentration parameter of distribution e. kappa_b: A float. Concentration parameter of distribution b. beta: CEB beta for controlling compression strength (Equation 1). byol_loss_weight: BYOL loss weight. byol_loss_weight = kappa_d/2. Returns: A tensor `loss`. The loss is per-sample. """ r = f_enc(x) mu_e = tf.math.l2_normalize(q_net(r), -1) e_zx = tfd.VonMisesFisher(mu_e, kappa_e) z = e_zx.sample() y_pred = tf.math.l2_normalize(l_net(z), -1) r_t = tf.math.l2_normalize(f_enc_target(x), -1) y = tf.stop_gradient(r_t) mu_b = tf.math.l2_normalize(m_net(y), -1) b_zy = tfd.VonMisesFisher(mu_b, kappa_b) r_t_prime = tf.math.l2_normalize(f_enc_target(x_prime), -1) y_prime = tf.stop_gradient(r_t_prime) # byol_loss corresponds to -log d(y|z) as described in Section 2.3 byol_loss = tf.reduce_sum(tf.math.square(y_pred - y_prime), axis=-1) log_e_zx = e_zx.log_prob(z) log_b_zy = b_zy.log_prob(z) i_xzy = log_e_zx - log_b_zy loss = byol_loss_weight * byol_loss + beta * i_xzy return loss \end{minted} \caption{Tensorflow pseudocode of C-BYOL.} \label{code:cbyol} \end{listing}
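For readers without TensorFlow at hand, the contrastive term of Listing~\ref{code:csimclr} can be sketched in plain numpy. This is a simplified illustration, not the paper's code: the helper name is hypothetical, sampling $z \sim e(z|x)$ is replaced by using the mean directions directly, and the vMF log-normalizer $\log C_n(\kappa_b)$ is dropped because it is constant across the batch and cancels inside the softmax:

```python
import numpy as np

def log_softmax(logits):
    m = logits.max(axis=1, keepdims=True)
    return logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))

def i_yz_lower_bound(z, mu_b, kappa_b=10.0):
    # logits[i, j] = log b(z_i | y_j) up to a constant: kappa_b * <z_i, mu_b_j>
    batch_size = z.shape[0]
    logits = kappa_b * z @ mu_b.T
    h_yz = -np.diag(log_softmax(logits))   # -log c(y_i | z_i)
    return np.log(batch_size) - h_yz       # per-sample i_yz, at most log B

# Toy batch of unit vectors standing in for sampled z and mean directions mu_b.
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)
bound = i_yz_lower_bound(mu, mu)           # perfectly aligned positives
print(bound.shape, bool(np.all(bound <= np.log(4) + 1e-9)))
```

Since $h_{yz} \geq 0$, the per-sample bound never exceeds $\log B$, matching the `mi_upper_bound` cap in the listing.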
For subroutines, you can use either Sub or Function. A Function can return a value, but a Sub cannot. ADD_VALUE = B + 1 ' Return B+1. When you declare a variable inside a subroutine or a function, it is considered a "local" variable. Local variables are created when the subroutine or function is called and removed when it exits. This means that local variables temporarily allocate data memory for the duration of the call. Local variables may only be referred to or used inside the subroutine or function in which they were declared. Global variables, on the other hand, may be used anywhere in your program. In the program above, A is declared as a global variable and K is declared as a local variable. A can be used anywhere in the program, but K can only be used inside the subroutine DELAYTIME. Arrays cannot be declared as local variables; they must be declared as global variables. Once a subroutine is created, it can be used like any other statement. For a subroutine, you do not need parentheses around the parameters. Use commas to separate multiple parameters. For a function, you need parentheses around the parameters. Parentheses are required even when there are no parameters. The End statement is used to separate the BASIC main program from the program's subroutines. The END statement in Ladder Logic is used to indicate the final Ladder Logic rung. ' will be used as return value. Arrays cannot be used as parameters. The following is not allowed. Function ARRAYUSING(A(10) AS Integer) ' Arrays may not be used as parameters. K = ARRAYUSING(b(10)) ' Use 10th element of array b as a parameter. All subroutine parameters are passed by value, not by reference. If a parameter value is changed within a subroutine, it will not affect the variable passed to the subroutine. V = V + 10 ' A does not change when V is changed.
In contrast, some languages allow values to be passed by reference, in which case the actual memory address is passed to the subroutine. Cubloc BASIC only supports passing by value. Use an apostrophe/single-quote (') to add comments. Comments are discarded at compile time and will not consume any program memory. Nested subroutine calls are supported in Cubloc. A=Floor(SQR(F)) ' Do Floor() on Sqr(F). Colons cannot be used to chain commands in Cubloc BASIC, as is possible in some other languages. A=1: B=1 : C=1 ' Incorrect.
Q: Natural filtration and Kolmogorov existence theorem Consider a stochastic process $(X_t)_{t \in T}$ where $T\subseteq[0,\infty)$. The natural filtration $\mathscr F^X=(\mathscr F_t^X)_{t \in T}$ is defined by $$\mathscr F_t^X=\sigma\{X_s^{-1}(B)|0\lt s\le t,B\in\mathscr B(\Bbb R) \}$$ In a book I have read, the author wants to define a mapping $$\psi:\omega\mapsto (X_t)_{t\in T}(\omega):=(\omega(t))_{t\in T}$$ from the underlying probability space $(\Omega,\mathscr A, P)$ to a measure space $(\Bbb R^T, \mathscr S_{cyl})$, the space of sample paths with the sigma-algebra generated by the cylinder sets. Thus, $$\Bbb R^T=\{f:T\to \Bbb R\}\qquad\mathscr S_{cyl}=\sigma \{f\in\Bbb R^T:\forall i=1,2,...,n,\ f(t_i)\in B_i\}$$ and $$X_t:(\Bbb R^T, \mathscr S_{cyl})\to(\mathbb R,\mathcal B(\mathbb R)),\quad X_t(f)=f(t)$$ is the coordinate mapping, used in order to apply the Kolmogorov Existence Theorem to construct a probability distribution over the time instances $0\le t_1\le\cdots\le t_n\le T$. Equivalently, they use a separable process and define the natural filtration $\mathscr G=(\mathscr G_t)$ via the sigma-algebras $\mathscr G_t$ generated by the following cylinder sets $$A=\{\omega \in \Omega:\forall i=1,2,...,n,\ X_{s_i}(\omega) \in B_i\}$$ for $0\le s_1\le\cdots\le s_n \le t$. My questions are: 1. Where is the natural filtration $\mathscr G$ defined? On $(\Bbb R^T, \mathscr S_{cyl})$? Where is the equivalence to the usual definition $\mathscr F^X$ given above? 2. If I want to calculate the expectation value $E[X^*_t]$ with $$X^*_t=\sup_{0\le t\le T} X_t$$ am I calculating it on $(\Bbb R^T, \mathscr S_{cyl})$ or $(\Omega,\mathscr A, P)$? I am confused between this pair of spaces and how to use them. A: Now that a great editing job has been done on the post (I added 2 minor notational changes), here is an answer.
I am able to give you an answer to the first question. Of course $\mathcal{G}$ lives on $\Omega$; does it need the space $(\Bbb R^T, \mathscr S_{cyl})$ to be defined? The answer is: indeed it does! To show this, and to answer the last part of your first question, let me write a plain generator set $A\in \mathcal{G}$ again, departing from your definition and noting that there is an abuse of notation here: $$A=\{\omega \in \Omega:\forall i=1,2,...,n, s_i \in [0,t]\cap T, \ X_{s_i}(\omega) \in B_i\}$$ Note that $\ X_{s_i}(\omega)$ has no meaning unless you identify (which is done implicitly) $\omega$ with $\psi(\omega)$, so going back to the definition we get: $$A=\{\omega \in \Omega:\forall i=1,2,...,n, s_i \in [0,t]\cap T,\ X_{s_i}(\psi(\omega))\in B_i\}$$ Ok, now we get a clearer picture. To get $\mathscr F_t^X$, you have to use a measure-theoretic result, but on a collection of sets that generates this filtration. But let's suppose that we know it is generated by sets $B$ (let's call them cylindrical sets), which can be defined like this: $$B=\{\omega \in \Omega:\forall i=1,2,...,n, s_i \in [0,t]\cap T,\ \omega \in X^{-1}_{s_i}(B_i)\}$$ Here again $X_{s_i}$ is identified with the composition $X_{s_i}(\psi(\cdot))$, so $X_{s_i}^{-1}(.)= \psi^{-1}(X_{s_i}^{-1}(.))$. So if we want to write things without any implicit notation we get: $$B=\{\omega \in \Omega:\forall i=1,2,...,n, s_i \in [0,t]\cap T,\ \omega \in \psi^{-1}(X^{-1}_{s_i}(B_i))\}$$ Now compare the sets $A$ and $B$: they are exactly the same sets, which gives the equivalence you are looking for, as long as you know that sets of the form $B$ (or $A$) generate the natural filtration (I still need to find the proper reference for this, but unless I am mistaken it is usually proven with a Monotone Class Theorem argument, which is a bit heavy for me, so I pass).
For the second question, you certainly compute this over $\Omega$, but to carry out any calculation of this type you (always) implicitly use the transfer theorem (this is a French appellation; I don't know the correct name for it in English, so please edit this if you know the correct denomination) to pass, for example, to the image measure from $\Omega$ to $\Bbb R$ with respect to Lebesgue measure in order to express the law of $X^*$.
The Gruevski III government (in ) was the government of the former Yugoslav Republic of Macedonia between the and the , during the seventh legislature of the Assembly. Coalition and history Led by the outgoing conservative President of the Government Nikola Gruevski, in power since , this government was formed and supported by a governing coalition between the Internal Macedonian Revolutionary Organization - Democratic Party for Macedonian National Unity (VMRO-DPMNE) and the Democratic Union for Integration (BDI/DUI). Together, they held 71 of the 123 deputies, or 57.7% of the seats in the Assembly. It was formed following the early parliamentary elections of 5 June 2011 and succeeded the Gruevski II government, formed and supported by an identical coalition. Following a political crisis triggered by judicial actions against media outlets close to the social-democratic opposition, the President of the Government agreed to hold elections a year early. In that vote, VMRO-DPMNE lost significant ground, but the strong performance of BDI/DUI and the opposition's modest score allowed the ruling alliance to be renewed. On the , after a conflict between the two components of the coalition over the nomination of a joint candidate for the presidential election of the following 13 and 27 April, Nikola Gruevski granted his allies' request and called early parliamentary elections for 27 April. In that election, the two forces of the ruling coalition gained ground, reaching 80 of the 123 deputies. On 19 June, it handed over power to the Gruevski IV government, formed by a similar coalition. Composition Initial (28 July 2011) Reshuffle of 18 February 2013: new ministers are indicated in bold, those whose portfolios changed in italics. Reshuffle of 29 May 2013: new ministers are indicated in bold, those whose portfolios changed in italics. Annexes Related articles: Politics of Macedonia; Gruevski I government; Gruevski II government. External link: Official site of the government of the Republic of Macedonia.
.class Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;
.super Ljava/lang/Object;
.source "HoneycombMr1AnimatorCompatProvider.java"

# interfaces
.implements Landroid/support/v4/animation/ValueAnimatorCompat;

# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
    value = Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider;
.end annotation

.annotation system Ldalvik/annotation/InnerClass;
    accessFlags = 0x8
    name = "HoneycombValueAnimatorCompat"
.end annotation

# instance fields
.field final mWrapped:Landroid/animation/Animator;

# direct methods
.method public constructor <init>(Landroid/animation/Animator;)V
    .registers 2
    .param p1, "wrapped"    # Landroid/animation/Animator;

    .prologue
    .line 46
    invoke-direct {p0}, Ljava/lang/Object;-><init>()V

    .line 47
    iput-object p1, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    .line 48
    return-void
.end method

# virtual methods
.method public addListener(Landroid/support/v4/animation/AnimatorListenerCompat;)V
    .registers 4
    .param p1, "listener"    # Landroid/support/v4/animation/AnimatorListenerCompat;

    .prologue
    .line 57
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    new-instance v1, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$AnimatorListenerCompatWrapper;

    invoke-direct {v1, p1, p0}, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$AnimatorListenerCompatWrapper;-><init>(Landroid/support/v4/animation/AnimatorListenerCompat;Landroid/support/v4/animation/ValueAnimatorCompat;)V

    invoke-virtual {v0, v1}, Landroid/animation/Animator;->addListener(Landroid/animation/Animator$AnimatorListener;)V

    .line 58
    return-void
.end method

.method public addUpdateListener(Landroid/support/v4/animation/AnimatorUpdateListenerCompat;)V
    .registers 4
    .param p1, "animatorUpdateListener"    # Landroid/support/v4/animation/AnimatorUpdateListenerCompat;

    .prologue
    .line 77
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    instance-of v0, v0, Landroid/animation/ValueAnimator;

    if-eqz v0, :cond_12

    .line 78
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    check-cast v0, Landroid/animation/ValueAnimator;

    new-instance v1, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat$1;

    invoke-direct {v1, p0, p1}, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat$1;-><init>(Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;Landroid/support/v4/animation/AnimatorUpdateListenerCompat;)V

    invoke-virtual {v0, v1}, Landroid/animation/ValueAnimator;->addUpdateListener(Landroid/animation/ValueAnimator$AnimatorUpdateListener;)V

    .line 87
    :cond_12
    return-void
.end method

.method public cancel()V
    .registers 2

    .prologue
    .line 72
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    invoke-virtual {v0}, Landroid/animation/Animator;->cancel()V

    .line 73
    return-void
.end method

.method public getAnimatedFraction()F
    .registers 2

    .prologue
    .line 91
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    check-cast v0, Landroid/animation/ValueAnimator;

    invoke-virtual {v0}, Landroid/animation/ValueAnimator;->getAnimatedFraction()F

    move-result v0

    return v0
.end method

.method public setDuration(J)V
    .registers 4
    .param p1, "duration"    # J

    .prologue
    .line 62
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    invoke-virtual {v0, p1, p2}, Landroid/animation/Animator;->setDuration(J)Landroid/animation/Animator;

    .line 63
    return-void
.end method

.method public setTarget(Landroid/view/View;)V
    .registers 3
    .param p1, "view"    # Landroid/view/View;

    .prologue
    .line 52
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    invoke-virtual {v0, p1}, Landroid/animation/Animator;->setTarget(Ljava/lang/Object;)V

    .line 53
    return-void
.end method

.method public start()V
    .registers 2

    .prologue
    .line 67
    iget-object v0, p0, Landroid/support/v4/animation/HoneycombMr1AnimatorCompatProvider$HoneycombValueAnimatorCompat;->mWrapped:Landroid/animation/Animator;

    invoke-virtual {v0}, Landroid/animation/Animator;->start()V

    .line 68
    return-void
.end method
Henrik Gunnar Lundegårdh (born 23 October 1888 in Stockholm, died 19 November 1969 in Länna parish) was a Swedish botanist. Lundegårdh matriculated at Stockholm University College in 1907, took his licentiate of philosophy in 1912 and his doctorate in 1913. He became a docent in botany at Stockholm University College in 1912 and at Lund University from 1915, and from 1917 was director of the ecological station on Hallands Väderö. In 1926 he became head of the botanical division of Centralanstalten för jordbruksförsök (the Central Institute for Agricultural Field Experiments). Lundegårdh undertook scientific travels and produced an extensive body of writing across different areas of botany, such as cytology, physiology, experimental morphology, ecology and chemistry. Among his many treatises are the investigations of protoplasm and nuclear structures, karyotin, nuclear division in living cells, the permeability of roots, plasmolysis and its reversal, the conditions for root formation, the stimulus-induced movements of leaves and adventitious roots, the relation between stimulus intensity and response, the architecture of trees, the significance of carbon-dioxide fertilisation, and quantitative spectral analysis, among other topics, as well as the works on general biological problems Klima und Boden (1925, 2nd edition 1930) and Reizphysiologische Probleme (1926). From 1935 to 1955 he was professor of plant physiology at Lantbrukshögskolan (the Agricultural College of Sweden). He became a member of the Royal Swedish Academy of Agriculture in 1927, and in 1943 a member of the Royal Swedish Academy of Sciences and the Royal Swedish Academy of Engineering Sciences. He was the father of the geologist Per H. Lundegårdh. Sources: Hjalmar Gullberg & Torsten Uggla: Svensk biografisk kalender, band I, Malmöhus län, Stockholm 1919, p. 209; Tidens kalender 1969.
Q: What are the Legendre symbols $\left(\frac{10}{31}\right)$ and $\left(\frac{-15}{43}\right)$? I have the following two Legendre symbols that need calculated: $\left(\frac{10}{31}\right)$ $=$ $-\left(\frac{31}{10}\right)$ $=$ $-\left(\frac{1}{10}\right)$ $=$ $-(-1)$ $=$ $-1$ $\left(\frac{-15}{43}\right)$ $=$ $\left(\frac{43}{15}\right)$ $=$ $\left(\frac{13}{15}\right)$ $=$ $-1$ is that correct? I just want to make sure I am understanding this concept. A: Here is an approach I would take. Note that $$\left(\frac{10}{31}\right)=\left(\frac{-21}{31}\right)=\left(\frac{-1}{31}\right)\left(\frac{3}{31}\right)\left(\frac{7}{31}\right)=(-1)\Biggl(-\left(\frac{31}{3}\right)\Biggr)\Biggl(-\left(\frac{31}{7}\right)\Biggr)\,.$$ That is, $$\left(\frac{10}{31}\right)=-\left(\frac{1}{3}\right)\left(\frac{3}{7}\right)=-(+1)\Biggl(-\left(\frac{7}{3}\right)\Biggr)=\left(\frac{1}{3}\right)=+1\,.$$ This can be verified by noting that $6^2\equiv 5\pmod{31}$ and $8^2\equiv 2\pmod{31}$, so $$17^2\equiv (6\cdot 8)^2\equiv 5\cdot2=10\pmod{31}\,.$$ For the second part, note that $$\left(\frac{-15}{43}\right)=\left(\frac{-1}{43}\right)\left(\frac{3}{43}\right)\left(\frac{5}{43}\right)=(-1)\Biggl(-\left(\frac{43}{3}\right)\Biggr)\left(\frac{43}{5}\right)\,.$$ Therefore, $$\left(\frac{-15}{43}\right)=\left(\frac{1}{3}\right)\left(\frac{3}{5}\right)=\left(\frac{3}{5}\right)\,.$$ It is easy to verify that $\left(\dfrac{3}{5}\right)=-1$, whence $$\left(\frac{-15}{43}\right)=-1\,.$$ You can check that $12^2\equiv 15\pmod{43}$, so $\left(\dfrac{15}{43}\right)=+1$, whereas $\left(\dfrac{-1}{43}\right)=-1$, confirming the calculations. A: As remarked in comments, to use quadratic reciprocity we need to work with Legendre Symbols $$ \left(\frac{p}{q}\right)$$ for $p,q$ prime. You should repeatedly use the property $$ \left(\frac{ab}{p}\right)=\left(\frac{a}{p}\right)\left(\frac{b}{p}\right)$$ to make sure that you are calculating with both parts of the symbol being prime. 
That is, write $$ \left(\frac{10}{31}\right)=\left(\frac{2}{31}\right)\left(\frac{5}{31}\right)$$ then iteratively apply quadratic reciprocity as you intended to.
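The values above are easy to sanity-check numerically. Here is a small sketch (mine, not part of the original answers) using Euler's criterion, $a^{(p-1)/2} \equiv \left(\frac{a}{p}\right) \pmod p$ for an odd prime $p$:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    t = pow(a, (p - 1) // 2, p)  # fast modular exponentiation
    return 1 if t == 1 else -1

print(legendre(10, 31))   # 1  (indeed 17^2 = 289 ≡ 10 (mod 31))
print(legendre(-15, 43))  # -1
```

Both values agree with the answers above: $\left(\frac{10}{31}\right)=+1$ and $\left(\frac{-15}{43}\right)=-1$, confirming that the asker's first computation had the wrong sign.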
Timo Schisanowski (born 27 August 1981 in Hagen-Haspe) is a German politician (SPD) and has been a member of the German Bundestag since 2021. Education and career: Schisanowski was born in the Heilig-Geist-Spital in Haspe as the child of a working-class family. His father is a baker and his mother a nursery-school teacher. He grew up together with a younger brother. He took his Abitur at the Christian-Rohlfs-Gymnasium in Hagen and then completed his civilian service at the St.-Josefs-Hospital in Hagen-Altenhagen. He later studied law at the Ruhr-Universität Bochum and the FernUniversität in Hagen. He then worked as a commercial lawyer in Bochum at VBW Bauen und Wohnen GmbH, as branch manager of the charitable VBW Stiftung. Political activities: Timo Schisanowski has been a member of the SPD since 2000. He was chairman of the Hagen Jusos. Since 2012 he has been chairman of the SPD sub-district of Hagen. From 2004 to 2012 he was a member of the Hagen city council, always directly elected in Haspe; since 2020 he has again sat on the Hagen city council as a directly elected member for Haspe. He has also been a member of the Ruhr Parliament since 2020. In the 2021 federal election he won the direct mandate in the constituency Hagen – Ennepe-Ruhr-Kreis I with 33.3% of the first votes and thereby entered the 20th German Bundestag. In December 2020 he had narrowly prevailed, in a vote of the constituency's SPD delegate assembly, over René Röspel, who had been an SPD member of the Bundestag for 23 years. Schisanowski also stood in 51st place on the state list of the SPD in North Rhine-Westphalia. Memberships: Schisanowski is a member of the Arbeiterwohlfahrt, of Arbeit schaffen in Haspe, of Hackebämmels Enkel, of the Hasper Heimat und Brauchtum Verein (HHBV) and of Hasper SV, and since October 2022 he has been chairman of the North Rhine-Westphalian state association of the Reichsbanner Schwarz-Rot-Gold. He is also a supporter of Amnesty International. Controversies: In January 2022 it became known that Schisanowski had unlawfully used a picture of the June 2021 flood disaster by the freelance journalist Alex Talash for his Facebook and Instagram presence during the 2021 election campaign. Because of the copyright infringement, Schisanowski had to pay around 5,500 euros. Weblinks: Official website of Timo Schisanowski.
Q: How to find frequency of characters in a string without using array in java

Given a String, I want to create a frequency distribution of characters in the String. That is, for each distinct character in the string, I want to count how many times it occurs. Output is a String that consists of zero or more occurrences of the pattern xd, where x is a character from the source String, and d is the number of occurrences of x within the String. Each x in the output should occur once. The challenge is to do this without using an array or Collection.

Examples:
Source: "aasdddr"        Result: "a2s1d3r1"
Source: "aabacc"         Result: "a3b1c2"
Source: "aasdddraabcdaa" Result: "a6s1d4r1b1c1"

I tried this way:

    String str = "aasdddr", result = "";
    int counter = 0;
    for (int i = 0; i < str.length(); i++) {
        result += "" + str.charAt(i);
        for (int j = 1; j < str.length(); j++) {
            if (str.charAt(i) == str.charAt(j)) {
                counter++;
            }
        }
        result += counter;
    }
    System.out.println(result);

My output is a1a2s3d6d9d12r13

A: Finally, I found the solution, though any question can have more than one. First, we declare an empty string to hold the result. We use a nested loop, because the outer loop keeps one character fixed while the inner loop scans the whole string. The count variable is declared inside the outer loop: it is incremented on each match, and re-created as zero before the next character's check. Finally, after the inner loop, we check whether that character is already in the result string; if it is not, the character is appended, followed by its frequency (count). Outside the loop, we can print the result.
    public class FrequenciesOfChar {
        public static void main(String[] args) {
            String str = "aabcccd"; // be sure that you don't have any digit in your string
            String result = "";     // this will hold the new string
            for (int i = 0; i < str.length(); i++) { // holds one character until it is checked by the inner loop
                int count = 0; // placed here so that it is zero again for each new character
                for (int j = 0; j < str.length(); j++) {
                    if (str.charAt(i) == str.charAt(j)) { // check whether this is the same character
                        count++; // if it is the same character, count increases
                    }
                }
                if (!(result.contains("" + str.charAt(i)))) { // check whether result already contains this character
                    result += "" + str.charAt(i); // if not, first the character is added
                    result += count;              // then the character's frequency is added
                }
            }
            System.out.println(result);
        }
    }

Run Result: aabcccd - a2b1c3d1

A: First, counter needs to be reset inside the for loop. Each time you encounter a character in the source String, you want to restart the counter. Otherwise, as you have seen, the value of the counter is strictly increasing. Now, think about what happens if a character occurs in more than one place in the source String, as in the "aasdddraabcdaa" example. A sequence of 1 or more a appears in 3 places. Because, by the time you get to the 2nd occurrence of a, a has been previously counted, you want to skip over it. Because the source String cannot contain digits, the result String can be used to check if a particular character value has already been processed.
So, after fixing the problem with counter, the code can be fixed by adding these two lines:

    if (result.indexOf(source.charAt(i)) >= 0) {
        continue;
    }

Here is the complete result (counter is reset to 0 for each new character, and the inner loop starts at index 0 so every occurrence is counted exactly once):

    package stackoverflowmisc;

    public class StackOverflowMisc {

        public static String freqDist(String source) {
            String result = "";
            int counter;
            for (int i = 0; i < source.length(); i++) {
                // Skip characters that have already been counted.
                if (result.indexOf(source.charAt(i)) >= 0) {
                    continue;
                }
                counter = 0;
                result += source.charAt(i);
                for (int j = 0; j < source.length(); j++) {
                    if (source.charAt(i) == source.charAt(j)) {
                        counter++;
                    }
                }
                result += counter;
            }
            return result;
        }

        public static void main(String[] args) {
            String[] test = {"aasdddr", "aabacc", "aasdddraabcdaa"};
            for (int i = 0; i < test.length; ++i) {
                System.out.println(test[i] + " - " + freqDist(test[i]));
            }
            System.out.println("End of Program");
        }
    }

Run results:

    aasdddr - a2s1d3r1
    aabacc - a3b1c2
    aasdddraabcdaa - a6s1d4r1b1c1
    End of Program

In one of the Q&A comments, you said the source string can contain only letters. How would the program work if it were allowed to contain digits? You can't use the result String, because the processing inserts digits there. Again, this is an easy fix: add a 3rd String to record which values have already been found:

    public static String freqDist2(String source) {
        String result = "", found = "";
        int counter;
        for (int i = 0; i < source.length(); i++) {
            if (found.indexOf(source.charAt(i)) >= 0) {
                continue;
            }
            counter = 0;
            result += source.charAt(i);
            found += source.charAt(i);
            for (int j = 0; j < source.length(); j++) {
                if (source.charAt(i) == source.charAt(j)) {
                    counter++;
                }
            }
            result += counter;
        }
        return result;
    }

Another possibility is to delete the corresponding characters from the source String as they are counted. If you are not allowed to modify the source String, make a copy and use the copy.
Comment: I don't know if this is what your professor or whomever had in mind by placing the "No array" restriction, because a String is essentially built on a char array.
Q: How exactly does the Hahn-Banach theorem explain duality of vector spaces? Serge Lang's Linear Algebra textbook just introduced me to the concept of the dual space in very formal terms: the space of all linear functionals on $V$, i.e. linear maps into the field $\mathbb{K}$ regarded as a $1$-dimensional vector space over itself. But the textbook did not explain the exact purpose of the term "duality", so I decided to go a little further and dive into some basic functional analysis. The Uncertainty Principle by Terence Tao (reference): Terence Tao wrote a really nice article on the concept of duality, explained in terms of local and global perspectives. In his first example: Vector space duality. A vector space ${V}$ over a field ${F}$ can be described either by the set of vectors inside ${V}$, or dually by the set of linear functionals ${\lambda: V \rightarrow F}$ from ${V}$ to the field ${F}$ (or equivalently, the set of vectors inside the dual space ${V^*}$). (If one is working in the category of topological vector spaces, one would work instead with continuous linear functionals; and so forth.) A fundamental connection between the two is given by the Hahn-Banach theorem (and its relatives). As you see in the last sentence (in italic font), Tao mentions that the Hahn-Banach theorem displays the fundamental connection between a vector space $V$ and its dual $V^*$, so I decided to investigate this concept a little further. Hahn-Banach Theorem and Dual space: There is a question regarding a similar connection on Math SE, but I'm not certain whether it answers my question. From my understanding of the answers below the referenced question, the Hahn-Banach theorem implies that for any nonzero vector $v \in V$, there exists a functional $L \in V^*$ such that $|L(v)|=||v||_{V}$ and $||L||_{V^*}=1$.
The definition of the norm on the dual space is: $$||L||_{V^*}=\sup\{|L(v)|: v \in V,\ ||v||_V \leq 1 \}$$ where $\sup$ denotes the supremum of the set. I also know that every $L \in V^*$ is a bounded linear functional (i.e. there exists $C > 0$ such that $|L(v)| \leq C||v||_{V}$ for all $v \in V$; the smallest such $C$ is called the operator norm). This (along with the definition of the dual norm and the Hahn-Banach theorem) yields another interesting relation: $$||v||_{V}=\sup\{|L(v)|: L \in V^*,\ ||L||_{V^*} \leq 1 \}$$ Riesz representation theorem (extension): According to comments made by Berci below this post, complete inner product spaces (Hilbert spaces) have a special relationship with their dual spaces. Let $H$ be a Hilbert space over the field $\mathbb{R}$; this relationship can be seen via the Riesz representation theorem, which asserts that $H$ and $H^*$ are isometrically isomorphic (whereas in the complex case they are anti-isomorphic). In more specific detail, it shows that for every functional $L \in H^*$ there exists a unique $g \in H$ such that $L(x) = \langle{} x, g \rangle{}$ for every $x \in H$. Moreover, as a consequence of the isometry, $||L||_{H^*} = ||g||_{H}$. This theorem establishes an interesting connection between inner products and functionals. In fact, I believe it can be used as an extension of the Hahn-Banach theorem to see the deeper connection from a geometric perspective, since the isometric isomorphism that the Riesz representation gives corresponds to a hyperplane paired with its unit normal vector (and the existence of such norm-one functionals seems to be a consequence of the Hahn-Banach theorem). This can be understood more concretely in the specific case of $L^p$ spaces, since they have interesting properties such as natural isomorphisms of their duals, but I don't believe I have sufficient experience to organise this information yet.
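For concreteness, the isometry claim in the real case can be verified directly (a short standard check I am writing out for myself):

```latex
\text{Fix } 0 \neq g \in H \text{ and define } L(x) = \langle x, g \rangle. \\[4pt]
\text{Cauchy--Schwarz gives } |L(x)| = |\langle x, g \rangle| \le \|x\|_H \,\|g\|_H,
\quad\text{so } \|L\|_{H^*} \le \|g\|_H. \\[4pt]
\text{Taking } x = g/\|g\|_H \text{ yields } |L(x)| = \frac{\langle g, g \rangle}{\|g\|_H} = \|g\|_H,
\quad\text{so } \|L\|_{H^*} = \|g\|_H.
```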
Question: How exactly does the Hahn-Banach theorem show the fundamental connection between a vector space and its dual, as mentioned by Terrence Tao? Is it just that every vector $v$ has a corresponding functional which has the norm $||v||$? Is there more abstract explanation involving the idea of dual norm? Thank you!
Weighted Average Formula

The weighted mean of observations $x_1, \dots, x_n$ with non-negative weights $w_1, \dots, w_n$ is
$$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}.$$
The formulas are simplified when the weights are normalized so that they sum up to $1$, i.e. $w_i' = w_i / \sum_{j=1}^{n} w_j$ with $\sum_{i=1}^{n} w_i' = 1$; for such normalized weights the weighted mean is simply $\bar{x} = \sum_{i=1}^{n} w_i' x_i$. The weights cannot be negative. Because one can always transform non-normalized weights to normalized weights, every formula stated for one form can be adapted to the other.

From a statistics point of view, we have a sample $x_1, \dots, x_n$, where each value is from a Gaussian distribution having the same mean $\mu$ but possibly different precision. Treating all points alike ignores the fact that some measurements are more precise than others and should therefore be given more importance. The variance of the weighted mean, $\sigma_{\bar{x}}$, attains its minimum when all weights are equal (i.e. the unweighted mean), in which case $\sigma_{\bar{x}} = \sigma_0 / \sqrt{n}$.

Weighted sample variance. The weighted standard deviation is simply the square root of the weighted variance. In the weighted setting there are two different unbiased estimators, one for the case of frequency weights and another for the case of reliability weights; you will get different answers depending on whether the weights are frequencies (repeat counts) or reliabilities (precisions).

If the weights are frequency weights, then the unbiased estimator is
$$s^2 = \frac{\sum_{i=1}^{N} w_i (x_i - \mu^*)^2}{\sum_{i=1}^{N} w_i - 1}.$$

For reliability weights, the naive weighted variance $\hat{\sigma}^2_w$ is biased by the factor $\left(1 - \frac{V_2}{V_1^2}\right)$, analogous to the $\frac{N-1}{N}$ factor in the unweighted case, where $V_1 = \sum_{i=1}^{N} w_i$ and $V_2 = \sum_{i=1}^{N} w_i^2$. The final unbiased estimate of the sample variance is
$$s^2 = \frac{\hat{\sigma}^2_w}{1 - V_2/V_1^2} = \frac{\sum_{i=1}^{N} w_i (x_i - \bar{x})^2}{V_1 - V_2/V_1}.$$
If you have a doubt, check it by setting all the weights equal to 1: you will obtain the classical formula for the unbiased estimate of the standard deviation, with $N-1$ in the denominator.

Exponentially decreasing weights. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction $0 < \Delta < 1$ at each time step. Setting $w = 1 - \Delta$, one can define $m$ normalized weights by $w_i = \frac{w^{i-1}}{V_1}$, where in this case $V_1$ is simply $V_1 = \sum_{i=1}^{m} w^{i-1} = \frac{1 - w^m}{1 - w}$.

Over- or under-dispersion. If there is error in each data point itself, there will be some error in the variance of each data point, and the variance of the weighted mean must be corrected to account for the fact that $\chi^2$ is too large.

References cited on the original page: GNU Scientific Library, Reference Manual, Version 1.15, 2011; Hardy, Littlewood & Pólya, Inequalities (2nd ed.), Cambridge University Press, ISBN 978-0-521-35880-4, 1988; Statistical Methods in Experimental Physics (2nd ed.).
# Why doesn't the subset sum solution violate the Exponential Time Hypothesis?

The quickest algorithm for solving subset sum currently runs in $2^{n/2}$ time (via Wikipedia). Why doesn't this violate the Exponential Time Hypothesis, which states that "there is no family of algorithms that can solve 3-SAT in $2^{o(n)}$ time"?

Couldn't a 3-SAT problem be translated to a subset sum problem in polynomial time and then solved in $2^{n/2}$ time? What am I missing here?

Comments:

• n goes up by a very big factor when you reduce 3-SAT to subset sum – rotia
• Well, it also says that any related NP-complete problem (i.e. subset sum) cannot be solved in subexponential time. Isn't $2^{n/2}$ subexponential? – C Shreve
• Isn't it $2^{poly(n)}$, which is exponential? – Evil
• $n/2$ is not $o(n)$. It is $O(n)$. Little-o versus big-O. $2^{n/2}$ is not subexponential; it is larger than $1.4^n$. – gnasher729

Answer: Given a 3SAT problem of size $n$, you can convert it to a subset sum problem of size $n^2$ (roughly). Now you can apply the algorithm for subset sum to solve that instance in time $2^{n^2/2}$ (roughly). Remember, the running time depends on the size (length) of the input, and here the size of the input will be $n^2$, yielding the $2^{n^2/2}$ figure.

That gives you an algorithm for solving a 3SAT problem of size $n$ in $2^{n^2/2}$ time (roughly). But that's much worse than the naive algorithm, which takes $2^n$ time: $2^n$ is much smaller than $2^{n^2/2}$. So this reduction does not contradict the Exponential Time Hypothesis: it doesn't give you a way to solve 3SAT faster than $2^n$ time.

• @CShreve, the ETH doesn't say that. Read it again. $2^{n/2}$ time is not subexponential. ETH talks about SAT, not all NP-complete problems, and it talks about $2^{o(n)}$, and $n/2$ is not $o(n)$. See en.wikipedia.org/wiki/Exponential_time_hypothesis, cs.stackexchange.com/q/9813/755. Please don't use the comments to ask follow-up questions or for extended discussion. – D.W.
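For reference, the $2^{n/2}$ subset-sum algorithm mentioned in the question is the classic meet-in-the-middle approach. A minimal sketch (using a hash set for the matching step rather than the usual sort-and-scan refinement):

```python
def half_sums(nums):
    """All distinct subset sums of nums (enumerates 2^len(nums) subsets)."""
    sums = {0}
    for v in nums:
        sums |= {s + v for s in sums}
    return sums

def has_subset_sum(nums, target):
    """Decide subset sum in roughly 2^(n/2) steps: enumerate all sums of
    each half, then look for a complementary pair across the halves."""
    half = len(nums) // 2
    left = half_sums(nums[:half])
    right = half_sums(nums[half:])
    return any(target - s in right for s in left)

print(has_subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(has_subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

Each half contributes at most $2^{n/2}$ sums, so the total work is $2^{n/2}$ up to polynomial factors: still exponential, just with a smaller base, which is exactly why it is compatible with the ETH.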
# Homework Help: System of Linear Equations

1. Jul 26, 2012

### AGNuke

If the system of linear equations
$$x+y+z=6$$
$$x+2y+3z=14$$
$$2x+5y+\lambda z=\mu$$
has an infinite number of solutions in $x, y, z$, I need to establish two things:

1. The value of $\lambda$
2. That the maximum value of $$(\mu x+\lambda y-20z)\sin^2\theta+(\lambda x+\mu y+64z)\cos 2\theta, \quad \theta \in \mathbb{R}$$ is 272

I used the matrix method $AX=B$ and found $\lambda$ by solving $\det(A)=0$; I got the answer 8, and it is correct.

Now my task is to validate the second statement. It is given as true, I just need to verify it. I tried to work with the three existing equations but was unable to get the answer.

Last edited: Jul 26, 2012

2. Jul 26, 2012

### HallsofIvy

That isn't quite correct. First, you haven't told us what "A" is! More importantly, you mean $\det(A)=0$, not $A=0$.

3. Jul 27, 2012

### AGNuke

$A$ is the coefficient matrix that appears when we solve the system of equations using matrices:

$$A=\begin{bmatrix} 1 &1 &1 \\ 1 & 2 &3 \\ 2 &5 &\lambda \end{bmatrix}$$

and yes, I meant $|A|=0$, my bad.

In any case, I have found the value of $\lambda$ and now need to answer the second question. I am working on it, but haven't made progress.

4. Jul 27, 2012

### Ray Vickson

You also need to determine the value of $\mu$ (because if you don't have the correct value, the system will have no solutions at all). Once you know $\lambda$ and $\mu$ you have an optimization problem in the 4 variables $x, y, z, \theta$, subject to linear restrictions on $x, y, z$. This can be tackled via Lagrange multiplier methods, or in some other way that handles constraints. At that point the problem is more suitable for the "Calculus and Beyond" forum.

RGV

Last edited: Jul 27, 2012

5. Jul 28, 2012

### AGNuke

$\mu$ can be determined by solving the three given equations; it comes out to 36.

We know $\lambda$ and we know $\mu$, so we are good to go and find the answer.

UPDATE: I got my answer. Since the system has infinite solutions, $y=4-2x$ and $z=x+2$. I substituted $y$ and $z$ into the given expression and got $-8\sin^2\theta + 272\cos 2\theta$. Setting $\sin\theta=0$ and $\cos 2\theta=1$ gives 272, proving the statement.

Last edited: Jul 28, 2012
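A quick numerical check (not part of the original thread) of the final post's conclusion, using plain Python and the values $\lambda=8$, $\mu=36$ found above:

```python
# Numerical sanity check of the thread's conclusion: with lambda = 8 and
# mu = 36, the expression reduces to -8*sin(theta)^2 + 272*cos(2*theta),
# whose maximum over theta is 272 (attained at theta = 0).
import math

lam, mu = 8, 36

def solution(x):
    """One-parameter family of solutions of the linear system."""
    return x, 4 - 2*x, x + 2          # (x, y, z)

def expr(x, theta):
    x, y, z = solution(x)
    return ((mu*x + lam*y - 20*z) * math.sin(theta)**2
            + (lam*x + mu*y + 64*z) * math.cos(2*theta))

# the family really solves the system for any x
x, y, z = solution(1.0)
assert abs(x + y + z - 6) < 1e-9
assert abs(x + 2*y + 3*z - 14) < 1e-9
assert abs(2*x + 5*y + lam*z - mu) < 1e-9

# maximize over a fine grid of theta; the x-dependence cancels out
best = max(expr(1.0, t / 1000) for t in range(-3142, 3143))
print(round(best))  # 272
```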
Q: Unable to see the output of a C++ program in the console

I have installed the Eclipse IDE (CDT) on my Windows 8 laptop and tried writing a simple C program to check whether it executes. It did not execute and gave the error: binary not found. So I did some searching online and realized that my system did not have a C/C++ compiler installed. So I installed MinGW and selected the C and C++ compilers during installation. Then I set the PATH environment variable to C:\MinGW. I reopened Eclipse, wrote a simple C program and it worked as expected! I then created a C++ project, wrote a simple piece of code, and could not see the output in the console. Here is the code:

    #include <iostream>
    using namespace std;

    int main() {
        cout << "sample text";
        return 0;
    }

A: You may simply need to flush the output, using flush or endl. Try this:

    cout << "sample text" << endl;

or

    cout << "sample text" << flush;

A: Adding g++ -static-libgcc -static-libstdc++ under MinGW C++ Linker (Option) > Command is not the right solution. The problem is that your PATH environment variable contains only C:\MinGW, but it should be C:\MinGW;C:\MinGW\bin (set the PATH before opening Eclipse). Otherwise libstdc++-6.dll, which the program needs at runtime, cannot be found; Eclipse reports no error, but there is no output in the console. Statically linking the standard libraries into the program can be regarded as a workaround, but it only works for the standard libs, so the linker flags should not be set that way. In this case it is not strictly necessary to write << endl at the end, but good programming style is to use it:

    cout << "sample text" << endl;
<?php

namespace Sylius\Bundle\SearchBundle\Doctrine\ORM;

use Doctrine\ORM\EntityManager;
use Doctrine\ORM\QueryBuilder;
use Sylius\Bundle\ProductBundle\Doctrine\ORM\ProductRepository;
use Sylius\Bundle\ResourceBundle\Doctrine\ORM\EntityRepository;

/**
 * @author Argyrios Gounaris <agounaris@gmail.com>
 */
class SearchIndexRepository extends EntityRepository
{
    /**
     * @var EntityManager
     */
    private $em;

    /**
     * @var ProductRepository
     */
    private $productRepository;

    /**
     * @param EntityManager     $em
     * @param ProductRepository $productRepository
     */
    public function __construct(EntityManager $em, ProductRepository $productRepository)
    {
        $this->em = $em;
        $this->productRepository = $productRepository;
    }

    /**
     * Returns the product ids for a given taxon.
     *
     * @param string $taxonName
     *
     * @return array
     */
    public function getProductIdsFromTaxonName($taxonName)
    {
        $productClassName = $this->productRepository->getClassName();

        // Gets the products attached to the given taxon
        $queryBuilder = $this->em->createQueryBuilder();
        $queryBuilder
            ->select('product')
            ->from($productClassName, 'product')
            ->leftJoin('product.taxons', 'taxon')
            ->where('taxon.name = :taxonName')
            ->setParameter('taxonName', $taxonName)
        ;

        $filteredIds = array();
        foreach ($queryBuilder->getQuery()->getArrayResult() as $product) {
            $filteredIds[$productClassName][] = $product['id'];
        }

        return $filteredIds;
    }

    /**
     * @param array $resultSetFromFulltextSearch
     *
     * @return array
     */
    public function hydrateSearchResults($resultSetFromFulltextSearch = array())
    {
        $results = array();
        foreach ($resultSetFromFulltextSearch as $model => $ids) {
            $queryBuilder = $this->em->createQueryBuilder();
            $queryBuilder
                ->select('u')
                ->from($model, 'u')
                ->where('u.id IN (:ids)')
                ->setParameter('ids', $ids)
            ;

            foreach ($queryBuilder->getQuery()->getResult() as $object) {
                $results[] = $object;
            }
        }

        return $results;
    }

    /**
     * @param array $ids
     *
     * @return array
     */
    public function getProductsByIds(array $ids)
    {
        return $this->productRepository->findBy(array('id' => $ids));
    }

    /**
     * @return QueryBuilder
     */
    public function getProductsQueryBuilder()
    {
        return $this->productRepository->getCollectionQueryBuilder();
    }
}
\section{Introduction} Channel fluctuation is an intrinsic characteristic of wireless communications. Such a variation calls for allocation of the wireless resources in a dynamic manner, leading to the classic \emph{opportunistic scheduling principle} (e.g., \cite{Knopp,JSAC_Liu}). Under the assumption that the instantaneous channel state information (CSI) is fully available to the scheduler, many efficient opportunistic scheduling algorithms (e.g., \cite{tassiulas}-\cite{Atilla}) have been proposed and extensively studied. More recent works have focused on designing scheduling algorithms under imperfect CSI, where the channel state is modeled as independent and identically distributed (\textit{i.i.d.}) processes across time (e.g., \cite{2stage,Allerton}). On the other hand, although the \textit{i.i.d.} channel model brings ease of analysis, it fails to capture the time-correlation of the fading channels \cite{Tse}. Specifically, it fails to exploit the channel memory, which is a critical resource for making scheduling decisions. However, designing efficient scheduling schemes under time-correlated channels with imperfect CSI is a very challenging problem. The challenge stems mainly from the difficulty of making the classic `exploitation versus exploration' trade-off, in which a scheduler needs to strike a balance between selecting the channels with up-to-date channel memory that guarantee high immediate gains, and exploring the channels with outdated CSI for more informed decisions and associated future throughput gains.

We consider the downlink scheduling problem where a base station transmits to the users within its transmission range, subject to scheduling constraints. To model the time correlations present over fading channels, we assume that wireless channels evolve as Markov-modulated ON/OFF processes. The channel state information is obtained from ARQ-based feedback, only \emph{after} each scheduled transmission.
Nevertheless, due to time correlation, the memory of the past channel state can be used to predict the current channel state \emph{prior to} the scheduling decision. Hence, channel memory should be intelligently exploited by the scheduler in order to achieve high throughput performance.

In a related work \cite{YingShakkottai}, a similar problem is considered under delayed CSI, where it is assumed that perfect CSI is available within a maximum delay, which is in turn smaller than the delay experienced by the ARQ feedback used for collision detection. These assumptions allow the scheduling decisions to be decoupled from CSI acquisition, which leads to the development of centralized as well as distributed schedulers. However, this approach does not use ARQ as a means of acquiring improved channel quality information. In contrast, in our setup the nature of ARQ feedback creates an implicit impact of scheduling decisions on the CSI feedback, which completely transforms the nature of the optimal scheduler design, and therefore requires a different approach.

Under the scenario where all the channels have \emph{identical Markov statistics}, round-robin-based algorithms (e.g., \cite{Liu}-\cite{Neely_RR}) have been shown to possess optimality properties in throughput performance. However, the round-robin-based algorithms are no longer optimal in \emph{asymmetric scenarios}, e.g., when different channels have different Markov transition statistics, as is naturally the case in typical heterogeneous conditions. Under such asymmetric scenarios, our downlink scheduling problem is an example of the classic Restless Multiarmed Bandit Problem (RMBP) \cite{Whittle}. Low-complexity Whittle's Index Policies\hspace{3pt}\cite{Whittle}\hspace{3pt}for the downlink scheduling problem have been proposed in \cite{Zhao_index}\cite{Infocom11} based on RMBP theory.
However, although Whittle's Index Policy can bring significant throughput gains by exploiting the channel memory \cite{Infocom11}, the analytical characterization of its performance under asymmetric scenarios is very challenging and prohibitively technical. This is because asymmetry leads to a sophisticated interplay of memory evolution among channels with heterogeneous characteristics, which brings a significant challenge to the analysis of Whittle's Index Policy not present in the perfectly symmetric scenario. For RMBP problems under general scenarios, Whittle's Index Policy has been proven in \cite{Weber} to be asymptotically optimal as the number of users grows, provided a non-trivial condition, known as Weber's condition, holds. Nonetheless, Weber's condition concerns the global convergence of a non-linear differential equation, which is extremely difficult to verify even numerically in our downlink scheduling scenario. In this paper, we take significant steps in analyzing the optimality properties of Whittle's Index Policy for the downlink scheduling problem in the presence of channel heterogeneity. Specifically, our contributions are as follows. \begin{itemize} \item We apply the Whittle's index framework to our downlink scheduling problem and identify the optimal policy for the problem with a relaxed constraint in Section~\ref{sec:bound}. This policy, with carefully selected randomization, provides a performance upper bound to Whittle's Index Policy. \item We establish the local optimality of Whittle's Index Policy in the asymptotic regime when the number of users scales in Section~\ref{sec:local}. Specifically, we show that the performance of the index policy can get arbitrarily close to that of the relaxed-constraint optimal policy, provided that the initial state of the system is within a certain neighborhood of a carefully selected state. 
\item Based on the local optimality result, under a numerically verifiable recurrence assumption, we then establish the global optimality of Whittle's Index Policy in the limiting regime of many users in Section~\ref{sec:global}. \end{itemize} To the best of our knowledge, our work is the first to give analytical characterization of Whittle's Index Policy for downlink scheduling under channel heterogeneity. \section{System Model and Problem Formulation} \label{sec:model} \subsection{Downlink Wireless Channel Model} We consider a time-slotted, wireless downlink system with one base station and $N$ users. The wireless channel $C_i[t]$ between base station and user $i$ remains static within each time slot $t$ and evolves stochastically across time slots, independently across users. We adopt the simplest non-trivial model of time-correlated fading channels by considering two-state ON/OFF channels, where the state space of channel $i$ is $\mathcal{\bm S}_i=\{0, 1\}$, with the value of each state representing the transmission rate a channel can support at the state. \begin{figure} \centering \includegraphics[width=2.2in]{chain.eps} \vspace{-10pt} \caption{Two state Markov chain model for channels in class $k$.} \vspace{-15pt} \label{fig:chain} \end{figure} One important component of our model is the inclusion of channel heterogeneity that the users will typically experience in real systems. Such asymmetry creates a significant challenge to the design and analysis of optimal scheduling schemes compared to perfectly symmetric channels. To avoid cumbersome notation and unessential technical complications, in this work we model channel asymmetry by considering only \emph{two classes} of channel statistics. Specifically, for all the channels in class $k$, $k{=}1,2$, their states evolve according to the same Markov statistics. However, these characteristics differ between classes. 
The state transition of channels in class $k$ is depicted in Fig.~\ref{fig:chain}, represented by a $2\times 2$ probability transition matrix, \vspace{-4pt} \begin{align} \mathbb{\bm P}_k=\begin{bmatrix} p_k&1-p_k\\ r_k&1-r_k\\ \end{bmatrix},\nonumber \end{align} \vspace{-7pt} \noindent where \vspace{-18pt} \begin{align} p_k&:= \textrm{prob$\big(C_i[t]{=}1 \ \big | \ C_i[t{-}1]{=}1\big)$,}\nonumber \\ r_k&:= \textrm{prob$\big(C_i[t]{=}1 \ \big | \ C_i[t{-}1]{=}0 \big)$,}\nonumber \end{align} \vspace{-4pt} \noindent for channel $i$ in class $k$. The number of class $k$ channels is $\gamma_k N$, $k\in \{1,2\}$, with $\gamma_k$ being the \emph{proportion} of channels in class $k$ with respect to the total number $N$ of channels. We study the scenario where all the Markovian channels are positively correlated, i.e., $p_k > r_k$ for $k{=}1,2$. This assumption, which is commonly made in this domain (e.g., \cite{Neely_RR,sugu_aslm}), means that the channel evolution has a positive auto-correlation. Hence, roughly speaking, the channel has a stronger potential to stay in its previous state than to jump to another, which is typical especially in a slow fading environment. For ease of exposition, we shall exclude the trivial case when $r_k\hspace{1pt}{=}\hspace{1pt}0$ or $p_k\hspace{1pt}{=}\hspace{1pt}1$, $k=1,2$. \vspace{-5pt} \subsection{Scheduling Model -- Belief Value Evolution} We assume that the base station can simultaneously transmit to at most $\alpha N \hspace{1pt}{\in}\hspace{1pt}\mathbb{Z}^+$ users in a time slot without interference, where $\alpha \hspace{1pt}{\in}\hspace{1pt}(0,1]$ stands for the maximum \emph{fraction} of users that can be activated. For example, in a multi-channel communication model, $\alpha$ would correspond to the fraction of all users that can be simultaneously serviced in unit time. However, the scheduler does not know the exact channel state in the current slot when the scheduling decision is made.
Instead, the scheduler maintains a \emph{belief value} $\pi_i[t]$ for each channel $i$, which is defined as the probability of channel $i$ being in the ON state at the beginning of slot $t$. The accurate channel state is revealed via ACK/NACK feedback from the scheduled users, only at the end of each time slot after the data is transmitted. This accurate channel state feedback is in turn used by the scheduler to update the belief values. For user $i$ in class $k$, $k{=}1,2$, let $a_i[t] {\in} \{0,1\}$ indicate whether the user is selected for transmission in slot $t$. Then, from the definition of the belief values, $\pi_i[t]$ evolves as follows, \begin{align} \label{eq:evolve} \hspace{-3pt}\pi_i[t{+}1]{=}\hspace{-2pt} \begin{cases} p_k,& \text{if $a_i[t]{=}1$, $C_i[t]{=}1$,}\\ r_k,& \text{if $a_i[t]{=}1$, $C_i[t]{=} 0$,}\\ \pi_i[t] p_k{+}(1{-}\pi_i[t])r_k, & \text{if $a_i[t]{=}0.$} \end{cases} \end{align} In our setup, belief values are known to be sufficient statistics to represent the past scheduling decisions and feedback (e.g., \cite{Javidi,Sondik_thesis}). Meanwhile, in our ON/OFF channel model, $\pi_i[t]$ also equals the expected throughput contributed by channel $i$ if it is scheduled in time slot $t$. For a user in class $k$, $k{=}1,2$, we use $b^k_{c,l}$ to denote its belief value when the most recently observed channel state was $c\in \{0,1\}$, observed $l$ slots in the past. From the belief update rule (\ref{eq:evolve}), $b^k_{c,l}$ can be calculated as a function of $l {\geq} 1$ as, \begin{align} b^k_{0,l}{=}\frac{r_k\hspace{1pt}{-}\hspace{1pt}(p_k-r_k)^l r_k}{1+r_k-p_k}, \ b^k_{1,l}{=}\frac{r_k\hspace{1pt}{+}\hspace{1pt}(1-p_k)(p_k-r_k)^l}{1+r_k-p_k}. \nonumber \end{align} Fig.~\ref{fig:Qupdate} illustrates the belief value update when a channel stays idle (i.e., $a_i{=}0$).
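As a brief added derivation (ours; it is implicit in rule (\ref{eq:evolve})), the closed forms for $b^k_{0,l}$ and $b^k_{1,l}$ follow by unrolling the idle-state recursion $\pi \mapsto r_k + (p_k-r_k)\pi$:

```latex
% Added sketch: unrolling the affine idle-state belief recursion.
% One idle step maps pi to r_k + (p_k - r_k) pi, whose fixed point is b_s^k.
\begin{align}
\pi_i[t+l] &= b^k_s + (p_k - r_k)^l \big(\pi_i[t] - b^k_s\big),
\qquad b^k_s = \frac{r_k}{1+r_k-p_k}, \nonumber \\
b^k_{0,l} &= b^k_s\big(1-(p_k-r_k)^l\big), \qquad
b^k_{1,l} = b^k_s + (1-b^k_s)(p_k-r_k)^l, \nonumber
\end{align}
```

where the last line sets $\pi_i[t]{=}0$ and $\pi_i[t]{=}1$ respectively; substituting $b^k_s$ and using $1-b^k_s=(1-p_k)/(1+r_k-p_k)$ recovers the displayed fractions, and since $0<p_k-r_k<1$ both sequences converge geometrically to $b^k_s$.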
It is clear that if the scheduler is never updated of the state of channel $i$ (in class $k$), the belief value will converge to its stationary probability of being ON, denoted by the stationary belief value $b_s^{k}{:=}{r_k}/{(1{+}r_k{-}p_k)}$. The vector $\vec{\bm \pi}[t]{=} ( \pi_1[t], {\cdots}, \pi_N[t])$ denotes the belief values of all channels at the beginning of slot $t$. We use $\mathcal{B}_k$ to represent the set of the belief values for class $k$ channels, where $\mathcal{B}_k{=}\{b^k_{s}, b^k_{c,l}, c \hspace{1pt}{\in}\hspace{1pt} \{0,1\}, l\hspace{1pt}{\in}\hspace{1pt} \mathbb{Z}^+ \}$. We assume that the system starts to operate from slot $t=0$. At the beginning of slot $0$, for each channel the scheduler has either observed its channel state before, or has never been updated of its channel state, i.e., with belief value $b^k_{s}$. It is then clear that, based on the belief update rule (\ref{eq:evolve}), $\pi_i[t] \in \mathcal{B}_k$ for all $t\geq0$, i.e., each belief value $\pi_i[t]$ evolves over countably many states. In the rest of the paper, we shall use `belief value' and `belief state' interchangeably. \subsection{Downlink Scheduling Problem -- POMDP Formulation} We consider the broad class $U$ of (possibly non-stationary) scheduling policies that makes a scheduling decision based on the history of observed channel states and scheduling actions. The downlink scheduling problem is then to identify a policy in $U$ that maximizes the infinite horizon, \emph{time average expected throughput}, subject to the constraint on the number of users selected at each time slot. Given the initial state $\vec{\bm \pi}[0]$, the problem is formulated as, \begin{align} \max_{u \in U}& \hspace{5pt} \liminf_{T\rightarrow \infty} \frac{1}{T} E\Big[\sum_{t=0}^{T-1} \sum_{i=1}^N \pi_i[t] \cdot a^u_i[t] \Big | \vec{\bm \pi}[0] \Big] \label{eq:strgt_obj} \\ s.t.& \hspace{0.1in} \sum_{i=1}^{N} a^u_i[t]\leq \alpha N , \quad \forall t. 
\label{eq:strgt} \end{align} where the belief value $\pi_i[t]$ evolves according to rule (\ref{eq:evolve}) based on the scheduling decision $a^u_i[t]$ under policy $u$. Such an objective is standard in the literature for Markov Decision Processes under the long-term average reward criterion (e.g., \cite{Eitan}). Since the scheduling decisions are made based on incomplete knowledge of the channel states, this problem is a Partially Observable Markov Decision Process \cite{Sondik_thesis}. \begin{figure} \centering \includegraphics[width=2.6in]{belief_evol.eps} \vspace{-4pt} \caption{Belief value updates when staying idle, $p_k=0.8$, $r_k=0.2$.} \vspace{-12pt} \label{fig:Qupdate} \end{figure} This problem is in fact an example of the Restless Multiarmed Bandit Problem (RMBP) \cite{Whittle}. For a general RMBP, finding an optimal solution is PSPACE-hard \cite{Tsitsiklis}. However, for the downlink scheduling problem at hand, a low-complexity Whittle's Index Policy was proposed in \cite{Zhao_index}\cite{Infocom11} based on the RMBP theory that inherently exploits the channel memory when making scheduling decisions. For detailed descriptions of general RMBP and Whittle's Index Policy for downlink scheduling, please refer to \cite{Whittle}-\cite{Infocom11}. For the downlink scheduling problem, we note that there is only limited analytical characterization of Whittle's Index Policy, which is restricted to perfectly symmetric scenarios, where Whittle's Index Policy takes a special round-robin form \cite{Zhao_index}. In asymmetric cases, however, the scheduling decision no longer takes the form of round-robin, bringing sophisticated complications in belief value evolutions that are tightly coupled among channels, which significantly complicates the analysis. The main focus of this paper is to analytically characterize the performance of Whittle's Index Policy in the asymmetric case with two classes of channels.
\section{Upper Bound on Achievable Throughput} \label{sec:bound} We begin our analysis by characterizing an upper bound to the throughput performance of all feasible downlink scheduling policies that satisfy the constraint (\ref{eq:strgt}). The upper bound is obtained from a fictitious policy which is optimal for the downlink scheduling problem under a \emph{relaxed constraint}. Note here that such relaxation is also a crucial step in the study of the general RMBP problem. Yet, our analysis, being specific to the downlink scheduling problem, has its novelties, as we shall remark on later. \subsection{Average-Constrained Relaxed Scheduling Problem} We consider an associated relaxed problem of (2)-(3) that only requires an \emph{average number} of users to be activated in the long run, defined as follows: \begin{align} \max_{u \in U}& \hspace{5pt} \liminf_{T\rightarrow \infty} \frac{1}{T} E\Big[\sum_{t=0}^{T-1} \sum_{i=1}^N \pi_i[t] \cdot a^u_i[t] \Big | \vec{\bm \pi}[0] \Big] \label{eq:relaxed_obj} \\ s.t.& \hspace{6pt} \limsup_{T\rightarrow \infty} \frac{1}{T}E\Big[\sum_{t=0}^{T-1} \sum_{i=1}^{N} a^u_i[t] \Big ]\leq \alpha N . \label{eq:relaxed} \end{align} Note that, contrary to the stringent constraint (\ref{eq:strgt}), the relaxed constraint (\ref{eq:relaxed}) allows the activation of more than an $\alpha$ fraction of users in each time slot, provided the long-term average fraction does not exceed $\alpha$. Hence the optimal policy under this relaxed constraint, which we shall identify next, provides a throughput upper bound to any policy that satisfies the stringent constraint. \subsection{Optimal Policy for the Relaxed Problem} We remark that the relaxed problem is also an important component of Whittle's analysis of general RMBPs \cite{Whittle}, in which an optimal policy for the relaxed problem is developed based on the \emph{Whittle's index values}.
Following the approach of the classic RMBP framework \cite{Whittle}, in our downlink scenario we identify an optimal policy for the relaxed problem based on Whittle's indices. Specifically, for channels in class $k$, the Whittle's index value $W_k(\pi)$ is assigned to each belief state $\pi \in \mathcal{B}_k$. These index values intuitively capture the exploitation and exploration value to be gained from scheduling the associated channel when its belief value is $\pi$. This characteristic of $W_k(\pi)$ is also illustrated in Section~\ref{sec:num:trade-off} via numerical investigations. While these index value functions have been expressed in closed form in various cases (see \cite{Zhao_index}\cite{Infocom11}), the following two characteristics they possess are of primary significance for our analysis: \begin{itemize} \item $W_k(\pi)$ monotonically increases with $\pi \in \mathcal{B}_k$. \item $W_k(\pi)\in [0,1]$ for all $\pi \in \mathcal{B}_k$. \end{itemize} In the next proposition, we identify an index-based policy with \emph{appropriate randomization} that is optimal for the relaxed-constraint problem. This policy schedules each user based on its own belief value, independently from other users. \begin{proposition} \label{prop:thres_relax} For the problem under the relaxed constraint, there exists an optimal stationary policy $\phi^*$, parameterized by the threshold $\omega^*$ and a randomization parameter $\rho^* {\in} (0,1]$, such that \vspace{3pt} \noindent(i) Channel $i$ in class $k$ is scheduled if $W_k(\pi_i[t])\hspace{1pt}{>}\hspace{1pt}\omega^*$, and stays idle if \hspace{3pt}$W_k(\pi_i[t])\hspace{1pt}{<}\hspace{1pt}\omega^*$. If $W_k(\pi_i[t])\hspace{1pt}{=}\hspace{1pt}\omega^*$, it is scheduled with probability $\rho^*$. \vspace{3pt} \noindent(ii) The parameters $\omega^*$ and $\rho^*$ are such that, under policy $\phi^*$, the relaxed constraint (\ref{eq:relaxed}) is satisfied with equality.
\end{proposition} \noindent \textbf{Proof:} The proof of this proposition builds on RMBP theory \cite{Whittle}\cite{Zhao_index} along with optimization techniques. Details of the proof are given in Appendix~\ref{appen:thres}. $\hfill \blacksquare$ \vspace{5pt} From now on, we shall denote $\phi^*$ as the `\emph{Optimal Relaxed Policy}'. {For technical purposes, we henceforth assume $\alpha$ is such that $\rho^*{\neq} 1$}. Since each $\alpha$ value maps to a unique $(\omega^*, \rho^*)$ pair (see Appendix~\ref{appen:thres}), only countably many $\alpha$ values correspond to $\rho^*{=} 1$, i.e., are achieved by deterministic policies. Therefore, the set of $\alpha {\in} (0,1]$ for which $\rho^*{\neq} 1$ has Lebesgue measure one. \noindent \textbf{Remarks:} 1) Our work is the first to identify the specific form of the optimal policy for the relaxed problem in downlink scheduling. We identify in Proposition~\ref{prop:thres_relax} that appropriate randomization is essential to guaranteeing optimality. The randomization is important, because deterministic policies are insufficient to guarantee optimality for general constrained Markov Decision Processes when both the reward and constraint are in the expected average form \cite{Eitan}, and are thus unable to provide a throughput upper bound. 2) Our objective function takes a very general form: it is neither restricted to the family of stationary policies, nor does it require the existence of the limit (i.e., $\liminf \frac{1}{T} E[\cdot]=\lim \frac{1}{T} E[\cdot]$ in (\ref{eq:strgt_obj}) and (\ref{eq:relaxed_obj})), whereas the existence of limits (with different forms) is assumed in the previous literature \cite{Whittle} \cite{Zhao_index} on Whittle's Index Policy. Such an extension not only requires a non-trivial amount of technical work, but is also important to prove optimality of the stationary Optimal Relaxed Policy over a larger space of possibly non-stationary control strategies.
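To make the per-user rule of Proposition~\ref{prop:thres_relax} concrete, here is a minimal sketch (ours, not from the paper); the index functions are placeholders supplied by the caller, since only their values relative to $\omega^*$ matter:

```python
# Sketch of the randomized threshold rule in Proposition 1 (illustrative only).
# W is a dict of per-class index functions; omega_star and rho_star are the
# threshold and randomization parameters of the Optimal Relaxed Policy.
import random

def schedule(belief, k, W, omega_star, rho_star, rng=random):
    """Return True iff a class-k user with the given belief value is scheduled."""
    w = W[k](belief)
    if w > omega_star:
        return True
    if w < omega_star:
        return False
    return rng.random() < rho_star   # randomize only exactly at the threshold

# toy usage with monotone placeholder indices (the belief itself, rescaled)
W = {1: lambda pi: pi, 2: lambda pi: 0.9 * pi}
print(schedule(0.8, 1, W, omega_star=0.5, rho_star=0.3))  # True
```

Note that each user is scheduled independently of the others, which is exactly what makes the relaxed problem decompose per user.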
\subsection{Steady State Distribution of Belief Values} We next present the transition structure of the belief values under the Optimal Relaxed Policy, captured in the following lemma. The structure will be critical in the development of our subsequent main results. \vspace{2pt} \begin{lemma} \label{lemma:pos_rec} For each channel in class $k$, under the Optimal Relaxed Policy, the structure of belief value evolution depends on the threshold $\omega^*$ of the policy. \vspace{2pt} \noindent(i) If $\omega^*\hspace{1pt}{<}\hspace{1pt}W_k(b^k_s)$, then the belief value evolution of each class $k$ channel is positive recurrent with a finite recurrent class. \vspace{2pt} \noindent(ii) If $\omega^* \hspace{1pt}{\geq}\hspace{1pt}W_k(b^k_s)$, the belief value evolution is transient. With probability $1$, ultimately no channel in class $k$ will transmit. \end{lemma} \noindent \textbf{Proof:} The proof of this lemma follows from the monotonic structure of belief evolution, as shown in Fig.~\ref{fig:Qupdate}. Details are included in Appendix \ref{appen:recur}. $\hfill \blacksquare$ \vspace{3pt} Thus, if $\omega^* \hspace{1pt}{\geq}\hspace{1pt} \max\{ W_1(b^1_s), W_2(b^2_s)\}$, the above analysis reveals that ultimately no user transmits, corresponding to the trivial case of $\alpha N\hspace{1pt}{=}\hspace{1pt}0$. Also, if $\omega^*$ is between $W_1(b^1_s)$ and $W_2(b^2_s)$, the class with the smaller $W_k(b^k_s)$ will eventually settle into a passive mode, hence reducing the system to a well-understood scenario with a single class of channels \cite{Liu}\cite{Javidi}. Thus, here we focus on the heterogeneous case of $\omega^* \hspace{1pt}{<}\hspace{1pt} W_k(b^k_s), k{=}1,2$, where the steady-state belief value distribution exists for both classes under the Optimal Relaxed Policy. \subsection{Upper bound on achievable throughput} The throughput performance of the Optimal Relaxed Policy provides a throughput upper bound for all policies under the stringent constraint.
The value of such an upper bound clearly depends on the number of users in each class $\gamma_k N$, $k\hspace{1pt}{=}\hspace{1pt}1,2$, as well as the fraction $\alpha$ of users allowed for activation. Denoting $\bm \gamma\hspace{1pt}{=}\hspace{1pt}[\gamma_1, \gamma_2]$, we represent the time average expected throughput of the Optimal Relaxed Policy as $\upsilon^N(\bm \gamma, \alpha)$. The following lemma states that, as long as $\bm \gamma$ and $\alpha$ are given, the \emph{per-user} throughput is independent of $N$. \begin{lemma} \label{lemma:parameter} Given $\bm \gamma$ and $\alpha$, $\frac{\upsilon^N(\bm \gamma, \alpha)}{N}$ is independent of $N$, denoted henceforth as $r(\bm \gamma,\alpha)$. \end{lemma} \noindent \textbf{Proof:} The proof follows from showing that, when the number of users $N$ grows, as long as the proportion of each class of channels stays the same and the fraction $\alpha$ of users activated does not change, the form of the Optimal Relaxed Policy does not change. Since each user is scheduled independently, the throughput $\upsilon^N(\bm \gamma, \alpha)$ is proportional to $N$, establishing the lemma. Details are provided in Appendix~\ref{appen:peruser}. $\hfill \blacksquare$ \vspace{3pt} We hence refer to the $(\bm \gamma, \alpha)$ pair as `\emph{system parameters}'. Therefore $N r(\bm \gamma, \alpha)$ provides a throughput upper bound to any policy in the same system under the stringent constraint (\ref{eq:strgt}). Equivalently, $r(\bm \gamma, \alpha)$ provides a per-user throughput performance upper bound to all policies that satisfy the stringent constraint. We next describe Whittle's Index Policy for the strictly-constrained problem (\ref{eq:strgt_obj})-(\ref{eq:strgt}), and later study the closeness of its performance to the upper bound established here.
\section{Whittle's Index Policy Description} \label{sec:num:index} In this section we formally introduce Whittle's Index Policy for solving the stringently-constrained downlink scheduling problem (\ref{eq:strgt_obj})-(\ref{eq:strgt}). \subsection{Whittle's Index Policy} The Optimal Relaxed Policy, along with the Whittle's index values, gives a consistent ordering of belief values with respect to the indices. For instance, under the Optimal Relaxed Policy, if it is optimal to schedule one channel, it is then optimal to transmit to other channels with higher index values. So the Whittle's index value gives an intuitive ordering of how attractive a channel is for scheduling. This intuition leads to Whittle's Index Policy \cite{Zhao_index} under the stringent constraint on the maximum number of channels that can be scheduled. \vspace{3pt} \noindent\textbf{Whittle's Index Policy:} \emph{At the beginning of each time slot, the channel $i$ in class $k$ is scheduled if its Whittle's index value $W_k(\pi_i)$ is within the top $\alpha N$ index values of all channels in that slot, with arbitrary tie-breaking, while ensuring that a total of $\alpha N$ channels are scheduled.} \vspace{2pt} Whittle's Index Policy is attractive because it has very low complexity, and it was observed via numerical investigations to yield significant throughput performance gains over the scheduling strategies that do not utilize channel memory \cite{Infocom11}. The main focus of our work is to analytically understand the approximate or asymptotic optimality of Whittle's Index Policy in asymmetric scenarios. \subsection{Whittle's Index Policy over Truncated State Space} Recall from Section~\ref{sec:model} that the belief values evolve over a countable state space; also note that if a channel is not scheduled for a long time, its belief value will get arbitrarily close to its stationary belief value.
This motivates us to consider a truncated version of the belief value evolution whereby the belief value is set to its steady state if the corresponding channel is not scheduled for a large number of slots, say $\tau$. This mild assumption facilitates a more tractable performance analysis of the policy. Thus, if a class $k$ user is not scheduled for $\tau$ time slots, its channel state history is entirely forgotten and its belief value transits to the stationary belief value $b_s^k$, where the truncation level $\tau$ is assumed to be very large. Whittle's Index Policy is then implemented over the truncated belief state space, which differs from the non-truncated case merely in the truncated belief value evolution. We believe that the truncated scenario can provide an arbitrarily close approximation to the original system when $\tau$ is large. More importantly, as we shall see in the following two sections, Whittle's Index Policy, implemented over the truncated belief state space, achieves asymptotically optimal performance as long as the truncation is sufficiently large. \section{Local Optimality of Whittle's Index Policy} \label{sec:local} In this section, we study the optimality properties of Whittle's Index Policy for downlink scheduling, over a large truncated belief space. This result forms the basis for the subsequent global optimality result in Section~\ref{sec:global}. We start by introducing the state space over which the local optimality will be established. \subsection{System State Vector} \label{sec:local:Z} We define the \emph{system state} ${\bm Z}^N$ as a vector that represents the proportion of channels in each belief value, over the truncated space when the total number of users is $N$, i.e., ${\bm Z}^N=\big[{\bm Z}^{\>1,N}, {\bm Z}^{\>2,N}\big]$, with \begin{align} \nonumber {\bm Z}^{\>k,N}=[ Z_{0,1}^{k,N}, \cdots, Z_{0,\tau}^{k,N}, Z_{s}^{k,N}, Z_{1,\tau}^{k,N}, \cdots,Z_{1,1}^{k,N} ], k=1,2.
\end{align} where $Z^{k,N}_{c,l}$ and $Z^{k,N}_{s}$ respectively denote the \emph{proportion} of channels in the corresponding belief states $b^k_{c,l}$ and $b^k_{s}$, with respect to the total number of users $N$. Hence, each element of ${\bm Z}^N$ is a multiple of $1/N$, so that ${\bm Z}^N$ takes values in a lattice with mesh size $1/N$. Noting that the total number of users in each class does not change over time, for any $N$ the system state satisfies ${\bm Z}^N[t] \in \mathcal{Z}$ where \begin{align} \mathcal{Z}:= \{{\bm Z}^N \geq 0: Z^{k,N}_{s}{+}\sum_{c,l} Z^{k,N}_{c,l}=\gamma_k, \hspace{2pt} k=1,2 \}. \label{eq:beta} \end{align} The system state vector ${\bm Z}^N[t]$ does not distinguish users with the same belief state; thus its dimension does not scale with $N$. Therefore, compared with $\vec{\bm \pi}[t]$, it provides a more convenient representation of the system belief state. Furthermore, ${\bm Z}^N[t]$ fully determines the instantaneous throughput gain in slot $t$ under both Whittle's Index Policy and the Optimal Relaxed Policy (introduced in Proposition~\ref{prop:thres_relax}), because the instantaneous throughput gains under both policies are determined only by the distribution of channels over belief values, not by their identities. From Lemma~\ref{lemma:pos_rec} and the subsequent remarks, under the operation of the Optimal Relaxed Policy, the belief state evolution of each channel is positive recurrent with a steady-state distribution. The following lemma also establishes the independence of this steady-state distribution from $N$, and defines a useful parameter for future use.
\begin{lemma}\label{lemma:zeta_invar} Given the system parameters $(\bm \gamma, \alpha)$, the system state vector ${\bm Z}^N[t]$ under the Optimal Relaxed Policy converges in distribution to a random vector, denoted as ${\bm Z}^N[\infty].$ The distribution of ${\bm Z}^N[\infty]$ is independent of $N$ with its mean denoted as \vspace{-18pt} \begin{align} \vec{\bm \zeta}^{\alpha}_{\bm \gamma}{:=} E\big[{\bm Z}^N[\infty]\big]. \nonumber \end{align} \end{lemma} \noindent{\textbf{Proof:}} This lemma follows from a similar principle to the one we established in Lemma~\ref{lemma:parameter}. For details, please refer to Appendix~\ref{appen:zeta_invar}. \hspace{2in} $\hfill \blacksquare$ It is easy to see that $\vec{\bm \zeta}^{\alpha}_{\bm \gamma} \hspace{1pt}{\in}\hspace{1pt} \mathcal{Z}$ and that the form of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ fully determines the time average throughput of the Optimal Relaxed Policy. Therefore, the vector $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ provides an important benchmark for our asymptotic analysis. If, in the long run under Whittle's Index Policy, the system state ${\bm Z}^N[t]$ stays close to $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$, then Whittle's Index Policy will have throughput performance close to that of the Optimal Relaxed Policy -- the throughput upper bound. To capture this closeness, we define the $\delta$ neighborhood of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ as \begin{align} \label{eq:nbhd} \Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})= \{{\bm Z} \in \mathcal{Z}: ||{\bm Z}- \vec{\bm \zeta}^{\alpha}_{\bm \gamma}||\leq \delta \}, \end{align} for $\delta>0$, where $||\cdot||$ stands for Euclidean distance. We are now ready to state and prove our first main result regarding a form of local optimality of Whittle's Index Policy.
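Before doing so, we note that tabulating ${\bm Z}^N$ from per-channel belief-state labels is elementary; a minimal sketch follows, in which the label layout (pairs such as $(k, (c,l))$ or $(k, \text{'s'})$) is a hypothetical encoding of the belief states, not notation from the analysis above.

```python
from collections import Counter

def system_state(labels, N):
    """Proportion of channels in each belief state.

    `labels` holds one hashable belief-state label per channel, e.g.
    (k, (c, l)) for state b^k_{c,l} or (k, 's') for the stationary state.
    Every proportion is a multiple of 1/N, matching the lattice structure
    of Z^N, and the proportions of class k sum to gamma_k.
    """
    counts = Counter(labels)
    return {state: n / N for state, n in counts.items()}
```

Summing the returned proportions over one class recovers that class's population fraction, in accordance with the constraint set defining $\mathcal{Z}$.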
\subsection{Local Optimality of Whittle's Index Policy} Under the system parameters $(\bm \gamma, \alpha)$, we let $R_{T}^{N}(\bm \gamma, \alpha, \bm x)$ represent the time average throughput obtained over the time duration $0 \hspace{1pt}{\leq}\hspace{1pt} t \hspace{1pt}{<}\hspace{1pt} T$ under Whittle's Index Policy, conditioned on the initial system state ${\bm Z}^N[0]=\bm x$, i.e., \begin{align} \nonumber R_{T}^{N}(\bm \gamma, \alpha, \bm x)\hspace{1pt}{:=}\hspace{1pt}\frac{1}{T}E \Big[\sum_{t=0}^{T-1} \sum_{i=1}^{N} \pi_i[t] a_i^{ind}[t] \Big| {\bm Z}^N [0]\hspace{1pt}{=}\hspace{1pt}\bm x \Big], \end{align} where $(a_i^{ind}[t])_i$ denotes the scheduling decision vector made by Whittle's Index Policy at time $t.$ Recall from Lemma~\ref{lemma:parameter} that $r(\bm \gamma, \alpha)$ denotes the per-user throughput under the Optimal Relaxed Policy, which serves as an upper bound on the performance of Whittle's Index Policy. The next proposition characterizes the local convergence of the Whittle's Index Policy performance to $r(\bm \gamma, \alpha)$. \begin{proposition} \label{prop:local_conv} Under the system parameters $(\bm \gamma, \alpha)$, there exists a $\delta >0$ neighborhood $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ such that, if the initial system state $\bm x$ is within $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$, then \begin{align} \lim_{T \rightarrow \infty} \lim_{m \rightarrow \infty} \frac{R_{T}^{N_m}(\bm \gamma, \alpha, \bm x)}{N_m}\hspace{1pt}{=}\hspace{1pt}r(\bm \gamma, \alpha), \nonumber \end{align} where $\{ N_m \}_m$ is any increasing sequence of positive integers with $\alpha N_m$, $\gamma_k N_m \in \mathbb{Z}^+$, for $k=1,2$ and all $m$. \end{proposition} \noindent \textbf{Proof Outline:} Here, we give a high level description of the proof for an intuitive understanding, and refer the reader to Appendix~\ref{appen:local} for the rigorous derivation.
\vspace{1pt} $\bullet$ We start by defining a fluid approximation, whereby the discrete-time evolution of ${ \bm Z}^N[t]$ under Whittle's Index Policy is modeled as a deterministic vector ${ \bm z}[t] \in \mathcal{Z}$ that evolves {in discrete time} over $\mathcal{Z}$ and is independent of $N.$ Under this fluid approximation, the users are no longer unsplittable entities, so the state space of ${ \bm z}[t]$ is no longer restricted to a lattice as it was for ${ \bm Z}^N[t]$. Also, the fluid approximation ${ \bm z}[t]$ evolves in a deterministic manner, in contrast to the stochastic transitions of ${\bm Z}^N[t]$. The evolution of ${\bm z}[t]$ is defined by a {difference} equation as a function of the \emph{expected} state change of ${ \bm Z}^N[t]$ under Whittle's Index Policy as follows \begin{align} \label{eq:fluid_sketch}{\bm z[t+1] {-} \bm z[t] \Big |_{{ \bm z}[t]={ \bm z}} }\hspace{-3pt}{=} E\Big[{ \bm Z}^N[t+1]{-}{ \bm Z}^N[t] \Big| { \bm Z}^N[t]{=}{ \bm z}\Big], \end{align} where $N$ is any integer for which ${\bm z}$ is a feasible state. $\bullet$ We then establish local convergence of the fluid approximation model when ${\bm z}[0]$ is within a small enough $\delta$ neighborhood $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$. We show the convergence by first noting that the difference equation (\ref{eq:fluid_sketch}) is linear within a convex region wider than $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$. Within this region, we obtain a closed form expression for the right hand side of (\ref{eq:fluid_sketch}), which enables us to investigate the eigenvalue structure of the linear difference equation. We show {that each eigenvalue $\lambda$ satisfies $|\lambda|< 1$} and apply standard linear system theory to establish the local convergence.
$\bullet$ We then connect the fluid approximation model ${\bm z}[t]$ to the discrete-time stochastic system state ${\bm Z}^N[t]$ by using a discrete-time extension of Kurtz's Theorem, which can be interpreted as an extension of the strong law of large numbers to random processes (see \cite{Weiss_LD}). Essentially, it states that, over any finite time duration $[0,T]$, the actual system evolution ${\bm Z}^N[t]$ can be made arbitrarily close to the above fluid approximation ${\bm z}[t]$ by increasing the number of channels $N$ sufficiently, {with exponential convergence rate}. $\bullet$ The previous convergence result, together with the local convergence result of the fluid evolution ${\bm z}[t]$ to $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$, enables us to establish the local convergence of the system state ${\bm Z}^N[t]$ to $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ as the number of users $N$ grows, provided that the initial state ${\bm Z}^N[0] \in \Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$. Hence the system state under Whittle's Index Policy will stay close (in a probabilistic sense) to the expectation $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ of the system state under the Optimal Relaxed Policy, which, in turn, indicates that the throughput performance of Whittle's Index Policy will approach the throughput upper bound $r(\bm \gamma, \alpha)$, as expressed in the proposition. We again emphasize that the technical details of the outlined steps are fairly intricate and are deferred to Appendix~\ref{appen:local}. $\hfill \blacksquare$ \vspace{5pt} Proposition~\ref{prop:local_conv} illustrates an interesting local optimality property of Whittle's Index Policy as the number of users $N$ and the time horizon $T$ increase while the system parameters $(\bm \gamma, \alpha)$ stay the same.
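The eigenvalue step in the outline above reduces, for any concrete linearization, to a routine numerical check; the generic sketch below is ours, and the matrix $A$ is a placeholder rather than the paper's actual linearized fluid map.

```python
import numpy as np

def is_locally_stable(A):
    # A linear difference equation z[t+1] = A @ z[t] + b converges to its
    # unique fixed point from any nearby start iff every eigenvalue of A
    # has modulus strictly smaller than 1.
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))
```

Proposition~\ref{prop:local_conv} relies on exactly this property holding for the fluid dynamics near $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$.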
It indicates that, under Whittle's Index Policy, as long as the initial state ${\bm Z}^N[0]$ is close enough to $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$, the average per-user throughput over any finite time duration will get arbitrarily close to the Optimal Relaxed Policy performance as the number of users scales. \vspace{3pt} \noindent \textbf{Remark: } We note that the sequence $\{N_m\}_m$ is used to guarantee that the number of channels in each class, as well as the number of scheduled users, take integer values. In fact, our result can be generalized to all $N$ by slightly perturbing $\bm \gamma$ and $\alpha$ as a function of $N$ while ensuring that their limits are well-defined. \section{Global Optimality of Whittle's Index Policy} \label{sec:global} The above local optimality result relies heavily on the initial state ${\bm Z}^N[0]$ being close to $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$, which is difficult to guarantee. In this section, we study the global optimality of the infinite horizon throughput performance of Whittle's Index Policy starting from any initial state. We begin our analysis by presenting the recurrence structure of the system state. \begin{lemma} \label{lemma:recur} Under system parameters $(\bm \gamma, \alpha)$, for any $\epsilon>0$, if the number of users $N$ is large enough, \noindent(i) The system state ${\bm Z}^N[t]$ evolves as an aperiodic Markov chain, in a state space that contains only one recurrent class. \noindent(ii) There exists at least one recurrent state within the $\epsilon$ neighborhood $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$. \end{lemma} \noindent \textbf{Proof:} We prove this lemma by constructing probability paths from any state to the neighborhood $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$. Details of the proof are included in Appendix~\ref{appen:recur}.
$\hfill \blacksquare$ \vspace{4pt} This lemma states that ${\bm Z}^N[t]$ will ultimately enter any small neighborhood of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ when $N$ is large enough. Together with Proposition~\ref{prop:local_conv}, this result shows promise for establishing the global asymptotic optimality of Whittle's Index Policy. This is plausible because once ${\bm Z}^N[t]$ enters $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$, the performance of Whittle's Index Policy \emph{afterwards} can get very close to its upper bound as $N$ scales, as established in Proposition~\ref{prop:local_conv}. However, since we consider the infinite horizon time average throughput, this argument would break down if the time it takes for ${\bm Z}^N[t]$ to enter $\Omega_{\delta}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ also scales up with $N$. This observation motivates us to introduce a useful assumption, which will later be justified (in Section~\ref{sec:num:just}) via numerical studies. \vspace{5pt} \noindent \textbf{Assumption $\Psi$}: For each $\epsilon{>}0$, let $\Gamma_{\bm x}^N(\epsilon)$ represent the first time of reaching $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ starting from ${\bm Z}^N[0]= {\bm x}$, i.e., \begin{align} \Gamma_{\bm x}^N(\epsilon)=\min \{t: {\bm Z}^N[t] \in \Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma}) \big | {\bm Z}^N[0]= {\bm x} \}. \nonumber \end{align} Then, we assume that the expected time of reaching $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ is bounded uniformly over $N$ and $\bm x$, i.e., there exists $M_{\epsilon} {<} \infty$ such that $E\big[\Gamma_{\bm x}^N (\epsilon)\big] \leq M_{\epsilon}$ for all $N$ and $\bm x$. \vspace{7pt} Since for each $N$, ${\bm Z}^N[t]$ under Whittle's Index Policy is recurrent and aperiodic with a finite state space, there exists a steady-state distribution associated with ${\bm Z}^N[t]$. 
As before, we use ${\bm Z}^N[\infty]$ to denote the associated limiting random vector. The next lemma establishes that, under Assumption $\Psi$, the distribution of ${\bm Z}^N[\infty]$ approaches a point-mass at $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ as $N$ scales. Here, again, the sequence $\{N_m\}_m$ is defined in the same way as in Proposition \ref{prop:local_conv}. \vspace{-2pt} \begin{lemma} \label{lemma:steady_dist} Under Assumption $\Psi$ and system parameters $(\bm \gamma, \alpha)$, for any $\epsilon>0$, the steady state probability of ${\bm Z}^N[t]$ under Whittle's Index Policy satisfies \vspace{-3pt} \begin{align} \lim_{m \rightarrow \infty} P\big({\bm Z}^{N_m}[\infty] \in \Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})\big)=1. \nonumber \end{align} \end{lemma} \vspace{-2pt} \noindent \textbf{Proof:} The proof utilizes Theorem $6.89$ from \cite{Weiss_LD}, which builds on the following arguments. Note that $\epsilon>0$ can be selected to be small enough for the following argument. As depicted in Fig.~\ref{fig:invar_meas}, we let $T_{\epsilon}$ be a random variable denoting, in steady state, the time duration between \emph{consecutive} hitting times into the neighborhood $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ from outside of the neighborhood. Let $T^0_{\epsilon}$ denote the time duration from the time ${\bm Z}^N[t]$ enters the neighborhood $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ from outside until the time it leaves. Hence, the expected proportion of time that ${\bm Z}^N[t]$ stays outside this neighborhood is $(E[T_{\epsilon}]-E[T^0_{\epsilon}])/E[T_{\epsilon}]$. We know that the numerator $E[T_{\epsilon}]-E[T^0_{\epsilon}]$ is uniformly bounded for all $N$ due to Assumption $\Psi$.
However, as $N$ increases, it is more likely for ${\bm Z}^N[t]$ to stay within the neighborhood for a long time before exiting it (based on the convergence of the fluid approximation model and Kurtz's Theorem in the proof of Proposition~\ref{prop:local_conv}). Thus, $E[T^0_{\epsilon}],$ and hence the denominator $E[T_{\epsilon}]$, grow to infinity as $N$ scales. Therefore, the expected proportion of time spent outside $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ vanishes as $N$ scales up, which leads to the statement of the lemma. Details of the proof can be found in Appendix~\ref{appen:Invar_meas}. $\hfill \blacksquare$ \begin{figure} \centering \includegraphics[width=3.3in]{invar_meas.eps} \vspace{-8pt} \caption{Transition behavior of ${\bm Z}^N[t]$ in steady state.} \label{fig:invar_meas} \vspace{-12pt} \end{figure} \vspace{3pt} Under Whittle's Index Policy with system parameters $(\bm \gamma, \alpha)$, we let $R^{N}_{\bm x}(\bm \gamma, \alpha)$ be the achieved infinite horizon, time average throughput, conditioned on the initial system state ${\bm Z}^N[0]{=}\hspace{1pt} \bm x$, i.e., \begin{align} \nonumber R^{N}_{\bm x}(\bm \gamma, \alpha)\hspace{1pt}{:=}\hspace{1pt}\lim_{T\rightarrow \infty} \frac{1}{T}E \Big[\sum_{t=0}^{T-1} \sum_{i=1}^{N} \pi_i[t] a_i^{ind}[t] \Big|{\bm Z}^{N}[0]={\bm x} \Big]. \end{align} \vspace{-5pt}From Lemma~\ref{lemma:steady_dist} we know that, in steady state, the system state $\bm Z^{N_m}[\infty]$ is increasingly concentrated around $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$ as $m$ increases, regardless of the initial state ${\bm x}.$ We build on this to establish the global asymptotic optimality of Whittle's Index Policy. \begin{proposition} \label{prop:asymp} Under Assumption $\it \Psi$, for any initial system state $\bm x$, we have \vspace{-12pt}\begin{align} \lim_{m \rightarrow \infty} \frac{R^{N_m}_{\bm x}(\bm \gamma, \alpha)}{N_m} = r(\bm \gamma, \alpha).
\nonumber \end{align} Since $r(\bm \gamma, \alpha)$ is an upper bound on the maximum achievable per-user throughput under any policy, this implies that Whittle's Index Policy is optimal in the many user regime. \end{proposition} \noindent \textbf{Proof:} We prove this result by decomposing $R^{N}_{\bm x}(\bm \gamma, \alpha)$ as a summation of the expected throughput conditioned on whether the system state is within or outside an arbitrarily small $\epsilon$ neighborhood of $\vec{\bm \zeta}^{\alpha}_{\bm \gamma}$. Since the latter has diminishing probability according to Lemma~\ref{lemma:steady_dist}, the expected throughput of Whittle's Index Policy can get arbitrarily close to that of the Optimal Relaxed Policy. Details of the proof are provided in Appendix~\ref{appen:global}.$\hfill \blacksquare$ \vspace{3pt} \begin{figure} \centering \includegraphics[width=3.0in]{recur_r.eps} \includegraphics[width=3.0in]{recur_bs.eps} \caption{Average time of hitting $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$. (a) ${\bm Z}^N[0]=\bm x$; (b) $\bm Z^N[0]=\bm y$. } \vspace{-8pt} \label{fig:recur_time} \end{figure} \noindent \textbf{Remarks: } 1) We would like to emphasize that the global optimality result is not a straightforward extension of the local convergence result, which can be seen by contrasting Proposition~\ref{prop:local_conv} and Proposition~\ref{prop:asymp}. Note that in Proposition~\ref{prop:local_conv}, the time limit is outside the limit in the number of users $N$, where each convergence (with $N$) is with respect to a \emph{fixed time duration}. However, the order of limits is switched in the global optimality result of Proposition~\ref{prop:asymp}, as it states the convergence with $N$ of the \emph{infinite horizon} average throughput, which is much stronger and hence much more challenging to prove. 2) We would like to contrast Assumption $\Psi$ with Weber's condition \cite{Weber}.
For general RMBP problems, Weber's condition leads to the same global asymptotic optimality result. While confirming Weber's condition may be possible in very low-dimensional problems, in our downlink scheduling problem this requires one to rule out the existence of both closed orbits and chaotic behavior of a high-dimensional non-linear differential equation, which is extremely difficult to check -- even numerically. Assumption $\Psi$, on the other hand, takes a much simpler form, as it is defined over the actual stochastic system and is amenable to easy numerical verification, as is performed in Section~\ref{sec:num:just}. \section{Numerical Results} \subsection{Verification and Interpretation of Assumption $\it \Psi$} \label{sec:num:just} We start by numerically verifying Assumption $\Psi$. We consider the asymmetric scenario with two classes of channels with system parameters $\bm \gamma{=}[0.45, 0.55]$, $\alpha{=}0.6$, with $p_1{=}0.9$, $r_1{=}0.45$, $p_2{=}0.8$, $r_2{=}0.3$. We next examine the change of the average hitting time $\Gamma_{\bm x}^N(\epsilon)$, while maintaining $\alpha$ and $\bm \gamma$. We let $\bm x, \bm y \in \mathcal{Z}$ be initial values of $\bm Z^N[0]$ that are selected to be two extreme points in the state space, to exhibit the uniformity of $\Gamma^N_{\bm x}(\epsilon)$ with respect to the initial state. Specifically, state $\bm x$ corresponds to the case when all the users have just observed their channels to be in the OFF state, i.e., with belief value $b^k_{0,1}$, $k=1,2$, and state $\bm y$ corresponds to the case when all users have no initial observation of their channel state history, i.e., with belief value $b^k_s$, $k=1,2$. We examine the average values of the hitting times $\Gamma_{\bm x}^N(\epsilon)$ and $\Gamma^N_{\bm y}(\epsilon)$ with a very small neighborhood $\epsilon{=}0.005$, as the number of users $N$ grows from $10{\times} 10^3$ to $500{\times}10^3$.
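The quantity $\Gamma^N_{\bm x}(\epsilon)$ used in this experiment can be estimated by straightforward Monte Carlo. A hedged sketch follows, in which the one-slot transition `step(z, rng)` of ${\bm Z}^N[t]$ under the index policy is supplied by the simulator and is hypothetical here.

```python
import numpy as np

def mean_hitting_time(step, z0, zeta, eps, runs=100, horizon=10**6, seed=0):
    """Average first slot at which the simulated system state enters the
    eps-ball (Euclidean norm) around zeta, over independent runs.
    `step(z, rng)` is a user-supplied one-slot transition kernel."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(runs):
        z, t = np.asarray(z0, dtype=float), 0
        while np.linalg.norm(z - zeta) > eps and t < horizon:
            z, t = step(z, rng), t + 1
        times.append(t)
    return float(np.mean(times))
```

Averaging over runs started from the extreme states $\bm x$ and $\bm y$ described above would reproduce the kind of curves reported in Fig.~\ref{fig:recur_time}.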
As indicated in Fig.~\ref{fig:recur_time}, for both cases, the average time of hitting the $\epsilon$ neighborhood first decreases with $N$, and then \emph{converges} and stays almost the same as $N$ scales up. This behavior is especially intriguing, and the rationale behind it is as follows. Under Whittle's Index Policy, a total of $\alpha N$ users are activated in each time slot. Therefore, for a relatively small number of users, the number of probabilistic belief state transitions, as well as the number of system states in the neighborhood, increases with $N$, leading to a higher chance of hitting the desired neighborhood $\Omega_{\epsilon}(\vec{\bm \zeta}^{\alpha}_{\bm \gamma})$ and a smaller hitting time. However, the belief update of each user contributes a change of $1/N$ to the system state $\bm Z^N[t]$, which decreases with $N$. Therefore, as $N$ further increases, the \emph{total amount of transitions} of the system state ${\bm Z}^N[t]$ due to channel state feedback is roughly $\alpha N \cdot 1/N=\alpha$, which is invariant in $N$. This result, along with many other numerical experiments we have conducted that lead to the same observation, provides verification of Assumption $\Psi$. \begin{figure} \centering \includegraphics[width=3.1in]{case2.eps} \includegraphics[width=3.1in]{case2b.eps} \vspace{-3pt} \caption{The evolution of belief value and Whittle's index value. (a) Belief value evolution (b) Whittle's index value evolution. } \vspace{-15pt} \label{fig:beliefindex} \end{figure} \subsection{`Exploitation versus Exploration' Trade-off} \label{sec:num:trade-off} \vspace{-3pt}In this section, we demonstrate how the Whittle's index value captures the `exploitation versus exploration' trade-off for our \emph{asymmetric downlink scheduling problem}. Consider two classes of ON/OFF fading channels with belief value evolutions plotted in Fig.~\ref{fig:beliefindex}(a).
Note that both classes have the same stationary probability $b^k_s=0.5$, $k\in \{1,2\}$, of being in the ON state, but channels in class $1$ have a higher degree of time correlation, i.e., fade more slowly, than channels in class $2$, since $p_1 > p_2$ and $r_1<r_2$. The corresponding Whittle's index values of the two classes of channels are depicted in Fig.~\ref{fig:beliefindex}(b) as functions of the updated belief value starting from different initial states. To understand the nature of the Whittle's index value, we first consider the case when the channels in both classes are observed to be ON at time $0$ and stay passive from then on. As indicated in Fig.~\ref{fig:beliefindex}(a), the class $1$ channel has a higher belief value than the class $2$ channel; hence scheduling the class $1$ channel gives a higher immediate throughput than scheduling the class $2$ channel. Moreover, once a class $1$ channel is scheduled, it is more likely to be observed in the ON state again, bringing high future gains. Accordingly, the index values in Fig.~\ref{fig:beliefindex}(b), when both state evolutions start from ON states, capture that it is more attractive to schedule the class $1$ channel because of its advantage in both exploitation and exploration. On the other hand, when the scheduler has observed channels in both classes to be OFF at time $0$, Fig.~\ref{fig:beliefindex}(a) shows that the class $2$ channel has a higher belief value than the class $1$ channel. However, although the Whittle's index value of the class $2$ channel in Fig.~\ref{fig:beliefindex}(b) is initially larger than that of the class $1$ channel, after a certain amount of delay (around $8$ slots in the figure) this order is switched, which is interpreted as follows: initially, since the class $1$ channel has a smaller belief value than the class $2$ channel, it is more attractive to exploit the immediate gain brought by the class $2$ channel.
However, as the passive time grows, as indicated in Fig.~\ref{fig:beliefindex}(a), the difference between the immediate gains of the two classes diminishes. Then, it becomes more attractive to explore the class $1$ channel, because its longer memory can bring higher future gains if it turns out to be in the ON state. This investigation reveals the intricate nature of the Whittle's index value in capturing the fundamental `exploration versus exploitation' trade-off. In our scheduling problem with asymmetric channel statistics, such a property of Whittle's Index Policy turns out to be crucial in \emph{achieving asymptotically optimal performance}. \section{Conclusion} In this paper, we studied the problem of downlink scheduling over ON/OFF Markovian fading channels in the presence of channel heterogeneity. We considered the scenario where instantaneous channel state information is not perfectly known at the scheduler, but is acquired via a practical ARQ-styled feedback after each scheduled transmission. We analytically characterized the performance of Whittle's Index Policy for downlink scheduling, and proved its local and global asymptotic optimality properties as the number of users scales. Specifically, provided that the initial system state is within a certain region, we established the local optimality of Whittle's Index Policy by investigating the evolution of the system belief state with a fluid approximation. We then established the global asymptotic optimality of Whittle's Index Policy under a recurrence condition, which is suitable for numerical verification. Our results establish that Whittle's Index Policy, which is attractive due to its low-complexity operation, also possesses strong asymptotic optimality properties for scheduling over heterogeneous Markovian fading channels. \vspace{6pt}
INCYTE CORP. Daily Stock Analysis
Daily Stock Analysis, INDA, INCYTE CORP, priceseries

LONG: Apr 6, 2020 at 22.13 to Apr 21, 2020 at 23.38 (10 trading days, +5.66%)
LONG: Jul 1, 2020 at 27.81 to Jul 31, 2020 at 29.89 (21 trading days, +7.47%)
LONG: Nov 4, 2020 at 32.66 to Nov 30, 2020 at 34.33 (17 trading days, +5.12%)
LONG: May 14, 2021 at 39.81 to Jun 15, 2021 at 42.27 (21 trading days, +6.17%)
LONG: Dec 23, 2021 at 45.07 to Jan 18, 2022 at 47.52 (16 trading days, +5.42%)

http://www.incyte.com
Hervé Hoppenot
1801 AUGUSTINE CUT-OFF, WILMINGTON, DE 19803

Incyte Corp. is a biopharmaceutical company which focuses on the discovery, development, formulation, manufacturing and commercialization of proprietary therapeutics to treat serious unmet medical needs, primarily in oncology. Its product, Jakafi, a JAK1 and JAK2 inhibitor, is currently approved in the U.S. for the treatment of intermediate or high-risk myelofibrosis and is in development as a potential treatment for other cancers. The company was founded by Roy A. Whitfield in April 1991 and is headquartered in Wilmington, DE. Incyte Corporation focuses on the discovery, development, and commercialization of proprietary therapeutics in oncology in the United States and internationally. It offers JAKAFI for the treatment of myelofibrosis and polycythemia vera cancers. The company's clinical stage products include ruxolitinib cream, which is in a Phase II clinical trial for the treatment of alopecia areata and atopic dermatitis; and baricitinib, which is in a Phase III clinical trial for the treatment of rheumatoid arthritis.
In addition, it is developing itacitinib, which is in Phase I/II clinical trials in combination with osimertinib for non-small cell lung cancer (NSCLC); INCB52793, INCB54329 (BRD), INCB57643 (BRD), and INCB53914 (PIM), which are in Phase I/II trials for the treatment of advanced malignancies; INCB54828 (FGFR1/2/3), which is in a Phase II clinical trial for the treatment of bladder cancer, cholangiocarcinoma, and 8p11 MPNs; INCB59872 (LSD1), which is in a Phase II clinical trial for the treatment of acute myeloid leukemia and small cell lung cancer; and capmatinib, which is in a Phase II clinical trial for the treatment of NSCLC and liver cancer. Further, the company's clinical stage products include epacadostat, which is in a Phase II clinical trial for the treatment of various tumors, in Phase I/II clinical trials for the treatment of NSCLC and bladder cancer, and in a Phase III clinical trial for the treatment of advanced melanoma; and INCB01158, INCSHR1210, INCAGN1876 (GITR), and INCAGN1949 (OX40), which are in Phase I/II clinical trials for the treatment of solid tumors. It markets its JAKAFI product through a network of specialty pharmacy providers and wholesalers. The company has collaboration agreements with Novartis International Pharmaceutical Ltd.; Eli Lilly and Company; Agenus Inc.; Jiangsu Hengrui Medicine Co., Ltd.; Merus N.V.; Calithera Biosciences, Inc.; Pfizer Inc.; and Abramson Cancer Center. Incyte Corporation was founded in 1991 and is headquartered in Wilmington, Delaware.
\section{Introduction} \label{sec1} Optimal design theory provides useful tools to improve the accuracy of statistical inference without any additional costs by carefully planning experiments before they are conducted. Numerous authors have worked on the construction of optimal designs in various situations. For many models optimal designs have been developed explicitly [see the monographs of \cite{pukelsheim2006,atkinson2007}], and several algorithms have been developed for their numerical construction if the optimal designs are not available in explicit form [see \cite{yu2010,yanbie2013} among others]. On the other hand, the construction of such designs depends sensitively on the model assumptions, and an optimal design for a particular model might be inefficient if it is used in a different model. Moreover, in many experiments it is often not obvious which model should be finally fitted to the data, and model building is an important part of data analysis. A typical and very important example is given by Phase II dose-finding studies, where various nonlinear regression models of the form \begin{eqnarray} Y=\eta(x,\theta)+\varepsilon. \label{1.1} \end{eqnarray} have been developed for describing the dose-response relation [see \cite{pinbrebra2006}], but the problem of model uncertainty arises in nearly any other statistical application. As a consequence, the construction of efficient designs for model identification has become an important field in optimal design theory. Early work can be found in \cite{stigler1971}, who determined designs for discriminating between two nested univariate polynomials by minimizing the volume of the confidence ellipsoid for the parameters corresponding to the extension of the smaller model. Several authors have worked on this approach in various other classes of nested models [see for example \cite{dethal1998} or \cite{songwong1999} among others].
\\ A different approach to the problem of constructing optimal designs for model discrimination is given in a pioneering paper by \cite{atkfed1975a}, who proposed the $T$-optimality criterion to construct designs for discriminating between two competing regression models. Roughly speaking, their approach provides a design such that the sum of squares for a lack of fit test is large. \cite{atkfed1975b} extended this method for discriminating a selected model $\eta_1$ from a class of other regression models, say $ \{\eta_2, \ldots , \eta_k \}$, $k \ge 2$. In contrast to the work of \cite{stigler1971} and its followers, the $T$-optimality criterion does not require competing nested models and has found considerable attention in the statistical literature with numerous applications including such important fields as chemistry or pharmacokinetics [see e.g.\ \cite{atkbogbog1998}, \cite{ucibog2005}, \cite{loptomtra2007}, \citet{atkinson2008}, \cite{Tommasi09} or \cite{fooduf2011} for some more recent references]. A drawback of the $T$-optimality criterion consists in the fact that -- even in the case of linear models -- the criterion depends on the parameters of the model $\eta_1$. This means that $T$-optimality is a local optimality criterion in the sense of \cite{chernoff1953}, and that it requires some preliminary knowledge regarding the parameters. Consequently, most of the cited papers refer to locally $T$-optimal designs. Although there exist applications where such information is available [for example in the analysis of dose response studies as considered in \cite{pinbrebra2006}], in most situations such knowledge can rarely be provided. Several authors have introduced robust versions of the classical optimality criteria such as Bayesian or minimax $D$-optimality criteria in order to determine efficient designs for model discrimination, which are less sensitive with respect to the choice of parameters [see \cite{pronwalt1985,chaver1995,dette1997}].
The robustness problem of the $T$-optimality criterion has already been mentioned in \cite{atkfed1975a}, who proposed a Bayesian approach to address the problem of parameter uncertainty in the $T$-optimality criterion. \cite{wiens2009} imposed (linear) neighbourhood structures on each regression response and determined least favorable points in these neighbourhoods in order to robustify the locally $T$-optimal design problem. \cite{detmelshp2012} considered polynomial regression models and explicitly determined Bayesian $T$-optimal discriminating designs for the criterion introduced by \cite{atkfed1975a}. Their results indicate the difficulties arising in Bayesian $T$-optimal design problems. \\ The scarcity of literature on Bayesian $T$-optimal discriminating designs can be explained by the fact that in nearly all cases of practical interest these designs have to be found numerically, and even this is a very hard problem. These numerical difficulties are already apparent in the case of locally $T$-optimal designs. \cite{atkfed1975a} proposed an exchange type algorithm, which has a rather slow rate of convergence and has been used by several authors. \cite{BraessDette2013} pointed out that, besides its slow convergence, this algorithm does not yield the solution of the optimal discriminating design problem if more than $5$ model comparisons are under consideration. These authors developed a more efficient algorithm for the determination of locally $T$-optimal discriminating designs for several competing regression models by exploring relations between optimal design problems and (nonlinear) vector-valued approximation theory. Although the resulting algorithm provides a substantial improvement over the exchange type methods, it cannot deal with Bayesian optimality criteria in general, and the development of an efficient procedure for this purpose is a very challenging and open problem. \\ The goal of the present paper is to fill this gap.
We utilize the fact that in applications the integral with respect to the prior distribution has to be evaluated by a discrete approximation, and we show that the discrete Bayesian $T$-optimal design problem is a special case of the local $T$-optimality criterion for a very large number of competing models as considered in \cite{BraessDette2013}. The competing models arise from the different support points used for the approximation of the prior distribution by a discrete measure, and the number of model comparisons in the resulting criterion easily exceeds $200$. Therefore the algorithm in \cite{BraessDette2013} does not provide a solution of the corresponding optimization problem, and we propose a new method for the numerical construction of Bayesian $T$-optimal designs with substantial computational advantages. Roughly speaking, the support points of the design in each iteration are determined in a similar manner as proposed in \cite{atkfed1975a}, but for the calculation of the corresponding weights we use a gradient approach. It turns out that the new procedure is extremely efficient and is able to find Bayesian $T$-optimal designs within a small number of iterations. \\ The remaining part of this paper is organized as follows. In Section \ref{sec2} we give an introduction to the problem of designing experiments for discriminating between competing regression models and also derive some basic properties of locally $T$-optimal discriminating designs. In particular we show how the Bayesian $T$-optimal design problem is related to a local one with a large number of model comparisons [see Section \ref{sec2b}]. Section \ref{sec3} is devoted to the construction of new numerical procedures (in particular Algorithm \ref{algorithm:new}), for which we prove convergence to a $T$-optimal discriminating design. Our approach consists of two steps consecutively optimizing with respect to the support points (Step 1) and weights of the design (Step 2).
For the second step we also discuss two procedures to speed up the convergence of the algorithm. The results are illustrated in Section \ref{sec5} by calculating several Bayesian $T$-optimal discriminating designs in examples where all other available procedures do not provide a numerical solution of the optimal design problem. For example, the new procedure is able to solve locally $T$-optimal design problems with more than $240$ model comparisons, as they arise frequently in Bayesian $T$-optimal design problems. In particular we illustrate the methodology by calculating Bayesian $T$-optimal discriminating designs for a dose finding clinical trial which has recently been discussed in \cite{pinbrebra2006}. The corresponding R-package will be provided in the CRAN library. Finally, all proofs are deferred to an appendix in Section \ref{sec6}. \section{$T$-optimal discriminating designs} \label{sec2} Consider the regression model \eqref{1.1}, where $x$ belongs to some compact set $\mathcal{X}$ and observations at different experimental conditions are independent. For the sake of transparency and a clear representation we assume that the error $\varepsilon$ is normally distributed. The methodology developed in the following discussion can be extended to more general error structures following the line of research in \cite{loptomtra2007}, but details are omitted for the sake of brevity. Throughout this paper we consider the situation where $\nu $ different models, say \begin{align} \label{2.1} \eta_i(x,\theta_{i}), \qquad i = 1,\dots,\nu, \end{align} are available to describe the dependency of $Y$ on the predictor $x$. In \eqref{2.1} the quantity $\theta_{i}$ denotes a $d_i$-dimensional parameter, which varies in a compact space, say $\Theta_i$ ($i=1,\ldots , \nu$). Following \cite{kiefer1974} we consider approximate designs that are defined as probability measures, say $\xi$, with finite support.
The support points $ x_1,\ldots, x_k $ of a design $\xi$ give the locations where observations are taken, while the weights $\omega_1,\ldots, \omega_k$ describe the relative proportions of observations at these points. If an approximate design is given and $n$ observations can be taken, a rounding procedure is applied to obtain integers $n_{i} $ ($i=1,\ldots,k)$ from the not necessarily integer-valued quantities $\omega_{i}n$ such that $\sum_{i=1}^kn_i=n$. We are interested in designing an experiment such that the most appropriate model can be chosen from the given class $\{\eta_1, \ldots , \eta_\nu\}$ of competing models. \subsection{$T$-optimal designs} \label{sec2a} In the case of $\nu =2$ competing models \cite{atkfed1975a} proposed to fix one model, say $\eta_1 (\cdot, \theta_1)$, with corresponding parameter $\overline \theta_1$ and to maximize the functional \begin{align} \label{2.2} T_{12}(\xi) = \inf_{\theta_{2} \in \Theta_2} \int_{\mathcal{X}} \Big[ \eta_1(x,\overline{\theta}_{1}) - \eta_2(x,\theta_{2}) \Big]^2 \xi(dx), \end{align} in the class of all (approximate) designs. Roughly speaking, these designs maximize the power of the test of the hypothesis ''$\eta_1$ versus $\eta_2$''. Note that the resulting optimal design depends on the parameter $\overline{\theta}_1$ for the first model, which has to be fixed by the experimenter. This means that these designs are local in the sense of \cite{chernoff1953}. It was pointed out by \cite{DetteMelasShpilev2013} that locally $T$-optimal designs may be very sensitive with respect to misspecification of $\overline \theta_1$.
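To make the criterion \eqref{2.2} concrete, the following sketch evaluates $T_{12}(\xi)$ for a design with given support and weights. It is an illustration only: the exponential model $\eta_1$, the parameter values, and the choice of a rival model $\eta_2(x,\theta_2)=\theta_{2,1}+\theta_{2,2}\,x$ that is linear in its parameters (so that the inner infimum reduces to a weighted least-squares problem in closed form) are all assumptions, not part of the paper.

```python
# Hedged sketch of the local T-optimality criterion (2.2).  Assumptions (not
# from the paper): eta_1 is exponential with known parameter theta1_bar, and
# the rival eta_2(x, th) = th[0] + th[1]*x is linear in its parameters, so
# the inner infimum over theta_2 is a weighted least-squares fit.
import numpy as np

def eta1(x, th):
    return th[0] * np.exp(-th[1] * x)

def T12(x, w, theta1_bar):
    """T_12(xi) for the design xi with support x and weights w."""
    y = eta1(x, np.asarray(theta1_bar))
    A = np.column_stack([np.ones_like(x), x])        # regressors of eta_2
    th2 = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return float(np.sum(w * (y - A @ th2) ** 2))     # weighted lack of fit

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
w = np.full(5, 0.2)
val = T12(x, w, [1.0, 1.0])
```

For a two-point design the two-parameter rival model interpolates $\eta_1$ exactly, so $T_{12}$ vanishes; with five support points the criterion is strictly positive.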
In a further paper \cite{atkfed1975b} generalized their approach to construct optimal discriminating designs for more than $2$ competing regression models and suggested the criterion \begin{align} \label{2.3} T (\xi) = \min_{2 \leq j \leq \nu} T_{1j}(\xi) = \min_{2 \leq j \leq \nu}\inf_{\theta_{j} \in \Theta_j} \int_{\mathcal{X}} \Big[ \eta_1(x,\overline{\theta}_{1}) - \eta_j(x,\theta_{j}) \Big]^2 \xi(dx). \end{align} This criterion determines a ''good'' design for discriminating the model $\eta_1$ against $\eta_2, \ldots, \eta_\nu$, where the parameter $\overline{\theta}_1$ has the same meaning as before. As pointed out by \cite{tomlop2010} and \cite{BraessDette2013} there are many situations where it is not clear which model should be considered as fixed, and these authors proposed a symmetrized Bayesian (instead of minimax) version of the $T$-optimality criterion, that is \begin{align} \label{2.4} T_{\mathrm{P}}(\xi) = \sum_{i,j=1}^{\nu} p_{i,j} T_{i,j}(\xi ) = \sum_{i,j=1}^{\nu} p_{i,j} \inf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \Big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,\theta_{i,j}) \Big]^2 \xi(dx), \end{align} where the quantities $p_{i,j}$ denote nonnegative weights reflecting the importance of the comparison between the models $\eta_i$ and $\eta_j$. We note again that this criterion requires the specification of the parameter $\overline{\theta}_{i}$ whenever the corresponding weight $p_{i,j}$ is positive. Throughout this paper we will call a design maximizing one of the criteria \eqref{2.2} - \eqref{2.4} a locally $T$-optimal discriminating design, where the specific criterion under consideration is always clear from the context. For some recent references discussing locally $T$-optimal discriminating designs we refer to \cite{ucibog2005}, \cite{loptomtra2007}, \citet{atkinson2008}, \cite{Tommasi09} or \cite{BraessDette2013} among many others. For the formulation of the first results we require the following assumptions.
\begin{assumption} \label{assum1} For each $i=1,\dots,\nu$ the function $\eta_i(\cdot ,\theta_{i})$ is continuously differentiable with respect to the parameter $\theta_{i} \in \Theta_i$. \end{assumption} \begin{assumption}\label{assum2} For any design $\xi$ such that $T_\mathrm{P}(\xi)>0$ and weight $p_{i,j} \neq 0$ the infima in \eqref{2.4} are attained at unique points $ \widehat{\theta}_{i,j} = \widehat{\theta}_{i,j}(\xi)$ in the interior of the set $\Theta_j$. \end{assumption} For a design $\xi$ we also introduce the notation \begin{align} {\Theta}_{i,j}^*(\xi) = \arginf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \big[ \eta_i(x,\overline{\theta}_i) - \eta_j(x, \theta_{i,j}) \big]^2 \xi(dx) , \label{thetamin} \end{align} which is used in the formulation of the following result. \begin{thm} \label{thm1} If Assumption \ref{assum1} is satisfied, then the design $\xi^*$ is a locally $T_\mathrm{P}$-optimal discriminating design if and only if there exist distributions $\mu_{ij}^*$ on the sets $ {\Theta}_{i,j}^*(\xi^*)$ defined in \eqref{thetamin} such that the inequality \begin{equation} \label{equiv} \sum_{i,j = 1}^{\nu} p_{i,j} \int_{{\Theta}_{i,j}^*(\xi^*)} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{\theta}_{i,j}) \big]^2 \mu_{ij}^* (d {\theta}_{i,j}) ~\leq T_{\mathrm{P}}(\xi^*) \end{equation} is satisfied for all $ x \in \mathcal{X}$. Moreover, there is equality in \eqref{equiv} for all support points of the locally $T_\mathrm{P}$-optimal discriminating design $\xi^*$. \end{thm} Theorem \ref{thm1} provides an extension of the corresponding theorem in~\cite{BraessDette2013}, and the proof is similar and therefore omitted.
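Under Assumption \ref{assum2} the inequality \eqref{equiv} can be checked numerically: a design is locally $T_{\mathrm{P}}$-optimal if and only if $\Psi(x,\xi)\leq T_{\mathrm{P}}(\xi)$ on $\mathcal{X}$, with equality at the support points. The sketch below performs this check on a grid for two hypothetical models ($\nu=2$); the models, the design and the grid are illustrative assumptions, and the rival model is linear in its parameters so that $\widehat{\theta}_{1,2}$ is a weighted least-squares fit.

```python
# Numerical check of the equivalence condition of Theorem 1 for nu = 2 under
# Assumption 2: xi is optimal iff max_x Psi(x, xi) <= T_P(xi).  The models,
# the candidate design and the grid are illustrative assumptions; the rival
# model th[0] + th[1]*x is linear, so theta_hat_{1,2} is a weighted LS fit.
import numpy as np

def eta1(x, th):
    return th[0] * np.exp(-th[1] * x)

theta1_bar = np.array([1.0, 1.0])

def check_design(x, w, grid):
    """Return (T_P(xi), max over the grid of Psi(., xi)) for a design (x, w)."""
    A = np.column_stack([np.ones_like(x), x])
    y = eta1(x, theta1_bar)
    th2 = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    T = float(np.sum(w * (y - A @ th2) ** 2))
    psi = (eta1(grid, theta1_bar) - (th2[0] + th2[1] * grid)) ** 2
    return T, float(psi.max())

grid = np.linspace(0.0, 1.0, 1001)
T_val, psi_max = check_design(np.array([0.0, 0.5, 1.0]),
                              np.array([0.25, 0.5, 0.25]), grid)
# psi_max > T_val shows that this particular design is not yet optimal
```

Since $T_{\mathrm{P}}(\xi)$ is the $\xi$-average of $\Psi$ over the support, $\max_x \Psi(x,\xi) \geq T_{\mathrm{P}}(\xi)$ always holds; optimality corresponds to equality.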
For designs $\xi, \zeta$ on $\mathcal {X}$ we introduce the function \begin{align} \label{qfct} Q(\zeta,\xi) = \int_\mathcal{X} \sum_{i,j = 1}^{\nu} p_{i,j}\inf_{\theta_{i,j} \in \Theta_{ij}^*(\xi)} \Big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{\theta}_{i,j}) \Big]^2 \zeta (dx) , \end{align} where $\zeta$ is an experimental design and the set $\Theta_{ij}^*(\xi)$ is defined in \eqref{thetamin}. Using Lemma \ref{lemma1} from the appendix it is easy to check that \begin{align*} \frac{\partial T_{\mathrm{P}}(\xi(\alpha))}{\partial\alpha}\Big|_{\alpha=0}= Q(\zeta, \xi)-T_{\mathrm{P}}(\xi) \end{align*} where $ \xi(\alpha)=(1-\alpha) \xi + \alpha \zeta $ denotes the convex combination of the designs $\xi$ and $\zeta$. If Assumption \ref{assum2} is satisfied, the function $Q$ simplifies to \begin{align} \label{psi} Q(\zeta,\xi) &= \int_{\mathcal{X}} \sum_{i,j = 1}^{\nu} p_{i,j} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x, \widehat{\theta}_{i,j} ) \big]^2 \zeta (dx) , \nonumber \end{align} which plays an important role in the subsequent discussion. In particular we also need the following extension of Theorem \ref{thm1}. \begin{thm}\label{thm2.2} If Assumption \ref{assum1} is satisfied and the design $\xi$ is not $T_\mathrm{P}$-optimal, then there exists a design $\zeta^*$, such that the inequality $Q(\zeta^*,\xi)> T_{\mathrm{P}}(\xi)$ holds. \end{thm} In order to obtain a more manageable version of this result, let $\hat \mu_{i,j}(\xi)$ denote a measure on the set ${\Theta}_{i,j}^*(\xi)$ ($i,j=1,\ldots,\nu$) for which the function $$ \max_{x \in \mathcal{X}} \sum_{i,j = 1}^{\nu} p_{i,j}\int_{{\Theta}_{i,j}^*(\xi)} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{ \theta}_{i,j}) \big]^2 \mu_{i,j}(d\theta_{i,j}) $$ attains its minimal value, and define \begin{equation} \label{Psifct} \Psi(x,\xi)= \sum_{i,j = 1}^{\nu} p_{i,j} \int_{{\Theta}_{i,j}^*(\xi)} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{\theta}_{i,j}) \big]^2 \hat \mu_{ij}(d {\theta}_{i,j}) ~. 
\end{equation} Note that the function in \eqref{Psifct} simplifies to \begin{equation} \label{Psifctsimp} \Psi(x,\xi)= \sum_{i,j = 1}^{\nu} p_{i,j} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{\widehat \theta}_{i,j}) \big]^2 ~, \end{equation} if both Assumptions \ref{assum1} and \ref{assum2} are satisfied. \begin{cor}\label{thm1a} If Assumption \ref{assum1} is satisfied and the design $\xi$ is not $T_\mathrm{P}$-optimal then there exists a point $\overline x \in \mathcal{X} $ such that \begin{align} \nonumber \Psi(\overline x,\xi)> T_{\mathrm{P}}(\xi). \end{align} \end{cor} \subsection{Bayesian $T$-optimal designs} \label{sec2b} As pointed out by \cite{detmelshp2012} locally $T$-optimal designs are rather sensitive with respect to misspecification of the unknown parameters $\overline{\theta}_{i}$, and it might be appropriate to construct more robust designs for model discrimination. The problem of robustness was already mentioned in \cite{atkfed1975a} and these authors proposed a Bayesian version of the $T$-optimality criterion which reads in the situation of the criterion \eqref{2.4} as follows \begin{align} \label{2.5} T_{\mathrm{P}}^{\mathrm{B}}(\xi) = \sum_{i,j=1}^{\nu} p_{i,j} \int_{\Theta_i} \inf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \Big[ \eta_i(x,\lambda_i) - \eta_j(x,\theta_{i,j}) \Big]^2 \xi(dx) \mathcal{P}_{i}(d\lambda_i). \end{align} Here for each $i=1, \ldots, \nu$ the measure $\mathcal{P}_{i}$ denotes a prior distribution for the parameter $\theta_{i}$ in model $\eta_i$, such that all integrals in \eqref{2.5} are well defined. Throughout this paper we will call any design maximizing the criterion \eqref{2.5} a Bayesian $T$-optimal discriminating design. For (two) polynomial regression models Bayesian $T$-optimal discriminating designs have been explicitly determined by \cite{DetteMelasShpilev2013}, and their results indicate the intrinsic difficulties in the construction of optimal designs with respect to this criterion. 
\\ In the following we will link the criterion \eqref{2.5} to the locally $T$-optimality criterion \eqref{2.4} for a large number of competing models. For this purpose we note that in nearly all situations of practical interest an explicit evaluation of the integral in \eqref{2.5} is not possible and the criterion has to be evaluated by numerical integration approximating the prior distribution by a measure with finite support. Therefore we assume that the prior distribution $ \mathcal{P}_{i}$ in the criterion is given by a discrete measure with masses $\tau_{i1}, \ldots, \tau_{i\ell_i}$ at the points $\lambda_{i1},\ldots ,\lambda_{i\ell_i}$. The criterion in \eqref{2.5} can then be rewritten as \begin{align} \label{2.6} T_{\mathrm{P}}^{\mathrm{B}}(\xi) = \sum_{i,j=1}^{\nu} \sum_{k=1}^{\ell_i} p_{i,j} \tau_{ik} \inf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \big[ \eta_i(x,\lambda_{ik}) - \eta_j(x,\theta_{i,j}) \big]^2 \xi(dx) , \end{align} which is a locally $T$-optimality criterion of the form \eqref{2.4}. The only difference between the criterion obtained from the Bayesian approach and \eqref{2.4} consists in the fact that the criterion \eqref{2.6} involves substantially more comparisons of the functions $\eta_i$ and $\eta_j$. For example, if this approach is used for a Bayesian version of the criterion \eqref{2.2} we obtain \begin{align} \label{2.7} T_{12}^B (\xi) = \sum_{k=1}^{\ell} \tau_{k} \inf_{\theta_{2} \in \Theta_2} \int_{\mathcal{X}} \big[ \eta_1(x, \lambda_k) - \eta_2(x,\theta_{2}) \big]^2 \xi(dx). \end{align} This is the locally $T$-optimality criterion \eqref{2.4} with $\nu=\ell +1,\ p_{i,\ell +1}=\tau_i \ (i=1,\dots,\ell)$ and $p_{i,j}=0$ otherwise.
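The discretised criterion \eqref{2.7} can be sketched directly: each support point $\lambda_k$ of the prior contributes its own inner infimum. In the hypothetical example below the rival model is linear in its parameters, so each infimum is a weighted least-squares problem in closed form; the prior, the models and the design are assumptions for illustration.

```python
# Sketch of the discretised Bayesian criterion (2.7): the prior for theta_1
# puts masses tau_k at points lambda_k, and every prior support point adds
# one comparison with its own inner infimum.  The exponential eta_1, the
# linear rival model and the prior below are illustrative assumptions.
import numpy as np

def eta1(x, th):
    return th[0] * np.exp(-th[1] * x)

def T12_bayes(x, w, lambdas, taus):
    """Sum over k of tau_k * inf_theta2 of the weighted lack of fit."""
    A = np.column_stack([np.ones_like(x), x])        # linear rival model
    total = 0.0
    for lam, tau in zip(lambdas, taus):              # one comparison per
        y = eta1(x, np.asarray(lam))                 # prior support point
        th2 = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
        total += tau * float(np.sum(w * (y - A @ th2) ** 2))
    return total

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
w = np.full(5, 0.2)
val = T12_bayes(x, w, lambdas=[(1.0, 0.5), (1.0, 1.0), (1.0, 2.0)],
                taus=[0.25, 0.5, 0.25])
```

Note that each prior support point requires a separate inner optimization, which is exactly why the number of effective model comparisons grows so quickly.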
Thus, instead of making only one comparison as required for the locally $T$-optimality criterion, the Bayesian approach (with a discrete approximation of the prior) yields a criterion with $\ell$ comparisons, where $\ell$ denotes the number of support points used for the approximation of the prior distribution. Moreover, for each support point of the prior distribution in the criterion \eqref{2.6} (or \eqref{2.7}) the infimum has to be calculated numerically, which is computationally expensive. Consequently, the computation of Bayesian $T$-optimal discriminating designs is particularly challenging. In the following sections we provide an efficient solution of this problem. \section{Calculating locally $T$-optimal designs}\label{sec3} \cite{BraessDette2013} proposed an algorithm for the numerical construction of locally $T$-optimal designs, which is based on vector-valued Chebyshev approximation. This algorithm is quite difficult both in terms of description and implementation. Moreover, it requires substantial computational resources and is therefore only able to deal with a small number of comparisons in the $T$-optimality criterion. The purpose of this section is to develop a more efficient method which is able to deal with a large number of comparisons in the criterion and avoids the drawbacks of the procedures in \cite{atkfed1975a} and \cite{BraessDette2013}. As pointed out in Section \ref{sec2b}, methods solving this problem are required for the calculation of Bayesian $T$-optimal discriminating designs. Recall the definition of the function $\Psi$ in \eqref{Psifct} and note that under Assumption \ref{assum1} it follows from Corollary \ref{thm1a} that there exists a point $\overline{x} \in \mathcal{X}$, such that the inequality $$ \Psi(\overline{x},\xi) > T_{\mathrm{P}}(\xi) $$ holds, whenever $\xi$ is {\bf not} a locally $T$-optimal discriminating design.
The algorithm of \cite{atkfed1975a} uses this property to construct a sequence of designs which converges to the locally $T$-optimal discriminating design. For further reference it is stated here. \begin{algorithm}[\cite{atkfed1975a}]{\ } \label{algorithm:AtkinsonFedorov} Let $ \xi_0$ denote a given (starting) design and let $( \alpha_s)_{s=0}^{\infty}$ be a sequence of positive numbers, such that $\lim_{s \to \infty} \, \alpha_s = 0, \; \sum_{s = 0}^{\infty} \alpha_s = \infty, \; \sum_{s = 0}^{\infty} \alpha_s^2 < \infty. $ For $s=0,1,\ldots $ define $$\xi_{s+1} = ( 1 - \alpha_s ) \xi_s + \alpha_s \xi(x_{s+1}),$$ where $ x_{s+1} = \argmax_{x \in \mathcal{X}} \Psi(x,\xi_s).$ \end{algorithm} \noindent It can be shown that this algorithm converges in the sense that $\lim_{s \to \infty} T_{\mathrm{P}}(\xi_s) = T_{\mathrm{P}}(\xi^*)$, where $\xi^*$ denotes a locally $T$-optimal discriminating design. However, a major problem of Algorithm \ref{algorithm:AtkinsonFedorov} is that it yields a sequence of designs with an increasing number of support points. As a consequence the resulting design (after applying some stopping criterion) is concentrated on a large set of points. Even if this problem can be solved by clustering or by determining the extrema of the final function $\Psi(x,\xi_s)$, it is much more difficult to deal with the accumulation of support points during the iteration. Moreover, \cite{BraessDette2013} demonstrated that in many cases the iteration process may take several hundred iterations to obtain a locally $T$-optimal discriminating design with the required precision, resulting in a high computational complexity for the recalculation of the optimum values \begin{align}\label{thetatilde} \widehat{\theta}_{i,j} \in \arginf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \big[ \eta_i(x,\overline{\theta}_i) - \eta_j(x, \theta_{i,j}) \big]^2 \xi(dx) \end{align} in the optimality criterion \eqref{2.4}.
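A minimal simulation of Algorithm \ref{algorithm:AtkinsonFedorov} for two hypothetical models illustrates the accumulation of support points just described. The step sizes $\alpha_s = 1/(s+2)$ satisfy the stated conditions; the models, the parameter $\overline{\theta}_1$ and the candidate grid are assumptions, and the rival model is linear in its parameters so that the inner infimum stays in closed form.

```python
# Minimal simulation of the exchange scheme (Algorithm 1) for two models on
# X = [0, 1].  alpha_s = 1/(s+2) fulfils the step-size conditions above; all
# model and parameter choices are illustrative assumptions.
import numpy as np

def eta1(x, th):
    return th[0] * np.exp(-th[1] * x)

theta1_bar = np.array([1.0, 1.0])
grid = np.linspace(0.0, 1.0, 201)        # candidate points in X

def fit_eta2(x, w):
    """theta_hat_2(xi): weighted LS fit of the linear rival th0 + th1*x."""
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(A.T @ (w[:, None] * A),
                           A.T @ (w * eta1(x, theta1_bar)))

x = np.array([0.0, 1.0])                 # starting design xi_0 ...
w = np.array([0.5, 0.5])                 # ... with equal weights
for s in range(50):
    th2 = fit_eta2(x, w)
    psi = (eta1(grid, theta1_bar) - (th2[0] + th2[1] * grid)) ** 2
    alpha = 1.0 / (s + 2)
    x = np.append(x, grid[np.argmax(psi)])        # x_{s+1} = argmax Psi
    w = np.append((1 - alpha) * w, alpha)         # mix with weight alpha_s

th2 = fit_eta2(x, w)
T_final = float(np.sum(w * (eta1(x, theta1_bar)
                            - (th2[0] + th2[1] * x)) ** 2))
```

After 50 iterations the design carries 52 support points, many of them nearly coincident, which is exactly the accumulation problem that motivates clustering or a different algorithm.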
These authors also showed that Algorithm \ref{algorithm:AtkinsonFedorov} may not find the optimal design if there are too many model comparisons involved in the $T$-optimality criterion \eqref{2.4}. \\ Therefore, we propose the following basic procedure for the calculation of locally $T$-optimal discriminating designs as an alternative to Algorithm~\ref{algorithm:AtkinsonFedorov}. Roughly speaking, it consists of two steps treating the maximization with respect to the support points (Step 1) and the weights (Step 2) separately, where two methods implementing the second step will be given below [see Sections \ref{sec31} and \ref{sec32} for details]. \begin{algorithm}{\ } \label{algorithm:new}{\rm Let ${\xi_0}$ denote a starting design such that $T_{\mathrm{P}}({\xi_0})>0$ and define recursively a sequence of designs $({\xi_s})_{s=0,1,\ldots } $ as follows: \begin{itemize} \item[$(1)$] Let $\mathcal{S}_{[s]}$ denote the support of the design ${\xi_s}$. Determine the set $\mathcal{E}_{[s]}$ of all local maxima of the function $\Psi(x,{\xi_s})$ on the design space $\mathcal{X}$ and define $\mathcal{S}_{[s+1]} = \mathcal{S}_{[s]} \cup \mathcal{E}_{[s]}$. \item[$(2)$] We define $ \xi = \{\mathcal{S}_{[s+1]},\omega\} $ as the design supported at $ \mathcal{S}_{[s+1]}$ (with a vector $\omega$ of weights) and determine the locally $T_{\mathrm{P}}$-optimal design in the class of all designs supported at $ \mathcal{S}_{[s+1]}$, that is, we determine the vector $\omega_{[s+1]}$ maximizing the function \begin{align*} g(\omega) = T_{\mathrm{P}}( \{\mathcal{S}_{[s+1]},\omega\}) = \sum_{i,j=1}^{\nu} p_{i,j} \inf_{\theta_{i,j} \in \Theta_j} \sum_{x \in \mathcal{S}_{[s+1]}} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,\theta_{i,j}) \big]^2 \omega_x \end{align*} (here $\omega_x$ denotes the weight at the point $x \in \mathcal{S}_{[s+1]}$). 
All points in ${\mathcal{S}}_{[s+1]}$ with vanishing components in the vector of weights $\omega_{[s+1]}$ will be removed and the new set of support points will also be denoted by ${\mathcal{S}}_{[s+1]}$. Finally the design ${\xi}_{s+1}$ is defined as the design with the set of support points ${\mathcal{S}}_{[s+1]}$ and the corresponding nonzero weights. \end{itemize} } \end{algorithm} \noindent \begin{thm}\label{thm2} Let Assumption \ref{assum1} be satisfied and let $({\xi_s})_{s=0,1,\ldots } $ denote the sequence of designs obtained by Algorithm \ref{algorithm:new}, then $$\lim_{s \to \infty} T_{\mathrm{P}}(\xi_{s+1}) = T_{\mathrm{P}}(\xi^*),$$ where $\xi^*$ denotes a locally $T$-optimal discriminating design. \end{thm} A proof of Theorem \ref{thm2} is deferred to Section \ref{sec6}. Note that the algorithm adds all local maxima of the function $\Psi(x,{\xi_s})$ as possible support points of the design in the next iteration. Consequently, in its current form Algorithm~\ref{algorithm:new} also accumulates too many support points. To avoid this problem, it is suggested to remove at each step those points from the support whose weight is smaller than $m^{0.25}$, where $m$ denotes the working precision of the software used in the implementation (which is $2.2\times 10^{-16} $ for $R$). Note also that this refinement does not affect the convergence of the algorithm from a practical point of view. A more important question is the implementation of the second step of the procedure, that is, the maximization of the function $g(\omega)$. Before we discuss two computationally efficient procedures for this purpose in the following sections, we state an important property of the function $\Psi(x,\xi_{s+1})$ obtained in each iteration. \begin{lemma}\label{lem2} At the end of each iteration of Algorithm \ref{algorithm:new} the function $ \Psi(x,\xi_{s+1}) $ attains one and the same value for all support points of the design $ \xi_{s+1}$. 
\end{lemma} \subsection{Quadratic programming} \label{sec31} Let $\mathcal{S}_{[s+1]} = \{x_1,\ldots , x_n\}$ denote the set obtained in the first step of Algorithm \ref{algorithm:new} and define $\xi$ as a design supported at $\mathcal{S}_{[s+1]} $ with corresponding weights $\omega_1,\ldots , \omega_n$ (which have to be determined in Step 2 of the algorithm) by maximizing the function \begin{align*} g(\omega) = \sum_{i,j = 1}^{\nu} p_{i,j} \sum_{k = 1}^{n} {\omega}_k \bigl[ \eta_i(x_k,\overline{\theta}_{i}) - \eta_j(x_k, \widehat{\theta}_{i,j} ) \bigr]^2, \end{align*} where $ \widehat{\theta}_{i,j} = \widehat{\theta}_{i,j} (\omega) $ is defined in \eqref{thetatilde}. For this purpose we suggest linearizing the functions $\eta_j(x_k,\theta_{i,j})$ in a neighborhood of the point $\widehat{\theta}_{i,j}$. More precisely, we consider the function \begin{align*} \overline g(\omega) &= \sum_{i,j = 1}^{\nu} p_{i,j} \min_{\alpha_{i,j} \in \mathbb{R}^{d_j} } \sum_{k = 1}^{n} {\omega}_k \Big[ \eta_i(x_k,\overline{\theta}_{i}) - \eta_j(x_k,\widehat{\theta}_{i,j}) - \alpha_{i,j}^{\mathrm{T}} \frac{\partial \eta_j(x_k,{\theta}_{i,j})}{\partial {\theta}_{i,j}}\Big|_{{\theta}_{i,j} = \widehat{\theta}_{i,j}} \Big]^2 
\\ &= \sum_{i,j = 1}^{\nu} p_{i,j} \min_{\alpha_{i,j} \in \mathbb{R}^{d_j} } \left[ \alpha_{i,j}^{\mathrm{T}} \mathbf{J}_{i,j}^{\mathrm{T}} \mathbf{\Omega} \mathbf{J}_{i,j} \alpha_{i,j} - 2 \omega^{\mathrm{T}} \mathbf{R}_{i,j} \alpha_{i,j} + b_{i,j}^{\mathrm{T}} \omega \right], \end{align*} where $d_j$ is the dimension of the parameter space $\Theta_j$, $ \mathbf{\Omega} = \mathrm{diag}({\omega}_1,\dots,{\omega}_{n})$ and the matrices $\mathbf{J}_{i,j} \in \mathbb{R}^{n \times d_j} $, $\mathbf{R}_{i,j} \in \mathbb{R}^{n \times d_j} $ and the vectors $b_{i,j} \in \mathbb{R}^{n }$ are defined by \begin{align*} &\mathbf{J}_{i,j} = \Bigl( \frac{\partial \eta_j(x_r,\theta_{i,j})}{\partial \theta_{i,j}}\Big|_{\theta_{i,j} = \widehat{\theta}_{i,j}} \Bigl)_{ r = 1,\dots,n}, \\ &\mathbf{R}_{i,j} = \Bigl( [\eta_i(x_r,\overline{\theta}_{i}) - \eta_j(x_r,\widehat{\theta}_{i,j}) ] \frac{\partial \eta_j(x_r,\theta_{i,j})}{\partial \theta_{i,j}}\Big|_{\theta_{i,j} = \widehat{\theta}_{i,j}} \Bigl)_{ r = 1,\dots,n },\\ &b_{i,j} = \Bigl( [\eta_i(x_r,\overline{\theta}_{i}) - \eta_j(x_r,\widehat{\theta}_{i,j})]^2 \Bigl)_{r = 1,\dots,n}, \end{align*} respectively. Obviously the minimum with respect to $\alpha_{i,j}$ is achieved by $\alpha_{i,j} = \left(\mathbf{J}_{i,j}^{\mathrm{T}} \mathbf{\Omega} \mathbf{J}_{i,j}\right)^{-1} \mathbf{R}_{i,j}^{\mathrm{T}} \omega$ which gives \begin{align*} \overline g(\omega) = -\omega^{\mathrm{T}} \mathbf{Q} (\omega ) \; \omega + b^{\mathrm{T}} \omega, \end{align*} where \begin{align*} &\mathbf{Q} (\omega ) = \sum_{i,j = 1}^{\nu} p_{i,j} \mathbf{R}_{i,j} \left( \mathbf{J}_{i,j}^{\mathrm{T}} \mathbf{\Omega} \mathbf{J}_{i,j} \right)^{-1} \mathbf{R}_{i,j}^{\mathrm{T}}. 
\end{align*} The matrix $\mathbf{Q} (\omega ) $ depends on $\omega$, but if we ignore this dependence and take the matrix $\mathbf{\Omega} = \mathrm{diag}(\overline{\omega}_1,\dots,\overline{\omega}_{n})$ as fixed, then we end up with a quadratic programming problem, that is \begin{align} &\phi(\omega,\overline{\omega}) = -\omega^{\mathrm{T}} \mathbf{Q(\overline{\omega})} \; \omega + b^{\mathrm{T}} \omega \rightarrow \max_{\omega}, \label{iteration} \\ &\sum_{k = 1}^{n} \omega_k = 1; \; \omega_k \geq 0, \; k = 1,\dots,n. \nonumber \end{align} This problem is solved iteratively until convergence, substituting at each step the solution obtained in the previous iteration for $\overline{\omega}$. We note that a similar idea has also been proposed by \cite{BraessDette2013}. \begin{rem} {\rm In the practical implementation of the procedure it is recommended to perform only a few iterations of this step, such that an improvement in the criterion value between the starting design of Step 2 and the design obtained by the iteration \eqref{iteration} is observed. This will speed up the convergence of the procedure substantially. In this case equality of the function $\Psi$ at the support points of the calculated design (as stated in Lemma~\ref{lem2}) is only achieved approximately. \\ Formally, the convergence of the algorithm is only proved if the iteration \eqref{iteration} is performed until convergence. However, in all examples considered so far, we observed convergence of the procedure, even if only a few iterations of \eqref{iteration} are used. In our $R$ program the user can specify the number of iterations used in this part of the algorithm. Thus, if any problem regarding convergence is observed, the number of iterations should be increased (of course at the cost of the speed of the algorithm). 
} \end{rem} \subsection{A gradient method} \label{sec32} A further option for the second step in Algorithm~\ref{algorithm:new} is a specialized gradient method, which is used for the function \begin{align}\label{gfct} g(\omega) = \sum_{i,j = 1}^{\nu} p_{i,j} \sum_{k = 1}^{n} {\omega}_k \big[ \eta_i(x_k,\overline{\theta}_{i}) - \eta_j(x_k,\widehat{\theta}_{i,j} ) \big]^2 \end{align} where $\widehat{\theta}_{i,j} = \widehat{\theta}_{i,j} (\omega) $ is defined in \eqref{thetatilde}. For its description we define the functions \begin{align*} v_k(\omega) = \sum_{i,j = 1}^{\nu} p_{i,j} \bigl[ \eta_i(x_k,\overline{\theta}_{i}) - \eta_j(x_k,\widehat{\theta}_{i,j}(\omega)) \bigr]^2, \; k = 1,\dots,n, \end{align*} and iteratively calculate a sequence of vectors $(\omega_{(\gamma)})_{\gamma =0,1,\ldots}$. At the beginning we choose $\omega_{(0)} = \overline{\omega}$ (for example equal weights). If ${\omega}_{(\gamma )} =({\omega}_{(\gamma ),1}, \ldots,{\omega}_{(\gamma ),n})$ is given, we proceed for $\gamma=0,1,\ldots $ as follows. We determine indices $\overline{k} $ and $ \underline{k}$ corresponding to $ \max_{1 \leq k \leq n} v_k(\omega_{(\gamma)})$ and $\min_{1 \leq k \leq n} v_k(\omega_{(\gamma)})$, respectively, and define \begin{align} \label{argmax} \alpha^* = \arg \max_{0 \leq \alpha \leq \omega_{(\gamma),\underline{k}}} g(\overline{\omega}_{(\gamma)}(\alpha)), \end{align} where the vector $\overline{\omega}_{(\gamma )} (\alpha) =(\overline{\omega}_{(\gamma ),1}(\alpha), \ldots,\overline{\omega}_{(\gamma ),n}(\alpha))$ is given by $$ \overline{\omega}_{(\gamma),i} (\alpha)= \left\{ \begin{array}{ll} \omega_{(\gamma),i} + \alpha & \mbox{ if } i=\overline{k}\\ \omega_{(\gamma),i} - \alpha & \mbox{ if } i= \underline{k}\\ \omega_{(\gamma),i} & \mbox{ else } \\ \end{array} \right. 
$$ The vector ${\omega}_{(\gamma+1)} $ of the next iteration is then defined by $ {\omega}_{(\gamma+1)} = \overline{\omega}_{(\gamma)} (\alpha^*).$ The following theorem shows that the generated sequence of vectors converges to a maximizer of the function $g$ in \eqref{gfct}; it is proved in the Appendix. \begin{thm} \label{conv_theorem} The sequence $(\omega_{(\gamma)})_{\gamma \in \mathbb{N}}$ converges to a vector $\omega^* \in \argmax g(\omega)$. \end{thm} \begin{rem}{\rm It is worthwhile to mention that the one-dimensional optimization problem \eqref{argmax} is computationally rather expensive. In the implementation we use a linearization of the optimization problem, which is obtained in a similar way as described in Section \ref{sec31}. } \end{rem} \section{Implementation and numerical examples }\label{sec5} We have implemented the procedure for the calculation of the locally $T$-optimal discriminating design in $R$, where the user has to specify the weights $p_{i,j}$ and the corresponding preliminary information regarding the parameters $\overline{\theta}_i$. To be precise, we call \begin{align*} \mathrm{P} = \left[ \begin{array}{ccccc} p_{1,1} & p_{1,2} & \dots & p_{1,\nu-1} & p_{1,\nu} \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ p_{\nu,1} & p_{\nu,2} & \dots & p_{\nu,\nu-1} & p_{\nu,\nu} \\ \end{array} \right] \end{align*} the comparison table for the locally $T$-optimal discriminating design problem under consideration. This table has to be specified by the experimenter. Because the Bayesian $T$-optimal design problem with a discrete prior can be reduced to a locally $T$-optimal one with a large number of model comparisons, we now describe the corresponding table for the Bayesian $T$-optimality criterion. For illustration purposes we consider the case $\nu=2$. 
The Bayesian $T$-optimality criterion is given in \eqref{2.7}, where the prior for the parameter $\theta_1$ puts masses $\tau_{1}, \ldots, \tau_{\ell}$ at the points $\lambda_{1},\ldots ,\lambda_{\ell}$. This criterion can be rewritten as a local $T$-optimality criterion of the form \eqref{2.4}, i.e. \begin{align} T_{\mathrm{P}}(\xi) = \sum_{i,j=1}^{\ell+1} {p}_{i,j} \inf_{\theta_{i,j} \in \Theta_j} \int_{\mathcal{X}} \Big[ \eta_i(x, {\theta}_{i}) - \eta_j(x,\theta_{i,j}) \Big]^2 \xi(dx), \end{align} where the comparison table is given by \begin{align} {\mathrm{P}} = ({p}_{i,j})_{i,j=1,\ldots ,\ell+1}~=~ \left[ \begin{array}{ccccc} 0 & 0 & \dots & 0 & \tau_1 \\ 0 & 0 & \dots & 0 & \tau_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \dots & 0 & \tau_\ell \\ 0 & 0 & \dots & 0 & 0 \\ \end{array} \right] \in \mathbb{R}^{(\ell +1) \times (\ell +1) } , \end{align} $\eta_i(x,\overline{\theta}_{i}) = \eta_1(x,\lambda_i ), \; i = 1,\dots, \ell$ and $\eta_{\ell + 1}(x,\theta_{i,j}) = \eta_2(x,\theta_{i,\ell + 1})$. The extension of this approach to more than two models is easy and left to the reader. We now illustrate the new method in two examples calculating Bayesian $T$-optimal discriminating designs. We have implemented both procedures described in Sections \ref{sec31} and \ref{sec32} and the results were similar. For this reason we only report the Bayesian $T$-optimal discriminating designs calculated by Algorithm \ref{algorithm:new}, where the quadratic programming method was used in Step 2 [see Section \ref{sec31} for details]. \subsection{Bayesian $T$-optimal discriminating designs for exponential models} \label{sec51} Consider the problem of discriminating between the two regression models \begin{align} &\eta_1(x,\theta_1) = \theta_{1,1} - \theta_{1,2} \exp(-\theta_{1,3} x^{\theta_{1,4}}), \label{example1}\\ &\eta_2(x,\theta_2) = \theta_{2,1} - \theta_{2,2} \exp(-\theta_{2,3} x),\nonumber \end{align} where the design space is given by the interval $[0,10]$. 
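As a sketch (Python; `bayesian_comparison_table` is a hypothetical helper for illustration, not part of our $R$ program), the comparison table $\mathrm{P}$ for the case $\nu = 2$ can be assembled from the prior masses as follows:

```python
def bayesian_comparison_table(tau):
    """Comparison table P for nu = 2: the prior masses tau_1, ..., tau_l
    fill the last column of an (l+1) x (l+1) matrix; all other entries,
    including the corner p_{l+1,l+1}, are zero."""
    l = len(tau)
    P = [[0.0] * (l + 1) for _ in range(l + 1)]
    for i, t in enumerate(tau):
        P[i][l] = t
    return P
```

Row $i \leq \ell$ then corresponds to comparing $\eta_1(\cdot, \lambda_i)$ against $\eta_2$, with weight $\tau_i$.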
Exponential models of the form \eqref{example1} are widely used in applications. For example, the model $\eta_2$ is frequently fitted in agricultural sciences, where it is called Mitscherlich's growth law and used for describing the relation between the yield of a crop and the amount of fertilizer. In fisheries research this model is called the Bertalanffy growth curve and used to describe the length of a fish as a function of its age [see \cite{ratkowsky1990}]. Optimal designs for exponential regression models have been determined by \cite{hancha2003} among others. In the following we will demonstrate the performance of the new algorithm in calculating Bayesian $T$-optimal discriminating designs for the two exponential models. Note that it only makes sense to consider the Bayesian version of $T_{12}$, because the model $\eta_2$ is obtained as a special case of $\eta_1$ for $\theta_{1,4}=1$. It is easy to see that the locally $T$-optimal discriminating designs do not depend on the linear parameters of $\eta_1$ and we have chosen $\bar \theta_{1,1}=2$ and $\bar \theta_{1,2}=1 $ for these parameters. For the parameters $\bar \theta_{1,3}$ and $\bar \theta_{1,4}$ we considered independent prior distributions supported at the points \begin{equation} \label{sup1} \mu_j + \frac {\sigma(i-3)}{2} \qquad \qquad i=1,\dots,5 \, ; \quad j=3,4 \, , \end{equation} where $\mu_3 = 0.8, \ \mu_4 = 1.5$ and different values of the variance $\sigma^2$ are investigated. The corresponding weights at these points are proportional (in both cases) to \begin{equation} \label{wei1} \frac {1}{\sqrt{2 \pi \sigma^2}} \exp \Bigl( - \frac {(i-3)^2}{8} \Bigr); \qquad \qquad i=1,\dots, 5\, . \end{equation} We note that this yields $25$ terms in the Bayesian optimality criterion \eqref{2.7}. Bayesian $T$-optimal discriminating designs are depicted in Table \ref{tab1} for various values of $\sigma^2$, where an equidistant design at $11$ points $0,1,\dots,10$ was used as starting design. 
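A short Python sketch (for illustration only; our implementation is in $R$) tabulating the $25$ prior points and masses of \eqref{sup1} and \eqref{wei1}; note that the normalising constant $1/\sqrt{2\pi\sigma^2}$ cancels after normalisation, and $\sigma = 0.5$ is used here as a hypothetical default:

```python
import math

def product_prior(mu3=0.8, mu4=1.5, sigma=0.5):
    """Support points and masses of the independent 5-point priors for
    theta_{1,3} and theta_{1,4}; the product yields the 25 terms of the
    Bayesian criterion."""
    w = [math.exp(-((i - 3) ** 2) / 8.0) for i in range(1, 6)]
    total = sum(w)
    w = [v / total for v in w]           # normalise (the constant cancels)
    prior = []
    for i in range(1, 6):
        for j in range(1, 6):
            point = (mu3 + sigma * (i - 3) / 2.0,
                     mu4 + sigma * (j - 3) / 2.0)
            prior.append((point, w[i - 1] * w[j - 1]))
    return prior
```

The central point $(\mu_3, \mu_4)$ receives the largest mass, in line with the Gaussian-shaped weights.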
\begin{table}[h] \begin{center} \begin{tabular}{||c||c||c||c||} \hline \hline $\sigma^2$ & Optimal design & $\sigma^2$ & Optimal design\\ \hline \hline 0.0 & $\begin{matrix} 0.000 & 0.441 & 1.952 & 10.000 \\ 0.209 & 0.385 & 0.291 & 0.115 \end{matrix}$ & 0.285 & $\begin{matrix} 0.000 & 0.453 & 1.758 & 10.000 \\ 0.207 & 0.396 & 0.292 & 0.105 \end{matrix}$\\ \hline 0.1 & $\begin{matrix} 0.000 & 0.452 & 1.877 & 10.000 \\ 0.209 & 0.391 & 0.290 & 0.110 \end{matrix}$ & 0.3 & $\begin{matrix} 0.000 & 0.452 & 1.747 & 4.951 & 10.000 \\ 0.207 & 0.396 & 0.292 & 0.003 & 0.102 \end{matrix}$\\ \hline 0.2 & $\begin{matrix} 0.000 & 0.455 & 1.811 & 10.000 \\ 0.208 & 0.394 & 0.291 & 0.107 \end{matrix}$ & 0.4 & $\begin{matrix} 0.000 & 0.446 & 1.651 & 4.699 & 10.000 \\ 0.200 & 0.384 & 0.290 & 0.060 & 0.066 \end{matrix}$\\ \hline \hline \end{tabular} \end{center} \caption{\it \label{tab1} Bayesian $T$-optimal discriminating designs for the two exponential models in \eqref{example1}. The support points and weights of the independent prior distributions for the parameters $\overline{\theta}_{1,3}$ and $\overline{\theta}_{1,4}$ are given by \eqref{sup1} and \eqref{wei1}, respectively.} \end{table} A typical determination of the optimal design takes between $0.03$ seconds (in the case $\sigma^2=0$) and $1.4$ seconds (in the case $\sigma^2=0.4$) of CPU time on a standard PC (with an Intel Core i7-4790K processor). The algorithm using the procedure described in Section \ref{sec32} in Step 2 requires between $0.11$ seconds (in the case $\sigma^2=0$) and $11.6$ seconds (in the case $\sigma^2=0.4$) of CPU time. We observe that for small values of $\sigma^2$ the optimal designs are supported at $4$ points, while for $\sigma^2 \geq 0.285$ the Bayesian $T$-optimal discriminating design is supported at $5$ points. The corresponding function $\Psi$ from the equivalence Theorem \ref{thm1} is shown in Figure \ref{fig1}. 
\begin{figure}[h] \centering {\includegraphics[width=45mm]{sigma_0_new.pdf}} {\includegraphics[width=45mm]{sigma_01_new.pdf}} {\includegraphics[width=45mm]{sigma_02_new.pdf}} \\ $\sigma^2 = 0$~~~~~~~~~~~~~~~~~~~~~~~~~$\sigma^2 = 0.1$ ~~~~~~~~~~~~~~~~~~~~ $\sigma^2 = 0.2$ \\ {\includegraphics[width=45mm]{sigma_0285_new.pdf}} {\includegraphics[width=45mm]{sigma_03_new.pdf}} {\includegraphics[width=45mm]{sigma_04_new.pdf}} $\sigma^2 = 0.285$~~~~~~~~~~~~~~~~~~~~~~~~$\sigma^2 = 0.3$ ~~~~~~~~~~~~~~~~~~~ $\sigma^2 = 0.4$ \caption{\it \label{fig1} The function on the left hand side of inequality \eqref{equiv} in the equivalence Theorem \ref{thm1} for the numerically calculated Bayesian $T$-optimal discriminating designs. The competing regression models are given in \eqref{example1}.} \end{figure} \subsection{Bayesian $T$-optimal discrimination designs for dose finding studies} \label{sec52} Non-linear regression models also have numerous applications in dose response studies, where they are used to describe the dose response relationship. In these and similar situations the first step of the data analysis consists of the identification of an appropriate model, and the design of the experiment should take this task into account. For example, for modeling the dose response relationship of a Phase II clinical trial, \cite{pinbrebra2006} proposed the following plausible models \begin{align} &\eta_1(x, \theta_1) = \theta_{1,1} + \theta_{1,2} x ; \nonumber \\ &\eta_2(x, \theta_2) = \theta_{2,1} + \theta_{2,2} x (\theta_{2,3} - x) ; \label{example2} \\ &\eta_3(x, \theta_3) = \theta_{3,1} + \theta_{3,2} x / (\theta_{3,3} + x) ; \nonumber\\ &\eta_4(x, \theta_4) = \theta_{4,1} + \theta_{4,2} / (1 + \exp(\theta_{4,3} - x) / \theta_{4,4}) ; \nonumber \end{align} where the design space (dose range) is given by the interval $\mathcal{X}=[0,500]$. 
In this reference some prior information regarding the parameters for the models is also provided, that is \begin{align*} \overline{\theta}_1= (60, 0.56), \; \overline{\theta}_2= (60, 7/2250, 600), \; \overline{\theta}_3 = (60, 294, 25), \; \ \overline{\theta}_4= (49.62, 290.51, 150, 45.51). \end{align*} Locally optimal discrimination designs for the models in \eqref{example2} have been determined by \cite{BraessDette2013} in the case $p_{i,j}=1/6$, $(1\leq j<i \leq 4 )$, which means that the resulting local $T$-optimality criterion \eqref{2.4} consists of $6$ model comparisons. \\ We begin with an illustration of the new methodology developed in Section \ref{sec3} by calculating again the locally $T$-optimal discriminating design for this scenario. The proposed algorithm needs only four iterations for the calculation of a design, say $\xi_{4}$, which has at least efficiency \begin{align*} \mathrm{Eff}_{T_{\mathrm{P}}}( \xi_{ 4 } ) = \frac{T_{\mathrm{P}}(\xi_{4} )}{\sup_{\zeta} T_{\mathrm{P}}(\zeta)} ~ \geq ~0.999. \end{align*} The function $\Psi (\cdot , \xi_{ 1 } )$ after the first iteration is displayed in Figure \ref{figrefine}, where we used the same starting design as in \cite{BraessDette2013}. The support points of $\xi_{ 1 }$ are shown as circles, and we can see that the function $\Psi(x, \xi_{1})$ attains the same value, represented by the dotted line, at all support points. We finally note that the algorithm proposed in \cite{BraessDette2013} needs $9$ iterations to find a design with the same efficiency. \begin{figure}[h] \centering \includegraphics[width=45mm]{first_iter.pdf} \caption{\it \label{figrefine} The function $\Psi (\cdot , \xi_{1} )$ after the first iteration of Algorithm \ref{algorithm:new} } \end{figure} We now investigate Bayesian $T$-optimal discriminating designs for a similar situation. 
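For illustration, the four candidate models with the stated preliminary parameter values can be coded as follows (Python sketch; for $\eta_4$ we use the standard logistic parameterisation $\theta_{4,1} + \theta_{4,2}/(1 + \exp((\theta_{4,3} - x)/\theta_{4,4}))$, which we assume is the intended reading of the displayed formula):

```python
import math

# Candidate dose-response models with the preliminary parameter values above.
def eta1(x, th=(60.0, 0.56)):
    return th[0] + th[1] * x                      # linear

def eta2(x, th=(60.0, 7.0 / 2250.0, 600.0)):
    return th[0] + th[1] * x * (th[2] - x)        # quadratic

def eta3(x, th=(60.0, 294.0, 25.0)):
    return th[0] + th[1] * x / (th[2] + x)        # Emax

def eta4(x, th=(49.62, 290.51, 150.0, 45.51)):
    return th[0] + th[1] / (1.0 + math.exp((th[2] - x) / th[3]))  # logistic
```

At $x = 0$ the first three models all take the placebo value $60$, which is why discrimination relies on the interior of the dose range.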
For the sake of a transparent representation we only specify a prior distribution of the four-dimensional parameter $\overline{\theta}_4$ for the calculation of the discriminating design, while $\overline{\theta}_2$ and $\overline{\theta}_3$ are considered as fixed. In order to obtain a design which is robust with respect to model misspecification we chose a discrete prior with $81$ points in $\mathbb{R}^4$. More precisely, the support points of the prior distribution are given by the points \begin{align} \label {prior1} \big\{\mu_{e_1,e_2,e_3,e_4}~|~e_1,e_2,e_3,e_4 \in \{-1,0,1\}\big\}, \end{align} where \begin{align*} \mu_{e_1,e_2,e_3,e_4} & = (\mu_1 + e_{1} \sigma, \mu_2 + e_{2} \sigma, \mu_3 + e_{3} \sigma, \mu_4 + e_{4} \sigma),\\ \mu &=(\mu_1,\mu_2,\mu_3,\mu_4)= (49.62, 290.51, 150, 45.51), \end{align*} and different values for $\sigma^2$ are considered. The weights at the corresponding points are proportional (normalized such that their sum is $1$) to \begin{align} \label {prior2} \frac{1}{(2\pi\sigma ^2)^2} \exp \Big( - \frac{ || \mu_{e_1,e_2,e_3,e_4} - \mu ||_2^2}{2\sigma^2} \Big)~,~~e_1,e_2,e_3,e_4 \in \{-1,0,1\}, \end{align} where $||\cdot ||_2$ denotes the Euclidean norm. The resulting Bayesian optimality criterion \eqref{2.6} consists of $246$ model comparisons. In this case the method of \cite{BraessDette2013} fails to find the Bayesian $T$-optimal discriminating design. Bayesian $T$-optimal discriminating designs have been calculated by the new Algorithm \ref{algorithm:new} for various values of $\sigma^2$ and the results are shown in Table \ref{tab2}. A typical determination of the optimal design takes between $0.09$ seconds (in the case $\sigma^2=0$) and $7.8$ seconds (in the case $\sigma^2=37^2$) CPU time on a standard PC. The algorithm using the procedure described in Section \ref{sec32} in Step 2 requires between $0.75$ seconds (in the case $\sigma^2=0$) and $37.1$ seconds (in the case $\sigma^2=37^2$) CPU time. 
For small values of $\sigma^2$ the Bayesian $T$-optimal discriminating designs are supported at $4$ points including the boundary of the design space. The smaller (larger) interior support point is increasing (decreasing) if $\sigma^2$ is increasing. For larger values of $\sigma^2$ the number of support points of the optimal design even increases. For example, if $\sigma^2=35^2$ or $37^2$ the Bayesian $T$-optimal discriminating design has $5$ or $6$ points (including the boundary points of the design space). These observations are in line with the theoretical findings of \cite{detbra2007} who showed that the number of support points of Bayesian $D$-optimal designs can become arbitrarily large with an increasing variability in the prior distribution. The corresponding functions from the equivalence Theorem \ref{thm1} are shown in Figure \ref{fig2}. \begin{table}[h] \begin{center} \begin{tabular}{||c||c||c||c||} \hline \hline $\sigma^2$ & optimal design & $\sigma^2$ & optimal design\\ \hline \hline 0 & $\begin{matrix} 0.000 & 78.783 & 241.036 & 500.0 \\ 0.255 & 0.213 & 0.357 & 0.175 \end{matrix}$ & $33^2$ & $\begin{matrix} 0.000 & 92.692 & 222.735 & 500.0 \\ 0.260 & 0.240 & 0.344 & 0.156 \end{matrix}$\\ \hline $20^2$ & $\begin{matrix} 0.000 & 84.467 & 234.134 & 500.0 \\ 0.257 & 0.225 & 0.351 & 0.167 \end{matrix}$ & $35^2$ & $\begin{matrix} 0.000 & 91.743 & 129.322 & 221.118 & 500.0 \\ 0.260 & 0.214 & 0.036 & 0.336 & 0.154 \end{matrix}$\\ \hline $30^2$ & $\begin{matrix} 0.000 & 91.029 & 225.713 & 500.0 \\ 0.259 & 0.237 & 0.345 & 0.159 \end{matrix}$ & $37^2$ & $\begin{matrix} 0.000 & 89.881 & 129.590 & 170.306 & 220.191 & 500.0 \\ 0.260 & 0.170 & 0.091 & 0.019 & 0.310 & 0.150 \end{matrix}$\\ \hline \hline \end{tabular} \caption{\it Bayesian $T$-optimal discriminating designs for the models in \eqref{example2}. 
The weights in the criterion \eqref{2.5} are given by $p_{i,j}=1/6$; $1 \leq i < j \leq 4$ and the support and masses of the prior distribution are defined by \eqref{prior1} and \eqref{prior2}, respectively. \label{tab2}} \end{center} \end{table} \begin{figure}[h] \centering {\includegraphics[width=45mm]{4mod_sigma_0.pdf}} {\includegraphics[width=45mm]{4mod_sigma_20.pdf}} {\includegraphics[width=45mm]{4mod_sigma_30.pdf}} \\ $\sigma^2 = 0$~~~~~~~~~~~~~~~~~~~~~~~~$\sigma^2 = 20^2$ ~~~~~~~~~~~~~~~~~~~ $\sigma^2 = 30^2$ \\ {\includegraphics[width=45mm]{4mod_sigma_33.pdf}} {\includegraphics[width=45mm]{4mod_sigma_35.pdf}} {\includegraphics[width=45mm]{4mod_sigma_37.pdf}} $\sigma^2 = 33^2$~~~~~~~~~~~~~~~~~~~~~~~~$\sigma^2 = 35^2$ ~~~~~~~~~~~~~~~~~~~ $\sigma^2 =37^2$ \\ \caption{\it \label{fig2} The function on the left-hand side of inequality \eqref{equiv} in the equivalence Theorem \ref{thm1} for the numerically calculated Bayesian $T$-optimal discriminating designs. The competing regression models are given in \eqref{example2}.} \end{figure} \bigskip {\bf Acknowledgements.} Parts of this work were done during a visit of the second author at the Department of Mathematics, Ruhr-Universit\"at Bochum, Germany. The authors would like to thank M. Stein who typed this manuscript with considerable technical expertise. The work of H. Dette and V. Melas was supported by the Deutsche Forschungsgemeinschaft (SFB 823: Statistik nichtlinearer dynamischer Prozesse, Teilprojekt C2). The research of H. Dette reported in this publication was also partially supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R01GM107639. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. V. Melas was also partially supported by the Russian Foundation for Basic Research (Project 12.01.00747a). 
\section{Proofs} \label{sec6} \subsection{An auxiliary result} \label{sec6a} \begin{lemma} \label{lemma1} Let $\varphi(v,y)$ be a twice continuously differentiable function of two variables $v\in \mathcal{V} \subset \mathbb{R}^k$ and $y \in \mathcal{Y}, $ where $ \mathcal{Y} $ is a compact set. Denote by ${ \mathcal{Y}^* }$ the set of all points where the minimum $\min_{y \in \mathcal{Y} } \varphi(v,y)$ is attained, and let $q \in \mathbb{R}^k$ be an arbitrary direction. Then \begin{align} \frac{\partial \min_{y \in \mathcal{Y} } \varphi(v,y)}{\partial q}= \min_{y \in{ \mathcal{Y} ^* }}\frac{\partial\varphi(v,y)}{\partial q}. \label{formula_diff} \end{align} \end{lemma} \begin{proof} See \cite{Pshenichny1971}, p. 75. \end{proof} \subsection{Proofs} \label{sec6b} {\bf Proof of Theorem \ref{thm2.2}.} Assume without loss of generality that $p_{i,j} >0$ for all $i,j=1,\dots,\nu$. Let $\xi^* $ denote any locally $T$-optimal discriminating design and let $\theta=(\theta_{i,j})_{i,j=1,\dots,\nu}$ denote the vector consisting of all $\theta_{i,j} \in \Theta_{i,j}^*(\xi^*) $. We introduce the function \begin{align} \label{varphi} \varphi(x, \theta) & = \sum_{i,j = 1}^{\nu} p_{i,j} \big[ \eta_i(x,\overline{\theta}_{i}) - \eta_j(x,{\theta}_{i,j}) \big]^2 \end{align} and consider the product measure \begin{align} \label{prodmeas} \mu(d\theta)= \prod_{i,j=1,\ldots,\nu} \mu_{i,j}(d\theta_{i,j}), \end{align} where $\mu_{i,j}$ are measures on the sets ${\Theta}_{i,j}^*(\xi^*)$ defined by (\ref{thetamin}). Similarly, we define $\mu^* (d\theta)= \prod_{i,j=1,\ldots,\nu} \mu_{i,j}^* (d\theta_{i,j})$ as the product measure of the measures $\mu^*_{i,j}$ in Theorem \ref{thm1}. 
From this result we have \begin{eqnarray*} T_{\mathrm{P}}(\xi^*) &\geq& \sup_\zeta \int_\mathcal{X} \int_{\Theta^*(\xi^*)} \varphi(x, \theta) \mu^* (d\theta) \zeta(dx) \\ & \geq &\inf_\mu \sup_\zeta \int_\mathcal{X} \int_{\Theta^*(\xi^*)} \varphi(x, \theta) \mu (d\theta) \zeta(dx) = \sup_\zeta \inf_\mu \int_\mathcal{X} \int_{\Theta^*(\xi^*)} \varphi(x, \theta) \mu (d\theta) \zeta(dx), \end{eqnarray*} where the $\sup$ and $\inf$ are calculated in the class of designs $\zeta$ on $\mathcal{X}$ and product measures $\mu$ on $\Theta^*(\xi^*)=\otimes_{i,j=1}^{\nu} \Theta_{i,j}^*(\xi^*) $, respectively. It now follows that the characterizing inequality (\ref{equiv}) in Theorem \ref{thm1} is equivalent to the inequality $$ \sup_{\zeta} Q(\zeta,\xi^*) \leq T_{\mathrm{P}}(\xi^*). $$ Consequently, any non-optimal design must satisfy the opposite inequality. \hfill $\Box$ \bigskip \textbf{Proof of Corollary \ref{thm1a}:} Let $\xi$ denote a design such that $T_\mathrm{P}(\xi)>0$ and recall the definition of the set $\Theta_{ij}^*(\xi)$ in \eqref{thetamin}. For a vector $\theta= (\theta_{i,j})_{ i,j=1,\ldots,\nu} \in \Theta^*(\xi)= \otimes_{ i,j=1,\ldots,\nu} \Theta_{i,j}^*(\xi)$ we consider the function $\varphi$ defined in \eqref{varphi} and product measures $\mu(d\theta)$ of the form \eqref{prodmeas} on $\Theta^*(\xi)$. 
Now the well-known minimax theorem and the definition of the function $Q$ in \eqref{qfct} yield \begin{eqnarray*} \max_{x \in \mathcal{X}}\Psi(x,\xi)&=&\inf_\mu \max_{x \in \mathcal{X}} \int_{\Theta^*(\xi)} \varphi(x, \theta) \mu (d\theta) = \inf_\mu \sup_\zeta \int_{\mathcal X} \int_{\Theta^*(\xi)} \varphi(x, \theta) \mu (d\theta) \zeta (d x) \\ &=&\sup_\zeta \inf_\mu \int_{\mathcal X} \int_{\Theta^*(\xi)} \varphi(x, \theta) \mu (d\theta) \zeta (dx) = \sup_\zeta \inf_{\theta \in \Theta^*(\xi) }\int_{\mathcal X} \varphi(x, \theta) \zeta (dx) = \sup_\zeta Q(\zeta ,\xi), \end{eqnarray*} where the infimum is calculated with respect to all measures $\mu$ of the form \eqref{prodmeas} and the supremum is calculated with respect to all experimental designs $\zeta$ on $\mathcal{X}$. Note that $\mathcal X$ is compact by assumption and it can be checked that the set $\Theta^*(\xi)$ is also compact as a closed subset of a compact set. Consequently all suprema and infima are achieved and there exists a design $\zeta^*$ supported at the set of local maxima of the function $\Psi(x,\xi)$, such that $$ Q(\zeta^*,\xi)=\sup_\zeta Q(\zeta,\xi) = \max_{x \in \mathcal{X}} \Psi(x,\xi). $$ The assertion of Corollary \ref{thm1a} now follows from Theorem \ref{thm2.2}. \hfill $\Box$ \medskip \textbf{Proof of Theorem \ref{thm2}:} Obviously, the inequality \begin{align*} T_{\mathrm{P}}(\{\mathcal{S}_{[s]},\omega_{[s]}\}) \leq T_{\mathrm{P}}(\{\mathcal{S}_{[s+1]},\omega_{[s+1]}\}) \end{align*} holds for all $s$, as the optimization with respect to $\omega$ occurs on a larger set. Moreover, the sequence $T_{\mathrm{P}}(\xi_s)$ is bounded from above by $T_{\mathrm{P}}(\xi^*)$ and has a limit, which is denoted by $T^{**}_{\mathrm{P}}$. Consequently, there exists a subsequence of designs, say $ \xi_{s_j},j=1,2, \ldots,$ converging to a design, say $\xi^{**}$. Note that $T_\mathrm{P}$ is upper semi-continuous as the infimum of continuous functions, which implies $T_{\mathrm{P}}(\xi^{**})= T^{**}_{\mathrm{P}}$. 
Now, assume that $T_{\mathrm{P}}(\xi^{**}) < T_{\mathrm{P}}(\xi^{*})$. Then $\xi^{**}$ is not locally $T$-optimal and by Theorem \ref{thm2.2} there exists a constant $\delta > 0$ such that \begin{align*} {\sup_{\zeta} Q(\zeta,\xi^{**}) - T_{\mathrm{P}}(\xi^{**}) = 2 \delta}, \end{align*} where the function $Q$ is defined in \eqref{qfct}. Therefore for sufficiently large $j$, say $j \ge N$, we obtain (using again the lower semi-continuity of $ \sup_{\zeta} Q(\zeta ,\xi)$) that \begin{align*} \sup_{\zeta} Q(\zeta,{\xi}_{s_j}) - T_{\mathrm{P}}(\xi_{s_j})> \delta, \end{align*} whenever $j\ge N$. {Note that by construction the sequence $(T_{\mathrm{P}}(\xi_s))_{s \in \mathbb{N}}$ is increasing and therefore \begin{equation}\label{ungl} T_{\mathrm{P}}({\xi}_{s_{j+1}}) - T_{\mathrm{P}}(\xi_{s_{j}}) \geq T_{\mathrm{P}}(\xi_{s_{j} +1})- T_{\mathrm{P}}(\xi_{s_{j}}). \end{equation} In order to estimate the right hand side we consider for $j \ge N$ and $\alpha \in [0,1]$ the design \begin{align*} \tilde{\xi}_{s_{j+1}}(\alpha) = ( 1 - \alpha ) {\xi}_{s_j} + \alpha \zeta_{j}, \end{align*} where $\zeta_{j}$ is the measure for which the function $ Q( \zeta,{\xi}_{s_j} ) $ attains its maximal value in the class of all experimental designs supported at the local maxima of the function $\Psi(x,{\xi}_{s_j})$, and define \begin{align*} \alpha_{s_{j+1}} = \argmax_{0 \leq \alpha \leq 1} T_{\mathrm{P}}(\tilde{\xi}_{s_{j+1}}(\alpha)). \end{align*} By construction, ${\xi}_{s_{j}+1}$ is the best design supported at $\mbox{supp}({\xi}_{s_j}) \cup \mbox{supp}(\zeta_j)$, and \eqref{ungl} yields \begin{equation} \label{ungl1} T_{\mathrm{P}}({\xi}_{s_{j+1}}) \geq T_{\mathrm{P}}({\xi}_{s_{j}+1}) \geq T_{\mathrm{P}}(\tilde {\xi}_{s_{j+1}}(\alpha_{s_{j+1}})). 
\end{equation} We introduce the notation $ h(j,\alpha)= T_{\mathrm{P}}(\tilde {\xi}_{s_{j}}(\alpha)) $, and note that \begin{align*} \frac{\partial T_{\mathrm{P}}(\tilde {\xi}_{s_{j+1}}(\alpha))}{\partial\alpha}\Big|_{\alpha=0}= Q(\zeta_j, {\xi}_{s_j}) -T_{\mathrm{P}} ({\xi}_{s_j}) = \sup_\zeta Q (\zeta, {\xi}_{s_j}) -T_{\mathrm{P}}({\xi}_{s_j})>\delta. \end{align*} A Taylor expansion gives \begin{align*} h(j+1,\alpha_{s_{j+1}})-h(j+1,0) = & \max_{\alpha \in [0,1] } \big [ T_{\mathrm{P}}( \tilde{\xi}_{s_{j+1}}(\alpha)) - T_{\mathrm{P}}( \tilde{\xi}_{s_{j+1}}(0 ))\big] \\ \geq & \max_{\alpha \in [0,1] }\Big[\alpha \frac{\partial T_{\mathrm{P}}( \tilde{\xi}_{s_{j+1}}(\alpha))}{\partial\alpha}\Big|_{\alpha=0} - \frac{1}{2}\alpha^2K \Big] > \max_{\alpha \in [0,1] }\Big[\alpha \delta - \frac{1}{2}\alpha^2K\Big] = \frac { \delta^2}{2K}, \end{align*} where $K$ is an absolute upper bound of the second derivative. Therefore it follows from \eqref{ungl1} that \begin{align*} T_{\mathrm{P}}(\xi_{s_{j+1}}) - T_{\mathrm{P}}(\xi_{s_j}) &\geq T_{\mathrm{P}}(\xi_{s_j+1}) - T_{\mathrm{P}}(\xi_{s_j}) \\ &\geq T_{\mathrm{P}}(\tilde\xi_{s_{j+1}}(\alpha_{s_{j+1}})) - T_{\mathrm{P}}(\xi_{s_j}) = h(j+1,\alpha_{s_{j+1}}) - h(j+1,0) \geq \frac{\delta^2}{2K}. \end{align*} This yields, for $L > N+1$, \begin{align*} T_{\mathrm{P}}(\xi_{s_{L}}) - T_{\mathrm{P}}(\xi_{s_N}) = \sum_{j = N}^{L - 1} \big [ T_{\mathrm{P}}(\xi_{s_{j+1}}) - T_{\mathrm{P}}(\xi_{s_j}) \big ] \geq \left[ L - N \right] \frac{\delta^2}{2K}. \end{align*} The left hand side of this inequality converges to the finite value $T_{\mathrm{P}}(\xi^{**}) - T_{\mathrm{P}}(\xi_{s_N})$ as $L \to \infty$, while the right hand side converges to infinity. Therefore we obtain a contradiction to our assumption $T_{\mathrm{P}}(\xi^{**}) < T_{\mathrm{P}}(\xi^{*})$, which proves the assertion of Theorem \ref{thm2}. \bigskip \textbf{Proof of Lemma \ref{lem2}:} Fix $t \in \{ 1,\dots,n \}$ and note that $\omega_t=1 - \sum^n_{\ell =1, \ell \neq t} \omega_\ell$. 
Under Assumptions \ref{assum1} and \ref{assum2} we obtain by formula (\ref{formula_diff}) \begin{align*} \frac{\partial g(\omega)}{\partial \omega_k} &= \sum_{i,j = 1}^{\nu} p_{i,j} \big[ \eta_i(x_k,\overline{\theta}_{i}) - \eta_j(x_k,\widehat{\theta}_{i,j}(\omega)) \big]^2 - \sum_{i,j = 1}^{\nu} p_{i,j} \big[ \eta_i(x_t,\overline{\theta}_{i}) - \eta_j(x_t,\widehat{\theta}_{i,j}(\omega)) \big]^2 . \end{align*} The condition $ \frac{\partial g(\omega)}{\partial \omega_k} = 0, \; k = 1,\dots,n, \; k \neq t $ is a necessary condition for weight optimality, and consequently it follows from the definition of the function $ \Psi(x,\overline{\xi}_{s+1}) $ that this function attains the same value at all support points of the design $\overline\xi_{s+1}$. \bigskip \textbf{Proof of Theorem \ref{conv_theorem}:} The proof is similar to the proof of Theorem \ref {thm2}. Denote \begin{align*} h(\gamma,\alpha)= g(\overline{\omega}_{(\gamma)}(\alpha)), \end{align*} where the vector $\overline{\omega}_{(\gamma)} (\alpha)$ is calculated in the $\gamma$th iteration. Since the sequence $ g(\omega_{(\gamma)})$ is bounded and increasing (by construction) it converges to some limit, say $ g^{**}$. Consequently there exists a subsequence of weight vectors, say $\overline \omega_{(\gamma_j)}, j=1,2, \ldots,$ converging to a vector, say $\overline \omega^{**}$. Note that $ g$ is upper semi-continuous as the infimum of continuous functions, which implies $ g(\overline{\omega}^{**})= g^{**}$. Now, assume that $ g(\overline{\omega}^{**}) < g(\omega^{*})$; then it follows by an application of Theorem \ref{thm1} with $\mathcal{X}=\{x_1,\ldots , x_n\}$ that there exists a constant $\delta > 0$ such that \begin{align*} \frac{\partial g(\overline{\omega}(\alpha))}{\partial \alpha} \Bigl|_{\alpha=0} = 2\delta > 0 . 
\end{align*} Here the vector $\overline{\omega}(\alpha)$ is defined in the same way as $\overline{\omega}_{(\gamma )} (\alpha)$, where $\omega_{(\gamma)}$ is replaced by $\omega=\omega^{**}$. Therefore for sufficiently large $j$, say $j \ge N$, we obtain (by continuity) that $ \frac{\partial g(\overline{\omega}_{(\gamma_j)}(\alpha))}{\partial \alpha} \bigl|_{\alpha=0} > \delta , $ and a Taylor expansion yields \begin{align*} h(\gamma_{j+1},\alpha^*_{(\gamma_{j+1})})-h(\gamma_j,\alpha^*_{(\gamma_j)})\geq \max_\alpha \Big( \alpha \frac{\partial g(\overline\omega(\alpha))}{\partial \alpha}\Big|_{\alpha=0} - \frac{1}{2}\alpha^2K \Big) \geq \frac{\delta^2}{2K}, \end{align*} where $\alpha^*_{(\gamma_j)}$ is the value $\alpha^*$ from the $\gamma_j$th iteration and $K$ is an absolute upper bound of the second derivative. Using the same arguments as in the proof of Theorem \ref {thm2} we obtain a contradiction, which proves the assertion of the theorem. \bigskip \setstretch{1.25} \setlength{\bibsep}{1pt} \begin{small} \footnotesize \bibliographystyle{apalike} \itemsep=0.5pt
require 'spec_helper'

describe Horatio::Detector::NodePackage do
  before do
    FakeFS.activate!
    create_package_json
    @detector = Horatio::Detector::NodePackage.new
  end

  it "correctly implements the name method" do
    @detector.name.must_equal 'less'
  end

  it "correctly implements the version method" do
    @detector.version.must_equal '1.7.0'
  end

  after { FakeFS.deactivate! }
end
Martin Shkreli sued by artist over Wu-Tang Clan album NEW YORK (AP) — A Long Island artist sued ex-pharmaceutical CEO Martin Shkreli and others Tuesday over the use of his art in a Wu-Tang Clan album, saying he never expected portraits he posted on a fan blog two years ago to be used without his permission. Artist Jason Koza said in the Manhattan federal court copyright infringement lawsuit that his portraits of members of the New York-based hip-hop group were used without authorization on an album Shkreli bought for $2 million. The lawsuit comes after Shkreli's recent not guilty plea in Brooklyn federal court to securities fraud charges claiming he cheated investors in companies he created. Shkreli, 32, became widely known last year after a drug company he founded, Turing Pharmaceuticals, spent $55 million for the U.S. rights to sell a life-saving medicine called Daraprim and then raised the price from $13.50 to $750 per pill. In recent testimony before a congressional committee investigating the price of drugs, he remained silent, citing the Fifth Amendment on the advice of his attorney, Benjamin Brafman. Brafman declined to comment on the lawsuit. Koza, of Copiague, New York, said he learned that some of his portraits were in a 174-page book included with the album titled "Once Upon a Time in Shaolin" when he saw a news article after the sole copy of the album was reportedly sold to Shkreli. The lawsuit said Shkreli is prohibited from distributing further copies of the album commercially for 88 years. He sought unspecified damages from Shkreli, a Wu-Tang leader, a music producer and the album's auctioneer. Other defendants did not immediately comment. The lawsuit said that Koza, a musician who works for the Town of Babylon Department of Public Works, primarily creates ink-on-paper portraits of individuals such as David Bowie, Abraham Lincoln, Paul McCartney and Jim Morrison. 
His portraits of Wu-Tang Clan members, created in 2013 and 2014, were comic-book-style depictions of the artists with titles such as "Ghostface Killa-Koza," "Inspecta Deck-Koza" and "U-God-Koza," the lawsuit said. It noted that he deposited applications to register all nine of his Wu-Tang Clan portraits with the U.S. Copyright Office on Feb. 1.
Q: How to create a Save/Load function on Scratch? I'm trying to make a game on Scratch that will use a feature to generate a special code, and when that code is input into a certain area it will load the stats that were there when the code was generated. I've run into a problem, however: I don't know how to make it, and I couldn't find a clear-cut answer for how to make it. I would prefer that the solution be: *Able to save information for as long as needed (from 1 second to however long, until it's input again). *Doesn't take too many blocks to make, so that the project won't take forever to load it. Of course I'm willing to take any solution in order to get my game up and running, those are just preferences. A: You can put all of the programs in a custom block with "Run without screen refresh" on so that the program runs instantly. If you save the stats using variables, you could combine those variable values into one string divided by /s. i.e. join([highscore]) (join("/") (join([kills]) (/)) NOTE: Don't add any "/" in your stats, you can probably guess why. Now "bear" (pun) with me, this is going to take a while to read. Then you need the variables: [read] for reading the inputted code [input] for storing the numbers Then you could make another function that reads the code like so: letter ([read]) of (code) and stores that information to the [input] variable like this: set [input] to (letter ([read]) of (code)). Then change [read] by (1) so the function can read the next character of the code. Once letter ([read]) of (code) equals "/", this tells the program to set [*stat variable*] to (input) (in our example, this would be [highscore] since it was the first variable we saved) and set [input] to (0), and repeat again until all of the stats variables are filled (in this case, it repeats 2 times because we saved two variables: [highscore] and [kills]). This is the least amount of code that it takes. Jumbling it up takes more code. 
I will later edit this answer with a screenshot showcasing what I just said, hopefully clearing up the mess of words above.

A: The technique you mentioned is used in many Scratch games, but there are two options for you when making the save/load system. You can do it the simpler way, which makes the code SUPER long (not joking), or the way most Scratchers use: encoding the data into as short a string as possible so that it's easy to transfer. If you want to try the second way, have a look at griffpatch's video on the Mario platformer remake, where he used an encoding system to save levels: https://www.youtube.com/watch?v=IRtlrBnX-dY. The tip is to encode your data (maybe score / item names / progress) into numbers and letters, for example converting repeated letters to a shorter string which the game can still decode and read without errors. If you are worried it takes too long to load, I am pretty sure it won't be a problem unless you really save a big load of data; the common compression method used by everyone works pretty well. If you want more data stored you may have to think of some other method; there is no single approach that works best for all data. Good luck.
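The "/"-separated save-code idea from the first answer can be sketched outside Scratch as well; here is a minimal Python version of the same encode/decode loop (the stat names and values are illustrative, not from the question):

```python
def make_save_code(stats):
    """Join stat values into one "/"-separated save code,
    e.g. {"highscore": 120, "kills": 7} -> "120/7"."""
    return "/".join(str(v) for v in stats.values())

def load_save_code(code, names):
    """Walk the code character by character, like the [read]/[input]
    loop described above, rebuilding each stat in order."""
    stats, buffer = {}, ""
    names = list(names)
    for ch in code + "/":       # a trailing "/" flushes the last value
        if ch == "/":
            stats[names[len(stats)]] = int(buffer)
            buffer = ""
        else:
            buffer += ch        # accumulate the digits of the current stat
    return stats

code = make_save_code({"highscore": 120, "kills": 7})
print(code)                                     # 120/7
print(load_save_code(code, ["highscore", "kills"]))
```

The character index plays the role of the [read] variable and `buffer` plays the role of [input]; as in the answer, the stat values themselves must not contain "/".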
Q: Require.js version individual files. We have a complex older system and we want to possibly transition to using Require.js. One question I have is whether it is possible to set up specific versions of a file. I'm not talking about "multiple versions" in the same page. What I'm thinking is this:

* Module1, Module2
* Module1, Module3
* Module1, Module4
* Module2, Module3

I want a way to version Module1 so that I can go and change the version number in a single place. I know I can use:

require.config({ urlArgs: "bust=v3" })

but can I do that at the top of the file for Module1 and have it refer only to that Module?

A: Try setting the paths object in your RequireJS configuration. This will map a module's name to the URL RequireJS should download. http://requirejs.org/docs/api.html#config-paths

paths: { 'Module1': 'path/to/Module1_v1' }
Earning money is one of the main reasons why people work. People need to supply themselves with an adequate amount of basic necessities, and that will only happen if they receive adequate compensation for their work. If they do earn enough, they can keep track of every earning and purchase so that the money is not quickly depleted. On the other hand, there are instances when the salary received is bigger or smaller than the expected amount. It's naturally fine if it's bigger or only slightly smaller, but not when it's suspiciously smaller. So how can these deductions and bonuses be clarified, and how do you create a free custom pay stub? A pay stub is a breakdown of the salary with all the bonuses and deductions for that month. It can come before or during the pay day, but rarely after it, and it explains everything that affects the salary. Beyond that, there are other things it can do for the worker. It can help an individual keep track of his money every month. This is important because the salary will most likely not be the same every month, owing to factors such as profits from products sold, inflation and the like. The worker should always be informed about these changes, so it is preferable that pay stubs are distributed before the pay day. Moreover, a worker will be informed of the extra fees and taxes that he or she is directly or indirectly paying through the company. Local taxes go to the government, but if the company includes a small fee for its own purposes, the worker is free to raise the matter with the administration, especially if he or she would like to know more details about it. Having a pay stub will enable the worker to discuss discrepancies between the expected salary and the actual salary if the amount missing is high. As mentioned in this article, salaries may differ every month. 
So, having pay stubs for the current and previous salaries can serve as proof when addressing problems that the company and the worker have to work through. Last but not least, a pay stub delimits the amount of money a worker can spend in a given amount of time. This might sound like a downside, but it actually helps in controlling expenses and other matters that might otherwise lead to even more problems. A pay stub has more uses, but these factors cover the basic questions that bother a worker regarding his pay.
Growing up, I was always told I have a "sensitive stomach." Digestion issues and I were never friends: I suffered from constant stomach aches, nausea, headaches, etc. So I decided to find out what was causing so much distress to my digestion. A while back, I took Everlywell's food sensitivity kit and found out I was sensitive to a lot of foods, including oats, dairy, gluten, apples, peaches, chicken, pineapple and basil, among many others. I was told that, given the sensitivities I had, a paleo lifestyle could be the best option for my stomach issues. Food sensitivities present themselves in many ways: nausea, migraines, anxiety, eczema, IBS, and others. So a paleo lifestyle is the way I chose. With these changes in my diet, I cried a little when I thought I would have to say goodbye to my sweet tooth. That is, until I found Sweet Laurel Bakery's cookbook. There are so many great options for sweets that are all grain free, dairy free and refined sugar free. It's definitely the best cookbook I've purchased. This whole new way of eating is a learning curve, but I'm excited to learn along the way! Preheat the oven to 350 degrees F. Line a baking sheet with parchment paper. Line muffin cups with paper liners. Make the Streusel: In a small bowl, whisk together the flour, oil and maple syrup until the mixture comes together. Crumble onto the baking sheet and bake for 20 minutes, or until the streusel begins to crisp. Remove and set aside. Keep the oven at 350. Make the Muffins: In a large bowl, whisk together the flour, baking soda and salt. In a separate bowl, combine the eggs, coconut oil, maple syrup, lemon zest, lemon juice and vanilla. A little at a time, add the dry ingredients to the wet ingredients. Stir until a smooth batter forms. Divide the batter among the muffin cups, filling 3/4 of each cup. Top each muffin with blueberries and swirl them into the batter with a spoon. Sprinkle the streusel topping over the muffins.
Bake for about 25 minutes, until the streusel is golden brown. Remove muffins from the tin, set on a rack and allow to cool completely.
CMP's VARBusiness Magazine Named Fujitsu fi-5110C Scanner a Midmarket 'Product of the Year' | SPARC International, Inc. Fujitsu Computer Products of America, Inc., a market leader in document imaging scanners and services, today announced that the Fujitsu fi-5110C compact desktop scanner was named a midmarket "Product of the Year" by CMP Media's VARBusiness magazine. The Fujitsu fi-5110C scanner leverages the industry-leading technology found in departmental and production-level scanners at an affordable price of $895 (U.S. list) and is ideal for decentralized or workgroup scanning applications. The Fujitsu fi-5110C scanner is featured and profiled in the special May 2 VARBusiness "State of the Midmarket" issue. VARBusiness, a biweekly magazine that provides strategic insight to technology integrators, solicited input from information-technology (IT) vendors and solution providers to determine which products and services were best suited for midmarket customers – companies with between 100 and 999 employees. Entries were considered and reviewed by VARBusiness editors, who selected only 100 of the more than 400 nominated products. The Fujitsu fi-5110C scanner features reliable paper handling, from business cards and checks to legal-sized and other long documents, as well as high-speed USB 2.0 connectivity, a 50-sheet automatic document feeder (ADF), and double-feed detection. The scanner has fast scan speeds up to 15 ppm (simplex) / 30 ipm (duplex) and excellent image quality with 600 dpi optical resolution and dual-CCD scanning arrays to capture fine details in color or monochrome. The Fujitsu fi-5110C scanner comes standard with Adobe® Acrobat® 6.0 Standard and ScandAll 21, which allow users to quickly and easily integrate the scanner into existing workflows. The scanner also comes with TWAIN and ISIS driver support included. 
It can be purchased through authorized resellers, VARs, and distributors including Compucenter de Mexico, Cranel Imaging, Ingram Micro, NewWave Technology, and Tech Data. The Fujitsu fi-5110C scanner comes standard with a 1-year depot limited warranty. Additionally, the Advance Exchange service offering, the Fujitsu overnight replacement program, provides the customer with a replacement scanner prior to shipment of any malfunctioning scanner back to Fujitsu. The Advance Exchange offering for the fi-5110C scanner is available as an upgrade during the limited warranty period for only $59, or on a yearly basis post-warranty for only $99 (contract and agreement required for replacement unit). For the past 18 years, VARBusiness' strategic resources have been the gateway to the solution-provider community. VARBusiness provides unique strategic business and technology insight for solution providers through industry-defining research, in-depth editorial, channel events and innovative Web services, enabling these IT professionals to make educated decisions for their businesses, partnerships and customers. VARBusiness has been the recipient of numerous industry awards for both editorial content and design. Additional information about VARBusiness products, events and services is available on the magazine's Web site at www.varbusiness.com. Copyright 2005 Fujitsu Computer Products of America, Inc. All rights reserved. Fujitsu and the Fujitsu logo are registered trademarks and The Possibilities are Infinite is a trademark of Fujitsu Ltd. Advance Exchange is a trademark of Fujitsu Computer Products of America, Inc. All other trademarks are the property of their respective owners. Statements herein are based on normal operating conditions and are not intended to create any implied warranty of merchantability or fitness for a particular purpose. Fujitsu Computer Products of America, Inc. 
reserves the right to modify these statements, our services, products, and their warranty and performance specifications at any time without notice.
\section{Introduction} Rechargeable electric batteries are among the most important devices of modern civilization, and their role will obviously only increase in the future. Unfortunately, existing chemical rechargeable batteries (based on reversible electrochemical reactions) are far from ideal. This manifests itself, for example, in their inevitable irreversible degradation, slow charging, relatively low energy capacity per unit mass, the need for heating when the temperature drops, etc. Progress does not stop, of course; however, most efforts are currently applied to the chemistry and physical chemistry of batteries (see \cite{Chen:2020, Suga:2011}). The relatively new physical idea of the quantum battery exploits quantum states and entanglement \cite{Alicki:2013,Binder:2015,Fe-Co:18,Barra:2019}. The idea of using the spin degree of freedom to store energy has attracted much attention in recent years~\cite{Tian2017,Xie2018,Bozkurt2018,Nguyen2020}. In particular, in the recent article~\cite{battery} the authors propose a spin battery (SB): a half-metal spin valve with suppressed spin flips of the conducting electrons. Such a device would store electric energy reversibly, without any chemical reactions, using non-equilibrium states of quasiparticles in a conductor instead. Hence the following question appears: can an SB surpass a chemical battery, and what are the properties of an ideal SB? In this paper we theoretically ``test'' the possible limits of solid-state SB-s, as well as of the more exotic matter of neutron stars. \section{Chemical potential} The energy in an SB is stored in the deviation of the spin carriers' density from its equilibrium value, under the condition that spin relaxation is suppressed in this conductor. In other words, such a battery is just a certain volume filled with spin particles, and energy accumulation occurs through the development of a non-equilibrium spin state. 
Such spin particles can be electrons in a conductor~\cite{battery}, which are also the charge carriers, and their spin direction~$\pm$ can be set by an external magnetic field or by magnetic contacts. In order to charge such a battery of charged spin carriers, or to convert the accumulated non-equilibrium spin concentration into a charge current, one can use, for instance, antiparallel magnetized half-metal~\cite{Picket:2001,Mazin:2000,CongChen:2019} electrodes, which pass only~$+$ or~$-$ spins respectively~\cite{battery}, see Fig.~\ref{fig-scheme}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{img/SB-scheme.eps} \caption{Schematic illustration of a spin battery (denoted ``SB'') which is a conductor between half-metal electrodes (denoted ``H'') with opposite spin polarizations. } \label{fig-scheme} \end{figure} In this situation the charging potential difference~$\delta \varphi$ induces variations of the chemical potentials of the $\pm$~components (after the charging transient, once equilibrium is established): $$\mu_{\pm} = \mu_{0,\pm} + \eta_{\pm} \ ,$$ where $\mu_{0,\pm}$ are the equilibrium electrochemical potentials of the discharged battery, determined by the densities of the corresponding $\pm$~components. The chemical potentials $\eta_{\pm}$ are induced by the charging process, and their values can be found from the condition $\delta \varphi = \eta_+/q_+ - \eta_-/q_-$ (where $q_\pm$ are the electric charges of the corresponding carriers), together with spatial electroneutrality, which follows from the Poisson equation $\Delta \varphi = - 4 \pi \{q_+ [\rho_+(\mu_+) - \rho_+(\mu_0)] + q_- [\rho_-(\mu_-) - \rho_-(\mu_0)]\}$ (with the RHS equal to zero): \begin{equation} q_+ [\rho_+(\mu_+) - \rho_+(\mu_0)] + q_- [\rho_-(\mu_-) - \rho_-(\mu_0)] = 0 \label{eq-neutrality-gen} \end{equation} where $\varphi$ is the electric potential, and $\rho_\pm$ and $q_\pm$ are the densities and charges of the corresponding spin~$\pm$ carriers. 
Here, for simplicity, we assume $\mu_{0,+} = \mu_{0,-} = \mu_0$, since the equilibrium value of the electrochemical potential obviously does not affect the basic principles of energy storage. For a one-band conductor we have $q_+ = q_- = e$, where $e$ is the electron charge. Moreover, we can consider two-band conductors which contain not only electrons but also ``holes'' with the opposite charge, with the carriers polarized in such a way that spin~$\pm$ is tied to charge~$\pm$. In this situation the charges $q_\pm$ have opposite signs: $q_+ = -q_- = e$. This is possible when the interaction between carriers from different bands is weak~\cite{ZAYETS201853}, or by using electron-hole pairing methods~\cite{Shevchenko:1976, PhysRevLett.93.266805, Nandi:2012} for the spin-flip suppression mentioned in~\cite{battery}. When a one-band battery is charged, the number of carriers with a certain spin increases, which necessarily decreases the number of carriers with the opposite spin in order to preserve electroneutrality~(\ref{eq-neutrality-gen}). For a two-band battery we always choose the polarity of the charging voltage such that the numbers of carriers of both spins increase ($\eta_+ > 0$). This means connecting the positive terminal to the electrode with ``$+$''-polarity (corresponding to ``holes'' with charge $q_+$), and the negative terminal to the electrode with ``$-$''-polarity (corresponding to electrons with charge~$q_-$). The non-equilibrium states caused by deviations of the spin densities in one-band and two-band SB-s are shown in Fig.~\ref{fig-filling}, where we schematically show the filling of energy levels $\varepsilon$ as functions of the corresponding densities of states (DOS) $D_\pm$ for spins~$\pm$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{img/filling.eps} \caption{(a) One-band ``purely-electronic'' spin battery. (b) Two-band ``electron-holes'' spin battery. 
Filled levels are drawn in gray. In the one-band battery the number of carriers of one spin decreases when the number of carriers of the other spin increases, and the polarity is not important. For the two-band battery we choose the polarity such that both types of carriers increase during charging. } \label{fig-filling} \end{figure} As can be seen, the two-band battery has a polarity, and such an SB is analogous to a chemical battery, with the following difference. In a chemical battery it is the concentrations that change, and correspondingly the chemical potentials with respect to the electrodes, while $q_\pm$ corresponds to the charge of a single ion: $-q_+ = q_- = -z F/N_a$, where $z$ is the positive valency, $F$ is the Faraday constant, and $N_a$ is the Avogadro number (for definiteness we choose the same sign as that of the elementary charge, and we assume the same absolute value of valency for all ions). In the absence of a charging potential difference in the circuit, the presence of non-equilibrium chemical potentials leads to diffusion forces, which pull charges into the circuit or push them out of it. This happens at electrodes of opposite ``affinity'' (in a usual chemical battery such an affinity is related to chemical reactions; in the SB it is related to the presence of a conduction band only for a certain spin at the $\pm$ electrodes). The asymmetry of charge motion during relaxation into the thermodynamic equilibrium state drives an electric current in the full circuit~\cite{battery}. We can also consider SB-s whose spin/charge carriers are not ordinary conduction electrons but quasi-particles. Such quasi-particles can even have zero electric charge, but then their motion does not produce an electric current, and thus energy extraction from such a battery is difficult. An SB does not require chemical reactions, and therefore it does not suffer from chemical degradation. 
As we show below, an SB does not require heating if it consists of a degenerate gas of charge/spin carriers. Obviously, an SB can be a source not only of charge current but also of spin current~\cite{Awschalom:2001,Brataas:2002,Long:2003}\footnote{Attaching an ordinary conductor to a conductor with a non-equilibrium spin distribution will cause a diffusive spin current which equalizes the spin concentrations.}. Finally, an SB can be charged without electrodes, by using polarized electromagnetic radiation~\cite{polarizedlight}. \section{General formulas for the energy of a charged SB} Let us denote by $E_\pm(\mu_\pm)$ the total internal energy of the carriers of the~$\pm$ components for a given value of the electrochemical potential $\mu_\pm$. The energy stored in the battery is the difference between the internal energies of the charged and discharged states \begin{equation} \delta E = E_+(\mu_{0} + \eta_+) + E_-(\mu_{0}+\eta_-) - E_+(\mu_{0}) - E_-(\mu_{0}). \label{eq-E} \end{equation} At the microscopic level the value of~$E$, in an SB as well as in a chemical battery, is determined by the equilibrium energy distribution of the carriers~$n_{\pm}(\varepsilon, \mu)$, the DOS~$D_\pm(\varepsilon)$ and the volume~$\Omega$: \begin{equation} E_\pm(\mu_\pm) = \Omega \int\limits_{0}^{\infty} d \varepsilon \varepsilon D_\pm(\varepsilon) n_\pm(\varepsilon, \mu_\pm) \ . \label{eq-energy1} \end{equation} We can also write the following expression for $\rho_\pm$, to be substituted into~(\ref{eq-neutrality-gen}): \begin{equation} \rho_\pm(\mu_\pm) = \int\limits_{0}^{\infty} d \varepsilon D_\pm(\varepsilon) n_\pm(\varepsilon, \mu_{\pm}) \ . \label{eq-dencity1} \end{equation} As can be seen from (\ref{eq-E}) and (\ref{eq-energy1}), the energy distribution determines which parts of the DOS dependences~$D_\pm(\varepsilon)$ give the main contribution. Of course, the energy distribution can be classical (Boltzmann), Bose, Fermi or even so-called fractional statistics. 
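As an illustration of this scheme (not part of the original paper), the density and energy integrals can be evaluated numerically for a toy one-band conductor with a free-electron-like DOS, solving the electroneutrality condition for $\eta_-$ by bisection. All units are arbitrary and the numbers are purely illustrative:

```python
import math

# Toy numerical version of the scheme above: a one-band conductor with a
# free-electron-like DOS D(e) ~ sqrt(e) and Fermi-Dirac occupancy.
# Everything is in arbitrary units with Omega = 1; the numbers are illustrative.

def dos(e):
    return math.sqrt(e)

def fermi(e, mu, T):
    x = (e - mu) / T
    if x > 40:   # guard against overflow in exp()
        return 0.0
    if x < -40:
        return 1.0
    return 1.0 / (1.0 + math.exp(x))

def integrate(f, a, b, n=4000):
    # simple trapezoidal rule
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def density(mu, T, emax=15.0):
    return integrate(lambda e: dos(e) * fermi(e, mu, T), 0.0, emax)

def energy(mu, T, emax=15.0):
    return integrate(lambda e: e * dos(e) * fermi(e, mu, T), 0.0, emax)

def eta_minus(mu0, eta_p, T):
    # One-band electroneutrality (q+ = q-): find eta_- such that
    # rho(mu0 + eta_p) + rho(mu0 + eta_-) = 2 * rho(mu0), by bisection
    # (the density is monotonic in mu).
    target = 2.0 * density(mu0, T) - density(mu0 + eta_p, T)
    lo, hi = -0.9 * mu0, 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if density(mu0 + mid, T) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mu0, T, eta_p = 10.0, 0.05, 0.5
eta_m = eta_minus(mu0, eta_p, T)
dE = energy(mu0 + eta_p, T) + energy(mu0 + eta_m, T) - 2.0 * energy(mu0, T)
print(f"eta_- = {eta_m:.4f}, stored energy dE = {dE:.4f}")
```

For a degenerate gas ($T \ll \mu_0$) this gives $\eta_- \approx -\eta_+$ and a stored energy close to $D(\mu_0)\,\eta_+^2 = \sqrt{10}\cdot 0.25 \approx 0.79$ in these units, up to corrections of relative order $\eta_+/\mu_0$.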
Note that usually $\varepsilon D(\varepsilon)$ increases with $\varepsilon$, while Bose distributions, on the other hand, collect particles in the lower-energy states, where the DOS is minimal. Of course, the DOS in Bose systems can have certain singularities (see \cite{Pastur:1982}), so some low-dimensional systems require special investigation. In the general case, however, if we do not consider non-physical DOS which diverge at zero faster than $\varepsilon^{-1}$, then in order to maximize the capacity of an SB the particles should occupy more states with larger DOS. In this regard, fermions are the best choice for SB-s: since two fermions cannot occupy the same state, they have to fill levels of ever higher energy as the number of particles grows. Below we discuss the influence of quantum statistics in detail. Intuitively, we expect the battery to be more sensitive to temperature in the case of classical statistics. In the general case, for given $\mu_0$ and $q_\pm$, the search for a battery with maximal energy capacity reduces to a variational problem -- the search for a conditional maximum of the functional~$\delta E [D_\pm, n_\pm]$. This functional is linear in $D_\pm$ and $n_\pm$, while the restrictive condition is equation (\ref{eq-neutrality-gen}). In reality the distributions and densities of states are limited to the physical ones mentioned above. \section{Briefly on classical statistics} Boltzmann statistics corresponds to sufficiently low densities $\rho_\pm$, when $\exp\{-\mu/T\} \gg 1$, where $T$ is the temperature in energy units. Note that in the case $q_+ = q_-$ the electroneutrality condition (\ref{eq-neutrality-gen}) breaks down when $|e\delta \varphi| \sim T$. In this situation the increase of the $\pm$ component (depending on the sign of $\delta \varphi$) cannot be compensated by a decrease of the $\mp$ component, because there is an insufficient amount of the latter, and thus equation (\ref{eq-neutrality-gen}) has no solution. 
Since a temperature of $300\,\text{K}$ corresponds to a potential difference of only about $0.03\,\text{V}$, the case $q_+ = q_-$ is of no interest in the classical limit. In particular, this is why there is no point in chemical accumulators with different types of cations or anions arranged so that the opposite electrodes unbalance different ions of the same sign during charging: electroneutrality breaks down at relatively low voltages. In contrast, for $q_+ = - q_-$ electroneutrality is generally not broken as $\delta \varphi$ increases. In this case only the average value of the DOS matters. Indeed, the classical distribution is $n_\pm(\varepsilon, \mu) \approx \exp\{\mu/ T\} \exp\{-\varepsilon/ T\} $, where the constant in $\mu$ is chosen to satisfy the normalization condition for the density of particles. From (\ref{eq-neutrality-gen})--(\ref{eq-dencity1}) we have \begin{equation} \delta E \approx \Omega \Big[\exp\{\frac{\eta_+}{T}\} - 1\Big] \Big(E_+(\mu_{0}) + E_{-}(\mu_0)\Big). \label{eq-energy-class} \end{equation} Accordingly, the value of $\delta E$ in the leading approximation with respect to $T|q\delta \varphi|^{-1} \ll 1$ is $$\delta E \sim \Omega T [\rho_0(T, \mu_{0} + \frac{q\delta\varphi}{2}) - \rho_0(T, \mu_0)] \ ,$$ where $\rho_0(T, \mu_0)$ is the equilibrium concentration at the given temperature. As we can see, singularities of $D(\varepsilon)$ play no role in the case of classical statistics, and only the average of the DOS appears in~(\ref{eq-energy-class}). One can also see that $\delta E$~depends strongly on~$T$, and on the concentrations as well, which restricts the use of such accumulators at low temperatures without additional heating. Let us show that Fermi statistics changes the situation dramatically. 
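The temperature sensitivity of the classical result can be illustrated with a quick numerical sketch (the values of $E_0$ and $\eta_+$ below are illustrative, not taken from the paper):

```python
import math

# Temperature sensitivity of the classical (Boltzmann) estimate above:
# delta E ~ [exp(eta_+/T) - 1] * (E_+(mu_0) + E_-(mu_0)), with T in energy units.
E0 = 1.0      # equilibrium internal energy E_+(mu_0) + E_-(mu_0), arbitrary units
eta = 0.01    # charging-induced shift eta_+, in eV
for T in (0.025, 0.0125):          # roughly 300 K and 150 K in energy units
    dE = (math.exp(eta / T) - 1.0) * E0
    print(f"T = {T:.4f} eV: dE = {dE:.3f} * E0")
```

Halving the temperature more than doubles the exponential factor here, showing the strong $T$-dependence of the classical stored energy, in contrast to the degenerate (Fermi) case discussed next.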
\section{``Fermi'' batteries} Quantum Fermi statistics is realized in SB-s made from degenerate conductors such as metals, where the equilibrium electrochemical potential is approximately equal to the Fermi energy, $\mu_0 \simeq \varepsilon_F$. Usually the Fermi energy in metals is of order $10\,\text{eV} \sim 10^5\,\text{K}$. Therefore we can consider energies $|q_\pm \delta \varphi| \sim |\eta_\pm| < \mu_0$ and neglect thermal blurring. The energy of the non-equilibrium state with splitting of the spin components, and the corresponding Lagrange function in this approximation, can be found in \cite{Pyshkin:2014}; here we write it in our notation: \begin{equation} \delta E \approx \Omega \Big[\int_{\mu_{0}}^{\mu_{0} + \eta_+} d \varepsilon \varepsilon D_+(\varepsilon) + \int_{\mu_{0}}^{\mu_{0} + \eta_-} d \varepsilon \varepsilon D_-(\varepsilon)\Big] \ , \label{eq-energy-fermi} \end{equation} \begin{multline} q_+ [\rho_+(\mu_+) - \rho_+(\mu_0)] + q_- [\rho_-(\mu_-) - \rho_-(\mu_0)] \cong \\q_+ \int_{\mu_0}^{\mu_0 + \eta_+} d \varepsilon D_+(\varepsilon) + q_- \int_{\mu_0}^{\mu_0 + \eta_-} d\varepsilon D_-(\varepsilon) = 0\ . \label{eq-neutrality-fermi} \end{multline} As can be seen from (\ref{eq-energy-fermi}) and (\ref{eq-neutrality-fermi}), the shift of the Fermi level determines the limits of integration, and, as we show below, DOS singularities can play an important role in the properties of a ``Fermi'' battery. Note that in a one-band battery ($q_+ = q_-$) the integrand in (\ref{eq-energy-fermi}) coincides, up to the multiplier $\varepsilon$, with the integrand in the RHS of (\ref{eq-neutrality-fermi}), which is set to zero. Thus in this case a certain compensation occurs when we expand in the small parameter $\eta_{\pm}/\mu_{0}$. 
The physical meaning is the following: in a one-band battery we lose energy because the number of quasiparticles of one kind decreases during charging, whereas in a two-band battery the numbers of quasiparticles of both kinds only increase during charging (when the polarity is chosen correctly). First, we consider the case when the DOS has no singularities, i.e. $D_\pm(\varepsilon)$ are analytical functions which can be expanded in a Taylor series near the initial Fermi level $\mu_0$: $$D_\pm(\varepsilon) \approx D_\pm(\mu_0) + (\varepsilon - \mu_0) D'_\pm(\mu_0) + \frac{(\varepsilon - \mu_0)^2}{2} D''_\pm(\mu_0) + \dots \,$$ where the prime means differentiation with respect to the only argument. In the case of a smooth DOS, SB-s based on one-band and on two-band normal metals have quite different energy-storage properties. In a one-band metal we have $D_+(\mu_0) = D_-(\mu_0) = D(\mu_0)$, and the same relations for the derivatives $D', D''$. The electroneutrality condition for the case $q_+ = q_-$, accurate to second order in $\eta_{\pm}/\mu_0$, gives the following equation: \begin{equation} \eta_+ + \eta_- + \frac{D'(\mu_0)}{2 D(\mu_0)} (\eta_+^2 + \eta_-^2) = 0 \label{eq-eta-expanded}, \end{equation} which leads to \begin{equation} \eta_- \approx - \eta_+ - \frac{ D'(\mu_0)}{ D(\mu_0)} \eta_+^2 + \dots. \end{equation} Thus, from (\ref{eq-energy-fermi}), (\ref{eq-neutrality-fermi}), (\ref{eq-eta-expanded}), the energy of a one-band SB is \begin{equation} \delta E \approx \Omega D(\mu_0) \eta_+^2 + \dots. \label{eq-energy-smooth-dos} \end{equation} Here the dots denote higher-order terms in $\eta_{\pm}/\mu_0$. As can be seen from (\ref{eq-energy-smooth-dos}), the terms linear in $\eta_\pm$ cancel, and the energy depends only quadratically on the potential difference. This leads to the appearance of the small multiplier $\sim e \delta\varphi/\mu_0$ compared with a two-band SB. 
Indeed, in a two-band SB the approximation linear in $\eta_{\pm}/\mu_0$ is sufficient, which gives $\eta_+ = \eta_-$, and we have \begin{equation} \delta E = 2 \Omega D(\mu_0) \mu_0 \eta_+ \ , \ \ \frac{\delta E_{1-band}}{\delta E_{2-band}}\sim \frac{\eta_+}{\mu_{0}}. \label{eq-energy-2band} \end{equation} As can be seen from (\ref{eq-energy-2band}), the energy is linear in $\eta_+$ and greater than in the one-band case. In both cases, the higher the Fermi energy (i.e. the density of carriers), the higher the energy of the SB. In a one-band SB the energy is proportional to the DOS at $\mu_0$, while in a two-band SB it is proportional to the product of the DOS at $\mu_0$ and $\mu_0$ itself. In both cases (\ref{eq-energy-smooth-dos}) and (\ref{eq-energy-2band}) the DOS enters at the Fermi level, without averaging over energy, in contrast to the classical case. Let us now discuss how DOS singularities can increase the power capacity of a one-band Fermi SB, exploiting the large parameter containing the DOS derivative discussed above. Indeed, cusps in $D(\varepsilon)$ are not unusual, for instance when Van Hove singularities are present~\cite{VanHove,Bassani}. For the sake of simplicity we consider the model case of a linear dependence to the right and to the left of the cusp, and we also assume that the cusp of $D(\varepsilon)$ is exactly aligned with $\mu_0$. \begin{multline}\label{cusp-1} D(\varepsilon) = D(\mu_0) + D'(\mu_0 + 0)(\varepsilon - \mu_0) \theta(\varepsilon - \mu_0) + \\ D'(\mu_0 - 0)(\varepsilon - \mu_0) \theta(\mu_{0} - \varepsilon) \ . \end{multline} Note that expression~(\ref{cusp-1}) is not an expansion in a small parameter but a model DOS. Let us denote $D = D(\mu_0)$, $D_{>}' = D'(\mu_0 + 0)$ and $D_{<}' = D'(\mu_0 - 0)$, see Fig.~\ref{fig-singularity}. 
\begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{img/singularity.png} \caption{Model DOS cusp at the Fermi level, with a large DOS derivative on the right.} \label{fig-singularity} \end{figure} After taking the integrals in equations (\ref{eq-neutrality-fermi}), (\ref{eq-energy-fermi}) we obtain \begin{equation} D \eta_+ + D'_> \frac{\eta_+^2}{2} + D \eta_- + D'_< \frac{\eta_-^2}{2} = 0 \end{equation} \begin{equation} \delta E = \left[D \frac{\eta_+^2}{2} + D'_> \frac{\eta_+^3}{3} + D \frac{\eta_-^2}{2} + D'_< \frac{\eta_-^3}{3}\right]\Omega. \end{equation} Assume that $D_>'$ is large enough that $D_>'\eta_+/D \gg 1$, while the derivative $D'_<$ is small (for simplicity we can take it to be zero). In this case $\eta_- \approx - 2^{-1}D^{-1} D_>'\eta_+^2$, and the energy is $\delta E \sim \Omega D'_>\eta_+^3$. Using the condition $e \delta \varphi = \eta_+ - \eta_-$ we can estimate the value of the derivative $D'_>$ for which the energy of a Fermi battery with a cusp exceeds the estimate~(\ref{eq-energy-smooth-dos}) for the same carrier density and the same DOS at the Fermi level. Let us take $\varphi \sim 1 \text{V}$, $\mu_0 \sim \varepsilon_F \sim 10 \text{eV}$; then $$ D'_>\eta_+^3 > D \eta_+^2 \Rightarrow \frac{D'_{>} \eta_+}{D} > 1 \Rightarrow \frac{D}{D'_{>}} < 1 \ \text{eV,}$$ which looks experimentally feasible. The possibility of such a capacity increase follows from the fact that the DOS derivative gives the {\em main contribution} to the energy instead of being a correction. \section{Heavy fermions} The density of states of conduction electrons in metals is high as a result of the high density~$\rho$. Obviously, the DOS in normal metals is greater than the DOS in liquid electrolytes, despite the small effective electron mass in metals: the smallness of the effective mass is compensated by the high Fermi velocity, which is several orders of magnitude greater than, for example, the thermal velocities of ions in solutions. 
One can imagine even more effective conductors for SB-s, containing so-called ``heavy fermions''~\cite{Alekseevskii,Moshchalkov,Stewart,Ott,Heavy-ferm-1-Feng2021,Heavy-ferm-2-Chatterjee2021}, such as some intermetallic antiferromagnetic alloys with f-electrons. Indeed, in such correlated conductors the effective masses $m_{heavy}^*$ of the carriers are $100 \div 1000$ times greater than the effective masses in normal metals, $m^*_{normal} \sim 1 \div 10 \ m$, where $m$ is the free electron mass. The DOS of heavy fermions has an irregularity, see Fig.~\ref{fig-heavy}. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{img/heavy_fermions.png} \caption{ Schematic illustration of the DOS in intermetallic alloys with f-electrons. The dotted curve corresponds to $D(\varepsilon)$ in the absence of sf-interaction. ``Heavy fermions'' form states near the peak. $\varepsilon_f$ is the electron binding energy in the f-shell.} \label{fig-heavy} \end{figure} If we substitute the heavy-fermion DOS into expressions (\ref{eq-energy-smooth-dos}) and (\ref{eq-energy-2band}), taking into account that $\eta_\pm \sim e V$, $\mu_0 \sim \varepsilon_F$, $D_{heavy} \sim (m^{*}_{heavy}/m^{*}_{normal})^{3/2} D_{normal} \sim 10^{4} D_{normal}$, we get the following rough estimate \begin{equation} \delta E_{heavy} \sim 10^4 \cdot \frac{\Omega}{\lambda_F^3} \frac{e V}{\varepsilon_F} \begin{cases} eV \ , \ \ \text{one-band battery} \\ \varepsilon_F \ , \ \ \text{two-band battery} \end{cases} \end{equation} Given that in metals $\lambda_F \sim 10^{-10}$ meters, and the Fermi energy lies in the range from one to ten electron-volts, then for $V \sim 1$ Volt and $\Omega = 1 \text{cm}^3$ we estimate the maximal capacity of such an SB as $W_{max} = 3600^{-1} \delta E_{max}/V \sim 10^4 \text{Ah}$. For comparison, the battery capacity of the Apple iPhone 12 Pro Max indicated in its specifications is $3.7\,\text{Ah}$, i.e. three orders of magnitude less than that of one cubic centimeter of a hypothetical heavy-fermion SB. 
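As a quick order-of-magnitude check of this estimate (a back-of-the-envelope evaluation with the rough values quoted above, not part of the original text):

```python
# Order-of-magnitude check of the heavy-fermion estimate (one-band case),
# using the rough values quoted in the text; the result is indicative only.
e = 1.6e-19        # elementary charge, C (so 1 eV = 1.6e-19 J)
Omega = 1e-6       # volume: 1 cm^3 in m^3
lam_F = 1e-10      # Fermi wavelength, m
eps_F = 10.0       # Fermi energy, eV
V = 1.0            # charging voltage, V

dE = 1e4 * (Omega / lam_F**3) * (V / eps_F) * (e * V)   # joules, one-band case
Ah = dE / (3600.0 * V)                                  # ampere-hours at V = 1 V
print(f"dE = {dE:.2e} J per cm^3, capacity = {Ah:.2e} Ah")
```

With these inputs the one-band estimate gives $\delta E \sim 10^8$ J per cubic centimeter and a few times $10^4$ Ah, consistent with the $\sim 10^4\,\text{Ah}$ figure quoted above.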
Given that the metal density is approximately $10^4 \text{kg}/\text{m}^{3}$, we obtain that the energy density of a heavy fermion metallic SB is about $100 \div 1000$ Megajoules per kilogram. This value can be one order of magnitude greater than the energy density of gasoline as a fuel for internal combustion engines (which in its turn is about $46$ Megajoules per kilogram \cite{GOST, GOST2}). At the same time the energy density of chemical batteries is two orders of magnitude worse than that of gasoline, and this fact is an important problem of modern electric transport. \section{Neutron star as a record SB} Since neutrons in neutron stars are compressed by gravity to huge densities, and since they are fermions, they attain huge DOS values. The neutron has spin but carries no charge, and starting from this fact we can imagine a supercivilization which could store energy by ``charging'' neutron stars via polarized radiation, i.e. by producing a difference between the concentrations of spins referred to a certain direction in space. Here we do not discuss the problem of extracting this energy. It is impossible to transform this energy into electric current due to the electrical neutrality of neutrons. Also, it is obviously impossible to make contacts with a neutron star, so the supercivilization would have to develop a non-contact method. Nevertheless, let us estimate the colossal energy which could be stored in a neutron star in this fantastic scenario. If a neutron star has the Sun's mass $M_\odot \approx 2\cdot 10^{30}\text{kg}$, its radius should be $R \approx 10\text{km} = 10^4\text{m}$, and correspondingly the density $\rho = 3 M_\odot/4 \pi R^3 \approx 1.4 \cdot 10^{18}\text{kg}/\text{m}^{3}$. These numbers are taken just for understanding the scale of such a fantastic ``device''; the mass of a real neutron star should not exceed the Tolman–Oppenheimer–Volkoff limit~$\approx 2.17 M_\odot$~\cite{Oppenheimer,Tolman}.
In order to estimate the DOS and the Fermi energy one can use simple Thomas-Fermi formulas with relativistic corrections~\cite{neutronstars1, neutronstars2}, because the Fermi velocity is close to the speed of light~$c$ for such huge densities, and moreover the momentum $p \gg m_n c$, where $m_n$ is the neutron mass. Thus we have \begin{eqnarray} \varepsilon &=& c \sqrt{p^2 + m_n^2 c^2} - m_n c^2 \approx c p\ , \ \ p \gg m_n c \, \\ D(\varepsilon) &=& \frac{\varepsilon^2}{\pi^2 c^3 \hbar^3}, \, \\ \varepsilon_F &\approx& \pi c \hbar \Big(\frac{3 \rho}{\pi m_n}\Big)^\frac{1}{3}. \end{eqnarray} The electroneutrality condition is automatically satisfied in neutron stars. Spin concentrations just change accordingly: when some spin flips, it increases the concentration of one type and decreases the concentration of the opposite type. Therefore, the energy of a ``one-band'' SB~(\ref{eq-energy-smooth-dos}) based on neutron star matter, which is stored in the chemical potential spin splitting~$2 \eta$, is \begin{equation} \delta E_n \approx \Omega D(\varepsilon_F) \eta^2 \approx \frac{3^\frac{2}{3} \Omega}{c \hbar}\Big(\frac{\rho}{\pi m_n}\Big)^\frac{2}{3}\eta^2 \ . \end{equation} The maximal energy which can be reversibly stored in a ``spin battery'' based on such a neutron star corresponds to 100\% polarization of its neutrons, i.e.~$\eta = \varepsilon_F$. This allows us to estimate the maximal energy capacity of one cubic millimeter of neutron star matter, $\delta E_{n, max}(\Omega = 10^{-9} \text{m}^3) \sim 10^{42}$ Joules, which is $25$ orders of magnitude more than the energy of the most powerful thermonuclear bomb~\cite{ginness} tested by mankind, $2.4\cdot10^{17}$ Joules. The colossal energy reversibly stored in the whole neutron star with the above parameters is~$\sim 10^{63}$~Joules. Note that since the neutron has zero electric charge, spin-orbit coupling is absent, and full spin polarization of the star does not lead to rotation of the star.
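The Fermi energy formula above is straightforward to evaluate numerically. The sketch below uses the density quoted in the text and standard values for the physical constants.

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
m_n = 1.675e-27    # neutron mass, kg
rho = 1.4e18       # neutron star density quoted in the text, kg/m^3

# eps_F ~ pi * c * hbar * (3 rho / (pi m_n))^(1/3)
eps_F = math.pi * c * hbar * (3.0 * rho / (math.pi * m_n)) ** (1.0 / 3.0)
print(f"eps_F ~ {eps_F:.2e} J ~ {eps_F / 1.602e-13:.0f} MeV")
```

This gives a Fermi energy of several hundred MeV, on the scale of the neutron rest energy, which is why the relativistic dispersion relation is needed in the formulas above.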
\section{Conclusion} We have theoretically shown that spin batteries can surpass modern chemical accumulators in energy capacity by orders of magnitude due to the high density of states of electrons in metals, especially in the case of metals with ``heavy fermions''. The fantastic scenario of using neutron star matter would allow colossal energy to be stored reversibly. We did not find such estimates for neutron stars in the literature. \section{Acknowledgement} We thank L. A. Pastur for helpful discussions. P.~V.~P. acknowledges support by Grant No. PGC2018-101355-B-I00 funded by MCIN/AEI/10.13039/501100011033 and by ``ERDF A way of making Europe''.
\section{Introduction} \label{sec:intro} Canonical correlation analysis (CCA) \cite{hotelling36} is one of the most classical and important tools in multivariate statistics \cite{Anderson03,mkb}. It has been widely used in various fields to explore the relation between two sets of variables measured on the same sample. On the population level, given two random vectors $X\in\reals^p$ and $Y\in\reals^m$, CCA first seeks two vectors $u_1\in\reals^p$ and $v_1\in\reals^m$ such that the correlation between the projected variables $u_1'X$ and $v_1'Y$ is maximized. More specifically, $(u_1, v_1)$ is the solution to the following optimization problem: \begin{equation} \label{eq:cca-def} \max_{u\in\reals^p, v\in\reals^m} \Cov\bigl(u'X, v'Y\bigr),\qquad \mbox{subject to}\quad \Var\bigl(u'X\bigr) = \Var\bigl(v'Y\bigr) = 1, \end{equation} which is uniquely determined up to a simultaneous sign change when there is a positive eigengap. Inductively, once $(u_i, v_i)$ is found, one can further obtain $(u_{i+1}, v_{i+1})$ by solving the above optimization problem repeatedly subject to the extra constraint that \[ \Cov\bigl(u'X, u_j'X\bigr) = \Cov \bigl(v'Y, v_j'Y\bigr) = 0\qquad \mbox{for } j=1,\ldots, i. \] Throughout the paper, we call the $(u_i,v_i)$'s canonical correlation directions. It was shown by Hotelling \cite{hotelling36} that the $(\Sigma_x^{1/2} u_i, \Sigma_y^{1/2}v_i)$'s are the successive singular vector pairs of \begin{equation} \label{eq:cca-svd} \Sigma_x^{-{1/2}} \Sigma_{xy} \Sigma_y^{-{1/2}}, \end{equation} where $\Sigma_x = \Cov(X), \Sigma_y = \Cov(Y)$ and $\Sigma_{xy} = \Cov(X,Y)$. When one is only given a random sample $\{(X_i, Y_i):i=1,\ldots, n\}$ of size $n$, classical CCA estimates the canonical correlation directions by first performing singular value decomposition (SVD) on the sample counterpart of \eqref{eq:cca-svd} and then premultiplying the singular vectors by the inverses of the square roots of the sample covariance matrices.
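For concreteness, the classical sample procedure just described can be sketched in a few lines of code. This is an illustration in our own notation, not the estimator studied in this paper.

```python
import numpy as np

def classical_cca(X, Y, r=1):
    """Classical CCA: SVD of Sx^{-1/2} Sxy Sy^{-1/2}, then premultiply the
    singular vectors by the inverse square roots of the sample covariances.
    X is n x p, Y is n x m; the returned (U, V) satisfy U' Sx U = V' Sy V = I_r."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Sx, Sy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n

    def inv_sqrt(S):
        w, Q = np.linalg.eigh(S)
        return Q @ np.diag(w ** -0.5) @ Q.T

    Sx_is, Sy_is = inv_sqrt(Sx), inv_sqrt(Sy)
    L, lam, Rt = np.linalg.svd(Sx_is @ Sxy @ Sy_is)
    return Sx_is @ L[:, :r], Sy_is @ Rt.T[:, :r], lam[:r]

# toy data sharing a single latent factor
rng = np.random.default_rng(0)
n, p, m = 2000, 5, 4
Z = rng.standard_normal((n, 1))
X = Z @ rng.standard_normal((1, p)) + 0.5 * rng.standard_normal((n, p))
Y = Z @ rng.standard_normal((1, m)) + 0.5 * rng.standard_normal((n, m))
U, V, lam = classical_cca(X, Y)
print(lam)  # the leading sample canonical correlation
```

Because the data share one strong latent factor, the leading sample canonical correlation is close to one here; in the low-dimensional regime ($p, m \ll n$) this procedure behaves well, which is the point of departure for the high-dimensional discussion that follows.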
For fixed dimensions $p$ and $m$, the estimators are well behaved when the sample size is large~\cite{Anderson99}. However, in contemporary datasets, we typically face the situation where the ambient dimension in which we observe data is very high while the sample size is small. The dimensions $p$ and $m$ can be much larger than the sample size $n$. For example, in cancer genomic studies, $X$ and $Y$ can be gene expression and DNA methylation measurements, respectively, where the dimensions $p$ and $m$ can be as large as tens of thousands while the sample size $n$ is typically no larger than several hundreds \cite{cancer12}. When applied to datasets of such nature, classical CCA faces at least three key challenges. First, the canonical correlation directions obtained through classical CCA procedures involve all the variables measured on each subject, and hence are difficult to interpret. Second, due to the amount of noise that increases dramatically as the ambient dimension grows, it is typically impossible to consistently estimate even the leading canonical correlation directions without any additional structural assumption \cite{Johnstone08,Bao14}. Third, successive canonical correlation directions should be orthogonal with respect to the population covariance matrices, which are notoriously hard to estimate in high-dimensional settings. Indeed, it is not possible to obtain a substantially better estimator than the sample covariance matrix \cite{MaWu13}, which usually behaves poorly \cite{Johnstone01}. So, the estimation of such nuisance parameters further complicates the problem of high-dimensional CCA. Motivated by genomics, neuroimaging and other applications, there has been growing interest in imposing sparsity assumptions on the leading canonical correlation directions.
See, for example, \cite{wiesel08,witten09,parkhomenko09,hardoon2011sparse,le2009sparse,waaijenborg2009sparse,avants2010dementia,Wang14} for some recent methodological developments and applications. By seeking sparse canonical correlation directions, the estimated $(u_i, v_i)$ vectors only involve a small number of variables, and hence are easier to interpret. Despite these recent methodological advances, theoretical understanding of the sparse CCA problem is lacking. It is unclear whether the sparse CCA algorithms proposed in the literature achieve consistency or certain rates of convergence when the population canonical correlation directions are indeed sparse. To the best of our limited knowledge, the only theoretical work available in the literature is \cite{chen13}. In that paper, the authors gave a characterization of the sparse CCA problem and considered an idealistic single canonical pair model where $\Sigma_{xy}$, the covariance between $X$ and $Y$, was assumed to have a rank one structure. They reparametrized $\Sigma_{xy}$ as follows: \begin{equation} \label{eq:scp-model} \Sigma_{xy} = \Sigma_x \lambda uv' \Sigma_y, \end{equation} where $\lambda\in(0,1)$ and $u'\Sigma_x u = v'\Sigma_y v = 1$. It can be shown that $(u,v)$ is the solution to \eqref{eq:cca-def}, so that they are the leading canonical correlation directions. It is worth noting that without knowledge of $\Sigma_x$ and $\Sigma_y$, one is not able to obtain (resp., estimate) $(u,v)$ by simply applying singular value decomposition to $\Sigma_{xy}$ (resp., the sample covariance $\wh\Sigma_{xy}$). Under this model, Chen et al. \cite{chen13} studied the minimax lower bound for estimating the individual vectors $u$ and $v$, and proposed an iterative thresholding approach for estimating $u$ and $v$, partially motivated by \cite{Ma11}.
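The reparametrization \eqref{eq:scp-model} is easy to verify numerically. The toy sketch below (our own example, not code from \cite{chen13}) also illustrates why a plain SVD of $\Sigma_{xy}$ does not recover $(u,v)$, while an SVD of the whitened matrix \eqref{eq:cca-svd} does.

```python
import numpy as np

rng = np.random.default_rng(1)
p, m, lam = 4, 3, 0.6

# nontrivial marginal covariances Sigma_x, Sigma_y
A = rng.standard_normal((p, p)); Sx = A @ A.T + p * np.eye(p)
B = rng.standard_normal((m, m)); Sy = B @ B.T + m * np.eye(m)

# draw (u, v), normalized so that u' Sx u = v' Sy v = 1
u = rng.standard_normal(p); u /= np.sqrt(u @ Sx @ u)
v = rng.standard_normal(m); v /= np.sqrt(v @ Sy @ v)

# single canonical pair model: Sigma_xy = Sigma_x (lam u v') Sigma_y
Sxy = Sx @ (lam * np.outer(u, v)) @ Sy

def inv_sqrt(S):
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(w ** -0.5) @ Q.T

# SVD of the whitened matrix recovers (u, v) up to a simultaneous sign change,
# and its top singular value equals lam
L, s, Rt = np.linalg.svd(inv_sqrt(Sx) @ Sxy @ inv_sqrt(Sy))
u_hat, v_hat = inv_sqrt(Sx) @ L[:, 0], inv_sqrt(Sy) @ Rt.T[:, 0]
if u_hat @ u < 0:
    u_hat, v_hat = -u_hat, -v_hat
print(np.allclose(u_hat, u), np.allclose(v_hat, v), np.isclose(s[0], lam))

# the top left singular vector of Sigma_xy itself is generally NOT parallel to u
w = np.linalg.svd(Sxy)[0][:, 0]
print(abs(w @ u) / np.linalg.norm(u))  # cosine with u / ||u||; not 1 in general
```

Since $\Sigma_x^{-1/2}\Sigma_{xy}\Sigma_y^{-1/2} = \lambda\,(\Sigma_x^{1/2}u)(\Sigma_y^{1/2}v)'$ is exactly rank one, the recovery in the whitened step is exact up to numerical error.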
However, their results depend on how well the nuisance parameters $\Sigma_x$ and $\Sigma_y$ can be estimated, which to our surprise, turns out to be unnecessary as shown in this paper. \subsection{Main contributions} The main objective of the current paper is to understand the fundamental limits of the sparse CCA problem from a decision-theoretic point of view. Such an investigation is not only interesting in its own right, but will also inform the development and evaluation of practical methodologies in the future. The model considered in this work is very general. As shown in \cite{chen13}, $\Sigma_{xy}$ can be reparametrized as follows: \begin{equation} \label{eq:CCA} \Sigma_{xy} = \Sigma_x \bigl(U\Lambda V'\bigr) \Sigma_y\qquad \mbox{with } U' \Sigma_x U = V'\Sigma_y V = I_{\bar{r}}, \end{equation} where $\bar{r} = \min(p,m)$, $\Lambda= \diag(\lambda_1,\ldots, \lambda _{\bar{r}})$ and $1 > \lambda_1\geq\cdots\geq\lambda_{\bar{r}} \geq0$. Then the successive columns of $U$ and $V$ are the leading canonical correlation directions. Therefore, \eqref{eq:CCA} is the most general model for covariance structure, and sparse CCA actually means the leading columns of $U$ and $V$ are sparse. We can split $U\Lambda V'$ as \begin{equation} U\Lambda V' = U_1 \Lambda_1 V_1' + U_2 \Lambda_2 V_2', \label{eq:splitCCA} \end{equation} where $\Lambda_1 = \diag(\lambda_1,\ldots,\lambda_r), \Lambda_2 = \diag (\lambda_{r+1},\ldots,\lambda_{\bar{r}})$, $U_1\in\mathbb {R}^{p\times r}$, $V_1\in\mathbb{R}^{m\times r}$, $U_2\in\reals^{p\times r_2}$ and $V_2\in\reals^{m\times r_2}$ for $r_2 = \bar{r}-r$. In what follows, we call $(U_1, V_1)$ the \emph{leading} and $(U_2, V_2)$ the \emph{residual} canonical correlation directions. Since our primary interest lies in $U_1$ and $V_1$, both the covariance matrices $\Sigma_x$ and $\Sigma_y$ and the residual canonical correlation directions $U_2$ and $V_2$ are nuisance parameters in our problem. 
This model is more general than (\ref{eq:scp-model}) considered in \cite{chen13}. It captures the situation in real practice where one is interested in recovering the first few sparse canonical correlation directions while there might be additional directions in the population structure. To measure the performance of a procedure, we propose to estimate the matrix $U_1 V_1'$ under the following loss function: \begin{equation} \label{eq:loss} L\bigl(U_1 V_1', \wh{U_1 V_1'}\bigr) = \bigl\|U_1 V_1' - \wh{U_1 V_1'} \bigr\|_{\mathrm{F}}^2. \end{equation} We choose this loss function for several reasons. First, even when the $\lambda_i$'s are all distinct, $U_1$ and $V_1$ are only determined up to a simultaneous sign change of their columns. In contrast, the matrix $U_1 V_1'$ is uniquely defined as long as $\lambda_r > \lambda_{r+1}$. Second, \eqref{eq:loss} is stronger than the squared projection error loss. For any matrix $A$, let $P_A$ stand for the projection matrix onto its column space. If the spectra of $\Sigma_x$ and $\Sigma_y$ are both bounded away from zero and infinity, then, in view of Wedin's sin-theta theorem \cite{Wedin72}, any upper bound on the loss function \eqref{eq:loss} leads to an upper bound on the loss functions $\| P_{U_1} - \wh{P}_{U_1}\|_{\mathrm{F}}^2$ and $\|P_{V_1} - \wh{P}_{V_1}\| _{\mathrm{F}}^2$ for estimating the column subspaces of $U_1$ and $V_1$, which have been used in the related problem of sparse principal component analysis \cite {CMW13a,Vu13}. Third, this loss function comes up naturally as the key component in the Kullback--Leibler divergence calculation for a special class of normal distributions where $\Sigma_x = I_p$, $\Sigma_y = I_m$ and $\lambda_{r+1} = \cdots= \lambda_{\bar{r}} = 0$ in \eqref{eq:CCA}. We use weak-$\ell_q$ balls to quantify sparsity. Let $\|{(U_1)_{j*}} \| $ denote the $\ell_2$ norm of the $j$th row of $U_1$, and let $\|{(U_1)_{(1)*}} \|\geq\cdots\geq\|{(U_1)_{(p)*}} \|$ be the ordered row norms. 
One way to characterize the sparsity in $U_1$ (and $V_1$) is to look at its weak-$\ell_q$ radius for some $q\in[0,2)$, \begin{equation} \label{eq:weak-lq} \|{U_1} \|_{q,w} = \max_{j\in[p]} j \bigl\|{(U_1)_{(j)*}} \bigr\|^q \end{equation} with the convention that $0^q = 0$. For instance, in the case of exact sparsity, that is, $q = 0$, $\|{U_1} \|_{0,w}$ counts the number of nonzero rows in $U_1$. When $q \in(0,2)$, \eqref{eq:weak-lq}~quantifies the decay of the ordered row norms of $U_1$, which is a form of approximate sparsity. Then we define the parameter space $\mathcal{F}_q (s_u, s_v, p,m, r, \lambda; \kappa, M)$ as the collection of all covariance matrices \[ \Sigma= \left[\matrix{ \Sigma_x & \Sigma_{xy} \cr \Sigma_{yx} & \Sigma_y} \right] \] with the CCA structure (\ref{eq:CCA}) and (\ref{eq:splitCCA}), which satisfies: \begin{enumerate} \item$U_1\in\mathbb{R}^{p\times r}$ and $V_1\in\mathbb{R}^{m\times r}$ satisfying $\|{U_1} \|_{q,w}\leq s_u$ and $\|{V_1} \|_{q,w}\leq s_v$; \item$\llVert \Sigma_x^l\rrVert _{\mathrm{op}}\vee\llVert \Sigma_y^l\rrVert _{\mathrm{op}}\leq M$ for $l=\pm1$; \item$1>\kappa\lambda\geq\lambda_1\geq\cdots\geq\lambda_r\geq \lambda> 0$. \end{enumerate} Throughout the paper, we assume $\kappa\lambda\leq1-c_0$ for some absolute constant $c_0\in(0,1)$. The key parameters $s_u, s_v, p,m, r$ and $\lambda$ are allowed to depend on the sample size $n$, while $\kappa, M> 1$ are treated as absolute constants. Compared with the single canonical pair model \eqref{eq:scp-model} in \cite{chen13}, where $\operatorname{rank}(\Sigma_{xy})=1$, in this paper the rank of $\Sigma_{xy}$ can be as high as $p$ or $m$, and $r$ is allowed to grow. In addition, we do not need any structural assumption on $\Sigma_x$ and $\Sigma_y$ except for condition 2 on the largest and smallest eigenvalues, which implies that $\Sigma_x$ and $\Sigma_y$ are invertible. Suppose we observe i.i.d. pairs $(X_1,Y_1),\ldots,(X_n, Y_n)\sim N_{p+m}(0,\Sigma)$.
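The weak-$\ell_q$ radius \eqref{eq:weak-lq} is simple to compute directly; the following sketch (our own code) makes the definition concrete:

```python
import numpy as np

def weak_lq_radius(U, q):
    """Weak-l_q radius of U: max over j of j * ||U_(j)*||^q, where the rows
    are sorted by decreasing l_2 norm and j is 1-based; 0^q = 0 by convention.
    For q = 0 this is just the number of nonzero rows."""
    norms = np.sort(np.linalg.norm(U, axis=1))[::-1]
    if q == 0:
        return int(np.count_nonzero(norms))
    j = np.arange(1, len(norms) + 1)
    return float(np.max(j * norms ** q))

U = np.zeros((6, 2))
U[0] = [3.0, 4.0]   # row norm 5
U[1] = [0.0, 1.0]   # row norm 1
print(weak_lq_radius(U, 0))  # 2: two nonzero rows
print(weak_lq_radius(U, 1))  # 5.0: max(1*5, 2*1, 0, ...)
```

The $q=0$ case returns the exact row sparsity, while $0 < q < 2$ rewards fast decay of the ordered row norms, matching the description of approximate sparsity above.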
For two sequences $\{a_n\}$ and $\{b_n\}$ of positive numbers, we write $a_n\asymp b_n$ if for some absolute constant $C>1$, $1/C \leq a_n/b_n \leq C$ for all $n$. By the minimax lower and upper bound results in Section~\ref{sec:result}, under mild conditions, we obtain the following tight nonasymptotic minimax rates for estimating the leading canonical directions when $q = 0$: \begin{eqnarray} \label{eq:rate-0} && \inf_{\wh{U_1 V_1'}} \sup _{\Sigma\in\mathcal {F}_0(s_u,s_v,p,m,r,\lambda)} \Expect_\Sigma\bigl\|U_1 V_1' - \wh{U_1 V_1'} \bigr\|_{\mathrm{F}}^2 \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\asymp \frac{1}{n\lambda^2}\biggl( r(s_u+s_v) + s_u \log\frac{\eexp p}{s_u} + s_v \log\frac{\eexp m}{s_v} \biggr). \end{eqnarray} In Section~\ref{sec:result}, we give a precise statement of this result and tight minimax rates for the case of approximate sparsity, that is, $q\in(0,2)$. The result (\ref{eq:rate-0}) provides a precise characterization of the statistical fundamental limit of the sparse CCA problem. It is worth noting that the conditions required for \eqref{eq:rate-0} do not involve any additional assumptions on the nuisance parameters $\Sigma_x, \Sigma_y, U_2$ and $V_2$. Therefore, we are able to establish the remarkable fact that the fundamental limit of the sparse CCA problem is \emph{not} affected by those nuisance parameters. This optimality result can serve as an important guideline to evaluate procedures proposed in the literature. To obtain minimax upper bounds, we propose an estimator by optimizing canonical correlation under sparsity constraints. A key element in analyzing the risk behavior of the estimator is a generalized sin-theta theorem. See Theorem~\ref{thm:sintheta} in Section~\ref{sec:key}. The theorem is of interest in its own right and can be useful in other problems where matrix perturbation analysis is needed. It is worth noting that the proposed procedure does \emph{not} require sample splitting, which was needed in \cite{CMW13a}. 
We bypass sample splitting by establishing a new empirical process bound for the supremum of Gaussian quadratic forms with a rank constraint. See Lemma~\ref{lem:EP} in Section~\ref{sec:key}. The estimator is shown to be minimax rate optimal by establishing matching minimax lower bounds based on a local metric entropy approach \cite{Lecam73,Birge83,Yang99,CMW13a}. \subsection{Connection to and difference from sparse PCA} The current paper is related to the problem of sparse principal component analysis (PCA), which has received a lot of recent attention in the literature. Most literature on sparse PCA considers the spiked covariance model \cite{tb99jrssb,Johnstone01} where one observes an $n\times p$ data matrix, each row of which is independently sampled from a normal distribution $N_p(0,\Sigma_0)$ with \begin{equation} \label{eq:spiked-cov} \Sigma_0 = V\Lambda V' + \sigma^2 I_p. \end{equation} Here, $V\in\reals^{p\times r}$ has orthonormal column vectors which are assumed to be sparse and $\Lambda= \diag(\lambda_1,\ldots,\lambda_r)$ with $\lambda_1\geq\cdots\geq\lambda_r >0$. Since the first $r$ eigenvalues of $\Sigma_0$ are $\{\lambda_i+\sigma^2\}_{i=1}^r$ and the rest are all $\sigma^2$, the $\lambda_i$'s are referred to as ``spikes,'' and hence the name of the model. Johnstone and Lu \cite{JohnstoneLu09} proposed a diagonal thresholding estimator of the sparse principal eigenvector which is provably consistent for a range of sparsity regimes. For fixed $r$, Birnbaum et al. \cite{Birnbaum12} derived minimax rate optimal estimators for individual sparse principal eigenvectors, and Ma \cite{Ma11} proposed to directly estimate sparse principal subspaces, that is, the span of $V$, and constructed an iterative thresholding algorithm for this purpose which is shown to achieve a near optimal rate of convergence adaptively. Cai et al. \cite{CMW13a} studied minimax rates and adaptive estimation for sparse principal subspaces with little constraint on $r$.
See also \cite{Vu13} for the case of a more general model. In addition, variable selection, rank detection, computational complexity and posterior contraction rates of sparse PCA have been studied. See, for instance, \cite{Amini09,CMW13,Berthet13,Gao13} and the references therein. Compared with sparse PCA, the sparse CCA problem studied in the current paper is different and arguably more challenging in three important ways. \begin{itemize} \item In sparse PCA, the sparse vectors of interest, that is, the columns of $V$ in \eqref{eq:spiked-cov}, are normalized with respect to the identity matrix. In contrast, in sparse CCA, the sparse vectors of interest, that is, the columns of $U$ and $V$, are normalized with respect to $\Sigma_x$ and $\Sigma_y$, respectively, which are not only unknown but also hard to estimate in high-dimensional settings. The necessity of normalization with respect to nuisance parameters adds to the difficulty of the sparse CCA problem. \item In sparse PCA, especially in the spiked covariance model, there is a clean separation between ``signal'' and ``noise'': the signal is in the spiked part and the rest is noise. However, in the parameter space considered in this paper, we allow the presence of residual canonical correlations $U_2 \Lambda_2 V_2'$, which is motivated by the situation statisticians face in practice. It is highly nontrivial to show that the presence of the residual canonical correlations does not influence the minimax estimation rates. \item The covariance structures in sparse PCA and sparse CCA have both sparsity and low-rank structures. However, there is a subtle difference between the two. In sparse PCA, the sparsity and orthogonality of $V$ in (\ref{eq:spiked-cov}) are coherent. This means that the columns of $V$ are sparse and orthogonal to each other simultaneously. Such convenience is absent in the sparse CCA problem.
It is implied from (\ref{eq:CCA}) that $\Sigma_x^{1/2}U_1$ and $\Sigma_y^{1/2}V_1$ have orthogonal columns, while it is the columns of $U_1$ and $V_1$ that are sparse. The orthogonal columns and the sparse columns are different. The consequence is that in order to estimate the sparse matrices $U_1$ and $V_1$, we must appeal to the orthogonality in the nonsparse matrices $\Sigma_x^{1/2}U_1$ and $\Sigma_y^{1/2}V_1$, even when the matrices $\Sigma_x$ and $\Sigma_y$ are unknown. If we naively treat sparse CCA as sparse PCA, the procedure can be inconsistent (see the simulation results in \cite{chen13}). \end{itemize} \subsection{Organization of the paper} The rest of the paper is organized as follows. Section~\ref{sec:result} presents the main results of the paper, including upper bounds in Section~\ref{sec:upper} and lower bounds in Section~\ref{sec:lower}. Section~\ref{sec:discussion} discusses some related issues. The proofs of the minimax upper bounds are gathered in Section~\ref {sec:proof}, with some auxiliary results and technical lemmas proved in Section~\ref{sec:aux-proof}. The proof of the lower bounds and some further technical lemmas are given in the supplementary material~\cite{supp2}. \subsection{Notation} For any matrix $A = (a_{ij})$, the $i$th row of $A$ is denoted by ${A}_{{i} *}$ and the $j$th column by ${A}_{* {j}}$. For a positive integer $p$, $[p]$ denotes the index set $\{1, 2, \ldots, p\}$. For any set $I$, $|I|$ denotes its cardinality and ${I^{\mathrm{ c}}}$ its complement. For two subsets $I$ and $J$ of indices, we write $A_{IJ}$ for the $|I|\times|J|$ submatrices formed by $a_{ij}$ with $(i,j) \in I \times J$. When $I$ or $J$ is the whole set, we abbreviate it with an $*$, and so if $A\in\reals^{p\times k}$, then ${A}_{{I} *} = A_{I [k]}$ and ${A}_{* {J}} = A_{[p] J}$. For any square matrix $A = (a_{ij})$, denote its trace by $\Tr(A) = \sum_{i}a_{ii}$. 
Moreover, let $O(p,k)$ denote the set of all $p\times k$ orthonormal matrices and $O(k)=O(k,k)$. For any matrix $A \in\reals^{p \times k}$, $\sigma_i(A)$ stands for its $i$th largest singular value. The Frobenius norm and the operator norm of $A$ are defined as $\|{A} \|_{{\mathrm{F}}}=\sqrt{\Tr(A'A)}$ and $\|{A} \|_{{\mathrm{op}}}=\sigma_1(A)$, respectively. The support of $A$ is defined as $\supp(A)=\{i\in[p]: \|A_{i*}\|>0\}$. The trace inner product of two matrices $A,B\in\reals^{p \times k}$ is defined as $ \langle A, B \rangle=\Tr(A'B)$. For any number $a$, we use ${\lceil{a} \rceil}$ to denote the smallest integer that is no smaller than $a$. For any two numbers $a$ and $b$, let $a\vee b = \max(a,b)$ and $a\wedge b = \min(a,b)$. For any event $E$, we use ${\mathbf{1}_{\{{E}\}}}$ to denote its indicator function. We use $\mathbb{P}_{\Sigma}$ to denote the probability distribution of $N_{p+m}(0,\Sigma)$ and $\mathbb{E}_{\Sigma}$ for the associated expectation. \section{Main results} \label{sec:result} In this section, we state the main results of the paper. In Section~\ref{sec:upper}, we introduce a method to estimate the leading canonical correlation directions. Minimax upper bounds are obtained. Section~\ref{sec:lower} gives minimax lower bounds which match the upper bounds up to a constant factor. We abbreviate the parameter space $\mathcal{F}_q(s_u,s_v,p,m,r,\lambda;\kappa,M)$ as $\mathcal{F}_q$. \subsection{Upper bounds} \label{sec:upper} The main idea of the estimator proposed in this paper is to maximize the canonical correlations under sparsity constraints. Note that the SVD approach of classical CCA \cite{hotelling36} can be written in the following optimization form: \begin{equation} \max_{(A,B)} \Tr\bigl(A'\wh{\Sigma}_{xy}B \bigr) \qquad\mbox{s.t.}\quad A'\wh{\Sigma}_xA=B'\wh{\Sigma}_yB=I_r. \label{eq:optim} \end{equation} We generalize (\ref{eq:optim}) to the high-dimensional setting by adding sparsity constraints.
Since the leading canonical correlation directions $(U_1,V_1)$ are weak $\ell_q$ sparse, we introduce the effective sparsity for $q\in[0,2)$, which plays a key role in defining the procedure. Define \begin{eqnarray} \label{eq:x-q-u} x_q^u & =&\max \biggl\{0\leq x\leq p: x \leq s_u \biggl( \frac{n\lambda^2}{r+\log(ep/x)} \biggr) ^{q/2} \biggr\}, \\ \label{eq:x-q-v} x_q^v & =&\max \biggl\{0\leq x\leq m: x\leq s_v \biggl( \frac{n\lambda^2}{r+\log(em/x)} \biggr) ^{q/2} \biggr\}. \end{eqnarray} The effective sparsities of $U_1$ and $V_1$ are defined as \begin{equation} k_q^{u}={\bigl\lceil{x_q^u} \bigr\rceil}, \qquad k_q^v={\bigl\lceil{x_q^v} \bigr\rceil}. \label{eq:effectivesparsity} \end{equation} For $j\geq k_q^u$, it can be shown that \[ \bigl\Vert(U_1)_{(j)*}\bigr\Vert\leq \biggl(\frac{r+\log(ep/k_q^u)}{n\lambda^2} \biggr)^{1/2}, \] for which the signal is not strong enough to be recovered from the data. A similar bound holds for $V_1$. For $n$ i.i.d. observations $(X_i,Y_i)$, $i\in[n]$, we compute the sample covariance matrix \[ \widehat{\Sigma} = \left[\matrix{ \widehat{\Sigma}_x & \widehat{\Sigma}_{xy} \cr \widehat{\Sigma}_{yx} & \widehat{\Sigma}_{y}} \right] . \] The estimator $(\wh{U}_1,\wh{V}_1)$ for $(U_1,V_1)$, the leading $r$ canonical correlation directions, is defined as a solution to the following optimization problem: \begin{eqnarray} \label{eq:pickset} &&\max_{(A,B)} \Tr \bigl(A'\wh{\Sigma}_{xy}B\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\mbox{s.t.}\quad A'\wh{\Sigma}_xA=B'\wh{\Sigma}_yB=I_r\mbox{ and }\|{A} \|_{0,w}= k_q^u, \|{B} \|_{0,w}= k_q^v. \end{eqnarray} When $q=0$, we have $k_q^u=s_u$ and $k_q^v=s_v$. Then the program (\ref{eq:pickset}) is just a slight generalization of the classical approach of \cite{hotelling36} with additional $\ell_0$ constraints $\|{A} \|_{0,w}=s_u$ and $\|{B} \|_{0,w}=s_v$.
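The effective sparsity in \eqref{eq:effectivesparsity} can be found by a direct search, since the right-hand side of the defining inequality is increasing in $x$. Below is a sketch with our own grid-search implementation; for $q=0$ it reduces to $k_0 = s$.

```python
import math

def effective_sparsity(s, dim, n, lam, r, q, grid=200000):
    """k_q = ceil(max{0 <= x <= dim : x <= s*(n lam^2/(r + log(e*dim/x)))^(q/2)}),
    computed by scanning a fine grid of x values; q = 0 gives k_0 = min(s, dim)."""
    if q == 0:
        return min(s, dim)
    best = 0.0
    for i in range(1, grid + 1):
        x = dim * i / grid
        if x <= s * (n * lam ** 2 / (r + math.log(math.e * dim / x))) ** (q / 2):
            best = x
    return math.ceil(best)

# q = 0 recovers the exact sparsity s_u
print(effective_sparsity(s=10, dim=500, n=1000, lam=0.5, r=2, q=0))  # 10
# for q > 0 the effective sparsity grows with the signal strength n * lam^2
print(effective_sparsity(s=10, dim=500, n=1000, lam=0.5, r=2, q=1))
```

The second call illustrates the intended behavior: with approximate sparsity ($q>0$), a stronger signal $n\lambda^2$ makes more of the weakly decaying rows recoverable, so the effective sparsity exceeds the weak-$\ell_q$ radius $s$.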
By the definition of the parameter space, it is also natural to impose the $\ell_q$ constraints $\|{A} \|_{q,w}\leq s_u$ and $\|{B} \|_{q,w}\leq s_v$. Such constraints were used by \cite{Vu13} to solve the sparse PCA problem. However, their upper bounds require more assumptions due to the difficulty in analyzing $\ell_q$ constraints. We use $\ell_0$ constraints on the effective sparsity and obtain the optimal upper bound under minimal assumptions. Set \begin{equation} \eps_n^{2}=\frac{1}{n\lambda^{2}} \biggl(r \bigl(k_{q}^{u}+k_{q}^{v} \bigr)+k_{q}^{u} \log\frac{ep}{k_q^u}+k_{q}^{v} \log\frac{em}{k_q^v} \biggr), \label{eq:DefEpsilon} \end{equation} which is the minimax rate to be shown later. \begin{theorem} \label{thm:upperbound1} We assume that \begin{eqnarray} \varepsilon_n^2 &\leq&c, \label{eq:ass1} \\ \lambda_{r+1} &\leq&c\lambda, \label{eq:ass2} \end{eqnarray} for some sufficiently small constant $c\in(0,1)$. For any constant $C'>0$, there exists a constant $C>0$ only depending on $M,q,\kappa$ and $C'$, such that for any $\Sigma\in\mathcal{F}_q$, \[ \bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime } \bigr\| _{\mathrm{F}}^{2}\leq C\varepsilon_n^2, \] with $\mathbb{P}_{\Sigma}$-probability at least $1-\exp(-C^{\prime }(k_{q}^{u}+\log (ep/k_{q}^{u})))-\exp(-C^{\prime}(k_{q}^{v}+\log (em/k_{q}^{v})))$. \end{theorem} \begin{remark} It will be shown in Section~\ref{sec:lower} that assumption (\ref {eq:ass1}) is necessary for consistent estimation. Assumption (\ref {eq:ass2}) implies $\lambda_{r+1}\leq c\lambda_r$ for $c\in(0,1)$, such that the eigengap is lower bounded as $\lambda_r-\lambda _{r+1}\geq (1-c)\lambda_r>0$. \end{remark} \begin{remark} The upper bound $\varepsilon_n^2$ has two parts. The first part $\frac {1}{n\lambda^2} (r(k_q^u+k_q^v) )$ is caused by low rank structure, and the second part $\frac{1}{n\lambda^2} (k_q^u\log (ep/k_q^u)+k_q^v\log(em/k_q^v) )$ is caused by sparsity. 
If $r\leq \log(ep/k_q^u)\wedge\log(em/k_q^v)$, the second part dominates, while the first part dominates if $r\geq\log(ep/k_q^u)\vee\log(em/k_q^v)$. \end{remark} \begin{remark} The upper bound does not require any structural assumption on the marginal covariance matrices $\Sigma_x$ and $\Sigma_y$ other than bounds on the largest and the smallest eigenvalues. Although in the high-dimensional setting the sample covariances $\wh{\Sigma}_x$ and $\wh{\Sigma}_y$ are not good estimators of the matrices $\Sigma_x,\Sigma_y$, the normalization constraints $A'\wh{\Sigma}_xA=B'\wh{\Sigma}_yB=I_r$,\vspace*{1pt} together with the sparsity of $A,B$, only involve submatrices of $\wh{\Sigma}_x$ and $\wh{\Sigma}_y$. Under assumption (\ref{eq:ass1}), it can be shown that a $k_q^u\times k_q^u$ submatrix of $\wh{\Sigma}_x$ converges to\vspace*{-1pt} the corresponding submatrix of $\Sigma_x$ at the rate $\sqrt{\frac{k_q^u\log(ep/k_q^u)}{n}}$ under the operator norm, uniformly over all $k_q^u\times k_q^u$ submatrices. Similar results hold for $\wh{\Sigma}_y$ and $\Sigma_y$. See Lemma~\ref{lem:covdeviation45} in Section~\ref{sec:bias-pf}. These rates are dominated by the minimax rate $\varepsilon_n$ in (\ref{eq:DefEpsilon}). \end{remark} \begin{remark} One of the major difficulties of sparse CCA is the presence of the unknown $\Sigma_x$ and $\Sigma_y$. If $\Sigma_x$ and $\Sigma_y$ were known, one could work with the transformed data $\{(\Sigma_x^{-1}X_i, \Sigma_y^{-1}Y_i):i=1,\ldots,n\}$. The cross-covariance of the transformed data is $\Sigma_x^{-1}\Sigma_{xy}\Sigma_y^{-1}=U\Lambda V'$, which is a sparse matrix. When $\operatorname{rank}(\Sigma_{xy})=1$, algorithms such as \cite{Yang11,chen13} can obtain the sparse singular vectors from $\Sigma_x^{-1}\wh{\Sigma}_{xy}\Sigma_y^{-1}$, which estimate $U_1$ and $V_1$ with the optimal rate.
When $\Sigma_x$ and $\Sigma_y$ are unknown, structural assumptions are required on the covariance matrices in order that $\Sigma_x^{-1}$ and $\Sigma_y^{-1}$ can be well estimated. Then one can use the estimated $\Sigma_x^{-1}$ and $\Sigma_y^{-1}$ to transform the data and apply the same sparse singular vector estimator (see \cite{chen13}). However, unless $\Sigma_x=I_p$ and $\Sigma_y=I_m$, this method cannot be extended to the case where $\operatorname{rank}(\Sigma_{xy})\geq2$, since the orthogonality of $U$ and $V$ is with respect to the general covariance matrices $\Sigma_x$ and $\Sigma_y$, respectively. In the case where $\Sigma_x=I_p$ and $\Sigma_y=I_m$, the problem is similar to sparse PCA, and the proof of Theorem~\ref{thm:upperbound1} can be greatly simplified. \end{remark} To obtain the convergence rate in expectation, we propose a modified estimator. The modification is inspired by the fact that $U_{1}V_{1}^{\prime}$ is bounded in Frobenius norm, because \begin{equation} \bigl\| U_{1}V_{1}^{\prime}\bigr\| _{\mathrm{F}}\leq\bigl\| \Sigma_{x}^{-1/2}\bigr\| _{\mathrm{op}}\bigl\| \Sigma_{x}^{1/2}U_{1}\bigr\| _{\mathrm{F}}\bigl\| \Sigma_{y}^{1/2}V_{1}\bigr\| _{\mathrm{op}}\bigl\| \Sigma_{y}^{-1/2}\bigr\| _{\mathrm{op}}\leq M\sqrt{r}. \label{eq:BoundTruth} \end{equation} Define $\widehat{U_{1}V_{1}^{\prime}}$ to be the truncated version of $\widehat{U}_{1}\widehat{V}_{1}^{\prime}$ as \[ \widehat{U_{1}V_{1}^{\prime}}= \widehat{U}_{1}\widehat{V}_{1}^{\prime} {\mathbf{1}_{\{{\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}\| _{\mathrm{F}}\leq2M\sqrt{r}}\}}}.
\] The modification can be viewed as an improvement, because whenever\break $\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}\| _{\mathrm{F}}>2M\sqrt{r}$, we have \[ \bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}\geq\bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}-\bigl\| U_{1}V_{1}^{\prime}\bigr\| _{\mathrm{F}}\geq M\sqrt{r}\geq\bigl\|{0-U_1V_1'} \bigr\|_{{\mathrm{F}}}. \] Then it is better to estimate $U_{1}V_{1}^{\prime}$ by $0$. \begin{theorem} \label{thm:upperbound2} Suppose (\ref{eq:ass1}) and (\ref{eq:ass2}) hold. In addition, assume that \begin{eqnarray} \exp\bigl(C_{1}\bigl(k_{q}^{u}+\log\bigl(ep/k_{q}^{u}\bigr)\bigr)\bigr) &>&n\lambda^{2}, \label{eq:TheoremExpAssup1} \\ \exp\bigl(C_{1}\bigl(k_{q}^{v}+\log\bigl(em/k_{q}^{v}\bigr)\bigr)\bigr) &>&n\lambda^{2}, \label{eq:TheoremExpAssup2} \end{eqnarray} for some $C_{1}>0$. Then there exists $C_{2}>0$ only depending on $M,q,\kappa$ and $C_1$, such that \[ \sup_{\Sigma\in\mathcal{F}_q}\mathbb{E}_{\Sigma}\bigl\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2}\leq C_2\varepsilon_n^2. \] \end{theorem} \begin{remark} The assumptions (\ref{eq:TheoremExpAssup1}) and (\ref{eq:TheoremExpAssup2}) imply that the tail probability in Theorem~\ref{thm:upperbound1} is sufficiently small. As long as there exists a small constant $\delta>0$ such that \[ p\vee e^{k_q^u}\geq n^{\delta}\quad \mbox{and}\quad m\vee e^{k_q^v} \geq n^{\delta} \] hold, (\ref{eq:TheoremExpAssup1}) and (\ref{eq:TheoremExpAssup2}) are satisfied with some $C_1>0$. Notice that $p>n^{\delta}$ is commonly assumed in high-dimensional statistics to obtain convergence results in expectation. The assumption here is weaker than that. \end{remark} \subsection{Lower bounds} \label{sec:lower} Theorems \ref{thm:upperbound1} and \ref{thm:upperbound2} show that the procedure proposed in~(\ref{eq:pickset}) attains the rate $\varepsilon_n^2$.
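Before turning to the lower bounds, we note that the truncation rule used for Theorem~\ref{thm:upperbound2} is straightforward to implement. The following minimal sketch (hypothetical helper name, with $M$ and $r$ treated as known, as in the theorem) zeroes out the estimate exactly when its Frobenius norm exceeds $2M\sqrt{r}$:

```python
import numpy as np

def truncate_estimate(UV_hat, M, r):
    """Keep UV_hat if its Frobenius norm is at most 2*M*sqrt(r);
    otherwise return the all-zero matrix, which is then a provably
    better estimate of U1 V1' by the triangle-inequality argument above."""
    if np.linalg.norm(UV_hat, "fro") <= 2 * M * np.sqrt(r):
        return UV_hat
    return np.zeros_like(UV_hat)

small = 0.1 * np.ones((4, 3))   # Frobenius norm ~ 0.35, below 2*M*sqrt(r) ~ 2.83
big = 10.0 * np.ones((4, 3))    # Frobenius norm ~ 34.6, above the threshold
kept = truncate_estimate(small, M=1.0, r=2)
zeroed = truncate_estimate(big, M=1.0, r=2)
print(np.allclose(kept, small), np.all(zeroed == 0))
```

The rule only ever replaces an estimate by zero; it never alters an estimate whose norm is already consistent with the bound \eqref{eq:BoundTruth}.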
In this section, we show that this rate is optimal among all estimators. More specifically, we show that the following minimax lower bounds hold for $q\in[0,2)$. \begin{theorem} \label{thm:lower-bd-q} Assume that $1\leq r \leq\frac{k_q^u\wedge k_q^v}{2}$, and that \begin{equation} n\lambda^2 \geq C_0 \biggl( r+\log{\eexp p\over k_q^u} \vee\log{\eexp m\over k_q^v} \biggr)\label{eq:assumptionlower} \end{equation} for some sufficiently large constant $C_0$. Then there exists a constant $c>0$ depending only on $q$ and an absolute constant $c_0$ such that the minimax risk for estimating $U_1V_1'$ satisfies \[ \inf_{(\wh{U}_1, \wh{V}_1)}\sup_{\Sigma\in\mathcal{F}_q} \Expect_{\Sigma}\bigl\|\wh{U}_1\wh{V}'_1 - U_1 V_1'\bigr\|_{\mathrm{F}}^2 \geq c\varepsilon_n^2\wedge c_0. \] \end{theorem} The proof of Theorem~\ref{thm:lower-bd-q} is given in the supplementary material \cite{supp2}. \begin{remark} Assumption (\ref{eq:assumptionlower}) is necessary for consistent estimation. \end{remark} \section{Discussion} \label{sec:discussion} Below we discuss two related issues. \subsection{Minimax rates for individual sparsity} In this paper, we have derived tight minimax estimation rates for the leading sparse canonical correlation directions, where the sparsity is described by the rapid decay of the ordered row norms in $U_1$ and $V_1$ (as characterized by the weak-$\ell_q$ notion). Another interesting case of sparsity is when the individual column vectors of $U_1$ and $V_1$ are sparse. For instance, suppose \begin{equation} \label{eq:col-sparse} \|u_i\|_{q,w}\leq t_u\quad \mbox{and}\quad \|v_i\|_{q,w}\leq t_v \qquad\forall i \in[r], \end{equation} where $\|\cdot\|_{q,w}$ is defined as in \eqref{eq:weak-lq} by treating any $p$-vector as a $p\times1$ matrix. Let $\calF_q^c = \calF_q^c(t_u, t_v, p, m, r, \lambda; \kappa, M)$ be defined as in Section~\ref{sec:intro} following \eqref{eq:weak-lq} but with the sparsity notion changed to that in \eqref{eq:col-sparse}.
Similar to \eqref{eq:x-q-u}--\eqref{eq:effectivesparsity}, let \[ y_q^u = \max\biggl\{ 0\leq y\leq p: y\leq t_u \biggl( \frac{n\lambda^2}{\log (ep/(ry) )} \biggr)^{q/2} \biggr\},\qquad j_q^u = {\bigl\lceil{y_q^u} \bigr\rceil}, \] and let $y_q^v$ and $j_q^v$ be analogously defined. Then we have: \begin{theorem} \label{thm:col-sparse} Assume that $1\leq r\leq\frac{j_q^u\wedge j_q^v}{2}$, $2r j_q^u\leq p$, $2r j_q^v\leq m$ and $n\lambda^2 \geq C_0(r+ \log\frac{ep}{rj_q^u} \vee\log\frac{em}{rj_q^v})$ for some sufficiently large constant $C_0$. Then there is a constant $c>0$ depending only on $q$ and an absolute constant $c_0>0$ such that \begin{equation}\qquad \label{eq:col-lowbd} \inf_{\wh{U_1 V_1'}} \sup_{\Sigma\in\calF_q^c} \Expect_\Sigma\bigl\| U_1 V_1' - \wh{U_1 V_1'}\bigr\|_{\mathrm{F}}^2 \geq c_0 \wedge\frac{c}{n\lambda^2}r\biggl( j_q^u \log\frac{\eexp p}{r j_q^u } + j_q^v \log\frac{\eexp m}{r j_q^v } \biggr). \end{equation} If in addition $rj_q^u\leq p^{1-\alpha}$, $r j_q^v \leq m^{1-\alpha}$ for some small $\alpha\in(0,1)$, $r \leq C \log(p\wedge m)$ for some $C > 0$ and the conditions of Theorem~\ref{thm:upperbound2} are satisfied with $k_q^u = r j_q^u$ and $k_q^v = r j_q^v$, then a matching upper bound is achieved by the estimator in Theorem~\ref{thm:upperbound2} with $k_q^u = r j_q^u$ and $k_q^v = r j_q^v$. \end{theorem} The proof of Theorem~\ref{thm:col-sparse} is given in the supplementary material \cite{supp2}. The lower bound (\ref{eq:col-lowbd}) for individual sparsity is larger than the minimax rate (\ref{eq:DefEpsilon}) for joint sparsity when $t_u=s_u$ and $t_v=s_v$. \subsection{Adaptation, computation and some recent work} The main purpose of proposing the estimator in \eqref{eq:pickset} is to determine the minimax estimation rates for the sparse CCA problem under weak assumptions. Admittedly, it requires the knowledge of the parameter space and is computationally intensive.
Designing adaptive and computationally efficient procedures that achieve statistically optimal performance is an interesting and important research direction. Building upon the insights developed in the current paper, Gao et al. \cite{gao14b} have proposed an adaptive and efficient procedure for sparse CCA. The procedure first obtains a crude estimator via a convex relaxation of the problem \eqref{eq:pickset} here, which is then refined by a group sparse linear regression. The resulting estimator achieves optimal rates of convergence in estimating the leading sparse canonical directions under a prediction loss, without imposing any structural assumption on $\Sigma_x$ and $\Sigma_y$, when the residual directions are absent. Notably, the procedure in \cite{gao14b} requires a larger sample size than in the present paper, which has been shown to be essentially necessary for any computationally efficient procedure under the Gaussian CCA model considered here, under the assumption of planted clique hardness. The argument has also led to computational lower bounds for the sparse PCA problem under the Gaussian spiked covariance model, bridging the gap between the sparse PCA literature and the computational lower bounds in \cite{Berthet13} and \cite{wang2014statistical}. It is of great interest to further investigate whether there is an adaptive and efficient estimator that attains the statistical optimality established in the current paper in full generality. \section{Proof of main results} \label{sec:proof} This section is devoted to the proof of Theorems~\ref{thm:upperbound1}--\ref{thm:upperbound2}. The proof of Theorems \ref{thm:lower-bd-q}--\ref{thm:col-sparse} is given in the supplementary material \cite{supp2}. \subsection{Outline of proof and preliminaries} \label{sec:prelim-proof} To prove both Theorems \ref{thm:upperbound1} and \ref{thm:upperbound2}, we go through the following three steps: \begin{longlist}[1.] \item[1.]
We decompose the value of the loss function into multiple terms which result from different sources; \item[2.] We derive an individual high-probability bound for each term in the decomposition; \item[3.] We assemble the individual bounds to obtain the desired upper bounds on the loss and the risk functions. \end{longlist} In the rest of this subsection, we carry out these three steps in order. To facilitate the presentation, we introduce below several important quantities to be used in the proof. Recall the effective sparsity $(k_q^u, k_q^v)$ defined in \eqref{eq:effectivesparsity}. Let $S_{u}$ be the index set of the rows of $U_{1}$ with the $k_{q}^{u}$ largest $\ell_2$ norms. In case $U_{1}$ has no more than $k_q^u$ nonzero rows, we include in $S_u$ the smallest indices of the zero rows in $U_{1}$ such that $|S_u| = k_q^u$. We also define $S_{v}$ analogously. In what follows, we refer to them as the \emph{effective support sets}. We define $({U}^*_1, {V}^*_1)$ as a solution to \begin{eqnarray} \label{eq:UV-oracle} &&\max_{(A,B)} \Tr \bigl(A'{\Sigma}_{xy}B\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad \mbox{s.t.}\quad A'{\Sigma}_xA=B'{\Sigma}_yB=I_r\mbox{ and } \supp(A) \subset S_u, \supp(B) \subset S_v. \end{eqnarray} In what follows, we refer to them as the \emph{sparse approximations} to $U_1$ and $V_1$. By definition, when $q = 0$, ${U}^*_1({V}^*_1)' =U_1 V_1'$, which can be derived rigorously from Theorem~\ref{thm:sintheta}. In addition, we define the \emph{oracle estimator} $(\widehat{U}^*_1, \widehat{V}^*_1)$ as a solution to \begin{eqnarray} \label{eq:UV-oracle-est} &&\max_{(A,B)} \Tr \bigl(A'\wh{\Sigma}_{xy}B\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\mbox{s.t.} \quad A'\wh{\Sigma}_xA=B'\wh{\Sigma}_yB=I_r\mbox{ and } \supp(A) = S_u, \supp(B) = S_v.
\end{eqnarray} In case the program \eqref{eq:UV-oracle} [or \eqref{eq:UV-oracle-est}] has multiple global optimizers, we define $({U}^*_1, {V}^*_1)$ [or $(\widehat{U}^*_1, \widehat{V}^*_1)$] by picking an arbitrary one. \begin{remark} The programs (\ref{eq:UV-oracle}) and (\ref{eq:UV-oracle-est}) are introduced to separate the error caused by not knowing the covariance matrices $\Sigma_x$ and $\Sigma_y$ from the error caused by not knowing the effective supports $S_u$ and $S_v$. The program (\ref{eq:UV-oracle-est}) assumes known effective supports but unknown covariance, and the program (\ref{eq:UV-oracle}) assumes both known effective supports and known covariance. \end{remark} We note that \[ \bigl({U}^*_1\bigr)_{S_u^c *} = \bigl(\widehat{U}^*_1 \bigr)_{S_u^c *} = 0,\qquad \bigl({V}^*_1\bigr)_{S_v^c *} = \bigl(\widehat{V}^*_1\bigr)_{S_v^c *} = 0. \] By definition, the matrices $({U}^*_1,{V}^*_1)$ are normalized with respect to $\Sigma_x$ and $\Sigma_y$, and $(\widehat{U}^*_1,\widehat{V}^*_1)$ are normalized with respect to $\widehat{\Sigma}_x$ and $\widehat{\Sigma}_y$. Note that the notation $A_{S*}$ stands for the submatrix of $A$ with rows in $S$ and all columns. Last but not least, let \begin{equation} \label{eq:supp-est} \wh{S}_u = \supp(\wh{U}_1),\qquad \wh{S}_v = \supp(\wh{V}_1). \end{equation} By the definition of $(\wh{U}_1, \wh{V}_1)$ in \eqref{eq:pickset}, we have $|\wh{S}_u| = k_q^u$ and $|\wh{S}_v| = k_q^v$ with probability one. Recall the minimax rate $\varepsilon_n^2$ defined in (\ref{eq:DefEpsilon}). \subsection{Loss decomposition} In the first step, we decompose the loss function into five terms as follows. \begin{lemma} \label{lem:lossdecompose} Assume $\frac{1}{n} (k_{q}^{u}\log(ep/k_{q}^{u})+k_{q}^{v}\log(em/k_{q}^{v}))<c$ for sufficiently small $c>0$.
For any constant $C'>0$, there exists a constant $C>0$ only depending on $M$ and $C'$, such that \begin{eqnarray} &&\bigl\|\widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \nonumber \\ &&\qquad\leq 3\bigl\| {U}^*_{1}\bigl({V}^*_{1}\bigr)^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \label{eq:sparseapproxerror} \\ &&\qquad\quad{}+3\bigl\| \widehat{U}^*_1\bigl(\widehat{V}^*_1 \bigr)'-{U}^*_{1}\bigl({V}^*_{1} \bigr)^{\prime}\bigr\| _{\mathrm{F}}^{2} \label{eq:oracleloss} \\ &&\qquad\quad{}-\frac{6C}{\lambda_{r}} \bigl\langle{\Sigma}_{x}{U}_{2} \Lambda_{2}{V}_{2}^{\prime}{\Sigma}_{y},\widehat{U}^*_1\bigl(\widehat{V}^*_{1}\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\label{eq:bias} \\ &&\qquad\quad{}+\frac{6C}{\lambda_{r}} \bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy},\widehat{U}^*_1\bigl( \widehat{V}^*_1\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\label{eq:excessloss1} \\ &&\qquad\quad{}+\frac{6C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}^*_{1}\Lambda_{1}\bigl(\widehat{V}^*_1\bigr)^{\prime} \widehat{\Sigma}_{y}-{\Sigma}_{x}{U}_{1} \Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y}, \widehat{U}^*_1 \bigl(\wh{V}_{1}^{\ast} \bigr)'-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle, \label{eq:excessloss2} \end{eqnarray} with probability at least $1-\exp(-C' k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C' k_{q}^{v}\log(em/k_{q}^{v}))$. \end{lemma} \begin{pf} See Section~\ref{sec:lossdecompose-pf}.
\end{pf} In particular, Lemma~\ref{lem:lossdecompose} decomposes the total loss into the sum of the sparse approximation error in (\ref{eq:sparseapproxerror}), the oracle loss in (\ref{eq:oracleloss}), which is present even if we have the oracle knowledge of the effective support sets $S_u$ and $S_v$, the bias term in (\ref{eq:bias}), caused by the presence of the residual term $U_2\Lambda_2V_2^{\prime}$ in the CCA structure (\ref{eq:CCA}), and the two excess loss terms in (\ref{eq:excessloss1}) and (\ref{eq:excessloss2}), resulting from the uncertainty about the effective support sets. When $q = 0$, the sparse approximation error term \eqref{eq:sparseapproxerror} vanishes. \subsection{Bounds for individual terms} We now state the bounds for the individual terms obtained in Lemma~\ref{lem:lossdecompose} as five separate lemmas. The proofs of these lemmas are deferred to Sections \ref{sec:sparseapproxerror-pf}--\ref{sec:excessloss2-pf}. \begin{lemma}[(Sparse approximation)] \label{lem:sparseapproxerror} Suppose (\ref{eq:ass1}) and (\ref{eq:ass2}) hold. There exists a constant $C>0$ only depending on $M,\kappa,q$, such that \begin{eqnarray} \bigl\| {U}^*_{1}\bigl({V}^*_{1}\bigr)^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} &\leq&\frac{Cq}{2-q} \eps_n^2, \label{eq:SparseApproClaim1} \\ \bigl\| {U}^*_{1}\Lambda_{1}\bigl({V}^*_{1} \bigr)^{\prime}-U_{1}\Lambda_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} &\leq& \frac{Cq}{2-q}\lambda^{2} \eps_n^2. \label{eq:SparseApproClaim2} \end{eqnarray} \end{lemma} \begin{lemma}[(Oracle loss)] \label{lem:oracleloss} Suppose $\frac{1}{n\lambda^{2}} (k_{q}^{u}+k_{q}^{v}+\log(ep/k_{q}^{u})+\log(em/k_{q}^{v}) )<c$ for some sufficiently small $c>0$ and that (\ref{eq:ass2}) holds.
For any constant $C'>0$, there exists a constant $C>0$ only depending on $M,q,\kappa$ and $C'$, such that \begin{equation} \bigl\| \widehat{U}^*_1\bigl(\widehat{V}^*_1 \bigr)'-{U}^*_{1} \bigl({V}^*_{1} \bigr)^{\prime}\bigr\| _{\mathrm{F}}^{2}\leq\frac{Cr}{n\lambda^{2}} \biggl[ k_{q}^{u}+k_{q}^{v}+\log \biggl(\frac{ep}{k_{q}^{u}} \biggr)+\log \biggl(\frac{em}{k_{q}^{v}} \biggr) \biggr], \label{eq:OracleLossClaim1} \end{equation} with probability at least $1-\exp(-C' (k_{q}^{u}+\log (ep/k_{q}^{u})) )-\exp(-C' (k_{q}^{v}+\break \log (em/k_{q}^{v})))$. Moreover, if (\ref{eq:ass1}) also holds, then with the same probability \begin{equation} \label{eq:OracleLossClaim2} \bigl\| \widehat{U}^*_1 \Lambda_{1}\bigl(\widehat{V}^*_1\bigr)'-{U}^*_{1}\Lambda_{1}\bigl({V}^*_{1} \bigr)^{\prime}\bigr\| _{\mathrm{F}}^{2} \leq C \lambda^{2} \eps_n^2. \end{equation} \end{lemma} The proof of Lemma~\ref{lem:oracleloss} is given in the supplementary material \cite{supp2}. Since $r\leq k_q^u\wedge k_q^v$, (\ref{eq:OracleLossClaim1}) is bounded above by $C\varepsilon_n^2$. The error bounds in Lemma~\ref{lem:oracleloss} are due to the estimation error of the true covariance matrices by the sample covariance matrices on the subset $S_u\times S_v$. \begin{lemma}[(Bias)] \label{lem:bias} Suppose $\frac{1}{n}(k_{q}^{u}\log(ep/k_{q}^{u})+k_{q}^{v}\log(em/k_{q}^{v}))<C_{1}$ for some constant $C_{1}>0$.
For any constant $C'>0$, there exists a constant $C>0$ only depending on $M,\kappa,C_1$ and $C'$, such that \begin{eqnarray*} & &\bigl\llvert \bigl\langle{\Sigma}_{x}{U}_{2} \Lambda_2{V}_{2}^{\prime}{\Sigma}_{y}, \widehat{U}^*_{1} \bigl(\widehat{V}^*_{1} \bigr)'-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\bigr\rrvert \\ & &\qquad\leq C\lambda_{r+1} \bigl( \bigl\|\widehat{U}^*_{1} \bigl( \widehat{V}^*_{1}\bigr)'-{U}_{1}{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}^2+ \bigl\|U_{1}V^{\prime}_{1}- \widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}^2 \bigr), \end{eqnarray*} with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$. \end{lemma} The bias in Lemma~\ref{lem:bias} is $0$ when $U_2\Lambda_2V_2'$ is $0$. \begin{lemma}[(Excess loss 1)] \label{lem:excessloss1} Suppose (\ref{eq:ass1}) holds. For any constant $C'>0$, there exists a constant $C>0$ only depending on $M$ and $C'$, such that \[ \bigl\llvert \bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy}, \widehat{U}^*_1 \bigl(\widehat{V}^*_{1} \bigr)'-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\bigr\rrvert \leq C\lambda\eps_n \bigl\|\widehat{U}_{1}\widehat{V}_{1}^{\prime}- \widehat{U}^*_1\bigl(\widehat{V}^*_{1} \bigr)'\bigr\| _{\mathrm{F}}, \] with probability at least $1-\exp (-C'(r(k_{q}^{u}+k_{q}^{v})+k_{q}^{u}\log(ep/k_{q}^{u})+\break k_{q}^{v}\log(em/k_{q}^{v})))$. \end{lemma} \begin{lemma}[(Excess loss 2)] \label{lem:excessloss2} Suppose (\ref{eq:ass1}) and (\ref{eq:ass2}) hold.
For any constant \mbox{$C'>0$}, there exists a constant $C>0$ only depending on $M,\kappa,q$ and $C'$, such that \begin{eqnarray*} &&\bigl\llvert \bigl\langle\widehat{\Sigma}_{x}\widehat{U}^*_1 \Lambda_{1} \bigl(\widehat{V}^*_1\bigr)' \widehat{\Sigma}_{y}-{\Sigma}_{x}{U}_{1} \Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y},\widehat{U}^*_1\bigl(\widehat{V}^*_1\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\bigr\rrvert \\ &&\qquad\leq C \lambda\eps_n\bigl\| \widehat{U}^*_1\bigl( \widehat{V}^*_1\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}, \end{eqnarray*} with probability at least $1-\exp(-C'(k_{q}^{u}+\log(ep/k_{q}^{u})))-\exp(-C'(k_{q}^{v}+\break \log(em/k_{q}^{v})))$. \end{lemma} \subsection{Proof of Theorem \texorpdfstring{\protect\ref{thm:upperbound1}}{1}} For notational convenience, let \begin{eqnarray*} R&=&\bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\|_{\mathrm{F}}, \qquad\theta=\bigl\| {U}^*_{1} \bigl({V}^*_{1} \bigr)^{\prime}-U_{1}V_{1}^{\prime}\bigr\| _{\mathrm{F}},\\ \delta&=&\bigl\| \widehat{U}^*_1\bigl(\widehat{V}^*_1\bigr)'-{U}^*_{1} \bigl({V}^*_{1}\bigr)^{\prime}\bigr\|_{\mathrm{F}}. \end{eqnarray*} Consider the event such that the conclusions of Lemmas \ref{lem:lossdecompose}--\ref{lem:excessloss2} hold, which occurs with probability at least $1-\exp(-C'(k_{q}^{u}+\log(ep/k_{q}^{u})))-\exp(-C'(k_{q}^{v}+\log(em/k_{q}^{v})))$ according to the union bound. On this event, Lemmas \ref{lem:sparseapproxerror} and \ref{lem:oracleloss} imply that \[ \theta^{2}\leq C \varepsilon_n^{2}\quad \mbox{and} \quad\delta^{2} \leq C \eps_n^2. \] Moreover, Lemma~\ref{lem:bias} implies \[ \biggl\llvert \frac{1}{\lambda_{r}} \bigl\langle{\Sigma}_{x}{U}_{2} \Lambda_{2}{V}_{2}^{\prime}{\Sigma}_{y}, \widehat{U}^*_1\bigl(\widehat{V}^*_1 \bigr)'-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\biggr\rrvert \leq \frac{C\lambda_{r+1}}{\lambda}\bigl( R^{2}+ \theta^{2}+\delta^{2} \bigr).
\] Lemma~\ref{lem:excessloss1} implies \[ \biggl\llvert \frac{1}{\lambda_{r}} \bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy},\widehat{U}^*_1\bigl( \widehat{V}^*_1\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\biggr\rrvert \leq C\varepsilon_n (R+\theta +\delta), \] and Lemma~\ref{lem:excessloss2} implies \[ \biggl\llvert \frac{1}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}^*_1\Lambda_{1}\bigl(\widehat{V}^*_1 \bigr)^{\prime}\widehat{\Sigma}_{y}-{\Sigma}_{x}{U}_{1}\Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y},\widehat{U}^*_1\bigl( \widehat{V}^*_1\bigr)'-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\biggr\rrvert \leq C \varepsilon_n (R+\theta+\delta). \] Together with Lemma~\ref{lem:lossdecompose}, the above bounds lead to \begin{eqnarray*} R^{2} &\leq&C\bigl(\theta^{2}+\delta^{2}\bigr)+ \frac{C\lambda_{r+1}}{\lambda}\bigl(R^{2}+\theta^{2}+ \delta^{2}\bigr)+C\eps_n (R+\theta+\delta) \\ &\leq&\frac{C\lambda_{r+1}}{\lambda}R^{2}+C\eps_n R+C \eps_n ^{2}. \end{eqnarray*} Under assumption \eqref{eq:ass2}, we have $\frac{1}{2}R^{2}\leq C\eps_n R+C\eps_n ^{2}$, implying \[ R^{2}\leq C\eps_n ^{2}, \] for some $C>0$. We complete the proof by noting that the conditions of Lemmas \ref{lem:lossdecompose}--\ref{lem:excessloss2} are satisfied under assumptions \eqref{eq:ass1} and \eqref{eq:ass2}. \subsection{Proof of Theorem \texorpdfstring{\protect\ref{thm:upperbound2}}{2}} Recall the definition of $\eps_n$ in \eqref{eq:DefEpsilon}, and let $C_1$ be the constant in \eqref{eq:TheoremExpAssup1} and \eqref{eq:TheoremExpAssup2}. The result of Theorem~\ref{thm:upperbound1} implies that we can choose an arbitrarily large constant $C^{\prime}$ such that $C^{\prime}>C_{1}$.
Given~$C'$, there exists a constant~$C$, by which we can bound the risk as follows: \begin{eqnarray} &&\mathbb{E}_{\Sigma} \bigl\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \nonumber \\ &&\qquad\leq \mathbb{E}_\Sigma\bigl[ \bigl\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime} \bigr\|_{\mathrm{F}}^{2} {\mathbf{1}_{\{{\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime}\|_{\mathrm{F}}^{2}\leq C\eps_n ^{2} }\}}} \bigr] \nonumber \\ &&\qquad\quad{} +\mathbb{E}_\Sigma\bigl[ \bigl\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime} \bigr\|_{\mathrm{F}}^{2} {\mathbf{1}_{\{{\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime}\|_{\mathrm{F}}^{2} > C\eps_n ^{2} }\}}} \bigr] \nonumber \\ &&\qquad\leq C\eps_n ^{2}+\mathbb{E}_\Sigma\bigl[ \bigl( 2\bigl\|\widehat{U_{1}V_{1}^{\prime}} \bigr\|_{\mathrm{F}}^{2}+2\bigl\| U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \bigr) {\mathbf{1}_{\{{\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime}\|_{\mathrm{F}}^{2} > C\eps_n ^{2} }\}}} \bigr] \label{eq:TheoremExp1} \\ &&\qquad\leq C\eps_n ^{2}+ 6M^{2}r \mathbb{P}_\Sigma\bigl( \bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\|_{\mathrm{F}}^{2}>C\eps_n ^{2} \bigr) \label{eq:TheoremExp2} \\ &&\qquad\leq C_{2}\eps_n ^{2}. \label{eq:TheoremExp3} \end{eqnarray} Here, inequality (\ref{eq:TheoremExp1}) is due to the triangle inequality and the fact that \[ \bigl\{ \bigl\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2}>C\eps_n ^{2} \bigr\} \subset \bigl\{ \bigl\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2}>C\eps_n ^{2} \bigr\}. \] In fact, if $\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime}\| _{\mathrm{F}}^{2}\leq C\eps_n ^{2}$, then $\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}\|_{\mathrm{F}}\leq \sqrt{C}\eps_n +M\sqrt{r}\leq2M\sqrt{r}$.
By our definition of the estimator, this means $\widehat{U_{1}V_{1}^{\prime}}=\widehat{U}_{1}\widehat{V}_{1}^{\prime}$, which further implies $\| \widehat{U_{1}V_{1}^{\prime}}-U_{1}V_{1}^{\prime}\|_{\mathrm{F}}^{2}\leq C\eps_n ^{2}$. Inequality (\ref{eq:TheoremExp2}) follows from our definition of the estimator $\widehat{U_{1}V_{1}^{\prime}}$ and (\ref{eq:BoundTruth}). The last inequality follows from the conclusion of Theorem~\ref{thm:upperbound1} and assumptions (\ref{eq:TheoremExpAssup1}) and (\ref{eq:TheoremExpAssup2}). This completes the proof. \section{Proof of auxiliary results} \label{sec:aux-proof} In this section, we prove Lemmas \ref{lem:lossdecompose}--\ref{lem:sparseapproxerror} and \ref{lem:bias}--\ref{lem:excessloss2} used in the proofs of Theorems \ref{thm:upperbound1} and \ref{thm:upperbound2}. The proof of Lemma~\ref{lem:oracleloss} is given in the supplementary material \cite{supp2}. Throughout the section, without further notice, $\eps_n^2$ is defined as in \eqref{eq:DefEpsilon}. \subsection{A generalized sin-theta theorem and Gaussian quadratic form with rank constraint} \label{sec:key} We first introduce two key results used in the proof of Lemmas \ref{lem:lossdecompose}--\ref{lem:excessloss2} that might be of independent interest. The first result is a generalized sin-theta theorem. For the definition of unitarily invariant norms, we refer the readers to \cite{Bhatia97,Stewart90}. In particular, both the Frobenius norm $\|\cdot\|_{\mathrm{F}}$ and the operator norm $\|\cdot\|_{\mathrm{op}}$ are unitarily invariant. \begin{theorem} \label{thm:sintheta} Consider matrices $X,Y\in\mathbb{R}^{p\times m}$. Let the SVDs of $X$ and $Y$ be \[ X=A_1D_1B_1^{\prime}+A_2D_2B_2^{\prime},\qquad Y=\widehat{A}_1\widehat{D}_1\widehat{B}_1^{\prime}+\widehat{A}_2 \widehat{D}_2\widehat{B}_2^{\prime}, \] with $D_1=\mathop{\operatorname{diag}}(d_1,\ldots,d_r)$ and $\widehat{D}_1= \mathop{\operatorname{diag}}(\widehat{d}_1,\ldots,\widehat{d}_r)$.
Suppose there is a positive constant $\delta\in(0,d_r]$ such that $\| \widehat{D}_2\|_{\mathrm{op}}\leq d_r-\delta$. Let $\|\cdot\|$ be any unitarily invariant norm, and $\varepsilon= \|A_1^{\prime}(X-Y) \| \vee\|(X-Y)B_1 \|$. Then we have \begin{equation} \label{eq:MPprincipal}\bigl\|A_1D_1B_1^{\prime}- \widehat{A}_1\widehat{D}_1\widehat{B}_1^{\prime} \bigr\| \leq \biggl(\frac{\sqrt{2}(d_1+\widehat{d}_1)}{\delta}+1 \biggr)\varepsilon. \end{equation} If further there is an absolute constant $\bar{\kappa}\geq1$ such that $d_1\vee\widehat{d}_1\leq\bar{\kappa} d_r$, then there is a constant $C>0$ only depending on $\bar{\kappa}$, such that \begin{equation} \label{eq:MPnew} \bigl\|A_1B_1^{\prime}- \widehat{A}_1\widehat{B}_1^{\prime}\bigr\| \leq \frac{C\varepsilon}{\delta}. \end{equation} \end{theorem} \begin{remark} In addition, when $X$ and $Y$ are positive semi-definite and $A_l=B_l$, $\wh{A}_l=\wh{B}_l$ for $l=1,2$, we recover the classical Davis--Kahan sin-theta theorem \cite{Davis70} $\|A_1A_1'-\wh{A}_1\wh{A}_1'\|\leq C\varepsilon/\delta$ up to a constant multiplier. \end{remark} The second result is an empirical process type bound for Gaussian quadratic forms with a rank constraint. \begin{lemma} \label{lem:EP} Let $\{Z_{i}\}_{1\leq i\leq n}$ be i.i.d. observations from $N(0,I_{d})$. Then there exist some $C,C^{\prime}>0$, such that for any $t>0$, \[ \mathbb{P} \Biggl( \sup_{\{K:\| K\| _{\mathrm{F}}\leq1, \operatorname{rank}(K)\leq r\}}\Biggl\llvert \Biggl\langle \frac{1}{n}\sum_{i=1}^{n}Z_{i}Z_{i}^{\prime}-I_{d},K \Biggr\rangle\Biggr\rrvert >t \Biggr) \leq\exp\bigl(C^{\prime}rd-Cn \bigl(t^{2}\wedge t\bigr)\bigr). \] \end{lemma} The proofs of Theorem~\ref{thm:sintheta} and Lemma~\ref{lem:EP} are given in the supplementary material~\cite{supp2}. \subsection{Proof of Lemma \texorpdfstring{\protect\ref{lem:lossdecompose}}{1}} \label{sec:lossdecompose-pf} Recall the definition of $(S_u, S_v)$ and $(\wh{S}_u, \wh{S}_v)$ in Section~\ref{sec:prelim-proof}.
From here on, let \begin{equation} \label{eq:Tu-Tv} T_{u}=S_{u}\cup\widehat{S}_{u}\quad \mbox{and}\quad T_{v}=S_{v}\cup\widehat{S}_{v}. \end{equation} The proof of Lemma~\ref{lem:lossdecompose} depends on the following two technical results. Their proofs are given in the supplementary material \cite{supp2}. \begin{lemma} \label{lem:linearloss} For matrices $A,B,E,F$ and a diagonal matrix $D=(d_{l})_{1\leq l\leq r}$ with $d_{1}\geq d_{2}\geq\cdots\geq d_{r}>0$ and $A^{\prime}A=B^{\prime}B=E^{\prime}E=F^{\prime}F=I_r$, we have \[ \frac{d_{r}}{2}\bigl\| AB^{\prime}-EF^{\prime}\bigr\| _{\mathrm{F}}^{2} \leq \bigl\langle ADB^{\prime},AB^{\prime}-EF^{\prime} \bigr\rangle \leq\frac{d_{1}}{2}\bigl\| AB^{\prime}-EF^{\prime}\bigr\| _{\mathrm{F}}^{2}. \] \end{lemma} \begin{lemma} \label{lem:diff} Under the assumption of Lemma~\ref{lem:lossdecompose}, for any constant $C'>0$, there exists a constant $C>0$ only depending on $M$ and $C'$, such that for any matrix $A$ supported on $T_{u}\times T_{v}$, we have \[ {C}^{-1} \|A\|_{\mathrm{F}}^2\leq \bigl\|\widehat{\Sigma}_{x}^{1/2} A \widehat{\Sigma}_{y}^{1/2}\bigr\|_{\mathrm{F}}^{2} \leq C\| A\| _{\mathrm{F}}^{2}, \] with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$. \end{lemma} \begin{pf*}{Proof of Lemma~\ref{lem:lossdecompose}} First of all, the triangle inequality and Jensen's inequality together lead to \begin{eqnarray} \label{eq:losssurrogate} && \bigl\| \widehat{U}_{1} \widehat{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad \leq3\bigl( \bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}-U_{1}^{\ast}V_{1}^{\ast\prime} \bigr\| _{\mathrm{F}}^{2} + \bigl\| \widehat{U}_{1} \widehat{V}_{1}^{\prime}-\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}\bigr\| _{\mathrm{F}}^{2} +\bigl\| U_{1}^{\ast}V_{1}^{\ast\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \bigr).
\end{eqnarray} Now, it remains to bound $\| \widehat{U}_{1}\widehat{V}_{1}^{\prime}-\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}\| _{\mathrm{F}}^{2}$. To this end, we have \begin{eqnarray} && \bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2} \nonumber \\ &&\qquad\leq C\bigl\| \widehat{\Sigma}_{x}^{1/2}\bigl( \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr) \widehat{\Sigma}_{y}^{1/2}\bigr\| _{\mathrm{F}}^{2} \label{eq:sigma1} \\ &&\qquad\leq\frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}^{1/2} \widehat{U}_{1}^{\ast}\Lambda_{1} \widehat{V}_{1}^{\ast\prime}\widehat{\Sigma}_{y}^{1/2},\widehat{\Sigma}_{x}^{1/2} \bigl(\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr) \widehat{\Sigma}_{y}^{1/2} \bigr\rangle\label{eq:linearloss} \\ &&\qquad = \frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1} \widehat{V}_{1}^{\ast\prime}\widehat{\Sigma}_{y}, \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle \nonumber \\ &&\qquad = \frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1} \widehat{V}_{1}^{\ast\prime}\widehat{\Sigma}_{y}-\widehat{\Sigma}_{xy},\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle \nonumber \\ &&\qquad\quad{} +\frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{xy}, \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle \nonumber \\ &&\qquad\leq\frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1} \widehat{V}_{1}^{\ast\prime}\widehat{\Sigma}_{y}-\widehat{\Sigma}_{xy},\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle\label{eq:aggregdef} \\ &&\qquad
= \frac{2C}{\lambda_{r}} \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1} \widehat{V}_{1}^{\ast\prime}\widehat{\Sigma}_{y}-{\Sigma}_{x}{U}_{1}\Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y},\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\label{eq:biasstructure1} \\ &&\qquad\quad{} - \frac{2C}{\lambda_{r}} \bigl\langle{\Sigma}_{x}{U}_{2} \Lambda_{2}{V}_{2}^{\prime}{\Sigma}_{y},\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle \nonumber \\ &&\qquad\quad{} + \frac{2C}{\lambda_{r}} \bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy}, \widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle. \nonumber \end{eqnarray} Here, (\ref{eq:sigma1}) is implied by Lemma~\ref{lem:diff} and (\ref{eq:linearloss}) is implied by Lemma~\ref{lem:linearloss}. To see (\ref{eq:aggregdef}), we note that $(\wh{U}_1, \wh{V}_1)$ is the solution to \eqref{eq:pickset}, and so $\mathop{\sf Tr}(\widehat{U}_{1}^{\prime}\widehat{\Sigma}_{xy}\widehat{V}_1) \geq \mathop{\sf Tr}((\widehat{U}_{1}^{\ast})'\widehat{\Sigma}_{xy} \wh{V}_{1}^{\ast})$, or equivalently \[ \bigl\langle\widehat{\Sigma}_{xy},\widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}^{\prime} \bigr\rangle \leq0. \] Equality (\ref{eq:biasstructure1}) comes from the CCA structure (\ref{eq:CCA}) and (\ref{eq:splitCCA}). Combining (\ref{eq:losssurrogate})--(\ref{eq:biasstructure1}) and rearranging the terms, we obtain the desired result. \end{pf*} \subsection{Proof of Lemma \texorpdfstring{\protect\ref{lem:sparseapproxerror}}{2}} \label{sec:sparseapproxerror-pf} The major difficulty in proving the lemma lies in the presence of the residual structure $U_2 \Lambda_2 V_2'$ in \eqref{eq:splitCCA} and the possible nondiagonality of the covariance matrices $\Sigma_x$ and $\Sigma_y$.
To overcome the difficulty, we introduce intermediate matrices $(\wt{U}_{1},\wt{V}_{1})$ defined as follows. First, we write the SVD of $(\Sigma_{xS_{u}S_{u}})^{1/2}{U}_{1S_{u}\ast}\Lambda_{1}({V}_{1S_{v}\ast})^{\prime}(\Sigma_{yS_{v}S_{v}})^{1/2}$ as
\begin{equation}
(\Sigma_{xS_{u}S_{u}})^{1/2}{U}_{1S_{u}\ast}\Lambda_{1}({V}_{1S_{v}\ast})^{\prime}(\Sigma_{yS_{v}S_{v}})^{1/2}=P\wt{\Lambda}_{1}Q^{\prime}, \label{eq:intermSVD}
\end{equation}
and let $\wt{U}_{1}^{S_{u}} = ( \Sigma_{xS_{u}S_{u}} )^{-1/2} P$ and $\wt{V}_{1}^{S_{v}} = ( \Sigma_{yS_{v}S_{v}} )^{-1/2} Q$. Finally, we define $\wt{U}_1\in\reals^{p\times r}$ and $\wt{V}_1 \in \reals^{m\times r}$ by setting
\begin{equation}
\label{eq:UV-intermediate} \qquad(\wt{U}_1)_{S_u *} = \wt{U}_{1}^{S_{u}},\qquad (\wt{U}_1)_{S_u^c *} = 0,\qquad (\wt{V}_1)_{S_v *} = \wt{V}_{1}^{S_{v}},\qquad (\wt{V}_1)_{S_v^c *} = 0.
\end{equation}
By definition, we have ${U}_{1S_{u}\ast}\Lambda_{1}({V}_{1S_{v}\ast})^{\prime}=\wt{U}_{1S_{u}\ast}\wt{\Lambda}_{1}(\wt{V}_{1S_{v}\ast})^{\prime}$. Last but not least, we define
\begin{eqnarray}\quad\hspace*{4pt} \label{eq:Xidef}
\Xi&=& P\wt{\Lambda}_{1}Q^{\prime} \nonumber \\[-8pt] \\[-8pt] \nonumber
&&{} + \bigl(I-PP^{\prime}\bigr) (\Sigma_{xS_{u}S_{u}})^{-1/2} \Sigma_{xS_{u}\ast}U_{2}\Lambda_{2}V_{2}^{\prime} \Sigma_{y\ast S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2} \bigl(I-QQ^{\prime}\bigr).
\end{eqnarray}
We now summarize the key properties of the $\wt{U}_1, \wt{V}_1$ and $\wt{\Lambda}_1$ matrices in the following two lemmas, the proofs of which are given in the supplementary material~\cite{supp2}.
\begin{lemma} \label{lem:CCA2} Let $P, Q$ and $\Xi$ be defined in \eqref{eq:intermSVD} and \eqref{eq:Xidef}. Then we have:
\begin{longlist}[1.]
\item[1.] The column vectors of $P$ and $Q$ are the $r$ leading left and right singular vectors of $\Xi$.
\item[2.]
The first and the $r$th singular values $\wt{\lambda}_1$ and $\wt{\lambda}_r$ of $\Xi$ satisfy $1.1\kappa\lambda\geq\wt{\lambda}_1 \geq\wt{\lambda}_{r}\geq0.9\lambda$, and the $(r+1)$th singular value $\wt{\lambda}_{r+1}\leq c\lambda$ for some sufficiently small constant $c>0$.
\item[3.] The column vectors of $\Sigma_{x}^{1/2}\wt{U}_{1}$ and $\Sigma_{y}^{1/2}\wt{V}_{1}$ are the $r$ leading left and right singular vectors of $\Sigma_{x}^{1/2}\wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}\Sigma_{y}^{1/2}$.
\end{longlist}
\end{lemma}
\begin{lemma} \label{lem:lq3} For some constant $C > 0$,
\[
\bigl\| \wt{U}_1' \Sigma_x U_{2}\bigr\| _{\mathrm{F}}^{2} \leq C\| U_{1S_{u}^{c}\ast}\| _{\mathrm{F}}^{2}\quad \mbox{and} \quad\bigl\| \wt{V}_1' \Sigma_y V_{2}\bigr\| _{\mathrm{F}}^{2} \leq C\| V_{1S_{v}^{c}\ast}\| _{\mathrm{F}}^{2}.
\]
\end{lemma}
In what follows, we prove claims \eqref{eq:SparseApproClaim1} and \eqref{eq:SparseApproClaim2} in order.
\begin{pf*}{Proof of (\ref{eq:SparseApproClaim1})} By the triangle inequality,
\begin{equation}
\bigl\| U_{1}^{\ast}V_{1}^{\ast\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}\leq\bigl\| U_{1}^{\ast}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}+\bigl\| \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}. \label{eq:piece1}
\end{equation}
It is sufficient to bound each of the two terms on the right-hand side.

\textit{$1^\circ$ Bound for $\| \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime}\| _{\mathrm{F}}$}. Since the smallest eigenvalues of $\Sigma_x$ and $\Sigma_y$ are bounded from below by some absolute positive constant,
\[
\bigl\| \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\|_{\mathrm{F}} \leq C\bigl\| \Sigma_{x}^{1/2} \bigl( \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr)\Sigma_{y}^{1/2}\bigr\|_{\mathrm{F}}.
\] By Lemma~\ref{lem:CCA2}, $\Sigma_{x}^{1/2}\wt{U}_{1}$ and $\Sigma_{y}^{1/2}\wt{V}_{1}$ collect the $r$ leading left and right singular vectors of $\Sigma_{x}^{1/2}\wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}\Sigma_{y}^{1/2}$, and by (\ref{eq:CCA}), $\Sigma_{x}^{1/2}{U}_{1}$ and $\Sigma_{y}^{1/2}{V}_{1}$ collect the $r$ leading left and right singular vectors of $\Sigma_{x}^{1/2}{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\Sigma_{y}^{1/2}$. Thus, Theorem~\ref{thm:sintheta} implies
\[
\bigl\| \Sigma_{x}^{1/2}\bigl(\wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime}\bigr)\Sigma_{y}^{1/2}\bigr\| _{\mathrm{F}}\leq \frac{C}{\lambda} \bigl\| \Sigma_{x}^{1/2}\bigl( \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-U_{1}\Lambda_{1}V_{1}^{\prime}\bigr)\Sigma_{y}^{1/2}\bigr\| _{\mathrm{F}}.
\]
The right-hand side of the above inequality is bounded as
\begin{eqnarray}
&& \label{eq:zhaoren}
\bigl\| \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-U_{1}\Lambda_{1}V_{1}^{\prime}\bigr\| _{\mathrm{F}} \nonumber\\
&& \qquad\leq\bigl\| \wt{U}_{1S_{u}\ast}\wt{\Lambda}_{1}(\wt{V}_{1S_{v}\ast})^{\prime}-U_{1S_{u}\ast}\Lambda_{1}(V_{1S_{v}\ast})^{\prime}\bigr\|_{\mathrm{F}} + \bigl\|U_{1S_{u}^{c}\ast}\Lambda_{1}(V_{1S_{v}\ast})^{\prime}\bigr\| _{\mathrm{F}} \nonumber \\[-8pt] \\[-8pt] \nonumber
&&\qquad\quad{} +\bigl\| U_{1S_{u}\ast}\Lambda_{1}(V_{1S_{v}^{c}\ast})^{\prime}\bigr\| _{\mathrm{F}} +\bigl\|U_{1S_{u}^{c}\ast}\Lambda_{1}(V_{1S_{v}^{c}\ast})^{\prime}\bigr\| _{\mathrm{F}} \nonumber \\ \nonumber
&&\qquad\leq C\lambda\bigl(\| U_{1S_{u}^{c}\ast}\|_{\mathrm{F}}+\| V_{1S_{v}^{c}\ast}\| _{\mathrm{F}}\bigr). \nonumber
\end{eqnarray}
Here, the last inequality is due to \eqref{eq:intermSVD} and \eqref{eq:UV-intermediate}.
For the last term, a similar argument to that used in Lemma 7 of \cite {CMW13a} leads to \begin{eqnarray} \label{eq:lqsparse} \bigl \| U_{1S_{u}^{c}\ast} \bigr\|_{\mathrm{F}}^2 & \leq&\frac {Cq}{2-q}k_{q}^{u} \bigl(s_{u}/k_{q}^{u}\bigr)^{2/q} \leq \frac{Cq}{2-q} \eps_n^2, \nonumber \\[-8pt] \\[-8pt] \nonumber \|V_{1S_{v}^{c}\ast}\| _{\mathrm{F}}^2 & \leq& \frac{Cq}{2-q} k_{q}^{v}\bigl(s_{v}/k_{q}^{v} \bigr)^{2/q} \leq\frac{Cq}{2-q} \eps_n^2, \end{eqnarray} where the last inequalities in both displays are due to \eqref {eq:x-q-u}--\eqref{eq:effectivesparsity}. Therefore, we obtain \begin{equation} \bigl\| \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}V_{1}^{\prime} \bigr\| _{\mathrm{F}}^{2}\leq\frac{Cq}{2-q} \eps_n^2. \label{eq:piece2} \end{equation} \textit{$2^\circ$ Bound for $\| U_{1}^{\ast}V_{1}^{\ast \prime}-\wt{U}_{1}\wt{V}_{1}^{\prime}\| _{\mathrm{F}}$}. Let $\lambda_{r+1}^{\ast}$ denote the $(r+1)$th singular value of $(\Sigma_{xS_{u}S_{u}}) ^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}$. Then we have \begin{eqnarray} \label{eq:bound2ndSparseAppr} &&\bigl\| U_{1}^{\ast}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{V}_{1}^{\prime}\bigr\| _{\mathrm{F}} \nonumber \\ &&\qquad = \bigl\| U_{1S_{u}\ast}^{\ast}\bigl(V_{1S_{v}\ast}^{\ast} \bigr)^{\prime}-\wt {U}_{1S_{u}\ast}(\wt{V}_{1S_{v}\ast})^{\prime} \bigr\| _{\mathrm{F}} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad\leq C\bigl\| (\Sigma_{xS_{u}S_{u}})^{1/2}\bigl[U_{1S_{u}\ast }^{\ast} \bigl(V_{1S_{v}\ast}^{\ast}\bigr)^{\prime}-\wt{U}_{1S_{u}\ast}( \wt {V}_{1S_{v}\ast})^{\prime}\bigr] (\Sigma_{yS_{v}S_{v}})^{1/2} \bigr\|_{\mathrm{F}} \\ &&\qquad\leq\frac{C \| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma _{yS_{v}S_{v}})^{-1/2}-\Xi\| _{\mathrm{F}} }{ \wt{\lambda}_{r}-\lambda_{r+1}^{\ast}}.\nonumber \end{eqnarray} Here, the first equality holds since both $U_{1}^{\ast}V_{1}^{\ast \prime}$ and $\wt{U}_{1}\wt{V}_{1}^{\prime}$ are supported on the $S_u\times S_v$ submatrix. 
Noting that by the discussion before \eqref{eq:UV-oracle}, \eqref{eq:UV-intermediate} and Lemma~\ref{lem:CCA2}, $((\Sigma_{xS_{u}S_{u}})^{1/2}U_{1S_{u}\ast}^{\ast}, (\Sigma_{yS_{v}S_{v}})^{1/2}V_{1S_{v}\ast}^{\ast})$ and $((\Sigma_{xS_{u}S_{u}})^{1/2}\wt{U}_{1S_{u}\ast}, \break (\Sigma_{yS_{v}S_{v}})^{1/2}\wt{V}_{1S_{v}\ast})$ collect the leading left and right singular vectors of\break $(\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}$ and $\Xi$, respectively, we obtain the last inequality by applying (\ref{eq:MPnew}) in Theorem~\ref{thm:sintheta}.

In what follows, we derive an upper bound for the numerator and a lower bound for the denominator in \eqref{eq:bound2ndSparseAppr} in order.

\textit{Upper bound for $\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}-\Xi\| _{\mathrm{F}}$}. First, we decompose $\Sigma_{xyS_{u}S_{v}}$ as
\begin{eqnarray}\label{eq:decompSigma_xy}
\Sigma_{xyS_{u}S_{v}} & =& \Sigma_{xS_{u}\ast}\bigl(U_{1}\Lambda_{1}V_{1}^{\prime}+U_{2}\Lambda_{2}V_{2}^{\prime}\bigr)\Sigma_{y\ast S_{v}} \nonumber \\
& = &\Sigma_{xS_{u}S_{u}}U_{1S_{u}\ast}\Lambda_{1}V_{1S_{v}\ast}^{\prime}\Sigma_{yS_{v}S_{v}}+\Sigma_{xS_{u}S_{u}}U_{1S_{u}\ast}\Lambda_{1}V_{1S_{v}^{c}\ast}^{\prime}\Sigma_{yS_{v}^{c}S_{v}} \\
&&{} +\Sigma_{xS_{u}S_{u}^{c}}U_{1S_{u}^{c}\ast}\Lambda_{1}V_{1}^{\prime}\Sigma_{y\ast S_{v}}+\Sigma_{xS_{u}\ast}U_{2}\Lambda_{2}V_{2}^{\prime}\Sigma_{y\ast S_{v}}.
\nonumber \end{eqnarray} Then \eqref{eq:decompSigma_xy}, \eqref{eq:Xidef} and \eqref{eq:intermSVD} jointly imply that
\begin{eqnarray*}
&& \bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}-\Xi\bigr\| _{\mathrm{F}} \\
&&\qquad \leq\bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xS_{u}S_{u}^{c}}U_{1S_{u}^{c}\ast}\Lambda_{1}V_{1}^{\prime}\Sigma_{y\ast S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}\bigr\| _{\mathrm{F}} \\
& &\qquad\quad{}+\bigl\| (\Sigma_{xS_{u}S_{u}})^{1/2}U_{1S_{u}\ast}\Lambda_{1}V_{1S_{v}^{c}\ast}^{\prime}\Sigma_{yS_{v}^{c}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}\bigr\| _{\mathrm{F}} \nonumber \\
&&\qquad\quad{} +\bigl\| PP^{\prime}(\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xS_{u}\ast}U_{2}\Lambda_{2}V_{2}^{\prime}\Sigma_{y\ast S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}\bigl(I-QQ^{\prime}\bigr)\bigr\| _{\mathrm{F}} \\
&&\qquad\quad{} + \bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xS_{u}\ast}U_{2}\Lambda_{2}V_{2}^{\prime}\Sigma_{y\ast S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}QQ^{\prime}\bigr\| _{\mathrm{F}} \\
&&\qquad \leq C \lambda\bigl(\| U_{1S_{u}^{c}\ast}\| _{\mathrm{F}}+\| V_{1S_{v}^{c}\ast}\| _{\mathrm{F}}\bigr) \\
& &\qquad\quad{}+ C\lambda_{r+1} \bigl(\bigl\| P^{\prime}(\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xS_{u}\ast}U_{2}\bigr\| _{\mathrm{F}}+\bigl\| Q^{\prime}(\Sigma_{yS_{v}S_{v}})^{-1/2}\Sigma_{yS_{v}\ast}V_{2}\bigr\|_{\mathrm{F}}\bigr) \\
&&\qquad = C \lambda\bigl(\| U_{1S_{u}^{c}\ast}\| _{\mathrm{F}}+\| V_{1S_{v}^{c}\ast}\| _{\mathrm{F}}\bigr) + C\lambda_{r+1}\bigl(\bigl\| \wt{U}_1' \Sigma_x U_{2}\bigr\| _{\mathrm{F}} + \bigl\| \wt{V}_1' \Sigma_y V_{2}\bigr\| _{\mathrm{F}}\bigr).
\end{eqnarray*}
Here, the last equality is due to the definition \eqref{eq:UV-intermediate}.
The last display, together with~\eqref{eq:lqsparse} and Lemma~\ref{lem:lq3}, leads to
\begin{equation}
\bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}-\Xi\bigr\| _{\mathrm{F}}^{2}\leq\frac{Cq}{2-q} \lambda^{2}\eps_n^2. \label{eq:usedlater4}
\end{equation}

\textit{Lower bound for $\wt{\lambda}_{r}-\lambda_{r+1}^{\ast}$}. The\vspace*{1pt} bound \eqref{eq:usedlater4}, together with Weyl's inequality (\cite{golub1996matrix}, page 449) and the Hoffman--Wielandt inequality (\cite{tao12}, page 63), implies
\begin{eqnarray}
\label{eq:boundsLamdaTilda}\qquad
&&\bigl|\lambda_{r+1}^{\ast}-\wt{\lambda}_{r+1}\bigr| \vee\bigl\| \Lambda_{1}^{\ast}-\wt{\Lambda}_{1}\bigr\|_{\mathrm{F}} \nonumber \\[-8pt] \\[-8pt] \nonumber
&&\qquad \leq\bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}}(\Sigma_{yS_{v}S_{v}})^{-1/2}-\Xi\bigr\| _{\mathrm{F}} \leq C \sqrt{\frac{q}{2-q}} \lambda\eps_n \leq0.1\lambda.
\end{eqnarray}
Together with Lemma~\ref{lem:CCA2}, it further implies
\begin{equation}
\label{eq:lambda-lower} \wt{\lambda}_{r}-\lambda_{r+1}^{\ast} \geq\wt{\lambda}_{r}-\wt{\lambda}_{r+1}-\bigl|\wt{\lambda}_{r+1}-\lambda_{r+1}^{\ast}\bigr|\geq0.7\lambda.
\end{equation}
Combining \eqref{eq:bound2ndSparseAppr}, \eqref{eq:usedlater4} and \eqref{eq:lambda-lower}, we obtain
\begin{equation}
\bigl\| \wt{U}_{1}\wt{V}_{1}^{\prime}-U_{1}^{\ast}V_{1}^{\ast\prime}\bigr\| _{\mathrm{F}}^{2}\leq\frac{Cq}{2-q} \eps_n^2. \label{eq:piece3}
\end{equation}
The proof of \eqref{eq:SparseApproClaim1} is completed by combining \eqref{eq:piece1}, \eqref{eq:piece2} and \eqref{eq:piece3}.
\end{pf*}
\begin{pf*}{Proof of (\ref{eq:SparseApproClaim2})} Note that
\begin{eqnarray*}
&&\bigl\| U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\bigr\| _{\mathrm{F}} \\
&&\qquad\leq\bigl\| U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}+\bigl\| \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\bigr\| _{\mathrm{F}} \\
&&\qquad\leq\bigl\| U_{1}^{\ast}\Lambda_{1}^{\ast}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}+\bigl\| \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\bigr\| _{\mathrm{F}} \\
&&\qquad\quad{} +C\bigl\| \Lambda_{1}^{\ast}-\wt{\Lambda}_{1}\bigr\| _{\mathrm{F}}+C\| \wt{\Lambda}_{1}-\Lambda_{1}\| _{\mathrm{F}} \\
&&\qquad\leq\bigl\| U_{1}^{\ast}\Lambda_{1}^{\ast}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}+C^{\prime}\bigl\| \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}+C\bigl\| \Lambda_{1}^{\ast}-\wt{\Lambda}_{1}\bigr\| _{\mathrm{F}}.
\end{eqnarray*}
Here, the last inequality is due to
\begin{equation}
\label{eq:lambda-perturb} \| \wt{\Lambda}_{1}-\Lambda_{1}\| _{\mathrm{F}}\leq\bigl\| \Sigma_{x}^{1/2}\bigl( \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-U_{1}\Lambda_{1}V_{1}^{\prime}\bigr) \Sigma_{y}^{1/2}\bigr\| _{\mathrm{F}},
\end{equation}
a consequence of Lemma~\ref{lem:CCA2} and the Hoffman--Wielandt inequality (\cite{tao12}, page 63). We now control each of the three terms on the right-hand side of the second to last display. First, by the bound we derived for (\ref{eq:zhaoren}), $\|\wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\| _{\mathrm{F}}$ is, up to a constant multiplier, upper bounded by the right-hand side of \eqref{eq:SparseApproClaim2}.
Next, the bound for $\|\Lambda_{1}^{\ast}-\wt{\Lambda}_{1}\| _{\mathrm{F}}$ has been shown in (\ref{eq:boundsLamdaTilda}). Last but not least, applying (\ref{eq:MPprincipal}) in Theorem~\ref{thm:sintheta}, we obtain
\begin{eqnarray*}
&&\bigl\|U_{1}^{\ast}\Lambda_{1}^{\ast}V_{1}^{\ast\prime}- \wt{U}_{1}\wt{\Lambda}_{1}\wt{V}_{1}^{\prime} \bigr\| _{\mathrm{F}} \\
&&\qquad \leq\frac{C ( \wt{\lambda}_{1}+{\lambda}_{1}^*) }{\wt{\lambda}_{r}-{\lambda}^*_{r+1}}\bigl\| (\Sigma_{xS_{u}S_{u}})^{-1/2}\Sigma_{xyS_{u}S_{v}} (\Sigma_{yS_{v}S_{v}} )^{-1/2}-\Xi\bigr\| _{\mathrm{F}} \leq C \sqrt{\frac{q}{2-q} } \lambda\eps_n ,
\end{eqnarray*}
where the last inequality is due to (\ref{eq:usedlater4}), (\ref{eq:boundsLamdaTilda}), (\ref{eq:lambda-lower}) and Lemma~\ref{lem:CCA2}. The proof is completed by assembling the bounds for the three terms.
\end{pf*}

\subsection{Proof of Lemma \texorpdfstring{\protect\ref{lem:bias}}{4}}
\label{sec:bias-pf}
In this proof, we need the following technical result, which is a direct consequence of Lemma~3 in \cite{supp2} obtained by applying a union bound. Recall the notation $T_u$ and $T_v$ defined in (\ref{eq:Tu-Tv}).
\begin{lemma} \label{lem:covdeviation45} Assume $\frac{1}{n}(k_{q}^{u}\log(ep/k_{q}^{u})+k_{q}^{v}\log(em/k_{q}^{v}))<C_{1}$ for some constant $C_{1}>0$. For any constant $C'>0$, there exists some constant $C>0$ only depending on $M,C_1$ and $C'$, such that
\begin{eqnarray*}
\bigl\| \widehat{\Sigma}_{xT_{u}T_{u}}-\Sigma_{xT_{u}T_{u}}\bigr\| _{\mathrm{op}}^{2} &\leq&\frac{C}{n}\bigl(k_{q}^{u} \log\bigl(ep/k_{q}^{u}\bigr)\bigr), \\
\bigl\| \widehat{\Sigma}_{yT_{v}T_{v}}-\Sigma_{yT_{v}T_{v}}\bigr\| _{\mathrm{op}}^{2} &\leq&\frac{C}{n}\bigl(k_{q}^{v} \log\bigl(em/k_{q}^{v}\bigr)\bigr),
\end{eqnarray*}
with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$.
\end{lemma}
In addition, we need the following result.
\begin{lemma}[(Stewart and Sun \cite{Stewart90}, Theorem II.4.11)] \label{lem:subspacedist} For any matrices $A, B$ with $A^{\prime}A=B^{\prime}B=I$, we have
\[
\inf_{W}\| A-BW\| _{\mathrm{F}}\leq\bigl\| AA^{\prime}-BB^{\prime}\bigr\| _{\mathrm{F}}.
\]
\end{lemma}
We first bound $\langle\Sigma_xU_2\Lambda_2V_2'\Sigma_y, \wh{U}_1\wh{V}_1' \rangle$. By the definition of the trace inner product, we have
\begin{eqnarray*}
\bigl\langle\Sigma_xU_2\Lambda_2V_2' \Sigma_y, \wh{U}_1\wh{V}_1' \bigr\rangle&=& \bigl\langle\Lambda_2 V_2' \Sigma_y\wh{V}_1', U_2' \Sigma_x\wh{U}_1 \bigr\rangle \\
&\leq& \bigl\|\Lambda_2 V_2' \Sigma_y\wh{V}_1'\bigr\|_{\mathrm{F}} \bigl\|U_2'\Sigma_x\wh{U}_1 \bigr\|_{\mathrm{F}} \\
&\leq& \lambda_{r+1} \bigl\|V_2' \Sigma_y\wh{V}_1'\bigr\|_{\mathrm{F}} \bigl\|U_2'\Sigma_x\wh{U}_1 \bigr\|_{\mathrm{F}}.
\end{eqnarray*}
Define the SVDs of the matrices $U_1$ and $\wh{U}_1$ to be
\[
U_1=\Theta R H', \qquad\wh{U}_1=\wh{\Theta}\wh{R}\wh{H}'.
\]
For any matrix $W$, we have
\begin{eqnarray*}
\bigl\|\wh{U}_1'\Sigma_xU_2 \bigr\|_{\mathrm{F}} &=& \bigl\|\bigl(\wh{U}_1-U_1HR^{-1}W\wh{R}\wh{H}'\bigr)'\Sigma_xU_2 \bigr\|_{\mathrm{F}} \\
&\leq& C\bigl\|\wh{U}_1-U_1HR^{-1}W\wh{R}\wh{H}'\bigr\|_{\mathrm{F}} \\
&\leq& C\|\wh{R}\|_{\mathrm{op}}\|\wh{\Theta}-\Theta W\|_{\mathrm{F}},
\end{eqnarray*}
where $\|\wh{R}\|_{\mathrm{op}}\leq\|\wh{U}_1\|_{\mathrm{op}}\leq\|(\wh{\Sigma}_{xT_uT_u})^{-1/2}\|_{\mathrm{op}}\|(\wh{\Sigma}_{xT_uT_u})^{1/2}\wh{U}_{1T_u*}\|_{\mathrm{op}}\leq C$ with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$ by Lemma~\ref{lem:covdeviation45}. Hence, by Lemma~\ref{lem:subspacedist}, we have
\begin{equation}
\bigl\|\wh{U}_1'\Sigma_xU_2 \bigr\|_{\mathrm{F}}\leq C\inf_W\|\wh{\Theta}-\Theta W \|_{\mathrm{F}}\leq C\bigl\|\wh{\Theta}\wh{\Theta}'-\Theta\Theta'\bigr\|_{\mathrm{F}}.
\label{eq:fix1}
\end{equation}
We note that both $\widehat{\Theta}\widehat{\Theta}^{\prime}$ and $\Theta\Theta^{\prime}$ are the projection matrices onto the left singular spaces of $\widehat{U}_{1}\widehat{V}_{1}^{\prime}$ and $U_{1}V_{1}^{\prime}$, respectively, and the eigengap is at a constant level since the $r$th singular value of $U_{1}V_{1}^{\prime}$ is bounded below by some constant and the $(r+1)$th singular value of $\widehat{U}_{1}\widehat{V}_{1}^{\prime}$ is zero. Then a direct consequence of Wedin's sin-theta theorem \cite{Wedin72} gives
\begin{equation}
\bigl\|\wh{\Theta}\wh{\Theta}'-\Theta\Theta' \bigr\|_{\mathrm{F}}\leq C\bigl\|\wh{U}_1\wh{V}_1'-U_1V_1' \bigr\|_{\mathrm{F}}.\label{eq:fix2}
\end{equation}
Combining (\ref{eq:fix1}) and (\ref{eq:fix2}), we have $\|\wh{U}_1'\Sigma_xU_2\|_{\mathrm{F}}\leq C_1\|\wh{U}_1\wh{V}_1'-U_1V_1'\|_{\mathrm{F}}$. The same argument also implies $\|V_2'\Sigma_y\wh{V}_1'\|_{\mathrm{F}}\leq C_1\|\wh{U}_1\wh{V}_1'-U_1V_1'\|_{\mathrm{F}}$. Therefore,
\[
\bigl\llvert \bigl\langle\Sigma_xU_2\Lambda_2V_2'\Sigma_y, \wh{U}_1\wh{V}_1' \bigr\rangle\bigr\rrvert \leq C_2\lambda_{r+1}\bigl\|\wh{U}_1\wh{V}_1'-U_1V_1' \bigr\|_{\mathrm{F}}^2.
\]
Using a similar argument, we also obtain
\[
\bigl\llvert \bigl\langle\Sigma_xU_2\Lambda_2V_2'\Sigma_y, \wh{U}_1^*\bigl(\wh{V}_1^*\bigr)' \bigr\rangle\bigr\rrvert \leq C_2\lambda_{r+1}\bigl\|\wh{U}^*_1\bigl(\wh{V}_1^*\bigr)'-U_1V_1' \bigr\|_{\mathrm{F}}^2.
\]
By the triangle inequality, we complete the proof.

\subsection{Proof of Lemma \texorpdfstring{\protect\ref{lem:excessloss1}}{5}}
\label{sec:excessloss1-pf}
Define
\[
W = \left[\matrix{ 0 & \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \vspace*{2pt}\cr
\bigl(\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr)^{\prime} & 0} \right].
\] Then simple algebra leads to
\begin{equation}
\bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy},\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle=\tfrac{1}{2} \langle\Sigma-\widehat{\Sigma},W \rangle. \label{eq:excessexpand}
\end{equation}
In the rest of the proof, we bound $\langle\Sigma-\wh{\Sigma}, W \rangle$ by using Lemma~\ref{lem:EP}. Notice that the matrix $\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime}$ has nonzero rows with indices in $T_{u}=S_{u}\cup\widehat{S}_{u}$ and nonzero columns with indices in $T_{v}=S_{v}\cup\widehat{S}_{v}$. Hence, the enlarged matrix $W$ has nonzero rows and columns with indices in $T\times T$, where
\[
T=T_{u} \cup(T_{v}+p)
\]
with $T_v +p = \{j+p: j\in T_v \}$. The cardinality of $T$ is $|T| = |T_{u}|+|T_{v}| \leq 2(k_{q}^{u}+k_{q}^{v})$. Thus, we can rewrite (\ref{eq:excessexpand}) as
\begin{eqnarray*}
&&\bigl\langle\Sigma_{xy}-\widehat{\Sigma}_{xy},\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\\
&&\qquad= \tfrac{1}{2} \langle\Sigma-\wh{\Sigma}, W \rangle \\
&&\qquad=\tfrac{1}{2} \langle\Sigma_{TT}-\widehat{\Sigma}_{TT},W_{TT}\rangle \\
&&\qquad=\tfrac{1}{2} \bigl\langle I_{|T|}-\Sigma_{TT}^{-1/2}\widehat{\Sigma}_{TT}\Sigma_{TT}^{-1/2}, \Sigma_{TT}^{1/2}W_{TT}\Sigma_{TT}^{1/2} \bigr\rangle \\
&&\qquad=\tfrac{1}{2}\bigl\| \Sigma_{TT}^{1/2}W_{TT}\Sigma_{TT}^{1/2}\bigr\| _{\mathrm{F}} \bigl\langle I_{|T|}-\Sigma_{TT}^{-1/2}\widehat{\Sigma}_{TT}\Sigma_{TT}^{-1/2}, K^T \bigr\rangle,
\end{eqnarray*}
where $K^T = \frac{\Sigma_{TT}^{1/2}W_{TT}\Sigma_{TT}^{1/2}}{\| \Sigma_{TT}^{1/2}W_{TT}\Sigma_{TT}^{1/2}\| _{\mathrm{F}}}$.
Note that
\[
\tfrac{1}{2}\bigl\| \Sigma_{TT}^{1/2}W_{TT}\Sigma_{TT}^{1/2}\bigr\| _{\mathrm{F}}\leq C\| W_{TT}\| _{\mathrm{F}}=C\| W\| _{\mathrm{F}}=\sqrt{2}C\bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}.
\]
To obtain the desired bound, it suffices to show that
\begin{equation}
\bigl\llvert \bigl\langle I_{|T|}-\Sigma_{TT}^{-1/2}\widehat{\Sigma}_{TT}\Sigma_{TT}^{-1/2}, K^T \bigr\rangle\bigr\rrvert \label{eq:useEPtobound}
\end{equation}
is upper bounded by $C\lambda\eps_n$ with high probability. To this end, we note that $T_{u} = S_u \cup\wh{S}_u$ has at most ${{p\choose k_{q}^{u}}}$ different possible configurations since $S_u$ is deterministic and $\wh{S}_u$ is a random set of size $k_q^u$. For the same reason, $T_{v}$ has at most ${{m\choose k_{q}^{v}}}$ different possible configurations. Therefore, the subset $T$ has at most $N={{p\choose k_{q}^{u}}}{{m\choose k_{q}^{v}}}$ different possible configurations, which can be listed as $T_{1},T_{2},\ldots,T_{N}$. Let
\[
K^{T_j} = \frac{\Sigma_{T_{j}T_{j}}^{1/2}W_{T_{j}T_{j}}\Sigma_{T_{j}T_{j}}^{1/2}}{\| \Sigma_{T_{j}T_{j}}^{1/2}W_{T_{j}T_{j}}\Sigma_{T_{j}T_{j}}^{1/2}\| _{\mathrm{F}}}
\]
for all $j\in[N]$. Since each $W_{T_{j}T_{j}}$ is of rank at most $2r$, so are the $K^{T_j}$'s. Therefore,
\begin{eqnarray*}
\bigl|(\ref{eq:useEPtobound})\bigr| &\leq&\max_{1\leq j\leq N}\bigl\llvert \bigl\langle I_{|T_{j}|}-\Sigma_{T_{j}T_{j}}^{-1/2}\widehat{\Sigma}_{T_{j}T_{j}}\Sigma_{T_{j}T_{j}}^{-1/2}, K^{T_j} \bigr\rangle\bigr\rrvert \\
&\leq&\max_{1\leq j\leq N}\sup_{\| K\| _{\mathrm{F}}\leq1, \operatorname{rank}(K)\leq2r}\bigl\llvert \bigl\langle I_{|T_{j}|}-\Sigma_{T_{j}T_{j}}^{-1/2}\widehat{\Sigma}_{T_{j}T_{j}}\Sigma_{T_{j}T_{j}}^{-1/2},K \bigr\rangle\bigr\rrvert .
\end{eqnarray*} Then the union bound leads to
\begin{eqnarray}\label{eq:EPused1}
&&\mathbb{P}_\Sigma\bigl(\bigl|(\ref{eq:useEPtobound})\bigr|>t\bigr) \nonumber \\
&&\qquad \leq\sum_{j=1}^{N}\mathbb{P} \Bigl(\sup_{\| K\| _{\mathrm{F}}\leq1,\operatorname{rank}(K)\leq2r}\bigl\llvert \bigl\langle I_{|T_{j}|}- \Sigma_{T_{j}T_{j}}^{-1/2}\widehat{\Sigma}_{T_{j}T_{j}}\Sigma_{T_{j}T_{j}}^{-1/2},K \bigr\rangle\bigr\rrvert >t \Bigr) \\
&&\qquad\leq\sum_{j=1}^{N}\exp \bigl(C^{\prime}r|T_{j}|-Cn\bigl(t\wedge t^{2}\bigr) \bigr) \nonumber\\
&&\qquad\leq{\pmatrix{p\cr k_{q}^{u}}} {\pmatrix{m\cr k_{q}^{v}}}\exp\bigl( C_{1}r\bigl(k_{q}^{u}+k_{q}^{v}\bigr)-Cn\bigl(t\wedge t^{2}\bigr)\bigr) \nonumber \\
&&\qquad\leq\exp\biggl( C_{1}r\bigl(k_{q}^{u}+k_{q}^{v}\bigr)+k_{q}^{u}\log\frac{ep}{k_{q}^{u}}+k_{q}^{v}\log\frac{em}{k_{q}^{v}}-Cn\bigl(t\wedge t^{2}\bigr) \biggr), \nonumber
\end{eqnarray}
where inequality (\ref{eq:EPused1}) is due to Lemma~\ref{lem:EP}. We complete the proof by choosing $t^{2}= C_2 \lambda^2 \eps_n^2$ in the last display for some sufficiently large constant $C_2>0$, which, by condition (\ref{eq:ass1}), is bounded.
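To spell out the counting step in the last display, the passage from the product of binomial coefficients to the additive logarithmic terms in the exponent uses the standard bound ${p\choose k}\leq(ep/k)^{k}$, valid for $1\leq k\leq p$:
\[
{p\choose k}\biggl(\frac{k}{p}\biggr)^{k}\leq\sum_{i=0}^{p}{p\choose i}\biggl(\frac{k}{p}\biggr)^{i}=\biggl(1+\frac{k}{p}\biggr)^{p}\leq e^{k},
\qquad\mbox{so that}\qquad
\log{p\choose k}\leq k\log\frac{ep}{k}.
\]
Applying this with the pairs $(p,k_{q}^{u})$ and $(m,k_{q}^{v})$ gives $\log{p\choose k_{q}^{u}}+\log{m\choose k_{q}^{v}}\leq k_{q}^{u}\log\frac{ep}{k_{q}^{u}}+k_{q}^{v}\log\frac{em}{k_{q}^{v}}$, as used above.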
\subsection{Proof of Lemma \texorpdfstring{\protect\ref{lem:excessloss2}}{6}}
\label{sec:excessloss2-pf}
First, we apply a telescoping expansion to the quantity of interest as
\begin{eqnarray}
&& \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1}\widehat{V}_{1}^{\ast\prime} \widehat{\Sigma}_{y}-{\Sigma}_{x}{U}_{1}\Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y}, \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle \nonumber \\
&&\qquad= \bigl\langle\Sigma_{x}\widehat{U}_{1}^{\ast}\Lambda_{1}\widehat{V}_{1}^{\ast\prime} \Sigma_{y}-\Sigma_{x}U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}\Sigma_{y}, \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle \label{eq:excessloss2.3} \\
&&\qquad\quad{}+ \bigl\langle\Sigma_{x}U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}\Sigma_{y}-{\Sigma}_{x}{U}_{1}\Lambda_{1}{V}_{1}^{\prime}{\Sigma}_{y},\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle\label{eq:excessloss2.2} \\
&&\qquad\quad{}+ \bigl\langle\widehat{\Sigma}_{x}\widehat{U}_{1}^{\ast}\Lambda_{1}\widehat{V}_{1}^{\ast\prime} \widehat{\Sigma}_{y}-\Sigma_{x}\widehat{U}_{1}^{\ast}\Lambda_{1}\widehat{V}_{1}^{\ast\prime}\Sigma_{y}, \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\rangle. \label{eq:excessloss2.1}
\end{eqnarray}
In what follows, we bound each of the terms in \eqref{eq:excessloss2.3}--\eqref{eq:excessloss2.1} in order.

\textit{$1^\circ$ Bound for} \eqref{eq:excessloss2.3}.
Applying (\ref{eq:OracleLossClaim2}) in Lemma~\ref{lem:oracleloss}, we obtain that with probability at least $1-\exp(-C^{\prime}(k_{q}^{u}+\log(ep/k_{q}^{u})))-\exp(-C^{\prime}(k_{q}^{v}+\log(em/k_{q}^{v})))$,
\begin{eqnarray*}
\bigl\llvert (\ref{eq:excessloss2.3})\bigr\rrvert &\leq&C\bigl\| \widehat{U}_{1}^{\ast}\Lambda_{1}\widehat{V}_{1}^{\ast\prime}-U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}\bigr\| _{\mathrm{F}}\bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\| _{\mathrm{F}} \\
&\leq& C\sqrt{\frac{q}{2-q}}\lambda\eps_n \bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}.
\end{eqnarray*}

\textit{$2^\circ$ Bound for} \eqref{eq:excessloss2.2}. Applying (\ref{eq:SparseApproClaim2}) in Lemma~\ref{lem:sparseapproxerror}, we obtain
\begin{eqnarray*}
\bigl\llvert (\ref{eq:excessloss2.2})\bigr\rrvert &\leq&C\bigl\| U_{1}^{\ast}\Lambda_{1}V_{1}^{\ast\prime}-{U}_{1}\Lambda_{1}{V}_{1}^{\prime}\bigr\| _{\mathrm{F}}\bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}-\widehat{U}_{1}\widehat{V}_{1}^{\prime}\bigr\| _{\mathrm{F}} \\
&\leq& C\sqrt{\frac{q}{2-q}}\lambda\eps_n \bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast^{\prime}}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr\| _{\mathrm{F}}.
\end{eqnarray*}

\textit{$3^\circ$ Bound for} \eqref{eq:excessloss2.1}. We turn to bound (\ref{eq:excessloss2.1}) based on a strategy similar to that used in proving Lemma~\ref{lem:excessloss1}. First, we write it in a form to which we can apply Lemma~\ref{lem:EP}. Recall the random sets $T_u$ and $T_v$ defined in \eqref{eq:Tu-Tv}.
Then for
\begin{eqnarray*}
H_{x}^{T_u} & =&(\Sigma_{xT_{u}T_{u}})^{1/2}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigl(\widehat{V}_{1T_{v}\ast}^{\ast}\bigr)^{\prime}-\widehat{U}_{1T_{u}\ast}(\widehat{V}_{1T_{v}\ast})^{\prime}\bigr) \\
&&{} \times\widehat{\Sigma}_{yT_{v}T_{v}}\widehat{V}_{1T_{v}\ast}^{\ast}\Lambda_{1}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigr)^{\prime}(\Sigma_{xT_{u}T_{u}})^{1/2}, \\
H_{y}^{T_v} & = &(\Sigma_{yT_{v}T_{v}})^{1/2}\widehat{V}_{1T_{v}\ast}^{\ast}\Lambda_{1}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigr)^{\prime} \\
&&{} \times\Sigma_{xT_{u}T_{u}}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigl(\widehat{V}_{1T_{v}\ast}^{\ast}\bigr)^{\prime}-\widehat{U}_{1T_{u}\ast}(\widehat{V}_{1T_{v}\ast})^{\prime}\bigr) (\Sigma_{yT_{v}T_{v}})^{1/2},
\end{eqnarray*}
and $\overline{H}_x^{T_u} = {H_{x}^{T_u}}/{\|H_{x}^{T_u}\| _{\mathrm{F}}}$, $\overline{H}_y^{T_v} = {H_{y}^{T_v}}/{\|H_{y}^{T_v}\| _{\mathrm{F}}}$, we have
\begin{eqnarray*}
\label{eq:Excess2FirstDec}
&&\bigl\llvert (\ref{eq:excessloss2.1})\bigr\rrvert \nonumber \\
&&\qquad= \bigl\llvert \bigl\langle\widehat{\Sigma}_{x}-\Sigma_{x},\bigl(\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr)\widehat{\Sigma}_{y}\widehat{V}_{1}^{\ast}\Lambda_{1}\bigl(\widehat{U}_{1}^{\ast}\bigr)^{\prime} \bigr\rangle \nonumber \\
&&\qquad\quad{} + \bigl\langle\widehat{\Sigma}_{y}-\Sigma_{y},\widehat{V}_{1}^{\ast}\Lambda_{1}\bigl(\widehat{U}_{1}^{\ast}\bigr)^{\prime}\Sigma_{x}\bigl(\widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}^{\prime} \bigr) \bigr\rangle\bigr\rrvert \nonumber \\
&&\qquad\leq \bigl\llvert \bigl\langle\widehat{\Sigma}_{xT_{u}T_{u}}-\Sigma_{xT_{u}T_{u}},\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigl(\widehat{V}_{1T_{v}\ast}^{\ast}\bigr)^{\prime}\\
&&\qquad\quad{}- \widehat{U}_{1T_{u}\ast}(\widehat{V}_{1T_{v}\ast})^{\prime}\bigr)\widehat{\Sigma}_{yT_{v}T_{v}}\widehat{V}_{1T_{v}\ast}^{\ast}\Lambda_{1}\bigl(
\widehat{U}_{1T_{u}\ast}^{\ast}\bigr)^{\prime} \bigr\rangle\bigr\rrvert \nonumber \\
&&\qquad\quad{}+\bigl\llvert \bigl\langle\widehat{\Sigma}_{yT_{v}T_{v}}\\
&&\qquad\quad{}-\Sigma_{yT_{v}T_{v}},\widehat{V}_{1T_{v}\ast}^{\ast}\Lambda_{1}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigr)^{\prime}\Sigma_{xT_{u}T_{u}}\bigl(\widehat{U}_{1T_{u}\ast}^{\ast}\bigl(\widehat{V}_{1T_{v}\ast}^{\ast}\bigr)^{\prime}- \widehat{U}_{1T_{u}\ast}(\widehat{V}_{1T_{v}\ast})^{\prime}\bigr) \bigr\rangle\bigr\rrvert \nonumber \\
&&\qquad=\bigl\| H_{x}^{T_u}\bigr\| _{\mathrm{F}}\bigl\llvert \bigl\langle(\Sigma_{xT_{u}T_{u}})^{-1/2}\widehat{\Sigma}_{xT_{u}T_{u}}(\Sigma_{xT_{u}T_{u}})^{-1/2}-I_{|T_{u}|}, \overline{H}_x^{T_u} \bigr\rangle\bigr\rrvert \nonumber \\
&&\qquad\quad{}+\bigl\| H_{y}^{T_v}\bigr\| _{\mathrm{F}} \bigl\llvert \bigl\langle(\Sigma_{yT_{v}T_{v}})^{-1/2}\widehat{\Sigma}_{yT_{v}T_{v}}(\Sigma_{yT_{v}T_{v}})^{-1/2}-I_{|T_{v}|}, \overline{H}_y^{T_v} \bigr\rangle\bigr\rrvert . \nonumber
\end{eqnarray*}
We now bound each term on the rightmost side.
Applying Lemma~\ref{lem:EP} with a union bound and then following a similar analysis to that leading to (\ref{eq:useEPtobound}) but with $T$ replaced by $T_{u}$ and $T_{v}$, we obtain that
\begin{eqnarray}\label{eq:excess2.1.1}
&&\bigl\llvert \bigl\langle(\Sigma_{xT_{u}T_{u}})^{-1/2}\widehat{\Sigma}_{xT_{u}T_{u}}(\Sigma_{xT_{u}T_{u}})^{-1/2}-I_{|T_{u}|}, \overline{H}_x^{T_u} \bigr\rangle\bigr\rrvert \nonumber\\
&&\qquad\leq C\sqrt{\frac{k_{q}^{u}}{n}\biggl( r+\log\frac{ep}{k_{q}^{u}} \biggr)}, \nonumber \\[-8pt] \\[-8pt] \nonumber
&&\bigl\llvert \bigl\langle(\Sigma_{yT_{v}T_{v}})^{-1/2}\widehat{\Sigma}_{yT_{v}T_{v}}(\Sigma_{yT_{v}T_{v}})^{-1/2}-I_{|T_{v}|}, \overline{H}_y^{T_v} \bigr\rangle\bigr\rrvert\\
&&\qquad \leq C\sqrt{\frac{k_{q}^{v}}{n}\biggl( r+\log\frac{em}{k_{q}^{v}} \biggr)}\nonumber
\end{eqnarray}
with probability at least $1-\exp(-C'k_{q}^{u}(r+\log(ep/k_{q}^{u})))$ and $1-\exp(-C'k_{q}^{v}(r+\log(em/k_{q}^{v})))$, respectively. To bound $\| H_{x}^{T_u}\| _{\mathrm{F}}$ and $\| H_{y}^{T_v}\| _{\mathrm{F}}$, we note that it follows from Lemma~\ref{lem:covdeviation45} that all eigenvalues of $\widehat{\Sigma}_{xT_{u}T_{u}}$ and $\widehat{\Sigma}_{yT_{v}T_{v}}$ are bounded from below and above by some universal positive constants with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$ under assumption (\ref{eq:ass1}).
Thus, with the same probability we have \begin{eqnarray} \label{eq:excess2.1.2} \bigl\| H_{x}^{T_u}\bigr\| _{\mathrm{F}} &\leq& C \lambda\bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1} \widehat{V}_{1}{^{\prime}} \bigr\| _{\mathrm{F}}\bigl\| \widehat{\Sigma}_{yT_{v}T_{v}}^{1/2} \widehat{V}_{1T_{v}\ast}^{\ast}\bigr\| _{\mathrm{op}} \nonumber \\ &&{}\times \bigl\| \widehat{\Sigma}_{yT_{v}T_{v}}^{1/2}\bigr\| _{\mathrm{op}}\bigl\| \widehat{\Sigma}_{xT_{u}T_{u}}^{1/2}\widehat{U}_{1T_{u}\ast}^{\ast} \bigr\| _{\mathrm{op}}\bigl\| \widehat{\Sigma}_{xT_{u}T_{u}}^{-1/2}\bigr\| _{\mathrm{op}} \\ &\leq&C_1\lambda\bigl\| \widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}{^{\prime}}\bigr\| _{\mathrm{F}}\nonumber \end{eqnarray} and \begin{eqnarray} \label{eq:excess2.1.3}\bigl\| H_{y}^{T_v}\bigr\| _{\mathrm{F}} &\leq& C \lambda\bigl\| \widehat{U}_{1}^{\ast}\widehat{V}_{1}^{\ast\prime}- \widehat{U}_{1}\widehat{V}_{1}{^{\prime}}\bigr\| _{\mathrm{F}}\bigl\| \widehat{\Sigma}_{yT_{v}T_{v}}^{1/2} \widehat{V}_{1T_{v}\ast}^{\ast}\bigr\| _{\mathrm{op}} \nonumber \\ &&{}\times \bigl\| \widehat{\Sigma}_{yT_{v}T_{v}}^{-1/2}\bigr\| _{\mathrm{op}}\bigl\| \widehat{\Sigma}_{xT_{u}T_{u}}^{1/2}\widehat{U}_{1T_{u}\ast}^{\ast} \bigr\| _{\mathrm{op}}\bigl\| \widehat{\Sigma}_{xT_{u}T_{u}}^{-1/2}\bigr\| _{\mathrm{op}} \\ &\leq&C_1\lambda\bigl\| \widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}{^{\prime}}\bigr\| _{\mathrm{F}}.\nonumber \end{eqnarray} Combining (\ref{eq:excess2.1.1}), (\ref{eq:excess2.1.2}) and (\ref{eq:excess2.1.3}), we obtain \[ \bigl\llvert (\ref{eq:excessloss2.1})\bigr\rrvert \leq C\lambda^2 \eps_n \bigl\| \widehat{U}_{1}^{\ast} \widehat{V}_{1}^{\ast\prime}-\widehat{U}_{1} \widehat{V}_{1}{^{\prime}}\bigr\| _{\mathrm{F}}, \] with probability at least $1-\exp(-C^{\prime}k_{q}^{u}\log(ep/k_{q}^{u}))-\exp(-C^{\prime}k_{q}^{v}\log(em/k_{q}^{v}))$.
Noting that $\lambda< 1$, this completes the proof. \begin{supplement}[id=suppA] \stitle{Supplement to ``Minimax estimation in sparse canonical correlation analysis''} \slink[doi]{10.1214/15-AOS1332SUPP} \sdatatype{.pdf} \sfilename{aos1332\_supp.pdf} \sdescription{The supplement \cite{supp2} contains an Appendix to the current paper in which we prove Theorems \ref{thm:lower-bd-q}--\ref{thm:sintheta} and Lemmas \ref {lem:oracleloss} and \ref{lem:EP}--\ref{lem:lq3}.} \end{supplement}
@interface VGHtmlATagTransfom : NSObject <VGHtmlTagTransform> @end
On Saturday afternoon last week, several SOTE families enjoyed an afternoon of fun, friendship and star weaving in the SOTE dining room. We wove over one hundred stars to contribute to the 1 Million Stars To End Violence project for the 2018 Commonwealth Games. If you would like to weave stars at home, please feel free to collect pre-cut ribbon from Sophia Lightfoot in Building 4 or phone 0429 549 202. We will continue weaving stars through to July 2017 and will hold another SOTE workshop in Term 1 next year.
Q: Which filter is the most suitable if I know points with zero noise amplitude I've got observed data $Y_1,\ldots, Y_n$, which consists of real values $X_1,\ldots, X_n$ and additive high-frequency noise $e_1,\ldots, e_n$, so $Y_i=X_i+e_i$. I know that the indices $i_1,\ldots, i_m$, $m<n$, refer to those samples in which $e_j=0$ if $j\in(i_1,\ldots,i_m)$. I'm trying to implement baseline detection using this information about the points which should have zero noise amplitude. The filter should take the $Y$ series as input, and it should produce an output $Z$ as follows: $Z$ is the filtered data without high-frequency noise, with $Z_j=Y_j$ if $j\in(i_1,\ldots,i_m)$. That is not a strict limitation, so it could be $Z_j\approx Y_j$. Which filters could I use for this purpose? I've seen cubic splines, which interpolate the baseline through those points, but they depend on them too strictly; I want the filter to be able to work even without such points, while still using them for correction.
\section{Introduction} \label{sec:intro} 3D data is used in many different fields, including autonomous driving, robotics, remote sensing, and more \cite{chen2022pseudo,zhang2022adversarial,james2022coarse,liu2022cdgnet,li2022exploiting}. Point clouds have a very uniform structure, which avoids irregularity and complexity of composition. However, in practical applications, due to the occlusion of objects, differences in the reflectivity of the target surface material, and the limited resolution and viewing angle of the visual sensor, the collected point cloud data are often incomplete. The resulting loss of geometric and semantic information affects subsequent 3D tasks \cite{uddin2022incomplete}. Therefore, how to use a limited amount of incomplete data to complete the point cloud and restore the original shape has become a hot research topic, and is of great significance to downstream tasks \cite{wu2020multimodal,chen2022pointtree,chen2022imlovenet,hou2022hitpr,xu2022bico,liu2022pvnas}. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{pictures/First_Figure.pdf}\\ \caption{\textbf{Visual comparison of point cloud completion results.} Compared with GRNet \cite{xie2020grnet} and PoinTr \cite{yu2021pointr}, \emph{ProxyFormer} completely retains the partial input (blue bounding box) and restores the missing part with details (purple bounding box).} \label{main_Fig.1.} \end{figure} With the tremendous success of PointNet \cite{qi2017pointnet} and PointNet++ \cite{qi2017pointnet++}, direct processing of 3D coordinates has become the mainstream of point cloud analysis. In recent years, many point cloud completion methods have emerged \cite{achlioptas2018learning,yuan2018pcn,huang2020pf,xie2020grnet,yu2021pointr,xiang2021snowflakenet,zhou2022seedformer}, and these networks have greatly promoted the development of this area.
Many methods \cite{achlioptas2018learning,yuan2018pcn,xie2020grnet} adopt the common encoder-decoder structure, which usually obtains a global feature from the incomplete input by a pooling operation and maps this feature back to the point space to obtain a complete one. This kind of feature can predict the approximate shape of the complete point cloud. However, there are two drawbacks: (1) The global feature is extracted from the partial input and thus lacks the ability to represent the details of the missing part; (2) These methods discard the original incomplete point cloud and regenerate the complete shape after extracting features, which causes the shape of the original part to deform to a certain extent. Methods like \cite{huang2020pf,yu2021pointr} attempt to predict the missing part separately, but they do not consider the feature connection between the existing and the missing parts, so they still do not resolve the first drawback. The results of GRNet \cite{xie2020grnet} and PoinTr \cite{yu2021pointr} in Fig. \ref{main_Fig.1.} illustrate these problems. GRNet fails to keep the ring on the light stand, while PoinTr incorrectly predicts the straight edge of the lampshade as a curved edge. Besides, some methods \cite{yu2021pointr,xiang2021snowflakenet,lin2021pctma,zhou2022seedformer} are based on the transformer structure and use the attention mechanism for feature correlation calculation. However, this brings up two other problems: (3) In addition to the feature, the position encoding also has a great influence on the effectiveness of the transformer. Existing transformer-based methods either directly use 3D coordinates \cite{guo2021pct,zhou2022seedformer} or use an MLP to upscale the coordinates \cite{yu2021pointr,xiang2021snowflakenet}, so the position information of the point cloud is not well represented; (4) The attention mechanism also leads to excessive parameter counts or computation.
Furthermore, we also note that most current supervised methods do not make full use of the known data. During the training process, the point cloud data we can obtain include the incomplete input and the Ground Truth (GT). This pair of data can indeed support the point cloud completion task well, but in fact we can derive a third piece of data from these two, namely the missing part of the point cloud, and thereby increase our prior knowledge. In order to solve the above-mentioned problems, we propose a novel point cloud completion network named \emph{ProxyFormer}, which completely preserves the incomplete input and has better detail recovery capability, as shown in Fig. \ref{main_Fig.1.}. Firstly, we design a feature and position extractor to convert the point cloud to proxies, with particular attention to the representation of point position. Then, we let the proxies of the partial input interact with the generated missing part proxies through a newly proposed missing part sensitive transformer, instead of using the global feature extracted from the incomplete input alone as in prior methods. After mapping proxies back to the point space, we splice them with the incomplete input points to preserve 100\% of the original data. During training, we use the true missing part of the point cloud to increase prior knowledge and to refine prediction errors. Overall, the main contributions of our work are as follows: \vspace{-0.2cm} \begin{itemize} \item We design a Missing Part Sensitive Transformer, which focuses on the geometric structure and details of the missing part. We also propose a new position encoding method that aggregates both the coordinates and features from neighboring points. \vspace{-0.2cm} \item We introduce Proxy Alignment into the training process. We convert the true missing part into proxies, which are used to enhance the prior knowledge while refining the predicted missing proxies.
\vspace{-0.2cm} \item Our proposed method \emph{ProxyFormer} discards the transformer decoder adopted in most transformer-based completion methods such as PoinTr, achieving SOTA performance compared to various baselines while having considerably fewer parameters and the fastest inference speed in terms of GFLOPs. \end{itemize} \section{Related Work} \label{sec:related work} \noindent{\bfseries 3D shape completion.} Traditional shape completion work mainly falls into two categories: geometric rule completion \cite{zhao2007robust,pauly2008discovering,mitra2013symmetry} and template matching completion \cite{nan2012search,kim2013learning,li2015database}. However, these methods require the input to be as complete as possible, and thus are not robust to new objects and environmental noise. VoxelNet \cite{zhou2018voxelnet} attempts to divide the point cloud into voxel grids and applies convolutional neural networks, but voxelization loses the details of the point cloud, and increasing the resolution of the voxel grid significantly increases memory consumption. Yuan \emph{et al.} \cite{yuan2018pcn} designed PCN, which proposed a coarse-to-fine method based on PointNet \cite{qi2017pointnet} and FoldingNet \cite{yang2018foldingnet}, but its decoder often fails to recover rare geometries of objects, such as seat backs with gaps, \emph{etc.} Therefore, after PCN, many other methods \cite{tchapmi2019topnet,huang2020pf,wang2020cascaded,xia2021asfm} focus on multi-step point cloud generation, which helps recover a final point cloud with fine-grained details. Furthermore, following DGCNN \cite{wang2019dynamic}, some researchers developed graph-based methods \cite{wu2021point,zhu2021towards,wu2021lra} which consider regional geometric details.
Although these methods provide better feature extractors and decoders, none of them considers the feature connection between the incomplete input and the missing part, which affects the quality of the completion result. Our proposed \emph{ProxyFormer} is not limited to the partial input but also incorporates the true missing points during training. We generate features separately for the missing part and explore the correlation with the features extracted from the partial input via self-attention. \noindent{\bfseries Transformers.} The transformer structure originated in the field of natural language processing; it was proposed by Vaswani \emph{et al.} \cite{vaswani2017attention} and applied to machine translation tasks. Recently, this structure was introduced into point cloud processing tasks due to its advantage in extracting correlated features between points. Guo \emph{et al.} \cite{guo2021pct} proposed PCT, which optimized the self-attention module, making the transformer structure more suitable for point cloud learning, and achieved good performance in shape classification and part segmentation. Point Transformer \cite{zhao2021point} designs a vector attention for point cloud feature processing. PoinTr \cite{yu2021pointr} and SeedFormer \cite{zhou2022seedformer} treat point cloud completion as a set-to-set translation problem, sharing similar ideas with \emph{ProxyFormer}. PoinTr designs a geometry-aware block that explicitly models local geometric relations to give the transformer a better inductive bias. However, it adopts a transformer encoder-decoder structure for point cloud completion, which results in a large number of parameters. SeedFormer designs an upsample transformer by extending the transformer structure into a basic operation in point generators that effectively incorporates spatial and semantic relationships between neighboring points.
However, the upsample transformer runs throughout its network, resulting in excessive computation. In contrast, \emph{ProxyFormer} discards the transformer decoder to reduce the number of parameters, and modifies the query of the transformer to make it more suitable for predicting the missing part. In the coarse-to-fine process, we still adopt FoldingNet \cite{yang2018foldingnet}, which greatly reduces the amount of computation. \section{Method} \label{sec:method} \begin{figure*} \centering \includegraphics[width=17.3cm]{pictures/Pipeline2.pdf}\\ \quad \begin{subfigure}{0.48\linewidth} \includegraphics[width=8.5cm]{pictures/FAPE.pdf} \caption{Feature And Position Extractor (FAPE).} \label{main_Fig.2-a.} \end{subfigure} \hfill \begin{subfigure}{0.48\linewidth} \includegraphics[width=8cm]{pictures/MFG.pdf} \caption{Missing Feature Generator.} \label{main_Fig.2-b.} \end{subfigure} \caption{The pipeline of \emph{ProxyFormer} is shown in the upper part. The completion of the point cloud is divided into two steps. First, we simply convert the incomplete seed feature into a predicted coarse missing part. Second, we send the predicted missing proxies and the coarse part into FoldingNet \cite{yang2018foldingnet} to obtain the predicted dense missing part. The true missing part is used for training only, so we block its back-propagation and directly employ the pretrained FAPE on the incomplete point cloud to generate the proxy. (a) The feature and position extractor is applied to obtain the seed feature and position encoding, which are combined into the so-called proxies. $P$ represents the count of points in the input point cloud. $C_1$ and $C_2$ are the dimensions of point cloud features. (b) The missing feature generator is used to generate the predicted seed feature from the incomplete seed feature. $N$ and $M$ denote the point counts of the incomplete seed feature and predicted seed feature.
$C$ means the dimensions of the seed feature and is divided into $U$ groups to speed up operations.} \label{main_Fig.2.} \end{figure*} The overall network structure of \emph{ProxyFormer} is shown in Fig. \ref{main_Fig.2.}. We will introduce our method in detail as follows. \subsection{Proxy Formation} \label{sec:PF} \noindent{\bfseries Proxy introduction.} A proxy represents a local region of the point clouds. All the proxies in this paper fuse two information: \textbf{feature} and \textbf{position}. The types of proxies are defined as follows: \begin{itemize} \vspace{-0.2cm} \item \textbf{Existing Proxies} (\emph{EP}): It combines incomplete seed feature and incomplete position encoding. (obtained by FAPE). \vspace{-0.2cm} \item \textbf{Missing Proxies} (\emph{MP}): It combines predicted seed feature and random position encoding. During the training process, \emph{MP} is also divided into: \begin{itemize} \vspace{-0.2cm} \item[-] \textbf{Predicted Missing Proxies} (\emph{pre-MP}): It is obtained by Missing Part Sensitive Transformer (Sec. \ref{sec:MPST}). \vspace{-0.15cm} \item[-] \textbf{True Missing Proxies} (\emph{true-MP}): It combines true missing seed feature and true missing position encoding. (obtained by pre-trained FAPE). It is only used for Proxy Alignment (Sec. \ref{sec:PA}). \end{itemize} \end{itemize} For clarity, we next explain how to obtain the information these proxies need. \noindent{\bfseries Feature and position extractor (FAPE).} For \textbf{feature} extraction, as shown in Fig. \ref{main_Fig.2-a.}, the point cloud of dimension $\left(P, 3\right)$ is sent to point transformer block \cite{zhao2021point}, and the center point cloud of $\left(\frac{P}{16}, 3\right)$ is obtained by farthest point sampling twice. The feature of $\left(\frac{P}{16}, C_2\right)$ is obtained through two vector attention calculations \cite{zhao2021point}. After that, we use a shared MLP to convert the feature to final seed feature. 
\begin{figure}[t] \centering \includegraphics[width=8.5cm]{pictures/Position_Encoding.pdf}\\ \caption{Details of the position extractor. $n = \frac{P}{16}$. $K$ is the count of neighbor points. $C_2$ and $C_{out}$ are the dimensions of point features. $Shared~MLP$ means shared multi-layer perceptron and $attn$ means attention score calculation.} \label{main_Fig.3.} \end{figure} For \textbf{position} encoding, we found that directly concatenating the point coordinates with extracted features \cite{qi2017pointnet++,zhao2019pointweb,wang2019dynamic} or simply using an MLP to upscale the three-dimensional coordinates \cite{yu2021pointr} is ineffective. We therefore design a new position extractor (as shown in Fig. \ref{main_Fig.3.}) to improve this. The coordinates and features of the center points after feature extraction are used as input. For each point, we take its adjacent $K$ points and use $\tilde{p}_i^k=p_i^k-p_i$ to calculate the relative position of the point, where $p_i^k\in{\mathbb{P}=\left\{p_i^1,p_i^2,...,p_i^K\right\}}$ denotes the neighbor point coordinates of $p_i$. We also perform neighbor point subtraction in the feature dimension using $\tilde{f}_i^k=|f_i^k-f_i|$, where, similarly, $f_i^k\in{\mathbb{F}=\left\{f_i^1,f_i^2,...,f_i^K\right\}}$ denotes the neighbor point features of $p_i$. After that, we get the coordinate information of $K \times 3$ and the feature information of $K \times C_2$. Then we concatenate them and transform the feature from $K \times (3+C_2)$ to $K \times C_{out}$ (the transition feature ${TF}_i^k$ for each neighboring point) using a shared MLP. After obtaining ${TF}_i^k$, we use an attention mechanism to learn a unique attention score for each channel of the point features and then aggregate them. The attention score is calculated, channel-wise multiplied with the feature and summed to obtain the final $PE$ (\emph{i.e.} position encoding) of each point. This process can be represented by Eq. (\ref{main_eq.1.}).
\begin{equation} \begin{aligned} PE=\sum_{k=1}^K\left(attn\left(\left\{{TF}_i^k\right\}\right)\cdot\left\{{TF}_i^k\right\}\right) \end{aligned} ~ , \label{main_eq.1.} \end{equation} where $\left\{{TF}_i^k\right\}$ is the set of transition features of the $K$-neighbor points and $attn()$ is a shared function (per-point MLPs) with learnable weights $W_{attn}$. The incomplete point cloud and the true missing part point cloud are sent into FAPE to get \emph{EP} and \emph{true-MP}, and the incomplete seed feature of \emph{EP} is used to predict the coarse missing part at the same time. \subsection{Missing Part Sensitive Transformer} \label{sec:MPST} Usually, query, key and value come from the same input. Many methods \cite{yu2021pointr,xiang2021snowflakenet,zhou2022seedformer} attempt to modify the source of the value to adapt to specific tasks (the left part of Fig. \ref{main_Fig.4.}). In contrast, we change the query source, taking the \emph{MP} with random position encoding as the query, to maximally mine the representation of the missing part from the features and positions of the existing proxies via the self-attention mechanism. In order to change the query source to \emph{MP}, we propose a Missing Feature Generator, which is specially designed to learn the missing part features from the existing features. The generation process is shown in Fig. \ref{main_Fig.2-b.}. Specifically, the incomplete seed feature of $N \times C$ is used as input, and the $C$-dimensional channels are divided into $U$ equal-length groups. Then, the output dimension of the convolution is determined by the point number $M$ of the predicted coarse missing part, which means that we convert each $\frac{C}{U}$ to $\frac{M}{N} \times \frac{C}{U}$. Lastly, we transform the transition feature into the predicted seed feature of $M \times C$ through a reshape operation. All channel groups use convolutional layers with shared parameters, reducing the amount of parameters and computation.
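The channel-grouped generation step just described can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the shared convolution is stood in for by one shared linear map per group, and all shapes and weights are toy assumptions.

```python
import numpy as np

def missing_feature_generator(feat, M, U, rng=None):
    """Sketch of the grouped feature generation: feat is the (N, C)
    incomplete seed feature, C divisible by U, M a multiple of N.
    Each C/U-channel group is expanded to (M/N)*(C/U) channels by ONE
    weight W shared across all U groups (standing in for the shared
    convolution), then reshaped so the point count grows from N to M."""
    N, C = feat.shape
    assert C % U == 0 and M % N == 0
    g, r = C // U, M // N                      # group width, expansion ratio
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((g, r * g)) * 0.1  # shared (illustrative) weight

    groups = []
    for u in range(U):
        x = feat[:, u * g:(u + 1) * g]         # (N, g) one channel group
        x = x @ W                              # (N, r*g) transition feature
        groups.append(x.reshape(M, g))         # (M, g) after reshape
    return np.concatenate(groups, axis=1)      # (M, C) predicted seed feature

# toy sizes: N=4 seed points, C=8 channels, U=2 groups, predict M=8 points
out = missing_feature_generator(np.ones((4, 8)), M=8, U=2)
print(out.shape)  # (8, 8)
```

Because the weight is shared across groups, the parameter count depends only on the group width and the expansion ratio, not on $C$ itself, which matches the stated motivation of reducing parameters and computation.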
The predicted missing feature is added to the random position encoding to obtain \emph{MP}. \begin{figure}[h] \centering \includegraphics[width=8cm]{pictures/MPSTransformer2.pdf}\\ \caption{Details of the Missing Part Sensitive Transformer. Compared with the vanilla transformer block and the PoinTr transformer block, we change the source of the query to make it more suitable for the prediction of the missing part.} \label{main_Fig.4.} \end{figure} In Sec. \ref{sec:PF}, we obtained \emph{EP} and \emph{true-MP}, and through the missing feature generator described above, we obtained \emph{MP}. We then design a Missing Part Sensitive Transformer to further explore their relationship and learn the representation of the missing points for subsequent completion work. Its structure is illustrated in Fig. \ref{main_Fig.4.}; it receives \emph{EP} and \emph{MP} as input and outputs \emph{pre-MP}. \emph{EP} is a matrix $\mathbb{E}$ of $N \times C$, and \emph{MP} is a matrix $\mathbb{M}$ of $M \times C$. The output \emph{pre-MP} is a matrix $\mathbb{P}$ of $M \times C$. We use the multi-head self-attention mechanism and add residual connections to obtain \emph{pre-MP}: \begin{equation} \begin{gathered} T=\mu \left(Q+\xi (Q,K,V)\right) ~ ,\\ \emph{\text{pre-MP}}=\sigma(T)+T ~ , \end{gathered} \label{main_eq.2.} \end{equation} where $Q=\mathbb{M} \times W^{Q}$, $K=\mathbb{E} \times W^{KV}$, $V=\mathbb{E} \times W^{KV}$. $\mu$ denotes layer normalization, $\xi$ denotes the multi-head attention calculation, and $\sigma$ denotes the feed-forward network. The attention of each head is calculated with $Q_i$, $K_i$, $V_i$ on head $i$: \begin{equation} \begin{gathered} attn_i=softmax\left(\frac{Q_i\left(K_i\right)^T}{\sqrt{d_k}}\right)V_i \end{gathered} ~ . \label{main_eq.3.} \end{equation} This module predicts the proxies of the missing points, which not only discovers feature associations between the missing and existing parts, but also converts random positions into meaningful position information.
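The query-from-\emph{MP} attention of Eqs. (2)-(3) can be sketched in a few lines. This is a single-head NumPy illustration under toy shapes, not the paper's code: layer norm, the multi-head split, and the residual/FFN of Eq. (2) are omitted, and all weights are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def missing_part_sensitive_attention(MP, EP, Wq, Wkv):
    """Query comes from the missing proxies MP (M x C); key and value
    both come from the existing proxies EP (N x C) through one shared
    projection Wkv, mirroring Fig. 4."""
    Q = MP @ Wq                                # (M, d)
    K = EP @ Wkv                               # (N, d)
    V = EP @ Wkv                               # (N, d), same projection as K
    d_k = Q.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d_k))   # (M, N), each row sums to 1
    return scores @ V                          # (M, d) attended missing proxies

rng = np.random.default_rng(0)
M, N, C, d = 3, 5, 4, 4                        # toy sizes, not the paper's
mp = rng.standard_normal((M, C))               # MP with random positions
ep = rng.standard_normal((N, C))               # EP from the partial input
pre_mp = missing_part_sensitive_attention(
    mp, ep, rng.standard_normal((C, d)), rng.standard_normal((C, d)))
print(pre_mp.shape)  # (3, 4)
```

Each row of the output is a convex combination of projected existing proxies, which is exactly why the randomly initialized positions in \emph{MP} can be pulled toward meaningful position information.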
The \emph{pre-MP} is next used for proxy alignment with \emph{true-MP}. \subsection{Proxy Alignment} \label{sec:PA} In this subsection, we describe the Proxy Alignment strategy and how this operation assists the point cloud completion task. The detailed computational graph of \emph{ProxyFormer} is plotted in Fig. \ref{main_Fig.8.}. Accordingly, \emph{pre-MP} and \emph{true-MP} can be formulated as: \begin{equation} \begin{gathered} \emph{\text{pre-MP}}=T\left({PE}_{R}\oplus\theta\left(\omega\left(C_i\right)\right)\right)\\ \emph{\text{true-MP}}={PE}_{T}\oplus\omega\left(C_m\right) \end{gathered} ~ . \label{main_eq.4.} \end{equation} In order to refine the prediction error in \emph{pre-MP}, a proxy alignment constraint is imposed on the model, which can be formulated as: \begin{equation} \begin{gathered} l_p=MSE\left(\emph{\text{pre-MP}},\emph{\text{true-MP}}\right) \end{gathered} ~ , \label{main_eq.5.} \end{equation} where $l_p$ denotes the alignment loss that we apply to our training loss (Sec. \ref{sec:training loss}) and $MSE$ denotes the mean squared error. After correcting the \emph{pre-MP}, it is used as a feature and sent to FoldingNet \cite{yang2018foldingnet} for coarse-to-fine conversion, and then combined with the previously predicted coarse missing part to obtain the dense missing part. \subsection{Training Loss} \label{sec:training loss} \noindent{\bfseries Chamfer Distance.} We use the average Chamfer Distance (CD) \cite{fan2017point} as the first type of our completion loss. \noindent{\bfseries Proxy alignment loss.} We use the $MSE$ loss between \emph{pre-MP} and \emph{true-MP} as the second type of loss. To sum up, as shown in Fig. \ref{main_Fig.8.}, the loss used in this paper consists of three parts: (1) $l_{c1}$, the CD between the predicted coarse missing part $C_{pcm}$ and the true center point $C_{tcm}$ of the missing part; (2)
$l_{c2}$, the CD between the predicted complete point cloud $C_{pc}$ and the GT $C_{gt}$; (3) $l_{p}$, the alignment loss between \emph{pre-MP} and \emph{true-MP}. We use the weighted sum of these three terms for network training (we set $\gamma$ to $1.5$ in experiments): \begin{equation} \begin{gathered} L=l_{c1}+l_{c2}+\gamma l_p \end{gathered} ~ . \label{main_eq.6.} \end{equation} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{pictures/Proxies_Alignment.pdf} \caption{The computational graph for \emph{ProxyFormer}. The part framed by the blue dotted line is used for training only. For the left part, we input the incomplete point cloud $C_i$ and use \emph{FAPE} ($\omega$) to get the feature $F_{ep}$ in \emph{EP}. $F_{ep}$ is not only sent to a linear projection layer $\chi$ to generate the predicted coarse missing part $C_{pcm}$, but also sent to the missing feature generator ($\theta$) to generate the feature $F_{rmp}$. $F_{rmp}$ is added ($\oplus$) to the random position distribution ${PE}_{R}$ and sent to the missing part sensitive transformer ($T$) to get \emph{pre-MP}. Then \emph{pre-MP} and $C_{pcm}$ are sent to FoldingNet ($f$), and the result is spliced with the input $C_i$ to form the predicted complete point cloud $C_{pc}$. For the right part, we input the true missing part point cloud $C_m$ and use the same \emph{FAPE} ($\omega$) to get the feature $F_{tmp}$ and position ${PE}_{T}$ in \emph{true-MP}. \emph{pre-MP} is aligned with \emph{true-MP} to correct deviation values. We also retain the center point $C_{tcm}$ obtained after $C_m$ is downsampled by \emph{FAPE}.
$l_p$, $l_{c1}$ and $l_{c2}$ are the losses we use, which are detailed in Sec. \ref{sec:training loss}.} \label{main_Fig.8.} \end{figure} \section{Experiments} In this section, we apply \emph{ProxyFormer} to two common point cloud completion benchmarks, PCN \cite{yuan2018pcn} and KITTI \cite{geiger2013vision}, to evaluate the completion ability of the network, and we also train and test on two other datasets, ShapeNet-55 and ShapeNet-34, proposed by PoinTr \cite{yu2021pointr}. Finally, through ablation experiments, we demonstrate the effectiveness of each module in the proposed \emph{ProxyFormer}. \subsection{Point Cloud Completion on PCN Dataset} \noindent{\bfseries Dataset and evaluation metric.} The PCN dataset \cite{yuan2018pcn} is created from the ShapeNet dataset \cite{chang2015shapenet}, including eight categories with a total of 30974 CAD models. When preparing the data, we use a missing part extractor to extract the missing part of the point cloud from the complete point cloud and then downsample it to 3584 points as the true missing part (this process is described in detail in the supplementary material). We use the L1 CD to evaluate the results of each method. In addition, we also use the density-aware chamfer distance (DCD) \cite{wu2021density} as a quantitative evaluation criterion, which retains measurement ability similar to CD while better judging the visual quality of the result. \begin{figure*}[t] \centering \includegraphics[width=17cm]{pictures/PCN_Dataset_Result_Picture.pdf}\\ \caption{The visualization results of each method on the PCN dataset, showing Cabinet, Car, Chair and Vessel from top to bottom.} \label{main_Fig.9.} \end{figure*} \noindent{\bfseries Quantitative comparison.} According to the results in Table \ref{main_tab.1.}, on the PCN dataset, our method substantially surpasses PoinTr \cite{yu2021pointr} and reaches the lowest CD in the cabinet, car, sofa and table categories.
Further, as can be seen from the DCD shown in Table \ref{main_tab.2.}, our method outperforms state-of-the-art in all the categories, which means that our method is more able to take into account the rationality of shape and distribution density while complementing the object. \begin{table}[h] \centering \caption{Quantitative comparison of PCN dataset. Point resolutions for the output and GT are 16384. For CD, lower is better.} \scalebox{0.6}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \cline{1-10} \multirow{2}{*}{Methods} & \multicolumn{9}{c}{Chamfer Distance($10^{-3}$)} \\ \cline{2-10} & Air & Cab & Car & Cha & Lam & Sof & Tab & Ves & Ave \\ \cline{1-10} FoldingNet \cite{yang2018foldingnet} & 9.49 & 15.80 & 12.61 & 15.55 & 16.41 & 15.97 & 13.65 & 14.99 & 14.31 \\ TopNet \cite{tchapmi2019topnet} & 7.61 & 13.31 & 10.90 & 13.82 & 14.44 & 14.78 & 11.22 & 11.12 & 12.15 \\ AtlasNet \cite{groueix2018papier} & 6.37 & 11.94 & 10.10 & 12.06 & 12.37 & 12.99 & 10.33 & 10.61 & 10.85 \\ PCN \cite{yuan2018pcn} & 5.50 & 22.70 & 10.63 & 8.70 & 11.00 & 11.34 & 11.68 & 8.59 & 9.64 \\ GRNet \cite{xie2020grnet} & 6.45 & 10.37 & 9.45 & 9.41 & 7.96 & 10.51 & 8.44 & 8.04 & 8.83 \\ CRN \cite{wang2020cascaded} & 4.79 & 9.97 & 8.31 & 9.49 & 8.94 & 10.69 & 7.81 & 8.05 & 8.51 \\ NSFA \cite{zhang2020detail} & 4.76 & 10.18 & 8.63 & 8.53 & 7.03 & 10.53 & 7.35 & 7.48 & 8.06 \\ PMP-Net \cite{wen2021pmp} & 5.65 & 11.24 & 9.64 & 9.51 & 6.95 & 10.83 & 8.72 & 7.25 & 8.73 \\ PoinTr \cite{yu2021pointr} & 4.75 & 10.47 & 8.68 & 9.39 & 7.75 & 10.93 & 7.78 & 7.29 & 8.38 \\ PMP-Net++ \cite{wen2022pmp} & 4.39 & 9.96 & 8.53 & 8.09 & 6.06 & 9.82 & 7.17 & 6.52 & 7.56 \\ SnowflakeNet \cite{xiang2021snowflakenet} & 4.29 & 9.16 & 8.08 & 7.89 & 6.07 & 9.23 & 6.55 & 6.40 & 7.21 \\ SeedFormer \cite{zhou2022seedformer} & \textbf{3.85} & 9.05 & 8.06 & \textbf{7.06} & \textbf{5.21} & 8.85 & 6.05 & \textbf{5.85} & \textbf{6.74} \\ \cline{1-10} ProxyFormer(Ours) & 4.01 & \textbf{9.01} & \textbf{7.88} & 7.11 & 5.35 & 
\textbf{8.77} & \textbf{6.03} & 5.98 & 6.77 \\ \cline{1-10} \end{tabular}} \label{main_tab.1.} \end{table} \begin{table}[h] \centering \caption{Quantitative comparison of PCN dataset. Point resolutions for the output and GT are 16384. For DCD, lower is better.} \scalebox{0.6}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \cline{1-10} \multirow{2}{*}{Methods} & \multicolumn{9}{c}{Density-aware Chamfer Distance} \\ \cline{2-10} & Air & Cab & Car & Cha & Lam & Sof & Tab & Ves & Ave \\ \cline{1-10} GRNet \cite{xie2020grnet} & 0.688 & 0.582 & 0.610 & 0.607 & 0.644 & 0.622 & 0.578 & 0.642 & 0.622 \\ PoinTr \cite{yu2021pointr} & 0.574 & 0.611 & 0.630 & 0.603 & 0.628 & 0.669 & 0.556 & 0.614 & 0.611 \\ SnowflakeNet \cite{xiang2021snowflakenet} & 0.560 & 0.597 & 0.603 & 0.582 & 0.598 & 0.633 & 0.521 & 0.583 & 0.585 \\ SeedFormer \cite{zhou2022seedformer} & 0.557 & 0.592 & 0.598 & 0.579 & 0.585 & \textbf{0.626} & 0.520 & 0.605 & 0.583 \\ \cline{1-10} ProxyFormer(Ours) & \textbf{0.555} & \textbf{0.590} & \textbf{0.597} & \textbf{0.571} & \textbf{0.562} & \textbf{0.626} & \textbf{0.518} & \textbf{0.507} & \textbf{0.577} \\ \cline{1-10} \end{tabular}} \label{main_tab.2.} \end{table} \noindent{\bfseries Qualitative comparison.} In Fig. \ref{main_Fig.9.}, we visualize the completion results of different methods on the PCN dataset. Compared with other methods, the results show that our method can perceive the position of the missing points while completing, and reduces noisy points in the refinement process. For example, as can be seen from the chair in the third row, except for the chair generated by PCN \cite{yuan2018pcn}, which is deformed to a large extent, the other methods successfully recover the complete chair, but leave many noisy points around it. The chair completed by our method is more visually plausible. In addition, it is evident from the chair legs and back that the details recovered by our method are more prominent.
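For reference, the CD metric reported in these tables can be sketched as follows. This is a standard L1 Chamfer Distance in NumPy, not the authors' evaluation code; conventions differ across papers (sum vs. mean of the two directions), so the exact normalization here is an assumption.

```python
import numpy as np

def chamfer_l1(P, Q):
    """L1 Chamfer Distance between two point sets P (n, 3) and Q (m, 3):
    average nearest-neighbour Euclidean distance from P to Q and from
    Q to P, with the two directions averaged (one common convention)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (n, m)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# toy example: gt has one extra point at distance 0.5 from pred
pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
print(chamfer_l1(pred, gt))  # 0.5 * (0 + 0.5/3) ≈ 0.0833
```

The brute-force pairwise matrix is fine for small sets; at the 16384-point resolution of the tables above, a KD-tree nearest-neighbour query would be the practical choice.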
\subsection{Point Cloud Completion on ShapeNet-55/34} \begin{table*}[t] \centering \caption{Quantitative comparison on ShapeNet-55. For L2 CD $\times1000$ and DCD, lower is better. For F1-Score@1\%, higher is better.} \scalebox{0.8}{ \begin{tabular}{c|ccccc|cccccc|ccc} \hline Methods & Table & Chair & Plane & Car & Sofa & CD-S & CD-M & CD-H & DCD-S & DCD-M & DCD-H & CD-Avg & DCD-Avg & F1 \\ \hline FoldingNet \cite{yang2018foldingnet} & 2.53 & 2.81 & 1.43 & 1.98 & 2.48 & 2.67 & 2.66 & 4.05 & - & - & - & 3.12 & - & 0.082 \\ PCN \cite{yuan2018pcn} & 2.13 & 2.29 & 1.02 & 1.85 & 2.06 & 1.94 & 1.96 & 4.08 & 0.570 & 0.609 & 0.676 & 2.66 & 0.618 & 0.133 \\ TopNet \cite{tchapmi2019topnet} & 2.21 & 2.53 & 1.14 & 2.18 & 2.36 & 2.26 & 2.16 & 4.30 & - & - & - & 2.91 & - & 0.126 \\ PFNet \cite{huang2020pf} & 3.95 & 4.24 & 1.81 & 2.53 & 3.34 & 3.83 & 3.87 & 7.97 & - & - & - & 5.22 & - & 0.339 \\ GRNet \cite{xie2020grnet} & 1.63 & 1.88 & 1.02 & 1.64 & 1.72 & 1.35 & 1.71 & 2.85 & 0.545 & 0.581 & 0.650 & 1.97 & 0.592 & 0.238 \\ PoinTr \cite{yu2021pointr} & 0.81 & 0.95 & 0.44 & 0.91 & 0.79 & 0.58 & 0.88 & 1.79 & 0.525 & 0.562 & 0.637 & 1.09 & 0.575 & 0.464 \\ SeedFormer \cite{zhou2022seedformer} & 0.72 & \textbf{0.81} & 0.40 & 0.89 & 0.71 & 0.50 & 0.77 & \textbf{1.49} & 0.513 & 0.549 & 0.612 & \textbf{0.92} & 0.558 & 0.472 \\ \hline Ours & \textbf{0.70} & 0.83 & \textbf{0.34} & \textbf{0.78} & \textbf{0.69} & \textbf{0.49} & \textbf{0.75} & 1.55 & \textbf{0.502} & \textbf{0.536} & \textbf{0.608} & 0.93 & \textbf{0.549} & \textbf{0.483} \\ \hline \end{tabular}} \label{main_tab.3.} \end{table*} \begin{table*}[h] \centering \caption{Quantitative comparison on ShapeNet-34. For L2 CD $\times1000$ and DCD, lower is better. 
For F1-Score@1\%, higher is better.} \scalebox{0.6}{ \begin{tabular}{c|ccccccccc|ccccccccc} \hline \multirow{2}{*}{Methods} & \multicolumn{9}{c|}{34~seen~categories} & \multicolumn{9}{c}{21~unseen~categories} \\ & CD-S & CD-M & CD-H & DCD-S & DCD-M & DCD-H & CD-Avg & DCD-Avg & F1 & CD-S & CD-M & CD-H & DCD-S & DCD-M & DCD-H & CD-Avg & DCD-Avg & F1 \\ \hline FoldingNet & 1.86 & 1.81 & 3.38 & - & - & - & 2.35 & - & 0.139 & 2.76 & 2.74 & 5.36 & - & - & - & 3.62 & - & 0.095 \\ PCN & 1.87 & 1.81 & 2.97 & 0.571 & 0.617 & 0.683 & 2.22 & 0.624 & 0.150 & 3.17 & 3.08 & 5.29 & 0.601 & 0.638 & 0.692 & 3.85 & 0.644 & 0.101 \\ TopNet & 1.77 & 1.61 & 3.54 & - & - & - & 2.31 & - & 0.171 & 2.62 & 2.43 & 5.44 & - & - & - & 3.50 & - & 0.121 \\ PFNet & 3.16 & 3.19 & 7.71 & - & - & - & 4.68 & - & 0.347 & 5.29 & 5.87 & 13.33 & - & - & - & 8.16 & - & 0.322 \\ GRNet & 1.26 & 1.39 & 2.57 & 0.550 & 0.594 & 0.656 & 1.74 & 0.600 & 0.251 & 1.85 & 2.25 & 4.87 & 0.583 & 0.623 & 0.670 & 2.99 & 0.625 & 0.216 \\ PoinTr & 0.76 & 1.05 & 1.88 & 0.533 & 0.570 & 0.622 & 1.23 & 0.575 & 0.421 & 1.04 & 1.67 & 3.44 & 0.558 & 0.608 & 0.647 & 2.05 & 0.604 & 0.384 \\ SeedFormer & 0.48 & 0.70 & \textbf{1.30} & 0.513 & 0.561 & 0.608 & 0.83 & 0.561 & 0.452 & 0.61 & \textbf{1.07} & \textbf{2.35} & 0.541 & 0.587 & 0.629 & \textbf{1.34} & 0.586 & 0.402 \\ \hline Ours & \textbf{0.44} & \textbf{0.67} & 1.33 & \textbf{0.506} & \textbf{0.557} & \textbf{0.606} & \textbf{0.81} & \textbf{0.556} & \textbf{0.466} & \textbf{0.60} & 1.13 & 2.54 & \textbf{0.538} & \textbf{0.584} & \textbf{0.627} & 1.42 & \textbf{0.583} & \textbf{0.415} \\ \hline \end{tabular}} \label{main_tab.4.} \end{table*} \noindent{\bfseries Dataset and evaluation metric.} We also evaluate our model on two other datasets, ShapeNet-55 and ShapeNet-34, proposed in PoinTr \cite{yu2021pointr}. In the two datasets, the input incomplete point cloud has 2048 points, and the complete point cloud contains 8192 points. 
Following \cite{yu2021pointr}, during training we randomly select a viewpoint and a value from 2048 to 6144 (25\% to 75\% of the complete point cloud), delete the corresponding points, and then downsample the remaining points to 2048 as the input for model training. The deleted part is downsampled to 1536 points and serves as the ground-truth missing part. During testing, we choose 8 fixed viewpoints and set the number of missing points to 2048, 4096 or 6144 (25\%, 50\% or 75\% of the complete point cloud), corresponding to three difficulty levels (simple, moderate and hard). We use L2 CD, DCD and F-Score as evaluation metrics. \noindent{\bfseries Quantitative comparison.} We list the quantitative performance of several methods on ShapeNet-55 and ShapeNet-34 in Tables \ref{main_tab.3.} and \ref{main_tab.4.}, respectively. We use CD-S, CD-M and CD-H to denote the CD results under the simple, moderate and hard settings; the same applies to DCD (a dash in the table indicates that the DCD value of the method is not competitive). It can be seen from Table \ref{main_tab.3.} that \emph{ProxyFormer} achieves the best performance in most listed categories and the lowest CD under the simple and moderate settings. As for DCD, our method has the lowest values under all three settings, which shows that the objects completed by \emph{ProxyFormer} have the density distribution closest to the GT. In terms of F1-Score, our method improves on SeedFormer by 2.3\%, reaching the highest value. Similarly, in Table \ref{main_tab.4.}, the CD, DCD and F1-Score of \emph{ProxyFormer} on the 34 seen categories clearly surpass PoinTr \cite{yu2021pointr}, and we still obtain the lowest DCD under all three settings. On the 21 unseen categories, \emph{ProxyFormer} also achieves the lowest DCD and the highest F1-Score, which demonstrates the generalization ability of \emph{ProxyFormer}.
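The viewpoint-based cropping protocol described under \emph{Dataset and evaluation metric} can be sketched as follows. The viewpoint distribution, whether the deleted points are those nearest to or furthest from the viewpoint, and the use of random subsampling in place of farthest point sampling are implementation details assumed here, not guaranteed to match \cite{yu2021pointr} exactly.

```python
import numpy as np

def make_training_pair(complete, n_remove, n_in=2048, n_miss=1536, seed=0):
    """Crop a complete cloud (P, 3) into a partial input and a GT missing
    part, in the spirit of the ShapeNet-55/34 protocol. n_remove is drawn
    from [2048, 6144] during training (25%-75% of 8192 points)."""
    rng = np.random.default_rng(seed)
    view = rng.normal(size=3)
    view /= np.linalg.norm(view)                     # random unit viewpoint
    # sort points by squared distance to the viewpoint, delete the nearest
    order = np.argsort(((complete - view) ** 2).sum(axis=1))
    removed, kept = complete[order[:n_remove]], complete[order[n_remove:]]
    # downsample both parts to fixed sizes (random subset as a stand-in
    # for farthest point sampling)
    partial = kept[rng.choice(len(kept), n_in, replace=len(kept) < n_in)]
    missing = removed[rng.choice(len(removed), n_miss, replace=False)]
    return partial, missing
```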
(More results will be presented in the supplementary material.) \subsection{Point Cloud Completion on KITTI} \noindent{\bfseries Dataset and evaluation metric.} To further evaluate our proposed model, we test it on the real-scanned KITTI dataset \cite{geiger2013vision}, which has no GT values as a reference, and some of whose data are very sparse. We use Fidelity Distance and Minimal Matching Distance (MMD) as evaluation metrics. \begin{figure}[h] \centering \includegraphics[width=8.5cm]{pictures/KITTI_Dataset_Results.pdf}\\ \caption{The visualization results on the KITTI dataset. To better show the effect of completion, we provide two views for each car.} \label{main_Fig.11.} \end{figure} \noindent{\bfseries Quantitative and Qualitative comparison.} Following GRNet \cite{xie2020grnet}, we fine-tune our pretrained model on ShapeNetCars (the cars in the ShapeNet dataset) and evaluate it on the KITTI dataset; the evaluation results are shown in Table \ref{main_tab.5.}. From this we can see that our method achieves the state of the art on MMD (since both our method and PoinTr \cite{yu2021pointr} merge the input into the final result, the Fidelity Distance is $0$ for both). As shown in Fig. \ref{main_Fig.11.}, our method performs well on such real scan data: even if the input point cloud is very sparse, our method can restore its shape well, and a comparison with the results of GRNet shows that the point cloud generated by \emph{ProxyFormer} is smoother, with fewer noisy points, and more visually pleasing. \begin{table*} \centering \setlength{\abovecaptionskip}{-0.01cm} \caption{Quantitative comparison on the KITTI dataset.
For Fidelity Distance and Minimal Matching Distance (MMD), lower is better.} \scalebox{0.68}{ \begin{tabular}{c|cccccccccc|c} \hline & AtlasNet \cite{groueix2018papier} & PCN \cite{yuan2018pcn} & FoldingNet \cite{yang2018foldingnet} & TopNet \cite{tchapmi2019topnet} & MSN \cite{liu2020morphing} & NSFA \cite{zhang2020detail} & CRN \cite{wang2020cascaded} & GRNet \cite{xie2020grnet} & PoinTr \cite{yu2021pointr} & SeedFormer \cite{zhou2022seedformer} & Ours \\ \hline Fidelity & 1.759 & 2.235 & 7.467 & 5.354 & 0.434 & 1.281 & 1.023 & 0.816 & \textbf{0.000} & 0.151 & \textbf{0.000} \\ MMD & 2.108 & 1.366 & 0.537 & 0.636 & 2.259 & 0.891 & 0.872 & 0.568 & 0.526 & 0.516 & \textbf{0.508} \\ \hline \end{tabular}} \label{main_tab.5.} \end{table*} \subsection{Ablation Studies} In this subsection, we conduct ablation experiments on \emph{ProxyFormer} on the PCN dataset \cite{yuan2018pcn} to demonstrate the effectiveness of our proposed components. \noindent{\bfseries Model Design Analysis.} The results of removing each component are listed in Table \ref{main_tab.6.}. The baseline model A only uses the Point Transformer for feature extraction and then sends this feature into a vanilla transformer encoder to obtain the feature for FoldingNet. We then add the position extractor to extract the position encoding for each point (Model B). It can be seen that the position extractor we designed reduces the CD of the baseline by $1.06$. After using the missing part sensitive transformer for missing proxy prediction (Model C), we can observe that the CD drops significantly, to $7.74$. When proxy alignment comes into play, the CD value drops by a further $0.97$. \begin{table}[h] \centering \caption{Ablation study of each component.
We add components including Position Extractor (PE), Missing Part Sensitive Transformer (Sensitive) and Proxy Alignment (PA) step by step.} \begin{tabular}{c|ccc|c} \hline Model & PE & Sensitive & PA & CD \\ \hline A & & & & 11.08 \\ B & \checkmark & & & 10.02 \\ C & \checkmark & \checkmark & & 7.74 \\ D & \checkmark & \checkmark & \checkmark & 6.77 \\ \hline \end{tabular} \label{main_tab.6.} \end{table} After conducting ablation experiments on the proposed modules, we further demonstrate the irreplaceability of the position extractor through one more ablation experiment. \noindent{\bfseries Position Extractor.} Our position extractor can combine the coordinate and feature information of the point cloud to more accurately represent the correlation and similarity between points. In this experiment, we compare our proposed position encoding method with two alternatives: (1) directly using the 3D coordinates as the position encoding; (2) an MLP-style position encoding, which performs a simple upscaling operation on the 3D coordinates of the point cloud. The results in Table \ref{main_tab.7.} show that directly using the 3D coordinates provides very limited position information and that the MLP cannot extract the positional information of the point cloud well. Our proposed position encoding method can perceive the geometric structure of the point cloud well, and in this process it is optimal to fuse the coordinate and feature information of 16 nearby points. More ablation experiments and analysis will be given in the supplementary material.
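To illustrate the kind of $k$-NN fusion ablated in Table \ref{main_tab.7.}, a non-learned stand-in for the position extractor might aggregate, for each point, the relative coordinates and features of its 16 nearest neighbours. The actual module is learned, so this hand-written aggregation is only a structural sketch of the idea, not the paper's architecture.

```python
import numpy as np

def knn_position_encoding(xyz, feat, k=16):
    """Fuse each point's coordinates with the relative coordinates and
    features of its k nearest neighbours. xyz: (N, 3), feat: (N, C);
    returns (N, 3 + 3 + C). Illustrative stand-in for a learned module."""
    # pairwise distances and k nearest neighbour indices
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)  # (N, N)
    nn = np.argsort(d, axis=1)[:, :k]                         # (N, k)
    rel = xyz[nn] - xyz[:, None, :]                           # (N, k, 3)
    # average the concatenated relative coords and neighbour features
    agg = np.concatenate([rel, feat[nn]], axis=-1).mean(axis=1)
    return np.concatenate([xyz, agg], axis=-1)
```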
\begin{table}[h] \centering \setlength{\abovecaptionskip}{-0.00cm} \caption{Ablation study of the Position Extractor of the FAPE Module.} \scalebox{0.9}{\begin{tabular}{c|c|c} \hline Methods & Attempts & CD-Avg \\ \hline \multirow{2}{*}{w/o~Position~Extractor} & w/~3D coordinates & 9.63 \\ & w/~MLP & 7.83 \\ \hline \multirow{3}{*}{w/~Position~Extractor} & num~of~neighbor~=~8 & 6.86 \\ & num~of~neighbor~=~16 & \textbf{6.77} \\ & num~of~neighbor~=~32 & 6.92 \\ \hline \end{tabular}} \label{main_tab.7.} \end{table} \subsection{Complexity Analysis} \label{sec:complexity} Our method achieves the best performance on many metrics on the PCN, ShapeNet-55, ShapeNet-34 and KITTI datasets. In Table \ref{main_tab.8.}, we list the number of parameters (Params), the theoretical computation cost (FLOPs), the average Chamfer Distance (CD-Avg) and the average Density-aware Chamfer Distance (DCD-Avg) of our method and six other methods. It can be seen that our method obtains the lowest DCD-Avg while having the smallest FLOPs, and is second best in terms of CD, only slightly inferior to SeedFormer \cite{zhou2022seedformer}. Since the transformer decoder is no longer needed in \emph{ProxyFormer}, the number of parameters is also greatly reduced compared to PoinTr \cite{yu2021pointr}, which shows that our method strikes a better balance between computational cost and performance. \vspace{-0.2cm} \begin{table}[h] \centering \caption{Complexity analysis. We show the number of parameters (Params) and FLOPs of our method and six existing methods.
We also provide the distance metrics CD-Avg and DCD-Avg on the PCN dataset.} \scalebox{0.85}{\begin{tabular}{c|cc|cc} \hline Methods & Params & FLOPs & CD-Avg & DCD-Avg \\ \hline FoldingNet \cite{yang2018foldingnet} & \textbf{2.41M} & 27.65G & 14.31 & 0.688 \\ PCN \cite{yuan2018pcn} & 6.84M & 14.69G & 9.64 & 0.651 \\ GRNet \cite{xie2020grnet} & 76.71M & 25.88G & 8.83 & 0.622 \\ PoinTr \cite{yu2021pointr} & 30.9M & 10.41G & 8.38 & 0.611 \\ SnowflakeNet \cite{xiang2021snowflakenet} & 19.32M & 10.32G & 7.21 & 0.585 \\ SeedFormer \cite{zhou2022seedformer} & 3.20M & 29.61G & \textbf{6.74} & 0.583 \\ \hline Ours & 12.16M & \textbf{9.88G} & 6.77 & \textbf{0.577} \\ \hline \end{tabular}} \label{main_tab.8.} \end{table} \vspace{-0.5cm} \section{Conclusion} In this paper, we propose a new point cloud completion framework named \emph{ProxyFormer}, which designs a missing part sensitive transformer to generate missing proxies. We extract features and positions for the missing points and form point proxies. We regularize the distribution of the predicted point proxies through proxy alignment, so as to better complete the input partial point clouds. Experiments also show that our method achieves state-of-the-art performance on multiple metrics on several challenging benchmark datasets and has the fastest inference speed. \section*{Acknowledgement} This work was supported in part by the National Key Research and Development Program of China under Grant 2021ZD0113203 and the Natural Science Foundation of China under Grant 62272227. {\small \bibliographystyle{ieee_fullname}
Q: Function "optional trait bound" in Rust

I want to have something like an optional trait bound for a function, where if T implements that trait, do something:

    fn test<T: Eq + ?Debug>(a: T, b: T) {
        if a != b {
            println!("Not equal!");
            if (T impl Debug) {
                println!("{:?} != {:?}", a, b);
            }
        }
    }

A: As @user4815162342 commented, using specialization, this is possible. I'll provide a slightly different approach from what they specified in their comment, to keep the same if ... { ... } setup that you had in your original code.

The idea is to have a trait AsMaybeDebug with an associated type Debug, which always implements Debug, and a function to go from &Self to Option<&Self::Debug>:

    trait AsMaybeDebug {
        type Debug: Debug;
        fn as_maybe_debug(&self) -> Option<&Self::Debug>;
    }

After this we make a default impl for all T, with the Debug type being !, the never type, and always return None:

    impl<T> AsMaybeDebug for T {
        default type Debug = !;
        default fn as_maybe_debug(&self) -> Option<&Self::Debug> {
            None
        }
    }

Instead of the never type, you could choose any type that always implements Debug, while still returning None. Afterwards we specialize for T: Debug by returning Self:

    impl<T: Debug> AsMaybeDebug for T {
        type Debug = Self;
        fn as_maybe_debug(&self) -> Option<&Self::Debug> {
            Some(self)
        }
    }

Finally, in test we just call as_maybe_debug and check if T: Debug:

    fn test<T: Eq>(a: T, b: T) {
        if a != b {
            println!("Not equal!");
            if let (Some(a), Some(b)) = (a.as_maybe_debug(), b.as_maybe_debug()) {
                println!("{:?} != {:?}", a, b);
            }
        }
    }

You can check in the playground both that it works and that the assembly generated for test_non_debug doesn't have any debugging calls, only the single call to std::io::_print. It unfortunately isn't possible to retrieve the original a or b inside the if after calling as_maybe_debug. This is due to <Self as AsMaybeDebug>::Debug not being convertible back to Self. This can be fixed, but not easily, as it requires updates to the standard library.
Requiring AsMaybeDebug::Debug: AsRef<Self> doesn't work, for 2 reasons:

* There is no impl<T> AsRef<T> for T yet; this is due to specialization still being incomplete, I assume.
* There is no impl<T> AsRef<T> for ! yet. Not sure if this impl can be made even with specialization or not, but it would be required.

Also, although specialization can be unsound, I believe that the trait and its impls cannot be used for unsoundness; you would need a specific setup to be able to generate unsoundness from it, which this lacks.

A: As mentioned in the comments, you're looking for the impls crate, which does exactly what you want.

    if impls!(T: Debug) { ... }

Just for the sake of completeness, here's how you do it without an external crate dependency. I'm paraphrasing from the way the impls developer explains the trick. Let's say we want to check whether some type implements Debug. First, let's define the "base case".

    trait NotDebug {
        const IMPLS: bool = false;
    }

We'll also provide a blanket implementation so that all types (which don't have a better answer) have an IMPLS constant equal to false.

    impl<T> NotDebug for T {}

Now, let's make a simple type with a single generic type parameter.

    struct IsDebug<T>(std::marker::PhantomData<T>);

PhantomData is conceptually nonexistent and exists only to anchor the generic type T to our IsDebug. We can think of IsDebug as being effectively a singleton struct. Now, we would like IsDebug::<T>::IMPLS to be true if (and only if) T implements Debug. Currently, IsDebug::<T>::IMPLS is always false, by the blanket implementation of NotDebug. But we can specify an inherent impl that applies conditionally.

    impl<T: Debug> IsDebug<T> {
        const IMPLS: bool = true;
    }

Since this is an inherent impl on IsDebug itself, not a trait implementation, it takes precedence over the NotDebug blanket implementation. In any case where T: Debug, the inherent impl kicks in and we get true.
In any other case, the inherent impl fails, so we get the fallback blanket implementation which gives false. Try it in the Rust Playground!
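Assembling the pieces of that second answer, a minimal self-contained sketch looks like this (the Opaque test type and the assertions in main are additions for demonstration; the trait, probe struct and constants are as in the answer, and this compiles on stable Rust with no nightly features):

```rust
use std::fmt::Debug;
use std::marker::PhantomData;

// Base case: a blanket trait gives every type IMPLS = false.
trait NotDebug {
    const IMPLS: bool = false;
}
impl<T> NotDebug for T {}

// Probe type; PhantomData only anchors the generic parameter T.
struct IsDebug<T>(PhantomData<T>);

// Inherent impls take precedence over trait items, so this constant
// shadows the blanket one exactly when T: Debug holds.
impl<T: Debug> IsDebug<T> {
    const IMPLS: bool = true;
}

struct Opaque; // deliberately does not implement Debug

fn main() {
    assert!(IsDebug::<u32>::IMPLS); // u32: Debug -> inherent impl wins
    assert!(!IsDebug::<Opaque>::IMPLS); // falls back to the blanket trait
    println!("ok");
}
```

Note that the choice between the two constants is made where a concrete type is known, so this is a compile-time check rather than anything evaluated at runtime.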
Keith Summers joins Colliers Tysons Corner office as vice president Published June 24, 2013 by Paula C. Squires Colliers International announces that Keith Summers has joined the firm's Tysons Corner office as vice president. Summers will specialize in tenant and landlord representation in the Northern Virginia suburban office market. He is a former regional manager and assistant vice president of California-based PS Business Parks Inc., a publicly traded real estate investment trust (REIT) encompassing more than 29 million square feet nationwide. Before joining PS Business Parks, Summers worked as an associate broker and vice president at Grubb & Ellis Co. in Tysons Corner. His clients included MetLife, The Salvation Army, Qwest Communications and Vonage. While at Grubb & Ellis, Summers also was instrumental in developing a life sciences specialty practice group, which represented the interests of biotech and life sciences organizations nationwide. He currently serves on the Commercial Real Estate Development Association's (NAIOP) Executive Committee and board of directors, and chairs its Education Program Committee.
1. It would be a place you could meet your mom for coffee. 2. It would be a place where work could get done. 3. It would be a place for music, art, and inspiration. Now that she has been at it for a while, Amanda believes they are achieving her dream with room to grow. She explains, "I wanted to open a place that I would hang out in, a place where people feel at home. Detroit has always felt like that kind of place to me, so of course this seemed like the natural place to start this kind of business." The shop accommodates community gatherings, the Grandmont-Rosedale SOUP events, artisan markets, open-mic nights and a bevy of other engaging, community-minded events. When Amanda was starting out, she introduced Always Brewing Detroit as a pop-up concept to the neighborhood that would later become its home. Detroit SOUP entered her life just after that ended, as she was poised to sign a lease and take on some hefty financial burdens. This was a time of immense reflection for Amanda; she wanted to make sure that she was making positive choices for her business, but she also wanted a little validation that people believed in her idea—Detroit SOUP seemed like just the right place to do it. The money was definitely an incentive, but ultimately, Amanda wanted to talk to people about what she was trying to achieve and promote the shop in the process. When reminiscing about the night she won, Amanda recalls it as a euphoric experience. "It really felt like Detroit was saying, 'We support your dream.' I thought to myself, 'I'm not crazy. I'm not alone in this. I've got people who believe in this idea.'" Those people were the people that go to Detroit SOUP to learn about ways that others are making an impact on the city and to be an integral part in that force. Amanda and the other folks at Always Brewing Detroit want people to get more than just a cup of coffee when they come in. 
Each day the shop opens, it is an opportunity for positive things to happen in the Grandmont Rosedale community and the rest of Detroit. Always Brewing Detroit gives everyone the opportunity to have a positive and productive impact on their community. The shop has turned into an office, a place to meet neighbors, a place to see what else is going on in the community, and a place to embrace the value of coming together. Humbly, Amanda will not claim that these things would not happen without her and Always Brewing Detroit, but a shop like hers is definitely a catalyst for growth.
# J. G. BALLARD

# _High-Rise_

**London, New York, Toronto and Sydney**

# Contents

_Cover_
_Title Page_
_Introduction by Ned Beauman_
1. Critical Mass
2. Party Time
3. Death of a Resident
4. Up!
5. The Vertical City
6. Danger in the Streets of the Sky
7. Preparations for Departure
8. The Predatory Birds
9. Into the Drop Zone
10. The Drained Lake
11. Punitive Expeditions
12. Towards the Summit
13. Body Markings
14. Final Triumph
15. The Evening's Entertainment
16. A Happy Arrangement
17. The Lakeside Pavilion
18. The Blood Garden
19. Night Games
_By the same author_
_About the author_
_Interview with J. G. Ballard_
_Copyright_
_About the Publisher_
Ballard's prescience is extraordinary – tipping Ronald Reagan for President in a 1967 short story, inventing Facebook in a 1977 _Vogue_ essay – but a novel's predictive power is almost never the most interesting thing about it. I want to propose two readings of _High-Rise_ here: the book is all about architecture; the book is not about architecture at all. And according to the latter, _High-Rise_ is no more a 'warning' about barbarism in tower blocks than 1966's _The Crystal World_ is a 'warning' about the African jungle freezing into crystal. Yet it's true that this novel, more than any other of Ballard's early career, draws on what was actually going on in England at the time – the concrete tendencies, so to speak, of that country in that decade. In an interview with Jon Savage, Ballard connected the unnamed high-rise of _High-Rise_ – which from now on I'm just going to call the High-Rise – to Hulme Crescents in Manchester, the gargantuan housing development that had been ruled unfit for family occupation only two years after its construction. And Ballard's character Anthony Royal, the architect of the High Rise, surely has his basis in Ernő Goldfinger, the architect of Balfron Tower in Poplar and Trellick Tower in Kensington. Like Royal, Goldfinger anointed his own building by moving into its penthouse (although he quit Balfron Tower for a terraced house in Hampstead only two months later). And like Royal, Goldfinger was immanent in his creations, the man and the monument almost merging in the public imagination, to the extent that an urban legend developed that Goldfinger had thrown himself from the top of Trellick Tower in despair at its failure. Royal, Goldfinger and the more anonymous designers of Hulme Crescents are, of course, all intellectual descendants of the French architect Le Corbusier, champion of the 'Radiant City' of high-tech modernist 'habitation units'. Le Corbusier promised a utopia, and in _High-Rise_ Ballard gives us a dystopia. 
But it would be a mistake to set Ballard and Le Corbusier entirely in opposition, because in fact they agree on a fundamental premise: that a new architecture can transform the moral and sentimental lives of human beings. As Ballard's character Robert Laing observes, 'a new social type was being created by the apartment building, a cool, unemotional personality impervious to the psychological pressures of high-rise life.' Forty years after Hulme Crescents, in our modern technocracy of think tanks and taskforces and 'nudge units', we tend to take this as a presupposition – that the place you live can change the way you behave. But it's important to remember that most novelists are willing to permit this only within strict limits. In traditional literary fiction, you may be shaped by your worldly circumstances, but your fundamental moral soul remains intact and diamond-hard from your first breath to your last, and in the end it is this moral soul, good or evil or somewhere in between, that will guide your actions and determine your fate. If you're too pliant in the grip of larger forces, then you certainly can't be the protagonist of the story, because resisting larger forces is what protagonists are there for. Yet even the three main characters in _High-Rise_ are perfectly happy to acknowledge that the responsibility for most of their actions lies with the building itself – this 'huge animate presence, brooding over them and keeping a magisterial eye on the events taking place' – or, in a broader sense, with high-rise living as a modern trend. Ballard's casual deflation of human agency is one of his fiction's most singular qualities. He told the _Paris Review_ that his initial outline for _High-Rise_ was 'written in the form of a social worker's report on the strange events that had taken place in this apartment block, an extended case history. I wish I'd kept it; I think it was better than the novel.' 
A social worker's report would allow for no protagonist, and perhaps no named individuals at all – the ultimate antidote to the conventional humanist chronicle. Does the place you live really change the way you behave? A 2007 meta-study by Robert Gifford concluded that 'children who live in high-rises have, on average, more behaviour problems. Residents in high-rises probably have fewer friendships in the buildings, and certainly help each other less. Crime and fear of crime probably are greater in high-rise buildings. A small proportion of suicides may be attributable to living in high-rises.' So Laing may be correct that 'people in high-rises tended not to care about tenants more than two floors below them'; take the example of High Point Village. And yet this is where we run up against the limitations of regarding _High-Rise_ as a dispatch on twentieth-century urban life. When Ballard linked his High-Rise to Hulme Crescents, he must have been perfectly aware that there was a basic difference between the two: the former is intended for the rich, the latter for the poor. By setting his novel not in a council estate but a desirable, middle-class high-rise – regarded almost as a contradiction in terms in Britain at the time he was writing – Ballard was deliberately severing his tale from the reality of the Corbusian project. 'The mutiny of these well-to-do professional people against the building they had collectively purchased,' Royal claims, 'was no different from the dozens of well-documented revolts by working-class tenants against municipal tower-blocks that had taken place at frequent intervals during the post-war years.' Really? When a cheaply-constructed and badly-maintained ghetto for downtrodden families begins its collapse into squalor, no one can be too surprised. When a luxury condominium goes feral, that's another story. Next to reality, _High-Rise_ is like the account of a scientific experiment with a control group. 
The High-Rise and Hulme Crescents are comparable architecturally, but only Hulme Crescents is a locus of social neglect. If the High-Rise succeeds where Hulme Crescents fail, this proves that social neglect is the problem, not architecture; on the other hand, if they both fail in just the same way, this proves that the architecture is the problem, not social neglect. Is Ballard expecting us to find in the events of _High-Rise_ a plausibility, even an inevitability? In which case we don't have to wait for the High-Rise to be tested out in real life, because we already have such an irrefutable prophecy, a 'terrible warning', of what these rotten towers will do to us? Or, on the other hand, is he expecting us to recognise this case study as inherently preposterous? In which case we can stop wringing our hands about the architecture, because our attempt to imagine how it might warp different social classes in identical ways has been enough to make vivid the impossibility of such an outcome? Let's stop there. Clearly, even to pose such pedantic questions about this fangy and umbrous masterwork is to demonstrate the narrowness of reading it as a contribution to a specific debate in a specific time. As I said, _High-Rise_ is all about architecture, but it's also not about architecture at all. 'In a sense,' Laing muses, 'life in the high-rise had begun to resemble the world outside – there were the same ruthlessness and aggression concealed within a set of polite conventions.' These things are universal. The inhabitants of the High-Rise descend into barbarism with such unbridled willingness – are we really to believe the same would not have happened if they'd all been living in nice maisonettes? In an interview with Travis Elborough, Ballard said that when he spent three years as a teenager in a Japanese internment camp he got 'a tremendous insight into what makes up human behaviour' when he saw the adults around him 'stripped of any kind of defence... 
humiliated and frightened'. Arguably, _High-Rise_ could just as well have been set in an internment camp, or for that matter a cruise ship or a medieval convent or any other self-contained community. Perhaps the bare corridors of the High-Rise are appropriate not so much for their specifically modernist quality, but – quite the opposite – because they're so generic, so placeless, like a black box theatre. Throughout the book we have the sense that the concrete musculature of the building is beginning to dissolve, leaving behind only a sort of oneiric grid, 'less a habitable architecture... than the unconscious diagram of a mysterious psychic event'. On this reading, the High-Rise finds its closest analogues not in the housing developments of London and Manchester but rather in the existentialist purgatories of Samuel Beckett, whose great short story 'The Lost Ones' was first published in English in 1971, four years before _High-Rise_. 'The Lost Ones' describes, with the neutrality of a social worker's report, the nightmarish facts of daily life inside a fifty-foot cylinder so densely populated that there is 'one body per square metre of available surface', each inhabitant 'searching for its lost one. Vast enough for the search to be in vain. Narrow enough for flight to be in vain.' Just as the forty floors described in _High-Rise_ are divided into three sections for 'the three classical social groups', the cylinder has 'three distinct zones separated by clear-cut mental or imaginary frontiers', and is 'doomed in a more or less distant future to a state of anarchy given over to fury of violence' when the social order breaks down. I'm not proposing 'The Lost Ones' as a direct inspiration for _High-Rise_ ; my point is only that the latter novel could be stripped of all context, all names, all 'social comment', and it would be just as persuasive an account of the savage competition and futile endeavour that take up so much of our time on this earth. 
_High-Rise_ is not a 'warning' of what could happen; it's an account of what does happen, everywhere, all the time. We aren't literally drowning each other's dogs and ransacking each other's flats, so instead we wage quiet wars with fake smiles, or just repress and sublimate and fantasise. Ballard's novel externalises all that. In 2014, high-rise living is no longer a novelty in the UK; Londoners will read this book beneath a skyline sheenier than ever with notched glass, 500-foot test tubes ready for the admixture of volatile chemicals. And so this book might seem to take on a renewed relevance. But in fact its relevance has never wavered and never will. Any time in human history that two or more households have tried to share the same space, they have lived in the High-Rise. New York, 2014 # 1 Critical Mass Later, as he sat on his balcony eating the dog, Dr Robert Laing reflected on the unusual events that had taken place within this huge apartment building during the previous three months. Now that everything had returned to normal, he was surprised that there had been no obvious beginning, no point beyond which their lives had moved into a clearly more sinister dimension. With its forty floors and thousand apartments, its supermarket and swimming-pools, bank and junior school – all in effect abandoned in the sky – the high-rise offered more than enough opportunities for violence and confrontation. Certainly his own studio apartment on the 25th floor was the last place Laing would have chosen as an early skirmish-ground. This over-priced cell, slotted almost at random into the cliff face of the apartment building, he had bought after his divorce specifically for its peace, quiet and anonymity. 
Curiously enough, despite all Laing's efforts to detach himself from his two thousand neighbours and the régime of trivial disputes and irritations that provided their only corporate life, it was here if anywhere that the first significant event had taken place – on this balcony where he now squatted beside a fire of telephone directories, eating the roast hind-quarter of the Alsatian before setting off to his lecture at the medical school. While preparing breakfast soon after eleven o'clock one Saturday morning three months earlier, Dr Laing was startled by an explosion on the balcony outside his living-room. A bottle of sparkling wine had fallen from a floor fifty feet above, ricocheted off an awning as it hurtled downwards, and burst across the tiled balcony floor. The living-room carpet was speckled with foam and broken glass. Laing stood in his bare feet among the sharp fragments, watching the agitated wine seethe across the cracked tiles. High above him, on the 31st floor, a party was in progress. He could hear the sounds of deliberately over-animated chatter, the aggressive blare of a record-player. Presumably the bottle had been knocked over the rail by a boisterous guest. Needless to say, no one at the party was in the least concerned about the ultimate destination of this missile – but as Laing had already discovered, people in high-rises tended not to care about tenants more than two floors below them. Trying to identify the apartment, Laing stepped across the spreading pool of cold froth. Sitting there, he might easily have found himself with the longest hangover in the world. He leaned out over the rail and peered up at the face of the building, carefully counting the balconies. As usual, though, the dimensions of the forty-storey block made his head reel. Lowering his eyes to the tiled floor, he steadied himself against the door pillar. 
The immense volume of open space that separated the building from the neighbouring high-rise a quarter of a mile away unsettled his sense of balance. At times he felt that he was living in the gondola of a ferris wheel permanently suspended three hundred feet above the ground. None the less, Laing was still exhilarated by the high-rise, one of five identical units in the development project and the first to be completed and occupied. Together they were set in a mile-square area of abandoned dockland and warehousing along the north bank of the river. The five high-rises stood on the eastern perimeter of the project, looking out across an ornamental lake – at present an empty concrete basin surrounded by parking-lots and construction equipment. On the opposite shore stood the recently completed concert-hall, with Laing's medical school and the new television studios on either side. The massive scale of the glass and concrete architecture, and its striking situation on a bend of the river, sharply separated the development project from the rundown areas around it, decaying nineteenth-century terraced houses and empty factories already zoned for reclamation. For all the proximity of the City two miles away to the west along the river, the office buildings of central London belonged to a different world, in time as well as space. Their glass curtain-walling and telecommunication aerials were obscured by the traffic smog, blurring Laing's memories of the past. Six months earlier, when he had sold the lease of his Chelsea house and moved to the security of the high-rise, he had travelled forward fifty years in time, away from crowded streets, traffic hold-ups, rush-hour journeys on the Underground to student supervisions in a shared office in the old teaching hospital. Here, on the other hand, the dimensions of his life were space, light and the pleasures of a subtle kind of anonymity. 
The drive to the physiology department of the medical school took him five minutes, and apart from this single excursion Laing's life in the high-rise was as self-contained as the building itself. In effect, the apartment block was a small vertical city, its two thousand inhabitants boxed up into the sky. The tenants corporately owned the building, which they administered themselves through a resident manager and his staff. For all its size, the high-rise contained an impressive range of services. The entire 10th floor was given over to a wide concourse, as large as an aircraft carrier's flight-deck, which contained a supermarket, bank and hairdressing salon, a swimming-pool and gymnasium, a well-stocked liquor store and a junior school for the few young children in the block. High above Laing, on the 35th floor, was a second, smaller swimming-pool, a sauna and a restaurant. Delighted by this glut of conveniences, Laing made less and less effort to leave the building. He unpacked his record collection and played himself into his new life, sitting on his balcony and gazing across the parking-lots and concrete plazas below him. Although the apartment was no higher than the 25th floor, he felt for the first time that he was looking down at the sky, rather than up at it. Each day the towers of central London seemed slightly more distant, the landscape of an abandoned planet receding slowly from his mind. By contrast with the calm and unencumbered geometry of the concert-hall and television studios below him, the ragged skyline of the city resembled the disturbed encephalograph of an unresolved mental crisis. The apartment had been expensive, its studio living-room and single bedroom, kitchen and bathroom dovetailed into each other to minimize space and eliminate internal corridors. 
To his sister Alice Frobisher, who lived with her publisher husband in a larger apartment three floors below, Laing had remarked, 'The architect must have spent his formative years in a space capsule – I'm surprised the walls don't curve...' At first Laing found something alienating about the concrete landscape of the project – an architecture designed for war, on the unconscious level if no other. After all the tensions of his divorce, the last thing he wanted to look out on each morning was a row of concrete bunkers. However, Alice soon convinced him of the intangible appeal of life in a luxury high-rise. Seven years older than Laing, she made a shrewd assessment of her brother's needs in the months after his divorce. She stressed the efficiency of the building's services, the total privacy. 'You could be alone here, in an empty building – think of _that,_ Robert.' She added, illogically, 'Besides, it's full of the kind of people you ought to meet.' Here she was making a point that had not escaped Laing during his inspection visits. The two thousand tenants formed a virtually homogeneous collection of well-to-do professional people – lawyers, doctors, tax consultants, senior academics and advertising executives, along with a smaller group of airline pilots, film-industry technicians and trios of air-hostesses sharing apartments. By the usual financial and educational yardsticks they were probably closer to each other than the members of any conceivable social mix, with the same tastes and attitudes, fads and styles – clearly reflected in the choice of automobiles in the parking-lots that surrounded the high-rise, in the elegant but somehow standardized way in which they furnished their apartments, in the selection of sophisticated foods in the supermarket delicatessen, in the tones of their self-confident voices. In short, they constituted the perfect background into which Laing could merge invisibly. 
His sister's excited vision of Laing alone in an empty building was closer to the truth than she realized. The high-rise was a huge machine designed to serve, not the collective body of tenants, but the individual resident in isolation. Its staff of air-conditioning conduits, elevators, garbage-disposal chutes and electrical switching systems provided a never-failing supply of care and attention that a century earlier would have needed an army of tireless servants. Besides all this, once Laing had been appointed senior lecturer in physiology at the new medical school, the purchase of an apartment nearby made sense. It helped him as well to postpone once again any decision to give up teaching and take up general practice. But as he told himself, he was still waiting for his real patients to appear – perhaps he would find them here in the high-rise? Rationalizing his doubts over the cost of the apartment, Laing signed a ninety-nine-year lease and moved into his one-thousandth share of the cliff face. The sounds of the party continued high over his head, magnified by the currents of air that surged erratically around the building. The last of the wine rilled along the balcony gutter, sparkling its way into the already immaculate drains. Laing placed his bare foot on the cold tiles and with his toes detached the label from its glass fragment. He recognized the wine immediately, a brand of expensive imitation champagne that was sold pre-chilled in the 10th-floor liquor store and was its most popular line. They had been drinking the same wine at Alice's party the previous evening, in its way as confused an affair as the one taking place that moment over his head. 
Only too keen to relax after demonstrating all afternoon in the physiology laboratories, and with an eye on an attractive fellow guest, Laing had inexplicably found himself in a minor confrontation with his immediate neighbours on the 25th floor, an ambitious young orthodontic surgeon named Steele and his pushy fashion-consultant wife. Half-way through a drunken conversation Laing had suddenly realized that he had managed to offend them deeply over their shared garbage-disposal chute. The two had cornered Laing behind his sister's bar, where Steele fired a series of pointed questions at him, as though seriously disturbed by a patient's irresponsible attitude towards his own mouth. His slim face topped by a centre parting – always an indication to Laing of some odd character strain – pressed ever closer, and he half-expected Steele to ram a metal clamp or retractor between his teeth. His intense, glamorous wife followed up the attack, in some way challenged by Laing's offhand manner, his detachment from the serious business of living in the high-rise. Laing's fondness for pre-lunch cocktails, his nude sunbathing on the balcony, and his generally raffish air obviously unnerved her. She clearly felt that at the age of thirty Laing should have been working twelve hours a day in a fashionable consultancy, and be in every way as respectably self-aggrandizing as her husband. No doubt she regarded Laing as some kind of internal escapee from the medical profession, with a secret tunnel into a less responsible world. This low-level bickering surprised Laing, but after his arrival at the apartment building he soon recognized the extraordinary number of thinly veiled antagonisms around him. The high-rise had a second life of its own. The talk at Alice's party moved on two levels – never far below the froth of professional gossip was a hard mantle of personal rivalry. At times he felt that they were all waiting for someone to make a serious mistake. 
After breakfast, Laing cleared the glass from the balcony. Two of the decorative tiles had been cracked. Mildly irritated, Laing picked up the bottle neck, still with its wired cork and foil in place, and tossed it over the balcony rail. A few seconds later he heard it shatter among the cars parked below. Pulling himself together, Laing peered cautiously over the ledge – he might easily have knocked in someone's windscreen. Laughing aloud at this aberrant gesture, he looked up at the 31st floor. What were they celebrating at eleven-thirty in the morning? Laing listened to the noise mount as more guests arrived. Was this a party that had accidentally started too early, or one that had been going on all night and was now getting its second wind? The internal time of the high-rise, like an artificial psychological climate, operated to its own rhythms, generated by a combination of alcohol and insomnia. On the balcony diagonally above him one of Laing's neighbours, Charlotte Melville, was setting out a tray of drinks on a table. Queasily aware of his strained liver, Laing remembered that at Alice's party the previous evening he had accepted an invitation to cocktails. Thankfully, Charlotte had rescued him from the orthodontic surgeon with the disposal-chute obsessions. Laing had been too drunk to get anywhere with this good-looking widow of thirty-five, apart from learning that she was a copywriter with a small but lively advertising agency. The proximity of her apartment, like her easy style, appealed to Laing, exciting in him a confusing blend of lechery and romantic possibility – as he grew older, he found himself becoming more romantic and more callous at the same time. Sex was one thing, Laing kept on reminding himself, that the high-rise potentially provided in abundance. 
Bored wives, dressed up as if for a lavish midnight gala on the observation roof, hung around the swimming-pools and restaurant in the slack hours of the early afternoon, or strolled arm-in-arm along the 10th-floor concourse. Laing watched them saunter past him with a fascinated but cautious eye. For all his feigned cynicism, he knew that he was in a vulnerable zone in this period soon after his divorce – one happy affair, with Charlotte Melville or anyone else, and he would slip straight into another marriage. He had come to the high-rise to get away from all relationships. Even his sister's presence, and the reminders of their high-strung mother, a doctor's widow slowly sliding into alcoholism, at one time seemed too close for comfort. However, Charlotte had briskly put all these fears to rest. She was still preoccupied by her husband's death from leukaemia, her six-year-old son's welfare and, she admitted to Laing, her insomnia – a common complaint in the high-rise, almost an epidemic. All the residents he had met, on hearing that Laing was a physician, at some point brought up their difficulties in sleeping. At parties people discussed their insomnia in the same way that they referred to the other built-in design flaws of the apartment block. In the early hours of the morning the two thousand tenants subsided below a silent tide of Seconal. Laing had first met Charlotte in the 35th-floor swimming-pool, where he usually swam, partly to be on his own, and partly to avoid the children who used the 10th-floor pool. When he invited her to a meal in the restaurant she promptly accepted, but as they sat down at the table she said pointedly, 'Look, I only want to talk about myself.' Laing had liked that. At noon, when he arrived at Charlotte's apartment, a second guest was already present, a television producer named Richard Wilder, a thick-set, pugnacious man who had once been a professional rugby-league player. 
Wilder lived with his wife and two sons on the 2nd floor of the building. The noisy parties he held with his friends on the lower levels – airline pilots and hostesses sharing apartments – had already put him at the centre of various disputes. To some extent the irregular hours of the tenants on the lower levels had cut them off from their neighbours above. In an unguarded moment Laing's sister had whispered to him that there was a brothel operating somewhere in the high-rise. The mysterious movements of the air-hostesses as they pursued their busy social lives, particularly on the floors above her own, clearly unsettled Alice, as if they in some way interfered with the natural social order of the building, its system of precedences entirely based on floor-height. Laing had noticed that he and his fellow tenants were far more tolerant of any noise or nuisance from the floors above than they were from those below them. However, he liked Wilder, with his loud voice and rugby-scrum manners. He let a needed dimension of the unfamiliar into the apartment block. His relationship with Charlotte Melville was hard to gauge – his powerful sexual aggression was overlaid by a tremendous restlessness. No wonder his wife, a pale young woman with a postgraduate degree who reviewed children's books for the literary weeklies, seemed permanently exhausted. As Laing stood on the balcony, accepting a drink from Charlotte, the noise of the party came down from the bright air, as if the sky itself had been wired for sound. Charlotte pointed to a fragment of glass on Laing's balcony that had escaped his brush. 'Are you under attack? I heard something fall.' She called to Wilder, who was lounging back in the centre of her sofa, examining his heavy legs. 'It's those people on the 31st floor.' 'Which people?' Laing asked. He assumed that she was referring to a specific group, a clique of over-aggressive film actors or tax consultants, or perhaps a freak aggregation of dipsomaniacs. 
But Charlotte shrugged vaguely, as if it was unnecessary to be more specific. Clearly some kind of demarcation had taken place in her mind, like his own facile identification of people by the floors on which they lived. 'By the way, what are we all celebrating?' he asked as they returned to the living-room. 'Don't you know?' Wilder gestured at the walls and ceiling. 'Full house. We've achieved critical mass.' 'Richard means that the last apartment has been occupied,' Charlotte explained. 'Incidentally, the contractors promised us a free party when the thousandth apartment was sold.' 'I'll be interested to see if they hold it,' Wilder remarked. Clearly he enjoyed running down the high-rise. 'The elusive Anthony Royal was supposed to provide the booze. You've met him, I think,' he said to Laing. 'The architect who designed our hanging paradise.' 'We play squash together,' Laing rejoined. Aware of the hint of challenge in Wilder's voice, he added, 'Once a week – I hardly know the man, but I like him.' Wilder sat forward, cradling his heavy head in his fists. Laing noticed that he was continually touching himself, for ever inspecting the hair on his massive calves, smelling the backs of his scarred hands, as if he had just discovered his own body. 'You're favoured to have met him,' Wilder said. 'I'd like to know why. An isolated character – I ought to resent him, but somehow I feel sorry for the man, hovering over us like some kind of fallen angel.' 'He has a penthouse apartment,' Laing commented. He had no wish to become involved in any tug of war over his brief friendship with Royal. He had met this well-to-do architect, a former member of the consortium which had designed the development project, during the final stages of Royal's recovery from a minor car accident. Laing had helped him to set up the complex callisthenics machine in the penthouse where Royal spent his time, the focus of a great deal of curiosity and attention. 
As everyone continually repeated, Royal lived 'on top' of the building, as if in some kind of glamorous shack. 'Royal was the first person to move in here,' Wilder informed him. 'There's something about him I haven't put my finger on. Perhaps even a sense of guilt – he hangs around up there as if he's waiting to be found out. I expected him to leave months ago. He has a rich young wife, so why stay on in this glorified tenement?' Before Laing could protest, Wilder pressed on. 'I know Charlotte has reservations about life here – the trouble with these places is that they're not designed for children. The only open space turns out to be someone else's car-park. By the way, doctor, I'm planning to do a television documentary about high-rises, a really hard look at the physical and psychological pressures of living in a huge condominium such as this one.' 'You'll have a lot of material.' 'Too much, as always. I wonder if Royal would take part – you might ask him, doctor. As one of the architects of the block and its first tenant, his views would be interesting. Your own, too...' As Wilder talked away rapidly, his words over-running the cigarette smoke coming from his mouth, Laing turned his attention to Charlotte. She was watching Wilder intently, nodding at each of his points. Laing liked her determination to stick up for herself and her small son, her evident sanity and good sense. His own marriage, to a fellow physician and specialist in tropical medicine, had been a brief but total disaster, a reflection of heaven-only-knew what needs. With unerring judgment Laing had involved himself with this highly strung and ambitious young doctor, for whom Laing's refusal to give up teaching – in itself suspicious – and involve himself directly in the political aspects of preventive medicine had provided a limitless opportunity for bickering and confrontation. 
After only six months together she had suddenly joined an international famine-relief organization and left on a three-year tour. But Laing had made no attempt to follow her. For reasons he could not yet explain, he had been reluctant to give up teaching, and the admittedly doubtful security of being with students who were still almost his own age. Charlotte, he guessed, would understand this. In his mind Laing projected the possible course of an affair with her. The proximity and distance which the high-rise provided at the same time, that neutral emotional background against which the most intriguing relationships might develop, had begun to interest him for its own sake. For some reason he found himself drawing back even within this still imaginary encounter, sensing that they were all far more involved with each other than they realized. An almost tangible network of rivalries and intrigues bound them together. As he guessed, even this apparently casual meeting in Charlotte's apartment had been set up to test his attitude to the upper-level residents who were trying to exclude children from the 35th-floor swimming-pool. 'The terms of our leases guarantee us equal access to all facilities,' Charlotte explained. 'We've decided to set up a parents' action group.' 'Doesn't that leave me out?' 'We need a doctor on the committee. The paediatric argument would come much more forcefully from you, Robert.' 'Well, perhaps...' Laing hesitated to commit himself. Before he knew it, he would be a character in a highly charged television documentary, or taking part in a sit-in outside the office of the building manager. Reluctant at this stage to be snared into an inter-floor wrangle, Laing stood up and excused himself. As he left, Charlotte had equipped herself with a checklist of grievances. Sitting beside Wilder, she began to tick off the complaints to be placed before the building manager, like a conscientious teacher preparing the syllabus for the next term. 
When Laing returned to his apartment, the party on the 31st floor had ended. He stood on his balcony in the silence, enjoying the magnificent play of light across the neighbouring block four hundred yards away. The building had just been completed, and by coincidence the first tenants were arriving on the very morning that the last had moved into his own block. A furniture pantechnicon was backing into the entrance to the freight elevator, and the carpets and stereo-speakers, dressing-tables and bedside lamps would soon be carried up the elevator shaft to form the elements of a private world. Thinking of the rush of pleasure and excitement which the new tenants would feel as they gazed out for the first time from their aerial ledge on the cliff face, Laing contrasted it with the conversation he had just heard between Wilder and Charlotte Melville. However reluctantly, he now had to accept something he had been trying to repress – that the previous six months had been a period of continuous bickering among his neighbours, of trivial disputes over the faulty elevators and air-conditioning, inexplicable electrical failures, noise, competition for parking space and, in short, that host of minor defects which the architects were supposed specifically to have designed out of these over-priced apartments. The underlying tensions among the residents were remarkably strong, damped down partly by the civilized tone of the building, and partly by the obvious need to make this huge apartment block a success. Laing remembered a minor but unpleasant incident that had taken place the previous afternoon on the 10th-floor shopping concourse. As he waited to cash a cheque at the bank an altercation was going on outside the doors of the swimming-pool. A group of children, still wet from the water, were backing away from the imposing figure of a cost-accountant from the 17th floor. Facing him in this unequal contest was Helen Wilder. 
Her husband's pugnacity had long since drained any self-confidence from her. Nervously trying to control the children, she listened stoically to the accountant's reprimand, now and then making some weak retort. Leaving the bank counter, Laing walked towards them, past the crowded check-out points of the supermarket and the lines of women under the driers in the hair-dressing salon. As he stood beside Mrs Wilder, waiting until she recognized him, he gathered that the accountant was complaining that her children, not for the first time, had been urinating in the pool. Laing briefly interceded, but the accountant slammed away through the swing doors, confident that he had sufficiently intimidated Mrs Wilder to drive her brood of children away for ever. 'Thanks for taking my side – Richard was supposed to be here.' She picked a damp thread of hair out of her eyes. 'It's becoming impossible – we arrange set hours for the children but the adults come anyway.' She took Laing's arm and squinted nervously across the crowded concourse. 'Do you mind walking me back to the elevator? It must sound rather paranoid, but I'm becoming obsessed with the idea that one day we'll be physically attacked...' She shuddered under her damp towel as she propelled the children forward. 'It's almost as if these aren't the people who really live here.' During the afternoon Laing found himself thinking of this last remark of Helen Wilder's. Absurd though it sounded, the statement had a certain truth. Now and then his neighbours, the orthodontic surgeon and his wife, stepped on to their balcony and frowned at Laing, as if disapproving of the relaxed way in which he lay back in his reclining chair. Laing tried to visualize their life together, their hobbies, conversation, sexual acts. It was difficult to imagine any kind of domestic reality, as if the Steeles were a pair of secret agents unconvincingly trying to establish a marital role. 
By contrast, Wilder was real enough, but hardly belonged to the high-rise. Laing lay back on his balcony, watching the dusk fall across the façades of the adjacent blocks. Their size appeared to vary according to the play of light over their surfaces. Sometimes, when he returned home in the evening from the medical school, he was convinced that the high-rise had managed to extend itself during the day. Lifted on its concrete legs, the forty-storey block appeared to be even higher, as if a group of off-duty construction workers from the television studios had casually added another floor. The five apartment buildings on the eastern perimeter of the mile-square project together formed a massive palisade that by dusk had already plunged the suburban streets behind them into darkness. The high-rises seemed almost to challenge the sun itself – Anthony Royal and the architects who had designed the complex could not have foreseen the drama of confrontation each morning between these concrete slabs and the rising sun. It was only fitting that the sun first appeared between the legs of the apartment blocks, raising itself over the horizon as if nervous of waking this line of giants. During the morning, from his office on the top floor of the medical school, Laing would watch their shadows swing across the parking-lots and empty plazas of the project, sluice-gates opening to admit the day. For all his reservations, Laing was the first to concede that these huge buildings had won their attempt to colonize the sky. Soon after nine o'clock that evening, an electrical failure temporarily blacked out the 9th, 10th and 11th floors. Looking back on this episode, Laing was surprised by the degree of confusion during the fifteen minutes of the blackout. Some two hundred people were present on the 10th-floor concourse, and many were injured in the stampede for the elevators and staircases. 
A number of absurd but unpleasant altercations broke out in the darkness between those who wanted to descend to their apartments on the lower levels and the residents from the upper floors who insisted on escaping upwards into the cooler heights of the building. During the blackout two of the twenty elevators were put out of action. The air-conditioning had been switched off, and a woman passenger trapped in an elevator between the 10th and 11th floors became hysterical, possibly the victim of a minor sexual assault – the restoration of light in due course revealed its crop of illicit liaisons flourishing in the benevolent conditions of total darkness like a voracious plant species. Laing was on his way to the gymnasium when the power failed. Uneager to join the mêlée on the concourse, he waited in a deserted classroom of the junior school. Sitting alone at one of the children's miniature desks, surrounded by the dim outlines of their good-humoured drawings pinned to the walls, he listened to their parents scuffling and shouting in the elevator lobby. When the lights returned he walked out among the startled residents, and did his best to calm everyone down. He supervised the transfer of the hysterical woman passenger from the elevator to a lobby sofa. The heavy-boned wife of a jeweller on the 40th floor, she clung powerfully to Laing's arm, only releasing him when her husband appeared. As the crowd of residents dispersed, their fingers punching the elevator destination buttons, Laing noticed that two children had sheltered during the blackout in another of the classrooms. They were standing now in the entrance to the swimming-pool, backing away defensively from the tall figure of the 17th-floor cost-accountant. This self-appointed guardian of the water held a long-handled pool skimmer like a bizarre weapon. Angrily, Laing ran forward. But the children were not being driven from the pool. They stepped aside when Laing approached. 
The accountant stood by the water's edge, awkwardly reaching the skimmer across the calm surface. At the deep end three swimmers, who had been treading water during the entire blackout, were clambering over the side. One of them, he noticed without thinking, was Richard Wilder. Laing took the handle of the skimmer. As the children watched, he helped the accountant extend it across the water. Floating in the centre of the pool was the drowned body of an Afghan hound.

# 2 Party Time

During these days after the drowning of the dog, the air of over-excitement within the high-rise gradually settled itself, but to Dr Laing this comparative calm was all the more ominous. The swimming-pool on the 10th floor remained deserted, partly, Laing assumed, because everyone felt that the water was contaminated by the dead Afghan. An almost palpable miasma hung over the slack water, as if the spirit of the drowned beast was gathering to itself all the forces of revenge and retribution present within the building. On his way to the medical school a few mornings after the incident, Laing looked in at the 10th-floor concourse. After booking a squash court for his weekly game that evening with Anthony Royal, he walked towards the entrance of the swimming-pool. He remembered the panic and stampede during the blackout. By contrast, the shopping mall was now almost empty, a single customer ordering his wines at the liquor store. Laing pushed back the swing doors and strolled around the pool. The changing cubicles were closed, the curtains drawn across the shower stalls. The official attendant, a retired physical-training instructor, was absent from his booth behind the diving-boards. Evidently the profanation of his water had been too much for him. Laing stood by the tiled verge at the deep end, under the unvarying fluorescent light.
Now and then, the slight lateral movement of the building in the surrounding airstream sent a warning ripple across the flat surface of the water, as if in its pelagic deeps an immense creature was stirring in its sleep. He remembered helping the accountant to lift the Afghan from the water, and being surprised by its lightness. With its glamorous plumage drenched by the chlorinated water, the dog had lain like a large stoat on the coloured tiles. While they waited for the owner, a television actress on the 37th floor, to come down and collect the dog, Laing examined it carefully. There were no external wounds or marks of restraint. Conceivably it had strayed from its apartment into a passing elevator and emerged on to the shopping concourse during the confusion of the power failure, fallen into the swimming-pool and died there of exhaustion. But the explanation hardly fitted the facts. The blackout had lasted little more than fifteen minutes, and a dog of this size was powerful enough to swim for hours. Besides, it could simply have stood on its hind legs in the shallow end. But if it had been thrown into the pool, and held below the water in the darkness by a strong swimmer... Surprised by his own suspicions, Laing made a second circuit of the pool. Something convinced him that the dog's drowning had been a provocative act, intended to invite further retaliation in its turn. The presence of the fifty or so dogs in the high-rise had long been a source of irritation. Almost all of them were owned by residents on the top ten floors – just as, conversely, most of the fifty children lived in the lower ten. Together the dogs formed a set of over-pampered pedigree pets whose owners were not noticeably concerned for their fellow tenants' comfort and privacy. The dogs barked around the car-parks when they were walked in the evening, fouling the pathways between the cars. On more than one occasion elevator doors were sprayed with urine.
Laing had heard Helen Wilder complain that, rather than use their five high-speed elevators which carried them from a separate entrance lobby directly to the top floors, the dog-owners habitually transferred to the lower-level elevators, encouraging their pets to use them as lavatories. This rivalry between the dog-owners and the parents of small children had in a sense already polarized the building. Between the upper and lower floors the central mass of apartments – roughly from the 10th floor to the 30th – formed a buffer state. During the brief interregnum after the dog's drowning a kind of knowing calm presided over the middle section of the high-rise, as if the residents had already realized what was taking place within the building. Laing discovered this when he returned that evening from the medical school. By six o'clock the section of the parking-lot reserved for the 20th to the 25th floors would usually be full, forcing him to leave his car in the visitors' section three hundred yards from the building. Reasonably enough, the architects had zoned the parking-lots so that the higher a resident's apartment (and consequently the longer the journey by elevator), the nearer he parked to the building. The residents from the lower floors had to walk considerable distances to and from their cars each day – a sight not without its satisfaction, Laing had noticed. Somehow the high-rise played into the hands of the most petty impulses. That evening, however, as he reached the already crowded car-park, Laing was surprised by his fellow tenants' tolerant behaviour. He arrived at the same time as his neighbour Dr Steele. By rights they should have raced each other for the last vacant place, and taken separate elevators to their floor. But tonight each beckoned the other forward in a show of exaggerated gallantry and waited while the other parked. They even walked together to the main entrance.
In the lobby a group of tenants stood outside the manager's office, remonstrating noisily with his secretary. The electrical supply system on the 9th floor was still out of order, and at night the floor was in darkness. Fortunately it was light until late in the summer evening, but the inconvenience to the fifty residents on the floor was considerable. None of the appliances in their apartments would function, and the limits of co-operation with their neighbours on the floors above and below had soon been reached. Steele watched them unsympathetically. Although he was in his late twenties, his manner was already securely middle-aged. Laing found himself fascinated by his immaculate centre parting, almost an orifice. 'They're always complaining about something,' Steele confided to Laing as they stepped into an elevator. 'If it isn't this, it's that. They seem unwilling to accept that the services in a new building take time to settle down.' 'Still, it must be a nuisance to have no power.' Steele shook his head. 'They persistently overload the master-fuses with their elaborate stereo-systems and unnecessary appliances. Electronic baby-minders because the mothers are too lazy to get out of their easy chairs, special mashers for their children's food...' Laing waited for the journey to end, already regretting his new-found solidarity with his neighbour. For some reason, Steele made him nervous. Not for the first time, he wished he had purchased an apartment above the 30th floor. The high-speed elevators were bliss. 'The children here look well enough to me,' he remarked when they stepped out at the 25th floor. The surgeon held his elbow in a surprisingly powerful grip. He smiled reassuringly, flashing a mouth like a miniature cathedral of polished ivory. 'Believe me, Laing. I see their teeth.' The punitive tone in Steele's voice, as if he were describing a traditionally feckless band of migrant workers rather than his well-to-do neighbours, came as a surprise to Laing.
He knew casually a few of the 9th floor residents – a sociologist who was a friend of Charlotte Melville's, and an air-traffic controller who played string trios with friends on the 25th floor, an amusing and refined man to whom Laing often talked as he carried his cello into the elevator. But distance lent disenchantment. The extent of this separation of loyalties was brought home to Laing when he set off to play squash with Anthony Royal. He took an elevator up to the 40th floor and, as usual, arrived ten minutes early so that he could go out on to the roof. The spectacular view always made Laing aware of his ambivalent feelings for this concrete landscape. Part of its appeal lay all too clearly in the fact that this was an environment built, not for man, but for man's absence. Laing leaned against the parapet, shivering pleasantly in his sports-clothes. He shielded his eyes from the strong air currents that rose off the face of the high-rise. The cluster of auditorium roofs, curving roadway embankments and rectilinear curtain walling formed an intriguing medley of geometries – less a habitable architecture, he reflected, than the unconscious diagram of a mysterious psychic event. Fifty feet away to Laing's left a cocktail party was in progress. Two buffet tables covered with white cloths had been laid with trays of canapés and glasses, and a waiter was serving drinks behind a portable bar. Some thirty guests in evening dress stood about talking in small groups. For a few minutes Laing ignored them, absent-mindedly tapping his rackets case on the parapet, but something about the hard, over-animated chatter made him turn. Several of the guests were looking in his direction, and Laing was certain that they were talking about him. The party had moved nearer, and the closest guests were no more than ten feet away. All were residents from the top three floors. Even more unusual was the self-conscious formality of their dress. 
At none of the parties in the high-rise had Laing seen anyone dressed in anything other than casual wear, yet here the men wore dinner-jackets and black ties, the women floor-length evening dresses. They carried themselves in a purposeful way, as if this were less a party than a planning conference. Almost within arm's reach, the immaculate figure of a well-to-do art dealer was squaring up to Laing, the lapels of his dinner-jacket flexing like an over-worked bellows. On either side of him were the middle-aged wives of a stock-exchange jobber and a society photographer, staring distastefully at Laing's white sports-clothes and sneakers. Laing picked up his rackets case and towel bag, but his way to the staircase was blocked by the people around him. The entire cocktail party had moved along the roof, and the waiter now stood alone between the bar and the buffet tables. Laing leaned against the parapet, for the first time conscious of the immense distance to the ground below. He was encircled by a heavily breathing group of his fellow residents, so close that he could smell the medley of expensive scents and after-shaves. He was curious as to what exactly they were going to do, but at the same time was aware that at any moment a meaningless act of violence might occur. 'Dr Laing...Ladies, would you release the doctor?' At what seemed the last moment, a familiar figure with adroit hands and a soft walk called out reassuringly. Laing recognized the jeweller whose hysterical wife he had briefly examined during the power failure. As he greeted Laing the guests casually dispersed, like a group of extras switched to another scene. Without thinking, they strolled back to their drinks and canapés. 'Was it fortunate that I arrived?' The jeweller peered at Laing, as if puzzled by his presence in this private domain. 'You're here to play squash with Anthony Royal? I'm afraid he's decided to decline.' He added, as much to himself as to Laing, 'My wife should have been here.
She was treated appallingly, you know – they were like animals...' Slightly shaken, Laing accompanied him to the stairway. He looked back at the cocktail party, with its well-bred guests, uncertain whether he had imagined the imminent attack on him. After all, what could they have actually done – hardly tossed him over the edge? As he pondered this, he noticed a familiar pale-haired figure in a white safari-jacket standing with one hand on the callisthenics machine in the penthouse overlooking the northern end of the roof. Resting at his feet was Royal's Alsatian with its arctic coat, without doubt the premier dog in the high-rise. Making no attempt to hide himself, Anthony Royal was watching Laing with a thoughtful gaze. As always, his expression was an uneasy mixture of arrogance and defensiveness, as if he were all too aware of the built-in flaws of this huge building he had helped to design, but was determined to out-stare any criticism, even at the price of theatrical gestures such as the Alsatian and his white-hunter's jacket. Although he was over fifty, his shoulder-length fair hair made him look uncannily youthful, as if the cooler air at these great heights had somehow preserved him from the ordinary processes of ageing. His bony forehead, still marked by the scars of his accident, was tilted to one side, and he seemed to be checking that an experiment he had set up had now been concluded. Laing raised one hand and signalled to him as the jeweller ushered him briskly below, but Royal made no reply. Why had he not cancelled their squash game by telephone? For a moment Laing was certain that Royal had deliberately let him come up to the roof, knowing that the party was in progress, simply out of interest in the guests' reactions and behaviour. The next morning Laing rose early, eager to get on. He felt fresh and clear-headed, but without realizing why he decided to take the day off. 
Promptly at nine, after pacing about for two hours, he telephoned his secretary at the medical school and postponed that afternoon's supervision. When she expressed regret at Laing's illness he brushed this aside. 'It's all right, I'm not ill. Something important has come up.' What? Puzzled by his own behaviour, Laing wandered around the small apartment. Charlotte Melville was also at home. She was dressed for the office in a formal business suit, but made no attempt to leave. She invited Laing over for coffee, but when he arrived an hour later she absent-mindedly handed him a glass of sherry. His visit, Laing soon discovered, was a pretext for him to examine her son. The boy was playing in his room, but according to Charlotte was not feeling well enough to go to the junior school on the 10th floor. Annoyingly, the young sister of an airline pilot's wife on the 1st floor had declined to baby-sit. 'It's a nuisance, she's usually only too keen. I've relied on her for months. She sounded rather vague on the phone, as if she was being evasive...' Laing listened sympathetically, wondering whether he should volunteer to look after the child. But there was no hint of this in Charlotte's voice. Playing with the boy, Laing realized that there was nothing wrong with him. Lively as ever, he asked his mother if he could go to his 3rd-floor playgroup that afternoon. Without thinking, she refused. Laing watched her with growing interest. Like himself, Charlotte was waiting for something to happen. They did not have long to wait. In the early afternoon the first of a fresh series of provocations took place between the rival floors, setting in motion again the dormant machinery of disruption and hostility. The incidents were trivial enough, but Laing knew already that they reflected deep-rooted antagonisms that were breaking through the surface of life within the high-rise at more and more points. 
Many of the factors involved had long been obvious – complaints about noise and the abuse of the building's facilities, rivalries over the better-sited apartments (those away from elevator lobbies and the service shafts, with their eternal rumbling). There was even a certain petty envy of the more attractive women who were supposed to inhabit the upper floors, a widely held belief that Laing had enjoyed testing. During the electricity blackout the eighteen-year-old wife of a fashion photographer on the 38th floor had been assaulted in the hairdressing salon by an unknown woman. Presumably in retaliation, three air-hostesses from the 2nd floor were aggressively jostled by a party of marauding top-floor matrons led by the strong-shouldered wife of the jeweller. Watching from Charlotte's balcony, Laing waited as the first of these incidents took place. Standing there with a pretty woman, a drink in one hand, he felt pleasantly light-headed. Below them, on the 9th floor, a children's party was in full swing. The parents made no attempt to restrain their offspring, in effect urging them to make as much noise as possible. Within half an hour, fuelled by a constant flow of alcohol, the parents took over from their children. Charlotte laughed openly as soft drinks were poured on to the cars below, drenching the windscreens and roofs of the expensive limousines and sports saloons in the front ranks. These lively proceedings were watched by hundreds of residents who had come out on to their balconies. Playing up to their audience, the parents egged on their children. The party was soon out of control. Drunken children tottered about helplessly. High above them, on the 37th floor, a woman barrister began to shout angrily, outraged by the damage to her open-topped sports-car, whose black leather seats were covered with melting ice-cream. A pleasant carnival atmosphere reigned. At least it made a change, Laing felt, from the formal behaviour of the high-rise.
Despite themselves, he and Charlotte joined in the laughter and applause as if they were spectators at an impromptu amateur circus. A remarkable number of parties were being held that evening. Usually, few parties took place other than at weekends, but on this Wednesday evening everyone was involved in one revel or another. Telephones rang continuously, and Charlotte and Laing were invited to no less than six separate parties. 'I ought to get my hair done.' Charlotte took his arm happily, almost embracing Laing. 'What exactly are we celebrating?' The question surprised Laing. He held Charlotte's shoulder, as if protecting her. 'God only knows – nothing to do with fun and games.' One of the invitations had come from Richard Wilder. Instantly, both he and Charlotte declined. 'Why did we refuse?' Charlotte asked, her hand still on the receiver. 'He was expecting us to say no.' 'The Wilders live on the 2nd floor,' Laing explained. 'Things _are_ rather rowdy down there...' 'Robert, that's a rationalization.' Behind Charlotte, as she spoke, her television set was showing the newsreel of an attempted prison break-out. The sound had been turned down, and the silent images of crouching warders and police, and the tiers of barricaded cells, flickered between her legs. Everyone in the high-rise, Laing reflected, watched television with the sound down. The same images glowed through his neighbours' doorways when he returned to his apartment. For the first time, people were leaving their front doors ajar and moving casually in and out of each other's apartments. However, these intimacies did not extend beyond each resident's immediate floor. Elsewhere the polarization of the building proceeded apace. Finding that he had run out of liquor, Laing took the elevator down to the 10th-floor concourse. As he expected, there was a heavy run on alcohol, and long lines of impatient residents stood outside the liquor store. 
Seeing his sister Alice near the counter, Laing tried to enlist her help. Without hesitating, she turned him down, and promptly launched into a vigorous denunciation of the tomfoolery that afternoon. In some way she clearly associated Laing with the lower-floor tenants responsible, identifying him with Richard Wilder and his rowdies. As Laing waited to be served, what resembled a punitive expedition from the upper floors caused a fracas in the swimming-pool. A party of residents from the top three floors arrived in a belligerent mood. Among them was the actress whose Afghan hound had drowned in the pool. She and her companions began by fooling about in the water, drinking champagne on a rubber raft against the swimming-pool rules and splashing people leaving the changing cubicles. After a futile attempt to intercede, the elderly attendant gave up and retreated to his booth behind the diving-boards. The elevators were full of aggressive pushing and heaving. The signal buttons behaved erratically, and the elevator shafts drummed as people pounded impatiently on the doors. On their way to a party on the 27th floor Laing and Charlotte were jostled when their elevator was carried down to the 3rd floor by a trio of drunken pilots. Bottles in hand, they had been trying for half an hour to reach the 10th floor. Seizing Charlotte good-humouredly around the waist, one of the pilots almost dragged her off to the small projection theatre beside the school which had previously been used for showing children's films. The theatre was now screening a private programme of blue movies, including one apparently made on the premises with locally recruited performers. At the party on the 27th floor, given by Adrian Talbot, an effeminate but likeable psychiatrist at the medical school, Laing began to relax for the first time that day. He noticed immediately that all the guests were drawn from the apartments nearby. Their faces and voices were reassuringly familiar. 
In a sense, as he remarked to Talbot, they constituted the members of a village. 'Perhaps a clan would be more exact,' Talbot commented. 'The population of this apartment block is nowhere near so homogeneous as it looks at first sight. We'll soon be refusing to speak to anyone outside our own enclave.' He added, 'My car had its windscreen smashed this afternoon by a falling bottle. Could I move it back to where you people are?' As a qualified physician, Talbot was entitled to park in the ranks closest to the building. Laing, perhaps anticipating the dangers of proximity, had never made use of this concession. The psychiatrist's request was instantly granted by his fellow residents, an appeal to solidarity that no member of his clan could deny. The party was one of the most successful Laing had attended. Unlike the majority of parties in the high-rise, at which well-bred guests stood about exchanging professional small-talk before excusing themselves, this one had real buoyancy, an atmosphere of true excitement. Within half an hour almost all the women were drunk, a yardstick Laing had long used to measure the success of a party. When he complimented Talbot the psychiatrist was noncommittal. 'There's a quickening pulse in the air, all right, but has it anything to do with good humour or fellow-feeling? Rather the opposite, I'd guess.' 'You're not concerned?' 'For some reason, less than I should be – but that's true of us all.' These agreeably expressed remarks cautioned Laing. Listening to the animated conversations around him, he was struck by the full extent of the antagonism being expressed, the hostility directed at people who lived in other sections of the high-rise. The malicious humour, the eagerness to believe any piece of gossip and any tall story about the shiftlessness of the lower-floor tenants, or the arrogance of the upper-floor, had all the intensity of racial prejudice. But as Talbot had pointed out, Laing found himself unworried by all this.
He even took a certain crude pleasure in joining in the gossip, and in watching the usually circumspect Charlotte Melville put down several more than two drinks too many. At least it was a means by which they could reach each other. However, as the party broke up a small but unpleasant episode took place outside the elevator doors in the 27th-floor lobby. Although it was after ten o'clock, the entire building was alive with noise. Residents were barging in and out of each other's apartments, shouting down the staircases like children refusing to go to bed. Confused by the endless button-punching, the elevators had come to a halt, and gangs of impatient passengers packed the lobbies. Although their next destination, a party given by a lexicographer on the 26th floor, was only one storey below them, everyone leaving Talbot's party was determined not to use the stairs. Even Charlotte, face flushed and tottering happily on Laing's arm, joined in the wild surge across the elevator lobby and drummed on the doors with her strong fists. When at last an elevator arrived, the doors opened to reveal a solitary passenger, a thin-shouldered and neurasthenic young masseuse who lived with her mother on the 5th floor. Laing immediately recognized her as one of the 'vagrants', of whom there were many in the high-rise, bored apartment-bound housewives and stay-at-home adult daughters who spent a large part of their time riding the elevators and wandering the long corridors of the vast building, migrating endlessly in search of change or excitement. Alarmed by the drunken crowd reeling towards her, the young woman snapped out of her reverie and pressed a button at random. A derisory hoot went up from the swaying guests. Within seconds she was pulled from the elevator and put through a mock-playful grilling. A statistician's over-excited wife shouted at the hapless girl in a shrill voice, pushed a strong arm through the front rank of interrogators and slapped her face. 
Pulling himself away from Charlotte, Laing stepped forward. The crowd's mood was unpleasant but difficult to take seriously. His neighbours were like a group of unrehearsed extras playing a lynch scene. 'Come on – I'll see you to the stairs.' Holding the young woman by her thin shoulders, he tried to steer her towards the door, but there was a chorus of sceptical shouts. The women among the guests pushed aside their husbands and began to punch the girl on the arms and chest. Giving up, Laing stood to one side. He watched as the shocked young woman stumbled into the mouth of this eager gauntlet and was pummelled through a circuit of fists before she was allowed to disappear into the stairwell. His reflex of chivalry and good sense had been no match for this posse of middle-aged avenging angels. Uneasily, he thought: careful, Laing, or some stockbroker's wife will unman you as expertly as she de-stones a pair of avocados. The night passed noisily, with constant movement through the corridors, the sounds of shouts and breaking glass in the elevator shafts, the blare of music falling across the dark air.

# 3 Death of a Resident

A cloudless sky, as dull as the air over a cold vat, lay across the concrete walls and embankments of the development project. At dawn, after a confused night, Laing went out on to his balcony and looked down at the silent parking-lots below. Half a mile to the south, the river continued on its usual course from the city, but Laing searched the surrounding landscape, expecting it to have changed in some radical way. Wrapped in his bath-robe, he massaged his bruised shoulders. Although he had failed to realize it at the time, there had been a remarkable amount of physical violence during the parties. He touched the tender skin, prodding the musculature as if searching for another self, the physiologist who had taken a quiet studio in this expensive apartment building six months earlier. Everything had started to get out of hand.
Disturbed by the continuous noise, he had slept for little more than an hour. Although the high-rise was silent, the last of the hundred or so separate parties held in the building had ended only five minutes beforehand. Far below him, the cars in the front ranks of the parking-lot were spattered with broken eggs, wine and melted ice-cream. A dozen windscreens had been knocked out by falling bottles. Even at this early hour, at least twenty of Laing's fellow residents were standing on their balconies, gazing down at the debris gathering at the cliff-foot. Unsettled, Laing prepared breakfast, absent-mindedly pouring away most of the coffee he had percolated before he tasted it. With an effort he reminded himself that he was due to demonstrate in the physiology department that morning. Already his attention was fixed on the events taking place within the high-rise, as if this huge building existed solely in his mind and would vanish if he stopped thinking about it. Staring at himself in the kitchen mirror, at his wine-stained hands and unshaven face with its surprisingly good colour, he tried to switch himself on. For once, Laing, he told himself, fight your way out of your own head. The disturbing image of the posse of middle-aged women beating up the young masseuse anchored everything around him to a different plane of reality. His own reaction – the prompt side-step out of their way – summed up more than he realized about the progress of events. At eight o'clock Laing set off for the medical school. The elevator was filled with broken glass and beer cans. Part of the control panel had been damaged in an obvious attempt to prevent the lower floors signalling the car. As he walked across the parking-lot Laing looked back at the high-rise, aware that he was leaving part of his mind behind him. When he reached the medical school he walked through the empty corridors of the building, with an effort re-establishing the identity of the offices and lecture theatres.
He let himself into the dissecting rooms of the anatomy department and walked down the lines of glass-topped tables, staring at the partially dissected cadavers. The steady amputation of limbs and thorax, head and abdomen by teams of students, which would reduce each cadaver by term's end to a clutch of bones and a burial tag, exactly matched the erosion of the world around the high-rise. During the day, as Laing took his supervision and lunched with his colleagues in the refectory, he thought continually about the apartment building, a Pandora's box whose thousand lids were one by one inwardly opening. The dominant tenants of the high-rise, Laing reflected, those who had adapted most successfully to life there, were not the unruly airline pilots and film technicians from the lower floors, nor the bad-tempered and aggressive wives of the well-to-do tax specialists on the upper levels. Although at first sight these people appeared to provoke all the tension and hostility, those really responsible were the quiet and self-contained residents, like the dental surgeon Steele and his wife. A new social type was being created by the apartment building, a cool, unemotional personality impervious to the psychological pressures of high-rise life, with minimal needs for privacy, who thrived like an advanced species of machine in the neutral atmosphere. This was the sort of resident who was content to do nothing but sit in his over-priced apartment, watch television with the sound turned down, and wait for his neighbours to make a mistake. Perhaps the recent incidents represented a last attempt by Wilder and the airline pilots to rebel against this unfolding logic? 
Sadly, they had little chance of success, precisely because their opponents were people who were content with their lives in the high-rise, who felt no particular objection to an impersonal steel and concrete landscape, no qualms about the invasion of their privacy by government agencies and data-processing organizations, and if anything welcomed these invisible intrusions, using them for their own purposes. These people were the first to master a new kind of late twentieth-century life. They thrived on the rapid turnover of acquaintances, the lack of involvement with others, and the total self-sufficiency of lives which, needing nothing, were never disappointed. Alternatively, their real needs might emerge later. The more arid and affectless life became in the high-rise, the greater the possibilities it offered. By its very efficiency, the high-rise took over the task of maintaining the social structure that supported them all. For the first time it removed the need to repress every kind of anti-social behaviour, and left them free to explore any deviant or wayward impulses. It was precisely in these areas that the most important and most interesting aspects of their lives would take place. Secure within the shell of the high-rise like passengers on board an automatically piloted airliner, they were free to behave in any way they wished, explore the darkest corners they could find. In many ways, the high-rise was a model of all that technology had done to make possible the expression of a truly 'free' psychopathology. During the long afternoon Laing slept in his office, waiting until he could leave the medical school and return home. When he left at last he drove at speed past the half-completed television studios, and then was held up for five minutes by a line of bulk-cement carriers entering the construction site.
It was here that Anthony Royal had been injured when his car had been crushed by a reversing grader – it often struck Laing as ironic, and in a way typical of Royal's ambiguous personality, that he should not only have become the project's first road casualty, but have helped to design the site of the accident. Annoyed by the delay, Laing fretted at the wheel. For some reason he was convinced that important events were taking place in his absence. Sure enough, when he reached the apartment building at six o'clock he learned that a number of fresh incidents had occurred. After changing, he joined Charlotte Melville for drinks. She had left her advertising agency before lunch, worried about her son. 'I didn't like him being on his own here – the babysitters are so unreliable.' She poured whisky into their glasses, gesturing with the decanter in an alarmed way as if about to toss it over the balcony rail. 'Robert, what _is_ happening? Everything seems to be in a state of crisis—I'm frightened to step into an elevator by myself.' 'Charlotte, things aren't that bad,' Laing heard himself say. 'There's nothing to worry about.' Did he really believe that life here was running smoothly? Laing listened to his own voice, and noticed how convincing he sounded. The catalogue of disorder and provocation was a long one, even for a single afternoon. Two successive groups of children from the lower floors had been turned away from the recreation garden on the roof. This walled enclosure fitted with swings, roundabouts and play-sculptures had been specifically intended by Anthony Royal for the amusement of the residents' children. The gates of the garden had now been padlocked, and any children approaching the roof were ordered away. Meanwhile, the wives of several top-floor tenants claimed that they had been abused in the elevators. Other residents, as they left for their offices that morning, had found that their car tyres had been slashed. 
Vandals had broken into the classrooms of the junior school on the 10th floor and torn down the children's posters. The lobbies of the five lower floors had been mysteriously fouled by dog excrement; the residents had promptly scooped this into an express elevator and delivered it back to the top floor. When Laing laughed at this Charlotte drummed her fingers on his arm, as if trying to wake him up. 'Robert! You ought to take all this seriously!' 'I do...' 'You're in a _trance!_ ' Laing looked down at her, suddenly aware that this intelligent and likeable woman was failing to get the point. He placed an arm around her, unsurprised by the fierce way in which she embraced him. Ignoring her small son trying to open the kitchen door, she leaned against it and pulled Laing on to herself, kneading his arms as if trying to convince herself that here at last was something whose shape she could influence. During the hour they waited for her son to fall asleep her hands never left Laing. But even before they sat down together on her bed Laing knew that, almost as an illustration of the paradoxical logic of the high-rise, their relationship would end rather than begin with this first sexual act. In a real sense this would separate them from each other rather than bring them together. By the same paradox, the affection and concern he felt for her as they lay across her small bed seemed callous rather than tender, precisely because these emotions were unconnected with the realities of the world around them. The tokens that they should exchange, which would mark their real care for each other, were made of far more uncertain materials, the erotic and perverse. When she was asleep in the early evening light, Laing let himself out of the apartment and went in search of his new friends. Outside, in the corridors and elevator lobbies, scores of people were standing about. In no hurry to return to his apartment, Laing moved from one group to another, listening to the talk going on. 
These informal meetings were soon to have an almost official status, forums at which the residents could air their problems and prejudices. Most of their grievances, Laing noticed, were now directed at the other tenants rather than at the building. The failure of the elevators was blamed on people from the upper and lower floors, not on the architects or the inefficient services designed into the block. The garbage-disposal chute Laing shared with the Steeles had jammed again. He tried to telephone the building manager, but the exhausted man had been inundated with complaints and requests for action of every kind. Several members of his staff had resigned and the energies of the remainder were now devoted to keeping the elevators running and trying to restore power to the 9th floor. Laing mustered what tools he could find and went into the corridor to free the chute himself. Steele immediately came to his aid, bringing with him a complex multi-bladed cutting device. While the two men worked away, trying to loosen a bundle of brocaded curtain that supported a column of trapped kitchen refuse, Steele amiably regaled Laing with a description of those tenants above and below them responsible for overloading the disposal system. 'Some of these people generate the most unusual garbage – certainly the kind of thing we didn't expect to find here,' he confided to Laing. 'Objects that could well be of interest to the vice squad. That beautician on the 33rd floor, and the two so-called radiographers living together on the 22nd. Strange young women, even for these days...' To some extent, Laing found himself agreeing. However petty the complaints might sound, the fifty-year-old owner of the hairdressing salon _was_ endlessly redecorating her apartment on the 33rd floor, and _did_ stuff old rugs and even intact pieces of small furniture into the chute. Steele stood back as the column of garbage sank below in a greasy avalanche. 
He held Laing's arm, steering him around a beer can lying on the corridor floor. 'Still, no doubt we're all equally guilty – I hear that on the lower floors people are leaving small parcels of garbage outside their apartment doors. Now, you'll come in for a drink? My wife is keen to see you again.' Despite his memories of their quarrel, Laing had no qualms about accepting. As he expected, in the larger climate of confrontation any unease between them was soon forgotten. Her hair immaculately coiffured, Mrs Steele hovered about him with the delighted smile of a novice madam entertaining her first client. She even complimented Laing on his choice of music, which she could hear through the poorly insulated walls. Laing listened to her spirited description of the continuous breakdown of services within the building, the vandalizing of an elevator and the changing cubicles of the 10th-floor swimming-pool. She referred to the high-rise as if it were some kind of huge animate presence, brooding over them and keeping a magisterial eye on the events taking place. There was something in this feeling – the elevators pumping up and down the long shafts resembled pistons in the chamber of a heart. The residents moving along the corridors were the cells in a network of arteries, the lights in their apartments the neurones of a brain. Laing looked out across the darkness at the brilliantly lit decks of the nearby high-rise, barely aware of the other guests who had arrived and were sitting in the chairs around him – the television newsreader Paul Crosland, and a film critic named Eleanor Powell, a hard-drinking redhead whom Laing often found riding the elevators up and down in a fuddled attempt to find her way out of the building. Crosland had become the nominal leader of their clan – a local cluster of some thirty contiguous apartments on the 25th, 26th and 27th floors. 
Together they were planning a joint shopping expedition to the 10th-floor supermarket the following day, like a band of villagers going on an outing to an unpoliced city. Beside him on the sofa, Eleanor Powell was watching Crosland in a glazed way while the newsreader, in his florid announcer's style, outlined his proposals for the security of their apartments. Now and then she reached forward with one hand, as if trying to adjust Crosland's image, perhaps alter the colour values of his fleshy cheeks or turn down the volume of his voice. 'Isn't your apartment next to the elevator lobby?' Laing asked her. 'You'll need to barricade yourself in.' 'What on earth for? I leave the door wide open.' When Laing looked puzzled, she said, 'Isn't that part of the fun?' 'You think that we're secretly enjoying all this?' 'Don't you? I'd guess so, doctor. Togetherness is beating up an empty elevator. For the first time since we were three years old what we do makes absolutely no difference. When you think about it, that's really rather interesting...' When she leaned against him, resting her head on his shoulder, Laing said: 'Something seems to be wrong with the air-conditioning...there should be some fresh air on the balcony.' Holding his arm, she picked up her bag. 'All right. Lift me up. You're a shy lecher, doctor...' They had reached the french windows when there was an explosion of breaking glass from a balcony high above them. Fragments of glass flicked away like knives through the night air. A large, ungainly object whirled past, no more than twenty feet from the balcony. Startled, Eleanor blundered into Laing. As they caught their balance there was the sound of a harsh metallic collision from the ground below, almost as if a car had crashed. A short but unbroken silence followed, the first true quiet, Laing realized, that the building had known for days. 
Everyone crowded on to the balcony, Crosland and Steele grappling together as if each was trying to prevent the other from jumping over the ledge. Pushed along the railing, Laing saw his own empty balcony fifteen feet away. In an absurd moment of panic he wondered if he himself was the victim. All around, people were leaning on their railings, glasses in hand, staring down through the darkness. Far below, embedded in the crushed roof of a car in the front rank, was the body of a man in evening dress. Eleanor Powell, her face like pain, swayed from the rail and pushed her way past Crosland. Laing held tightly to the metal bar, shocked and excited at the same time. Almost every balcony on the huge face of the high-rise was now occupied, the residents gazing down as if from their boxes in an enormous outdoor opera house. No one approached the crushed car, or the body embedded in its roof. Seeing the burst tuxedo and the small patent-leather shoes, Laing thought that he recognized the dead man as the jeweller from the 40th floor. His pebble spectacles lay on the ground by the front wheel of the car, their intact lenses reflecting the brilliant lights of the apartment building. # 4 Up! During the week after the jeweller's death, events moved rapidly in a more disquieting direction. Richard Wilder, twenty-four floors below Dr Laing and for that reason far more exposed to the pressures generated within the building, was among the first to realize the full extent of the changes taking place. Wilder had been away on location for three days, shooting scenes for a new documentary on prison unrest. A strike by the inmates at a large provincial prison, widely covered by the newspapers and television, had given him a chance to inject some directly topical footage into the documentary. He returned home in the early afternoon. He had telephoned Helen each evening from his hotel and questioned her carefully about conditions in the high-rise, but she made no particular complaints. 
Nevertheless, her vague tone concerned him. When he had parked Wilder kicked open the door and lifted his heavy body from behind the steering wheel. From his place on the perimeter of the parking-lot he carefully scanned the face of the huge building. At first glance everything had settled down. The hundreds of cars were parked in orderly lines. The tiers of balconies rose through the clear sunlight, potted plants thriving behind the railings. For a moment Wilder felt a pang of regret – always a believer in direct action, he had enjoyed the skirmishes of the past week, roughing up his aggressive neighbours, particularly those residents from the top floors who had made life difficult for Helen and the two boys. The one discordant note was provided by the fractured picture window on the 40th floor, through which the unfortunate jeweller had made his exit. At either end of the floor were two penthouse apartments, the north corner occupied by Anthony Royal, the other by the jeweller and his wife. The broken pane had not been replaced, and the asterisk of cracked glass reminded Wilder of some kind of cryptic notation, a transfer on the fuselage of a wartime aircraft marking a kill. Wilder unloaded his suitcase from the car, and a holdall containing presents for Helen and his sons. On the rear seat was a lightweight cine-camera with which he planned to shoot a few hundred feet of pilot footage for his documentary on the high-rise. The unexplained death of the jeweller had confirmed his long-standing conviction that an important documentary was waiting to be made about life in the high-rise – perhaps taking the jeweller's death as its starting point. It was a lucky coincidence that he lived in the same block as the dead man – the programme would have all the impact of a personal biography. 
When the police investigation ended the case would move on to the courts, and a huge question mark of notoriety would remain immovably in place over what he liked to term this high-priced tenement, this hanging palace self-seeding its intrigues and destruction. Carrying the luggage in his strong arms, Wilder set off on the long walk back to the apartment building. His own apartment was directly above the proscenium of the main entrance. He waited for Helen to emerge on to the balcony and wave him in, one of the few compensations for having to leave his car at the edge of the parking-lot. However, all but one of the blinds were still drawn. Quickening his step, Wilder approached the inner lines of parked cars. Abruptly, the illusion of normalcy began to give way. The cars in the front three ranks were spattered with debris, their once-bright bodywork streaked and stained. The pathways around the building were littered with bottles, cans, and broken glass, heaped about as if they were being continuously shed from the balconies. In the main entrance Wilder found that two of the elevators were out of order. The lobby was deserted and silent, as if the entire high-rise had been abandoned. The manager's office was closed, and unsorted mail lay on the tiled floor by the glass doors. On the wall facing the line of elevators was scrawled a partly obliterated message – the first of a series of slogans and private signals that would one day cover every exposed surface in the building. Fittingly enough, these graffiti reflected the intelligence and education of the tenants. Despite their wit and imagination, these complex acrostics, palindromes and civilized obscenities aerosolled across the walls soon turned into a colourful but indecipherable mess, not unlike the cheap wallpapers found in launderettes and travel-agencies which the residents of the high-rise most affected to despise. Wilder waited impatiently by the elevators, his temper mounting. 
Irritably he punched the call buttons, but none of the cars showed any inclination to respond to him. All of them were permanently suspended between the 20th and 30th floors, between which they made short journeys. Picking up his bags, Wilder headed for the staircase. When he reached the 2nd floor he found the corridor in darkness, and tripped over a plastic sack stuffed with garbage that blocked his front door. As he let himself into the hall his first impression was that Helen had left the apartment and taken the two boys away with her. The blinds in the living-room were lowered, and the air-conditioning had been switched off. Children's toys and clothes lay about on the floor. Wilder opened the door of the boys' bedroom. They lay asleep together, breathing unevenly in the stale air. The remains of a meal left from the previous day were on a tray between the beds. Wilder crossed the living-room to his own bedroom. One blind had been raised, and the daylight crossed the white walls in an undisturbed bar. Uncannily, it reminded Wilder of a cell he had filmed two days earlier in the psychiatric wing of the prison. Helen lay fully dressed on the neatly made bed. He assumed that she was asleep, but as he crossed the room, trying to quieten his heavy tread, her eyes watched him without expression. 'Richard...it's all right.' She spoke calmly. 'I've been awake – since you rang yesterday, in fact. Was it a good trip?' She started to get up but Wilder held her head on the pillow. 'The boys – what's going on here?' 'Nothing.' She touched his hand, giving him a reassuring smile. 'They wanted to sleep, so I let them. There isn't anything else for them to do. It's too noisy at night. I'm sorry the place is in such a mess.' 'Never mind the place. Why aren't the boys at school?' 'It's closed – they haven't been since you left.' 'Why not?' Irritated by his wife's passivity, Wilder began to knead his heavy hands together. 'Helen, you can't lie here like this all day. 
What about the roof garden? Or the swimming-pool?' 'I think they only exist in my head. It's too difficult...' She pointed to the cine-camera on the floor between Wilder's feet. 'What's that for?' 'I may shoot some footage – for the high-rise project.' 'Another prison documentary.' Helen smiled at Wilder without any show of humour. 'I can tell you where to start.' Wilder took her face in his hands. He felt the slim bones, as if making sure that this tenuous armature still existed. Somehow he would raise her spirits. Seven years earlier, when he had met her while working for one of the commercial television companies, she had been a bright and self-confident producer's assistant, more than a match for Wilder with her quick tongue. The time not spent in bed together they had spent arguing. Now, after the combination of the two boys and a year in the high-rise, she was withdrawing into herself, obsessively wrapped up with the children's most elementary activities. Even her reviewing of children's books was part of the same retreat. Wilder brought her a glass of the sweet liqueur she liked. Trying to decide what best to do, he rubbed the muscles of his chest. What had at first pleased Wilder, but now disturbed him most of all, was that she no longer noticed his affairs with the bachelor women in the high-rise. Even if she saw her husband talking to one of them Helen would approach, tugging the boys after her, as if no longer concerned with what his wayward sex might be up to. Several of these young women, like the television actress whose Afghan he had drowned in the pool during the blackout, or the continuity girl on the floor above them, had become Helen's friends. The latter, a serious-minded girl who read Byron in the supermarket queues, worked for an independent producer of pornographic films, or so Helen informed him matter-of-factly. 'She has to note the precise sexual position between takes. 
An interesting job – I wonder what the qualifications are, or the life expectancy?' Wilder had been shocked by this. Vaguely prudish, he had never been able to question the continuity girl. When they made love in her 3rd-floor apartment he had the uneasy feeling that she was automatically memorizing every embrace and copulatory posture in case he was suddenly called away, and might take off again from exactly the same point with another boy-friend. The limitless professional expertise of the high-rise had its unsettling aspects. Wilder watched his wife sip the liqueur. He stroked her small thighs in an attempt to revive her. 'Helen, come on – you look as if you're waiting for the end. We'll straighten everything and take the boys up to the swimming-pool.' Helen shook her head. 'There's too much hostility. It's always been there, but now it stands out. People pick on the children – without realizing it, I sometimes think.' She sat on the edge of the bed while Wilder changed, staring through the window at the line of high-rises receding across the sky. 'In fact, it's not really the other residents. It's the building...' 'I know. But once the police investigation is over you'll find that everything will quieten down. For one thing, there'll be an overpowering sense of guilt.' 'What are they investigating?' 'The death, of course. Of our high-diving jeweller.' Picking up the cine-camera, Wilder took off the lens shroud. 'Have you spoken to the police?' 'I don't know. I've been avoiding everyone.' Brightening herself by an effort of will, she went over to Wilder. 'Richard – have you ever thought of selling the apartment? We could actually leave. I'm serious.' 'Helen...' Nonplussed for a moment, Wilder stared down at the small, determined figure of his wife. He took off his trousers, as if exposing his thick chest and heavy loins in some way reasserted his authority over himself. 'That's equivalent to being driven out. 
Anyway, we'd never get back what we paid for the apartment.' He waited until Helen lowered her head and turned away to the bed. At her insistence, six months earlier, they had already moved from their first apartment on the ground floor. At the time they had seriously discussed leaving the high-rise altogether, but Wilder had persuaded Helen to stay on, for reasons he had never fully understood. Above all, he would not admit his failure to deal on equal terms with his professional neighbours, to outstare these self-satisfied cost-accountants and marketing managers. As his sons wandered sleepily into the room Helen remarked, 'Perhaps we could move to a higher floor.' Shaving his chin, Wilder pondered this last comment of his wife's. The frail plea had a particular significance, as if some long-standing ambition had been tapped inside his head. Helen, of course, was thinking in terms of social advancement, of moving in effect to a 'better neighbourhood', away from this lower-class suburb to those smarter residential districts somewhere between the 15th and 30th floors, where the corridors were clean and the children would not have to play in the streets, where tolerance and sophistication civilized the air. Wilder had something different in mind. As he listened to Helen's quiet voice, murmuring to her two sons as if speaking to them from inside a deep dream, he examined himself in the mirror. Like a prize-fighter reassuring himself before a match, he patted the muscles of his stomach and shoulders. In the mental as well as the physical sense, he was almost certainly the strongest man in the building, and Helen's lack of spirit annoyed him. He realized that he had no real means of coping with this kind of passivity. His response to it was still framed by his upbringing, by an over-emotional mother who had loved him devotedly through the longest possible childhood she could arrange and thereby given Wilder what he always thought of as his unshakeable self-confidence. 
She had separated from Wilder's father – a shadowy figure of disreputable background – when he was a small child. The second marriage, to a pleasant but passive accountant and chess enthusiast, had been wholly dominated by the relationship between the mother and her bullock-like son. When he met his future wife Wilder naively believed that he wanted to pass on these advantages to Helen, to look after her and provide an endless flow of security and good humour. Of course, as he realized now, no one ever changed, and for all his abundant self-confidence he needed to be looked after just as much as ever. Once or twice, in unguarded moments during the early days of their marriage, he had attempted to play the childish games he had enjoyed with his mother. But Helen had not been able to bring herself to treat Wilder like her son. For her part, Wilder guessed, love and care were the last things she really wanted. Perhaps the breakdown of life in the high-rise would fulfil her unconscious expectations more than she realized. As he massaged his cheeks Wilder listened to the air humming erratically in the air-conditioning flues behind the shower stall, pumped all the way down from the roof of the building thirty-nine floors above. He watched the water emerge from the tap. This too had made its long descent from the reservoirs on the roof, running down the immense internal wells riven through the apartment block, like icy streams percolating through a subterranean cavern. His determination to make the documentary had a strong personal bias, part of a calculated attempt to come to terms with the building, meet the physical challenge it presented to him, and then dominate it. For some time now he had known that he was developing a powerful phobia about the high-rise. 
He was constantly aware of the immense weight of concrete stacked above him, and the sense that his body was the focus of the lines of force running through the building, almost as if Anthony Royal had deliberately designed his body to be held within their grip. At night, as he lay beside his sleeping wife, he would often wake from an uneasy dream into the suffocating bedroom, conscious of each of the 999 other apartments pressing on him through the walls and ceiling, forcing the air from his chest. He was sure that he had drowned the Afghan, not because he disliked the dog particularly or wanted to upset its owner, but to revenge himself on the upper storeys of the building. He had seized the dog in the darkness when it blundered into the pool. Giving in to a cruel but powerful impulse, he had pulled it below the water. As he held its galvanized and thrashing body under the surface, in a strange way he had been struggling with the building itself. Thinking of those distant heights, Wilder took his shower, turning the cold tap on full and letting the icy jet roar across his chest and loins. Where Helen had begun to falter, he felt more determined, like a climber who has at long last reached the foot of the mountain he has prepared all his life to scale. # 5 The Vertical City Whatever plans he might devise for his ascent, whatever route to the summit, it was soon obvious to Wilder that at its present rate of erosion little of the high-rise would be left. Almost everything possible was going wrong with the services. He helped Helen straighten the apartment, and tried to jerk some sense of vitality into his dormant family by drawing blinds and moving noisily around the rooms. Wilder found it difficult to revive them. At five-minute intervals the air-conditioning ceased to work, and in the warm summer weather the apartment was heavy with stagnant air. Wilder noticed that he had already begun to accept the foetid atmosphere as normal. 
Helen told him that she had heard a rumour from the other residents that dog excrement had been deliberately dropped into the air-conditioning flues by the upper-level tenants. Strong winds circulated around the open plazas of the development project, buffeting the lower floors of the apartment building as they swirled through the concrete legs. Wilder opened the windows, hoping for some fresh air, but the apartment soon filled with dust and powdered cement. The ashy film already covered the tops of cupboards and bookshelves. By the late afternoon the residents began to return from their offices. The elevators were noisy and overcrowded. Three of them were now out of order, and the remainder were jammed with impatient tenants trying to reach their floors. From the open door of his apartment Wilder watched his neighbours jostle each other aggressively like bad-tempered miners emerging from their pit-cages. They strode past him, briefcases and handbags wielded like the instruments of an over-nervous body armour. On an impulse Wilder decided to test his rights of free passage around the building, and his access to all its services, particularly the swimming-pool on the 35th floor and the children's sculpture-garden on the observation roof. Taking his camera, he set out for the roof with the older of his two sons. However, he soon found that the high-speed elevators were either out of order, under repair, or kept permanently at the top floors with their doors jammed open. The only access to them was through the private outside entrance to which Wilder did not have a key. All the more determined now to reach the roof, Wilder waited for one of the intermediate elevators which would carry them as far as the 35th floor. When it arrived he pushed his way into the crowded cabin, surrounded by passengers who stared down at Wilder's six-year-old son with unfeigned hostility. At the 23rd floor the elevator refused to move any further. 
The passengers scrummaged their way out, drumming their briefcases against the closed doors of the elevators in what seemed to be a ritual display of temper. Wilder set off up the stairs, carrying his small son in his arms. With his powerful physique, he was strong enough to climb all the way to the roof. Two floors above, however, the staircase was blocked by a group of local residents – among them the offensive young orthodontic surgeon who was Robert Laing's neighbour – trying to free a garbage-disposal chute. Suspicious that they might be tampering with the air-conditioning ducts, Wilder pushed through them, but was briskly shouldered aside by a man he recognized as a newsreader for a rival television company. 'This staircase is closed, Wilder! Can't you get the point?' 'What?' Wilder was amazed by this effrontery. 'How do you mean?' _'Closed!_ What are you doing up here, anyway?' The two men squared up to each other. Amused by the announcer's aggressive manner, Wilder lifted the camera as if to film his florid face. When Crosland waved him away imperiously, Wilder was tempted to knock the man down. Not wishing to upset his son, who was nervous enough already in this harsh atmosphere, he retreated to the elevator and returned to the lower floors. The confrontation, however minor, had unsettled Wilder. Ignoring Helen, he prowled around the apartment, swinging the camera to and fro. He felt excited in a confused way, partly by his plans for the documentary, but also by the growing atmosphere of collision and hostility. From the balcony he watched the huge, Alcatraz blocks of the nearby high-rises. The material about these buildings, visual and sociological, was almost limitless. They would film the exteriors from a helicopter, and from the nearest block four hundred yards away – in his mind's eye he could already see a long, sixty-second zoom, slowly moving from the whole building in frame to a close-up of a single apartment, one cell in this nightmare termitary. 
The first half of the programme would examine life in the high-rise in terms of its design errors and minor irritations, while the remainder would then look at the psychology of living in a community of two thousand people boxed up into the sky – everything from the incidence of crime, divorce and sexual misdemeanours to the turnover of residents, their health, the frequency of insomnia and other psychosomatic disorders. All the evidence accumulated over several decades cast a critical light on the high-rise as a viable social structure, but cost-effectiveness in the area of public housing and high profitability in the private sector kept pushing these vertical townships into the sky against the real needs of their occupants. The psychology of high-rise life had been exposed with damning results. The absence of humour, for example, had always struck Wilder as the single most significant feature – all research by investigators confirmed that the tenants of high-rises made no jokes about them. In a strict sense, life there was 'eventless'. On the basis of his own experience, Wilder was convinced that the high-rise apartment was an insufficiently flexible shell to provide the kind of home which encouraged activities, as distinct from somewhere to eat and sleep. Living in high-rises required a special type of behaviour, one that was acquiescent, restrained, even perhaps slightly mad. A psychotic would have a ball here, Wilder reflected. Vandalism had plagued these slab and tower blocks since their inception. Every torn-out piece of telephone equipment, every handle wrenched off a fire safety door, every kicked-in electricity meter represented a stand against decerebration. What angered Wilder most of all about life in the apartment building was the way in which an apparently homogeneous collection of high-income professional people had split into three distinct and hostile camps. 
The old social subdivisions, based on power, capital and self-interest, had reasserted themselves here as anywhere else. In effect, the high-rise had already divided itself into the three classical social groups, its lower, middle and upper classes. The 10th-floor shopping mall formed a clear boundary between the lower nine floors, with their 'proletariat' of film technicians, air-hostesses and the like, and the middle section of the high-rise, which extended from the 10th floor to the swimming-pool and restaurant deck on the 35th. This central two-thirds of the apartment building formed its middle class, made up of self-centred but basically docile members of the professions – the doctors and lawyers, accountants and tax specialists who worked, not for themselves, but for medical institutes and large corporations. Puritan and self-disciplined, they had all the cohesion of those eager to settle for second best. Above them, on the top five floors of the high-rise, was its upper class, the discreet oligarchy of minor tycoons and entrepreneurs, television actresses and careerist academics, with their high-speed elevators and superior services, their carpeted staircases. It was they who set the pace of the building. It was their complaints which were acted upon first, and it was they who subtly dominated life within the high-rise, deciding when the children could use the swimming-pools and roof garden, the menus in the restaurant and the high charges that kept out almost everyone but themselves. Above all, it was their subtle patronage that kept the middle ranks in line, this constantly dangling carrot of friendship and approval. The thought of these exclusive residents, as high above him in their top-floor redoubts as any feudal lord above a serf, filled Wilder with a growing sense of impatience and resentment. However, it was difficult to organize any kind of counter-attack. 
It would be easy enough to play the populist leader and become the spokesman of his neighbours on the lower floors, but they lacked any cohesion or self-interest; they would be no match for the well-disciplined professional people in the central section of the apartment building. There was a latent easy-goingness about them, an inclination to tolerate an undue amount of interference before simply packing up and moving on. In short, their territorial instinct, in its psychological and social senses, had atrophied to the point where they were ripe for exploitation. To rally his neighbours Wilder needed something that would give them a strong feeling of identity. The television documentary would do this perfectly and in terms, moreover, which they could understand. The documentary would dramatize all their resentments, and expose the way in which the services and facilities were being abused by the upper-level tenants. It might even be necessary to foment trouble surreptitiously, to exaggerate the tensions present in the high-rise. However, as Wilder soon discovered, the shape of his documentary was already being determined. Fired by his resolve to fight back, Wilder decided to give his wife and children a break from his ceaseless pacing. The air-conditioning now worked for only five minutes in each hour, and by dusk the apartment was stuffy and humid. The noise of over-loud conversations and record-players at full volume reverberated off the balconies above them. Helen Wilder moved along the already closed windows, her small hands pressed numbly against the latches as if trying to push away the night. Too preoccupied to help her, Wilder set off with a towel and swimming trunks to the pool on the 10th floor. A few telephone calls to his neighbours on the lower floors had confirmed that they were keen to take part in the documentary, but Wilder needed participants from the upper and middle levels of the high-rise. 
The out-of-order elevators had still not been repaired, and Wilder took to the stairs. Sections of the staircase had already been turned into a garbage-well by the residents above. Broken glass littered the steps, cutting his shoes. The shopping mall was crowded with people, milling about and talking at the tops of their voices as if waiting for a political rally to start. Usually deserted at this hour, the swimming-pool was packed with residents playing the fool in the water, pushing each other off the tiled verge and splashing the changing stalls. The attendant had gone, abandoning his booth, and already the pool was beginning to look neglected, discarded towels lying in the gutters. In the showers Wilder recognized Robert Laing. Although the doctor turned his back on him Wilder ignored the rebuff and stood under the next spray. The two men spoke briefly but in non-committal terms. Wilder had always found Laing good company, with his keen eye for any passing young woman, but today he was being standoffish. Like everyone else he had been affected by the atmosphere of confrontation. 'Have the police arrived yet?' Wilder asked above the noise as they walked to the diving-boards. 'No – are you expecting them?' Laing seemed genuinely surprised. 'They'll want to question the witnesses. What happened, in fact? Was he pushed? His wife looks hefty enough – perhaps she wanted a quick divorce?' Laing smiled patiently, as if this remark in doubtful taste was all he expected of Wilder. His sharp eyes were deliberately vague, and remained closed to any probing. 'I know nothing about the accident, Wilder. It may have been suicide, I suppose. Are you personally concerned?' 'Aren't you, Laing? It's odd that a man can fall from a window forty floors above the ground without there being any kind of investigation...' Laing stepped on to the diving board. 
His body was unusually well muscled, Wilder noticed, almost as if he had been taking a good deal of recent exercise, doing dozens of push-ups. Laing waited for a clear space in the crowded water. 'I think we can rely on his neighbours to look after everything.' Wilder lifted his voice. 'I've begun planning the television documentary – his death would make a good starting point.' Laing looked down at Wilder with sudden interest. He shook his head firmly. 'I'd forget all about it – if I were you, Wilder.' He stepped to the end of the board, sprang twice and made a hard, neat dive into the yellowing water. Swimming by himself at the shallow end of the pool, Wilder watched Laing and his party of friends playing about in the deep end. Previously Wilder would have joined them, particularly as there were two attractive women in the group – Charlotte Melville, whom he had not seen for several days about their projected parents' association, and the tyro alcoholic Eleanor Powell. Wilder had obviously been excluded. Laing's pointed use of his surname marked the distance between them, like his vagueness about the dead jeweller, and his sidestepping of the television documentary, in which he had once been keenly interested – if anything, Laing's approval had inspired Wilder to develop the idea into a provisional treatment. Presumably Laing, with his excessive need for privacy, had no wish to see the collective folly of the residents, their childish squabbles and jealousies, exposed on the nation's television screens. Or was there some other impulse at work – a need to shut away, most of all from oneself, any realization of what was actually happening in the high-rise, so that events there could follow their own logic and get even more out of hand? For all his own professed enthusiasm about the documentary, Wilder knew that he had never discussed it with anyone who did not live inside the apartment building.
Even Helen, talking to her mother that afternoon on the telephone, had said vaguely, 'Everything's fine. There's some slight trouble with the air-conditioning, but it's being fixed.' This growing defiance of reality no longer surprised Wilder. The decision that the chaos within the high-rise was a matter for the residents themselves explained the mystery of the dead jeweller. At least a thousand people must have seen the body – Wilder remembered stepping on to the balcony and being startled, not by the sight of the dead man, but by the huge audience reaching up to the sky. Had anyone notified the police? He had taken it for granted, but now he was less sure. Wilder found it hard to believe that this sophisticated and self-important man would commit suicide. Yet no one was in the least concerned, accepting the possibility of murder in the same way that the swimmers in the pool accepted the wine bottles and beer cans rolling around the tiled floor under their feet. During the evening, Wilder's speculations took second place to the struggle to preserve his sanity. After settling the two boys in their bedroom, he and his wife sat down to dinner, only to find that a sudden electricity failure had plunged them into darkness. Sitting opposite each other at the dining-room table, they listened to the continuous noise from the corridor, their neighbours arguing in the elevator lobby, transistors blaring through open apartment doors. Helen began to laugh, relaxing for the first time in weeks. 'Dick, it's a huge children's party that's got out of hand.' She reached out to calm Wilder. In the faint light that crossed the room from the nearby high-rise her slim face had an almost unreal calm, as if she no longer felt herself to be part of the events taking place around her. Restraining his temper, Wilder hunched heavily in the darkness over the table. He was tempted more than once to plunge his fist into the soup. 
When the lights returned he tried to telephone the building manager, but the switchboard was jammed with calls. At last a recorded voice told him that the manager had fallen ill, and that all complaints would be played through and noted for future attention. 'My God, he's actually going to listen to all these tapes – there must be miles of them...' 'Are you sure?' Helen was giggling to herself. 'Perhaps no one else minds. You're the only one.' The tampering with the electricity system had affected the air-conditioning. Dust was spurting from the vents in the walls. Exasperated, Wilder drove his fists together. Like a huge and aggressive malefactor, the high-rise was determined to inflict every conceivable hostility upon them. Wilder tried to close the grilles, but within minutes they were forced to take refuge on the balcony. Their neighbours were crowded against their railings, craning up at the roof as if hoping to catch sight of those responsible. Leaving his wife, who was wandering light-heartedly around the apartment and smiling at the spurting dust, Wilder went out into the corridor. All the elevators were stationary in the upper section of the building. A large group of his neighbours had gathered in the elevator lobby, pounding rhythmically on the doors and complaining about various provocative acts by the residents on the floors above. Wilder pushed his way towards the centre, where two airline pilots were standing on a lobby sofa and selecting the members of a raiding party. Wilder waited his turn, trying to catch their attention, until he realized from the excited talk around him that their mission consisted solely of going up to the 35th floor and publicly urinating into the water. Wilder was about to argue with them, warning that a childish act of this kind would be counter-productive. Until they were organized the notion of a punitive expedition was absurd, as they were far too exposed to retaliation. However, at the last moment he turned away. 
He stood by the doors to the staircase, aware that he no longer felt committed to this crowd of impulsive tenants egging each other on to a futile exercise. Their real opponent was not the hierarchy of residents in the heights far above them, but the image of the building in their own minds, the multiplying layers of concrete that anchored them to the floor. A cheer went up, followed by a chorus of catcalls. An elevator was descending from the 35th floor, the indicator numerals flashing from right to left. While it approached, Wilder thought of Helen and the two boys – he knew already that his decision to dissociate himself from his neighbours had nothing to do with any feelings of concern for his wife and children. The elevator reached the 2nd floor and stopped. As the doors opened there was a sudden hush. Lying on the floor of the cabin was the barely conscious figure of one of Wilder's neighbours, a homosexual air-traffic controller who dined regularly in the 35th-floor restaurant. He turned his bruised face away from the watching crowd and tried to button the shirt torn from his chest. Seeing him clearly as the crowd stepped back, awed by this evidence of open violence, Wilder heard someone say that two more floors, the 5th and 8th, were now in darkness.

# 6 Danger in the Streets of the Sky

All day Richard Wilder had been preparing for his ascent. After the noise-filled night, which he had spent calming his sons and his giggling wife, Wilder left for the television studios. Once there, he cancelled his appointments and told his secretary that he would be away for the next few days. While he spoke, Wilder was barely aware of this puzzled young woman or his curious colleagues in the nearby offices – he had shaved only the left side of his face, and had not changed his clothes since the previous day. Tired out, he briefly fell asleep at his desk, watched by his secretary as he slumped snoring across his unread correspondence.
After no more than an hour at the studios, he packed his briefcase and returned to the high-rise. For Wilder, this brief period away from the apartment building was almost dreamlike in its unreality. He left his car in the parking-lot without locking it and walked towards the entrance, a growing sense of relief coming over him. Even the debris scattered at the foot of the building, the empty bottles and garbage-stained cars with their broken windscreens, in a strange way merely reinforced his conviction that the only real events in his life were those taking place within the high-rise. Although it was after eleven o'clock, Helen and the children were still asleep. A film of white dust covered the furniture in the lounge and bedrooms, as if he had returned to the apartment and its three sleepers after an immense period of time had condensed around them like a stone frost. Wilder had blocked the air-conditioning vents during the night, and the apartment was without sound or movement. Wilder looked down at his wife, lying on the bed surrounded by the children's books she was reviewing. Aware that he would be leaving her in a few hours, he regretted that she was too weak to come with him. They might have climbed the high-rise together. Trying to think more clearly about his ascent, Wilder began to clean the apartment. He stepped out on to the balcony and swept up the cigarette butts and broken glass, condoms and torn newspapers thrown down from the floors above. He could no longer remember when he had made his decision to climb the building, and had little idea of what exactly he would do when he finally got there. He was also well aware of the disparity between the simple business of climbing to the roof – a matter of pressing an elevator button – and the mythologized version of this ascent that had taken over his mind. This same surrender to a logic more powerful than reason was evident in the behaviour of Wilder's neighbours. 
In the elevator lobby he listened to the latest rumours. Earlier that morning there had been a serious brawl between the 9th and 11th-floor tenants. The 10th-floor concourse was now a no-man's-land between two warring factions, the residents of the lower nine floors and those in the middle section of the building. Despite the harassment and increasing violence, no one was surprised by these events. The routines of daily life within the high-rise, the visits to the supermarket, liquor store and hairdressing salon continued as before. In some way the high-rise was able to accommodate this double logic. Even the tone of voice of his neighbours as they described these outbreaks of hostility was calm and matter-of-fact, like that of civilians in a war-torn city dealing with yet another air-raid. For the first time it occurred to Wilder that the residents enjoyed this breakdown of its services, and the growing confrontation between themselves. All this brought them together, and ended the frigid isolation of the previous months. During the afternoon Wilder played with his sons and waited for the evening to come. Helen moved silently around the apartment, barely aware of her husband. After the fit of compulsive laughter the previous evening, her face was waxy and expressionless. Now and then a tic flickered in the right apex of her mouth, as if reflecting a tremor deep within her mind. She sat at the dining-table, mechanically straightening the boys' hair. Watching her, and unable to think of what he could do to help her, Wilder almost believed that it was she who was leaving him, rather than the contrary. As the light began to fade, Wilder watched the first of the residents return from their offices. Among them, stepping from her car, was Jane Sheridan. Six months earlier, Wilder had broken off a brief affair with the actress, ironically enough because of the effort involved in reaching the 37th floor. He had found it difficult to be himself in her apartment. 
All the time he was conscious of the distance to the ground, and of his wife and children far below him, deep in the lowest seams of the building like the exploited women and child labourers of the nineteenth century. Watching television during their sexual acts in her chintz-lined bedroom, he felt as if he were high over the city in a lavish executive airliner fitted with boudoir and cocktail bar. Their conversations, even their diction and vocabulary, had become as stylized as those of strangers in adjacent aircraft seats. The actress walked to the private entrance of the upper-floor elevator lobby, picking her way casually through the broken bottles and empty cans. A single journey to her apartment would carry him, like a ladder in a board game, virtually to the top of the high-rise with one throw of the dice. Helen was putting the boys to bed. She had moved the wardrobe and dressing-table around their beds, in an attempt to shield them from the noise and disturbances which the night would bring. 'Richard...? Are you going...?' As she spoke she emerged briefly from the deep well inside herself, aware for these few seconds that she and her sons were about to be left on their own. Wilder waited for this moment of lucidity to pass, knowing that it would be impossible to describe his self-imposed mission to Helen. She sat silently on her bed, a hand resting on the pile of children's books, watching him in the mirror with an unchanging expression as he stepped into the corridor. Wilder soon found that it was more difficult than he had assumed to climb to the 37th floor. The five top-floor elevators were either out of order or had been taken to the upper levels and parked there with their doors jammed open. The 2nd-floor lobby was crowded with Wilder's neighbours, some in office suits, others in beach wear, arguing with each other like disgruntled tourists caught by a currency crisis. 
Wilder pushed through them to the staircase, and began the long climb to the 10th floor, where he stood a better chance of finding an ascending elevator. When he reached the 5th floor he met the dozen members of the airline pilots' raiding party returning from another of their abortive missions. Angry and shaken, they shouted at the people jeering down at them from the stairwell above. The entrance to the 10th-floor concourse had been blocked by desks and chairs taken from the junior school and flung down the stairs. The raiding party, made up of parents of the children attending the school, had tried to replace the desks, harassed by residents from the middle floors waiting impatiently for the liquor store to be re-stocked. Wilder pressed on past them. By the time he reached the 10th floor the opposing group had moved off in a posse. Wilder stepped over the broken desks lying on the steps, pencils and crayons scattered around them. Wishing that he had brought his camera with him, he noticed two 18th-floor residents, a chemical engineer and a personnel manager, standing by the door. Each had a cine-camera and was carefully filming the scene below, following Wilder as he climbed towards them. Leaving them to complete these dubious private newsreels, Wilder pushed back the swing doors, and looked out at the deck of the shopping mall. Hundreds of residents jostled against each other, pulling and shoving among the wine-bins and shelves of detergent packs, wire trollies locked together in a mesh of chromium wire. Voices rose in anger above the singing of the cash registers. Meanwhile, as these scuffles took place, a line of women customers sat under the driers in the hairdressing salon, calmly reading their magazines. The two cashiers on evening duty at the bank impassively counted out their bank-notes. Giving up any attempt to cross the concourse, Wilder turned into the deserted swimming-pool. 
The water level was down by at least six inches, as if someone had been stealing the yellowing fluid. Wilder walked around the pool. An empty wine bottle floated in the centre, surrounded by a swill of cigarette packs and unravelling cigar butts. Below the diving-boards a newspaper hung slackly in the water, its wavering headline like a message from another world. In the 10th-floor lobby a crowd of residents pressed impatiently against the elevator doors, their arms laden with liquor cartons and delicatessen purchases, raw materials for the aggressive parties of that evening. Wilder returned to the staircase. Somewhere above him these passengers would step out of their elevators and give him a chance to get aboard. He climbed the steps two at a time. The staircase was deserted – the higher up the building, the more reluctant were the residents to use the stairs, as if this in some way demeaned them. As he pressed on upwards Wilder peered through the windows at the car-park sinking from view below. The distant arm of the river stretched towards the darkening outline of the city, a signpost pointing towards a forgotten world. As he turned into the final stretch of steps to the 14th floor, picking his way among the discarded cans and cigarette packs, something moved above his head. Wilder paused and looked up, his lungs pumping in the silence. A kitchen chair whirled through the air towards his head, hurled down by an assailant three floors above. Wilder flinched back as the steel chair struck the railing, glancing against his right arm before spinning away. Wilder crouched against the steps, shielding himself below the overhang of the next floor. He massaged his bruised arm. At least three or four people were waiting for him, ostentatiously tapping their clubs on the metal railing. Fists clenching, Wilder searched the steps for a weapon. Danger in the streets of the sky – his first impulse was to rush the stairs and counter-attack.
With his powerful physique he knew that he could put to flight any three residents of the high-rise, these under-exercised and over-weight account executives and corporation lawyers egged on into this well-bred violence by their pushy wives. However, he calmed himself, deciding against a frontal attack – he would reach the top of the high-rise, but by guile rather than by brute force. He moved down to the 13th-floor landing. Through the walls of the elevator shaft he could hear the rails and cables humming. Passengers were stepping out of the elevators on to their floors. But the doors into the 13th-floor lobby had been bolted. A face frowned out at him, a well-groomed hand curtly waved him away. All the way down to the 10th floor the communicating doors had been locked or barricaded. Frustrated, Wilder returned to the shopping mall. A large crowd was still waiting by the elevators. They formed clearly demarked groups from different floors, each commandeering its own transit system. Wilder left them and strode towards the supermarket. The shelves had been stripped, and the staff had left after locking the turnstiles. Wilder vaulted over a check-out counter and made his way to the store-room at the rear. Beyond the pyramids of empty cartons was one of the three service cores of the high-rise, containing a freight elevator, and the water, air-conditioning and electrical supply trunks. Wilder waited as the elevator descended cumbrously down its shaft. The size of a carrier's aircraft lift, it had been designed to carry kitchen-appliance islands, bathroom units, and the huge pop-art and abstract-expressionist paintings favoured by the residents of the high-rise. As he pulled back the steel grille he noticed a thin-shouldered young woman hiding behind the control panel. She was pallid and undernourished, but she watched Wilder with interest, as if glad to welcome him to this private domain. 'How far do you want to go?' she asked him. 'We can travel anywhere. 
I'll ride with you.' Wilder recognized her as a masseuse from the 5th floor, one of the vagrants who spent their time wandering around the high-rise, the denizens of an interior world who formed a second invisible population. 'All right – what about the 35th floor?' 'The people on the 30th are nicer.' Expertly she pressed the control buttons, activating the heavy doors. Within seconds the elevator was carrying them ponderously aloft. The young masseuse smiled at him encouragingly, alive now that they were moving. 'If you want to go higher, I'll show you. There are a lot of air-shafts, you know. The trouble is, dogs have got into them – they're getting hungry...' An hour later, when Wilder stepped out into the lavishly carpeted lobby of the 37th floor, he realized that he had discovered a second building inside the one that he had originally occupied. He left behind the young masseuse, endlessly climbing the service shafts and freight wells of the high-rise, transits that externalized an odyssey taking place inside her head. During his roundabout route with her – changing to a second freight elevator to climb three floors to the 28th, moving up and down a maze of corridors on the borders of hostile enclaves, until finally taking an upper-level elevator a journey of one storey – Wilder had seen the way in which the middle and upper levels of the building had organized themselves. While his neighbours on the lower floors remained a confused rabble united only by their sense of impotence, here everyone had joined a local group of thirty adjacent apartments, informal clans spanning two or three floors based on the architecture of corridors, lobbies and elevators. There were now some twenty of these groups, each of which had formed local alliances with those on either side. There was a marked increase in vigilante activity of all kinds. Barriers were being set up, fire-doors locked, garbage thrown down the stairwells or dumped on rival landings. 
On the 29th floor Wilder came across a commune composed exclusively of women, a cluster of apartments dominated by an elderly children's-story writer, a woman of intimidating physique and personality. Sharing an apartment with her were three air-hostesses from the 1st floor. Wilder walked gingerly down the corridor between their apartments, glad of the company of the young masseuse. What unsettled Wilder, as the women questioned him in pairs from their half-open doors, was their hostility to him, not only because he was a man, but because he was so obviously trying to climb to a level above their own. He stepped out with relief into the deserted lobby of the 37th floor. He stood by the staircase doors, suspicious that no one was guarding the lobby. Conceivably the residents here were unaware of what was going on beneath their feet. The carpets in the silent corridors were thick enough to insulate them from hell itself. He walked down the corridor towards Jane Sheridan's apartment. She might be surprised to see him, but Wilder was confident that he would spend the night with her. The next day he would move in permanently, and visit Helen and the boys on his way to and from the television studios. As he pressed the bell he could hear her strong, masculine voice through the door, its tone familiar from countless television costume-dramas. At last the door opened, held on its latch chain. When she looked out at Wilder, recognizing him immediately, he knew that she had been waiting for him to arrive. She was detached and uneasy at the same time, like a spectator forced to watch someone about to be involved in an accident. Wilder remembered that he had given his destination to one of the women's vigilante groups. 'Jane, you're expecting me. I'm flattered.' 'Wilder...I can't—' Before Wilder could speak the door of the next apartment opened sharply. 
Staring at Wilder with undisguised hostility were a tax specialist from the 40th floor and an over-muscled choreographer with whom Wilder had often heaved a medicine ball in the 10th-floor gymnasium. Realizing that his arrival had been anticipated by all these people, Wilder turned to leave, but the corridor behind him was blocked. A group of six residents had emerged together from the elevator lobby. They wore track suits and white sneakers, and at first sight looked like a middle-aged gymnasium dumb-bell team, each carrying his polished wooden clubs. Leading this antique but spritely troupe, which consisted of a stockbroker, two paediatricians and three senior academics, was Anthony Royal. As usual he wore his white safari-jacket, a costume which always irritated Wilder, the kind of garment that might be affected by an eccentric camp-commander or zoo-keeper. The corridor lighting flushed his blond hair and picked out the scars on his forehead, a confusing notation that hung like a series of mocking question marks over his stern expression. As he approached Wilder the chromium walking-stick flicked in his hand like a cane. Wilder watched the polished shaft catch the light, looking forward with pleasure to wrapping it around Royal's neck. Although well aware that he had been trapped, Wilder found himself laughing aloud at the sight of this lunatic troupe. When the lights failed, first dipping warningly and then going out altogether, he backed against the wall to allow the group to pass. The wooden clubs clicked around him in the darkness, beating out a well-rehearsed tattoo. From the open door of Jane Sheridan's apartment a torch flared at him. Around Wilder the dumb-bell troupe was beginning its act. The first clubs whirled in the torch-light. Without any warning, he felt a flurry of blows on his shoulders. Before he fell Wilder seized one of the clubs, but the others struck him to the carpeted floor at Anthony Royal's feet. 
When he woke he was lying outstretched on a sofa in the ground-floor entrance lobby. Fluorescent lights shone around him, reflected in the glass ceiling-panels. With their toneless glow they seemed to have been shining for ever somewhere inside his head. Two residents returning late to the high-rise waited by the elevators. Holding tightly to their briefcases, they ignored Wilder, whom they clearly assumed to be drunk. Aware of his bruised shoulders, Wilder reached up and nursed the swollen mastoid bone behind his right ear. When he could stand, he wandered away from the sofa towards the entrance and steadied himself against the glass doors. The lines of parked cars stretched through the darkness, enough transport to evacuate him to a thousand and one destinations. He walked out into the cold night air. Holding his neck, he looked up at the face of the high-rise. He could almost pick out the lights of the 37th floor. He felt suddenly exhausted, as much by the building's weight and mass as by his own failure. His casual and unthought-out attempt to scale the building had ended humiliatingly. In a sense he had been rejected more by the high-rise than by Royal and his friends. Lowering his eyes from the roof, he saw that his wife, fifty feet above him, was watching from the balcony of their apartment. Despite his dishevelled clothes and bruised face she showed no concern, as if she no longer recognized him.

# 7 Preparations for Departure

High above, on the 40th floor, the first two residents were preparing to leave. All day Anthony Royal and his wife had been packing. After lunch in the deserted restaurant on the 35th floor they returned to their apartment, where Royal spent what he knew would be his last hours in the high-rise closing down his design studio. In no hurry to leave, now that the moment had come for them to abandon the building, Royal deliberately took his time over this last ritual task.
The air-conditioning had ceased to function, and the absence of its vague familiar hum – once a source of minor irritation – made Royal restless. However reluctantly, he was now forced to recognize what he had been trying to repress for the past month, despite the evidence of his eyes. This huge building he had helped to design was moribund, its vital functions fading one by one – the water-pressure falling as the pumps faltered, the electrical sub-stations on each floor switching themselves off, the elevators stranded in their shafts. As if in sympathy, the old injuries to his legs and back had begun to keen again. Royal leaned against his drawing-stand, feeling the pain radiate upwards from his knees into his groin. Gripping the chromium cane, he left the studio and moved among the tables and armchairs in the drawing-room, each shrouded in its dust-sheet. In the year since his accident he had found that constant exercise alone held back the pain, and he missed the games of squash with Robert Laing. Like his own physicians, Laing had told him that the injuries sustained in car-crashes took a great deal of time to heal, but Royal recently had begun to suspect that these wounds were playing a devious role of their own. The three suitcases he had packed that morning stood ready in the hall. Royal stared down at them, for a moment hoping that they belonged to someone else. The cases had never been used, and the prominent part they would soon play in his personal Dunkirk only rubbed in the humiliation. Royal returned to the studio and continued to take down the architectural drawings and design studies pinned to the walls. This small office in a converted bedroom he had used for his work on the development project, and the collection of books and blueprints, photographs and drawing-boards, originally intended to give a sense of purpose to his convalescence, had soon become a kind of private museum. 
The majority of the plans and design studies had been superseded by his colleagues after the accident, but in a strange way these old frontal elevations of the concert-hall and television studios, like the photograph of himself standing on the roof of the high-rise on hand-over day, described a more real world than the building which he was now about to abandon. The decision to leave their apartment, already postponed for too long, had been difficult to take. For all his professional identification with the high-rise as one of its architects, Royal's contribution had been minor, but sadly for him had concerned those very sections which had borne the brunt of the residents' hostility – the 10th-floor concourse, the junior school, the observation roof with its children's sculpture-garden, and the furnishing and design of the elevator lobbies. Royal had gone to immense care in the choice of wall surfaces, now covered by thousands of aerosolled obscenities. It was stupid of him, perhaps, but it was difficult not to take them personally, particularly as he was only too aware of his neighbours' hostility towards him – the chromium cane and white Alsatian were no longer theatrical props. In principle, the mutiny of these well-to-do professional people against the building they had collectively purchased was no different from the dozens of well-documented revolts by working-class tenants against municipal tower-blocks that had taken place at frequent intervals during the post-war years. But once again Royal had found himself reacting personally to these acts of vandalism. The breakdown of the building as a social structure was a rebellion against himself, so much so that in the early days after the jeweller's unexplained death he expected to be physically attacked. Later, however, the collapse of the high-rise began to strengthen his will to win through. The testing of the building he had helped to design was a testing of himself. 
Above all, he became aware that a new social order was beginning to emerge around him. Royal was certain that a rigid hierarchy of some kind was the key to the elusive success of these huge buildings. As he often pointed out to Anne, office blocks containing as many as thirty thousand workers functioned smoothly for decades thanks to a social hierarchy as rigid and as formalized as an anthill's, with an incidence of crime, social unrest, and petty misdemeanours that was virtually nil. The confused but unmistakable emergence of this new social order – apparently based on small tribal enclaves – fascinated Royal. To begin with, he had been determined to stay on, come what may and whatever the hostility directed against him, in the hope of acting as its midwife. In fact, this alone had stopped him from notifying his former colleagues of the mounting chaos within the building. As he told himself repeatedly, the present breakdown of the high-rise might well mark its success rather than its failure. Without realizing it, he had given these people a means of escaping into a new life, and a pattern of social organization that would become the paradigm of all future high-rise blocks. But these dreams of helping the two thousand residents towards their new Jerusalem meant nothing to Anne. As the air-conditioning and electricity supply began to fail, and it became dangerous to move unaccompanied around the building, she told Royal that they were leaving. Playing on Royal's concern for her, and his own feelings of guilt about the breakdown of the high-rise, she soon persuaded him that they must go. Curious to see how she was getting on with her packing, Royal walked into his wife's bedroom. Two wardrobe trunks, and a selection of small and large suitcases, jewellery boxes and vanity cases lay open on the floor and dressing-table like a luggage store display. Anne was packing, or unpacking, one of the cases in front of the dressing-table mirror. 
Recently, Royal had noticed that she deliberately surrounded herself with mirrors, as if this replication of herself gave her some kind of security. Anne had always taken for granted a naturally deferential world, and the last few weeks, even in the comparative safety of this penthouse apartment, she had found more and more trying. The childlike strains in her character had begun to come out again, as if she was suiting her behaviour to the over-extended Mad-Hatter's tea-party that she had been forced to attend like a reluctant Alice. The journey down to the 35th-floor restaurant had become a daily ordeal, and only the prospect of leaving the apartment building for good had kept her going. She stood up and embraced Royal. As usual, without thinking, she touched the scars on his forehead with her lips, as if trying to read a digest of the twenty-five years that separated them, a key to that part of Royal's life she had never known. As he recovered from the accident, sitting in the windows of the penthouse or exercising on the callisthenics machine, he had noticed how much his wounds had intrigued her. 'What a mess.' She gazed down hopefully at the jumble of suitcases. 'I'll be about an hour – have you called the taxi?' 'We'll need at least two. They refuse to wait now – there's no point in calling them until we're on the doorstep.' Both their own cars, parked in the line nearest the building, had been damaged by the tenants below, their windscreens knocked out by falling bottles. Anne returned to her packing. 'The important thing is that we're going. We should have left a month ago when I wanted to. Why anyone stays on here I can't imagine.' 'Anne, we're _leaving_...' 'At last – and why has no one called the police? Or complained to the owners?' 'We are the owners.' Royal turned his head away from her, his smile of affection stiffening. Through the windows he watched the light fading across the curtain-walling of the nearby high-rises. 
Inevitably, he had always taken Anne's criticisms as a comment on himself. As Royal knew now, his young wife would never be happy in the special atmosphere of the high-rise. The only daughter of a provincial industrialist, she had been brought up in the insulated world of a large country house, a finicky copy of a Loire chateau maintained by a staff of servants in the fullblown nineteenth-century manner. In the apartment building, by contrast, the servants who waited on her were an invisible army of thermostats and humidity sensors, computerized elevator route-switches and over-riders, playing their parts in a far more sophisticated and abstract version of the master-servant relationship. However, in Anne's world it was not only necessary for work to be done, but be seen to be done. The steady breakdown of the building's services, and the confrontation between the rival groups of tenants, had been too much for her, playing on her huge sense of insecurity, all her long-ingrained upper-class uncertainties about maintaining her superior place in the world. The present troubles in the apartment block had exposed these mercilessly. When he had first met her, Royal had taken for granted her absolute self-confidence, but in fact the reverse was true – far from being sure of herself, Anne needed constantly to reestablish her position on the top rung of the ladder. By comparison, the professional people around her, who had achieved everything as a result of their own talents, were models of self-assurance. When they first moved into the high-rise as its first tenants, they had both intended the apartment to be no more than a _pied à terre,_ conveniently close to Royal's work on the development project. As soon as they found a house in London they would leave. But Royal noticed that he continued to postpone any decision to move out. He was intrigued by life in this vertical township, and by the kind of people attracted to its smooth functionalism. 
As the first tenant, and owner of the best and highest apartment, he felt himself to be lord of the manor – borrowing a phrase he disliked from Anne's rule book. His sense of physical superiority as a sometime amateur tennis champion – a minor hard-courts title, though no less impressive for that – had inevitably slackened with the passage of years, but in a way had been rekindled by the presence of so many people directly below him, on the shoulders of whose far more modest dwellings his own rested securely. Even after his accident, when he had been forced to sell out his partnership and retreat to a wheelchair in the penthouse, he had felt this sense of renewed physical authority. During the months of convalescence, as his wounds healed and his body grew stronger, each of the new tenants in some way seemed identified with his strengthening muscles and sinews, his quickening reflexes, each one bringing his invisible tribute to Royal's well-being. For Anne, by contrast, the continued flow of new arrivals puzzled and irritated her. She had enjoyed the apartment when they were alone in the high-rise, taking it for granted that no one else would appear. She rode the elevators as if they were the grandly upholstered gondolas of a private funicular, swam alone in the undisturbed waters of the two swimming-pools, and strolled about the shopping concourse as if visiting her own personal bank, hairdresser and supermarket. By the time that the last of the two thousand residents had appeared and taken their place below, Anne was impatient to move. But Royal was drawn to his new neighbours, exemplars beyond anything he had previously imagined of the puritan work ethic. In turn, he knew from Anne that his neighbours found him a puzzling and aloof figure, an automobile-crash casualty in his wheelchair living on the roof of the high-rise in a casual ménage with a rich young wife half his age whom he was happy to see taken out by other men. 
Despite this symbolic emasculation, Royal was still regarded in some way as having the key to the building. His scarred forehead and chromium cane, the white jacket which he affected and wore like a target, together seemed to be the elements of a code that concealed the real relationship between the architect of this huge building and its uneasy tenants. Even Anne's always imminent promiscuities were part of this same system of ironies, appealing to Royal's liking for the 'game' situation where one could risk everything and lose nothing. The effect of all this on his neighbours interested Royal, and particularly on those mavericks such as Richard Wilder, who would set out to climb Everest equipped with nothing more than a sense of irritation that the mountain was larger than himself, or Dr Laing, staring out all day from his balcony under the fond impression that he was totally detached from the high-rise, when in fact he was probably its most true tenant. At least Laing knew his place and kept to it; three nights earlier they had been forced to give Wilder a short sharp lesson. Thinking about Wilder's intrusion – only one in a series of attempts by people below to break into the top-floor apartments – Royal left the bedroom and checked the bolts on the front door. Anne waited while he stood in the deserted corridor. There was a continuous sullen murmur from the lower levels carried up the elevator shafts. She pointed to Royal's three suitcases. 'Is that all you're taking?' 'For the time being. I'll come back for anything else.' 'Come back? Why should you want to? Perhaps you'd rather stay?' To himself, rather than to his wife, Royal remarked, 'First to arrive, last to leave...' 'Is that a joke?' 'Of _course_ not.' Anne placed a hand on his chest, as if searching for an old wound. 'It's really all over, you know. I hate to say it, but this place hasn't worked.' 'Perhaps not...' Royal took her commiseration with a strong dose of salt. 
Without realizing it, Anne often played on his sense of failure, frightened by Royal's new resolve to prove himself, this conviction that the building might succeed after all. In addition, their neighbours had accepted him a little too readily as their leader. His partnership in the consortium had been largely paid for by the commissions her father had steered his way, a fact Anne had never let him forget, not to humble Royal so much as to prove her own value to him. The point was made, though. He had come up in the world, all right, in too many senses of the term. In an insane way, his accident might have been an attempt to break out of the trap. But all this belonged to the past now. As Royal knew, they were leaving just in time. During the last few days life in the high-rise had become impossible. For the first time the top-floor residents were directly involved. The erosion of everything continued, a slow psychological avalanche that was carrying them downwards. Superficially, life in the apartment building was normal enough – most of the residents left for their offices each day, the supermarket was still open, the bank and hairdressing salon functioned as usual. None the less, the real internal atmosphere was that of three uneasily coexisting armed camps. A complete hardening of positions had taken place, and there was now almost no contact between the upper, middle and lower groups. During the early part of the day it was possible to move freely around the building, but as the afternoon proceeded this became increasingly difficult. By dusk any movement was impossible. The bank and supermarket closed at three o'clock. The junior school had moved from its vandalized classrooms to two apartments on the 7th floor. Few children were ever seen above the 10th floor, let alone in the sculpture-garden on the roof which Royal had designed for them with so much care. The 10th-floor swimming-pool was a half-empty pit of yellowing water and floating debris.
One of the squash courts had been locked, and the other three were filled with garbage and broken classroom furniture. Of the twenty elevators in the building, three were permanently out of order, and by evening the remainder had become the private transit lines of the rival groups who could seize them. Five floors were without electricity. At night the dark bands stretched across the face of the high-rise like dead strata in a fading brain. Fortunately for Royal and his neighbours, conditions in the upper section of the building had yet to decline so steeply. The restaurant had discontinued its evening service, but a limited luncheon was available each day during the few hours when the small staff could freely enter and leave. However, the two waiters had already gone, and Royal guessed that the chef and his wife would soon follow. The swimming-pool on the 35th floor was usable, but the level had fallen, and the water supply, like that to their own apartment, was dependent on the vagaries of the roof tanks and electric pumps. From the drawing-room windows Royal looked down into the parking-lot. Many of the cars had not been moved for weeks – windscreens broken by falling bottles, cabins filled with garbage, they sat on flattening tyres, surrounded by a sea of rubbish that spread outwards around the building like an enlarging stain. This visible index of the block's decline at the same time measured the extent to which its tenants accepted this process of erosion. At times Royal suspected that his neighbours unconsciously hoped that everything would decline even further. Royal had noticed that the manager's office was no longer besieged by indignant residents. Even his own top-floor neighbours, who in the early days had been only too quick to complain about everything, now never criticized the building. 
In the absence of the manager – still lying in a state of mental collapse in his ground-floor apartment – his dwindling staff of two (the wives of a dubbing-mixer on the 2nd floor and a first violinist on the 3rd) sat stoically at their desks in the entrance lobby, oblivious of the deterioration going on apace over their heads. What interested Royal was the way in which the residents had become exaggeratedly crude in their response to the apartment building, deliberately abusing the elevators and air-conditioning systems, over-straining the power supply. This carelessness about their own convenience reflected a shuffling of mental priorities, and perhaps the emergence of the new social and psychological order for which Royal was waiting. He remembered the attack on Wilder, who had laughed happily as the group of paediatricians and academics had flailed away at him with their dumb-bells like a troupe of demented gymnasts. Royal had found the episode grotesque, but he guessed that in some obscure way Wilder had been glad to be flung half-conscious into an elevator. Royal strolled around the shrouded furniture. He raised his stick and slashed at the stale air with the same stroke he had used against Wilder. At any moment a battalion of police would arrive and cart them all off to the nearest jail. Or would they? What played straight into the residents' hands was the remarkably self-contained nature of the high-rise, a self-administered enclave within the larger private domain of the development project. The manager and his staff, the personnel who manned the supermarket, bank and hairdressing salon, were all residents of the apartment building; the few outsiders had left or been sacked. The engineers who serviced the building did so on instructions from the manager, and clearly none had been issued. They might even have been told to stay away – no garbage-collection vehicle had called for several days, and a large number of the chutes were blocked. 
Despite the growing chaos around them, the residents showed less interest in the external world. Bales of unsorted mail lay about in the ground-floor lobbies. As for the debris scattered around the high-rise, the broken bottles and cans, these were barely noticeable from the ground. Even the damaged cars were to some extent concealed by the piles of building materials, wooden forms and sand-pits that had yet to be cleared away. Besides, as part of that unconscious conspiracy to shut out the external world, no visitors came to the high-rise. He and Anne had invited none of their friends to the apartment for months. Royal watched his wife move about vaguely in her bedroom. Jane Sheridan, Anne's closest friend, had called and was helping her to pack. The two women were transferring a line of evening gowns from the wardrobe racks to the trunks, and at the same time returning unwanted shirts and trousers from the suitcases back to the shelves. For all the activity it was uncertain whether they were packing on the eve of departure or unpacking on arrival. 'Anne – are you coming or going?' Royal asked. 'We hardly stand a chance of making it tonight.' Anne gestured helplessly at the half-filled cases. 'It's the air-conditioning – I can't think.' 'You won't get out now even if you want to,' Jane told her. 'We're marooned here, as far as I can see. All the elevators have been commandeered by other floors.' 'What? Did you hear that?' Anne stared angrily at Royal, as if his faulty design of the elevator lobbies was directly responsible for these acts of piracy. 'All right, we'll leave first thing tomorrow. What about food? The restaurant will be shut.' They had never eaten in the apartment – Anne's gesture of contempt for her neighbours' endless preparation of elaborate meals. The only food in the refrigerator was the dog's. Royal stared at himself in the mirror, adjusting his white jacket. 
In the fading light his reflection had an almost spectral vibrancy, making him look like an illuminated corpse. 'We'll think of something.' A curious answer, he realized, implying that there were other sources of food than the supermarket. He looked down at Jane Sheridan's plump figure. Seeing Royal's subdued expression, she was smiling reassuringly at him. Royal had taken on the task of looking after this amiable young woman since the death of her Afghan. 'The elevators may be free in an hour or so,' he told them. 'We'll go down to the supermarket.' Thinking of the Alsatian – presumably asleep on his bed in the penthouse – he decided to exercise it on the roof. Anne had begun to empty the half-filled suitcases. She seemed barely aware of what she was doing, as if a large part of her mind had been switched off. For all her complaints, she had never telephoned the building manager herself. Perhaps she felt this was beneath her, but nor had she mentioned the smallest criticism to any of their friends in the world beyond the apartment building. Thinking about this, Royal noticed that the plug of her bedside telephone had been pulled from its socket, and the cable neatly wrapped around the receiver. As he walked around the apartment before going to search for the dog, he saw that the three other external telephones, in the hall, drawing-room and kitchen, had also been disconnected. Royal realized why they had received no outside calls during the previous week, and felt a distinct sense of security at knowing that they would receive none in the future. Already he guessed that, for all their expressed intentions, they would not be leaving either the following morning or any other.

# 8 The Predatory Birds

From the open windows of the penthouse Royal watched the huge birds clustering on the elevator heads fifty feet away.
An unfamiliar species of estuarine gull, they had come up the river during the previous months and begun to congregate among the ventilation shafts and water storage tanks, infesting the tunnels of the deserted sculpture-garden. During his convalescence he had watched them arrive as he sat in his wheelchair on the private terrace. Later, when the callisthenics machine had been installed, the birds would hobble around the terrace while he exercised. In some way they were attracted by Royal's white jacket and pale hair, so close in tone to their own vivid plumage. Perhaps they identified him as one of their own, a crippled old albatross who had taken refuge on this remote roof-top beside the river? Royal liked this notion and often thought about it. The french windows swung in the early evening air. The Alsatian had escaped, hunting by itself on the five-hundred-feet-long observation deck. Now that the summer had ended few people went up to the roof. The remains of a cocktail-party marquee, bedraggled in the rain, lay in the gutter below the balustrade. The gulls, heavy wings folded, strutted among the cheese sticks scattered around a cardboard carton. The potted palms had been untended for months, and the whole roof increasingly resembled a voracious garden. Royal stepped down on to the roof deck. He enjoyed the hostile gaze of the birds sitting on the elevator heads. The sense of a renascent barbarism hung among the overturned chairs and straggling palms, the discarded pair of diamanté sunglasses from which the jewels had been picked. What attracted the birds to this isolated realm on the roof? As Royal approached, a group of the gulls dived into the air, soaring down to catch the scraps flung from a balcony ten floors below them. 
They fed on the refuse thrown into the car-park, but Royal liked to think that their real motives for taking over the roof were close to his own, and that they had flown here from some archaic landscape, responding to the same image of the sacred violence to come. Fearing that they might leave, he frequently brought them food, as if to convince them that the wait would be worth their while. He pushed back the rusty gates of the sculpture-garden. From the casement of a decorative lantern he took out a box of cereal meal, by rights reserved for the Alsatian. Royal began to scatter the grains among the concrete tunnels and geometric forms of the play-sculptures. Designing the garden had given him particular satisfaction, and he was sorry that the children no longer used the playground. At least it was open to the birds. The gulls followed him eagerly, their strong wings almost knocking the cereal box from his hands. Leaning on his stick, Royal swung himself around the pools of water on the concrete floor. He had always wanted his own zoo, with half a dozen large cats and, more important, an immense aviary stocked with every species of bird. Over the years he had sketched many designs for the zoo, one of them – ironically – a high-rise structure, where the birds would be free to move about in those sections of the sky that were their true home. Zoos, and the architecture of large structures, had always been Royal's particular interest. The drenched body of a Siamese cat lay in the gutter where the birds had cornered it – the small beast had climbed all the way up a ventilation shaft from the warm comfort of an apartment far below, embracing the daylight for a few last seconds before the birds destroyed it. Next to the cat was the carcass of a dead gull. Royal picked it up, surprised by its weight, stepped forward and with a powerful running throw hurled the bird far out into the air. 
It plummeted towards the ground, in an almost unending downward plunge, until it burst like a white bomb across the bonnet of a parked car. No one had seen him, but Royal would not have cared anyway. For all his keen interest in his neighbours' behaviour, he found it difficult not to look down on them. The five years of his marriage to Anne had given him a new set of prejudices. Reluctantly, he knew that he despised his fellow residents for the way in which they fitted so willingly into their appointed slots in the apartment building, for their over-developed sense of responsibility, and lack of flamboyance. Above all, he looked down on them for their good taste. The building was a monument to good taste, to the well-designed kitchen, to sophisticated utensils and fabrics, to elegant and never ostentatious furnishings – in short, to that whole aesthetic sensibility which these well-educated professional people had inherited from all the schools of industrial design, all the award-winning schemes of interior decoration institutionalized by the last quarter of the twentieth century. Royal detested this orthodoxy of the intelligent. Visiting his neighbours' apartments, he would find himself physically repelled by the contours of an award-winning coffee-pot, by the well-modulated colour schemes, by the good taste and intelligence that, Midas-like, had transformed everything in these apartments into an ideal marriage of function and design. In a sense, these people were the vanguard of a well-to-do and well-educated proletariat of the future, boxed up in these expensive apartments with their elegant furniture and intelligent sensibilities, and no possibility of escape. Royal would have given anything for one vulgar mantelpiece ornament, one less than snow-white lavatory bowl, one hint of hope. Thank God that they were at last breaking out of this fur-lined prison. On either side of him, the rain-soaked concrete stretched away into the evening mist. 
There were no signs of the white Alsatian. Royal had reached the centre of the roof. The gulls sat on the ventilation shafts and elevator heads, watching him with their unusually alert eyes. Thinking that they might already have dined off the dog, Royal kicked aside an overturned chair and set off towards the stairhead, calling out the Alsatian's name. Ten feet from the private terrace at the southern end of the roof, a middle-aged woman in a long fur coat stood by the balustrade. Shivering continuously, she stared out across the development project at the silver back of the river. A trio of lighters followed a tug upstream, and a police patrol boat cruised along the north bank. As Royal approached he recognized the widow of the dead jeweller. Was she waiting for the police to arrive, in some perverse way too proud to call them herself? He was about to ask if she had seen the Alsatian, but he knew already that she would not reply. Her face was immaculately made up, but an expression of extreme hostility came through the rouge and powder, a gaze as hard as pain. Royal held tight to his cane. The woman's hands were hidden from sight, and he almost believed that inside the coat her jewelled fingers held a pair of unsheathed knives. For some reason he was suddenly convinced that she had been responsible for her husband's death, and that at any moment she would seize him and wrestle him over the ledge. At the same time, to his surprise, he found himself wanting to touch her, to put his arm around her shoulders. Some kind of wayward sexuality was at work. For a grotesque moment he was tempted to expose himself to her. 'I'm looking for Anne's Alsatian,' he said lamely. When she made no reply he added, 'We've decided to stay on.' Confused by his response to this grieving woman, Royal turned away and made his way down the staircase to the floor below. Despite the pain in his legs he walked swiftly along the corridor, striking at the walls with his cane. 
When he reached the central lobby the sounds of the Alsatian's frantic barking rose clearly up the nearest of the five high-speed elevator shafts. Royal pressed his head to the door panel. The elevator car, with the Alsatian snarling and leaping inside it, was on the 15th floor, its doors jammed open. Royal could hear the heavy blows of a metal club striking at the floor and walls, and the shouts of three attackers – one of them a woman – as they beat the animal to the floor. When the dog's yelping subsided, the elevator at last responded to the call button. The car climbed to the top floor, where the doors opened on the barely conscious dog dragging itself around the bloodied floor. The animal's head and shoulders were heavy with blood. Matted hair streaked the walls of the cabin. Royal tried to reassure it, but the Alsatian snapped at his hand, frightened of the stick. Several of his neighbours gathered around, carrying an assortment of weapons – tennis rackets, dumb-bells and walking sticks. They were beckoned aside by a friend of Royal's, a gynaecologist named Pangbourne who lived in the apartment next to the lobby. A swimming partner of Anne's, he often played with the dog on the roof. 'Let me have a look at him...Poor devil, those savages have abused you...' Deftly he insinuated himself into the elevator and began to soothe the dog. 'We'll get him back to your apartment, Royal. Then I suggest we discuss the elevator position.' Pangbourne knelt down on the floor, whistling a strange series of sounds at the dog. For some weeks the gynaecologist had been urging Royal to interfere with the building's electrical switching systems, as a means of retaliating against the lower floors. This supposed power over the high-rise was the chief source of Royal's authority with his neighbours, though he suspected that Pangbourne for one was well aware that he would never make use of it.
With his soft hands and consulting-room manner the gynaecologist unsettled Royal slightly, as if he were always just about to ease an unwary patient into a compromising obstetric position – in fact, though, Pangbourne belonged to the new generation of gynaecologists who never actually touched their patients, let alone delivered a child. His speciality was the computerized analysis of recorded birth-cries, from which he could diagnose an infinity of complaints to come. He played with these tapes like an earlier generation of sorcerer examining the patterns of entrails. Characteristically, Pangbourne's one affair in the high-rise had been with a laboratory researcher on the 2nd floor, a slim, silent brunette who probably spent all her time tormenting small mammals. He had broken this off soon after the outbreak of hostilities. None the less, he had a way with the injured Alsatian. Royal waited while he calmed the dog and examined its wounds. He held its muzzle in his white hands as if he had just freed the poor beast from its caul. Together, he and Royal half-carried and half-dragged the dog back to Royal's apartment. Fortunately, Anne and Jane Sheridan had left for the 10th-floor supermarket, picking up the one elevator released for general traffic. Pangbourne settled the dog on the dust-sheet covering one of the sofas. 'I'm glad you were here,' Royal told him. 'You're not at your practice?' Pangbourne stroked the Alsatian's swollen head, his white hands delicate with blood. 'I attend my consultancy two mornings a week, just enough time for me to listen to the latest recordings. Otherwise I'm on guard duty here.' He peered pointedly at Royal. 'If I were you, I'd keep a closer eye on Anne – unless you want her to be...' 'Sound advice. You've never thought of leaving? The conditions now...' The gynaecologist frowned at Royal as if unsure whether he was serious. 'I've only just moved here. Why should I concede anything to these people?' 
He pointed expressively at the floor with a bloodstained finger. Impressed by the determination of this refined and punctilious man to defend his terrain, Royal followed him to the door, thanking him for his help and promising to discuss with him the sabotage of the elevators. For the next half an hour Royal cleaned the wounds of the Alsatian. Although the dog began to sleep, the bloodstains on the white dust-sheet made Royal feel increasingly restless. The assault had released in him a more than half-conscious wish for conflict. To date he had been a moderating influence, restraining his neighbours from any unnecessary retaliatory action. Now he wanted trouble at any price. Somewhere below a falling bottle burst on a balcony, a brief explosion against the rising background of over-noisy record-players, shouts and hammering. The light in the apartment had begun to fade, the shrouded furniture suspended around him like under-inflated clouds. The afternoon had passed, and soon the danger period would begin. Thinking of Anne trying to make her way back from the 10th floor, Royal turned to leave the apartment. By the door he stopped, holding one hand over the dial of his wrist-watch. His concern for Anne was as strong as ever – if anything he felt more possessive towards her – but he decided to let another half-hour elapse before he went in search of her. Perversely, this would increase the element of danger, the chance of confrontation. He walked calmly around the apartment, noting the telephones on the floor and the neatly wrapped cables. Even if she were trapped somewhere, Anne would be unable to call him. While he waited for the darkness, Royal went up to the penthouse and watched the gulls on the elevator heads. In the evening light their plumage was a vibrant white. Like birds at dusk waiting among the cornices of a mausoleum, they flicked their wings against the bone-like concrete. As if agitated by Royal's confused state, they rose excitedly into the air. 
Royal was thinking of his wife, of the possible assaults on her, an almost sexual fever of hazard and revenge tightening his nerves. In another twenty minutes he would leave the apartment and make his killing drop down the shafts of the high-rise, murder descending. He wished he could take the birds with him. He could see them diving down the elevator shafts, spiralling through the stairwells to swoop into the corridors. He watched them wheel through the air, listening to their cries as he thought of the violence to come. # 9 Into the Drop Zone At seven o'clock Anthony Royal set out with the white Alsatian to find his wife. The dog had recovered sufficiently from its beating to limp along in front of him. Its damp pelt was marked with a vivid crimson bloom. Like the blood-stains on his white jacket, Royal was proud of these signs of combat. As if mimicking the dog, he wore its blood on his chest and hips, the insignia of an executioner's apparel yet to be designed. He began his descent into the lower depths of the building in the high-speed elevator lobby. A group of excited neighbours had just emerged from one of the cars. Four floors down, an apartment had been ransacked by a party of tenants from the 15th floor. These sporadic raids on apartments were taking place with increasing frequency. Empty apartments, even if left for no more than a single day, were especially vulnerable. Some unconscious system of communication alerted any would-be raiders that an apartment a dozen floors above or below was ripe for ransack. With difficulty Royal found an elevator to take him down to the 35th floor. The restaurant had closed. After serving a last lunch to the Royals the chef and his wife had left for good. Chairs and tables had been stacked around the kitchen in a barricade, and the revolving door was padlocked. The long observation windows, with their magnificent view, were shuttered and chained, throwing the north end of the pool into darkness. 
The last swimmer, a market analyst from the 38th floor, was leaving the swimming-pool. His wife waited protectively outside his cubicle as he changed. She watched the Alsatian lapping at the water lying on the greasy tiles by the diving-board. When the dog relieved itself against the door of an empty cubicle her face was expressionless. Royal felt a modest pride in this act, which rekindled a primitive territorial reflex. The marking of this cubicle with the dog's over-bright urine defined the small terrain coming under his sway. For the next hour Royal continued his search for his wife, descending deeper into the central mass of the high-rise. As he moved from one floor to the next, from one elevator to another, he realized the full extent of its deterioration. The residents' rebellion against the apartment building was now in full swing. Garbage lay heaped around the jammed disposal chutes. The stairways were littered with broken glass, splintered kitchen chairs and sections of handrail. Even more significant, the pay-phones in the elevator lobbies had been ripped out, as if the tenants, like Anne and himself, had agreed to shut off any contact with the world outside. The further down Royal reached, the greater the damage. Fire safety doors leaned off their hinges, quartz inspection windows punched out. Few corridor and staircase lights still worked, and no effort had been made to replace the broken bulbs. By eight o'clock little light reached the corridors, which became dim tunnels strewn with garbage sacks. The lurid outlines of lettered slogans, aerosolled in luminous paint across the walls, unravelled around him like the decor of a nightmare. Rival groups of residents stood around in the lobbies, guarding their elevators and watching each other along the corridors. Many of the women had portable radios slung from their shoulders, which they switched from station to station as if tuning up for an acoustic war. 
Others carried cameras and flash equipment, ready to record any acts of hostility, any incursions into their territory. By changing elevators and making journeys of two floors at a time, Royal finally descended into the lower half of the apartment building. He was unmolested by the other residents, who watched him as he entered their lobbies, moving out of his way as he strolled past. The wounded Alsatian and Royal's bloodstained jacket gave him free passage through these rival clans, as if he were a betrayed landowner descending from his keep to parade his wounds among his rebellious tenants. By the time he reached the 10th floor the concourse was almost deserted. A few residents wandered around the shopping mall, staring at the empty chromium counters. The bank and liquor store were closed, their grilles chained. There was no sign of Anne. Royal led the Alsatian through the swing doors into the swimming-pool, now barely half full. The yellow water was filled with debris, the floor at the shallow end emerging like a beach in a garbage lagoon. A mattress floated among the bottles, surrounded by a swill of cardboard cartons and newspapers. Even a corpse would go unnoticed here, Royal reflected. As the Alsatian snuffled its way along the vandalized changing cubicles, Royal waved his cane at the humid air, trying to stir it into life. He would soon suffocate here in the lower section of the apartment building. During even this brief visit he had felt crushed by the pressure of all the people above him, by the thousands of individual lives, each with its pent-up time and space. From the elevator lobby on the far side of the swimming-pool came the sounds of shouting. Urging on the dog, Royal strode to the rear exit behind the diving-boards. Through the glass doors he watched a heated argument taking place outside the entrance to the junior school. 
Some twenty men and women were involved, one group from the lower floors carrying desks and chairs, a blackboard and artist's easel, the other trying to prevent them from re-occupying the classrooms. Scuffles soon broke out. Egged on by a film-editor wielding a desk over his head, the parents pressed forward determinedly. Their opponents, residents from the 11th and 12th floors, stood their ground, forming a heavy-breathing cordon. A bad-tempered brawl developed, men and women wrestling clumsily with each other. Royal pulled the Alsatian away, deciding to leave this jostling group to settle their own dispute. As he turned to continue his search for Anne, the staircase doors leading into the lobby were flung back. A group of residents, all from the 14th and 15th floors, leapt out and hurled themselves into the mêlée. They were led by Richard Wilder, cine-camera gripped like a battle standard in one hand. Royal assumed that Wilder was filming an episode from the documentary he had been talking about for so long, and had set up the entire scene. But Wilder was in the thick of the fray, aggressively wielding the cine-camera as he urged on his new allies against his former neighbours. The raiding party was shouldered back towards the staircase in disarray, the parents dropping the desks and blackboard. Wilder slammed the staircase doors behind them. Expelling his sometime neighbours and friends had clearly given him enormous satisfaction. Waving his camera, he pointed to the classroom of the junior school. Two young women, Royal's wife and Jane Sheridan, were crouching behind an overturned desk. Like children caught red-handed in some mischief, they watched Wilder as he beckoned theatrically towards them. Holding the Alsatian on a short leash, Royal pushed back the glass doors. He strode through the residents in the lobby, who were now happily breaking up the children's desks. 'It's all right, Wilder,' he called out in a firm but casual voice. 'I'll take over.' 
He stepped past Wilder and entered the classroom. He lifted Anne to her feet. 'I'll get you out of here – don't worry about Wilder.' 'I'm not...' For all her ordeal, Anne was remarkably unruffled. She gazed at Wilder with evident admiration. 'My God, he's rather insane...' Royal waited for Wilder to attack him. Despite the twenty years between them, he felt calm and self-controlled, ready for the physical confrontation. But Wilder made no attempt to move. He watched Royal with interest, patting one armpit in an almost animal way, as if glad to see Royal here on the lower levels, directly involved at last in the struggle for territory and womenfolk. His shirt was open to the waist, exposing a barrel-like chest that he showed off with some pride. He held the cine-camera against his cheek as if he were visualizing the setting and choreography of a complex duel to be fought at some more convenient time on a stage higher in the building. That night, when they had returned to their apartment on the 40th floor, Royal set about asserting his leadership of the topmost levels of the high-rise. First, while his wife and Jane Sheridan rested together in Anne's bed, Royal attended to the Alsatian. He fed the dog in the kitchen with the last of its food. The wounds on its shoulders and head were as hard as coins. Royal was more aroused by the injuries to the dog than by any indignity suffered by his wife. He had almost made Anne's ordeal certain by deliberately postponing his search for her. As he expected, she and Jane had been unable to find an elevator when they had finished shopping at the supermarket. After being molested in the lobby by a drunken sound-man they had taken refuge in the deserted classroom. 'They're all making their own films down there,' Anne told him, clearly fascinated by her heady experience of the lower orders at work and play. 'Every time someone gets beaten up about ten cameras are shooting away.' 
'They're showing them in the projection theatre,' Jane confirmed. 'Crammed in there together seeing each other's rushes.' 'Except for Wilder. He's waiting for something really gruesome.' Both women turned without thinking to look at Royal, but he took this in his stride. In an obscure way, it was his affection for Anne that had led him to display her to his neighbours below, his contribution to the new realm they would create together. By contrast, the Alsatian belonged to a more practical world. Already he knew that the dog might well prove useful, be more easily bartered than any woman, in the future that lay ahead. He decided not to throw away the bloodstained jacket, glad to wear the dog's blood against his chest. He refused any offers to clean it from the wives of his fellow residents who came in to comfort the two young women. The assaults on the Alsatian, and on Royal's wife, made his apartment a natural focus of his neighbours' decision to regain the initiative before they were trapped on the roof of the high-rise. To Pangbourne he explained that it was vital for them to enlist the support of the tenants living on the floors immediately below the 35th. 'To survive, we need allies as a buffer against any attacks from the lower levels, and also to give us access to more of the elevators. We're in danger of being cut off from the central mass of the building.' 'Right,' the gynaecologist agreed, glad to see that Royal had at last woken up to the realities of their position. 'Once we've gained a foothold there we can play these people off against those lower down – in short balkanize the centre section and then begin the colonization of the entire building...' In retrospect, it surprised Royal how easily they were able to implement these elementary schemes. At nine o'clock, before the evening's parties began, Royal began to enlist the support of the residents below the 35th-floor swimming-pool. Expertly, Pangbourne played on their grievances. 
These people shared many of the problems of the top-floor tenants – their cars had also been damaged, and they had the same struggles with the declining water-supply and air-conditioning. In a calculated gesture, Royal and Pangbourne offered them the use of the top-floor elevators. To reach their apartments they would no longer have to enter the main lobby and run the gauntlet of thirty intervening floors. They would now wait for a top-level tenant to appear, enter the private lobby with him and ride straight to the 35th floor without harassment, and then walk the few steps down to their apartments. The offer was accepted, Royal and Pangbourne deliberately asking for no concessions in return. The deputation returned to the 40th floor, the members dispersing to their apartments to prepare for the evening's festivities. During the previous hour a few trivial incidents had occurred – the middle-aged wife of a 28th-floor account executive had been knocked unconscious into the half-empty swimming-pool, and a radiologist from the 7th floor had been beaten up among the driers in the hairdressing salon – but in general everything within the high-rise was normal. As the night progressed, the sounds of continuous revelry filled the building. Beginning with the lower floors, the parties spread upwards through the apartment block, investing it in an armour of light and festivity. Standing on his balcony, Royal listened to the ascending music and laughter as he waited for the two young women to dress. Far below him, a car drove along the access road to the nearby high-rise, its three occupants looking up at the hundreds of crowded balconies. Anyone seeing this ship of lights would take for granted that the two thousand people on board lived together in a state of corporate euphoria. Invigorated by this tonic atmosphere, Anne and Jane Sheridan had made a rapid recovery. 
Anne no longer referred to their leaving the high-rise, and seemed to have forgotten that she had ever made the decision to go. The rough and tumble in the junior school had given her that previously missing sense of solidarity with the other tenants of the high-rise. In the future, violence would clearly become a valuable form of social cement. As Royal escorted her to the first party of the evening, given by a newspaper columnist on the 37th floor, she and Jane strolled arm in arm, buoyed up by reports of further confrontations, and by the news that two more floors, the 6th and 14th, were now in darkness. Pangbourne congratulated Royal on this, almost as if he believed that Royal was responsible. No one, even on the top floors, seemed aware of the contrast between the well-groomed revellers and the dilapidated state of the building. Along corridors strewn with uncollected garbage, past blocked disposal chutes and vandalized elevators, moved men in well-tailored dinner-jackets. Elegant women lifted long skirts to step over the debris of broken bottles. The scents of expensive after-shave lotions mingled with the aroma of kitchen wastes. These bizarre contrasts pleased Royal, marking the extent to which these civilized and self-possessed professional men and women were moving away from any notion of rational behaviour. He thought of his own confrontation with Wilder, which summed up all the forces in collision within the high-rise. Wilder had obviously begun his ascent of the building again, and had climbed as far as the 15th floor. By rights the high-rise should be totally deserted except for Wilder and himself. The real duel would be resolved among the deserted corridors and abandoned apartments of the building inside their heads, watched only by the birds. Now that she had accepted it, the threat of violence in the air had matured Anne. Standing by the fireplace in the columnist's drawing-room, Royal watched her with affection. 
She was no longer flirting with the elderly businessmen and young entrepreneurs, but listening intently to Dr Pangbourne, as if aware that the gynaecologist might be useful to her in more ways than the purely professional. Despite his pleasure in displaying her to the other residents, Royal felt far more protective of her. This sexual territoriality extended to Jane Sheridan. 'Have you thought about moving in with us?' he asked her. 'Your own apartment is very much exposed.' 'I'd like to – Anne did mention it. I've already brought some things over.' Royal danced with her in the garbage-stacked hallway, openly feeling her strong hips and thighs, as if this inventory established his claim to these portions of her body at a future date. Hours later, at some period after midnight when it seemed to Royal that these parties had been going on for ever, he found himself drunk in an empty apartment on the 39th floor. He was lying back on a settee with Jane against his shoulder, surrounded by tables loaded with dirty glasses and ashtrays, all the debris of a party abandoned by its guests. The music from the balconies nearby was overlaid by the noise of sporadic acts of violence. Somewhere a group of residents was shouting in a desultory way, hammering on the doors of an elevator shaft. A power failure had switched out the lights. Royal lay back in the darkness, steadying his slowly rotating brain against the illumination of the nearby high-rise. Without thinking, he began to caress Jane, stroking her heavy breasts. She made no attempt to pull herself away from him. A few moments later, when the electric power returned, lighting up a single table-lamp lying on the floor of the balcony, she recognized Royal and settled herself across him. Hearing a noise from the kitchen, Royal looked round to see his wife sitting at the table in her long gown, one hand on the electric coffee-percolator as it began to warm. 
Royal put his arms around Jane and embraced her with deliberate slowness, as if repeating for his wife's benefit a slow-motion playback. He knew that Anne could see them, but she sat quietly at the kitchen table, lighting a cigarette. During the sexual act that followed she watched them without speaking, as if she approved, not from any fashionable response to marital infidelity, but from what Royal realized was a sense of tribal solidarity, a complete deference to the clan leader. # 10 The Drained Lake Soon after dawn the next morning, Robert Laing sat on his balcony on the 25th floor, eating a frugal breakfast and listening to the first sounds of activity in the apartments around him. Already a few residents were leaving the building on their way to work, picking their way through the debris underfoot towards their garbage-speckled cars. Several hundred people still left each day for their offices and studios, airports and auction-rooms. Despite the scarcity of water and heating, the men and women were well dressed and groomed, their appearance giving no hint of the events of the previous weeks. However, without realizing it, many of them would spend much of their time at their offices asleep at their desks. Laing ate his slice of bread with methodical slowness. Sitting there on the cracked balcony of tiles, he felt like a poor pilgrim who had set out on a hazardous vertical journey and was performing a simple but meaningful ritual at a wayside shrine. The previous night had brought total chaos – drunken parties, brawls, the looting of empty apartments and assaults on any isolated resident. Several more floors were now in darkness, including the 22nd, where his sister Alice lived. Hardly anyone had slept. Amazingly, few people showed any signs of fatigue, as if the economy of their lives was switching from day to night. 
Laing half-suspected that the insomnia so many of his neighbours had suffered had been some kind of unconscious preparation for the emergency ahead. He himself felt alert and confident – despite the bruises on his shoulders and arms, he was physically in fine trim. At eight o'clock he intended to clean himself up and leave for the medical school. Laing had spent the early part of the night straightening Charlotte Melville's apartment, which had been ransacked by intruders while she and her small son were sheltering with friends. Later, he had helped to guard an elevator which his neighbours had seized for a few hours. Not that they had gone anywhere – having commandeered the elevator what mattered was to hold it for an effective psychological interval. The evening had begun, as usual, with a party held by Paul Crosland, television newsreader and now clan chief. Crosland had been delayed at the studios, but his guests watched him deliver the nine-o'clock news, speaking in his familiar, well-modulated voice about a rush-hour pile-up in which six people had died. As his neighbours stood around the television set, Laing waited for Crosland to refer to the equally calamitous events taking place in the high-rise, the death of the jeweller (now totally forgotten), and the division of the tenants into rival camps. Perhaps, at the end of the newscast, he would add a special message for his clan members at that moment fixing their drinks among the plastic rubbish-sacks in his living-room. By the time Crosland arrived, swerving into the apartment in his fleece-lined jacket and boots like a returning bomber pilot, everyone was drunk. Flushed and excited, Eleanor Powell swayed up to Laing, pointing hilariously at him and accusing him of trying to break into her apartment. Everyone cheered this news, as if rape was a valuable and well-tried means of bringing clan members together. 'A low crime-rate, doctor,' she told him amiably, 'is a sure sign of social deprivation.' 
Drinking steadily and without any self-control, Laing felt the alcohol bolt through his head. He knew that he was deliberately provoking himself, repressing any reservations about the good sense of people such as Crosland. On a practical level, being drunk was almost the only way of getting close to Eleanor Powell. Sober, she soon became tiresomely maudlin, wandering about the corridors in a vacant way as if she had lost the key to her own mind. After a few cocktails she was hyper-animated, and flicked on and off like a confused TV monitor revealing glimpses of extraordinary programmes which Laing could only understand when he was drunk himself. Although she kept overruling everything he said, tripping over the plastic garbage-sacks under the bar, he held her upright, excited by the play of her hands across his lapels. Not for the first time Laing reflected that he and his neighbours were eager for trouble as the most effective means of enlarging their sex lives. Laing emptied the coffee-percolator over the edge of the balcony. A greasy spray hung across the face of the building, the residue of the cascade of debris now heaved over the side without a care whether the wind would carry it into the apartments below. He carried his breakfast tray into the kitchen. The continuing failure of the electricity supply had destroyed the food in the refrigerator. Bottles of sour milk stood in a mould-infested line. Rancid butter dripped through the grilles. The smell of this rotting food was not without its appeal, but Laing opened a plastic sack and scooped everything into it. He slung the sack into the corridor, where it lay in the dim light with a score of others. A group of his neighbours was arguing in the elevator lobby, voices raised. A minor confrontation was developing between them and the 28th-floor residents. Crosland was bellowing aggressively into the empty elevator shaft. Usually, at this early hour of the day, Laing would have paid no attention to him. 
Too often Crosland had no idea what he was arguing about – confrontation was enough. Without his make-up, the expression of outrage on his face made Crosland resemble an announcer tricked for the first time into reading an item of bad news about himself. From the shadows outside his door the orthodontic surgeon emerged with studied casualness. Steele and his hard-faced wife had been standing among the garbage-sacks for some time, keeping an eye on everything. He sidled up to Laing and took his arm in a gentle but complex grip, the kind of hold he might have used for an unusual extraction. He pointed to the floors above. 'They want to seal the doors permanently,' he explained. 'They're going to re-wire two of the elevator circuits so that they move non-stop from the ground floor to the 28th.' 'What about the rest of us?' Laing asked. 'How do we leave the building?' 'My dear Laing, I don't suppose they care very much about us. Their real intention is to divide the building in half – here, at the 25th floor. This is a key level for the electrical services. By knocking out the three floors below us they will have a buffer zone separating the top half of the building from the lower. Let's make sure, doctor, that when this happens we are on the right side of the buffers...' He broke off as Laing's sister approached, carrying her electric coffee-pot. With a bow, Steele moved away through the shadows, his small feet stepping deftly among the garbage sacks, the centre parting of hair gleaming in the faint light. Laing watched him slide noiselessly into his apartment. No doubt Steele would pick his way with equal skill through the hazards ahead. He never left the building now, Laing had noticed. What had happened to that ruthless ambition? After the battles of the past weeks he was presumably banking on an imminent upsurge in the demand for advanced surgery of the mouth. 
As Laing greeted Alice he realized that she too would be excluded if the surgeon was right, living in the darkness on the wrong side of the dividing line with her alcoholic husband. She had come up ostensibly to plug her coffee-pot into the power point in Laing's kitchen, but when they entered the apartment she left it absently on the hall table. She walked on to the balcony and stared into the morning air, as if glad to have the three extra floors beneath her. 'How is Charles?' Laing asked. 'Is he at the office?' 'No...He's taken some leave. Terminal, if you ask me. What about you? You shouldn't neglect your students. At the present rate we're going to need every one of them.' 'I'm going in this morning. Would you like me to have a look at Charles on my way?' Alice ignored this offer. She grasped the handrail and began to rock herself like a child. 'It's peaceful up here. Robert, you've no idea what it's like for most people.' Laing laughed aloud, amused by Alice's notion that somehow he had been unaffected by events in the high-rise – the typical assumption of a martyred older sister forced during her childhood to look after a much younger brother. 'Come whenever you want to.' Laing put his arm around her shoulders, steadying her in case she lost her balance. In the past he had always felt physically distanced from Alice by her close resemblance to their mother, but for reasons not entirely sexual this resemblance now aroused him. He wanted to touch her hips, place his hand over her breast. As if aware of this, she leaned passively against him. 'Use my kitchen this evening,' Laing told her. 'From what I've heard, everything is going to be chaotic. You'll be safer here.' 'All right – but your apartment is so dirty.' 'I'll clean it for you.' Checking himself, Laing looked down at his sister. Did she realize what was happening? Without intending to, they were arranging an assignation. 
All over the high-rise people were packing their bags, readying themselves for short but significant journeys, a few floors up or down, laterally to the other end of a corridor. A covert but none the less substantial movement of marital partners was taking place. Charlotte Melville was now involved with a statistician on the 29th floor, and had almost vacated her apartment. Laing had watched her leave without resentment. Charlotte needed someone who would bring out her forcefulness and grit. Thinking about her, Laing felt a pang of regret that he himself had found no one. But perhaps Alice would give him the practical support he needed, with her now unfashionable dedication to the domestic virtues. Although he disliked her shrewish manner, with its unhappy reminders of their mother, it gave him an undeniable sense of security. Holding her shoulders, he looked up at the roof of the high-rise. It seemed months since he had last visited the observation deck, but for the first time he felt no urge to do so. He would build his dwelling-place where he was, with this woman and in this cave in the cliff face. When his sister had gone, Laing began to prepare for his visit to the medical school. Sitting on the kitchen floor, he looked up at the unwashed plates and utensils stacked in the sink. He was leaning comfortably against a plastic sack filled with rubbish. Seeing the kitchen from this unfamiliar perspective, he realized how derelict it had become. The floor was strewn with debris, scraps of food and empty cans. To his surprise, Laing counted six garbage-sacks – for some reason he had assumed that there was only one. Laing wiped his hands on his dirt-stained trousers and shirt. Reclining against this soft bed of his own waste, he felt like going to sleep. With an effort he roused himself. A continuous decline had been taking place for some time, a steady erosion of standards that affected not only the apartment, but his own personal habits and hygiene. 
To some extent this was forced on him by the intermittent water and electricity supply, the failure of the garbage-disposal system. But it also reflected a falling interest in civilized conventions of any kind. None of his neighbours cared what food they ate. Neither Laing nor his friends had prepared a decent meal for weeks, and had reached the point where they opened a can at random whenever they felt hungry. By the same token, no one cared what they drank, interested only in getting drunk as quickly as possible and blunting whatever sensibilities were left to them. Laing had not played one of his carefully built-up library of records for weeks. Even his language had begun to coarsen. He picked at the thick rims of dirt under his nails. This decline, both of himself and his surroundings, was almost to be welcomed. In a way he was forcing himself down these steepening gradients, like someone descending into a forbidden valley. The dirt on his hands, his stale clothes and declining hygiene, his fading interest in food and drink, all helped to expose a more real vision of himself. Laing listened to the intermittent noises from the refrigerator. The electricity had come on again, and the machine was sucking current from the mains. Water began to trickle from the taps as the pumps started to work. Spurring himself on with Alice's criticisms, Laing wandered around the apartment, doing what he could to straighten the furniture. But half an hour later, as he carried a garbage-sack from the kitchen into the hallway, he suddenly stopped. He dropped the sack on to the floor, realizing that he had achieved nothing – all he was doing was rearranging the dirt. Far more important was the physical security of the apartment, particularly while he was away. Laing strode down the long bookcase in the sitting-room, pulling his medical and scientific text-books on to the floor. Section by section, he wrenched out the shelving. 
He carried the planks into the hall, and for the next hour moved around the apartment, transforming its open interior into a home-made blockhouse. All pieces of heavy furniture, the dining-table and a hand-carved oak chest in his bedroom, he pulled into the hall. With the armchairs and desk he constructed a solid barricade. When he was satisfied with this he moved his food supplies from the kitchen into the bedroom. His resources were meagre, but would keep him going for several days – bags of rice, sugar and salt, cans of beef and pork, and a stale loaf of bread. Now that the air-conditioning had ceased, the rooms soon became stuffy. Recently Laing had noticed a strong but not unpleasant smell, the characteristic odour of the apartment – himself. Laing stripped off his grimy sports-shirt and washed himself in the last water flowing from the shower. He shaved and put on a fresh shirt and suit. If he visited the medical school looking like a tramp he might give away to some sharp-eyed colleague what was actually going on in the high-rise. He examined himself in the wardrobe mirror. The gaunt, white-skinned figure with a bruised forehead standing awkwardly in an over-large business suit looked totally unconvincing, like a discharged convict in his release suit blinking at the unfamiliar daylight after a long prison-sentence. After tightening the bolts on the front door, Laing let himself out of the apartment. Fortunately, leaving the high-rise was easier than moving around within it. Like an unofficial subway service, one elevator still travelled by mutual consent to and from the main entrance lobby during office hours. However, the atmosphere of tension and hostility, the complex of overlapping internal sieges, was apparent everywhere. Barricades of lobby furniture and plastic sacks filled with garbage blocked the entrances to individual floors. 
Not only the lobby and corridor walls, but the ceilings and carpets were covered with slogans, a jumble of coded signals that marked the attacks of raiding parties from floors above and below. Laing had to restrain himself from pencilling the number of his own floor among the numerals, some three feet high, emblazoned across the walls of the elevator car like the entries in a lunatic ledger. Almost everything possible had been vandalized – lobby mirrors fractured, pay-phones torn out, sofa upholstery slashed. The degree of vandalism was deliberately excessive, almost as if it served a more important secondary role, disguising the calculated way in which the residents of the high-rise, by ripping out all the phone lines, were cutting themselves off from the outside world. For a few hours each day a system of informal truce routes opened like fracture lines throughout the building, but this period was becoming progressively shorter. Residents moved around the building in small groups, sharply on the look-out for any strangers. Each of them wore his floor-level on his face like a badge. During this brief armistice of four or five hours they could move about, contestants in a ritualized ladder-battle allowed between bouts to mount the rungs of their pre-ordained ranks. Laing and his fellow passengers waited as the car made its slow descent, frozen together like mannequins in a museum tableau—'late twentieth-century high-rise dweller'. When they reached the ground floor Laing walked cautiously through the entrance, past the shuttered manager's office and the sacks of unsorted mail. He had not been to the medical school for days, and as he stepped through the glass door he was struck immediately by the cooler light and air, like the harsh atmosphere of an alien planet. A sense of strangeness, far more palpable than anything within the building, extended around the apartment block on all sides, reaching across the concrete plazas and causeways of the development project. 
Looking over his shoulder, as if maintaining a mental lifeline to the building, Laing walked across the parking-lot. Hundreds of broken bottles and cans lay among the cars. A health engineer from the central office of the project had called the previous day but left within half an hour, satisfied that these signs of breakdown were no more than teething troubles in the building's waste-disposal system. As long as the residents made no formal complaint, no action would be taken. Laing was no longer surprised by the way in which the residents, who only a few weeks earlier had been united in their anger over the breakdown of the building's services, were now just as united in assuring any outsiders that all was well – partly out of a displaced pride in the high-rise, but also out of a need to resolve the confrontation between them without interference, like rival gangs battling across a refuse tip who joined forces to expel any intruder. Laing reached the centre of the parking-lot, only two hundred yards from the neighbouring high-rise, a sealed rectilinear planet whose glassy face he could now see clearly. Almost all the new tenants had moved into their apartments, duplicating to the last curtain fabric and dish-washer those in his own block, but this building seemed remote and threatening. Looking up at the endless tiers of balconies, he felt uneasily like a visitor to a malevolent zoo, where terraces of vertically mounted cages contained creatures of random and ferocious cruelty. A few people leaned on their railings and watched Laing without expression, and he had a sudden image of the two thousand residents springing to their balconies and hurling down at him anything to hand, inundating Laing beneath a pyramid of wine bottles and ashtrays, deodorant aerosols and contraceptive wallets. Laing reached his car and leaned against the window pillar. He knew that he was testing himself against the excitements of the world outside, exposing himself to its hidden dangers. 
For all its present conflict, the high-rise represented safety and security. Feeling the warm cellulose of the window pillar against his shoulder, Laing remembered the stale air in his apartment, tepid with the smell of his own body. By comparison, the brilliant light reflected off the chromium trim of the hundreds of cars filled the air with knives. He turned away from his car, and walked along the parking lane that ran parallel to the apartment building. He was not ready yet to venture into the open air, face his colleagues at the medical school, catch up with the lost student supervisions. Perhaps he would stay at home that afternoon and prepare his notes for his next lecture. He reached the edge of the ornamental lake, a graceful oval two hundred yards in length, and stepped down on to the concrete floor. Following his shadow, he walked along the gently sloping lake-bed. Within a few minutes he was standing in the centre of the empty lake. The damp concrete, like the surface of an enormous mould, curved away on all sides, smooth and bland, but in some way as menacing as the contours of some deep reductive psychosis. The absence of any kind of rigid rectilinear structure summed up for Laing all the hazards of the world beyond the high-rise. Unable to stay there any longer, he turned and strode swiftly towards the shore, climbed the bank and ran towards the apartment building between the dusty cars. Within ten minutes he had returned to his apartment. After bolting the door, he climbed over his barricade and wandered around the half-empty rooms. As he inhaled the stale air he was refreshed by his own odour, almost recognizing parts of his body – his feet and genitalia, the medley of smells that issued from his mouth. He stripped off his clothes in the bedroom, throwing his suit and tie into the bottom of the closet and putting on again his grimy sports-shirt and trousers. He knew now that he would never again try to leave the high-rise. 
He was thinking about Alice, and how he could bring her to his apartment. In some way these powerful odours were beacons that would draw her to him.

# 11 Punitive Expeditions

By four o'clock that afternoon the last of the residents had returned to the high-rise. From his balcony Laing watched their cars appear on the approach roads and turn into their spaces in the parking-lot. Briefcases in hand, the drivers made their way to the entrance lobbies. Laing was relieved that all conversation ended when they neared the building. This civilized behaviour in some way unsettled him. Laing had rested during the afternoon, deciding to calm himself and gather his strength for the night to come. At intervals he climbed over the barricade and peered into the corridor, hoping to catch sight of Steele. Laing's concern for his sister, only three floors below with her twilight husband, made him increasingly restless. He needed an outbreak of violence to provide a pretext to rescue her. If the plan to divide the building succeeded, he would be unlikely to see her again. Laing paced around the apartment, testing the primitive defensive preparations. Those residents like himself on the upper floors were more vulnerable than they assumed, and might easily find themselves at the mercy of those on the lower levels. Wilder and his henchmen could easily block the exits, destroy the electrical and water-supply inputs, and set fire to the upper floors. Laing imagined the first flames climbing through the elevator shafts and staircases, floors collapsing as the terrified residents were driven to find refuge on the roof. Unsettled by this lurid vision, Laing disconnected his stereo-speakers and added them to the barricade of furniture and kitchen appliances. Records and cassettes lay about underfoot, but he kicked them out of his way. At the base of his bedroom wardrobe he prised away the floorboards. 
In this suitcase-sized cavity he hid away his cheque book and insurance policies, tax returns and share certificates. Lastly, he forced in his medical case with vials of morphine, antibiotics and cardiac stimulants. When he nailed the floorboards back into place he felt that he was sealing away for ever the last residues of his previous life, and preparing himself without reservation for the new one to come. On the surface, the apartment building remained quiet, but much to Laing's relief the first incidents broke out by the early evening. He waited in the lobby through the late afternoon, standing about with a group of his fellow residents. Perhaps, insanely, _nothing_ was going to happen? Then a foreign-affairs analyst arrived with the news that there had been a fierce scuffle over an elevator ten floors below. Adrian Talbot, the likeable psychiatrist on the 27th floor, had been drenched in urine as he climbed the stairs to his apartment. There was even a rumour that a 40th-floor apartment had been vandalized. Such an act of provocation guaranteed them all a hot night. This was followed by a spate of reports that many residents had returned home to find their apartments ransacked, furniture and kitchen equipment damaged, electrical fittings torn out. Oddly enough, no food supplies had been touched, as if these acts of vandalism were deliberately random and meaningless. Had the damage been inflicted by the owners themselves, without realizing what they were doing, in an attempt to bring about an increase in violence? These incidents continued as the evening settled over the apartment building. From his balcony Laing could see torch-beams flicking to and fro in the windows of the eight blacked-out floors below, as if signalling the preparations of a brutal blood-rite. Laing sat in the darkness on the living-room carpet, his back against the reassuring bulk of the barricade. 
He was reluctant to switch on the lights, for fear – absurdly, as he knew – that an assailant might attack him from the air outside his balcony. Drinking steadily from a hip-flask of whisky, he watched the early evening television programmes. He turned down the sound, not out of boredom with these documentaries and situation comedies, but because they were meaningless. Even the commercials, with their concern for the realities of everyday life, were transmissions from another planet. Squatting among the plastic garbage-sacks, his furniture piled up behind him, Laing studied these lavish reconstructions of housewives cleaning their immaculate kitchens, deodorants spraying well-groomed armpits. Together they formed the elements of a mysterious domestic universe. Calm and unfrightened, Laing listened to the strident voices in the corridor. Thinking about his sister, he welcomed these signs of the violence to come. Alice, always fastidious, would probably be repelled by the derelict state of the apartment, but it would do her good to find something to criticize. The sweat on Laing's body, like the plaque that coated his teeth, surrounded him in an envelope of dirt and body odour, but the stench gave him confidence, the feeling that he had dominated the terrain with the products of his own body. Even the prospect that the lavatory would soon be permanently blocked, something that had once filled him with polite dread, was now almost inviting. This decline in standards of hygiene Laing shared with his neighbours. Emitted from their bodies was a strong scent, the unique signature of the high-rise. The absence of this odour was what most unsettled him about the world outside the apartment block, though its nearest approximation was to be found in the dissecting-room at the anatomy school. A few days earlier Laing had caught himself hanging about his secretary's desk trying to get close enough to her to detect this reassuring smell. 
The startled girl had looked up to find Laing hovering over her like a beachcomber in rut. Three floors above, a falling bottle burst across a balcony. The glass fragments spat away like tracers through the darkness. A record-player by an open window was turned up to full volume. Huge fragments of amplified music boomed into the night. Laing climbed around his barricade and unlocked the door of his apartment. In the elevator lobby a group of his neighbours were manhandling a steel fire-door across the entrance to the stairway. Five floors below, a raid was in progress. Laing and his fellow clansmen crowded against the fire-door, peering into the darkened stairwell. They could hear the elevator gear reverberating as the car moved up and down, ferrying more attackers to the fray. Rising from the 20th floor, as if from an execution pit, came a woman's scream. Waiting for Steele to appear and help them, Laing was about to go in search of him. But the lobby and corridors were filled with running people, colliding into each other in the dark as they fought their way back to their apartments on the floors above the 25th. The raiders had been hurled back. Torch-beams swerved across the walls in a lunatic semaphore. Laing slipped in a pool of grease and fell among the swerving shadows. Behind him, an excited woman stepped on his hand, her heel cutting his wrist. For the next two hours a series of running battles took place in the corridors and staircases, moving up and down the floors as the barricades were reassembled and torn down again. At midnight, as he crouched in the elevator lobby behind the overturned fire-door, debating whether to risk making a run for Alice's apartment, Laing saw Richard Wilder standing among the scattered steel chairs. In one hand he still held his cine-camera. 
Like a large animal pausing for breath, he followed the huge projections of himself cast upon the walls and ceiling, as if about to leap on to the backs of his own shadows and ride them like a troupe of beasts up the flues of the building. The confrontation subsided, moving away like a storm towards the lower floors. Laing and his neighbours assembled in Adrian Talbot's apartment. Here they sat on the living-room floor among the broken tables and the easy chairs with their slashed cushions. The torches at their feet formed a circle of light, shining on the bottles of whisky and vodka they shared together. Arm in a sling, the psychiatrist moved around his vandalized apartment, trying to hang the shattered picture-frames over the slogans aerosolled across his walls in the supermarket paint-section's most fashionable colours. Talbot seemed more numbed by the personal hostility in these anti-homosexual obscenities than by the wholesale destruction of his apartment, but in spite of himself Laing found them stimulating. The lurid caricatures on the walls glimmered in the torch-light like the priapic figures drawn by cave-dwellers. 'At least they've left you alone,' Talbot said, crouching beside Laing. 'I've obviously been picked out as a scapegoat. This building must have been a powerhouse of resentments – everyone's working off the most extraordinary backlog of infantile aggressions.' 'They'll spend themselves.' 'Perhaps. I had a bucket of urine thrown over me this afternoon. Much more of that and I may take up a cudgel myself. It's a mistake to imagine that we're all moving towards a state of happy primitivism. The model here seems to be less the noble savage than our un-innocent post-Freudian selves, outraged by all that over-indulgent toilet-training, dedicated breast-feeding and parental affection – obviously a more dangerous mix than anything our Victorian forebears had to cope with. Our neighbours had happy childhoods to a man and still feel angry. 
Perhaps they resent never having had a chance to become perverse...' As they nursed their bruises and passed around the bottles, drinking steadily to build up their courage, Laing listened to the talk of counter-attack and revenge. There was still no sign of Steele. For some reason Laing felt that he should have been there, a future leader more important to them than Crosland. In spite of his injuries, Laing felt exhilarated and confident, eager to return to the fray. The darkness was reassuring, providing its own security, the natural medium of their life in the apartment building. He felt proud of having learned how to move around the pitch-black corridors, never more than three steps at a time, how to pause and test the darkness, and even the right way of crossing his own apartment, always keeping as close to the floor as possible. He almost resented the daylight which the following morning would bring. The true light of the high-rise was the metallic flash of the polaroid camera, that intermittent radiation which recorded a moment of hoped-for violence for some later voyeuristic pleasure. What depraved species of electric flora would spring to life from the garbage-strewn carpets of the corridors in response to this new source of light? The floors were littered with the blackened negative strips, flakes falling from this internal sun. Muddled by alcohol and excitement, Laing clambered to his feet with his neighbours as they set off like a crowd of drunken students, brawling with each other to keep up their courage. By the time they had descended three floors in the darkness Laing had lost his bearings. They had entered an enclave of abandoned apartments on the 22nd floor. They wandered around the deserted rooms, kicking in the faces of the television sets, breaking up the kitchen crockery. Trying to clear his head before going to rescue his sister, Laing vomited over a balcony rail. The threads of luminous phlegm fell away across the face of the building. 
Leaning there in the darkness, he listened to his neighbours moving along the corridor. When they had gone he would be able to look for Alice. Behind him the electric lights came on. Startled, Laing flinched against the parapet, expecting an intruder to attack him. After a brief interval, the lights began to flicker continuously like a fibrillating heart. Laing looked down at his grimy clothes and vomit-stained hands. The vandalized living-room glimmered around him, the floor strewn with debris as if he had woken on a battlefield. In the bedroom a broken mirror lay on the bed, the pieces flickering like the fragments of another world trying unsuccessfully to reconstitute itself. 'Come in, Laing...' The familiar precise voice of the orthodontic surgeon called out to him. 'There's something interesting here.' Steele was circling the room with a sword-stick in one hand. Now and then he feinted at the floor in a teasing way, as if rehearsing a scene from a melodrama. He beckoned Laing forward into the stuttering light. Laing cautiously approached the door, glad to see Steele at last but well aware of how exposed he was to any passing whim of his. He assumed that Steele had trapped the apartment's owner, or a vagrant resident who had taken shelter here, but there was no one in the room. Then, following the blade of the sword-stick, he saw that Steele had cornered a small cat between the legs of the dressing-table. Steele lunged forward, twirling a brocade curtain he had wrenched from the window, and whirled the terrified creature into the bathroom. 'Wait, doctor!' The surgeon's voice was infused with a strangely cold gaiety, like an erotic machine's. 'Don't leave yet...' The lights continued to flicker with the harsh over-reality of an atrocity newsreel. Confused by his own response, Laing watched Steele manipulate the cat under the curtain. 
By some ugly logic the dentist's pleasure in tormenting the creature was doubled by the presence of a squeamish but fascinated witness. Laing stood in the bathroom doorway, hoping despite himself that the lights would not fail again. He waited as Steele calmly smothered the cat, destroying it under the curtain as if carrying out a complex resuscitation under a hospital blanket. Pulling himself away at last, Laing left without speaking. He moved carefully along the darkened corridor, as the lights flickered from the doorways of ransacked apartments, from overturned lamps lying on the floor and television screens brought back to a last intermittent life. A faint music played somewhere around him. An abandoned record turntable was spinning again. In an empty bedroom a cine-projector screened the last feet of a pornographic film on to the wall facing the bed. When he reached Alice's apartment Laing hesitated, uncertain how to explain his presence. But as his sister opened the door and beckoned him in he saw immediately that she had known he was coming. Two suitcases, already packed, stood in the living-room. Alice walked to the door of her bedroom for the last time. In the yellow, intermittent light Frobisher was slumped asleep on the bed, a half-empty case of whisky beside him. Alice took Laing's arm. 'You're late,' she said reprovingly. 'I've been waiting for hours.' As they left she made no attempt to look back at her husband. Laing remembered Alice and himself at home years earlier, and how once they had slipped out of the drawing-room in the same way as their mother lay unconscious on the floor after injuring herself during a drinking bout. The sounds of a minor clash echoed up the stairwell as they made their way to the safety of the darkness on the 25th floor. Fifteen floors, including Laing's own, were now permanently without light. 
Like a storm reluctant to end, recapitulating itself at intervals, the violence rumbled on throughout the night as Laing and his sister lay awake together on the mattress in his bedroom.

# 12 Towards the Summit

Soon after two o'clock in the afternoon four days later, Richard Wilder returned from his television station and drove into the parking-lot beside the high-rise. Reducing speed so that he could relish to the full this moment of arrival, he sat back comfortably behind the wheel and looked up with a confident eye at the face of the apartment building. Around him the long ranks of parked cars were covered with a thickening layer of dirt and cement dust, blown across the open plazas of the development project from the road junction under construction behind the medical centre. Few cars now left the parking-lot, and there were almost no free spaces, but Wilder drove up and down the access lanes, stopping at the end of each file and reversing back to his starting point. Wilder fingered the freshly healed scar on his unshaven chin, relic of a vigorous corridor battle the previous night. Deliberately he reopened the wound and glanced with satisfaction at the point of blood on his finger. He had driven from the television station at speed, as if trying to emerge from an angry dream, shouting and sounding the horn at other drivers in his way, cutting up one-way streets. Now he felt calm and relaxed. The first sight of the line of five apartment buildings soothed him as usual, providing a context of reality absent from the studios. Confident that he would find a free space, Wilder continued his patrol. Originally he had parked, along with his neighbours on the lower floors, in the ranks along the perimeter of the parking-lot, but during the previous weeks he had been moving his car nearer to the building. What had begun as a harmless piece of vanity – an ironic joke at his own expense – had soon taken on a more serious role, a visible index of his success or failure. 
After several weeks dedicated to his ascent of the building he felt entitled to park in those files reserved for his new neighbours. Ultimately he would reach the front rank. At the moment of his triumph, when he climbed to the 40th floor, his car would join the line of expensive wrecks nearest to the apartment block. For several hours the previous night Wilder had reached the 20th floor and even, during the few minutes of an unexpected skirmish, the 25th. By dawn he had been forced to retire from this advance position to his present base camp, an apartment on the 17th floor owned by a stage manager at the television station, a former drinking companion named Hillman who had grudgingly accepted this cuckoo in his nest. The occupation of a floor, in Wilder's strict sense of the term, meant more than the casual seizure of an abandoned apartment. Dozens of these were scattered throughout the high-rise. Wilder had imposed on himself a harder definition of ascent – he had to be accepted by his new neighbours as one of them, the holder of a tenancy won by something other than physical force. In short, he insisted that they need him – when he thought about it, a notion that made him snort. He had reached the 20th floor as a result of one of the many demographic freaks that had confused his progress through the building. During the running battles that had filled the night he found himself helping to barricade the damaged door of an apartment on the 20th floor owned by two women stock-market analysts. After trying to brain him with a champagne bottle as he pushed his head through the broken panel, they had welcomed Wilder's easy-going offer to help – he deliberately was never more calm than at these moments of crisis. In fact, the older of the two, a spirited blonde of thirty, had complimented Wilder on being the only sane man she had met in the high-rise. 
For his part, Wilder was glad to play a domestic role rather than the populist leader and Bonaparte of the elevator-lobby barricades, instructing an ill-trained militia of magazine editors and finance company executives in how to storm a defended staircase or capture a rival elevator. Apart from anything else, the higher up the building he climbed, the worse the physical condition of the residents – hours on the gymnasium exercycles had equipped them for no more than hours on the gymnasium exercycles. After helping the two women, he spent the period before dawn drinking their wine and manoeuvring them into making the suggestion that he move into their apartment. As usual, he gestured grandly with his cine-camera and told them about his television documentary on the high-rise, inviting them to appear on screen. But neither was particularly impressed by the offer. Although the lower-level tenants were keen to take part in the programme and vent their grievances, the people living on the upper floors had appeared on television already, often more than once, as professional experts on various current-affairs programmes. 'Television is for watching, Wilder,' one of the women told him firmly, 'not for appearing on.' Soon after dawn, the members of a women's raiding-party appeared. Their husbands and companions had either moved in with friends on other floors or exited from their lives altogether. The leader of the pack, the elderly children's-story writer, gazed balefully at Wilder when he offered her the starring role in his documentary. Taking the hint, Wilder bowed out and returned to his previously secure base, the Hillmans' apartment on the 17th floor. Thirty feet away, as Wilder drove around the parking-lot, determined to find a rank in keeping with his new station, a bottle shattered across a car roof, vanishing in a brittle cloud-burst. The bottle had been dropped from a height, conceivably from the 40th floor. 
Wilder slowed his car almost to a halt, offering himself as a target. He half-expected to see the white-jacketed figure of Anthony Royal standing in one of his messianic poses on the parapet of his penthouse, the white Alsatian at his heels. During the past days he had caught several glimpses of the architect, standing high above Wilder at the top of a staircase, disappearing in a commandeered elevator towards the fastnesses of the top floors. Without any doubt, he was deliberately exposing himself to Wilder, tempting him upwards. At times Royal seemed to be uncannily aware of the confused image of his natural father that hovered in the attics of Wilder's mind, glimpsed always in the high windows of his nursery. Had Royal set out to play this role, knowing that Wilder's confusions about his father would deflect his resolve to climb the building? Wilder drummed his heavy fists on the steering wheel. Each night he moved closer to Royal, a few steps nearer their ultimate confrontation. Broken glass crackled under his tyres, as if unzipping the treads. Directly ahead of Wilder, in the front rank reserved for the top-floor residents, was a free space once occupied by the dead jeweller's car. Without hesitating, Wilder spun the wheel and steered into the open space. 'Not before time...' He sat back expansively, gazing with pleasure at the garbage-strewn wrecks on either side. The appearance of the space was a good omen. He took his time getting out of the car, and slammed the door aggressively. As he strode towards the entrance he felt like a well-to-do landowner who had just bought himself a mountain. In the entrance lobby a group of down-at-heel 1st-floor residents watched Wilder stride past the elevators to the stairway. They were suspicious of his movements around the building, his changing allegiances. During the day Wilder spent a few hours with Helen and his sons in the 2nd floor apartment, trying to rally his increasingly withdrawn wife. 
Sooner or later he would have to leave her for ever. In the evenings, when he renewed his ascent of the high-rise, she would come alive a little, perhaps even speak to him about his work at the television studios, referring to programmes on which he had worked years before. The previous night, as he prepared to leave, settling his sons and testing the locks on the doors, Helen had suddenly embraced him, as if wanting him to stay. The muscles of her thin face had moved through an irregular sequence of tremors, like tumblers trying to fall into place. To Wilder's surprise, when he returned to the apartment he found Helen in a state of high excitement. He made his way around the garbage-sacks and barricades of broken furniture that blocked the corridor. Helen and a group of wives were celebrating a minor triumph. The tired women with their unruly children – the civil war within the high-rise had made them as combative as their parents – formed a wistful tenement tableau. Two young women from the 7th floor, who had once worked as teachers in the junior school, had volunteered to reopen the classes. From their uneasy glances at the vigilante group of three fathers – a computer-time salesman, a sound man and a travel-agency courier – standing between them and the door Wilder guessed that they were the victims of a less than gentle abduction. As he prepared a meal from the last of the canned food, Helen sat at the kitchen table, her white hands moving about like a pair of confused birds in a cage. 'I can barely believe it – I'll be free of the boys for an hour or two.' 'Where are these classes being held?' 'Here – for the next two mornings. It's the least I can do.' 'But you won't be away from the boys at all. Well, anything's better than nothing.' Would she ever abandon the children? Wilder asked himself. It was all she thought about. As he played with his sons he seriously considered taking them with him on his climb. 
He watched Helen making a nervous effort to tidy the apartment. The living-room had been ransacked during a raid. While Helen and the boys sheltered in a neighbour's apartment, most of the furniture had been broken, the kitchen kicked to a shambles. Helen carried the wrecked chairs from the dining-room, lining them up in front of Wilder's broken-backed desk. The tilting chairs leaned against each other in a scarecrow parody of a children's classroom. Wilder made no effort to help. He watched her thin arms dragging at the furniture. At times he almost suspected that she was deliberately exhausting herself, and that the bruises on her wrists and knees were part of an elaborate system of conscious self-mutilation, an attempt to win back her husband – each day when he returned home he half expected to find her in an invalid chair, legs broken and trepan bandage around her shaven head, about to take the last desperate step of lobotomy. Why did he keep coming back to her? His one ambition now was to get away from Helen, and overcome that need to return to the apartment each afternoon and whatever thread-bare links it maintained with his own childhood. By leaving Helen he would break away from the whole system of juvenile restraints he had been trying to shake off since his adolescence. Even his compulsive womanizing was part of the same attempt to free himself from the past, an attempt that Helen brought to nothing by turning a blind eye. At least, however, his affairs had prepared the ground for his ascent of the high-rise, those literal handholds which would carry him on his climb to the roof over the supine bodies of the women he had known. He found it difficult now to feel much involvement with his wife's plight, or with her neighbours and their narrow, defeated lives. Already it was clear that the lower floors were doomed. 
Even their insistence on educating their children, the last reflex of any exploited group before it sank into submission, marked the end of their resistance. Helen was even being helped now by the women's group from the 29th floor. During the noon armistice the children's-story writer and her minions moved through the apartment building, offering help to abandoned or isolated wives, sisters of sinister charity. Wilder went into his sons' bedroom. Glad to see Wilder, they banged their empty feeding-bowls with their plastic machine-pistols. They were dressed in miniature paratroopers' camouflage suits and tin helmets – the wrong outfit, Wilder reflected, in the light of what had been taking place in the high-rise. The correct combat costume was stockbroker's pin-stripe, briefcase and homburg. The boys were hungry. After calling to Helen he returned to the kitchen. Helen was slumped on her knees in front of the electric cooker. The door was open, and Wilder had the sudden notion that she was trying to hide her small body in the oven – perhaps cook herself, the ultimate sacrifice for her family. 'Helen...' He bent down, surprised by the slightness of her body, a collection of sticks inside her pallid skin. 'For heaven's sake, you're like...' 'It's all right...I'll have something later.' She pulled herself away from him, and began to pick without thinking at the burnt fat on the oven floor. Looking down at her huddled at his feet, Wilder realized that she had momentarily fainted from hunger. Wilder let her subside against the cooker. He scanned the empty shelves of the pantry. 'Stay here – I'll go up to the supermarket and get you something to eat.' Angry with her, he snapped, 'Why didn't you tell me you were starving yourself?' 'Richard, I've mentioned it a hundred times.' She watched him from the floor as he hunted in her purse for money, something Wilder had found less and less use for recently.
He had not even bothered to pay his latest salary cheque into his account. He picked up his cine-camera, making sure that the lens shroud was in place. As he looked back at Helen he noticed that her eyes were surprisingly hard within her small face, almost as if she was amused by her husband's dependence on the fictions of this elaborate toy. Locking the apartment door behind him, Wilder set off in search of food and water. During the afternoon lull, one access route to the 10th-floor supermarket was still allowed the tenants in the lower section of the apartment building. Most of the stairways were blocked by permanent barricades – living-room furniture, dining-tables and washing-machines piled high between the steps and ceilings. More than a dozen of the twenty elevators were out of order. The remainder functioned intermittently, at the whim of any superior clan. In the lobby Wilder peered cautiously up the empty shafts. Sections of metal railing and water pipes criss-crossed the shafts, inserted like stop indicators to prevent the cars moving up or down, and almost formed a staircase of their own. The walls were covered with slogans and obscenities, lists of apartments to be vandalized like an insane directory. By the stairwell doors a military-style message in sober lettering pointed to the one safe staircase to be used during the early afternoon, and the obligatory curfew time, three o'clock. Wilder raised his camera and stared at the message through the view-finder. The shot would make a striking opening title sequence for the documentary on the high-rise. He was still aware of the need to make a visual record of what had happened within the apartment building, but the resolve had begun to fade. 
The decline of the apartment building reminded him of a slow-motion newsreel of a town in the Andes being carried down the mountain slopes to its death, the inhabitants still hanging out their washing in the disintegrating gardens, cooking in their kitchens as the walls were pulverized around them. Twenty of the floors in the high-rise were now in darkness at night, and over a hundred apartments had been abandoned by their owners. The clan system, which had once given a measure of security to the residents, had now largely broken down, individual groups drifting into apathy or paranoia. Everywhere people were retreating into their apartments, even into one room, and barricading themselves away. At the 5th floor landing Wilder paused, surprised that there was no one around. He waited by the lobby doors, listening for any suspicious sound. The tall figure of a middle-aged sociologist, garbage-pail in hand, emerged from the shadows and drifted like a ghost along the refuse-strewn corridor. For all the building's derelict state – almost no water was flowing, the air-conditioning vents were blocked with garbage and excrement, rails ripped off the staircase balustrades – the behaviour of the residents during the daylight hours for the most part remained restrained. At the 7th-floor landing Wilder stopped and relieved himself against the steps. In a way he was surprised by the sight of the urine running away between his feet. However, this was the mildest display of crudity. During the brawls and running battles of the night he was aware that he took a distinct and unguilty pleasure in urinating wherever he cared, defecating in abandoned apartments regardless of the health hazards to himself and his family. The previous night he had enjoyed pushing around a terrified woman who remonstrated with him for relieving himself on her bathroom floor.
None the less, Wilder welcomed and understood the night – only in the darkness could one become sufficiently obsessive, deliberately play on all one's repressed instincts. He welcomed this forced conscription of the deviant strains in his character. Happily, this free and degenerate behaviour became easier the higher he moved up the building, as if encouraged by the secret logic of the high-rise. The 10th-floor concourse was deserted. Wilder pushed back the staircase doors with their shattered glass and walked out on to the shopping mall. The bank had closed, along with the hairdressing salon and the liquor store. The last supermarket cashier – the wife of a cameraman on the 3rd floor – sat stoically at her check-out point, presiding like a doomed Britannia over a sea of debris. Wilder strolled around the empty shelves. Rotting packs floated in the greasy water at the bottom of the freezer cabinets. In the centre of the supermarket a pyramid of dog-biscuit cartons had collapsed across the aisle. Wilder filled a basket with three of the cartons and half a dozen cans of cat-meat. Together they would keep Helen and the boys alive until he could break into an apartment and raid a food cache. 'There's nothing here but pet food,' he told the cashier at the check-out. 'Have you stopped ordering?' 'There's no demand,' she told him. She played absent-mindedly with an open wound on her forehead. 'Everyone must have stocked up months ago.' This was not true, Wilder reflected as he walked away towards the elevator lobby, leaving her alone on the huge concourse. As he knew full well, having broken into any number of apartments, few people had any reserve supplies whatever. It was as if they were no longer giving any thought to what they might need the next day. Fifty feet away, beyond the overturned hair-driers lying outside the salon, the elevator indicator lights moved from right to left. The last public elevator of the day was winding itself up the building. 
Somewhere between the 25th and 30th floors it would be brought to a halt at the whim of a lookout, marking the end of the mid-day armistice and the beginnings of another night. Without thinking, Wilder quickened his pace. He reached the doors as the elevator paused at the 9th floor to discharge a passenger. At the last moment, as it resumed its ascent, Wilder pressed the button. In the few seconds that remained before the doors opened he realized that he had already decided to abandon Helen and his sons for good. Only one direction lay before him – up. Like a climber resting a hundred feet from the summit, he had no option but to ascend. The elevator doors opened. Some fifteen passengers faced him, standing rigidly together like plastic mannequins. There was a fractional movement of feet as a space was made for Wilder. Wilder hesitated, controlling his impulse to turn and run down the staircase to his apartment. The eyes of the passengers were fixed on him, wary of his indecision and suspecting that it might conceal a ruse of some kind. As the doors began to close Wilder stepped forward into the elevator, the cine-camera raised in front of him, and began once again his ascent of the high-rise.

# 13 Body Markings

After a delay of twenty minutes, as irritating as a hold-up at a provincial frontier post, the elevator moved from the 16th to the 17th floor. Exhausted by the long wait, Wilder stepped through the doors into the lobby, looking for somewhere to throw away his cartons of pet food. Crammed together shoulder to shoulder, the returning cost-accountants and television executives held tightly to their briefcases, eyes averted from each other as they stared at the graffiti on the walls of the car. The steel roof had been removed, and the long shaft rose above their heads, exposed to anyone with a missile casually to hand. The three passengers who stepped out with Wilder vanished among the barricades that lined the dimly lit corridors.
When Wilder reached the Hillmans' apartment he found that the door was securely bolted. There were no sounds of movement from within. Wilder tried without success to force the lock. Conceivably the Hillmans had abandoned the apartment and taken shelter with friends. Then he heard a faint scraping from the hall. Pressing his head to the door, he heard Mrs Hillman remonstrating with herself in a thin voice as she pulled a heavy object across the floor. After a prolonged tapping and negotiation, during which Wilder was obliged to speak to her in her own wheedling tone, he was admitted to the apartment. A huge barricade of furniture, units of kitchen equipment, books, clothes and table ornaments blocked the hallway, a miniature municipal dump in its own right. Hillman lay on a mattress in the bedroom. His head was bandaged in a torn evening-dress shirt, through which the blood had seeped on to the pillow. He raised his head as Wilder came in, his hand searching for a section of balcony railing on the floor beside him. Hillman had been one of the first scapegoats to be selected and attacked – his brusque and independent manner made him a natural target. During a raid on the next floor he had been hit on the head by a television award-winner's statuette as he tried to order his way up a defended staircase. Wilder had carried him back to his apartment and spent the night looking after him. With her husband out of commission, Mrs Hillman depended totally on Wilder, a dependence that he himself in a way enjoyed. When Wilder was away she spent all her time worrying about him, like an over-anxious mother fretting about a wayward child, though as soon as he arrived she forgot who he was. She tugged at Wilder's sleeve as he looked down at Hillman. She was more concerned about her barricade than her husband and his ominous disturbances of vision. 
Almost everything movable in the apartment, however small, she had added to the barricade, at times threatening to entomb them for good. Each night Wilder slept through the few hours before dawn slumped in an armchair partly embedded in the barricade. He would hear her moving tirelessly around him, adding a small piece of furniture she had found somewhere, three books, a single gramophone record, her jewellery box. Once Wilder woke to find that she had incorporated part of his left leg. Often it would take him half an hour to dig his way out of the apartment. 'What is it?' Wilder asked her irritably. 'What are you doing to my arm?' She was peering at the bag of dog-food, which Wilder, in the absence of any furniture, had been unable to put down. For some reason, he did not want it added to the barricade. 'I've been cleaning up for you,' she told him with some pride. 'You wanted me to, didn't you?' 'Of course...' Wilder gazed around the apartment in a lordly way. In fact, he barely noticed any changes and, if anything, preferred the apartment to be dirty. 'What's this?' She poked excitedly at the carton, jabbing him roguishly in the ribs as if she had caught a small son with a secret present for her. 'You've got a surprise!' 'Leave it alone.' Roughly, Wilder fended her away, almost knocking her off her feet. In a way, he enjoyed these absurd rituals. They touched levels of intimacy that had never been possible with Helen. The higher up the building he moved the more free he felt to play these games. Mrs Hillman wrestled a pack of dog-biscuits out of the bag. Her small body was surprisingly agile. She gazed at the overweight basset hound on the label. Both she and her husband were as thin as scarecrows. Generously, Wilder handed her a can of cat-meat. 'Soak the biscuits in gin – I know you've got a bottle hidden somewhere. It will do you both good.' 'We'll get a dog!'
When Wilder looked irritated by this suggestion she sidled up to him teasingly, pressing her hands against his heavy chest. 'A dog? Please, Dicky...' Wilder tried to move away from her, but the lewd, wheedling tone and the pressure of her fingers on his nipples unsettled him. Their unexpected sexual expertise excited a hidden strain in his character. Hillman, the dress shirt around his head like a bloody turban, was looking up passively at them, his face drained of all colour. With his visual disturbances, Wilder reflected, the empty apartment would seem to be filled with embracing replicas of himself and Mrs Hillman. He pretended to accost her, out of curiosity running his hands over her buttocks, as small as apples, to see how the injured man would react. But Hillman gave no flicker of recognition. Wilder stopped stroking Mrs Hillman when he saw that she was openly responding to him. It was on other levels that he wanted their relationship to develop. 'Dicky, I know why you came to rescue me...' Mrs Hillman followed him around the barricade, still holding Wilder's arm. 'Will you punish them?' This was another of their games. 'Rescue' she visualized primarily in terms of making 'them' – that is, all the residents in the high-rise below the 17th floor – eat humble pie and prostrate themselves in an endless line outside her front door. 'I'll punish them,' Wilder reassured her. 'All right?' They were leaning against the barricade, Mrs Hillman's sharp-chinned face against his chest. No more ill-suited couple, Wilder decided, could have been cast to play mock-mother and mock-son. Nodding eagerly at the prospect of revenge, Mrs Hillman reached into the barricade and pulled at a black metal pipe. As it emerged, Wilder saw that it was the barrel of a shotgun. Surprised, Wilder took the weapon from her hands. She was smiling encouragingly, as if expecting Wilder to go out into the corridor at that very moment and shoot someone dead. He broke the breech. 
Two live shells were in place under the hammers. Wilder moved the weapon out of Mrs Hillman's reach. He realized that this was probably only one of hundreds of similar firearms in the high-rise – sporting rifles, military service souvenirs, handbag pistols. But no one had fired a single shot, despite the epidemic of violence. Wilder knew perfectly well why. He himself would never bring himself to fire this shotgun, even at the point of death. There was an unspoken agreement among the residents of the high-rise that their confrontation would be resolved by physical means alone. He jammed the shotgun back into the barricade and pushed Mrs Hillman in the chest. 'Go away, rescue yourself...' As she protested, half-playfully, half in earnest, he began to throw dog-biscuits at her, scattering them around the bare floor. Wilder enjoyed abusing her. Deriding her in front of her supine husband, he withheld the food from her until she broke down and retreated to the kitchen. The evening progressed happily. Wilder became more and more oafish as the darkness settled over the high-rise, deliberately coarsening himself like a delinquent youth fooling about with a besotted headmistress. Until two o'clock that morning, during a night intermittently disrupted by outbreaks of violence, Wilder remained within the Hillmans' apartment on the 17th floor. The marked decline in the number of incidents disturbed Wilder – for his ascent of the building he relied on being able to offer himself as an aggressive street-fighter to one or another of the warring groups. However, the open tribal conflicts of the previous week had now clearly ceased. With the breakdown of the clan structure, the formal boundary and armistice lines had dissolved, giving way to a series of small enclaves, a cluster of three or four isolated apartments. These were far more difficult to penetrate and exploit. 
Sitting in the darkness on the floor of the sitting-room with Mrs Hillman, their backs to opposite walls, they listened to the muted noises around them. The residents of the high-rise were like creatures in a darkened zoo lying together in surly quiet, now and then tearing at each other in brief acts of ferocious violence. The Hillmans' immediate neighbours, an insurance broker and his wife, two account executives and a pharmacologist, were listless and unorganized. Wilder had visited them several times, but found that appeals to self-advantage no longer roused them. In fact, only the most blatant expressions of irrational hostility could galvanize their glazed minds. Wilder's feigned and unfeigned rages, his fantasies of revenge roused them briefly from their state of torpor. This regrouping around more radical and aggressive leaders was taking place all over the high-rise. In the hours after midnight torches flared behind the barricades in the lobbies and corridors, where enclaves of five or six residents squatted among the plastic garbage sacks, inciting each other like wedding guests making themselves drunk in the knowledge that they too will soon be copulating freely among the sweetmeats. At two o'clock Wilder left the Hillmans' apartment and set about stirring up his neighbours. The men crouched together, clubs and spears in hand, hip-flasks of whisky pooled at their feet. The torch-beams illuminated the garbage-sacks piled high around them, a visible museum of their leavings. Wilder sat in the centre of the group, outlining his plans for another foraging expedition to the floors above. Although they had eaten little for days, his neighbours were reluctant to take part, fearful of the power of the residents above them. Skilfully, Wilder played on their fantasies.
Once again, as his imaginary scapegoat, he selected the psychiatrist Adrian Talbot, whom he now accused of molesting a child in a swimming-pool changing cubicle. The untruth of the accusation, which they all well knew, only served to reinforce it. However, before they would move they insisted that Wilder invent an even more lurid crime, as if the imaginary nature of Talbot's sexual offences held the essence of their appeal. By the logic of the high-rise those most innocent of any offence became the most guilty. Shortly before dawn Wilder found himself in an empty apartment on the 26th floor. Once occupied by a woman and her small son, the apartment had recently been abandoned, and no attempt had been made to padlock the door from the outside. Tired after the night's rampage, Wilder wasted no time in breaking down the door. He had side-stepped his raiding party, leaving them to break up Talbot's apartment for the tenth time. During these last minutes of darkness he would settle himself into an empty apartment, and sleep through the long hours of daylight in time to resume his ascent of the high-rise at dusk. Wilder moved around the three rooms, satisfying himself that no one was hiding in the kitchen or bathroom. He wandered about in the darkness, kicking open the cupboards and knocking any books or ornaments to the floor. Before leaving, the owner had made a half-hearted attempt to tidy the apartment, packing away the child's toys in a bedroom wardrobe. The sight of the freshly swept floors and neatly furled curtains unsettled Wilder. He pulled the drawers on to the floor, heaved the mattresses off the beds, and urinated into the bath. His burly figure, trousers open to expose his heavy genitalia, glared at him from the mirrors in the bedroom. He was about to break the glass, but the sight of his penis calmed him, a white club hanging in the darkness. He would have liked to dress it in some way, perhaps with a hair-ribbon tied in a floral bow. 
Now that he was alone Wilder felt confident of his progress. His hunger was overlaid by his feelings of triumph at having climbed more than half-way up the high-rise. From the windows the ground below was barely visible, part of a world he had left behind. Somewhere above him, Anthony Royal was strutting about with his white Alsatian, unaware that he would soon be in for a surprise. At dawn the owner of the apartment reappeared, and blundered into the kitchen where Wilder was resting. By now he had relaxed and was sitting comfortably on the floor with his back against the cooker, the remains of a meal scattered around him. He had found the few cans of food, along with two bottles of red wine, in their invariable hiding place, under the floorboards in the bedroom wardrobe. As he broke open the cans he played with a battery-powered tape-recorder which had been mixed up with the child's toys. He recorded his grunts and belches, playing them back to himself. Wilder was amused by the deft way in which he edited the tape, overlaying one set of belches with a second and third, a skill that now resided entirely in his scarred fingers with their cracked and blackened nails. The bottles of claret had made him pleasantly drowsy. Smearing the red wine across his broad chest, he gazed up amiably at the startled woman who stumbled into the kitchen and tripped across his legs. As she stared down at him, one hand nervously to her throat, Wilder remembered that she had once been called Charlotte Melville. The name had now detached itself from her, like an athlete's tie-on numeral blown away in a gust of wind. He knew that he had often been in this apartment, and this explained the vague familiarity of the child's toys and the furniture, although the chairs and sofa had been rearranged to conceal various hiding places. 'Wilder...?' As if uncertain about the name, Charlotte Melville pronounced it softly. 
She had been sheltering during the night with her son in the apartment of the statistician three floors above with whom she had become friendly. At the first light, when everything had settled down, she had come back intending to collect the last of her food reserves before abandoning the apartment for good. Swiftly composing herself, she looked down critically at the burly man with the exposed loins lying like a savage among her wine bottles, his chest painted with red stripes. She felt no sense of loss or outrage, but a fatalistic acceptance of the damage he had casually inflicted on her apartment, like the strong odour of his urine in the bathroom. He appeared to be half asleep, and she stepped slowly towards the door. Wilder reached out with one hand and held her ankle. He smiled up at her blearily. Climbing to his feet, he circled around her, the tape-recorder raised in one hand as if about to hit her with it. Instead he switched it on and off, playing for her his selection of belches and grunts, obviously pleased with this demonstration of his unexpected expertise. He steered her slowly around the apartment as she backed from one room to the next, listening to his edited mutterings. The first time he struck her, cuffing her to the bedroom floor, he tried to record her gasp, but the reel had jammed. He freed it carefully, bent down and slapped her again, only stopping when he had recorded her now deliberate cries to his satisfaction. He enjoyed terrorizing her, taping down her exaggerated but none the less frightened gasps. During their clumsy sexual act on the mattress in the child's bedroom he left the tape-recorder switched on beside them on the floor and played back the sounds of this brief rape, editing together the noise of her tearing clothes and panting anger. Later, bored with the woman and these games with the tape-recorder, he hurled the machine into the corner. The sound of himself speaking, however coarsely, introduced a discordant element. 
He resented speaking to Charlotte or to anyone else, as if words introduced the wrong set of meanings into everything. After she dressed they had breakfast together on the balcony, sitting at the table with an incongruous old-world formality. Charlotte ate the scraps of canned meat she found on the kitchen floor. Wilder finished the last of the claret, re-marking the red stripes across his chest. The rising sunlight warmed his exposed loins, and he felt like a contented husband sitting with his wife in a villa high on a mountainside. Naively, he wanted to explain to Charlotte his ascent of the apartment building, and shyly pointed to the roof. But she failed to get the point. She fastened her torn clothes around her strong body. Although her mouth and throat were bruised, she seemed unconcerned, watching Wilder with a passive expression. From the balcony Wilder could see the roof of the high-rise, little more than a dozen floors above him. The intoxication of living at this height was as palpable as anything produced by the wine bottle in his hand. Already he could see the line of huge birds perched on the balustrades, no doubt waiting for him to arrive and take command. Below, on the 20th floor, a man was cooking over a fire on his balcony, breaking up a coffee table and feeding the legs to the clutch of smouldering sticks on which a soup can was balanced. A police car approached the perimeter entrance. A few residents were leaving for work at this early hour, neatly dressed in suits and raincoats, briefcases in hand. The abandoned cars in the access roads prevented the police from reaching the main entrance to the building, and the officers stepped out and spoke to the passing residents. Usually none of them would have replied to an outsider, but now they gathered in a group around the two policemen. Wilder wondered if they were going to give the game away, but although he could not hear them, he was certain that he knew what they were saying.
Clearly they were pacifying the policemen, reassuring them that everything was in order, despite the garbage and broken bottles scattered around the building. Deciding to test the defences of the apartment before he went to sleep, Wilder stepped into the corridor. He stood outside the doorway, as the stale air moved past him to the open balcony. He relished the rich smells of the high-rise. Like their garbage, the excrement of the residents higher up the building had a markedly different odour. Returning to the balcony, he watched the police drive away in their car. Of the twenty or so residents who still left for work each morning, three had turned back, evidently unsettled by the task of convincing the police that all was well. Without looking up, they scurried back to the entrance lobby. Wilder knew that they would never leave again. The separation of the high-rise from the world around it was now almost complete, and would probably coincide with his own arrival at the summit. Soothed by this image, he sat down on the floor and leaned against Charlotte Melville's shoulder, falling asleep as she stroked the wine-coloured stripes on his chest and shoulders.

# 14 Final Triumph

At dusk, after he had strengthened the guard, Anthony Royal ordered the candles lit on the dining-room table. Hands in the pockets of his dinner-jacket, he stood at the windows of the penthouse apartment on the 40th floor and looked down across the concrete plazas of the development project. All the tenants who had earlier left for their offices had now parked their cars and entered the building. With their safe arrival, Royal felt for the first time that he could relax, like a captain eager to set sail seeing the last of his crew return from shore-leave in a foreign port. The evening had begun. Royal sat down in the high-backed oak chair at the head of the dining-table. The candlelight flickered over the silver cutlery and gold plate, reflected in the silk facings of his dinner-jacket.
As usual he smiled at the theatricality of this contrived setting, like a badly rehearsed and under-financed television commercial for a high-life product. It had started three weeks earlier when he and Pangbourne had decided to dress for dinner each evening. Royal had ordered the women to extend the dining-room table to its furthest length, so that he could sit with his back to the high windows and the illuminated decks of the nearby buildings. Responding to Royal, the women had brought candles and silverware from secret caches, and served an elaborately prepared meal. Their shadows swayed across the ceiling as if they were moving around the dining chamber of a feudal chief. Sitting in his chair at the far end of the long table, Pangbourne had been suitably impressed. Of course, as the gynaecologist well knew, the charade was meaningless. A single step beyond the circle of candlelight the garbage-sacks were piled six-deep against the walls. Outside, the corridors and staircases were filled with broken furniture and barricades built from washing-machines and freezer cabinets. The elevator shafts were the new garbage chutes. Not one of the twenty elevators in the apartment building now functioned, and the shafts were piled deep with kitchen refuse and dead dogs. A fading semblance of civilized order still survived in the top three floors, the last tribal unit in the high-rise. However, the one error that Royal and Pangbourne had made was to assume that there would always be some kind of social organization below them which they could exploit and master. They were now moving into a realm of no social organization at all. The clans had broken down into small groups of killers, solitary hunters who built man-traps in empty apartments or preyed on the unwary in deserted elevator lobbies. Royal looked up from the polished table as one of the women walked into the room, a silver tray in her strong arms. Watching her, he remembered that she was Mrs Wilder.
She wore one of Anne's well-cut trouser-suits, and not for the first time Royal thought how easily this intelligent woman had fitted into the upper levels of the high-rise. Two weeks earlier, when she was found cowering with her sons in an empty apartment on the 19th floor after Wilder had abandoned her, she was totally exhausted, numbed by hunger and indignation. Whether in quest of her husband, or responding to some dim instinct, she had begun to climb the building. The raiding party brought her to the top floor. Pangbourne had wanted to throw out this anaemic and rambling woman, but Royal overruled him. Somewhere below, Wilder was still making his ascent of the high-rise, and his wife might one day be a valuable hostage. Led away, she joined the group of outcast wives who lived with their children in the next apartment, earning their keep by working as house servants. Within days Mrs Wilder had regained her strength and self-confidence. No longer stunned and stoop-shouldered, she reminded Royal of the serious and attractive wife of an up-and-coming television journalist who had arrived at the high-rise a year earlier. He noticed that she was clearing away Pangbourne's place setting, returning the immaculate silverware to her tray. 'They seem clean enough,' Royal told her. 'I don't think Dr Pangbourne will notice.' When she ignored him and continued to remove the cutlery, Royal asked, 'Have you heard from him? I take it he won't be joining me this evening?' 'Or any evening. He's decided to decline in future.' Mrs Wilder glanced across the table at Royal, almost as if she had felt a flicker of concern for him. She added matter-of-factly, 'I should be wary of Dr Pangbourne.' 'I always have been.' 'When a man like Dr Pangbourne loses his appetite for food it's reasonable to assume that he has something much more interesting between his teeth – and much more dangerous.' Royal listened to her cool advice without comment.
He was not surprised that the dinners had come to an end. Both he and Pangbourne, anticipating the inevitable break-up of the last clan within the apartment building, had now retired to their quarters at opposite ends of the roof, each taking his women with him. Pangbourne had moved into the penthouse once owned by the dead jeweller. Strangely enough, Royal reflected, they would soon be back where they had begun, each tenant isolated within his own apartment. Something warned him to dispense with this meal but he waited for Mrs Wilder to serve him. Having survived so far, nothing that the gynaecologist could do would put him off his stride. During the past months almost all traces of his accident had vanished, and Royal felt stronger and more confident than ever before. He had won his attempt to dominate the high-rise, and amply proved his right to rule this huge building, even though at the cost of his marriage. As for the new social order that he had hoped to see emerge, he knew now that his original vision of a high-rise aviary had been closer to the truth than he guessed. Without knowing it, he had constructed a gigantic vertical zoo, its hundreds of cages stacked above each other. All the events of the past few months made sense if one realized that these brilliant and exotic creatures had learned to open the doors. Royal sat back as Mrs Wilder served him. Since his own kitchen lacked any equipment, all his meals were prepared in the apartment next door. Mrs Wilder reappeared with her tray, stepping over the garbage-sacks that lined the hallway – for all their descent into barbarism, the residents of the high-rise remained faithful to their origins and continued to generate a vast amount of refuse. As usual, the main course consisted of a piece of roast meat. Royal never asked about the source of the meat – dog, presumably. The women had the supply situation well in hand. 
Mrs Wilder stood beside him, gazing into the night air as Royal tasted the heavily spiced dish. Like a well-trained housekeeper, she was waiting for Royal to give some indication of approval, though she never seemed concerned by either praise or criticism. She spoke in a flat voice unlike the animated tone she used with Anne and the other women. In fact, Mrs Wilder spent more time with his wife than Royal did himself. Six women lived together in the adjacent apartment, ostensibly so that they could be more easily protected from a surprise attack. Sometimes Royal would visit Anne, but there was something daunting about the closely knit group of women, sitting on their beds surrounded by the garbage-sacks, together looking after the Wilder children. Their eyes would watch him as he hesitated in the door, waiting for him to go away. Even Anne had withdrawn from him, partly out of fear of Royal, but also because she realized that he no longer needed her. At last, after all the months of trying to maintain her superior status, Anne had decided to join her fellow residents. 'Good – it's excellent again. Wait...before you go.' Royal put down his fork. Casually, he asked, 'Have you heard anything of him? Perhaps someone has seen him?' Mrs Wilder shook her head, bored by this roundabout questioning. 'Who...?' 'Your husband – Richard, I think he was called. _Wilder._' Mrs Wilder stared down at Royal, shaking her head as if not recognizing him. Royal was certain that she had not only forgotten the identity of her husband, but of all men, including himself. To test this, he placed his hand on her thigh, feeling the strong muscle. Mrs Wilder stood passively with her tray, unaware of Royal fondling her, partly because she had been molested by so many men during the past months, but also because the sexual assault itself had ceased to have any meaning.
When Royal slipped two of his fingers into her natal cleft she reacted, not by pushing his hand away, but by moving it to her waist and lightly holding it there as she would the straying hands of her children. When she had gone, taking the portion of meat which Royal always left for her, he sat back at the long table. He was glad to see her go. Without asking him, Mrs Wilder had laundered his white jacket, washing out the bloodstains which had given him not merely his sense of authority, but his whole unstated role within the high-rise. Had she done this deliberately, knowing that it would emasculate him? Royal could still remember the period of endless parties, when the apartment building had been lit up like a drunken liner. Royal had played the role of feudal chief to the hilt, presiding each evening over the council meetings held in his drawing-room. As they sat together in the candlelight, these neurosurgeons, senior academics and stockbrokers displayed all the talents for intrigue and survival exercised by years of service in industry, commerce and university life. For all the formal vocabulary of agendas and minutes, proposed and seconded motions, the verbal paraphernalia bequeathed by a hundred committee meetings, these were in effect tribal conferences. Here they discussed the latest ruses for obtaining food and women, for defending the upper floors against marauders, their plans for alliance and betrayal. Now the new order had emerged, in which all life within the high-rise revolved around three obsessions – security, food and sex. Leaving the table, Royal picked up a silver candlestick and carried it to the window. All the lights in the high-rise were out. Two floors, the 40th and the 37th, were left with electric current, but they remained unlit. The darkness was more comforting, a place where real illusions might flourish. 
Forty floors below, a car turned into the parking-lot and threaded its way through the maze of access lanes to its place two hundred yards from the building. The driver, wearing a flying-jacket and heavy boots, stepped out and hurried head-down towards the entrance. Royal guessed that this unknown man was probably the last resident to leave the building and set off for his office. Whoever he was, he had found a route to and from his apartment. Somewhere on the roof, a dog whimpered. Far below, from the mouth of an apartment twenty storeys down the cliff face, there was a brief isolated scream – whether of pain, lust or rage no longer mattered. Royal waited, his heart starting to race. A moment later there was a second scream, a meaningless wail. These cries were the expressions of totally abstracted emotions, detached from the context of events around them. Royal waited, expecting one of his retinue to enter and inform him of the probable reasons for these disturbances. Apart from the women in the next apartment, several of the younger male residents – a gallery owner from the 39th floor, and a successful hairdresser from the 38th – usually lounged about in the corridor among the garbage-sacks, leaning on their spears and keeping an eye on the staircase barricades. Picking up his chromium cane, Royal left the dining-room, a single candle in its silver stick lighting his way. As he stumbled over the black plastic bags he wondered why they had never heaved them over the side. Presumably they held this rubbish to themselves less from fear of attracting the attention of the outside world than from a need to cling to their own, surround themselves with the mucilage of unfinished meals, bloody bandage scraps, broken bottles that once held the wine that made them drunk, all faintly visible through the semi-opaque plastic. His apartment was empty, the high-ceilinged rooms deserted. Cautiously, Royal stepped into the corridor. 
The guard-post by the barricades was unmanned, and no lights gleamed through the doorway of the adjacent apartment where the women lived. Surprised by the absence of light from the usually busy kitchen, Royal walked through the darkened hallway. He kicked aside a child's toy and raised the candlestick above his head, trying to pick out any sleeping human figures in the surrounding rooms. Open suitcases lay on the mattresses that covered the floor of the master-bedroom. Royal stood in the doorway, a medley of scents crowding around him in the darkness, brilliant wakes left behind them by these fleeing women. Hesitating for a moment, he reached into the room and switched on the light. The instant electric glow, so unfamiliar after the wavering candlelight and twitching torch-beams, shone down on the six mattresses in the room. Half-packed suitcases lay on top of each other, as if the women had left at a moment's notice, or at some prearranged signal. Most of their clothes had been left behind, and he recognized the trouser-suit which Mrs Wilder had worn to serve his dinner. The racks of Anne's dresses and suits hung in the wardrobes like a store display. The even light, as dead as a time exposure in a police photograph recording a crime, lay across these torn mattresses and discarded clothes, the wine-stains on the walls and the forgotten cosmetics on the floor at his feet. As Royal stared down at them, he could hear a faint hooting noise from the darkened corridor, moving away from him as if emitted by these escaping women. This series of whoops and nasal grunts he had been listening to for days, trying without success to repress them from his mind. Switching off the light, he seized his cane firmly in both hands and left the apartment. Standing outside the door, he listened to the distant sounds, almost an electronic parody of a child's crying. 
They moved through the apartments at the far end of the floor, metallic and remote, the sounds of the beasts of his private zoo.

# 15 The Evening's Entertainment

The evening deepened, and the apartment building withdrew into the darkness. As usual at this hour, the high-rise was silent, as if everyone in the huge building was passing through a border zone. On the roof the dogs whimpered to themselves. Royal blew out the candles in the dining-room and made his way up the steps to the penthouse. Reflecting the distant lights of the neighbouring high-rises, the chromium shafts of the callisthenics machine seemed to move up and down like columns of mercury, a complex device recording the shifting psychological levels of the residents below. As Royal stepped on to the roof the darkness was lit by the white forms of hundreds of birds. Their wings flared in the dark air as they struggled to find a perch on the crowded elevator heads and balustrades. Royal waited until they surrounded him, steering their beaks away from his legs with his stick. He felt himself becoming calm again. If the women and the other members of his dwindling entourage had decided to leave him, so much the better. Here in the darkness among the birds, listening to them swoop and cry, the dogs whimpering in the children's sculpture-garden, he felt most at home. He was convinced more than ever that the birds were attracted here by his own presence. Royal scattered the birds out of his way and pushed back the gates of the sculpture-garden. As they recognized him, the dogs began to whine and strain, pulling against their leads. These retrievers, poodles and dachshunds were all that remained of the hundred or so animals who had once lived in the upper floors of the high-rise. They were kept here as a strategic food reserve, but Royal had seen to it that few of them had been eaten.
The dogs formed his personal hunting pack, to be kept until the final confrontation when he would lead them down into the building, and throw open the windows of the barricaded apartments to admit the birds. The dogs pulled at his legs, their leads entangled around the play-sculptures. Even Royal's favourite, the white Alsatian, was restless and on edge. Royal tried to settle it, running his hands over the luminous but still bloodstained coat. The dog butted him nervously, knocking him back across the empty food-pails. As Royal regained his balance, he heard the sound of voices surging up the central stairway a hundred feet behind him. Lights approached through the darkness, a procession of electric torches held at shoulder height. The beams of light cut through the night air, scattering the birds into the sky. A portable cassette player boomed out its music over the clicking of dumb-bells. As Royal paused behind an elevator head, a group of his top-floor neighbours erupted on to the roof. Led by Pangbourne, they spread in a loose circle across the observation deck, ready to celebrate a recent triumph. Without Royal's approval or foreknowledge, a raid had taken place on the floors below. The gynaecologist was in high excitement, waving the last stragglers up the staircase like a demented courier. From his mouth came a series of peculiar whoops and cries, barely articulated grunts that sounded like some Neanderthal mating call but, in fact, were Pangbourne's rendering of the recorded birth-cries analysed by his computer. These eerie and unsettling noises Royal had been forced to listen to for weeks as members of his entourage took up the refrain. A few days earlier he had finally banned the making of these noises altogether – sitting in the penthouse and trying to think about the birds, it unnerved him to hear the women in the kitchen next door emitting these clicks and grunts. 
However, Pangbourne held regular sessions in his own quarters at the opposite end of the roof, where he would play through his library of recorded birth-cries for the benefit of the women crouching in a hushed circle on the floor around him. Together they mimicked these weird noises, an oral emblem of Pangbourne's growing authority. Now they had left Royal, and were giving full vent to everything they had learned, hooting and growling like a troupe of demented mothers-to-be invoking their infants' birth-traumas. Waiting for the right moment to make his entrance, Royal heeled the Alsatian behind a tattered awning that leaned against the elevator head. For once he was glad that he was wearing his tuxedo – the white safari-jacket would have stood out like a flame. Two 'guests' had been picked up, a cost-accountant from the 32nd floor with a bandaged head, and a myopic meteorologist from the 27th. The woman carrying the cassette player, he noted calmly, was his wife Anne. Sloppily dressed, her hair in a mess, she lolled against Pangbourne's shoulder and then wandered about in the circle of torchlight like a moody trollop, brandishing the cassette player at the two prisoners. 'Ladies...please, now. There's more to come.' Pangbourne calmed the women, his slim fingers like brittle sticks in the confused light. The portable bar was lifted upright. A table and two chairs were set beside it, and the guests uneasily took their seats. The cost-accountant was trying to straighten the unravelling bandage around his head, as if frightened that he might be called upon to play blind man's buff. The meteorologist squinted shortsightedly into the torchlight, hoping to recognize someone among those taking part in this revel. Royal knew everyone present, his neighbours of the past year, and could almost believe that he was attending one of the many cocktail parties held on the roof that summer. 
At the same time he felt that he was watching the opening act of a stylized opera or ballet, in which a restaurant is reduced to a single table and the doomed hero is taunted by a chorus of waiters, before being despatched to his death. The hosts at this party had been drinking long before their two guests arrived. The jeweller's widow in the long fur coat, Anne with her cassette player, Jane Sheridan waving a cocktail shaker, all were lurching about as if to some deranged music only Royal was unable to hear. Pangbourne called for quiet again. 'Now – keep our guests amused. They're looking bored. What are we playing tonight?' A medley of suggestions was shouted out. 'Gang Plank!' 'Flying School, doctor!' 'Moon Walk!' Pangbourne turned to his guests. 'I rather like Flying School... Did you know we've been running a flying school here? No—?' 'We've decided to offer you some free lessons,' Anne Royal told them. 'One free lesson,' Pangbourne corrected. Everyone sniggered at this. 'But that's all you'll need. Isn't it, Anne?' 'It's a remarkably effective course.' 'Solo first time, in fact.' Already, led by the jeweller's widow, they were dragging the injured accountant towards the balustrade, everyone tripping over the bloodstained bandage unwrapping around his head. A pair of tattered papier-mâché wings, part of a child's angel costume, were fastened to the victim's back. The grunting and hooting began again. Dragging the reluctant Alsatian after him, Royal stepped into view. Involved in their imminent execution, no one noticed him. As casually as he could muster, he called out, 'Pangbourne...! Dr Pangbourne...!' The noise slackened. Torch-beams flicked through the darkness, whipping across Royal's silk-lapelled dinner-jacket, fixing on the white Alsatian trying to escape between his feet. 'Flying School! Flying School!' The sullen chant was taken up. 
Looking down at this unruly gang, Royal could almost believe that he was surrounded by a crowd of semi-literate children. The zoo had rebelled against its keeper. Hearing Royal's voice, the gynaecologist turned from his prisoner, whose bandage he had expertly refastened. Wiping his hands, he strolled across the roof, almost mimicking Royal's casual saunter. But his eyes were examining Royal's face with a wholly professional curiosity, as if he had already decided that its expression of firm determination could be readjusted by cutting a minimum number of nerves and muscles. The chant rose into the air. The torch-beams beat rhythmically across the darkness, striking Royal's face. He waited patiently for the clamour to subside. As Anne broke away from the crowd and ran forward he raised the chromium cane, ready to strike her. She stopped in front of him, smirking as she fluffed up her long skirt in a provocative gesture. Suddenly she turned the cassette player to full volume and thrust it into his face. A gabble of birth-cries filled the air. 'Royal...' the jeweller's widow shouted warningly. 'Here's Wilder!' Startled by the name, Royal flinched back, thrashing at the darkness with the chromium cane. The torch-beams swerved around him, the shadows of the overturned chairs swinging across the concrete roof. Expecting Wilder to lunge at him from behind, he stumbled across the awning and entangled himself in the dog's lead. He heard laughter behind him. Controlling himself with an effort, he turned to face Pangbourne again. But the gynaecologist was walking away, looking back at him without hostility. He waved to Royal with a quick movement of his hand, as if flicking a dart at him, dismissing him for ever. The torches swung away from Royal, and everyone returned to the more serious business of tormenting the two guests. Royal watched from the darkness as they argued over the prisoners. 
The confrontation with Pangbourne was over – or, more exactly, had never taken place. A simple ruse had unnerved him, leaving him with the uncertainty of whether or not he really feared Wilder. He had been humiliated, but in a sense this was only just. The gynaecologist was the man for their hour. No zoo would survive for long with Pangbourne as its keeper, but he would provide a node of violence and cruelty that would keep alive in others the will to survive. Let the psychotics take over. They alone understood what was happening. Holding to the Alsatian, Royal let the dog drag him away towards the safety of the darkness near the sculpture-garden. The white forms of the birds were massed together on every ledge and parapet. Royal listened to the whimpering dogs. He had no means now of feeding them. The glass doors of the penthouse reflected the swerving birds, like the casements of a secret pavilion. He would close down his apartment, block the staircase and retreat to the penthouse, perhaps taking Mrs Wilder with him as his servant. Here he would preside over the high-rise, taking up his last tenancy in the sky. He unlocked the gate of the sculpture-garden and moved through the darkness among the statues, releasing the dogs. One by one they scrambled away, until only Royal and the birds were left.

# 16 A Happy Arrangement

An uncertain scene, Robert Laing decided. He could no longer trust his senses. A curious light, grey and humid but at the same time marbled by a faint interior luminosity, hung over the apartment. As he stood among the garbage-sacks in the kitchen, trying to coax a few drops of water from the tap, he peered over his shoulder at the dull fog that stretched like a curtain across the sitting-room, almost an extension of his own mind. Not for the first time he was unsure what time of day it was. How long had he been up?
Laing vaguely remembered sleeping on the tartan rug that lay on the kitchen floor, his head pillowed on a garbage-sack between the table legs. He had been wandering about the bedroom where his sister Alice lay asleep, but whether he had woken five minutes ago or the previous day Laing had no means of telling. He shook his watch, picking at the fractured dial with a grimy finger-nail. The watch had stopped during a scuffle in the 25th floor lobby several days earlier. Although he had forgotten the exact moment, the hands of this broken watch contained the one point of finite time left to him, like a fossil cast on to a beach, crystallizing for ever a brief sequence of events within a vanished ocean. However, it barely mattered now what time it was – anything rather than night, when it was too terrifying to do more than shelter in the apartment, crouching behind his dilapidated barricade. Laing turned the cold water tap on and off, listening to the faintly changing tone. At rare intervals, perhaps for a single minute during the day, a green, algae-stained liquid flowed from the tap. These small columns of water, moving up and down the huge system of pipes that ran throughout the building, announced their arrivals and departures with faint changes of note. Listening to this remote and complex music had sharpened Laing's ears, a sensitivity that extended to almost any kind of sound within the building. By contrast his sight, dulled by being used chiefly at night, presented him with an increasingly opaque world. Little movement took place within the high-rise. As Laing often reminded himself, almost everything that could happen had already taken place. He left the kitchen and squeezed himself into the narrow niche between the front door and the barricade. He placed his right ear to the sounding panel of the wooden door. From the minute reverberations he could tell instantly if a marauder was moving through the abandoned apartments nearby. 
During the brief period each afternoon when he and Steele emerged from their apartments – a token remembrance of that time when people had actually left the building – they would take turns standing with their hands pressed against the metal walls of an elevator shaft, feeling the vibrations transmitted to their bodies, picking up a sudden movement fifteen floors above or below. Crouched on the staircase with their fingers on the metal rails, they listened to the secret murmurs of the building, the distant spasms of violence that communicated themselves like bursts of radiation from another universe. The high-rise quivered with these tremors, sinister trickles of sound as a wounded tenant crawled up a stairway, a trap closed around a wild dog, an unwary prey went down before a club. Today, however, befitting this timeless zone with its uncertain light, there was no sound at all. Laing returned to the kitchen and listened to the water-pipes, part of a huge acoustic system operated by thousands of stops, this dying musical instrument they had once all played together. But everything was quiet. The residents of the high-rise remained where they were, hiding behind the barricades in their apartments, conserving what was left of their sanity and preparing themselves for the night. By now what violence there was had become totally stylized, spasms of cold and random aggression. In a sense life in the high-rise had begun to resemble the world outside – there were the same ruthlessness and aggression concealed within a set of polite conventions. Still uncertain how long he had been awake, or what he had been doing half an hour earlier, Laing sat down among the empty bottles and refuse on the kitchen floor. He gazed up at the derelict washing-machine and refrigerator, now only used as garbage-bins. He found it hard to remember what their original function had been. To some extent they had taken on a new significance, a role that he had yet to understand. 
Even the run-down nature of the high-rise was a model of the world into which the future was carrying them, a landscape beyond technology where everything was either derelict or, more ambiguously, recombined in unexpected but more meaningful ways. Laing pondered this – sometimes he found it difficult not to believe that they were living in a future that had already taken place, and was now exhausted. Squatting beside his dried-up water-hole like a desert nomad with all the time in the world, Laing waited patiently for the taps to flow. He picked at the dirt on the backs of his hands. Despite his tramp-like appearance he dismissed the notion of using the water to wash. The high-rise stank. None of the lavatories or garbage-disposal chutes were working, and a faint spray of urine hung over the face of the building, drifting across the tiers of balconies. Overlaying this characteristic odour, however, was a far more ambiguous smell, putrid and sweet, that tended to hover around empty apartments, and which Laing chose not to investigate too closely. For all its inconveniences, Laing was satisfied with life in the high-rise. Now that so many of the residents were out of the way he felt able to relax, more in charge of himself and ready to move forward and explore his life. How and where exactly, he had not yet decided. His real concern was with his sister. Alice had fallen ill with a non-specific malaise, and spent her time lying on the mattress in Laing's bedroom or wandering half-naked around the apartment, her body shuddering like an over-sensitive seismograph at imperceptible tremors that shook the building. When Laing drummed on the waste-pipe below the sink, sending a hollow drone through the empty pipe, Alice called out from the bedroom in her thin voice. Laing went in to see her, picking his way among the piles of kindling he had made from chopped-up furniture. He enjoyed cutting up the chairs and tables. Alice pointed to him with a stick-like hand. 
'The noise – you're signalling again to someone. Who is it now?' 'No one, Alice. Who do you think we know?' 'Those people on the lower floors. The ones you like.' Laing stood beside her, uncertain whether to sit on the mattress. His sister's face was as greasy as a wax lemon. Trying to focus on him, her tired eyes drifted about in her head like lost fish. It crossed his mind briefly that she might be dying – during the past two days they had eaten no more than a few fillets of canned smoked salmon, which he had found under the floorboards in an empty apartment. Ironically, the standard of cuisine in the apartment building had begun to rise during these days of its greatest decline, as more and more delicacies came to light. However, food was a secondary matter, and Alice was very much alive in other ways. Laing enjoyed her wheedling criticisms of him, as he tried to satisfy her pointless whims. All this was a game, but he relished the role of over-dutiful servant dedicated to a waspish mistress, a devoted menial whose chief satisfaction was a total lack of appreciation and the endless recitation of his faults. In many ways, in fact, his relationship with Alice recapitulated that which his wife had unthinkingly tried to create, hitting by accident on the one possible source of harmony between them, and which Laing had rejected at the time. Within the high-rise, he reflected, his marriage would have succeeded triumphantly. 'I'm trying to find some water, Alice. You'd like a little tea?' 'The kettle smells.' 'I'll wash it for you. You mustn't become dehydrated.' She nodded grudgingly. 'What's been happening?' 'Nothing...It's already happened.' A ripe but not unpleasant smell rose from Alice's body. 'Everything is starting to get back to normal.' 'What about Alan – you said you'd look for him.' 'I'm afraid he's gone.' Laing disliked these references to Alice's husband. They introduced a discordant note. 'I found your apartment but it's empty now.'
Alice turned her head away, indicating that she had seen enough of her brother. Laing bent down and gathered together the kindling she had scattered on the floor beside the mattress. These dining-room chair-legs, well impregnated with glue and varnish, would burn briskly. Laing had looted the chairs from Adrian Talbot's apartment after the psychiatrist's disappearance. He was grateful for this reproduction Hepplewhite – the conventional tastes of the middle-floor residents had served them well. By contrast, those on the lower levels found themselves with a clutter of once-fashionable chromium tubing and undressed leather, useless for anything but sitting on. All cooking was now done over fires which the residents lit for themselves on their balconies, or in the artificial fireplaces. Laing carried the sticks on to the balcony. As he squatted there he realized that he had nothing to cook. The secret cache of cans he had long ago been obliged to surrender to the orthodontic surgeon next door. In fact, Laing's position was secure thanks only to the morphine ampoules he had concealed. Although Steele frightened him with his unpredictable cruelties, Laing had attached himself to him out of necessity. So many people had gone, or dropped out of the struggle altogether. Had they deserted the high-rise for the world outside? Laing was sure that they had not. In a sense he depended on the uncertainties of his relationship with the dentist, following his murderous swings like a condemned prisoner in love with a moody jailer. During the previous weeks Steele's behaviour had become frightening. The deliberately mindless assaults on anyone found alone or unprotected, the infantile smearing of blood on the walls of empty apartments – all these Laing watched uneasily.
Since his wife's disappearance Steele had been as tautly strung as the huge crossbows which he constructed from piano wire and mounted in the lobbies and corridors, their vicious arrows fashioned from the shafts of golf-clubs. At the same time, however, Steele remained strangely calm, as if pursuing some unknown quest. Steele slept in the afternoon, giving Laing a chance to prospect for water. As he picked up the kettle he heard Alice call out to him, but when he returned to her she had already forgotten what she wanted. She held out her hands to him. Usually Laing would have rubbed them for her, trying to kindle a little warmth in them, but out of some kind of peculiar loyalty to the dentist he made no effort to help Alice. This petty show of callousness, his declining personal hygiene, and even his deliberate neglect of his health, were elements in a system he made no attempt to change. For weeks all he had been able to think about were the next raid, the next apartment to be ransacked, the next tenant to be beaten up. He enjoyed watching Steele at work, obsessed with these expressions of mindless violence. Each one brought them a step closer to the ultimate goal of the high-rise, a realm where their most deviant impulses were free at last to exercise themselves in any way they wished. At this point physical violence would cease at last. Laing waited for Alice to subside into half-consciousness. Looking after his sister was taking up more of his energy than he could afford. If she was dying there was little he could do, apart from giving her a terminal gram of morphine and hiding her body before Steele could mutilate it. Dressing up corpses and setting them in grotesque tableaux was a favourite pastime of the dentist's. His imagination, repressed by all the years of reconstructing his patients' mouths, came alive particularly when he was playing with the dead. 
The previous day Laing had blundered into an apartment and found him painting a bizarre cosmetic mask on the face of a dead account-executive, dressing the body like an overblown drag-queen in a voluminous silk nightdress. Given time, and a continuing supply of subjects, the dentist would repopulate the entire high-rise. Carrying the kettle, Laing let himself out of the apartment. The same dim light, pearled by a faint interior glow, filled the corridor and elevator lobby, a miasma secreted by the high-rise itself, distillation of all its dead concrete. The walls were spattered with blood, overlaying the aerosolled graffiti like the tachist explosions in the paintings that filled the top-floor apartments. Broken furniture and unravelled recording tape lay among the garbage-sacks piled against the walls. Laing's feet crackled among the polaroid negatives scattered about the corridor floor, each recording a long-forgotten act of violence. As he paused, wary of attracting the attention of a watching predator, the staircase doors opened and a man in a flying-jacket and fleece-lined boots entered the lobby. Watching Paul Crosland stride purposefully across the debris-strewn carpet, Laing realized that the television announcer had just returned, as he did every day, from reading the lunch-time news bulletin at the television station. Crosland was the only person to leave the high-rise, maintaining a last tenuous link with the outside world. Even Steele side-stepped him discreetly. A few people still watched him read the news on their battery-powered sets, crouching among the garbage-sacks behind their barricades, perhaps still hoping that even now Crosland might suddenly depart from his set text and blurt out to the world at large what was happening within the high-rise. Inside the staircase Laing had set up a dog trap, using a tropical mosquito-net he had lifted from an anthropologist's apartment three floors above. 
A plague of dogs had descended the building from their breeding grounds on the upper floors. Laing had no hopes of catching the larger dogs in the spring-loaded contraption, but a dachshund or pekinese might become entangled in the nylon mesh. The staircase was unguarded. Taking a chance, Laing made his way down the steps to the floor below. The lobby was blocked by a barricade of furniture, and he turned into the corridor that served the ten apartments in the northern wing of the building. Three doors along, he entered an abandoned apartment. The rooms were empty, the furniture and fittings long since stripped away. In the kitchen Laing tried the taps. With his sheath-knife he cut the hoses of the washing-machine and dishwasher, collecting a cupful of metallic water. In the bathroom the naked body of an elderly tax-specialist lay on the tiled floor. Without thinking, Laing stepped over him. He wandered around the apartment, picking up an empty whisky decanter on the floor. A faint odour of malt whisky clung to it, an almost intoxicating nostalgia. Laing moved to the next apartment, also abandoned and gutted. In a bedroom he noticed that the carpet covered a small circular depression. Suspecting a secret food cache, he rolled back the carpet, and found that a manhole had been drilled through the wooden floorboards and concrete deck to the apartment below. After sealing the door, Laing lay down on the floor and peered into the room beneath. A circular glass table, by a miracle still intact, reflected his blood-spattered shirt and bearded face, staring up from what seemed to be the bottom of a deep well. Beside the table were two overturned armchairs. The balcony doors were closed, and curtains hung on either side of the windows. 
Looking down at this placid scene, Laing felt that he had accidentally been given a glimpse into a parallel world, where the laws of the high-rise were suspended, a magical domain where these huge buildings were furnished and decorated but never occupied. On an impulse, Laing eased his thin legs through the manhole. He sat on the ledge and swung himself down into the room below. Standing on the glass table, he surveyed the apartment. Hard experience told him that he was not alone – somewhere a miniature bell was ringing. A faint scratching came from the bedroom, as if a small animal was trying to escape from a paper sack. Laing pushed back the bedroom door. A red-haired woman in her mid-thirties lay fully dressed on the bed, playing with a Persian cat. The creature wore a velvet collar and bell, and its lead was attached to the woman's bloodied wrist. The cat vigorously licked at the bloodstains on its coat, and then seized the woman's wrist and gnawed at the thin flesh, trying to reopen a wound. The woman, whom Laing vaguely recognized as Eleanor Powell, made no effort to stop the cat from dining off her flesh. Her serious face, with its blue cyanosed hue, was inclined over the cat like that of a tolerant parent watching a child at play. Her left hand lay across the silk bedspread, touching a pencil and reporter's note-pad. Facing her, at the foot of the bed, were four television sets. They were tuned to different stations, but three of the screens were blank. On the fourth, a battery-powered set, the out-of-focus picture of a horserace was being projected soundlessly. Uninterested in her reviewing, Eleanor teased her bloodied wrist into the cat's mouth. The creature was ravenous, tearing excitedly at the flesh around the knuckle. Laing tried to pull the cat away, but Eleanor jerked at the lead, urging it back on to her wound. 'I'm keeping her alive,' she told Laing reprovingly. The cat's attentions brought a serene smile to her face. She raised her left hand. 
'Doctor, you may suckle my other wrist...Poor man, you look thin enough.' Laing listened to the sounds of the cat's teeth. The apartment was silent, and the noise of his own excited breathing was magnified to an uncanny extent. Would he soon be the last person alive in the high-rise? He thought of himself in this enormous building, free to roam its floors and concrete galleries, to climb its silent elevator shafts, to sit by himself in turn on every one of its thousand balconies. This dream, longed for since his arrival at the high-rise, suddenly unnerved him, almost as if, at last alone here, he had heard footsteps in the next room and come face to face with himself. He turned up the volume of the television set. A racetrack commentator's voice emerged from the speaker, a gabble of names that sounded like a demented inventory, a list of unrelated objects being recruited to repopulate the high-rise in an emergency transfusion of identity. 'What—? Where's the programme?' Eleanor raised her head, peering disjointedly at the television set. Her left hand scrabbled around for the dictation pad and pencil. 'What's he saying?' Laing slipped his arms under her. He intended to carry her, but her thin body was surprisingly heavy. He was weaker than he had thought. 'Can you walk? I'll come back later for the set.' She shrugged vaguely, swaying against Laing like a drunk in a bar accepting a dubious proposition from an old acquaintance. Sitting beside him on the edge of the bed, she leaned an arm on his shoulder, inspecting him with a shrewd eye. She tapped Laing aggressively on the arm. 'All right. First thing, though, find some batteries.' 'Of course.' Her show of wilfulness was pleasantly encouraging. As she watched from the bed he pulled a suitcase from the wardrobe and began to fill it with her clothes. So Laing took Eleanor Powell and her portable television set back to his apartment. 
He arranged her on a mattress in the living-room, and spent his days hunting the abandoned apartments for food, water and batteries. The reappearance of television in his life convinced Laing that everything in the high-rise was becoming normal again. When Steele moved on to the richer pastures above, Laing declined his offer to join him. Already Laing had decided to separate himself and his two women from everyone else. He needed to be alone with Alice and Eleanor, to be as aggressive and self-reliant, as passive and submissive as he wished. He had little idea at this early stage of what role he would play with these two women, but whatever he chose he would have to play out within his own walls. Laing knew that he was far happier now than ever before, despite all the hazards of his life, the likelihood that he would die at any time from hunger or assault. He was satisfied by his self-reliance, his ability to cope with the tasks of survival – foraging, keeping his wits about him, guarding his two women from any marauder who might want to use them for similar purposes. Above all, he was pleased with his good sense in giving rein to those impulses that involved him with Eleanor and his sister, perversities created by the limitless possibilities of the high-rise.

# 17 The Lakeside Pavilion

As if nervous of disturbing the interior of the apartment building, the morning sun explored the half-shuttered skylight of the 40th floor stairwell, slipped between the broken panes and fell obliquely down the steps. Shivering in the cold air five floors below, Richard Wilder watched the sunlight approach him. He sat on the steps, leaning against a dining-room table which formed part of a massive barricade blocking the staircase. After crouching here all night, Wilder was frozen. The higher up the building he moved, the colder it became, and at times he had been tempted to retreat to the floors below.
He looked down at the animal crouching beside him – a black poodle, he guessed it had once been – envying its shaggy coat. His own body was almost naked, and he rubbed at the lipstick smeared across his chest and shoulders, trying to insulate himself with this sweet grease. The dog's eyes were fixed on the landing above. Its ears pricked as it detected the sounds, inaudible to Wilder, of someone moving behind the barricade. During their ten days together the two had formed a successful hunting team, and Wilder was reluctant to urge the dog to attack before it was ready. The threadbare remains of Wilder's trousers, cut away at the knees, were stained with blood and wine. A ragged beard covered his heavy face, partly concealing an open wound on his jaw. He looked derelict and exhausted, but in fact his body was as strongly muscled as ever. His broad chest was covered with a hatchwork of painted lines, a vivid display that spread across his shoulders and back. At intervals he inspected the design, which he had painted the previous afternoon with a lipstick he had found in an abandoned apartment. What had begun as a drink-fuddled game had soon taken on a serious ritual character. The markings, apart from frightening the few other people he might come across, gave him a potent sense of identity. As well, they celebrated his long and now virtually successful ascent of the high-rise. Determined to look his best when he finally stepped on to the roof, Wilder licked his scarred fingers, massaging himself with one hand and freshening up his design with the other. He held the dog's leash in a strong grip and watched the landing ten steps above him. The sun, continuing its laboured descent of the stairwell, at last reached him and began to warm his skin. Wilder looked up at the skylight sixty feet above his head. The rectangle of white sky became more and more unreal as it drew closer, like the artificial ceiling of a film set. The dog quivered, edging its paws forwards. 
Only a few yards from them, someone was straightening part of the barricade. Wilder waited patiently, moving the dog up one step. For all the savage-like ferocity of his appearance, Wilder's behaviour was a model of restraint. Having come this far, he had no intention of being caught unawares. He peered through a crack in the dining-table. Behind the barricade someone pulled back a small mahogany writing-desk that served as a concealed door. Through this gap appeared an almost bald woman of about seventy. Her tough face peered into the stairwell. After a wary pause, she stepped through the gap to the landing rail, a champagne bucket in one hand. She was dressed in the remnants of an expensive evening gown, which exposed the mottled white skin of her muscular arms and shoulders. Wilder watched her with respect. He had tangled with these crones more than once, and was well aware that they were capable of a surprising turn of speed. Without moving, he waited as she leaned over the landing rail and emptied the slops from the champagne bucket. The cold grease spattered Wilder and the dog, but neither made any response. Wilder carefully wiped the cine-camera lying on the step beside him. Its lenses had been fractured during the skirmishes and assaults that had brought him to the roof of the high-rise, but the camera's role was now wholly emblematic. He felt the same identification with the camera that he did with the dog. However, for all his affection and loyalty towards the animal, the dog would soon be leaving him – they would both be present at a celebratory dinner when they reached the roof, he reflected with a touch of gallows-humour, but the poodle would be in the pot. Thinking of this supper to come – his first decent meal for weeks – Wilder watched the old woman pottering about. He wiped his beard, and cautiously raised himself from his knees. He pulled the dog's lead, a length of electric cord, and hissed between his broken teeth.
As if on cue, the dog whimpered. It stood up, shivering, and climbed two steps. In full view of the old woman it crouched down and began to whine plaintively. The old woman retreated swiftly behind her barricade. Within seconds a heavy carving-knife materialized in her hand. Her canny eyes peered down at the dog cringing on the steps below her. As it rolled on to its side and exposed its loins her eyes were riveted on its fleshy stomach and shoulders. As the dog whimpered again, Wilder watched from behind the dining-table. This moment never failed to amuse him. In fact, the higher he climbed the building, the greater its potential for humour. He still held the lead, which trailed behind the dog down the steps, but was careful to leave it loose. The old woman, unable to take her eyes off the dog, stepped through the gap in the barricade. She whistled through the gap in her false teeth, and beckoned the dog forward. 'Poor pet. You're lost, aren't you, beauty? Come on, up here...' Barely able to contain his glee at the spectacle of this bald-headed crone fawning with exaggerated pathos over the dog, Wilder leaned against the table, laughing soundlessly to himself. At any moment she would be in for a shock, his heavy boot on her neck. Behind the barricade a second figure appeared. A young woman of about thirty, probably the daughter, peered over the old woman's shoulder. Her suede jacket was unbuttoned to reveal a pair of grimy breasts, but her hair was elaborately wound into a mass of rollers, as if she were preparing parts of her body for some formal gala to which the rest of herself had not been invited. The two women stared down at the dog, their faces expressionless. As the daughter waited with the carving-knife the mother edged down the steps. Muttering reassuringly, she patted the poodle on the head and bent down to take the lead. As her strong fingers closed around the cord Wilder leapt forward. 
The dog sprang to life, hurled itself up the steps and sank its teeth into the old woman's arm. With surprising agility, she darted through the gap in the barricade, the dog clamped to her arm. Barely in time, Wilder followed her, kicking back the writing-desk before the daughter could lock it into place. He dragged the poodle from the old woman's bloodied arm, seized her by the neck and flung her sideways across a stack of cardboard cartons. She lay there stunned, like a dishevelled duchess surprised to find herself drunk at a ball. As Wilder turned away, wrestling with the dog, the daughter ran towards him. She had thrown the carving-knife aside. In one hand she held her hair curlers, in the other a silver handbag pistol. Wilder sidestepped out of her way, knocked the pistol from her hand and clubbed her backwards across the barricade. As the two women sat panting on the floor, Wilder looked down at the pistol at his feet, little more than a child's bright toy. He picked it up and began to inspect his new domain. He was standing in the entrance to the 35th-floor swimming-pool. The tank of foetid water, filled with debris, reflected the garbage-sacks heaped around the tiled verge. A small den had been built inside a stationary elevator in the lobby. Beside a burnt-out fire an elderly man – a former tax-consultant, Wilder seemed to recall – lay asleep, apparently unaware of the spasm of violence that had taken place. A chimney flue, fashioned from two lengths of balcony drainage pipe, exited over his head through the roof of the elevator. Still holding the pistol, Wilder watched the two women. The mother sat among the cardboard cartons, matter-of-factly bandaging her arm with a strip torn from her silk dress. The daughter squatted on the floor by the barricade, rubbing the bruise on her mouth and patting the head of Wilder's poodle. Wilder peered up the staircase to the 36th floor. 
The skirmish had excited him, and he was tempted to press on all the way to the roof. However, he had not eaten for more than a day, and the smell of animal fat hung in the air around the fire by the entrance to the den. Wilder beckoned the young woman towards him. Her bland, rather bovine face was vaguely familiar. Had she once been the wife of a film-company executive? She climbed to her feet and walked up to him, staring with interest at the emblems painted across his chest and shoulders, and at his exposed genitals. Pocketing the pistol, Wilder pulled her towards the den. They stepped over the old man and entered the elevator. Curtains hung from the walls, and two mattresses covered the floor. Holding the young woman to him, an arm around her shoulders, Wilder sat down against the rear wall of the elevator. He gazed across the lobby at the yellow water of the swimming-pool. Several of the changing cubicles had been converted into small, single-tenant cabins, but they were all now abandoned. Two bodies, he noted, floated in the pool, barely distinguishable from the other debris, the kitchen garbage and pieces of furniture. Wilder helped himself to the last of the small cat that had been barbecued above the fire. His teeth pulled at the stringy meat, the still warm fat almost intoxicating him as he sucked at the skewer. The young woman leaned affably against him, content to have Wilder's strong arm around her shoulders. The fresh smell of her body surprised him – the higher up the apartment building he moved the cleaner were the women. Wilder looked down at her unmarked face, as open and amiable as a domestic animal's. She seemed to have been totally untouched by events within the high-rise, as if waiting in some kind of insulated chamber for Wilder to appear. He tried to speak to her, but found himself grunting, unable to form the words with his broken teeth and scarred tongue.
Pleasantly high on the meat, he lay back comfortably against the young woman, playing with the silver handbag pistol. Without thinking, he opened the front of her suede jacket and loosened her breasts. He placed his hands over the small nipples and settled himself against her. He felt drowsy, murmuring to the young woman while she stroked the painted stripes on his chest and shoulders, her fingers moving endlessly across his skin as if writing a message to him. Lying back in this comfortable lakeside pavilion Wilder rested during the early afternoon. The young woman sat beside him, her breasts against his face, nursing this huge, nearly naked man with his painted body and exposed loins. Her mother and father pottered about in the lobby. Now and then the old woman in her evening gown pulled a piece of furniture at random from the barricade and chopped it into kindling with the carving knife. Wilder ignored them, conscious only of the young woman's body and the huge pillars that carried the apartment building upwards to the roof. Through the windows around the swimming-pool he could see the towers of the four high-rise blocks nearby, suspended like rectilinear clouds within the afternoon sky. The warmth within the elevator, which seemed to emanate from the young woman's breasts, had drained all will and energy from him. Her calm face gazed down at Wilder reassuringly. She had accepted him as she would any marauding hunter. First she would try to kill him, but failing this give him food and her body, breast-feed him back to a state of childishness and even, perhaps, feel affection for him. Then, the moment he was asleep, cut his throat. The synopsis of the ideal marriage. Rallying himself, Wilder sat up and put his boot into the poodle lying asleep on the mattress outside the elevator. The yelp of pain revived Wilder. He pushed the young woman away. 
He needed to sleep, but first he would move to a safer hiding-place, or the crone and her daughter would make short work of him. Without looking back, he stood up and dragged the dog behind him. He slipped the silver pistol into the waistband of his trousers and checked the patterns on his chest and shoulders. Carrying the cine-camera, he climbed past the barricade and re-entered the staircase, leaving behind the quiet encampment and the young woman beside her yellow lake. As he moved up the steps everything was silent. The staircase was carpeted, muffling the tread of his boots, and he was too distracted by the sounds of his own breathing to notice that the walls around him had been freshly painted, their white surfaces gleaming in the afternoon sunlight like the entrance to an abattoir. Wilder climbed to the 37th floor, smelling the icy air moving across his naked body from the open sky. He could hear now, more clearly than ever before, the crying of gulls. When the dog began to whimper, reluctant to go any further, he turned it loose, and watched it disappear down the stairs. The 37th floor was deserted, apartment doors open on the bright air. Too exhausted to think, he found an empty apartment, barricaded himself into the living-room and sank into a deep sleep on the floor.

# 18 The Blood Garden

By contrast, Anthony Royal, high on the open roof three floors above, had never been more awake. Ready at last to join the sea-birds, he stood at the windows of his penthouse, looking out over the open plazas of the development project towards the distant mouth of the river. Washed by the recent rain, the morning air was clear but frozen, and the river flowed from the city like a stream of ice. For two days Royal had eaten nothing, but far from exhausting him the absence of food had stimulated every nerve and muscle in his body. The shrieking of the gulls filled the air, and seemed to tear at the exposed tissues of his brain.
They rose from the elevator heads and balustrades in a continuous fountain, soared into the air to form an expanding vortex and dived down again towards the sculpture-garden. Royal was certain now that they were calling for him. He had been deserted by the dogs – as soon as he freed them they had disappeared into the stairways and corridors below – and only the white Alsatian remained. It sat at Royal's feet by the open windows, mesmerized by the movement of the birds. Its wounds had healed now, and its thick arctic coat was white again. Royal missed the stains, as he did the bloody hand-prints that Mrs Wilder had washed from his jacket. The little food Royal had taken with him before sealing himself into the penthouse he had given to the dog, but already he felt himself beyond hunger. For three days he had seen no one, and was glad to have cut himself off from his wife and neighbours. Looking up at the whirling cloud of gulls, he knew that they were the true residents of the high-rise. Without realizing it at the time, he had designed the sculpture-garden for them alone. Royal shivered in the cold air. He wore his safari-jacket, and the thin linen gave him no protection against the wind moving across the concrete roof. In the over-lit air the white fabric was grey by comparison with Royal's chalk-like skin. Barely able to control himself, and uncertain whether the scars of his accident had begun to reopen themselves, he stepped on to the terrace and walked across the roof. The gulls sidled around him, rolling their heads and wiping their beaks against the concrete. The surface was streaked with blood. For the first time Royal saw that the ledges and balustrades were covered with these bloody notches, the symbols of a mysterious calligraphy. Voices sounded in the distance, a murmur of women. In the central section of the observation deck, beyond the sculpture-garden, a group of women residents had gathered for some kind of public discussion. 
Unsettled by this intrusion into his private landscape, and its reminder that he was not yet alone in the apartment building, Royal retreated behind the rear wall of the sculpture-garden. The voices moved around him, talking away informally as if this were the latest of many similar visits. Perhaps he had been asleep during their previous excursions, or with the cooler weather they had decided to move their meeting place further along the roof to the shelter of his penthouse. The vortex of birds was breaking up. As Royal returned to the penthouse the spiral had begun to disintegrate. The gulls dived away across the face of the building far below. Urging the Alsatian ahead of him, Royal emerged from behind the rear wall of the sculpture-garden. Two of the women were standing inside the penthouse, one of them with a hand on the callisthenics machine. What startled Royal was their casual stance, as if they were about to move into a vacation villa they had recently rented. Royal retreated behind an elevator head. After being alone with the birds and the white Alsatian for so long the sight of these human intruders unsettled him. He pulled the dog against his legs, deciding to wait in the sculpture-garden until the visiting party had left. He pushed back the rear door of the garden, and walked between the painted geometric forms. Dozens of the gulls surrounded him, crowded together on the tiled floor. They followed Royal expectantly, almost as if they had been waiting for him to bring something to them. His feet slipped on the wet tiles. Looking down, he found a piece of gristle attached to his shoe. Pulling it away, he leaned against one of the concrete sculptures, a waist-high sphere that had been painted bright carmine. When he drew his hand away it was wet with blood. As the birds strutted ahead, clearing an open space for him, he saw that the whole interior of the play-garden was drenched with blood. The tiled floor was slick with bright mucilage. 
The Alsatian snuffled greedily, wolfing down a shred of flesh lying by the edge of the paddling pool. Appalled, Royal stared at the blood-spattered tiles, at his bright hands, at the white bones picked clean by the birds. It was late afternoon when Wilder woke. Cold air moved through the empty room, flicking at a newspaper on the floor. The apartment was without shadows. Wilder listened to the wind moving down the ventilation shafts. The screaming of the gulls had ended, as if the birds had gone away for ever. Wilder sat on the floor in a corner of the living-room, an apex of this untenanted cube. Feeling the pressure of his back against the wall, he could almost believe that he was the first and last occupant of this apartment building. He climbed to his feet and walked across the floor to the balcony. Far below, he could see the thousands of cars in the parking-lots, but they were screened from him by a faint mist, part of the corroborative detail of a world other than his own. Sucking at the traces of animal fat that clung to his fingers, Wilder entered the kitchen. The cupboards and refrigerator were empty. He thought of the young woman and her warm body in the elevator beside the pool, wondering whether to go back to her. He remembered her stroking his chest and shoulders, and could feel the pressure of her hands on his skin. Still sucking his fingers, and thinking of himself abandoned in this huge building, Wilder stepped out of the apartment. The corridor was silent, the cold air stirring the tags of refuse on the floor. He carried the cine-camera in his left hand, but he was no longer certain what its function was, or why he had kept it with him for so long. The silver pistol, by contrast, he recognized immediately. He held it in his right hand, pointing it playfully at the open doorways, and half-hoping that someone would come out to join him in his game. The top floors of the building had been partially invaded by the sky. 
He saw white clouds through an elevator shaft, framed in the skylight of the stairwell as he climbed to the 40th floor. Feinting with the pistol, Wilder darted across the elevator lobby of the 40th floor. There were no barricades here, and a recent attempt had been made at housekeeping. The garbage-sacks had been removed, the barricades dismantled, the lobby furniture re-installed. Someone had scrubbed the walls, clearing away all traces of the graffiti, duty rosters and elevator embarkation times. Behind him, a door closed in the wind, cutting off a shaft of light. Enjoying this game with himself in the empty building, and certain that someone would soon turn up to play with him, Wilder dropped to one knee and levelled the pistol at an imaginary assailant. He darted down the corridor, kicked back the door and burst into the apartment. The apartment was the largest he had seen in the building, far more spacious than any others on the upper floors. Like the lobby and corridor, the rooms had been carefully cleaned, the carpets re-laid, the curtains hung around the high windows. On the polished dining-room table stood two silver candlesticks. Impressed by this sight, Wilder wandered around the gleaming table. In some confused way he felt that he had already been here, many years before he came to this empty building. The high ceiling and masculine furniture reminded him of a house he had visited as a small child. He wandered around the refurnished rooms, almost expecting to find his childhood toys, a cot and playpen laid out for his arrival. Between the bedrooms a private staircase led upwards to another chamber, and a small suite of rooms overlooking the roof. Excited by the mystery and challenge of this secret staircase, Wilder began to climb the steps. Licking the last of the fat from his fingers, he trumpeted happily to himself. He was half-way up the staircase, climbing towards the open air, when something blocked his path. 
The gaunt figure of a tall, white-haired man had stepped forward from the shadows. Far older than Wilder, his hair dishevelled by the wind, he stood at the head of the staircase, looking down silently at the intruder below him. His face was concealed by the harsh light, but the scars on the bony points of his forehead stood out clearly, like the fresh hand-stains that marked his white jacket. Dimly recognizing this wild old man of the observation roof, Wilder stopped on the stairs. He was unsure whether Royal had come to play with him or to reprimand him. From Royal's nervous posture, and his destitute appearance, Wilder guessed that he had been hiding somewhere, but not as part of a game. Hoping none the less to enlist him, Wilder waved his pistol playfully at Royal. To his surprise the architect flinched back, as if pretending to be frightened. As Wilder climbed towards him he raised the chromium cane in his hand and hurled it down the staircase. The metal rod struck the hand-rail and whipped across Wilder's left arm. Stung by the pain of the blow, Wilder dropped the cine-camera. His arm was numb, and for a moment he felt helpless, like an abused child. As the architect advanced down the steps towards him, Wilder raised the silver pistol and shot him through the chest. When the brief explosion had faded across the cold air, Wilder climbed the last of the steps. The architect's body lay awkwardly across the staircase, as if he were pretending to be dead. His scarred face, drained of all blood, was turned away from Wilder. He was still alive, staring through the open windows at the last of the birds that the explosion had driven into the air. Confused by this game, and its unexpected turns, Wilder stepped over him. The cine-camera lay at the bottom of the staircase, but he decided to leave it there. Rubbing his injured arm, he threw away the pistol that had jarred his hand and stepped through the french windows. 
Twenty yards away, children were playing in the sculpture-garden. The doors, chained for so long to exclude them, were now wide open, and Wilder could see the geometric forms of the play-sculptures, their vivid colours standing out against the white walls. Everything had been freshly painted, and the roof was vibrant with light. Wilder waved to the children, but none of them saw him. Their presence revived him, and he felt a surge of triumph at having climbed all the way to the roof to find them. The strange, scarred man in the blood-printed jacket lying on the steps behind him had not understood his game. One of the children, an infant boy of two, was naked, running in and out of the sculptures. Quickly Wilder loosened his ragged trousers and let them fall to his ankles. Stumbling a little, as if he was forgetting how to use his legs, he ran forward naked to join his friends. In the centre of the sculpture-garden, beside the empty paddling pool, a woman was lighting a large fire from pieces of furniture. Her strong hands adjusted a heavy spit assembled from the chromium tubing of a large callisthenics device. She squatted beside the fire, stacking the chair-legs as the children played together. Wilder walked forward, shyly hoping that the woman would notice the patterns painted across his chest. As he waited for the children to ask him to play with them he saw that a second woman was standing ten feet away to his left. She was wearing an ankle-length dress and a long gingham apron, her hair drawn back off her severe face and tied in a knot behind her neck. Wilder stopped among the statues, embarrassed that no one had noticed him. Two more women, dressed in the same formal way, had appeared by the gate. Others were stepping forward among the sculptures, surrounding Wilder in a loose circle. They seemed to belong to another century and another landscape, except for their sunglasses, whose dark shades stood out against the blood-notched concrete of the roof-terrace. 
Wilder waited for them to speak to him. He was glad to be naked and show off his body with its painted patterns. At last the woman kneeling by the fire looked over her shoulder at him. Despite her change of dress he recognized her as his wife Judith. He was about to run forward to her, but her matter-of-fact gaze, her unimpressed appraisal of his heavy loins, made him stop. By now he was aware that he knew all the women around him. Dimly he recognized Charlotte Melville, a scarf around her bruised throat, watching him without hostility. Standing next to Jane Sheridan was Royal's young wife, now a governess supervising the smallest children. He recognized the jeweller's widow in her long fur coat, her face made up like his own body with red paint. Looking over his shoulder, if only to confirm that his escape was blocked, he could see the stately figure of the children's-story writer seated in the open window of the penthouse like a queen in her pavilion. In a last moment of hope he thought that perhaps she would read him a story. In front of him the children in the sculpture-garden were playing with bones. The circle of women drew closer. The first flames lifted from the fire, the varnish of the antique chairs crackling swiftly. From behind their sunglasses the women were looking intently at Wilder, as if reminded that their hard work had given them a strong appetite. Together, each removed something from the deep pocket of her apron. In their bloodied hands they carried knives with narrow blades. Shy but happy now, Wilder tottered across the roof to meet his new mothers.

# 19 Night Games

Dinner was about to be served. Sitting on his balcony on the 25th floor, Robert Laing stirred the bright embers of the fire he had lit from pages of a telephone directory. The flames illuminated the handsome shoulders and thorax of the Alsatian roasting on its spit. 
Laing fanned the flames, hoping that Alice and Eleanor Powell, lying together in his sister's bed, would appreciate all he had done. He methodically basted the dark skin of the Alsatian, which he had stuffed with garlic and herbs. 'One rule in life,' he murmured to himself. 'If you can smell garlic, everything is all right.' For the moment, at least, everything was highly satisfactory. The Alsatian was almost cooked, and a large meal would do the women good. Both had become querulous recently as a result of the shortage of food, and had been too tired to appreciate Laing's skill and courage in capturing the dog, let alone the exhausting task of skinning and disembowelling this huge animal. They had even complained about its nervous whimperings as Laing turned the pages of an advanced cookery book he had found in a nearby apartment. Laing had debated for some time how best to cook the dog. From the extent of its shivering and whining, the problem had communicated itself to the Alsatian, as if it was aware that it was one of the last animals in the high-rise and for that reason alone merited a major culinary effort. The thought of the weeks of hunger to come momentarily unsettled Laing, and he fed more sheets of paper into the balcony fire. Perhaps there was game to be found on the lower levels, though Laing never ventured below the 20th floor. The stench from the swimming-pool on the 10th floor was too disturbing, and reached up every ventilation flue and elevator shaft. Laing had descended to the lower levels only once during the previous month, when he had briefly played Samaritan to Anthony Royal. Laing had found the dying architect while chopping firewood in the 25th-floor lobby. As he pulled an antique dressing-table from the disused barricade, Royal had fallen through the gap, almost knocking Laing to the floor. 
A small wound had opened Royal's chest, covering his white jacket with huge bloodstains in the outline of his hands, as if he had tried to identify himself with these imprints of his own death to come. He was clearly on his last legs, eyes unfocused, the bones of his forehead cutting through the over-stretched skin. Somehow he had managed to descend all the way from the 40th floor. Rambling continually, he stumbled down the staircase, partly supported by Laing, until they reached the 10th floor. As they stepped on to the shopping mall the stench of rotting flesh hung over the deserted counters of the supermarket, and at first Laing assumed that a concealed meat-store had burst open and begun to putrefy. Appetite keening, he had been about to drop Royal and head off in search of food. But Royal, eyes almost closed, one hand gripping Laing's shoulder, pointed towards the swimming-pool. In the yellow light reflected off the greasy tiles, the long tank of the bone-pit stretched in front of them. The water had long since drained away, but the sloping floor was covered with the skulls, bones and dismembered limbs of dozens of corpses. Tangled together where they had been flung, they lay about like the tenants of a crowded beach visited by a sudden holocaust. Disturbed less by the sight of these mutilated bodies – residents who had died of old age or disease and then been attacked by wild dogs, Laing assumed – than by the stench, Laing turned away. Royal, who had clung so fiercely to him during their descent of the building, no longer needed him, and dragged himself away along the line of changing cubicles. When Laing last saw him, he was moving towards the steps at the shallow end of the swimming-pool, as if hoping to find a seat for himself on this terminal slope. Laing crouched over the fire, testing the hind-quarters of the Alsatian with a skewer. He shivered in the cold air flowing up the face of the high-rise, with an effort repressing his memory of the bone-pit. 
At times he suspected that some of the residents had reverted to cannibalism – the flesh had been stripped with a surgeon's skill from many of the corpses. The lower-level residents, under constant pressure and discrimination, had probably given in to necessity. 'Robert...! What are you doing...?' Alice's querulous voice roused Laing from his reverie. Wiping his hands on his apron, he hurried into the bedroom. 'It's all right – dinner is nearly ready.' He spoke in the reassuring, childlike voice he had used during his hospital training with the duller of his child patients, a tone at variance with the intelligent and bored gaze of the two women in the bed. 'You're filling the place with smoke,' Eleanor told him. 'Are you sending up signals again?' 'No...it's the telephone directories. The paper must be made of plastic.' Alice shook her head wearily. 'What about Eleanor's batteries? You promised to find her some. She's got to start reviewing again.' 'Yes, I know...' Laing looked down at the blank screen of the portable television set sitting on the floor beside Eleanor. He felt stumped for an answer – despite all his efforts, the last of the batteries had been used. Eleanor stared at him severely. She had opened the wound on her wrist and was coyly exposing it to the cat watching with interest from the far side of the room. 'We've been discussing whether you should move to another apartment.' 'What?' Unsure whether the pantomime had become serious, Laing laughed delightedly, excited all the more when Eleanor refused to let her customary slow smile cross her mouth. The two women lay side by side, so close that they seemed to be merging into each other. At intervals throughout the day he brought them their food, but he was never sure exactly whose bodily needs and functions he was satisfying. They had moved into the same bed for warmth and security, but really, Laing suspected, so that they could synchronize their supervision over him. 
They knew that they were dependent on Laing. Despite the 'pantomime' their behaviour was entirely geared to meeting Laing's private needs in return for his attention to the business of their physical survival. The exchange suited Laing admirably, just as it suited him to have them in bed together – he was faced with only one set of wheedling demands, one repertory of neurotic games. He liked to see Eleanor's old spirit emerge. Both women suffered seriously from malnutrition, and it encouraged him when they were well enough to play their parts in this loosely evolving pantomime, treating him like two governesses in a rich man's ménage, teasing a wayward and introspective child. At times Laing liked to carry the game to its logical conclusion, and imagine that it was the two women who were in charge, and that they despised him totally. This ultimate role had helped him on one occasion, when a marauding band of women led by Mrs Wilder had entered the apartment. Seeing Laing being abused, and assuming him to be Eleanor's and Alice's prisoner, they had left. On the other hand, perhaps they understood all too well what was really taking place. Whatever the answer, Laing was free for the time being to live within this intimate family circle, the first he had known since his childhood. The situation allowed him ample freedom to explore himself, and the strong element of unpredictability kept everyone alert. Although he might wheedle at their breast he could easily become vicious. The women admired him for this. A substantial number of morphine ampoules were left, and he planned to introduce the two women to this heady elixir. Their addiction would tilt the balance of authority in his direction again, and increase their dependence on him. Ironically, it was here, in the high-rise, that he had found his first patients. 
Later, after he had carved the dog and served generous but not excessive portions to the two women, Laing thought about his good fortune as he sat on the balcony with his back to the railing. Above all, now, it no longer mattered how he behaved, what wayward impulses he gave way to, or which perverse pathways he chose to follow. He was sorry that Royal had died, as he owed the architect a debt of gratitude for having helped to design the high-rise and make all this possible. It was strange that Royal had felt any guilt before his death. Laing waved reassuringly to the two women, who sat on the mattress with the tray across their knees, eating from the same plate. Laing finished the dark, garlic-flavoured meat, and looked up at the face of the high-rise. All the floors were in darkness, and he felt happy at this. His affection for the two women was real, like his pride in keeping them alive, but this in no way interfered with his new-found freedom. On the whole, life in the high-rise had been kind to him. To an increasing extent, everything was returning to normal. Laing had begun to think again of the medical school. He might well pay a visit to the physiology laboratory the next day, and perhaps take a supervision. First, though, he would clean up. He had noticed two women neighbours sweeping the corridor. It might even be possible to get an elevator working. Perhaps he would take over a second apartment, dismantle the barricades and begin to refurnish it. Laing thought of Eleanor's threat to banish him. He toyed with the notion, feeling an illicit thrill of pleasure at the prospect. He would have to think of something with which to win their favour again. However, all this, like the morphine he would give them in increasing doses, was only a beginning, trivial rehearsals for the real excitements to come. Feeling these gather within him, Laing leaned against the railing. Dusk had settled, and the embers of the fire glowed in the darkness. 
The silhouette of the large dog on the spit resembled the flying figure of a mutilated man, soaring with immense energy across the night sky, embers glowing with the fire of jewels in his skin. Laing looked out at the high-rise four hundred yards away. A temporary power failure had occurred, and on the 7th floor all the lights were out. Already torch-beams were moving about in the darkness, as the residents made their first confused attempts to discover where they were. Laing watched them contentedly, ready to welcome them to their new world.

# By the same author

_The Drowned World_
_The Voices of Time_
_The Terminal Beach_
_The Drought_
_The Crystal World_
_The Day of Forever_
_The Venus Hunters_
_The Disaster Area_
_The Atrocity Exhibition_
_Vermilion Sands_
_Crash_
_Concrete Island_
_Low-Flying Aircraft_
_The Unlimited Dream Company_
_Hello America_
_Myths of the Near Future_
_Empire of the Sun_
_Running Wild_
_The Day of Creation_
_War Fever_
_The Kindness of Women_
_Rushing to Paradise_
_A User's Guide to the Millennium_ (non-fiction)
_Cocaine Nights_
_Super-Cannes_
_The Complete Short Stories_
_Millennium People_
_Kingdom Come_
_Miracles of Life_

# About the Author

J. G. Ballard was born in 1930 in Shanghai, where his father was a businessman. He and his family were interned in a civilian prison camp; following their release, they returned to England in 1946. After reading medicine at Cambridge for two years, he worked as a copywriter and Covent Garden porter before going to Canada with the RAF. His first major novel, _The Drowned World_, was published in 1962. His acclaimed novels include _The Atrocity Exhibition_, _Crash_ (filmed by David Cronenberg), _High-Rise_, _The Unlimited Dream Company_, _Empire of the Sun_ (filmed by Steven Spielberg), _The Kindness of Women_, _Cocaine Nights_, _Super-Cannes_, _Millennium People_ and _Kingdom Come_. Ballard's autobiography, _Miracles of Life_, was published in 2008 to great acclaim. J. G. Ballard died in 2009.

# Interview with J. G. Ballard

_J. G. 
Ballard talks to Travis Elborough_ **IN SEVERAL OF your novels you have used a small community, the residents of a luxury housing development or a high-rise block for example, as a microcosm with which to explore the fragility of civil society. Do you think that your preoccupation with social regression, de-evolution even, stems from your childhood experiences in the internment camp when you saw, first hand, how easily the veneer of civilization could slip away?** Yes, I think it does; although anyone who has experienced a war first hand knows that it completely overturns every conventional idea of what makes up day-to-day reality. You never feel quite the same again. It's like walking away from a plane crash; the world changes for you for ever. The experience of spending nearly three years in a camp, especially as an early teenage boy, taking a keen interest in the behaviour of adults around him, including his own parents, and seeing them stripped of all the garments of authority that protect adults generally in their dealings with children, to see them stripped of any kind of defence, often losing heart a bit, being humiliated and frightened – and we all felt the war was going to go on for ever and heaven knows what might happen in the final stages – all of that was a remarkable education. It was unique, and it gave me a tremendous insight into what makes up human behaviour. **You've written that the landscape of even your first novel,** **_The Drowned World,_** **a futuristic portrait of a flooded twenty-first-century London, was clearly informed by your memories of Shanghai. I wondered if you could say a little about how, after having possibly explored it obliquely in your works of science fiction, you came to write so directly about your childhood experiences in** **_Empire of the Sun?_** 'Anyone who has experienced a war first hand knows that it completely overturns every conventional idea of what makes up day-to-day reality. 
It's like walking away from a plane crash.' I had always planned to write about my experiences of the Second World War, Shanghai under the Japanese and the camp. I knew that it was such an important event, and not just for me. But when I came to England in 1946 I had to face the huge problem of adjusting to life here. England in those days was a very, very strange place. There was an elaborate class system that I'd never come across in Shanghai. England...it was a terribly shabby place, you know, locked into the past and absolutely exhausted by the war. It was only on a technicality that we could be said to have won the war; in many ways we'd lost it. Financially we were desperate. I had to cope with all this. By 1949 the Communists had taken over China and I knew I would never go back. So there seemed no point in keeping those memories alive, I felt I had to come to terms with life in England. This is, after all, where I was educated. I got married and began my career as a writer. England interested me. It seemed to be a sort of disaster area. It was a subject and a disaster in its own right. I was interested in change, which I could see was coming in a big way, everything from supermarkets to jet travel, television and the consumer society. I remember thinking, my God, these things will bring change to England and reveal the strange psychology of these tormented people. So I began writing science fiction, although most readers of science fiction did not consider me to be a science fiction writer. They saw me as an interloper, a sort of virus that had got into the cell of science fiction, entered its nucleus and destroyed it. But all this while I could see bits of my China past floating up and I knew I was going to write it up at some point. **You studied medicine and have stated that you believe that the contemporary novelist should be like a scientist. Do you ever regret not qualifying as a doctor?** I was very interested in medicine. 
The experience of dissecting cadavers for two years was a very important one for me, for all sorts of reasons. I do think that novelists should be like scientists, dissecting the cadaver...I would like to have become a doctor, but the urge to write was too great. I knew from friends of mine who were a year or two ahead of me that once you actually joined a London hospital or became a junior doctor the pressures of work were too great. I'd never have any time to write, and the urge to write was just too strong. **Do you think there is a moral purpose to your fiction?** I am not sure about that. I see myself more as a kind of investigator, a scout who is sent on ahead to see if the water is drinkable or not. **As a scout or investigator you've been uncannily prescient, famously predicting Reagan's presidency in _The Atrocity Exhibition_ , and I noticed that one commentator made reference to _The Drowned World_ in the aftermath of the New Orleans disaster. Have you ever worried that you might be too prescient? ** An investigator and a sort of early warning system, let's put it like that. I suppose one of the things I took from my wartime experiences was that reality was a stage set. The reality that you took for granted – the comfortable day-to-day life, school, the home where one lives, the familiar street and all the rest of it, the trips to the swimming pool and the cinema – was just a stage set. They could be dismantled overnight, which they literally were when the Japanese occupied Shanghai and turned our lives upside down. I think that experience left me with a very sceptical eye, which I've turned on to something even as settled as English suburbia where I now live. Nothing is as secure as we like to think it is. One doesn't just have to think of Hurricane Katrina and New Orleans – this applies to everything. 
A large part of my fiction tries to analyse what is going on around us, and whether we are much different people from the civilized human beings we imagine ourselves to be. I think it's true of all my fiction. I think that investigative spirit forms all my novels really. # Copyright Fourth Estate An imprint of HarperCollins _Publishers_ 77–85 Fulham Palace Road, Hammersmith London W6 8JB www.4thestate.co.uk This edition published by Fourth Estate in 2014 First published in Great Britain by Jonathan Cape Ltd 1975 Copyright © J.G. Ballard 1975 J.G. Ballard asserts the moral right to be identified as the author of this work. Introduction © Ned Beauman 2014 Interview © Travis Elborough 2006 This novel is entirely a work of fiction. The names, characters and incidents portrayed in it are the work of the author's imagination. Any resemblance to actual persons, living or dead, events or localities is entirely coincidental. A catalogue record for this book is available from the British Library All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this ebook on-screen. No part of this text may be reproduced, transmitted, down-loaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins. HarperCollins _Publishers_ has made every reasonable effort to ensure that any picture content and written content in this ebook has been included or removed in accordance with the contractual and technological constraints in operation at the time of publication. Source ISBN: 9780586044568 Ebook Edition © ISBN: 9780007382910 Version: 2014-04-09 # About the Publisher **Australia** HarperCollins Publishers (Australia) Pty. Ltd. 
Level 13, 201 Elizabeth Street Sydney, NSW 2000, Australia <http://www.harpercollins.com.au> **Canada** HarperCollins Canada 2 Bloor Street East - 20th Floor Toronto, ON, M4W, 1A8, Canada <http://www.harpercollins.ca> **New Zealand** HarperCollins Publishers (New Zealand) Limited P.O. Box 1 Auckland, New Zealand <http://www.harpercollins.co.nz> **United Kingdom** HarperCollins Publishers Ltd. 77-85 Fulham Palace Road London, W6 8JB, UK <http://www.harpercollins.co.uk> **United States** HarperCollins Publishers Inc. 10 East 53rd Street New York, NY 10022 <http://www.harpercollins.com>
\section{Introduction} Invention of complex canonical variables \cite{Ash-1} opened a new avenue for the non-perturbative treatment of quantum general relativity. In these new variables all constraints were made polynomial at the expense of introducing reality conditions. Afterwards, many gravitational theories were re-formulated in a similar way, including even eleven-dimensional supergravity \cite{MeNi}. Quite spectacular success was achieved in loop quantum gravity \cite{Rov}. In view of the recent progress of non-perturbative methods it seems especially important to develop a path integral formulation of the Ashtekar gravity which could serve as a bridge between perturbative and non-perturbative results. The constraint structure of the Ashtekar gravity has been studied in some detail (for reviews, see \cite{Abook} and \cite{Peldan}). The BRST charge was constructed \cite{AshMaTo}. However, these results are still insufficient for constructing a path integral. It is known that any restriction imposed on integration variables may lead to Faddeev--Popov ghosts \cite{FaPo}. It is unclear what kind of ghost action is induced by the reality conditions. It is obvious that the path integral for the Ashtekar gravity will have a somewhat unusual form. In the case of complex scalar fields the action is real and one integrates over the whole complex plane. In the case of Ashtekar gravity the action is holomorphic. Thus one may expect some sort of contour integration. The position of the contour must be defined by using the reality conditions. However, it is not known yet which gauges are compatible with these conditions. Our strategy is rather simple. We derive the path integral for the Hilbert--Palatini gravity and then rewrite it in terms of the Ashtekar variables. By itself, the first part of our work is not a great novelty. The Hamiltonian structure of the Hilbert--Palatini gravity has been analyzed in a number of papers \cite{He,NeTe,ABaJo,Abook,Peldan}. 
Given this analysis, construction of the path integral is quite straightforward. However, the transition to the Ashtekar variables requires a complex canonical transformation which is not well defined in the path integral. We would also like to avoid any gauge fixing at intermediate steps before the path integral is written down. Thus we are forced to choose a basis in the Hilbert--Palatini action different from the ones used earlier and redo the calculations of the constraint algebra, BRST charge, etc. A price to pay for the relatively easy transition to the Ashtekar variables in the path integral is an ugly form of the Hamiltonian constraint of the Hilbert--Palatini action. It leads to lengthy calculations at intermediate steps, which are reported here in some detail to make the paper self-contained. As our main result, we transformed the Hilbert--Palatini path integral to the Ashtekar variables. This can be done successfully for a restricted class of gauges only. One is not allowed to impose gauge conditions on the connection variables. Therefore, path integral quantization of the Ashtekar gravity in an arbitrary gauge remains an open problem. The paper is organized as follows. In the next section some preliminary information on the selfdual Hilbert--Palatini action is collected. We introduce variables which will be convenient for the construction of the path integral, re-derive the Ashtekar action and give some useful equations. In the third section we re-consider the constraint structure of the Hilbert--Palatini gravity in terms of our variables. The fourth section is devoted to the BRST quantization of the Hilbert--Palatini gravity. In section V we establish a relation between the first and second class constraints of the Hilbert--Palatini action, the reality conditions, and the vanishing of the imaginary part of the Ashtekar action. In the sixth section we re-write the path integral in terms of the Ashtekar variables. This represents our main result. 
The reader who does not want to go into the technicalities of the BRST quantization will find a simple derivation of the Faddeev path integral for the Ashtekar gravity in section VII. In the last section some perspectives are briefly discussed. Technical details are collected in the Appendices. \section{Selfdual Hilbert--Palatini action} Let $\Omega^{\gamma\delta}=d\omega^{\gamma\delta}+ {\omega^\gamma}_\alpha \wedge \omega^{\alpha\delta}$, where $\omega$ and $e$ are the connection and tetrad one-forms respectively. The signature of the metric is $(-,+,+,+)$. The Levi--Civita tensor is defined by the equation $\varepsilon_{0123}=1$. Define the star operator as $\star \omega^{\alpha\beta}= \frac 12 {\varepsilon^{\alpha\beta}}_{\gamma\delta} \omega^{\gamma\delta}$. Define \begin{eqnarray} A^{\alpha\beta}&=&\frac 12 (\omega^{\alpha\beta} -i\star \omega^{\alpha\beta}) \label{AF} \\ {\cal F}^{\alpha\beta} &=& dA^{\alpha\beta}+{A^\alpha}_\gamma \wedge A^{\gamma\beta}=\frac 12 (\Omega^{\alpha\beta} -i\star \Omega^{\alpha\beta}) \nonumber \end{eqnarray} These fields satisfy $\star A=iA$, $\star {\cal F} =i{\cal F}$. Let us start with the selfdual Hilbert--Palatini action expressed in terms of the selfdual connection only \cite{ABaJo,Sam,Ben,Wal}: \begin{equation} S_{SD}=\int \varepsilon_{\alpha\beta\gamma\delta} e^\alpha \wedge e^\beta \wedge {\cal F}^{\gamma\delta} \label{HP1} \end{equation} Let us split coordinates $x^\mu$ into "time" $t$ and "space" $x^i$ and introduce the notations: \begin{eqnarray} e^0&=&Ndt+\chi_a E_i^a dx^i ,\quad e^a=E^a_idx^i+E^a_iN^idt \nonumber \\ A^a_i&=&\varepsilon^{abc}A_{bci} ,\quad A^a_0=\varepsilon^{abc}A_{bc0} \nonumber \\ F^a_{ij}&=&\varepsilon^{abc}{\cal F}_{ij,bc} \label{split} \end{eqnarray} where $a,b,c=1,2,3$ are flat $SO(3)$ indices. $E_a^i$ will denote the inverse of $E_i^a$. 
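Since the star operator is fixed entirely by the signature $(-,+,+,+)$ and the convention $\varepsilon_{0123}=1$, the stated self-duality properties can be verified numerically: in Lorentzian signature $\star\star=-1$ on two-forms, so $\star$ has eigenvalues $\pm i$ and the projection $A=\frac 12(\omega-i\star\omega)$ obeys $\star A=iA$. The following sketch is not part of the original paper; the array names are illustrative.

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-,+,+,+)

# Levi-Civita tensor with eps_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sign, lst = 1, list(p)
    for i in range(4):
        for j in range(i + 1, 4):
            if lst[i] > lst[j]:
                sign = -sign
    eps[p] = sign

# raise the last two indices: eps_{ab}^{cd} = eps_{abef} eta^{ec} eta^{fd}
eps_ud = np.einsum('abef,ec,fd->abcd', eps, eta, eta)

def star(w):
    """(star w)_{ab} = 1/2 eps_{ab}^{cd} w_{cd}"""
    return 0.5 * np.einsum('abcd,cd->ab', eps_ud, w)

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
w = m - m.T                       # a generic antisymmetric two-form

# Lorentzian signature: star is a complex structure on two-forms
assert np.allclose(star(star(w)), -w)

# the selfdual projection A = (w - i star w)/2 satisfies star A = i A
A = 0.5 * (w - 1j * star(w))
assert np.allclose(star(A), 1j * A)
```

The same check with `eta = np.eye(4)` would instead give $\star\star=+1$, which is why self-duality in Lorentzian signature forces complex connections in the first place.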
We also need weighted fields: \begin{equation} {\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E}^i_a =\sqrt{h}E^i_a, \quad \mathop{N}\limits_{\sim}=\bigl(\sqrt{h}\bigr)^{-1} N \label{wei} \end{equation} $\sqrt{h}=\det E^a_i$. After long but elementary calculations we can represent (\ref{HP1}) in the following form \begin{eqnarray} S_{SD}&=&2 \int dt\ d^3x (P_a^i\partial_tA_i^a+A_0^a{\cal G}_a +N^i{\cal H}_i+\mathop{N}\limits_{\sim}{\cal H}) , \nonumber \\ P_a^i&=&i(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a-i{\varepsilon_a}^{bc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\chi_c ) , \nonumber \\ {\cal G}_a&=&\nabla_iP^i_a=\partial_iP^i_a -\varepsilon_{abc}A^b_iP^{ci} , \nonumber \\ {\cal H}_i&=&-2i\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_aF^a_{ik}-\varepsilon_{ijk}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^k\varepsilon^{lmn}\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}^d_l\chi_d F^{ab}_{mn} , \nonumber \\ {\cal H}&=&2\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_b F^{ab}_{ik} ,\label{HP2} \end{eqnarray} $\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^a=h^{-1/2}E_i^a$. By a suitable redefinition of Lagrange multipliers $\chi^a$ can be removed from the action. 
\begin{equation} {\cal N}_D^i=N^i+\frac {\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\chi^a(N^j\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^b\chi_b-\mathop{N}\limits_{\sim})}{1-\chi^2} \qquad {\mathop{\cal N}\limits_{\sim}}=\frac {\mathop{N}\limits_{\sim}-N^i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^a\chi_a}{1-\chi^2} \label{cN} \end{equation} The action (\ref{HP2}) now reads: \begin{eqnarray} S_{SD}=S_A&=&2\int dt\ d^3x (P_a^i\partial_tA_i^a+A_0^a{\cal G}_a +{{\cal N}_D}^i H_i+{\mathop{\cal N}\limits_{\sim}} H) \nonumber \\ H_i&=&-2P^k_aF^a_{ik} \nonumber \\ H&=&-2P^i_aP^k_b F^{ab}_{ik} \label{HP3} \end{eqnarray} All $\chi$-dependence is hidden in the canonical variables. We have arrived at the Ashtekar action (\ref{HP3}) (denoted $S_A$ below). The absence of $\chi$ in $S_A$ leads to a first class primary constraint $p_\chi =0$, where $p_\chi$ is the canonical momentum for $\chi$. This constraint generates shifts of $\chi$ by an arbitrary function and originates from the Lorentz boosts. One must bear in mind that not all the components of ${\rm Re}\, P^i_a$ are independent. To restore the correct form of $P^i_a$ one needs the condition ${\rm Im}\, P^{(i}_a {\rm Re}\, P^{j)}_a=0$ or, equivalently, \begin{equation} {\rm Im}\, (P_a^i P_a^j)=0 \label{1rc} \end{equation} The equation (\ref{1rc}) is known as the first metric reality condition. When supplemented by the second metric reality condition \begin{equation} \partial_t {\rm Im}\, (P_a^i P_a^j)=0 \label{2rc} \end{equation} on an initial hypersurface, it ensures real evolution of the metric \cite{ARoTa,Imm,YoSh}. As usual, the triad field $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ should be non-degenerate. 
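That (\ref{1rc}) holds identically for $P^i_a$ of the form given in (\ref{HP2}) follows from the antisymmetry of $\varepsilon_a{}^{bc}$; the following numerical sketch (NumPy, with random data; the array layout $[i,a]$ is our choice) illustrates it:

```python
import numpy as np

# eps^{abc} with eps^{123} = 1 (flat SO(3) indices)
eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

rng = np.random.default_rng(1)
E = rng.normal(size=(3, 3))     # densitized triad, indexed [i, a]
chi = rng.normal(size=3)

# P^i_a = i( E^i_a - i eps_a^{bc} E^i_b chi_c ), as in (\ref{HP2})
P = 1j * (E - 1j * np.einsum('abc,ib,c->ia', eps3, E, chi))

# first metric reality condition (\ref{1rc}): Im(P^i_a P^j_a) = 0 identically
PP = np.einsum('ia,ja->ij', P, P)
assert np.allclose(PP.imag, 0.0)
```

The cross terms $\tE^i_a\varepsilon_a{}^{bc}\tE^j_b\chi_c$ cancel pairwise under the symmetrization in $(ij)$, which is exactly why the condition constrains only the relative orientation of ${\rm Re}\,P$ and ${\rm Im}\,P$.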
Define the smeared constraints: \begin{eqnarray} &&{\cal G}(n)=\int d^3x\, n^a{\cal G}_a, \qquad H^A(\mathop{N}\limits_{\sim} )=\int d^3x\, \mathop{N}\limits_{\sim} H \nonumber \\ &&{\cal D}(\vec N)=\int d^3x\, N^i(H_i+2A_i^a{\cal G}_a) \end{eqnarray} They obey the following algebra: \begin{eqnarray} &&\left\{ {\cal G}(n) ,{\cal G}(m) \right\}_C=-{\cal G}(n\times m), \nonumber \\ &&\left\{ {\cal D}(\vec N) ,{\cal D}(\vec M) \right\}_C= -2{\cal D}([\vec N ,\vec M ]),\nonumber \\ &&\left\{ {\cal D}(\vec N) ,{\cal G}(n) \right\}_C=- 2{\cal G}( N^i\partial_in), \nonumber \\ &&\Bigl\{ H^A(\mathop{N}\limits_{\sim} ) ,{\cal G}(n) \Bigr\}_C =0, \label{algA} \\ &&\Bigl\{ {\cal D}(\vec N) ,H^A(\mathop{N}\limits_{\sim} ) \Bigr\}_C= -2H^A({\cal L}_{\vec N}\mathop{N}\limits_{\sim} ), \nonumber \\ &&\Bigl\{ H^A(\mathop{N}\limits_{\sim} ),H^A(\mathop{M}\limits_{\sim} ) \Bigr\}_C = 2{\cal D}(\vec K)-2{\cal G}(2K^jA_j) \nonumber \end{eqnarray} where \begin{eqnarray} &&(n\times m)^a=\varepsilon^{abc}n^bm^c,\qquad {\cal L}_{\vec N}\mathop{N}\limits_{\sim} = N^i\partial_i \mathop{N}\limits_{\sim}-\mathop{N}\limits_{\sim}\partial_iN^i, \nonumber \\ &&[\vec N ,\vec M ]^i= N^k\partial_kM^i-M^k\partial_kN^i, \label{not1} \\ && K^j=(\mathop{N}\limits_{\sim}\partial_i\mathop{M}\limits_{\sim}-\mathop{M}\limits_{\sim}\partial_i\mathop{N}\limits_{\sim})P^i_aP^j_a \label{not11} \end{eqnarray} We have introduced the subscript $C$ to distinguish the Poisson bracket $\{ \cdot ,\cdot \}_C$ of the complex Ashtekar theory from that of the real Hilbert--Palatini action. 
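The closure of the ${\cal G}$-sector of (\ref{algA}) rests on the antisymmetry and the Jacobi identity of the cross product defined in (\ref{not1}); a minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = rng.normal(size=(3, 3))

# antisymmetry of (n x m)^a = eps^{abc} n^b m^c
assert np.allclose(np.cross(n, m), -np.cross(m, n))

# Jacobi identity: guarantees {G(n),{G(m),G(p)}} + cyclic = 0
jac = (np.cross(np.cross(n, m), p)
       + np.cross(np.cross(m, p), n)
       + np.cross(np.cross(p, n), m))
assert np.allclose(jac, 0.0)
```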
\section{Hamiltonian form of the Hilbert--Palatini action} Let us start with the Hilbert--Palatini action \begin{equation} S=\frac 12\int \varepsilon_{\alpha\beta\gamma\delta} e^\alpha \wedge e^\beta \wedge {\Omega}^{\gamma\delta} \end{equation} Recall that the Ashtekar action is obtained from the Hilbert--Palatini one by adding a pure imaginary term $-i\frac 12 \int \varepsilon_{\alpha\beta\gamma\delta} e^\alpha \wedge e^\beta \wedge \star {\Omega}^{\gamma\delta}$. Therefore, \begin{equation} S={\rm Re} \ S_{A}=2\int dt\ d^3x (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_a^i\partial_t \omega_i^{0a}+Z^i_a\partial_t \xi_i^a + n_G^a{\rm Re}\,{\cal G}_a+n_L^a{\rm Im}\,{\cal G}_a +{{\cal N}_D}^i {\rm Re}\,H_i+{\mathop{\cal N}\limits_{\sim}} {\rm Re}\,H) \end{equation} where \begin{eqnarray} && n_G^a={\rm Re}\, A_0^a,\quad n_L^a=-{\rm Im}\, A_0^a \\ &&Z_a^i={\varepsilon_a}^{bc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b \chi_c\\ &&\xi_i^a= \frac 12 {\varepsilon^a}_{bc}\omega_i^{bc} \end{eqnarray} In order to simplify the constraint algebra we replace ${\rm Re}\, H_i$ by a modified vector constraint. To this end we shift the Lagrange multipliers: \begin{equation} n_G^a = {\cal N}_G^a + 2{\cal N}_D^i\xi_i^a \ \ \ \ n_L^a = {\cal N}_L^a+2{\cal N}_D^i\omega_i^{0a} \end{equation} We see that $\tE^i_a$ plays the role of the momentum for $\omega_i^{0a}$, whereas $Z^i_a$ is the momentum conjugate to $\xi_i^a$. $Z_a^i$ has only three independent components. To have time derivatives of the true dynamical variables we replace \begin{equation} \omega_i^{0a} = \eta^a_i +\varepsilon^{abc}\xi_i^b \chi_c \end{equation} Then the kinetic term reads $\tE^i_a \partial_t \eta^a_i - (\varepsilon^{abc}\xi^b_i \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_c) \partial_t \chi_a$. By a suitable change of variables we can bring this term to the standard form $p\partial_t q$. Let us introduce a basis in the space of $3\times 3$ matrices. 
\begin{equation} (r_A)_i^a= \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^b (\beta_A)^a_b, \quad (\gamma_a)_i^b=\frac 12 \varepsilon_{abc}\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^c \label{bas} \end{equation} where $\beta_A$ are six symmetric $3\times 3$ matrices. Define \begin{equation} \xi^a_i = r_i^a +(\gamma_b)^a_i \omega^b, \qquad r_i^a=(r_A)^a_i \lambda^A \label{ksi} \end{equation} $\omega$ and $\lambda$ will be treated as new canonical variables. We arrive at the following expression for the Hilbert--Palatini action \begin{eqnarray} \frac 12 S &=&\int dt\ d^3x (\tE^i_a\partial_t \eta^a_i+ \chi_a\partial_t \omega^a + {\cal N}_G^a \Phi^G_a+{\cal N}_L^a\Phi^L_a +{\cal N}_D^i \Phi^D_i+\mathop{\cal N}\limits_{\sim} \Phi^H) \label{HP5} \\ \Phi^G_a &=& \partial_i ({\varepsilon_a}^{bc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b \chi_c)-{\varepsilon_{ab}}^c \eta_i^b \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_c -{\varepsilon_{ab}}^c \omega^b \chi_c \nonumber \\ \Phi^L_a &=& \partial_i\tE^i_a +\varepsilon_{abc}\eta_i^b \varepsilon^{cgf}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_g \chi_f - (\delta_{ab}-\chi_a \chi_b)\omega^b \nonumber \\ \Phi^D_i &=&-2\left[ \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a \partial_i \eta_j^a -\partial_j(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a\eta_i^a) - \omega^a \partial_i \chi_a \right] \nonumber \\ \Phi^H &=& \varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^i \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_c^j (\delta_{ad}-\chi_a \chi_d) {\varepsilon^d}_{gf} \eta_i^g \eta_j^f + 2\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_b \chi^b (\partial_i \eta_j^a-\partial_j \eta_i^a) \nonumber \\ && -(1-\chi^2)(2\partial_i (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\omega^a) -h^{-1}\omega^a \partial_i 
(h\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a))+\omega^a \chi^b(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\partial_i\ \chi^b +\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^i\partial_i \chi_a) \nonumber \\ &&+ \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a\omega^b(\chi_a\eta_j^b-\chi_b \eta_j^a)- \omega^a \chi_a(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_b\eta_j^c\chi^b\chi_c - \chi^2\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^j \eta_j^b) \nonumber \\ &&-\frac 12 (1-\chi^2)\omega^a \omega^b (\delta_{ab}-\chi_a \chi_b) \nonumber \\ &&+ 2\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c \left( (1-\chi^2)\partial_i r_j^a+ r_j^d \chi_d \partial_i \chi_a -(1-\chi^2)\chi_a r_j^d\eta_i^d \right. \nonumber \\ &&\left. +(\delta_{ag}-\chi_a \chi_g)\eta_i^g r_j^d \chi_d \right) - (1-\chi^2)\varepsilon^{abc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_c^j (\delta_{ad}-\chi_a \chi_d) {\varepsilon^d}_{gf} r_i^g r_j^f \nonumber \end{eqnarray} We see that $\lambda_A$ has no conjugate momentum, and thus is non-dynamical. We observe also that $\lambda_A$ is contained in $\Phi^H$ only. Let us analyse constraints of the theory along the lines of usual Dirac procedure \cite{Dirac}. Since all steps are completely standard we omit irrelevant technical details (cf. \cite{ABaJo,Abook}). First we note that $\tE^i_a$ and $\chi_a$ are conjugate momenta to $\eta^a_i$ and $\omega^a$ respectively. 
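Since the nine matrices (\ref{bas}) span the space of $3\times 3$ matrices for a non-degenerate triad, the substitution (\ref{ksi}) is an invertible change of variables splitting the nine components of $\xi_i^a$ into six $\lambda^A$ and three $\omega^a$. A numerical sketch of this completeness (NumPy, with a random non-degenerate triad; the array layout $[i,a]$ is our choice):

```python
import numpy as np

eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

rng = np.random.default_rng(2)
uE = rng.normal(size=(3, 3))    # generic nondegenerate triad, indexed [i, a]

# six (r_A)_i^a = uE_i^b (beta_A)^a_b, with beta_A a basis of symmetric matrices
basis = []
for a in range(3):
    for b in range(a, 3):
        beta = np.zeros((3, 3))
        beta[a, b] = beta[b, a] = 1.0
        basis.append(np.einsum('ib,ab->ia', uE, beta))

# three (gamma_a)_i^b = 1/2 eps_{abc} uE_i^c
for a in range(3):
    basis.append(0.5 * np.einsum('bc,ic->ib', eps3[a], uE))

# the nine matrices span all 3x3 matrices, so (lambda^A, omega^a) <-> xi_i^a
# in (\ref{ksi}) is an invertible change of variables
S = np.stack([m.ravel() for m in basis])
assert np.linalg.matrix_rank(S) == 9
```

The split is just the decomposition of the matrix $\xi$ (referred to the triad) into its symmetric and antisymmetric parts, which is why exactly six $\lambda$'s and three $\omega$'s appear.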
By analysing the consistency conditions we get the following set of constraints \begin{equation} p^{(n)}_{\alpha }=0 \qquad p^{(\lambda )}_A=0 \qquad \Phi_{\alpha }=0 \qquad \mathop{\cal N}\limits_{\sim} \frac {\partial \Phi^H}{\partial \lambda_A}=0 \end{equation} where $p^{(q)}$ denotes the momentum conjugate to the variable $q$, $(n)$ stands for all Lagrange multipliers, and $\Phi_{\alpha}= (\Phi^G_a,\Phi^L_a,\Phi^D_i,\Phi^H)$. Introduce \begin{equation} \Phi_{\alpha}^{'}=\Phi_{\alpha}-\frac 12 p^{(\lambda )}_A {\cal A}_{AB}^{-1} \left\{ \Phi_{\alpha},\frac{\partial \Phi^H}{\partial \lambda_B}\right\} ,\end{equation} where ${\cal A}_{AB}=-\frac 12 \frac {\partial^2 \Phi^H} {\partial \lambda_A \partial \lambda_B}$. Then $\Phi^{'}_{\alpha}$ and $p^{(n)}_{\alpha}$ are first class constraints. The remaining constraints $p_A^{(\lambda )}$ and $\mathop{\cal N}\limits_{\sim} \frac {\partial \Phi^H}{\partial \lambda_A}$ are second class, with a nontrivial matrix of commutators. This matrix is non-degenerate and can be used to construct the Dirac bracket. To avoid using such an object one should solve the second class constraints explicitly. The constraints $p^{(\lambda )}_A=0$ are solved trivially, giving us back $\Phi_\alpha$ as first class constraints. Since $\Phi^H$ is quadratic in $\lambda$, it can be represented as \begin{equation} \Phi^H =\Phi^H_0+2{\cal B}_A\lambda_A -\lambda_A {\cal A}_{AB}\lambda_B \end{equation} The remaining second class constraints give the equations \begin{equation} 0=\frac {\delta \Phi^H}{\delta\lambda^A} = 2(-{\cal A}_{AB}\lambda_B+{\cal B}_A) ,\label{2class} \end{equation} which can be solved for $\lambda$, resulting in expressions for the non-dynamical components $r_i^a$ in terms of the other canonical variables. Here we give only the final results; some intermediate steps are reported in Appendix A. 
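The elimination of the second class constraints amounts to extremizing a quadratic form. A minimal numerical sketch (the $6\times 6$ matrix and vector below are random stand-ins for ${\cal A}_{AB}$ and ${\cal B}_A$) confirms that substituting the solution of (\ref{2class}) back gives $\Phi^H=\Phi^H_0+{\cal B}_A{\cal A}^{-1}_{AB}{\cal B}_B$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6                                  # one lambda_A per symmetric beta_A
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)            # a generic invertible symmetric A_{AB}
B = rng.normal(size=n)
Phi0 = rng.normal()

def Phi(lam):
    # Phi^H = Phi^H_0 + 2 B_A lambda_A - lambda_A A_{AB} lambda_B
    return Phi0 + 2 * B @ lam - lam @ A @ lam

# second class constraint (\ref{2class}): 2(B_A - A_{AB} lambda_B) = 0
lam = np.linalg.solve(A, B)

# substituting the solution back: Phi^H = Phi^H_0 + B_A (A^{-1})_{AB} B_B
assert np.isclose(Phi(lam), Phi0 + B @ lam)
```

Here `B @ lam` equals ${\cal B}_A{\cal A}^{-1}_{AB}{\cal B}_B$ once `lam` solves (\ref{2class}); the invertibility of ${\cal A}_{AB}$ is what makes the second class pair solvable rather than generating further constraints.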
\begin{eqnarray} r_i^a&=&\frac 1{2(1-\chi^2)} \Bigl( -X_{ad} \varepsilon^{dbc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^k \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c X_{gf} \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^g \partial_k \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^f \Bigr. \nonumber \\ &&+X_{ag} \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^g \varepsilon^{dbc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_b \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c X_{df} \partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^f -\varepsilon^{dbc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_b^k\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_c^j X_{dg} \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^g X_{af}\partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^f \nonumber \\ &&-\chi_a \varepsilon_{dbc} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_b X_{cg} \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^g \partial_i \chi_d +\varepsilon^{abc}\chi_b\partial_i\chi_c -\varepsilon^{abc}\chi_b \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_c^j\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^d\partial_j\chi_d \nonumber \\ &&\Bigl. +\varepsilon^{abc}\chi_b \eta_i^c +\varepsilon^{dbc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E_a^j\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^d \chi_b \eta_j^c \Bigr) ,\label{rai} \end{eqnarray} where $X_{ab}=(\delta_{ab}-\chi_a\chi_b )$. The Hamiltonian constraint reads: \begin{eqnarray} \Phi^H&=&\Phi^H_0+ {\cal B}_A {\cal A}_{AB}^{-1} {\cal B}_B \nonumber \\ &=&-\frac12(1-\chi^2)\omega^a\omega^b X_{ab} -(1-\chi^2)\left( 2\partial_i(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\omega^a)-\right. 
\nonumber \\ &&h^{-1}\omega^a\partial_i(h\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a)) +\omega^a\chi_b(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\partial_i\chi_b +\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\partial_i\chi_a) \nonumber \\ &&+\left. (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\omega^b(\chi_a\eta_i^b-\chi_b\eta_i^a)-\omega^a\chi_a (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\chi^b\eta_i^c\chi_c-\chi^2\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\eta_i^b)\right) \nonumber \\ &&+\frac12\Bigl\{ -\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c X_{ad}\varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^l_q X_{gf}\partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^g\partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_l^f+\Bigr.\nonumber \\ &&\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c X_{ag}\partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^g \varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^l_q X_{df}\partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_l^f \nonumber \\ &&\Bigl. 
-\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c X_{ag}\partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_l^g \varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^l_q X_{df}\partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^f \Bigr\}\nonumber \\ &&+\Bigl\{ -\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c \chi_a\varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p\partial_k\chi_d X_{qg}\partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^g+ \varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c \partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^a \varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p \partial_k\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_l^f \chi_q \Bigr. \nonumber \\ &&\Bigl. 
- \varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c \varepsilon^{adp}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_p\partial_k\chi_d \partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^g\chi_g \Bigr\}\nonumber \\ &&-\frac{\chi^2}{2(1-\chi^2)}\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\partial_i\chi_a X_{cq} \varepsilon^{dpq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_p\partial_j\chi_d \nonumber \\ &&+\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^k_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c\varepsilon^{dpq}\chi_d\eta_k^p\partial_i\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^q -\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\partial_i\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c\varepsilon^{apq}\chi_p\eta_j^q \nonumber \\ &&+\frac{1}{1-\chi^2}\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\partial_i\chi_a \varepsilon^{cpq}\chi_p\eta_j^q\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_d\chi_d \nonumber \\ &&+\Bigl\{ 2\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_b\chi_b(\partial_i\eta_j^a-\partial_j\eta_i^a)+ \varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_c\varepsilon_{apq}\eta_i^p\eta_j^q+\frac12\varepsilon^{abc} \chi_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\eta_j^c\varepsilon^{dpq}\chi_d\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_p\eta_i^q\Bigr. 
\nonumber \\ &&-\Bigl.\frac12\varepsilon^{abc}\chi_a\eta_i^b \varepsilon^{cpq}\chi_p\eta_j^q\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_g\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_g- \frac1{2(1-\chi^2)}\varepsilon^{abc}\chi_a\eta_i^b\varepsilon^{cpq}\chi_p\eta_j^q \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_g\chi_g\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_f\chi_f \Bigr\} . \label{newH} \end{eqnarray} We conclude this section with some useful commutators. Introduce the smeared first class constraints: \begin{eqnarray} &&G(n)=\int d^3x\, n^a\Phi^G_a, \quad L(m)=\int d^3x\, m^b \Phi^L_b, \nonumber \\ &&D(\vec N)=\int d^3x\, N^i\Phi^D_i, \quad H(\mathop{N}\limits_{\sim} )=\int d^3x\, \mathop{N}\limits_{\sim} \Phi^H \end{eqnarray} Here all the constraints are taken from (\ref{HP5}), except for the Hamiltonian constraint $\Phi^H$ which is now given by (\ref{newH}). $\xi^a_i$ is expressed in terms of canonical variables by means of (\ref{ksi}) and (\ref{rai}). The transformations of the connection fields are: \begin{eqnarray} &&\Bigl\{ G(n) ,\xi_j^d \Bigr\}=\varepsilon^{dab}n^a\xi_j^b+\partial_jn^d, \nonumber \\ &&\Bigl\{ G(n) ,\eta_j^d+\varepsilon^{dpq}\xi_j^p\chi_q \Bigr\}= \varepsilon^{dab}n^a(\eta_j^b+\varepsilon^{bpq}\xi_j^p\chi_q),\nonumber \\ &&\Bigl\{ L(m) ,\xi_j^d \Bigr\}= -\varepsilon^{dab}m^a(\eta_j^b+\varepsilon^{bpq}\xi_j^p\chi_q), \nonumber \\ &&\Bigl\{ L(m) ,\eta_j^d+\varepsilon^{dpq}\xi_j^p\chi_q \Bigr\}= \varepsilon^{dab}m^a\xi_j^b+\partial_jm^d,\nonumber \\ &&\Bigl\{ D(\vec N) ,\xi_j^d \Bigr\}= 2(N^i\partial_i\xi_j^d+\xi_i^d\partial_jN^i),\label{comm} \\ &&\Bigl\{ D(\vec N) ,\eta_j^d+\varepsilon^{dpq}\xi_j^p\chi_q \Bigr\}= 2(N^i\partial_i(\eta_j^d+\varepsilon^{dpq}\xi_j^p\chi_q) +(\eta_i^d+\varepsilon^{dpq}\xi_i^p\chi_q)\partial_jN^i) \nonumber \end{eqnarray} The Poisson brackets between the constraints are straightforward to evaluate. 
One obtains \begin{eqnarray} &&\Bigl\{ G(n) ,G(m) \Bigr\}=-G(n\times m), \nonumber \\ &&\Bigl\{ L(n) ,L(m) \Bigr\}=G(n \times m) , \nonumber \\ &&\Bigl\{ G(n) ,L(m) \Bigr\}=-L(n\times m) , \nonumber \\ &&\left\{ D(\vec N) ,D(\vec M) \right\}=-2D([\vec N ,\vec M ]), \nonumber \\ &&\left\{ D(\vec N) ,G(n) \right\}=-2G( N^i\partial_in), \nonumber \\ &&\left\{ D(\vec N) ,L(m) \right\}=-2L(N^i\partial_im),\label{alg}\\ &&\Bigl\{ H(\mathop{N}\limits_{\sim} ) ,G(n) \Bigr\} =0, \nonumber \\ &&\Bigl\{ H(\mathop{N}\limits_{\sim} ) ,L(m) \Bigr\} =0, \nonumber \\ &&\Bigl\{ D(\vec N) ,H(\mathop{N}\limits_{\sim} ) \Bigr\} =-2H({\cal L}_{\vec N}\mathop{N}\limits_{\sim} ), \nonumber \\ && \Bigl\{ H(\mathop{N}\limits_{\sim} ),H(\mathop{M}\limits_{\sim} ) \Bigr\} = 2D(\vec K)-2G(2K^j\xi_j)- 2L(2K^j(\eta_j+\xi_j\times \chi)) \nonumber \end{eqnarray} where \begin{eqnarray} && K^j[\mathop{N}\limits_{\sim} ,\mathop{M}\limits_{\sim} ]= (\mathop{N}\limits_{\sim} \partial_i\mathop{M}\limits_{\sim} -\mathop{M}\limits_{\sim} \partial_i\mathop{N}\limits_{\sim} )K^{ij} \nonumber \\ &&K^{ij}=- (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a(1-\chi^2)+\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\chi_a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_b\chi_b). \label{Kij} \end{eqnarray} Other notations are taken from (\ref{not1}). $K^{j}$ is in fact the same as in (\ref{not11}) but written in different variables. $\Phi^H$ will be called the Hamiltonian constraint. $\Phi^D$ generates diffeomorphisms of the 3-surface and will be called the diffeomorphism constraint. $\Phi^G$ and $\Phi^L$ generate the $SO(3,R)$ rotations and the Lorentz boosts, and will be called the Gauss law constraint and the Lorentz constraint, respectively. 
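The statement that $K^{ij}$ coincides with $P^i_aP^j_a$ of (\ref{not11}) can be checked directly: for $P^i_a=i(\tE^i_a-i{\varepsilon_a}^{bc}\tE^i_b\chi_c)$ one finds $P^i_aP^j_a=-(\tE^i_a\tE^j_a(1-\chi^2)+\tE^i_a\chi_a\tE^j_b\chi_b)$, which is (\ref{Kij}). A numerical sketch (NumPy, random data; array layout $[i,a]$ is our choice):

```python
import numpy as np

eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

rng = np.random.default_rng(4)
E = rng.normal(size=(3, 3))            # densitized triad, indexed [i, a]
chi = rng.normal(size=3)

# complex momentum P^i_a = i( E^i_a - i eps_a^{bc} E^i_b chi_c )
P = 1j * (E - 1j * np.einsum('abc,ib,c->ia', eps3, E, chi))

# K^{ij} of (\ref{Kij}) in the real variables ...
Echi = E @ chi                          # E^i_a chi_a
K = -((E @ E.T) * (1 - chi @ chi) + np.outer(Echi, Echi))

# ... coincides with P^i_a P^j_a of the complex theory, cf. (\ref{not11})
assert np.allclose(np.einsum('ia,ja->ij', P, P), K)
```

Because the imaginary part of $P^i_aP^j_a$ vanishes identically for this $P$, the equality holds exactly, not merely for the real parts.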
There is a set of remarkable relations between the Poisson brackets of the Hilbert--Palatini gravity and those of the Ashtekar gravity. \begin{eqnarray} &&\{ {\cal G}(n),P_a^j\}_C=\{ G(n),P_a^j\}= \{ iL(n),P_a^j\} , \nonumber \\ &&\{ {\cal G}(n),A_j^a\}_C=\{ G(n),A_j^a\}= \{ iL(n),A_j^a\} , \nonumber \\ &&\{ {\cal D}(\vec N), P_a^j\}_C=\{ D(\vec N),P_a^j\} , \quad \{ {\cal D}(\vec N), A^a_j\}_C=\{ D(\vec N),A^a_j\} , \nonumber \\ &&\{ H^A(N),P_a^j\}_C=\{ H(N),P_a^j\} \label{prop} \end{eqnarray} Note that the last relation holds for $P_a^j$ only. In a different context, the relation between the Hilbert--Palatini and Ashtekar brackets was recently considered by Khatsymovsky \cite{Kh}. \section{BRST quantization of the Hilbert--Palatini gravity} In this section we construct the BRST path integral \cite{BFV} for the Hilbert--Palatini gravity. Here we follow the review \cite{Henneaux}. Consider a dynamical system with phase space variables $(q^s,p_s)$, Hamiltonian $H_0$, and constraints $\Phi_{\alpha}$. Let $n^{\alpha}$ be the Lagrange multipliers associated with the constraints $\Phi_{\alpha}$, and $\pi_{\alpha}$ the canonically conjugate momenta. The extended phase space is defined by introducing extra ghost and antighost fields $(b^{\alpha},\bar c_{\alpha},c^{\alpha},\bar b_{\alpha})$, obeying the following nonvanishing antibrackets $$ \{ b^{\alpha} ,\bar c_{\beta} \}_+=-\delta^{\alpha}_{\beta},\ \{ c^{\alpha} ,\bar b_{\beta} \}_+=-\delta^{\alpha}_{\beta} $$ $c^{\alpha},\bar c_{\alpha}$ are real, whereas $b^{\alpha},\bar b_{\alpha}$ are imaginary. It is convenient to define an additional structure on the extended phase space, that of "ghost number". This is done by attributing the following ghost numbers to the canonical variables: $c^{\alpha},b^{\alpha}$ have ghost number one, $\bar c_{\alpha},\bar b_{\alpha}$ have ghost number minus one. All other variables have ghost number zero. On this space one can construct a BRST generator $\Omega$ and a BRST invariant Hamiltonian $H$. 
They are determined by the following conditions: (a) $\Omega$ is real and odd; (b) $\Omega$ has ghost number one; (c) $\Omega =-ib^{\alpha}\pi_{\alpha}+c^{\alpha}\Phi_{\alpha}+ "higher\ ghost\ terms"$; (d) $\{ \Omega ,\Omega \}_+=0$; and, for the Hamiltonian: (a) $H$ is real and even; (b) $H$ has ghost number zero; (c) $H$ coincides with $H_0$ up to higher ghost terms; (d) $\{ H ,\Omega \}=0$. If $H_0$ weakly vanishes (as in our case) one can take $H=0$, since the formalism admits an arbitrariness in the definition of observables: $H_0 \sim H_0+k^{\alpha}\Phi_{\alpha}$. The BRST generator is fully defined by the structure functions of the constraint algebra: $$\Omega=-ib^{\alpha}\pi_{\alpha}+\sum\limits_{n\ge 0}c^{\alpha_{n+1}} \cdots c^{\alpha_1}U^{(n)\beta_1 \cdots \beta_n}_{\alpha_1 \cdots \alpha_{n+1}}\bar b_{\beta_n}\cdots \bar b_{\beta_1}$$ The structure functions for the Hilbert--Palatini gravity are constructed in Appendix B. As a result, we obtain \begin{equation} \Omega=-ib^{\alpha}\pi_{\alpha}+c^{\alpha}\Phi_{\alpha}+\frac12 c^{\alpha}c^{\beta}C^{\gamma}_{\alpha \beta}\bar b_{\gamma}+ c^{\alpha}c^{\beta}c^{\gamma}U^{(2)\delta \lambda}_{\alpha \beta \gamma} \bar b_{\delta}\bar b_{\lambda} \label{BRSg} \end{equation} where $U^{(2)}$ is taken from (\ref{sf2}). Note that for the Yang--Mills theory the term with $U^{(2)}$ is absent in the BRST charge. This is also the case for the Ashtekar gravity \cite{AshMaTo}. 
The quantization is based on the generating functional for the Green functions, which is represented in the form \begin{equation} Z[j,J,\lambda ] =\int {\cal D}\mu e^{i\int dt\, (L_{eff}+j_sq^s+J^sp_s+\lambda_{\alpha} n^{\alpha})} \end{equation} where \begin{equation} L_{eff}=\dot q^s p_s +\dot n^{\alpha}\pi_{\alpha}+ \dot c^{\alpha}\bar b_{\alpha}+\dot b^{\alpha}\bar c_{\alpha}-H_{eff} \qquad H_{eff}=H-\{ \psi ,\Omega \}_+ \label{Leff} \end{equation} Here $\psi$ is an odd and imaginary function which has ghost number minus one and plays the role of a gauge-fixing function, whereas ${\cal D} \mu$ is the usual measure (the product over time of the Liouville measure of the extended phase space). Let us choose \begin{equation} \psi= -\bar b_{\alpha}n^{\alpha}+i\bar c_{\alpha} \bigl( \frac1{\gamma}f^{\alpha}(q,p)+\frac1{\gamma}g^{\alpha}(n)\bigr) . \label{psi} \end{equation} By substituting (\ref{BRSg}) and (\ref{psi}) in (\ref{Leff}) and putting $H=0$ one obtains: \begin{eqnarray} H_{eff}&=&- n^{\alpha}\Phi_{\alpha}-i\bar b_{\alpha} b^{\alpha}+ c^{\alpha}n^{\beta}C^{\gamma}_{\alpha \beta}\bar b_{\gamma} - 3c^{\alpha}c^{\beta}n^{\gamma}U^{(2)\delta \lambda}_{\alpha \beta \gamma} \bar b_{\delta}\bar b_{\lambda} \nonumber \\ && +\frac1{\gamma}\Bigl\{ (f^{\alpha}+g^{\alpha})\pi_{\alpha}- \bar c_{\alpha}\frac{\partial g^{\alpha}}{\partial n^{\beta}}b^{\beta}- i\bar c_{\alpha}\{ f^{\alpha}, \Phi_{\beta}\} c^{\beta}- i\bar c_{\alpha}\{ f^{\alpha},C^{\delta}_{\beta \gamma}\} c^{\beta}c^{\gamma}\bar b_{\delta} \Bigr. \nonumber \\ && \Bigl. -i\bar c_{\alpha}\{ f^{\alpha}, U^{(2)\xi \eta}_{\beta \gamma \delta}\} c^{\beta}c^{\gamma}c^{\delta} \bar b_{\xi}\bar b_{\eta} \Bigr\} \end{eqnarray} Let us make the change of variables with unit Jacobian: $$ \pi_{\alpha} \longrightarrow \gamma \pi_{\alpha},\ \ \bar c_{\alpha} \longrightarrow \gamma \bar c_{\alpha} $$ Then let $\gamma \longrightarrow 0$. 
In this limit integration over $\pi_{\alpha},\ b^{\alpha}$ and $\bar b_{\alpha}$ is easily performed, giving: \begin{equation} Z[j,J,\lambda ] =\int {\cal D}q {\cal D}p {\cal D}n {\cal D}c {\cal D}\bar c \delta (f^{\alpha} {+} g^{\alpha}) e^{i\int dt\, (L_{eff}'+j_sq^s+J^sp_s+\lambda_{\alpha} n^{\alpha})} \label{ZHP} \end{equation} where \begin{eqnarray} L_{eff}'&=& \dot q^s p_s+n^{\alpha}\Phi_{\alpha}- i\bar c_{\beta}\Bigl( \frac{\partial g^{\beta}}{\partial n^{\alpha}} \partial_t - \frac{\partial g^{\beta}}{\partial n^{\gamma}} C^{\gamma}_{\alpha \lambda}n^{\lambda}+\{ \Phi_{\alpha} ,f^{\beta}\} \Bigr) c^{\alpha} \nonumber \\ && -\bar c_{\xi}\bar c_{\eta}\Bigl( \frac{\partial g^{\eta}} {\partial n^{\delta}}\{ f^{\xi}, C^{\delta}_{\alpha \beta}\} + 3\frac{\partial g^{\xi}}{\partial n^{\delta}} \frac{\partial g^{\eta}}{\partial n^{\lambda}} U^{(2)\delta \lambda}_{\alpha \beta \gamma}n^{\gamma}\Bigr) c^{\alpha} c^{\beta} \nonumber \\ && -i\bar c_{\alpha}\bar c_{\xi}\bar c_{\eta} \frac{\partial g^{\xi}}{\partial n^{\lambda}} \frac{\partial g^{\eta}}{\partial n^{\sigma}}\{ f^{\alpha}, U^{(2) \lambda \sigma}_{\beta \gamma \delta}\} c^{\beta}c^{\gamma}c^{\delta} \label{Lsht} \end{eqnarray} and $q^s=(\eta^a_i,\omega^a),\ p_s=(\tE^i_a,\chi_a)$. This completes the construction of the path integral for the Hilbert--Palatini gravity. One can see that the dependence of the structure constants on the canonical variables leads to the appearance of multi-ghost interaction terms in (\ref{Lsht}). By an appropriate choice of the gauge fixing functions one can eliminate these terms. All nonvanishing components of $U^{(2)}$ have upper indices corresponding to the Gauss or Lorentz constraints. Therefore, if the functions $g^\alpha$ do not depend on the Lagrange multipliers ${\cal N}_G$ and ${\cal N}_L$, all terms with $U^{(2)}$ disappear. 
If, furthermore, the functions $f^\alpha$ do not depend on the canonical coordinates $q^s$, the Poisson bracket $\{ f^{\xi}, C^{\delta}_{\alpha \beta}\}$ vanishes and the remaining higher ghost terms also disappear. In such a case, the general structure of the path integral is identical to that of a rank one Yang--Mills theory. For short, these gauges will be called the Yang--Mills (YM) gauges. They play an important role in the path integral quantization of the Ashtekar gravity. \section{Constraints versus reality conditions} In this section we establish the relation between solutions of the constraints in the real Hilbert--Palatini formulation and the reality conditions (\ref{1rc}) and (\ref{2rc}) of the Ashtekar gravity. Let us recall the expressions for the complex canonical variables $P$ and $A$ in terms of the real canonical variables: \begin{eqnarray} P_a^i&=&i(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a-i{\varepsilon_a}^{bc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\chi_c ) ,\nonumber \\ A_j^a&=&\xi_j^a -i(\eta_j^a+\varepsilon^{abc}\xi_j^b\chi_c) , \nonumber \\ \xi_j^a&=&r_j^a-\frac 12 \varepsilon^{abc}\omega_b \smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_j^c , \label{CtoR} \end{eqnarray} where $r_j^a$ is given by the equation (\ref{rai}). Here it will be demonstrated that the reality conditions (\ref{1rc}) and (\ref{2rc}) are satisfied by (\ref{CtoR}) provided the canonical variables of the real theory satisfy the Gauss law and the Lorentz constraint. Moreover, we shall prove that the Ashtekar action is real under the same conditions. The last statement is not completely trivial even though the real Hilbert--Palatini action is related to the complex Ashtekar action by a canonical transformation. The point is that this transformation is not canonical on the whole phase space \cite{Abook}. Thus for our basis in the phase space the reality of the Ashtekar action must be checked independently. The first reality condition (\ref{1rc}) is satisfied trivially. 
Let us rewrite (\ref{2rc}) in a more explicit form. The time evolution of $P_a^lP_a^j$ is given by the Poisson bracket of the total complex Hamiltonian (\ref{HP3}) with $P_a^lP_a^j$: \begin{eqnarray} \partial_t\, (P_a^lP_a^j)&=&\left\{ \int dt\ d^3x (A_0^a{\cal G}_a +{{\cal N}_D}^i H_i+{\mathop{\cal N}\limits_{\sim}} H), P_a^lP_a^j \right\}_C \nonumber \\ &=&-2(2P^l_aP^j_a\partial_i{{\cal N}_D}^i -P^k_aP^l_a\partial_k{{\cal N}_D}^j -P^k_aP^j_a\partial_k{{\cal N}_D}^l +{{\cal N}_D}^i\partial_i(P^l_aP^j_a)) \nonumber \\ &&+2(\nabla_k P^k_a)({{\cal N}_D}^jP^l_a+{{\cal N}_D}^lP^j_a) \nonumber \\ &&-2{\mathop{\cal N}\limits_{\sim}} \varepsilon^{abc}P_a^i (P^j_c\nabla_i P^l_b + P^l_c\nabla_i P^j_b ) .\label{2rc2} \end{eqnarray} The first line of (\ref{2rc2}) is real for real ${{\cal N}_D}^i$ due to the first reality condition (\ref{1rc}). The second line disappears due to the Gauss law constraint. Therefore, to ensure real metric evolution one must require \begin{equation} {\rm Im}\, (\varepsilon^{abc}P_a^i (P^j_c\nabla_i P^l_b + P^l_c\nabla_i P^j_b ) )=0 .\label{2rc3} \end{equation} The condition (\ref{2rc3}) can be presented as ${\rm Im}\, \{ P_a^lP_a^j,H\}_C =0$. It is clear that this condition is invariant under {\em complex} $SO(3)$ transformations. These transformations can be used to put $\chi =0$. One can easily demonstrate that for the fields (\ref{CtoR}) the condition (\ref{2rc3}) is satisfied. Now let us prove that under the same conditions \begin{equation} {\rm Im}\, H_i = {\rm Im}\, (H_i+2A_i^a{\cal G}_a)=0 . \label{ImHi} \end{equation} From the equations (\ref{algA}) and (\ref{prop}) one can see that $\{ {\cal G},{\cal G}\}_C\sim {\cal G}$ and $\{ \Phi^D,{\cal G}\}\sim {\cal G}$. Hence the surface ${\cal G}=0$ is invariant under complex $SO(3)$ transformations and real diffeomorphisms. 
Since $\{ {\cal G}, H_i+2A_i^a{\cal G}_a\}_C \sim {\cal G}$ and $\{ \Phi^D, {\rm Im}\, (H_i+2A_i^a{\cal G}_a) \} \sim {\rm Im}\,( H_i+2A_i^a{\cal G}_a) $, these transformations map solutions of (\ref{ImHi}) to themselves inside the surface ${\cal G}=0$. One can use $SO(3)$ transformations and diffeomorphisms to impose the condition $\chi =0$ everywhere, and $\partial_k\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a=0$ at a certain point. At this point one must only check the cancellation of second derivatives of $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$. This is straightforward to do by using the equations (\ref{CtoR}), (\ref{rai}) and the explicit form (\ref{HP5}) of the constraint ${\cal G}=\Phi^G+i\Phi^L$. To prove that ${\rm Im}\, H=0$ one can use the Lorentz boosts to put $\chi =0$. This makes the calculations quite elementary even without further gauge fixing. By straightforward calculations one can demonstrate that the imaginary part of the kinetic term $P^j_a\partial_t A_j^a$ is a total derivative and thus can be discarded in quantization. This is done in Appendix C. As advertised at the beginning of this section, we have demonstrated that the complex canonical variables satisfy the reality conditions on the surface of the equations (\ref{CtoR}), the second class constraint (\ref{2class}) and the two first class constraints $\Phi^G$ and $\Phi^L$. Note that the reality conditions admit more solutions. For example, one can interchange the real and imaginary parts of $P^j_a$. \section{Path integral quantization of the Ashtekar gravity} In this section we derive a path integral for the Ashtekar gravity from the one for the Hilbert--Palatini gravity. Consider the functional (\ref{ZHP}) in a YM gauge.
\begin{eqnarray} Z[j,J] =\int {\cal D}\eta^a_i {\cal D}\tE^i_a {\cal D}\omega^a {\cal D}\chi_a {\cal D}{\cal N}_G {\cal D}{\cal N}_L {\cal D}{\cal N}_D^i {\cal D}\mathop{\cal N}\limits_{\sim} {\cal D}c^{\alpha} {\cal D}\bar c_{\alpha} \delta (f^{\alpha} {+} g^{\alpha}) \nonumber \\ \times \exp \left( i\int dt\, (L_{eff}'+j_a^i\eta_i^a+J_i^a\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a)\right) \label{ZA1} \end{eqnarray} We drop the sources for the Lagrange multipliers, $\chi$ and $\omega$. The discussion of the source terms is postponed to the end of this section. Since the gauge fixing functions $g^\alpha$ do not depend on the Gauss and Lorentz Lagrange multipliers, integration over these Lagrange multipliers gives $\delta$-functions of the corresponding constraints, $\delta (\Phi^G_a )\delta (\Phi^L_a)$. This means that in fact we are working on the surface of these constraints. In the previous section it was shown that on this surface the imaginary part of the Ashtekar action vanishes. Thus one can write \begin{equation} L_{eff}'= L_{A}(P,A)- i\bar c_{\beta}\Bigl( \frac{\partial g^{\beta}}{\partial n^{\alpha}} \partial_t - \frac{\partial g^{\beta}}{\partial n^{\gamma}} C^{\gamma}_{\alpha \lambda}n^{\lambda}+\{ \Phi_{\alpha} ,f^{\beta}\} \Bigr) c^{\alpha} \end{equation} We assume that the complex canonical variables are expressed in terms of the real canonical variables by means of (\ref{CtoR}). One can integrate over $\omega^a$ by using the delta function of the Lorentz constraint $\Phi^L$.
This is equivalent in effect to the substitution: \begin{equation} \omega^a(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E,\eta,\chi):=(\delta_{ab}+ \frac{\chi_a\chi_b}{1-\chi^2})\partial_i\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b+\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a\eta^b_i\chi_b -\frac{\chi_a}{1-\chi^2}(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\eta_i^b-\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\chi_b\eta_i^c\chi_c) \label{om0} \end{equation} The path integral measure is multiplied by \begin{equation} \Delta_1={\rm det}^{-1}\X{a}{b}= \prod_{x,t}\frac1{1-\chi^2}. \label{d1} \end{equation} Now we are ready to change the integration variables in (\ref{ZA1}): \begin{eqnarray} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a &\longrightarrow & P^i_a=i\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a+\varepsilon^{abc}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_b\chi_c \nonumber \\ \eta_i^a &\longrightarrow & A_i^a=\xi_i^a-i(\eta_i^a+ \varepsilon^{abc}\xi_i^b\chi_c) \label{chan} \end{eqnarray} This gives rise to a determinant \begin{eqnarray} \Delta_2&=&{\rm det}^{-1}\left( i\delta^i_j\delta_a^b+ \delta^i_j\varepsilon^{abc}\chi_c\right) {\rm det}^{-1}\Bigl( \frac1{2(1-\chi^2)} \left( -2\delta_i^j\varepsilon^{abc}\chi_c +\varepsilon^{apq}\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_q\X{p}{b}\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^d\chi_d \right. \Bigr. \nonumber \\ && \left. -\chi_a\varepsilon^{dpq}\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^d\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_q\X{p}{b}\right) -i\bigl( \delta_i^j\delta^a_b+\frac1{2(1-\chi^2)}\left( 2\delta_i^j(\delta^a_b\chi^2-\chi_a\chi_b) \right. \bigr. \nonumber \\ && \Bigr. \bigl. \left.
+(1-\chi^2)\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a\chi_b\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^c\chi_c-\X{a}{b}\smash{{\mathop{E}\limits_{\sim}}}\vphantom{E}_i^c\chi_c\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_d\chi_d \right) \bigr) \Bigr) =\prod_{x,t} \left( -\frac1{1-\chi^2} \right) \label{d2} \end{eqnarray} Note that if all the gauge fixing functions $f$ depend on the real fields $\chi$ and $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ through $P$ only, the ghost action becomes degenerate (see (\ref{prop})). This is a manifestation of the fact that the Lorentz constraint is ``superfluous'' in the complex Ashtekar gravity. Therefore, we must fix the corresponding gauge freedom by means of a condition on $\chi$: \begin{equation} \chi^a=\chi^a_{(0)}(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E ) ,\label{chi0} \end{equation} where $\chi_{(0)}$ is a given function. Before integrating over $\chi$ let us rewrite (\ref{chi0}) in a different form. By inverting the first equation in (\ref{CtoR}), one obtains \begin{equation} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a=\left( \frac{\varepsilon^{abc}\chi_c}{1-\chi^2}-i\frac{\delta_{ab}- \chi_a\chi_b}{1-\chi^2}\right)P^i_b=\pi_a^b(\chi )P^i_b . \label{EP} \end{equation} Due to (\ref{chi0}) one can replace $\chi$ by $\chi_{(0)}(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E )$. The right hand side of (\ref{EP}) then becomes $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ dependent. This dependence, however, can be removed at least locally by means of a formal power series expansion. As a result, we obtain \begin{equation} \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^i_a=\bar \pi_a^b (P) P^i_b , \label{EP1} \end{equation} where $\bar \pi$ is a function of $P$ but not of $P^*$, which depends on the choice of the gauge fixing function $\chi_{(0)}$. For the present analysis the explicit form of $\bar \pi$ is of no importance.
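As a sanity check (a sketch, not part of the original derivation), one can verify symbolically that the matrix $\pi_a^b(\chi )$ of (\ref{EP}) inverts the internal-index matrix $M_{ab}=i\delta_{ab}+\varepsilon_{abc}\chi_c$ of the map from the densitized triad to $P$ in (\ref{chan}):

```python
import sympy as sp

# Sketch: verify symbolically that pi(chi) of eq. (EP) inverts the internal-index
# matrix M_ab = i*delta_ab + eps_abc*chi_c of the map E -> P in eq. (chan).
chi = sp.symbols('chi1:4', real=True)
chi_sq = sum(c**2 for c in chi)
delta = lambda a, b: 1 if a == b else 0
eps = sp.LeviCivita

M = sp.Matrix(3, 3, lambda a, b: sp.I*delta(a, b)
              + sum(eps(a, b, c)*chi[c] for c in range(3)))
pi = sp.Matrix(3, 3, lambda a, b: (sum(eps(a, b, c)*chi[c] for c in range(3))
              - sp.I*(delta(a, b) - chi[a]*chi[b]))/(1 - chi_sq))

# pi*M and M*pi both reduce to the identity, so E^i_a = pi_a^b(chi) P^i_b.
assert (pi*M - sp.eye(3)).applyfunc(sp.simplify) == sp.zeros(3, 3)
assert (M*pi - sp.eye(3)).applyfunc(sp.simplify) == sp.zeros(3, 3)
```

The check covers only the internal $3\times 3$ structure; the spatial index is diagonal and plays no role.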
Note that the simple relation $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E ={\rm Im} P$ would not work, because it depends both on $P$ and its complex conjugate. One can replace (\ref{chi0}) by the condition \begin{equation} \chi =\chi_{(0)} (\bar \pi P) =\bar \chi (P)\ . \label{chi1} \end{equation} The two conditions (\ref{chi0}) and (\ref{chi1}) are equivalent since they select the same surfaces in the phase space. However, the ghost terms and Jacobian factors appearing due to the delta functions of the gauge conditions are different for (\ref{chi0}) and (\ref{chi1}). In the final result these differences compensate each other, as one can easily show using the geometric interpretation of the Faddeev--Popov determinant. Let us integrate over $\chi$ with the help of the delta function $\delta (\chi -\bar \chi (P))$. Since we already changed variables to $P$ and $A$, no Jacobian factor appears. Integration over $P$ and $A$ should be understood as a contour integration in complex space. One integrates along the lines defined by the reality conditions and the equations (\ref{chi0}) and (\ref{om0}). As usual, there are real parameters which label points of the contours in the complex planes. These are $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ and $\eta$. Since the fields $\omega$ and $\chi$ are already excluded, we do not integrate over the positions of the contours. Consider the ghost action. Integration over $\bar c$ and $c$ gives the following functional determinant: \begin{equation} \det \Bigl( \frac{\partial g^{\beta}}{\partial n^{\alpha}} \partial_t - \frac{\partial g^{\beta}}{\partial n^{\gamma}} C^{\gamma}_{\alpha \lambda}n^{\lambda}+\{ \Phi_{\alpha} ,f^{\beta}\} \Bigr) \label{cdet} \end{equation} Let us separate the indices corresponding to the Lorentz boosts: $\{ \Phi_\alpha \} = \{ \Phi^L_a ; \Phi_\mu \}$, $\{ f^\alpha \} = \{ \chi^a -\bar \chi^a (P);f^\mu (\chi , P)\}$, $\{ g^\alpha \} =\{ 0;g^\mu \}$.
Greek indices from the middle of the alphabet correspond to the Gauss law, diffeomorphism and Hamiltonian constraints. Matrix elements in (\ref{cdet}) contain the following brackets: \begin{eqnarray} &&\left\{ \Phi_\mu ,f^\nu (\chi , P)\right\} = \{ \Phi_\mu ,P\} \frac{\delta f^\nu}{\delta P} + \frac{\delta \Phi_\mu}{\delta\omega} \frac{\delta f^\nu}{\delta\chi} , \nonumber \\ &&\left\{ \Phi_\mu ,\chi -\bar\chi (P)\right\} = \frac{\delta \Phi_\mu}{\delta \omega} - \{ \Phi_\mu ,P\} \frac{\delta \bar\chi}{\delta P} , \label{lines} \end{eqnarray} where summation indices are suppressed. Let us multiply the lines corresponding to $\chi^a -\bar\chi^a$ by $-\delta f^\nu /\delta \chi^a$ and add them to the $f^\nu$ lines. This produces the matrix elements: \begin{eqnarray} &&\frac{\partial g^{\nu}}{\partial n^{\mu}}\partial_t - \frac{\partial g^{\nu}}{\partial n^{\rho}} C^{\rho}_{\mu \sigma}n^{\sigma}+ \{ \Phi_\mu ,P\} \left( \frac{\delta f^\nu}{\delta P} +\frac{\delta f^\nu}{\delta \chi} \frac{\delta \bar\chi}{\delta P} \right) = \nonumber \\ &&\frac{\partial g^{\nu}}{\partial n^{\mu}}\partial_t - \frac{\partial g^{\nu}}{\partial n^{\rho}} C^{\rho}_{\mu \sigma}n^{\sigma}+ \left\{ \Phi_\mu^{[C]}, f^\nu (\bar\chi (P),P) \right\}_C . \label{fchiP} \end{eqnarray} $\Phi_\mu^{[C]}$ is the Ashtekar constraint corresponding to $\Phi_\mu$, ${\rm Re}\, \Phi_\mu^{[C]}=\Phi_\mu$. In the last line we used that $\{ \Phi_\mu ,P\} = \{ \Phi_\mu^{[C]} ,P\}_C$ due to (\ref{prop}). The equation (\ref{fchiP}) means that one replaces $\chi$ by $\bar\chi$ in the gauge fixing functions $f^\nu$. Consider the two columns in (\ref{cdet}) corresponding to the Gauss law and Lorentz constraints. Due to (\ref{prop}) $\{ \Phi^G ,f(P)\} =i \{ \Phi^L,f(P)\}$. Therefore, by adding a suitable complex multiple of one of these columns to the other, one obtains zeros almost everywhere, except for the lines corresponding to the gauge conditions $\chi^a -\bar\chi^a (P)$.
As a result, one can represent the determinant (\ref{cdet}) as a product of two determinants: \begin{equation} \Delta_3 \ \det \left( \frac{\partial g^{\nu}}{\partial n^{\mu}}\partial_t - \frac{\partial g^{\nu}}{\partial n^{\rho}} C^{\rho}_{\mu \sigma}n^{\sigma}+ \left\{ \Phi_\mu^{[C]}, f^\nu (\bar\chi (P),P) \right\}_C \right) , \label{newdet} \end{equation} where \begin{equation} \Delta_3= \det \{ \Phi_a^L-i\Phi_a^G ,\chi^b\} = {\rm det}\left( \X{a}{b}+i\varepsilon^{abc}\chi_c\right)= \prod_{x,t}(1-\chi^2)^2 \label{d3} \end{equation} From the expressions (\ref{d1}), (\ref{d2}) and (\ref{d3}) one can see that all $\Delta$'s cancel each other up to an overall minus sign, which can be absorbed into a reversed orientation of the contour of the $A$-integration. The path integral is now rewritten in terms of the Ashtekar variables: \begin{equation} Z[\bar j,\bar J] =\int_R {\cal D}A^a_i {\cal D}P^i_a {\cal D}{\cal N}_D^i {\cal D}\mathop{\cal N}\limits_{\sim} {\cal D} A_0^a {\cal D}c^{\mu} {\cal D}\bar c_{\mu} \delta (f^{\mu} {+} g^{\mu}) e^{i\int dt\, (L_{eff}'+\bar j_a^iA_i^a+\bar J_i^aP^i_a)} \label{ZA2} \end{equation} where \begin{equation} L_{eff}'= L_{A}- i\bar c_{\nu}\Bigl( \frac{\partial g^{\nu}}{\partial n^{\mu}}\partial_t - \frac{\partial g^{\nu}}{\partial n^{\rho}} C^{\rho}_{\mu \sigma}n^{\sigma}+ \left\{ \Phi_\mu^{[C]}, f^\nu (\bar\chi (P),P) \right\}_C \Bigr) c^{\mu} \end{equation} The subscript $R$ means contour integration in complex spaces along the lines defined by the reality conditions. Integration over ${\cal N}_L$ (which is essentially the imaginary part of $A_0$) has already been performed to produce a delta function of the Lorentz constraint. This delta function, in turn, has been used to integrate over $\omega$. Thus in (\ref{ZA2}) we integrate over the real part of $A_0$. This integral gives $\delta (\Phi^G )=\delta ({\cal G})$.
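The cancellation of the $\Delta$'s can be checked per spacetime point (a sketch; it assumes, consistently with (\ref{om0}) and (\ref{d1}), that the matrix denoted $\X{a}{b}$ is $X_{ab}=\delta_{ab}-\chi_a\chi_b$):

```python
import sympy as sp

# Sketch: symbolic check of the per-point determinant factors in (d1), (d2), (d3),
# assuming X_ab = delta_ab - chi_a*chi_b (consistent with eqs. (om0) and (d1)).
chi = sp.symbols('chi1:4', real=True)
chi_sq = sum(c**2 for c in chi)
delta = lambda a, b: 1 if a == b else 0

X = sp.Matrix(3, 3, lambda a, b: delta(a, b) - chi[a]*chi[b])
Y = sp.Matrix(3, 3, lambda a, b: X[a, b]
              + sp.I*sum(sp.LeviCivita(a, b, c)*chi[c] for c in range(3)))

D1 = 1/X.det()          # per-point factor of Delta_1 in (d1)
D2 = -1/(1 - chi_sq)    # per-point factor of Delta_2 quoted in (d2)
D3 = Y.det()            # per-point factor of Delta_3 in (d3)

assert sp.simplify(X.det() - (1 - chi_sq)) == 0     # det X = 1 - chi^2
assert sp.simplify(D3 - (1 - chi_sq)**2) == 0       # Delta_3 factor (1 - chi^2)^2
assert sp.simplify(D1*D2*D3 + 1) == 0               # product is -1 per point
```

The product of the three factors is $-1$ at every point, which is the overall minus sign absorbed into the reversed contour orientation.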
The equation ${\cal G}=0$ can be considered as a complex equation because ${\rm Im}\, {\cal G}=0$ is supplied by the reality conditions. The same is true for the gauge conditions $f^{\mu} {+} g^{\mu}=0$. A fascinating property of these complex delta functions is the possibility of integrating over complex variables without an explicit transition to real coordinates on a contour. By comparing (\ref{algA}) and (\ref{alg}), one can see that $C^{\rho}_{\mu \sigma}$ are just the structure constants of the Ashtekar gravity. (Note that this property does not hold in the variables used by Henneaux \cite{He}.) Therefore, the ghost term in (\ref{ZA2}) produces the ordinary Faddeev--Popov determinant for the Ashtekar gravity. The path integral (\ref{ZA2}) coincides with what one would write naively, just ignoring any Jacobian factors which may arise from the reality conditions and from fixing the Lorentz gauge freedom. Some remarks are in order. First of all, the result (\ref{ZA2}) is valid for a certain class of gauges only. We are not allowed to impose gauge conditions on $A_0^a$. This restriction is needed (i) to cancel contributions to the path integral of the second order structure functions (which are zero for the Ashtekar gravity \cite{AshMaTo}), and (ii) to ensure delta functions of the complex Gauss law constraint. While (i) seems to depend on a particular choice of basic variables and constraints, because the rank of an algebra is not an invariant, the second point (ii) looks more fundamental. The complex Gauss law constraint is needed to prove the vanishing of the imaginary part of the Ashtekar action. We are not allowed to impose gauge conditions on the connection variables. The ultimate reason for this is that the last line of (\ref{prop}) is not true if we replace $P$ by $A$. This restriction will receive a natural explanation in the next section in the framework of the Faddeev path integral. In all other respects the gauge conditions $f^\alpha +g^\alpha$ are arbitrary.
For a given set of admissible YM gauges one can first express $\chi^a$ from three of them and then denote the remaining gauge conditions by $f^\mu +g^\mu$. The path integral for the Ashtekar gravity was previously considered by the present authors and I.~Grigentch in the one-loop approximation over a de Sitter background \cite{GV} and for the Bianchi IX finite dimensional model \cite{AGV}. In these simple cases the reality conditions do not lead to any Jacobian factors if one uses gauge conditions of the YM type. We also observed that one runs into trouble if gauge conditions are imposed on the connection variables. The use of one or another gauge condition is just a matter of convenience. In principle, it is enough to formulate the path integral in just one gauge. All physical results are to be gauge independent. However, the extension of our results to arbitrary gauge conditions still poses an interesting problem from both technical and aesthetic points of view. Note that we excluded the sources for $\chi$, $\omega$ and the Lagrange multipliers. Sources for $\chi$ and $\omega$ are not needed because in the present formulation these fields are absent. Moreover, $\chi$ and $\omega$ can be considered as composite fields. Sources for $\mathop{N}\limits_{\sim}$ and ${\cal N}_D$ can be easily restored without any modification of our procedure. Therefore, we have enough sources to describe any Green functions of the four-metrics and three-dimensional connections. If, however, we introduce a source for $A_0^a$, it penetrates into the delta functions of the Gauss law and Lorentz constraints and destroys the reality of the Ashtekar action. Green functions of $A_0$ are not defined in our approach. At the last step we introduced sources $\bar J$ and $\bar j$ for $P$ and $A$. This makes the exponential in (\ref{ZA2}) complex. Thus, strictly speaking, the path integral is not well defined, even though all finite order Green functions do exist.
If one wishes to be on the safe side, one can easily return to the original sources $J$ and $j$ for $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ and $\eta$. \section{The Faddeev path integral} In this section we give a simpler derivation of the Faddeev path integral \cite{Faddeev} for the Ashtekar gravity, which does not rely upon the heavy machinery of BRST quantization. This also seems to be a proper place to discuss the triad form of the reality conditions. For a dynamical system with canonical variables $q^s,p_s$, first class constraints $\Phi_a$ and a weakly vanishing Hamiltonian, such as the Hilbert--Palatini gravity, the Faddeev path integral reads: \begin{equation} Z=\int {\cal D}q {\cal D}p {\cal D}n F \delta (f^{\alpha}) \exp \left( i\int dt\, (\dot{q}^sp_s +n^\alpha\Phi_\alpha )\right) \label{ZFP} \end{equation} where $f^\alpha$ are gauge fixing functions of the dynamical variables. $F$ is the Faddeev--Popov determinant, $F=\det \{ \Phi_\alpha ,f^\beta \}$. We do not show the source terms explicitly. The expression (\ref{ZFP}) can be obtained from the path integral (\ref{ZHP}) by choosing $g^\alpha =0$ and integrating over the ghost fields $c$ and $\bar c$. Of course, the starting point of the original derivation \cite{Faddeev} of the Faddeev path integral was not the BRST approach. To make the presentation as simple as possible, we fix the Lorentz boosts by the condition \begin{equation} \chi =0 . \label{chic} \end{equation} Now we integrate over ${\cal N}_L^a$, $\chi$ and $\omega$. Again, integration over $\omega$ is equivalent to the following substitution: \begin{equation} \omega_a :=\partial_j \lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E^j_a .\label{sub-om} \end{equation} If the remaining gauge fixing conditions $f^\mu$ are functions of $\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E$ only, the Poisson brackets $\{ f^\mu ,\Phi_L^a\}$ vanish on the surface (\ref{chic}).
Hence the Faddeev--Popov determinant takes the form \begin{equation} F=\det \, \{ f^\mu (\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E ),\Phi_\nu \} = \det\, \{ f^\mu (-iP), \Phi^{[C]}_\nu \}_C \label{FP88} \end{equation} The gauge (\ref{chic}) means that we are using the reality conditions in the triad form \begin{equation} {\rm Re}\, P_a^i=0, \qquad {\rm Re}\,(\partial_t P_a^i)=0 \label{trirc} \end{equation} instead of the metric reality conditions (\ref{1rc}) and (\ref{2rc}). The change of variables $(\lefteqn{\smash{\mathop{\vphantom{<}}\limits^{\;\sim}}}E ,\eta )\to (P,A)$ gives a unit Jacobian factor. Our proof that the imaginary part of the Ashtekar action vanishes is still valid. Hence we arrive at the path integral for the Ashtekar gravity in the Faddeev form: \begin{equation} Z=\int_R {\cal D}P {\cal D}A {\cal D}\mathop{N}\limits_{\sim} {\cal D}{\cal N}_D {\cal D}A_0 F \delta (f^{\mu}(-iP)) \exp \left( iS_{A} \right) \label{FPZ88} \end{equation} where the subscript $R$ now means that the contour of integration is defined by the reality conditions (\ref{trirc}). Of course, most of the comments of the previous section also apply here. \section{Discussion} The main result of the present paper is the path integral (\ref{ZA2}) for the Ashtekar gravity, which is a kind of contour integral. As a byproduct, we also constructed the BRST quantization of the Hilbert--Palatini gravity. The main features of our approach were discussed in detail in Section VI. Here we speculate on the prospects of this approach. The path integral (\ref{ZA2}) is obtained with certain restrictions on possible gauge conditions. In principle, one can transform (\ref{ZA2}) to any other gauge by means of the Faddeev--Popov trick \cite{FaPo}. However, this trick is not so easy to implement in the present context due to the reality conditions and the quite unusual rules of functional integration. Perhaps the restrictions on the gauge conditions may be weakened or even lifted altogether.
Anyhow, one should formulate the criteria of admissibility of gauge conditions for the Ashtekar gravity in terms of the Ashtekar variables, without referring to the Hilbert--Palatini gravity. This definitely will not be easy to do. In general, a function of $P$ is complex valued. Therefore, a condition $f=0$ implies two real gauge fixing conditions, ${\rm Re}\, f=0$ and ${\rm Im}\, f=0$, even if the reality conditions are taken into account. Even the requirement that a given set of gauge conditions removes the correct number of degrees of freedom looks quite non-trivial. One may hope to overcome these difficulties by using the generalized Wick rotation \cite{Wick}. We must admit that for degenerate triads our analysis is incomplete. This reflects a well known problem of the Ashtekar gravity which exists already at the classical level. An intriguing feature of (\ref{ZA2}) is that it is a contour integral. The contour of integration can be deformed as far as the reality conditions allow (this corresponds to the arbitrariness of gauge fixing in the Hilbert--Palatini action). One may hope that certain deformations are possible even beyond these limits. If this is really so, some interesting properties of quantum gravity can manifest themselves. \section*{Acknowledgments} This work was supported by the Russian Foundation for Fundamental Research, grant 97-01-01186, and by GRACENAS through grant 97-0-14.1-61 (D.V.) and the Young Investigator Program (S.A.).
////////////////////////////////////////////////////////////////////////////////////// // // Copyright (c) 2014-2015, Egret Technology Inc. // All rights reserved. // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // * Neither the name of the Egret nor the // names of its contributors may be used to endorse or promote products // derived from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY EGRET AND CONTRIBUTORS "AS IS" AND ANY EXPRESS // OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES // OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. // IN NO EVENT SHALL EGRET AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, // INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;LOSS OF USE, DATA, // OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF // LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING // NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, // EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // ////////////////////////////////////////////////////////////////////////////////////// module eui { var UIComponentClass = "eui.UIComponent"; /** * @language en_US * The TileLayout class arranges layout elements in columns and rows * of equally-sized cells. 
* The TileLayout class uses a number of properties that control orientation, * count, size, gap and justification of the columns and the rows * as well as element alignment within the cells. * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native * @includeExample extension/eui/layout/TileLayoutExample.ts */ /** * @language zh_CN * TileLayout 类在单元格大小相等的列和行中排列布局元素。 * TileLayout 类使用许多属性来控制列和行的方向、计数、大小、间隙和两端对齐以及单元格内的元素对齐。 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native * @includeExample extension/eui/layout/TileLayoutExample.ts */ export class TileLayout extends LayoutBase { /** * @language en_US * Constructor. * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 构造函数。 * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public constructor() { super(); } /** * @private * 标记horizontalGap被显式指定过 */ private explicitHorizontalGap:number = NaN; /** * @private */ private _horizontalGap:number = 6; /** * @language en_US * Horizontal space between columns, in pixels. * * @default 6 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 列之间的水平空间(以像素为单位)。 * * @default 6 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get horizontalGap():number { return this._horizontalGap; } public set horizontalGap(value:number) { value = +value; if (value === this._horizontalGap) return; this.explicitHorizontalGap = value; this._horizontalGap = value; this.invalidateTargetLayout(); } /** * @private * 标记verticalGap被显式指定过 */ private explicitVerticalGap:number = NaN; /** * @private */ private _verticalGap:number = 6; /** * @language en_US * Vertical space between rows, in pixels. 
* * @default 6 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 行之间的垂直空间(以像素为单位)。 * * @default 6 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get verticalGap():number { return this._verticalGap; } public set verticalGap(value:number) { value = +value; if (value === this._verticalGap) return; this.explicitVerticalGap = value; this._verticalGap = value; this.invalidateTargetLayout(); } /** * @private */ private _columnCount:number = -1; /** * @language en_US * Contain the actual column count. * * @default -1 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 实际列计数。 * * @default -1 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get columnCount():number { return this._columnCount; } /** * @private */ private _requestedColumnCount:number = 0; /** * @language en_US * Number of columns to be displayed. * <p>Set to 0 to allow the TileLayout to determine * the column count automatically.</p> * <p>If the <code>orientation</code> property is set to <code>TileOrientation.ROWS</code>, * then setting this property has no effect * In this case, the <code>rowCount</code> is explicitly set, and the * container width is explicitly set. 
</p> * * @default 0 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 要显示的列数。 * <p>设置为 0 会允许 TileLayout 自动确定列计数。</p> * <p>如果将 <code>orientation</code> 属性设置为 <code>TileOrientation.ROWS</code>, * 则设置此属性不会产生任何效果。这种情况下,会显式设置 <code>rowCount</code>,并显式设置容器宽度。</p> * * @default 0 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get requestedColumnCount():number { return this._requestedColumnCount; } public set requestedColumnCount(value:number) { value = +value || 0; if (this._requestedColumnCount === value) return; this._requestedColumnCount = value; this._columnCount = value; this.invalidateTargetLayout(); } /** * @private */ private _rowCount:number = -1; /** * @language en_US * The row count. * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 行计数。 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get rowCount():number { return this._rowCount; } /** * @private */ private _requestedRowCount:number = 0; /** * @language en_US * Number of rows to be displayed. * <p>Set to 0 to remove explicit override and allow the TileLayout to determine * the row count automatically.</p> * <p>If the <code>orientation</code> property is set to * <code>TileOrientation.COLUMNS</code>, setting this property has no effect.
* In this case, <code>columnCount</code> is explicitly set, and the * container height is explicitly set.</p> * * @default 0 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 要显示的行数。 * <p>设置为 0 会删除显式覆盖并允许 TileLayout 自动确定行计数。</p> * <p>如果将 <code>orientation</code> 属性设置为 <code>TileOrientation.COLUMNS</code>, * 则设置此属性不会产生任何效果。这种情况下,会显式设置 <code>columnCount</code>,并显式设置容器高度。</p> * * @default 0 * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get requestedRowCount():number { return this._requestedRowCount; } public set requestedRowCount(value:number) { value = +value || 0; if (this._requestedRowCount == value) return; this._requestedRowCount = value; this._rowCount = value; this.invalidateTargetLayout(); } /** * @private * 外部显式指定的列宽 */ private explicitColumnWidth:number = NaN; /** * @private */ private _columnWidth:number = NaN; /** * @language en_US * Contain the actual column width, in pixels. * <p>If not explicitly set, the column width is * determined from the width of the widest element. </p> * * @default NaN * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 包含实际列宽(以像素为单位)。 * <p>若未显式设置,则根据最宽的元素的宽度确定列宽度。</p> * * @default NaN * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get columnWidth():number { return this._columnWidth; } public set columnWidth(value:number) { value = +value; if (value === this._columnWidth) return; this.explicitColumnWidth = value; this._columnWidth = value; this.invalidateTargetLayout(); } /** * @private * 外部显式指定的行高 */ private explicitRowHeight:number = NaN; /** * @private */ private _rowHeight:number = NaN; /** * @language en_US * The row height, in pixels.
* <p>If not explicitly set, the row height is * determined from the maximum of elements' height.</p> * * @default NaN * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 行高(以像素为单位)。 * <p>如果未显式设置,则从元素的高度的最大值确定行高度。</p> * * @default NaN * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get rowHeight():number { return this._rowHeight; } public set rowHeight(value:number) { value = +value; if (value === this._rowHeight) return; this.explicitRowHeight = value; this._rowHeight = value; this.invalidateTargetLayout(); } /** * @private */ private _paddingLeft:number = 0; /** * @copy eui.LinearLayoutBase#paddingLeft * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get paddingLeft():number { return this._paddingLeft; } public set paddingLeft(value:number) { value = +value || 0; if (this._paddingLeft == value) return; this._paddingLeft = value; this.invalidateTargetLayout(); } /** * @private */ private _paddingRight:number = 0; /** * @copy eui.LinearLayoutBase#paddingRight * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get paddingRight():number { return this._paddingRight; } public set paddingRight(value:number) { value = +value || 0; if (this._paddingRight === value) return; this._paddingRight = value; this.invalidateTargetLayout(); } /** * @private */ private _paddingTop:number = 0; /** * @copy eui.LinearLayoutBase#paddingTop * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get paddingTop():number { return this._paddingTop; } public set paddingTop(value:number) { value = +value || 0; if (this._paddingTop == value) return; this._paddingTop = value; this.invalidateTargetLayout(); } /** * @private */ private _paddingBottom:number = 0; /** * @copy eui.LinearLayoutBase#paddingBottom * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get paddingBottom():number { return this._paddingBottom; } public set
paddingBottom(value:number) { value = +value || 0; if (this._paddingBottom === value) return; this._paddingBottom = value; this.invalidateTargetLayout(); } /** * @private */ private _horizontalAlign:string = JustifyAlign.JUSTIFY; /** * @language en_US * Specifies how to align the elements within the cells in the horizontal direction. * Supported values are * HorizontalAlign.LEFT, HorizontalAlign.CENTER, * HorizontalAlign.RIGHT, JustifyAlign.JUSTIFY. * * @default <code>JustifyAlign.JUSTIFY</code> * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 指定如何在水平方向上对齐单元格内的元素。支持的值有 * HorizontalAlign.LEFT、HorizontalAlign.CENTER、 * HorizontalAlign.RIGHT、JustifyAlign.JUSTIFY。 * * @default <code>JustifyAlign.JUSTIFY</code> * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get horizontalAlign():string { return this._horizontalAlign; } public set horizontalAlign(value:string) { if (this._horizontalAlign == value) return; this._horizontalAlign = value; this.invalidateTargetLayout(); } /** * @private */ private _verticalAlign:string = JustifyAlign.JUSTIFY; /** * @language zh_CN * 指定如何在垂直方向上对齐单元格内的元素。 * 支持的值有 VerticalAlign.TOP、VerticalAlign.MIDDLE、 * VerticalAlign.BOTTOM、JustifyAlign.JUSTIFY。 * 默认值:JustifyAlign.JUSTIFY。 * * @default <code>eui.JustifyAlign.JUSTIFY</code> * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language en_US * Specifies how to align the elements within the cells in the vertical direction.
* Supported values are * VerticalAlign.TOP, VerticalAlign.MIDDLE, * VerticalAlign.BOTTOM, JustifyAlign.JUSTIFY. * * @default <code>eui.JustifyAlign.JUSTIFY</code> * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get verticalAlign():string { return this._verticalAlign; } public set verticalAlign(value:string) { if (this._verticalAlign == value) return; this._verticalAlign = value; this.invalidateTargetLayout(); } /** * @private */ private _columnAlign:string = ColumnAlign.LEFT; /** * @language en_US * Specifies how to justify the fully visible columns to the container width. * * <p>When set to <code>ColumnAlign.LEFT</code> it turns column justification off. * There may be partially visible columns or whitespace between the last column and * the right edge of the container. This is the default value.</p> * * <p>When set to <code>ColumnAlign.JUSTIFY_USING_GAP</code> the <code>horizontalGap</code> * actual value increases so that * the last fully visible column right edge aligns with the container's right edge. * In case there is only a single fully visible column, the <code>horizontalGap</code> actual value * increases so that it pushes any partially visible column beyond the right edge * of the container. * Note that explicitly setting the <code>horizontalGap</code> property does not turn off * justification. It only determines the initial gap value. * Justification may increase it.</p> * * <p>When set to <code>ColumnAlign.JUSTIFY_USING_WIDTH</code> the <code>columnWidth</code> * actual value increases so that * the last fully visible column right edge aligns with the container's right edge. * Note that explicitly setting the <code>columnWidth</code> property does not turn off justification. * It only determines the initial column width value.
* Justification may increase it.</p> * * @default ColumnAlign.LEFT * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 指定如何将完全可见列与容器宽度对齐。 * * <p>设置为 <code>ColumnAlign.LEFT</code> 时,它会关闭列两端对齐。 * 在容器的最后一列和右边缘之间可能存在部分可见的列或空白。这是默认值。</p> * * <p>设置为 <code>ColumnAlign.JUSTIFY_USING_GAP</code> 时,<code>horizontalGap</code> 的实际值将增大, * 这样最后一个完全可见列右边缘会与容器的右边缘对齐。仅存在一个完全可见列时, * <code>horizontalGap</code> 的实际值将增大,这样它会将任何部分可见列推到容器的右边缘之外。 * 请注意显式设置 <code>horizontalGap</code> 属性不会关闭两端对齐。它仅确定初始间隙值。两端对齐可能会增大它。</p> * * <p>设置为 <code>ColumnAlign.JUSTIFY_USING_WIDTH</code> 时,<code>columnWidth</code> 的实际值将增大, * 这样最后一个完全可见列右边缘会与容器的右边缘对齐。请注意显式设置 <code>columnWidth</code> 属性不会关闭两端对齐。 * 它仅确定初始列宽度值。两端对齐可能会增大它。</p> * * @default ColumnAlign.LEFT * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get columnAlign():string { return this._columnAlign; } public set columnAlign(value:string) { if (this._columnAlign == value) return; this._columnAlign = value; this.invalidateTargetLayout(); } /** * @private */ private _rowAlign:string = RowAlign.TOP; /** * @language en_US * Specifies how to justify the fully visible rows to the container height. * * <p>When set to <code>RowAlign.TOP</code> it turns row justification off. * There might be partially visible rows or whitespace between the last row and * the bottom edge of the container. This is the default value.</p> * * <p>When set to <code>RowAlign.JUSTIFY_USING_GAP</code> the <code>verticalGap</code> * actual value increases so that * the last fully visible row bottom edge aligns with the container's bottom edge. * In case there is only a single fully visible row, the value of <code>verticalGap</code> * increases so that it pushes any partially visible row beyond the bottom edge * of the container. Note that explicitly setting the <code>verticalGap</code> does not turn off * justification, but just determines the initial gap value.
* Justification can then increase it.</p> * * <p>When set to <code>RowAlign.JUSTIFY_USING_HEIGHT</code> the <code>rowHeight</code> * actual value increases so that * the last fully visible row bottom edge aligns with the container's bottom edge. Note that * explicitly setting the <code>rowHeight</code> does not turn off justification, but * determines the initial row height value. * Justification can then increase it.</p> * * @default RowAlign.TOP * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 指定如何将完全可见行与容器高度对齐。 * * <p>设置为 <code>RowAlign.TOP</code> 时,它会关闭行两端对齐。 * 在容器的最后一行和底边缘之间可能存在部分可见的行或空白。这是默认值。</p> * * <p>设置为 <code>RowAlign.JUSTIFY_USING_GAP</code> 时,<code>verticalGap</code> 的实际值会增大, * 这样最后一个完全可见行底边缘会与容器的底边缘对齐。仅存在一个完全可见行时,<code>verticalGap</code> 的值会增大, * 这样它会将任何部分可见行推到容器的底边缘之外。请注意,显式设置 <code>verticalGap</code> * 不会关闭两端对齐,而只是确定初始间隙值。两端对齐接着可以增大它。</p> * * <p>设置为 <code>RowAlign.JUSTIFY_USING_HEIGHT</code> 时,<code>rowHeight</code> 的实际值会增大, * 这样最后一个完全可见行底边缘会与容器的底边缘对齐。请注意,显式设置 <code>rowHeight</code> * 不会关闭两端对齐,而只是确定初始行高度值。两端对齐接着可以增大它。</p> * * @default RowAlign.TOP * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get rowAlign():string { return this._rowAlign; } public set rowAlign(value:string) { if (this._rowAlign == value) return; this._rowAlign = value; this.invalidateTargetLayout(); } /** * @private */ private _orientation:string = TileOrientation.ROWS; /** * @language en_US * Specifies whether elements are arranged row by row or * column by column.
* * @default TileOrientation.ROWS * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ /** * @language zh_CN * 指定是逐行还是逐列排列元素。 * * @default TileOrientation.ROWS * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public get orientation():string { return this._orientation; } public set orientation(value:string) { if (this._orientation == value) return; this._orientation = value; this.invalidateTargetLayout(); } /** * @private * 标记目标容器的尺寸和显示列表失效 */ private invalidateTargetLayout():void { var target = this.$target; if (target) { target.invalidateSize(); target.invalidateDisplayList(); } } /** * @inheritDoc * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public measure():void { var target = this.$target; if (!target) return; var savedColumnCount = this._columnCount; var savedRowCount = this._rowCount; var savedColumnWidth = this._columnWidth; var savedRowHeight = this._rowHeight; var measuredWidth = 0; var measuredHeight = 0; var values = target.$UIComponent; this.calculateRowAndColumn(values[sys.UIKeys.explicitWidth], values[sys.UIKeys.explicitHeight]); var columnCount = this._requestedColumnCount > 0 ? this._requestedColumnCount : this._columnCount; var rowCount = this._requestedRowCount > 0 ? this._requestedRowCount : this._rowCount; var horizontalGap = isNaN(this._horizontalGap) ? 0 : this._horizontalGap; var verticalGap = isNaN(this._verticalGap) ? 
0 : this._verticalGap; if (columnCount > 0) { measuredWidth = columnCount * (this._columnWidth + horizontalGap) - horizontalGap; } if (rowCount > 0) { measuredHeight = rowCount * (this._rowHeight + verticalGap) - verticalGap; } var hPadding = this._paddingLeft + this._paddingRight; var vPadding = this._paddingTop + this._paddingBottom; target.setMeasuredSize(measuredWidth + hPadding, measuredHeight + vPadding) this._columnCount = savedColumnCount; this._rowCount = savedRowCount; this._columnWidth = savedColumnWidth; this._rowHeight = savedRowHeight; } /** * @private * 计算行和列的尺寸及数量 */ private calculateRowAndColumn(explicitWidth:number, explicitHeight:number):void { var target = this.$target; var horizontalGap = isNaN(this._horizontalGap) ? 0 : this._horizontalGap; var verticalGap = isNaN(this._verticalGap) ? 0 : this._verticalGap; this._rowCount = this._columnCount = -1; var numElements = target.numElements; var count = numElements; for (var index = 0; index < count; index++) { var layoutElement = <UIComponent> (target.getElementAt(index)); if (!egret.is(layoutElement, UIComponentClass) || !layoutElement.$includeInLayout) { numElements--; continue; } } if (numElements == 0) { this._rowCount = this._columnCount = 0; return; } if (isNaN(this.explicitColumnWidth) || isNaN(this.explicitRowHeight)) this.updateMaxElementSize(); if (isNaN(this.explicitColumnWidth)) { this._columnWidth = this.maxElementWidth; } else { this._columnWidth = this.explicitColumnWidth; } if (isNaN(this.explicitRowHeight)) { this._rowHeight = this.maxElementHeight; } else { this._rowHeight = this.explicitRowHeight; } var itemWidth = this._columnWidth + horizontalGap; //防止出现除数为零的情况 if (itemWidth <= 0) itemWidth = 1; var itemHeight = this._rowHeight + verticalGap; if (itemHeight <= 0) itemHeight = 1; var orientedByColumns = (this._orientation == TileOrientation.COLUMNS); var widthHasSet = !isNaN(explicitWidth); var heightHasSet = !isNaN(explicitHeight); var paddingL = this._paddingLeft; var paddingR 
= this._paddingRight; var paddingT = this._paddingTop; var paddingB = this._paddingBottom; if (this._requestedColumnCount > 0 || this._requestedRowCount > 0) { if (this._requestedRowCount > 0) this._rowCount = Math.min(this._requestedRowCount, numElements); if (this._requestedColumnCount > 0) this._columnCount = Math.min(this._requestedColumnCount, numElements); } else if (!widthHasSet && !heightHasSet) { var side = Math.sqrt(numElements * itemWidth * itemHeight); if (orientedByColumns) { this._rowCount = Math.max(1, Math.round(side / itemHeight)); } else { this._columnCount = Math.max(1, Math.round(side / itemWidth)); } } else if (widthHasSet && (!heightHasSet || !orientedByColumns)) { var targetWidth = Math.max(0, explicitWidth - paddingL - paddingR); this._columnCount = Math.floor((targetWidth + horizontalGap) / itemWidth); this._columnCount = Math.max(1, Math.min(this._columnCount, numElements)); } else { var targetHeight = Math.max(0, explicitHeight - paddingT - paddingB); this._rowCount = Math.floor((targetHeight + verticalGap) / itemHeight); this._rowCount = Math.max(1, Math.min(this._rowCount, numElements)); } if (this._rowCount == -1) this._rowCount = Math.max(1, Math.ceil(numElements / this._columnCount)); if (this._columnCount == -1) this._columnCount = Math.max(1, Math.ceil(numElements / this._rowCount)); if (this._requestedColumnCount > 0 && this._requestedRowCount > 0) { if (this._orientation == TileOrientation.ROWS) this._rowCount = Math.max(1, Math.ceil(numElements / this._requestedColumnCount)); else this._columnCount = Math.max(1, Math.ceil(numElements / this._requestedRowCount)); } } /** * @private * 缓存的最大子对象宽度 */ private maxElementWidth:number = 0; /** * @private * 缓存的最大子对象高度 */ private maxElementHeight:number = 0; /** * @private * 更新最大子对象尺寸 */ private updateMaxElementSize():void { if (!this.$target) return; if (this.$useVirtualLayout) { this.maxElementWidth = Math.max(this.maxElementWidth, this.$typicalWidth); this.maxElementHeight = 
Math.max(this.maxElementHeight, this.$typicalHeight); this.doUpdateMaxElementSize(this.startIndex, this.endIndex); } else { this.doUpdateMaxElementSize(0, this.$target.numElements - 1); } } /** * @private * 更新虚拟布局的最大子对象尺寸 */ private doUpdateMaxElementSize(startIndex:number, endIndex:number):void { var maxElementWidth = this.maxElementWidth; var maxElementHeight = this.maxElementHeight; var bounds = egret.$TempRectangle; var target = this.$target; if ((startIndex != -1) && (endIndex != -1)) { for (var index = startIndex; index <= endIndex; index++) { var elt = <UIComponent> target.getElementAt(index); if (!egret.is(elt, UIComponentClass) || !elt.$includeInLayout) { continue; } elt.getPreferredBounds(bounds); maxElementWidth = Math.max(maxElementWidth, bounds.width); maxElementHeight = Math.max(maxElementHeight, bounds.height); } } this.maxElementWidth = maxElementWidth; this.maxElementHeight = maxElementHeight; } /** * @inheritDoc * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public clearVirtualLayoutCache():void { super.clearVirtualLayoutCache(); this.maxElementWidth = 0; this.maxElementHeight = 0; } /** * @private * 当前视图中的第一个元素索引 */ private startIndex:number = -1; /** * @private * 当前视图中的最后一个元素的索引 */ private endIndex:number = -1; /** * @private * 视图的第一个和最后一个元素的索引值已经计算好的标志 */ private indexInViewCalculated:boolean = false; /** * @inheritDoc * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public scrollPositionChanged():void { if (this.$useVirtualLayout) { var changed = this.getIndexInView(); if (changed) { this.indexInViewCalculated = true; this.$target.invalidateDisplayList(); } } } /** * @private * 获取视图中第一个和最后一个元素的索引,返回是否发生改变 */ private getIndexInView():boolean { if (!this.$target || this.$target.numElements == 0) { this.startIndex = this.endIndex = -1; return false; } var target = this.$target; var numElements = target.numElements; if (!this.$useVirtualLayout) { this.startIndex = 0; this.endIndex = numElements - 1; return 
false; } var values = target.$UIComponent; if (values[sys.UIKeys.width] == 0 || values[sys.UIKeys.height] == 0) { this.startIndex = this.endIndex = -1; return false; } var oldStartIndex = this.startIndex; var oldEndIndex = this.endIndex; var paddingL = this._paddingLeft; var paddingT = this._paddingTop; var horizontalGap = isNaN(this._horizontalGap) ? 0 : this._horizontalGap; var verticalGap = isNaN(this._verticalGap) ? 0 : this._verticalGap; if (this._orientation == TileOrientation.COLUMNS) { var itemWidth = this._columnWidth + horizontalGap; if (itemWidth <= 0) { this.startIndex = 0; this.endIndex = numElements - 1; return false; } var minVisibleX = target.scrollH; var maxVisibleX = minVisibleX + values[sys.UIKeys.width]; var startColumn = Math.floor((minVisibleX - paddingL) / itemWidth); if (startColumn < 0) startColumn = 0; var endColumn = Math.ceil((maxVisibleX - paddingL) / itemWidth); if (endColumn < 0) endColumn = 0; this.startIndex = Math.min(numElements - 1, Math.max(0, startColumn * this._rowCount)); this.endIndex = Math.min(numElements - 1, Math.max(0, endColumn * this._rowCount - 1)); } else { var itemHeight = this._rowHeight + verticalGap; if (itemHeight <= 0) { this.startIndex = 0; this.endIndex = numElements - 1; return false; } var minVisibleY = target.scrollV; var maxVisibleY = minVisibleY + values[sys.UIKeys.height]; var startRow = Math.floor((minVisibleY - paddingT) / itemHeight); if (startRow < 0) startRow = 0; var endRow = Math.ceil((maxVisibleY - paddingT) / itemHeight); if (endRow < 0) endRow = 0; this.startIndex = Math.min(numElements - 1, Math.max(0, startRow * this._columnCount)); this.endIndex = Math.min(numElements - 1, Math.max(0, endRow * this._columnCount - 1)); } return this.startIndex != oldStartIndex || this.endIndex != oldEndIndex; } /** * @inheritDoc * * @version Egret 2.4 * @version eui 1.0 * @platform Web,Native */ public updateDisplayList(width:number, height:number):void { super.updateDisplayList(width, height); if 
(!this.$target) return; var target = this.$target; var paddingL = this._paddingLeft; var paddingR = this._paddingRight; var paddingT = this._paddingTop; var paddingB = this._paddingBottom; if (this.indexInViewCalculated) { this.indexInViewCalculated = false; } else { this.calculateRowAndColumn(width, height); if (this._rowCount == 0 || this._columnCount == 0) { target.setContentSize(paddingL + paddingR, paddingT + paddingB); return; } this.adjustForJustify(width, height); this.getIndexInView(); } if (this.$useVirtualLayout) { this.calculateRowAndColumn(width, height); this.adjustForJustify(width, height); } if (this.startIndex == -1 || this.endIndex == -1) { target.setContentSize(0, 0); return; } var endIndex = this.endIndex; target.setVirtualElementIndicesInView(this.startIndex, endIndex); var elt:UIComponent; var x:number; var y:number; var columnIndex:number; var rowIndex:number; var orientedByColumns = (this._orientation == TileOrientation.COLUMNS); var index = this.startIndex; var horizontalGap = isNaN(this._horizontalGap) ? 0 : this._horizontalGap; var verticalGap = isNaN(this._verticalGap) ? 
0 : this._verticalGap; var rowCount = this._rowCount; var columnCount = this._columnCount; var columnWidth = this._columnWidth; var rowHeight = this._rowHeight; for (var i = this.startIndex; i <= endIndex; i++) { elt = <UIComponent> target.getElementAt(i); if (!egret.is(elt, UIComponentClass) || !elt.$includeInLayout) { continue; } if (orientedByColumns) { columnIndex = Math.ceil((index + 1) / rowCount) - 1; rowIndex = Math.ceil((index + 1) % rowCount) - 1; if (rowIndex == -1) rowIndex = rowCount - 1; } else { columnIndex = Math.ceil((index + 1) % columnCount) - 1; if (columnIndex == -1) columnIndex = columnCount - 1; rowIndex = Math.ceil((index + 1) / columnCount) - 1; } x = columnIndex * (columnWidth + horizontalGap) + paddingL; y = rowIndex * (rowHeight + verticalGap) + paddingT; this.sizeAndPositionElement(elt, x, y, columnWidth, rowHeight); index++; } var hPadding = paddingL + paddingR; var vPadding = paddingT + paddingB; var contentWidth = (columnWidth + horizontalGap) * columnCount - horizontalGap; var contentHeight = (rowHeight + verticalGap) * rowCount - verticalGap; target.setContentSize(contentWidth + hPadding, contentHeight + vPadding); } /** * @private * 为单个元素布局 */ private sizeAndPositionElement(element:UIComponent, cellX:number, cellY:number, cellWidth:number, cellHeight:number):void { var elementWidth = NaN; var elementHeight = NaN; var values = element.$UIComponent; if (this._horizontalAlign == JustifyAlign.JUSTIFY) elementWidth = cellWidth; else if (!isNaN(values[sys.UIKeys.percentWidth])) elementWidth = cellWidth * values[sys.UIKeys.percentWidth] * 0.01; if (this._verticalAlign == JustifyAlign.JUSTIFY) elementHeight = cellHeight; else if (!isNaN(values[sys.UIKeys.percentHeight])) elementHeight = cellHeight * values[sys.UIKeys.percentHeight] * 0.01; element.setLayoutBoundsSize(Math.round(elementWidth), Math.round(elementHeight)); var x = cellX; var bounds = egret.$TempRectangle; element.getLayoutBounds(bounds); switch (this._horizontalAlign) { case 
egret.HorizontalAlign.RIGHT: x += cellWidth - bounds.width; break; case egret.HorizontalAlign.CENTER: x = cellX + (cellWidth - bounds.width) / 2; break; } var y = cellY; switch (this._verticalAlign) { case egret.VerticalAlign.BOTTOM: y += cellHeight - bounds.height; break; case egret.VerticalAlign.MIDDLE: y += (cellHeight - bounds.height) / 2; break; } element.setLayoutBoundsPosition(Math.round(x), Math.round(y)); } /** * @private * 为两端对齐调整间隔或格子尺寸 */ private adjustForJustify(width:number, height:number):void { var paddingL = this._paddingLeft; var paddingR = this._paddingRight; var paddingT = this._paddingTop; var paddingB = this._paddingBottom; var targetWidth = Math.max(0, width - paddingL - paddingR); var targetHeight = Math.max(0, height - paddingT - paddingB); if (!isNaN(this.explicitVerticalGap)) this._verticalGap = this.explicitVerticalGap; if (!isNaN(this.explicitHorizontalGap)) this._horizontalGap = this.explicitHorizontalGap; this._verticalGap = isNaN(this._verticalGap) ? 0 : this._verticalGap; this._horizontalGap = isNaN(this._horizontalGap) ? 
0 : this._horizontalGap; var offsetY = targetHeight - this._rowHeight * this._rowCount; var offsetX = targetWidth - this._columnWidth * this._columnCount; var gapCount; if (offsetY > 0) { if (this._rowAlign == RowAlign.JUSTIFY_USING_GAP) { gapCount = Math.max(1, this._rowCount - 1); this._verticalGap = offsetY / gapCount; } else if (this._rowAlign == RowAlign.JUSTIFY_USING_HEIGHT) { if (this._rowCount > 0) { this._rowHeight += (offsetY - (this._rowCount - 1) * this._verticalGap) / this._rowCount; } } } if (offsetX > 0) { if (this._columnAlign == ColumnAlign.JUSTIFY_USING_GAP) { gapCount = Math.max(1, this._columnCount - 1); this._horizontalGap = offsetX / gapCount; } else if (this._columnAlign == ColumnAlign.JUSTIFY_USING_WIDTH) { if (this._columnCount > 0) { this._columnWidth += (offsetX - (this._columnCount - 1) * this._horizontalGap) / this._columnCount; } } } } } if(DEBUG){ egret.$markReadOnly(TileLayout,"columnCount"); egret.$markReadOnly(TileLayout,"rowCount"); } }
"Accurate attribution of a cyber incident takes time and investigations are being undertaken together with the relevant protection agencies," they stated. The clock rates of existing computers still remain in the single-gigahertz range; however, in a significant leap forward, researchers have achieved ultrafast clock rates in the terahertz frequency range by using light. The experiment, conducted at the Max Born Institute, used extremely short, intense light pulses ranging from near-infrared to visible orange to generate oscillating currents in a semiconductor called gallium arsenide. According to the researchers, electric currents are conventionally created using semiconductor crystals that absorb light. In this case, the oscillations prompted the chip to emit terahertz radiation with a bandwidth of up to 20 THz. This shows that electronic charge transfer can occur between neighboring atoms within the crystal lattice, which is the underlying mechanism. The breakthrough may have exciting applications in high-frequency electronics, leading to the development of computer systems much faster than existing ones. Eventually, computers and other related electronics may run on light in the form of photons, marking an ultimate shift toward light-based technology.
Q: How do I find out the key type in a loop? With different loop constructs the keys all look different, but they reduce to a single type. I need to determine, inside the loop, what the type of the key is on each iteration; how can this be done? let arr = []; arr[0] = 0; arr[1] = 1; arr["2"] = 2; for (let i of arr) { console.log(typeof(i)); } for (let i in arr) { console.log(typeof(i)); } arr.forEach(i => console.log(typeof(i)));
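A sketch that makes the difference visible (written in TypeScript for the type annotations; plain JavaScript behaves identically): `for...of` and `forEach` yield *values*, so `typeof` reports the value's type, while `for...in` yields property *keys*, which for arrays are always strings. Note that `arr["2"] = 2` is equivalent to `arr[2] = 2`, since JavaScript coerces the numeric string to an array index; the numeric form is used below to satisfy the TypeScript checker.

```typescript
// Demonstrates what each loop construct actually yields for an array.
const arr: number[] = [];
arr[0] = 0;
arr[1] = 1;
arr[2] = 2; // same effect as arr["2"] = 2 in plain JS

const ofTypes: string[] = [];
for (const v of arr) {
    ofTypes.push(typeof v); // values -> "number"
}

const inTypes: string[] = [];
for (const k in arr) {
    inTypes.push(typeof k); // property keys -> always "string"
}

const forEachTypes: string[] = [];
arr.forEach((v, k) => {
    // forEach passes the value first and the numeric index second
    forEachTypes.push(`${typeof v}/${typeof k}`);
});

console.log(ofTypes, inTypes, forEachTypes);
```

So if a numeric key is needed inside `for...in`, convert it explicitly with `Number(k)`.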
New Mexico detention officer files lawsuit against county over coronavirus vaccine mandate Isaac Legareta, an officer who was fired from working at the Doña Ana County Detention Center in New Mexico for refusing a Wuhan coronavirus (COVID-19) vaccine, is suing the county manager for imposing the illegal and unconstitutional vaccine mandate. The complaint addresses an illicit directive issued by Fernando Macias, the county manager in question, which orders all county-employed first responders, sheriff's deputies, firefighters and detention center officers to be vaccinated with either the Pfizer-BioNTech China virus jab or the Moderna China virus jab. Legareta maintains that requiring anyone to take a vaccine that is not yet fully approved by the Food and Drug Administration (FDA) violates the Food, Drug and Cosmetic Act. In the case of Wuhan coronavirus (COVID-19) vaccines, none of them have received formal approval and all of them are being administered under Emergency Use Authorization (EUA). WuFlu vaccines from both Pfizer-BioNTech and Moderna, along with a third from Johnson & Johnson (J&J) that was just given emergency approval over the weekend, are all undergoing the clinical trial process, which typically takes several years to complete. Federal law requires that full disclosures be given to individuals about unapproved drug products such as these explaining "the option to accept or refuse administration of the product, of the consequences, if any, of refusing administration of the product, and of the alternatives to the product that are available and of their benefits and risks." In other words, nobody can be forced to take an EUA-designated Wuhan coronavirus (COVID-19) vaccine. Anyone who tries to force it is guilty of violating federal law.
CDC advisory committee noted that EUA vaccines cannot be mandated Last summer, the Centers for Disease Control and Prevention's (CDC) advisory committee on immunization practices announced that vaccines released under EUA provisions cannot be mandated. This includes Wuhan coronavirus (COVID-19) vaccines. In a memo, Macias tried to claim that unless employees of the county were granted an accommodation, "being vaccinated is a requirement and a condition of on-going employment with the County due to the significant health and safety risks posed by contracting or spreading COVID-19." This memo is a violation of county employees' rights, Legareta's lawsuit explains. Legareta also says he was written up at work and threatened with losing his job for refusing the jab, which is also illegal. Legareta attached to his legal filing a personal memo issued by his superiors and dated Feb. 17 that demands he provide proof of receiving the shot within five days or else face termination. The lawsuit seeks an injunction against termination by the county, or reinstatement should he be fired before the court issues its ruling. Legareta is barred under New Mexico state law from seeking any monetary damages for retaliatory discharge, should this occur. Legareta is also requesting a temporary restraining order and preliminary injunction against any further enforcement or disciplinary action, including possible termination, in response to his refusal of the vaccine. Nancy Ana Garner, his Santa Fe-based attorney, had previously sent the county a cease-and-desist letter, along with a notice of impending litigation. Garner wrote this letter on behalf of New Mexico Stands Up, a group that opposes the state's draconian Wuhan coronavirus (COVID-19) "public health emergency" directives. Also named in Legareta's suit are Bryan Baker, the detention center's director and Capt. Ben Mendoza. 
As of this writing, most of Doña Ana County's first responders had reportedly received at least one dose of China virus vaccine or a waiver. This includes 195 out of the 203 staff members who work at the detention center. Should any of these county workers end up falling ill or dying as a result of the jabs, there could be more lawsuits soon to follow. For more related news about Wuhan coronavirus (COVID-19) tyranny, check out Pandemic.news. Tagged Under: BioNTech, coronavirus, covid-19, FDA, Food and Drug Administration, Illegal, lawsuit, mandate, Moderna, Pfizer, unconstitutional, vaccine
Source: https://stats.stackexchange.com/questions/322608/different-results-for-between-within-groups-and-within-group-regression-analyses

# Different results for between/within groups and within group regression analyses

I have a lmer model that analyses the interaction between a treatment (4 levels, name: Relation_PenultimateLast) and 3 groups (ExpertiseType), crossed. In this model I have 3 by-group random effects.

The function used is lmer from the library lme4, extended through the lmerTest library.

Here is the formula:

    f.e.model = lmer(Score ~ Relation_PenultimateLast*ExpertiseType + (1|TrajectoryType) + (1|StimulusType) + (1|LastPosition), data=datasheet.complete)

Results:

    Scaled residuals:
    Min 1Q Median 3Q Max
    -2.43700 -0.87535 -0.03117 0.76091 2.06034

    Random effects:
    Groups Name Variance Std.Dev.
    TrajectoryType (Intercept) 0.019520 0.13971
    LastPosition (Intercept) 0.008778 0.09369
    StimulusType (Intercept) 0.028348 0.16837
    Residual 1.292387 1.13683
    Number of obs: 8200, groups:
    TrajectoryType, 25; LastPosition, 8; StimulusType, 4

    Fixed effects:
    Estimate Std. Error df t value
    (Intercept) 3.34934 0.13401 17.00000 24.993
    Relation_PenultimateLast -0.08738 0.03453 77.00000 -2.531
    ExpertiseType -0.09808 0.03639 8165.00000 -2.695
    Relation_PenultimateLast:ExpertiseType 0.05224 0.01271 8165.00000 4.110
    Pr(>|t|)
    (Intercept) 7.55e-15 ***
    Relation_PenultimateLast 0.01343 *
    ExpertiseType 0.00705 **
    Relation_PenultimateLast:ExpertiseType 3.99e-05 ***

Using the plot function:

    f.e.model.plot = datasheet.complete
    f.e.model.plot$fit <- predict(f.e.model)
    interaction.plot(x.factor = f.e.model.plot$Relation_PenultimateLast, trace.factor = f.e.model.plot$ExpertiseType, response = f.e.model.plot$fit, fun = mean,
    type = "b", legend = TRUE,
    fixed=TRUE,
    xlab = "Penultimate_Last category", ylab="Cadential effectiveness", trace.label = "Expertise",
    pch=c(1,19), col = c("#00AFBB", "#E7B800", "#FF0000")
    )

I obtain this graph:

Note the yellow line, ParticipantType = 2

I would expect the yellow line to represent the effects of the treatment within group 2, but if I run the same analysis model within the group:

    datasheet.complete.performers = subset(datasheet.complete, ExpertiseType==2) #create a subset with only composers
    f.e.model.performers = lmer(Score ~ Relation_PenultimateLast + (1|TrajectoryType) + (1|StimulusType) + (1|LastPosition), data=datasheet.complete.performers)

Results:

    Scaled residuals:
    Min 1Q Median 3Q Max
    -2.41905 -0.87678 0.02313 0.76503 1.85794

    Random effects:
    Groups Name Variance Std.Dev.
    TrajectoryType (Intercept) 0.01906 0.1381
    LastPosition (Intercept) 0.01179 0.1086
    StimulusType (Intercept) 0.06358 0.2522
    Residual 1.39162 1.1797
    Number of obs: 2400, groups:
    TrajectoryType, 25; LastPosition, 8; StimulusType, 4

    Fixed effects:
    Estimate Std. Error df t value Pr(>|t|)
    (Intercept) 3.40381 0.15825 6.70100 21.509 1.96e-07 ***
    Relation_PenultimateLast -0.03909 0.03059 23.09500 -1.278 0.214

I obtain a completely different scenario:

    f.e.model.performers.plot = datasheet.complete.performers
    f.e.model.performers.plot$fit <- predict(f.e.model.performers)
    interaction.plot(x.factor = f.e.model.performers.plot$Relation_PenultimateLast, trace.factor = f.e.model.performers.plot$ExpertiseType, response = f.e.model.performers.plot$fit, fun = mean,
    type = "b", legend = TRUE,
    fixed=TRUE,
    xlab = "Penultimate_Last category", ylab="Cadential effectiveness", trace.label = "Expertise",
    pch=c(1,19), col = c("#E7B800"))

Should not the two representations of the effect of Relation_PenultimateLast be the same? Should I consider the second graph the correct representation? Or should this be a warning that there is still some random effect that is not counted in the formula?

• Welcome to CV. This is a great question. Please add the library you are using. – Ferdi Jan 11 '18 at 14:28
• Hi @Ferdi, I updated the question. Paragraph 2 - lme4 + lmerTest. I don't know what's the library for the predict() function, unfortunately – Luca Danieli Jan 11 '18 at 14:32
• Thank you. predict() should be already installed. Have a look here: rdocumentation.org/packages/stats/versions/3.4.3/topics/predict – Ferdi Jan 11 '18 at 14:49
• Is it possible that the predict() function is not good for lmer() as they suppose in this post? stats.stackexchange.com/questions/174203/… – Luca Danieli Jan 11 '18 at 14:52
• If you do not get a satisfactory response here then (1) try adding the tag for nlme (2) ask on the r-sig-mixed-models mailing list (telling them you failed here of course). I suspect this may have to do with the random effects but I am not an expert here so I leave that to others to try to answer. – mdewey Jan 11 '18 at 14:57
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="?android:attr/listPreferredItemHeight"
    android:padding="4dp">

    <FrameLayout
        android:id="@+id/iconContainer"
        android:layout_width="40dp"
        android:layout_height="match_parent"
        android:layout_alignParentBottom="true"
        android:layout_alignParentTop="true"
        android:layout_marginRight="6dp">

        <ImageView
            android:id="@+id/icon"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:src="@drawable/ic_launcher"
            android:layout_gravity="center"/>

        <ProgressBar
            style="?android:attr/progressBarStyleHorizontal"
            android:id="@+id/progress"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center"
            android:indeterminate="false"
            android:max="100"
            android:visibility="invisible"/>
    </FrameLayout>

    <TextView
        android:id="@+id/text"
        android:layout_width="fill_parent"
        android:layout_height="15dp"
        android:layout_alignParentBottom="true"
        android:layout_alignParentRight="true"
        android:layout_toRightOf="@id/iconContainer"
        android:ellipsize="marquee"
        android:singleLine="true"
        android:textSize="12sp"/>

    <TextView
        android:id="@+id/name"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_above="@id/text"
        android:layout_alignParentRight="true"
        android:layout_alignParentTop="true"
        android:layout_alignWithParentIfMissing="true"
        android:layout_toRightOf="@id/iconContainer"
        android:gravity="top"
        android:textSize="16sp"/>
</RelativeLayout>
\section{Introduction} Mathematical models are now commonly used in the study of the growth of cell tissue. For instance, a wide literature is now available on the study of tumor growth through mathematical modeling and numerical simulations \cite{Bellomo1,Bellomo2,Friedman,Lowengrub}. In such models, we may distinguish two kinds of description: either they describe the dynamics of the cell population density (see e.g. \cite{Byrne,BenAmar}), or they consider the geometric motion of the tissue through a free boundary problem of Hele-Shaw type (see e.g. \cite{Greenspan,FriedmanHu,Cui,Lowengrub}). Recently the link between both descriptions has been investigated from a mathematical point of view thanks to an incompressible limit \cite{PQV}. In this paper, we depart from the simplest cell population model as proposed in \cite{BD}. In this model the dynamics of the cell density is driven by pressure forces and cell multiplication. More precisely, let us denote by $n(t,x)$ the cell density depending on time $t\geq 0$ and position $x\in \mathbb{R}^d$, and by $p$ the mechanical pressure. The mechanical pressure depends only on the cell density and is given by a state law $p=\Pi(n)$. Cell proliferation is modelled by a pressure-limited growth function denoted $G$. The mechanical pressure generates cell displacement with a velocity field $v$ computed through Darcy's law. After normalizing all coefficients, the model reads \begin{align*} & \partial_t n + \nabla\cdot (n v) = n G(p), \quad \mbox{ on } \mathbb{R}^+\times\mathbb{R}^d, \\ & v = - \nabla p, \qquad p = \Pi(n). \end{align*} The choice $\Pi(n)= \frac{\gamma}{\gamma -1}n^{\gamma-1}$ has been made in \cite{PQV,PQTV,PV}. This choice allows us to recover the well-known porous medium equation, for which many nice mathematical properties are now well established (see e.g. \cite{Vazquez}). The incompressible limit is then obtained by letting $\gamma$ go to $+\infty$.
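Indeed, a direct computation explains the link with the porous medium equation: since $\Pi'(n)=\gamma n^{\gamma-2}$, one has
\begin{align*}
n\nabla \Pi(n) = \gamma n^{\gamma-1}\nabla n = \nabla (n^{\gamma}),
\end{align*}
so that the system above may be rewritten as $\partial_t n - \Delta (n^{\gamma}) = n G(\Pi(n))$.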
However, this state law does not prevent cells from overlapping. In fact, with this choice it is not possible to prevent the cell density from taking values above $1$ (which corresponds here to the maximal packing density after normalization). A convenient way to avoid cell overlapping is to consider a pressure law which becomes singular when the cell density approaches $1$. Such a type of singularity is encountered, for instance, in the kinetic theory of dense gases, where the interaction between molecules is strongly repulsive at very short distance \cite{Chapman}. Similar singular pressure laws have also been considered in \cite{Degond,Degond2} to model collective motion, in \cite{Berthelin,Berthelin2} to model traffic flow, and in \cite{Ewelina} to model crowd motion (see also the review article \cite{Maury}). Then, in order to enforce this non-overlapping constraint, we consider the following simple pressure law: $$ P(n)=\epsilon \frac{n}{1-n}. $$ Finally, the model under study in this paper reads, for $\epsilon>0$, \begin{align} \partial_t n_\epsilon - \nabla \cdot (n_\epsilon \nabla p_\epsilon) = n_\epsilon G(p_\epsilon), \label{eq:n} \\ p_\epsilon = P(n_\epsilon) = \epsilon \frac{n_\epsilon}{1-n_\epsilon}. \label{eq:p} \end{align} This system is complemented by an initial datum denoted $n_\epsilon^{ini}$. The aim of this paper is to investigate the incompressible limit of this model, which consists in letting $\epsilon$ go to $0$ in the latter system. At this stage, it is of great importance to observe that from \eqref{eq:n} we may deduce an equation for the pressure, by simply multiplying \eqref{eq:n} by $P'(n_\epsilon)$ and using the relation $n_\epsilon=\frac{p_\epsilon}{\epsilon+p_\epsilon}$ from \eqref{eq:p}: \begin{equation}\label{eq:p1} \partial_t p_{\epsilon} - (\frac{p_{\epsilon}^2}{\epsilon}+p_{\epsilon})\Delta p_{\epsilon} - |\nabla p_{\epsilon}|^2 = (\frac{p_{\epsilon}^2}{\epsilon}+p_{\epsilon}) G(p_{\epsilon}).
\end{equation} Formally, we deduce from \eqref{eq:p1} that when $\epsilon\to 0$, we expect to have the relation \begin{equation}\label{eq:p0} -p_0^2 \Delta p_0 = p_0^2 G(p_0). \end{equation} Moreover, passing formally to the limit in \eqref{eq:p}, it appears clearly that $(1-n_0) p_0=0$. We deduce from this relation that if we introduce the set $\Omega_0(t)=\{p_0>0\}$, then we obtain a free boundary problem of Hele-Shaw type: on $\Omega_0(t)$, we have $n_0=1$ and $-\Delta p_0 = G(p_0)$, whereas $p_0=0$ on $\mathbb{R}^d\setminus \Omega_0(t)$. Thus, although the pressure law is different, we expect to recover the same free boundary Hele-Shaw model as in \cite{PQV}. The incompressible limit of the above cell mechanical model for tumor growth with a pressure law given by $\Pi(n)=\frac{\gamma}{\gamma-1} n^{\gamma-1}$ has been investigated in \cite{PQV}, and in \cite{PQTV} when taking into account the active motion of cells. In \cite{PV}, the case with viscosity, where Darcy's law is replaced by Brinkman's law, is studied. We mention also the recent works \cite{KP,MPQ} where the incompressible limit with more general assumptions on the initial data has been investigated. However, in all these mentioned works the pressure laws do not prevent the overlapping of cells. To our knowledge, this work is the first attempt to extend the previous results to this constraint, i.e. to a singular pressure law as given by \eqref{eq:p}. The outline of the paper is the following. In the next section we give the statement of the main result in Theorem \ref{TH1}, which is the convergence, as $\epsilon$ goes to $0$, of the mechanical model \eqref{eq:n}--\eqref{eq:p} towards the Hele-Shaw free boundary system. The rest of the paper is devoted to the proof of this result. First, in section \ref{sec:estim} we establish some a priori estimates allowing us to obtain space compactness. Then, section \ref{sec:tcompact} is devoted to the study of time compactness.
Thanks to these compactness results, we can pass to the limit $\epsilon\to 0$ in system \eqref{eq:n}--\eqref{eq:p} in section \ref{sec:conv}, up to the extraction of a subsequence. Finally, the proof of the complementary relation \eqref{eq:p0} is performed in section \ref{sec:compl}. \section{Main result} The aim of this paper is to establish the incompressible limit $\epsilon \to 0$ of the cell mechanical model with non-overlapping constraint \eqref{eq:n}--\eqref{eq:p}. Before stating our main result, we list the set of assumptions that we use on the growth function and on the initial data. For the growth function, we assume \begin{equation}\label{hypG} \left\{ \begin{aligned} &\exists\, G_m>0, \quad \| G \|_{\infty} \leq G_m,\\ & G' <0, \quad \mbox{ and }\ \exists \gamma>0, \quad \min_{[0,P_M]} |G'| = \gamma,\\ &\exists\, P_M>0, \quad G(P_M)=0. \end{aligned} \right. \end{equation} The quantity $P_M$, for which the growth stops, is commonly called the homeostatic pressure \cite{Prost}. This set of assumptions on the growth function is quite similar to the one in \cite{PQV}, except for the bound on the growth term, which is needed here due to the singularity in the pressure law. For the initial data, we assume that there exists $\epsilon_0>0$ such that for all $\epsilon\in (0,\epsilon_0)$, \begin{equation}\label{hypini} \left\{ \begin{aligned} & 0 \leq n^{ini}_{\epsilon}, \qquad \ p^{ini}_{\epsilon}:= \epsilon \frac{n_\epsilon^{ini}}{1-n_\epsilon^{ini}} \leq P_M, \\ & \| \partial_{x_i} n^{ini}_{\epsilon} \|_{L^1(\mathbb{R}^d)} \leq C, \qquad i=1,...,d,\\ & \exists \, n^{ini}_0 \in L^1_+(\mathbb{R}^d), \quad \|n^{ini}_\epsilon - n^{ini}_0\|_{L^1(\mathbb{R}^d)} \to 0 \mbox{ as }\epsilon\to 0, \\ & \exists\, K \subset \mathbb{R}^d,\ K \mbox{ compact}, \quad \forall\, \epsilon\in(0,\epsilon_0), \ \mbox{supp } n_\epsilon^{ini} \subset K. \\ \end{aligned} \right.
\end{equation} Notice that this set of assumptions implies that $n_\epsilon^{ini}$ is uniformly bounded in $W^{1,1}(\mathbb{R}^d)$. We are now in a position to state our main result. \begin{thm}\label{TH1} Let $T>0$, $Q_T=(0,T)\times\mathbb{R}^d$. Let $G$ and $(n^{ini}_{\epsilon})$ satisfy assumptions \eqref{hypG} and \eqref{hypini} respectively. After extraction of subsequences, both the density $n_{\epsilon}$ and the pressure $p_{\epsilon}$ converge strongly in $L^1(Q_T)$ as $\epsilon \rightarrow 0$ to limits $n_0 \in C([0,T];L^1(\mathbb{R}^d))\cap BV(Q_T)$ and $p_0 \in BV(Q_T)\cap L^2([0,T];H^1(\mathbb{R}^d))$, which satisfy \begin{align} \label{boundn0p0} & 0 \leq n_0 \leq 1, \quad 0 \leq p_0 \leq P_M, \\ \label{eqn0} & \partial_t n_0 - \Delta p_0 = n_0 G(p_0), \mbox{ in } \mathcal{D}'(Q_T), \end{align} and \begin{align} \label{eq2n0} \partial_t n_0 - \nabla \cdot ( n_0 \nabla p_0) = n_0 G(p_0), & \quad \text{ in } \mathcal{D}'(Q_T). \end{align} Moreover, we have the relation \begin{align}\label{n0p0} (1-n_0)p_0=0, \end{align} and the complementary relation \begin{align}\label{compl} p_0^2(\Delta p_0 + G(p_0)) =0, & \quad \text{ in } \mathcal{D}'(Q_T). \end{align} \end{thm} This result extends the one in \cite{PQV} to singular pressure laws with a non-overlapping constraint. We notice that we recover the same limit model, whose uniqueness has already been stated in \cite[Theorem 2.4]{PQV}. Although our proof follows the ideas in \cite{PQV}, several technical difficulties must be overcome due to the singularity of the pressure law. Indeed, we first recall that with the choice $\Pi(n)=\frac{\gamma}{\gamma-1} n^{\gamma-1}$, equation \eqref{eq:n} may be rewritten as the porous medium equation $\partial_t n - \Delta n^\gamma = n G(\Pi(n))$. A lot of estimates are known and well established for this equation (see \cite{Vazquez}); in particular, a semiconvexity estimate is used in \cite{PQV}, which allows one to obtain an estimate on the time derivative and thus compactness.
With our choice of pressure law, \eqref{eq:n} should be considered as a fast diffusion equation. Thus we first have to state a comparison principle to obtain a priori estimates (see Lemma \ref{lem:estim}). Unlike in \cite{PQV}, we may not use a semiconvexity estimate to obtain an estimate on the time derivative. To do so, we use regularizing effects (see section \ref{sec:tcompact}). Then the convergence proof has to be adapted to these new estimates. \begin{figure} \includegraphics[width=0.49\linewidth]{plot_eps=05_p2.png} \includegraphics[width=0.49\linewidth]{plot_m=20_p1.png} \caption{Comparison between numerical solutions computed with two different pressure laws. The red line corresponds to the cell density $n$ solving \eqref{eq:n}; the dashed line corresponds to the constant value $1$. On the left, the pressure law is $p=P(n) = 0.5 \frac{n}{1-n}$. On the right, the pressure law is $p=\Pi(n)=\frac{\gamma}{\gamma-1} n^{\gamma-1}$ with $\gamma=20$.} \label{fig1} \end{figure} Finally, we illustrate the comparison between the two pressure laws $P$ and $\Pi$ by a numerical simulation. We display in Figure \ref{fig1} the density computed thanks to a discretization of \eqref{eq:n} with an upwind scheme. In Figure \ref{fig1}-left, the pressure law is $p=P(n)=\epsilon\frac{n}{1-n}$ as in \eqref{eq:p} with $\epsilon=0.5$. In Figure \ref{fig1}-right, the pressure law is $p=\Pi(n)=\frac{\gamma}{\gamma-1} n^{\gamma-1}$ with $\gamma=20$. We take $G(p)= 10(10-p)_+$ as growth function (which satisfies assumption \eqref{hypG} with $P_M=10$). The dashed lines in these plots correspond to the constant value $1$. As expected, we observe that the density $n$ is bounded by $1$ in the case of the pressure law $P$, whereas it takes values greater than $1$ for the pressure law $\Pi$. This observation illustrates the fact that the choice of the pressure law $\Pi$ does not prevent overlapping.
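Before turning to the estimates, let us record the elementary computation leading to the pressure equation \eqref{eq:p1}: from \eqref{eq:p} we have
\begin{align*}
P'(n_\epsilon) = \frac{\epsilon}{(1-n_\epsilon)^2}, \qquad 1-n_\epsilon = \frac{\epsilon}{\epsilon+p_\epsilon},
\end{align*}
so that
\begin{align*}
n_\epsilon P'(n_\epsilon) = \frac{p_\epsilon}{\epsilon+p_\epsilon}\, \frac{(\epsilon+p_\epsilon)^2}{\epsilon} = \frac{p_\epsilon^2}{\epsilon}+p_\epsilon,
\end{align*}
which is precisely the coefficient of $\Delta p_\epsilon$ and $G(p_\epsilon)$ in \eqref{eq:p1}.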
\section{A priori estimates}\label{sec:estim} \subsection{Nonnegativity principle} The following lemma establishes the nonnegativity of the density. \begin{lem}\label{lem:nonneg} Let $(n_\epsilon,p_\epsilon)$ be a solution to \eqref{eq:n} such that $n_\epsilon^{ini}\geq 0$ and $\|G\|_\infty \leq G_m <\infty$. Then, for all $t\geq 0$, $n_\epsilon(t)\geq 0$. \end{lem} \begin{proof} We have the equation $$\partial_t n_{\epsilon} - \nabla \cdot (n_{\epsilon} \nabla p_{\epsilon}) = n_{\epsilon} G(p_{\epsilon}). $$ We use the Stampacchia method. We multiply by $-\mathbf{1}_{n_{\epsilon}<0}$; then, using the notation $|n|_- = \max(0,-n)$ for the negative part, we get $$\partial_t |n_{\epsilon}|_{-} -\nabla \cdot (|n_{\epsilon}|_{-} \nabla p_{\epsilon}) = |n_{\epsilon}|_{-} G(p_{\epsilon}). $$ We integrate in space; using assumption \eqref{hypG}, we deduce $$\frac{d}{dt} \int_{\mathbb{R}^d} |n_{\epsilon}|_{-}dx \leq \int_{\mathbb{R}^d} |n_{\epsilon}|_{-} G(p_{\epsilon})dx \leq G_m\int_{\mathbb{R}^d} |n_{\epsilon}|_{-}dx. $$ So, after a time integration, $$ \int_{\mathbb{R}^d} |n_{\epsilon}|_{-}\,dx \leq e^{G_mt} \int_{\mathbb{R}^d} |n_{\epsilon}^{ini}|_{-}\,dx. $$ With the initial condition $n_{\epsilon}^{ini} \geq 0$, we deduce $n_{\epsilon} \geq 0$. \end{proof} \subsection{A priori estimates} In order to use compactness results, we first need to establish a priori estimates on the pressure and the density. We first observe that, using \eqref{eq:p}, we may rewrite system \eqref{eq:n} as \begin{equation}\label{eq:nH} \partial_t n_{\epsilon} -\Delta H(n_{\epsilon})= n_{\epsilon} G(P(n_{\epsilon})), \end{equation} with $H(n)=\int_0^{n} u P'(u)du = P(n)-\epsilon \ln (P(n)+\epsilon) + \epsilon \ln \epsilon$. \begin{lem}\label{lem:estim} Let us assume that \eqref{hypG} and \eqref{hypini} hold. Let $(n_\epsilon,p_\epsilon)$ be a solution to \eqref{eq:nH}--\eqref{eq:p}.
Then, for all $T>0$, we have the uniform bounds in $\epsilon\in(0,\epsilon_0)$: \begin{align*} & 0 \leq n_{\epsilon} \in L^\infty([0,T];L^1\cap L^\infty(\mathbb{R}^d)); \\ & 0\leq p_\epsilon \leq P_M, \qquad 0\leq n_\epsilon \leq \frac{P_M}{P_M+\epsilon} \leq 1. \end{align*} More generally, we have the {\bf comparison principle}: if $n_\epsilon$ and $m_\epsilon$ are respectively a subsolution and a supersolution to \eqref{eq:nH}, with initial data $n_\epsilon^{ini}$, $m_\epsilon^{ini}$ as in \eqref{hypini} satisfying $n_\epsilon^{ini}\leq m_\epsilon^{ini}$, then for all $t>0$, $n_\epsilon(t) \leq m_\epsilon(t)$. Finally, $(n_\epsilon)_\epsilon$ is uniformly bounded in $L^\infty([0,T],W^{1,1}(\mathbb{R}^d))$ and $(p_\epsilon)_\epsilon$ is uniformly bounded in $L^1([0,T],W^{1,1}(\mathbb{R}^d))$. \end{lem} \begin{proof} {\bf Comparison principle.} Let $n_{\epsilon}$ be a subsolution and $m_{\epsilon}$ a supersolution of \eqref{eq:nH}; we have $$ \partial_t (n_{\epsilon} -m_{\epsilon})-\Delta (H(n_{\epsilon})-H(m_{\epsilon})) \leq n_{\epsilon} G(P(n_{\epsilon})) -m_{\epsilon}G(P(m_{\epsilon})). $$ Notice that, since the function $H$ is nondecreasing, the sign of $n_{\epsilon}-m_{\epsilon}$ is the same as the sign of $H(n_{\epsilon})-H(m_{\epsilon})$. Moreover, $$ \Delta f(y) = f''(y) |\nabla y|^2 +f'(y) \Delta y, $$ so for $y=H(n_{\epsilon})-H(m_{\epsilon})$ and $f(y)=y_+$ the positive part, the so-called Kato inequality reads $\Delta f(y) \geq f'(y) \Delta y$. Thus, multiplying the latter equation by $\mathbf{1}_{n_\epsilon-m_\epsilon>0}$, we obtain \begin{align*} \partial_t |n_{\epsilon} -m_{\epsilon}|_+-\Delta |H(n_{\epsilon})-H(m_{\epsilon})|_+ \leq & |n_{\epsilon}-m_\epsilon|_+ G(P(n_{\epsilon})) \\ & + m_{\epsilon}(G(P(n_{\epsilon}))-G(P(m_{\epsilon}))) \mathbf{1}_{n_\epsilon-m_\epsilon>0}. \end{align*} From assumption \eqref{hypG}, we have that $G$ is nonincreasing.
Thus, since $n\mapsto P(n)$ is increasing, we deduce that the last term of the right hand side is nonpositive. Since $G$ is uniformly bounded, we obtain $$ \partial_t |n_{\epsilon} -m_{\epsilon}|_{+} - \Delta |H(n_{\epsilon})-H(m_{\epsilon})|_{+} \leq G_m|n_{\epsilon}-m_{\epsilon}|_{+}. $$ After an integration over $\mathbb{R}^d$, $$ \frac{d}{dt} \int_{\mathbb{R}^d} |n_{\epsilon} -m_{\epsilon}|_{+}\,dx \leq G_m \int_{\mathbb{R}^d} |n_{\epsilon}-m_{\epsilon}|_{+}\,dx. $$ Then, integrating in time, we deduce $$ \int_{\mathbb{R}^d} |n_{\epsilon} -m_{\epsilon}|_{+}\,dx \leq e^{G_m t} \int_{\mathbb{R}^d} |n^{ini}_{\epsilon} -m^{ini}_{\epsilon}|_{+}\,dx. $$ Since we have $n^{ini}_{\epsilon}\leq m^{ini}_\epsilon$, we deduce that for all $t>0$, $|n_\epsilon-m_\epsilon|_+(t) =0$. {\bf $L^{\infty}$ bounds.} We define $n_M= \frac{P_M}{\epsilon+P_M}$, so that $P_M=P(n_M)$; then, applying the comparison principle with $m_\epsilon = n_M$ and using the assumption \eqref{hypini} on the initial data, we deduce that for all $0<\epsilon\leq \epsilon_0$, $n_{\epsilon}\leq n_M.$ Moreover, since $0$ is clearly a subsolution to \eqref{eq:nH}, we also have $n_\epsilon\geq 0$ by the comparison principle. Since $n_M\leq 1$, we have $0\leq n_{\epsilon}\leq n_M \leq 1$, which implies $$ 0 \leq p_{\epsilon} \leq P_M.$$ {\bf $L^1$ bounds on $n_\epsilon$, $p_\epsilon$.} By nonnegativity, after a simple integration in space of equation \eqref{eq:n}, we deduce, using \eqref{hypG}, \begin{equation}\label{eqnL1} \frac{d}{dt} \| n_\epsilon\|_{L^1(\mathbb{R}^d)} \leq G_m \| n_\epsilon\|_{L^1(\mathbb{R}^d)}. \end{equation} Integrating in time gives the $L^1$ bound $$ \|n_{\epsilon}\|_{L^1(\mathbb{R}^d)} \leq e^{G_mt} \|n^{ini}_{\epsilon}\|_{L^1(\mathbb{R}^d)}.
$$ Then, using $p_\epsilon= n_\epsilon (\epsilon+p_\epsilon)$ from \eqref{eq:p}, we get from the bound $p_\epsilon\leq P_M$ proved above $$ \|p_{\epsilon}\|_{L^1(\mathbb{R}^d)} \leq (\epsilon+P_M)\int_{\mathbb{R}^d} |n_{\epsilon}|dx \leq Ce^{G_m t} \|n^{ini}_{\epsilon}\|_{L^1(\mathbb{R}^d)}. $$ {\bf Estimates on the $x$ derivatives.} We differentiate equation \eqref{eq:nH} with respect to $x_i$, for $i=1,\ldots,d$: $$ \partial_t \partial_{x_i} n_{\epsilon} -\Delta (H'(n_{\epsilon})\partial_{x_i} n_{\epsilon})= \partial_{x_i} n_{\epsilon} G(p_{\epsilon}) +n_{\epsilon} G'(p_{\epsilon}) \partial_{x_i} p_{\epsilon}.$$ Multiplying by sign$(\partial_{x_i} n_\epsilon)$, we get $$ \partial_t |\partial_{x_i} n_{\epsilon}| -\Delta (\partial_{x_i}H(n_{\epsilon}))\mbox{sign}(\partial_{x_i} n_{\epsilon}) = |\partial_{x_i} n_{\epsilon}| G(p_{\epsilon}) +n_{\epsilon} G'(p_{\epsilon}) \partial_{x_i} p_{\epsilon} \mbox{sign}(\partial_{x_i} n_{\epsilon}).$$ Note that $\mbox{sign}(\partial_{x_i} n_{\epsilon})= \mbox{sign}(\partial_{x_i} H(n_{\epsilon}))$, so, by the same token as above, we have $$\Delta(\partial_{x_i}H(n_{\epsilon}))\mbox{sign}(\partial_{x_i} n_{\epsilon}) \geq \Delta (|\partial_{x_i}H(n_{\epsilon})|).$$ Moreover, $\mbox{sign}(\partial_{x_i} n_{\epsilon})=\mbox{sign}(\partial_{x_i} p_{\epsilon})$, thus $\partial_{x_i} p_{\epsilon} \mbox{sign}(\partial_{x_i} n_{\epsilon}) = |\partial_{x_i} p_\epsilon|$. By assumption \eqref{hypG}, we know that $$ G'(p_{\epsilon}) \leq - \gamma<0, $$ so we deduce $$ \partial_t |\partial_{x_i} n_{\epsilon}| -\Delta (|\partial_{x_i}H(n_{\epsilon})|) \leq |\partial_{x_i} n_{\epsilon}| G_{m} - \gamma n_{\epsilon} |\partial_{x_i} p_{\epsilon}|.$$ After an integration in time and space, \begin{equation}\label{estimdxn} \|\partial_{x_i} n_{\epsilon}\|_{L^1(\mathbb{R}^d)} + \gamma \int_0^t \int_{\mathbb{R}^d} n_{\epsilon} |\partial_{x_i} p_{\epsilon}|\,dxds \leq e^{G_m t} \| \partial_{x_i} n_{\epsilon}^{ini}\|_{L^1(\mathbb{R}^d)}.
\end{equation} This latter inequality provides us with a uniform $L^1$ bound on the space derivatives of $n_\epsilon$. Then $$ \| \partial_{x_i} p_{\epsilon}\|_{L^1(\mathbb{R}^d)} = \int_{\mathbb{R}^d} |\partial_{x_i} p_{\epsilon}|dx = \int_{\mathbb{R}^d} \frac{\epsilon}{(1-n_{\epsilon})^2}|\partial_{x_i} n_{\epsilon}|dx. $$ We split the integral in two parts: either $n_{\epsilon}\leq 1/2$, and then $\frac{\epsilon}{(1-n_{\epsilon})^2} \leq C$; or $n_{\epsilon}> 1/2$. \begin{align*} \| \partial_{x_i} p_{\epsilon}\|_{L^1(\mathbb{R}^d)} &\leq C \int_{n_{\epsilon}\leq 1/2} |\partial_{x_i} n_{\epsilon}|dx + \int_{n_{\epsilon}> 1/2} |\partial_{x_i} p_{\epsilon}|dx \\ &\leq C \int_{n_{\epsilon}\leq 1/2} |\partial_{x_i} n_{\epsilon}|dx + 2 \int_{n_{\epsilon}> 1/2} \frac 12|\partial_{x_i} p_{\epsilon}|dx \\ &\leq C e^{G_m t} \| \partial_{x_i} n_{\epsilon}^{ini}\|_{L^1(\mathbb{R}^d)} + 2 \int_{n_{\epsilon}>1/2} n_\epsilon |\partial_{x_i} p_{\epsilon}|dx, \end{align*} where we have used the estimate \eqref{estimdxn} for the last inequality. Then, integrating in time, we deduce, using again the estimate \eqref{estimdxn}, $$ \| \partial_{x_i} p_{\epsilon}\|_{L^1(Q_T)} \leq C' e^{G_m T} \| \partial_{x_i} n_{\epsilon}^{ini}\|_{L^1(\mathbb{R}^d)}. $$ This concludes the proof. \end{proof} \subsection{Compact support} The following lemma shows that if the initial data is compactly supported, then the pressure remains compactly supported for all times, with a control on the growth of the support. \begin{lem}[Finite speed of propagation]\label{lem:supp} Under the same assumptions as in Theorem \ref{TH1}, we have $\mbox{supp } p_\epsilon(t) \subset B(0,R(t))$ with $R(t)\leq 2 \sqrt{C(T+t)}$, where $B(0,R(t))$ denotes the ball of center $0$ and radius $R(t)$.
\end{lem} \begin{proof} Using the equation \eqref{eq:p1} on $p_{\epsilon}$, $$ \partial_t p_{\epsilon} - (\frac{p_{\epsilon}^2}{\epsilon}+p_{\epsilon})\Delta p_{\epsilon} - |\nabla p_{\epsilon}|^2 =(\frac{p_{\epsilon}^2}{\epsilon}+p_{\epsilon}) G(p_{\epsilon}) \leq G_{m}(\frac{p_{\epsilon}^2}{\epsilon}+p_{\epsilon}). $$ Let us introduce, for $C>0$, $$ \tilde{p}(t,x) = \left(C-\frac{|x|^2}{4(\theta+t)}\right)_+ ,$$ with $\theta = \frac{d}{4 G_{m}}$. Then $\tilde{p}$ is compactly supported in $B(0,R_\theta(t))$ with $R_\theta(t)= 2 \sqrt {C(\theta+t)}.$ We have $$\partial_t \tilde{p} = \frac{|x|^2}{4(\theta+t)^2} 1_{|x|\leq R_\theta(t)}, \qquad |\nabla \tilde{p}|^2= \frac{|x|^2}{4(\theta+t)^2} 1_{|x|\leq R_\theta(t)}, $$ and $$ \Delta \tilde{p} = -\frac{d}{2(\theta+t)}, \mbox{ for } |x| < R_\theta(t).$$ Then, for all $t\in [0,\theta]$, \begin{equation}\label{ineqptilde} \partial_t \tilde{p}- (\frac{\tilde{p}^2}{\epsilon}+\tilde{p})\Delta \tilde{p} -|\nabla \tilde{p}|^2 -G_{m}(\frac{\tilde{p}^2}{\epsilon}+\tilde{p}) = (\frac{\tilde{p}^2}{\epsilon}+\tilde{p}) (\frac{d}{2(\theta+t)}-G_{m}) \geq 0. \end{equation} In other words, $\tilde{p}$ is a supersolution of the equation for the pressure. Let us show that this implies $p_\epsilon\leq \tilde{p}$. We define $\tilde{n} = \frac{\tilde{p}}{\epsilon+\tilde{p}}= N(\tilde{p})$. We know that $$ N'(\tilde{p})= \frac{\epsilon}{(\epsilon+\tilde{p})^2} >0. $$ Then, on the one hand, multiplying \eqref{ineqptilde} by $N'(\tilde{p})$ we get $$ \partial_t \tilde{n}-\nabla\cdot(\tilde{n} \nabla \tilde{p}) -G_{m}\tilde{n} \geq 0.$$ On the other hand, from \eqref{eq:n}, $$ \partial_t n_{\epsilon}-\nabla\cdot(n_{\epsilon} \nabla p_{\epsilon}) \leq G_{m}n_{\epsilon}.$$ By the comparison principle (see Lemma \ref{lem:estim}), we have $$ n^{ini}_{\epsilon} \leq \tilde{n}^{ini} \Rightarrow n_{\epsilon} \leq \tilde{n}. $$ Thus, for all $t\in [0,\theta]$, $$ p^{ini}_{\epsilon} \leq \tilde{p}(t=0) \Rightarrow p_{\epsilon} \leq \tilde{p}.
$$ Hence $p_\epsilon(t)$ is compactly supported in $B(0,R_\theta(t))$, provided we choose $C$ large enough so that $p_\epsilon^{ini}(x)\leq \tilde{p}(t=0,x)$, which can be done thanks to our assumption \eqref{hypini} on the initial data. Since $p_\epsilon$ is uniformly bounded in $L^\infty$, we may iterate the process on $[\theta,2\theta]$. After finitely many iterations, we reach the time $T$ and prove the result on $[0,T]$. \end{proof} \subsection{$L^2$ estimate for $\nabla p$} In the following lemma, we state a uniform $L^2$ estimate on the gradient of the pressure. \begin{lem}[$L^2$ estimate for $\nabla p$]\label{lem:L2dp} Under the same assumptions as in Theorem \ref{TH1}, we have a uniform bound on $\nabla p_\epsilon$ in $L^2(Q_T)$. \end{lem} \begin{proof} For a given function $\psi$, multiplying \eqref{eq:n} by $\psi(n_\epsilon)$ yields $$ \partial_t n_{\epsilon} \psi(n_\epsilon) -\nabla\cdot(n_{\epsilon} \nabla p_{\epsilon})\psi(n_{\epsilon})= n_{\epsilon} G(p_{\epsilon})\psi(n_\epsilon). $$ Let $\Psi$ be an antiderivative of $\psi$; thanks to an integration by parts, we have $$ \frac{d}{dt} \int_{\mathbb{R}^d} \Psi(n_{\epsilon})\,dx + \int_{\mathbb{R}^d} n_{\epsilon} \nabla n_{\epsilon}\cdot \nabla p_{\epsilon}\psi'(n_{\epsilon})\,dx = \int_{\mathbb{R}^d} n_{\epsilon} G(p_{\epsilon})\psi(n_{\epsilon})\,dx. $$ We choose $\psi$ such that $n_{\epsilon} \nabla n_{\epsilon}\cdot \nabla p_{\epsilon}\psi'(n_{\epsilon})= |\nabla p_{\epsilon}|^2$, i.e. $n_{\epsilon}\psi'(n_{\epsilon})=P'(n_{\epsilon})$. After straightforward computations, we find $\psi(n)= \epsilon (\ln(n)-\ln(1-n)+\frac{1}{1-n})$ and $\Psi(n)= \epsilon n(\ln(n)-\ln(1-n))$. It gives \begin{align*} &\frac{d}{dt} \int_{\mathbb{R}^d} \epsilon n_{\epsilon} \ln\Big(\frac{n_{\epsilon}}{1-n_{\epsilon}}\Big)\,dx + \int_{\mathbb{R}^d} |\nabla p_{\epsilon}|^2 dx \leq G_{m} \int_{\mathbb{R}^d} \epsilon n_{\epsilon}\left|\ln(n_{\epsilon})-\ln(1-n_{\epsilon})+\frac{1}{1-n_{\epsilon}}\right|\,dx.
\end{align*} We integrate in time, using also the expression \eqref{eq:p} of $p_\epsilon$: \begin{align*} &\int_{\mathbb{R}^d} \epsilon n_{\epsilon} \ln\Big(\frac{p_{\epsilon}}{\epsilon}\Big)\,dx -\int_{\mathbb{R}^d} \epsilon n_{\epsilon}^{ini} \ln\left(\frac{n_{\epsilon}^{ini}}{1-n_{\epsilon}^{ini}}\right)\,dx + \int_0^T \int_{\mathbb{R}^d} |\nabla p_{\epsilon}|^2\,dxdt \\ &\leq G_{m} \int_0^T \int_{\mathbb{R}^d} \left(\epsilon n_{\epsilon} \Big|\ln\Big(\frac{p_{\epsilon}}{\epsilon}\Big)\Big|+p_{\epsilon}\right) \,dxdt. \end{align*} Then, to obtain a bound on the $L^2$-norm of $\nabla p_\epsilon$, it suffices to prove a uniform control on $ \int_{\mathbb{R}^d} \epsilon n_{\epsilon} |\ln(\frac{p_{\epsilon}}{\epsilon})|dx$. We have $$ \int_{\mathbb{R}^d} \epsilon n_{\epsilon} |\ln\big(\frac{p_{\epsilon}}{\epsilon}\big)|\,dx \leq \int_{\mathbb{R}^d} \epsilon n_{\epsilon} |\ln p_{\epsilon}| \,dx + \epsilon |\ln \epsilon| \int_{\mathbb{R}^d} n_{\epsilon}\,dx. $$ The second term of the right hand side is uniformly bounded, since $\epsilon|\ln \epsilon|$ is small when $\epsilon$ is small and $n_\epsilon$ is bounded in $L^1$. Using the expression \eqref{eq:p} of $p_\epsilon$, we get $$ \int_{\mathbb{R}^d} \epsilon n_\epsilon |\ln(\frac{p_{\epsilon}}{\epsilon})|\,dx \leq \int_{\mathbb{R}^d} (1-n_{\epsilon})p_{\epsilon} |\ln p_{\epsilon}|\,dx + C. $$ Then, since $0\leq p_\epsilon \leq P_M$ and since $x\mapsto x|\ln x|$ is uniformly bounded on $[0,P_M]$, we get $$ \int_{\mathbb{R}^d} (1-n_{\epsilon})p_{\epsilon} |\ln(p_{\epsilon})|\,dx \leq C \int_{\mathbb{R}^d} \mathbf{1}_{p_{\epsilon}>0} \,dx.$$ We conclude thanks to Lemma \ref{lem:supp}, which provides a uniform control on the support of $p_\epsilon$. \end{proof} \section{Regularizing effect and time compactness}\label{sec:tcompact} As already noticed in \cite{PQTV}, regularizing effects, similar to the ones observed for the heat equation \cite{AB,CP}, allow us to deduce estimates on the time derivatives.
\begin{lem}\label{lem:regul} Under the assumptions \eqref{hypG} and \eqref{hypini}, the weak solution $(n_\epsilon,p_\epsilon)$ satisfies the estimates $$ \partial_t p_\epsilon \geq - \frac{\kappa p_\epsilon}{t}, \qquad \partial_t n_\epsilon \geq -\frac{\kappa n_\epsilon}{t}, $$ for a large enough constant $\kappa$ (independent of $\epsilon$). \end{lem} \begin{proof} Let us denote $w_\epsilon=\Delta p_\epsilon + G(p_\epsilon)$; the equation \eqref{eq:p1} on the pressure reads \begin{equation}\label{eq:preg} \partial_t p_\epsilon = \left(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon\right) w_\epsilon + |\nabla p_\epsilon|^2. \end{equation} The proof is divided into several steps. We first find a lower bound for $w_\epsilon$ by using the comparison principle; then we deduce the estimates on the density and on the pressure. \\ {\it 1st step.} Thanks to \eqref{eq:preg}, we deduce an equation satisfied by $w_\epsilon$. On the one hand, multiplying \eqref{eq:preg} by $G'(p_\epsilon)$, we deduce, since $G$ is decreasing by \eqref{hypG}, \begin{equation}\label{ineq:Gp} \partial_t G(p_\epsilon) \geq G'(p_\epsilon) \big(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon\big) w_\epsilon + 2 \nabla G(p_\epsilon)\cdot \nabla p_\epsilon. \end{equation} On the other hand, we have \begin{align*} \partial_t \Delta p_\epsilon = & \Delta w_\epsilon \big(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon\big) + 2 \nabla\big(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon\big) \cdot \nabla w_\epsilon + w_\epsilon \Delta (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \\ & + 2 \nabla p_\epsilon\cdot \nabla(\Delta p_\epsilon) + 2 \sum_{i,j=1}^d (\partial_{x_ix_j} p_\epsilon)^2 \\ \geq &\Delta w_\epsilon (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) + 2 \nabla(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \cdot \nabla w_\epsilon + w_\epsilon \Delta (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \\ & + 2 \nabla p_\epsilon\cdot \nabla(\Delta p_\epsilon) + \frac{2}{d} (\Delta p_\epsilon)^2.
\end{align*} Thus, with \eqref{ineq:Gp}, we deduce that $w_\epsilon=\Delta p_\epsilon+G(p_\epsilon)$ satisfies \begin{align*} \partial_t w_\epsilon \geq &\Delta w_\epsilon (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) + 2 \nabla(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \cdot \nabla w_\epsilon + w_\epsilon \Big(\Delta p_\epsilon (\frac{2p_\epsilon}{\epsilon}+1) + \frac{2}{\epsilon} |\nabla p_\epsilon|^2 \\ & + (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon)G'(p_\epsilon)\Big) + 2 \nabla p_\epsilon\cdot \nabla w_\epsilon + \frac{2}{d} (\Delta p_\epsilon)^2. \end{align*} By definition of $w_\epsilon$, we have $(\Delta p_\epsilon)^2 \geq w_\epsilon^2 - 2 G(p_\epsilon) w_\epsilon$. Thus we deduce that \begin{equation}\label{eq:w} \partial_t w_\epsilon \geq \mathcal{F}(w_\epsilon), \end{equation} where we have used the notation \begin{align} \mathcal{F}(w) := & \Delta w (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) + 2 \nabla(\frac{p_\epsilon^2}{\epsilon}+2p_\epsilon) \cdot \nabla w + \frac{2}{\epsilon} |\nabla p_\epsilon|^2 w + w^2 (\frac{2p_\epsilon}{\epsilon}+1+ \frac{2}{d}) \nonumber \\ & - w\Big(G(p_\epsilon)(\frac{2 p_\epsilon}{\epsilon} +1 + \frac{4}{d}) - (\frac{p_\epsilon^2}{\epsilon} +p_\epsilon) G'(p_\epsilon)\Big). \label{eq:F} \end{align} Following an idea of \cite{CP} which has been generalized in \cite{PQTV}, we introduce the function \begin{equation}\label{def:W} W(t,x) = -\frac{h(p_\epsilon(t,x))}{t}, \end{equation} where the function $h$ will be defined later such that $W$ is a subsolution for \eqref{eq:w}. We compute \begin{align*} &\partial_t W = \frac{W^2}{h(p_\epsilon)} - \frac{h'(p_\epsilon)}{t} \partial_t p_\epsilon, \\ &\nabla W = -\frac{h'(p_\epsilon)}{t} \nabla p_\epsilon, \qquad \Delta W = -\frac{h'(p_\epsilon)}{t} \Delta p_\epsilon - \frac{h''(p_\epsilon)}{t} |\nabla p_\epsilon|^2. 
\end{align*} Using again equation \eqref{eq:preg}, we have \begin{align} \partial_t W & = \frac{W^2}{h(p_\epsilon)} - \frac{h'(p_\epsilon)}{t}(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \Delta p_\epsilon - \frac{h'(p_\epsilon)}{t}(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) G(p_\epsilon) - \frac{h'(p_\epsilon)}{t} |\nabla p_\epsilon|^2 \nonumber \\ & = \frac{W^2}{h(p_\epsilon)} + (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) \Delta W + \frac{h''(p_\epsilon)}{t} |\nabla p_\epsilon|^2 (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) - \frac{h'(p_\epsilon)}{t} |\nabla p_\epsilon|^2 \nonumber \\ & \quad - \frac{h'(p_\epsilon)}{t} (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) G(p_\epsilon). \label{eqW1} \end{align} By definition of $\mathcal{F}(W)$ in \eqref{eq:F}, we deduce with \eqref{eqW1}, \begin{align*} \partial_t W = & \mathcal{F}(W) + 4(\frac{p_\epsilon}{\epsilon}+1) |\nabla p_\epsilon|^2 \frac{h'(p_\epsilon)}{t} + \frac{2}{\epsilon} \frac{h(p_\epsilon)}{t} |\nabla p_\epsilon|^2 \\ & + W^2\Big(\frac{1}{h(p_\epsilon)} -\frac{2p_\epsilon}{\epsilon} - 1 -\frac{2}{d}\Big) + \frac{h''(p_\epsilon)}{t} |\nabla p_\epsilon|^2 (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) - \frac{h'(p_\epsilon)}{t} |\nabla p_\epsilon|^2 \\ & - \frac{h'(p_\epsilon)}{t} (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) G(p_\epsilon) + W \Big(G(p_\epsilon)(\frac{2p_\epsilon}{\epsilon}+1+\frac{4}{d}) - (\frac{p_\epsilon^2}{\epsilon}+ p_\epsilon) G'(p_\epsilon)\Big). \end{align*} We may rearrange it into \begin{align} \partial_t W = & \mathcal{F}(W) + W^2\Big(\frac{1}{h(p_\epsilon)} -\frac{2p_\epsilon}{\epsilon} - 1 -\frac{2}{d}\Big) + \frac{|\nabla p_\epsilon|^2}{t} \Big(\big(h(p_\epsilon)(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon)\big)''+h'(p_\epsilon)\Big) \nonumber \\ & - \frac{h'(p_\epsilon)}{t} (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) G(p_\epsilon) + W \Big(G(p_\epsilon)(\frac{2p_\epsilon}{\epsilon}+1+\frac{4}{d}) - (\frac{p_\epsilon^2}{\epsilon}+ p_\epsilon) G'(p_\epsilon)\Big). 
\label{eqWf} \end{align} Let us choose \begin{equation}\label{def:h} h(p) = \frac{ \kappa \epsilon}{p+\epsilon}, \end{equation} where $\kappa>0$ is chosen large enough (independent of $\epsilon$) such that $$ \frac{1}{h(p_\epsilon)} = \frac{p_\epsilon+\epsilon}{ \kappa \epsilon} \leq \frac{2 p_\epsilon}{\epsilon}+1+\frac{2}{d}. $$ Thanks to this choice, since $h(p_\epsilon)\big(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon\big) = \frac{\kappa \epsilon}{p_\epsilon+\epsilon}\,\frac{p_\epsilon(p_\epsilon+\epsilon)}{\epsilon} = \kappa p_\epsilon$ is linear in $p_\epsilon$, we have $$ \big(h(p_\epsilon)(\frac{p_\epsilon^2}{\epsilon}+p_\epsilon)\big)''+h'(p_\epsilon) = h'(p_\epsilon) = -\frac{\kappa \epsilon}{(p_\epsilon+\epsilon)^2} \leq 0, $$ and $$ - \frac{h'(p_\epsilon)}{t} (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) = \frac{\kappa p_\epsilon}{(p_\epsilon+\epsilon)t} = -W \frac{p_\epsilon}{\epsilon}. $$ Finally, we obtain from \eqref{eqWf} $$ \partial_t W \leq \mathcal{F}(W) + W \Big(G(p_\epsilon)(\frac{p_\epsilon}{\epsilon}+1+\frac{4}{d}) - (\frac{p_\epsilon^2}{\epsilon}+ p_\epsilon) G'(p_\epsilon)\Big) \leq \mathcal{F}(W), $$ where we use the fact that by definition \eqref{def:W} we have $W\leq 0$ (recalling also that $G$ is decreasing by assumption \eqref{hypG}). Thus, by the sub- and super-solution technique, we deduce, using also \eqref{eq:w}, that \begin{equation}\label{estimw} w_\epsilon\geq W = - \frac{\kappa \epsilon}{t(p_\epsilon+\epsilon)}. \end{equation} \\ {\it 2nd step.} Using again equation \eqref{eq:preg}, we get from \eqref{estimw} $$ \partial_t p_\epsilon \geq (\frac{p_\epsilon^2}{\epsilon}+p_\epsilon) W = - \frac{\kappa p_\epsilon}{t}, $$ which is the first inequality of Lemma \ref{lem:regul}. Finally, by definition \eqref{eq:p}, we also have $n_\epsilon = \frac{p_\epsilon}{p_\epsilon+\epsilon}$. Thus \begin{align*} \partial_t n_\epsilon & = \frac{\epsilon}{(p_\epsilon+\epsilon)^2} \partial_t p_\epsilon \geq -\frac{\kappa \epsilon p_\epsilon}{t(p_\epsilon+\epsilon)^2} = -\frac{\kappa n_\epsilon(1-n_\epsilon)}{t}, \end{align*} where we use the definition \eqref{eq:p} for the last identity. Since $n_\epsilon(1-n_\epsilon)\leq n_\epsilon$, this gives the second inequality and concludes the proof.
\end{proof} Thanks to this lemma, we may deduce uniform estimates on the time derivatives of $n_\epsilon$ and $p_\epsilon$. \begin{lem}\label{estimdtn} For any $\tau>0$, we have that $\partial_t n_\epsilon$ is uniformly bounded in $L^\infty([\tau,T];L^1(\mathbb{R}^d))$ and $\partial_t p_\epsilon$ is uniformly bounded in $L^1([\tau,T]\times\mathbb{R}^d)$. \end{lem} \begin{proof} We use the equality $|\partial_t n_\epsilon| = \partial_t n_\epsilon + 2 |\partial_t n_\epsilon|_-$, where we recall that $|\cdot|_-$ denotes the negative part. Thus \begin{align*} \|\partial_t n_\epsilon\|_{L^1(\mathbb{R}^d)} & = \frac{d}{dt} \int_{\mathbb{R}^d} n_\epsilon \,dx + 2 \int_{\mathbb{R}^d} |\partial_t n_\epsilon|_- \,dx \\ & \leq \Big(G_m + \frac{2 \kappa}{t}\Big) \|n_\epsilon\|_{L^1(\mathbb{R}^d)}, \end{align*} where we have used equation \eqref{eqnL1} to bound the first term and Lemma \ref{lem:regul} for the second term. By the same token, we have \begin{align*} \|\partial_t p_\epsilon\|_{L^1([\tau,T]\times\mathbb{R}^d)} & = \int_\tau^T \frac{d}{dt} \int_{\mathbb{R}^d} p_\epsilon \,dx\,dt + 2 \int_\tau^T\int_{\mathbb{R}^d} |\partial_t p_\epsilon|_- \,dx\,dt \\ & \leq \|p_\epsilon(T)\|_{L^1(\mathbb{R}^d)} + 2 \kappa \ln(T/\tau)\, \|p_\epsilon\|_{L^\infty([\tau,T];L^1(\mathbb{R}^d))}. \end{align*} We conclude the proof thanks to the estimates on $n_\epsilon$ and $p_\epsilon$ in $L^1\cap L^\infty$ obtained in Lemma \ref{lem:estim}. \end{proof} \section{Convergence}\label{sec:conv} This section is devoted to the proof of Theorem \ref{TH1}, apart from the complementary relation \eqref{compl}, which is postponed to the next section. Since the sequences $(n_{\epsilon})_{\epsilon}$ and $(p_{\epsilon})_{\epsilon}$ are bounded in $W^{1,1}_{loc}(Q_T)$, due to Lemmas \ref{lem:estim} and \ref{estimdtn}, we may apply Helly's theorem and recover strong convergence in $L^1_{loc}(Q_T)$, up to an extraction.
If we want to extend this local convergence to a global convergence in $L^1(Q_T)$, we need to prove that we can control the mass in an initial strip and in the tail. Indeed, for $\epsilon,\epsilon' >0$, $R >0$ and $\tau >0$, we write \begin{align*} \| n_{\epsilon}-n_{\epsilon'} \|_{L^1(Q_T)} = & \int_0^T \int_{\mathbb{R}^d} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt \\ \leq & \int_{\tau}^T \int_{B(0,R)} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt \\ & + \int_{\tau}^T \int_{\mathbb{R}^d \setminus { B(0,R)}} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt \\ & + \int_0^{\tau} \int_{\mathbb{R}^d} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt. \end{align*} Since we have strong convergence of $n_{\epsilon}$ in $L^1_{loc}(Q_T)$, $$ \int_{\tau}^T \int_{B(0,R)} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dxdt \underset{\epsilon,\epsilon'\to 0}{\longrightarrow} 0.$$ Then we have to control the two other terms on the right-hand side. The control of the initial strip comes from the $L^1$ estimate of $n_\epsilon$, $$ \int_0^{\tau} \int_{\mathbb{R}^d} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt \leq \int_0^{\tau} \Big(\|n_{\epsilon}(t)\|_{L^1(\mathbb{R}^d)}+\|n_{\epsilon'}(t)\|_{L^1(\mathbb{R}^d)} \Big) dt \underset{\tau \to 0}{\longrightarrow} 0.$$ For the control of the tail, we consider $\phi \in C^{\infty}(\mathbb{R}^d)$ such that $0 \leq \phi \leq 1$, $\phi(x)=0$ for $|x|<1/2$ and $\phi(x)=1$ for $|x|>1$, and we define $\phi_R(x)=\phi(x/R)$. Then \begin{align*} \int_{\tau}^T \int_{\mathbb{R}^d \setminus{ B(0,R)}} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \,dx dt \leq & \int_{\tau}^T \int_{\mathbb{R}^d \setminus{ B(0,R)}} |n_{\epsilon}(t,x)-n_{\epsilon'}(t,x)| \phi_R \,dx dt \\ \leq & \int_{\tau}^T \int_{\mathbb{R}^d \setminus{ B(0,R)}} (n_{\epsilon}(t,x)+n_{\epsilon'}(t,x)) \phi_R \,dxdt, \end{align*} where, here and below, $C$ stands for a generic nonnegative constant.
Moreover, using equation \eqref{eq:nH}, we deduce \begin{align*} \frac{d}{dt} \int_{\mathbb{R}^d} n_{\epsilon} \phi_R\,dx = & \int_{\mathbb{R}^d} H(n_\epsilon) \Delta \phi_R \,dx + \int_{\mathbb{R}^d} n_\epsilon G(p_\epsilon) \phi_R\,dx \\ \leq & C R^{-2} \|\Delta \phi\|_{L^{\infty}} + G_{m} \int_{\mathbb{R}^d} n_{\epsilon} \phi_R\, dx. \end{align*} Then, integrating in time and using Gr\"onwall's lemma, we get, for all $t\in[0,T]$, \begin{align*} 0 \leq \int_{\mathbb{R}^d} n_{\epsilon} \phi_R \,dx \leq & e^{G_{m} T} \left(\int_{\mathbb{R}^d} n^{ini}_{\epsilon} \phi_R\,dx+CR^{-2}T\right) \\ \leq & e^{G_{m} T} \left(\| n^{ini}_{\epsilon}-n^{ini}\|_{L^1(\mathbb{R}^d)} + \int_{\mathbb{R}^d} n^{ini} \phi_R\,dx +CR^{-2}T\right). \end{align*} By assumption \eqref{hypini}, since the initial data is uniformly compactly supported, the right-hand side tends to $0$ as $R$ goes to $+\infty$ and ${\epsilon}$ goes to $0$. Hence $(n_{\epsilon})_\epsilon$ is a Cauchy sequence in $L^1(Q_T)$, and therefore converges in $L^1(Q_T)$. The convergence of the pressure follows from the same kind of computation. The only difference is the control of the tail, which is directly given by the estimate $$ 0\leq \int_{\mathbb{R}^d} p_{\epsilon} \phi_R\,dx \leq (\epsilon+P_M)\int_{\mathbb{R}^d} n_{\epsilon} \phi_R \,dx. $$ Therefore, we can extract subsequences and pass to the limit in the equation $$ (1-n_{\epsilon}) p_{\epsilon} = \epsilon n_{\epsilon},$$ which implies $$ (1-n_0) p_0 =0. $$ This is the relation \eqref{n0p0}. We can also pass to the limit in the uniform estimate of Lemma \ref{lem:estim}, which provides \eqref{boundn0p0} and $n_0, p_0 \in BV(Q_T)$. \paragraph{Limit model.} We first recall that from \eqref{eq:nH}, we have $$ \partial_t n_\epsilon - \Delta (p_\epsilon - \epsilon \ln(p_\epsilon + \epsilon)) = n_\epsilon G(p_\epsilon). $$ We also have $$ \epsilon \ln \epsilon \leq \epsilon \ln(p_\epsilon + \epsilon) \leq \epsilon \ln(P_M + \epsilon).
$$ Thus, the term inside the Laplacian converges strongly to $p_0$ as $\epsilon$ goes to $0$. Then, thanks to the strong convergence of $n_\epsilon$ and $p_\epsilon$, we deduce that $(n_0,p_0)$ satisfies \eqref{eqn0} in the sense of distributions. Moreover, due to the uniform estimate on $\nabla p_\epsilon$ in $L^2(Q_T)$ of Lemma \ref{lem:L2dp}, we can show, by passing to the limit in the product of a weakly and a strongly convergent sequence, that $(n_0,p_0)$ satisfies \eqref{eq2n0} in the sense of distributions. \paragraph{Time continuity.} Let $0<t_1< t_2 \leq T$ and $\eta>0$. For a given $R>0$, we consider a smooth function $\zeta_R$ on $\mathbb{R}^d$ such that $0 \leq \zeta_R \leq 1$, $\zeta_R(x)=1$ for $|x|<R-1$ and $\zeta_R(x)=0$ for $|x|>R$. We have \begin{align*} \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|\,dx = \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|\zeta_R\,dx + \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|(1-\zeta_R)\,dx. \end{align*} For the second term, \begin{align*} \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|(1-\zeta_R)\,dx \leq \int_{\mathbb{R}^d}n_0(t_2)(1-\zeta_R)\,dx +\int_{\mathbb{R}^d}n_0(t_1)(1-\zeta_R)\,dx, \end{align*} where $1-\zeta_R$ vanishes on $B(0,R-1)$. Thus, as for the control of the tail, for $R$ large enough, we have, uniformly for $0<t_1< t_2 \leq T$, \begin{align*} \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|(1-\zeta_R)\,dx \leq \eta. \end{align*} In addition, we know from Lemma \ref{lem:regul} (and the $L^\infty$ bound on $n_0$) that $\partial_t n_0 \geq -\frac{C}{t}$, so $\partial_t (n_0+ C\ln(t)) \geq 0$. Then, since $t_1 < t_2$, \begin{align*} \int_{\mathbb{R}^d} |n_0(t_2)-n_0(t_1)| \zeta_R \,dx \leq & \int_{\mathbb{R}^d} (n_0(t_2)+ C\ln(t_2)-(n_0(t_1)+ C\ln(t_1))) \zeta_R \,dx \\ &+ \int_{\mathbb{R}^d} C(\ln(t_2)-\ln(t_1)) \zeta_R \,dx \\ \leq & \int_{t_1}^{t_2} \int_{\mathbb{R}^d} \partial_t (n_0+ C\ln(t)) \zeta_R \,dxdt + \int_{\mathbb{R}^d} C(\ln(t_2)-\ln(t_1)) \zeta_R \,dx.
\end{align*} Then, using equation \eqref{eqn0} and an integration by parts, we obtain \begin{align*} \int_{\mathbb{R}^d} |n_0(t_2)-n_0(t_1)| \zeta_R \,dx \leq & \int_{t_1}^{t_2} \int_{\mathbb{R}^d} \Big(p_0 \Delta \zeta_R + n_0 G(p_0) \zeta_R\Big)\,dxdt \\ & + 2 \int_{\mathbb{R}^d} C(\ln(t_2)-\ln(t_1)) \zeta_R \,dx \\ \leq & C(t_2-t_1)( ||\Delta \zeta_R||_{\infty}+1) + 2C(\ln(t_2)-\ln(t_1)) \int_{\mathbb{R}^d} \zeta_R \,dx. \end{align*} Then we can choose $(t_1,t_2)$ close enough such that $$ \int_{\mathbb{R}^d}|n_0(t_2)-n_0(t_1)|\zeta_R \,dx \leq \eta. $$ We conclude that $n_0 \in C((0,T),L^1(\mathbb{R}^d))$. \paragraph{Initial trace.} For any test function $0 \leq \zeta(x) \leq 1$, we have from \eqref{eq:nH} \begin{align*} \int_{\mathbb{R}^d} n_{\epsilon}(t) \zeta\,dx -\int_{\mathbb{R}^d} n_{\epsilon}^{ini} \zeta\,dx & = \int_{0}^t \int_{\mathbb{R}^d} (\Delta H(n_{\epsilon}) + n_{\epsilon} G(p_{\epsilon})) \zeta \,dxds \\ & = \int_{0}^t \int_{\mathbb{R}^d} (H(n_{\epsilon}) \Delta \zeta + n_{\epsilon} G(p_{\epsilon})\zeta ) \,dxds. \end{align*} Letting $\epsilon$ go to $0$, we obtain, with \eqref{hypini}, \begin{align*} \int_{\mathbb{R}^d} n_{0}(t) \zeta\,dx -\int_{\mathbb{R}^d} n_{0}^{ini} \zeta\,dx & = \int_{0}^t \int_{\mathbb{R}^d} (p_{0} \Delta \zeta +n_{0} G(p_{0})\zeta)\,dxds. \end{align*} Letting $t \rightarrow 0$, we conclude that $n_{0}(0) = n_{0}^{ini}$. \section{Complementary relation}\label{sec:compl} In this section we prove the complementary relation $$ p_0^2 (\Delta p_0 + G(p_0)) =0. $$ In the weak sense, this identity reads, for any nonnegative test function $\phi$, \begin{equation}\label{compldis} \iint_{Q_T} \left(- 2\phi p_{0} |\nabla p_0|^2 - p_0^2 \nabla p_0\cdot \nabla \phi + \phi p_0^2 G(p_0)\right) \,dxdt = 0. \end{equation} The proof is divided into two steps. \\ {\it 1st step.} In this first step we prove the inequality $\geq 0$ in \eqref{compldis}.
We start with the pressure equation \eqref{eq:p1}, which we multiply by $\epsilon$: $$ \epsilon \partial_t p_{\epsilon} - p_{\epsilon}(\epsilon+p_{\epsilon})\Delta p_{\epsilon} - \epsilon |\nabla p_{\epsilon}|^2= p_{\epsilon}(\epsilon+p_{\epsilon}) G(p_{\epsilon}). $$ We multiply by a test function $\phi\in \mathcal{D}((0,T)\times \mathbb{R}^d)$ and integrate, \begin{align*} \iint_{Q_T} p_{\epsilon}^2 \phi (\Delta p_{\epsilon}+G(p_{\epsilon})) \,dxdt & =\epsilon \iint_{Q_T} \phi \big( \partial_t p_{\epsilon} - |\nabla p_{\epsilon}|^2 - p_{\epsilon}(\Delta p_{\epsilon}+G(p_{\epsilon})) \big)\,dxdt \\ & = \epsilon \iint_{Q_T} \left( \phi\partial_t p_{\epsilon} +p_{\epsilon} \nabla p_{\epsilon}\cdot \nabla \phi - \phi p_{\epsilon} G(p_{\epsilon})\right) \,dxdt, \end{align*} where we use an integration by parts for the last identity. From the estimates in Lemmas \ref{lem:estim} and \ref{estimdtn}, denoting by $\tau>0$ the smallest time in the support of $\phi$, we have \begin{align*} & \left| \epsilon \iint_{Q_T} \big(\phi\partial_t p_{\epsilon} +p_{\epsilon} \nabla p_{\epsilon} \cdot\nabla \phi - \phi p_{\epsilon} G(p_{\epsilon})\big)\,dxdt \right| \\ & \qquad \leq \epsilon \left(\|\phi \|_{L^\infty} \| \partial_t p_\epsilon \|_{L^1([\tau,T]\times\mathbb{R}^d)} + \|\nabla \phi\|_{L^\infty} P_M \| \nabla p_\epsilon \|_{L^1(Q_T)} +\|\phi\|_{L^\infty} G_{m} \| p_\epsilon \|_{L^1(Q_T)}\right) \\ & \qquad \underset{\epsilon \rightarrow 0}{\longrightarrow} 0. \end{align*} We deduce that for any test function $\phi\in \mathcal{D}((0,T)\times \mathbb{R}^d)$, \begin{equation}\label{step1} \iint_{Q_T} \left(- 2\phi p_{\epsilon} |\nabla p_{\epsilon}|^2 - p_{\epsilon}^2 \nabla p_{\epsilon}\cdot \nabla \phi + \phi p_{\epsilon}^2 G(p_{\epsilon})\right) dxdt \underset{\epsilon \rightarrow 0}{\longrightarrow} 0.
\end{equation} Since we have strong convergence of $(p_{\epsilon})_\epsilon$ and weak convergence of $(\nabla p_{\epsilon})_\epsilon$, we can pass to the limit in the last two terms in \eqref{step1}, $$ \iint_{Q_T} \left(- p_{\epsilon}^2 \nabla p_{\epsilon}\cdot \nabla \phi + \phi p_{\epsilon}^2 G(p_{\epsilon})\right)\,dxdt \underset{\epsilon \rightarrow 0}{\longrightarrow} \iint_{Q_T} \left(- p_0^2 \nabla p_0\cdot \nabla \phi + \phi p_0^2 G(p_0)\right) dxdt. $$ Now we are looking for the limit of the first term in \eqref{step1}. We have $ p_{\epsilon} |\nabla p_{\epsilon}|^2 = \frac{4}{9} |\nabla p_{\epsilon}^{3/2}|^2$. By weak convergence in $L^2(Q_T)$ of $ \nabla p_{\epsilon}^{3/2} = \frac{3}{2}\, p_{\epsilon}^{1/2} \nabla p_{\epsilon}$ and by weak lower semicontinuity of the convex functional $v \mapsto \iint_{Q_T} \phi |v|^2\,dxdt$ (for nonnegative $\phi$), $$ \iint_{Q_T} \phi\, p_{0} |\nabla p_{0}|^2\,dxdt \leq \underset{\epsilon\to 0}{\lim\inf} \iint_{Q_T} \phi\, p_{\epsilon} |\nabla p_{\epsilon}|^2\,dxdt.$$ Thus, we conclude from \eqref{step1} that $$ 0 \leq \iint_{Q_T} \left(- 2\phi p_{0} |\nabla p_0|^2 - p_0^2 \nabla p_0\cdot \nabla \phi + \phi p_0^2 G(p_0)\right) \,dxdt, $$ which is a first inequality for \eqref{compldis}. \\ {\it 2nd step.} Now we want to show the reverse inequality, i.e. $$0 \geq \iint_{Q_T} \left(- 2\phi p_{0} |\nabla p_0|^2 - p_0^2 \nabla p_0\cdot \nabla \phi + \phi p_0^2 G(p_0)\right) \,dxdt. $$ We know that $$ \partial_t n_{\epsilon} -\Delta q_{\epsilon} = n_{\epsilon} G(p_{\epsilon}), $$ with $ q_{\epsilon} = p_{\epsilon} - \epsilon \ln(p_{\epsilon} +\epsilon).$ Thanks to the inequality $ \epsilon \ln(\epsilon) \leq \epsilon \ln(p_{\epsilon} +\epsilon) \leq \epsilon \ln(P_M +\epsilon) $, and the strong convergence $p_\epsilon\to p_0$, we know that $q_{\epsilon} \rightarrow p_0$ as $\epsilon\to 0$. Because $$ \Delta q_{\epsilon}= \partial_t n_{\epsilon} -n_{\epsilon} G(p_{\epsilon}),$$ we deduce from Lemmas \ref{lem:estim} and \ref{estimdtn} that $ \Delta q_{\epsilon}$ is uniformly bounded in $L^{\infty}([\tau,T];L^{1}(\mathbb{R}^d))$ for any $\tau>0$. This gives us compactness in space but not in time.
Thus, following the idea of \cite{PQV}, we use a regularization process '\`a la Steklov'. Let us introduce a time regularizing kernel $\omega_{\eta} \geq 0$ such that $\mbox{supp}(\omega_{\eta}) \subset [-\eta,0]$. Then, with the notations $ n_{\epsilon,\eta} = \omega_{\eta} *_t n_{\epsilon}$, $q_{\epsilon,\eta}= \omega_{\eta} *_t q_{\epsilon}$, where the convolution holds only in the time variable, \begin{equation}\label{eqconv} \partial_t n_{\epsilon,\eta} -\Delta q_{\epsilon,\eta} = (n_{\epsilon} G(p_{\epsilon}))*\omega_{\eta}. \end{equation} We denote $ U_{\epsilon} =\Delta q_{\epsilon,\eta}$; then \begin{align*} U_{\epsilon} &= \partial_t n_{\epsilon,\eta}-(n_{\epsilon} G(p_{\epsilon}))*\omega_{\eta} \\ &= n_{\epsilon} *\partial_t \omega_{\eta}-(n_{\epsilon} G(p_{\epsilon}))*\omega_{\eta}. \end{align*} Since $ n_{\epsilon}$ and $n_{\epsilon} G(p_{\epsilon})$ are uniformly bounded in $W^{1,1}_{loc}(Q_T)$ from Lemmas \ref{lem:estim} and \ref{estimdtn}, $(U_{\epsilon})_\epsilon $ is bounded in $W^{1,1}_{loc}(Q_T)$ for fixed $\eta$, and we can extract a converging subsequence, still denoted $(U_{\epsilon})_\epsilon$, converging towards $U_0$ in $L^1_{loc}(Q_T)$. Moreover $$ U_0 = \Delta p_{0}*\omega_{\eta}.$$ We multiply \eqref{eqconv} by $P'(n_{\epsilon})= \frac{\epsilon}{(1-n_{\epsilon})^2}= \frac{1}{\epsilon} (p_{\epsilon}+\epsilon)^2$, $$ \frac{\epsilon}{(1-n_{\epsilon})^2} \partial_t n_{\epsilon,\eta} -\frac{1}{\epsilon} (p_{\epsilon}+\epsilon)^2\Delta q_{\epsilon,\eta} =\frac{1}{\epsilon} (p_{\epsilon}+\epsilon)^2 (n_{\epsilon} G(p_{\epsilon}))*\omega_{\eta}. $$ Then, multiplying by $\epsilon$ and passing to the limit $\epsilon\to 0$, we obtain, thanks to the above remark, $$ \frac{\epsilon^2}{(1-n_{\epsilon})^2} \partial_t n_{\epsilon,\eta} \underset{\epsilon\to 0}{\longrightarrow} p_0^2 \Delta p_{0}*\omega_{\eta} + p_0^2(n_{0} G(p_{0}))*\omega_{\eta}.
$$ So we are left to prove that for any $\eta >0$, we have $$ \lim_{\epsilon\to 0} \frac{\epsilon^2}{(1-n_{\epsilon})^2} \partial_t n_{\epsilon,\eta} \leq 0.$$ We compute, for a fixed $\eta >0$, \begin{align*} \frac{\epsilon^2}{(1-n_{\epsilon})^2} \partial_t n_{\epsilon,\eta}(t,x) & = \int_{\mathbb{R}} \frac{\epsilon^2}{(1-n_{\epsilon}(t,x))^2} \partial_t n_{\epsilon} (s,x)\omega_{\eta}(t-s) \,ds \\ & = \int_{\mathbb{R}} \frac{\epsilon^2}{(1-n_{\epsilon}(s,x))^2} \partial_t n_{\epsilon} (s,x)\omega_{\eta}(t-s) \,ds \\ &+\int_{\mathbb{R}} \Big( \frac{\epsilon^2}{(1-n_{\epsilon}(t,x))^2}- \frac{\epsilon^2}{(1-n_{\epsilon}(s,x))^2}\Big) \Big(\partial_t n_{\epsilon} (s,x)+\frac{C}{s}\Big)\omega_{\eta}(t-s) \,ds \\ &-C \int_{\mathbb{R}} \Big( \frac{\epsilon^2}{(1-n_{\epsilon}(t,x))^2}- \frac{\epsilon^2}{(1-n_{\epsilon}(s,x))^2}\Big) \frac{\omega_{\eta}(t-s)}{s} \,ds \\ & = \text{ I }_{\epsilon}+\text{ II }_{\epsilon}+\text{ III }_{\epsilon}, \end{align*} where $C$ is a constant such that $ \partial_t n_{\epsilon} (s,x)+\frac{C}{s} \geq 0$, which exists by Lemma \ref{lem:regul}. For the first term, since $(p_{\epsilon}+\epsilon)^2 \partial_t n_{\epsilon} = \epsilon\, \partial_t p_{\epsilon}$, we have \begin{align*} \iint_{Q_T} |\text{ I }_{\epsilon}| \,dxdt & \leq \epsilon \iint_{Q_T} \int_{\mathbb{R}} |\partial_t p_{\epsilon} (s,x)|\omega_{\eta}(t-s) \,ds\,dxdt \\ &\leq \epsilon \|\omega_{\eta}\|_{L^\infty} \| \partial_t p_{\epsilon}\|_{L^1(Q_T)} \leq \epsilon C_{\eta}\\ & \underset{\epsilon\to 0}{\longrightarrow} 0. \end{align*} For the second term, we have $$ \frac{\epsilon^2}{(1-n_{\epsilon}(t,x))^2} = (p_{\epsilon}+\epsilon)^2$$ and $\partial_t (p_{\epsilon}+\epsilon)^2 = 2(p_{\epsilon}+\epsilon) \partial_t p_{\epsilon} \geq - \frac{C'}{t} $.
Let $0 \leq \xi \in \mathcal{C}^{\infty}_c(Q)$ and let $\tau>0$ be the smallest time in its support; we then have, for $t \geq \tau$, $$ \partial_t (p_{\epsilon}+\epsilon)^2(t,x) \geq - \frac{C'}{\tau} .$$ Integrating over $(t,s) \subset (\tau, + \infty)$, we get $$ \frac{\epsilon^2}{(1-n_{\epsilon}(t,x))^2}- \frac{\epsilon^2}{(1-n_{\epsilon}(s,x))^2} \leq \frac{C'}{\tau} (s-t) .$$ Then, since $s-t \leq \eta$ on the support of $\omega_{\eta}(t-s)$, $$ \iint_{Q} \xi \,\text{ II }_{\epsilon}\,dxdt \leq \frac{C'}{\tau} \eta \iint_{Q} \int_{\mathbb{R}} \Big(\partial_t n_{\epsilon} (s,x)+\frac{C}{\tau}\Big)\omega_{\eta}(t-s) \,ds\, dx\, dt \leq C'_{\tau } \eta,$$ where we use the bound on $\partial_t n_{\epsilon}$ in Lemma \ref{estimdtn}. For the third term, since $s \geq t >0$, for any test function $\xi$ as above, \begin{align*} \iint_{Q} \xi \,\text{ III }_{\epsilon}\,dxdt = & -C \iint_{Q} \xi\int_{\mathbb{R}} \big( (p_{\epsilon}(t)+\epsilon)^2- (p_{\epsilon}(s)+\epsilon)^2\big) \frac{\omega_{\eta}(t-s)}{s} \,ds\,dxdt \\ & \underset{\epsilon\to 0}{\longrightarrow} -C \iint_{Q} \xi \int_{\mathbb{R}} \big( p_{0}(t)^2- p_{0}(s)^2\big) \frac{\omega_{\eta}(t-s)}{s} \,ds\,dx dt\\ & \qquad = -C \iint_{Q} \xi \left[p_0^2(t) \int_{\mathbb{R}} \frac{\omega_{\eta}(t-s)}{s} \,ds - \int_{\mathbb{R}} \frac{p_{0}(s)^2}{s} \omega_{\eta}(t-s)\,ds\right] dx dt = \underset{\eta\to 0}{o}(1). \end{align*} So, for every test function $\xi$ as above and every $\eta>0$, $$ \iint_{Q_T} \xi \big(p_{0}^2 \Delta p_{0}*\omega_{\eta} + p_0^2 (n_0G(p_0))*\omega_{\eta}\big) \,dx dt \leq \underset{\eta\to 0}{o}(1). $$ It now remains to pass to the limit $ \eta \rightarrow 0$ in the regularization process. Thanks to an integration by parts, $$ 0 \geq \iint_{Q_T} (- 2\xi p_{0} \nabla p_0\cdot\nabla p_0*\omega_\eta - p_0^2 \nabla \xi\cdot\nabla p_0 *\omega_\eta + \xi p_0^2 (n_0 G(p_0))*\omega_\eta )dxdt.
$$ From the $L^2$ estimate on $\nabla p_0$ (Lemma \ref{lem:L2dp}) and the $L^1\cap L^\infty$ estimate on $p_0$ (Lemma \ref{lem:estim}), we deduce that we can pass to the limit $\eta\to 0$ and get $$ 0 \geq \iint_{Q_T} (- 2\xi p_{0} |\nabla p_0|^2 - p_0^2 \nabla \xi\cdot\nabla p_0 + \xi p_0^2 n_0 G(p_0))dxdt. $$ Finally, from \eqref{n0p0}, we have $p_0 n_0 = p_0$, so that $\xi p_0^2 n_0 G(p_0) = \xi p_0^2 G(p_0)$ and the reverse inequality follows. This concludes the proof.
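For the record, the two steps combine as follows: the first step gives the inequality $\geq 0$, and the second, after replacing $p_0^2 n_0 G(p_0)$ by $p_0^2 G(p_0)$ thanks to \eqref{n0p0}, gives the inequality $\leq 0$, so that for every nonnegative test function $\phi$, $$ \iint_{Q_T} \left(- 2\phi p_{0} |\nabla p_0|^2 - p_0^2 \nabla p_0\cdot \nabla \phi + \phi p_0^2 G(p_0)\right) \,dxdt = 0, $$ which is exactly the weak form \eqref{compldis} of the complementary relation $p_0^2(\Delta p_0 + G(p_0))=0$.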
Hello and thank you for being involved in the first ever Edinburgh Festival of Sound! We are looking forward to seeing you there and greatly appreciate your involvement. In order for us to complete the full programme, we need to gather some information from you. Could you please complete the below form by Tuesday 9th October to ensure you are represented accurately in the programme, which will be on the website, with paper copies distributed at the festival. In addition, could you please email over images for us to use to russ.mcmahon@signumaudio.com, otherwise we will just use your logo. Thanks! We will email you general information about travel, accommodation, guest list allocation, stand specs, access for load in/out, disabled access, parking etc to the email address you provide in this form by Tuesday 16th October. If you require the information earlier for whatever reason, please contact russ.mcmahon@signumaudio.com. We provide this as an additional option for exhibitors to show off their gear.
Uyen and Simon set up Fernandez & Leluu (F&L) about 3 months ago at their delightful flat in Hackney. Before starting her supper club and possibly because of my write up on Banh Mi, Uyen, who is Vietnamese became a follower of The London Foodie. I have, since then, corresponded with her whilst following her Supper Club venture with great interest and admiration. I was lucky enough to meet them a few days ago at their Miss Saigon Evening – it was like meeting a "pen-friend" (if such things still exist), someone I felt I knew reasonably well but had never met. Uyen and Simon were charming and very welcoming – as "front-of-house", Uyen had more time and opportunity to talk to her guests while Simon was in the kitchen helped by Uyen's mother. Uyen and Simon make up a great partnership – from completely different backgrounds (Vietnamese and Spanish), their cooking style reflects their cosmopolitan origins and sophistication. Their beautiful flat has an understated elegance about it, the casual but chic look of its furniture and decor adding to the experience at F&L. The evening was sold out, and soon their living room was filled with a trendy crowd, mostly in their 20s and 30s, chatting excitedly over their bottles of wine and in great anticipation of the 7 courses that were to follow. Soon enough, our first course was served – "Bi Cuon" (summer rolls of egg, cured pork, shredded pork skin, with mint, coriander and lettuce). Summer rolls is one of my favourite starters in any Vietnamese restaurant in London or Vietnam, and I order it every time. F&L's summer rolls bore no resemblance to the usual fare that I had tasted so far – they cleverly substituted the vermicelli with shredded pork skin, and used cured pork meat instead of the blander prawns that are so commonly used. An egg omelette was also used together with herbs and lettuce. The combination of flavours was sensational and the accompanying sauce - sweet, sour and slightly hot was perfect. 
The "Banh Bao Dep" (clear prawn tapioca dumplings) served with "Phan Thiet" sweet & hot fish sauce was another exceptional dish. The tapioca wrapping was deliciously light, its silken texture contrasting beautifully with the meaty prawns. The "Phan Thiet" sauce was also complex, complementing the simple flavours of the dish well. The third course was "Goi Ga" (shredded chicken, coriander and lime salad served with prawn crackers). Similar to a Thai or Laotian laap, this dish was again packed with flavours while being very refreshing. A great dish to share around. F&L's "Cha Gio Nem" (woven spring rolls with prawns, pork, black fungus and vermicelli) was also excellent. I don't normally get excited about spring rolls but F&L's had obviously been freshly made on the premises. They tasted delicious and were a cut above anything I had ever eaten in Asian restaurants. Uyen has recently posted her own spring roll recipe with great photos on their Fernandez & Leluu blog, which I am now dying to try out. I am a real sucker for slow-cooked meats and rich stews – I love tagines, khoreshes, and casseroles of any kind. Our next course, "Thit Heo Trung Kho" (braised ham & quail egg in fish sauce and coconut juice) was another fine example of such a dish. The meat was exquisitely tender and tasted sweet from the coconut juice and spices, and was nicely complemented by the quail eggs. It was served with rice and stir fried morning glory. The "Banh Canh" (Udon noodle soup with dill fish cakes) was delicate and soothing. The homemade fish cakes had been flavoured with dill and were utterly delicious. For pudding, we were served "Che Chuoi" (banana with tapioca in coconut milk). The tapioca pearls had a pleasantly soft texture soaking up the banana and coconut milk flavours. It was a light and elegant dessert which I would love to try again. We were also served Vietnamese coffee "Ca Phe Sua Da" (Vietnamese weasel coffee with condensed milk).
I was never a fan of this, having tried it a few times in Vietnam, but to my surprise, F&L's version was strongly flavoured and not overly sweet. This was the perfect way to finish an exceptional meal. I have already been to a few supper clubs, but undoubtedly Fernandez & Leluu and The Loft by Nuno Mendes are the top two by a long way in my opinion. Unlike The Loft, where the suggested donation is £115 per person, F&L's £35 price tag appears very moderate for this level of sophistication and consistency. I have booked again for the 18th December for their White Christmas evening, and am really looking forward to this. Verdict – sophisticated cuisine by delightful couple Uyen and Simon @ £35 a head plus tips. A great value supper club serving outstanding food in a charming apartment in East London. BYOB with no corkage charge. One of the best London supper clubs, and highly recommended. Dr Tiffany Morris on 1st March 2011: I had read about this underground restaurant on The London Foodie and as a fan of all things Vietnamese I was very excited to give it a try! I, three of my housemates, and six couchsurfers (a mix of local hosts and travellers passing through) met to have a meal to remember. Several of us arrived on bikes that were lovingly carried over the diners to the back garden for safekeeping by our host Simon. The seven courses were served at a relaxed but regular pace between 8pm and midnight. Each dish was shared between two, which meant we quickly got to know each other. All of the dishes were delicious and the one that stood out the most for me was the sashimi and chips. Fresh delicious salmon sashimi was served with a small dish of perfectly cooked chips, mushy peas and a side of fresh mayonnaise. It was a very simple but delicious union of western and eastern cooking.
I loved the way you made the dishes sound so truly irresistible, and now I'm converted to Supper Club culture as well; the food pictures are mouth watering, not to mention that you always manage to capture beautiful people too. I can always trust you to find the most unusual and amazing places. So far I've never been disappointed with any of your recommendations. Heh London Foodie - I think I must have been there the same day as you! Sadly I am not blessed with your powers of culinary recall! It is great to be reminded (with beautiful photos and eloquent word paintings) of an outstanding meal - quality and attention to detail I would expect from a restaurant, not a supper club. I am not sure you mentioned that the very reasonable £30 donation also included a complimentary bottle of good wine per table. Not only was the quality of the food and presentation exceptional, but the value for money was amazing, considering how many mediocre meals in London cost over £50 a head. Keep up these reviews and insights into the London food scene please! @ Regis - thanks for the encouraging words, it was not difficult to make the dishes sound delicious and they truly were! I am glad you like the posting and I heartily recommend Fernandez & Leluu, the best Supper Club in London. @ Mr Truffle - wow, I am so grateful for your commentary, thanks! What a shame that we never managed to chat on the night. Indeed you are right, there was a complimentary bottle of wine at our table, which was an amazing touch considering other supper clubs where wine is sold and corkage is levied on wine that guests bring in. F&L is a real gem of a place! Love foods.. makes me feel hungry.. @ Greedy Diva - hi Carly, I cannot recommend F&L enough, they are consistently good, their Vietnamese evenings are the best. @ Photos of London - thanks for stopping by and commenting! I hope you will be able to enjoy F&L sometime soon.
\section{INTRODUCTION} \label{sec:intro} \vspace{-2pt} The Boundary Element Method (BEM), often called the `panel method' in fluid dynamics, is a standard technique for the solution of boundary integral equations in a number of fields. Historically, relatively low-order discretizations have been used with geometries modelled using first order elements, and surface variables modelled to zero or first order on those elements. There are many methods available for the computation of potential integrals on linear panels~\cite[for example]{okon-harrington82b,okon-harrington82a,newman86,suh00,salvadori10,% carley13}, and their behaviour is reasonably well-understood, allowing them to be implemented with confidence in production codes. More recently, however, there has been increasing interest in the use of higher order methods, in part to achieve better geometric fidelity, and in part to improve the modelling of solutions. For example, a recently developed panel method for whole aircraft aerodynamics~\cite{willis-peraire-white05,willis-peraire-white07}, employing accelerated integration and summation techniques, depends on the availability of a robust integration scheme for second order panels, developed by the authors~\cite{willis-peraire-white06}, to avoid some of the deficiencies inherent in other curved panel integration techniques~\cite[for example]{wang-newman-white00}. To clarify the application, we consider the solution of a boundary integral formulation of the Laplace equation: \begin{align} \label{equ:potential} \phi(\mathbf{x}) &= \int_{S} \frac{\partial\phi(\mathbf{y})}{\partial n}G(\mathbf{x},\mathbf{y}) - \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n}\phi(\mathbf{y}) \,\D S, \end{align} where $\phi$ denotes potential, $\mathbf{x}$ field point position, $\mathbf{y}$ position on the surface $S$, and $n$ the outward pointing normal to the surface. 
The Green's function $G$ is: \begin{align} \label{equ:laplace} G(\mathbf{x};\,\mathbf{y}) &= \frac{1}{4\pi R},\\ R &= |\mathbf{x}-\mathbf{y}|. \nonumber \end{align} To solve this problem using a BEM, the surface $S$ is discretized into a number of elements, triangular in this case, over which $\phi$ is approximated by some interpolant. This results in a linear system: \begin{align} \label{equ:potential:system} c_{i}(\mathbf{x})\phi_{i} &= \sum_{j=1}^{P} \int_{S_{j}} \frac{\partial\phi(\mathbf{y})}{\partial n}G(\mathbf{x},\mathbf{y}) \,\D S_{j} - \sum_{j=1}^{P} \int_{S_{j}} \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n}\phi(\mathbf{y}) \,\D S_{j}, \end{align} where $P$ is the number of elements (panels), $i$ is the index of a surface point, $S_{j}$ is the surface of panel $j$, and the constant $c_{i}(\mathbf{x})$ is a geometric property given by: \begin{align} \label{equ:constant} c_{i} &= 1 + \int_{S} \frac{\partial G(\mathbf{x}_{i},\mathbf{y})}{\partial n}\,\D S, \end{align} equal to $1/2$ at a smooth point on the surface, taking some other value at sharp edges. Inserting the interpolant for each element into Equation~\ref{equ:potential:system} yields a system of equations relating $\phi$ to $\partial\phi/\partial n$, allowing the problem to be solved subject to the specification of some boundary condition. In aerodynamic problems, this will usually be the Neumann boundary condition, specifying the surface normal velocity $\partial\phi/\partial n$. Upon solving for surface potential $\phi$, the boundary integral can then be used to compute the potential or its derivatives, i.e.\ fluid velocity, external to the surface, using Equation~\ref{equ:potential}. The core of the implementation is then the evaluation of the panel integrals in Equation~\ref{equ:potential:system}.
For planar panels, there is no great difficulty, and for the Laplace equation, the integration can be performed analytically using a variety of approaches~\cite{okon-harrington82b,okon-harrington82a,newman86,% suh00,salvadori10,carley13}, although a numerical method is still necessary for the Helmholtz equation for acoustic scattering and radiation. When the panel is curved, however, a fully numerical method is required, and extra difficulties arise in finding a transformation which maps the curved panel to a reference domain where standard quadrature rules can be applied. A common approach in integrating over planar panels is to convert to polar coordinates with axis perpendicular to the element plane, which mitigates difficulties caused by the $1/R$ singularity in the Green's function. Such an approach has been used in computing the self-term for curved panels~\cite{willis-peraire-white06}, and a similar technique will be used here. Alternatives which have been used include the mapping of the element onto a plane triangle~\cite{wang-newman-white00}, which can, however, only be used for the single layer potential, and onto a sphere~\cite{willis-peraire-white06}, with appropriate conversions between the appropriate Jacobians. In this paper, we present a technique for the evaluation of a quadrature rule for second order curved panels which uses a polar transformation of the integral combined with basic geometric operations, to give a method more akin to the current techniques for planar panels, but with additional complexity due to the need to perform the geometric operations on curved element edges. \section{ANALYSIS} \label{sec:analysis} \vspace{-2pt} The quadrature method for the curved element is derived for a panel in a reference position. For a planar element, there is no difficulty in defining an element plane which can be used to fix a coordinate system for the triangle, but for the curved elements we consider here, there is clearly a choice to be made. 
A standard approach is to use a plane tangent to the element, through some appropriate point, for example, the field point $\mathbf{x}$ when a self-term is being computed, but here we use the plane defined by the three corners of the triangular element. The problem axes are rotated and shifted so that the corners lie in a plane $z=0$ and the field point $\mathbf{x}=(0,0,z)$, i.e.\ the origin of the coordinate system is taken as the projection of the field point onto the triangle reference plane. In this orientation, cylindrical polar coordinates can be readily defined and used to carry out the required integration. The rest of this section describes the geometric operations employed, and the technique used to define the quadrature rule. \subsection{Description of second order triangles} \label{sec:triangles} \begin{figure} \centering \begin{tabular}{cc} \includegraphics{ijnme13-figures-1} & \includegraphics{ijnme13-figures-2} \end{tabular} \caption{Description of second-order curved triangle. Left hand image: curved triangle; right hand image: reference right-angle triangle.} \label{fig:triangle} \end{figure} Figure~\ref{fig:triangle} shows the notation used for description of the second order triangle. Nodes are numbered~1,2,3 for the corners, and~4,5,6 for the points internal to an edge. 
Quantities on the element, including position, are interpolated using the second order shape functions for the reference element: \begin{align} \label{equ:interp} \mathbf{y} &= \sum_{i=1}^{6} L_{i}(\xi,\eta) \mathbf{y}_{i},\\ \phi &= \sum_{i=1}^{6} L_{i}(\xi,\eta) \phi_{i}, \end{align} where: \begin{subequations} \label{equ:shape} \begin{align} L_{1} &= 2(1-\xi-\eta)(1/2-\xi-\eta),\\ L_{2} &= 2\xi(\xi-1/2),\\ L_{3} &= 2\eta(\eta-1/2),\\ L_{4} &= 4\xi(1-\xi-\eta),\\ L_{5} &= 4\xi\eta,\\ L_{6} &= 4\eta(1-\xi-\eta),\\ 0&\leq\xi\leq 1,\,0\leq\eta\leq 1-\xi.\nonumber \end{align} \end{subequations} As shown in Figure~\ref{fig:triangle}, the edges of the triangle are defined by second order interpolation on three points, and can be described using a single variable $\gamma$, $0\leq\gamma\leq1$, with $\gamma$ increasing in the anti-clockwise direction on each edge. Inserting the conditions for each edge shown in Figure~\ref{fig:triangle} gives three shape functions: \begin{subequations} \begin{align} J_{1} &= 2\gamma^{2} - 3\gamma + 1,\\ J_{2} &= 2\gamma^{2} - \gamma,\\ J_{3} &= -4\gamma^{2} + 4\gamma, \end{align} \end{subequations} with the edge described by: \begin{align} \label{equ:edge} \mathbf{y}(\gamma) &= \mathbf{y}_{i}J_{1}(\gamma) + \mathbf{y}_{j}J_{2}(\gamma) + \mathbf{y}_{i+3}J_{3}(\gamma), \end{align} where $(i,j)$=$(1,2)$, $(2,3)$, $(3,1)$ for each edge respectively. A point on an edge can be given in the general coordinates $(\xi,\eta)$ using the relations: \begin{align} \label{equ:edge:area} (\xi,\eta) &= \left\{ \begin{matrix} &(\gamma,0)\quad &&i=1;\\ &(1-\gamma,\gamma),\quad &&i=2;\\ &(0,1-\gamma),\quad &&i=3. \end{matrix} \right. \end{align} These shape functions will prove useful in determining intersections between edges and lines in the plane, necessary in finding the domain of integration for quadrature over the triangular element. 
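As a quick sanity check on the shape functions above: since they must interpolate a constant exactly, both the triangle functions $L_{1},\ldots,L_{6}$ and the edge functions $J_{1},J_{2},J_{3}$ sum to one everywhere on their reference domains. An illustrative Python sketch (our own naming, not part of the method itself):

```python
def L(xi, eta):
    """Second order triangle shape functions L_1..L_6 at (xi, eta)."""
    return [
        2 * (1 - xi - eta) * (0.5 - xi - eta),
        2 * xi * (xi - 0.5),
        2 * eta * (eta - 0.5),
        4 * xi * (1 - xi - eta),
        4 * xi * eta,
        4 * eta * (1 - xi - eta),
    ]


def J(gamma):
    """Edge shape functions J_1..J_3 at gamma."""
    return [2 * gamma**2 - 3 * gamma + 1,
            2 * gamma**2 - gamma,
            -4 * gamma**2 + 4 * gamma]


# Interpolating a constant must reproduce it exactly, so the shape
# functions sum to one at every point of the reference domain.
for xi, eta in [(0.0, 0.0), (0.3, 0.2), (1 / 3, 1 / 3), (0.1, 0.85)]:
    assert abs(sum(L(xi, eta)) - 1.0) < 1e-12
for g in [0.0, 0.25, 0.5, 1.0]:
    assert abs(sum(J(g)) - 1.0) < 1e-12
```

The nodal property $L_{i}(\xi_{j},\eta_{j})=\delta_{ij}$ can be checked the same way, e.g.\ $L_{1}(0,0)=1$.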
\subsection{Integration over the triangle} \label{sec:integration} \begin{figure} \centering \includegraphics{ijnme13-figures-5} \caption{Coordinate system for calculation of integrals. The triangle is rotated so that the vertices 1,2,3 lie in the plane $z=0$ (shown shaded), and translated so that the field point lies at a position $(0,0,z)$.} \label{fig:orientation} \end{figure} Figure~\ref{fig:orientation} shows the curved triangle in its reference position, rotated so that the corners lie in the plane $z=0$, and shifted so that the field point lies at $\mathbf{x}=(0,0,z)$. The integral to be evaluated is \begin{align} \label{equ:potential:1} I &= \int_{0}^{1}\int_{0}^{1-\xi} f(\xi,\eta) J(\xi,\eta)\,\D\eta\,\D\xi, \end{align} where $J(\xi,\eta)$ is the Jacobian for the transformation from $(\xi,\eta)$ to coordinates on the element surface. Converting to Cartesian coordinates in the problem system of axes: \begin{align} \label{equ:potential:2} I &= \int_{\triangle} f(\xi,\eta) r\,\D r\,\D\theta, \end{align} where integration takes place over the projection of the curved triangle into the plane $z=0$ and the function $f(\xi,\eta)$ is computed by transformation from $(r,\theta)$ to $(\xi,\eta)$. \begin{figure} \centering \includegraphics{ijnme13-figures-3} \caption{Intersection of rays with curved triangle: bold lines show the part of each ray which lies inside the domain of integration.} \label{fig:intersection} \end{figure} The integration of Equation~\ref{equ:potential:2} is conceptually simple, and is readily applied to first order elements~\cite{carley13}, but gives rise to some extra complexities in the second order case, shown in Figure~\ref{fig:intersection}. As written, the integration is composed of a sequence of integrals over $r$, along rays at fixed values of $\theta$. In Figure~\ref{fig:intersection}, two such rays are shown, at angles $\theta_{1}$ and $\theta_{2}$. 
The ray at $\theta_{1}$ presents no particular difficulties: it has one entry and one exit point on the element boundary, both easily found using analytical methods (see Section~\ref{sec:operations}). The ray at $\theta_{2}$, however, is broken as it crosses the triangle boundary, having two entry and two exit points. In evaluating Equation~\ref{equ:potential:2}, this case must be handled, as must the case of a ray which lies tangent to an edge. The algorithm of Section~\ref{sec:algorithm} handles these special cases, using the geometrical operations of the next section. \subsection{Geometrical operations} \label{sec:operations} The quadrature algorithm, presented in Section~\ref{sec:algorithm}, depends on the availability of a number of elementary geometrical operations, described in this section. These operations can be implemented analytically using standard methods and are used to determine the limits of integration in $\theta$, to break the integral at possible points of discontinuity, and in $r$, to find the entry and exit points on the triangle boundary. The first operation is finding the intersection, $\gamma$ or $(r,\theta)$, between a ray through the origin of angle $\theta$ and an edge of the triangle. The coordinates of a point on the edge are given by Equation~\ref{equ:edge}: \begin{align} \label{equ:edge:1} x = x_{i}J_{1}(\gamma) + x_{j}J_{2}(\gamma) + x_{i+3}J_{3}(\gamma),\, y = y_{i}J_{1}(\gamma) + y_{j}J_{2}(\gamma) + y_{i+3}J_{3}(\gamma), \end{align} while the ray is given by $x=r\cos\theta$, $y=r\sin\theta$, so that, upon substitution: \begin{align} \label{equ:edge:2} (x_{i}\sin\theta - y_{i}\cos\theta)J_{1}(\gamma) + (x_{j}\sin\theta - y_{j}\cos\theta)J_{2}(\gamma) + (x_{i+3}\sin\theta - y_{i+3}\cos\theta)J_{3}(\gamma) = 0, \end{align} which is a quadratic in $\gamma$. 
To find the intersection: \begin{enumerate} \item solve Equation~\ref{equ:edge:2} for $\gamma$; \item for each value of $\gamma$, with $0\leq\gamma\leq1$ \begin{enumerate} \item compute $(x,y)$ and $r=x/\cos\theta$ or $r=y/\sin\theta$; \item if $r>0$, $\gamma$ is a valid intersection. \end{enumerate} \end{enumerate} In this operation, $r$ is computed as shown in order to accept only $r>0$, to avoid double counting of intersections. The condition $0\leq\gamma\leq1$ is imposed to exclude corners of the triangle, as these are handled separately in the algorithm. Tangents which pass through the origin must also be determined, in order to break the integration at these points. They are found using the equation of a tangent to a point $(x_{0},y_{0})$ on the edge: \begin{align} \label{equ:edge:3} y &= y_{0} + \left.\frac{\D y}{\D x}\right|_{x=x_{0}}(x-x_{0}),\\ \left. \frac{\D y}{\D x} \right|_{x=x_{0}} &= \frac{y_{i}J'_{1}(\gamma) + y_{j}J'_{2}(\gamma) + y_{i+3}J'_{3}(\gamma)} {x_{i}J'_{1}(\gamma) + x_{j}J'_{2}(\gamma) + x_{i+3}J'_{3}(\gamma)}, \end{align} where the prime denotes differentiation with respect to $\gamma$. Setting $x=y=0$ to find a tangent through the origin yields: \begin{align} &x_{i}y_{j} \left[ J_{1}'(\gamma)J_{2}(\gamma) - J_{2}'(\gamma)J_{1}(\gamma) \right] + x_{j}y_{i+3} \left[ J_{2}'(\gamma)J_{3}(\gamma) - J_{3}'(\gamma)J_{2}(\gamma) \right] \nonumber\\ \label{equ:edge:5} &+ x_{i+3}y_{i} \left[ J_{3}'(\gamma)J_{1}(\gamma) - J_{1}'(\gamma)J_{3}(\gamma) \right] = 0, \end{align} which is a cubic which can be solved for $\gamma$ subject to the constraint $0<\gamma<1$ and that $\gamma$ be real. 
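The ray/edge intersection step above reduces to a quadratic in $\gamma$, obtained by collecting the coefficients of $J_{1}$, $J_{2}$, $J_{3}$. A minimal Python sketch (function names and the degeneracy tolerance are our own; the paper's implementation may differ):

```python
import math


def J(gamma):
    """Edge shape functions J_1..J_3 at gamma."""
    return (2 * gamma * gamma - 3 * gamma + 1,
            2 * gamma * gamma - gamma,
            -4 * gamma * gamma + 4 * gamma)


def ray_edge_intersections(x1, y1, x2, y2, xm, ym, theta, eps=1e-12):
    """Intersections of the ray x = r cos(theta), y = r sin(theta), r > 0,
    with the quadratic edge through (x1, y1) and (x2, y2) with mid-node
    (xm, ym).  Returns a list of (gamma, r) pairs."""
    s, c = math.sin(theta), math.cos(theta)
    a1 = x1 * s - y1 * c  # coefficient of J_1
    a2 = x2 * s - y2 * c  # coefficient of J_2
    a3 = xm * s - ym * c  # coefficient of J_3
    # a1*J1 + a2*J2 + a3*J3 = A*g^2 + B*g + C
    A = 2 * a1 + 2 * a2 - 4 * a3
    B = -3 * a1 - a2 + 4 * a3
    C = a1
    if abs(A) < eps:  # edge is straight as seen from this ray
        roots = [] if abs(B) < eps else [-C / B]
    else:
        disc = B * B - 4 * A * C
        if disc < 0:
            return []
        roots = [(-B + math.sqrt(disc)) / (2 * A),
                 (-B - math.sqrt(disc)) / (2 * A)]
    out = []
    for g in roots:
        if 0 <= g <= 1:
            J1, J2, J3 = J(g)
            x = x1 * J1 + x2 * J2 + xm * J3
            y = y1 * J1 + y2 * J2 + ym * J3
            # recover r from the larger direction cosine; keep only r > 0
            # to avoid double counting of intersections
            r = x / c if abs(c) > abs(s) else y / s
            if r > eps:
                out.append((g, r))
    return out
```

For example, for the straight edge from $(1,0)$ to $(0,1)$ with mid-node $(0.5,0.5)$, a ray at $\theta=\pi/4$ meets the edge at $\gamma=1/2$, $r=\sqrt{2}/2$.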
The final part of determining the limits of integration in $\theta$ is the angle $\psi$ of a tangent to an edge, given by: \begin{align} \label{equ:edge:6} \psi &= \tan^{-1} \frac{y_{i}J'_{1}(\gamma) + y_{j}J'_{2}(\gamma) + y_{i+3}J'_{3}(\gamma)} {x_{i}J'_{1}(\gamma) + x_{j}J'_{2}(\gamma) + x_{i+3}J'_{3}(\gamma)}, \end{align} from the slope of the curve at a point $\gamma$ on an edge. A second angle $\psi+\pi$ is also included in order to ensure that rays in both tangent directions are included. \subsection{Quadrature algorithm} \label{sec:algorithm} The algorithm for the quadrature rule consists of a first stage in which the range of integration in $\theta$ is broken into a set of intervals, and a second in which the integration is performed over these intervals. Initially it must be determined whether the origin lies inside, outside or on the boundary of the triangle projected into the plane $z=0$. This is done by first checking if the origin lies outside a box containing the points $(x_{1},y_{1})$, $(x_{2},y_{2})$, and~$(x_{3},y_{3})$. If not, its coordinates $(\xi,\eta)$ in the triangle are found using Newton's method, and it is checked whether they lie within the triangle. In the first stage of processing: \begin{enumerate} \item find possible limits of integration $\theta_{i}$, $i=1,\ldots,N$, as the angles of rays joining the origin to the triangle's corners, the angles of tangents through the origin, and, if the origin lies on an edge, the angles of tangents to the edge; \item adjust all angles to lie in the range $0,2\pi$; \item sort the list of limits $\theta_{i}$ in ascending order; \item if the origin lies inside the triangle, append the angle $\theta_{1}+2\pi$. 
\end{enumerate} Given the list of $N$ angles from the first stage, the nodes and weights of the quadrature rule are found for each pair of limits, $\theta_{i}$, $\theta_{i+1}$ by this procedure: \begin{enumerate} \item select a quadrature rule with abscissae $\theta_{k}$ and weights $w_{k}^{\theta}$, $k=1,\ldots,K$; \item for each $k$: \begin{enumerate} \item find the radii $r_{j}$ of the intersections of a ray of angle $\theta_{k}$ with the triangle edges (prepend $r=0$ if the origin lies inside the triangle); \item for each pair of limits $r_{2j}$ and $r_{2j+1}$, select a quadrature rule $(r_{m},w_{m}^{r})$, $m=1,\ldots,M$; \item for each point $(r_{m}\cos\theta_{k},r_{m}\sin\theta_{k})$: \begin{enumerate} \item find the corresponding coordinates $(\xi,\eta)$ using Newton's method; \item append to the quadrature rule the abscissa $(\xi,\eta)$ and the weight $r_{m}w_{m}^{r}w_{k}^{\theta}/J_{2}(\xi,\eta)$, where $J_{2}$ is the Jacobian for conversion from $(\xi,\eta)$ to $(r,\theta)$. \end{enumerate} \end{enumerate} \end{enumerate} The resulting quadrature rule can be used to evaluate an integral on the panel by summation: \begin{align} \label{equ:rule} \int_{0}^{1}\int_{0}^{1-\xi} f(\xi,\eta) J(\xi,\eta)\,\D\eta\,\D\xi &\approx \sum_{n} f(\xi_{n},\eta_{n}) J(\xi_{n},\eta_{n})w_{n}, \end{align} where $J(\xi,\eta)$ is the Jacobian for conversion from $(\xi,\eta)$ to the surface coordinates $(x,y,z)$, allowing the rule to be used in the same manner as standard quadratures for triangles. We note that since the quadrature rule is mapped to the reference triangle, the sum of the weights $w_{n}$ should be equal to $1/2$, which gives a convenient error measure for checking the accuracy of the quadrature. 
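Once generated, the rule is used exactly like any tabulated triangle quadrature: a weighted sum of integrand samples on the reference triangle, with the weights summing to the reference area $1/2$. A small illustration (ours; the example rule below is a standard degree-2 rule with the same interface, not one produced by the algorithm):

```python
def integrate(rule, f):
    """Apply a triangle quadrature rule, given as (xi, eta, w) triples."""
    return sum(w * f(xi, eta) for xi, eta, w in rule)


# Standard degree-2 rule on the reference triangle (a familiar example
# with the same (xi, eta, w) interface as the generated rules).
rule = [(1 / 6, 1 / 6, 1 / 6), (2 / 3, 1 / 6, 1 / 6), (1 / 6, 2 / 3, 1 / 6)]

# Weight-sum check described in the text: the weights sum to 1/2.
assert abs(sum(w for _, _, w in rule) - 0.5) < 1e-12
# The rule integrates linear functions exactly: the integral of xi over
# the reference triangle is 1/6.
assert abs(integrate(rule, lambda x, y: x) - 1 / 6) < 1e-12
```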
Finally, a good starting guess for Newton's method in finding coordinates on the reference triangle is provided by the intersection point $(r,\theta)$ since its value of $\gamma$ on the edge is known, and can be converted to $(\xi,\eta)$ using Equation~\ref{equ:edge:area}. Each set of coordinates $(\xi,\eta)$ on a ray can then be used as an initial guess for the evaluation at the next quadrature point. \subsection{Quadrature selection} \label{sec:quadrature} The quadrature rule is implemented using a sequence of Gaussian quadratures for integration in $\theta$ and $r$. This has the first advantage that the endpoints of the integral are not included. Since tangents are used to fix limits of integration, there is then no ambiguity in determining the number of entry and exit points in radius. The quadrature rules are selected using a criterion which gives some adaptivity to the intervals of integration. A point separation parameter $\Delta\theta$ is specified and used to estimate the number of points required in the quadrature rule: \begin{align} \label{equ:selection} K &= \left[ (\theta_{i+1}-\theta_{i})/\Delta\theta \right], \end{align} where $[\cdot]$ denotes rounding of the value to the nearest natural number. The resulting value $K$ is then adjusted to lie in a range $K_{\min}\leq K \leq K_{\max}$. A similar method is used to select quadrature rules in $r$. In the calculations presented here, the values of $\Delta\theta$ and $\Delta r$ are computed with user-defined constants $N_{\theta}$ and $N_{r}$, and set as follows: \begin{subequations} \begin{align} \Delta\theta &= \frac{\max_{i}\phi_{i}}{N_{\theta}},\\ \Delta r &= \frac{\max_{i}\ell_{i}}{N_{r}}, \end{align} \end{subequations} where $\phi_{i}$ is the angle subtended by the corner of the triangle at vertex $i$, $i=1,2,3$ and $\ell_{i}$ is the length of the straight edge starting at corner $i$. This gives a quickly computed abscissa density under user control. 
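The rule-length selection of Equation~\ref{equ:selection} amounts to a rounded ratio clamped to a user range, as in this sketch (our own naming and default limits):

```python
def rule_length(t0, t1, dt, k_min=4, k_max=16):
    """Number of quadrature points for the interval [t0, t1], given a
    target abscissa spacing dt, rounded to the nearest integer and
    clamped to [k_min, k_max]."""
    K = round((t1 - t0) / dt)
    return max(k_min, min(k_max, K))
```

The same function serves for the selection in $r$, with $\Delta r$ in place of $\Delta\theta$.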
Finally, in using the algorithm in a code, a criterion is required in deciding when to use it, and when not. In this case, a parameter $\sigma$ is computed based on easily evaluated geometric properties of the element. These are the mean values of the nodes and the radius of a sphere containing the element: \begin{subequations} \begin{align} \overline{\mathbf{y}} &= \frac{1}{6}\sum_{i=1}^{6}\mathbf{y}_{i},\\ \rho &= s\max_{i}|\mathbf{y}_{i}-\overline{\mathbf{y}}|, \end{align} \end{subequations} where $s$ is a scaling factor, with, in this case, $s=2^{1/2}$. Given these values for the element, $\sigma$ is determined as follows: \begin{enumerate} \item if $\mathbf{x}$ lies on the element, $\sigma=0$; \item compute $\rho_{x}=|\mathbf{x}-\overline{\mathbf{y}}|$: if $\rho_{x}>\rho$, $\sigma=\rho_{x}/\rho$; \item otherwise, $\sigma=z/\rho$, \end{enumerate} where $z$ is the distance of $\mathbf{x}$ from the element reference plane, as noted above. This gives a quickly-computed parameter which varies from~$0$ on the element to large values away from the element, but remains small in some reasonable neighbourhood, so that it can be used to select quadrature methods. \section{NUMERICAL TESTS} \label{sec:tests} \vspace{-2pt} The algorithm of the previous section is demonstrated using two sets of results. The first is an illustration of the distribution of quadrature nodes, using a sample element, while the second is an assessment of the accuracy and convergence of the method when implemented in a BEM program. \subsection{Quadrature points} \label{sec:points} \begin{figure} \centering \begin{tabular}{ccc} \includegraphics{triangles-100} & \raisebox{-1mm}{\includegraphics{triangles-101}} & \includegraphics{triangles-102} \\ \raisebox{-1mm}{\includegraphics{triangles-103}} & \includegraphics{triangles-104} & \includegraphics{triangles-105} \end{tabular} \caption{Quadrature points for different field points: origin is shown as a cross, quadrature points as dots. 
Top row: origin inside element, on vertex, and outside element. Bottom row: origin on straight, convex and concave edge.} \label{fig:points} \end{figure} To demonstrate the nature of the quadrature point distributions generated by the algorithm, an element with one straight, one convex and one concave edge has been used, with the origin placed inside and outside the element, on a corner, and on each of the edges in turn. In order to show the point distribution more clearly, fixed length quadrature rules have been used, with sixteen points in both angle and radius. This gives a higher than normal density in small intervals of integration, and a lower density in larger regions, but is helpful for visualization. Figure~\ref{fig:points} shows the resulting quadratures, with the origin indicated by a cross. Each of the cases considered gives rise to a qualitatively different point distribution. With the origin located inside the element, the region of integration is divided by rays to each of the corners (compare Figure~8 of reference~\cite{willis-peraire-white06}). When the origin is placed on a corner of the element, there are two clearly demarcated domains of integration, separated by the tangent to the curved edge at that corner. Similarly, when the origin is moved outside the element, there are two domains, separated in this case by a ray joining the origin to the most distant corner. When the origin lies on an edge, the situation is slightly more complicated. When it is on the straight edge, the element is divided into two regions, separated by the ray to the furthest corner. When it lies on the convex edge, there is a thin region of integration bounded by the edge and the rays to the vertices on that edge. Conversely, on the concave edge, the narrow region of integration is bounded by the edge and the tangent to the edge at the origin. 
\subsection{Numerical accuracy and convergence} \label{sec:accuracy} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{catseye} \caption{The cat's eye geometry} \label{fig:catseye} \end{figure} The accuracy and convergence of the integration method are tested by implementing it in a BEM code~\cite{bem3d} and solving the Laplace equation on a cat's eye geometry, shown in Figure~\ref{fig:catseye}. This is a unit sphere with one octant removed, recommended as a more stringent test of BEM codes than a simple sphere~\cite{marburg-amini05}, since it contains discontinuities in the geometry, as would be found, for example, in aerodynamic calculations. The surface was meshed using GMSH~\cite{geuzaine-remacle09}, changing the discretization length to produce panels of varying sizes. A second mesh was produced for comparison by splitting each second order element into six triangles, giving a mesh of planar elements based on the same nodes. The polar quadrature rule was selected for $\sigma<1$, with $N_{\theta}=N_{r}=8$, and a twenty-five point symmetric rule~\cite{wandzura-xiao03} for $\sigma\geq1$. \begin{figure} \centering \includegraphics{ijnme13-figures-4} \\ \includegraphics{ijnme13-figures-6} \caption{Error $\epsilon$ in solution of Laplace equation on cat's eye against panel number $P$ (upper) and node number $N$ (lower). Dots: linear elements; squares: quadratic elements. Upper plot: solid line $6.6P^{-1.1}$; dashed line: $3.9P^{-1.6}$. Lower plot: solid line $4.8N^{-1.2}$; dashed line: $19.3N^{-1.6}$.} \label{fig:laplace:error} \end{figure} A Neumann boundary condition was generated using a point potential source positioned inside the surface at $(-0.2,-0.2,-0.2)$. Solving Equation~\ref{equ:potential} for the surface potential $\phi$ gave a result which could be compared to the result found analytically for the point source. The error estimate is the r.m.s.\ difference between the computed and the analytically specified data. 
The error is plotted in Figure~\ref{fig:laplace:error} as a function of element number, and, for convenience, of node number. The error is fitted using a power law, to estimate the convergence rate, and the superior numerical performance of the second order method is clear. Plotting against panel number shows a similar convergence rate as in other work~\cite[Figure~12]{willis-peraire-white06}, $P^{-1}$ for linear panels, and $P^{-3/2}$ for quadratic. Plotting against node number, which can be taken as a proxy for the memory requirement for the matrix used to solve the problem, and which shows error for different element types on the same point distribution, shows similar trends. \section{CONCLUSIONS} \label{sec:conclusions} \vspace{-2pt} A quadrature technique for second order triangular elements has been presented and tested on a realistic geometry. It has been found that the method is accurate and convergent, giving an error which scales as $P^{-1.6}$ in the example tested. It is concluded that the technique is readily implemented and can be used as a direct replacement for existing quadratures.
Q: 4 bit Year and decade counter

So I came across this problem and I can't figure out an elegant solution; maybe someone can point me in the right direction. Say I have a 4 bit counter (so it can count from 0-15 in decimal) for the number of years elapsed. On year 16, the year counter will overflow and reset back to 0, on year 17 it will be 1, and so on. I also have a decade counter that will have the value 1 for years 10 through 19, value 2 for years 20 through 29, and so on. How can I get the total number of years elapsed if I have the yearCounter and decadeCounter information? E.g. decade = 1 and counter = 2; year = 18. Is there a name for this kind of problem? And is there a function that can calculate the number of years from these counters, up until the decade counter overflows?

A: The name of the problem is modulo arithmetic; provided that year is not negative we have

    decade = year / 10
    counter = year % 16

The reversed formula (which you are looking for) is

    year = decade * 10 + (16 - (decade * 10) % 16 + counter) % 16

where % is remainder and / is integer division. For instance, if year = 2018 we have

    decade = 2018 / 10 = 201
    counter = 2018 % 16 = 2

and the reversed formula gives

    year = 201 * 10 + (16 - (201 * 10) % 16 + 2) % 16
         = 2010 + (16 - 2010 % 16 + 2) % 16
         = 2010 + (16 - 10 + 2) % 16
         = 2010 + 8 % 16
         = 2010 + 8
         = 2018
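The reversed formula can also be verified by brute force. A quick Python sketch (ours, not from the original answer), using // and % for integer division and remainder:

```python
def recover_year(decade, counter):
    """Invert the pair (year // 10, year % 16) back to the year."""
    return decade * 10 + (16 - (decade * 10) % 16 + counter) % 16


# Exhaustive check over non-negative years: the pair of counters
# determines the year uniquely, because the within-decade offset
# year - 10 * decade lies in 0..9, hence is fixed modulo 16.
for year in range(2000):
    decade, counter = year // 10, year % 16
    assert recover_year(decade, counter) == year
```

The outer % 16 keeps the correction term in the range 0..15, which is what makes the formula safe for any decade value.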
Q: change grid column order in mobile mode

I have one requirement where I have to show four grid columns in desktop mode, like this, which is correct. For that I used the below HTML code:

    <div class="row">
      <div class="col-xs-12 col-md-3">
        <div class="alert alert-info">1</div>
      </div>
      <div class="col-xs-12 col-md-3">
        <div class="alert alert-warning">2</div>
      </div>
      <div class="col-xs-12 col-md-3">
        <div class="alert alert-danger">3</div>
      </div>
      <div class="col-xs-12 col-md-3">
        <div class="alert alert-info">4</div>
      </div>
    </div>

But when I view the same on mobile, it shows like below:

    1
    2
    3
    4

But my requirement is that it should show like below in mobile (two per row):

    1 2
    3 4

How could I do the same using the above HTML code?

A: Use col-xs-6 instead of col-xs-12. See the Bootstrap Grid System.

    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
    <div class="row">
      <div class="col-xs-6 col-md-3">
        <div class="alert alert-info">1</div>
      </div>
      <div class="col-xs-6 col-md-3">
        <div class="alert alert-warning">2</div>
      </div>
      <div class="col-xs-6 col-md-3">
        <div class="alert alert-danger">3</div>
      </div>
      <div class="col-xs-6 col-md-3">
        <div class="alert alert-info">4</div>
      </div>
    </div>
\section{Introduction and main results} \label{sec:introduction} Stratified groups appear in quantum physics and many parts of mathematics, including several complex variables, Fourier analysis, geometry, and topology \cite{folland1982hardy,varopoulos2008analysis}. The geometric structure of stratified groups is rich enough that they inherit many analytic properties from Euclidean spaces \cite{stein1993harmonic,grafakos2009modern}. At the same time, the differences between the geometric structures of Euclidean spaces and stratified groups make the study of function spaces on the latter more complicated. Many harmonic analysis problems on stratified Lie groups therefore deserve further investigation, since most results of the theory of Fourier transforms and distributions in Euclidean spaces cannot yet be duplicated. Let $T$ be the classical singular integral operator. The commutator $[b, T]$ generated by $T$ and a suitable function $b$ is defined by \begin{align} \label{equ:commutator-1} [b,T]f & = bT(f)-T(bf). \end{align} It is known that commutators are intimately related to the regularity properties of the solutions of certain partial differential equations (PDE), see \cite{difazio1993interior,bramanti1995commutators,rios2003lp}. The first result for the commutator $[b,T]$ was established by Coifman et al.\cite{coifman1976factorization}, who proved that $b\in \bmo(\mathbb{R}^{n})$ (the space of functions of bounded mean oscillation) if and only if the commutator \labelcref{equ:commutator-1} is bounded on $L^{p}(\mathbb{R}^{n})$ for $1<p<\infty$.
In 1978, Janson \cite{janson1978mean} generalized the results in \cite{coifman1976factorization} to functions belonging to a Lipschitz functional space and gave some characterizations of the Lipschitz space $\dot{\Lambda}_{\beta}(\mathbb{R}^{n})$ via commutator \labelcref{equ:commutator-1}, and the author proved that $b\in \dot{\Lambda}_{\beta}(\mathbb{R}^{n})$ if and only if $[b,T]$ is bounded from $L^{p}(\mathbb{R}^{n})$ to $L^{q}(\mathbb{R}^{n})$ where $1<p<n/\beta$ and $1/p-1/q=\beta/n$ (see also \cite{paluszynski1995characterization}). In addition, using real interpolation techniques, Milman and Schonbek\cite{milman1990second} established a commutator result that applies to the Hardy-Littlewood maximal function as well as to a large class of nonlinear operators. In 2000, Bastero et al.\cite{bastero2000commutators} proved the necessary and sufficient condition for the boundedness of the nonlinear commutators $[b,M]$ and $[b,M^{\sharp}]$ on $L^{p}$ spaces. In 2009, Zhang and Wu\cite{zhang2009commutators} studied the same problem for $[b,M_{\alpha}]$. In 2017, Zhang\cite{zhang2017characterization} considered some new characterizations of the Lipschitz spaces via the boundedness of maximal commutator $M_{b}$ and the (nonlinear) commutator $[b, M]$ in Lebesgue spaces and Morrey spaces on Euclidean spaces. In 2018, Zhang et al.\cite{zhang2018commutators} gave necessary and sufficient conditions for the boundedness of the nonlinear commutators $[b,M_{\alpha}]$ and $[b,M^{\sharp}]$ on Orlicz spaces when the symbol $b$ belongs to Lipschitz spaces, and obtained some new characterizations of non-negative Lipschitz functions. 
And Guliyev\cite{guliyev2022some} recently gave necessary and sufficient conditions for the boundedness of the fractional maximal commutators in the Orlicz spaces $L^{\Phi} (\mathbb{G})$ on any stratified Lie group $\mathbb{G}$ when $b$ belongs to $\bmo(\mathbb{G})$ spaces, and obtained some new characterizations for certain subclasses of $\bmo(\mathbb{G})$ spaces. Inspired by the above literature, the purpose of this paper is to study the boundedness of the fractional maximal commutator $M_{\alpha,b}$ and the nonlinear commutator $[b, M_{\alpha}]$ on the Lebesgue spaces over some stratified Lie group $\mathbb{G}$ when $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$, by which some new characterizations of the Lipschitz spaces are given. Let $0\le \alpha <Q$ and $f: \mathbb{G} \to \mathbb{R}$ be a locally integrable function, the fractional maximal function is defined by \begin{align*} M_{\alpha}(f)(x) &= \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |f(y)| \mathrm{d} y, \end{align*} where the supremum is taken over all balls $B\subset \mathbb{G}$ containing $x$ with radius $r>0$ , and $|B|$ is the Haar measure of the $ \mathbb{G}$-ball $B$. And the fractional maximal commutator generated by the operator $M_{\alpha}$ and a locally integrable function $b$ is defined by \begin{align*} M_{\alpha,b} (f)(x) &= \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)-b(y)| |f(y)| \mathrm{d} y. \end{align*} If $\alpha=0$, then $M_{0,b} \equiv M_{b} $ is the sublinear maximal commutator operator. On the other hand, similar to \labelcref{equ:commutator-1}, the commutator generated by the fractional maximal operator $M_{\alpha}$ and a suitable function $b$ is defined by \begin{align*} [b,M_{\alpha}] (f)(x) &= b(x) M_{\alpha} (f)(x) - M_{\alpha} (bf)(x). \end{align*} Note that operators $M_{\alpha,b}$ and $[b, M_{\alpha}]$ essentially differ from each other. 
For example, $M_{\alpha,b}$ is positive and sublinear, but $[b, M_{\alpha}]$ is neither positive nor sublinear. The first main result of this paper is to study the boundedness of $M_{\alpha,b}$ when the symbol $b$ belongs to a Lipschitz space. Some characterizations of the Lipschitz space via such commutators are given. \begin{theorem} \label{thm:lipschitz-frac-main-1} Let $b$ be a locally integrable function and let $0 <\beta <1$, $0 <\alpha <Q$ and $0 <\alpha+\beta <Q$. Then the following statements are equivalent: \begin{enumerate} \item $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$. \label{enumerate:thm-lip-frac-main-1-1} \item $ M_{\alpha,b} $ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ for all $p, q$ with $1<p<\frac{Q}{\alpha+\beta} $ and $\frac{1}{q} = \frac{1}{p} -\frac{\alpha+\beta}{Q}$. \label{enumerate:thm-lip-frac-main-1-2} \item $ M_{\alpha,b} $ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ for some $p, q$ with $1<p<\frac{Q}{\alpha+\beta} $ and $\frac{1}{q} = \frac{1}{p} -\frac{\alpha+\beta}{Q}$. \label{enumerate:thm-lip-frac-main-1-3} \item There exists $q\in [1,\infty)$ such that \begin{align} \label{inequ:lip-frac-main-1-4} \sup_{B} \dfrac{1}{|B|^{\beta/Q}} \Big( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x)-b_{B}|^{q} \mathrm{d} x \Big)^{1/q} &< \infty. \end{align} \label{enumerate:thm-lip-frac-main-1-4} \item For all $q\in [1,\infty)$ we have \labelcref{inequ:lip-frac-main-1-4}. \label{enumerate:thm-lip-frac-main-1-5} \end{enumerate} \end{theorem} \begin{remark} For the case $\alpha= 0$ and $\mathbb{G} =\mathbb{R}^{n}$, similar results were given in \cite{zhang2017characterization} for Lebesgue spaces with constant exponents, and in \cite{zhang2019some,zhang2019characterization} for the variable exponent case. \end{remark} The second main result of this paper aims to study the mapping properties of the (nonlinear) commutator $[b, M_{\alpha}]$ when $b$ belongs to some Lipschitz space.
\begin{theorem} \label{thm:lipschitz-nonlinear-frac-main-1} Let $0 <\beta <1$, $0 <\alpha <Q$, $0 <\alpha+\beta <Q$ and let $b$ be a locally integrable function. Then the following statements are equivalent: \begin{enumerate} \item $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ and $b\ge 0$. \label{enumerate:thm-lip-nonlinear-frac-main-1-1} \item $[b,M_{\alpha} ]$ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ for all $p$ and $q$ satisfying $1 <p < \frac{Q}{\alpha+\beta}$ and $\frac{1}{q} =\frac{1}{p} - \frac{\alpha+\beta}{Q}$. \label{enumerate:thm-lip-nonlinear-frac-main-1-2} \item $[b,M_{\alpha} ]$ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ for some $p$ and $q$ such that $1 <p < \frac{Q}{\alpha+\beta}$ and $\frac{1}{q} =\frac{1}{p} - \frac{\alpha+\beta}{Q}$. \label{enumerate:thm-lip-nonlinear-frac-main-1-3} \item There exists $s\in [1,\infty)$ such that \begin{align} \label{inequ:lip-nonlinear-frac-main-1-4} \sup_{B} \dfrac{1}{|B|^{\beta/Q}} \left( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) |^{s} \mathrm{d} x \right)^{1/s} < \infty. \end{align} \label{enumerate:thm-lip-nonlinear-frac-main-1-4} \item For all $s \in [1,\infty)$ we have \labelcref{inequ:lip-nonlinear-frac-main-1-4}. \label{enumerate:thm-lip-nonlinear-frac-main-1-5} \end{enumerate} \end{theorem} \begin{remark} \label{rem:lipschitz-nonlinear-frac-main-1} Note that $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ with $b\ge 0$ if and only if \begin{align} \label{inequ:lip-nonlinear-frac-main-1-4b} \sup_{B} \dfrac{1}{|B|^{\beta/Q}} \left( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x) - M_{B}(b)(x) |^{s} \mathrm{d} x \right)^{1/s} < \infty. \end{align} Compared with \labelcref{inequ:lip-nonlinear-frac-main-1-4b}, \labelcref{inequ:lip-nonlinear-frac-main-1-4} gives a new characterization of nonnegative Lipschitz functions. \end{remark} This paper is organized as follows. In the next section, we recall some basic definitions and known results.
In \cref{sec:proof-mab}, we will prove \cref{thm:lipschitz-frac-main-1,thm:lipschitz-nonlinear-frac-main-1}. Throughout this paper, the letter $C$ always stands for a constant independent of the main parameters involved, whose value may differ from line to line. In addition, we fix some notation. Here and hereafter $L^{p} ~(1\le p\le \infty)$ will always denote the standard $L^{p}$-space with respect to the Haar measure $\mathrm{d} x$, with the $L^{p}$-norm $\|\cdot\|_{p}$, and $WL^{p}$ will denote the weak $L^{p}$-space. Denote by $\chi_{E}$ the characteristic function of a measurable set $E$ of $\mathbb{G}$. \section{Preliminaries and lemmas} \label{sec:preliminary} \subsection{Lie group $\mathbb{G}$} To prove the main results of this paper, we first recall some necessary notions and known facts concerning stratified Lie groups (or so-called Carnot groups). We refer the reader to \cite{folland1982hardy,bonfiglioli2007stratified,stein1993harmonic}. \begin{definition}\label{def:stratified-Lie-algebra-yessir2019} We say that a Lie algebra $\mathcal{G}$ is stratified if there is a direct sum vector space decomposition \begin{align}\label{equ:lie-algebra-decomposition} \mathcal{G} =\oplus_{j=1}^{m} V_{j} = V_{1} \oplus \cdots \oplus V_{m} \end{align} such that \begin{align*} [V_{1},V_{j}] = \begin{cases} V_{j+1} & 1\le j \le m-1, \\ 0 & j\ge m. \end{cases} \end{align*} In this case $\mathcal{G}$ is nilpotent of step $m$; that is, $m$ is the smallest integer for which all Lie brackets (or iterated commutators) of order $m+1$ are zero. \end{definition} In particular, $V_{1}$ generates the whole Lie algebra $\mathcal{G}$ by taking Lie brackets.
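A standard example (recorded here only for orientation; it is not needed in the sequel) is the Heisenberg group $\mathbb{H}^{1}$: its Lie algebra has a basis $\{X,Y,T\}$ whose only nontrivial bracket is $[X,Y]=T$, so that
\begin{align*}
V_{1}=\operatorname{span}\{X,Y\}, \qquad V_{2}=\operatorname{span}\{T\},
\end{align*}
and the algebra is stratified of step $m=2$. The abelian case $\mathbb{G}=\mathbb{R}^{n}$ corresponds to $m=1$ with $V_{1}=\mathbb{R}^{n}$.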
\begin{remark}\cite{zhu2003herz} \label{rem:lie-algebra-decom-zhu2003herz} Let $\mathcal{G} = \mathcal{G}_{1}\supset \mathcal{G}_{2} \supset \cdots \supset \mathcal{G}_{m+1} =\{0\}$ denote the lower central series of $\mathcal{G}$, and let $\{X_{1},\dots,X_{N}\}$ be a basis for $V_{1}$ of $\mathcal{G}$. \begin{enumerate}[leftmargin=2em,label=(\roman*),itemindent=1.0em] \item The direct sum decomposition \labelcref{equ:lie-algebra-decomposition} can be constructed by identifying each $\mathcal{G}_{j}$ as a vector subspace of $\mathcal{G}$, setting $ V_{m}=\mathcal{G}_{m}$, and choosing subspaces $V_{j}$ such that $\mathcal{G}_{j}= V_{j}\oplus \mathcal{G}_{j+1}$ for $j=1,\ldots,m-1$. \item The homogeneous dimension of $\mathbb{G}$ (its dimension at infinity) is the integer $Q$ given by \begin{align*} Q = \sum_{j=1}^{m} j \dim(V_{j}) = \sum_{j=1}^{m} \dim(\mathcal{G}_{j}). \end{align*} \end{enumerate} \end{remark} \begin{definition}\label{def:stratified-Lie-group} A Lie group $\mathbb{G}$ is said to be stratified when it is a connected, simply connected Lie group whose Lie algebra $\mathcal{G}$ is stratified. \end{definition} If $\mathbb{G}$ is stratified, then its Lie algebra $\mathcal{G}$ admits a canonical family of dilations $\{\delta_{r}\}$, namely, for $r>0$, $X_{k}\in V_{k}~(k=1,\ldots,m)$, \begin{align*} \delta_{r} \Big( \sum_{k=1}^{m} X_{k} \Big) = \sum_{k=1}^{m} r^{k} X_{k}, \end{align*} which are Lie algebra automorphisms. By the Baker-Campbell-Hausdorff formula, for sufficiently small elements $X$ and $Y$ of $\mathcal{G}$ one has \begin{align*} \exp X \exp Y= \exp H(X,Y), \qquad H(X,Y)= X+Y +\frac{1}{2}[X,Y]+\cdots, \end{align*} where $\exp : \mathcal{G} \to \mathbb{G}$ is the exponential map, $H(X, Y )$ is an infinite linear combination of $X$ and $Y$ and their iterated Lie brackets, and the dots denote terms of order higher than two. The following properties can be found in \cite{ruzhansky2019hardy} (see Proposition 1.1.1; see also Proposition 2.1 in \cite{yessirkegenov2019function} or Proposition 1.2 in \cite{folland1982hardy}).
\begin{proposition}\label{pro:2.1-yessirkegenov2019} Let $\mathcal{G}$ be a nilpotent Lie algebra, and let $\mathbb{G}$ be the corresponding connected and simply-connected nilpotent Lie group. Then we have \begin{enumerate}[leftmargin=2em,label=(\arabic*),itemindent=1.0em] \item The exponential map $\exp: \mathcal{G} \to \mathbb{G}$ is a diffeomorphism. Furthermore, the group law $(x,y) \mapsto xy$ is a polynomial map if $\mathbb{G}$ is identified with $\mathcal{G}$ via $\exp$. \item If $\lambda$ is a Lebesgue measure on $\mathcal{G}$, then its push-forward under $\exp$ is a bi-invariant Haar measure on $\mathbb{G}$ (that is, a bi-invariant Haar measure $\mathrm{d} x$ on $\mathbb{G}$ is just the lift of Lebesgue measure on $\mathcal{G}$ via $\exp$). \end{enumerate} \end{proposition} Hereafter, $y^{-1}$ denotes the inverse of $y\in \mathbb{G}$, $y^{-1}x$ stands for the group product of $y^{-1}$ and $x$, and the group identity element of $\mathbb{G}$, referred to as the origin, is denoted by $e$. A homogeneous norm on $\mathbb{G}$ is a continuous function $x\mapsto \rho(x)$ from $\mathbb{G}$ to $[0,\infty)$, which is $C^{\infty}$ on $\mathbb{G}\setminus\{e\}$ and satisfies \begin{align*} \begin{cases} \rho(x^{-1}) = \rho(x), \\ \rho(\delta_{t}x) = t\rho(x) \ \ \text{for all}~ x \in \mathbb{G} ~\text{and}~ t > 0, \\ \rho(e) = 0. \end{cases} \end{align*} Moreover, there exists a constant $c_{0} \ge 1$ such that $\rho(xy) \le c_{0}(\rho(x) + \rho(y))$ for all $x,y \in \mathbb{G}$. With the norm above, we define the $\mathbb{G}$-ball centered at $x$ with radius $r$ by $B(x, r) = \{y \in \mathbb{G} : \rho(y^{-1}x) < r\}$, and for $\lambda>0$ we denote by $\lambda B$ the ball $B(x,\lambda r)$. Let $B_{r} = B(e, r) = \{y \in \mathbb{G} : \rho(y) < r\}$ be the open ball centered at $e$ with radius $r$, which is the image of $B(e, 1)$ under $\delta_{r}$.
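For orientation, we note two standard instances of homogeneous norms (not needed in the sequel). On $\mathbb{G}=\mathbb{R}^{n}$ with the usual dilations, one may take the Euclidean norm $\rho(x)=|x|$, in which case $Q=n$ and the $\mathbb{G}$-balls are the Euclidean balls. On the Heisenberg group $\mathbb{H}^{1}$, with dilations $\delta_{r}(z,t)=(rz,r^{2}t)$, the Kor\'anyi gauge
\begin{align*}
\rho((z,t)) = \big( |z|^{4} + t^{2} \big)^{1/4}
\end{align*}
is a homogeneous norm, and there $Q=4$.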
We denote by $\sideset{^{\complement}}{} {\mathop {B(x,r)}} = \mathbb{G}\setminus B(x,r)= \{y \in \mathbb{G} : \rho(y^{-1}x) \ge r\}$ the complement of $B(x, r)$. Let $|B(x,r)|$ be the Haar measure of the ball $B(x,r)\subset \mathbb{G}$; then there exists $c_{1} =c_{1} (\mathbb{G})>0$ such that \begin{align*} |B(x,r)| = c_{1} r^{Q}, \ \ \ \ x\in \mathbb{G}, r>0. \end{align*} The most basic partial differential operator on a stratified Lie group is the sub-Laplacian associated with the basis $\{X_{1},\dots,X_{N}\}$ of $V_{1}$, namely the second-order partial differential operator on $\mathbb{G}$ given by \begin{align*} \mathfrak{L} = \sum_{i=1}^{N} X_{i}^{2}. \end{align*} The following lemma is the H\"{o}lder inequality on Lebesgue spaces over the Lie group $\mathbb{G}$; it can also be found in \cite{rao1991theory} or \cite{guliyev2022some} by taking the Young function $\Phi(t)=t^{p}$ and its complementary function $\Psi(t)=t^{q}$ with $\frac{1}{p}+\frac{1}{q}=1$. \begin{lemma}[H\"{o}lder's inequality on $\mathbb{G}$]\label{lem:holder-inequality-Lie-group} Let $1\le p,q \le\infty$ with $\frac{1}{p}+\frac{1}{q}=1$, let $\Omega\subset \mathbb{G}$ be a measurable set, and let $f\in L^{p}(\Omega)$ and $g\in L^{q}(\Omega)$ be measurable functions. Then there exists a positive constant $C$ such that \begin{align*} \displaystyle\int_{\Omega} |f(x)g(x)| \mathrm{d}x \le C \|f\|_{L^{p}(\Omega)} \|g\|_{L^{q}(\Omega)}. \end{align*} \end{lemma} By elementary calculations we have the following property. It can also be found in \cite{guliyev2022some} by taking the Young function $\Phi(t)=t^{p}$. \begin{lemma}[Norms of characteristic functions]\label{lem:norm-characteristic-functions-Lie-group} Let $0<p<\infty$ and let $\Omega\subset \mathbb{G}$ be a measurable set with finite Haar measure. Then \begin{align*} \|\scalebox{1.2}{$\chi$}_{\Omega}\|_{L^{p}(\mathbb{G})} = \|\scalebox{1.2}{$\chi$}_{\Omega}\|_{WL^{p}(\mathbb{G})} = |\Omega|^{1/p}.
\end{align*} \end{lemma} \subsection{Lipschitz spaces on $\mathbb{G}$} Next we give the definition of the Lipschitz spaces on $\mathbb{G}$ and state some basic properties and useful lemmas. \begin{definition}[Lipschitz-type spaces on $\mathbb{G}$] \label{def.lip-space} \ \begin{enumerate}[ label=(\arabic*),itemindent=1em] \item Let $0<\beta <1$. We say that a function $b$ belongs to the Lipschitz space $\dot{\Lambda}_{\beta}(\mathbb{G}) $ if there exists a constant $C>0$ such that for all $x,y\in \mathbb{G}$, \begin{align} \label{inequ:lip-def-1} |b(x)-b(y)| &\le C(\rho(y^{-1}x))^{\beta}, \end{align} where $\rho$ is the homogeneous norm. The smallest such constant $C$ is called the $\dot{\Lambda}_{\beta}$ norm of $b$ and is denoted by $\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})}$. \label{enumerate:def-lip-1} \item (see \cite{macias1979lipschitz}) Let $0<\beta <1$ and $1\le p<\infty$. The space $\lip_{\beta,p}(\mathbb{G}) $ is defined to be the set of all locally integrable functions $b$ for which there exists a positive constant $C$ such that \begin{align*} \sup_{B} \dfrac{1}{ |B|^{\beta/Q}}\Big( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x)- b_{B}|^{p}\mathrm{d} x \Big)^{1/p} \le C, \end{align*} where the supremum is taken over all balls $B\subset \mathbb{G}$ and $b_{B}=\frac{1}{|B|} \int_{B} b(y) \mathrm{d} y$. The least constant $C$ satisfying the condition above shall be denoted by $\|b\|_{\lip_{\beta,p}(\mathbb{G})}$. \label{enumerate:def-lip-2} \item (see \cite{macias1979lipschitz}) Let $0<\beta <1$. When $ p=\infty$, we shall say that a locally integrable function $b$ belongs to $\lip_{\beta,\infty}(\mathbb{G}) $ if there exists a constant $C$ such that \begin{align*} \esssup_{x\in B} \dfrac{|b(x)- b_{B}|}{ |B|^{\beta/Q}} \le C \end{align*} holds for every ball $B\subset \mathbb{G}$, with $b_{B}=\frac{1}{|B|} \int_{B} b(y) \mathrm{d} y$. And $\|b\|_{\lip_{\beta,\infty}(\mathbb{G})}$ stands for the least constant $C$ satisfying the condition above.
\label{enumerate:def-lip-3} \end{enumerate} \end{definition} \begin{remark} \label{rem.Lipschitz-def} \begin{enumerate}[label=(\roman*)] \item Similar to the definition of the Lipschitz space $\dot{\Lambda}_{\beta}(\mathbb{G}) $ in \labelcref{enumerate:def-lip-1}, we also have the following equivalent form (see \cite{krantz1982lipschitz,chen2010lipschitz,fan1995characterization} et al.): \begin{align*} \|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})}&= \sup_{x,y\in \mathbb{G}\atop y\neq e} \dfrac{|b(xy)- b(x)|}{(\rho(y))^{\beta}} = \sup_{x,y\in \mathbb{G} \atop x\neq y} \dfrac{|b(x)-b(y)|}{(\rho(y^{-1}x))^{\beta}}. \end{align*} Moreover, $\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} =0$ if and only if $b$ is constant. \item In \labelcref{enumerate:def-lip-2}, when $p=1$, we have \begin{align*} \|b\|_{\lip_{\beta,1}(\mathbb{G})} =\sup_{B} \dfrac{1}{ |B|^{\beta/Q}}\Big( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x)- b_{B}| \mathrm{d} x \Big) :=\|b\|_{\lip_{\beta}(\mathbb{G})}. \end{align*} \item There are two basically different approaches to Lipschitz classes on the $n$-dimensional Euclidean space: Lipschitz classes can be defined via Poisson (or Weierstrass) integrals of $L^{p}$-functions, or, equivalently, by means of higher order difference operators (see \cite{meda1988lipschitz}). \end{enumerate} \end{remark} \begin{lemma} (see \cite{macias1979lipschitz,chen2010lipschitz,li2003lipschitz}) \label{lem:2.2-li2003lipschitz} Let $0<\beta<1$ and let $b$ be integrable on bounded subsets of $\mathbb{G}$. \begin{enumerate}[leftmargin=2em,label=(\arabic*),itemindent=1.0em] \item When $1\le p<\infty$, we have \begin{align*} \|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} &= \|b\|_{\lip_{\beta}(\mathbb{G})} \approx \|b\|_{\lip_{\beta,p}(\mathbb{G})}. \end{align*} \item Let $B_{1}\subset B_{2}\subset \mathbb{G}$ be balls and let $b\in \lip_{\beta,p}(\mathbb{G})$ with $p\in [1,\infty]$.
Then there exists a constant $C$, depending on $B_{1}$ and $B_{2}$ only, such that \begin{align*} |b_{B_{1}}- b_{B_{2}} | &\le C \|b\|_{\lip_{\beta,p}(\mathbb{G})} |B_{2}|^{\beta/Q}. \end{align*} \item When $1\le p<\infty$, there exists a constant $C$, depending only on $\beta$ and $p$, such that \begin{align*} | b(x)- b(y) | &\le C \|b\|_{\lip_{\beta,p}(\mathbb{G})} |B|^{\beta/Q} \end{align*} holds for any ball $B$ containing $x$ and $y$. \end{enumerate} \end{lemma} \subsection{Maximal function} Let $f: \mathbb{G} \to \mathbb{R}$ be a locally integrable function. The Hardy–Littlewood maximal function $M$ is given by \begin{align*} M (f)(x) &= \sup_{B\ni x} |B|^{-1} \displaystyle\int_{B} |f(y)| \mathrm{d} y, \end{align*} where the supremum is taken over all balls $B\subset \mathbb{G}$ containing $x$. The fractional maximal function $ M_{\alpha}(f)$ coincides for $\alpha = 0$ with the Hardy–Littlewood maximal function $M(f)(x)\equiv M_{0}(f)(x)$. For a function $b$ defined on $\mathbb{G}$, we denote \begin{align*} b^{-}(x) :=- \min\{b(x), 0\} = \begin{cases} 0, & \text{if}\ b(x) \ge 0, \\ |b(x)|, & \text{if}\ b(x) < 0, \end{cases} \end{align*} and $b^{+}(x) =|b(x)|-b^{-}(x)$. Obviously, $b(x)=b^{+}(x)-b^{-}(x)$. Now, we give the following pointwise estimates for $[b,M_{\alpha}] $ on $\mathbb{G}$. \begin{lemma}[pointwise estimates for {$[b,M_{\alpha}] $}]\label{lem:frac-maximal-pointwise} Let $0\le\alpha<Q$ and let $f: \mathbb{G} \to \mathbb{R}$ be a locally integrable function. \begin{enumerate}[label=(\arabic*)] \item If $b$ is any non-negative locally integrable function on $\mathbb{G}$, then \begin{align*} |[b,M_{\alpha}] (f)(x)| &\le M_{\alpha,b} (f)(x). \end{align*} \label{enumerate:lem:frac-maximal-pointwise-1} \item If $b$ is any locally integrable function on $\mathbb{G}$, then \begin{align*} |[b,M_{\alpha}] (f)(x)| &\le M_{\alpha,b} (f)(x)+ 2b^{-}(x)M_{\alpha} (f)(x).
\end{align*} \label{enumerate:lem:frac-maximal-pointwise-2} \item Assume that $0<\beta <1$ and $0<\alpha+\beta<Q$. If $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ and $b\ge 0$, then for arbitrary $x\in \mathbb{G} $ such that $M_{\alpha} (f)(x) <\infty$, we have \begin{align*} |[b,M_{\alpha}] (f)(x)| &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\alpha+\beta} (f)(x). \end{align*} \label{enumerate:lem:frac-maximal-pointwise-3} \end{enumerate} \end{lemma} \begin{proof} \labelcref{enumerate:lem:frac-maximal-pointwise-1}\ For any fixed $x \in \mathbb{G}$ such that $M_{\alpha}(f)(x) <\infty$, since $b \geq 0$ we have \begin{align*} \begin{aligned} \big|[b,M_{\alpha}] (f)(x) \big| &= \big|b(x)M_{\alpha}(f)(x)-M_{\alpha}(bf)(x) \big| \\ &= \bigg| \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} b(x)|f(y)| \mathrm{d} y \\ &\;\qquad -\sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} b(y)|f(y)| \mathrm{d} y \bigg| \\ &\le \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)-b(y)| |f(y)| \mathrm{d} y \\ &= M_{\alpha,b} (f)(x).
\end{aligned} \end{align*} \labelcref{enumerate:lem:frac-maximal-pointwise-2}\ Similarly to the discussion in \cite{zhang2014commutators}, for any fixed $x \in \mathbb{G}$ such that $M_{\alpha}(f)(x) <\infty$ and any $b \in L_{{\mathrm{loc}}}^{1}(\mathbb{G})$, noting that $b(x)=|b(x)|-2b^{-}(x)$, we have \begin{align*} \begin{aligned} \big|[b,M_{\alpha}] (f)(x) \big| &= \big|b(x)M_{\alpha}(f)(x)-M_{\alpha}(bf)(x) \big| \\ &= \bigg| \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)f(y)| \mathrm{d} y \\ &\;\qquad -\sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B}| b(y)f(y)| \mathrm{d} y \\ &\;\qquad -2\sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B}b^{-}(x)|f(y)| \mathrm{d} y \bigg| \\ &\le \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)-b(y)| |f(y)| \mathrm{d} y \\ &\;\qquad +2\sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B}b^{-}(x)|f(y)| \mathrm{d} y \\ &= M_{\alpha,b} (f)(x)+ 2b^{-}(x)M_{\alpha} (f)(x). \end{aligned} \end{align*} \labelcref{enumerate:lem:frac-maximal-pointwise-3}\ Similarly to the proof of Lemma 2.11 in \cite{zhang2019some}, for any fixed $x \in \mathbb{G}$ such that $M_{\alpha}(f)(x) <\infty$, if $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ and $b\ge 0$, then we have \begin{align*} \big|[b,M_{\alpha}] (f)(x) \big| &= \big|b(x)M_{\alpha}(f)(x)-M_{\alpha}(bf)(x) \big| \\ &= \bigg| \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} b(x)|f(y)| \mathrm{d} y \\ &\;\qquad -\sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} b(y)|f(y)| \mathrm{d} y \bigg| \\ &\le \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)-b(y)| |f(y)| \mathrm{d} y \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} \sup_{B\ni x} \dfrac{1}{|B|^{1-(\alpha+\beta)/Q}} \displaystyle\int_{B} |f(y)| \mathrm{d} y \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\alpha+\beta} (f)(x).
\end{align*} \end{proof} In the case $\alpha=0$, arguing as in \cref{lem:frac-maximal-pointwise}, we can also get the following pointwise estimates for $[b,M] $, whose proof we omit. \begin{lemma}[pointwise estimates for {$[b,M] $}]\label{lem:maximal-pointwise} Let $f: \mathbb{G} \to \mathbb{R}$ be a locally integrable function. \begin{enumerate}[label=(\arabic*)] \item If $b$ is any non-negative locally integrable function on $\mathbb{G}$, then \begin{align*} |[b,M] (f)(x)| &\le M_{b} (f)(x). \end{align*} \label{enumerate:lem:maximal-pointwise-1} \item If $b$ is any locally integrable function on $\mathbb{G}$, then \begin{align*} |[b,M] (f)(x)| &\le M_{b} (f)(x)+ 2b^{-}(x)M (f)(x). \end{align*} \label{enumerate:lem:maximal-pointwise-2} \item Assume that $0<\beta <1$. If $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ and $b\ge 0$, then for arbitrary $x\in \mathbb{G} $ such that $M (f)(x) <\infty$, we have \begin{align*} |[b,M] (f)(x)| &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\beta} (f)(x). \end{align*} \label{enumerate:lem:maximal-pointwise-3} \end{enumerate} \end{lemma} To prove our results, we recall the definition of the maximal operator with respect to a ball. For a fixed ball $B_{0}$, the maximal functions with respect to $B_{0}$ of a function $f$ are given by \begin{align*} M_{B_{0}} (f)(x) &= \sup_{ B\ni x \atop B\subset B_{0}} |B|^{-1} \displaystyle\int_{B} |f(y)| \mathrm{d} y, \\ \intertext{and} M_{\alpha,B_{0}} (f)(x) &= \sup_{ B\ni x \atop B\subset B_{0}} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |f(y)| \mathrm{d} y, \end{align*} where the supremum is taken over all the balls $B$ with $B\subset B_{0}$ and $x\in B$. In the following results, \labelcref{enumerate:lem:frac-maximal-pointwise2-1} can be found in \cite{guliyev2022some} (see Lemma 3.2).
Similarly to the discussion in \cite{zhang2009commutators}, elementary calculations yield the relations in \labelcref{enumerate:lem:frac-maximal-pointwise2-2} on $\mathbb{G}$. \begin{lemma}[pointwise estimates]\label{lem:frac-maximal-pointwise-2} Let $0\le\alpha<Q$, and let $f: \mathbb{G} \to \mathbb{R}$ be a locally integrable function. \begin{enumerate}[label=(\arabic*)] \item If $B_{0}$ is a ball in $ \mathbb{G}$ with radius $r_{0}$, then $|B_{0}|^{\alpha/Q} \le M_{\alpha} (\scalebox{1.2}{$\chi$}_{B_{0}})(x) = M_{\alpha,B_{0}} (\scalebox{1.2}{$\chi$}_{B_{0}})(x)$ for every $x\in B_{0}$. \label{enumerate:lem:frac-maximal-pointwise2-1} \item $ M_{\alpha} (f\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (f)(x)$ and $ M_{\alpha} (\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (\scalebox{1.2}{$\chi$}_{B})(x)=|B|^{\alpha/Q}$ for every $x\in B\subset \mathbb{G}$. \label{enumerate:lem:frac-maximal-pointwise2-2} \end{enumerate} \end{lemma} The following propositions can be found in \cite{kokilashvili1989fractional}. \begin{proposition} \label{pro:A-kokilashvili1989fractional} Let $0\le\alpha<Q$, $\gamma=\alpha/Q$, and $1< p < \gamma^{-1}=\frac{Q}{\alpha}$ with $\frac{1}{q}=\frac{1}{p}-\frac{\alpha}{Q}$. Then the following two conditions are equivalent: \begin{enumerate}[label=(\arabic*)] \item There is a constant $C>0$ such that for any $f\in L_{\omega}^{p}(\mathbb{G})$ the inequality \begin{align*} \Big( \displaystyle\int_{\mathbb{G}} \big( M_{\gamma}(f\omega^{\gamma}) (x) \big)^{q} \omega(x) \mathrm{d} x \Big)^{1/q} &\le C \Big( \displaystyle\int_{\mathbb{G}}|f(x) |^{p} \omega(x) \mathrm{d} x \Big)^{1/p} \end{align*} holds. \item $\omega \in A_{1+q/p'}(\mathbb{G})$, where $p' = \frac{p}{p-1}$. \end{enumerate} \end{proposition} \begin{proposition} \label{pro:B-kokilashvili1989fractional} Let $0<\alpha<Q$, $\gamma=\alpha/Q$, $q=(1-\gamma)^{-1}$, and $f\in L^{q}(\mathbb{G})$.
Then the following two conditions are equivalent: \begin{enumerate}[label=(\arabic*)] \item $\omega \{x\in \mathbb{G}: M_{\gamma}(f\omega^{\gamma})(x) >\lambda \} \le C \lambda^{-q} \Big( \displaystyle\int_{\mathbb{G}}|f(x) | \mathrm{d} x \Big)^{q}$ with a constant $C>0$ independent of $f$ and $\lambda>0$. \item $\omega \in A_{1}(\mathbb{G})$. \end{enumerate} \end{proposition} The following strong and weak-type boundedness of $M_{\alpha}$ can be obtained from \cref{pro:A-kokilashvili1989fractional,pro:B-kokilashvili1989fractional} by taking the weight $\omega=1$; see \cite{kokilashvili1989fractional} for more details. The first part can also be obtained from \cref{pro:1.6-bernardis1994two} by taking the weights $\omega =1$ and $\nu=1$. \begin{lemma}\label{lem:frac-maximal-kokilashvili1989fractional} Let $0<\alpha<Q$, $1\le p< Q/\alpha$ with $1/q=1/p-\alpha/Q$, and $f\in L^{p}(\mathbb{G})$. \begin{enumerate}[label=(\arabic*)] \item If $1< p<Q/\alpha$, then there exists a positive constant $C$ such that \begin{align*} \|M_{\alpha}(f)\|_{L^{q}(\mathbb{G})} &\le C \|f\|_{L^{p}(\mathbb{G})}. \end{align*} \item If $p=1$, then there exists a positive constant $C$ such that \begin{align*} |\{x\in \mathbb{G}: M_{\alpha}(f)(x) >\lambda \}| &\le C \big( \lambda^{-1}\|f\|_{L^{1}(\mathbb{G})}\big)^{Q/(Q-\alpha)} \end{align*} holds for all $\lambda>0$. \end{enumerate} \end{lemma} \begin{lemma} \label{lem:frac-maximal-to-maximal-Lie} Let $0 <\beta <1$, $0 <\alpha <Q$, $0 <\alpha+\beta <Q$ and $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$. If \labelcref{inequ:lip-nonlinear-frac-main-1-4} holds for some $s\in [1,\infty)$, then there exists a constant $C$, independent of the ball $B$, such that \begin{align*} \dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} |b(x) - M_{B}(b)(x) | \mathrm{d} x \le C \end{align*} holds for every ball $B\subset \mathbb{G}$. \end{lemma} \begin{proof} We first consider the following decomposition.
\begin{align} \label{inequ:proof-nonlinear-frac-main-1-41-4} \begin{aligned} &\; \dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big|b(x) - M_{B}(b)(x) \big| \mathrm{d} x \\ &\le \dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big|b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) \big| \mathrm{d} x \\ &\ \; +\dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big| |B|^{-\alpha/Q}M_{\alpha,B}(b)(x) - M_{B}(b)(x) \big| \mathrm{d} x \\ & =: I_{1} + I_{2}. \end{aligned} \end{align} For $I_{1}$, applying hypothesis \labelcref{inequ:lip-nonlinear-frac-main-1-4} and H\"{o}lder's inequality (see \cref{lem:holder-inequality-Lie-group}), we have \begin{align*} I_{1} &\le \dfrac{1}{|B|^{1+\beta/Q}} \Big( \displaystyle\int_{B} \big|b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) \big|^{s} \mathrm{d} x \Big)^{1/s} \Big( \displaystyle\int_{B} \scalebox{1.2}{$\chi$}_{B}(x) \mathrm{d} x \Big)^{1/s'} \\ &= \dfrac{1}{|B|^{\beta/Q}} \Big( \dfrac{1}{|B|} \displaystyle\int_{B} \big|b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) \big|^{s} \mathrm{d} x \Big)^{1/s} \\ &\le C, \end{align*} where the constant $C$ is independent of $B$. Next, we consider $I_{2}$. From the definition of $M_{\alpha,B} $ and \cref{lem:frac-maximal-pointwise-2}, it is not difficult to check that the pointwise estimates $ M_{\alpha} (b\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (b)(x)$ and $M_{\alpha} (\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (\scalebox{1.2}{$\chi$}_{B})(x)= |B|^{\alpha/Q}$ hold for any fixed ball $B\subset \mathbb{G}$ and all $x \in B$. Furthermore, when $\alpha=0$, for all $x\in B$, we have (see also \cite{bastero2000commutators}) \begin{align*} M (\scalebox{1.2}{$\chi$}_{B})(x) = \scalebox{1.2}{$\chi$}_{B} (x)= 1 \ \ \text{and} \ \ M (b\scalebox{1.2}{$\chi$}_{B})(x) = M_{B} (b)(x).
\end{align*} Then, for any $x\in B$, \begin{align} \label{inequ:proof-nonlinear-frac-main-1-41-6} \begin{aligned} &\; \big| |B|^{-\alpha/Q}M_{\alpha,B}(b)(x) - M_{B}(b)(x) \big| \\ &\le |B|^{-\alpha/Q} \big|M_{\alpha,B}(b)(x) - |B|^{\alpha/Q}|b(x)| \big| + \big| |b(x)|- M_{B}(b)(x) \big| \\ &\le |B|^{-\alpha/Q} \big| M_{\alpha} (b\scalebox{1.2}{$\chi$}_{B})(x) - |b(x)| M_{\alpha} (\scalebox{1.2}{$\chi$}_{B})(x) \big| \\ &\; \qquad + \big| |b(x)| M(\scalebox{1.2}{$\chi$}_{B})(x)- M (b\scalebox{1.2}{$\chi$}_{B})(x) \big| \\ &\le |B|^{-\alpha/Q} \big| [|b|, M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x) \big| + \big| [|b|, M](\scalebox{1.2}{$\chi$}_{B})(x) \big|. \end{aligned} \end{align} Note that $|b| \in \dot{\Lambda}_{\beta}(\mathbb{G})$ and $|b|\ge 0$ since $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$, so we can apply \cref{lem:frac-maximal-pointwise} to $[|b|, M_{\alpha}]$ and \cref{lem:maximal-pointwise} to $[|b|, M]$. By \cref{lem:frac-maximal-pointwise,lem:maximal-pointwise,lem:frac-maximal-pointwise-2}, for arbitrary $x\in B$, we have \begin{align*} \big| [|b|, M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x) \big| &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\alpha+\beta} (\scalebox{1.2}{$\chi$}_{B})(x) \le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} |B|^{(\alpha+\beta)/Q}, \\ \intertext{and} \big| [|b|, M] (\scalebox{1.2}{$\chi$}_{B})(x) \big| &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\beta} (\scalebox{1.2}{$\chi$}_{B})(x) \le C \|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} |B|^{\beta/Q}.
\end{align*} By \labelcref{inequ:proof-nonlinear-frac-main-1-41-6}, we have \begin{align*} I_{2} &= \dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big| |B|^{-\alpha/Q}M_{\alpha,B}(b)(x) - M_{B}(b)(x) \big| \mathrm{d} x \\ &\le \dfrac{C}{|B|^{1+(\alpha+\beta)/Q}} \displaystyle\int_{B} \big| [|b|, M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x) \big| \mathrm{d} x \\ &\ \; + \dfrac{C}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big| [|b|, M] (\scalebox{1.2}{$\chi$}_{B})(x) \big| \mathrm{d} x \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})}. \end{align*} Putting the estimates for $I_{1}$ and $I_{2}$ into \labelcref{inequ:proof-nonlinear-frac-main-1-41-4}, we obtain \begin{align*} \dfrac{1}{|B|^{1+\beta/Q}} \displaystyle\int_{B} |b(x) - M_{B}(b)(x) | \mathrm{d} x \le C, \end{align*} where $C$ is independent of $B$. This completes the proof of \cref{lem:frac-maximal-to-maximal-Lie}. \end{proof} \section{Proof of the principal results} \label{sec:proof-mab} We now give the proofs of the principal results. First, we prove \cref{thm:lipschitz-frac-main-1}.
\begin{refproof}[Proof of \cref{thm:lipschitz-frac-main-1}] Since the implications \labelcref{enumerate:thm-lip-frac-main-1-2} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-3} and \labelcref{enumerate:thm-lip-frac-main-1-5} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-4} follow readily, and \labelcref{enumerate:thm-lip-frac-main-1-2} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-5} is similar to \labelcref{enumerate:thm-lip-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-4}, we only need to prove \labelcref{enumerate:thm-lip-frac-main-1-1} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-2}, \labelcref{enumerate:thm-lip-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-4}, and \labelcref{enumerate:thm-lip-frac-main-1-4} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-1} (see \Cref{fig:ps-equivalent}). \begin{figure}[!ht] \centering \scalebox{0.6}{ \begin{tikzpicture}[ vertex/.style = {shape=circle,draw,minimum size=2em}, edge/.style = {->,-Latex}, ] \node[vertex] (o) at (0,0) {1}; \node[vertex] (t) at (-2,-2) {2}; \node[vertex] (th) at (-2,-5) {3}; \node[vertex] (f) at (2,-5) {4}; \node[vertex] (fv) at (2,-2) {5}; \draw[edge,dashed] (t) to node[left] {$w_{23}$} (th); \draw[edge,dashed] (fv) to node[right] {$w_{54}$} (f); \draw[edge, ultra thick,blue] (o) -- (t) node[midway,left] {$w_{12}$} ; \draw[edge, ultra thick,blue] (th) to node[above, midway] {$w_{34}$} (f); \draw[edge,ultra thick,blue] (f) to[out=0, in=0] node[above, yshift=11mm] {$w_{41}$} (o); \draw[edge,dashed] (t) to node[below, midway] {$w_{25}$} (fv); \end{tikzpicture} } \vskip 6pt \caption{Proof structure \\ where $w_{ij}$ denotes $i\Longrightarrow j$}\label{fig:ps-equivalent} \end{figure} \labelcref{enumerate:thm-lip-frac-main-1-1} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-2}:\ Let $b\in
\dot{\Lambda}_{\beta}(\mathbb{G})$; then, using \labelcref{enumerate:def-lip-1} in \cref{def.lip-space}, we have \begin{align} \label{inequ:proof-frac-main-1-1} \begin{aligned} M_{\alpha,b} (f)(x) &= \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B} |b(x)-b(y)| |f(y)| \mathrm{d} y \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} \sup_{B\ni x} \dfrac{1}{|B|^{1-\alpha/Q}} \displaystyle\int_{B}|\rho(y^{-1}x)|^{\beta} |f(y)| \mathrm{d} y \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} \sup_{B\ni x} \dfrac{1}{|B|^{1-(\alpha+\beta)/Q}} \displaystyle\int_{B} |f(y)| \mathrm{d} y \\ &\le C\|b\|_{\dot{\Lambda}_{\beta}(\mathbb{G})} M_{\alpha+\beta} (f)(x). \end{aligned} \end{align} Therefore, assertion \labelcref{enumerate:thm-lip-frac-main-1-2} follows from \cref{lem:frac-maximal-kokilashvili1989fractional} and \labelcref{inequ:proof-frac-main-1-1}. \labelcref{enumerate:thm-lip-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-4}:\ For any fixed ball $B\subset \mathbb{G}$, we have for all $x\in B$, \begin{align*} |b(x)-b_{B}| &\le \dfrac{1}{ |B| } \displaystyle\int_{B} |b(x)-b(y)| \mathrm{d} y \\ &= \dfrac{1}{ |B| } \displaystyle\int_{B} |b(x)-b(y)| \scalebox{1.2}{$\chi$}_{B} (y) \mathrm{d} y \\ &\le \dfrac{1}{ |B|^{\alpha/Q} } M_{\alpha,b}(\scalebox{1.2}{$\chi$}_{B})(x). \end{align*} Then, for all $x\in \mathbb{G}$, \begin{align*} |(b(x)-b_{B})\scalebox{1.2}{$\chi$}_{B} (x)| \le \dfrac{1}{ |B|^{\alpha/Q} } M_{\alpha,b}(\scalebox{1.2}{$\chi$}_{B})(x). \end{align*} By assertion \labelcref{enumerate:thm-lip-frac-main-1-3}, there exist $p, q$ satisfying $1<p<\frac{Q}{\alpha+\beta} $ and $\frac{1}{q} = \frac{1}{p} -\frac{\alpha+\beta}{Q}$ such that $ M_{\alpha,b} $ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$. 
For any ball $B\subset \mathbb{G}$ containing $x$, by \cref{lem:norm-characteristic-functions-Lie-group}, one obtains \begin{align*} \dfrac{1}{|B|^{\beta/Q}} \Big( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x)-b_{B}|^{q} \mathrm{d} x \Big)^{1/q} &\le \dfrac{1}{ |B|^{(\alpha+\beta)/Q}} \Big( \dfrac{1}{|B|} \displaystyle\int_{B} \big( M_{\alpha,b}(\scalebox{1.2}{$\chi$}_{B})(x) \big)^{q} \mathrm{d} x \Big)^{1/q} \\ &\le \dfrac{1}{ |B|^{1/q+(\alpha+\beta)/Q}} \Big(\displaystyle\int_{B} \big( M_{\alpha,b}(\scalebox{1.2}{$\chi$}_{B})(x) \big)^{q} \mathrm{d} x \Big)^{1/q} \\ &\le \dfrac{C}{ |B|^{1/q+(\alpha+\beta)/Q}} \|\scalebox{1.2}{$\chi$}_{B}\|_{L^{p}(\mathbb{G})} \\ &\le C. \end{align*} Thus, this together with \cref{lem:2.2-li2003lipschitz} gives $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$. \labelcref{enumerate:thm-lip-frac-main-1-4} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-frac-main-1-1}:\ For any ball $B \subset \mathbb{G}$ containing $x$, it follows from H\"{o}lder's inequality (see \cref{lem:holder-inequality-Lie-group}) and assertion \labelcref{enumerate:thm-lip-frac-main-1-4} that \begin{align*} \dfrac{1}{ |B|^{1+\beta/Q}} \displaystyle\int_{B} |b(x)- b_{B}| \mathrm{d} x &\le \dfrac{C}{|B|^{1+\beta/Q}} \Big( \displaystyle\int_{B} |b(x)-b_{B}|^{q} \mathrm{d} x \Big)^{1/q} \Big(\displaystyle\int_{B} \scalebox{1.2}{$\chi$}_{B}(x) \mathrm{d} x \Big)^{1/q'} \\ &\le \dfrac{C}{|B|^{\beta/Q}} \Big( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x)-b_{B}|^{q} \mathrm{d} x \Big)^{1/q} \\ &\le C. \end{align*} It follows from \cref{lem:2.2-li2003lipschitz} that $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ since $B$ is an arbitrary ball in $\mathbb{G}$. The proof of \cref{thm:lipschitz-frac-main-1} is completed. \end{refproof} Now, we prove \cref{thm:lipschitz-nonlinear-frac-main-1}. 
\begin{refproof}[Proof of \cref{thm:lipschitz-nonlinear-frac-main-1}] Since the implications \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-2} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-3} and \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-5} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4} follow readily, and \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-2} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-5} is similar to \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4}, we only need to prove \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-1} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-2}, \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4}, and \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-1} (see \Cref{fig:ps-equivalent}). \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-1} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-2}:\ For any fixed $x \in \mathbb{G}$ such that $M_{\alpha}(f)(x) <\infty$, by \cref{lem:frac-maximal-pointwise} and $b \geq 0$, we have \begin{align*} \begin{aligned} \big| [b,M_{\alpha}] (f)(x) \big| &\le M_{\alpha,b} (f)(x). \end{aligned} \end{align*} It follows from \cref{thm:lipschitz-frac-main-1} that $[b, M_{\alpha}]$ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$ since $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$. 
\labelcref{enumerate:thm-lip-nonlinear-frac-main-1-3} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4}:\ From the definition of $M_{\alpha,B} $ and \cref{lem:frac-maximal-pointwise-2}, we obtain the pointwise identities $ M_{\alpha} (b\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (b)(x)$ and $M_{\alpha} (\scalebox{1.2}{$\chi$}_{B})(x) = M_{\alpha,B} (\scalebox{1.2}{$\chi$}_{B})(x)= |B|^{\alpha/Q}$ for any fixed ball $B\subset \mathbb{G}$ and all $x \in B$. Then for any $x \in B$, we have \begin{align*} b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) &= |B|^{-\alpha/Q} \Big( b(x)|B|^{\alpha/Q} - M_{\alpha,B}(b)(x) \Big) \\ &= |B|^{-\alpha/Q} \Big( b(x) M_{\alpha} (\scalebox{1.2}{$\chi$}_{B})(x) - M_{\alpha} (b\scalebox{1.2}{$\chi$}_{B})(x) \Big) \\ &= |B|^{-\alpha/Q} [b,M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x). \end{align*} Thus \begin{align*} \Big( b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) \Big) \scalebox{1.2}{$\chi$}_{B} (x) &= |B|^{-\alpha/Q} [b,M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x) \scalebox{1.2}{$\chi$}_{B} (x). \end{align*} By assertion \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-3}, there exist $p, q$ satisfying $1<p<\frac{Q}{\alpha+\beta} $ and $\frac{1}{q} = \frac{1}{p} -\frac{\alpha+\beta}{Q}$ such that $ [b,M_{\alpha}] $ is bounded from $L^{p}(\mathbb{G})$ to $L^{q}(\mathbb{G})$. 
For any ball $B\subset \mathbb{G}$ containing $x$, by \cref{lem:norm-characteristic-functions-Lie-group}, we get \begin{align*} \begin{aligned} & \dfrac{1}{|B|^{\beta/Q}} \left( \dfrac{1}{|B|} \displaystyle\int_{B} |b(x) -|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) |^{q} \mathrm{d} x \right)^{1/q} \\ &= |B|^{-\beta/Q} \Big( |B|^{-1} \displaystyle\int_{B} \Big||B|^{-\alpha/Q} [b,M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B})(x) \Big|^{q} \mathrm{d} x \Big)^{1/q} \\ &\le |B|^{-(\alpha+\beta)/Q-1/q} \big\|[b,M_{\alpha}] (\scalebox{1.2}{$\chi$}_{B}) \big\|_{L^{q}(\mathbb{G})} \\ &\le C |B|^{-(\alpha+\beta)/Q-1/q} \| \scalebox{1.2}{$\chi$}_{B} \|_{L^{p}(\mathbb{G})} \\ & \le C, \end{aligned} \end{align*} which gives \labelcref{inequ:lip-nonlinear-frac-main-1-4} for $s=q$ since the ball $B\subset \mathbb{G}$ is arbitrary and $C$ is independent of $B$. \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-4} $\xLongrightarrow[]{\ \ }$ \labelcref{enumerate:thm-lip-nonlinear-frac-main-1-1}:\ To prove $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$, by \cref{lem:2.2-li2003lipschitz}, it suffices to verify that there is a constant $C>0$ such that for all balls $B\subset \mathbb{G}$, \begin{align} \label{inequ:proof-nonlinear-frac-main-1-41-1} |B|^{-1-\beta/Q} \displaystyle\int_{B} | b(x)-b_{B} | \mathrm{d} x \le C. \end{align} For any fixed ball $B\subset \mathbb{G}$, let $E=\{x \in B:b(x) \le b_{B}\}$ and $F=\{x \in B:b(x) > b_{B}\}$. The following equality is trivially true (see \cite{bastero2000commutators}, page 3331): \begin{align*} \displaystyle\int_{E} | b(x)-b_{B} | \mathrm{d} x = \displaystyle\int_{F} | b(x)-b_{B} | \mathrm{d} x. 
\end{align*} Note that $ b_{B} \le |b_{B}| \le |B|^{-\alpha/Q} M_{\alpha,B}(b)(x)$ for any $x\in B$ and that $b(x) \le b_{B} $ for any $x\in E$; thus $b(x) \le b_{B} \le |b_{B}| \le |B|^{-\alpha/Q} M_{\alpha,B}(b)(x)$ for any $x\in E$ (see \cite{zhang2009commutators}). Hence, for any $x\in E$, we have \begin{align*} | b(x)-b_{B} | \le | b(x)-|B|^{-\alpha/Q} M_{\alpha,B}(b)(x)|. \end{align*} Therefore \begin{align} \label{inequ:proof-nonlinear-frac-main-1-41-2} \begin{aligned} \dfrac{1}{|B|^{1+\beta/Q} } \displaystyle\int_{B} | b(x)-b_{B} | \mathrm{d} x &= \dfrac{1}{|B|^{1+\beta/Q} } \displaystyle\int_{E\cup F} | b(x)-b_{B} | \mathrm{d} x \\ &= \dfrac{2}{|B|^{1+\beta/Q} } \displaystyle\int_{E} | b(x)-b_{B} | \mathrm{d} x \\ &\le \dfrac{2}{|B|^{1+\beta/Q} } \displaystyle\int_{E} | b(x)-|B|^{-\alpha/Q} M_{\alpha,B}(b)(x) | \mathrm{d} x \\ &\le \dfrac{2}{|B|^{1+\beta/Q} } \displaystyle\int_{B} | b(x)-|B|^{-\alpha/Q} M_{\alpha,B}(b)(x) | \mathrm{d} x. \end{aligned} \end{align} On the other hand, it follows from H\"{o}lder's inequality (see \cref{lem:holder-inequality-Lie-group}) and \labelcref{inequ:lip-nonlinear-frac-main-1-4} that \begin{align*} &\; \dfrac{1}{|B|^{1+\beta/Q} } \displaystyle\int_{B} | b(x)-|B|^{-\alpha/Q} M_{\alpha,B}(b)(x) | \mathrm{d} x \\ &\le \dfrac{C}{|B|^{1+\beta/Q} } \bigg( \displaystyle\int_{B} | b(x)-|B|^{-\alpha/Q} M_{\alpha,B}(b)(x) |^{q} \mathrm{d} x \bigg)^{1/q} |B|^{1/q'}\\ &\le \dfrac{C}{|B|^{\beta/Q} } \bigg( \dfrac{1}{|B|} \displaystyle\int_{B} | b(x)-|B|^{-\alpha/Q}M_{\alpha,B}(b)(x) |^{q} \mathrm{d} x \bigg)^{1/q} \\ &\le C, \end{align*} which, together with \labelcref{inequ:proof-nonlinear-frac-main-1-41-2}, gives \labelcref{inequ:proof-nonlinear-frac-main-1-41-1} for $s=q$. Thus we obtain $b\in \dot{\Lambda}_{\beta}(\mathbb{G})$ from \cref{lem:2.2-li2003lipschitz}. In order to prove $b \ge 0$, it suffices to show $b^{-}=0$. 
For any fixed ball $B\subset \mathbb{G}$, observe that \begin{align*} 0\le b^{+}(x)=|b(x)|-b^{-}(x) \le |b(x)| \le M_{B}(b)(x) \end{align*} for $x\in B$, and thus, for $x\in B$, \begin{align*} 0\le b^{-}(x) \le M_{B}(b)(x) -b^{+}(x) \le M_{B}(b)(x) -b^{+}(x)+b^{-}(x) =M_{B}(b)(x) -b(x). \end{align*} Using \cref{lem:frac-maximal-to-maximal-Lie}, for any ball $B\subset \mathbb{G}$ we obtain \begin{align*} \dfrac{1}{|B|} \displaystyle\int_{B} b^{-}(x) \mathrm{d} x &\le \dfrac{1}{|B|} \displaystyle\int_{B} |M_{B}(b)(x) -b(x)| \mathrm{d} x \\ &= \dfrac{|B|^{\beta/Q}}{|B|^{1+\beta/Q}} \displaystyle\int_{B} \big|b(x) - M_{B}(b)(x) \big| \mathrm{d} x \\ &\le C |B|^{\beta/Q}. \end{align*} Since the right-hand side tends to zero as $|B|\to 0$, Lebesgue's differentiation theorem yields $b^{-}=0$ almost everywhere. The proof of \cref{thm:lipschitz-nonlinear-frac-main-1} is completed. \end{refproof} \section*{Acknowledgments} This work is supported partly by the National Natural Science Foundation of China (Grant No. 11571160), the Scientific Project-HLJ (No. 2019-KYYWF-0909) and the Youth Project-HLJ (No. 2020YQ07). \section*{Data Availability Statement} The manuscript has no associated data.
\section{Introduction} How to unify general relativity (GR) with quantum mechanics in a theory of quantum gravity is a great challenge to theoretical physics. As a nonperturbative approach to quantum gravity, loop quantum gravity (LQG) has made remarkable progress in the past thirty years \cite{ashtekar2004back,rovelli2005quantum,thiemann2007modern,han2007fundamental}. According to LQG, spacetime consists of fundamental quanta, since the spectra of the operators corresponding to the classical length, area and volume turn out to be discrete \cite{rovelli1994physical,ashtekar1997quantum,ashtekar1997quantumII,thiemann1998length,ma2010new,yang2016new}. Despite these achievements, the dynamics of LQG is still an open issue, as the problem of how to suitably quantize and solve the Hamiltonian constraint remains unsolved. There are some attempts to quantize the Hamiltonian constraint \cite{thiemann1998quantum,thiemann2006phoenix,han2006master,yang2015new,Domagala2015kse,alesci2015hamiltonian}, and some properties of the resulting operators have been studied \cite{bonzom2011hamiltonian,alesci2013matrix,zhang2018towards,zhang2019bouncing}. These problems in the full theory motivate us to consider symmetry-reduced models, such as homogeneous and isotropic cosmology, to which the loop quantization method can be applied \cite{bojowald2001absence,ashtekar2006quantumnature,ding2009effective}. The resulting quantum cosmology is called loop quantum cosmology (LQC). As a potential approach to address the dark energy and dark matter problems in the standard Lambda cold dark matter model, a large variety of modified theories of gravity have been studied. Among these theories, a well-known one is the Brans-Dicke theory \cite{brans1961mach}, which is apparently compatible with Mach's principle. 
Loop quantization of this theory was studied in \cite{Zhang:2011vg}, where not only the kinematical Hilbert space but also the Hamiltonian constraint operator was constructed. However, similar to the situation in LQG, it is still difficult to solve the Hamiltonian constraint in the full loop quantum Brans-Dicke theory (LQBDT). The symmetry-reduced model of loop quantum Brans-Dicke cosmology (LQBDC) was developed afterward \cite{zhang2013loop,Artymowski:2013qua}. By solving the effective Hamiltonian constraint, one obtained a symmetric bouncing evolution of the Universe, so that the classical big bang singularity is avoided in the quantum theory. It should be noted that the Hamiltonian constraint in full LQG consists of two terms: the so-called Euclidean term and Lorentzian term. These two terms were first regularized and quantized as operators in \cite{thiemann1998quantum}. Classically, the Lorentzian term is proportional to the Euclidean term in the spatially flat cosmological models. Thus one could combine the two terms into one term and then quantize it to obtain the Hamiltonian constraint operator in the cosmological models. In both standard LQC with a massless scalar field and LQBDC, this treatment leads to the symmetric bounce of the Universe \cite{bojowald2001absence,ashtekar2006quantumnature,zhang2013loop,Artymowski:2013qua}. Alternatively, the Lorentzian term could also be quantized independently in the cosmological models by using Thiemann's trick as in full LQG and full LQBDT. This idea was first realized in \cite{yang2009alternative}, where an alternative Hamiltonian constraint operator was obtained in LQC. Notably, the effective Hamiltonian of this alternative operator was later confirmed by the semiclassical analysis of Thiemann's Hamiltonian constraint operator in full LQG, which leads to an asymmetric bounce scenario in LQC \cite{assanioussi2018emergent,li2018towards}. 
This result relates the flat Friedmann-Lema\^{i}tre-Robertson-Walker cosmological spacetime to an asymptotic de Sitter spacetime. Thus an effective cosmological constant and an effective Newton constant were obtained in LQG \cite{li2018towards,assanioussi2018emergent}. This ambiguity also exists in LQBDC. To inherit more features of LQBDT, in this paper we will deal with the Euclidean and the Lorentzian terms independently in LQBDC. It will be shown that the main features of the effective dynamics of the alternative Hamiltonian in LQC carry over to LQBDC. The paper is arranged as follows. In Sec. \ref{se:two} the classical Brans-Dicke cosmology with the coupling parameter $\omega\neq -3/2$ will be briefly reviewed, and then the kinematics of LQBDC will be introduced. In Sec. \ref{se:three}, the Hamiltonian constraint of the Brans-Dicke cosmological model will be quantized by using the strategy of treating the Euclidean and Lorentzian terms independently as in full LQBDT. In Sec. \ref{se:four}, the effective Hamiltonian constraint of the alternative Hamiltonian operator will be derived by the path-integral method in LQBDC. Then in Sec. \ref{se:five} the effective dynamics driven by the effective Hamiltonian will be studied. Finally, the results will be summarized and discussed in Sec. \ref{se:six}. \section{Brans-Dicke cosmology and its loop quantization}\label{se:two} The action of the original Brans-Dicke theory reads \cite{brans1961mach} \begin{equation*} S[g,\phi]=\frac{1}{2\kappa} \int_{M} d^{4} x \sqrt{-g}\left[\phi R-\frac{\omega}{\phi}\left(\partial_{\mu} \phi\right) \partial^{\mu} \phi\right], \end{equation*} where $\kappa=8\pi G$ with $G$ the Newtonian gravitational constant, the scalar field $\phi$ is nonminimally coupled to the scalar curvature $R$, and the coupling constant $\omega$ is restricted by observations to be greater than $10^4$ \cite{Will:2014kxa,will2018theory}. 
In the connection formulation of Brans-Dicke theory, the phase space consists of canonical pairs of geometrical conjugate variables $(A_a^i, E^b_j)$ and scalar conjugate variables $(\phi,\Pi)$, where $A_a^i$ is an SU(2) connection and $E^b_j$ is the densitized triad on the spatial manifold $M$. The nonvanishing Poisson brackets between the canonical variables read \begin{equation} \begin{aligned} \{A_a^i(x),E^b_j(y)\}&=\kappa\gamma\delta_a^b\delta^i_j\delta(x,y),\\ \{\phi(x),\Pi(y)\}&=\delta(x,y), \end{aligned} \end{equation} where $\gamma$ is the Barbero-Immirzi parameter. In the case of the coupling constant $\omega\neq -3/2$ as required by observations, the Hamiltonian constraint in Brans-Dicke theory reads \cite{zhang2013loop} \begin{equation}\label{eq:Hamiltonainfull} \begin{aligned} H=&\frac{\phi}{2}\left(F^j_{ab}-(\gamma^2+\frac{1}{\phi^2})\epsilon_{jmn}\tilde{K}^m_a\tilde{K}^n_b\right)\frac{\epsilon_{jkl}E^a_kE^b_l}{\sqrt{q}}\\ &+\frac{1}{3+2\omega}\left(\frac{(\tilde{K}^i_aE^a_i)^2}{\phi\sqrt{q}}+2\kappa\frac{(\tilde{K}_a^iE^a_i)\Pi}{\sqrt{q}}+\kappa^2\frac{\Pi^2\phi}{\sqrt{q}}\right) +\frac{\omega}{2\phi}\sqrt{q}(D^a\phi) D_a\phi+\sqrt{q}D_aD^a\phi=0, \end{aligned} \end{equation} where $F^i_{ab}= 2\partial_{\left[a\right.}A^i_{\left.b\right]}+{\epsilon^i}_{kl}A^k_a A^l_b$ is the curvature of the connection $A_a^i$, $\tilde{K}^i_a$ is defined in \cite{zhang2013loop}, and $q$ is the determinant of the physical 3-metric on $M$. We will restrict ourselves to spatially flat, homogeneous and isotropic cosmology with the symmetry of $\mathcal{S}=\mathbb{R}^3\rtimes_\rho $SO(3). Then the spatial 3-manifold $M$ is diffeomorphic to $\mathbb{R}^3$. As in the standard treatment of LQC, we first introduce an ``elementary cubic cell'' $\mathcal{V}$ on $M$ and restrict all integrals to this cell. Fix a fiducial 3-metric $\mathring{q}_{ab}$ and denote the volume of $\mathcal{V}$ measured with $\mathring{q}_{ab}$ by $V_0$.
Let $\mathring{e}^a_i$ and $\mathring{\omega}_a^i$ be the triad and cotriad adapted to $\cal{V}$ and satisfying $\mathring{\omega}_a^i\mathring{e}^b_i=\delta_a^b$ and $\mathring{q}_{ab}=\delta_{ij}\mathring{\omega}_a^i\mathring{\omega}_b^j$. By fixing the local diffeomorphism and internal gauge freedom, the basic variables are reduced to \begin{equation} A_a^i=c V_0^{-1/3}\mathring{\omega}_a^i,~E^b_j=pV_0^{-2/3}\sqrt{\mathring{q}}\mathring{e}^b_j,~\Pi=V_0^{-1}\sqrt{\mathring{q}}\pi_\phi. \end{equation} The nontrivial Poisson brackets among the reduced variables $c$, $p$, $\phi$, and $\pi_\phi$ read \begin{equation} \{c,p\}=\frac{\kappa\gamma}{3},~\{\phi,\pi_\phi\}=1. \end{equation} The remaining Hamiltonian constraint \eqref{eq:Hamiltonainfull} is reduced to \begin{equation}\label{eq:Hamiltonianc} H=-\frac{3c^2\sqrt{|p|}}{\gamma^2\phi}+\frac{1}{(3+2\omega)\phi |p|^{3/2}}\left(\frac{3cp}{\gamma}+\kappa\pi_\phi\phi\right)^2=0. \end{equation} The kinematical Hilbert space $\mathcal H$ of the LQBDC can be given by the direct product of the geometric sector $\mathcal{H}_{\rm geo}=L^2(\mathbb{R}_{\rm Bohr},\d\mu_H)$ \cite{ashtekar2003mathematical,thiemann2007modern}, where $\mathbb{R}_{\rm Bohr}$ is the Bohr compactification of $\mathbb{R}$ and $\d\mu_H$ is the Haar measure on it, and the scalar field sector $\mathcal{H}_{\rm sca}=L^2(\mathbb{R},\d\mu)$, which is the usual Schr\"{o}dinger representation, i.e., \begin{equation} \mathcal{H}=L^2(\mathbb{R}_{\rm Bohr},\d\mu_H)\otimes L^2(\mathbb{R},\d\mu). \end{equation} In $\mathcal{H}_{\rm sca}$, one has the configuration operator $\hat{\phi}$ defined as multiplication and the momentum operator $\hat{\pi}_{\phi}:=-{\rm i}\hbar\,\d / \d \phi$. The generalized eigenstates $|\phi)$ of $\hat{\phi}$ constitute a generalized basis of $\mathcal{H}_{\rm sca}$.
In $\mathcal{H}_{\rm geo}$, there are two fundamental operators, namely the momentum operator $\hat{p}$ which represents the area of each side of $\mathcal{V}$ and the configuration operator ${\widehat{\exp{({\rm i}\lambda c)}}}$ which represents the holonomy of the reduced connection $c$ along an edge parallel to an edge of $\mathcal{V}$. Since we will follow the improved scheme as in \cite{ashtekar2006quantumnature}, it is convenient to introduce a new operator $$ \hat{v}=\frac{\textrm{sgn}(\hat{p})|\hat{p}|^{3/2}}{2\pi\gamma\ell^2_{p}\sqrt\Delta}, $$ where $\ell_{p}=\sqrt{G\hbar}$ is the Planck length and $\Delta=4\sqrt{3}\pi\gamma \ell_p^2$ denotes the area gap in full LQBDT. Note that $\hat{v}$ is a dimensionless variable measuring the physical volume of $\mathcal{V}$ in units of $2\pi\gamma\ell_p^2\sqrt{\Delta}$. The eigenstates $|v\rangle$ of the operator $\hat{v}$ are labeled by real numbers $v$ and constitute an orthonormal basis in $\mathcal{H}_{\rm geo}$ such that \begin{align} \langle v|v'\rangle=\delta_{v,v'}\, , \end{align} where $\delta_{v,v'}$ is the Kronecker delta. A general state in ${\mathcal H}_{\rm geo}$ can be expressed as a countable sum: $|\psi\rangle=\sum\psi_n|v_n\rangle$ and thus the inner product reads $$\langle\psi^{(1)}|\psi^{(2)}\rangle=\sum_n\overline{\psi^{(1)}_n}\psi^{(2)}_n.$$ It should be noted that the operator which measures the physical volume $V$ of $\mathcal{V}$ is given by \begin{equation} \hat{V}=2\pi\gamma\ell_{p}^2\sqrt{\Delta}\,|\hat v|, \end{equation} where $|\hat{v}|$ is the absolute value of the operator $\hat{v}$. One prefers to use the holonomy operator $\widehat{e^{{\rm i}b/2}}$, where $b:=\bar{\mu}c$ with $\bar\mu=\sqrt{\Delta/|p|}$. Note that $\widehat{e^{{\rm i}b/2}}$ represents the holonomy $h^{(\bar\mu)}_i$ of $c$ along an edge parallel to the triad $\mathring{e}^a_i$ whose length with respect to the physical metric is $\sqrt{\Delta}$. Thus the edge underlying $h^{(\bar\mu)}_i$ takes the minimal length of the quantum geometry.
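The statement that $v$ is dimensionless and measures the volume can be spot-checked numerically. The following sketch is illustrative only (it is not part of the paper's derivation; the value of $\gamma$ matches the one used later in the figures): it verifies that $2\pi\gamma\ell_p^2\sqrt{\Delta}\,|v|$ reproduces $|p|^{3/2}$, the physical volume of the cell.

```python
import numpy as np

# Illustrative check (not from the paper): v = sgn(p)|p|^{3/2}/(2*pi*gamma*l_p^2*sqrt(Delta))
# is the physical volume of the cell in units of 2*pi*gamma*l_p^2*sqrt(Delta),
# so V = 2*pi*gamma*l_p^2*sqrt(Delta)*|v| reduces to |p|^{3/2}.
gamma, l_p = 0.2357, 1.0                     # Barbero-Immirzi parameter, Planck length
Delta = 4*np.sqrt(3)*np.pi*gamma*l_p**2      # area gap

def v_of_p(p):
    return np.sign(p)*np.abs(p)**1.5/(2*np.pi*gamma*l_p**2*np.sqrt(Delta))

p = 7.3
V = 2*np.pi*gamma*l_p**2*np.sqrt(Delta)*abs(v_of_p(p))
assert np.isclose(V, abs(p)**1.5)            # physical volume of the cell
```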
The variables $b$ and $v$ are conjugate to each other, since $$\{b,v\}=\frac{2}{\hbar}.$$ Hence one has \begin{equation} \widehat{e^{{\rm i}b/2}}\,|v\rangle=|v+1\rangle. \end{equation} Actually, the holonomy operator $\widehat{h}_i^{(\bar\mu)}$ can be expressed as \begin{equation}\label{eqn:i-holonomy} \begin{aligned} &\widehat{h}^{(\bar\mu)}_i=\frac{1}{2}\left(\widehat{e^{{\rm i}b/2}}+\widehat{e^{-{\rm i}b/2}}\right)-{\rm i}\left( \widehat{e^{{\rm i}b/2}}-\widehat{e^{-{\rm i}b/2}}\right)\tau_i, \end{aligned} \end{equation} where $\tau_i$ are the generators of Lie algebra $\mathfrak{su}(2)$ \cite{ashtekar2006quantumnature}. \section{Alternative Hamiltonian constraint operator}\label{se:three} In the homogeneous cosmological model, the Hamiltonian constraint \eqref{eq:Hamiltonainfull} can be written as \begin{equation} \begin{aligned}\label{eq:Hamilton-c} H=\frac{\phi}{2}\left(F^j_{ab}-(\gamma^2+\frac{1}{\phi^2})\epsilon_{jmn}\tilde{K}^m_a\tilde{K}^n_b\right)\frac{\epsilon_{jkl}E^a_kE^b_l}{\sqrt{q}} +\frac{1}{3+2\omega}\left(\frac{(\tilde{K}^i_aE^a_i)^2}{\phi\sqrt{q}} +2\kappa\frac{(\tilde{K}_a^iE^a_i)\Pi}{\sqrt{q}}+\kappa^2\frac{\Pi^2\phi}{\sqrt{q}}\right)=0. \end{aligned} \end{equation} Similar to the case of full LQBDT, there is no operator corresponding to the connection $A_a^i(x)$ in LQBDC. Hence, one has to express the curvature $F^j_{ab}$ in \eqref{eq:Hamilton-c} by holonomies. This can be accomplished by using Thiemann's tricks \cite{thiemann2007modern}. 
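The key identity behind expressing the curvature through holonomies, namely that a plaquette holonomy of a homogeneous connection recovers the curvature, can be illustrated numerically. The sketch below is not the paper's code; it builds $h_i=\exp(\lambda c\,\tau_i)$ as a $2\times 2$ matrix with $\tau_k=-\frac{\rm i}{2}\sigma_k$ and checks that $-2\,{\rm Tr}(h_{ij}\tau_k)/\lambda^2\to c^2\epsilon_{ijk}$ as $\lambda\to 0$.

```python
import numpy as np

# Illustrative check (not from the paper): with tau_k = -(i/2) sigma_k one has
# tau_i^2 = -I/4, hence exp(a*tau_i) = cos(a/2) I + 2 sin(a/2) tau_i, and for
# h_ij = h_i h_j h_i^{-1} h_j^{-1} with h_i = exp(lambda*c*tau_i):
#   -2 Tr(h_ij tau_k)/lambda^2  ->  c^2 eps_{ijk}   as lambda -> 0.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
tau = [-0.5j*s for s in sigma]

def holonomy(a, i):
    """SU(2) holonomy exp(a*tau_i) along a straight edge."""
    return np.cos(a/2)*np.eye(2) + 2*np.sin(a/2)*tau[i]

c, lam = 0.7, 1e-4          # illustrative reduced connection and edge parameter
h1, h2 = holonomy(lam*c, 0), holonomy(lam*c, 1)
h12 = h1 @ h2 @ np.linalg.inv(h1) @ np.linalg.inv(h2)   # plaquette holonomy
F12 = -2*np.trace(h12 @ tau[2]).real/lam**2
assert abs(F12 - c**2) < 1e-6   # approaches c^2 * eps_{123} = 0.49
```

The leading corrections to the group commutator are traceless against $\tau_3$, so the convergence is quadratic in $\lambda$.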
Classically the curvature in our cosmological model can be regularized on the elementary cell as \cite{ashtekar2006quantumnature} \begin{equation} F_{ab}^k=\lim_{\lambda\to 0} {\rm Tr}\left(-2\frac{h_{ij}^{(\lambda)}\tau^k}{\lambda^2V_0^{2/3}}\right)\mathring{\omega}_a^i\mathring{\omega}_b^j, \end{equation} where $h_{ij}^{(\lambda)}=h_i^{(\lambda)}h_j^{(\lambda)}(h_i^{(\lambda)})^{-1}(h_j^{(\lambda)})^{-1}$ is the holonomy around the loop formed by the two edges of $\cal{V}$ that are tangent to $\mathring{e}_i^a$ and $\mathring{e}_j^b$, each of length $\lambda V_0^{1/3}$ with respect to the fiducial metric $\mathring{q}_{ab}$. To quantize the Hamiltonian constraint, we also need to use the regularizations \begin{equation} \frac{\varepsilon^{ijk}E^b_jE^c_k}{\sqrt{\det(q)}}=\lim_{\lambda\to 0}\frac{2\mathrm{sgn}(p){\rm Tr}(h_m^{(\lambda)} \{(h_m^{(\lambda)})^{-1},V\}\tau^i)}{\kappa\gamma\lambda V_0^{1/3}}\mathring{\omega}^m_a\varepsilon^{abc}, \end{equation} and \begin{equation} \tilde{K}_a^i(x)=\frac{1}{2\gamma(\kappa\gamma)^2}\{A_a^i(x), \{C,V\}\}, \end{equation} where $\mathrm{sgn}(p)$ denotes the sign of $p$ and $C=\int \d^3x{\epsilon_i}^{jk} F_{ab}^i(x)E_j^a(x)E_k^b(x)/\sqrt{q(x)}$. The integration of the Hamiltonian \eqref{eq:Hamilton-c} reads \begin{equation} \mathscr{C}=\int_{\mathcal{V}}\d^3 xH(x)=\lim_{\lambda\to 0}H^{(\lambda)}, \end{equation} where \begin{equation} \begin{aligned} H^{(\lambda)}&= -\phi\frac{\mathrm{sgn}(p)}{2\pi G\gamma\lambda^3}{\rm Tr}(h_{kj}^{(\lambda)}\tau^i){\rm Tr}(h_m^{(\lambda)}\{(h_m^{(\lambda)})^{-1},V\}\tau_i)\varepsilon^{kjm}\\ &+\frac{\mathrm{sgn}(p)}{\gamma^2(8\pi G\gamma)^5\lambda^3}\phi(\gamma^2+\frac{1}{\phi^2})\varepsilon^{ijk}{\rm Tr}(h_i^{(\lambda)-1}\{h_i^{(\lambda)},\{C,V\}\}h_j^{(\lambda)-1}\{h_j^{(\lambda)},\{C,V\}\}h_k^{(\lambda)-1}\{h_k^{(\lambda)},V\})\\ &+\frac{1}{2\omega+3}\left(\frac{(\{C,V\})^2}{4\gamma^2(\kappa\gamma)^2\phi V}+\frac{\{C,V\}\pi_\phi}{\gamma^2 V}+\kappa^2\frac{\pi_\phi^2\phi}{V}\right).
\end{aligned} \end{equation} However, the family of operators $\hat{H}^{(\lambda)}$ does not converge as $\lambda\to 0$. Thus, in the so-called $\bar\mu$-scheme \cite{ashtekar2006quantumnature}, one fixes the length $\lambda$ of the edge underlying the holonomies in the Hamiltonian to $\bar\mu=\sqrt{\Delta/|p|}$, which implies that the curvature is smeared over the elementary faces with the physical area Ar$_{\square}=\Delta$. By this treatment, we obtain the Hamiltonian constraint operator as \begin{equation}\label{eq:cc} \hat{\mathscr{C}}=\lim_{\lambda\to \bar\mu}\hat{H}^{(\lambda)}. \end{equation} It should be noted that classically one has \begin{equation} \lim_{\lambda\to\bar\mu}\{h^{(\lambda)},\tilde{K}\}=\frac{2}{3}\{h^{(\bar\mu)},\tilde{K}\}, \end{equation} where $\tilde{K}=\int\d^3 x\tilde{K}^i_aE^a_i$. Hence, in taking the limit $\lambda\to\bar\mu$, the commutator $[\widehat{h}^{(\lambda)}, \hat{\tilde{K}}]$ is replaced by $\frac{2}{3}[\widehat{h}^{(\bar{\mu})}, \hat{\tilde{K}}]$. It is convenient to split the expression of \eqref{eq:cc} into three parts as $\hat{\mathscr{C}}=\hat{\mathscr{C}}_1+\hat{\mathscr{C}}_2+\hat{\mathscr{C}}_3$.
Their actions on the basis $|v, \phi\rangle=|v\rangle\otimes |\phi)$ of $\mathcal{H}$ are given by: \begin{equation} \begin{aligned}\label{eq:cc1} \hat{\mathscr{C}}_1|v,\phi\rangle=&\phi\sin(b)\hat A\sin(b)|v,\phi\rangle\\ =&\frac{1}{8}\alpha\phi \left( f_+(v)|v+4,\phi\rangle+f_0(v)|v,\phi\rangle+f_-(v)|v-4,\phi\rangle\right) \end{aligned} \end{equation} with $\hat A=-i \hat v\left(\sin\frac{b}{2}|\hat v|\cos\frac{b}{2}-\cos\frac{b}{2}|\hat v|\sin\frac{b}{2}\right)$, $\alpha=6\pi\gamma\ell_p^2/\sqrt{\Delta}$, $f_+(v)=(v +2) (\left| v +1\right| -\left| v +3\right| )$, $f_-(v)=f_+(v-4)$, and $f_0(v)=-f_+(v)-f_-(v)$, \begin{equation} \begin{aligned}\label{eq:cc2} \hat{\mathscr{C}}_2|v,\phi\rangle=&\frac{\alpha}{256\gamma^2}\phi(\gamma^2+\frac{1}{\phi^2})\hat\beta\hat A\hat\beta|v,\phi\rangle\\ =&-\frac{\alpha}{16^3\times 2\gamma^2}\phi(\gamma^2+\frac{1}{\phi^2})\Big(g^\Delta_+(v)A(v+4)g^\Delta_+(v+4)|v+8,\phi\rangle\\ &-\big(g^\Delta_+(v)A(v+4)g^\Delta_-(v+4)+g^\Delta_+(v-4)A(v-4)g^\Delta_-(v)\big)|v,\phi\rangle\\ &+g^\Delta_-(v-4)A(v-4)g^\Delta_-(v)|v-8,\phi\rangle\Big) \end{aligned} \end{equation} with $\hat\beta=2\left(\sin\frac{b}{2}[\hat c,|\hat v|]\cos\frac{b}{2}-\cos\frac{b}{2}[\hat c,|\hat v|]\sin\frac{b}{2}\right)$, $\hat c=2\sin(b)\hat A\sin(b)$, $g_+(v):=f_+(v)(|v|-|v+4|)$, $g_-(v):=f_-(v)(|v-4|-|v|)$, and $g_\pm^\Delta(v):=g_\pm(v+1)-g_\pm(v-1)$, and \begin{equation} \begin{aligned}\label{eq:cc3} \hat{\mathscr{C}}_3|v,\phi\rangle=&\frac{\alpha}{3+2\omega}\sqrt{|\widehat{v^{-1}}|}\left(\frac{-3[\hat c,|\hat{v}|]^2}{64\gamma^2 \phi }+\kappa \frac{3[\hat c,|\hat{v}|]\hat{\pi}_{\phi}}{4i\alpha\gamma\sqrt{\Delta}}+\kappa^2\frac{3}{2\alpha^2\Delta}(\hat{\pi}_{\phi}^2\hat{\phi}+\hat{\phi}\hat{\pi}_{\phi}^2)\right)\sqrt{|\widehat{v^{-1}}|}|v,\phi\rangle\\ =&(\frac{\sqrt{3}}{32\gamma})^2\frac{\alpha}{\phi}\Big( \frac{g_+(v)g_+(v+4)}{\sqrt{|v(v+8)|}}|v+8,\phi\rangle-\frac{g_+(v)g_-(v+4)+g_-(v)g_+(v-4)}{|v|}|v,\phi\rangle\\ 
&+\frac{g_-(v)g_-(v-4)}{\sqrt{|v||v-8|}}|v-8,\phi\rangle\Big)+\kappa\frac{3}{i 16\gamma\sqrt{\Delta}} \hat{\pi}_\phi \Big(\frac{g_+(v)}{\sqrt{|v(v+4)|}}|v+4,\phi\rangle\\ &-\frac{g_-(v)}{\sqrt{|v(v-4)|}}|v-4,\phi\rangle\Big)+\kappa^2\frac{3}{2\alpha\Delta}\frac{1}{|v|}(\hat{\pi}^2_\phi\phi+\phi\hat{\pi}^2_\phi)|v,\phi\rangle, \end{aligned} \end{equation} where $\widehat{v^{-1}}$ is defined by $\widehat{v^{-1}}|v\rangle=v^{-1}|v\rangle$ if $v\neq0$, and $\widehat{v^{-1}}|v\rangle=0$ if $v=0$ \cite{assanioussi2017time}. \section{The effective Hamiltonian constraint}\label{se:four} To get an effective Hamiltonian constraint, we calculate the transition amplitude of the Hamiltonian constraint operator \eqref{eq:cc} as \begin{equation}\label{eq:tran-ampl} A(v_f,\phi_f;v_i,\phi_i)=\langle v_f,\phi_f|v_i,\phi_i\rangle_{\rm phy}=\lim_{\alpha_0\to \infty}\int_{-\alpha_0}^{\alpha_0}\d\alpha\langle v_f,\phi_f|e^{i\alpha\hat{\mathscr{C}}}|v_i,\phi_i\rangle. \end{equation} Dividing the path into $N$ parts by setting $\alpha=\sum_{n=1}^N \epsilon_n$ and inserting the basis, we have \begin{equation} \langle v_f,\phi_f|e^{i\alpha \hat{\mathscr{C}}}|v_i,\phi_i\rangle=\sum_{v_{N-1},\cdots,v_1}\int\d \phi_{N-1}\cdots\d\phi_1\prod_{n=1}^N\langle \phi_n,v_n|e^{i\epsilon_n\hat{\mathscr{C}}}|v_{n-1},\phi_{n-1}\rangle, \end{equation} where $\langle \phi_n,v_n|e^{i\epsilon_n\hat{\mathscr{C}}}|v_{n-1},\phi_{n-1}\rangle$ can be calculated by using the formula \begin{equation} \int\d\phi_n\langle \phi_n,v_n|e^{i\epsilon_n\hat{\mathscr{C}}}|\phi_{n-1},v_{n-1}\rangle=\delta_{v_n,v_{n-1}}+i\epsilon_n\int\d\phi_n\langle\phi_n,v_n|(\hat{\mathscr{C}}_1+\hat{\mathscr{C}}_2+\hat{\mathscr{C}}_3)|v_{n-1},\phi_{n-1}\rangle. \end{equation} By Eqs.
\eqref{eq:cc1}--\eqref{eq:cc3}, we obtain \begin{equation} \begin{aligned} &\int \d\phi_n\langle\phi_n,v_n|\hat{\mathscr{C}}_1|v_{n-1},\phi_{n-1}\rangle&\\ =&-\frac{1}{2\pi\hbar}\frac{\alpha}{8}\int \d\phi_n \int \d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})}\phi_n (v_n+v_{n-1})(\delta_{v_n,v_{n-1}+4}-2\delta_{v_n,v_{n-1}}+\delta_{v_n,v_{n-1}-4}),\\ &\int \d\phi_n\langle\phi_{n},v_{n}|\hat{\mathscr{C}}_2|v_{n-1},\phi_{n-1}\rangle&\\ =&\frac{1}{2\pi\hbar}\frac{\alpha}{32\gamma^2}\int \d\phi_n \int \d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})}\phi_n(\gamma^2+\frac{1}{\phi_n^2}) (v_n+v_{n-1})( \delta_{v_n,v_{n-1}+8}-2\delta_{v_{n-1},v_n}+\delta_{v_n,v_{n-1}-8}), \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &\int \d\phi_n\langle \phi_n, v_n|\hat{\mathscr{C}}_3|\phi_{n-1},v_{n-1}\rangle\\ =&\frac{1}{3+2\omega}\left(-\frac{1}{2\pi\hbar}(\frac{\sqrt{3 }}{4\gamma})^2\int \d\phi_n\int\d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})}\frac{\alpha}{\phi_n}(\sqrt{v_{n}v_{n-1}}+\frac{4}{\sqrt{v_n v_{n-1}}})\Big( \delta_{v_n,v_{n-1}+8}-2\delta_{v_{n-1},v_n}+\delta_{v_{n-1}-8,v_n}\Big)\right.\\ -&\frac{1}{2\pi\hbar}(\frac{\sqrt{3}}{4\gamma})^2\int \d\phi_n\int\d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})}\frac{\alpha}{\phi_n}\frac{8}{\sqrt{v_{n}v_{n-1}}}\Big( \delta_{v_n,v_{n-1}+8}+\delta_{v_{n-1}-8,v_n}\Big)\\ +&\frac{1}{2\pi\hbar}\kappa\frac{3}{i\gamma 4\sqrt{\Delta}}\int\d\phi_n\int\d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})}\pi_n \frac{v_n+v_{n-1}}{\sqrt{v_n v_{n-1}}} (\delta_{v_n,v_{n-1}+4}-\delta_{v_n,v_{n-1}-4})\\ &+\left.\frac{1}{2\pi\hbar}\kappa^2\frac{3}{2\alpha\Delta}\int\d\phi_n\int\d\pi_n(\phi_n+\phi_{n-1})\pi_n^2 e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})} \frac{1}{v_{n-1}}\delta_{v_n,v_{n-1}}\right).
\end{aligned} \end{equation} Combining these equations and the formulas \begin{equation*} \begin{aligned} \delta_{v_n,v_{n-1}+4}-2\delta_{v_n,v_{n-1}}+\delta_{v_n,v_{n-1}-4}&=-\frac{1}{\pi}\int_0^{\pi }\d b_n 4 e^{-i \frac{1}{2}b_n (v_n-v_{n-1}) } \sin ^2(b_n), \\ \delta_{v_n,v_{n-1}+4}-\delta_{v_n,v_{n-1}-4}&=\frac{i}{\pi}\int_0^{\pi} \d b_n 2 e^{-i \frac{1}{2}b_n(v_n-v_{n-1}) } \sin (2 b_n),\\ \delta_{v_n,v_{n-1}}&=\frac{1}{\pi}\int_0^{\pi }\d b_n e^{-i \frac{1}{2}b_n (v_n-v_{n-1}) }, \end{aligned} \end{equation*} we get \begin{equation} \begin{aligned} &\langle \phi_n, v_n|\hat{\mathscr{C}}|\phi_{n-1},v_{n-1}\rangle\\ =&\frac{1}{2\pi\hbar}\int\d\pi_n e^{i\frac{\pi_n}{\hbar}(\phi_n-\phi_{n-1})} \frac{1}{\pi}\int_0^{\pi} \d b_n e^{-i\frac{1}{2} b_n (v_n-v_{n-1}) }\left( \frac{\alpha}{8}\phi_n (v_n+v_{n-1})4\sin ^2(b_n) -\frac{\alpha}{32\gamma^2}\phi_n(\gamma^2+\frac{1}{\phi_n^2}) (v_n+v_{n-1}) 4\sin ^2(2b_n)\right. \\ &+\frac{1}{3+2\omega}\Big((\frac{\sqrt{3}}{4\gamma})^2\frac{\alpha}{\phi_n}(\sqrt{v_{n}v_{n-1}}+\frac{4}{\sqrt{v_n v_{n-1}}})4 \sin ^2(2b_n) -(\frac{\sqrt{3}}{4 \gamma })^2\frac{\alpha}{\phi_n}\frac{8}{\sqrt{v_{n}v_{n-1}}}2 \cos (4 b_n) \\ &+\left.\kappa\frac{3}{4\gamma\sqrt{\Delta}}\pi_n \frac{v_n+v_{n-1}}{\sqrt{v_n v_{n-1}}} 2 \sin (2 b_n)+\kappa^2\frac{3}{2\alpha\Delta}(\phi_n+\phi_{n-1})\pi_n^2 \frac{1}{v_{n-1}}\Big)\right). \end{aligned} \end{equation} Hence the transition amplitude \eqref{eq:tran-ampl} can be expressed as \begin{equation}\label{eq:tran-ampl2} \begin{aligned} &A(v_f,\phi_f;v_i,\phi_i)\\ =&\lim_{\alpha_0\to \infty}\int_{-\alpha_0}^{\alpha_0}\d \alpha\lim_{N\to \infty} \sum_{\{v_{N-1},\cdots,v_1\}}\int \d\phi_{N-1}\cdots \d\phi_1\prod_{n=1}^{N}\langle \phi_n,v_n|e^{i\epsilon_n \hat{\mathscr{C}}}|\phi_{n-1},v_{n-1}\rangle\\ =&\int\mathcal{D} \alpha \int \mathcal{D}\phi \int\mathcal{D}\pi \int\mathcal{D} b\int\mathcal{D} v \exp\Big\{\frac{i}{\hbar}\int \d\tau\left( \pi\dot{\phi}-\frac{\hbar}{2} b\dot{v}\right.
+\hbar\Bigg(\alpha\phi v \sin ^2(b) -\frac{\alpha}{4\gamma^2}\phi(\gamma^2+\frac{1}{\phi^2}) v \sin ^2(2b) \\ &+\frac{1}{3+2\omega}\left.\Big((\frac{\sqrt{3}}{2\gamma})^2\frac{\alpha}{\phi}(v+\frac{4}{v}) \sin ^2(2b)- (\frac{\sqrt3}{2\gamma})^2\frac{4\alpha}{\phi v} \cos (4 b) +\kappa\frac{ 3}{\gamma\sqrt{\Delta}}\pi_\phi \sin (2 b)+\kappa^2\frac{3 }{\alpha\Delta}\phi\pi_\phi^2 \frac{1}{v}\Big)\Bigg)\right)\Big\}. \end{aligned} \end{equation} Therefore, the effective Hamiltonian constraint can be read from Eq.~\eqref{eq:tran-ampl2} as \begin{equation}\label{eq:effh} \begin{aligned} H_{\rm eff}=&\alpha\phi v \sin ^2(b) -\frac{\alpha}{4\gamma^2}\phi(\gamma^2+\frac{1}{\phi^2}) v \sin ^2(2b) +\frac{1}{3+2\omega}\frac{\alpha}{\phi v}\Big(\frac{\sqrt 3}{2\gamma} v\sin(2b)+\frac{\sqrt3 \kappa}{\alpha\sqrt\Delta}\phi\pi_\phi\Big)^2\\ &-\frac{3\alpha}{3+2\omega}\frac{1}{\gamma^2v\phi}(\cos(4b)-\sin^2(2b)). \end{aligned} \end{equation} Expanding to second order in $b$, we have \begin{equation}\label{eq:effapp} \begin{aligned} H_{\rm eff}=&-\frac{\alpha}{\phi\gamma^2}v b^2+\frac{1}{3+2\omega}\frac{\alpha}{\phi v}\Big(\frac{\sqrt 3}{\gamma} vb+\frac{\sqrt3 \kappa}{\alpha\sqrt\Delta}\phi\pi_\phi\Big)^2-\frac{3\alpha}{3+2\omega}\frac{1}{\gamma^2v\phi}(1-12b^2). \end{aligned} \end{equation} Equation \eqref{eq:effapp} differs from the classical Brans-Dicke Hamiltonian constraint \eqref{eq:Hamiltonianc} by the residual term $\frac{3\alpha}{3+2\omega}\frac{1}{\gamma^2v\phi}(1-12 b^2)$. In order to compare this term with the others, it is convenient to introduce a new variable \begin{equation} B=\frac{b}{4\pi G\gamma\sqrt\Delta}, \end{equation} which is canonically conjugate to the physical volume $V$ of the elementary cell $\mathcal{V}$ as \begin{equation} \{B,V\}=1. \end{equation} Then Eq.
\eqref{eq:effapp} can be reexpressed in terms of $B$ and $V$ as \begin{equation}\label{eq:H-BV} H_{\rm eff}= -\frac{3\kappa^2}{4\phi}VB^2+\frac{\kappa^2}{3+2\omega}\frac{1}{\phi V}(\frac{3}{2}BV+\pi_\phi\phi)^2-\frac{\hbar^2}{3+2\omega}\frac{9\kappa^2}{16V\phi}(1-3\kappa^2\gamma^2\Delta B^2). \end{equation} It is obvious from Eq.~\eqref{eq:H-BV} that the residual term in \eqref{eq:effapp} is of order $\hbar^2$, which is certainly a quantum correction. By checking the derivation procedure of the effective Hamiltonian, one can find that the residual term comes from the effect of $[\hat c,|\hat{v}|]^2$ in $\hat{\mathscr{C}}_3$. This term is thus particular to the effective theory of LQBDC, since no squared commutator appears in the Hamiltonian constraint operator of standard LQC. For semiclassical considerations, one may drop this term and obtain the following effective Hamiltonian constraint \begin{equation}\label{Hamiltonian-e} H_{\rm eff}=\alpha\phi v \sin ^4(b) -\frac{\alpha}{4\gamma^2\phi} v \sin ^2(2b) +\frac{1}{3+2\omega}\frac{\alpha}{\phi v}\Big(\frac{\sqrt 3}{2\gamma} v\sin(2b)+\frac{\sqrt3 \kappa}{\alpha\sqrt\Delta}\phi\pi_\phi\Big)^2, \end{equation} where the Euclidean and Lorentzian parts have been combined by using $\sin^2(2b)=4\sin^2(b)\cos^2(b)$. As we will show in the next section, the dynamics driven by this effective Hamiltonian can be obtained analytically. \section{The effective dynamics}\label{se:five} To simplify the calculation of the dynamics determined by the effective Hamiltonian \eqref{Hamiltonian-e}, we choose a lapse function $N=v\phi/\alpha$, such that the effective Hamiltonian constraint can be reexpressed as \begin{equation} \begin{aligned}\label{eq:HamiltonianNH} C=NH_{\rm eff}=\phi^2v^2\sin^4(b)-\frac{1}{4\gamma^2}v^2\sin^2(2b)+\frac{1}{3+2\omega}\Big(\frac{\sqrt 3}{2\gamma} v\sin(2b)+\frac{\sqrt3 \kappa}{\alpha\sqrt\Delta}\phi\pi_\phi\Big)^2=0. \end{aligned} \end{equation} Let $X=v\sin(2b)$, $Y=\phi\pi_\phi$, and $Z=\phi v\sin^2(b)$.
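That the rescaled constraint depends on the phase-space variables only through these three combinations can be spot-checked numerically. The sketch below uses illustrative values (not the paper's code) and the relation $\kappa/(\alpha\sqrt{\Delta})=4/(3\gamma\hbar)$, which follows from $\alpha\sqrt{\Delta}=6\pi\gamma\ell_p^2$ and $\ell_p^2=G\hbar$:

```python
import numpy as np

# Sketch (illustrative values): C = N*H_eff depends on (v, b, phi, pi_phi)
# only through X = v sin(2b), Y = phi*pi_phi, Z = phi*v*sin(b)^2.
gamma, hbar, l_p, omega = 0.2357, 1.0, 1.0, 1.0e4
G = l_p**2/hbar
kappa = 8*np.pi*G
Delta = 4*np.sqrt(3)*np.pi*gamma*l_p**2
alpha = 6*np.pi*gamma*l_p**2/np.sqrt(Delta)
coef = np.sqrt(3)*kappa/(alpha*np.sqrt(Delta))      # = 4/(sqrt(3)*gamma*hbar)

def C_direct(v, b, phi, pphi):
    return ((phi*v*np.sin(b)**2)**2 - v**2*np.sin(2*b)**2/(4*gamma**2)
            + ((np.sqrt(3)/(2*gamma))*v*np.sin(2*b) + coef*phi*pphi)**2/(3 + 2*omega))

def C_xyz(X, Y, Z):
    return Z**2 - X**2/(4*gamma**2) + ((np.sqrt(3)/(2*gamma))*X + coef*Y)**2/(3 + 2*omega)

v, b, phi, pphi = 12.0, 0.3, 0.8, 2.5               # arbitrary illustrative point
X, Y, Z = v*np.sin(2*b), phi*pphi, phi*v*np.sin(b)**2
assert np.isclose(C_direct(v, b, phi, pphi), C_xyz(X, Y, Z))
assert np.isclose(coef, 4/(np.sqrt(3)*gamma*hbar))
```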
Then two constants of motion with respect to $C$ can be obtained as \begin{equation} \begin{aligned} \xi_1&=\hbar X/4-Y,\\ \xi_2&=Z^2+AY^2+BXY, \end{aligned} \end{equation} where \begin{equation}\label{eq:ab} \begin{aligned} A&=\frac{8(3\omega+2)}{3\gamma^2(2\omega+3)\hbar^2},\\ B&=-\frac{4(\omega-1)}{\gamma^2(2\omega+3)\hbar}. \end{aligned} \end{equation} In terms of the two constants of motion, the constraint \eqref{eq:HamiltonianNH} can be reexpressed as \begin{equation}\label{eq:xi} \xi_2=\frac{8\omega}{\gamma^2(2\omega+3)\hbar^2}\xi_1^2. \end{equation} Thus, the Hamiltonian constraint \eqref{eq:HamiltonianNH} will be satisfied throughout the evolution as long as the two constants of motion are chosen such that Eq. \eqref{eq:xi} holds. The equations of motion for $X$, $Y$, and $Z$ can be easily derived by using Hamilton's equations with the Hamiltonian $C$, which, together with the Hamiltonian constraint \eqref{eq:HamiltonianNH}, leads to \begin{equation}\label{eq:eom} \begin{aligned} \dot{Y}&=-2Z^2,\\ Z^2&=-(A+\frac{4B}{\hbar})Y^2-\frac{4B}{\hbar}\xi_1Y+\xi_2=:\mathfrak{a}Y^2+\mathfrak{b}Y+\mathfrak{c}, \end{aligned} \end{equation} where we defined \begin{equation}\label{eq:abc} \begin{aligned} \mathfrak{a}&=\frac{8(3\omega-8)}{3\gamma^2(2\omega+3)\hbar^2},\\ \mathfrak{b}^2-4\mathfrak{a}\mathfrak{c}&=\frac{256\xi_1^2}{3\gamma^4(2\omega+3)\hbar^4}. \end{aligned} \end{equation} Thus the type of the solution $Y(t)$ depends on the sign of $\mathfrak{b}^2-4\mathfrak{a}\mathfrak{c}$. For $\mathfrak{b}^2-4\mathfrak{a}\mathfrak{c}<0$, $Y(t)$ takes the form of a tangent function, while for $\mathfrak{b}^2-4\mathfrak{a}\mathfrak{c}>0$, it takes the form of a hyperbolic tangent function. We are interested in the case with the coupling parameter $\omega\gg1$, which is consistent with the Solar System experiments \cite{Will:2014kxa,will2018theory}. In this case Eq. \eqref{eq:abc} ensures that $\mathfrak{b}^2-4\mathfrak{a}\mathfrak{c}>0$. Then Eq.
\eqref{eq:eom} gives \begin{equation}\label{eq:doty2} \dot{Y}=-2\mathfrak{a}(Y-y_1)(Y-y_2), \end{equation} where $y_1$ and $y_2$ ($y_1>y_2$) are the roots of the equation $\mathfrak{a}Y^2+\mathfrak{b}Y+\mathfrak{c}=0$. Thus the solutions to Eq. \eqref{eq:doty2} can be obtained as \begin{equation} Y_\pm(t)=y_1+\frac{y_2-y_1}{1\pm e^{2\mathfrak{a}(y_1-y_2)t}}. \end{equation} Taking into account the fact that $Z^2=\mathfrak{a}(Y-y_1)(Y-y_2)\geq 0$, we distinguish the following two cases. \begin{enumerate}[(i)] \item For $\mathfrak{a}>0$, i.e., $\omega>8/3$, the solution is \begin{equation} Y_-(t)=\frac{3(\omega-1)-\sqrt{3(2\omega+3)}}{8-3\omega}\xi_1+\frac{2\sqrt{3(2\omega+3)}}{8-3\omega}\xi_1\left(1-e^{\frac{32\xi_1 t}{\gamma^2\hbar^2\sqrt{3(2\omega+3)}}}\right)^{-1}. \end{equation} \item For $\mathfrak{a}<0$, i.e., $-3/2<\omega<8/3$, the solution is \begin{equation} Y_+(t)=\frac{3(\omega-1)-\sqrt{3(2\omega+3)}}{8-3\omega}\xi_1+\frac{2\sqrt{3(2\omega+3)}}{8-3\omega}\xi_1\left(1+e^{\frac{32\xi_1 t}{\gamma^2\hbar^2\sqrt{3(2\omega+3)}}}\right)^{-1}. \end{equation} \end{enumerate} By Eq. \eqref{eq:eom} we can obtain the expression of $Z_\pm(t)$ corresponding to $Y_\pm(t)$ as \begin{equation} \begin{aligned} Z_-(t)&=\frac{2\sqrt{2}|\xi_1|}{\hbar\gamma\sqrt{3\omega-8}}\left|\sinh(\frac{16\xi_1}{\gamma^2\hbar^2\sqrt{3(2\omega+3)}}t)\right|^{-1},\\ Z_+(t)&=\frac{2\sqrt{2}|\xi_1|}{\hbar\gamma\sqrt{8-3\omega}}\left|\cosh(\frac{16\xi_1}{\gamma^2\hbar^2\sqrt{3(2\omega+3)}}t)\right|^{-1}. \end{aligned} \end{equation} The equation of motion for $\phi$, which can be derived from Hamilton's equations together with the Hamiltonian constraint \eqref{eq:HamiltonianNH}, reads \begin{equation}\label{eq:phi-solution} \dot{\phi}_\pm=\frac{16\phi_\pm}{3\gamma^2(2\omega+3)\hbar^2}(5Y_\pm+3\xi_1). \end{equation} The solutions of Eq.
\eqref{eq:phi-solution} can be obtained as \begin{equation} \begin{aligned} \phi_-(t)=\phi_0\, 2^{\frac{5}{3\omega-8}}e^{-\frac{16\xi_1 t}{\gamma^2\hbar^2(3\omega-8)}}\left|\sinh(\frac{16\xi_1 t}{\sqrt{3(2\omega+3)}\gamma^2\hbar^2})\right|^{\frac{5}{3\omega-8}},\\ \phi_+(t)=\phi_0\, 2^{\frac{5}{3\omega-8}}e^{-\frac{16\xi_1 t}{\gamma^2\hbar^2(3\omega-8)}}\left|\cosh(\frac{16\xi_1 t}{\sqrt{3(2\omega+3)}\gamma^2\hbar^2})\right|^{\frac{5}{3\omega-8}}, \end{aligned} \end{equation} where $\phi_0$ is an integration constant. The dynamical evolution of $v$ and $b$ can be obtained by using the functions $X$, $Y$, $Z$, and $\phi$ as \begin{equation} v=\frac{\phi X^2}{4Z}+\frac{Z}{\phi},\ \sin(2b)=\frac{X}{v},\ \cos(2b)=1-\frac{2Z}{v\phi}. \end{equation} It should be noted that in the solutions obtained so far we adopted the coordinate time $t$ corresponding to the lapse function in Eq.~\eqref{eq:HamiltonianNH}. However, the Hubble parameter is defined with respect to the cosmological proper time $\tau$, which is related to the coordinate time by $\d\tau=8\pi G N\d t$. By denoting $\dot{v}:=\d v/\d t$, the Hubble parameter $\bm{H}$ can be expressed as \begin{equation}\label{Hubble} \bm{H}=\frac{\alpha\dot{v}}{24\pi Gv^2\phi}=\frac{4\alpha\phi^2X\left(\dot{\phi} X Z+2Z\phi\dot{X}-\phi X\dot{Z}\right)+16\alpha Z^2\left(\dot{Z}\phi-Z\dot{\phi}\right)}{24\pi G\phi\left(\phi^2X^2+4Z^2\right)^2}. \end{equation} Taking into account the Solar System experiments, we consider the case $\omega>8/3$. In this case, the dynamics is described by the functions $Y_-(t),\ Z_-(t)$, and $\phi_-(t)$. Since the functions $Y_-(t)$ and $Z_-(t)$ are ill-defined at $t=0$, they are valid in the domain $t \in (-\infty,0)\cup(0,\infty)$, and so is the lapse function $N=\phi v/\alpha$. Because $N$ does not vanish in this domain, $t$ is well defined as a time coordinate in each branch $(-\infty,0)$ or $(0,\infty)$.
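The branch $t\in(0,\infty)$ can be explored numerically. The sketch below is illustrative only (it uses the parameter values of the figures, $\gamma=0.2357$, $\hbar=\ell_p=1$, $\xi_1=5$, $\phi_0=1$, $\omega=10^4$, and is not the paper's own code): it evaluates the closed-form solutions, reconstructs $v(t)$ via $X=4(\xi_1+Y)/\hbar$ from $\xi_1=\hbar X/4-Y$, and obtains $\bm{H}=\alpha\dot{v}/(24\pi G v^2\phi)$ by finite differences; the sign change of $\bm{H}$ locates the bounce.

```python
import numpy as np

# Sketch of the omega > 8/3 branch on t > 0 (illustrative parameters).
gamma, hbar, l_p, omega, xi1, phi0 = 0.2357, 1.0, 1.0, 1.0e4, 5.0, 1.0
G = l_p**2/hbar
Delta = 4*np.sqrt(3)*np.pi*gamma*l_p**2
alpha = 6*np.pi*gamma*l_p**2/np.sqrt(Delta)
s = np.sqrt(3*(2*omega + 3))
k = 16*xi1/(gamma**2*hbar**2*s)                       # rate appearing in sinh(k t)

t = np.linspace(0.05, 3.0, 4000)
Y = (3*(omega - 1) - s)*xi1/(8 - 3*omega) + 2*s*xi1/(8 - 3*omega)/(1 - np.exp(2*k*t))
Z = 2*np.sqrt(2)*abs(xi1)/(hbar*gamma*np.sqrt(3*omega - 8))/np.abs(np.sinh(k*t))
phi = (phi0*2**(5/(3*omega - 8))*np.exp(-16*xi1*t/(gamma**2*hbar**2*(3*omega - 8)))
       *np.abs(np.sinh(k*t))**(5/(3*omega - 8)))
X = 4*(xi1 + Y)/hbar                                  # from xi_1 = hbar X/4 - Y
v = phi*X**2/(4*Z) + Z/phi

# The closed forms satisfy dY/dt = -2 Z^2 (checked on interior grid points).
resid = np.gradient(Y, t) + 2*Z**2
assert np.max(np.abs(resid[1:-1])) < 1e-2*np.max(2*Z**2)

# Hubble parameter: negative at early t, positive at late t.
H = alpha*np.gradient(v, t)/(24*np.pi*G*v**2*phi)
assert H[1] < 0 < H[-2]                               # sign change marks the bounce
```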
Moreover, for a given $t_0>0$, the integrals $\int_{\pm t_0}^{0^{\pm}}N(t)\d t$ and $\int_{\pm t_0}^{\pm\infty}N(t)\d t$ diverge. Hence the cosmological time $\tau$ ranges over $(-\infty,\infty)$ in either branch of the domain of $t$. Thus we can choose one of the branches, say $t\in (0,\infty)$, to cover the whole spacetime. Thanks to the divergence of the integrals, the hypersurfaces of $t=0$ and $t=\infty$ are actually the past and future timelike infinities, respectively. Furthermore, the effective dynamics will return to the classical one for $v\gg 1$. This happens in the classical regions $\frac{1}{t}\gg 1$ and $t\gg 1$, respectively. Now we consider the dynamical behavior of the Universe with $t\in (0,\infty)$. As $t\rightarrow0$, the leading terms of the functions $Y_-,Z_-,X_-,$ and $\phi_-$ read respectively \begin{equation}\label{eq:fiveone} \begin{aligned} Y_{-}(t)&\cong \frac{3\gamma^2\hbar^2(2\omega+3)}{16(3\omega-8)}\,\frac{1}{t},\\ Z_{-}(t)&\cong\frac{\gamma\hbar\sqrt{6(2\omega+3)}}{8\sqrt{3\omega-8}}\,\frac{1}{|t|},\\ X_{-}(t)&\cong\frac{3\gamma^2\hbar(2\omega+3)}{4(3\omega-8)}\,\frac{1}{t},\\ \phi_{-}(t)&\cong\phi_0\left(\frac{32\xi_1}{\sqrt{3(2\omega+3)}\gamma^2\hbar^2}\right)^{\frac{5}{3\omega-8}} t^{5/(3\omega-8)}. \end{aligned} \end{equation} Thus, their derivatives with respect to $t$ are respectively \begin{equation} \begin{aligned} \dot{Y}_{-}(t)&\cong-\frac{1}{t}Y_{-}(t),\quad &\dot{Z}_{-}(t)\cong-\frac{1}{t}Z_{-}(t),\\ \dot{X}_{-}(t)&\cong-\frac{1}{t}X_{-}(t),\quad &\dot{\phi}_{-}(t)\cong\frac{5}{3\omega-8}\,\frac{1}{t}\phi_{-}(t). \end{aligned} \end{equation} Hence, as $t\rightarrow0$, by Eq. \eqref{Hubble} the Hubble parameter approaches \begin{align}\label{eq:Hubble0} \bm{H}&\cong-\frac{\alpha(\omega-1)}{\pi\gamma\ell_p^2\sqrt{6(3\omega-8)(2\omega+3)}}<0. \end{align} Let us now consider the other side.
As $t\to\infty$, the leading terms of those functions become respectively \begin{equation} \begin{aligned} Y_{-}(t)&\cong\frac{3(\omega-1)- \mathrm{sgn}(t\xi_1)\sqrt{3(2\omega+3)}}{8-3\omega}\xi_{1},\\ X_{-}(t)&\cong\frac{4\left(5- \mathrm{sgn}(t\xi_1)\sqrt{3(2\omega+3)}\right)}{(8-3\omega)\hbar}\xi_1,\\ Z_{-}(t)&\cong\frac{4\sqrt{2}|\xi_{1}|}{\hbar\gamma\sqrt{3\omega-8}} \exp\left(\frac{-16\left|\xi_{1}t\right|}{\gamma^{2}\hbar^{2}\sqrt{3(2\omega+3)}}\right),\\ \phi_{-}(t)&\cong\phi_0\exp\left[\frac{16\xi_1 t}{\gamma^2\hbar^2(3\omega-8)}\left(\frac{5\,\mathrm{sgn}(t\xi_1)}{\sqrt{3(2\omega+3)}}-1\right)\right]. \end{aligned} \end{equation} Then their time derivatives are respectively \begin{equation} \begin{aligned} \dot{Y}_{-}(t)&\cong 0,\\ \dot{X}_{-}(t)&\cong 0,\\ \dot{Z}_{-}(t)&\cong- \mathrm{sgn}(t\xi_1)\frac{16\xi_1}{\gamma^2\hbar^2\sqrt{3(2\omega+3)}}Z_-(t),\\ \dot{\phi}_{-}(t)&\cong\frac{16\xi_1 }{\gamma^2\hbar^2(3\omega-8)}\left(\frac{5\,\mathrm{sgn}(t\xi_1)}{\sqrt{3(2\omega+3)}}-1\right)\phi_{-}(t). \end{aligned} \end{equation} Hence the asymptotic behavior of the Hubble parameter for $t\to \infty$ reads \begin{align}\label{eq:Hubblei} \bm{H}\cong\lim_{t\to\infty} \frac{256 \alpha\xi_1^2 e^{-\frac{16 |\xi_1 t|}{\gamma ^2 \sqrt{6 \omega +9} \hbar ^2}}}{24\pi G \gamma ^3 \hbar ^3 \sqrt{3 \omega -8} \sqrt{3 \omega +\frac{9}{2}}} = 0. \end{align} Equations \eqref{eq:Hubble0} and \eqref{eq:Hubblei} imply that there exists at least one moment $t_0\in(0,\infty)$ such that $\bm{H}(t_0)=0$. Hence a bounce of the Universe may happen at $t=t_0$. On the one hand, the negative Hubble parameter around $t=0^+$ implies that the Universe goes through an asymptotic de Sitter epoch there. On the other hand, the fact that $\bm{H}(t)$ approaches $0^+$ as $t\to\infty$ implies that the effective theory returns to the classical Brans-Dicke cosmology at late times.
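The quoted late-time constants can be compared with the exact closed-form solutions numerically; a short sketch (illustrative parameters, and $t\xi_1>0$ so that $\mathrm{sgn}(t\xi_1)=+1$):

```python
import numpy as np

# Sketch: late-time limits of Y_- and X_- versus the exact closed forms
# (illustrative parameters; t, xi1 > 0).
gamma, hbar, omega, xi1 = 0.2357, 1.0, 1.0e4, 5.0
s = np.sqrt(3*(2*omega + 3))
k = 16*xi1/(gamma**2*hbar**2*s)

def Y_exact(t):
    return (3*(omega - 1) - s)*xi1/(8 - 3*omega) + 2*s*xi1/(8 - 3*omega)/(1 - np.exp(2*k*t))

t = 5.0
Y_inf = (3*(omega - 1) - s)*xi1/(8 - 3*omega)          # quoted t -> infinity limit
X_inf = 4*(5 - s)*xi1/((8 - 3*omega)*hbar)
assert abs(Y_exact(t) - Y_inf) < 1e-10
assert abs(4*(xi1 + Y_exact(t))/hbar - X_inf) < 1e-9   # X = 4(xi_1 + Y)/hbar
```

The exponential correction at $t=5$ is already far below double precision, so the exact solution and the quoted constant agree to machine accuracy.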
It is easy to check that the asymptotic behavior of the Universe would not change if the residual term in the effective Hamiltonian \eqref{eq:effh} were taken into account. However, the detailed evolution around the bounce would be influenced by that term. The numerical simulation for the evolution of the Hubble parameter is plotted in Fig. \ref{fig:hubble}. In the left panel, the dynamics of $\bm{H}(t)$ driven by the Hamiltonian constraints \eqref{eq:effh} and \eqref{Hamiltonian-e} are compared. In the right panel, the dynamics of $\bm{H}(t)$ driven by the Hamiltonian constraint \eqref{Hamiltonian-e} with respect to different values of $\omega$ are shown. According to the results, there is only a single bounce with $\bm{H}(t)=0$. Around the bounce, the residual term does affect the dynamics. However, for various values of $\omega$, the qualitative features of $\bm{H}(t)$ are not influenced. Furthermore, the evolutions of $\phi$ and $v$ with respect to the cosmological time $\tau$ are also plotted in Fig.~\ref{fig:vH}. As shown in this plot, $v(\tau)$ bounces at $\tau_0$ with $\bm{H}(\tau_0)=0$. In the de Sitter epoch, $v(\tau)$ grows exponentially as $\tau$ goes from $0$ to $-\infty$. It is straightforward to check that the dynamics of $\bm{H}(t)$, $\phi(t)$, and $v(t)$ for $t\in (-\infty,0)$ behaves similarly to that for $t\in (0,\infty)$. \begin{figure} \centering \subfigure[]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{BD_Cosmology} \label{fig:Hubble} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{omegaH} \label{fig:omegaH} \end{minipage}% } \centering \caption{(a) Comparison of the evolutions of $\bm{H}(t)$ driven by \eqref{Hamiltonian-e} (the solid line) and by \eqref{eq:effh} (the red dashed line): The difference between the two evolutions of $\bm{H}(t)$ is also given (the black dot-dashed line).
(b) Evolution of $\bm{H}(t)$ for different values of $\omega$. The parameters in both panels are chosen as $\gamma =0.2357$, $\hbar =1$, $\ell_p =1$, $\xi =5$, and $\phi_0=1$. In the left panel, we choose $\omega =10^4$.}\label{fig:hubble}
\end{figure}
\begin{figure}[h!tb]
\centering
\includegraphics[width=0.6\linewidth]{vH}
\caption{The behaviors of $\phi$ and $v$ near the bounce compared with $\bm{H}$. The parameters in the plot are chosen as $\gamma =0.2357$, $\hbar =1$, $\ell_p =1$, $\xi =5$, $\phi_0=1$, and $\omega =10^4$.}
\label{fig:vH}
\end{figure}
\section{Discussion}\label{se:six}
In the previous sections, in order to inherit more features of the full LQBDT, we treated the Euclidean and Lorentzian terms of the Hamiltonian constraint independently in LQBDC. The Hamiltonian constraint operator \eqref{eq:cc}, alternative to the one obtained in \cite{zhang2013loop}, was constructed in Sec. \ref{se:three}. The effective Hamiltonian constraint \eqref{eq:effh} was then derived from the alternative Hamiltonian operator by the semiclassical analysis in Sec. \ref{se:four}. It turns out that there exists a residual quantum correction term in the effective Hamiltonian, which could not be obtained simply by replacing $b\to \sin(b)$ or $b\to \sin(2b)/2$ in the classical Hamiltonian constraint. This is a particular property of our LQBDC. The dynamics generated by the effective Hamiltonian constraint was analyzed in Sec. \ref{se:five}. The evolution equation of the Universe was solved analytically by neglecting the residual term, which is of order $\hbar^2$. The dynamical behavior of the Hubble parameter for the physically interesting case of $\omega\gg 1$ was considered. It turns out that the classical singularity is resolved by a quantum bounce which connects a de Sitter epoch to a usual classical Brans-Dicke cosmology.
Both the evolution driven by the effective Hamiltonian \eqref{Hamiltonian-e} and that driven by the original \eqref{eq:effh} with the residual term were numerically computed and plotted in Fig.~\ref{fig:Hubble}. The comparison shows that the two Hamiltonians determine qualitatively the same dynamics: the residual term affects the evolution around the bounce, while the two give the same asymptotic behavior. Since an asymptotic de Sitter epoch appears in our cosmological model, it is interesting to see whether that epoch of the model can match the observation of the currently accelerating Universe. By substituting \eqref{eq:fiveone} into \eqref{Hubble}, the Hubble parameter in the asymptotic de Sitter epoch can be expressed as
\begin{equation}
\bm{H}(t)\cong -\frac{2\alpha}{\sqrt{6}\pi \gamma\ell_p^2} \sqrt{\frac{3 \omega -8}{2 \omega +3}}\, \frac{2 (\omega -1) (3 \omega -8)+ (2 \omega +3) (3 \omega -13)\gamma ^2\phi_-(t)^2 }{ \left(\,2(3 \omega -8)+3(2 \omega +3)\gamma ^2 \phi_-(t)^2\, \right)^2}.
\end{equation}
Hence, if one required $\bm{H}(t)$ at some fixed $t$ to match the observation, the value of $\phi_-(t)$ would have to be sufficiently large, since $\bm{H}(t)$ scales as $\phi_-(t)^{-2}$ for large $\phi_-(t)$. For instance, letting $\omega=10^4$, one has $\phi_-(t)=8.899\times 10^{30}$. Moreover, $\bm{H}(t)$ should change slowly at that moment $t$. Such a requirement could be achieved by choosing $\phi_0$ and $\xi_1$ in the expression of $\phi_-(t)$ properly. However, it is straightforward to check that in this case the effective gravitational constant $G/\phi_-(t)$ in the Brans-Dicke theory is far from the observational value because of the huge value of $\phi_-(t)$. Thus there is no evidence that the emergent asymptotic de Sitter epoch could match our current Universe.
\section*{Acknowledgements}
The authors would like to thank Chun-Yen Lin for discussions. This work is supported by NSFC with Grants No. 11875006 and No. 11961131013. C. Z.
acknowledges the support of the Polish Narodowe Centrum Nauki, Grant No. 2018/30/Q/ST2/00811.
Q: How to schedule a task to run daily in the morning of the next day, before reaching the office?

Hi, I want to schedule a task to run automation on a virtual machine at least 3-4 hours after I leave my office. I have made a .vbs file which opens QTP and executes the scripts, and I have even made a script to stop my VM from auto-locking. But the problem is that when I set the task scheduler for, say, 2 hours after I lock my PC, it doesn't work. But if I schedule it for 15 minutes later, it triggers. Any suggestions? Perhaps there need to be some changes in the settings, or something else.

Thanks, Abhishek

A: It should work, and in fact it does work when you delay for 15 minutes after logoff but not when you delay for three hours. I am going to propose a workaround until we can get some more information and determine the cause of the failure.

The workaround is to have your logoff script write a "sentinel" file that contains the time of the logoff in it. Create a second task and schedule it to run once per hour. The second task looks for the sentinel file; if not found, it ends. If the sentinel file exists, it compares the current time to the logoff time in the file. If it has been less than your threshold (3 hours), the task ends. If the delay threshold is exceeded, it performs your scripts and deletes the sentinel file so that the scripts will not execute again. One final step would be to create a logon script that deletes the sentinel file, if present, when you log on. This would only come into play if you logged off, were heading home, remembered you forgot to do something, and logged in again before your overnight process had run.

The rest of this is a comment, not an answer; I don't know the SE protocol for that.

In terms of answering why your delayed logoff trigger works after fifteen minutes but not after three hours, maybe I should leave that to others who have experienced it, as I have not. But I would think that spelling out which VM software you are running and which guest OS you are running under it would be useful in that regard. I would also ask you to recreate your scenario on a real machine instead of a virtual one. If it works on the real machine and fails on the virtual one, we can focus on the virtual aspects; if it fails on both, then we can concentrate on Windows. I am also curious as to how you know that it "did not trigger" versus it triggered but did not do anything. I am assuming that you know this by looking at the "last run" column: if you set the trigger delay at 15 minutes, "last run" is updated to 15 minutes after you logged off, but if you set the trigger delay to 3 hours, "last run" stays what it was before you logged off.
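The sentinel-file workaround described in the answer can be sketched in a few lines. This is a minimal illustration in Python rather than the questioner's VBScript; the file name and the `run_scripts` callable are hypothetical placeholders for the real logoff artifacts and QTP launcher.

```python
import os
import time

SENTINEL = "logoff_sentinel.txt"   # illustrative path for the sentinel file
DELAY_SECONDS = 3 * 60 * 60        # the 3-hour threshold from the answer


def write_sentinel(path=SENTINEL):
    """Called by the logoff script: record the logoff time in the sentinel."""
    with open(path, "w") as f:
        f.write(str(time.time()))


def check_and_run(run_scripts, path=SENTINEL, now=None):
    """Called by the hourly scheduled task. Returns True if the scripts ran.

    run_scripts is a callable standing in for launching the automation
    (e.g. the QTP .vbs file). `now` is injectable to make testing easy.
    """
    if not os.path.exists(path):
        return False                        # no logoff recorded yet
    with open(path) as f:
        logoff_time = float(f.read())
    now = time.time() if now is None else now
    if now - logoff_time < DELAY_SECONDS:
        return False                        # threshold not reached yet
    run_scripts()                           # launch the overnight automation
    os.remove(path)                         # delete sentinel so it runs once
    return True
```

A logon script would simply delete `SENTINEL` if present, covering the "came back to the office early" case the answer mentions.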
# An identification problem in an urn and ball model with heavy tailed distributions

Abstract: We consider in this paper an urn and ball problem with replacement, where balls are of different colors and are drawn uniformly from a unique urn. The numbers of balls with a given color are i.i.d. random variables with a heavy tailed probability distribution, for instance a Pareto or a Weibull distribution. We draw a small fraction $p\ll 1$ of the total number of balls. The basic problem addressed in this paper is to know to which extent we can infer the total number of colors and the distribution of the number of balls with a given color. By means of Le Cam's inequality and the Chen-Stein method, bounds for the total variation norm between the distribution of the number of balls drawn with a given color and the Poisson distribution with the same mean are obtained. We then show that the distribution of the number of balls drawn with a given color has the same tail as that of the original number of balls. We finally establish explicit bounds between the two distributions when each ball is drawn with fixed probability $p$.

Citation: Christine Fricker, Fabrice Guillemin, Philippe Robert. An identification problem in an urn and ball model with heavy tailed distributions. Preprint, 2009. HAL Id: inria-00347012, version 2. arXiv: 0812.2546.
<?php
/**
 * The base object from which all DataObjects are extended from
 *
 * @category  Mad
 * @package   Mad_Model
 * @copyright (c) 2007-2009 Maintainable Software, LLC
 * @license   http://opensource.org/licenses/bsd-license.php BSD
 */
class Mad_Model_Serializer_Xml extends Mad_Model_Serializer_Base
{
    protected $_builder = null;

    /**
     * To keep the code as similar to Rails as possible, we use
     * Mad_Support_Builder as a proxy to XMLWriter
     */
    public function getBuilder()
    {
        if (!$this->_builder) {
            if (!isset($this->_options['indent'])) {
                $this->_options['indent'] = 2;
            }
            $options = array('indent' => $this->_options['indent']);

            if (!empty($this->_options['builder'])) {
                $this->_builder = $this->_options['builder'];
            } else {
                $this->_builder = new Mad_Support_Builder($options);
                $this->_options['builder'] = $this->_builder;
            }

            if (empty($this->_options['skipInstruct'])) {
                $this->_builder->instruct();
                $this->_options['skipInstruct'] = true;
            }
        }
        return $this->_builder;
    }

    /**
     * @return string
     */
    public function root()
    {
        if (!empty($this->_options['root'])) {
            $root = $this->_options['root'];
        } else {
            $root = $this->_record->getXmlClassName();
        }
        return $this->dasherize($root);
    }

    /**
     * Dasherize by default or if options['dasherize'] = true
     *
     * @return boolean
     */
    public function isDasherized()
    {
        return !array_key_exists('dasherize', $this->_options)
            || !empty($this->_options['dasherize']);
    }

    /**
     * Proxy to support dasherize
     *
     * @param  string $name
     * @return string
     */
    public function dasherize($name)
    {
        return $this->isDasherized() ? Mad_Support_Inflector::dasherize($name) : $name;
    }

    /**
     * @return array
     */
    public function getSerializableAttributes()
    {
        $attributes = array();
        foreach ($this->getSerializableAttributeNames() as $name) {
            $attributes[] = new Mad_Model_Serializer_Attribute($name, $this->_record);
        }
        return $attributes;
    }

    /**
     * @return array
     */
    public function getSerializableMethodAttributes()
    {
        $methods = !empty($this->_options['methods']) ? $this->_options['methods'] : array();

        $methodAttributes = array();
        foreach ((array)$methods as $name) {
            if (method_exists($this->_record, $name)) {
                $methodAttributes[] = new Mad_Model_Serializer_MethodAttribute($name, $this->_record);
            }
        }
        return $methodAttributes;
    }

    /**
     * @return array
     */
    public function getSerializablePropertyAttributes()
    {
        $properties = !empty($this->_options['properties']) ? $this->_options['properties'] : array();

        $propertyAttributes = array();
        foreach ((array)$properties as $name) {
            try {
                $propertyAttributes[] = new Mad_Model_Serializer_PropertyAttribute($name, $this->_record);
            // ignore exceptions -- just don't add as a property if it errors
            } catch (Exception $e) {}
        }
        return $propertyAttributes;
    }

    public function addAttributes()
    {
        $attributes = array_merge($this->getSerializableAttributes(),
                                  $this->getSerializablePropertyAttributes(),
                                  $this->getSerializableMethodAttributes());
        foreach ($attributes as $attribute) {
            $this->addTag($attribute);
        }
    }

    /**
     * @param Mad_Model_Serializer_Attribute $attribute
     */
    public function addTag($attribute)
    {
        $attrName  = $this->dasherize($attribute->getName());
        $attrValue = $attribute->getValue();
        $attrDecos = $attribute->getDecorations(empty($this->_options['skipTypes']));

        $builder = $this->getBuilder();

        // check if attribute values need to be further serialized
        if (is_array($attrValue)) {
            $options = array_merge($this->_options, array('root' => $attrName));
            $ao = new Mad_Support_ArrayObject($attrValue);
            $ao->toXml($options);
        } else {
            $builder->tag($attrName, $attrValue, $attrDecos);
        }
    }

    /**
     * @param string $association
     * @param mixed  $records
     * @param array  $opts
     */
    public function addAssociations($association, $records, $opts)
    {
        // association collection
        if (is_array($records)) {
            $name = $this->dasherize($association);

            if (empty($records)) {
                $this->getBuilder()->tag($name, '', array('type' => 'array'));
            } else {
                $tag = $this->getBuilder()->startTag($name, '', array('type' => 'array'));
                $associationName = Mad_Support_Inflector::singularize($association);

                foreach ($records as $record) {
                    $type = get_class($record) == $associationName ? null : get_class($record);
                    $options = array_merge($opts, array('root' => $associationName, 'type' => $type));
                    $record->toXml($options);
                }
                $tag->end();
            }

        // single association
        } else {
            $records->toXml(array_merge($opts, array('root' => $association)));
        }
    }

    /**
     * Use the record to build associations
     *
     * @param string $association
     * @param mixed  $records
     * @param array  $opts
     */
    public function yieldRecords($association, $records, $opts)
    {
        $this->addAssociations($association, $records, $opts);
    }

    /**
     * Return the serialized XML string
     *
     * @return string
     */
    public function serialize()
    {
        $args = array();
        if (!empty($this->_options['namespace'])) {
            $args['xmlns'] = $this->_options['namespace'];
        }
        if (!empty($this->_options['type'])) {
            $args['type'] = $this->_options['type'];
        }

        $builder = $this->getBuilder();
        $tag = $builder->startTag($this->root(), '', $args);

        $this->addAttributes();
        $this->addIncludes();

        $tag->end();
        return $builder->__toString();
    }
}
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:background="#85C1E9"
    android:paddingBottom="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <TextView
        android:text="@string/hello_world"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="74dp"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true"
        android:textColor="#7D3C98"
        android:textSize="43sp"
        android:id="@+id/textView" />

    <!-- android:onClick must name a public method of the hosting activity;
         the original value "true" is invalid. "onButtonClick" is a placeholder. -->
    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello World"
        android:id="@+id/button"
        android:onClick="onButtonClick"
        android:textColor="#F7F9F9"
        android:background="#1B4F72"
        android:layout_below="@+id/textView"
        android:layout_marginTop="31dp"
        android:layout_alignEnd="@+id/editText"
        android:layout_alignStart="@+id/editText" />

    <EditText
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:inputType="textPersonName"
        android:text="Jonathan"
        android:ems="10"
        android:id="@+id/editText"
        android:layout_marginTop="43dp"
        android:layout_below="@+id/button"
        android:layout_centerHorizontal="true" />

</RelativeLayout>
using System;
using FluentAssertions;
using Paradigm.CodeGen.Output.Models.Configuration;
using Paradigm.CodeGen.Output.TypeMatchers;
using NUnit.Framework;
using Paradigm.CodeGen.Input.Models.Definitions;

namespace Paradigm.CodeGen.Tests.Output.TypeMatchers
{
    public class IsStructTypeMatcherTest
    {
        #region Properties

        private StructDefinition StructDefinition { get; set; }

        private IsStructTypeMatcher TypeMatcher { get; set; }

        #endregion

        #region Setup

        [OneTimeSetUp]
        public void Setup()
        {
            this.StructDefinition = new StructDefinition();
            this.TypeMatcher = new IsStructTypeMatcher();
        }

        #endregion

        #region Public Methods

        [Test]
        public void ThrowWhenParametersAreInvalid()
        {
            var config = new TypeMatcherConfiguration { Parameters = new[] { "Param1", "Param2" } };
            Action match = () => this.TypeMatcher.Match(config, this.StructDefinition);
            match.Should().Throw<Exception>().WithMessage("Is Struct type matcher hasn't any arguments.");
        }

        [Test]
        public void ShouldNotMatchIfProcessedObjectIsNotAStructDefinition()
        {
            var config = new TypeMatcherConfiguration();
            this.TypeMatcher.Match(config, new ClassDefinition()).Should().BeFalse("Must return True only when objectDefinition is StructDefinition");
        }

        [Test]
        public void ShouldMatchIfProcessedObjectIsAStructDefinition()
        {
            var config = new TypeMatcherConfiguration();
            this.TypeMatcher.Match(config, this.StructDefinition).Should().BeTrue("Must return False when objectDefinition is not StructDefinition");
        }

        #endregion
    }
}
Editor's blog 6 June 2009: Bob Sang RIP

It is with huge sadness that I write about the death of Bob Sang yesterday. Bob was 61. His particular specialism was in patient and public involvement and engagement in health and social care, and in facilitation of this. Bob was working in this area long before it was fashionable, let alone a statutory duty. He worked in academia and at the Kings Fund, before setting up his independent consultancy Sang Jacobsson. He was a strategic advisor to and latterly an honorary member of the Patients' Association. He co-facilitated the Engaging Communities Learning Network for the National Primary and Care Trusts (NatPaCT) development programme, which later evolved into NHS Networks. Bob was the UK's first professor of patient and public involvement at South Bank University, and published widely. Recently, Bob was special advisor to the Health Select Committee Inquiry into Patient and Public Involvement in Health. Bob was also a wonderful man and a great friend. To go to any NHS event with Bob, or meet him at the Kings Fund, was to see a man with contacts everywhere. He believed in networking, not in the careerist sense but in introducing people with common interests. He was a connector. His personal warmth and charm were considerable, but he would stand and fight for principle. He also had a great radar for cant and for the domineering, and would not let them go unchallenged. He was great company: insightful, witty and creative. Devoted to his wife Lisa and children and grandchild, Bob was a man of clear values and immense integrity. I first met Bob while I was editing British Journal of Healthcare Management, and often published his work, both there and elsewhere. Recently, Bob gave me a new article for this site. It is not clear whether it is his last article, but it is certainly among them.
I had the privilege of working with Bob on several projects, including one without which I am sure that my move into freelance working would have failed. I owe him an immense debt for his support, advice, encouragement and help. Bob lived a full life and a good life, and I am lucky to have known him. I will miss Bob Sang more than I can say. Tributes to Bob Sang Tributes from Bob's friends and colleagues: Harry Cayton, Council for Healthcare Regulatory Excellence - "Bob was a gentle giant of public engagement; consistent, persistent and generous of mind and person. We'll miss him". Rick Stern, Primary Care Foundation and NHS Alliance - "Bob had a compassion and interest in other people and their lives in a way that was unusual and affecting. I would always come away from meeting up with Bob with a slightly different view of the world. Although he loved developing and shaping ideas, they were always steeped in personal experience: both his own and of all the people he met and had touched him in some way. His integrity and his passion shone through". Dr Phil Hammond, GP, writer and broadcaster - "Bob was extraordinarily wise, knowledgeable, funny and positive. He was the voice of patient and public involvement, and never shied away from difficult issues. He once described himself to me as a 'constructive subversive', which is something to which we should all aspire. The NHS will be poorer without him". David Crepaz-Keay, Head of Patient and Public Involvement, Mental Health Foundation - "I worked with Bob a number of times over the years. He was one of the few mental health service users I knew and worked with who brought genuine patient involvement and personal experience out of the ghetto and into mainstream political, academic and philosophical thinking. He contributed a lot to my own personal and academic development and it was always a joy to work or spend time with Bob". Jean Trainor, Health Links - "I have known Bob for about 12 years. 
What struck me most about him was his humanity; his absolute commitment to the NHS and the rights of patients; and the fact that he never seemed to lose faith in it all. He was kind and true". John Flook, independent financial consultant - "I got to know Bob when I began to build a portfolio career in healthcare. Bob led a series of seminars, and I did the 'futures' session. It was a new venture for me, and I greatly appreciated the advice and support that Bob gave me. His constructive nature and positive outlook certainly gave me much encouragement and a belief in my ability to tackle new ground. "He was a thoroughly genuine bloke - committed to doing things because they were the right things to do; not because it might provide some personal advantage. He cared deeply about the disadvantaged, the less well-off and those who did not enjoy fair access to their rights. He set an example for those who knew him. We will miss him". Michael Sobanja, chief executive, NHS Alliance - "This is very sad news. Bob was a good man and a great ally". Simon Williams, Expect Health community interest company - "Bob was an inspiration and will be deeply, sadly missed." Steve Collins, National Specialised Commissioning Group, DH - "It's hard to imagine an NHS without Bob. He was a close work colleague and a friend. He has been really key to some major pieces of work that I am doing and have done; but mainly, I feel an awful sense of loss – his boyish smile and infectious laugh, his wise counsel and lateral perspective, his awesome contact list (of course), but mainly his lovely personality and friendship. "The Service will never be the same again." Jeanne Hardacre, independent consultant - "Part of Bob's unique legacy is a genuine belief amongst thousands of people he worked with that we must continue to constructively challenge the system where it appears to be less than patient-oriented.
Bob was passionate and relentless in his quest to have this (sometimes unwelcome) message heard at all levels of the system. His family was his other passion; he always spoke with palpable warmth and joy about his wife, children and grandchildren". Brendan O'Rourke, Training Manager Expert Patient Programme community interest company - "Bob, your energy, breadth of experience and most of all your commitment to people with long-term conditions will be sadly missed". Beryl Furr, non-executive director, NHS South East Essex PCT - "If we're lucky, we sometimes meet someone who gives so much more than he takes out of life. Bob was that man, and his loss is immeasurable". Kate Lorig, Patient Education Research Centre, Department of Medicine at the Stanford University School of Medicine, USA – "This is a story from my Jewish tradition. There once was a town band. Unfortunately, their trumpet player was always a note or two above and ahead of the rest of the band. There was nothing to be done. One day he died, and went to play in the heavenly band. Here he played just as he always had, but was right on key and in time. "Bob was like this. Always a note or two higher and ahead. I will miss him". Ken Jarrold, Dearden Consulting – "I knew Bob for about 20 years and worked with him on a number of projects starting with the MESOL (Management Education Scheme by Open Learning) learning materials. I had a very high regard for Bob - in particular for his curiosity, his lively intellect, his warmth, his sense of fun and his strong values. Bob was one of the good guys, and he will be very badly missed". Jane Keep, independent coach and facilitator - "Bob worked ceaselessly with drive and determination for the plight of the patient, and to enable the patient voice in health and healthcare services. He worked with thoughtfulness in the way he connected those working in and around health, using his extensive networks. 
He always offered support, taking time for all with whom he came into contact". Candy Morris, chief executive, NHS South Coast strategic health authority – "'I've known Bob as a colleague and friend for over eight years, and have treasured his wisdom and rich insights - even when uncomfortable! I've always enjoyed his early morning calls with 'just three thoughts …', as well as our regular swapping of thoughts over a glass of wine. His authenticity, rigour and passion will be hugely missed by so many people". Dr Brian Fisher GP, director, PAERS and PPI lead, NHS Alliance – "Bob was an amazing man. He always lifted my spirits and made me feel the next step was possible. He harnessed his depression in such a way that it made it easier to feel with him and for him. And he was so rare – a man in whom feeling and emotion was an integral part of his understanding of the world and of work. "He had a visual mind – he would always see situations in terms of a diagram. As I talked with him about some knotty problem, there would bloom on the paper between us circles, squares, arrows and stick people who, between them, would explain what the current problem was and how we should move forwards. The diagrams always made sense at the time … "He will be sadly missed. I will miss him". Stephen Thornton, chief executive, Health Foundation - "Bob was a loyal and thoughtful friend of the Health Foundation. For the last decade he offered his insights and his gentle, persuasive steer to our work on leadership development and on engaging patients in healthcare quality. He proffered me much personal support, standing quietly by me in times of trouble. He found opportunities to challenge and inform me, but always with the utmost grace and patience. He was simply a delight to work with, and I will miss him greatly." Angela Greatley, chief executive, Sainsbury Centre for Mental Health - "Bob was a pleasure to work with. 
His commitment, his good humour and his openness were defining characteristics of a man who helped to make health services better and more responsive to us, the people who use them. His dedication to that cause and his passion for better healthcare made him a great advocate for the voice of the service user. His contribution to this cause will greatly be missed." Bec Hanley, independent consultant, TwoCan Associates - "I loved Bob's integrity, his generosity and his incredibly enthusiastic capacity for networking. Like many people, I benefited from Bob's knack of linking people and projects together - he always did this in such an unselfish way. It was a real privilege to work with him". Pippa Hague, management consultant, SMS Management & Technology - "Bob was a man who carried himself with integrity, passion and intelligence in all that he did. Professionally, Bob pushed us all to be the best that we could be and made us strive to become the ethical backbone of the NHS, ensuring patients and the public were at the centre of policy decision-making and service delivery. "Personally, Bob was an inspiration to me on many levels: he forced me to apply my intellect to problem-solving; he challenged me to always do what is right rather than what is easy; he introduced me to some of the most interesting people I have met and had a knack for bringing together the right people at the right time to make stuff happen. I will remember him as a master networker – we would often say that in the real world, there is the principle of six degrees of separation. With Bob, it was down to three. "I will remember the sparkle in his eyes when he was facilitating a good group; the energy, passion and trust he drew out of others. But mainly I will remember the long evening conversations over a table filled with great people, laughter and tapas. To all the rest of Bob's friends, family and colleagues, I send my love in this time of loss". 
Jean Thompson, Talking Health Network – "I knew Bob through my work in putting the person with a long-term health condition in the driving seat when managing their health. He was a true friend of lay-led self-management: someone with whom I could bounce ideas and share hopes and fears about policies and practice - particularly when the two seemed at odds with one another". Katherine Andrews, fundraising manager, The Prince's Trust – "Bob was such a huge influence in my life and treated me with such respect at all times. I know I'm only one of many whose life was changed for the better by meeting this wonderful, warm man". Ed Rosen, Institute for Strategic Leadership and Service Innovation, London South Bank University – "Bob used to use his hands in an extraordinary way. As I watched his hands move, I could almost feel him shaping an idea out of thin air! I will miss that the most; almost as much as the 'Hiya!' that would greet me either from the other end of a mobile or from a doorway or street. I now think that mobiles were invented for Bob: sometimes he seemed to be permanently attached to one. "So how did it all begin? One sunny summer day in Elephant and Castle where I was introduced to the newly appointed Professor of Patient and Public Involvement in Health. Bob was really chuffed with this appointment at London South Bank University, as it provided him with an opportunity to reach back into South London's history and yank out a brilliant nugget, which was the Peckham Health Centre. He enthused about Peckham in his inaugural professorial lecture. "Bob had a shed-load of chutzpah, which he reminded us about last Tuesday, the last time I saw him. He said 'don't ask permission first; but if it doesn't work, ask for forgiveness later'. "Bob sang for most of his life and for some of us lucky ones, we had a chance to sing with him".
Duncan Selbie, chief executive, Brighton and Sussex University Hospitals NHS Trust - "Bob Sang, England's first Professor of patient and public involvement, believed passionately in the NHS, and he translated that passion into a fervour and commitment to improving it. He was a true pioneer who was talking about the importance of involving patients and the public in designing and improving health services at a time when such ideas were met with apathy and even derision. "But like all true pioneers, rather than being deterred by a less-than-positive response, he kept going, finding new ways of presenting his ideas and pushing his cause until people started to listen. He relentlessly articulated the importance and value of talking and listening, to each other and to the patients and public who use our services, and can rightfully take much of the credit for transforming a minority view into the unquestionable principle and statutory requirement it is today". Thurstine Basset, Basset Consultancy Ltd – "When the railway was briefly named as 'Network South East' we all thought this was a good title for Bob since he conducted so many informal seminars and discussion groups on the trains to and from Victoria and London Bridge - I particularly remember one on stress at work where he had over half the carriage involved! "I shall always remember his smile, his energy and his unchanging principles of inclusion and empowerment. In particular, on one day back in the 1980s, I had had an extraordinary meeting in the morning which filled me with alarm about the direction services were going in and the amoral attitude of some of the more senior people involved in the NHS. I met Bob that afternoon (by chance at Victoria Station). He strode across the concourse with a big smile on his face and after 10 minutes of our meeting, I felt assured that all was not lost. 'Thank God for Bob', I remember thinking at the time". 
Julia, Baroness Neuberger – "Well, all I can say is that like everyone else who has written about Bob, I loved him; he could drive me mad; he really cared; and he was unwaveringly brave and committed to patient involvement ... in his short 61 years, he changed things for people, especially vulnerable people". Maureen Dale, Carer and Patient Involvement, NHS South of Tyne and Wear - "Bob was a star. Like a star, he was light years ahead of me; like a star he guided me through challenges; and like a star, he will continue to inspire me for a long time to come".
Q: How to change a QTreeView model on an action triggered signal in PyQt5?

I am trying to build a PyQt5 application that displays a tree view and populates it dynamically on a button press. But it crashes (Process finished with exit code 134 (interrupted by signal 6: SIGABRT)) whenever I try to set or populate the model from within a function assigned to an action triggered signal (although it works just fine if I instantiate the model, load the data and assign the model to the TreeView in the window __init__ itself rather than in a function assigned to a signal). How do I achieve the desired behaviour? The whole model content (including the set of columns) is meant to change completely, and often, during runtime. The UI is designed in Qt Designer and generated with pyuic5.

Here is my window code:

    class MainWindow(QMainWindow):
        def __init__(self):
            super().__init__()
            self.ui = Ui_MainWindow()
            self.ui.setupUi(self)
            # model = MyModel()  # UPDATE: useless, this wasn't here in the last pre-question version of the code actually
            self.ui.actionLoad.triggered.connect(MainWindow.load)  # UPDATE: Here is a mistake - should be self.load, not MainWindow.load

        # @staticmethod  # UPDATE: this wasn't here in the last pre-question version of the code actually
        def load(self):
            model = MyModel()
            self.ui.treeViewLeft.setModel(model)
            self.model.load()  # UPDATE: Here is a mistake - should be model.load(), not self.model.load()

Here is my model code:

    class MyModel(QStandardItemModel):
        def __init__(self, *args, **kwargs):
            super(MyModel, self).__init__(*args, **kwargs)

        def load(self):
            self.clear()
            self.setHorizontalHeaderLabels(["Name", "Attr1", "Attr2"])
            self.appendRow([QStandardItem('item1'), QStandardItem('attr11'), QStandardItem('attr21')])
            self.appendRow([QStandardItem('item2'), QStandardItem('attr12'), QStandardItem('attr22')])
            self.appendRow([QStandardItem('item3'), QStandardItem('attr13'), QStandardItem('attr23')])

A: I recommend you execute your code in the CMD or terminal in these cases, since many IDEs have limitations here. By running it you get this error message:

    Traceback (most recent call last):
      File "main.py", line 29, in load
        self.ui.treeViewLeft.setModel(model)
    AttributeError: 'bool' object has no attribute 'ui'
    Aborted (core dumped)

A static method belongs to the class itself rather than to an object of the class, so a method that modifies an object should not be static: remove that decorator. You must also connect the signal to the slot through self, so that the bound method is called. The solution is the following:

    class MainWindow(QMainWindow):
        def __init__(self):
            super().__init__()
            self.ui = Ui_MainWindow()
            self.ui.setupUi(self)
            self.ui.actionLoad.triggered.connect(self.load)

        def load(self):
            model = MyModel(self)
            self.ui.treeViewLeft.setModel(model)
            model.load()
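The root cause generalizes beyond PyQt: a signal calls each connected slot with the arguments it emits, so connecting the unbound `MainWindow.load` means the `checked` boolean emitted by `triggered` arrives as `self`. A minimal stand-in in plain Python (no Qt required; `FakeSignal` and `Window` are hypothetical names for illustration, not Qt classes) reproduces both the crash and the fix:

```python
# Minimal stand-in for a Qt signal: it calls each connected slot with the
# arguments it emits, the way QAction.triggered emits a `checked` bool.
class FakeSignal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)


class Window:
    def __init__(self):
        self.loaded = False

    def load(self, checked=False):
        self.loaded = True


# Bug: connecting the *unbound* Window.load means the emitted bool is
# passed as `self`, so attribute assignment fails on a bool.
buggy = FakeSignal()
buggy.connect(Window.load)
error = None
try:
    buggy.emit(False)
except AttributeError as exc:
    error = str(exc)   # e.g. "'bool' object has no attribute 'loaded'"

# Fix: connect the *bound* method, so `self` is the window instance.
win = Window()
fixed = FakeSignal()
fixed.connect(win.load)
fixed.emit(False)
```

The same reasoning explains the `'bool' object has no attribute 'ui'` traceback in the answer: `self` was the boolean argument, not the window.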
Human resources play a key role in any company; they are the people who make up the workforce of an organization. According to entrepreneur.com, HR is "the department or support systems responsible for personnel sourcing and hiring, applicant tracking, skills development and tracking, benefits administration and compliance with associated government regulations". Any mix-up concerning these issues can cause major legal problems for your business, as well as major employee dissatisfaction. But small businesses often don't have the staff or the budget to properly handle the nitty-gritty details of HR. Because of this, more and more small businesses are beginning to outsource their HR needs. There are two ways to work as a recruiter. You can either work for an employer as part of the HR department, or you could work in an agency that specialises in finding the right people for various organisations. A career in recruiting requires not only people skills but also an aptitude for sales and marketing. A career in training involves identifying needs and developing programmes. 'Human capital' is now seen as key to business growth, and good employers focus on developing the skills and knowledge of their workforce. The Irish Institute of Training & Development, founded in 1969, is the non-profit professional body representing members concerned with human resource training and development in Ireland. Our 1,500+ members work in business, industry, consultancy, voluntary, community, education and the public sector. The National Recruitment Federation is a voluntary organisation set up to establish and maintain standards and codes of practice for the recruitment industry in Ireland. 
Representing recruitment agencies throughout the country, NRF members aim to communicate their commitment to providing quality service by agreeing to abide by a strict Code of Conduct. Founded in 1971, the NRF seeks to provide its members with the best possible service in terms of support, communication, advice sharing and problem solving, and in doing so to promote professional competence within the industry. As part of this mission the NRF has inaugurated a formal education programme, the Certificate in Recruitment Practice, to ensure all new entrants to the industry have a solid grounding in legislation, customer service, operations and sales, equipping graduates of the programme with the tools and knowledge to provide a quality service to clients and candidates alike. Membership of the NRF is granted only to organisations that meet criteria of excellence (including adherence to the provisions of the Employment Agency Act 1971 and all other relevant Government legislation & amendments) and who agree to abide by the NRF Code of Conduct.
\section{Introduction} Over time, several criteria have been developed to define, in a more or less quantitative way, the resolving power of an optical imaging system. Among the most popular ones there is the Rayleigh criterion \cite{Rayleigh1879}, which states, in a heuristic way, that the wavelength sets the minimum resolvable transverse separation between two point sources $A$ and $B$. According to the Rayleigh criterion, two point-sources can be resolved when the first diffraction minimum of the image of $A$ coincides with the maximum of the image of $B$. Indeed, this criterion is strictly related to the point-spread function (PSF) of the optical system. In practice, the knowledge of the PSF alone is not sufficient to determine the resolving power, as it also depends on the signal-to-noise ratio (SNR). A number of techniques have been developed to circumvent the Rayleigh limit \cite{Hell2007}. These include switching the emission on and off \cite{Dickson1997}, near-field probing \cite{Drig1986}, or exploiting optical non-linearities \cite{Hell1994}, just to name a few. Most of these techniques rely on source engineering, which is not an option for astronomical observations. Recently, Tsang et al.~\cite{Tsang2016} proposed a technique to achieve far-field super-resolution of a pair of natural, incoherent point-sources. This is enabled by linear optics and photon detection in the photon-counting regime, and exploits the additional information contained in the phase and in the spatial correlations of the optical field. Such information is ignored in direct imaging but can be extracted through a coherent processing of the field before detection, using interferometric techniques such as SPAtial-mode DE-multiplexing (SPADE) \cite{Zhou2019,Xue2001,Abouraddy2012,Martin2017} or Super-Localization via Image-inVERsion interferometry (SLIVER) \cite{Larson2019,Tang2016,Wicker2009}. 
These and other interferometric techniques \cite{Tham2017,PRL2020,Ugo2022} have been demonstrated for super-resolution imaging and high-precision distance measurements \cite{Parniak2018, Sorelli}, especially for the problem of estimating the transverse separation between two point-sources \cite{Par2016,Tham2017,Yang2016,Boucher2020}, both in the photon-counting regime \cite{MT2019} and for bright sources \cite{Lvovsky,Nayak}. \begin{table}[b!] \center \begin{tabular}{ |p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| } \hline $x_{01}^{00}$ & $x_{10}^{00}$ & $x_{02}^{00}$ & $x_{20}^{00}$ & $x_{11}^{00}$ \\ \hline -8.7 & -7.7 & -5.9 & -6.2 & -12.1\\ \hline \end{tabular} \caption{Cross-talk values from the $\text{HG}_{00}$ mode into the $\text{HG}_{nm}$ modes. The values are deliberately enhanced to simulate a real observation with reduced control and misalignment.} \label{ct1} \end{table} In a very fine experiment \cite{Boucher2020}, Boucher \textit{et al.}~demonstrated separation measurements with high precision. In their work, the sensitivity is quantified by the ratio $r = d_m/w_0$, where $d_m$ is the minimum measurable separation, and $w_0$ is the beam waist. They reported $r \simeq 5 \cdot 10^{-2}$ using a Multi-Plane Light Conversion system as a demultiplexer \cite{Morizur2010, Labroille2014}. Their observed sensitivity is limited by the cross-talk $x_{nm}^{00}$ between the $\text{HG}_{00}$ and generic $\text{HG}_{nm}$ channels, quantified as \begin{equation} x_{nm}^{00}=10 \log_{10}\left(\frac{P_{nm}}{P_{00}}\right) \, , \end{equation} where $P_{nm}$ represents the output power on the $\text{HG}_{nm}$ channel (fiber-coupled in our case) when a $\text{HG}_{00}$ mode is injected with a power equal to $P_{00}$ (free-space coupled). Here we exploit balanced detection to suppress the effects of cross-talk. To test the system, we intentionally degrade the mode matching of the coupling optics. 
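As a quick numerical sanity check of the cross-talk definition above (a sketch with made-up power values, not data from the experiment): a channel that collects 1% of the reference power sits at $-20$ dB.

```python
import math

def crosstalk_db(p_nm, p_00):
    """Cross-talk x_nm^00 = 10 * log10(P_nm / P_00), in dB."""
    return 10.0 * math.log10(p_nm / p_00)

# Hypothetical values: 1% leakage into an HG_nm channel relative to HG_00
# corresponds to -20 dB; equal powers correspond to 0 dB.
leakage_db = crosstalk_db(0.01, 1.0)
```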
Deliberately degrading the alignment induces large cross-talk between the channels (see table \ref{ct1}), which in turn simulates in-field experimental conditions (e.g., cells for a microscope, binary stars for a telescope). Balanced detection yields noise cancellation and leads to an effective zero-background measurement. With this technique we improve the ratio $d_m/w_0$ by a factor $\simeq 4$: this is a very promising result for the application of SPADE in real observation campaigns. The experimental setup is shown in figure~\ref{fi:setup}. We combine two telecom fiber lasers (1.55 $\mu m$) on a non-polarizing beam splitter (NPBS) to mimic the point-like sources. Each fiber laser is coupled with a collimator and partially mode-matched using two lenses. Two polarizers are used to change the beam powers in a controlled way. The beams are combined on the NPBS through a pair of steering mirrors that are also used to couple each beam into the input (free-space) port of the demultiplexer. The second mirror of each beam is mounted on a translation stage to move the beams, within the transverse plane, with micrometric resolution. The demultiplexer, PROTEUS-C from Cailabs, allows us to perform intensity measurements on six Hermite-Gaussian modes. It accepts radiation from the free-space input port, and decomposes it into the lowest-order modes ($\text{HG}_{00}$, $\text{HG}_{01}$, $\text{HG}_{10}$, $\text{HG}_{11}$, $\text{HG}_{20}$, $\text{HG}_{02}$). The latter are coupled into six single-mode fibers following conversion into the $\text{HG}_{00}$ mode. Finally, the intensities of modes $\text{HG}_{01}$ and $\text{HG}_{00}$ are combined for balanced detection using commercial balanced detectors that produce a signal $S_1$ proportional to $\text{HG}_{01}-\alpha_1 \text{HG}_{00}$, where $\alpha_1$ is an attenuation factor selected to set $S_1$ to zero when the beam displacement is zero (i.e., the beams overlap completely) and the overall source has a circular symmetry. 
The same procedure is repeated for the modes $\text{HG}_{10}$ and $\text{HG}_{00}$, yielding the signal $S_2$ and the optimised value of the factor $\alpha_2$.

\begin{figure} \centering \includegraphics[scale=.35]{setup} \caption{\textbf{Experimental setup.} Two telecom fiber lasers (1.55 $\mu m$) exit from collimators (C) and are mode-matched using a simple lens (L) system. The intensities are tuned by changing the relative orientation of a pair of polarizers (P). The beams are combined on the NPBS through a pair of steering mirrors (M) that are also used to couple each beam into the input free-space port of the demultiplexer. The second mirror of each beam is mounted on a translation stage to move the beams, within the transverse plane, with micrometric resolution. The demultiplexer, PROTEUS-C from Cailabs, allows us to perform intensity measurements on six Hermite-Gaussian modes. The $\text{HG}_{01} / \text{HG}_{00}$ modes and $\text{HG}_{10} / \text{HG}_{00}$ modes are detected by balanced detection.} \label{fi:setup} \end{figure}

\begin{figure} \centering \includegraphics[scale=.35]{misure} \caption{\textbf{Measured signals.} Measured signals $S_1$ and $S_2$ (upper plots, blue points) at different values of the separation $d$ between the simulated sources. Using $S_1$ and $S_2$ as calibration curves, it is possible to estimate the beam separation by simply measuring $S_1$ or $S_2$ and finding the corresponding beam separation $d$. In the lower graphs, the red points represent the uncertainties $\delta d$ associated with the estimation of the beam separation $d$. Using the upper graphs as calibration curves, $\delta d \simeq\frac{\partial d(S_i)}{\partial S_i} \delta S_i$. } \label{fi:misure} \end{figure}

To simulate a situation where we do not know whether there is a single source or there are two sources, we align the system on the centroid of the two sources by maximizing the $\text{HG}_{00}$ output. 
Then, we acquire the signals $S_1$ and $S_2$ for different separations of the two simulated sources between $0$ and $200$ $\mu m$ (beam FWHM $\simeq 360\mu m$). The beam positions are controlled using a pair of translation stages, shifted in opposite directions by the same amount in order to keep the centroid aligned with the demultiplexer optics. We note that using only one translation stage, and keeping the other source aligned with the demultiplexer, one would obtain a better resolution. However, in a practical scenario one can only hope to align the demultiplexer with the centroid, since the positions of the sources are unknown. Figure~\ref{fi:misure} shows the measured signals $S_1$ and $S_2$ (blue points) for several values of the separation $d$ between the simulated sources. Using $S_1$ and $S_2$ as calibration curves, it is possible to measure the beam separation by simply measuring $S_1$ or $S_2$ and finding the corresponding beam separation $d$. In the lower graphs of figure~\ref{fi:misure}, the red points represent the uncertainties of the beam separation estimates obtained with these calibration curves. We estimate the uncertainties (red points) $\delta d$ through the following procedure: (1) inversion of the data (blue points) shown in the upper graphs, $S_i(d) \rightarrow d(S_i)$; (2) calculation of the numerical derivative $\partial d(S_i) / \partial S_i$; (3) multiplication by the uncertainty $\delta S_i$ (vertical error bars of the blue points), obtaining $\delta d \simeq \delta S_i \, \partial d(S_i) /\partial S_i$. We obtain a very low uncertainty, which allows us to resolve the two beams within one hundredth of the FWHM even in the presence of the large cross-talk simulating a real observation. The $S_1$ and $S_2$ signals show different responses (and thus different sensitivities) to $d$. 
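The three-step uncertainty procedure above can be sketched numerically (a toy illustration with a made-up linear calibration curve, not the authors' analysis code):

```python
def estimate_separation(d_cal, s_cal, s_meas, delta_s):
    """Invert a monotonic calibration S(d) -> d(S) by linear interpolation,
    then propagate the signal uncertainty: delta_d ~ |dd/dS| * delta_S."""
    for i in range(len(s_cal) - 1):
        if s_cal[i] <= s_meas <= s_cal[i + 1]:
            # Step 1: invert S(d) locally on this calibration segment
            frac = (s_meas - s_cal[i]) / (s_cal[i + 1] - s_cal[i])
            d = d_cal[i] + frac * (d_cal[i + 1] - d_cal[i])
            # Step 2: numerical derivative dS/dd on the segment
            slope = (s_cal[i + 1] - s_cal[i]) / (d_cal[i + 1] - d_cal[i])
            # Step 3: delta_d = delta_S * |dd/dS|
            return d, abs(delta_s / slope)
    raise ValueError("measured signal outside calibration range")

# Hypothetical calibration: S = 0.5 * d over separations 0..200 (micrometres)
d_cal = [0.0, 50.0, 100.0, 150.0, 200.0]
s_cal = [0.0, 25.0, 50.0, 75.0, 100.0]
d_est, delta_d = estimate_separation(d_cal, s_cal, 30.0, 0.5)
```

In the toy example, a measured signal of 30.0 with uncertainty 0.5 maps back to a separation of 60 with an uncertainty of 1, illustrating how a steep calibration slope suppresses the separation uncertainty.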
The different sensitivities arise because $\text{HG}_{01}$ forms an angle smaller than $45^{\circ}$ with the direction of $d$, whereas the angle of $\text{HG}_{10}$ is larger than $45^{\circ}$. In principle, by rotating the image in the transverse plane, it would be possible to find the direction of maximum sensitivity of, e.g., $S_1$ (which in turn corresponds to minimum sensitivity of $S_2$). This further optimisation could be implemented by placing a Dove prism on a rotation stage carefully aligned with the optical axis. Finally, we remark that this technique does not provide an absolute distance measurement, as it is based on a calibration curve. After demonstrating a high resolving power in a high cross-talk condition (to emulate a real observation), we tested the system for robustness and reproducibility. We repeated the measurement for the more sensitive signal $S_1$ two days after the first measurement, obtaining $S_{1r}$. We found excellent reproducibility, as shown in figure~\ref{fi:misure2}, also thanks to finely controlled environmental conditions. In fact, the setup is placed on a 460 mm optical table with active vibration isolation and self-leveling in a humidity- and temperature-controlled laboratory. Figure~\ref{fi:misure2} shows the difference between signals measured on two different days, $\Delta=S_1(d)-S_{1r}(d)$. All the differences are within $\delta S$, ensuring a high level of reproducibility. The balanced detection we used can be especially advantageous in passive observation, as it is independent of unpredictable source fluctuations, and in real measurement campaigns where the cross-talk among the channels is not negligible. \begin{table}[b!] 
\center \begin{tabular}{ |p{1cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}| } \hline $x_{01}^{00}$ & $x_{10}^{00}$ & $x_{02}^{00}$ & $x_{20}^{00}$ & $x_{11}^{00}$ \\ \hline -20.0 & -22.6 & -15.9 & -18.5 & -34.1\\ \hline \end{tabular} \caption{Cross-talk values from the $\text{HG}_{00}$ mode into the $\text{HG}_{nm}$ modes in good alignment conditions.}\label{ct2} \end{table} After demonstrating the benefit of balanced detection for real observations, we prepared an improved setup to increase the performance even further by lowering the cross-talk and using more advanced translation stages. We matched the laser waist to the demultiplexer waist as closely as possible, obtaining the cross-talk values listed in table \ref{ct2}. We replaced the previous translation stages with higher-resolution translators (250 $nm$ each, i.e., 500 $nm$ for the symmetric separation) with a more limited travel range. With this improved setup, we acquired the signal on an oscilloscope for the $\text{HG}_{01}$ and $\text{HG}_{10}$ modes at different values of the source separation. The results are shown in figure \ref{fi:misure3}. The upper graphs show the intensities of the $\text{HG}_{01}$ and $\text{HG}_{10}$ modes for different values of the source separation. The lower graphs show the uncertainty calculated using the same procedure as in figure \ref{fi:misure}. Using this improved setup we obtain an uncertainty of a few thousandths of the FWHM, and we are able to resolve two sources within a distance of 0.0055 FWHM. \begin{figure} \centering \includegraphics[scale=.35]{misure2} \caption{\textbf{Repeatability.} Difference $\Delta$ between signals $S_1$ and $S_{1r}$ measured on two different days. 
All $\Delta$ values are within $\pm \delta S$, ensuring excellent measurement reproducibility.} \label{fi:misure2} \end{figure} \begin{figure} \centering \includegraphics[scale=.35]{misure3} \caption{\textbf{High resolution measurements.} Measured intensities $\text{HG}_{01}$ and $\text{HG}_{10}$ (upper graphs) at different values of the separation $d$ between the simulated sources. Using the $\text{HG}_{01}$ and $\text{HG}_{10}$ intensities as calibration curves, it is possible to estimate the beam separation by simply measuring $\text{HG}_{01}$ and $\text{HG}_{10}$ and finding the corresponding beam separation value. In the lower graphs, the red points represent the associated uncertainties $\delta d$, obtained using the upper graphs as calibration curves, $\delta d \simeq\frac{\partial d(\text{HG}_{01})}{\partial \text{HG}_{01}} \delta \text{HG}_{01}$ and $\delta d \simeq\frac{\partial d(\text{HG}_{10})}{\partial \text{HG}_{10}} \delta \text{HG}_{10}$.} \label{fi:misure3} \end{figure} In conclusion, we have demonstrated, in a proof-of-principle experiment, a very simple yet robust and reliable scheme for high-precision distance metrology in the transverse plane in the presence of strong cross-talk between channels, exploitable in real observations where the imperfections cannot be fully controlled. The measurement relies on spatial demultiplexing combined with balanced homodyne detection, and we obtained a minimum measurable separation between sources lower than one hundredth of the FWHM. We tested the reproducibility with good results, proving the reliability of the setup for real observations. Finally, using an improved setup with lower cross-talk and high-resolution translation stages, we repeated the measurements by recording the intensities of the $\text{HG}_{01}$ and $\text{HG}_{10}$ modes, obtaining a resolution of 0.0055 FWHM. 
To the best of our knowledge, this is an improvement of about one order of magnitude over the best previous measurement of this type \cite{Boucher2020}. The technique is not absolute, since it is based on a calibration curve, but it can be used both in real observations and for the development of inertial sensors acting in the two dimensions of the transverse plane at the same time. \vspace{0.5cm} We gratefully acknowledge support by the Italian Space Agency (ASI) through the Nonlinear Interferometry at Heisenberg Limit (NIHL) project (CUP F89J21027890005). We thank Cailabs, 38 boulevard Albert 1er, 35200 Rennes, France.
Arctic Monkeys Are A Million Miles From Home In Their Most Experimental Album Yet - HuffPost Verdict
The band's sixth album is set to divide fans' opinion but we're fully onboard. Rachel McGrath

The Arctic Monkeys Ticket Saga Has Brought Out The Best And Worst In People
Hell hath no fury like a millennial who didn't get a ticket.

NME Magazine: 15 Of The Publication's Most Memorable Covers
So long, (print) NME...

15 Of NME's Most Memorable Covers
So long, NME.

We Can't Believe All These Iconic Brit Awards Moments Happened 10 Years Ago
And all on the same night, no less.

Emma Thompson Once Rocked Out In The Mosh Pit To Arctic Monkeys At Reading Festival. Yes, The Emma Thompson.
It didn't end well. Matt Bagwell
We never had Emma Thompson down as (a) a rock fan or (b) someone who would rough it, but that shows how much we know. The
By Matt Bagwell, The Huffington Post

How Many Best British Album Winners Can You Name From Brit Awards Gone By?
You'd be surprised how many talented A-listers missed the mark.
For a UK-based artist, there are few honours bigger than being given the title of Best British Album at the Brit Awards, which

David Bowie Could Have Been A Figment Of Your Imagination.
We, The Carnabys, gave away all of our pre-order earnings from this summer's album release to the Trust. 'Too Much, Never Enough' is an album that we are really proud of and we haven't taken this decision lightly. However it's even more important to us that we - and the bands coming up behind - actually have somewhere to play in 5 years' time. So if you want to really go and see the next Bowie rather than imagine what might have been, get on board and help us to #savelivemusic. Jack Mercer

Noel Gallagher Blasts Adele, 1D And, Well, Almost Everyone... Noel Gallagher
The former Oasis singer is his usual candid self in a brand new interview with Esquire, in which he reveals
By Daniel Welsh, The Huffington Post UK

From Joy Division To Amy Winehouse: The Covers That Defined NME
NME magazine and its publisher Time Inc have revealed plans for a bold new change, stating that from September, the music
By Rachel McGrath, The Huffington Post UK

Fearne Cotton's Best Ever Live Lounge Guests
After 10 years behind the microphone, Fearne Cotton has hosted her last show on Radio 1. As regular listeners will know

Arctic Monkeys Fight Sound Problems At Reading
Arctic Monkeys performed their first Reading Festival set in five years on Saturday night, however it wasn't all smooth sailing

The 50 Most Memorable Reading Festival Moments
Over the years, there have been a number of memorable moments at Reading Festival, from Kurt Cobain's now famous stage entrance
By The Huffington Post UK

Reading and Leeds 2014: A Modern Flavour to the Veteran Festival
It's the pinnacle of summer. The final bank holiday hurrah for the U.K. A chance for many to bid farewell to their barbecues, paddling pools and take their tops off in public one last time. For over 150,000 people though it's the annual celebration of getting rowdy in a field to one of the biggest bills of music in the world at Reading and Leeds Festivals. What is often forgotten is that it's been this way for over 30 years. Olly Hunter

Review: Arctic Monkeys Showcase 'AM' At Finsbury Park
Arctic Monkeys have come a long way since storming the charts with 'Whatever People Say I Am, That's What I'm Not' in 2006

Is Subculture Dead?
At this year's Brit Awards, Alex Turner from the Arctic Monkeys used his acceptance speech for Best Album of the Year to announce that "rock n roll will never die." He then threw the mic he was using down on the floor and mumbled "invoice me for the mic if you wanna." Charlotte Mallory

We Can't Quite Believe This Song Is Actually 10 Years Old...
We've double, triple and quadruple-checked the dates and can confirm it's true - the Arctic Monkeys' 'I Bet You Look Good

Ivor Novello Awards 2014 Nominations Announced
The nominees for the 2014 Ivor Novello Awards have been announced, revealing that Arctic Monkeys, London Grammar and Emeli
package com.intellij.openapi.vcs.actions;

import com.intellij.openapi.editor.Editor;
import com.intellij.openapi.editor.colors.ColorKey;
import com.intellij.openapi.util.Couple;
import com.intellij.openapi.vcs.VcsBundle;
import com.intellij.openapi.vcs.annotate.AnnotationSource;
import com.intellij.openapi.vcs.annotate.FileAnnotation;
import com.intellij.openapi.vcs.annotate.LineAnnotationAspect;
import com.intellij.openapi.vcs.annotate.TextAnnotationPresentation;
import com.intellij.openapi.vcs.history.VcsRevisionNumber;
import com.intellij.util.Consumer;

import java.awt.*;
import java.util.Map;

/**
 * Shown additionally only when merge.
 *
 * @author Konstantin Bulenkov
 */
class CurrentRevisionAnnotationFieldGutter extends AnnotationFieldGutter implements Consumer<AnnotationSource> {
  // merge source showing is turned on
  private boolean myTurnedOn;

  CurrentRevisionAnnotationFieldGutter(FileAnnotation annotation,
                                       LineAnnotationAspect aspect,
                                       TextAnnotationPresentation highlighting,
                                       Couple<Map<VcsRevisionNumber, Color>> colorScheme) {
    super(annotation, aspect, highlighting, colorScheme);
  }

  @Override
  public ColorKey getColor(int line, Editor editor) {
    return AnnotationSource.LOCAL.getColor();
  }

  @Override
  public String getLineText(int line, Editor editor) {
    final String value = myAspect.getValue(line);
    if (String.valueOf(myAnnotation.getLineRevisionNumber(line)).equals(value)) {
      return "";
    }
    // shown in merge sources mode
    return myTurnedOn ? value : "";
  }

  @Override
  public String getToolTip(int line, Editor editor) {
    final String aspectTooltip = myAspect.getTooltipText(line);
    if (aspectTooltip != null) {
      return aspectTooltip;
    }
    final String text = getLineText(line, editor);
    return ((text == null) || (text.length() == 0)) ? "" : VcsBundle.message("annotation.original.revision.text", text);
  }

  @Override
  public void consume(final AnnotationSource annotationSource) {
    myTurnedOn = annotationSource.showMerged();
  }
}
Scaligeria nodosa is a species of umbellifer first described by Pierre Edmond Boissier, who also gave it its currently accepted name. Scaligeria nodosa belongs to the genus Scaligeria in the family Apiaceae (the umbellifers). No subspecies are listed in the Catalogue of Life.
{"url":"https:\/\/johncarlosbaez.wordpress.com\/page\/2\/?s=category+theory","text":"## Applied Category Theory 2020 (Part\u00a02)\n\n23 March, 2020\n\nDue to the coronavirus outbreak, many universities are moving activities online. This is a great opportunity to open up ACT2020 to a broader audience, with speakers from around the world.\n\nThe conference will take place July 6-10 online, coordinated by organizers in Boston USA. Each day there will be around six hours of live talks, which will be a bit more spaced out than usual to accommodate the different time zones of our speakers. All the talks will be both live streamed and recorded on YouTube. We will also have chat rooms and video chats in which participants can discuss various themes in applied category theory.\n\nWe will give more details as they become available and post updates on our official webpage:\n\nhttp:\/\/act2020.mit.edu\n\nSince there is no need to book travel, we were able to postpone the acceptance notification, and hence the submission deadline. If you would like to speak, please prepare an abstract or a conference paper according to the instructions here:\n\nhttp:\/\/act2020.mit.edu\/#papers\n\nImportant dates (all in 2020)\n\n\u2022 Submission of contributed papers: May 10\n\u2022 Tutorial day: July 5\n\u2022 Main conference: July 6-10\n\nRegistration will now be free; please register for the conference ahead of time here:\n\nhttp:\/\/act2020.mit.edu\/#registration\n\nWe will send registering participants links to the live stream, the recordings, and the chat rooms, and we\u2019ll use the list to inform participants of any changes.\n\nSubmissions\n\nTo give a talk at ACT2020, you have to submit a paper. You can submit either original research papers or extended abstracts of work submitted\/accepted\/published elsewhere. Accepted original research papers will be invited for publication in a proceedings volume.\n\nHere\u2019s how to submit papers. 
Two types of submissions are accepted, which will be reviewed to the same standards:\n\nProceedings Track. Original contributions of high quality work consisting of a 5\u201312 page extended abstract that provides evidence for results of genuine interest, and with enough detail to allow the program committee to assess the merits of the work. Submissions of works in progress are encouraged, but must be more substantial than a research proposal.\n\nNon-Proceedings Track. Descriptions of high-quality work submitted or published elsewhere will also be considered, provided the work is recent and relevant to the conference. The work may be of any length, but the program committee members may only look at the first 3 pages of the submission, so you should ensure these pages contain sufficient evidence of the quality and rigor of your work.\n\nSubmissions should be prepared using LaTeX, and must be submitted in PDF format. Submission is currently open, and can be perfomed at the following web page:\n\nhttps:\/\/easychair.org\/conferences\/?conf=act2020\n\nOne or more best paper awards may be given out at the discretion of the PC chairs. 
Selected contributions will be offered extended keynote slots in the program.\n\nOrganizers\n\nHere are the local organizers:\n\n\u2022 Brendan Fong\n\u2022 David Jaz Myers (logistics)\n\u2022 Paolo Perrone (publicity)\n\u2022 David Spivak\n\nHere is the committee running the school:\n\n\u2022 Carmen Constantin\n\u2022 Eliana Lorch\n\u2022 Paolo Perrone\n\nHere is the steering committee:\n\n\u2022 John Baez\n\u2022 Bob Coecke\n\u2022 David Spivak\n\u2022 Christina Vasilakopoulou\n\nHere is the program committee:\n\n\u2022 Mathieu Anel, CMU\n\u2022 John Baez, University of California, Riverside\n\u2022 Richard Blute, University of Ottawa\n\u2022 Tai-Danae Bradley, City University of New York\n\u2022 Andrea Censi, ETC Zurich\n\u2022 Bob Coecke, University of Oxford\n\u2022 Valeria de Paiva, Samsung Research America and University of Birmingham\n\u2022 Ross Duncan, University of Strathclyde\n\u2022 Eric Finster, University of Birmingham\n\u2022 Brendan Fong, Massachusetts Institute of Technology\n\u2022 Tobias Fritz, Perimeter Institute for Theoretical Physics\n\u2022 Richard Garner, Macquarie University\n\u2022 Fabrizio Romano Genovese, Statebox\n\u2022 Amar Hadzihasanovic, IRIF, Universit\u00e9 de Paris\n\u2022 Helle Hvid Hansen, Delft University of Technology\n\u2022 Jules Hedges, Max Planck Institute for Mathematics in the Sciences\n\u2022 Kathryn Hess Bellwald, Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne\n\u2022 Chris Heunen, The University of Edinburgh\n\u2022 Joachim Kock, UAB\n\u2022 Tom Leinster, The University of Edinburgh\n\u2022 Martha Lewis, University of Amsterdam\n\u2022 Daniel R. Licata, Wesleyan University\n\u2022 David Jaz Myers, Johns Hopkins University\n\u2022 Paolo Perrone, MIT\n\u2022 Vaughan Pratt, Stanford University\n\u2022 Peter Selinger, Dalhousie University\n\u2022 Michael Shulman, University of San Diego\nDavid I. 
Spivak, MIT (co-chair)\n\u2022 Walter Tholen, York University\n\u2022 Todd Trimble, Western Connecticut State University\n\u2022 Jamie Vicary, University of Birmingham (co-chair)\n\u2022 Maaike Zwart, University of Oxford\n\n## Applied Category Theory 2020 (Part\u00a01)\n\n1 March, 2020\n\nHere\u2019s the big annual conference on applied category theory:\n\nACT2020, 2020 July 6\u201310, online worldwide. Organized by Brendan Fong and David Spivak.\n\nThis happens right after the applied category theory school, which will take place June 29 \u2013 July 3. There will also be a tutorial day on Sunday July 5, with talks by Paolo Perrone, Emily Riehl, David Spivak and others.\n\nTo give a talk at ACT2020, you have to submit a paper. You can submit either original research papers or extended abstracts of work submitted\/accepted\/published elsewhere. Accepted original research papers will be invited for publication in a proceedings volume. Some contributions will be invited to become keynote addresses, and best paper awards may also be given. The conference will also include a business showcase.\n\nHere\u2019s how to submit papers. Two types of submissions are accepted, which will be reviewed to the same standards:\n\nProceedings Track. Original contributions of high quality work consisting of a 5\u201312 page extended abstract that provides evidence for results of genuine interest, and with enough detail to allow the program committee to assess the merits of the work. Submissions of works in progress are encouraged, but must be more substantial than a research proposal.\n\nNon-Proceedings Track. Descriptions of high-quality work submitted or published elsewhere will also be considered, provided the work is recent and relevant to the conference. 
The work may be of any length, but the program committee members may only look at the first 3 pages of the submission, so you should ensure these pages contain sufficient evidence of the quality and rigor of your work.\n\nSubmissions should be prepared using LaTeX, and must be submitted in PDF format. Submission is currently open, and can be performed at the following web page:\n\nhttps:\/\/easychair.org\/conferences\/?conf=act2020\n\nHere are some important dates, all in 2020:\n\n\u2022 Submission of contributed papers: April 26\n\u2022 Early bird registration deadline: May 20\n\u2022 Final registration deadline: June 26\n\u2022 Tutorial day: July 5\n\u2022 Main conference: July 6\u201310\n\nHere is the program committee:\n\n\u2022 Mathieu Anel, CMU\n\u2022 John Baez, University of California, Riverside\n\u2022 Richard Blute, University of Ottawa\n\u2022 Tai-Danae Bradley, City University of New York\n\u2022 Andrea Censi, ETH Zurich\n\u2022 Bob Coecke, University of Oxford\n\u2022 Valeria de Paiva, Samsung Research America and University of Birmingham\n\u2022 Ross Duncan, University of Strathclyde\n\u2022 Eric Finster, University of Birmingham\n\u2022 Brendan Fong, Massachusetts Institute of Technology\n\u2022 Tobias Fritz, Perimeter Institute for Theoretical Physics\n\u2022 Richard Garner, Macquarie University\n\u2022 Fabrizio Romano Genovese, Statebox\n\u2022 Amar Hadzihasanovic, IRIF, Universit\u00e9 de Paris\n\u2022 Helle Hvid Hansen, Delft University of Technology\n\u2022 Jules Hedges, Max Planck Institute for Mathematics in the Sciences\n\u2022 Kathryn Hess Bellwald, Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne\n\u2022 Chris Heunen, The University of Edinburgh\n\u2022 Joachim Kock, UAB\n\u2022 Tom Leinster, The University of Edinburgh\n\u2022 Martha Lewis, University of Amsterdam\n\u2022 Daniel R. 
Licata, Wesleyan University\n\u2022 David Jaz Myers, Johns Hopkins University\n\u2022 Paolo Perrone, MIT\n\u2022 Vaughan Pratt, Stanford University\n\u2022 Peter Selinger, Dalhousie University\n\u2022 Michael Shulman, University of San Diego\n\u2022 David I. Spivak, MIT (co-chair)\n\u2022 Walter Tholen, York University\n\u2022 Todd Trimble, Western Connecticut State University\n\u2022 Jamie Vicary, University of Birmingham (co-chair)\n\u2022 Maaike Zwart, University of Oxford\n\nHere is the steering committee:\n\n\u2022 John Baez\n\u2022 Bob Coecke\n\u2022 David Spivak\n\u2022 Christina Vasilakopoulou\n\nHere is the committee running the school:\n\n\u2022 Carmen Constantin\n\u2022 Eliana Lorch\n\u2022 Paolo Perrone\n\nAnd here are the local organizers:\n\n\u2022 Brendan Fong\n\u2022 David Jaz Myers (logistics)\n\u2022 Paolo Perrone (publicity)\n\u2022 David Spivak\n\nMore news will follow!\n\n## Applied Category Theory at NIST (Part\u00a03)\n\n22 February, 2020\n\nSadly, this workshop has been cancelled due to the coronavirus pandemic. It may be postponed to a later date.\n\nMy former student Blake Pollard is working at the National Institute of Standards and Technology. He\u2019s working with Spencer Breiner and Eswaran Subrahmanian, who are big advocates of using category theory to organize design and manufacturing processes. In the spring of 2018 they had a workshop on applied category theory with a lot of honchos from industry and government in attendance\u2014you can see videos by clicking the link.\n\nThis spring they\u2019re having another workshop on this topic!\n\nApplied Category Theory Workshop, April 8-9, 2020, National Institute of Standards and Technology, Gaithersburg, Maryland. Organized by Spencer Breiner, Blake Pollard and Eswaran Subrahmanian.\n\nThe focus of this workshop is on fostering the development of tooling and use-cases supporting the applied category theory community. 
We are particularly interested in bringing together practitioners who are engaged with susceptible domains as well as those involved in the implementation, support, and utilization of software and other tools. There will be a number of talks\/demos showcasing existing approaches as well as ample time for discussion.\n\nHere are the speakers listed so far:\n\n\u2022 John Baez, University of California, Riverside\n\n\u2022 Arquimedes Canedo, Siemens\n\n\u2022 Daniel Cicala, University of New Haven\n\n\u2022 James Fairbanks, Georgia Tech Research Institute\n\n\u2022 Jules Hedges, Max Planck Institute for Mathematics in the Sciences\n\n\u2022 Jelle Herold, Statebox\n\n\u2022 Evan Patterson, Stanford University\n\n\u2022 Qunfen Qi, University of Huddersfield\n\n\u2022 Christian Williams, University of California, Riverside\n\n\u2022 Ryan Wisnesky, Conexus.ai\n\nI\u2019ll also be giving a separate talk on \u201cecotechnology\u201d at NIST on Friday April 10th; more about that later!\n\n## The Category Theory Behind\u00a0UMAP\n\n10 February, 2020\n\nAn interesting situation has arisen. Some people working on applied category theory have been seeking a \u2018killer app\u2019: that is, an application of category theory to practical tasks that would be so compelling it would force the world to admit categories are useful. Meanwhile, the UMAP algorithm, based to some extent on category theory, has become very important in genomics:\n\n\u2022 Leland McInnes, John Healy and James Melville, UMAP: uniform manifold approximation and projection for dimension reduction.\n\nBut while practitioners have embraced the algorithm, they\u2019re still puzzled by its category-theoretic underpinnings, which are discussed in Section 2 of the paper. (You can read the remaining sections, which describe the algorithm quite concretely, without understanding Section 2.)\n\nI first heard of this situation on Twitter when James Nichols wrote:\n\nWow! 
My first sighting of applied category theory: the UMAP algorithm. I\u2019m a category novice, but the resulting adjacency-graph algorithm is v simple, so surely the theory boils down to reasonably simple arguments in topology\/Riemannian geometry?\n\nDo any of you prolific ACT tweeters know much about UMAP? I understand the gist of the linked paper, but not say why we need category theory to define this \u201cfuzzy topology\u201d concept, as opposed to some other analytic defn.\n\nWhat was gained by CT for UMAP? (honest question, not trying to be snarky)\n\nLeland McInnes, one of the inventors of UMAP, responded:\n\nIt is my math background, how I think about the problem, and how the algorithm was derived. It wasn\u2019t something that was added, but rather something that was always there\u2014for me at least. In that sense what was gained was the algorithm.\n\nI don\u2019t really understand UMAP; for a good introduction to it see the original paper above and also this:\n\n\u2022 Nikolay Oskolkov, How Exactly UMAP Works\u2014and Why Exactly It Is Better Than tSNE, 3 October 2019.\n\ntSNE is an older algorithm for taking clouds of data points in high dimensions and mapping them down to fewer dimensions so we can understand what\u2019s going on. From the viewpoint of those working on genomics, the main good thing about UMAP is that it solves a bunch of problems that plagued tSNE. Oskolkov explains what these problems are and how UMAP deals with them. But he also alludes to the funny disconnect between these practicalities and the underlying theory:\n\nMy first impression when I heard about UMAP was that this was a completely novel and interesting dimension reduction technique which is based on solid mathematical principles and hence very different from tSNE which is a pure Machine Learning semi-empirical algorithm. 
My colleagues from Biology told me that the original UMAP paper was \u201ctoo mathematical\u201d, and looking at the Section 2 of the paper I was very happy to see strict and accurate mathematics finally coming to Life and Data Science. However, reading the UMAP docs and watching Leland McInnes talk at SciPy 2018, I got puzzled and felt like UMAP was another neighbor graph technique which is so similar to tSNE that I was struggling to understand how exactly UMAP is different from tSNE.\n\nHe then goes on and attempts to explain exactly why UMAP does so much better than tSNE. None of his explanation mentions category theory.\n\nSince I don\u2019t really understand UMAP or why it does better than tSNE, I can\u2019t add anything to this discussion. In particular, I can\u2019t say how much the category theory really helps. All I can do is explain a bit of the category theory. I\u2019ll do that now, very briefly, just as a way to get a conversation going. I will try to avoid category-theoretic jargon as much as possible\u2014not because I don\u2019t like it or consider it unimportant, but because that jargon is precisely what\u2019s stopping certain people from understanding Section 2.\n\nI think it all starts with this paper by Spivak, which McInnes, Healy and Melville cite but for some reason don\u2019t provide a link to:\n\n\u2022 David Spivak, Metric realization of fuzzy simplicial sets.\n\nSpivak showed how to turn a \u2018fuzzy simplicial set\u2019 into an \u2018uber-metric space\u2019 and vice versa. What are these things?\n\nAn \u2018uber-metric space\u2019 is very simple. It\u2019s a slight generalization of a metric space that relaxes the usual definition in just two ways: it lets distances be infinite, and it lets distinct points have distance zero from each other. This sort of generalization can be very useful. 
I could talk about it a lot, but I won\u2019t.\n\nA fuzzy simplicial set is a generalization of a simplicial set.\n\nA simplicial set starts out as a set of vertices (or 0-simplices), a set of edges (or 1-simplices), a set of triangles (or 2-simplices), a set of tetrahedra (or 3-simplices), and so on: in short, a set of n-simplices for each n. But there\u2019s more to it. Most importantly, each n-simplex has a bunch of faces, which are lower-dimensional simplices.\n\nI won\u2019t give the whole definition. To a first approximation you can visualize a simplicial set as being like this:\n\nBut of course it doesn\u2019t have to stop at dimension 3\u2014and more subtly, you can have things like two different triangles that have exactly the same edges.\n\nIn a \u2018fuzzy\u2019 simplicial set, instead of a set of n-simplices for each n, we have a fuzzy set of them. But what\u2019s a fuzzy set?\n\nFuzzy set theory is good for studying collections where membership is somewhat vaguely defined. Like a set, a fuzzy set has elements, but each element has a \u2018degree of membership\u2019 that is a number 0 < x \u2264 1. (If its degree of membership were zero, it wouldn't be an element!)\n\nA map f: X \u2192 Y between fuzzy sets is an ordinary function, but obeying this condition: it can only send an element x \u2208 X to an element f(x) \u2208 Y whose degree of membership is greater than or equal to that of x. In other words, we don't want functions that send things to things with a lower degree of membership.\n\nWhy? Well, if I'm quite sure something is a dog, and every dog has a nose, then I must be at least equally sure that this dog has a nose! (If you disagree with this, then you can make up some other concept of fuzzy set. 
There are a number of such concepts, and I'm just describing one.)\n\nSo, a fuzzy simplicial set will have a set of n-simplices for each n, with each n-simplex having a degree of membership\u2026 but the degree of membership of its faces can't be less than its own degree of membership.\n\nThis is not the precise definition of fuzzy simplicial set, because I'm leaving out some distracting nuances. But you can get the precise definition by taking a nuts-and-bolts definition of simplicial set, like Definition 3.2 here:\n\n\u2022 Greg Friedman, An elementary illustrated introduction to simplicial sets.\n\nand replacing all the sets by fuzzy sets, and all the maps by maps between fuzzy sets.\n\nIf you like visualizing things, you can visualize a fuzzy simplicial set as an ordinary simplicial set, as in the picture above, but where an n-simplex is shaded darker if its degree of membership is higher. An n-simplex can\u2019t be shaded darker than any of its faces.\n\nHow can you turn a fuzzy simplicial set into an uber-metric space? And how can you turn an uber-metric space into a fuzzy simplicial set?\n\nSpivak focuses on the first question, because the answer is simpler, and it determines the answer to the second using some category theory. (Psst: adjoint functors!)\n\nThe answer to the first question goes like this. Say you have a fuzzy simplicial set. For each n-simplex whose degree of membership equals $a,$ you turn it into a copy of this uber-metric space:\n\n$\\{ (t_0, t_1, \\dots, t_n) : t_0 + \\cdots + t_n = - \\log a , \\; t_0, \\ldots, t_n \\geq 0 \\} \\subseteq \\mathbb{R}^{n+1}$\n\nThis is really just an ordinary metric space: an n-simplex that\u2019s a subspace of Euclidean (n+1)-dimensional space with its usual Euclidean distance function. Then you glue together all these uber-metric spaces, one for each simplex in your fuzzy simplicial set, to get a big fat uber-metric space.\n\nThis process is called \u2018realization\u2019. 
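To get a feel for the realization formula above, here is a small Python sketch (my own illustration, not code from Spivak's paper or the UMAP paper). A vertex of the realized n-simplex puts all of its "mass" -log a on one coordinate, so any two distinct vertices sit at Euclidean distance sqrt(2) * (-log a) from each other:

```python
import math

def realized_vertex_distance(a: float) -> float:
    """Distance between any two distinct vertices of the realized n-simplex
    {t : t_0 + ... + t_n = -log(a), t_i >= 0}.

    Each vertex has one coordinate equal to -log(a) and the rest 0,
    so the distance between two distinct vertices is sqrt(2) * (-log(a)).
    """
    if not 0 < a <= 1:
        raise ValueError("degree of membership must satisfy 0 < a <= 1")
    return math.sqrt(2) * (-math.log(a))

# Higher membership -> smaller simplex; membership 1 collapses it to a point.
for a in (0.1, 0.5, 1.0):
    print(f"a = {a}: vertex distance = {realized_vertex_distance(a):.3f}")
```

The numbers make the intuition in the next paragraph concrete: as the degree of membership a climbs toward 1, the realized simplex shrinks to a point.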
The key here is that if an n-simplex has a high degree of membership, it gets \u2018realized\u2019 as a metric space shaped like a small n-simplex. I believe the basic intuition is that an n-simplex with a high degree of membership describes an (n+1)-tuple of things\u2014its vertices\u2014that are close to each other.\n\nIn theory, I should try to describe the reverse process that turns an uber-metric space into a fuzzy simplicial set. If I did, I believe we would see that whenever an (n+1)-tuple of things\u2014that is, points of our uber-metric space\u2014are close, they give an n-simplex with a high degree of membership.\n\nIf so, then both uber-metric spaces and fuzzy simplicial sets are just ways of talking about which collections of data points are close, and we can translate back and forth between these descriptions.\n\nBut I\u2019d need to think about this a bit more to do a good job of going further, and reading the UMAP paper a bit more I\u2019m beginning to suspect that\u2019s not the main thing that practitioners need to understand. I\u2019m beginning to think the most useful thing is to get a feeling for fuzzy simplicial sets! I hope I\u2019ve helped a bit in that direction. They are very simple things. They are also closely connected to an idea from topological data analysis:\n\nI should admit that McInnes, Healy and Melville tweak Spivak\u2019s formalism a bit. They call Spivak\u2019s uber-metric spaces \u2018extended-pseudo-metric spaces\u2019, but they focus on a special kind, which they call \u2018finite\u2019. Unfortunately, I can\u2019t find where they define this term. They also only consider a special sort of fuzzy simplicial set, which they call \u2018bounded\u2019, but I can\u2019t find the definition of this term either! 
Without knowing these definitions, I can\u2019t comment on how these tweaks change things.\n\n## Applied Category Theory 2020 \u2014 Adjoint\u00a0School\n\n23 December, 2019\n\nLike last year and the year before, there will be a school associated to this year\u2019s international conference on applied category theory! If you\u2019re trying to get into applied category theory, this is the best possible way.\n\nThe school will consist of online meetings from February to June 2020, followed by a research week June 29\u2013July 3, 2020 at MIT in Cambridge Massachusetts. The conference follows on July 6\u201310, 2020, and if you attend the school you should also go to the conference.\n\nThe deadline to apply is January 15 2020; apply here.\n\nThere will be 4 mentors teaching courses at the school:\n\n\u2022 Michael Johnson, Categories of maintainable relations.\n\n\u2022 Valeria de Paiva, Dialectica categories of Petri nets.\n\n\u2022 Michael Shulman, A practical type theory for symmetric monoidal categories.\n\nClick on the links for more detailed information!\n\n### Who should apply?\n\nAnyone, from anywhere in the world, who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language\u2014the definition of monoidal category for example\u2014is encouraged.\n\nWe will consider advanced undergraduates, PhD students, post-docs, as well as people working outside of academia. Members of minorities, and of any groups which are underrepresented in the mathematics and computer science communities, are especially encouraged to apply.\n\n### Structure of the school\n\nEvery participant will be assigned to one of the groups above, according to their preference (and to the availability of places within the groups). 
Each group will consist of a mentor, a TA, and 4-5 students.\n\n#### Online meetings\n\nBetween February and June 2020 there will be an online reading seminar. Each group will have a reading list of two papers, which they will study, and then present to the rest of the school during weekly online meetings. Every member of the school is encouraged to take part in the discussion of every paper, first during the meeting via live chat, and then, in written form, on an online forum. After the presentation and the forum discussion the students of each group will write a blog post about their assigned paper on the n-Category Caf\u00e9.\n\nDuring this period, the TAs will be there to help the students, answer any question they might have, and moderate the discussions. This way, all the participants will build the necessary background to take part in the research activities during the week at MIT.\n\n#### Research week\n\nAfter the online meetings, there will be a two-week event at MIT, from June 29th to July 10th 2020. The first week is dedicated exclusively to the participants of the school. They will work in groups on the research projects outlined above, led by their mentors, with the help of their TAs.\n\nDuring the second week the ACT 2020 Conference will take place, which is open to a wider audience. The members of each group of the school will have the opportunity to present their work to the audience of the conference, and share their ideas. The conference is not technically part of the school, but is about very similar topics, and participation is very much encouraged. 
The online meetings should prepare students to be able to follow some of the conference presentations to a reasonable degree, and introduce them to the main problems and techniques of the field.\n\n### Questions?\n\nFor any questions or doubts please write us at the address act adjoint school at gmail dot com.\n\n## Applied Category Theory Postdocs at\u00a0NIST\n\n13 December, 2019\n\nWe are looking to expand our group of applied category theorists at the National Institute of Standards and Technology (NIST). Our group develops use cases, tools and methodology to apply category theory and related methods in a broad range of disciplines centered around the design, implementation, operation and evolution of engineered systems.\n\nWe encourage those eligible and interested to apply for the National Research Council Research Associateship Program. The upcoming deadline is February 1st, for those looking to start by December 2020.\n\nThe relevant postdoctoral opportunities can be found here:\n\nThese 2-year postdoctoral positions are only open to US citizens, come with a base stipend around \\$72k (12 month), great benefits, and travel support.\n\nFor non-US citizens, NIST has mechanisms to host foreign guest researchers (undergrad through professor). Typically, such researchers propose their own projects to be completed in collaboration with researchers and use of facilities at NIST.\n\nFor more information, contact Spencer Breiner (spencer.breiner@nist.gov), Blake Pollard (blake.pollard@nist.gov), and\/or Eswaran Subrahmanian (sub@cmu.edu).\n\n## Applied Category Theory Meeting at UCR (Part\u00a03)\n\n15 November, 2019\n\nWe had a special session on applied category theory here at UCR:\n\nApplied category theory, Fall Western Sectional Meeting of the AMS, 9\u201310 November 2019, U.C. Riverside.\n\nI was bowled over by the large number of cool ideas. I\u2019ll have to blog about some of them. 
A bunch of people stayed for a few days afterwards, and we had lots of great conversations.\n\nThe biggest news was that Brendan Fong and David Spivak definitely want to set up an applied category theory institute in the San Francisco Bay Area, which they\u2019re calling the Topos Institute. They are now in the process of raising funds for this institute! I plan to be involved, so I\u2019ll be saying more about this later.\n\nBut back to the talks. We didn\u2019t make videos, but here are the slides. Click on talk titles to see abstracts of the talks. For a multi-author talk, the person whose name is in boldface is the one who gave the talk. You also might enjoy comparing the 2017 talks.\n\nSaturday November 9, 2019\n\n8:00 a.m.\nFibrations as generalized lens categories \u2014 talk slides.\nDavid I. Spivak, Massachusetts Institute of Technology\n\n9:00 a.m.\nSupplying bells and whistles in symmetric monoidal categories \u2014 talk slides.\nBrendan Fong, Massachusetts Institute of Technology\nDavid I. Spivak, Massachusetts Institute of Technology\n\n9:30 a.m.\nPhilip Hackney, University of Louisiana at Lafayette\nGabriel C. Drummond-Cole, IBS Center for Geometry and Physics\n\n10:00 a.m.\nDuality of relations \u2014 talk slides.\nAlexander Kurz, Chapman University\n\n10:30 a.m.\nA synthetic approach to stochastic maps, conditional independence, and theorems on sufficient statistics \u2014 talk slides.\nTobias Fritz, Perimeter Institute for Theoretical Physics\n\n3:00 p.m.\nConstructing symmetric monoidal bicategories functorially \u2014 talk slides.\nMichael Shulman, University of San Diego\nLinde Wester Hansen, University of Oxford\n\n3:30 p.m.\nStructured cospans \u2014 talk slides.\nKenny Courser, University of California, Riverside\nJohn C. 
Baez, University of California, Riverside\n\n4:00 p.m.\nGeneralized Petri nets \u2014 talk slides.\nJade Master, University of California, Riverside\n\n4:30 p.m.\nFormal composition of hybrid systems \u2014 talk slides and website.\n\nPaul Gustafson, Wright State University\nJared Culbertson, Air Force Research Laboratory\nDan Koditschek, University of Pennsylvania\nPeter Stiller, Texas A&M University\n\n5:00 p.m.\nStrings for cartesian bicategories \u2014 talk slides.\nM. Andrew Moshier, Chapman University\n\n5:30 p.m.\nDefining and programming generic compositions in symmetric monoidal categories \u2014 talk slides.\nDmitry Vagner, Los Angeles, CA\n\nSunday November 10, 2019\n\n8:00 a.m.\nMathematics for second quantum revolution \u2014 talk slides.\nZhenghan Wang, UCSB and Microsoft Station Q\n\n9:00 a.m.\nA compositional and statistical approach to natural language \u2014 talk slides.\n\n9:30 a.m.\nExploring invariant structure in neural activity with applied topology and category theory \u2014 talk slides.\nKrista Perks, UC San Diego\nTimothy Q Gentner, UC San Diego\n\n10:00 a.m.\nOf monks, lawyers and villages: new insights in social network science \u2014 talk cancelled due to illness.\nNina Otter, Mathematics Department, UCLA\nMason A. Porter, Mathematics Department, UCLA\n\n10:30 a.m.\nFunctorial cluster embedding \u2014 talk slides.\n\nSteve Huntsman, BAE Systems FAST Labs\n\n2:00 p.m.\nQuantitative equational logic \u2014 talk slides.\nPrakash Panangaden, School of Computer Science, McGill University\nGordon D. Plotkin, University of Edinburgh\n\n3:00 p.m.\nBrakes: an example of applied category theory \u2014 talk slides in PDF and Powerpoint.\nEswaran Subrahmanian, Carnegie Mellon University \/ National Institute of Standards and Technology\n\n3:30 p.m.\nIntuitive robotic programming using string diagrams \u2014 talk slides.\nBlake S. 
Pollard, National Institute of Standards and Technology\n\n4:00 p.m.\nMetrics on functor categories \u2014 talk slides.\nVin de Silva, Department of Mathematics, Pomona College\n\n4:30 p.m.\nHausdorff and Wasserstein metrics on graphs and other structured data \u2014 talk slides.\nEvan Patterson, Stanford University\n\n## Why Is Category Theory a Trending\u00a0Topic?\n\n8 November, 2019\n\nI wrote something for the Spanish newspaper El Pa\u00eds, which has a column on mathematics called \u201cCaf\u00e9 y Teoremas\u201d. \u00c1gata Tim\u00f3n helped me a lot with writing this, and she also translated it into Spanish:\n\n\u2022 John Baez, Qu\u00e9 es la teor\u00eda de categor\u00edas y c\u00f3mo se ha convertido en tendencia, El Pa\u00eds, 8 November 2019.\n\nHere\u2019s the English-language version I wrote. It\u2019s for a general audience so don\u2019t expect hard-core math!\n\n### Why has \u201ccategory theory\u201d become a trending topic?\n\nRecently, various scientific media have been paying attention to a branch of mathematics called \u201ccategory theory\u201d that has become pretty popular inside the mathematical community in recent years. Some mathematicians are even starting to complain on Twitter that more people are tweeting about category theory than their own specialties. But what is this branch of mathematics, and why is it becoming so fashionable?\n\nCategory theory was invented in 1945 as a general technique to transform problems in one field of pure mathematics into problems in another field, where they could be solved. For example, we know that at any moment there must be a location on the surface of the Earth where the wind velocity is zero. This is a marvelous result\u2014but to prove this result, we must translate it into a fact about algebra, and a bit of category theory is very helpful here. More difficult results often require more category theory. 
The proof of Fermat\u2019s Last Theorem, for example, builds on a vast amount of 20th-century mathematics, in which category theory plays a crucial role.\n\nCategory theory is sometimes called \u201cthe mathematics of mathematics\u201d, since it stands above many other fields of mathematics, connecting and uniting them. Unfortunately even mathematicians have a limited tolerance for this high level of abstraction. So, for a long time many mathematicians called category theory \u201cabstract nonsense\u201d\u2014using it reluctantly when it was necessary for their work, but not really loving it.\n\nOn the other hand, other mathematicians embraced the beauty and power of category theory. Thus, its influence has gradually been spreading. Since the 1990s, it has been infiltrating computer science: for example, new programming languages like Haskell and Scala use ideas from this subject. But now we are starting to see people apply category theory to chemistry, electrical engineering, and even the design of brakes in cars! \u201cApplied category theory\u201d, once an oxymoron, is becoming a real subject.\n\nTo understand this we need a little taste of the ideas. A category consists of a set of \u201cobjects\u201d together with \u201cmorphisms\u201d\u2014some kind of processes, or paths\u2014going between these objects. For example, we could take the objects to be cities, and the morphisms to be routes from one city to another. The key requirement is that if we have a morphism from an object x to an object y and a morphism from y to an object z, we can \u201ccompose\u201d them and get a morphism from x to z. For example, if you have a way to drive from Madrid to Seville and a way to drive from Seville to Faro, that gives a way to drive from Madrid to Faro. Thus there is a category of cities and routes between them.\n\nIn mathematics, this focus on morphisms represented a radical shift of viewpoint. 
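The cities-and-routes category above can be sketched in a few lines of Python (a toy illustration of my own, with made-up route data, not anything from the article):

```python
# Toy illustration of the category of cities and routes described above.
# An object is a city; a morphism is a route, recorded as the list of
# cities it passes through. Composition concatenates matching routes.

def compose(f, g):
    """Given a route f from x to y and a route g from y to z,
    return the composite route from x to z."""
    if f[-1] != g[0]:
        raise ValueError("routes do not share an endpoint")
    return f + g[1:]

def identity(city):
    """The identity morphism: the trivial route that stays put."""
    return [city]

madrid_to_seville = ["Madrid", "Seville"]
seville_to_faro = ["Seville", "Faro"]
print(compose(madrid_to_seville, seville_to_faro))
# Composing with an identity leaves a route unchanged:
print(compose(identity("Madrid"), madrid_to_seville))
```

The two category axioms show up directly: composition is associative because list concatenation is, and the one-city route acts as an identity.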
Starting around 1900, logicians tried to build the whole of mathematics on solid foundations. This turned out to be a difficult and elusive task, but their best attempt at the time involved \u201cset theory\u201d. A set is simply a collection of elements. In set theory as commonly practiced by mathematicians, these elements are also just sets. In this worldview, everything is just a set. It is a static worldview, as if we had objects but no morphisms. On the other hand, category theory builds on set theory by emphasizing morphisms\u2014ways of transforming things\u2014as equal partners to things themselves. It is not incompatible with set theory, but it offers new ways of thinking.\n\nThe idea of a category is simple. Exploiting it is harder. A loose group of researchers are starting to apply category theory to subjects beyond pure mathematics. The key step is to focus a bit less on things and a bit more on morphisms, which are ways to go between things, or ways to transform one thing into another. This attitude is well suited to computer programming: a program is a way to transform input data into output data, and composing programs is the easiest way to build complicated programs from simpler ones. But personally, I am most excited by applications to engineering and the natural sciences, because these are newer and more surprising.\n\nI was very pleased when two of my students got internships at the engineering firm Siemens, applying category theory to industrial processes. The first, Blake Pollard, now has a postdoctoral position at the National Institute of Standards and Technology in the USA. 
Among other things, he has used a programming method based on category theory to help design a \u201csmart grid\u201d\u2014an electrical power network that is flexible enough to handle the ever-changing power generated by thousands of homes equipped with solar panels.\n\nRumors say that soon there may even be an institute of applied category theory, connecting mathematicians to programmers and businesses who need this way of thinking. It is too early to tell if this is the beginning of a trend, but my friends and colleagues on Twitter are very excited.\n\n## Applied Category Theory Meeting at UCR (Part\u00a02)\n\n30 September, 2019\n\nJoe Moeller and I have finalized the schedule of our meeting on applied category theory:\n\nApplied Category Theory, special session of the Fall Western Sectional Meeting of the AMS, U. C. Riverside, Riverside, California, 9\u201310 November 2019.\n\nIt\u2019s going to be really cool, with talks on everything from brakes to bicategories, from quantum physics to social networks, and more\u2014with the power of category theory as the unifying theme!\n\nYou can get information on registration, hotels and such here. If you\u2019re coming, you might also want to attend Eugenia Cheng\u2019s talk on the afternoon of Friday November 8th. I\u2019ll announce the precise title and time of her talk, and also the location of all the following talks, as soon as I know!\n\nIn what follows, the person actually giving the talk has an asterisk by their name. You can click on talk titles to see abstracts of the talks.\n\nSaturday November 9, 2019, 8:00 a.m.-10:50 a.m.\n\n\u2022 8:00 a.m.\nDavid I. Spivak*, Massachusetts Institute of Technology\n\u2022 9:00 a.m.\nBrendan Fong*, Massachusetts Institute of Technology\nDavid I. Spivak, Massachusetts Institute of Technology\n\u2022 9:30 a.m.\nGabriel C. 
Drummond-Cole, IBS Center for Geometry and Physics\nPhilip Hackney*, Department of Mathematics, University of Louisiana at Lafayette\n\u2022 10:00 a.m.\nDuality of relations.\nAlexander Kurz*, Chapman University\n\u2022 10:30 a.m.Tobias Fritz*, Perimeter Institute for Theoretical Physics\n\nSaturday November 9, 2019, 3:00 p.m.-5:50 p.m.\n\nSunday November 10, 2019, 8:00 a.m.-10:50 a.m.\n\nSunday November 10, 2019, 2:00 p.m.-4:50 p.m.\n\n## 2020 Category Theory\u00a0Conferences\n\n9 August, 2019\n\nYes, my last post was about ACT2019, but we\u2019re already planning next year\u2019s applied category theory conference and school! I\u2019m happy to say that Brendan Fong and David Spivak have volunteered to run it at MIT on these dates:\n\n\u2022 Applied Category Theory School: June 29\u2013July 3, 2020.\n\u2022 Applied Category Theory Conference: July 6\u201310, 2020.\n\nThe precise dates for the other big category theory conference, CT2020, have not yet been decided. However, it will take place in Genoa sometime in the interval June 18\u201328, 2020.\n\nAnd don\u2019t forget to submit your abstracts for the November 2019 applied category theory special session at U. C. Riverside by September 3rd! We\u2019ve got a great lineup of speakers, but anyone who wants to give a talk\u2014including the invited speakers\u2014needs to submit an abstract to the AMS website by September 3rd. 
The AMS has no mercy about this.
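The remark above that composing programs mirrors composing morphisms can be made concrete in a few lines. This is our illustration, not the post's: the `compose` helper and the example functions are hypothetical, a minimal sketch of the categorical reading of programs.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    """Composition of morphisms: (g after f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def identity(x: A) -> A:
    """The identity morphism on any object."""
    return x

# Two small "programs", i.e. morphisms between data types:
parse = int                       # str -> int
double = lambda n: 2 * n          # int -> int

pipeline = compose(double, parse)  # str -> int
```

Composition is associative and `identity` is a unit for it; those two laws are exactly the structure that makes programs and their data types into a category.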
I am Craig Matthews. I was diagnosed in October 1996, after being in the hospital for an asthma attack. My parents and I have been dealing with diabetes pretty well, with the occasional ups and downs. I enjoy playing baseball and racquetball, playing with my baby sister, and riding 4-wheelers at the river with my brother and dad.
"I am very pleased with today's installation. All were very friendly and professional. The windows look great!" Window Universe was founded with the basic idea that replacing windows didn't need to be difficult. It sounds a little corny, but we were pretty sure we could bring a little integrity to the home improvement industry. It turns out we were right. We appreciate the kind words and the support of our customers! Anyone who has received estimates from the largest window replacement companies will tell you that we're not the same as the other guys. From start to finish you'll find that we don't operate like a traditional contractor. Our prices are clear and simple, our warranties are extensive and our products lead the industry. Homeowners often tell us their horror stories of being subjected to a three-hour window presentation only to have some salesperson try to pressure them into signing a contract that night. That's not the way we would want to buy windows, so that's not the way we sell windows. We're confident you'll love your new windows for years to come! Our window demonstrations take about 45 minutes and at the end you'll have all of the information you need to make an informed decision. When you're ready to give us a call we'll be happy to show you the difference.
\section{Introduction} \hspace{0.33in} It was well established two decades ago that all electronic states of one dimensional (1D) disordered systems are exponentially localized in the absence of external fields irrespective of the amount of disorder \cite{ATAF}. However, recently some models of disorder introducing correlations \cite{San,Phil} and nonlinearity \cite{Bourb} have been shown to exhibit extended states at particular energies. The electric field, on the other hand, has been shown to delocalize the electronic states in 1D disordered systems, where the wave function becomes power-law decaying [5-7], while for sufficiently large field strengths the eigenstates become extended \cite{Del,Senou1}. Furthermore, it can affect the backscattering and the interferences, yielding a strong enhancement of the localization (Wannier-Stark localization) \cite{Ouas}. In a recent paper, we found that the nonlinearity can either localize or delocalize the electronic states depending on the strength and the sign of the nonlinear potential \cite{Senou2}. Physically, a repulsive nonlinear (NL) potential represents the electron-electron interaction while an attractive one corresponds to the electron-phonon interaction. These interactions are important in various systems such as quantum dots, superlattices etc. \cite{Diez}. Therefore, the electric field and the nonlinear potential effects can compete, and their presence together in the system may lead to the suppression of some effects such as the Wannier-Stark localization. This is the aim of the present letter, where we examine the effect of the NL interaction on the electronic properties of a chain of potentials in the presence of a constant electric field. Note that this effect on the resonant transmission has been investigated by Cota et al. \cite{Cot2}. These resonances seem to change their structure with the NL strength.
However, to the best of our knowledge this effect on the nature of the eigenstates has not been investigated before. \newpage \section{Model description} \hspace{0.35in} The model studied in this letter is defined by the following nonlinear Schrodinger equation \cite{Cot2} \begin{equation} \left\{ -\frac{ d^{2} }{ dx^{2} } + \sum_{n} ( \beta_n + \alpha \left| \Psi (x) \right| ^{2} ) \delta(x-n) -eFx\right\}\Psi (x) = E\Psi (x) \end{equation} \noindent Here $\Psi (x)$ is the single particle wavefunction at $x$, $\beta _{n}$ the potential strength at the $n$-th site, $\alpha $ the nonlinearity strength and $E$ the single particle energy in units of $\hbar ^{2}/2m$ with $m$ the electronic effective mass and $F$ the electric field. The electronic charge $e$ and the lattice parameter $a$ are taken here for simplicity to be unity. The two ends of the system are assumed to be connected ohmically to ideal leads (where the electron moves freely) and maintained at a constant potential difference $V=FL$. The potential strength $\beta _{n}$ is uniformly distributed between $0$ and $W$ in the case of potential barriers and between $-W$ and $0$ in the case of potential wells ($W$ being the degree of disorder). Equation (1) can be mapped by means of the Poincar\'{e} map representation in the ladder approximation (i.e., when the field can be approximated as constant between two consecutive sites \cite{Ouas}; this approximation is valid for $eFa\ll E$) to the following recursive equation \cite{Cot2} \begin{equation} \Psi_{n+1} = \left[\cos k_{n+1} + \frac{k_{n}\sin k_{n+1}}{k_{n+1}\sin k_{n}}\cos k_{n} +\left(\beta_{n}+\alpha\left|\Psi_{n}\right|^{2}\right)\frac{\sin k_{n+1}}{k_{n+1}}\right]\Psi_{n}-\frac{k_{n}\sin k_{n+1}}{k_{n+1}\sin k_{n}}\Psi_{n-1} \end{equation} \noindent where $\Psi _{n}$ is the value of the wavefunction at site $n$ and $k_{n}=\sqrt{E+Fn}$ is the electron wave number at the site $n$.
The solution of equation (2) is carried out iteratively by taking the two initial wave functions at sites $1$ and $2$ of the ideal leads: $\Psi _{1}=$ $\exp(-ik)$ and $\Psi _{2}=$ $\exp (-2ik)$. We consider here an electron with a wave number $k$ incident at site $N+3$ from the right side (by taking the chain length $L=N$, i.e. $N+1$ scatterers). The transmission coefficient ($T$) reads \begin{equation} T=\frac{k_{0}}{k_{L}}\frac{|1-\exp(-2ik_{L})|^{2}}{|\Psi_{N+2}-\Psi_{N+3} \exp(-ik_{L})|^{2}} \end{equation} \noindent where $k_{0}=\sqrt{E}$ and $k_{L}=\sqrt{E+FN}$. \section{Results and discussion} \hspace{0.33in} In this section we first examine the effect of nonlinearity on the energy spectrum of a periodic system in the presence of an electric field. We choose in this case $\beta =1$, $F=0.01$ and $L=500$. For linear systems ($\alpha =0$), the electric field seems to narrow the allowed bands because of the Wannier-Stark localization. Indeed, in this case the transmission coefficient has been shown to decrease abruptly near the band edges while Bloch oscillations appear \cite{Zekri}. The nonlinearity, on the other hand, was found to delocalize, under certain conditions, the electronic states in periodic systems in the sense that the allowed bands become larger and the gaps get narrower \cite{Zekri2}. \hspace{0.33in} Figure 1 shows the effect of the NL on a periodic chain of potential barriers in the presence of an electric field. In particular, we observe, for increasing $\alpha <0$, an increase of the transmission coefficient in the regions localized by the electric field (i.e. Wannier-Stark localization). This field-induced localization tends to disappear for a given NL strength. On the other hand, the amplitude of the Bloch oscillations observed in the linear case (solid line) seems to decrease.
This delocalization is however not observed if we consider periodic potential wells with a repulsive NL, although we found recently that this type of NL delocalizes the electronic states in the gap for such systems \cite{Senou2}. This surprising effect may come from the instabilities (strong drops of the transmission) observed at certain length scales, where any amount of the NL potential strength enhances the localization \cite{Senou2}. These instabilities should appear at larger length scales for the potential barriers. \hspace{0.33in}Let us now examine the effect of NL interactions on disordered chains in the presence of an electric field. It was shown that the wave function becomes power-law decaying in the presence of an electric field \cite{Sou,Del,Cot1}. On the other hand, the electric field was also found in certain cases to modify the scaling of the transmission in jumps, with a behavior as $\exp(-L^{\gamma })$ (with $\gamma >1$ and $L$ the length scale) between them \cite{Ouas}. This case was shown to correspond to a negative differential resistance \cite{Zekri}. Figure 2 shows the transmission coefficient versus the chain length in the case of disordered potential wells. We choose $E=5$, $F=0.015$ and $W=2$ with an ensemble averaging over 2000 samples (sufficient for an accuracy of about $1\%$). We observe clearly that the superlocalization before the first jump tends to be suppressed in the presence of a repulsive NL ($\alpha >0$) and the eigenstates become power-law decaying. The same behavior can be observed in the case of potential barriers (not shown here to avoid a lengthy paper). We note here that in almost all cases the instabilities of $T$ discussed above \cite{Senou2} appear after the first jump of $T$. Therefore, we restricted ourselves to the first jump. \hspace{0.33in}In figure 2 we also observed a characteristic length $l_{c}$ separating the superlocalized states for small lengths from the power-law decaying ones for larger length scales.
This characteristic length seems to decrease logarithmically with the NL strength in the case of disordered potential wells, while it decreases more rapidly for potential barriers (see Fig. 3). \newpage \section{Conclusion} \hspace{0.33in} We studied in this letter the effect of nonlinearity on electrified periodic and disordered chains using a simple Kronig-Penney model. We found that in periodic potential barriers, the nonlinearity contributes to the delocalization of the Wannier-Stark localized states induced by the electric field. In the case of disordered systems, we found that the superlocalization observed recently in such systems in the presence of an electric field \cite{Ouas} is suppressed progressively by the NL interaction, and the wave functions become power-law decaying above a characteristic length $l_{c}$ (which also seems to decrease at least logarithmically with the nonlinearity). However, beyond a certain length scale (corresponding to lengths after the first jump), any amount of the NL interaction destroys the transmission in certain samples instead of enhancing it, due to the instability observed in nonlinear systems. Most probably this instability implies very interesting statistical properties of the transmission in such systems and should be carefully examined. This investigation will be the subject of a forthcoming paper. \newpage
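As a purely illustrative numerical sketch (our addition, not part of the letter): the Poincaré-map recursion of Eq. (2) together with the initial conditions and the transmission formula of Eq. (3) can be iterated directly. The `transmission` helper and the site-indexing conventions below are our reading of the text, not code from the paper.

```python
import cmath
import math

def transmission(E, F, beta, alpha=0.0):
    """Iterate the Poincare-map recursion, Eq. (2), and evaluate T, Eq. (3).

    beta  -- list of the N+1 site potential strengths beta_n (chain length L = N)
    alpha -- nonlinearity strength (alpha = 0 recovers the linear chain)
    """
    N = len(beta) - 1
    k = [math.sqrt(E + F * n) for n in range(N + 4)]   # k_n = sqrt(E + F n)
    k0 = math.sqrt(E)
    psi = [0j] * (N + 4)
    psi[1] = cmath.exp(-1j * k0)    # initial values on the ideal lead
    psi[2] = cmath.exp(-2j * k0)
    for n in range(2, N + 3):
        ratio = k[n] * math.sin(k[n + 1]) / (k[n + 1] * math.sin(k[n]))
        nl = beta[n - 2] + alpha * abs(psi[n]) ** 2     # beta_n + alpha |Psi_n|^2
        psi[n + 1] = ((math.cos(k[n + 1]) + ratio * math.cos(k[n])
                       + nl * math.sin(k[n + 1]) / k[n + 1]) * psi[n]
                      - ratio * psi[n - 1])
    kL = math.sqrt(E + F * N)
    num = abs(1 - cmath.exp(-2j * kL)) ** 2
    den = abs(psi[N + 2] - psi[N + 3] * cmath.exp(-1j * kL)) ** 2
    return (k0 / kL) * num / den
```

As a sanity check, a free chain (`beta` all zero, `F = 0`, `alpha = 0`) gives T = 1 up to rounding, while a periodic barrier chain gives T at most 1.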
Ciro Santilli

# Explicit scalar form of the Maxwell's equations

For numerical algorithms and to get a more low level understanding of the equations, we can expand all terms to the simpler and more explicit form:

$$\begin{aligned}
\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z} &= \frac{\rho}{\varepsilon_0} \\
\frac{\partial B_x}{\partial x} + \frac{\partial B_y}{\partial y} + \frac{\partial B_z}{\partial z} &= 0 \\
\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z} &= -\frac{\partial B_x}{\partial t} \\
\frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x} &= -\frac{\partial B_y}{\partial t} \\
\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} &= -\frac{\partial B_z}{\partial t} \\
\frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z} &= \mu_0 \left(J_x + \varepsilon_0 \frac{\partial E_x}{\partial t}\right) \\
\frac{\partial B_x}{\partial z} - \frac{\partial B_z}{\partial x} &= \mu_0 \left(J_y + \varepsilon_0 \frac{\partial E_y}{\partial t}\right) \\
\frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y} &= \mu_0 \left(J_z + \varepsilon_0 \frac{\partial E_z}{\partial t}\right)
\end{aligned} \tag{7}$$
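A quick numerical sanity check of the scalar form (our addition, not from the page): in natural units with ε₀ = μ₀ = c = 1, and assuming a vacuum plane wave E = (0, sin(x − t), 0), B = (0, 0, sin(x − t)) with ρ = 0 and J = 0, central finite differences should make the curl equations balance to high accuracy.

```python
import math

# Natural units: eps0 = mu0 = c = 1 (an assumption for this sketch).
def Ey(x, t):
    return math.sin(x - t)   # E = (0, Ey, 0)

def Bz(x, t):
    return math.sin(x - t)   # B = (0, 0, Bz)

H = 1e-6  # finite-difference step

def ddx(f, x, t):
    return (f(x + H, t) - f(x - H, t)) / (2 * H)

def ddt(f, x, t):
    return (f(x, t + H) - f(x, t - H)) / (2 * H)

x, t = 0.3, 0.2
# Faraday, z-component: dEy/dx - dEx/dy = -dBz/dt   (Ex = 0 here)
faraday_z = ddx(Ey, x, t) + ddt(Bz, x, t)
# Ampere-Maxwell, y-component: dBx/dz - dBz/dx = Jy + dEy/dt  (Bx = Jy = 0)
ampere_y = -ddx(Bz, x, t) - ddt(Ey, x, t)

assert abs(faraday_z) < 1e-8 and abs(ampere_y) < 1e-8
```

Both residuals are limited only by the finite-difference step and floating-point cancellation, so the tolerance of 1e-8 is comfortable.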
B'nai B'rith Encouraged by Corker-Cardin Compromise; Questions Remain on Viability of Iran Deal B'nai B'rith International is encouraged by the bipartisan agreement reached between Congress and the White House on the Iran Nuclear Agreement Review Act of 2015 (S.615), which, if passed, would give the legislature the power to review a final deal on the Iranian nuclear program. The compromise struck between Sen. Bob Corker and Sen. Ben Cardin gives Congress 30 days to review a deal with Iran following the June 30 negotiating deadline with the United States, its negotiating partners and Tehran. President Obama has pledged to sign the bill if passed by both chambers of Congress. B'nai B'rith calls on the Senate to pass the Iran Nuclear Agreement Review Act and the House of Representatives to do the same when the bill is brought to the floor. Given the high stakes for U.S. national security and stability in the Middle East, it is essential that Congress be involved. The bipartisan consensus on S.615 is encouraging. It conveys the broad concern in the Senate over the proposed nuclear deal with Iran. While this congressional action is vitally important, B'nai B'rith remains deeply concerned about the Iranian regime's interest in adhering to a nuclear agreement based on a 36-year track record of obfuscation and cheating. Iran also continues to act as the world's largest state-sponsor of terrorism, which only furthers our skepticism as to whether Iran will honor the final deal in good faith.
namespace MonoKit.UI
{
    using System;

    public interface IDataViewWrapper
    {
        IViewDefinition ViewDefinition { get; }

        object Data { get; }
    }
}
James Carville and Mary Matalin used to be public relations spokespeople for the Democratic and Republican parties, respectively. They are each articulate, sharp and feisty. Sparks fly when they debate in favor of their parties, so much so that they seem like arch enemies who couldn't possibly exchange a friendly word. I remember the shock I felt (how many years ago was it?) on hearing that they were getting married. I pictured a contentious, loud, quarrelsome household. Recently I saw the two of them on television talking about their private lives. One never knows for sure the truth about public figures or anyone else for that matter, but if taken at face value, they have a solid marriage, children and good family life. They displayed mellowness, respect and total togetherness. It was obvious that the party competitiveness was left outside of the house and what was brought inside was love, goal sharing, and family first. Businesses run by partners are very much like marriages. If the partners are in competition with each other they are creating a lot of damage. First of all, their relationship is limited because there cannot be complete openness between competitors. The ultimate goal of the success of the business is undermined and lost in the morass of the need to win over each other. If the competition is obvious to others, and it usually is, it creates a situation of two camps where employees, directors and suppliers choose sides. If the business is made up of family members, the situation can be even more intense and damaging because the negative effects spill over into personal lives and to family members who may not even be directly involved in the business. As a coach, I help the partners focus on their goal for the business by examining the harm caused by destructive self-centered behavior. The need to compete and win is sometimes an unresolved need to play out old family patterns even if the partners are not members of the same family.
The goal of coaching is not to heal old patterns, but rather to contain them and to create new patterns of satisfaction that directly relate to business success. If the situation is deeply entrenched, therapy may be recommended as part of the solution. Usually, though, the path forward is coaching techniques that direct the partners to clarify and satisfy their needs in productive ways so they can focus on the business, much as Carville and Matalin focus on their marriage.
<?php header("Suborigin: foobar"); ?>
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Allow suborigin in HTTP header</title>
  <script src="/resources/testharness.js"></script>
  <script src="/resources/testharnessreport.js"></script>
  <script src="/security/suborigins/resources/suborigin-cors-lib.js"></script>
</head>
<body>
<div id="container"></div>
<script>
// XMLHttpRequest tests
var SuboriginXHRTest = function(pass, name, src, crossorigin_value) {
  SuboriginTest.call(this, pass, 'XHR: ' + name, src, crossorigin_value);
};

SuboriginXHRTest.prototype.execute = function() {
  var settings = this;
  async_test(test => {
    var xhr = new XMLHttpRequest();
    if (settings.crossorigin_value === 'use-credentials') {
      xhr.withCredentials = true;
    }
    if (settings.pass) {
      xhr.onload = test.step_func_done();
      xhr.onerror = test.unreached_func('Good XHR fired error handler.');
    } else {
      xhr.onload = test.unreached_func('Bad XHR successful.');
      xhr.onerror = test.step_func_done();
    }
    xhr.open('GET', settings.src);
    // Set a custom header to force a preflight. Even though the
    // scheme/host/port of the source and destination origins are the same,
    // the Suborigin should cause the request to be treated as cross-origin.
    xhr.setRequestHeader('x-custom-header', 'foobar');
    xhr.send();
  }, settings.name);
};

var xorigin_preflight_script =
    'http://127.0.0.1:8000/security/resources/cors-script.php';

// XHR preflight tests
new SuboriginXHRTest(
    false,
    'Complex anonymous XHR preflight, no AC for custom header',
    xorigin_preflight_script + '?cors=http-so://foobar.127.0.0.1:8000',
    'anonymous').execute();
new SuboriginXHRTest(
    true,
    'Complex anonymous XHR preflight, has AC for custom header',
    xorigin_preflight_script + '?cors=http-so://foobar.127.0.0.1:8000&' +
        'custom=x-custom-header',
    'anonymous').execute();
new SuboriginXHRTest(
    false,
    'Complex anonymous XHR preflight with \'*\' ACAO, no AC for custom header',
    xorigin_preflight_script + '?cors=*',
    'anonymous').execute();
new SuboriginXHRTest(
    true,
    'Complex anonymous XHR preflight with \'*\' ACAO, has AC for custom header',
    xorigin_preflight_script + '?cors=*&custom=x-custom-header',
    'anonymous').execute();
new SuboriginXHRTest(
    false,
    'Complex XHR with credentials preflight, no AC for custom header',
    xorigin_preflight_script + '?cors=http-so://foobar.127.0.0.1:8000&' +
        'credentials=true',
    'use-credentials').execute();
new SuboriginXHRTest(
    true,
    'Complex XHR with credentials preflight, has AC for custom header',
    xorigin_preflight_script + '?cors=http-so://foobar.127.0.0.1:8000&' +
        'credentials=true&custom=x-custom-header',
    'use-credentials').execute();
new SuboriginXHRTest(
    false,
    'Complex XHR with credentials preflight with \'*\' ACAO, ' +
        'no AC for custom header',
    xorigin_preflight_script + '?cors=*&credentials=true',
    'use-credentials').execute();
new SuboriginXHRTest(
    false,
    'Complex XHR with credentials preflight with \'*\' ACAO, ' +
        'has AC for custom header',
    xorigin_preflight_script + '?cors=*&credentials=true&custom=x-custom-header',
    'use-credentials').execute();
</script>
</body>
</html>
The Iowa Ag Summit: 10 takeaways The GOP's top contenders try to thread the needle on farm policy, ethanol. By JAMES HOHMANN M.Scott Mahaskey/POLITICO The politics of ethanol have shifted enough over the last 15 years that several Republican presidential candidates felt comfortable – at an agricultural summit in Iowa Saturday – expressing their disagreement with special government supports for corn growers. Philosophically, that is. Back in 2000, John McCain skipped the Iowa caucuses altogether on the grounds that he could not win while opposing the federal ethanol subsidy. But the GOP has shifted rightward during the age of Barack Obama, and as the 2016 race gets underway a thousand spectators gathered at the state fairgrounds to watch the party's top contenders try to thread the needle on farm policy, especially when it comes to ethanol mandates. There are limits to just how far the conversation has shifted. Top-tier candidates, including former Florida Gov. Jeb Bush and Wisconsin Gov. Scott Walker, each said they would love to eventually phase out the Renewable Fuel Standard – which requires refiners to blend a certain amount of ethanol into their gasoline – but that it ought to stay in place for at least a few more years. "In general, on any issue, I'm someone who believes in a free and open market," said Walker. "But…right now we don't have a free and open marketplace, so that's why I'm willing to take that position." Texas Gov. Rick Perry acknowledged opposing the mandate in the past and said he wants to get rid of federal tax credits for wind power. But he warned against going too far, too fast. "I don't think you pull the RFS out…and leave all these other subsidies and mandates in place," he said. Others like Mike Huckabee and Rick Santorum, the past two winners of the Iowa caucuses, remained unabashed in their support for the federal requirements. Here are 10 takeaways from the day in Des Moines: Ted Cruz got the Sister Souljah moment he came to Iowa for. 
"How about we deal with the elephant in the room right away?" That's how Bruce Rastetter, the agribusiness mogul who organized the summit and has a large financial stake in the continuation of the RFS, opened his 20-minute interview with the Texas senator. Sitting on a brown leather chair, Cruz took a sip of his water and crossed his legs to show off a pair of black cowboy boots. "The answer you'd like me to give is 'I'm for the RFS, darn it,'" Cruz responded. "That'd be the easy thing to do. But people are pretty fed up with politicians that run around and tell one group one thing and tell another group another thing. Then they go to Washington and don't do anything they said they would do." "I'm going to tell you the truth," he added. Cruz is the sponsor of a Senate bill to repeal the RFS standard over a period of five years, so it's no surprise where he stands. But he did not try to nuance his position. He said he's against corporate welfare of all kinds and against the government picking winners and losers. Rastetter responded by noting that the oil companies block consumer access to ethanol since they control the service stations. "There are remedies in the antitrust laws to deal with that if you're having market access blocked," Cruz responded. Kentucky Sen. Rand Paul, who's also opposed to these kinds of government mandates, could have created his own moment. But he skipped the event.
"The law that passed in 2007 has worked, for sure," said Bush. He then floated the idea of getting rid of the standard in 2022 "or somewhere in the future," if the ethanol industry can sustain itself. Likewise, Bush said he's okay with federal wind energy tax credits but that he would consider phasing them out over a three- to five-year period. The nuance Bush packed into his answers provided depth to his remarks. But long replies to simple questions made him seem overly cautious and unwilling to just say what he really thinks. Asked what America's relationship with China should look like, for example, Bush replied with a typical "on the one hand, on the other hand" answer that lacked passion. "It's one we need to manage with great care because of the complexity of the relationship," said the son of a former U.S. envoy to that country. Bush made an effort to humanize himself. His best moment, in terms of connecting with the crowd, came as he endorsed mandatory country-of-origin labeling. "When I go to Publix in Coral Gables, which I'll do tomorrow after church, to prepare for Sunday Fun Day in my house," he said, "we'll be cooking Iowa beef and we'll be making guacamole. I will want to know where that avocado comes from." Scott Walker acted like the Iowa frontrunner — but he blanked on a local legend. Organizers scheduled the Wisconsin governor to go last, so that they could hold the crowd through the end of a six-and-a-half-hour program. It worked. Walker opened by recalling his breakout performance at Steve King's cattle call event in January, which catapulted him to the top of early polls and made him the leading Bush alternative. He wore the same lucky outfit: a blue shirt and red tie, with shirtsleeves rolled up. He was confident, loose and relaxed. He emphasized the seven years he spent growing up as a preacher's son in an Iowa town of 500.
A folksy reference to the old CBS show "Hee Haw" got the older crowd laughing. "I didn't inherit fame and fortune," Walker explained. In what seemed intended as another implicit knock on Bush, he said: "I know there are some out there, but I'm not a supporter of amnesty." The one eyebrow-raising moment came when Walker blanked on the name of Norman Borlaug. A procession of speakers had already heaped praise on the deceased Iowa biologist for unleashing the green revolution. Walker referenced him as the guy who had won the Nobel Prize, but he couldn't remember who it was. Walker talked in the language of a conservative, but he also worked to appeal to the establishment. Back in 2006, running unsuccessfully in the GOP primary for governor, Walker spoke out against a 10 percent ethanol standard. But on Saturday he defended a similar federal requirement as necessary. "Long term … my goal would be to get to a point where we directly address those market access issues, so that eventually you don't need a standard," he said. Walker did say the federal wind tax credit has served its purpose. "I would support phasing that out over a period of time," he said. Rick Perry did the most to connect with farmers. The former Texas governor is working really hard to get past the bad first impression he left in his "oops"-ridden 2012 campaign. "I don't even remember four years ago," Perry, who left office in January and has been spending as much time in Iowa as anybody, said at the start of his appearance. Last time, Mitt Romney got to Perry's right on immigration. On Saturday, Perry fired up the crowd by ripping Obama for not securing the border and recalling how he sent troops to the Rio Grande. But Perry's presentation most heavily emphasized his own roots in agriculture. He grew up on a cotton farm 200 miles west of Fort Worth. His mom worked at a cotton gin as a bookkeeper. He didn't have running water in his house until he was "six or seven." 
He was active in the 4-H and got a degree in animal science from Texas A&M. After the Air Force, he spent four years farming and eight years as Texas agriculture commissioner. In a way that his rivals did not, Perry expressed concern about falling crop prices and new challenges facing farmers. "I've watched a wheat crop be lost to a hail storm," he said. "I understand the vagaries…"
James Bamidele Oluwafemi Alabi (born 8 November 1994) is an English footballer who plays as a striker for National League club Maidstone United. He has previously played for Stoke City, Scunthorpe United, Mansfield Town, Forest Green Rovers, Accrington Stanley, Ipswich Town, Grimsby Town, Chester, Tranmere Rovers, Dover Athletic, Leyton Orient, Eastleigh and Bromley. Career Stoke City Alabi was born in London Borough of Southwark and began his career with Stoke City playing for the club's academy in 2010–11 before moving to Scottish club Celtic. After a season at Lennoxtown he moved back to Stoke City for the 2012–13 season. On 21 February 2013 he joined League One side Scunthorpe United on loan for a month. After seeing him in training Irons manager Brian Laws compared him as a 'technically better' version of John Gayle, who Laws believes was one of his best signings in his first spell at Scunthorpe. He made his professional debut on 23 February against Hartlepool United at Glanford Park, scoring 10 minutes after coming on as a 67th-minute substitute. On 26 March 2013 his loan spell at Scunthorpe was extended until the end of the 2012–13 season. He remained at Glanford Park for the remainder of the season, playing in nine matches as they failed to avoid relegation. On 31 October 2013, Alabi joined Mansfield Town on a one-month loan. He made his debut for Mansfield the next day against Southend United but was sent-off after for a late tackle. Alabi then joined Forest Green Rovers on a one-month loan on 28 November 2013. He made his debut on 30 November 2013 in an FA Trophy first round tie against Dartford. He played in six games for Forest Green without scoring before returning to Stoke at the end of December 2013. On 11 March 2014, Alabi joined League Two side Scunthorpe United for a second loan spell with the Iron. He made one appearance for Scunthorpe before returning to Stoke. On 9 August 2014, Alabi joined Accrington Stanley on a one-month loan. 
He played three times for Stanley before returning to Stoke. In January 2015 Alabi had a trial with Dutch side De Graafschap. Ipswich Town He was released by Stoke at the end of the 2014–15 season and joined Ipswich Town on a one-year deal on 24 August 2015, after impressing Mick McCarthy by scoring 2 goals in 2 appearances on trial for the under-21 side. On 25 August 2015, Alabi made his Ipswich debut, scoring in a 4–1 win against Doncaster Rovers in a League Cup second round match. On 25 November 2015, Alabi joined National League side Grimsby Town, on loan until 3 January 2016. Chester Following his release by Ipswich, Alabi signed for National League side Chester, on a deal until the end of the 2015–16 season. Tranmere Rovers Alabi joined Tranmere Rovers from Chester on 10 July 2017. Leyton Orient He was placed on the transfer list in May 2019, but removed by the club in July 2019. Eastleigh (loan) On 16 January 2020, Alabi signed for Eastleigh, on loan from Leyton Orient until the end of the 2019–20 season. Bromley On 4 September 2020, Alabi signed for Bromley as a free agent for the 2020–21 season, having left Leyton Orient. On 1 July 2022, Alabi left Bromley following the expiry of his contract. Maidstone United On 9 July 2022, Alabi joined newly promoted National League club Maidstone United. Personal life Born in England, Alabi is of Nigerian descent. Honours Leyton Orient: National League 2018–19. Bromley: FA Trophy 2021–22.
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,204
Q: Is it possible to have a Theme with built-in physical page files? I am creating a theme and was wondering whether it is possible to create pages within my theme folder that mirror the WordPress URL path. So for example, when accessing http://mydomain.com/about, it would look for about.php or about.html inside my theme folder. I have had some success doing this by handling is_404 and then rendering the page with get_template_part($pathname); however, every page then returns a 404 response status, which is not viable. Is there any other way to accomplish this? I wish there were some kind of URL alias I could register for my theme. Thanks in advance
A: WordPress would allow this (sort of). You would still need to create an About page in the back end, but you can tailor your display for such a page in 2 different ways:
* Page Templates
* Page template hierarchy - What this means is there is a certain order of what WordPress looks for when displaying any page/post. Check out the image here for a more direct understanding. For pages specifically, the default is page.php, but page-$slug.php has higher priority. So in this case, you could make page-about.php, and alter what is displayed on that page. I would strongly suggest keeping the main content within that page, but this is how you add additional items to a page structure, such as sidebars, "Related Items" links, etc.
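Since the answer describes a lookup order rather than an API, the priority can be illustrated with a small toy sketch — in Python, purely for illustration, not WordPress code; the function name and inputs are invented for the example. It also includes page-{id}.php, which sits between page-{slug}.php and page.php in WordPress's hierarchy:

```python
# Toy illustration (NOT WordPress source) of the page template lookup order
# described above: page-{slug}.php beats page-{id}.php, which beats page.php,
# with index.php as the final fallback.
def resolve_page_template(available, slug, page_id):
    """Return the first template name a WordPress-style lookup would pick."""
    candidates = [f"page-{slug}.php", f"page-{page_id}.php", "page.php", "index.php"]
    for name in candidates:
        if name in available:
            return name
    return None

theme_files = {"index.php", "page.php", "page-about.php"}
print(resolve_page_template(theme_files, "about", 42))   # -> page-about.php
print(resolve_page_template(theme_files, "contact", 7))  # -> page.php
```

So dropping a page-about.php file into the theme folder changes only how the About page is rendered; the page itself must still exist in the back end, as the answer notes.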
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,715
Brisingr: inheriting a story too long to tell in one book On Friday, 29 February, 2008 In Books, Grimm It is a truth universally acknowledged, that a single trilogy comprises three volumes… "My dear Mr Bennet," said his lady to him one day, "have you heard that the Inheritance trilogy by Mr Christopher Paolini is indeed to extend to four volumes?" Mr Bennet replied that he had not, for he was not at all interested in the peculiarities of fantasy literature and the writers thereof. – Apologies to Jane Austen and Pride and Prejudice. Anyway: Christopher Paolini has discovered that his Inheritance story – Eragon… Eldest… – isn't going to squeeze into three volumes. Therefore, Brisingr, which is due to be published at the end of September this year, is not the final instalment you've all been waiting for with bated breath… there's another one to go after that. So what's going to happen in Brisingr? Is Saphira going to find a bloke? Will Nasuada remain the leader of the Varden, or will there be uprisings and overthrowings? Is Murtagh really all bad now and what's with his dragon? How many more dragons are going to appear out of the woodwork? We'll give you a heads-up when we've placed an order for Brisingr, then you can start reserving to your heart's content. Grimm. New CDs & DVDs In DVDs, Music, New, Simon More new YA CDs and DVDs have arrived. Since they're so new and so popular you may need to reserve them, which you can do online. Click on the title! These are the new DVDs: Stardust (Fantasy Movie) Step Up / Honey / Bring It On – Three movies about dancing in one package! Beyond The Golden Compass – The 'definitive documentary guide' to the book Sicko – Michael Moore's latest doco about the American health system Punk's Not Dead – A documentary charting the history of punk The Amazing Extraordinary Friends – NZ comedy series Superbad – More like super funny! Hah!
Paprika – Quality anime New CDs: MySongs : 36 Feel Good Hits – Various This Is Dub : The Original Master – Various V – Vanessa Hudgens Lupe Fiasco's The Cool – Lupe Fiasco St. Anger – Metallica (Re-issue with DVD) Greetings From Vogeltown West – Huhu All Time Greatest Summer Anthems of the '60s, '70s, '80s, '90s and the '00s – Various YA CDs are free and DVDs cost 50c when issued on a YA or Child's card. Reserves are free. Get thee to a library On Thursday, 28 February, 2008 In Classic novels, DVDs, Library, Simon, Study It has been said that if William Shakespeare were alive today he would be a screenwriter, not a playwright. I don't know how true that is, but it's certainly true that his plays translate well to the big screen. If you're studying Shakespeare at school, often the best place to start with his work is to watch the film adaptation; reading the plays is great but can take some time, and watching them performed isn't always an option. We have loads of Shakespeare-related DVDs for young adults in the library – here is a full list. Some are very close adaptations (Zeffirelli's Romeo and Juliet, for instance), others are films loosely based on Shakespeare's plays (10 Things I Hate About You, She's The Man), and others are documentaries about Shakespeare (The In Search of Shakespeare series). You can study Shakespeare and watch a movie at the same time! Though beware: So wise so young, they say do never live long. The Nominees Are … In Books, Edna Welthorpe The finalists for the NZ Post book awards were announced today. These annual awards go to the best children's and young adults' books published in NZ. The finalists for the young adult category are: Salt, by Maurice Gee; The Sea-Wreck Stranger, by Anna McKenzie; Tomorrow All Will Be Beautiful, by Brigid Lowry; The Transformation of Minna Hargreaves, by Fleur Beale; and Zillah, by Penelope Todd. (Reserve them quickly as they will leap off the shelves.)
Excitingly, you can vote online for your favourite book and be in to win! The Seventh SUBTXT Review In Prudence, Subtext My Lost and Found Life Melodie Bowsher Theme: The theme of this book is Ashley's mother disappearing after embezzling 1.2 million dollars and Ashley is left with debts and rising bills. Recommend?: It is a really good book because it captures what life is like when you have lots of money, the latest clothes and a hot boyfriend. Then when your mother has disappeared after embezzling a million dollars, how do you cope with selling the house to pay off debts? You get to experience what Ashley is feeling and how she is coping with all of this. A Slam Dunk? (sorry) My friend… let's call him Mike… really likes Slam by Nick Hornby, so much so that he a) owns a copy and b) started reading bits out to me, nodding to himself as he read. I take it that this is a good sign, since a) Mike's too busy with his broadband, hard drive recorder and iPod (and his job and life and families and that stuff) to read lots of books, and b) Sam the narrator of Slam (oh, I just got it – Sam… Slam) is 15 and Mike hasn't been 15 for ages, so he reads even fewer young adult books. Slam's about teenage pregnancy, but from a guy's perspective, which is a good thing, but it's also about growing up, skateboarding and having a laugh while reading about serious issues. Top 10: Literal Biters In Books, Grimm, Horror, Top 10 Kym, Children and Youth Services and list-making Specialist, has an interest in books about vampires and werewolves. Here's her Top 10 young adult fiction titles about people who like biting other people: Top 10: Fight Scenes On Wednesday, 20 February, 2008 In DVDs, Simon, Top 10 A well-choreographed fight can often make an action film worth watching. It might be the awesome special effects that make it so great, or that the outcome of the battle determines the fate of humanity … or else it just looks cool. Or (usually) it is all those things. 
Here, then, are some exceptional fight scenes from DVDs held in the Young Adult area: SUBTXT Review Númer Sjö Lady Friday Theme: Magical world, good vs. bad, underdog fights back. Recommend?: It's a good book, and an easy read, but you won't understand it if you haven't read the 4 books that precede it. Favourite Character: My favourite character would have to be the main character, Arthur Penhaligon, because even though he has adapted well to the situation he's in, he's not a conventional hero – he's not strong, and he makes more mistakes than his companions. That being said, he's a pretty resourceful guy, who uses the power of the magical Key to the Kingdom he carries to its full advantage. Girlosophy In Books, Tess "Real girls make the best role models for real girls" – that's the philosophy behind the bestselling and award-winning Girlosophy series of books. Through her travels around the world, author and photographer Anthea Paul has met many young women. She shares their inspirational stories in the Girlosophy books, and offers practical advice for young women around the world. The latest Girlosophy instalment, The Girlo Travel Survival Kit is essential reading for young women considering overseas travel. For more information check out the Girlosophy website.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,662
Are you looking for activities near Málaga? Vincci Seleccion Posada del Patio is in the heart of Malaga, walking distance from Carmen Thyssen Museum and Mercado de Atarazanas. This 5-star hotel is within close proximity of Plaza de la Constitucion and MIMMA. Make yourself at home in one of the 106 air-conditioned rooms featuring minibars. Complimentary wired and wireless Internet access keeps you connected, and digital programming provides entertainment. Private bathrooms with separate bathtubs and showers feature complimentary toiletries and bidets. Conveniences include safes and desks, and you can also request irons/ironing boards. Enjoy recreation amenities such as a seasonal outdoor pool or take in the view from a rooftop terrace. Additional amenities include complimentary wireless Internet access, concierge services, and wedding services. Satisfy your appetite at the hotel's restaurant, which serves breakfast and dinner. Dining is also available at a coffee shop/café, and 24-hour room service is provided. Quench your thirst with your favorite drink at a bar/lounge. Featured amenities include express check-in, express check-out, and complimentary newspapers in the lobby. Planning an event in Malaga? This hotel has facilities measuring 470 square feet (44 square meters), including a meeting room. With a stay at Hotel Petit Palace Plaza Malaga, you'll be centrally located in Malaga, steps from Plaza de la Constitucion and Palacio Episcopal. This 4-star hotel is within close proximity of Carmen Thyssen Museum and Malaga Cathedral. Make yourself at home in one of the 66 air-conditioned rooms featuring minibars and plasma televisions. Complimentary wired and wireless Internet access keeps you connected, and digital programming provides entertainment. Private bathrooms with bathtubs or showers feature complimentary toiletries and hair dryers. Conveniences include phones, as well as laptop-compatible safes and desks.
Take advantage of recreation opportunities such as bicycles to rent, or other amenities including complimentary wireless Internet access and concierge services. This Art Deco hotel also features babysitting/childcare (surcharge), a television in the lobby, and tour/ticket assistance. Featured amenities include complimentary high-speed (wired) Internet access, a business center, and a computer station. Event facilities at this hotel consist of a conference center and a meeting room. With a stay at Molina Lario in Malaga (Malaga Historic Centre), you'll be minutes from Palacio Episcopal and Malaga Cathedral. This 4-star hotel is within close proximity of MIMMA and Port of Malaga. Make yourself at home in one of the 103 air-conditioned rooms featuring minibars. Complimentary wired and wireless Internet access keeps you connected, and satellite programming provides entertainment. Private bathrooms with shower/tub combinations feature deep soaking bathtubs and designer toiletries. Conveniences include phones, as well as safes and desks. Enjoy recreation amenities such as an outdoor pool or take in the view from a rooftop terrace. Additional amenities include complimentary wireless Internet access, concierge services, and babysitting/childcare. Grab a bite to eat at the hotel's restaurant, which features a bar, or stay in and take advantage of 24-hour room service. Relax with your favorite drink at a bar/lounge or a poolside bar. Breakfast is available daily for a fee. Featured amenities include complimentary high-speed (wired) Internet access, a business center, and complimentary newspapers in the lobby. Guests may use a roundtrip airport shuttle for a surcharge, and self parking (subject to charges) is available onsite. With a stay at Room Mate Larios, you'll be centrally located in Malaga, steps from Plaza de la Constitucion and Carmen Thyssen Museum. This 4-star hotel is within close proximity of Palacio Episcopal and Malaga Cathedral. 
Make yourself at home in one of the 45 air-conditioned rooms featuring minibars and LCD televisions. Complimentary wireless Internet access keeps you connected, and satellite programming is available for your entertainment. Private bathrooms with bathtubs or showers feature complimentary toiletries and hair dryers. Conveniences include phones, as well as safes and desks. Featured amenities include a computer station, complimentary newspapers in the lobby, and dry cleaning/laundry services. This hotel has 3 meeting rooms available for events. With a stay at AC Hotel Malaga Palacio by Marriott, you'll be centrally located in Malaga, steps from Palacio Episcopal and MIMMA. This 4-star hotel is within close proximity of Malaga Cathedral and Port of Malaga. Make yourself at home in one of the 214 air-conditioned rooms featuring minibars and plasma televisions. Wireless Internet access (surcharge) keeps you connected, and satellite programming is available for your entertainment. Private bathrooms with shower/tub combinations feature designer toiletries and complimentary toiletries. Conveniences include desks and complimentary newspapers, and you can also request cribs/infant beds. Be sure to enjoy recreational amenities including a health club and an outdoor pool. Additional amenities include complimentary wireless Internet access, concierge services, and babysitting/childcare. Enjoy a bite to eat at a coffee shop/café, or stay in and take advantage of the hotel's 24-hour room service. Quench your thirst with your favorite drink at a bar/lounge. Featured amenities include a business center, complimentary newspapers in the lobby, and dry cleaning/laundry services. Event facilities at this hotel consist of a conference center, conference space, and meeting rooms. Self parking (subject to charges) is available onsite. Room Mate Lola is in the heart of Malaga, walking distance from Mercado de Atarazanas and Centre de Arte Contemporaneo.
This 4-star hotel is within close proximity of MIMMA and Port of Malaga. Make yourself at home in one of the 50 air-conditioned rooms featuring minibars and flat-screen televisions. Complimentary wireless Internet access keeps you connected, and satellite programming is available for your entertainment. Private bathrooms with bathtubs or showers feature complimentary toiletries and hair dryers. Conveniences include phones, as well as safes and desks. Featured amenities include complimentary newspapers in the lobby, dry cleaning/laundry services, and a 24-hour front desk. Planning an event in Malaga? This hotel has facilities measuring 226 square feet (21 square meters), including a meeting room. Self parking (subject to charges) is available onsite. A stay at NH Málaga places you in the heart of Malaga, walking distance from Mercado de Atarazanas and MIMMA. This 4-star hotel is within close proximity of Centre de Arte Contemporaneo and Carmen Thyssen Museum. Make yourself at home in one of the 133 air-conditioned rooms featuring minibars and LCD televisions. Rooms have private patios. Complimentary wireless Internet access keeps you connected, and satellite programming is available for your entertainment. Private bathrooms with bathtubs or showers feature complimentary toiletries and bidets. Enjoy recreational amenities such as a sauna and a 24-hour fitness center. Additional features include complimentary wireless Internet access, babysitting/childcare, and wedding services. Featured amenities include a 24-hour business center, complimentary newspapers in the lobby, and dry cleaning/laundry services. Planning an event in Malaga? This hotel has 13000 square feet (1208 square meters) of space consisting of a conference center, conference space, and meeting rooms. Self parking (subject to charges) is available onsite. With a stay at Suite Novotel Malaga Centro in Malaga, you'll be minutes from Mercado de Atarazanas and Carmen Thyssen Museum.
This 4-star hotel is within close proximity of Plaza de la Constitucion and MIMMA. Make yourself at home in one of the 90 air-conditioned rooms featuring microwaves. Complimentary wireless Internet access is available to keep you connected. Bathrooms have bathtubs and hair dryers. Conveniences include safes and sofa beds, and you can also request cribs/infant beds (complimentary). Malaga Centro is in the heart of Malaga, walking distance from Carmen Thyssen Museum and Plaza de la Constitucion. This 4-star hotel is within close proximity of Mercado de Atarazanas and MIMMA. Make yourself at home in one of the 147 air-conditioned rooms featuring free minibar items. Complimentary wireless Internet access keeps you connected, and satellite programming is available for your entertainment. Private bathrooms have complimentary toiletries and hair dryers. Conveniences include safes and desks, and housekeeping is provided daily. Be sure to enjoy recreational amenities including an outdoor pool and a seasonal outdoor pool. This hotel also features complimentary wireless Internet access, concierge services, and a television in the lobby. Enjoy a meal at a restaurant or in a coffee shop/café. Or stay in and take advantage of the hotel's room service (during limited hours). Relax with your favorite drink at a bar/lounge or a poolside bar. With a stay at Hotel Guadalmedina, you'll be centrally located in Malaga, steps from Centre de Arte Contemporaneo and minutes from Mercado de Atarazanas. This 4-star hotel is within close proximity of MIMMA and Port of Malaga. Make yourself at home in one of the 60 air-conditioned rooms featuring minibars and plasma televisions. Complimentary wireless Internet access keeps you connected, and satellite programming is available for your entertainment. Private bathrooms have deep soaking bathtubs and complimentary toiletries. Conveniences include phones, as well as laptop-compatible safes and desks.
With a stay at Hotel MS Maestranza in Malaga, you'll be minutes from Plaza de Toros de la Malagueta and Gibralfaro Castle. This 4-star hotel is within close proximity of Alcazaba and Tajo's Tree-Lined Avenue. Make yourself at home in one of the 94 air-conditioned rooms featuring minibars. Rooms have private balconies. Complimentary wireless Internet access keeps you connected, and cable programming is available for your entertainment. Bathrooms feature shower/tub combinations, designer toiletries, and bidets. Don't miss out on the many recreational opportunities, including a health club, a spa tub, and a sauna. This hotel also features complimentary wireless Internet access, concierge services, and babysitting/childcare. Featured amenities include a business center, limo/town car service, and express check-in. A roundtrip airport shuttle is provided for a surcharge (available on request). Hotelvoy's hotel search engine for Málaga will help you find the best accommodation prices. To save time and money on your reservations, enter your arrival and departure dates and click the search button. We offer a selection of Málaga hotels with photos, complete information, maps to each hotel and fantastic prices. Compare prices across several travel agencies for your business trip, weekend getaway, next long weekend or summer holiday. With the price comparison you will find the most central or most charming hotels in Málaga at very cheap prices. Book now and get great discounts on Málaga hotels.
{ "redpajama_set_name": "RedPajamaC4" }
4,656
class CPRTMesh
{
public:
    CPRTMesh( void );
    ~CPRTMesh( void );

    HRESULT OnCreateDevice( LPDIRECT3DDEVICE9 pd3dDevice, D3DFORMAT fmt );
    HRESULT OnResetDevice();
    void    OnLostDevice();
    void    OnDestroyDevice();

    // General
    HRESULT LoadEffects( IDirect3DDevice9* pd3dDevice, const D3DCAPS9* pDeviceCaps );

    // Mesh
    HRESULT LoadMesh( IDirect3DDevice9* pd3dDevice, WCHAR* strMeshFileName );
    HRESULT SetMesh( IDirect3DDevice9* pd3dDevice, ID3DXMesh* pMesh );
    HRESULT AdjustMeshDecl( IDirect3DDevice9* pd3dDevice, ID3DXMesh** ppMesh );
    DWORD GetNumVertices() { return m_pMesh->GetNumVertices(); }
    ID3DXMesh* GetMesh() { return m_pMesh; }
    D3DXMATERIAL* GetMaterials() { return m_pMaterials; }
    DWORD GetNumMaterials() { return m_dwNumMaterials; }
    bool IsMeshLoaded() { return ( m_pMesh != NULL ); }
    IDirect3DTexture9* GetAlbedoTexture() { return m_pAlbedoTextures[0]; }
    float GetObjectRadius() { return m_fObjectRadius; }
    const D3DXVECTOR3& GetObjectCenter() { return m_vObjectCenter; }

    // Misc
    void GetVertexUnderMouse( const D3DXMATRIX* pmProj, const D3DXMATRIX* pmView,
                              const D3DXMATRIX* pmWorld, unsigned int* uVert );
    void GetSHTransferFunctionAtVertex( unsigned int uVert, int uTechnique,
                                        unsigned int uChan, float* pfVals );
    void GetVertexPosition( unsigned int uVert, D3DXVECTOR3* pvPos );

    // N dot L
    void RenderWithNdotL( IDirect3DDevice9* pd3dDevice, D3DXMATRIX* pmWorldViewProj,
                          D3DXMATRIX* pmWorldInv, bool bRenderWithAlbedoTexture,
                          CDXUTDirectionWidget* aLightControl, int nNumLights,
                          float fLightScale );

    // SHIrradEnvMap
    void RenderWithSHIrradEnvMap( IDirect3DDevice9* pd3dDevice,
                                  D3DXMATRIX* pmWorldViewProj,
                                  bool bRenderWithAlbedoTexture );
    void ComputeSHIrradEnvMapConstants( float* pSHCoeffsRed, float* pSHCoeffsGreen,
                                        float* pSHCoeffsBlue );

    // PRT
    void SetPRTBuffer( ID3DXPRTBuffer* pPRTBuffer, WCHAR* strFile );
    HRESULT LoadPRTBufferFromFile( WCHAR* strFile );
    HRESULT LoadCompPRTBufferFromFile( WCHAR* strFile );
    HRESULT CompressPRTBuffer( D3DXSHCOMPRESSQUALITYTYPE Quality, UINT NumClusters,
                               UINT NumPCA, LPD3DXSHPRTSIMCB pCB = NULL,
                               LPVOID lpUserContext = NULL );
    void ExtractCompressedDataForPRTShader();
    void ComputeShaderConstants( float* pSHCoeffsRed, float* pSHCoeffsGreen,
                                 float* pSHCoeffsBlue, DWORD dwNumCoeffsPerChannel );
    void RenderWithPRT( IDirect3DDevice9* pd3dDevice, D3DXMATRIX* pmWorldViewProj,
                        bool bRenderWithAlbedoTexture );
    HRESULT SavePRTBufferToFile( WCHAR* strFile );
    HRESULT SaveCompPRTBufferToFile( WCHAR* strFile );
    DWORD GetPRTOrder() { return m_dwPRTOrder; }
    bool IsPRTUncompressedBufferLoaded() { return ( m_pPRTBuffer != NULL ); }
    bool IsPRTCompBufferLoaded() { return ( m_pPRTCompBuffer != NULL ); }
    ID3DXPRTCompBuffer* GetPRTCompBuffer() { return m_pPRTCompBuffer; }
    ID3DXPRTBuffer* GetPRTBuffer() { return m_pPRTBuffer; }
    bool IsPRTShaderDataExtracted() { return ( m_aPRTClusterBases != NULL ); }
    bool IsPRTEffectLoaded() { return ( m_pPRTEffect != NULL ); }
    UINT GetOrderFromNumCoeffs( UINT dwNumCoeffs );

    // LDPRT
    void RenderWithLDPRT( IDirect3DDevice9* pd3dDevice, D3DXMATRIX* pmWorldViewProj,
                          D3DXMATRIX* pmNormalXForm, bool bUniform,
                          bool bRenderWithAlbedo );
    void SetLDPRTData( ID3DXPRTBuffer* pLDPRTBuff, D3DXVECTOR3* pShadingNormal );
    void ComputeLDPRTConstants( float* pSHCoeffsRed, float* pSHCoeffsGreen,
                                float* pSHCoeffsBlue, DWORD dwNumCoeffsPerChannel );
    static VOID WINAPI StaticFillCubeTextureWithSHCallback( D3DXVECTOR4* pOut,
                                                            CONST D3DXVECTOR3* pTexCoord,
                                                            CONST D3DXVECTOR3* pTexelSize,
                                                            LPVOID pData );
    HRESULT LoadLDPRTFromFiles( WCHAR* strLDPRTFile, WCHAR* strShadingNormalsFile );
    HRESULT SaveLDPRTToFiles( WCHAR* strLDPRTFile, WCHAR* strShadingNormalsFile );
    HRESULT CreateLDPRTData();

protected:
    struct RELOAD_STATE
    {
        bool bUseReloadState;
        bool bLoadCompressed;
        WCHAR strMeshFileName[MAX_PATH];
        WCHAR strPRTBufferFileName[MAX_PATH];
        WCHAR strLDPRTFile[MAX_PATH];
        WCHAR strShadingNormalsFile[MAX_PATH];
        D3DXSHCOMPRESSQUALITYTYPE quality;
        UINT dwNumClusters;
        UINT dwNumPCA;
    } m_ReloadState;

    ///////////
    // Mesh
    ID3DXMesh* m_pMesh;
    CGrowableArray <IDirect3DTexture9*> m_pAlbedoTextures;
    D3DXMATERIAL* m_pMaterials;
    ID3DXBuffer* m_pMaterialBuffer;
    DWORD m_dwNumMaterials;
    float m_fObjectRadius;
    D3DXVECTOR3 m_vObjectCenter;
    LPDIRECT3DDEVICE9 m_pd3dDevice;
    D3DVIEWPORT9 m_ViewPort;

    ///////////
    // N dot L
    ID3DXEffect* m_pNDotLEffect;

    ///////////
    // SHIrradEnvMap
    ID3DXEffect* m_pSHIrradEnvMapEffect;

    ///////////
    // PRT
    ID3DXPRTBuffer* m_pPRTBuffer;
    ID3DXEffect* m_pPRTEffect;
    DWORD m_dwPRTOrder;
    ID3DXPRTCompBuffer* m_pPRTCompBuffer;

    // The basis buffer is a large array of floats where
    // Call ID3DXPRTCompBuffer::ExtractBasis() to extract the basis
    // for every cluster.  The basis for a cluster is an array of
    // (NumPCAVectors+1)*(NumChannels*Order^2) floats.
    // The "1+" is for the cluster mean.
    float* m_aPRTClusterBases;

    // m_aPRTConstants stores the incident radiance dotted with the transfer function.
    // Each cluster has an array of floats which is the size of
    // 4+MAX_NUM_CHANNELS*NUM_PCA_VECTORS.  This number comes from: there can
    // be up to 3 channels (R,G,B), and each channel can
    // have up to NUM_PCA_VECTORS of PCA vectors.  Each cluster also has
    // a mean PCA vector which is described with 4 floats (and hence the +4).
    float* m_aPRTConstants;

    ///////////
    // LDPRT
    D3DXVECTOR3* m_pLDPRTShadingNormals;
    ID3DXPRTBuffer* m_pLDPRTBuffer;
    ID3DXMesh* m_pLDPRTMesh;
    ID3DXEffect* m_pLDPRTEffect;
    D3DXHANDLE m_hValidLDPRTTechnique;
    D3DXHANDLE m_hValidLDPRTTechniqueWithTex;
    IDirect3DCubeTexture9* m_pSHBasisTextures[9];  // Store SH basis functions using textures
};
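The array sizes quoted in the comments for m_aPRTClusterBases and m_aPRTConstants can be sanity-checked with a small sketch. This is illustrative Python, not D3DX code; the helper names are invented, and MAX_NUM_CHANNELS is taken as 3 per the R,G,B remark in the comment:

```python
# Sketch of the per-cluster array sizing described in the header comments
# (hypothetical helpers; not part of the D3DX API).
# basis floats per cluster   = (NumPCA + 1) * (NumChannels * Order^2)
# constant floats per cluster = 4 + MAX_NUM_CHANNELS * NumPCA
def prt_cluster_basis_floats(num_pca, num_channels, order):
    return (num_pca + 1) * (num_channels * order * order)

def prt_cluster_constant_floats(num_pca, max_channels=3):
    return 4 + max_channels * num_pca

# Example: order-6 SH, 3 colour channels, 24 PCA vectors per cluster
print(prt_cluster_basis_floats(24, 3, 6))   # (24+1) * (3*36) = 2700
print(prt_cluster_constant_floats(24))      # 4 + 3*24 = 76
```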
{ "redpajama_set_name": "RedPajamaGithub" }
1,731
Plaque: Angus McGill
Erection date: 13/10/2017
This plaque was installed in 2017, on the 30th anniversary of the Great Storm. It commemorates Angus McGill, who initiated the appeal to replace London's lost trees and the planting of the oak nearby. McGill died in 2015 after 42 years as a columnist with the Evening Standard and creator of the Clive and Augusta strip cartoons. He was named Descriptive Writer of the Year 1968, and appointed MBE 1990.
Site: Storm Tree - Charing Cross (2 memorials)
WC2, Strand
Angus McGill
Great Storm of 1987
In the early hours of Friday 16 October 1987 a great storm struck South East ...
Initiated the Evening Standard's appeal to replace London's lost trees. For 4...
Westminster City Council
Created in 1965 from the former area of the Metropolitan Boroughs of St Maryl...
Storm Tree - Charing Cross
When we first saw the plaque it was in the pavement close to the tree but is ...
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,388
Astronomy
La Caille crater – lunar crater
Geography
Caille – French commune in the Alpes-Maritimes department
Caille Island – island between Grenada and Carriacou, in the Grenadines
Allonzier-la-Caille – French commune in the Haute-Savoie department
People
Alain Caillé (1944) – French sociologist
Fanny Caillé (1850-1900) – French painter
Florence Loiret-Caille (1975) – French actress
Gisèle Caille (...) – former French road cyclist
Niall Caille mac Áeda (... – 846) – High King of Ireland
Nicolas Louis La Caille (1713-1762) – French astronomer
Pierre Caille (1911-1996) – Belgian sculptor and painter
René Caillié (1799-1838) – French explorer
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,036
\section{Introduction} This work is part of a series of studies on radial, axially symmetric free surface flows, investigated for both their intrinsic interest and their significance as test cases to validate numerical models that integrate the two-dimensional (2D) Shallow-Water Equations (SWE) \citep{VC2003, CV2012}. Basic contributions typically address the hydraulic jump in radial flow and/or diverging or converging channels, and the subsequent stability analysis \citep{Law1983, Hager1985, AB2008, Fog2012}. Their practical importance lies in supporting the design of stilling basins and similar hydraulic structures. The radial hydraulic jump, both diverging and converging, is studied by \cite{VC2011, VC2013}, including the spatial development along its length. The issue is also topical from an interdisciplinary viewpoint: recent experimental studies on physical hydraulic models are devoted to studying the phenomenon of the standing accretion shock instability of collapsing stellar cores in astrophysics \citep{Fog2012}. This work makes available reference analytical solutions. Recent work in the field of computational hydraulics has shown the importance of such analytical solutions \citep{Del2013} for validating the consistency, accuracy and robustness of shock-capturing numerical methods for the 1D and 2D SWE.

\section{The mechanical scheme} \label{sec:mech} An axially symmetric free surface radial flow is considered, generated by the incidence of a vertical free jet against a plane horizontal plate perpendicular to the jet axis, which is designated as the $z$ axis. The radial velocity is positive if directed outward from the centre of the coordinate system $(r, \theta, z)$. Sufficiently far from the $z$ axis, the vertical velocity is small with respect to the radial velocity, and the tangential velocity is zero.
The same flow structure is valid, except for the sign of $u$, for an axially symmetric radial centripetal flow, which moves from the external boundary of a circular plate towards its centre. The flow is fed from outside through a circular sluice gate and discharges, with a free fall, into a central pipe at the plate centre; the flow moves downward through the free-flowing weir at the centre. Figure~\ref{fig:disegno} provides a sketch of the reference flow field.

\begin{figure} \begin{center} \includegraphics[width=1.0\textwidth]{fig1.pdf} \end{center} \caption{Sketch of typical axially symmetric flows, in (a) centrifugal and (b) centripetal direction.} \label{fig:disegno} \end{figure}

All classical hypotheses of the SWE are assumed to apply to an incompressible liquid in the gravitational field. Inertial and gravity effects are considered dominant with respect to viscous effects. Surface tension is neglected. The pressure distribution over each vertical is hydrostatic; the radial velocity is assumed uniform on each vertical; and the vertical velocity is assumed negligible with respect to the radial velocity. The axial symmetry is maintained everywhere.

\subsection{Basic steady flow equations} Under the specified assumptions, the continuity and dynamic equations for radial steady flow read \begin{equation} \label{eq:contdyn} \frac{\partial}{\partial r}\left(U\,r\,Y\right)=0 \, ; \qquad U\frac{\partial U}{\partial r}+ g\frac{\partial Y}{\partial r}+g\frac{\partial z_b}{\partial r}+\frac{f}{2}\frac{U^2}{Y}=0 \end{equation} Here $Y$ is the flow depth; $U$ the vertically-averaged radial velocity ($r$ direction); $g$ the gravity acceleration; $z_b$ the bottom elevation; $f$ the friction coefficient defined by $\tau_{0}=\left({1/2}\right)\,f\,\rho\,U^2$; $\rho$ the liquid density, assumed constant; and $\tau_0$ the bed shear stress.
A reference liquid discharge, $Q$, is considered, which flows in a reference circular sector of half angular amplitude $\alpha$ ($\alpha=\pi$ in the fully circular case). $\,Q/\left({2} \, \alpha\right)$ is the liquid discharge per unit angular width. A reference specific energy, $E_{0}$, is also considered, which is related to the steady inviscid flow over the flat bed, with $E=Y+Q^{2}/\left[{2} g \left(2 \alpha r Y \right)^{2}\right]$. The total force of the flow is $F=\left({1/2}\right)\rho g \, r Y^{2} + \rho Q^{2}/ \left(2 \alpha r Y \right)$. The reference steady flow is characterised by a constant specific energy $E_{0}$ and a constant liquid discharge $Q$. Non-dimensional equations are derived from Eq. (\ref{eq:contdyn}) with the critical depth as the vertical length scale, $Y_{0}=Y_c=\left({2/3}\right)E_0$; the critical radius $r_{0}=r_c=\left[Q/\left(2\alpha Y_c\sqrt{g\,Y_c}\right)\right]$ as the longitudinal length scale; and the critical velocity, $U_{0}=U_c=\sqrt{g Y_c}$ as velocity scale. Critical quantities $\,Y_c\,$ and $\,r_c\,$ are defined as those minimising the specific energy and the total force, respectively \citep{VC2011}. The non-dimensional reference discharge is $\varGamma=\left[Q/\left(2\,\alpha\,E_0^2\,\sqrt{gE_0}\right)\right]$. It follows that the typical aspect ratio of the problem is $\beta=r_c/Y_c=\left({3/2}\right)^{5/2}\varGamma\simeq {2.76}\varGamma$. The non-dimensional radius, depth, velocity, bottom elevation are, respectively: $\xi=r/r_0$; $\eta=Y/Y_0$; $u=U/U_0$; $\zeta=z_b/Y_0$. 
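As an illustration of the scaling just introduced, the following Python sketch (not part of the original paper; the input values are arbitrary) computes the critical scales $Y_c$, $U_c$, $r_c$ and the non-dimensional discharge $\varGamma$ from assumed values of $Q$, $\alpha$ and $E_0$, and checks the stated identity $\beta=\left({3/2}\right)^{5/2}\varGamma$.

```python
import math

def radial_flow_scales(Q, alpha, E0, g=9.81):
    """Critical-state scales for radial shallow-water flow.

    Q     : total discharge through the sector [m^3/s]
    alpha : half angular amplitude of the sector [rad] (pi for the full circle)
    E0    : reference specific energy [m]
    """
    Yc = (2.0 / 3.0) * E0                     # critical depth  Y_c = (2/3) E_0
    Uc = math.sqrt(g * Yc)                    # critical velocity U_c = sqrt(g Y_c)
    rc = Q / (2.0 * alpha * Yc * Uc)          # critical radius r_c
    Gamma = Q / (2.0 * alpha * E0**2 * math.sqrt(g * E0))  # non-dim. discharge
    beta = rc / Yc                            # aspect ratio beta = r_c / Y_c
    return Yc, Uc, rc, Gamma, beta

# Example with assumed values: Q = 0.1 m^3/s over a full circle, E0 = 0.15 m
Yc, Uc, rc, Gamma, beta = radial_flow_scales(0.1, math.pi, 0.15)
# The identity beta = (3/2)**(5/2) * Gamma (about 2.76 Gamma) must hold
assert abs(beta - (1.5 ** 2.5) * Gamma) < 1e-12
```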
For steady flow the continuity equation reduces to the condition of constant liquid discharge as \begin{equation} \label{eq:adcontinuity} \left(u\,\xi\,\eta\right)=1 \end{equation} The dynamic equation is expressed in terms of $\mathcal{F}=\left[\left({1/2}\right)\,\xi\,\eta^2+{1}/\left(\xi\,\eta\right)\right]$, the non-dimensional total force, or in terms of $\mathcal{E}=\left[\eta+{1}/\left(2\,\xi^2\,\eta^2\right)\right]$, the non-dimensional specific energy, with $\mathcal{H}=\zeta+\mathcal{E}$ as the non-dimensional total head \begin{equation} \label{eq:adforce} \frac{d}{d\xi}\left(\frac{1}{2}\,\xi\,\eta^2+\frac{1}{\xi\,\eta}\right)= \frac{1}{2}\,\eta^2-\frac{1}{2}\,\beta\,f\,\left(\frac{1}{\xi\,\eta^2}\right)-\,\xi\,\eta\,\frac{d\zeta}{d\xi} \end{equation} \begin{equation} \label{eq:adenergy} \frac{d}{d\xi}\left(\zeta+\eta+\frac{1}{2\,\xi^2\,\eta^2}\right)=-\frac{1}{2}\,\beta\,f\,\left(\frac{1}{\xi^2\,\eta^3}\right) \end{equation} Both formulations, which are equivalent for a continuous solution (though not for a discontinuous one, as is well known from the shallow-water theory of inviscid shocks), are useful to determine analytical solutions and to carry out a detailed analysis of the conservation properties of the system. \section{Analytical solution: inviscid flow over flat horizontal bed} \label{sec:inviscid flat} An analytical solution is obtained by setting $\zeta={0}$ and $f={0}$ in Eq. (\ref{eq:adenergy}) and corresponds to the conservation of the specific energy $E=E_{0}=\left({3/2}\right)Y_c$, $\mathcal{E}={3/2}$, in the entire flow domain. The solution for the flow depth is readily obtained in the following implicit form \begin{equation} \label{eq:csi} \xi=\frac{1}{\eta\sqrt{3-2\,\eta}} \end{equation} This relationship determines the position where a prescribed depth occurs. Equation (\ref{eq:csi}) can be inverted using symbolic software (e.g., Mathematica; see www.wolframalpha.com) to obtain three solutions in the complex field.
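The implicit profile can be checked numerically. The Python sketch below (illustrative, not part of the derivation) verifies that any depth $0<\eta<3/2$, placed at the radius given by the implicit relation $\xi={1}/\left(\eta\sqrt{3-2\eta}\right)$, carries the constant specific energy $\mathcal{E}=3/2$.

```python
import math

def xi_of_eta(eta):
    """Implicit profile xi(eta) for inviscid flow over a flat bed."""
    return 1.0 / (eta * math.sqrt(3.0 - 2.0 * eta))

def specific_energy(xi, eta):
    """Non-dimensional specific energy  E = eta + 1/(2 xi^2 eta^2)."""
    return eta + 1.0 / (2.0 * xi**2 * eta**2)

# Any depth 0 < eta < 3/2 placed at the radius given by the implicit
# relation must carry the constant specific energy E = 3/2.
for eta in (0.2, 0.8, 1.0, 1.4):
    xi = xi_of_eta(eta)
    assert abs(specific_energy(xi, eta) - 1.5) < 1e-12
```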
A procedure similar to that of \cite{VC2008}, omitted here for brevity, is then used. In the flow domain of physical interest ($\xi\geq {1};\,\,\varphi=\arcsin\left({1}/{\xi}\right);\,\,\pi/{2}\geq\varphi>{0}$), the solutions read \begin{equation} \label{eq:real profiles} \begin{aligned} \eta_{1}&=\eta_{sb}=\frac{1}{2}+\cos\left(\frac{2}{3}\,\varphi\right) &: {1}\leq\eta_{sb}<\frac{3}{2}\\ \eta_{2}&=\eta_{sp}=\frac{1}{2}-\frac{1}{2}\,\cos\left(\frac{2}{3}\,\varphi\right)+\frac{\sqrt{3}}{2}\, \sin\left(\frac{2}{3}\,\varphi\right) &: {1}\geq\eta_{sp}>{0} \end{aligned} \end{equation} These are the explicit analytical solutions, not yet provided in the literature, namely: the subcritical (\textit{sb}) flow solution and the supercritical (\textit{sp}) flow solution. The physically meaningless negative-depth solution is omitted here but is shown in Fig.~\ref{fig:inviscid_flat}. The trends of the Froude number $\Fr=u\,\eta^{-1/2}$, and of the total force $\mathcal{F}$ are also plotted. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{fig2a.pdf} \includegraphics[width=0.48\textwidth]{fig2b.pdf} \end{center} \caption{Flow depth profiles in inviscid steady flow over flat horizontal bed; (\textit{a}) depth $\eta\left(\xi\right)$ \textit{vs.} radius $\xi$, (\textit{b}) Froude number $\Fr$ and $\mathcal{F}/\mathcal{F}_c$ \textit{vs.} radius $\xi$} \label{fig:inviscid_flat} \end{figure} \section{Analytical solution: flat horizontal bed with friction} \label{sec:friction flat} This solution is obtained by setting $\zeta ={0}$ in Eq. (\ref{eq:adenergy}) and by referring to the specific energy $E=E_{0}=\left({3/2}\right)\,Y_c\,$, $\mathcal{E}=\left({3/2}\right)$, in the critical section. The friction term is computed by assuming a constant (small) friction coefficient.
Equations (\ref{eq:csi}) and (\ref{eq:real profiles}), with reference only to the physically meaningful solutions, are considered to be the basic $0^{th}$-order solutions within a perturbation approach as \begin{equation} \label{eq:perturb} \eta=\eta_{0}+\epsilon\,\eta_{1}+\ldots\, ; \qquad \,\epsilon=\frac{1}{2}\,\beta\,f\ll {1} \end{equation} At the $0^{th}$ order, the inviscid solution determined in the previous section applies. It suffices to establish that $\eta_{0}=\eta_{sb}$ or $\eta_{0}=\eta_{sp}$ in Eqs. (\ref{eq:real profiles}), depending on the case. It is straightforward to demonstrate that, at first order, the depth profile is the solution of the differential equation \begin{equation} \label{eq:eta1diff1} \frac{d}{d\xi}\left(\eta_{1}-\frac{\eta_{1}}{\xi^{2}\,\eta_{0}^{3}}\right)=-\frac{1}{\xi^{2}\,\eta_{0}^{3}} \end{equation} Using Eqs. (\ref{eq:csi}) and (\ref{eq:perturb}), the solution is found as \begin{equation} \eta_{1}=\left[\frac{\left(\eta_{0}-{1}\right)\sqrt{{3}-{2}\,\eta_{0}}}{{2}\,\eta_{0}^{2}}+ \frac{\textrm{arctanh}\left(\sqrt{{1}-\left({2/3}\right)\eta_{0}}\right)}{\sqrt{3}} - \frac{\textrm{arctanh}\left({1}/\sqrt{3}\right)}{\sqrt{3}}\right] \frac{\eta_{0}}{{3}\left(\eta_{0}-{1}\right)} \label{eq:eta1} \end{equation} In Eq. (\ref{eq:eta1}), \textrm{arctanh} is the inverse hyperbolic tangent function. The integration constant is determined by establishing the critical depth at the critical radius, i.e., $\eta_{1}({1})={0}$. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{fig3a.pdf} \includegraphics[width=0.48\textwidth]{fig3b.pdf} \end{center} \caption{Comparison between inviscid solution, numerical solution and analytical first order solution for $\epsilon={0.02}$; (\textit{a}) subcritical flow, (\textit{b}) supercritical flow} \label{fig:real_flat} \end{figure} The proposed analytical solution is valid for both cases, i.e., when the basic solution is super- or subcritical.
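The first-order correction can be checked numerically. The following Python sketch (illustrative, not from the paper) evaluates the $0^{th}$-order subcritical solution and the correction $\eta_1$, and verifies by central differences that they satisfy the first-order differential equation.

```python
import math

def eta0_sub(xi):
    """0th-order (inviscid) subcritical depth over a flat bed."""
    phi = math.asin(1.0 / xi)
    return 0.5 + math.cos(2.0 * phi / 3.0)

def eta1_corr(e0):
    """First-order friction correction eta_1, written as a function of eta_0."""
    return ((e0 - 1.0) * math.sqrt(3.0 - 2.0 * e0) / (2.0 * e0 ** 2)
            + math.atanh(math.sqrt(1.0 - 2.0 * e0 / 3.0)) / math.sqrt(3.0)
            - math.atanh(1.0 / math.sqrt(3.0)) / math.sqrt(3.0)
            ) * e0 / (3.0 * (e0 - 1.0))

def lhs_fn(xi):
    """Quantity whose xi-derivative the first-order equation constrains."""
    e0 = eta0_sub(xi)
    return eta1_corr(e0) * (1.0 - 1.0 / (xi ** 2 * e0 ** 3))

# Central-difference check of the first-order differential equation:
# d/dxi [ eta_1 (1 - 1/(xi^2 eta_0^3)) ] = -1/(xi^2 eta_0^3)
xi, h = 2.0, 1e-5
lhs = (lhs_fn(xi + h) - lhs_fn(xi - h)) / (2.0 * h)
rhs = -1.0 / (xi ** 2 * eta0_sub(xi) ** 3)
assert abs(lhs - rhs) < 1e-6
```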
Figure~\ref{fig:real_flat} compares both flow states with the complete solution of Eq. (\ref{eq:adenergy}), determined with a fourth-order Runge-Kutta numerical method \citep{BT1992}. For the selected values of the parameters, Fig.~\ref{fig:real_flat} shows the computed error, i.e., the difference between the analytical depth computed from Eqs. (\ref{eq:perturb}) and (\ref{eq:eta1}) and the numerical solution of Eq. (\ref{eq:adenergy}). The analytical solution is very close to the numerical one, so that the perturbation procedure (\ref{eq:perturb}) can be truncated at first order. \section{Analytical solution: inviscid flow over a variable bottom topography} \label{sec:inviscid variable bottom} This solution is obtained by setting, in Eq. (\ref{eq:adenergy}), $\zeta\neq{0}$ and $f={0}$, and by considering that $\zeta =\zeta(\xi)$. The procedure to obtain the analytical solution is similar to that of \cite{VC2008}. The total head $H=z_b+E=\left({3/2}\right)Y_c$ is constant, together with its non-dimensional counterpart, $\mathcal{H}=\zeta+\mathcal{E}=\left({3/2}\right)$. By setting $\psi={2}\left({3/2}-\zeta\right);\,\,\phi=\arcsin\left[{3}\,\sqrt{3}/\left(\psi^{3/2}\,\xi\right)\right]\,$ in the flow domain of physical interest (discarding the negative depth solution) the solutions are \begin{equation} \begin{aligned} \eta_{sb}&=\frac{1}{6}\,\psi+\frac{1}{3}\,\psi\,\cos\left(\frac{2}{3}\,\phi\right) \\ \eta_{sp}&=\frac{1}{6}\,\psi-\frac{1}{6}\,\psi\,\cos\left(\frac{2}{3}\,\phi\right)+ \frac{\sqrt{3}}{6}\,\psi\,\sin\left(\frac{2}{3}\,\phi\right) \label{eq:real z-profiles} \end{aligned} \end{equation} These are the explicit analytical solutions for the flow depth: the subcritical solution $\eta_{sb}=\eta_{sb}(\xi)$ and the supercritical solution $\eta_{sp}=\eta_{sp}(\xi)$, respectively. They satisfy the implicit relationship $\xi=\left[{1}/\left(\eta\sqrt{\psi-{2}\,\eta}\right)\right]$.
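The explicit solutions over a variable bottom can be verified numerically: the Python sketch below (illustrative values of $\xi$ and $\zeta$, not from the paper) checks that both branches conserve the non-dimensional total head $\mathcal{H}=3/2$, and raises an error when the real-solution condition fails.

```python
import math

def depths_over_bottom(xi, zeta):
    """Sub- and supercritical depths over a bottom of elevation zeta."""
    psi = 2.0 * (1.5 - zeta)
    s = 3.0 * math.sqrt(3.0) / (psi ** 1.5 * xi)
    if s > 1.0:
        raise ValueError("no real solution: flow choking (condition violated)")
    phi = math.asin(s)
    eta_sb = psi / 6.0 + (psi / 3.0) * math.cos(2.0 * phi / 3.0)
    eta_sp = (psi / 6.0 - (psi / 6.0) * math.cos(2.0 * phi / 3.0)
              + (math.sqrt(3.0) / 6.0) * psi * math.sin(2.0 * phi / 3.0))
    return eta_sb, eta_sp

def total_head(xi, eta, zeta):
    """Non-dimensional total head  H = zeta + eta + 1/(2 xi^2 eta^2)."""
    return zeta + eta + 1.0 / (2.0 * xi**2 * eta**2)

# Both branches must conserve the non-dimensional total head H = 3/2.
for xi, zeta in ((2.0, 0.1), (4.0, 0.3), (6.0, 0.0)):
    for eta in depths_over_bottom(xi, zeta):
        assert abs(total_head(xi, eta, zeta) - 1.5) < 1e-12
```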
The condition to obtain a real solution is \begin{equation} \label{eq:z condition} \frac{{3}\sqrt{3}}{\psi^{3/2}\,\xi}\leq{1}\quad\Rightarrow\quad \zeta\leq\frac{3}{2}-\frac{1}{2}\left(\frac{{3}\sqrt{3}}{\xi}\right)^{2/3} \end{equation} thus indicating that the bottom elevation must be sufficiently small with respect to the available total head; the threshold also depends on the non-dimensional position. Flow choking requires the treatment of the hydraulic jump, which is analysed separately in the following section. In other words, a hydraulic jump occurs if inequality (\ref{eq:z condition}) is not satisfied. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{fig4a.pdf} \includegraphics[width=0.48\textwidth]{fig4b.pdf} \end{center} \caption{Inviscid flow over uneven bottom; (\textit{a}) subcritical flow, (\textit{b}) supercritical flow} \label{fig:inviscid_uneven} \end{figure} In Fig.~\ref{fig:inviscid_uneven}, the behaviour of the free surface (without choking) is shown for sub- and supercritical flows, respectively. The bottom elevation is assumed as given according to the following equations, which are applied, respectively, for the sub- and supercritical cases $\,\zeta=\left({3/5}\right) \exp{\left[-{2}\left(\xi-{3}\right)^{2}\right]}+\left({3/5}\right) \exp{\left[-\left(\xi-{5}\right)^{2}\right]}$; $\,\zeta=\left({1/10}\right) \exp{\left[-{2}\left(\xi-{3}\right)^{2}\right]}+\left({3/5}\right) \exp{\left[-\left(\xi-{5}\right)^{2}\right]}$. \section{Analytical solution: direct hydraulic jump} \label{sec:hydraulic jump} For simplicity and generality, this solution is determined under the simplest conditions described above (flat bottom, inviscid flow). The jump is considered an inviscid shock of zero length, including the entire energy dissipation.
A more detailed treatment, which incorporates the gradual variation of the physical quantities inside a jump of finite length, is reported by \cite{VC2011} and \cite{VC2013}. Here, the mechanical scheme is much simpler and more general, even if less detailed. Neglecting the length of the jump allows one to obtain an analytical solution for the sequent depths and the jump position, which was not available previously. In the inviscid frame, the specific energy $E_{1}$ is constant upstream of the jump, whereas a different, lower constant value $E_{2}$ characterises the downstream flow portion. All quantities are made non-dimensional using the upstream depth and upstream radius. Note that the word 'upstream' is used in the physical sense of the flow direction, so that it indicates smaller values of the radius for diverging flows and larger values of the radius for converging flows. Under these hypotheses, \cite{VC2011} demonstrate that the only non-dimensional parameter governing the phenomenon is the specific energy ratio $\mathcal{E}_R=E_{2}/E_{1}$, whose complement to unity, $({1}-\mathcal{E}_R)$, is the rate of mechanical energy per unit weight dissipated in the jump. The implicit expressions for the supercritical and subcritical free surface profiles are $\,\, \xi=\left[{1}/\left(\eta_{sp}\sqrt{{3}-{2}\,\eta_{sp}}\right)\right]$; $\xi=\left[{1}/\left(\eta_{sb}\sqrt{{3}\,\mathcal{E}_R-{2}\,\eta_{sb}}\right)\right]$. Let superscripts $*$ and $**$ denote the sequent quantities upstream and downstream of the jump, respectively (the depths and velocities giving the same total force).
Denoting $\xi_j$ as the shock position, the conditions at the jump are: \textit{i)} the uniqueness of jump position, \textit{ii)} mass conservation, and \textit{iii)} total force conservation as $\, \xi_{j}=\xi^{\ast}=\xi^{\ast\ast}\,$; $\, \left(u\,\eta\right)^{\ast}=\left(u\,\eta\right)^{\ast\ast}\,$; $\, \left[\left({1/2}\right)\,\xi\,\eta^{2}+u^{2}\,\eta\right]^{\ast}=\left[\left({1/2}\right)\,\xi\,\eta^{2}+u^{2}\,\eta\right]^{\ast\ast}\,$. A nonlinear system (with $\eta^\ast$ and $\eta^{\ast\ast}$ as unknowns) is obtained. The fundamental equation for the sequent depth ratio, $\Lambda=\eta^{\ast\ast}/\eta^{\ast}$ then is \begin{equation} \label{eq:jump equation} \mathcal{E}_R\,\Lambda^{3}-\left(4-\mathcal{E}_R\right)\,\Lambda^{2}+\left(4\,\mathcal{E}_R-{1}\right)\,\Lambda-{1}={0} \end{equation} In the range ${0}<\mathcal{E}_R<{1}$, it has only one real solution \begin{equation} \label{eq:con depths ratio} \Lambda=\frac{{4}-\mathcal{E}_R}{{3}\,\mathcal{E}_R}+\frac{\Upsilon}{{3}\,\mathcal{E}_R}+ \frac{{16}-{5}\,\mathcal{E}_R-{11}\,\mathcal{E}_R^{2}}{{3}\,\Upsilon\,\mathcal{E}_R} \end{equation} with: $\Upsilon=\left({64}-{30}\mathcal{E}_R-{51}\mathcal{E}_R^{2}+{17}\mathcal{E}_R^{3}+ {9}\sqrt{{20}\mathcal{E}_R^{2}+\mathcal{E}_R^{3}-{42}\mathcal{E}_R^{4}+\mathcal{E}_R^{5}+{20}\,\mathcal{E}_R^{6}}\right)^{1/3}$. 
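The closed-form root can be verified numerically. The Python sketch below (not part of the paper) evaluates $\Lambda$ for a few energy ratios and checks that it annihilates the cubic; note that $\Upsilon\to{0}$ as $\mathcal{E}_R\to{1}$, so the expression should not be evaluated at $\mathcal{E}_R={1}$.

```python
import math

def sequent_depth_ratio(er):
    """Real root Lambda of the sequent-depth cubic for 0 < er < 1.

    The radicand of the cube root is positive on (0, 1); the expression
    degenerates (Upsilon -> 0) as er -> 1.
    """
    disc = 20*er**2 + er**3 - 42*er**4 + er**5 + 20*er**6
    ups = (64 - 30*er - 51*er**2 + 17*er**3
           + 9 * math.sqrt(disc)) ** (1.0 / 3.0)
    return ((4 - er) / (3 * er) + ups / (3 * er)
            + (16 - 5*er - 11*er**2) / (3 * ups * er))

# The closed-form ratio must annihilate the cubic, and the downstream
# (subcritical) depth must exceed the upstream one (Lambda > 1).
for er in (0.2, 0.5, 0.8):
    lam = sequent_depth_ratio(er)
    residual = er*lam**3 - (4 - er)*lam**2 + (4*er - 1)*lam - 1
    assert abs(residual) < 1e-8
    assert lam > 1.0
```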
The corresponding super- and subcritical depths are \begin{equation} \label{eq:con depths} \eta^\ast=\frac{6}{\Lambda^{2}+\Lambda+{4}}\,;\quad \eta^{\ast\ast}=\frac{{6}\,\Lambda}{\Lambda^{2}+\Lambda+{4}} \end{equation} The shock position is \begin{equation} \label{eq:jump position} \xi_j=\xi_j^\ast=\frac{1}{\eta^\ast\sqrt{{3}-{2}\,\eta^\ast\,}}=\xi_j^{\ast\ast}= \frac{1}{\eta^{\ast\ast}\sqrt{{3}\,\mathcal{E}_R-{2}\,\eta^{\ast\ast}\,}} \end{equation} Usually, the literature gives the sequent depth ratio, the sequent depths and the jump position as functions of the upstream Froude number $\Fr^{\ast}$ (or some equivalent quantity), and more or less complicated computations are required to find the jump position. Note that Eqs. (\ref{eq:con depths ratio}), (\ref{eq:con depths}), (\ref{eq:jump position}) are fully predictive, at least in the framework of the inviscid-shock theory. For a prescribed value of the discharge, the quantity $\mathcal{E}_R$ is directly found if well-posed boundary conditions are known. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{fig5a.pdf} \includegraphics[width=0.48\textwidth]{fig5b.pdf} \end{center} \caption{Hydraulic jump (\textit{a}) depth and force profiles $\eta\left(\xi\right)$ and $\mathcal{F}\left(\xi\right)$ for $\mathcal{E}_R={0.5}$, (\textit{b}) depth profiles $\eta\left(\xi\right)$ for different $\mathcal{E}_R$ values for diverging and converging flows.} \label{fig:jump} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{fig6a.pdf} \includegraphics[width=0.48\textwidth]{fig6b.pdf} \end{center} \caption{Hydraulic jump (\textit{a}) Sequent depths and jump position \textit{vs.} energy ratio, (\textit{b}) Sequent depths ratio and sequent Froude numbers \textit{vs.} energy ratio. (...) limit direct/undular jump. 
Experimental data from \cite{Rub63, Rub64}: (+) diverging, (o) converging; experimental points in (\textit{b}) refer to the sequent depths ratio} \label{fig:condepths} \end{figure} In Fig.~\ref{fig:jump}, for a prescribed value of the energy ratio $\mathcal{E}_R={0.5}$, the behaviour of a jump is plotted (\textit{a}) using the derived equations, whereas the physical flow features are plotted in (\textit{b}) for three different values of $\mathcal{E}_R$. Figure~\ref{fig:condepths} shows (\textit{a}) the behaviour of the sequent depths and the jump position versus the energy ratio and (\textit{b}) the behaviour of the sequent depths ratio and sequent Froude numbers, as functions of the same parameter. A comparison with experimental data from \cite{Rub63} and \cite{Rub64} is also shown. The selected dataset is chosen because the experiments were performed at quite a large scale; further details are given in \cite{VC2011} and \cite{VC2013}. The dashed lines represent existence limits for the direct jump. Below the classical limit ${Fr}^{\ast 2}<{3}$, corresponding to $\eta^{\ast}>{0.6}$ and $\mathcal{E}_R>{0.95}$, the undular jump occurs, for which the present theory does not hold. Notably, the above-mentioned limit for the upstream Froude number matches the condition that the undular jump occurs when no more than five percent of the available specific energy is dissipated in the jump. \section{Conclusions} \label{sec:Conclusions} Analytical results concerning radial, axially symmetric, steady free surface flows are determined and discussed. The selected results can be used in field-scale hydraulic engineering because they pertain to the radial flow in stilling basins, where gravitational and inertial effects are dominant. The simplest case is the explicit solution for a flat horizontal bottom under inviscid flow. Analytical expressions for the sub- and supercritical flow depths are determined.
An additional, perturbation-based solution is presented for frictional flow over a flat bottom, in the limit of small friction, both for sub- and supercritical flows. An analytical solution is also determined for inviscid flow over a spatially-varied bottom elevation, both for sub- and supercritical flows. The existence condition for these flows is determined: the bottom elevation must be sufficiently small with respect to the prescribed total head. Analytical expressions are determined for the sequent depths and the jump position over a flat bed as functions only of the prescribed energy dissipation rate. This quantity is the unique parameter governing the phenomenon, under the hypotheses of inviscid flow outside the jump and of an inviscid shock of zero length. These analytical results represent useful benchmarks to test numerical integration schemes for the Shallow-Water Equations and important reference conditions for the stability analysis of the radial hydraulic jump. \pagebreak