The Cathedral Santuario de la Virgen de Guadalupe (Cathedral Shrine of the Virgin of Guadalupe) is the cathedral church of the Roman Catholic Diocese of Dallas, Texas. The structure dates from the late 19th century and is located in the Arts District of downtown Dallas. The church oversees the second largest Catholic church membership in the United States; its average Sunday attendance is 11,200.

History

Background

In 1869, Dallas's first Catholic parish, Sacred Heart Church, was established by the Bishop of Galveston. The church was built in 1872 at Bryan and Ervay Streets, near present-day St. Paul Station. In 1890, Dallas was established as a diocese, and Sacred Heart became the diocesan cathedral of Dallas, with Bishop Thomas Brennan serving as the first bishop. Amid Dallas's tremendous growth at the time, the parish soon outgrew its church building, and the need for a new cathedral arose.

Construction

The property on which the current cathedral now stands was purchased for US$30,000, which, adjusted for inflation, is equivalent to over $600,000 in 2007 dollars. The cornerstone for the cathedral was laid on June 17, 1898, and the church was formally dedicated on October 26, 1902.

Consolidation

As the Dallas–Fort Worth metroplex grew through the early 20th century, other diocesan parishes were built in neighboring suburbs, decreasing Sacred Heart's attendance. By the 1960s, however, the neighboring Our Lady of Guadalupe parish had outgrown its facilities. That parish, located on Harwood Street, was established in 1914 and primarily served Mexican immigrants. Bishop Thomas Tschoepe invited Our Lady of Guadalupe to merge with Sacred Heart, and by 1975 the Guadalupe church on Harwood had closed following the parishes' consolidation. On December 12, 1977, Sacred Heart Cathedral was renamed Cathedral Santuario de Guadalupe—"the Cathedral Shrine of Our Lady of Guadalupe."
The renaming reflects the congregation's large Spanish-speaking membership; the parish now offers Masses and various programs in both Spanish and English, as well as English classes.

Expansion

The cathedral recently underwent a major multi-phase renovation project. As part of the project, a US$20 million bell tower housing a 49-bell carillon was constructed. The bell tower had been planned by the original architect, Nicholas J. Clayton, but was never built.

See also

List of Catholic cathedrals in the United States
List of cathedrals in the United States
List of buildings and structures in Dallas, Texas

External links

Official Cathedral Site
Roman Catholic Diocese of Dallas Official Site
public class Patient {

    static String name;
    static String disease;

    // Builds a treatment message for the (hard-coded) patient record.
    public static String getTreatment() {
        name = "karim";
        disease = "migraine";
        String treatment = name + " is suffering from " + disease + ", take this med";
        return treatment;
    }

    // Prints the patient's bill.
    public static void payBill() {
        System.out.println("Your bill is 3099, pay it up");
    }

    public static void main(String[] args) {
        System.out.println(getTreatment());
        payBill();
    }
}
Danny Bonaduce Net Worth 2019

Danny Bonaduce is an American comedian, radio and TV personality, professional wrestler, and former child actor. On November 14, 2011, Bonaduce joined Seattle radio station KZOK-FM as a talk show host and has hosted the show ever since. Let's find out more about Danny Bonaduce's net worth in 2019.

Danny Bonaduce was born on August 13, 1959, in Broomall, Pennsylvania. His father, Joseph Bonaduce, was a television writer and producer. Bonaduce grew up in a troubled and unstable household and was severely abused, both emotionally and physically, by his father. His mother tried hard to shield him from that abusive environment and keep the peace, but in vain.

In 1985, Bonaduce married Setsuko Hattori, a Japanese real estate agent. The marriage did not last. He remarried in 1990 to Gretchen Hillmer, whom he met on a blind date; surprisingly, that relationship lasted more than 16 years and produced two children, Isabella and Dante. The marriage ended in divorce on April 9, 2007, over incompatibility issues. Bonaduce then began dating Amy Railsback, a former substitute school teacher, and the two married in November 2010. At present, Railsback manages Bonaduce's career full time, operates Gravel Tones Productions Inc, and appears occasionally on The Smoking Gun Presents: World's Dumbest… on truTV.

As a child, Danny Bonaduce starred in the 1970 sitcom The Partridge Family. He played Danny Partridge, the sarcastic, redheaded middle child of a family pop band led by Shirley Jones, miming bass guitar on screen. Bonaduce appeared in a number of films during and after The Partridge Family's run; the 1978 film Corvette Summer is probably his best known.
He co-starred with Mark Hamill; in the film, the two play high school students trying to recover a stolen custom Corvette Stingray. From 1994 to 1996, Bonaduce hosted his own program, The Danny Bonaduce Show, on Chicago radio station WLUP (The Loop). From 1996 to 1998 he hosted a morning radio show on Detroit's WKQI. Bonaduce has also guest-starred on numerous TV series, such as the action series CHiPs, and in 1999 he appeared in a Christmas episode of Sabrina, the Teenage Witch. He has taken part in a number of other TV projects, including the 2005 VH1 reality show Breaking Bonaduce, as well as various radio shows in Los Angeles and Philadelphia.

In 2009, Danny Bonaduce received an award from the E! Entertainment comedy show The Soup. He won celebrity wrestling matches against Donny Osmond at a charity event and against The Brady Bunch's Barry Williams at another, and he defeated "Reverend" Bob Levy by TKO on September 13, 2008.

Net Worth of Danny Bonaduce

Danny Bonaduce has an estimated net worth of around $3 million. He has earned his money as a radio host, TV personality and wrestler, taking home sizable sums from his many popular radio shows, his TV appearances and several celebrity wrestling events.

Danny Bonaduce holds fairly conservative political views, as observed during his guest appearances, and is a strong supporter of capital punishment. In 2002, he published an autobiography, Random Acts of Badness. His provocative statements about other celebrities, made on many social media platforms, have also contributed to his popularity.
# Do redox reactions always contain pure elements? [closed]

According to a video by the Organic Chemistry Tutor, a quite well-known chemistry channel on YouTube, you can easily identify a redox reaction by seeing if there are atoms in their elemental states on one side of a reaction that form compounds (or compounds decomposing to elemental atoms) on the other. Is this accurate?

## closed as off-topic by Mithoron, Todd Minehardt, Buck Thorn, Buttonwood, airhuff May 27 at 21:20

This question appears to be off-topic. The users who voted to close gave this specific reason:

If this question can be reworded to fit the rules in the help center, please edit the question.

• Not quite. True, all such reactions are redox. But not all (and not even most) redox reactions are like this. – Ivan Neretin Dec 20 '18 at 5:09
• Do not try to learn chemistry via moronic youtube videos. You can be sure they are going to tell you some supersimplified nonsense, which breaks the second you try to use it on something else. – Karl Dec 20 '18 at 5:42
• Counterexample: K + Na -> NaK (en.wikipedia.org/wiki/Sodium-potassium_alloy) – Karl Dec 20 '18 at 5:46
• The problem is that until you have learned a good deal of chemistry, you can't tell moronic videos from the good ones. Ditto for teachers and textbooks. – Ivan Neretin Dec 20 '18 at 7:14
• Some textbooks are however known to be good. I wouldn't say the same about youtube channels. ;-) – Karl Dec 20 '18 at 7:47

A counterexample in which no elemental species appears on either side:

$$\ce{2FeCl3 + SnCl2 + 2 HCl -> 2 FeCl2 + H2[SnCl6]}\tag{1}$$
The animated family drama The Willoughbys debuted on Netflix on 22 April. Netflix had announced the release date on Twitter; the film is an adaptation of the book of the same name.
Named in honor of Jane Rathbone, our current Board Chair and CEO Emeritus, our Annual Design Retreat is inspired by Jane's belief that experiencing great design is essential to creating great design. The opportunity to retreat with colleagues and dedicate time to a design-focused dialogue has left an immeasurable imprint on our practice. For many attendees the Retreat is a time of professional renewal where together we share, collaborate, and challenge our thinking.

The faculty-led team draws inspiration from visiting exceptional contemporary, vernacular and historic architecture, and from lectures, site walks, sketching, photography and dialogue. Upon their return, the team formally shares their experiences and their new points of reference with the broader firm. This annual retreat has taken us to fascinating and diverse locations across the globe. The true hallmark of the experience is its impact on our design culture, as we deepen our connections to great design, our practice and our colleagues.
/*
 * $Id$
 */

#if !defined(XERCESC_INCLUDE_GUARD_MATCH_HPP)
#define XERCESC_INCLUDE_GUARD_MATCH_HPP

// ---------------------------------------------------------------------------
//  Includes
// ---------------------------------------------------------------------------
#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/ArrayIndexOutOfBoundsException.hpp>
#include <xercesc/util/RuntimeException.hpp>

XERCES_CPP_NAMESPACE_BEGIN

/**
 * An instance of this class has ranges captured in matching
 */
class XMLUTIL_EXPORT Match : public XMemory
{
public:
    // -----------------------------------------------------------------------
    //  Public Constructors and Destructor
    // -----------------------------------------------------------------------
    Match(MemoryManager* const manager = XMLPlatformUtils::fgMemoryManager);

    /**
     * Copy constructor
     */
    Match(const Match& toCopy);
    Match& operator=(const Match& toAssign);
    virtual ~Match();

    // -----------------------------------------------------------------------
    //  Getter functions
    // -----------------------------------------------------------------------
    int getNoGroups() const;
    int getStartPos(int index) const;
    int getEndPos(int index) const;

    // -----------------------------------------------------------------------
    //  Setter functions
    // -----------------------------------------------------------------------
    void setNoGroups(const int n);
    void setStartPos(const int index, const int value);
    void setEndPos(const int index, const int value);

private:
    // -----------------------------------------------------------------------
    //  Initialize/Clean up methods
    // -----------------------------------------------------------------------
    void initialize(const Match& toCopy);
    void cleanUp();

    // -----------------------------------------------------------------------
    //  Private data members
    //
    //  fNoGroups
    //      Represents no of regular expression groups
    //
    //  fStartPositions
    //      Array of start positions in the target text matched to specific
    //      regular expression group
    //
    //  fEndPositions
    //      Array of end positions in the target text matched to specific
    //      regular expression group
    //
    //  fPositionsSize
    //      Actual size of Start/EndPositions array.
    // -----------------------------------------------------------------------
    int fNoGroups;
    int fPositionsSize;
    int* fStartPositions;
    int* fEndPositions;
    MemoryManager* fMemoryManager;
};

/**
 * Inline Methods
 */

// ---------------------------------------------------------------------------
//  Match: getter methods
// ---------------------------------------------------------------------------
inline int Match::getNoGroups() const
{
    if (fNoGroups < 0)
        ThrowXMLwithMemMgr(RuntimeException, XMLExcepts::Regex_Result_Not_Set, fMemoryManager);

    return fNoGroups;
}

inline int Match::getStartPos(int index) const
{
    if (!fStartPositions)
        ThrowXMLwithMemMgr(RuntimeException, XMLExcepts::Regex_Result_Not_Set, fMemoryManager);

    if (index < 0 || fNoGroups <= index)
        ThrowXMLwithMemMgr(ArrayIndexOutOfBoundsException, XMLExcepts::Array_BadIndex, fMemoryManager);

    return fStartPositions[index];
}

inline int Match::getEndPos(int index) const
{
    if (!fEndPositions)
        ThrowXMLwithMemMgr(RuntimeException, XMLExcepts::Regex_Result_Not_Set, fMemoryManager);

    if (index < 0 || fNoGroups <= index)
        ThrowXMLwithMemMgr(ArrayIndexOutOfBoundsException, XMLExcepts::Array_BadIndex, fMemoryManager);

    return fEndPositions[index];
}

// ---------------------------------------------------------------------------
//  Match: setter methods
// ---------------------------------------------------------------------------
inline void Match::setStartPos(const int index, const int value)
{
    if (!fStartPositions)
        ThrowXMLwithMemMgr(RuntimeException, XMLExcepts::Regex_Result_Not_Set, fMemoryManager);

    if (index < 0 || fNoGroups <= index)
        ThrowXMLwithMemMgr(ArrayIndexOutOfBoundsException, XMLExcepts::Array_BadIndex, fMemoryManager);

    fStartPositions[index] = value;
}

inline void Match::setEndPos(const int index, const int value)
{
    if (!fEndPositions)
        ThrowXMLwithMemMgr(RuntimeException, XMLExcepts::Regex_Result_Not_Set, fMemoryManager);

    if (index < 0 || fNoGroups <= index)
        ThrowXMLwithMemMgr(ArrayIndexOutOfBoundsException, XMLExcepts::Array_BadIndex, fMemoryManager);

    fEndPositions[index] = value;
}

XERCES_CPP_NAMESPACE_END

#endif
Article | March 01 1989

Growth and partial differentiation of presumptive human cardiac myoblasts in culture.
D S Kohtz, N R Dische, T Inagami, B Goldman
Department of Pathology, Mount Sinai School of Medicine, New York 10029.
J Cell Biol (1989) 108 (3): 1067–1078. https://doi.org/10.1083/jcb.108.3.1067

A cell culture model for human cardiac myogenesis is introduced. Human fetal myocardial cells were dissociated enzymatically, and cultured in a mitogen-rich medium that promoted the growth of presumptive cardiac myoblasts. Strains of human cardiac myoblasts were generated from different anatomical regions of the fetal heart. The cells could be cultured for at least 30 generations, or frozen and recovered for later use. Differentiation was induced by culturing the cardiac myoblasts in a mitogen-poor medium. Differentiation of cardiac myoblasts was marked primarily by transcriptional activation of the atrial natriuretic factor (ANF) gene. Evidence is presented that posttranscriptional processing of ANF transcripts is affected by the anatomical origin of the cardiac myoblasts and the presence of cocultured neuronal cells. Cardiac myoblasts induced to differentiate in culture synthesized only low levels of sarcomeric myosin and cardiac alpha-actin, suggesting that differentiation of these cells progresses through two phases: an initial, noncontractile phase that is represented by the differentiating cultured cells; and a later contractile phase, in which myofibrillar assembly is accentuated and modulated by secondary signals from the cardiac milieu.
DACA VS. TRUMP – 3RD ROUND – DACA STILL STANDING

March 1, 2018, by Jose Perez

The phase-out ordered by Trump, negotiations in Congress, the government shutdown, and now the courts take over the fight. Certainly, the last few months have been a roller coaster for the Dreamers and DACA-eligible immigrants. In January, a court first ruled that DACA-eligible immigrants could still apply to renew their status even if it had expired after Trump cancelled the program. Then another court ruled that ALL eligible DACA immigrants could apply to renew or apply as first-time applicants; however, that case has not yet been decided on appeal. Finally, in the last week of February, the United States Supreme Court decided not to take the DACA cases for review, which means that the federal court decisions keeping DACA alive are the applicable law, and Trump must follow it whether he likes it or not.

Specifically, on January 9, 2018, a federal judge in San Francisco, William Alsup, ruled in favor of the University of California and its president, former Homeland Security secretary Janet Napolitano. They sued to keep the program going after the Trump administration said in September that it would end it within six months. Alsup said Attorney General Jeff Sessions had wrongly concluded that DACA was put in place without proper legal authority. Trump's Justice Department immediately said it would contest that ruling before the 9th Circuit Court of Appeals in California. But government lawyers also asked the Supreme Court to take the highly unusual step of agreeing to hear the case, bypassing the appeals court. The United States Supreme Court declined to bypass the appeals courts in order to take up a DACA case. The Supreme Court's decision keeps in place lower court decisions that allow current DACA recipients to continue to apply for status renewals.
Significantly, it may well mean that a final decision on the case will extend past next November's midterm elections, meaning that if this Congress does not take long-overdue action on the Dream Act, the next Congress will. While the Supreme Court's denial gives Dreamers a breath of relief while the case works its way through the lower courts, Congress must still act immediately to pass the Dream Act. Under the lower court orders that remain in effect, the Department of Homeland Security must continue to accept applications from the roughly 700,000 young people who are currently enrolled in the program. The Supreme Court now leaves the DACA challenge pending, expected to be taken up by the 2nd and 9th Circuit courts. The lower courts' decisions do not allow Dreamers to apply for DACA if they have never before applied for the initiative, including Dreamers who are aging into eligibility, could not afford the filing fees, or are newly eligible. These Dreamers remain at risk of deportation, as do the DACA recipients whose protections have expired while they wait for USCIS to process their renewal applications.

You should remember that this article is not intended to provide you with legal advice; it is intended only to provide guidance about current immigration issues and policies. I represent individuals in immigration cases. If you have any questions or concerns about an immigration case, you can call me at (315) 422-5673, send me a fax at (315) 466-5673, or e-mail me at joseperez@joseperezyourlawyer.com. The Law Office of Jose Perez is located at 120 East Washington Street, Suite 925, Syracuse, New York 13202. Now with offices in Buffalo and Rochester!

Disclaimer: The choice of a lawyer is an important decision and should not be based solely upon advertisement.
Every case is different and should be judged upon its own merits. Past case results provide no guarantee of future results. The information you obtain on this site is not, nor is it intended to be, legal advice. You should consult an attorney for advice regarding your individual situation. Contacting us does not create an attorney-client relationship. Please do not send any confidential information to us until an attorney-client relationship has been established.
\section{Introduction} An important goal of cosmology is to describe the structure formation processes which led to the wide variety of astrophysical objects we observe in the present universe, from Lyman-$\alpha$ clouds to galaxies and clusters. Several studies have shown that the usual hierarchical scenarios (like the standard CDM model) can provide predictions which agree reasonably well with observations for galaxies as well as for Lyman-$\alpha$ clouds. This corresponds to objects at $z \leq 5$. However, it is possible to constrain the earlier evolution of the universe by studying the reheating and reionization history implied by such models. Indeed, observations show that the universe is highly photo-ionized by $z=5$ and a large reionization redshift could imprint a signature on the CMB radiation. Moreover, future missions such as NGST could for instance detect quasars at high redshifts $z>5$. In this article, we present an analytic model for the reheating and reionization history of the universe, adopting a CDM power spectrum in a critical density and in an open universe. Similar studies have been performed previously via numerical simulations (e.g. Gnedin \& Ostriker 1997) and analytic approaches (e.g. Haiman \& Loeb 1997; Haiman \& Loeb 1998) based on the Press-Schechter prescription (Press \& Schechter 1974). However, previous analytic models were often developed for this specific purpose (i.e. they were not derived from a model already checked in detail against observations of galaxies or Lyman-$\alpha$ clouds) and neglected the clumping of the gas (except for the presence of virialized objects used to count galaxies). Thus, the main motivations of our present study are to: - describe these early stages of structure formation through a self-consistent model which has already been applied to galaxies (Valageas \& Schaeffer 1998) and to Lyman-$\alpha$ clouds (Valageas et al.1999a). 
- take into account the broad range of density fluctuations within the IGM through our description of Lyman-$\alpha$ clouds. - use this feature to constrain our model against several observations: notably the QSO number counts and the Gunn-Peterson test (for HI and HeII). - develop a simple analytic model which can predict many properties of the universe (galaxy and quasar luminosity functions, temperature and ionization state of the IGM, intensity and spectrum of the UV background radiation and fraction of matter within stars) and provide a complementary tool to numerical simulations. Consideration of the various objects involved in our work (beyond the just-virialized halos which are usually studied) is made possible because of a specific description of the density field based on the assumption that the many-body correlation functions obey the scaling model detailed in Balian \& Schaeffer (1989) and checked numerically in Colombi et al.(1997). This allows one to define the various mass functions of interest, as described in Valageas \& Schaeffer (1997; also in Valageas et al.1999b), and to go beyond the scope of the usual Press-Schechter approximation (Press \& Schechter 1974). The main advantage of our approach is thus to provide a globally consistent picture of structure formation in the universe, within the framework of a hierarchical scenario. This article is organized as follows. In Sect.\ref{Multiplicity functions} we describe our prescription for mass functions. Next, in Sect.\ref{Galaxy formation} we review our model for galaxy formation, described in more detail in Valageas \& Schaeffer (1998) while in Sect.\ref{Quasar radiative output} we deal with our prescription for quasars. In Sect.\ref{Lyman-alpha clouds} we summarize the relevant aspects of our model for Lyman-$\alpha$ clouds (Valageas et al. 1999a). 
We describe the calculation of the evolution of the IGM properties (temperature, UV background radiation, ionization state) in Sect.\ref{Evolution of the IGM} and in Sect.\ref{Opacity}. Finally, in Sect.\ref{Open universe} and Sect.\ref{Critical universe} we present our results for the case of an open universe and then for a critical universe. \section{Multiplicity functions} \label{Multiplicity functions} We first review the method we use to obtain the mass functions of various astrophysical objects, specifically galaxies and Lyman-$\alpha$ clouds. We consider objects of dark matter mass $M$ to be defined by a density threshold $\Delta(M,z)$. This constraint depends on the class of astrophysical objects one considers and it allows us for instance to distinguish clusters from galaxies which correspond to higher density contrasts (see VS II). Lyman-$\alpha$ clouds are also formed by several populations of different objects which are not always defined by a constant density threshold (see Sect.\ref{Lyman-alpha clouds}). Note that such a goal is beyond the reach of the usual Press-Schechter prescription (Press \& Schechter 1974) which only deals with ``just-collapsed halos'' while we wish to describe simultaneously a wide variety of objects. In any case, we attach to each halo a parameter $x$ defined by: \begin{equation} x(M,z) = \frac{1+\Delta(M,z)}{\; \overline{\xi}[R(M,z),z] \;} \label{xnl} \end{equation} where \[ \overline{\xi}(R) = \int_V \frac{d^3r_1 \; d^3r_2}{V^2} \; \xi_2 ({\bf r}_1,{\bf r}_2) \;\;\;\;\; \mbox{with} \;\;\;\;\; V= \frac{4}{3} \pi R^3 \] is the average of the two-body correlation function $\xi_2 ({\bf r}_1,{\bf r}_2)$ over a spherical cell of radius $R$ and provides the measure of the density fluctuations in such a cell. 
Then, we write the multiplicity function of these objects (defined by the constraint $\Delta(M,z)$) as (see VS I): \begin{equation} \eta(M,z) \frac{dM}{M} = \frac{\overline{\rho}}{M} \; x^2 H(x) \; \frac{dx}{x} \label{etah} \end{equation} where $\overline{\rho}$ is the mean density of the universe at redshift $z$, while the mass fraction in halos of mass between $M$ and $M+dM$ is: \begin{equation} \mu(M,z) \frac{dM}{M} = x^2 H(x) \; \frac{dx}{x} \label{muh} \end{equation} The scaling function $H(x)$ depends only on the initial spectrum of the density fluctuations and must be obtained from numerical simulations. However, from theoretical arguments (see VS I and Balian \& Schaeffer 1989) it is expected to follow the asymptotic behaviour: \[ x \ll 1 \; : \; H(x) \propto x^{\omega-2} \hspace{0.3cm} , \hspace{0.3cm} x \gg 1 \; : \; H(x) \propto x^{\omega_s-1} \; e^{-x/x_*} \] with $\omega \simeq 0.5$, $\omega_s \sim -3/2$, $x_* \sim 10$ to 20 and by definition it must satisfy \begin{equation} \int_0^{\infty} x \; H(x) \; dx = 1 \end{equation} The correlation function $\overline{\xi}$, that measures the non-linear fluctuations at scale $R$ can be modelled in a way that accurately follows the numerical simulation. The mass functions obtained from (\ref{etah}) for various constraints $\Delta(M)$ were checked against the results of numerical simulations in Valageas et al.(1999b) in the case of a critical universe with an initial power-spectrum which is a power-law: $P(k) \propto k^n$ with $n=0, -1$ and $-2$. This study showed that this model provides a reasonable approximation to the mass functions obtained in the simulations and that it works quite well for the two cases we shall need in the present article: i) a constant density threshold $\Delta \sim 178$ and ii) a constant radius constraint (or $(1+\Delta) \propto M$). 
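As a quick consistency check (our addition), the normalization condition above guarantees that the mass fractions (\ref{muh}) of all halos sum to unity:
\[
\int_0^{\infty} \mu(M,z) \; \frac{dM}{M} = \int_0^{\infty} x^2 H(x) \; \frac{dx}{x} = \int_0^{\infty} x \; H(x) \; dx = 1
\]
so that, for a given constraint $\Delta(M,z)$, all the mass is accounted for by halos of some mass $M$.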
Moreover, the results of Valageas et al.(1999b) showed that $H(x)$ is close to a similar scaling function $h(x)$ obtained from the counts-in-cells statistics, as expected from theoretical considerations (see for instance VS I). It is clear that the model outlined above provides a unified description of various astrophysical objects which are obtained from the same non-linear density field. This is a great advantage of this approach since it ensures that we can model a wide variety of objects, from low density Lyman-$\alpha$ clouds to high density bright galaxies, in a fully consistent way. Then, we can study the interplay between these various structures as they develop progressively. \section{Galaxy formation} \label{Galaxy formation} In this paper we wish to study the reionization history of the universe. Since a large part of the ionizing radiation will be emitted by stars, we first need to devise a model for galaxy and star formation. We shall use a simplified version of the model described in detail in VS II and which was there compared with many observations. One can define galaxies by the requirement that two constraints be satisfied by the underlying dark matter halo: 1) {\it a virialization condition} $\Delta > \Delta_c$ (where $\Delta_c(z) \sim 178$ is given by the spherical model and is constant for a critical universe) and 2) {\it a cooling constraint} $t_{cool} < t_{H}$ which states that the gas must have been able to cool within a few Hubble times at formation. However, at high redshifts $z>1$ the cooling constraint becomes irrelevant since any object which satisfies 1) also satisfies 2). Hence since we are mainly interested in large redshifts $z>1$ we shall simply define galaxies by the virialization condition $\Delta = \Delta_c$. We also require that the virial temperature $T$ of the halo be larger than the ``cooling temperature'' $T_{cool}(z)$ at redshift $z$. 
The latter corresponds to the smallest virialized objects which can cool efficiently at redshift $z$, defined by the constraint: \begin{equation} t_{cool} = s \; t_H \label{tcool} \end{equation} where $s=6$ is a proportionality factor (one must have $s>1$ since cooling is more efficient within the halo where the density is larger than on its boundary and cooling accelerates as baryons collapse). Here $t_{H}(z)$ is the age of the universe at redshift $z$ while $t_{cool}$ is the cooling time of a halo with density contrast $\Delta_c(z)$, mass $M$, taking into account both cooling (recombination, molecular cooling) and heating (by the background UV flux) processes. Since the physical properties of virialized halos with temperature $T$ and density contrast $\Delta_c(z)$ are different from the IGM, we let the chemical reactions (involving HI, HII, H$^-$, H$_2$, H$_2^+$, HeI, HeII, HeIII and e$^-$) evolve for a Hubble time $t_H$ within this environment (defined by $T$ and $\Delta_c$) before we evaluate the cooling time $t_{cool}$. The main effect is that at large redshifts such clouds may produce enough molecular hydrogen to make molecular cooling efficient while with the use of the IGM abundances one would underestimate this contribution; see for instance Tegmark et al.(1997) for a detailed discussion. This will also appear clearly below in Fig.\ref{figtcoolO03} where we compare the main contributions to cooling for both the IGM and these cooling halos. The virial temperature $T_{cool}$ also defines the mass $M_{cool}(z)$ and the radius $R_{cool}(z)$ of the smallest objects which can cool and eventually form stars at redshift $z$. From the lower-bound $T_{cool}(z)$ and the virialization constraint $\Delta=\Delta_c(z)$, we obtain the mass function of galaxies at redshift $z$ using (\ref{etah}). Next, we must attach a specific stellar content to these galactic halos. We shall again use the star formation model described in VS II. 
This involves four components: (1) short-lived stars which are recycled, (2) long-lived stars which are not recycled, (3) a central gaseous component which is depleted by star formation and ejection by supernova winds, and replenished by infall from (4) a diffuse gaseous component. The star-formation rate $dM_s/dt$ is proportional to the mass of central gas with a time-scale set by the dynamical time. The mass of gas ejected by supernovae is proportional to the star-formation rate and decreases for deep potential wells as $1/T$, in a fashion similar to that adopted by Kauffmann et al.(1993). It was seen in VS II that for such a model a good approximation for the star-formation rate is: \begin{equation} \frac{dM_s}{dt} = \frac{M_g}{\tau_0} \hspace{0.5cm} \mbox{with} \hspace{0.5cm} \tau_0 \simeq \left( 1+\frac{T_{SN}}{T} \right) \; \tau_d \label{SFR} \end{equation} where $M_g$ is the total mass of gas, $\tau_d$ is the dynamical time and $T_{SN}=10^6$ K describes the ejection of gas by supernovae and stellar winds: \begin{equation} T_{SN} = \frac{2 \; \epsilon \; E_{SN} \; \mu m_p \; \eta_{SN}}{3 \; k \; m_{SN}} \sim 10^{6} \; \mbox{K} \label{T0SN} \end{equation} Here $\epsilon \sim 0.1$ is the fraction of the energy $E_{SN}$ delivered by supernovae transmitted to the gas ($E_{SN} = 10^{51}$ erg) while $\eta_{SN}/m_{SN} \simeq 0.005 \; M_{\odot}^{-1}$ is the number of supernovae per solar mass of stars formed. Note that for halos defined by a constant density threshold $\Delta_c \sim 178$ we have $\tau_d \sim t_H(z)$. Although (\ref{SFR}) was obtained for small galaxies with $T \ll T_{SN}$ (which is the range we are mainly interested in) it also provides a reasonable approximation for large galaxies $T > T_{SN}$. In the case $\Omega=1$ we obtain in our model for a galaxy similar to the Milky Way (i.e. with a circular velocity $V_c=220$ km/s): $\tau_0 \simeq 7 \; 10^9$ years and $dM_s/dt \simeq 5 M_{\odot}/$year (see VS II).
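To make the feedback scaling concrete, a minimal numerical sketch of (\ref{SFR})-(\ref{T0SN}) follows; the mean molecular weight $\mu = 0.6$ is an assumed value, while $\epsilon$, $E_{SN}$ and $\eta_{SN}/m_{SN}$ are the numbers quoted in the text.

```python
# Minimal numerical sketch of the star-formation time-scale,
# tau_0 = (1 + T_SN/T) * tau_d, with T_SN given by eq. (T0SN).
# mu = 0.6 is an assumed mean molecular weight; the other numbers
# are those quoted in the text.
K_B = 1.380649e-16           # Boltzmann constant [erg/K]
M_P = 1.6726e-24             # proton mass [g]
M_SUN = 1.989e33             # solar mass [g]

eps, E_SN = 0.1, 1e51        # feedback efficiency, supernova energy [erg]
eta_over_m = 0.005 / M_SUN   # supernovae per gram of stars formed
mu = 0.6

# T_SN = 2 eps E_SN mu m_p (eta_SN/m_SN) / (3 k)
T_SN = 2.0 * eps * E_SN * mu * M_P * eta_over_m / (3.0 * K_B)

def tau_0(T, tau_d):
    """Effective star-formation time-scale for virial temperature T [K]."""
    return (1.0 + T_SN / T) * tau_d
```

The check below confirms that $T_{SN}$ comes out near $10^6$ K, so that shallow potential wells ($T \ll T_{SN}$) get a strongly suppressed star-formation efficiency.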
This star formation rate is consistent with observations (McKee 1989). Then the mass of gas at time $t$ within the galaxy is given by: \begin{equation} M_g = M_{g0} \; e^{-t/\tau_0} \label{Mg} \end{equation} where $M_{g0}=M_b$ is the initial mass of baryons which we take to be proportional to the dark matter mass $M$: \begin{equation} M_b = \frac{\Omega_b}{\Omega_0} \; M \end{equation} From this model, the star-formation rate per Mpc$^3$ is: \begin{equation} \begin{array}{ll} {\displaystyle \left( \frac{d\rho_s}{dt} \right) = } & {\displaystyle \frac{\Omega_b}{\Omega_0} \; \frac{\overline{\rho}(z)}{t_H} \; \int_{x_{cool}}^{\infty} \frac{p}{\beta_d} \left(1+\frac{T_{SN}}{T} \right)^{-1} } \\ & \\ & {\displaystyle \hspace{1cm} \times \; e^{ - \frac{p}{\beta_d} (1+\frac{T_{SN}}{T})^{-1} } \; x^2 H(x) \; \frac{dx}{x} } \end{array} \label{SFRav} \end{equation} where $p/\beta_d = 0.7$ is a parameter of order unity which enters the definition of the dynamical time $\tau_d$. The significance of each term in this expression is clear and the temperature dependence simply states that the average star formation efficiency of small galaxies is small as the gas is easily expelled by supernovae. Note that in the original model described in VS II for bright galaxies at low redshifts, the star formation rate declines since most of the gas has already been consumed. This does not appear in (\ref{SFRav}) because we defined all galaxies by $\Delta=\Delta_c$ while at low $z$ for large $T$ the cooling constraint implies that $(1+\Delta) \propto M$ (i.e. $R$ is constant) which decreases the galactic dynamical time and increases the ratio $t/\tau_0$ which enters (\ref{Mg}). To derive the radiation emitted by galaxies, we do not need their global star formation rate but their stellar content. However, as shown in VS II, the mass in the form of short-lived stars (i.e. 
with a life-time $\tau_{sh}$ small compared to $t_H$) of mass $m$ to $m+dm$ is given by: \begin{equation} dM_{sh} = d\eta \; \frac{\tau_{sh}}{\tau_0} \; M_g = d\eta \; \tau_{sh} \; \left( \frac{dM_s}{dt} \right) \label{Msh} \end{equation} where $d\eta=m \phi(m) dm$ is the fraction of mass which goes into such stars for each unit mass of stars which are formed. This depends on the initial stellar mass function (IMF) $\phi(m)$. Since the stellar radiation output at high energy ($\nu > 13.6$ eV) is dominated by the most massive stars, the relation (\ref{Msh}) will be sufficient for our purposes. Next, if we assume that stars radiate as blackbodies with an effective temperature $T_{eff} \propto L^{0.13}$ and we use the mean scalings $\tau_{sh} \propto m/L$ and $L \propto m^{3.3}$ we obtain the energy output of such galaxies: \begin{equation} \left( \frac{\partial^2 E}{\partial \nu \partial t} \right)_s = \left( \frac{dM_s}{dt} \right) \; \frac{1 \mbox{yr}}{1 M_{\odot}} \; L_{\nu s}(\nu) \label{dEdnudts} \end{equation} with \[ L_{\nu s} (\nu) = \frac{10^{10} L_{\odot}}{\nu} \; \int m \phi(m) dm \frac{2\pi h \nu^4}{\sigma T^4 c^2 ( e^{h\nu/kT}-1)} \] From the radiation emitted by individual galaxies we now wish to estimate the energy received by a random point in the IGM. We shall write the source term $S_{\nu s}$ due to stellar radiation for the background UV flux $J_{\nu}$, see (\ref{Jnu}), as the following average: \begin{equation} S_{\nu s} = \frac{c}{4 \pi} \; \int \eta_g(x) \frac{dx}{x} \; \left( \frac{\partial^2 E}{\partial \nu \partial t} \right)_s (x) \; e^{-\tau_s(x)} \label{Snus} \end{equation} where $\eta_g(x) dx/x$ is the mass function of galaxies, obtained from (\ref{etah}) as described previously, while $\tau_s$ is a mean opacity which takes into account the fact that the radiation emitted by galaxies can be absorbed by the IGM {\it and} Lyman-$\alpha$ clouds. We shall come back to this term later. 
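As a consistency check on the spectral shape entering $L_{\nu s}$, the blackbody factor inside the integral should integrate to unity over $d\nu/\nu$ for any effective temperature (this is just the Stefan-Boltzmann normalization). A sketch, with an illustrative $T_{eff}$:

```python
import numpy as np
from scipy.integrate import quad

# Check that the blackbody factor in L_nu_s,
#   2*pi*h*nu^4 / (sigma*T^4*c^2*(exp(h*nu/(k*T)) - 1)),
# integrates to unity over d(nu)/nu (Stefan-Boltzmann normalization).
H_PL = 6.62607015e-27   # Planck constant [erg s]
K_B = 1.380649e-16      # Boltzmann constant [erg/K]
C = 2.99792458e10       # speed of light [cm/s]
SIGMA = 2.0 * np.pi**5 * K_B**4 / (15.0 * H_PL**3 * C**2)  # Stefan-Boltzmann

def bb_factor(nu, T):
    x = H_PL * nu / (K_B * T)
    return 2.0 * np.pi * H_PL * nu**4 / (SIGMA * T**4 * C**2 * np.expm1(x))

T_eff = 3.0e4                 # K, illustrative massive-star temperature
nu0 = K_B * T_eff / H_PL      # thermal frequency scale, for a well-behaved integrand
total, _ = quad(lambda u: bb_factor(u * nu0, T_eff) / (u * nu0) * nu0, 0.0, np.inf)
```

The factor $10^{10} L_{\odot}$ and the IMF integral then set the overall amplitude of $L_{\nu s}$ per unit star-formation rate.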
Thus, we get in this way a simple model for the stellar radiative output from our more detailed description of galaxy formation. The reader is referred to VS II for a more precise account of the details and predictions of our galaxy formation model. Note that our prescription is consistent with such observations as the Tully-Fisher relation and the B-band luminosity function. \section{Quasar radiative output} \label{Quasar radiative output} In addition to galaxies we also need to describe the radiation emitted by quasars which provide a non-negligible contribution to the background radiation field, especially at the high frequencies $\nu > 24.6$ eV which are relevant for helium ionization. We shall again follow the formalism of VS II to obtain the quasar luminosity function, in a fashion similar to Efstathiou \& Rees (1988) and Nusser \& Silk (1993). We assume that the quasar mass $M_Q$ is proportional to the mass of gas $M_{gc}$ available in the inner parts of the galaxy: $M_Q = F \; M_{gc}$. Note that for galaxies which have not yet converted most of their gas into stars (i.e. all galaxies except those with $T > T_{SN}$ at $z < 1$) this also implies $M_Q \sim F \; M_s$ where $M_s$ is the stellar mass. Indeed, for $t_H < \tau_0$ (where $t_H$ is the age of the universe) we have: \begin{equation} M_s \sim t_H/\tau_0 \; M_g \end{equation} by definition of $\tau_0$, see (\ref{SFR}), while the mass $M_{gc}$ of cold central gas satisfies: \begin{equation} M_{gc} \sim \left( 1+\frac{T_{SN}}{T} \right)^{-1} \; M_g \label{Mgc} \end{equation} The factor $(1+T_{SN}/T)$ reflects the fact that in our model supernovae eject part of the star-forming gas out of the galactic center into the larger dark matter halo (VS II). Hence we get $M_{gc} \sim M_s$. Of course, at late times for bright galaxies when most of the gas has been consumed we have $M_{gc} \ll M_s$. Then the mass of gas available to feed the quasar declines with time.
This leads to a high luminosity cut-off at low $z$ for the quasar luminosity function since in this regime very massive galaxies have less gas than smaller ones which underwent less efficient star formation (see VS II and Sect.\ref{Quasar luminosity function}). We shall use $F=0.01$ for $\Omega=1$ and $F=0.006$ for $\Omega_0=0.3$. Note that observations (Magorrian et al.1998) find $M_Q \simeq 0.006 \; M_s$ in large galaxies. Next we write the bolometric luminosity $L_Q$ of the quasar as: \begin{equation} L_Q = \frac{\epsilon \; M_Q \; c^2}{t_Q} \end{equation} where $\epsilon = 0.1$ is the quasar radiative efficiency (fraction of central rest mass energy converted into radiation) and $t_Q$ is the quasar life-time. Since we shall assume that quasars radiate at the Eddington limit we have: $t_Q = 4.4 \; \epsilon \; 10^8$ yr. Thus, the quasar luminosity attached to a galaxy of dark matter mass $M$, virial temperature $T$, is: \begin{equation} L_Q = \frac{\epsilon \; F}{t_Q} \; \frac{\Omega_b}{\Omega_0} \; \left( 1 + \frac{T_{SN}}{T} \right)^{-1} \; M c^2 \label{LQ} \end{equation} As seen above in (\ref{Mgc}), the temperature term comes from the fact that in our galactic model, small galaxies ($T \ll T_{SN}$) are strongly influenced by supernova feedback which expels part of their baryonic content from the inner regions. Note however that this term does not enter the relation (quasar mass) - (stellar mass) as it cancels out on both sides.
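A minimal numerical sketch of (\ref{LQ}), using the numbers quoted in the text ($\epsilon = 0.1$, $F = 0.01$ for $\Omega = 1$, $t_Q = 4.4 \, \epsilon \, 10^8$ yr); the baryon fraction $\Omega_b/\Omega_0 = 0.05$ is an assumed value for illustration:

```python
# Sketch of the quasar luminosity of eq. (LQ). eps, F and t_Q are the values
# quoted in the text; Omega_b/Omega_0 = 0.05 is an assumed baryon fraction.
C = 2.99792458e10        # speed of light [cm/s]
M_SUN = 1.989e33         # solar mass [g]
YR = 3.156e7             # year [s]

eps, F, T_SN = 0.1, 0.01, 1e6           # efficiency, gas fraction, T_SN [K]
t_Q = 4.4 * eps * 1e8 * YR              # quasar life-time [s]
baryon_frac = 0.05                      # assumed Omega_b/Omega_0

def L_Q(M, T):
    """Quasar luminosity [erg/s] for halo mass M [g], virial temperature T [K]."""
    return eps * F / t_Q * baryon_frac * M * C**2 / (1.0 + T_SN / T)

# A massive halo (T >> T_SN) of 1e12 solar masses:
L = L_Q(1e12 * M_SUN, 1e7)
```

For such a halo this gives a luminosity of order $10^{46}$-$10^{47}$ erg/s, i.e. a bright quasar, while the $(1+T_{SN}/T)^{-1}$ term strongly dims quasars hosted by shallow potential wells.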
Next we obtain the quasar multiplicity function from the galaxy mass function as: \begin{equation} \eta_Q(M_Q) \frac{dM_Q}{M_Q} = \lambda_Q \; \eta_g(M) \frac{dM}{M} \; \mbox{Min} \left[ 1 , \frac{t_Q}{t_M} \right] \label{etaQ} \end{equation} The factor $\lambda_Q < 1$ (we use $\lambda_Q=0.06$) is the fraction of galactic halos which actually harbour a quasar while $t_M$ is the evolution time-scale of galactic halos of mass $M$ defined by: \begin{equation} t_M^{-1} = \frac{1}{\eta_g(M)} \; \frac{\partial}{\partial t} \eta_g(M) \end{equation} Since the quasar life-time $t_Q \sim 10^8$ yr is quite short, this reduces to $\eta_Q(M_Q) dM_Q/M_Q = \lambda_Q \; t_Q \; \partial \eta_g / \partial t \; dM/M$. Together with (\ref{LQ}) the relation (\ref{etaQ}) provides the quasar luminosity function. Thus, we only have two parameters: $(\epsilon \; F/t_Q)$ (which only depends on $F$, constrained by the observed (quasar mass)/(stellar mass) ratio, for quasars shining at the Eddington luminosity) which enters the mass-luminosity relation, and $(\lambda_Q \; t_Q)$ which appears as a simple normalization factor in the luminosity function. Hence a larger fraction of quasars $\lambda_Q$ together with a smaller life-time $t_Q$ would give the same results, so that we could also choose $\lambda_Q=1$. In a fashion similar to what we did for galaxies we can now derive the quasar radiative output. We first write the radiation emitted by an individual quasar as: \begin{equation} \left( \frac{\partial^2 E}{\partial \nu \partial t} \right)_Q = \frac{L_Q}{\nu_B} \; \left( \frac{L_B}{L_{bol}} \right) \; \left( \frac{\nu_B}{\nu} \right)^{\alpha} \end{equation} where $(L_B/L_{bol}) = 0.094$ is the conversion factor from bolometric luminosity to B-band luminosity ($L_B = \nu_B L_{\nu}(\nu_B)$ at $\nu_B=2.8$ eV), taken from Elvis et al.(1994), while $\alpha=1.5$ is the local slope of the quasar spectrum. 
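The quasar spectral shape is then fully specified by the three numbers quoted above ($\alpha = 1.5$, $\nu_B = 2.8$ eV, $L_B/L_{bol} = 0.094$); a sketch:

```python
# Sketch of the quasar spectral output: a power law (nu_B/nu)^alpha anchored
# by the B-band fraction L_B/L_bol = 0.094 at nu_B = 2.8 eV (values from the
# text, after Elvis et al. 1994).
EV_TO_HZ = 2.418e14   # 1 eV in Hz

def dE_dnu_dt_Q(nu_eV, L_Q, alpha=1.5, nu_B_eV=2.8, fB=0.094):
    """Energy output per unit frequency [erg/s/Hz] at photon energy nu_eV [eV]."""
    nu_B = nu_B_eV * EV_TO_HZ
    return L_Q / nu_B * fB * (nu_B_eV / nu_eV)**alpha

# By construction nu_B * L_nu(nu_B) = 0.094 * L_Q:
check = 2.8 * EV_TO_HZ * dE_dnu_dt_Q(2.8, 1.0)
```

The power-law decline above $\nu_B$ still leaves a substantial output beyond 24.6 eV, which is why quasars dominate helium ionization.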
Then, the source term $S_{\nu Q}$ for the background radiation due to quasars is: \begin{equation} S_{\nu Q} = \frac{c}{4 \pi} \; \int \eta_Q(x) \frac{dx}{x} \; \left( \frac{\partial^2 E}{\partial \nu \partial t} \right)_Q (x) \; e^{-\tau_Q(x)} \label{SnuQ} \end{equation} where again $\tau_Q$ is a mean opacity which we shall describe later. \section{Lyman-$\alpha$ clouds} \label{Lyman-alpha clouds} The description of gravitational clustering used in this article allows one to build a model for Lyman-$\alpha$ clouds (Valageas et al.1999a). We shall take advantage of this possibility to include these objects in the present study. Indeed, although at high redshifts they do not contribute significantly to the total opacity (which comes mainly from the uniform component of the IGM) since only a small fraction of baryonic matter has been allowed to form bound objects, at redshifts close to the reionization epoch they already provide a non-negligible opacity. We identify Lyman-$\alpha$ absorbers as three different classes of objects, which we shall briefly describe below. \subsection{Lyman-$\alpha$ forest} We assume that after reionization the gas within low-density halos is reheated by the UV flux to a temperature $T_{Ly} = 3 \; 10^4$ K. Hence in such shallow potential wells, baryonic density fluctuations are erased over scales $R_{dLy}$ defined as in (\ref{Rd}) but with the temperature $T_{Ly}$. This builds our first class of objects defined by their radius $R_{dLy}(z)$ and virial temperatures $T < T_{Ly}$. The multiplicity function of these mass condensations is again obtained from (\ref{etah}). The fraction of neutral hydrogen at low $z$ is evaluated by assuming photo-ionization equilibrium. At high $z$ prior to reionization, when the UV flux is very small and cannot heat the gas, we simply take $T_{Ly}=T_{IGM}$ while the fraction of neutral hydrogen is unity. 
Since the baryonic density is roughly uniform within these objects (by definition) we consider that each halo produces one specific mean column density on any intersecting line-of-sight (we neglect the small dependence on the impact parameter due to geometry). At low $z$ this population can be identified with the Lyman-$\alpha$ forest. Note that, as explained in detail in Valageas et al.(1999a), our approach is also valid for clouds which are not spherical objects of radius $R_{dLy}$ but filaments of thickness $R_{dLy}$ and length $L \gg R_{dLy}$. This is due to the growth of the density fluctuations on smaller scales (along with $\overline{\xi}$) and to the direction jumps of filamentary structures. Here we note that models for the Lyman-$\alpha$ forest are often classified into two categories: 1) mini-halo models and 2) IGM density fluctuations. In case 1), one considers that Lyman-$\alpha$ absorbers are discrete clouds formed by bound collapsed objects (or halos confined by the IGM pressure) which occupy a small fraction of the volume. On the other hand, in case 2) (which is currently favored) one assumes that absorption comes from a continuous medium (the IGM) with relatively small density fluctuations. Although in our model we identify distinct patches of matter (of size $R_{dLy}$) as in 1), the underlying picture corresponds to case 2). Indeed, as we consider regions with an ``overdensity'' $(1+\Delta)$ from $\sim 20$ down to $(1+\Delta)_{IGM}$, defined below in (\ref{DeltaIGM}), which can be as low as $10^{-3}$, see Fig.\ref{figclumpO03}, we take into account {\it all the volume} of the universe. Hence our Lyman-$\alpha$ forest absorbers are made of a broad range of density fluctuations within the IGM which fill all the space between galactic halos (which we describe below as they form Lyman-limit and damped systems and only occupy a negligible fraction of the volume, as seen in Fig.\ref{figFracO03}).
Note that this would not be possible if we were to consider density fluctuations defined by a constant density threshold $(1+\Delta)_{th} > 1$ since this would imply that we probe at most a fraction $1/(1+\Delta)_{th}$ of the volume of the universe. We identify the lowest density regions (i.e. with a density contrast $\Delta_{IGM}$), which are also the most numerous and fill most of the volume, with the IGM. A patch of matter with this density would only make up a column density $N_{HI} \sim 10^6$ cm$^{-2}$ on a scale $R_{dLy}$ at $z=0$. \subsection{Lyman-limit systems} Potential wells with a large virial temperature $T > T_{Ly}$ do not see their baryonic density profile smoothed out and they also retain their individuality. Thus, we define a second class of objects identified with the ionized outer shells of virialized halos, characterized by their density contrast $\Delta_c$ and satisfying $T > T_{Ly}$. The deepest of these potential wells (such that $T > T_{cool}$) correspond to the galactic halos described in Sect.\ref{Galaxy formation}. We assume that the mean density profile is a power-law $\rho \propto r^{-\gamma}$ (with $\gamma=1.8$) so that each object can now produce a broad range of absorption lines, as a function of the impact parameter of the line-of-sight. This population can be identified with the Lyman-limit systems. \subsection{Damped systems} The deep cores of the virialized halos described above which are not ionized because of self-shielding (at low $z$) form our third population of objects. One halo can again produce various absorption lines for different impact parameters. At high $z$, prior to reionization, halos are entirely neutral so that the previous contribution of ionized shells disappears and we only have two classes of objects: these neutral virialized halos and the ``forest'' objects. \section{Evolution of the IGM} \label{Evolution of the IGM} We now turn to the IGM itself.
We model the universe at a given redshift $z$ as a uniform medium (the IGM), characterized by a density contrast $\Delta_{IGM}$, a gas temperature $T_{IGM}$ and a background radiation field $J_{\nu}$, which contains some mass condensations recognized as individual objects identified with galaxies or Lyman-$\alpha$ clouds as described above. Since the gas in the IGM has non-zero temperature $T_{IGM}$, baryonic density fluctuations are erased over scales of order $R_{d}(z)$ within shallow potential wells with a virial temperature $T_{vir}<T_{IGM}$ or within ``voids'', with: \begin{equation} R_d(z) = \frac{1}{2} \; t_H \; C_s = \frac{1}{2} \; t_H \; \sqrt{ \frac{\gamma k T_{IGM}}{\mu m_p} } \label{Rd} \end{equation} where $C_s$ is the sound speed, $t_H$ the age of the universe, $m_p$ the proton mass and $\gamma \sim 5/3$. Indeed, the pressure dominates over gravitation for objects such that $T_{vir}<T_{IGM}$. Note that the damping scale $R_d$ is different from the Jeans scale: \begin{equation} R_{J}(z) = \sqrt{ \frac{\gamma k T_{IGM}}{4 \pi {\cal G} \mu m_p \rho_{DM}} } \label{RJ} \end{equation} Both scales are equal (up to a normalization factor of order unity) if the dark matter density is equal to the mean universe density: $\rho_{DM}=\overline{\rho}$. However, we shall consider underdense regions where $(1+\Delta)$ can be as low as $10^{-3}$, see Fig.\ref{figclumpO03} below. Indeed, as an increasingly large proportion of the matter content of the universe gets embedded within collapsed objects as time goes on, the density of the IGM (the volume between these mass condensations) becomes much smaller than the mean universe density. In this case where $\rho_{DM} < \overline{\rho}$ we have $R_d < R_J$. We use $R_d$ because of the finite age of the universe: the medium cannot be homogenized over scales larger than those reached by acoustic waves over the time $t_H$ (the scale $R_J$ corresponds to the limit of large times).
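The near-equality of the two scales at mean density can be checked numerically; in the sketch below $\mu = 0.6$, the age $t_H$ and the IGM temperature are assumed illustrative values, and the mean density is taken as the Einstein-de Sitter value $\overline{\rho} = 1/(6\pi {\cal G} t_H^2)$ for that age.

```python
import numpy as np

# Sketch of the damping scale R_d = (1/2) t_H c_s of eq. (Rd) against the
# Jeans scale of eq. (RJ). mu = 0.6, t_H and T_IGM are assumed illustrative
# values; rho_mean is the Einstein-de Sitter mean density for that age.
K_B, M_P, G = 1.380649e-16, 1.6726e-24, 6.674e-8
gamma, mu = 5.0 / 3.0, 0.6

def R_d(T_igm, t_H):
    """Damping scale [cm] for IGM temperature T_igm [K], age t_H [s]."""
    c_s = np.sqrt(gamma * K_B * T_igm / (mu * M_P))
    return 0.5 * t_H * c_s

def R_J(T_igm, rho_dm):
    """Jeans scale [cm] for dark-matter density rho_dm [g/cm^3]."""
    return np.sqrt(gamma * K_B * T_igm / (4.0 * np.pi * G * mu * M_P * rho_dm))

t_H = 1e17                                    # s, illustrative age
rho_mean = 1.0 / (6.0 * np.pi * G * t_H**2)   # EdS: rho = 1/(6 pi G t^2)
ratio = R_d(1e4, t_H) / R_J(1e4, rho_mean)    # of order unity at mean density
```

Since $R_J \propto \rho_{DM}^{-1/2}$ while $R_d$ is independent of the density, underdense regions ($\rho_{DM} < \overline{\rho}$) indeed give $R_d < R_J$, as stated above.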
Note that for Lyman-$\alpha$ clouds we also use $R_d$ as the characteristic scale, with $T_{Ly} = 3 \; 10^4$ K, since we consider regions with very low or moderate densities $(1+\Delta) < 45$, see Sect.\ref{Lyman-alpha clouds} and Valageas et al.(1999a). Then, the density contrast of the IGM is given by: \begin{equation} (1+\Delta)_{IGM} = \mbox{Min} \left[ \;1 \; , \; \overline{\xi}(R_d)^{-\omega/(1-\omega)} \; \right] \label{DeltaIGM} \end{equation} This simply states that at high $z$ (when $\overline{\xi}(R_d) \ll 1$) we have $\rho_{IGM} = \overline{\rho}$ (i.e. the universe is almost exactly a uniform medium on scale $R_d$) while at low $z$ we have $\rho_{IGM} < \overline{\rho}$ since most of the matter is now within overdense bound collapsed objects (clusters, filaments etc.) while most of the volume (which we call the IGM) is formed by underdense regions. Since the mean density of the universe is $<\rho>=\overline{\rho}$ we define a baryonic clumping factor $C_b = <\rho_b^2>/<\rho_b>^2$ by: \begin{equation} \begin{array}{ll} C_b & = {\displaystyle F_{IGM,vol} \; (1+\Delta)_{IGM}^2 \; + \int (1+\Delta) \; x^2 H(x) \frac{dx}{x} } \\ & \\ & \simeq (1+\Delta)_{IGM}^2 + F_{Ly} <1+\Delta>_{Ly} + F_{vir} (1+\Delta_c) \end{array} \label{Cb} \end{equation} where we used the fact that the volume fraction $F_{IGM,vol}$ occupied by the IGM is very close to unity. Here $F_{Ly}$ and $F_{vir}$ are the fractions of mass formed by Lyman-$\alpha$ forest clouds (with a density contrast lower than $\Delta_c$) and by virialized objects. Note that $C_b$ somewhat underestimates the actual clumping of the gas since we did not take into account the collapse of baryons due to cooling nor the slope of the density profile within virialized halos. However, these latter characteristics are included in our model for Lyman-$\alpha$ clouds. 
We also define the mean density due to objects which do not cool as: \begin{equation} <1+\Delta>_n = (1+\Delta)_{IGM} + \int_0^{x_{cool}} x^2 H(x) \frac{dx}{x} \label{rhon} \end{equation} Before reionization this corresponds to the density field of neutral hydrogen since galactic halos (i.e. massive potential wells with $x>x_{cool}$ which can cool) ionize most of their gas because of the radiation emitted by their stars or their central quasar. We obtain the mean square density in a similar fashion: \[ <(1+\Delta)^2>_n = (1+\Delta)_{IGM}^2 + \int_0^{x_{cool}} (1+\Delta) \; x^2 H(x) \frac{dx}{x} \] and the corresponding clumping factor is simply: \begin{equation} C_n = \frac{<(1+\Delta)^2>_n}{<1+\Delta>_n^2} \label{Cn} \end{equation} The quantities $<1+\Delta>_n$ and $C_n$ characterize the density fluctuations of neutral hydrogen within the IGM. Note that most of the volume is occupied by regions which satisfy $(1+\Delta) \sim (1+\Delta)_{IGM}$. The gas which is within the IGM is heated by the UV background radiation while it cools due to the expansion of the universe and to various radiative cooling processes. Note that we neglect here the possible heating of the IGM by supernovae. However supernova feedback is included in our model for galaxy formation: we simply assume it only affects the immediate neighbourhood of these galaxies (see also McLow \& Ferrara 1998). Thus, we write for the evolution of the temperature of the IGM: \begin{equation} \frac{dT_{IGM}}{dt} = - 2 \; \frac{\dot{a}}{a} \; T_{IGM} \; - \; \frac{T_{IGM}}{t_{cool}} \; + \; \frac{T_{IGM}}{t_{heat}} \label{TIGM} \end{equation} where $a(t)$ is the scale factor (which enters the term describing adiabatic cooling due to the expansion). 
The heating time-scale $t_{heat}$ is given by: \begin{equation} t_{heat}^{-1} = \frac{4 \pi}{3/2 n_b k T_{IGM}} \; \sum_j \; \int n_j \sigma_j(\nu) (\nu - \nu_j) J_{\nu} \frac{d\nu}{\nu} \label{theat} \end{equation} where $j=$ (HI,HeI,HeII), $\nu_j$ is the ionization threshold of the corresponding species, $n_j$ its number density in the IGM and $n_b$ the baryon number density. The cooling time-scale $t_{cool}$ describes collisional excitation, collisional ionization, recombination, molecular hydrogen cooling, bremsstrahlung and Compton cooling or heating (e.g. Anninos et al.1997). Next, we can write the evolution equation for the background radiation field $J_{\nu}$: \begin{equation} \frac{\partial J_{\nu}}{\partial t} = -3 \; \frac{\dot{a}}{a} \; J_{\nu} \; + \; \frac{\dot{a}}{a} \; \nu \; \frac{\partial J_{\nu}}{\partial \nu} \; - \; k_{\nu} J_{\nu} \; + S_{\nu s} + S_{\nu Q} \label{Jnu} \end{equation} The first two terms on the r.h.s. describe the effects of the expansion of the universe, while the last two terms represent the radiation emitted by stars and quasars which we obtained previously. The absorption coefficient $k_{\nu}$ is written as: \begin{equation} k_{\nu} = \frac{c}{1 \mbox{Mpc}} \left( \tau_{\nu, IGM}^1 + \tau_{\nu, NHI}^1 + \tau_{\nu, NHeI}^1 + \tau_{\nu, NHeII}^1 \right) \end{equation} where $\tau_{\nu, IGM}^1$ is the opacity at frequency $\nu$ of the IGM over a physical length of 1 Mpc, while $\tau_{\nu, N_j}$ corresponds to the contribution by ``Lyman-$\alpha$'' clouds (i.e. discrete mass condensations as opposed to the uniform component which forms the IGM). Thus we have: \begin{equation} \tau_{\nu, IGM}^1 = \left( \sum_j \sigma_j(\nu) n_j \right) 1 \mbox{Mpc} \end{equation} Note that in this study we consider the medium as purely absorbing and we neglected the reprocessing of ionizing photons. 
From the evolution of the IGM temperature and the background radiation field we can also follow the chemistry of the gas within this uniform component. More precisely we consider the following species: HI, HII, H$^{-}$, H$_2$, H$_2^+$, HeI, HeII, HeIII and e$^-$ (see for instance Abel et al.1997 for rate coefficients). Thus we obtain the reionization history of hydrogen and helium together with the spectral shape of the background radiation $J_{\nu}$. \section{Opacity} \label{Opacity} In the previous calculations, (\ref{Snus}) and (\ref{SnuQ}), where we described the source terms for the radiation field within the IGM we introduced opacity factors to model the absorption of the radiation emitted by quasars and stars by the IGM and Lyman-$\alpha$ clouds. We shall deal with these terms in this section. We consider that each source (galaxy or quasar) active at a given redshift $z$ ionizes its surroundings over a radius $R_i$ given by: \begin{equation} R_{i,HII} = \left[ \frac{3}{4 \pi \alpha C_n n_H^2} \; \left( 1 - e^{-\alpha C_n n_H t} \right) \; \frac{dN_{\gamma}}{dt} \right]^{1/3} \label{Ri} \end{equation} where $\alpha(3 \; 10^4$ K$)$ is the recombination rate (within the ionized bubbles), $n_H$ the mean number density of hydrogen obtained from (\ref{rhon}), $C_n$ the clumping factor from (\ref{Cn}), $(dN_{\gamma}/dt)$ the emission rate of ionizing photons from the source and $t$ its age. We have $t \leq t_H$ so we neglected here the influence of the expansion of the universe and the time-dependence of the source luminosity over its age. We take $t = t_H$ for galaxies and $t = t_Q$ for quasars. Although this procedure is consistent with our prescription for the quasar luminosity function (we assumed quasars to shine at the Eddington luminosity on the time-scale $t_Q \ll t_H$ and then to fade) we somewhat overestimate the radiative output of galaxies since the galaxy luminosity function decreases with $z$ over the time-scale $t_H$. 
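Since $1/(\alpha C_n n_H^2) = t_{rec}/n_H$, the bracket in (\ref{Ri}) can be rewritten as $3 \, (dN_{\gamma}/dt) \, t_{rec} \, (1 - e^{-t/t_{rec}}) / (4\pi n_H)$, which makes the young-source and Stromgren limits transparent. A sketch, where the recombination rate $\alpha$ at $3 \; 10^4$ K and the high-$z$ values of $n_H$, $C_n$ and $dN_{\gamma}/dt$ are assumed illustrative numbers:

```python
import numpy as np

# Sketch of the ionization-front radius of eq. (Ri), using
# 1/(alpha C_n n_H^2) = t_rec / n_H. The recombination rate alpha and the
# values of n_H, C_n and dN_gamma/dt below are illustrative assumptions.
def R_i(n_H, C_n, dNgamma_dt, t, alpha=1.5e-13):
    """Ionized-bubble radius [cm]; tends to the Stromgren radius as t -> inf."""
    t_rec = 1.0 / (alpha * C_n * n_H)                   # recombination time [s]
    vol = 3.0 * dNgamma_dt * t_rec * (-np.expm1(-t / t_rec)) / (4.0 * np.pi * n_H)
    return vol**(1.0 / 3.0)

# Illustrative high-z values: n_H ~ 2e-4 cm^-3, C_n = 10, 1e53 photons/s.
r_young = R_i(2e-4, 10.0, 1e53, 1e15)
r_limit = R_i(2e-4, 10.0, 1e53, 1e20)   # effectively the Stromgren radius
```

For $t \ll t_{rec}$ the exponential expands and $R_i^3 \simeq 3 \, (dN_{\gamma}/dt) \, t / (4\pi n_H)$, i.e. the bubble simply counts the photons emitted so far, independently of $\alpha$.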
However, the relation (\ref{Ri}) should still provide a correct estimate of the magnitude of this effect. Note that $R_i$ is smaller than the usual ``Stromgren'' radius which corresponds to the limit $t \rightarrow \infty$ in (\ref{Ri}). Indeed, the exponential term in (\ref{Ri}) can also be written as $\exp(-t/t_{rec})$ which shows that at redshifts $z \ll 100$ where the recombination time is larger than the age of the source (which is smaller than $t_H$) the ionization front is smaller than the Stromgren radius. This effect was also described by Shapiro \& Giroux (1987). In addition, these authors took into account the expansion of the universe but assumed a fixed number of sources. Here since we consider sources with a life-time $t \leq t_H$ we neglect the influence of the expansion of the universe but we take into account the increasing number of galaxies and quasars. Next, we obtain the volume fraction $Q_{HII_{s,Q}}$ (i.e. the filling factor) occupied by such ionized bubbles around galaxies or quasars as: \begin{equation} Q_{HII_{s,Q}} = \int \eta_{bubble_{s,Q}} (x) \; \frac{dx}{x} \; \frac{4\pi}{3} R_{i_{s,Q}}^3 \end{equation} For bubbles ionized by stellar radiation we have $\eta_{bubble_s} (x) = \eta_g(x)$ where $\eta_g(x) dx/x$ is the mass function of galaxies while for quasars we write: \begin{equation} \eta_{bubble_Q} (x) = \lambda_Q \; \eta_g(x) \; \mbox{Min} \left[ 1 , \mbox{Max} \left( \frac{t_Q}{t_M} , \frac{t_{rec}}{t_M} \right) \right] \label{etabubbleQ} \end{equation} where $t_{rec}$ is the recombination time within the ionized bubbles. This differs from the quasar mass function (\ref{etaQ}) through the term $t_{rec}/t_M$ because a region remains ionized over a time-scale $t_{rec}$ which may be longer than the quasar life-time $t_Q$. Note that our general procedure only provides an upper bound to the actual efficiency of radiative processes since we did not include absorption within the host galactic halo itself. 
We do not integrate $Q_{HII_{s,Q}}$ over time since this is already done in (\ref{Ri}) and the sharp rise with time of the luminosity functions (before reionization) ensures that the radiative output is dominated by recent epochs. Moreover, since we have $t_{rec} < t_H$, as shown below in Fig.\ref{figtionrecO03} (curve $t_{rec,bubble}$), ionized bubbles do not survive more than a Hubble time unless new sources (galaxies or quasars) appear. The filling factor within the IGM is written as the sum of the contributions from galaxies and quasars: \begin{equation} Q_{HII,IGM} = Q_{HII_s} + Q_{HII_Q} \end{equation} Of course the previous considerations only apply to high redshifts prior to reionization when the universe is almost completely neutral. At reionization these bubbles overlap and the background UV flux gets suddenly very large as absorption drops. At later times the whole universe is ionized so there are no more discrete bubbles (and formally $Q_{HII}=1$). We also define in a similar fashion the filling factors $Q_{HeII}$ and $Q_{HeIII}$ which describe regions around quasars where helium is singly or doubly ionized. Since the stellar radiation shows an exponential decrease at high frequencies quasars are the only relevant source for this process. The filling factor $Q_{HII,IGM}$ obtained above will be used to obtain the IGM opacity. However, as we explained previously we also consider the universe to contain numerous clouds which contribute to the opacity seen by the radiation field. We shall first consider that these clouds are ionized (or more exactly that their number density drops significantly) within the radius $R_{i,cl} = R_i$ defined in (\ref{Ri}) from the quasar. In other words, most of the opacity comes from clouds located deeply within the IGM where the background radiation $J_{\nu}$ is very small (before reionization) since close to quasars (within $R_{i,cl}$) the local radiation suddenly gets much higher. 
Note that the distribution of Lyman-$\alpha$ clouds we calculate is indeed obtained from the IGM background radiation, which calls for the cutoff $R_{i,cl}$. However, at low $z$ after reionization the ``sphere of influence'' of a quasar is no longer given by $R_{i,cl}$ (since the whole medium is ionized). Instead, we define the radius $R_{J,cl}$ by: \begin{equation} \frac{1}{4 \pi R_{J,cl}^2} \; \left( \frac{dN_{\gamma}}{dt} \right)_Q = 10 \; \frac{J_{21}}{h} \end{equation} where $h$ is the Planck constant and $J_{21}$ is a measure of the background radiation within the IGM in the ionizing part of the spectrum: \begin{equation} J_{21} = \frac{ \int J_{\nu} \; \sigma_{HI}(\nu) \; \frac{d\nu}{\nu} } { \int \sigma_{HI}(\nu) \; \frac{d\nu}{\nu} } \label{J21} \end{equation} Thus, the ``sphere of influence'' of a quasar is the region of space around the source where the radiation emitted by this quasar is significantly larger than the background radiation (at high $z$ when the medium is neutral this corresponds to $R_{i,cl}$, while at low $z$ when the IGM is ionized this is given by $R_{J,cl}$). In practice we shall simply use $R_{cl} = \mbox{Min}[ R_{i,cl} , R_{J,cl} ]$ to obtain the volume fraction $Q_{HII,cloud}$ where the number density of Lyman-$\alpha$ clouds is significantly lower than within the IGM: \begin{equation} Q_{HII,cloud} = \int \eta_{bubble_Q} (x) \; \frac{dx}{x} \; \frac{4\pi}{3} R_{cl}^3 \end{equation} As we shall see from the numerical results, at low redshifts $z < z_{ri}$ when the universe is reionized the opacity is very low (the UV flux is large) so that absorption plays no role in the evolution of $J_{\nu}$. Thus, in practice the radius $R_{J,cl}$ is irrelevant: it only provides some information on the properties of the universe but does not influence its evolution.
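The cross-section-weighted average (\ref{J21}) can be evaluated for an arbitrary spectrum. A minimal sketch, assuming the approximate scaling $\sigma_{HI}(\nu) \propto \nu^{-3}$ above the threshold (an approximation to the exact HI photoionization cross-section, used here only for illustration):

```python
import math

# Sketch of the weighted average J_21 of eq. (J21), assuming
# sigma_HI(nu) ~ (nu/nu_HI)^-3 above threshold (approximate scaling).

NU_HI = 13.6  # HI threshold in eV (units cancel in the ratio)

def j21_average(j_nu, nu_max=100.0 * NU_HI, n=4000):
    def sigma(nu):
        return (nu / NU_HI) ** -3
    num = den = 0.0
    dln = math.log(nu_max / NU_HI) / n
    for i in range(n):  # midpoint sum on a log grid in dnu/nu
        nu = NU_HI * (nu_max / NU_HI) ** ((i + 0.5) / n)
        num += j_nu(nu) * sigma(nu) * dln
        den += sigma(nu) * dln
    return num / den

flat = j21_average(lambda nu: 1.0)                   # flat spectrum -> J_21 = 1
steep = j21_average(lambda nu: (nu / NU_HI) ** -3)   # weighted toward threshold
```

For a flat spectrum the average simply returns the spectrum amplitude; for a steep spectrum the cross-section weighting picks out the value close to the HI threshold, which is the point of the definition.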
Thus, while $Q_{HII,IGM}$ increases with time until it reaches unity as the universe gets reionized, $Q_{HII,cloud}$ will first grow before reionization as the volume occupied by the ionized bubbles increases and then decrease at $z \ll z_{ri}$ because the quasar luminosity function drops at low $z$. Since a fraction of volume $Q$ translates into the same fraction $Q$ along a random line of sight (neglecting correlations in the distributions of sources) we write the opacity $\tau_{\nu}(r)$ seen from a point in the IGM to a source located at the distance $r$ as: \begin{equation} \tau_{\nu}(r) = \sum_j \left( \tau^1_{\nu,N_j} \; Q_{j,cloud} + \tau^1_{\nu,IGM_j} \; Q_{j,IGM} \right) \; \frac{r}{1 \mbox{Mpc}} \label{tauQHI} \end{equation} where $Q_{HI}=1-Q_{HII}$ is the neutral hydrogen filling factor ($Q_{HeI} = 1-Q_{HeII}-Q_{HeIII}$) and $\tau^1_{\nu,N_j}$ corresponds to discrete clouds while $\tau^1_{\nu,IGM_j}$ describes the IGM contribution. The typical distance $l_g(x)$ between galactic sources characterized by their parameter $x$, density contrast $\Delta$ and radius $R$, is given by their number density, see (\ref{etah}), \begin{equation} l_g(x) \sim R \; (1+\Delta)^{1/3} \; \left[ x^2 H(x) \right]^{-1/3} \end{equation} where we did not take into account correlations. Since only a fraction $\lambda_Q \leq 1$ of galaxies host quasars we have for the mean distance between bubbles ionized by quasars: \begin{equation} l_Q(x) = \left( \lambda_Q \mbox{Min} \left[ 1 , \mbox{Max} \left( \frac{t_Q}{t_H} , \frac{t_{rec}}{t_H} \right) \right] \right)^{-1/3} \; l_g(x) \label{distQ} \end{equation} as in (\ref{etabubbleQ}). Here $t_{rec}$ is the recombination time within the ionized bubbles, see Fig.\ref{figtionrecO03}. Since at low redshift $t_{rec} \sim t_H$ we usually have $l_Q \sim \lambda_Q^{-1/3} l_g$. 
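The scaling (\ref{distQ}) is straightforward to evaluate; a minimal sketch (all parameter values below are assumptions chosen for illustration, not values from this paper):

```python
# Eq. (distQ): only a fraction of galaxies host an active quasar (or a
# still-ionized relic bubble), which dilutes the sources by frac^(1/3).

def mean_quasar_distance(l_g, lambda_Q, t_Q, t_rec, t_H):
    frac = lambda_Q * min(1.0, max(t_Q / t_H, t_rec / t_H))
    return frac ** (-1.0 / 3.0) * l_g

# With t_rec ~ t_H (low z) the Min/Max factor is ~1, so l_Q ~ lambda_Q^(-1/3) l_g.
# Assumed values: lambda_Q = 0.01, t_Q = 1.4e15 s, t_rec = t_H = 4e17 s.
l_Q = mean_quasar_distance(l_g=1.0, lambda_Q=0.01, t_Q=1.4e15, t_rec=4e17, t_H=4e17)
```

With these illustrative numbers $l_Q \simeq 0.01^{-1/3} \, l_g \simeq 4.6 \, l_g$, recovering the low-redshift behaviour $l_Q \sim \lambda_Q^{-1/3} l_g$ quoted above.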
Next, we define an effective opacity $\tau_{eff}$ over the region of size $l$ and volume $V$ by: \begin{equation} e^{-\tau_{eff}} = \int_0^{l} \; \frac{d^3r}{V} \; e^{-\tau(r)} \label{taueff} \end{equation} where $\tau(r)$ is given by (\ref{tauQHI}). Then we use for the opacities which enter the source terms (\ref{Snus}) and (\ref{SnuQ}) a simple prescription which recovers the asymptotic regimes $\tau_{eff} \rightarrow 0$ and $\tau_{eff} \rightarrow \infty$ of (\ref{taueff}): \begin{equation} \tau_{s,Q}(x) = \ln \left( 1 + \frac{3}{4} \tau_{\nu}(l_{g,Q}) + \frac{1}{6} \tau_{\nu}(l_{g,Q})^3 \right) \label{tausQ} \end{equation} \section{Numerical results: open universe} \label{Open universe} We can now use the model we described in the previous sections to obtain the reionization history of the universe, as well as many other properties such as the population of quasars, galaxies or Lyman-$\alpha$ clouds, for various cosmologies. We shall first consider the case of an open universe $\Omega_0=0.3$, $\Lambda=0$, with a CDM power-spectrum (Davis et al.1985), normalized to $\sigma_8=0.77$. We choose a baryonic density parameter $\Omega_b=0.03$ and $H_0=60$ km/s/Mpc. We use the scaling function $h(x)$ obtained by Bouchet et al.(1991) as explained in VS II. Our model is consistent with the studies presented in VS II and Valageas et al.(1999a), so that those papers are part of the same unified model and describe in more detail our predictions for galaxies and Lyman-$\alpha$ clouds at $z \leq 5$. \subsection{Quasar luminosity function} \label{Quasar luminosity function} Although our model was already checked in previous studies for galaxies and Lyman-$\alpha$ clouds as explained above, our description of quasars was not compared to observations in great detail (although a first check was performed in VS II). Thus, we first compare in this section our predictions for the quasar luminosity function to observational data, as shown in Fig.\ref{figquasO03}.
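As a consistency check of the prescription (\ref{tausQ}) above: for an opacity growing linearly with distance, $\tau(r) = \tau_{\nu}(l) \, r/l$, the exact volume average (\ref{taueff}) over a sphere can be compared numerically with the interpolation formula (a sketch, not the paper's code):

```python
import math

# Check that tau = ln(1 + 3/4 tau_l + tau_l^3/6), eq. (tausQ), recovers the
# exact volume-averaged opacity over a sphere, eq. (taueff), in both the
# optically thin and optically thick limits, assuming tau(r) = tau_l * r/l.

def tau_eff_exact(tau_l, n=20000):
    s = 0.0
    for i in range(n):
        u = (i + 0.5) / n                      # u = r/l
        s += 3.0 * u * u * math.exp(-tau_l * u) / n
    return -math.log(s)

def tau_eff_fit(tau_l):
    return math.log(1.0 + 0.75 * tau_l + tau_l ** 3 / 6.0)

thin_exact, thin_fit = tau_eff_exact(0.01), tau_eff_fit(0.01)
thick_exact, thick_fit = tau_eff_exact(50.0), tau_eff_fit(50.0)
```

In the thin limit both reduce to $\frac{3}{4}\tau_{\nu}(l)$, while in the thick limit both tend to $\ln[\tau_{\nu}(l)^3/6]$, as required.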
\begin{figure}[htb] \begin{picture}(230,430) \epsfxsize=26 cm \epsfysize=18 cm \put(-28,-50){\epsfbox{figquasO03.ps}} \end{picture} \caption{The evolution with redshift of the B-band quasar luminosity function in comoving Mpc$^{-3}$. The data points are from Pei (1995).} \label{figquasO03} \end{figure} We can see that our model is consistent with observations. At low redshifts the number of quasars we predict does not decline as fast as the data, however we get a significant decrease which is already an improvement over the results of Efstathiou \& Rees (1988) for instance. We can note that Haehnelt \& Rees (1993) managed to obtain a good fit to the observed decline at low $z$ but they had to introduce an ad-hoc redshift and circular velocity dependence for the black hole formation efficiency. Since our model appears to work reasonably well we prefer not to introduce additional parameters. Moreover, as we noticed earlier our ratio (black hole mass)/(stellar mass) is consistent with observations ($F=0.006$) while the quasar life-time we use $t_Q=0.44 \; 10^8$ yrs agrees with theoretical expectations. The high-luminosity cutoff, which appears at $z<3.5$, comes from the fact that in our model very massive and bright galaxies have consumed most of their gas. Thus, the maximum quasar luminosity starts decreasing with time at low $z$ because of fuel exhaustion. We note that Haiman \& Loeb (1998) obtained similar results at $z>2$ although they used a very small time-scale $t_Q \sim 6.6 \; 10^5$ yrs (in our case this problem is partly solved by the introduction of the parameter $\lambda_Q < 1$ which states that only a small fraction of galaxies actually host a black hole). However, they note that the number density of bright quasars they get increases until $z=0$. 
In a recent paper, Haiman et al.(1998) point out that the lack of quasar detection down to magnitude $V=29$ in the HDF strongly constrains the models of quasar formation, which tend to predict more than 4 objects (still marginally consistent). In particular, they find that one needs to introduce a lower cutoff for the possible mass of quasars (shallow potential wells with a circular velocity lower than 50 km/s are not allowed to form black holes) or a mass-dependent black-hole formation efficiency. We show in Fig.\ref{figcountO03} the predictions of our model. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figcountO03.ps}} \caption{The quasar cumulative V-band number counts. The dashed line shows the counts of quasars with magnitude brighter than $V$ located at the redshifts $3.5 < z < 4.5$ while the solid line corresponds to $3.5 < z < z_{ri}$.} \label{figcountO03} \end{figure} The solid line shows the number $N(<V)$ of quasars with a magnitude lower than $V$ located at a redshift larger than $3.5$ up to the reionization redshift $z_{ri}=6.8$. The large opacity beyond this redshift prevents detection of higher $z$ objects. We also take into account the opacity due to Lyman-$\alpha$ clouds at lower $z$. We can see that our model is marginally consistent with the constraints from the HDF since it predicts $4$ detections up to $V=29$. We note that our model automatically includes photoionization feedback (threshold $T_{cool}$) and a virial temperature dependence in the relation (black hole mass) - (dark matter halo mass). However, the ``cooling temperature'' $T_{cool}$ is too low to have a significant effect on the number counts. Of course, we see that at bright magnitudes most of the counts come from low-redshift quasars ($z<4.5$).
Thus, the QSO number counts strongly constrain our model since in order to obtain a reionization history consistent with observations (namely the HI and HeII Gunn-Peterson tests and the low-redshift amplitude of the UV background radiation field) we need a relatively large quasar multiplicity function. However, one might weaken these constraints by using an ad-hoc QSO luminosity function with many faint objects ($L_B < 5 \; 10^9 \; L_{\odot}$). \subsection{Reheating of the IGM} As we explained previously the radiation emitted by galaxies and quasars will reheat and reionize the universe, following (\ref{TIGM}) and (\ref{Jnu}). We start our calculations at $z_i=200$ with the initial conditions used by Abel et al.(1998), see also Peebles (1993). In particular: \[ T_{IGM}(z_i) = \mbox{Min} \left[\; 135 \; \left( \frac{1+z_i}{100} \right)^2 \; \mbox{K} \; , \; 2.73 \; (1+z_i) \; \mbox{K} \right] \] \[ \frac{n_{HII}}{n_H} = 2.4 \; 10^{-4} \; \Omega_0^{1/2} \; \frac{0.05}{h \Omega_b} \] \[ \frac{n_{H_2}}{n_H} = 2 \; 10^{-20} \; \frac{(1-Y) \Omega_0^{3/2}}{h \Omega_b} \; (1+z_i)^{5.1} \] and we use a helium mass fraction $Y=0.26$. We present in Fig.\ref{figTO03} the redshift evolution of the IGM temperature $T_{IGM}$, as well as the temperature $T_{cool}$ which defines the smallest virialized objects which can cool at redshift $z$, see (\ref{tcool}). \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figTO03.ps}} \caption{The redshift evolution of the IGM temperature $T_{IGM}$ (solid curve). We also show the virial temperature $T_{cool}$ of the smallest virialized halos which can cool at redshift $z$ (dashed curve) while $T_m$ is a mass-averaged temperature (dot-dashed curve).} \label{figTO03} \end{figure} At high $z$ the IGM temperature decreases with time due to the adiabatic expansion of the universe. 
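The starting values of these curves can be checked directly; a short sketch evaluating the initial conditions quoted above at $z_i=200$ for the present cosmology (Python; an illustration, not the code used for the paper's calculations):

```python
# Direct evaluation of the Abel et al. (1998) initial conditions quoted
# above, at z_i = 200, for Omega_0 = 0.3, Omega_b = 0.03, h = 0.6, Y = 0.26.

def initial_conditions(z_i=200.0, Omega_0=0.3, Omega_b=0.03, h=0.6, Y=0.26):
    # IGM temperature: adiabatic branch vs. CMB-coupled branch
    T = min(135.0 * ((1.0 + z_i) / 100.0) ** 2, 2.73 * (1.0 + z_i))        # K
    x_HII = 2.4e-4 * Omega_0 ** 0.5 * 0.05 / (h * Omega_b)                 # n_HII/n_H
    x_H2 = 2e-20 * (1.0 - Y) * Omega_0 ** 1.5 / (h * Omega_b) * (1.0 + z_i) ** 5.1
    return T, x_HII, x_H2

T0, x_HII0, x_H20 = initial_conditions()
```

At $z_i=200$ the two temperature branches nearly cross: the adiabatic branch gives $T_{IGM}(z_i) \simeq 545$ K, just below the CMB-coupled value $2.73\,(1+z_i) \simeq 549$ K, while the residual ionization fraction is a few $10^{-4}$.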
Next, for $z < 24$ ($\log(1+z)<1.4$) the medium starts being slowly reheated by the radiative output of stars and quasars until it reaches a maximum temperature $T_{max} \sim 3 \; 10^4$ K at $z \simeq 9$, where collisional excitation cooling is so efficient that the IGM temperature cannot increase significantly any more. As we shall see later, this phase occurs {\it before} the medium is reionized, as was also noticed by Gnedin \& Ostriker (1997) using a numerical simulation. There is a small increase at $z \sim 7$ ($\log(1+z) \sim 0.9$) when the universe is fully reionized and the UV background radiation shows a sharp rise. However, because cooling is very efficient, the dramatic increase in $J_{\nu}$ only leads to a small change in $T_{IGM}$. Eventually at low redshifts the temperature starts decreasing again due to the expansion of the universe as the heating time-scale becomes larger than the Hubble time $t_H$. The temperature $T_{cool}$ which defines the smallest objects which can cool at redshift $z$ increases with time because the decline of the number density of the various species, due to the expansion of the universe, makes cooling less and less efficient. Indeed, the cooling rate (in erg cm$^{-3}$ s$^{-1}$) associated with a given process involving the species $i$ and $j$ can usually be written as $k_{ij}(T) n_i n_j$, which leads to a cooling time-scale: \begin{equation} t_{cool,ij} = \frac{3/2 \; n_b k T}{k_{ij}(T) n_i n_j} \propto n^{-1} \sim (1+z)^{-3/2} \; t_H \end{equation} where we have neglected the temperature dependence and used $n \propto (1+z)^3$ together with $t_H \propto (1+z)^{-3/2}$, valid at high $z$. Thus, the ratio (cooling time)/(Hubble time) increases as time goes on (at fixed $T$ and abundance fractions). Since halos with virial temperature $T_{cool}$ must satisfy $t_{cool} \sim t_H$, see (\ref{tcool}), $T_{cool}$ has to get higher with time to increase the rate $k_{ij}$ (which usually contains factors of the form $\exp(-T_{ij}/T)$).
The sudden increase of $T_{cool}$ at $z \sim 30$ ($\log(1+z) \sim 1.5$) is due to the decline of the fraction of molecular hydrogen which starts being destroyed by the radiation emitted by stars and quasars. As a consequence the main cooling process becomes collisional excitation cooling instead of molecular cooling. Since the former is only active at high temperatures (the rate coefficient $k(T)$ contains a term $\exp(-118348 K/T)$ instead of $ \exp(-512 K/T)$ for molecular cooling) the cooling temperature $T_{cool}$ has to increase up to $T_{cool} \sim 10^4$ K. By definition $T_{cool}$ is larger than the IGM temperature and usually much higher as can be seen in Fig.\ref{figTO03}. However, at $z \sim 9$ when $T_{IGM} \sim 10^4$ K is quite high due to reheating by the background radiation field we have $T_{cool}=T_{IGM}$ since the IGM temperature is large enough to allow for efficient cooling. Then all virialized bound objects, with $T>T_{IGM}$, form baryonic clumps which can cool. The temperature $T_m$ represents a mass-averaged temperature: the matter within the IGM is associated with $T_{IGM}$ while virialized objects (hence with $T>T_{IGM}$) are characterized by a temperature defined as Min$(T,10^6 \mbox{K})$. Since $T_m$ does not enter any of our calculations used to obtain the redshift evolution of the universe this crude definition is sufficient for our purpose which is merely to illustrate the difference between volume ($T_{IGM}$) and mass averages. As can be seen from Fig.\ref{figTO03} we always have $T_{m} \geq T_{IGM}$ as it should be. At large redshifts $T_{m} \simeq T_{IGM}$ since most of the matter is within the uniform IGM component, whereas at low redshifts $z < 5$ ($\log(1+z) < 0.8$) the IGM temperature declines as we explained previously while $T_m$ remains large since most of the matter is now embedded within collapsed objects where shock heating is important (and they do not experience adiabatic cooling due to the expansion).
We show in Fig.\ref{figtcoolO03} the cooling and heating times associated with various processes for the IGM as well as for the smallest halos $M_{cool}$ which can cool at $z$. \begin{figure}[htb] \centerline{\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figtcoolO03.ps}} \caption{The cooling and heating times associated with the various relevant processes in units of $s \; t_H(z)$ for the IGM (upper figure) and the halos defined by $T_{cool}$ (lower figure). The labels are as follows: 1) collisional excitation, 2) collisional ionization, 3) recombination, 4) molecular hydrogen, 5) bremsstrahlung, 6) Compton, 7) photoionization heating and 8) cooling due to expansion (only for the IGM, see text).} \label{figtcoolO03} \end{figure} We can see in the upper panel that for large and small redshifts, $z > 24$ ($\log(1+z) > 1.4$) and $z < 5$ ($\log(1+z) < 0.8$), all time-scales associated with the IGM are larger than the Hubble-time which means that the IGM temperature declines due to the adiabatic cooling entailed by the expansion of the universe. However, at intermediate redshifts $z \sim 18$ ($\log(1+z) \sim 1.3$) the smallest time-scale corresponds to heating by the background radiation ($t_{heat}$) which means that $T_{IGM}$ increases during this period. Next, at $z \sim 9$, the IGM temperature becomes large enough to activate collisional excitation cooling so as to reach a temporary equilibrium where $t_{cool} \simeq t_{heat}$ while $T_{IGM}$ remains constant. Then, as we shall see later the universe gets suddenly reionized at $z_{ri} = 6.8$ ($\log(1+z)=0.9$). This means that $t_{heat}$ increases sharply as $n_{HI}$ declines (as well as $n_{HeI}$ and $n_{HeII}$) as can be seen from (\ref{theat}). 
The cooling time due to collisional excitation follows this rise as the medium remains in quasi-equilibrium while the temperature declines slightly (the strongly temperature-dependent factors like $\exp(-118348 K/T)$ in $t_{cool}$ ensure that it immediately adjusts to $t_{heat}$; moreover these cooling rates are also proportional to $n_{HI}$, $n_{HeI}$ and $n_{HeII}$) until both heating and cooling time-scales become larger than the Hubble time. Then this quasi-equilibrium regime stops as the medium merely cools because of the expansion of the universe. These various phases, which appear very clearly in Fig.\ref{figtcoolO03}, explain the behaviour of $T_{IGM}$ shown in Fig.\ref{figTO03} which we described earlier. The peak at $z \simeq 19$ ($\log(1+z) \simeq 1.3$) of $t_{Compton}$ in the upper panel (curve 6) corresponds to the time when its sign changes (hence $t_{Compton}^{-1}=0$). At higher redshifts $T_{IGM}$ is lower than the CMB temperature (due to adiabatic cooling by the expansion of the universe), so that the gas is heated by the CMB photons, while at lower $z$ the IGM temperature is larger than $T_{CMB}$ (due to reheating) so that the gas is cooled by the interaction with the CMB radiation. The lower panel shows the cooling and heating times associated with the halos $T_{cool}$. As we have already explained we can see that at high redshifts the main cooling process is molecular hydrogen cooling. Note however that for the IGM this process is always irrelevant. This difference comes from the fact that the larger density and temperature of these virialized halos allow them to form more molecular hydrogen than is present in the IGM, so that molecular cooling becomes efficient. This was also described in detail in Tegmark et al.(1997) for instance. Of course at these redshifts we have $t_{cool,mol} = s \; t_H$ by definition of $T_{cool}$.
Then, at $z < 27$ ($\log(1+z) < 1.4$) as molecular hydrogen starts being destroyed by the background radiation the main cooling process becomes collisional excitation. Note that the corresponding cooling time gets smaller than $s \; t_H$ because the medium is also heated by the radiation so that the actual cooling time results from a slight imbalance between cooling and heating processes. The sharp decrease of the various time-scales at $z \simeq 9$ corresponds to a sudden increase of $T_{cool}$ due to the rise of $T_{IGM}$ (which influences the cooling halos since $T_{cool} \geq T_{IGM}$) also seen in Fig.\ref{figTO03} and in the upper panel of Fig.\ref{figtcoolO03}. Around $z \sim 8$ ($\log(1+z) \sim 0.9$) we have $t_{cool} \ll t_{heat}$ and $t_{cool} \ll t_H$ so that all virialized halos above $T_{IGM}$ can cool ($T_{cool}=T_{IGM}$). The feature at $z \sim 5.3$ ($\log(1+z) \sim 0.8$) is due to reionization. Finally, we present in Fig.\ref{figMO03} the characteristic masses we encounter. The mass $M_d$ is obtained from (\ref{Rd}): \begin{equation} M_d = (1+\Delta)_{IGM} \; \overline{\rho} \frac{4 \pi}{3} R_d^3 \label{Md} \end{equation} while $M_{cool}$ corresponds to the halos which can cool at redshift $z$, as we have already explained. The mass $M_{vir}$ which follows closely the behaviour of $M_d$ describes the smallest virialized objects. It differs from $M_d$ because the density contrast is now $\Delta_c$ instead of $\Delta_{IGM}$. The mass $M_{NL}$ corresponds to the first non-linear scale defined by $\overline{\xi} = 1$. Note that after reionization $M_d \sim 3 \; 10^6 M_{\odot}$ while the usual Jeans mass would be $M_J \sim 10^8 M_{\odot}$. This is mainly due to the low density $(1+\Delta)_{IGM}$ of the IGM, see (\ref{Md}) and Fig.\ref{figclumpO03} below. 
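Since (\ref{Md}) is linear in $(1+\Delta)_{IGM}$, an underdense IGM suppresses $M_d$ by the same factor at fixed $R_d$; schematically (the value $(1+\Delta)_{IGM} \sim 0.03$ below is an assumed illustration, consistent with the ratio $M_d/M_J$ quoted above):

```python
import math

# Eq. (Md): M = (1+Delta) * rho_bar * (4 pi/3) R^3. Because M is linear in
# (1+Delta), an IGM density contrast (1+Delta)_IGM ~ 0.03 (assumed here for
# illustration) lowers M_d by ~30 relative to a mean-density estimate.

def mass_from_radius(one_plus_delta, rho_bar, R):
    return one_plus_delta * rho_bar * 4.0 * math.pi / 3.0 * R ** 3

M_mean = mass_from_radius(1.0, rho_bar=1.0, R=1.0)   # mean-density (Jeans-like)
M_igm = mass_from_radius(0.03, rho_bar=1.0, R=1.0)   # underdense IGM
```

This linear suppression is why $M_d \sim 3 \; 10^6 M_{\odot}$ can fall well below the usual Jeans mass $M_J \sim 10^8 M_{\odot}$.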
\begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figMO03.ps}} \caption{The redshift evolution of the characteristic masses $M_d$, $M_{vir}$, $M_{cool}$ and $M_{NL}$ in $M_{\odot}$.} \label{figMO03} \end{figure} By definition we have $M_d < M_{vir} \leq M_{cool}$. At $z \sim 9$ we have $M_{cool} = M_{vir}$ since $T_{cool}=T_{IGM}$ as we noticed earlier in Fig.\ref{figTO03}. We also note that at large $z$ our calculation is not entirely correct since our multiplicity functions are valid in the non-linear regime, for masses $M \ll M_{NL}$. However, at these early times the universe is nearly exactly uniform (by definition!) so that this is not a very serious problem. We can see that the first cooled objects which form in significant numbers are halos of dark-matter mass $M \sim 10^5 M_{\odot}$ which appear at $z \sim 49$ ($\log(1+z) \sim 1.7$), when $M_{cool}$ becomes smaller than $M_{NL}$. However, they only influence the IGM at $z < 24$ ($\log(1+z) < 1.4$) when reheating begins. \subsection{Reionization of the IGM} After the radiation emitted by quasars and stars reheats the universe, as described in the previous section, it will eventually reionize the IGM. We present in Fig.\ref{figJO03} the evolution with redshift of the background radiation field and of the comoving stellar formation rate. Within the framework of our model the latter is a good measure of the radiative output from galaxies, see (\ref{dEdnudts}), as well as from quasars, see (\ref{LQ}), since we note that the quasar mass happens to be roughly proportional to the stellar mass. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figJO03.ps}} \caption{The redshift evolution of the UV flux $J_{21}$ (upper panel) and of the comoving star formation rate $d\rho_s/dt$ (lower panel).
The data points are from Giallongo et al.(1996) (square), Cooke et al.(1997) (filled square), Vogel et al.(1995) (triangle, upper limit), Donahue et al.(1995) (filled triangle, upper limit) and Kulkarni \& Fall(1993) (circle). The dashed line in the lower panel shows the effect of the absorption of high energy photons by the neutral hydrogen present in the IGM and in Lyman-$\alpha$ clouds.} \label{figJO03} \end{figure} The upper panel of Fig.\ref{figJO03} shows the UV flux $J_{21}$ in units of $10^{-21}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ sr$^{-1}$ defined by (\ref{J21}). We can see that the UV flux rises very sharply at $z \simeq 6.8$ ($\log(1+z) \simeq 0.9$) which corresponds to the reionization redshift $z_{ri}$ when the universe suddenly becomes optically thin, so that the radiation emitted by stars and quasars at large frequencies is no longer absorbed and contributes directly to $J_{\nu}$. This appears clearly from the lower panel. Here the solid line shows the comoving star formation rate, obtained from (\ref{SFRav}), while the dashed line shows the same quantity multiplied by a luminosity-weighted opacity factor $\exp(-\tau_L)$ which describes the opacity due to the IGM and Lyman-$\alpha$ clouds (see below (\ref{tauL}), (\ref{tauLtot}) and Fig.\ref{figtauO03}). Thus, we can see that while the star formation rate evolves rather slowly with $z$ the absorption term varies sharply around $z_{ri}$. Hence the universe is suddenly reionized at $z_{ri}$ (when the ionized bubbles overlap: $Q_{HII}=1$) on a time-scale very short as compared to the Hubble time because of the strongly non-linear effect of the opacity. 
We note that for $z<1$ our star-formation model is somewhat simplified, as we explained earlier, because we defined all galaxies by a constant density contrast $\Delta_c$ while cooling constraints should be taken into account, as in VS II, and we also used approximations for the stellar content of galaxies which are not strictly valid for all galactic halos at these low redshifts. The reader is referred to VS II for a more precise description of the low $z$ behaviour. However, our present treatment is sufficient for our purposes and still provides a reasonable approximation at $z < 1$. We display in Fig.\ref{figJnuO03} the background radiation spectrum $J_{\nu}$ at four redshifts: $z_1=7.3$ (before reionization), $z_2=6.4$ (after reionization), $z_3=3$ and $z_4=0$. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figJnuO03.ps}} \caption{The background radiation spectrum $J_{\nu}$ (in units of $10^{-21}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ sr$^{-1}$) at the redshifts $z_1=7.3$ (solid line, prior to reionization), $z_2=6.4$ (lower dashed line, after reionization), $z_3=3$ (upper dashed line) and $z_4=0$ (solid line).} \label{figJnuO03} \end{figure} We see that the ionization edges corresponding to HI, HeI and HeII can be clearly seen at high $z$ before the universe is reionized. Then the background radiation is very strongly suppressed for $\nu > 13.6$ eV due to HI and HeI absorption. Of course at very large frequencies $\nu \ga 1$ keV where the cross-section gets small we recover the slope $J_{\nu} \propto \nu^{-\alpha}$ of the radiation emitted by quasars. At low redshifts $z < z_{ri}$ after reionization the drop corresponding to HeI disappears as HeI is fully ionized and its number density gets extremely small, as we shall see below in Fig.\ref{figchemO03}. However, even at $z \sim 3$ the ionization edges due to HI and HeII are clearly apparent and $J_{\nu}$ is still significantly different from a simple power-law. 
At low redshifts $z \sim 0$ the background radiation is much smoother since its main contribution comes from radiation emitted while the universe was ionized and optically thin. However, its intensity is smaller than at $z \sim 4$ because the quasar luminosity function drops at low $z$, see Fig.\ref{figquasO03}, while the universe keeps expanding. We show in Fig.\ref{figtionrecO03} the redshift evolution of the ionization and recombination times $t_{ion}$ and $t_{rec}$ of the IGM, divided by $t_H$. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figtionrecO03.ps}} \caption{The redshift evolution of the ionization and recombination times $t_{ion}$ (solid line) and $t_{rec,IGM}$ (dashed line) of the IGM, divided by the Hubble time $t_H$. The horizontal solid line only shows $t_H$ for reference. The recombination times $t_{rec,mean}$ (uniform medium) and $t_{rec,bubble}$ (ionized bubbles) are defined in the main text.} \label{figtionrecO03} \end{figure} More precisely, the ionization time $t_{ion}$ is defined by: \begin{equation} t_{ion}^{-1} = \int 4 \pi \; \frac{J_{\nu}}{h \nu} \; \sigma_{HI}(\nu) \; d\nu \label{tion} \end{equation} while the recombination time within the IGM is: \begin{equation} t_{rec,IGM}^{-1} = \alpha(T_{IGM}) \; C_n \; \overline{n}_{e-} \label{trec} \end{equation} where $\alpha$ is the recombination rate, $C_n$ the clumping factor and $\overline{n}_{e-}$ the mean electron number density, from (\ref{Cn}) and (\ref{rhon}). 
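For a power-law background the integral (\ref{tion}) has a closed form. A hedged estimate, assuming $J_{\nu} = J_{21} \; 10^{-21} (\nu/\nu_{HI})^{-1}$ and the approximate scaling $\sigma_{HI} \simeq 6.3 \; 10^{-18} (\nu/\nu_{HI})^{-3}$ cm$^2$ (both illustrative assumptions, not spectra from this paper):

```python
import math

# Closed-form evaluation of the ionization time, eq. (tion), for an assumed
# power-law background J_nu = J21*1e-21 (nu/nu_HI)^-1 and cross-section
# sigma_HI ~ 6.3e-18 (nu/nu_HI)^-3 cm^2. The frequency integral then gives
# 1/t_ion = int 4 pi J_nu/(h nu) sigma dnu = pi J_0 sigma_0 / h.

H_PLANCK = 6.626e-27   # erg s
SIGMA_0 = 6.3e-18      # cm^2 at the HI threshold

def t_ion_seconds(J21):
    J0 = J21 * 1.0e-21  # erg cm^-2 s^-1 Hz^-1 sr^-1
    return H_PLANCK / (math.pi * J0 * SIGMA_0)

t1 = t_ion_seconds(1.0)
```

With $J_{21} = 1$ this gives $t_{ion} \sim 3 \; 10^{11}$ s, i.e. of order $10^4$ yr, far below $t_H$: once the background reaches such amplitudes the medium ionizes essentially instantaneously, consistent with the sharp drop of $t_{ion}$ at $z_{ri}$ discussed below.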
We also display for reference the recombination time which would correspond to a uniform medium with the mean density of the universe: \begin{equation} t_{rec,mean} = (1+\Delta)_n \; C_n \; t_{rec,IGM} \end{equation} Finally, we show the recombination time within ionized bubbles (where all hydrogen atoms are ionized): \begin{equation} t_{rec,bubble}^{-1} = \alpha \left(3 \; 10^4 \mbox{K} \right) \; C_n \; \overline{n}_H \label{trecbubble} \end{equation} The recombination time grows with time at high $z$ as the mean density decreases with the expansion, although this is somewhat balanced by the increase of the clumping factor $C_n$ (see below Fig.\ref{figclumpO03}). In particular, the decrease of $t_{rec,IGM}$ around $z \sim 30$ ($\log(1+z) \sim 1.5$) is due to the growth of $C_n$. The sharp drop at $z \sim 7$ ($\log(1+z) \sim 0.9$) is due to reionization which suddenly increases the number density of free electrons. After reionization the recombination time characteristic of the IGM keeps decreasing (while $t_{rec,mean}$ increases slightly since the density declines) because of the growth of the clumping factor $C_n$, see Fig.\ref{figclumpO03}, which overrides the decline of the mean universe density. The recombination time within ionized bubbles $t_{rec,bubble}$ follows the change of the mean universe density and of the clumping factor $C_n$. At large $z$ it is much smaller than the mean IGM recombination time since the IGM is close to neutral. At low $z$ it becomes larger than $t_{rec,IGM}$ since the IGM is suddenly reionized with a temperature $T_{IGM}$ which declines after reheating and gets lower than $3 \; 10^4$ K, see Fig.\ref{figTO03}. The ionization time is very large at high $z$ since the UV background radiation is small. Then it decreases very sharply at $z_{ri}$ when the universe is reionized and the background radiation suddenly grows as the medium becomes optically thin, as seen in Fig.\ref{figJO03}.
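The bubble recombination time (\ref{trecbubble}) can be estimated numerically; a sketch assuming the case-B fit $\alpha(T) \simeq 2.6 \; 10^{-13} (T/10^4 \, {\rm K})^{-0.76}$ cm$^3$ s$^{-1}$ and $\overline{n}_H \simeq 1.9 \; 10^{-7} (\Omega_b h^2/0.02) (1+z)^3$ cm$^{-3}$ (both standard approximations assumed here, not fits taken from this paper):

```python
# Hedged estimate of t_rec within ionized bubbles, eq. (trecbubble), at
# T = 3e4 K. The case-B fit alpha(T) ~ 2.6e-13 (T/1e4 K)^-0.76 cm^3/s and
# n_H ~ 1.9e-7 (Omega_b h^2/0.02)(1+z)^3 cm^-3 are assumed approximations.

def t_rec_bubble_seconds(z, C_n=1.0, Omega_b=0.03, h=0.6):
    alpha = 2.6e-13 * 3.0 ** -0.76                         # cm^3 s^-1 at 3e4 K
    n_H = 1.9e-7 * (Omega_b * h * h / 0.02) * (1.0 + z) ** 3   # cm^-3
    return 1.0 / (alpha * C_n * n_H)

t7 = t_rec_bubble_seconds(7.0)                 # homogeneous bubble, C_n = 1
t7_clumped = t_rec_bubble_seconds(7.0, C_n=10.0)
```

At $z = 7$ the homogeneous estimate is of order $10^{17}$ s; clumping shortens it in proportion to $C_n$, which is why the growth of $C_n$ controls the late-time behaviour of $t_{rec,bubble}$ described above.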
The reionization redshift corresponds to the time when $t_{ion}$ becomes smaller than $t_H$, somewhat after it gets smaller than $t_{rec,IGM}$. Thus $t_{rec,IGM}$ does not play a decisive role since it is never the smallest time-scale around reionization. Finally, we show in Fig.\ref{figchemO03} how the chemistry of the IGM evolves with time as the temperature $T_{IGM}$ and the UV flux $J_{21}$ vary with $z$. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figchemO03.ps}} \caption{The redshift evolution of the chemistry of the IGM. The upper panel shows the ionization state of hydrogen, as well as the fraction of molecular hydrogen and electrons. The lower panel presents the ionization of helium.} \label{figchemO03} \end{figure} We see very clearly in the upper panel the redshift of reionization $z_{ri}=6.8$ ($\log(1+z) = 0.9$) when the fraction of neutral hydrogen $n_{HI}/n_{H}$ declines very sharply while the UV flux $J_{21}$ suddenly rises, as was shown in Fig.\ref{figJO03}. We note that at low redshifts $z < z_{ri}$ the neutral hydrogen fraction decreases more slowly down to $n_{HI}/n_{H} \sim 10^{-8}$. The fractions of electrons and ionized hydrogen start increasing earlier at $z \sim 19$ ($\log(1+z) \sim 1.3$) but of course they remain small until $z_{ri}$. The abundance of molecular hydrogen decreases sharply at a rather high redshift $z \sim 27$ ($\log(1+z) \sim 1.4$) due to the background radiation, as we noticed on Fig.\ref{figtcoolO03}. The lower panel shows that helium gets fully ionized simultaneously with hydrogen. In particular, although there remains a small fraction of HeII ($\sim 10^{-4}$) the abundance of HeI gets extremely small. We note that at low redshifts $z < 2$ the fraction of HeII does not evolve much (and even slightly increases) while the HI abundance keeps declining. 
This is because the radiation relevant for helium ionization comes from quasars, whose luminosity function drops at low $z$ as shown in Fig.\ref{figquasO03}, while an important contribution to the hydrogen-ionizing radiation is provided by stars and the galaxy luminosity function declines more slowly with time at low $z$, as seen in Fig.\ref{figJO03} or in VS II. We shall come back to this point in Sect.\ref{Contributions of quasars and stars}. \subsection{Opacities} As we explained previously the radiation emitted by stars and quasars at high frequencies ($\nu > 13.6$ eV) is absorbed by the IGM and discrete clouds as it propagates into the IGM. This leads to the extinction factors $\exp(-\tau_s(x))$ and $\exp(-\tau_Q(x))$ in the evaluation of the source terms (\ref{Snus}) and (\ref{SnuQ}) for the background radiation. We define here the ``luminosity averages'' $\tau^L_{IGM,NHI}$ for both continuous and discrete components by: \begin{equation} e^{ -\tau^L } = \left( \frac{d\rho_s}{dt} \right)^{-1} \; \int \eta_g(x) \; \frac{dx}{x} \; \left( \frac{dM_s}{dt} \right)(x) \; e^{-\tau_s(x)} \label{tauL} \end{equation} where $\tau_s(x)$ is the relevant opacity (from the IGM or clouds for sources $x$, see (\ref{tausQ})) at the frequency $20$ eV, below the HeI ionization threshold, without taking into account the factors $Q_{HI}$ which describe ionized bubbles. The subscript $s$ refers to the fact that we consider here the opacity which enters the calculation of the stellar radiative output. The quasar-related opacity mainly differs through the factor $\lambda_Q$, see (\ref{distQ}). The weight $dM_s/dt$ corresponds to a luminosity weight as we noticed earlier, see (\ref{dEdnudts}), (\ref{LQ}) and (\ref{SFR}).
\begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figtauO03.ps}} \caption{Upper panel: the redshift evolution of the opacity from the IGM (dashed line) and ``Lyman-$\alpha$ clouds'' (solid line) which enters the absorption factors in the calculation of the radiative output from stars and quasars. Lower panel: evolution of the filling factors $Q_{HII_Q}$ (ionized bubbles around quasars, solid line), $Q_{HII_s}$ (around galaxies, upper dashed line) and $Q_{HII,cloud}$ (lower dashed line).} \label{figtauO03} \end{figure} We show in the upper panel of Fig.\ref{figtauO03} the redshift evolution of the opacity from the IGM ($\tau^L_{IGM}$, dashed line) and from ``Lyman-$\alpha$ clouds'' ($\tau^L_{NHI}$, solid line). We can see that both contributions have roughly the same magnitude before reionization, except at very high redshifts $z > 50$ ($\log(1+z) > 1.7$) when very few structures exist as shown in Fig.\ref{figFracO03}. Prior to reionization the opacity is large and the background radiation quite small, as seen in Fig.\ref{figJO03}. At $z_{ri}$ the opacity suddenly declines while $J_{21}$ rises sharply, due to the strong non-linear coupling between $\tau$ and $J_{\nu}$, as we explained in the lower panel of Fig.\ref{figJO03} where we presented the influence of the total opacity $\tau_L$: \begin{equation} \tau_L = \tau^L_{IGM} + \tau^L_{NHI} \label{tauLtot} \end{equation} At low redshifts $z < z_{ri}$ when the universe is reionized the opacity due to discrete clouds becomes much larger than the IGM contribution (although it is very small) because the density of neutral hydrogen is now proportional to the square of the baryonic density (in photoionized regions) and most of the matter is embedded within collapsed objects. The opacities $\tau^L$ were shown in the upper panel of Fig.\ref{figtauO03} without the filling factors $Q_{HI}$ which enter the actual evaluation of the source terms (\ref{Snus}) and (\ref{SnuQ}), see (\ref{tauQHI}). 
The filling factors $Q_{HII}$, describing the volume fraction occupied by ionized bubbles, are shown in the lower panel of Fig.\ref{figtauO03}. We can see that $Q_{HII_s}$ and $Q_{HII_Q}$ increase with time as structures form and emit radiation while the IGM density declines. When these ionized bubbles overlap ($Q_{HII}=1$) the universe is reionized. At low redshifts the coefficient $Q_{HII,cloud}$ declines because the background radiation is large while the quasar number density drops (indeed $Q_{HII,cloud}$ measures the volume occupied by the ``spheres of influence'' of quasars). Next, we can evaluate the mean opacities $\tau_{HI}(z)$ and $\tau_{HeII}(z)$ seen on a random line of sight from $z=0$ to a quasar located at redshift $z$. We present in Fig.\ref{figtauLymO03} the contributions from both the uniform IGM component and the discrete Lyman-$\alpha$ clouds. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figtauLymO03.ps}} \caption{The redshift evolution of the average opacities $\tau_{HI}$ and $\tau_{HeII}$ along a random line of sight produced by ``Lyman-$\alpha$ clouds''. The dashed lines show the opacities from the uniform IGM. The data points are from Press et al.(1993) (circles), Zuo \& Lu (1993) (filled circles) for hydrogen, and from Davidsen et al.(1996) (filled rectangle) and Hogan et al.(1997) (cross) for helium.} \label{figtauLymO03} \end{figure} At high redshifts prior to reionization the main contribution to the opacity is provided by the IGM which contains most of the matter. However, at low $z$ when the IGM is ionized most of the absorption comes from the Lyman-$\alpha$ clouds. At large $z$ the HeII opacity is very small because most of the helium is in its neutral form HeI. We refer the reader to Valageas et al.(1999a) for a much more detailed description of the properties of the Lyman-$\alpha$ clouds at low $z$. We can see that our predictions show a reasonable agreement with observations for the hydrogen opacity. 
At low redshifts $z < 1$ the influence of star-formation which consumes and may eject some of the gas (which we did not take into account here) could explain the relatively high opacity we obtain. The helium opacity we find is also close to observations. This is due to the fact that the UV radiation spectrum shows strong ionization edges, even at relatively low redshifts $z \sim 3$ ($\log(1+z) \sim 0.6$), see Fig.\ref{figJnuO03}. Hence the ratio $N_{HeII}/N_{HI}$ is rather large, see Fig.\ref{figchemO03}, which explains why we get a better agreement than Zheng et al.(1998) for instance (see also Valageas et al.1999a). In particular, we have $n_{HeII}/n_{HeIII} \gg n_{HI}/n_{HII}$ since $n_{HeII}/n_{HeIII} \sim 10^{-4}$ while $n_{HI}/n_{HII} \sim 10^{-8}$ at low redshift. We note that the observed HeII opacity strongly constrains the quasar contribution to the reionization process since stellar radiation is small at high frequencies (due to the near blackbody behaviour of stellar spectra). In particular, it implies that one needs a population of faint QSOs ($M_B>-26.7$) in order to reionize helium but $z_{ri}$ should not be too large so that there is still an appreciable density of HeII. In other words, as we noticed above, the UV radiation field must still display strong ionization edges, which means that it has not had enough time to be smoothed out by the radiation emitted since $z_{ri}$ when the medium is optically thin. \subsection{Stellar properties} Our model also allows us to obtain the fraction of matter within virialized or cooled halos, as well as in stars. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figFracO03.ps}} \caption{The redshift evolution of the fraction of matter enclosed within virialized halos ($F_{vir}$), cooled objects ($F_{cool}$) and stars ($F_{star}$). 
The lower solid line is the volume fraction $F_{vir,vol}$ occupied by virialized objects.} \label{figFracO03} \end{figure} We show in Fig.\ref{figFracO03} the fraction of matter within virialized halos ($F_{vir}$, upper dashed line), cooled objects ($F_{cool}$, upper solid line) and stars ($F_{star}$, lower dashed line). The first two quantities are simply obtained from (\ref{muh}). Of course we have: $F_{star} \leq F_{cool} \leq F_{vir}$. Around $z \sim 9$ we note that $F_{cool} = F_{vir}$ because as we explained previously at this time all virialized objects (with $T \geq T_{IGM}$) can cool efficiently ($T_{cool}=T_{IGM}$). The fraction of matter within virialized halos increases very fast at high redshifts $z \sim 49$ ($\log(1+z) \sim 1.6$) as $M_{vir}$ becomes smaller than $M_{NL}$, see Fig.\ref{figMO03}, when dark matter structures form on scale $R_d$. However, until $z \sim 15$ ($\log(1+z) \sim 1.2$) the mass within cooled halos remains much smaller because cooling is not very efficient so that $T_{cool} \gg T_{IGM}$, see Fig.\ref{figTO03}. At low redshifts $z < 5$ ($\log(1+z) < 0.8$) both $F_{vir}$ and $F_{cool}$ get close to unity since most of the matter is now embedded within collapsed and cooled halos (even though $T_{cool}$ becomes again much larger than $T_{IGM}$: we are so far within the non-linear regime that even $T_{cool}$ is small compared to the characteristic virial temperature of the structures built on scale $R_d$). Of course the mass within stars grows with time, closely following $F_{cool}$. Note however that it is not strictly proportional to $F_{cool}$ since an increasingly large fraction of the gas within galaxies is consumed into stars. 
The fraction of volume $F_{vir,vol}$ occupied by virialized objects always remains small as it satisfies: \begin{equation} F_{vir,vol} = \frac{1}{1+\Delta_c} \; F_{vir} \leq \frac{1}{1+\Delta_c} \end{equation} \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=10 cm \epsfbox{figclumpO03.ps}} \caption{Upper panel: the redshift evolution of the clumping factors $C_b$ and $C_n$. Lower panel: the overdensities $(1+\Delta)_{IGM}(z)$ characteristic of most of the volume of the universe at redshift $z$ at scale $R_d$ and $(1+\Delta)_n$.} \label{figclumpO03} \end{figure} We show in the upper panel of Fig.\ref{figclumpO03} the clumping factor $C_b$ defined by (\ref{Cb}). The expression (\ref{Cb}) shows clearly that at large redshifts where there are very few collapsed baryonic structures $C_b \simeq 1$ while at low $z$ when most of the baryonic matter is within virialized halos (note this is always true for dark matter on sufficiently small scales) we have $C_b \simeq \Delta_c$. We can see in the figure that $C_b$ usually increases with time as the hierarchical clustering process goes on. The temporary decrease at $z \sim 15$ ($\log(1+z) \sim 1.2$), which also appears in the lower panel and in Fig.\ref{figFracO03}, is due to the reheating of the universe, shown in Fig.\ref{figTO03}, which increases the ``damping'' length $R_d$ and mass scale $M_d$, as seen in Fig.\ref{figMO03}. As a consequence, small objects which were previously well-defined entities suddenly see the IGM temperature become larger than their virial temperature. Hence they cannot retain efficiently their gas content and they lose their identity. We note that neglecting the clumping of the gas would lead to a higher reionization redshift $z_{ri}=7.3$ since it would underestimate the efficiency of recombination. The clumping factor $C_n$ displays a behaviour similar to $C_b$ but it is usually smaller since it does not include the deep halos which can cool. 
We display in the lower panel of Fig.\ref{figclumpO03} the overdensity $(1+\Delta)_{IGM}(z)$ characteristic of most of the volume of the universe at redshift $z$ when seen on scale $R_d$. While structures form and the matter gets embedded within overdensities which occupy a decreasing fraction of the volume (when seen at this scale $R_d$) the ``overdensity'' $(1+\Delta)_{IGM}$ which characterises the medium in-between these objects (halos or filaments) declines. The density contrast $(1+\Delta)_n$ which corresponds to the IGM and shallow potential wells which do not form stars (but constitute Lyman-$\alpha$ clouds) decreases more slowly since it only excludes the high virial temperature halos with $T>T_{cool}$. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figmetO03.ps}} \caption{The redshift evolution of the metallicities $Z_c$ (star-forming gas), $Z_s$ (stars), $Z_h$ (galactic halos) and $Z_m$ (matter average). The data points are from Pettini et al.(1997) for the zinc metallicity of damped Lyman-$\alpha$ systems.} \label{figmetO03} \end{figure} We present in Fig.\ref{figmetO03} the redshift evolution of the mean metallicities (in units of solar metallicity) $Z_c$ (within the star-forming gas located in the inner parts of galaxies, upper solid line), $Z_s$ (within stars, upper dashed line) and $Z_h$ (within galactic halos, lower dashed line). We use the mass average over the various galactic halos: \begin{equation} Z = \frac{ \int Z(M) \; \mu_g(M) \; \frac{dM}{M} } { \int \mu_g(M) \; \frac{dM}{M} } \end{equation} where $\mu_g(M) dM/M$ is the galaxy mass function defined as in (\ref{muh}). The reader is referred to VS II for a detailed description of these metallicities (note that we only consider here the abundance of Oxygen or any other element that is mainly produced by SN II since we did not include SN I in our model). The lower solid line corresponds to a ``matter averaged'' metallicity $Z_m$ defined by $Z_m = F_{cool} Z_h$. 
Thus, although we do not include explicitly in our model any contamination of the IGM by heavy elements produced within galaxies, $Z_m$ defined in this way provides an upper bound for the mean IGM metallicity (corresponding to very efficient mixing). If galaxies do not eject metals very deeply into the IGM, its metallicity could be much smaller. The mean metallicity of Lyman-$\alpha$ clouds associated with galactic halos (limit or damped systems) is $Z_h$. Our results agree well with observations by Pettini et al.(1997) for damped Lyman-$\alpha$ systems. Note that there is in fact a non-negligible spread in metallicity over the various halos. \subsection{Consequence for the CMB radiation} After reionization, CMB photons may be scattered by electrons present in the gas. We write the corresponding Thomson opacity up to a redshift $z$ as: \begin{equation} \tau_{es}(z) = \int_0^z c \frac{dt}{dz} \; dz \; \sigma_T \; \overline{n}_e(z) \end{equation} where $\overline{n}_e(z)$ is the mean electron number density at redshift $z$. We take: \begin{equation} \overline{n}_e(z) = (1-Y) \; \frac{\Omega_b}{\Omega_0} \; \frac{\overline{\rho}(z)}{m_p} \; \left( \frac{n_e}{n_H} \right)_{IGM} \end{equation} which means that we use the same electron fraction in clouds as for the IGM (note that we calculate the IGM electron number density together with the ionization state of hydrogen and helium). Then, CMB anisotropies are damped on angular scales smaller than the angle subtended by the horizon at reionization. We use the analytic fit given by Hu \& White (1997) to obtain the damping factor $R_l^2$ of the CMB power-spectrum $C_l$ from the optical depth $\tau_{es}$. The results are shown in Fig.\ref{figCMBO03}. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figCMBO03.ps}} \caption{Upper panel: the optical depth $\tau_{es}$ for electron scattering.
Lower panel: damping factor $R_l^2$ for the CMB power-spectrum.} \label{figCMBO03} \end{figure} We can check that the total opacity $\tau_{es} \simeq 0.023$ is quite small because reionization occurs rather late at $z_{ri} \simeq 6.8$. This also implies that the damping factor $R_l^2$ remains close to unity: $R_l^2 \simeq 0.95$ for large $l$ (consistent with the large-$l$ behaviour $R_l^2 \simeq e^{-2\tau_{es}} \simeq 0.955$). Another distortion of the CMB radiation is the Sunyaev-Zeldovich effect which transfers photons from the Rayleigh-Jeans part of the spectrum to the Wien tail when they are scattered by hot electrons. The magnitude of this perturbation is conveniently described by the Compton parameter $y$: \begin{equation} y(z) = \int_0^z c \frac{dt}{dz} dz \; \sigma_T \; n_e \; \frac{kT}{m_e c^2} \label{yComp} \end{equation} We can first consider the contribution of the IGM gas, using in (\ref{yComp}) the temperature and the electronic density of this uniform component. Then, we estimate the distortion due to the hot gas embedded within virialized objects. We can write this latter contribution as: \begin{equation} y_{halos} = \int dy_{IGM} \; \frac{1}{(1+\Delta)_{IGM}} \; F_{vir} \; \frac{T_{mvir}}{T_{IGM}} \end{equation} where we used the same electronic fraction for halos and the IGM. The temperature $T_{mvir}$ is the ``mass averaged'' temperature of virialized objects while $F_{vir}$ is the mass fraction within collapsed objects displayed in Fig.\ref{figFracO03}. The results are presented in Fig.\ref{figycompO03}. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figycompO03.ps}} \caption{The Compton parameter $y$ up to redshift $z$ describing the Sunyaev-Zeldovich effect from the IGM (dashed line) and virialized halos (solid line).} \label{figycompO03} \end{figure} The Compton parameter $y_{IGM}$ due to the IGM first increases rather fast with $z$ until reionization, together with $n_e$ and $T_{IGM}$.
After $z_{ri}$ it reaches a plateau and does not grow any more since at these large redshifts the universe is almost exactly neutral. The contribution $y_{halos}$ of virialized objects is much larger at low $z$ than $y_{IGM}$ since the temperature of these collapsed halos is much higher than $T_{IGM}$, as shown in Fig.\ref{figTO03}. We can note however that $y_{halos}$ grows much slower and becomes close to its asymptotic value earlier than $y_{IGM}$. This is due to the fact that the characteristic temperature of virialized halos declines at larger $z$, see Fig.\ref{figTO03}, and the mass fraction they contain also decreases (while the IGM undergoes the opposite trends). A more detailed description of the Sunyaev-Zeldovich effect due to clusters, and its fluctuations (which in fact have the same magnitude as the mean), will be presented in a future article. \subsection{Contributions of quasars and stars} \label{Contributions of quasars and stars} In our model the radiation which reheats and reionizes the universe comes from both quasars and stars. At large frequencies $\nu > 24.6$ eV most of the UV flux is emitted by quasars so that stars play no role in the helium ionization. However, at lower frequencies both contributions have roughly the same magnitude. We present in Fig.\ref{figSourcesO03} the redshift evolution of the radiative output due to stars and quasars. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figSourcesO03.ps}} \caption{Upper panel: the redshift evolution of the ``instantaneous'' UV fluxes $J_{HI}^i$ and $J_{HeI}^i$ due to stars (solid lines) and quasars (dashed lines). The dotted line is the UV flux $J_{21}$ shown in Fig.\ref{figJO03}. Lower panel: the ``instantaneous'' ionization times $t_{ion;s,Q}^i$ due to stars (solid line) and quasars (dashed line). 
The dotted curve is the recombination time as in Fig.\ref{figtionrecO03} while the horizontal solid line is the Hubble time $t_H$.} \label{figSourcesO03} \end{figure} Thus, we define the ``instantaneous'' radiation fields: \begin{equation} J_{\nu;s,Q}^i = \frac{1}{10} \; t_H \; S_{\nu;s,Q} \end{equation} where $t_H(z)$ is the Hubble time, see (\ref{Jnu}). From these quantities we define the averages $J_{HI;s,Q}^i(z)$ and $J_{HeI;s,Q}^i(z)$ as in (\ref{J21}). This provides a measure of the radiative output above the ionization thresholds $\nu_{HI}$ and $\nu_{HeI}$ due to stars and quasars. We can also derive the ionization times $t_{ion;s,Q}^i$ as in (\ref{tion}). We can see in the upper panel that at reionization $z \sim z_{ri}$ the contributions to HI ionizing radiation from stars and quasars are of the same order. However, since the quasar spectrum is much harder than stellar radiation we have $J_{HeI;Q}^i \gg J_{HeI;s}^i$ so that quasars are slightly more efficient at reheating and reionizing the universe (the additional factor $(\nu-\nu_j)$ in (\ref{theat}) increases the weight of high energy photons which also remain longer above the threshold $\nu_{HI}$ while being redshifted). At low $z$ we can see that the quasar radiative output declines much faster than the stellar source term. Of course, this is due to the sharp drop at low redshifts of the quasar luminosity function. This decrease of $S_{\nu Q}$ as compared to $S_{\nu s}$ comes from two effects: i) as time increases the ``creation time-scale'' of halos of the relevant masses $M \sim 10^{12} M_{\odot}$ (through merging of smaller sub-units, measured by $t_Q/t_M$) grows and ii) there is less gas available to fuel the quasars (which even disappear) while old stars can still provide a non-negligible luminosity source for galaxies. 
We can note in the upper panel that the redshift evolution of the background radiation field $J_{21}(z)$ actually produced by stars and quasars does not exactly follow the ``instantaneous'' quantities $J_{HI;s,Q}^i(z)$ since one must take into account the expansion of the universe and deviations from equilibrium. This explains the slower increase of $J_{21}$ at $z \sim z_{ri}$ as well as the relatively low approximate ionization times $t_{ion;s,Q}^i$ at this epoch. \section{Critical universe} \label{Critical universe} We now consider the case of a critical universe $\Omega=1$ with a CDM power-spectrum (Davis et al.1985) normalized to $\sigma_8=0.5$. We also choose $\Omega_b=0.04$ and $H_0=60$ km/s/Mpc. Thus, as in the previous case of an open universe our model is consistent with the studies described in VS II and Valageas et al.(1999a). \subsection{Quasar luminosity function} We first present our results for the redshift evolution of the quasar luminosity function in Fig.\ref{figquasO1}. \begin{figure}[htb] \begin{picture}(230,430) \epsfxsize=26 cm \epsfysize=18 cm \put(-28,-50){\epsfbox{figquasO1.ps}} \end{picture} \caption{The evolution with redshift of the B-band quasar luminosity function in comoving Mpc$^{-3}$, as in Fig.\ref{figquasO03}. The data points are from Pei (1995).} \label{figquasO1} \end{figure} We can check that our results are similar to those obtained previously for an open universe and they again agree reasonably with observations. This is not very surprising since we use the same physical model so that we recover a similar behaviour. We present in Fig.\ref{figcountO1} the quasar number counts we obtain from our model. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figcountO1.ps}} \caption{The quasar cumulative V-band number counts. 
The dashed line shows the counts of quasars with a magnitude brighter than $V$ located at redshifts $3.5 < z < 4.5$ while the solid line corresponds to $3.5 < z < z_{ri}$.} \label{figcountO1} \end{figure} We can see that our results are similar to those displayed previously in Fig.\ref{figcountO03} and that our predictions are still marginally consistent with the lack of quasar detections in the HDF. \subsection{Reheating} We present in Fig.\ref{figTO1} the redshift evolution of the IGM temperature $T_{IGM}$, the virial temperature $T_{cool}$ of the smallest objects which can cool at a given time, and the mass averaged temperature $T_m$. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=5.5 cm \epsfbox{figTO1.ps}} \caption{The redshift evolution of the IGM temperature $T_{IGM}$ (solid curve), the virial temperature $T_{cool}$ (upper dashed curve) and the mass averaged temperature $T_m$ (lower dashed curve), as in Fig.\ref{figTO03}.} \label{figTO1} \end{figure} We can see that our results are again very close to those obtained for an open universe. Indeed, the structure formation process is quite similar and it must agree with the same observations (quasar and galaxy luminosity functions, Lyman-$\alpha$ column density distribution) at low $z$. \subsection{Reionization} We display in Fig.\ref{figJO1} the redshift evolution of the background radiation and the comoving star formation rate. \begin{figure}[htb] \centerline {\epsfxsize=8 cm \epsfysize=11.5 cm \epsfbox{figJO1.ps}} \caption{The redshift evolution of the UV flux $J_{21}$ (upper panel) and of the comoving star formation rate $d\rho_s/dt$ (lower panel) for the case of a critical universe. The dashed line in the lower panel shows the effect of the absorption of high energy photons by the neutral hydrogen present in the IGM and in Lyman-$\alpha$ clouds.} \label{figJO1} \end{figure} We can check that we recover the behaviour obtained previously for an open universe.
However, the reionization redshift $z_{ri} = 5.6$ is smaller than previously. This is related to the lower normalization $\sigma_8$ of the power-spectrum as compared to the previous case. This leads to fewer bright quasars at high $z$ (compare Fig.\ref{figquasO1} and Fig.\ref{figquasO03}) and to a smaller radiative output. We can also check that the hydrogen and helium reionization process is close to our previous results (as for the reheating). Thus, for most practical purposes both critical and open cosmologies allow reasonable reheating and reionization histories which are very similar. In fact, the uncertainties involved in the galaxy and quasar formation processes are probably too large to favour significantly one of these two possible scenarios (as compared to the other). However, both models are consistent with present observations. \section{Conclusion} In this article we have described an analytic model for structure formation in the universe which deals simultaneously with quasars, galaxies and Lyman-$\alpha$ clouds, within the framework of a hierarchical scenario. This allows us to study the reheating and reionization history of the universe consistently with the properties of these various classes of objects. We have shown that for both a critical and an open universe our predictions agree reasonably well with observations. However, as was noticed by Haiman et al.(1998) it appears that the observational constraints on the quasar luminosity function are already strong. Moreover, the Gunn-Peterson test for HeII provides stringent additional constraints on the quasar contribution to the UV radiation field and on the reionization redshift. Thus, although our model in its simplest version (i.e. as described here, with no additional cutoffs for the quasar multiplicity function) is marginally consistent with the data, further observations of the helium opacity and of the quasar number counts (e.g. 
with the NGST) could provide tight constraints on such models where reionization is produced by QSOs. On the other hand, since reionization occurs rather late $z_{ri} \leq 7$ the damping of CMB anisotropies is quite small. We can note that our predictions are similar to some results obtained by Gnedin \& Ostriker (1997) with a numerical simulation (but for a different cosmology). Moreover, the fact that our model agrees reasonably with observations for Lyman-$\alpha$ clouds, galaxies, quasars and constraints on the reionization process, strongly suggests that its main characteristics are fairly realistic. Thus, it provides a simple description of structure formation in the universe, from high redshifts after recombination down to the present epoch. \begin{acknowledgements} This research has been supported in part by grants from NASA and NSF. \end{acknowledgements}
Edward Harold Fulcher Swain (1883–1970) was a forester in New South Wales and Queensland, Australia. Swain laid the foundations of modern forestry economics in Queensland. Early life Edward Harold Fulcher Swain was born in Sydney in 1883. Career Swain was the first Cadet Forester in the New South Wales Forestry Branch in 1899. He studied forestry in Montana, USA in 1915 and on his return became a District Forest Inspector in Queensland. Between 1918 and 1924 he was Director of Forests in Queensland. During this time he set aside large tracts of hoop pine forest in the Brisbane Valley and Mary River Valley and planted areas of introduced species. On the abolition of the office of Director in 1924, he became inaugural Chairman of the Queensland Forestry Board until 1932. This Board was responsible for the management and control of the State Forests and National Parks. His new ideas and strong personality frequently brought him into conflict with others and he was often a controversial figure in an industry deeply rooted in traditional practice. Amongst other achievements, he pioneered forest assessment surveys, promoted the permanent reservation of good forests and forest land, improved pricing policies which led to a better use of timber resources and expanded staff training and the activities of his department. He wrote a number of books on forestry, was a founder of the Australian Forestry School at the Australian National University and supported community interest in trees. In 1924, he was instrumental in establishing the Sherwood Arboretum, a heritage-listed park on the Brisbane River, which is dedicated to the growth of indigenous trees. In 1932, Swain publicly campaigned against the indiscriminate allocation of forested land as land grants, at the time a policy of the incumbent National Party Government.
Although the Labor Party won the election, in the controversy and inquiries which followed Swain lost his job. He became a research consultant for Australian Paper Manufacturers in South Australia before becoming Commissioner for Forests in New South Wales until his official retirement in 1948. Between 1951 and 1955 he was United Nations Forestry Consultant in Ethiopia. House Edward Swain purchased the 1.5 acres of riverfront land for Swain House in 1920 when he was Director of Forests in Queensland. He took out a mortgage for £1,300 and constructed a California Bungalow style home in which he was living by 1925. He surrounded the house with extensive plantings of native and exotic trees reflecting his life's work in forestry. Swain House is listed on the Queensland Heritage Register. His daughter Nancy and her husband occupied the family home from 1946. She shared her father's interest in indigenous species and planted rainforest trees in the grounds. The house itself reflects Swain's obsession with exploring the potential of Australian timbers. It was constructed of rosewood with the intention of proving that this timber, then not well regarded for the purpose, was suitable for building. The house also displays Queensland timbers to advantage, the joinery including panelling in pine and kauri. Later life Swain died in Brisbane in July 1970. Legacy Most of the trees planted by Swain and his family survive and are now large trees that make a major contribution to the landscape of the area. The Alan Fletcher Research Station of the Department of Primary Industries nearby uses the hoop pine plantation for research into forestry practices.
\section{Introduction} Given a nonempty rational polytope $P\subseteq\RR^n$, we denote by $\VV(P)$, $\FF(P)$, and $\FT(P)$ the sets of vertices, faces, and facets of $P$, respectively, and we write $f(P):=|\FT(P)|$. We also denote by $\xc(P)$ the extension complexity of $P$, that is, the minimum number of inequalities in any linear extended formulation of $P$, i.e., a description of a polyhedron whose image under a linear map is $P$ \gustavo{(see for instance \cite{exponentialFiorini}.)} Finally, given a set $\rem \subseteq \VV(P)$, we define $\forb(P,\rem):=\conv(\VV(P)\setminus \rem)$, where $\conv(S)$ denotes the convex hull of $S\subseteq\RR^n$. This work is devoted to understanding the complexity of the forbidden-vertices problem defined below. \begin{definition}\label{def_forbidden} Given a polytope $P\subseteq \RR^n$, a set $\rem\subseteq \VV(P)$, and a vector $c\in\RR^n$, the forbidden-vertices problem is to either assert $\VV(P)\setminus \rem=\emptyset$, or to return a minimizer of $c^\top x$ over $\VV(P)\setminus \rem$ otherwise. \end{definition} Our work is motivated by enumerative schemes for stochastic integer programs \cite{Laporte}, where a series of potential solutions are evaluated and discarded from the search space. As we will see later, the problem is also related to finding different basic solutions to a linear program. To address the complexity of the forbidden-vertices problem, it is crucial to distinguish between different encodings of a polytope. \begin{definition} An explicit description of a polytope $P\subseteq \RR^n$ is a system $Ax\leq b$ defining $P$. An implicit description of $P$ is a separation oracle which, given a rational vector $x\in\RR^n$, either asserts $x\in P$, or returns a valid inequality for $P$ that is violated by $x$. \end{definition} Note that an extended formulation for $P$ is a particular case of an implicit description. 
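To fix ideas, here is a minimal instance of Definition~\ref{def_forbidden} (our illustration): let $P=[0,1]^2$ and $\rem=\{(0,0)\}$. Then

```latex
\[
\forb(P,\rem)=\conv\{(1,0),(0,1),(1,1)\}=\{x\in[0,1]^2|\ x_1+x_2\geq 1\},
\]
```

so the two facets of $P$ through $(0,0)$ are replaced by the single new facet $x_1+x_2\geq 1$, and minimizing $c^\top x$ over $\VV(P)\setminus\rem$ amounts to optimizing over the vertices of this smaller polytope. Proposition~\ref{single} shows that, in general, the number of new facets can grow exponentially.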
When $P$ admits a separation oracle that runs in time bounded polynomially in the facet complexity of $P$ and the encoding size of the point to separate, we say that $P$ is tractable. We refer the reader to \cite[Section 14]{schrijver1998theory} for a deeper treatment of the complexity of linear programming. We also distinguish different \gustavo{encodings} of a set of vertices. \begin{definition} An explicit description of $\rem\subseteq\VV(P)$ is the list of the elements in $\rem$. If $\rem=\VV(F)$ for some face $F$ of $P$, then an implicit description of $\rem$ is an encoding of $P$ and some valid inequality for $P$ defining $F$. \end{definition} Below we summarize our main contributions. \begin{itemize} \item In Section~\ref{general}, we show that the complexity of optimizing over $\VV(P)\setminus \rem$ or describing $\forb(P,\rem)$ changes significantly depending on the encoding of $P$ and/or $\rem$. In most situations, however, the problem is hard. \item In Section~\ref{binary} we consider the case of removing a list $\rem$ of binary vectors from a 0-1 polytope $P$. When $P$ is the unit \gustavo{cube}, we present two compact extended formulations describing $\forb([0,1]^n,\rem)$. We further extend this result and show that the forbidden-vertices problem is polynomially solvable for tractable 0-1 polytopes. \item Then in Section~\ref{app} we apply our results to the $k$-best problem and to binary all-different polytopes, showing the tractability of both. Finally, in Section~\ref{integral}, we also provide extensions to integral polytopes. \end{itemize} The complexity results of Sections~\ref{general} and \ref{binary} lead to the \gustavo{classification shown in Table~\ref{classif}}, depending on the \gustavo{encoding} of $P$ and $\rem$, and whether $P$ has 0-1 vertices only or not. \gustavo{Note that ($*$) is implied, for instance, by Theorem~\ref{binary_facet}. 
Although we were not able to establish the complexity of ($**$), Proposition~\ref{faceTU} presents a tractable subclass.} \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{cc|cc|cc|} ~ & ~ & \multicolumn{4}{c|}{$P$}\\ ~ & ~ & \multicolumn{2}{c|}{General}&\multicolumn{2}{c|}{0-1}\\ ~ & ~ & Explicit & Implicit & Explicit & Implicit\\ \hline \multirow{4}{*}{$\rem$} & \multirow{2}{*}{Explicit} & $\mathcal{NP}$-hard \gustavo{(Thm.~\ref{hardness})} & \multirow{2}{*}{$\mathcal{NP}$-hard for $|\rem|=1$ \gustavo{(Thm.~\ref{cutpoly})}} & \multirow{2}{*}{Polynomial} & \multirow{2}{*}{Polynomial \gustavo{(Thm.~\ref{poly01})}}\\ & & Polynomial for fixed $|\rem|$ \gustavo{(Prop.~\ref{polyFixed})}& & & \\ &&&&&\\ & Implicit & $\mathcal{NP}$-hard \gustavo{(Prop.~\ref{knapsack_facet})}& $\mathcal{NP}$-hard \gustavo{($*$)}& \gustavo{($**$)} & $\mathcal{NP}$-hard \gustavo{(Thm.~\ref{binary_facet})}\\ \hline \end{tabular} \caption{Complexity classification.}\label{classif} \end{center} \end{table} In constructing linear extended formulations, disjunctive programming emerges as a powerful practical tool. The lemma below follows directly from \cite{Balas} and the definition of extension complexity. We will frequently refer to it. \begin{lemma}\label{disj} Let $P_1,\ldots,P_k$ be \gustavo{nonempty} polytopes in $\RR^n$. If $P_i=\{x\in\RR^n|\ \exists y_i\in\RR^{m_i}:\ E_ix+F_iy_i=h_i,\ y_i\geq 0\}$, then $\conv(\cup_{i=1}^k P_i)=\{x\in\RR^n|\ \exists x_i\in\RR^n,\ y_i\in\RR^{m_i},\ \lambda\in\RR^k:\ x=\sum_{i=1}^k x_i,\ E_ix_i+F_iy_i=\lambda_ih_i,\ \sum_{i=1}^k\lambda_i = 1,\ y_i\geq 0,\ \lambda\geq 0\}$. In particular, we have $\xc\left(\conv(\cup_{i=1}^k P_i)\right)\leq \sum_{i=1}^k (\xc(P_i)+1)$. \end{lemma} \section{General polytopes}\label{general} We begin with some general results when $P\subseteq\RR^n$ is an arbitrary polytope. The first question is how complicated $\forb(P,\rem)$ is with respect to $P$.
\begin{proposition}\label{single} For each $n$, there exists a polytope $P_n\subseteq\RR^n$ and a vertex $v_n\in\VV(P_n)$ such that $P_n$ has $2n+1$ vertices and $n^2+1$ facets, while $\forb(P_n,\{v_n\})$ has $2^n$ facets. \end{proposition} \begin{proof} Let $Q_n:=[0,1]^n\cap L$, where $L:=\left\{x\in\RR^n|\ \textbf{1}^\top x\leq \frac{3}{2}\right\}$ and $\textbf{1}$ is the vector of ones. It has been observed \cite{Avis} that $Q_n$ has $2n+1$ facets and $n^2+1$ vertices. We translate $Q_n$ and define $Q_n':=Q_n- \frac{1}{n}\textbf{1}=\left[-\frac{1}{n},1-\frac{1}{n}\right]^n\cap L'$, where $L':=\left\{x\in\RR^n|\ \textbf{1}^\top x\leq \frac{1}{2}\right\}$. Since $Q_n'$ is a full-dimensional polytope having the origin in its interior, there is a one-to-one correspondence between the facets of $Q_n'$ and the vertices of its polar $P_n:=(Q_n')^*$ and vice versa. In particular, $P_n$ has $n^2+1$ facets and $2n+1$ vertices. Let $v\in\VV(P_n)$ be the vertex associated with the facet of $Q_n'$ defined by $L'$. From polarity, we have $\forb(P_n,\{v\})^*=\left[-\frac{1}{n},1-\frac{1}{n}\right]^n$. Thus $\forb(P_n,\{v\})^*$ is a full-dimensional polytope with the origin in its interior and $2^n$ vertices. By polarity, we obtain that $\forb(P_n,\{v\})$ has $2^n$ facets. \end{proof} Note that the above result only states that $\forb(P,\rem)$ may need exponentially many inequalities to be described, which does not constitute a proof of hardness. Such a result is provided by Theorem~\ref{hardness} at the end of this section. We first show that $\forb(P,\rem)$ has an extended formulation of polynomial size in $f(P)$ when both $P$ and $\rem$ are given explicitly and the cardinality of $\rem$ is fixed. \begin{proposition}\label{polyFixed} Suppose $P = \{ x \in \RR^n|\ Ax \leq b\}$. 
Using this description of $P$, and an explicit list of vertices $\rem$, we can construct an extended formulation of $\forb(P,\rem)$ that requires at most $f(P)^{|\rem| + 1}$ inequalities, i.e., $\xc(\forb(P,\rem)) \leq f(P)^{|\rem| + 1}$. \end{proposition} \begin{proof} Let $\rem=\{v_1,\ldots,v_{|\rem|}\}$ and define $\mathcal F_\rem:=\{F_1\cap\cdots\cap F_{|\rem|}|\ F_i\in \FT(P),\ v_i\notin F_i,\ i=1,\ldots,|\rem|\}$. We claim $$\forb(P,\rem)=\conv\left(\cup_{F\in\mathcal F_\rem}F\right).$$ Indeed, let $w\in \VV(P)\setminus \rem$. For each $i=1,\ldots,|\rem|$, there exists $F_i\in\FT(P)$ such that $w\in F_i$ and $v_i\notin F_i$. Therefore, letting $F:=F_1\cap\cdots\cap F_{|\rem|}$, we have $F\in\mathcal F_\rem$ and $w\in F$, proving the forward inclusion. For the reverse inclusion, consider $F\in\mathcal F_\rem$. By definition, $F$ is a face of $P$ that does not intersect $\rem$, and hence $F\subseteq \forb(P,\rem)$. By Lemma~\ref{disj}, we have $\xc(\forb(P,\rem))\leq\sum_{F\in\mathcal F_\rem}(\xc(F)+1)$. Since $\xc(F)\leq f(F)\leq f(P)-1$ for each proper face $F$ of $P$ and $|\mathcal F_\rem|\leq f(P)^{|\rem|}$, the result follows. \end{proof} Note that when $\rem=\{v\}$, the above result reduces $\forb(P,\{v\})$ to the convex hull of the union of the facets of $P$ that are not incident to $v$, which is a more intuitive result. Actually, we can expect describing $\forb(P,\rem)$ to be easier when the vertices in $\rem$ are ``far apart'' and can thus be removed ``independently'', and more complicated when they are ``close''. Proposition~\ref{polyFixed} can be refined as follows. The graph of a polytope $P$, or the 1-skeleton of $P$, is a graph $G$ with vertex set $\VV(P)$ such that two vertices are adjacent in $G$ if and only if they are adjacent in $P$. \begin{proposition}\label{components} Let $G$ be the graph of $P$.
Let $\rem\subseteq\VV(P)$ and let $(\rem_1,\ldots,\rem_m)$ be a partition of $\rem$ such that $\rem_i$ and $\rem_j$ are independent in $G$, i.e., there is no edge connecting $\rem_i$ to $\rem_j$, for all $1\leq i<j\leq m$. Then $$\forb(P,\rem)=\bigcap_{i=1}^m\forb(P,\rem_i).$$ \end{proposition} \begin{proof} We only need to show $\forb(P,\rem)\supseteq\bigcap_{i=1}^m\forb(P,\rem_i)$. For this, it is enough to show that for each $c$ we have $\max\{c^\top x:\ x\in \forb(P,\rem)\}\geq \max\left\{c^\top x:\ x\in \bigcap_{i=1}^m\forb(P,\rem_i)\right\}$. Given $c$, let $v$ be an optimal solution to the maximization problem in the right-hand side, and let $W\subseteq\VV(P)$ be the set of vertices $w$ of $P$ such that $c^\top w\geq c^\top v$. \gustavo{Observe that $W$ induces a connected subgraph of the graph $G$ of $P$ since the simplex method applied to $\max\{c^\top x:\ x\in P\}$ starting from a vertex in $W$ visits elements in $W$ only.} Hence, due to the \gustavo{independence} of $\rem_1,\ldots,\rem_m$, either there is some $w\in W$ with $w\notin \rem_1\cup\cdots\cup\rem_m$, in which case we have $w\in \forb(P,\rem)$ and $c^\top w\geq c^\top v$ as desired, or $W\subseteq \rem_i$ for some $i$. The latter case yields a contradiction: on the one hand, $v\in\forb(P,\rem_i)\subseteq\forb(P,W)$, while on the other hand, $c^\top x<c^\top v$ for all $x\in\VV(P)\setminus W$, so $v$ cannot belong to $\forb(P,W)$. \end{proof} Conversely, we may be tempted to argue that if $\forb(P,\rem)=\forb(P,\rem_1)\cap\forb(P,\rem_2)$, then $\rem_1$ and $\rem_2$ are ``far''. However, this is not true in general. For instance, suppose $P$ is a simplex. Then any $\rem\subseteq \VV(P)$ is a clique in the graph of $P$, and yet $\forb(P,\rem)=\forb(P,\rem_1)\cap\forb(P,\rem_2)$ for any partition $(\rem_1,\rem_2)$ of $\rem$. Proposition~\ref{components} generalizes the main result of \cite{lee2003cropped} regarding cropped cubes.
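When $P=[0,1]^n$ and the vertices in $\rem$ are pairwise non-adjacent (any two differ in at least two coordinates), Proposition~\ref{components} lets them be removed one at a time; for the cube, a single vertex $v$ is cut off by the standard ``no-good'' inequality $\sum_{i:v_i=0}x_i+\sum_{i:v_i=1}(1-x_i)\geq 1$. The following Python sketch is our own vertex-level sanity check of this picture and is not part of the original development (all identifiers are ours).

```python
from itertools import product

def no_good_cut(v):
    """Inequality valid for all cube vertices except v:
    sum_{i: v_i=0} x_i + sum_{i: v_i=1} (1 - x_i) >= 1,
    i.e. Hamming distance from v is at least 1."""
    def satisfied(x):
        return sum(xi if vi == 0 else 1 - xi for vi, xi in zip(v, x)) >= 1
    return satisfied

n = 3
# Pairwise non-adjacent cube vertices: any two differ in >= 2 coordinates.
rem = [(0, 0, 0), (1, 1, 0), (0, 1, 1)]
cuts = [no_good_cut(v) for v in rem]

survivors = [x for x in product((0, 1), repeat=n) if x not in rem]
assert all(cut(x) for x in survivors for cut in cuts)  # survivors all kept
assert all(not no_good_cut(v)(v) for v in rem)         # each v is cut off
```

Each cut measures Hamming distance to $v$, so it removes $v$ and no other vertex; independence of $\rem$ is what guarantees, at the level of convex hulls, that the cuts can be imposed simultaneously.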
Moreover, the definition of being ``croppable'' in \cite{lee2003cropped} in the case of the unit cube coincides with the independence property of Proposition~\ref{components}. Recall that a vertex of an $n$-dimensional polytope is simple if it is contained in exactly $n$ facets. \gustavo{Proposition~\ref{components} also implies the following well-known fact.} \begin{corollary}\label{stable} \gustavo{If $\rem$ is independent in the graph of $P$ and all its elements are simple, then} $$\forb(P,\rem)=P\cap\bigcap_{v\in \rem}H_v,$$ where $H_v$ is the half-space bounded by the hyperplane through the $n$ neighbors of $v$ that does not contain $v$. \end{corollary} \begin{proof} The result follows from Proposition~\ref{components} since, as each vertex in $\rem$ is simple, we have $\forb(P,\{v\})=P\cap H_v$ for any $v\in\rem$. \end{proof} Observe that when $P$ is given by an extended formulation or a separation oracle, $f(P)$ may be exponentially large with respect to the size of the encoding, and the bound given in Proposition~\ref{polyFixed} is not interesting. In fact, in this setting and using recent results on the extension complexity of the cut polytope \cite{Fiorini}, we show that removing a single vertex can render an easy problem hard. Let $K_n=(V_n,E_n)$ denote the complete graph on $n$ nodes. We denote by $\cut(n)$, $\cut^0(n)$, and $st\textrm{-}\cut(n)$ the convex hulls of the characteristic vectors of cuts, nonempty cuts, and $st$-cuts of $K_n$, respectively. \begin{theorem}\label{cutpoly} For each $n$, there exists a set $S_n\subseteq\RR^{n(n-1)/2}$ with $|S_n|=2^{n-1}+n-1$ and a point $v_n\in S_n$ such that linear optimization over $S_n$ can be done in polynomial time and $\xc(\conv(S_n))$ is polynomially bounded, but linear optimization over $S_n\setminus\{v_n\}$ is $\mathcal{NP}$-hard and $\xc(\conv(S_n\setminus\{v_n\}))$ grows exponentially.
\end{theorem} \begin{proof} Let $T_n:=\left\{n^2 \unit_e|\ e\in E_n\right\}$, where $\unit_e$ is the $e$-th unit vector, and define $S_n:=\VV\left(\cut^0(n)\right)\cup T_n$. We have that linear optimization over $S_n$ can be done in polynomial time. To see this, suppose we are minimizing $c^\top x$ over $S_n$. Let $x^T$ and $x^C$ be the best solution in $T_n$ and $\cut^0(n)$, respectively. Note that computing $x^T$ is trivial, and if $c$ has a negative component, then $x^T$ is optimal. Otherwise, $c$ is nonnegative and $x^C$ can be found with a max-flow/min-cut algorithm. Then the best solution among $x^T$ and $x^C$ is optimal. Now, consider the dominant of $\cut^0(n)$ defined as $\cut^0(n)_+:=\cut^0(n)+\RR^{n(n-1)/2}_+$. From \cite{Conforti}, we have that $\cut^0(n)_+$ is an unbounded polyhedron having the same vertices as $\cut^0(n)$, and moreover, it has an extended formulation of polynomial size in $n$. Let $L:=\{x\in\RR^{n(n-1)/2}|\ \sum_{e\in E_n}x_e\leq n^2\}$. Then $\cut^0(n)_+\cap L$ is a polytope having two classes of vertices: those corresponding to $\VV\left(\cut^0(n)\right)$ and those belonging to the hyperplane defining $L$. Let $W$ be the latter set. Since $\conv(W)\subseteq\conv(T_n)$, we obtain $\conv(S_n)=\conv\left(\cut^0(n)\cup T_n\right)=\conv\left((\cut^0(n)\cup W)\cup T_n\right)=\conv\left((\cut^0(n)_+\cap L)\cup T_n\right)$. Applying disjunctive programming to the last expression yields a compact extended formulation for $\conv(S_n)$. Now, let $v_n$ be any point from $T_n$, say the one corresponding to $\{s,t\}\in E_n$. We claim that linear optimization over $S_n\setminus \gustavo{\{v_n\}}$ is $\mathcal{NP}$-hard. To prove this, consider an instance of $\max\{c^\top x|\ x \in st\textrm{-}\cut(n)\}$, where $c$ is a positive vector. Let $\bar c:= \max\{c_e|\ e \in E_n\}$.
Let $d$ be obtained from $c$ as $$d_e=\left\{\begin{array}{cc} c_e & e\neq\{s,t\}\\ c_e+\bar c n^2 & e=\{s,t\} \end{array}\right.$$ and consider the problem $\max\{d^\top x|\ x \in S_n\setminus \{v_n\}\}$. We have that every optimal solution to this problem must satisfy $x_{st} = 1$. Indeed, if $x \in T_n\setminus\{v_n\}$, then for some $e \in E_n\setminus\{\{s,t\}\}$ we have $d^\top x = d_e x_e = c_e n^2$. If $x \in \gustavo{\VV(\cut^0(n))}$ is not an $st$-cut, then $x_{st} = 0$ and thus $d^\top x \leq \bar c n^2$. On the other hand, if $x$ is an $st$-cut, then $x_{st} = 1$ and thus $d^\top x \geq d_{st} x_{st} = c_{st} + \bar c n^2$. Therefore $x_{st} = 1$ in any optimal solution, and in particular, such a solution must define an $st$-cut of maximum weight. Finally, since $x_{st}\leq 1$ defines a face of $\conv(S_n\setminus\{v_n\})$ and $\conv(S_n\setminus\{v_n\})\cap\{x\in\RR^{n(n-1)/2}|\ x_{st}=1\}=\gustavo{st\textrm{-}\cut(n)}$, we conclude that $\xc(\conv(S_n\setminus\{v_n\}))$ is exponential in $n$, \gustavo{for otherwise applying disjunctive programming over all pairs of nodes $s$ and $t$ would yield an extended formulation for $\cut(n)$ of polynomial size, contradicting the results in \cite{Fiorini}.} \end{proof} Contrasting Proposition~\ref{polyFixed} and Theorem~\ref{cutpoly} shows that the complexity of $\forb(P,\rem)$ depends on the encoding of $P$. On the other hand, in all cases analyzed so far, $\rem$ has been explicitly given as a list. Now we consider the case where $\rem=\VV(F)$ for some face $F$ of $P$. \begin{proposition}\label{knapsack_facet} \gustavo{Given a polytope $P\subseteq\RR^n$ and a face $F$, both described in terms of the linear inequalities defining them, optimizing a linear function over $\VV(P)\setminus\VV(F)$ is $\mathcal{NP}$-hard. 
Moreover, $\xc(\conv(\VV(P)\setminus\VV(F)))$ cannot be polynomially bounded in the encoding length of the inequality description of $P$ and thus not in $n$.} \end{proposition} \begin{proof} Let $a\in\ZZ^n_+$ and $b\in\ZZ_+$, and consider \gustavo{the binary knapsack set} $S:=\{x\in\{0,1\}^n|\ a^\top x\leq b\}$. Let $P:=\{x\in[0,1]^n|\ 2a^\top x\leq 2b+1\}$ and note that $S=P\cap\ZZ^n$. It is straightforward to verify that $x\in\VV(P)$ is fractional if and only if $2a^\top x=2b+1$. Then, if $F$ is the facet of $P$ defined by the previous constraint, we have $S=\VV(P)\setminus\VV(F)$. \gustavo{The second part of the statement is a direct consequence of \cite{pokutta2013note} using multipliers $4^i$ as discussed after Remark 3.4 of that reference.} \end{proof} It follows from Theorem~\ref{cutpoly} and Proposition~\ref{knapsack_facet} that only when $P$ and $\rem$ are explicitly given is there hope for efficient optimization over $\forb(P,\rem)$. In a similar vein, when the linear description of $P$ is provided, we can consider the vertex-enumeration problem, which consists of listing all the vertices of $P$. We say that such a problem is solvable in polynomial time if there exists an algorithm that returns the list in time bounded by a polynomial in $n$, $f(P)$, and the output size $|\VV(P)|$. In \cite{Khachiyan} it is shown that given a partial list of vertices, the decision problem ``is there another vertex?'' is $\mathcal{NP}$-hard for (unbounded) polyhedra, and in \cite{Boros} this result is strengthened to polyhedra having 0-1 vertices only. Building on these results, we show hardness of the forbidden-vertices problem \gustavo{(Def.~\ref{def_forbidden})} for general polytopes. \begin{theorem}\label{hardness} The forbidden-vertices problem is $\mathcal{NP}$-hard, even if both $P$ and $\rem$ are explicitly given. \end{theorem} \begin{proof} Let $Q=\{x\in\RR^n:\ Ax=b,\ x\geq 0\}$ be an unbounded polyhedron such that $\VV(Q)\subseteq\{0,1\}^n$.
In \cite{Boros}, it is shown that given the linear description of $Q$ and a list $\rem\subseteq\VV(Q)$, it is $\mathcal{NP}$-hard to decide whether $\rem\neq\VV(Q)$. Let $P$ be the polytope obtained by \gustavo{intersecting} $Q$ with the half-space defined by $\sum_{i=1}^n x_i\leq n+1$, and let $F$ be the facet of $P$ associated with this constraint. Then we have $\VV(P)=\VV(Q)\cup\VV(F)$, $\sum_{i=1}^n x_i\leq n$ for $x\in\VV(Q)$, and $\sum_{i=1}^n x_i=n+1$ for $x\in\VV(F)$. Now, given the description of $P$ and a list $\rem\subseteq\VV(Q)\subseteq\VV(P)$, consider the instance of the forbidden-vertices problem $\min\left\{\sum_{i=1}^n x_i:\ x\in\VV(P)\setminus \rem\right\}$. The optimal value is equal to $n+1$ if and only if $\rem=\VV(Q)$. Since the reduction is clearly polynomial, the result follows. \end{proof} In fact, it also follows from \cite{Boros} that the forbidden-vertices problem for general polytopes becomes hard already for $|\rem|=n$. Fortunately, the case of 0-1 polytopes is amenable to good characterizations. \section{0-1 polytopes}\label{binary} We consider polytopes having binary vertices only. We show that $\forb(P,\rem)$ is tractable as long as $P$ is \gustavo{and $\rem$ is explicitly given}. Our results for $P=[0,1]^n$ \gustavo{allow us} to obtain tractability in the case of general 0-1 polytopes. \subsection{The 0-1 \gustavo{cube}} In this subsection we have $P=[0,1]^n$, and therefore $\VV(P)=\{0,1\}^n$. We show the following result. \begin{theorem}\label{P01minusV} Let $\rem$ be a list of $n$-dimensional binary vectors. Then $\xc(\forb([0,1]^n,\rem)) \leq \mathcal O(n|\rem|)$. \end{theorem} For this, we present two extended formulations involving $\mathcal O(n|\rem|)$ variables and constraints. The first one is based on an identification between nonnegative integers and binary vectors. 
The second one is built by recursion and lays ground for a simple combinatorial algorithm to optimize over $\forb([0,1]^n,\rem)$ and for an extension to remove vertices from general 0-1 polytopes. \subsubsection{First extended formulation} Let $N:=\{1,\ldots,n\}$ and $\mathcal N:=\{0,\ldots,2^n-1\}$. There exists a bijection between $\{0,1\}^n$ and $\mathcal N$ given by the mapping $\sigma(v):=\sum_{i\in N}2^{i-1}v_i$ for all $v\in \{0,1\}^n$. Therefore, we can write $\{0,1\}^n=\{v^0,\ldots,v^{2^n-1}\}$, where $v^k$ gives the binary expansion of $k$ for each $k\in \mathcal N$, that is, $v^k=\sigma^{-1}(k)$. Let $\rem=\{v^{k_1},\ldots,v^{k_m}\}$, where without loss of generality we assume $k_l<k_{l+1}$ for all $l=1,\ldots,m-1$. Also, let $\mathcal N_\rem:=\{k\in \mathcal N|\ v^k\in \rem\}$. Then we have $$\{0,1\}^n\setminus \rem=\left\{x\in \{0,1\}^n|\ \sum_{i\in N} 2^{i-1}x_i \notin \mathcal N_\rem\right\}.$$ Now, for integers $a$ and $b$, let $$K(a,b)=\left\{x\in \{0,1\}^n|\ a\leq \sum_{i\in N} 2^{i-1}x_i \leq b\right\}.$$ If $b<a$, then $K(a,b)$ is empty. Set $k_0=-1$ and $k_{m+1}=2^n$. Then we can write $$\{0,1\}^n\setminus \rem=\bigcup_{l=0}^{m}K(k_l+1,k_{l+1}-1).$$ Thus \begin{equation} \forb([0,1]^n,\rem)=\conv\left(\bigcup_{l=0}^{m}K(k_l+1,k_{l+1}-1)\right)=\conv\left(\bigcup_{l=0}^{m}\conv(K(k_l+1,k_{l+1}-1))\right). \label{intervals} \end{equation} For $k\in\mathcal N$, let $N^k:=\{i\in N|\ v^k_i=1\}$. From \cite{Muldoon} we have $$\conv(K(a,b))=\left\{x\in[0,1]^n:\ \begin{array}{rl} \displaystyle \sum_{j\notin N^a|\ j>i}x_j\geq 1 -x_i& \forall i\in N^a\\ \displaystyle \sum_{j\in N^b|\ j>i}(1-x_j)\geq x_i & \forall i\notin N^b \end{array}\right\},$$ thus $\conv(K(a,b))$ has $\mathcal O(n)$ facets. Finally, combining this and (\ref{intervals}), by Lemma~\ref{disj}, we have that $\forb([0,1]^n,\rem)$ can be described by an extended formulation having $\mathcal O(n|\rem|)$ variables and constraints. 
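The interval decomposition above is easy to compute. The following Python sketch is our own illustration (identifiers are not from the text): it maps $\rem$ to its $\sigma$-values, returns the nonempty intervals $[k_l+1,\,k_{l+1}-1]$, and checks that together they cover exactly the $\sigma$-values of the surviving vertices.

```python
from itertools import product

def sigma(v):
    """Bijection {0,1}^n -> {0,...,2^n - 1}, v |-> sum_i 2^(i-1) v_i.
    (enumerate is 0-based, so 2**i here matches 2^(i-1) for 1-based i.)"""
    return sum(2**i * vi for i, vi in enumerate(v))

def interval_decomposition(n, rem):
    """Endpoints (a, b) of the nonempty intervals K(k_l + 1, k_{l+1} - 1)
    whose union covers the complement of rem in {0,1}^n."""
    ks = [-1] + sorted(sigma(v) for v in rem) + [2**n]
    return [(ks[l] + 1, ks[l + 1] - 1)
            for l in range(len(ks) - 1) if ks[l] + 1 <= ks[l + 1] - 1]

n = 3
rem = [(1, 1, 0), (0, 0, 0), (1, 0, 1)]  # sigma-values 3, 0, 5
covered = sorted(k for a, b in interval_decomposition(n, rem)
                 for k in range(a, b + 1))
assert covered == sorted(set(range(2**n)) - {sigma(v) for v in rem})
```

Each of the at most $|\rem|+1$ intervals then contributes one polytope $\conv(K(a,b))$, with $\mathcal O(n)$ facets, to the disjunction of Lemma~\ref{disj}.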
\subsubsection{Second extended formulation} Given $\rem\subseteq \{0,1\}^n$, let $\rem'$ denote the projection of $\rem$ onto the first $n-1$ coordinates. Also, let $\widehat \rem:= \widetilde \rem\setminus \rem$, where $\widetilde \rem$ is constructed from $\rem$ by flipping the last coordinate of each of its elements. The result below is key in giving a recursive construction of $\forb([0,1]^n,\rem)$. \begin{proposition}\label{recursion} $\{0,1\}^n\setminus \rem=\left[\left(\{0,1\}^{n-1}\setminus \rem'\right)\times\{0,1\}\right]\cup \widehat \rem$. \end{proposition} \begin{proof} Given $v\in\{0,1\}^n$, let $v'\in\{0,1\}^{n-1}$ and $\widetilde v\in\{0,1\}^n$ be the vectors obtained from $v$ by removing and by flipping its last coordinate, respectively. Let $v\in \{0,1\}^n\setminus \rem$. If $\widetilde v\in \rem$, since $v\notin \rem$, we have $v\in \widehat \rem$. Otherwise $v'\notin \rem'$, and thus $v\in(\{0,1\}^{n-1}\setminus \rem')\times\{0,1\}$. For the converse, note that $\widehat \rem\subseteq \{0,1\}^n\setminus \rem$. Finally, if $v\in (\{0,1\}^{n-1}\setminus \rem')\times\{0,1\}$, then $v'\notin \rem'$ and thus $v\notin \rem$. \end{proof} The second proof of Theorem~\ref{P01minusV} follows from Proposition~\ref{recursion} by induction. Suppose that $\forb([0,1]^{n-1},\rem')$ has an extended formulation with at most $(n-1)(|\rem'|+4)$ inequalities, which holds for $n=2$. Then we can describe $\forb([0,1]^{n-1},\rem')\times\{0,1\}$ using at most $(n-1)(|\rem'|+4) + 2$ inequalities. Since the polytope $\conv(\widehat \rem)$ requires at most $|\widehat \rem|$ inequalities \gustavo{in an extended formulation}, we obtain an extended formulation for $\forb([0,1]^n,\rem)$ of size no more than $[(n-1)(|\rem'|+4)+2+1]+[|\widehat \rem|+1]\leq n(|\rem|+4)$. \subsection{General 0-1 polytopes} \gustavo{In this subsection we analyze the general 0-1 case. 
We show that the encoding of $\rem$ plays an important role in the complexity of the problem.} \subsubsection{\gustavo{Explicit $\rem$}} \gustavo{In order to prove tractability of the forbidden-vertices problem for general tractable 0-1 polytopes, we introduce the notion of $\rem$-separating faces for the 0-1 cube.} \begin{definition} Given $\rem\subseteq\{0,1\}^n$, we say that $\mathcal F\subseteq\FF([0,1]^n)$ is $\rem$-separating if $\{0,1\}^n\setminus \rem=\cup_{F\in\mathcal F}F\cap\{0,1\}^n$. We denote by $\mu(\rem)$ the minimal cardinality of an $\rem$-separating set. \end{definition} Clearly, if $\mathcal F$ is $\rem$-separating, then $$\min\left\{c^\top x|\ x\in\{0,1\}^n\setminus \rem\right\}=\min_{F\in\mathcal F}\min\left\{c^\top x|\ x\in F\cap\{0,1\}^n\right\}.$$ Thus, if we can find an $\rem$-separating family of cardinality bounded by a polynomial in $n$ and $|\rem|$, then we can optimize in polynomial time over $\{0,1\}^n\setminus \rem$ by solving the inner minimization problem for each $F\in\mathcal F$ and then picking the smallest value. \begin{proposition}\label{muV} For every nonempty set $\rem\subseteq \{0,1\}^n$, we have $\mu(\rem)\leq n|\rem|$. \end{proposition} \begin{proof} \gustavo{For each $y\in\{0,1\}^n\setminus\rem$, let $0\leq k\leq n-1$ be the length of the longest common prefix between $y$ and the elements of $\rem$, and consider the face $F=F(y):=\{x\in [0,1]^n|\ x_i=y_i\ \forall 1\leq i\leq k+1\}=\{(y_1,\ldots,y_k,y_{k+1})\}\times[0,1]^{n-k-1}$. Then the collection $\mathcal F:=\{F(y)|\ y\in\{0,1\}^n\setminus\rem\}$ is $\rem$-separating since any $y\in\{0,1\}^n\setminus\rem$ belongs to $F(y)$ and no element of $\rem$ lies in any $F(y)$ by maximality of $k$.
Clearly, $|\mathcal F|\leq n|\rem|$ since each face in $\mathcal F$ is of the form $\{(v_1,\ldots,v_k,1-v_{k+1})\}\times[0,1]^{n-k-1}$ for some $v\in\rem$.} \end{proof} \gustavo{In other words, letting} $\rem^i$ be the projection of $\rem$ onto the first $i$ components and $\widehat \rem^i:=(\rem^{i-1}\times\{0,1\})\setminus \rem^i$, where $\widehat \rem^1:=\{0,1\}\setminus \rem^1$, we have $$\{0,1\}^n\setminus \rem=\bigcup_{i=1}^n\left[\widehat \rem^i\times\{0,1\}^{n-i}\right].$$ \gustavo{Moreover, it also follows from the proof of Proposition~\ref{muV} that $\mu(\rem)$ is at most the number of neighbors of $\rem$ since if $(v_1,\ldots,v_k,1-v_{k+1},v_{k+2},\ldots,v_n)$ is a neighbor of $v\in\rem$ that also lies in $\rem$, then the face $\left\{(v_1,\ldots,v_k,1-v_{k+1})\right\}\times[0,1]^{n-k-1}$ is not included in $\mathcal F$ in the construction above.} \gustavo{Now, let} $P\subseteq\RR^n$ be an arbitrary 0-1 polytope. Note that $\VV(P)\setminus \rem=\VV(P)\cap(\{0,1\}^n\setminus \rem)$. On the other hand, if $\mathcal F\subseteq\FF([0,1]^n)$ is $\rem$-separating, then $\{0,1\}^n\setminus \rem=\cup_{F\in\mathcal F}F\cap\{0,1\}^n$. Combining these two expressions, we get $$\VV(P)\setminus \rem=\bigcup_{F\in\mathcal F}\VV(P)\cap F\cap\{0,1\}^n=\bigcup_{F\in\mathcal F}P\cap F\cap\{0,1\}^n.$$ Note that since $P$ has 0-1 vertices and $F$ is a face of the unit \gustavo{cube}, $P\cap F$ is a 0-1 polytope. Moreover, if $P$ is tractable, so is $P\cap F$. Recalling that $\mu(\rem)\leq n|\rem|$ from Proposition~\ref{muV}, we obtain \begin{theorem}\label{poly01} If $P\subseteq\RR^n$ is a tractable 0-1 polytope, then the forbidden-vertices problem is polynomially solvable. \end{theorem} In fact, a compact extended formulation for $\VV(P)\setminus \rem$ is available when $P$ has one.
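For $P=[0,1]^n$, the optimization route behind Theorem~\ref{poly01} is easy to make concrete: enumerate the candidate faces $\{(v_1,\ldots,v_k,1-v_{k+1})\}\times[0,1]^{n-k-1}$ for $v\in\rem$, discard those still containing a forbidden vector, and minimize over each remaining face in closed form. The Python sketch below is our own illustration (it assumes $\rem$ is a nonempty explicit list) and can be compared against brute-force enumeration.

```python
from itertools import product

def min_over_forbidden(c, rem):
    """Minimize c.x over the cube vertices outside rem, using the
    rem-separating faces from the proof of Proposition muV.
    Returns None when rem is all of {0,1}^n."""
    n, rem = len(c), set(rem)
    best = None
    for v in rem:
        for k in range(n):
            prefix = v[:k] + (1 - v[k],)
            # Keep only faces containing no forbidden vector.
            if any(w[:k + 1] == prefix for w in rem):
                continue
            # Min over the face: fixed prefix cost plus negative tail entries.
            val = (sum(ci * pi for ci, pi in zip(c, prefix))
                   + sum(min(0, ci) for ci in c[k + 1:]))
            if best is None or val < best:
                best = val
    return best

c = (2.0, -1.0, 3.0, -2.0)
rem = {(0, 1, 0, 1), (1, 1, 0, 1), (0, 0, 0, 0)}
brute = min(sum(ci * xi for ci, xi in zip(c, x))
            for x in product((0, 1), repeat=len(c)) if x not in rem)
assert min_over_forbidden(c, rem) == brute
```

Each kept face costs $\mathcal O(n)$ work and there are at most $n|\rem|$ candidates, matching the bound $\mu(\rem)\leq n|\rem|$.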
\begin{proposition}\label{bound1} For every 0-1 polytope $P$ and for every nonempty set $\rem\subseteq\VV(P)$, we have $$\xc(\forb(P,\rem))\leq \mu(\rem)(\xc(P)+1).$$ \end{proposition} \begin{proof} The result follows from $$\forb(P,\rem)=\conv\left(\bigcup_{F\in\mathcal F}P\cap F\cap\{0,1\}^n\right)=\conv\left(\bigcup_{F\in\mathcal F}F\right),$$ Lemma~\ref{disj}, and $\xc(F)\leq\xc(P)$ for any face $F$ of $P$. \end{proof} Observe that when $P$ is tractable but its facet description is not provided, Theorem~\ref{poly01} is in contrast to Theorem~\ref{cutpoly}. Having all vertices with at most two possible values for each component is crucial to retain tractability when $\rem$ is given as a list. However, when $\rem$ is given by a face of $P$, the forbidden-vertices problem can become intractable even in the 0-1 case. \subsubsection{\gustavo{Implicit $\rem$}} Let $\tsp(n)$ denote the convex hull of the characteristic vectors of Hamiltonian cycles in the complete graph $K_n$. Also, let $\sub(n)$ denote the subtour-elimination polytope for $K_n$ with edge set $E_n$. \begin{theorem}\label{binary_facet} For each $n$, there exists a 0-1 polytope $P_n\subseteq\RR^{n(n-1)/2}$ and a facet $F_n\in\FT(P_n)$ such that linear optimization over $P_n$ can be done in polynomial time and $\xc(P_n)$ is polynomially bounded, but linear optimization over $\VV(P_n)\setminus\VV(F_n)$ is $\mathcal{NP}$-hard and $\xc(\forb(P_n,\VV(F_n)))$ grows \gustavo{exponentially}. \end{theorem} \begin{proof} Given a positive integer $n$, consider $T^+_n:=\{x\in\{0,1\}^{E_n}|\ \sum_{e\in E_n}x_e=n+1\}$, $T^-_n:=\{x\in\{0,1\}^{E_n}|\ \sum_{e\in E_n}x_e=n-1\}$, and $H_n:=\tsp(n)\cap\{0,1\}^{E_n}$. The idea is to ``sandwich'' $H_n$ between $T^-_n$ and $T^+_n$ to obtain tractability, and then remove $T^-_n$ to obtain hardness. We first show that linear optimization over $T_n^-\cup H_n\cup T_n^+$ is polynomially solvable. 
Given $c\in\RR^{n(n-1)/2}$, consider $\max\{c^\top x|\ x\in T_n^-\cup H_n\cup T_n^+\}$. Let $x^-$ and $x^+$ be the best solution in $T_n^-$ and $T_n^+$, respectively, and note that $x^-$ and $x^+$ are trivial to find. Let $m$ be the number of nonnegative components of $c$. If $m\geq n+1$, then $x^+$ is optimal. If $m\leq n-1$, then $x^-$ is optimal. If $m=n$, let $x^n\in\{0,1\}^{E_n}$ have a 1 at position $e$ if and only if $c_e\geq 0$. If $x^n$ belongs to $H_n$, which is easy to verify, then it is optimal. Otherwise either $x^-$ or $x^+$ is an optimal solution. Now we show that linear optimization over $H_n\cup T_n^+$ is $\mathcal{NP}$-hard. Given $c\in\RR^{n(n-1)/2}$ with $c>0$, consider $\min\{c^\top x|\ x\in H_n\}$. Let $\bar c:=\max\{c_e|\ e\in E_n\}$ and define $d_e:=c_e+n\bar c$. Consider $\min\{d^\top x|\ x\in H_n\cup T_n^+\}$. For any $x\in T_n^+$, we have $d^\top x=(n+1)n\bar c + c^\top x> (n+1)n\bar c$. For any $x\in H_n$, we have $d^\top x=n^2\bar c + c^\top x\leq n^2\bar c+n\bar c=(n+1)n\bar c$. Hence, the optimal solution to the latter problem belongs to $H_n$ and defines a tour of minimal length with respect to $c$. Letting $P_n:=\conv(T_n^-\cup H_n\cup T_n^+)$, we have that $P_n$ is a tractable 0-1 polytope, $\sum_{e\in E_n}x_e\geq n-1$ defines a facet $F_n$ of $P_n$, and $\VV(P_n)\setminus\VV(F_n)=H_n\cup T_n^+$, which is an intractable set. Now, since $\forb(P_n,\VV(F_n))=\conv(H_n\cup T_n^+)$, we have that $\sum_{e\in E_n}x_e\geq n$ defines a facet of $\forb(P_n,\VV(F_n))$ and $\forb(P_n,\VV(F_n))\cap\{x\in\RR^{n(n-1)/2}|\ \sum_{e\in E_n}x_e=n\}=\tsp(n)$. Therefore, $\xc(\forb(P_n,\VV(F_n)))$ is \gustavo{exponential} in $n$ \cite{rothvoss2013matching}. It remains to show that $\xc(P_n)$ is polynomial in $n$. Let $T_n:=\{x\in\{0,1\}^{E_n}|\ \sum_{e\in E_n}x_e=n\}$ and let $\overline H_n:=T_n\setminus H_n$ be the set of incidence vectors of $n$-subsets of $E_n$ that do not define a Hamiltonian cycle. 
Given $x\in\{0,1\}^{E_n}$, let $N(x)$ be the set of neighbors of $x$ in $[0,1]^{E_n}$, let $L(x)$ be the half-space bounded by the hyperplane spanned by $N(x)$ that does not contain $x$, and let $C(x):=[0,1]^{E_n}\setminus L(x)$. Finally, let $\Delta_n:=\conv(T^-_n\cup T_n\cup T^+_n)\gustavo{=\{x\in[0,1]^{E_n}|\ n-1\leq \sum_{e\in E_n}x_e\leq n+1\}}$. We claim that $P_n=\conv(T^-_n\cup \sub(n) \cup T^+_n)$. By definition, we have $P_n\subseteq\conv(T^-_n\cup \sub(n) \cup T^+_n)$. To show the reverse inclusion, it suffices to show $\sub(n)\subseteq P_n$. \gustavo{Note that any two distinct elements in $T_n$ can have at most $|E_n|-2$ tight inequalities in common from those defining $\Delta_n$. Thus, $T_n$ defines an independent set in the graph of $\Delta_n$. Moreover, for each $x\in T_n$ the set of neighbors in $\Delta_n$ is $N(x)$ and thus all vertices in $T_n$ are simple. As $\overline H_n\subseteq T_n$, we have that $\overline H_n$ is simple and independent,} and by Corollary~\ref{stable} we have $$P_n=\Delta_n\cap\bigcap_{x\in\overline H_n}L(x)=\Delta_n\setminus\bigcup_{x\in\overline H_n}C(x).$$ Since $\sub(n)\subseteq\Delta_n$, from the second equation above, it suffices to show $C(x)\cap\sub(n)=\emptyset$ for all $x\in\overline H_n$. For this, note that for any $x\in\overline H_n$, there exists a set $\emptyset\neq S\subsetneq V_n$ such that $x(\delta(S))\leq 1$, which implies $y(\delta(S))\leq 2$ for all $y\in N(x)$. Thus $C(x)\cap\sub(n)=\emptyset$ as $x(\delta(S))\geq 2$ is valid for $\sub(n)$. Finally, applying disjunctive programming and since $\xc(\sub(n))$ is polynomial in $n$ \cite{yannakakis1991expressing}, we conclude that $P_n$ has an extended formulation of polynomial size. \end{proof} To conclude this section, consider the case where $P$ is explicitly given and $\rem$ is the vertex set of a facet of $P$. Although we are unable to establish the complexity of the forbidden-vertices problem in this setting, we present a tractable case and discuss an extension.
\begin{proposition}\label{faceTU} Let $P=\{x\in\RR^n|\ Ax\leq b\}$ be a 0-1 polytope, where $A$ is TU and $b$ is integral. Let $F$ be the face of $P$ defined by $a_i^\top x = b_i$. Then $$\forb(P,\VV(F))=P\cap\{x\in\RR^n|\ a_i^\top x\leq b_i-1\}.$$ \end{proposition} \begin{proof} We have $$\VV(P)\setminus\VV(F) = P\cap\{x\in\{0,1\}^n|\ a_i^\top x\leq b_i-1\}.$$ Since $A$ is TU and $b$ is integral, the set $P\cap\{x\in\RR^n|\ a_i^\top x\leq b_i-1\}$ is an integral polytope contained in $P$, and hence a 0-1 polytope whose vertices are exactly $\VV(P)\setminus\VV(F)$. \end{proof} Since any face is the intersection of a subset of facets, the above result implies that removing a single face can be efficiently done by disjunctive programming in the context of Proposition~\ref{faceTU}. Also, if we want to remove a list of facets, that is, $\rem=\cup_{F\in\mathcal F}\VV(F)$ where $\mathcal F$ is a subset of the facets of $P$, then we can solve the problem by removing one facet at a time. However, if $\mathcal F$ is a list of faces, then the problem becomes hard in general. \begin{proposition} If $\mathcal F$ is a list of faces of $[0,1]^n$, then optimizing a linear function over $\{0,1\}^n\setminus \cup_{F\in\mathcal F}\VV(F)$ is $\mathcal{NP}$-hard. \end{proposition} \begin{proof} Let $G=(V,E)$ be a graph. Consider the problem of finding a minimum cardinality vertex cover of $G$, which can be formulated as \begin{eqnarray*} \min & \sum_{i\in V}x_i\\ \st &x_i+x_j\geq 1 &\forall \{i,j\}\in E\\ & x_i\in\{0,1\} &\forall i\in V. \end{eqnarray*} Construct $\mathcal F$ by adding a face of the form $F=\{x\in[0,1]^n|\ x_i=0,\ x_j=0\}$ for each $\{i,j\}\in E$. Then the vertex cover problem, which is $\mathcal{NP}$-hard, reduces to optimization of a linear function over $\{0,1\}^n\setminus \cup_{F\in\mathcal F}\VV(F)$. \end{proof} \section{Applications}\label{app} \subsection{$k$-best solutions} The $k$-best problem defined below is closely related to removing vertices.
\begin{definition}\label{def_kbest} Given a nonempty 0-1 polytope $P\subseteq\RR^n$, a vector $c\in\RR^n$, and a positive integer $k$, the $k$-best problem is to either assert $|\VV(P)|\leq k$ and return $\VV(P)$, or to return $v_1,\ldots,v_k\in\VV(P)$, all distinct, such that $\max\{c^\top v_i|\ i=1,\ldots,k\}\leq \min\{c^\top v|\ v\in\VV(P)\setminus\{v_1,\ldots,v_k\}\}$. \end{definition} Since we can sequentially remove vertices from 0-1 polytopes, we can prove the following. \begin{proposition} Let $P\subseteq[0,1]^n$ be a tractable 0-1 polytope. Then, for any $c\in\RR^n$, the $k$-best problem can be solved in time polynomial in $k$ and $n$. \end{proposition} \begin{proof} For each $i=1,\ldots,k$, solve the problem \begin{eqnarray*} (\mathcal P_i)\ \min & c^\top x\\ \st& x\in P_i, \end{eqnarray*} where $P_1:=P$, $P_i:=\forb(P_{i-1},\{v_{i-1}\})=\forb(P,\{v_1,\ldots,v_{i-1}\})$ for $i=2,\ldots,k$, and $v_i\in\VV(P_i)$ is an optimal solution to $(\mathcal P_i)$, if one exists, for $i=1,\ldots,k$. From Theorem~\ref{poly01}, we can solve each of these problems in polynomial time. In particular, if $(\mathcal P_i)$ is infeasible, we return $v_1,\ldots,v_{i-1}$. Otherwise, by construction, $v_1,\ldots,v_k$ satisfy the required properties. Clearly, the construction is done in polynomial time. \end{proof} The above complexity result was originally obtained in \cite{Lawler} building on ideas from \cite{Murty} by applying a branch-and-fix scheme. \subsection{Binary all-different polytopes} With edge-coloring of graphs in mind, the binary all-different polytope was introduced in \cite{Lee}. It was further studied in \cite{lee2007binary} and \cite{lee2005separating}. We consider a more general setting.
\begin{definition} Given a positive integer $k$, nonempty 0-1 polytopes $P_1,\ldots,P_k$ in $\RR^n$, and vectors $c_1,\ldots,c_k\in\RR^n$, the binary all-different problem is to solve \begin{eqnarray*} (\mathcal P)\ \min & \sum_{i=1}^k c_i^\top x_i\\ \st& x_i\in \VV(P_i) & i=1,\ldots,k\\ &x_i\neq x_j &1\leq i<j\leq k. \end{eqnarray*} \end{definition} In \cite{Lee}, it was asked whether the above problem is polynomially solvable in the case $P_i=[0,1]^n$ for all $i=1,\ldots,k$. Using the tractability of the $k$-best problem, we give a positive answer even for the general case of distinct polytopes. Given a graph $G=(V,E)$ and $U\subseteq V$, a $U$-matching in $G$ is a matching $M\subseteq E$ such that each vertex in $U$ is contained in some element of $M$. \begin{theorem}\label{alldiffP} If $P_i\subseteq\RR^n$ is a tractable nonempty 0-1 polytope for $i=1,\ldots,k$, then the binary all-different problem is polynomially solvable. \end{theorem} \begin{proof} For each $i=1,\ldots,k$, let $S_i$ be the solution set of the $k$-best problem (Def.~\ref{def_kbest}) for $P_i$ and $c_i$. Observe that $|S_i|\leq k$. Now, consider the bipartite graph $G=(S\cup R,E)$, where $S:=\cup_{i=1}^k S_i$ and $R:=\{1,\ldots,k\}$. For each $v\in S$ and $i\in R$, we include the edge $\{v,i\}$ in $E$ if and only if $v\in S_i$. Finally, for each $\{v,i\}\in E$, we set $w_{vi}:=c_i^\top v$. We claim that $(\mathcal P)$ reduces to finding an $R$-matching in $G$ of minimum weight with respect to $w$. It is straightforward to verify that an $R$-matching in $G$ defines a feasible solution to $(\mathcal P)$ of equal value. Thus, it is enough to show that if $(\mathcal P)$ is feasible, then there exists an $R$-matching with the same optimal value. Indeed, let $(x_1,\ldots,x_k)$ be an optimal solution to $(\mathcal P)$ that does not define an $R$-matching, that is, such that $x_i\notin S_i$ for some $i=1,\ldots,k$. Then, we must have $|\VV(P_i)|>k$ and $|S_i|=k$.
This latter condition and $x_i\notin S_i$ imply the existence of $v\in S_i$ such that $v\neq x_j$ for all $j=1,\ldots,k$. Furthermore, by the definition of $S_i$, we also have $c_i^\top v\leq c_i^\top x_i$. Therefore, the vector $(x_1,\ldots,x_{i-1},v,x_{i+1},\ldots,x_k)$ is an optimal solution to $(\mathcal P)$ having its $i$-th subvector in $S_i$. Iteratively applying the above reasoning to all components, we obtain an optimal solution to $(\mathcal P)$ given by an $R$-matching, as desired. \end{proof} \section{Extension to integral polytopes}\label{integral} In this section, we generalize the forbidden-vertices problem to integral polytopes, that is, to polytopes having integral extreme points, even allowing the removal of points that are not vertices. We show that for an important class of integral polytopes the resulting problem is tractable. For an integral polytope $P\subseteq\RR^n$ and $\rem\subseteq P\cap\ZZ^n$, we define $\forb_I(P,\rem):=\conv((P\cap\ZZ^n)\setminus \rem)$. \begin{definition} Given an integral polytope $P\subseteq\RR^n$, a set $\rem\subseteq P\cap\ZZ^n$ of integral vectors, and a vector $c\in\RR^n$, the forbidden-vectors problem asks to either assert $(P\cap\ZZ^n)\setminus \rem=\emptyset$, or to return a minimizer of $c^\top x$ over $(P\cap\ZZ^n)\setminus \rem$ otherwise. \end{definition} Given vectors $l,u\in\RR^n$ with $l\leq u$, we denote $[l,u]:=\{x\in\RR^n|\ l_i\leq x_i\leq u_i, i=1,\ldots,n\}$. We call these sets boxes. \begin{definition} An integral polytope $P\subseteq\RR^n$ is box-integral if for any pair of vectors $l,u\in\ZZ^n$ with $l\leq u$, the polytope $P\cap[l,u]$ is integral. \end{definition} Polytopes defined by a TU matrix and an integral right-hand side, or by a box-TDI system, are examples of box-integral polytopes. Further note that if $P$ is tractable and box-integral, so is $P\cap[l,u]$. When both conditions are met, we say that $P$ is box-tractable.
With arguments analogous to those of the 0-1 case, we can verify the following result. \begin{theorem}\label{vectors} If $P\subseteq\RR^n$ is a box-tractable polytope, then, given a list $\rem\subseteq P\cap\ZZ^n$, the forbidden-vectors problem is polynomially solvable. Moreover, $$\xc(\forb_I(P,\rem))\leq 2n|\rem|(\xc(P)+1).$$ \end{theorem} \begin{proof} Since $P$ is bounded, it is contained in a box. Without loss of generality, and to simplify the exposition, we may assume that $P\subseteq [0,r-1]^n$ for some $r\geq 2$. As in the 0-1 case, we first address the case $P=[0,r-1]^n$, for which we provide two extended formulations for $\forb_I(P,\rem)$ involving $\mathcal O(n|\rem|)$ variables and constraints. The first extended formulation relies on the mapping $\phi(x):=\sum_{i=1}^n r^{i-1}x_i$ for $x\in [0,r-1]^n$, which defines a bijection with $\{0,\ldots,r^n-1\}$. Letting $K_r(a,b):=\{x\in\{0,\ldots,r-1\}^n|\ a\leq \phi(x)\leq b\}$, we have that $\forb_I(P,\rem)$ is the convex hull of the union of at most $|\rem|+1$ sets of the form $K_r(a,b)$. Since $\conv(K_r(a,b))$ has $\mathcal O(n)$ facets \cite{gupten}, by disjunctive programming we obtain an extended formulation for $\forb_I(P,\rem)$ having $\mathcal O(n|\rem|)$ inequalities. For the second extended formulation, let $\rem'$ denote the projection of $\rem$ onto the first $n-1$ coordinates and set $\widehat \rem:= (\rem'\times\{0,\ldots,r-1\})\setminus \rem$. Along the lines of Proposition~\ref{recursion}, we have $$\{0,\ldots,r-1\}^n\setminus \rem=\left[\left(\{0,\ldots,r-1\}^{n-1}\setminus \rem'\right)\times\{0,\ldots,r-1\}\right]\cup \widehat \rem.$$ Although $\widehat \rem$ can have up to $r|\rem|$ elements, we also see that $\widehat \rem$ is the union of at most $2|\rem|$ sets of the form $v\times\{\alpha,\ldots,\beta\}$ for $v\in \rem'$ and integers $0\leq \alpha\leq \beta\leq r-1$.
More precisely, for each $v\in \rem'$, there exist integers $0\leq \alpha^v_1\leq \beta^v_1 < \alpha^v_2 \leq \beta^v_2 < \cdots < \alpha^v_{q_v}\leq\beta^v_{q_v}\leq r-1$ such that $$\widehat \rem=\bigcup_{v\in \rem'}\bigcup_{l=1}^{q_v}v\times\{\alpha^v_l,\ldots,\beta^v_l\}$$ and $\sum_{v\in\rem'}q_v\leq 2|\rem|$. Therefore, $\conv(\widehat \rem)$ can be described with $\mathcal O(|\rem|)$ inequalities. Then a recursive construction of an extended formulation for $\forb_I(P,\rem)$ is analogous to the binary case and involves $\mathcal O(n|\rem|)$ variables and constraints. In order to address the general case, we first show how to cover $\{0,\ldots,r-1\}^n\setminus \rem$ with boxes. For each $i=1,\ldots,n$, let $\rem^i$ be the projection of $\rem$ onto the first $i$ components and let $\widehat \rem^i:=(\rem^{i-1}\times\{0,\ldots,r-1\})\setminus\rem^i$, where $\widehat \rem^1:=\{0,\ldots,r-1\}\setminus \rem^1$. Working the recursion backwards yields $$\{0,\ldots,r-1\}^n\setminus \rem=\bigcup_{i=1}^n\left[\widehat \rem^i\times\{0,\ldots,r-1\}^{n-i}\right].$$ Combining the last two expressions, we arrive at $$\{0,\ldots,r-1\}^n\setminus\rem=\bigcup_{i=1}^n\bigcup_{v\in \rem^{i-1}}\bigcup_{l=1}^{q_v}v\times\{\alpha^v_l,\ldots,\beta^v_l\}\times\{0,\ldots,r-1\}^{n-i}.$$ The right-hand-side defines a family $\mathcal B$ of at most $2n|\rem|$ boxes in $\RR^n$, yielding $$\{0,\ldots,r-1\}^n\setminus\rem=\bigcup_{[l,u]\in\mathcal B}[l,u]\cap\ZZ^n.$$ Finally, if $P\subseteq[0,r-1]^n$, then $$(P\cap\ZZ^n)\setminus \rem= (P\cap\ZZ^n)\cap(\{0,\ldots,r-1\}^n\setminus\rem) =\bigcup_{[l,u]\in\mathcal B}P\cap[l,u]\cap\ZZ^n.$$ Moreover, if $P$ is box-tractable, then $$\forb_I(P,\rem)=\conv\left(\bigcup_{[l,u]\in\mathcal B}\conv\left(P\cap[l,u]\cap\ZZ^n\right)\right)=\conv\left(\bigcup_{[l,u]\in\mathcal B}P\cap[l,u]\right),$$ where each term within the union is a tractable set. 
\end{proof} The $k$-best problem and the binary all-different problem can be extended to the case of integral vectors as follows. \begin{definition} Given a nonempty integral polytope $P\subseteq\RR^n$, a vector $c\in\RR^n$, and a positive integer $k$, the integral $k$-best problem is to either assert $|P\cap\ZZ^n|\leq k$ and return $P\cap\ZZ^n$, or to return $v_1,\ldots,v_k\in P\cap\ZZ^n$, all distinct, such that $\max\{c^\top v_i|\ i=1,\ldots,k\}\leq \min\{c^\top v|\ v\in (P\cap\ZZ^n)\setminus\{v_1,\ldots,v_k\}\}$. \end{definition} \begin{definition} Given a positive integer $k$, nonempty integral polytopes $P_1,\ldots,P_k$ in $\RR^n$, and vectors $c_1,\ldots,c_k\in\RR^n$, the integral all-different problem is to solve \begin{eqnarray*} (\mathcal P)\ \min & \sum_{i=1}^k c_i^\top x_i\\ \st& x_i\in P_i\cap\ZZ^n & i=1,\ldots,k\\ &x_i\neq x_j &1\leq i<j\leq k. \end{eqnarray*} \end{definition} The above problems can be shown to be polynomially solvable if the underlying polytopes are box-tractable. \textbf{Acknowledgments} We thank Marc Pfetsch for pointing out the example in Proposition~\ref{single}. This research has been supported in part by the Air Force Office of Scientific Research (Grant \#FA9550-12-1-0154) and by Deutsche Forschungsgemeinschaft (KA 1616/4-1). \bibliographystyle{amsplain}
## The Annals of Applied Probability

### Universality for one-dimensional hierarchical coalescence processes with double and triple merges

#### Abstract

We consider one-dimensional hierarchical coalescence processes (in short HCPs) where two or three neighboring domains can merge. An HCP consists of an infinite sequence of stochastic coalescence processes: each process occurs in a different "epoch" and evolves for an infinite time, while the evolutions in subsequent epochs are linked in such a way that the initial distribution of epoch n+1 coincides with the final distribution of epoch n. Inside each epoch a domain can incorporate one of its neighboring domains or both of them if its length belongs to a certain epoch-dependent finite range.

Assuming that the distribution at the beginning of the first epoch is described by a renewal simple point process, we prove limit theorems for the domain length and for the position of the leftmost point (if any). Our analysis extends the results obtained in [Ann. Probab. 40 (2012) 1377–1435] to a larger family of models, including relevant examples from the physics literature [Europhys. Lett. 27 (1994) 175–180, Phys. Rev. E (3) 68 (2003) 031504]. It reveals the presence of a common abstract structure behind models which are apparently very different, thus leading to very similar limit theorems. Finally, we give here a full characterization of the infinitesimal generator for the dynamics inside each epoch, thus allowing us to describe the time evolution of the expected value of regular observables in terms of an ordinary differential equation.

#### Article information

Source: Ann. Appl. Probab., Volume 24, Number 2 (2014), 476–525.
Dates: First available in Project Euclid: 10 March 2014
https://projecteuclid.org/euclid.aoap/1394465363
Digital Object Identifier: doi:10.1214/12-AAP917
Mathematical Reviews number (MathSciNet): MR3178489
Zentralblatt MATH identifier: 1311.60055
Subjects: Primary: 60G55 (Point processes); 60B10 (Convergence of probability measures)

#### Citation

Faggionato, A.; Roberto, C.; Toninelli, C. Universality for one-dimensional hierarchical coalescence processes with double and triple merges. Ann. Appl. Probab. 24 (2014), no. 2, 476–525. doi:10.1214/12-AAP917.

#### References

- [1] Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York.
- [2] Bray, A. J., Derrida, B. and Godrèche, C. (1994). Nontrivial algebraic decay in a soluble model of coarsening. Europhys. Lett. 27 175–180.
- [3] Carr, J. and Pego, R. (1992). Self-similarity in a coarsening model in one dimension. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 436 569–583.
- [4] Daley, D. J. and Vere-Jones, D. (1988). An Introduction to the Theory of Point Processes. Springer, New York.
- [5] Derrida, B., Bray, A. J. and Godrèche, C. (1994). Nontrivial exponents in the zero temperature dynamics of the 1d Ising and Potts model. J. Phys. A 27 L357–L361.
- [6] Derrida, B., Godrèche, C. and Yekutieli, I. (1990). Stable distributions of growing and coalescing droplets. Europhys. Lett. 12 385–390.
- [7] Derrida, B., Godrèche, C. and Yekutieli, I. (1991). Scale invariant regime in the one dimensional models of growing and coalescing droplets. Phys. Rev. A (3) 44 6241–6251.
- [8] Eisinger, S. and Jäckle, J. (1991). A hierarchically constrained kinetic Ising model. Z. Phys. B 84 115–124.
- [9] Faggionato, A., Martinelli, F., Roberto, C. and Toninelli, C. (2012). Universality in one-dimensional hierarchical coalescence processes. Ann. Probab. 40 1377–1435.
- [10] Faggionato, A., Martinelli, F., Roberto, C. and Toninelli, C. (2012). Aging through hierarchical coalescence in the East model. Comm. Math. Phys. 309 459–495.
- [11] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, 2nd ed. Wiley Series in Probability and Mathematical Statistics 2. Wiley, New York.
- [12] Franken, P., König, D., Arndt, U. and Schmidt, V. (1982). Queues and Point Processes. Wiley, Chichester.
- [13] Gallay, T. and Mielke, A. (2003). Convergence results for a coarsening model using global linearization. J. Nonlinear Sci. 13 311–346.
- [14] Garcia, N. L. and Kurtz, T. G. (2006). Spatial birth and death processes as solutions of stochastic equations. ALEA Lat. Am. J. Probab. Math. Stat. 1 281–303.
- [15] Liggett, T. M. (2005). Interacting Particle Systems. Grundlehren der mathematischen Wissenschaften 276. Springer, Berlin.
- [16] Preston, C. (1975). Spatial birth-and-death processes. In Proceedings of the 40th Session of the International Statistical Institute (Warsaw, 1975), Vol. 2. Invited Papers 46 371–391, 405–408.
- [17] Privman, V. (1997). Nonequilibrium Statistical Physics in One Dimension. Cambridge Univ. Press, Cambridge.
- [18] Seppäläinen, T. Translation invariant exclusion processes. Available at http://www.math.wisc.edu/~seppalai/excl-book/ajo.pdf.
- [19] Sollich, P. and Evans, M. R. (2003). Glassy dynamics in the asymmetrically constrained kinetic Ising chain. Phys. Rev. E (3) 68 031504.
(15, 36)

A = 15
B = 36

Sum: 51
Difference: 21
A × B: 540
A ÷ B: 0.416667
B ÷ A: 2.4
GCD: 3
LCM: 180
Average: 25.5

Combination: ${}_{36}C_{15} = 5567902560$

Sum of squares: $15^2 + 36^2 = 1521$

Norm: $\sqrt{15^2 + 36^2} = 39.0$

Geometric mean: $\sqrt{15 \times 36} = 23.2379000772445$

Stirling number of the first kind: $S(15,\ 36) = 0$
Stirling number of the second kind: $S(15,\ 36) = 0$
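These quantities can be double-checked with a few lines of Python using only the standard library:

```python
from math import comb, gcd, sqrt

a, b = 15, 36
assert a + b == 51 and abs(a - b) == 21 and a * b == 540
assert gcd(a, b) == 3
assert a * b // gcd(a, b) == 180        # LCM via GCD
assert comb(36, 15) == 5567902560       # binomial coefficient
assert a**2 + b**2 == 1521
assert sqrt(a**2 + b**2) == 39.0        # (15, 36, 39) is 3 * (5, 12, 13)
print(sqrt(a * b))                      # geometric mean, ~23.2379
```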
Find the closest Cartridge World in your area. Cartridge World may have multiple locations within Brooklyn, NY. It is smart to call before you leave. Look through our site to find Discount Codes. 2068 Flatbush Ave., Brooklyn, NY 11234.
# Equivalence of Definitions of P-adic Integer/Definition 2 Implies Definition 1

## Theorem

Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers for some prime $p$.

Let $x \in \Q_p$ be such that the canonical expansion of $x$ contains only non-negative powers of $p$.

Then:

$\norm x_p \le 1$

## Proof

Let the canonical expansion of $x$ contain only non-negative powers of $p$.

That is:

$x = \ds \sum_{n \mathop = 0}^\infty d_n p^n : \forall n \in \N : 0 \le d_n < p$

#### Case 1: $\forall n \in \N : d_n = 0$

Let $\forall n \in \N : d_n = 0$.

Then $x = 0$.

Hence:

$\norm x_p = 0 < 1$

$\Box$

#### Case 2: $\exists n \in \N : d_n > 0$

Let $\exists n \in \N : d_n > 0$.

Let:

$l = \min \set {i : i \ge 0 \land d_i \ne 0}$

Hence $l \ge 0$ and $\norm x_p = p^{-l}$.

Thus:

$\ds \norm x_p = p^{-l} \le p^0 = 1$

$\blacksquare$
The pursuit of 'happyness'

Happiness is a fallacy. Yes, you read that right. Here's the latest in our series on contentment.

Image credit: Drew Coffman, used under a Creative Commons license.

In 2006, Will Smith starred in a movie called The Pursuit of Happyness. Will Smith plays the character of Chris Gardner, a struggling salesman who uses his life savings to invest in bone scanning devices. While he does make something of a living from the sale of the devices, it's never quite enough to raise his financial position, which puts a strain on his marriage to his wife Linda. The movie then follows Chris' journey as he overcomes homelessness, the weight of becoming a single father to his son and the bittersweet feeling of being in the role of your dreams that pays very little. After six months of pretence – he never wore his financial or marital woes on his face or gave his colleagues any reason to believe he was a struggling single father – he wins the full-time position and ultimately becomes happy.

A lot of us can relate to similar feelings of lowliness, inadequacy or despondence. For the last week or so I felt like a lot of the things I'm desperately trying to attain were out of my reach. I don't know how many of you have played that awfully cruel – but hilarious – trick on babies and toddlers where you give them something and just as they reach out to take the item from you, you suddenly take it away. I felt exactly like that. Life was offering me everything I had ever envisioned on a silver platter and just as I am about to reach out for it, the platter would disappear. I felt like the weight of the world was on my shoulders and I found myself trying hard to conjure happiness. This is typical of millennials. We're told that happiness is the only thing worth chasing.
From "5 ways to be happy", "quit your job if it's not making you happy" and "divorce your spouse if they're not making you happy", happiness is the only thing worth dying for. What if I told you there's something far better than happiness? What if I told you that happiness just seems like a lot of work, especially for something that dissipates so quickly? Would you believe me? Would you stop trying to be happy?

Happiness is a fallacy. It took me a long time to come to terms with this truth because it goes against everything society subscribes to, but happiness is a fallacy. The saddest thing about happiness is the disappointment you feel when it can't be replicated, and so you revert to drowning your sorrows in a sea of self-pity. Happiness is not constant. It's the silent killer, the drug with the least recorded side effects. Contentment is the real deal.

Paul said to the church in Philippi (Philippians 4:10-13): "I am not saying this because I am in need, for I have learned to be content whatever the circumstances. I know what it is to be in need, and I know what it is to have plenty. I have learned the secret of being content in any and every situation, whether well fed or hungry, whether living in plenty or in want. I can do all this through him who gives me strength."

I no longer feel the need to be happy because contentment is soul-satisfying. Contentment isn't the gateway to complacency or an excuse for laziness. Contentment is an appreciation for where you are and being satisfied with the journey up until that point. So in place of happiness, I chose contentment. So here's to life, love and the pursuit of contentment.
Cristine Edusi // 13th of September, 2016 // Life, Society // contentment, culture, faith, happiness, society, theology

Written by Cristine Edusi // Follow Cristine on Twitter // The Promiscuous Pen. Cristine, who doesn't mind being called Cris now (it took years to get to that point; it always felt too boyish), is a writer who dabbles in relationships, politics and Christianity. She is a proud mother to her blog and to #PenTalk, a series of debates she hosts every quarter. Burgers and a good pair of heels are like gold to her.
Q: Best way to handle unused function arguments Suppose a user gives me a list with a variable number of functions that have a fixed number of arguments a, b, and c. All with fixed type. Sometimes all the arguments are used, but sometimes not. For example: def two_list_sum_mult(a: List, b: List, c: int): """ Sums the elements on each list and multiplies it by a constant. """ return c * (sum(a) + sum(b)) def list_sum_mult(a: List, b: List, c: int): """ Sums the elements on list a and multiplies it by a constant. """ return c * sum(a) def list_sum_reciprocals(a: List, b: List, c: int): """ Returns the sum of the reciprocals of each element of the list a. """ return sum([1/x for x in a]) The user passes a list of functions, and the arguments to my function. Then, my function loops over all the functions, computes the result, and then computes something with the result. For example: def function_sum(functions: List, a: List, b: List, c: int): """ Computes all the functions and adds the results. """ total = 0 for f in functions: total += f(a, b, c) return total What is the best way to handle the arguments of the functions when they are not used? In other words: is it ok to have unused arguments for functions, or is there a better way to do all of this? A: It is okay to leave unused arguments, however it may be misleading and some linters would complain. Another way you could make it clearer that the variables are not used is using _ as the variable name or prepending _ to the variable name. For example: def two_list_sum_mult(a: List, b: List, c: int): """ Sums the elements on each list and multiplies it by a constant. """ return c * (sum(a) + sum(b)) # replace `b` with `_` or `_b` def list_sum_mult(a: List, _, c: int): """ Sums the elements on list a and multiplies it by a constant. 
""" return c * sum(a) # as you only care about the first argument, you can ignore the others with `*_` def list_sum_reciprocals(a: List, *_): """ Returns the sum of the reciprocals of each element of the list a. """ return sum([1/x for x in a]) A: It's okay to have unused arguments; alternatively, you could give those arguments a default value.
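A further option, when you cannot control the signatures of the user-supplied functions, is to inspect each function and pass it only the arguments it actually declares. A sketch of that idea (the helper name `call_with_accepted_args` is our own, and it assumes plain named parameters rather than `*args`/`**kwargs`):

```python
import inspect

def call_with_accepted_args(f, **available):
    """Call f, passing only the keyword arguments its signature declares."""
    params = inspect.signature(f).parameters
    accepted = {name: value for name, value in available.items() if name in params}
    return f(**accepted)

def list_sum(a):
    return sum(a)

def scaled_sum(a, c):
    return c * sum(a)

# Each function receives only the arguments it declares.
total = sum(call_with_accepted_args(f, a=[1, 2], b=[3], c=10)
            for f in [list_sum, scaled_sum])
# total == 3 + 30 == 33
```

This keeps the function definitions free of placeholder parameters, at the cost of a little reflection on each call.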
The Alto de Cildad is a small rise, 242 m above sea level, lying 4.5 km from the Cantabrian Sea in a straight line. The road that crosses this pass is the CA-353, which links the municipalities of Reocín to the east and Alfoz de Lloredo to the west. Its name derives from Monte Cildad, whose summit rises a few metres from this spot.

Viewpoints

At the Alto de Cildad there is a viewpoint offering a broad view of the Pasiego mountains and part of the mountains of the Asón. In addition, climbing this pass from the Alfoz de Lloredo side, one finds the Los Pandos viewpoint, with views of the Cantabrian Sea and of the forests surrounding the village of Novales.

References

External links

Alto de Cildad via San Pedro de Rudaguera
Red de carreteras de Cantabria
The ImageFileIO module can be used to read an image from a socket, or any other stream device. This module is deprecated. New code should use the Parser class in the ImageFile module instead. Adds buffering to a stream file object, in order to provide seek and tell methods required by the Image.open method. The stream object must implement read and close methods.
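A minimal sketch of the recommended Parser-based approach follows; here the "stream" is simulated with an in-memory PNG, whereas with a real socket you would feed whatever each `recv` call returns:

```python
from io import BytesIO
from PIL import Image, ImageFile

# Build a small PNG in memory to stand in for bytes arriving on a socket.
buf = BytesIO()
Image.new("RGB", (8, 8), "red").save(buf, format="PNG")
data = buf.getvalue()

# Feed the data to the parser incrementally, as it would arrive.
parser = ImageFile.Parser()
for i in range(0, len(data), 16):
    parser.feed(data[i:i + 16])
image = parser.close()  # raises if the data does not form a complete image
```

Unlike the deprecated buffering approach, the parser never needs to seek on the underlying stream.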
# Help with Supersymmetry

1. Jun 6, 2006

### AlphaNumeric

It's been bugging me for ages, but I cannot seem to show the following supersymmetry algebra:

$$\delta_{\epsilon} X^{\mu} = \bar{\epsilon}\psi^{\mu}$$
$$\delta_{\epsilon} \psi^{\mu} = \rho\cdot\partial X^{\mu}\,\epsilon$$

Using these, show that

$$[\delta_{\epsilon_{1}},\delta_{\epsilon_{2}}]X^{\mu} = 2\bar{\epsilon}_{1}\rho^{\alpha}\epsilon_{2}\partial_{\alpha}X^{\mu}$$
$$[\delta_{\epsilon_{1}},\delta_{\epsilon_{2}}]\psi^{\mu} = 2\bar{\epsilon}_{1}\rho^{\alpha}\epsilon_{2}\partial_{\alpha}\psi^{\mu}$$

using $\rho \cdot \partial \psi^{\mu}=0$ and the fact that $\epsilon$ is a Grassmann spinor.

I can do the first one but I cannot do the second one. Every textbook I check just says "It can be shown that..." but I can't actually show it!
package http

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"

	exchange "github.com/hf-mush/exchange"
	"github.com/labstack/echo"
)

// Server accepts complete HTTP requests and returns responses.
type Server struct{}

// Start begins accepting HTTP requests on the given port.
func (s *Server) Start(port int64) error {
	e := echo.New()
	e.GET("/api", func(c echo.Context) error {
		s.requestLog(c)
		information := exchange.GetInformation()
		return c.JSON(http.StatusOK, information)
	})
	e.GET("/api/user", func(c echo.Context) error {
		s.requestLog(c)
		user := exchange.GetUser()
		return c.JSON(http.StatusOK, user)
	})
	e.GET("/api/rates", func(c echo.Context) error {
		s.requestLog(c)
		rates := exchange.GetRates()
		return c.JSON(http.StatusOK, rates)
	})
	return e.Start(":" + strconv.FormatInt(port, 10))
}

// requestLog records the method and URI of an incoming request.
func (s *Server) requestLog(c echo.Context) {
	log.Println("info: " + c.Request().Method + " " + c.Request().RequestURI)
}

// responseLog records a response body as JSON.
func (s *Server) responseLog(response exchange.Response) {
	resJSON, err := json.Marshal(response)
	if err != nil {
		log.Println("error: " + err.Error())
		return
	}
	log.Println("info: " + string(resJSON))
}
Muhamedyev, S. Iskakov, P. Gricenko, K. Yakunin, Y. Kuchin. Integration of Results from Recognition Algorithms and its Realization at the Uranium Production Process. Proceedings of the 8th IEEE International Conference on Application of Information and Communication Technologies (AICT2014), Astana, Kazakhstan, 15-17 October 2014, pp. 188-191, ISBN 987-1-4799-4120-92, IEEE Catalog Number CFP1456H-PRT.

Trained systems such as artificial neural networks (ANN) can be used for the interpretation of electric logging data. Using the ANN algorithm alone yields agreement between interpreted data and experimental results of 66% to 73% on certain samplings, but applying additional recognition algorithms and integrating their results can improve recognition quality by 1-3%. The problem of integrating the results of classification algorithms is formulated, and its realization is shown as pseudocode. The paper describes the recognition algorithms used in the research, the recognition results, the integration of those results, and the realization of the integration algorithm.

Index Terms—lithology, machine learning, ensemble of algorithms, uranium deposit, pseudocode.
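The abstract above describes integrating the outputs of several recognition algorithms, with the realization given as pseudocode in the paper. As a rough illustration of one such integration scheme (a plain majority vote over per-sample labels, which is an assumption here, not the authors' algorithm):

```python
from collections import Counter

def integrate_predictions(predictions):
    """Combine per-sample class labels from several classifiers by
    majority vote. `predictions` is a list of equal-length lists, one
    per classifier. Ties go to the label seen first."""
    combined = []
    for sample_votes in zip(*predictions):
        label, _ = Counter(sample_votes).most_common(1)[0]
        combined.append(label)
    return combined

# Three hypothetical classifiers labelling five lithology samples
ann    = ["sand", "clay", "sand", "silt", "clay"]
knn    = ["sand", "sand", "sand", "silt", "silt"]
forest = ["clay", "clay", "sand", "sand", "clay"]

print(integrate_predictions([ann, knn, forest]))
# majority per column: sand, clay, sand, silt, clay
```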
SEC and Mobley Settle, Lifetime Bar

By Staff Writer | May 27, 2003 at 08:00 PM

WASHINGTON (HedgeWorld.com)–The Securities and Exchange Commission settled its civil action against David M. Mobley, founder of the fictitious but aptly named "Predator System" of index trading. Pending the approval of the federal district court, the settlement enjoins Mr. Mobley from violating the antifraud provisions of the federal securities laws and bars him permanently from the investment advisory business. It also orders him to disgorge US$48.96 million, but the SEC's statement May 20 observed that this obligation has been satisfied by his previous disgorgement of that amount and other assets to the court-appointed receiver, Otto Obermaier.

Beginning in 1992 and continuing into 2000, Mr. Mobley told potential investors that he would invest their capital in hedge funds that produced average annual returns of 50%, employing what he called his "Predator System" of index trading. He raised more than US$140 million from 170 investors in this way. He was not a licensed stockbroker, and he engaged in no index trading. He spent their money on a lavish lifestyle, paying the earlier investors with the money raised from later investors, in classic pyramid style.

In February 2000, at the SEC's request, the U.S. District Court for the District of Florida froze his assets and those of the suspect entities: Maricopa Investment Fund Ltd., Maricopa Index Hedge Fund Ltd., Maricopa Financial Corp. and Ensign Trading Corp. Criminal charges followed and, on July 20, 2001, Mr. Mobley pleaded guilty to mail fraud, wire fraud, money laundering and tax evasion.
With their money, he bought himself a US$400,000 home in Naples, Fla., a US$1 million vacant lot in the Quail West section of Naples and a US$40,000 diamond ring, prosecutors said at the time of his plea. Mr. Mobley is currently in prison, serving his sentence of 17.5 years. CFaille@HedgeWorld.com
A critique of the biggest game of all time Rockstar's hits and misses with Red Dead Redemption 2, the game you (and/or your kids) are possibly losing an unhealthy amount of sleep and spare time to. I remember Rockstar's first rub up against the real world back in 2001. It was Grand Theft Auto 3. If the player carjacked a car, which they would have to, because they needed to drive a car and people aren't just willing to give them up, occasionally the driver would pull you out and take their car back. Just like the real world: If you take someone's car, they'll probably want it back. Cut to Red Dead Redemption 2. If you're unfamiliar with it, it's basically Grand Theft Auto - you run, you gun, you do criminal things - but set in the Wild West. The game has been more or less the success that Rockstar wanted (aside from the reports on the dodgy working conditions of many of their staff). It's on track to sell about twenty million copies by the holiday season - that's $725 million, which is a lot - and a bunch of your friends have probably taken suspicious sick days close to the weekend in order to play it. Most importantly, if you steal someone's horse, they will one hundred percent want it back. If there's anything to be gleaned from Red Dead Redemption 2, and I've spent upwards of twenty hours on this game, it's that Rockstar has taken what has, up until now, been a selective lean towards realism and gone full horizontal on it. Every artform does a thousand-metre sprint towards realism eventually, and it's understandable. What better way to engage with an audience than trying to replicate their actual life? Fantasy is looking through stained glass, realism is looking through the window of someone's living room. For theatre, it was kitchen-sink drama. For painting, it was paintings that looked like people. For reality TV, it was Married at First Sight. Look, not all art is equal.
As graphical power improves exponentially - remember when Mario was a sixteen-pixel dude and now he's so detailed we can see his nipples? - so does the form's ability to mimic and recreate realism. And if our graphics are at that point, why shouldn't the gameplay follow? The rub, and the problem, is that reality isn't necessarily fun. Red Dead Redemption 2 runs up against this particular problem again, and again, throughout its fifty-plus hours. The graphics are showing us how dirty the Wild West really was - it's grimy, people bleed, your protagonist's beard and hair can grow to true Grizzly Man proportions. It's real as real gets, man. Unfortunately, the gameplay seems to want to replicate that experience as much as possible as well. The controls are awkward, and the systems are designed so that when you're forced to take a break from the story - already a slow-moving steam train of a thing - to hunt for your fellow Westerners and keep them alive, it feels as arduous as doing a chore in real life. Rockstar has been crawling towards this depiction of reality slowly over the past twenty years, but for the first time they've finally managed to achieve a game that captures how frustrating it is to have to do chores for other people. But for all it touts to be as close to a real experience as possible, you can still see the stitches. This is a game where you build a bond with your horse, and magically that horse learns to drift. We all know full well that horses hate all human beings with a passion, and also that they cannot drift. It's a video game. You're holding a controller. No amount of wink and nudge towards reality is going to reconcile that. In fact what Red Dead Redemption 2 ends up capturing, quite successfully and with more reward, is cinema.
It's the closest I've felt to playing a Terrence Malick film, and there are moments where the image of a man riding his horse alone through the wilderness had me in awe at its beauty in a way that few games have really managed. It captures the battle between man and nature, whether that's external nature or the nature of his own inner varmint, in a beautiful fashion. That is until reality, or Rockstar's version of it, kicks in and you run your horse into a tree and get thrown twenty metres. Red Dead Redemption 2's main problem, and this is one emblematic throughout the industry and the artform, is that it wants to eat dessert before dinner. Being realistic and being cinematic are two qualities that can work well, even together, but if they get in the way of your game actually being fun, then what's the point?
\section{Correctness of the Operational Semantics} \begin{figure} [t] \small \centering \begin{mathpar} \inferrule[spawn*]{\tr \mbox{ fresh}\quad \mathsf{P}(j) = \ibegin; \mathsf{Body}; \icommit; \mathsf{S} \quad \vec{\mathsf{B}}(j) = \epsilon}{ \hist,\vec{\gamma},\vec{\mathsf{B}},\mathsf{P} \Rightarrow \hist \oplus_j \tup{\tr,\emptyset,\emptyset},\vec{\gamma}[j\mapsto \emptyset],\vec{\mathsf{B}}[j\mapsto \mathsf{Body}],\mathsf{P}[j\mapsto \mathsf{S}] } \inferrule[if-true]{\varphi(\vec{x})[x\mapsto \vec{\gamma}(j)(x): x\in\vec{x}]\mbox{ true} \\ \vec{\mathsf{B}}(j) = \iif{\phi(\vec{x})}{\mathsf{Instr}};\mathsf{B} }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist,\vec{\gamma},\vec{\mathsf{B}}[j\mapsto \mathsf{Instr};\mathsf{B}],\mathsf{P} } \inferrule[if-false]{\varphi(\vec{x})[x\mapsto \vec{\gamma}(j)(x): x\in\vec{x}]\mbox{ false} \\ \vec{\mathsf{B}}(j) = \iif{\phi(\vec{x})}{\mathsf{Instr}};\mathsf{B} }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist,\vec{\gamma},\vec{\mathsf{B}}[j\mapsto \mathsf{B}],\mathsf{P} } \inferrule[write]{v = \vec{\gamma}(j)(x)\quad \id\mbox{ fresh} \quad \vec{\mathsf{B}}(j) = \iwrite(\key,\xvar);\mathsf{B} }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist \oplus_j \wrt[\id]{\key}{\val},\vec{\gamma},\vec{\mathsf{B}}[j\mapsto \mathsf{B}], \mathsf{P} } \inferrule[read-local]{ \wrt{\key}{\val}\mbox{ is the last write on $\key$ in $\tr$}\\ \id\mbox{ fresh } \\ \vec{\mathsf{B}}(j) = \xvar := \iread(\key);\mathsf{B} }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist \oplus_j \rd[\id]{\key}{\val},\vec{\gamma}[(j,\xvar)\mapsto \val],\vec{\mathsf{B}}[j\mapsto \mathsf{B}],\mathsf{P} } \inferrule[read-extern*]{ \vec{\mathsf{B}}(j) = \xvar := \iread(\key);\mathsf{B} \\ \hist=(T,\so,\wro) \\ \tr \mbox{ is the id of the last transaction log in $\so(j)$} \\ \wrt{\key}{\val}\in\writeOp{\tr'}\mbox{ with $\tr'\in \transC{\hist,\vec{\mathsf{B}}}$ and $\tr\neq \tr'$} \\ \id\mbox{ fresh 
}\\ \hist' = (\hist \oplus_j \rd[\id]{\key}{\val}) \oplus \wro(\tr',\rd[\id]{\key}{\val}) }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist',\vec{\gamma}[(j,\xvar)\mapsto \val],\vec{\mathsf{B}}[j\mapsto \mathsf{B}],\mathsf{P} } \end{mathpar} \caption{A baseline operational semantics for $\KVProgs$ programs. Above, $\transC{\hist,\vec{\mathsf{B}}}$ denotes the set of transaction logs in $\hist$ that excludes those corresponding to live transactions, i.e., transaction logs $\tr''\in T$ such that $\tr''$ is the last transaction log in some $\so(j')$ and $\vec{\mathsf{B}}(j')\neq\epsilon$.} \label{fig:op:sem:baseline:complete} \end{figure} This section provides more details about the proof of correctness for our operational semantics defined in Figure~\ref{fig:op:sem}. The complete definition of the baseline semantics is given in Figure~\ref{fig:op:sem:baseline:complete}. The notion of prefix of a tuple $\tup{\hist_2,\co_2}$ is formally defined as follows. For a relation $R\subseteq A\times B$, the restriction of $R$ to $A'\times B'$, denoted by $R\downarrow A'\times B'$, is defined by $\{(a,b): (a,b)\in R, a\in A', b\in B'\}$. For $\hist_1=\tup{T_1, \so_1, \wro_1}$ and $\hist_2=\tup{T_2, \so_2, \wro_2}$, $\tup{\hist_1,\co_1}$ is a \emph{prefix} of $\tup{\hist_2,\co_2}$, denoted by $\tup{\hist_1,\co_1}\leq \tup{\hist_2,\co_2}$, iff $T_1=T_1'\cup\{\tup{t,O,\po}\}$, $T_2=T_2'\cup\{\tup{t,O',\po'}\}$, $T_1'\subseteq T_2'$, $O\subseteq O'$, $\po = \po' \downarrow O\times O$, $\so_1 = \so_2 \downarrow T_1\times T_1$, $\wro_1= \wro_2\downarrow T_1\times \readOp{T_1}$, and $\co_1= \co_2\downarrow T_1\times T_1$. Then, a property $\phi(\tr_2,\alpha)$ used to define an axiom like in (\ref{eq:axiom}) is called \emph{monotonic} iff for every $\tup{\hist_1,\co_1}\leq \tup{\hist_2,\co_2}$, \begin{align*} \forall \tr_2, \forall \alpha.
\tup{\hist_1,\co_1}\models \phi(\tr_2,\alpha) \Rightarrow \tup{\hist_2,\co_2}\models \phi(\tr_2,\alpha). \end{align*} \begin{lemma}\label{lem:prefix} For any monotonic axiom $X$, if $\tup{\hist_1,\co_1}\leq \tup{\hist_2,\co_2}$, then \begin{align*} \tup{\hist_2,\co_2}\mbox{ satisfies } X \Rightarrow \tup{\hist_1,\co_1}\mbox{ satisfies } X \end{align*} \end{lemma} \begin{proof}(Sketch) Given a monotonic axiom, the set of instantiations of $\forall k$, $\forall t_1$, $\forall t_2$, and $\forall \alpha$ from (\ref{eq:axiom}) that satisfy the left-hand side of the entailment in the context of $\tup{\hist_1,\co_1}$ is a subset of the same type of instantiations in the context of $\tup{\hist_2,\co_2}$. Therefore, the $\co$ constraints imposed in the context of $\tup{\hist_1,\co_1}$ (by the right-hand side of the entailment) are a subset of the $\co$ constraints imposed in the context of $\tup{\hist_2,\co_2}$. Since the latter are satisfied (because $\tup{\hist_2,\co_2}$ satisfies $X$), the former are also satisfied and hence, $\tup{\hist_1,\co_1}$ satisfies $X$. \end{proof} Lemma~\ref{lem:prefix} extends straightforwardly to isolation levels defined as conjunctions of axioms (which is the case for all the isolation levels that we are aware of~\cite{DBLP:journals/pacmpl/BiswasE19}). \begin{theorem} For any isolation level $I$ defined by a set of monotonic axioms, \begin{align*} \histOf[I]{\prog} = \{ h \in \histOf[*]{\prog}: h\mbox{ satisfies }I\}. \end{align*} \end{theorem} \begin{proof}(Sketch) For the direction $\subseteq$, let $c_0 c_1\ldots c_n$ be an execution under $\Rightarrow_I$, where $c_n$ is a final configuration. We need to show that the history $\hist_n$ contained in $c_n$ belongs to $\histOf[*]{\prog}$ and that it satisfies $I$. The fact that $\hist_n\in \histOf[*]{\prog}$ is a direct consequence of the fact that $\Rightarrow_I$ is more constrained than $\Rightarrow$.
To prove that $\hist_n$ satisfies $I$, let $c_j$ be the latest configuration in the execution that is obtained from $c_{j-1}$ through an application of \textsc{read-extern}. By the definition of this rule, the history $\hist_j$ in $c_j$ satisfies $I$. Since the write-read relation of $\hist_j$ is identical to that of $\hist_n$, any axiom of the form (\ref{eq:axiom}) satisfied by $\hist_j$ is also satisfied by $\hist_n$ (the set of instantiations of $\forall \tr_1$ and $\forall \alpha$ in (\ref{eq:axiom}) that satisfy the left part of the entailment are the same in $\hist_j$ and $\hist_n$). Therefore, $\hist_n$ satisfies $I$, which concludes this part of the proof. For the reverse, let $\hist=\tup{T, \so, \wro}\in \histOf[*]{\prog}$ that satisfies $I$. Since $\hist$ satisfies $I$, there exists a commit order $\co$ such that $\wro\cup\so\subseteq \co$ and $\tup{h,\co}$ satisfies the axioms defining $I$. We show that there exists an execution $c_0 c_1\ldots c_n$ under $\Rightarrow_I$ where transactions are executed serially in the order defined by $\co$, such that $c_n$ is a final configuration that contains $\hist$. The only difficulty is showing that the \textsc{read-extern} transitions between two configurations $c_j$ and $c_{j+1}$ that add a write-read dependency $(\tr',\rd{\key}{\val})\in\wro$ are enabled even though the transaction log $\tr$ containing $\rd{\key}{\val}$ is ``incomplete'' in the history $\hist_j$ of $c_j$, and $\hist_j$ does not contain transactions committed after $\tr$. This relies on the prefix-closure property in Lemma~\ref{lem:prefix}. Let $\co_j$ be the order in which transactions have been executed until $c_j$. Then, $\tup{\hist_j,\co_j}$ is a prefix of $\tup{\hist,\co}$, and $\tup{\hist_j,\co_j}\models I$ because $\tup{\hist,\co}\models I$. 
\end{proof} \section{Isolation Levels for Key-Value Stores} \label{sec:ax-kv} We present the axiomatic framework introduced in~\cite{DBLP:journals/pacmpl/BiswasE19} for defining isolation levels\footnote{Isolation levels are called consistency models in~\cite{DBLP:journals/pacmpl/BiswasE19}.} in Key-Value stores. Isolation levels are defined as logical constraints, called \emph{axioms}, over \emph{histories}, which are an abstract representation of the interaction between a program and the store in a concrete execution. \subsection{Histories} Programs interact with a Key-Value store by issuing transactions formed of $\textsf{read}$ and $\textsf{write}$ instructions. The effect of executing one such instruction is represented using an \emph{operation}, which is an element of the set \begin{align*} \Op=\set{\rd[\id]{\key}{\val},\wrt[\id]{\key}{\val}: \id\in\OId, \key\in\Keys, \val\in \Val} \end{align*} where $\rd[\id]{\key}{\val}$ (resp., $\wrt[\id]{\key}{\val}$) corresponds to reading a value $\val$ from a key $\key$ (resp., writing $\val$ to $\key$). Each operation is associated with an identifier $\id$ from an arbitrary set $\OId$. We omit operation identifiers when they are not important. \begin{definition} A \emph{transaction log} $\tup{\tr,O, \po}$ is a transaction identifier $\tr$ and a finite set of operations $O$ along with a strict total order $\po$ on $O$, called \emph{program order}. \end{definition} The program order $\po$ represents the order between instructions in the body of a transaction. We assume that each transaction log is well-formed in the sense that if a read of a key $k$ is preceded by a write to $\key$ in $\po$, then it should return the value written by the last write to $\key$ before the read (w.r.t. $\po$). This property is implicit in the definition of every isolation level that we are aware of. For simplicity, we may use the term \emph{transaction} instead of transaction log. 
The set of all transaction logs is denoted by $\mathsf{Tlogs}$. The set of read operations $\rd{\key}{\_}$ in a transaction log $\tr$ that are \emph{not} preceded by a write to $\key$ in $\po$ is denoted by $\readOp{\tr}$. As mentioned above, the other read operations take their values from writes in the same transaction and their behavior is independent of other transactions. Also, the set of write operations $\wrt{\key}{\_}$ in $\tr$ that are \emph{not} followed by other writes to $\key$ in $\po$ is denoted by $\writeOp{\tr}$. If a transaction contains multiple writes to the same key, then only the last one (w.r.t. $\po$) can be visible to other transactions (w.r.t. any isolation level that we are aware of). The extension to sets of transaction logs is defined as usual. Also, we say that a transaction log $\tr$ \emph{writes} a key $\key$, denoted by $\writeVar{\tr}{\key}$, when $\wrt[\id]{\key}{\val}\in \writeOp{\tr}$ for some $\id$ and $\val$. A \emph{history} contains a set of transaction logs (with distinct identifiers) ordered by a (partial) \emph{session order} $\so$ that represents the order between transactions in the same session\footnote{In the context of our programming language, $\so$ would be a union of total orders. This constraint is not important for defining isolation levels.}. It also includes a \emph{write-read} relation (also called read-from) that ``justifies'' read values by associating each read to a transaction that wrote the value returned by the read. \begin{definition} A \emph{history} $\tup{T, \so, \wro}$ is a set of transaction logs $T$ along with a strict partial \emph{session order} $\so$, and a \emph{write-read} relation $\wro\subseteq T\times \readOp{T}$ such that the inverse of $\wro$ is a total function, and if $(\tr,\rd{\key}{\val})\in\wro$, then $\wrt{\key}{\val}\in\tr$, and $\so\cup\wro$ is acyclic. 
\end{definition} To simplify the technical exposition, we assume that every history includes a distinguished transaction log writing the initial values of all keys. This transaction log precedes all the other transaction logs in $\so$. We use $\hist$, $\hist_1$, $\hist_2$, $\ldots$ to range over histories. The set of transaction logs $T$ in a history $\hist=\tup{T, \so, \wro}$ is denoted by $\tlogs{\hist}$. For a key $\key$, $\wro[\key]$ denotes the restriction of $\wro$ to reads of $\key$, \ie, $\wro[\key]=\wro\cap (T\times \{\rd{\key}{\val}\mid \val\in \Val\})$. Moreover, we extend the relations $\wro$ and $\wro[\key]$ to pairs of transactions by $\tup{\tr_1,\tr_2}\in \wro$, resp., $\tup{\tr_1,\tr_2}\in \wro[\key]$, iff there exists a read operation $\rd{\key}{\val}\in \readOp{\tr_2}$ such that $\tup{\tr_1,\rd{\key}{\val}}\in \wro$, resp., $\tup{\tr_1,\rd{\key}{\val}}\in \wro[\key]$. We say that the transaction log $\tr_1$ is \emph{read} by the transaction log $\tr_2$ when $\tup{\tr_1,\tr_2}\in \wro$. \subsection{Axiomatic Framework} \input{def_figs.tex} A history is said to satisfy a certain isolation level if there exists a strict total order $\co$ on its transaction logs, called \emph{commit order}, which extends the write-read relation and the session order, and which satisfies certain properties. These properties, called \emph{axioms}, relate the commit order with the session-order and the write-read relation in the history. 
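The well-formedness conditions on histories (the inverse of $\wro$ is a total function, each read is justified by a write of the value it returns, and $\so\cup\wro$ is acyclic) lend themselves to a direct executable check. A minimal Python sketch; the encoding of transaction logs and the helper names are illustrative, not the paper's notation:

```python
from collections import defaultdict

def is_acyclic(nodes, edges):
    """DFS cycle check over the union so ∪ wro lifted to transactions."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    state = {}  # node -> "active" | "done"

    def visit(n):
        state[n] = "active"
        for m in graph[n]:
            if state.get(m) == "active":
                return False            # back edge: cycle found
            if m not in state and not visit(m):
                return False
        state[n] = "done"
        return True

    return all(visit(n) for n in nodes if n not in state)

def well_formed(txns, so, wro):
    """txns: {tid: {"writes": {key: val}, "reads": [(key, val), ...]}}
    so:   list of (tid, tid) session-order pairs
    wro:  {(reader_tid, key): writer_tid}, i.e., the inverse of the
          write-read relation restricted to external reads."""
    for (reader, key), writer in wro.items():
        # the justifying transaction must write the value that is read
        if txns[writer]["writes"].get(key) is None:
            return False
        if (key, txns[writer]["writes"][key]) not in txns[reader]["reads"]:
            return False
    # every external read must be justified by exactly one write
    for tid, log in txns.items():
        for key, _ in log["reads"]:
            if (tid, key) not in wro:
                return False
    edges = set(so) | {(w, r) for (r, _k), w in wro.items()}
    return is_acyclic(list(txns), edges)

# t0 writes the initial value, t1 overwrites it, t2 reads t1's write
txns = {
    "t0": {"writes": {"x": 0}, "reads": []},
    "t1": {"writes": {"x": 1}, "reads": []},
    "t2": {"writes": {},       "reads": [("x", 1)]},
}
so = [("t0", "t1"), ("t0", "t2")]
wro = {("t2", "x"): "t1"}
print(well_formed(txns, so, wro))
```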
They are defined as first-order formulas\footnote{These formulas are interpreted on tuples $\tup{\hist,\co}$ of a history $\hist$ and a commit order $\co$ on the transactions in $\hist$ as usual.} of the following form: \begin{align} & \forall \key,\ \forall \tr_1\neq \tr_2,\ \forall \alpha.\ \nonumber\\ & \hspace{3mm} \tup{\tr_1,\alpha}\in \wro[\key] \land \writeVar{\tr_2}{\key} \land \phi(\tr_2,\alpha) \implies \tup{\tr_2,\tr_1}\in\co \label{eq:axiom} \end{align} where $\phi$ is a property relating $\tr_2$ and $\alpha$ (i.e., the read or the transaction reading from $\tr_1$) that varies from one axiom to another. Intuitively, this axiom schema states the following: in order for $\alpha$ to read specifically $\tr_1$'s write on $\key$, it must be the case that every $\tr_2$ that also writes $\key$ and satisfies $\phi(\tr_2,\alpha)$ was committed before $\tr_1$. The property $\phi$ relates $\tr_2$ and $\alpha$ using the relations in a history and the commit order. Figure~\ref{consistency_defs} shows the axioms defining three isolation levels: Read Committed, Causal Consistency, and Serializability (see~\cite{DBLP:journals/pacmpl/BiswasE19} for axioms defining Read Atomic, Prefix, and Snapshot Isolation). For instance, $\mathsf{Read\ Committed}$~\cite{DBLP:conf/sigmod/BerensonBGMOO95} requires that every read returns a value written in a committed transaction, and also, that the reads in the same transaction are ``monotonic'', i.e., they do not return values that are older, w.r.t. the commit order, than values read in the past.
While the first condition holds for every history (because of the surjectivity of $\wro$), the second condition is expressed by the axiom $\mathsf{Read\ Committed}$ in Figure~\ref{lock_rc_def}, which states that for any transaction $\tr_1$ writing a key $\key$ that is read in a transaction $\tr$, the set of transactions $\tr_2$ writing $\key$ and read previously in the same transaction (these reads may concern other keys) must precede $\tr_1$ in commit order. For instance, Figure~\ref{rc_example:1} shows a history and a (partial) commit order that does not satisfy this axiom because $\rd{\key_1}{1}$ returns the value written in a transaction ``older'' than the transaction read in the previous $\rd{\key_2}{2}$. \begin{figure} \centering \begin{subfigure}{.23\textwidth} \resizebox{\textwidth}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, semithick, transform shape] \node[draw, rounded corners=2mm] (t1) at (0, 0) {\begin{tabular}{l} $\wrt{\key_1}{1}$ \end{tabular}}; \node[draw, rounded corners=2mm,outer sep=0] (t2) at (0, -1.5) {\begin{tabular}{l} $\wrt{\key_1}{2}$ \\ $\wrt{\key_2}{2}$\end{tabular}}; \node[draw, rounded corners=2mm, minimum width=1.8cm, minimum height=2.5cm] (t3) at (3, -0.75) {}; \node[style={inner sep=0,outer sep=0}] (t3_1) at (3, 0) {\begin{tabular}{l} $\rd{\key_2}{2}$ \end{tabular}}; \node[style={inner sep=0,outer sep=0}] (t3_2) at (3, -1.5) {\begin{tabular}{l} $\rd{\key_1}{1}$ \end{tabular}}; \path (t1) edge node {$\co$} (t2); \path (t3_1) edge node {$\po$} (t3_2); \path (t1) edge[below] node[yshift=-4,xshift=4] {$\wro$} (t3_2); \path (t2) edge node[yshift=-2,xshift=7] {$\wro$} (t3_1); \end{tikzpicture} } \caption{$\mathsf{Read\ Committed}$ violation.} \label{rc_example:1} \end{subfigure} \begin{subfigure}{.23\textwidth} \resizebox{\textwidth}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, semithick, transform shape] \node[draw, rounded corners=2mm,outer sep=0] (t1) at (0, 1.5) 
{$\wrt{\key_1}{1}$}; \node[draw, rounded corners=2mm,outer sep=0] (t2) at (3, 1.5) {\begin{tabular}{l} $\rd{\key_1}{1}$ \\ $\wrt{\key_1}{2}$ \end{tabular}}; \node[draw, rounded corners=2mm,outer sep=0] (t3) at (3, 0) {\begin{tabular}{l} $\rd{\key_1}{1}$ \\ $\rd{\key_2}{1}$ \end{tabular}}; \node[draw, rounded corners=2mm,outer sep=0] (t4) at (0, 0) {\begin{tabular}{l} $\rd{\key_1}{2}$ \\ $\wrt{\key_2}{1}$\end{tabular}}; \path (t1) edge[above] node[yshift=0,xshift=0] {$\wro$} (t2); \path (t1) edge[below] node[yshift=-5,xshift=7] {$\wro$} (t3); \path (t2) edge[above] node[yshift=-6,xshift=-14] {$\wro$} (0,0.58); \path (t4) edge[below] node[yshift=0,xshift=0] {$\wro$} (t3); \end{tikzpicture} } \caption{$\mathsf{Causal}$ violation.} \label{cc_example:1} \end{subfigure} \vspace{-3mm} \caption{Histories used to explain the axioms in Figure~\ref{consistency_defs}.} \label{counter_example:1} \vspace{-3mm} \end{figure} The axiom defining $\mathsf{Causal}$ Consistency~\cite{DBLP:journals/cacm/Lamport78} states that for any transaction $\tr_1$ writing a key $\key$ that is read in a transaction $\tr_3$, the set of $(\wro\cup \so)^+$ predecessors of $\tr_3$ writing $\key$ must precede $\tr_1$ in commit order ($(\wro\cup \so)^+$ is usually called the \emph{causal} order). A violation of this axiom can be found in Figure~\ref{cc_example:1}: the transaction $\tr_2$ writing 2 to $\key_1$ is a $(\wro\cup \so)^+$ predecessor of the transaction $\tr_3$ reading 1 from $\key_1$ because the transaction $\tr_4$, writing 1 to $\key_2$, reads $\key_1$ from $\tr_2$ and $\tr_3$ reads $\key_2$ from $\tr_4$. This implies that $\tr_2$ should precede in commit order the transaction $\tr_1$ writing 1 to $\key_1$, which again, is inconsistent with the write-read relation ($\tr_2$ reads from $\tr_1$). 
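To make the $\mathsf{Read\ Committed}$ axiom concrete, here is a small executable rendering of the history of Figure~\ref{rc_example:1}. The encoding (operation lists per transaction, a map from reads to their justifying writers, a commit order given as a list) is an illustrative assumption, not MonkeyDB's implementation:

```python
# Transaction logs of the Read Committed example: t3 reads k2=2 (from t2)
# and then k1=1 (from t1); t2 writes both keys. Operations are
# ("r"|"w", key, value) triples in program order.
logs = {
    "t1": [("w", "k1", 1)],
    "t2": [("w", "k1", 2), ("w", "k2", 2)],
    "t3": [("r", "k2", 2), ("r", "k1", 1)],
}
# write-read relation: (reader, index of the read in po) -> writer
wro = {("t3", 0): "t2", ("t3", 1): "t1"}

def writes(tid, key):
    return any(kind == "w" and k == key for kind, k, _ in logs[tid])

def read_committed_holds(co):
    """Check the Read Committed axiom for a total commit order `co`
    (a list of transaction ids): if a read of `key` takes its value from
    w1, then any w2 != w1 that also writes `key` and is read by an
    earlier read of the same transaction must precede w1 in co."""
    pos = {t: n for n, t in enumerate(co)}
    for (reader, i), w1 in wro.items():
        _, key, _ = logs[reader][i]
        for (reader2, j), w2 in wro.items():
            if reader2 == reader and j < i and w2 != w1 and writes(w2, key):
                if pos[w2] > pos[w1]:
                    return False
    return True

# The (partial) commit order drawn in the figure, t1 before t2, violates
# the axiom; the history itself still satisfies Read Committed via the
# commit order that places t2 before t1.
print(read_committed_holds(["t1", "t2", "t3"]))  # False
print(read_committed_holds(["t2", "t1", "t3"]))  # True
```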
Finally, $\mathsf{Serializability}$~\cite{DBLP:journals/jacm/Papadimitriou79b} requires that for any transaction $\tr_1$ writing to a key $\key$ that is read in a transaction $\tr_3$, the set of $\co$ predecessors of $\tr_3$ writing $\key$ must precede $\tr_1$ in commit order. This ensures that each transaction observes the effects of all the $\co$ predecessors. \begin{definition} For an isolation level $I$ defined by a set\footnote{Isolation levels like Snapshot Isolation require more than one axiom.} of axioms $X$, a history $\hist=\tup{T, \so, \wro}$ \emph{satisfies} $I$ iff there is a strict total order $\co$ s.t. $\wro\cup\so\subseteq \co$ and $\tup{h,\co}$ satisfies $X$. \label{axiom-criterion} \end{definition} \section{Conclusion} \label{sec:conc} Our goal is to enable developers to test the correctness of their storage-backed applications under weak isolation levels. Such bugs are hard to catch because weak behaviors are rarely generated by real storage systems, but failure to address them can lead to loss of business \cite{acidrain}. We present MonkeyDB, an easy-to-use mock storage system for weeding out such bugs. MonkeyDB uses a logical understanding of isolation levels to provide (randomized) coverage of all possible weak behaviors. Our evaluation reveals that using MonkeyDB is very effective at breaking assertions that would otherwise hold under a strong isolation level. \section{Compiling SQL to Key-Value API} \label{sec:SQL-to-KV} We define an operational semantics for SQL programs (in $\SQLProgs$) based on a compiler that rewrites SQL queries to Key-Value $\iread$ and $\iwrite$ instructions. 
For presentation reasons, we use an intermediate representation where each table of a database instance is represented using a \emph{set} variable that stores values of the primary key\footnote{For simplicity, we assume that primary keys correspond to a single column in the table.} (identifying uniquely the rows in the table) and a set of key-value pairs, one for each cell in the table. In a second step, we define a rewriting of the API used to manipulate set variables into Key-Value $\iread$ and $\iwrite$ instructions. \paragraph{Intermediate Representation} Let $\DBschema:\Tables\rightharpoonup 2^\Columns$ be a database schema (recall that $\Tables$ and $\Columns$ are the set of table names and column names, resp.). For each table $\tab$, let $\tab.\pkey$ be the name of the primary key column. We represent an instance $\DBinst: \mathsf{dom}(\DBschema)\rightarrow 2^{\Rows}$ using: \begin{itemize} \item for each table $\tab$, a set variable $\tab$ (with the same name) that contains the primary key value $r(\tab.\pkey)$ of every row $r\in \DBinst(\tab)$, \item for each row $r\in \DBinst(\tab)$ with primary key value $\pkeyVal = r(\tab.\pkey)$, and each column $c\in \DBschema(\tab)$, a key $\tab.\pkeyVal.c$ associated with the value $r(c)$. 
\end{itemize} \begin{figure}[t] {\footnotesize \begin{minipage}[t]{3cm} \setlength{\tabcolsep}{3pt} \begin{center} Table: \end{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{A} \\ \hline\hline Id & Name & City \\ \hline 1 & Alice & Paris \\ \hline 2 & Bob & Bangalore \\ \hline 3 & Charles & Bucharest \\ \hline \end{tabular} \end{minipage} \begin{minipage}[t]{5cm} \begin{center} Intermediate representation: \end{center} \setlength{\tabcolsep}{1pt} \begin{tabular}{lll} \multicolumn{3}{l}{A = \{ 1, 2, 3 \}} \\ \\[1mm] A.1.Id: 1, & A.1.Name: Alice, & A.1.City: Paris \\ A.2.Id: 2, & A.2.Name: Bob, & A.2.City: Bangalore \\ A.3.Id: 3, & A.3.Name: Charles, & A.3.City: Bucharest \end{tabular} \end{minipage}} \vspace{-2mm} \caption{Representing tables with set variables and key-value pairs. We write a key-value pair as key:value.} \label{fig:sql-example} \vspace{-2mm} \end{figure} \begin{example} The table A on the left of Figure~\ref{fig:sql-example}, where the primary key is defined by the Id column, is represented using a set variable A storing the set of values in the column Id, and one key-value pair for each cell in the table. 
\end{example} \begin{figure}[t] \small \begin{flushleft} \begin{minipage}{6cm} \begin{flushleft} \texttt{SELECT}/\texttt{DELETE}/\texttt{UPDATE} \end{flushleft} \vspace{-2mm} \begin{lstlisting}[xleftmargin=5mm,language=MyLang,escapeinside={(*}{*)}] rows := elements(tab) for ( let pkeyVal of rows ) { for ( let c of (*$\vec{c_2}$*) ) { val[c] := read(tab.pkeyVal.c) if ( (*$\phi[\texttt{c}\mapsto \texttt{val[c]}: \texttt{c}\in\vec{c_2}]$*) true ) // (*$\iselect{\vec{c_1}}{\xvar}{\tab}{\phi(\vec{c_2})}$*) for ( let c of (*$\vec{c_1}$*) ) out[c] := read(tab.pkeyVal.c) x := x (*$\cup$*) out // (*$\idelete{\tab}{\phi(\vec{c_2})}$*) remove(tab, pkeyVal); // (*$\iupdate{\tab}{\vec{c_1}=\vec{x}}{\phi(\vec{c_2})}$*) for ( let c of (*$\vec{c_1}$*) ) write( tab.pkeyVal.c, (*$\gamma$*)((*$\vec{x}$*)[c]) ) \end{lstlisting} \end{minipage \begin{minipage}{2cm} ~ \end{minipage \begin{minipage}{3cm} \begin{flushleft} $\iinsert{\tab}{\vec{x}}$ \end{flushleft} \vspace{-2mm} \begin{lstlisting}[xleftmargin=5mm,language=MyLang,escapeinside={(*}{*)}] pkeyVal := (*$\gamma$*)((*$\vec{x}$*)[0]) if ( add(tab,pkeyVal) ) { for ( let c of (*$\DBschema(\texttt{tab})$*) ) { write( tab.pkeyVal.c, (*$\gamma$*)((*$\vec{x}$*)[c]) ) \end{lstlisting} \end{minipage} \end{flushleft} \caption{Compiling SQL queries to the intermediate representation. Above, $\gamma$ is a valuation of local variables. Also, in the case of $\mathtt{INSERT}$, we assume that the first element of $\vec{x}$ represents the value of the primary key.} \label{fig:sql-ir} \end{figure} Figure~\ref{fig:sql-ir} lists our rewriting of SQL queries over a database instance $\DBinst$ to programs that manipulate the set variables and key-value pairs described above. This rewriting contains the minimal set of accesses to the cells of a table that are needed to implement an SQL query according to its conventional specification. 
To manipulate set variables, we use $\iadd$ and $\iremove$ for adding and removing elements, respectively (returning $\btrue$ or $\bfalse$ when the element is already present or deleted from the set, respectively), and $\ielements$ that returns all of the elements in the input set\footnote{$\iadd(s,e)$ and $\iremove(s,e)$ add and remove the element $e$ from $s$, respectively. $\ielements(s)$ returns the content of $s$.}. $\mathsf{SELECT}$, $\mathsf{DELETE}$, and $\mathsf{UPDATE}$ start by reading the contents of the set variable storing primary key values and then, for every row, the columns in $\vec{c_2}$ needed to check the Boolean condition $\phi$ (the keys corresponding to these columns). For every row satisfying this Boolean condition, $\mathsf{SELECT}$ continues by reading the keys associated to the columns that need to be returned, $\mathsf{DELETE}$ removes the primary key value associated to this row from the set $\tab$, and $\mathsf{UPDATE}$ writes to the keys corresponding to the columns that need to be updated. In the case of $\mathsf{UPDATE}$, we assume that the values of the variables in $\vec{x}$ are obtained from a valuation $\gamma$ (this valuation would be maintained by the operational semantics of the underlying Key-Value store). $\mathsf{INSERT}$ adds a new primary key value to the set variable $\tab$ (the call to $\iadd$ checks whether this value is unique) and then writes to the keys representing columns of this new row. 
\paragraph{Manipulating Set Variables} \begin{figure}[t] \small \begin{minipage}[t]{4.2cm} \begin{flushleft} $\iadd(tab,pkeyVal)$: \end{flushleft} \vspace{-2mm} \begin{lstlisting}[language=MyLang,escapeinside={(*}{*)}] if (read((*$tab$*).has.(*$pkeyVal$*))) return false; write((*$tab$*).has.(*$pkeyVal$*),true) return true; \end{lstlisting} \end{minipage} \begin{minipage}[t]{4cm} \begin{flushleft} $\ielements(tab)$: \end{flushleft} \vspace{-2mm} \begin{lstlisting}[language=MyLang,escapeinside={(*}{*)}] ret := (*$\emptyset$*) for ( let (*$pkeyVal$*) of (*$\Vals$*) ) if (read((*$tab$*).has.(*$pkeyVal$*))) ret := ret (*$\cup$*) {(*$pkeyVal$*)} return ret; \end{lstlisting} \end{minipage} \vspace{-4mm} \caption{Manipulating set variables using key-value pairs.} \label{fig:ir-key} \vspace{-3mm} \end{figure} Based on the standard representation of a set using its characteristic function, we implement each set variable $\tab$ using a set of keys $\tab.\icontains.\pkeyVal$, one for each value $\pkeyVal\in\Vals$. These keys are associated with Boolean values, indicating whether $\pkeyVal$ is contained in $\tab$. In a concrete implementation, this set of keys need not be fixed a-priori, but can grow during the execution with every new instance of an $\mathtt{INSERT}$. Figure~\ref{fig:ir-key} lists the implementations of $\iadd$/$\ielements$, which are self-explanatory ($\iremove$ is analogous). \section{Implementation} \label{sec:impl} We implemented \mbox{MonkeyDB}{}\footnote{We plan to make MonkeyDB available open-source soon.} to support an interface common to most storage systems. Operations can be either key-value (KV) updates (to access data as a KV map) or SQL queries (to access data as a relational database). \mbox{MonkeyDB}{} supports transactions as well; a transaction can include multiple operations. Figure~\ref{fig:block_dia} shows the architecture of \mbox{MonkeyDB}{}. 
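MonkeyDB's transaction support can be pictured with a toy session interface; the class and method names below are assumptions of this sketch, not MonkeyDB's API, and buffering writes until commit models only transaction atomicity, whereas MonkeyDB additionally resolves each read according to the configured isolation level:

```python
# Toy sketch of a transactional key-value session (names assumed, not
# MonkeyDB's API). Writes are buffered until commit; reads see the
# transaction's own writes first, then the shared store.
class Session:
    def __init__(self, store):
        self.store = store    # shared key-value map
        self.buf = None       # write buffer of the open transaction

    def begin(self):
        self.buf = {}

    def read(self, key):
        if key in self.buf:   # read-your-own-writes within the transaction
            return self.buf[key]
        return self.store.get(key)

    def write(self, key, val):
        self.buf[key] = val

    def commit(self):
        self.store.update(self.buf)   # publish all writes atomically
        self.buf = None
```

A transaction with multiple operations, like \texttt{AddItem} in Figure~\ref{fig:motiv}, is then a \texttt{begin}/\texttt{read}/\texttt{write}/\texttt{commit} sequence on one session.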
A client can connect to \mbox{MonkeyDB}{} over a TCP connection, as is standard for SQL databases\footnote{We support the MySQL client-server protocol using \url{https://github.com/jonhoo/msql-srv}.}. This offers a plug-and-play experience when using standard frameworks such as JDBC \cite{jdbc}. Client applications can also use \mbox{MonkeyDB}{} as a library in order to directly invoke the storage APIs, or interact with it via HTTP requests, with JSON payloads. MonkeyDB contains a SQL-To-KV compiler that parses an input query\footnote{We use \url{https://github.com/ballista-compute/sqlparser-rs}}, builds its Abstract Syntax Tree (AST) and then applies the rewriting steps described in Section~\ref{sec:SQL-to-KV} to produce an equivalent sequence of KV API calls ({\tt read()} and {\tt write()}). It uses a hashing routine ({\tt hash}) to generate unique keys corresponding to each cell in a table. For instance, in order to insert a value $v$ for a column $c$ in a particular row with primary key value $\pkeyVal$, of a table $\tab$, we invoke {\tt write(hash($\tab$, $\pkeyVal$, $c$), $v$)}. We currently support only a subset of the standard SQL operators. For instance, nested queries or join operators are unsupported; these can be added in the future with more engineering effort. MonkeyDB schedules transactions from different sessions one after the other using a single global lock. Internally, it maintains execution state as a history consisting of a set of transaction logs, write-read relations and a partial session order (as discussed in \sectref{ax-kv}). On a {\tt read()}, MonkeyDB first collects a set of possible writes present in transaction log that can potentially form write-read (read-from) relationships, and then invokes the consistency checker (Figure~\ref{fig:block_dia}) to confirm validity under the chosen isolation level. Finally, it randomly returns one of the values associated with valid writes. 
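The read handling just described can be sketched as follows (a simplified sketch with illustrative names): gather the writes to the requested key from the transaction log, keep those that the pluggable consistency checker accepts, and return one of them at random.

```python
import random

# Sketch of MonkeyDB-style read resolution (names are illustrative).
# `txn_log` is the log of past writes; `is_valid` stands for the pluggable
# consistency checker, which accepts a write if choosing it as the
# write-read dependency satisfies the chosen isolation level's axioms.
def resolve_read(key, txn_log, is_valid, rng=random):
    candidates = [w for w in txn_log if w['key'] == key]
    valid = [w for w in candidates if is_valid(w)]
    if not valid:
        return None           # no admissible write: the key was never written
    return rng.choice(valid)['value']
```

Under the optional latest-writes mode, \texttt{candidates} would first be narrowed to the last write per session before the random choice.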
A user can optionally instruct MonkeyDB to only select from the set of \textit{latest} valid write per session. This option helps limit weak behaviors for certain reads. The implementation of our consistency checker is based on prior work \cite{DBLP:journals/pacmpl/BiswasE19}. It maintains the write-read relation as a graph, and detects cycles (isolation-level violations) using DFS traversals on the graph. The consistency checker is an independent and pluggable module: we have one for Read Committed and one for Causal Consistency, and more can be added in the future. \begin{figure} \includegraphics[scale=0.8]{figures/block_dia.pdf} \caption{Architecture of \mbox{MonkeyDB}{}} \label{fig:block_dia} \end{figure} \section{Introduction} \label{sec:intro} Data storage is no longer about writing data to a single disk with a single point of access. Modern applications require not just data reliability, but also high-throughput concurrent accesses. Applications concerning supply chains, banking, etc. use traditional relational databases for storing and processing data, whereas applications such as social networking software and e-commerce platforms use cloud-based storage systems (such as Azure CosmosDb \cite{cosmosdb}, Amazon DynamoDb \cite{decandia2007dynamo}, Facebook TAO \cite{facebook-tao}, etc.). We use the term \textit{storage system} in this paper to refer to any such database system/service. Providing high-throughput processing, unfortunately, comes at an unavoidable cost of weakening the guarantees offered to users. Concurrently-connected clients may end up observing different views of the same data. These ``anomalies'' can be prevented by using a strong \textit{isolation level} such as \textit{serializability}, which essentially offers a single view of the data. However, serializability requires expensive synchronization and incurs a high performance cost. 
As a consequence, most storage systems use weaker isolation levels, such as {\it Causal Consistency}~\cite{DBLP:journals/cacm/Lamport78,DBLP:conf/sosp/LloydFKA11,antidote}, {\it Snapshot Isolation}~\cite{DBLP:conf/sigmod/BerensonBGMOO95}, {\it Read Committed}~\cite{DBLP:conf/sigmod/BerensonBGMOO95}, etc. for better performance. In a recent survey of database administrators \cite{survey}, 86\% of the participants responded that most or all of the transactions in their databases execute at read committed isolation level. \begin{figure} \begin{minipage}{4.2cm} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)},language=MyLang] // Append item to cart AddItem(item i, userId) { Begin() key = "cart:" + userId cart = read(key) cart.append(i) write(key, cart) Commit() } \end{lstlisting} \end{minipage} \hspace{-5mm} \begin{minipage}{4.2cm} \begin{lstlisting}[xleftmargin=4mm,basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)},language=MyLang] // Fetch cart and delete item DeleteItem(item i, userId) { Begin() key = "cart:" + userId cart = read(key) cart.remove(i) write(key, cart) Commit() } \end{lstlisting} \end{minipage} \vspace{-6mm} \resizebox{8.5cm}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm, semithick, transform shape] \node (s11l) at (1.15, 2.1) {\textbf{Initial state}}; \node[draw, rounded corners=2mm] (t0) at (2.05, 1.5) {\begin{tabular}{l} $\wrt{\texttt{cart:}u}{\{..\, I\, ..\}}$ \end{tabular}}; \node[draw, rounded corners=2mm, minimum width=3.6cm, minimum height=1.3cm] (s1) at (0, -0.1) {}; \node[style={inner sep=0,outer sep=0}] (s11) at (0, 0.3) {\begin{tabular}{l} $\rd{\texttt{cart:}u}{\{..\, I\, ..\}}$\end{tabular}}; \node[style={inner sep=0,outer sep=0}] (s12) at (0, -0.5) {\begin{tabular}{l} $\wrt{\texttt{cart:}u}{\{..\, I,I\, ..\}}$ \end{tabular}}; \node (s11l) at (-1, 0.8) {\textbf{AddItem}}; \node[draw, rounded corners=2mm, minimum width=3.6cm, minimum height=1.3cm] (s2) at (4.1, -0.1) {}; 
\node[style={inner sep=0,outer sep=0}] (s21) at (4.1, 0.3) {\begin{tabular}{l} $\rd{\texttt{cart:}u}{\{..\, I\, ..\}}$ \end{tabular}}; \node[style={inner sep=0,outer sep=0}] (s22) at (4.1, -0.5) {\begin{tabular}{l} $\wrt{\texttt{cart:}u}{\{..\, ..\}}$ \end{tabular}}; \node (s11l) at (4.9, 0.8) {\textbf{DeleteItem}}; \node[draw, rounded corners=2mm] (r1) at (8.3, 0) {\begin{tabular}{l} $\rd{\texttt{cart:}u}{\{..\, ..\}}$ \end{tabular}}; \node[draw, rounded corners=2mm] (r2) at (8.3, -1.3) {\begin{tabular}{l} $\rd{\texttt{cart:}u}{\{..\, I, I\, .\}}$ \end{tabular}}; \path (s11) edge[left] node {$\po$} (s12); \path (s21) edge[left] node {$\po$} (s22); \path (t0) edge[left] node {$\wro$} (s1); \path (t0) edge[right] node {$\wro$} (s2); \path (r1) edge[left] node {$\so$} (r2); \path (s2) edge[above] node {$\wro$} (r1); \path (s1) edge[below,bend right=11] node {$\wro$} (r2); \end{tikzpicture} } \vspace{-2mm} \caption{A simple shopping cart service.} \label{fig:motiv} \vspace{-3mm} \end{figure} A weaker isolation level allows for more possible behaviors than stronger isolation levels. It is up to the developers then to ensure that their application can tolerate this larger set of behaviors. Unfortunately, weak isolation levels are hard to understand or reason about \cite{DBLP:conf/popl/BrutschyD0V17,adya-thesis} and resulting application bugs can cause loss of business \cite{acidrain}. Consider a simple shopping cart application that stores a per-client shopping cart in a key-value store (\textit{key} is the client ID and \textit{value} is a multi-set of items). \figref{motiv} shows procedures for adding an item to the cart (\texttt{AddItem}) and deleting \textit{all} instances of an item from the cart (\texttt{DeleteItem}). Each procedure executes in a transaction, represented by the calls to \texttt{Begin} and \texttt{Commit}. Suppose that initially, a user $u$ has a single instance of item $I$ in their cart. 
Then the user connects to the application via two different sessions (for instance, via two browser windows), adds $I$ in one session (\texttt{AddItem($I$, $u$)}) and deletes $I$ in the other session (\texttt{DeleteItem($I$, $u$)}). With serializability, the cart can either be left in the state $\{ I \}$ (delete happened first, followed by the add) or $\emptyset$ (delete happened second). However, with causal consistency (or read committed), it is possible that with two sequential reads of the shopping cart, the cart is empty in the first read (signaling that the delete has succeeded), but there are \textit{two} instances of $I$ in the second read! Such anomalies, of deleted items reappearing, have been noted in previous work \cite{decandia2007dynamo}. \paragraph{Testing storage-based applications} This paper addresses the problem of \textit{testing} code for correctness against weak behaviors: a developer should be able to write a test that runs their application and then asserts for correct behavior. The main difficulty today is getting coverage of weak behaviors during the test. If one runs the test against the actual production storage system, it is very likely to only result in serializable behaviors because of its optimized implementation. For instance, only 0.0004\% of all reads performed on Facebook's TAO storage system were not serializable \cite{facebook-consistency}. Emulators, offered by cloud providers for local development, on the other hand, do not support weaker isolation levels at all \cite{cosmosdb-local}. Another option, possible when the storage system is available open-source, is to set it up with a tool like Jepsen~\cite{jepsen} to inject noise (bring down replicas or delay packets on the network). This approach is unable to provide good coverage at the level of client operations \cite{clotho} (\sectref{oltp}). Another line of work has focused on finding anomalies by identifying non-serializable behavior (\sectref{related}).
Anomalies, however, do not always correspond to bugs \cite{DBLP:conf/pldi/BrutschyD0V18,isodiff}; they may either not be important (e.g., gather statistics) or may already be handled in the application (e.g., checking and deleting duplicate items). We present MonkeyDB, a mock in-memory storage system meant for testing correctness of storage-backed applications. MonkeyDB supports common APIs for accessing data (key-value updates, as well as SQL queries), making it an easy substitute for an actual storage system. MonkeyDB can be configured with one of several isolation levels. On a read operation, MonkeyDB computes the set of all possible return values allowed under the chosen isolation level, and randomly returns one of them. The developer can then simply execute their test multiple times to get coverage of possible weak behaviors. For the program in \figref{motiv}, if we write a test asserting that two sequential reads cannot return empty-cart followed by $\{I, I\}$, then it takes only 20 runs of the test (on average) to fail the assert. In contrast, the test does not fail when using MySQL with read committed, even after 100k runs. \paragraph{Design of MonkeyDB} MonkeyDB does not rely on stress generation, fault injection, or data replication. Rather, it works directly with a formalization of the given isolation level in order to compute allowed return values. The theory behind MonkeyDB builds on the axiomatic definitions of isolation levels introduced by Biswas et al. \cite{DBLP:journals/pacmpl/BiswasE19}. These definitions use logical constraints (called \emph{axioms}) to characterize the set of executions of a key-value store that conform to a particular isolation level (we discuss SQL queries later). 
These constraints refer to a specific set of relations between events/transactions in an execution that describe control-flow or data-flow dependencies: a program order $\po$ between events in the same transaction, a session order $\so$ between transactions in the same session\footnote{A session is a sequential interface to the storage system. It corresponds to what is also called a connection.}, and a write-read $\wro$ (read-from) relation that associates each read event with a transaction that writes the value returned by the read. These relations along with the events (also called operations) in an execution are called a \emph{history}. The history corresponding to the shopping cart anomaly explained above is given at the bottom of Figure~\ref{fig:motiv}. Read operations include the read value, and boxes group events from the same transaction. A history describes only the interaction with the key-value store, omitting application-side events (e.g., computing the value to be written to a key). MonkeyDB implements a \emph{centralized} operational semantics for key-value stores, which is based on these axiomatic definitions. Transactions are executed \emph{serially}, one after another, the concurrency being simulated during the handling of read events. This semantics maintains a history that contains all the past events (from all transactions/sessions), and write events are simply added to the history. The value returned by a read event is established based on a non-deterministic choice of a write-read dependency (concerning this read event) that satisfies the axioms of the considered isolation level. Depending on the weakness of the isolation level, this makes it possible to return values written in arbitrarily ``old'' transactions, and simulate any concurrent behavior. For instance, the history in Figure~\ref{fig:motiv} can be obtained by executing \texttt{AddItem}, \texttt{DeleteItem}, and then the two reads (serially).
The read in \texttt{DeleteItem} can take its value from the initial state and ``ignore'' the previously executed \texttt{AddItem}, because the obtained history validates the axioms of causal consistency (or read committed). The same happens for the two later reads in the same session, the first one being able to read from \texttt{DeleteItem} and the second one from \texttt{AddItem}. We formally prove that this semantics does indeed simulate any concurrent behavior, by showing that it is equivalent to a semantics where transactions are allowed to interleave. In comparison with concrete implementations, this semantics makes it possible to handle a wide range of isolation levels in a uniform way. It has only two sources of non-determinism: the order in which entire transactions are submitted, and the choice of write-read dependencies in read events. This enables better coverage of possible behaviors; the performance penalty is not an issue for safety-testing workloads, which are usually small (see our evaluation). We also extend our semantics to SQL queries, by compiling SQL queries down to transactions with multiple key-value reads/writes. A table in a relational database is represented using a set of primary key values (identifying uniquely the set of rows) and a set of keys, one for each cell in the table. The set of primary key values is represented using a set of Boolean key-value pairs that simulate its characteristic function (adding or removing an element corresponds to updating one of these keys to $\btrue$ or $\bfalse$). Then, SQL queries are compiled to read or write accesses to the keys representing a table.
For instance, a $\mathtt{SELECT}$ query that retrieves the set of rows in a table that satisfy a $\mathtt{WHERE}$ condition is compiled to (1) reading Boolean keys to identify the primary key values of the rows contained in the table, (2) reading keys that represent columns used in the $\mathtt{WHERE}$ condition, and (3) reading all the keys that represent cells in a row satisfying the $\mathtt{WHERE}$ condition. This rewriting contains the minimal set of accesses to the cells of a table that are needed to ensure the conventional specification of SQL. It makes it possible to ``export'' formalizations of key-value store isolation levels to SQL transactions. \paragraph{Contributions} This paper makes the following contributions: \begin{itemize} \item We define an operational semantics for key-value stores under various isolation levels, which simulates all concurrent behaviors with executions where transactions execute serially (\sectref{op-kv}) and which is based on the axiomatic definitions in~\cite{DBLP:journals/pacmpl/BiswasE19} (and outlined in \S\ref{sec:ax-kv}), \item We broaden the scope of the key-value store semantics to SQL transactions using a compiler that rewrites SQL queries to key-value accesses (\sectref{SQL-to-KV}), \item The operational semantics and the SQL compiler are implemented in a tool called MonkeyDB (\sectref{impl}). It randomly resolves possible choices to provide coverage of weak behaviors. It supports both a key-value interface as well as SQL, making it readily compatible with any storage-backed application. 
\item We present an evaluation of MonkeyDB on several applications, showcasing its superior coverage of weak behaviors as well as bug-finding abilities (\sectref{micro}, \sectref{oltp}).\footnote{Source code of our benchmarks is available as supplementary material.} \end{itemize} \section{Evaluation: Microbenchmarks} \label{sec:micro} We consider a set of micro-benchmarks inspired by real-world applications (\sectref{micro-benchmarks}) and evaluate the number of test iterations required to fail an invalid assertion (\sectref{micro-assertion-violations}). We also measure the \textit{coverage} of weak behaviors provided by MonkeyDB (\sectref{micro-coverage}). Each of these applications was implemented based on its specification described in prior work; they all use MonkeyDB as a library, via its KV interface. \subsection{Applications} \label{sec:micro-benchmarks} \paragraph{Twitter \cite{twissandra}} This is based on a social-networking application that allows users to create a new account, follow, unfollow, tweet, browse the newsfeed (tweets from users you follow) and the timeline of any particular user. \figref{twitter-algo} shows the pseudo code for two operations. A user can access Twitter from multiple clients (sessions), which could lead to unexpected behavior under weak isolation levels. Consider the following scenario with two users, $A$ and $B$, where user $A$ is accessing Twitter from two different sessions, $S_1$ and $S_2$. User $A$ views the timeline of user $B$ from one session (\texttt{$S_1$:Timeline($B$)}) and decides to follow $B$ through another session (\texttt{$S_2$:Follow($A$, $B$)}). Now when user $A$ visits their timeline or newsfeed (\texttt{$S_2$:NewsFeed($A$)}), they expect to see all the tweets of $B$ that were visible via \texttt{Timeline} in session $S_1$. But under weak isolation levels, this does not always hold true and there could be missing tweets.
\begin{figure} \begin{tabular}{@{\hspace{0ex}}l@{\hspace{-1ex}}l} \begin{minipage}{4cm} \begin{lstlisting}[basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)},language=MyLang] // Get user's tweets Timeline(user u) { Begin() key = "tweets:" + u.id T = read(key) Commit() return sortByTime(T) } \end{lstlisting} \end{minipage} & \begin{minipage}{4.3cm} \begin{lstlisting}[xleftmargin=3mm,basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)},language=MyLang] // Get following users' tweets NewsFeed(user u) { Begin() FW = read("following:"+ u.id) NF = {} foreach v (*$\in$*) FW: T = read("tweets:"+ v.id) NF = NF (*$\cup$*) T Commit() return sortByTime(NF) } \end{lstlisting} \end{minipage} \end{tabular} \vspace{-5ex} \caption{Example operations of the Twitter app} \label{fig:twitter-algo} \end{figure} \vspace{-2mm} \paragraph{Shopping Cart \cite{sivaramakrishnan2015declarative}} This application allows a user to add, remove and change quantity of items from different sessions. It also allows the user to view all items present in the shopping cart. The pseudo code and an unexpected behavior under weak isolation levels were discussed in \sectref{intro}, \figref{motiv}. \vspace{-2mm} \paragraph{Courseware \cite{DBLP:conf/esop/NairP020}} This is an application for managing students and courses, allowing students to register, de-register and enroll for courses. Courses can also be created or deleted. Courseware maintains the current status of students (registered, de-registered), courses (active, deleted) as well as enrollments. Enrollment can contain only registered students and active courses, subject to the capacity of the course. Under weak isolation, it is possible that two different students, when trying to enroll concurrently, will both succeed even though only one spot was left in the course. 
Another example that breaks the application is when a student is trying to register for a course that is being concurrently removed: once the course is removed, no student should be seen as enrolled in that course. \vspace{-2mm} \paragraph{Treiber Stack \cite{cavNagarMJ20}} Treiber stack is a concurrent stack data structure that uses compare-and-swap (CAS) instructions instead of locks for synchronization. This algorithm was ported to operate on a kv-store in prior work \cite{cavNagarMJ20} and we use that implementation. Essentially, the stack contents are placed in a kv-store, instead of using an in-memory linked data structure. Each row in the store contains a pair consisting of the stack element and the key of the next row down in the stack. A designated key ``{\tt head}'' stores the key of the top of the stack. CAS is implemented as a transaction, but the \texttt{pop} and \texttt{push} operations do not use transactions, i.e., each read/write/CAS is its own transaction. When two different clients try to \texttt{pop} from the stack concurrently, under serializability, each \texttt{pop} would return a unique value, assuming that each pushed value is unique. However, under causal consistency, concurrent \texttt{pop}s can return the same value. \subsection{Assertion Checking} \label{sec:micro-assertion-violations} We ran the above applications with MonkeyDB to find out if assertions, capturing unexpected behavior, were violated under causal consistency. Table \ref{tab:assert} summarizes the results. For each application, we used 3 client threads and 3 operations per thread. We ran each test with MonkeyDB for a total of 10,000 times; we refer to a run as an iteration. We report the average number of iterations (Iters) before an assertion failed, and the corresponding time taken (sec). All the assertions were violated within 58 iterations, in half a second or less. In contrast, running with an actual database almost never produces an assertion violation. 
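The iteration counts reported for these tests can be measured with a simple driver; the following is a hypothetical sketch (not our harness code), where \texttt{run\_test} stands for one execution of an application scenario against MonkeyDB:

```python
# Sketch of an iteration-counting harness: re-run a randomized test until
# its assertion fails, and report how many runs that took. `run_test` is a
# stand-in for one application scenario executed against MonkeyDB.
def iterations_to_failure(run_test, max_iters=10_000):
    for i in range(1, max_iters + 1):
        try:
            run_test()
        except AssertionError:
            return i          # weak behavior observed on the i-th run
    return None               # assertion survived the whole budget
```

Averaging this count over many trials yields the per-assertion numbers in Table~\ref{tab:assert}.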
\begin{table} \footnotesize \begin{tabular}{|l|l|c|c|} \hline
\textbf{Application} & \textbf{Assertion} & \multicolumn{2}{c|}{\textbf{Avg. time to fail}} \\
& & \textbf{(Iters)} & \textbf{(sec)} \\ \hline
Stack & Element popped more than once & 3.7 & 0.02 \\ \hline
Courseware & Course registration overflow & 10.6 & 0.09 \\ \hline
Courseware & Removed course registration & 57.5 & 0.52 \\ \hline
Shopping & Item reappears after deletion & 20.2 & 0.14 \\ \hline
Twitter & Missing tweets in feed & 6.3 & 0.03 \\ \hline
\end{tabular} \caption{\label{tab:assert}Assertion-checking results on the microbenchmarks} \end{table} \subsection{Coverage} \label{sec:micro-coverage} The previous section only checked for a particular set of assertions. As an additional measure of test robustness, we count the number of distinct \textit{client-observable states} generated by a test. A client-observable state, for an execution, is the vector of values returned by read operations. For instance, a stack's state is defined by the return values of \texttt{pop} operations; a shopping cart's state is defined by the return value of \texttt{GetCart}, and so on. For this experiment, we randomly generated test harnesses; each harness spawns multiple threads that each execute a sequence of operations. In order to compute the absolute maximum of possible states, we had to limit the size of the tests: either 2 or 3 threads, each choosing between 2 and 4 operations. Note that any program that concurrently executes operations against a store has two main sources of non-determinism: the first is the interleaving of operations (i.e., the order in which operations are submitted to the store) and the second is the choice of read-from (i.e., the value returned by the store under its configured isolation level). MonkeyDB only controls the latter; it is up to the application to control the former.
There are many tools that systematically enumerate interleavings (such as \textsc{Chess} \cite{DBLP:conf/pldi/MusuvathiQ08} and \textsc{Coyote} \cite{coyote-web}), but we use a simple trick instead to avoid imposing any burden on the application: we included an option in MonkeyDB to deliberately add a small random delay (a sleep between $0$ and $4$ ms) before each transaction begins. This option was sufficient in our experiments, as we show next. We also implemented a special setup using the \textsc{Coyote} tool \cite{coyote-web} to enumerate all sources of non-determinism, interleavings as well as read-from, in order to explore the entire state space of a test. We use this to compute the total number of states. \figref{micro_dfs} shows the number of distinct states observed under different isolation levels, averaged across multiple ($50$) test harnesses. For each of serializability and causal consistency, we show the max (as computed by \textsc{Coyote}) and versions with and without the delay option in MonkeyDB. Each of these graphs shows similar trends: the number of states with causal consistency is much higher than with serializability. Thus, testing with a store that is unable to generate weak behaviors will likely be ineffective. Furthermore, the ``delay'' versions of MonkeyDB are able to approach the maximum within a few thousand attempts, implying that MonkeyDB's strategy of per-read randomness is effective for providing coverage to the application. \begin{figure*}[h] \centering \includegraphics[width=1.0\textwidth]{plots/random_avg.pdf} \caption{State coverage obtained with MonkeyDB for various microbenchmarks} \label{fig:micro_dfs} \end{figure*} \section{Evaluation: OLTP Workloads} \label{sec:oltp} OLTPBench \cite{difallah2013oltp} is a benchmark suite of representative OLTP workloads for relational databases. We picked a subset of OLTPBench for which we had reasonable assertions.
Table~\ref{table-bench} lists basic information such as the number of database tables, the number of static transactions, how many of them are read-only, and the number of different assertions corresponding to system invariants for testing the benchmark. We modified OLTPBench by rewriting SQL join and aggregation operators into equivalent application-level loops, following a similar strategy as prior work \cite{clotho}. Except for this change, we ran OLTPBench unmodified. For TPC-C, we obtained a set of $12$ invariants from its specification document~\cite{tpcc-spec}. For all other benchmarks, we manually identified invariants that the application should satisfy. We asserted these invariants by issuing a read-only transaction to \mbox{MonkeyDB}{} at the end of the execution of the benchmark. None of the assertions fail under serializability; they are indeed invariants under serializability.\footnote{We initially observed two assertions failing under serializability. Upon analyzing the code, we identified that the behavior is due to a bug in OLTPBench that we have reported to the authors (link omitted).} When using weaker isolation, we configured MonkeyDB to use latest reads only (\sectref{impl}) for the assertion-checking transactions in order to isolate the weak behavior to only the application. We ran each benchmark $100$ times and report, for each assertion, the number of runs in which it was violated. Note that OLTPBench runs in two phases. The first is a loading phase that consists of a big initial transaction that populates tables with data; the execution phase then issues multiple concurrent transactions. With the goal of testing correctness, we \textit{turn down} the scale factor to generate a small load and limit the execution-phase time to ten seconds with just two or three sessions. A smaller test setup has the advantage of making debugging easier. With \mbox{MonkeyDB}{}, there is no need to generate large workloads.
\begin{table} \footnotesize \begin{tabular}{|l|c|c|c|c|} \hline
Benchmark & \#Tables & \#Txns & \#Read-only & \#Assertions \\ \hline
TPC-C & 9 & 5 & 2 & 12\\
SmallBank & 3 & 6 & 1 & 1\\
Voter & 3 & 1 & 0 & 1\\
Wikipedia & 12 & 5 & 2 & 3\\ \hline
\end{tabular} \caption{OLTP benchmarks tested with \mbox{MonkeyDB}{}} \label{table-bench} \end{table} \paragraph{TPC-C} TPC-C emulates a wholesale supplier transactional system that delivers orders for a warehouse company. This benchmark deals with customers, payments, orders, warehouses, deliveries, etc. We configured OLTPBench to issue a higher proportion ($>85\%$) of update transactions, compared to read-only ones. Further, we considered a small input workload consisting of one warehouse, two districts per warehouse, and three customers per district. TPC-C has twelve assertions (A1 to A12) that check for consistency between the database tables. For example, A12 checks: for any customer, the sum of delivered order-line amounts must be equal to the sum of the balance amount and the YTD (Year-To-Date) payment amount of that customer. \begin{figure}[t] \centering \includegraphics[scale=0.5]{plots/random_strongest_tpcc.png} \caption{\small Assertion checking: {TPC-C}} \label{fig:tpcc} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.45]{plots/random_strongest_all.png} \caption{\small Assertion checking: SmallBank, Voter, and Wikipedia} \label{fig:rest} \end{figure} Figure~\ref{fig:tpcc} shows the percentage of test runs in which an assertion failed. It shows that all twelve assertions are violated under the Read~Committed isolation level. In fact, $9$ out of the $12$ assertions are violated in more than 60\% of the test runs. In the case of causal consistency, all assertions are violated with three sessions, except for A4 and A11. We manually inspected TPC-C and we believe that both these assertions are valid under causal consistency.
For instance, A4 checks for consistency between two tables, both of which are only updated within the same transaction, thus causal consistency is enough to preserve consistency between them. These results demonstrate the effectiveness of \mbox{MonkeyDB}{} in breaking (invalid) assertions. Running with MySQL, under read committed, was unable to violate any assertion except for two (A10 and A12), even when increasing the number of sessions to $10$. We used the same time limit of $10$ seconds for the execution phase. We note that MySQL is much faster than MonkeyDB and ends up processing up to $50\times$ more transactions in the same time limit, yet is unable to violate most assertions. Prior work \cite{clotho} attempted a more sophisticated test setup where TPC-C was executed on a Cassandra cluster, while running Jepsen~\cite{jepsen} for fault injection. This setup also was unable to violate all assertions, even when running without transactions, and on a weaker isolation level than read committed. Only six assertions were violated with 10 sessions, eight assertions with 50 sessions, and ten assertions with 100 sessions. With \mbox{MonkeyDB}{}, there is no need to set up a cluster, use fault injection or generate large workloads that can make debugging very difficult. \paragraph{SmallBank, Voter, and Wikipedia} SmallBank is a standard financial banking system, dealing with customers, saving and checking accounts, money transfers, etc. Voter emulates the voting system of a television show and allows users to vote for their favorite contestants. Wikipedia is based on the popular online encyclopedia. It deals with a complex database schema involving page revisions, page views, user accounts, logging, etc. It allows users to edit its pages and maintains a history of page edits and user actions. We identified a set of five assertions, A13 to A17, that should be satisfied by these systems. 
For SmallBank, we check whether the total money in the bank remains the same while it is transferred from one account to another (A13). Voter requires that the number of votes by a user is limited to a fixed threshold (A14). For Wikipedia, we check whether, for a given user and a given page, the numbers of edits recorded in the user information, history, and logging tables are consistent (A15-A17). As before, we consider small workloads: (1) five customers for SmallBank, (2) one user for Voter, and (3) two pages and two users for Wikipedia. Figure~\ref{fig:rest} shows the results. \mbox{MonkeyDB}{} detected that all the assertions are invalid under the chosen isolation levels. Under causal, \mbox{MonkeyDB}{} could break an assertion in 26.7\% (geo-mean) of runs given 2 sessions and in 37.2\% (geo-mean) of runs given 3 sessions. Under read committed, the corresponding numbers are 56.1\% and 65.4\% for 2 and 3 sessions, respectively. \section{Operational Semantics for $\KVProgs$} \label{sec:op-kv} We define a small-step operational semantics for Key-Value store programs, which is parametrized by an isolation level $I$. Transactions are executed \emph{serially} one after another, and the values returned by $\rdo$ operations are decided using the axiomatic definition of $I$. The semantics maintains a history of previously executed operations, and the value returned by a $\rdo$ is chosen non-deterministically as long as extending the current history with the corresponding write-read dependency satisfies the axioms of $I$. We show that this semantics is sound and complete for any natural isolation level $I$, i.e., it generates precisely the same set of histories as a \emph{baseline} semantics where transactions can interleave arbitrarily and the $\rdo$ operations can return arbitrary values as long as they can be proved to be correct at the end of the execution.
\subsection{Definition of the Operational Semantics} \tikzset{ keep name/.style={ prefix after command={ \pgfextra{\let\fixname\tikzlastnode} } }, partialbox/.style={ keep name, append after command={ ([xshift=#1]\fixname.north west) -- (\fixname.north west) -- (\fixname.south west) -- ([xshift=#1]\fixname.south west) ([xshift=-#1]\fixname.north east) -- (\fixname.north east) -- (\fixname.south east) -- ([xshift=-#1]\fixname.south east) } }, partialbox/.default=15pt } \begin{figure} \begin{minipage}{2.2cm} \begin{lstlisting}[xleftmargin=5mm,basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)}] begin; write((*$\key_1$*),1); x2=read((*$\key_2$*)); commit \end{lstlisting} \end{minipage} \begin{minipage}{1mm} || \end{minipage} \hspace{-5mm} \begin{minipage}{2.2cm} \begin{lstlisting}[xleftmargin=5mm,basicstyle=\ttfamily\footnotesize,escapeinside={(*}{*)}] begin; write((*$\key_2$*),1); x1=read((*$\key_1$*)); commit \end{lstlisting} \end{minipage} \begin{minipage}{4.1cm} \resizebox{.7\textwidth}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm, semithick, transform shape] \node[draw, rounded corners=2mm] (t0) at (1.7, 1.7) {\begin{tabular}{l} $\wrt{\key_1}{0}$ \\ $\wrt{\key_2}{0}$ \end{tabular}}; \node(s11) at (0, 0) {\begin{tabular}{l} $\wrt{\key_1}{1}$ \end{tabular}}; \path (t0) edge[above] node[pos=0.6, xshift=-3] {$\so$} (s11); \end{tikzpicture} } \end{minipage} {\small (a)} \hspace{4cm} {\small (b)} \medskip \begin{minipage}{4.1cm} \resizebox{.7\textwidth}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm, semithick, transform shape] \node[draw, rounded corners=2mm] (t0) at (1.7, 1.7) {\begin{tabular}{l} $\wrt{\key_1}{0}$ \\ $\wrt{\key_2}{0}$ \end{tabular}}; \node[draw, rounded corners=2mm] (s11) at (0, 0) {\begin{tabular}{l} $\wrt{\key_1}{1}$ \\ $\rd{\key_2}{0}$ \end{tabular}}; \path (t0) edge[above] node[pos=0.6, xshift=-3] {$\so$} (s11); \path (t0) edge[red, right, bend left=20] node[pos=0.7] {$\wro$} (s11); 
\end{tikzpicture} } \begin{center} {\small (c)} \end{center} \end{minipage} \begin{minipage}{4.1cm} \resizebox{\textwidth}{!}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=4cm, semithick, transform shape] \node[draw, rounded corners=2mm] (t0) at (1.7, 1.7) {\begin{tabular}{l} $\wrt{\key_1}{0}$ \\ $\wrt{\key_2}{0}$ \end{tabular}}; \node[draw, rounded corners=2mm] (s11) at (0, 0) {\begin{tabular}{l} $\wrt{\key_1}{1}$ \\ $\rd{\key_2}{0}$ \end{tabular}}; \node[draw, rounded corners=2mm] (s12) at (3.5, 0) {\begin{tabular}{l} $\wrt{\key_2}{1}$ \\ $\rd{\key_1}{0}$ \end{tabular}}; \path (t0) edge[above] node[pos=0.6, xshift=-3] {$\so$} (s11); \path (t0) edge[right] node[pos=0.3] {$\so$} (s12); \path (t0) edge[red, right, bend left=20] node[pos=0.4,xshift=-1] {$\wro$} (s11); \path (t0) edge[red, left, bend right=20] node[pos=0.9,xshift=-1] {$\wro$} (s12); \end{tikzpicture} } \begin{center} {\small (d)} \end{center} \end{minipage} \vspace{-3mm} \caption{The $\mathsf{Causal}$ semantics on the program in (a), assuming that the transaction on the left is scheduled first. } \label{fig:opsEx} \vspace{-5mm} \end{figure} We use the program in Figure~\ref{fig:opsEx}a to give an overview of our semantics, assuming Causal Consistency. This program has two concurrent transactions whose reads can both return the initial value $0$, which is not possible under $\mathsf{Serializability}$. Our semantics executes transactions in their entirety one after another (without interleaving them), maintaining a history that contains all the executed operations. We assume that the transaction on the left executes first. Initially, the history contains a fictitious transaction log that writes the initial value 0 to all keys, and that will precede all the transaction logs created during the execution in session order. Executing a write instruction consists in simply appending the corresponding write operation to the log of the current transaction. 
For instance, executing the first write (and $\ibegin$) in our example results in adding a transaction log that contains a write operation (see Figure~\ref{fig:opsEx}b). The execution continues with the read instruction from the same transaction, and it cannot switch to the other transaction. The execution of a read instruction consists in non-deterministically choosing a write-read dependency that validates $\mathsf{Causal}$ when added to the current history. In our example, executing $\iread(\key_2)$ results in adding a write-read dependency from the transaction log writing initial values, which determines the return value of the $\iread$ (see Figure~\ref{fig:opsEx}c). This choice makes the obtained history satisfy $\mathsf{Causal}$. The second transaction executes in a similar manner. When executing its read instruction, the chosen write-read dependency is again related to the transaction log writing initial values (see Figure~\ref{fig:opsEx}d). This choice is valid under $\mathsf{Causal}$. Since a read is not required to read from the immediately preceding transaction, this semantics is able to simulate all the ``anomalies'' of a weak isolation level (this execution being an example). Formally, the operational semantics is defined as a transition relation $\Rightarrow_I$ between \emph{configurations}, which are defined as tuples containing the following: \begin{itemize} \item history $\hist$ storing the operations executed in the past, \item identifier $j$ of the current session, \item local variable valuation $\gamma$ for the current transaction, \item code $\mathsf{B}$ that remains to be executed from the current transaction, and \item sessions/transactions $\mathsf{P}$ that remain to be executed from the original program.
\end{itemize} For readability, we define a program as a partial function $\mathsf{P}:\mathsf{SessId}\rightharpoonup \mathsf{Sess}$ that associates session identifiers in $\mathsf{SessId}$ with concrete code as defined in Figure~\ref{fig:syntax} (i.e., sequences of transactions). Similarly, the session order $\so$ in a history is defined as a partial function $\so:\mathsf{SessId}\rightharpoonup \mathsf{Tlogs}^*$ that associates session identifiers with sequences of transaction logs. Two transaction logs are ordered by $\so$ if one occurs before the other in some sequence $\so(j)$ with $j\in \mathsf{SessId}$. Before presenting the definition of $\Rightarrow_I$, we introduce some notation. Let $\hist$ be a history that contains a representation of $\so$ as above. We use $\hist\oplus_j \tup{\tr,O,\po}$ to denote a history where $\tup{\tr,O,\po}$ is appended to $\so(j)$. Also, for an operation $o$, $\hist\oplus_j o$ is the history obtained from $\hist$ by adding $o$ to the last transaction log in $\so(j)$ and as a last operation in the program order of this log (i.e., if $\so(j)=\sigma; \tup{t,O,\po}$, then the session order $\so'$ of $\hist\oplus_j o$ is defined by $\so'(k)=\so(k)$ for all $k\neq j$ and $\so'(j) =\sigma; \tup{t,O\cup\{o\},\po\cup \{(o',o): o'\in O\}}$). Finally, for a history $\hist = \tup{T, \so, \wro}$, $\hist\oplus\wro(\tr,o)$ is the history obtained from $\hist$ by adding $(\tr,o)$ to the write-read relation.
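This notation can be mirrored by a small data structure (an illustrative encoding of ours, not part of the formal development): a history keeps, per session, a sequence of transaction logs, and the three $\oplus$ operations respectively append a fresh log, append an operation to the last log of a session, and record a write-read dependency.

```python
from collections import defaultdict

# Illustrative encoding of histories: `so` maps session identifiers to
# sequences of transaction logs (mirroring so : SessId -> Tlogs*), and
# `wr` collects write-read dependencies. All names here are ours.

class History:
    def __init__(self):
        self.so = defaultdict(list)  # session id -> list of transaction logs
        self.wr = []                 # pairs (writer txn id, read operation)

    def append_txn(self, j, txn_id):
        # h (+)_j <t, {}, {}>: start a fresh, empty log in session j
        self.so[j].append({"id": txn_id, "ops": []})

    def append_op(self, j, op):
        # h (+)_j o: add o as the po-last operation of session j's last log
        self.so[j][-1]["ops"].append(op)

    def add_wr(self, writer, read_op):
        # h (+) wr(t, o): extend the write-read relation with (t, o)
        self.wr.append((writer, read_op))

h = History()
h.append_txn("s1", "t1")
h.append_op("s1", ("write", "k1", 1))
h.append_op("s1", ("read", "k2", 0))
h.add_wr("t0", ("read", "k2", 0))
```

The example run mimics Figure~\ref{fig:opsEx}c: session s1 opens transaction t1, writes $\key_1$, and its read of $\key_2$ is justified by a write-read edge from the initializing transaction t0.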
\begin{figure} [t] \small \centering \begin{mathpar} \inferrule[spawn]{\tr \mbox{ fresh}\quad \mathsf{P}(j) = \ibegin; \mathsf{Body}; \icommit; \mathsf{S}}{ \hist,\_,\_,\epsilon,\mathsf{P} \Rightarrow_I \hist \oplus_j \tup{\tr,\emptyset,\emptyset},j,\emptyset,\mathsf{Body},\mathsf{P}[j\mapsto \mathsf{S}] } \inferrule[if-true]{\phi(\vec{x})[x\mapsto \gamma(x): x\in\vec{x}]\mbox{ true}}{ \hist,j,\gamma,\iif{\phi(\vec{x})}{\mathsf{Instr}};\mathsf{B}, \mathsf{P} \Rightarrow_I \hist,j,\gamma,\mathsf{Instr};\mathsf{B},\mathsf{P} } \inferrule[if-false]{\phi(\vec{x})[x\mapsto \gamma(x): x\in\vec{x}]\mbox{ false}}{ \hist,j,\gamma,\iif{\phi(\vec{x})}{\mathsf{Instr}};\mathsf{B}, \mathsf{P} \Rightarrow_I \hist,j,\gamma,\mathsf{B},\mathsf{P} } \inferrule[write]{\val = \gamma(\xvar)\quad \id\mbox{ fresh}}{ \hist,j,\gamma, \iwrite(\key,\xvar);\mathsf{B}, \mathsf{P} \Rightarrow_I \hist \oplus_j \wrt[\id]{\key}{\val},j,\gamma,\mathsf{B},\mathsf{P} } \inferrule[read-local]{ \wrt{\key}{\val}\mbox{ is the last write on $\key$ in $\tr$ w.r.t. $\po$}\\ \id\mbox{ fresh } }{ \hist,j,\gamma, \xvar := \iread(\key);\mathsf{B}, \mathsf{P} \Rightarrow_I \hist \oplus_j \rd[\id]{\key}{\val},j,\gamma[\xvar\mapsto \val],\mathsf{B},\mathsf{P} } \inferrule[read-extern]{ \hist=(T,\so,\wro) \\ \tr \mbox{ is the id of the last transaction log in $\so(j)$} \\ \wrt{\key}{\val}\in\writeOp{\tr'}\mbox{ with $\tr'\in T$ and $\tr'\neq \tr$} \\ \id\mbox{ fresh }\\ \hist' = (\hist \oplus_j \rd[\id]{\key}{\val}) \oplus \wro(\tr',\rd[\id]{\key}{\val}) \\ \hist' \mbox{ satisfies } I }{ \hist,j,\gamma, \xvar := \iread(\key);\mathsf{B}, \mathsf{P} \Rightarrow_I \hist',j,\gamma[\xvar\mapsto \val], \mathsf{B}, \mathsf{P} } \end{mathpar} \vspace{-4mm} \caption{Operational semantics for $\KVProgs$ programs under isolation level $I$.
For a function $f:A\rightharpoonup B$, $f[a\mapsto b]$ denotes the function $f':A\rightharpoonup B$ defined by $f'(c) = f(c)$, for every $c\neq a$ in the domain of $f$, and $f'(a)=b$.} \label{fig:op:sem} \vspace{-4mm} \end{figure} Figure~\ref{fig:op:sem} lists the rules defining $\Rightarrow_I$. The \textsc{spawn} rule starts a new transaction, provided that there is no other live transaction ($\mathsf{B}=\epsilon$). It adds an empty transaction log to the history and schedules the body of the transaction. \textsc{if-true} and \textsc{if-false} check the truth value of a Boolean condition of an $\mathtt{if}$ conditional. \textsc{write} corresponds to a write instruction and consists in simply adding a write operation to the current history. \textsc{read-local} and \textsc{read-extern} concern read instructions. \textsc{read-local} handles the case where the read follows a write on the same key $k$ in the same transaction: the read returns the value written by the last write on $k$ in the current transaction. Otherwise, \textsc{read-extern} corresponds to reading a value written in another transaction $\tr'$ ($\tr$ is the id of the log of the current transaction). The transaction $\tr'$ is chosen non-deterministically as long as extending the current history with the write-read dependency associated to this choice leads to a history that still satisfies $I$. An \emph{initial} configuration for program $\prog$ contains the program $\prog$ along with a history $\hist=\tup{\{\tr_0\},\emptyset,\emptyset}$, where $\tr_0$ is a transaction log containing only writes that write the initial values of all keys, and empty current transaction code ($\mathsf{B}=\epsilon$). An execution of a program $\prog$ under an isolation level $I$ is a sequence of configurations $c_0 c_1\ldots c_n$ where $c_0$ is an initial configuration for $\prog$, and $c_m\Rightarrow_I c_{m+1}$, for every $0\leq m < n$. We say that $c_n$ is \emph{$I$-reachable} from $c_0$. 
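The nondeterministic choice in \textsc{read-extern} can be pictured as enumerating candidate writer transactions and keeping only those whose induced write-read extension passes the axiomatic check. The following is a hedged sketch (function and variable names are ours; the predicate satisfies\_I stands in for the actual axiomatic check of $I$):

```python
# Sketch of the READ-EXTERN choice: collect every write to `key` made by a
# transaction other than the current one, keeping only the candidates for
# which the extended history would still satisfy the isolation level I.

def read_extern_choices(logs, current, key, satisfies_I):
    choices = []
    for log in logs:                 # transaction logs of the history
        if log["id"] == current:
            continue                 # the writer t' must differ from t
        for op in log["ops"]:
            if op[0] == "write" and op[1] == key:
                candidate = (log["id"], key, op[2])
                if satisfies_I(logs, candidate):
                    choices.append(candidate)
    return choices

logs = [
    {"id": "t0", "ops": [("write", "k1", 0), ("write", "k2", 0)]},
    {"id": "t1", "ops": [("write", "k1", 1)]},
]
# With a check that accepts every extension (the weakest possible level),
# the read of k1 in t1 may only be justified by the initializing log t0.
opts = read_extern_choices(logs, "t1", "k1", lambda h, c: True)
```

With an actual check for, say, $\mathsf{Causal}$, some candidates returned by the enumeration would be filtered out, which is exactly how the rule constrains return values.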
The history of such an execution is the history $\hist$ in the last configuration $c_n$. A configuration is called \emph{final} if it contains the empty program ($\prog=\emptyset$). Let $\histOf[I]{\prog}$ denote the set of all histories of an execution of $\prog$ under $I$ that ends in a final configuration. \subsection{Correctness of the Operational Semantics} \begin{figure} [t] \small \centering \begin{mathpar} \inferrule[spawn*]{\tr \mbox{ fresh}\quad \mathsf{P}(j) = \ibegin; \mathsf{Body}; \icommit; \mathsf{S} \quad \vec{\mathsf{B}}(j) = \epsilon}{ \hist,\vec{\gamma},\vec{\mathsf{B}},\mathsf{P} \Rightarrow \hist \oplus_j \tup{\tr,\emptyset,\emptyset},\vec{\gamma}[j\mapsto \emptyset],\vec{\mathsf{B}}[j\mapsto \mathsf{Body}],\mathsf{P}[j\mapsto \mathsf{S}] } \inferrule[read-extern*]{ \vec{\mathsf{B}}(j) = \xvar := \iread(\key);\mathsf{B} \\ \hist=(T,\so,\wro) \\ \tr \mbox{ is the id of the last transaction log in $\so(j)$} \\ \wrt{\key}{\val}\in\writeOp{\tr'}\mbox{ with $\tr'\in \transC{\hist,\vec{\mathsf{B}}}$ and $\tr\neq \tr'$} \\ \id\mbox{ fresh }\\ \hist' = (\hist \oplus_j \rd[\id]{\key}{\val}) \oplus \wro(\tr',\rd[\id]{\key}{\val}) }{ \hist,\vec{\gamma},\vec{\mathsf{B}}, \mathsf{P} \Rightarrow \hist',\vec{\gamma}[(j,\xvar)\mapsto \val],\vec{\mathsf{B}}[j\mapsto \mathsf{B}],\mathsf{P} } \end{mathpar} \vspace{-4mm} \caption{A baseline operational semantics for $\KVProgs$ programs. Above, $\transC{\hist,\vec{\mathsf{B}}}$ denotes the set of transaction logs in $\hist$ that excludes those corresponding to live transactions, i.e., transaction logs $\tr''\in T$ such that $\tr''$ is the last transaction log in some $\so(j')$ and $\vec{\mathsf{B}}(j')\neq\epsilon$.} \label{fig:op:sem:baseline} \vspace{-4mm} \end{figure} We define the correctness of $\Rightarrow_I$ in relation to a \emph{baseline} semantics where transactions can interleave arbitrarily, and the values returned by $\rdo$ operations are only constrained to come from committed transactions.
This semantics is represented by a transition relation $\Rightarrow$, which is defined by a set of rules analogous to those of $\Rightarrow_I$. Since it allows transactions to interleave, a configuration contains a history $\hist$, the sessions/transactions $\mathsf{P}$ that remain to be executed, and: \begin{itemize} \item a valuation map $\vec{\gamma}$ that records local variable values in the current transaction of each session ($\vec{\gamma}$ associates identifiers of sessions that have live transactions with valuations of local variables), \item a map $\vec{\mathsf{B}}$ that stores the code of each live transaction (associating session identifiers with code). \end{itemize} Figure~\ref{fig:op:sem:baseline} lists some rules defining $\Rightarrow$ (the others can be defined in a similar manner). \textsc{spawn*} starts a new transaction in a session $j$ provided that this session has no live transaction ($\vec{\mathsf{B}}(j) = \epsilon$). Compared to \textsc{spawn} in Figure~\ref{fig:op:sem}, this rule allows unfinished transactions in other sessions. \textsc{read-extern*} does not check conformance to $I$, but it allows a read to only return a value written in a completed (committed) transaction. In this work, we consider only isolation levels satisfying this constraint. Executions, initial and final configurations are defined as in the case of $\Rightarrow_I$. The history of an execution is still defined as the history in the last configuration. Let $\histOf[*]{\prog}$ denote the set of all histories of an execution of $\prog$ w.r.t. $\Rightarrow$ that ends in a final configuration. Practical isolation levels satisfy a ``prefix-closure'' property saying that if the axioms of $I$ are satisfied by a pair $\tup{\hist_2,\co_2}$, then they are also satisfied by every \emph{prefix} of $\tup{\hist_2,\co_2}$.
A prefix of $\tup{\hist_2,\co_2}$ contains a prefix of the sequence of transactions in $\hist_2$ when ordered according to $\co_2$, and the last transaction log in this prefix is possibly incomplete. In general, this prefix-closure property holds for isolation levels $I$ that are defined by axioms as in (\ref{eq:axiom}), provided that the property $\phi(\tr_2,\alpha)$ is \emph{monotonic}, i.e., the set of models in the context of a pair $\tup{\hist_2,\co_2}$ is a \emph{superset} of the set of models in the context of a prefix $ \tup{\hist_1,\co_1}$ of $\tup{\hist_2,\co_2}$. For instance, the property $\phi$ in the axiom defining $\mathsf{Causal}$ is $(\tr_2,\alpha)\in (\wro \cup \so)^+$, which is clearly monotonic. Standard isolation levels are generally defined using a property $\phi$ of the form $(\tr_2,\alpha)\in R$ where $R$ is an expression built from the relations $\po$, $\so$, $\wro$, and $\co$ using (reflexive and) transitive closure and composition of relations~\cite{DBLP:journals/pacmpl/BiswasE19}. Such properties are monotonic in general (they would not be if those expressions used the negation/complement of a relation). An axiom as in (\ref{eq:axiom}) is called \emph{monotonic} when the property $\phi$ is monotonic. The following theorem shows that $\histOf[I]{\prog}$ is precisely the set of histories under the baseline semantics that satisfy $I$ (the validity of the reads is checked at the end of an execution), provided that the axioms of $I$ are monotonic. \begin{theorem} For any isolation level $I$ defined by a set of monotonic axioms, $ \histOf[I]{\prog} = \{ h \in \histOf[*]{\prog}: h\mbox{ satisfies }I\}. $ \end{theorem} The $\subseteq$ direction follows mostly from the fact that $\Rightarrow_I$ is more constrained than $\Rightarrow$.
For the opposite direction, given a history $\hist$ that satisfies $I$, i.e., there exists a commit order $\co$ such that $\tup{\hist,\co}$ satisfies the axioms of $I$, we can show that there exists an execution under $\Rightarrow_I$ with history $\hist$, where transactions execute serially in the order defined by $\co$. The prefix-closure property is used to prove that \textsc{read-extern} transitions are enabled (these transitions get executed with a prefix of $\hist$). See the supplementary material for more details. It can also be shown that $\Rightarrow_I$ is \emph{deadlock-free} for every natural isolation level (e.g., Read Committed, Causal Consistency, Snapshot Isolation, and Serializability), i.e., every read can return some value satisfying the axioms of $I$ at the time when it is executed (independently of previous choices). \section{Programming Language} \begin{figure} \small \begin{align*} \key\in \Keys\quad \xvar\in\Vars\quad \tab\in\Tables\quad \vec{c},\vec{c_1},\vec{c_2}\in \Columns^* \end{align*} \begin{align*} \mathsf{Prog} & \eqdef \mathsf{Sess} \ \mid\ \mathsf{Sess}\,||\,\mathsf{Prog} \\ \mathsf{Sess} & \eqdef \mathsf{Trans} \ \mid\ \mathsf{Trans}; \mathsf{Sess} \\ \mathsf{Trans} & \eqdef \ibegin; \mathsf{Body}; \icommit \\ \mathsf{Body} & \eqdef \mathsf{Instr} \ \mid\ \mathsf{Instr}; \mathsf{Body} \\ \mathsf{Instr} & \eqdef \mathsf{InstrKV} \ \mid\ \mathsf{InstrSQL}\ \mid\ \xvar := e \mid\ \iif{\phi(\vec{x})}{\mathsf{Instr}} \\ \mathsf{InstrKV} & \eqdef \xvar := \iread(\key) \ \mid\ \iwrite(\key,\xvar) \\ \mathsf{InstrSQL} & \eqdef \iselect{\vec{c_1}}{\xvar}{\tab}{\phi(\vec{c_2})} \ \mid\ \\ & \hspace{5mm} \iinsert{\tab}{\vec{x}} \ \mid\ \\ & \hspace{5mm} \idelete{\tab}{\phi(\vec{c})} \ \mid\ \\ & \hspace{5mm} \iupdate{\tab}{\vec{c_1}=\vec{x}}{\phi(\vec{c_2})} \end{align*} \vspace{-6mm} \caption{Program syntax.
The set of all keys is denoted by $\Keys$, $\Vars$ denotes the set of local variables, $\Tables$ the set of table names, and $\Columns$ the set of column names. We use $\phi$ to denote Boolean expressions, and $e$ to denote expressions interpreted as values. We use $\vec{\cdot}$ to denote vectors of elements.} \label{fig:syntax} \vspace{-4mm} \end{figure} Figure~\ref{fig:syntax} lists the definition of two simple programming languages that we use to represent applications running on top of Key-Value or SQL stores, respectively. A program is a set of \emph{sessions} running in parallel, each session being composed of a sequence of \emph{transactions}. Each transaction is delimited by $\ibegin$ and $\icommit$ instructions\footnote{For simplicity, we assume that all the transactions in the program commit. Aborted transactions can be ignored when reasoning about safety because their effects should be invisible to other transactions.}, and its body contains instructions that access the store and manipulate a set of local variables $\Vars$, ranged over using $\xvar$, $\yvar$, $\ldots$. In the case of a program running on top of a Key-Value store, the instructions can be reading the value of a key and storing it in a local variable $\xvar$ ($\xvar := \iread(\key)$), writing the value of a local variable $\xvar$ to a key ($\iwrite(\key,\xvar)$), or an assignment to a local variable $\xvar$. The set of values of keys or local variables is denoted by $\Vals$. Assignments to local variables use expressions interpreted as values, whose syntax is left unspecified. Each of these instructions can be guarded by a Boolean condition $\phi(\vec{x})$ over a set of local variables $\vec{x}$ (their syntax is not important). Other constructs like $\mathtt{while}$ loops can be defined in a similar way. Let $\KVProgs$ denote the set of programs where a transaction body can contain only such instructions.
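As a concrete illustration (an encoding of ours, not part of the formalism), the two-session program of Figure~\ref{fig:opsEx}a can be represented as a map from session identifiers to sequences of transactions, each transaction being a list of instructions, mirroring the partial function $\mathsf{P}:\mathsf{SessId}\rightharpoonup \mathsf{Sess}$:

```python
# Illustrative encoding of a KVProgs program: session id -> list of
# transactions; each transaction is a list of instructions. The session
# and key names ("s1", "k1", ...) are ours, chosen for readability.
program = {
    "s1": [  # begin; write(k1,1); x2 := read(k2); commit
        [("write", "k1", 1), ("read", "k2", "x2")],
    ],
    "s2": [  # begin; write(k2,1); x1 := read(k1); commit
        [("write", "k2", 1), ("read", "k1", "x1")],
    ],
}

# Sessions run in parallel; within a session, transactions run in sequence.
assert set(program) == {"s1", "s2"}
assert len(program["s1"]) == 1  # one transaction in session s1
```

Under $\mathsf{Serializability}$, at least one of the two reads must return $1$; the weak-isolation executions discussed earlier are exactly those where both return the initial value $0$.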
For programs running on top of SQL stores, the instructions include simplified versions of standard SQL instructions and assignments to local variables. These programs run in the context of a \emph{database schema} which is a (partial) function $\DBschema:\Tables\rightharpoonup 2^\Columns$ mapping table names in $\Tables$ to sets of column names in $\Columns$. The SQL store is an \emph{instance} of a database schema $\DBschema$, i.e., a function $\DBinst: \mathsf{dom}(\DBschema)\rightarrow 2^{\Rows}$ mapping each table $\tab$ in the domain of $\DBschema$ to a set of \emph{rows} of $\tab$, i.e., functions $r:\DBschema(\tab)\rightarrow\Vals$. We use $\Rows$ to denote the set of all rows. The $\mathtt{SELECT}$ instruction retrieves the columns $\vec{c_1}$ from the set of rows of $\tab$ that satisfy $\phi(\vec{c_2})$ ($\vec{c_2}$ denotes the set of columns used in this Boolean expression), and stores them into a variable $\xvar$. $\mathtt{INSERT}$ adds a new row to $\tab$ with values $\vec{x}$, and $\mathtt{DELETE}$ deletes all rows from $\tab$ that satisfy a condition $\phi(\vec{c})$. The $\mathtt{UPDATE}$ instruction assigns the columns $\vec{c_1}$ of all rows of $\tab$ that satisfy $\phi(\vec{c_2})$ with values in $\vec{x}$. Let $\SQLProgs$ denote the set of programs where a transaction body can contain only such instructions. \section{Related Work} \label{sec:related} There have been several directions of work addressing the correctness of database-backed applications. We directly build upon one line of work concerned with the logical formalization of isolation levels \cite{ansi,DBLP:conf/icde/AdyaLO00,DBLP:conf/sigmod/BerensonBGMOO95,DBLP:conf/concur/Cerone0G15,DBLP:journals/pacmpl/BiswasE19}. Our work relies on the axiomatic definitions of isolation levels, as given in~\cite{DBLP:journals/pacmpl/BiswasE19}, which also investigated the problem of checking whether a given history satisfies a certain isolation level. 
Our kv-store implementation relies on these algorithms to check the validity of the values returned by read operations. Working with a logical formalization allowed us to avoid implementing an actual database with replication or sophisticated synchronization. Another line of work concentrates on the problem of finding ``anomalies'': behaviors that are not possible under serializability. This is typically done via a static analysis of the application code that builds a static dependency graph that over-approximates the data dependencies in all possible executions of the application~\cite{DBLP:journals/jacm/CeroneG18,DBLP:conf/concur/0002G16,DBLP:journals/tods/FeketeLOOS05,DBLP:conf/vldb/JorwekarFRS07,acidrain,isodiff}. Anomalies with respect to a given isolation level then correspond to a particular class of cycles in this graph. Static dependency graphs turn out to be highly imprecise in representing feasible executions, leading to false positives. Another source of false positives is that an anomaly might not be a bug, because the application may already be designed to handle the non-serializable behavior \cite{DBLP:conf/pldi/BrutschyD0V18,isodiff}. Recent work has tried to address these issues by using more precise logical encodings of the application, e.g.,~\cite{DBLP:conf/popl/BrutschyD0V17,DBLP:conf/pldi/BrutschyD0V18}, or by using user-guided heuristics~\cite{isodiff}. Another approach consists of modeling the application logic and the isolation level in first-order logic and relying on SMT solvers to search for anomalies~\cite{DBLP:journals/pacmpl/KakiESJ18,DBLP:conf/concur/NagarJ18,burcu-netys}, or defining specialized reductions to assertion checking~\cite{DBLP:conf/concur/BeillahiBE19,DBLP:conf/cav/BeillahiBE19}.
The \textsc{Clotho} tool \cite{clotho}, for instance, uses a static analysis of the application to generate test cases with plausible anomalies, which are deployed in a concrete testing environment for generating actual executions. Our approach, based on testing with \mbox{MonkeyDB}{}, has several practical advantages. There is no need to analyze application code; we can work with any application. There are no false positives, because we directly run the application and check for user-defined assertions instead of looking for application-agnostic anomalies. The limitation, of course, is the inherent incompleteness of testing. Several works have looked at the problem of reasoning about the correctness of applications executing under weak isolation and introducing additional synchronization when necessary~\cite{DBLP:conf/eurosys/BalegasDFRPNS15,DBLP:conf/popl/GotsmanYFNS16,DBLP:conf/esop/NairP020,DBLP:conf/usenix/0001LCPRV14}. As in the previous case, our work based on testing has the advantage that it can scale to realistically sized applications (as opposed to these techniques, which are based on static analysis or logical proof arguments), but it cannot prove that an application is correct. Moreover, the issue of repairing applications is orthogonal to our work. From a technical perspective, our operational semantics based on recording past operations and certain data-flow and control-flow dependencies is similar to recent work on stateless model checking in the context of weak memory models, e.g.,~\cite{DBLP:journals/pacmpl/Kokologiannakis18,DBLP:conf/tacas/AbdullaAAJLS15}. This work, however, does not consider transactions. Furthermore, their focus is on avoiding the enumeration of equivalent executions, which is beyond the scope of our work (but an interesting direction for future work).
Yes, you CAN have it all! Experience the magic of the French countryside in this charming, roomy, lofty apartment in a 250 year old country, stone farmhouse. Located just 15 minutes away from Amboise, one of the most visited cities in France, it's the perfect starting point for an enchanting discovery tour of France's glorious historical past. The Loire flows majestically through this historical castle-studded valley, recently listed as one of the UNESCO World Heritage Sites. Fields and forests are the views from your cozy nest. The farmhouse loft has just been renovated, with a new kitchen, and a new inviting cool blue swimming pool. With leafy gardens around, the pool is perfect for cooling off if you are here in the summer. Deck chairs are provided. A separate entrance with a sturdy staircase takes you up to the spacious 2 bedroom loft (900 square feet), where you will marvel at the beautifully exposed, original, rough-hewn beams and stone walls. The main bedroom has a California King sized bed, and includes a sitting area, and 2 windows looking over the Valley. The large, dormer window, glassed entrance, and round oeil-de-boeuf window in the second bedroom, flood the room with soft light and provide a pleasant view of the original 16th century well, and farmhouse grounds. This bedroom has a double bed, and is separated from the salon and kitchen area by French doors. The spacious living area includes 2 sofas (one becomes a double bed), and flat screen TV, with international channels included. The new and modern bathroom with its comfortable, large oval soaking tub/shower stands in harmony with the ancient beams and angled walls. We enjoyed searching high and low for the antique furnishings. The functional kitchen is equipped with cooktop, oven, microwave, drip coffer maker, refrigerator, dishwasher, and all you'll need to create meals from the fresh meats and produce found in the local, open markets. 
There is a full sized washer/dryer combo (washing machine that also dries). ​​​​​​​There is a dining table which seats 4. This lofty apartment is equipped with WiFi and Free Long Distance phone calls. If you are here in the spring, summer or fall, you, too, can explore the local vide greniers (community garage sales) or the many antique stores in the area. Treasures can be found! Explore Amboise, only 15 minutes away, with it's fortressed walls, ancient clock-tower, the Royal Chateau d'Amboise, and the special jewel, Clos Luce, the magnificent home where Leonardo da Vinci spent the last years of his life, now a museum and tribute to his genius. Just 30 minutes away by car or a delightful bike ride is Chenonceau, the darling of the Loire Valley Chateaux. Nearby, Chambord with its many turrets or Vilandry, with its amazingly beautiful ornamental gardens of colorful vegetables should be next on your agenda as they shouldn't be missed. Wine enthusiasts will also be delighted by and abundance of wine tasting cellars in the area-Vouvray, Chinon, St Nicolas de Bourgeuil, Sancerre, etc. Local wine estates welcome visitors. If you're the outdoor type, one of the longest and most picturesque bicycle trails in Europe as well as section of the Grande Randonnee (France's answer to the John Muir Trail) pass through Amboise, for your cycling and hiking enjoyment. Bicycles can be rented from several locations in Amboise or if you bring your own, we can shelter them in our barn. You can visit the twice-weekly open-air market in Amboise, along the Loire, and pick up something delicious to eat. 
If you had your big meal in the afternoon, as the French do, you can enjoy a simple light meal at home, with a crusty baguette from our village bakery, that you've picked up fresh-baked in the morning, or you can purchase a sampling of Madame Bodet's (our neighbor) chevre (goat) cheeses, which this region in France is known for, or buy sausages and other meats from the local animal reserve, Beaumarchais. Your meal will be complimented with a robust red wine or sparkling, slightly fruity Vouvray from one of the many local wineries. Feel like going out ? Our village is the home of the Restaurant Auberge de la Brenne, renowned for its gastronomical sampling of local specialties. There is also a plethora of restaurants in all price ranges from country casual to city elegant, within 20 minutes of the farmhouse and we'd be happy to recommend some to you. The charming French owners of this historic, stone farmhouse, Muriel & Michel, are glad to offer assistance by pointing you in interesting directions. They hope to make your stay not only one you will long remember, but one you will long to return to, time and time again ! Please ask about renting one of many unique Paris rentals : studio, 1 bedrooms, 2 bedrooms. My husband and I stayed for 4 nights in September. Found it a little hard to find but what a treasure when you do! A beautifully furnished and clean apartment/loft, beautiful surroundings - better than the photos. Veronica and Steven were very kind, more than helpful in answering questions/concerns. Loved the delicious fresh tomatoes from the garden which Veronica kindly let us pick, also raspberries, chard and beetroot. It is a little out of the way, about 15 mins from Amboise, but a pleasant drive. Lots to see and do, 4 nights wasn't long enough. Absolutely recommend it! My wife and I stayed here and it was fantastic. When one thinks of a farmhouse loft in the French countryside, one thinks of a place like this. A beautiful location and so much nearby! 
Plenty of space to go for walks in the day, easy access to Amboise and the many castles of the Loire Valley. The hosts are very gracious and extremely helpful. The loft is absolutely wonderful--spacious and clean. The amenities were perfect; "normal" plumbing (for an American) and enough media access to get connected the few times we needed to. Just a wonderful place and a wonderful experience. I can't imagine that there's anything better. The famous Chateaux of the Loire Valley, many with summer sound and light shows, are all nearby: Amboise, Chenonceau, Chambord, Blois, Chaumont, Azay-le-Rideau, and Villandry, to name a few, plus Leonardo da Vinci's home/museum in Amboise and numerous museums, monuments, and open-air markets in Tours. There are many recreational attractions as well: the aquarium, the mini-chateaux, Fantasy Forest (a children's amusement park), a wildlife park, horseback riding, and swimming within fifteen minutes of the house. Dining out becomes a fine art with many nearby restaurants in all price ranges, from the deliciously simple to the gastronomical splurge. You are in the heart of Vouvray wine country, with all the different varieties of Loire wine close at hand: Chinon, Bourgueil, Sancerre, etc. For the shoppers, there are many gift and specialty shops, as well as antique shops in town. Check the activity calendar for special town fairs with their parades, medieval jousts, and the wonderful vide-greniers (village-wide flea markets) during spring, summer and fall.
The 2002 Lugano Challenger was a tennis tournament that was part of the ATP Challenger Series category within the 2002 ATP Challenger Series season. The tournament was played on clay courts in Lugano, Switzerland, from 17 to 23 June 2002. Winners Singles Guillermo Coria defeated Giorgio Galimberti in the final, 6-3, 6-0 Doubles Emilio Benfele Álvarez / Giorgio Galimberti defeated Christian Kordasz / Kim Tiilikainen in the final, 4-6, 7-6(5), 6-2 External links
\section{Introduction} Pre-trained language models (PLMs), such as BERT \cite{devlin2018bert} and T5 \cite{2019t5}, have led to great performance boosts across many NLP tasks. Despite their excellent performance on a large number of NLP tasks, however, PLMs often degrade when applied to domain-specific texts, which differ significantly from general text in word usage, syntax, and writing style \cite{gururangan2020don,gu2021domain}. To address this issue, \citet{gururangan2020don} proposed that continuing to pre-train a general PLM on target-domain corpora and task-relevant texts can effectively improve its performance on domain-specific tasks, while \citet{gu2021domain} further showed that pre-training domain-specific PLMs from scratch on a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific PLMs have emerged in several fields, such as BioBERT~\cite{peng-etal-2019-transfer} and PubMedBERT~\cite{gu2021domain} in biomedicine, which have been applied to practical tasks like entity and relation extraction. We collected all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table~\ref{tab:fin-datasets}, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and raise the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial PLMs, such as FinBERT~\cite{FinBERT} and Mengzi-BERT-base-fin~\cite{zhang2021mengzi}. However, these models are all based on the BERT-base architecture, and their parameter count (around 110 million) is no longer sufficient to meet the increasing demand for NLP capabilities in this field. 
Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version. Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with strong abilities to understand and memorize entity knowledge. Although studies have shown that PLMs pre-trained on large-scale corpora already possess some of these abilities, there are still shortcomings. To address this issue, many studies have used knowledge-enhanced pre-training methods to improve PLMs' understanding and memorization of entity knowledge. However, these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pre-training method based on the T5 model's text-to-text paradigm. In addition, another challenge faced by Chinese financial NLP is the lack of corpora. The scale and diversity of corpora play an essential role in language model pre-training \cite{xu2020clue,2019t5,gao2020pile}. However, existing Chinese financial corpora are small in scale, poor in diversity, and not open, as shown in Table \ref{tab:typicalplm-corpus}. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus needs to cover. To this end, we first collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in Table~\ref{tab:fin-datasets}. According to the source distribution of these tasks, we determined the range of text types to collect. 
As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus, with about 300 GB of raw text drawn from four different sources, enhancing its diversity and covering most text sources of Chinese financial NLP tasks. The widespread use of benchmark evaluations is a key driving force behind the great improvement and rapid iteration of PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE \cite{wang2018glue} and SuperGLUE \cite{wang2019superglue}, while the general benchmark evaluation for Chinese PLMs is CLUE \cite{xu2020clue}. Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain. To address this issue and promote research in the financial domain, we propose CFLEB, the \textbf{C}hinese \textbf{F}inancial \textbf{L}anguage Understanding and Generation \textbf{E}valuation \textbf{B}enchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty, and, more importantly, emphasize challenges that arise in real-world scenarios. Our contributions are summarized as follows: \begin{itemize} \item We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training. \item We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus. \item We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain. 
\end{itemize} \section{Related Work} \subsection{Domain-specific PLMs and Corpora} PLMs have achieved state-of-the-art performance in many NLP tasks~\cite{devlin2018bert,2019t5,liu2019roberta}. However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution between general and specific domains~\cite{gururangan2020don, gu2021domain}. To better adapt a language model to a target domain, continued pre-training on the target-domain corpus has been proposed~\cite{gururangan2020don}. For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch results in substantial gains over continual pre-training of general-domain language models~\cite{gu2021domain}. Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora. In the field of financial NLP, domain-specific PLMs have demonstrated their superiority over general-domain PLMs. For instance, ~\citet{araci2019finbert} and ~\citet{yang2020finbert} pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, ~\citet{FinBERT} pre-trained BERT on Chinese financial news, analyst reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, ~\citet{zhang2021mengzi} pre-trained the Chinese PLM Mengzi on a 20 GB financial corpus and demonstrated its effectiveness on multiple downstream tasks. Table~\ref{tab:typicalplm-corpus} summarizes the characteristics of typical PLMs and their corpora in the financial domain. It can be observed that both our model and our corpus exceed existing work in scale. 
\begin{table*}[!htb] \centering \begin{tabular}{p{5cm} l l p{6cm}} \hline \textbf{PLM} & \textbf{Size} & \textbf{Corpus Size} & \textbf{Corpus Sources}\\ \hline FinBERT~\cite{araci2019finbert} & 110M & 29M words & News filtered by financial keywords \\ FinBERT~\cite{yang2020finbert} & 110M & 4.9B tokens & Corporate Reports, Earnings Call Transcripts, Analyst Reports \\ FinBERT~\cite{FinBERT} & 110M & 3B tokens & News, Analyst reports, Company announcements and Encyclopedias \\ Mengzi-BERT-base-fin~\cite{zhang2021mengzi} & 110M & 20 GB text & News, Analyst reports, Company announcements \\ BBT-FinT5 (ours) & 220M, 1B & 80B tokens & Corporate Reports, Analyst Reports, Social media and Financial News \\ \hline \end{tabular} \caption{Typical financial PLMs and their corpora.} \label{tab:typicalplm-corpus} \end{table*} \subsection{Knowledge Enhanced Pre-training} Although PLMs can acquire rich linguistic knowledge from pre-training on large-scale corpora, many studies have shown that PLMs still fall short in understanding and memorizing entity knowledge, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed~\cite{yang2021survey}. Therefore, PLMs can benefit from knowledge-enhanced pre-training methods that strengthen entity knowledge understanding and memorization. For example, Ernie~\cite{sun2019ernie} is designed to learn language representations enhanced by knowledge masking strategies, which include entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn entity knowledge already present in the corpus, without addressing the sparse and long-tailed distribution of entity knowledge in the corpus. Ernie 3.0, introduced by \citet{sun2021ernie}, incorporates the universal knowledge-text prediction (UKTP) task. 
This task involves a pair consisting of a triple from a knowledge graph and its corresponding sentence from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence and determine the semantic relationship between them. The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision introduces a certain amount of noise: the relation in the triple may not actually be expressed in the sentence~\cite{smirnova2018relation}. Therefore, masking only the relation and predicting it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for BERT-like models. To our knowledge, no knowledge-enhanced pre-training method has been designed for T5-like models. \subsection{Domain-specific NLP Benchmarks} Various domain-specific NLP benchmarks have been proposed to compare the ability of different methods to model text from specific domains in a fair manner. The BLUE benchmark~\cite{peng2019transfer} evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark~\cite{gu2021domain} further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, FLUE~\cite{shah2022flue} is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. 
However, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works. \section{The Corpus: BBT-FinCorpus} \label{sec:fincorpus} We build FinCorpus, the largest corpus in the Chinese financial domain to date, in order to obtain a superior pre-trained language model. Section \ref{sec:corpus-scale} describes how we decided on the corpus contents. We collected, refined, and organized the corpus to finally obtain FinCorpus, as elaborated in Section \ref{sec:fincorpus-desc}. \begin{table*}[!htb] \centering \resizebox{2\columnwidth}{!}{ \begin{tabular}{p{7cm} p{4cm} l p{2cm}} \hline \textbf{Dataset} & \textbf{Text Source} & \textbf{Open Source} & \textbf{Practicality}\\ \hline DuEE-fin~\cite{han2022duee} & Financial news, Company announcement & Yes & High \\ FinRE~\cite{li-etal-2019-chinese} & Financial news & Yes & High \\ Announcement information extraction~\cite{ieaalc} & Company announcement & Yes & High \\ Discovery of new entities in Internet finance~\cite{df-internet} & Social media & Unspecified & Low \\ Announcement information extraction~\cite{bd-public} & Company announcement & Unspecified & High \\ Construction of financial knowledge graph~\cite{bd-kg} & Analyst report & Unspecified & Medium \\ Event causality extraction~\cite{bd-cau} & Financial news & Unspecified & Low \\ Financial NL2SQL~\cite{bd-sql} & Data query sentence & Unspecified & Medium \\ Few-shot event extraction~\cite{bd-few} & Financial news & Unspecified & Medium \\ Few-shot event extraction~\cite{bd-trans} & Financial news & Unspecified & Medium \\ FinNL (ours) & Financial news & Yes & High \\ FinNA (ours) & Financial news & Yes & High \\ FinFE (ours) & Social media & Yes & High \\ FinNSP (ours) & Social media & Yes & High \\ \hline \end{tabular} } \caption{Chinese financial datasets we collected, with their open-source status and practicality scores.} \label{tab:fin-datasets} \end{table*} 
\subsection{Coverage Confirmation of the Corpus} \label{sec:corpus-scale} We believe that, since the purpose of domain pre-training is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table \ref{tab:fin-datasets}. It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance~\footnote{\url{https://finance.sina.com.cn/}}, Tencent Finance~\footnote{\url{https://new.qq.com/ch/finance/}}, Phoenix Finance~\footnote{\url{https://finance.ifeng.com/}}, 36Kr~\footnote{\url{https://36kr.com/}} and Huxiu~\footnote{\url{https://www.huxiu.com/}}. For company announcements and research reports, we chose Eastmoney~\footnote{\url{https://www.eastmoney.com/}} for crawling. For social media, we chose the two largest financial social media platforms on the Chinese Internet, Guba~\footnote{\url{https://guba.eastmoney.com/}} and Xueqiu~\footnote{\url{https://xueqiu.com/}}, for crawling. \subsection{Crawling and Filtering of the Corpus} We used a proxy-based distributed crawler to crawl public web pages and filtered them using a series of rules~\cite{2019t5,YUAN202165}. 
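The rule-based filtering step can be sketched as follows. The concrete rules and thresholds below are illustrative assumptions in the spirit of the C4-style heuristics cited above, not the exact rules we applied:

```python
import re

# Matches CJK Unified Ideographs; used to estimate how "Chinese" a page is.
CJK = re.compile(r"[\u4e00-\u9fff]")

def keep_page(text, min_chars=200, min_cjk_ratio=0.3, min_unique_line_ratio=0.5):
    """Decide whether a crawled page survives filtering.

    Illustrative rules: the page must be reasonably long, consist mostly
    of Chinese characters, and not be dominated by repeated boilerplate
    lines (navigation bars, footers, etc.).
    """
    if len(text) < min_chars:
        return False
    if len(CJK.findall(text)) / len(text) < min_cjk_ratio:
        return False
    lines = [line for line in text.splitlines() if line.strip()]
    if lines and len(set(lines)) / len(lines) < min_unique_line_ratio:
        return False
    return True
```

In practice such predicates are chained with deduplication across pages; the thresholds here would be tuned per source.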
\subsection{Description of the Corpus} \label{sec:fincorpus-desc} After crawling, cleaning, and processing, we obtained FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language materials: \begin{itemize} \item \textbf{Corporate announcements.} \quad These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB. \item \textbf{Research reports.} \quad These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries, and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB. \item \textbf{Financial news.} \quad These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB. \item \textbf{Social media.} \quad These are the posts published by investors and bloggers on Guba and Xueqiu over the past twenty years. After cleaning, the total size of the resulting text is about 120GB. \end{itemize} The corpus from the above four sources covers essentially all types of texts common in Chinese financial NLP. 
\begin{figure*}[!htb] \centering \includegraphics[width=2\columnwidth]{emnlp template/KETM-EN.pdf} \caption{Knowledge-enhanced pre-training method based on triple masking (KETM).} \label{fig:ketm} \end{figure*} \section{The Large PLM: BBT-FinT5} To enhance the performance of the Chinese financial NLP baseline and foster the growth of the open-source community in this domain, we introduce the FinT5 model. This model's architecture and pre-training tasks are consistent with the T5 \cite{2019t5} model, and it is pre-trained on BBT-FinCorpus (refer to Section~\ref{sec:fincorpus}). We chose this architecture for its robust performance on many general benchmarks and its compatibility with both understanding and generation tasks through the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that the FinT5 model significantly outperforms T5 trained on a general corpus. In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge-enhanced pre-training method that we propose for the T5 model, which is based on triple masking. \subsection{Pre-training Model Architecture and Task} ~\citet{2019t5} model all NLP tasks in a text-to-text format, which enables the use of a unified network architecture, training approach, and loss function for all NLP tasks, promoting transfer learning in the NLP field. Building upon this, they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained with MLM. Specifically, T5 utilizes the span-masking method proposed by SpanBERT~\cite{joshi2020spanbert}, randomly masking contiguous spans covering 15\% of the tokens in a sentence rather than masking independent tokens. 
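A minimal sketch of this span-corruption objective follows. The sentinel naming mirrors T5-style sentinel tokens, but the simplified span-sampling scheme below is illustrative rather than T5's exact procedure:

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, max_span_len=3, seed=0):
    """Corrupt ~corruption_rate of tokens as contiguous spans (T5-style).

    Each run of masked tokens is replaced by a single sentinel in the
    input; the target lists each sentinel followed by the tokens it hides.
    """
    rng = random.Random(seed)
    n = len(tokens)
    num_to_mask = max(1, round(n * corruption_rate))
    masked = set()
    while len(masked) < num_to_mask:
        start = rng.randrange(n)  # pick a span start, mask a short run
        for i in range(start, min(n, start + max_span_len)):
            if len(masked) >= num_to_mask:
                break
            masked.add(i)
    inputs, targets, sentinel, prev_masked = [], [], 0, False
    for i, tok in enumerate(tokens):
        if i in masked:
            if not prev_masked:  # open a new span with a fresh sentinel
                name = f"<extra_id_{sentinel}>"
                inputs.append(name)
                targets.append(name)
                sentinel += 1
            targets.append(tok)
            prev_masked = True
        else:
            inputs.append(tok)
            prev_masked = False
    return inputs, targets
```

The model is trained to emit the target sequence (sentinels plus hidden spans) given the corrupted input.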
\subsection{Pre-training Acceleration} We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed~\cite{rasley2020deepspeed} to accelerate the pre-training process. In particular, we found that using the BFLOAT16~\cite{kalamkar2019study} half-precision floating-point format for optimization effectively solves the gradient overflow problem that occurs when training with the FP16 half-precision format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. ~\citet{kalamkar2019study} pointed out that in training deep neural networks, the value range (i.e., exponent range) of the floating-point numbers representing each parameter matters more for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as FP32, covering the same exponent range, at the cost of having three fewer mantissa bits than FP16. Extensive experiments have shown that this trade-off makes the BFLOAT16 format as fast and memory-efficient as FP16 while achieving training stability and performance close to FP32. \subsection{Knowledge-Enhanced Pre-training Method Based on Triple Masking} We propose a knowledge-enhanced pre-training method based on triple masking (KETM). First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple. Next, for a sentence and its contained triple, we concatenate the triple at the beginning of the sentence. 
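Concretely, the KETM pair construction (distant supervision, concatenation, and masking one triple element, as detailed next) can be sketched as follows. The separator strings and sentinel format are illustrative assumptions, and the additional span masking of the sentence part is omitted for brevity:

```python
import random

def ketm_example(triple, sentences, seed=0):
    """Build one KETM-style training pair.

    Distant supervision: take the first sentence mentioning both the head
    and the tail entity, prepend the (head, relation, tail) triple, and
    mask one randomly chosen triple element with a sentinel token.
    """
    head, rel, tail = triple
    sentence = next((s for s in sentences if head in s and tail in s), None)
    if sentence is None:
        return None  # no distantly supervised sentence for this triple
    parts = [head, rel, tail]
    idx = random.Random(seed).randrange(3)  # mask head, relation, or tail
    target = f"<extra_id_0> {parts[idx]}"
    parts[idx] = "<extra_id_0>"
    source = " | ".join(parts) + " : " + sentence
    return source, target
```

Because any of the three elements may be masked, the model must recover entities as well as relations, unlike relation-only masking.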
For the triple part, we randomly mask one element, and for the sentence part, we randomly mask spans covering 15\% of the tokens. Finally, we input the masked triple and sentence into the model and require the model to predict the masked elements, as shown in Figure~\ref{fig:ketm}. The model is trained to fill in the masked element of the triple based on the two unmasked elements and the partially masked sentence, which helps it better understand and memorize entity-related knowledge. \section{The Benchmark: BBT-CFLEB} In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks. \subsection{Task Selection} We propose that domain-specific NLP evaluation benchmarks should pay special attention to practicality, especially in a commercially important field such as finance, to better reflect models' abilities in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to rate the practicality of each task as low, medium, or high, and only selected tasks with a high practicality rating as candidates. In addition, we only kept tasks with a clear open-source statement. Finally, we selected the six tasks listed in Table~\ref{tab:fin-datasets} for BBT-CFLEB. 
\subsection{Task Introduction} \begin{table*}[!htb] \centering \begin{tabular}{l p{6cm} c c} \hline \textbf{Task Name} & \textbf{Introduction} & \textbf{Data (train/dev/test)} & \textbf{Evaluation} \\ \hline FinNL & Multi-label classification of financial news & 8000/1000/1000 & F1-score \\ FinNA & Generation of summaries for financial news & 24000/3000/3000 & Rouge \\ FinRE & Entity relation classification for financial news & 7454/1489/3727 & F1-score \\ FinFE & Sentiment classification of financial social media text & 8000/1000/1000 & Accuracy \\ FinQA & Question-answering for financial news/events & 16000/2000/2000 & F1-score \\ FinNSP & Detection of negative messages and entities in financial news & 4800/600/600 & F1-score \\ \hline \end{tabular} \caption{Summary of CFLEB tasks.} \label{tab:CFLEB-tasks} \end{table*} CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows: \begin{itemize} \item FinNL, a financial news classification dataset. Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. \item FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge~\cite{lin2004rouge}. The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles. \item FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. 
The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles. \item FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text as negative, neutral, or positive, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. \item FinQA, a financial news and announcement event question-answering dataset, derived from the DuEE-fin~\cite{han2022duee} dataset. Given financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles. \item FinNSP, a dataset for detecting negative financial news and its subject. Given financial news or social media text and entities mentioned in the text, the model needs to determine whether the text contains negative news about any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles. \end{itemize} \subsection{Leaderboard Introduction} We have organized the tasks into multiple leaderboards according to different ability requirements~\cite{xu2020clue}, so that researchers can compare models' abilities from different perspectives. The leaderboards of CFLEB are as follows: \begin{itemize} \item Overall leaderboard: includes all six tasks. \item Understanding ability leaderboard: includes the four language understanding tasks, FinNL, FinRE, FinFE, and FinNSP. 
\item Generation ability leaderboard: includes the two language generation tasks, FinNA and FinQA. \end{itemize} \section{Experiments} In this section, we first introduce the basic experimental settings, including the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. We then conduct extensive experiments and comparative analyses to validate the effectiveness of the proposed model and method. \begin{table*}[!htb] \centering \resizebox{2.1\columnwidth}{!}{ \begin{tabular}{l c c c c|c|c c|c|c} \hline \textbf{PLMs} & \textbf{FinFE} & \textbf{FinNL} & \textbf{FinNSP} & \textbf{FinRE} & \textbf{Un.Avg.} & \textbf{FinNA} & \textbf{FinQA} & \textbf{Ge.Avg.} & \textbf{Avg.} \\ \hline GPT2-base & 79.05 & 84.09 & 91.30 & 36.37 & 72.70 & 44.19 & 75.22 & 59.71 & 68.37 \\ T5-base & 79.40 & 87.48 & \textbf{95.43} & 54.93 & 79.56 & 48.54 & 83.58 & 66.06 & 74.89 \\ FinBERT-base & 79.45 & 84.69 & 69.01 & 55.33 & 72.37 & - & - & - & -\\ Mengzi-BERT-base-fin & 79.50 & 85.88 & 71.72 & 58.25 & 73.59 & - & - & - & -\\ BBT-FinT5-base & 80.19 & 87.55 & 94.50 & 60.62 & 80.21 & 50.06 & 84.82 & 67.44 & 76.29 \\ BBT-FinT5-base-KE & 79.43 & 87.77 & 95.05 & 61.79 & 80.26 & 51.36 & 85.66 & 68.51 & 76.84 \\ BBT-FinT5-large & \textbf{80.24} & \textbf{88.44} & 94.54 & \textbf{61.88} & \textbf{81.78} & \textbf{51.42} & \textbf{85.95} & \textbf{68.69} & \textbf{77.07} \\ \hline \end{tabular} } \caption{Results of BBT-CFLEB from different PLMs.} \label{table:fint5} \end{table*} \subsection{Experiments Setup} \subsubsection{Pre-trained Language Models} The models participating in the comparative experiments of this section include: \begin{itemize} \item \textbf{GPT2-base}~\cite{zhao2019uer}. \quad A Chinese GPT2 released by~\citet{zhao2019uer}. Pre-trained on the general corpus CLUECorpusSmall~\cite{xu2020clue}. \item \textbf{T5-base}~\cite{zhao2019uer}. \quad A Chinese T5 released by~\citet{zhao2019uer}. 
Pre-trained on the general corpus CLUECorpusSmall~\cite{xu2020clue}. \item \textbf{FinBERT}~\cite{FinBERT}. \quad A Chinese BERT for the financial domain released by~\citet{FinBERT}. \item \textbf{Mengzi-BERT-base-fin}~\cite{zhang2021mengzi}. \quad A Chinese BERT for the financial domain released by~\citet{zhang2021mengzi}. \item \textbf{FinT5-base}. \quad Our Chinese pre-trained language model for the financial domain, pre-trained on our financial corpus, FinCorpus. Its model architecture, parameter size, and pre-training hyperparameters are the same as T5-v1.1-base. \item \textbf{FinT5-base-KE}. \quad The knowledge-enhanced version of FinT5-base, enhanced with the KETM method using the CN-DBPedia~\cite{xu2017cn} knowledge graph. \item \textbf{FinT5-large}. \quad Our proposed Chinese pre-trained language model for the financial domain, with a total of about 1 billion model parameters; the pre-training hyperparameters are the same as T5-base. \end{itemize} \subsubsection{Fine-tuning} For generative models (GPT, T5), we evaluated all six datasets by modeling every task as text-to-text. For BERT-based models, we evaluated the four language understanding tasks, FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for each task. \subsection{Experiment 1: Comparison of Pre-trained Model Architectures} For the two general-domain models, GPT2-base and T5-base, the pre-training corpora, hyperparameters, and training volume are all the same, but their average scores differ significantly, with T5-base clearly outperforming GPT2-base, as shown in Table~\ref{table:fint5}. This difference is mainly due to the differences in the architectures, parameter sizes, and pre-training methods of the T5 and GPT models. This result supports our choice of the T5 architecture. 
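As a concrete illustration of the text-to-text casting used in the fine-tuning setup above, one can map each example to a (source, target) string pair. The prompt templates below are hypothetical placeholders, since the exact templates are not specified here:

```python
# Hypothetical prompt prefixes for a few CFLEB tasks; illustrative only,
# not the templates actually used in our experiments.
PROMPTS = {
    "FinFE": "sentiment classification: ",
    "FinNL": "news classification: ",
    "FinNA": "summarize: ",
    "FinQA": "answer the question: ",
}

def to_text2text(task, text, label=None):
    """Cast one example to a (source, target) text pair.

    At training time the target is the label (or reference summary/answer)
    rendered as a string; at inference time it is None and is generated.
    """
    source = PROMPTS[task] + text
    target = label
    return source, target
```

Casting every task this way lets a single encoder-decoder model with one loss function handle classification, extraction, and generation alike.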
\subsection{Experiment 2: Effectiveness of Domain Pre-training} As shown in Table~\ref{table:fint5}, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the effectiveness of both domain pre-training and FinCorpus. \subsection{Experiment 3: Superiority over Existing Models in the Domain} As shown in Table~\ref{table:fint5}, on the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain. \subsection{Experiment 4: Effectiveness of KETM} As shown in Table~\ref{table:fint5}, comparing FinT5-base-KE with FinT5-base shows that the knowledge-enhanced pre-training method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising performance on the other tasks, proving the effectiveness of the KETM method. \subsection{Experiment 5: Effectiveness of Parameter Scaling Up} As shown in Table~\ref{table:fint5}, the performance comparison between the FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of scaling up the parameter count. \section{Conclusion} In this article, we introduced three new contributions to Chinese financial NLP. We created the largest open-source corpus for this domain, called FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. 
\section{Introduction} Pre-trained language models (PLMs), such as BERT \cite{devlin2018bert} and T5 \cite{2019t5}, have led to great performance boosts across many NLP tasks. Despite their excellent performance on a large number of NLP tasks, PLMs often suffer when applied to domain-specific texts, which differ significantly from general text in word usage, syntax, and writing style \cite{gururangan2020don,gu2021domain}. To address this issue, \citet{gururangan2020don} proposed that continuing to pre-train a general PLM on target-domain corpora and task-relevant texts can effectively improve its performance on domain-specific tasks, while \citet{gu2021domain} further showed that pre-training a domain-specific PLM from scratch on a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific PLMs have emerged in several fields, such as BioBERT~\cite{peng-etal-2019-transfer} and PubMedBERT~\cite{gu2021domain} in biomedicine, which have been applied to practical tasks like entity and relation extraction. We collected all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table~\ref{tab:fin-datasets}, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis.
To meet these demands and improve the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT~\cite{FinBERT} and Mengzi-BERT-base-fin~\cite{zhang2021mengzi}. However, these models are all based on the BERT-base model, offering only a single architecture type and a parameter count (around 110 million) that is no longer sufficient to meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the advanced T5 architecture, with 220 million parameters for the base version and 1 billion for the large version. Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with strong capabilities for understanding and memorizing entity knowledge. Although studies have shown that PLMs pre-trained on large-scale corpora already possess some of these capabilities, shortcomings remain. To address this issue, many studies have used knowledge-enhanced pre-training methods to improve PLMs' understanding and memorization of entity knowledge. However, these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pre-training method based on the T5 model's text-to-text paradigm. In addition, another challenge faced by Chinese financial NLP is the lack of corpora. The scale and diversity of corpora play an essential role in language model pre-training \cite{xu2020clue,2019t5,gao2020pile}. However, existing Chinese financial corpora are small in scale, poor in diversity, and not openly available, as shown in Table \ref{tab:typicalplm-corpus}. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus needs to cover.
To this end, we first collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in Table~\ref{tab:fin-datasets}. Based on the source distribution of these tasks, we determined the range of text types we need to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus with about 300 GB of raw text, which draws on four different sources to enhance its diversity, covering most text sources of Chinese financial NLP tasks. The widespread use of benchmark evaluations is a key driving force behind the rapid improvement and iteration of PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE \cite{wang2018glue} and SuperGLUE \cite{wang2019superglue}, while the general benchmark evaluation for Chinese PLMs is CLUE \cite{xu2020clue}. Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain. To address this issue and promote research in the financial domain, we propose CFLEB, the \textbf{C}hinese \textbf{F}inancial \textbf{L}anguage Understanding and Generation \textbf{E}valuation \textbf{B}enchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty, and, more importantly, emphasize challenges that arise in real-world scenarios.
Our contributions are summarized as follows: \begin{itemize} \item We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training. \item We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus. \item We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain. \end{itemize} \section{Related Work} \subsection{Domain-specific PLMs and Corpora} PLMs have achieved state-of-the-art performance in many NLP tasks~\cite{devlin2018bert,2019t5,liu2019roberta}. However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution between general and specific domains~\cite{gururangan2020don, gu2021domain}. To better adapt a language model to a target domain, pre-training on a corpus from the target domain has been proposed~\cite{gururangan2020don}. For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch yields substantial gains over continual pre-training of general-domain language models~\cite{gu2021domain}. Consequently, many domain-specific PLMs have been proposed, each pre-trained on its respective corpus. In financial NLP, domain-specific PLMs have demonstrated their superiority over general-domain PLMs. For instance, ~\citet{araci2019finbert} and ~\citet{yang2020finbert} pre-trained BERT on English finance news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, ~\citet{FinBERT} pre-trained BERT on Chinese financial news, analyst reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks.
Furthermore, ~\citet{zhang2021mengzi} pre-trained the Chinese PLM Mengzi on a 20GB financial corpus and demonstrated its effectiveness on multiple downstream tasks. Table~\ref{tab:typicalplm-corpus} summarizes the characteristics of typical PLMs and their corpora in the financial domain. It can be observed that both our model and our corpus exceed existing work in scale. \begin{table*}[!htb] \centering \begin{tabular}{p{5cm} l l p{6cm}} \hline \textbf{PLM} & \textbf{Size} & \textbf{Corpus Size} & \textbf{Corpus Sources}\\ \hline FinBERT~\cite{araci2019finbert} & 110M & 29M words & News filtered by financial keywords \\ FinBERT~\cite{yang2020finbert} & 110M & 4.9B tokens & Corporate Reports, Earnings Call Transcripts, Analyst Reports \\ FinBERT~\cite{FinBERT} & 110M & 3B tokens & News, Analyst reports, Company announcements and Encyclopedias \\ Mengzi-BERT-base-fin~\cite{zhang2021mengzi} & 110M & 20GB file & News, Analyst reports, Company announcements \\ BBT-FinT5 (ours) & 220M, 1B & 80B tokens & Corporate Reports, Analyst Reports, Social media and Financial News \\ \hline \end{tabular} \caption{Typical financial PLMs and their corpora.} \label{tab:typicalplm-corpus} \end{table*} \subsection{Knowledge Enhanced Pre-training} Although PLMs can acquire rich linguistic knowledge from pre-training on large-scale corpora, many studies have shown that PLMs still have shortcomings in understanding and memorizing entity knowledge, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed~\cite{yang2021survey}. Therefore, PLMs can benefit from knowledge-enhanced pre-training methods that strengthen entity knowledge understanding and memorization. For example, Ernie~\cite{sun2019ernie} is designed to learn language representations enhanced by knowledge masking strategies, which include entity-level masking and phrase-level masking.
The disadvantage of this approach is that it can only help the model better learn entity knowledge that already exists in the corpus, without addressing the sparse and long-tailed distribution of entity knowledge in the corpus. Ernie 3.0, introduced by \citet{sun2021ernie}, incorporates the universal knowledge-text prediction (UKTP) task. This task pairs a triple from a knowledge graph with its corresponding sentence from an encyclopedia, and randomly masks either the relation in the triple or words in the sentence. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence and determine the semantic relationship between them. The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision introduces a certain amount of noise, meaning that the relation in the triple may not actually appear in the sentence~\cite{smirnova2018relation}. Therefore, masking only the relation and asking the model to predict it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for BERT-like models. To our knowledge, no knowledge-enhanced pre-training method is currently available for T5-like models. \subsection{Domain-specific NLP Benchmarks} Various domain-specific NLP benchmarks have been proposed to fairly compare the ability of different methods to model text from specific domains. The BLUE benchmark~\cite{peng2019transfer} evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark~\cite{gu2021domain} further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications.
Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, FLUE~\cite{shah2022flue} is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. We are the first to construct a comprehensive benchmark for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous work. \section{The Corpus: BBT-FinCorpus} \label{sec:fincorpus} We build BBT-FinCorpus, the largest Chinese financial-domain corpus, to obtain a superior pre-trained language model. Section \ref{sec:corpus-scale} covers how we decided on the corpus contents. We collected, filtered, and organized the corpus to finally obtain FinCorpus, as elaborated in Section \ref{sec:fincorpus-desc}. \begin{table*}[!htb] \centering \resizebox{2\columnwidth}{!}{ \begin{tabular}{p{7cm} p{4cm} l p{2cm}} \hline \textbf{Dataset} & \textbf{Text Source} & \textbf{Open State} & \textbf{Practicality}\\ \hline DuEE-fin~\cite{han2022duee} & Financial news, Company announcement & Yes & High \\ FinRE~\cite{li-etal-2019-chinese} & Financial news & Yes & High \\ Announcement information extraction~\cite{ieaalc} & Company announcement & Yes & High \\ Discovery of new entities in Internet finance~\cite{df-internet} & Social media & Unspecified & Low \\ Announcement information extraction~\cite{bd-public} & Company announcement & Unspecified & High \\ Construction of financial knowledge graph~\cite{bd-kg} & Analyst report & Unspecified & Medium \\ Event causality extraction~\cite{bd-cau} & Financial news & Unspecified & Low \\ Financial NL2SQL~\cite{bd-sql} & Data query sentence & Unspecified & Medium \\ Few-shot event extraction~\cite{bd-few} & Financial news & Unspecified & Medium \\ Few-shot event extraction~\cite{bd-trans} & Financial news & Unspecified & Medium \\ FinNL (ours) & Financial news & Yes & High
\\ FinNA (ours) & Financial news & Yes & High \\ FinFE (ours) & Social media & Yes & High \\ FinNSP (ours) & Social media & Yes & High \\ \hline \end{tabular} } \caption{Chinese financial datasets we collected, with their open-source status and practicality scores.} \label{tab:fin-datasets} \end{table*} \subsection{Coverage Confirmation of the Corpus} \label{sec:corpus-scale} We believe that, since the purpose of domain pre-training is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table \ref{tab:fin-datasets}. It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance~\footnote{\url{https://finance.sina.com.cn/}}, Tencent Finance~\footnote{\url{https://new.qq.com/ch/finance/}}, Phoenix Finance~\footnote{\url{https://finance.ifeng.com/}}, 36Kr~\footnote{\url{https://36kr.com/}} and Huxiu~\footnote{\url{https://www.huxiu.com/}}. For company announcements and research reports, we chose Eastmoney~\footnote{\url{https://www.eastmoney.com/}} for crawling. For social media, we crawled the two largest financial social media platforms on the Chinese Internet, Guba~\footnote{\url{https://guba.eastmoney.com/}} and Xueqiu~\footnote{\url{https://xueqiu.com/}}.
\subsection{Crawling and Filtering of the Corpus} We used a proxy-based distributed crawler to crawl public web pages and filtered them using a series of heuristic rules~\cite{2019t5,YUAN202165}. \subsection{Description of the Corpus} \label{sec:fincorpus-desc} After crawling, cleaning, and processing, we obtained FinCorpus, a large-scale Chinese financial-domain corpus that contains four types of language materials: \begin{itemize} \item \textbf{Corporate announcements.} \quad These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB. \item \textbf{Research reports.} \quad These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries, and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB. \item \textbf{Financial news.} \quad These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB. \item \textbf{Social media.} \quad These are the posts published by stockholders and bloggers on Guba and Xueqiu over the past twenty years. After cleaning, the total size of the resulting text is about 120GB. \end{itemize} The corpus from the above four sources basically covers all types of texts common in Chinese financial NLP.
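The rule-based filtering mentioned above follows the spirit of C4-style heuristics~\cite{2019t5}; the concrete rules and the function below are an illustrative sketch rather than the exact pipeline used for FinCorpus:

```python
import re

def clean_corpus(pages, min_len=32):
    """Illustrative C4-style filtering for crawled pages.

    The specific thresholds and rules here are assumptions for
    illustration; the paper states only that a series of heuristic
    rules is applied after crawling.
    """
    seen = set()  # for exact deduplication across the crawl
    kept = []
    for text in pages:
        # collapse whitespace left over from HTML extraction
        text = re.sub(r"\s+", " ", text).strip()
        # drop very short fragments (menus, navigation residue, ...)
        if len(text) < min_len:
            continue
        # drop pages dominated by code/markup rather than prose
        if "{" in text:
            continue
        # drop exact duplicates
        if text in seen:
            continue
        seen.add(text)
        kept.append(text)
    return kept
```

Real pipelines add further rules (language identification, near-duplicate hashing, bad-word lists), but the overall shape is the same: a sequence of cheap per-page predicates followed by deduplication.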
\begin{figure*}[!htb] \centering \includegraphics[width=2\columnwidth]{emnlp template/KETM-EN.pdf} \caption{Knowledge enhancement pre-training method based on triple masking (KETM).} \label{fig:ketm} \end{figure*} \section{The Large PLM: BBT-FinT5} To enhance the performance of Chinese financial NLP baselines and foster the growth of the open-source community in this domain, we introduce the FinT5 model. Its architecture and pre-training tasks are consistent with the T5 \cite{2019t5} model, and it is pre-trained on BBT-FinCorpus (see Section~\ref{sec:fincorpus}). We chose this model for its robust performance on many general benchmarks and its compatibility with both understanding and generation tasks through the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that the FinT5 model significantly outperforms T5 trained on a general corpus. In this section, we first describe the architecture and pre-training task of the T5 model. We then outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge-enhanced pre-training method that we propose for the T5 model, which is based on triple masking. \subsection{Pre-training Model Architecture and Task} ~\citet{2019t5} model all NLP tasks in a text-to-text format, which enables the use of a unified network architecture, training approach, and loss function for all NLP tasks, promoting transfer learning in the NLP field. Building upon this, they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained with a masked language modeling objective. Specifically, T5 adopts the span-masking method proposed by SpanBERT~\cite{joshi2020spanbert}, randomly masking contiguous spans covering 15\% of the tokens in a sentence rather than independent tokens.
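As a concrete illustration of this span-masking objective, the sketch below shows how masked spans are replaced by sentinel tokens in the input and reconstructed in the target. It uses word-level tokens and manually chosen spans for determinism; actual pre-training samples the spans at random over subword tokens so that roughly 15\% of tokens are masked.

```python
def span_corrupt(tokens, spans):
    """Build a T5-style (input, target) pair by masking contiguous spans.

    Each masked span is replaced in the input by a unique sentinel token;
    the target lists each sentinel followed by the dropped-out tokens.
    """
    inp, tgt = [], []
    pos = 0
    for sid, (start, end) in enumerate(sorted(spans)):
        inp.extend(tokens[pos:start])
        inp.append(f"<extra_id_{sid}>")
        tgt.append(f"<extra_id_{sid}>")
        tgt.extend(tokens[start:end])
        pos = end
    inp.extend(tokens[pos:])
    tgt.append(f"<extra_id_{len(spans)}>")  # final sentinel closes the target
    return " ".join(inp), " ".join(tgt)

src, tgt = span_corrupt(
    "Thank you for inviting me to your party last week".split(),
    [(2, 4), (7, 8)])
# src == "Thank you <extra_id_0> me to your <extra_id_1> last week"
# tgt == "<extra_id_0> for inviting <extra_id_1> party <extra_id_2>"
```

The model sees `src` and must generate `tgt`, so only the dropped-out spans have to be decoded rather than the full sentence.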
\subsection{Pre-training Acceleration} We use the optimizer state parallelism and gradient parallelism implemented in DeepSpeed~\cite{rasley2020deepspeed} to accelerate pre-training. In particular, we found that optimizing with the BFLOAT16~\cite{kalamkar2019study} half-precision floating-point format effectively solves the gradient overflow problem that occurs when training with the FP16 half-precision format, without the need to repeatedly tune gradient scaling coefficients and other hyperparameters. ~\citet{kalamkar2019study} pointed out that in deep neural network training, the value range (i.e., exponent range) of the floating-point numbers representing the network parameters matters more for training stability and performance than their mantissa precision. The BFLOAT16 format therefore uses the same eight-bit exponent as FP32, covering the same exponent range, at the cost of three fewer mantissa bits than FP16. Extensive experiments have shown that this trade-off makes BFLOAT16 as fast and memory-efficient as FP16 while achieving training stability and performance close to FP32. \subsection{Knowledge Enhancement Pre-training Method Based on Triple Masking} We propose a knowledge enhancement pre-training method based on triple masking (KETM). First, for each triple in the knowledge graph, we use distant supervision to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if a sentence in the encyclopedia contains both the head and tail entities, we consider it to express the knowledge described by the triple. Next, for a sentence and the triple it contains, we concatenate the triple at the beginning of the sentence.
For the triple part, we randomly mask one element, and for the sentence part, we randomly mask a random-length span covering 15\% of the tokens. Finally, we input the masked triple and sentence into the model and require the model to predict the masked elements, as shown in Figure~\ref{fig:ketm}. The model is trained to fill in the masked element of the triple based on the two unmasked elements and the partially masked sentence, which helps it better understand and memorize entity-related knowledge. \section{The Benchmark: BBT-CFLEB} In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks. \subsection{Task Selection} We propose that domain-specific NLP evaluation benchmarks should pay special attention to practicality, especially in a commercially important field like finance, so as to better reflect a model's ability in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to rate the practicality of each task as low, medium, or high, and selected only tasks with a high practicality rating as candidates. In addition, we kept only tasks with a clear open-source statement as candidates. Finally, we selected the six tasks listed in Table~\ref{tab:fin-datasets} for BBT-CFLEB.
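Returning to the KETM procedure of the previous section, one way to realize it in T5's text-to-text format is sketched below. The serialization with a separator and sentinel tokens is our assumption for illustration; the paper specifies only the masking scheme (one random triple element plus a random-length sentence span of about 15\% of the tokens).

```python
import random

def make_ketm_example(triple, sent_tokens, rng):
    """Build one KETM (input, target) pair.

    `triple` is (head, relation, tail).  One triple element and one
    contiguous sentence span are masked; the " | " separator and the
    sentinel tokens are illustrative assumptions.
    """
    parts = list(triple)
    idx = rng.randrange(3)  # mask one random element of the triple
    masked_elem, parts[idx] = parts[idx], "<extra_id_0>"
    # mask a contiguous span of ~15% of the sentence tokens
    span_len = max(1, round(0.15 * len(sent_tokens)))
    start = rng.randrange(len(sent_tokens) - span_len + 1)
    masked_span = sent_tokens[start:start + span_len]
    sent = sent_tokens[:start] + ["<extra_id_1>"] + sent_tokens[start + span_len:]
    src = " ".join(parts) + " | " + " ".join(sent)
    tgt = " ".join(["<extra_id_0>", masked_elem, "<extra_id_1>"] + masked_span)
    return src, tgt

rng = random.Random(0)
src, tgt = make_ketm_example(
    ("Tencent", "founder", "Ma Huateng"),
    "Ma Huateng founded Tencent in Shenzhen in 1998".split(), rng)
```

To fill in the triple's sentinel, the model must locate both entities in the partially masked sentence and infer the relation between them, which is exactly the behavior the method aims to train.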
\subsection{Task Introduction} \begin{table*}[!htb] \centering \begin{tabular}{l p{6cm} c c} \hline \textbf{Task Name} & \textbf{Introduction} & \textbf{Train/Dev/Test} & \textbf{Evaluation} \\ \hline FinNL & Multi-label classification of financial news & 8000/1000/1000 & F1-score \\ FinNA & Generation of summaries for financial news & 24000/3000/3000 & Rouge \\ FinRE & Entity relation classification for financial news & 7454/1489/3727 & F1-score \\ FinFE & Sentiment classification of financial social media text & 8000/1000/1000 & Accuracy \\ FinQA & Question-answering for financial news/events & 16000/2000/2000 & F1-score \\ FinNSP & Detection of negative messages and entities in financial news & 4800/600/600 & F1-score \\ \hline \end{tabular} \caption{Summary of CFLEB tasks.} \label{tab:CFLEB-tasks} \end{table*} CFLEB includes six tasks in total: two language generation tasks and four language understanding tasks. These tasks are as follows: \begin{itemize} \item FinNL, a financial news classification dataset. Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. \item FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge~\cite{lin2004rouge}. The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles. \item FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-score.
The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles. \item FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text as negative, neutral, or positive, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles. \item FinQA, a financial news and announcement event question-answering dataset, derived from the DuEE-fin~\cite{han2022duee} dataset. Given a financial news or announcement text and a question about an event mentioned in the text, the model needs to generate an answer based on the text, with evaluation measured by F1-score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles. \item FinNSP, a dataset for detecting negative financial news and its subject entities. Given a financial news or social media text and the entities mentioned in it, the model needs to determine whether the text contains negative news about any entity and, if so, identify which entity is the subject of the negative news, with evaluation measured by F1-score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles. \end{itemize} \subsection{Leaderboard Introduction} We have organized the tasks into multiple leaderboards according to different ability requirements~\cite{xu2020clue}, so that researchers can compare models from different perspectives. The leaderboards of CFLEB are as follows: \begin{itemize} \item Overall leaderboard: includes all six tasks. \item Understanding ability leaderboard: includes the four language understanding tasks, FinNL, FinRE, FinFE, and FinNSP.
\item Generation ability leaderboard: includes the two language generation tasks, FinNA and FinQA. \end{itemize} \section{Experiments} In this section, we first introduce the basic settings of the experiments, including the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. We then conduct extensive experiments and comparative analyses to validate the effectiveness of the proposed model and methods. \begin{table*}[!htb] \centering \resizebox{2.1\columnwidth}{!}{ \begin{tabular}{l c c c c|c|c c|c|c} \hline \textbf{PLMs} & \textbf{FinFE} & \textbf{FinNL} & \textbf{FinNSP} & \textbf{FinRE} & \textbf{Un.Avg.} & \textbf{FinNA} & \textbf{FinQA} & \textbf{Ge.Avg.} & \textbf{Avg.} \\ \hline GPT2-base & 79.05 & 84.09 & 91.30 & 36.37 & 72.70 & 44.19 & 75.22 & 59.71 & 68.37 \\ T5-base & 79.40 & 87.48 & \textbf{95.43} & 54.93 & 79.56 & 48.54 & 83.58 & 66.06 & 74.89 \\ FinBERT-base & 79.45 & 84.69 & 69.01 & 55.33 & 72.37 & - & - & - & -\\ Mengzi-BERT-base-fin & 79.50 & 85.88 & 71.72 & 58.25 & 73.59 & - & - & - & -\\ BBT-FinT5-base & 80.19 & 87.55 & 94.50 & 60.62 & 80.21 & 50.06 & 84.82 & 67.44 & 76.29 \\ BBT-FinT5-base-KE & 79.43 & 87.77 & 95.05 & 61.79 & 80.26 & 51.36 & 85.66 & 68.51 & 76.84 \\ BBT-FinT5-large & \textbf{80.24} & \textbf{88.44} & 94.54 & \textbf{61.88} & \textbf{81.78} & \textbf{51.42} & \textbf{85.95} & \textbf{68.69} & \textbf{77.07} \\ \hline \end{tabular} } \caption{Results of BBT-CFLEB from different PLMs.} \label{table:fint5} \end{table*} \subsection{Experiments Setup} \subsubsection{Pre-trained Language Models} The models included in our comparative experiments are: \begin{itemize} \item \textbf{GPT2-base}~\cite{zhao2019uer}. \quad A Chinese GPT2 released by~\citet{zhao2019uer}, pre-trained on the general corpus CLUECorpusSmall~\cite{xu2020clue}. \item \textbf{T5-base}~\cite{zhao2019uer}. \quad A Chinese T5 released by~\citet{zhao2019uer}.
Pre-trained on the general corpus CLUECorpusSmall~\cite{xu2020clue}. \item \textbf{FinBERT}~\cite{FinBERT}. \quad A Chinese BERT for the financial domain released by~\citet{FinBERT}. \item \textbf{Mengzi-BERT-base-fin}~\cite{zhang2021mengzi}. \quad A Chinese BERT for the financial domain released by~\citet{zhang2021mengzi}. \item \textbf{FinT5-base}. \quad Our Chinese pre-trained language model for the financial domain, pre-trained on our financial corpus, FinCorpus. Its model architecture, parameter size, and pre-training hyperparameters are the same as T5-v1.1-base. \item \textbf{FinT5-base-KE}. \quad The knowledge-enhanced version of FinT5-base, trained with the KETM method using the CN-DBPedia~\cite{xu2017cn} knowledge graph. \item \textbf{FinT5-large}. \quad Our Chinese pre-trained language model for the financial domain with about 1 billion parameters in total; its pre-training hyperparameters are the same as T5-base. \end{itemize} \subsubsection{Fine-tuning} For the generative models (GPT, T5), we evaluated all six datasets by modeling every task as text-to-text. For the BERT-based models, we evaluated the four language understanding tasks, FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for each task. \subsection{Experiment 1: Comparison of Pre-trained Model Architectures} The two general-domain models, GPT2-base and T5-base, share the same pre-training corpus, hyperparameters, and training volume, yet their average scores differ significantly, with T5-base clearly outperforming GPT2-base, as shown in Table~\ref{table:fint5}. This difference is mainly due to the differences in architecture, parameter size, and pre-training method between the T5 and GPT models. This result supports our choice of the T5 model.
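For concreteness, the casting of tasks into text-to-text form used in the fine-tuning setup above can be sketched as follows. The prompt templates and field names are illustrative assumptions; the paper states only that all six tasks are modeled as text-to-text for the generative models.

```python
def to_text2text(task, example):
    """Cast a CFLEB example into a (source, target) text pair.

    The verbalizations below are hypothetical; any fixed, consistent
    template per task would serve the same purpose.
    """
    if task == "FinFE":   # sentiment: negative / neutral / positive
        return "sentiment: " + example["text"], example["label"]
    if task == "FinNA":   # news summarization
        return "summarize: " + example["text"], example["summary"]
    if task == "FinRE":   # relation between a given entity pair
        src = ("relation between " + example["head"] + " and "
               + example["tail"] + ": " + example["text"])
        return src, example["relation"]
    raise ValueError("unknown task: " + task)
```

Under this scheme the same sequence-to-sequence loss trains every task, and classification labels are simply decoded as label words.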
\subsection{Experiment 2: Effectiveness of Domain Pre-training} As shown in Table~\ref{table:fint5}, the comparison between FinT5-base and T5-base indicates that FinT5-base, pre-trained on FinCorpus, significantly outperforms the T5-base model of the same parameter size, demonstrating the effectiveness of both domain pre-training and FinCorpus. \subsection{Experiment 3: Superiority over Existing Models in the Domain} As shown in Table~\ref{table:fint5}, on the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain. \subsection{Experiment 4: Effectiveness of KETM} As shown in Table~\ref{table:fint5}, comparing FinT5-base-KE with FinT5-base shows that the knowledge-enhanced pre-training method significantly improves the model's performance on tasks such as relation extraction and news summarization without significantly compromising performance on the other tasks, demonstrating the effectiveness of the KETM method. \subsection{Experiment 5: Effectiveness of Parameter Scaling} As shown in Table~\ref{table:fint5}, the comparison between FinT5-base and FinT5-large indicates that the FinT5-large model with one billion parameters performs significantly better than FinT5-base, demonstrating the effectiveness of scaling up the parameter count. \section{Conclusion} In this article, we introduced three new contributions to Chinese financial NLP. We created the largest open-source corpus for this domain, FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters.
To enhance our pre-training method, we developed a knowledge-enhanced approach called KETM, which proved effective. We also created CFLEB, a benchmark for evaluating the understanding and generation capabilities of language models. We believe domain benchmarks should prioritize practicality to better reflect how improvements in language models in academia can benefit the real world. Our future work includes expanding FinCorpus and FinT5 and exploring multilingual and multimodal applications. \newpage
Q: Cannot upload a file to the text field (there is a hidden file field) I am having trouble with a file upload; let me describe everything step by step. The form has one file-upload control, but the visible field's type is text, while behind it there is a hidden field whose type is file. Visible field (text field): <div _ngcontent-ete-c120="" class="form-group upload-doc ng-star-inserted" id="form-group-936cd4f5-19ab-4f3c-bfd9-563bf46212b3"> <label _ngcontent-ete-c120="" class="ng-star-inserted">Please provide bank account statement</label><!----> <input _ngcontent-ete-c120="" type="text" readonly="" class="form-control ng-untouched ng-pristine ng-valid"><i _ngcontent-ete-c120="" class="fal fa-upload"></i> <!----> <!----> </div> – Hidden field <input _ngcontent-juf-c120="" type="file" ng2fileselect="" class="uploader" accept="application/pdf,application/acrobat,application/nappdf,application/x-pdf,image/pdf,image/jpeg,image/pjpeg,image/png"> I can reveal the hidden field and attach a file to it with invoke-command, but the file's name is not written into the visible text field, so the form cannot be submitted. In the screenshot, the hidden field (where I attached the file) is at the top; the visible text field is the one used for uploading from the UI, and the file's name must appear in it. UI Screenshot A: Do you use libraries like "cypress-file-upload"? It's really good and I haven't had issues with it so far.
Q: Python SOAP server using a .wsdl file I'm trying to create a SOAP server in Python, like this one in PHP: $server = new WebSoapServer("test.wsdl", $options); $server->setClass("Webservice_Test"); $server->handle(); How can this be done, and which library would be best? Everything I found was developed years ago and has seen no changes since, and none of the tutorials explain how to use a wsdl file to generate the webservice the way WebSoapServer does in PHP. Can you help me with it?
<?php
/**
 * Response Text
 * Webino Example
 */

use WebinoAppLib\Event\RouteEvent;
use WebinoAppLib\Response\Content\SourcePreview;
use WebinoAppLib\Response\TextResponse;
use WebinoAppLib\Router\DefaultRoute;
use WebinoConfigLib\Feature\Route;

require __DIR__ . '/../../vendor/autoload.php';

/**
 * Example routes
 */
abstract class MyRoutes
{
    const TEXT_TEST = 'textTest';
}

$config = Webino::config([
    /**
     * Configuring plain text
     * response route.
     */
    (new Route(MyRoutes::TEXT_TEST))->setLiteral('/text-test'),
]);

$app = Webino::application($config)->bootstrap();

$app->bindRoute(MyRoutes::TEXT_TEST, function (RouteEvent $event) {
    /**
     * Responding using
     * plain text.
     */
    $event->setResponse(new TextResponse('Hello Webino!'));
});

$app->bind(DefaultRoute::class, function (RouteEvent $event) {
    $event->setResponse([
        $event->getApp()->url(MyRoutes::TEXT_TEST)->html('View plain text!'),
        new SourcePreview(__FILE__),
    ]);
});

$app->dispatch();
Source: https://in.mathworks.com/help/finance/smoothts.html

# smoothts

smoothts is not recommended. Use smoothdata instead.

## Description

output = smoothts(input) smooths the input data using the default Box method with window size, wsize, of 5.

output = smoothts(input,'b',wsize) smooths the input data using the Box (simple, linear) method. wsize specifies the width of the box to be used.

output = smoothts(input,'g',wsize,stdev) smooths the input data using the Gaussian window method.

output = smoothts(input,'e',n) smooths the input data using the Exponential method. n can represent the window size (period length) or alpha. If n > 1, n represents the window size. If 0 < n < 1, n represents alpha, where

$\alpha =\frac{2}{wsize+1}.$

If input is a financial time series object, output is a financial time series object identical to input except for contents. If input is a row-oriented matrix, output is a row-oriented matrix of the same length.

output = smoothts(input,method) smooths the input data using a smoothing method.

output = smoothts(___,wsize) smooths the input data using a smoothing method where wsize specifies the width of the box to be used.

output = smoothts(___,stdev) represents the standard deviation of the Gaussian window.

output = smoothts(___,n) smooths the input data using the Exponential method ('e'). n can represent the window size (period length) or alpha. If n > 1, n represents the window size. If 0 < n < 1, n represents alpha, where

$\alpha =\frac{2}{wsize+1}.$

## Examples

1. Create a financial time series (fints) object using dates and data.

data = [1:6]';
dates = [today:today+5]';
tsobj = fints(dates, data)
Warning: FINTS is not recommended. Use TIMETABLE instead. For more information, see Convert Financial Time Series Objects (fints) to Timetables.
> In fints (line 169)

tsobj =

desc: (none)
freq: Unknown (0)

{'dates: (6)'} {'series1: (6)'}
{'01-Sep-2021'} {[ 1]}
{'02-Sep-2021'} {[ 2]}
{'03-Sep-2021'} {[ 3]}
{'04-Sep-2021'} {[ 4]}
{'05-Sep-2021'} {[ 5]}
{'06-Sep-2021'} {[ 6]}

2. Use smoothts to smooth the data.

output = smoothts(tsobj)

output =

desc: Box-smoothed of
freq: Unknown (0)

{'dates: (6)'} {'series1: (6)'}
{'01-Sep-2021'} {[ 1.2000]}
{'02-Sep-2021'} {[ 2.0000]}
{'03-Sep-2021'} {[ 3.0000]}
{'04-Sep-2021'} {[ 4]}
{'05-Sep-2021'} {[ 3.6000]}
{'06-Sep-2021'} {[ 3]}

## Input Arguments

Input data, specified as a fints object or a row-oriented matrix. In a row-oriented matrix, each row represents an individual set of observations.

Data Types: object | double

Smoothing method, specified as a character vector with one of the following values:

• 'b' — Box
• 'e' — Exponential
• 'g' — Gaussian

Data Types: char

Window size, specified as a scalar numeric.

Note: The wsize input argument can only be used when the method is 'b' (Box) or 'g' (Gaussian).

Data Types: double

Standard deviation of the Gaussian window, specified as a scalar numeric.

Note: The stdev input argument can only be used when the method is 'g' (Gaussian).

Data Types: numeric

Window size or exponential factor depending upon value, specified as a scalar numeric with one of the following values:

• n > 1 (window size) or period length
• n < 1 and > 0 (exponential factor: alpha)
• n = 1 (either window size or alpha)

Note: The n input argument can only be used when the method is 'e' (Exponential).

Data Types: double

## Output Arguments

Output, returned as a fints object or row-oriented matrix.

If input is a financial time series object, output is a financial time series object identical to input except for contents. If input is a row-oriented matrix, output is a row-oriented matrix of the same length.

## Version History

Introduced before R2006a
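The Box method above is just a zero-padded moving average, so the documented example output can be reproduced in a few lines of Python (an illustrative re-implementation, not MathWorks code):

```python
def box_smooth(data, wsize=5):
    """Zero-padded moving average: the 'Box' method with window wsize."""
    half = wsize // 2
    # Pad with zeros so every point has a full window, matching the example output.
    padded = [0.0] * half + [float(x) for x in data] + [0.0] * half
    return [sum(padded[i:i + wsize]) / wsize for i in range(len(data))]

print(box_smooth([1, 2, 3, 4, 5, 6]))  # → [1.2, 2.0, 3.0, 4.0, 3.6, 3.0]
```

The zero padding explains why the smoothed series bends toward zero at the boundaries (1.2 and 3.0 instead of 1 and 6) in the example above.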
/**
 *
 */
package gov.nasa.ensemble.core.plan.constraints.network;

import gov.nasa.ensemble.core.model.plan.EPlanElement;
import gov.nasa.ensemble.emf.model.common.Timepoint;

import javax.measure.quantity.Duration;

import org.jscience.physics.amount.Amount;

/**
 * Given:
 *   source   = sourceTimepoint of sourceElement
 *   affected = affectedTimepoint of affectedElement
 *
 * This constraint asserts the following: (these are all mathematically equivalent)
 *
 * 1. source + earliestDistance <= affected <= source + latestDistance
 * 2. affected - latestDistance <= source <= affected - earliestDistance
 * 3. earliestDistance <= affected - source <= latestDistance
 *
 * @author abachmann
 */
public class ConsistencyConstraint {

    public final EPlanElement sourceElement;
    public final Timepoint sourceTimepoint;
    public final EPlanElement affectedElement;
    public final Timepoint affectedTimepoint;
    public final Amount<Duration> minimumDistance;
    public final Amount<Duration> maximumDistance;

    public ConsistencyConstraint(EPlanElement sourceElement, Timepoint sourceTimepoint,
            EPlanElement affectedElement, Timepoint affectedTimepoint,
            Amount<Duration> minimumDistance, Amount<Duration> maximumDistance) {
        this.sourceElement = sourceElement;
        this.sourceTimepoint = sourceTimepoint;
        this.affectedElement = affectedElement;
        this.affectedTimepoint = affectedTimepoint;
        this.minimumDistance = minimumDistance;
        this.maximumDistance = maximumDistance;
    }
}
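The three formulations listed in the doc comment above really are interchangeable; the following sketch (Python, purely illustrative, not Ensemble code) cross-checks them on an exhaustive grid of small integer timepoints and distances:

```python
def form1(source, affected, earliest, latest):
    # source + earliestDistance <= affected <= source + latestDistance
    return source + earliest <= affected <= source + latest

def form2(source, affected, earliest, latest):
    # affected - latestDistance <= source <= affected - earliestDistance
    return affected - latest <= source <= affected - earliest

def form3(source, affected, earliest, latest):
    # earliestDistance <= affected - source <= latestDistance
    return earliest <= affected - source <= latest

# Exhaustive check over a small integer grid: all three forms agree everywhere.
cases = [(s, a, e, l)
         for s in range(-3, 4) for a in range(-3, 4)
         for e in range(-2, 3) for l in range(-2, 3)]
assert all(form1(*c) == form2(*c) == form3(*c) for c in cases)
```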
Source: https://www.physicsforums.com/threads/differential-equation-help.469085/

# Differential equation (help)

Jennifer_88:
Hi, can someone help with the DE

dy^2/dx = 62y - .2

DJsTeLF:
Are you sure it doesn't read (dy/dx)^2 = 62y - 0.2?

Is this what you meant? $$\frac{ d\left(y^2\right) }{dx} = 62y -0.2$$ If so, just write z in place of $y^2$ so $$\frac{dz}{dx} = 62\sqrt{z} -0.2$$, and separate variables as usual.

Ok, have you learned how to deal with differential equations of the form $$y'' + ay' + by = c$$ where a, b, c are constants? Because that's what this problem is. If not, I suggest you read through that section in your textbook or notes again. If you do know the theory for diff. eqns of that form, what specifically got you stuck?
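Following the substitution suggested in the thread, the separation of variables can be pushed to an implicit solution (assuming the intended equation is $\frac{d(y^2)}{dx} = 62y - 0.2$ and $y \ge 0$, so that $w = \sqrt{z} = y$):

```latex
z = y^2,\quad w = \sqrt{z}:\qquad
\int \frac{2w\,dw}{62w - 0.2} = \int dx
\;\Longrightarrow\;
\frac{w}{31} + \frac{1}{9610}\,\ln\bigl|62w - 0.2\bigr| = x + C .
```

Here the partial fraction $\frac{2w}{62w - 0.2} = \frac{1}{31} + \frac{1/155}{62w - 0.2}$ was used, and $155 \cdot 62 = 9610$.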
\section{Introduction} Variational inequalities are a broad and flexible class of problems that includes minimization, saddle point, Nash equilibrium, and fixed point problems as special cases; see \citep{VIbook2003,Heinz} for an introduction. Over the long history of modern research on variational inequalities spanning at least half a century, the community developed their own methods and theory, differing from the approaches in their sister field, optimization. The \algname{ExtraGradient} / \algname{MirrorProx} methods due to \citet{Korpelevich1976TheEM,Nemirovski2004,juditsky2008solving} occupy a foundational position in the variational inequalities field similar to that of gradient descent in the optimization literature. As in the case of gradient descent, many modifications~\citep{hsieh2019convergence} and variants~\citep{doi:10.1137/S0363012998338806} of these methods were proposed and studied in the variational inequalities literature. \subsection{Applications of variational inequalities} In recent years, there has been a significant increase of research activity in the study of variational inequalities due to new connections to reinforcement learning~\citep{Omidshafiei2017:rl,Jin2020:mdp}, adversarial training \citep{Madry2017:adv}, and GANs~\citep{goodfellow2014generative}. In particular, \citet{daskalakis2017training,gidel2018variational,mertikopoulos2018optimistic,chavdarova2019reducing,pmlr-v89-liang19b,peng2020training} show that even if one considers the classical (in the variational inequalities literature) regime involving monotone and strongly monotone inequalities, it is possible to obtain insights, methods and recommendations useful for the GAN community.
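To make the \algname{ExtraGradient} idea concrete, here is a minimal, self-contained sketch (ours, not code from the cited works) for the bilinear saddle point $\min_x \max_y \, xy$, whose operator $F(x,y) = (y, -x)$ is monotone; plain gradient descent-ascent spirals away from the solution $(0,0)$, while the extrapolation (look-ahead) step makes the iterates contract:

```python
def F(z):
    """Operator of the bilinear saddle point min_x max_y x*y: F(x, y) = (y, -x)."""
    x, y = z
    return (y, -x)

def extragradient(z, gamma=0.5, iters=200):
    """z_{k+1/2} = z_k - gamma*F(z_k);  z_{k+1} = z_k - gamma*F(z_{k+1/2})."""
    x, y = z
    for _ in range(iters):
        gx, gy = F((x, y))
        xh, yh = x - gamma * gx, y - gamma * gy  # extrapolation (half) step
        hx, hy = F((xh, yh))                     # operator at the look-ahead point
        x, y = x - gamma * hx, y - gamma * hy    # actual update, taken from z_k
    return (x, y)

x, y = extragradient((1.0, 1.0))
print(abs(x) + abs(y))  # tiny: the iterates contract toward the equilibrium (0, 0)
```

For this linear operator one \algname{ExtraGradient} step multiplies the iterate by $(1-\gamma^2)\mathbf{I} - \gamma \mathbf{A}$ with $\mathbf{A} = \left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$, whose spectral radius $\sqrt{1-\gamma^2+\gamma^4}$ is below $1$ for $0<\gamma<1$, whereas plain gradient descent-ascent multiplies by a matrix of spectral radius $\sqrt{1+\gamma^2} > 1$ and diverges.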
In addition to the above modern applications, and besides their many classical applications in applied mathematics that include economics, equilibrium theory, game theory and optimal control \cite{facchinei2007finite}, variational inequalities remain popular in supervised learning (with non-separable loss \citep{Thorsten}; with non-separable regularizer~\citep{bach2011optimization}), unsupervised learning (discriminative clustering \citep{NIPS2004_64036755}; matrix factorization \citep{bach2008convex}), image denoising \citep{esser2010general,chambolle2011first}, robust optimization \citep{BenTal2009:book}, and non-smooth optimization via smooth reformulations \citep{nesterov2005smooth,nemirovski2004prox}. \subsection{Processing on the edge} With the proliferation of mobile phones, wearables, digital sensors, smart home appliances, and other devices capable of capturing, storing and processing data, there is an increased appetite to mine the richness contained in these sources for the benefit of humanity. However, at the same time, the traditional centralized approach relying on moving the data into a single proprietary warehouse for processing via suitable machine learning methods is problematic, and a new modus operandi is on the rise: processing the data at the source, on the edge, where it was first captured and where it is stored~\citep{FEDLEARN, FL2017-AISTATS, kairouz2019advances}, by the client's devices that own the data. There are many reasons for a gradual shift in this direction, including energy efficiency and data privacy. A key necessary characteristic for any viable algorithmic approach to work in such a massively decentralized regime is the ability to support decentralized processing reflecting the fact that the devices are connected through a network of a potentially complicated topology, possibly varying in time. 
A central authority may be absent in such a system, and the methods need to rely on communication patterns that correspond to the existing connection links. \subsection{Decentralized algorithms for variational inequalities} In this paper we study \begin{quote}{\em algorithms for solving variational inequalities over decentralized communication networks}.\end{quote} In this regime, a number of nodes (workers, devices, clients) are connected via a communication network, represented by a graph. Each node can perform computations using its local state and data, and is only allowed to communicate with its neighbors in the graph. Decentralized algorithms over fixed communication networks find their applications in sensor networks \cite{1307319}, network resource allocation \cite{beck20141}, cooperative control \cite{giselsson2013accelerated}, distributed spectrum sensing \cite{bazerque2009distributed}, power system control \cite{gan2012optimal} and, of course, in machine learning \cite{scaman2017optimal}. Recently, decentralized methods over time-varying networks have gained particular popularity due to their relevance to federated learning \cite{FEDLEARN, kairouz2019advances}, where communication failures between devices are a common problem. Decentralized minimization methods are well studied \cite{9084356, koloskova2020unified}. In particular, lower bounds and optimal algorithms for such problems are known in the {\em fixed} \cite{scaman2017optimal,hendrikx2020optimal, kovalev2020optimal} and {\em time-varying} \cite{kovalev2021lower, li2021accelerated} network topology regimes. \begin{quote}\em However, in the significantly more general and hence potentially much more impactful formalism of variational inequalities, the question of optimal and efficient decentralized methods is still open.
\end{quote} Motivated by these considerations, our work is devoted to advancing the algorithmic and theoretical foundations of decentralized variational inequalities, in both the fixed and time-varying network regimes. \subsection{Our contributions and related work} \renewcommand{\arraystretch}{2} \begin{table*}[!t] \centering \small \caption{Summary of upper and low bounds for communication and local computation complexities for finding an $\varepsilon$-solution for strongly monotone \textbf{stochastic (finite-sum)} \textbf{decentralized} variational inequality \eqref{eq:VI} over fixed and time-varying networks. Convergence is measured by the distance to the solution.} \label{tab:comparison0} \scriptsize \begin{threeparttable} \begin{tabular}{|c|c|c|c|c|c|} \cline{3-6} \multicolumn{2}{c|}{} & \textbf{\quad\quad\quad\quad\quad\quad Reference \quad\quad\quad\quad\quad\quad} & \textbf{Communication complexity} & \textbf{Local complexity} & \textbf{Weaknesses} \\ \hline \multirow{8}{*}{\rotatebox[origin=c]{90}{\textbf{Fixed \quad\quad}}} & \multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Upper}\quad\quad}} & \cite{Mukherjee2020:decentralizedminmax} \tnote{{\color{blue}(1,2)}} & $\mathcal{O} \left( \myred{\chi^{\frac{4}{3}}} \frac{L^{\myred{\frac{4}{3}}}}{\mu^{\myred{\frac{4}{3}}}} \log \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \myred{n\chi^{\frac{4}{3}}} \frac{L^{\myred{\frac{4}{3}}}}{\mu^{\myred{\frac{4}{3}}}} \log \frac{1}{\varepsilon} \right)$ & \makecell{{weak communication rates} \\ {weak local computation rates}} \\ \cline{3-6} && \cite{beznosikov2021distributed} \tnote{{\color{blue}(1,2)}} & $\mathcal{O} \left( \sqrt{\chi} \frac{L}{\mu} \log^{\myred{2}} \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \myred{n} \frac{L}{\mu} \myred{\log \frac{L+\mu}{\mu}} \log \frac{1}{\varepsilon}\right)$ & \makecell{{multiple gossip} \\ {no linear convergence}} \\ \cline{3-6} && \cite{beznosikov2020distributed} \tnote{{\color{blue}(1,3)}} & $\mathcal{O} \left( 
\sqrt{\chi} \frac{L}{\mu} \log^{\myred{2}} \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \myred{n} \frac{L}{\mu}\log \frac{1}{\varepsilon}\right)$ & \makecell{{multiple gossip} \\ {no linear convergence}} \\ \cline{3-6} && \cite{rogozin2021decentralized} \tnote{{\color{blue}(1,2,4)}} & $\mathcal{O} \left( \sqrt{\chi} \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \myred{n \sqrt{\chi}} \frac{L}{\mu}\log \frac{1}{\varepsilon}\right)$ & weak local computation rates \\ \cline{3-6} && \cellcolor{bgcolor2}{Alg. \ref{alg:vrvi} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \max[\sqrt{n}; \sqrt{\chi}] \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \max[\sqrt{n}; \sqrt{\chi}]\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{} \\ \cline{3-6} && \cellcolor{bgcolor2}{Alg. \ref{alg:vrvi} + Alg. \ref{alg:chebyshev_gossip} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \sqrt{\chi} \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \sqrt{n}\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{multiple gossip} \\ \cline{2-6} & \multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{Lower}}} & \cite{beznosikov2020distributed} \tnote{{\color{blue}(3)}} & $\mathcal{O} \left( \sqrt{\chi} \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & \\ \cline{3-6} && \cellcolor{bgcolor2}{Thm. \ref{th:lower_fixed} + Cor. 
\ref{cor:lower_fixed} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \sqrt{\chi} \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \myblue{\sqrt{n}}\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{} \\\hline\hline \multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{Time-varying}\quad}} & \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{Upper}\quad \quad}} & \cite{beznosikov2021decentralized} \tnote{{\color{blue}(3)}} & $\mathcal{O} \left( \chi \frac{L}{\mu} \log \frac{1}{\varepsilon} + \myred{\chi \frac{L D }{\mu^2 \sqrt{\varepsilon}}} \right)$ \tnote{{\color{blue}(5)}} & $\mathcal{O} \left( \myred{n\chi} \frac{L}{\mu} \log \frac{1}{\varepsilon} + \myred{n\chi \frac{L D }{\mu^2 \sqrt{\varepsilon}} }\right)$ & \makecell{{$D$-homogeneity} \\ {no linear convergence}} \\ \cline{3-6} && \cite{beznosikov2021} \tnote{{\color{blue}(1,2)}} & $\mathcal{O} \left( \chi \frac{L}{\mu} \log^{\myred{2}} \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \myred{n}\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & \makecell{{multiple gossip} \\ {no linear convergence}} \\ \cline{3-6} && \cellcolor{bgcolor2}{ Alg. \ref{2dvi2:alg} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \max[\sqrt{n}; \chi] \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ \tnote{{\color{blue}(5)}}} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \max[\sqrt{n}; \chi]\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{} \\ \cline{3-6} && \cellcolor{bgcolor2}{ Alg. \ref{2dvi2:alg} + Eq. 
\ref{eq:tv_gossip} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \chi \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ \tnote{{\color{blue}(5)}}} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \sqrt{n}\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{multiple gossip} \\ \cline{2-6} & \multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{Lower}}} & \cite{beznosikov2021} \tnote{{\color{blue}(2)}} & $\mathcal{O} \left( \chi \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & $\mathcal{O} \left( \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ & \\ \cline{3-6} && \cellcolor{bgcolor2}{Thm. \ref{th:lower_tv} + Cor. \ref{cor:lower_tv} (this paper)} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \chi \frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$ \tnote{{\color{blue}(5)}}} & \cellcolor{bgcolor2}{$\mathcal{O} \left( \myblue{\sqrt{n}}\frac{L}{\mu} \log \frac{1}{\varepsilon} \right)$} & \cellcolor{bgcolor2}{} \\\hline \end{tabular} \begin{tablenotes} {\scriptsize \item [{\color{blue}(1)}] for saddle point problems; \tnote{{\color{blue}(2)}} deterministic; \tnote{{\color{blue}(3)}} stochastic, but not finite sum; \tnote{{\color{blue}(4)}} convex-concave (monotone) case (we re-analyzed for strongly monotone case); \tnote{{\color{blue}(5)}}$B$-connected graphs \cite{nedich2016geometrically} are also considered. For simplicity in comparison with other works, we put $B = 1$. To get estimates for $B \neq 1$, one needs to change $\chi$ to $B \chi$. \item [] {\em Notation:} $\mu$ = constant of strong monotonicity of operator $F$, $L$ = Lipschitz constant of the operators $F_{m,i}$, $\chi$ = characteristic \# of the network (see Assumptions \ref{ass:fixed} and \ref{ass:tv}), $n$ = size of the local dataset. } \end{tablenotes} \end{threeparttable} \end{table*} We now briefly summarize our main contributions.
\textbf{(a) Lower bounds} We present the {\bf first lower bounds for the communication and local computation complexities of decentralized variational inequalities in the stochastic (finite sum) case, in both the fixed and time-varying network topology regimes.} See Table \ref{tab:comparison0}. Existing literature contains lower bounds for {\em non-distributed} finite-sum variational inequalities \citep{han2021lower}, which we recover as a special case. Existing literature also contains lower bounds for {\em deterministic} decentralized variational inequalities in the fixed \citep{beznosikov2020distributed} and time-varying \citep{beznosikov2021} regimes. Our bounds cover these results too. See Table \ref{tab:comparison1} (Appendix \ref{sec:tables}). \textbf{(b) Optimal decentralized algorithms} We construct {\bf four new algorithms for stochastic (finite sum) decentralized variational inequalities: two for fixed networks, and two for time-varying networks. Two of these algorithms match our lower bounds, and are therefore optimal in terms of communication and local iteration complexities.} These are the first algorithms for stochastic (finite sum) decentralized variational inequalities over fixed and time-varying networks. See Table \ref{tab:comparison0}. Moreover, our results offer linear communication complexity for deterministic decentralized strongly monotone variational inequalities, which is an improvement upon the sublinear results of \citet{beznosikov2021distributed,beznosikov2020distributed,beznosikov2021decentralized,beznosikov2021}. Additionally, our algorithms have better guarantees on local computations than the methods developed by \citet{rogozin2021decentralized}. See Table \ref{tab:comparison1} (Appendix \ref{sec:tables}).
Let us also single out a number of works on decentralized saddle point problems or VIs which are not suitable for comparison: \citet{7403075} do not prove convergence, \citet{liu2019decentralized} assume data homogeneity, and \citet{6004889} consider a discrete problem. \textbf{(c) Optimal non-distributed and centralized algorithms} We believe it is notable that despite the generality of our setup and algorithms, {\bf our results, when specialized to handle this simpler case, improve upon the current state-of-the-art results in the non-distributed and centralized setting.} In particular, unlike existing methods, our algorithms support {\em batch parallelization}: while the complexity of the best available algorithms grows with the batch size, our algorithms are not sensitive to this. This property is of crucial importance when working in the large batch mode, which is used in practice \cite{brock2018large,zhu2019freelb,you2019large}. See Table \ref{tab:comparison2} (Appendix \ref{sec:tables}). \textbf{(d) Experiments} Numerical experiments on bilinear problems and robust regression problems confirm the practical efficiency of our methods, both in the non-distributed stochastic setup and in the decentralized deterministic one. \section{Problem Setup and Assumptions} We write $\langle x,y \rangle \vcentcolon= \sum_{i=1}^n x_i y_i$ to denote the standard inner product of vectors $x,y\in\mathbb R^n$, where $x_i$ corresponds to the $i$-th component of $x$ in the standard basis in $\mathbb R^n$. This induces the standard $\ell_2$-norm in $\mathbb R^n$ in the following way: $\|x\| \vcentcolon= \sqrt{\langle x, x \rangle}$. To denote the Kronecker product of two matrices $\mathbf{A}\in\mathbb R^{m\times m}$ and $\mathbf{B}\in\mathbb R^{n\times n}$, we use $\mathbf{A} \otimes \mathbf{B} \in \mathbb R^{nm \times nm}$. The identity matrix of size $n\times n$ is denoted by $\mathbf{I}_n$. We write $[n]\vcentcolon= \{1,2,\dots,n\}$. $\mathbb N$ is the set of positive integers.
\subsection{Distributed Variational Inequality} We study variational inequalities (VI) of the form \begin{equation}\begin{aligned} \label{eq:VI} \text{Find} \quad z^* &\in \mathbb R^d \quad \text{such that} \\ \langle F(z^*), z - z^* \rangle + g(z) &- g(z^*) \geq 0, \quad \forall z \in \mathbb R^d, \end{aligned}\end{equation} where $F: \mathbb R^d \to \mathbb R^d $ is an operator, and $g: \mathbb R^d \to \mathbb R \cup \{ + \infty\}$ is a proper lower semicontinuous convex function. \subsection{Examples} To showcase the expressive power of the formalism \eqref{eq:VI}, we now give a few examples of variational inequalities arising in machine learning. \textbf{Example 1 [Convex minimization].} Consider the convex regularized minimization problem: \begin{align} \label{eq:min} \min_{z \in \mathbb R^d} f(z) + g(z), \end{align} where $f$ is typically a smooth data-fidelity term, and $g$ a possibly nonsmooth regularizer. Such problems arise in many classical machine learning applications, including empirical risk minimization, maximum likelihood estimation, and least-squares problems \citep{bishop2006pattern,shalev2014understanding}. If we define $F(z) \vcentcolon= \nabla f(z)$, then it can be proved that $z^* \in \dom g$ is a solution for \eqref{eq:VI} if and only if $z^* \in \dom g$ is a solution for \eqref{eq:min}. So, the regularized optimization problem \eqref{eq:min} can be cast as a VI \eqref{eq:VI}. \textbf{Example 2 [Convex-concave saddle point problems].} Consider the convex-concave saddle point problem \begin{align} \label{eq:minmax} \min_{x \in \mathbb R^{d_x}} \max_{y \in \mathbb R^{d_y}} f(x,y) + g_1 (x) + g_2(y), \end{align} where $g_1$ and $g_2$ can also be interpreted as regularizers. If we let $F(z) \vcentcolon= F(x,y) = [\nabla_x f(x,y), -\nabla_y f(x,y)]$ and $g(z) = g(x,y) = g_1 (x) + g_2(y)$, then it can be proved that $z^* \in \dom g$ is a solution for \eqref{eq:VI} if and only if $z^* \in \dom g$ is a solution for \eqref{eq:minmax}.
So, convex-concave saddle point problems \eqref{eq:minmax} can be cast as a VI \eqref{eq:VI}. Saddle point problems are strongly related to variational inequalities. In particular, lower bounds for the former are also valid for the latter. Moreover, upper bounds for variational inequalities are valid for saddle point problems. However, what is perhaps more important is that these lower and upper bounds match. This is in contrast to minimization, where the lower bounds are weaker. \subsection{Decentralized variational inequalities} We consider the decentralized case of problem \eqref{eq:VI}, namely we assume that $F$ is distributed across $M$ workers, \begin{equation} \label{eq:distr} \textstyle F(z) \vcentcolon= \sum\limits_{m=1}^M F_m(z), \end{equation} while each $F_m:\mathbb R^d \to \mathbb R^d$, $m \in [M]$, has the finite sum structure \begin{equation} \label{eq:fs} \textstyle F_m(z) \vcentcolon= \frac{1}{n}\sum\limits_{i=1}^n F_{m,i}(z). \end{equation} The data describing $F_{m}$ is stored on worker $m$. For example, $F_{m,i}$ can correspond to the value of the operator on the $i$th data point of the $m$-th dataset. \subsection{Assumptions} \label{sec:as} \begin{assumption}[Lipschitzness] \label{as:Lipsh} Each operator $F_m$ is $L$-Lipschitz continuous, i.e., for all $u, v \in \mathbb R^d$ we have \begin{equation} \label{eq:Lipsh} \left\| F_m(u)-F_m(v) \right\| \leq L\left\|u - v\right\|. \end{equation} Further, the collection of operators is $\overline{L}$-average Lipschitz continuous, i.e., for all $u, v \in \mathbb R^d$ it holds \begin{equation} \label{eq:avgLipsh} \textstyle \frac{1}{n}\sum\limits_{i=1}^n\left\| F_{m,i}(u)-F_{m,i}(v) \right\|^2 \leq \overline{L}^2\left\|u - v\right\|^2. \end{equation} \end{assumption} In the context of \eqref{eq:min} and \eqref{eq:minmax}, $L$-Lipschitzness of the operator means that the functions $f(z)$ and $f(x,y)$ are $L$-smooth.
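For intuition, Assumption \ref{as:Lipsh} (and the strong monotonicity assumption stated next) can be checked numerically on a toy instance. The sketch below (ours, with an illustrative constant) uses the scalar saddle function $f(x,y) = \frac{\mu}{2}x^2 + xy - \frac{\mu}{2}y^2$, whose operator $F(x,y) = (\mu x + y,\; \mu y - x)$ is $\mu$-strongly monotone and $L$-Lipschitz with $L = \sqrt{\mu^2+1}$:

```python
import random

MU = 0.5  # illustrative strong monotonicity constant

def F(z):
    """Saddle operator (df/dx, -df/dy) of f(x,y) = (MU/2)x^2 + x*y - (MU/2)y^2."""
    x, y = z
    return (MU * x + y, MU * y - x)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
L2 = MU**2 + 1  # squared Lipschitz constant of this linear operator
for _ in range(1000):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    d = (u[0] - v[0], u[1] - v[1])
    dF = tuple(a - b for a, b in zip(F(u), F(v)))
    # Strong monotonicity: <F(u)-F(v), u-v> >= MU * ||u-v||^2 (equality here).
    assert dot(dF, d) >= MU * dot(d, d) - 1e-9
    # Lipschitzness: ||F(u)-F(v)||^2 <= L^2 * ||u-v||^2 (also equality here).
    assert dot(dF, dF) <= L2 * dot(d, d) + 1e-9
```

Both bounds hold with equality for this operator, because its linear part splits into $\mu \mathbf{I}$ plus a rotation, which contributes nothing to the inner product $\langle F(u)-F(v), u-v \rangle$.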
\begin{assumption}[Strong monotonicity]\label{as:strmon} Each operator $F_m$ is $\mu$-strongly monotone, i.e., for all $u, v \in \mathbb R^d$ we have \begin{equation} \label{eq:strmon} \langle F_m(u) - F_m(v); u - v \rangle \geq \mu \| u-v\|^2. \end{equation} \end{assumption} In the context of \eqref{eq:min} and \eqref{eq:minmax}, strong monotonicity of $F$ means strong convexity of $f(z)$ and strong convexity-strong concavity of $f(x,y)$. \subsection{Communication and gossip} \label{sec:comm} Typically, decentralized communication is realized via a {\em gossip protocol} \cite{XIAO200465,boyd2006randomized,nedic2009distributed}, which amounts to matrix-vector multiplication with a gossip matrix $\mathbf{W}$, described below; this matrix differs between the fixed and time-varying cases. Let \begin{equation*} \mathcal{L} = \{\mathbf{z} = (z_1,\ldots,z_M)^\top \in (\mathbb R^d)^M: z_1 = \ldots=z_M \} \end{equation*} be the {\em consensus space}. \begin{assumption}[Fixed network \cite{scaman2017optimal}] \label{ass:fixed} For a fixed network, communication can be modeled via an undirected connected graph, $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = [M]$ are the vertices (workers) and $\mathcal{E} = \{(i,j) \, |\, i,j \in \mathcal{V} \}$ are the edges. Note that $(i,j) \in \mathcal{E}$ if and only if there exists a communication link between agents $i$ and $j$. The gossip matrix $\mathbf{W}$ satisfies the following three assumptions: \begin{enumerate} \item $\mathbf{W}$ is symmetric positive semi-definite; \item $\text{ker}\mathbf{W} \supset \mathcal{L}$; \item $\mathbf{W}$ is supported on the vertices and edges of the network only: $w_{i,j} \neq 0$ if and only if $i = j$ or $(i, j)\in \mathcal{E}$.
\end{enumerate} To characterize the matrix $\mathbf{W}$, which captures the properties of the network, we assume the normalization $\lambda_{\max}(\mathbf{W}) = 1$ for the maximum eigenvalue of $\mathbf{W}$, denote by $\lambda^+_{\min}(\mathbf{W})$ the minimum positive eigenvalue of $\mathbf{W}$, and define the characteristic number $\chi = \lambda_{\max}(\mathbf{W})/\lambda^+_{\min}(\mathbf{W}) = 1/\lambda^+_{\min}(\mathbf{W})$. \end{assumption} \begin{assumption}[Time-varying network \cite{nedich2016geometrically}] \label{ass:tv} For a time-varying network, at any moment $t$, the communication network can be modeled as a directed $B$-connected graph, $\mathcal{G}(t) = (\mathcal{V}, \mathcal{E}(t))$, where $\mathcal{E}(t) \subseteq \{(i,j) \, |\, i,j \in \mathcal{V} \}$ are directed edges. $B$-connectedness means that for any time $t$, the graph $\mathcal{G}_B(t)$ with the set of edges $\bigcup^{t+B-1}_{\tau = t} \mathcal{E}(\tau)$ is connected. To describe the gossip protocol for the time-varying case, we define the multi-consensus gossip matrix \begin{equation} \label{eq:tv_gossip} \textstyle \mathbf{W}_T (t) = \mathbf{I}_M - \prod\limits_{\tau = t}^{t+T-1} \mathbf{W}(\tau). \end{equation} One can observe that multiplication with the matrix $\mathbf{W}_T$ requires performing multiplication with $T$ gossip matrices $\mathbf{W}(t), \ldots, \mathbf{W}(t+T-1)$, i.e., it requires $T$ decentralized communications.
We further assume that the gossip matrices $\mathbf{W}(t)$ (for $\mathcal{G}(t)$) and $\mathbf{W}_B(t)$ satisfy: \begin{enumerate} \item $\mathbf{W}(t)$ is supported on the nodes and edges of the network: $w_{i,j}(t) \neq 0$ if and only if $i = j$ or $(i, j)\in \mathcal{E}(t)$; \item $\text{ker}\mathbf{W} (t) \supset \mathcal{L}$; \item $\text{range}\mathbf{W} (t) \subset \{\mathbf{z} \in (\mathbb R^d)^M: \sum\limits_{m=1}^M z_m = 0\}$; \item There exists a characteristic number $\chi \geq 1$ such that $\|\mathbf{W}_B (t) z - z \|^2 \leq (1 - \chi^{-1})\|z\|^2$ for all $z \in \text{range}\mathbf{W}_B(t) $. \end{enumerate} \end{assumption} \section{Lower Bounds} Our lower bounds apply to a specific class of algorithms which are, loosely speaking, allowed to communicate with neighbors and compute any local first-order information. We now give a formal definition. \begin{definition}[Oracle] \label{def:proc} Each agent $m$ has its own local memory $\mathcal{M}_{m}$ with initialization $\mathcal{M}_{m} = \{0\}$. $\mathcal{M}_{m}$ is updated as follows. At each iteration, the algorithm either performs local computations or communicates.\\ $\bullet$ \textbf{Local computation:} At each local iteration, device $m$ can sample uniformly and independently a batch $S_m$ of any size $b$ from $\{F_{m,i}\}$ and add to its $\mathcal{M}_m$ a finite number of points $z$ satisfying \begin{equation}\label{eq:oracle-opt-step} \textstyle z \in \text{span} \left\{z'~,~ \sum_{i_m \in S_m} F_{m, i_m}(z'')\right\} \end{equation} for given $z', z'' \in \mathcal{M}_{m}$. Such a call needs $b$ local computations to collect the batch.
A batch of size $n$ represents the full operator $F_m$;\\ $\bullet$ \textbf{Communication:} During communication rounds among the neighboring nodes, at communication time $t$, $\mathcal{M}_{m}$ is updated according to \begin{equation}\begin{aligned}\label{eq:oracle-comm} \textstyle \mathcal{M}_{m} \vcentcolon= \text{span}\left\{\bigcup _{(i,m) \in \mathcal{E}(t)} \mathcal{M}_{i} \right\}. \end{aligned}\end{equation} $\bullet$ \textbf{Output:} The final global output is calculated as $ \hat z \in \text{span}\left\{\bigcup _{m=1}^M \mathcal{M}_{m} \right\}. $ \end{definition} The structure of the above definition is typical for distributed lower bounds \cite{scaman2017optimal} and for stochastic lower bounds \cite{hendrikx2020optimal}. In particular, Definition \ref{def:proc} covers the standard approaches for working with stochastic problems, such as \algname{SGD} or variance reduction techniques (\algname{SVRG}, \algname{SARAH}). Note that while our algorithm can invoke the deterministic oracle (full $F_m$) in local computations, the work on lower bounds in the non-distributed case \cite{han2021lower} does not allow such a possibility. This greatly narrows the class of algorithms for which the lower bounds of \citet{han2021lower} are valid. In particular, those algorithms cannot do \algname{SVRG}-type updates.
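To illustrate how Definition \ref{def:proc} covers variance-reduced updates, here is a minimal sketch (our own toy example with hypothetical linear components, not the paper's code) of the \algname{SVRG}-type estimator used later in Alg.~\ref{alg:vrvi}, together with a check of its unbiasedness:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, b = 4, 20, 3

# Hypothetical linear components F_{m,i}(z) = A_i z on one worker (toy data).
A = rng.standard_normal((n, d, d))

def F_i(j, z):
    return A[j] @ z

def F_full(z):                       # F_m(z) = (1/n) sum_i F_{m,i}(z)
    return A.mean(axis=0) @ z

z = rng.standard_normal(d)           # current point
w = rng.standard_normal(d)           # snapshot point kept in local memory

def svrg_estimator():
    S = rng.integers(0, n, size=b)   # batch sampled uniformly at random
    return np.mean([F_i(j, z) - F_i(j, w) for j in S], axis=0) + F_full(w)

# Each output lies in the span allowed by the oracle, and the estimator is
# unbiased: averaging over many batches recovers the exact F_m(z).
avg = np.mean([svrg_estimator() for _ in range(20000)], axis=0)
assert np.allclose(avg, F_full(z), atol=0.08)
```

The estimator combines cheap batch evaluations at the current point with an occasional full evaluation at the snapshot, which is exactly the pattern the oracle definition permits.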
\begin{theorem}[Lower bound - fixed network] \label{th:lower_fixed} For any $L, \overline{L} \geq \mu >0$ and $\chi \geq 1$, $n \in \mathbb N$ and $K, N \in \mathbb N$, there exists a decentralized variational inequality (satisfying Assumptions \ref{as:Lipsh} and \ref{as:strmon}) on ${\mathbb R}^{d}$ (where $d$ is sufficiently large) with $z^* \neq 0$ over a fixed network (satisfying Assumption \ref{ass:fixed}) with a gossip matrix $\mathbf{W}$ and characteristic number $\chi$, such that for any output $\hat z$ of any procedure (Definition \ref{def:proc}) with $K$ communication rounds and $N$ local computations, the following estimates hold: \begin{equation*} \textstyle \|\hat z - z^*\|^2 = \Omega\left(\exp\left( -\frac{80}{1 + \sqrt{\frac{2L^2}{\mu^2} + 1}} \cdot \frac{K}{\sqrt{\chi}}\right) R_0^2\right); \end{equation*} \begin{equation*} \textstyle \|\hat z - z^*\|^2 = \Omega\left(\exp\left(-\frac{16}{n + \sqrt{\frac{2n \overline{L}^2}{\mu^2} + n^2}}\cdot N\right) R_0^2\right), \end{equation*} where $R_0^2 = \|z^0 - z^* \|^2$. 
\end{theorem} \begin{corollary} \label{cor:lower_fixed} In the setting of Theorem~\ref{th:lower_fixed}, the number of communication rounds and local computations required to obtain an $\varepsilon$-solution is lower bounded by \begin{align*} & \textstyle \Omega\left( \sqrt{\chi}\left(1 + \frac{L}{\mu}\right) \cdot \log \left(\frac{R_0^2}{\varepsilon}\right)\right) ~~\text{and} \\ & \textstyle \Omega\left(\left(n + \sqrt{n}\cdot \frac{\overline{L}}{\mu}\right) \cdot \log \left(\frac{R_0^2}{\varepsilon}\right)\right), ~~\text{respectively.} \end{align*} \end{corollary} \begin{theorem}[Lower bound - time-varying network] \label{th:lower_tv} For any $L, \overline{L} \geq \mu >0$ and $\chi \geq 3$, $n \in \mathbb N$ and $K, N \in \mathbb N$, there exists a decentralized variational inequality (satisfying Assumptions \ref{as:Lipsh} and \ref{as:strmon}) on $\mathbb R^{d}$ (where $d$ is sufficiently large) with $z^* \neq 0$ over a time-varying network (satisfying Assumption \ref{ass:tv}) with a sequence of gossip matrices $\mathbf{W}(t)$ and characteristic number $\chi$, such that for any output $\hat z$ of any procedure (Definition \ref{def:proc}) with $K$ communication rounds and $N$ local computations, the following estimates hold: \begin{equation*} \textstyle \|\hat z - z^*\|^2 = \Omega\left(\exp\left( -\frac{64}{\left(1 + \sqrt{\frac{2L^2}{\mu^2} + 1}\right)} \cdot \frac{K}{B\chi}\right) R_0^2\right); \end{equation*} \begin{equation*} \textstyle \|\hat z - z^*\|^2 = \Omega\left(\exp\left(-\frac{16}{n + \sqrt{\frac{2n \overline{L}^2}{\mu^2} + n^2}}\cdot N\right) R_0^2\right), \end{equation*} where $R_0^2 = \|z^0 - z^* \|^2$.
\end{theorem} \begin{corollary} \label{cor:lower_tv} In the setting of Theorem~\ref{th:lower_tv}, the number of communication rounds and local computations required to obtain an $\varepsilon$-solution is lower bounded by \begin{align*} & \textstyle \Omega\left( B\chi\left(1 + \frac{L}{\mu}\right) \cdot \log \left(\frac{R_0^2}{\varepsilon}\right)\right) ~~\text{and} \\ & \textstyle \Omega\left(\left(n + \sqrt{n}\cdot \frac{\overline{L}}{\mu}\right) \cdot \log \left(\frac{R_0^2}{\varepsilon}\right)\right), ~~\text{respectively.} \end{align*} \end{corollary} See the proofs of Theorems \ref{th:lower_fixed} and \ref{th:lower_tv} in Appendix \ref{sec:pr_lb}. Note that in the time-varying case, the lower bounds for communication differ by a factor of $B$ from the estimates previously encountered in the literature \cite{beznosikov2021}. This is because we consider a more general setup with a $B$-connected graph (see Assumption \ref{ass:tv}), while the existing literature on lower bounds focuses on the simpler case $B=1$. \section{Optimal Algorithms} Since in the decentralized gossip protocol each local worker $m$ stores its own vector $z_m$, we consider the problem \begin{align} \label{eq:VI_lift} \text{Find} \quad \mathbf{z}^* \in (\mathbb R^d)^M \quad & \text{such that} \notag\\ \langle \mathbf{F}(\mathbf{z}^*), \mathbf{z} - \mathbf{z}^* \rangle + \mathbf{g}(\mathbf{z}) &- \mathbf{g}(\mathbf{z}^*) \geq 0, \forall \mathbf{z} \in (\mathbb R^d)^M, \end{align} where we use new notation: $\mathbf{z} = (z_1,\ldots,z_M)^\top$ and $\mathbf{z}^* = (z^*_1,\ldots,z^*_M)^\top$.
Additionally, we introduce the lifted operator $\mathbf{F} : (\mathbb R^d)^M \to (\mathbb R^d)^M $ given as \begin{equation*} \mathbf{F}(\mathbf{z}) = (F_1(z_1),\ldots,F_M(z_M))^\top, \end{equation*} and the lifted function $\mathbf{g} : (\mathbb R^d)^M \to \mathbb R \cup \{ + \infty\}$ defined by \begin{equation*} \textstyle \mathbf{g}(\mathbf{z}) = \frac{1}{M}\sum\limits_{m=1}^M g(z_m). \end{equation*} One can note that \eqref{eq:VI_lift} is a set of $M$ unrelated variational inequalities, each with its own variable. In contrast, the original problem \eqref{eq:VI} + \eqref{eq:fs} is a sum of variational inequalities sharing the same variable: $\sum_{m=1}^M \left[\langle F_m(z^*), z - z^* \rangle + \tfrac{1}{M}g(z) - \tfrac{1}{M}g(z^*)\right]$. To eliminate this issue and recover problem \eqref{eq:VI} + \eqref{eq:fs}, one can consider the following modification of \eqref{eq:VI_lift}: \begin{align} \label{eq:VI_new} \text{Find} \quad \mathbf{z}^* &\in \mathcal{L} \quad \text{such that} \notag\\ \langle \mathbf{F}(\mathbf{z}^*), \mathbf{z} - \mathbf{z}^* \rangle + \mathbf{g}(\mathbf{z}) &- \mathbf{g}(\mathbf{z}^*) \geq 0, ~~ \forall \mathbf{z} \in \mathcal{L}, \end{align} where $\mathcal{L}$ is the consensus space. Problem \eqref{eq:VI_new} is equivalent to \eqref{eq:VI} + \eqref{eq:fs}. Due to Assumptions \ref{as:Lipsh} and \ref{as:strmon}, $\mathbf{F}$ is $L$-Lipschitz continuous, $\overline{L}$-average Lipschitz continuous and $\mu$-strongly monotone. \subsection{Fixed networks} We present Algorithm~\ref{alg:vrvi} for fixed networks. The next result gives the iteration complexity of Alg.~\ref{alg:vrvi}. \begin{algorithm}[h] \caption{} \label{alg:vrvi} \begin{algorithmic}[1] \STATE {\bf Parameters:} Stepsizes $\eta, \theta>0$, momentums $\alpha, \beta, \gamma$, batchsize $b \in \{1,\ldots,n\}$, probability $p \in (0,1)$ \STATE {\bf Initialization:} Choose $\mathbf{z}^0 = \mathbf{w}^0 \in (\dom g)^M$, $\mathbf{y}^0 \in \mathcal{L}^\perp$.
Put $\mathbf{z}^{-1} = \mathbf{z}^0, \mathbf{w}^{-1} = \mathbf{w}^0$, $\mathbf{y}^{-1} = \mathbf{y}^0$ \FOR{$k=0,1,2\ldots$} \STATE Sample $j_{m,1}^k, \ldots,j_{m,b}^k$ independently from $[n]$ \STATE $S^k = \{j_{m,1}^k, \ldots,j_{m,b}^k\}$ \STATE Sample $j_{m,1}^{k+1/2}, \ldots,j_{m,b}^{k+1/2}$ independently from $[n]$ \STATE $S^{k+1/2} = \{j_{m,1}^{k+1/2}, \ldots,j_{m,b}^{k+1/2}\}$ \STATE $\delta^k = \frac{1}{b}\sum_{j\in S^k} \Big(\mathbf{F}_j(\mathbf{z}^k) - \mathbf{F}_j (\mathbf{w}^{k-1}) $ \\\hspace{1.8cm} $+ \alpha[\mathbf{F}_j(\mathbf{z}^k) - \mathbf{F}_j(\mathbf{z}^{k-1})]\Big) + \mathbf{F}(\mathbf{w}^{k-1})$ \label{vrvi:line:Delta} \STATE $\Delta^k = \delta^k - (\mathbf{y}^k + \alpha(\mathbf{y}^k - \mathbf{y}^{k-1}))$ \label{vrvi:line:g} \STATE $\mathbf{z}^{k+1} = \mathrm{prox}_{\eta \mathbf{g}} (\mathbf{z}^k + \gamma (\mathbf{w}^k - \mathbf{z}^k)- \eta \Delta^k)$\label{vrvi:line:x} \STATE $\Delta^{k+1/2} = \frac{1}{b}\sum_{j\in S^{k+1/2}} \left(\mathbf{F}_j(\mathbf{z}^{k+1}) - \mathbf{F}_j (\mathbf{w}^{k}) \right)$\\\hspace{5.9cm} $+ \mathbf{F}(\mathbf{w}^{k})$ \label{vrvi:line:Delta1/2} \STATE $\mathbf{y}^{k+1} = \mathbf{y}^k - \theta (\mathbf{W} \otimes \mathbf{I}_d )(\mathbf{z}^{k+1} - \beta(\Delta^{k+1/2} - \mathbf{y}^k))$\label{dvi:line:y} \STATE $\mathbf{w}^{k+1} = \begin{cases} \mathbf{z}^{k},& \text{with probability }p\\ \mathbf{w}^{k},& \text{with probability }1-p \end{cases}$\label{vrvi:line:w} \ENDFOR \end{algorithmic} \end{algorithm} \begin{theorem} \label{th:ALg1_conv} Consider the problem \eqref{eq:VI_new} (or \eqref{eq:VI} + \eqref{eq:fs}) under Assumptions~\ref{as:Lipsh} and \ref{as:strmon} over a fixed graph $\mathcal{G}$ (Assumption \ref{ass:fixed}) with a gossip matrix $\mathbf{W}$. Let $\{\mathbf{z}^k\}$ be the sequence generated by Alg.~\ref{alg:vrvi} with tuning of $\eta, \theta, \alpha, \beta, \gamma$ as described in Appendix \ref{sec:pr_oa_fixed}. 
Then, given $\varepsilon>0$, the number of iterations for $\|\mathbf{z}^k - \mathbf{z}^*\|^2 \leq \varepsilon$ is \begin{equation*} \textstyle \mathcal{O}\left( \left[\frac{1}{p} +\chi + \frac{1}{\sqrt{pb}}\frac{\overline{L}}{\mu}+ \sqrt{\chi} \frac{L}{\mu}\right] \log \frac{1}{\varepsilon} \right). \end{equation*} \end{theorem} See the proof in Appendix \ref{sec:pr_oa_fixed}. Let us discuss the results of Theorem \ref{th:ALg1_conv}. First of all, we are interested in how to obtain the communication and local computation complexities from the iteration complexity. Each iteration requires (on average) $\mathcal{O}(b + p n)$ local computations: we sample a batch of size $b$ twice, and with probability $p$ we update the point $\mathbf{w}^{k+1}$ by $\mathbf{z}^k$, which requires computing the full $\mathbf{F}$ at the next iteration. As the optimal choice, one can take $p \sim \nicefrac{b}{n}$. With such a choice of $p$ we have the following local complexity \begin{equation*} \textstyle \mathcal{O}\left( \left[n + b\chi + \sqrt{n}\frac{\overline{L}}{\mu}+ b\sqrt{\chi} \frac{L}{\mu} \right] \log \frac{1}{\varepsilon}\right), \end{equation*} and communication complexity (since at each iteration Alg.~\ref{alg:vrvi} performs $\mathcal{O}(1)$ communications) \begin{equation*} \textstyle \mathcal{O}\left( \left[\frac{n}{b} + \chi + \frac{\sqrt{n}}{b}\frac{\overline{L}}{\mu}+ \sqrt{\chi} \frac{L}{\mu}\right] \log \frac{1}{\varepsilon} \right). \end{equation*} Hence, with $b = 1$ we obtain the same complexities as in Table \ref{tab:comparison0}. Depending on $\max\{\sqrt{n}; \sqrt{\chi}\}$, either the local computations or the decentralized communications are optimal. One can note that it is enough to take $b \geq \nicefrac{\overline{L} \sqrt{n} }{L}$ to guarantee the optimal communication complexity (see Corollary \ref{cor:lower_fixed}), but then the local computations are non-optimal.
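The bookkeeping above (per-iteration cost $\mathcal{O}(b + pn)$ with $p \sim b/n$) can be sanity-checked numerically; the constants below are hypothetical and chosen only for illustration:

```python
import math

def complexities(n, b, chi, L, Lbar, mu):
    """Local-computation and communication complexity factors of Alg. 1
    with p = b / n (constants and the log(1/eps) factor dropped)."""
    local = n + b * chi + math.sqrt(n) * Lbar / mu + b * math.sqrt(chi) * L / mu
    comm = n / b + chi + (math.sqrt(n) / b) * Lbar / mu + math.sqrt(chi) * L / mu
    return local, comm

# Hypothetical problem constants, chosen so that Lbar * sqrt(n) / L > 1.
n, chi, L, Lbar, mu = 10_000, 100, 50.0, 500.0, 1.0

local1, comm1 = complexities(n, 1, chi, L, Lbar, mu)      # b = 1
b_big = int(Lbar * math.sqrt(n) / L)                      # b = Lbar sqrt(n) / L
local2, comm2 = complexities(n, b_big, chi, L, Lbar, mu)

# A larger batch brings the communication complexity down to (a constant times)
# chi + sqrt(chi) * L / mu, at the price of more local computations.
assert comm2 < comm1 and comm2 <= 3 * (chi + math.sqrt(chi) * L / mu)
assert local2 > local1
```

This makes the trade-off explicit: increasing $b$ trades local work for fewer communications, exactly as the two displayed complexities suggest.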
To make the algorithm optimal both in terms of communications and local computations, we need to slightly modify it. This can be done using Chebyshev acceleration (see Alg.~\ref{alg:chebyshev_gossip} in Appendix \ref{sec:cheb}). Following \cite{scaman2017optimal}, we can construct a polynomial $P$ such that 1) $P(\mathbf{W})$ is a gossip matrix, 2) multiplication by $P(\mathbf{W}) \otimes \mathbf{I}_d$ requires $\sqrt{\chi(\mathbf{W})}$ multiplications by $\mathbf{W}$ (i.e., $\sqrt{\chi(\mathbf{W})}$ communication rounds), and 3) $\chi(P(\mathbf{W})) \leq 4$. We then modify Alg.~\ref{alg:vrvi} by replacing $\mathbf{W}$ with $P(\mathbf{W})$, and obtain the following. \begin{theorem} \label{th:ALg1_cheb_conv} Consider the problem \eqref{eq:VI_new} (or \eqref{eq:VI} + \eqref{eq:fs}) under Assumptions~\ref{as:Lipsh} and \ref{as:strmon} over a fixed connected graph $\mathcal{G}$ (Assumption \ref{ass:fixed}) with a gossip matrix $\mathbf{W}$. Let $\{\mathbf{z}^k\}$ be the sequence generated by Alg.~\ref{alg:vrvi} with Chebyshev polynomial $P(\mathbf{W})$ as a gossip matrix and with tuning of $\eta, \theta, \alpha, \beta, \gamma$ as described in Appendix \ref{sec:pr_oa_fixed}. Then, given $\varepsilon>0$, the number of iterations for $\|\mathbf{z}^k - \mathbf{z}^*\|^2 \leq \varepsilon$ is \begin{equation*} \textstyle \mathcal{O}\left( \left[\frac{1}{p} + \frac{1}{\sqrt{pb}}\frac{\overline{L}}{\mu}+ \frac{L}{\mu} \right] \log \frac{1}{\varepsilon} \right). \end{equation*} \end{theorem} In this case, the communication cost of one iteration is $\sqrt{\chi}$, and the local complexity (on average) is still $\mathcal{O}(b + p n)$.
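For intuition about Chebyshev acceleration, the following toy sketch (our own construction based on the classical Chebyshev residual polynomial; the algorithm in the appendix may differ in details) builds a degree-$\lceil\sqrt{\chi}\rceil$ polynomial $P$ of a gossip matrix and verifies that $\chi(P(\mathbf{W})) \leq 4$:

```python
import numpy as np

def chebyshev_polynomial_gossip(W, K):
    """Return P(W), a degree-K Chebyshev residual polynomial of W with
    P(0) = 0; applying it costs K multiplications by W (K communications)."""
    eig = np.linalg.eigvalsh(W)
    lmax = eig[-1]
    lmin = min(e for e in eig if e > 1e-10)   # smallest positive eigenvalue
    I = np.eye(len(W))
    # Affine map sending [lmin, lmax] to [-1, 1] and 0 to y0 > 1.
    Y = ((lmax + lmin) * I - 2 * W) / (lmax - lmin)
    y0 = (lmax + lmin) / (lmax - lmin)
    # Three-term recurrence T_{k+1} = 2 x T_k - T_{k-1}, at the matrix and at y0.
    T_prev, T_cur = I, Y
    t_prev, t_cur = 1.0, y0
    for _ in range(K - 1):
        T_prev, T_cur = T_cur, 2 * Y @ T_cur - T_prev
        t_prev, t_cur = t_cur, 2 * y0 * t_cur - t_prev
    return I - T_cur / t_cur

# Toy gossip matrix: normalized Laplacian of a path graph on M nodes
# (a hypothetical network, chosen because its chi is large).
M = 20
W = np.diag([1.0] + [2.0] * (M - 2) + [1.0]) - np.eye(M, k=1) - np.eye(M, k=-1)
W /= np.linalg.eigvalsh(W)[-1]
chi = 1.0 / min(e for e in np.linalg.eigvalsh(W) if e > 1e-10)

P = chebyshev_polynomial_gossip(W, K=int(np.ceil(np.sqrt(chi))))
pos = [e for e in np.linalg.eigvalsh(P) if e > 1e-8]
chi_P = max(pos) / min(pos)
assert chi_P <= 4.0 < chi   # chi(P(W)) <= 4 after only sqrt(chi) communications
```

The point of the construction is that the positive spectrum of $P(\mathbf{W})$ is squeezed into a narrow band around 1 while the kernel (the consensus space) is preserved, so the effective condition number of the communication step becomes a constant.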
Then with $p = \nicefrac{b}{n}$ we get the following local complexity \begin{equation*} \textstyle \mathcal{O}\left(\left[ n + \sqrt{n}\frac{\overline{L}}{\mu}+ b\frac{L}{\mu} \right] \log \frac{1}{\varepsilon}\right), \end{equation*} and communication complexity \begin{equation*} \textstyle \mathcal{O}\left(\left[\sqrt{\chi} \frac{n}{b} + \sqrt{\chi}\frac{\sqrt{n}}{b}\frac{\overline{L}}{\mu}+ \sqrt{\chi} \frac{L}{\mu} \right] \log \frac{1}{\varepsilon} \right). \end{equation*} To get the optimal results from Table \ref{tab:comparison0} we just need to take $b = \nicefrac{\overline{L} \sqrt{n} }{L}$. In contrast to the algorithms of \citet{beznosikov2021distributed, beznosikov2020distributed} (the closest papers in terms of theoretical convergence), our Alg.~\ref{alg:vrvi} needs multi-consensus/Chebyshev acceleration only to achieve both optimal rates simultaneously, and can work without these additional procedures. The algorithms of \cite{beznosikov2021distributed, beznosikov2020distributed} require $\mathcal{O}(\sqrt{\chi} \log \varepsilon^{-1})$ iterations of Chebyshev acceleration, which makes them less practical. \subsection{Time-varying networks} We present Alg.~\ref{2dvi2:alg} for time-varying networks. \begin{algorithm}[h] \caption{} \label{2dvi2:alg} \begin{algorithmic}[1] \STATE {\bf Parameters:} Stepsizes $\eta_z, \eta_y, \eta_x, \theta>0$, momentums $\alpha, \gamma, \omega, \tau$, parameters $\nu, \beta$, batchsize $b \in \{1,\ldots,n\}$, probability $p \in (0,1)$ \STATE {\bf Initialization:} Choose $\mathbf{z}^0 = \mathbf{w}^0 \in (\dom g)^M$, $\mathbf{y}^0 \in (\mathbb R^d)^M$, $\mathbf{x}^0 \in \mathcal{L}^\perp$.
Put $\mathbf{z}^{-1} = \mathbf{z}^0, \mathbf{w}^{-1} = \mathbf{w}^0$, $\mathbf{y}_f = \mathbf{y}^{-1} = \mathbf{y}^0$, $\mathbf{x}_f = \mathbf{x}^{-1} = \mathbf{x}^0$, $m_0 = \textbf{0}^{dM}$ \FOR{$k=0,1,2,\ldots$} \STATE Sample $j_{m,1}^k, \ldots,j_{m,b}^k$ independently from $[n]$ \STATE $S^k = \{j_{m,1}^k, \ldots,j_{m,b}^k\}$ \STATE Sample $j_{m,1}^{k+1/2}, \ldots,j_{m,b}^{k+1/2}$ independently from $[n]$ \STATE $S^{k+1/2} = \{j_{m,1}^{k+1/2}, \ldots,j_{m,b}^{k+1/2}\}$ \STATE $\delta^k = \frac{1}{b}\sum_{j\in S^k} \Big(\mathbf{F}_j(\mathbf{z}^k) - \mathbf{F}_j (\mathbf{w}^{k-1}) $ \\\hspace{1.8cm} $+ \alpha[\mathbf{F}_j(\mathbf{z}^k) - \mathbf{F}_j(\mathbf{z}^{k-1})]\Big) + \mathbf{F}(\mathbf{w}^{k-1})$ \STATE $\Delta_z^k = \delta^k - \nu \mathbf{z}^k - \mathbf{y}^k - \alpha(\mathbf{y}^k - \mathbf{y}^{k-1})$ \STATE $\mathbf{z}^{k+1} = \mathrm{prox}_{\eta_z \mathbf{g}}(\mathbf{z}^k + \omega (\mathbf{w}^k - \mathbf{z}^k) - \eta_z \Delta_z^k)$ \STATE $\mathbf{y}_c^k = \tau \mathbf{y}^k + (1-\tau)\mathbf{y}_f^k$ \STATE $\mathbf{x}_c^k = \tau \mathbf{x}^k + (1-\tau)\mathbf{x}_f^k$ \STATE $\Delta_y^k = \nu^{-1} (\mathbf{y}_c^k + \mathbf{x}_c^k) + \mathbf{z}^{k+1} + \gamma (\mathbf{y}^k + \mathbf{x}^k + \nu \mathbf{z}^k)$ \STATE $\delta^{k+1/2} = \frac{1}{b}\sum_{j\in S^{k+1/2}} \left(\mathbf{F}_j(\mathbf{z}^{k+1}) - \mathbf{F}_j (\mathbf{w}^{k}) \right)$\\\hspace{5.9cm} $+ \mathbf{F}(\mathbf{w}^{k})$ \STATE $\Delta_x^k = \nu^{-1} (\mathbf{y}_c^k + \mathbf{x}_c^k) + \beta(\mathbf{x}^k + \delta^{k+1/2})$ \STATE $\mathbf{y}^{k+1} = \mathbf{y}^k - \eta_y \Delta_y^k$ \STATE $\mathbf{x}^{k+1} = \mathbf{x}^k - (\mathbf{W}_T(Tk) \otimes \mathbf{I}_d) (\eta_x\Delta_x^k + m^k)$ \STATE $m^{k+1} = \eta_x\Delta_x^k + m^k$\\\hspace{1.7cm} $- (\mathbf{W}_T(Tk) \otimes \mathbf{I}_d) (\eta_x\Delta_x^k + m^k)$ \STATE $\mathbf{y}_f^{k+1} = \mathbf{y}_c^k + \tau(\mathbf{y}^{k+1} - \mathbf{y}^k)$ \STATE $\mathbf{x}_f^{k+1} = \mathbf{x}_c^k - \theta(\mathbf{W}_T(Tk) \otimes 
\mathbf{I}_d)(\mathbf{y}_c^k + \mathbf{x}_c^k)$ \STATE $\mathbf{w}^{k+1} = \begin{cases} \mathbf{z}^{k},& \text{with probability }p\\ \mathbf{w}^{k},& \text{with probability }1-p \end{cases}$ \ENDFOR \end{algorithmic} \end{algorithm} The algorithm needs to compute multiplications with $\mathbf{W}_T$ using \eqref{eq:tv_gossip}, which requires $T$ communications. The next result gives the iteration complexity of Alg.~\ref{2dvi2:alg}. \begin{theorem} \label{th:Alg2_conv} Consider the problem \eqref{eq:VI_new} (or \eqref{eq:VI} + \eqref{eq:fs}) under Assumptions~\ref{as:Lipsh} and \ref{as:strmon} over a sequence of time-varying graphs $\mathcal{G}(k)$ (Assumption \ref{ass:tv}) with gossip matrices $\mathbf{W}(k)$. Let $\{\mathbf{z}^k\}$ be the sequence generated by Alg.~\ref{2dvi2:alg} with $T \geq B$ and tuning of parameters as described in Appendix \ref{sec:pr_oa_tv}. Let the choice of $T$ guarantee the contraction property (Assumption \ref{ass:tv}, point 4) with $\chi(T)$. Then, given $\varepsilon>0$, the number of iterations for $\|\mathbf{z}^k - \mathbf{z}^*\|^2 \leq \varepsilon$ is \begin{align*} \textstyle \mathcal{O}\left( \left[\chi^2(T) + \frac{1}{p} + \chi(T)\frac{L}{\mu} + \frac{1}{\sqrt{bp}}\frac{\overline{L}}{\mu}\right] \log \frac{1}{\varepsilon} \right). \end{align*} \end{theorem} Note that an important detail of the method is the requirement $T \geq B$. This limitation is due to the fact that the network is $B$-connected. In particular, for $B >1$ it can happen that some communications use empty graphs. Therefore, the requirement $T \geq B$ is natural to guarantee the contraction property (Assumption \ref{ass:tv}, point 4). This means that if $B > 1$ we have to use multi-consensus \eqref{eq:tv_gossip}. But with $B=T=1$, we can avoid multi-consensus; let us consider this case first. In the same way as in the fixed graph case, we choose $p = \nicefrac{b}{n}$.
Then we get the following estimates on communications and local calls \begin{align*} \textstyle \mathcal{O}\left( \left[\chi^2 + \frac{n}{b} + \chi\frac{L}{\mu} + \frac{\sqrt{n}}{b}\frac{\overline{L}}{\mu}\right] \log \frac{1}{\varepsilon} \right). \end{align*} If we put $b=1$, we have the same estimates as in Table~\ref{tab:comparison0}. Now we consider the general case with arbitrary $B$. We use a multi-gossip step and take $T > 1$. In particular, let us choose $T = B \cdot \lceil \chi \ln 2\rceil$. Then, using \eqref{eq:tv_gossip} and point 4 of Assumption~\ref{ass:tv}, we can guarantee that $ \|\mathbf{W}_T (t) z - z \|^2 \leq \frac{1}{2}\|z\|^2. $ Therefore, $\chi(T) = 2$, but to achieve this we need $T$ communications per iteration. With $p=\nicefrac{b}{n}$ and $b = \nicefrac{\overline{L} \sqrt{n} }{L}$, the iteration complexity from Theorem \ref{th:Alg2_conv} can be rewritten as follows \begin{align*} \textstyle \mathcal{O}\left( \left[1 + \frac{\sqrt{n} L}{\overline{L}} + \frac{L}{\mu}\right] \log \frac{1}{\varepsilon} \right). \end{align*} Using that per iteration we make $\mathcal{O}(B \cdot \lceil \chi \ln 2\rceil)$ communications and $\mathcal{O}\left(\nicefrac{\overline{L} \sqrt{n} }{L}\right)$ local computations, we get \begin{align*} \textstyle \mathcal{O}\left( \left[B \chi + B\chi \frac{\sqrt{n} L}{\overline{L}} + B\chi \frac{L}{\mu}\right] \log \frac{1}{\varepsilon} \right) \end{align*} communications and \begin{align*} \textstyle \mathcal{O}\left( \left[n + \frac{\overline{L} \sqrt{n} }{L} + \sqrt{n}\frac{\overline{L}}{\mu}\right] \log \frac{1}{\varepsilon} \right) \end{align*} local calls. These results are reflected in Table \ref{tab:comparison0}. \section{Experiments} We now perform several experiments with the goal of corroborating our theoretical results. Note though that we are the first to consider the decentralized stochastic (finite-sum) setting for VIs, and hence there are no directly competing methods.
Therefore, we make comparisons in the non-distributed finite-sum setting and in the decentralized deterministic setting separately. \subsection{Variance reduction} In this section, we compare the main methods for solving strongly monotone non-distributed stochastic (finite-sum) variational inequalities with the non-distributed version of our Alg.~\ref{alg:vrvi}. \textbf{Problem}: We first consider robust linear regression:\begin{align} \label{bilinear} \textstyle \min\limits_{x\in \triangle^d} & \textstyle \max\limits_{y\in \triangle^d} f(x,y) = \frac{1}{n} \sum \limits_{i=1}^n f_i(x,y), \\ f_i(x,y) \vcentcolon= & \textstyle x^\top \mathbf{A}_i y + a^\top_i x + b^\top_i y + \frac{\lambda}{2} \|x\|^2 - \frac{\lambda}{2} \|y\|^2 \notag, \end{align} where $\triangle^d$ is the unit simplex in $\mathbb R^d$. We generated positive definite matrices $\mathbf{A}_i$ and vectors $a_i, b_i$ randomly. The dimensions of the problem are $d_x = 50$ and $d_y = 50$. \textbf{Setting.} For comparison, we took methods from Table \ref{tab:comparison2}. In particular, we chose \algname{EG-Alc-Alg1} and \algname{EG-Alc-Alg2} from \cite{alacaoglu2021stochastic}, and \algname{EG-Car} from \cite{carmon2019variance}. All methods are tuned as described in the theory of the corresponding papers. We run all methods with different batch sizes. The comparison criterion is the number of epochs (one epoch equals one full gradient computation).
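As a sanity check on this setup, the (unconstrained) operator of \eqref{bilinear} is exactly $\lambda$-strongly monotone, since the bilinear cross terms cancel in the monotonicity inner product; a short sketch with random data (our own illustration, not the experiment code):

```python
import numpy as np

rng = np.random.default_rng(2)
d, lam = 50, 0.1
A = rng.standard_normal((d, d))
a, b = rng.standard_normal(d), rng.standard_normal(d)

def F(z):
    """Operator of the saddle problem f(x, y) = x^T A y + a^T x + b^T y
    + lam/2 ||x||^2 - lam/2 ||y||^2, namely F = (grad_x f, -grad_y f)."""
    x, y = z[:d], z[d:]
    return np.concatenate([A @ y + a + lam * x, -(A.T @ x + b) + lam * y])

# The bilinear cross terms cancel, so <F(u) - F(v), u - v> = lam ||u - v||^2
# holds exactly for every pair (u, v).
u, v = rng.standard_normal(2 * d), rng.standard_normal(2 * d)
inner = np.dot(F(u) - F(v), u - v)
assert np.isclose(inner, lam * np.linalg.norm(u - v) ** 2)
```

This is why the regularization parameter $\lambda$ directly plays the role of $\mu$ in Assumption \ref{as:strmon} for this experiment.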
\begin{figure}[!h] \centering \captionof{figure}{Comparison of the epoch complexities of Alg.~\ref{alg:vrvi}, \algname{EG-Alc-Alg1}, \algname{EG-Alc-Alg2} and \algname{EG-Car} on \eqref{bilinear}.} \label{fig:comp1} \vspace{-0.3cm} \begin{minipage}[][][b]{0.5\textwidth} \centering \includegraphics[width=0.48\textwidth]{plots/b1.pdf} \includegraphics[width=0.48\textwidth]{plots/b2.pdf} \includegraphics[width=0.48\textwidth]{plots/b3.pdf} \includegraphics[width=0.48\textwidth]{plots/b5.pdf} \end{minipage} \end{figure} \textbf{Results.} The plots in Fig.~\ref{fig:comp1} show that our Alg.~\ref{alg:vrvi} outperforms the other methods for any batch size (including $b=1$). \subsection{Decentralized methods} In this section, we compare the state-of-the-art methods for solving strongly monotone decentralized variational inequalities over fixed networks with our Alg.~\ref{alg:vrvi}. \textbf{Problem}: We now consider another variant of robust linear regression: \begin{align}\label{eq:regression} \textstyle \min\limits_{w} \max \limits_{\|r\|\leq e} \frac{1}{N} \sum\limits_{i=1}^N (w^\top (x_i + r_i) - y_i)^2 + \frac{\lambda}{2} \| w\|^2 - \frac{\beta}{2} \|r \|^{2}, \end{align} where $w$ are the model weights, $\{(x_i, y_i)\}_{i=1}^N$ are the training data pairs, $r_i$ are noise vectors, and $\lambda$ and $\beta$ are regularization parameters. The adversarial noise vectors $r_i$ resist training the model, thereby inducing more robustness and stability. \textbf{Setting.} For comparison, we took methods from Table~\ref{tab:comparison0} for decentralized problems over fixed networks. In particular, we chose \algname{EGD-GT} from \cite{Mukherjee2020:decentralizedminmax}, \algname{EGD-Con} from \cite{beznosikov2020distributed} and \algname{Sliding} from \cite{beznosikov2021distributed}. For a fair comparison, we consider the deterministic setup, i.e., each worker can compute full gradients.
We took the \texttt{a7a} and \texttt{a9a} datasets from LibSVM \cite{chang2011libsvm} and divided them unevenly across the workers. For the communication networks we chose the star and the grid topologies. All methods are tuned as described in the theory of the corresponding papers. The comparison criterion is the number of communication rounds. \begin{figure}[!h] \centering \captionof{figure}{Comparison of the communication complexities of Alg.~\ref{alg:vrvi}, \algname{EGD-GT}, \algname{EGD-Con} and \algname{Sliding} on \eqref{eq:regression}.} \label{fig:comp2} \vspace{-0.3cm} \begin{minipage}[][][b]{0.5\textwidth} \centering \includegraphics[width=0.48\textwidth]{plots/a7a_grid.pdf} \includegraphics[width=0.48\textwidth]{plots/a7a_star.pdf} \includegraphics[width=0.48\textwidth]{plots/a9a_grid.pdf} \includegraphics[width=0.48\textwidth]{plots/a9a_star.pdf} \end{minipage} \end{figure} \textbf{Results.} The plots in Fig.~\ref{fig:comp2} show that our Alg.~\ref{alg:vrvi} outperforms the other methods. In particular, it is ahead of \algname{Sliding} from \cite{beznosikov2021distributed}, which has a fast theoretical communication complexity; however, that advantage materializes when the data are relatively homogeneous and uniformly divided across the devices, which is not the case in our setting.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,281
Clear Creek Furniture store specializes in creating custom furniture to meet your wants and needs. If you don't see the dining room furniture that you are looking for, then chances are we can build it for you. Come to our Ohio amish furniture store to talk about customizing the look you want.
{ "redpajama_set_name": "RedPajamaC4" }
3,944
{"url":"https:\/\/www.sawaal.com\/numbers-questions-and-answers\/476-0-is-divisible-by-both-3-and-11the-non-zero-digits-in-the-hundreds-and-tens-places-are-respectiv_1451","text":"20\nQ:\n\n# 476 ** 0 is divisible by both 3 and 11.The non zero digits in the hundred's and ten's places are respectively:\n\n A) 6 and 2 B) 8 and 2 C) 6 and 5 D) 8 and 5\n\nExplanation:\n\nLet the number \u00a0be 476ab0\n\n476ab0 is divisible by 3\n\n=> 4 + 7 + 6 + a + b + 0 is divisible by 3\n\n=> 17 + a + b is divisible by 3 ------------------------(i)\n\n476ab0 is divisible by 11\n\n[(4 + 6 + b) -(7 + a + 0)] is 0 or divisible by 11\n\n=> [3 + (b - a)] is 0 or divisible by 11 \u00a0--------------(ii)\n\nSubstitute the values of a and b with the values given in the choices and select the values which satisfies both Equation 1 and Equation 2.\n\nif a=6 and b=2,\n\n17 + a + b = 17 + 6 + 2 = 25 which is not divisible by 3 --- Does not meet equation(i).Hence this is not the answer\n\nif a=8 and b=2,\n\n17 + a + b = 17 + 8 + 2 = 27 which is divisible by 3 --- Meet equation(i)\n\n[3 + (b - a)] = [3 + (2 - 8)] = -3 which is neither 0 nor divisible by 11---Does not meet equation(ii).Hence this is not the answer\n\nif a=6 and b=5,\n\n17 + a + b = 17 + 6 + 5 = 28 which is not divisible by 3 --- Does not meet equation (i) .Hence this is not the answer\n\nif a=8 and b=5,\n\n17 + a + b = 17 + 8 + 5 = 30 which is divisible by 3 --- Meet equation 1\n\n[3 + (b - a)] = [3 + (5 - 8)] = 0 ---Meet equation 2\n\nSince these values satisfies both equation 1 and equation 2, this is the answer\n\nQ:\n\nIn math questions answers each questions are solved with explanation. The questions are based from different topics. Care has been taken to solve the questions in such a way that students can understand each and every step\n\n1. 
Q: Which is greater than 4?

A) 5 B) -5 C) -1/2 D) -25

Answer: A.

Explanation: 5 is the only option greater than 4.

Q: Which of the following is the largest?

A) 12/49 B) 7/30 C) 13/56 D) 11/46

Answer: A.

Explanation: Write each fraction as the reciprocal of its inverse:

12/49 = 1/(49/12) = 1/4.083
7/30 = 1/(30/7) = 1/4.285
13/56 = 1/(56/13) = 1/4.307
11/46 = 1/(46/11) = 1/4.1818

Among these, 1/4.083 has the smallest denominator, so 12/49 is the largest fraction.

Q: 7, 16, 36, 78, 144, ? Which number comes in place of the question mark?

A) 168 B) 196 C) 222 D) 256

Q: Which of the following is a rational number?

A) 0.241 B) 1.732 C) 4 D) All of the above

Answer: D.

Explanation: A rational number is any number that can be expressed as a fraction of two integers P/Q where Q is not equal to zero. Every integer is a rational number since Q can be 1, so 4 = 4/1. The terminating decimals are rational as well: 0.241 = 241/1000 and 1.732 = 1732/1000. Hence all of the given options are rational numbers.

Q: Which expression is equivalent to $i^{233}$?

A) i B) 1 C) -i D) -1

Answer: A.

Explanation: Powers of i repeat with period 4. Since 233 = 4 × 58 + 1, i^233 = i^1 = i.

Q: The product of a number and its multiplicative inverse is

A) 1 B) 0 C) -1 D) Infinity

Answer: A.

Explanation: The multiplicative inverse of a number is its reciprocal, and the product of a number and its reciprocal is always equal to 1. For example, the multiplicative inverse of 15 is 1/15, and 15 × 1/15 = 1.

Q: 25 divided by 7

A) 3.571 B) 35.71 C) 0.351 D) 0.0357

Answer: A.

Explanation: 7 does not divide 25 evenly, so the result is a decimal: 25/7 ≈ 3.571.

Q: Which of the following is not a prime number?

A) 5 B) 11 C) 21 D) 37

Answer: C.

Explanation: A prime number is a whole number greater than 1 whose only factors are 1 and itself. The factors of 5 are 1 and 5; of 11, 1 and 11; of 37, 1 and 37. The factors of 21 are 1, 3, 7 and 21, so 21 is not a prime number.
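The reciprocal comparison used in the fraction question can be checked directly. This is an illustrative sketch using Python's standard `fractions` module (the variable names are mine):

```python
from fractions import Fraction

fracs = [Fraction(12, 49), Fraction(7, 30), Fraction(13, 56), Fraction(11, 46)]

# Direct comparison: Fraction supports exact ordering, no decimals needed.
largest = max(fracs)

# The reciprocal trick from the explanation: the fraction whose reciprocal
# (denominator/numerator) is smallest is the largest fraction.
smallest_reciprocal = min(fracs, key=lambda f: f.denominator / f.numerator)

assert largest == smallest_reciprocal == Fraction(12, 49)
```

Both approaches agree that 12/49 is the largest of the four fractions.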
Q: My Chart is Cut Off When Drawn (Most of The Time) I'm filling an mschart with data dynamically, and most of the time the chart is cut off on the right. I've been fiddling with many of the properties on the x-axis and the chart area, but nothing seems to fix it. For these charts, when I stretch the form across both of my monitors, the chart will eventually fit. And yet it fits with no problem for data that's very similar. I'm not terribly well versed in mschart; is it a problem with the chart area or the x-axis? Thanks in advance. A: I made an incorrect assumption that the lack of the vertical line on the end meant that part of the chart was hidden. The fix was to specify the interval with which to label the axis so that a label is always drawn at the end.
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: Doubt about two different ways of taking the supremum of a sequence of functions As per my understanding: Functions: $\sup_{x \in I} f(x) = \sup \{y \in \mathbb{R}: y = f(x), x \in I\}.$ Example: $f(x) = x^2$ and $I = [0,3)$: $\sup_{x \in I} f(x) = 9$. Sequence of functions: $g(x)=\sup_{n \in \mathbb{N}} f_n(x) = \sup \{ y \in \mathbb{R}: y = f_{n}(x), n \in \mathbb{N}\} \tag{1},$ where $x$ is fixed. Example: $f_n(x) = x^n$ * *$\forall x > 1$: $g(x)=\sup\{x,x^2,x^3,x^4,x^5,\dots\}= +\infty$. *$0 < x < 1$: the sequence $x, x^2, x^3, \dots$ is decreasing, so $g(x)=x$. *$x=1$: $g(x)=\sup\{1,1,1,1,\dots\}=1$ *$x=0$: $g(x)=\sup\{0,0,0,0,\dots\}=0.$ *$x = -1$: $g(x)=\sup\{-1,1,-1,1,\dots\}=1$ *$-1 < x < 0$: the largest term is $x^2$, so $g(x)=x^2$. *$\forall x < -1$: $g(x)=\sup\{x,x^2,x^3,x^4,x^5,\dots\}=+\infty$. But there is also another way of taking the supremum of a sequence of functions: for each fixed $n$, $\sup_{x \in I} f_n(x) = \sup \{y \in \mathbb{R}: y = f_n(x) , x \in I\}=: g_n, \tag{2}$ which yields a sequence $(g_n)_{n\in \mathbb{N}}$. Example: $I =[0,3)$: $g_n =\sup_{x \in I=[0,3)}f_n(x)=3^n.$ My conclusion: $(1)$ and $(2)$ are two different ways of taking $\sup f_n(x)$ and are used according to the specific needs.
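The case analysis for $g(x)=\sup_n x^n$ can be sanity-checked numerically. This is only an illustrative sketch (the suprema are approximated by finite maxima over a truncated index set or a grid, and the function names are mine):

```python
def sup_over_n(x, N=60):
    # Approximate g(x) = sup_n x**n by a finite maximum over n = 1..N;
    # for |x| > 1 this grows without bound as N increases.
    return max(x ** n for n in range(1, N + 1))

def sup_over_x(n, steps=100_000):
    # Approximate g_n = sup over x in [0, 3) of x**n on a grid that excludes
    # the right endpoint; the supremum 3**n is approached but never attained.
    return max((3.0 * k / steps) ** n for k in range(steps))

# For 0 < x < 1 the sequence x, x^2, ... is decreasing, so the sup is x itself;
# at x = -1 the terms alternate between -1 and 1, so the sup is 1.
print(sup_over_n(0.5))   # 0.5
print(sup_over_n(-1.0))  # 1.0
print(sup_over_x(2))     # close to, but strictly below, 9
```

The numeric values match the corrected case analysis: for a fixed $x$ with $|x|\le 1$ the supremum is finite, while $\sup_{x\in[0,3)} x^n$ approaches $3^n$ without attaining it.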
{ "redpajama_set_name": "RedPajamaStackExchange" }
Charles-Bazile Mercier, known as Mercier-Vergerie, born in Les Sables-d'Olonne and died in Paris, was a French lawyer and politician. Biography Family Born into an old bourgeois family of Les Sables-d'Olonne, Charles-Bazile Mercier-Vergerie was the son of Gilles-Louis Mercier, sieur de Plantibaud, a lawyer at the parlement, counsellor to the king and procureur of the élection of Les Sables-d'Olonne, and of Marie-Anne Dupont. His brother was Gilles Mercier, sieur de la Colombière, better known during the Revolution under the name Mercier-Colombière, one of the two mounted chasseurs who contributed to the capture of Charette near La Chabotterie. Professional career Before 1789, Mercier-Vergerie was a lawyer in Les Sables-d'Olonne; during the Revolution, he was appointed défenseur officieux there. Having served as government commissioner at the criminal court during the Consulate, his office became that of imperial prosecutor general under the Empire. Political career He was elected by the Sénat conservateur as representative of the Vendée in the Corps législatif, an assembly in which he sat until his death. Distinction He was made a knight of the Légion d'honneur, cited as a captain of the Imperial Guard. Notes and references Appendices Bibliography Related article List of deputies of the Vendée External link Charles-Bazile Mercier-Vergerie's page on the website of the French National Assembly (www.assemblee-nationale.fr) Born in January 1762 Born in the province of Poitou Born in Les Sables-d'Olonne Died in March 1811 Died at age 49 Died in Paris 18th-century French lawyer Deputy of the Vendée Deputy to the Corps législatif
{ "redpajama_set_name": "RedPajamaWikipedia" }
Penglai Jinlin Import and Export Co. Ltd. is located in Penglai, Yantai city, Shandong Province. We supply quality diesel engines and generator sets. We sell screw oil presses and oil expellers, with capacities from 1 ton/24 hours for a single machine up to complete plants of 80 tons/24 hours. They can be used to extract vegetable oil from peanuts, rapeseed, sesame seeds, soybeans, cottonseeds, tea seeds, sunflower seeds, palm seeds, olives, coconuts and corn. We would like to cooperate with friends from all over the world. We offer different series of diesel engine sets, including the D226B, Ricardo, 495 and Steyr series, used in the power industry, agriculture, trucks and marine applications. We can also supply all kinds of quality gearboxes at reasonable prices, including for marine use. Our 9KS-series pellet machines can be used to produce feed pellets for animals such as fish, shrimp, chickens, rabbits, sheep, pigs and cattle, and can also produce straw pellets. Their remarkable advantages are as follows: they can be operated easily and stably, with low noise and good performance, and they can produce pellets of 2.2 mm, 3 mm, 4 mm, 6 mm, 7 mm, 8 mm, 10 mm and 12 mm for your options.
{ "redpajama_set_name": "RedPajamaC4" }
## Sussman, Myron M.

Author ID: sussman.myron-m
Published as: Sussman, Myron M.; Sussman, Myron; Sussman, M. M.
Documents Indexed: 15 Publications since 1975, including 4 Books
Reviewing Activity: 162 Reviews
Co-Authors: 6 Co-Authors with 7 Joint Publications; 184 Co-Co-Authors

### Co-Authors

6 single-authored; 4 Layton, William J.; 2 Grundland, Alfred Michel; 2 Rebholz, Leo G.; 1 Hastings, Stuart P.; 1 Manko, David J.; 1 Takhirov, Aziz; 1 Trenchea, Catalin

### Serials

AIAA Journal; IMA Journal of Applied Mathematics; Journal of Applied Mechanics; Journal of Mathematical Physics; Rocky Mountain Journal of Mathematics; Numerical Functional Analysis and Optimization; Tôhoku Mathematical Journal. Second Series; M3AS. Mathematical Models & Methods in Applied Sciences; SIAM Journal on Applied Dynamical Systems; Bulletin of the American Mathematical Society; International Journal of Numerical Analysis and Modeling. Series B

### Fields

6 Numerical analysis (65-XX); 4 Partial differential equations (35-XX); 4 Fluid mechanics (76-XX); 3 General and overarching topics; collections (00-XX); 2 Real functions (26-XX); 2 Global analysis, analysis on manifolds (58-XX); 1 Differential geometry (53-XX); 1 Mechanics of deformable solids (74-XX); 1 Geophysics (86-XX); 1 Biology and other natural sciences (92-XX)

### Citations contained in zbMATH Open

7 Publications have been cited 31 times in 28 Documents:

On the high accuracy NS-alpha-deconvolution turbulence model. Zbl 1187.76720. Rebholz, Leo G.; Sussman, Myron M. (2010)
Energy and helicity dissipation rates of the NS-alpha and NS-alpha-deconvolution models. Zbl 1423.76160. Layton, William; Rebholz, Leo; Sussman, Myron (2010)
Bounds on energy, magnetic helicity, and cross helicity dissipation rates of approximate deconvolution models of turbulence for MHD flows. Zbl 1425.76302. Layton, William; Sussman, Myron; Trenchea, Catalin (2010)
Instability of Crank-Nicolson leap-frog for nonautonomous systems. Zbl 1463.65187. Layton, William; Takhirov, Aziz; Sussman, Myron (2014)
Interaction of kink-type solutions of the harmonic map equations. Zbl 0827.35130. Grundland, A. M.; Kovalyov, M.; Sussman, M. (1994)
On uniqueness in Cauchy's problem for elliptic partial differential operators with characteristics of multiplicity greater than two. Zbl 0355.35028. Sussman, Myron M. (1977)
On uniqueness in Cauchy's problem for elliptic operators with characteristics of multiplicity greater than two. Zbl 0303.35032. Sussman, Myron M. (1975)

### Cited by 36 Authors

9 Rebholz, Leo G.; 5 Kim, Tae-Yeon; 5 Neda, Monika; 4 Fried, Eliot; 3 Dunca, Argus Adrian; 3 Layton, William J.; 3 Trenchea, Catalin; 2 Berselli, Luigi Carlo; 2 Bowers, Abigail L.; 2 Breckling, Sean; 2 Manica, Carolina Cardoso; 2 Schneier, Michael; 2 Watanabe, Kinji; 2 Wilson, Nicholas E.; 2 Zuily, Claude; 1 Burkardt, John V.; 1 Byon, Young-Ji; 1 Catania, Davide; 1 Cuff, Victoria M.; 1 Erkmen, Dilek; 1 Jiang, Nan; 1 Kean, Kiera; 1 Kohler, Kara E.; 1 Labovsky, Alexander E.; 1 Lewandowski, Roger; 1 Li, Yong; 1 Linke, Alexander; 1 Monteiro, Igor Olivia; 1 Pahlevani, Fran; 1 Pakzad, Ali; 1 Pei, Wenlong; 1 Saut, Jean-Claude; 1 Scheurer, Bruno; 1 Sussman, Myron M.; 1 Yang, Huanhuan; 1 Zeman, Marvin

### Cited in 21 Serials

3 Journal of Mathematical Analysis and Applications; 2 Mathematical Methods in the Applied Sciences; 2 Applied Mathematics and Computation; 2 Applied Mathematical Modelling; 2 Communications in Partial Differential Equations; 2 International Journal of Numerical Analysis and Modeling; 1 Computers & Mathematics with Applications; 1 Computer Methods in Applied Mechanics and Engineering; 1 Journal of Computational Physics; 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV; 1 Journal of Computational and Applied Mathematics; 1 Journal of Differential Equations; 1 Physica D; 1 Numerical Methods for Partial Differential Equations; 1 International Journal of Computer Mathematics; 1 Advances in Computational Mathematics; 1 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings; 1 Journal of Numerical Mathematics; 1 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis; 1 Bulletin of the American Mathematical Society; 1 Results in Applied Mathematics

### Cited in 5 Fields

21 Fluid mechanics (76-XX); 11 Partial differential equations (35-XX); 9 Numerical analysis (65-XX); 2 Ordinary differential equations (34-XX); 1 Real functions (26-XX)
Q: Fiware : wilma vs steelskin When should I use Wilma and when should I use Steelskin? Why is there no reference to Steelskin in the FIWARE Generic Enablers catalogue? Thanks in advance for your help! A: Wilma is an official FIWARE enabler. Steelskin is not part of FIWARE, but is developed by some people who are very involved in FIWARE (from Telefonica) and is mostly compatible. So if you want to be 100% FIWARE compliant, go with Wilma. If you don't care, then it may be worth researching Steelskin further.
{ "redpajama_set_name": "RedPajamaStackExchange" }
Martin Bernhardt, born in Potsdam and died in Berlin, was a German neurologist and neuropathologist. Biography In 1867, he received his doctorate in medicine from the University of Berlin, where he had been a student of Rudolf Virchow (1821-1902) and Ludwig Traube (1818-1878). He was then an assistant to Ernst Viktor von Leyden (1832-1910) at the university clinic of the University of Königsberg, and later at the Charité hospital in Berlin under Carl Westphal (1833-1890). After his military service and the Franco-Prussian War, he specialized in neuropathology in Berlin, where, in 1882, he obtained the title of professor extraordinarius. Bernhardt published several works on neurological diseases and electrotherapy. In 1885, he became editor-in-chief of the journal Centralblatt für die Medizinischen Wissenschaften. Eponymy: the syndrome he described with the Russian neuropathologist Vladimir Karlovich Roth (1848-1916) also bears his name, a common condition reflecting involvement of the lateral femoral cutaneous nerve. Vulpian-Bernhardt syndrome: a clinical form of amyotrophic lateral sclerosis preferentially affecting the shoulder girdle. Bernhardt formula: a formula used to calculate the ideal weight of an adult in kilograms by multiplying height in centimeters by chest circumference in centimeters and dividing the product by 240. Publications Die Sensibilitätsverhältnisse der Haut, 1873 Beiträge zur Symptomatologie und Diagnostik der Hirngeschwülste, 1881 Electricitätslehre für Mediziner und Elektrotherapie, 1884 (in collaboration with Isidor Rosenthal (1836-1915)) Erkrankungen der Peripherischen Nerven, 1895-1897. References Biographical entry in the Jewish Encyclopedia German neurologist Neuropathologist Born in April 1844 Born in Potsdam Born in the Province of Brandenburg Died in March 1915 Died in Berlin Died at age 70
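The Bernhardt formula is a simple arithmetic rule, and it can be sketched in one line of code (the function name is mine, chosen for illustration):

```python
def bernhardt_ideal_weight(height_cm, chest_cm):
    # Bernhardt's formula: ideal adult weight in kilograms is
    # height (cm) x chest circumference (cm) / 240.
    return height_cm * chest_cm / 240

# e.g. a 170 cm adult with a 95 cm chest circumference:
# 170 * 95 / 240 = 67.29... kg
```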
{ "redpajama_set_name": "RedPajamaWikipedia" }
Q: Delete email from database I've got to delete about 10 email addresses from a huge database, and wanted to find out the correct way to delete multiple emails from the db at once. I can delete them one by one; I just wondered whether separating them by commas was what I needed. I presumed this is the appropriate SQL statement: DELETE FROM `subscribers` WHERE email='example1@example.com,example2@example.com,example3@example.com' Is this the correct way to list multiple emails in order for them to be deleted from the database? A: Use IN() and put each email in its own quotes: DELETE FROM `subscribers` WHERE email IN('example1@example.com','example2@example.com','example3@example.com') A: Instead of =, use IN. E.g. DELETE FROM subscribers WHERE email IN ('example1@example.com','example2@example.com','example3@example.com')
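Both answers spell out the IN list as literals. In application code it is safer to build one placeholder per value and bind the emails as parameters, so they are never interpolated into the SQL string. A sketch using Python's sqlite3 (the table and data here are invented for illustration; MySQL connectors work the same way with %s placeholders):

```python
import sqlite3

# Hypothetical subscribers table mirroring the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (email TEXT)")
emails = ["example1@example.com", "example2@example.com",
          "example3@example.com", "keep@example.com"]
conn.executemany("INSERT INTO subscribers VALUES (?)", [(e,) for e in emails])

# Build one placeholder per email so the values are bound, not interpolated.
to_delete = emails[:3]
placeholders = ",".join("?" for _ in to_delete)      # "?,?,?"
conn.execute(f"DELETE FROM subscribers WHERE email IN ({placeholders})",
             to_delete)

remaining = [row[0] for row in conn.execute("SELECT email FROM subscribers")]
# remaining == ["keep@example.com"]
```

Only the placeholder string is formatted into the query; the email values travel separately as bound parameters.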
{ "redpajama_set_name": "RedPajamaStackExchange" }
package org.danekja.edu.pia.domain; import org.apache.commons.lang3.StringUtils; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.userdetails.UserDetails; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Table; import javax.persistence.Transient; import java.util.Collection; import java.util.Collections; /** * Entity representing application User. * * Date: 26.11.15 * * @author Jakub Danek */ @Entity @Table(name = User.TABLE_NAME) public class User extends BaseObject implements UserDetails { public static final String TABLE_NAME = "danekja_websec_user"; /** * Login, unique */ private String username; /** * Secret for signing-in */ private String password; public User() { } public User(String username, String password) { this.username = username; this.password = password; } /* ########### API ################## */ /** * Validates that user instance is currently in a valid state. * @throws UserValidationException in case the instance is not in valid state. */ public void validate() throws UserValidationException { if(StringUtils.isBlank(username)) throw new UserValidationException("Username is a required field"); if(StringUtils.isBlank(password)) throw new UserValidationException("Password is a required field"); } /* ########### MAPPINGS ##################### */ @Column(unique = true) public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } @Override @Transient public Collection<? 
extends GrantedAuthority> getAuthorities() { return Collections.singleton(new SimpleGrantedAuthority("ROLE_USER")); } @Override @Transient public boolean isAccountNonExpired() { return true; } @Override @Transient public boolean isAccountNonLocked() { return true; } @Override @Transient public boolean isCredentialsNonExpired() { return true; } @Override @Transient public boolean isEnabled() { return true; } @Override public boolean equals(Object o) { if (this == o) return true; if (!(o instanceof User)) return false; User user = (User) o; return !(username != null ? !username.equals(user.username) : user.username != null); } @Override public int hashCode() { return username != null ? username.hashCode() : 0; } @Override public String toString() { final StringBuilder sb = new StringBuilder("User{"); sb.append("username='").append(username).append('\''); sb.append('}'); return sb.toString(); } }
{ "redpajama_set_name": "RedPajamaGithub" }
@extends('layouts.app') @section('title', 'Produk Import') @section('content') <ol class="breadcrumb"> <li> <a href="{{ url('/') }}"><i class="fa fa-home"></i></a> </li> <li>Master</li> <li class="active">Produk</li> <li class="active">Import</li> </ol> <div class="row"> <div class="col-md-12"> <section class="panel no-b"> <div class="panel-body"> <form role="form" class="form-horizontal" method="post" enctype="multipart/form-data" action="{!! url('user/postImport') !!}" id="fimpbnk"> <input type="hidden" name="konf" id="konf" value=""> <div class="row"> <div class="col-md-8"> <div class="form-group"> <label for="import" class="col-sm-2 control-label">Excel File</label> <div class="col-sm-6"> <input name="import" type="file" id="import" placeholder="Import" required> </div> </div> <div class="form-group"> <label for="import" class="col-sm-2 control-label">Format</label> <div class="col-sm-10"> <textarea readonly class='form-control'>Sheet 1 : NAMA | HARGA_JUAL | HARGA_BELI | DISC | BARCODE | STATUS | EXPIRED | PRINT_LABEL | GANTI_HARGA | KATEGORI | KET | LOGO</textarea> </div> </div> <div class="form-group"> <label for="import" class="col-sm-2 control-label">Keterangan</label> <div class="col-sm-9"> <ul> <li>NAMA : Nama produk misalnya Oreo</li> <li>HARGA_JUAL : Harga jual produk misalnya 1500</li> <li>HARGA_BELI : Harga beli produk misalnya 1000</li> <li>DISC : Diskon produk misalnya 10</li> <li>BARCODE : Barcode produk misalnya 88006546</li> <li>STATUS : Status produk misalnya Aktif/Tidak Aktif</li> <li>EXPIRED : Tanggal kadaluarsa produk misalnya 2017-10-16</li> <li>PRINT_LABEL : Print label produk misalnya Oreo Vanilla 300gr</li> <li>GANTI_HARGA : Ganti Harga produk misalnya 2000</li> <li>KATEGORI : Kategori produm misalnya Makanan</li> <li>KET : Keterangan pada produk</li> <li>LOGO : Logo produk</li> </ul> </div> </div> <div class="form-group"> <input type="hidden" name="_token" value="{{csrf_token()}}"> <label for="tanggal_lahir" class="col-sm-2 
control-label"></label> <div class="col-sm-2"> <a href="{!! url('produk/import/sample') !!}" class="btn btn-warning btn-block"><i class="ti-download mr5"></i>Sample</a> </div> <div class="col-sm-2"> <button type="submit" class="btn btn-primary btn-block" name="upload" value="Upload"><i class="ti-upload mr5"></i>Upload</button> </div> </div> </div> </div> </form> </div> </section> </div> </div> @endsection
{ "redpajama_set_name": "RedPajamaGithub" }
The Muzaffarid dynasty, sometimes referred to as the Ahmedabad dynasty, were Sultans of Gujarat in western India from 1391 to 1583. The founder of the dynasty was Zafar Khan (later Muzaffar Shah I), who was governor of Gujarat under the suzerainty of the Tughlaq dynasty of the Delhi Sultanate. Zafar Khan's father, Sadharan, has been variously described as belonging to a Rajput sect of Tonk, Rajputana, as a Tank Rajput from Thanesar in modern-day Haryana, or as a Tānk Khatri from southern Punjab. Other historians such as Dr. V. K. Agnihotri and Saiyid Athar Abbas Rizvi have even written that his father, Sadhāran, was a Jat convert to Islam. He adopted the name Wajih-ul-Mulk. Wajih-ul-Mulk and his brother were influential Chaudharis who were agriculturists by profession but could also muster thousands of fighting men at their call. His Hindu forebears claimed descent from Rāmachandra, whom the Hindus worshipped as God. Such genealogies were fabricated to glorify royalty and were generally not accepted. When the Sultanate was weakened by the sacking of Delhi by Timur in 1398, Zafar Khan took the opportunity to establish himself as sultan of an independent Gujarat. His son, Ahmed Shah I, established the capital at Ahmedabad. The dynasty ruled for almost 200 years, until the conquest of Gujarat by the Mughal Empire in 1572. The sultanate reached its peak of expansion under Mahmud Begada, reaching east into Malwa and west to the Gulf of Kutch. Sultans of Gujarat Sultanate See also List of Sunni Muslim dynasties Notes Dynasties of India Gujarat Sultanate Indian former Hindus 1583 disestablishments States and territories established in 1391 Sunni dynasties
{ "redpajama_set_name": "RedPajamaWikipedia" }
HOUSING ESTATE DEVELOPMENT COMPLETES WITH BRIDGING LOAN 3rd January 2019 News complete with bridging loan, funding, housing development Colette Lowe Holme Finance Bridging Solutions (HFBS) ensured building could complete on a housing estate development site valued at £900,000 with a £50,000 bridging loan. With buyers lined up for the first two of a select development of six homes, foundations were rocked when the property developer's go-to funding partner, a crowdfunding company, found itself unable to advance the funds as quickly as required to enable the completion of the first two homes. With the option of waiting for funds, and the knock-on consequence of losing the significant sales of the first two properties, the developer had no choice but to seek alternative funding, and quickly. Dan Yendall-Collings, senior underwriter at HFBS, says: "Our unique ability to complete quickly comes into its own again. The developer couldn't wait for funds to complete the properties; it was an urgent need that could have had devastating consequences if not approved. "We took the enquiry on a Thursday, prepared the relevant documents the same day, visited the site on Friday and paid out the following Monday morning. With a three-day turnaround, and minimal site downtime, the progress of the development was saved." With an average completion taking less than seven days from enquiry to money in the bank, no solicitor involvement, no minimum valuation, and entirely private funding (no bank mandates, no fixed rules), HFBS really mean business. HFBS offer one of the LOWEST second-mortgage rates in the bridging finance market, starting at just 0.95% per month on advances from £5,000. HFBS Bridging Solutions have been advancing short-term funds, via a limited panel of intermediaries, for over 15 years with complete authority on their lending. Simpler, quicker, cheaper.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
\section{Introduction}\label{introduction} The power system is on the cusp of a revolution. The coming decade could witness increased renewable energy penetration, Electric Vehicle (EV) penetration, EV energy storage integration, demand response programs, etc. These changes have a profound impact on electricity market operations. New mechanisms must be devised to address a variety of important problems that are anticipated to arise in next-generation electricity markets. Most of the existing mechanism design settings are insufficient to model certain crucial features of these problems. To address this, we introduce the setting of Two-Stage Repeated Stochastic Games using which many problems that arise in the context of electricity markets can be readily modeled. The setting is an extension of the one-shot two-stage stochastic game introduced in \cite{Mukund2007} to repeated plays. At a high level, a two-stage stochastic game, as the name suggests, consists of two stages. In the first stage, the players do not know their valuation functions precisely, but rather only know the probability distribution of their valuation functions. It is only in the second stage of the game that the valuation functions realize. However, the social planner cannot wait until the second stage to decide on the outcome. It is constrained to make certain decisions in the first stage itself based on the probability distribution bids of the players' valuation functions. Once the valuation functions realize and are reported in the second stage, the social planner can make corrections to the first stage outcome by taking certain recourse actions but this comes at a cost \cite{Mukund2007}. 
It is the stochasticity of the players' valuation functions and the prospect for them to misreport both the probability distribution \emph{and} the realization of their valuation functions that preclude the use of classical mechanism design techniques to design efficient and incentive-compatible mechanisms for this setting. Motivated by applications to electricity markets which operate every day, we consider a setting wherein a two-stage stochastic game is played repeatedly. Repeated playing affords the players a large class of strategies that adapt a player's actions to all past observations and inferences obtained therefrom. In other settings such as iterative auctions or dynamic games where a large strategy space of this sort manifests, it typically has an important implication for mechanism design: It may be impossible to obtain truth-telling as a dominant strategy equilibrium \cite{bergemann2019dynamic}. Consequently, in such scenarios, it is common to settle for mechanisms that render truth-telling only a Nash equilibrium, or variants thereof, even though Nash equilibria are known to be poor models of real-world behavior. This is owing to each player having to make overly specific assumptions about the behaviors of the other players in order for them to employ their Nash equilibrium strategy, which they may not make. In general, the lesser the burden of speculation in an equilibrium, the more plausible it is that it models real-world behavior. Guided by the above maxim, we develop a new notion of equilibrium called the \emph{Dominant Strategy Non-Bankrupting Equilibrium (DNBE)} that requires players to make very little assumptions about the behaviors of the other players for them to employ their equilibrium strategy. Specifically, the only assumption that the players are required to make to play their DNBE strategy is that no player employs a strategy that leads to their own bankruptcy. We make this more precise in Section \ref{problemFormulation}. 
That the assumption is mild in that it is quite likely to hold in practice needs no belaboring. Consequently, a mechanism that implements a certain desired behavior as a DNBE as opposed to only a Nash equilibrium could be quite effective in molding real-world behavior along the desired lines. We then present a mechanism for two-stage repeated stochastic games that renders truth-telling a dominant strategy non-bankrupting equilibrium. The mechanism is individually rational in that every player is guaranteed to accrue a nonnegative utility by truth-telling regardless of what strategies the other players employ. Finally, if every player bids truthfully, then the outcome that the mechanism produces maximizes social welfare. The mechanism is a generalization of the mechanism that we have developed in \cite{satchidanandan2020efficient} for energy storage markets. Finally, we apply the mechanism to design an efficient and incentive-compatible demand response market. There are two main takeaways that we wish to highlight for designers of next-generation electricity markets. The first is that there is a need to redesign the ``bidding language" of the day-ahead market. In today's electricity markets, the generators and loads bid their supply and demand functions respectively. However, with the inclusion of demand response providers who may not know exactly in the day-ahead market their ability to reduce consumption the following day, the day-ahead market should allow for bids that are only \emph{probabilistic} in nature. It is only in real time, if and when called upon for demand response, that the demand response providers should be required to disclose their actual costs for curtailing consumption. The theory developed in the paper allows for such probabilistic bids to be submitted to the system operator. 
Secondly, our results show that ``simple" mechanisms like making payments proportional to the power curtailed by demand response providers, which have been employed in previous demand response trials, are incapable of attaining the optimal social welfare. Significant welfare gains can be obtained by employing carefully-designed mechanisms that take into account the uncertainties of the market participants. The rest of the paper is organized as follows. Section \ref{problemFormulation} begins with a precise description of a two-stage repeated stochastic game, defines the notion of dominant strategy non-bankrupting equilibrium, and formulates the mechanism design problem. Section \ref{mechanismDesign} develops a mechanism for two-stage repeated stochastic games that guarantees truth-telling to be a dominant strategy non-bankrupting equilibrium. Section \ref{applications} describes the application of the results to the design of demand response markets. Section \ref{relatedWork} provides an account of related work. Section \ref{conclusion} concludes the paper. \noindent\textbf{Notation:} Vectors and sequences are denoted using boldface letters. 
Given a sequence $\mathbf{x}=\{x(1),x(2),\hdots\},$ we denote by $\mathbf{x}^l$ the segment $\{x(1),\hdots,x(l)\}.$ The hat notation is used to denote bids: Given a variable $x$ that is private to a player, we denote by $\widehat{x}$ the bid that the player submits for $x.$ \section{Problem Formulation}\label{problemFormulation} A two-stage stochastic game played by $n$ players and consisting of a social planner is described by \begin{enumerate} \item a publicly-known set $\Delta$ known as the type space of the players, \item a publicly-known set $\Theta$ of probability mass functions over $\Delta$, known as the supertype space of the players, \item for each $i\in\{1,\hdots,n\},$ a probability distribution $\theta_i\in\Theta$, known as player $i$'s supertype, that is privately known to player $i$ in the first stage of the game, and which it is supposed to report to the social planner in the first stage, \item a set $\mathcal{O}_1$ of first-stage outcomes \item a first-stage decision rule $g^*_1:\Theta^n\to\mathcal{O}_1$ according to which the social planner chooses the first-stage outcome as a function of the players' supertype bids, \item for each $i\in\{1,\hdots,n\},$ player $i$'s type $\delta_i\in\Delta$ that is ``drawn by nature" at random according to $\theta_i$, whose realization is privately observed by player $i$ in the second stage of the game, and which it is supposed to report to the social planner in the second stage, \item a set $\mathcal{O}_2$ of second-stage outcomes or ``recourse actions" that the social planner can choose \item a second-stage decision rule $g^*_2:\Theta^n\times\Delta^n\to\mathcal{O}_2$ according to which the social planner chooses the second-stage outcome as a function of the players' type and supertype bids, \item a cost function $c:\mathcal{O}_1\times\mathcal{O}_2\to\mathbb{R}$ that specifies for every $(o_1,o_2)\in\mathcal{O}_1\times\mathcal{O}_2,$ the cost incurred by the social planner for choosing the outcome $o_1$ in the 
first stage and taking the recourse action $o_2$ in the second stage, \item for each $i\in\{1,\hdots,n\},$ a valuation function $v_i:\Delta\times\mathcal{O}_1\times\mathcal{O}_2\to\mathbb{R}$ of player $i$ that specifies for every $\delta_i\in\Delta$ and every $(o_1,o_2)\in\mathcal{O}_1\times\mathcal{O}_2,$ the valuation of player $i$ if its type is $\delta_i$ and the social planner chooses the outcomes $o_1$ and $o_2$ in the first and the second stage of the game respectively. \end{enumerate} The first- and second-stage decision rules $(g_1^*,g_2^*)$ that we consider are those that maximize the expected social welfare. To elaborate, let $g_1:\Theta^n\to\mathcal{O}_1$ be any first-stage decision rule and $g_2:\Theta^n\times\Delta^n\to\mathcal{O}_2$ be any second-stage decision rule. If the players bid their types and supertypes truthfully, then the expected social welfare that results as a consequence of using the decision rule $(g_1,g_2)$ is $$\mathbb{E}_{\boldsymbol{\delta}\sim\boldsymbol{\theta}}\big[\sum_{i=1}^nv_i(\delta_i,g_1(\boldsymbol{\theta}),g_2(\boldsymbol{\theta},\boldsymbol{\delta}))-c(g_1(\boldsymbol{\theta}),g_2(\boldsymbol{\theta},\boldsymbol{\delta}))\big]=:{W}(\boldsymbol{\theta},g_1,g_2).$$ The goal of the social planner is to maximize the expected social welfare, and so the decision rule $(g_1^*,g_2^*)$ that it employs is \begin{align} (g_1^*,g_2^*)=\argmax_{g_1,g_2}\;{W}(\cdot,g_1,g_2),\label{gStarDefn} \end{align} where the maximization is defined in the pointwise sense. The social planner computes $g_1^*$ and $g_2^*$ and announces it to the players before the game commences. The problem that we study is one where a two-stage stochastic game of the above form is played repeatedly on each day $l,$ $l\in\mathbb{Z}_+.$ For ease of exposition, we assume that the supertypes of the players remain the same on all days and it is only their types that differ across days, though this assumption can be relaxed in a straightforward manner. 
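Since the decision rule $(g_1^*,g_2^*)$ in (\ref{gStarDefn}) maximizes expected welfare and $g_2^*$ may depend on the realized type profile, for finite type and outcome spaces it can be computed by brute force: for each first-stage outcome, take the expectation of the pointwise-optimal recourse. Below is a minimal Python sketch for a hypothetical toy instance; the sets `DELTA`, `O1`, `O2` and the functions `v`, `c` are invented solely for illustration.

```python
import itertools

# Hypothetical toy instance, chosen only to make (gStarDefn) concrete.
DELTA = [0, 1]                 # type space
O1 = ["low", "high"]           # first-stage outcomes
O2 = [0, 1, 2]                 # second-stage recourse actions

def v(i, delta_i, o1, o2):
    """Hypothetical valuation of player i."""
    scale = 2.0 if o1 == "high" else 1.0
    return scale * (1.0 - abs(delta_i - o2 / 2.0))

def c(o1, o2):
    """Hypothetical cost incurred by the social planner."""
    return (1.5 if o1 == "high" else 0.5) + 0.3 * o2

def optimal_rules(theta):
    """Return (W*, o1*) where W* = max_{o1} E_delta[ max_{o2} sum_i v_i - c ].

    Because g2* may depend on the realized type profile, the inner
    maximization over o2 is done pointwise for each realization.
    """
    n = len(theta)
    best_w, best_o1 = -float("inf"), None
    for o1 in O1:
        w = 0.0
        for d in itertools.product(DELTA, repeat=n):
            prob = 1.0
            for i in range(n):
                prob *= theta[i][d[i]]
            w += prob * max(
                sum(v(i, d[i], o1, o2) for i in range(n)) - c(o1, o2)
                for o2 in O2
            )
        if w > best_w:
            best_w, best_o1 = w, o1
    return best_w, best_o1
```

The sketch also makes visible why the recourse stage helps: fixing a single recourse action for all realizations can only lower the expected welfare.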
Consequently, for each player $i$, $i\in\{1,\hdots,n\},$ we denote by $\theta_i$ its privately known supertype which remains the same on all days and by $\delta_i(l)$ its privately known type on day $l.$ The sequence $\{\boldsymbol{\delta}(1),\boldsymbol{\delta}(2),\hdots\}$ is assumed to be Independent and Identically Distributed (IID) with $\boldsymbol{\delta}(1)\sim\theta_1\times\hdots\times\theta_n.$ \subsection{First-stage strategy} On each day $l,$ each player $i$ is required to report its supertype to the social planner in the first stage so that the latter can compute the optimal first-stage outcome. Since the players' supertypes are assumed to remain the same on all days, it suffices for the players to bid their supertypes just once, namely, in the first stage of the game on day $1.$ Owing to strategic reasons that will be clear shortly, the players may not bid their supertypes truthfully, and so we denote by ${\widehat{\theta}_i}$ the supertype bid of player $i$ and by $\sigma_i:\Theta\to\Theta$ the first-stage strategy according to which player $i$ constructs its supertype bid. Therefore, $\widehat{\theta}_i=\sigma_i(\theta_i).$ Once all players submit their supertype bids, the social planner computes the first-stage outcome as $g_1^*(\boldsymbol{\sigma}(\boldsymbol{\theta})),$ where $\boldsymbol{\sigma}(\boldsymbol{\theta})\coloneqq[\sigma_1(\theta_1),\hdots,\sigma_n(\theta_n)].$ The game then proceeds to the second stage. \subsection{Second-stage bidding policy} In the second stage on each day $l$, each player $i$ observes the realization of $\delta_i(l)$ which it is supposed to report to the social planner. 
However, owing to strategic reasons that will become clear shortly, the players may not bid their type realizations truthfully, and so we denote by $\widehat{\delta}_i(l)$ player $i$'s type bid on day $l.$ We allow each player to construct its type bid on any day $l$ using all information available to it until day $l$, and in accordance with any randomized, history-dependent policy of its choosing. Specifically, a second-stage bidding policy $\mu$ of player $i$ is a rule which specifies for each $o_1\in\mathcal{O}_1$ and each $l\in\mathbb{Z}_+,$ a probability transition kernel $\mathbb{P}_{\mu}(\widehat{\delta}_i(l)\big\vert\delta_i^l,\widehat{\delta}_i^{l-1},o_2^{l-1};o_1)$ according to which player $i$ constructs its second-stage bid $\widehat{\delta}_i(l)$ on day $l$ if the first-stage outcome is $o_1$. We denote by $\Pi_i$ the set of all second-stage bidding policies available to player $i.$ Note that the second-stage bidding policy is a \emph{rule} that maps the history of observations available to a player to its second-stage bid. While the outcome of the rule is random owing to the types and second-stage outcomes being random, there is nothing random about the rule itself. Consequently, a player can, without any loss of generality, choose its second-stage bidding policy right on day $1$ as a function of its supertype. This leads to the notion of the second-stage strategy, which is described next. \subsection{Second-stage strategy} A \emph{second-stage strategy} of player $i$ is a function $\pi_i:\Theta\to\Pi_i$ which specifies the second-stage bidding policy that it employs as a function of its private supertype $\theta_i.$ Therefore, $\pi_i(\theta_i)$ is the second-stage bidding policy employed by player $i.$ Once all players submit their type bids, the social planner computes the second-stage outcome for day $l$ as $o_2(l)=g_2^*(\boldsymbol{\sigma}(\boldsymbol{\theta}),\widehat{\boldsymbol{\delta}}(l))$.
Note that once the players' first-stage and second-stage bidding policies are fixed, a functional relationship is established between the types and the type bids, and all random variables become well-defined. \subsection{Strategies and Strategy profiles} We refer to the composition of the first- and second-stage strategies simply as a \emph{strategy}. I.e., $S_i\coloneqq(\sigma_i,{\pi}_i)$ is referred to as the {strategy} of player $i.$ We denote by $\Lambda_i$ the set of strategies available to player $i.$ Finally, we refer to $\boldsymbol{S}\coloneqq(S_1,\hdots,S_n)$ as the \emph{strategy profile} of the players and denote by $\Lambda$ the set of strategy profiles $\Lambda_1\times\hdots\times\Lambda_n$. \subsection{Truthful strategies} The stochasticity of the player types necessitates a definition of truthful strategy that is weaker than requiring a player to bid its type truthfully on all days. \begin{definition} A strategy $S_i=(\sigma_i,{\pi}_i)$ of player $i$, $i\in\{1,\hdots,n\},$ is \emph{truthful} if \begin{enumerate} \item[(i)] $\sigma_i(\theta)=\theta$ for every $\theta\in\Theta,$ and \item[(ii)] for every $\theta\in\Theta$ and every $o_1\in\mathcal{O}_1,$ there exists $\mathcal{L}\subseteq\mathbb{Z}_+$ with $\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{l\in\mathcal{L}\}}=0$ such that for all $l\notin\mathcal{L},$ $$\mathbb{P}_{\pi_i(\theta)}(\widehat{\delta}_i(l)\big\vert\delta_i^l,\widehat{\delta}_i^{l-1},o_2^{l-1};o_1)=\mathds{1}_{\{\widehat{\delta}_i(l)=\delta_i(l)\}}.$$ \end{enumerate} A strategy profile $(S_1,\hdots,S_n)$ is a \emph{truthful strategy profile} if $S_i$ is truthful for every $i\in\{1,\hdots,n\}.$ \end{definition} \noindent In other words, a strategy $S_i$ is truthful if the supertype bid is truthful and the type bid is truthful on ``almost all days."
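Clause (ii) of the definition can be illustrated with a concrete exception set: a policy that misreports only on a zero-density set of days, say the perfect squares, still qualifies as truthful. A small Python sketch (the choice of perfect squares is hypothetical, used only to exhibit a density-zero set $\mathcal{L}$):

```python
import math

def lying_days(L):
    """Days among 1..L on which a hypothetical policy misreports:
    the perfect squares, a set of zero asymptotic density."""
    return [l for l in range(1, L + 1) if math.isqrt(l) ** 2 == l]

def lying_fraction(L):
    """The fraction (1/L) * sum_{l<=L} 1{l in calL} of misreporting days."""
    return len(lying_days(L)) / L
```

As $L$ grows, `lying_fraction(L)` tends to zero, which is exactly the limit condition imposed on $\mathcal{L}$.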
We denote by $\mathcal{T}_i\subset\Lambda_i$ the set of all truthful strategies available to player $i.$ \subsection{Payments and utilities} At the end of each day, the social planner collects from each player a payment that is determined as a function of the bids submitted up to that day. We denote by $p_{i,l}:\Theta^n\times(\Delta^l)^n\to\mathbb{R}$ the payment rule so that $p_{i,l}(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}^l)$ specifies the amount that player $i$ should pay on day $l$. The utility accrued by player $i$ is defined as \begin{align} u_{i}(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)\coloneqq\bigg[\liminf_{L\to\infty}\frac{1}{L}\sum_{l=1}^Lv_i(\delta_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}(l)))-p_{i,l}(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l)\bigg].\label{uiAsymptotic} \end{align} Note that a player's utility is a random variable that depends on the realization of the type sequence $\boldsymbol{\delta}^\infty.$ \subsection{Non-Bankrupting strategies} As mentioned in Section \ref{introduction}, a ``mild" behavioral assumption, one that is quite likely to hold in practice, is that no player behaves in a manner that might result in its own bankruptcy. This is captured by the notion of a non-bankrupting strategy. \begin{definition} A strategy $S_i$ of player $i,$ $i\in\{1,\hdots,n\},$ is \emph{non-bankrupting} if for all $(\boldsymbol{S}_{-i},\boldsymbol{\theta}),$ $$u_i(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)>-\infty$$ for all $\boldsymbol{\delta}^\infty,$ except perhaps on a set of probability zero.
A strategy profile $\boldsymbol{S}=(S_1,\hdots,S_n)$ is \emph{non-bankrupting} if $S_i$ is non-bankrupting for every $i\in\{1,\hdots,n\}.$ \end{definition} \noindent We denote by $\mathcal{NB}_i$ the set of non-bankrupting strategies of player $i,$ by $\mathcal{NB}_{-i}$ the set of non-bankrupting strategy profiles of all players except player $i,$ and by $\mathcal{NB}$ the set of non-bankrupting strategy profiles of all players. \subsection{Dominant Strategy Non-Bankrupting Equilibrium} We are now ready to introduce a notion of equilibrium that is ``slightly" weaker than dominant strategy equilibrium. \begin{definition} A strategy profile $\boldsymbol{S}=(S_1,\hdots,S_n)\in\mathcal{NB}$ is a \emph{Dominant Strategy Non-Bankrupting Equilibrium (DNBE)} if for all $i\in\{1,\hdots,n\},$ all $S'_{-i}\in\mathcal{NB}_{-i},$ all $S_i'\in\Lambda_i,$ and all $\boldsymbol{\theta},$ \begin{align} u_i(S_i,S'_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)\geq u_i(S'_i,S'_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty) \end{align} for all $\boldsymbol{\delta}^\infty,$ except perhaps on a set of probability zero. \end{definition} It is perhaps instructive to contrast DNBE with Dominant Strategy Equilibrium (DSE) and Nash Equilibrium (NE) to gain a better appreciation of the notion. Note that for a strategy profile $\boldsymbol{S}$ to form a Nash equilibrium, it must hold for every $i\in\{1,\hdots,n\}$ that $S_i$ is a best response to $\boldsymbol{S}_{-i}.$ On the other hand, for the strategy profile $\boldsymbol{S}$ to form a DNBE, we must have for all $i\in\{1,\hdots,n\}$ that $S_i$ is a best response not only to $\boldsymbol{S}_{-i}$, but also to all $\boldsymbol{S}'_{-i}\in\mathcal{NB}_{-i}.$ It follows that any dominant strategy non-bankrupting equilibrium is also a Nash equilibrium but not vice-versa. 
The stronger notion of dominant strategy equilibrium requires for all $i\in\{1,\hdots,n\}$ that $S_i$ is a best response to every $\boldsymbol{S}'_{-i}\in\Lambda_{-i},$ and not just to those in $\mathcal{NB}_{-i}$ as required by DNBE. Hence, any dominant strategy equilibrium is also a dominant strategy non-bankrupting equilibrium. Fig. \ref{figHierarchy} illustrates the hierarchy formed by these equilibrium notions. \begin{figure} \centering \begin{tikzpicture} \draw[black, thick] (0,0) rectangle (5,5); \draw[black, thick] (2.5,2.5) ellipse (2.25cm and 1.6cm); \draw[black, thick] (2.5,2.5) ellipse (1.7cm and 1cm); \draw[black, thick] (2.5,2.5) ellipse (1cm and 0.4cm); \node[align=center] at (2.5,2.5) {\text{DSE}}; \node[align=center] at (2.5,3.15) {\text{DNBE}}; \node[align=center] at (2.5,3.75) {\text{NE}}; \node[align=center] at (2.5,4.5) {\text{Set of Strategy Profiles}}; \end{tikzpicture} \caption{Hierarchy of equilibrium notions. Any dominant strategy equilibrium is also a dominant strategy non-bankrupting equilibrium, and any dominant strategy non-bankrupting equilibrium is also a Nash equilibrium. }\label{figHierarchy} \end{figure} \subsection{Mechanism Design Problem} Arbitrarily fix the strategy profile $\boldsymbol{S}$ of the players. 
The long-term average social welfare that results from the game is \begin{align} q(\boldsymbol{S},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)\coloneqq\liminf_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\bigg[\sum_{i=1}^nv_i(\delta_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}(l)))-c(g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}(l)))\bigg].\label{SWasymptotic} \end{align} The objective of the social planner is to ensure that the average social welfare $q(\boldsymbol{S},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)$ equals the optimal value $W^*(\boldsymbol{\theta})$ that would result almost surely if all players employ a truthful strategy. However, the objective of each player $i$ is to maximize its own utility given by (\ref{uiAsymptotic}), and so it may deviate from a truthful strategy if there is a possibility for it to accrue a higher utility by doing so. This brings us to the mechanism design problem. We wish to design a payment rule $\{p_{i,l}: (i,l)\in\{1,\hdots,n\}\times\mathbb{Z}_+\}$ such that the strategy profile in which every player employs a truthful strategy is a Dominant Strategy Non-Bankrupting Equilibrium. The next section develops the mechanism and establishes the incentive and efficiency properties guaranteed by it. \section{An Efficient and Incentive-Compatible Mechanism for Two-Stage Repeated Stochastic Games}\label{mechanismDesign} For each $i\in\{1,\hdots,n\},$ the payment of player $i$ on any day $l$ consists of two components $p_i^{F}$ and $p_i^{S}$ that can be computed by the social planner at the end of the first and the second stages of the game respectively on day $l.$ These payment functions are defined next. \subsection{First-stage payment} The first-stage payment $p_i^F$ is a function of only the first-stage bids of the players. Since these quantities remain the same on all days, so do the first-stage payments.
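The first-stage payment defined next in (\ref{piF}) is the familiar VCG/Clarke pivot: player $i$ pays the optimal welfare the others could achieve without it, minus the expected welfare the others actually receive under the full-profile optimal rules. A brute-force sketch for a hypothetical finite toy instance (the sets and the functions `v`, `c` below are invented for illustration):

```python
import itertools

DELTA = [0, 1]; O1 = ["a", "b"]; O2 = [0, 1]   # hypothetical toy sets

def v(j, d, o1, o2):
    """Hypothetical nonnegative valuations."""
    return [1.0, 2.0][j] * (1.0 if d == o2 else 0.0) + (0.5 if o1 == "b" else 0.0)

def c(o1, o2):
    """Hypothetical planner cost."""
    return 0.4 * o2 + (0.6 if o1 == "b" else 0.0)

def w_star(theta, players):
    """Optimal expected welfare when only the indices in `players` participate."""
    best = -float("inf")
    for o1 in O1:
        w = 0.0
        for d in itertools.product(DELTA, repeat=len(players)):
            p = 1.0
            for k, j in enumerate(players):
                p *= theta[j][d[k]]
            w += p * max(sum(v(j, d[k], o1, o2) for k, j in enumerate(players)) - c(o1, o2)
                         for o2 in O2)
        best = max(best, w)
    return best

def p_F(i, theta_hat):
    """First-stage VCG payment: W*(theta_hat_{-i}) minus the expected welfare
    of everyone but i under the full-profile optimal rules."""
    n = len(theta_hat)
    others = [j for j in range(n) if j != i]
    best_total, others_at_opt = -float("inf"), None
    for o1 in O1:
        total = to_others = 0.0
        for d in itertools.product(DELTA, repeat=n):
            p = 1.0
            for j in range(n):
                p *= theta_hat[j][d[j]]
            o2 = max(O2, key=lambda o: sum(v(j, d[j], o1, o) for j in range(n)) - c(o1, o))
            total += p * (sum(v(j, d[j], o1, o2) for j in range(n)) - c(o1, o2))
            to_others += p * (sum(v(j, d[j], o1, o2) for j in others) - c(o1, o2))
        if total > best_total:
            best_total, others_at_opt = total, to_others
    return w_star(theta_hat, others) - others_at_opt
```

In this toy instance the valuations are nonnegative, so adding a player cannot decrease the optimal welfare, i.e., condition (\ref{noPessimist}) of the theorem below holds.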
The first-stage payment is simply the VCG payment and is defined as \begin{align} p_i^F(\boldsymbol{\widehat{\theta}})\coloneqq W^*(\boldsymbol{\widehat{\theta}}_{-i})-\mathbb{E}_{\boldsymbol{\delta}\sim\mathbb{P}_{\boldsymbol{\widehat{\theta}}}}\bigg[\sum_{j\neq i}v_j(\delta_j,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\delta}))-c(g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\delta}))\bigg],\label{piF} \end{align} where $\widehat{\boldsymbol{\theta}}_{-i}$ denotes the supertype bids of all players other than player $i.$ \subsection{Second-stage payment} At a high level, the first functionality of the second-stage payment is to bind the first-stage and the second-stage strategies of the players. To achieve this, the second-stage payment rule compares the empirical frequencies of the players' type bids with their supertype bids and penalizes discrepancies between them. To elaborate, denote by $\widehat{\theta}_i(t)$ the probability that a random variable distributed according to $\widehat{\theta}_i$ takes the value $t,$ $t\in\Delta.$ On each day $l$ and for each player $i,$ $(l,i)\in\mathbb{Z}_+\times\{1,\hdots,n\},$ the second-stage payment rule computes the discrepancy \begin{align} \widehat{f}_{i,t}(l)\coloneqq\bigg[\frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\widehat{\delta}_i(l')=t\}}\bigg]-\widehat{\theta}_i(t)\label{fhatDefn} \end{align} for every $t\in\Delta$, and imposes a penalty of $J_p(l)$ on player $i$ if $\widehat{f}_{i,t}(l)$ falls outside a window of size $r(l)$ for some $t\in\Delta,$ i.e., if \begin{align} \vert\widehat{f}_{i,t}(l)\vert\geq r(l)\label{constraint1} \end{align} for some $t\in\Delta.$ In a setting of repeated play, the sequence of second-stage outcomes serves as a source of common randomness which the players can potentially use to correlate their second-stage bids if there is a possibility for them to accrue a higher utility by doing so than by fabricating
their bids independently of the other players' bids. The second functionality of the second-stage payment is to disincentivize such strategies. Towards this end, on each day $l$ and for each player $i$, $(l,i)\in\mathbb{Z}_+\times\{1,\hdots,n\},$ the second-stage payment rule computes \begin{align} \widehat{h}_{i,\mathbf{d}}(l)\coloneqq\bigg[\frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\widehat{\delta}_i(l')=d_i,\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\bigg]-\bigg[\widehat{\theta}_i(d_i)\bigg]\bigg[\frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\bigg]\label{hhatDefn} \end{align} for every $\mathbf{d}\in\Delta^{n},$ and imposes a penalty of $J_p(l)$ on player $i$ if $\widehat{h}_{i,\mathbf{d}}(l)$ falls outside a window of size $r(l)$ for some $\mathbf{d}\in\Delta^n,$ i.e., if \begin{align} \vert\widehat{h}_{i,\mathbf{d}}(l)\vert\geq r(l)\label{constraint2} \end{align} for some $\mathbf{d}\in\Delta^n.$ How should the window size sequence $\{r\}$ be chosen? On the one hand, the window size $r(l)$ must tend to zero as $l$ tends to infinity, for otherwise the set of sequences $\{\boldsymbol{\widehat{\delta}}\}$ that satisfy (\ref{constraint1}) and (\ref{constraint2}) would be ``large," thereby violating incentive compatibility. On the other hand, if $\{r\}$ decays too quickly, then even truthful type bids would violate (\ref{constraint1}) and (\ref{constraint2}) infinitely often, thereby incurring a large penalty and violating individual rationality. Hence, the sequence $\{r\}$ should be chosen in a manner that balances the two objectives.
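The discrepancy statistics (\ref{fhatDefn}) and (\ref{hhatDefn}) and the checks (\ref{constraint1})--(\ref{constraint2}) can be computed directly from the bid history. A Python sketch, with a hypothetical data layout in which `bids` is a list of daily bid tuples (one coordinate per player):

```python
import itertools

def f_hat(bids_i, theta_hat_i, t):
    """Discrepancy (fhatDefn): empirical frequency of symbol t minus theta_hat_i(t)."""
    l = len(bids_i)
    return sum(1 for b in bids_i if b == t) / l - theta_hat_i[t]

def h_hat(bids, theta_hat_i, i, d):
    """Discrepancy (hhatDefn) for the joint symbol d (a tuple in Delta^n)."""
    l = len(bids)
    d_others = d[:i] + d[i + 1:]
    joint = sum(1 for b in bids if b == d) / l
    others = sum(1 for b in bids if b[:i] + b[i + 1:] == d_others) / l
    return joint - theta_hat_i[d[i]] * others

def violates(bids, theta_hats, i, r_l, Delta):
    """True iff (constraint1) or (constraint2) holds for player i on day l = len(bids)."""
    n = len(theta_hats)
    bids_i = [b[i] for b in bids]
    if any(abs(f_hat(bids_i, theta_hats[i], t)) >= r_l for t in Delta):
        return True
    return any(abs(h_hat(bids, theta_hats[i], i, d)) >= r_l
               for d in itertools.product(Delta, repeat=n))
```

A perfectly correlated bid sequence can match the reported marginals exactly (so $\widehat{f}$ vanishes) while still tripping the $\widehat{h}$ check, which is precisely the second functionality described above.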
This is achieved by choosing $\{r\}$ such that \begin{align} \lim_{l\to\infty}r(l)=0,\label{rGoesToZero} \end{align} and for some $\gamma>0,$ \begin{align} r(l)\geq\sqrt{\frac{\ln\big(2l^{1+\gamma}\big)}{2l}}\label{rlgreater} \end{align} for all $l\in\mathbb{Z}_+.$\footnote{It suffices that (\ref{rlgreater}) holds not for all $l$ but only for all sufficiently large $l$.} To obtain an intuition for condition (\ref{rlgreater}), note that the empirical frequency $\frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\delta_i(l')=t\}}$ resulting from the true type sequence of player $i$ is a random variable with mean $\theta_i(t)$ and standard deviation that scales as ${{1}/\sqrt{l}}.$ Therefore, if the window size decays at the same rate, then the probability of the empirical frequency falling outside the window would remain at a constant value. This suggests that the window size must decay more slowly than ${{1}/\sqrt{l}}.$ By letting the window size decay only slightly more slowly than ${{1}/\sqrt{l}}$, namely, at the rate specified by condition (\ref{rlgreater}), truthful bids are guaranteed to almost surely satisfy (\ref{constraint1}) and (\ref{constraint2}) for all but finitely many values of $l$. This is established in Lemma \ref{LemmaHonesty}. How should the penalty sequence $\{J_p\}$ be chosen? As shown in Lemma \ref{LemmaHonesty}, truthful players incur a penalty only finitely often almost surely, and so the long-term average penalty that they incur is almost surely zero regardless of how the sequence $\{J_p\}$ is chosen. Therefore, the only objective in the design of $\{J_p\}$ is for {every} non-truthful strategy to incur a sufficiently high penalty. This is accomplished by choosing $\{J_p\}$ to be any nonnegative sequence such that \begin{align} \lim_{l\to\infty} \frac{J_p(l)}{l}=\infty.\label{JpIsOmegaL} \end{align} We now have the necessary quantities to define the second-stage payment function.
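For concreteness, one admissible choice (among many) is to take $r(l)$ equal to the lower bound in (\ref{rlgreater}), which also satisfies (\ref{rGoesToZero}), and $J_p(l)=l^2$, which satisfies (\ref{JpIsOmegaL}). A quick numerical check of these properties:

```python
import math

def r(l, gamma=0.1):
    """Hypothetical window size: the smallest value allowed by (rlgreater),
    which also tends to zero as required by (rGoesToZero)."""
    return math.sqrt(math.log(2.0 * l ** (1.0 + gamma)) / (2.0 * l))

def J_p(l):
    """Hypothetical penalty: J_p(l)/l = l -> infinity, as required by (JpIsOmegaL)."""
    return float(l ** 2)
```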
Define the event \begin{align} {E}_{i,\boldsymbol{S}}(l)\coloneqq\{\max_{t\in\Delta}\;\vert\widehat{f}_{i,t}(l)\vert\geq r(l)\}\;\cup\; \{\max_{\mathbf{d}\in\Delta^n}\vert\widehat{h}_{i,\mathbf{d}}(l)\vert\geq r(l)\}\label{Edefn} \end{align} which denotes the occurrence of at least one of (\ref{constraint1}) and (\ref{constraint2}). The second-stage payment of player $i$ on day $l$ is defined as \begin{align} p_{i,l}^S(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l)\coloneqq \Bigg[v_i(\widehat{\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))-\mathbb{E}_{\boldsymbol{\delta}\sim\mathbb{P}_{\boldsymbol{\widehat{\theta}}}}\big[v_i(\delta_i,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\delta}))\big]\Bigg]+J_p(l)\mathds{1}_{\{E_{i,\boldsymbol{S}}(l)\}}.\label{piS} \end{align} A negative value of the above quantity implies a transfer from the social planner to player $i$ on day $l.$ Note that if all players employ a truthful strategy, then the long-term average second-stage payment almost surely equals zero for every player. The total payment $p_{i,l}(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l)$ that player $i$ transfers to the social planner on day $l$ is \begin{align} p_{i,l}(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l)\coloneqq p_i^F(\boldsymbol{\widehat{\theta}})+p_{i,l}^S(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l).\label{Payment} \end{align} The following theorem establishes the incentive and optimality guarantees of the mechanism. \begin{theorem} Consider the two-stage repeated stochastic game induced by the payment rule (\ref{Payment}). \begin{enumerate} \item A truthful strategy profile is a dominant strategy non-bankrupting equilibrium.
\item If for every $i\in\{1,\hdots,n\}$ and every $\boldsymbol{\theta},$ \begin{align} W^*(\boldsymbol{\theta})-W^*(\boldsymbol{\theta}_{-i})\geq 0,\label{noPessimist} \end{align} then every player obtains a nonnegative utility by employing a truthful strategy regardless of the strategies that the other players employ. \item If every player employs a truthful strategy, then the long-term average social welfare (\ref{SWasymptotic}) that results is almost surely equal to its optimal value $W^*(\boldsymbol{\theta}).$ \end{enumerate} \end{theorem} \begin{proof} Arbitrarily fix $\boldsymbol{\theta},$ $i\in\{1,\hdots,n\},$ the strategy $S_i\in\Lambda_i$ that player $i$ employs, and the strategy profile $\boldsymbol{S}_{-i}\in\mathcal{NB}_{-i}$ that all other players employ. We begin with a lemma. \begin{lemma} For $T_i\in\mathcal{T}_i,$ it holds almost surely that \begin{align} \limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^L J_p(l)\mathds{1}_{\{E_{i,(T_i,\boldsymbol{S}_{-i})}(l)\}}=0. \end{align} I.e., if player $i$ employs a truthful strategy, then the penalty that it pays is almost surely zero.\label{LemmaHonesty} \end{lemma} \begin{proof} It suffices to show that $\{E_{i,(T_i,\boldsymbol{S}_{-i})}(l)\}$ almost surely occurs only finitely often. Arbitrarily fix $\mathbf{d}\in\Delta^n$. Define $\mathcal{F}_l\coloneqq\sigma(\widehat{\boldsymbol{\delta}}_{-i}^l,\delta_i^{l-1})$ so that $\big(\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\big[\mathds{1}_{\{\delta_i(l')=d_i\}}-\theta_i(d_i)\big],\mathcal{F}_{l'+1}\big)$ is a martingale difference sequence bounded by unity. It follows from the Azuma-Hoeffding inequality that \begin{align} \mathbb{P}\big(\big\vert\ \frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\big[\mathds{1}_{\{\delta_i(l')=d_i\}}-\theta_i(d_i)\big]\big\vert\geq r(l)\big)\leq 2e^{-2lr^2(l)}. 
\end{align} Combining the above inequality with (\ref{rlgreater}) implies \begin{align} \mathbb{P}\big(\big\vert\ \frac{1}{l}\sum_{l'=1}^l\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\big[\mathds{1}_{\{\delta_i(l')=d_i\}}-\theta_i(d_i)\big]\big\vert\geq r(l)\big)\leq \frac{1}{l^{1+\gamma}}. \end{align} Using (\ref{hhatDefn}) and the fact that player $i$ employs a truthful strategy, the above inequality implies \begin{align} \mathbb{P}\big(\big\vert\widehat{h}_{i,\mathbf{d}}(l)\big\vert\geq r(l)\big)\leq\frac{1}{l^{1+\gamma}} \end{align} which in turn implies that $\sum_{l=1}^\infty\mathbb{P}\big(\big\vert\widehat{h}_{i,\mathbf{d}}(l)\big\vert\geq r(l)\big)<\infty.$ Invoking the Borel-Cantelli lemma, we have that $\{\vert\widehat{h}_{i,\mathbf{d}}(l)\vert\geq r(l)\}$ almost surely occurs only finitely often. Similarly, $(\mathds{1}_{\{{\delta}_i(l')=d_i\}}-\theta_i(d_i),\mathcal{F}_{l'+1})$ is a martingale difference sequence bounded by unity and following the same sequence of arguments as above, it can be established that $\{\vert\widehat{f}_{i,d_i}(l)\vert\geq r(l)\}$ almost surely occurs only finitely often. Since $\mathbf{d}$ is arbitrarily chosen, we have that for every $\mathbf{d}\in\Delta^n,$ $\{\vert\widehat{h}_{i,\mathbf{d}}(l)\vert\geq r(l)\}$ and $\{\vert\widehat{f}_{i,d_i}(l)\vert\geq r(l)\}$ almost surely occur only finitely often, and the desired result follows. \end{proof} We have \begin{align*} u_i(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)&=\liminf_{L\to\infty}\frac{1}{L}\sum_{l=1}^Lv_i({\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))-p_{i,l}(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}^l), \end{align*} where $\boldsymbol{\widehat{\theta}}$ and $\boldsymbol{\widehat{\delta}}^\infty$ are determined in accordance with $\boldsymbol{S}$. 
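The Azuma--Hoeffding bound invoked in the proof of Lemma \ref{LemmaHonesty} can be checked numerically in the simplest case of an IID Bernoulli type sequence: the Monte Carlo estimate below of the deviation probability (all parameters hypothetical) stays under the bound $2e^{-2lr^2(l)}$.

```python
import math
import random

def deviation_prob(p, l, r_l, trials=2000, seed=1):
    """Monte Carlo estimate of P(|(1/l) sum of l Bernoulli(p) draws - p| >= r_l),
    to be compared with the Hoeffding bound 2*exp(-2*l*r_l**2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(1 for _ in range(l) if rng.random() < p) / l
        if abs(mean - p) >= r_l:
            hits += 1
    return hits / trials
```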
Substituting (\ref{piF}) and (\ref{piS}) into (\ref{Payment}), substituting the resulting expression for $p_{i}(l)$ into the above equality, and simplifying the result yields \begin{align} u_i(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)&=\big[W^*(\boldsymbol{\widehat{\theta}})-W^*(\boldsymbol{\widehat{\theta}}_{-i})\big]\nonumber\\ &+\liminf_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\bigg(v_i({\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))-v_i(\widehat{\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))\bigg)\nonumber\\ &-\limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^LJ_p(l)\mathds{1}_{\{E_{i,\boldsymbol{S}}(l)\}}.\label{uiAvgExpression} \end{align} Arbitrarily fix $T_i\in\mathcal{T}_i$. Then, we obtain using Lemma \ref{LemmaHonesty} and some straightforward algebra that \begin{align} u_i(T_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)-u_i(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty) &=\big[W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-W^*(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})\big]\nonumber\\ &+\limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\bigg(v_i(\widehat{\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))-v_i({\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))\bigg)\nonumber\\ &+\limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^LJ_p(l)\mathds{1}_{\{E_{i,\boldsymbol{S}}(l)\}}.\label{utilityDifference} \end{align} In what follows, we show that the above quantity is almost surely nonnegative, implying that truthful strategy profiles are Dominant Strategy Non-Bankrupting Equilibria. 
Define \begin{align} \nu_i(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})\coloneqq\mathbb{E}_{(\widehat{\delta}_i,\widehat{\boldsymbol{\delta}}_{-i})\sim\widehat{\theta}_i\times\boldsymbol{\widehat{\theta}}_{-i}}\bigg[v_i\big(\widehat{\delta}_i,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}})\big)\bigg] \end{align} and \begin{align} \nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})\coloneqq\mathbb{E}_{(\widehat{\delta}_i,\widehat{\boldsymbol{\delta}}_{-i})\sim\widehat{\theta}_i\times\boldsymbol{\widehat{\theta}}_{-i}}\bigg[\sum_{j\neq i}v_j\big(\widehat{\delta}_j,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}})\big)-c\big(g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}})\big)\bigg] \end{align} so that \begin{align} W^*(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})=\nu_i(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})+\nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})\label{tempUse1}. 
\end{align} Let $\mu(\Delta,\Delta^n)$ be the set of joint probability mass functions over $\Delta\times\Delta^n.$ For $\psi\in\mu(\Delta,\Delta^n),$ define \begin{align} \rho_i(\psi)\coloneqq\mathbb{E}_{({\delta}_i,[\widehat{\delta}_i,\widehat{\boldsymbol{\delta}}_{-i}])\sim\psi}\bigg[v_i({\delta}_i,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}))\bigg].\label{rhodefn} \end{align} Let $\Psi(\theta_i,\boldsymbol{\widehat{\theta}})\subset \mu(\Delta,\Delta^n)$ be the set of joint probability mass functions with ``$x$-marginal" distributed according to $\theta_i$ and ``$y$-marginal" distributed according to ${\widehat{\theta}_1\times\hdots\times\widehat{\theta}_n}.$ {Then, for every $\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}}),$ \begin{align} W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})\geq\rho_i(\psi)+\nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i}).\label{tempUse2} \end{align}} To see this, note that if $(\delta_i,\boldsymbol{\delta}_{-i})\sim\theta_i\times\boldsymbol{\widehat{\theta}}_{-i}$, then the social planner can map $\delta_i$ to a random variable $\delta_i'$ using an appropriate probability transition kernel $P_{\delta_i'\vert\boldsymbol{\delta}}$ such that $(\delta_i,[\delta_i',\boldsymbol{\delta}_{-i}])\sim\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}}).$ Consequently, by choosing the first-stage outcome as $g_1^*(\boldsymbol{\widehat{\theta}})$ and the second-stage outcome as $g_2^*(\boldsymbol{\widehat{\theta}},[{\delta}_i',\boldsymbol{\delta}_{-i}])$, an expected social welfare of $\rho_i(\psi)+\nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})$ can be attained.
It follows that the optimal expected social welfare $W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})$ is at least as large, which yields (\ref{tempUse2})\footnote{This argument requires the second-stage decision rule to be randomized whereas we have assumed $g_1^*$ and $g_2^*$ to be deterministic functions. This apparent gap can be addressed by noting that an optimal decision rule $(g_1^*,g_2^*)$ can be found within the class of deterministic functions.}. Suppose for a moment that each player $j\in\{1,\hdots,n\}$ employs a stationary second-stage bidding policy $\mu_S^j$ so that $\widehat{\delta}_j(l)$ is chosen as a function of $\delta_j(l)$ according to some probability kernel $P^j_{\widehat{\delta}_j\vert\delta_j}$ for every $l.$ For player $j$'s strategy to be non-bankrupting, it is necessary that $\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L \mathds{1}_{\{\widehat{\delta}_j(l)=t\}}=\widehat{\theta}_j(t)$ almost surely for every $t\in\Delta,$ since otherwise (\ref{constraint1}) would be violated infinitely often, resulting in an infinite average penalty. So, for every $j\in\{1,\hdots,n\},$ if player $j$'s strategy is to be non-bankrupting, then $P^j_{\widehat{\delta}_j\vert\delta_j}$ must be such that $\widehat{\delta}_j(1)\sim\widehat{\theta}_j$ given $\delta_j(1)\sim\theta_j$. It follows that for every $j\in\{1,\hdots,n\},$ $(\delta_j(1),\boldsymbol{\widehat{\delta}}(1))\sim\psi_j$ for some $\psi_j\in\Psi(\theta_j,\boldsymbol{\widehat{\theta}})$.
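The set $\Psi(\theta,\boldsymbol{\widehat{\theta}})$ consists of couplings with prescribed marginals, and its simplest member is the product coupling. A Python sketch for scalar marginals (the reduction of the $y$-coordinate to a single player's type space is for illustration only):

```python
def product_coupling(theta, theta_hat):
    """The independent coupling: one element of Psi(theta, theta_hat), with
    x-marginal theta and y-marginal theta_hat."""
    return {(x, y): theta[x] * theta_hat[y] for x in theta for y in theta_hat}

def marginals(psi):
    """Recover the x- and y-marginals of a joint pmf over pairs (x, y)."""
    mx, my = {}, {}
    for (x, y), p in psi.items():
        mx[x] = mx.get(x, 0.0) + p
        my[y] = my.get(y, 0.0) + p
    return mx, my
```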
It also follows that $\{(\boldsymbol{\delta}(1),\boldsymbol{\widehat{\delta}}(1)),(\boldsymbol{\delta}(2),\boldsymbol{\widehat{\delta}}(2)),\hdots\}$ is a sequence of IID random variables, and so we obtain using the Strong Law of Large Numbers (SLLN) that the RHS of (\ref{utilityDifference}) almost surely equals $[W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-W^*(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})]+[\nu_i(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})-\rho_i(\psi_i)].$ Upon substituting (\ref{tempUse1}), this becomes $W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-\nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})-\rho_i(\psi_i),$ and combining it with (\ref{tempUse2}) implies the nonnegativity of (\ref{utilityDifference}). However, in order to fabricate the type bids, the players may not restrict just to stationary policies but can employ any history-dependent policy. The rest of the proof is devoted to showing that the same result, namely, the nonnegativity of (\ref{utilityDifference}), holds even in the general case where the players may employ any non-bankrupting strategy. The key to establishing this is the following lemma that characterizes the empirical joint distributions of the reported types when all players employ a non-bankrupting strategy. 
\begin{lemma} Suppose that for every $j\in\{1,\hdots,n\},$ \begin{align} \limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^LJ_p(l)\mathds{1}_{\{E_{j,\boldsymbol{S}}(l)\}}<\infty.\label{allPlayersNB} \end{align} Then, for every $\mathbf{d}\in\Delta^n,$ \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\boldsymbol{\widehat{\delta}}(l)=\mathbf{d}\}}=\Pi_{j=1}^n\widehat{\theta}_j(d_j).\label{empiricalDistEqualsPhi} \end{align}\label{lemma1} \end{lemma} \begin{proof} It suffices to show that for all $\mathbf{d}\in\Delta^n$ and all $k\in\{1,\hdots,n-1\},$ \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{k:n}(l)=\mathbf{d}_{k:n}\}}=\widehat{\theta}_k(d_k)\bigg[\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{k+1:n}(l)=\mathbf{d}_{k+1:n}\}}\bigg]\label{l1s1} \end{align} and that \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{{\delta}}_{n}(l)={d}_{n}\}}=\widehat{\theta}_n(d_n),\label{l1s2} \end{align} where $\mathbf{d}_{k:n}\coloneqq[d_k\;d_{k+1}\;\hdots\;d_n]$ and $\widehat{\boldsymbol{\delta}}_{k:n}(l)$ is defined likewise. Combining (\ref{allPlayersNB}) with (\ref{JpIsOmegaL}) implies that $\limsup_{L\to\infty}\sum_{l=1}^L\mathds{1}_{\{E_{j,\boldsymbol{S}}(l)\}}<\infty$ for every $j\in\{1,\hdots,n\}$. I.e., the event sequence $\{E_{j,\boldsymbol{S}}(l)\}$ occurs only finitely often. 
Hence, we obtain using (\ref{Edefn}) and (\ref{rGoesToZero}) that for all $\mathbf{d}\in\Delta^n$ and all $j\in\{1,\hdots,n\},$ \begin{align} \lim_{L\to\infty}\widehat{f}_{j,d_j}(L)=0\label{fhatlim0} \end{align} and \begin{align} \lim_{L\to\infty}\widehat{h}_{j,\mathbf{d}}(L)=0.\label{hhatlim0} \end{align} Substituting (\ref{fhatDefn}) in (\ref{fhatlim0}) implies \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\delta}_j(l)=d_j\}}=\widehat{\theta}_j(d_j)\label{marginalEqual} \end{align} for all ${d}_j\in\Delta$ and all $j\in\{1,\hdots,n\},$ which in particular establishes (\ref{l1s2}). Substituting (\ref{hhatDefn}) in (\ref{hhatlim0}) implies \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\delta}_j(l)=d_j,\widehat{\boldsymbol{\delta}}_{-j}(l)=\mathbf{d}_{-j}\}}=\widehat{\theta}_j(d_j)\bigg[\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-j}(l)=\mathbf{d}_{-j}\}}\bigg]\label{noCollusion} \end{align} for all $\mathbf{d}\in\Delta^n$ and all $j\in\{1,\hdots,n\}$. In concluding (\ref{noCollusion}), we have assumed that the limit on the RHS exists; justifying this requires certain additional arguments, which we omit so as not to detract from the main aspects of the proof. 
The equality (\ref{l1s1}) can now be established by noting that \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{k:n}(l)=\mathbf{d}_{k:n}\}}&=\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\sum_{t_1,\hdots,t_{k-1}}\mathds{1}_{\{{\widehat{\delta}}_1(l)=t_1,\hdots,\widehat{\delta}_{k-1}(l)=t_{k-1},\widehat{\boldsymbol{\delta}}_{k:n}(l)=\mathbf{d}_{k:n}\}}\nonumber\\ &=\sum_{t_1,\hdots,t_{k-1}}\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\delta}_1(l)=t_1,\hdots,\widehat{\delta}_{k-1}(l)=t_{k-1},\widehat{\boldsymbol{\delta}}_{k:n}(l)=\mathbf{d}_{k:n}\}}\nonumber\\ &=\sum_{t_1,\hdots,t_{k-1}}\bigg[\widehat{\theta}_k(d_k)\bigg]\bigg[\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\delta}_1(l)=t_1,\hdots,\widehat{\delta}_{k-1}(l)=t_{k-1},\widehat{\boldsymbol{\delta}}_{k+1:n}(l)=\mathbf{d}_{k+1:n}\}}\bigg]\nonumber\\ &=\widehat{\theta}_k(d_k)\bigg[\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\sum_{t_1,\hdots,t_{k-1}}\mathds{1}_{\{\widehat{\delta}_1(l)=t_1,\hdots,\widehat{\delta}_{k-1}(l)=t_{k-1},\widehat{\boldsymbol{\delta}}_{k+1:n}(l)=\mathbf{d}_{k+1:n}\}}\bigg]\nonumber\\ &=\widehat{\theta}_k(d_k)\bigg[\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{k+1:n}(l)=\mathbf{d}_{k+1:n}\}}\bigg], \end{align} where the third equality follows from (\ref{noCollusion}). \end{proof} It follows from (\ref{JpIsOmegaL}) that $\limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^LJ_p(l)\mathds{1}_{\{E_{i,\boldsymbol{S}}(l)\}}$ can only take values $0$ and $\infty.$ In the latter case, the nonnegativity of (\ref{utilityDifference}) is immediate. In the former case, since $\boldsymbol{S}_{-i}$ is a non-bankrupting strategy profile, we have that for all $j\in\{1,\hdots,n\},$ \begin{align} \limsup_{L\to\infty}\frac{1}{L}\sum_{l=1}^LJ_p(l)\mathds{1}_{\{E_{j,\boldsymbol{S}}(l)\}}<\infty\label{t1u0} \end{align} almost surely. 
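The conclusion of Lemma \ref{lemma1} — that under non-bankrupting play the empirical joint distribution of the reported types factorizes into the product of the bid supertypes — can be illustrated with a small simulation (all distributions below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 3, 200_000
# Bid supertypes over a binary type set Delta = {0, 1} (arbitrary values).
theta_hat = np.array([[0.6, 0.4],
                      [0.3, 0.7],
                      [0.5, 0.5]])
# Reports drawn independently across players, consistent with what the
# lemma asserts must hold in empirical frequency.
reports = np.column_stack([rng.choice(2, size=L, p=theta_hat[j])
                           for j in range(n)])

pairs = []
for d in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    emp = np.mean(np.all(reports == np.array(d), axis=1))   # LHS of the lemma
    prod = np.prod([theta_hat[j, d[j]] for j in range(n)])  # RHS of the lemma
    pairs.append((float(emp), float(prod)))
    print(d, round(float(emp), 3), round(float(prod), 3))
```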
Consequently, Lemma \ref{lemma1} applies, and we get \begin{align} \lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^L v_i(\widehat{\delta}_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))=\nu_i(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i}).\label{t1u1} \end{align} Now, consider the empirical joint distribution $\psi_L(d,\mathbf{\widehat{d}})\coloneqq\frac{1}{L}\sum_{l=1}^L\mathds{1}_{\{{\delta}_i(l)={d},\widehat{\boldsymbol{\delta}}(l)=\widehat{\mathbf{d}}\}},$ where ${d}\in\Delta$ and $\widehat{\mathbf{d}}\in\Delta^n.$ Note that $\psi_L\in\mu(\Delta,\Delta^n)$ for all $L\in\mathbb{Z}_+.$ It follows from SLLN that for any $d\in\Delta,$ $\lim_{L\to\infty}\sum_{\mathbf{\widehat{d}}}\psi_L(d,\widehat{\mathbf{d}})=\theta_i(d).$ Since (\ref{t1u0}) holds, we obtain using Lemma \ref{lemma1} that for any $\mathbf{\widehat{d}}\in\Delta^n,$ $\lim_{L\to\infty}\sum_{{{d}}}\psi_L(d,\widehat{\mathbf{d}})=\Pi_{j=1}^n\widehat{\theta}_j(\widehat{{d}_j}).$ I.e., the sequence $\{\psi_L\}$ of empirical joint distributions is such that its x-marginal approaches the distribution $\theta_i$ and its y-marginal approaches the distribution ${\widehat{\theta}_1\times\hdots\times\widehat{\theta}_n}.$ It can be shown as a consequence that $\{\psi_L\}$ approaches the set $\Psi(\theta_i,\boldsymbol{\widehat{\theta}})$ in that $\min_{\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}})}\vert\vert\psi-\psi_L\vert\vert\to0$ as $L\to\infty,$ where $\vert\vert\cdot\vert\vert$ can be any norm defined on the set $\mu(\Delta,\Delta^n).$ Also, the function $\rho_i:\mu(\Delta,\Delta^n)\to\mathbb{R}$ defined in (\ref{rhodefn}) is a continuous function over a compact set, and hence uniformly continuous. 
It follows that \begin{align} \liminf_{L\to\infty}\rho_i(\psi_L)\leq\sup_{\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}})}\rho_i(\psi).\label{rhoIbounded} \end{align} Note also that $\frac{1}{L}\sum_{l=1}^Lv_i(\delta_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}(l)))=\mathbb{E}_{(\delta_i,[\widehat{\delta}_i,\widehat{\boldsymbol{\delta}}_{-i}])\sim\psi_L}[v_i(\delta_i,g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\widehat{\boldsymbol{\delta}}))]=\rho_i(\psi_L).$ Taking the limit as $L\to\infty$ and using (\ref{rhoIbounded}) implies \begin{align} \liminf_{L\to\infty}\frac{1}{L}\sum_{l=1}^Lv_i(\delta_i(l),g_1^*(\boldsymbol{\widehat{\theta}}),g_2^*(\boldsymbol{\widehat{\theta}},\boldsymbol{\widehat{\delta}}(l)))\leq\sup_{\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}})}\rho_i(\psi).\label{t1u2} \end{align} Substituting (\ref{t1u1}) and (\ref{t1u2}) in (\ref{utilityDifference}) yields \begin{align*} u_i(T_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)-u_i(S_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)\geq[W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-W^*(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}}_{-i})]+\nu_i(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}_{-i}})-\sup_{\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}})}\rho_i(\psi). \end{align*} Upon substituting (\ref{tempUse1}), the RHS of the above inequality becomes $W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-\nu_{-i}(\widehat{\theta}_i,\boldsymbol{\widehat{\theta}_{-i}})-\sup_{\psi\in\Psi(\theta_i,\boldsymbol{\widehat{\theta}})}\rho_i(\psi).$ Combining this with (\ref{tempUse2}) implies its nonnegativity, thereby establishing the nonnegativity of (\ref{utilityDifference}). We now prove the second statement of the theorem. 
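The identity used above — that a sample average of a function equals the expectation of that function under the empirical distribution of the samples — can be checked numerically; the distribution and the stand-in valuation below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
support = np.array([0, 1, 2])
samples = rng.choice(support, size=1000, p=[0.2, 0.5, 0.3])

def f(x):
    # Stand-in for the valuation v_i; any bounded function works here.
    return x**2 + 1.0

# Empirical distribution psi_L of the samples over the support.
psi_L = np.bincount(samples, minlength=3) / samples.size
sample_avg = f(samples).mean()                       # (1/L) * sum_l f(X_l)
emp_expectation = float(np.sum(psi_L * f(support)))  # E_{psi_L}[f]
# The two agree up to floating-point rounding.
print(sample_avg, emp_expectation)
```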
Arbitrarily fix $\boldsymbol{S}_{-i}\in\Lambda_{-i}$ and $T_i\in\mathcal{T}_i.$ Using (\ref{Payment}), (\ref{uiAsymptotic}) and Lemma \ref{LemmaHonesty}, we obtain almost surely that $u_i(T_i,\boldsymbol{S}_{-i},\boldsymbol{\theta},\boldsymbol{\delta}^\infty)=\big[W^*(\theta_i,\boldsymbol{\widehat{\theta}}_{-i})-W^*(\boldsymbol{\widehat{\theta}}_{-i})\big]\geq0$, where the inequality follows from (\ref{noPessimist}). Hence, truth-telling is individually rational for every player. That the mechanism maximizes social welfare under truthful bidding is a straightforward consequence of the optimality of the first- and the second-stage decision rules. \end{proof} The following section describes an application of the mechanism to the design of demand response markets. \section{Application to Demand Response Markets}\label{applications} As mentioned in Section I, one of the motivating reasons for introducing the environment of a two-stage repeated stochastic game is its ability to readily model many problems that arise in the context of next-generation electricity markets. We illustrate one such problem in this section, namely, mechanism design for demand response markets. In addition to illustrating an application of the proposed framework, the results of this section also serve to illustrate the benefits of using the proposed mechanism as opposed to other ``natural" mechanisms that a policy-maker might employ in such scenarios. One of the main requirements of power systems operations is that the power supply has to equal the random demand at each time instant. In conventional systems, the power supply can be controlled, and so the generation is continuously adjusted to follow the random demand to maintain balance. However, at deep levels of renewable energy penetration, the generation becomes random. A popular paradigm for maintaining demand-supply balance in such a system is to make the demand follow the random supply. 
This typically involves curtailing consumption during times of power supply shortage. This is referred to as demand response, and is achieved by using incentives to modulate the demand. One of the key challenges in implementing demand response is that in order to optimally allocate a desired consumption reduction among the demand response providers, their costs for curtailing consumption must be known, which are in general random and private to the loads, and which they could misreport to achieve more favorable allocations for themselves. The goal of the mechanism designer is to elicit both the probability distribution and the realization of the private costs truthfully. See \cite{LCSS2022} for more details. In what follows, we describe how the mechanism developed in the previous section can be applied to this problem. In this section, we overload certain notation. Specifically, whenever a demand response market-specific quantity maps to a two-stage repeated stochastic game-specific quantity, the former will be denoted using the same symbol that has been used for the latter. Consider a system consisting of $n$ Demand Response (DR) providers and a reserve generator. Each DR provider has a cost function that specifies the cost it incurs as a function of its power consumption reduction. We assume that the cost function is parameterizable and denote by $\delta_i(l)$ the parameter that specifies the cost function of DR provider $i$ on day $l$. Hence, $c(x,\delta_i(l))$ denotes the cost that DR provider $i$ incurs on day $l$ for curtailing its consumption by $x$ units from its baseline. 
The sequence $\boldsymbol{\delta}^\infty$ is IID with $\boldsymbol{\delta}(1)\sim\boldsymbol{\theta}\coloneqq\theta_1\times\hdots\times\theta_n$ where $\theta_i$ denotes the probability distribution of ${\delta}_i(1).$ The reserve generator has associated with it a production function $c_s:\mathbb{R}\to\mathbb{R}$ which specifies the cost it incurs as a function of the power that it produces. Denote by $d(l)$ the power shortage on day $l.$ The system operator wishes to minimize the social cost of compensating the shortage, and therefore wishes to determine the consumption reduction of the DR providers and the reserve generation on day $l$ as \begin{align} (\mathbf{x}^*(\boldsymbol{\delta}(l)),g_s^*(\boldsymbol{\delta}(l)))=\argmin_{x_1,\hdots,x_n,g_s} \quad & \sum_{i=1}^nc(x_i,\delta_i(l))+c_s(g_s)\\ \mathrm{subject\; to} \quad & \sum_{i=1}^nx_i+g_s=d(l).\nonumber \end{align} The problem of course is that the system operator does not know $\{\delta_1(l),\hdots,\delta_n(l)\},$ and so it requests the DR providers to bid their cost functions. Denote by $\widehat{\delta}_i(l)$ the parameter that DR provider $i$ bids on day $l$. The system operator computes $\mathbf{x}^*(\widehat{\boldsymbol{\delta}}(l))$ and pays each DR load $i$ a payment $p_i(l)$ on day $l$ for reducing its consumption by $x_i^*(\boldsymbol{\widehat{\delta}}(l)).$ The average utility that DR provider $i$ accrues is defined as \begin{align} u_i^\infty\coloneqq\lim_{L\to\infty}\frac{1}{L}\sum_{l=1}^Lp_i(l)-c(x_i^*(\widehat{\boldsymbol{\delta}}(l)),\delta_i(l)). \end{align} \begin{figure} \centering \includegraphics[scale=0.5]{socialCost.jpg} \caption{Social cost vs. the number of DR providers. 
The larger the number of participants in the demand response program, the lower the social cost of the program.} \label{fig:socialCost} \end{figure} It is straightforward to see that the average utility of each DR provider is a function of not only its own bidding strategy, but also the bidding strategy of the other DR loads. Consequently, a DR provider may not bid its cost truthfully if there is a possibility for it to obtain a larger utility by misreporting its cost. This in turn could result in the demand response program operating in a manner that is not social cost-minimizing. This motivates the mechanism design problem. The mechanism presented in the previous section can be used to design a payment rule which results in truth-telling being a dominant strategy non-bankrupting equilibrium. For our numerical study, we have taken $c(x,\delta_i)=\frac{\delta_i}{2}x^2$, $c_s(x,\delta_s)=\frac{\delta_s}{2}x^2,$ $\boldsymbol{\theta}$ to be a product of beta distributions of unit mean and variance $2$, and $\delta_s(l)$ to also be beta distributed with the same parameters. Fig. \ref{fig:socialCost} quantifies how the social cost reduces as the participation of DR providers increases. Fig. \ref{fig:sensitivity1} illustrates how the payment resulting from the proposed mechanism behaves from the point of view of a randomly chosen DR provider. Specifically, we fix the cost function of a randomly chosen DR provider and plot how its average payment varies with the mean of the costs of the other DR providers. Qualitatively, the higher the mean cost of a DR provider, the higher the inelasticity of its demand. Hence, Fig. \ref{fig:sensitivity1} quantifies the rate at which the payment received by a given DR load increases as a function of the inelasticity of the other DR providers. 
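For the quadratic costs $c(x,\delta_i)=\frac{\delta_i}{2}x^2$ and $c_s(x,\delta_s)=\frac{\delta_s}{2}x^2$ used in the numerical study, the social-cost minimization stated earlier admits a closed-form KKT solution. The sketch below (our own illustration, with arbitrarily chosen parameters) computes it and checks that adding DR providers lowers the social cost, consistent with Fig. \ref{fig:socialCost}:

```python
import numpy as np

def optimal_allocation(deltas, delta_s, d):
    """Minimize sum_i (delta_i/2) x_i^2 + (delta_s/2) g_s^2
    subject to sum_i x_i + g_s = d.
    KKT: delta_i * x_i = lambda for every i and delta_s * g_s = lambda,
    hence lambda = d / (sum_i 1/delta_i + 1/delta_s)."""
    inv_sum = np.sum(1.0 / deltas) + 1.0 / delta_s
    lam = d / inv_sum
    x = lam / deltas
    g_s = lam / delta_s
    cost = 0.5 * np.sum(deltas * x**2) + 0.5 * delta_s * g_s**2
    return x, g_s, cost

rng = np.random.default_rng(4)
delta_s, d = 2.0, 10.0
costs = []
for n in [1, 5, 20]:
    deltas = rng.uniform(1.0, 4.0, size=n)  # arbitrary cost parameters
    costs.append(optimal_allocation(deltas, delta_s, d)[2])
print([round(c, 2) for c in costs])  # social cost falls as n grows
```

The resulting minimum social cost is $d^2/\big(2(\sum_i 1/\delta_i + 1/\delta_s)\big)$, which is decreasing in the number of participating providers.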
An arguably natural alternative for the proposed mechanism is the posted price mechanism wherein the system operator announces the payment $p_{pp}$ that the DR providers would receive per unit reduction in their power consumption. Each DR provider $i$ then chooses its curtailment $x_{i,pp}^*(l)$ on day $l$ as ${x}_{i,pp}^*(l)=\argmin_{x} \; c(x,\delta_i(l))-p_{pp}x.$ The residual mismatch $d(l)-\sum_{i=1}^nx_{i,pp}^*(l)=:g_s(l)$ is purchased in the spot market at cost $c_s(g_s(l),\delta_s(l))=\frac{\delta_s(l)}{2}g^2_s(l).$ Such a mechanism has been employed, for example, in a prior demand response trial in the United Kingdom. How do such ``simple," ``natural" alternatives compare with the proposed mechanism? Fig. \ref{fig:socialCostComparison} compares the social cost attained by the proposed mechanism with the social cost attained by the posted price mechanism. Several important observations are in order. First, note that there exists a price point at which the posted price mechanism attains its minimum social cost. However, this price point is a function of the type distributions of the DR loads which are their private knowledge. This requires the system operator to perform price discovery in order to compute the optimal price point --- a process that is vulnerable to strategic manipulation by the DR providers. Second, even assuming that the DR providers do not manipulate the price discovery, the minimum social cost that can be attained by the posted price mechanism is in general strictly larger than what can be attained by employing the proposed mechanism. \begin{figure} \centering \includegraphics[scale=0.5]{rebate1.jpg} \caption{Average payment received by a fixed DR provider as a function of the mean of the supertypes of the other DR providers. The fixed DR provider has cost parameter $\delta_i(l)=4$ for all $l$, and the supertypes of the other loads are beta distributed with varying mean and a fixed variance of $2$. 
Hence, the average payment received by a given load increases as the demand of the other loads becomes more and more inelastic.} \label{fig:sensitivity1} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{comparison.jpg} \caption{The social cost attained by the posted price mechanism vs. the price.} \label{fig:socialCostComparison} \end{figure} \section{Related Work}\label{relatedWork} The setting of two-stage stochastic games was introduced in \cite{Mukund2007}, which considers a one-shot setting and develops a mechanism that renders truthful bidding a sequential ex post Nash equilibrium. Reference \cite{Jain1} considers a two-stage game setting to model electricity markets consisting of wind power producers and develops a mechanism that incentivizes truthful bidding. However, it assumes that it is only in the first stage of the game that the wind power producers can bid strategically, and not in the second stage. In contrast, the setting that we have considered assumes that the valuation function distribution \emph{and} the valuation function realization are private to the players, and that they can misreport either or both of them to accrue a higher utility. Reference \cite{mezzetti2004mechanism} presents a two-stage mechanism called the generalized Groves mechanism. In terms of the terminology and the framework presented in this paper, the setting in \cite{mezzetti2004mechanism} can be interpreted as each player having a privately known distribution of its valuation function which it is required to bid to the social planner. The joint distribution of the players' valuation functions is assumed to be common knowledge. The social planner chooses an outcome that maximizes the expected social welfare based on the bids. After the social planner chooses the outcome, the valuation functions realize, which the players are required to bid in the second stage. Following this, a final payment is made. 
The payment rule guarantees that truth-telling by all players is an ex post Nash equilibrium. It is important to recognize that it is only the payment rule that has two stages in the aforementioned setting, and not the game itself. This in fact is one of the key departures of the one-shot two-stage stochastic game setting from the setting considered in \cite{mezzetti2004mechanism}; the latter does not include the possibility for the social planner to take recourse actions after the valuation functions realize. In the context of electricity markets, not only is it feasible to take recourse actions, it is also \emph{imperative} to take recourse actions if grid stability is to be maintained. Reference \cite{BilateralTradeTwoStage} builds upon the mechanism proposed in \cite{mezzetti2004mechanism} to devise a two-stage mechanism for bilateral trade. A power system offering a demand response program is considered in \cite{DRTwoStage,DRTwoStageFull} and a two-stage mechanism is presented using which a certain quantity of power can be apportioned among the loads when a demand response event occurs. The first stage establishes a contingency plan that specifies the amount of power that would be supplied to each load in each contingency and the corresponding price, and the second stage, during which the contingency occurs, allows the loads to trade among themselves at the price established in the first stage. It is shown that the second-stage trade results in an allocation that Pareto dominates the first-stage allocation. All of the aforementioned papers consider a one-shot game whereas the setting that we have considered is one of repeated plays. As mentioned in Section \ref{introduction}, the aspect of repeated plays introduces certain additional complexities for mechanism design that can be attributed to the availability of history-dependent bidding strategies to the players. A similar challenge manifests in dynamic games. 
References \cite{dynamicMechanism1,dynamicMechanism2,dynamicMechanism3,dynamicAuctions,Ma_Kumar} are some of the papers that address the problem of mechanism design for dynamic games. The solution concept adopted in most of the literature on dynamic games is ex post Nash equilibrium or variants thereof. With the exception of certain special cases such as \cite{Ma_Kumar}, we are unaware of any other work that attempts to go beyond Nash equilibrium or its variants and implement truth-telling in stronger notions of equilibria for broad classes of repeated or dynamic games. A generously disposed view of the present paper could be as an attempt in that direction. \section{Conclusion}\label{conclusion} We have considered two-stage repeated stochastic games wherein private information is revealed over two stages and the social planner is constrained to make a decision in each stage. The setting models many important problems that arise in next-generation electricity markets. Recognizing the limitation of Nash equilibria in molding real-world behavior, we have introduced the notion of a dominant strategy non-bankrupting equilibrium, which requires the players to make very few assumptions about the behaviors of the other players in order to employ their equilibrium strategy. Consequently, a mechanism that implements a certain desired behavior as a dominant strategy non-bankrupting equilibrium could effectively mold real-world behavior along the desired lines. We have developed a mechanism for two-stage repeated stochastic games that implements truth-telling as a DNBE. The mechanism is also individually rational and maximizes social welfare. 
\begin{comment} Define $\tau_m\coloneqq\min\{l:\sum_{l'=1}^l\mathds{1}_{\{\boldsymbol{\widehat{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}=m\}.$ Then, \begin{align} \frac{1}{\tau_m}\sum_{l'=1}^{\tau_m}\mathds{1}_{\{{\delta}_i(l')=d_i,\boldsymbol{\widehat{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}=\frac{m}{\tau_m}\bigg[\frac{1}{m}\sum_{j=1}^m\mathds{1}_{\{{\delta}_i(\tau_j)=d_i\}}\bigg]\label{l1u1} \end{align} and \begin{align} \bigg[\frac{1}{\tau_m}\sum_{l'=1}^{\tau_m}\mathds{1}_{\{\widehat{\boldsymbol{\delta}}_{-i}(l')=\mathbf{d}_{-i}\}}\bigg]\bigg[\frac{1}{\tau_m}\sum_{l'=1}^{\tau_m}\mathds{1}_{\{\delta_i(l')=d_i\}}\bigg]=\frac{m}{\tau_m}\bigg[\frac{1}{\tau_m}\sum_{l'=1}^{\tau_m}\mathds{1}_{\{\delta_i(l')=d_i\}}\bigg].\label{l1u2} \end{align} Subtracting (\ref{l1u2}) from (\ref{l1u1}) implies \begin{align} \widehat{h}_{i,\mathbf{d}}(\tau_m)=\frac{m}{\tau_m}\bigg[\frac{1}{m}\sum_{j=1}^m\big(\mathds{1}_{\{{\delta}_i(\tau_j)=d_i\}}-\theta_i(d_i)\big)-\frac{1}{\tau_m}\sum_{l'=1}^{\tau_m}\big(\mathds{1}_{\{\delta_i(l')=d_i\}}-\theta_i(d_i)\big)\bigg].\label{hhatRV} \end{align} The first term in the braces in the RHS of the above equality is the average of $m$ i.i.d. bounded, hence sub-Gaussian, random variables each of which has variance proxy $1$ (the optimal variance proxy is lesser than $1$). Therefore, the first term is sub-Gaussian with variance proxy $\frac{1}{m}.$ Similarly, the second term is a sub-Gaussian random variable with variance proxy $\frac{1}{\tau_m}.$ It can be shown using straightforward arguments that the difference of two sub-Gaussian random variables (not necessarily independent) remains sub-Gaussian with variance proxy given by the sum of the variance proxies of the two random variables. 
It follows that the random variable in (\ref{hhatRV}) is sub-Gaussian with variance proxy $\frac{m^2}{\tau_m^2}(\frac{1}{m}+\frac{1}{\tau_m}).$ Using Chernoff bound, we obtain \begin{align} \mathbb{P}\big(\big\vert\widehat{h}_{i,\mathbf{d}}(\tau_m)\big\vert\geq r(\tau_m)\big)\leq 2\exp{\bigg\{-\frac{r^2(\tau_m)}{2\frac{m^2}{\tau_m^2}(\frac{1}{m}+\frac{1}{\tau_m})}\bigg\}}. \end{align} Combining (\ref{rlgreater}) with the above inequality implies \begin{align} \mathbb{P}\big(\big\vert\widehat{h}_{i,\mathbf{d}}(\tau_m)\big\vert\geq r(\tau_m)\big)\leq2\exp{\bigg\{-\frac{2\ln{2\tau_m^{1+\gamma}}}{\frac{m}{\tau_m}+\frac{m^2}{\tau_m^2}}\bigg\}}\leq\frac{1}{\tau_m^{1+\gamma}} \end{align} where the last inequality follows from the fact that $m\leq\tau_m.$ Hence, \end{comment} \bibliographystyle{IEEEtran}
The 13th Biathlon World Championships were held in 1974 at the purpose-built Raubitschy winter sports complex near Minsk in the Soviet Union. With the addition of the sprint competition to the World Championship program, three events were contested for the first time. Men Sprint 10 km Date: 28 February Individual 20 km Date: 27 February Relay 4 × 7.5 km Date: 1 March Medal table Literature External links
\section{Introduction} In description logics (DLs) a concrete domain is a construct that can be used to define new classes by specifying restrictions on attributes that have literal values (as opposed to relationships to other concepts). Practical applications of DLs usually require concrete properties with values from a fixed domain, such as strings or integers, supporting built-in predicates. For DLs that are extended with concrete domains, there exist partial functions mapping objects of the abstract domain to values of the concrete domain, which can be used for building complex concepts. Concrete domains can be used to construct complex concepts; for instance, the axiom $Teenager \equiv Person \sqcap \exists age. (\geq, 13) \sqcap \exists age. (\leq,19)$ defines a teenager as a person whose age is at least 13 and at most 19. In DLs, concrete domains are also known as \textit{datatypes}. Several probabilistic extensions of DLs opt to exclude datatypes although, in fact, they are an essential feature, as several knowledge extraction tools produce weighted rules or axioms that contain concrete data values. Reasoning over these data, either to infer new knowledge or to verify correctness, is indispensable. Additionally, recent advances in information extraction have paved the way for the automatic construction and growth of large, semantic knowledge bases from different sources. However, the very nature of these extraction techniques entails that the resulting knowledge bases may contain a significant amount of incorrect, incomplete, or even inconsistent (i.e., uncertain) knowledge, which makes efficient reasoning and query answering over this kind of uncertain data a challenge. To address these issues, there exist ongoing studies on probabilistic knowledge bases. The study of extending DLs to handle uncertainty and vagueness has gained momentum recently. There have been several proposals to add probabilities to various DLs. 
Probabilistic DLs can be classified along several dimensions. One possible classification is based on the reasoning mechanism used: Markov logic networks (MLNs), Bayesian networks, and probabilistic reasoning. There exist some studies that employ MLNs to extend various DLs. The study in \cite{lukasiewicz-et-al:2012} extends $\mathcal{EL}^{++}$ with probabilistic uncertainty based on the annotation of axioms using MLNs. The main focus of this work is ranking queries in descending order of probability of atomic inferences, which is different from the objective of this paper. Another study, \cite{niepert-et-al:2011}, presents a probabilistic extension of the DL $\mathcal{EL}^{++}$ without nominals and concrete domains using MLNs in order to find the most probable coherent ontology. In doing so, they have developed a reasoner for probabilistic OWL-EL called ELOG \cite{noessner-et-al:2011}. In this study, we extend this work in order to deal with concrete domains in addition to nominals and instances. In databases, MLNs have been used to create a probabilistic datalog based on Datalog$+/-$, an extension of datalog that allows ontological axioms to be expressed as rule-based constraints \cite{gottlob-et-al:2013}. The probabilistic extension of Datalog$+/-$ uses MLNs as the underlying probabilistic semantics. The focus of that work is on scalable threshold query answering, which differs from that of this paper. Other works extend DLs with Bayesian networks. For instance, an extension of $\mathcal{EL}$ with Bayesian networks called $\mathcal{BEL}$ is presented in \cite{ceylan-et-al:2014}. They study the complexity of reasoning under $\mathcal{BEL}$ to show that reasoning is intractable. However, their work does not discuss probabilities in the ABox, and concrete domains are excluded. On the other hand, in \cite{damato-et-al:2008}, they added uncertainty to DL-Lite based on Bayesian networks. 
Additionally, they have shown that satisfiability testing and query answering in probabilistic DL-Lite can be reduced to satisfiability testing and query answering in the DL-Lite family. Further, it is proved that satisfiability checking and union of conjunctive query answering can be done in LogSpace in the data complexity. Consequently, as discussed above, most of the studies that extend description logics to deal with uncertainty by using either Bayesian networks or MLNs exclude concrete domains. This is partly due to either the lack of supporting features or the difficulty in dealing with them. In this paper, we study a novel way of dealing with uncertainty involving concrete domains. To this end, we provide an extension to $\mathcal{EL}^{++}$-LL with concrete domains, nominals and instances. \section{Preliminaries} In this section, we present a brief summary of: $\mathcal{EL}^{++}$, Markov logic networks, cutting plane inference, and $\mathcal{EL}^{++}$-LL. For a detailed discussion on these subjects, we refer the reader to \cite{baader-et-al:2005,richardson-et-al:2006,riedel:2012,niepert-et-al:2011} and the references therein. \subsection{$\bm{\mathcal{EL}^{++}}$} $\mathcal{EL}^{++}$ is the description logic underlying the OWL 2 profile OWL-EL\footnote{\url{http://www.w3.org/TR/owl2-profiles/}}. \subsubsection{Syntax} Given a set of concept names $\mathrm{N_C}$, role names $\mathrm{N_R}$, individuals $\mathrm{N_I}$, and feature names $\mathrm{N_F}$, $\mathcal{EL}^{++}$ concepts and roles are formed according to the following syntax: \begin{align*} C &~::= \top \mid \bot \mid A \mid C \sqcap D \mid \exists R.C \mid \{a\} \mid \exists F.r \end{align*} A concept in $\mathcal{EL}^{++}$ is either the top or bottom concept, an atomic concept, or a complex concept (formed by conjunction and existential restriction). 
Given a datatype restriction $r=(o,v)$ and $x\in \mathcal{D}$, we say that $x$ satisfies $r$ and write $r(x)$ iff $(x,v) \in o$, where $o \in \{<,\leq,>,\geq,=\}$, $o$ is interpreted as the standard relation on real numbers, and $\mathcal{D} \subseteq \mathbb{R}$ is a concrete domain \cite{despoina-et-al:2011}. In this work, we consider only numerical concrete domains and leave out the others for future work. An $\mathcal{EL}^{++}$ TBox contains a set of GCI (General Concept Inclusion) axioms, i.e., $C \sqsubseteq D$, as well as role inclusion axioms, i.e., $R_1 \circ \cdots \circ R_k \sqsubseteq R$ \cite{baader-et-al:2005}. \subsubsection{Semantics} The semantics of $\mathcal{EL}^{++}$ concepts and roles is given by an interpretation function $\mathcal{I}=(\Delta^\mathcal{I},.^{\mathcal{I}})$ which consists of a non-empty (abstract) domain $\Delta^\mathcal{I}$ and a mapping $.^\mathcal{I}$ that assigns to each atomic concept $A \in \mathrm{N_C}$ a subset of $\Delta^\mathcal{I}$, to each abstract role $R \in \mathrm{N_R}$ a subset of $\Delta^\mathcal{I} \times \Delta^\mathcal{I}$, to each concrete relation $F \in \mathrm{N_F}$ a subset of $\Delta^\mathcal{I} \times \mathcal{D}$, and to each individual $a \in \mathrm{N_I}$ an element of $\Delta^\mathcal{I}$. 
The mapping $\cdot^\mathcal{I}$ is extended to all concepts and roles as follows: \begin{align*} (\top)^\mathcal{I} &~= \Delta^\mathcal{I} \\ (\bot)^\mathcal{I} &~= \emptyset \\ (\{a\})^\mathcal{I} &~= \{a^\mathcal{I}\} \\ (C \sqcap D )^\mathcal{I} &~= C^\mathcal{I} \cap D^\mathcal{I} \\ (\exists R.C)^\mathcal{I} &~= \{x \in \Delta^\mathcal{I} \mid \exists y\in \Delta^\mathcal{I}:\\&~~~~~~~~(x,y)\in R^\mathcal{I} \wedge y\in C^\mathcal{I}\}\\ (\exists F.r)^\mathcal{I} &~= \{x\in \Delta^\mathcal{I} \mid \exists v\in \mathcal{D}: (x,v) \in F^\mathcal{I} \\ &~~~~~~~~\wedge r(v)\} \\ (C \sqsubseteq D)^\mathcal{I} &~= C^\mathcal{I} \subseteq D^\mathcal{I} \\ (R_1 \circ \cdots \circ R_k \sqsubseteq R)^\mathcal{I} &~= R_1^\mathcal{I} \circ \cdots \circ R_k^\mathcal{I} \subseteq R^\mathcal{I} \end{align*} Knowledge about specific objects can be expressed using concept and role assertions of the form $C(a)$ and $R(a,b)$. The axioms and assertions are contained in the TBox and ABox, respectively, which together form a knowledge base (KB). An $\mathcal{EL}^{++}$ knowledge base (or ontology) $\mathcal{O}=(\mathcal{T},\mathcal{A})$ consists of a set $\mathcal{T}$ of general concept inclusion axioms (TBox) and role inclusion axioms, and possibly a set $\mathcal{A}$ of assertional axioms (ABox). A concept name $C$ in an ontology $\mathcal{O}$ is \emph{unsatisfiable} iff, for each interpretation $\mathcal{I}$ of $\mathcal{O}$, $C^\mathcal{I}=\emptyset$. An ontology $\mathcal{O}$ is \emph{incoherent} iff there exists an unsatisfiable concept name $C$ in $\mathcal{O}$, i.e., $\mathcal{O} \models C \sqsubseteq \bot$ \cite{flouris-et-al:2006}. To simplify the translation of a probabilistic $\mathcal{EL}^{++}$ KB into FOL, we first normalize the KB in a way that preserves satisfiability \cite{baader-et-al:2005,krotzsch:2011}.
An $\mathcal{EL}^{++}$ KB is in \textit{normal} form if all of its axioms are of the following forms: \begin{alignat*}{4} &~ C(a) \quad && R(a,b) \quad && A \sqsubseteq \bot \quad && \top \sqsubseteq C\\ &~ A \sqsubseteq \{c\} \quad && \{a\} \sqsubseteq \{c\} \quad && A \sqsubseteq C \quad && A \sqcap B \sqsubseteq C \\ &~ \exists R.A \sqsubseteq C \quad && A \sqsubseteq \exists R.B \quad && A \sqsubseteq \exists F.r \quad && \exists F.r \sqsubseteq A \\ &~ R_1 \sqsubseteq R_2 \quad && R_1 \circ R_2 \sqsubseteq R \end{alignat*} where $A,B,C \in \mathrm{N_C}, R,R_1,R_2 \in \mathrm{N_R}, F \in \mathrm{N_F}$, $r$ is a datatype restriction, and $a,b,c \in \mathrm{N_I}$. It is possible to provide a probabilistic extension of $\mathcal{EL}^{++}$ using MLNs. An $\mathcal{EL}^{++}$ KB can be seen as a set of hard constraints on the set of possible interpretations: if an interpretation violates even one axiom or assertion, it has zero probability. The basic idea in MLNs is to soften these constraints, i.e., when an interpretation violates one axiom or assertion in the KB it is less probable, but not impossible. The fewer axioms an interpretation violates, the more probable it becomes. Each axiom and assertion has an associated weight that reflects how strong a constraint is: the higher the weight, the greater the difference in log probability between an interpretation that satisfies the axiom and one that does not, other things being equal \cite{richardson-et-al:2006}. \subsection{Markov Logic Networks} Markov Logic Networks (MLNs) combine Markov networks and first-order logic (FOL) by attaching weights to first-order formulas and viewing these as templates for features of Markov networks \cite{richardson-et-al:2006}. An MLN $L$ is a set of pairs $(F_i,w_i)$, where $F_i$ is a formula in FOL and $w_i$ is a real number representing its weight.
Together with a finite set of constants $C$, it defines a Markov network $M_{L,C}$, where $M_{L,C}$ contains one node for each possible grounding of each predicate appearing in $L$. The value of the node is $1$ if the ground predicate is true, and $0$ otherwise. The probability distribution over possible worlds $x$ specified by the ground Markov network $M_{L,C}$ is given by: $$P(X=x) = \dfrac{1}{Z} \mathrm{exp}\big(\sum\limits_{i=1}^F w_i n_i(x)\big)$$ where $F$ is the number of formulas in the MLN and $n_i(x)$ is the number of true groundings of $F_i$ in $x$. The groundings of a formula are formed simply by replacing its variables with constants in all possible ways. The \textit{Herbrand Universe} $H$ for an MLN $L$ is the set of all terms that can be constructed from the constants in $L$. The \textit{Herbrand Base} $\mathrm{HB}$ is the set of all ground predicates (atoms) that can be constructed using the predicates in $L$ and the terms in $H$. In this paper, we focus on MLNs whose formulas are function-free clauses. In order to compute a maximum a-posteriori (MAP) state of an MLN, we formulate the problem as an integer linear program (ILP) using the cutting plane inference algorithm. \subsection{Cutting Plane Inference (CPI)} A MAP query corresponds to an optimization problem with linear constraints and a linear objective function. Hence, it can be formulated and solved as an instance of an integer linear program (ILP). \cite{riedel:2012,noessner-et-al:2013}~introduced cutting plane inference as a meta algorithm that transforms an MLN into an ILP. The basic idea of CPI is to add to the ILP only those constraints whose ground clauses are violated by the current intermediate solution. This process is repeated until no (additional) violated ground clauses exist. An ILP solver resolves the conflicts by computing an optimal truth assignment for the MLN. Hence, the solution of the final ILP corresponds to the MAP state.
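As an illustration, $P(X=x)$ can be computed by brute-force enumeration of possible worlds. The representation of worlds as sets of true ground atoms, and of formulas as (weight, counting-function) pairs, is an assumption of this sketch; it is only feasible for tiny ground networks.

```python
import math

def world_weight(world, formulas):
    """Unnormalised weight exp(sum_i w_i * n_i(world)) of one possible world.

    `formulas` is a list of (weight, count_fn) pairs, where count_fn(world)
    returns the number of true groundings of that formula in the world.
    """
    return math.exp(sum(w * n(world) for w, n in formulas))

def probability(world, worlds, formulas):
    """P(X = world), with the partition function Z summed over all worlds."""
    z = sum(world_weight(x, formulas) for x in worlds)
    return world_weight(world, formulas) / z

# Toy MLN: a single ground atom p and one formula "p" with weight 1.0,
# so P(p is true) = e^1 / (e^0 + e^1).
worlds = [frozenset(), frozenset({"p"})]
formulas = [(1.0, lambda w: 1 if "p" in w else 0)]
```

This mirrors the formula above directly; real MLN engines avoid the exponential enumeration via grounding and (approximate) inference.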
It is necessary to execute several iterations, as the intermediate solution changes after each iteration and more violated clauses might be detected. At the beginning of each CPI iteration, it is necessary to determine the violated ground clauses $\mathcal{G}$ that are specified by the MLN and are in conflict with the intermediate solution. A binary ILP variable $x_{\ell} \in \{0,1\}$ is assigned to each ground predicate occurring in a violated clause $g \in \mathcal{G}$. The value of the variable $x_{\ell}$ is $1$ if the respective literal $\ell$ is true and $0$ if it is false. These variables are used to generate ILP constraints that are added to the ILP for each violated ground clause. For each clause $g \in \mathcal{G}$, we define ${L}^{+}(g)$ as the set of ground atoms that occur unnegated in $g$ and ${L}^{-}(g)$ as the set of ground atoms that occur negated in $g$. The transformation scheme depends on the weight $w_g \in \mathbb{R}$ of the violated clause $g$. It is also necessary to create a binary variable $z_g$ for every $g$ with $w_g \neq \infty$ that is used in the objective of the ILP. For every ground clause $g$ with $w_g > 0$, the following constraint has to be added to the ILP. \begin{equation*} \displaystyle\sum_{\ell \in {L}^{+}(g)} x_{\ell} + \displaystyle\sum_{\ell \in {L}^{-}(g)} (1-x_{\ell}) \geq z_g \end{equation*} A ground atom $\ell$ that is set to false (true if it appears negated) by evidence will not be included in the ILP as it cannot fulfil the respective constraint. For every $g$ with weight $w_g < 0$, we add the following constraint to the ILP: \begin{equation*} \displaystyle\sum_{\ell \in {L}^{+}(g)} x_{\ell} + \displaystyle\sum_{\ell \in {L}^{-}(g)} (1-x_{\ell}) \leq (|{L}^{+}(g)|+|{L}^{-}(g)|) z_g \end{equation*} The variable $z_g$ expresses whether a ground formula $g$ is true in the optimal solution of the ILP.
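The clause-to-constraint translation above (together with the hard-clause case treated next, where $w_g = \infty$) can be sketched as follows. The tuple encoding of constraints as (coefficients, sense, right-hand side) is an illustrative assumption; after moving the constant $|L^-(g)|$ from the $(1-x_\ell)$ terms to the right-hand side, negated atoms get coefficient $-1$.

```python
def clause_constraint(pos, neg, weight, g):
    """Translate one violated ground clause g into an ILP constraint.

    pos / neg are the ground atoms occurring unnegated / negated in g.
    Returns (coeffs, sense, rhs) over binary variables keyed ("x", atom)
    and ("z", g), with the constant |neg| moved to the right-hand side.
    """
    coeffs = {("x", l): 1 for l in pos}
    coeffs.update({("x", l): -1 for l in neg})
    const = len(neg)  # constant contributed by the (1 - x_l) terms
    if weight == float("inf"):
        # Hard clause: z_g is fixed to 1, the clause must hold.
        return coeffs, ">=", 1 - const
    if weight > 0:
        coeffs[("z", g)] = -1           # ... >= z_g
        return coeffs, ">=", -const
    # weight < 0: ... <= (|pos| + |neg|) * z_g
    coeffs[("z", g)] = -(len(pos) + len(neg))
    return coeffs, "<=", -const
```

For the positive-weight clause $a \vee \neg b$ this yields $x_a - x_b - z_g \geq -1$, which is the first constraint above with the constant rearranged.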
However, for every $g$ with weight $w_g = \infty$ this variable can be replaced with 1 as the respective formula cannot be violated in any solution: \begin{equation*} \displaystyle\sum_{\ell \in L^{+}(g)} x_{\ell} + \displaystyle\sum_{\ell \in L^{-}(g)} (1-x_{\ell}) \geq 1 \end{equation*} Finally, the objective of the ILP sums up the weights of the (satisfied) ground formulas: \begin{equation*} \max \displaystyle\sum_{g \in \mathcal{G}} w_g z_g \end{equation*} The MAP state corresponds to the solution of the ILP in the last CPI iteration. It can be obtained from the solution directly, as the assignment of the variables $x_\ell$ maps to the optimal truth values for the ground predicates, i.e., $x_\ell = \texttt{true}$ if the corresponding ILP variable is $1$ and $x_\ell = \texttt{false}$ otherwise. The MAP state of an $\mathcal{EL}^{++}$-LL TBox can be computed by a reduction to CPI. \subsection{$\bm{\mathcal{EL}^{++}}$-LL} $\mathcal{EL}^{++}$-LL (Log-linear $\mathcal{EL}^{++}$) is a probabilistic extension of $\mathcal{EL}^{++}$ without nominals, instances and concrete domains \cite{niepert-et-al:2011}. Each $\mathcal{EL}^{++}$-LL TBox axiom is either deterministic (i.e., an axiom that is known to be true) or uncertain (i.e., an axiom that holds with some degree of confidence). Each uncertain axiom has an associated weight. Formally, an $\mathcal{EL}^{++}$-LL TBox is given by $\mathcal{T}=(\mathcal{T}^D, \mathcal{T}^U)$, where $\mathcal{T}^D$ is a set of deterministic axioms and $\mathcal{T}^U$ is a set of pairs $\langle S,w_S\rangle$ with $S$ an uncertain axiom and $w_S$ its real-valued weight. The semantics of an $\mathcal{EL}^{++}$-LL TBox is given by a joint probability distribution over coherent $\mathcal{EL}^{++}$ TBoxes.
Given TBoxes $\mathcal{T}=(\mathcal{T}^D, \mathcal{T}^U)$ and $\mathcal{T}'$ over the same vocabulary, the probability of $\mathcal{T}'$ is given by: \begin{align*} P(\mathcal{T}') &~ = \begin{cases} \dfrac{1}{Z} \mathrm{exp}\bigg(\sum\limits_{\{(S,w_S)\in \mathcal{T}^U:\mathcal{T}'\models S\}} w_S \bigg) \\ ~~~~~~~\text{if }~ \mathcal{T}' \models \mathcal{T}^D \wedge \mathcal{T}' \not\models \bot \\ 0 ~~~~~\text{otherwise} \end{cases} \end{align*} In order to generate the most probable, coherent and classified TBox using MLN, $\mathcal{EL}^{++}$ completion rules and $\mathcal{EL}^{++}$-LL TBox axioms are translated into FOL formulae. In the following, we show how to extend $\mathcal{EL}^{++}$-LL with nominals, instances, and concrete domains. \section{Extending $\bm{\mathcal{EL}^{++}}$-LL with Nominals, Instances and Concrete Domains} In \cite{niepert-et-al:2011}, the authors claim that their approach is extensible to the Horn fragments of DLs (see, e.g., \cite{krotzsch:2011}). To take advantage of this claim, we extend $\mathcal{EL}^{++}$-LL with probabilistic knowledge expressed through nominals, individuals, and concrete domains. The syntax of this extension (which we call $\mathcal{MEL}^{\mathrm{++}}$) is the same as that of $\mathcal{EL}^{++}$-LL: it is the syntax of $\mathcal{EL}^{++}$ with weights attached to each uncertain axiom and assertion. An $\mathcal{MEL}^{++}$ KB has two components: a deterministic knowledge base $\mathrm{KB}^D$ and an uncertain knowledge base $\mathrm{KB}^U$. In order to provide semantics, we assume that $\mathrm{KB}^D$ is coherent. The semantics of \emph{coherent} $\mathcal{MEL}^{\mathrm{++}}$ KBs is given by a probability distribution as defined below.
% \begin{definition} Given an $\mathcal{MEL}^{\mathrm{++}}$ knowledge base $\mathrm{KB}=(\mathrm{KB}^D, \mathrm{KB}^U)$ over a vocabulary of $\mathrm{N_C}$, $\mathrm{N_R}$, $\mathrm{N_F}$, and $\mathrm{N_I}$, the probability of a \emph{coherent} knowledge base $\mathrm{KB}_i$ over the same vocabulary is given by: \begin{align*} P(\mathrm{KB}_i) &~= \begin{cases} \dfrac{1}{Z}\mathrm{exp}\bigg(\sum\limits_{\{(o_j,w_j)\in \mathrm{KB}^U:\mathrm{KB}_i\models o_j\}} w_j\bigg) \\ ~~~~~~~~~\text{if}~~ \mathrm{KB}_i \models \mathrm{KB}^D \wedge \mathrm{KB}_i \not\models \bot \\ 0 ~~~~~~\text{otherwise} \end{cases} \end{align*} \end{definition} % \begin{example} Consider an $\mathcal{MEL}^{++}$ $\mathrm{KB} = (\mathrm{KB}^D, \mathrm{KB}^U)$: \begin{align*} \mathrm{KB}^{D} =&~ \{~ Toddler \sqcap Adult \sqsubseteq \bot\}, \\ \mathrm{KB}^{U} = &~\{\langle Toddler \sqsubseteq ~\exists age.(\leq, 3), ~0.8 \rangle, \\ &~~~~\langle \exists age.(\leq, 3) \sqsubseteq Person, ~0.7 \rangle, \\ &~~~~\langle Toddler \sqsubseteq Adult, ~0.1 \rangle, ~\langle age(john, 2), ~0.7\rangle \} \end{align*} The probabilities of the axioms and assertions can be computed as follows: \begin{align*} P\big(\{Toddler \sqsubseteq ~\exists age.(\leq, 3)\}\big) =&~ \dfrac{1}{Z}\mathrm{exp}(0.8) \\ P\big(\{Toddler \sqsubseteq Adult\}\big)= &~ 0 \\ P\bigg(\{Toddler \sqsubseteq ~\exists age.(\leq, 3), age(john, 2), &~ \\ ~~~~~~\exists age.(\leq, 3) \sqsubseteq Person\}\bigg) =&~ \dfrac{1}{Z}\mathrm{exp}(2.2) \\ P\big(\{\}\big) =&~ \dfrac{1}{Z}\mathrm{exp}(0) \\ P\big(\{Toddler \sqcap Adult \sqsubseteq \bot\}\big)= &~ 1 \\ Z =~ \mathrm{exp}(0.8) + \mathrm{exp}(2.2) + \mathrm{exp}(0.7) + \mathrm{exp}(0) \end{align*} \end{example} In order to derive the most probable, classified and coherent $\mathcal{EL}^{++}$ ontology from an $\mathcal{MEL}^{++}$ KB, we transform the KB, TBox completion rules \cite{baader-et-al:2005}, concrete domains, and ABox completion
rules \cite{krotzsch:2011} into FOL formulae. \subsection{Nominals} (Un)certain axioms that contain nominals can be translated into FOL in MLN by using Definition \ref{def:mapping}. Inference in MLN can be done by converting the completion rule CR6 \cite{baader-et-al:2005} into FOL and enforcing that each nominal $a_i \in \mathrm{N_I}$ is distinct. Alternatively, the \textit{unique name assumption} for individual names can be enforced by using the axiom $\{a\} \sqcap \{b\} \sqsubseteq \bot$ for all relevant individual names $a$ and $b$. In addition, the transformation of TBox completion rules into FOL in MLN is given in Table \ref{tab:tboxCompletionRules}. By using nominals, instance knowledge can be added to an ABox. \subsection{ABox} Since the description logic $\mathcal{EL}^{++}$ is equipped with nominals, ABox knowledge can be converted into TBox axioms. Thus, with nominals, the ABox becomes syntactic sugar: $$C(a) \Leftrightarrow \{a\} \sqsubseteq C,~~R(a,b) \Leftrightarrow \{a\} \sqsubseteq \exists R.\{b\}$$ Instance checking in turn is directly reducible to subsumption checking in the presence of nominals. There are two ways to represent uncertain ABox assertions, i.e., $C(a)$ and $R(a,b)$, in MLN: \begin{itemize} \item[i.] transform ABox assertions into TBox axioms using nominals as follows: \begin{align*} \langle C(a), w_1\rangle \Leftrightarrow &~ \langle \{a\} \sqsubseteq C, w_1\rangle \\ \langle R(a,b), w_2\rangle \Leftrightarrow&~ \langle \{a\} \sqsubseteq \exists R.\{b\}, w_2\rangle \end{align*} \item[ii.] introduce two new predicates, one for each assertion type: \begin{align*} \langle C(a), w_1\rangle &~ \mapsto inst(a,C) ~~~w_1 \\ \langle R(a,b), w_2\rangle &~ \mapsto rinst(a,R,b) ~~~w_2 \end{align*} This approach requires transforming ABox completion rules into FOL, so as to generate classified ontologies. \end{itemize} In this paper, we consider the second approach (ii)\footnote{We leave a comparison of the two approaches as future work.}.
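A minimal sketch of the second approach, mapping weighted ABox assertions to weighted ground atoms; the tuple encoding of assertions is an assumption of this sketch.

```python
def ground_assertion(assertion, weight):
    """Map a weighted ABox assertion to a weighted ground MLN atom.

    Concept assertions C(a), given as ("C", "a"), become inst(a, C);
    role assertions R(a, b), given as ("R", "a", "b"), become rinst(a, R, b).
    """
    if len(assertion) == 2:
        concept, ind = assertion
        return (f"inst({ind},{concept})", weight)
    role, a, b = assertion
    return (f"rinst({a},{role},{b})", weight)
```

For instance, the weighted assertion $\langle age(john, 2), 0.7\rangle$ from the example above becomes the weighted ground atom `rinst(john,age,2)` with weight 0.7.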
Next, we show how concrete domains are translated into the MLN framework. \begin{table*}[t!] \centering \renewcommand{\arraystretch}{1.4} \begin{tabular}{|ll|} \hline $F_1$ -- $F_9$ & Refer to Table 2 in \cite{niepert-et-al:2011}. \\ $F_{10}$& $\forall c,d,a,r: \mathrm{subNom}(c,a) \wedge \mathrm{subNom}(d,a) \wedge \mathrm{rsup}(c,r,d) \Rightarrow \mathrm{sub}(c,d)$ \\ $F_{11}$& $\forall c,d,a,r,b: \mathrm{subNom}(c,a) \wedge \mathrm{subNom}(d,a) \wedge \mathrm{rsupNom}(b,r,d) \Rightarrow \mathrm{sub}(c,d)$ \\ $F_{12}$& $\forall c,d,f,o,v: \mathrm{sub}(c,d) \wedge \mathrm{rsupEx}(d,f,o,v) \Rightarrow \mathrm{rsupEx}(c,f,o,v)$ \\ $F_{13}$& $\forall c,d,f,o_1,v_1,o_2,v_2: \mathrm{rsupEx}(c,f,o_1,v_1) \wedge \mathrm{rsubEx}(f,o_2,v_2,d) \wedge \mathrm{eval}(o_1,v_1,o_2,v_2) \Rightarrow \mathrm{sub}(c,d)$ \\ \hline \end{tabular} \caption{TBox completion rules.} \label{tab:tboxCompletionRules} \end{table*} \begin{table*}[t!] \centering \renewcommand{\arraystretch}{1.4} \begin{tabular}{|ll|} \hline $F_{14}$ & $\forall x,A,B: \mathrm{inst}(x,A) \wedge \mathrm{sub}(A,B) \Rightarrow \mathrm{inst}(x,B)$ \\ $F_{15}$ & $\forall x,A_1,A_2,B: \mathrm{inst}(x,A_1) \wedge \mathrm{inst}(x,A_2) \wedge \mathrm{int}(A_1,A_2,B) \Rightarrow \mathrm{inst}(x,B)$ \\ $F_{16}$ & $\forall x,y,R,A,B: \mathrm{rinst}(x,R,y) \wedge \mathrm{inst}(y,A) \wedge \mathrm{rsub}(A,R,B) \Rightarrow \mathrm{inst}(x,B)$ \\ $F_{17}$ & $\forall x,y,R,S: \mathrm{rinst}(x,R,y) \wedge \mathrm{psub}(R,S) \Rightarrow \mathrm{rinst}(x,S,y)$ \\ $F_{18}$ & $\forall x,y,z,R_1,R_2,R_3: \mathrm{rinst}(x,R_1,y) \wedge \mathrm{rinst}(y,R_2,z) \wedge \mathrm{pcom}(R_1,R_2,R_3) \Rightarrow \mathrm{rinst}(x,R_3,z)$ \\ $F_{19}$ & $\forall x,a,B: \mathrm{ninst}(x,a) \wedge \mathrm{inst}(x,B) \Rightarrow \mathrm{inst}(a,B)$ \\ $F_{20}$ & $\forall x,a,B: \mathrm{ninst}(x,a) \wedge \mathrm{inst}(a,B) \Rightarrow \mathrm{inst}(x,B)$ \\ $F_{21}$ & $\forall x,a,z,R: \mathrm{ninst}(x,a) \wedge \mathrm{rinst}(z,R,x) \Rightarrow
\mathrm{rinst}(z,R,a)$ \\ $F_{22}$ & $\forall x,A,B: \mathrm{sub}(\top,A) \wedge \mathrm{inst}(x,B) \Rightarrow \mathrm{inst}(x,A)$ \\ $F_{23}$ & $\forall x,x',R,A,B: \mathrm{inst}(x,A) \wedge \mathrm{rsup}(A,R,B) \Rightarrow \mathrm{rinst}(x,R,x')$ \\ $F_{24}$ & $\forall x,x',R,A,B: \mathrm{inst}(x,A) \wedge \mathrm{rsup}(A,R,B) \Rightarrow \mathrm{inst}(x',B)$ \\ $F_{25}$& $\forall a,A,f,op,v,v': \mathrm{rsubEx}(f,op,v,A) \wedge \mathrm{rinst}(a,f,v') \wedge \mathrm{eval}(op,v,=,v') \Rightarrow \mathrm{inst}(a,A)$ \\ $F_{26}$& $\forall a,A, f,v: \mathrm{inst}(a,A) \wedge \mathrm{rsupEx}(A,f,=,v) \Rightarrow \mathrm{rinst}(a,f,v)$ \\ $F_{27}$& $\forall a, A_1, A_2, f, op, v: \mathrm{inst}(a,A_1) \wedge \mathrm{inst}(a,A_2) \wedge \mathrm{intEx}(A_1,A_2,f,op,v) \Rightarrow \mathrm{rinst}(a,f,v)$ \\ \hline \end{tabular} \caption{ABox completion rules.} \label{tab:aboxCompletionRules} \end{table*} \subsection{Concrete Domains} Reasoning over uncertain concrete domains can be done by transforming the datatype predicates in the axioms and assertions into mixed integer programs as shown in \cite{straccia:2012}. However, in this work, we introduce an efficient approach that transforms the predicates into a test function that evaluates to \textit{true} or \textit{false} based on the groundings generated by an extension of the CPI algorithm.
Inference involving axioms that contain concrete domains can be done according to the deduction rules given below: \begin{align*} &~\frac{A \sqsubseteq B ~~~~ B \sqsubseteq \exists F.(o,v)}{A \sqsubseteq \exists F.(o,v)} \\ &~ \frac{A \sqsubseteq \exists F.(o_1,v_1) ~~~\exists F.(o_2,v_2) \sqsubseteq B}{A \sqsubseteq B} ~~~ eval(o_1,v_1,o_2,v_2)\\ &~\frac{\exists F.(o,v_1) \sqsubseteq A ~~~F(a,v_2)}{A(a)} ~~~ eval(o,v_1,=,v_2) \\ &~\frac{A(a) ~~~A \sqsubseteq \exists F.(=,v)}{F(a,v)} \end{align*} where $eval(\ldots)$ checks whether all possible values of the first \textit{operator-value} pair $(o_1,v_1)$ are covered by the possible values of the second \textit{operator-value} pair $(o_2,v_2)$; if so, it evaluates to \textit{true}, otherwise to \textit{false}. The function $eval(\ldots)$ is defined based on a datatype $\mathcal{D}$, i.e., $\mathbb{N}$, $\mathbb{Z}$ or $\mathbb{R}$, and algebraic operators. Some of the algebraic comparisons, computed via $eval(\ldots)$, that are useful for inference are listed below: \begin{align*} eval(\leq,v_1,<,v_2) &~ := v_1 < v_2 \\ eval(\leq,v_1,\leq,v_2) &~ := v_1 \leq v_2 \\ eval(=,v_1,<,v_2) &~ := v_1 < v_2 \\ eval(=,v_1,\leq,v_2) &~ := v_1 \leq v_2 \\ eval(=,v_1,=,v_2) &~ := v_1 = v_2 \\ eval(=,v_1,\geq,v_2) &~ := v_1 \geq v_2 \\ eval(=,v_1,>,v_2) &~ := v_1 > v_2 \\ eval(\geq,v_1,\geq,v_2) &~ := v_1 \geq v_2 \\ eval(\geq,v_1,>,v_2) &~ := v_1 > v_2 \\ eval(>,v_1,>,v_2) &~ := v_1 \geq v_2 \end{align*} This function is computed on-demand after each CPI iteration, as discussed in the next section. The translation of the deduction rules into FOL is given in Table \ref{tab:tboxCompletionRules} and Table~\ref{tab:aboxCompletionRules}. \begin{example}\label{ex:InferenceDatatype} Consider an $\mathcal{MEL}^{++}$ $\mathrm{KB}=\{\langle 2YearOld \sqsubseteq \exists age.(=,2), 0.7\rangle, \langle \exists age.(\leq,3) \sqsubseteq Toddler, 0.8\rangle\}$ that contains axioms expressed using concrete domains.
From the KB, the axiom $2YearOld \sqsubseteq Toddler$ can be inferred since $eval(o_1,v_1,o_2,v_2)$ is \textit{true}, i.e., $eval(=, 2, \leq,3) := 2\leq 3$. \end{example} So far we have discussed how axioms and assertions can be translated into FOL. Next, we show how the most probable KB is derived using MAP inference. \section{Computing a Most Probable KB} To derive the most probable classified and coherent ontology from a weighted $\mathcal{EL}^{++}$ KB, we proceed by transforming TBox and ABox completion rules, schema axioms, and assertions into function-free FOL formulae. The formulae corresponding to the translation of completion rules into FOL are shown in Table \ref{tab:tboxCompletionRules} and Table \ref{tab:aboxCompletionRules}. The formulae from $F_1$ through $F_9$ are taken from \cite{niepert-et-al:2011}. Additionally, a \textit{bijective} mapping function is provided in Definition \ref{def:mapping} to transform axioms and assertions into formulae. Of particular interest to us is a novel way to deal with concrete domains under MLN by modifying the Cutting Plane Inference (CPI) algorithm. In $\mathcal{EL}^{++}$, it is possible to build incoherent TBox axioms due to the presence of the bottom concept $\bot$; for instance, the axiom $\{a\} \sqsubseteq \bot$ cannot be satisfied by any interpretation. To filter out such incoherencies in models generated by MLN, we include the formula $\forall c: \neg sub(c,\bot)$ (formula $F_9$ in Table \ref{tab:tboxCompletionRules}) in the translation of the completion rules into FOL. This technique has already been used in \cite{niepert-et-al:2011}.
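For concreteness, the coverage test $eval(o_1,v_1,o_2,v_2)$ can be implemented directly as a table of the operator pairs listed in the previous subsection. The string encoding of operators is an illustrative assumption, and operator pairs not listed there are conservatively treated as non-coverage.

```python
# eval(o1, v1, o2, v2) is true iff {x | x o1 v1} is contained in {x | x o2 v2}.
# The cases below implement the comparison table from the text.
COVERAGE = {
    ("<=", "<"):  lambda v1, v2: v1 < v2,
    ("<=", "<="): lambda v1, v2: v1 <= v2,
    ("=",  "<"):  lambda v1, v2: v1 < v2,
    ("=",  "<="): lambda v1, v2: v1 <= v2,
    ("=",  "="):  lambda v1, v2: v1 == v2,
    ("=",  ">="): lambda v1, v2: v1 >= v2,
    ("=",  ">"):  lambda v1, v2: v1 > v2,
    (">=", ">="): lambda v1, v2: v1 >= v2,
    (">=", ">"):  lambda v1, v2: v1 > v2,
    (">",  ">"):  lambda v1, v2: v1 >= v2,
}

def eval_coverage(o1, v1, o2, v2):
    """Return True iff the pair (o1, v1) is covered by the pair (o2, v2)."""
    test = COVERAGE.get((o1, o2))
    return test(v1, v2) if test else False
```

Here `eval_coverage("=", 2, "<=", 3)` holds, recovering the inference of $2YearOld \sqsubseteq Toddler$ in Example \ref{ex:InferenceDatatype}.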
\begin{definition}[Mapping an $\mathcal{MEL}^{++}$ KB into ground FOL predicates]\label{def:mapping} The function $\varphi$ translates a normalized $\mathcal{MEL}^{++}$ knowledge base KB into FOL formulae in MLN as follows: \allowdisplaybreaks \begin{align*} C(a) \mapsto &~ \mathrm{inst}(a,C) \\ R(a,b) \mapsto &~ \mathrm{rinst}(a,R,b) \\ A \sqsubseteq \bot \mapsto &~ \mathrm{sub}(A,\bot) \\ \top \sqsubseteq C \mapsto &~ \mathrm{sub}(\top,C) \\ A \sqsubseteq \{c\} \mapsto &~ \mathrm{subNom}(A,\{c\}) \\ \{a\} \sqsubseteq \{c\} \mapsto &~ \mathrm{sub}(\{a\},\{c\}) \\ A \sqsubseteq C \mapsto &~ \mathrm{sub}(A,C) \\ A \sqcap B \sqsubseteq C \mapsto &~ \mathrm{int}(A,B,C) \\ \exists R.A \sqsubseteq C \mapsto &~ \mathrm{rsub}(A,R,C) \\ A\sqsubseteq \exists R.B \mapsto &~ \mathrm{rsup}(A,R,B) \\ A\sqsubseteq \exists F.(o,v) \mapsto &~ \mathrm{rsupEx}(A,F,o,v) \\ \exists F.(o,v) \sqsubseteq A \mapsto &~ \mathrm{rsubEx}(F,o,v,A) \\ R_1 \sqsubseteq R_2 \mapsto &~ \mathrm{psub}(R_1,R_2)\\ R_1 \circ R_2 \sqsubseteq R \mapsto &~ \mathrm{pcom}(R_1,R_2,R) \\ \mathrm{int}(\{a_i\}, \{a_j\}, \bot) &~~~\text{where }a_i, a_j \in \mathrm{N_I} \text{ and } i \not= j \end{align*} where $a,b,c \in \mathrm{N_I}$, $A,B,C \in \mathrm{N_C}$, $R, R_1, R_2 \in\mathrm{N_R}$, $F \in \mathrm{N_F}$, $o \in \{<,\leq,>,\geq,=\}$, and $v \in \mathbb{R}$ (the set of real numbers).
\end{definition} \begin{lemma} The translation of an $\mathcal{EL}^{++}$ KB into FOL and vice versa can be done in polynomial time in the size of the knowledge base \cite{lukasiewicz-et-al:2012}. \end{lemma} From the above lemma, we see that the translation of $\mathcal{MEL}^{++}$ KB completion rules, axioms, and assertions into FOL in MLN does not affect the complexity of inference in MLN. Besides, typed variables and constants greatly reduce the size of ground Markov networks.
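For illustration, the mapping $\varphi$ of Definition \ref{def:mapping} can be sketched as a renderer from tagged normalized axioms to ground atoms; the tag-and-arguments encoding of axioms is an assumption of this sketch.

```python
def phi(tag, *args):
    """Render a normalized axiom as a ground FOL atom, following Definition 1.

    `tag` names the target predicate (sub, int, rsub, rsup, rsupEx, rsubEx,
    psub, pcom, inst, rinst); `args` are concept/role/individual names and,
    for the *Ex predicates, the operator and value of a datatype restriction.
    """
    return f"{tag}({','.join(str(a) for a in args)})"

# A ⊑ ∃age.(≤, 3)  ↦  rsupEx(A, age, <=, 3)
# age(john, 2)     ↦  rinst(john, age, 2)
```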
We introduce types for all of the predicates shown in Table \ref{tab:tboxCompletionRules} and Table \ref{tab:aboxCompletionRules}. \begin{theorem} Given an $\mathcal{MEL}^{++}$ ontology $\mathrm{KB}=(\mathcal{T},\mathcal{A})$ and $\mathrm{KB}' \subseteq \mathrm{KB}$, a Herbrand interpretation $\mathcal{H}$ is a model of $\mathrm{KB}'$, i.e., $\mathcal{H} \models \mathrm{KB}'$, if and only if there exists a mapping function $\varphi$ such that $\varphi(\mathcal{H}) \models \mathrm{KB}'$. \end{theorem} So far we have introduced a mapping function $\varphi$ for KB axioms and assertions, and expressed the completion rules as formulae ($F_1$--$F_{27}$). The next step is to use MAP inference in MLN to obtain the most probable ontology of a given $\mathcal{MEL}^{++}$ KB. \subsection{Maximum A-Posteriori Inference (MAP)} In order to deal with $\mathcal{MEL}^{++}$ datatypes, we introduced a predicate called $eval(\ldots)$ in the translation of $\mathcal{EL}^{++}$ completion rules into FOL, depicted in Table \ref{tab:tboxCompletionRules} and Table \ref{tab:aboxCompletionRules}. The truth value of $eval(\ldots)$ is computed by evaluating the logical expressions corresponding to datatypes in an $\mathcal{MEL}^{++}$ KB. For instance, consider the $eval(\ldots)$ predicate in Example \ref{ex:InferenceDatatype}. In the following, we show how the expression $(=, 2) \subseteq (\leq, 3)$, i.e., operator-value pair coverage, is evaluated by extending the CPI algorithm. Thus, we propose an extension of CPI that incorporates algebraic expressions. In particular, our extension addresses a limitation of MLN with respect to concrete domains. In general, all (numerical) values are represented as constants in MLN; the only semantics attached to a constant is the type to which it belongs. This enables more efficient grounding and leads to smaller MLNs, but it hardly covers the characteristics of numerical values. Therefore, we exploit the iterative character of CPI in order to evaluate numerical (in)equalities. The extension can be considered an additional feature that is used only on-demand. It is formula-specific, as it affects the ground values and the truth values of specific constraints. Hence, it can be implemented as an extension of the detection of violated constraints. At the beginning of each CPI iteration, the algorithm identifies for each formula all violated groundings considering the current intermediate solution. Each of the violated ground clauses has to be translated and added to the ILP. Therefore, an ILP variable is generated for each ground predicate, as well as additional ILP constraints. Datatype ground predicates $eval(\ldots)$ appear during this process like any other predicates. However, we exploit their semantics to decide whether $eval(\ldots)$ predicates evaluate to \textit{true} or \textit{false}. Depending on the result of evaluating the boolean expression attached to the respective predicate, we decide whether it is necessary to add the violated ground clause to the ILP.
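The on-demand handling of $eval(\ldots)$ literals during violated-clause detection can be sketched as follows; the clause representation and the helper name are illustrative assumptions and do not reflect ROCKIT's actual API.

```python
def filter_violated_clause(literals, eval_coverage):
    """Decide how to handle one violated ground clause containing eval atoms.

    `literals` is a list of (atom, negated) pairs; eval atoms are tuples
    ("eval", o1, v1, o2, v2). Returns None if the clause is in fact satisfied
    by a datatype literal (so it need not be added to the ILP); otherwise
    returns the remaining literals to encode, with all eval atoms stripped.
    """
    remaining = []
    for atom, negated in literals:
        if isinstance(atom, tuple) and atom[0] == "eval":
            truth = eval_coverage(*atom[1:])
            if truth != negated:
                return None  # datatype literal is true: clause is satisfied
            # datatype literal is false by evidence: drop it, it cannot
            # contribute to fulfilling the clause
        else:
            remaining.append((atom, negated))
    return remaining
```

No ILP variables are ever created for the `eval` atoms: they are either resolved to true (the clause is discarded) or removed from the clause before it is encoded.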
For instance, if the datatype predicate appears unnegated (negated) in the formula and evaluates to \textit{true} (\textit{false}), we do not add the ground clause to the ILP, as it is not violated in the current iteration. Otherwise, we need to add the clause to the ILP but leave out the datatype ground predicates, as they cannot fulfil the violated clause, i.e., the respective literal is false due to evidence. Hence, we do not introduce ILP variables for datatype predicates, as they are never added to the ILP. Instead, we compute the truth value of the datatype predicates on-the-fly and only on-demand. The proposed approach thus improves the efficiency of processing numerical predicates in a Markov logic solver. We implemented this algorithm as an extension of the MLN inference engine ROCKIT\footnote{\url{https://code.google.com/p/rockit/}} \cite{noessner-et-al:2013}. We leave testing this implementation on different ontologies for future work. \begin{theorem}\label{thm:soundness} Given the following: \begin{itemize} \item an $\mathcal{MEL}^{++}$ knowledge base $\mathrm{KB} = (\mathrm{KB}^D, \mathrm{KB}^U)$ formed from a vocabulary containing a finite set of individuals $\mathrm{N_I}$, concepts $\mathrm{N_C}$, features $\mathrm{N_F}$, and roles $\mathrm{N_R}$, \item $\mathrm{HB}$ as a Herbrand base of the formulae $F$ in Table \ref{tab:tboxCompletionRules} and Table \ref{tab:aboxCompletionRules} over the same vocabulary, \item $G_1$ as a set of ground formulae constructed from $\mathrm{KB}^D$, and \item $G_2$ as a set of ground formulae constructed from $\mathrm{KB}^U$, \end{itemize} the most probable coherent and classified ontology is obtained with: $$\varphi^{-1}(\hat{I}) = \underset{\mathrm{HB} \supseteq I \models G_1 \cup F}{\arg\max}\bigg(\sum_{(o_j,w_j) \in G_2:I \models o_j} w_j \bigg) $$ \end{theorem} From Theorem \ref{thm:soundness} and the results in \cite{roth:1996}, finding the most probable, classified and
coherent $\mathcal{MEL}^{++}$ KB is in NP. The \textit{hardness} of this complexity bound can be obtained by reducing the partial weighted MaxSAT problem to an $\mathcal{MEL}^{++}$ MAP query. Consequently, the MAP problem for $\mathcal{MEL}^{++}$ is NP-hard. \section{Conclusion} In this work, we have extended $\mathcal{EL}^{++}$-LL to $\mathcal{MEL}^{++}$, adding nominals, concrete domains and instances. In particular, we proposed an extension of the CPI algorithm in order to deal with reasoning under uncertain concrete domains. We have implemented the proposed approach and plan to carry out experiments in the future. We will also investigate extending the proposed approach to other datatypes, such as dates and times. \bibliographystyle{aaai}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,617
Question 100: Define ionisation enthalpy. Discuss the factors affecting ionisation enthalpy of the elements and its trends in the periodic table.

Ionisation enthalpy is defined as the minimum amount of energy required to remove the most loosely bound electron from an isolated gaseous atom:
$M(g) \to M^{+}(g) + e^{-}$, with $I_1$ denoting the first ionisation enthalpy.
Similarly, second and third electrons can be removed by supplying the successive ionisation enthalpies.

Factors on which ionisation enthalpy depends:

(i) Size of the atom: the larger the atomic size, the smaller the ionisation enthalpy. In a larger atom the outer electrons are far from the nucleus, so the force of attraction binding them is weaker and they can be removed easily. Ionisation enthalpy $\propto \frac{1}{\text{atomic size}}$.

(ii) Screening effect: the higher the screening effect, the lower the ionisation enthalpy, since screening reduces the force of attraction towards the nucleus and the outer electrons can be removed easily. Ionisation enthalpy $\propto \frac{1}{\text{screening effect}}$.

(iii) Nuclear charge: among atoms having the same number of energy shells, ionisation enthalpy increases with nuclear charge because the force of attraction towards the nucleus increases. Ionisation enthalpy $\propto$ nuclear charge.

(iv) Half-filled and fully filled orbitals: atoms with half-filled or fully filled orbitals are comparatively more stable, so more energy is required to remove an electron from them; their ionisation enthalpy is higher than otherwise expected. Ionisation enthalpy $\propto$ stability of the electronic configuration.

(v) Shape of orbital: an s-orbital lies closer to the nucleus than a p-orbital of the same shell, so it is easier to remove an electron from a p-orbital than from an s-orbital. Within a given shell, ionisation enthalpy follows the order s > p > d > f.

Variation in the periodic table: ionisation enthalpy generally decreases down a group, due to the increase in atomic size, and increases across a period from left to right, due to the decrease in atomic size.
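The period-2 trend and its two well-known exceptions can be illustrated with tabulated data (a minimal sketch; the numbers below are approximate literature values in kJ/mol, quoted for illustration only):

```python
# Approximate first ionisation enthalpies (kJ/mol) for period-2 elements.
# The overall left-to-right increase follows the decrease in atomic size;
# the dips at B and O reflect the extra stability of the fully filled
# 2s^2 (Be) and half-filled 2p^3 (N) configurations discussed above.
ie1 = {"Li": 520, "Be": 899, "B": 801, "C": 1086,
       "N": 1402, "O": 1314, "F": 1681, "Ne": 2081}

assert ie1["Be"] > ie1["B"]   # fully filled 2s^2 raises Be above B
assert ie1["N"] > ie1["O"]    # half-filled 2p^3 raises N above O
assert ie1["Li"] < ie1["Ne"]  # overall increase across the period
```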
This session must be completed by all new members. An introductory session which will be led by one of our trained Coxes with the assistance of a competent rower. This session covers PARQ Completion; Safety Instructions and Equipment; Basic Rowing Techniques; and a Maiden Voyage to put the newly learnt techniques into practice.

Once you have completed the Sweep Stroke and Maiden Voyage session and been confirmed as a Novice member, you can book onto these more involved rows, which will be led by a fully trained cox and two competent crew members. We will keep working with you on the basic technique and you will get to explore the Ouse Valley a bit more. The aim is to get you onto the competent, open rows as soon as possible. Once certified as a competent rower by your supervising cox, you can choose to progress to either an Associate membership or a Full membership.

These rows are designed for people who have been certified as a competent rower by a supervising cox. During these rows you will get to explore the beautiful Ouse estuary and also head out to sea. All rowers must have completed a PARQ form and be a Full or Associate member.

These rows are designed for juniors aged 10-18 years. All junior rowers must have completed a PARQ form and have approved parent or guardian consent.

These rows are designed as longer sessions to explore further afield and can last around 4-5 hours. Adventures may take place up to Lewes and back, or along the coast to Cuckmere or Brighton, to see our beautiful Sussex countryside and coastline. These events require stamina and an adventurous spirit!

An early morning row to watch the sunrise from a totally new perspective.

From 5th December 2018 a two-hour men's row will take place every Wednesday evening. This row will be a mixture of a more relaxed, social row with some training and technique practice.
There are many opportunities available to join a race crew to learn advanced rowing techniques, increase your fitness and compete in various rowing events. Speak to one of your Coxes for more information.
\section{Introduction} The presence of an external environment typically affects quantum systems in weak interaction with it via loss of quantum correlations due to decohering and mixing-enhancing effects \cite{Alicki}-\cite{Benatti1}. Nevertheless, it has also been established that suitable environments are capable of creating and enhancing quantum entanglement among quantum open sub-systems immersed in them instead of destroying it \cite{Plenio}-\cite{Benatti2}. It is remarkable that entanglement can be generated solely by the mixing structure of the irreversible dynamics, without any environment induced, direct interaction between the quantum sub-systems. This mechanism of environment induced entanglement generation has been studied for systems made of few qubits or oscillator modes \cite{Benatti1},\cite{Benatti2}-\cite{Benatti4} and specific protocols have been proposed to prepare predefined entangled states via the action of suitably engineered environments \cite{Kraus}. Instead, in this paper, we study the possibility that entanglement be created through a purely noisy mechanism in many-body systems (for different approaches to entanglement in many-body systems, see \cite{Lewenstein}-\cite{Modi} and references therein). In a quantum system made of a large number $N$ of constituents, typical accessible observables are collective ones, {\it i.e.} those involving the degrees of freedom of all its elementary parts. For these ``macroscopic'' observables, one usually expects that quantum effects fade away as $N$ becomes large, even more so when the many-body system is in contact with an external environment. This is surely the case for the so-called ``mean field'' observables, {\it i.e.} averages of microscopic operators; these quantities scale as $1/N$ and as such behave as classical observables when the number of system constituents becomes large. 
Nevertheless, other collective observables exist that scale as $1/\sqrt{N}$ and that might retain some quantum properties as $N$ increases \cite{Goderis1}-\cite{Matsui}. These observables have been called ``fluctuation operators'' and shown to obey a quantum central limit theorem. In the large $N$ limit, the microscopic fluctuation operators form a bosonic algebra, irrespective of the nature of the microscopic many-body system. Being half-way between microscopic observables (as for instance the individual spin operators in a generic spin systems) and truly macroscopic ones ({\it e.g.} the corresponding mean magnetization), the fluctuation operators have been named ``mesoscopic''. They provide a particularly suited scenario to look for truly quantum signals in the dynamics of ``large'' systems, {\it i.e.} in systems in which the number of microscopic constituents grows arbitrarily. Although the emergent time-evolution over the fluctuation algebra has been extensively studied in many systems \cite{Verbeure}, very little is known of its behaviour in open many-body systems, {\it i.e.} in systems immersed in an external bath. This is the most common situation encountered in actual experiments, typically involving cold atoms, optomechanical or spin-like systems \cite{Bloch,Aspelmeyer,Rogers}, that can never be thought of as completely isolated from their thermal surroundings. Actually, the repeated claim of having detected ``macroscopic'' entanglement in those experiments \cite{Jost,Krauter} poses a serious challenge in trying to interpret theoretically those results \cite{Narnhofer}. Motivated by these experimental findings, in the following we shall show that quantum behaviour can indeed be present at the mesoscopic level in open many-body systems provided suitable fluctuation operators are considered. 
More specifically, we focus on a many-body system composed of two spin-1/2 chains, one next to the other, which are endowed with a microscopic thermal state at inverse temperature $\beta$ with a tensor product structure that excludes long-range correlations. A site in the system is thus composed of the corresponding couple of sites in the two chains and suitable single-site operators are considered giving rise to quantum fluctuations that, in the infinite volume limit, identify collective bosonic degrees of freedom clearly attributable to the two chains independently. The two chains are immersed in a common environment such that the observables supported by finite lattice intervals are subjected to a Lindblad type dynamics without direct interactions among the spins either in a same or in different chains. The dynamics is chosen in such a way as to leave the microscopic state invariant and to map into itself the linear span of the relevant single-site observables. Under this condition, we show that the emergent, mesoscopic dissipative quantum fluctuation dynamics is capable of entangling different collective bosonic degrees of freedom and that the dissipatively created entanglement presents interesting features as a function of the temperature and of the microscopic coupling strength of the two chains~\cite{note}. The structure of the paper is as follows: Section 2 provides the necessary preliminary notions concerning quantum spin chains and their description at the mesoscopic level based on a Weyl algebra of quantum fluctuations that satisfy a quantum central limit relation as explained in Theorem~\ref{th1}. In Section 3, the general techniques exposed in Section 2 are applied to the case of a system consisting of two quantum spin $1/2$ chains in a microscopic factorized thermal state: specific microscopic operators are selected that give rise to collective degrees of freedom pertaining to each chain independently of the other or to both chains at the same time.
The description of the resulting quantum fluctuations is given in terms of bosonic creation and annihilation operators and their mesoscopic thermal state is obtained in Proposition~\ref{prop-state}. In Section 4, a microscopic open quantum dynamics of the two chains is considered with a Lindblad generator that does not contain direct spin interactions and whose dissipative term statistically couples also spins belonging to different chains, while leaving the microscopic thermal state invariant. The main result of the paper is contained in Theorem~\ref{qfth} which shows that, in the large $N$ limit, the microscopic dissipative dynamics gives rise to a mesoscopic dynamics of quantum fluctuations consisting of a semigroup of completely positive Gaussian maps sending Weyl operators into Weyl operators. The Lindblad generator of this so-called quasi-free semigroup is derived in Corollary~\ref{cor1}. Sections 5 and 6 focus on mesoscopic Gaussian initial states whose form is left invariant by the dissipative mesoscopic dynamics. Specific Gaussian states are considered involving collective degrees of freedom that belong to the two chains independently. They are obtained with separable squeezing operations on the mesoscopic thermal state: the resulting squeezed state is then separable with respect to the collective degrees of freedom pertaining to different chains. In Section 7, two concrete microscopic models of open quantum spin chains are considered: in the first one, the dissipative term of the microscopic Lindblad generator is not diagonal in the site indices and consists of Kraus operators involving spins from both chains at each lattice site. Instead, in the second model the dissipative contribution is diagonal in the site indices and each site contributes with Kraus operators pertaining to only one chain. Propositions~\ref{propo1} and~\ref{propo3} provide the precise forms of the Lindblad generators of the dissipative quasi-free semigroups.
Section 8 studies the entanglement dynamics of the initially separable squeezed states constructed in Section 6 for the two models explicitly solved in Section 7. Squeezed states are not left invariant by the emerging mesoscopic dynamics, although they remain Gaussian, so that they may develop collective entanglement between the two chains at the mesoscopic level which can be quantified by the logarithmic negativity. The temporal behaviour of such a dissipatively generated entanglement is then studied analytically and numerically for different values of temperature, squeezing parameter and dissipation strength. \section{Quantum spin chains and their fluctuation algebra} \label{CONSTRUCTION} In this section, we briefly review how to construct the algebra of quantum fluctuations of a generic spin chain. \subsection{Quantum fluctuations} A quantum spin chain is a one-dimensional bi-infinite lattice, whose sites are indexed by an integer $j\in\mathbb{Z}$, all supporting the same finite-dimensional matrix algebra ${\cal A}^{(j)}=M_d(\mathbb{C})$. Its algebraic description~\cite{Bratteli} is by means of the \textit{quasi-local} $C^*$ algebra ${\cal A}$ obtained as an inductive limit from the strictly local sub-algebras ${\cal A}_{[p,q]}=\bigotimes_{j=p}^q{\cal A}^{(j)}$ supported by finite intervals $[p,q]$, with $p\leq q$ in $\mathbb{Z}$. Namely, one considers the algebraic union $\bigcup_{p\leq q}{\cal A}_{[p,q]}$ and its completion with respect to the norm inherited by the local algebras. Any operator $x\in M_d(\mathbb{C})$ at site $j$ can be embedded into ${\cal A}$ as: \begin{equation} x^{(j)}=\bold{1}_{j-1]}\otimes x\otimes\bold{1}_{[j+1}\ , \label{embed} \end{equation} where $\bold{1}_{j-1]}$ is the tensor product of identity matrices at each site from $-\infty$ to $j-1$, while $\bold{1}_{[j+1}$ is the tensor product of identity matrices from site $j+1$ to $+\infty$.
Quantum spin chains are naturally endowed with the translation automorphism $\tau:{\cal A}\mapsto{\cal A}$ such that $\tau(x^{(j)})=x^{(j+1)}$. Generic states $\omega$ on the quantum spin chain are described by positive, normalised linear functionals ${\cal A}\ni a\mapsto\omega(a)$: they are expectation functionals that assign mean values to all operators in ${\cal A}$. In the following, we shall consider translation-invariant states such that \begin{equation} \label{transinv} \begin{split} \omega(a)&=\omega\big(\tau(a)\big)\hspace{34pt}\qquad\forall a\in \mathcal{A}\ ,\\ \omega(x^{(j)})=\omega(x^{(j+1)})&=\omega(x)={\rm Tr}(\rho\,x)\qquad\forall x\in M_d(\mathbb{C})\ , \end{split} \end{equation} where $\rho$ is any density matrix in $M_d(\mathbb{C})$: it represents the evaluation of $\omega$ on single site observables. Furthermore, we shall focus upon translation-invariant states $\omega$ that are also \textit{clustering}, namely they do not support correlations between far away localized operators: \begin{equation} \label{clustates} \lim_{n\to\pm\infty}\omega\Big(a^\dag\tau^n(b)c\Big)=\omega(a^\dag\,c)\,\omega(b)\quad \forall a,b,c\in{\cal A}\ . \end{equation} In an infinite quantum spin chain, the operators belonging to strictly local sub-algebras contribute to the microscopic description of the system. In order to move to a description based on collective observables supported by infinitely many lattice sites, a proper scaling ought to be chosen. Most often, mean-field observables are considered; these are constructed as averages of $N$ copies of a same single site observable $x$, from site $j=0$ to site $N-1$: \begin{equation} X_N=\frac{1}{N}\sum_{k=0}^{N-1}x^{(k)}\ ,\qquad x\in M_d(\mathbb{C})\ .
\label{macro} \end{equation} Given any state $\omega$ on ${\cal A}$, the Gelfand-Naimark-Segal (GNS) construction~\cite{Bratteli} provides a representation $\pi_\omega:{\cal A}\mapsto \pi_\omega({\cal A})$ of ${\cal A}$ on a Hilbert space $\mathbb{H}_\omega$ with a cyclic vector $\vert\omega\rangle$ such that the linear span of vectors of the form $\vert\Psi_a\rangle=\pi_\omega(a)\vert\omega\rangle$ is dense in $\mathbb{H}_\omega$ and $$ \omega(b^\dag\,a\,c)=\langle \Psi_b\vert\pi_\omega(a)\vert\Psi_c\rangle\ ,\qquad a,b,c\in{\cal A}\ . $$ In case of a clustering state $\omega$, one can then consider the limit for $N\to\infty$ of $\omega\left(b^\dagger X_N\, c\right)$ where $b,c\in{\cal A}$, obtaining \begin{equation} \lim_{N\to\infty}\omega\left(b^\dagger X_N\, c\right)=\omega(b^\dag c)\,\omega(x)\ . \label{MET} \end{equation} Indeed, for any integer $N_0<N$ one can write: $$ \lim_{N\to\infty} \omega\left(b^\dagger X_N\, c\right)= \lim_{N\to\infty} \omega\Bigg( b^\dagger \bigg( \frac{1}{N} \sum_{k=0}^{N_0} x^{(k)} + \frac{1}{N} \sum_{k=N_0+1}^{N-1} x^{(k)}\bigg)\, c\Bigg)\ . $$ The first contribution in the r.h.s. clearly vanishes in the large $N$ limit. Concerning the second term, since strictly local operators are norm dense in $\mathcal{A}$, without loss of generality one can assume $c$ to have support on sites with labels $\leq N_0$, so that one can exchange it with $\sum_{k=N_0+1}^{N-1} x^{(k)}$. Using the clustering property (\ref{clustates}) one immediately gets the result (\ref{MET}). This means that in the so-called weak operator topology, {\it i.e.} under the state average, $X_N$ converges to a scalar multiple of the identity operator: \begin{equation} \lim_{N\to\infty} X_N = \omega(x)\, {\bf 1}\ . 
\end{equation} Furthermore, in Appendix A it is proved that, given $x,y\in M_d(\mathbb{C})$, the product $X_NY_N$ of the mean-field-observables weakly converges to $\omega(x)\omega(y)$: \begin{equation} \label{macro1} \lim_{N\to\infty}\omega\bigg(a^\dag X_N\,Y_N\,b\bigg)=\omega(a^\dag b)\,\omega(x)\,\omega(y)\ . \end{equation} It thus follows that the weak-limits of mean-field observables commute and give rise to a commutative algebra. \medskip \begin{remark} \label{rem0} {\rm Since they commute, mean-field observables pertain to the macroscopic, classical description level with no fingerprints of the microscopic quantum framework from which they emerge. Instead, as outlined in the Introduction, we are interested in studying which collective observables extending over the whole spin chain may keep some degree of quantum behaviour; clearly, a less rapid scaling than $1/N$ is necessary.} \qed \end{remark} \medskip Let us then consider combinations of microscopic operators of the form: \begin{equation} F_N(x)=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\left(x^{(k)}-\omega(x)\right)\ ; \label{FL} \end{equation} they are quantum analogues of the fluctuation variables in classical stochastic theory: we shall refer to them as ``local quantum fluctuations''. Their large $N$ limit with respect to clustering states $\omega$ has been thoroughly investigated in~\cite{Goderis1,Verbeure} yielding a non-commutative central limit theorem and an associated quantum fluctuation algebra. The scaling $1/\sqrt{N}$ is not sufficient to guarantee convergence in the weak-operator topology. Nevertheless, consider $x,y\in M_d(\mathbb{C})$ such that $\left[x\,,\,y\right]=z$. 
Since $[x^{(j)}\,,\, y^{(\ell)}]=\delta_{j\ell}\,z^{(j)}$, with respect to a clustering state $\omega$, one has, following the same strategy used in (\ref{MET}), \begin{equation} \lim_{N\to\infty} \omega\left(a^\dag\left[F_N(x),F_N(y)\right]b\right)=\lim_{N\to\infty} \frac{1}{N}\sum_{j=0}^{N-1}\omega\left(a^\dag z^{(j)}\, b\right)=\omega(a^\dag b)\,\omega(z) , \end{equation} for all $a,b\in{\cal A}$. Therefore, commutators $\left[F_N(x),F_N(y)\right]$ of local fluctuations do not vanish when $N\to\infty$. They behave as mean-field quantities and tend, in the weak-topology, to scalar quantities $\omega(z)$. This fact indicates that, at the mesoscopic level, the emerging quantum structure is endowed with a non-commutative algebraic structure. \medskip \begin{remark} \label{rem0a} {\rm Because they emerge from a scaling $1/\sqrt{N}$, quantum fluctuations provide a description level in between the microscopic (strictly local) and the macroscopic (mean-field) ones. We will refer to it as a \textit{mesoscopic} description level: though collective, it nevertheless inherits to a certain extent the quantum non-commutativity of the microscopic system from which it emerges.} \qed \end{remark} \medskip \subsection{Quantum fluctuation algebra} In order to construct a quantum fluctuation algebra, one starts by selecting a set of $d$ linearly independent single-site microscopic observables $\chi=\{x_j\}_{j=1}^d$, $x_j\in M_d(\mathbb{C})$, $x_j=x_j^\dag$, and then considers their local elementary fluctuations $F_N(x_j)$ and the large $N$ limit of the expectations of polynomials in the operators $F_N(x_j)$ with respect to a clustering state $\omega$.
In particular, the observables $x_j$ are chosen such that $1)$ the coefficients \begin{equation} \label{cormat} C^{(\omega)}_{ij}:=\lim_{N\to\infty}\omega\big(F_N(x_i)F_N(x_j)\big)\ , \end{equation} give a well defined positive $d\times d$ correlation matrix $C^{(\omega)}$, and $2)$ that the characteristic functions $\omega\big(e^{itF_N(x_j)}\big)$ converge to a Gaussian function in $t$ with zero mean and covariance matrix $\Sigma^{(\omega)}$ with entries \begin{equation} \label{covmat} \Sigma^{(\omega)}_{ij}=\frac{1}{2}\,\lim_{N\to\infty}\omega\big(\left\{F_N(x_i)\,,\,F_N(x_j)\right\}\big)\ . \end{equation} We shall then define the following bilinear, positive and symmetric map on the real linear span $\mathcal{X}=\Big\{x_r=\sum_{i=1}^d r_i\, x_i,\ x_i\in\chi,\ r_i\in\mathbb{R}\Big\}$, \begin{equation} \label{BiFo} (x_{r_1},x_{r_2})\to (r_1,\Sigma^{(\omega)}\,r_2)=\sum_{i,j=1}^dr_{1i}\, r_{2j}\,\Sigma^{(\omega)}_{ij}\ . \end{equation} \medskip A multivariate version of the {\sl normal quantum central limit theorem} is based on a restricted class of clustering states. \medskip \begin{definition} \label{2} A finite set of self-adjoint operators $\chi=\{x_j\}_{j=1}^d$ is said to have ``normal multivariate quantum fluctuations'' with respect to a clustering state $\omega$ if the latter obeys the condition: \begin{equation} \label{const2} \sum_{k=0}^{\infty}\Big|\omega(x^{(0)}_ix_j^{(k)})-\omega(x_i)\omega(x_j)\Big|<+\infty\quad\forall x_i,x_j\in\chi\ , \end{equation} and further satisfies \begin{eqnarray} \label{Gauss1} \lim_{N\to\infty}\omega\big(F_N^2(x_j)\big)&=& \Sigma^{(\omega)}_{jj}\\ \label{Gauss2} \lim_{N\to\infty}\omega(e^{itF_N(x_j)})&=&{\rm e}^{-\frac{t^2}{2}\Sigma^{(\omega)}_{jj}}\qquad \forall x_j\in\chi,\ \forall\, t\in\mathbb{R}\ . 
\end{eqnarray} \end{definition} \medskip \noindent We expect quantum fluctuations to obey the canonical commutation relations in the limit of large $N$; then, exponentials of local fluctuations ${\rm e}^{iF_N(x_j)}$ are expected to satisfy Weyl-like commutation relations in that limit \cite{Verbeure}. In full generality, given a set $\chi$ as in {\sl Definition \ref{2}}, one equips the real vector space $\mathcal{X}$ with the symplectic (bilinear) form \begin{equation} \label{sympform1} (r_1,r_2)\to(r_1,\sigma^{(\omega)} r_2)=\sum_{i,j=1}^dr_{1i}\,r_{2j}\,\sigma^{(\omega)}_{ij}\ , \end{equation} defined by the anti-symmetric matrix $\sigma^{(\omega)}$ with entries \begin{equation} \label{sympform} \sigma^{(\omega)}_{ij}:=-i\lim_{N\to\infty}\omega\left(\left[F_N(x_i)\,,\,F_N(x_j)\right]\right)=-\sigma^{(\omega)}_{ji}\ . \end{equation} The relation between the correlation, covariance and symplectic matrices is \begin{equation} \label{corcovsym} C^{(\omega)}=\Sigma^{(\omega)}\,+\,\frac{i}{2}\sigma^{(\omega)}\ . \end{equation} For sake of compactness, using the linearity of the map that associates an operator $x$ with its local quantum fluctuation $F_N(x)$, the following notation will be used: \begin{eqnarray} \label{qfa1} (r\,,\,F_N)&:=&\sum_{j=1}^dr_j\,F_N(x_j)=F_N(x_r)\qquad\forall x_r\in \chi\ ,\\ \label{qfa2} W_N(r)&:=&{\rm e}^{i(r\,,\,F_N)}={\rm e}^{iF_N(x_r)}\ , \end{eqnarray} where $F_N=(F_N(x_1),F_N(x_2),\ldots, F_N(x_d))^{tr}$ is the vector of local fluctuations. With the aid of the symplectic matrix $\sigma^{(\omega)}$, one can construct the abstract \emph{Weyl} algebra $\mathcal{W}$, linearly generated by the Weyl operators $W(r)$, $r\in\mathbb{R}^d$, obeying the relations: \begin{equation} W^\dag(r)=W(-r)\ ,\quad W(r_1)W(r_2)=W(r_1+r_2)\,{\rm e}^{-\frac{i}{2}(r_1,\sigma^{(\omega)} r_2)}\ . 
\label{Weyl} \end{equation} The following theorem specifies in which sense the large $N$ limit of the local exponentials $W_N(r)$ can be identified with Weyl operators $W(r)$ \cite{Verbeure}. \medskip \begin{theorem} \label{th1} Any set $\chi$ with normal fluctuations with respect to a clustering state $\omega$ admits a regular \emph{quasi-free} state $\Omega$ on a Weyl algebra $\mathcal{W}(\chi,\sigma^{(\omega)})$ such that: \begin{eqnarray} \nonumber &&\hskip-1cm \lim_{N\to\infty}\omega\big(W_N(r_1)\,W_N(r_2)\big)= \exp\Bigg(-\frac{\big((r_1+r_2),\Sigma^{(\omega)}\,(r_1+r_2)\big)}{2}\,-\frac{i}{2}\,\big(r_1,\sigma^{(\omega)}r_2\big)\Bigg)\\ &&\hskip 3.7cm =\,\Omega\big(W(r_1)W(r_2)\big)\ , \label{quasistate} \end{eqnarray} for all $x_{r_{1,2}}\in \mathcal{X}$ . \end{theorem} \medskip The regularity and quasi-free character of $\Omega$ follow from \eqref{Gauss2}; indeed, as explicitly shown by \eqref{Gauss2}, $\Omega$ is a Gaussian state (see Section 5). In particular, its regularity guarantees that one can write \begin{equation} \label{reg} W(r)={\rm e}^{iF(x_r)}={\rm e}^{i(r,F)}\ ,\qquad (r,F)=\sum_{i=1}^dr_i\,F(x_i)\ , \end{equation} where $F$ is an operator-valued $d$-dimensional vector with components $F(x_i)$ that are collective field operators satisfying canonical commutation relations \begin{equation} \left[F(x_{r_1})\,,\,F(x_{r_2})\right]=\left[(r_1,F)\,,\,(r_2,F)\right]=i\,\big(r_1,\sigma^{(\omega)} r_2\big)\ . \label{COMSIGMA} \end{equation} We shall refer to the Weyl algebra $\mathcal{W}(\chi,\sigma_\omega)$ generated by the strong-closure (in the GNS representation based on $\Omega$) of the linear span of Weyl operators as the {\sl quantum fluctuation algebra}. 
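The contrast between the $1/N$ and $1/\sqrt{N}$ scalings can be made concrete with a purely classical sampling sketch (an illustration, not part of the formal construction; the value of the mean $m$ is arbitrary): for a single-site observable diagonal in the product-state basis, the outcomes are i.i.d.\ random variables, so the mean-field average concentrates on the scalar $\omega(x)$ while the fluctuation keeps a finite, $N$-independent Gaussian variance, in accordance with the central limit theorem.

```python
import numpy as np

# Classical caricature of the two scalings: i.i.d. +/-1 outcomes of a
# single-site observable x with mean omega(x) = m (illustrative value).
rng = np.random.default_rng(0)
m = -0.38                      # stands in for omega(x)
p_up = (1 + m) / 2             # P(outcome = +1)

N, samples = 10_000, 1_000
spins = rng.choice([1.0, -1.0], size=(samples, N), p=[p_up, 1 - p_up])

X_N = spins.mean(axis=1)                     # mean-field average X_N
F_N = (spins - m).sum(axis=1) / np.sqrt(N)   # fluctuation F_N(x)

print(X_N.var())   # ~ (1 - m**2)/N -> 0: X_N becomes the scalar m
print(F_N.var())   # ~ 1 - m**2: finite, N-independent variance
```

The classical simulation suffices here only because a single diagonal observable is involved; the genuinely quantum content of the fluctuation algebra lies in the non-commutativity encoded in $\sigma^{(\omega)}$.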
\section{Spin-1/2 chains} \label{EX} In this section we consider two quantum spin chains whose spins do not directly interact, but are immersed into a same environment in such a way that they are subjected to a same external quantum noise and behave as open quantum systems undergoing a microscopic dissipative quantum dynamics described by a semi-group with a generator in Kossakowski-Lindblad form. Our aim is to study which kind of mesoscopic time-evolution emerges from a given microscopic dynamics and how it affects a suitably constructed quantum fluctuation algebra. In particular, we shall show that, solely because of its statistical mixing properties, the noisy part of the microscopic generator may induce entanglement between the two spin chains at the mesoscopic level. \subsection{Quantum fluctuations} We will first focus upon the microscopic double spin chain for which we shall construct a specific fluctuation algebra without considering any dynamics. At each site of both chains we attach the algebra $M_2(\mathbb{C})$ generated by the $2\times 2$ identity matrix and the Pauli matrices $\sigma_{1,2,3}$ satisfying the algebraic rules $$ [\sigma_i\,,\,\sigma_j]=2i\epsilon_{ijk}\,\sigma_k\ . $$ We shall pair sites from the two chains so that ${\cal A}^{(k)}$ will denote the matrix algebra $M_4(\mathbb{C})=M_2(\mathbb{C})\otimes M_2(\mathbb{C})$ supported by the $k$-th sites of the double chain. The \emph{quasi-local} algebra ${\cal A}$ describing the double chain will then be the tensor product of the quasi-local algebras of the single chains, with $a\otimes 1$ and $1\otimes a$ denoting operators pertaining to the first, respectively the second chain. We shall equip ${\cal A}$ with the microscopic thermal state at inverse temperature $\beta$ constructed from the infinite tensor product of a same single site thermal state with Hamiltonian: \begin{equation} H=\frac{\eta}{2}\big(\sigma_3\otimes\bold{1}+\bold{1}\otimes \sigma_3\big)\ . 
\label{MICHAM} \end{equation} Explicitly, one then has \begin{equation} a\mapsto\omega_\beta(a)={\rm Tr}_{[p,q]} \left(\bigotimes_{k=p}^q\rho_\beta^{(k)}\, a\right)\ ,\quad\rho_\beta^{(k)}:=\frac{{\rm e}^{-\beta H^{(k)}}}{{\rm Tr}\left({\rm e}^{-\beta H^{(k)}}\right)}\ , \label{STATE} \end{equation} where $H^{(k)}$ coincides with the hamiltonian in \eqref{MICHAM} for all $k$ and $a$ is any operator belonging to the strictly local algebra ${\cal A}_{[p,q]}\otimes{\cal A}_{[p,q]}$ (more general translationally invariant, clustering states are discussed in \cite{BCF2}). Further, ${\rm Tr}_j$, respectively ${\rm Tr}_{[p,q]}$, will denote the trace with respect to the Hilbert spaces $\mathbb{C}^4$, respectively $\mathbb{C}^{4^{q-p+1}}$, relative to the site $j\in[p,q]$, respectively to all sites $j\in[p,q]$. Setting $\epsilon=\tanh\left(\beta\eta/2\right)$, the only non-vanishing single site expectations are: \begin{eqnarray} \label{thermexp1} \omega_\beta\Big(\sigma^{(j)}_3\otimes 1\Big)&=&\omega_\beta\Big(1\otimes \sigma^{(j)}_3\Big)=\frac{{\rm Tr}\left({\rm e}^{-(\beta\eta/2)\,\sigma_3}\, \sigma_3\right)}{2\cosh(\beta\eta/2)}=-\,\epsilon\\ \label{thermexp2} \omega_\beta(\sigma^{(j)}_3\otimes \sigma_3^{(k)})&=&\epsilon^2\ . \end{eqnarray} The state $\omega_\beta$ is thus an equilibrium thermal state with respect to the hamiltonian time-evolution automorphism $\tau_t$ of ${\cal A}$: namely, $\omega_\beta$ satisfies the Kubo-Martin-Schwinger (KMS) relations at inverse temperature $\beta$ given by \begin{equation} \label{KMS1} \omega_\beta\big(a\,\tau_t[b]\big)=\omega_\beta\big(\,\tau_{t-i\beta}[b]\,a\big)\qquad\forall\ a,b\in{\cal A}\ . \end{equation} Such a state does not support correlations between the two spin chains and manifestly obeys the clustering condition in \eqref{clustates}.
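The thermal expectations \eqref{thermexp1} and \eqref{thermexp2} admit a quick numerical sanity check (a NumPy sketch with arbitrary test values of $\beta$ and $\eta$): since the single-site Hamiltonian \eqref{MICHAM} is diagonal, its exponential can be taken entrywise.

```python
import numpy as np

s3 = np.diag([1.0, -1.0])
I2 = np.eye(2)

beta, eta = 0.7, 1.3                    # arbitrary test values
H = 0.5 * eta * (np.kron(s3, I2) + np.kron(I2, s3))   # single-site Hamiltonian
w = np.exp(-beta * np.diag(H))          # H is diagonal, so exp(-beta*H) is too
rho = np.diag(w / w.sum())              # single-site thermal state rho_beta

eps = np.tanh(beta * eta / 2)
print(np.trace(rho @ np.kron(s3, I2)))  # -> -eps
print(np.trace(rho @ np.kron(s3, s3)))  # -> eps**2
```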
In the following, we shall consider the quantum fluctuation algebra based upon the self-adjoint subset $\chi=\{x_j\}_{j=1}^8$ consisting of the following $4\times 4$ hermitean matrices \begin{eqnarray} \label{matrix} &&x_1=\sigma_1\otimes\bold{1}\ ,\ x_2=\sigma_2\otimes\bold{1}\ ,\ x_3=\bold{1}\otimes \sigma_1\ , \ x_4=\bold{1}\otimes \sigma_2\\ \label{matrixa} &&x_5=\sigma_1\otimes\sigma_3\ , \ x_6=\sigma_2\otimes\sigma_3\ ,\ x_7=\sigma_3\otimes \sigma_1\ ,\ x_8=\sigma_3\otimes \sigma_2\ . \end{eqnarray} One easily sees that $\omega_\beta(x_j)=0$ for all $j=1,\dots,8$. Further, the conditions in {\sl Definition \ref{2}} are satisfied; indeed, \begin{equation} \label{micst} \sum_{k=0}^\infty\Big|\omega_\beta(x^{(0)}_ix^{(k)}_j)-\omega_\beta(x_i)\,\omega_\beta(x_j)\Big| =\Big|\omega_\beta(x_ix_j)\Big|\ . \end{equation} \begin{remark} \label{remchoice} {\rm There are $16$ single site observables of the form $\sigma_\mu\otimes\sigma_\nu$, $\mu,\nu=0,1,2,3$, $\sigma_0=\bold{1}$. It turns out that the set of local fluctuation operators, \begin{equation} \label{fluctexpl} F_N(x_j)=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\Big(x^{(k)}_j-\omega(x_j)\Big) =\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} x^{(k)}_j\ , \end{equation} corresponding to the chosen subset $\chi$, gives rise to a set of mesoscopic bosonic operators $F(x_j)$, $1\leq j\leq 8$ whose Weyl algebra commutes with the one generated by the remaining eight elements. Moreover, since the matrices $x_{1,2}$ and $x_{3,4}$ do refer to single sites belonging to different spin chains, they will provide collective operators associated to two different mesoscopic degrees of freedom.} \qed \end{remark} \medskip The microscopic state $\omega_\beta$ is a tensor product state and translation invariant; therefore, from \eqref{cormat}, one gets the correlation matrix $C^{(\beta)}$ with entries \begin{equation} \label{modcorrmat} C^{(\beta)}_{ij}=\lim_{N\to\infty}\omega_\beta\Big(F_N(x_i)F_N(x_j)\Big) ={\rm Tr}(\rho_\beta\, x_i\,x_j)\ . 
\end{equation} with $\rho_\beta$ as in (\ref{STATE}). The explicit form of this $8\times 8$ matrix is given in Appendix B; it can be expressed as a three-fold tensor products of $2\times 2$ matrices: \begin{equation} C^{(\beta)}=\left(\bold{1}-\epsilon\,\sigma_1\right)\otimes\bold{1}\otimes\left(\bold{1}+ \epsilon\,\sigma_2\right)\ . \label{modcorrmat2} \end{equation} In computing tensor products, we adopt the convention in which the entries of a matrix are multiplied by the matrix to its right. According to the preceding section, the algebraic relations among the emerging mesoscopic operators $F(x_j)$ are described by the symplectic matrix with entries $\sigma^{(\beta)}_{ij}=-i{\rm Tr}\big(\rho_\beta\,[x_i\,,\,x_j]\big)$, \begin{equation} \label{COMM1} \sigma^{(\beta)}=-2i\epsilon(\bold{1}-\epsilon\sigma_1)\otimes\bold{1}\otimes\sigma_2 \end{equation} and by the covariance matrix with entries $ \Sigma^{(\beta)}_{ij}=\frac{1}{2}\,{\rm Tr}\big(\rho_\beta\left\{x_i\,,\,x_j\big\}\right)$, \begin{equation} \label{covmat2} \Sigma^{(\beta)}=\frac{1}{2}\left(C^{(\beta)}+(C^{(\beta)})^{tr}\right) =(\bold{1}-\epsilon\sigma_1)\otimes \bold{1}\otimes \bold{1}\ , \end{equation} where $tr$ means matrix transposition. Notice that the symplectic matrix $\sigma^{(\beta)}$ is invertible; explicitly one finds: \begin{equation} \label{invCOMM1} (\sigma^{(\beta)})^{-1}=\frac{1}{2c^2\epsilon}\left(\bold{1}+\epsilon\sigma_1\right)\otimes\bold{1}\otimes\,i\sigma_2\ , \qquad c=\sqrt{1-\epsilon^2}\ . 
\end{equation} The fluctuation algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$ is then obtained from the linear span of exponential operators of the form (see the discussion after {\sl Theorem \ref{th1}}) \begin{equation} \label{Weyl1} W(r)={\rm e}^{iF(x_r)}={\rm e}^{i\sum_{j=1}^8 r_j\,F(x_j)}={\rm e}^{i(r\,,\,F)}\ , \qquad x_r=\sum_{j=1}^8r_j\,x_j\ , \end{equation} where the vector $r$ is now eight dimensional, $r=(r_1,\ldots,r_8)^{tr}\in\mathbb{R}^8$, while $F$ is the eight-dimensional operator valued vector with components $F(x_j)$, $1\leq j\leq 8$. The mesoscopic Weyl operators arise from limits of microscopic exponential operators \begin{eqnarray} \label{Weyl8} W_N(r)&:=&{\rm e}^{iF_N(x_r)}={\rm e}^{i(r\,,\,F_N)}\\ \label{Weyl8a} (r\,,\,F_N)&:=&\sum_{j=1}^8r_j\,F_N(x_j)=F_N(x_r)\ , \end{eqnarray} where $F_N=\{F_N(x_j)\}_{j=1}^8$ is the vector of local fluctuations. From \eqref{Weyl} and \eqref{COMM1}, one has: \begin{equation} \label{displ} W(r)\,F(x_i)\,W^\dag(r)=F(x_i)\,+\,i\big[(r,F)\,,\,F(x_i)\big]=F(x_i)\,+\,\sum_{j=1}^8 \sigma^{(\beta)}_{ij}\,r_j\ . \end{equation} \subsection{Fluctuation algebra} The Weyl algebraic structure associated with the chosen set $\chi$ and the thermal state $\omega_\beta$ allows for the mesoscopic description to be formulated in terms of four-mode bosonic annihilation and creation operators $a^\#_i\equiv(a_i,\, a_i^\dagger)$, $1\leq i\leq 4$, satisfying the canonical commutation relations \begin{equation} \label{coomodea} [a_i\,,\,a^\dag_j]=\delta_{ij}\ ,\quad [a_i\,,\,a_j]=[a_i^\dag\,,\,a^\dag_j]=0\ . 
\end{equation} Indeed, one can write \begin{equation} \label{Weyl2} F(x_i)=a(f_i)+a^\dag(f_i)\ ,\quad a^\dag(f_i)=\sum_{j=1}^4\,[f_{i}]_j\,a^\dag_j\ ,\ 1\leq i\leq 8\ , \end{equation} by means of the following four-dimensional vectors $f_i\in\mathbb{C}^4$, with components \begin{eqnarray} \label{Weyl3} &&\hskip -1.2cm f_1=\sqrt{\epsilon}\begin{pmatrix} 1\cr0\cr0\cr0 \end{pmatrix}\, ,\qquad f_2=-i\,f_1\, ,\; f_3=\sqrt{\epsilon}\begin{pmatrix} 0\cr0\cr1\cr0 \end{pmatrix}\, ,\qquad f_4=-i\,f_3\\ \label{Weyl4} &&\hskip-1.2cm f_5=\sqrt{\epsilon}\begin{pmatrix} -\epsilon\cr\sqrt{1-\epsilon^2}\cr0\cr0 \end{pmatrix}\, ,\qquad f_6=-i\,f_5\, ,\qquad f_7=\sqrt{\epsilon}\begin{pmatrix} 0\cr0\cr-\epsilon\cr \sqrt{1-\epsilon^2} \end{pmatrix}\, ,\qquad f_8=-i\,f_7\ . \end{eqnarray} It follows that \begin{equation} \label{Weyl5} \big[F(x_i)\,,\,F(x_j)\big]=2\,i\,\mathcal{I}m\left((f_i,f_j)\right)\ ,\quad (f_i,f_j)=\epsilon\,\Sigma^{(\beta)}_{ij}\,+\,\frac{i}{2}\sigma^{(\beta)}_{ij}\ . \end{equation} Setting \begin{equation} \label{anncrop} a=(a_1,a_2,a_3,a_4)^{tr}\ ,\quad a^\dag=(a^\dag_1,a^\dag_2,a^\dag_3,a^\dag_4)^{tr}\ ,\quad A=(a,a^\dag)^{tr}\ , \end{equation} one has \begin{equation} \label{matrix1} F=\mathcal{M}\,A\ ,\quad \mathcal{M}=\begin{pmatrix}f_1^\dag&f_1^{tr}\cr \vdots&\vdots\\ f_8^\dag&f_8^{tr}\end{pmatrix}\ , \end{equation} where $f_i^\dag=(f^*_{i1},f^*_{i2},f^*_{i3},f^*_{i4})$, $f^{tr}_i=(f_{i1},f_{i2},f_{i3},f_{i4})$. The $8\times 8$ matrix $\mathcal{M}$ can be inverted and used to write $A=\mathcal{M}^{-1}F$. The explicit expressions of $\mathcal{M}$ and $\mathcal{M}^{-1}$ are reported in Appendix B. From the structure of $\mathcal{M}^{-1}$, one notices that the creation and annihilation operators $a^\#_1$, respectively $a^\#_3$ come from single site operators $x_{1,2}$, respectively $x_{3,4}$ pertaining to the first, respectively the second chain. Then, $a^\#_1$ and $a^\#_3$ describe two independent mesoscopic degrees of freedom emerging from different chains. 
Instead, $a^\#_{2}$ and $a^\#_4$ result from combinations of spin operators involving both chains at the same time. \medskip \begin{remark} \label{rem5} {\rm If the temperature vanishes, {\it i.e.} $\epsilon=1$, the non-vanishing purely imaginary entries in $C^{(\beta)}$ are all proportional to $\pm 1$ (see (\ref{COMM2})). In such a degenerate case, only two bosonic modes can be accommodated: \begin{equation} \label{inverse0} a_1^{\dagger}=\frac{ F(x_1)\,+\,i\,F(x_2)}{2}\ ,\quad a_2^\dagger= \frac{F(x_3)\,+\, i\,F(x_4)}{2}\ . \end{equation} This degeneracy is due to a so-called coarse-graining effect \cite{Verbeure} which forbids distinguishing the mesoscopic limits of some different fluctuation operators. In other words, it may happen that $$ \lim_{N\to\infty}\omega\Big(\big[F_N(x_{r_1})-F_N(x_{r_2})\big]^2\Big)=0\ , $$ even when $x_{r_1}\neq x_{r_2}$.} \qed \end{remark} \medskip In the creation and annihilation operator formalism, the Weyl operators become displacement operators $D(z)$ labeled by complex vectors $z\in\mathbb{C}^4$. Let $Z=(z,z^*)^{tr}\in\mathbb{C}^8$ and $\Sigma_3$ denote the diagonal $8\times 8$ matrix $\hbox{diag}(1,1,1,1,-1,-1,-1,-1)$; then, \begin{equation} \label{displ1} D(z):={\rm e}^{-(Z,\Sigma_3\,A)}=\exp\left(\sum_{j=1}^4\Big(z_j\,a^\dag_j-z^*_j\,a_j\Big)\right)\ . \end{equation} \begin{lemma} \label{lemma0} Given the creation and annihilation operators $a_i^\#$, $1\leq i\leq 4$, Weyl and displacement operators are related by \begin{eqnarray} \label{Weyl6} W(r)&=&{\rm e}^{i(r,F)}=D(z_r)\ ,\qquad Z_r=\begin{pmatrix}z_r\cr z^*_r\end{pmatrix}=i\Sigma_3\,\mathcal{M}^\dag\,r\\ \label{Weyl6b} D(z)&=&W(r_z)\ , \quad r_z=-i(\mathcal{M}^\dagger)^{-1}\Sigma_3\,Z\ . \end{eqnarray} \end{lemma} \medskip According to Theorem \ref{th1}, the mesoscopic algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$ inherits a regular quasi-free state from the microscopic state $\omega_\beta$.
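The algebraic relations collected so far lend themselves to a direct numerical check. The following sketch (an illustrative verification aid, not part of the construction; the variable names simply mirror the symbols above) builds the vectors $f_i$ of \eqref{Weyl3}-\eqref{Weyl4} and the matrices $\Sigma^{(\beta)}$, $\sigma^{(\beta)}$ and $\mathcal{M}$ for a sample value of $\epsilon$, and tests the scalar-product relation \eqref{Weyl5}, the explicit inverse \eqref{invCOMM1}, and the rank degeneracy of $\mathcal{M}$ at $\epsilon=1$ pointed out in Remark \ref{rem5}.

```python
import numpy as np

# Pauli matrices; eps parametrizes the thermal state (eps -> 1 at zero temperature)
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
eps = 0.6
c = np.sqrt(1 - eps**2)

def kron3(a, b, d):
    # three-fold tensor product with the left-to-right convention of Eq. (modcorrmat2)
    return np.kron(np.kron(a, b), d)

# covariance and symplectic matrices, Eqs. (covmat2) and (COMM1)
Sigma = kron3(s0 - eps * s1, s0, s0)
sigma = -2j * eps * kron3(s0 - eps * s1, s0, s2)

# the vectors f_i of Eqs. (Weyl3)-(Weyl4)
f1 = np.sqrt(eps) * np.array([1, 0, 0, 0], dtype=complex)
f3 = np.sqrt(eps) * np.array([0, 0, 1, 0], dtype=complex)
f5 = np.sqrt(eps) * np.array([-eps, c, 0, 0], dtype=complex)
f7 = np.sqrt(eps) * np.array([0, 0, -eps, c], dtype=complex)
f = [f1, -1j * f1, f3, -1j * f3, f5, -1j * f5, f7, -1j * f7]

# (f_i, f_j) = eps * Sigma_ij + (i/2) * sigma_ij, Eq. (Weyl5)
G = np.array([[np.vdot(fi, fj) for fj in f] for fi in f])
assert np.allclose(G, eps * Sigma + 0.5j * sigma)

# explicit inverse of the symplectic matrix, Eq. (invCOMM1)
sigma_inv = kron3(s0 + eps * s1, s0, 1j * s2) / (2 * c**2 * eps)
assert np.allclose(sigma @ sigma_inv, np.eye(8))

# M has rows (f_i^dag, f_i^tr) and is invertible for eps < 1 ...
M = np.array([np.concatenate((fi.conj(), fi)) for fi in f])
assert np.linalg.matrix_rank(M) == 8

# ... while at eps = 1 one has f5 = -f1, f7 = -f3: only two modes survive (Remark rem5)
g1 = np.array([1, 0, 0, 0], dtype=complex)
g3 = np.array([0, 0, 1, 0], dtype=complex)
f_deg = [g1, -1j * g1, g3, -1j * g3, -g1, 1j * g1, -g3, 1j * g3]
M_deg = np.array([np.concatenate((fi.conj(), fi)) for fi in f_deg])
assert np.linalg.matrix_rank(M_deg) == 4
```

Since $\epsilon=1$ makes $c=0$, the inverse \eqref{invCOMM1} is not tested at that point; the last assertion only probes the coarse-graining degeneracy.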
\medskip \begin{proposition} \label{prop-state} The quasi-free state $\Omega_\beta$ on the Weyl algebra of quantum fluctuations $\mathcal{W}(\chi,\sigma^{(\beta)})$ is such that \begin{equation} \label{qfs1} \Omega_\beta(W(r))=\exp\Big(-\frac{1}{2}(r\,,\Sigma^{(\beta)}\,r)\Big)\ , \end{equation} with covariance matrix $\Sigma^{(\beta)}$ given by \eqref{covmat2}. In the creation and annihilation operator formalism, it amounts to the expectation functional $\displaystyle \Omega_\beta(W)={\rm Tr}(R_\beta\,W)$, where \begin{equation} \label{qfs2} R_\beta=\frac{{\rm e}^{-\beta\, K}}{{\rm Tr}\left({\rm e}^{-\beta\,K}\right)} \ ,\quad K=\eta\sum_{j=1}^4a^\dag_j a_j\ , \end{equation} namely to a KMS state at inverse temperature $\beta$ with respect to the group of automorphisms generated by the quadratic hamiltonian $K$. \label{StateBETA} \end{proposition} \noindent \begin{proof} The tensor product structure and translation-invariance of $\omega_\beta$ yield \begin{eqnarray*} \omega_\beta\left(W_N(r)\right)&=&\left({\rm Tr}\left(\rho_\beta\,{\rm e}^{i/\sqrt{N}\sum_{j=1}^8r_j\,x_j }\right)\right)^N\\ &=&\left(1-\frac{1}{2N}\sum_{i,j=1}^8r_i r_j{\rm Tr}(\rho_\beta\,x_i\,x_j)\,+\,o\left(\frac{1}{N}\right)\right)^N\ , \end{eqnarray*} whence, since $r\in\mathbb{R}^8$, $$ \lim_{N\to\infty}\omega_\beta\big(W_N(r)\big)= \lim_{N\to\infty}\omega_\beta\left({\rm e}^{i(r\,,\,F_N)}\right)= \exp\Big(-\frac{1}{2}(r\,,\Sigma^{(\beta)}\,r)\Big)\ . $$ On the other hand, writing $W(r)$ as a displacement operator $D(z_r)$, from \eqref{Weyl6}, its expectation with respect to the KMS state $\Omega_\beta$ reads $$ \Omega_\beta(W(r))=\exp\Big(-\frac{\|Z_r\|^2}{4\epsilon}\Big)=\exp\Big(-\frac{\sum_{i,j=1}^8r_ir_j\,(f_i,f_j)}{4\epsilon}\Big)\ . $$ Then, the result follows from \eqref{Weyl5}.
\qed \end{proof} \section{Dissipative mesoscopic dynamics} \label{DDTISC} Once the algebra of quantum fluctuations is constructed, an important issue is what kind of dynamics emerges at the mesoscopic level from a given microscopic time-evolution. So far, only unitary microscopic dynamics have been considered, and these have been shown to give rise to quasi-free mesoscopic unitary time-evolutions \cite{Verbeure}. Instead, in the following we shall focus upon the double quantum spin chain introduced before, undergoing an irreversible dissipative microscopic dynamics due to the presence of a common environment to which the chains are weakly coupled. This setting is typical of open quantum systems, so that the double chain will be affected by decoherence due to noise and dissipation. However, an environment need not only destroy quantum correlations in open systems; if suitably engineered, it can create entanglement between two open quantum systems immersed in it through a purely statistical mixing mechanism, namely without the intervention of either direct or environment-induced hamiltonian interactions \cite{Braun,Benatti2}. The main purpose of the following sections is twofold: on the one hand, we show that a suitable Lindblad-type microscopic dissipative dynamics gives rise to a mesoscopic quasi-free dissipative semigroup at the fluctuation level. On the other hand, we study under which conditions the capacity of the dissipative microscopic dynamics to entangle spins belonging to different chains can persist at the mesoscopic level. \subsection{Dissipative microscopic dynamics} We shall study the fluctuation time-evolution emerging from a microscopic irreversible dynamics generated locally by a generator of Kossakowski-Lindblad form acting on $X\in{\cal A}_{[0,N-1]}$.
More specifically, we shall discuss dynamical equations of the following generic form: \begin{eqnarray} \label{LINDMICO0a} &&\hskip-1.2cm \partial_tX(t)=\mathbb{L}_N[X(t)]\ ,\quad \mathbb{L}_N[X]=\mathbb{H}_N[X]\,+\,\mathbb{D}_N[X]\\ \label{LINDMICO0b} &&\hskip -1.2cm \mathbb{H}_N[X]=i\Big[H_N\,,\,X\Big]\ ,\quad H_N=\sum_{k=0}^{N-1}\,h^{(k)}\ ,\quad H_N^\dagger=H_N\ ,\\ &&\hskip-1.2cm \label{LINDMICO0} \mathbb{D}_N[X]=\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^d\,D_{\mu\nu}\,\Big(v_\mu^{(k)}\,X\,(v_{\nu}^\dag)^{(\ell)} -\frac{1}{2}\left\{v_{\mu}^{(k)}\,(v_\nu^\dag)^{(\ell)}\,,\,X\,\right\}\Big)\\ \label{LINDMICO0c} &&\hskip-.2cm =\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^d\,D_{\mu\nu}\,\Big(v_\mu^{(k)}\,\left[X\,,\,(v_{\nu}^\dag)^{(\ell)}\right]\,+\,\left[v_{\mu}^{(k)}\,,\,X\right]\,(v_\nu^\dag)^{(\ell)}\Big) \ . \end{eqnarray} The single site terms in the Hamiltonian contribution $\mathbb{H}_N$ are the same for each site, with no interactions among spins either belonging to the same chain or to different ones. Instead, in the purely dissipative contribution $\mathbb{D}_N$, the mixing action of the Kraus operators $v_\mu$ is weighted by the coefficients $J_{k\ell}\,D_{\mu\nu}$, involving in general different sites. Altogether, they form a Kossakowski matrix $J\otimes D$; in order to ensure the complete positivity of the generated dynamical maps $\displaystyle\Phi_t^N={\rm e}^{t\mathbb{L}_N}$, both $J$ and $D$ must be positive semi-definite. We shall leave the operators $h$ and $v_\mu$ completely unspecified; they will be fixed only later, when discussing specific examples of entanglement generation. In order to enforce translation invariance, one attaches the same hamiltonian to each site, $h^{(k)}=h$, and further considers site couplings $J_{k\ell}$ of the form \begin{equation} \label{JKL1} J_{k\ell}=J(|k-\ell|)\ ,\qquad J(0)=:J_0> 0\ .
\end{equation} Furthermore, we shall assume the strength of the mixing terms to decrease with the site distance in such a way that \begin{equation} \label{JKL2} \lim_{N\to\infty}\frac{1}{N}\sum_{k,\ell=0}^{N-1}|J_{k\ell}| =J_0+\lim_{N\to\infty}\frac{1}{N}\sum_{k\neq \ell=0}^{N-1}|J_{k\ell}|\,<\,+\infty\ . \end{equation} This requirement, together with \eqref{JKL1}, implies that \begin{equation} \label{JKL3} \lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^{N-1}|J_{k\ell}|=0\qquad\forall\ \ell\in\mathbb{N}\ . \end{equation} \medskip \begin{remark} \label{rem4} {\rm The generator $\mathbb{L}_N$ does not mediate any direct interaction between different spins since the Hamiltonian in $\mathbb{H}_N$ does not have interaction terms. On the other hand, the dissipative term $\mathbb{D}_N$ accounts for environment-induced dissipative effects by means of the anti-commutator $$ -\frac{1}{2}\left\{\sum_{k,\ell=0}^{N-1}\sum_{\mu,\nu=1}^d\,J_{k\ell}D_{\mu\nu}\,v_\mu^{(k)}\,(v_{\nu}^\dagger)^{(\ell)}\,,\,X \right\}\ , $$ while the remaining term $$ \sum_{k,\ell=0}^{N-1}\sum_{\mu,\nu=1}^dJ_{k\ell}D_{\mu\nu}\,v_\mu^{(k)}\,X\,(v_{\nu}^\dag)^{(\ell)} \ , $$ also known as quantum noise, contributes to statistical mixing. This latter effect can be better appreciated by diagonalising the non-negative matrix $J\otimes D$ and recasting the corresponding contribution to $\mathbb{D}_N$ into the Kraus-Stinespring form $\sum_a L_a\,X\,L_a^\dag$ of completely positive maps. By duality, it gives rise to a map on local density matrices, $$ {\cal A}_{[0,N-1]}\ni\rho_N\mapsto\sum_aL^\dag_a\,\rho_N\,L_a \ , $$ that transforms pure states into mixed ones. As we shall see, the presence of Kraus operators supported by both chains may allow this mixing term to entangle them at the mesoscopic level even in the absence of direct spin interactions.} \qed \end{remark} \medskip An important requirement for the discussion presented in the next sections is the time-invariance of the microscopic state $\omega_\beta$.
Were it not so, the state-dependent mesoscopic canonical commutation relations would also depend on time, opening the way to mesoscopic non-Markovian time-evolutions: such an interesting issue is however outside the scope of the present work and will be addressed elsewhere. We shall thus consider local generators $\mathbb{L}_N$ such that \begin{equation} \label{invst} \omega_N\circ\Phi_t^N=\omega_N\ , \end{equation} where $\omega_N$ denotes the local state resulting from restricting $\omega_\beta$ to ${\cal A}_{[0,N-1]}$. \subsection{Emerging mesoscopic dynamics} We shall now prove that, under certain technical conditions to be specified later, the mesoscopic dynamics that emerges in the limit of large $N$ from the local time-evolution \hbox{$\Phi^N_t={\rm e}^{t\mathbb{L}_N}$}, ${t\geq0}$, generated by \eqref{LINDMICO0a}-\eqref{LINDMICO0c} is a dissipative semigroup $\Phi_t={\rm e}^{t\mathbb{L}}$, ${t\geq 0}$, of completely positive, unital quasi-free maps on the algebra of fluctuations. Namely, under the mesoscopic dynamics, Weyl operators $W(r)$ of the form \eqref{Weyl6} are mapped into themselves, \begin{equation} \Phi_t[W(r)]={\rm e}^{f_r(t)}\,W(r_t)\ , \label{DYN} \end{equation} where both the function $f_r(t)$ and the time-dependent eight-dimensional real vector $r_t=(r_1^t,\ldots,r_8^t)^{tr}$ are to be determined. \medskip \begin{remark} \label{rem4-1} {\rm It is worth noting that, due to unitality and complete positivity, the maps $\Phi_t$ obey Schwarz positivity \begin{equation} \label{Schwpos} \Phi_t(X^\dag X)\,\geq\,\Phi_t(X^\dag)\,\Phi_t(X)\ . \end{equation} Moreover, since the Weyl operators $W(r)$ are unitary, \begin{equation} \label{Schwpos1} \|\Phi_t(W(r))\|=\left|{\rm e}^{f_r(t)}\right|\leq \|W(r)\|=1\ . \end{equation} } \qed \end{remark} \medskip In order to outline the idea of the proof, we first consider the structure of the time-derivative of the time-evolving local exponentials that give rise to $\Phi_t[W(r)]$ in \eqref{DYN}.
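The operator expansion underlying this comparison, namely $\frac{{\rm d}}{{\rm d}t}\,{\rm e}^{iA(t)}=\big(\sum_{n\geq1}\frac{i^n}{n!}\,\mathbb{K}^{n-1}_{A(t)}(\dot A(t))\big)\,{\rm e}^{iA(t)}$, with the nested commutators $\mathbb{K}^n_A(B)=[A,\mathbb{K}^{n-1}_A(B)]$ derived in Appendix C, can be tested numerically. In the sketch below (an illustration only, not part of the proof) random hermitean matrices stand in for the fluctuation operators, and the series is truncated at a fixed order:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6  # a small matrix dimension standing in for the local operators

def expi(A):
    # e^{iA} for hermitean A via spectral decomposition
    w, U = np.linalg.eigh(A)
    return (U * np.exp(1j * w)) @ U.conj().T

def random_hermitean():
    B = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    return (B + B.conj().T) / 2

# A(t) = A0 + t*A1 plays the role of F_N(x_{r_t}); A1 is its time derivative
A0, A1 = random_hermitean(), random_hermitean()
t, h = 0.3, 1e-5
A = A0 + t * A1

# left-hand side: central-difference derivative of e^{iA(t)}
lhs = (expi(A0 + (t + h) * A1) - expi(A0 + (t - h) * A1)) / (2 * h)

# right-hand side: truncated series sum_{n>=1} i^n/n! K^{n-1}_A(A1) e^{iA}
S = np.zeros_like(A)
K = A1.copy()          # K^0_A(A1) = A1
fact = 1.0
for n in range(1, 60):
    fact *= n
    S = S + (1j**n / fact) * K
    K = A @ K - K @ A  # next nested commutator
rhs = S @ expi(A)

assert np.linalg.norm(lhs - rhs) < 1e-6 * max(1.0, np.linalg.norm(rhs))
```

The $1/\sqrt{N}$ bound on the remainder $E_N$ in the Lemma below then follows from truncating this same series after the first two terms.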
\medskip \begin{lemma} \label{lemma1} Let $W_N(r)\in{\cal A}_{[0,N-1]}$, $r\in\mathbb{R}^8$, denote the local exponential operators \eqref{Weyl8} and define \begin{equation} \label{P2} W^t_N(r)={\rm e}^{f_r(t)}\,W_N(r_t)={\rm e}^{f_r(t)}\,{\rm e}^{i(r_t,F_N)}\ , \end{equation} with $r_t=(r_1^t,\ldots,r_8^t)^{tr}$. Then, \begin{equation} \frac{{\rm d}}{{\rm d}t}W_N^t(r)= \Bigg(\frac{{\rm d}f_r(t)}{{\rm d}t}\,+\,i\left(\dot{r}_t\,,\,F_N\right)\, -\,\frac{1}{2}\Big[(r_t,F_N)\,,\,(\dot{r_t},F_N)\Big]\Bigg)\,W_N^t(r) +\,E_{N}\, \label{lemma1-1} \end{equation} with $E_{N}$ vanishing in norm when $N\to\infty$ for all finite $t\geq 0$. \end{lemma} \medskip \noindent \begin{proof} \noindent Recalling (\ref{Weyl8}) and (\ref{Weyl8a}), one can write: $$ W_N^t(r)={\rm e}^{f_r(t)}\,{\rm e}^{iF_N(x_{r_t})}\ ,\quad x_{r_t}=\sum_{j=1}^8r_t^j\,x_j\ . $$ Note that $\displaystyle\dot{F}_N(x_{r_t}):=\frac{{\rm d}}{{\rm d}t}F_N(x_{r_t})=F_N(\dot{x}_{r_t})=(\dot{r}_t,F_N)$. Introduce now the following nested commutators: \begin{equation} \label{comms} \mathbb{K}_A^n(B):=\Big[A\,,\,\mathbb{K}^{n-1}_A(B)\Big]\ ,\quad \mathbb{K}^0_A(B)=B\ . \end{equation} Then, as shown in Appendix C, one has: \begin{eqnarray} \nonumber &&\hskip-2.5cm \frac{{\rm d}}{{\rm d}t}\,W_N(r_t)=\left(\sum_{n=1}^\infty\frac{i^n}{n!}\,\mathbb{K}^{n-1}_ {F_N(x_{r_t})}\Big(F_N(\dot{x}_{r_t})\Big)\right)\,W_N(r_t)\\ \hskip-.8cm &=&\Big(i\,\left(\dot{r}_t\,,\,F_N\right) -\frac{1}{2}\Big[F_N(x_{r_t}),\ F_N(\dot x_{r_t})\Big]\Big)\,W_N(r_t)\,+\,E_{N}\\ \label{expder1} \hskip-.8cm E_{N}&=& \left(\sum_{n=3}^\infty\frac{i^n}{n!}\,\mathbb{K}^{n-1}_{F_N(x_{r_t})}\Big(F_N(\dot{x}_{r_t})\Big)\right)\,W_N(r_t)\ , \end{eqnarray} thus recovering the second and third terms in the r.h.s. of \eqref{lemma1-1}. Moreover, since operators at different sites commute, one has: $$ \mathbb{K}^{n-1}_{F_N(x_{r_t})}\big(F_N(\dot{x}_{r_t})\big) =\frac{1}{N^{n/2}}\sum_{k=0}^{N-1}\mathbb{K}^{n-1}_{x^{(k)}_{r_t}}(\dot{x}^{(k)}_{r_t})\ .
$$ Further, using $$ \Big\|\mathbb{K}^{n-1}_{x^{(k)}_{r_t}}(\dot{x}^{(k)}_{r_t})\Big\|\leq 2^{n-1}\,\|x_{r_t}\|^{n-1}\,\|\dot x_{r_t}\|\ , $$ one estimates $$ \Big\|\mathbb{K}^{n-1}_{F_N(x_{r_t})}\big(F_N(\dot{x}_{r_t})\big)\Big\|\leq \frac{1}{\sqrt{N}}\Bigg(\frac{2\|x_{r_t}\|}{\sqrt{N}}\Bigg)^{n-1}\,\|\dot{x}_{r_t}\|\ . $$ As a consequence, the norm of $E_N$ in (\ref{expder1}) is bounded as \begin{equation} \Big\|E_{N}\Big\|\leq \frac{{\rm e}^{2\|x_{r_t}\|}}{\sqrt{N}}\, \|\dot{x}_{r_t}\|\ . \label{bound1} \end{equation} Therefore, from $\displaystyle\|\dot{x}_{r_t}\|\leq\sum_{j=1}^8|\dot{r}^j_t|\,\|x_j\|$, it follows that, in the limit of large $N$, $E_{N}$ vanishes in norm uniformly for $0\leq t\leq {\cal T}$, with $\cal T$ any finite, positive constant. \qed \end{proof} \medskip Notice that, besides the scalar term, the dominant contributions to the time derivative of $W_N^t(r)$ scale like fluctuations and mean-field quantities. We want to compare them with similarly scaling terms in $\mathbb{L}_N[W^t_N(r)]$. The following {\sl Lemma} is then useful.
\begin{lemma} \label{lemma3} Given the local dissipative semigroup on ${\cal A}_{[0,N-1]}$ generated by \begin{eqnarray*} \partial_tX(t)&=&\mathbb{L}_N[X(t)]\ ,\quad \mathbb{L}_N[X]=\mathbb{H}_N[X]\,+\,\mathbb{D}_N[X]\\ \mathbb{H}_N[X]&=&i\Big[H_N\,,\,X\Big]\ ,\quad H_N=\sum_{k=0}^{N-1}\,h^{(k)},\quad h^{(k)}=h=h^\dag\\ \mathbb{D}_N[X]&=&\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^d\,D_{\mu\nu}\Bigg(v_\mu^{(k)}\,X\,(v_{\nu}^\dag)^{(\ell)} -\frac{1}{2}\left\{v_{\mu}^{(k)}\,(v_\nu^\dag)^{(\ell)}\,,\,X\,\right\}\Bigg)\ , \end{eqnarray*} with positive semi-definite matrices $J\otimes D=[J_{k\ell}]\otimes[D_{\mu\nu}]$ and coefficients \hbox{$J_{k\ell}=J(|k-\ell|)$} satisfying \eqref{JKL1} and \eqref{JKL2}, one can recast the action of the Lindblad generator on $W_N(r)$ as follows: \begin{eqnarray} \label{Ldec0} &&\hskip -1.5cm \mathbb{L}_N\big[W_N(r)\big]=\,i\,\mathbb{L}_N\big[(r,F_N)\big]\,W_N(r)\,-\,\frac{1}{2}\Big[(r,F_N)\,,\,\mathbb{L}_N\big[(r,F_N)\big]\Big]\,W_N(r)\\ &&\hskip -1cm +\,\frac{1}{2}\Big(\mathbb{L}_N\big[(r,F_N)\big]\,(r,F_N)\,+\,(r,F_N)\,\mathbb{L}_N\big[(r,F_N)\big] -\,\mathbb{L}_N\big[(r,F_N)^2\big]\Big)\,W_N(r)\,+\,L_N \label{Ldec2} \end{eqnarray} with $L_N=\mathcal{R}_N+D_N$ and $\mathcal{R}_N$, $D_N$ vanishing in norm when $N\to\infty$. 
\end{lemma} \medskip \begin{proof} \noindent We shall analyze separately the hamiltonian and dissipative contributions.\hfill\break \leftline{$\bullet$ {\sl Hamiltonian contribution}} \smallskip Since $W_N(r)$ is unitary, the Hamiltonian term can be recast as \begin{eqnarray*} \nonumber \hskip-.5cm \mathbb{H}_N[W_N(r)]&=&i\sum_{k=0}^{N-1}\Bigg(h^{(k)}-W_N(r)\,h^{(k)}\,W^\dag_N(r)\Bigg)W_N(r)\\ \label{ham1} \hskip-.5cm &=& -i\Bigg(\sum_{k=0}^{N-1}H_N^{(k)}(x_r)\Bigg)\,W_N(r)\\ \hskip-.5cm \label{ham2} H_N^{(k)}(x_r)&=&\sum_{n=1}^\infty\frac{i^n}{n!}\,\mathbb{K}^n_{F_N(x_r)}(h^{(k)})= \sum_{n=1}^\infty\frac{i^n}{n!N^{n/2}}\mathbb{K}^n_{x^{(k)}_r}(h^{(k)})\ , \end{eqnarray*} whence $\mathbb{H}_N[W_N(r)]=\Big(H^{(1)}_{N}(x_r)+H^{(2)}_{N}(x_r)\Big)\,W_N(r)\,+\,{\cal R}_{N}$, where \begin{eqnarray} \label{ham3} H^{(1)}_{N}(x_r)&=&-\Big[H_N\,,\,(r,F_N)\Big]\\ \label{ham3a} H^{(2)}_{N}(x_r)&=&-\frac{i}{2}\Big[(r,F_N)\,,\,\Big[H_N\,,\,(r,F_N)\Big]\Big] \\ \label{ham4} {\cal R}_{N}&=&-\,i\sum_{k=0}^{N-1}\sum_{n=3}^\infty\frac{i^n}{n!N^{n/2}}\mathbb{K}^n_{x^{(k)}_r}(h^{(k)})\,W_N(r)\ . \end{eqnarray} Since $\|h^{(k)}\|=\|h\|$ and $\|x^{(k)}_r\|=\|x_r\|$ for all $k$, one can write: \begin{equation} \label{bound2} \|{\cal R}_{N}\|\leq\sum_{k=0}^{N-1}\sum_{n=3}^\infty\frac{1}{n!N^{n/2}}\,\left\|\mathbb{K}^n_{x^{(k)}_r}(h^{(k)})\right\|\leq\frac{{\rm e}^{2\|x_r\|}}{\sqrt{N}}\,\|h\|\ . 
\end{equation} \medskip \leftline{$\bullet$ {\sl Dissipative contribution}} Setting $W_N(r)\,v^{(k)}_\mu\,W^\dag_N(r)=v^{(k)}_\mu\,+\,V^{(k)}_{\mu N}$, where $$ V^{(k)}_{\mu N}=\sum_{n=1}^\infty\,\frac{i^n}{n!} \mathbb{K}^n_{F_N(x_r)}(v^{(k)}_\mu)= \sum_{n=1}^\infty\,\frac{i^n}{n!N^{n/2}} \mathbb{K}^n_{x_r^{(k)}}(v^{(k)}_\mu)\ , $$ one rewrites the purely dissipative contribution as \begin{equation} \mathbb{D}_N[W_N(r)] =\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^{d}\, D_{\mu\nu}\,\Big( v^{(k)}_\mu(V^\dag_{\nu N})^{(\ell)} -V^{(k)}_{\mu N}(v^\dag_\nu)^{(\ell)} -V^{(k)}_{\mu N}\,(V^\dag_{\nu N})^{(\ell)}\Big)\,W_N(r)\ . \label{decomp} \end{equation} Collecting contributions that scale not faster than $1/N$, one can write: \begin{eqnarray} \label{proof1c} &&\hskip-1cm V^{(k)}_{\mu N}=i\,\Big[(r,F_N)\,,\,v_\mu^{(k)}\Big]\,-\,\frac{1}{2}\Big[(r,F_N)\,,\,\Big[(r,F_N)\,,\,v_\mu^{(k)}\Big]\Big] +\Delta^{(k)}_{\mu N}\ ,\\ \label{proof1d} &&\hskip-1cm \Delta^{(k)}_{\mu N}=\sum_{n=3}^\infty\,\frac{i^n}{n!N^{n/2}}\, \mathbb{K}^n_{x^{(k)}_r}(v^{(k)}_\mu)\\ \label{proof1b} &&\hskip-1cm \label{VV} V^{(k)}_{\mu N}\,(V^\dag_{\nu N})^{(\ell)}=-\,\Big[(r,F_N)\,,\,v_\mu^{(k)}\Big]\,\Big[(r,F_N)\,,\,(v_\nu^\dag)^{(\ell)}\Big]\,+\,\Delta^{(k\ell)}_{\mu\nu N}\\ &&\hskip-1cm \Delta^{(k\ell)}_{\mu\nu N}=\sum_{n+m\geq3}\frac{i^n(-i)^m}{n!m!N^{(n+m)/2}}\, \mathbb{K}^n_{x^{(k)}_r}(v^{(k)}_\mu)\,\mathbb{K}^m_{x^{(\ell)}_r}\big((v^\dag_\nu)^{(\ell)}\big)\ . \end{eqnarray} Using as before $\displaystyle\|\mathbb{K}^n_{x^{(k)}_r}(v^{(k)}_\mu)\|\leq 2^n\|x_r\|^n\,\|v_\mu\|$, one gets \begin{equation} \label{est0} \|\Delta^{(k)}_{\mu N}\|\leq\frac{{\rm e}^{2\|x_r\|}}{N^{3/2}}\,\|v_\mu\|\ ,\quad \|\Delta^{(k\ell)}_{\mu\nu N}\|\leq\frac{{\rm e}^{4\|x_r\|}}{N^{3/2}}\,\|v_\mu\|\,\|v_\nu\|\ .
\end{equation} Using these results, one can decompose $\mathbb{D}_N$ as the sum of three contributions scaling at most as $1/N$, plus a correction term: $\mathbb{D}_N[W_N(r)]=\Big(D^{(1)}_{N}(x_r)\,+\,D^{(2)}_{N}(x_r)+\,D^{(3)}_{N}(x_r)\Big)\,W_N(r)\,+D_N$. The contribution $D^{(1)}_{N}(x_r)$ comes from the first term in \eqref{proof1c}; it scales as a fluctuation and, using \eqref{LINDMICO0c}, can be rewritten as: \begin{equation} D^{(1)}_{N}(x_r)=i\mathbb{D}_N\big[(r,F_N)\big] \ . \label{proof1e} \end{equation} The second contribution scales as $1/N$ and comes from the second term in \eqref{proof1c} and the first two terms in the r.h.s. of (\ref{decomp}); using $$ \big[x\,,\,[x\,,\,v]\big]v^\dag\,-\,v\,\big[x\,,\,[x\,,\,v^\dag]\big] =-\big[x\,,\,v\,[x\,,\,v^\dag]\,+\,[v\,,\,x]\,v^\dag\big]\ , $$ it can be recast in the form \begin{equation} \label{proof1f} D^{(2)}_{N}(x_r)=-\,\frac{1}{2}\big[(r,F_N)\,,\,\mathbb{D}_N\left[(r,F_N)\right]\big]\ . \end{equation} Further, using the relation \begin{eqnarray*} && x\,\left(v\,[x\,,\,v^\dag]\,+\,[v\,,\,x]\,v^\dag\right)\,+\,\Big(v\,[x\,,\,v^\dag]\,+\,[v\,,\,x]\,v^\dag\Big)\,x\,-\\ &&\hskip 3cm -\,v\,[x^2\,,\,v^\dag]\,-\,[v\,,\,x^2]\,v^\dag=2\,[x\,,\,v]\,[x\,,\,v^\dag]\ , \end{eqnarray*} the third contribution, which comes from the first term in the r.h.s. of (\ref{VV}) and the last term in the r.h.s. of (\ref{decomp}) and scales as a mean-field quantity, can be rewritten as \begin{eqnarray} \nonumber &&\hskip-1cm D^{(3)}_{N}(x_r)=\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^{d}D_{\mu\nu}\Big[(r,F_N)\,,\,v_\mu^{(k)}\Big]\,\Big[(r,F_N)\,,\,(v_\nu^\dag)^{(\ell)}\Big]\\ && =\frac{1}{2}\Big(\mathbb{D}_N\big[(r,F_N)\big]\,(r,F_N)\,+\,(r,F_N)\,\mathbb{D}_N\big[(r,F_N)\big]\, -\,\mathbb{D}_N\big[(r,F_N)^2\big]\Big)\ .
\end{eqnarray} Notice that the Hamiltonian term is such that $$ \mathbb{H}_N\big[(r,F_N)\big]\,(r,F_N)\,+\,(r,F_N)\,\mathbb{H}_N\big[(r,F_N)\big]\,-\,\mathbb{H}_N\big[(r,F_N)^2\big]=0\ , $$ so that one can add the above contribution to that of $\mathbb{D}_N$ without modifying it, thus obtaining \begin{equation} \label{proof1g} D^{(3)}_{N}(x_r)=\frac{1}{2}\Big(\mathbb{L}_N\big[(r,F_N)\big]\,(r,F_N)\,+\,(r,F_N)\,\mathbb{L}_N\big[(r,F_N)\big]\,-\,\mathbb{L}_N\big[(r,F_N)^2\big]\Big)\ . \end{equation} Finally, the correction term $D_{N}$ reads $$ D_{N}=\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^{d}\, D_{\mu\nu}\,\Big( v^{(k)}_\mu(\Delta^\dag_{\nu N})^{(\ell)}-\Delta^{(k)}_{\mu N}\,(v_\nu^\dag)^{(\ell)}-\Delta^{(k\ell)}_{\mu\nu N}\Big)\,W_N(r)\ , $$ and \eqref{est0} provides the upper bound \begin{equation} \label{bound3} \|D_{N}\|\leq \frac{3}{2N^{3/2}}\sum_{k,\ell=0}^{N-1}|J_{k\ell}|\,\sum_{\mu,\nu=1}^d\,|D_{\mu\nu}|\,\|v_\mu\|\,\|v_\nu\|\,{\rm e}^{4\|x_r\|}\ , \end{equation} whence the condition \eqref{JKL2} on the coefficients $J_{k\ell}$ makes it vanish in norm as $1/\sqrt{N}$ when $N\to\infty$. Putting together all these results and estimates, the statement of the Lemma immediately follows. \qed \end{proof} \subsection{Quasi-free dissipative mesoscopic dynamics} We shall choose single-particle Hamiltonian operators $h=h^\dag$ and Kraus operators $v_\mu$ such that, for all $0\leq k\leq N-1$, the linear span ${\cal X}$ of the chosen set $\chi$ of on-site microscopic observables be mapped into itself by the Lindblad generator: \begin{equation} \label{Ldec} \mathbb{L}_N[x^{(k)}_j]=\mathbb{H}_N[x^{(k)}_j]\,+\,\mathbb{D}_N[x^{(k)}_j]=\sum_{p=1}^8 \left(\mathcal{H}_{jp}\,+\,\mathcal{D}_{jp}\right)\,x^{(k)}_p\ .
\end{equation} We have denoted by $\mathcal{H}=[\mathcal{H}_{jp}]$ and $\mathcal{D}=[\mathcal{D}_{jp}]$ the $8\times 8$ matrices of coefficients specifying the action of the hamiltonian and dissipative generators and set \begin{equation} \label{singsiteL} \mathcal{L}=\mathcal{H}+\mathcal{D}\ ,\qquad \mathcal{H}_{ij}^*=\mathcal{H}_{ij}\ ,\quad \mathcal{D}_{ij}^*=\mathcal{D}_{ij}\ . \end{equation} When comparing the time derivative in \eqref{lemma1-1} with the action of the generator in \eqref{Ldec0}, one has to match contributions with the same scaling. Since, for large $N$, mean-field observables behave as scalar multiples of the identity, the matching among them can be obtained by a proper choice of the unknown function $f_r(t)$. On the other hand, the term $i(\dot{r}_t,F_N)$ in the time-derivative that scales as a fluctuation should be matched by the term $i\mathbb{L}_N[(r_t,F_N)]$ in the action of the generator. Then, for generic $r\in\mathbb{R}^8$, the equality \begin{equation} \label{sqrtn} (\dot{r}_t,F_N)=\mathbb{L}_N\big[(r_t,F_N)\big]=\left(\mathcal{L}^{tr}r_t,F_N\right) \end{equation} is equivalent to having \begin{equation} \label{Ldec4} r_t={\rm e}^{\,t\,\mathcal{L}^{tr}}\,r\ ,\quad \Phi^N_t\big[(r,F_N)\big]=\Big(r,\,{\rm e}^{t\mathcal{L}}\,F_N\Big)\ , \end{equation} where, as before, $\mathcal{L}^{tr}$ denotes the transpose of $\mathcal{L}$. Notice that such a time-dependence also satisfies \begin{equation} \label{Ldec1} \Big[(r_t,F_N)\,,\,(\dot{r}_t,F_N)\Big]=\Big[(r_t,F_N)\,,\,\mathbb{L}_N\big[(r_t,F_N)\big]\Big]\ .
\end{equation} Therefore, the difference between the time-derivative of $W_N^t(r)$ and the action of the generator on the same operator becomes \begin{equation} \label{difft} \frac{{\rm d}}{{\rm d}t}W^t_N(r)-\mathbb{L}_N\left[W_N^t(r)\right]=E_N-L_N+\left(\frac{{\rm d}f_r(t)}{{\rm d}t}-D^{(3)}_{N}(x_{r_t})\right)\,W^t_N(r)\ , \end{equation} where now $D^{(3)}_{N}(x_{r_t})$ in (\ref{proof1g}) can be expressed as: \begin{equation} D^{(3)}_{N}(x_{r_t})=\frac{1}{2}\Big(\big(\mathcal{L}^{tr}r_t,F_N\big)(r_t,F_N)+(r_t,F_N)\big(\mathcal{L}^{tr}r_t,F_N\big) -\mathbb{L}_N\big[(r_t,F_N)^2\big]\Big)\ . \label{difft0} \end{equation} Since the microscopic state $\omega_\beta$ is $\Phi^N_t$-invariant, so that $\omega_\beta\circ\mathbb{L}_N=0$, and of the product form \eqref{STATE} with $\omega_\beta(x_j)=0$, we get \begin{equation} \label{difft1} \omega_\beta\left(D^{(3)}_{N}(x_{r_t})\right)=\frac{1}{2}\big(\mathcal{L}^{tr} r_t,\Sigma^{(\beta)}\,r_t\big)\,+\,\frac{1}{2}\big(r_t,\Sigma^{(\beta)}\mathcal{L}^{tr}\,r_t\big) =\big(r_t,\mathcal{L}\,\Sigma^{(\beta)}\,r_t\big)\ , \end{equation} where the last equality follows from $r$ being a real vector and the covariance matrix $\Sigma^{(\beta)}$ \eqref{covmat2} being real symmetric. This result and \eqref{difft} then suggest choosing \begin{eqnarray} \label{difft2} \frac{{\rm d}}{{\rm d}t}f_r(t)&=&\omega_\beta\left(D^{(3)}_{N}(x_{r_t})\right)\quad \hbox{so that}\\ \label{difft2a} f_r(t)&=&-\frac{1}{2}\,\left(r,\mathcal{Y}_t\,r\right)\ ,\quad \mathcal{Y}_t\,=\, \Sigma^{(\beta)}\,-\,{\rm e}^{t\mathcal{L}}\,\Sigma^{(\beta)}\,{\rm e}^{t\mathcal{L}^{tr}} \end{eqnarray} with initial condition $f_r(0)=0$. It turns out that \begin{equation} \label{difft3} \mathcal{Y}_t\geq 0\quad\hbox{so that}\quad f_r(t)\leq 0\quad\hbox{and}\quad {\rm e}^{f_r(t)}\leq 1\ , \end{equation} for all $t\geq 0$, in agreement with \eqref{Schwpos1}.
This can be seen as follows: let $\lambda\in\mathbb{C}^8$ be a generic complex vector and set $q_\lambda=\sum_{j=1}^8 \lambda_j\,x_j\in\mathcal{X}$. Then, Schwarz positivity \eqref{Schwpos}, the time-invariance of $\omega_\beta$ and the second relation in \eqref{Ldec4} yield \begin{eqnarray*} && \frac{1}{2}\sum_{i,j=1}^8\lambda_i^*\lambda_j\,\omega_\beta\Big(\Big\{F_N(x_i)\,,\,F_N(x_j) \Big\}\Big)=\\ &&\hskip .5cm =\frac{1}{2}\sum_{i,j=1}^8\lambda_i^*\lambda_j\,\omega_\beta\Big(\Phi^N_t\Big[\Big\{F_N(x_i)\,,\,F_N(x_j)\Big\}\Big]\Big)\\ &&\hskip .5cm \geq\frac{1}{2}\omega_\beta\Big(\Phi^N_t[F_N(q^\dag_\lambda)]\,\Phi_t^N\left[F_N(q_\lambda)\right]\Big)\,+\,\frac{1}{2}\omega_\beta\Big(\Phi^N_t\left[F_N(q_\lambda)\right]\,\Phi^N_t[F_N(q^\dag_\lambda)]\Big)\\ &&\hskip .5cm =\frac{1}{2}\omega_\beta\Big(\Big(\lambda,{\rm e}^{t\mathcal{L}}\,F_N\Big)\,\Big(\lambda^*,{\rm e}^{t\mathcal{L}}\,F_N\Big)\Big)\,+\,\frac{1}{2}\omega_\beta\Big(\Big(\lambda^*, {\rm e}^{t\mathcal{L}}\,F_N\Big)\,\Big(\lambda,{\rm e}^{t\mathcal{L}}\,F_N\Big)\Big)\\ &&\hskip .5cm =\frac{1}{2}\sum_{i,j;r,s=1}^8\lambda_i^*\lambda_r\,\left({\rm e}^{t\mathcal{L}}\right)_{ij}\, \left({\rm e}^{t\mathcal{L}}\right)_{rs}\,\omega_\beta\Big(\Big\{F_N(x_j)\,,\,F_N(x_s)\Big\}\Big)\ . \end{eqnarray*} Recalling (\ref{covmat}), in the large $N$ limit one thus obtains, for all $\lambda\in\mathbb{C}^8$, $$ \Big(\lambda,\Sigma^{(\beta)}\,\lambda\Big)\geq \sum_{i,j;r,s=1}^8\lambda_i^*\lambda_r\left({\rm e}^{t\mathcal{L}}\right)_{ij}\,\left({\rm e}^{t\mathcal{L}}\right)_{rs}\,\Sigma^{(\beta)}_{js}= \Big(\lambda,{\rm e}^{t\mathcal{L}}\,\Sigma^{(\beta)}\,{\rm e}^{t\mathcal{L}^{tr}}\,\lambda\Big)\ . $$ Equipped with these considerations, we prove the following main technical result.
\medskip \begin{theorem} \label{qfth} Consider the quasi-local algebra ${\cal A}$ with the translation-invariant KMS state $\omega_\beta$ in \eqref{STATE}, the self-adjoint set $\chi=\{x_j\}_{j=1}^8$ in \eqref{matrix}, \eqref{matrixa} and the resulting quantum fluctuation algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$. Let the local algebras ${\cal A}_{[0,N-1]}$ evolve under the local dissipative semigroups $\{\Phi^N_t\}_{t\geq0}$ with Lindblad generator as in \eqref{LINDMICO0a}-\eqref{LINDMICO0c}, where the Hamiltonian and Kraus operators satisfy the relations \eqref{Ldec}. In the limit of large $N$, the emerging dissipative mesoscopic dynamics is described by a semigroup $\{\Phi_t\}_{t\geq0}$ of completely positive, unital maps on $\mathcal{W}(\chi,\sigma^{(\beta)})$, such that \begin{equation} \label{RD2} \lim_{N\to\infty}\omega_\beta\Big(W_N(a)\,\Phi^N_t\big[W_N(r)\big]\, W_N(b)\Big)=\Omega_\beta\Big(W(a)\,\Phi_t\big[W(r)\big]\,W(b)\Big)\ , \end{equation} for all microscopic exponential operators $W_N(a)$, $W_N(b)$, $W_N(r)$, with $W(a)$, $W(b)$ and $W(r)$ the corresponding Weyl operators in the algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$ and $\Omega_\beta$ the state on it defined by \eqref{qfs1}, which is then left invariant by $\Phi_t$. Moreover, the maps $\Phi_t$ are quasi-free, {\it i.e.} they map Weyl operators into Weyl operators: $\displaystyle \Phi_t[W(r)]={\rm e}^{f_r(t)}\,W(r_t)$, with $r_t$ and $f_r(t)$ as in \eqref{Ldec4} and \eqref{difft2}-\eqref{difft2a}, respectively. \end{theorem} \medskip \begin{remark} \label{rem6} {\rm The chosen type of convergence is consistent with the fact that the action of any map on the quantum fluctuation algebra is completely specified by its action on the Weyl operators $W(r)$. Such an action is in turn completely defined by the matrix elements in the GNS representation based on the limit state $\Omega_\beta$.
Both the state $\Omega_\beta$ and the Weyl operators arise from the large $N$ limit of the microscopic exponential operators $W_N(r)$ with respect to the microscopic state $\omega_\beta$.} \qed \end{remark} \medskip \noindent \begin{proof} For the sake of simplicity, we shall set \begin{equation} \label{notation} \omega^N_{ab}(\cdot):=\omega_\beta\Big(W_N(a)\,\cdot\,W_N(b)\Big)\ ,\ \Omega_{ab}(\cdot)= \Omega_\beta\Big(W(a)\,\cdot\,W(b)\Big) \end{equation} and then show that, for arbitrary $a,b,r\in\mathbb{R}^8$, the non-negative quantity \begin{equation} \label{P1} I_N=\Bigg|\Omega_{ab}\Big(\Phi_t\big[W(r)\big]\Big)\,-\,\omega^N_{ab}\Big(\Phi_t^N\big[W_N(r)\big]\Big)\Bigg| \end{equation} vanishes when $N\to\infty$. Adding and subtracting $W^t_N(r)$, one has $I_N\leq I^{(1)}_{N}+I^{(2)}_{N}$, where \begin{eqnarray} \label{I1} I^{(1)}_{N}&:=&\Big|\omega^N_{ab}\Big(W_N^t(r)\,-\,\Phi_t^N[W_N(r)]\Big)\Big|\\ \label{I2} I^{(2)}_{N}&:=&\Big|\Omega_{ab}\Big(\Phi_t[W(r)]\Big)\,-\,\omega^N_{ab}\Big(W_N^t(r)\Big)\Big|\ . \end{eqnarray} Because of \eqref{P2} and \eqref{Schwpos1}, one gets $$ I^{(2)}_{N}\leq\left|\Omega_{ab}\Big(W(r_t)\Big)\,-\,\omega_{ab}^N\Big(W_N(r_t)\Big)\right|\ . $$ Then, the properties of the exponential operators (see Remark \ref{rem6}) make $I^{(2)}_{N}\to 0$ as $N\to\infty$, uniformly in any finite time interval $0\leq t\leq {\cal T}$. On the other hand, in order to estimate $I^{(1)}_{N}$, we write \begin{eqnarray*} W^t_N(r)\,-\,\Phi_t^N[W_N(r)]&=&\int_0^t{\rm d}s\,\frac{{\rm d}}{{\rm d}s}\left(\Phi_{t-s}^N\Big[W^s_N(r)\Big]\right)\\ &=&\int_0^{t}{\rm d}s\,\Phi_{t-s}^N\Big[\frac{{\rm d}}{{\rm d}s}W_N^s(r)\,-\,\mathbb{L}_N[W_N^s(r)]\Big]\ .
\end{eqnarray*} Then, recalling (\ref{difft}), one obtains: \begin{eqnarray*} I^{(1)}_{N}&\leq&\int_0^t{\rm d}s\,\Big|\omega^N_{ab}\Big(\Phi^N_{t-s}[\delta_N(r,s)]\Big)\Big|\\ \delta_N(r,s)&:=&E_N-L_N\,+\,\left(\frac{{\rm d}f_r(s)}{{\rm d}s}-D^{(3)}_{N}(x_{r_s})\right)\,W^t_N(r_s)\ , \end{eqnarray*} with $D^{(3)}_{N}(x_{r_s})$ given by \eqref{difft0}. Since the microscopic state $\omega_\beta$ obeys the KMS conditions \eqref{KMS1}, from the Cauchy-Schwarz inequality it follows that \begin{eqnarray*} &&\hskip-.7cm \Big|\omega^N_{ab}\Big(\Phi^N_{t-s}[\delta_N(r,s)]\Big)\Big|^2=\Big|\omega_\beta\Big(\Phi_{t-s}^N[\delta_N(r,s)]\, W_N(b)\tau_{i\beta}[W_N(a)]\Big)\Big|^2\\ &&\hskip-.7cm \leq\omega_\beta\Big(\Phi^N_{t-s}[\delta_N(r,s)]\Phi^N_{t-s}[\delta^\dag_N(r,s)]\Big)\, \omega_\beta\Big(\big(\tau_{i\beta}[W_N(a)]\big)^\dag\tau_{i\beta}[W_N(a)]\Big)\ . \end{eqnarray*} For finite inverse temperature $\beta$, the second factor on the right-hand side of the inequality is bounded on a dense subset of operators in the Weyl algebra, while the first one can be estimated by means of the invariance of $\omega_\beta$ under $\Phi_t^N$ and of Schwarz positivity \eqref{Schwpos}: $$ \omega_\beta\Big(\Phi^N_{t-s}[\delta_N(r,s)]\Phi^N_{t-s}[\delta^\dag_N(r,s)]\Big)\leq \omega_\beta\Big(\delta_N(r,s)\,\delta^\dag_N(r,s)\Big)\ . $$ The proof of the theorem can thus be completed by showing that, when $N\to\infty$, the right-hand side of the above inequality vanishes uniformly for $0\leq s\leq t\leq {\cal T}$. The Cauchy-Schwarz inequality $\left|\omega(a^\dag b)\right|^2\leq\omega(a^\dag a)\,\omega(b^\dag b)$ yields $$ \omega_\beta\left((a+b)^\dag(a+b)\right)\leq\left(\sqrt{\omega_\beta(a^\dag a)}\,+\,\sqrt{\omega_\beta(b^\dag b)}\right)^2\ .
$$ Therefore, setting $\dot{f}_r(s):={\rm d}f_r(s)/{\rm d}s$ and using \eqref{difft} and \eqref{Schwpos1} together with $\omega_\beta(a^\dag a)\leq \|a\|^2$ and \eqref{difft1}--\eqref{difft3}, one gets \begin{eqnarray*} && \sqrt{\omega_\beta\Big(\delta_N(r,s)\,\delta^\dag_N(r,s)\Big)}\leq \sqrt{\omega_\beta\Big((E_N-L_N)^\dag(E_N-L_N)\Big)}\\ && +\,{\rm e}^{f_r(t)}\,\sqrt{\omega_\beta\bigg(\left(\dot{f}_r(s)-D^{(3)}_{N}(x_{r_s})\right)\,\left(\dot{f}_r(s)-\big(D^{(3)}_{N}\big)^\dagger(x_{r_s})\right)\bigg)}\\ &&\leq\|E_N-L_N\|\,+\,\sqrt{\omega_\beta\bigg(\left(\dot{f}_r(s)-D^{(3)}_{N}(x_{r_s})\right)\,\left(\dot{f}_r(s)-\big(D^{(3)}_{N}\big)^\dagger(x_{r_s})\right)\bigg)}\\ &&\leq\|E_N-L_N\|\,+\,\sqrt{\omega_\beta\left(D^{(3)}_{N}(x_{r_s})\,\big(D^{(3)}_{N}\big)^\dagger(x_{r_s})\right)-\left|\omega_\beta\left(D^{(3)}_{N}(x_{r_s})\right)\right|^2}\ . \end{eqnarray*} According to {\sl Lemma 2} and {\sl Lemma \ref{lemma3}} (with $r_t$ in the place of $r$ in the bound \eqref{bound3}), one obtains $\lim_{N\to\infty}\|E_N-L_N\|=0$, uniformly for $0\leq t\leq {\cal T}$. Furthermore, \begin{eqnarray*} D^{(3)}_{N}(x_r)&=&\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^{d}D_{\mu\nu}\Big[(r,F_N)\,,\,v_\mu^{(k)}\Big]\,\Big[(r,F_N)\,,\,(v_\nu^\dag)^{(\ell)}\Big]\\ &=&\frac{1}{2N}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^{d}\sum_{i,j=1}^8 D_{\mu\nu}r^ir^j\Big[x_i^{(k)}\,,\,v_\mu^{(k)}\Big]\,\Big[x^{(\ell)}_j\,,\,(v_\nu^\dag)^{(\ell)}\Big] \end{eqnarray*} can be recast in the form $$ D^{(3)}_{N}(x_r)=\frac{1}{N}\sum_{k,\ell=0}^{N-1} J_{k\ell}\sum_{\mu,\nu=1}^{d}D_{\mu\nu}\, a_\mu^{(k)}\,b_\nu^{(\ell)}\ , $$ where $a_\mu^{(k)}$ and $b_\nu^{(\ell)}$ are single-site operators.
Then, \begin{eqnarray*} &&\hskip-1.5cm \omega_\beta\left(D^{(3)}_{N}(x_{r_s})\,\big(D^{(3)}_{N}\big)^\dagger(x_{r_s})\right)-\left|\omega_\beta\left(D^{(3)}_{N}(x_{r_s})\right)\right|^2\\ &&=\sum_{k_1,\ell_1=0\atop k_2,\ell_2=0}^{N-1}\sum_{\mu_1,\nu_1=1\atop\mu_2,\nu_2=1}^d\,\frac{J_{k_1\ell_1}\,J_{k_2\ell_2}}{N^2}\ D_{\mu_1\nu_1}\,D_{\mu_2\nu_2}\, \Bigg(\omega_\beta\Big(a_{\mu_1}^{(k_1)}\,b_{\nu_1}^{(\ell_1)}(b_{\nu_2}^\dag)^{(\ell_2)}\,(a_{\mu_2}^\dag)^{(k_2)}\Big)\\ &&\hskip6cm -\,\omega_\beta\Big(a_{\mu_1}^{(k_1)}\,b_{\nu_1}^{(\ell_1)}\Big)\,\omega_\beta\Big((b^\dag_{\nu_2})^{(\ell_2)}\,(a_{\mu_2}^\dag)^{(k_2)}\Big)\Bigg)\ . \end{eqnarray*} Because of the assumption \eqref{JKL2} and its consequence \eqref{JKL3}, this quantity vanishes when \hbox{$N\to\infty$}. Indeed, since $\omega_\beta$ is a product state, the truncated correlations in the round brackets vanish unless some of the sites coincide; for example, the terms with $k_1=k_2$ can be bounded by a quantity proportional to $$ \frac{1}{N^2}\sum_{k,\ell_1,\ell_2=0}^{N-1}\,|J_{k\ell_1}|\,|J_{k\ell_2}|\ , $$ since the single-site operators involved are norm-bounded, uniformly for $0\leq s\leq t\leq {\cal T}$; by \eqref{JKL3}, all such contributions vanish as $N\to\infty$. \qed \end{proof} \medskip The previous theorem shows that, when the linear space $\mathcal{X}$ of selected single-site operators is stable under the action of the local Lindblad generator, the emergent mesoscopic irreversible dynamics maps Weyl operators into themselves: such a dynamics corresponds to a semigroup of unital, completely positive maps on the Weyl algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$, generated by a Lindblad generator which is at most quadratic in the fluctuation operators $F(x_i)$.
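The smallness mechanism invoked at the end of the proof can be illustrated numerically: for an absolutely summable coupling profile, the weighted sums $\frac{1}{N^2}\sum_{k,\ell_1,\ell_2}|J_{k\ell_1}|\,|J_{k\ell_2}|$ decay as $1/N$. A minimal Python sketch, where the exponentially decaying profile $J_{k\ell}={\rm e}^{-|k-\ell|}$ is an illustrative assumption and not the paper's $J_{k\ell}$:

```python
import numpy as np

def triple_sum(N, decay=1.0):
    """(1/N^2) * sum_{k,l1,l2} |J_{k l1}||J_{k l2}| for J_{kl} = exp(-decay*|k-l|)."""
    idx = np.arange(N)
    J = np.exp(-decay * np.abs(idx[:, None] - idx[None, :]))
    row = J.sum(axis=1)               # sum_l |J_{kl}|, uniformly bounded in N
    return (row ** 2).sum() / N ** 2  # = (1/N^2) sum_k (sum_l |J_{kl}|)^2

values = [triple_sum(N) for N in (50, 100, 200, 400)]
```

Since the row sums are uniformly bounded, the quantity above is $O(1/N)$, consistently with the uniform vanishing of the variance of $D^{(3)}_{N}$.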
\medskip \begin{corollary} \label{cor1} The maps $W(r)\mapsto\Phi_t[W(r)]=W_t(r)={\rm e}^{f_r(t)}\,W(r_t)$ with $r_t\in\mathbb{R}^8$ and $f_r(t)$ given by \eqref{Ldec4}, respectively \eqref{difft2a}, satisfy the time-evolution equation $\partial_tW_t(r)=\mathbb{L}[W_t(r)]$, where the generator $\mathbb{L}$ is given by \begin{eqnarray} \label{fluctLind1} &&\hskip-1.5cm \mathbb{L}[W_t(r)]=\frac{i}{2}\,\sum_{i,j=1}^8 H^{(1)}_{ij}\big[F(x_i)F(x_j)\,,\,W_t(r)\big]\\ \label{fluctLind2} &&\hskip-.5cm+ \sum_{i,j=1}^8D^{(1)}_{ij}\left(F(x_i)\,W_t(r)\,F(x_j)\,-\,\frac{1}{2}\big\{ F(x_i)F(x_j)\,,\,W_t(r)\big\}\right), \end{eqnarray} with $H^{(1)}$ a Hermitian $8\times 8$ matrix and $D^{(1)}$ a positive semi-definite, Hermitian $8\times 8$ matrix, given by \begin{eqnarray} \label{Flind1a} &&H^{(1)}=-i(\sigma^{(\beta)})^{-1}\left(\mathcal{L}\,C^{(\beta)}\,-\,C^{(\beta)}\,\mathcal{L}^{tr}\right)\,(\sigma^{(\beta)})^{-1}\ ,\\ &&D^{(1)}=(\sigma^{(\beta)})^{-1}\left(\mathcal{L}\,C^{(\beta)}\,+\,C^{(\beta)}\mathcal{L}^{tr}\right)(\sigma^{(\beta)})^{-1}\ . \label{Flind1b} \end{eqnarray} In the creation and annihilation operator formalism, using the notation introduced in \eqref{anncrop}, the generator reads \begin{eqnarray} \label{fluctLind3} &&\hskip-1.5cm \mathbb{L}[D_t(z)]=\frac{i}{2}\,\sum_{i,j=1}^8 H^{(2)}_{ij}\left[A^\dag_i\,A_j\,,\,D_t(z)\right]\\ \label{fluctLind4} &&\hskip-.5cm+ \sum_{i,j=1}^8 D^{(2)}_{ij}\left(A_i^\dag\,D_t(z)\,A_j\,-\,\frac{1}{2}\left\{ A_i^\dag A_j\,,\,D_t(z)\right\}\right), \end{eqnarray} where $D_t(z)$ is the time-evolved displacement operator \eqref{Weyl6b} corresponding to the time-evolved Weyl operator $W_t(r)$ and $H^{(2)}$ and $D^{(2)}$ are $8\times 8$ matrices, given by \begin{equation} \label{Flind2a} H^{(2)}=\mathcal{M}^\dag\,H^{(1)}\,\mathcal{M}\ ,\qquad D^{(2)}=\mathcal{M}^\dag\,D^{(1)}\,\mathcal{M}\ , \end{equation} where $\mathcal{M}$ is the matrix in \eqref{matrix1.1} of Appendix B.
\end{corollary} \begin{proof} Using \eqref{appb1} in Appendix C, the explicit expressions for $\dot{r}_t$, $f_r(t)$ and the relation \eqref{corcovsym} among the correlation, covariance and symplectic matrices, one computes \begin{eqnarray*} \partial_tW_t(r)&=&\Big(\dot{f}_r(t)\,+\,i(\dot{r}_t,F)\,-\,\frac{1}{2}\big[(r_t,F)\,,\,(\dot{r}_t,F)\big]\Big)\,W_t(r)\\ &=& \left(i(r_t,\mathcal{L}\,F)\,+\,(r_t,\mathcal{L}\,\Sigma^{(\beta)} r_t)\,+\,\frac{i}{2}(r_t,\mathcal{L}\,\sigma^{(\beta)} r_t)\right)\,W_t(r)\\ &=&\Big(i(r_t,\mathcal{L}\,F)\,+\,(r_t,\mathcal{L}\,C^{(\beta)} r_t)\Big)\,W_t(r) \ . \end{eqnarray*} In order to show how to match this time-derivative with the action on $W_t(r)$ of a linear map as in the statement of the Corollary, it is useful to recall \eqref{displ}, which gives $$ W_t(r)\,F(x_i)=\left(F(x_i)\,+\,\sum_{j=1}^8\sigma^{(\beta)}_{ij}\,r^j_t\right)\,W_t(r)\ . $$ It is then straightforward to derive that \begin{eqnarray*} \mathbb{L}[W_t(r)]&=&\frac{i}{2}\Big(\left(r_t,\sigma^{(\beta)}\big(H^{(1)}+(H^{(1)})^{tr}\big)F\right)\,+\, \left(r_t,\sigma^{(\beta)}\, H^{(1)}\,\sigma^{(\beta)} r_t\right)\Big)\,W_t(r)\\ &+&\frac{1}{2}\Big(\left(r_t,\sigma^{(\beta)}\big(D^{(1)}-(D^{(1)})^{tr}\big)F\right)\,+\, \left(r_t,\sigma^{(\beta)}\, D^{(1)}\,\sigma^{(\beta)} r_t\right)\Big)\,W_t(r)\ . 
\end{eqnarray*} By equating the operatorial, respectively the scalar contributions from the time-derivative and the generator action, one obtains \begin{eqnarray*} \mathcal{L}&=&\frac{1}{2}\sigma^{(\beta)}\left(H^{(1)}\,+\,(H^{(1)})^{tr}\right)\, -\,\frac{i}{2}\sigma^{(\beta)}\left(D^{(1)}-(D^{(1)})^{tr}\right)\\ \mathcal{L}\,C^{(\beta)}&=&\sigma^{(\beta)}\,\frac{i\,H^{(1)}\,+\,D^{(1)}}{2}\,\sigma^{(\beta)}\ , \end{eqnarray*} whence, by the invertibility of $\sigma^{(\beta)}$ (see \eqref{invCOMM1}), the Hermiticity of $C^{(\beta)}$ and the fact that $\mathcal{L}^\dag=\mathcal{L}^{tr}$ (see \eqref{singsiteL}), the result follows from $$ \mathcal{L}\,C^{(\beta)}\pm C^{(\beta)}\mathcal{L}^{tr}=\sigma^{(\beta)}\,\left(\frac{i\,H^{(1)}\,+\,D^{(1)}}{2}\,\mp\,\frac{i\,H^{(1)}\,-\,D^{(1)}}{2}\right)\,\sigma^{(\beta)}\ . $$ The second part of the corollary follows from using \eqref{matrix1}, $$ F(x_i)=F^\dag(x_i)=\sum_{k=1}^8\mathcal{M}^*_{ik}\,A^\dag_k\ ,\quad F(x_j)=\sum_{\ell=1}^8\mathcal{M}_{j\ell}\,A_\ell\ , $$ and inserting these expressions into \eqref{fluctLind1} and \eqref{fluctLind2}. \qed \end{proof} \section{Gaussian states} The mesoscopic dissipative dynamics $\Phi_t$ obtained in the previous section is quasi-free, as it maps Weyl operators into Weyl operators. The dual maps $\Psi_t$ act on the states $\rho$ on the Weyl algebra $\mathcal{W}(\chi,\sigma^{(\beta)})$, sending them into $\rho_t=\Psi_t[\rho]$ according to the duality relation \begin{equation} \label{duality} \rho_t(W)=\rho\big(\Phi_t[W]\big)\qquad\forall\, W\in\mathcal{W}(\chi,\sigma^{(\beta)})\ .
\end{equation} Particularly useful states on $\mathcal{W}(\chi,\sigma^{(\beta)})$ are the Gaussian states (with zero averages) which are identified by their characteristic functions being Gaussian, {\it i.e.} by the following expectation of Weyl operators \begin{eqnarray} \label{charfunct1} \rho_G\big(W(r)\big)&=&\rho_G\left({\rm e}^{i(r,F)}\right)=\exp\left(-\frac{1}{2}(r,G\,r)\right)\ ,\qquad\forall r\in\mathbb{R}^8\\ G&=&[G_{ij}]\ ,\quad G_{ij}=\frac{1}{2}\rho_G\left(\Big\{F(x_i)\,,\,F(x_j)\Big\}\right)\ . \end{eqnarray} These states are completely identified by their covariance matrix $G$; in particular, positivity of $\rho_G$ is equivalent to the following condition on $G$ \cite{Holevo}: \begin{equation} G+\frac{i}{2}\sigma^{(\beta)}\geq0\ , \label{g-positivity} \end{equation} where $\sigma^{(\beta)}$ is the symplectic matrix in (\ref{COMM1}). Clearly, the maps $\Psi_t$ transform Gaussian states into Gaussian states: \begin{eqnarray} \nonumber \Psi_t[\rho_G](W(r))&=&\rho_G\Big(\Phi_t[W(r)]\Big)={\rm e}^{f_r(t)}\,\rho_G\Big(W(r_t)\Big)\\ \label{gaussian1} &=& \exp\left(f_r(t)\,-\,\frac{1}{2}(r_t,G\,r_t)\right)=\rho_{G_t}\Big(W(r)\Big)\ , \end{eqnarray} with the time-dependent covariance matrix $G_t$ obtained recalling {\sl Corollary 1}, \eqref{Ldec4} and \eqref{difft2a}: \begin{equation} \label{gaussian2} G_t=\Sigma^{(\beta)}\,-\,{\rm e}^{t\mathcal{L}}\,\Sigma^{(\beta)}\,{\rm e}^{t\mathcal{L}^{tr}}+{\rm e}^{t\mathcal{L}}\,G\,{\rm e}^{t\mathcal{L}^{tr}}\ . \end{equation} It follows that the mesoscopic state $\Omega_\beta$ in \eqref{qfs1} is Gaussian with covariance matrix $G=\Sigma^{(\beta)}$ and thus, as the microscopic state $\omega_\beta$ is invariant under the local dissipative dynamics $\Phi^N_t$, $\Omega_\beta$ is invariant under the mesoscopic dissipative dynamics $\Psi_t$, {\it i.e.} $G_t=\Sigma^{(\beta)}$. 
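Two of the algebraic facts used above lend themselves to a direct numerical check, with random matrices standing in for the model's $\mathcal{L}$, $\Sigma^{(\beta)}$ and $\sigma^{(\beta)}$ (only the matrix algebra is tested, not the physics): the reconstruction identity $\sigma^{(\beta)}\,\frac{iH^{(1)}+D^{(1)}}{2}\,\sigma^{(\beta)}=\mathcal{L}\,C^{(\beta)}$ underlying \eqref{Flind1a}, \eqref{Flind1b}, together with the Hermiticity of $H^{(1)}$, and the invariance $G_t=\Sigma^{(\beta)}$ in \eqref{gaussian2} when $G=\Sigma^{(\beta)}$. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins: antisymmetric invertible "symplectic" matrix, positive
# definite "covariance" Sigma, arbitrary real 8x8 matrix L.
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
sigma = np.kron(np.eye(4), J2)
A = rng.normal(size=(8, 8))
Sigma = A @ A.T + 8 * np.eye(8)
Lmat = rng.normal(size=(8, 8))
C = Sigma + 0.5j * sigma          # correlation matrix C = Sigma + (i/2) sigma

sig_inv = np.linalg.inv(sigma)
H1 = -1j * sig_inv @ (Lmat @ C - C @ Lmat.T) @ sig_inv
D1 = sig_inv @ (Lmat @ C + C @ Lmat.T) @ sig_inv

herm_ok = np.allclose(H1, H1.conj().T)
recon_ok = np.allclose(sigma @ ((1j * H1 + D1) / 2) @ sigma, Lmat @ C)

def expm(M, terms=40):
    # Truncated exponential series; adequate for this small test matrix.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Covariance evolution with G = Sigma: the initial covariance is invariant.
E = expm(0.1 * Lmat)
G_t = Sigma - E @ Sigma @ E.T + E @ Sigma @ E.T
invariance_ok = np.allclose(G_t, Sigma)
```

Note that positive semi-definiteness of $D^{(1)}$ is not tested here: it holds for the actual mesoscopic generator, but not for an arbitrary random $\mathcal{L}$.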
A useful equivalent expression for the covariance matrix can be obtained by organizing the creation and annihilation operators in the new vector $\tilde A=(a_1,a^\dag_1,a_2,a^\dag_2,a_3,a^\dag_3,a_4,a^\dag_4)^{tr}$, and by introducing the coefficient vector $\tilde Z=(z_1,\bar{z}_1,z_2,\bar{z}_2,z_3,\bar{z}_3,z_4,\bar{z}_4)^{tr}\in\mathbb{C}^8$ together with the $8\times 8$ matrix $\tilde \Sigma_3={\rm diag}(1,-1,1,-1,1,-1,1,-1)$; this reordering will prove useful in the next Section, when discussing entanglement criteria for Gaussian states. \begin{lemma} The displacement operator $D(z)=\exp\Big(-(Z,\Sigma_3\, A)\Big)$ in \eqref{displ1} can be recast as $D(z)=\exp\Big(-(\tilde Z,\tilde \Sigma_3\, \tilde A)\Big)$ with \begin{equation} \tilde A= \mathcal{P}^{tr}\,A,\qquad \tilde Z=\mathcal{P}^{tr}\, Z,\qquad \tilde\Sigma_3={\cal P}^{tr}\, \Sigma_3\, {\cal P}\ ,\qquad \mathcal{P}\mathcal{P}^{tr}={\bf 1}_{8}\ , \label{lemm2} \end{equation} where $\mathcal{P}$ is explicitly given in \eqref{lastmat} of Appendix B. \label{lem2} \end{lemma} Using this new ordering, the expectation of the displacement operator $D(z)$ with respect to a Gaussian state $\rho_G$ reads \begin{equation} \rho_G\left(D(z)\right)=\exp\left(-\frac{1}{2}(\tilde Z,\tilde G\,\tilde Z)\right)\ , \label{bigcov1} \end{equation} with the new covariance matrix $\tilde G$ explicitly given by \begin{equation} \tilde G=\begin{pmatrix} \tilde G_{11}&\tilde G_{12}&\tilde G_{13}&\tilde G_{14}\\ \tilde G_{21}&\tilde G_{22}&\tilde G_{23}&\tilde G_{24}\\ \tilde G_{31}&\tilde G_{32}&\tilde G_{33}&\tilde G_{34}\\ \tilde G_{41}&\tilde G_{42}&\tilde G_{43}&\tilde G_{44} \end{pmatrix}\ , \label{bigcov2} \end{equation} where \begin{equation} \label{covariance2} \tilde G_{ij}=\frac{1}{2} \begin{pmatrix} \rho_G\left(\big\{a_i,a^\dagger_j\big\}\right)&-\rho_G\left(\big\{a_i,a_j^{\phantom{\dagger}}\big\}\right)\\ -\rho_G\left(\big\{a_i^\dagger,a^\dagger_j\big\}\right)&\rho_G\left(\big\{a^\dagger_i,a_j\big\}\right)\\ \end{pmatrix}\ .
\end{equation} The $2\times2$ matrices along the diagonal represent single-mode covariance matrices, while the off-diagonal ones account for correlations among the various modes. \section{Entanglement in Gaussian states} \label{EML} Using the previous results, and in particular the quasi-free property of the maps $\Phi_t$, we now want to study 1) whether it is possible to generate mesoscopic entanglement between different chains entirely by means of the dissipative microscopic dynamics and 2) the fate of the generated entanglement in the course of time, together with its dependence on the strength of the coupling to the environment and on the temperature of the given microscopic invariant state. By \textit{mesoscopic entanglement} we mean the existence of mesoscopic states carrying non-local, quantum correlations among the fluctuation operators pertaining to different chains. More precisely, we shall focus on the creation and annihilation operators $a^\#_1$ and $a^\#_3$ that, as already observed before, are collective degrees of freedom attached to the first and to the second chain, respectively. We shall then study the time-evolution of two-mode Gaussian states $\rho^{(13)}$, obtained by tracing a full four-mode Gaussian state over $a_2^\#$ and $a_4^\#$; indeed, as discussed below, the trace operation does not spoil the Gaussian character of the initial four-mode states. In the case of two-mode Gaussian states, the presence of entanglement can be ascertained using the partial transposition criterion, {\it i.e.} by looking at their behaviour when $a_1$ and $a^\dag_1$ are exchanged, while keeping $a_1^\dag a_1$ and $a_1a^\dag_1$ unchanged and without touching $a_3$ and $a_3^\dag$. If, under this substitution, $\rho^{(13)}$ does not remain positive, then it carries quantum correlations between the modes $1$ and $3$ and is thus entangled.
Conversely, a Gaussian state with respect to these two modes that remains positive under the above substitution is certainly separable. This is the content of the so-called Simon entanglement criterion \cite{Simon}. Notice that the state $\Omega_\beta$ in \eqref{qfs1} is separable with respect to all its four modes; indeed, its density matrix representation $R_\beta$ in \eqref{qfs2} can be written as a product of four independent density matrices, one for each of the modes. In fact, the corresponding covariance matrix $\widetilde\Sigma^{(\beta)}$ is diagonal when expressed in the representation \eqref{bigcov1}, \eqref{bigcov2}, thus showing neither quantum nor classical correlations between the different modes. As initial states, we shall consider states that are obtained from $R_\beta$ by the action of suitable squeezing operators in the modes 1 and 3, {\it i.e.} Gaussian states of the form \begin{equation} \rho^{(\beta)}_{r_1r_3}=S_1(r_1)S_3(r_3)\,R_\beta\, S^\dagger_3(r_3)S^\dagger_1(r_1)\ , \label{t0} \end{equation} where $S_j(r_j)$, $r_j\in\mathbb{R}$, are single-mode squeezing operators such that $$ S^\dag_j(r_j)\,a^\dagger_j\,S_j(r_j)=\cosh(r_j)\, a^{\dagger}_j\,-\,\sinh(r_j)\,a_j\ ,\quad j=1,3\ . $$ The squeezing operators map displacement operators $D(z)$ in \eqref{displ1} into displacement operators $$ D(z')=S^\dag_3(r_3) S^\dag_1(r_1)\,D(z)\,S_1(r_1)S_3(r_3)\ , $$ where $z'=(z_1',z_2,z'_3,z_4)$ with $z'_{1,3}=\cosh(r_{1,3})z_{1,3}-\sinh(r_{1,3})\bar{z}_{1,3}$. Further, the modes are not mixed by the squeezing, so that $\rho^{(\beta)}_{r_1r_3}$ is also a separable Gaussian state with respect to all four modes.
In particular, after squeezing, the $8\times 8$ covariance matrix $\widetilde\Sigma^{(\beta)}$ of the thermal state $R_\beta$ is mapped into the following one: \begin{equation} \widetilde \Sigma^{(\beta)}_{r_1,r_3}=\frac{1}{2\epsilon} \begin{pmatrix} {\cal S}(r_1) & {\bf 0}_4\\ {\bf 0}_4 & {\cal S}(r_3) \end{pmatrix}\ , \qquad {\cal S}(r)=\begin{pmatrix} \phantom{-}\cosh(2r)&-\sinh(2r)&0&0\\ -\sinh(2r)&\phantom{-}\cosh(2r)&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix} \ , \end{equation} where ${\bf 0}_4$ is the $4\times 4$ null matrix; in presenting this result, the ordering introduced at the end of the previous Section (denoted by a tilde) has again been used, so that $\widetilde \Sigma^{(\beta)}_{r_1,r_3}$ takes a convenient block diagonal form. Moreover, a state $\rho^{(13)}$ on the Bose algebra generated by $a_{1,3}^\#$ can be obtained from $\rho^{(\beta)}_{r_1r_3}$ by restricting its action to displacement operators of the form $D(z_{13})$ with $z_{13}=(z_1,0,z_3,0)$ and $z_{1,3}\in\mathbb{C}$. Namely, $\rho^{(13)}$ is completely defined by the expectations \begin{equation} \label{state13} \rho^{(13)}\big(D(z_{13})\big)={\rm Tr}\left(\rho^{(\beta)}_{r_1r_3}\,D(z_{13})\right)= {\rm Tr}\big[R_\beta\, D(z'_{13})\big]\ , \end{equation} and it thus inherits the Gaussian character of $R_\beta$, as these expectations are Gaussian functions of $z_{1,3}$. Finally, the same argument shows that the mesoscopic, dissipative time-evolution $\Phi_t$ transforms it into a Gaussian state at all times $t\geq 0$: \begin{equation} \label{evolvGauss} \rho^{(13)}_t\left(D(z_{13})\right)={\rm Tr}\Big[\rho^{(\beta)}_{r_1r_3}\,\Phi_t\big[D(z_{13})\big]\Big]\ . \end{equation} Therefore, the covariance matrix of interest, which involves only the modes $1,3$, can be retrieved from the total matrix in the form \eqref{bigcov2} by discarding the blocks relative to modes $2,4$.
Explicitly, \begin{equation} \label{newcov} \widetilde G_{red}(t)= \begin{pmatrix} \rho^{(13)}_t( a_1^{\dagger}a_1)+\frac{1}{2}&-\rho^{(13)}_t(a_1^{2})&\rho^{(13)}_t( a_1a_3^{\dagger})&-\rho^{(13)}_t( a_1a_3)\\ -\rho^{(13)}_t( a_1^{\dagger2})&\rho^{(13)}_t( a_1^{\dagger}a_1)+\frac{1}{2}&-\rho^{(13)}_t( a_1^{\dagger}a_3^{\dagger})&\rho^{(13)}_t( a_1^{\dagger}a_3)\\ \rho^{(13)}_t( a_1^{\dagger}a_3)&-\rho^{(13)}_t( a_1a_3)&\rho^{(13)}_t( a_3^{\dagger}a_3)+\frac{1}{2}&-\rho^{(13)}_t( a_3^{2})\\ -\rho^{(13)}_t( a_1^{\dagger}a^{\dagger}_3)&\rho^{(13)}_t( a_1a_3^{\dagger})&-\rho^{(13)}_t( a_3^{\dagger2})&\rho^{(13)}_t( a_3^{\dagger}a_3)+\frac{1}{2} \end{pmatrix} \equiv \begin{pmatrix} \Sigma_1&\Sigma_c\\ \Sigma_c^{\dagger}&\Sigma_2 \end{pmatrix}\ . \end{equation} For two-mode Gaussian states, the already mentioned Simon criterion not only provides an exhaustive entanglement witness, but also offers a means to quantify entanglement \cite{Simon}. It is nevertheless convenient to formulate the criterion in terms of the previous covariance matrix \cite{Souza}. Consider the block structure of $\widetilde G_{red}(t)$ and define: \begin{equation} I_1=\det(\Sigma_1)\ ,\qquad I_2=\det(\Sigma_2)\ ,\qquad I_3=\det(\Sigma_c)\ ,\qquad I_4={\rm Tr}\Big(\Sigma_1\sigma_3\Sigma_c\sigma_3\Sigma_2\sigma_3\Sigma_c^{\dagger}\sigma_3\Big)\ . \label{crit-1} \end{equation} Then, the necessary and sufficient condition for the state to be separable is: \begin{equation} S\equiv I_1I_2+\Big(\frac{1}{4}-|I_3|\Big)^2-I_4-\frac{I_1+I_2}{4}\ge0\ . \label{crit} \end{equation} Taking real squeezing parameters $r_1,r_3$ for the two chains, we have that $\Sigma_c=\Sigma_c^\dagger$; in this case, the four quantities $I_j$ can be explicitly computed as shown in Appendix F.
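The criterion can be exercised on two standard covariance matrices written in the ordering of \eqref{newcov}: an uncorrelated two-mode thermal state, which must come out separable, and a two-mode squeezed vacuum, which must come out entangled. The squeezed-vacuum moments used below, $\langle a^\dag_1a_1\rangle=\sinh^2 r$ and $\langle a_1a_3\rangle=\sinh r\cosh r$, follow a standard textbook convention and are not derived from the models of the next Section:

```python
import numpy as np

s3 = np.diag([1.0, -1.0])          # Pauli sigma_3

def simon_S(S1, S2, Sc):
    """Simon quantity S from the blocks of the reduced covariance matrix."""
    I1, I2, I3 = np.linalg.det(S1), np.linalg.det(S2), np.linalg.det(Sc)
    I4 = np.trace(S1 @ s3 @ Sc @ s3 @ S2 @ s3 @ Sc.conj().T @ s3).real
    return I1 * I2 + (0.25 - abs(I3)) ** 2 - I4 - (I1 + I2) / 4

# Uncorrelated thermal modes: diagonal blocks, no cross-correlations.
n1, n2 = 0.3, 0.7
S_th = simon_S((n1 + 0.5) * np.eye(2), (n2 + 0.5) * np.eye(2), np.zeros((2, 2)))

# Two-mode squeezed vacuum with squeezing r (assumed convention, see above).
r = 0.5
ch, sh = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
S_sq = simon_S(ch * np.eye(2), ch * np.eye(2),
               np.array([[0.0, -sh], [-sh, 0.0]]))
```

For the squeezed vacuum one finds $S=-\sinh^2(2r)/4<0$ for all $r\neq0$, while the thermal state gives $S=(I_1-\tfrac14)(I_2-\tfrac14)\geq0$.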
Further, the amount of entanglement in two-mode Gaussian states can be measured through the so-called logarithmic negativity of the state: \begin{equation} \label{entmeas1} E=\max\left\{0,-\frac{1}{2}\log_2\left(4\, {\cal I}\right)\right\}\ , \end{equation} where \begin{equation} \label{entmeas2} {\cal I}=\frac{I_1+I_2}{2}-I_3-\sqrt{\left[\frac{I_1+I_2}{2}-I_3\right]^2-(I_1I_2+I_3^2-I_4)}\ . \end{equation} \section{Spin chain models} In the following we shall apply the theoretical tools developed so far to the study of the dissipative generation of mesoscopic entanglement in two different models: in the first one, the microscopic Lindblad generator contains contributions involving single-site operators from both chains, while in the second one all terms contain single-site operators from one chain only. \subsection{Model 1} We shall consider a Lindblad generator of the form (\ref{LINDMICO0a})-(\ref{LINDMICO0c}), with Hamiltonian term \begin{equation} \label{mod1H} \mathbb{H}_N[X]=-i\big[H_N,\, X\big]\ ,\qquad H_N=\frac{\eta}{2}\sum_{k=0}^{N-1} h^{(k)}\ ,\quad h^{(k)}=\sigma_3^{(k)}\otimes \bold{1}^{(k)}\,+\,\bold{1}^{(k)}\otimes\sigma_3^{(k)}\ , \end{equation} and dissipative contribution of the generic form \eqref{LINDMICO0c}, \begin{equation} \mathbb{D}_N[X]=\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^4\,D_{\mu\nu}\Big(v_\mu^{(k)}\,\left[X\,,\,(v_{\nu}^\dag)^{(\ell)}\right]\,+\,\left[v_{\mu}^{(k)}\,,\,X\right]\,(v_\nu^\dag)^{(\ell)}\Big)\ , \label{mod1-dissip} \end{equation} with the following single-site Kraus operators \begin{equation} \label{Krops} v_1=\sigma_+\otimes \sigma_-\ ,\quad v_2=\sigma_-\otimes \sigma_+\,,\quad v_3=\frac{1}{2}\big(\sigma_3\otimes \bold{1}\big)\,,\quad v_4=\frac{1}{2}\big(\bold{1}\otimes \sigma_3\big)\ , \end{equation} where $\sigma_{\pm}=(\sigma_1\pm i\,\sigma_2)/2$, while the $4\times 4$ matrix $D$ is given by \begin{equation} D=\begin{pmatrix} \delta&0&\gamma&\gamma\\ 0&\delta&\gamma&\gamma\\ \gamma&\gamma&\delta&0\\
\gamma&\gamma&0&\delta \end{pmatrix}\ ; \label{Kosmat} \end{equation} by choosing $|\gamma|\le \delta/2$, $D$ is positive semi-definite. In this case, one can recast $\mathbb{D}_N$ in a double commutator form: \begin{equation} \mathbb{D}_N[X]=\frac{1}{2}\sum_{k,\ell=0}^{N-1}J_{k\ell}\sum_{\mu,\nu=1}^4D_{\mu\nu}\Big[\left[v_\mu^{(k)}\,,\,X\right]\,,\,(v^\dag_{\nu})^{(\ell)}\Big]\ . \label{double} \end{equation} In the following we shall study the emergent mesoscopic dynamics corresponding to the microscopic dissipative dynamics locally generated by $\mathbb{L}_N[X]=\mathbb{H}_N[X]+\mathbb{D}_N[X]$ as given above. Local states $\rho_N$ evolve according to the master equation involving the dual generator $\mathbb{L}_N^{\phantom{|}\star}$: \begin{equation} \partial_t \rho_N(t)=\mathbb{L}_N^{\star\phantom{|}}[\rho_N(t)]=-i\big[H_N,\,\rho_N(t)\big]+ \mathbb{D}_N[\rho_N(t)]\ . \end{equation} The microscopic thermal state $$ \rho_N^{(\beta)}=\bigotimes_{k=0}^{N-1}\frac{1}{4\cosh^2(\eta\beta/2)}\,{\rm e}^{-\beta \eta h^{(k)}/2}\ , $$ in \eqref{STATE} is left invariant by the dissipative dynamics; indeed, $\mathbb{L}_N^\star[\rho_N^{(\beta)}]=0$, as follows from $$ \left[\sigma_3\otimes\bold{1}+\bold{1}\otimes\sigma_3\,,\,v_\mu\right]=0\qquad \forall \mu=1,2,3,4\ . $$ Further, since spin operators at different sites commute, the action of the Lindblad generator $\mathbb{L}_N$ on the self-adjoint element $x_i^{(k)}$ from the set $\chi$ at site $k$ is given by: \begin{eqnarray*} \mathbb{L}_N\left[x_i^{(k)}\right]&=&i\frac{\eta}{2}\left[\sigma_3^{(k)}\otimes\bold{1}+\bold{1}\otimes \sigma_3^{(k)}\,,\,x_i^{(k)}\right]\\ &+&J_0\sum_{\mu,\nu=1}^4\frac{D_{\mu\nu}}{2}\left[\left[v_\mu^{(k)}\,,\,x_i^{(k)}\right]\,,\,(v^\dag_{\nu})^{(k)}\right]\ .
\end{eqnarray*} This action maps the linear span $\mathcal{X}$ into itself; indeed, $\mathbb{L}_N\left[x_i^{(k)}\right]=\sum_{j=1}^8\mathcal{L}_{ij}\,x_j^{(k)}$, with the $8\times 8$ matrix $\mathcal{L}=\mathcal{H}+\mathcal{D}$ explicitly given in Appendix D. Then, the generator of the mesoscopic dissipative dynamics as given in {\sl Corollary \ref{cor1}} is completely determined by the $8\times 8$ matrices $H^{(1)}$ and $D^{(1)}$ in \eqref{Flind1a}, \eqref{Flind1b}, or $H^{(2)}$ and $D^{(2)}$ in \eqref{Flind2a}. Here, we give the form of the generator with respect to creation and annihilation operators. \begin{proposition} \label{propo1} In terms of annihilation and creation operators $a^\#_i$, $i=1,2,3,4$, the mesoscopic Lindblad generator acts on displacement operators $D(z)$ as $\mathbb{L}=\mathbb{H}\,+\,\mathbb{D}$, with $\mathbb{H}$ and $\mathbb{D}$ given by \begin{eqnarray} \label{LINDBOS0} \mathbb{H}[D(z)]&=&i\eta\Big[\sum_{j=1}^4a^\dag_j a_j\,,\,D(z)\Big]\\ \mathbb{D}[D(z)]&=&\sum_{i,j=1}^{8}K_{ij}^{(\beta)}\left(V^\dag_i\,D(z)\,V_j\,-\,\frac{1}{2}\left\{V^\dag_i\,V_j\,,\,D(z)\right\}\right)\ , \label{LINDBOS} \end{eqnarray} where $V=(a_1,a_2,a^\dag_1,a^\dag_2,a_3,a_4,a^\dag_3,a^\dag_4)^{tr}$, while the Kossakowski matrix is \begin{eqnarray} \label{gen0} \hskip-1cm && K^{(\beta)}=\frac{J_0}{\epsilon}\begin{pmatrix}A_\beta&B_\beta\cr B_\beta&A_\beta\end{pmatrix}\ ,\quad A_\beta=\delta\,\begin{pmatrix} 1+\epsilon&0&0&0\cr 0&1+\epsilon&0&0\cr 0&0&1-\epsilon&0\cr 0&0&0&1-\epsilon \end{pmatrix}\\ \hskip-1cm \label{gen2a} && B_\beta=\gamma\begin{pmatrix} \epsilon(1+\epsilon)&-(1+\epsilon)c&0&0\cr -(1+\epsilon)c&-\epsilon(1+\epsilon)&0&0\cr 0&0&\epsilon(1-\epsilon)&-(1-\epsilon)c\cr 0&0&-(1-\epsilon)c&-\epsilon(1-\epsilon) \end{pmatrix}\ , \end{eqnarray} with $\epsilon=\tanh(\eta\beta/2)$ and $c=\sqrt{1-\epsilon^2}$ as before.
\end{proposition} \begin{proof} The Hamiltonian contribution $\mathbb{H}$ to the generator is defined by the matrix $H^{(2)}$ in equation \eqref{H1A} of Appendix D: it is diagonal in the operators $A_i^\#$, defined in (\ref{anncrop}). Moreover, $A^\dag_{5,6,7,8}=A_{1,2,3,4}$; thus, by using the canonical commutation relations $[a_i\,,\,a^\dag_j]=\delta_{ij}$, the mesoscopic Hamiltonian turns out to be proportional to the number operator $\sum_{j=1}^4a^\dag_j a_j$. The form of the dissipative term $\mathbb{D}$ in the generator derives from the expression of the Kossakowski matrix given in equations \eqref{D1A1} and \eqref{D1A2} of Appendix D. Using {\sl Corollary~1}, the form (\ref{LINDBOS}) then follows; note that, for convenience, the sums over the indices $i,j$ in (\ref{LINDBOS}) use the ordering $(a_1,a_2,a^\dag_1,a^\dag_2,a_3,a_4,a^\dag_3,a^\dag_4)^{tr}$ instead of the ordering $(a_1,a_2,a_3,a_4,a^\dag_1,a^\dag_2,a^\dag_3,a^\dag_4)^{tr}$ introduced before. \qed \end{proof} \medskip \begin{remark} {\rm The above expression of the Lindblad generator reveals two main features of the mesoscopic dissipative dynamics: 1) the unitary contribution $\mathbb{H}$ to the collective dynamics of the Boson degrees of freedom shows no interactions among them. The mesoscopic Hamiltonian is proportional to the number operator and as such it commutes with the dissipative contribution: $\mathbb{D}\circ\mathbb{H}=\mathbb{H}\circ\mathbb{D}$. In fact, $\mathbb{D}$ is gauge-invariant: it does not change by sending $a_i$ into ${\rm e}^{i\phi}a_i$ and $a_i^\dag$ into ${\rm e}^{-i\phi}a^\dag_i$, $i=1,2,3,4$. Furthermore, 2) were it not for the off-diagonal blocks $B_\beta$ in the Kossakowski matrix, the dissipative dynamics would correspond to decay processes affecting the various bosonic degrees of freedom independently. For instance, in the absence of off-diagonal terms in the Kossakowski matrix, one would have $$ \mathbb{L}[a_i]=-\left(i\eta\,+\,J_0\delta\right)\,a_i\ .
$$ Instead, the presence of $B_\beta\neq 0$ statistically couples the collective operators $a^\#_{1,3}$, $a^\#_{2,4}$ referring to different chains.} \qed \end{remark} \subsection{Model 2} While the Lindblad operators $v_\mu$ of the first model involve contributions from both chains ({\it cf.}\ \eqref{Krops}) and different sites are statistically coupled by the coefficients $J_{k\ell}$, in the following we shall consider a Lindblad generator with the same Hamiltonian term as in (\ref{mod1H}) and a diagonal dissipative contribution of the form: \begin{equation} \label{modD2a} \mathbb{D}_N[X]=\sum_{k=0}^{N-1}\mathbb{D}^{(k)}_N[X]\ ,\quad \mathbb{D}^{(k)}_N[X]=\sum_{\mu,\nu=1}^{6}D_{\mu\nu}\left(v^{(k)}_\mu\, X\, v_\nu^{(k)}-\frac{1}{2}\left\{v^{(k)}_\mu\,v^{(k)}_\nu\,,\,X\right\}\right)\ , \end{equation} with self-adjoint Lindblad operators \begin{equation} \label{Krausopa} v_{1,2,3}=\sigma_{1,2,3}\otimes\bold{1}\ ,\quad v_{4,5,6}=\bold{1}\otimes\sigma_{1,2,3}\ , \end{equation} and $6\times 6$ Kossakowski matrix $D$ given by \begin{equation} \label{Kossmat2} D=\begin{pmatrix}M&M\cr M&M\end{pmatrix}\ ,\quad M= \begin{pmatrix} 1&-i\epsilon&0\\ i\epsilon&1&0\\ 0&0&\xi \end{pmatrix}\ , \end{equation} where the conditions $\xi\ge0$ and $\epsilon=\tanh(\eta\beta/2)\leq 1$ guarantee $D\geq 0$.
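The positivity conditions quoted for the two Kossakowski matrices, $|\gamma|\leq\delta/2$ for \eqref{Kosmat} and $\xi\geq0$, $\epsilon\leq1$ for \eqref{Kossmat2}, are easily checked numerically (the parameter values below are purely illustrative):

```python
import numpy as np

def model1_D(delta, gamma):
    # 4x4 matrix of Model 1: delta on the diagonal, gamma coupling blocks.
    D = delta * np.eye(4)
    D[0:2, 2:4] = gamma
    D[2:4, 0:2] = gamma
    return D

def model2_D(xi, eps):
    # 6x6 matrix of Model 2: D = [[M, M], [M, M]] with the 3x3 block M.
    M = np.array([[1.0, -1j * eps, 0.0],
                  [1j * eps, 1.0, 0.0],
                  [0.0, 0.0, xi]])
    return np.block([[M, M], [M, M]])

delta = 1.0
# Model 1: positive semi-definite exactly up to |gamma| = delta/2, not beyond.
ok_inside = np.linalg.eigvalsh(model1_D(delta, 0.5 * delta)).min() >= -1e-12
ok_outside = np.linalg.eigvalsh(model1_D(delta, 0.6 * delta)).min() < 0
# Model 2: D >= 0 whenever xi >= 0 and eps <= 1.
ok_m2 = np.linalg.eigvalsh(model2_D(0.7, np.tanh(0.9))).min() >= -1e-12
```

Indeed, the eigenvalues of \eqref{Kosmat} are $\delta\pm2\gamma$ and $\delta$ (twice), while those of \eqref{Kossmat2} are twice the eigenvalues $1\pm\epsilon$, $\xi$ of $M$, together with $0$ (three times).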
Because of the symmetry of the Kossakowski matrix, each single-site contribution to the Lindblad generator can be recast in the simpler form: \begin{eqnarray} \label{Lind2a} \mathbb{D}^{(k)}_N[X]&=&\sum_{\mu,\nu=1}^{3}M_{\mu\nu}\left(w^{(k)}_\mu\,X\,w^{(k)}_\nu-\frac{1}{2}\left\{w^{(k)}_\mu\,w^{(k)}_\nu\,,\,X\right\}\right)\\ \nonumber &&\hskip-2cm =\frac{1}{2}\Big(\left[w^{(k)}_1\,,\,\left[X\,,\,w^{(k)}_1\right]\right]+ \left[w_2^{(k)}\,,\,\left[X\,,\,w^{(k)}_2\right]\right]+\xi\left[w_3^{(k)}\,,\left[X\,,\,w^{(k)}_3\right]\right]\Big)\\ \label{Lind2c} &-&i\frac{\epsilon}{2}\,\left\{w^{(k)}_1\,,\,\left[X\,,\,w^{(k)}_2\right]\right\}+i\frac{\epsilon}{2}\,\left\{w_2^{(k)}\,,\,\left[X\,,\,w^{(k)}_1\right]\right\} \end{eqnarray} with operators $w_\mu=\sigma_\mu\otimes\bold{1}+\bold{1}\otimes\sigma_{\mu}$ obeying \begin{eqnarray} \label{Pauli1} \left[w_j\,,\,w_k\right]&=&2i\epsilon_{jk\ell}\,w_\ell\\ \label{Pauli2} \left\{w_j\,,\,w_k\right\}&=&2\left(\sigma_j\otimes\sigma_k\,+\,\sigma_k\otimes\sigma_j\right)\,+\,4\,\delta_{jk}\,\bold{1}\otimes\bold{1}\ .
\end{eqnarray} In the Schr\"odinger picture, the local spin states $\rho_N$ evolve in time according to the dual generator $\mathbb{L}_N^{\star\phantom{|}}=\left(\mathbb{H}_N^{\,\star\phantom{|}}+\mathbb{D}_N^{\,\star\phantom{|}}\right)$ where \begin{eqnarray*} \hskip-1cm \mathbb{H}_N^{\,\star\phantom{|}}[\rho_N]&=&-i\eta\sum_{k=0}^{N-1}\left[w_3^{(k)}\,,\,\rho_N\right]\ ,\quad \mathbb{D}_N^{\,\star\phantom{|}}[\rho_N]=\sum_{k=0}^{N-1}\left(\mathbb{D}^{(k)}\right)^{\star\phantom{|}}[\rho_N]\ ,\\ \left(\mathbb{D}^{(k)}_N\right)^{\star\phantom{|}}[\rho_N]&=&\sum_{\mu,\nu=1}^{3}M_{\mu\nu}\left(w^{(k)}_\nu\,\rho_N\,w^{(k)}_\mu-\frac{1}{2}\left\{w^{(k)}_\mu\,w^{(k)}_\nu\,,\,\rho_N\right\}\right)\\ \hskip-1cm &=&\frac{1}{2}\sum_{\mu=1}^2\left[w^{(k)}_\mu,\left[\rho_N,w^{(k)}_\mu\right]\right]+\frac{\xi}{2}\left[w_3^{(k)},\left[\rho_N,w^{(k)}_3\right]\right]\\ &&\hskip-2cm +i\frac{\epsilon}{2}\,\left\{w^{(k)}_1,\left[\rho_N,w^{(k)}_2\right]\right\}-i\frac{\epsilon}{2}\,\left\{w_2^{(k)},\left[\rho_N,w^{(k)}_1\right]\right\}-2\epsilon\left\{w^{(k)}_3,\rho_N\right\}\ . \end{eqnarray*} In terms of the operators $w_\mu$, the microscopic state $\rho^{(\beta)}_N$ in \eqref{STATE} is the tensor product of $N$ density matrices of the form $$ \frac{1}{4\cosh^2(\frac{\eta\beta}{2})}\, \exp\left(-\frac{\eta\beta}{2} w_3\right)\ . $$ Expanding the exponential and using \eqref{Pauli2} with $j=k=3$ one gets: $$ \rho_N^{(\beta)}=\bigotimes_{k=0}^{N-1}\frac{1}{4}\left(\bold{1}\,-\,\epsilon\,w^{(k)}_3+\epsilon^2\sigma^{(k)}_3\otimes\sigma^{(k)}_3\right)\ ,\qquad \epsilon=\tanh\left(\frac{\beta\eta}{2}\right)\ . $$ By explicit computation one then checks that $\mathbb{L}_N^{\star\phantom{|}}\big[\rho^{(\beta)}_N\big]=0$, whence the microscopic local states are left invariant by the microscopic dissipative dynamics.
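The invariance $\mathbb{L}_N^{\star}\big[\rho^{(\beta)}_N\big]=0$ can also be confirmed numerically on a single site; the sketch below (ours, with illustrative values of $\eta$, $\beta$ and $\xi$) applies the dual generator built from the $3\times3$ block $M$ of \eqref{Kossmat2} to the single-site thermal state:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def w(s):
    # collective two-chain operator w = sigma (x) 1 + 1 (x) sigma
    return np.kron(s, I2) + np.kron(I2, s)

eta, beta, xi = 1.0, 2.0, 0.7                  # illustrative values
eps = np.tanh(eta * beta / 2)
M = np.array([[1, -1j * eps, 0],
              [1j * eps, 1, 0],
              [0, 0, xi]])
ws = [w(s1), w(s2), w(s3)]

# single-site state exp(-eta*beta*w3/2) / (4 cosh^2(eta*beta/2))
rho = np.diag(np.exp(-eta * beta / 2 * np.diag(w(s3)).real)).astype(complex)
rho /= np.trace(rho).real

# dual generator: Hamiltonian part plus GKS dissipator with matrix M
Lrho = -1j * eta * (ws[2] @ rho - rho @ ws[2])
for mu in range(3):
    for nu in range(3):
        A = ws[mu] @ ws[nu]
        Lrho += M[mu, nu] * (ws[nu] @ rho @ ws[mu] - 0.5 * (A @ rho + rho @ A))

assert np.abs(Lrho).max() < 1e-10              # rho is stationary
```

Note that the $\xi$-dependent term drops out automatically, since $\rho^{(\beta)}_N$ commutes with $w_3$.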
This fact is one of the two conditions for applying the results of the previous sections; the other condition is that the action of the local generator $\mathbb{L}_N$ maps into itself the linear span $\mathcal{X}$ of the elements $x_j\in\chi$ in \eqref{matrix},\eqref{matrixa}. This is verified in Appendix E. Finally, as for the first model, it is sufficient to explicitly write the generator of the quasi-free mesoscopic semigroup emerging from the above microscopic dissipative dynamics in the language of creation and annihilation operators: \begin{proposition} \label{propo3} In terms of annihilation and creation operators $a^\#_i$, $i=1,2,3,4$, the mesoscopic Lindblad generator reads $\mathbb{L}=\mathbb{H}\,+\,\mathbb{D}$, where the action of $\mathbb{H}$ and $\mathbb{D}$ on displacement operators $D(z)$ is as in (\ref{LINDBOS0}) and (\ref{LINDBOS}), where the Kossakowski matrix now reads \begin{eqnarray} \label{gen20} K_\beta&=&\frac{2}{\epsilon}\begin{pmatrix}(1+\epsilon)M_\beta&0&(1+\epsilon)N_\beta&0 \cr 0&(1-\epsilon)M_\beta&0&(1-\epsilon)N_\beta\cr (1+\epsilon)N_\beta&0&(1+\epsilon)M_\beta&0\cr 0&(1-\epsilon)N_\beta&0&(1-\epsilon)M_\beta\end{pmatrix}\\ \label{gen21} M_\beta&=&\,\begin{pmatrix} 1+\xi&0\cr 0&3+\xi\end{pmatrix}\ ,\quad N_\beta=\begin{pmatrix} \epsilon^2&-\epsilon c\cr -\epsilon c&1+c^2 \end{pmatrix}\ , \end{eqnarray} again with $\epsilon=\tanh(\eta\beta/2)$, $c=\sqrt{1-\epsilon^2}$. \end{proposition} \noindent The proof is very similar to the one discussed for the previous model and it is based on {\sl Corollary 1} and the results of Appendix E. Though the details are different, the structure of the Kossakowski matrix is similar to the one in Model 1, so that again the Hamiltonian contribution $\mathbb{H}$ to the mesoscopic Lindblad generator commutes with the dissipative one.
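Complete positivity at the mesoscopic level requires $K_\beta\geq0$; the following sketch (ours, with sample parameter values) assembles \eqref{gen20}-\eqref{gen21} and checks its spectrum:

```python
import numpy as np

eps, xi = np.tanh(0.8), 0.4                    # illustrative values
c = np.sqrt(1 - eps ** 2)
Mb = np.diag([1 + xi, 3 + xi])                 # M_beta of Eq. (gen21)
Nb = np.array([[eps ** 2, -eps * c],
               [-eps * c, 1 + c ** 2]])        # N_beta of Eq. (gen21)
Z = np.zeros((2, 2))

# K_beta of Eq. (gen20)
K = (2 / eps) * np.block([
    [(1 + eps) * Mb, Z, (1 + eps) * Nb, Z],
    [Z, (1 - eps) * Mb, Z, (1 - eps) * Nb],
    [(1 + eps) * Nb, Z, (1 + eps) * Mb, Z],
    [Z, (1 - eps) * Nb, Z, (1 - eps) * Mb]])

assert np.linalg.eigvalsh(K).min() > -1e-12    # K_beta >= 0
```

Positivity holds for all $\xi\ge0$, $\epsilon\le1$: after a permutation, the spectrum of $K_\beta$ reduces to that of $\frac{2}{\epsilon}(1\pm\epsilon)\left(M_\beta\pm N_\beta\right)$, and one checks $\det(M_\beta-N_\beta)=(1+\xi)^2-\epsilon^2\geq0$.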
Moreover, also in this case, the off-diagonal elements of the Kossakowski matrix statistically couple the mesoscopic operators $a^\#_{1,3}$, $a^\#_{2,4}$ referring to different chains. \section{Environment induced mesoscopic entanglement} Given the results of the previous Section, one can now study whether the mesoscopic dissipative time-evolutions in Models 1 and 2 can give rise to mesoscopic entanglement between the two independent chains, and, if so, analyze the fate of the generated entanglement in the course of time. \subsection{Entanglement Dynamics: Model 1} \label{EDPTM} In this case the entanglement criterion (\ref{crit}) can be studied analytically: we will show that the two spin chains can indeed become mesoscopically entangled, and relate the behaviour of these bath-induced quantum correlations to the squeezing parameters, the parameter $\gamma$ and the temperature associated with the initial microscopic state. For the sake of simplicity, we shall further set $\delta=J_0=\eta=1$, since these parameters do not play any role in the discussion that follows. The behaviour in time of the logarithmic negativity $E$, introduced in (\ref{entmeas1}), is shown in Fig.\ref{GAMMA} for different values of the dissipative parameter $\gamma$ appearing in the Kossakowski matrix and fixed initial temperature $T$. Both a ``symmetrically squeezed'' initial state, with $r_1=r_3=r$, and a ``one-mode squeezed'' one, with $r_1=r$, $r_3=0$, have been studied; however, since similar results hold for both cases, only the graphs for the symmetrically squeezed case will be shown. From the behaviour of $E$, one clearly sees that the two infinite spin chains get entangled by the dynamics. Since the Hamiltonian does not contain coupling terms, this entanglement is due solely to the mixing effects of the environment within which the two spin chains are embedded.
Moreover, the amount of created entanglement increases as the dissipative parameter $\gamma$ gets larger, and non-zero entanglement appears earlier in time. \begin{figure}[t] \center\includegraphics[scale=0.65]{Fig1.pdf} \caption{\small Model 1: behaviour in time of the logarithmic negativity $E$ for different values of $\gamma$ at fixed temperature $T=0.1$, for a symmetrically squeezed initial state with $r_1=r_3=r=1$.} \label{GAMMA} \end{figure} \begin{figure}[h!] \center\includegraphics[scale=0.65]{Fig2.pdf} \caption{\small Model 1: behaviour in time of the logarithmic negativity $E$ for different values of the squeezing parameter $r=r_1=r_3$, at fixed temperature $T=0.1$ and dissipative parameter $\gamma=1/2$.} \label{SQUEEZE} \end{figure}\ \begin{figure}[h!] \center\includegraphics[scale=0.65]{Fig3.pdf} \caption{\small Model 1: behaviour in time of the logarithmic negativity $E$ for different values of the temperature $T$, at fixed dissipative parameter $\gamma=1/2$ and squeezing $r_1=r_3=r=1$.} \label{TEMP} \end{figure} The amount of squeezing also plays an essential role: while a non-vanishing squeezing appears necessary to create quantum correlations, too much squeezing decreases the maximum value of $E$. Squeezing also influences the time at which entanglement is first generated. Further, for fixed $T$ and $\gamma$, there is a value of the squeezing parameter $r$ allowing for a maximal value of $E$. All this is explicitly shown in Fig.\ref{SQUEEZE}. Finally, the effect of the temperature is displayed in Fig.\ref{TEMP}, for fixed dissipative and squeezing parameters. One sees that, as the temperature increases, the maximum of the logarithmic negativity $E$ decreases, indicating that there exists a critical temperature $T_C$, above which no entanglement is possible. This result can be traced back to the behaviour of the quantity $S$ appearing in the separability criterion in (\ref{crit}).
In Appendix F, this quantity has been explicitly computed both for the case of a symmetrically squeezed initial state, see (\ref{critsas0}), and one-mode squeezed initial state, see (\ref{critsas}). For large temperatures, the parameter $\epsilon$ becomes small, so that all terms but those proportional to $1/\epsilon^4$ can be neglected, obtaining in the two cases: \begin{eqnarray*} && S_{S}(t)\sim\frac{1}{16\epsilon^4}\left(1+8\sinh^2(r)\left( y_1(t)-y_1^2(t)\right)\right)\ ,\\ && S_A(t)\sim\frac{1}{16\epsilon^4}\left(1+4\sinh^2(r)\left(y_1(t)-y_1^2(t)\right)\right)\ , \end{eqnarray*} where $y_1(t)$ is given in (\ref{y1}) of Appendix F. Notice that since $y_1(t)<1$ for $t>0$, these two quantities are always positive; therefore, there must be a finite ``critical temperature'' $T_C$ beyond which entanglement is no longer present. \begin{figure}[h!] \center \subfigure{\includegraphics[scale=0.35]{Fig4-1.pdf}}\qquad \subfigure{\includegraphics[scale=0.35]{Fig4-2.pdf}} \caption{\small Model 1: entanglement phase diagrams for the symmetrically squeezed state $r=r_1=r_3$ (left) and one-mode squeezed state $r=r_1$, $r_3=0$ (right), with $\gamma=1/2$; the line separating the two regions gives the behaviour of the critical temperature $T_C$ as a function of $r$.} \label{phase} \end{figure} This result is further illustrated by Fig.4, where the points in the $(r,T)$ plane with non-vanishing mesoscopic entanglement are highlighted. These figures show two regions, the dark ones associated with a non-vanishing maximal value of $E$, the brighter ones with vanishing maximal value of $E$ and therefore no entanglement. The line separating the two regions determines the ``critical temperature'' $T_C$, above which entanglement among the two chains is not possible, as a function of the squeezing parameter; it is defined implicitly by the condition ${\rm max}\big(E(r,T)\big)=0$, where the maximization is over all times. 
\subsection{Entanglement sudden birth and sudden death} \label{suddendeath} The time behaviour of the logarithmic negativity $E$ reported in Figs.~\ref{GAMMA}, \ref{SQUEEZE} and \ref{TEMP} shows the phenomena of the so-called ``sudden birth'' and ``sudden death'' of entanglement \cite{Eberly}, {\it i.e.} the sudden generation of entanglement only after a finite time from the start of the dynamics, and its abrupt vanishing at a later, finite time. These two effects can be analyzed in detail as a function of the temperature $T$ of the initial state. Let us first consider the phenomenon of sudden death and accordingly look at the large $t$ behaviour of the evolved initial Gaussian state. As discussed before, the asymptotic state of the dynamics generated by (\ref{LINDBOS0}) and (\ref{LINDBOS}) is thermal, with a reduced covariance in the modes $a_1$, $a_3$ given by (see Appendix F): $$ \widetilde G_{red}^\infty\equiv\lim_{t\to\infty}\widetilde G_{red}(t)=\frac{1}{2\epsilon}\bold{1}_{4}\ . $$ Positivity of the asymptotic state requires ({\it cf.}~(\ref{g-positivity})): \begin{equation} \label{ineq} \widetilde G_{red}^\infty+\frac{i}{2}\tilde\sigma\geq0\ ,\quad \tilde\sigma=-i\begin{pmatrix} \sigma_3 & 0\\ 0 & \sigma_3 \end{pmatrix}\ , \end{equation} where $\tilde\sigma$ is the symplectic matrix in the reduced $a_1$, $a_3$ representation. This condition also ensures the positivity of the partially transposed state, since $\widetilde G_{red}^\infty$ is left invariant by this transformation. In fact, the large time asymptotic limit of the lowest eigenvalue $\lambda_{min}(t)$ of the matrix $\displaystyle \widetilde G_{red}(t)+\frac{i}{2}\tilde\sigma$ is given by $\displaystyle\lambda_{min}^\infty=\frac{1-\epsilon}{2\epsilon}$, which is always strictly positive, except at zero temperature ($\epsilon=1$) when it vanishes.
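The value of $\lambda_{min}^\infty$ quoted above follows immediately since $\widetilde G_{red}^\infty+\frac{i}{2}\tilde\sigma$ is diagonal; a minimal numerical check (ours, over a few sample values of $\epsilon$):

```python
import numpy as np

s3 = np.diag([1.0, -1.0])

for eps in (0.2, 0.6, 1.0):                  # eps = tanh(eta*beta/2)
    G_inf = np.eye(4) / (2 * eps)            # asymptotic reduced covariance
    sigma_t = -1j * np.kron(np.eye(2), s3)   # symplectic matrix of Eq. (ineq)
    lam = np.linalg.eigvalsh(G_inf + 0.5j * sigma_t).min()
    assert np.isclose(lam, (1 - eps) / (2 * eps))
```

At $\epsilon=1$ (zero temperature) the lowest eigenvalue vanishes, consistently with the discussion above.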
Therefore, when $T>0$, the bath-generated entanglement must always vanish at a finite time, since $\lambda_{min}(t)$, from being negative, eventually becomes strictly positive as $t\to\infty$. Only at $T=0$ may the created entanglement vanish asymptotically. In order to study the phenomenon of sudden birth of entanglement, one has to analyze the behaviour of the logarithmic negativity $E$ in a right neighborhood of $t=0$. Let us consider first the case of the symmetrically squeezed initial state. Using (\ref{critsas0}) in Appendix~F, one checks that $$ \lim_{t\to{0^+}}S_{S}(t)=\frac{(1-\epsilon^2)^2}{16\epsilon^4}\ge0\ . $$ This result already shows that only at zero temperature ($\epsilon=1$) can entanglement be generated as soon as the dynamics starts. In fact, at $T=0$ one has: \begin{equation} S_{S}^{T=0}(t)=\sinh^4(r)\Big(e^{-8t}-2e^{-6t}\cosh(2\gamma t)+e^{-4t}\Big)-e^{-4t}\sinh^2(2\gamma t)\sinh^2(r)\ . \label{T0} \end{equation} Since its first derivative with respect to $t$ vanishes at $t=0$, one needs to study the behaviour of its second derivative: $$ \frac{d^2}{dt^2}S_{S}^{T=0}(t)\Big|_{t=0}=8\big[\sinh^4(r)(1-\gamma^2)-\sinh^2(r)\gamma^2\big]\ . $$ Since $S_{S}^{T=0}(0)=0$, there can be entanglement generation as soon as $t>0$ only if this quantity is negative, {\it i.e.} only when $\sinh^2(r)<\gamma^2/(1-\gamma^2)$. In the opposite case, as well as for $T>0$, entanglement generation can occur only through the sudden creation phenomenon. Similarly, in the case of a single mode squeezed initial state, $r_1=r$, $r_3=0$, from (\ref{critsas}) of Appendix F, we have: $$ \lim_{t\to{0^+}}S_{A}(t)=\frac{(1-\epsilon^2)^2}{16\epsilon^4}\ge0\ . $$ Therefore, also in this case, the system may become entangled as soon as $t>0$ only at zero temperature.
Indeed, one has \begin{equation} S_{A}^{T=0}(t)=-\sinh^2(r)\frac{e^{-4t}\sinh^2(2\gamma t)}{16}\ , \end{equation} which is always negative, vanishing only at $t=0$, so that indeed entanglement is created as soon as $t>0$. On the other hand, the phenomenon of sudden creation of entanglement always occurs for $T>0$. Concerning the behaviour of the critical temperature $T_C$ for large squeezing parameter~$r$, the graph on the left of Fig.~\ref{phase} suggests a vanishing value for $T_C$, while that on the right a constant value, independent of $r$. Indeed, in the first case, recalling the result (\ref{T0}) above, one sees that for $T=0$ and $\gamma=1/2$, {\it i.e.} the largest admissible value for the dissipative parameter $\gamma$, one gets for large $r$: \begin{equation} S_{S}^{T=0}(t) \simeq e^{4(r-t)}\Big(1-e^{-3t}\Big) \Big(1-e^{-t}\Big)\ , \end{equation} which is always non-negative. This means that in the limit $r\to\infty$, no entanglement is created at any time when $T=0$. The critical temperature $T_C$ must therefore approach zero in the same limit. Instead, in the other case one finds that for a large squeezing parameter: \begin{equation} S_{A}(t)\simeq e^{2r}\, g(t,T)\ , \end{equation} where $g(t,T)$ is the function multiplying $\sinh^2(r)$ in (\ref{critsas}). One can show that this function takes negative values for some $t$, {\it i.e.} entanglement is generated, only for temperatures below a certain fixed value $\bar T$, which can be computed only numerically. As shown by the graph on the right of Fig.~\ref{phase}, the critical temperature is thus always non-vanishing, reaching the asymptotic value $\bar T$ for large squeezing.
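The zero-temperature sudden-birth condition $\sinh^2(r)<\gamma^2/(1-\gamma^2)$ obtained above for the symmetrically squeezed state can be tested directly from \eqref{T0}; in the sketch below (ours) the second derivative at $t=0$ is recovered by finite differences:

```python
import numpy as np

def S_sym_T0(t, r, gamma):
    """Zero-temperature quantity S_S of Eq. (T0)."""
    sh2 = np.sinh(r) ** 2
    return (sh2 ** 2 * (np.exp(-8 * t) - 2 * np.exp(-6 * t) * np.cosh(2 * gamma * t)
                        + np.exp(-4 * t))
            - np.exp(-4 * t) * np.sinh(2 * gamma * t) ** 2 * sh2)

gamma, h = 0.5, 1e-4
for r in (0.2, 1.0):
    # central second difference at t=0 (S and its first derivative vanish there)
    num = (S_sym_T0(h, r, gamma) - 2 * S_sym_T0(0.0, r, gamma)
           + S_sym_T0(-h, r, gamma)) / h ** 2
    exact = 8 * (np.sinh(r) ** 4 * (1 - gamma ** 2) - np.sinh(r) ** 2 * gamma ** 2)
    assert np.isclose(num, exact, rtol=1e-3)
    # negativity right after t=0 iff sinh^2(r) < gamma^2/(1-gamma^2)
    assert (S_sym_T0(h, r, gamma) < 0) == (np.sinh(r) ** 2 < gamma ** 2 / (1 - gamma ** 2))
```

For $\gamma=1/2$ the threshold is $\sinh^2(r)=1/3$, so $r=0.2$ produces immediate entanglement while $r=1$ does not.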
\subsection{Entanglement Dynamics: Model 2} While in Model 1 the microscopic dynamics is generated by a Lindblad term involving contributions from both chains and also from different sites, the dissipative generator (\ref{modD2a}) of Model 2 contains only single-chain Lindblad operators, without any statistical coupling between different sites. \begin{figure}[hbtp] \center\includegraphics[scale=0.57]{Fig5.pdf} \caption{\small Model 2: behaviour in time of the logarithmic negativity $E$ for different values of the dissipative parameter $\xi$, at fixed temperature $T=0.1$ and squeezing $r=r_1=r_3=1$.} \label{UNDYN1} \end{figure} \begin{figure}[hbtp] \center\includegraphics[scale=0.57]{Fig6.pdf} \caption{\small Model 2: behaviour in time of the logarithmic negativity $E$ for different values of the temperature $T$, for $\xi=1/2$ and squeezing $r=r_1=r_3=1$.} \label{UNDYN2} \end{figure} This model is the many-body generalization of a two-qubit system studied in \cite{Benatti3}, where entanglement between the two qubits was shown to occur through a purely mixing mechanism induced by the presence of off-diagonal contributions of the form $(\sigma_\mu\otimes\bold{1})\,X\,(\bold{1}\otimes\sigma_\nu)$ in the dissipative generator. In fact, the entangling power of the model depends entirely on the strength of the statistical coupling of the otherwise independent qubits. Similarly, in Model 2, mesoscopic entanglement can be dissipatively generated between the two chains in the large $N$ limit. Unfortunately, in this case manageable analytic expressions for the logarithmic negativity are not available, so that the behaviour of $E$ can be studied only numerically. For simplicity, in the following discussion we have further set $\eta=1$, since this parameter can be reabsorbed into a redefinition of the temperature.
As in Model 1, some initial squeezing is necessary in order for the dynamics to generate entanglement; further, the amount of created entanglement decreases as the dissipative parameter $\xi$ entering the Kossakowski matrix (\ref{Kossmat2}) gets larger. This is explicitly shown by the behaviour of the graphs in Figs.~\ref{UNDYN1} and \ref{UNDYN2}, where the phenomena of sudden birth and sudden death of entanglement are also visible as in Model~1. These graphs (and the ones below) refer to the choice of a symmetrically squeezed initial state; similar results hold also in the case of one-mode squeezed initial states. The dependence on the initial state temperature $T$ is instead depicted in Fig.7, for fixed $\xi$ and squeezing parameter. Also in this case, one sees that, as the temperature increases, the maximum of the logarithmic negativity $E$ decreases, indicating that there exists a critical temperature $T_C$, above which no entanglement is possible; the behaviour of $T_C$ as a function of the squeezing parameter $r$ is given by phase diagrams very similar to those in Fig.~\ref{phase}. \begin{figure}[hbtp] \center\includegraphics[scale=0.6]{Fig7.pdf} \caption{\small Model 2: behaviour in time of the logarithmic negativity $E$ for different values of the temperature $T$, for $\xi=1$ and squeezing $r=r_1=r_3=1$.} \end{figure} However, unlike in Model 1, asymptotic entanglement is now possible. Indeed, setting the parameter $\xi=0$ and decreasing the initial temperature $T$, one sees that the two chains not only get mesoscopically entangled at finite times, but also that, remarkably, the generated mesoscopic entanglement persists for longer times. This behaviour is clearly shown by the plots in Fig.8, where the time behaviour of the logarithmic negativity is reported for a symmetrically squeezed initial state: in the case of zero temperature, one sees that the generated mesoscopic entanglement persists for arbitrarily long times.
\begin{figure}[hbtp] \center \includegraphics[scale=0.5]{Fig8.pdf} \caption{\small Model 2: behaviour in time of the logarithmic negativity $E$ for different values of the temperature $T$, for $\xi=0$ and squeezing $r=r_1=r_3=1$.} \end{figure} \section{Outlook} When dealing with many-body systems, {\it i.e.} systems with a very large number $N$ of elementary constituents, accessible observables are global, collective ones, involving the degrees of freedom of all their parts. Typical examples are the mean-field observables, defined as the algebraic mean of single particle observables, as in the case of the mean magnetization for spin systems. These quantities scale as $1/N$ and can be seen to behave as classical observables in the thermodynamical limit, {\it i.e.} as the number of constituents becomes very large. Similarly, fluctuation operators, defined in analogy with classical stochastic theory as deviations from the mean, form another class of collective operators; however, they scale as $1/\sqrt{N}$, and, because of this, they retain some quantum properties as $N$ increases. Indeed, irrespective of the nature of the microscopic many-body system, the algebra they form turns out to be non-commutative and always of bosonic type: they can be used to probe at the mesoscopic level the quantum properties of the system. We have studied the quantum dynamics of the fluctuation operators in a many-body system composed of two non-interacting spin-1/2 chains, immersed in a common, weakly coupled external environment. The system behaves as an open quantum system, so that noise and dissipation are expected to occur. Nevertheless, even in the thermodynamical limit, these phenomena are not able to spoil the quantum character of suitably chosen two-chain fluctuation operators.
Actually, despite the decohering and mixing-enhancing effects usually induced by the presence of the environment, the two chains can get entangled by the emergent, open mesoscopic dynamics, through a purely dissipative mechanism. We have studied in detail the fate of the generated entanglement in the course of time, and its dependence on the strength of the coupling with the environment and on the temperature of the starting microscopic many-body state: despite its inevitable dissipative action, the environment can nevertheless sustain non-vanishing quantum correlations between the two chains even for very large times, provided the temperature of the initial state is sufficiently low. The mechanism of environment-induced entanglement generation has been previously known only for systems involving few qubits or oscillator modes; our discussion shows that this phenomenon is at work also in the case of many-body systems, provided suitable mesoscopic observables are considered. This result is general and can find direct applications in all instances where mesoscopic, coherent quantum behaviours are expected to emerge, {\it e.g.} in experiments involving spin-like and optomechanical systems, or ultra-cold gases trapped in optical lattices: the possibility of entangling these many-body systems through a purely mixing mechanism may reinforce their use for the actual realization of quantum information and communication protocols. \section{Appendix A} The relation \eqref{macro1} can be proved as follows: because of definition \eqref{macro}, it is equivalent to $$ \lim_{N\to\infty}\omega\bigg(a^\dag\,\big(X_N-\omega(x)\big)\big(Y_N-\omega(y)\big)\,b\bigg)=0 $$ for all $a,b\in{\cal A}$.
Set $$ \widetilde{X}_N=\frac{1}{N}\sum_{k=0}^{N-1}\underbrace{\bigg(x^{(k)}-\omega(x)\bigg)}_{\widetilde{x}^{(k)}}\ ,\quad \widetilde{Y}_N=\frac{1}{N}\sum_{k=0}^{N-1}\underbrace{\bigg(y^{(k)}-\omega(y)\bigg)}_{\widetilde{y}^{(k)}}\ , $$ so that $\omega(\widetilde{x}^{(k)})=\omega(\widetilde{x})=0$, $\omega\left(\widetilde{X}_N\right)=0$ and similarly for $\widetilde{y}$, $\widetilde{Y}_N$. Then, as shown in the main text for a single variable, the quasi-locality of $a,b$ and the clustering properties of the state yield: $$ \lim_{N\to\infty}\omega\bigg(a^\dag\,\big(X_N-\omega(x)\big)\big(Y_N-\omega(y)\big)\,b\bigg)=\omega(a^\dag b)\lim_{N\to\infty}\omega\bigg(\widetilde{X}_N\widetilde{Y}_N\bigg)\ . $$ Further, one can write: $$ \omega\bigg(\widetilde{X}_N\widetilde{Y}_N\bigg)=\frac{1}{N^2}\sum_{k=0}^{N-1} \omega\bigg(\widetilde{x}^{(k)}\widetilde{y}^{(k)}\bigg)\,+\, \frac{1}{N^2}\sum_{k\neq\ell=0}^{N-1} \omega\bigg(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\bigg)\ . $$ Since $\omega$ is translation-invariant, the first term vanishes as $\omega\big(\widetilde{x}\widetilde{y}\big)/N$ when $N\to\infty$. Moreover, thanks to the clustering property (\ref{clustates}), for any small $\epsilon>0$ there exists an integer $N_\epsilon$ such that for $|k-\ell|>N_\epsilon$ one has: $$ \left|\omega\big(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\big)-\omega(\widetilde{x})\,\omega(\widetilde{y})\right|= \left|\omega\big(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\big)\right|\leq\epsilon\ .
$$ Then, using this result, one can finally write: \begin{eqnarray*} \left|\frac{1}{N^2}\sum_{k\neq\ell=0}^{N-1} \omega\bigg(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\bigg)\right|&\leq& \frac{1}{N^2}\sum_{0<|k-\ell|\leq N_\epsilon}\left|\omega\bigg(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\bigg)\right|\\ &+&\frac{1}{N^2}\sum_{|k-\ell|> N_\epsilon}\left|\omega\bigg(\widetilde{x}^{(k)}\widetilde{y}^{(\ell)}\bigg)\right|\\ &\leq&4\frac{2N_\epsilon+1}{N}\,\|x\|\,\|y\|\,+\,\epsilon\ , \end{eqnarray*} so that, in the large $N$ limit, the relation \eqref{macro1} is indeed satisfied. Notice that \eqref{macro1} entails that, in the GNS representation, \begin{eqnarray} \nonumber && \lim_{N\to\infty}\omega\bigg(a^\dag \big(X_N-\omega(x)\big)^\dag\,\big(X_N-\omega(x)\big)\,a\bigg)=\\ \label{macro2} &&\hskip 2cm =\lim_{N\to\infty}\left\|\pi_\omega\big(X_N-\omega(x)\big)\vert\Psi_a\rangle\right\|^2=0\ , \end{eqnarray} for all $a\in{\cal A}$. Namely, mean-field spin observables converge to their expectations with respect to $\omega$ in the strong operator topology on the GNS Hilbert space $\mathbb{H}_\omega$. \section{Appendix B} In this Appendix we collect the explicit expressions of various matrices that have been used in the main text; these results are obtained from the corresponding multiple tensor product expressions by multiplying each matrix by the entries of the matrix that precedes it. The correlation matrix $C^{(\beta)}$ in \eqref{modcorrmat2} then reads: \begin{equation} C^{(\beta)}=\left(\bold{1}-\epsilon\,\sigma_1\right)\otimes\bold{1}\otimes\left(\bold{1}+ \epsilon\,\sigma_2\right)\\ =\begin{pmatrix} \phantom{-}{\cal C}_\epsilon &-\epsilon\, {\cal C}_\epsilon\\ -\epsilon\, {\cal C}_\epsilon & \phantom{-}{\cal C}_\epsilon \end{pmatrix}\ , \label{COMM2} \end{equation} with $$ {\cal C}_\epsilon= \begin{pmatrix} 1&-i\epsilon&0&0\\ i\epsilon&1&0&0\\ 0&0&1&-i\epsilon\\ 0&0&i\epsilon&1 \end{pmatrix}\ .
$$ The symplectic matrix in \eqref{COMM1} and its inverse in \eqref{invCOMM1} are represented by: \begin{eqnarray} &&\sigma^{(\beta)}=2i\epsilon(\epsilon\sigma_1-\bold{1})\otimes\bold{1}\otimes\sigma_2= 2\epsilon\begin{pmatrix} \phantom{-}{\cal S} & -\epsilon\, {\cal S}\\ -\epsilon\, {\cal S} & \phantom{-}{\cal S} \end{pmatrix}\ ,\\ &&\big(\sigma^{(\beta)}\big)^{-1}=\frac{i}{2\epsilon\,c^2}(\bold{1}+\epsilon\sigma_1)\otimes\bold{1}\otimes\sigma_2 =-\frac{1}{2\epsilon\,c^2} \begin{pmatrix} {\cal S} & \epsilon\, {\cal S}\\ \epsilon\, {\cal S} & {\cal S} \end{pmatrix}\ , \end{eqnarray} where \begin{equation} {\cal S}= \begin{pmatrix} 0&-1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0 \end{pmatrix}\ , \label{S} \end{equation} and $c=\sqrt{1-\epsilon^2}$, while the covariance matrix in \eqref{covmat2} is given by: \begin{equation} \Sigma^{(\beta)}=(1-\epsilon\sigma_1)\otimes {\bf 1}\otimes{\bf 1}= \begin{pmatrix} \phantom{-}{\bf 1}_4 & -\epsilon\, {\bf 1}_4\\ -\epsilon\, {\bf 1}_4 & \phantom{-}{\bf 1}_4 \end{pmatrix}\ , \end{equation} with ${\bf 1}_4$ the unit matrix in four dimensions. Furthermore, the matrix $\mathcal{M}$ in \eqref{matrix1} reads \begin{equation} \label{matrix1.1} \mathcal{M}=\sqrt{\epsilon}\, \begin{pmatrix} {\cal K} & {\cal K}^*\\ {\cal Q}^* & {\cal Q} \end{pmatrix}\ , \end{equation} with $$ {\cal K}= \begin{pmatrix} 1&0&0&0\\ i&0&0&0\\ 0&0&1&0\\ 0&0&i&0\\ \end{pmatrix}\ ,\qquad {\cal Q}= \begin{pmatrix} -\epsilon&c&0&0\\ i\epsilon&-ic&0&0\\ 0&0&-\epsilon&c\\ 0&0&i\epsilon&-ic\\ \end{pmatrix}\ , $$ while its inverse is given by \begin{equation} \label{matrix2.1} \mathcal{M}^{-1}=\frac{1}{2c\sqrt{\epsilon}}\, \begin{pmatrix} {\cal W} & {\cal Z}^*\\ {\cal W}^* & {\cal Z} \end{pmatrix}\ , \end{equation} with $$ {\cal W}= \begin{pmatrix} c&-ic&0&0\\ \epsilon&-i\epsilon&0&0\\ 0&0&c&-ic\\ 0&0&\epsilon&-i\epsilon\\ \end{pmatrix}\ ,\qquad {\cal Z}= \begin{pmatrix} 0&0&0&0\\ 1&i&0&0\\ 0&0&0&0\\ 0&0&1&i\\ \end{pmatrix}\ .
$$ Finally, the matrix $\mathcal{P}$ in \eqref{lemm2} is explicitly given by \begin{equation} \label{lastmat} \mathcal{P}= \begin{pmatrix} {\cal P}_{11} & {\cal P}_{12}\\ {\cal P}_{21} & {\cal P}_{22}\\ \end{pmatrix}\ , \end{equation} with $$ {\cal P}_{11}= \begin{pmatrix} 1&0&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}\ ,\qquad {\cal P}_{12}= \begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&1&0\\ \end{pmatrix}\ , $$ $$ {\cal P}_{21}= \begin{pmatrix} 0&1&0&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ \end{pmatrix}\ ,\qquad {\cal P}_{22}= \begin{pmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&1&0&0\\ 0&0&0&1 \end{pmatrix}\ . $$ \section{Appendix C} We shall prove that, given a time-dependent Hermitian matrix $M_t$ and its exponential $\displaystyle N_t={\rm e}^{iM_t}$, then \begin{equation} \label{appb1} \dot{N_t}:=\frac{{\rm d}N_t}{{\rm d}t}=O_t\,N_t\ ,\quad O_t:=\sum_{k=1}^\infty\frac{i^k}{k!}\mathbb{K}^{k-1}_{M_t}(\dot{M}_t)\ , \end{equation} where $$ \mathbb{K}^n_A(B):=\Big[A\,,\,\mathbb{K}^{n-1}_A(B)\Big]\ ,\quad \mathbb{K}^0_A(B)=B\ . $$ Indeed, given matrices $A$ and $B$, one has $$ {\rm e}^{iA}\,B\,{\rm e}^{-iA}=\sum_{n=0}^\infty\frac{i^n}{n!}\underbrace{\Big[A\,,\Big[A\,,\cdots\Big[A\,,}_{n\ {\rm times}}B\Big]\cdots\Big]\Big]=\sum_{n=0}^\infty\frac{i^n}{n!}\,\mathbb{K}^n_A(B)\ . $$ Then, $[N_t\,,\,M_t]=0$ and $N_tN_t^\dag=N_t^\dag N_t=1$ imply $N_tM_tN^\dag_t=M_t$ and $\displaystyle \dot{N}_t\,N^\dag_t=-N_t\,\dot{N}^\dag_t$. Therefore, $$ N_t\,\dot{M}_t\,N^\dag_t\,-\,\dot{M}_t\,=-\,\dot{N}_t\,M_t\,N^\dag_t\,-\,N_t\,M_t\,\dot{N}^\dag_t=\Big[M_t\,,\,\dot{N}_t\Big]\,N_t^\dag\ . $$ Furthermore, since, for $n\geq 1$, $\displaystyle \mathbb{K}_A^n[B]=\Big[A\,,\,\mathbb{K}^{n-1}_A[B]\Big]$, it follows that $$ \hskip -.5cm N_t\,\dot{M}_t\,N^\dag_t-\dot{M}_t=\sum_{n=1}^\infty\frac{i^n}{n!}\mathbb{K}^n_{M_t}[\dot{M}_t]=\Big[M_t\,,\,O_t\Big]=\Big[M_t\,,\,\dot{N}_t\Big]\,N_t^\dag\ , $$ where $O_t=\sum_{k=1}^\infty\frac{i^k}{k!}\mathbb{K}_{M_t}^{k-1}[\dot{M}_t]$.
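As an aside, relation \eqref{appb1} lends itself to a direct numerical check; the sketch below (ours, with a randomly generated Hermitian family $M_t=A+tB$, so that $\dot M_t=B$) compares a finite-difference derivative of $N_t$ with the truncated series for $O_t$:

```python
import math
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Y = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = 0.1 * (X + X.conj().T)          # Hermitian
B = 0.1 * (Y + Y.conj().T)          # Hermitian

def M(t):                            # M_t = A + t*B, hence dM_t/dt = B
    return A + t * B

def N(t):                            # N_t = exp(i M_t) via spectral decomposition
    lam, U = np.linalg.eigh(M(t))
    return (U * np.exp(1j * lam)) @ U.conj().T

t0, h = 0.3, 1e-6
dN = (N(t0 + h) - N(t0 - h)) / (2 * h)   # finite-difference derivative

# O_t = sum_{k>=1} (i^k/k!) K^{k-1}_{M_t}(dM_t/dt), truncated
O = np.zeros((4, 4), dtype=complex)
term = B.astype(complex)                 # K^0_{M_t}(dM_t/dt) = dM_t/dt
for k in range(1, 25):
    O += 1j ** k / math.factorial(k) * term
    term = M(t0) @ term - term @ M(t0)   # next nested commutator

assert np.allclose(dN, O @ N(t0), atol=1e-8)
```

The small norm of $A$, $B$ is chosen only so that the truncated commutator series converges rapidly.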
Then, using again that $[N_t\,,\,M_t]=0$, one obtains $$ \Big[M_t\,,\,O_t\,N_t\Big]=\Big[M_t\,,\,\dot{N}_t\Big]\ . $$ In order to show that $\dot{N}_t=O_tN_t$, consider the orthogonal eigenvectors $\vert m_a(t)\rangle$ of $M_t$ with eigenvalues $m_a(t)$. Then, if $m_a(t)\neq m_b(t)$, the previous equality yields $$ \langle m_a(t)\vert O_tN_t\vert m_b(t)\rangle=\langle m_a(t)\vert\dot{N}_t\vert m_b(t)\rangle\ . $$ On the other hand, if $\vert m_a(t)\rangle$ and $\vert m_b(t)\rangle$ correspond to the same (real) eigenvalue $m(t)$, then one uses that $$ 0=\frac{{\rm d}}{{\rm d}t}\Big(\langle m_a(t)\vert m_b(t)\rangle\Big)=\langle \dot{m}_a(t)\vert m_b(t)\rangle\,+\, \langle m_a(t)\vert\dot{m}_b(t)\rangle\ , $$ to deduce that also in such a case \begin{eqnarray*} \langle m_a(t)\vert O_t\,N_t\vert m_b(t)\rangle&=&i\,\langle m_a(t)\vert \dot{M}_t\vert m_b(t)\rangle\, {\rm e}^{im(t)}\, \delta_{ab} =i\dot{m}(t)\,{\rm e}^{im(t)}\,\delta_{ab}\\ &=&\langle m_a(t)\vert\dot{N}_t\vert m_b(t)\rangle\ . \end{eqnarray*} \section{Appendix D} In Model 1, the dynamics is generated by the Lindblad generator $\mathbb{L}_N[X]=\mathbb{H}_N[X]+\mathbb{D}_N[X]$, with Hamiltonian part $\mathbb{H}_N$ as in (\ref{mod1H}) and dissipative part $\mathbb{D}_N$ given by~\eqref{mod1-dissip} with Kraus operators as in~\eqref{Krops}. When acting on the self-adjoint element $x_i^{(k)}\in\chi$ at site $k$, it reduces to: \begin{eqnarray*} \mathbb{L}_N\left[x_i^{(k)}\right]&=&i\frac{\eta}{2}\left[\sigma_3^{(k)}\otimes\bold{1}+\bold{1}\otimes \sigma_3^{(k)}\,,\,x_i^{(k)}\right]\\ &+&\frac{J_0}{2}\sum_{\mu,\nu=1}^4 D_{\mu\nu}\left[\left[v_\mu^{(k)}\,,\,x_i^{(k)}\right]\,,\,(v^\dag_{\nu})^{(k)}\right]\ .
\end{eqnarray*} One can recast the first term as: $$ i\frac{\eta}{2}\left[\sigma_3^{(k)}\otimes\bold{1}+\bold{1}\otimes \sigma_3^{(k)}\,,\,x_i^{(k)}\right]=\sum_{j=1}^8\mathcal{H}_{ij}\,x_j^{(k)}\ ,\quad \mathcal{H}=-i\eta\begin{pmatrix} \sigma_2&0&0&0\cr 0&\sigma_2&0&0\cr 0&0&\sigma_2&0\cr 0&0&0&\sigma_2 \end{pmatrix}\ . $$ Further, let $\left[v_\mu^{(k)}\,,\,x_i^{(k)}\right]=\sum_{j=1}^8\mathcal{V}_\mu^{ij}x_j^{(k)}$; then, the dissipative term reads $$ \mathbb{D}_N\left[x_i^{(k)}\right]=\sum_{j=1}^8\mathcal{D}_{ij}\,x_j^{(k)}\ ,\quad \mathcal{D}_{ij}=J_0\sum_{\mu,\nu=1}^4\frac{D_{\mu\nu}}{2}(\mathcal{V}_\mu \mathcal{V}^*_\nu)^{ij}\ . $$ The four $8\times 8$ matrices $\mathcal{V}_\mu$ explicitly read \begin{eqnarray*} \mathcal{V}_1&=&\frac{1}{2}\begin{pmatrix} 0&0&0&\bold{1}+\sigma_2\cr 0&0&\sigma_2-\bold{1}&0\cr 0&\bold{1}+\sigma_2&0&0\cr \sigma_2-\bold{1}&0&0&0 \end{pmatrix}= -\mathcal{V}^*_2\\ \mathcal{V}_3&=&-\begin{pmatrix} \sigma_2&0&0&0\cr 0&0&0&0\cr 0&0&\sigma_2&0\cr 0&0&0&0 \end{pmatrix}\ ,\quad \mathcal{V}_4=-\begin{pmatrix} 0&0&0&0\cr 0&\sigma_2&0&0\cr 0&0&0&0\cr 0&0&0&\sigma_2 \end{pmatrix}\ . \end{eqnarray*} In order to make computations easier, it proves convenient to write these matrices as (sums of) $3$-fold tensor products of Pauli matrices: \begin{eqnarray*} \mathcal{V}_1&=&\frac{1}{2}\sigma_1\otimes\left(i\,\sigma_2\otimes\bold{1}+\sigma_1\otimes\sigma_2\right)=-\mathcal{V}^*_2\\ \mathcal{V}_3&=&-\frac{1}{2}\,\bold{1}\otimes\left(\bold{1}+\sigma_3\right)\otimes\sigma_2\ ,\quad \mathcal{V}_4=\frac{1}{2}\,\bold{1}\otimes\left(\sigma_3-\bold{1}\right)\otimes\sigma_2\ . \end{eqnarray*} Similarly, $\mathcal{H}=-i\eta\bold{1}\otimes\bold{1}\otimes\sigma_2$, whence $$ \mathcal{L}\equiv\mathcal{H}+\mathcal{D}=-i\eta\,\bold{1}\otimes\bold{1}\otimes\sigma_2-J_0\Big(\delta\,\bold{1}\otimes\bold{1}\otimes\bold{1}-\gamma\,\sigma_1\otimes\sigma_1\otimes\bold{1}\Big)\ .
$$ Explicitly, one has: \begin{equation} \mathcal{H}=\eta \begin{pmatrix} {\cal S} & {\bf 0}_4\\ {\bf 0}_4 & {\cal S}\\ \end{pmatrix}\ ,\qquad \mathcal{D}= J_0\,\begin{pmatrix} -\delta {\bf 1}_4 &\Gamma\\ \Gamma & -\delta {\bf 1}_4\\ \end{pmatrix} \end{equation} where ${\cal S}$ is as in (\ref{S}) and ${\bf 0}_4$ is the null matrix in four dimensions, while $$ \Gamma=\gamma\, \begin{pmatrix} 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ \end{pmatrix}\ . $$ The expressions of the $8\times 8$ matrices $H^{(1)}$ and $D^{(1)}$ in \eqref{Flind1a} and \eqref{Flind1b} that define the action of the mesoscopic dissipative generator in \eqref{fluctLind1}-\eqref{fluctLind2} can then be readily computed by expressing also the matrices $C^{(\beta)}$ and $(\sigma^{(\beta)})^{-1}$ as (sums of) $3$-fold tensor products of Pauli matrices, as given in \eqref{modcorrmat} and \eqref{invCOMM1}, respectively: $$ C^{(\beta)}=\left(\bold{1}-\epsilon\,\sigma_1\right)\otimes\bold{1}\otimes\left(\bold{1}+ \epsilon\,\sigma_2\right)\ ,\quad (\sigma^{(\beta)})^{-1}=\frac{1}{2c^2\epsilon}\,{\left(\bold{1}+\epsilon\sigma_1\right)\otimes\bold{1}\otimes\,i\sigma_2}\ , $$ where $\epsilon=\tanh(\eta\beta/2)$, $c^2=1-\epsilon^2$. Then, one computes \begin{eqnarray*} &&\hskip -.8cm \mathcal{L}\,C^{(\beta)}\,-\,C^{(\beta)}\,\mathcal{L}^{tr}= -2i\eta\left(\bold{1}-\epsilon\sigma_1\right)\otimes\bold{1}\otimes\left(\epsilon+\sigma_2\right)\\ &&\hskip-.8cm \mathcal{L}\,C^{(\beta)}\,+\,C^{(\beta)}\,\mathcal{L}^{tr}=-2J_0\Big(\delta\left(\bold{1}-\epsilon\sigma_1\right)\otimes\bold{1}\,-\,\gamma\left(\sigma_1-\epsilon\right)\otimes\sigma_1\Big)\otimes\left(\bold{1}+\epsilon\sigma_2\right)\ . 
\end{eqnarray*} From \eqref{Flind1a}, {\it i.e.} $$ H^{(1)}=-i(\sigma^{(\beta)})^{-1}\left(\mathcal{L}C^{(\beta)}\,-\,C^{(\beta)}\mathcal{L}^{tr}\right)\,(\sigma^{(\beta)})^{-1}\ , $$ one derives that the Hamiltonian coupling among the $F(x_i)$ is given by \begin{equation} \label{H1F} H^{(1)}=\frac{\eta}{2c^2\epsilon^2}\left(\bold{1}+\epsilon\sigma_1\right)\otimes\bold{1}\otimes\left(\epsilon+\sigma_2\right)= \begin{pmatrix} {\cal E} & \epsilon\,{\cal E}\\ \epsilon\,{\cal E} & {\cal E}\\ \end{pmatrix}\ , \end{equation} with $$ {\cal E}= \begin{pmatrix} \epsilon&-i&0&0\\ i&\epsilon&0&0\\ 0&0&\epsilon&-i\\ 0&0&i&\epsilon\\ \end{pmatrix}\ . $$ Similarly, the hamiltonian contribution expressed in terms of creation and annihilation operators in \eqref{Flind2a} gives rise to the matrix $H^{(2)}=\mathcal{M}^\dag\,H^{(1)}\,\mathcal{M}$, explicitly given by \begin{equation} \label{H1A} H^{(2)}= \frac{\eta}{\epsilon} \begin{pmatrix} (\epsilon+1) {\bf 1}_4 & {\bf 0}_4\\ {\bf 0}_4 & (\epsilon-1) {\bf 1}_4 \end{pmatrix}\ . \end{equation} From \eqref{Flind1b}, {\it i.e.} $$ D^{(1)}=(\sigma^{(\beta)})^{-1}\left(\mathcal{L}C^{(\beta)}\,+\,C^{(\beta)}\mathcal{L}^{tr}\right)(\sigma^{(\beta)})^{-1}\ , $$ one derives the Kossakowski matrix responsible for the dissipative action of the generator: \begin{eqnarray*} D^{(1)}&=&\frac{J_0}{2c^2\epsilon^2}\Big(\delta\left(\bold{1}+\epsilon\sigma_1\right)\otimes\bold{1}-\gamma\left(\epsilon+\sigma_1\right)\otimes\sigma_1\Big)\otimes\left(\bold{1}+\epsilon\sigma_2\right)\\ &=&\frac{J_0}{2c^2\epsilon^2} \begin{pmatrix} D_1&\epsilon D_2&\epsilon D_1&D_2\cr \epsilon D_2&D_1&D_2&\epsilon D_1\cr \epsilon D_1&D_2&D_1&\epsilon D_2\cr D_2&\epsilon D_1&\epsilon D_2&D_1\end{pmatrix}\ ,\\ D_1&=&\delta\begin{pmatrix}1&-i\epsilon\cr i\epsilon&1\end{pmatrix}\ ,\qquad D_2=-\gamma\begin{pmatrix}1&-i\epsilon\cr i\epsilon&1\end{pmatrix}\ . 
\end{eqnarray*} Instead, when the dissipative contribution is expressed in terms of creation and annihilation operators, the corresponding Kossakowski matrix reads \begin{eqnarray} \label{D1A1} D^{(2)}&=&\mathcal{M}^\dag\,D^{(1)}\,\mathcal{M}=\frac{J_0}{\epsilon} \begin{pmatrix} D_{1+}&D_{2+}&0&0\cr D_{2+}&D_{1+}&0&0\cr 0&0&D_{1-}&D_{2-}\cr 0&0&D_{2-}&D_{1-} \end{pmatrix}\ ,\\ \label{D1A2} D_{1\pm}&=&\delta(1\pm\epsilon)\begin{pmatrix} 1&0\cr0&1\end{pmatrix}\ ,\qquad D_{2\pm}=\gamma(1\pm\epsilon)\begin{pmatrix} \epsilon&-c\cr -c&\epsilon\end{pmatrix}\ . \end{eqnarray} \section{Appendix E} The Hamiltonian contribution to the Lindblad generator of the microscopic dynamics studied in Model 2 is the same as in Model 1, thus we concentrate on the dissipative term $\mathbb{D}_N$ of $\mathbb{L}_N$. Since operators at different sites commute, the action of $\mathbb{D}_N$ on an operator $x_i$ from the set $\chi$ at a given site $k$ is given by \begin{eqnarray*} \mathbb{D}_N[x_i^{(k)}]&=& \frac{1}{2}\Big(\left[w^{(k)}_1\,,\,\left[x^{(k)}_i\,,\,w^{(k)}_1\right]\right]+ \left[w_2^{(k)}\,,\,\left[x^{(k)}_i\,,\,w^{(k)}_2\right]\right]\\ &+&\gamma\left[w_3^{(k)}\,,\left[x^{(k)}_i\,,\,w^{(k)}_3\right]\right]\Big)\\ &-&i\frac{\epsilon}{2}\,\left\{w^{(k)}_1\,,\,\left[x^{(k)}_i\,,\,w^{(k)}_2\right]\right\}+i\frac{\epsilon}{2}\,\left\{w_2^{(k)}\,,\,\left[x^{(k)}_i\,,\,w^{(k)}_1\right]\right\}\ , \end{eqnarray*} with $w_\mu=\sigma_\mu\otimes\bold{1}+\bold{1}\otimes\sigma_{\mu}$. Then, by means of the Pauli algebraic relations, one explicitly computes that $$ \mathbb{D}_N\left[x_i^{(p)}\right]=\sum_{k=1}^8\mathcal{D}_{ik}\,x_k^{(p)}\ , $$ where \begin{equation} \label{mod2L} \mathcal{D}=-2\begin{pmatrix} 1+\xi&0&0&0&0&0&-\epsilon&0\cr 0&1+\xi&0&0&0&0&0&-\epsilon\cr 0&0&1+\xi&0&-\epsilon&0&0&0\cr 0&0&0&1+\xi&0&-\epsilon&0&0\cr 2\epsilon&0&\epsilon&0&3+\xi&0&2&0\cr 0&2\epsilon&0&\epsilon&0&3+\xi&0&2\cr \epsilon&0&2\epsilon&0&2&0&3+\xi&0\cr 0&\epsilon&0&2\epsilon&0&2&0&3+\xi\cr \end{pmatrix}\ . 
\end{equation} As in the previous Appendix, from \eqref{Flind1b}, with $$ C^{(\beta)}=\left(\bold{1}-\epsilon\,\sigma_1\right)\otimes\bold{1}\otimes\left(\bold{1}+ \epsilon\,\sigma_2\right)\ , $$ one computes $$ D^{(1)}=(\sigma^{(\beta)})^{-1}\left(\mathcal{L}C^{(\beta)}\,+\,C^{(\beta)}\mathcal{L}^{tr}\right)(\sigma^{(\beta)})^{-1}\ . $$ Then, by the transformation $D^{(2)}=\mathcal{M}^\dag\,D^{(1)}\,\mathcal{M}$ that maps the dissipator written in terms of the operators $F(x_i)$, $1\leq i\leq 8$, to the one expressed using annihilation and creation operators $a^\#_i$, $i=1,2,3,4$, one gets \begin{equation} \label{mod2La1} D^{(2)}=\frac{2}{\epsilon}\begin{pmatrix}(1+\epsilon)A&0\cr 0&(1-\epsilon)A \end{pmatrix}\ , \end{equation} with \begin{equation} \label{mod2La2} A=\begin{pmatrix}1+\xi&0&\epsilon^2&-\epsilon c\cr 0&3+\xi&-\epsilon c&-(1+c^2)\cr \epsilon^2&-\epsilon c&1+\xi&0\cr -\epsilon c&-(1+c^2)&0&3+\xi \end{pmatrix}\ , \end{equation} where, as before, $\epsilon=\tanh(\eta\beta/2)$, $c^2=1-\epsilon^2$. \section{Appendix F} In this Appendix we derive the explicit form of the quantity $S$ appearing in the entanglement criterion of equation \eqref{crit} in Model 1, both in the case of an initial symmetrically squeezed state, $r_1=r_3=r$, and for a one-mode squeezed state, $r_1=r$, $r_3=0$. The first step is to find the evolution of the reduced covariance matrix at every time $t$, in the language of creation and annihilation operators. 
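Before proceeding, the two closed forms quoted in Appendix D for $\mathcal{L}\,C^{(\beta)}\mp C^{(\beta)}\mathcal{L}^{tr}$ can be cross-checked numerically. The sketch below is not part of the derivation; all parameter values are arbitrary, and $\epsilon$ merely plays the role of $\tanh(\eta\beta/2)$.

```python
import numpy as np

# Pauli matrices; all parameter values below are arbitrary illustration choices
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
I2 = np.eye(2)
eta, J0, delta, gamma = 0.9, 1.3, 0.7, 0.4
eps = np.tanh(0.25)                      # plays the role of tanh(eta*beta/2)

kron3 = lambda a, b, c: np.kron(a, np.kron(b, c))

# L = -i eta 1x1xs2 - J0 (delta - gamma s1xs1x1), C = (1-eps s1)x1x(1+eps s2)
L = -1j * eta * kron3(I2, I2, s2) - J0 * (delta * np.eye(8) - gamma * kron3(s1, s1, I2))
C = kron3(I2 - eps * s1, I2, I2 + eps * s2)

# Commutator-type combination vs the closed form quoted in Appendix D
comm = L @ C - C @ L.T
comm_ref = -2j * eta * kron3(I2 - eps * s1, I2, eps * I2 + s2)

# Anticommutator-type combination vs its closed form
anti = L @ C + C @ L.T
anti_ref = -2 * J0 * np.kron(delta * np.kron(I2 - eps * s1, I2)
                             - gamma * np.kron(s1 - eps * I2, s1),
                             I2 + eps * s2)
```

Both `comm` and `anti` agree with their reference expressions to machine precision, confirming the tensor-product manipulations.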
From Appendix~D, {\sl Theorem~\ref{qfth}}, {\sl Lemma~\ref{lemma0}} and {\sl Lemma~\ref{lem2}}, one finds: \begin{equation} \Phi_t\left[D(z)\right]={\rm e}^{-\frac{1}{2}(\tilde Z, \tilde{\mathcal{Y}}_t \tilde Z)}\,D(z_t) \end{equation} with: $$ \tilde {Z}_t= {\rm e}^{t\widetilde{\mathcal{L}}^{tr}} \tilde Z\ ,\qquad {\rm e}^{t\widetilde{\mathcal{L}}^{tr}}=\mathcal{P}^T\Sigma_3\mathcal{M}^\dagger e^{t\mathcal{L}^{tr}} (\mathcal{M}^{\dagger})^{-1}\Sigma_3\mathcal{P}\ ,\qquad \widetilde{\mathcal{Y}}_t=\frac{1}{2\epsilon}\left({\bf 1}_{8}-\left({\rm e}^{t\widetilde{\mathcal{L}}^{tr}}\right)^\dagger {\rm e}^{t\widetilde{\mathcal{L}}^{tr}}\right)\ , $$ and $$ {\rm e}^{t\widetilde{\mathcal{L}}^{tr}}=e^{-\delta J_0 t}\begin{pmatrix} \cosh(J_0\gamma t)&0&-\epsilon\sinh(J_0\gamma t)&c\sinh(J_0\gamma t)\\ 0&\cosh(J_0\gamma t)&c\sinh(J_0\gamma t)&\epsilon\sinh(J_0\gamma t)\\ -\epsilon\sinh(J_0\gamma t)&c\sinh(J_0\gamma t)&\cosh(J_0\gamma t)&0\\ c\sinh(J_0\gamma t)&\epsilon\sinh(J_0\gamma t)&0&\cosh(J_0\gamma t) \end{pmatrix}\otimes \begin{pmatrix} {\rm e}^{i\omega t}&0\\ 0&{\rm e}^{-i\omega t} \end{pmatrix}\ . $$ As a result, the evolution of the covariance matrix for the four modes reads as follows: $$ \widetilde G(t)=\left({\rm e}^{t\tilde{\mathcal{L}}^{tr}}\right)^\dagger\widetilde \Sigma^{(\beta)}_{r_1,r_3}\,{\rm e}^{t\tilde{\mathcal{L}}^{tr}}+\widetilde \Sigma^{(\beta)}_{0,0}-\left({\rm e}^{t\tilde{\mathcal{L}}^{tr}}\right)^\dagger\widetilde \Sigma^{(\beta)}_{0,0}\, {\rm e}^{t\tilde{\mathcal{L}}^{tr}}\ . $$ In order to construct the reduced matrix for the two relevant modes under investigation, it is sufficient to look at the block structure of formula \eqref{bigcov2} and to collect the corresponding entries: $$ \widetilde G_{red}(t)=\begin{pmatrix} \widetilde G_{11}(t)&\widetilde G_{13}(t)\\ \widetilde G_{13}(t)& \widetilde G_{33}(t) \end{pmatrix}\ , $$ where one has $\widetilde G_{13}=(\widetilde G_{13})^\dagger$. 
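The explicit matrix exponential quoted above must satisfy the one-parameter semigroup property ${\rm e}^{t_1\widetilde{\mathcal{L}}^{tr}}\,{\rm e}^{t_2\widetilde{\mathcal{L}}^{tr}}={\rm e}^{(t_1+t_2)\widetilde{\mathcal{L}}^{tr}}$; a numerical sketch (parameter values arbitrary, with $c^2=1-\epsilon^2$) verifies it:

```python
import numpy as np

def propagator(t, J0=1.0, delta=0.8, gamma=0.3, omega=0.5, eps=np.tanh(0.3)):
    """Explicit form of exp(t L~tr) quoted above; parameter values are arbitrary."""
    c = np.sqrt(1.0 - eps ** 2)
    ch, sh = np.cosh(J0 * gamma * t), np.sinh(J0 * gamma * t)
    M4 = np.array([[ch, 0.0, -eps * sh, c * sh],
                   [0.0, ch, c * sh, eps * sh],
                   [-eps * sh, c * sh, ch, 0.0],
                   [c * sh, eps * sh, 0.0, ch]])
    phase = np.diag([np.exp(1j * omega * t), np.exp(-1j * omega * t)])
    return np.exp(-delta * J0 * t) * np.kron(M4, phase)

# One-parameter semigroup property the exponential must satisfy
lhs = propagator(0.7) @ propagator(1.1)
rhs = propagator(0.7 + 1.1)
```

The check works because the symmetric $4\times 4$ matrix multiplying $\sinh$ squares to the identity when $c^2=1-\epsilon^2$.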
This allows evaluating the four quantities $I_j$ entering the definition of $S$ in (\ref{crit}). As already mentioned, two cases have been considered for the initial state, a symmetrically squeezed state and a one-mode squeezed state. In the two cases, one obtains, respectively: \begin{eqnarray} \nonumber S_S(t)&=&\frac{\left(\epsilon^2-1\right)^2}{16\epsilon^4}+\sinh^2(r)\left[\left(\frac{1}{2\epsilon^2}-\frac{1}{2}\right)\left(\frac{y_\epsilon(t)}{\epsilon}-y_\epsilon^2(t)\right)-2\left(1+\frac{1}{\epsilon^2}\right)y_3^2(t)\right]+\\ \label{critsas0} &+&\sinh^4(r)\left[\left(\frac{y_\epsilon(t)}{\epsilon}-y_\epsilon^2(t)+4y_3^2(t)\right)^2-4\frac{y_3^2(t)}{\epsilon^2}\right]\ , \\ \nonumber S_A(t)&=&\frac{\left(\epsilon^2-1\right)^2}{16\epsilon^4}+\sinh^2(r)\Bigg[\left(\frac{1}{4\epsilon^2}-\frac{1}{4}\right)\left(\frac{y_1(t)-y_1^2(t)}{\epsilon^2}+y_2(t)-\epsilon^2 y_2^2(t)\right)+\\ &-&y_3^2(t)\left(\frac{1}{2}+\frac{1}{2\epsilon^2}\right)\Bigg]\ , \label{critsas} \end{eqnarray} where \begin{eqnarray} \label{y1} y_1(t)&=&\frac{e^{-2J_0\delta t}}{2}\left(\cosh(2J_0\gamma t)+1\right)\ ,\qquad y_2(t)=\frac{e^{-2J_0\delta t}}{2}\left(\cosh(2J_0\gamma t)-1\right)\ ,\\ y_3(t)&=&\frac{e^{-2J_0\delta t}}{2}\sinh(2J_0\gamma t)\ ,\hskip 1.9cm y_\epsilon(t)=\frac{y_1(t)}{\epsilon}+\epsilon y_2(t)\ . \end{eqnarray} \vfill\eject
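The functions $y_i(t)$ defined above obey two simple identities, $y_1(t)-y_2(t)={\rm e}^{-2J_0\delta t}$ and $y_1(t)\,y_2(t)=y_3^2(t)$, which follow from $\cosh^2 x-\sinh^2 x=1$; a quick numerical check (parameter values arbitrary):

```python
import numpy as np

def y_funcs(t, J0=1.0, delta=0.8, gamma=0.3, eps=np.tanh(0.3)):
    """y_1, y_2, y_3, y_eps as defined above; parameter values are arbitrary."""
    damp = np.exp(-2.0 * J0 * delta * t)
    y1 = 0.5 * damp * (np.cosh(2.0 * J0 * gamma * t) + 1.0)
    y2 = 0.5 * damp * (np.cosh(2.0 * J0 * gamma * t) - 1.0)
    y3 = 0.5 * damp * np.sinh(2.0 * J0 * gamma * t)
    y_eps = y1 / eps + eps * y2
    return y1, y2, y3, y_eps, damp

y1, y2, y3, y_eps, damp = y_funcs(1.3)
```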
18 As soon as he had finished speaking to Saul, the soul of Jonathan was knit to the soul of David, and Jonathan loved him as his own soul. 3 Then Jonathan made a covenant with David, because he loved him as his own soul. 17 And Jonathan made David swear again by his love for him, for he loved him as he loved his own soul.
Saudi Arabia to Set Up a Cultural Center in Islamabad Posted 2 years ago by Ambreen Shabbir Saudi Arabia is interested in setting up a cultural center in Islamabad in a bid to improve cultural collaboration between the two countries. Saudi Ambassador Nawaf Saeed Ahmad Al-Malki told this to Special Assistant to the Prime Minister on Information and Broadcasting Dr. Firdous Ashiq Awan in a meeting held in Islamabad. The ambassador said that the Kingdom looks forward to benefiting from Pakistan's skilled labor as part of the Saudi Crown Prince's Vision 2030. "The Pak-Saudi relationship is not confined to the governments; rather, it represents a people-to-people bonhomie," he said. The special assistant to the PM welcomed the Saudi government's intention to establish the cultural center along with enhancing people-to-people relations. She assured the dignitary of her full assistance in this regard. Awan told the envoy that Pakistan has enjoyed cordial relations with the Kingdom since 1947. This relationship is entrenched in the centuries-old religious, commercial, and cultural ties between the two peoples. She said that the recent visit of the crown prince to Pakistan was quite significant, as both countries signed agreements to further their economic cooperation and strategic partnership. Awan said that Pakistan has received generous economic assistance from Saudi Arabia in times of crisis. Both dignitaries discussed an executive program for media and cultural cooperation between the two countries and decided to prepare a calendar of cultural activities.
POOLSIDE APERICENA - SAIL TIME Wednesday, July 20, 8:00 PM, Brescia - CA' DESIDERIO FRANCIACORTA & CLUB AZZURRI VILLAGE PRESENT: APERICENA WITH BUFFET, MUSIC, ENTERTAINMENT AND A PRIZE DRAW... POOLSIDE, SURROUNDED BY GREENERY. OPEN TO SAIL CLIENTS REGISTERED AT www.sail.tours - SIGN UP NOW. THE FEE IS €20.00. ΑΙΟΛOS is based in the port of Cecina (Livorno), a strategic spot in the Mediterranean from which the main islands can be reached: June 2nd holiday weekend, ELBA ISLAND - TUSCAN ARCHIPELAGO, TOP RESERVE PRICE €600.00 PER PERSON in a DOUBLE CABIN. WEEK FROM 22/07 TO 29/07 FROM THE FRENCH RIVIERA TO IBIZA - YACHT SAIL LIFE - double cabin, ALL INCLUSIVE. YACHT LIFE combines the adventure of the open sea with modern comforts. Price per double cabin for the week, all inclusive. ►Panoramic Boat Party, Sailboat - Sunday, June 25, 2017 (Cisano - VR). The most talked-about and eagerly awaited private party on a boat on Lake Garda. Only 45 seats available: 4 hours of sailing, 4 hours of Brain Events music... 4 hours of freshness at the highest level! FIREWORKS CRUISE, DESENZANO - Sunday, August 7 - LAKE GARDA. Lake and guest party 2016 with Revolver. BOARDING from the port of SIRMIONE 2, departure at 9:15 PM, aperitif while cruising toward Desenzano at 10:00 PM; return at 12:30 AM to the port of SIRMIONE 2, or at 12:00 AM to the port of Desenzano.
NASHVILLE, TN—Dr. Walter C. Kaiser, Jr., president emeritus and distinguished professor of Old Testament at Gordon-Conwell Theological Seminary in South Hamilton, Massachusetts, will serve as the guest lecturer for the 2010 Leroy Forlines Lectures at Free Will Baptist Bible College, according to President Matt Pinson. The lectures will be held February 23-24, 2010. A Pennsylvania native, Dr. Kaiser is one of the world's most influential Old Testament scholars. He received his A.B. from Wheaton College, B.D. from Wheaton Graduate School, and Ph.D. from Brandeis University. Kaiser taught at Trinity Evangelical Divinity School where he chaired the Old Testament Department and also served as academic dean and vice president. After serving as the first Colman M. Mockler Professor of Old Testament at Gordon-Conwell, he filled the office of president there for nine years.
Group mine workings (Ukrainian: гірничі виробки групові) are underground workings that serve the mining of several seams, veins, etc., as well as of several levels or sections (on a concentration horizon). Literature Mine workings
\section{Introduction} The problem of the internal structure of the scalar mesons with mass below 1 GeV is still open\cite{klempt:2007}. It is controversial whether they are $q\bar q$ mesons\cite{scadron:2004}, $qq\bar q\bar q$ states\cite{jaffe:1977}, bound states of a $K\bar K$ pair\cite{weinstein:1982} or a mixing of these configurations. \\ An important part of the program of the KLOE experiment, carried out at the Frascati $\phi$-factory DA$\Phi$NE, has been dedicated to the study of the radiative decays $\phi(1020)\to P_1 P_2\gamma$ ($P_{1,2}=$ pseudoscalar mesons). These decays are dominated by the exchange of a scalar meson $S$ in the intermediate state ($\phi\to S\gamma$, and $S\to P_1 P_2$), and both their branching ratios and the $P_1 P_2$ invariant mass shapes depend on the scalar structure.\\ The $\phi\to\eta\pi^0\gamma$ decay has already been used by KLOE and by other experiments to study the neutral component of the isotriplet $a_0(980)$\cite{aloisio:2002,achasov:2000}. This process is well suited to study the $\phi\to a_0(980)\gamma$ dynamics, since it is dominated by the scalar production, with a small vector background, contrary to the $\pi^0\pi^0\gamma$ and $\pi^+\pi^-\gamma$ cases, where a large irreducible background interferes with the $f_0(980)$ signal\cite{ambrosino:2007}.\\ In this paper the result of the analysis of the $\phi\to\eta\pi^0\gamma$ decay, performed on a sample with 20 times the statistics of the previously published analysis\cite{aloisio:2002}, is presented. The final states corresponding to $\eta\to\gamma\gamma$ and $\eta\to\pi^+\pi^-\pi^0$ have been selected. The $\eta\pi^0$ invariant mass distributions have been fit to two models of parametrization of the $\phi\to a_0(980)\gamma$ decay, to extract the relevant $a_0(980)$ parameters (mass and couplings). \section{DA$\Phi$NE and KLOE} The Frascati $\phi$-factory DA$\Phi$NE is an $e^+e^-$ collider operating at a center of mass energy $\sqrt{s}=M_{\phi}\simeq$ 1020 MeV. 
The beams collide at an angle of ($\pi$ - 0.025) rad, thus producing $\phi$ mesons with small momentum ($p_{\phi}\simeq$ 13 MeV) in the horizontal plane. The KLOE detector\cite{adinolfi:2002} consists of two main subdetectors: a large volume drift chamber (DC) and a fine sampling lead--scintillating-fiber electromagnetic calorimeter (EMC). The whole apparatus is inserted in a 0.52 T axial magnetic field, produced by a superconducting coil. The DC is 3.3 m long, with inner and outer radii of 25 and 200 cm respectively. It contains {$12~582$} drift cells arranged in 58 stereo layers uniformly distributed in the sensitive volume and it is filled with a gas mixture of 90\% helium and 10\% isobutane. Its spatial resolution is 200 $\mu$m and the tracks coming from the beam interaction point (IP) are reconstructed with $\sigma(p_{\perp})/p_{\perp}\leq 0.4\%$. The position resolution for two-track vertices is about 3 mm.\\ The DC is surrounded by the EMC, which covers 98\% of the solid angle, and is divided into a barrel, made of 24 trapezoidal modules about 4 m long, with the fibers running parallel to the barrel axis, and two endcaps of 32 modules each, with fibers aligned vertically. The read-out granularity is $\sim 4.4\times 4.4$ cm$^2$, for a total of 2440 cells, read at both ends by photomultipliers. The coordinate of a particle along the fiber direction is reconstructed from the difference of the arrival times of the signals at the two ends of the cell. Cells close in time and space are grouped together into clusters. The cluster energy is the sum of the cell energies, while the cluster time and position are energy-weighted averages. The energy and time resolutions for photons are $\sigma_E/E=5.7\%/\sqrt{E({\rm GeV})}$ and $\sigma_t=57~{\rm ps}/\sqrt{E({\rm GeV})}\oplus 100~{\rm ps}$ respectively. Cluster positions are measured with resolutions of 1.3 cm in the coordinates transverse to the fibers, and $1.2~{\rm cm}/\sqrt{E({\rm GeV})}$ in the longitudinal coordinate. 
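Two of the numbers quoted above can be reproduced in a few lines. This is a back-of-envelope sketch, not from the paper; the beam energy value and the quadrature combination of the two $\sigma_t$ terms (the usual reading of the $\oplus$ symbol) are assumptions:

```python
import math

# phi momentum from the crossing angle: each beam is tilted by half of
# 0.025 rad from head-on, so the transverse momentum components add up.
E_beam = 510.0                                 # MeV, half of sqrt(s) ~ 1020 MeV; m_e neglected
p_phi = 2.0 * E_beam * math.sin(0.025 / 2.0)   # close to the quoted ~13 MeV

# EMC resolutions (E in GeV); the two sigma_t terms are combined in quadrature.
def sigma_E_over_E(E):
    return 0.057 / math.sqrt(E)

def sigma_t_ps(E):
    return math.hypot(57.0 / math.sqrt(E), 100.0)
```

For a 100 MeV photon these formulas give about an 18\% relative energy resolution and a timing resolution of roughly 0.2 ns.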
The detection efficiency for photons of $E\simeq 20$ MeV is greater than 80\% and reaches almost 100\% at $E > 80$ MeV. \\ The KLOE trigger is based on the detection of two energy deposits in the EMC, with $E > 50$ MeV in the barrel and $E > 150$ MeV in the endcaps. \section{Event selection} The results are based on the data collected during the 2001-02 run, at $\sqrt{s}\simeq M_{\phi}$. Of the two selected decay chains, the fully neutral one is characterized by high statistics and large background, while the charged one has small background but lower statistics. These two decay chains have been selected with different criteria and slightly different data samples have been used: 414 pb$^{-1}$ for the fully neutral and 383 pb$^{-1}$ for the charged decay. Monte Carlo (MC) samples of signal and of background processes have been produced with the simulation program of the experiment\cite{ambrosino:2004}. They have been generated on a run-by-run basis, simulating the machine operating conditions and background levels as measured in the data. Three MC samples, generated with different luminosity scale factors (LSF = $L_{MC}/L_{data}$), have been used: \begin{enumerate} \item the {\it rad} sample contains all the radiative $\phi-$decays plus the non resonant process $e^+ e^-\to\omega\pi^0$, with LSF=5; \item the {\it kk} sample contains $\phi\to K^0\overline{K^0}$ with all subsequent kaon decays generated with LSF=1; \item the {\it all} sample contains all the $\phi$ decays with LSF=1/5; it is used to find possible backgrounds not included in the two main samples. \end{enumerate} The shape of the $\eta\pi^0$ invariant mass distribution has been simulated according to the spectrum obtained from the previously published analysis\cite{aloisio:2002}. \subsection{$\phi\to\eta\pi^0\gamma$ with $\eta\to\gamma\gamma$} This final state is characterized by five prompt photons originating from the IP. 
A prompt photon is defined as an EMC cluster not associated to any charged track in the DC and satisfying the condition $|t-r/c| < {\rm min}[5\sigma_t(E), 2~{\rm ns}]$, where $t$ is the photon flight time, $r$ is the corresponding path length, and $c$ is the speed of light. Events with exactly five prompt clusters, with E $>$ 3 MeV and polar angle $\vartheta > 21^{\circ}$ with respect to the beam line, are selected.\\ The main background originates from the other five photon final states, $\phi\to f_0(980)\gamma\to\pi^0\pi^0\gamma$ and $e^+e^-\to\omega\pi^0\to\pi^0\pi^0\gamma$, and from the seven photon process, $\phi\to\eta\gamma$ with $\eta\to 3\pi^0$, which can mimic five photon events due to either photon loss or cluster merging. Also the three photon final states, $\phi\to\eta(\pi^0)\gamma$ with $\eta(\pi^0)\to\gamma\gamma$, give a small contribution to the selected sample, when fake clusters are produced either by accidental coincidence with machine background or by cluster splittings. Other background processes are negligible. \\ The following analysis steps are then applied to the selected events. \begin{enumerate} \item First kinematic fit which imposes the total 4-momentum conservation and the speed of light for each photon, with 9 degrees of freedom. Events with $\chi^2_{fit1} > 27$ are rejected. A cut at 980 MeV on the total energy of the three most energetic photons is also applied to reject residual three photon events (processes 4 and 5 of Table \ref{tab:bckg}). \item Search for the best photon pairing to $\eta$'s and $\pi^0$'s, by choosing the combination that minimizes the $\chi^2$-like variable ($i,j,k,l=1,...,5$ are the photon indices): \begin{displaymath} \chi^2_{pair}=\frac{(M_{ij}-M_{P_1})^2}{\sigma^2_{M_{P_1}}} +\frac{(M_{kl}-M_{P_2})^2}{\sigma^2_{M_{P_2}}} \end{displaymath} for both $P_1 P_2 = \eta\pi^0$ (signal) or $\pi^0\pi^0$ (background) hypotheses. 
$\sigma_{M_{\pi^0}}$ and $\sigma_{M_{\eta}}$ are the widths of the $\pi^0$ and $\eta$ peaks after the first kinematic fit ($\sigma_{M_{\pi^0}}=6$ MeV and $\sigma_{M_{\eta}}=9$ MeV). \item Second kinematic fit with the two additional constraints of the masses of the intermediate particles. The number of degrees of freedom is 11. \end{enumerate} Background from processes 1 and 3 of Table \ref{tab:bckg} dominates the tail of the distribution of the $\chi^2_{fit2}$ of the second kinematic fit, as shown in Fig.\ref{fig:chi2fit2}, and it can be reduced by cutting at $\chi^2_{fit2} < 24$. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_chi2fit2.eps} & \includegraphics[width=.45\textwidth]{a0_dalitz.eps} \\ \end{tabular} \caption{ Left: $\chi^2$ of the second kinematic fit; the applied cut at $\chi^2_{fit2} = 24$ is also shown. Right: Dalitz plot of data in the background hypothesis ($\pi^0\pi^0\gamma$).} \label{fig:chi2fit2} \end{figure} By using the photon pairing in the background hypothesis, $\pi^0\pi^0\gamma$, the Dalitz plot of Fig.\ref{fig:chi2fit2} is obtained: the $f_0\gamma$ background populates the lower right corner, while the two straight bands are the contribution of $\omega\pi^0$. The $a_0$ signal is contained in the region between these bands. The $\omega\pi^0$ background is strongly reduced by cutting out the two bands shown in Fig.\ref{fig:chi2fit2}.\\ Assuming the background hypothesis $\omega\pi^0$, the angle $\theta^{\star}$ between the non-associated photon and the $\omega$ flight direction can be defined. The regions at large $|\cos\theta^{\star}|$ (Fig.\ref{fig:f0cut}.left) are dominated by $\omega\pi^0$ and $f_0\gamma$ backgrounds. The cut $|\cos\theta^{\star}| < 0.8$ is then applied. Another effective cut to reduce the $f_0\gamma$ background is $\theta_{23} > 42^{\circ}$ (Fig.\ref{fig:f0cut}.right), where $\theta_{23}$ is the angle between the second and third photons ordered by decreasing energy. 
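The pairing minimization of step 2 amounts to a scan over the disjoint photon pairs; a minimal sketch of it follows (the diphoton masses below are illustrative toy numbers, not KLOE data):

```python
from itertools import combinations

M_ETA, M_PI0 = 547.8, 135.0   # MeV (nominal masses, here just toy constants)
SIG_ETA, SIG_PI0 = 9.0, 6.0   # peak widths quoted after the first kinematic fit

def best_pairing(mass, m1, s1, m2, s2):
    """Scan all ordered (pair1, pair2) assignments of 5 photons:
    pair1 is tested against mass m1, pair2 against m2.
    `mass` maps a frozenset of two photon indices to their invariant mass."""
    best = None
    for p1 in combinations(range(5), 2):
        rest = [i for i in range(5) if i not in p1]
        for p2 in combinations(rest, 2):
            chi2 = ((mass[frozenset(p1)] - m1) / s1) ** 2 \
                 + ((mass[frozenset(p2)] - m2) / s2) ** 2
            if best is None or chi2 < best[0]:
                best = (chi2, p1, p2)
    return best

# Toy masses (illustrative only): photons 0,1 reconstruct the eta, 2,3 the pi0.
mass = {frozenset(p): 300.0 for p in combinations(range(5), 2)}
mass[frozenset((0, 1))] = 548.0
mass[frozenset((2, 3))] = 136.0

chi2, eta_pair, pi0_pair = best_pairing(mass, M_ETA, SIG_ETA, M_PI0, SIG_PI0)
```

Running the same scan with both target pairs set to the $\pi^0$ mass gives the competing $\pi^0\pi^0$ hypothesis used for the background studies.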
\\ After these cuts the overall selection efficiency, evaluated by MC, is almost independent of the $\eta\pi^0$ invariant mass and its average value is 38.5\%. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_cosstar.eps} & \includegraphics[width=0.45\textwidth]{a0_ang23.eps} \\ \end{tabular} \caption{ Left: $\cos\theta^{\star}$ distribution (see text for explanation); Right: angle between the second and third photons ordered by decreasing energy (vertical lines represent the applied cuts).} \label{fig:f0cut} \end{figure} The final sample consists of {29 601} events and the expected S/B ratio is about 1.0 (see Table \ref{tab:bckg}). The residual background is irreducible and has to be evaluated and subtracted. A reweighing procedure has been adopted: \begin{enumerate} \item for each specific background process a data sample with a small signal content (below a few percent) has been selected; \item a fit has been performed on selected kinematical distributions, using the corresponding MC shapes to determine the weight to be assigned to that specific background; the weight is defined as the ratio of the number of events found by the fit to the number of expected events from MC. \end{enumerate} In the last two columns of Table \ref{tab:bckg} the applied weights and the numbers of background events in the final sample are listed. \begin{table}[htb] \caption{Background processes for $\phi\to\eta\pi^0\gamma$, with $\eta\to\gamma\gamma$. (S/B)$_1$ is the signal-to-background ratio after the preselection, (S/B)$_2$ the same ratio at the end of the whole analysis chain. The reweighing factors, $w$, are also listed. 
Last column reports the final background estimate.} \begin{center} \begin{tabular}{clccc|c}\hline & Process & (S/B)$_1$ & (S/B)$_2$ & $w$ & Background events\\ \hline 1 & $\phi\to f_0\gamma\to\pi^0\pi^0\gamma$ & 0.40 & 4.4 & 1.2 & 5062 $\pm$ 60 \\ 2 & $e^+e^-\to\omega\pi^0\to\pi^0\pi^0\gamma$ & 0.14 & 3.1 & 0.96 & 3825 $\pm$ 37 \\ 3 & $\phi\to\eta\gamma$ with $\eta\to 3\pi^0$ & 0.10 & 2.8 & 1.1 & 7248 $\pm$ 78 \\ 4 & $\phi\to\eta\gamma$ with $\eta\to\gamma\gamma$ & 1.6 & 200 & 2.5 & 197 $\pm$ 11 \\ 5 & $\phi\to\pi^0\gamma$ & 10 & -- & -- & -- \\ \hline & Total background & 0.05 & 1.0 & & {16 332} $\pm$ 86 \\ \hline \end{tabular} \end{center} \label{tab:bckg} \end{table} The uncertainties are the combination of MC statistics and of the systematics on the applied weights. The correlations have also been taken into account. After the background subtraction the number of signal events is {13 269} $\pm$ 192. In Fig.\ref{fig:spectrum} the $\eta\pi^0$ invariant mass distribution of the final sample is shown together with the background contributions. The invariant mass resolution is about 4 MeV, with non-Gaussian tails mainly due to wrong photon combinations. In the same figure, the distribution of the polar angle $\theta_{rec}$ of the recoil photon is plotted after the background subtraction: good agreement with the expected $1+\cos^2\theta_{rec}$ behaviour is shown. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_spectrum_neutral.eps} & \includegraphics[width=0.45\textwidth]{a0_cost_neutral.eps} \\ \end{tabular} \caption{ Left: $\eta\pi^0$ invariant mass distribution of the neutral channel. 
Right: Distribution of the cosine of the polar angle of the recoil photon after background subtraction (dots), compared with the MC expectation (solid line).} \label{fig:spectrum} \end{figure} \subsection{$\phi\to\eta\pi^0\gamma$ with $\eta\to\pi^+\pi^-\pi^0$} With respect to the fully neutral one, this decay provides lower statistics since the branching ratio of $\eta\to\pi^+\pi^-\pi^0$ is smaller than for $\eta\to\gamma\gamma$. Moreover a lower acceptance is expected due to the larger number of particles to be detected. However in this case there is a smaller background contamination, since no other final state with two tracks and five photons has a significant branching ratio from the $\phi$. The main sources of background are due to final states with two tracks and either four or six photons. In order of importance there are: $e^+ e^-\to\omega\pi^0$ with $\omega\to\pi^+\pi^-\pi^0$ and a fake cluster; $\phi\to K_SK_L$ with $K_S\to\pi^+\pi^-$ and prompt $K_L\to 3\pi^0$ with one photon lost; $\phi\to K_SK_L$ with $K_S\to\pi^0\pi^0$ and prompt $K_L\to \pi^+\pi^-\pi^0$ or $\pi\ell\nu$ with either one photon lost or one fake cluster; $\phi\to\eta\gamma$ with $\eta\to\pi^+\pi^-\pi^0$ plus two fake clusters. \\ The signal preselection requires the detection of two charged tracks and of five photons. The following requirements are then applied: \begin{enumerate} \item a vertex with two opposite sign tracks in a cylinder, around the IP, of 5 cm radius and 11 cm length; \item five prompt photons with $E>$10 MeV; \item total energy in the range $900<E_{tot}<1160$ MeV and total momentum $|\vec{P_{tot}}|<110$ MeV/c; \item{the scalar sum of the momenta of the two pions $P_{\Sigma}=|\vec{p_1}|+|\vec{p_2}|$, outside the range $418<P_{\Sigma}<430$ MeV/c, which identifies events with $K_S\to\pi^+\pi^-$.} \end{enumerate} Events surviving this preselection go to the kinematic fit stage, similar to that of the neutral channel. 
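The preselection is a conjunction of independent cuts plus the $K_S\to\pi^+\pi^-$ veto on $P_\Sigma$; a minimal sketch follows (the event-record field names are hypothetical, invented for illustration):

```python
def preselect(ev):
    """Charged-channel preselection; `ev` is a dict with hypothetical keys
    (energies/momenta in MeV)."""
    ks_veto = not (418.0 < ev["p_sum"] < 430.0)   # reject K_S -> pi+ pi- candidates
    return bool(ev["has_ip_vertex"]               # two opposite-sign tracks near the IP
                and ev["n_prompt_photons"] == 5   # each with E > 10 MeV
                and 900.0 < ev["e_tot"] < 1160.0
                and ev["p_tot"] < 110.0
                and ks_veto)

# A toy event that passes all cuts
ev = {"has_ip_vertex": True, "n_prompt_photons": 5,
      "e_tot": 1015.0, "p_tot": 40.0, "p_sum": 500.0}
```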
\begin{enumerate} \item A kinematic fit with 9 degrees of freedom is performed by imposing only the total 4-momentum conservation and speed of light for the photons; events with $\chi^2_{fit1}<17$ are retained. \item Photons are combined to build $\pi^0$'s and $\eta$'s. There are 15 possibilities to get two $\pi^0$'s out of five photons. For each of them there are two choices in the association of one $\pi^0$ to the $\pi^+\pi^-$ pair. For each of these 30 combinations $\chi_{pair}^2$ is computed according to ($i,j,k,l=1,...,5$ are the photon indices): \begin{displaymath} \chi^2_{pair}={{(M_{ij}-M_{\pi^0})^2}\over {\sigma^2_{M_{\pi^0}}}}+ {{(M_{kl}-M_{\pi^0})^2}\over {\sigma^2_{M_{\pi^0}}}}+ {{(M_{\pi^+\pi^-\pi^0}-M_{\eta})^2}\over {\sigma^2_{M_{\eta}}}} \end{displaymath} Events with at least one combination with $\chi_{pair}^2<$ 10 are retained. \item The second kinematic fit is performed on all the combinations selected by the previous step adding the three mass constraints, for a total of 12 degrees of freedom. The combination with the lowest $\chi^2_{fit2}$ is chosen. Only events with $\chi^2_{fit2}<20$ are retained. \item Finally, events with the recoil photon energy below 20 MeV are discarded to remove events with a spurious low energy photon. \end{enumerate} The final sample consists of 4181 events. The overall selection efficiency for the signal, evaluated by MC, is 19.4\%, almost independent of the $\eta\pi^0$ invariant mass, decreasing only at very high invariant mass values. Fig.\ref{fig:chisquadri} shows the data-MC agreement for the $\chi^2$ distributions of the first and second kinematic fits. The MC distributions include signal and background events. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_chi2_1_charged.eps} & \includegraphics[width=0.45\textwidth]{a0_chi2_2_charged.eps} \\ \end{tabular} \caption{ $\chi^2$ distributions for the first (left) and second (right) kinematic fit. 
The selected data sample (points) is compared to the MC expectation (dark grey histograms), given by the weighted sum of the signal and the estimated background (light grey histograms).} \label{fig:chisquadri} \end{center} \end{figure} The mass resolution is about 4 MeV for all mass values, with non-Gaussian tails mainly due to events with a wrong photon combination.\\ The residual background is evaluated by applying the selection procedure to MC samples and by checking the absolute normalization on background-enriched data control samples. In order to properly normalize the observed numbers of events, data and MC samples after the preselection but before the kinematic fit have been used. At this level the expected contribution of the signal does not exceed $2 \div 3$\%. Four variables have been chosen to compare data and MC samples: $E_{tot}$, $|\vec{P_{tot}}|$, $M_{\gamma\gamma}$ and $M_{\pi\pi\gamma\gamma}$, where $M_{\gamma\gamma}$ is the invariant mass of any pair of photons (10 combinations per event) and $M_{\pi\pi\gamma\gamma}$ is the invariant mass of the two pions and any pair of two photons (again 10 combinations per event). The four data distributions are simultaneously fit with the weighted sum of the same MC distributions for each background sample and for the signal. The weights of the {\it rad} and {\it kk} samples are the free parameters. $w_{rad}=0.45$ and $w_{kk}=1.3$ are obtained, from which the numbers of background events $B_{rad}=307$ and $B_{kk}=264$ are estimated. Eight additional background events from the {\it all} sample must also be taken into account. The fit has been repeated separately on each control distribution, and the spread obtained in the estimated number of events is taken as systematic uncertainty. The total number of background events is $579\pm 27$, where the uncertainty is the quadratic sum of the statistical and the systematic uncertainties. This background accounts for about 14\% of the selected events.
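The combinatorics of the pairing step can be made explicit with a short sketch: there are indeed 15 ways of grouping five photons into two $\pi^0$ pairs plus a recoil photon, and two $\eta$ assignments for each, i.e. 30 $\chi^2_{pair}$ values per event. The masses and resolutions below are placeholder values, and the four-vector handling is a minimal illustration, not the analysis code:

```python
import math
from itertools import combinations

M_PI0, M_ETA = 134.98, 547.86      # MeV; PDG-era values, assumed here
SIG_PI0, SIG_ETA = 15.0, 15.0      # placeholder resolutions, MeV


def add4(*vectors):
    """Component-wise sum of (E, px, py, pz) four-vectors."""
    return tuple(sum(c) for c in zip(*vectors))


def mass(v):
    """Invariant mass of a four-vector (clipped at zero)."""
    E, px, py, pz = v
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))


def pairings(photons):
    """All 15 ways of grouping five photon indices into two unordered
    pairs plus a recoil photon."""
    idx = range(len(photons))
    out = []
    for pair1 in combinations(idx, 2):
        rest = [i for i in idx if i not in pair1]
        for pair2 in combinations(rest, 2):
            if pair1 < pair2:      # count each unordered {pair1, pair2} once
                recoil = next(i for i in rest if i not in pair2)
                out.append((pair1, pair2, recoil))
    return out


def chi2_pair_values(photons, pipi):
    """The 30 chi^2_pair values (15 pairings x 2 eta assignments);
    pipi is the pi+ pi- four-momentum."""
    values = []
    for p1, p2, _ in pairings(photons):
        g1 = add4(photons[p1[0]], photons[p1[1]])
        g2 = add4(photons[p2[0]], photons[p2[1]])
        for eta_pi0 in (g1, g2):   # which pi0 joins the pi+ pi- pair
            m_eta = mass(add4(pipi, eta_pi0))
            values.append(((mass(g1) - M_PI0) / SIG_PI0) ** 2
                          + ((mass(g2) - M_PI0) / SIG_PI0) ** 2
                          + ((m_eta - M_ETA) / SIG_ETA) ** 2)
    return values
```

In the analysis the combination with the smallest of these 30 values is the one retained for the second kinematic fit.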
\\ Fig.\ref{fig:spectrum_ch} shows the $\eta\pi^0$ invariant mass distribution. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_spectrum_charged.eps} & \includegraphics[width=0.45\textwidth]{a0_cost_charged.eps} \\ \end{tabular} \caption{ Left: $\eta\pi^0$ invariant mass distribution for the final data sample (points) compared to the estimated background (dark histogram). Right: polar angle of the recoil photon for data (points) and for MC expectations (histogram). The dark histogram represents the background.} \label{fig:spectrum_ch} \end{figure} In the same figure, the distribution of the polar angle of the recoil photon is shown and compared to the expected MC behaviour. Also in this case, the distribution agrees with the $1+\cos^2\theta_{rec}$ dependence of the signal. \subsection{Branching ratio evaluation} The branching ratio of the process $\phi\to\eta\pi^0\gamma$ is obtained from the formula: \begin{equation} Br(\phi\to\eta\pi^0\gamma)=\frac{N_{f}-B_{f}}{\varepsilon_{f} N^{(f)}_{\phi}Br(\eta\to f)} \;\;\;\;\; (f=\gamma\gamma, \pi^+\pi^-\pi^0) \label{eq:br} \end{equation} where $N_{f}$ is the total number of selected events, $B_{f}$ is the estimated background, and $\varepsilon_{f}$ is the average efficiency. $N_{\phi}$ is the number of produced $\phi$ mesons, evaluated from the number $N_{\eta\gamma}$ of $\phi\to\eta\gamma$ with $\eta\to\pi^0\pi^0\pi^0$ events. \begin{equation} N_{\phi} = \frac{N_{\eta\gamma}}{\varepsilon_{\eta\gamma} Br(\phi\to\eta\gamma) Br(\eta\to\pi^0\pi^0\pi^0)} \label{eq:norm} \end{equation} The $Br(\pi^0\to\gamma\gamma)$ is not included in eq.(\ref{eq:br}) and (\ref{eq:norm}) since it has already been taken into account in the MC. The normalization sample has been selected by requiring no tracks in the DC and six or more prompt clusters in the EMC, in the same runs used for the signal selection.
$N_{\eta\gamma}=4.2\times 10^6$ events have been found in the sample used for the analysis of the fully neutral decay chain, with efficiency $\varepsilon_{\eta\gamma} = $ 81\%, corresponding to $N^{(\gamma\gamma)}_{\phi}=(1.24\pm 0.03)\times 10^9$. \\ By using $Br(\eta\to\gamma\gamma)=(39.31\pm 0.20)\%$\cite{amsler:2008}, the branching ratio is obtained: \begin{equation} Br(\phi\to\eta\pi^0\gamma)=(7.01\pm 0.10 \pm 0.20)\times 10^{-5} \label{eq:brneutral} \end{equation} The first uncertainty is due to statistics and to the background subtraction. Several sources of systematics have been taken into account (see Table \ref{tab:syst}): photon counting (dominated by the detection efficiency for low energy photons), the data-MC discrepancies in the evaluation of the selection efficiency, and the normalization uncertainty.\\ \begin{table}[htb] \caption{Main sources of systematic uncertainty on the branching ratio (\ref{eq:brneutral}).} \begin{center} \begin{tabular}{lc}\hline Source & uncert. ($\times 10^{-5}$) \\ \hline Photon counting & 0.08 \\ Selection efficiency & 0.12 \\ $Br(\eta\to\gamma\gamma)$ & 0.04 \\ $Br(\phi\to\eta\gamma)$ & 0.13 \\ $Br(\eta\to\pi^0\pi^0\pi^0)$ & 0.05 \\ \hline \end{tabular} \end{center} \label{tab:syst} \end{table} The data sample analyzed for the charged decay channel is slightly smaller than the other one, $N^{(\pi^+\pi^-\pi^0)}_{\phi}=(1.15\pm 0.03)\times 10^9$. By using $Br(\eta\to\pi^+\pi^-\pi^0)=(22.73\pm 0.28)\%$\cite{amsler:2008} \begin{equation} Br(\phi\to\eta\pi^0\gamma)=(7.12\pm0.13\pm0.22)\times 10^{-5} \label{eq:brcharged} \end{equation} is obtained. 
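The normalization of eq.(\ref{eq:norm}) can be checked numerically. The event count and efficiency are taken from the text; the two PDG branching ratios below are assumed values of that era and are not quoted in the text, so this is only a rough consistency check:

```python
# Rough numeric check of eq. (norm): N_phi from the phi -> eta gamma,
# eta -> 3 pi0 normalization sample.
N_eta_gamma = 4.2e6          # selected events (from the text)
eff_eta_gamma = 0.81         # selection efficiency (from the text)

# PDG-era branching ratios, assumed here (not quoted in the text):
br_phi_eta_gamma = 1.304e-2  # Br(phi -> eta gamma)
br_eta_3pi0 = 0.3257         # Br(eta -> pi0 pi0 pi0)

N_phi = N_eta_gamma / (eff_eta_gamma * br_phi_eta_gamma * br_eta_3pi0)
# N_phi comes out around 1.2e9, consistent with the quoted
# (1.24 +- 0.03) x 10^9 within a few percent.
```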
The first uncertainty is the quadratic sum of the statistical uncertainty on $N_{\pi^+\pi^-\pi^0}$ and of the uncertainty on the background; the second one is systematic, mainly due to the absolute normalization, and includes a 1\% error due to the efficiency evaluation.\\ The two branching ratios (\ref{eq:brneutral}) and (\ref{eq:brcharged}) are compatible with the old KLOE results: $(8.51 \pm 0.51 \pm 0.57)\times 10^{-5}~(\eta\to\gamma\gamma)$ and $(7.96\pm 0.60 \pm 0.40)\times 10^{-5}~(\eta\to\pi^+\pi^-\pi^0)$\cite{aloisio:2002}. By combining the two results, taking into account the common normalization error, \begin{equation} Br(\phi\to\eta\pi^0\gamma)=(7.06 \pm 0.22)\times 10^{-5} \label{eq:avebr} \end{equation} is obtained, where the uncertainty includes both statistical and systematic contributions. \section{Fit of the $\eta\pi^0$ invariant mass distributions} \label{sec:fit} In order to extract the relevant parameters of the $a_0$, a simultaneous fit with the same set of free parameters has been performed on the two $\eta\pi^0$ invariant mass distributions, by minimizing the following $\chi^2$: \begin{displaymath} \chi^2=\sum_{f=\gamma\gamma,\pi^+\pi^-\pi^0}\sum_{i=1}^{n_f}\frac{(N_i^{(f)} -B_i^{(f)}-E_i^{(f)})^2}{{\sigma_i^{(f)}}^2} \end{displaymath} where $n_f$ is the number of bins of the fully neutral and charged $\eta\pi^0$ mass distributions, respectively; $N_i$ is the content of the $i$-th bin and $B_i$ is the number of background events to be subtracted from the $i$-th bin. The expected number of events, $E_i$, can be written as \begin{displaymath} E_i^{(f)}= N_{\phi}^{(f)}\sum_{j=1}^{n_{f}}\varepsilon_{ij}^{(f)} \frac{1}{\Gamma_{\phi}}\int_{{\rm bin~}j}\frac{d\Gamma_{th}(\phi\to\eta\pi^0\gamma)}{dm}dm\times Br(\eta\to f) \end{displaymath} where $m=M_{\eta\pi^0}$ and $\Gamma_{\phi}=4.26$ MeV\cite{amsler:2008}.
$\varepsilon_{ij}^{(f)}$ is the efficiency matrix (also referred to as the smearing matrix), representing the probability of a signal event with ``true'' mass in the $j$-th bin of the spectrum to be reconstructed in the $i$-th bin. The efficiency matrices, evaluated by MC, are almost diagonal; the off-diagonal elements take into account resolution effects as well as wrong photon pairings. The differential decay width $d\Gamma_{th}/dm$ has been parametrized according to two different models. \\ In the ``Kaon Loop'' (KL) model\cite{achasov:1989} the $\phi$ is coupled to the scalar meson through a loop of charged kaons. The theoretical function can be written as: \begin{equation} \frac{d\Gamma_{th}(\phi\to\eta\pi^0\gamma)}{dm}= \frac{d\Gamma_{scal}}{dm}+\frac{d\Gamma_{vect}}{dm} +\frac{d\Gamma_{interf}}{dm} \label{eq:klformula} \end{equation} The scalar term $d\Gamma_{scal}/dm$ is described in some detail in Appendix \ref{app:kl}. $d\Gamma_{vect}/dm$ is dominated by $\phi\to\rho\pi^0$ with $\rho\to\eta\gamma$ and is described in the framework of the Vector Dominance Models (VDM)\cite{achasov:2001}. The last term is the interference between the scalar and the vector amplitudes. \\ The free fit parameters are: the $a_0$ mass, the couplings $g_{a_0K^+K^-}$, $g_{a_0\eta\pi^0}$, the branching ratio of the vector contribution, the relative phase $\delta$ between the scalar and vector amplitudes, and, as a relative normalization between the two different final states, the ratio $R_{\eta}=Br(\eta\to\gamma\gamma)/Br(\eta\to\pi^+\pi^-\pi^0)$. \\ An alternative parametrization of the amplitude of the decay $\phi\to\eta\pi^0\gamma$ has also been used, following ref.\cite{isidori:2006}. A point-like coupling of the scalar to the $\phi$ meson is assumed; hence this model will be called ``No Structure'' (NS) in the following. The scalar meson is parametrized as a Breit-Wigner interfering with a polynomial scalar background and with a vector background (see Appendix \ref{app:ns}).
The free parameters in this case are the couplings $g_{a_0K^+K^-}$, $g_{a_0\eta\pi^0}$, and $g_{\phi a_0\gamma}$, the ratio $R_{\eta}$, the branching ratio of the vector background, and two complex coefficients, b$_0$ and b$_1$, of the scalar background. The $a_0$ mass is fixed to avoid fit instabilities, due to the large number of free parameters and to the large cancellations that occur among the terms of eq.(\ref{eq:nsfunction}). The chosen value of the $a_0$ mass is the result of the KL fit. \\ \begin{table}[htb] \caption{ Fit results for KL and NS models.} \begin{center} \begin{tabular}{l|c|c}\hline & KL & NS \\ \hline $M_{a_0}$ (MeV) & 982.5 $\pm$ 1.6 $\pm$ 1.1 & 982.5 (fixed) \\ $g_{a_0K^+K^-}$ (GeV) & 2.15 $\pm$ 0.06 $\pm$ 0.06 & 2.01 $\pm$ 0.07 $\pm$ 0.28 \\ $g_{a_0\eta\pi^0}$ (GeV) & 2.82 $\pm$ 0.03 $\pm$ 0.04 & 2.46 $\pm$ 0.08 $\pm$ 0.11 \\ $g_{\phi a_0\gamma}$ (GeV$^{-1}$) & & 1.83 $\pm$ 0.03 $\pm$ 0.08 \\ $\delta$ (deg.) & 222 $\pm$ 13 $\pm$ 3 & \\ B.r. of vector backg. ($\times 10^6$) & 0.92 $\pm$ 0.40 $\pm$ 0.15 & $\sim $ 0 \\ $R_{\eta}$ & 1.70 $\pm$ 0.04 $\pm$ 0.03 & 1.70 $\pm$ 0.03 $\pm$ 0.01 \\ $|{\rm b}_0|$ & & 14.9 $\pm$ 0.6 $\pm$ 0.5 \\ $arg({\rm b}_0)$ (deg.) & & 38.3 $\pm$ 1.1 $\pm$ 0.6 \\ $|{\rm b}_1|$ & & 21.3 $\pm$ 1.4 $\pm$ 0.9 \\ $arg({\rm b}_1)$ (deg.)
& & 57.3 $\pm$ 1.4 $\pm$ 1.1 \\ $\chi^2/ndf$ & 157.1 / 136 & 140.6 / 133 \\ $P(\chi^2)$ & 10.4\% & 30.9\% \\ \hline \end{tabular} \end{center} \label{tab:fitkl} \end{table} \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{a0_fit_neutral.eps} & \includegraphics[width=0.45\textwidth]{a0_fit_charged.eps} \\ \end{tabular} \caption{ Fit results: points are data after background subtraction; histograms represent the fit functions for KL (solid) and NS (dashed) models.} \label{fig:fitplotkl} \end{figure} \begin{table}[htb] \caption{Correlation coefficients among the relevant $a_0$ parameters.} \begin{center} \begin{tabular}{lccc||lccc}\hline \multicolumn{4}{l||}{KL model} & \multicolumn{4}{l}{NS model} \\ \hline & $M_{a_0}$ & $g_{a_0K^+K^-}$ & $g_{a_0\eta\pi^0}$ & & $g_{a_0K^+K^-}$ & $g_{a_0\eta\pi^0}$ & $g_{\phi a_0\gamma}$ \\ $M_{a_0}$ & 1. & & & $g_{a_0K^+K^-}$ & 1. & & \\ $g_{a_0K^+K^-}$ & 0.931 & 1. & & $g_{a_0\eta\pi^0}$ & -0.565 & 1. & \\ $g_{a_0\eta\pi^0}$ & 0.584 & 0.550 & 1. & $g_{\phi a_0\gamma}$ & -0.138 & 0.657 & 1. \\ \hline \end{tabular} \end{center} \label{tab:corrkl} \end{table} The fit results are shown in Fig.\ref{fig:fitplotkl}, and the parameter values are listed in Table \ref{tab:fitkl}. Good $\chi^2$ probability is obtained for both models. \\ The ratio $R_{\eta}$ is in good agreement with the PDG value 1.729 $\pm$ 0.028\cite{amsler:2008}, confirming that the two samples are consistent with each other. \\ A vector background smaller than the VDM predictions, $(3 \div 5)\times 10^{-6}$\cite{achasov:2001,bramon:1992}, is found in both fits, indicating that the $\phi\to\eta\pi^0\gamma$ process is largely dominated by $\phi\to a_0\gamma$. \\ In the KL case, the $a_0$ mass is in agreement with the PDG value (985.1 $\pm$ 1.3) MeV\cite{amsler:2008}. A ratio of the squared coupling constants $R_{a_0}=g^2_{a_0K^+K^-}/g^2_{a_0\eta\pi^0} = 0.58\pm 0.03\pm 0.03$ can be derived. 
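The structure of the simultaneous fit can be illustrated with a toy sketch: the theoretical spectrum, integrated in ``true'' mass bins, is folded with the efficiency (smearing) matrix to obtain the expected counts entering the $\chi^2$. All numbers used with this sketch are illustrative, not the KLOE inputs:

```python
import numpy as np

def expected_counts(theory_true, eff_matrix, n_phi, br_final):
    """E_i = N_phi * Br(eta -> f) * sum_j eps_ij * t_j, where t_j is the
    normalized theory width integrated over true bin j."""
    return n_phi * br_final * (eff_matrix @ theory_true)

def fit_chi2(data, bkg, theory_true, eff_matrix, n_phi, br_final, sigma):
    """chi^2 between background-subtracted data and the folded theory."""
    e = expected_counts(theory_true, eff_matrix, n_phi, br_final)
    return float(np.sum(((data - bkg - e) / sigma) ** 2))
```

In the real fit this quantity is summed over the two final states and minimized with respect to the model parameters that determine the true-bin spectrum.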
The $g_{\phi a_0\gamma}$ is not a free parameter of this model, but can be obtained according to the formula: \begin{eqnarray} g_{\phi a_0\gamma} & = \sqrt{\frac{3}{\alpha}\left(\frac{2M_{\phi}}{M^2_{\phi}-M^2_{a_0}}\right)^3 \Gamma_{\phi}Br(\phi\to\eta\pi^0\gamma)} = \nonumber \\ \label{eq:gpag} & \\ & = 1.58\pm 0.10\pm 0.16~{\rm GeV}^{-1}\nonumber \end{eqnarray} The $a_0$ width obtained from eq.(\ref{eq:klwidth}) is $\Gamma_{a_0}(M_{a_0}) \simeq 105$ MeV.\\ In Table \ref{tab:corrkl} the correlation coefficients among the $a_0$ parameters are shown.\\ The couplings $g_{a_0K^+K^-}$ and $g_{a_0\eta\pi^0}$ of the NS fit, and therefore the ratio $R_{a_0} = 0.67\pm 0.06\pm 0.13$, are in agreement with the KL values. In the NS case $g_{\phi a_0\gamma}$ can be determined directly and is compatible with the value of eq.(\ref{eq:gpag}). From this fit a total decay width $\Gamma_{a_0}(M_{a_0}) \simeq 80$ MeV can be evaluated according to eq.(\ref{eq:nswidth}). \\ The systematic uncertainties on the parameters account for: {\it (i)} the sensitivity to the fixed parameters (the $a_0$ coupling to $\eta^{\prime}\pi^0$, $g_{a_0\eta^{\prime}\pi^0}$, and $g_{\phi K^+K^-}$ in the KL model, $M_{a_0}$ in the NS model); {\it (ii)} the normalization uncertainty; {\it (iii)} the data-MC discrepancy in the fraction of wrong photon pairings (12\% from data and 14\% from MC). \\ \section{Unfolding of the $\eta\pi^0$ invariant mass distribution} In order to allow a better comparison with other experimental results and with theoretical models, the invariant mass distribution should be corrected for resolution and smearing effects. Therefore an unfolding procedure has been applied to the $\eta\pi^0$ invariant mass distributions, using the method described in ref.\cite{dagostini:1995}. This is an iterative procedure, based on Bayes' theorem, which does not require the inversion of the smearing matrix.
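A minimal sketch of one such iterative Bayesian unfolding follows: the smearing matrix is used only in the forward direction and is never inverted. The matrix convention (rows for reconstructed bins, columns for true bins) is a choice made for this sketch:

```python
import numpy as np

def bayes_unfold(measured, smearing, prior, n_iter=4):
    """Iterative Bayesian unfolding.  smearing[i, j] is the probability
    for an event in true bin j to be reconstructed in bin i, so the
    column sum is the efficiency of true bin j."""
    S = np.asarray(smearing, dtype=float)
    eff = S.sum(axis=0)                     # efficiency per true bin
    theta = np.asarray(prior, dtype=float)  # initial guess of true counts
    for _ in range(n_iter):
        p = theta / theta.sum()             # current prior probabilities
        folded = S @ p                      # expected reco-bin shares
        # Bayes' theorem: P(true j | reco i) = S_ij p_j / sum_k S_ik p_k
        posterior = (S * p) / folded[:, None]
        theta = (measured @ posterior) / eff
    return theta
```

With exact data and mild smearing, a few iterations started from a flat prior recover the true spectrum, and the dependence on the starting prior is weak, in line with what is observed in the analysis.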
\\ The unfolding has been performed separately on both invariant mass distributions before the background subtraction. The smearing matrices are the same used in the fits described in Sect.\ref{sec:fit}. \\ An initial distribution has to be provided as starting point of the iterative procedure; the unfolded distributions obtained starting from the output of the KL fit or from a flat distribution in $M_{\eta\pi^0}$ differ by less than 3\%. This difference has been taken into account in the uncertainty evaluation. \\ The bin by bin average of the two unfolded distributions is used to calculate the differential branching ratio $(1/\Gamma_{\phi})(d\Gamma(\phi\to\eta\pi^0\gamma)/dM_{\eta\pi^0})$ reported in Table \ref{tab:unfolding}. \begin{table}[htb] \caption{Differential branching ratio: $m$ is the bin center, the errors are the total uncertainties, and the bin width is 6.35 MeV.} \begin{center} \begin{tabular}{cc|cc}\hline $m$ & $(1/\Gamma_{\phi})(d\Gamma_{\eta\pi^0\gamma}/dm)\times 10^7$ & $m$ & $(1/\Gamma_{\phi})(d\Gamma_{\eta\pi^0\gamma}/dm)\times 10^7$ \\ (MeV) & (MeV$^{-1}$) & (MeV) & (MeV$^{-1}$) \\ \hline 691.53 & 0.06 $\pm$ 0.07 & 850.35 & 2.25 $\pm$ 0.13 \\ 697.88 & 0.18 $\pm$ 0.10 & 856.71 & 2.35 $\pm$ 0.14 \\ 704.24 & 0.18 $\pm$ 0.12 & 863.06 & 2.27 $\pm$ 0.13 \\ 710.59 & 0.31 $\pm$ 0.13 & 869.41 & 2.35 $\pm$ 0.13 \\ 716.94 & 0.30 $\pm$ 0.08 & 875.76 & 2.42 $\pm$ 0.16 \\ 723.29 & 0.38 $\pm$ 0.11 & 882.12 & 2.59 $\pm$ 0.16 \\ 729.65 & 0.53 $\pm$ 0.17 & 888.47 & 2.80 $\pm$ 0.14 \\ 736.00 & 0.51 $\pm$ 0.13 & 894.82 & 2.92 $\pm$ 0.19 \\ 742.35 & 0.53 $\pm$ 0.05 & 901.18 & 3.18 $\pm$ 0.20 \\ 748.71 & 0.67 $\pm$ 0.07 & 907.53 & 3.37 $\pm$ 0.17 \\ 755.06 & 0.81 $\pm$ 0.07 & 913.88 & 3.48 $\pm$ 0.17 \\ 761.41 & 0.94 $\pm$ 0.10 & 920.24 & 3.67 $\pm$ 0.17 \\ 767.76 & 0.99 $\pm$ 0.11 & 926.59 & 3.94 $\pm$ 0.17 \\ 774.12 & 0.99 $\pm$ 0.08 & 932.94 & 4.29 $\pm$ 0.25 \\ 780.47 & 1.08 $\pm$ 0.09 & 939.29 & 4.63 $\pm$ 0.25 \\ 786.82 & 1.30 $\pm$ 0.10 & 945.65 & 4.89 
$\pm$ 0.21 \\ 793.18 & 1.27 $\pm$ 0.13 & 952.00 & 5.20 $\pm$ 0.22 \\ 799.53 & 1.42 $\pm$ 0.28 & 958.35 & 5.40 $\pm$ 0.28 \\ 805.88 & 1.63 $\pm$ 0.28 & 964.71 & 5.44 $\pm$ 0.33 \\ 812.24 & 1.71 $\pm$ 0.14 & 971.06 & 5.35 $\pm$ 0.22 \\ 818.59 & 1.79 $\pm$ 0.16 & 977.41 & 4.94 $\pm$ 0.21 \\ 824.94 & 1.66 $\pm$ 0.18 & 983.76 & 4.02 $\pm$ 0.19 \\ 831.29 & 1.82 $\pm$ 0.15 & 990.12 & 2.80 $\pm$ 0.27 \\ 837.65 & 1.96 $\pm$ 0.12 & 996.47 & 1.51 $\pm$ 0.32 \\ 844.00 & 2.13 $\pm$ 0.13 & & \\ \hline \end{tabular} \end{center} \label{tab:unfolding} \end{table} The uncertainties are both from statistics (data and MC) and from systematics. The main contribution to the systematic error is the difference between the two unfolded distributions. The correlation of the contents of nearest neighbour bins of invariant mass is about 50\%, for next-nearest neighbour bins is about 20\%, and is negligible for bin distance greater than two. \\ An additional uncertainty of 3\% on the absolute scale has to be considered, according to eq.(\ref{eq:avebr}). \\ To check this procedure, the unfolded distribution has been fit to the KL model, without requiring any smearing matrix. The parameters values are in good agreement with those of Table \ref{tab:fitkl}. \section{Conclusions} A high statistics study of the process $\phi\to\eta\pi^0\gamma$ has been performed, by selecting the decay chains corresponding to $\eta\to\gamma\gamma$ and $\eta\to\pi^+\pi^-\pi^0$.\\ $Br(\phi\to\eta\pi^0\gamma)=(7.01\pm 0.10 \pm 0.21)\times 10^{-5}$ and $(7.12\pm0.13\pm0.22)\times 10^{-5}$ respectively have been measured.\\ A simultaneous fit of the two invariant mass distributions has been performed, which shows that the two samples are consistent with each other.\\ Both models used in the fits, the $\phi-$scalar meson coupling through the kaon loop (KL model) and the direct coupling (NS model), are able to reproduce the experimental $\eta\pi^0$ mass distribution. 
\\ The fit shows that the $\phi\to\eta\pi^0\gamma$ decay is dominated by $\phi\to a_0(980)\gamma$, since the vector contribution is very small, $Br(e^+ e^-\to VP\to\eta\pi^0\gamma) < 10^{-6}$.\\ The fit also allows the extraction of the $a_0(980)$ mass and of its couplings to $\eta\pi^0$, $K^+K^-$, and to the $\phi$ meson. The mass agrees with the PDG value at the one-standard-deviation level. The two sets of couplings obtained from the fits agree with each other. Using these couplings, a total decay width of the $a_0(980)$ in the range $80 \div 105$ MeV is estimated. The ratio $R_{a_0} = g^2_{a_0K^+K^-}/$ $g^2_{a_0\eta\pi^0} \simeq 0.6 - 0.7$ is obtained. A large $g_{\phi a_0\gamma}$ has been found ($1.6 \div 1.8$ GeV$^{-1}$), suggesting a sizeable strange-quark content of the $a_0(980)$. \section{Acknowledgments} We thank the DA$\Phi$NE team for their efforts in maintaining low background running conditions and their collaboration during all data-taking. We want to thank our technical staff: G. F. Fortugno and F. Sborzacchi for their dedicated work to ensure an efficient operation of the KLOE Computing Center; M. Anelli for his continuous support to the gas system and the safety of the detector; A. Balla, M. Gatta, G. Corradi, and G. Papalino for the maintenance of the electronics; M. Santoni, G. Paoluzzi, and R. Rosellini for the general support to the detector; C. Piscitelli for his help during major maintenance periods. This work was supported in part by EURODAPHNE, contract FMRX-CT98-0169; by the German Federal Ministry of Education and Research (BMBF) contract 06-KA-957; by the German Research Foundation (DFG), 'Emmy Noether Programme', contracts DE839/1-4; by INTAS, contracts 96-624, 99-37; and by the EU Integrated Infrastructure Initiative HadronPhysics Project under contract number RII3-CT-2004-506078.
Q: react-native-swipe-gestures exclude specific children from swipe action I have a component wrapped with react-native-swipe-gestures:

    import React from 'react'
    import GestureRecognizer from 'react-native-swipe-gestures';
    import Carousel from 'react-native-reanimated-carousel'

    const MyComponent = () => {
      return (
        <GestureRecognizer
          onSwipeLeft={() => { handleSwipeTask('left') }}
          onSwipeRight={() => { handleSwipeTask('right') }}
        >
          {/* some view children */}
          <Carousel
            loop
            width={size.width}
            height={size.height}
            autoPlay={false}
            data={lists}
            scrollAnimationDuration={2000}
            onSnapToItem={e => console.log(e)}
            renderItem={({ item, index }) => {
              return (
                <View
                  style={{
                    width: '100%',
                    borderWidth: 1,
                    justifyContent: 'center',
                  }}
                >
                  <FastImage
                    key={`attachment-${index}`}
                    style={{
                      width: '100%',
                      height: '100%',
                    }}
                    source={
                      local
                        ? item.file_path
                        : {
                            uri: item.file_path,
                            priority: FastImage.priority.normal,
                            // cache: FastImage.cacheControl.cacheOnly
                          }
                    }
                    resizeMode={FastImage.resizeMode.contain}
                  />
                </View>
              )
            }}
            autoPlayInterval={4000}
            pagingEnabled={true}
          />
          {/* some view children */}
        </GestureRecognizer>
      )
    }

With the gesture recognizer, when I swipe left or right I change the view's data via handleSwipeTask. This works well, and the data includes an array of images shown as a carousel built with react-native-reanimated-carousel. The problem: when I swipe the carousel manually, the gesture recognizer also detects the swipe, so handleSwipeTask is triggered while the carousel slides. What I want is to disable that recognition only for the carousel component. I don't want to wrap every view except the carousel in the gesture recognizer (that's my last option if there is no other solution).
from django.db import models
from biz.firewall.settings import DIRECTION_EGRESS, DIRECTION_INGRESS, DIRECTIONS, ENTER_TYPE_IPV6, ENTER_TYPE_IPV4, ENTER_TYPES
from django.utils.translation import ugettext_lazy as _


class Firewall(models.Model):
    id = models.AutoField(primary_key=True)
    name = models.CharField(_("Firewall Name"), null=False, blank=False, max_length=128)
    firewall_id = models.CharField(_("OS Firewall UUID"), null=True, blank=True, max_length=128)
    desc = models.CharField(_("Firewall desc"), null=True, blank=True, max_length=50)
    is_default = models.BooleanField(_("Default"), null=False, blank=False, default=False)
    user = models.ForeignKey('auth.User')
    user_data_center = models.ForeignKey("idc.UserDataCenter")
    create_date = models.DateTimeField(_("Create Date"), auto_now_add=True)
    deleted = models.BooleanField(_("Deleted"), default=False)

    class Meta:
        db_table = "firewall"


class FirewallRules(models.Model):
    id = models.AutoField(primary_key=True)
    firewall = models.ForeignKey('Firewall')
    firewall_rules_id = models.CharField(_("OS Firewall Rules UUID"), null=True, blank=True, max_length=40)
    direction = models.CharField(_("Direction"), null=True, blank=True, max_length=10, choices=DIRECTIONS, default=DIRECTION_INGRESS)
    ether_type = models.CharField(_("Ether type"), null=True, blank=True, max_length=40, choices=ENTER_TYPES, default=ENTER_TYPE_IPV4)
    port_range_min = models.IntegerField(_("Port range min"), null=True, blank=True, default=0)
    port_range_max = models.IntegerField(_("Port range max"), null=True, blank=True, default=0)
    protocol = models.CharField(_("Protocol"), null=True, blank=True, max_length=40)
    remote_group_id = models.CharField(_("remote group id UUID"), null=True, blank=True, max_length=40)
    remote_ip_prefix = models.CharField(_("remote ip prefix"), null=True, blank=True, max_length=255, default='0.0.0.0/0')
    is_default = models.BooleanField(_("Default"), null=False, blank=False, default=False)
    user = models.ForeignKey('auth.User')
    user_data_center = models.ForeignKey("idc.UserDataCenter")
    deleted = models.BooleanField(_("Deleted"), default=False)
    create_date = models.DateTimeField(_("Create Date"), auto_now_add=True)

    class Meta:
        db_table = "firewall_rules"
C'waam And Koptu: The Fish at the Center of the Klamath Basin's Water Crisis C'waam, also known as Lost River suckers congregating to spawn on Sucker Springs in Upper Klamath Lake, Oregon. (Photo: Brian Hayes, USGS-Klamath Falls Field Station, public domain) In the drought-stricken Klamath Basin along the California-Oregon border, water is a precious resource. Who gets that water hinges, in large part, on two endemic species of fish that make their home there and nowhere else in the world. Jefferson Public Radio reporter Erik Neumann reports. BASCOMB: In the drought-stricken Klamath Basin along the California-Oregon border, water is a precious resource. Who gets that water hinges, in large part, on two endemic species of fish that make their home on the Klamath and nowhere else in the world. Jefferson Public Radio reporter Erik Neumann has the story. [BOAT SOUNDS] NEUMANN: Biologist Alex Gonyaw aims his Boston Whaler up the eastern shore of Upper Klamath Lake. He's showing off what, he says, used to be abundant habitat for juvenile fish. GONYAW: It's a mosaic of cattails and willows and tulles, or bullrushes. NEUMANN: At almost 30 miles long, Upper Klamath Lake is the home to several types of fish that only live here. GONYAW: So, the more hiding places for juvenile creatures the better they generally tend to do. NEUMANN: Two of them are called C'waam and Koptu in the traditional Klamath language or in English the Lost River and shortnose sucker. They have a stubby face and wide lips and can live to be 50 years old. GONYAW: They're an endemic species. It's only found here, nowhere else in the universe and due to their sort of near extinction level status, they are becoming something of a figurehead in the water crisis here. NEUMANN: In recent years, the juvenile fish have been dying, causing the overall population to crash. Five years ago, when Gonyaw started working for the tribes, there were about 20,000 shortnose suckers in the lake. 
Estimates today are just 3,400. The Lost River sucker is disappearing at a similar rate. Exactly why these fish are dying is unclear, but biologists believe it's because of poor water quality and habitat loss that's impacted by low water in the lake. Those factors make their future grim. GONYAW: There's a catastrophic event likely in the next few years. NEUMANN: In this extremely dry year in the Klamath Basin, much of the debate over who gets water depends on these fish. Water flowing out of the lake has been shut off to farmers who rely on the federally managed irrigation system. Even further down the Klamath River, threatened salmon are also getting the bare minimum. Besides being protected under the Endangered Species Act, the C'waam and Koptu are culturally important to the Klamath Tribes who say they've subsisted on them since time immemorial. At a recent rally in Klamath Falls, Tribal Chairman Don Gentry talked about how the Klamath people prayed for the fish to return after hard winters. GENTRY: Those fish are so important. We wouldn't be here likely without those fish that helped us survive. NEUMANN: The declining fish numbers also illustrate a problem with the US government's treaty. In 1864 the Klamath Tribes gave up around 20 million acres of land, in exchange for the right to hunt and fish. Gentry says those treaty rights don't mean much if there are no fish to catch. Klamath Tribes Senior Fish Biologist Alex Gonyaw boats across Upper Klamath Lake. Gonyaw is working to try to save the endangered C'waam and Koptu, or Lost River and shortnose suckers that live in the lake. (Photo: Erik Neumann / JPR) GENTRY: What good is a treaty if you don't have the resources? NEUMANN: He says the Endangered Species Act is meant to prevent species from going extinct. It doesn't live up to the treaty responsibility of providing harvestable resources. GENTRY: So we're basically relegated to the ESA. And that's.. that's very minimal…It's not even working, you know, for us. 
But that's the thing that we have. NEUMANN: The Klamath Tribes have senior water rights. But farmers in the basin are the other group that is linked to these fish. Mark Johnson represents irrigators with the group Klamath Water Users Association. JOHNSON: Ultimately the farmers they want as they want all fish species to thrive is if the fish are doing well, everybody's doing well. NEUMANN: For 15 years Johnson studied Lost River and shortnose suckers as a fish biologist with the US Geological Survey. One of the big frustrations from the irrigator standpoint, he says, is that water is prioritized to protect fish, but they're still dying. But taking more water out of the lake would be gambling with the existence of a species. JOHNSON: Yeah, I mean you are. But in terms of an extinction level event, I don't think that's actually going to happen. But on that trajectory we're on right now, basically managing the lake the same way we have for over 20 years we haven't moved the needle. So, something has to change. NEUMANN: There are no long-term solutions for saving the native fish populations. For the first time this year the Klamath Tribes are raising juvenile fish from eggs in a hatchery. When mature, they'll be released in Upper Klamath Lake. This exceptionally dry year is shining a spotlight on the Klamath Basin and how there just isn't enough water to go around. And with current climate trends, there's little reason to think abundant water will be available any time soon. Erik Neumann, JPR News. BASCOMB: Reporter Erik Neumann's story comes to us courtesy of Jefferson Public Radio. Find this story and more coverage of the Klamath water crisis on the JPR website Watch a Klamath Tribes video about the fight to save their sacred C'waam and Koptu fish About Reporter Erik Neumann
Sermon: Sunday, August 7, 2016: Twelfth Sunday after Pentecost Erik Christensen Sermons August 7, 2016 August 10, 2016 8 Minutes Texts: Genesis 15:1-6 + Psalm 33:12-22 + Hebrews 11:1-3, 8-16 + Luke 12:32-40 The world is full of heroes. That's true every week, but this week they've really been on display. Yusra Mardini, the 18-year-old Syrian refugee who, together with her sister, pushed a boat carrying twenty people through the sea for three hours, saving their lives and is now competing in the Olympics under the Olympic banner. Khizr and Ghazala Khan, the parents of Army Captain Humayun Khan (himself a hero), Pakistani immigrants who came to the United States seeking opportunity for their family and ended up at the center of national politics for the last two weeks following Khizr's impassioned speech denouncing the rhetoric of racism and islamophobia infecting our nation's political life. Khizr and Ghazala Khan at the Democratic National Convention A young refugee and two immigrants thrust onto the international stage because of who they are and what they have done — but also, I think, because we are hungry for stories of heroism right now. We are weary and battered by a season in our life together filled with stories of police violence and political cowardice. We are soul-sick from watching the weekly polls tell us how cynical and skeptical we have become about the possibility of meaningful change. We, who have so much, wrestle with a sense of helplessness and hopelessness. Then we hear the story of Yusra Mardini jumping out of her boat and literally pitting her body's strength against the relentlessness of the sea. We watch an unknown Muslim man stand up before the entire nation and proclaim his love for a country that has not loved him and his family half as well, but for which he and his wife gave their son. "They confessed that they were strangers and foreigners on the earth, for people who speak in this way make it clear that they are seeking a homeland. 
If they had been thinking of the land that they had left behind, they would have had opportunity to return. But as it is, they desire a better country, that is, a heavenly one. Therefore God is not ashamed to be called their God; indeed, God has prepared a city for them." (Heb. 11:13-16)

The Letter to the Hebrews is full of memorable turns of phrase, from the description of faith as "the assurance of things hoped for, the conviction of things not seen" (11:1) to the "great cloud of witnesses" (12:1) that surround us. If the verses I've just read are less familiar, they're no less full of poetry and power. In fact, I almost named the sermon series that begins today and will continue through the month of August "A Better Country," but decided that at this point in the presidential campaign it would be heard too narrowly as a focus on electoral politics, when what the author is really talking about is more akin to what Parker Palmer has termed "courage" in his book "Healing the Heart of Democracy: The Courage to Create a Politics Worthy of the Human Spirit."

Palmer dedicated that book to Christina Taylor Green, the 10-year-old girl who was shot and killed five years ago in the same event at which Congresswoman "Gabby" Giffords of Arizona was also shot; and to Addie Mae Collins, Denise McNair, Carole Robertson, and Cynthia Wesley, better known as the "four little girls" who died in the racist bombing of the 16th Street Baptist Church in Birmingham, Alabama during the Civil Rights movement fifty-three years ago. In his dedication, he writes, "When we forget that politics is about weaving a fabric of compassion and justice on which everyone can depend, the first to suffer are the most vulnerable among us — our children, the elderly, the mentally ill, the poor, and the homeless. As they suffer, so does the integrity of our democracy."
That's what draws me to the image of "a better country," and what also ultimately moved me to title this series "Sight Unseen" — because what we are laboring for is something we have yet to see: a homeland here on this earth of which all people are equally residents, equally citizens, equally honored and cared for, "a better country, that is, a heavenly one." (Heb. 11:16a)

But it is exhausting, longing and laboring for this country to come into view. It is tempting to look back at the lands we have known and to convince ourselves that the world as it is is sufficient or, at least, the best we can hope for.

Then we hear Yusra's story, and we imagine her muscles freezing up in the cold sea as she pushes her fellow refugees toward that better country, the one she has never seen. Then we hear Khizr and Ghazala Khan, and see Khizr brandishing his pocket edition of the United States Constitution in the face of calls to ban an entire religious community from entering this nation, and we imagine the faith it must take to hold fast to promises made but not yet kept.

That is the heart of the Letter to the Hebrews: it is a letter to people who have not yet seen the promises of God fulfilled, who wonder if they will ever be kept. It is a word of encouragement to worn down people, a reminder that we have only come this far by faith. It is a roll call of the saints, the heroes, the ones like Abraham and Sarah, Yusra, Khizr and Ghazala, who left behind all they had known and fixed their eyes on the stars to guide them into a future filled with hope. It is a call to faith in things we have only hoped for, and conviction in sights as yet unseen (11:1).

And now, if you'll let me, I want to make this all a bit more personal and talk for just a few moments about us as a congregation in light of this word from scripture. There is so much in this Letter to the Hebrews that makes me think of the journey we've been on for the last ten years.
The setting out without knowing entirely where we'd end up. The power of procreation at a late age manifested as a congregation made up of elderly people who saw wave after wave of young adults and even younger children begin to fill in the empty pews. The promise of a future filled with hope set alongside the constant exhortation to never give up. There's so much in this letter that feels familiar to me when I think of the distance we have come. I'm tempted to place our story in the background, as one more example among many, a word of encouragement for other congregations, other communities of faith preparing to leave behind the known past for the promised future. But this letter is still for us. This letter is calling out to us from history, urging us to cast our lot fully with the "strangers and foreigners on the earth" (Heb. 11:13). To, as Jesus puts it, "sell your possessions, and give alms" and "make purses for yourselves that do not wear out, an unfailing treasure in heaven." (Lk. 12:33) There is a convergence of conversations and actions happening in our congregation and in the communities that surround us that I believe is about to set us on a new course just as adventurous as the one we have been on these last ten years. Having already sold our chief possession, the building that had housed us for a century, we are now considering anew what it means for us to "give alms." For the last six months the Council has been deliberating with one another about the practical, ethical, and theological implications of the fact that we have gone, in a very short time, from being rich in land and poor in cash to being practically itinerant but carrying with us an enormous amount of money. 
Over the last few months the Social Justice committee has joined this conversation, sharing with the Council their intention to move us from a more shallow monthly focus on benevolent giving to a strategy of 6-month campaigns designed to deepen our engagement with partner organizations working in our community to build that better country, the one none of us has ever fully seen. We are beginning with Center for Changing Lives, an organization that grew out of Humboldt Park Social Services, which itself grew out of Humboldt Park United Methodist Church, a member of the Logan Square Ecumenical Alliance just two blocks north of us here on Mozart St. Center for Changing Lives provides financial coaching and employment assistance to families and individuals in our neighborhood struggling to make it in today's economy. As we learn more about their work and are shaped by it we'll not only be looking for ways to enhance their mission, but will benefit directly ourselves from their "Just Financials" curriculum, a powerful tool for helping us develop a shared vocabulary for connecting our values as Christian people to our actions with regard to our wealth, taking Jesus' encouragement to "sell your possessions and give alms" seriously enough to really ask how our relationship to God is shaping our relationship to what we earn and what we own. But stewardship of wealth is only one aspect of our vocation as baptized people. As we enter this new phase of our life together we will be talking more openly, not only in worship but in small groups with one another, about the ways we choose to steward the gifts of our lives. Whether at work or at home, with family or friends, how do we live out our baptisms? How do we live into that vocation, so that our whole life reflects our relationship to the God of life and love and liberation? 
We'll be listening for signs of the Holy Spirit's work in each other, deepening our capacities for love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, and self-control (Gal. 5:22-23).

[Photo: Pr. Bruce Ray standing in the center of the labyrinth at the Kimball Avenue Church "urban oasis" at last month's LSEA summer garden party.]

Which means that we'll be telling each other stories. From the testimonies that have been a growing part of our Sunday morning worship to the sorts of story-telling we saw at the ecumenical alliance's Garden Party last month, at which members of our separate congregations told stories about how they had come to Chicago, stories of migration from Honduras and Puerto Rico and Latvia. Stories filled with the kinds of quiet heroism that brought Yusra Mardini and the Khans to the international stage. Stories that we come to realize populate all our pasts, if we would only take the time to listen.

So we will. We will listen to each other's stories, in and out of worship, within this congregation and beyond its glass windows. We will listen, and we will grow. We will be experimenting with adding services this fall, not simply because those of us who are already here are feeling a little bit cramped, but because we know there are others in our neighborhood and across our city who are longing for a glimpse of that better country, that city God is already preparing for those who face the future by faith.

We will plan for growth and we will embrace it because we are inheritors of the same promise made to Abraham and Sarah, that their descendants would be more numerous than the stars in the sky. Those descendants, who are Jewish and Christian and Muslim, whose rich diversity points us beyond Abraham and Sarah to all of humanity, are our people and we are theirs in ways we know to be true even if we have never seen it lived out perfectly. But we see that future coming from a distance and we welcome it (Heb. 11:13).
#ifndef OPENMC_HDF5_INTERFACE_H
#define OPENMC_HDF5_INTERFACE_H

// Includes reconstructed for the truncated preamble (the guard name is taken
// from the closing #endif comment at the end of this header).
#include <algorithm> // std::copy, std::fill, std::max
#include <array>
#include <complex>
#include <cstddef>
#include <sstream>
#include <string>
#include <type_traits>
#include <vector>

#include "hdf5.h"
#include "xtensor/xadapt.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xtensor.hpp"

#include "openmc/error.h"    // fatal_error
#include "openmc/position.h" // Position

namespace openmc {

//==============================================================================
// Low-level internal functions
//==============================================================================

void read_attr(hid_t obj_id, const char* name, hid_t mem_type_id,
  void* buffer);

void write_attr(hid_t obj_id, int ndim, const hsize_t* dims, const char* name,
  hid_t mem_type_id, const void* buffer);

void read_dataset_lowlevel(hid_t obj_id, const char* name, hid_t mem_type_id,
  hid_t mem_space_id, bool indep, void* buffer);

void write_dataset_lowlevel(hid_t group_id, int ndim, const hsize_t* dims,
  const char* name, hid_t mem_type_id, hid_t mem_space_id, bool indep,
  const void* buffer);

bool using_mpio_device(hid_t obj_id);

//==============================================================================
// Normal functions that are used to read/write files
//==============================================================================

hid_t create_group(hid_t parent_id, const std::string& name);

inline hid_t create_group(hid_t parent_id, const std::stringstream& name)
{
  return create_group(parent_id, name.str());
}

hid_t file_open(const std::string& filename, char mode, bool parallel=false);

void write_string(hid_t group_id, const char* name, const std::string& buffer,
  bool indep);

std::vector<hsize_t> attribute_shape(hid_t obj_id, const char* name);
std::vector<std::string> dataset_names(hid_t group_id);
void ensure_exists(hid_t obj_id, const char* name, bool attribute=false);
std::vector<std::string> group_names(hid_t group_id);
std::vector<hsize_t> object_shape(hid_t obj_id);
std::string object_name(hid_t obj_id);

//==============================================================================
// Fortran compatibility functions
//==============================================================================

extern "C" {
bool attribute_exists(hid_t obj_id, const char* name);
size_t attribute_typesize(hid_t obj_id, const char* name);
hid_t create_group(hid_t parent_id, const char* name);
void close_dataset(hid_t dataset_id);
void close_group(hid_t group_id);
int dataset_ndims(hid_t dset);
size_t dataset_typesize(hid_t obj_id, const char* name);
hid_t file_open(const char* filename, char mode, bool parallel);
void file_close(hid_t file_id);
void get_name(hid_t obj_id, char* name);
int get_num_datasets(hid_t group_id);
int get_num_groups(hid_t group_id);
void get_datasets(hid_t group_id, char* name[]);
void get_groups(hid_t group_id, char* name[]);
void get_shape(hid_t obj_id, hsize_t* dims);
void get_shape_attr(hid_t obj_id, const char* name, hsize_t* dims);
bool object_exists(hid_t object_id, const char* name);
hid_t open_dataset(hid_t group_id, const char* name);
hid_t open_group(hid_t group_id, const char* name);
void read_attr_double(hid_t obj_id, const char* name, double* buffer);
void read_attr_int(hid_t obj_id, const char* name, int* buffer);
void read_attr_string(hid_t obj_id, const char* name, size_t slen,
  char* buffer);
void read_complex(hid_t obj_id, const char* name,
  std::complex<double>* buffer, bool indep);
void read_double(hid_t obj_id, const char* name, double* buffer, bool indep);
void read_int(hid_t obj_id, const char* name, int* buffer, bool indep);
void read_llong(hid_t obj_id, const char* name, long long* buffer,
  bool indep);
void read_string(hid_t obj_id, const char* name, size_t slen, char* buffer,
  bool indep);
void read_tally_results(hid_t group_id, hsize_t n_filter, hsize_t n_score,
  double* results);
void write_attr_double(hid_t obj_id, int ndim, const hsize_t* dims,
  const char* name, const double* buffer);
void write_attr_int(hid_t obj_id, int ndim, const hsize_t* dims,
  const char* name, const int* buffer);
void write_attr_string(hid_t obj_id, const char* name, const char* buffer);
void write_double(hid_t group_id, int ndim, const hsize_t* dims,
  const char* name, const double* buffer, bool indep);
void write_int(hid_t group_id, int ndim, const hsize_t* dims,
  const char* name, const int* buffer, bool indep);
void write_llong(hid_t group_id, int ndim, const hsize_t* dims,
  const char* name, const long long* buffer, bool indep);
void write_string(hid_t group_id, int ndim, const hsize_t* dims, size_t slen,
  const char* name, char const* buffer, bool indep);
void write_tally_results(hid_t group_id, hsize_t n_filter, hsize_t n_score,
  const double* results);
} // extern "C"

//==============================================================================
// Template struct used to map types to HDF5 datatype IDs, which are stored
// using the type hid_t. By having a single static data member, the template
// can be specialized for each type we know of. The specializations appear in
// the .cpp file since they are definitions.
//==============================================================================

template<typename T>
struct H5TypeMap { static const hid_t type_id; };

//==============================================================================
// Templates/overloads for read_attribute
//==============================================================================

// Scalar version
template<typename T>
void read_attribute(hid_t obj_id, const char* name, T& buffer)
{
  read_attr(obj_id, name, H5TypeMap<T>::type_id, &buffer);
}

// array version
template<typename T, std::size_t N>
inline void read_attribute(hid_t obj_id, const char* name,
  std::array<T, N>& buffer)
{
  read_attr(obj_id, name, H5TypeMap<T>::type_id, buffer.data());
}

// vector version
template<typename T>
void read_attribute(hid_t obj_id, const char* name, std::vector<T>& vec)
{
  // Get shape of attribute array
  auto shape = attribute_shape(obj_id, name);

  // Allocate new array to read data into
  std::size_t size = 1;
  for (const auto x : shape) size *= x;
  vec.resize(size);

  // Read data from attribute
  read_attr(obj_id, name, H5TypeMap<T>::type_id, vec.data());
}

// Generic array version
template<typename T>
void read_attribute(hid_t obj_id, const char* name, xt::xarray<T>& arr)
{
  // Get shape of attribute array
  auto shape = attribute_shape(obj_id, name);

  // Allocate new array to read data into
  std::size_t size = 1;
  for (const auto x : shape) size *= x;
  std::vector<T> buffer(size);

  // Read data from attribute
  read_attr(obj_id, name, H5TypeMap<T>::type_id, buffer.data());

  // Adapt array into xarray
  arr = xt::adapt(buffer, shape);
}

// overload for std::string
inline void read_attribute(hid_t obj_id, const char* name, std::string& str)
{
  // Create buffer to read data into
  auto n = attribute_typesize(obj_id, name);
  char* buffer = new char[n];

  // Read attribute and set string
  read_attr_string(obj_id, name, n, buffer);
  str = std::string{buffer, n};
  delete[] buffer;
}

// overload for std::vector<std::string>
inline void read_attribute(hid_t obj_id, const char* name,
  std::vector<std::string>& vec)
{
  auto dims = attribute_shape(obj_id, name);
  auto m = dims[0];

  // Allocate a C char array to get strings
  auto n = attribute_typesize(obj_id, name);
  char* buffer = new char[m*n];

  // Read char data in attribute
  read_attr_string(obj_id, name, n, buffer);

  for (int i = 0; i < m; ++i) {
    // Determine proper length of string -- strlen doesn't work because
    // buffer[i] might not have any null characters
    std::size_t k = 0;
    for (; k < n; ++k)
      if (buffer[i*n + k] == '\0') break;

    // Create string based on (char*, size_t) constructor
    vec.emplace_back(&buffer[i*n], k);
  }
  delete[] buffer;
}

//==============================================================================
// Templates/overloads for read_dataset and related methods
//==============================================================================

// Template for scalars. We need to be careful that the compiler does not use
// this version of read_dataset for vectors, arrays, or other non-scalar types.
// enable_if_t allows us to conditionally remove the function from overload
// resolution when the type T doesn't meet a certain criterion.
template<typename T>
inline std::enable_if_t<std::is_scalar<std::decay_t<T>>::value>
read_dataset(hid_t obj_id, const char* name, T& buffer, bool indep=false)
{
  read_dataset_lowlevel(obj_id, name, H5TypeMap<T>::type_id, H5S_ALL, indep,
    &buffer);
}

// overload for std::string
inline void read_dataset(hid_t obj_id, const char* name, std::string& str,
  bool indep=false)
{
  // Create buffer to read data into
  auto n = dataset_typesize(obj_id, name);
  char* buffer = new char[n];

  // Read dataset and set string
  read_string(obj_id, name, n, buffer, indep);
  str = std::string{buffer, n};
  delete[] buffer;
}

// array version
template<typename T, std::size_t N>
inline void read_dataset(hid_t dset, const char* name,
  std::array<T, N>& buffer, bool indep=false)
{
  read_dataset_lowlevel(dset, name, H5TypeMap<T>::type_id, H5S_ALL, indep,
    buffer.data());
}

// vector version
template<typename T>
void read_dataset(hid_t dset, std::vector<T>& vec, bool indep=false)
{
  // Get shape of dataset
  std::vector<hsize_t> shape = object_shape(dset);

  // Resize vector to appropriate size
  vec.resize(shape[0]);

  // Read data into vector
  read_dataset_lowlevel(dset, nullptr, H5TypeMap<T>::type_id, H5S_ALL, indep,
    vec.data());
}

template<typename T>
void read_dataset(hid_t obj_id, const char* name, std::vector<T>& vec,
  bool indep=false)
{
  hid_t dset = open_dataset(obj_id, name);
  read_dataset(dset, vec, indep);
  close_dataset(dset);
}

template<typename T>
void read_dataset(hid_t dset, xt::xarray<T>& arr, bool indep=false)
{
  // Get shape of dataset
  std::vector<hsize_t> shape = object_shape(dset);

  // Allocate space in the array to read data into
  arr.resize(shape);

  // Read data from dataset
  read_dataset_lowlevel(dset, nullptr, H5TypeMap<T>::type_id, H5S_ALL, indep,
    arr.data());
}

template<>
void read_dataset(hid_t dset, xt::xarray<std::complex<double>>& arr,
  bool indep);

template<typename T>
void read_dataset(hid_t obj_id, const char* name, xt::xarray<T>& arr,
  bool indep=false)
{
  // Open dataset and read array
  hid_t dset = open_dataset(obj_id, name);
  read_dataset(dset, arr, indep);
  close_dataset(dset);
}

template<typename T, std::size_t N>
void read_dataset(hid_t obj_id, const char* name, xt::xtensor<T, N>& arr,
  bool indep=false)
{
  // Open dataset and determine its shape
  hid_t dset = open_dataset(obj_id, name);
  std::vector<hsize_t> hsize_t_shape = object_shape(dset);
  close_dataset(dset);

  // cast from hsize_t to size_t
  std::vector<size_t> shape(hsize_t_shape.size());
  for (int i = 0; i < shape.size(); i++) {
    shape[i] = static_cast<size_t>(hsize_t_shape[i]);
  }

  // Allocate new xarray to read data into
  xt::xarray<T> xarr(shape);

  // Read data from the dataset
  read_dataset(obj_id, name, xarr);

  // Copy into xtensor
  arr = xarr;
}

// overload for Position
inline void read_dataset(hid_t obj_id, const char* name, Position& r,
  bool indep=false)
{
  std::array<double, 3> x;
  read_dataset(obj_id, name, x, indep);
  r.x = x[0];
  r.y = x[1];
  r.z = x[2];
}

template<typename T, std::size_t N>
inline void read_dataset_as_shape(hid_t obj_id, const char* name,
  xt::xtensor<T, N>& arr, bool indep=false)
{
  hid_t dset = open_dataset(obj_id, name);

  // Allocate new array to read data into
  std::size_t size = 1;
  for (const auto x : arr.shape()) size *= x;
  std::vector<T> buffer(size);

  // Read data from dataset
  read_dataset_lowlevel(dset, nullptr, H5TypeMap<T>::type_id, H5S_ALL, indep,
    buffer.data());

  // Adapt into xtensor of the requested shape
  arr = xt::adapt(buffer, arr.shape());

  close_dataset(dset);
}

template<typename T, std::size_t N>
inline void read_nd_vector(hid_t obj_id, const char* name,
  xt::xtensor<T, N>& result, bool must_have=false)
{
  if (object_exists(obj_id, name)) {
    read_dataset_as_shape(obj_id, name, result, true);
  } else if (must_have) {
    fatal_error(std::string("Must provide " + std::string(name) + "!"));
  }
}

//==============================================================================
// Templates/overloads for write_attribute
//==============================================================================

template<typename T>
inline void write_attribute(hid_t obj_id, const char* name, T buffer)
{
  write_attr(obj_id, 0, nullptr, name, H5TypeMap<T>::type_id, &buffer);
}

inline void write_attribute(hid_t obj_id, const char* name,
  const char* buffer)
{
  write_attr_string(obj_id, name, buffer);
}

inline void write_attribute(hid_t obj_id, const char* name,
  const std::string& buffer)
{
  write_attr_string(obj_id, name, buffer.c_str());
}

template<typename T, std::size_t N>
inline void write_attribute(hid_t obj_id, const char* name,
  const std::array<T, N>& buffer)
{
  hsize_t dims[] {N};
  write_attr(obj_id, 1, dims, name, H5TypeMap<T>::type_id, buffer.data());
}

template<typename T>
inline void write_attribute(hid_t obj_id, const char* name,
  const std::vector<T>& buffer)
{
  hsize_t dims[] {buffer.size()};
  write_attr(obj_id, 1, dims, name, H5TypeMap<T>::type_id, buffer.data());
}

inline void write_attribute(hid_t obj_id, const char* name, Position r)
{
  std::array<double, 3> buffer {r.x, r.y, r.z};
  write_attribute(obj_id, name, buffer);
}

//==============================================================================
// Templates/overloads for write_dataset
//==============================================================================

// Template for scalars (ensured by SFINAE)
template<typename T>
inline std::enable_if_t<std::is_scalar<std::decay_t<T>>::value>
write_dataset(hid_t obj_id, const char* name, T buffer)
{
  write_dataset_lowlevel(obj_id, 0, nullptr, name, H5TypeMap<T>::type_id,
    H5S_ALL, false, &buffer);
}

inline void write_dataset(hid_t obj_id, const char* name, const char* buffer)
{
  write_string(obj_id, name, buffer, false);
}

template<typename T, std::size_t N>
inline void write_dataset(hid_t obj_id, const char* name,
  const std::array<T, N>& buffer)
{
  hsize_t dims[] {N};
  write_dataset_lowlevel(obj_id, 1, dims, name, H5TypeMap<T>::type_id,
    H5S_ALL, false, buffer.data());
}

inline void write_dataset(hid_t obj_id, const char* name,
  const std::vector<std::string>& buffer)
{
  auto n {buffer.size()};
  hsize_t dims[] {n};

  // Determine length of longest string, including \0
  size_t m = 1;
  for (const auto& s : buffer) {
    m = std::max(m, s.size() + 1);
  }

  // Copy data into contiguous buffer
  char* temp = new char[n*m];
  std::fill(temp, temp + n*m, '\0');
  for (int i = 0; i < n; ++i) {
    std::copy(buffer[i].begin(), buffer[i].end(), temp + i*m);
  }

  // Write 2D data
  write_string(obj_id, 1, dims, m, name, temp, false);

  // Free temp array
  delete[] temp;
}

template<typename T>
inline void write_dataset(hid_t obj_id, const char* name,
  const std::vector<T>& buffer)
{
  hsize_t dims[] {buffer.size()};
  write_dataset_lowlevel(obj_id, 1, dims, name, H5TypeMap<T>::type_id,
    H5S_ALL, false, buffer.data());
}

// Template for xarray, xtensor, etc.
template<typename D>
inline void write_dataset(hid_t obj_id, const char* name,
  const xt::xcontainer<D>& arr)
{
  using T = typename D::value_type;
  auto s = arr.shape();
  std::vector<hsize_t> dims {s.cbegin(), s.cend()};
  write_dataset_lowlevel(obj_id, dims.size(), dims.data(), name,
    H5TypeMap<T>::type_id, H5S_ALL, false, arr.data());
}

inline void write_dataset(hid_t obj_id, const char* name, Position r)
{
  std::array<double, 3> buffer {r.x, r.y, r.z};
  write_dataset(obj_id, name, buffer);
}

inline void write_dataset(hid_t obj_id, const char* name, std::string buffer)
{
  write_string(obj_id, name, buffer.c_str(), false);
}

} // namespace openmc

#endif // OPENMC_HDF5_INTERFACE_H
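Two idioms in this header (the H5TypeMap single-static-member type map, and the enable_if-guarded scalar overloads) can be exercised without HDF5 at all. Below is a minimal standalone sketch: integer IDs stand in for hid_t, and all names are invented for illustration, not part of this header.

```cpp
#include <cassert>
#include <cstdint>
#include <type_traits>

// Stand-in for HDF5's hid_t handle type.
using type_id_t = std::int32_t;

// One static data member, specialized per supported type, just like
// H5TypeMap. The specializations would normally live in a .cpp file.
template<typename T> struct TypeMap { static const type_id_t type_id; };
template<> const type_id_t TypeMap<int>::type_id = 1;
template<> const type_id_t TypeMap<double>::type_id = 2;

// enable_if_t removes this overload from resolution for non-scalar T,
// mirroring the scalar read_dataset/write_dataset templates above.
template<typename T>
std::enable_if_t<std::is_scalar<std::decay_t<T>>::value, type_id_t>
dataset_type_id(const T&) { return TypeMap<T>::type_id; }
```

With this pattern, supporting a new type is a one-line specialization rather than a new overload, and a call with an unsupported or non-scalar type fails at compile time.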
>> want this facility, and it's a bit of a shame to have to reimplement it.

> Ok - that's a reasonable request. Can you raise an issue for it?

Would it be helpful if I put it in as a wishlist request on the Python tracker?

>> The other is unorderable_list_difference.

Excellent. Thank you for your quick response.

>> 'unsortable' items, as I find it's usually easier for me to read.

> and why is the output not good enough?

I find the first more convenient. misleading as it is to be helpful.
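For anyone who does end up reimplementing the helper discussed in this thread: the sketch below is a plausible stand-in for `unorderable_list_difference` (the stdlib keeps a private version in `unittest.util`; the name and signature here are illustrative assumptions, not the stdlib API). It diffs two lists whose elements can be neither sorted nor hashed, relying only on `==`:

```python
def unorderable_list_difference(expected, actual):
    """Return (missing, unexpected) for lists whose items can't be
    sorted or hashed. O(n*m), since only equality is available."""
    missing = []
    remaining = list(actual)  # copy so matches can be consumed
    for item in expected:
        try:
            remaining.remove(item)  # removes the first equal element
        except ValueError:
            missing.append(item)
    # whatever is left in `remaining` had no partner in `expected`
    return missing, remaining
```

The quadratic cost is the price of using equality alone; `assertCountEqual` falls back to a similar equality-only strategy when its inputs contain unhashable items.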
Q: "'CREATE PROCEDURE' must be only statement in the batch" in a procedure

How can I fix this error when I run this procedure? I get the following error: 'CREATE PROCEDURE' must be only statement in the batch. Can anyone help me?

A: This question has been answered before. I'm leaving the links here to avoid duplication, but the short solution is below:

*https://dba.stackexchange.com/questions/147384/sql80001-incorrect-syntax-create-procedure-must-be-the-only-statement
*https://stackoverflow.com/questions/41022645/create-procedure-must-be-the-only-statement-in-the-batch-erro
*https://stackoverflow.com/questions/34773473/incorrect-syntax-create-procedure-must-be-the-only-statement-in-the-batch
*And more...

The solution is the following: SQL Server requires the CREATE PROCEDURE statement to be the only one in its batch (tab/connection/etc.). Therefore, you can either leave that statement on its own in the file, or add the reserved word GO after the statements you have above it. Something like this:

....
SELECT * FROM tabla;
GO
CREATE PROCEDURE ...

Again, this solution is also covered in this link: https://stackoverflow.com/questions/41022645/create-procedure-must-be-the-only-statement-in-the-batch-erro

Hope this helps!
\newcommand\mm{{\rm m}}
\newcommand\meas{{\, \rm meas\,}}
\newcommand\cO{{\cal O}}
\newcommand\id{{\, \rm id \,}}
\newcommand\Id{{\, \rm Id \,}}
\newcommand\fd{{n}}
\newcommand\sd{{m}}
\newcommand\td{{n}}
\newcommand\fk{{k'}}
\newcommand\sk{{k}}
\newcommand\tk{{k}}
\newcommand\ffd{{n'}}
\newcommand\sfd{{m'}}
\newcommand\fg{{{\gamma}_{\#}}}
\newcommand\sg{{\bar{\gamma}}}
\newcommand\ssg{{{\gamma}}}
\newcommand\nf{{\rm h}}
\newcommand\nfj{{\rm h}_}
\newcommand\pert{{f}}
\newcommand\Ham{{{h}}}
\newcommand\EEj{{{h}}_}
\newcommand\fo{{\o'}}
\newcommand\so{{\o}}
\newcommand\Hs{{\MM}}
\newcommand\sHs{{\hat M}}
\newcommand\nnf{{+}}
\newcommand\inHs{{\bar M}}
\newcommand\pertnorm{{E}}
\newcommand\KAM{{\hat E}}
\newcommand\fnf{\breve H_0}
\newcommand\fHam{\breve{H}}
\newcommand\fpert{\breve{P}}

\begin{document}

\title{\bf Perturbation theory and canonical coordinates\\ in celestial mechanics\thanks{
Notes of two courses given, respectively, at the 18th School of Interaction Between Dynamical Systems and Partial Differential Equations (Barcelona, June 27--July 1, 2022) and at the XLVII Summer School on Mathematical Physics (Ravello, 12--24 September, 2022). I warmly thank the Centre de Recerca Matematica of Bellaterra (Barcelona) and Istituto Nazionale di Alta Matematica and the Gruppo Nazionale per la Fisica Matematica for their kind hospitality and especially A. Delshams, M. Guardia, T. Ruggeri, G. Saccomandi and T. M-Seara for their interest. Sections \ref{sec: K map}, \ref{The reduction of perihelia}, \ref{P-map vs rotations and reflections}, \ref{Global Kolmogorov tori} and \ref{Coexistence of stable and whiskered tori} are based on work done while the author was funded by the ERC grant 677793 StableChaoticPlanetM (2016--2022).

{\bf MSC2000 numbers:} primary: 34C20, 70F10, 37J10, 37J15, 37J40; secondary: 34D10, 70F07, 70F15, 37J25, 37J35.
}}

\author{Gabriella Pinzari\footnote{Department of Mathematics, University of Padua, e-mail address: {\tt pinzari@math.unipd.it}}}
\date{September 16, 2022}
\maketitle

\begin{abstract}\footnotesize{KAM theory owes most of its success to its initial motivation: the application to problems of celestial mechanics. The masterly application was offered by V.I. Arnold in the 60s, who worked out a theorem, which he named the ``Fundamental Theorem'' (FT), especially designed for the planetary problem. However, FT could really be used for that purpose only when, about 50 years later, a set of coordinates constructively taking the invariance by rotation and close--to--integrability into account was used. Since then, some progress has been made in the symplectic assessment of the problem, and here we review such results.}
\end{abstract}

\tableofcontents

\renewcommand{\theequation}{\arabic{equation}}
\setcounter{equation}{0}

\section{Some sets of canonical coordinates for many--body problems}

\subsection{$(1+n)$--body problem, Delaunay--Poincar\'e coordinates and Arnold's theorem}\label{sec: AT intro}

In the masterpiece \cite{arnold63}, a young and brilliant mathematician, named Vladimir Igorevich Arnold, stated, and partly proved, the following result.

\begin{theorem}{\bf ``Theorem of stability of planetary motions'', \cite[Chapter III, p. 125]{arnold63}}\label{Arnold Theorem}
For the majority of initial conditions under which the instantaneous orbits of the planets are close to circles lying in a single plane, perturbation of the planets on one another produces, in the course of an infinite interval of time, little change on these orbits provided the masses of the planets are sufficiently small.
{\rm[...]} In particular {\rm[...]} in the $n$-body problem there exists a set of initial conditions having a positive Lebesgue measure and such that, if the initial positions and velocities of the bodies belong to this set, the distances of the bodies from each other will remain perpetually bounded.
\end{theorem}

\vskip.1in
\noindent
Let us summarize the main ideas behind the statement above.\\
After the symplectic reduction of the linear momentum, the $(1+n)$--body problem with masses $m_0$, $m_1$, $\ldots$, $m_n$ is governed by the $3n$--degrees of freedom Hamiltonian (see Appendix \ref{appendix})
\beqa{Helio}{\cal H}&=\,&\sum_{1\leq i\leq n}\left(\frac{|\by_i|^2}{2\mu_i}-\frac{\mu_i M_i}{|\bx_i|}\right)+\sum_{1\leq i<j\leq n}\left(\frac{\by_i\cdot \by_j}{ m_0}-\frac{m_i m_j}{|\bx_i-\bx_j|}\right)
\end{eqnarray}
where $\bx_i$ denotes the difference between the position of the $i^{\rm th}$ planet and the position of the body of mass $m_0$, $\by_i$ are the associated symplectic momenta, $\bx\cdot \by=\sum_{1\le i\le 3}x_i y_i$ and $|\bx|:=(\bx\cdot \bx)^{1/2}$ denote, respectively, the standard inner product in ${\Bbb R} ^3$ and the Euclidean norm;
\beq{masses}
\mu_i:=\frac{m_0 m_i}{m_0+ m_i}\,,\qquad M_i:=m_0+ m_i
\end{equation}
The phase space is the ``collisionless'' domain of $ {\Bbb R} ^{3n}\times{\Bbb R} ^{3n}$
\beqa{P6n}
\Big\{(\by,\bx)=\big((\by_1,\dots,\by_n), (\bx_1,\dots,\bx_n)\big) \ {\rm s.t.} \ \ \ 0\ne \bx_i\ne \bx_j\ ,\ \forall \ i\neq j\Big\}\ ,
\end{eqnarray}
endowed with the standard symplectic form
$$\o=\sum_{i=1}^n d\by_i\wedge d\bx_i=\sum_{i=1}^n \sum_{j=1}^3 d\by_{ij}\wedge d\bx_{ij}$$
where $\by_{ij}$, $\bx_{ij}$ denote the $j^ {\rm th}$ component of $\by_i$, $\bx_i$.\\
The {\it planetary case} is when $m_1$, $\ldots$, $m_n$ are of the same order, and much smaller than $m_0$.
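The rescaling of masses and momenta introduced next is where the small parameter $\mu$ enters; the following side computation (standard, and only sketched here for the reader's convenience) shows how the factor $\mu$ is extracted. The map $\by_i\to\mu\by_i$ is conformally symplectic with factor $\mu$, so dividing the transformed Hamiltonian by $\mu$ does not change the equations of motion, and, with reduced masses $\mu_i':=\frac{\mu\, m_0 m_i}{m_0+\mu m_i}=\mu\,\mu_i$:

```latex
% Two-body terms pick up a factor \mu, interaction terms a factor \mu^2:
\[
\frac{|\mu\by_i|^2}{2\mu\,\mu_i}-\frac{\mu\,\mu_i M_i}{|\bx_i|}
  =\mu\left(\frac{|\by_i|^2}{2\mu_i}-\frac{\mu_i M_i}{|\bx_i|}\right),
\qquad
\frac{(\mu\by_i)\cdot(\mu\by_j)}{m_0}-\frac{(\mu m_i)(\mu m_j)}{|\bx_i-\bx_j|}
  =\mu^2\left(\frac{\by_i\cdot\by_j}{m_0}-\frac{m_i m_j}{|\bx_i-\bx_j|}\right).
\]
```

Dividing by $\mu$ leaves the Keplerian part of order one while the mutual perturbation carries the explicit factor $\mu$.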
In such a case, letting $m_i\to \mu m_i$, $\by_i\to \mu \by_i$, with $0<\mu\ll 1$, one obtains \beqa{HelioNEW}{\cal H}&=\,&\sum_{1\leq i\leq n}\left(\frac{|\by_i|^2}{2\mu_i}-\frac{\mu_i M_i}{|\bx_i|}\right)+\mu\sum_{1\leq i<j\leq n}\left(\frac{\by_i\cdot \by_j}{ m_0}-\frac{m_i m_j}{|\bx_i-\bx_j|}\right) \end{eqnarray} with \beq{massesNEW} \mu_i:=\frac{m_0 m_i}{m_0+ \mu m_i}\,,\qquad M_i:=m_0+ \mu m_i \end{equation} \vskip.1in \noindent Consider the {\it two--body} Hamiltonians \beqa{KeplerHam}h_i(\by_i, \bx_i):= \frac{|\by_i|^2}{2{\mu} _i}-\frac{{\mu} _i M_i}{|\bx_i|}\,.\end{eqnarray} Assume that $h_i(\by_i, \bx_i)<0$, so that the Hamiltonian flow $\phi^t_{h_i}$ evolves on a Keplerian ellipse ${\cal E}_i$, and assume that the eccentricity $e_i\in (0,1)$. Let $a_i$, ${\mathbf P}_i$ denote, respectively, the {\it semimajor axis} and the {\it perihelion} of ${\cal E}_i$. Let ${\mathbf C}_i$ denote the $i^{\rm th}$ angular momentum \beqa{Ci}\mathbf{C}_i(\by_i, \bx_i):=\bx_i\times \by_i\,.\end{eqnarray} Define the {\it Delaunay nodes} \beqa{barni}\bar{\bm n}_i:=\mathbf k\times \mathbf C_i\end{eqnarray} and, for $u,v\in{\Bbb R} ^3$ lying in the plane orthogonal to a vector $w$, let $\a_w(u,v)$ denote the positively oriented angle (mod $2{\pi} $) between $u$ and $v$ (the orientation follows the ``right hand rule''). \vskip.1in \noindent The {\it Delaunay action--angle variables} \beqa{Delaa}{\cal D}_{e\ell, aa}:= ({\mathbf Z}, {\mathbf G}, {\bm\Lambda}, \bm\zeta, {\mathbf g}, \bm\ell)\end{eqnarray} with \begin{eqnarray*} \begin{array}{lll} \displaystyle{\mathbf Z}=(Z_1,\ldots,Z_{n}),\quad &\bm\zeta=(\zeta_1,\ldots,\zeta_{n})\\\\ \displaystyle {\mathbf G}=(G_1,\ldots, G_{n}),& {\mathbf g}=(g_1,\ldots,g_{n})\\\\ \bm\L=(\L_1,\ldots,\L_n), &\bm\ell=(\ell_1,\ldots,\ell_{n}) \end{array} \end{eqnarray*} are defined as \beqa{Delaunay variables} \left\{\begin{array}{l} \L_i:=\mu_i\sqrt{M_i a_i}\\ \ell_i:= {\rm mean\ anomaly\ of}\ \bx_i \ {\rm on}\ {\cal E}_i \end{array}\right.
&&\left\{\begin{array}{l} G_i:=|\mathbf{C}_i|=\L_i\sqrt{1-e_i^2}\\ g_i:=\a_{\mathbf{C}_i}(\bar{\bm n}_i, \mathbf P_i) \end{array} \right.\nonumber\\ \ \nonumber \\ &&\left\{\begin{array}{l} Z_i:={\mathbf C}_i\cdot \mathbf{k}\\ \zeta_i:=\a_{\mathbf{k}}(\mathbf{i},\bar{\bm n}_i) \end{array} \right.\end{eqnarray} \vskip.1in \noindent The {\it Poincar\'e variables} \begin{eqnarray*} {\cal P}_{oinc}:= \big((\bm\uh, {\bm\up}, \bm\L), (\bm\ux, {\bm\uq}, \bm\l)\big)\end{eqnarray*} with \begin{eqnarray*} \begin{array}{lll} \displaystyle{\bm\uh}=(\uh_1,\ldots,\uh_{n}),\quad &\bm\ux=(\ux_1,\ldots,\ux_{n})\\\\ \displaystyle {\bm\up}=(\up_1,\ldots, \up_{n}),& {\bm\uq}=(\uq_1,\ldots,\uq_{n})\\\\ \bm\L=(\L_1,\ldots,\L_n), &\bm\l=(\l_1,\ldots,\l_{n}) \end{array} \end{eqnarray*} with the $\L_i$'s as in \equ{Delaunay variables} and \beqa{Poinc reg} \ul_i=\ell_i+ g_i+\zeta_i\qquad&&\arr{\uh_i=\sqrt{2(\L_i-G_i)}\ \cos{(\zeta_i+g_i)}\\ \ux_i=-\sqrt{2(\L_i-G_i)}\ \sin{(\zeta_i+g_i)}} \nonumber\\ \\ && \arr{\up_i=\sqrt{2(G_i-Z_i)}\ \cos{\zeta_i}\\ \uq_i=-\sqrt{2(G_i-Z_i)}\ \sin{\zeta_i}} \nonumber \end{eqnarray} \begin{figure} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.2,0,0) {$\mathbf j$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf k$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.2) {$\mathbf i$}; \node (inew) at (0,0,1.8) {}; \draw [thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.8,3.8,3.8) {$\mathbf{ C}_i$}; \node (Cnew) at (1.9,2.2,1.9) {$\tred{G_i}$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [->] (-3.0,0,3.0) -- (3.0,0,-3.0); \node (gamma) at (3.2,0,-3.2) {$\bar{\bm n}_i$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.2,2,0.2) {$\tred{Z_i}$} ; \node (zeta) at (1.2,0,1.2) {$\tred{\zeta_i}$} ; \end{tikzpicture} \caption{Delaunay
coordinates $Z_i$, $\zeta_i$, $G_i$.} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf{C}_i\times \bar{\bm n}_i$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf{ C}_i$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {$\bar{\bm n}_i$}; \node (inew) at (0,0,1.8) {}; \draw [dashed, -] (0,0,0) -- (5.6,0,7.3) ; \draw [thick, ->] (0,0,0) -- (3.57,0,2.75) ; \node (x2) at (4.0,0,3.0) {$\mathbf{x}_i$}; \node (ell2) at (2.0,0,1.5) {}; \node (C) at (6.1,0,8.0) {$\mathbf{P}_i$}; \node (g) at (1.0,0,2.5) {$\tred{g_i}$}; \node (g1) at (2,0,2.7) {}; \node (ell3) at (2.5,0,2.7) {$\tred{\ell_i}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \node (Zeta) at (-0.2,2,0.2) {$\tred{G_i}$} ; \node (zeta) at (1.2,0,1.2) {} ; \draw [-latex, bend right] (inew) to (zeta); \draw [-latex, bend right] (g1) to (ell2); \draw plot [variable=\t, domain=-78:63, samples=50] ({3.5*(cos(\t)-0.2)+3.5*0.3*sin(\t)}, {-3.5*(cos(\t)-0.2)+3.5*0.3*sin(\t)}); \end{tikzpicture} \caption{Delaunay coordinates $G_i$, $g_i$, $\ell_i$.} \end{figure} {\smallskip\noindent} In Poincar\'e coordinates the Hamiltonian \equ{HelioNEW} takes the form \beq{prop deg} \cH_{\textrm{\scshape p}}(\L,\ul,\uz)=h_{\textrm{\scshape k}} (\L)+ \mu f_{\textrm{\scshape p}}(\L,\ul, \uz)\ ,\ \ \uz:=(\uh,\up,\ux,\uq)\in{\Bbb R} ^{4n} \end{equation} where $(\L,\l)\in{\Bbb R} ^n\times{\Bbb T} ^n$; the ``Kepler'' unperturbed term $h_{\textrm{\scshape k}}$, coming from the two--body Hamiltonians \equ{KeplerHam}, becomes \beq{Kep} h_{\textrm{\scshape k}}:=\sum_{i=1}^nh^\ppi_{\textrm{\scshape k}}(\L)=-\sum_{i=1}^n\frac{ {\mu} _i^3 M_i^2}{2\L_i^2}\ .
\end{equation} {\smallskip\noindent} Because of the invariance of the Hamiltonian \equ{Helio} by rotations (with respect to the ${\mathbf k}$--axis) and by reflections (with respect to the coordinate planes), the perturbation $f_{\textrm{\scshape p}}$ in \equ{prop deg} satisfies well known symmetry relations called {\it d'Alembert rules}, see \cite{chierchiaPi11c}. By such symmetries, in particular, the averaged perturbation \beq{fav pl}f^{\rm av}_{\textrm{\scshape p}}(\L, \uz):=\su{(2{\pi} )^n}\int_{{\Bbb T} ^n}f_{\textrm{\scshape p}}(\L,\ul,\uz) d\l\end{equation} is even around the origin $\uz=0$ and its expansion in powers of $\uz$ has the form\footnote{${\cal Q}\cdot u^2$ denotes the 2--indices contraction $\sum_{i,j}{\cal Q}_{ij}u_i u_j$ (${\cal Q}_{ij}$, $u_i$ denoting the entries of ${\cal Q}$, $u$). } \beq{f'aaav} f^{\rm av}_{\textrm{\scshape p}}=C_0(\L)+{\cal Q}_h(\L)\cdot\frac{{\uh}^2+{\ux}^2}{2}+{\cal Q}_v(\L)\cdot\frac{{\up}^2+{\uq}^2}{2}+{\rm O}(|\uz|^4)\ , \end{equation} where ${\cal Q}_h$, ${\cal Q}_v$ are suitable quadratic forms. The explicit expression of such qua\-dra\-tic forms can be found, {\rm e.g.\,}, in \cite[(36), (37)]{fejoz04}. {\smallskip\noindent} By such an expansion, the (secular) origin $\uz=0$ is an {\it elliptic equilibrium} for $f^{\rm av}_{\textrm{\scshape p}}$ and corresponds to co--planar and co--circular motions. It is therefore natural to put \equ{f'aaav} into Birkhoff Normal Form (BNF, from now on) in a small neighborhood of the secular origin; see, {\rm e.g.\,}, \cite{hoferZ94} for general information on Birkhoff theory for rotation--invariant Hamiltonian systems.
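{\smallskip\noindent} As a side remark, reducing the symmetric matrices ${\cal Q}_h$, ${\cal Q}_v$ to diagonal form by rotations $\r\in{\rm SO}(n)$, as done in the sequel, amounts to a standard eigendecomposition with the orientation of one eigenvector adjusted; a minimal numerical sketch, assuming {\tt numpy} (the sample matrix and all names are illustrative):

```python
import numpy as np

def so_n_diagonalizer(Q):
    """Return rho in SO(n) and eigenvalues w with rho^T Q rho = diag(w),
    for a symmetric matrix Q (e.g. the secular quadratic forms Q_h, Q_v)."""
    w, rho = np.linalg.eigh(Q)      # rho orthogonal, columns = eigenvectors
    if np.linalg.det(rho) < 0:      # flip one eigenvector to land in SO(n)
        rho[:, 0] *= -1.0
    return rho, w

# arbitrary symmetric sample standing in for a secular matrix
Q = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 0.5]])
rho, w = so_n_diagonalizer(Q)
assert np.allclose(rho.T @ Q @ rho, np.diag(w))
assert abs(np.linalg.det(rho) - 1.0) < 1e-12
```

Flipping the sign of one eigenvector changes the determinant of the orthogonal matrix without affecting the diagonalization, which is why one can always take $\r\in{\rm SO}(n)$.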
{\smallskip\noindent} As a preliminary step, one can diagonalize \equ{f'aaav}, {\rm i.e.\,}, find a symplectic transformation defined by $\L\to\L$ and \beq{diag Poinc1}\ul=\tilde\ul+\varphi(\L,\tilde\uz),\ \uh=\r_h(\L)\tilde\uh,\ \ux=\r_h(\L)\tilde\ux,\ \up=\r_v(\L)\tilde\up,\ \uq=\r_v(\L)\tilde\uq\ , \end{equation} with $\r_h$, $\r_v\in {\rm SO}(n)$ diagonalizing ${\cal Q}_h$, ${\cal Q}_v$. In this way, \equ{prop deg} takes the form \beq{planetary diag} \tilde\cH_{\textrm{\scshape p}}(\L,\tilde\ul,\tilde\uz)=h_{\textrm{\scshape k}} (\L)+{\mu} \tilde f(\L,\tilde\ul, \tilde\uz)\ , \end{equation} with the average of $\tilde f$ over $\tilde\ul$, denoted $\tilde f^{\rm av}$, given by \beqa{f'aaavdiag}\tilde f^{\rm av}(\L,\tilde \uz)=C_0(\L)+\sum_ {i=1}^ {m}\Omega_i(\L)\frac{{\tilde u}_i^2+{\tilde v}_i^2}{2}+{\rm O}(|\tilde\uz|^4), \quad \tilde\uz=(\tilde u, \tilde v)=\big((\tilde\uh,\tilde\up)\,,\ (\tilde\ux,\tilde\uq)\big).\end{eqnarray} with $m=2n$, and the vector $\O(\L):=({\sigma} _1(\L),\ldots,{\sigma} _n(\L),\varsigma_1(\L), \ldots, \varsigma_n(\L))$ being formed by the eigenvalues of the matrices ${\cal Q}_h$ and ${\cal Q}_v$. \begin{theorem}[Birkhoff]\label{BNF} Let ${\cal H}$ be a Hamiltonian having the form in \equ{planetary diag}--\equ{f'aaavdiag}. Assume that there exist $\tilde\varepsilon>0$, ${\cal A} \subset {\mathbb R}^n$ and $s\in \natural$ such that ${\cal H}$ is smooth on an open set $\tilde{\cal M} ^ {2m+2n}_{\tilde\varepsilon}={\cal A} \times{\Bbb T} ^n\times B^{2m}_{\tilde\varepsilon}$ and that \beqa{nonres}\sum_{i=1}^m\Omega_i(\Lambda)k_i\ne 0\quad\forall\ k=(k_1\,,\ldots\,,\ k_m)\in {\Bbb Z} ^m:\ 0<|k|_1\le 2s\,,\ \forall\ \L\in {\cal A} \,.
\end{eqnarray} Then there exists $0<\varepsilon\le\tilde\varepsilon$ and a symplectic map (``Birkhoff transformation'') \beq{birkhoff transf}\Phi_{\textrm{\scshape b}}:\quad (\bm\L,{\mathbf l},\bar{\mathbf{w}})\in{\cal M} ^ {2m+2n}_{\varepsilon}\to (\L,\tilde\ul,\tilde\uz)\in \Phi_{\textrm{\scshape b}}({\cal M} ^ {2m+2n}_{\varepsilon})\subseteq{\cal M} ^ {2m+2n}_{\tilde\varepsilon} \end{equation} which puts the Hamiltonian \equ{planetary diag} into the form \beq{birkhoff planetary} \cH_{\textrm{\scshape b}}(\bm\L,{\mathbf l},\bar{\mathbf{w}}):=\tilde\cH_{\textrm{\scshape p}}\circ\Phi_{\textrm{\scshape b}}=h_{\textrm{\scshape k}} (\L)+{\mu} f_{\textrm{\scshape b}}(\L,l, w)\end{equation} where the average $f_{\textrm{\scshape b}}^{\rm av}(\L,w):=\su{(2{\pi} )^n}\int_{{\Bbb T} ^n}f_{\textrm{\scshape b}}\,d l$ is in BNF of order $s$: \beq{fb} f_{\textrm{\scshape b}}^{\rm av}(\L,w)=C_0+\O\cdot r+{\rm P}_s(r)+{\rm O}(|w|^{2s+1})\quad w:=(u,v)\quad r_i:=\frac{u_i^2+v_i^2}{2}\ , \end{equation} ${\rm P}_s$ being a polynomial in $r$ of degree $s$, with coefficients depending on $\L$.\\ In particular, if \equ{nonres} holds with $s=2$, \beq{fbNEW} f_{\textrm{\scshape b}}^{\rm av}(\L,w)=C_0(\L)+\O(\L)\cdot r+r\cdot \tau(\L) r+{\rm O}(|w|^{5})\quad w:=(u,v)\quad r_i:=\frac{u_i^2+v_i^2}{2}\ , \end{equation} with some square matrix $\tau(\L)$ of order $m$ (``torsion'', or ``second-order Birkhoff invariants''). \end{theorem} \begin{theorem}\label{FT} {\bf (``The Fundamental Theorem'', V. I. Arnold, \cite{arnold63})} If the Hessian matrix of $h_{\textrm{\scshape k}}$ and the torsion matrix $\t(\L)$ are non--singular, and if ${\mu} $ is suitably small with respect to $\varepsilon$, the system affords a positive measure set $\cK_{{\mu} , \varepsilon}$ of quasi--periodic motions in phase space, whose density goes to one as $\varepsilon\to 0$.
\end{theorem} \begin{remark}[Arnold, Herman]\rm It turns out that such invariants satisfy identically the following two {\it secular resonances} \beq{Herman resonance} \varsigma_ {n}(\Lambda)\equiv0\ ,\qquad\qquad\sum_ {i=1}^ {n}({\sigma} _i(\Lambda)+\varsigma_i(\Lambda))\equiv 0 \end{equation} Such resonances strongly violate the assumption \equ{nonres} of Theorem \ref{BNF}. \end{remark} We remark that the former equality in \equ{Herman resonance} is mentioned in \cite{arnold63}, while the latter was pointed out by M. Herman in the 1990s. Note that the resonances \equ{Herman resonance} do not appear in the planar problem, because the matrix ${\cal Q}_v$, hence the $\varsigma_i$'s, do not exist in that case. Being aware of such a difficulty, Arnold completely proved Theorem \ref{Arnold Theorem} via Theorem \ref{FT} in the case of the planar three--body problem, checking explicitly the non--degeneracy of the $2\times 2$ torsion matrix for that case. However, in the case of the spatial problem, the question remained open until 2004, when M. Herman and J. F\'ejoz \cite{fejoz04} proved Theorem \ref{Arnold Theorem} via a completely different strategy, which does not need the Birkhoff normal form. We refer to \cite{chierchiaPi14} for more details. \subsection{The rotational degeneracy} In \cite{arnold63}, Arnold wrote -- without giving the details -- that the former resonance in \equ{Herman resonance} was to be ascribed to the conservation of the total angular momentum of the system: \beqa{C}{\mathbf C}=\sum_{j=1}^n{\mathbf C}_j\,,\qquad {\mathbf C}_j=\bx_j\times \by_j\,.\end{eqnarray} An argument which clearly shows this goes as follows.
Using Poincar\'e coordinates, the planets' angular momenta have the expressions \begin{eqnarray*} \mathbf C_j &=&\left( \begin{array}{ccc} -\uq_j\sqrt{\Lambda_j-\frac{\uh^2_j+\ux^2_j}{2}-\frac{\up^2_j+\uq^2_j}{4}}\\ -\up_j\sqrt{\Lambda_j-\frac{\uh^2_j+\ux^2_j}{2}-\frac{\up^2_j+\uq^2_j}{4}}\\ \Lambda_j-\frac{\uh^2_j+\ux^2_j}{2}-\frac{\up^2_j+\uq^2_j}{2} \end{array} \right)\nonumber\\ &=&\left( \begin{array}{ccc} -\sqrt{\Lambda_j}\uq_j+{\rm O}(|\uz|^3)\\ -\sqrt{\Lambda_j}\up_j+{\rm O}(|\uz|^3)\\ \Lambda_j+{\rm O}(|\uz|^2) \end{array} \right) \end{eqnarray*} In particular, the first two components of the total angular momentum \equ{C} are given by \beqa{C1C2***}C_1=-\sum_{j=1}^n\sqrt{\Lambda_j}\uq_j+{\rm O}(|\uz|^3)\,,\qquad C_2=-\sum_{j=1}^n\sqrt{\Lambda_j}\up_j+{\rm O}(|\uz|^3)\end{eqnarray} On the other hand, it is possible to find a canonical transformation \beqa{checktransf}(\Lambda, \check\ul, \check\uh, \check\up, \check\ux, \check\uq)\to (\Lambda, \ul, \uh, \up, \ux, \uq)\end{eqnarray} having the form \equ{diag Poinc1} with $\rho_h=\id$ and $\rho_v\in SO(n)$ chosen in such a way that the last row of $\rho^{-1}_v$ is \beqa{vector}N(\L)\big(\sqrt{\Lambda_1}\,,\ldots\,,\sqrt{\Lambda_n}\big) \end{eqnarray} where $N(\L)=\frac{1}{\sqrt{\sum_{i=1}^n \Lambda_i}}$ fixes the Euclidean norm of \equ{vector} to $1$. With such a choice, we have $$\check\up_n=\rho^{-1}_v\left( \begin{array}{cc} \up_1\\ \vdots\\ \up_n \end{array} \right)_n=N(\L)\sum_{j=1}^n\sqrt{\Lambda_j}\up_j$$ and, similarly, $$\check\uq_n=N(\L)\sum_{j=1}^n\sqrt{\Lambda_j}\uq_j$$ Therefore, \equ{C1C2***} become \beqa{C1C2NEW} C_1=-N(\L)^{-1}\check\uq_n+{\rm O}(|\check\uz|^3)\,,\qquad C_2=-N(\L)^{-1}\check\up_n+{\rm O}(|\check\uz|^3)\end{eqnarray} Now, as the projection of the transformation \equ{checktransf} on the $\check\ul$'s is a $\check\ul$--independent translation, the averaged perturbing function in the new coordinates can be obtained by applying such a transformation to the function in \equ{f'aaav}.
We denote it as \begin{eqnarray*} \check f^{\rm av}=C_0(\L)+\check{\cal Q}_h(\L)\cdot\frac{\check{\uh}^2+\check{\ux}^2}{2}+\check{\cal Q}_v(\L)\cdot\frac{\check{\up}^2+\check{\uq}^2}{2}+{\rm O}(|\check\uz|^4)\ , \end{eqnarray*} with $\check{\cal Q}_h(\L)={\cal Q}_h(\L)$ and $\check{\cal Q}_v(\L)=\rho_v(\L)^{-1}{\cal Q}_v(\L)\rho_v(\L)$. Note that $\check{\cal Q}_v(\L)$ has the same eigenvalues as ${\cal Q}_v(\L)$, as $\rho_v\in SO(n)$. Let us now use \beqa{commutation}\{\check f^{\rm av}, C_1\}=0=\{\check f^{\rm av}, C_2\}\end{eqnarray} which hold because they are true for the perturbation $f_{\textrm{\scshape p}}$, and $\mathbf C$ is $\check\ul$--independent. Using \equ{C1C2NEW}, it is immediate to see that \equ{commutation} imply that the quadratic form $$\check{\cal Q}_v(\L)\cdot\frac{\check{\up}^2+\check{\uq}^2}{2}$$ is independent of $\check\up_n$, $\check\uq_n$. Hence, the $n^{\rm th}$ row and column of $\check{\cal Q}_v(\L)$ vanish identically. This implies that $\check{\cal Q}_v(\L)$, hence ${\cal Q}_v(\L)$, has an identically vanishing eigenvalue, which is $\varsigma_n(\L)$ in \equ{Herman resonance}. \subsection{Jacobi reduction of the nodes} In the case $n=2$, Arnold in \cite{arnold63} suggested getting rid of the rotation invariance (described in the previous section) by means of the classical so--called {\it Jacobi reduction of the nodes}. This procedure has a remarkable geometric meaning and goes as follows. Let us consider a reference frame $({\mathbf i}, {\mathbf j}, {\mathbf k})$ whose third axis ${\mathbf k}$ is along the direction of the total angular momentum ${\mathbf C}={\mathbf C}_1+{\mathbf C}_2$, while ${\mathbf i}$ lies along the intersection of the planes orthogonal to ${\mathbf C}_1$, ${\mathbf C}_2$. Such an intersection is well defined provided that ${\mathbf C}_1\not\parallel{\mathbf C}_2$, namely, when the problem is not planar. With such a choice of the reference frame, one cannot fix Delaunay coordinates completely freely.
Indeed, by the choice of $\mathbf i$, we have that the $\zeta_j$ satisfy \beqa{Jacobi1}\zeta_2-\zeta_1={\pi} \,.\end{eqnarray} Moreover, a geometrical analysis of the triangle formed by ${\mathbf C}_1$, ${\mathbf C}_2$ and ${\mathbf C}$ shows that the coordinates $Z_j$ satisfy \begin{figure}[htp] \center{\includegraphics[width=0.5\linewidth]{JRbis1.jpg}} \caption{The construction underlying Jacobi reduction of the nodes.} \end{figure} \beq{Jacobi2}Z_1=\frac{G }{2}+\frac{G_1^2-G_2^2}{2G }\ ,\quad Z_2=\frac{G }{2}-\frac{G_1^2-G_2^2}{2G }\end{equation} where ${G}:=|{\mathbf C}|=\sqrt{{ C}_1^2+{ C}_2^2+{ C}_3^2}$ is the Euclidean norm of ${\mathbf C}$. Since the node $\mathbf i$ moves under the dynamics, the following fact is not obvious at all; it was in fact proved by R. Radau. \begin{theorem}[R. Radau, 1868, \cite{radau1868}] Replacing relations \equ{Jacobi1}--\equ{Jacobi2} inside the Hamiltonian \equ{Helio} with $n=2$ written in Delaunay coordinates, one obtains a function, depending on $(\Lambda_j, \ell_j, G_j, g_j)$ ($j=1$, $2$) and $G$, whose Hamilton equations relative to $(\Lambda_j, \ell_j, G_j, g_j)$ generate the motions of the coordinates $(\Lambda_j, \ell_j, G_j, g_j)$ referred to the rotating frame under the action of the Hamiltonian \equ{Helio} with $n=2$. The motion of $Z_j$ and $\zeta_j$ can be recovered via \equ{Jacobi1}--\equ{Jacobi2}. \end{theorem} \subsection{Deprit coordinates} Arnold commented on the general problem of rotational degeneracy as follows:\vskip.1in \noindent \cite[\bf Chap.III, \S 5, n. 5]{arnold63} {\it In the case of more than three bodies {\rm[$n> 2$]} there is no such {\rm[analogue to Jacobi reduction of the nodes]} elegant method of reducing the number of degrees of freedom} [...]. \vskip.1in \noindent However, exactly 20 years later, in 1983, A. Deprit \cite{deprit83} discovered a set of canonical coordinates which, after a simple transformation, do the desired job and reduce to Jacobi's when $n=2$. Let us describe them.
\\ Consider the ``partial angular momenta'' \beq{partial sums} {\mathbf S}_j(\by, \bx):=\sum_{i=1}^j {\mathbf C}_i\ ; \end{equation} with $\mathbf C_i$ as in \equ{Ci}. Notice that ${\mathbf S}_n={\mathbf C}$ is the total angular momentum of the system. Define the ``Deprit nodes'' \beqa{nodes} \left\{\begin{array}{l} \bm{\nu} _{i+1}:= {\mathbf S}_{i+1}\times {\mathbf C}_{i+1}\ ,\qquad\ \ 1\le i\le n-1\\ \bm{\nu} _1:={\mathbf S}_{2}\times {\mathbf C}_{1}=-\bm{\nu} _2\\ \bm{\nu} _{n+1}:= {\mathbf k}\times {\mathbf C}=:\bar{\bm{\nu} }\ . \end{array} \right. \end{eqnarray} \noindent If $n\ge 2$, Deprit's coordinates \beqa{Deprit coordinates}{\cal D}_{ep}=(\mathbf R, \mathbf G,\bm\Psi, \mathbf r,\bm\varphi,\bm\psi)\end{eqnarray} with \beqa{Deparg} &&\mathbf R=(R_1, \ldots, R_n)\,,\ \bm\Psi=(\Psi_1, \ldots, \Psi_n)\,,\ \mathbf G=(G_1, \ldots, G_n)\,,\nonumber\\ && \mathbf r=(r_1, \ldots, r_n)\,,\ \ \ \ \bm \psi=(\psi_1, \ldots, \psi_n)\,,\ \ \bm\varphi=(\varphi_1, \ldots, \varphi_n)\,. \end{eqnarray} are defined as follows (compare also Figures \ref{Deprit1}, \ref{Deprit2} and \ref{Deprit3}): \beqa{Deprit variables} &&\left\{\begin{array}{l}\displaystyle R_i:=\by_i\cdot\frac{\bx_i}{|\bx_i|}\\\\ \displaystyle r_i:= |\bx_i| \end{array}\right.\qquad\left\{\begin{array}{l} \displaystyle G_i:=|{\mathbf C}_i|\\ \\ \displaystyle \varphi_i:=\a_{{\mathbf C}_i}(\bm{\nu} _i,\bx_i) \end{array} \right.\nonumber \\ \\ &&\Psi_i:=\left\{\begin{array}{l} \displaystyle |\mathbf S_{i+1}|\phantom{AAAAAAAA}\ 1\le i\le n-2\ (n\ge 3) \\ \\ \displaystyle C:=|\mathbf C|\phantom{AAAAAA}\ \ i=n-1\\\\ \displaystyle Z:={\mathbf C}\cdot \mathbf k\phantom{AAAA.} \ \ \ i=n \end{array}\right.
\nonumber\\ \nonumber\\ &&\psi_i:=\left\{\begin{array}{l}\displaystyle \a_{{\mathbf S}_{i+1}}(\bm{\nu} _{i+2},\bm{\nu} _{i+1})\phantom{AAAa} 1\le i\le n-2\ (n\ge 3)\\\\ \gamma:= \a_{\mathbf C}(\bar{\bm{\nu} },\bm{\nu} _{n})\phantom{AAAAA} i= n-1\\\\ \displaystyle\zeta:=\a_{\mathbf k}(\mathbf i, \bar{\bm{\nu} }) \phantom{AAAAAA}\ i=n \end{array}\right.\nonumber \end{eqnarray} \begin{figure} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.2,0,0) {$\mathbf j$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf k$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.2) {$\mathbf i$}; \node (inew) at (0,0,1.8) {}; \draw [thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.8,3.8,3.8) {$\mathbf C$}; \node (Cnew) at (1.9,2.2,1.9) {$\tred C$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [->] (-3.0,0,3.0) -- (3.0,0,-3.0); \node (gamma) at (3.2,0,-3.2) {$\bar{\bm\nu}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.2,2,0.2) {$\tred Z$} ; \node (zeta) at (1.2,0,1.2) {$\tred\zeta$} ; \end{tikzpicture} \caption{Deprit coordinates $Z$, $C$ and $\zeta$ fix the angular momentum in the initial reference frame $({\mathbf i}, {\mathbf j}, {\mathbf k})$.}\label{Deprit1} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (5.0,0,0) {$\mathbf S_{i+1}\times \bm\nu_{i+2}$}; \draw [ultra thick,->] (0,0,0) -- (0,4.2,0); \node (k) at (-0.4,2.2,0) {$\mathbf S_{i+1}$}; \node (k) at (0.4,2.2,0) {$\tred{\Psi_i}$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {$\bm\nu_{i+2}$}; \node (inew) at (0,0,1.8) {}; \draw [thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.0,2.5,3.8) {$\mathbf C_{i+1}$}; \draw [thick, ->] (3.5,3.5,3.5) -- (0,4.2,0); \node (Gi) at (2.3,4.5,3.7) {$\tred{\Psi_{i-1}}$}; \node (Ginew) at (3.0,4.8,3.7)
{$\mathbf S_i$}; \node (Cnew) at (1.7,2.3,1.9) {$\tred{G_{i+1}}$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [->] (-3.0,0,3.0) -- (3.0,0,-3.0); \node (gamma) at (3.2,0,-2.0) {$\bm\nu_{i+1}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.2,2,0.2) {} ; \node (zeta) at (1.2,0,1.2) {$\tred{\psi_i}$} ; \end{tikzpicture} \caption{{The frames $\DD_{i+1}$ and the} coordinates $\Psi_i$, $\Psi_{i-1}$, $G_{i+1}$ and $\psi_{i}$.}\label{Deprit2} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf C_i\times { \bm\nu_i}$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf C_i$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {${\bm\nu_i}$}; \node (inew) at (0,0,1.8) {}; \draw [dashed, -] (0,0,0) -- (5.6,0,7.3) ; \draw [thick, ->] (0,0,0) -- (3.57,0,2.75) ; \node (x2) at (4.0,0,3.0) {$\mathbf x_i$}; \node (x2new) at (3.0,0,2.25) {}; \node (ell2) at (2.0,0,1.5) {}; \node (C) at (6.1,0,8.0) {$\mathbf P_i$}; \node (g) at (1.0,0,2.5) {$\tred{g_i}$}; \node (g1) at (2,0,2.7) {}; \node (ell3) at (2.5,0,2.7) {$\tred{\ell_i}$}; \node (varphi) at (1.9,0,3.7) {$\tred{\varphi_i}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \node (Zeta) at (-0.2,2,0.2) {$\tred{G_i}$} ; \node (zeta) at (1.2,0,1.2) {} ; \draw [-latex, bend right] (inew) to (zeta); \draw [-latex, bend right] (g1) to (ell2); \draw [-latex, bend right] (inew) to (x2new); \draw plot [variable=\t, domain=-78:63, samples=50] ({3.5*(cos(\t)-0.2)+3.5*0.3*sin(\t)}, {-3.5*(cos(\t)-0.2)+3.5*0.3*sin(\t)}); \end{tikzpicture} \caption{{The frames ${\rm H}_{i}$ and the} coordinates $g_i$, $G_{i}$, $\ell_i$.}\label{Deprit3} \end{figure} \noindent We have \begin{theorem}[A. 
Deprit, 1983, \cite{deprit83}]\label{Deprit theorem} $\sum_{i=1}^n\by_i\cdot d\bx_i={\mathbf R}\cdot d\mathbf r+\mathbf \Psi\cdot d\bm\psi+\mathbf G\cdot d\bm\varphi$ for all $n\in \natural$. \end{theorem} For later use, we formulate an equivalent statement of Theorem \ref{Deprit theorem}. We consider the coordinates \beqa{Del}{\cal D}_{e\ell}:= (\mathbf Z, \mathbf G, \mathbf R, \bm\zeta, \bm\phi, \mathbf r)\end{eqnarray} with \beqa{Delarg} &&\mathbf Z=(Z_1, \ldots, Z_n)\,,\ \mathbf G=(G_1, \ldots, G_n)\,,\ \mathbf R=(R_1, \ldots, R_n)\nonumber\\ &&\bm \zeta=(\zeta_1, \ldots, \zeta_n)\,,\ \ \ \bm\phi=(\phi_1, \ldots, \phi_n)\,,\ \mathbf r=(r_1, \ldots, r_n) \end{eqnarray} where $Z_i$, $G_i$, $\zeta_i$ are as in \equ{Delaunay variables}, $R_i$, $r_i$ are as in \equ{Deprit variables}, and, finally, \begin{eqnarray*} \phi_i:=\a_{\mathbf{C}_i}(\bar{\bm n}_i, \bx_i)\,.\end{eqnarray*} Let \beq{elementary rotations}{\cal R} _{1}(i)=\left( \begin{array}{ccc} 1&0&0\\ 0&\cos i&-\sin i\\ 0&\sin i&\cos i \end{array} \right)\ ,\qquad {\cal R} _{3}(\theta)=\left( \begin{array}{ccc} \cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1 \end{array} \right)\end{equation} and $$\bx={\cal R} _3(\theta){\cal R} _1(i)\bar{\bx}\ ,\quad \by={\cal R} _3(\theta){\cal R} _1(i)\bar {\by}\ ,\quad {\mathbf C}:=\bx\times \by\ ,\quad \bar{\mathbf C}:=\bar{\bx}\times \bar{\by}\ ,\quad \mathbf i=\left( \begin{array}{ccc} 1\\ 0\\ 0 \end{array} \right)\ ,\quad \mathbf k=\left( \begin{array}{ccc} 0\\ 0\\ 1 \end{array} \right)$$ with $\bx,\bar{\bx}, \by, \bar{\by}\in{\Bbb R} ^3$. The proof of the following fact is left to the reader. \begin{lemma}\label{LemmaD} $\by\cdot d\bx={\mathbf C}\cdot \mathbf k\,d\theta+\bar{\mathbf C}\cdot \mathbf i\,di+\bar{\by} \cdot d\bar{\bx}$. \end{lemma} \noindent Lemma \ref{LemmaD} immediately implies \begin{lemma}\label{LemmaDel} $\mathbf y_j\cdot d\mathbf x_j=Z_jd\zeta_j+G_jd\phi_j+R_jdr_j\quad \forall\ j=1\,,\ldots\,, n\,,\ \forall n\in \natural$.
\end{lemma} Indeed, we have $$\left\{\begin{array}{lll}\mathbf x_j={\cal R}_3(\zeta_j){\cal R}_1(i^*_j){\mathbf x}_j^*\\ \mathbf y_j={\cal R}_3(\zeta_j){\cal R}_1(i^*_j){\mathbf y}_j^* \end{array} \right. \quad j=1\,,\ldots\,,\ n$$ where $i^*_j$ is the convex angle formed by $\mathbf k$ and $\mathbf C_j$ and, finally, \beqa{xjyjpl}{\mathbf x}_j^*=\left( \begin{array}{ccc} r_j\cos\phi_j\\ r_j\sin \phi_j\\ 0 \end{array} \right)\,,\quad {\mathbf y}_j^*=\left( \begin{array}{ccc} R_j\cos\phi_j-\frac{G_j}{r_j}\sin\phi_j\\ R_j\sin \phi_j+\frac{G_j}{r_j}\cos\phi_j\\ 0 \end{array} \right) \end{eqnarray} satisfy, as is well known, \beqa{Kepler symplectic}{\mathbf y}_j^*\cdot d{\mathbf x}_j^*=R_j dr_j+G_j d\phi_j\,.\end{eqnarray} Then, by Lemma \ref{LemmaD}, \equ{Kepler symplectic} and since ${\mathbf C_j}^*\cdot \mathbf i=0$, we have \begin{eqnarray*} \mathbf y_j\cdot d\mathbf x_j&=&{\mathbf C}_j\cdot \mathbf k\,d\zeta_j+{\mathbf C_j}^*\cdot \mathbf i\,di^*_j+{\mathbf y}_j^* \cdot d{\mathbf x}_j^*\nonumber\\ &=&Z_jd\zeta_j+G_jd\phi_j+R_jdr_j\,.\qquad \square \end{eqnarray*} We denote as $$\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}:\quad {\cal D}_{e\ell}=(\mathbf Z, \mathbf G, \mathbf R, \bm\zeta, \bm\phi, \mathbf r)\to {\cal D}_{ep}=(\bm\Psi, \mathbf G, \mathbf R, \bm\psi, \bm\varphi, \mathbf r)$$ the map which relates ${\cal D} _{e\ell}$ and ${\cal D} _{ep}$, and as $$\widehat\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}:\quad \widehat{\cal D}_{e\ell}=(\mathbf Z, \mathbf G, \bm\zeta, \bm\phi)\to \widehat{\cal D}_{ep}=(\bm\Psi, \mathbf G, \bm\psi, \bm\varphi)$$ its natural projection onto the coordinates above. It is easy to check that $\widehat\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}$ is independent of $\mathbf R$ and $\mathbf r$.
Indeed, $\widehat\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}$ has the expression \beqa{reduced Deprit} G_j&=&G_j\ ,\nonumber\\ \nonumber \\ \varphi_j&=&\phi_j+\a_{{\mathbf C}_j}(\bm{{\nu} }_j,\bar{\bm {\nu} }_j)\ {\rm with}\ \bar{\bm {\nu} }_j={\mathbf k}\times {\mathbf C}_j,\nonumber\\ \nonumber \\ \Psi_j&=&\left\{\begin{array}{lll}|\mathbf S_{j+1}|\quad &j\ne n\\ Z_1+\ldots+Z_n\ &j=n\end{array}\right.\quad \nonumber\\ \psi_j&=&\left\{\begin {array}{lll}\a_{\mathbf S_{j+1}}(\bm{{\nu} }_{j+2},\bm{{\nu} }_{j+1})\quad &j\ne n\\ \a_{\mathbf k}(\mathbf i,\bar{\bm\nu})&j=n \end{array} \right. \end{eqnarray} where {${\mathbf S_{j+1}}$, $\bm{{\nu} }_{j}$, $\ovl{\bm{{\nu} }}_{j}$ on} the right hand sides are to be written as functions of ${\cal D}_{e\ell}$ ({see \equ{nodes} and \equ{Delaunay variables}}): $$\left\{\begin{array}{lll}\displaystyle{{\mathbf S_{j+1}}=\sum_{i=1}^{j+1}G_i{\cal R} _3(\zeta_i){\cal R} _1(i^{{\cal D} el}_i){\mathbf k}}\\\\ \displaystyle{\bm{\nu} _{j+1}= \left(\sum_{i=1}^{j+1}G_i{\cal R} _3(\zeta_i){\cal R} _1(i^{{\cal D} el}_i){\mathbf k}\right)\times G_{j+1}{\cal R} _3(\zeta_{j+1}){\cal R} _1(i^{{\cal D} el}_{j+1}){\mathbf k}\ ,\qquad\ \ 1\le j\le n-1}\\ \\ \displaystyle{\bm{\nu} _1=-\bm{\nu} _2=\left(\sum_{i=1}^{2}G_i{\cal R} _3(\zeta_i){\cal R} _1(i^{{\cal D} el}_i){\mathbf k}\right)\times G_1{\cal R} _3(\zeta_1){\cal R} _1(i^{{\cal D} el}_1){\mathbf k}}\\\\ \displaystyle {\bm{\nu} _{n+1}=\bar{\bm{\nu} }={\mathbf k}\times \left(\sum_{i=1}^{n}G_i{\cal R} _3(\zeta_i){\cal R} _1(i^{{\cal D} el}_i){\mathbf k}\right)}\ . \end{array} \right.$$ {with $ i_i^{{\cal D} el}=\cos^{-1}\frac{Z_i}{G_i}$.} As the right hand sides are defined only in terms of the $\mathbf C_j$, they are functions of $\mathbf Z$, $\bm\zeta$ and $\mathbf G$, while they are independent of $\mathbf R$ and $\mathbf r$.
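The formulae above lend themselves to a direct numerical check; the following sketch, assuming {\tt numpy} (sample data and names purely illustrative), builds the ${\mathbf C}_j=G_j{\cal R}_3(\zeta_j){\cal R}_1(i_j){\mathbf k}$ with $\cos i_j=Z_j/G_j$, forms the partial sums ${\mathbf S}_j$, and verifies that ${\mathbf C}\cdot{\mathbf k}=Z_1+\ldots+Z_n$:

```python
import numpy as np

def R1(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R3(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

k = np.array([0.0, 0.0, 1.0])

def angular_momenta(Z, G, zeta):
    """C_j = G_j R_3(zeta_j) R_1(i_j) k with cos i_j = Z_j / G_j."""
    return [G_j * (R3(z_j) @ R1(np.arccos(Z_j / G_j)) @ k)
            for Z_j, G_j, z_j in zip(Z, G, zeta)]

# sample data with |Z_j| <= G_j
Z, G, zeta = [0.9, 0.5, 0.2], [1.0, 0.7, 0.6], [0.1, 2.0, 4.0]
C = angular_momenta(Z, G, zeta)
S = np.cumsum(C, axis=0)                     # partial angular momenta S_j
assert np.allclose([c @ k for c in C], Z)    # third components recover Z_j
assert abs(S[-1] @ k - sum(Z)) < 1e-12       # Psi_n = C . k = Z_1 + ... + Z_n
```

The last assertion is the $j=n$ case of the display above: the vertical component of the total angular momentum is the sum of the individual $Z_j$'s.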
\begin{theorem}\label{Deprit theoremNEW} Theorem \ref{Deprit theorem} is equivalent to the following statement: \beqa{New statement}\ \widehat\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}\ {\rm verifies:}\quad \mathbf Z\cdot d\bm\zeta+\mathbf G\cdot d\bm\phi=\mathbf \Psi\cdot d\bm\psi+\mathbf G\cdot d\bm\varphi\ \ {\rm for\ all}\ n\in \natural\,.\end{eqnarray} \end{theorem} \par\medskip\noindent{\bf Proof\ } Use Lemma \ref{LemmaDel} and the fact that the coordinates $(\mathbf R, \mathbf r)$ are shared by ${\cal D}_{ep}$ and ${\cal D}_{e\ell}$. $\quad \square$ \vskip.1in \noindent We prove Theorem \ref{Deprit theorem} ($\Longleftrightarrow$ \equ{New statement}) by induction on $n$, with $n\ge 2$, as in \cite{pinzari-th09}. \vskip.1in \noindent {\bf Base step} We prove the statement of Theorem \ref{Deprit theorem} with $n=2$. We first observe that, in such a case, $(\by_j, \bx_j)$ are expressed through $(\mathbf R, \bm \Psi, \mathbf G, \mathbf r, \bm\psi, \bm\varphi)$ via the formulae $$\left\{ \begin{array}{lll}\mathbf x_j={\cal R}_3(\zeta){\cal R}_1(i){\cal R} _3(\gamma){\cal R} _1(i_j){\mathbf x_j}_{\rm pl}\\ \mathbf y_j={\cal R}_3(\zeta){\cal R}_1(i){\cal R} _3(\gamma){\cal R} _1(i_j){\mathbf y_j}_{\rm pl} \end{array} \right.\quad j=1\,,\ 2$$ where $i$ is the convex\footnote{The expressions of $i_1$, $i_2$ and $i$ -- not needed here -- can easily be deduced from the analysis of the triangle formed by $\mathbf C_1$, $\mathbf C_2$ and $\mathbf C$: see Figure \ref{Deprit2}} angle formed by $\mathbf k$ and $\mathbf C$; $i_j$ is the convex angle formed by $\mathbf C$ and $\mathbf C_j$ and, finally, ${\mathbf x_j}_{\rm pl}$, $ {\mathbf y_j}_{\rm pl}$ are as in \equ{xjyjpl}, with $\phi_j$ replaced by $\varphi_j$.\\ Using Lemma \ref{LemmaD} twice, one easily finds \beqa{C1} \by_j\cdot d{\bx}_j=&&{{\mathbf C}_j}\cdot {\mathbf k}\,d\zeta+{\bar{\mathbf C}_j}\cdot {\mathbf i}\,di +{\bar{\mathbf C}_j}\cdot {\mathbf k}\,d\gamma+\ {{\mathbf C}_j}_{\rm pl}\cdot {\mathbf i}\,d(i_j)\nonumber\\ &&+{\by_j}_{\rm pl}\cdot
d{{\bx}_j}_{\rm pl}\nonumber\\ =&&{{\mathbf C}_j}\cdot {\mathbf k}\,d\zeta+{{\mathbf C}_j}\cdot {\mathbf e}_1\,di +{{\mathbf C}_j}\cdot {\mathbf e}_3\,d\gamma+{\by_j}_{\rm pl}\cdot d{{\bx}_j}_{\rm pl} \end{eqnarray} We have used ${{\mathbf C}_j}_{\rm pl}\cdot {\mathbf i}=0$, ${{\mathbf C}_j}={\cal R} _3(\zeta){\cal R} _1(i ){\bar{\mathbf C}_j}$ and we have let \beq{e1e3}\mathbf e_1:={\cal R} _3(\zeta){\cal R} _1(i )\mathbf i\,,\qquad \mathbf e_3:={\cal R} _3(\zeta){\cal R} _1(i )\mathbf k\ .\end{equation} Taking the sum of \equ{C1} with $j=1$, $2$ and using \equ{Kepler symplectic} and recognizing that $$\arr{({{\mathbf C}_1}+{\mathbf C}_2)\cdot {\mathbf k}={\mathbf C}\cdot {\mathbf k}=Z\\ ({{\mathbf C}_1}+{\mathbf C}_2)\cdot {\mathbf e}_1= {\mathbf C}\cdot {\mathbf e}_1=0\\ ({{\mathbf C}_1}+{\mathbf C}_2)\cdot {\mathbf e}_3={\mathbf C}\cdot {\mathbf e}_3=C}$$ we have the proof. \qed \vskip.1in \noindent {\bf Induction} The inductive step is made on the statement \equ{New statement}. The map $\widehat\phi_{{\cal D} _{e\ell}}^{{\cal D}_{ep}}$ in \equ{New statement} will be named $\widehat\phi_n$. We assume that \equ{New statement} holds for a given $n\ge 2$ and prove it for $n+1$. Consider the map $$\phi^*_{n+1}:\quad \widehat{\cal D}_{e\ell, n+1}=(\mathbf Z, \mathbf G, \bm\zeta, \bm\phi)\to \widetilde{\cal D}_{ep, n+1}=(\bm\Psi^*, \mathbf G^*, \bm\psi^*, \bm\varphi^*)$$ defined as follows. 
If $$\mathbf Z=\big(\widetilde{\mathbf Z}, Z_{n+1}\big)\,,\ \mathbf G=\big(\widetilde{\mathbf G}, G_{n+1}\big)\,,\ \bm \zeta=\big(\widetilde{\bm \zeta}, \zeta_{n+1}\big)\,,\ \bm \phi=\big(\widetilde{\bm \phi}, \phi_{n+1}\big)$$ where the tilded arguments have dimension {$n$}, we let $$(\widetilde{\bm \Psi}, \widetilde{\mathbf G},\widetilde{\bm \psi}, \widetilde{\bm \varphi})=\phi_{n}(\widetilde{\mathbf Z}, \widetilde{\mathbf G},\widetilde{\bm \zeta}, \widetilde{\bm \phi})$$ and then $$\phi^*_{n+1}(\mathbf Z, \mathbf G, \bm\zeta, \bm\phi):=\big((\widetilde{\bm\Psi}, Z_{n+1}), (\widetilde{\mathbf G}, G_{n+1}), (\widetilde{\bm\psi}, \zeta_{n+1}), (\widetilde{\bm\varphi}, \phi_{n+1})\big)=:(\bm\Psi^*, \mathbf G^*, \bm\psi^*, \bm\varphi^*)$$ By the inductive assumption, $\phi_n$ verifies $$\widetilde{\mathbf Z}\cdot d\widetilde{\bm\zeta}+\widetilde{\mathbf G}\cdot d\widetilde{\bm\phi}=\widetilde{\mathbf \Psi}\cdot d\widetilde{\bm\psi}+\widetilde{\mathbf G}\cdot d\widetilde{\bm\varphi}$$ and hence $\phi^*_{n+1}$ verifies \beqa{canonical1} {\mathbf Z}\cdot d{\bm\zeta}+{\mathbf G}\cdot d{\bm\phi}&=&{\mathbf \Psi}^*\cdot d{\bm\psi}^*+{\mathbf G}^*\cdot d{\bm\varphi}^*\nonumber\\&=&\widetilde{\mathbf \Psi}\cdot d\widetilde{\bm\psi}+\widetilde{\mathbf G}\cdot d\widetilde{\bm\varphi}+Z_{n+1}d\zeta_{n+1}+G_{n+1}d\phi_{n+1}\nonumber\\ &=&\left(\sum_{j=1}^{n-2}\widetilde{ \Psi}_j\cdot d\widetilde{\psi}_j+\widetilde{\mathbf G}\cdot d\widetilde{\bm\varphi}\right) +\widetilde{ \Psi}_{n-1}\cdot d\widetilde{\psi}_{n-1}+\widetilde{ \Psi}_{n}\cdot d\widetilde{\psi}_{n}\nonumber\\ &+&Z_{n+1}d\zeta_{n+1}+G_{n+1}d\phi_{n+1} \end{eqnarray} having split \begin{eqnarray*} \widetilde{\bm\Psi}=(\widetilde\Psi_1, \ldots, \widetilde\Psi_n)\,,\ \widetilde{\bm \psi}=(\widetilde\psi_1, \ldots, \widetilde\psi_n)\,.\end{eqnarray*} We moreover define a map $\phi_{*, n+1}$ on $(\bm\Psi^*, \mathbf G^*, \bm\psi^*, \bm\varphi^*)$ acting as $$(\bm\Psi_*, \mathbf G_*, \bm\psi_*,
\bm\varphi_*)=\phi_{2}\big((\widetilde{\Psi}_{n}, Z_{n+1}), (\widetilde{ \Psi}_{n-1}, G_{n+1}), (\widetilde{\psi}_n, \zeta_{n+1}), (\widetilde{\psi}_{n-1}, \phi_{n+1})\big)$$ on the designated variables, and as the identity on the remaining ones. Note that the arguments on the left-hand side have dimension $2$, that $\mathbf G_*=(\widetilde{ \Psi}_{n-1}, G_{n+1})$, and put $\bm\varphi_*=(\varphi_{*, 1}, \varphi_{*, 2})$. By the base step (the case $n=2$ of \equ{New statement}), we have \beqa{canonical2}\widetilde{ \Psi}_{n-1}\cdot d\widetilde{\psi}_{n-1}+\widetilde{ \Psi}_{n}\cdot d\widetilde{\psi}_{n} +Z_{n+1}d\zeta_{n+1}+G_{n+1}d\phi_{n+1}={\mathbf \Psi}_*\cdot d{\bm\psi}_*+{\mathbf G}_*\cdot d{\bm\varphi}_*\end{eqnarray} Let us now look at the composition \beqa{composition}\phi_{*, n+1}\circ\phi^*_{n+1}\end{eqnarray} It acts as \begin{eqnarray*} (\mathbf Z, \mathbf G, \bm\zeta, \bm\phi)&\to& \big( (\widetilde\Psi_1, \ldots, \widetilde\Psi_{n-2},\widetilde\Psi_{n-1}, \bm\Psi_*), (\widetilde{\mathbf G}, G_{n+1}), (\widetilde\psi_1, \ldots, \widetilde\psi_{n-2},{\varphi}_{*, 1}, \bm\psi_*), (\widetilde{\bm\varphi}, {\varphi}_{*, 2})\big)\nonumber\\ &&=:(\bm\Psi, \mathbf G, \bm\psi, \bm\varphi) \end{eqnarray*} and, by \equ{canonical1} and \equ{canonical2}, verifies \begin{eqnarray*} {\mathbf Z}\cdot d{\bm\zeta}+{\mathbf G}\cdot d{\bm\phi}&=&\left(\sum_{j=1}^{n-2}\widetilde{ \Psi}_j\cdot d\widetilde{\psi}_j+\widetilde{\mathbf G}\cdot d\widetilde{\bm\varphi}\right) +{\mathbf \Psi}_*\cdot d{\bm\psi}_*+{\mathbf G}_*\cdot d{\bm\varphi}_* \nonumber\\ &=&{\mathbf \Psi}\cdot d{\bm\psi}+{\mathbf G}\cdot d{\bm\varphi}\,. \end{eqnarray*} It is not difficult to recognize -- using \equ{reduced Deprit} -- that the map \equ{composition} coincides with $\phi_{n+1}$. For the details, we refer to \cite{pinzari-th09, chierchiaPi11a}.
$\quad \square$ \paragraph{{The Deprit map}} {In this section, we provide the explicit expression of the map \beqa{cartesianToDeprit}\phi_{{\cal C} }^{{\cal D}_{ep}}:\quad {\cal C}=(\mathbf y_1, \ldots, \mathbf y_n, \mathbf x_1, \ldots, \mathbf x_n)\to {\cal D}_{ep}=(\bm\Psi, \mathbf G, \mathbf R, \bm\psi, \bm\varphi, \mathbf r)\,.\end{eqnarray} The discussion in the previous section shows that each orbital frame $\cH_i$, $i=1$, $\ldots$, $n$, can be reached via a sequence of transformations which carry the frame $\DD_{n+1}:=({\mathbf i}, {\mathbf j}, {\mathbf k})$ onto $\cH_i$ through the following diagram (named {\it tree} by Deprit): \begin{eqnarray*} \begin{array}{cccccccccccccccccc} \displaystyle \DD_{n+1}&\to&\DD_n&\to& \DD_{n-1}&\to&\cdots&\to& \DD_2&\to&{\rm H}_1\\ \\ \displaystyle&&\downarrow&&\downarrow&&\vdots&&\downarrow&&\\ \\ \displaystyle&&{\rm H}_{n}&&{\rm H}_{n-1}&&\vdots&&{\rm H}_2&&&\\ \\ \end{array} \end{eqnarray*} In turn, \begin{itemize} \item[--] the transition $\DD_{n+1}\to \DD_{n}$ is described by the sequence of rotations ${\cal R} _3(\psi_{n}){\cal R} _1(i_{n})$, with $\cos i_n=\frac{Z}{\rm G}=\frac{\Psi_{n}}{\Psi_{n-1}}$ (see figure \ref{Deprit1}); \item[--] the transitions $\DD_{i+1}\to \rm H_{i+1}$, $i+1=n-1$, $\ldots$, $2$, are described by the sequence of rotations ${\cal R} _3(\psi_i){\cal R} _1(i_i)$, with $\cos i_i=\frac{\Psi_{i}^2+G_i^2-\Psi_{i-1}^2}{2\Psi_{i} G_{i+1}}$ (see figure \ref{Deprit2}); \item[--] the transitions $\DD_{i+1}\to \DD_{i}$, $i+1=n-1$, $\ldots$, $1$, are described by the sequence of rotations ${\cal R} _3(\psi_i+\pi){\cal R} _1(i^*_i):={\cal R} _3(\psi_i^*){\cal R} _1(i^*_i)$, with $\cos i^*_i=\frac{\Psi_{i}^2-G_{i+1}^2+\Psi_{i-1}^2}{2\Psi_{i-1} \Psi_{i}}$ (see figure \ref{Deprit2}, noticing that $\mathbf S_{i+1}\times \mathbf C_{i+1}=-\mathbf S_{i+1}\times \mathbf S_{i}$).
\end{itemize} Then we find that \equ{cartesianToDeprit} has the expression $$\left\{ \begin{array}{lll} \displaystyle\by_i={\cal R} _i^n \by^*_i\\\\ \displaystyle\bx_i={\cal R} _i^n \bx^*_i \end{array} \right.$$ with $${\cal R} _i^n:={\cal R} _3(\psi_{n}){\cal R} _1(i_{n}){\cal R} _3(\psi^*_{n-1}){\cal R} _1(i^*_{n-1})\cdots {\cal R} _3(\psi^*_{i}){\cal R} _1(i^*_{i}){\cal R} _3(\psi_{i-1}){\cal R} _1(i_{i-1})$$ and $\by^*_i$, $\bx^*_i$ as in \equ{xjyjpl}. } \subsection{The map ${\cal K}$}\label{sec: K map} \begin{figure}[htp] \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {${\mathbf S}_{j}\times \hat{\bm\nu}_j$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {${\mathbf S}_{j}$}; \node (knew) at (0,2.2,0) {}; \node (xnew) at (2.2, 2.2, 2.2) {}; \node (incli) at (1.5, 3.2, 2.2) {$\tred{{\rm i}_{j}}$}; \draw [-latex, bend right] (xnew) to (knew); \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {$\hat{\bm\nu}_j$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.5,3.5,3.5); \draw [dashed, -] (3.5,3.5,3.5) -- (4.3,4.3,4.3); \draw [dashed, -] (4.3,4.3,4.3) -- (5.5,2.8,3.5); \node (C) at (3.0,3.5,3.8) {$\tblue{\mathbf x_j}$}; \draw [thick, ->] (3.5,3.5,3.5) -- (5.5,2.8,3.5); \node (y) at (4.4,3.0,3.8) {$\mathbf y_j$}; \node (Cnew) at (1.9,2.2,1.9) {$\tred{r_j}$}; \node (R) at (3.7,4.2,4) {$\tred{R_j}$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (3.2,0,-2.0) {$\tblue{\hat{\mathbf n}_j}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.2,2,0.2) {} ; \node (zeta) at (1.4,0,1.2) {$\tred{\hat\k_{j-1}}$} ; \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2)
{$\tred{\hat\k_{j-1}-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.2,-1.2,-1.6) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,3.0) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \end{tikzpicture} \caption{The reference frames $\hat{\rm F}_j$ and the $\cK$--coordinates $\hat\k_{j-1}$, $r_j$, $R_j$, $j=2$, $\ldots$, $n$.}\label{K1} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf C_1\times \hat{\bm\nu}_1$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf C_1$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {$\hat{\bm\nu}_1$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.57,0,2.75) ; \node (x2) at (4.0,0,3.0) {$\tblue{\mathbf x_1}$}; \node (x2newnew) at (2.0,0,1.5) {}; \node (ell2) at (2.0,0,1.5) {}; \node (g1) at (2,0,2.7) {}; \node (ell3) at (2.5,0,2.7) {}; \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (1.0,0,-1.0) {}; \node (gammaNEW) at (3.2,0,-3.2) {$\tblue{\hat{\mathbf n}_1}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2) {$\tred{\hat\vartheta_1-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.5,0,-0.8) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,2.25) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \node (Zeta) at (-0.2,2,0.2) {$\tred{\hat\Theta_1}$} ; \node (zeta) at (1.2,0,1.2) {} ; \end{tikzpicture} \caption{The reference $\hat{\rm F}_1$ and the $\cK$--coordinates $\hat\Theta_1$, $\hat\vartheta_1$.}\label{K2} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf x_j\times \hat{\mathbf n}_j$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf x_j$}; \draw [ultra thick,->] (0,0,0)
-- (0,0,5); \node (i) at (0,0,5.5) {$\hat{\mathbf n}_j$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.8,3.8,3.8) {$\tblue{\mathbf S_{j-1}}$}; \node (Cnew) at (1.7,2.2,1.9) {$\tred {\hat\chi_{j-2}}$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (3.4,0,-3.2) {$\tblue{\hat{\bm\nu}_{j-1}}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.4,2,0.2) {$\tred{\hat\Theta_j}$} ; \node (zeta) at (1.2,0,1.2) {$\tred{\hat\vartheta_j}$} ; \node (knew) at (0,2.2,0) {}; \node (xnew) at (2.2, 2.2, 2.2) {}; \node (incli) at (1.5, 3.2, 2.2) {$\tred{\iota_{j-1}}$}; \draw [-latex, bend right] (xnew) to (knew); \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2) {$\tred{\hat\vartheta_{j}-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.2,-1.2,-1.6) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,3.0) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \end{tikzpicture} \caption{The reference frames $\hat{\rm G}_{j}$ and the $\cK$--coordinates $\hat\vartheta_j$, $\hat\Theta_j$, $\hat\chi_{j-1}$, $j=1$, $\ldots$, $n$. When $j=2$, take $\hat\chi_0:=\hat\Theta_1$; when $j=1$, disregard ${\mathbf S}_0$, $\hat{\bm\nu}_0$ and $\hat\chi_{-1}$.}\label{K3} \end{figure} \noindent The $\cK$--coordinates have been described in \cite{pinzari13} for $n=2$ and generalized to any $n\in \natural$, $n\ge 2$ in \cite{pinzari18}. Here, for the sake of uniformity with the coordinates ${\cal D} _{ep}$, we change\footnote{ The main changes regard the coordinates that in \cite{pinzari18} are called $\tilde\Theta_0$, $\tilde\chi_{n-1}$, which here are called $\hat\chi_n$, $\hat\Theta_1$.
The other coordinates just underwent a different numbering: $(\tilde\Theta_j)_{1\le j\le n-1}$, $\tilde\chi_0$, $(\tilde\chi_j)_{1\le j\le n-2}$, $\Lambda_j$ here are denoted, respectively, as $(\hat\Theta_{n-j+1})$, $\hat\chi_{n-1}$, $(\hat\chi_{n-j-1})$, $\hat\L_{n-j+1}$. An analogous change of notation holds of course for the conjugated coordinates. } notations a little bit compared to \cite{pinzari18}. We let $$\cK=(\hat{\bm\Theta}, \hat{\bm\chi}, {\mathbf R}, \hat{\bm\vartheta}, \hat{\bm\k}, {\mathbf r})$$ where $\mathbf R$, $\mathbf r$ are as in \equ{Deprit variables}, while \begin{eqnarray*} \begin{array}{lll} \displaystyle\hat{\bm\Theta}=(\hat\Theta_1,\ldots,\hat\Theta_{n}),\quad &\hat{\bm\vartheta}=(\hat\vartheta_1,\ldots,\hat\vartheta_{n})\\\\ \displaystyle\hat{\bm\chi}=(\hat\chi_1,\ldots,\hat\chi_{n}),&\hat{\bm\k}=(\hat\k_1,\ldots,\hat\k_{n}) \end{array} \end{eqnarray*} are defined as follows. Let $\mathbf S_j$ be as in \equ{partial sums}. Define the {\it$\cK$-nodes} \beq{good nodesNEW} \hat{\bm{\nu} }_j:=\left\{ \begin{array} {llll} \displaystyle {\mathbf k}\times {\mathbf C}\quad &j=n\\ \\ \displaystyle {\mathbf x}_{j+1}\times {\mathbf S}_{j} &j=1,\ldots, n-1 \end{array} \right.\qquad \hat{\mathbf n}_j:=\displaystyle{\mathbf S}_{j}\times {\mathbf x}_{j} \qquad j=1,\ldots, n. \end{equation} and then the $\cK$--coordinates as follows.
\beqa{belle*NEW} \begin{array} {llllrrr} \displaystyle \hat\Theta_{j}:=\left\{ \begin{array} {lrrr} \displaystyle {\mathbf S}_{j}\cdot \frac{{\mathbf x}_{j}}{|{\mathbf x}_{j}|}\qquad& 2\le j\le n \\ \\ \displaystyle |{\mathbf C}_{1}| & j=1 \end{array} \right.& \hat\vartheta_{j}:=\left\{ \begin{array} {lrrr} \displaystyle\a_{{\mathbf x}_{j}}(\hat{\mathbf n}_{j}, \hat{\bm\nu}_{j-1})\qquad& 2\le j\le n\\ \\ \displaystyle \a_{{\mathbf C}_{1}}(\hat{\bm \nu}_{1}, \hat{\mathbf n}_1)&j=1 \end{array} \right.\\ \\ \displaystyle\hat\chi_{j}:=\left\{ \begin{array} {lrrr}Z:={\mathbf C}\cdot {\mathbf k}\qquad& j=n\\ \\ C:=|{\mathbf C} |& j=n-1\\ \\ |{\mathbf S}_{j+1}| &1\le j\le n-2\ (n\ge 3) \end{array} \right. & \hat{\k}_{j}:=\left\{ \begin{array} {lrrr} \zeta:=\a_{{\mathbf k}}({\mathbf i}, \hat{\bm{\nu} }_n) \qquad& j=n\\ \\ \gamma:= \a_{{\mathbf S}_{n}}(\hat{\bm{\nu} }_{n}, \hat{\mathbf n}_{n})&j=n-1\\ \\ \a_{{\mathbf S}_{j+1}}(\hat{\bm{\nu} }_{j+1}, \hat{\mathbf n}_{j+1})&1\le j\le n-2\ (n\ge 3) \end{array} \right. \end{array} \end{eqnarray} \begin{remark}\rm Note that the node $\hat{\bm{\nu} }_n$ coincides with $\ovl{\bm{\nu} }={\bm{\nu} }_{n+1}$ in \equ{nodes}; the coordinates $Z$ and $\zeta$ are the same as in \equ{Deprit variables} and, finally, the coordinates $\hat{\bm\chi}$ coincide with the coordinates $\hat{\bm\Psi}$ in \equ{Deprit variables}. In particular, ${\cal D} _{ep}$ and $\cK$ share the construction in Figure \ref{Deprit1}. The geometrical meaning of the other $\cK$--coordinates is pointed out in the next section.
\end{remark} \paragraph{\bf A chain of reference frames} We consider the following chain of vectors {\small\beqa{chainNEW} \begin{array} {cccccccccccccccccc} \displaystyle {\mathbf k}&\to&{\mathbf S}_n={\mathbf C}&\to& {\mathbf x}_n&\to&\cdots&\to&{\mathbf S}_j&\to&{\mathbf x}_j&\to& {\mathbf S}_{j-1}&\to&\cdots&\to& {\mathbf S}_1={\mathbf C}_1\\ \\ \displaystyle&&\Downarrow&&\Downarrow&&\vdots&&\Downarrow&&\Downarrow&&\Downarrow&&\vdots&&\Downarrow\\ \\ \displaystyle&&\hat{\bm{\nu} }_n&&\hat{\mathbf n}_n&&\vdots&&\hat{\bm{\nu} }_j&&\hat{\mathbf n}_j&&\hat{\bm{\nu} }_{j-1}&&\vdots&&\hat{\bm{\nu} }_1\\ \end{array} \end{eqnarray} } where $\hat{\bm{\nu} }_j$, $\hat{\mathbf n}_j$ are the {\it $\cK$-nodes} in \equ{good nodesNEW}, given by the skew-product of two consecutive vectors in the chain. {\smallskip\noindent} We associate to this chain of vectors the following chain of frames \beqa{P chain} \begin{array} {cccccccccccccccccc} \displaystyle \hat{\rm G}_{n+1}&\to&\hat{\rm F}_n&\to& \hat{\rm G}_{n}&\to&\cdots&\to& \hat{\rm F}_j&\to& \hat{\rm G}_j&\to& \hat{\rm F}_{j-1}&\to&\cdots&\to &\hat{\rm G}_1 \end{array} \end{eqnarray} where $\hat{\rm G}_{n+1}=({\mathbf i}, {\mathbf j}, {\mathbf k})$ is the initial prefixed frame, while $\hat{\rm F}_j$, $\hat{\rm G}_j$ are frames defined via \beqa{FGNEW}&&\hat{\rm F}_j=(\hat{\bm{\nu} }_j, \ \cdot , {\mathbf S}_j)\quad\hat{\rm G}_j=(\hat{\mathbf n}_j,\ \cdot, {\mathbf x}_j)\qquad j=1,\cdots, n. \end{eqnarray} By construction, each frame in the chain has its first axis coinciding with the intersection of its horizontal plane with the horizontal plane of the previous frame (hence, in particular, $\hat{\bm{\nu} }_j\perp {\mathbf S}_j$ and $ \hat{\mathbf n}_j\perp {\mathbf x}_j$). \paragraph{\bf Explicit expression of the $\cK$--map} We now derive the explicit formulae of the map which relates the coordinates \equ{belle*NEW} to the coordinates $(\by_1, \ldots, \by_n, \bx_1, \ldots, \bx_n)$.
We shall prove that such map has the expression \beqa{Kmap}\left\{ \begin{array}{ll}\displaystyle \bx_j=\bx^n_j:={\cal R}_j^n\tilde \bx_j\\\\ \displaystyle \by_j=\by^n_j:={\cal R}_j^n\tilde \by_j \end{array} \right.\end{eqnarray} where \beqa{Rr} \left\{\begin{array}{lll} \displaystyle{\cal R}^n_j:=\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{j+1}\hat{\cal S}_{j+1}\hat{\cal T}_{j}\hat{\cal S}_{j}\\\\ \displaystyle\tilde \bx_j:=r_j {\mathbf k}\\\\ \displaystyle\tilde \by_j:=R_j{\mathbf k}+\frac{1}{r_j}\tilde {\mathbf C}_j\times {\mathbf k}\\\\ \displaystyle \tilde {\mathbf C}_j:=\left\{\begin{array}{ll}\displaystyle\hat{\cal S}^{-1}_j \Big(\hat\chi_{j-1}{\mathbf k}-\hat\chi_{j-2}\hat{\cal S}_{j}\hat{\cal T}_{j-1}{\mathbf k}\Big)=\tilde \bx_j\times \tilde \by_j\quad &j=2, \ldots, n\\\\ \displaystyle \hat\Theta_1\hat{\cal S}^{-1}_1{\mathbf k} &j=1 \end{array} \right. \end{array} \right. \end{eqnarray} where $\hat{\cal T} _j$, $\hat{\cal S} _j$ have the expressions \beqa{CC} && \hat{\cal T}_{j}:=\left\{\begin{array}{ll}{\cal R} _3(\zeta){\cal R} _1(\iota_n)\quad &j=n\\\\ {\cal R} _3(\hat\vartheta_{j+1}){\cal R} _1(\iota_{j})&1\le j\le n-1 \end{array} \right. \qquad \hat{\cal S}_{j}:=\left\{ \begin{array}{lll}\displaystyle{\cal R} _3(\hat\k_{j-1}){\cal R} _1({\rm i}_j),\quad &2\le j\le n\\\\ \displaystyle{\cal R} _3(\hat\vartheta_1){\cal R} _1(\frac{{\pi} }{2}),\quad &j=1 \end{array} \right.
\end{eqnarray} with \beqa{good incli*}\displaystyle&&\left\{\begin{array}{lll}\displaystyle\cos\iota_{n}=\frac{Z}{\hat\chi_{n-1}}\quad &\\\\ \displaystyle\cos\iota_{j}=\frac{\hat\Theta_{j+1}}{\hat\chi_{j-1}}& 2\le j\le n-1\ (n\ge 3)\\\\ \displaystyle\cos\iota_{1}=\frac{\hat\Theta_{2}}{\hat\Theta_{1}}& \end{array} \right.\nonumber\\ &&\left\{\begin{array}{lll}\displaystyle \cos{\rm i}_j:=\frac{\hat\Theta_{j}}{\hat\chi_{j-1}},\quad2\le j\le n\\\\ \displaystyle {\rm i}_1=\frac{{\pi} }{2}\end{array} \right.\end{eqnarray} \vskip.1in \noindent Indeed, $\hat{\cal T}_j$ is the rotation matrix which describes the change of coordinates from $\hat{\rm G}_{j+1}$ to $\hat{\rm F}_j$, while $\hat{\cal S}_j$ describes the change of coordinates from $\hat{\rm F}_j$ to $\hat{\rm G}_{j}$, as follows from the definitions of $(\hat\Theta, \hat\chi,\hat\vartheta, \hat\k)$ in \equ{belle*NEW} (see also Figures \ref{K1}, \ref{K2} and \ref{K3}). The formulae \equ{Kmap}--\equ{good incli*} are obtained by considering the following sequence of transformations \begin{eqnarray*} \begin{array} {cccccccccccccccccc} &\hat{\cal T}_n&&\hat{\cal S}_n& &&\cdots&&&\hat{\cal S}_j&&\hat{\cal T}_{j-1}&& &\cdots&\hat{\cal S}_1& \\ \\ \displaystyle \hat{\rm G}_{n+1}&\to&\hat{\rm F}_n&\to& \hat{\rm G}_{n}&\to&\cdots&\to& \hat{\rm F}_j&\to& \hat{\rm G}_{j}&\to& \hat{\rm F}_{j-1}&\to&\cdots&\to &\hat{\rm G}_1 \end{array} \end{eqnarray*} connecting $\hat{\rm G}_{n+1}$ to any other frame in the chain.
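The construction above lends itself to a quick numerical sanity check. The following sketch (Python with NumPy; the helper names \texttt{R1}, \texttt{R3} and all sample values are ours and purely illustrative, not part of the original construction) verifies two ingredients used in the $\cK$--map: that a composite of matrices of the form ${\cal R}_3(\alpha){\cal R}_1(\beta)$ is a proper rotation, and that the decomposition $\by=\frac{R}{r}\,\bx+\frac{1}{r^2}\,{\mathbf C}\times\bx$, with $r=|\bx|$, $R=\frac{\by\cdot\bx}{r}$ and ${\mathbf C}=\bx\times\by$, reconstructs $\by$ exactly.

```python
import numpy as np

def R1(t):
    """Rotation by angle t about the first axis (the paper's R_1)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def R3(t):
    """Rotation by angle t about the third axis (the paper's R_3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

rng = np.random.default_rng(0)

# A chain of transformations, each of the form R3(a) R1(b), as in the
# definition of the matrices T-hat_j, S-hat_j above (angles chosen at random).
chain = np.eye(3)
for _ in range(4):
    a, b = rng.uniform(0, 2 * np.pi, size=2)
    chain = chain @ R3(a) @ R1(b)

# The composite must be a proper rotation: orthogonal with determinant 1.
assert np.allclose(chain @ chain.T, np.eye(3))
assert np.isclose(np.linalg.det(chain), 1.0)

# Radial/angular-momentum decomposition of the velocity:
#   y = (R/r) x + (1/r^2) C x x,  with r = |x|, R = (y.x)/r, C = x x y.
x = rng.standard_normal(3)
y = rng.standard_normal(3)
r = np.linalg.norm(x)
R = np.dot(y, x) / r
C = np.cross(x, y)
y_rec = (R / r) * x + np.cross(C, x) / r**2
assert np.allclose(y_rec, y)
```

The second check is the coordinate-free content of the formula for $\tilde\by_j$ in \equ{Rr}: in the frame where $\tilde\bx_j=r_j\mathbf k$, it reduces to $\tilde\by_j=R_j\mathbf k+\frac{1}{r_j}\tilde{\mathbf C}_j\times\mathbf k$.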
From this, and the definitions of the frames \equ{FGNEW}, one finds $${\mathbf S}_j=\left\{\begin{array}{lll} \displaystyle\hat\chi_{j-1}\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{j+1}\hat{\cal S}_{j+1}\hat{\cal T}_j {\mathbf k}\quad &j=2\,,\ \ldots\,,\ n\\\\ \hat\Theta_1\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{2}\hat{\cal S}_{2}\hat{\cal T}_1{\mathbf k}&j=1 \end{array} \right.\qquad {\mathbf x}_j=r_j\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{j+1}\hat{\cal S}_{j+1}\hat{\cal T}_{j}\hat{\cal S}_{j}{\mathbf k}$$ whence $${\mathbf C}_j=\left\{\begin{array}{lll}\displaystyle{\mathbf S}_j-{\mathbf S}_{j-1}=\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{j+1}\hat{\cal S}_{j+1}\hat{\cal T}_j \Big(\hat\chi_{j-1}{\mathbf k}-\hat\chi_{j-2}\hat{\cal S}_{j}\hat{\cal T}_{j-1}{\mathbf k}\Big)\quad &j=2\,,\ \ldots\,, n\\\\ \displaystyle{\mathbf S}_1=\hat\Theta_1\hat{\cal T}_n\hat{\cal S}_n\cdots \hat{\cal T}_{2}\hat{\cal S}_{2}\hat{\cal T}_1{\mathbf k}&j=1 \end{array} \right.$$ and finally \begin{eqnarray*} \by_j=\frac{{R}_j}{{r}_j}\bx_j+\frac{1}{r_j^2}{\mathbf C}_j\times \bx_j \end{eqnarray*} Collecting such formulae, one finds \equ{Kmap}--\equ{good incli*}. \paragraph{\bf Canonical character of $\cK$} \begin{lemma}\label{Kcanonical} \label{lem: good change} $\cK$ preserves the standard Liouville 1-form: \beq{1form}\sum_{j=1}^n\by_j\cdot d\bx_j=\hat{\bm\Theta}\cdot d\hat{\bm\vartheta}+\hat{\bm\chi}\cdot d\hat{\bm\k}+{\mathbf R}\cdot d{\mathbf r}.\end{equation} \end{lemma} {\smallskip\noindent} The proof of Lemma \ref{lem: good change} again relies on Lemma \ref{LemmaD}. \par\medskip\noindent{\bf Proof\ } We use the expression in \equ{Kmap}.
We also define $${\mathbf C}_j^n:={\cal R}_j^n\tilde {\mathbf C}_j\,,\qquad \ovl{\mathbf C}^n_j:=\ovl{\cal R}_j^n\tilde {\mathbf C}_j\,,\quad \ovl{\cal R}_j^n:=\hat{\cal T}_n^{-1}{\cal R}_j^n$$ Applying Lemma \ref{LemmaD} twice, we get $$\by^n_j\cdot d\bx^n_j={\mathbf C}^n_j\cdot {\mathbf k}\,d\zeta+\ovl{\mathbf C}^n_j\cdot {\mathbf i}\,d\iota_n+\ovl{\mathbf C}^n_j\cdot {\mathbf k}\, d\hat\k_{n-1}+{\mathbf C}^{n-1}_j\cdot {\mathbf i}\, d {\rm i}_n+\by^{n-1}_j\cdot d\bx^{n-1}_j\,.$$ Continuing in this way, after $n-j+1$ steps we arrive at \beqa{yjdxj}\by_j\cdot d\bx_j&=&{\mathbf C}^n_j\cdot {\mathbf k}\,d\zeta+\ovl{\mathbf C}^n_j\cdot {\mathbf i}\,d\iota_n+\ovl{\mathbf C}^n_j\cdot {\mathbf k}\, d\hat\k_{n-1}+{\mathbf C}^{n-1}_j\cdot {\mathbf i}\, d {\rm i}_n\nonumber\\ &+&\sum_{k=j}^{n-1}\Big({\mathbf C}^k_j\cdot {\mathbf k}\,d\hat\vartheta_{k+1}+\ovl{\mathbf C}^k_j\cdot {\mathbf i}\,d\iota_k+\ovl{\mathbf C}^k_j\cdot {\mathbf k}\, d\hat\k_{k-1}+{\mathbf C}^{k-1}_j\cdot {\mathbf i}\, d {\rm i}_k\Big)\nonumber\\ &+&\widetilde\by_j\cdot d\widetilde\bx_j\end{eqnarray} with $${\rm i}_1:=\frac{{\pi} }{2}\,,\ \hat\k_{0}:=\hat\vartheta_1\,,\quad {\mathbf C}^{j-1}_j:=\tilde{\mathbf C}_j=\widetilde\bx_j\times\widetilde\by_j\,.$$ We take the sum of \equ{yjdxj} with $j=1$, $\ldots$, $n$.
Exchanging the sums $$\sum_{j=1}^n\sum_{k=j}^{n-1}=\sum_{k=1}^{n-1}\sum_{j=1}^k$$ and recognizing that $$\left\{ \begin{array}{lll} \displaystyle\sum_{j=1}^k{\mathbf C}^k_j=\left\{ \begin{array}{lll}\hat{\cal S}_{k+1}^{-1}\hat{\cal T}_{k+1}^{-1}\cdots\hat{\cal S}_n^{-1}\hat{\cal T}_n^{-1}{\mathbf S}_k=\hat\chi_{k-1}\hat{\cal T}_k{\mathbf k} \quad &1\le k\le n-1\\\\ \displaystyle {\mathbf S}_n=\hat\chi_{n-1}\hat{\cal T}_n{\mathbf k} &k=n \end{array} \right.\\\\ \displaystyle\sum_{j=1}^k{\mathbf C}^{k-1}_j=\left\{ \begin{array}{lll}\hat{\cal S}_{k}^{-1}\hat{\cal T}_{k}^{-1}\cdots\hat{\cal S}_n^{-1}\hat{\cal T}_n^{-1}{\mathbf S}_k=\hat\chi_{k-1}\hat{\cal S}_{k}^{-1}{\mathbf k} \quad &1\le k\le n-1\\\\ \displaystyle \hat{\cal S}_n^{-1} \hat{\cal T}_n^{-1}{\mathbf S}_n=\hat\chi_{n-1}\hat{\cal S}_{n}^{-1}{\mathbf k} &k=n \end{array} \right.\\\\ \displaystyle \sum_{j=1}^k\ovl{\mathbf C}^k_j=\left\{ \begin{array}{lll}\hat{\cal T}_k^{-1}\hat{\cal S}_{k+1}^{-1}\hat{\cal T}_{k+1}^{-1}\cdots\hat{\cal S}_n^{-1}\hat{\cal T}_n^{-1}{\mathbf S}_k=\hat\chi_{k-1}{\mathbf k} \quad &1\le k\le n-1\\\\ \displaystyle \hat{\cal T}_n^{-1} {\mathbf S}_n=\hat\chi_{n-1}{\mathbf k} &k=n \end{array} \right.
\end{array} \right.$$ with $\hat\chi_0:=\hat\Theta_1$ and that, by \equ{Rr}, the last term in \equ{yjdxj} is $$\tilde \by_j\cdot d\tilde \bx_j=R_j dr_j$$ we get \begin{eqnarray*} \sum_{j=1}^n\by_j\cdot d\bx_j&=&\sum_{j=1}^{n}\Big({\mathbf C}^n_j\cdot {\mathbf k}\,d\zeta+\ovl{\mathbf C}^n_j\cdot {\mathbf i}\,d\iota_n+\ovl{\mathbf C}^n_j\cdot {\mathbf k}\, d\hat\k_{n-1}+{\mathbf C}^{n-1}_j\cdot {\mathbf i}\, d {\rm i}_n\Big)\nonumber\\ &+&\sum_{k=1}^{n-1}\sum_{j=1}^k\Big({\mathbf C}^k_j\cdot {\mathbf k}\,d\hat\vartheta_{k+1}+\ovl{\mathbf C}^k_j\cdot {\mathbf i}\,d\iota_k+\ovl{\mathbf C}^k_j\cdot {\mathbf k}\, d\hat\k_{k-1}+{\mathbf C}^{k-1}_j\cdot {\mathbf i}\, d {\rm i}_k\Big)\nonumber\\ &+&\sum_{j=1}^{n}R_j dr_j\nonumber\\\nonumber\\ &=&\hat\chi_{n-1}\hat{\cal T}_{n}{\mathbf k}\cdot {\mathbf k}\,d\zeta+\hat\chi_{n-1}{\mathbf k}\cdot {\mathbf i}\,d\iota_n+\hat\chi_{n-1}{\mathbf k}\cdot {\mathbf k}\, d\hat\k_{n-1} +\hat\chi_{n-1}{\mathbf k} \cdot \hat{\cal S}_n{\mathbf i}\, d {\rm i}_n\nonumber\\ &+&\sum_{k=1}^{n-1}\Big(\hat\chi_{k-1}\hat{\cal T}_{k}{\mathbf k}\cdot {\mathbf k}\,d\hat\vartheta_{k+1}+\hat\chi_{k-1}{\mathbf k}\cdot {\mathbf i}\,d\iota_k+\hat\chi_{k-1}{\mathbf k}\cdot {\mathbf k}\, d\hat\k_{k-1} +\hat\chi_{k-1}{\mathbf k} \cdot \hat{\cal S}_k{\mathbf i}\, d {\rm i}_k\Big)\nonumber\\ &+&\sum_{j=1}^{n}R_j dr_j\nonumber\\ &=&\sum_{k=1}^{n}\hat\Theta_{k}d\hat\vartheta_k+\sum_{k=1}^{n}\hat\chi_{k}d\hat\k_{k}+\sum_{j=1}^{n}R_j dr_j \end{eqnarray*} having used $$\hat{\cal T}_{k}{\mathbf k}\cdot {\mathbf k}=\cos\iota_k{=\frac{\hat\Theta_{k+1}}{\hat\chi_{k-1}}}\,,\quad \hat{\cal S}_{k}{\mathbf i}\cdot {\mathbf k}=0\,,\quad {\mathbf k}\cdot {\mathbf k}=1\,,\quad {\mathbf i}\cdot {\mathbf k}=0\,.$$ $\quad \square$ \vskip.1in \noindent In the following section, we shall use the following byproduct of Lemma \ref{Kcanonical}.
Recall the coordinates ${\cal D} _{e\ell}$ in \equ{Del} and denote $$\phi_{{\cal D} _{e\ell}}^\cK:\quad {\cal D} _{e\ell}=(\mathbf Z, \mathbf G, \mathbf R, \bm\zeta, \bm\phi, \mathbf r)\to{\cal K}=(\hat{\bm\Theta}, \hat{\bm\chi}, {\mathbf R}, \hat{\bm\vartheta}, \hat{\bm\k}, {\mathbf r})$$ Consider the family of projections \beqa{projection1}\hat\phi_{{\cal D} _{e\ell}}^\cK:\quad {\cal D} _{e\ell}=(\mathbf Z, \mathbf G, \bm\zeta, \bm\phi)\to{\cal K}=(\hat{\bm\Theta}, \hat{\bm\chi}, \hat{\bm\vartheta}, \hat{\bm\k})\end{eqnarray} which, as is immediate to see, is independent of $\mathbf r$ and $\mathbf R$. \begin{lemma}\label{projection2} The projections \equ{projection1} verify $${\mathbf Z}\cdot d\bm\zeta+{\mathbf G}\cdot d\bm\phi=\hat{\bm\Theta}\cdot d\hat{\bm\vartheta}+\hat{\bm\chi}\cdot d\hat{\bm\k}\quad \forall\ \mathbf r$$ \end{lemma} \subsection{The reduction of perihelia ${\cal P} $}\label{The reduction of perihelia} \begin{figure}[htp] \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {${\mathbf S}_{j}\times {\bm\nu}_j$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {${\mathbf S}_{j}$}; \node (knew) at (0,2.2,0) {}; \node (xnew) at (2.2, 2.2, 2.2) {}; \node (incli) at (1.5, 3.2, 2.2) {$\tred{{\rm i}_{j}}$}; \draw [-latex, bend right] (xnew) to (knew); \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {${\bm\nu}_j$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.0,3.5,3.8) {$\tblue{\mathbf P_j}$}; \node (Cnew) at (1.9,2.2,1.9) {}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (3.2,0,-2.0) {$\tblue{{\mathbf n}_j}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.2,2,0.2) {} ;
\node (zeta) at (1.4,0,1.2) {$\tred{\k_{j-1}}$} ; \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2) {$\tred{\k_{j-1}-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.2,-1.2,-1.6) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,3.0) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \end{tikzpicture} \caption{The reference frames ${\rm F}_j$ and the ${\cal P} $--coordinates $\k_{j-1}$, $j=2$, $\ldots$, $n$.}\label{P1} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf C_1\times {\bm\nu}_1$}; \draw [ultra thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf C_1$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {${\bm\nu}_1$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.57,0,2.75) ; \node (x2) at (4.0,0,3.0) {$\tblue{\mathbf P_1}$}; \node (x2newnew) at (2.0,0,1.5) {}; \node (ell2) at (2.0,0,1.5) {}; \node (g1) at (2,0,2.7) {}; \node (ell3) at (2.5,0,2.7) {}; \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (1.0,0,-1.0) {}; \node (gammaNEW) at (3.2,0,-3.2) {$\tblue{{\mathbf n}_1}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2) {$\tred{\vartheta_1-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.5,0,-0.8) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,2.25) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \node (Zeta) at (-0.2,2,0.2) {$\tred{\Theta_1}$} ; \node (zeta) at (1.2,0,1.2) {} ; \end{tikzpicture} \caption{The reference ${\rm F}_1$ and the ${\cal P} $--coordinates $\Theta_1$, $\vartheta_1$.}\label{P2} \begin{tikzpicture} \draw [ultra thick, ->] (0,0,0) -- (4,0,0); \node (j) at (4.7,0,0) {$\mathbf P_j\times {\mathbf n}_j$}; \draw [ultra
thick,->] (0,0,0) -- (0,4,0); \node (k) at (0,4.2,0) {$\mathbf P_j$}; \draw [ultra thick,->] (0,0,0) -- (0,0,5); \node (i) at (0,0,5.5) {${\mathbf n}_j$}; \node (inew) at (0,0,1.8) {}; \draw [blue, ultra thick, ->] (0,0,0) -- (3.5,3.5,3.5); \node (C) at (3.8,3.8,3.8) {$\tblue{\mathbf S_{j-1}}$}; \node (Cnew) at (1.7,2.2,1.9) {$\tred {\chi_{j-2}}$}; \draw [dashed, -] (3.5,3.5,3.5) -- (3.5,0,3.5); \draw [dashed, -] (0,0,0) -- (3.5,0,3.5); \draw [dashed, -] (0,3.5,0) -- (3.5,3.5,3.5); \draw [-] (-3.0,0,3.0) -- (0,0,0); \draw [blue, ultra thick,->] (0,0,0) -- (3.0,0,-3.0); \node (gamma) at (3.4,0,-3.2) {$\tblue{{\bm\nu}_{j-1}}$}; \node (gammanew) at (0.8,0,-0.8) {} ; \draw [-latex, bend right] (inew) to (gammanew); \node (Zeta) at (-0.4,2,0.2) {$\tred{\Theta_j}$} ; \node (zeta) at (1.2,0,1.2) {$\tred{\vartheta_j}$} ; \node (knew) at (0,2.2,0) {}; \node (xnew) at (2.2, 2.2, 2.2) {}; \node (incli) at (1.5, 3.2, 2.2) {$\tred{\iota_{j-1}}$}; \draw [-latex, bend right] (xnew) to (knew); \node (gammanewnew) at (1.6,0,-1.6) {} ; \node (inewnew) at (0,0,2.4) {}; \node (varphi) at (2.2,0,4.2) {$\tred{\vartheta_{j}-\frac{{\pi} }{2}}$}; \node (gammanewnewnew) at (2.2,-1.2,-1.6) {$\tred{\frac{\pi}{2}}$} ; \node (x2new) at (3.0,0,3.0) {}; \draw [-latex, bend right] (inewnew) to (x2new); \draw [-latex, bend right] (x2new) to (gammanewnew); \end{tikzpicture} \caption{The reference frames ${\rm G}_{j}$ and the ${\cal P} $--coordinates $\Theta_j$, $\vartheta_j$, $\chi_{j-2}$, $j=1$, $\ldots$, $n$. When $j=2$, take $\chi_0:=\Theta_1$; when $j=1$, disregard ${\mathbf S}_0$, $\bm\nu_0$ and $\chi_{-1}$.}\label{P3} \end{figure} The ${\cal P} $--coordinates have been described in \cite{pinzari18}.
Here, as in the case of $\cK$, we change\footnote{The coordinates named in \cite{pinzari18} $\Theta_0$, $(\Theta_j)_{1\le j\le n-1}$, $\chi_0$, $(\chi_j)_{1\le j\le n-2}$, $\chi_{n-1}$, $\Lambda_j$ here are denoted, respectively, as $\chi_n$, $(\Theta_{n-j+1})$, $\chi_{n-1}$, $(\chi_{n-j-1})$, $\Theta_1$, $\L_{n-j+1}$. An analogous change of notation holds for the conjugated coordinates. } notations a little bit and denote them as \beqa{Peri}{\cal P} =(\bm\Theta, \bm\chi, \bm\L, \bm\vartheta, \bm\k, \bm\ell)\in {\Bbb R} ^n\times {\Bbb R} _+^n\times{\Bbb R} _+^n\times {\Bbb T} ^n\times {\Bbb T} ^n\times {\Bbb T} ^n\end{eqnarray} where $\bm\Lambda$, $\bm\ell$ are as in \equ{Delaunay variables}, while \begin{eqnarray*} \begin{array}{lll} \displaystyle\bm\Theta=(\Theta_1,\ldots,\Theta_{n}),\quad &\bm\vartheta=(\vartheta_1,\ldots,\vartheta_{n})\\\\ \displaystyle\bm\chi=(\chi_1,\ldots,\chi_{n}),&\bm\k=(\k_1,\ldots,\k_{n}) \end{array} \end{eqnarray*} are defined as follows. Consider a phase space where the Kepler Hamiltonians \equ{KeplerHam} take negative values. Let ${\mathbf S}_j$ be as in \equ{partial sums} and ${\mathbf P}_j$ the perihelia of the instantaneous ellipses generated by \equ{KeplerHam}, assuming these are not circles. The coordinates $\bm\L$, $\bm\ell$ are the same as in Delaunay, while, roughly, $(\bm\Theta, \bm\chi, \bm\vartheta, \bm\k)$ in \equ{Peri} are defined as the $(\hat{\bm\Theta}, \hat{\bm\chi}, \hat{\bm\vartheta}, \hat{\bm\k})$ of $\cK$, ``replacing ${\mathbf x}_j$ with ${\mathbf P}_j$'' (see Figures \ref{P1}, \ref{P2}, \ref{P3}). Exact definitions are below. \noindent Define the {\it${\cal P} $-nodes} \beq{good nodes} \widetilde{\bm{\nu} }_j:=\left\{ \begin{array} {llll} \displaystyle {\mathbf k}\times {\mathbf C}\quad &j=n\\ \\ \displaystyle {\mathbf P}_{j+1}\times {\mathbf S}_{j} &j=1,\ldots, n-1 \end{array} \right.\qquad \widetilde{\mathbf n}_j:=\displaystyle{\mathbf S}_{j}\times {\mathbf P}_{j} \qquad j=1,\ldots, n.
\end{equation} Then the ${\cal P} $--coordinates are \beqa{belle*} \begin{array} {llllrrr} \displaystyle \Theta_{j}:=\left\{ \begin{array} {lrrr} \displaystyle {\mathbf S}_{j}\cdot {\mathbf P}_{j} \qquad& 2\le j\le n\\ \\ \displaystyle |{\mathbf C}_{1}| &j=1 \end{array} \right.& \vartheta_{j}:=\left\{ \begin{array} {lrrr} \displaystyle\a_{{\mathbf P}_{j}}(\widetilde{\mathbf n}_{j}, \widetilde{\bm\nu}_{j-1})\qquad& 2\le j\le n\\ \\ \displaystyle \a_{{\mathbf C}_{1}}(\widetilde{\bm \nu}_{1}, \widetilde{\mathbf n}_1)&j=1 \end{array} \right.\\ \\ \displaystyle\chi_{j}:=\left\{ \begin{array} {lrrr}Z:={\mathbf C}\cdot {\mathbf k}\qquad& j=n\\ \\ C:=|{\mathbf C} |& j=n-1\\ \\ |{\mathbf S}_{j+1}| &1\le j\le n-2\ (n\ge 3) \end{array} \right. & {\k}_{j}:=\left\{ \begin{array} {lrrr} \zeta:=\a_{{\mathbf k}}({\mathbf i}, \widetilde{\bm{\nu} }_n) \qquad& j=n\\ \\ \gamma:= \a_{{\mathbf S}_{n}}(\widetilde{\bm{\nu} }_{n}, \widetilde{\mathbf n}_{n})&j=n-1\\ \\ \a_{{\mathbf S}_{j+1}}(\widetilde{\bm{\nu} }_{j+1}, \widetilde{\mathbf n}_{j+1})&1\le j\le n-2\ (n\ge 3) \end{array} \right. \end{array} \end{eqnarray} \noindent To prove that \equ{Peri} are canonical, we consider the map $$\phi_{{\cal D} _{e\ell, aa}}^{{\cal P} }:\quad {\cal D} _{e\ell, aa}=(\mathbf Z, \mathbf G, \bm\L, \bm\zeta, {\mathbf g}, \bm\ell)\to{\cal P}=({\bm\Theta}, {\bm\chi}, \bm\L, {\bm\vartheta}, {\bm\k}, \bm\ell)$$ relating action--angle Delaunay \equ{Delaa} and ${\cal P} $ and its projection $$\hat\phi_{{\cal D} _{e\ell, aa}}^{{\cal P} }:\quad {\cal D} _{e\ell, aa}=(\mathbf Z, \mathbf G, \bm\zeta, {\mathbf g})\to{\cal P}=({\bm\Theta}, {\bm\chi}, {\bm\vartheta}, {\bm\k})$$ which is independent of $\bm\L$, $\bm\ell$ (even though this will not be used). \begin{lemma}\label{projection4} $\hat\phi_{{\cal D} _{e\ell, aa}}^{{\cal P} }$ coincides with the map $\hat\phi_{{\cal D} _{e\ell}}^\cK$ in \equ{projection1}.
\end{lemma} \noindent Combining Lemmas \ref{projection2} and \ref{projection4}, we have \begin{lemma} The map $$\phi^{{\cal P} }_{{\cal D} _{e\ell, aa}}:\quad {\cal D} _{e\ell, aa}=(\mathbf Z, \mathbf G, \bm\L, \bm\zeta, {\mathbf g}, \bm\ell)\to {\cal P}=({\bm\Theta}, {\bm\chi}, \bm\L, {\bm\vartheta}, {\bm\k}, \bm\ell)$$ verifies $$ {\bm\Theta}\cdot d{\bm\vartheta}+{\bm\chi}\cdot d{\bm\k}+{\bm\L}\cdot d{\bm\ell}={\mathbf Z}\cdot d\bm\zeta+{\mathbf G}\cdot d{\mathbf g}+{\bm\L}\cdot d{\bm\ell}\,. $$ \end{lemma} \paragraph{\bf Explicit expression of the ${\cal P} $--map} We now provide the explicit formulae of the map which relates the coordinates \equ{belle*} to the coordinates $(\by_1, \ldots, \by_n, \bx_1, \ldots, \bx_n)$. We shall prove that such a map has the expression \beqa{Pmap}\left\{ \begin{array}{ll}\displaystyle \bx_j=\bx^n_j:={\cal R}_j^n\tilde \bx_j\\\\ \displaystyle \by_j=\by^n_j:={\cal R}_j^n\tilde \by_j \end{array} \right.\end{eqnarray} where \beqa{RrPmap} \left\{\begin{array}{lll} \displaystyle{\cal R}^n_j:= {\cal T}_n {\cal S}_n\cdots {\cal T}_{j+1} {\cal S}_{j+1} {\cal T}_{j} {\cal S}_{j}\\\\ \displaystyle\tilde \bx_j:=a_j\Big((\cos\xi_j-e_j) {\mathbf k}+\sqrt{1-e_j^2}\sin\xi_j \tilde{\mathbf Q}_j\Big) \\\\ \displaystyle\tilde \by_j:=\frac{\mu_j n_j a_j}{1-e_j\cos\xi_j}\Big(-\sin\xi_j {\mathbf k}+\sqrt{1-e_j^2}\cos\xi_j \tilde{\mathbf Q}_j\Big)\end{array} \right. \end{eqnarray} where $ {\cal T} _j$, $ {\cal S} _j$ have the expressions \beqa{CCPmap} && {\cal T}_{j}:=\left\{\begin{array}{ll}{\cal R} _3(\zeta){\cal R} _1(\iota_n)\quad &j=n\\\\ {\cal R} _3( \vartheta_{j+1}){\cal R} _1(\iota_{j})&1\le j\le n-1 \end{array} \right. \qquad {\cal S}_{j}:=\left\{ \begin{array}{lll}\displaystyle{\cal R} _3( \k_{j-1}){\cal R} _1({\rm i}_j),\quad &2\le j\le n\\\\ \displaystyle{\cal R} _3( \vartheta_1){\cal R} _1(\frac{{\pi} }{2}),\quad &j=1 \end{array} \right.
\end{eqnarray} with \beqa{good incli*Pmap}\displaystyle&&\left\{\begin{array}{lll}\displaystyle\cos\iota_{n}=\frac{Z}{ \chi_{n-1}}\quad &\\\\ \displaystyle\cos\iota_{j}=\frac{ \Theta_{j+1}}{ \chi_{j-1}}& 2\le j\le n-1\ (n\ge 3)\\\\ \displaystyle\cos\iota_{1}=\frac{ \Theta_{2}}{ \Theta_{1}}& \end{array} \right.\quad \left\{\begin{array}{lll}\displaystyle \cos{\rm i}_j:=\frac{ \Theta_{j}}{ \chi_{j-1}},\quad2\le j\le n\\\\ \displaystyle {\rm i}_1=\frac{{\pi} }{2}\end{array} \right.\end{eqnarray} and $$\tilde{\mathbf Q}_j=\frac{\tilde{\mathbf C}_j}{C_j}\times\mathbf k$$ with \begin{eqnarray*} &&C_j=|\mathbf C_j|=\left\{\begin{array}{ll}\displaystyle \sqrt{\chi_{j-1}^2+\chi_{j-2}^2-2\Theta_{j}^2+2\sqrt{\chi_{j-1}^2-\Theta_{j}^2}\sqrt{\chi_{j-2}^2-\Theta_{j}^2}\cos\vartheta_j}\quad &j=2\,,\ldots\,,n\\\\ \displaystyle\Theta_1 &j=1 \end{array} \right.\nonumber\\ && \tilde {\mathbf C}_j:=\left\{\begin{array}{ll}\displaystyle {\cal S}^{-1}_j \Big( \chi_{j-1}{\mathbf k}- \chi_{j-2} {\cal S}_{j} {\cal T}_{j-1}{\mathbf k}\Big)=\tilde \bx_j\times \tilde \by_j\quad &j=2, \ldots, n\\\\ \displaystyle \Theta_1 {\cal S}^{-1}_1{\mathbf k} &j=1 \end{array} \right.\nonumber\\ &&e_j=\sqrt{1-\frac{C_j^2}{\L_j^2}} \end{eqnarray*} $a_j$ as in \equ{Delaunay variables}, $n_j=\sqrt{\frac{M_j}{a_j^3}}$ the mean motion, and $\xi_j$ the eccentric anomaly, solving $$\xi_j-e_j\sin \xi_j=\ell_j\,.$$ These formulae are easily obtained using the well--known relations $$\bx_j=a_j\Big((\cos\xi_j-e_j) {\mathbf P}_j+\sqrt{1-e_j^2}\sin\xi_j {\mathbf Q}_j\Big)$$ $$\by_j:=\frac{\mu_j n_j a_j}{1-e_j\cos\xi_j}\Big(-\sin\xi_j {\mathbf P}_j+\sqrt{1-e_j^2}\cos\xi_j {\mathbf Q}_j\Big)$$ with $\mathbf P_j$ the $j^{\rm th}$ perihelion and ${\mathbf Q}_j=\frac{{\mathbf C}_j}{C_j}\times\mathbf P_j$, and the relations which relate ${\mathbf C}_j$, ${\mathbf P}_j$, ${\mathbf Q}_j$ to ${\cal P} $, which, similarly to what was done for $\cK$, are: $${\mathbf C}_j={\cal R}
^n_j\tilde {\mathbf C}_j\,,\quad {\mathbf P}_j={\cal R} ^n_j{\mathbf k}\,,\quad {\mathbf Q}_j={\cal R} ^n_j\tilde {\mathbf Q}_j\,.$$ \subsection[The behavior of $\cK$ and ${\cal P} $ under reflections]{The behavior of $\cK$ and ${\cal P} $ under reflections}\label{P-map vs rotations and reflections} The maps $\cK$ and ${\cal P} $ have a nice behavior under reflections, which turns out to be useful if they are applied to Hamiltonians which are reflection--invariant. {\smallskip\noindent} We denote as \beqa{x*} \bx^*=(x_{1}, -x_{2}, x_{3})\end{eqnarray} the vector obtained from $\bx=(x_{1}, x_{2}, x_{3})$ by reflecting its second coordinate, and as $${\cal R}_2^-\Big((\by_1,\ldots, \by_n), (\bx_1,\ldots, \bx_n)\Big):=\Big((\by^*_1,\ldots, \by^*_n), (\bx^*_1,\ldots, \bx^*_n)\Big)$$ the simultaneous reflection of the second coordinate of all the $\by_j$ and all the $\bx_j$ in the system of Cartesian coordinates $(\by, \bx)=\Big((\by_1,\ldots, \by_n), (\bx_1,\ldots, \bx_n)\Big)$. We aim to show the following. \begin{lemma}\label{reflections} Using $\cK$, the reflection ${\cal R}_2^-$ is obtained by changing $$\Big((\hat\Theta_2\,,\ldots\hat\Theta_n\,,Z)\,,\ (\hat\vartheta_2\,,\ldots\,, \hat\vartheta_n\,,\zeta)\Big)\to \Big((-\hat\Theta_2\,,\ldots\, -\hat\Theta_n\,,-Z)\,,\ (-\hat\vartheta_2\,,\ldots\,,-\hat\vartheta_n\,,-\zeta)\Big)$$ Similarly, using ${\cal P} $, it is obtained by changing $$\Big((\Theta_2\,,\ldots\Theta_n\,,Z)\,,\ (\vartheta_2\,,\ldots\,, \vartheta_n\,,\zeta)\Big)\to \Big((-\Theta_2\,,\ldots\, -\Theta_n\,,-Z)\,,\ (-\vartheta_2\,,\ldots\,,-\vartheta_n\,,-\zeta)\Big)$$ \end{lemma} \par\medskip\noindent{\bf Proof\ } We prove the statement for $\cK$.
We write \equ{x*} as $$\bx^*={\cal I} _2^-\bx\qquad {\cal I} _2^-=\left( \begin{array}{rrr} 1&0&0\\ 0&-1&0\\ 0&0&1 \end{array} \right)$$ Now use the formulae in \equ{Kmap}--\equ{good incli*} and that $${\cal I} _2^-{\cal R} _3(\alpha)={\cal R} _3(-\alpha){\cal I} _2^-\,,\qquad {\cal I} _2^-{\cal R} _1(\beta)={\cal R} _1(\pi-\beta){\cal I} _2^-$$ and finally that the change $$(\hat\Theta_2\,,\ldots\,\hat\Theta_n\,,Z)\to (-\hat\Theta_2\,,\ldots\, -\hat\Theta_n\,,-Z)$$ acts on the functions in \equ{good incli*} as $$(\iota_1\,,\ldots\,, \iota_n\,, {{{\rm i}}}_2\,,\ldots\, {\rm i}_n)\to (\pi-\iota_1\,,\ldots\,, \pi-\iota_n\,, \pi-{{\rm i}}_2\,,\ldots\, \pi-{\rm i}_n)\,.$$ The proof for ${\cal P} $ is similar. $\quad \square$ \vskip.1in \noindent Lemma \ref{reflections} reflects on the Hamiltonian \equ{Helio}, as well as on all Hamiltonians which are ${\cal R} _2^-$--invariant, as follows. \begin{lemma} Let $\cH(\by, \bx)$ be ${\cal R} _2^-$--invariant. Using the coordinates $\cK$, the manifolds $$\hat\Theta_j=0\,,\quad \hat\vartheta_j\in \{0\,,{\pi} \}\quad j=2\,,\ldots\,,n\quad Z=0\,,\quad \zeta\in \{0\,,{\pi} \}$$ are equilibria. Similarly, using the coordinates ${\cal P} $, the manifolds $$\Theta_j=0\,,\quad \vartheta_j\in \{0\,,{\pi} \}\quad j=2\,,\ldots\,,n\quad Z=0\,,\quad \zeta\in \{0\,,{\pi} \}$$ are equilibria. \end{lemma} \section{Applications} \subsection{Arnold's Theorem}\label{Arnold's Theorem} Here we retrace the main ideas of the proof of Theorem \ref{Arnold Theorem} given in \cite{chierchiaPi11b}. Such a proof relies on the coordinates \equ{Deprit coordinates}. The first step is to switch from the coordinates \equ{Deprit coordinates} to a new set of coordinates which are well fitted to the close--to--be--integrable form of the Hamiltonian \equ{HelioNEW}.
Then we modify the coordinates \equ{Deprit coordinates} to the following form \beqa{Depaa}{\cal D}_{ep, aa}=(\bm\L, \mathbf G,\bm\Psi, \bm\ell,\bm{\gamma} ,\bm\psi)\end{eqnarray} which we call {\it action--angle Deprit coordinates}, where $\bm\Psi=(\Psi_1, \ldots, \Psi_n)$, $\bm\psi=(\psi_1, \ldots, \psi_n)$ are left unvaried, while $\bm\L=(\L_1, \ldots, \L_n)$, $\mathbf G=({\Gamma} _1, \ldots, {\Gamma} _n)$, $\bm\ell=(\ell_1, \ldots, \ell_n)$, $\bm\gamma=(\gamma_1, \ldots, \gamma_n)$ are obtained replacing the quadruplets $(R_i, G_i, r_i, \varphi_i)$ with the quadruplets $(\L_i, {\Gamma} _i, \ell_i, {\gamma} _i)$ (with $G_i={\Gamma} _i$), through the symplectic maps (depending on $\mu_i$, $M_i$) $$(R_i, G_i, r_i, \varphi_i)\to (\L_i, {\Gamma} _i, \ell_i, {\gamma} _i)$$ which integrate the Kepler Hamiltonians \equ{KeplerHam}. This step is necessary to carry the integrable part in \equ{HelioNEW} to the form $$h_{\textrm{\scshape k}}(\bm\L)=\sum_{1\le i\le n}\left(-\frac{\mu_i^3M_i^2}{2\L_i^2}\right)\,.$$ Recall that the new angles $\gamma_i$ provide the direction of the perihelion of the instantaneous ellipse generated by \equ{KeplerHam}; however, they have a different meaning compared to the analogous angles $g_i$ appearing in the set of Delaunay coordinates \equ{Delaunay variables}, as, by construction, the ${\gamma} _i$'s are measured {\it relatively to the nodes $\bm{\nu} _i$} in \equ{nodes} (because the $\varphi_i$ were), while the angles $g_i$ in the Delaunay set are measured relatively to $\bar{\bm n}_i$ in \equ{barni}.\\ The $3n-2$ degrees of freedom Hamiltonian which is obtained is still singular. Singularities appear when the coordinates are not defined and in correspondence with collisions among the planets. The latter case will later be excluded through a careful choice of the reference frame.
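The analytic ingredient of this integration step is the inversion of Kepler's equation $\xi-e\sin\xi=\ell$, relating the mean anomaly $\ell$ to the eccentric anomaly $\xi$. A minimal numerical sketch (an illustrative Newton iteration, not the symplectic map itself; the function names are ours):

```python
import numpy as np

def eccentric_anomaly(ell, e, tol=1e-14, itmax=60):
    """Invert Kepler's equation  xi - e*sin(xi) = ell  by Newton iteration."""
    xi = ell if e < 0.8 else np.pi   # standard starting guess
    for _ in range(itmax):
        d = (xi - e * np.sin(xi) - ell) / (1.0 - e * np.cos(xi))
        xi -= d
        if abs(d) < tol:
            break
    return xi

def orbital_position(a, e, ell):
    """Position in the orbital frame: x = a((cos xi - e), sqrt(1-e^2) sin xi)."""
    xi = eccentric_anomaly(ell, e)
    return np.array([a * (np.cos(xi) - e), a * np.sqrt(1.0 - e**2) * np.sin(xi)])

xi = eccentric_anomaly(1.3, 0.4)
assert abs(xi - 0.4 * np.sin(xi) - 1.3) < 1e-12   # Kepler's equation is satisfied
```

The quadratic convergence of the Newton iteration makes a handful of steps sufficient for any $0\le e<1$ away from the parabolic limit.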
The singularities of the coordinates appear when some of the convex angles ({\it Deprit inclinations})\\ \beqa{Dincli} {i^*_j}:=({\mathbf S}_{j}, {\mathbf S}_{j+1})\quad j=1\,,\ldots, n\,,\quad {\mathbf S}_{n+1}:={\mathbf k}\end{eqnarray} take the values $0$ or $\pi$, because in such situations the angle $\psi_j$ is not defined (see Figures \ref{Deprit1}, \ref{Deprit2}, \ref{Deprit3}), and when the instantaneous orbit of some of the Kepler Hamiltonians \equ{KeplerHam} is a circle, because in that case the corresponding $\gamma_i$ is not defined. Such singularities are important from the physical point of view, because the eccentricities and the inclinations of the planets of the solar system are very small, hence the system is in a configuration quite close to the singularity. To deal with this situation, a regularization similar to the Poincar\'e regularization \equ{Poinc reg} of Delaunay coordinates has been introduced in \cite{chierchiaPi11b}. Note that, in principle, there are $2^n$ singular configurations (corresponding to any choice of $i^*_j\in\{0\,,\pi\}$, besides $e_j=0$ for some $j$). Here we discuss the case $i^*_j=0$ for some $j$. Another regularization will be discussed in Section \ref{Coexistence of stable and whiskered tori}.
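The degeneration behind these singularities is elementary: the node from which $\psi_j$ is measured is directed as ${\mathbf S}_j\times{\mathbf S}_{j+1}$, whose length is proportional to $\sin i^*_j$ and hence vanishes as $i^*_j\to 0$ or $\pi$, so that the reference direction for $\psi_j$ is lost. A small numerical illustration (the vectors are chosen for illustration only):

```python
import numpy as np

S_next = np.array([0.0, 0.0, 1.0])            # plays the role of S_{j+1} (unit vector)
for istar in (1.0, 1e-3, 1e-8):               # sample Deprit inclinations i*_j
    # unit vector forming the angle i*_j with S_{j+1}
    S_j = np.array([np.sin(istar), 0.0, np.cos(istar)])
    node = np.cross(S_j, S_next)              # direction from which psi_j is measured
    # |S_j x S_{j+1}| = sin(i*_j): the reference direction degenerates as i*_j -> 0
    assert abs(np.linalg.norm(node) - np.sin(istar)) < 1e-12
```

This is the same mechanism by which the perihelion direction, and hence $\gamma_i$, is lost on circular orbits.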
\paragraph{{\rm\scshape rps} coordinates and Birkhoff normal form} The {\scshape rps} variables are given by $(\bm\L,\bm\l,{\mathbf z}):=(\bm\L,\bm\l,\bm\eta,\bm\xi,{\mathbf p},{\mathbf q})$ with (again) the $\L$'s as in \equ{Delaunay variables} and \beqa{reg var} \ \ \l_i=\ell_i+{\gamma} _i+\psi_{i-1}^n\ \ &&\arr{\eta_i=\sqrt{2(\L_i-{\Gamma} _i)}\ \cos\big({\gamma} _i+\psi_{i-1}^n\big)\\ \xi_i=-\sqrt{2(\L_i-{\Gamma} _i)}\ \sin\big({\gamma} _i+\psi_{i-1}^n\big) }\nonumber\\ \\ &&\arr{\displaystyle p_i=\sqrt{2({\Gamma} _{i+1}+\Psi_{i-1}-\Psi_i)}\ \cos\psi_i^n\\ \displaystyle q_i=-\sqrt{2({\Gamma} _{i+1}+\Psi_{i-1}-\Psi_i)}\ \sin\psi_i^n \nonumber} \end{eqnarray} where \beq{conv}\Psi_0:={\Gamma} _1\ ,\quad {\Gamma} _{n+1}:=0\ ,\quad \psi_0:=0\ ,\quad \psi^n_i:=\sum_{i\le j\le n} \psi_j\ .\end{equation} Let $\phi_{{\cal C} }^{\textrm{\scshape rps}}$ denote the map \beq{P map} \phi_{{\cal C} }^{\textrm{\scshape rps}}:\quad (\by,\bx)\to (\bm\L,\bm\l,{\mathbf z})\ . \end{equation} {\begin{remark}\rm The coordinates \equ{reg var} have been constructed as follows. First of all, we look for a linear and canonical transformation which replaces $\Psi_i$, ${\Gamma} _i$, $\L_i$ with $$I_i:=\L_i-{\Gamma} _i\,,\quad J_i:={\Gamma} _{i+1}+\Psi_{i-1}-\Psi_i\,,\quad \L_i\,,\qquad i=1\,,\ldots\,, n\,,$$ with the conventions in \equ{conv}. To find the coordinates $\alpha_i$, $\beta_i$, $\l_i$ respectively conjugated to $I_i$, $J_i$, $\L_i$ we impose the conservation of the standard 1--form: \begin{eqnarray*} \sum_{i=1}^n(I_i d\alpha_i+J_i d\beta_i+\L_i d\l_i)&=&\sum_{i=1}^n((\L_i-{\Gamma} _i)d\alpha_i+({\Gamma} _{i+1}+\Psi_{i-1}-\Psi_i) d\beta_i+\L_i d\l_i)\nonumber\\ &=&\sum_{i=1}^n\L_i d(\alpha_i+\l_i)+\sum_{i=1}^n{\Gamma} _i d(-\alpha_i+\beta_{i-1}) +\sum_{i=1}^n\Psi_i d(-\beta_i+\beta_{i+1}) \end{eqnarray*} with $\beta_0:=0$, $\beta_{n+1}:=0$.
This provides the following relations $$\left\{ \begin{array}{lll} \displaystyle \alpha_i+\l_i=\ell_i\\\\ \displaystyle -\alpha_i+\beta_{i-1}={\gamma} _i\\\\ \displaystyle -\beta_i+\beta_{i+1}=\psi_i \end{array} \right.$$ These equations may be solved recursively, and give \beqa{new angles}\left\{ \begin{array}{lll} \displaystyle \l_i=\ell_i+{\gamma} _i+\psi_{i-1}^n\\\\ \displaystyle \alpha_i=-({\gamma} _i+\psi_{i-1}^n)\\\\ \displaystyle \beta_i=-\psi_{i}^n \end{array} \right.\end{eqnarray} Note that $\l_i$, $\alpha_i$, $\beta_i$ are in fact angles, as the linear combinations on the right--hand sides of \equ{new angles} have integer coefficients. As a second step, one defines \beqa{polar}\left\{ \begin{array}{lll} \displaystyle \eta_i=\sqrt{2I_i}\cos\alpha_i\\\\ \displaystyle \xi_i=\sqrt{2I_i}\sin\alpha_i \end{array} \right.\qquad \left\{ \begin{array}{lll} \displaystyle p_i=\sqrt{2J_i}\cos\beta_i\\\\ \displaystyle q_i=\sqrt{2J_i}\sin\beta_i \end{array} \right.\end{eqnarray} and obtains \equ{reg var}. The transformations \equ{polar} are well known to be canonical. \end{remark}} \noindent The main point is that \begin{lemma}[\cite{chierchiaPi11b}] The map $\phi_{\cal C} ^{\textrm{\scshape rps}}$ can be extended to a symplectic diffeomorphism on a set ${\cal P} _{ \textrm{\sc rps}}^{6n}$ where the eccentricities $e_j$ and the angles $i_j^*$ in \equ{Dincli} are allowed to be zero. In particular, \begin{itemize} \item[\tiny \textbullet] $e_j=0$ corresponds to the {\scshape rps} coordinates $\eta_j=0=\xi_j$; \item[\tiny \textbullet] $i_j^*=0$ corresponds to the {\scshape rps} coordinates $p_j=0=q_j$.
\end{itemize} \end{lemma} \noindent From the definitions \equ{reg var}--\equ{conv} it follows that the variables \beq{pn qn def}\arr{p_n=\sqrt{2(\Psi_{n-1}-\Psi_n)}\cos{\psi_n}=\sqrt{2(C-Z)}\cos{\zeta}\\ \\ q_n=-\sqrt{2(\Psi_{n-1}-\Psi_n)}\sin{\psi_n}=-\sqrt{2(C-Z)}\sin{\zeta}} \end{equation} are integrals (as they are defined only in terms of the integral ${\mathbf C}$), hence cyclic for the Hamiltonian \equ{HelioNEW}. Therefore, if $\cH_{\textrm{\scshape rps}}$ denotes the planetary Hamiltonian expressed in {\scshape rps} variables, we have that \beq{HRPS}\cH_{\textrm{\scshape rps}}(\bm\L,\bm\l,\bar{\mathbf z}):= \cH\circ \phi^{\textrm{\scshape rps}}_{\cal C} = h_{\textrm{\scshape k}}(\bm\L)+{\mu} f_{\textrm{\scshape rps}}(\bm\L,\bm\l,\bar{\mathbf z})\end{equation} where $\cH$ is as in \equ{HelioNEW} and $\phi^{\textrm{\scshape rps}}_{\cal C} $ is as in \equ{P map}. The Hamiltonian $\cH_{\textrm{\scshape rps}}$ has $3n-1$ degrees of freedom, as it depends on $\bm\L,\bm\l,\bar{\mathbf z}$, where $$\bar {\mathbf z}=(\bm\eta, \bar{\mathbf p}, \bm\xi, \bar{\mathbf q})\quad {\rm with}\quad \bar{\mathbf p}=(p_1, \ldots, p_{n-1})\,,\quad \bar{\mathbf q}=(q_1, \ldots, q_{n-1})\,.$$ We denote as $a_i=\frac{1}{M_i}\left(\frac{\L_i}{{\mu} _i}\right)^2$ the semi--major axis associated to $\L_i$. The next result solves the problem of the construction of the Birkhoff normal form for the Hamiltonian \equ{HelioNEW}, mentioned in Section \ref{sec: AT intro}.
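A direct transcription of the definitions \equ{reg var}--\equ{conv} may help in keeping track of the index conventions. The following sketch (our own indexing: the Python index $k$ stands for the index $i=k+1$ of the text; the sample values are arbitrary) also checks the elementary relations $\eta_i^2+\xi_i^2=2(\L_i-{\Gamma} _i)$ and $p_i^2+q_i^2=2({\Gamma} _{i+1}+\Psi_{i-1}-\Psi_i)$:

```python
import numpy as np

def rps_variables(Lam, Gam, Psi, ell, gam, psi):
    """Transcription of (reg var)-(conv); index k stands for the index i = k+1."""
    sfx = np.cumsum(psi[::-1])[::-1]           # sfx[k] = psi^n_{k+1} = sum_{j >= k+1} psi_j
    # psi^n_{i-1} for i = 1..n; psi_0 := 0, hence psi^n_0 = psi^n_1
    psin_prev = np.concatenate([[sfx[0]], sfx[:-1]])
    ang = gam + psin_prev                      # gamma_i + psi^n_{i-1}
    lam = ell + ang
    I = Lam - Gam                              # I_i = Lambda_i - Gamma_i
    eta, xi = np.sqrt(2 * I) * np.cos(ang), -np.sqrt(2 * I) * np.sin(ang)
    Gam_next = np.append(Gam[1:], 0.0)         # Gamma_{n+1} := 0
    Psi_prev = np.concatenate([[Gam[0]], Psi[:-1]])   # Psi_0 := Gamma_1
    J = Gam_next + Psi_prev - Psi              # J_i = Gamma_{i+1} + Psi_{i-1} - Psi_i
    p, q = np.sqrt(2 * J) * np.cos(sfx), -np.sqrt(2 * J) * np.sin(sfx)
    return lam, eta, xi, p, q

# sample values, chosen so that Lam_i >= Gam_i and J_i >= 0
Lam = np.array([3.0, 2.0, 1.5]); Gam = np.array([2.5, 1.8, 1.2])
Psi = np.array([4.0, 5.0, 4.8])
ell = np.array([0.1, 0.2, 0.3]); gam = np.array([0.4, 0.5, 0.6]); psi = np.array([0.7, 0.8, 0.9])

lam, eta, xi, p, q = rps_variables(Lam, Gam, Psi, ell, gam, psi)
assert np.allclose(eta**2 + xi**2, 2 * (Lam - Gam))
```

In particular, setting ${\Gamma} _j=\L_j$ (zero eccentricity) gives $\eta_j=\xi_j=0$, in accordance with the lemma above.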
\begin{theorem}[\cite{chierchiaPi11b, chierchiaPi11c}]\label{planetary normal form} For any $s\in \natural$ there exists an open set ${\cal A} \subset \{a_1<\cdots<a_n\}$, a set ${\cal M} ^ {6n-2}_\varepsilon\subseteq{\cal A} \times{\Bbb T} ^n\times{{\Bbb R} ^{4n}}$ containing the strip ${\cal M} ^ {6n-2}_0={\cal A} \times{\Bbb T} ^n\times \{0\}_{{\Bbb R} ^{4n}}$, a positive number $\varepsilon$ and a symplectic map (``Birkhoff transformation'') \beq{birkhoff transf}\Phi_{\textrm{\scshape b}}:\quad (\bm\L,{\mathbf l},\bar{\mathbf{w}})\in{\cal M} ^ {6n-2}_\varepsilon \to (\bm\L,\bm\l,\bar{\mathbf z})\in\Phi_{\textrm{\scshape b}}({\cal M} ^ {6n-2}_\varepsilon ) \end{equation} which carries the Hamiltonian \equ{HRPS} into \beq{birkhoff planetary} \cH_{\textrm{\scshape b}}(\bm\L,{\mathbf l},\bar{\mathbf{w}}):=\cH_{\textrm{\scshape rps}}\circ\Phi_{\textrm{\scshape b}}=h_{\textrm{\scshape k}} (\bm\L)+{\mu} f_{\textrm{\scshape b}}(\bm\L,\mathbf l, \bar{\mathbf w})\end{equation} where the average $f_{\textrm{\scshape b}}^{\rm av}(\bm\L,\bar{\mathbf w}):=\int_{{\Bbb T} ^n}f_{\textrm{\scshape b}}\,d{\mathbf l}$ is in BNF of order $s$: \beq{fb} f_{\textrm{\scshape b}}^{\rm av}(\bm\L,\bar{\mathbf w})=C_0+\O\cdot \mathbf r+{\rm P}_s(\mathbf r)+{\rm O}(|\bar{\mathbf{w}}|^{2s+1})\quad \bar{\mathbf{w}}:=(\mathbf u,\mathbf v)\quad r_i:=\frac{u_i^2+v_i^2}{2}\ , \end{equation} ${\rm P}_s$ being a polynomial in $\mathbf r$ of degree $s$, parameterized by $\bm\L$. Furthermore, the normal form \equ{birkhoff planetary}--\equ{fb} is non--degenerate, in the sense that, if $s\ge 4$, the $(2n-1)\times (2n-1)$ matrix $\tau(\bm\L)$ of the coefficients of the degree--$2$ part \beqa{torsion}\sum_{i, j=1}^{2n-1}\tau(\bm\L)_{ij}r_i r_j\end{eqnarray} of ${\rm P}_s(\mathbf r)$ is non--singular, for all $\bm\L\in {\cal A} $.
\end{theorem} \vskip.1in \noindent Denote by $B_\e=B_\e^{2n_2}=\{y\in{\Bbb R} ^{2n_2}: |y|<\e\}$ the $2n_2$--ball of radius $\e$ and let \beq{defcP} {\cal P} _\e:=V\times {\Bbb T} ^{n_1}\times B_\e\,.\end{equation} The second ingredient is a KAM theorem for properly--degenerate Hamiltonian systems. This has been stated and proved (with a proof of about 100 pages) by Arnold in \cite{arnold63}, who named it the {\it Fundamental Theorem}. Here we present a refined version that appeared in \cite{chierchiaPi10}. \begin{theorem}[Fundamental Theorem, V.I. Arnold, 1963] \label{FT} Let \beq{pndham} H({\mathbf I}, \bm{\varphi} , {\mathbf p}, {\mathbf q}):=H_0(\mathbf I)+{\mu} P({\mathbf I}, \bm{\varphi} , {\mathbf p}, {\mathbf q})\ ,\end{equation} be real--analytic on ${\cal P} _\e$ and assume \begin{itemize}{\it \item[{\bf (A1)}] $\mathbf I\in V\to \partial_{\mathbf I} H_0$ is a diffeomorphism; \item[{\bf (A2)}] $\displaystyle P_{\!\rm av} (\mathbf p,\mathbf q;\mathbf I)=P_0(\mathbf I) +\sum_{i=1}^{n_2}\O_i(\mathbf I)r_i+\frac{1}{2}\sum_{i,j=1}^{n_2} \b_{ij}(\mathbf I)r_i r_j+o_4$ where $\displaystyle r_i:=\frac{p_i^2+q_i^2}{2}$ and $o_4/|(\mathbf p,\mathbf q)|^4\to 0$ as $(\mathbf p,\mathbf q)\to 0$; \item[{\bf (A3)}] The matrix $\beta(\mathbf I)=(\beta_{ij}(\mathbf I))$ is non--singular for all $\mathbf I\in V$. }\end{itemize} Then, there exist positive numbers $\e_*$, ${\mu} _*$, $C_*$ and $b$ such that, for \beq{A5'} 0<\e<\e_*\ , \quad 0<{\mu} <{\mu} _*\ ,\quad {\mu} <\su{C_* (\log \e^{-1})^{2b}}\ , \end{equation} one can find a set ${\cal T} \subset {\cal P} _\e$ formed by the union of $H$--invariant $(n_1+n_2)$--dimensional tori, on which the $H$--motion is analytically conjugated to linear Diophantine quasi--periodic motions. The set ${\cal T} $ is of positive Liouville--Lebesgue measure and satisfies \beq{impmeasest} \meas {\cal P} _\e>\meas {\cal T} > \Big(1- C_* \Big(\sqrt{{\mu} }\ ( \log \epsilon^{-1})^b+ \sqrt{\epsilon}\Big) \Big) \meas {\cal P} _\e\ .
\end{equation} \end{theorem} \noindent An application of Theorem \ref{FT} with $n_1=n$, $n_2=2n-1$ to the system in \equ{birkhoff planetary} with $s=4$ now leads to the proof of Theorem \ref{Arnold Theorem}. \vskip.1in \noindent\subsection{Global Kolmogorov tori}\label{Global Kolmogorov tori} The quasi--periodic motions of Theorem \ref{Arnold Theorem} provide almost circular and almost planar orbits. This is because the normal form of Theorem \ref{planetary normal form} is constructed around the strip ${\cal M} ^ {6n-2}_{0}$, and the origin corresponds to zero eccentricities and zero mutual inclinations. The question whether similar motions may exist outside such a regime is therefore natural and important from the physical point of view. To this end, one has to understand that the Birkhoff normal form (assumption {\bf (A2)} of Theorem \ref{FT}) is used in the proof only to construct a {reasonable} integrable approximation for the whole Hamiltonian, in fact given by $$H_{\rm int}(\mathbf I, \mathbf r)=H_0(\mathbf I)+{\mu} \left( P_0(\mathbf I) +\sum_{i=1}^{n_2}\O_i(\mathbf I)r_i+\frac{1}{2}\sum_{i,j=1}^{n_2} \b_{ij}(\mathbf I)r_i r_j \right)$$ Therefore, a possible construction of full--dimensional quasi--periodic motions outside the small eccentricities and small inclinations regime should start from a different integrable approximation. In this section we describe an approach in this direction, where we look at the first terms of the series expansion of the $\bm\ell$--averaged $f$ with respect to a small parameter. The small parameter will be taken to be the inverse distance between the planets (the idea goes back to S. Harrington \cite{harrington69}). In addition, the use of the coordinates ${\cal P} $ will allow us to construct $(3n-2)$--dimensional quasi--periodic motions without singularities when the inclinations become zero.
Recall that the tori of Theorem \ref{Arnold Theorem} may be reduced to $(3n-2)$ frequencies (as shown in \cite{chierchiaPi11b}) in an almost co--planar, co--centric configuration, but not away from it, due to singularities. \noindent Here we discuss the following result. \begin{theorem}[Global Kolmogorov tori in the planetary problem, \cite{pinzari18}]\label{Global Kolmogorov tori in the planetary problem} Fix numbers $0<\underline e_i<\ovl e_i<0.6627\ldots$, $i=1,\cdots,n$. There exists a number ${\rm N}$ depending only on $n$ and a number $\a_0$ depending on $\underline e_i$, $\ovl e_i${, and} $n$ such that, if $\a<\a_0$, ${\mu} \le \a^{\rm N}$, in a domain of planetary motions where the semi-major axes $a_n<a_{n-1}<\cdots<a_1$ are spaced as follows \beqa{asymptotics} a_i^-\le a_i\le a_i^+\qquad {\rm with}\qquad a_{i}^\pm:= \frac{a_n^\pm}{\a^{\frac{1}{3}(2^{n+1}-2^{i+1}+i-n)}} \end{eqnarray} there exists a positive measure set $\cK_{{\mu} , \a}$, the density of which in phase space can be bounded below as $${\rm dens}(\cK_{{\mu} , \a})\ge 1-(\log\a^{-1})^{\rm p}\sqrt\a,$$ consisting of quasi-periodic motions with $3n-2$ frequencies where the planets' eccentricities $e_i$ verify $$\underline e_i\le e_i\le \ovl e_i.$$ \end{theorem} \noindent Let us consider a general set of coordinates ${\cal C}=(\bm\L, \bm\ell, \mathbf u, \mathbf v)$ which puts the Kepler Hamiltonians \equ{KeplerHam} into integrated form and hence carries the Hamiltonian \equ{HelioNEW} to $$\cH_{\cal C}(\bm\L, \bm\ell, \mathbf u, \mathbf v):=\cH\circ{\cal C}=-\sum_{j=1}^n\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+{\mu} f_{{\cal C}}(\bm\L, \bm\ell, \mathbf u, \mathbf v),$$ where $$f_{{\cal C}}(\bm\L, \bm\ell, \mathbf u, \mathbf v):=\sum_{1\le i<j\le n}\bigg(\frac{\by_i\cdot \by_j}{m_0}-\frac{m_im_j}{|\bx_i-\bx_j|}\bigg)\circ{\cal C} \,.$$ {\smallskip\noindent} We denote \beq{average K}\ovl{ f_{\cal C}}(\bm\L, \mathbf u, \mathbf v):=\frac{1}{(2{\pi} )^n}\int_{{\Bbb T} ^n}f_{{\cal C}}(\bm\L, \bm\ell, \mathbf u, \mathbf
v)d\bm\ell, \end{equation} so that \begin{eqnarray*} \begin{array} {lll} \displaystyle f_{{\cal C}}=\sum_{1\le i<j\le n}f_{{\cal C}}^{ij},\qquad\qquad\qquad &\displaystyle \ovl{ f_{\cal C}}=\sum_{1\le i<j\le n}\ovl {f_{{\cal C}}^{ij}}\\ \\ \displaystyle f_{{\cal C}}^{ij}:=\left(\frac{\by_i\cdot \by_j}{m_0}-\frac{m_im_j}{|\bx_i-\bx_j|}\right)\circ{\cal C} ,&\displaystyle \ovl {f_{{\cal C}}^{ij}}:=\frac{1}{(2{\pi} )^n}\int_{{\Bbb T} ^n}{ f_{\cal C}^{ij}}d\ell_1\cdots d\ell_n. \end{array} \end{eqnarray*} {\noindent} For any such ${\cal C} $ one always has, as a consequence of the equations of motion of \equ{KeplerHam}, the following identities \beqa{yi} &&\frac{1}{2{\pi} }\int_{\Bbb T} \frac{1}{|\bx_j|} d\ell_j=\frac{1}{a_j}\nonumber\\ &&\frac{1}{2{\pi} }\int_{\Bbb T} \by_j d\ell_j=\frac{\mu_j}{2{\pi} }\int_{\Bbb T} \dot\bx_j d\ell_j = 0 \nonumber\\ &&\frac{1}{2{\pi} }\int_{\Bbb T} \frac{\bx_j}{|\bx_j|^3}d\ell_j=\frac{1}{2{\pi} \mu_jM_j}\int_{\Bbb T} \dot\by_j d\ell_j =0 \end{eqnarray} with $a_j$ the semi--major axes. Consider now the average $\ovl{ f_{\cal C}}(\bm\L, \mathbf u, \mathbf v)$ in \equ{average K} with respect to $\bm \ell$. Due to the fact that $\by_j$ has zero average, one has that only the Newtonian part contributes to $\ovl{ f_{\cal C}}(\bm\L, \mathbf u, \mathbf v)$: $$\ovl {f_{\cal C}}=-\sum_{1\le i<j\le n}\frac{m_im_j}{(2{\pi} )^2}\int_{{\Bbb T} ^2}\frac{d\ell_id\ell_j}{|\bx_i-\bx_j|}.
$$ We now consider any of the contributions to this sum \beqa{fijC}\ovl {f_{{\cal C}}^{ij}}=-\frac{m_im_j}{(2{\pi} )^2}\int_{{\Bbb T} ^2}\frac{d\ell_id\ell_j}{|\bx_i-\bx_j|}\qquad 1\le i<j\le n \end{eqnarray} and expand each such term \[ \ovl {f_{{\cal C}}^{ij}}=\ovl {f_{{\cal C}}^{ij}}^\ppo+ \ovl {f_{{\cal C}}^{ij}}^\ppu+ \ovl {f_{{\cal C}}^{ij}}^\ppd+\cdots\] where $$ \ovl {f_{{\cal C}}^{ij}}^\pph:=-\frac{m_im_j}{(2{\pi} )^2}\int_{{\Bbb T} ^2}\frac{1}{h!}\frac{d^h}{d\varepsilon^h}\frac{1}{| \bx_i-\varepsilon\bx_j|}\Big|_{\varepsilon=0}d\ell_id\ell_j$$ is proportional to {$\frac{1}{a_i}(\frac{a_j}{a_i})^h$}. Then the formulae in \equ{yi} imply that the first two terms of this expansion are given by $$\ovl {f_{{\cal C}}^{ij}}^\ppo= -\frac{m_im_j}{a_{i}},\qquad \ovl {f_{{\cal C}}^{ij}}^\ppu= 0.$$ {\smallskip\noindent} Namely, whatever map ${\cal C} $ is used, the first non--trivial term is the double average of the second order term, which is given by \[ \ovl {f_{{\cal C}}^{ij}}^\ppd(\bm\L, \mathbf u, \mathbf v)=-\frac{m_im_j}{(2{\pi} )^2}\int_{{\Bbb T} ^2}\frac{3(\bx_i\cdot \bx_j)^2-|\bx_i|^2|\bx_j|^2}{2|\bx_i|^5}d\ell_id\ell_j.\] {\smallskip\noindent} Using Jacobi coordinates, S. Harrington noticed that \begin{lemma}[\cite{harrington69}]\label{HarringtonL} If $n=2$, $ \ovl {f_{{\cal J}}^{12}}^\ppd$ depends on only one angle: the perihelion argument of the inner planet, hence is integrable.
\end{lemma} \noindent When $n=2$, Lemma \ref{HarringtonL} provides a good starting point to construct quasi--periodic motions without the constraint of small eccentricities and inclinations, because in that case one can take, as initial approximation, \beqa{HarringtonH}\cH_{Harr}=-\sum_{j=1}^2\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+{\mu} \left(-\frac{m_1m_2}{a_{1}} +\ovl {f_{\cal J}^{12}}^\ppd(\L_1, \L_2, {\Gamma} _1, {\Gamma} _2, {\gamma} _1)\right)\end{eqnarray} The motions of $\cH_{Harr}$ have indeed been widely studied in the literature, after \cite{harrington69}. When $n>2$, the argument does not seem to have an immediate extension using Deprit coordinates (which, as said, are the natural extension of Jacobi reduction). The generalization of \equ{HarringtonH} for such a case is \begin{eqnarray*} \cH_{{\cal D} _{ep, aa}}=-\sum_{j=1}^n\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+{\mu} \sum_{1\le i<j\le n} \left(-\frac{m_im_j}{a_{i}} +\ovl {f_{{\cal D}_{ep}}^{ij}}^\ppd\right)\end{eqnarray*} It turns out that, even looking at the nearest--neighbor interactions \beqa{HarringtonH1}\cH_{nn}=-\sum_{j=1}^n\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+\mu\sum_{i=1}^{n-1}\left(-\frac{m_{i}m_{i+1}}{a_{i}}+\ovl {f_{{\cal D}_{ep}}^{i, i+1}}^\ppd\right) \end{eqnarray} the terms $\ovl {f_{{\cal D}_{ep}}^{i, i+1}}^\ppd$ with $1\le i\le n-2$ depend on two angles: $\gamma_i$ and $\psi_{i-1}$, so the effective study of the unperturbed motions of \equ{HarringtonH1} is involved. Using the ${\cal P} $--coordinates \beqa{HnnPeri} \cH_{nn}=-\sum_{j=1}^n\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+\mu\sum_{i=1}^{n-1}\left(-\frac{m_{i}m_{i+1}}{a_{i}}+\ovl {f_{{\cal P}}^{i, i+1}}^\ppd\right) \end{eqnarray} one has that the terms ${\ovl {f_{{\cal P} }^{i, i+1}}}^{(2)}$ with $1\le i\le n-2$ depend on $3$ angles: $\k_{i-1}$, $\vartheta_{i}$ and $\vartheta_{i+1}$, but the dependence upon $\k_{i-1}$ and $\vartheta_{i}$ enters only at higher order.
This is shown by the following formula, discussed in \cite{pinzari18}: \beqa{ovl f} {\ovl {f_{{\cal P} }^{i, i+1}}}^{(2)}&=&m_{i} m_{i+1} \frac{a_{i+1}^2}{4a_{i}^3}\frac{\L_{i}^3}{\chi_{i-1}^2(\chi_{i-1}-\chi_{i-2})^3}\Big[ \frac{5}{2}(3\Theta_{i+1}^2-\chi_{i-1}^2)\nonumber\\ &-&\frac{3}{2}\frac{4\Theta_{i+1}^2-\chi_{i-1}^2}{\L_{i+1}^2}\Big(\chi_{i}^2+\chi_{i-1}^2-2\Theta_{i+1}^2+2\sqrt{(\chi_{i}^2-\Theta_{i+1}^2)(\chi_{i-1}^2-\Theta_{i+1}^2)}\cos{\vartheta_{i+1}}\Big)\nonumber\\ &+&\frac{3}{2}\frac{(\chi_{i-1}^2-\Theta_{i+1}^2)(\chi_{i}^2-\Theta_{i+1}^2)}{\L_{i+1}^2}\sin^2{\vartheta_{i+1}}\nonumber\\ &+&{\rm O}(\Theta_{i}^2+(\vartheta_{i}-\vartheta^0_i)^2)\Big] \qquad i=1\,,\ldots\,,\ n-1\end{eqnarray} where $\chi_0:=\Theta_1$, $ \chi_{-1}:=0$, $\vartheta^0_i\in\{0\,,{\pi} \}$ and the ${\rm O}(\Theta_{i}^2+(\vartheta_{i}-\vartheta^0_i)^2)$ term vanishes identically when $i=1$. \vskip.1in \noindent {We denote as \beq{HP}{\rm H}_{\cal P} ({\rm X}_{\cal P} ,\ell)={\rm h}_{\rm fast}^0(\L)+{\mu} f_{\cal P} ({\rm X}_{\cal P} , \ell)\qquad {\rm X}_{\cal P} :=(\Theta,\chi, \L,\vartheta, \k)\end{equation} where \beq{hk0}{\rm h}_{\rm fast}^0(\L):=-\sum_{j=1}^n\frac{\mu_j^3 M_j^2}{2\L_j^2}\ ,\end{equation} the $(3n-2)$--dimensional Hamiltonian \equ{Helio} expressed in ${\cal P} $--coordinates. } The proof of Theorem \ref{Global Kolmogorov tori in the planetary problem} is based on three steps: in step 0 we compute the holomorphy domain of $\cH_{\cal P} $; in step 1 the Hamiltonian is transformed into a similar one, but with a much smaller remainder; in step 2, a well--fitted KAM theory is applied. Note that, as the terms of the unperturbed part become smaller and smaller as the distance from the Sun increases, such KAM theory will be required to take these different scales into account.
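The averaging identities \equ{yi}, which are responsible for the vanishing of the first--order term of the expansion, are easily checked numerically on a single Kepler ellipse, averaging over the mean anomaly via $d\ell=(1-e\cos\xi)\,d\xi$. A minimal sketch (arbitrary sample values of $a$ and $e$; a uniform Riemann sum over the period is spectrally accurate for periodic integrands):

```python
import numpy as np

a, e = 1.7, 0.35                               # arbitrary sample orbital elements
xi = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
w = 1.0 - e * np.cos(xi)                       # d ell / d xi, since ell = xi - e sin xi

# position on the ellipse, in its orbital plane: x = a((cos xi - e), sqrt(1-e^2) sin xi)
x1 = a * (np.cos(xi) - e)
x2 = a * np.sqrt(1.0 - e**2) * np.sin(xi)
r = np.hypot(x1, x2)                           # |x| = a(1 - e cos xi)

avg = lambda f: np.mean(f * w)                 # average over the mean anomaly ell

assert abs(avg(1.0 / r) - 1.0 / a) < 1e-9      # <1/|x|> = 1/a
assert abs(avg(x1 / r**3)) < 1e-9              # <x/|x|^3> = 0 (first component)
assert abs(avg(x2 / r**3)) < 1e-9              # <x/|x|^3> = 0 (second component)
```

The first identity is exact because the factor $1-e\cos\xi$ cancels against $|\bx|=a(1-e\cos\xi)$; the other two express the vanishing of $\dot\bx$ and $\dot\by$ over a period.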
\paragraph{Step 0: Choice of the holomorphy domain} { A typical practice, in order to use perturbation theory techniques, is to extend Hamiltonians governing dynamical systems to the complex field, and then to study their holomorphy properties.\vskip.1in {\smallskip\noindent} It can be proven that a domain of holomorphy for the perturbing function $f_{\cal P} $ in \equ{HP}, regarded as a function of complex coordinates can be chosen as $${\mathbb D}_{{\cal P} }:={\cal T}_{\Theta^+,\vartheta^+}\times\big({\cal X}_\theta\times \ovl{\Bbb T} ^n_{{s}}\big)\times\big({\cal A}_\theta\times \ovl{\Bbb T} ^{n}_{{s}}\big)\ ,$$ where, for given positive numbers \begin{eqnarray*} \Theta_{j}^+\ ,\quad \vartheta_{j}^+\ ,\quad{\rm G}_i^\pm\ ,\quad \L_i^\pm\ ,\quad \theta_i\ ,\quad s \end{eqnarray*} with $i=1$, $\cdots$, $n$, $j=1$, $\cdots$, $n-1$, \beqa{domain***} {\cal T}_{\Theta^+,\vartheta^+}&:=&\Big\{(\ovl\Theta, \ovl\vartheta)=(\Theta_2,\cdots, \Theta_{n}, \vartheta_2,\cdots,\vartheta_{n})\in {\Bbb C} ^{n-1}\times {\Bbb T} _{\Bbb C} ^{n-1}:\ \nonumber\\ && |\vartheta_j-{\pi} |\le {\vartheta^+_j}\ ,\quad |\Theta_{j}|\le {\Theta_j^+}\ ,\quad \forall\ j=2,\cdots, n\Big\}\nonumber\\\nonumber\\ {\cal X}_\theta&:=&\Big\{ (\Theta_1, \ovl\chi)=\Big(\Theta_1, (\chi_1,\cdots, \chi_{n-1})\Big)\in {\Bbb C} ^n:\ {\rm G}_{j}^-\le |\chi_{j-1}-\chi_{j-2}|\le{\rm G}_{j}^+\ ,\nonumber\\ &&|\Im (\chi_{j-1}-\chi_{j-2})|\le \theta_{j}\ \forall\ j=1,\cdots, n\Big\}\nonumber\\\nonumber\\ {\cal A}_\theta&:=&\Big\{\L=(\L_1,\cdots, \L_n)\in {\Bbb C} ^n:\quad \L_j^-\le | \L_j|\le \L_j^+\ ,\quad |\Im \L_j|\le \theta_j\nonumber\\ && \forall\ j=1,\cdots, n\Big\}\nonumber\\ \ovl{\Bbb T} _{{s}}&:=&{\Bbb T} +{\rm i}[-{s}, {s}]\end{eqnarray} with $\chi_{-1}:=0$, $\chi_{0}:=\Theta_1$, and \beqa{Choice of parameters} \L_i^\pm&:=&{\mu} _i\sqrt{M_ia_i^\pm}\ ,\quad {\rm G}_i^+:=\ovl{\cal C} _i^* \L_i^-\ ,\qquad {\rm G}_i^-:=\underline{\cal C} _i^* \L_i^+\ ,\quad\Theta_j^+:=s{\rm G}_1^-\ ,\qquad 
\vartheta_j^+:={\cal D} _i\frac{{\L_i^-}}{{\rm G}_1^+}\nonumber\\ \theta_i&:=&s\sqrt{\L_i^-} \end{eqnarray} with $s\in (0, 1)$ arbitrary, ${\cal D} _i$, $\underline{\cal C} _i^*$, $\ovl{\cal C} _i^*$ depending only on $m_0$, $\ldots$, $m_n$, $a_i^\pm$ as in \equ{asymptotics}. } \paragraph{Step 1: Normal Form Theory} \begin{definition}\label{def: Diophantine numbers} \rm Given $m$, ${\nu} _1$, $\cdots$, ${\nu} _m\in \natural$, ${\nu} :={\nu} _1+\cdots+{\nu} _m$; ${\gamma} _1$, $\ldots$, ${\gamma} _m$, $\t\in {\Bbb R} _+$. We call {\it m--scale Diophantine set}, and denote it as ${\cal D}_{{\gamma} _1, \ldots, {\gamma} _m, \t}$, the set of $\omega=(\o_1,\cdots,\o_m)$, with $\omega_j\in {\mathbb R}^{{\nu} _j}$ such that, for any $k=(k_1,\cdots,k_m)\in {\Bbb Z} ^{\nu} \setminus\{0\}$, with $k_j\in {\Bbb Z} ^{{\nu} _j}$, the following inequalities hold: \beq{dioph2sc} |\o\cdot k|=\bigg|\sum_{j=1}^m\o_j\cdot k_j\bigg|\geq \left\{ \begin{array} {l} \displaystyle\frac{{\gamma} _1}{|\tk|^{\t}}\quad {\rm if}\quad k_1\neq0;\\ \ \\ \displaystyle \frac{{\gamma} _2}{|k|^{\t}}\quad {\rm if}\quad k_1= 0,\quad k_2\neq 0;\\ \ \\ \;\;\;\vdots\\ \ \\ \displaystyle \frac{{\gamma} _m}{|k_m|^{\t}}\quad {\rm if}\quad k_1=\cdots=k_{m-1}= 0,\ \cdots,\ k_{m}\neq 0. \end{array} \right. \end{equation} \end{definition} The set ${\cal D}_{{\gamma} _1, \ldots, {\gamma} _m, \t}$ reduces to the usual Diophantine set upon taking ${\gamma} _j=\gamma$ for all $j$. The first multi--scale Diophantine set was proposed by Arnold in \cite{arnold63} with $m=2$. \begin{proposition}\label{multi scale normal form} \label{exponential average} Let ${ {\mu} }_j$, ${ M}_j$ be as in \equ{masses} and ${\rm m}_j:=\sum_{i=1}^{j-1} m_i$, with $j=2,\cdots, n$, ${\chi_0:=\Theta_1}$.
There exists a number ${ c}$, depending only on $n$, $m_0,\cdots,m_n$, $a_1^\pm$, $\underline e_j$, $\ovl e_j$, and a number $0<\ovl{ {c}}<1$, depending only on $n$, such that, for any fixed positive numbers $\ovl{\gamma} <1<\bar K$, $\a>0$ verifying \beqa{bar K} &&\bar K\le \frac{{ c} }{\a^{3/2}}\end{eqnarray} and \beqa{small secular}\frac{1}{{ c}}\max\Big\{ {\mu} (\frac{a_n^+}{a_1^-})^{5}\frac{\bar K^{2\bar\t+2}}{\bar{\gamma} ^2},\ \frac{\bar K^{2(\bar\t+1)}\a}{\bar{\gamma} ^2}\Big\}<1\end{eqnarray} there exist natural numbers ${\nu} _1,\cdots,{\nu} _{2n-1}$, with $\sum_j{\nu} _j=3n-2$, open sets $B_j^*\subset B^{2}_{\varepsilon_j}, {\cal X}^*\subset {\cal X}$, positive real numbers \mbox{${\gamma} _1> \cdots >{\gamma} _{2n-1}$, $\varepsilon_1, \cdots, \varepsilon_{n-1}, \ovl r_1, \cdots, \ovl r_{n-1}, \widetilde r_1, \cdots, \widetilde r_{n}$}, a domain $${ D}_{\rm n}:=B_{\sqrt{2\ovl r}}\times {\cal X}_{\ovl r}\times{\cal A} _{\widetilde r} \times{\Bbb T} ^n_{\ovl{ {c}}s}\times {\Bbb T} ^n_{\ovl{ {c}}s}$$ a sub-domain of the form $${ D}^*_{\rm n}:=B^*_{\sqrt{2\ovl r}}\times {\cal X}^*_{\ovl r}\times{\cal A} _{\widetilde r} \times{\Bbb T} ^n_{\ovl{ {c}}s}\times {\Bbb T} ^n_{\ovl{ {c}}s}$$ verifying \beq{good set***}\meas{ D}^*_{\rm n}\ge\big(1-\frac{\bar{\gamma} }{\ovl{ c}}\big)\meas{ D}_{\rm n}\end{equation} a real-analytic transformation $$\phi_{\rm n}:\quad (p,q,\chi,\L,\k,\ell)\in { D}^*_{\rm n}\to { D}_{\cal P} $$ which conjugates $\cH_{\cal P} $ to $$\cH_{\rm n}(p,q, \chi,\L,\k,\ell) :=\cH_{\cal P} \circ\phi_{\rm n}={\rm h}_{ {fast}, {sec}}(p,q,\chi,\L)+{\mu} \,{f}_ {exp}(p,q, \chi,\L,\k,\ell) $$ where ${f}_ {exp}(p,q, \chi,\L,\k,\ell)$ is independent of $\k_{n-1}$, and the following holds.
\paragraph{1.} The function ${\rm h}_{ {fast}, {sec}}(p,q,\chi,\L)$ is a sum $${\rm h}_{ {fast}, {sec}}(p,q,\chi,\L)={\rm h}_{ {fast}}(\L)+{\mu} \, {\rm h}_{ {sec}}(p,q,\chi,\L)$$ where, if \begin{eqnarray*} \hat{\rm y}_i:=\bigg(\frac{p_2^2+q_2^2}{2},\ \cdots,\ \frac{p_{i+1}^2+q_{i+1}^2}{2},\ \chi_{0},\ \cdots,\ \chi_{i},\ \L_1,\ \cdots,\ \L_{i+1}\bigg)\qquad i=1\,,\ldots\,,n-1 \end{eqnarray*} then ${\rm h}_{ {fast}}$ and ${\rm h}_{ {sec}}$ are given by $${\rm h}_{ {fast}}(\L)=-\sum_{j=1}^n\frac{{ m}_j^3{ M}_j^2}{2\L_j^2}-{\mu} \sum_{j=1}^{n-1}\frac{{ M}_j{ m}_j^2m_j{\rm m}_j}{\L_j^2} ,\qquad {\rm h}_{ {sec}}(p,q,\chi,\L)=\sum_{i=1}^{n-1}{\rm h}_{ {sec}}^i(\hat{\rm y}_i)$$ where the functions ${\rm h}_{ {sec}}^i$ have an analytic extension on ${ D}_{\rm n}$ and verify \[{ c}\frac{(a_{i+1}^+)^2}{(a_{i}^-)^3}\le | {\rm h}_{ {sec}}^i(\hat{\rm y}_i) |\le \frac{1}{ c}\frac{(a_{i+1}^+)^2}{(a_{i}^-)^3}.\] \paragraph{2.} The function ${f}_ {exp}$ satisfies $$|{f}_ {exp}|\le \frac{1}{ c}\frac{e^{-{ c}\bar K}}{a_{n}^-}.$$ \paragraph{3.} If $\zeta$ denotes $\hat{\rm y}_{{n-1}}$ deprived of ${\chi_{n-1}=C}$, the frequency-map $$\zeta\to\o_{ {fast}, {sec}}(\zeta):= \partial_{\zeta}{\rm h}_{ {fast}, {sec}}(\zeta)$$ is a diffeomorphism of $\P_\zeta(B^*_{\sqrt{2\ovl r}}\times {\cal X}^*_{\ovl r}\times{\cal A} ^*_{\widetilde r})$ and, moreover, it satisfies \equ{dioph2sc}, with $m=2n-1$, $\t=\bar\t>2$, and \begin{eqnarray*} {\nu} _j&:=&\left\{ \begin{array} {llll} \displaystyle\ 1& j=1,\cdots, n\\ \\ \displaystyle \ 2\qquad &j=3,\ n=2\\ \\ \displaystyle \ 3 & j=n+1,\ n\ge 3 \\ \\ \displaystyle \ 2& n+2\le j\le 2n-2,\ n\ge 4 \\ \\ \displaystyle\ 1& j=2n-1,\ n\ge 3 \end{array} \right.
\end{eqnarray*} \beqa{nugamma} \omega_j&:=&\left\{ \begin{array} {llll} \displaystyle\partial_{\L_j}{\rm h}_{ {fast}, {sec}}& j=1,\cdots, n\\ \\ \displaystyle \partial_{(\frac{p_{2}^2+q_{2}^2}{2},\chi_0)}\,{\rm h}_{ {fast}, {sec}}\qquad &j=3,\ n=2\\ \\ \displaystyle \partial_{(\frac{p_{2}^2+q_{2}^2}{2}, \chi_{1},\chi_{0})}\,{\rm h}_{ {fast}, {sec}}& j=n+1,\ n\ge 3 \\ \\ \displaystyle \partial_{(\frac{p_{j-n+1}^2+q_{j-n+1}^2}{2}, \chi_{j-n})}\,{\rm h}_{ {fast}, {sec}}& n+2\le j\le 2n-2,\ n\ge 4 \\ \\ \displaystyle \partial_{\frac{p_{n}^2+q_{n}^2}{2}}\,{\rm h}_{ {fast}, {sec}}& j=2n-1,\ n\ge 3 \end{array} \right.\nonumber\\ \nonumber\\ {\gamma} _j&:=& \left\{ \begin{array} {llll} \displaystyle\frac{1}{a_j^-}\frac{\ovl{\gamma} }{\theta_j} \qquad &1\le j\le n\\ \\ \displaystyle\frac{{\mu} (a_{2n-j+1}^+)^2}{(a_{2n-j}^-)^3}\frac{\ovl{\gamma} }{\theta_{j-n}} &n+1\le j\le 2n-1 \end{array} \right.\end{eqnarray} \paragraph{4.} The constants mentioned above are \begin{eqnarray*} \varepsilon_j:={ c}\,\sqrt{\theta_j},\quad \ovl r_j:=\frac{\theta_j\ovl{\gamma} }{\bar K^{\bar\t+1}} ,\quad \widetilde r_j:={ c}\,\theta_j \end{eqnarray*} with $\bar\t>2$. \end{proposition} \paragraph{Step 2: KAM theory} \begin{theorem} [Multi-scale KAM Theorem, \cite{pinzari18}]\label{two scales KAM} Let $m,\ell,{\nu} _1,\cdots,{\nu} _m\in \natural$, ${\nu} :={\nu} _1+\cdots+{\nu} _m\ge \ell$, $\t_*>{\nu} $, ${\gamma} _1\ge \cdots\ge{\gamma} _m>0$, $0<4s\leq \bar{s}<1$, $\r_1, \cdots, \r_\ell, r_1, \cdots, r_{{\nu} -\ell}, \varepsilon_1, \cdots, \varepsilon_\ell>0$, $B_1, \cdots, B_\ell\subset {\Bbb R} ^2$, $D_j:=\{\frac{x^2+y^2}{2}\in {\Bbb R} : (x,y)\in B_j\}\subset {\Bbb R} $, $B:=B_1\times\cdots\times B_\ell\subset {\Bbb R} ^{2\ell}$, $D:=D_1\times\cdots\times D_\ell\subset {\Bbb R} ^\ell$, $C\subset {\Bbb R} ^{{\nu} -\ell}$, $A:=D_\r\times C_r$.
Let \begin{eqnarray*} {\rm H} (\mathbf p,\mathbf q, \mathbf I,\bm\psi)={\rm h}(\mathbf p,\mathbf q, \mathbf I)+{f}(\mathbf p,\mathbf q, \mathbf I,\bm\psi) \end{eqnarray*} be real-analytic on $B_{\sqrt{2\r}}\times C_r\times {\Bbb T} _{\bar{ s}+s}^{{\nu} -\ell}$, where ${\rm h}(\mathbf p,\mathbf q, \mathbf I)$ depends on $(\mathbf p,\mathbf q)$ only via \[ J(\mathbf p,\mathbf q):=\Big(\frac{p_1^2+q_1^2}{2},\ \cdots,\ \frac{p_\ell^2+q_\ell^2}{2}\Big).\] Assume that $\o_0:=\partial_{(J(\mathbf p,\mathbf q), \mathbf I)} {\rm h}$ is a diffeomorphism of $A$ with non--singular Hessian matrix $U:=\partial^2_{(J(\mathbf p,\mathbf q), \mathbf I)}{\rm h}$ and let $U_k$ denote the $ ({\nu} _k+\cdots+{\nu} _m)\times {\nu} $ submatrix of $U$, {\rm i.e.\,}, the matrix with entries $(U_k)_{ij}=U_{ij}$, for ${\nu} _{1}+\cdots+{\nu} _{k-1}+1\leq i\leq {\nu} $, $1\leq j\leq {\nu} $, where $2\le k\le m$. Let \begin{eqnarray*} && {\rm M}_k\geq\sup_{A}|U_k|,\quad \bar {\rm M} \geq\sup_{A}|U^{-1}|,\quad \pertnorm\geq|{f}|_{\r,\bar{ s}+s}\nonumber\\ &&\bar {\rm M}_k\geq \sup_{A}|T_k|\quad {\rm if}\quad \displaystyle U^{-1}=\left( \begin{array} {lrr} T_1\\ \vdots\\ T_m \end{array} \right)\qquad 1\le k\le m.\end{eqnarray*} Define \begin{eqnarray*} && \displaystyle K:=\frac{6}{s}\ \log_+{\left(\frac{\pertnorm {\rm M}_1^2\,L}{\gamma_1^2}\right)^{-1}}\quad {\rm where}\quad \log_+ a :=\max\{1,\log{a}\}\\ && \displaystyle \hat\r_k:=\frac{{\gamma} _k}{3{\rm M}_kK^{\t_*+1}},\quad \hat\r:=\min\left\{\hat\r_1,\ \cdots,\ \hat\r_m,\ \r_1,\ \cdots,\ \r_\ell,\ r_1,\ \cdots ,\ r_{{\nu} -\ell}\right\}\\ \\ && \displaystyle L:=\max \Big\{\bar {\rm M} , \ {\rm M}_1^{-1},\ \cdots,\ {\rm M}_m^{-1}\Big\} \\ && \hat E:=\frac{E L}{\hat\r^2}.
\end{eqnarray*} Then one can find two numbers $\hat c_{\nu} >c_{\nu} $ depending only on ${\nu} $ such that, if the perturbation ${f}$ is so small that the following ``KAM condition'' holds \[ \hat c_{\nu} \KAM<1, \] for any $\o\in\O_*:=\o_0({D})\cap{\cal D} _{{\gamma} _1,\cdots,{\gamma} _m,\t_*}$, one can find a unique real-analytic embedding \begin{eqnarray*} \phi_\o:\quad \vartheta=(\hat\vartheta,\bar\vartheta)\in{\Bbb T} ^{{\nu} } &\to&(\hat v(\vartheta;\o),\hat\vartheta+\hat u(\vartheta;\o), {\cal R} _{\bar\vartheta+\bar u(\vartheta;\o)}w_1,\ \cdots,\ {\cal R} _{\bar\vartheta+\bar u(\vartheta;\o)}w_\ell)\nonumber\\ &&\in \Re C_r\times {\Bbb T} ^{{\nu} -\ell}\times \Re B^{2\ell}_{\sqrt{2r}} \end{eqnarray*} where $r:= c_{\nu} \KAM \hat\r$, such that ${\rm T}_\o:=\phi_{\o}({\Bbb T} ^{\nu} )$ is a real-analytic ${\nu} $-dimensional ${\rm H}$-invariant torus, on which the ${\rm H}$-flow is analytically conjugated to $\vartheta\to \vartheta+\o\,t$. Furthermore, the map $(\vartheta;\o)\to\phi_\o(\vartheta)$ is Lipschitz and one-to-one and the invariant set $\displaystyle {{\rm K}}:=\bigcup_{\o\in\O_*}{\rm T}_\o$ satisfies the following measure estimate \[ \meas\Big(\!\Re({D}_r)\times{\Bbb T} ^\td\setminus{{\rm K}}\Big) \leq c_{\nu} \Big(\!\meas({D}\setminus{D}_{{\gamma} _1,\cdots, {\gamma} _m,\t_*}\times{\Bbb T} ^\td)+\meas(\Re({D}_r)\setminus{D})\times{\Bbb T} ^\td\Big), \] where ${D}_{{\gamma} _1,\cdots, {\gamma} _m,\t_*}$ denotes the $\o_0$-pre-image of ${\cal D} _{{\gamma} _1,\cdots, {\gamma} _m,\t_*}$ in ${D}$.
Finally, on ${\Bbb T} ^{\nu} \times \O_*$, the following uniform estimates hold \begin{align*} | v_k(\cdot;\o)-I_k^0(\o)| &\leq c_{\nu} \Big(\frac{\bar {\rm M}_k}{\bar {\rm M}}+\frac{{\rm M}_k}{{\rm M}_1}\Big)\KAM\,\hat\r \\ |u(\cdot;\o)| &\leq c_{\nu} \KAM\,s \end{align*} where $v_k$ denotes the projection of $v=(\hat v, \bar v)\in {\Bbb R} ^{{\nu} _1}\times\cdots\times{\Bbb R} ^{{\nu} _m}$ over ${\Bbb R} ^{{\nu} _k}$, $\displaystyle\bar v_k:=\frac{|w_k|^2}{2}$ and $I^0(\o) = (I^0_1(\o),\cdots, I^0_{\nu} (\o)) \in D$ is the $\o_0$-pre-image of $\o\in\O_*$. \end{theorem} \noindent Theorem \ref{two scales KAM} generalizes Theorem 3 in \cite{chierchiaPi10} and hence the Fundamental Theorem of \cite{arnold63}, by which Theorem 3 in \cite{chierchiaPi10} was inspired. \paragraph{Proof of Theorem \ref{Global Kolmogorov tori in the planetary problem}} Let $$ \bar{\gamma} :={\ovl{ c}}\sqrt\a(\log\a^{-1})^{\bar\t+1},\quad \bar K=\frac{1}{\widetilde{ c}}\log\frac{1}{\a}$$ where $\ovl{ c}$ is as in \equ{good set***} and $\widetilde{ c}$ will be fixed later. We aim to apply Theorem \ref{two scales KAM} to the Hamiltonian $\cH_{\rm n}$ of Proposition \ref{exponential average}, with these choices of $\bar{\gamma} $ and $\bar K$. To this end, we take \begin{eqnarray*} &&{\rm M}_j=\left\{ \begin{array} {llll} \displaystyle\frac{1}{{ c}_1a_j^-\theta_j^2}\qquad\ &1\le j\le n\\ \\ \displaystyle\frac{{\mu} (a_{2n-j+1}^+)^2}{{ c}_1(a_{2n-j}^-)^3\theta_j^2} & n+1\le j\le 2n-1 \end{array} \right.
\qquad L=\bar{\rm M}=\frac{1}{{ c}_2}\,\frac{\theta_1^2(a_{n}^+)^3}{{\mu} (a_{n-1}^-)^2} \nonumber\\ &&E=\frac{1}{{ c}_3}\frac{{\mu} }{a_n^-}e^{-{ c}\bar K} \qquad\qquad\qquad\qquad\qquad \qquad\qquad\quad K=\frac{1}{{ c}_4}\log_+\Big(\frac{1}{\ovl{\gamma} ^2}\frac{(a_n^+)^3}{(a_{n-1}^-)^3}e^{-{ c}\bar K}\Big)^{-1}\nonumber\\ &&\hat \r_j=\arr{\displaystyle{{ c}_5}\frac{\ovl{\gamma} \theta_j}{K^{\t_*+1}}\quad 1\le j\le n\\ \\ \displaystyle{ c}_5\frac{\ovl{\gamma} \theta_{j-n}}{{}K^{\t_*+1}}\quad n+1\le j\le 2n-1 }\qquad \qquad\hat\r:=\frac{\theta_1\ovl{\gamma} }{\hat K^{\t_*+1} }\quad \t_*>3n-2\nonumber\\ &&\hat E=\frac{1}{{ c}_6}\frac{1}{\ovl{\gamma} ^2}\frac{(a^{ +}_n)^3}{(a_{n-1}^-)^3}e^{-{ c}\bar K}\hat K^{2(\t_*+1)}\nonumber\\ \end{eqnarray*} where $\hat K:=\max\{K,\bar K\}$. The number $\frac{1}{\ovl{\gamma} ^2}\frac{(a_n^+)^3}{(a_{n-1}^-)^3}$ can be bounded by $\frac{1}{\a^N}$ for a sufficiently large $N$ depending only on $n$. Hence, if $\widetilde{ c}<\frac{ c}{N}$ and $\a<{ c}_6$, we have $\hat E<1$ and the theorem is proved. $\quad\square$ \subsection{On the co--existence of stable and whiskered tori}\label{Coexistence of stable and whiskered tori} In this section we discuss how the use of two different sets of coordinates may lead to a proof of the co--existence of stable and unstable motions. Specifically, we deal with the following situation, which we shall refer to as {\it outer, retrograde configuration} ({\sc orc}): \vskip.1in \noindent {\it Two planets describe almost co--planar orbits, revolving around their common sun, in opposite sense. The outer planet has a lower angular momentum and retrograde motion, as seen from the total angular momentum of the system.
} \vskip.1in \noindent We aim to discuss the following \begin{theorem}\label{thm: coexistence} \item[{\rm 1.}] {\it There exists an {8--dimensional} region ${\cal D} _{\rm s}$ in the phase space almost completely filled with a positive measure set of five--dimensional {\sc kam} tori, in {\sc orc} configuration; \item[{\rm 2.}] There exists an {8--dimensional} region ${\cal D} _{\rm u}$ in the phase space including a {6--dimensional, hyperbolic} invariant region ${\cal D} ^0_{\rm u}$ consisting of co--planar, retrograde motions for the outer planet. \item[{\rm 3.}] ${\cal D} _{\rm s}$ and ${\cal D} ^0_{\rm u}$ have a non--empty intersection}. \end{theorem} \noindent Theorem \ref{thm: coexistence} leads to the following conjecture, which we expect to be provable. \begin{conjecture} Full dimensional quasi--periodic motions and hyperbolic 3--dimensional tori co--exist in ${\cal D} _{\rm s}$. \end{conjecture} \noindent The proof of statements 1. and 2. in Theorem \ref{thm: coexistence} relies on the use of two different sets of coordinates for the Hamiltonian \equ{HelioNEW} with $n=2$: \beqa{3BP}{\cal H}_{3BP}&=\,&\frac{|\by_1|^2}{2\mu_1}-\frac{\mu_1 M_1}{|\bx_1|}+\frac{|\by_2|^2}{2\mu_2}-\frac{\mu_2 M_2}{|\bx_2|}+\mu\left(\frac{\by_1\cdot \by_2}{ m_0}-\frac{m_1 m_2}{|\bx_1-\bx_2|}\right) \end{eqnarray} \paragraph{Proof of 1.} We consider the coordinates \equ{Depaa} with $n=2$.
{It will turn out to be useful to work with regularizing complex coordinates, which we denote as \beqa{real and complex} && \textrm{\sc rps}_{\pi} ^{{\Bbb C} }:=(\bm\L,\bm\l, \mathbf t, \mathbf t^*, T, T^*)=(\L_1,\L_2,\l_1,\l_2, t_1,t_2, t_3, t_1^*, t_2^*, t_3^*, T, T^*)\end{eqnarray} } and define via the formulae \beqa{PR} \arr{ \L_1=\L_1\\ \L_2=\L_2\\ t_1=-{\rm i} \sqrt{\L_1-{\Gamma} _1}\,e^{{\rm i}(-{\gamma} _1+{\gamma} +\zeta)}\\ t_2=\sqrt{\L_2-{\Gamma} _2}\,e^{{\rm i}({\gamma} _2+{\gamma} +\zeta)}\\ t_3=-{\rm i} \sqrt{C-{\Gamma} _2+{\Gamma} _1}\,e^{{\rm i}({\gamma} +\zeta)}\\ T=\sqrt{{C} -\ZZ}\,e^{{\rm i}\zeta} }\qquad \arr{ \l_2=\ell_2+{\gamma} _2+{\gamma} +\zeta\\ \l_1=\ell_1+{\gamma} _1-{\gamma} -\zeta\\ t_1^*=-\sqrt{\L_1-{\Gamma} _1}\,e^{-{\rm i}(-{\gamma} _1+{\gamma} +\zeta)}\\ t_2^*=-{\rm i}\sqrt{\L_2-{\Gamma} _2}\,e^{-{\rm i}({\gamma} _2+{\gamma} +\zeta)}\\ t_3^*=-\sqrt{C-{\Gamma} _2+{\Gamma} _1}\,e^{-{\rm i}({\gamma} +\zeta)}\\ T^*=-{\rm i}\sqrt{{C} -\ZZ}\,e^{-{\rm i}\zeta} } \end{eqnarray} We also define, for later use, $\eta_1$, $\eta_2$, $p$, $\xi_1$, $\xi_2$, $q$ via\beqa{PR1} t_2&:=& \frac{\eta_2-{\rm i}\xi_2}{\sqrt2}\qquad t_1:= \frac{{\rm i} \eta_1-\xi_1}{\sqrt2}\qquad\ \ \ t_3:= \frac{{\rm i} p-q}{\sqrt2}\qquad\ \ \ T:= \frac{P-{\rm i}Q}{\sqrt2}\nonumber\\\nonumber\\ t^*_2&:=& \frac{\eta_2+{\rm i}\xi_2}{\sqrt2{\rm i}}\qquad t^*_1:= \frac{{\rm i} \eta_1+\xi_1}{\sqrt2{\rm i}}\qquad \ t_3^*:= \frac{{\rm i} p+q}{\sqrt2{\rm i}}\qquad T^*:= \frac{P+{\rm i}Q}{\sqrt2{\rm i}}. \end{eqnarray} \noindent Observe that \beq{singularities1}{\cal M} _{\pi} :=\big\{(\bm\L, \bm\l, \mathbf t, \mathbf t^*):\ (\mathbf t, \mathbf t^*)=(0,0)\big\}\end{equation} corresponds to co--circular, co--planar orbits for the two planets, with the outer planet in retrograde motion.
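A useful algebraic feature of the coordinates \equ{PR} is that the angular momentum length satisfies $C=\L_2-\L_1-{\rm i}\,\mathbf t\cdot\mathbf t^*$ (the conservation law \equ{CRPS} used below). This identity follows by direct substitution; a symbolic sanity check, using nothing beyond the defining formulae, can be sketched as:

```python
import sympy as sp

# symbols entering (PR): actions, angular momenta, angles (all real)
L1, L2, G1, G2, C = sp.symbols('Lambda1 Lambda2 Gamma1 Gamma2 C', positive=True)
g1, g2, g, z = sp.symbols('gamma1 gamma2 gamma zeta', real=True)
I = sp.I

# the t, t* variables exactly as defined in (PR)
t1  = -I*sp.sqrt(L1 - G1)*sp.exp( I*(-g1 + g + z))
t2  =    sp.sqrt(L2 - G2)*sp.exp( I*( g2 + g + z))
t3  = -I*sp.sqrt(C - G2 + G1)*sp.exp( I*(g + z))
t1s =   -sp.sqrt(L1 - G1)*sp.exp(-I*(-g1 + g + z))
t2s = -I*sp.sqrt(L2 - G2)*sp.exp(-I*( g2 + g + z))
t3s =   -sp.sqrt(C - G2 + G1)*sp.exp(-I*(g + z))

# angular momentum length: Lambda2 - Lambda1 - i t.t* should reduce to C
expr = sp.simplify(L2 - L1 - I*(t1*t1s + t2*t2s + t3*t3s))
print(expr)  # -> C
```
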
\noindent We denote as \beqa{3BPRPSpi} \cH_{\textrm{\sc rps}^{\Bbb C} _{\pi} }=-\frac{\mu_1^3 M_1^2}{2\L_1^2}-\frac{\mu_2^3 M_2^2}{2\L_2^2}+\mu f_{\textrm{\sc rps}^{\Bbb C} _{\pi} }(\bm\L,\bm\l, \mathbf t, \mathbf t^*) \end{eqnarray} the expression of the Hamiltonian \equ{3BP} using the coordinates $\textrm{\sc rps}_{\pi} ^{{\Bbb C} }$ in \equ{real and complex}, which, similarly to the prograde case, is independent of $(T, T^*)$. With a slight abuse of notation, we shall continue to call $\textrm{\sc rps}_{\pi} ^{{\Bbb C} }$ the coordinates \equ{real and complex} deprived of $(T, T^*)$. \noindent We now define a domain in which to let the $\textrm{\sc rps}_{\pi} ^{{\Bbb C} }$ coordinates vary. First of all, we observe that the {\sc orc} configuration can be realized only if the planetary masses are tuned with the semi--major axes. More precisely, if we denote as ``2'' and ``1'' the inner\footnote{Compared to \cite{pinzari18a}, here ``2'' and ``1'' are exchanged, in order to keep uniform notations along the paper.}, outer planet; as $a_2$, $a_1$, the semi--major axes of their respective instantaneous orbits around the sun; $\a_-$, $\a_+$, with $0<\a_-<\a_+<1$, two numbers such that the semi--axes ratio $\a:=\frac{a_2}{a_1}$ verifies \beq{a}\a_-<\a<\a_+\ ,\end{equation} then the following inequality needs to be satisfied \beq{masses ratio} \frac{m_2}{m_1}\sqrt{\a_-}>1\ .\end{equation} {\smallskip\noindent} Indeed, since the motions are almost--circular, the lengths of the angular momenta of the planets, $ {C}_1$, $ {C}_2$, are arbitrarily close to the action coordinates $\L_1$, $\L_2$ related to their semi--major axes, which in turn are related to the semi--axes and the mass ratio via \[ 1<\frac{C_2}{C_1}\sim \frac{\L_2}{\L_1}=\frac{{ {\mu} }_2}{{ {\mu} }_1}\sqrt{\frac{{ M}_2}{{ M}_1}}\sqrt{\a}\] where ${ {\mu} }_i$, ${ M}_i$ are as in \equ{massesNEW}.
This inequality does not conflict with \equ{a} if one assumes that \beqa{kpm}k_\pm:=\frac{{ {\mu} }_2}{{ {\mu} }_1}\sqrt{\frac{{ M}_2}{{ M}_1}\a_\pm}>1\ ,\end{eqnarray} whence the necessity of \equ{masses ratio}. \noindent We then fix the domain as follows. The coordinates $\L_1$, $\L_2$ will be taken to vary in the set \beq{L0}{\cal L} :=\Big\{\L=(\L_1,\L_2):\ \L_-\le \L_1\le \L_+\ ,\ k_-\L_1\le \L_2\le k_+\L_1\Big\}\end{equation} with $k_\pm$ as in \equ{kpm}, and $0<\L_-<\L_+$ to be chosen later. {\smallskip\noindent} The coordinates $\l=(\l_1,\l_2)$ will be taken to run in the torus ${\Bbb T} ^2$. {\smallskip\noindent} As for the coordinates $(\mathbf t,\mathbf t^\star)$, we take a domain of the form $${\cal U} _{\rm s}:=\Big\{(\mathbf t, \mathbf t^\star)\in {\Bbb C} ^6:\quad |(\mathbf t,\mathbf t^\star)|\le \varepsilon\Big\}$$ \noindent The domain for $\textrm{\sc rps}_{\pi} ^{{\Bbb C} }$ will then be \beqa{DS}{\cal D} _{\rm s}={\cal L} \times {\mathbb T}^2\times {\cal U} _{\rm s}\,.\end{eqnarray} \noindent The following statement is a more precise version of statement 1. in Theorem \ref{thm: coexistence}. \begin{theorem}[\cite{pinzari18a}]\label{stable tori} There exist two numbers $0<\varepsilon_+<\varepsilon_0$, $0<\a_+<1$, such that, for any $0<\varepsilon<\varepsilon_+$, $0<\a_-<\a_+$, $0<\L_-<\L_+$, one can find ${\mu} _+(\varepsilon)>0$ such that, for any $0<{\mu} <{\mu} _+(\varepsilon)$, in the domain ${\cal D} _{\rm s}$ there exists an invariant set ${\cal F}_{\varepsilon,{\mu} }\subset {\cal D} _{\rm s}$ with density going to $1$ as $\varepsilon\to 0$ which is foliated as \beq{foliation}{\cal F}_{\varepsilon,{\mu} }=\bigcup_{\omega}{\cal T} _{\o,\varepsilon, {\mu} }\end{equation} where ${\cal T} _{\o,\varepsilon, {\mu} }$ is diffeomorphic to ${\Bbb T} ^5$, where ${\Bbb T} :={\Bbb R} /(2{\pi} {\Bbb Z} )$ is the standard, ``flat'' torus.
Moreover, on ${\cal T} _{\o,\varepsilon, {\mu} }$ the motions are quasi--periodic, in {\sc orc} configuration, with suitable (``diophantine'') irrational frequencies. \end{theorem} \noindent Theorem \ref{stable tori} extends Theorem \ref{Arnold Theorem} to {\sc orc} motions. As we briefly discuss below, even though the setting is similar, the extension is not completely trivial. Here we provide a sketch of the proof. \vskip.1in \noindent In \cite{pinzari18a} it is shown that $ \cH_{\textrm{\sc rps}^{\Bbb C} _{\pi} }$ is related to the Hamiltonian $ \cH_{\textrm{\sc rps}}$ in \equ{HRPS} with $n=2$ by a simple relation. If, in order to avoid confusion, we equip with ``tildes'' the coordinates \equ{reg var} with $n=2$ and denote as \begin{eqnarray*} \textrm{\sc rps}^{{\Bbb C} }:=({\bm\L},\widetilde{\bm\l}, \widetilde{\mathbf t}, \widetilde{\mathbf t}^*, \widetilde T, \widetilde T^*)=(\L_1,\L_2,\widetilde \l_1,\widetilde \l_2, \widetilde t_1,\widetilde t_2, \widetilde t_3, \widetilde t_1^*, \widetilde t_2^*, \widetilde t_3^*, \widetilde T, \widetilde T^*)\end{eqnarray*} their complex version, defined via \beqa{PROld+} \widetilde t_1&:=& \frac{\widetilde \eta_1-{\rm i}\widetilde \xi_1}{\sqrt2}\qquad \widetilde t_2:= \frac{\widetilde \eta_2-{\rm i}\widetilde \xi_2}{\sqrt2}\qquad\ \widetilde t_3:= \frac{\widetilde p-{\rm i}\widetilde q}{\sqrt2}\qquad\widetilde T:= \frac{\widetilde P-{\rm i}\widetilde Q}{\sqrt2}\nonumber\\\nonumber\\ \widetilde t^*_1&:=& \frac{\widetilde \eta_1+{\rm i}\widetilde \xi_1}{\sqrt2{\rm i}}\qquad \widetilde t^*_2:= \frac{\widetilde \eta_2+{\rm i}\widetilde \xi_2}{\sqrt2{\rm i}}\qquad \ \widetilde t_3^*:= \frac{\widetilde p+{\rm i}\widetilde q}{\sqrt2{\rm i}}\qquad\widetilde T^*:= \frac{\widetilde P+{\rm i}\widetilde Q}{\sqrt2{\rm i}} \end{eqnarray} and, finally, introduce the involution \beqa{involution}\phi_1^-\big(\L_1,\L_2,\l_1, \l_2,t,t^*, T, T^*\big):=\big(-\L_1,\L_2,-\l_1, \l_2,t,t^*, T, T^*\big)\,.\end{eqnarray} Then we have
\begin{proposition}[\cite{pinzari18a}]\label{signs1} $\cH_{{\rm rps}^{\Bbb C} _{\pi} }=\cH_{{\rm rps}^{\Bbb C} }\circ\phi_1^-$. \end{proposition} In particular, the coefficients of the expansion \beqa{quadratic retrograde expr} f^{\rm av}_{\textrm{\sc rps}^{\Bbb C} _{\pi} }=C_0(\bm\L)+{\rm i} \mathbf t_h\cdot {\sigma} (\bm\L) \mathbf t^*+{\rm i} \varsigma(\bm\L)t_3 t_3^*+{\rm O}_4(\mathbf t,\mathbf t^*;\bm\L)\end{eqnarray} of $f^{\rm av}_{\textrm{\sc rps}^{\Bbb C} _{\pi} }$ are obtained from the corresponding coefficients $\widetilde {\sigma} (\bm\L)$, $\widetilde \varsigma(\bm\L)$ computed in \cite{chierchiaPi11b} by applying the projection on $(\bm\L, \bm\l)$ of the transformation in \equ{involution}. This immediately provides \beqa{coefficients1} \left\{ \begin{array}{lll}\displaystyle{\sigma} (\L_1, \L_2)=\widetilde{\sigma} (-\L_1, \L_2)=\left( \begin{array}{ccc} -\frac{\rm s}{\L_1}&-{\rm i}\frac{\widetilde{\rm s}}{\sqrt{\L_1\L_2}}\\ -{\rm i}\frac{\widetilde{\rm s}}{\sqrt{\L_1\L_2}}&\frac{\rm s}{\L_2} \end{array} \right)\\\\ \displaystyle \varsigma(\L)=\widetilde\varsigma(-\L_1, \L_2) -\Big(\frac{1}{\L_2}-\frac{1}{\L_1}\Big){\rm s} \end{array} \right.\end{eqnarray} with \beqa{coefficients2} {\rm s}:=- m_1m_2\frac{\a}{2a_1}{b^{(1)}_{3/2}}(\a)\qquad \widetilde{\rm s}:=m_1m_2\frac{\a}{2a_1}{b^{(2)}_{3/2}(\a)}\qquad \quad \alpha=\frac{a_2}{a_1} \end{eqnarray} where the $b^{(j)}_s(\a)$ are the Laplace coefficients\footnote{The Laplace coefficients are defined via the Fourier expansion $$\frac{1}{\big(1-2\a\cos\theta+\a^2\big)^s}=\sum_{k\in {\Bbb Z} }b^{(k)}_s(\a)e^{{\rm i} k\theta}\ \qquad {\rm i}:=\sqrt{-1}\ .$$}. It is to be remarked, from the formulae in \equ{coefficients1}--\equ{coefficients2}, that the matrix $\sigma$ is symmetric but {\it not} real. This is a remarkable difference from the prograde case studied in \cite{fejoz04, chierchiaPi11b}: in particular, it does not ensure ``a priori'' the reality of the eigenvalues of $\sigma$.
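Before stating the precise result, the reality of the spectrum of $\sigma$ can be probed numerically: the sketch below (with purely illustrative, hypothetical masses, semi--axes and actions) computes the Laplace coefficients by direct quadrature of their defining Fourier integral, builds the complex--symmetric matrix $\sigma$ of \equ{coefficients1}, and checks that its eigenvalues come out real and that the discriminant $({\, {\rm tr}\, }\sigma)^2-4\det\sigma$ is positive:

```python
import numpy as np

def laplace_b(k, alpha, s=1.5, n=20000):
    # Laplace coefficient b_s^(k)(alpha): k-th Fourier coefficient of
    # (1 - 2 alpha cos t + alpha^2)^(-s), by the trapezoid rule on [0, pi]
    t = np.linspace(0.0, np.pi, n + 1)
    f = np.cos(k*t) / (1 - 2*alpha*np.cos(t) + alpha**2)**s
    return (0.5*f[0] + f[1:-1].sum() + 0.5*f[-1]) * (t[1] - t[0]) / np.pi

# illustrative (hypothetical) data: masses, semi-axes, actions
m1, m2, a1, a2, L1, L2 = 1e-3, 2e-3, 1.0, 0.1, 0.5, 1.5
alpha = a2 / a1
s_  = -m1*m2*alpha/(2*a1) * laplace_b(1, alpha)   # "s"  of (coefficients2)
st_ =  m1*m2*alpha/(2*a1) * laplace_b(2, alpha)   # "s~" of (coefficients2)

# the complex-symmetric, non-real matrix sigma of (coefficients1)
sigma = np.array([[-s_/L1,                 -1j*st_/np.sqrt(L1*L2)],
                  [-1j*st_/np.sqrt(L1*L2),  s_/L2]])
ev = np.linalg.eigvals(sigma)
disc = (s_/L2 - s_/L1)**2 + 4*(s_**2 - st_**2)/(L1*L2)  # (tr)^2 - 4 det
print(np.max(np.abs(ev.imag)) < 1e-15, disc > 0)
```
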
However, the following turns out to be true: \begin{lemma} The eigenvalues of the $(2\times 2)$ matrix $\sigma(\bm\L)$ in \equ{quadratic retrograde expr} are real. Hence, $(\mathbf t, \mathbf t^*)=(\mathbf 0, \mathbf 0)\in {\Bbb R} ^3\times {\Bbb R} ^3$ is an elliptic equilibrium point for $f^{\rm av}_{\textrm{\sc rps}^{\Bbb C} _{\pi} }$. \end{lemma} \par\medskip\noindent{\bf Proof\ } The eigenvalues of ${\sigma} $ can be explicitly computed: \beq{elleq1}{\sigma} _{1}, {\sigma} _2=\frac{{\, {\rm tr}\, }{\sigma} }{2}\pm\frac{1}{2}\sqrt{({\, {\rm tr}\, }{\sigma} )^2-4\det{\sigma} }\ .\end{equation} {Since ${\, {\rm tr}\, }{\sigma} =\Big(\frac{1}{\L_2}-\frac{1}{\L_1}\Big){\rm s}$ is real, we} have to check that the discriminant \begin{eqnarray*} {\Delta} :=({\, {\rm tr}\, }{\sigma} )^2-4\det{\sigma} =(\frac{1}{\L_2}-\frac{1}{\L_1})^2{\rm s}^2+\frac{4}{\L_1\L_2}\big({\rm s}^2-\widetilde{\rm s}^2\big) \end{eqnarray*} is positive. Recalling that the Laplace coefficients verify $$b^{(j)}_s(\b)> b^{(j+1)}_s(\b)\quad \textrm{for all}\quad s>0,\quad j\in {\Bbb Z} ,\quad 0<|\b|<1,$$ (see~\cite{fejoz04} for a proof), one has \beq{elleq2} {\rm s}^2-\widetilde{\rm s}^2=(m_1m_2\frac{\a}{2a_1})^2\big((b^{(1)}_{3/2}(\a))^2-(b^{(2)}_{3/2}(\a))^2\big)> 0,\end{equation} and the assertion follows. $\quad\square$ \vskip.1in \noindent The formulae in \equ{coefficients1}--\equ{coefficients2} show that, as in the prograde case, the eigenvalues of $\sigma(\bm\L)$ and the number $\varsigma(\bm\L)$ verify, identically, \beqa{HR}{\sigma} _1+{\sigma} _2+\varsigma\equiv0\ .\end{eqnarray} By analogy with the latter identity in \equ{Herman resonance}, we shall refer to \equ{HR} as {\it Herman resonance}. The asymptotic values of the eigenvalues $\sigma_1$, $\sigma_2$ and $\varsigma$ in the well--spaced regime \equ{L0} can be computed directly from \equ{elleq1}--\equ{elleq2}, or from the corresponding ones in \cite{fejoz04, chierchiaPi11b} applying the transformation \equ{involution}.
In any case, the result is $$\left\{ \begin{array}{lll} \displaystyle {\sigma} _1=+\frac{3}{4\L_1}\frac{a_2^2}{a_1^3}+{\rm O}(\frac{a_2^3}{a_1^4\L_1})\\\\ \displaystyle{\sigma} _2=-\frac{3}{4\L_2}\frac{a^2_2}{a_1^3}+{\rm O}(\frac{a_2^3}{a_1^4\L_2})\\\\\displaystyle\varsigma=\frac{3}{4}\frac{a^2_2}{a_1^2}\left(\frac{1}{\L_2}-\frac{1}{\L_1}\right)+{\rm O}(\frac{a_2^3}{a_1^4\L_2}) \end{array} \right. $$ This shows that there is no other resonance besides Herman resonance in \equ{HR}, provided the semi--axes are well spaced. Recall the definition of ${\cal L} $ in \equ{L0}. \begin{lemma}\label{lem: HR} For any $K>0$, there exist $\L_\pm$, $\alpha_\pm$ such that the triple $\Omega^{\Bbb C} (\bm\L):=\big({\sigma} _1(\bm\L), {\sigma} _2(\bm\L), \varsigma(\bm\L)\big)$ verifies \beqa{nonresuptoHR}\Omega^{\Bbb C} (\bm\L)\cdot k\ne 0\quad \forall k\in {\Bbb Z} ^3\,,\ 0<|k|\le K\,,\ k\ne N(1,1,1)\quad \forall\ \bm\L\in {\cal L} \end{eqnarray} for any $N\in {\Bbb Z} $. \end{lemma} \noindent At first sight, Lemma \ref{lem: HR} might seem an obstruction towards the construction of the Birkhoff normal form for the Hamiltonian \equ{3BPRPSpi}. However, as in the prograde case, the conservation of the angular momentum length \beqa{CRPS}C=\L_2-\L_1-{\rm i}\mathbf t\cdot\mathbf t^*\end{eqnarray} is of great help. Indeed, by the commutation of $ f_{\textrm{\sc rps}^{\Bbb C} _{\pi} }$ and $C$, it turns out that, in the Taylor expansion \equ{quadratic retrograde expr}, only monomials with literal part ${\mathbf t}^{\mathbf a}{\mathbf t^*}^{\mathbf a^*}$ verifying \beqa{Sum ai}\sum_{i} a_i=\sum_{i} a^*_i\end{eqnarray} appear. In \cite{chierchiaPi11c} it is shown that, because of \equ{Sum ai}, \equ{nonresuptoHR} is sufficient for constructing a Birkhoff normal form (i.e., Theorem \ref{planetary normal form} with $n=2$) for the Hamiltonian \equ{3BPRPSpi}.
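The selection rule \equ{Sum ai} is pure bookkeeping on exponent vectors, and can be made concrete by a small enumeration: among all monomials ${\mathbf t}^{\mathbf a}{\mathbf t^*}^{\mathbf a^*}$ of total degree at most $4$, only those with $\sum_i a_i=\sum_i a_i^*$ may survive in the expansion \equ{quadratic retrograde expr}. A sketch of the count:

```python
from itertools import product

# exponent vectors (a, a*) for t = (t1,t2,t3), t* = (t1*,t2*,t3*),
# total degree |a| + |a*| <= 4
deg = 4
monomials = [(a, b)
             for a in product(range(deg + 1), repeat=3)
             for b in product(range(deg + 1), repeat=3)
             if sum(a) + sum(b) <= deg]

# selection rule (Sum ai): commutation with C kills everything else
kept = [(a, b) for a, b in monomials if sum(a) == sum(b)]
print(len(monomials), len(kept))  # -> 210 46
```

In particular, no monomial of odd total degree survives, which is why the expansion starts with the quadratic terms displayed in \equ{quadratic retrograde expr}.
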
Moreover, the torsion matrix (i.e., the matrix $\tau(\bm\L)$ defined via \equ{torsion}) for this case can be computed from the analogous one of the prograde case, again applying \equ{involution} to the torsion of the prograde problem. The computation is omitted (see \cite{pinzari18a} for the details), apart from stating that the matrix is non--singular. An application of Theorem \ref{FT} then leads to the proof of Theorem \ref{stable tori}. \paragraph{Proof of 2.} As a second set of coordinates, we use the ${\cal P} $--coordinates defined in Section \ref{The reduction of perihelia}. In the case $n=2$, they reduce to $${\cal P} =(Z, C, \bm\Theta, \bm\L, \zeta, \k_2, \bm\vartheta, \bm\ell)$$ with $$\bm\L=(\L_1, \L_2)\,,\ \bm\Theta=(\Theta_1, \Theta_2)\,,\ \bm\ell=(\ell_1, \ell_2)\,,\ \bm\vartheta=(\vartheta_1, \vartheta_2)$$ We denote as $$\cH_{{\cal P} }=-\sum_{j=1}^2\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+\mu f_{{\cal P} } (\bm\L, \bm\Theta, \bm\ell, \bm\vartheta; C)$$ the four--degrees--of--freedom Hamiltonian \equ{3BP} written using ${\cal P} $--coordinates, which is independent of $Z$, $\zeta$ and $\k_2$. \\ The manifold \beqa{equilibrium}{\cal D} ^0_{\rm u}:=\Big\{(\bm\L, \bm\Theta, \bm\ell, \bm\vartheta; C):\quad (\Theta_2,\vartheta_2)=(0,0)\Big\}\end{eqnarray} corresponds to retrograde motions. It is invariant, as $f_{{\cal P} } $ has an equilibrium on it, and includes, in particular, the manifold ${\cal M} _{\pi} $ in \equ{singularities1}. \noindent We establish a suitable domain (including ${\cal D} ^0_{\rm u}$) for the coordinates ${\cal P} $ where $\cH_{{\cal P} }$ is regular.
We check below that the following domain is suited to the purpose: \beqa{DC} {\cal D} _{{\cal P} }({C} )&:=&\Big\{(\bm\L, \Theta_1)\in {\cal A} ({C} )\Big\}\times\Big\{ (\bm\ell, \vartheta_1)\in {\mathbb T}^3\Big\}\times\Big\{ (\Theta_2,\vartheta_2)\in {\cal B} (\Theta_1,{C} )\Big\} \end{eqnarray} where \beqa{B} {\cal A} ({C} )&:=&\Big\{(\L_1, \L_2, \Theta_1): \ (\L_1,\L_2)\in {\cal L} ({C} ), \Theta_1\in {\cal G} (\L_1,\L_2,{C} )\Big\}\nonumber\\ {\cal B} (\Theta_1,{C} )&:=&\Big\{(\Theta_2,\vartheta_2): \ |\Theta_2|< \frac{1}{2}\min\{{C} ,\Theta_1\}, |\vartheta_2|< \frac{{\pi} }{2}\Big\}\nonumber\\ {\cal L} ({C} )&:=&\Big\{\bm\L: \ \bm\L\in {\cal L} ,\quad \L_2>{C} +\frac{2}{ c}\sqrt{\a_+}\L_1\Big\}\nonumber\\ {\cal G} (\L_1,\L_2,{C} )&:=&\Big({C} _-,{C} _+\Big),\qquad {C} _-:=\frac{2}{ c}\sqrt{\a_+} \L_1\qquad {C} _+:=\min\Big\{\L_2-{C} , \L_1\Big\}. \end{eqnarray} with ${\cal L} $ as in \equ{L0}, while $c$ is an arbitrarily fixed number in $(0,1)$. We need to establish two kinds of conditions. \subparagraph{\it a) existence of the perihelia} We need the planets' eccentricities $e_1$, $e_2$ to stay strictly confined in $(0,1)$, namely, the following inequalities must be satisfied: \beq{C1C2}0<\Theta_1<\L_1\,,\qquad 0<C_2<\L_2\end{equation} with $C_2:=|\mathbf C_2|$, $\mathbf C_2$ as in~\equ{C}. The expression of $C_2$ using ${\cal P} $ is $$C_2=\sqrt{C^2+\Theta_1^2-2\Theta_{2}^2+2\sqrt{(C^2-\Theta_{2}^2)(\Theta_1^2-\Theta_{2}^2)}\cos{\vartheta_{2}}}$$ We observe that $C_2$ may vanish only for $ (\Theta_2,\vartheta_2)= (0 ,{\pi} )$.
Since we deal with the equilibrium \equ{equilibrium}, the occurrence of this equality is automatically excluded by confining the coordinates $(\Theta_2,\vartheta_2)$ to the set ${\cal B} $ in~\equ{B}, since in this case \beq{C1lowerbound}C_2^2\ge \frac{3}{4}{C} ^2.\end{equation} \\ Moreover, the two right-hand inequalities in~\equ{C1C2} are satisfied by taking \[ \Theta_1<\min\Big\{\L_2-{C} , \L_1\Big\}={C} _+\] where we have used the triangular inequality $C_2=|\mathbf C-\mathbf C_1|\le |\mathbf C|+|\mathbf C_1|={C} +\Theta_1$. \subparagraph{\it b) non--collision conditions} We have to exclude possible encounters of the planets with the sun and with each other. Collisions of the inner planet with the sun are excluded by~\equ{B}. Indeed, using~\equ{C1lowerbound}, $$1-e_2^2=\frac{C_2^2}{\L_2^2}\ge \frac{3}{4}\frac{{C} ^2}{\L_2^2}$$ whence the minimum distance $a_2(1-e_2)$ of the inner planet from the sun is positive. In order to avoid planetary collisions, it is typical to ensure the following inequality: $$a_2(1+e_2)<{c^2}a_1(1-e_1)$$ with $0<c<1$.
A sufficient condition for it is \[ \Theta_1\ge \frac{2}{{c}}\sqrt{\a_+} \L_1={C} _-.\] Indeed, if this inequality is satisfied, one has $$a_2(1+e_2)<2a_2<\frac{a_1}{2}\frac{\Theta_1^2{c^2}}{\L_1^2}=\frac{a_1}{2}(1-e_1^2){c^2}<a_1(1-e_1){c^2}.$$ \subparagraph{The hyperbolic equilibrium \cite{pinzari18a}} By the formulae \equ{HnnPeri}--\equ{ovl f} with $n=2$, the $\bm\ell$--average of $\cH_{{\cal P} }$ is given by \begin{eqnarray*} \ovl\cH_{{\cal P} }=-\sum_{j=1}^2\frac{{\mu}^3_j{M}^2_j}{2 \L_j^2}+\mu\left(-\frac{m_{1}m_{2}}{a_{1}}+\ovl {f_{{\cal P}}^{12}}^\ppd\right)+\frac{\mu}{a_1}{\rm O}\left(\frac{a_2^2}{a_1^2}\right) \end{eqnarray*} with \begin{eqnarray*} {\ovl {f_{{\cal P} }^{12}}}^{(2)}&=&m_1 m_2 \frac{a_2^2}{4a_1^3}\frac{\L_{1}^3}{\Theta_1^5}\Big[ \frac{5}{2}(3\Theta_{2}^2-\Theta_1^2)\nonumber\\ &-&\frac{3}{2}\frac{4\Theta_{2}^2-\Theta_1^2}{\L_{2}^2}\Big(C^2+\Theta_1^2-2\Theta_{2}^2+2\sqrt{(C^2-\Theta_{2}^2)(\Theta_1^2-\Theta_{2}^2)}\cos{\vartheta_{2}}\Big)\nonumber\\ &+&\frac{3}{2}\frac{(\Theta_1^2-\Theta_{2}^2)(C^2-\Theta_{2}^2)}{\L_2^2}\sin^2{\vartheta_{2}}\Big]\,.\end{eqnarray*} We shall now prove that, restricting the domain \equ{DC} a little bit, the manifolds \equ{equilibrium} are hyperbolic for ${\ovl {f_{{\cal P} }^{12}}}^{(2)}$.
We fix the following domain \beqa{DU}{\cal D} _{\rm u}:={\cal A} _{\rm u}\times {\cal B} _{\rm u}\times{\Bbb T} ^3\end{eqnarray} with \beqa{assumptions0} {\cal A} _{\rm u}({C} )&:=&\Big\{(\L_1,\L_2,\Theta_1):\ (\L_1,\L_2)\in {\cal L}_{\rm u}({C} ),\quad \Theta_1\in{\cal G}_{\rm u}(\L_1,\L_2,{C} )\Big\}\nonumber\\ {\cal B} _{\rm u}({C} )&:=&\Big\{(\Theta_2,\vartheta_2): \ |\Theta_2|< \frac{{C} }{2}, |\vartheta_2|< \frac{{\pi} }{2}\Big\} \end{eqnarray} where \beqa{assumptions}{\cal L} _{\rm u}({C} )&:=&\Big\{\L=(\L_1,\L_2)\in {\cal L} : \ 5\L_2^2{C} -({C} +\frac{2}{ c}\sqrt{\a_+}\L_2)^2 (4 {C} +\frac{2}{ c}\sqrt{\a_+}\L_2)>0,\nonumber\\ && \hspace*{10em} \L_1>{C} , \L_2>\max\{{C} +\frac{2}{ c}\sqrt{\a_+}\L_1, 2{C} \}\Big\}\nonumber\\ {\cal G} _{\rm u}(\L_1,\L_2,{C} )&:=&\Big(\ovl{C} _-, \ovl{C} _+\Big) \end{eqnarray} where ${\cal L} $ is as in~\equ{L0} and, if ${C} ^{\star}(\L_2,{C} )$ is the unique positive root of the cubic polynomial ${C} _2\to 5\L_2^2{C} -({C} +{C} _2)^2 (4 {C} +{C} _2)$, then \beq{assumptions3}\ovl{C} _-:=\max\{\frac{2}{ c}\sqrt{\a_+}\L_1, {C} \}\qquad \ovl{C} _+:=\min\{ \L_1, {C} ^{\star}\}.\end{equation} Implicitly, we shall prove that \beq{assumptions7}\ovl C_-< \ovl C_+\ .\end{equation} We check that the coefficients in front of $\Theta_2^2$, $\vartheta_2^2$ in the Taylor expansion about $(\Theta_2, \vartheta_2)=(0, 0)$ have opposite signs in the domain \equ{DU}, so that the equilibrium manifold \equ{equilibrium} is hyperbolic.
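These second-order coefficients can be traced to the quantities $a:=5\L_2^2C-(C+\Theta_1)^2(4C+\Theta_1)$ and $b:=C-\Theta_1$; as a verification sketch (not part of the proof), the second-order Taylor coefficients of the bracket appearing in ${\ovl {f_{{\cal P} }^{12}}}^{(2)}$ can be computed symbolically:

```python
# Symbolic check that the degree-2 Taylor coefficients of the bracket F in
# ovl f^{12}_P at (Theta_2, vartheta_2) = (0, 0) equal 3a/(2 Lambda_2^2 C)
# and 3 C Theta_1^2 b / (2 Lambda_2^2), with a, b as defined above.
import sympy as sp

C, L2, T1, T2, v2 = sp.symbols('C Lambda2 Theta1 Theta2 vartheta2', positive=True)
C2_sq = (C**2 + T1**2 - 2*T2**2
         + 2*sp.sqrt((C**2 - T2**2)*(T1**2 - T2**2))*sp.cos(v2))
F = (sp.Rational(5, 2)*(3*T2**2 - T1**2)
     - sp.Rational(3, 2)*(4*T2**2 - T1**2)/L2**2 * C2_sq
     + sp.Rational(3, 2)*(T1**2 - T2**2)*(C**2 - T2**2)/L2**2 * sp.sin(v2)**2)

coeff_T2 = sp.diff(F, T2, 2).subs({T2: 0, v2: 0}) / 2   # coefficient of Theta_2^2
coeff_v2 = sp.diff(F, v2, 2).subs({T2: 0, v2: 0}) / 2   # coefficient of vartheta_2^2
a = 5*L2**2*C - (C + T1)**2*(4*C + T1)
b = C - T1
assert sp.simplify(coeff_T2 - 3*a/(2*L2**2*C)) == 0
assert sp.simplify(coeff_v2 - 3*C*T1**2*b/(2*L2**2)) == 0
```

Since $F$ is even in $\vartheta_2$, there is no mixed $\Theta_2\vartheta_2$ term, and the signs of the two coefficients are exactly the signs of $a$ and $b$.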
Indeed, the part of degree 2 in such expansion is \begin{eqnarray*} m_1m_2\frac{a_2^2}{a_1^3}\frac{1}{8}\frac{\L_{1}^3}{\L_2^2\Theta_1^5} \times \Big[\frac{3 a}{C}\Theta_2^2+3C\Theta_1^2{b} \vartheta_2^2+{\rm O}(\Theta_2^4+\vartheta_2^4)\Big] \end{eqnarray*} where \beq{a b} a:=5\L_2^2C -(C+\Theta_1)^2 (4C+\Theta_1)\quad {\rm and}\quad b:=C- \Theta_1.\end{equation} Both $\Theta_1\to a(\L_2,\Theta_1;{C} )$ and $\Theta_1\to b(\Theta_1;{C} )$, as functions of $\Theta_1$, decrease monotonically from a positive value (respectively, ${C} (5\L_2^2-4{C} ^2)$ and ${C} $) to $-\infty$ as $\Theta_1$ increases from $\Theta_1=0$ to $\Theta_1=+\infty$. The function $a(\L_2,\Theta_1;{C} )$ changes its sign for $\Theta_1$ equal to a suitable unique positive value ${C} ^{\star}(\L_2,{C} )$, while $b(\Theta_1;{C} )$ does it for $\Theta_1={C} $. We note that (i) inequality ${C} <\min\{{C} _+, {C} ^{\star}\}$ follows immediately from the assumptions~\equ{assumptions} (in particular, the last two) and (ii), more generally, that ${C} ^{\star}\leq{C} $ is equivalent to $\L_2\leq2{C} $. Since, for our purposes, we have to exclude ${C} ^{\star}={C} $ (otherwise, $a(\L_2,\Theta_1;{C} )$ and $b(\Theta_1;{C} )$ would always have the same sign, and no hyperbolicity would be possible), we distinguish two cases. \begin{itemize} \item[(a)] ${C} >\frac{2}{ c}\sqrt{\a_+} \L_1$ and ${C} +\frac{2}{ c}\sqrt{\a_+} \L_1<\L_2<2{C} $. In this case ${C} ^\star<{C} $. We show that no such ${\cal G} _{\rm u}$ can exist in this case. In fact, since ${C} ^{\star}<{C} $, in order that the interval $({C} ^{\star}, {C} )$ and the set ${\cal G}$ have a non-empty intersection, one should have, necessarily, ${C} _+=\sup{\cal G}>{C} ^{\star}$, hence, in particular, $\L_2-{C} >{C} ^{\star}$. Using the definition of ${C} ^{\star}$, this would imply $\L_2>2{C} $, which is a contradiction. \item[(b)] $\L_2>\max\{2{C} , {C} +\frac{2}{ c}\sqrt{\a_+} \L_1\}$.
In this case ${C} <{C} ^{\star}<\L_2-{C} $. In order that the interval $({C} , {C} ^{\star})$ and the set ${\cal G}$ have a non-empty intersection, we need \beq{conditions}{C} _-<{C} ^{\star}\qquad {\rm and}\qquad {C} _+>{C} \end{equation} and such intersection will be given by the interval ${\cal G} _{\rm u}$ as in~\equ{assumptions}. Note that the definition of $\ovl{C} _+$ does not include $\L_2-{C} $ in the brackets because, as noted, ${C} ^{\star}<\L_2-{C} $. But~\equ{conditions} are equivalent to \equ{assumptions}. \end{itemize} \paragraph{Proof of 3.} Here we prove that \begin{theorem}\label{coexistence of tori} Let $\a_+<\frac{1}{16}$. There exist universal numbers $1<\underline k<\ovl k$ such that, if $$ \a_-<\frac{\underline k^2}{\ovl k^2}\a_+\ ,\quad\frac{ \ovl k}{\sqrt{\a_+}}<\frac{{ {\mu} }_2}{{ {\mu} }_1}\sqrt{\frac{{ M}_2}{{ M}_1}}<\frac{\underline k}{\sqrt{\a_-}}$$ then ${\cal D} _{\rm s}\cap {\cal D} _{\rm u}^0$ is non--empty. The following values work: \beq{ass1} \underline k=\frac{1}{4}\sqrt{\frac{3}{10}(69+11\sqrt{33})}\sim 1.57\ ,\quad \ovl k=2\ .\end{equation} \end{theorem} \par\medskip\noindent{\bf Proof\ } The sets ${\cal D} _{\rm s}$ in \equ{DS} and ${\cal D} ^0_{\rm u}$ in \equ{equilibrium} are expressed with different sets of coordinates. To prove that ${\cal D} _{\rm s}$ and ${\cal D} ^0_{\rm u}$ have a non--empty intersection, we need to use the same set of coordinates for both. We choose to use the coordinates ${\cal P} $, so we rewrite ${\cal D} _{\rm s}$ in terms of ${\cal P} $. \begin{figure}[htp] \centering{ \includegraphics{plot1-eps-converted-to.pdf}} \caption{The blue curve is ${\cal C}$; the orange line has slope $\underline k$, the green one has slope $\ovl k$ (\textsc{Mathematica}).\label{coexistence1} } \end{figure} \begin{figure}[htp] \centering{ \includegraphics{plot2-eps-converted-to.pdf}} \caption{The blue strip corresponds to the set ${\cal L} _1$, the green one to ${\cal L} _2$ (\textsc{Mathematica}).
\label{coexistence} } \end{figure} \begin{figure}[htp] \centering{ \includegraphics{plot3-eps-converted-to.pdf}} \caption{${\cal L} _1$: the blue region; ${\cal L} _2$: the green region; ${\cal L} _3$: the violet region (\textsc{Mathematica}). \label{strip} } \end{figure} \noindent Using ${\cal P} $, the set ${\cal D} _{\rm s}$ becomes (at the expense of diminishing $\varepsilon$, if necessary) $${\cal D} _{\rm s}={\cal A} _{\rm s}\times {\cal B} _{\rm s}\times{\Bbb T} ^3$$ where, if \beq{assumptions1}{\cal L} _{\rm s}({C} ):=\Big\{\L=(\L_1,\L_2)\in {\cal L} _0:\ |\L_2-\L_1-{C} |<\varepsilon\Big\}\ ,\qquad {\cal G}_{\rm s}(\L_1):=\Big\{\Theta_1:\ 0<\L_1-\Theta_1<\varepsilon\Big\}\,,\ \end{equation} then \beq{assumptions22}{\cal A} _{\rm s}:=\Big\{(\L_1,\L_2,\Theta_1):\ (\L_1,\L_2)\in {\cal L} _{\rm s}\ ,\ \Theta_1\in {\cal G} _{\rm s}(\L_1)\Big\}\,,\ {\cal B} _{\rm s}:=\Big\{(\Theta_2,\vartheta_2):\ |(\Theta_2,\vartheta_2)|<\varepsilon\Big\}\ .\end{equation} All we have to do is to check that the intersection ${\cal A} _{\rm s}\cap{\cal A} _{\rm u}$ is non--empty.
\\ Recalling the definition of ${\cal A} _{\rm u}$ in \equ{assumptions0}--\equ{assumptions} and the definition of ${\cal A} _{\rm s}$ in \equ{assumptions1}--\equ{assumptions22}, asserting that ${\cal A} _{\rm s}\cap{\cal A} _{\rm u}\ne \emptyset$ is equivalent to asserting that $${\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\ne \emptyset$$ and $${\cal G}_{\rm s}(\L_1)\cap{\cal G}_{\rm u}(\L_1,\L_2, {C})\ne \emptyset\quad \forall\ (\L_1,\L_2)\in {\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\ .$$ It will be enough to check that \beq{assumptions5}{\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\cap{\cal L}_{{\rm su}}( {C})\ne \emptyset\end{equation} and \beq{assumptions4}{\cal G}_{\rm s}(\L_1)\cap{\cal G}_{\rm u}(\L_1,\L_2, {C})\ne \emptyset\quad \forall\ (\L_1,\L_2)\in {\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\cap{\cal L}_{{\rm su}}( {C})\ ,\end{equation} where, if $\ovl {C}_\pm$ are as in \equ{assumptions3}, ${\cal L} _{{\rm su}}$ is defined as \beq{LSU}{\cal L} _{{\rm su}}:=\Big\{(\L_1,\L_2):\ \ovl {C}_+=\L_1\Big\}\ .\end{equation} Note that \equ{assumptions4} is certainly satisfied provided \equ{assumptions5} is, since in fact, for $(\L_1,\L_2)\in {\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\cap{\cal L}_{{\rm su}}( {C})$, $${\cal G}_{\rm s}(\L_1)\cap{\cal G}_{\rm u}(\L_1,\L_2, {C})=\Big\{ \Theta_1:\ \max\{\ovl {C}_-,\L_1-\varepsilon\}< \Theta_1<\L_1\Big\}$$ which is well--defined by \equ{assumptions3}--\equ{assumptions7}. 
{\smallskip\noindent} On the other hand, in view of the definition of $\ovl {C}_+$ in \equ{assumptions3}, and of $ {C}^\star$ a few lines above, ${\cal L} _{{\rm su}}$ in \equ{LSU} is equivalently defined as \beq{L2}{\cal L} _{{\rm su}}=\Big\{(\L_1,\L_2):\ 5\L_2^2 {C} -( {C}+\L_1)^2 (4 {C}+\L_1)>0\Big\}\ .\end{equation} Therefore, in view of this definition and the definitions of ${\cal L} _{\rm s}$, ${\cal L} _{\rm u}$ in \equ{assumptions} and \equ{assumptions1}, one sees that the set on the left hand side in \equ{assumptions5} is determined by the inequalities \beqa{allinequalities} &&\L_-<\L_1<\L_+\nonumber\\ &&k_-\L_1\le \L_2\le k_+\L_1\nonumber\\ &&5\L_2^2 {C} -( {C}+2\sqrt{\a_+}\L_2)^2 (4 {C}+2\sqrt{\a_+}\L_2)>0\nonumber\\ && \L_1> {C}\nonumber\\ &&\L_2>\max\{ {C}+2\sqrt{\a_+}\L_1,\ 2 {C}\}\nonumber\\ && |\L_2-\L_1- {C}|<\varepsilon\nonumber\\ &&5\L_2^2 {C} -( {C}+\L_1)^2 (4 {C}+\L_1)>0\end{eqnarray} {\smallskip\noindent} We observe that no phase point\footnote{Inequalities $\L_1< {C}^\star$ (which is equivalent to \equ{L2}) and $ {C}^\star<\L_2- {C}$ (which is equivalent to $ \Theta_1> {C}$, in turn implied by the definition of ${\cal G} _{{\rm u}}$ above) imply $\L_2-\L_1- {C}>0$. } $(\L_1,\L_2)$ with $ \L_2-\L_1- {C}<0$ will ever satisfy \equ{allinequalities}, and that inequality $\L_2>2 {C}$ is implied by $\L_1> {C}$ and \equ{L2}.
Then, we divide such inequalities into three groups, so as to rewrite the set \equ{assumptions5} as the intersection of the sets \begin{eqnarray*} \widehat{\cal L} _1&:=&\Big\{(\L_1,\L_2):\ \L_-<\L_1<\L_+,\ \L_1> {C} ,\ \L_2>2 {C}\ ,\nonumber\\ && \max\{k_-\L_1,\ ( {C}+\L_1)\sqrt{\frac{4 {C}+\L_1}{5 {C}} }\}<\L_2\le k_+\L_1\Big\}\nonumber\\ \widehat{\cal L} _2&:=&\Big\{(\L_1,\L_2):\ 0<\L_2-\L_1- {C}<\varepsilon,\ \L_2> {C}+2\sqrt{\a_+}\L_1\ ,\ \L_1> {C}\Big\}\nonumber\\ \widehat{\cal L} _3&:=&\Big\{(\L_1,\L_2):\ 5\L_2^2 {C} -( {C}+2\sqrt{\a_+}\L_2)^2 (4 {C}+2\sqrt{\a_+}\L_2)>0\ ,\ \L_2>2 {C}\Big\} \end{eqnarray*} {\smallskip\noindent} We now aim to choose the parameters $\L_\pm$, $k_\pm$ and $\a_+$ so as to find a non--empty intersection of the sets above. {\smallskip\noindent} Let us denote as ${\cal C}$ the curve, in the $(\L_1,\L_2)$--plane, having equation \beq{curve}{\cal C}:\qquad \L_2=( {C}+\L_1)\sqrt{\frac{4 {C}+\L_1}{5 {C}} }\end{equation} {\smallskip\noindent} Let $$\L_2= k\L_1$$ be any straight line through the origin. The straight line intersecting ${\cal C}$ at the point $(\underline{\L_1},\underline{\L_2})=( {C},2 {C})$ has slope $\ovl k=2$, and it intersects the curve also at the higher point $$(\ovl{\L_1},\ovl{\L_2})=\Big(\frac{1}{2}(13+\sqrt{185}), (13+\sqrt{185})\Big) {C}\ .$$ Any other line with $k>\ovl k$ has a lower intersection $(\underline{\L_1}',\underline{\L_2}')$, with $\underline{\L_1}'< {C}$ and $\underline{\L_2}'<2 {C}$, and a higher intersection $(\ovl{\L_1}',\ovl{\L_2}')$ with $\ovl{\L_1}'>\ovl{\L_1}$ and $\ovl{\L_2}'>\ovl{\L_2}$. {\smallskip\noindent} The last straight line through the origin, in the plane $(\L_1,\L_2)$, intersecting ${\cal C}$ is the tangent line, and it is easy to compute (see below) that such a tangent line has slope $\underline k$ as in \equ{ass1} (Figure~\ref{coexistence1}).
We then conclude that, as soon as we choose $k_-<\underline k$, $k_+>\ovl k$, $\L_-<\underline{\L_1}$, $\L_+>\ovl{\L_1}$, we have the inclusion $$\widehat{\cal L} _1\supset{\cal L} _1:=\Big\{(\L_1,\L_2):\ ( {C}+\L_1)\sqrt{\frac{4 {C}+\L_1}{5 {C}} }<\L_2\le 2\L_1\Big\}\ .$$ {\smallskip\noindent} Let us now turn to $\widehat{\cal L} _2$. Since we are assuming $\a_+<\frac{1}{16}$, we conclude that the strip $${\cal L} _2:=\Big\{(\L_1,\L_2):\ 0<\L_2-\L_1- {C}<\varepsilon\ ,\ \L_1> {C}\Big\}$$ is entirely contained in the region $$\widetilde{\cal L} _2=\Big\{(\L_1,\L_2):\ \L_2> {C}+2\sqrt{\a_+}\L_1\ ,\ \L_1> {C}\Big\}$$ and this allows us to conclude $$\widehat{\cal L} _2={\cal L} _2\cap\widetilde{\cal L} _2={\cal L} _2\ .$$ {\smallskip\noindent} Since the sets ${\cal L} _1$ and ${\cal L} _2$ have a non--empty intersection, independently of $\a_+$ (see Figure~\ref{coexistence}), a fortiori, $\widehat{\cal L} _1$ and $\widehat{\cal L} _2$ have one: $$\widehat{\cal L} _1\cap\widehat{\cal L} _2\supset{\cal L} _1\cap{\cal L} _2\ne \emptyset\ .$$ Observe, in particular, that ${\cal L} _1\cap{\cal L} _2$ (hence, $\widehat{\cal L} _1\cap\widehat{\cal L} _2$) has non--empty intersection with any strip ${\Bbb R} \times \Big[2 {C},y\Big]$, with $y>2 {C}$ (see Figure~\ref{strip}).
{\smallskip\noindent} On the other hand, it is immediate to check that $\widehat{\cal L} _3$ includes the horizontal strip $${\cal L} _3:=\Big\{(\L_1,\L_2):\ 2 {C}<\L_2<\frac{ {C}}{2\sqrt{\a_+}}\ ,\ \L_1\in {\Bbb R} \Big\}\qquad 0<\a_+<\frac{1}{16}$$ and so we conclude $${\cal L}_{\rm s}( {C})\cap{\cal L}_{\rm u}( {C})\cap{\cal L}_{{\rm su}}( {C})=\widehat{\cal L} _1\cap\widehat{\cal L} _2\cap\widehat{\cal L} _3\supset {\cal L} _1\cap{\cal L} _2\cap{\cal L} _3\ne \emptyset$$ In order to complete the proof, it remains to prove that the tangent straight line to ${\cal C} $ through the origin has slope $\underline k$ as in \equ{ass1}.\\ We switch to the homogenized variables $$x:=\frac{\L_1}{{C}}\qquad y:=\frac{\L_2}{{C}}$$ so that the curve ${\cal C} $ in \equ{curve} becomes $$\widehat{\cal C} :\qquad y=(1+x)\sqrt{\frac{4+x}{5}}\ .$$ We look for a straight line through the origin $y=\underline k x$ with $\underline k>0$ which is tangent to $\widehat{\cal C} $ at some point $(a,b)$, with $a>0$.
{\smallskip\noindent} The intersections between $\widehat{\cal C} $ and any straight line through the origin $y= k x$ are governed by a complete cubic equation, given by \beq{1st}x^3+(6-5 k^2)x^2+9x+4=0\ .\end{equation} In order that such an equation has a double solution $x=a$ for $k=\underline k$, one needs that, when $k=\underline k$, it can be factorized as \beq{2nd}(x-a)^2(x-c)=0\ . \end{equation} Therefore, equating the respective coefficients of \equ{1st} and \equ{2nd}, one finds the equations $$\arr{\displaystyle -(c+2a)=6-5\underline k^2\\\\ \displaystyle 2ac+ a^2=9\\\\ \displaystyle -a^2c=4 }$$ The last two equations allow us to eliminate $c$ so as to obtain the equation for $a$ $$a^3-9 a-8=0$$ which has the following three roots: $$a_0=-1\ ,\qquad a_{\pm}=\frac{1\pm\sqrt{33}}{2}\ .$$ The only admissible (positive) value is then $$a=a_+=\frac{1+\sqrt{33}}{2}$$ and it provides the values $$c=\frac{-17+\sqrt{33}}{32}\ ,\qquad \underline k=\frac{1}{4}\sqrt{\frac{3}{10}(69+11\sqrt{33})}\ .\quad \square$$
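The algebra above can be double-checked with a computer algebra system; a short verification sketch (added as an aid, not part of the proof):

```python
# Check that x = (1 + sqrt(33))/2 is a double root of the cubic (1st) when
# k equals the tangency slope of (ass1), and that this slope is ~1.57.
import sympy as sp

x = sp.symbols('x')
k_low = sp.Rational(1, 4) * sp.sqrt(sp.Rational(3, 10) * (69 + 11*sp.sqrt(33)))
cubic = x**3 + (6 - 5*k_low**2)*x**2 + 9*x + 4
a_plus = (1 + sp.sqrt(33)) / 2

assert sp.simplify(a_plus**3 - 9*a_plus - 8) == 0           # a solves a^3 - 9a - 8 = 0
assert sp.simplify(cubic.subs(x, a_plus)) == 0              # a is a root of (1st) ...
assert sp.simplify(sp.diff(cubic, x).subs(x, a_plus)) == 0  # ... and a double root
assert abs(float(k_low) - 1.57) < 0.005                     # the value quoted in (ass1)
```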
Dornes was a Portuguese freguesia (civil parish) of the municipality of Ferreira do Zêzere, district of Santarém. History It was abolished on 28 January 2013, pursuant to a resolution of the Portuguese Assembly of the Republic promulgated on 16 January 2013, when it was merged with the freguesia of Paio Mendes to form the new freguesia of Nossa Senhora do Pranto. References External links Former freguesias of Ferreira do Zêzere
Q: Rotate matplotlib x-axis on dual axis plot

I'm trying to rotate the x-axis labels by 90 degrees, which typically works with the last line of the category_amts() function below. However, because this is a dual-axis visual, the approach is not working. How do you rotate axis labels on a dual-axis chart like this?

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'place': ['restaurant', 'gas station', 'movie theater', 'grocery store'],
                   'amount': [50, 65, 32, 70]})
df = df.sort_values('amount', ascending = False)
df['cumpercentage'] = df['amount'].cumsum() / df['amount'].sum()
x_pos = np.arange(len(df.index))

def category_amts():
    plt.rcParams['figure.figsize'] = (18,8)
    plt.rcParams["font.size"] = 12
    fig, ax = plt.subplots()
    ax.bar(x_pos, df['amount'], color = 'C0')
    ax2 = ax.twinx()
    ax2.plot(x_pos, df['cumpercentage'], color = 'C3', marker = 'D', ms = 7)
    ax.tick_params(axis = 'y', colors = 'C0')
    ax2.tick_params(axis = 'y', colors = 'C3')
    ax.xaxis.label.set_color('black')
    ax2.xaxis.label.set_color('black')
    ax.grid(False)
    ax2.grid(False)
    plt.title('Transactions by Merchant Category')
    ax.set_xlabel('Merchant Category')
    ax.set_ylabel('Transaction Count')
    ax2.set_ylabel('Cummulative % of Transaction Amounts', rotation = 270, labelpad = 15)
    plt.xticks(x_pos, df['place'], rotation = 90)

category_amts()

A: Per @BigBen's comment, I needed to move where I was calling plt.xticks.
See the reproducible solution below:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'place': ['restaurant', 'gas station', 'movie theater', 'grocery store'],
                   'amount': [50, 65, 32, 70]})
df = df.sort_values('amount', ascending = False)
df['cumpercentage'] = df['amount'].cumsum() / df['amount'].sum()
x_pos = np.arange(len(df.index))

def category_amts():
    plt.rcParams['figure.figsize'] = (18,8)
    plt.rcParams["font.size"] = 12
    fig, ax = plt.subplots()
    plt.xticks(x_pos, df['place'], rotation=90)
    ax.bar(x_pos, df['amount'], color = 'C0')
    ax2 = ax.twinx()
    ax2.plot(x_pos, df['cumpercentage'], color = 'C3', marker = 'D', ms = 7)
    ax.tick_params(axis = 'y', colors = 'C0')
    ax2.tick_params(axis = 'y', colors = 'C3')
    ax.xaxis.label.set_color('black')
    ax2.xaxis.label.set_color('black')
    ax.grid(False)
    ax2.grid(False)
    plt.title('Transactions by Merchant Category')
    ax.set_xlabel('Merchant Category')
    ax.set_ylabel('Transaction Count')
    ax2.set_ylabel('Cummulative % of Transaction Amounts', rotation = 270, labelpad = 15)

category_amts()
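A: An alternative that avoids the pyplot-state issue altogether (my suggestion, not from the original thread): set the ticks through the primary Axes object, so the rotation is attached to ax no matter which axes pyplot currently considers active.

```python
# Sketch: rotate the shared x tick labels via the Axes API instead of
# plt.xticks. Same toy data as in the question.
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({'place': ['restaurant', 'gas station',
                             'movie theater', 'grocery store'],
                   'amount': [50, 65, 32, 70]})
df = df.sort_values('amount', ascending=False)
df['cumpercentage'] = df['amount'].cumsum() / df['amount'].sum()
x_pos = np.arange(len(df))

fig, ax = plt.subplots()
ax.bar(x_pos, df['amount'], color='C0')
ax2 = ax.twinx()
ax2.plot(x_pos, df['cumpercentage'], color='C3', marker='D')
ax.set_xticks(x_pos)
ax.set_xticklabels(df['place'], rotation=90)  # rotation bound to ax itself
```

With twinx, ax and ax2 share the x-axis, so setting the ticks on ax is enough; this also works inside a function without worrying about where pyplot calls are placed.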
\section{Introduction} The theory of geometric structures on manifolds was introduced by Cartan and Ehresmann in the 1920s, following the ideas given by Klein in his Erlangen program. This theory became popular in the 1980s, when Thurston used it in the statement of his Geometrization Conjecture. Since then, many people contributed important results, see for example \cite{BabaGrafting1,BabaGrafting2,BauesTorus,ChoiGoldmanRP2,ChoiGoldmanClassification,GalloKapovichMarden, GoldmanFuchsianHol,GoldmanConvex}. Nowadays, geometric structures on manifols are also important in higher Teichm\"uller theo\-ry, a research area that arose from the work of Goldman \cite{GoldmanConvex}, Choi--Goldman \cite{ChoiGoldmanRP2}, Hitchin~\cite{liegroupsteichmuller}, Labourie~\cite{AnosovFlowsLabourie}, Fock--Goncharov \cite{FockGoncharov}. They studied some connected components of the character varieties of surface groups in higher rank Lie groups which share many properties with Teichm\"uller spaces. They are now called Hitchin components or higher Teichm\"uller spaces. The first works which related geometric structures and higher Teichm\"uller theory are Choi--Goldman \cite{ChoiGoldmanRP2} and Guichard--Wienhard \cite{convexfoliatedprojective}, showing how the low-rank Hitchin components can be used as parameter spaces of special geometric structures on closed manifolds. These first results were then generalized by Guichard--Wienhard \cite{GWDomainsofDiscont} and Kapovich--Leeb--Porti \cite{KLPAnosov1}, who show that Anosov representations can often be used to construct geometric structures on closed manifolds. Higgs bundles are an important tool in higher Teichm\"uller theory because they can be used to describe the topology of the character varieties (see for example Hitchin \cite{selfduality,liegroupsteichmuller}, Alessandrini--Collier \cite{SO23LabourieConj}). Anyway, they were initially believed to give very little information on the geometry of a single representation. 
This point of view is changing, since we have now many examples where the Higgs bundle can be used to give interesting information on the geometric structures associated with a certain representation of a surface group. The main purpose of this survey paper is to explain these constructions. The first ones were presented by Baraglia in his Ph.D.~Thesis~\cite{BaragliaThesis}, more recent ones are in Alessandrini--Li \cite{AdSpaper,NilpotentCone,ProjectiveStructuresHB}, and Collier--Tholozan--Toulisse \cite{CollierTholozanToulisse}. The main idea behind these constructions is that a geometric structure corresponds to a section of a flat bundle which is transverse to the parallel foliation. The holomorphic structure of the Higgs bundle helps to construct sections, and the parallel foliation can be described by solving Hitchin's equations. I will initially describe the fundamental notions of the theory of geometric structures on manifolds: the notion of geometry in the sense of Klein, geometric manifolds, the relationship with the theory of domains of discontinuity for Anosov representations, the relationship with representations of fundamental groups of manifolds, and the deformation spaces of geometric structures on a fixed topological manifold, see Section \ref{sec:geometric structures}. Then I will give an introduction to character varieties and their relationship with the moduli space of flat bundles. I will also introduce the subspaces of the character varieties that are most important in higher Teichm\"uller theory, see Section~\ref{sec:character varieties}. After this, everything is ready to explain the relationship between geometric structures and flat bundles, via a tool called the graph of a geometric structure, see Section \ref{sec:graph}. We will then enter in the main topic of the mini-course, using Higgs bundles and solutions of Hitchin's equations to describe flat bundles explicitly and construct geometric structures. 
In Section~\ref{sec:higgs bundles} three simple examples are given where this method works and allows us to construct hyperbolic structures, complex projective structures and convex real projective structures on surfaces. Higgs bundles can only describe flat bundles on surfaces, but we also want to describe flat bundles on higher-dimensional manifolds. See Section \ref{sec:higher dimension} for an explanation of how flat bundles on manifolds of different dimension can be related, and an exposition of interesting open problems in the theory of geometric manifolds that are related to this issue. We will finally see how geometric structures on higher-dimensional manifolds can be constructed. As a warm-up, in Section \ref{sec:circle bundles} we consider the case of $3$-dimensional manifolds, and we see how to construct the convex foliated real projective structures and the anti-de Sitter structures on circle bundles over surfaces. In the last part, in Section \ref{sec:higher dimensions}, we will see how the technique works in the case of manifolds of higher dimension. In this final case, the technical details are more involved and will be mainly left out. This survey paper is based on the lecture notes for the mini-course ``Higgs bundles and geometric structures on manifolds'' that I gave at the University of Illinois at Chicago during the program ``Workshop on the Geometry and Physics of Higgs bundles II'', November 11--12, 2017. The mini-course was targeted at graduate students and young post-docs with an interest in Gauge Theory and Higgs bundles, and this survey paper is addressed to the same audience.
\section{Geometric structures on manifolds} \label{sec:geometric structures} This section will be an introduction to the theory of geometric structures on manifolds, their developing maps and holonomy representations. For more details about this theory, see Thurston's book \cite{ThurstonBook} or Goldman's notes \cite{GoldmanGSAVOR}. \subsection{Geometries} The theory of geometric structures on manifolds traces its origins back to Felix Klein who, in his Erlangen program (1872) discussed what is geometry. Klein's idea is that geometry is the study of the properties of a space that are invariant under the action of a certain group of symmetries. The main examples he had in mind were the Euclidean geometry, where the space is ${\mathbb{R}}^n$ and the group is $\Isom\big({\mathbb{R}}^n\big)$, and the affine geometry, where the space is ${\mathbb{R}}^n$ and the group is $\Aff\big({\mathbb{R}}^n\big)$. These geometries study exactly the same space, but they focus on very different properties. Euclidean geometry deals with lengths, angles, and circles, the notions that are invariant under the group of isometries. These notions make no sense in affine geometry, because they are not preserved by the affine group. Affine geometry, instead, deals with ratios of lengths, parallelism and ellipses. Klein emphasizes that when studying geometry, the symmetry group is as important as the space. Let's now give a definition in modern terms. \begin{Definition}A \defin{geometry} is a pair $(X,{\mathsf{G}})$, where ${\mathsf{G}}$, the \defin{symmetry group} is a Lie group and~$X$, the \defin{model space}, is a manifold endowed with a transitive and effective action of ${\mathsf{G}}$. Recall that an action is \defin{effective} if every $g\in {\mathsf{G}} {\setminus} \{e\}$ acts non-trivially on~$X$. 
If $U \subset X$ is an open subset, we will say that a map $f\colon U\rightarrow X$ is \defin{locally in ${\mathsf{G}}$} if for every connected component $C$ of $U$, there exists $g\in {\mathsf{G}}$ such that $f|_C = g|_C$. For $x\in X$, the \defin{isotropy group} of $x$ in ${\mathsf{G}}$ is the subgroup \begin{gather*}{\mathsf{H}} = \Stab_{\mathsf{G}}(x) = \{h\in {\mathsf{G}} \,|\, h(x) = x\}.\end{gather*} \end{Definition} The isotropy group ${\mathsf{H}}$ is a closed subgroup of ${\mathsf{G}}$. Since the action is transitive, the conjugacy class of the isotropy group does not depend on the choice of the point~$x$. As an equivalent definition, a geometry can be defined as a pair $({\mathsf{G}},{\mathsf{H}})$, where ${\mathsf{G}}$ is a Lie group and ${\mathsf{H}}$ is a closed subgroup of~${\mathsf{G}}$, up to conjugation. The model space can then be reconstructed as the quotient $X={\mathsf{G}}/{\mathsf{H}}$. From this description, we see that $X$ inherits from ${\mathsf{G}}$ a structure of real analytic manifold such that the action of~${\mathsf{G}}$ on~$X$ is real analytic. \begin{Example}Classical examples of geometries are the \defin{Euclidean geometry} $\big({\mathbb{R}}^n, \Isom\big({\mathbb{R}}^n\big)\big)$, the \defin{affine geometry} $\big({\mathbb{R}}^n, \Aff\big({\mathbb{R}}^n\big)\big)$ and the \defin{real projective geometry} $\big({\mathbb{RP}}^n, {\mathsf{PGL}}(n+1,{\mathbb{R}})\big)$. There are many other examples which we will organize in families. \begin{enumerate}\itemsep=0pt \item A geometry is said to be \defin{of Riemannian type} if ${\mathsf{G}}$ acts on $X$ preserving a Riemannian metric. This happens if and only if the isotropy group is compact. 
Examples are the \defin{isotropic geometries} (the Euclidean geometry $\big({\mathbb{R}}^n, \Isom\big({\mathbb{R}}^n\big)\big)$, the \defin{hyperbolic geometry} $\big({\mathbb{H}}^n, {\mathsf{PO}}(1,n)\big)$ and the \defin{spherical geometry} $\big({\mathbb{S}}^n, {\mathsf{PO}}(n+1)\big)$), the geometries of symmetric spaces and the geometries of Lie groups ($({\mathsf{G}},{\mathsf{G}})$, where~$G$ acts on itself on the left). Thurston's eight $3$-dimensional geometries~\cite{ThurstonBook} are in this family. \item A geometry is said to be \defin{of pseudo-Riemannian type} if ${\mathsf{G}}$ acts on $X$ preserving a pseudo-Riemannian metric. Examples are many geometries coming from the theory of relativity, such as the geometry of Minkowski space $\big({\mathbb{R}}^n, {\mathsf{O}}(1,n-1)\ltimes{\mathbb{R}}^n\big)$, of the anti-de Sitter space $\big({\rm AdS}^n, {\mathsf{PO}}(2,n-1)\big)$, and of the de Sitter space $\big({\rm dS}^n, {\mathsf{PO}}(1,n)\big)$. \item A geometry is said to be \defin{of parabolic type} if the isotropy group ${\mathsf{H}}$ is a parabolic subgroup of ${\mathsf{G}}$. Examples are the real projective geometry $\big({\mathbb{RP}}^n, {\mathsf{PGL}}(n+1,{\mathbb{R}})\big)$, the \defin{complex projective geometry} $\big({\mathbb{CP}}^n, {\mathsf{PGL}}(n+1,{\mathbb{C}})\big)$, the \defin{conformal geometry} $\big({\mathbb{S}}^n, {\mathsf{PO}}(1,n+1)\big)$, the geometry of Grassmannians and of Flag manifolds. \end{enumerate} \end{Example} \begin{Remark}The notation $(X,{\mathsf{G}})$ for a geometry is, in most practical cases, too cumbersome, hence we will usually denote the geometry just by~$X$, when this does not result in ambiguities. For example, we will often denote the real projective geometry by ${\mathbb{RP}}^n$, instead of $\big({\mathbb{RP}}^n, {\mathsf{PGL}}(n+1,{\mathbb{R}})\big)$. Similarly for ${\mathbb{CP}}^n$, ${\mathbb{H}}^n$, ${\rm AdS}^n$, these symbols will denote both the model space and the geometry.
\end{Remark} \subsection{Geometric manifolds} Every geometry can be used as a local model for geometric structures on manifolds. This idea was introduced by Cartan and Ehresmann in the 1920s, and it was made popular by Thurston around 1980, when he used it in the statement of his geometrization conjecture (now Perelman's theorem). \begin{Definition}Given a geometry $(X,G)$ and a manifold $M$ with $\dim(M)=\dim(X)$, an \defin{$(X,G)$-structure} on $M$ is a maximal atlas $\mathcal{U} = \{(U_i,\varphi_i)\}$ where \begin{enumerate}\itemsep=0pt \item[1)] $\{U_i\}$ is an open cover of $M$, \item[2)] the functions \begin{gather*}\varphi_i\colon \ U_i \rightarrow X\end{gather*} are homeomorphisms with the image, which is an open subset of $X$, \item[3)] the transition functions \begin{gather*}\varphi_i\circ \varphi_j^{-1}\colon \ \varphi_j(U_i \cap U_j) \rightarrow \varphi_i(U_i \cap U_j) \end{gather*} are locally in ${\mathsf{G}}$. \end{enumerate} An \defin{$(X,{\mathsf{G}})$-manifold} is a manifold endowed with an $(X,{\mathsf{G}})$-structure. \end{Definition} An $(X,{\mathsf{G}})$-manifold is a real analytic manifold, because the transition functions of the atlas are real analytic. Moreover, on an $(X,{\mathsf{G}})$-manifold $M$, all the local properties of $X$ that are preserved by ${\mathsf{G}}$ are given to~$M$ by the atlas. For example, if $(X,{\mathsf{G}})$ is of (pseudo-)Riemannian type, every $(X,{\mathsf{G}})$-manifold inherits a (pseudo-)Riemannian metric from~$X$. Similarly, every manifold with a real or complex projective structure has a well defined notion of projective line: some real or complex $1$-dimensional submanifold that is mapped to a projective line by any chart. Moreover, given $4$ points on such a projective line, it is possible to compute their cross-ratio. \begin{Example} \label{exa:geometric manifolds}\quad \begin{enumerate}\itemsep=0pt \item For every geometry $(X,G)$, take $M=X$. 
The identity map is a global chart for the tautological $(X,{\mathsf{G}})$-structure on~$M$. Slightly more generally, if $M \subset X$ is an open subset, again the identity map is a global chart for an $(X,{\mathsf{G}})$-structure on~$M$. \item Consider the Euclidean geometry: $\big({\mathbb{R}}^n, \Isom\big({\mathbb{R}}^n\big)\big)$. Let $M$ be the torus \begin{gather*}M = T^n = {\mathbb{R}}^n/{\mathbb{Z}}^n.\end{gather*} We can construct an atlas using the covering ${\mathbb{R}}^n\rightarrow M$: every well covered open set is one of the $U_i$s, and every section of the covering over such a $U_i$ is one of the $\varphi_i$s. \item Consider the hyperbolic geometry: $\big({\mathbb{H}}^2, {\mathsf{PO}}(1,2)\big)$. Let $M$ be a closed surface of genus $g\geq 2$. Recall that such a surface can be obtained by gluing the sides of a $(4g)$-gon along the standard pattern $a,b,a^{-1},b^{-1}, c,d, c^{-1},d^{-1}, \dots$, obtaining a cell complex with $1$ vertex, $2g$ edges and $1$ face. To put an ${\mathbb{H}}^2$-structure on~$M$, we first need to construct a regular $(4g)$-gon in ${\mathbb{H}}^2$ whose internal angles are all equal to $\frac{2\pi}{4g}$: since all $4g$ vertices of the polygon are identified to the single vertex of the cell complex, the internal angles must add up to $2\pi$. When the edges of the polygon are glued with the standard pattern, they give a surface of genus $g$ with an ${\mathbb{H}}^2$-structure. \item Consider the real projective geometry: $\big({\mathbb{RP}}^2, {\mathsf{PGL}}(3,{\mathbb{R}})\big)$. The subgroup ${\mathsf{PO}}(1,2) < {\mathsf{PGL}}(3,{\mathbb{R}})$ acts on ${\mathbb{RP}}^2$ preserving a disc; this is the \defin{Klein model} of the hyperbolic plane: there is a ${\mathsf{PO}}(1,2)$-equivariant map $K\colon {\mathbb{H}}^2 \rightarrow {\mathbb{RP}}^2$ whose image is this disc. The ${\mathbb{H}}^2$-structure on $M$ constructed in point (3) induces an ${\mathbb{RP}}^2$-structure by composing the charts with the map $K$. \item Consider the complex projective geometry: $\big({\mathbb{CP}}^1, {\mathsf{PGL}}(2,{\mathbb{C}})\big)$.
The subgroup ${\mathsf{PSL}}(2,{\mathbb{R}}) < {\mathsf{PGL}}(2,{\mathbb{C}})$ acts on ${\mathbb{CP}}^1$ preserving the upper half plane; this is the \defin{Poincar\'e model} of the hyperbolic plane: the connected component ${\mathsf{PO}}_0(2,1)$ of ${\mathsf{PO}}(2,1)$ is isomorphic to ${\mathsf{PSL}}(2,{\mathbb{R}})$ in such a way that there is a ${\mathsf{PO}}_0(2,1)$-equivariant map $P\colon {\mathbb{H}}^2 \rightarrow {\mathbb{CP}}^1$ with image the upper half plane. The ${\mathbb{H}}^2$-structure on $M$ constructed in point (3) induces a~${\mathbb{CP}}^1$-structure by composing the charts with the map $P$. \end{enumerate} \end{Example} \subsection{Morphisms} \label{subsec:morphisms} \begin{Definition}Given two $(X,{\mathsf{G}})$-manifolds $M$, $N$, a map $f\colon M\rightarrow N$ is an \defin{$(X,{\mathsf{G}})$-map} if for every $m\in M$, there exist charts $(U,\varphi)$ for $M$ around $m$ and $(V,\psi)$ for $N$ around $f(m)$ such that $f(U) \subset V$ and the composition \begin{gather*}\psi \circ f \circ \varphi^{-1} \colon \ \varphi(U) \rightarrow \psi(V)\end{gather*} is locally in ${\mathsf{G}}$. \end{Definition} The $(X,{\mathsf{G}})$-maps are always real analytic local diffeomorphisms. The composition of $(X,{\mathsf{G}})$-maps is an $(X,{\mathsf{G}})$-map, hence we can form a category having the $(X,{\mathsf{G}})$-manifolds as objects and the $(X,{\mathsf{G}})$-maps as arrows. \begin{Definition} An \defin{$(X,{\mathsf{G}})$-isomorphism} is a diffeomorphism which is also an $(X,{\mathsf{G}})$-map. An \defin{$(X,{\mathsf{G}})$-automorphism} is an isomorphism between an $(X,{\mathsf{G}})$-manifold and itself. \end{Definition} Notice that the inverse of an $(X,{\mathsf{G}})$-isomorphism is automatically an $(X,{\mathsf{G}})$-map.
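The projective structures of points (4) and (5) above endow the surface with well defined projective lines and cross-ratios, as discussed after the definition of $(X,{\mathsf{G}})$-structures. The key fact is that the cross-ratio is a ${\mathsf{PGL}}(2,{\mathbb{C}})$-invariant of $4$ points; a quick numerical sanity check of this invariance, in an affine chart of ${\mathbb{CP}}^1$ (a minimal sketch; all function names are ours):

```python
def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio [z1, z2; z3, z4] of four points in an affine chart of CP^1."""
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

def mobius(a, b, c, d, z):
    """Action of the class of [[a, b], [c, d]] in PGL(2, C) on the affine chart."""
    return (a * z + b) / (c * z + d)

# Four points and an element of PGL(2, C) with ad - bc != 0.
pts = [0.3 + 0.1j, 1.7 - 0.4j, -2.0 + 0.9j, 0.5 + 2.2j]
a, b, c, d = 2 + 1j, -1 + 0j, 0 + 3j, 1 - 1j

before = cross_ratio(*pts)
after = cross_ratio(*(mobius(a, b, c, d, z) for z in pts))
assert abs(before - after) < 1e-9  # the cross-ratio is a PGL(2, C)-invariant
```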
If $M$ is an $(X,{\mathsf{G}})$-manifold, we will denote its group of automorphisms by \begin{gather*}\Aut_{(X,{\mathsf{G}})}(M) = \{f\colon M\rightarrow M \,|\, f \text{ is an } (X,{\mathsf{G}})\text{-isomorphism}\}.\end{gather*} These groups can sometimes be understood: \begin{Proposition} \begin{gather*}\Aut_{(X,{\mathsf{G}})}(X) = {\mathsf{G}}.\end{gather*} More generally, if $U\subset X$ is open, then \begin{gather*}\Aut_{(X,{\mathsf{G}})}(U) = \{g\in {\mathsf{G}} \,|\, g(U) = U\}. \end{gather*} \end{Proposition} \subsection{Kleinian geometric structures} The following proposition gives a tool that can be used to construct many interesting manifolds carrying geometric structures. \begin{Proposition} \label{prop:geometry of coverings} Let $M$ be an $(X,{\mathsf{G}})$-manifold, and $\Gamma < \Aut_{(X,{\mathsf{G}})}(M)$ be a subgroup acting properly discontinuously and freely on $M$. Then $M/\Gamma$ is a manifold, and there exists a unique $(X,{\mathsf{G}})$-structure on $M/\Gamma$ such that the quotient $M \rightarrow M/\Gamma$ is an $(X,{\mathsf{G}})$-map. Conversely, let $M$ be an $(X,{\mathsf{G}})$-manifold, and let $\pi\colon \bar{M} \rightarrow M$ be a covering map. Then there exists a unique $(X,{\mathsf{G}})$-structure on $\bar{M}$ such that $\pi$ is an $(X,{\mathsf{G}})$-map. \end{Proposition} \begin{Definition}Let $\Gamma < {\mathsf{G}}$ be a discrete subgroup. A~\defin{domain of discontinuity} for $\Gamma$ is an open subset $\Omega \subset X$ that is $\Gamma$-invariant and such that $\Gamma$ acts properly discontinuously on $\Omega$. \end{Definition} By applying Proposition~\ref{prop:geometry of coverings}, if $\Omega$ is a domain of discontinuity for $\Gamma$ and $\Gamma$ acts freely on~$\Omega$ (which is always true if $\Gamma$ is torsion-free), then the quotient $\Omega / \Gamma$ is a manifold with an $(X,{\mathsf{G}})$-structure. 
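The hyperbolic surfaces constructed in point (3) of Example~\ref{exa:geometric manifolds} are of this kind: the side pairings of the $(4g)$-gon generate a discrete, torsion-free subgroup $\Gamma < {\mathsf{PO}}(1,2)$ with domain of discontinuity $\Omega = {\mathbb{H}}^2$ and quotient the surface. The polygon required there can be found numerically; a sketch, assuming the standard hyperbolic right-triangle identity $\cosh r = \cot(\pi/p)\cot(\theta/2)$ relating the circumradius $r$ of a regular $p$-gon to its interior angle $\theta$:

```python
import math

def circumradius(p, theta):
    """Circumradius of the regular hyperbolic p-gon with interior angle theta.

    Uses cosh(r) = cot(pi/p) * cot(theta/2), obtained by cutting the polygon
    into 2p right triangles with angles pi/p, theta/2 and pi/2."""
    return math.acosh(1.0 / (math.tan(math.pi / p) * math.tan(theta / 2)))

def interior_angle(p, r):
    """Inverse: interior angle of the regular hyperbolic p-gon of circumradius r."""
    return 2 * math.atan(1.0 / (math.cosh(r) * math.tan(math.pi / p)))

g = 2                      # genus
p = 4 * g                  # an octagon for genus 2
theta = 2 * math.pi / p    # all p vertices are glued together: angles sum to 2*pi

r = circumradius(p, theta)
assert abs(p * interior_angle(p, r) - 2 * math.pi) < 1e-12
# A Euclidean octagon would have interior angles 3*pi/4; only in H^2 can a
# regular octagon have the much smaller angle pi/4 needed for the gluing.
assert theta < (p - 2) * math.pi / p
```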
\begin{Definition}The geometric structures of the form $\Omega/\Gamma$ described above are called \defin{Kleinian $(X,{\mathsf{G}})$-structures}. \end{Definition} The theory of Anosov representations, introduced by Labourie \cite{AnosovFlowsLabourie} and Guichard--Wien\-hard~\cite{GWDomainsofDiscont}, gives methods for constructing interesting Kleinian geometric structures. We will not give the complete definition of Anosov representations here; we will only recall some of their properties. Let ${\mathsf{G}}$ be a semi-simple Lie group and $\Gamma$ be a Gromov-hyperbolic group. Anosov representations $\rho\colon \Gamma \rightarrow {\mathsf{G}}$ are defined with reference to a parabolic subgroup ${\mathsf{P}} \subset {\mathsf{G}}$; they are then called ${\mathsf{P}}$-Anosov representations. One property of a~${\mathsf{P}}$-Anosov representation $\rho$ is the existence of a $\rho$-equivariant map \begin{gather*}\xi\colon \ \partial_\infty \Gamma \rightarrow {\mathsf{G}}/{\mathsf{P}}\end{gather*} which must, by definition, satisfy some special properties. Here with $\partial_\infty \Gamma$ we denote the boundary at infinity of $\Gamma$, defined by Gromov \cite{Gromov} for hyperbolic groups. The ${\mathsf{P}}$-Anosov representations form an open subset of the character variety (see Section~\ref{sec:character varieties}): \begin{gather*}{\mathsf{P}}\text{-}\Anosov(\Gamma,{\mathsf{G}}) \subset {\mathcal X}(\Gamma,{\mathsf{G}}).\end{gather*} When $(X,{\mathsf{G}})$ is a geometry of parabolic type, whose isotropy group might be different from~${\mathsf{P}}$, there is a very rich theory giving sufficient conditions for a~${\mathsf{P}}$-Anosov representation to admit a~domain of discontinuity $\Omega \subset X$ on which the action is, in the best cases, co-compact. The domain $\Omega$ is defined using the map $\xi$. This theory was founded by Guichard--Wienhard~\cite{GWDomainsofDiscont}, and was improved and extended by Kapovich--Leeb--Porti \cite{KLPAnosov1}.
For an example of how this works, see Section~\ref{subsec:dod}. In this way, it is possible to construct many examples of Kleinian geometric structures on closed manifolds for geometries of parabolic type. One limitation of this method is that even if we construct an $(X,{\mathsf{G}})$-manifold $M = \Omega/\rho(\Gamma)$, we have, a priori, no control on the topology of $M$. Other techniques are needed to get a good understanding of these geometric manifolds; see for example Theorem~\ref{thm:dod}. \subsection{Developing maps and holonomies} The Kleinian geometric structures are the easiest to understand, but not all geometric structures are Kleinian. To work with general geometric structures, we introduce here the tools of developing maps and holonomy representations. \begin{Lemma} Let $N$ be a simply-connected $(X,{\mathsf{G}})$-manifold. Then $N$ admits a global $(X,{\mathsf{G}})$-map \begin{gather*}D\colon \ N \rightarrow X\end{gather*} unique up to post-composition by an element of ${\mathsf{G}}$. \end{Lemma} \begin{proof} Choose a point $n_0\in N$, and a chart $(U_0,\varphi_0)$ around $n_0$. We will extend $\varphi_0\colon U_0\rightarrow X$ to a map $D$ defined on $N$ such that $D|_{U_0} = \varphi_0$. For every point $n\in N$, we will define the value $D(n)\in X$ in the following way. Choose a path $\gamma\colon [0,1]\rightarrow N$ such that $\gamma(0)=n_0$, $\gamma(1)=n$. We can find charts $(U_1,\varphi_1), \dots, (U_k,\varphi_k)$ and points $t_0, \dots, t_k, s_0, \dots, s_k \in [0,1]$ such that \begin{enumerate}\itemsep=0pt \item[1)] $0=t_0 < t_1 < s_0 < t_2 < s_1 < t_3 < \dots < s_{k-2} < t_k < s_{k-1} < s_k = 1$, \item[2)] $\gamma([t_0,s_0)) \subset U_0$, \item[3)] $\gamma((t_i,s_i)) \subset U_i$, \item[4)] $\gamma((t_k,s_k]) \subset U_k$. \end{enumerate} The path $\gamma((t_i,s_{i-1}))$ is contained in a connected component $C_i$ of $U_{i-1} \cap U_i$.
There exists a~$g_i\in {\mathsf{G}}$ such that $\varphi_{i-1}\circ \varphi_i^{-1}|_{\varphi_i(C_i)} = g_i|_{\varphi_i(C_i)}$. We define \begin{gather*}D(n) = g_1\circ g_2\circ \dots \circ g_k \circ \varphi_{k}(n).\end{gather*} Now it is necessary to show that the value $D(n)$ is independent of the choice of the charts $(U_i,\varphi_i)$, with $i\geq 1$. This is an application of the principle of unique analytic continuation. Then, we need to show that $D(n)$ does not depend on the choice of the curve $\gamma$. This comes from the fact that~$N$ is simply-connected, hence every other curve $\gamma'$ is homotopic to~$\gamma$ relative to the endpoints. This defines a global $(X,{\mathsf{G}})$-map $D$, which depends only on the choice of~$(U_0,\varphi_0)$. The principle of unique analytic continuation gives the uniqueness of~$D$ up to an element of~${\mathsf{G}}$. \end{proof} Given an $(X,{\mathsf{G}})$-manifold $M$, we denote its universal covering by $\widetilde{M}$. By Proposition \ref{prop:geometry of coverings}, $\widetilde{M}$~inherits an $(X,{\mathsf{G}})$-structure from $M$. Since $\widetilde{M}$ is simply-connected, there is a global $(X,{\mathsf{G}})$-map \begin{gather*}D\colon \ \widetilde{M} \rightarrow X \end{gather*} unique up to post-composition by an element of ${\mathsf{G}}$. \begin{Definition}The map $D$ is called the \defin{developing map} of the $(X,{\mathsf{G}})$-manifold $M$. \end{Definition} The fundamental group $\pi_1(M)$ acts on $\widetilde{M}$ by deck transformations. This action preserves the $(X,{\mathsf{G}})$-structure on $\widetilde{M}$, hence we have the inclusion $\pi_1(M)<\Aut_{(X,{\mathsf{G}})}(\widetilde{M})$. The composition of an element $\gamma\in\pi_1(M)$ with the developing map~$D$ is again a developing map, hence there exists an element $h(\gamma)\in {\mathsf{G}}$ such that \begin{gather*} D \circ \gamma = h(\gamma) \circ D.
\end{gather*} The map $h\colon \pi_1(M)\rightarrow {\mathsf{G}}$ is a group homomorphism, and the formula above tells us that the developing map is $h$-equivariant. \begin{Definition}The group homomorphism $h$ is called the \defin{holonomy representation} of the $(X,{\mathsf{G}})$-manifold $M$. The pair $(D,h)$ is called the \defin{developing pair} of the $(X,{\mathsf{G}})$-manifold~$M$. \end{Definition} \begin{Example}In the case of a Kleinian geometric structure $M=\Omega/\Gamma$, for some $\Omega \subset X$ and $\Gamma < {\mathsf{G}}$, the developing map is a covering \begin{gather*}D\colon \ \widetilde{M} \rightarrow \Omega\end{gather*} and the holonomy representation is a homomorphism \begin{gather*}h\colon \ \pi_1(M)\rightarrow \Gamma \end{gather*} such that $\ker(h)=\pi_1(\Omega)$. \end{Example} If we change the developing map by post-composing it with an element $g\in{\mathsf{G}}$, the holonomy representation changes by conjugation by $g$. In other words, the group ${\mathsf{G}}$ acts on the developing pairs in the following way: \begin{gather*}g \cdot (D,h) = \big(g\circ D, g h g^{-1}\big). \end{gather*} The developing pair $(D,h)$ of the $(X,{\mathsf{G}})$-manifold $M$ is well defined up to this action of ${\mathsf{G}}$. The developing pair completely determines the $(X,{\mathsf{G}})$-structure on $M$, as we will now see. \begin{Definition}Let $M$ be a manifold without a specified $(X,{\mathsf{G}})$-structure. We will say that a pair $(D,h)$ is an $(X,{\mathsf{G}})$-\emph{developing pair} for $M$ if \begin{enumerate}\itemsep=0pt \item[1)] $h$ is a representation $h\colon \pi_1(M)\rightarrow {\mathsf{G}}$, \item[2)] $D$ is an $h$-equivariant local diffeomorphism. 
\end{enumerate} \end{Definition} Given an $(X,{\mathsf{G}})$-developing pair $(D,h)$ for $M$, we can construct an $(X,{\mathsf{G}})$-structure in the following way: let $U$ be a simply-connected open subset of $M$, and let $s\colon U\rightarrow \widetilde{M}$ be a section of the universal covering. Assume that $U$ is small enough, so that $s(U)$ is an open subset on which~$D$ restricts to a diffeomorphism onto its image. Then $(U, D\circ s)$ is a chart, and the collection of all the charts of this type forms an atlas for an $(X,{\mathsf{G}})$-structure on $M$. This is the unique $(X,{\mathsf{G}})$-structure on $M$ with developing pair~$(D,h)$. \subsection{Parameter spaces} Given a fixed manifold $M$, we want to define a parameter space of all $(X,{\mathsf{G}})$-structures on~$M$. \begin{Definition}We will say that two $(X,{\mathsf{G}})$-structures on $M$ are \defin{isotopic} if there exists a~diffeomorphism $f\colon M\rightarrow M$ isotopic to the identity, which is an $(X,{\mathsf{G}})$-isomorphism between the first structure and the second. \end{Definition} We will denote by ${\mathcal D}_{(X,{\mathsf{G}})}(M)$ the set of all the $(X,{\mathsf{G}})$-structures on $M$ up to isotopy. The topology on ${\mathcal D}_{(X,{\mathsf{G}})}(M)$ is given by the $C^\infty$-topology on the corresponding developing maps. Let's see this in more detail. Consider the space $\Dev_{(X,{\mathsf{G}})}(M)$ of all $(X,{\mathsf{G}})$-developing pairs $(D,h)$ for $M$. This space is endowed with the $C^\infty$-topology on the developing maps. Given a sequence of developing pairs $(D_k,h_k)$, it is easy to check that if the sequence $(D_k)$ converges to $D_0$ in the $C^\infty$-topology, then the sequence $(h_k)$ converges point-wise to~$h_0$. Choose a point $m\in M$, and consider the group $\Diffeo_0(M,m)$ of all diffeomorphisms of~$M$ that fix the point $m$ and are isotopic to the identity. Every element of this group can be lifted in a unique way to a diffeomorphism of $\widetilde{M}$ that fixes the fiber over $m$.
In this way, $\Diffeo_0(M,m)$ acts on $\widetilde{M}$. The group $\Diffeo_0(M,m) \times {\mathsf{G}}$ acts on $\Dev_{(X,{\mathsf{G}})}(M)$ in the following way: \begin{gather*}(f,g)\cdot (D,h) = \big(g\circ D \circ f, g h g^{-1}\big). \end{gather*} We have that \begin{gather*}{\mathcal D}_{(X,{\mathsf{G}})}(M) = \Dev_{(X,{\mathsf{G}})}(M)/ \big(\Diffeo_0(M,m) \times {\mathsf{G}}\big).\end{gather*} In this way, the parameter space of $(X,{\mathsf{G}})$-structures on $M$ inherits the quotient topology. \section{Representations and flat bundles} \label{sec:character varieties} In this section, we review the correspondence between conjugacy classes of representations of the fundamental group of a manifold and isomorphism classes of flat bundles. We tried to keep the required Lie theory to a minimum; for all the Lie-theoretical notions, the reader can refer to \cite{Helgason,HumphreysBook,SerreBook}. \subsection{Character varieties} Let $\Gamma$ be a finitely generated group. Here, the most interesting case is when $\Gamma$ is the fundamental group of a closed manifold, but for the moment it can be arbitrary. Let ${\mathsf{G}}$ be a reductive Lie group with Lie algebra ${\mathfrak g}$. We will denote by $\Hom(\Gamma,{\mathsf{G}})$ the set of all representations (i.e., group homomorphisms) of $\Gamma$ in ${\mathsf{G}}$, endowed with the topology of point-wise convergence of representations. \begin{Definition}A \defin{reductive representation} of $\Gamma$ in ${\mathsf{G}}$ is a representation $\rho\colon \Gamma \rightarrow {\mathsf{G}}$ such that the induced action on ${\mathfrak g}$ given by the adjoint representation is completely reducible. \end{Definition} \begin{Example}If ${\mathsf{G}}$ is a linear group, then $\rho$ is reductive if and only if it is completely reducible. \end{Example} We will denote by $\Hom^*(\Gamma,{\mathsf{G}})$ the subspace of all reductive representations of $\Gamma$ in ${\mathsf{G}}$.
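A simple example of a representation that is not reductive: send a generator to a nontrivial unipotent element, which preserves a line but no complementary line. A numerical sketch for $\Gamma = {\mathbb{Z}}$ and the linear group ${\mathsf{SL}}(2,{\mathbb{R}})$ (a representation of ${\mathbb{Z}}$ is just the choice of one matrix; the code is our own):

```python
# rho: Z -> SL(2, R), 1 |-> unipotent matrix A.  The line spanned by e1 is
# invariant, but no complementary line is, so rho is not completely reducible.
A = [[1.0, 1.0],
     [0.0, 1.0]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def is_invariant_line(v):
    """Check whether span(v) is A-invariant, i.e. A v is proportional to v."""
    w = apply(A, v)
    return abs(v[0] * w[1] - v[1] * w[0]) < 1e-12  # 2x2 determinant = 0

assert is_invariant_line([1.0, 0.0])   # span(e1) is invariant
# Any line transverse to span(e1) is of the form span((t, 1)); none is invariant:
for k in range(-50, 51):
    assert not is_invariant_line([k / 10.0, 1.0])
```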
The group ${\mathsf{G}}$ acts on $\Hom^*(\Gamma,{\mathsf{G}})$ by conjugation, and the action is proper. We will denote the quotient by this action by \begin{gather*}{\mathcal X}(\Gamma,{\mathsf{G}}) = \Hom^*(\Gamma,{\mathsf{G}}) / {\mathsf{G}}. \end{gather*} \begin{Definition}The space ${\mathcal X}(\Gamma,{\mathsf{G}})$ is called the \defin{character variety} of $\Gamma$ in ${\mathsf{G}}$. \end{Definition} Character varieties are Hausdorff topological spaces. They are in general not manifolds, since they can have singularities, but they are always locally contractible. When $\Gamma = \pi_1(S)$, for a closed orientable surface $S$ of genus $g\geq 2$, and ${\mathsf{G}}$ is a real Lie group, there are results describing the topology of some connected components of the character varieties. \begin{Example} \label{exa:special representations}\quad \begin{enumerate}\itemsep=0pt \item When ${\mathsf{G}}={\mathsf{PSL}}(2,{\mathbb{R}})$, Goldman \cite{GoldmanThesis} used a topological invariant, the Euler number, to classify the connected components of the character variety: it has $4g-3$ connected components corresponding to the values of the Euler number from $2-2g$ to $2g-2$. Moreover, Goldman proved that a representation in ${\mathsf{PSL}}(2,{\mathbb{R}})$ is discrete and faithful if and only if it has Euler number $\pm(2g-2)$. Such representations are called \defin{Fuchsian representations}, and they form two connected components of the character variety, each of which is a copy of the Teichm\"uller space ${\mathcal T}(S)$ of the surface. Hitchin \cite{selfduality} described the topology of all the connected components with non-zero Euler number. \item Similarly, when ${\mathsf{G}}={\mathsf{PGL}}(2,{\mathbb{R}})$, the set of discrete and faithful representations, again called Fuchsian representations, forms a connected component of the character variety which is a~copy of the Teichm\"uller space ${\mathcal T}(S)$ of the surface.
This component is then homeomorphic to ${\mathbb{R}}^{6g-6}$, and it is also denoted by $\Hit(S,2)$, see below. \item Consider now the case when ${\mathsf{G}}={\mathsf{PGL}}(n,{\mathbb{R}})$. A \defin{Fuchsian representation} in ${\mathsf{PGL}}(n,{\mathbb{R}})$ is defined as the composition of a Fuchsian representation in ${\mathsf{PGL}}(2,{\mathbb{R}})$ with the irreducible representation ${\mathsf{PGL}}(2,{\mathbb{R}})\rightarrow {\mathsf{PGL}}(n,{\mathbb{R}})$. This construction gives an embedding of the Teichm\"uller space ${\mathcal T}(S)$ in ${\mathcal X}(\pi_1(S), {\mathsf{PGL}}(n,{\mathbb{R}}))$, whose image is called the \defin{Fuchsian locus}. Hitchin \cite{liegroupsteichmuller} proved that the connected component of the character variety containing the Fuchsian locus is homeomorphic to ${\mathbb{R}}^{(n^2-1)(2g-2)}$. This component is called the \defin{Hitchin component}, and denoted by $\Hit(S,n)$. Labourie~\cite{AnosovFlowsLabourie} described the geometry of the representations in this component. The Hitchin components share many properties with the Teichm\"uller spaces, hence they are sometimes called \defin{higher Teichm\"uller spaces}, and they give the name to higher Teichm\"uller theory. \item Hitchin~\cite{liegroupsteichmuller} defined special components in the character varieties of all split real simple Lie groups ${\mathsf{G}}$. They are homeomorphic to ${\mathbb{R}}^{\dim({\mathsf{G}})(2g-2)}$. \item The Euler number can be generalized to representations into all Lie groups of Hermitian type; in this case it is called the Toledo number, see Toledo \cite{Toledo}. Representations with maximal value of the Toledo number are called \defin{maximal representations}, and they form a~union of connected components in the corresponding character varieties. This is another way to generalize Fuchsian representations to higher rank Lie groups.
For ${\mathsf{G}} = {\mathsf{Sp}}(4,{\mathbb{R}})$ and ${\mathsf{PSp}}(4,{\mathbb{R}})$, an explicit description of the topology of the maximal components was obtained in joint work with Brian Collier \cite{SO23LabourieConj}. \end{enumerate} \end{Example} When ${\mathsf{G}}$ is a complex Lie group, we don't have explicit descriptions of connected components of character varieties. But there are at least some special open subsets that are particularly interesting. For example, in the character variety ${\mathcal X}(\pi_1(S),{\mathsf{PGL}}(2,{\mathbb{C}}))$ we have the open subset of \defin{quasi-Fuchsian representations}, denoted by~$\QFuch(S)$. Quasi-Fuchsian representations can be defined as those representations whose action on ${\mathbb{CP}}^1$ is topologically conjugate to the action of a Fuchsian representation on~${\mathbb{CP}}^1$. The open subset $\QFuch(S)$ is homeomorphic to~${\mathbb{R}}^{12g-12}$. Hitchin components generalize Teichm\"uller spaces to higher rank Lie groups. In a similar way, there is a special open subset of the character variety of a simple complex Lie group ${\mathsf{G}}$ that generalizes the space of quasi-Fuchsian representations: the space of \defin{quasi-Hitchin representations}. To define it, consider the open subset \begin{gather*}{\mathsf{B}}\text{-}\Anosov(\pi_1(S),{\mathsf{G}}) \subset {\mathcal X}(\pi_1(S),{\mathsf{G}})\end{gather*} consisting of all ${\mathsf{B}}$-Anosov representations, where ${\mathsf{B}}$ is the Borel subgroup of ${\mathsf{G}}$. The space of quasi-Hitchin representations is then defined as the connected component of ${\mathsf{B}}\text{-}\Anosov(\pi_1(S),{\mathsf{G}})$ containing the Hitchin component of the split real form of ${\mathsf{G}}$. For the group ${\mathsf{PGL}}(n,{\mathbb{C}})$, we will denote the space of quasi-Hitchin representations by $\QHit(S,n)$.
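The irreducible representation ${\mathsf{PGL}}(2,{\mathbb{R}})\rightarrow {\mathsf{PGL}}(n,{\mathbb{R}})$ used to define the Fuchsian locus in point (3) of Example~\ref{exa:special representations} is, up to conjugation, the action on homogeneous polynomials of degree $n-1$ in two variables, i.e., the $(n-1)$-st symmetric power. A sketch verifying the homomorphism property $\operatorname{Sym}^d(AB)=\operatorname{Sym}^d(A)\operatorname{Sym}^d(B)$ numerically (our own implementation via binomial expansion):

```python
import random

def poly_mul(p, q):
    """Multiply homogeneous polynomials in x, y, stored as coefficient lists
    indexed by the degree in y (the degree in x is implicit)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_pow(p, n):
    r = [1.0]
    for _ in range(n):
        r = poly_mul(r, p)
    return r

def sym(A, d):
    """Matrix of Sym^d(A) on the monomials x^(d-i) y^i: row i is the expansion
    of (a x + b y)^(d-i) (c x + d y)^i, so that m(A v) = sym(A, d) m(v)."""
    (a, b), (c, dd) = A
    return [poly_mul(poly_pow([a, b], d - i), poly_pow([c, dd], i))
            for i in range(d + 1)]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
B = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
d = 3  # Sym^3: the irreducible 4-dimensional representation

L = sym(mat_mul(A, B), d)
R = mat_mul(sym(A, d), sym(B, d))
assert all(abs(L[i][j] - R[i][j]) < 1e-9
           for i in range(d + 1) for j in range(d + 1))
```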
\subsection{Flat bundles} Let $M$ be a manifold, and let $X$ be a manifold endowed with an effective action of ${\mathsf{G}}$. \begin{Definition} A \defin{fiber bundle} on $M$ with \defin{structure group} ${\mathsf{G}}$ and \defin{fiber} $X$ (also called a \defin{${\mathsf{G}}$-bundle} with fiber $X$) is a manifold $B$ with a smooth map $\pi\colon B \rightarrow M$ and a maximal ${\mathsf{G}}$-atlas for $\pi$. Recall that a \defin{${\mathsf{G}}$-atlas} is a set of charts $\{(U_i,\varphi_i)\}$ where the $U_i$s are open subsets of $M$ which cover $M$, and \begin{gather*}\varphi_i\colon \ \pi^{-1}(U_i) \rightarrow U_i \times X \end{gather*} is a diffeomorphism that intertwines $\pi|_{U_i}$ and the projection on the first factor. The $\varphi_i$s must be \defin{${\mathsf{G}}$-compatible} in the following sense: the maps \begin{gather*}\varphi_i \circ \varphi_j^{-1}\colon \ (U_i \cap U_j) \times X \rightarrow (U_i \cap U_j) \times X \end{gather*} are of the form $\varphi_i \circ \varphi_j^{-1}(m,x) = (m,t_{ij}(m)x)$, where $t_{ij}(m)\in {\mathsf{G}}$. The functions \begin{gather*}t_{ij}\colon \ U_i \cap U_j \rightarrow {\mathsf{G}}\end{gather*} are called the \defin{transition functions} of the atlas. \end{Definition} It is interesting to remark that the bundle is determined up to isomorphism by the transition functions, and that the transition functions don't depend at all on the space $X$. This has the following consequence: if $X, Y$ are manifolds with effective actions of ${\mathsf{G}}$, then a bundle~$B$ with structure group~${\mathsf{G}}$ and fiber~$X$ determines a~bundle~$B(Y)$ with the same structure group and fiber $Y$. The bundle $B(Y)$ is defined as the bundle with fiber $Y$ having the same transition functions as the bundle~$B$.
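Transition functions of any ${\mathsf{G}}$-atlas satisfy the cocycle condition $t_{ij}(m)\,t_{jk}(m)=t_{ik}(m)$ on triple overlaps, and this condition is all that the constructions above need: composing every $t_{ij}$ with a fixed homomorphism of the structure group again yields a cocycle. A small numerical sketch (hypothetical transition data of the local form $t_{ij}=g_ig_j^{-1}$, which every cocycle has on small enough charts; the homomorphism $\det\colon {\mathsf{GL}}(2,{\mathbb{R}})\rightarrow {\mathbb{R}}^*$ plays the role of a change of structure group):

```python
import random

random.seed(1)

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(M):
    (a, b), (c, d) = M
    det_m = a * d - b * c
    return [[d / det_m, -b / det_m], [-c / det_m, a / det_m]]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def close(M, N):
    return all(abs(M[i][j] - N[i][j]) < 1e-9 for i in range(2) for j in range(2))

# Hypothetical transition data on a 3-chart cover: t_ij = g_i g_j^(-1),
# with random invertible (diagonally dominant) 2x2 matrices g_i.
g = [[[random.uniform(1, 2), random.uniform(0, 0.4)],
      [random.uniform(0, 0.4), random.uniform(1, 2)]] for _ in range(3)]
t = {(i, j): mat_mul(g[i], mat_inv(g[j])) for i in range(3) for j in range(3)}

# The cocycle condition t_ij t_jk = t_ik:
for i in range(3):
    for j in range(3):
        for k in range(3):
            assert close(mat_mul(t[(i, j)], t[(j, k)]), t[(i, k)])

# Composing with a homomorphism of the structure group (here det) again
# gives a cocycle: the transition functions of an associated line bundle.
for i in range(3):
    for j in range(3):
        for k in range(3):
            assert abs(det(t[(i, j)]) * det(t[(j, k)]) - det(t[(i, k)])) < 1e-9
```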
More generally, given a group homomorphism $q\colon {\mathsf{G}} \rightarrow {\mathsf{H}}$, assume that $X$ is a manifold with an effective ${\mathsf{G}}$-action, $Y$ is a manifold with an effective ${\mathsf{H}}$-action, and $B$ is a ${\mathsf{G}}$-bundle with fiber~$X$. We can apply the construction given above by composing the transition functions of $B$ with the homomorphism $q$. This produces an ${\mathsf{H}}$-bundle $B(Y)$ with fiber $Y$. \begin{Definition}The bundle $B(Y)$ is called the \defin{associated bundle} to $B$ with fiber $Y$. \end{Definition} \begin{Example}\quad \begin{enumerate}\itemsep=0pt \item The most important example is the special case when $X = {\mathsf{G}}$, acting on itself on the left. A bundle with structure group and fiber ${\mathsf{G}}$ is called a \defin{principal} ${\mathsf{G}}$-bundle. \item Another fundamental example is the case when the Lie group ${\mathsf{G}}$ is a \defin{linear group}, i.e., when it can be embedded as a Lie subgroup of ${\mathsf{GL}}(n,{\mathbb{R}})$ or ${\mathsf{GL}}(n,{\mathbb{C}})$. In this case it admits an effective linear action on $V={\mathbb{R}}^n$ or ${\mathbb{C}}^n$, and a ${\mathsf{G}}$-bundle with fiber $V$ is called a \defin{vector bundle}. Starting from every bundle $B$ with structure group ${\mathsf{G}}$ and some fiber, we can construct the associated vector bundle $B(V)$. \item Similarly, a \defin{projective group} is a Lie group ${\mathsf{G}}$ that can be embedded as a Lie subgroup of ${\mathsf{PGL}}(n+1,{\mathbb{R}})$ or ${\mathsf{PGL}}(n+1,{\mathbb{C}})$. In this case it admits an effective projective action on $P = {\mathbb{RP}}^{n}$ or ${\mathbb{CP}}^{n}$, and a ${\mathsf{G}}$-bundle with fiber $P$ is called a \defin{projective bundle}. Starting from every bundle $B$ with structure group ${\mathsf{G}}$ and some fiber, we can construct the associated projective bundle $B(P)$. \end{enumerate} \end{Example} There is a close relationship between vector bundles and projective bundles. 
Assume that ${\mathsf{G}}$ is a linear group, $X = {\mathbb{R}}^{n+1}$ or ${\mathbb{C}}^{n+1}$, ${\mathsf{H}}$ is the corresponding projectivized group and $Y={\mathbb{RP}}^{n}$ or ${\mathbb{CP}}^{n}$. From every vector bundle $E$ with structure group ${\mathsf{G}}$, we can construct the associated projective bundle $E(Y)$. We will denote $E(Y)$ by ${\mathbb{P}}(E)$, the \defin{projectivized bundle} of $E$. \begin{Definition}A \defin{flat structure} on a ${\mathsf{G}}$-bundle $B$ is an atlas of $B$ satisfying the additional condition that all transition functions are locally constant, and maximal among all atlases sa\-tisfying this additional condition. A bundle with a flat structure is called a \defin{flat bundle}, or a \defin{local system}. \end{Definition} The flat structure is just a special atlas, hence, as explained above, it does not depend on the fiber. It is thus possible to construct \defin{associated bundles} and to replace a flat bundle $B$ with fiber $X$ by a flat bundle $B(Y)$ with fiber $Y$. If ${\mathsf{G}}$ is a linear group acting on a vector space $V={\mathbb{R}}^n$ or ${\mathbb{C}}^n$, starting from every flat bundle~$B$ with fiber $X$, we can change fiber and construct the vector bundle $B(V)$, with a flat structure. A~flat structure on a vector bundle can be described by a \defin{flat connection}, i.e., a connection with vanishing curvature form. This description is useful for doing computations. A flat bundle $B$ with fiber $X$ has a well defined foliation, called the \defin{parallel foliation}, that can be described in local charts: in $\pi^{-1}(U_i)$, for every $x \in X$ there is a local leaf given by $\varphi_i^{-1}(U_i \times \{x\})$. The local foliations defined by the different charts all match up, giving rise to a~global foliation. A \defin{local parallel section} of a flat bundle is a section $s\colon U \rightarrow B$ defined on an open subset $U$, which is locally constant when restricted to every chart of the flat structure. 
In other words, it is a section whose image is contained in a leaf of the parallel foliation. Similarly, given a curve $\gamma\colon [0,1]\rightarrow M$, a~\defin{parallel section along} $\gamma$ is a section along $\gamma$ which is locally constant in the charts. Using parallel sections, we can define the \defin{parallel transport operator} along a curve $\gamma\colon [0,1]\rightarrow M$: it is an operator \begin{gather*}P_\gamma\colon \ \pi^{-1}(\gamma(0)) \rightarrow \pi^{-1}(\gamma(1)) \end{gather*} defined in the following way: given $x_0\in \pi^{-1}(\gamma(0))$, there exists a unique parallel section~$s$ along~$\gamma$ such that $s(0)=x_0$. We define $P_\gamma(x_0) = s(1)$. The parallel transport $P_\gamma$ only depends on the homotopy class of~$\gamma$ relative to the end points. \subsection{Monodromy} Let $\pi\colon B\rightarrow M$ be a flat ${\mathsf{G}}$-bundle with fiber $X$. Given a base point $m_0 \in M$, we can use a chart to identify the fiber $\pi^{-1}(m_0)$ with $X$. If we change chart, this identification changes by the action of an element of ${\mathsf{G}}$. Now the parallel transport $P_\gamma$ along a loop $\gamma$ based at~$m_0$ is a~map $P_\gamma\colon X \rightarrow X$, which agrees with the action of an element of ${\mathsf{G}}$, hence we can write $P_\gamma\in{\mathsf{G}}$. Since~$P_\gamma$ only depends on the homotopy class of $\gamma$, we get a map \begin{gather*}P\colon \ \pi_1(M,m_0) \rightarrow {\mathsf{G}}. \end{gather*} This map is compatible with composition of loops, hence it is a representation. If we change the chart around $m_0$, the representation changes by conjugation by an element of~${\mathsf{G}}$. \begin{Definition}The representation $P$ is called the \defin{monodromy representation} of the bundle. We will say that a flat bundle is \defin{reductive} if the monodromy representation is reductive. \end{Definition} We will denote by $\Flat(M,{\mathsf{G}},X)$ the space of all reductive flat ${\mathsf{G}}$-bundles with fiber $X$ up to isomorphism.
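The simplest nontrivial example of a monodromy representation comes from the flat bundle over the circle with structure group ${\mathsf{O}}(1)=\{\pm 1\}$ and fiber ${\mathbb{R}}$ whose total space is the open M\"obius band: cover $S^1$ by two arcs whose overlap has two components, with constant transition values $+1$ and $-1$. Parallel transport along a loop multiplies, in order, the transition values that the loop crosses; a sketch (the encoding of a loop as the list of crossed transition values is ours):

```python
# A flat G-bundle over S^1, G = O(1) = {+1, -1}, fiber R: two charts whose
# overlap has two components, carrying the constant transition values +1, -1.
# Homotopic loops cross the same transition values, so P_gamma is well defined.

def parallel_transport(transitions, x0):
    """Parallel transport of the fiber coordinate x0 along a loop, encoded as
    the ordered list of constant transition values the loop crosses."""
    for t in transitions:
        x0 = t * x0
    return x0

generator = [+1, -1]      # the loop crossing both components of the overlap once
assert parallel_transport(generator, 1.0) == -1.0      # monodromy P(gamma) = -1
assert parallel_transport(generator * 2, 1.0) == 1.0   # P(gamma^2) = +1, so P is
                                                       # a homomorphism Z -> O(1)
```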
If $X$, $Y$ are manifolds with effective actions of ${\mathsf{G}}$, the associated bundle construction gives a natural bijection $\Flat(M,{\mathsf{G}},X)\rightarrow \Flat(M,{\mathsf{G}},Y)$. Hence, we can suppress the $X$ in the notation, and consider the space $\Flat(M,{\mathsf{G}})$, parametrizing reductive flat ${\mathsf{G}}$-bundles with any fixed fiber $X$. The monodromy representation gives a map \begin{gather*}{\mathcal P}\colon \ \Flat(M,{\mathsf{G}}) \rightarrow {\mathcal X}(\pi_1(M), {\mathsf{G}}). \end{gather*} \begin{Proposition} \label{prop:monodromy}The map ${\mathcal P}$ is a bijection. \end{Proposition} \begin{proof}The inverse map is given by the following construction. Let $\rho\colon \pi_1(M)\rightarrow {\mathsf{G}}$ be a~re\-pre\-sentation. This gives an action of $\pi_1(M)$ on $\widetilde{M}\times X$, acting on the first factor by deck transformations and on the second factor via $\rho$. This action is properly discontinuous and free because the action on the first factor has these properties. Hence, we can construct the manifold \begin{gather*}X_\rho = \big(\widetilde{M}\times X\big)/\pi_1(M). \end{gather*} The projection on the first factor induces a map $p\colon X_\rho\rightarrow M$ which turns $X_\rho$ into a ${\mathsf{G}}$-fiber bundle with fiber $X$. Moreover, the product structure on $\widetilde{M}\times X$ induces a flat structure on the bundle. It is easy to show that the flat bundle $X_\rho$ has monodromy $\rho$, and that any other flat ${\mathsf{G}}$-bundle with fiber~$X$ and monodromy $\rho$ is isomorphic to~$X_\rho$. \end{proof} \section{The graph of a geometric structure} \label{sec:graph} In this section, we will see how geometric structures correspond to flat bundles with a transverse section. The flat bundle encodes the holonomy representation of the geometric structure, and the transverse section encodes the developing map. This construction is described in detail in Goldman's notes~\cite{GoldmanGSAVOR}.
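For a concrete instance of the bundle $X_\rho$ constructed in the proof of Proposition~\ref{prop:monodromy}, take $M=S^1$ and $\rho\colon \pi_1\big(S^1\big)={\mathbb{Z}}\rightarrow {\mathsf{SO}}(2)$ sending the generator to the rotation by an angle $\theta$; then $X_\rho = \big({\mathbb{R}}\times{\mathbb{R}}^2\big)/{\mathbb{Z}}$, where $n\cdot(\tilde m, v) = \big(\tilde m + n, \rho(n)v\big)$. A sketch checking that the canonical-representative map realizing the quotient is constant on orbits (coordinates and function names are ours):

```python
import math

theta = 0.7  # rho: Z -> SO(2) sends the generator to the rotation by theta

def rot(alpha, v):
    c, s = math.cos(alpha), math.sin(alpha)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def deck(n, point):
    """The pi_1(S^1) = Z action on M~ x X = R x R^2 used to build X_rho."""
    m, v = point
    return (m + n, rot(n * theta, v))

def representative(point):
    """Canonical representative of a Z-orbit: translate m into [0, 1) and
    rotate the fiber coordinate back accordingly."""
    m, v = point
    n = math.floor(m)
    return (m - n, rot(-n * theta, v))

p = (0.3, (1.0, 0.0))
for n in (-2, -1, 1, 5):
    q, r = representative(deck(n, p)), representative(p)
    assert abs(q[0] - r[0]) < 1e-9
    assert abs(q[1][0] - r[1][0]) < 1e-9 and abs(q[1][1] - r[1][1]) < 1e-9
```

The monodromy of this flat bundle is exactly $\rho$: a loop around $S^1$ transports the fiber by the rotation $R_\theta$.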
\subsection{Sections and equivariant maps} Let $\rho\colon \pi_1(M) \rightarrow {\mathsf{G}}$ be a representation, and consider the space $\Equiv(\rho,X)$ of smooth $\rho$-equivariant maps from the universal covering $\widetilde{M}$ to $X$, endowed with the $C^\infty$-topology. Let~$B$ be the flat bundle over $M$ with fiber $X$ and holonomy $\rho$, and consider the space $\Gamma(M, B)$ of smooth sections of~$B$, endowed with the $C^\infty$-topology. \begin{Proposition}There is a natural homeomorphism between $\Gamma(M, B)$ and $\Equiv(\rho,X)$. \end{Proposition} \begin{proof}Recall first that $B$ is isomorphic to $X_\rho$, the bundle defined in the proof of Proposition~\ref{prop:monodromy}. From the construction of $X_\rho$, we can see that the pull-back of $X_\rho$ to $\widetilde{M}$ is isomorphic to a~product $\widetilde{M}\times X$, with the product flat structure. A section $s\in \Gamma(M, X_\rho)$ can be pulled back to a section $\widetilde{s}$ of $\widetilde{M}\times X$. A section of a product bundle is just a map $\widetilde{s}\colon \widetilde{M}\rightarrow X$. The fact that $\widetilde{s}$ is a pull-back tells us that this map is $\rho$-equivariant. This gives a map between $\Gamma(M, X_\rho)$ and $\Equiv(\rho,X)$. To find the inverse of this map, just notice that a $\rho$-equivariant map $f\colon \widetilde{M}\rightarrow X$ is a section of the product bundle $\widetilde{M}\times X$. The fact that $f$ is $\rho$-equivariant implies that it passes to the quotient, giving a section $[f]$ of $X_\rho$. To check that the maps are continuous, we can work locally on small open sets of $M$ which are evenly covered by the universal covering. \end{proof} \subsection{Transverse sections} Let $\rho\colon \pi_1(M) \rightarrow {\mathsf{G}}$ be a representation, and $B$ be the flat bundle over $M$ with fiber $X$ and holonomy $\rho$. \begin{Definition}A section $s\in \Gamma(M, B)$ is \defin{transverse} if it is transverse to the parallel foliation of the bundle.
\end{Definition} \begin{Proposition}A section $s\in \Gamma(M, B)$ is transverse if and only if the corresponding $\rho$-equivariant map is \begin{enumerate}\itemsep=0pt \item[$1)$] an immersion if $\dim(M) \leq \dim(X)$, \item[$2)$] a submersion if $\dim(M) \geq \dim(X)$. \end{enumerate} In particular, if $\dim(M)= \dim(X)$, then $s$ is transverse if and only if the corresponding $\rho$-equivariant map is a local diffeomorphism. \end{Proposition} \begin{proof} Let $f\colon \widetilde{M}\rightarrow X$ be the corresponding $\rho$-equivariant map, and let $\pi\colon \widetilde{M} \rightarrow M$ denote the universal covering. Let $v\in T_{x}\widetilde{M}$, and $v' = d\pi[v] \in T_{\pi(x)} M$. Then $df(v) = 0$ if and only if $ds(v')$ is tangent to the parallel foliation. \end{proof} \begin{Definition}If $\dim(M) = \dim(X)$, a \defin{graph of an $(X,{\mathsf{G}})$-structure} is a pair $(B,s)$ where $B$ is a flat bundle over~$M$ with fiber $X$ and $s\in \Gamma(M, B)$ is a transverse section. \end{Definition} Graphs of $(X,{\mathsf{G}})$-structures correspond to $(X,{\mathsf{G}})$-developing pairs, which determine $(X,{\mathsf{G}})$-structures on~$M$. Let's see this more explicitly in the case of real or complex projective structures. Let ${\mathbb{K}}$ be ${\mathbb{R}}$ or ${\mathbb{C}}$, and consider the geometry ${\mathbb{KP}}^{n}$. Given a representation $\rho\colon \pi_1(M)\rightarrow {\mathsf{PGL}}(n+1,{\mathbb{K}})$, we want to construct a ${\mathbb{KP}}^n$-structure on $M$ with holonomy $\rho$. To do this, we need to consider the flat bundle $B$ over $M$ with fiber ${\mathbb{KP}}^{n}$ and holonomy $\rho$, and construct a transverse section of $B$. This becomes more concrete when $\rho$ lifts to a representation $\bar{\rho}\colon \pi_1(M) \rightarrow {\mathsf{GL}}(n+1,{\mathbb{K}})$.
In this case, there is a flat vector bundle $E$ with holonomy $\bar{\rho}$ such that the projectivized bundle~${\mathbb{P}}(E)$ is isomorphic to~$B$. The flat structure on $E$ is described by a flat connection~$\nabla$. A~section of~$B$ is the same thing as a line subbundle of~$E$. The next proposition shows how it is possible to verify whether a section of~$B$ is transverse with a computation in local coordinates involving the derivatives with respect to the flat connection on~$E$. \begin{Proposition} \label{prop:transversality condition} Let $E$ be a flat vector bundle of rank $n+1$ over $M$, and $L \subset E$ be a line subbundle. Then $L$ is a transverse section of ${\mathbb{P}}(E)$ if and only if for every $m\in M$ there exists a coordinate neighborhood $U$ of $m$ $($with coordinates $x_1, \dots, x_k$, where $k=\dim(M))$ and a local non-vanishing section $s\colon U \rightarrow L$ such that the local vector fields \begin{gather*}s, \nabla_{\!\!\!\frac{\partial}{\partial x_1}} s, \dots, \nabla_{\!\!\!\frac{\partial}{\partial x_k}} s \end{gather*} satisfy one of the following conditions: \begin{enumerate}\itemsep=0pt \item[$1)$] they are linearly independent on $U$ if $\dim(M) \leq n$, \item[$2)$] they span every fiber over $U$ if $\dim(M) \geq n$. \end{enumerate} \end{Proposition} \subsection{The holonomy map} If ${\mathsf{G}}$ is reductive, we can consider the subspace \begin{gather*}{\mathcal D}^*_{(X,{\mathsf{G}})}(M) \subset {\mathcal D}_{(X,{\mathsf{G}})}(M) \end{gather*} of all $(X,{\mathsf{G}})$-structures on $M$ with reductive holonomy. This subspace has a natural map to the character variety, given by the holonomy representation: \begin{gather*}\mathrm{Hol}\colon \ {\mathcal D}^*_{(X,{\mathsf{G}})}(M) \rightarrow {\mathcal X}(\pi_1(M),{\mathsf{G}}).\end{gather*} \begin{Theorem}[Thurston's holonomy principle]If $M$ is a closed manifold, the map $\mathrm{Hol}$ is open and has discrete fibers. \end{Theorem} \begin{proof}See Goldman \cite{GoldmanGSAVOR}.
The openness of the map $\mathrm{Hol}$ can be proved easily using graphs of $(X,{\mathsf{G}})$-structures. \end{proof} When $M$ is closed, the map $\mathrm{Hol}$ is very often a local homeomorphism, but not always (for a~counterexample, see Baues~\cite{BauesTorus}). This issue needs to be better understood: \begin{Question}[refined Thurston's holonomy principle]\quad \begin{enumerate}\itemsep=0pt \item Is it true that the map $\mathrm{Hol}$ is always a branched local homeomorphism? \item What are some sufficient conditions for it to be a local homeomorphism? \end{enumerate} \end{Question} Other important questions are raised by the fact that the map $\mathrm{Hol}$ is in general neither injective nor surjective. \begin{Question} \label{question:holonomy fibers} Let $\rho\colon \pi_1(M)\rightarrow {\mathsf{G}}$ be a representation. Is there an $(X,{\mathsf{G}})$-structure on $M$ with holonomy $\rho$? And in the affirmative case, how many are there? \end{Question} A complete answer to Question \ref{question:holonomy fibers} is known only in very special cases, for example for ${\mathbb{CP}}^1$-structures on closed surfaces (see Gallo--Kapovich--Marden \cite{GalloKapovichMarden}, Goldman \cite{GoldmanFuchsianHol}, Baba \cite{BabaGrafting1,BabaGrafting2}). In the case of ${\mathbb{RP}}^2$-structures on closed surfaces a partial answer is given in Choi--Goldman \cite{ChoiGoldmanClassification}. The character varieties are much easier to understand than the parameter spaces ${\mathcal D}^*_{(X,{\mathsf{G}})}(M)$, hence, if we can obtain a better understanding of Question~\ref{question:holonomy fibers}, we can use our knowledge about representations to understand parameter spaces of geometric structures. A plan to answer these questions can be the following: given a representation $\rho$, we construct the corresponding flat bundle, and we then try to understand all the possible transverse sections. 
An obstacle is that even for representations that we know very well, we don't always understand the corresponding flat bundle well enough to see the transverse sections. This is the point when Higgs bundles can be very useful: they can give an explicit description of the flat bundle. \section{How to use Higgs bundles?} \label{sec:higgs bundles} We will show in some simple examples how Higgs bundles can be used to construct geometric structures on manifolds. The flat connection can be expressed in terms of solutions of Hitchin's equations and the transverse section can be constructed from the study of the holomorphic structure of the vector bundle. This idea first appeared in Baraglia's Ph.D.~Thesis~\cite{BaragliaThesis}. \subsection[${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundles]{$\boldsymbol{{\mathsf{SL}}(2,{\mathbb{R}})}$-Higgs bundles} In this subsection we will describe all the ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundles. We will use this description in Sections~\ref{subsec:hyperbolic structures}, \ref{subsection:almost fuchsian}, \ref{subsec:ads}, and \ref{subsec:projective structures Higgs}. Let $\Sigma$ be a closed Riemann surface. \begin{Definition}An \defin{${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle} on $\Sigma$ is a tuple $(E,Q,\omega,\varphi)$, where \begin{itemize}\itemsep=0pt \item[1)] $E$ is a holomorphic vector bundle on $\Sigma$ of rank $2$, \item[2)] $Q\colon E\rightarrow E^*$ is a holomorphic symmetric ${\mathbb{C}}$-bilinear form, \item[3)] $\omega\in H^0\big(\Sigma,\Lambda^2 E\big)$ is a holomorphic ${\mathbb{C}}$-volume form such that $Q$ has volume $1$, \item[4)] $\varphi\in H^0(\Sigma,\End(E)\otimes K)$ is $Q$-symmetric and satisfies $\tr(\varphi)=0$ (the \defin{Higgs field}). \end{itemize} \end{Definition} The first three conditions say that $(E,Q,\omega)$ is a rank $2$ vector bundle with an ${\mathsf{SO}} (2,{\mathbb{C}})$-structure. In particular, $\Lambda^2 E = {\mathcal O}$. 
The structure of such an ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle can be made more explicit. This description was given by Hitchin \cite{selfduality}, who started from a different definition of ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundles. Consider the set of $Q$-isotropic vectors: \begin{gather*}\Iso(Q) = \{v \in E \,|\, Q(v,v) = 0\}. \end{gather*} In every fiber, this set is the union of two lines. $E$ has two line subbundles whose total spaces are given by: \begin{gather*}L_+ = \left\{v \in \Iso(Q) \,|\, \forall\, w \in \Iso(Q){\setminus} \Span(v), \ i \frac{\omega(v,w)}{Q(v,w)} > 0 \right\}, \\ L_- = \left\{v \in \Iso(Q) \,|\, \forall\, w \in \Iso(Q){\setminus} \Span(v), \ i \frac{\omega(v,w)}{Q(v,w)} < 0 \right\}. \end{gather*} Hence, we have $E=L_+ \oplus L_-$. The condition $\Lambda^2 E = {\mathcal O}$ now says that $L_+ = L_-^{-1}$. To simplify the notation, we write $L=L_+$, $L^{-1} = L_-$. The Higgs bundle can be written as \begin{gather*}E = L \oplus L^{-1}, \qquad Q = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \qquad \omega = \frac{i}{\sqrt{2}}\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \qquad \varphi = \begin{pmatrix} 0 & a\\ b & 0 \end{pmatrix},\end{gather*} where $a \in H^0\big(\Sigma, L^2 K\big)$, $b \in H^0\big(\Sigma, L^{-2} K\big)$. The condition for the Higgs bundle to be poly-stable is the following: \begin{enumerate}\itemsep=0pt \item If $\deg(L) > 0$, then $b \neq 0$. \item If $\deg(L) < 0$, then $a \neq 0$. \item If $\deg(L) = 0$, then $a,b\neq 0$ or $a=b=0$. \end{enumerate} In the case when $a=b=0$, the Higgs bundle is strictly poly-stable; in all other cases it is stable. These conditions impose a restriction on the degree of $L$ for a poly-stable ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle: $|\deg(L)|\leq g-1$ (Milnor--Wood inequality). A poly-stable Higgs bundle where $L$ has the maximal possible degree ($\deg(L)= g-1$) is called a~\defin{Fuchsian Higgs bundle}; these correspond to Fuchsian representations.
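The stability case analysis above can be packaged into a few lines of code; the following Python function is our own encoding (with the vanishing of $a$ and $b$ passed as booleans), and it also returns "unstable" when the Milnor--Wood bound fails, since a nonzero $b\in H^0\big(\Sigma, L^{-2}K\big)$ forces $\deg(L)\leq g-1$, and symmetrically for $a$:

```python
def sl2r_higgs_type(deg_L, a_zero, b_zero, genus):
    """(Poly)stability of the SL(2,R)-Higgs bundle E = L + L^(-1),
    following the case list in the text.

    `a_zero`, `b_zero` say whether the entries of the Higgs field
    vanish identically; `genus` is the genus g of the surface."""
    if deg_L > 0:
        # polystability needs b != 0, which needs deg(L^(-2)K) >= 0
        return 'stable' if (not b_zero and deg_L <= genus - 1) else 'unstable'
    if deg_L < 0:
        # symmetric condition on a in H^0(L^2 K)
        return 'stable' if (not a_zero and -deg_L <= genus - 1) else 'unstable'
    # deg_L == 0
    if a_zero and b_zero:
        return 'strictly polystable'
    return 'stable' if (not a_zero and not b_zero) else 'unstable'

# A Fuchsian Higgs bundle on a genus-2 surface: deg(L) = g - 1, b != 0.
assert sl2r_higgs_type(1, True, False, 2) == 'stable'
```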
The stability condition $b\neq 0$ forces $L$ to be a square root of $K$ (we will write $L = K^{\frac{1}{2}}$). The section $b$ is a constant, and, up to gauge transformations, we can assume $b=1$. The section $a$ is a quadratic differential; we will write $a=q_2 \in H^0\big(\Sigma,K^2\big)$. Let $H$ be the Hermitian metric on $E$ that solves Hitchin's equations. \begin{Proposition}[{\cite[Theorem~3.1]{AdSpaper}}] If an ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle is stable, then \begin{gather*}H = \begin{pmatrix} h & 0\\ 0 & h^{-1} \end{pmatrix},\end{gather*} for some real positive $h\in \Gamma(\Sigma, \bar{L}\otimes L)$. \end{Proposition} \begin{proof}Consider the Higgs field $Q^{-1}\varphi^T Q \in H^0(\Sigma,\End(E)\otimes K)$. Then the metric $\bar{Q}^T\big(H^T\big)^{-1}Q$ is a solution of Hitchin's equations for the Higgs bundle $\big(E,Q^{-1}\varphi^T Q\big)$. The fact that $\varphi$ is $Q$-symmetric means that $\varphi = Q^{-1}\varphi^T Q$, hence $H = \bar{Q}^T\big(H^T\big)^{-1}Q$. Together with the condition $\det(H)=1$, this implies the statement. \end{proof} Let $\ell$ be a local holomorphic frame for $L$, and denote by $\ell'$ the dual holomorphic frame on $L^{-1}$. The pair $(\ell,\ell')$ is a local frame for $E$. In this local frame, we can write the flat connection given by the solutions of Hitchin's equations in the following way: \begin{gather*}\nabla = d + H^{-1}\partial H + \varphi + H^{-1}\bar{\varphi}^T H = d + \begin{pmatrix} -\partial \log h & a + h^2 \bar{b}\\ b + h^{-2} \bar{a} & \partial \log h \end{pmatrix}. \end{gather*} The real structure is given by \begin{gather*}\tau\colon \ E \ni \begin{pmatrix} v_1\\ v_2 \end{pmatrix} \longmapsto \begin{pmatrix} 0 & h\\ h^{-1} & 0 \end{pmatrix} \begin{pmatrix} \bar{v_1}\\ \bar{v_2} \end{pmatrix} = \begin{pmatrix} h\bar{v_2}\\ h^{-1}\bar{v_1} \end{pmatrix} \in E.
\end{gather*} The real locus is given by \begin{gather*}E_{\mathbb{R}} = \{v\in E \,|\, \tau(v)= v\}.\end{gather*} \subsection{Hyperbolic structures on surfaces} \label{subsec:hyperbolic structures} We now show the simplest example of how to use Higgs bundles to construct geometric structures with a given holonomy. We start with a Fuchsian representation $\rho\colon \pi_1(S) \rightarrow {\mathsf{PSL}}(2,{\mathbb{R}})$, and we want to construct an ${\mathbb{H}}^2$-structure with holonomy $\rho$. We will first construct a ${\mathbb{CP}}^1$-structure with holonomy $\rho$, and then verify that this ${\mathbb{CP}}^1$-structure is actually an ${\mathbb{H}}^2$-structure. We choose a complex structure $\Sigma$ on $S$ and consider the ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle $(E,\varphi)$ corresponding to a lift of $\rho$ to ${\mathsf{SL}}(2,{\mathbb{R}})$. Since $\rho$ is Fuchsian, we know that \begin{gather*}E = K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}}, \qquad \varphi = \begin{pmatrix} 0 & q_2\\ 1 & 0 \end{pmatrix}, \qquad q_2 \in H^0\big(\Sigma,K^2\big). \end{gather*} To construct a ${\mathbb{CP}}^1$-structure on $\Sigma$, we need to choose a line subbundle and prove that it gives a transverse section of the projectivized bundle ${\mathbb{P}}(E)$. We can choose $K^{\frac{1}{2}}$ as a subbundle. We will use the transversality condition from Proposition \ref{prop:transversality condition}.
Given a local section $s$ of $K^{\frac{1}{2}}$, we can compute the derivatives: \begin{gather*}s = \begin{pmatrix} 1\\0 \end{pmatrix}, \qquad \nabla_{\!\!\!\frac{\partial}{\partial z}}s = \begin{pmatrix} -\partial\log h\\1 \end{pmatrix}, \qquad \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s = \begin{pmatrix} 0\\h^{-2}\bar{q_2} \end{pmatrix}.\end{gather*} Here we computed the derivatives in the complex directions $\frac{\partial}{\partial z}$ and $\frac{\partial}{\partial \bar{z}}$, but to apply Proposition \ref{prop:transversality condition} we need to translate them into derivatives in the real directions. This gives the following modified condition: the section $K^{\frac{1}{2}}$ is transverse if and only if \begin{gather*}\forall\, A,B\in {\mathbb{C}}, \qquad A \nabla_{\!\!\!\frac{\partial}{\partial z}}s + \bar{A} \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s + B s = 0 \qquad \Rightarrow \qquad A=B=0. \end{gather*} Substituting, we see that the section $K^{\frac{1}{2}}$ is transverse if and only if \begin{gather*}\forall\, A,B\in {\mathbb{C}}, \qquad \begin{cases} -A \partial\log h + B &= 0,\\ A +\bar{A}h^{-2}\bar{q_2} &= 0 \end{cases} \qquad \Rightarrow \qquad A=B=0. \end{gather*} If $A\neq 0$, the second equation is equivalent to \begin{gather*}\frac{A}{\bar{A}} = -h^{-2}\bar{q_2}. \end{gather*} This cannot be satisfied because of the following lemma: \begin{Lemma}[Hitchin \cite{selfduality}] In the above setup, we have \begin{gather*} \big| h^{-2}\bar{q_2} \big| < 1. \end{gather*} \end{Lemma} \begin{proof} If $q_2=0$, this is obvious. Otherwise, it was proved by Hitchin \cite{selfduality} by applying the maximum principle. \end{proof} We have found a graph of a ${\mathbb{CP}}^1$-structure, $\big({\mathbb{P}}(E),K^{\frac{1}{2}}\big)$. We denote by $D\colon \widetilde{\Sigma} \rightarrow {\mathbb{CP}}^1$ the corresponding developing map.
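The final step is elementary linear algebra: writing $c = h^{-2}\bar{q_2}$, the system has only the trivial solution exactly when the ${\mathbb{R}}$-linear map $A\mapsto A + c\bar{A}$ on ${\mathbb{C}}\cong{\mathbb{R}}^2$ is injective, and its determinant is $1-|c|^2$, which Hitchin's lemma makes positive. A small numerical sanity check of this determinant identity (ours, not from the text):

```python
import numpy as np

def real_matrix(c):
    """The R-linear map A -> A + c*conj(A) on C = R^2, in the basis (1, i)."""
    return np.array([[1 + c.real, c.imag],
                     [c.imag, 1 - c.real]])

rng = np.random.default_rng(0)
for _ in range(100):
    z = rng.standard_normal() + 1j * rng.standard_normal()
    c = rng.uniform(0.0, 0.99) * z / abs(z)   # a sample value with |c| < 1
    M = real_matrix(c)
    # determinant 1 - |c|^2 is positive, so A = 0 is the only solution
    assert abs(np.linalg.det(M) - (1 - abs(c) ** 2)) < 1e-10
    assert np.linalg.det(M) > 0
```

Once $A=0$ is forced, the first equation of the system gives $B=0$ as well, which is exactly the transversality condition.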
We can now check that the image of this map never meets ${\mathbb{RP}}^1$: indeed, we wrote the real structure $\tau$ explicitly, and it is easy to check that $K^{\frac{1}{2}}$ is never in the real locus: \begin{gather*}\tau\begin{pmatrix} 1\\0 \end{pmatrix} = \begin{pmatrix} 0\\h^{-1} \end{pmatrix}. \end{gather*} Hence we have a developing map \begin{gather*}D\colon \ \widetilde{\Sigma} \rightarrow {\mathbb{H}}^2.\end{gather*} The holonomy is in ${\mathsf{PSL}}(2,{\mathbb{R}})$, hence this ${\mathbb{CP}}^1$-structure is actually an ${\mathbb{H}}^2$-structure. The map $D$ actually coincides with the harmonic map to the symmetric space coming from solving Hitchin's equations. The fact that $D$ is a local diffeomorphism was proved by Sampson~\cite{Sampson}, Wolf~\cite{TeichOfHarmonic} and Hitchin~\cite{selfduality}; the proof given here is Hitchin's. The case when $q_2=0$ is the easiest, but it is particularly interesting. Fuchsian Higgs bundles with $q_2=0$ are called \defin{uniformizing Higgs bundles}, because they give an alternative proof of a~version of the uniformization theorem. This was done by Hitchin~\cite{selfduality} with essentially the same proof we give here, but without mentioning geometric structures. \begin{Theorem}[uniformization theorem] Every complex structure $\Sigma$ on a closed surface~$S$ admits a conformal Riemannian metric of constant curvature~$-1$. \end{Theorem} \begin{proof}Choose a square root $K^{\frac{1}{2}}$ of the canonical bundle, and take the uniformizing Higgs bundle with that square root. The equivariant map $D$ constructed above is now conformal: to see this, notice that \begin{gather*} \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s = \begin{pmatrix} 0\\h^{-2}\bar{q_2} \end{pmatrix} = 0.\end{gather*} Hence, the pull-back of the hyperbolic metric on ${\mathbb{H}}^2$ is conformal, and it has curvature $-1$.
\end{proof} \subsection{Almost-Fuchsian representations} \label{subsection:almost fuchsian} Given a Fuchsian representation in the character variety ${\mathcal X}(\pi_1(S),{\mathsf{PGL}}(2,{\mathbb{R}}))$, we want to deform it inside $\QFuch(S) \subset {\mathcal X}(\pi_1(S),{\mathsf{PGL}}(2,{\mathbb{C}}))$, the space of quasi-Fuchsian representations. These representations have a very interesting geometry, and they are the holonomies of some very special ${\mathbb{CP}}^1$-structures, called quasi-Fuchsian ${\mathbb{CP}}^1$-structures. \begin{Definition} Consider a homeomorphism $f\colon {\mathbb{CP}}^1 \rightarrow {\mathbb{CP}}^1$ that topologically conjugates the action of a Fuchsian representation with the action of a quasi-Fuchsian representation $\rho$. Then the open subset $f\big({\mathbb{H}}^2\big)$ is a domain of discontinuity for $\rho$, and $S = f\big({\mathbb{H}}^2\big)/\rho(\pi_1(S))$ is a surface with a ${\mathbb{CP}}^1$-structure, which is called a \defin{quasi-Fuchsian ${\mathbb{CP}}^1$-structure}. \end{Definition} We would like to see the quasi-Fuchsian ${\mathbb{CP}}^1$-structures in terms of Higgs bundles, but we are not able to do this in full generality. We can do this for a special open subset of the quasi-Fuchsian representations, which is called the space of almost-Fuchsian representations. The material in this section is part of a joint work with Qiongling Li \cite{NilpotentCone}. Let's start with a uniformizing Higgs bundle \begin{gather*}\left(K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}}, \ \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix} \right).\end{gather*} We can deform this Higgs bundle for ${\mathsf{SL}}(2,{\mathbb{R}})$ to a Higgs bundle for ${\mathsf{SL}}(2,{\mathbb{C}})$ by changing the holomorphic structure of the vector bundle.
Consider the vector bundle \begin{gather*} E = K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}} \end{gather*} endowed with the following holomorphic structure: \begin{gather*}\bar{\partial}_E = \bar{\partial} + \begin{pmatrix} 0 & 0\\ \beta & 0 \end{pmatrix},\end{gather*} with $\beta \in \Omega^{0,1}\big(\Sigma,K^{-1}\big)$. In the formula, $\bar{\partial}$ is the standard holomorphic structure of the direct sum, which is modified by adding a correction term. Such a bundle is an extension \begin{gather*}0 \rightarrow K^{-\frac{1}{2}}\rightarrow E \rightarrow K^{\frac{1}{2}} \rightarrow 0.\end{gather*} These extensions are classified by the Dolbeault cohomology class $[\beta] \in H^1\big(\Sigma,K^{-1}\big)$, a space isomorphic, by Serre duality, to the dual of the space of quadratic differentials on $\Sigma$. Different $\beta$'s in the same cohomology class give rise to isomorphic vector bundles. The choice of the representative $\beta$ in the class corresponds to a choice of a non-holomorphic section $K^{\frac{1}{2}} \rightarrow E$, whose image is the non-holomorphic subbundle appearing in the direct sum. We now consider the Higgs bundle $(E,\varphi)$, where \begin{gather*}E = \big(K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}}, \bar{\partial}_E \big), \qquad \varphi = \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}. \end{gather*} The Higgs bundles of this form are parametrized by the pair $(\Sigma,[\beta])$. For every quasi-Fuchsian representation $\rho$, there exists a lift $\bar{\rho}\colon \pi_1(S)\rightarrow {\mathsf{SL}}(2,{\mathbb{C}})$ and a pair $(\Sigma,[\beta])$ such that the flat connection of the corresponding Higgs bundle has monodromy~$\bar{\rho}$ (see~\cite{SandersThesis}). This pair is not unique in general. Moreover, not all the pairs $(\Sigma,[\beta])$ give rise to a quasi-Fuchsian monodromy.
It is an open problem to distinguish them: \begin{Question} Given a complex structure $\Sigma$, how can we characterize the classes $[\beta]{\in} H^1\!\big(\Sigma{,}K^{-1}\!\big)$ such that the flat connection of the corresponding Higgs bundle has quasi-Fuchsian monodromy? \end{Question} Answering this question was our initial motivation for trying to construct the quasi-Fuchsian ${\mathbb{CP}}^1$-structures using Higgs bundles, but, as explained above, we still cannot construct all of them. Let's now fix a pair $(\Sigma,[\beta])$. To construct a ${\mathbb{CP}}^1$-structure on $\Sigma$, we need to choose a line subbundle. We choose the holomorphic subbundle $K^{-\frac{1}{2}}$, and we then have to verify the transversality conditions. Denote by $H$ the solution of Hitchin's equations for the corresponding Higgs bundle. We can choose the representative $\beta$ in the Dolbeault cohomology class so that the non-holomorphic subbundle $K^{\frac{1}{2}}$ is $H$-orthogonal to the holomorphic subbundle $K^{-\frac{1}{2}}$. With this choice, we can write~$H$ as \begin{gather*}H = \begin{pmatrix} h^{-1} & 0\\ 0 & h \end{pmatrix}. \end{gather*} We can now write the flat connection: \begin{gather*}\nabla = d + \begin{pmatrix} -\partial \log h & h^2 \overline{(1+\beta)}\\ 1+\beta & \partial \log h \end{pmatrix}.
\end{gather*} Given a local section $s$ of $K^{-\frac{1}{2}}$, we can compute the derivatives: \begin{gather*}s = \begin{pmatrix} 0\\1 \end{pmatrix}, \qquad \nabla_{\!\!\!\frac{\partial}{\partial z}}s = \begin{pmatrix} h^2\bar{\beta}\\ \partial\log h \end{pmatrix}, \qquad \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s = \begin{pmatrix} h^2\\0 \end{pmatrix}.\end{gather*} As in the previous subsection, the transversality condition from Proposition \ref{prop:transversality condition} is equivalent to the following condition: the section $K^{-\frac{1}{2}}$ is transverse if and only if \begin{gather*}\forall\, A,B\in {\mathbb{C}}, \qquad A \nabla_{\!\!\!\frac{\partial}{\partial z}}s + \bar{A} \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s + B s = 0 \qquad \Rightarrow \qquad A=B=0. \end{gather*} Substituting, we see that the section $K^{-\frac{1}{2}}$ is transverse if and only if \begin{gather*}\forall\, A,B\in {\mathbb{C}}, \qquad \begin{cases} Ah^2\bar{\beta} + \bar{A}h^{2} = 0,\\ A \partial\log h + B = 0 \end{cases} \qquad \Rightarrow \qquad A=B=0. \end{gather*} If $A\neq 0$, the first equation is equivalent to \begin{gather*}\frac{\bar{A}}{A} = -\bar{\beta}. \end{gather*} If $|\beta|<1$, this cannot be satisfied, hence the section is transverse. The condition $|\beta|<1$ is well known; see Uhlenbeck \cite{Uhlenbeck}: \begin{Definition} A representation $\rho\colon \pi_1(S)\rightarrow {\mathsf{PGL}}(2,{\mathbb{C}})$ is called \defin{almost-Fuchsian} if it is the projectivization of the monodromy of the flat connection of a Higgs bundle associated with a~pair $(\Sigma,[\beta])$, with $|\beta|<1$. \end{Definition} Almost-Fuchsian representations are a special type of quasi-Fuchsian representations with very good analytic properties.
Summarizing, we find the following: \begin{Theorem}[Alessandrini--Li \cite{NilpotentCone}]Let $\rho\colon \pi_1(S)\rightarrow {\mathsf{PGL}}(2,{\mathbb{C}})$ be an almost-Fuchsian representation corresponding to the Higgs bundle $(E,\varphi)$ defined by the pair $(\Sigma,[\beta])$. Then the holomorphic line subbundle $K^{-\frac{1}{2}} \subset E$ induces a quasi-Fuchsian ${\mathbb{CP}}^1$-structure with holonomy~$\rho$. \end{Theorem} \subsection{Convex real projective structures} \begin{Definition}An ${\mathbb{RP}}^2$-structure on a closed surface $S$ is said to be a \defin{convex ${\mathbb{RP}}^2$-structure} if the developing map \begin{gather*}D\colon \ \widetilde{S}\rightarrow {\mathbb{RP}}^2 \end{gather*} is a diffeomorphism onto an open convex subset of ${\mathbb{RP}}^2$. \end{Definition} Examples of convex ${\mathbb{RP}}^2$-structures were given in Example \ref{exa:geometric manifolds}, where we have seen that every ${\mathbb{H}}^2$-structure on $S$ produces such an ${\mathbb{RP}}^2$-structure via the Klein model. The subset of ${\mathcal D}_{{\mathbb{RP}}^2}(S)$ consisting of convex real projective structures will be denoted by ${\mathcal D}^{\mathrm{conv}}_{{\mathbb{RP}}^2}(S)$. The holonomy of these structures is always reductive, hence we have \begin{gather*}\mathrm{Hol}\colon \ {\mathcal D}^{\mathrm{conv}}_{{\mathbb{RP}}^2}(S) \rightarrow {\mathcal X}(\pi_1(S), {\mathsf{PGL}}(3,{\mathbb{R}})).\end{gather*} Goldman \cite{GoldmanConvex} proved that ${\mathcal D}^{\mathrm{conv}}_{{\mathbb{RP}}^2}(S)$ is connected, hence the image of $\mathrm{Hol}$ lies in the Hitchin component $\Hit(S,3)$. Choi--Goldman~\cite{ChoiGoldmanRP2} proved that $\mathrm{Hol}$ gives a homeomorphism between ${\mathcal D}^{\mathrm{conv}}_{{\mathbb{RP}}^2}(S)$ and $\Hit(S,3)$. This gives a nice geometric interpretation of the Hitchin component as the parameter space of convex ${\mathbb{RP}}^2$-structures on the surface.
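The Higgs bundles realizing these structures (described next) have a cyclic Higgs field, with $1$'s below the diagonal and the cubic differential $q_3$ in the top-right corner. As a quick sanity check (our observation, verified numerically at a sample value of $q_3$, not a statement quoted from the text), such a matrix is traceless and satisfies $\varphi^3 = q_3\,\mathrm{Id}$, the algebraic hallmark of cyclic Higgs bundles:

```python
import numpy as np

q3 = 2.7   # a sample value of the cubic differential at a point
phi = np.array([[0.0, 0.0, q3],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
assert np.trace(phi) == 0.0                                      # traceless
assert np.allclose(np.linalg.matrix_power(phi, 3), q3 * np.eye(3))  # phi^3 = q3*Id
```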
In his thesis~\cite{BaragliaThesis}, Baraglia showed how to see these convex ${\mathbb{RP}}^2$-structures using Higgs bundles. Every $\rho\in \Hit(S,3)$ admits a lift to a representation $\bar{\rho}\colon \pi_1(S)\rightarrow {\mathsf{SL}}(3,{\mathbb{R}})$. By a theorem of Loftin~\cite{AffSpheresConvexRPn} and Labourie~\cite{LabourieCubic}, there exist a complex structure $\Sigma$ and a cubic differential $q_3 \in H^0\big(\Sigma,K^3\big)$ such that the representation $\bar{\rho}$ is the monodromy of the flat connection of the Higgs bundle $(E,\varphi)$, where \begin{gather*} E = K \oplus {\mathcal O} \oplus K^{-1}, \qquad \varphi = \begin{pmatrix} 0 & 0 & q_3\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}. \end{gather*} Baraglia \cite{BaragliaThesis} proved that the solution $H$ of Hitchin's equations for this Higgs bundle is diagonal: \begin{gather*}H = \begin{pmatrix} h^{-1} & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & h \end{pmatrix}.\end{gather*} To construct an ${\mathbb{RP}}^2$-structure on $\Sigma$, we can choose the section given by the line subbundle ${\mathcal O}$. We can verify in the usual way that this section is transverse, hence it gives an ${\mathbb{RP}}^2$-structure, which can be checked to be convex. When $q_3=0$, the representation takes values in ${\mathsf{SO}}(1,2)$, and the convex set is precisely an ellipsoid, the Klein model of the hyperbolic plane. \section{Higher-dimensional manifolds} \label{sec:higher dimension} One limitation of the method described in the previous section is that Higgs bundles can only describe flat bundles on surfaces. We would like to apply similar methods to construct geometric structures on higher-dimensional manifolds, but we need to find a good way to describe the flat bundle. This is possible in some special cases, when the representation factors through a surface group.
\subsection{Sections of the holonomy map} \label{subsec:geometric interpretation of characters} Let $N$ be a closed manifold and ${\mathsf{G}}$ be a reductive Lie group. Consider the character variety \begin{gather*}{\mathcal X}(\pi_1(N),{\mathsf{G}}). \end{gather*} Sometimes, it is possible to find special open subsets ${\mathcal U} \subset {\mathcal X}(\pi_1(N),{\mathsf{G}})$ which parametrize geometric structures on $N$. To give a meaning to this, we first need to find a manifold $X$ with a transitive and effective action of ${\mathsf{G}}$ and $\dim(X)=\dim(N)$. Consider then the holonomy map \begin{gather*}\mathrm{Hol}\colon \ {\mathcal D}^*_{(X,{\mathsf{G}})}(N) \rightarrow {\mathcal X}(\pi_1(N),{\mathsf{G}}).\end{gather*} We want to find an open subset ${\mathcal U} \subset {\mathcal X}(\pi_1(N),{\mathsf{G}})$ and a map \begin{gather*}T\colon \ {\mathcal U} \rightarrow {\mathcal D}^*_{(X,{\mathsf{G}})}(N)\end{gather*} such that $\mathrm{Hol} \circ T = \mathrm{Id}_{\mathcal U}$. Such a map $T$ is a section of the holonomy map on ${\mathcal U}$. Finding such a $T$ gives a geometric interpretation to the open subset ${\mathcal U}$: it becomes a parameter space for a~special subset of $(X,{\mathsf{G}})$-structures on $N$. \begin{Example}In the previous section, we have seen some very interesting examples of this construction: \begin{gather*} {\mathcal X}(\pi_1(S), {\mathsf{PGL}}(2,{\mathbb{R}})) \supset \Hit(S,2)\rightarrow {\mathcal D}_{{\mathbb{H}}^2}(S) = {\mathcal T}(S),\\ {\mathcal X}(\pi_1(S), {\mathsf{PGL}}(2,{\mathbb{C}})) \supset \QFuch(S) \rightarrow {\mathcal D}_{{\mathbb{CP}}^1}(S),\\ {\mathcal X}(\pi_1(S), {\mathsf{PGL}}(3,{\mathbb{R}})) \supset \Hit(S,3)\rightarrow {\mathcal D}^{\mathrm{conv}}_{{\mathbb{RP}}^2}(S) \subset {\mathcal D}_{{\mathbb{RP}}^2}(S). 
\end{gather*} \end{Example} If we want to find more examples like these, the hypothesis that $\dim(X)=\dim(N)$ becomes a~serious problem: for some groups ${\mathsf{G}}$ we don't have homogeneous spaces of the correct dimension. To relax this condition, we will look for geometric structures on another closed manifold $M$. At this point, we don't even need $N$ to be a manifold: the role of $\pi_1(N)$ will be played by a finitely generated group $\Gamma$. Consider the character variety \begin{gather*}{\mathcal X}(\Gamma,{\mathsf{G}}). \end{gather*} We want to use an open subset of it to parametrize $(X,{\mathsf{G}})$-structures on a~closed manifold $M$ (with $\dim(M)=\dim(X)$) which is related to $\Gamma$ by a~group homomorphism $\alpha\colon \pi_1(M) \rightarrow \Gamma$. This group homomorphism induces a map \begin{gather*}\alpha^*\colon \ {\mathcal X}(\Gamma,{\mathsf{G}}) \ni \rho \mapsto \rho \circ \alpha \in {\mathcal X}(\pi_1(M),{\mathsf{G}}).\end{gather*} We want to find an open subset ${\mathcal U} \subset {\mathcal X}(\Gamma,{\mathsf{G}})$ and a map \begin{gather*}T\colon \ {\mathcal U} \rightarrow {\mathcal D}^*_{(X,{\mathsf{G}})}(M)\end{gather*} such that $\mathrm{Hol} \circ T = \alpha^*|_{\mathcal U}$. Finding such a map $T$ gives a geometric interpretation to the open subset ${\mathcal U}$ as a parameter space for a special subset of $(X,{\mathsf{G}})$-structures on $M$. Many examples of this scenario come from the theory of domains of discontinuity for Anosov representations in geometries of parabolic type (see the discussion at the end of Section~\ref{subsec:morphisms}, Guichard and Wienhard \cite{GWDomainsofDiscont} and Kapovich, Leeb and Porti \cite{KLPAnosov1}). Assume that ${\mathsf{G}}$ is semi-simple, ${\mathsf{P}} \subset {\mathsf{G}}$ is a parabolic subgroup, $\Gamma$ is Gromov-hyperbolic and torsion-free, and ${\mathcal U}$ is a connected component of ${\mathsf{P}}\text{-}\Anosov(\Gamma,{\mathsf{G}})$.
Then, we need to choose a geometry $(X,{\mathsf{G}})$ of parabolic type which is in a special relation with ${\mathsf{P}}$, so that the theory of domains of discontinuity guarantees the existence of a co-compact domain of discontinuity $\Omega_\rho \subset X$ for every ${\mathsf{P}}$-Anosov representation $\rho$. Under these hypotheses, the topology of the manifold $M=\Omega_\rho/\rho(\Gamma)$ does not depend on $\rho$, and the map \begin{gather*}T\colon \ {\mathcal U} \ni \rho \rightarrow \Omega_\rho/\rho(\Gamma) \in {\mathcal D}^*_{(X,{\mathsf{G}})}(M) \end{gather*} has all the properties listed above. In this way, we give a geometric interpretation to many open connected subsets of Anosov representations, as parameter spaces for $(X,{\mathsf{G}})$-structures on a closed manifold $M$. For an example where these hypotheses are satisfied, see Section~\ref{subsec:dod}. Even if we know the group $\Gamma$, we usually have no idea what the topology of $M$ is: \begin{Question} \label{question:topology of M} For some connected component ${\mathcal U}$ of ${\mathsf{P}}\text{-}\Anosov(\Gamma,{\mathsf{G}})$, understand the map \begin{gather*}T\colon \ {\mathcal U} \rightarrow {\mathcal D}^*_{(X,{\mathsf{G}})}(M). \end{gather*} The first step is to determine the topology of $M$. \end{Question} \subsection{Transverse maps and transverse submanifolds} Let $\Gamma$ be a finitely generated group, and $\rho\colon \Gamma \rightarrow {\mathsf{G}}$ a representation. \begin{Definition}Let $M$ be a manifold. A representation $\bar{\rho}\colon \pi_1(M) \rightarrow {\mathsf{G}}$ \defin{factors through $\rho$} if there exists a group homomorphism $\alpha\colon \pi_1(M) \rightarrow \Gamma$ such that $\bar{\rho} = \rho \circ \alpha$. \end{Definition} Assume now that $\Gamma = \pi_1(N)$ for some manifold $N$.
If $N$ is aspherical (for example, if $N$ is a surface), then every group homomorphism $\alpha\colon \pi_1(M) \rightarrow \pi_1(N)$ is induced by a smooth map $f\colon M \rightarrow N$, i.e., $\alpha = f_*$. If $N$ is not aspherical, this is not automatic. \begin{Definition} Let $\rho\colon \pi_1(N) \rightarrow {\mathsf{G}}$ be a representation. A representation $\bar{\rho}\colon \pi_1(M) \rightarrow {\mathsf{G}}$ \defin{strongly factors through $\rho$} if there exists a smooth map $f\colon M \rightarrow N$ such that $\bar{\rho} = \rho \circ f_*$. \end{Definition} In this case, the representation $\bar{\rho}$ is the monodromy of a flat bundle $\bar{p}\colon \bar{B} \rightarrow M$, with fiber $X$, and $\rho$ is the monodromy of a flat bundle $p\colon B\rightarrow N$, with fiber $X$, where $(X,{\mathsf{G}})$ is a geometry. The former bundle is isomorphic to the pull-back of the latter by the map~$f$: \begin{gather*} \bar{B} = f^* B.\end{gather*} Consider the following commutative diagram: \begin{gather*}\begin{matrix} \bar{B} & \stackrel{f_+}{{\longrightarrow}} & B \\ \scriptstyle{\bar{p}}\Big\downarrow \ & & \scriptstyle{p}\Big\downarrow \ \\ M & \stackrel{f}{{\longrightarrow}} & N. \end{matrix} \end{gather*} \begin{Proposition} There is a homeomorphism between $\Gamma(M,\bar{B})$, the space of smooth sections of $\bar{B}$, and the space of smooth functions $s\colon M \rightarrow B$ satisfying $p \circ s = f$, endowed with the $C^\infty$ topology. The homeomorphism is given by \begin{gather*}\Gamma(M,\bar{B}) \ni \bar{s} \rightarrow s = \bar{s} \circ f_+ \in C^\infty(M,B), \end{gather*} where $f_+\colon \bar{B} \rightarrow B$ is the map given by the pull-back. \end{Proposition} Now let's change perspective and think that we don't know the map $f\colon M\rightarrow N$ in advance. Let's just start from a map $s\colon M \rightarrow B$. 
From $s$, we can construct a~map $f = p \circ s\colon M \rightarrow N$, a~representation $\bar{\rho} = \rho \circ f_*$, a flat bundle $\bar{B} = f^* B$ and a section $\bar{s} \in \Gamma(M,f^* B)$. We will call the section $\bar{s}$ the \defin{tautological section}, because it is just a~reinterpretation of the map $s$ as a section of a bundle. \begin{Definition}We will say that a smooth map $M \rightarrow B$ is a \defin{transverse map} if it is transverse to the parallel foliation of the flat bundle $B$. \end{Definition} \begin{Proposition}The map $s$ is a transverse map if and only if the tautological section $\bar{s} \in \Gamma(M,f^* B)$ is a transverse section. \end{Proposition} We can summarize the constructions in this subsection with the following proposition. \begin{Proposition}Let $\rho\colon \pi_1(N) \rightarrow {\mathsf{G}}$, and $B$ be a flat bundle with holonomy $\rho$ and fiber $X$, where $(X,{\mathsf{G}})$ is a geometry. Every $(X,{\mathsf{G}})$-structure on some manifold $M$ with holonomy that strongly factors through $\rho$ comes from a transverse map $M \rightarrow B$. \end{Proposition} From this proposition we see that we can construct geometric structures on a manifold of higher dimension than $N$, by only understanding the parallel foliation of a flat bundle over $N$. Recall that, when $\dim(M) = \dim(X)$, transverse maps are always immersions. An interesting special case is when the transverse map is an embedding. In that case we can identify the map with its image, a submanifold of $B$. Even more interesting is the case when the submanifold is a subbundle of $B$ (not necessarily with structure group ${\mathsf{G}}$). \begin{Definition}A \defin{transverse submanifold of $B$} is a submanifold that is transverse to the parallel foliation of $B$. A \defin{transverse subbundle} is a subbundle that is a transverse submanifold.
\end{Definition} Transverse submanifolds and transverse subbundles of $B$ can be constructed without any a~priori knowledge of $M$, for example as zero loci of systems of equations defined on $B$. In the case of a transverse subbundle, we can hope to get an explicit description of its topology. This discussion suggests a reformulation of Question~\ref{question:topology of M}: \begin{Question} With the notation of Question \ref{question:topology of M}, add the hypothesis that $\Gamma=\pi_1(N)$ is the fundamental group of a~closed aspherical manifold $N$. Is it true that $M$ is homeomorphic to a~fiber bundle over~$N$? \end{Question} This question generalizes a conjecture by Dumas and Sanders: \begin{Conjecture}[Dumas--Sanders \cite{DumasSanders}] \label{conj:dumas sanders}Let ${\mathsf{G}}$ be a simple complex Lie group and $\rho\colon \pi_1(S)\rightarrow {\mathsf{G}}$ be a quasi-Hitchin representation. Consider a geometry $(X,{\mathsf{G}})$ of parabolic type, and assume that~$\rho$ has a co-compact domain of discontinuity $\Omega \subset X$ coming from the construction of Kapovich, Leeb and Porti~{\rm \cite{KLPAnosov1}}. Then the manifold $\Omega/\rho(\pi_1(S))$ admits a continuous fiber bundle map to the surface~$S$. \end{Conjecture} We can prove this conjecture in some special cases; see Section~\ref{subsec:dod}. In the following we will assume that $N$ is a closed surface. In this case we can use Higgs bundles to describe the bundle $B$ and construct geometric structures on higher-dimensional manifolds by finding transverse subbundles of~$B$. \section{Geometric structures on circle bundles over surfaces} \label{sec:circle bundles} We will now show examples where we can construct transverse subbundles which are $3$-dimen\-sio\-nal manifolds. In this case they are circle bundles over surfaces. The case of manifolds of dimension higher than $3$ is more complicated, and it will be treated in the next section.
\subsection{Convex-foliated real projective structures} Let's consider the Hitchin component \begin{gather*}\Hit(S,4) \subset {\mathcal X}(\pi_1(S),{\mathsf{PGL}}(4,{\mathbb{R}})).\end{gather*} Here we can see an example of the ideas explained in Section~\ref{subsec:geometric interpretation of characters}: this component can be interpreted as a parameter space for a special subset of ${\mathbb{RP}}^3$-structures on $T^1 S$, the unit tangent bundle of the surface. Guichard--Wienhard \cite{convexfoliatedprojective} proved that every $\rho\in\Hit(S,4)$ has a co-compact domain of discontinuity $\Omega_\rho \subset {\mathbb{RP}}^3$ which has two connected components $\Omega_\rho = \Omega^+_\rho \cup \Omega^-_\rho$. One of the two (say $\Omega^+_\rho$) has the property that the quotient $\Omega_\rho^+/\rho(\pi_1(S))$ is homeomorphic to $T^1 S$, and that its ${\mathbb{RP}}^3$-structure is of a special type, called a convex foliated ${\mathbb{RP}}^3$-structure. This gives a map \begin{gather*}\Hit(S,4) \rightarrow {\mathcal D}_{{\mathbb{RP}}^3}\big(T^1 S\big) \end{gather*} that they prove to be a homeomorphism onto the connected component of ${\mathcal D}_{{\mathbb{RP}}^3}\big(T^1 S\big)$ consisting exactly of the convex foliated ${\mathbb{RP}}^3$-structures on $T^1 S$ \cite{convexfoliatedprojective}. We can construct some of these structures using Higgs bundles. In this way we can see some properties of these structures that were not known before. Every $\rho\in \Hit(S,4)$ admits a lift to a representation $\bar{\rho}\colon \pi_1(S)\rightarrow {\mathsf{SL}}(4,{\mathbb{R}})$.
By a theorem of Labourie \cite{LabourieEnergy}, there exists a complex structure $\Sigma$, a square root $K^{\frac{1}{2}}$ of the canonical bundle and differentials $q_3$, $q_4$ with $q_3 \in H^0\big(\Sigma,K^3\big)$ and $q_4 \in H^0\big(\Sigma,K^4\big)$ such that the representation $\bar{\rho}$ is the monodromy of the flat connection of the following Higgs bundle: \begin{gather*} E = K^{\frac{3}{2}} \oplus K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}} \oplus K^{-\frac{3}{2}}, \qquad \varphi = \begin{pmatrix} 0 & 0 & q_3 & q_4\\ 1 & 0 & 0 & q_3\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{pmatrix}. \end{gather*} In his thesis \cite{BaragliaThesis}, Baraglia considered the special case when the image of $\rho$ is contained in ${\mathsf{PSp}}(4,{\mathbb{R}})$. In terms of Higgs bundles, this corresponds to the case when $q_3=0$. Such Higgs bundles belong to a special class called cyclic Higgs bundles, and Baraglia proved that, in this case, the solution $H$ of Hitchin's equations is diagonal \begin{gather*}H = \begin{pmatrix} h_1 & 0 & 0 & 0\\ 0 & h_2 & 0 & 0\\ 0 & 0 & h_3 & 0\\ 0 & 0 & 0 & h_4 \end{pmatrix}.\end{gather*} We can then write the real structure \begin{gather*}\tau\colon \ E \ni \begin{pmatrix} v_1\\v_2\\v_3\\v_4 \end{pmatrix} \rightarrow \begin{pmatrix} h_4 \bar{v_4}\\ h_3\bar{v_3}\\ h_2\bar{v_2}\\ h_1 \bar{v_1} \end{pmatrix} \in E. \end{gather*} The real locus of $E$ is the real vector bundle \begin{gather*}\Real(E) = \{v\in E \,|\, \tau(v)= v\}. \end{gather*} We want to construct ${\mathbb{RP}}^3$-structures, hence we set $X={\mathbb{RP}}^3$. The flat bundle $B$ with monodromy $\rho$ and fiber $X$ is $B = {\mathbb{P}}(\Real(E))$. We now want to find a transverse subbundle. Consider \begin{gather*}M = {\mathbb{P}}\left(\left\{ \begin{pmatrix} 0\\v_2\\v_3\\0 \end{pmatrix} \in B \,\Big|\, v_2 = h_3 \bar{v_3} \right\}\right). \end{gather*} This is a circle bundle over $\Sigma$, isomorphic to the unit tangent bundle.
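To see why the condition $v_2 = h_3 \bar{v_3}$ cuts out a circle in each fiber, here is a short sketch (using that, for these cyclic solutions, the diagonal metric satisfies $h_2 h_3 = 1$, as follows from the compatibility of $H$ with the symmetric pairing of the ${\mathsf{SL}}(4,{\mathbb{R}})$-Higgs bundle): the real structure acts by \begin{gather*}\tau\colon \ \begin{pmatrix} 0\\ v_2\\ v_3\\ 0 \end{pmatrix} \rightarrow \begin{pmatrix} 0\\ h_3 \bar{v_3}\\ h_2 \bar{v_2}\\ 0 \end{pmatrix}, \end{gather*} so a vector satisfying $v_2 = h_3 \bar{v_3}$ is fixed by $\tau$ exactly when $v_3 = h_2 \bar{v_2} = h_2 h_3 v_3$, which holds since $h_2 h_3 = 1$. In each fiber, the solutions form a real $2$-plane inside $\Real(E)$, parametrized by $v_3 \in {\mathbb{C}}$, and its projectivization is a circle.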
To check that it is transverse, we will put local coordinates on $M$, using a local holomorphic coordinate $z$ on $\Sigma$, and a real coordinate $\theta$ on the circle fiber. For every local section $s$, we compute the derivatives \begin{gather*}\nabla_{\!\!\!\frac{\partial}{\partial z}}s, \qquad \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s, \qquad \nabla_{\!\!\!\frac{\partial}{\partial \theta}}s. \end{gather*} We can then check the transversality condition: \begin{gather*}\forall\, A\in {\mathbb{C}}, \ \forall\, B,C\in {\mathbb{R}}, \qquad A \nabla_{\!\!\!\frac{\partial}{\partial z}}s + \bar{A} \nabla_{\!\!\!\frac{\partial}{\partial \bar{z}}}s + B \nabla_{\!\!\!\frac{\partial}{\partial \theta}}s + C s = 0 \qquad \Rightarrow \qquad A=B=C=0. \end{gather*} After the transversality condition has been verified, Baraglia also proves that these ${\mathbb{RP}}^3$-struc\-tu\-res are convex foliated. In this way, he obtains the following new result: \begin{Theorem}[Baraglia \cite{BaragliaThesis}]For every convex foliated ${\mathbb{RP}}^3$-structure on $T^1 S$ with holonomy in ${\mathsf{PSp}}(4,{\mathbb{R}})$, there exists a map $T^1 S \rightarrow S$ which is a circle bundle and has the property that every circle fiber is a projective line for the ${\mathbb{RP}}^3$-structure. \end{Theorem} This statement is completely geometric, but there is no known geometric proof of it; the only known proof is the one using Higgs bundles. We conjecture that the same statement holds in general, for all convex foliated ${\mathbb{RP}}^3$-structures, even when the holonomy is not restricted to lie in~${\mathsf{PSp}}(4,{\mathbb{R}})$. Working with Qiongling Li, we explored and modified this construction.
With the same Higgs field as before, we noticed that the other choice of subbundle \begin{gather*}M' = {\mathbb{P}}\left( \left\{ \begin{pmatrix} v_1\\0\\0\\v_4 \end{pmatrix} \in B \,\Big|\, v_1 = h_4 \bar{v_4} \right\}\right) \end{gather*} is also a transverse subbundle, and gives the other ${\mathbb{RP}}^3$-structure we discussed before, the one given by $\Omega_\rho^-/\rho(\pi_1(S))$. This is a circle bundle over $S$ with Euler class $6g-6$. We then changed the Higgs field, considering the case when $q_4=0$, but $q_3\neq 0$. For this kind of Higgs bundle, the solutions of Hitchin's equations are again diagonal (see Collier--Li~\cite{CollierLi}), and the construction can be applied in a similar way. Moreover, since the transversality condition is an open condition, we can also understand the cases when at least one of $q_3$, $q_4$ is small enough: \begin{Theorem}[Alessandrini--Li, work in progress]Let $\rho \in \Hit(S,4)$ be a representation such that the corresponding Higgs bundle has the form given above, with the additional hypothesis that at least one of $q_3$ and $q_4$ is small enough. Then the two ${\mathbb{RP}}^3$-structures $M=\Omega^+/\rho(\pi_1(S))$ and $M'=\Omega^-/\rho(\pi_1(S))$ have the property that there exist maps $M \rightarrow S$ and $M' \rightarrow S$ which are circle bundles such that every circle fiber is a projective line for the ${\mathbb{RP}}^3$-structure. \end{Theorem} \subsection{Closed anti-de Sitter 3-manifolds} \label{subsec:ads} Consider a symmetric bilinear form $Q$ on ${\mathbb{R}}^4$ of signature $(2,2)$. This form is preserved by the group ${\mathsf{O}}(2,2)$ which has four connected components. The connected component of the identity is called ${\mathsf{SO}}_0(2,2)$. On ${\mathbb{RP}}^3$, this form defines the open subset \begin{gather*}{\rm AdS}^3 = \big\{[v] \in {\mathbb{RP}}^3 \,|\, Q(v,v) > 0 \big\}.
\end{gather*} Let ${\mathsf{PO}}(2,2)$ be the projectivization of ${\mathsf{O}}(2,2)$, and ${\mathsf{PO}}_0(2,2)$ the connected component of the identity. The geometry $\big({\rm AdS}^3, {\mathsf{PO}}(2,2)\big)$ is called the $3$-dimensional \defin{anti-de Sitter geometry}, and it is a geometry of pseudo-Riemannian type, carrying an invariant pseudo-Riemannian metric of signature~$(2,1)$. We can construct the bilinear form $Q$ in the following special way. Consider the vector space~${\mathbb{R}}^2$ with a volume form $\omega$. This volume form is preserved by the group ${\mathsf{SL}}(2,{\mathbb{R}})$. On the tensor product ${\mathbb{R}}^4 = {\mathbb{R}}^2\otimes {\mathbb{R}}^2$ we have a bilinear form $Q = \omega \otimes \omega$. This bilinear form is symmetric and it has signature $(2,2)$. This construction shows us how ${\mathsf{SL}}(2,{\mathbb{R}})\times {\mathsf{SL}}(2,{\mathbb{R}})$ acts on~${\mathbb{R}}^4$ preserving~$Q$. This gives a homomorphism \begin{gather*}{\mathsf{SL}}(2,{\mathbb{R}}) \times {\mathsf{SL}}(2,{\mathbb{R}}) \rightarrow {\mathsf{SO}}_0(2,2),\end{gather*} which induces an isomorphism \begin{gather*}{\mathsf{PSL}}(2,{\mathbb{R}}) \times {\mathsf{PSL}}(2,{\mathbb{R}}) \rightarrow {\mathsf{PO}}_0(2,2). \end{gather*} We will consider a representation $\rho\colon \pi_1(S)\rightarrow {\mathsf{PO}}_0(2,2)$ which can be lifted to a representation $\bar{\rho}\colon \pi_1(S)\rightarrow {\mathsf{SL}}(2,{\mathbb{R}}) \times {\mathsf{SL}}(2,{\mathbb{R}})$. 
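As a quick check, the symmetry and the signature of the form $Q = \omega \otimes \omega$ constructed above can be verified directly (a short computation, normalizing $\omega(e_1,e_2)=1$ for the standard basis $e_1$, $e_2$ of ${\mathbb{R}}^2$). For decomposable vectors, \begin{gather*}Q(u\otimes v, u'\otimes v') = \omega(u,u')\,\omega(v,v') = (-\omega(u',u))(-\omega(v',v)) = Q(u'\otimes v', u\otimes v),\end{gather*} so $Q$ is symmetric. In the basis $e_1\otimes e_1$, $e_1\otimes e_2$, $e_2\otimes e_1$, $e_2\otimes e_2$ of ${\mathbb{R}}^4$, the only non-zero pairings are \begin{gather*}Q(e_1\otimes e_1, e_2\otimes e_2) = \omega(e_1,e_2)^2 = 1, \qquad Q(e_1\otimes e_2, e_2\otimes e_1) = \omega(e_1,e_2)\,\omega(e_2,e_1) = -1,\end{gather*} hence $e_1\otimes e_1 + e_2\otimes e_2$ and $e_1\otimes e_2 - e_2\otimes e_1$ span a positive definite plane, while $e_1\otimes e_1 - e_2\otimes e_2$ and $e_1\otimes e_2 + e_2\otimes e_1$ span a negative definite one, giving signature $(2,2)$.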
The corresponding Higgs bundle can be written as the tensor product of two Higgs bundles for ${\mathsf{SL}}(2,{\mathbb{R}})$: given two Higgs bundles for ${\mathsf{SL}}(2,{\mathbb{R}})$ \begin{gather*}E_1 = L_1 \oplus L_1^{-1}, \qquad Q_1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \qquad \omega_1 = \frac{i}{\sqrt{2}}\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \\ \varphi_1 = \begin{pmatrix} 0 & a_1\\ b_1 & 0 \end{pmatrix}, \qquad H_1 = \begin{pmatrix} h_1^{-1} & 0\\ 0 & h_1 \end{pmatrix}, \\ E_2 = L_2 \oplus L_2^{-1}, \qquad Q_2 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \qquad \omega_2 = \frac{i}{\sqrt{2}}\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \\ \varphi_2 = \begin{pmatrix} 0 & a_2\\ b_2 & 0 \end{pmatrix}, \qquad H_2 = \begin{pmatrix} h_2^{-1} & 0\\ 0 & h_2 \end{pmatrix}, \end{gather*} we can form their tensor product \begin{gather*} E = E_1 \otimes E_2 = L_1L_2 \oplus L_1L_2^{-1} \oplus L_1^{-1}L_2 \oplus L_1^{-1}L_2^{-1}, \qquad \varphi = \begin{pmatrix} 0 & a_2 & a_1 & 0 \\ b_2 & 0 & 0 & a_1\\ b_1 & 0 & 0 & a_2\\ 0 & b_1 & b_2 & 0 \end{pmatrix}.\end{gather*} The solutions of Hitchin's equations for this Higgs bundle are given by \begin{gather*}H = \begin{pmatrix} h_1^{-1}h_2^{-1} & & & \\ & h_1^{-1}h_2 & & \\ & & h_1 h_2^{-1} & \\ & & & h_1 h_2 \end{pmatrix}. \end{gather*} We now want to construct an ${\rm AdS}^3$-structure on a $3$-manifold with this holonomy. To do so, we will first construct an ${\mathbb{RP}}^3$-structure, then we verify that the image of the developing map lies inside ${\rm AdS}^3$ by writing the bilinear form $Q=\omega\otimes \omega$ explicitly. Since the developing map goes to ${\rm AdS}^3$ and the holonomy is in ${\mathsf{PO}}(2,2)$, the structure we are constructing is actually an ${\rm AdS}^3$-structure. We consider the following subbundle: \begin{gather*}M = {\mathbb{P}}\big( \Real\big(L_1L_2 \oplus L_1^{-1}L_2^{-1}\big)\big). \end{gather*} We then start to verify the transversality condition in the usual way.
But we notice that the condition is not always verified. To state the result we need to recall that the solutions of Hitchin's equations for an ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle describe an equivariant harmonic map to the hyperbolic plane. So we have the two harmonic maps \begin{gather*}f_1,f_2\colon \ \widetilde{\Sigma}\rightarrow {\mathbb{H}}^2. \end{gather*} We will denote by $\widetilde{g_1}$, $\widetilde{g_2}$ the two pull-backs of the hyperbolic metric to $\widetilde{\Sigma}$. These tensors are $\pi_1(\Sigma)$-invariant, hence they define two tensors $g_1$, $g_2$ on~$\Sigma$. These symmetric tensors are called the \defin{pull-back metrics}, even though they are not always Riemannian metrics: they can be degenerate at some points. \begin{Definition}The ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle $(E_1,Q_1,\omega_1,\varphi_1)$ \defin{dominates} $(E_2,Q_2,\omega_2,\varphi_2)$ if \begin{gather*}g_1 - g_2 > 0, \end{gather*} i.e., if the symmetric tensor $g_1 - g_2$ is positive definite. \end{Definition} \begin{Theorem}[Alessandrini--Li \cite{AdSpaper}] The subbundle $M$ is a transverse subbundle if and only if the ${\mathsf{SL}}(2,{\mathbb{R}})$-Higgs bundle $(E_1,Q_1,\omega_1,\varphi_1)$ dominates $(E_2,Q_2,\omega_2,\varphi_2)$. \end{Theorem} In the theory of anti-de Sitter $3$-manifolds, there exists a necessary and sufficient condition for the representation $\rho$ to be the holonomy of an anti-de Sitter structure on a closed manifold. It was shown by Tholozan~\cite{TholozanDomination} that this condition is equivalent to the existence of a complex structure~$\Sigma$ on~$S$ such that $(E_1,Q_1,\omega_1,\varphi_1)$ dominates $(E_2,Q_2,\omega_2,\varphi_2)$. It follows that with Higgs bundles we can construct all closed anti-de Sitter $3$-manifolds with holonomy that lifts to ${\mathsf{SL}}(2,{\mathbb{R}}) \times {\mathsf{SL}}(2,{\mathbb{R}})$.
Our main motivation for this work was to use the special parametrization that Higgs bundles give to the manifold~$M$ to explicitly compute invariants of the anti-de Sitter structure, such as the volume. The computation of the volume of closed anti-de Sitter $3$-manifolds was an open problem that had been solved shortly before our work by Tholozan~\cite{TholozanVolume}. \begin{Theorem}[Tholozan \cite{TholozanVolume}, Alessandrini--Li \cite{AdSpaper}] The volume of the anti-de Sitter structure constructed on $M$ is \begin{gather*}\Vol(M) = \pi^2\left|\deg(L_1) + \deg(L_2)\right|. \end{gather*} \end{Theorem} \section[Projective structures with Hitchin or quasi-Hitchin holonomies]{Projective structures with Hitchin\\ or quasi-Hitchin holonomies} \label{sec:higher dimensions} In the previous section we presented examples of constructions of geometric structures on $3$-dimensional manifolds. Now we will see how to apply the method to higher-dimensional manifolds. The general strategy is similar, but the technical details are more complicated. Hitchin and quasi-Hitchin representations act on odd-dimensional real and complex projective spaces admitting co-compact domains of discontinuity (Guichard--Wienhard~\cite{GWDomainsofDiscont}). The quotient of this domain is a closed manifold with a projective structure. The holomorphic structure of the Higgs bundles and the solutions of Hitchin's equations help us to construct the same projective structures; in this way we can determine the topology of the manifold. This is joint work with Qiongling Li~\cite{ProjectiveStructuresHB}. Another construction of geometric structures on higher-dimensional manifolds using Higgs bundles was done by Collier, Tholozan and Toulisse~\cite{CollierTholozanToulisse}. They constructed photon structures whose holonomy factors through maximal representations in ${\mathsf{O}}(2,n)$. This work was not discussed during the mini-course for lack of time.
\subsection{Construction of real and complex projective structures} \label{subsec:projective structures Higgs} Consider a representation $\rho\colon \pi_1(S) \rightarrow {\mathsf{PGL}}(2n,{\mathbb{R}})$ in the Fuchsian locus of the Hitchin component $\Hit(S,2n)$. Recall from Example \ref{exa:special representations} that such a representation is the composition of a Fuchsian representation in ${\mathsf{PGL}}(2,{\mathbb{R}})$ with the irreducible representation ${\mathsf{PGL}}(2,{\mathbb{R}}) \rightarrow {\mathsf{PGL}}(2n,{\mathbb{R}})$. We now want to construct ${\mathbb{RP}}^{2n-1}$ and ${\mathbb{CP}}^{2n-1}$-structures with holonomy that factors through~$\rho$. Let's start with the corresponding uniformizing Higgs bundle for ${\mathsf{SL}}(2,{\mathbb{R}})$: \begin{gather*}E = K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}}, \qquad Q = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, \qquad \omega = \frac{i}{\sqrt{2}}\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \\ \varphi = \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}, \qquad H = \begin{pmatrix} h^{-1} & 0\\ 0 & h \end{pmatrix}. \end{gather*} The composition with the irreducible representation corresponds, in terms of Higgs bundles, to the symmetric tensor product \begin{gather*}S(E) = \Symm^{2n-1}(E). \end{gather*} To make our formulae more explicit and more readable, we will write them only for $n=3$, but similar formulae work for every $n$ \begin{gather*}S(E) = K^{\frac{5}{2}} \oplus K^{\frac{3}{2}} \oplus K^{\frac{1}{2}} \oplus K^{-\frac{1}{2}} \oplus K^{-\frac{3}{2}} \oplus K^{-\frac{5}{2}}, \qquad S(\varphi) = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}. 
\end{gather*} The solution of Hitchin's equations and the real structure are as usual given by \begin{gather*}S(H) = \begin{pmatrix} h_1 & \\ & \ddots \\ & & h_6 \end{pmatrix}, \qquad \tau\colon \ S(E) \ni \begin{pmatrix} v_1\\ \vdots \\ v_6 \end{pmatrix} \rightarrow \begin{pmatrix} h_6 \bar{v_6}\\ \vdots \\ h_1 \bar{v_1} \end{pmatrix}\in S(E). \end{gather*} We will consider the subbundles defined by the following equations: \begin{gather*}U^{\mathbb{C}} = {\mathbb{P}}\left( \left\{ \begin{pmatrix} h_1^{-\frac{1}{2}} t_1\\ \vdots \\ h_6^{-\frac{1}{2}} t_6 \end{pmatrix} \, \middle| \, t_1 \bar{t_2} + t_3 \bar{t_4} + t_5 \bar{t_6} = 0 \right\} \right), \qquad U^{\mathbb{R}} = U^{\mathbb{C}} \cap {\mathbb{P}}\left(\Real(S(E)) \right).\end{gather*} Similar formulae define these subbundles for every $n$. \begin{Theorem}[Alessandrini--Li \cite{ProjectiveStructuresHB}]The subbundle $U^{\mathbb{C}}$ is a transverse subbundle of ${\mathbb{P}}(S(E))$, hence it supports a ${\mathbb{CP}}^{2n-1}$-structure whose holonomy factors through~$\rho$. The subbundle $U^{\mathbb{R}}$ is a transverse subbundle of ${\mathbb{P}}(\Real(S(E)))$, hence it supports an ${\mathbb{RP}}^{2n-1}$-structure whose holonomy factors through~$\rho$. \end{Theorem} This method of constructing projective structures has the merit that we can see explicitly the topology of the manifold that supports the structure. Consider the spaces \begin{gather*}F^{\mathbb{R}} = T^1 {\mathbb{RP}}^{n-1}, \qquad F^{\mathbb{C}} = \big(T^1 {\mathbb{S}}^{2n-1}\big)/{\mathsf{U}}(1), \end{gather*} where ${\mathsf{U}}(1)$ acts on the unit sphere ${\mathbb{S}}^{2n-1} \subset {\mathbb{C}}^n$ of a complex vector space by scalar multiplication by a unit complex number; this action is then lifted to the unit tangent bundle using the differential. Both spaces carry an action of ${\mathsf{SO}}(2)$: on $T^1 {\mathbb{RP}}^{n-1}$, the action of ${\mathsf{SO}}(2)$ is given by the geodesic flow, which is periodic.
Similarly, ${\mathsf{SO}}(2)$ acts via the geodesic flow on $T^1 {\mathbb{S}}^{2n-1}$, and this action commutes with the action of ${\mathsf{U}}(1)$, hence it descends to an action on the quotient. \begin{Theorem}[Alessandrini--Li \cite{ProjectiveStructuresHB}] \label{thm:topology of bundle} Let $P$ be a principal ${\mathsf{SO}}(2)$-bundle with Euler class $2g-2$. Then \begin{enumerate}\itemsep=0pt \item[$1)$] $U^{\mathbb{R}} \simeq P\big(F^{\mathbb{R}}\big)$, for $n \geq 3$, \item[$2)$] $U^{\mathbb{C}} \simeq P\big(F^{\mathbb{C}}\big)$, for $n \geq 2$. \end{enumerate} \end{Theorem} Since the Euler number completely determines a principal ${\mathsf{SO}}(2)$-bundle, this result completely determines the topology of the manifolds $U^{\mathbb{R}}$ and $U^{\mathbb{C}}$. \begin{Remark}We can apply the same technique to other representations, namely the ones which are composition of a~Fuchsian representation in ${\mathsf{SL}}(2,{\mathbb{R}})$ with the diagonal representation ${\mathsf{SL}}(2,{\mathbb{R}}) \rightarrow {\mathsf{SL}}(2n,{\mathbb{R}})$. Again we can find transverse subbundles and construct ${\mathbb{RP}}^{2n-1}$ and ${\mathbb{CP}}^{2n-1}$-structures on these manifolds~\cite{ProjectiveStructuresHB}. This part is less interesting though, since the same construction can be done in a completely geometric way, without using Higgs bundles at all, thanks to the special geometry of the diagonal representation. The interesting thing about the result for the irreducible representation of ${\mathsf{SL}}(2,{\mathbb{R}})$ is that it is very hard to see these transverse subbundles using only geometry. \end{Remark} \subsection{Domains of discontinuity} \label{subsec:dod} Let ${\mathbb{K}}={\mathbb{R}}$ or ${\mathbb{C}}$, $\Gamma$ be a Gromov-hyperbolic group and $\rho\colon \Gamma \rightarrow {\mathsf{PGL}}(2n,{\mathbb{K}})$ be a representation which is ${\mathsf{P}}$-Anosov, where ${\mathsf{P}}$ is the stabilizer of an $(n-1)$-dimensional projective subspace. 
The space ${\mathsf{G}}/{\mathsf{P}}$ can be identified with the Grassmannian $\Gr\big(n,{\mathbb{K}}^{2n}\big)$, which parametrizes the $n$-dimensional linear subspaces of ${\mathbb{K}}^{2n}$. The Anosov property gives us the $\rho$-equivariant map \begin{gather*}\xi\colon \ \partial_\infty \Gamma \rightarrow \Gr\big(n,{\mathbb{K}}^{2n}\big).\end{gather*} Guichard--Wienhard \cite{GWDomainsofDiscont} used this map to construct a co-compact domain of discontinuity for the action of $\rho$ in ${\mathbb{KP}}^{2n-1}$. We first define the $\rho$-invariant compact subset \begin{gather*}K^{\mathbb{K}}_\rho = \bigcup_{t\in \partial_\infty \Gamma} [\xi(t)] \subset {\mathbb{KP}}^{2n-1},\end{gather*} whose complement is the $\rho$-invariant open subset \begin{gather*}\Omega^{\mathbb{K}}_\rho = {\mathbb{KP}}^{2n-1} {\setminus} K^{\mathbb{K}}_\rho.\end{gather*} \begin{Theorem}[Guichard--Wienhard \cite{GWDomainsofDiscont}]If $\rho$ is a ${\mathsf{P}}$-Anosov representation in ${\mathsf{PGL}}(2n,{\mathbb{K}})$, then $\rho(\Gamma)$ acts on $\Omega^{\mathbb{K}}_\rho$ properly discontinuously and co-compactly. \end{Theorem} If $\Gamma$ is torsion-free, we can construct the quotient manifold $M^{\mathbb{K}} = \Omega^{\mathbb{K}}_\rho/\rho(\Gamma)$, a closed manifold carrying a ${\mathbb{KP}}^{2n-1}$-structure. The topology of $M^{\mathbb{K}}$ is constant when $\rho$ varies in a connected component ${\mathcal U}$ of ${\mathsf{P}}$-$\Anosov(\Gamma,{\mathsf{PGL}}(2n,{\mathbb{K}}))$. In this way, we get a map \begin{gather*} T\colon \ {\mathcal U} \rightarrow {\mathcal D}^*_{{\mathbb{KP}}^{2n-1}}\big(M^{\mathbb{K}}\big),\end{gather*} which gives an example of the construction described in Section~\ref{subsec:geometric interpretation of characters}.
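For example, in the classical special case $n=1$, ${\mathbb{K}}={\mathbb{C}}$, a ${\mathsf{P}}$-Anosov representation $\rho\colon \pi_1(S) \rightarrow {\mathsf{PGL}}(2,{\mathbb{C}})$ is a quasi-Fuchsian representation, and the map $\xi$ parametrizes the limit set $\Lambda_\rho$, a Jordan curve in ${\mathbb{CP}}^1$. In this case \begin{gather*}K^{\mathbb{C}}_\rho = \Lambda_\rho \subset {\mathbb{CP}}^1, \qquad \Omega^{\mathbb{C}}_\rho = {\mathbb{CP}}^1 \setminus \Lambda_\rho, \end{gather*} and $M^{\mathbb{C}} = \Omega^{\mathbb{C}}_\rho/\rho(\pi_1(S))$ is the disjoint union of two copies of $S$, each carrying a ${\mathbb{CP}}^1$-structure, recovering the quasi-Fuchsian picture recalled in Section~\ref{subsec:geometric interpretation of characters}.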
Now let's assume that $\Gamma = \pi_1(S)$ is a surface group, and that the connected component ${\mathcal U}$ we have chosen is the Hitchin component $\Hit(S,2n)$ when ${\mathbb{K}}={\mathbb{R}}$, and the space of quasi-Hitchin representations when ${\mathbb{K}}={\mathbb{C}}$. In these cases we can understand the topology of $M^{\mathbb{K}}$, using our construction with Higgs bundles from the previous subsection. \begin{Theorem}[Alessandrini--Li \cite{ProjectiveStructuresHB}] \label{thm:dod} For $n \leq 63$, $M^{\mathbb{K}}$ is diffeomorphic to $U^{\mathbb{K}}$. \end{Theorem} \begin{proof} For a representation $\rho$ in the Fuchsian locus, we constructed a ${\mathbb{KP}}^{2n-1}$-structure on the manifold $U^{\mathbb{K}}$. We can explicitly compute the developing map of this structure, and we can prove that this structure is isomorphic to the structure $\Omega^{\mathbb{K}}_\rho/\rho(\pi_1(S))$ if and only if a certain explicit $n \times n$ matrix is positive definite. We then used the computer to check whether the matrix is actually positive definite. Unfortunately, with the computer we could only check finitely many values of $n$, and we stopped after $n=63$. \end{proof} We believe the result to be true for every value of $n$, but we don't have a general proof yet. Together with Theorem \ref{thm:topology of bundle}, this result completely describes the topology of $M^{\mathbb{K}}$. Moreover, this result describes some interesting properties of the geometry of the projective structure on $\Omega^{\mathbb{K}}_\rho/\rho(\pi_1(S))$ when $\rho$ is close enough to the Fuchsian locus. In particular, Theorem \ref{thm:dod} proves that, for $n \leq 63$, the manifold $M^{\mathbb{C}}$ is diffeomorphic to a fiber bundle over the surface: this result proves Conjecture \ref{conj:dumas sanders} by Dumas and Sanders in this special case. The topology of the manifold $M^{\mathbb{R}}$ was also studied by Guichard and Wienhard (announced in \cite[Remark 11.4(ii)]{GWDomainsofDiscont}).
They also saw that it is diffeomorphic to a fiber bundle over the surface with fiber $F^{\mathbb{R}}$. More recent work about Conjecture \ref{conj:dumas sanders} uses different methods, which don't involve Higgs bundles. In a joint work with Qiongling Li \cite{ProjectionsHyperbolicPlane}, we proved the following theorem. \begin{Theorem}[Alessandrini--Li \cite{ProjectionsHyperbolicPlane}]Let $\Omega$ be the domain of discontinuity described in~{\rm \cite{GWDomainsofDiscont}} of a quasi-Hitchin representation $\rho$, where: \begin{enumerate}\itemsep=0pt \item[$1)$] $\rho\colon \pi_1(S) \rightarrow {\mathsf{PGL}}(2n,{\mathbb{C}})$, and $\Omega \subset {\mathbb{CP}}^{2n-1}$, or \item[$2)$] $\rho\colon \pi_1(S) \rightarrow {\mathsf{PGL}}(n,{\mathbb{C}})$, and $\Omega \subset \mathcal{F}_{1,n-1}$, the partial flag manifold parametrizing flags made of lines and hyperplanes. \end{enumerate} Then, for every $n$, the quotient $M = \Omega/\rho(\pi_1(S))$ is homeomorphic to the total space of a continuous fiber bundle over $S$. \end{Theorem} This theorem proves Conjecture~\ref{conj:dumas sanders} in infinitely many cases. The method we use for the proof has broader scope than the method we used to prove Theorem~\ref{thm:dod}, but it does not allow us to completely understand the topology of the fiber, nor does it give information about the geometric structures. In a joint work with Maloni and Wienhard \cite{LagrangianGrassmannian}, we proved Conjecture \ref{conj:dumas sanders} in another case: \begin{Theorem}[Alessandrini--Maloni--Wienhard]Let $\rho\colon \pi_1(S) \rightarrow {\mathsf{PSp}}(4,{\mathbb{C}})$ be a quasi-Hitchin representation, and let $\Omega$ be the domain of discontinuity described in~{\rm \cite{GWDomainsofDiscont}} of $\rho$ in $\mathrm{Lag}\big({\mathbb{C}}^4\big)$, the Lagrangian Grassmannian of~${\mathbb{C}}^4$. Then the quotient $M = \Omega/\rho(\pi_1(S))$ is homeomorphic to the total space of a continuous fiber bundle over~$S$.
\end{Theorem} The work goes on to give an explicit description of the fiber. Conjecture~\ref{conj:dumas sanders} is still open in general, and it is the subject of active research. \subsection*{Acknowledgements} I am grateful to Qiongling Li for the collaboration that brought about many of the results surveyed here, to Steve Bradlow, Brian Collier, John Loftin and Anna Wienhard for interesting discussions about this topic, and to the anonymous referees for their useful comments on the first draft of the paper. The mini-course was funded by the UIC NSF RTG grant DMS-1246844, L.P.~Schaposnik's UIC Start up fund, and NSF DMS 1107452, 1107263, 1107367 ``RNMS: GEometric structures And Representation varieties'' (the GEAR Network). \pdfbookmark[1]{References}{ref}
How to Stay and Survive the Coronavirus Plague Briefing Podcast #82 with Daniel Whyte III

THERE IS MORE SAID IN THE PODCAST AUDIO THAN IN THE NOTES BELOW

Welcome to the How to Stay and Survive the Coronavirus Plague Briefing Podcast #82. My name is Daniel Whyte III, president of Gospel Light Society International.

Numbers 11:33 says, "And while the flesh was yet between their teeth, ere it was chewed, the wrath of the LORD was kindled against the people, and the LORD smote the people with a very great plague."

Thomas Constable said, "The wind blew from the southeast and apparently brought quails from the Gulf of Aqabah. Normally quails migrated to the northeast, from central Africa, so the direction from which these quails came was an abnormal provision of the Lord. The sickness of the people was a judgment for their greed. They wanted something for themselves that God had not chosen for them."

REPENT! 2 Chronicles 7:14 says, "If my people, which are called by my name, shall humble themselves, and pray, and seek my face, and turn from their wicked ways; then will I hear from heaven, and will forgive their sin, and will heal their land."

Revelation 2:4-5 says, "Nevertheless I have somewhat against thee, because thou hast left thy first love. Remember therefore from whence thou art fallen, and repent, and do the first works; or else I will come unto thee quickly, and will remove thy candlestick out of his place, except thou repent."

Leonard Ravenhill said, "There are only three classes of people in the world today: those who are afraid, those who do not know enough to be afraid, and those who know their Bibles. Sodom, which had no Bible, no preachers, no tracts, no prayer meetings, no churches, perished.
How then will America and England be spared from the wrath of the Almighty, think you? We have millions of Bibles, scores of thousands of churches, endless preachers—and yet what sin!"

According to France24, the coronavirus pandemic in the US claimed at least 122,000 more lives than would be expected in a normal year, for a rise of 18 percent, says a study released Wednesday.

According to CNBC, the number of confirmed U.S. deaths due to the coronavirus is substantially lower than the true tally, according to a study published Wednesday in JAMA Internal Medicine.

According to ProPublica, internal messages highlight the growing strain that the coronavirus crisis is putting on hospital systems in the Houston region, where the number of patients hospitalized with COVID-19 has nearly quadrupled since Memorial Day. As of Tuesday, more than 3,000 people were hospitalized for the coronavirus in the region, including nearly 800 in intensive care.

According to the Daily Mail, New York, New Jersey, and Connecticut extended their 14-day coronavirus quarantine to travelers from 16 states.

According to Pew Research, the mood of the people is growing grim as just 12 percent of Americans are proud of their country and 87 percent are dissatisfied, with a majority calling government leadership 'poor' or 'terrible.'

You can find alternative housing at Motor Home Specialist, "where the world shops", MHSRV.com. Emerging into the motor home industry in 1964, National RV has been responsible for providing some of the most popular, recognized and innovative motor homes for over 40 years. Starting out as a two-man operation, National RV has transformed into one of the most experienced and largest recreational vehicle manufacturers in the United States. FORTUNE Magazine even featured National RV in one of their "100 Fastest-Growing Companies" issues. If you're searching for the perfect National RV motor home, Motor Home Specialist is the ideal place to find one. Some RV types are: 1.
Diesel Pusher 2. Class A 3. Class C & B+ 4. Super C 5. Sprinter Chassis 6. Class B 7. 5th Wheels 8. Travel Trailers 9. Bus Conversion

11. Indeed 12. Idealist 14. Jobspresso 15. Lets work remotely

Best online colleges from BestColleges.com: LeTourneau University, Embry-Riddle Aeronautical University-Worldwide

HOME FAMILY

Proverbs 14:23 says, "In all labour there is profit: but the talk of the lips tendeth only to penury."

QUESTION: Rhett in Houston and his brother are involved in a family-owned business their father started. How do they keep the brilliance of the founder alive while allowing the second generation to make their own mark?

ANSWER: The second thing is 1) we admit that it's hard emotionally, 2) we say that the thing's going to die if he doesn't do it. That's true of all of our businesses. Then 3) he's got to lay out some milestones for you guys—that says he begins to trust your competence and your integrity, which as we found in the delegation lesson is the secret to delegating.

At these performance milestones—not age, but performance that indicates your maturity in business regardless of your age—he's going to begin to turn loose this amount. He's going to turn loose this amount. And the final release is he slips to the side and takes a position of honor. It doesn't mean he can never come to the office, but he's no longer in charge. And he can't come in and spread hate and dissension anymore, which is what we founders do. We tend to walk through the place and just blow things up. He can't do that anymore. He's going to destroy the thing he loves if he does that. I'm not saying he does that. I'm saying all of us who are founders do that. We've got to systematically hand over.

One of the guys I was interviewing in family business had a great word picture for me.
He said, "If you visualize an Olympic-level relay race, if you've ever seen them hand the baton off, they run so close and so in sync that when the baton leaves one hand and goes into the other, it's so in sync, no one realized it happened." It's so smooth and so gradual and so predictable and so practiced and so communicated. And so should be a transition. You guys are in your 30s, I assume. By the time you're 40, this thing needs to be done because you can't ask two sharp guys in their 30s to still be subordinates when they're 60. That's kind of stupid. It doesn't work. And you're going to want to go do something else if he won't do it. He needs to develop this gradual process to where the team and the customer base—when the transition occurs and one of you becomes the CEO—they all say, "Oh, he's a CEO? Gosh, I thought he already was," because everybody's been leaning more and more and more on you and less and less and less on him to where a gradual, gentle transition happened. If he doesn't do that, he'll be a typical founder and he'll kill his own business. First, please understand that you are a sinner and that you have broken God's laws. The Bible says in Romans 3:23: "For all have sinned and come short of the glory of God." Please understand that because of your sins, you deserve punishment in hell. Romans 6:23 says "the wages of sin is death…" This is both physical death and spiritual death in hell. But here is the good news. John 3:16 reads, "For God so loved the world, that He gave His only begotten Son, that whosoever believeth in Him should not perish, but have everlasting life." The phrase "For God so loved the world" means that if you are in this world, God loves you no matter what you have done. The next phrase, "that He gave His only begotten Son" refers to Jesus Christ. He is God's son who suffered, bled, and died on the cross for your sins and for mine, and He was buried and rose again. Our next phrase is "that whosoever believeth in Him". 
The word "whosoever" means anybody at anytime. The phrase "believeth in Him" means to trust in Him, to depend upon Him, to rely on Him, or to have faith in Him for your salvation. Our next phrase, "should not perish", refers to eternal punishment in a place called hell. And, lastly, the phrase "but have everlasting life" means to live eternally in Heaven with God. The Bible also says in Romans 10:9 and 13: "That if thou shalt confess with thy mouth the Lord Jesus, and shalt believe in thine heart that God hath raised him from the dead, thou shalt be saved…. For whosoever shall call upon the name of the Lord shall be saved." Dear friend, if you are willing to believe on the Lord Jesus Christ for salvation, please pray with me this simple prayer: Heavenly Father, I realize that I am a sinner and that I have done some bad things in my life. For Jesus Christ sake, please forgive me of my sins. I now believe with all of my heart that Jesus Christ died for me, was buried, and rose again. Lord Jesus, please come into my heart and save my soul and change my life today. Amen. If you believed in your heart that Jesus Christ died on the Cross, was buried, and rose again, allow me to say, congratulations on doing the most important thing in life and that is accepting Jesus Christ as your Lord and Saviour! For more information to help you grow in your newfound faith in Christ, go to Gospel Light Society.com and read "What To Do After You Enter Through the Door". Jesus Christ said in John 10:9, "I am the door: by me if any man enter in, he shall be saved, and shall go in and out, and find pasture." If you accepted Jesus Christ as your Savior today, please email me at [email protected] and let us know. There is some free material that we want to send you. If you have a prayer request, please e-mail that to us as well, and we will pray for you until you tell us to stop. God loves you. We love you. And may God bless you. 
Q: rails routes error in the Official Guide

I am a newbie to RoR and I am going through the Ruby on Rails official guide (4.2.6), but I got one problem when I want to add the Article model. When I am trying to save the article I got the error:

undefined method `article_url' for #
Did you mean? articles_url

I found that the routes don't have the "article" prefix in my routes output:

majiandeMacBook-Pro:blog majian$ bin/rake routes
Running via Spring preloader in process 26766
       Prefix Verb   URI Pattern               Controller#Action
welcome_index GET    /welcome/index(.:format)  welcome#index
         root GET    /                         welcome#index
     articles POST   /articles(.:format)       articles#create
 new_articles GET    /articles/new(.:format)   articles#new
edit_articles GET    /articles/edit(.:format)  articles#edit
              GET    /articles(.:format)       articles#show
              PATCH  /articles(.:format)       articles#update
              PUT    /articles(.:format)       articles#update
              DELETE /articles(.:format)       articles#destroy

But in the document, I found that it should be like this:

article GET /articles/:id(.:format) articles#show

Does anybody know why the routes are different? Any help will be appreciated.

A: Check your routes.rb file, it should look like this (the file is in config/routes.rb):

Rails.application.routes.draw do
  get 'welcome/index'

  resources :articles

  root 'welcome#index'

  # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html
end

The error can be caused by an error in the line "resources :articles".

A: I ran into the same issue, and it was because I had misspelled `resources :articles` in routes.rb: I had `resource :article` instead, and that doesn't work.

A: article_url requires the id of the article as an argument; in your show action you should have something like this: article_url(@article)
package bank.gui;

import bank.LoanTellerRole;

import java.awt.Color;
import java.awt.Graphics2D;

/**
 * @author Byron Choy
 */
public class LoanGui implements Gui {

    private LoanTellerRole role = null;

    private int xPos;
    private int yPos;
    private int xDestination;
    private int yDestination;

    static final int hostWidth = 20, hostHeight = 20;

    private int xBankEntrance = 750;
    private final static int yBankEntrance = 0;
    private final static int xIntermediateEntrance = 680;
    private final static int yIntermediateEntrance = 180;
    private final int xTellerDesk = 595;
    private final int yTellerDesk = 180;
    private final static int xBreakRoom = 450;
    private final static int yBreakRoom = 10;

    private int xcounter = 0;
    private int ycounter = 0;

    BankAnimationPanel gui;

    public LoanGui(LoanTellerRole tellerRole, BankAnimationPanel bankAnimationPanel) {
        this.role = tellerRole;
        // Start just above the bank entrance, heading for the intermediate point.
        xPos = xBankEntrance;
        yPos = yBankEntrance - 50;
        xDestination = xIntermediateEntrance;
        yDestination = yIntermediateEntrance;
        this.gui = bankAnimationPanel;
    }

    public void updatePosition() {
        // Move one pixel per tick toward the destination.
        if (xPos < xDestination) {
            xPos++;
        } else if (xPos > xDestination) {
            xPos--;
        }

        if (yPos < yDestination) {
            yPos++;
        } else if (yPos > yDestination) {
            yPos--;
        }

        // Notify the role when a destination is reached.
        if (xPos == xDestination && yPos == yDestination
                && xDestination == xIntermediateEntrance && yDestination == yIntermediateEntrance) {
            role.msgAtIntermediate();
        }

        if (xPos == xDestination && yPos == yDestination
                && xDestination == xTellerDesk && yDestination == yTellerDesk) {
            role.msgAtStation();
        }
    }

    public void draw(Graphics2D g) {
        g.setColor(Color.YELLOW);
        g.fillRect(xPos, yPos, hostWidth, hostHeight);
    }

    public boolean isPresent() {
        return true;
    }

    public void DoGoToStation() {
        xDestination = xTellerDesk;
        yDestination = yTellerDesk;
    }

    public void DoLeave() {
        xDestination = xBankEntrance;
        yDestination = yBankEntrance;
    }

    public int getXPos() {
        return xPos;
    }

    public int getYPos() {
        return yPos;
    }
}
SHU Student Outcomes
Fairfield, Connecticut

Graduation Rates for Sacred Heart University

Percent of students graduating within 150% of normal completion time: 65% vs. 41.9% national median
Percent earning a bachelor's degree within 4 years: 61%

A college's graduation rate is a strong indication of its effectiveness and your potential to find success at a school. These statistics measure the percentage of first-time, full-time students who earned a bachelor's degree within four, five or six years from Sacred Heart University. For comparison, approximately 58% of starting students nationally earn a bachelor's degree within six years. Sacred Heart University is more effective than average at successfully graduating students.

Post Graduation Earnings

Average salary after 10 years: $59,700 vs. $34,300 national median
Average earnings after 6 years: $46,100 per year
Average earnings after 10 years: $59,700 per year

10 years after enrolling, the average income of former Sacred Heart University students who are working and no longer in school is $59,700, which is 74% higher than the national median. Sources: U.S. Department of Education College Scorecard / Department of Treasury.

Student Loan Debt Upon Graduation

Percent of students actively repaying their loans: 76% vs. 47% national average

If you are having trouble affording your SHU student loan debt, explore your options.

Primary data source: U.S. Department of Education, https://nces.ed.gov/collegenavigator/?id=130253 (IPEDS survey data for Sacred Heart University).
---
layout: post
title: JavaScript Compilers
excerpt: ""
modified: 2017-06-14T17:00:00-00:00
categories: articles
tags: [ES6, JavaScript]
image:
  vendor: twitter
  feature: /media/DNt2jzxX4AAtXkd.jpg:large
  credit: Nat Geo Photography
  creditlink: https://twitter.com/NatGeoPhotos
comments: true
share: true
references:
  - title: "Using Traceur with Node.js"
    url: "https://github.com/google/traceur-compiler/wiki/Using-Traceur-with-Node.js"
  - title: "Exploring ES2016 and ES2017"
    url: "http://exploringjs.com/es2016-es2017/"
  - title: "ECMAScript compatibility table"
    url: "http://kangax.github.io/compat-table/es5/"
  - title: "Medium - ES6, ES2016, ES2017 A Whole New JavaScript"
    url: "https://medium.com/@_alexray/es2015-es6-es7-a-whole-new-javascript-23008fd28108"
---

* TOC
{:toc}

## traceur

[traceur](https://github.com/google/traceur-compiler)

`npm install --save-dev traceur`

### Option 1:

`./node_modules/.bin/traceur --out public/index.js index.js`

`node ./public/index.js`

### Option 2:

bootstrap.js:

```javascript
// bootstrap.js
var traceur = require('traceur');
traceur.require.makeDefault(function(filename) {
  // don't transpile our dependencies, just our app
  return filename.indexOf('node_modules') === -1;
});
require('./index');
```

index.js:

```javascript
class Polygon {
  constructor(height, width) {
    this.height = height;
    this.width = width;
  }
}

class Square extends Polygon {
  constructor(length) {
    // Here, it calls the parent class' constructor with lengths
    // provided for the Polygon's width and height
    super(length, length);
    // Note: In derived classes, super() must be called before you
    // can use 'this'. Leaving this out will cause a reference error.
    this.name = 'Square';
  }

  get area() {
    return this.height * this.width;
  }

  set area(value) {
    this.area = value;
  }

  draw() {
    console.log(this.area);
  }
}

new Square(10).draw();
```

```
$ node bootstrap.js
100
```

[Using Traceur with Node.js](https://github.com/google/traceur-compiler/wiki/Using-Traceur-with-Node.js)

## Babel

[babel](http://babeljs.io)

Install the Babel CLI and the `env` preset:

`npm install --save-dev babel-cli`

`npm install --save-dev babel-preset-env`

Create Babel's configuration file *.babelrc*:

```json
{
  "presets": ["env"]
}
```

Compile the JavaScript files in folder *src* into folder *lib*:

`./node_modules/.bin/babel src -d lib`

## Typescript

`npm install -g typescript`

`tsc helloworld.ts`

`node helloworld.js`

If you want to add more options for the TypeScript compiler, add a file named *tsconfig.json* in the root of your project. If you want to read more about TypeScript and Node, refer to [TypeScript Node Starter](https://github.com/Microsoft/TypeScript-Node-Starter#typescript-node-starter)
Colpo di fionda (Kådisbellan) is a 1993 film directed by Åke Sandgren.

Awards

Guldbagge Awards 1993: Best Film; nomination for Best Screenplay, Åke Sandgren
trimming fastq files with Trimmomatic
(Biostars thread, https://www.biostars.org/p/144880/)

Question (rodd):

Dear all,

"TO TRIM OR NOT TO TRIM?"

My PE RNAseq library prep of human brain tissue was made with TruSeq Illumina kit A using index 5, and I've got a few yellow warnings that I'd like to know what you'd do about.

I found a yellow warning for overrepresented sequences - none are Illumina adapters/index. When I align the multiple overrepresented sequences, they mostly overlap, and when I blast that sequence this is the result:

Homo sapiens uncharacterized LOC105378179 (LOC105378179), transcript variant X2, ncRNA

Should I REMOVE this sequence using Trimmomatic, since it's overrepresented?

Lastly, I have a warning on per-sequence GC content.

I thought of doing a "mild" trimming of the reads using Trimmomatic (LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36) to remove low-quality bases, and that's all. What would you recommend?

Comment:

Hmm, I would trim your question a bit, it is too long... Looking at GC content will not help decide for or against quality trimming. I would trim adapters even though FastQC did not complain. And RNAseq in general raises FastQC's "Sequence duplication" flag, but to be certain you have to look at read mapping - highly expressed transcripts will falsely raise the duplication levels.

Answer (Brice Sarver):

For what it's worth, I always clean my raw data, at the very least for poor-quality base calls or poor-quality reads in general. So do my colleagues. The real (and somewhat longer) answer to your question, however, is rooted in what type of data you're generating and what you want to ultimately do with it.

If you have shotgun libraries and are looking to assemble a whole genome, keeping in low-quality reads and bases increases complexity and can dramatically increase run time. It's pretty striking; I've seen assemblers get 'hung up' trying to sort out the k-mer graph, and the problem can disappear once poor reads are removed. This also applies to duplicates.

If you're just mapping to a reference and calling variants, it's less of a deal nowadays than it was a few years ago. BWA's MEM algorithm can soft-clip reads to improve mapping quality, and this is useful if you have residual adapters or low-quality spans at the beginning. I see this as a secondary bonus of sorts, but I would still trim my reads.

Also, you might have a non-random distribution of k-mers or subsequences represented just based on your library prep. Imagine that you PCR a single locus and then make a library out of it. You will definitely have an overrepresentation. Scaling up, say you targeted and sequenced the exome. Again, your distribution of k-mers might be non-random because you might expect to see certain motifs overrepresented (start/stop codons, for example).

So, I would clean my raw data using a series of best practices (remove low-quality bases/reads/adapters, identify overlaps, dedup - but the dedup doesn't apply to your expression data). I would also be a bit leery to just chop bases off for no reason other than a summary report suggests overrepresentation. The question to ask is, "Will this affect the biological interpretation of my data systematically?"

I would love to hear others' thoughts.

Comment:

Thanks for the comprehensive explanation, Brice. Especially because I am new, it honestly helped me better visualise the issues of RNAseq QC and its importance to finally translate to biological understanding. So, what parameters would you recommend for standard trimming? Should I just use what's in the Trimmomatic manual?

java -jar trimmomatic-0.30.jar PE --phred33 input_forward.fq.gz input_reverse.fq.gz \
output_forward_paired.fq.gz output_forward_unpaired.fq.gz \
output_reverse_paired.fq.gz output_reverse_unpaired.fq.gz \
SLIDINGWINDOW:4:15 MINLEN:36

This will perform the following:

>PrefixPE/1
TACACTCTTTCCCTACACGACGCTCTTCCGATCT
>PrefixPE/2
GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT
>PE1
TACACTCTTTCCCTACACGACGCTCTTCCGATCT
>PE1_rc
AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA
>PE2
GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT
>PE2_rc
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC

• Remove leading low quality or N bases (below quality 3)
• Remove trailing low quality or N bases (below quality 3)
• Scan the read with a 4-base wide sliding window, cutting when the average quality per base drops below 15
• Drop reads below 36 bases long

I have interest in gene expression regarding a specific chromosome locus. So I have RNA sequenced only from one brain sample in order to identify possibly novel transcripts of those specific genes, expressed in a particular brain region. Then I plan to use these results to guide the design of some RT-PCRs in several samples to prove my findings. (Without actually trimming my input data, I simply aligned it to the reference human genome and did not find expression of a RefSeq gene that consists of 2 merged genes in my sample - I saw in the junctions.bed file that there are transcripts from gene 1 and from gene 2, but not gene1+2, and this is very relevant for my PhD studies). Anyway, this is still ongoing work. Sorry for the once again long text and thanks in advance!

Comment:

Do you have any reason to believe that you have residual adapter, or are you asking whether to trim in general when you see overrepresentation? If I'm understanding correctly, you are suggesting trimming what FastQC indicates is overrepresented.

Comment:

I have no reason to believe my data is contaminated with residual adapter. I am just looking for the best practices in data QC.

Comment:

If the overrepresented sequences are not adapters, it might be the case that your data just has some overrepresented sequences or k-mer enrichment - it is RNA-seq, after all, and thus a non-random sampling of the genome. Tools like FastQC are great for exploring your data, but people get too hung up on making sure their data passes everything. If something looks really wrong, it will give you an opportunity to investigate that.

To get a sense of what other labs do, I would pick a couple of (good) RNA-seq papers and see what they do, then emulate that. If you want to do something differently, make sure you can justify it to yourself and others. There's really no "this is the 100% right way to clean your data" approach.

Comment:

Thank you so much! I will research more on this! But you definitely helped a lot already!
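The SLIDINGWINDOW:4:15 and MINLEN:36 settings discussed in this thread can be illustrated with a short sketch. This is not Trimmomatic's actual implementation, just the sliding-window idea followed by the length filter; the function names and the toy read are invented for the example.

```python
def sliding_window_trim(quals, window=4, min_avg=15):
    """Return how many leading bases to keep: scan fixed-size windows left to
    right and cut the read at the start of the first window whose mean
    quality drops below min_avg (a simplified SLIDINGWINDOW:4:15)."""
    for start in range(0, len(quals) - window + 1):
        win = quals[start:start + window]
        if sum(win) / window < min_avg:
            return start  # cut here: keep bases [0, start)
    return len(quals)     # no bad window: keep the whole read

def trim_read(seq, quals, window=4, min_avg=15, min_len=36):
    """Apply the sliding-window cut, then enforce MINLEN (drop short reads)."""
    keep = sliding_window_trim(quals, window, min_avg)
    if keep < min_len:
        return None  # read dropped entirely
    return seq[:keep], quals[:keep]

# A toy 50-base read whose quality collapses after base 40:
# the first window averaging below 15 starts at position 39,
# so 39 bases survive, which passes the 36-base length filter.
seq = "A" * 50
quals = [38] * 40 + [2] * 10
result = trim_read(seq, quals)
```

The real tool operates on FASTQ records and applies LEADING, TRAILING, and adapter clipping (ILLUMINACLIP) as separate steps, so treat this only as a mental model of the two parameters in the command above.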
Relative increase in entropy is always less than relative increase in maximum entropy
(Math Help Forum thread, http://mathhelpforum.com/advanced-statistics/152736-relative-increase-entropy-always-less-than-relative-increase-maximum-entropy.html)

Hello,

Suppose we have a set of $N$ states out of which probabilities are calculated based on a frequency approach, where $N$ is the grand total, and the following entropy function is given:

$H_1 = -\sum_{i=1}^{N} p_i \log_2 p_i.$

In this case, the maximal entropy is $H_{{max}_{1}} = \log_2 N$.

Now, suppose we increase the number of states from $N$ to $M$, $M > N$, and we re-evaluate the probabilities. Now, we get the following entropy function:

$H_2 = -\sum_{i=1}^{M} q_i \log_2 q_i.$

Now, the maximal entropy is $H_{{max}_{2}} = \log_2 M$.

Based on information theory, increasing the number of states (from $N$ to $M$) will increase the entropy. My question is related to the conclusion given as the title of this thread: does the following inequality hold:

$\frac{H_2 - H_1}{H_1} < \frac{H_{{max}_{2}} - H_{{max}_{1}}}{H_{{max}_{1}}}$

That is, is the relative increase in entropy less than the relative increase in maximum entropy when increasing the number of states?

Thanks.
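As a numerical sanity check of the quantities in the question (not a proof), one can compute the Shannon entropy of a frequency table directly and compare the two relative increases. The frequency tables below are invented for illustration. The check shows that the strict inequality cannot hold in full generality: for uniform distributions both sides coincide, and a sharply peaked initial distribution makes the left-hand side larger.

```python
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency table: H = -sum p_i log2 p_i."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# Uniform distributions attain the maximum: H = log2(number of states).
N, M = 4, 8
h1, h2 = entropy([10] * N), entropy([10] * M)    # 2 bits and 3 bits
rel_h = (h2 - h1) / h1                           # 0.5
rel_hmax = (log2(M) - log2(N)) / log2(N)         # 0.5
# Here the two relative increases coincide, so the strict "<" already fails.

# A peaked distribution on N = 2 states versus the uniform distribution on
# M = 4 states makes the left-hand side far larger than the right-hand side.
h1p, h2p = entropy([99, 1]), entropy([25] * 4)   # ~0.081 bits and 2 bits
assert (h2p - h1p) / h1p > (log2(4) - log2(2)) / log2(2)
```

So the inequality as stated does not hold without extra assumptions on how the probabilities are redistributed; whether some weaker version holds for the poster's frequency-based setting is a separate question.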
{"url":"https:\/\/www.transtutors.com\/questions\/south-carolina-corporation-has-one-temporary-difference-at-the-end-of-2014-that-will-2571985.htm","text":"# South Carolina Corporation has one temporary difference at the end of 2014 that will reverse and ...\n\nSouth Carolina Corporation has one temporary difference at the end of 2014 that will reverse and cause taxable amounts of $60, 050 in 2015,$65, 870 in 2016, and $75, 125 in 2017. South Carolina's pretax financial income for 2014 is$344, 610, and the tax rate is 30% for all years. There are no deferred taxes at the beginning of 2014. (a) Compute taxable income and income taxes payable for 2014. (Round answers to 0 decimal places, e.g. 1250.) Taxable income $Income taxes payable$ (b) Prepare the journal entry to record income tax expense, deferred income taxes, and income taxes payable for 2014. (Round answers to 0 decimal places, e.g. 1250. Credit account titles are automatically indented when amount is entered. Do not indent manually.) Prepare the income tax expense section of the income statement for 2014, beginning with the line \"Income before income taxes.\". (Round answers to 0 decimal places, e.g. 1250. Enter negative amounts using either a negative sign preceding the number e.g. -45 or parentheses e.g. 
(45).)","date":"2018-08-17 17:43:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24654823541641235, \"perplexity\": 4820.118527208787}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-34\/segments\/1534221212639.36\/warc\/CC-MAIN-20180817163057-20180817183057-00396.warc.gz\"}"}
using System;
using System.Collections.Generic;
using System.Reflection;
using Microsoft.Practices.Unity.InterceptionExtension;
using ServiceBridge.Interception;
using IMethodInvocation = Microsoft.Practices.Unity.InterceptionExtension.IMethodInvocation;
using IMethodReturn = Microsoft.Practices.Unity.InterceptionExtension.IMethodReturn;
using PipelineManager = ServiceBridge.Interception.PipelineManager;

namespace ServiceBridge.Unity.Interception
{
    internal class UnityInjectionBehavior : IInterceptionBehavior
    {
        private readonly PipelineManager _pipelineManager;

        /// <summary>
        /// Initializes a new instance of the <see cref="UnityInjectionBehavior" /> with a pipeline manager.
        /// </summary>
        /// <param name="pipelineManager">
        /// The <see cref="PipelineManager" /> for the new instance.
        /// </param>
        internal UnityInjectionBehavior(PipelineManager pipelineManager)
        {
            _pipelineManager = pipelineManager;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="UnityInjectionBehavior" /> with the given information
        /// about what's being intercepted and the current set of injection policies.
        /// </summary>
        /// <param name="interceptionRequest">Information about what will be injected.</param>
        /// <param name="container">Service container that can be used to resolve interceptors.</param>
        public UnityInjectionBehavior(CurrentInterceptionRequest interceptionRequest, IServiceContainer container)
        {
            if (interceptionRequest == null)
            {
                throw new ArgumentNullException(nameof(interceptionRequest));
            }
            var hasHandlers = false;
            var manager = new PipelineManager(container.GetInstance<IInterceptorFactory>());
            foreach (var method in interceptionRequest.Interceptor.GetInterceptableMethods(
                interceptionRequest.TypeToIntercept, interceptionRequest.ImplementationType))
            {
                var hasNewHandlers = manager.InitializePipeline(method.InterfaceMethodInfo,
                    method.ImplementationMethodInfo, container);
                hasHandlers = hasHandlers || hasNewHandlers;
            }
            foreach (var constructor in interceptionRequest.ImplementationType.GetConstructors())
            {
                var hasNewHandlers = manager.InitializePipeline(constructor, container);
                hasHandlers = hasHandlers || hasNewHandlers;
            }
            _pipelineManager = hasHandlers ? manager : null;
        }

        /// <summary>
        /// Execute behavior processing.
        /// </summary>
        /// <param name="input">Inputs to the current call to the target.</param>
        /// <param name="getNext">Delegate to execute to get the next delegate in the behavior chain.</param>
        /// <returns>Return value from the target.</returns>
        public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
        {
            Func<IMethodInvocation, IMethodReturn> defaultInvoke = invocation =>
            {
                try
                {
                    return getNext()(invocation, getNext);
                }
                catch (TargetInvocationException ex)
                {
                    // The outer exception will always be a reflection exception; we want the inner,
                    // which is the underlying exception.
                    return invocation.CreateExceptionMethodReturn(ex.InnerException);
                }
            };
            if (_pipelineManager == null) return defaultInvoke(input);
            var methodReturn = _pipelineManager.GetPipeline(input.MethodBase)
                .Invoke(new UnityMethodInvocation(input),
                    (injectionInvocation, injectionGetNext) =>
                        new UnityMethodReturn(defaultInvoke(((UnityMethodInvocation) injectionInvocation).Unwrap())));
            return ((UnityMethodReturn) methodReturn).Unwrap();
        }

        /// <summary>
        /// Returns the interfaces required by the behavior for the objects it intercepts.
        /// </summary>
        /// <returns>The required interfaces.</returns>
        public IEnumerable<Type> GetRequiredInterfaces()
        {
            return Type.EmptyTypes;
        }

        /// <summary>
        /// Returns a flag indicating if this behavior will actually do anything when invoked.
        /// </summary>
        /// <remarks>
        /// This is used to optimize interception. If the behaviors won't actually
        /// do anything (for example, PIAB where no policies match) then the interception
        /// mechanism can be skipped completely.
        /// </remarks>
        public bool WillExecute => true;
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
8,960
Is the Pope Catholic? Do bears…? etc. Not a rhetorical question, apparently, but a live and highly intellectual debate on the website of the BNP (a UK-based far-right/fascist organisation). Sarah, my partner in crime, is the director of Dimsum, a British-Chinese community organisation which has made the BNP's 'Racist Organisations' list (coming in at number 23!). How so? you may ask. Well, the BNP's logic goes that if the BNP is racist for supporting the rights of the downtrodden 'indigenous British population', then organisations that support other minority communities are also racist.
{ "redpajama_set_name": "RedPajamaC4" }
1,084
\section{Introduction} The increasing prevalence of knee osteoarthritis (OA), a degenerative joint disease, and of total joint arthroplasty as a serious consequence, means there is a growing need for effective clinical and scientific tools to diagnose knee OA in the early stage, and to assess its severity in progressive stages~\cite{oka2008, shamir2009}. Detecting knee OA and assessing its severity are crucial for pathology, clinical decision making, and predicting disease progression \cite{braun2012}. Joint space narrowing (JSN) and osteophyte (bone spur) formation are the key pathological features of knee OA \cite{oka2008}, which are easily visualized using radiographs~\cite{braun2012}. The assessment of knee OA severity has traditionally been approached as an image classification problem~\cite{shamir2009}, with the KL grades being the ground truth for classification. Radiographic features detectable through computer-aided analysis are clearly useful to quantify knee OA severity, and to predict the future development of knee OA~\cite{shamir2009}. However, based on the results reported, the accuracy of both multi-class and consecutive-grade classification is far from ideal. Previous work on classifying knee OA from radiographic images has used Wndchrm, a multipurpose bio-medical image classifier~\cite{shamir2008,orlov2008}. The feature space used by Wndchrm includes hand-crafted features based on polynomial decomposition, contrast, pixel statistics, and textures, as well as features extracted from image transforms~\cite{shamir2009,shamir2008,orlov2008}. Instead of hand-crafted features, we propose that feature representations learned by a CNN can be more effective for classifying knee OA images and assessing the severity of the condition.
Feature learning approaches provide a natural way to capture cues by using a large number of code words (sparse coding) or neurons (deep networks), while traditional computer vision features, designed for basic-level category recognition, may eliminate many useful cues during feature extraction \cite{yang2013}. Manually designed or hand-crafted features often simplify machine learning tasks. Nevertheless, they have a few disadvantages. The process of engineering features requires domain-related expert knowledge, and is often very time consuming~\cite{lee2010}. These features are often low-level as prior knowledge is hand-encoded, and features in one domain do not always generalize to other domains~\cite{le2013}. In recent years, learning feature representations has been preferred to hand-crafted features, particularly for fine-grained classification, because rich appearance and shape features are essential for describing subtle differences between categories~\cite{yang2013}. A convolutional neural network (CNN) typically comprises multiple convolutional and sub-sampling layers, optionally followed by fully-connected layers like a standard multi-layer neural network. A CNN exploits the 2D spatial structure of images to learn translation-invariant features. This is achieved with local connections and associated weights followed by some form of pooling. The main advantage of CNNs over fully-connected networks is that they are easier to train and have fewer parameters with the same number of hidden units~\cite{prasoon2013}. In this work, first, we investigated the use of well-known CNNs such as the VGG 16-layer net~\cite{simonyan2014}, and comparatively simpler networks like VGG-M-128~\cite{chatfield2014} and the BVLC reference CaffeNet~\cite{jia2014,karayev2013} (which is very similar to the widely-used \textit{AlexNet} model~\cite{krizhevsky2012imagenet}), to classify knee OA images.
These networks are pre-trained for color image classification using a very large dataset such as the ImageNet LSVRC dataset~\cite{russakovsky2015imagenet}, which contains 1.2 million images in 1000 classes. Initially, we extracted features from the convolutional, pooling, and fully-connected layers of VGG16, VGG-M-128, and BVLC CaffeNet, and trained linear SVMs to classify knee OA images. Next, motivated by the transfer learning approach~\cite{yosinski2014}, we fine-tuned the pre-trained networks. We adopted transfer learning as the OAI dataset we work with is small, containing only a few thousand images. In this setting, a base network is first trained on external data, and then the weights of the initial $n$ layers are transferred to a target network~\cite{yosinski2014}. The new layers of the target network are randomly initialized. Intuitively, the lower layers of the networks contain more generic features such as edge or texture detectors useful for multiple tasks, while the upper layers progressively focus on more task-specific cues~\cite{karayev2013,yosinski2014}. We used this approach for both classification and regression, adding new fully-connected layers and using backpropagation to fine-tune the weights of the complete network on the target loss. The primary contributions of this paper are the use of CNNs and regression loss to quantify knee OA severity. We propose the use of mean squared error for assessing the performance of an automatic knee OA severity assessment instead of binary and multi-class classification accuracy. We show that the inferred CNN features from the fine-tuned BVLC reference CaffeNet provide higher classification accuracy in comparison to the state-of-the-art. We also present an SVM-based method to automatically detect and extract the knee joints from knee OA radiographs.
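To make the transfer recipe concrete, here is a minimal NumPy sketch (not the actual Caffe setup used in the paper): the lower-layer weights of a "base" network are copied into the target network and frozen, a new randomly initialized head is added, and only the head is trained by gradient descent on the target loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained lower layer of a base network (weights assumed given).
W1 = rng.normal(scale=0.5, size=(8, 16))      # transferred layer, kept frozen

def features(X):
    """Lower-layer representation reused from the base network: ReLU(X @ W1)."""
    return np.maximum(0.0, X @ W1)

# Synthetic target task: regress a score from the frozen features.
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

w2 = rng.normal(scale=0.01, size=16)          # new, randomly initialized head
b2 = 0.0

def mse(X, y):
    return np.mean((features(X) @ w2 + b2 - y) ** 2)

loss_before = mse(X, y)
lr = 0.01
for _ in range(500):                          # fine-tune only the new head
    F = features(X)
    err = F @ w2 + b2 - y
    w2 -= lr * 2 * F.T @ err / len(y)         # gradient of the mean squared error
    b2 -= lr * 2 * err.mean()
loss_after = mse(X, y)
```

In the paper's actual setting the transferred layers are also updated (with a smaller learning rate) rather than frozen; freezing here just keeps the sketch short.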
\section{Materials and Methods} \subsection{Dataset} The data used for the experiments are bilateral PA fixed flexion knee X-ray images, taken from the baseline (image release version O.E.1) radiographs of the Osteoarthritis Initiative (OAI) dataset containing an entire cohort of $4,476$ participants. This is a standard dataset for studies involving knee OA. Figure~\ref{fig:sam} shows some samples from the dataset. In the entire cohort, Kellgren \& Lawrence (KL) grades are available for both knee joints in $4,446$ radiographs and these images were used for this study. The distribution of the knee joint images (in total $8,892$) conditioned on the KL grading scale is: Grade 0 - 3433, Grade 1 - 1589, Grade 2 - 2353, Grade 3 - 1222, and Grade 4 - 295. The KL grading system uses 5 grades to classify knee OA severity from the radiographs \cite{park2013}, where `Grade 0' corresponds to the normal knee, and the other grades correspond to the progression of the disease, as shown in Figure~\ref{fig:KL}. \begin{figure}[t] \centering \includegraphics[width = 0.45 \textwidth, height = 0.25 \textwidth]{Sam.png} \caption{A few samples of bilateral PA fixed flexion knee OA radiographs.} \label{fig:sam} \vspace{-0.4 cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width = 0.45 \textwidth, height = 0.25 \textwidth]{KL.png} \caption{The KL grading system to assess the severity of knee OA. \footnotesize{Source: \url{http://www.adamondemand.com/clinical-management-of-osteoarthritis/}} } \label{fig:KL} \end{figure} \subsection{Automatic detection and the extraction of the knee joints} Automatically detecting and extracting the knee joint region from the radiographs is an important pre-processing step, and Shamir et al.~\cite{shamir2009} proposed a template matching method for this. Though this method is simple to implement, the accuracy of detecting the knee joints is low for our dataset. To improve detection, we propose an SVM-based method.
\subsubsection{Template matching} As a baseline, we adapted the template matching approach~\cite{shamir2009} for detecting the knee joint center, using an image patch of size 20$\times$20 pixels. The radiographs are first down-scaled to 10\% of the original size and subjected to histogram equalization for intensity normalization. An image patch (20$\times$20 pixels) containing the knee joint center is taken as a template; 10 such patches from each grade, 50 in total, were pre-selected as templates. Each input image is scanned by an overlapping sliding window (20$\times$20 pixels). At each window the Euclidean distances between the image patch and the 50 templates are calculated, and the shortest distance is recorded. After scanning an entire image with the sliding window, the window with the smallest Euclidean distance is taken as the knee joint center. \subsubsection{Proposed method for detecting the knee joints} We propose an approach using a linear SVM and the Sobel horizontal image gradients as the features for detecting the knee joint centers. The well-known Sobel edge detection algorithm uses the vertical and the horizontal image gradients. The motivation for this choice is that knee joint images primarily contain horizontal edges. Image patches (20$\times$20 pixels) containing the knee joint center are taken as the positive training samples and image patches (20$\times$20 pixels) excluding the knee joint center are taken as the negative training samples. After extracting Sobel horizontal gradients for the positive and negative samples, a linear SVM was trained. To detect the knee joint center in both left and right knees, input images are split in half to isolate the left and right knees separately. A sliding window (20$\times$20 pixels) is used on either half of the image, and the Sobel horizontal gradient features are extracted for every image patch.
The image patch with the maximum score based on the SVM decision function is recorded as the detected knee joint center, and the area (300$\times$300 pixels) around the knee joint center is extracted from the input images using the corresponding recorded coordinates. Figure \ref{fig:AutoDet} shows an example of a detected and extracted knee joint. \begin{figure}[t] \centering \includegraphics[width = 0.4 \textwidth, height = 0.2 \textwidth]{AutoDet.png} \caption{Detecting the knee joint centers and extracting the knee joints.} \label{fig:AutoDet} \vspace{-0.4 cm} \end{figure} \subsection{Assessing the knee OA severity using CNNs} In this study, we investigate the use of CNN for assessing the severity of knee OA through classification and regression. For this, we used two approaches: 1. Pre-trained CNN for fixed feature extraction, 2. Fine-tuning the pre-trained CNN following the transfer learning approach. For benchmarking the classification results obtained by the proposed methods, we have used Wndchrm, an open source utility for medical image classification that has been applied to this task in the literature~\cite{shamir2008,shamir2009}. \subsubsection{Classification using features extracted from pre-trained CNNs} As our initial approach, we trained VGG16~\cite{simonyan2014} with the OAI dataset. We used the Caffe~\cite{jia2014} framework for implementing and training the CNN, and to extract features from the CNN. We extracted features from the different layers of the VGG net such as fully-connected (fc7), pooling (pool5), and convolutional (conv5\_2) layers to identify the most discriminating set of features. Linear SVMs (trained using LIBLINEAR~\cite{fan2008liblinear}) were trained with the extracted CNN features for classifying knee OA images, where the ground truth was labeled images conditioned on the KL grades. 
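The fixed-feature pipeline just described (CNN activations fed to a linear SVM) can be sketched as follows. For a self-contained illustration the "CNN features" are stood in for by synthetic vectors, and the SVM is a minimal hinge-loss/subgradient version rather than LIBLINEAR:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for CNN features of two KL grades (in the paper these come from
# e.g. the pool5 layer of a pre-trained network; here they are synthetic).
n, d = 100, 64
feats_grade0 = rng.normal(loc=-0.3, size=(n, d))
feats_grade4 = rng.normal(loc=+0.3, size=(n, d))
X = np.vstack([feats_grade0, feats_grade4])
y = np.concatenate([-np.ones(n), np.ones(n)])    # binary labels for the SVM

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimal linear SVM via subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # samples violating the margin
        grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0) / len(y)
        grad_b = -C * y[mask].sum() / len(y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)           # training accuracy
```

The binary case shown here extends to the paper's multi-class experiments in the usual one-vs-rest fashion.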
Next, we investigated the use of simpler pre-trained CNNs such as VGG-M-128~\cite{chatfield2014} and BVLC CaffeNet~\cite{jia2014} for classifying the knee OA images. These networks have fewer layers and parameters in comparison to VGG16. \subsubsection{Fine-tuning the CNNs for classification and regression} Our next approach fine-tuned the BVLC CaffeNet~\cite{jia2014} and VGG-M-128~\cite{chatfield2014} networks. We chose these two smaller networks, both of which contain fewer layers and parameters ($\sim$62M), over the much deeper VGG16, which has $\sim$138M parameters. We replace the top fully-connected layer of both networks and retrain the model on the OAI dataset using backpropagation. The lower-level features in the bottom layers are also updated during fine-tuning. Standard softmax loss was used as the objective for classification, and accuracy layers were added to monitor training progress. A Euclidean loss layer (mean squared error) was used for the regression experiments. \section{Results and Discussion} \subsection{Automatic detection of the knee joints} Standard template matching~\cite{shamir2009} produces poor detection accuracy on our dataset. To improve this, we used a linear SVM with the Sobel horizontal image gradients as features to detect the knee joints. The proposed method is approximately $80\times$ faster than template matching; for detecting all the knee joints in the dataset comprising $4,492$ radiographs, the proposed method took $\sim$9 minutes and the template matching method took $\sim$798 minutes. Image patches containing the knee joint center (20$\times$20 pixels) were used as positive examples and randomly sampled patches excluding the knee joint as negative samples. We used 200 positive and 600 negative training samples. The samples were split into 70\% training and 30\% test sets. Fitting a linear SVM produced $\textbf{95.2\%}$ 5-fold cross validation and $\textbf{94.2\%}$ test accuracies.
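The detector's feature extraction step can be sketched as below. This is a NumPy stand-in, not the actual implementation: the Sobel kernel and the 20$\times$20 window follow the text, while the toy image, the stride, and scoring windows by raw gradient energy (in place of a trained SVM decision function) are illustrative simplifications:

```python
import numpy as np

# Sobel kernel for horizontal edges (responds to vertical intensity changes).
SOBEL_H = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_horizontal(img):
    """Valid-mode 2D correlation of img with the horizontal Sobel kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += SOBEL_H[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def window_features(img, size=20, stride=5):
    """Flattened horizontal-gradient features for each sliding window."""
    grads = sobel_horizontal(img)
    feats, coords = [], []
    for y in range(0, grads.shape[0] - size + 1, stride):
        for x in range(0, grads.shape[1] - size + 1, stride):
            feats.append(grads[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(feats), coords

# Toy image with a strong horizontal edge in its lower half.
img = np.zeros((60, 60))
img[35:, :] = 1.0
feats, coords = window_features(img)
# Score each window by gradient energy (a stand-in for the SVM decision value).
best = coords[int(np.abs(feats).sum(axis=1).argmax())]
```

The winning window covers the horizontal edge, mirroring how windows centered on the (edge-rich) joint gap score highest for the real detector.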
Table~\ref{Tab:AD} shows the precision, recall, and $F_{1}$ scores of this classification. \begin{table}[t] \caption{Classification metrics of the SVM for detection.} \label{Tab:AD} \centering \begin{tabular}{ c c c c } \toprule Class & Precision & Recall & $F_{1}$ score\\ \midrule Positive & 0.93 & 0.84 & 0.88 \\ Negative & 0.95 & 0.98 & 0.96 \\ \midrule Mean & 0.94 & 0.94 & 0.94\\ \bottomrule \end{tabular} \end{table} To evaluate the automatic detection, we generated the ground truth by manually annotating the knee joint centers (20$\times$20 pixels) in 4,496 radiographs using an annotation tool that we developed, which recorded the bounding box (20$\times$20 pixels) coordinates of each annotation. We use the well-known Jaccard index to give a matching score for each detected instance. The Jaccard index $J(A,D)$ is given by \begin{equation} J(A,D) = \frac{|A \cap D|}{|A \cup D|} \end{equation} where $A$ is the manually annotated and $D$ the automatically detected knee joint region using the proposed method. Table~\ref{Tab:Jac} shows the resulting average detection accuracies based on thresholding of the Jaccard indices.
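For axis-aligned bounding boxes the Jaccard index reduces to the intersection-over-union of areas; a small stdlib sketch (boxes given as (x, y, w, h), a format assumed here for illustration):

```python
def jaccard(a, d):
    """Jaccard index (IoU) of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    dx, dy, dw, dh = d
    ix = max(0, min(ax + aw, dx + dw) - max(ax, dx))   # intersection width
    iy = max(0, min(ay + ah, dy + dh) - max(ay, dy))   # intersection height
    inter = ix * iy
    union = aw * ah + dw * dh - inter
    return inter / union if union else 0.0
```

For example, two identical 20$\times$20 boxes give $J=1$, while a 10-pixel horizontal offset gives $J=1/3$, which is why the $J \geq 0.5$ column in the evaluation is a fairly strict criterion.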
\begin{table}[t] \caption{Comparison of automatic detection using the template matching and the proposed method based on Jaccard Index (J).} \label{Tab:Jac} \centering \begin{tabular}{l c c c} \toprule Method & $J=1$ & $J\geq0.5$ & $J>0$\\ \midrule Template Matching & 0.3 \% & 8.3 \% & 54.4 \% \\ Proposed Method & 1.1 \% & 38.6 \% & \textbf{81.8 \%} \\ \bottomrule \end{tabular} \end{table} \begin{table*}[!t] \caption{Classification accuracy (\%) achieved by the Wndchrm and pre-trained CNN features.} \centering \begin{tabular}{c c c c c c c c c c c c} \toprule & \multirow{2}{*}{Classification} & \multirow{2}{*}{Wndchrm} & \multicolumn{3}{c}{VGG 16-Layers Net} & \multicolumn{3}{c}{VGG-M-128 Net}& \multicolumn{3}{c}{BVLC ref CaffeNet}\\ \cmidrule{4-12} & & & fc7 & pool5 & conv5\_2 & fc6 & pool5 & conv4 & fc7 & pool5 & conv5\\ \midrule \multirow{4}{*}{Progressive} & Grade 0 vs Grade 1 & 51.5 & 56.3 & 61.3 & 63.5 & 56.5 & 63.2 & \textbf{64.7} & 62.0 & 64.3 & 63.3\\ & Grade 0 vs Grade 2 & 62.6 & 68.6 & 74.3 & 76.7 & 67.8 & 75.5 & \textbf{77.6} & 69.6 & 73.6 & 73.9\\ & Grade 0 vs Grade 3 & 70.6 & 86.4 & 91.4 & 92.4 & 88.5 & 90.2 & \textbf{92.9} & 87.9 & 92.5 & 91.5\\ & Grade 0 vs Grade 4 & 82.8 & 98.1 & 98.6 & 99.3 & 98.8 & 99.3 & 99.2 & 98.5 & \textbf{99.4} & 99.1\\ \midrule \multirow{3}{*}{Successive} & Grade 1 vs Grade 2 & 48.8 & 60.0 & 64.7 & 67.3 & 57.9 & 63.5 & 65.3 & 61.2 & \textbf{65.8} & 62.8 \\ & Grade 2 vs Grade 3 & 54.5 & 69.8 & 76.4 & 77.0 & 73.0 & 77.3 & \textbf{79.0} & 70.3 & 78.1 & 77.1\\ & Grade 3 vs Grade 4 & 58.6 & 85.2 & 88.8 & 90.0 & 85.0 & 90.4 & 91.2 & 87.4 & \textbf{91.6} & 91.4\\ \midrule \multirow{3}{*}{Multi-class} & Grade 0 to Grade 2 & 39.9 & 51.1 & 53.4 & 56.9 & 51.1 & 55.0 & \textbf{57.4} & 51.1 & 54.8 & 54.4\\ & Grade 0 to Grade 3 & 32.0 & 44.6 & 48.7 & 53.9 & 45.4 & 50.2 & \textbf{53.3} & 46.9 & 51.6 & 50.2\\ & Grade 0 to Grade 4 & 28.9 & 42.6 & 47.6 & 53.1 & 43.8 & 49.5 & \textbf{53.4} & 44.1 & 50.8 & 50.0\\ \bottomrule \end{tabular} 
\label{Tab:Clsf_PT} \vspace{-0.4cm} \end{table*} The mean Jaccard indices for the template matching and the classifier methods are $\textbf{0.1}$ and $\textbf{0.36}$, respectively. From Table~\ref{Tab:Jac}, it is evident that the proposed method is more accurate than template matching. This is due to the fact that template matching relies upon the intensity level differences across an input image. Thus, it is prone to matching a patch with a small Euclidean distance that does not actually correspond to the joint center. We also varied the templates in a set, and observed that the detection is highly dependent on the choice of templates: template matching is similar to a k-nearest neighbor classifier with $k=1$. The reason for the higher accuracy of the proposed method is the use of horizontal edge detection instead of intensity level differences. The knee joints primarily contain horizontal edges and thus are easily detected by the classifier using horizontal image gradients as features. Despite sizable improvements in accuracy and speed using the proposed approach, detection accuracy still falls short of 100\%. We therefore decided to use our manual annotations so as to investigate KL grade classification performance independently of knee joint detection. \subsection{Classification of the knee joints using pre-trained CNNs} The extracted knee joint images were split into training ($\sim$70\%) and test ($\sim$30\%) sets as per the KL grades. For classifying the knee joint images, we extracted features from the fully-connected, pooling and convolution layers of VGG16, VGG-M-128, and BVLC CaffeNet. For binary and multi-class classification, linear SVMs were trained individually with the extracted features. The classification results achieved with the CNNs are compared to knee OA image classification using Wndchrm~\cite{shamir2009,shamir2008,orlov2008}. Table~\ref{Tab:Clsf_PT} shows the test set classification accuracies achieved by Wndchrm and the CNN features.
The CNN features consistently outperform Wndchrm for classifying healthy knee samples against the progressive stages of knee OA. The features from the conv4 layer with dimension 512$\times$13$\times$13 and the pool5 layer with dimension 256$\times$13$\times$13 of the VGG-M-128 net, and the conv5 layer with dimension 512$\times$6$\times$6 and the pool5 layer with dimension 256$\times$6$\times$6 of the BVLC reference CaffeNet, give higher classification accuracy in comparison to the fully-connected fc6 and fc7 layers of the VGG nets and CaffeNet. We also extracted features from further bottom layers such as pool4, conv4\_2, pool3, and pool2 and trained classifiers on top of these features. As the dimensions of the bottom layers are high, significantly more time was required for training, but without improvement in classification accuracy. In a fine-grained classification task such as knee OA image classification, the accuracy of classifying successive classes tends to be low, as the variations in the progressive stages of the disease are minimal, and only highly discriminant features can capture these variations. From the experimental results, as shown in Table~\ref{Tab:Clsf_PT}, the features extracted from CNNs provide significantly higher classification accuracy in comparison to Wndchrm, and these features are effective and promising for classifying the consecutive stages of knee OA. We performed multi-class classification using linear SVMs with the CNN features (Table~\ref{Tab:Clsf_PT}, multi-class). Again, the CNN features perform significantly better than the Wndchrm-based approach. The classification accuracies obtained using convolutional (conv4, conv5) and pooling (pool5) layers are slightly higher in comparison to fully-connected layer features. There are minimal variations in classification accuracy obtained with the features extracted from the VGG-M-128 net and the BVLC reference CaffeNet in comparison to VGG16.
\subsection{Classification of the knee joints using fine-tuned CNNs} \begin{figure}[b] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1\textwidth] {Loss.png} \end{minipage}% \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1\textwidth]{Acc.png} \par\vspace{0pt} \end{minipage} \caption{Learning curves for training and validation loss (left) and validation accuracy (right) during fine-tuning.} \label{fig:LossAcc} \vspace{-0.4cm} \end{figure} Table~\ref{Tab:Clsf_FT} shows the multi-class classification results for the fine-tuned BVLC CaffeNet and VGG-M-128 networks. We omitted the VGG16 network in this experiment since the variation in accuracy among the pre-trained CNNs was small, and fine-tuning VGG16 is significantly more computationally expensive. The dataset was split into training (60\%), validation (10\%) and test (30\%) sets for fine-tuning. To increase the number of training samples, we included the right-left flipped knee joint images in the training set. The networks were fine-tuned for 20 epochs using a learning rate of 0.001 for the transferred layers, and boosting it on newly introduced layers by a factor of 10. The performance of the fine-tuned BVLC CaffeNet was slightly better than VGG-M-128. Hence, we only show here the results of fine-tuning CaffeNet. Figure~\ref{fig:LossAcc} shows the learning curves for training and validation loss, and validation accuracy. The decrease in loss and increase in accuracy show that the fine-tuning is effective and makes the CNN features more discriminative, which improves classification accuracy (Table~\ref{Tab:Clsf_FT}). The features extracted from the fully connected (fc7) layer provide slightly better classification in comparison to the pooling (pool5) and convolution (conv5) layers.
\begin{table}[t] \caption{Classification accuracy (\%) achieved with the features extracted from fine-tuned BVLC Net.} \centering \begin{tabular}{c c c c c c c} \toprule \multirow{2}{*}{Classification} & \multicolumn{3}{c}{Before Fine-Tuning} & \multicolumn{3}{c}{After Fine-Tuning}\\ \cmidrule{2-7} & fc7 & pool5 & conv5 & fc7 & pool5 & conv5\\ \midrule Grade 0 vs Grade 1 & 62.0 & 64.3 & 63.3 & 63.3 & \textbf{64.3} & 61.9\\ Grade 0 vs Grade 2 & 69.6 & 73.6 & 73.9 & 76.3 & \textbf{77.2} & 74.1\\ Grade 0 vs Grade 3 & 87.9 & 92.5 & 91.5 & \textbf{96.7} & 96.0 & 96.3\\ Grade 0 vs Grade 4 & 98.5 & 99.4 & 99.1 & \textbf{99.8} & 99.7 & 99.7\\ \midrule Grade 1 vs Grade 2 & 61.2 & 65.8 & 62.8 & 63.3 & \textbf{66.7} & 62.7\\ Grade 2 vs Grade 3 & 70.3 & 78.1 & 77.1 & \textbf{85.8} & 83.9 & 83.3\\ Grade 3 vs Grade 4 & 87.4 & 91.6 & 91.4 & \textbf{94.4} & 93.6 & 92.6\\ \midrule Grade 0 to Grade 2 & 51.1 & 54.8 & 54.4 & \textbf{57.4} & 57.0 & 52.0\\ Grade 0 to Grade 3 & 46.9 & 51.6 & 50.2 & \textbf{57.2} & 56.5 & 51.8\\ Grade 0 to Grade 4 & 44.1 & 50.8 & 50.0 & \textbf{57.6} & 56.2 & 51.8\\ \bottomrule \end{tabular} \label{Tab:Clsf_FT} \vspace{-0.4cm} \end{table} \subsection{Regression of KL grades using fine-tuned CNNs.} Existing work on automatic measurement of knee OA severity treats it as an image classification problem, assigning each KL grade to a distinct category \cite{shamir2009}. To date, evaluation of automatic KL grading algorithms has been based on binary and multi-class classification accuracy with respect to these discrete KL grades \cite{oka2008,shamir2009,orlov2008}. KL grades are not, however, categorical, but rather represent an ordinal scale of increasing severity. Treating them as categorical during evaluation means that the penalty for incorrectly predicting that a subject with Grade 0 OA has Grade 4 is the same as the penalty for predicting that the same subject has Grade 1 OA. 
Clearly the former represents a more serious error, yet this is not captured by evaluation measures that treat grades as categorical variables. In this setup, permuting the ordering of the grades has no effect on classification performance. Moreover, the quantization of the KL grades to discrete integer levels is essentially an artifact of convenience; the true progression of the disease in nature is continuous, not discrete. We therefore propose that it is more appropriate to measure the performance of an automatic knee OA severity assessment system using a continuous evaluation metric like mean squared error. Such a metric appropriately penalizes errors in proportion to their distance from the ground truth, rather than treating all errors equally. Directly optimizing mean squared error on a training set also naturally leads to the formulation of knee OA assessment as a standard regression problem. Treating it as such provides the model with more information on the structure and relationship between training examples with successive KL grades. We demonstrate that this reduces both the mean squared error and improves the multi-class classification accuracy of the model. We fine-tuned the pre-trained BVLC CaffeNet model using both classification loss (cross entropy on softmax outputs) and regression loss (mean squared error) to compare their performance in assessing knee OA severity. In both cases, we replace fc7 with a randomly initialized layer and fine tune for 20 epochs, selecting the model with the highest validation performance. The classification network uses a 5D fully connected layer and softmax following the fc7 layer, and the regression network uses a 1D fully connected node with a linear activation. We compare the models using both mean squared error (MSE) and standard multi-class classification metrics. 
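The two evaluation modes (continuous MSE on the raw regression outputs, and rounding those outputs to integer grades for classification metrics) can be sketched as follows; the predictions here are illustrative, not the paper's actual model outputs:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error, usable for both integer and real-valued predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([0, 1, 2, 3, 4, 2, 1, 0])                   # ground-truth KL grades
reg_out = np.array([0.2, 1.4, 1.8, 3.1, 2.6, 2.2, 0.6, 0.1])  # regression outputs

mse_reg = mse(y_true, reg_out)                        # continuous evaluation (CNN-Reg)
rounded = np.clip(np.rint(reg_out), 0, 4).astype(int) # round to grades (CNN-Reg*)
mse_rounded = mse(y_true, rounded)
accuracy = np.mean(rounded == y_true)
```

Note how the single grade-4 knee predicted as 2.6 contributes a squared error of 1.96 to the continuous MSE: errors are penalized in proportion to their distance from the ground truth, which categorical accuracy alone would not capture.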
We calculated the mean squared error using the standard formula: \begin{equation} MSE = \frac{1}{n} \sum_{i=1}^{n}(y_{i} - \hat{y_{i}})^{2}, \end{equation} where $n$ is the number of test samples, $y_{i}$ is the true (integer) label and $\hat{y_{i}}$ is the predicted label. For the classification network the predicted labels $\hat{y_{i}}$ are integers and for the regression network they are real numbers. We also test a configuration where we round the real outputs from the regression network to produce integer labels. Table~\ref{Tab:MSE} shows the MSE for the Wndchrm features and the CNN trained with classification loss (CNN-Clsf), regression loss (CNN-Reg), and regression loss with rounding (CNN-Reg*). Regression loss clearly achieves significantly lower mean squared error than both the CNN classification network and the Wndchrm features. \begin{table}[t] \caption{MSE for classification and regression.} \label{Tab:MSE} \centering \begin{tabular}{c c c c c} \toprule Classes & Wndchrm & CNN-Clsf & CNN-Reg & CNN-Reg*\\ \midrule Grade 0 to 4 & 2.459 & 0.836 & \textbf{0.504} & 0.576 \\ \bottomrule \end{tabular} \end{table} \begin{table}[b] \caption{Comparison of classification performance using classification (left) and regression (right) losses.
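To make the difference concrete, consider two hypothetical prediction patterns (not values from the paper): both are completely wrong as classifications, but MSE distinguishes the near-misses from the gross errors.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between true and predicted KL grades."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

y_true     = [0, 1, 2, 3, 4]
off_by_one = [1, 2, 3, 4, 3]   # every prediction one grade away
far_off    = [4, 4, 4, 0, 0]   # large ordinal errors

# Both patterns score 0% classification accuracy, but MSE separates them:
print(mse(y_true, off_by_one))  # 1.0
print(mse(y_true, far_off))     # 10.8

# Rounding real-valued regression outputs recovers integer labels (CNN-Reg*):
reg_out = [0.2, 1.4, 1.8, 3.1, 3.9]
rounded = np.rint(reg_out).astype(int)
```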
} \label{Tab:Clsf_stats} \centering \begin{tabular}{c c c c c c c} \toprule & \multicolumn{3}{c}{Classification loss} & \multicolumn{3}{c}{Regression loss} \\ \cmidrule{2-4} \cmidrule{5-7} Grade & Precision & Recall & $F_{1}$ & Precision & Recall & $F_{1}$\\ \midrule 0 & 0.53 & 0.64 & 0.58 & 0.57 & 0.92 & 0.71 \\ 1 & 0.25 & 0.19 & 0.22 & 0.32 & 0.14 & 0.20 \\ 2 & 0.44 & 0.32 & 0.37 & 0.71 & 0.46 & 0.56 \\ 3 & 0.37 & 0.47 & 0.41 & 0.78 & 0.73 & 0.76 \\ 4 & 0.56 & 0.54 & 0.55 & 0.89 & 0.73 & 0.80 \\ \midrule Mean & 0.43 & 0.44 & 0.43 & 0.61 & 0.62 & 0.59\\ \bottomrule \end{tabular} \vspace{-0.2cm} \end{table} To demonstrate that the regression loss also produces better classification accuracy, we compare the classification accuracy from the network trained with classification loss and the network trained with regression loss and rounded labels. Rounding is necessary in this case to allow the use of standard classification metrics. Table~\ref{Tab:Clsf_stats} compares the resulting precision, recall, and $F_{1}$ scores. The multi-class (grade 0--4) classification accuracy of the network fine-tuned with regression loss is 59.6\%. The network trained using regression loss clearly gives superior classification performance. We suspect this is because regression loss gives the network more information about the ordinal relationship between the KL grades, allowing it to converge on parameters that better generalize to unseen data. \section{Conclusion and Future Work} This paper investigated several new methods for automatic quantification of knee OA severity using CNNs. The first step in the process is to detect the knee joint region. We propose training a linear SVM on horizontal image gradients as an alternative to template matching; this approach is both more accurate and faster. Our initial approach to classifying the knee OA severity used features extracted from pre-trained CNNs.
We investigated three pre-trained networks and found that the BVLC reference CaffeNet and VGG-M-128 networks perform best. A linear SVM trained on features from these networks achieved significantly higher classification accuracy than the previous state of the art. The features from pooling and convolutional layers were found to be more accurate than those from the fully connected layers. Fine-tuning the networks by replacing the top fully connected layer gave further improvements in multi-class classification accuracy. Previous studies have assessed their algorithms using binary and multi-class classification metrics. We propose that it is more suitable to treat KL grades as a continuous variable and assess accuracy using mean squared error. This approach allows the model to be trained using regression loss so that errors are penalized in proportion to their severity, producing more accurate predictions. This approach also has the nice property that the predictions can fall between grades, which aligns with a continuous disease progression. Future work will focus on improving knee joint detection accuracy using a CNN or region-based CNN instead of the proposed linear model on Sobel gradients, and on further improving assessment of knee OA severity. It is clear that the distribution of images in ImageNet and that of knee radiographs are very different. Given a large number of training examples, it would be possible to train a model from scratch on the knee OA images, which would likely be better adapted to the domain. In the absence of a large number of labeled examples, semi-supervised approaches such as ladder networks~\cite{rasmus2015semi} may prove more effective than the domain adaptation approach used here. Currently, the detection of knee joints, feature extraction, and classification/regression are separate steps. Future work will also investigate an end-to-end deep learning system by combining these steps.
\section*{Acknowledgment} This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under grant number SFI/12/RC/2289. The OAI is a public-private partnership comprised of five contracts (N01-AR-2-2258; N01-AR-2-2259; N01-AR-2-2260; N01-AR-2-2261; N01-AR-2-2262) funded by the National Institutes of Health, a branch of the Department of Health and Human Services, and conducted by the OAI Study Investigators. Private funding partners include Merck Research Laboratories; Novartis Pharmaceuticals Corporation; GlaxoSmithKline; and Pfizer, Inc. Private sector funding for the OAI is managed by the Foundation for the National Institutes of Health. \bibliographystyle{IEEEtran}
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>com.hubspot.nebula</groupId> <artifactId>Nebula</artifactId> <version>0.0.1-SNAPSHOT</version> <relativePath>../pom.xml</relativePath> </parent> <artifactId>NebulaService</artifactId> <repositories> <repository> <id>sonatype-nexus-snapshots</id> <name>Sonatype Nexus Snapshots</name> <url>http://oss.sonatype.org/content/repositories/snapshots</url> </repository> <repository> <id>repo.codahale.com</id> <url>http://repo.codahale.com/</url> </repository> </repositories> <dependencies> <dependency> <groupId>com.hubspot.nebula</groupId> <artifactId>NebulaData</artifactId> <version>0.0.1-SNAPSHOT</version> </dependency> <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>${guava.version}</version> </dependency> <dependency> <groupId>com.hubspot.dropwizard</groupId> <artifactId>dropwizard-guice</artifactId> <version>0.7.0.2</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.29</version> </dependency> <dependency> <groupId>io.dropwizard</groupId> <artifactId>dropwizard-migrations</artifactId> <version>${dropwizard.version}</version> </dependency> <dependency> <groupId>com.hubspot.jackson</groupId> <artifactId>jackson-jaxrs-propertyfiltering</artifactId> <version>0.5.0</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> </exclusion> </exclusions> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>${java.abi}</source> <target>${java.abi}</target> <encoding>UTF-8</encoding> </configuration> 
</plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-source-plugin</artifactId> <version>2.1.2</version> <executions> <execution> <id>attach-sources</id> <goals> <goal>jar</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-resources-plugin</artifactId> <version>2.5</version> <configuration> <outputDirectory /> <encoding>UTF-8</encoding> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.3.2</version> <configuration> <archive> <manifest> <addDefaultImplementationEntries>true</addDefaultImplementationEntries> </manifest> </archive> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>${java.abi}</version> <configuration> <createDependencyReducedPom>true</createDependencyReducedPom> <filters> <filter> <artifact>*:*</artifact> <excludes> <exclude>META-INF/*.SF</exclude> <exclude>META-INF/*.DSA</exclude> <exclude>META-INF/*.RSA</exclude> </excludes> </filter> </filters> </configuration> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" /> <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>com.hubspot.nebula.NebulaService</mainClass> </transformer> </transformers> </configuration> </execution> </executions> </plugin> <plugin> <!-- You'll probably want to remove this for your project. I'm just using it here so that dropwizard-example doesn't get deployed as a library. 
--> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-deploy-plugin</artifactId> <version>2.7</version> <configuration> <skip>true</skip> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-site-plugin</artifactId> <version>3.0</version> <configuration> <skip>true</skip> <skipDeploy>true</skipDeploy> </configuration> </plugin> </plugins> </build> </project>
\section{Introduction} \label{sec:intro} Pronunciation dictionaries of hybrid automatic speech recognition (ASR) and Text-to-Speech (TTS) systems, which contain all word forms, are usually generated automatically using a grapheme-to-phoneme (G2P) model trained on a baseline dictionary. Being trained on one language only, monolingual G2P models often struggle with loanwords of foreign heritage since their pronunciation is derived from their source language. In the German language, the use of English loanwords (Anglicisms) has been steadily increasing \cite{burmasova_2010_anglicisms}, resulting in a ratio of 4.5~\% Anglicisms in spontaneous speech \cite{hunt_anglicisms}. German G2P models show higher error rates for Anglicisms due to their irregular pronunciation compared to native German words \cite{milde_2017_multitask}. Currently, much ASR research has moved away from hybrid ASR systems towards end-to-end modeling. However, for real-world applications in non-English languages, we believe there is still a need for improved phonetic pronunciation. In end-to-end approaches, adaptation to the non-canonical pronunciation of words, such as Anglicisms, would mainly be achievable via fine-tuning. This requires sufficient amounts of transcribed speech containing Anglicisms, which is not always available. The pronunciation lexicons for training G2P systems are usually more common and easier to obtain. Generating phoneme sequences for Anglicisms with a G2P model trained on German data can lead to wrong pronunciations. Looking at the Anglicism \enquote{Whistleblower}, a conventional Sequitur G2P system trained on PHONOLEX Core \citelanguageresource{phonolex} generates the pronunciation /v I s t l e: p l o 6/ by applying the pronunciation rules it learned from the German training data, but the correct pronunciation according to Duden is /v I s l b l O U6/\footnote{\url{https://www.duden.de/rechtschreibung/Whistleblower\#aussprache}}.
We propose a multitask learning (MTL) approach to solve this problem, detecting Anglicisms in a second classification task parallel to G2P conversion. In MTL, the human concept of inductive transfer is applied to a machine learning model \cite[p.19]{caruana_1997_multitask}. By detecting Anglicisms based on the input sequence, the model generates phoneme sequences differently depending on the Anglicism classification result. To inspect this approach, we implemented a sequence-to-sequence G2P model with an additional classification task for Anglicism detection to create pronunciations for a list of automatically crawled Anglicisms. We evaluated the influence of the dictionary on the automatic speech recognition performance using a dedicated Anglicism test set. \section{Related Work} \label{sec:related} Sequence-to-sequence (Seq2Seq) models are a deep neural network approach for handling sequences of unknown dimensions. They make use of LSTM cells \cite{LSTM_hochreiter}, a more complex kind of RNN cell. While a traditional RNN struggles with long-term dependencies, an LSTM model is able to handle information over long periods of time by controlling, forgetting, and passing information through the cell states. In the first LSTM implementation for automatic machine translation, \newcite{trans_seq2014} used an LSTM network as an encoder to obtain a fixed dimensional vector representation of an input sequence. As a decoder, they used an LSTM network conditioned on the input sequence to extract the output sequence from the vector. This way, a model is trained that maps source language input sequences to target language output sequences. While experimenting, they discovered that reversing the order of the input sequence elements positively influences the performance. If the LSTM reads the input sentence in reverse, many short-term dependencies in the data are introduced, which simplifies the optimization problem. 
\begin{figure*}[htb] \centering \includesvg[width=0.9\textwidth]{images/MTL-IS-big.svg} \caption{Seq2Seq G2P model with additional Anglicism classification task processing the input sequence \textlangle Fan\textrangle~with phoneme output /f E: n/ in BAS-SAMPA notation (adapted from \protect\newcite{yao_seq2seq}). The input sequence is read in reverse.} \label{fig:seq2seq_mtl_model} \end{figure*} \newcite{yao_seq2seq} applied the method of \newcite{trans_seq2014} to the G2P task, giving similar results as traditional joint-sequence models. Tested on the CMUdict dataset, their two-layered encoder-decoder LSTM model showed a phoneme error rate (PER) of 7.63~\% and a word error rate (WER) of 28.61~\%, performing slightly worse than the baseline Sequitur G2P \cite{Bisani_2008_g2p} (PER: 5.88~\%, WER: 24.53~\%). However, the Seq2Seq G2P approach enables novel applications in the field of deep learning and will likely benefit from future improvements in recurrent neural network architectures \cite{milde_2017_multitask}. Seq2Seq models can be used to train a multilingual G2P model due to their ability of joint learning of alignments and grapheme-phoneme translations \cite{sokolov_2019_amazon}. \newcite{sokolov_2019_amazon} built a multilingual Seq2Seq G2P model utilizing transfer learning to improve the performance on foreign words from different languages. They determine a language ID value in advance based on the previously known distribution of the word from different given language models. This language ID vector is an additional input to the model, concatenated to the attention vector. They trained both a monolingual and a multilingual G2P model respectively for 18 languages. The approach improved prediction accuracy compared to monolingual models for low-resource languages. However, we assume it does not apply to the Anglicism problem tackled in our work. 
The English stem words in Anglicisms are often \enquote{Germanized} and usually do not occur with this spelling in any other language. Another option enabled by the neural network approach of Seq2Seq G2P models is MTL. First introduced by \newcite{caruana_1993}, MTL applies the human concept of inductive transfer to a machine learning model. When humans are confronted with a new problem, they use the skills and information they already learned for related problems in the past. As opposed to single-task learning, where every task is learned separately, MTL allows learning multiple tasks in parallel. The tasks of an MTL model have to be related to each other. Related tasks provide an inductive bias, making the model learn more general representations. Hard parameter sharing is the most common approach for MTL. The input layers are shared between all tasks, while the output layers are kept task-specific. Hard parameter sharing reduces the risk of overfitting because the model is forced to find a more general representation that fits all tasks instead of only one. Having multiple tasks also helps to differentiate between relevant and irrelevant features since each task provides evidence for the feature's relevance. This shifts the models' focus towards those genuinely essential features. MTL fits tasks in the field of natural language processing well since text contains various cues that can be helpful for multiple tasks simultaneously. \newcite{milde_2017_multitask} built three multilingual Seq2Seq G2P models utilizing MTL, training them simultaneously on a German and English G2P task using the PHONOLEX and CMUdict data sets. Their models used character embeddings as encoder inputs and phoneme embeddings as decoder inputs. They added a language marker at the start of each input sequence for classifying the source language. The encoder was built as a stacked bi-directional LSTM to represent past and future dependencies.
Overall, the MTL models did not outperform the baseline Sequitur G2P model. The MTL models were also tested on specific word groups inside the German PHONOLEX test set, including a set of English loanwords. The Sequitur G2P model outperformed the MTL models within this word group, even though the MTL combinations additionally contained an English G2P task. \section{Proposed Approach} \label{sec:approach} While the MTL approach by \newcite{milde_2017_multitask} did not show improvements for English loanwords, it inspired us to use MTL for solving the challenge of Anglicism pronunciations in the German language. Since Anglicisms are of English heritage, their different linguistic features (grapheme combinations) compared to native German words can be an indicator for detecting Anglicisms and hence applying different pronunciation rules. Classifying grapheme sequences as Anglicisms can help the model understand that Anglicisms are pronounced differently than native German words, resulting in different phoneme conversions. Our proposed approach is shown in Figure \ref{fig:seq2seq_mtl_model}. As basis, we used the encoder-decoder-LSTM model by \newcite{yao_seq2seq} with two layers. The functioning of the model is illustrated in the figure with an example. The encoder LSTM reads the reversed input grapheme sequence \enquote{\texttt{\textlangle s\textrangle} n a F}, where \texttt{\textlangle s\textrangle} indicates the beginning of the sequence. After the last hidden layer activation, the decoder LSTM is initialized. It produces \enquote{\texttt{\textlangle os\textrangle} f E: n} as phoneme prediction of the input sequence and uses \enquote{f E: n \texttt{\textlangle /os\textrangle}} as the output sequence. \texttt{\textlangle os\textrangle} and \texttt{\textlangle /os\textrangle} indicate the start and end of the output sequence, respectively. 
The encoder LSTM represents the entire input sequence in the hidden layer activities, which are used as initial activities of the decoder. Working as a language model, the decoder LSTM uses the past phoneme sequence to predict the next phoneme. It stops predicting after outputting \texttt{\textlangle /os\textrangle}. An Anglicism classifier is added as a second task in the Seq2Seq G2P model. Utilizing hard parameter sharing, the output vectors of the encoder are used as input for both the decoder and the binary Anglicism classification task. The model optimizes both tasks by combining their losses. Looking at Figure \ref{fig:seq2seq_mtl_model}, the grapheme sequence \textlangle Fan\textrangle~is processed by the encoder, which passes the output to both the decoder and the Anglicism classification task. Based on the encoder output, the decoder generates the pronunciation while the classification task estimates the probability of the grapheme sequence being an Anglicism. \section{Experimental Setup} \label{sec:setup} To test the viability of our proposed approach, we applied it to a list of Anglicisms to generate pronunciations that will be used as a supplementary pronunciation dictionary in a German hybrid ASR model. The model corresponds to the source model used by \newcite{gref_2019} with a slightly different language model. It was trained on the GER-TV1000h corpus \citelanguageresource{gertv1000h}. Based on the model's performance on a dedicated Anglicism test set, we can assess the potential for more extensive applications. \begin{table*}[t] \centering \adjustbox{max width=\textwidth}{% \begin{tabular}{@{}llrrrrrrrr@{}} \toprule & & & & \multicolumn{2}{c}{\textbf{G2P Task}} & \multicolumn{4}{c}{\textbf{Anglicism Classification Task}} \\ \cmidrule(l{1em}r{1em}){5-6} \cmidrule(l{1em}r{1em}){7-10} \thead{G2P\\Model} & \thead{Data Source \& Specifics} & \thead{Epochs} & \thead{Iter. 
/\\Epoch} & \thead{PER} & \thead{WER} & \thead{Accu.} & \thead{Prec.} & \thead{Recall} & \thead{F1} \\ \midrule $\text{MTL}_{\text{Base}}$ & PHONOLEX core & 7 & 2498 & \textit{5.68} & \textit{24.43} & \textit{98.03} & \textit{0.00} & \textit{0.00} & \textit{0.00} \\ \midrule $\text{MTL}_{\text{Wiki}}$ & $\text{MTL}_{\text{Base}}$ + Wiktionary Anglicisms & 6 & 2845 & 8.63 & 30.89 & 91.24 & 80.69 & 54.26 & 64.89 \\ $\text{MTL}_{\text{WL}}$ & $\text{MTL}_{\text{Wiki}}$ + weighted losses ($\alpha = 0.7$) & 7 & 2845 & \textbf{7.87} & \textbf{28.03} & \textbf{92.42} & 86.71 & 58.14 & 69.61 \\ $\text{MTL}_{\text{DS}}$ & $\text{MTL}_{\text{Wiki}}$ + downsampled data & 16 & 806 & 11.21 & 39.63 & 88.66 & \textbf{90.30} & \textbf{86.63} & \textbf{88.43} \\ \bottomrule \end{tabular} } \caption{Selected MTL models with their training information as well as their G2P task and Anglicism classification task evaluation metrics. The PER and WER measures are based on a fixed percentage split of the training data. For $\text{MTL}_{\text{Base}}$, the precision, recall and F1 score values are 0.00~\% because the model did not yield any positive classifications. All metrics and error measures are given in percent.} \label{tab:mtl_model_metrics} \end{table*} \begin{table}[htb] \centering \begin{tabular}[t]{@{}lrr@{}} \toprule \thead{G2P Model} & \thead{PHONOLEX\\Core\\PER (\%)} & \thead{Wiktionary\\Anglicisms\\PER (\%)} \\ \midrule Sequitur & \textbf{2.59} & 17.11 \\ Seq2Seq & 5.13 & 19.80 \\ $\text{MTL}_{\text{Base}}$ & 5.68 & 25.72 \\ $\text{MTL}_{\text{Wiki}}$ & 7.41 & 16.92 \\ $\text{MTL}_{\text{WL}}$ & 6.63 & 15.98 \\ $\text{MTL}_{\text{DS}}$ & 12.69 & \textbf{11.57} \\ \bottomrule \end{tabular} \caption{PER values of the PHONOLEX Core and Wiktionary Anglicism validation data for the baseline and MTL G2P models. The Seq2Seq model corresponds to the basis of all MTL models and is trained with the same data as $\text{MTL}_{\text{Base}}$. 
The values for PHONOLEX Core show the general performance on German data, while the results for the Wiktionary Anglicism data show the specific performance for Anglicisms.} \label{tab:per_mtl} \end{table} \subsection{Datasets} \label{ssec:datasets} To create an Anglicism word list, we derived 11,839 Anglicisms from Wiktionary's list of German Anglicisms \citelanguageresource{wiki_anglicisms} and Pseudo-Anglicisms \citelanguageresource{wiki_pseudoanglicisms} as well as the VDS Anglizismenindex \citelanguageresource{vds_2020}. Additionally, inflections of the contained words were crawled from the Wiktionary website, expanding the word list to 18,967 entries. We used PHONOLEX core as training data for the G2P model, using 62,427 entries as train set and 3,000 entries as the validation set. We classified the lexicon entries based on the Anglicism word list. Since PHONOLEX core only contained 2.22~\% words classified as Anglicism in the train set, we derived 9,802 additional Anglicism pronunciations from Wiktionary and added them to the training data. With this data added to PHONOLEX core, the train set contained 71,102 entries, including 10,063 Anglicisms (16.11~\%). The validation set contained 3,457 entries, including 516 Anglicisms (17.20~\%). Based on this data, we created an additional downsampled data set that offers a 50/50 class balance between Anglicisms and non-Anglicisms, resulting in 20,126 entries in the train set and 1,032 entries in the validation set. For the evaluation of the resulting ASR models, we created a test set (\enquote{Anglicisms 2020}) including segments with Anglicism usage. The data was derived from newscasts, business \& technical talks, and videos containing colloquial speech, resulting in 1.3~h of audio data. We annotated the audio data using ELAN \cite{elan_2020}. For evaluating the specific performance of Anglicism recognitions, we flagged every Anglicism in the annotations. 
Of 14,028 total words, 1,362 were marked as Anglicisms (9.71~\%). To ensure that our approach would not negatively influence the general performance of native German words, we also used two in-house test sets representing typical German broadcast use cases as control groups. The audio data for those test sets were derived from 0.94~h of television segments (\enquote{German Broadcast 2020}) and 0.99~h of radio interviews containing spontaneous speech (\enquote{Challenging Broadcast 2018}). \subsection{Implementation} \label{ssec:implementation} For our Seq2Seq G2P model, we rebuilt the encoder-decoder LSTM from \newcite{yao_seq2seq}. As in \cite{yao_seq2seq}, the model training was set up using 500-dimensional projection and hidden layers and applying back-propagation through time. We used beam search to generate the phoneme sequence during decoding, selecting the hypothesis sequence with the highest posterior probability as the decoding result. We used a batch size of 25 for our data set as it performed best on the validation data among various configurations. The order of the training sequences was randomly permuted in each epoch. We used an adaptive learning rate of $0.007$ that was halved throughout training when no improvements in the validation loss were observed within the last five checks. Early stopping was triggered when the learning rate dropped below $0.00001$.
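The learning-rate schedule just described can be sketched as follows; this is a simplified illustration of the halving and early-stopping logic only, with the training loop omitted and the helper name our own.

```python
# Sketch of the adaptive learning-rate schedule: halve the rate when the
# validation loss has not improved within the last five checks, and stop
# training once the rate falls below 1e-5. Values mirror those in the text.
def schedule(val_losses, lr=0.007, patience=5, min_lr=0.00001):
    best = float("inf")
    since_improvement = 0
    for loss in val_losses:                # one entry per validation check
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:  # plateau: halve the learning rate
            lr /= 2
            since_improvement = 0
        if lr < min_lr:                    # early stopping criterion
            return lr, True
    return lr, False

# One improvement followed by a five-check plateau triggers a single halving:
lr, stopped = schedule([1.0, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8])  # lr -> 0.0035
```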
\begin{table*}[t] \centering \begin{tabular}[t]{@{}lrrrrr@{}} \toprule & \multicolumn{3}{c}{\textbf{Anglicisms 2020}} & & \\ \cmidrule(l{1em}r{1em}){2-4} \thead{ASR Model} & \thead{WER (\%)} & \thead{AER (\%)} & \thead{Recognized\\Anglicisms} & \thead{German\\BC 2020\\WER (\%)} & \thead{Challenging\\BC 2018\\WER (\%)} \\ \midrule Baseline \cite{gref_2019} & 15.80 & 39.50 & 824 & \textbf{6.56} & 10.84 \\ Sequitur & 15.76 & 39.35 & 826 & \textbf{6.56} & \textbf{10.82} \\ Seq2Seq & 15.75 & 39.28 & 827 & \textbf{6.56} & 10.91 \\ $\text{MTL}_{\text{WL}}$ & \textbf{15.65} & \textbf{38.33} & \textbf{840} & 6.57 & 10.86 \\ $\text{MTL}_{\text{DS}}$ & 15.67 & 38.40 & 839 & 6.60 & 10.90 \\ \midrule Wav2Vec2 & 15.69 & 42.07 & 789 & 9.34 & \textbf{9.48} \\ \bottomrule \end{tabular} \caption{Evaluation results for the baseline and MTL models. As baseline, we used a German ASR model based on the source model in \protect\newcite{gref_2019}. All other models extend the baseline with an additional Anglicism pronunciation dictionary based on the respective G2P approach. For the Anglicism 2020 test set, an additional Anglicism error rate (AER) is reported, indicating the percentage of correctly recognized Anglicisms. The German broadcast test sets served as control groups. The last row additionally shows the results of a Wav2Vec2 model. 
Since our Wav2Vec2 model could not handle hyphens, all hyphens in the reference transcripts were mapped to whitespaces to simulate a fair comparison to the other models.} \label{tab:wer_asr_long} \end{table*} \begin{table*}[htb] \centering \begin{tabular}{@{}lllll@{}} \toprule & \textbf{Sequitur} & \textbf{Seq2Seq} & \textbf{$\text{MTL}_{\text{WL}}$} & \textbf{$\text{MTL}_{\text{DS}}$} \\ \midrule Boomers & b u: m 6 s & b u: m 6 s & b u: m 6 s & b u: m 6 s \\ Brownie & b r aU n i: & b r o v n i: & b r aU n j @ & b r o v i: \\ Cosplay & k O s p l e: & k O s p l e: & k O s p l E I & k O s p l e: \\ spreadet & s p r E tS E t & S p r i: d @ t & S p r i: d @ t & S p r i: d @ t \\ used & j u: s t & z e: t & Q u: z @ t & Q aU s d \\ virgin & v I6 g I n & v I6 g I n & f I6 g I n & v I6 g I n \\ \bottomrule \end{tabular} \caption{Example entries from the Anglicism pronunciation dictionaries of the compared ASR models in BAS-SAMPA notation. While some pronunciations are similar (e.g.~\enquote{Boomers}) others show strong differences (e.g.~\enquote{used}). For some words, none of the G2P models was able to produce a suitable pronunciation. For example, the pronunciation for \enquote{virgin} would be /v~I6~dZ~I~n/ where the grapheme \textlangle g\textrangle~is pronounced as a /dZ/, but all models chose the phoneme /g/ for their result.} \label{tab:anglicism_dict} \end{table*} We added a binary classification task as an additional task after the encoder step, transforming the single task encoder-decoder LSTM model into an MTL model. The classifier consists of two hidden layers and an output layer. We combined the 500-dimensional cell state and cell output resulting from the encoder and used them as input for the classification task. The first hidden layer was a 1,000-dimensional linear layer with a 100-dimensional output. ReLU was used as the activation function. We applied a dropout of 0.2 to prevent overfitting. 
The second hidden layer was a 100-dimensional linear layer with an equally sized output, using PReLU with a constant $\alpha = 1$ as the activation function. The output layer was a 100-dimensional linear layer with one output neuron. We used the sigmoid function to get an output value between 0 and 1. The closer the output value is to 1, the more likely the word is an Anglicism. For the G2P decoder, we used LogSoftmax as the output activation function in the output layer. The decoder loss was calculated with the negative log-likelihood, which is typically paired with Softmax outputs. We calculated the classifier loss with binary cross-entropy, as this fits a binary classifier with an output value between 0 and 1. To optimize on both tasks, we combined both losses into one total loss value in the training and validation phase: \begin{equation} \text{Total Loss} = \text{Decoder Loss} + \text{Classifier Loss} \label{eq:sum_loss} \end{equation} Based on an input grapheme sequence, the resulting MTL Seq2Seq G2P model was able both to generate a corresponding phoneme sequence and to classify whether the input sequence is considered an Anglicism. \section{Evaluation and Results} \label{sec:evaluation} Table \ref{tab:mtl_model_metrics} shows the metrics of our MTL G2P models. Model $\text{MTL}_{\text{Base}}$, trained on PHONOLEX core only, shows how the class imbalance in the training data made the model only choose negative classifications. Adding the Wiktionary Anglicism pronunciations to the train data in model $\text{MTL}_{\text{Wiki}}$ helped obtain viable classification results.
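For concreteness, the classifier head and loss combination described in the setup above can be sketched as a single forward pass in plain numpy. This is a simplified illustration with random weights; dropout is omitted since it is only active during training.

```python
import numpy as np

rng = np.random.default_rng(1)

def prelu(x, alpha=1.0):
    # PReLU; with the constant alpha = 1 used above it reduces to the identity
    return np.where(x >= 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 500-dim cell state and 500-dim cell output of the encoder, concatenated
h = np.concatenate([rng.standard_normal(500), rng.standard_normal(500)])

W1 = rng.standard_normal((100, 1000)) * 0.01   # hidden layer 1: 1,000 -> 100
W2 = rng.standard_normal((100, 100)) * 0.01    # hidden layer 2:   100 -> 100
w3 = rng.standard_normal(100) * 0.01           # output layer:     100 -> 1

h1 = np.maximum(W1 @ h, 0.0)                   # ReLU (dropout of 0.2 omitted here)
h2 = prelu(W2 @ h1, alpha=1.0)
p_anglicism = float(sigmoid(w3 @ h2))          # probability the word is an Anglicism

# Equal-weight loss combination, as in the equation above
def total_loss(decoder_loss, classifier_loss):
    return decoder_loss + classifier_loss
```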
For model $\text{MTL}_{\text{WL}}$, we decided to alter the loss summation by including an additional $\alpha$-parameter to weight the tasks accordingly: \begin{equation} \text{Total Loss} = \alpha \cdot \text{Decoder Loss} + (1 - \alpha) \cdot \text{Classifier Loss} \label{eq:weighed_loss} \end{equation} We chose $\alpha = 0.7$ to put more influence on the decoder task, which led to improved PER and WER as well as improved classification metrics based on the validation set. We trained model $\text{MTL}_{\text{DS}}$ on the downsampled data set with equal loss summation (see Equation \ref{eq:sum_loss}). This setup shows the highest PER and WER of all models but also the best precision and recall values. While this was most likely caused by the higher number of entries with positive Anglicism classifications in the training data, we were interested in how the performance of this differently trained classifier would affect the ASR results. To get a more universal assessment of the MTL G2P models' performance, we compared them to two traditional monolingual German G2P models as the baseline. We used the Seq2Seq G2P model that our MTL models are based on, but without the classification task, and the German Sequitur G2P model currently used at Fraunhofer IAIS. We used the same PHONOLEX Core training and validation data as for the MTL models but without added Anglicism pronunciations from Wiktionary. Table \ref{tab:per_mtl} shows the PER results of the baseline and MTL G2P models. They were calculated on the PHONOLEX Core validation set (3,000 entries) and the Wiktionary Anglicism pronunciations from the MTL models' validation sets (516 entries) that were not included in either G2P model's training data. Similar to the results of \newcite{milde_2017_multitask}, all Seq2Seq G2P models showed increased PER values for the PHONOLEX Core validation data compared to Sequitur G2P.
Looking at the MTL models, we observed a decreasing Anglicism PER with an increasing Anglicism ratio in the training data. Overall, the additional classification task seemed to worsen the performance on native German words while it helped to generate more accurate Anglicism pronunciations. To evaluate the actual performance in an ASR setup, we chose models $\text{MTL}_{\text{WL}}$ and $\text{MTL}_{\text{DS}}$ to create a supplementary Anglicism pronunciation dictionary used in an ASR model. Based on an existing ASR model, we added the resulting Anglicism pronunciations to the pronunciation dictionary. With this method, we created two dedicated ASR models for the two MTL G2P models. To compare the performance of the generated Anglicism pronunciations with those of traditional G2P models, we created two more ASR models by generating Anglicism pronunciations with a Sequitur and a Seq2Seq G2P model. We created the Anglicism pronunciation dictionaries based on a list of 18,967 Anglicisms that was derived from Wiktionary and the VDS Anglizismenindex (see Section \ref{ssec:datasets}). Along with the baseline ASR model that did not include an additional Anglicism dictionary, we tested these models on the Anglicism ASR test set \enquote{Anglicisms 2020}. To make sure the results for native German words are not affected by the additional Anglicism pronunciations, we also tested the models on two typical German broadcast (BC) test sets, \enquote{German BC 2020} and \enquote{Challenging BC 2018}. We measured the WER to determine the overall performance of the added pronunciations. To specifically evaluate the performance on Anglicisms, we measured an Anglicism error rate (AER) by flagging every Anglicism in the test set \enquote{Anglicisms 2020}. Based on the number of all Anglicisms in the test set, the AER represents the ratio of wrongly recognized Anglicisms. Table \ref{tab:wer_asr_long} shows the evaluation results of the Anglicism test set.
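Assuming every Anglicism token in the references has been flagged and aligned with the ASR output (an assumption of this sketch, not a description of our tooling), the AER reduces to a simple ratio:

```python
def anglicism_error_rate(anglicism_hits):
    # anglicism_hits: one boolean per flagged Anglicism token in the test
    # set, True if the recognizer produced that token correctly.
    wrong = sum(1 for hit in anglicism_hits if not hit)
    return wrong / len(anglicism_hits)
```

For example, `anglicism_error_rate([True, False, True, False])` evaluates to `0.5`.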
While the WER and AER of all models improved over the baseline model due to the additional Anglicism pronunciations in the dictionary, both MTL models outperformed the non-MTL models. $\text{MTL}_{\text{WL}}$ showed the best WER, decreasing the WER of the baseline model by a relative 1~\% and the AER by a relative 3~\%, with 16 more Anglicisms being recognized. The results for the test sets \enquote{German BC 2020} and \enquote{Challenging BC 2018} show that the WERs of $\text{MTL}_{\text{WL}}$ only increased by an absolute 0.01~\% and 0.02~\%, respectively, which shows that the additional Anglicism pronunciations did not significantly impact the performance on typical German applications. Given that Anglicisms only account for a small fraction of German spoken language \cite{hunt_anglicisms}, our approach successfully improved the recognition of Anglicisms in German ASR without negatively impacting the performance of typical German applications. Considering the recent rise of end-to-end models, we additionally benchmarked a Wav2Vec2 model \cite{baevski2020wav2vec}. The model was implemented using the Hugging Face Transformers library \cite{huggingface_transformers}. We used Facebook's XLSR-Wav2Vec2 \cite{facebook_wav2vec} fine-tuned on the German Common Voice dataset \citelanguageresource{commonvoice}, provided by \newcite{jonatas} as \enquote{Wav2Vec2-Large-XLSR-53-German}, as our base model and further fine-tuned it on the GER-TV1000h corpus \citelanguageresource{gertv1000h}. To make the results comparable, we also applied the same language model that was used in our other models. During our tests, we noticed that the Wav2Vec2 model was not able to output hyphens. Since it is relevant to our use cases, we usually consider both casing and hyphen differences when calculating the WER.
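A minimal sketch of making such hyphen-sensitive scoring tolerant of a hyphen-less system (the helper name and behavior are our illustration, not the evaluation code used here):

```python
def map_hyphens(reference):
    # Replace hyphens with spaces and collapse runs of whitespace, so a
    # hypothesis without hyphens is not penalized on hyphenated compounds.
    return " ".join(reference.replace("-", " ").split())
```

For example, `map_hyphens("E-Mail-Adresse checken")` returns `"E Mail Adresse checken"`.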
As the results of the other models did contain hyphens, we tried to resolve this disadvantage of Wav2Vec2 by mapping all hyphens in the reference transcripts to whitespaces. With this mapping applied, Table \ref{tab:wer_asr_long} shows the evaluation results of the Anglicism test set as well as the two control sets. For test set \enquote{Anglicisms 2020}, the Wav2Vec2 model showed a WER of 15.69~\%, which at first sight compares reasonably well to the MTL model results. However, taking a closer look at the recognized Anglicisms, we calculated an AER of 42.07~\%, exceeding the baseline model by an absolute 2.57~\% and hence showing the highest AER among all tested models. Looking at the results for the two control groups, we assume that the Wav2Vec2 model performs better for German in general but has more problems with Anglicisms, which is reflected in the increased AER. We expected a better performance, since the XLSR-Wav2Vec2 model was trained on, among other data, 557~h of English audio, which we thought might positively influence the recognition of Anglicisms. We need to continue experimenting with end-to-end ASR models to get a more reasonable comparison and improve our results with further fine-tuning. For a more qualitative evaluation, we looked at the entries of the resulting supplementary Anglicism pronunciation dictionaries. Table \ref{tab:anglicism_dict} shows eight example Anglicism entries with their generated pronunciations from the respective models. While we observed cases where the generated pronunciations were similar, for example, in \enquote{Boomers} and \enquote{Cosplay}, there were also cases where the phoneme sequences noticeably differed from each other, as in \enquote{Brownie} and \enquote{used}. For some words, we observed that no model was able to generate a proper pronunciation, e.g.~\enquote{spreadet} and \enquote{virgin}.
Since this concerned all models, it might be caused by insufficient training data that misses or under-represents certain grapheme-phoneme combinations that the models struggle to learn. Looking more into the training data, we found that some Anglicisms were potentially misclassified. As the Anglicism classification of the 65,427 entries in the PHONOLEX Core data was done automatically based on an Anglicism list, all words not included in that list were not declared as Anglicisms. We must further evaluate and modify the training data to improve the Seq2Seq model's learning process. We plan to extend the Anglicism list by using an approach similar to that of \cite{anglicism_corpus} to detect more Anglicisms in the training data. There, Coats created an Anglicism corpus based on social media data by applying linguistic rules. After generating potential Anglicisms with a rule-based approach, he cross-checked them against German Twitter data to determine which Anglicisms exist and are, in fact, used in everyday (written) language. With this method, we can extend our Anglicism list and potentially gather more existing pronunciations from sources like Wiktionary to further improve our training data. \section{Conclusions} \label{sec:conclusions} In this work, we proposed a multitask sequence-to-sequence training to enhance the generation of Anglicism pronunciations by a German G2P model. With our approach, we improved the Anglicism recognition results by generating and adding Anglicism pronunciations to the ASR model's pronunciation dictionary. While positively influencing the Anglicism recognition results for our dedicated Anglicism test set, our approach did not noticeably disturb the performance on other test sets representing typical use cases in the broadcast domain.
The improvements on the Anglicism test set (WER $-1~\%$ relative and AER $-3~\%$ relative), achieved by only modifying the pronunciation dictionary of an existing ASR model, show that our approach has the potential to tackle the challenge of Anglicisms in German ASR. Since our approach uses only phonemes of the German phoneme set, the resulting pronunciations can be added to the pronunciation lexicon of an existing ASR system without adapting the acoustic model. Another advantage of using only the German phoneme set is that the resulting phoneme sequences reflect the German pronunciation of Anglicisms, rather than reusing foreign phonemes that a German speaker may not be able to pronounce, depending on their language skills. These \enquote{Germanized} pronunciations are more realistic with respect to real-world applications. In addition to ASR systems, which we have focused on in this work, we assume that our approach can also help improve the pronunciation of loan words in German TTS systems. The limited Anglicism data was an issue for creating the Seq2Seq training data. With more Anglicism pronunciations, the classification and decoding tasks might improve further due to more training material. Also, possible misclassifications in the training data could have negatively impacted the learning phase of the classification task. By extending the list of Anglicisms used for automatically classifying the training data and by additional manual checks, the classification results and the generation of Anglicism pronunciations could be further improved. While we used the same parameter configurations for all Seq2Seq G2P models to better compare the results in this publication, we plan on optimizing individual setups for the different models in the future. We also want to experiment with the tuning criteria, e.g.~focusing more on the classification results to better deal with the class imbalance.
After further optimizing our MTL models, we plan on applying the approach at a larger scale by generating an entire pronunciation dictionary instead of only adding supplementary Anglicism entries. We also plan on looking more into end-to-end models. The reported WER for the Wav2Vec2 model resulted from our first experiments with this technology and was additionally constrained by the use of a language model. We look forward to improving the results and finding out whether end-to-end models could also be a possible solution to the problem of Anglicisms and other loanwords in the German language. \section{Acknowledgments} \label{sec:acknowledgments} We thank Tilo Himmelsbach from Fraunhofer IAIS for his work on the Wav2Vec2 model. \section{Bibliographical References}\label{reference} \bibliographystyle{lrec2022-bib}
package org.leyfer.thesis.touchlogger_dirty.pojo;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;

/**
 * Created by k.leyfer on 07.11.2017.
 */
@JsonInclude(JsonInclude.Include.NON_NULL)
public class Event {
    @JsonProperty("ts")
    private Long timestamp;

    public Long getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(Long timestamp) {
        this.timestamp = timestamp;
    }
}
As caring for customers becomes the differentiator that drives consumer spend, Sitel is advancing its position as a world leader in outsourced customer care innovation. With 30 years of industry experience, Sitel's 58,000 employees support clients with CRM contact center services that provide predictable and measurable Return on their Customer Investment by building customer loyalty, increasing sales and improving efficiency. Sitel's global solutions include customer acquisition, customer care, technical support and social media programs. Support operations span from home-based agents to 110+ domestic, nearshore and offshore centers in 23 countries across North America, South America, Europe, Africa and Asia Pacific. Sitel manages client programs on behalf of some of the best known brands in the world in 40 languages. Jim Flynn is Chief Human Resources Officer at Sitel. Previously, Jim held various senior HR leadership roles in the industry.
{ "redpajama_set_name": "RedPajamaC4" }
8,891
Orzheshkovskyi Vasyl, professor, Dr. of medical science, professor of neurology department #2, Shupyk National Medical Academy of Postgraduate Education (neurology). PUBLONS ID: 1625908
Agarwal Naveen Kumar, MD, Head of Department Onco-Pathology at Action Cancer Hospital, New Delhi, India (pathology)
Amorim Melania Maria de Ramos, MD, PhD, Full professor, professor of department of obstetrics and gynecology, Institute of integral medicine of professor Figuer, Recife, Brazil (neonatology, reproductive medicine). SCOPUS ID: 57203176754
Bazanova Olga M., PhD, Dr.Sci.Biol., Full professor, main researcher, Department of Experimental & Clinical Neuroscience, Lab. of Affective, Cognitive & Translational Neuroscience, Federal State Budgetary Scientific Institution "Scientific Research Institute of Physiology & Basic Medicine" (human physiology), Novosibirsk, Russian Federation
Bychkova Nina G., Dr.Sci.Biol., Full professor, chief researcher, Laboratory of immunology and molecular biology of the scientific research institute of experimental and clinical medicine, Bogomolets National Medical University (immunology and allergology). PUBLONS ID: F-5401-2019, SCOPUS ID: 7003608420
Chaban Oleh S., Dr. of medical science, Full professor, Head of the Department of Medicine Psychology, Psychosomatic Medicine and Psychotherapy, Bogomolets National Medical University (psychiatry)
Dieieva Yuliia V., Dr. of medical science, Full professor, Head of ENT department, Bogomolets National Medical University (otorhinolaryngology)
Drogovoz Svitlana M., Dr. of medical science, Full professor, professor of Pharmacology department, National University of Pharmacy (pharmacology). PUBLONS ID: S-6736-2018
Dudka Petro F., Dr. of medical science, Full professor, professor of the Department of internal medicine №3, Bogomolets National Medical University (internal diseases, pulmonology)
Flaherty Maureen P., PhD, associate professor, director of the center of Ukrainian-Canadian studies, University of Manitoba, Winnipeg, Canada (medical psychology, psychiatry)
Fradelos Evangelos, PhD in Nursing, Adjunct Professor, University of Thessaly (Thessaly, Greece)
Gorban Evgeniy N., head of laboratory of radiobiology, Dmitry F. Chebotarev Institute of Gerontology of the NAMS of Ukraine (normal physiology)
Gorovenko Natalia G., Corr. NUAMS, Dr. of medical science, Full professor, head of Medical Genetics department, Shupyk National Medical Academy of Postgraduate Education (genetics)
Hychka Sergiy G., Dr. of medical science, Full professor, head of the department of pathological anatomy #2, Bogomolets National Medical University (pathological anatomy)
Kaliuzhna Lidiia D., Dr. of medical science, Full professor, professor of dermatovenereology department, Shupyk National Medical Academy of Postgraduate Education (skin and venereal diseases)
Kazmirchuk Vira E., Dr. of medical science, Full professor, director of the Institute of Immunology, Allergology and Rehabilitation (immunology, allergology)
Kharchenko Natalia V., Corr. NUAMS, Dr. of medical science, Full professor, head of Department of Gastroenterology, Dietology and Endoscopy, Shupyk National Medical Academy of Postgraduate Education (gastroenterology)
Komisarenko Yulia I., Dr. of medical science, Full professor, head of the department of endocrinology, Bogomolets National Medical University (endocrinology)
Korovin Sergii I., Dr. of medical science, Full professor, Deputy Director, National Cancer Institute (oncology)
Kramarov Sergiy O., Dr. of medical science, Full professor, head of the Pediatric infectious diseases department, Bogomolets National Medical University (infectious diseases). PUBLONS ID: Y-6753-2018
Lakatosh Volodymyr P., Dr. of medical science, Full professor, professor of Obstetrics and gynecology department #1, Bogomolets National Medical University (obstetrics and gynecology)
Liubich Larysa D., Dr.Sci.Biol., Full professor, head of Tissue Culture Laboratory, State Institution «Romodanov Neurosurgery Institute, National Academy of Medical Sciences of Ukraine» (immunology). PUBLONS ID: A-7602-2018
Lizogub Viktor G., Dr. of medical science, Full professor, head of the department of internal medicine #4, Bogomolets National Medical University (internal medicine, cardiology)
Papathanasiou Ioanna V., PhD in Mental Health Nursing, Assistant Professor of Community Psychiatry Nursing, University of Thessaly, Greece
Ponomarev Vladimir V., Dr. of medical science, Full professor, head of the department of neurology and neurosurgery, Belarus Medical Academy for Post-Graduate Education, Minsk, Belarus (neurology)
Protsiuk Radu G., Dr. of medical science, Full professor, professor of the Department of Phthisiology and Pulmonology, Bogomolets National Medical University (pulmonology, phthisiology)
Savychuk Natalia O., Dr. of medical science, Full professor, Vice-Rector for Science, Shupyk National Medical Academy of Postgraduate Education (stomatology). ORCID ID: 0000-0001-9532-665X
Shamrayev Sergiy N., Dr. of medical science, Full professor, leading researcher of the laboratory of endourology and lithotripsy, Institute of Urology of NAMS of Ukraine
Shypulin Vadym P., Dr. of medical science, Full professor, head of the department of internal medicine #1, Bogomolets National Medical University (internal diseases, medicine history)
Shyrobokov Volodymyr P., Acad. NUAMS and NUAS, Dr. of medical science, Full professor, head of the department of microbiology, virology and immunology, Bogomolets National Medical University (virology)
Skivka Larysa M., Dr.Sci.Biol., Full professor, head of the Department of Microbiology and Immunology, Taras Shevchenko National University of Kyiv, ESC 'Institute of Biology and Medicine' (immunology). PUBLONS ID: B-4720-2019
Soloviova Galyna O., deputy editor-in-chief (infectious diseases)
Tolstanov Oleksandr K., Dr. of medical science, Full professor, Vice-Rector for Education, Shupyk National Medical Academy of Postgraduate Education (social medicine)
Volosovets Oleksandr P., Corr. NUAMS, Dr. of medical science, Full professor, Head of Department of Pediatrics No. 2, Bogomolets National Medical University, Head of the Secretariat Department of the National Agency for Quality Assurance of Higher Education (theory and technique of professional education, pediatrics). PUBLONS ID: V-4884-2018
Voronenko Yuriy V., Acad. NUAMS, Dr. of medical science, Full professor, rector of Shupyk National Medical Academy of Postgraduate Education (social medicine)
Vus Viktor, PhD, Senior research fellow, NDSAN Network (sector of partnership building), Italy; Institute for social and political psychology NAES (medical psychology). PUBLONS ID: D-8487-2018
Vydyborets Stanislav V., Dr. of medical science, Full professor, Head of Department of Hematology and Transfusiology, Shupyk National Medical Academy of Postgraduate Education (hematology). PUBLONS ID: Y-3845:2018
Yavorovskiy Oleksandr P., Acad. NUAMS, Dr. of medical science, Full professor, head of Department of Hygiene and Ecology No. 2, Bogomolets National Medical University (hygiene and professional pathology)
Zozulia Ivan S., Dr. of medical science, Full professor, professor of department of Emergency Medicine, Shupyk National Medical Academy of Postgraduate Education (neurology)
Zukow Walery, associate professor, Dr. of medical science, Faculty of Earth Sciences, Nicolaus Copernicus University, Toruń, Poland (rehabilitation and sports medicine)
# Find five numbers in G. P. such that their product is 1024 and fifth term is square of the third term. - Mathematics and Statistics

Sum

Find five numbers in G. P. such that their product is 1024 and the fifth term is the square of the third term.

#### Solution

Let the five numbers in G. P. be a/r^2, a/r, a, ar, ar^2.

According to the given conditions,

(a/r^2) × (a/r) × a × ar × ar^2 = 1024

∴ a^5 = 4^5
∴ a = 4          ...(i)

Also, ar^2 = a^2
∴ r^2 = a
∴ r^2 = 4        ...[From (i)]
∴ r = ±2

When a = 4, r = 2:

a/r^2 = 1, a/r = 2, a = 4, ar = 8, ar^2 = 16

When a = 4, r = –2:

a/r^2 = 1, a/r = –2, a = 4, ar = –8, ar^2 = 16

∴ the five numbers in G.P. are 1, 2, 4, 8, 16 or 1, –2, 4, –8, 16.

Concept: Sequence and Series - Geometric Progression (G.P.)

#### APPEARS IN

Balbharati Mathematics and Statistics 1 (Commerce) 11th Standard Maharashtra State Board
Chapter 4 Sequences and Series
Exercise 4.1 | Q 8 | Page 51
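Both answers can be checked numerically; a short Python sketch (illustrative only, not part of the textbook solution):

```python
from math import prod

for r in (2, -2):
    a = 4
    terms = [a / r**2, a / r, a, a * r, a * r**2]
    assert prod(terms) == 1024        # product of the five numbers
    assert terms[4] == terms[2] ** 2  # fifth term equals square of the third
```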
package org.openuap.cms.comment.model;

import java.util.Date;

import org.openuap.cms.comment.ICommentPost;

/**
 * <p>
 * CMS comment entity.
 * </p>
 *
 * <p>
 * $Id: CommentPost.java 3950 2010-11-02 09:10:01Z orangeforjava $
 * </p>
 *
 * @author Joseph
 * @version 1.0
 */
public class CommentPost implements ICommentPost, java.io.Serializable {

    private static final long serialVersionUID = -250933656289004716L;

    /** Default constructor. */
    public CommentPost() {
    }

    /** Comment id. */
    private Long id;
    /** Root post id. */
    private Long rootId;
    /** Parent post id. */
    private Long parentId;
    /** User id. */
    private Long userId;
    /** User name. */
    private String userName;
    /** Content index id. */
    private String objectId;
    /** Content index type. */
    private String objectType;
    private Long catalogId;
    /** Comment creation date. */
    private Long creationDate;
    /** Last modification date. */
    private Long lastModifyDate;
    /** Post title. */
    private String title;
    /** Comment content. */
    private String content;
    /** Comment IP. */
    private String ip;
    /** Real IP of the poster. */
    private String realIp;
    /** Status of this post. */
    private int status;
    /** Number of supporters. */
    private int agreeCount;
    /** Number of opponents. */
    private int opposeCount;
    /** Hide-IP flag. */
    private Integer hiddenIpStatus;

    /**
     * Constructor with id.
     *
     * @param id
     */
    public CommentPost(Long id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj != null && obj instanceof CommentPost) {
            CommentPost that = (CommentPost) obj;
            if (this.id.equals(that.getId())) {
                return true;
            }
        }
        return false;
    }

    @Override
    public int hashCode() {
        return this.id.hashCode();
    }

    public int getAgreeCount() { return agreeCount; }
    public void setAgreeCount(int agreeCount) { this.agreeCount = agreeCount; }
    public int getOpposeCount() { return opposeCount; }
    public void setOpposeCount(int opposeCount) { this.opposeCount = opposeCount; }

    public Long getCreationDate() { return creationDate; }
    public Date getDisplayCreationDate() { return new Date(creationDate); }
    public Long getId() { return this.id; }
    public String getIp() { return this.ip; }
    public Long getLastModifyDate() { return this.lastModifyDate; }
    public Long getParentId() { return this.parentId; }
    public String getRealIp() { return this.realIp; }
    public Long getRootId() { return this.rootId; }
    public int getStatus() { return this.status; }
    public String getTitle() { return this.title; }
    public Long getUserId() { return this.userId; }
    public String getUserName() { return this.userName; }

    public void setCreationDate(Long creationDate) { this.creationDate = creationDate; }
    public void setId(Long id) { this.id = id; }
    public void setIp(String ip) { this.ip = ip; }
    public void setLastModifyDate(Long lastModifyDate) { this.lastModifyDate = lastModifyDate; }
    public void setParentId(Long parentId) { this.parentId = parentId; }
    public void setRealIp(String realIp) { this.realIp = realIp; }
    public void setRootId(Long rootId) { this.rootId = rootId; }
    public void setStatus(int status) { this.status = status; }
    public void setTitle(String title) { this.title = title; }
    public void setUserId(Long userId) { this.userId = userId; }
    public void setUserName(String userName) { this.userName = userName; }

    public String getContent() { return this.content; }
    public void setContent(String content) { this.content = content; }
    public String getObjectId() { return objectId; }
    public void setObjectId(String objectId) { this.objectId = objectId; }
    public String getObjectType() { return objectType; }
    public void setObjectType(String objectType) { this.objectType = objectType; }
    public Long getCatalogId() { return catalogId; }
    public void setCatalogId(Long catalogId) { this.catalogId = catalogId; }

    public String getDisplayIp() {
        if (this.getHiddenIpStatus() == 1) {
            // The user wants to hide their IP address.
            return "*";
        } else {
            if (this.realIp != null) {
                // Mask the last octet of the IP address.
                int pos = realIp.lastIndexOf(".");
                if (pos > 0) {
                    String displayIp = realIp.substring(0, pos);
                    return displayIp + ".*";
                }
                return realIp;
            }
            if (this.ip != null) {
                // Mask the last octet of the IP address.
                int pos = ip.lastIndexOf(".");
                if (pos > 0) {
                    String displayIp = ip.substring(0, pos);
                    return displayIp + ".*";
                }
                return ip;
            }
        }
        return "*";
    }

    public Integer getHiddenIpStatus() { return this.hiddenIpStatus; }
    public void setHiddenIpStatus(Integer hiddenIp) { this.hiddenIpStatus = hiddenIp; }
}
.class public interface abstract Lcom/htc/widget/HtcExpandableListView$OnGroupExpandListener;
.super Ljava/lang/Object;
.source "HtcExpandableListView.java"


# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
    value = Lcom/htc/widget/HtcExpandableListView;
.end annotation

.annotation system Ldalvik/annotation/InnerClass;
    accessFlags = 0x609
    name = "OnGroupExpandListener"
.end annotation


# virtual methods
.method public abstract onGroupExpand(I)V
.end method
Markets Rally Despite Inflation Report: Peter C. Earle on CBS News… by Peter C. Earle Peter C. Earle is an economist and writer who joined AIER in 2018. Prior to that he spent over 20 years as a trader and analyst at a number of securities firms and hedge funds in the New York metropolitan area, as well as running a gaming and cryptocurrency consultancy. His research focuses on financial markets, monetary policy, the economics of games, and problems in economic measurement. He has been quoted by the Wall Street Journal, Bloomberg, Reuters, CNBC, Grant's Interest Rate Observer, NPR, and in numerous other media outlets and publications. Pete holds an MA in Applied Economics from American University, an MBA (Finance), and a BS in Engineering from the United States Military Academy at West Point. Follow him on Twitter. "General Institutional Considerations of Blockchain and Emerging Applications" Co-Authored with David M. Waugh in The Emerald Handbook on Cryptoassets: Investment Opportunities and Challenges (forthcoming), edited by Baker, Benedetti, Nikbakht, and Smith (2022) "Operation Warp Speed" Co-authored with Edwar Escalante in Pandemics and Liberty, edited by Raymond J. March and Ryan M. Yonk (2022) "A Virtual Weimar: Hyperinflation in Diablo III" in The Invisible Hand in Virtual Worlds: The Economic Order of Video Games, edited by Matthew McCaffrey (2021) "The Fickle Science of Lockdowns" Co-authored with Phillip W. Magness, Wall Street Journal (December 2021) "How Does a Well-Functioning Gold Standard Function?" Co-authored with William J. 
Luther, SSRN (November 2021) "Populist Prophets, Public Prophets: Pied Pipers of Lucre, Then and Now" in Financial History (Summer 2021) "Boston's Forgotten Lockdowns" in The American Conservative (November 2020) "Private Governance and Rules for a Flat World" in Creighton Journal of Interdisciplinary Leadership (June 2019) "'Federal Jobs Guarantee' Idea Is Costly, Misguided, And Increasingly Popular With Democrats" in Investor's Business Daily (December 2018)
## Quadrature formulas for rational functions

F. Cala Rodriguez, P. Gonzalez-Vera, and M. Jimenez Paiz

### Abstract

Let $\omega$ be an $\mbox{L}_1$-integrable function on $[-1,1]$ and let us denote $$I_{\omega}(f)=\int_{-1}^1 f(x)\omega(x)dx,$$ where $f$ is any bounded integrable function with respect to the weight function $\omega$. We consider rational interpolatory quadrature formulas (RIQFs) where all the poles are preassigned and the interpolation is carried out along a table of points contained in $\bar{\bf C}$. The main purpose of this paper is the study of the convergence of the RIQFs to $I_\omega(f)$.

Full Text (PDF) [116 KB]
Remembering Modernism…Through Ikea By F Newsmagazine July 28, 2011 Arts & Culture, Uncategorized Jeff Carter's "The Common Citizenship of Forms" at the Illinois Institute of Technology Installation view of "The Common Citizenship of Forms." Image courtesy of the artist. Chicago-based artist and SAIC alum Jeff Carter (MFA 1998) has created a fairly specific niche for himself as an Ikea-hacking, architectural model-building sculptor. Never has his work had such resonance, however, as in The Common Citizenship of Forms, an homage to the 2009-2010 demolition of Chicago's Michael Reese Hospital, on view at the Illinois Institute of Technology. Here, Carter uses salvaged Ikea furniture to construct architectural models that are evocative of the 1946 South Side hospital campus designed by Walter Gropius and landscape architect Hideo Sasaki. The redevelopment of large swaths of the near South Side by Gropius, Mies van der Rohe, Ludwig Hillberseimer, Bertrand Goldberg and others in the mid-century created a veritable ville radieuse, cementing Chicago's position as the postwar home of the architectural avant-garde. But in a controversial move, the city of Chicago ignored the pleas of the international architectural community and ordered the demolition of most of the MRH campus, much as Chicago School masterpieces by Sullivan and Adler fell out of fashion and were destroyed in the 1960s and '70s. Jeff Carter, Untitled #3 (Chicago Tribune Tower), 2010. Image courtesy of the artist. Carter's incarnation of the Bauhaus style, rendered in the materials of its latter-day consumerist successor, is a thoughtful contemplation on the compromised models of high modernism, and the physical ramifications of abandoning an ideology. In his structures, all the rough edges of the pressed particleboard are left exposed, and corrugated cardboard calls to mind a row of sash windows. 
The intentional shoddiness of Carter's construction materials is a reminder of the essential ephemerality of modernist architecture itself without proper care and preservation. Carter has explored the intertwined relationship between memory and architecture before in his Tribune Tower project, a series of built models based on failed entries in the historic 1922 Tribune Tower competition. Raymond Hood's conservative Gothic Revival pastiche took the prize, but the project elicited avant-garde responses from the likes of Gropius, Hannes Meyer and Adolf Loos. These strange reinterpretations of skyscraper ideology (Loos famously suggested the form of a single Doric column) have been more widely disseminated and reproduced than the tower itself, despite the fact that they have no material history.

Although he works primarily in sculpture, Carter has a background in photography, and that background remains evident in the artist's interest in how architectural memory is primarily preserved in printed matter. Carter cites art historian Thomas McEvilley's writing on the shift of semiotic significance from image to object as an explanation of his own interest in dismantling this dependence on photography, and instead remembering architectural forms in three-dimensional models.

Jeff Carter, The Common Citizenship of Forms (Power Plant), 2010. Image courtesy of the artist.

Carter never saw the Michael Reese Hospital in person, and learned of the modernist work only during preservation efforts. This is typical of modernist architectural encounters, which tend to be second-hand. Viewing Carter's evocation of the structure, one has to wonder: if MRH lives on in textbooks, museum collections and Flickr albums, what is the true impact of the buildings' demolition for art and architectural history? But despite these philosophical underpinnings, The Common Citizenship of Forms is not a somber elegy to some forsaken masterwork.
Retaining the buoyancy of its origins — one can only assume the Ikea children's department — the sculptures together adopt a tone of exuberant playfulness, installed haphazardly across the floor of IIT's Crown Hall like forgotten toys. Crown Hall, an iconic building in its own right, seems to hum with the palimpsestic remnants of a long history of creative work; the Mies van der Rohe open floor plan is the site of the School of Architecture's first-year program during the academic year. The summer installation, situated outside a traditional gallery environment, retains that spirit of experimentation.

Jeff Carter, The Common Citizenship of Forms (Laundry Building), 2010. Image courtesy of the artist.

I entered the gallery in late afternoon to find the hall empty and the exhibition dormant; yet the unpretentious nature of the installation emboldened my viewing partner and me to examine the sculptures closely from floor level, discovering hidden details and, eventually, audio-visual equipment. Gleefully moving from piece to piece, we plugged in cords and flicked on switches, discovering the sculptures a second time through their kinetic elements. Plastic reeds swayed in rhythm with a musical composition by sound artist Annie Goh and, in a particularly inspired if literalist turn, the laundry building (uncannily resembling a wheeled laundromat hamper) began vibrating in what sounded like a spin cycle.

Carter's exhibition title implies the hope for reconciliation between the rupture of Bauhaus ideology and its enduring, if much changed, aesthetic. The Common Citizenship of Forms gives credence to the notion that Bauhaus beneficiaries, including both Ikea and the artist himself, have equal potential for inventiveness in the future, while still embodying the spirit of their past.

Jeff Carter: The Common Citizenship of Forms
SR Crown Hall, The Illinois Institute of Technology
3360 S.
State Street
www.miessociety.org
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,649
/**
 * Created by Michael on 12/14/2015.
 * Meteor methods code in large thanks to Team MistyRose
 */
textbook = "Textbook";
Textbook = new Mongo.Collection(textbook);

if (Meteor.isClient) {
  Meteor.startup(function() {
    sAlert.config({
      effect: 'jelly',
      position: 'bottom',
      timeout: 5000,
      html: false,
      onRouteClose: true,
      stack: true,
      offset: 0,
      beep: false
    });
  });
}

Meteor.methods({
  /**
   * Invoked by AutoForm to insert a Textbooks record, rejecting duplicates.
   * @param doc The Textbooks document.
   */
  addTextbook: function(doc) {
    var titleTaken = _.findWhere(Textbook.find().fetch(), {title: doc.title});
    var isbnTaken = _.findWhere(Textbook.find().fetch(), {ISBN13: doc.ISBN13});
    if (titleTaken && isbnTaken) {
      if (Meteor.isClient) {
        // (The original passed an undefined `configOverwrite` variable as a
        // second argument; sAlert's per-call overrides are optional, so it is dropped.)
        sAlert.error("Title and ISBN already exist in the catalog! Please enter a different title and ISBN.");
      }
      return;
    } else if (titleTaken) {
      if (Meteor.isClient) {
        sAlert.error("Title already exists in the catalog! Please enter a different title.");
      }
      return;
    } else if (isbnTaken) {
      if (Meteor.isClient) {
        sAlert.error("ISBN already exists in the catalog! Please enter a different ISBN.");
      }
      return;
    }
    check(doc, Textbook.simpleSchema());
    Textbook.insert(doc);
  },
  /**
   * Invoked by AutoForm to update a Textbooks record.
   * @param doc The Textbooks document.
   * @param docID Its ID.
   */
  editTextbook: function(doc, docID) {
    check(doc, Textbook.simpleSchema());
    Textbook.update({_id: docID}, doc);
  },
  deleteTextbook: function(docID) {
    Textbook.remove(docID);
  }
});

if (Meteor.isServer) {
  Meteor.publish(textbook, function() {
    return Textbook.find();
  });
}

Textbook.attachSchema(new SimpleSchema({
  title: {
    label: "Title",
    type: String,
    optional: false,
    unique: true,
    autoform: {group: textbook, placeholder: "Name of textbook"}
  },
  ISBN10: {
    label: "ISBN 10",
    type: String,
    optional: false,
    unique: true,
    max: 50,
    autoform: {group: textbook, placeholder: "ISBN 10"}
  },
  ISBN13: {
    label: "ISBN 13",
    type: String,
    optional: false,
    unique: true,
    max: 50,
    autoform: {group: textbook, placeholder: "ISBN 13"}
  },
  author: {
    label: "Author",
    type: String,
    optional: false,
    max: 50,
    autoform: {group: textbook, placeholder: "Name of author (first and last)"}
  },
  cover: {
    label: "Cover",
    type: String,
    optional: true,
    autoform: {group: textbook, placeholder: "Cover image"}
  }
}));
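The three duplicate-check branches in addTextbook above can be factored into one framework-free helper, which makes the logic testable without Meteor or a Mongo collection. This is a sketch: the helper name `duplicateError` is hypothetical, and plain `Array.prototype.some` stands in for Underscore's `_.findWhere`:

```javascript
// Returns the error message addTextbook would show for `doc`, or null if the
// document is safe to insert. `existing` is an array of textbook records,
// i.e. what Textbook.find().fetch() returns in the Meteor code above.
function duplicateError(existing, doc) {
  const titleTaken = existing.some(t => t.title === doc.title);
  const isbnTaken = existing.some(t => t.ISBN13 === doc.ISBN13);
  if (titleTaken && isbnTaken) {
    return "Title and ISBN already exist in the catalog! Please enter a different title and ISBN.";
  }
  if (titleTaken) {
    return "Title already exists in the catalog! Please enter a different title.";
  }
  if (isbnTaken) {
    return "ISBN already exists in the catalog! Please enter a different ISBN.";
  }
  return null; // no conflict
}
```

With this helper, the method body reduces to a single `if (msg) { sAlert.error(msg); return; }` check, and the same function can be exercised in unit tests on either client or server.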
{ "redpajama_set_name": "RedPajamaGithub" }
457
We Master Co., Ltd. will make every effort to develop new items and models, confident that only superior technology survives. RedwoodComm develops and provides measurement systems for the R&D and mass production of broadcast and wireless communication systems such as DAB, DRM, RDS, NFC, BT and LoRa. Deviser is a leading manufacturer and trader of electronic test instruments and equipment worldwide. Deviser aims to make stable, cost-effective test instruments and equipment that meet the wide range of requirements of varying industries.
{ "redpajama_set_name": "RedPajamaC4" }
2,388
An Analysis of the Story of the Odyssey versus Rapunzel

Published: 10.12.2019 | Words: 1878

The Tangled Journey

"We must be willing to let go of the life we've planned, so as to have the life that is waiting for us." —Joseph Campbell

Campbell means that in order to live the great life that is ahead of us, we must tie up the loose ends of the life we have been pursuing for so long and start a new beginning. This relates to both Rapunzel and Odysseus because each must conclude one life and discover a new life to live. A long-lost princess, "daughter" of the witch Mother Gothel, Rapunzel is awaiting her birthday and dreams of seeing the lanterns. Dying to see the lights, Rapunzel finds her way out of the tower to chase her dream, and on the way she finds a new one. Disney, the creator of Tangled, portrays the contemporary hero's journey drawn from the hero's journey archetype, which is also shown in Homer's The Odyssey. Odysseus is the king of Ithaca who has long been away from his homeland, his wife Penelopeia, and his son Telemachos. While his home is overrun with unruly suitors, Penelopeia and Telemachos are hopeless about Odysseus' return. Odysseus meets many trials on his journey and is unable to return home for twenty years. Finally, when he comes home, he is a new person and has changed completely. The steps of the hero's journey that are applied to Rapunzel and Odysseus were rephrased by Christopher Vogler, who in turn derived them from Joseph Campbell, the one who identified the hero's journey archetype.
The purpose of the hero's journey can vary, but it always has the same three stages: the Preparation, the Journey, and the Return, all of which depict the importance, the reality, and the depth of the hero's journey. The comparison of Rapunzel from Disney's Tangled and Odysseus from Homer's The Odyssey, based on the hero's journey archetype, shows that these are not simply stories that should be told, but stories that should be lived.

The Preparation is a crucial stage in kick-starting the hero's journey of Odysseus and Rapunzel, who have both similar and different reactions to each step. The first stage, the Preparation, includes five steps: the ordinary world, the call to adventure, the refusal of the call, meeting the mentor, and crossing the threshold. Overall, the ordinary world is the most essential step of the Preparation and is the moment when the hero feels that something is not right with where they are. During their stay in the ordinary world, Odysseus and Rapunzel are both unhappy with where they are and want to leave. Stranded on Calypso's island, Odysseus is trapped until Calypso finally tells him to leave: "She found him sitting on the shore. The tears were never dry in his eyes; life with its sweetness was slowly trickling away" (Homer 65). Odysseus displays his dissatisfaction with his ordinary world and shows how being on Ogygia, with nothing he can do about it, causes him to become dull. With every passing day, Odysseus' hope of returning home diminishes, as he is by himself and has no one to confide in. The ordinary world shown through Odysseus is relatable to some, although instead of losing hope and the meaning of life like Odysseus, people should keep enduring rather than give up. In Tangled, Rapunzel is a long-lost princess kept away in a tower by her "mother," and feels that something isn't right.
For example, Rapunzel questions her mother as to why she can never go outside, and Mother Gothel replies: "The outside world is a very dangerous place filled with horrible, selfish people" (Tangled). This is significant because it shows that Rapunzel wonders why Mother Gothel isn't letting her go outside and does not fully trust her. By questioning Mother Gothel, Rapunzel shows how she feels as if she no longer belongs in the tower and needs to look for the new life waiting for her on the journey. This is similar to Odysseus' ordinary world because he, too, is unhappy with his ordinary world and wants to leave. By both wanting to leave their ordinary worlds, Rapunzel and Odysseus show that they want to find the life in store for them rather than continue living the life they have always lived. People, too, should follow their instincts like Rapunzel and Odysseus in their ordinary worlds, and know when something doesn't feel right.

The second stage of the hero's journey archetype is the Journey, which is the adventure itself and the main part of the story. It is also the part where the hero experiences life to the fullest and undergoes many challenges that affect the characters. This stage includes four steps: tests, allies, and enemies; the approach; the ordeal; and lastly, the reward. The ordeal is the most important step because the hero finally realizes that he or she hasn't been living as he or she should have and discovers that what they've been doing isn't right. When Odysseus is at the feast and the minstrel performs the song of Odysseus, he relives flashbacks of the hardships of his past. For example, the book narrates: "So sang the famous minstrel. Odysseus was melted, and tears ran over his cheeks" (Homer 98).
Because the minstrel sang the song about Odysseus' past, Odysseus had to reevaluate how he lived his life, what he did wrong, and how he should approach life differently. The ordeal is an important stage for anyone to experience because, without enduring the ordeal, no one learns from mistakes. Similarly, Rapunzel also experiences her ordeal, though differently. In this case, Eugene "leaves" Rapunzel, which is all part of Mother Gothel's plan, and Rapunzel figures out she is the lost princess. For example, Rapunzel says, "I am the lost princess, aren't I? Did I mumble, Mother? Or should I even call you that? No! You were wrong about the world. And you were wrong about me" (Tangled). In her case, Rapunzel falls in love with Eugene and gives the stolen crown back, but his "monsters," Mother Gothel and the two crooks, send him off to jail, putting him on a boat. Meanwhile, Rapunzel sees him "leaving," and Mother Gothel tries to prove to Rapunzel that she knew Eugene would leave and abandon her. Mother Gothel then brings Rapunzel back to the tower, and Rapunzel finally figures out that she is the lost princess. The way she handles this ordeal can result in a life-or-death crisis; therefore, Rapunzel has to make her decision carefully. This is important to apply in life because it takes only a single mistake by the hero for the antagonist to destroy his or her life.

The final stage, the Return, is when the hero finally reaches his or her destination, experiences more than he or she expected, and becomes a new and improved person. The three steps included in this stage are: the road back, the resurrection, and the return with the elixir.
The most important step of this stage is the return with the elixir, because it is when the hero can finally relax and radiate pleasure and comfort to those around him or her. Odysseus' return with the elixir is when he reveals his homecoming to Penelopeia and everyone around him. For example, when Penelopeia realizes that the stranger is actually Odysseus, she is filled with emotion: "She was conquered; she could hold out no longer when Odysseus told the secret she knew so well. She burst into tears and ran straight to him, throwing her arms about his neck. She kissed his head…" (Homer 257). Joy overwhelms Penelopeia as Odysseus comes back to her with the "elixir" and makes everything in the world seem perfect. The return with the elixir is the moment when the struggle ends not only for Odysseus but also for Penelopeia and Telemachos, whose torment during Odysseus' absence is finally over. Once Odysseus comes back to Ithaca, people can finally put their troubles behind them and rejoice at his coming. This is important because it isn't just a moment to celebrate the hero's homecoming; it is also a time to rejoice over the new life the hero is living. In Disney's Tangled, the hero's return with the elixir is when Rapunzel returns to the kingdom as the princess, everyone's dreams come true, and Rapunzel accepts Eugene's marriage proposal. For example, Eugene describes how everyone's dreams became reality and narrates: "The party lasted a whole week, and honestly, I don't remember most of it" (Tangled). Returning with the elixir, Rapunzel has caused the anxiety within the kingdom to be replaced with peace, joy, and hope. The long-lost princess' homecoming not only brightens the mood of the kingdom but also unlocks a new sense of life within the people.
This new sense of life is "the life that has been waiting for them." Rapunzel's return with the elixir is important because it enabled everyone to pursue their dreams. Overall, the return with the elixir is the moment when the hero starts living the life they glimpsed on their hero's journey. The hero begins living a new life and also brings new life to those around him or her. Odysseus and Rapunzel are portrayed as heroes whose journeys should be learned from and taught about in order to capture the way people should be living. Of the three stages of the Preparation, the Journey, and the Return, each has a significant step that pushes the hero to live the life ahead of him or her. Those steps are: the ordinary world, the ordeal, and the return with the elixir. First the hero feels that something is wrong and starts the journey. Then the hero endures many trials on the journey and returns home a changed person, also changing those around him or her. In the hero's journey archetype, the journey not only changes the hero's character but also brings fresh attributes to the lives of those around him or her. All of these steps lead a person to grow and change their way of living, living less of the life they have tried to live for so long. Still, in the real world, some people never get to experience the hero's journey… Should that get in the way of "getting rid of the life we've planned" and "living the life before us," as Joseph Campbell suggested?
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,479
Michael Parsons (born October 3, 1995 in Rockville) is an American figure skater who competes in ice dance with Caroline Green. He is the 2022 Four Continents champion, the 2017 World Junior champion, winner of the 2016 Junior Grand Prix Final, a medalist at Challenger Series events, the 2017 U.S. national junior champion, and the 2023 U.S. national senior silver medalist.

Achievements

With Caroline Green

With Rachel Parsons

Programs

Caroline Green / Michael Parsons

References

Bibliography

American figure skaters
People born in Rockville, Maryland
Born in 1995
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,688