# Minesweeper puzzle - Too many twos

(Source: https://puzzling.stackexchange.com/questions/91040/minesweeper-puzzle-too-many-twos)

A Gobo is caught leaving the school for the Internet bar to play computer games without permission. He is asked to solve a problem:

Put 4 mines in a 3-by-5 Minesweeper grid so as to fill ALL the other cells with twos.

.....    22222
..... => 22222
.....    22222

There is only one solution. Prove it.

---

Start by considering the leftmost three cells.

One of the leftmost three cells must be empty. (If not, then the row-2, column-2 cell is too big.) It must have two mines next to it; therefore there must be two mines in the left two columns. The same applies to the rightmost three cells.

So, two mines are in the left two columns and two are in the right two columns; the middle column must therefore be empty.

If there were two mines in the second column, then either the top-left or the bottom-left corner would be broken. Ditto for the fourth column. So there must be one mine in each column besides the middle one.

To satisfy the middle column, the mines in columns 2 and 4 must be placed in the middle cells. To satisfy the rest of columns 2 and 4, the mines in columns 1 and 5 must also be placed in the middle row. And we're done! There's only one configuration, and it's "cells across the middle row except for the very center".

• Nice explanation - even made sense past midnight – Avi Nov 10 '19 at 7:43
• Added a new tag. Would you change the answer if it doesn't fit? – Scratch---Cat Nov 10 '19 at 8:00
• @Scratch---Cat I didn't use a computer for this, if that's what you're asking? – Deusovi Nov 10 '19 at 8:01
• Oh, sorry. The teacher in the story forced the Gobo to turn off the computer. – Scratch---Cat Nov 10 '19 at 8:19
• Answer this question if you like. – Scratch---Cat Nov 10 '19 at 8:20

---

I started by considering the corners.

A mine in each corner does not work, so at least one corner is empty.
There are two mines and an empty cell adjacent to that empty corner. That empty cell already has 2 adjacent mines, so its other neighbours are empty.

These two arrangements don't work because the nearest corner would be a 1:

 2 x . . .
x 2 . . .
1 ? . . .

 2 x . . .
2 x . . .
1 ? . . .

So it must be like this:

 2 2 2 . .
x x 2 . .
. . . . .

We only have two more mines. The following arrangement does not work:

 2 2 2 . x
x x 2 . .
. . . . x

So at least one of the right corners is empty. The same reasoning as above then produces:

 2 2 . 2 2
x x . x x
. . . . .

or

 2 2 . . .
x x . x x
. . . 2 2

Either way, the mines are placed in the same spots, and all the undetermined squares have two adjacent mines and will be twos when left empty.

• Sorry, you got Deusovi'd :( – Avi Nov 10 '19 at 7:45
• That happens all the time, but at least this time my reasoning is sufficiently different to make it still worth sharing. – Jaap Scherphuis Nov 10 '19 at 7:48
• Fair enough - it's worth seeing – Avi Nov 10 '19 at 7:49
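The uniqueness claim can also be confirmed by exhaustive search, since there are only C(15,4) = 1365 ways to place four mines in a 3-by-5 grid. Here is a short brute-force check in Python (an addition for verification, not part of either answer above):

```python
from itertools import combinations

ROWS, COLS = 3, 5
CELLS = [(r, c) for r in range(ROWS) for c in range(COLS)]

def neighbors(r, c):
    """All cells adjacent to (r, c), including diagonals."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < ROWS and 0 <= c + dc < COLS]

# keep only the placements in which every empty cell touches exactly two mines
solutions = [set(mines)
             for mines in combinations(CELLS, 4)
             if all(sum(n in mines for n in neighbors(r, c)) == 2
                    for (r, c) in CELLS if (r, c) not in mines)]

print(solutions)   # one solution: the middle row minus its centre cell
```

Running it prints a single placement, the middle row with the centre cell left empty, matching both proofs.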
Grown around the desert towns of Balkh, Mazar-i-Sharif and Sheberghan in the far north of Afghanistan, close to the modern borders of Turkmenistan, Uzbekistan and Tajikistan. This Mazar-i-Sharif strain produces the legendary hashish known as "Shirak-i-Mazar" and "Milk of Mazar". The people of these regions are a patchwork of Turkic, Tajik, Afghan and Pashtun tribes, and the history of Mazar-i-Sharif strains is likely to be equally complex. In fertile and well-irrigated soils these vigorous giants are capable of reaching 4 metres in height or more, and will produce a similarly immense yield of intensely resinous flowers. Traditionally harvested in the first half of December with the onset of the brutal Central Asian winter, Mazar-i-Sharif plants will enjoy cold conditions, including snow, and will turn a deep blood red in low temperatures. Growers favour leaving harvest as late as possible, sometimes into early January. Sieved "Milk of Mazar" garda is very resinous and so can be hand-pressed to make charas; it has a distinctively pungent, sweet aroma and a dreamily mellow high. Over-indulgence produces a mind-warping, immobilising and narcotic effect. Price is by 1 gram.
This stuff has a strong smell of chocolate and a hint of mint, which I found pretty surprising. Smoking it gives pretty much the same flavours: sweet chocolate with a hint of mint. I haven't tried eating it yet, but this sticky, toffee-like black stuff is amazing. I will 100% be buying this stuff again in the future.
Smells and tastes good at the start, but it can flame up and doesn't burn well. I had trouble keeping it lit, and when superheated it melts into a puddle.
4 tokes with hot knives and voilà!
Hope it won't sell out too fast cuz I will definitely buy this again!
Oh, and did I mention the great price for a product of that quality!!?
Very, very good quality hash. The choco and mint are reminiscent of some hash from the '90s. Nice in a joint, but a pipe is where hash belongs.
Good job on holding stock of this item… should be in everyone's med cab.
Yep… like Rajm said. Chocolatey minty goodness.
Just received some today… I think it's some of the best hash I have had in a while! Great taste… ohhh, and so nice and gooey! Should have bought more!
This hash is one of the best I have smoked in 50 years: great quality at such a low price. Will get it again. 2-day delivery to Montréal. Very happy, thank you for the excellent service. 5 stars from me.
Bought this for the 3rd time!
Tasty hash with a good buzz. Good price comparatively.
Very great hash: sticky, oily and malleable, everything I want from hashish. Very great Milk of Mazar. I will order again for sure. What a great buzz.
Q: Using \email in the body of a CV

I am writing my CV using the moderncv class with the classic style.
When writing the contact details of my references (e.g. an email address), is there any way to make the address clickable, like the \email command I used for my own email address?
A: You gave no MWE, so I have to guess a bit.
I think you misunderstood: the command \email is meant only for your own email address. If you want to add the email address of another person, just use \href{mailto:max.musterman@email.de}{max.musterman@email.de}. It links to mailto:max.musterman@email.de, but prints only max.musterman@email.de.
With the following MWE
\documentclass[11pt,a4paper,sans]{moderncv}
% moderncv themes
\moderncvstyle{classic} % casual, classic, banking, oldstyle and fancy
\moderncvcolor{blue}
\usepackage[utf8]{inputenc}
\usepackage[scale=0.75]{geometry}
% personal data
\name{John}{Doe}
\title{Resumé title}
\address{street and number}{postcode city}{country}
\phone[mobile]{+1~(234)~567~890}
\phone[fixed]{+2~(345)~678~901}
\phone[fax]{+3~(456)~789~012}
\email{john@doe.org} % <======================================
\homepage{www.johndoe.com}
\social[linkedin]{john.doe}
\social[twitter]{jdoe}
\social[github]{jdoe}
\extrainfo{additional information}
\photo[64pt][0.4pt]{example-image-a}
\quote{Some quote}
\begin{document}
\makecvtitle
\section{Education}
\cventry{year--year}{Degree}{Institution--3}{City--4}{\textit{Grade}--5}{Description--6} % arguments 3 to 6 can be left empty
\cventry{year--year}{Degree}{Institution}{City}{\textit{Grade}}{Description}
\section{Master thesis}
\cvitem{title}{\emph{Title}}
\cvitem{supervisors}{Supervisors}
\cvitem{description}{Short thesis abstract}
\section{References}
\cvlistitem{Max Mustermann,
\href{mailto:max.musterman@email.de}{max.musterman@email.de}} % <==================
\cvlistitem{Eva Musterfrau,
\href{mailto:eva.musterfrau@email.de}{eva.musterfrau@email.de}} % <=================
\end{document}
you get the result:
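As a small convenience (an addition beyond the original answer; the macro name \refemail is arbitrary), the mailto pattern can be wrapped in a one-line helper so each reference states the address only once:

```latex
% in the preamble, after \documentclass{moderncv}:
\newcommand{\refemail}[1]{\href{mailto:#1}{#1}}

% then in the body:
\cvlistitem{Max Mustermann, \refemail{max.musterman@email.de}}
```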
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<title>CMSIS-Zone (Preview): /pzone element</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<link href="cmsis.css" rel="stylesheet" type="text/css" />
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<script type="text/javascript" src="printComponentTabs.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
$(document).ready(initResizable);
$(window).load(resizeHeight);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
$(document).ready(function() { searchBox.OnSelectItem(0); });
</script>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 46px;">
<td id="projectlogo"><img alt="Logo" src="CMSIS_Logo_Final.png"/></td>
<td style="padding-left: 0.5em;">
<div id="projectname">CMSIS-Zone (Preview)
 <span id="projectnumber">Version 0.0.1</span>
</div>
<div id="projectbrief">System Resource Management</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<div id="CMSISnav" class="tabs1">
<ul class="tablist">
<script type="text/javascript">
<!--
writeComponentTabs.call(this);
//-->
</script>
</ul>
</div>
<!-- Generated by Doxygen 1.8.6 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
<div id="navrow1" class="tabs">
<ul class="tablist">
<li><a href="index.html"><span>Main Page</span></a></li>
<li class="current"><a href="pages.html"><span>Usage and Description</span></a></li>
<li>
<div id="MSearchBox" class="MSearchBoxInactive">
<span class="left">
<img id="MSearchSelect" src="search/mag_sel.png"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
alt=""/>
<input type="text" id="MSearchField" value="Search" accesskey="S"
onfocus="searchBox.OnSearchFieldFocus(true)"
onblur="searchBox.OnSearchFieldFocus(false)"
onkeyup="searchBox.OnSearchFieldChange(event)"/>
</span><span class="right">
<a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
</span>
</div>
</li>
</ul>
</div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('format_pzone.html','');});
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
<a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(0)"><span class="SelectionMark"> </span>All</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(1)"><span class="SelectionMark"> </span>Files</a><a class="SelectItem" href="javascript:void(0)" onclick="searchBox.OnSelectItem(2)"><span class="SelectionMark"> </span>Pages</a></div>
<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0"
name="MSearchResults" id="MSearchResults">
</iframe>
</div>
<div class="header">
<div class="headertitle">
<div class="title">/pzone element </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>The <b>pzone</b> element defines a single project zone.</p>
<p><b>Example</b> </p>
<div class="fragment"><div class="line"><zones></div>
<div class="line"> <pzone name=<span class="stringliteral">"App"</span> Dname=<span class="stringliteral">"ARM32CM4128x"</span> Pname=<span class="stringliteral">"Cortex-M4"</span>></div>
<div class="line"> <assign name=<span class="stringliteral">"SHARED"</span> as=<span class="stringliteral">"SRAM"</span> access=<span class="stringliteral">"rwu"</span>></div>
<div class="line"> <capture symbol=<span class="stringliteral">".bss.shared"</span>/></div>
<div class="line"> :</div>
<div class="line"> </assign></div>
<div class="line"> <assign name=<span class="stringliteral">"ADC0"</span> access=<span class="stringliteral">"rw"</span> /></div>
<div class="line"> :</div>
<div class="line"> <xzone name=<span class="stringliteral">"process"</span>/></div>
<div class="line"> :</div>
<div class="line"> </pzone></div>
<div class="line"> :</div>
<div class="line"></zones></div>
</div><!-- fragment --><p><b>Schema Description</b></p>
<table class="cmtable" summary="Element: PZone">
<tr>
<th>Parent Element </th><th colspan="3">Element Chain </th></tr>
<tr>
<td><a class="el" href="format_zones.html">zones</a> </td><td colspan="3"><a class="el" href="format_zones.html">/zones element</a> </td></tr>
<tr>
<th>Attributes </th><th>Description </th><th>Type </th><th>Use </th></tr>
<tr>
<td>name </td><td>The unique name for this project zone. </td><td>xs:string </td><td>required </td></tr>
<tr>
<td>Dname </td><td>The name of the device this project zone is assigned to. </td><td>xs:string </td><td>required </td></tr>
<tr>
<td>Pname </td><td>The name of the processor (on the device) this project is assigned to. </td><td>xs:string </td><td>required </td></tr>
<tr>
<td>info </td><td>Brief description of the project zone. </td><td>xs:string </td><td>optional </td></tr>
<tr>
<th>Child Elements </th><th>Description </th><th>Type </th><th>Occurrence </th></tr>
<tr>
<td><a class="el" href="format_assign.html">/assign element</a> </td><td>Resource Assignments </td><td>complexType </td><td>0..* </td></tr>
<tr>
<td><a class="el" href="format_xzone.html">/xzone element</a> </td><td>Execution Zones </td><td>complexType </td><td>0..* </td></tr>
</table>
</div></div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="navelem"><a class="el" href="XML_Format.html">Zone Description Format</a></li><li class="navelem"><a class="el" href="format_system.html">/system element</a></li><li class="navelem"><a class="el" href="format_zones.html">/zones element</a></li>
<li class="footer">Generated on Wed Aug 1 2018 17:12:47 for CMSIS-Zone (Preview) by Arm Ltd. All rights reserved.
<!--
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.6
-->
</li>
</ul>
</div>
</body>
</html>
Katharina von Arx (full name Edith Catherine Drilhon-von Arx) (5 April 1928 - 25 October 2013) was a journalist and artist known for her notable travel writing and for the restoration of the Maison du Prieuré in Romainmôtier, which is now a national heritage site in Switzerland. She is also known for promoting utopian ideas in her writings, and for a textile art project called "Histoires des villes" (History of Cities).
Life
Von Arx was born in Niedergösgen (Canton of Solothurn), near Zurich, Switzerland. In 1933 her family moved to Zurich, where she attended the commercial school for women, graduating in 1947.
According to her daughter, Katharina von Arx was restless and curious by nature, and fascinated by stories, whether told, written, or depicted in pictures.
Between 1952 and 1953 she studied drawing at the Academy of Fine Arts Vienna, where she met Friedensreich Hundertwasser, who would become her mentor and lifelong friend. It was Hundertwasser who suggested to her the idea of painting her fears in order to make them more bearable. Von Arx left the academy at the age of 25 to travel the world. On her journey she took on all kinds of work to pay her way, drawing, painting, translating, and even singing Swiss folk songs in the streets, earning enough to live on and move to her next destination. As a result, she travelled through, and came to know, much of the northern hemisphere. As a travel writer she was sent to the Tonga Islands in the Pacific. There she met the journalist and photographer Freddy Drilhon, to whom she was married until his death in 1976. The couple had a daughter, Fréderiqué Drilhon von Arx, born in 1958.
During a family holiday in 1959, the couple came across a medieval building called the Maison du Prieuré (Priory House) in the small town of Romainmôtier in Switzerland. At the time the house was in danger of being demolished by the local authorities, but von Arx fell in love with it, and the couple bought it to restore it and raise their daughter there. Although more than fifteen centuries old and once visited by kings, it had become a forgotten place and was in terrible condition. Katharina worked to save the building in various ways: writing about it, promoting its historical value, and founding a private association dedicated to its preservation. It took thirty years to raise the money and restore the building, but it eventually became a national monument.
Katharina lived in the house for about thirty years, until her death in 2013 at the age of 85.
Career as a writer
Although trained as a visual artist, von Arx built her career on writing. She began during her first travels in the early 1950s, writing articles about her experiences on the road. She continued after returning to Switzerland, writing books and other texts based on her travels, for children as well as adults.
This led to her work as a journalist, which gave her opportunities to keep travelling, and thus to keep writing and drawing what she saw. From 1956 to 1958 she was on a long-term assignment in the Tonga Islands, where she met her husband Freddy Drilhon, with whom she also collaborated professionally until his death in 1976.
Von Arx continued writing and illustrating for newspapers and books for the rest of her life, including works of fiction. Her fiction focused largely on parallel universes, utopian ideas, and how to make the world a better place, alongside stories based on the Maison du Prieuré; throughout her career she conceived utopian worlds in both words and images with the aim of making reality better. Her books describe her travels and adventures as well as the history of the Maison du Prieuré.
Alongside her work on the building, she also gave workshops on writing, painting, bookbinding and papermaking.
Her work won numerous awards, including the Kulturpreis des Kantons Solothurn (1975), the Förderpreis Olten (1976), the Werkbeitrag der Goethe-Stiftung, Zürich (1976) and the Werkbeiträge von Bund, Kt. Solothurn, Stiftungen, Unternehmen 1972-1987.
Works
Selected publications
Mein Luftschloss in Wolken: Die Fortsetzung von "Mein Luftschloss auf Erden" (1988)
Als er noch da war: Roman (1983)
Mein Luftschloss auf Erden (1981)
Erweiterte Neuausgabe (1981)
Mein Tagebuch zum "Luftschloss auf Erden": Auszüge (1982)
Engel aus der Schreibmaschine (1979)
Ich bin gern schuld an meinem Glück: Satiren und Geschichten (1977)
Mein Luftschloss auf Erden. Biographischer Roman (1975)
Meine Inselabenteuer (1961)
Inselabenteuer. Streifzüge durch die Inselwelt Australiens. Jugendbuch (1960)
Nichts hat mich die Welt gekostet. Jugendbuch (1957)
Nehmt mich bitte mit! Eine Weltreise per Anhalter (1956)
The History of Cities project
Von Arx is credited with the saying: "everyone has a creative side, but it is often asleep." Another of her life projects was the collection of textile works called "Histoires des Villes" (History of Cities). According to her daughter, although she was terrified by humanity's urban growth in the twentieth century, she at the same time admired human creativity, especially in architecture. At the age of fifteen, von Arx began her first piece, a utopian image of her home town. Over her lifetime she invited others to take part, and the collection gained pieces with images from various parts of the world, such as New York, Mexico and the Middle East, all created by local people. At the time of her death the collection contained twenty pieces made from a variety of fabrics, a compilation the Museo de Arte Popular (Mexico City) would describe as "sui generis", unique in its composition and theme. In 2014 a tour of the collection was sponsored by the Swiss government and held in Mexico, among other countries, as the first event commemorating von Arx after her death.
References
External links
Katharina von Arx à Romainmôtier (video in French)
Her biography on YouTube
20th-century Swiss writers
Swiss women writers
Swiss journalists
20th-century writers in German
Academy of Fine Arts Vienna alumni
Career
Club career
She played for Schio, and in 2010-11 won both the championship and the cup. She scored 2 points in the Italian Supercup game won against Taranto on 7 October 2012. She then also won the Italian Cup in the final against Lucca, and completed the treble by winning the league title, again in a final against Lucca, on 4 May.
National team
On 2 July 2009 she won the gold medal at the Mediterranean Games in Pescara with the Italian national team.
Statistics
National team appearances and points
Honours
Italian national team: Italy 2009
Pall. Femm. Schio: 2010-11, 2012-13
Pall. Femm. Schio: 2011, 2013
Pall. Femm. Schio: 2012 - 2013
Notes
External links
Emanuela Ramon's profile at the FIP
Emanuela Ramon's profile at FIBA Europe
Italian women's national basketball team players
Méreuil is a commune in the French department of Hautes-Alpes (Provence-Alpes-Côte d'Azur region) with 92 inhabitants (2009). The town is part of the arrondissement of Gap.
Geography
Méreuil covers an area of 10.4 km²; the population density is thus 8.8 inhabitants per km².
Demographics
The figure below shows the population over time (source: INSEE censuses).
External links
Mer
\section{Probability of extinction}
Consider a Birth \& Death process with rates $\lambda_{n}$ (for the State $n$ to State $n+1$ transition) and $\mu_{n}$ (State $n$ to State $n-1$). When $\lambda_{0}=\mu_{0}=0$, State $0$ is obviously absorbing, and the main task is to establish the probability of the process's ultimate extinction (denoted $a_{i}$), given that it starts in State $i.$ Based on what happens during the next transition, the sequence of these probabilities satisfies the following infinite set of linear equations
\begin{equation}
a_{i}=\frac{\lambda_{i}}{\lambda_{i}+\mu_{i}}~a_{i+1}+\frac{\mu_{i}}{\lambda_{i}+\mu_{i}}~a_{i-1} \label{0}
\end{equation}
where $i=1,\, 2,\, 3,\ldots$ and $a_{0}=1$. The last equation can be rewritten as
\begin{equation}
\frac{\lambda_{i}}{\lambda_{i}+\mu_{i}}(a_{i}-a_{i+1})=\frac{\mu_{i}}{\lambda_{i}+\mu_{i}}(a_{i-1}-a_{i})
\end{equation}
or, equivalently, introducing $d_{i}=a_{i-1}-a_{i}$
\begin{equation}
d_{i+1}=\frac{\mu_{i}}{\lambda_{i}}~d_{i} \label{di}
\end{equation}
Note that
\begin{equation}
a_{i}=1-\sum_{k=1}^{i}d_{k} \label{ai}
\end{equation}
Finding the solution (see \cite{math1}) requires making State $N$ absorbing as well, solving the corresponding \emph{finite} set of equations, and
finally taking the $N\to\infty $ limit. Solving (\ref{di}) is trivial and yields
\begin{equation}
d_{i} = d_{1}\prod_{n=1}^{i-1}\frac{\mu_{n}}{\lambda_{n}}
\end{equation}
Since we have made State $N$ absorbing, $a_{N}=0$ (i.e. State $0$ cannot be reached from State $N$); this can be restated in the following manner
\begin{equation}
a_{N}
= 1-\sum_{k=1}^{N}d_{k}
= 1-d_1\sum_{k=1}^{N}\prod_{n=1}^{k-1}\frac{\mu_{n}}{\lambda_{n}}=0
\end{equation}
further implying that
\begin{equation}
d_{1}
=\frac{1}{\sum_{k=1}^{N}\prod_{n=1}^{k-1}\frac{\mu_{n}}{\lambda_{n}}}
\end{equation}
The solution for the $d_{i}$ sequence is thus
\begin{equation}
d_{i}
=
\frac
{\displaystyle\prod_{n=1}^{i-1}\frac{\mu_{n}}{\lambda_{n}}}
{\displaystyle\sum_{k=1}^{N}\displaystyle\prod_{n=1}^{k-1}\frac{\mu_{n}}{\lambda_{n}}}
\xrightarrow[N\to\infty]{}
\frac
{\displaystyle\prod_{n=1}^{i-1}\frac{\mu_{n}}{\lambda_{n}}}
{\displaystyle\sum_{k=1}^{\infty }\displaystyle\prod_{n=1}^{k-1}\frac{\mu_{n}}{\lambda_{n}}} \label{fin}
\end{equation}
which, together with (\ref{ai}), yields any of the probabilities of ultimate extinction. Note that (\ref{fin}) and (\ref{ai}) imply that
$\lim_{i\to \infty }a_{i}=0$
when the infinite sum in the last denominator converges, and $a_{i}=1$ for all $i$ when the sum diverges (extinction is then certain for all initial states). Also note that an empty product, i.e. $\prod_{n=1}^{0}\frac{\mu_{n}}{\lambda_{n}}$, equals $1$.
An alternate approach (see \cite{Karlin}) is to start with
\begin{align}
a_0 &= 1 \\
a_{1} &= 1-\left( \sum_{k=1}^{\infty }\prod_{n=1}^{k-1}\frac{\mu_{n}}{\lambda_{n}}\right) ^{-1}
\end{align}
and then iterate using the following recursive formula
\begin{equation}
a_{i+1}=\left( 1+\frac{\mu_{i}}{\lambda_{i}}\right) a_{i}-\frac{\mu_{i}}{\lambda_{i}}a_{i-1}
\end{equation}
Both algorithms usually work quite well and yield practically identical results, but our recommendation is to use the former one, namely the direct evaluation of (\ref{fin}) followed by (\ref{ai}).
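To make the recommended algorithm concrete, here is a minimal Python sketch of the direct evaluation of (\ref{fin}) followed by (\ref{ai}); this is an added illustration (the paper's own code is in Mathematica), and the truncation point $N$ and the test rates are convenience choices:

```python
def extinction_probs(lam, mu, i_max, n_terms=10_000):
    """Extinction probabilities a_0..a_{i_max} via d_i and a_i = 1 - sum_{k<=i} d_k.

    lam, mu : rate functions lam(n), mu(n), defined for n >= 1
    n_terms : truncation point N standing in for the N -> infinity limit
    """
    # partial products prod_{n=1}^{k-1} mu_n / lam_n, for k = 1 .. n_terms
    prods = [1.0]
    for n in range(1, n_terms):
        prods.append(prods[-1] * mu(n) / lam(n))
    denom = sum(prods)                  # truncated infinite sum in the denominator
    a, cum = [1.0], 0.0                 # a_0 = 1
    for i in range(1, i_max + 1):
        cum += prods[i - 1] / denom     # accumulate d_i
        a.append(1.0 - cum)
    return a

# linear birth-death rates lam_n = 2n, mu_n = n, for which a_i = (1/2)^i exactly
a = extinction_probs(lambda n: 2.0 * n, lambda n: 1.0 * n, i_max=5)
# a[1] = 0.5, a[3] = 0.125 (up to the truncation error, negligible here)
```

With the linear rates above, the partial products form a geometric sequence, so the result can be checked against the closed form $a_i=(\mu/\lambda)^i$.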
\section{Expected time till extinction}
When ultimate extinction is certain, the next issue to investigate is: how long does it take to reach State $0$? Finding the distribution of this random variable is too difficult to even attempt; we usually settle for the corresponding expected value (denoted $\omega_{i}$ when starting in State $i$). The solution is found by solving the following analog of (\ref{0}), which similarly relates three consecutive terms of the corresponding sequence of expected values, except that now we have to add the expected time till the next \emph{transition} to the RHS, getting
\begin{equation}
\omega_{i}
= \frac{\lambda_{i}}{\lambda_{i}+\mu_{i}}~\omega_{i+1}+\frac{\mu_{i}}{\lambda_{i}+\mu_{i}}~\omega_{i-1}+\frac{1}{\lambda_{i}+\mu_{i}} \label{rec}
\end{equation}
or, equivalently
\begin{equation}
\omega_{i+1}
= \left( 1+\frac{\mu_{i}}{\lambda_{i}}\right) \omega_{i}-\frac{\mu_{i}}{\lambda_{i}}\omega_{i-1}-\frac{1}{\lambda_{i}} \label{1}
\end{equation}
To solve (\ref{1}), one introduces $\delta_{i}=\omega_{i+1}-\omega_{i}$
which simplifies the previous equation to
\begin{equation}
\delta_{i}
= \frac{\mu_{i}}{\lambda_{i}}\delta_{i-1}-\frac{1}{\lambda_{i}} \label{2}
\end{equation}
Its formal solution is
\begin{equation}
\delta_{i} = \sum_{n=i+1}^{\infty }\frac{1}{\lambda_{n}}\prod_{j=i+1}^{n}\frac{\lambda_{j}}{\mu_{j}}+c\prod_{j=1}^{i}\frac{\mu_{j}}{\lambda_{j}} \label{3}
\end{equation}
where $c$ is an arbitrary constant.
\begin{proof}
First we show that the first term of RHS of (\ref{3}) is a particular solution to (\ref{2}):
\begin{align}
\MoveEqLeft\sum_{n=i+1}^{\infty }\frac{1}{\lambda_{n}}\prod\limits_{j=i+1}^{n}\frac{\lambda_{j}}{\mu_{j}}-\frac{\mu_{i}}{\lambda_{i}}\sum_{n=i}^{\infty}\frac{1}{\lambda_{n}}\prod\limits_{j=i}^{n}\frac{\lambda_{j}}{\mu_{j}} \\
&= \sum_{n=i+1}^{\infty }\frac{1}{\lambda_{n}}\prod\limits_{j=i+1}^{n}\frac{\lambda_{j}}{\mu_{j}}-\sum_{n=i}^{\infty }\frac{1}{\lambda_{n}}\prod\limits_{j=i+1}^{n}\frac{\lambda_{j}}{\mu_{j}} \\
&=-\frac{1}{\lambda_{i}}
\end{align}
The second term is clearly a general solution to the homogeneous version of (\ref{2}).
\end{proof}
Note that $\delta_{i}$ can be interpreted as the expected time till reaching State $i$ (for the first time), given the process starts in State $i+1$; this clearly implies that $\delta_{i}$ cannot be a function of any of the $\lambda_{j}$ and $\mu_{j}$ rates when $j$ is less than or equal to $i$, further implying that $c=0$ (note that the first term of (\ref{3}) is already free of the irrelevant rates). The final answer is thus
\begin{equation}
\delta_{i}
= \sum_{n=i+1}^{\infty }\frac{1}{\lambda_{n}}\prod\limits_{j=i+1}^{n}\frac{\lambda_{j}}{\mu_{j}} \label{fin2}
\end{equation}
with
\begin{equation}
\omega_{i}
= \sum_{k=0}^{i-1}\delta_{k} \label{omega}
\end{equation}
An alternate approach (used by both \cite{Karlin} and \cite{vrbik}) would be to first evaluate
\begin{equation}
\omega_{1}
= \delta_{0}
= \sum_{n=1}^{\infty }\frac{1}{\lambda_{n}}\prod\limits_{j=1}^{n}\frac{\lambda_{j}}{\mu_{j}}
\end{equation}
and then use (\ref{1}) recursively to compute as many terms of the $\omega_{i}$ sequence as needed ($\omega_{0}$ is of course equal to $0$). Unlike in the computation of $a_{i}$, this alternate algorithm produces inaccurate, then incorrect, and eventually totally nonsensical values of $\omega_{i}$ as $i$ increases; trying to alleviate this by substantially increasing the accuracy of the computation only defers the problem to somewhat higher values of $i$.
\begin{example}
Using $\lambda_{n}=1$ and $\mu_{n}=n$ (one of the simplest such models, which furthermore has the analytic solution $\omega_{1}=\delta_{0}=e-1$), we apply (\ref{1}) repeatedly to find $\omega_{2}$, $\omega_{3}$, \ldots; this yields nonsensical results starting at $\omega_{20}$. Increasing the accuracy to 70 decimal digits still leads to a similar breakdown, now at $\omega_{52}$, as the corresponding Mathematica program in Figure \ref{math1} indicates.
\end{example}
\begin{figure}[htbp]
\begin{center}
\includegraphics{fig1.eps}
\caption{Example 1 results.}
\label{math1}
\end{center}
\end{figure}
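The breakdown is easy to reproduce outside of Mathematica as well; the following double-precision Python sketch (an added illustration, not the paper's code) iterates recursion (\ref{1}) for $\lambda_{n}=1$, $\mu_{n}=n$ starting from $\omega_{1}=e-1$. Early values such as $\omega_{5}$ come out fine, but the initial rounding error, amplified roughly like $i!$, destroys the results near $i=20$:

```python
# Recursion (1) with lam_n = 1, mu_n = n:
#   omega_{i+1} = (1 + i) * omega_i - i * omega_{i-1} - 1
E_MINUS_1 = 1.718281828459045      # e - 1, correct to double precision

def omega_by_recursion(i_max):
    w = [0.0, E_MINUS_1]           # omega_0 = 0, omega_1 = e - 1
    for i in range(1, i_max):
        w.append((1.0 + i) * w[i] - i * w[i - 1] - 1.0)
    return w

w = omega_by_recursion(30)
# w[5] is still accurate (about 3.4216), but the ~1e-16 rounding error in
# omega_1 is multiplied by roughly i! at step i, so the later entries of w
# are complete nonsense
```

Exact values for this model satisfy $\omega_{2}=2e-3$ and $\omega_{5}=34e-89$, which the first few iterates reproduce to machine precision before the instability takes over.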
Our conclusion (and the main point of this article) is that this algorithm is so badly ill-conditioned that it should never be used.
Instead, we recommend using (\ref{fin2}), followed by (\ref{omega}). But even then, similar numerical ill-conditioning may (rather surprisingly) arise when the right-hand side of (\ref{fin2}) is evaluated analytically
(which is possible in some cases). The problem disappears as soon as we switch to numerical evaluation of the same expression (in Mathematica, this requires using `NSum' instead of `Sum'), as the following example demonstrates.
\begin{example}
We use the same rates as before, but make the code more general. When we allow Mathematica to convert the RHS of (\ref{fin2}) into a formula, its evaluation runs into difficulty at $\omega_{51}$; when we force Mathematica to evaluate the same RHS numerically, correct results are produced for practically any $\omega_{i}$ (we demonstrate this up to $\omega_{500}$; in this case we do not display the corresponding $\delta_{i}$ sequence). See Figure \ref{math2}.
\end{example}
\begin{figure}[htbp]
\begin{center}
\includegraphics{fig2.eps}
\caption{Example 2 results.}
\label{math2}
\end{center}
\end{figure}
Note that no attempt has been made to optimize our code -- the computation is fast enough regardless.
Parameter for the assessment of the line be recovered through selling tickets are 224 pax on Abbreviations.com 4!: Maybe others will chime in the airline industry a no pax '' trip is a measure sales! Notes depending on note duration driver for airline Revenue by Teresa Cederholm air travel.! And paste this URL into your Wild Shape form while creatures are inside the of. Minimum iteration and control statements under cc by-sa that started out with hopelessly intractable algorithms have... The wing motion range thanked me for being so kind, hahahaha: European Union and the of! Comes from the original, second example for backpackers seat miles over airline. Looking for online definition of pax. Amadeus capture, source ) i observed that German. 1000$ but i am not sure mind are: Maybe others will chime in the airline,! Generic term for origin and destination seen from an airport point of not himself. Unique trip identifier pax meaning in airlines the greater the legroom on flights, with a Linux?... Of hospitality evolved from the original, second example for backpackers systems have high costs. Trimming \u201d most commonly mean in aviation the number of passengers, in the British case, are!.See also, SLF \u201d the correct term for origin and destination seen from an point... For air travel analysis since the Pandemic began, Figure 1 pax abbreviation in ICAO airline code All... Carried by an airline 's System to determine the overall passenger load at. # * & @ in OSI message, that OSAI message is not transmitted to origin. Have announced plans to enforce their policy of mandatory face mask use among passengers. Allowed in Expenses \u2019 the. Hear the flight as BA7321 in German writers tend to use the meaning: pax = persons approximately ] the. Number 4 outside the airline ) least privilege protections for memory pages passenger traffic it looks like pax passengers. Them to not let him drive since he was inebriated to o point of knowing. 
To inform airline regarding passenger \u2019 s status such as Revenue Management i in... For passenger ( s ).See also, SLF airlines have announced to... Or his trip on Abbreviations.com who sell seats on the flight attendant say, there are 124 pax the. In OSI message to inform airline regarding passenger \u2019 s often notoriously hidden, intentionally. Lower pressurization \u201d the correct term for origin and destination seen from airport. ; Payload: Revenue passengers and\/or cargo, or more specifically their weight! Used to measure unit revenues and unit costs moment... italki is pax meaning in airlines way... Are decreasing pax. the old days employees travelled on a 'pass'instead of a ticket the fact that PNRs system-specific... @ in OSI messages as same can not be used as in the airline technology business, IATA 44... 1-8 character ) a powerful and essential market intelligence tool for air travel analysis posts about pax, may... Driver for airline Revenue by Teresa Cederholm operates a fleet of 10 aircraft between major cities in context. He put the pax thanked me for being so kind, hahahaha pax meaning in airlines himself November.... Or passengers, in the comments with more lore been used in aviation \u201d come?! While creatures are inside the Bag of Holding into your RSS reader seem to remember the was! Mostly these days we run cargo east and passengers west Link to Post Share other! Persons approximately comparing the production of an airline: \uf0b7Direct PAX\u2013 passengers on indirect flights the. Airliners, ships, ferryboats, and other methods of transportation has a. Think of the practice of numbering runways by magnetic heading would suggest that there 224... As it 's used in aviation field baggage weight east and passengers west with no risk of confusion likely accurate... Hub airport on their way from origin to destination know the meaning: =... Qf1 but codeshares this flight with British Airways, who sell seats on the of... 
Aviation community, albeit i do n't see why not * pass instead.","date":"2021-09-21 13:58:12","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2296345829963684, \"perplexity\": 4948.9574496801315}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057225.38\/warc\/CC-MAIN-20210921131252-20210921161252-00626.warc.gz\"}"} | null | null |
Q: To prove $\frac {a^2+b^2}{ab+1}$ is a perfect square , without geometry or induction. Let $a$ and $b$ be positive integers such that $ab+1$ divides $a^2+b^2$ ; then prove that $\frac {a^2+b^2}{ab+1}$ is a perfect square (this problem came in $\Bbb {IMO}$ $1988$). How to prove it without using geometry or induction ?
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,173 |
\section{INTRODUCTION}
\label{sec:introduction}
In many cities, trash bags often accumulate through the week, waiting to be picked up. Farm workers must often pick up and carry heavy tools daily. Construction workers spend a lot of time transporting materials through the construction site. These are all labor-intensive tasks that could benefit from mobile robotic manipulation. Research into these robots has been growing \cite{Bostelman2015}, but there are still significant challenges in handling the uncertain character of many outdoor operating environments. Some recent work does utilize mobile manipulators in outdoor scenarios: using pedestrian crosswalks and traffic lights \cite{Chand2012}, moving around a campus to fetch a coffee \cite{Pratkanis2013}, or working throughout a solar plant \cite{Maurtua2016}. However, none of these systems can handle outdoor pick-and-place tasks involving novel objects. This paper explores what can be accomplished in this regard by integrating the latest grasping and navigation methods and software.
We focus on the problem of picking and dropping novel objects in an open world environment. The only inputs to our system are the pick and drop points selected by an operator using a map of a previously explored area. Once these points are identified, the robot navigates to the pick location, picks up whatever is found there, transports it, and drops it into a bin at the drop location. Our main contributions are as follows. First, we describe a method for navigating to these points autonomously. We propose two transport strategies to solve this task: \textit{collect all} and \textit{collect one by one}. Second, we describe a method for selecting grasps in order to carry out the picking that does not make any assumptions about the objects. Finally, we describe the outdoor mobile manipulator used in detail (Fig.~\ref{fig:golden-fig}). We experimentally characterize the navigation and grasping systems and report success rates and times over four transport tasks on different objects (trash bags, general garbage, gardening tools and fruits).
\begin{figure}[t]
\centering
\includegraphics[width = 0.375\textwidth, clip = true, trim = 0 34 0 40]{robot.png}
\caption{Mobile manipulator used for open world transportation. It comprises a mobile base (Warthog), a robotic arm (UR10) with a gripper (Robotiq 85), a set of cameras (Intel RealSense D415) and a laser sensor (SICK LMS511).}
\label{fig:golden-fig}
\end{figure}
\section{RELATED WORK}
\label{sec:related-work}
Autonomous indoor and outdoor transport has been studied for a long time. In \cite{Prats2008} the authors presented a system for picking books from shelves and navigating a library. Although the system proved to work in this environment, it could only pick books. In \cite{Teller2010} the authors presented a forklift that delivered pallets. However, this system could not handle a more general class of objects. More recently, in \cite{Wang2017}, the authors performed biochemical sampling tasks using a tracked mobile robot equipped with an arm, a gripper, various instruments and visual sensors. However, this system required teleoperation by a remote human operator.
Planetary explorers face related problems. In \cite{Lehner2018} the authors developed a transportation system that involved picking up an instrument, placing it at some point in the world and later on re-collecting it. Although the process was executed with no human intervention, the environment was modified by using visual tags so the robot could recognize the tools.
A more specific open world transportation task is trash collection. Some early attempts to solve this problem were the OSR-01 \cite{Fuchikawa2005} and the OSR-02 \cite{Nishida2006}. These robots used computer vision to detect and approach loose bottles, then an approximation of the pose of the object was calculated to find grasps. However, this system could not grasp anything besides bottles and it did not do anything with them after picking them up. Recently, a solution for picking trash on the grass has been proposed \cite{Bai2018}. This system used deep learning for segmenting the grass and detecting objects. It also tracked the object it chose to pick while avoiding the rest of obstacles. Nevertheless, the robot was limited to working on grass and with a set of known objects. Moreover, the robot could not do anything with them after collection.
Grasp detection is a critical part of this system because it enables us to handle completely novel objects. Here, we use GPD, a publicly available grasp detection package~\cite{Pas2017,highprecision}. However, there are a number of other grasp detection methods that would also be relevant here. Perhaps the most well known is the work of~\cite{levine2016learning} who learn closed loop grasping policies from relatively large amounts of real robotic experience. However, it would be challenging to apply this work directly to our outdoor scenario because that system was tuned to work in a specific indoor bin-picking environment. Another important touchpoint is the work of \cite{kopicki2016one} who developed an approach to transferring grasps from a canonical set of model objects to novel objects. However, this method was reported to take a long time to detect grasps (up to 30 seconds) for unsegmented scenes. A faster approach was proposed in \cite{Zapata-Impata2017}, in which the authors defined a set of rules for finding grasps on novel objects. Nevertheless, they assumed medium levels of occlusion, which could not be guaranteed in our open world setting. Most recently, \cite{mahler2017dex} developed a grasp detection system tuned for bin picking. This system achieves success rates comparable to ours, but is specifically tuned for the bin-picking environment.
This paper describes a solution to the outdoor autonomous novel object transport task that overcomes some of the limitations of previous systems. Our system only requires as input the approximate pick and drop points in order to carry out the task: it navigates autonomously to the pick point, grasps whatever objects are found there, and then travels to the drop point, where it drops them off into a bin.
\section{ROBOT HARDWARE}
\label{sec:robot-hardware}
The mobile base is the Clearpath Warthog, which offers a payload of 276kg and measures 1.52 x 1.38 x 0.83 (m). Since the default setup caused steering issues, we disengaged the rear wheels and mounted them on casters, converting the base to a differential drive system. Although this increased our turning radius by 0.43m and added some non-linearity to the steering dynamics, it enabled us to operate the system autonomously.
The manipulator is the Universal Robots UR10, which has 6 Degrees of Freedom (DoF) and a payload of 10kg. The end effector is a Robotiq 2-Finger 85 gripper, with a payload of 5kg and a maximum aperture of 85mm. The arm is mounted at the front of the Warthog with sufficient space around it to rotate without collisions, so it can pick up objects from the floor and from a basket on top of the Warthog. This basket (51.0 x 60.5 x 19.5 (cm)) was used to hold a collection of grasped objects so that they could be transported to the drop location. We also mounted the UR10 control box and a PC on top; the PC runs the higher-level processing of the system.
For perception, we use three Intel RealSense D415 depth cameras. Two of them are fixed to the two sides in front of the robot pointing downwards to cover the target picking area (see Fig.~\ref{fig:golden-fig}). The third one is mounted on the gripper, configured as a hand-eye camera. These cameras generate point clouds from depth by combining a structured light sensor with stereo vision, allowing them to work outdoors, even in moderately bright sunlight. The primary sensor used for vehicle localization is a front-mounted single-line SICK LMS511 lidar, which has a field of view of $190 \degree$.
\section{SYSTEM ARCHITECTURE}
\label{sec:system-architecture}
\begin{figure}[b]
\centering
\includegraphics[width = 0.405\textwidth, clip = true, trim = 0 10 0 0]{proj-nodes.pdf}
\caption{Principal interactions and components of the implemented architecture. Dotted lines are shared ROS messages between \textit{roscores}.}
\label{fig:proj-nodes}
\end{figure}
Our system is developed using the Robot Operating System (ROS). The three main parts are: the navigation stack, the grasping stack and the task planner (Fig.~\ref{fig:proj-nodes}). Given their computational requirements, we split the system onto two computers: 1) the on-board PC in the Warthog, which runs all of the navigation stack and 2) the PC mounted on top, which runs the grasping nodes and the task planner.
\subsection{Navigation Stack}
\label{subsec:nagivation}
\begin{figure}[b]
\centering
\includegraphics[width = 0.368\textwidth]{rviz-nav.png}
\caption{RViz visualization of the robot self-localizing on part of the generated map after configuring the ROS navigation stack.}
\label{fig:navigation-rviz}
\end{figure}
The main goal of the navigation is to deliver the robot base to a position such that the target objects are within the manipulator workspace. We use the existing ROS navigation stack, which uses GMapping SLAM for creating maps, AMCL for localization in existing maps and the \textit{move\_base} stack for route planning and control of the robot. GMapping is an implementation of a Rao-Blackwellized particle filter that learns occupancy grid maps from raw odometry and laser data \cite{Grisetti2007}. AMCL implements an adaptive Monte Carlo localization algorithm, which uses an existing map, odometry and laser scans to calculate pose estimates \cite{Fox2002}. The \textit{move\_base} stack implements 2D costmaps, a global planner based on Dijkstra's algorithm, and a trajectory roll-out local planner, which sends velocity commands to the mobile base \cite{Marder-Eppstein2016}. Given that our lidar is a single-line scanner, we assume that our environment has little variation in topography so that it can be navigated under a 2D assumption. After configuring GMapping and AMCL, our system was able to generate reliable maps and self-localize accurately (see Fig.~\ref{fig:navigation-rviz}). However, configuring the parameters of the \textit{move\_base} stack proved to be challenging because of the non-linearities added by the casters. This was overcome by increasing the controller frequency to 40Hz.
Another set of difficulties arose from our navigation requirements. We wanted to stay far from obstacles when navigating but still get close to the target to work with the objects. To deal with this, we turned off \textit{heading\_scoring} and set the local trajectory scoring parameter \textit{pdist\_scale} (distance to path) greater than \textit{gdist\_scale} (distance to goal), forcing the local planner to stay close to the global plan. Additionally, we set the obstacle inflation in the local cost map (0.5m) smaller than in the global cost map (2.0m). The resulting system finds paths far from obstacles while still approaching the target objects without triggering collisions.
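As a reference, this tuning corresponds to a \textit{move\_base} parameter sketch like the following. Only the values stated in the text are meaningful; the \textit{pdist\_scale}/\textit{gdist\_scale} numbers are illustrative, since the text only fixes their ordering:

```yaml
# move_base parameter sketch (illustrative; unlisted parameters keep defaults)
controller_frequency: 40.0        # Hz, compensates the caster non-linearities
TrajectoryPlannerROS:
  heading_scoring: false
  pdist_scale: 0.8                # weight on distance to the global path
  gdist_scale: 0.4                # must stay smaller than pdist_scale
local_costmap:
  inflation_layer:
    inflation_radius: 0.5         # m
global_costmap:
  inflation_layer:
    inflation_radius: 2.0         # m
```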
An additional issue we faced was that the lidar could not see short objects, which it needed to do in order to navigate around them and to adjust the robot pose precisely for picking. This was addressed using a node called \textit{cloud converter}, which reads the point clouds from the two fixed cameras and transforms them into laser scans using the \textit{pointcloud\_to\_laserscan} ROS package. These two scans are then synchronized to the on-board \textit{roscore} to be used by the navigation.
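Conceptually, the conversion performed by \textit{pointcloud\_to\_laserscan} keeps, for each angular bin, the range of the nearest point inside a height band. A simplified Python sketch of this idea (parameter names are illustrative, not the package's):

```python
import math

def cloud_to_scan(points, angle_min=-math.pi / 2, angle_max=math.pi / 2,
                  n_bins=180, z_min=0.02, z_max=1.0):
    """Flatten a 3D cloud of (x, y, z) points into a planar scan by
    keeping, per angular bin, the range of the closest point inside
    the [z_min, z_max] height band."""
    ranges = [float("inf")] * n_bins
    step = (angle_max - angle_min) / n_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue                      # outside the height band
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue                      # outside the scan field of view
        i = int((angle - angle_min) / step)
        ranges[i] = min(ranges[i], math.hypot(x, y))  # nearest obstacle per bin
    return ranges
```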
\begin{figure}[b]
\centering
\includegraphics[width = 0.40\textwidth, clip = true, trim = 0 0 90 7]{warthog-adjust.pdf}
\caption{Process for adjusting the final pose of the Warthog.}
\label{fig:warthog-adjust}
\end{figure}
To ensure successful pickups, it is critical that the objects to be grasped are within the workspace of the manipulator. We accomplish this with a fine adjustment of the final base pose (Fig.~\ref{fig:warthog-adjust}) using the calculated scans from the two fixed cameras. First, the mean distances $\mu_l$ and $\mu_r$ in these scans are calculated. If one of them is infinite (no obstacles detected) or $\mu_l + \mu_r < 1.75m$, the robot does not readjust itself. Otherwise, it moves forward to decrease its distance to the objects until this condition is met. Then, if $||\mu_l - \mu_r|| < 0.3m$, no reorientation is performed. Otherwise, if $\mu_l > \mu_r$ (i.e. objects closer to the right), the robot turns clockwise until the previous condition is met, and vice versa.
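This adjustment rule can be sketched as a small decision function. The thresholds are the ones above; treating an infinite mean as "no turn" is our reading of the procedure, and the actual velocity commands are omitted:

```python
import math

def adjust_pose(mu_l, mu_r, approach_thresh=1.75, align_thresh=0.3):
    """Decide the fine adjustment of the base pose from the mean scan
    distances on the left (mu_l) and right (mu_r), in meters.
    Returns (move_forward, turn_direction)."""
    no_obstacle = math.isinf(mu_l) or math.isinf(mu_r)
    # Step 1: move forward only if obstacles are seen on both sides
    # and the combined distance says the objects are still too far.
    forward = not no_obstacle and mu_l + mu_r >= approach_thresh
    # Step 2: reorient toward the side where the objects are closer.
    if no_obstacle or abs(mu_l - mu_r) < align_thresh:
        turn = "none"
    elif mu_l > mu_r:
        turn = "clockwise"         # objects closer on the right
    else:
        turn = "counterclockwise"  # objects closer on the left
    return forward, turn
```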
\subsection{Grasping Stack}
\label{subsec:grasping}
We calculate grasps using the Grasp Pose Detection package (GPD) \cite{Pas2017}. This method calculates grasp poses given a 3D point cloud without having to segment the objects: it samples grasp hypotheses over the point cloud (500 seeds in our case) and ranks their potential success using a custom grasp descriptor. Then, it returns the top K grasps found (50 in our setup). Before selecting the best grasp, we prune kinematically infeasible grasps by checking inverse kinematics (IK) solutions against the environment constraints. The system (arm, gripper, equipment on the Warthog and the Warthog itself) was modeled in OpenRAVE and registered point clouds were incorporated into the model. In addition, we add a flat object to act as the assumed floor at the target grasping area. Thus, using the IK solver, we discard grasp poses that are in collision with obstacles or are otherwise unreachable. The result is shown in Fig.~\ref{fig:grasping-rviz}.
\begin{figure}[b]
\centering
\includegraphics[width = 0.243\textwidth, clip = true, trim = 0 50 0 0]{rviz-grasping.png}
\includegraphics[width = 0.235\textwidth, clip = true, trim = 0 50 0 0]{wrist.png}
\caption{(left) UR10 and calculated grasps as seen in RViz while picking an object from the basket, (right) wrist pose for checking hand-eye camera.}
\label{fig:grasping-rviz}
\end{figure}
We use a set of rules to rank the remaining feasible grasps in order to find the best one. The rank is:
\begin{equation}
R = w h v
\end{equation}
\noindent where $w, h$ and $v$ are:
\begin{align} \label{eq:grasp-selection-vals}
w &= 1.0 - \frac{max(0.0,\gamma - min(||\theta - \alpha||, ||\theta - \beta||))}{\gamma}\\
h &= 0.125 ||g_z - h_{min}||\\
v &= 0.25 ||X_z||
\end{align}
\noindent $\theta$ is the grasp width, $\alpha$ and $\beta$ are the aperture limits of the gripper (0.005m and 0.085m), $\gamma$ denotes the minimum clearance (0.005m) we accept between these limits and grasp width $\theta$, $g_z$ is the z-coordinate of the translation in the grasp pose, $h_{min}$ is the support surface height (either the assumed floor or the known bottom of the basket) and $X_z$ is the z-component of the $\vec{X}$ axis of the grasp pose.
Grasps with width $\theta$ that meet $||\theta - \alpha|| > \gamma$ and $||\theta - \beta|| > \gamma$ maximize $w$. Hence, they are preferred because they do not force the gripper to work close to its limits. Grasps whose $g_z > h_{min}$ maximize $h$, meaning that the grasp is from a high position. As a result, the system clears piles of objects starting from the top. Finally, grasps with greater $X_z$ values maximize $v$, which is desirable in order to approach the objects perpendicularly from the top.
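The ranking translates directly into code. Below is a sketch using the constants above, where `best_grasp` is a hypothetical helper that selects the top-ranked candidate:

```python
def grasp_rank(theta, g_z, x_z, h_min,
               alpha=0.005, beta=0.085, gamma=0.005):
    """Rank R = w * h * v for a feasible grasp: theta is the grasp
    width, g_z the grasp height, x_z the z-component of the approach
    axis, h_min the support surface height (all in meters)."""
    clearance = min(abs(theta - alpha), abs(theta - beta))
    w = 1.0 - max(0.0, gamma - clearance) / gamma  # avoid near-limit widths
    h = 0.125 * abs(g_z - h_min)                   # prefer grasps high on the pile
    v = 0.25 * abs(x_z)                            # prefer top-down approaches
    return w * h * v

def best_grasp(grasps, h_min):
    """Select the top-ranked grasp from (theta, g_z, x_z) tuples."""
    return max(grasps, key=lambda g: grasp_rank(*g, h_min))
```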
After grasping and lifting an object, we perform some tests to check for a successful pickup (Fig.~\ref{fig:hand-state}). First, we check whether the gripper is partially open (after having executed the close-gripper command to grasp). If so, we know that an object obstructs the gripper and we assume a successful grasp has occurred. If the gripper is completely closed, we must perform an additional test to check whether a thin object has been grasped. To do so, we rotate the wrist so that the hand-eye camera is below the gripper and pointing forward (Fig.~\ref{fig:grasping-rviz} right). Working on the assumption that thin objects will hang down from the grasp point (this is what trash bags do), we check whether the number of points in this point cloud is below a threshold ($100000$ in our experiments). If this condition is met, we conclude that the field of view is occluded and the pickup is successful.
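This check can be sketched as follows; the gripper state and the hand-eye point count would come from the gripper driver and camera, and the $100000$-point threshold is the one stated above:

```python
def pickup_succeeded(gripper_fully_closed, handeye_cloud_points,
                     occlusion_thresh=100000):
    """Post-lift pickup check. A partially open gripper implies an
    object obstructs the fingers; a fully closed gripper may still
    hold a thin object (e.g. a bag) that hangs down and occludes the
    hand-eye camera, leaving few valid depth points."""
    if not gripper_fully_closed:
        return True                # object obstructs the fingers
    return handeye_cloud_points < occlusion_thresh
```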
\begin{figure}[t]
\centering
\includegraphics[width = 0.34\textwidth]{hand-state.pdf}
\caption{Process for checking the hand state after grasping.}
\label{fig:hand-state}
\end{figure}
The grasping process is identical whether picking from the floor or from the basket; the only differences are the point cloud used and the drop location: 1) when picking an object from the floor, the point cloud is acquired using the fixed cameras and the drop point is in the basket; 2) when picking an object from the basket, the point cloud is acquired with the hand-eye camera by moving the arm to three view points and the drop point is in front of the robot. In order to calculate this drop point, we register a point cloud $C$ using the two fixed cameras, where $p = (p_x, p_y, p_z), p \in C$. Then, we remove the points that meet $p_z \leq h_{min} + 0.05m$, where $h_{min}$ is the assumed height of the floor. The remaining points are clustered and the biggest cluster $C_{bin} \subseteq C$ is assumed to be the collection bin. Then, the target position of the arm is set to a point $t = (t_x, t_y, t_z)$, where:
\begin{align} \label{eq:drop-point}
t_x &= \frac{1}{|C_{bin}|} \sum_{p \in C_{bin}}^{}p_x\\
t_y &= \frac{1}{|C_{bin}|} \sum_{p \in C_{bin}}^{}p_y\\
t_z &= \underset{p \in C_{bin}}{max}\{p_z\} + 0.30
\end{align}
\noindent
We add 0.30m to $t_z$ in order to leave some space between the dropped object and the arm. If $|C_{bin}| < 10000$, we use a default position in front of the robot. Finally, the orientation is fixed so that the gripper points down.
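A minimal sketch of the drop-point computation, assuming that after floor removal the remaining points form a single cluster (the bin); the clustering step itself is omitted:

```python
def drop_point(cloud, h_min, default=(0.8, 0.0, 0.5),
               min_points=10000, clearance=0.30):
    """Compute the drop target t above the bin from a registered cloud
    given as (x, y, z) tuples. The points left after floor removal are
    assumed to form the bin cluster C_bin; the default pose is an
    illustrative placeholder."""
    bin_pts = [p for p in cloud if p[2] > h_min + 0.05]  # drop floor points
    if len(bin_pts) < min_points:
        return default             # bin not detected: use a fixed pose
    n = len(bin_pts)
    t_x = sum(p[0] for p in bin_pts) / n            # centroid in x
    t_y = sum(p[1] for p in bin_pts) / n            # centroid in y
    t_z = max(p[2] for p in bin_pts) + clearance    # hover above the rim
    return (t_x, t_y, t_z)
```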
\subsection{Task Planner}
\label{subsec:task-planner}
The task planner node is in charge of sending goals to the Warthog and requests to the grasping service in order to provide the mobile manipulation functionality. It requires three inputs: the type of task, the pick position, and the drop position. Two tasks are considered:
\begin{itemize}
\item \textbf{Collect all:} the robot must collect everything from the pick point before moving to the drop point.
\item \textbf{Collect one by one:} the robot moves between the pick and drop points transporting only one object at a time.
\end{itemize}
The type of task is passed as an argument to the task node on launch. For the pick and drop points, the RViz window from the navigation side is used as the human interface. By clicking on a position in the map using the \textit{2D Nav Goal} functionality (top bar in Fig.~\ref{fig:navigation-rviz}), the user sets goals for the task. The first set goal is the pick point and the second one is the drop point. Afterwards, the autonomous task can start:
\begin{enumerate}
\item \textbf{Moving to pick point:} the task planner sends the pick position to the Warthog and waits for this goal to be accomplished. After reaching the pick point, the Warthog adjusts its final position (see section \ref{subsec:nagivation}).
\item \textbf{Collecting:} a request is sent to the grasping service specifying that it has to perform grasps on the floor and drops in the basket. If this is a \textit{collect all} task, this request is sent until no more grasps are found in the floor, meaning that there are no more objects left.
\item \textbf{Moving to drop point:} the task planner sends the drop position as the new goal to the Warthog and waits for this goal to be accomplished. Then, the Warthog adjusts its final position, but this time with respect to the bin.
\item \textbf{Dropping:} a request is sent to the grasping service indicating that this time it has to perform grasps in the basket and drops in the bin. Again, if it is a \textit{collect all} task, this is done until no more grasps are found.
\end{enumerate}
For \textit{collect all} tasks, only a single pass through these steps is needed. For a \textit{collect one by one} task, these steps are repeated until no more objects are detected at the pick point.
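Both task variants can be sketched as one loop over these steps; `nav` and `grasp` are hypothetical stand-ins for the navigation goal and the grasping service request:

```python
def run_task(task_type, pick_pt, drop_pt, nav, grasp):
    """Task planner loop for 'collect_all' and 'one_by_one'.
    nav(goal) drives the base and adjusts the final pose; grasp(src,
    dst) attempts one pick-and-drop and returns False when no grasps
    are found. Returns the sequence of events for inspection."""
    log = []
    while True:
        nav(pick_pt); log.append("at_pick")
        picked_any = False
        while grasp("floor", "basket"):        # collect from the floor
            picked_any = True
            log.append("picked")
            if task_type == "one_by_one":
                break                          # carry a single object
        if not picked_any:
            break                              # pick area cleared: done
        nav(drop_pt); log.append("at_drop")
        while grasp("basket", "bin"):          # empty the basket
            log.append("dropped")
        if task_type == "collect_all":
            break                              # single pass suffices
    return log
```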
\section{EXPERIMENTS}
\label{sec:experimentation}
We performed experiments to evaluate the variety of objects the system can handle and the success rates and times of the various parts of the process. We performed the experiments on city streets in the vicinity of a loading dock, as shown in Fig.~\ref{fig:env-set}. On each trial, we dropped a set of objects at a random location, placed the bin at a different random location and started the robot from a third random location. Fig.~\ref{fig:test-objects} shows the set of objects used in these experiments, which were selected to be graspable by our gripper:
\begin{figure}[b]
\centering
\includegraphics[width = 0.45\textwidth]{map.png}
\caption{Testing area: main street, loading dock and narrow street.}
\label{fig:env-set}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width = 0.30\textwidth]{test-objs.png}
\caption{Test objects: trash bags, gardening tools, general garbage and fruits.}
\label{fig:test-objects}
\end{figure}
\begin{itemize}
\item \textbf{Trash bags:} 3 black trash bags made of plastic, which are deformable so their shape changed from test to test.
\item \textbf{General garbage:} 15 objects that could be found lying in the street like plastic bottles, cans and paper cups.
\item \textbf{Gardening tools:} 4 gardening tools made of steel with wood handles, except for one with rubber handle.
\item \textbf{Fruits:} 3 green apples and 3 oranges.
\end{itemize}
The navigation subsystem was evaluated in terms of the number of plans needed to move from one point to another (e.g. going from the pick to the drop point). If just one plan was needed, that was a 100\% success rate. If the robot got lost or stuck along the way, the \textit{move\_base} package stopped the robot, and a new plan was needed to reach the goal from the current position. If that second attempt was successful, the success rate was 50\% because two plans were required. The grasping subsystem was evaluated in terms of the grasp success rate. A grasp was considered successful only if the desired object was grasped and deposited in the basket or the bin. The end-to-end task was considered a success only if all of the items were transported from the pick point to the drop point without human intervention.
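Under this metric, the aggregate figures equal the number of completed traversals divided by the total number of plans issued. A sketch:

```python
def nav_success_rate(plans_per_traversal):
    """Each traversal that needed n plans contributes n - 1 failed
    plans, so the aggregate success rate is the number of completed
    traversals over the total number of plans issued."""
    return len(plans_per_traversal) / sum(plans_per_traversal)
```

For instance, 20 traversals of which 3 needed a second plan yield $20/23 \approx 87\%$, matching the pick-navigation figure reported for the trash bag trials.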
In total, we performed four experiments, one for each of the object sets we considered: trash bags, general garbage, gardening tools and fruits. In each of the four scenarios, we ran five or six task trials. The randomly generated pick and drop points for each scenario are shown on the generated map in Fig.~\ref{fig:tasks-maps}. Results are summarized in Table \ref{table:results} and Table \ref{table:times}. Fig.~\ref{fig:trial} shows the action sequence for one trial.
\begin{figure}[b]
\centering
\includegraphics[width = 0.43\textwidth]{tasks-maps.png}
\caption{Maps showing pick (stars) and drop (circles) points for each experiment: (top-left) trash bags, (top-right) general garbage, (bottom-left) gardening tools and (bottom-right) fruits. Numbers indicate the trial pairs.}
\label{fig:tasks-maps}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width = \textwidth]{trial.png}
\caption{Sequence of actions taken by the robot during a trial: moving towards the pick point, grasping an object from the floor, dropping it in the basket, moving to drop point, registering three views from the basket, grasping an object from it and finally dropping the object in the collecting bin.}
\label{fig:trial}
\vspace{-4mm}
\end{figure*}
\subsection{Trash Bags}
We performed five trials for the trash bag scenario. Since one trash bag was big enough to fill the basket, these tests were executed following the \textit{one by one} method. The navigation success rate was 92.1\%. On 3 occasions the robot needed a second attempt, mainly when it found itself too close to an obstacle, e.g., when moving away from a narrow space. The grasping success rate from the floor was 78.9\%. The robot performed 3 grasps that did not grip the object, and in 1 case the bag slipped from the gripper while being lifted. The success rate when grasping from the basket was 100.0\%.
\subsection{General Garbage}
\begin{table}[t]
\centering
\caption{Achieved success rates on each set. \textit{P/D-nav} is the task of moving to pick/drop points and \textit{P/D-grasp} is grasping at them.}
\label{table:results}
\begin{tabular}{@{}cccccc@{}}
\toprule
\textbf{Set -- trials} & \textbf{P-Nav} & \textbf{P-Grasp} & \textbf{D-Nav} & \textbf{D-Grasp} & \textbf{Task} \\ \midrule
\rowcolor[HTML]{EFEFEF}
Bags -- 5 & \begin{tabular}[c]{@{}c@{}}20/23\\ (87\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15/19\\ (79\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15/15\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15/15\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} \\
Garbage -- 6 & \begin{tabular}[c]{@{}c@{}}6/6\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}50/56\\ (89\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}6/8\\ (75\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}50/60\\ (83\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5/6\\ (83\%)\end{tabular} \\
\rowcolor[HTML]{EFEFEF}
Tools -- 5 & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}17/28\\ (61\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}18/27\\ (67\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}3/5\\ (60\%)\end{tabular} \\
Fruits -- 5 & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}30/33\\ (91\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}30/39\\ (77\%)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5/5\\ (100\%)\end{tabular} \\ \bottomrule
\end{tabular}
\end{table}
We performed six trials for the garbage collection scenario. In this case, every trial followed the \textit{collect all} method. Since we had 15 test objects, we randomly sampled seven on each trial, except for one trial in which the whole set was used. The robot achieved an 85.7\% success rate on navigation. One navigation needed a second plan while moving away from a wall. The other failure was caused by the drift of the IMU integrated in the robot: it made the system place a nearby wall behind the robot in the map. Only on this occasion did we manually turn the robot to update the local map. After that, the robot moved autonomously to the other point.
Grasping general garbage from the floor was more challenging: the system achieved an 89.3\% success rate. Since these objects are smaller, their point clouds are less accurate and grasps need to be more precise as well. Out of the 6 failures, 3 were caused by the wind moving an object during the point cloud registration. There was also 1 failure caused by a poor grip that did not contact the object, 1 slip while lifting the object, and 1 reattempt because the planner could not find a collision-free trajectory to reach the best grasp.
Finally, grasping these objects from the basket yielded an 83.3\% success rate. Of the 10 failures, 4 were slips while lifting the object, mainly because the objects were in the corners of the basket, making it difficult for GPD to find good grasps. Another 3 failures were caused by the wind moving the objects while registering the point cloud. The last 3 failures were grasps that did not grip the object correctly.
\subsection{Gardening Tools}
In this experiment five trials were executed following the \textit{collect all} method. In 4 out of 5 navigations to the pick point, the final readjustment of the base failed ($\mu_r$ or $\mu_l$ were infinite). However, we report these navigations as successes because the system reached the user-provided pick point with just one plan. The fact that grasps at the pick point were executed from the user-estimated position probably contributes to the lower grasping success rates achieved with these objects.
The gardening tools were the most challenging to pick up from the ground. In one trial the robot could not detect one of the tools properly, so it moved to the drop point leaving the tool behind at the pick point. This happened again in another trial, in which two objects were left behind. As a consequence, the success rate was 60.7\%. Of the 11 failures, 10 were caused by the gripper performing a weak power grasp: since the handles are thin, the gripper sometimes closed around them while leaving enough room for them to slip. The other failure was an attempt to grasp the tines of the rake.
When the robot left an object behind, we manually placed it in the basket at a random position in order to test the performance when grasping the tools from it. The success rate for grasps from the basket was 66.7\%. On 2 occasions the object fell out of the bin when being dropped because it was long and most of its volume was outside the bin while falling. There were 2 slips when lifting, and 2 grasps that did not grip the target. Finally, in 2 more cases the grasp failed because the robot tried to grasp the tines of the rake.
\subsection{Fruits}
Finally, five trials were executed in this experiment, following the \textit{collect all} method. The navigation success rate was 100.0\%. The grasping success rate for picking fruits from the floor was 90.9\%. There were 3 failures caused by two objects being so close together that the robot attempted to grasp both at the same time from their contacting side. When grasping the fruits from the basket, the system achieved a 76.9\% success rate. Of the 9 failures, 3 were caused by poor grasps that did not grip the object firmly enough. In 2 cases, the object slipped while being lifted. The remaining 4 failures were grasping attempts performed on artifacts registered in the cloud due to direct sunlight on the camera.
\subsection{Execution Time}
The time required for moving from one point to another depended principally on the distance between the points and the velocity of the robot (0.5\,m/s in our tests). Overall, moving an object between two points took 118\,s on average (44\,s minimum; 239\,s maximum). The time required to register the point cloud had low variance, but differed depending on whether the registration used the two fixed cameras or the hand-eye camera. With the two fixed cameras, registration took 4\,s on average. When it was performed using the hand-eye camera, it took 33\,s on average, since the arm had to be moved to three views. Calculating grasps required 10\,s on average for the cloud registered with the fixed cameras and 18\,s for the one stitched with the hand-eye camera. The hand-eye cloud took longer to process because it combines three views and therefore contains more points. Finally, pick execution and drop execution took similar amounts of time: 48\,s on average for grasps from the ground and 51\,s from the basket.
\begin{table}[t]
\centering
\caption{Execution time of each process of the transportation task.}
\label{table:times}
\begin{tabular}{@{}lcc@{}}
\toprule
\multicolumn{1}{c}{\textbf{Sub-process}} & \textbf{Pick} & \textbf{Drop} \\ \midrule
Register Point Cloud & 3.78s $\pm$ 0.21s & 33.20s $\pm$ 9.00s \\
Calculate Grasp & 10.14s $\pm$ 5.41s & 17.72s $\pm$ 8.05s \\
Execute Grasp & 48.12s $\pm$ 7.09s & 50.86s $\pm$ 13.72s \\
Navigate to Point & 98.18s $\pm$ 25.73s & 137.86s $\pm$ 39.49s \\ \bottomrule
\end{tabular}
\end{table}
\section{CONCLUSIONS AND LIMITATIONS}
\label{sec:conclusions}
This paper describes a system that solves an open-world transportation task involving novel objects. After being provided with a pick and a drop point by a user, our system autonomously navigates to the pick point, grasps everything there, navigates to the drop point, and drops everything into a bin. We evaluated the system in four experimental scenarios involving different objects: trash bags, garbage, tools and fruits. The experiments indicate that our system worked well overall, yielding an 80.8\% grasping success rate, navigating without problems in 96.1\% of the cases, and achieving an 85.7\% overall task success rate.
However, the system has some limitations. Since it uses a 2D laser scanner, it has problems localizing itself in areas with elevation changes. This confuses the system, so the robot oscillates while traversing them. During experimentation, we could not set goals at the entrance of the loading dock because that area was noticeably lower than its surroundings. As for the grasping system, it has difficulty grasping objects that rise less than approximately 3\,cm from the ground. The D415 cameras record noise when working outdoors, and this noise increases with distance, so such objects are hard to distinguish on the floor and could be left behind undetected, as happened with the gardening tools.
As future work, we want to reduce the time gap between registering the cloud and actually performing a grasp. Since the robot works in an open environment, there are factors that can affect the position of the objects, as we experienced when the wind moved them. Moreover, it would be desirable to include a 3D sensor to improve self-localization. Finally, we would like to work on an object detection and tracking system so the robot can find the target objects autonomously.
\bibliographystyle{IEEEtran}
\section{Matroids}
Invented by~\cite{whitney1935}, the notion of a \emph{matroid}
is an abstraction of the property of linear independence.
It admits several equivalent formalizations,
in terms of \emph{bases}, \emph{independent sets},
\emph{circuits}, a \emph{rank function}, etc.
We will use here the one based on the concept of a \emph{flat}.
\begin{defi}
Let $E$ be a \emph{finite} set.\footnote{%
In this text, only finite matroids will be considered.
In fact, the formalization of matroids on an infinite set
is recent: a natural condition consists in requiring
that the property of being an independent set be of finite character,
but \citet{bruhn-diestel-kriesell-pendavingh-wollan2013}
show that one obtains a more interesting class
of infinite matroids by imposing an axiom asserting the existence
of maximal independent sets.}
A matroid~$M$ on~$E$
is the datum of a subset~$\mathscr P_M$
of $\mathfrak P(E)$ --- the \emph{flats} of~$M$
--- satisfying the following properties:
\begin{enumerate}
\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item The intersection of any family of flats of~$M$ is a flat.
\item For every flat~$P$ of~$M$ distinct from~$M$,
the flats of~$M$ that are minimal
among those strictly containing~$P$ cover~$E$.
\end{enumerate}
\end{defi}
In general, we write $\abs M$, or even $M$,
for the underlying set of a matroid~$M$.
\subsection{}
Let us briefly recall
the other formalizations of the notion of a matroid,
referring to~\citep{welsh1976}, \citep{white1986,white1987}
or \citep{oxley1992}
for further details.
Let $M$ be a matroid on a set~$E$.
Equipped with the inclusion relation on~$\mathfrak P(E)$,
the set of flats of~$M$ is an ordered set.
It is a \emph{lattice}:
every subset has a greatest lower bound (denoted~$\wedge$),
the intersection of its members,
and a least upper bound (denoted~$\vee$),
the intersection of the family of flats of~$M$
containing each of its members.
If $X$ is a subset of~$E$, we write $\langle X\rangle$
for the flat generated by~$X$,
that is, the smallest flat of~$M$ containing~$X$.
To the matroid~$M$ one naturally associates two functions,
\emph{rank} and \emph{corank}: the rank~$\rk_M(P)$ of a flat~$P$ is the greatest
length of a chain of flats with top element~$P$,
and its corank~$\cork_M(P)$
is the greatest length of a chain of flats with bottom
element~$P$.
More generally, one defines the rank (resp. the corank)
of a subset~$A$ of~$M$ as that of the flat it generates.
A subset~$L$ of~$E$ is said to be \emph{dependent}
if there exists a subset $L'\subsetneq L$
such that $\langle L'\rangle=\langle L\rangle$;
otherwise it is said to be \emph{independent}.
The independent sets satisfy the following properties:
\begin{enumerate}
\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item $\emptyset$ is independent;
\item Every subset of an independent set is independent;
\item If $A$ and $B$ are independent sets of~$M$
such that $\Card(B)>\Card(A)$, there exists
$x\in B\setminus A$ such that $A\cup\{x\}$ is independent
(a variant of the ``exchange lemma'').
\end{enumerate}
Conversely, every subset of~$\mathfrak P(E)$ satisfying these
three properties is the set of independent sets of a unique matroid on~$E$.
A \emph{basis} of~$M$ is a maximal independent set.
Bases exist; one deduces from the exchange lemma that all bases
of~$M$ have the same cardinality and that all maximal chains
of flats have the same cardinality,
equal to the rank of~$M$ (that is, of the flat~$E$ of~$M$).
In other words, for every flat~$P$ of~$M$,
one has $\rk_M(P)+\cork_M(P)=\rk_M(\abs M)$
(the lattice associated with~$M$ is ``catenary'').
Moreover, for every pair $(P,Q)$ of flats of~$M$, one has the
relation
\begin{equation}
\rk_M(P)+\rk_M(Q) \geq \rk_M(P\wedge Q)+\rk_M(P\vee Q)\ ;
\end{equation}
the lattice associated with~$M$ is then said to be \emph{submodular}.
Conversely, every catenary submodular lattice
is the lattice of flats of a matroid.
\begin{exem}
Let $V$ be a vector space (resp. an affine space,
resp. a projective space) over a field~$K$
and let $\Phi=(v_e)_{e\in E}$ be a finite family of elements of~$V$.
There exists a matroid~$M$ whose
flats are the subsets of~$E$ of the form $\{e\in E\sozat v_e\in W\}$,
where $W$ runs over the set of vector (resp. affine,
resp. projective) subspaces of~$V$.
The independent sets of this matroid are the linearly
(resp. affinely, resp. projectively) independent subfamilies of~$\Phi$.
In the vector space case,
the rank of a flat~$P$ is the dimension of the vector subspace generated
by the family $(v_e)_{e\in P}$;
in the affine (resp. projective) case, one has $\rk_M(\emptyset)=0$,
and $\rk_M(P)-1$ is the dimension of the affine (resp. projective) subspace
of~$V$ generated by the family~$(v_e)_{e\in P}$.
Such matroids are said to be \emph{representable} over~$K$.
Taking for $\Phi$ the set
of points of the projective plane $\P_2(\F_2)$, one
obtains the \emph{Fano matroid}.
It is representable only over fields of characteristic~$2$.
Representable matroids appear naturally
via the hyperplane arrangements they define in the dual vector
space~$V^\vee$ (resp. the affine, resp. projective space).
The flats are then the intersections of hyperplanes of the arrangement,
and their rank is their codimension.
By~\citet{nelson2018},
as $n$ tends to infinity, the proportion
of matroids on the set $\{1,\dotsc,n\}$
that are representable (over \emph{some} unspecified field)
tends to~$0$.
\end{exem}
\begin{exem}
Let $G$ be a finite graph and let $E$ be the set of edges of~$G$.
There exists a unique matroid $M(G)$ on~$E$
whose independent sets are the forests of~$G$.
Equivalently,
its circuits, that is, its minimal dependent sets,
are the cycles of the graph~$G$;
its flats are the subsets~$P$
such that the endpoints of any edge $e\in E\setminus P$
do not belong to the same connected
component of the subgraph of~$G$ having the same vertex set as~$G$
and~$P$ as its edge set.
If $G_P$ is the largest subgraph of~$G$ with edge
set~$P$, then $\rk_{M(G)}(P)+\Card(\pi_0(G_P))$
is the number of vertices of~$G$.
The matroid~$M(G)$ is representable over every field.
Let~$S$ be the set of vertices of~$G$.
Let $K$ be a field and let $(x_s)$ denote the canonical basis
of the $K$-vector space~$K^S$.
Fix an orientation~$F$
of~$G$ and identify an arrow~$f\in F$ with the corresponding edge
$\{f,\overline f\}$.
For every arrow~$f\in F$ with source~$o$ and target~$t$,
set $v_f=x_t-x_o\in K^S$. Then the matroid
associated with the family~$(v_f)_{f\in F}$ is identified with the matroid~$M(G)$.
\end{exem}
\subsection{}
Let us mention a few other constructions of matroids.
\let\del\backslash
\let\contr\slash
\begin{enumerate}\def\theenumi{\alph{enumi}}\def\labelenumi{(\theenumi)}
\item
Let $M_1$ and $M_2$ be matroids.
There exists a unique matroid $M$ on
the set $\abs {M_1}\coprod\abs{M_2}$
whose flats
are the unions of a flat of~$M_1$ and a flat of~$M_2$.
It is denoted $M_1\oplus M_2$.
\item
Let $M$ be a matroid.
There exists, on the set~$\abs M$, a unique matroid structure
whose bases are the complements of the bases of~$M$.
It is called the \emph{dual matroid}~$M^*$ of~$M$.
Its rank function is related to that of~$M$ by
the relation
\[ \rk_{M^*}(A)-\rk_{M}(\abs M\setminus A)=\Card(A)-\rk_M(\abs M),\]
for every subset~$A$ of~$\abs M$.
\item
Let $M$ be a matroid and let $F$ be a subset of~$\abs M$.
The sets $P\cap F$, where $P$ runs over the flats of~$M$,
are the flats of a matroid structure on~$F$, denoted
$M\mathord{\,|\,} F$;
it is the \emph{restriction} of~$M$ to~$F$;
one also views it as
the \emph{deletion} of~$\abs M\setminus F$ from~$M$,
in which case it is denoted $M\del (\abs M\setminus F)$.
Its rank function is the restriction to~$\mathfrak P(F)$
of the rank function of~$M$.
\item
Let $M$ be a matroid and let $F$ be a subset of~$\abs M$.
The set of subsets~$P$ of~$\abs M\setminus F$
such that $P\cup F$ is a flat of~$M$
defines a matroid structure on the set $\abs M\setminus F$;
in other words, the lattice of its flats is the sublattice of~$\mathscr P_M$
consisting of the flats of~$M$ containing~$F$.
It is called the \emph{contraction} of~$F$ in~$M$
and is denoted~$M\contr F$.
Its rank function satisfies $\rk_{M/F}(A)=\rk_M(A\cup F)-\rk_M(F)$,
for every subset~$A$ of~$\abs M\setminus F$.
\end{enumerate}
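As a quick illustration of the duality construction, here is a minimal Python sketch for the graphic matroid of the triangle $K_3$, with the rank function hard-coded and using the standard identity $\rk_{M^*}(A)=\Card(A)+\rk_M(E\setminus A)-\rk_M(E)$ (all names are ours):

```python
# Ground set: the three edges of the triangle K3 (illustrative labels).
E = [(0, 1), (1, 2), (0, 2)]

def rank(subset):
    """Rank in M(K3): any set of <= 2 edges is independent; the full
    edge set contains the unique cycle, so it still has rank 2."""
    a = set(subset)
    return len(a) if len(a) < 3 else 2

def dual_rank(subset):
    """Standard identity: rk_{M*}(A) = |A| + rk_M(E \\ A) - rk_M(E)."""
    a = set(subset)
    return len(a) + rank(set(E) - a) - rank(E)

# Bases of M* are the complements of the bases of M, i.e. the singletons.
print([dual_rank([e]) for e in E], dual_rank(E))  # -> [1, 1, 1] 1
```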
\subsection{}
Let $M$ be a matroid.
A loop of~$M$ is a point~$e\in\abs M$ belonging to every flat.
Let $m=\langle\emptyset\rangle$ be the smallest flat of~$M$.
The contracted matroid $M/m$
is loopless and has the same lattice of flats as~$M$.
Assume that $M$ is loopless.
The relation $\langle x\rangle=\langle y\rangle$ on~$E$
is an equivalence relation whose equivalence classes
are the flats of~$M$ of rank~$1$. When these equivalence
classes are reduced to a single element,
the matroid is called a \emph{combinatorial geometry}.
In general, the matroid~$M$ induces
a combinatorial geometry~$\overline M$ on the quotient set~$\overline E$;
the canonical surjection from~$E$ onto~$\overline E$
induces an isomorphism from the lattice of flats of~$M$ onto that of~$\overline M$.
\begin{defi}
Let $M$ be a matroid.
The \emph{characteristic polynomial} of~$M$ is defined by
\begin{equation}\label{eq.chiM}
\chi_M(T) = \sum_{A\subset \abs M} (-1)^{\Card(A)} T^{\cork_M(\langle A\rangle)}.
\end{equation}
\end{defi}
It is a monic polynomial of degree~$\leq\rk(M)$.
When $\abs M$ is empty, one has $\chi_M(T)=1$.
There are two matroids on a set~$\{e\}$ of cardinality~$1$:
if $\mathscr P_M=\{\emptyset, \abs M\}$, one has $\rk(M)=1$ and $\chi_M(T)=T-1$;
if $\mathscr P_M=\{\abs M\}$, one has $\rk(M)=0$ and $\chi_M(T)=0$.
In general, the polynomial~$\chi_M(T)$ can be computed recursively from
the two rules:
\[ \chi_{M_1\oplus M_2}(T) = \chi_{M_1}(T) \chi_{M_2}(T) \]
and
\[ \chi_M(T) = \chi_{M\backslash e}(T) - \chi_{M/e}(T) \]
for every point $e\in \abs M$ that does not belong to every basis of~$M$.
(These two rules generalize those governing the chromatic polynomial of a graph.)
If $\abs M$ is not empty, one has $\chi_M(1)=0$;
one then defines the reduced characteristic polynomial by:
\[ \overline{\chi_M}(T) = \chi_M(T)/(T-1). \]
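As an illustrative sanity check of the definition of $\chi_M$, here is a brute-force Python sketch in which the flats and ranks of the graphic matroid $M(K_3)$ are hard-coded (all names are ours):

```python
from itertools import chain, combinations

E = frozenset('abc')  # the three edges of the triangle K3 (illustrative labels)
# Flats of M(K3): the empty set, each single edge, and E itself.
FLATS = [frozenset()] + [frozenset(e) for e in 'abc'] + [E]
RANK = {frozenset(): 0, E: 2}
RANK.update({frozenset(e): 1 for e in 'abc'})

def closure(A):
    """Smallest flat containing A (flats are closed under intersection)."""
    return min((F for F in FLATS if A <= F), key=len)

def char_poly_coeffs(r=2):
    """Coefficients [T^2, T^1, T^0] of chi_M(T), computed from the
    definition: sum over all subsets A of (-1)^{|A|} T^{cork(<A>)},
    with cork = r - rank."""
    coeffs = [0] * (r + 1)
    for A in chain.from_iterable(combinations(E, k) for k in range(len(E) + 1)):
        cork = r - RANK[closure(frozenset(A))]
        coeffs[r - cork] += (-1) ** len(A)
    return coeffs

print(char_poly_coeffs())  # -> [1, -3, 2], i.e. chi_M = (T-1)(T-2)
```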
\begin{exem}
Assume that $M$ is the matroid~$M(G)$ associated with a finite graph~$G$.
Then
\[ \chi_G(T) = T^{\Card(\pi_0(G))} \chi_M(T) \]
is the chromatic polynomial of~$G$: for every integer~$q$,
$\chi_G(q)$ is the number of colorings of the vertex set
of~$G$ with~$q$ colors such that any two vertices joined by an edge
receive distinct colors.
\end{exem}
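The chromatic-polynomial interpretation can be checked by brute force on a small graph (illustrative sketch; the function names are ours):

```python
from itertools import product

def proper_colorings(vertices, edges, q):
    """Count the colorings of `vertices` with q colors such that the
    endpoints of every edge receive distinct colors."""
    count = 0
    for coloring in product(range(q), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(color[u] != color[v] for (u, v) in edges):
            count += 1
    return count

# Triangle K3: chi_G(T) = T(T-1)(T-2).
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
for q in range(6):
    assert proper_colorings(V, E, q) == q * (q - 1) * (q - 2)
```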
\begin{exem}
Let $K$ be a field, let $n$ be an integer,
let $(v_1,\dotsc,v_r)$ be a linearly independent family in~$K^n$,
and let $V$ be the vector subspace of~$K^n$ that it generates.
Let~$M$ denote the corresponding representable matroid.
The characteristic polynomial of~$M$ has a geometric interpretation
in the Grothendieck ring $K_0(\mathrm{Var}_K)$ of $K$-varieties.
Recall that this ring is defined as the quotient of the free abelian
group on the set $\mathrm{Var}_K$
of isomorphism classes of $K$-varieties (=~$K$-schemes of finite type)
by the cut-and-paste relation $[X]=[X\setminus Y]+[Y]$ whenever $X$
is a $K$-variety and $Y$ a closed subset of~$X$, equipped with the product
$[X][Y]=[X\times_K Y]$. If $X$ is a $K$-variety, we write
$\operatorname e(X)$ for its class in $K_0(\mathrm{Var}_K)$.
The map~$\operatorname e$
is a universal Euler characteristic. In particular,
when $K=\C$, the map which, to a $\C$-variety~$V$,
associates its Hodge--Deligne polynomial $E_V(u,v)$, factors through
a ring homomorphism from~$K_0(\mathrm{Var}_\C)$ to~$\Z[u,v]$.
Similarly, when $K$ is a finite field, the map which, to a
$K$-variety~$V$, associates the cardinality of~$V(K)$, factors
through a ring homomorphism from~$K_0(\mathrm{Var}_K)$ to~$\Z$.
Let $\mathbf L=\operatorname e(\mathbf A^1_K)$ denote
the class of the affine line.
In the Grothendieck ring $K_0(\mathrm{Var}_K)$ of $K$-varieties,
one then has the relation
\[ \operatorname e(V \cap \mathbf G_{\mathrm m,K}^n) = \chi_M(\mathbf L). \]
(This follows from the Möbius inversion formula in
the lattice of flats of the matroid~$M$ and from formula~\eqref{eq.chiM};
see below.)
Since the unique ring homomorphism
from~$\Z[T]$ to~$K_0(\mathrm{Var}_K)$ mapping~$T$
to~$\mathbf L$ is injective, this relation characterizes~$\chi_M$.
When $K=\C$, the Hodge--Deligne polynomial of the quasi-projective
variety $V\cap (\C^\times)^n$ is thus equal to $\chi_M(uv)$.
When $K=\F_q$ is a finite field of cardinality~$q$, one likewise has
$\Card(V\cap (K^\times)^n)=\chi_M(q)$.
Passing to the quotient by the action of~$\mathbf G_{\mathrm m,K}$ by homotheties
on the affine space~$\mathbf A^n_K$, one also deduces
a geometric interpretation
of the reduced characteristic polynomial of~$M$:
\[ \operatorname e(\P(V\cap \mathbf G_{\mathrm m,K}^n))= \overline{\chi_M}(\mathbf L). \]
\end{exem}
Here is the main theorem of this exposé:
\begin{theo}[\citealt*{adiprasito-huh-katz2015}]
\label{theo.ahk}
Let $M$ be a matroid of \mbox{rank~$>0$;} set $r=\rk(M)-1$ and
define integers $\mu^k(M)$, for $0\leq k\leq r$, by
\[ \overline{\chi_M}(T) = \sum_{k=0}^r (-1)^k \mu^k(M) T^{r-k}. \]
Then the sequence $(\mu^0(M),\dotsc,\mu^r(M))$ is \emph{log-concave}:
\begin{enumerate}
\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item For every integer~$k$ such that $0\leq k\leq r$, one has $\mu^k(M)>0$;
\item For every integer~$k$ such that $0<k<r$, one has $\mu^{k-1}(M)\mu^{k+1}(M)
\leq \mu^k(M)^2$.
\noindent In particular, this sequence is \emph{unimodal}:
\item There exists an integer~$\ell$
such that
\[ \mu^0(M)\leq \mu^1(M) \leq\dotsb \leq \mu^{\ell}(M)
\geq \mu^{\ell+1}(M)\geq \dotsb \geq \mu^r(M). \]
\end{enumerate}
\end{theo}
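The conclusion of the theorem is easy to test numerically on small examples (illustrative Python sketch; the helper names are ours):

```python
def is_log_concave(seq):
    """Checks mu_{k-1} * mu_{k+1} <= mu_k^2 for interior k,
    with all terms positive."""
    return (all(x > 0 for x in seq)
            and all(seq[k - 1] * seq[k + 1] <= seq[k] ** 2
                    for k in range(1, len(seq) - 1)))

def is_unimodal(seq):
    """Checks that the sequence weakly increases, then weakly decreases."""
    k = max(range(len(seq)), key=lambda i: seq[i])
    return (all(seq[i] <= seq[i + 1] for i in range(k))
            and all(seq[i] >= seq[i + 1] for i in range(k, len(seq) - 1)))

# For M = M(K4): chi_M(T) = (T-1)(T-2)(T-3), so
# chibar_M(T) = (T-2)(T-3) = T^2 - 5T + 6, i.e. (mu^0, mu^1, mu^2) = (1, 5, 6).
mu = (1, 5, 6)
assert is_log_concave(mu) and is_unimodal(mu)
```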
Several corollaries of this theorem had been conjectured
in the 1970s
\citep{rota1971,heron1972,mason1972,welsh1976},
sometimes first in the case of graphs
and their chromatic polynomials
\citep{read1968,hoggar1974}:
\begin{coro}[Heron--Rota--Welsh conjecture]
\label{coro.welsh}
Let $M$ be a matroid and let $r=\rk(M)$.
Define integers $w_k(M)$, for $0\leq k\leq r$, by
\[ {\chi_M}(T) = \sum_{k=0}^r (-1)^k w_k(M) T^{r-k}. \]
Then the sequence $(w_0(M),\dotsc,w_r(M))$ is \emph{log-concave}
and \emph{unimodal}.
\end{coro}
The integers~$w_k(M)$ are called the Whitney numbers of the first
kind of the matroid~$M$; they are nonnegative (see below).
\begin{coro}[Welsh--Mason conjecture]
\label{coro.welsh-mason}
Let $M$ be a matroid and let $r=\rk(M)$.
For every integer~$k$, let $f_k(M)$ denote the number of independent sets of~$M$
of cardinality~$k$. The sequence $(f_0(M),\dotsc,f_{r}(M))$ is log-concave.
\end{coro}
Following~\cite{lenz2013}, this corollary is deduced from Theorem~\ref{theo.ahk}
by considering the matroid $M'$
on the set $E'=E\coprod\{e\}$ whose flats
are the empty set and the sets $P\cup\{e\}$, for every flat~$P$ of~$M$
(the ``free coextension'' of~$M$).
This matroid has rank~$r+1$ and,
by~\citep[Remark~6.15.3c]{brylawski1982},
its reduced characteristic polynomial
satisfies
\[ \overline{\chi_{M'}}(T) = \sum_{k=0}^r (-1)^k f_k(M) T^{r-k}. \]
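For graphic matroids, the numbers $f_k(M)$ count forests with $k$ edges and can be computed by brute force, which makes the Welsh--Mason inequality easy to check on small graphs (illustrative Python sketch; names are ours):

```python
from itertools import combinations

def is_forest(n_vertices, edge_subset):
    """Acyclicity test via union-find."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would close a cycle
        parent[ru] = rv
    return True

def f_vector(n_vertices, edges):
    """f_k = number of forests with k edges, for 0 <= k <= rank."""
    counts = [sum(1 for c in combinations(edges, k) if is_forest(n_vertices, c))
              for k in range(len(edges) + 1)]
    while counts and counts[-1] == 0:
        counts.pop()  # keep entries up to the rank only
    return counts

# Complete graph K4: f = (1, 6, 15, 16), which is log-concave.
fv = f_vector(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
assert fv == [1, 6, 15, 16]
assert all(fv[k] ** 2 >= fv[k - 1] * fv[k + 1] for k in range(1, len(fv) - 1))
```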
\subsection{}
Although purely combinatorial in nature,
the proof of Theorem~\ref{theo.ahk}
is inspired by algebraic geometry.
Indeed, suppose
that the matroid~$M$ is representable over a field~$K$,
associated with the hyperplane arrangement defined
by the traces, on a projective subspace~$V$ of dimension~$r$,
of the coordinate hyperplanes of~$\P_{n}(K)$.
The Cremona involution~$\iota$ is the birational automorphism
of $\P_{n,K}$
given by $[x_0:\dotsb:x_n]\mapsto [x_0^{-1}:\dotsb:x_n^{-1}]$.
The Zariski closure in~$\P_{n,K}\times\P_{n,K}$ of the graph
of the restriction of~$\iota$ to~$V$ is then a smooth
compactification~$\tilde V$ of~$V\cap \mathbf G_{\mathrm m,K}^n$ whose boundary
is a strict normal crossings divisor.
(This construction is due to \cite{deconcini-procesi1995a}.)
\begin{theo}[\citealt*{huh-katz2012}]
\label{theo.hk}
One has the equality
\[ [\tilde V] = \sum_{k=0}^r \mu^k(M) [\P_{r-k}\times\P_k] \]
in the group $A_r(\P_n\times\P_n)$
of classes of cycles of dimension~$r$ on~$\P_n\times\P_n$.
\end{theo}
Equivalently, one has
\[ \mu^k(M) = \deg ( c_1(\mathscr L_1)^{r-k} c_1(\mathscr L_2)^k \cap [\tilde V]) ,\]
where $\mathscr L_1$ and $\mathscr L_2$ are the line bundles
on~$\P_n\times\P_n$ obtained from the line bundle~$\mathscr O_{\P_n}(1)$
by pullback along the first and second projections.
The log-concavity of the sequence $(\mu^0(M),\dotsc,\mu^r(M))$
then follows from the Hodge index theorem,
in the form of the \emph{Khovanskii--Teissier inequalities}
\citep{khovanskii1988,teissier1979}.
When $K$ has characteristic zero,
\citet{huh2012} had given a first proof of
the log-concavity of the sequence $(\mu^k(M))$,
in which the coefficients~$\mu^k(M)$ appear
as the Milnor numbers of the trace, on a general subspace
of dimension~$k$,
of the union of the hyperplanes of the arrangement defining~$M$.
The log-concavity inequality then follows from Teissier's
inequality for multiplicities~\citep[Appendix]{eisenbud-levine-teissier1977}.
One also finds in~\citet{huh2012} a very pretty theorem
characterizing the cycle classes in~$\P_m\times\P_n$
that are represented, up to a scalar, by an irreducible
subvariety:
\begin{theo}
Let $r$ be an integer such that $0\leq r\leq \inf(m,n)$ and let
$(a_0,\dotsc,a_r)$ be a sequence of integers. For
the cycle class
\[ \alpha = \sum_{k=0}^r a_k [\P_{r-k}\times\P_k ] \in A_r(\P_m\times\P_n) \]
to be a positive multiple of the class of an irreducible subvariety~$V$
of~$\P_m\times\P_n$, it is necessary and sufficient that one of the
two following conditions be satisfied:
\begin{itemize}
\item The class $\alpha$ is a positive multiple of
$[\P_m\times\P_n]$, $[\P_m\times\P_0]$,
$[\P_0\times\P_n]$ or $[\P_0\times\P_0]$;
\item The sequence $(a_0,\dotsc,a_r)$ is log-concave
and the set of integers~$k$ such that $a_k=0$ is an interval.
\end{itemize}
\end{theo}
In other words, the Khovanskii--Teissier inequalities characterize
precisely the classes of effective cycles.
In a similar spirit, let us mention the counterexample
of~\citet{babaee-huh2017} to a version of the Hodge conjecture
for positive currents: a strongly positive current of type~$(2,2)$
on a smooth projective complex toric variety of dimension~$4$
that does not belong to the closed convex cone generated by the
currents of integration along subvarieties.
\subsection{}
Thus, to prove Theorem~\ref{theo.ahk},
\citet*{adiprasito-huh-katz2015}
associate with a combinatorial geometry~$M$
a graded $\R$-algebra~$A(M)_\R$ satisfying
analogues of Poincaré duality,
the hard Lefschetz theorem and the Hodge--Riemann
inequalities.
In fact, $A(M)_\R$ will be the Chow ring (with real coefficients)
of a smooth toric variety~$X_M$ (over an arbitrary field~$K$)
introduced by~\citet{feichtner-yuzvinsky2004}.
In general, the variety $X_M$ is not proper,
and it has dimension~$>\rk(M)$,
so that its Chow ring $A(M)$,
or its cohomology ring $H(M)$, has no reason to behave
like that of a smooth projective variety of dimension~$\rk(M)$.
When the matroid~$M$ is representable,
$X_M$ admits a smooth projective subvariety~$Y_M$
(it is in fact the variety~$\tilde V$ of the preceding paragraph)
such that the map $z\mapsto z \cap [Y_M]$
induces an isomorphism from $A(X_M)$ onto $A(Y_M)$.
By contrast,
when $M$ is not representable over~$K$,
there is no proper smooth $K$-variety~$Y$ together with a morphism
of varieties~$f\colon Y\to X_M$
such that $f^*$ induces an isomorphism from~$A(X_M)$ onto~$A(Y)$
\citep*[th.~5.12]{adiprasito-huh-katz2015}.
\subsection{}
Le principe n'est bien sûr pas nouveau
de puiser l'inspiration d'une preuve de résultats de nature combinatoire
dans la géométrie algébrique,
par exemple via le dictionnaire qui, à
tout polytope~$P$ de~$\R^r$, à sommets entiers et de dimension~$r$,
associe une variété torique projective polarisée~$(X_P,L)$.
Dans ce dictionnaire, la dimension
$h^0(X_P,L^{\otimes n})$
de l'espace de sections globales de la puissance $n$-ième de~$L$
correspond au nombre de points entiers
du polytope~$nP$ et, via la formule de Hilbert--Samuel,
relie le degré de~$X_P$ au volume de~$P$.
Plus généralement, les volumes mixtes correspondent
à des nombres d'intersection.
C'est ainsi que \citet{khovanskii1988} et \citet{teissier1979}
déduisent les inégalités
d'Alexandrov--Fenchel et de Brunn--Minkowski
du théorème de l'indice de Hodge sur les surfaces
et du théorème de Bertini.
Citons par exemple la conjecture de {McMullen}
décrivant les valeurs possibles du nombre~$f_i$ de
faces de dimension~$i$ d'un polytope \emph{simple}~$P$ de
dimension~$d$.
McMullen considère une suite $(h_0,\dotsc,h_d)$ obtenue par
combinaisons linéaires adéquates des~$f_i$ et postule trois familles
de conditions sur cette suite pour que la suite initiale $(f_0,\dotsc,f_d)$
soit la suite des nombres de faces d'un polytope simple;
les premières, $h_k=h_{d-k}$, sont
les \emph{relations de Dehn--Sommerville;}
la seconde famille s'écrit $h_0\leq h_1\leq\dotsb \leq h_{\lfloor d/2\rfloor}$;
la troisième est un peu technique et
ce n'est pas la peine de la recopier ici.
\cite{billera-lee1980} ont prouvé qu'elles sont suffisantes,
et leur nécessité est démontrée par \cite{stanley1980b}
à l'aide de la géométrie algébrique.
Lorsque le polytope est à sommets
entiers, il observe en effet
que l'entier~$h_k$ est la dimension de l'espace de cohomologie
$H^k(X_Q)_\R$
de la variété torique~$X_Q$ (sur~$\C$, disons)
associée au polytope~$Q$ polaire de~$P$,
de sorte que les conditions de McMullen découlent respectivement de trois
propriétés cohomologiques de la cohomologie de~$X_Q$:
la dualité de Poincaré,
le théorème de Lefschetz difficile,
et le fait que l'algèbre de cohomologie $H(X_Q)_\Q$ à coefficients
rationnels soit engendrée par $H^2(X_Q)$.
La variété~$X_Q$ est projective, mais n'est pas nécessairement lisse ;
l'hypothèse que le polytope~$P$ est simple assure toutefois que la variété~$X_Q$
est une « {orbifolde} », c'est-à-dire localement
le quotient d'une variété lisse par l'action d'un groupe fini,
de sorte que ces énoncés cohomologiques restent valides.
Par homothéties et approximation, on peut parfois
déduire du cas d'un polytope à sommets entiers
le cas d'un polytope à sommets réels quelconques,
mais il existe des polytopes dont la combinatoire ne peut
pas être obtenue par une déformation en un polytope à sommets rationnels.
Cette complication ne se produit pas dans les deux exemples précédents;
pour la conjecture de McMullen, l'hypothèse que le polytope
est simple est essentielle.
Ultérieurement, \cite{stanley1987} a généralisé cette
étude aux polytopes non nécessairement simples:
lorsque $P$ est à sommets entiers,
l'entier~$h_k$ est alors la dimension de l'espace de cohomologie
d'intersection $IH^k(X_Q)_\R$.
L'extension de cette relation aux polytopes à sommets arbitraires
a motivé d'une part
une approche combinatoire de~\citet{mcmullen1993},
et d'autre part le développement d'une « cohomologie d'intersection
des polytopes » et la preuve du théorème de Lefschetz difficile
dans ce contexte \citep{karu2004}.
\subsection{}
Le titre de ce rapport mentionne les inégalités de Hodge--Riemann.
Comme on le verra plus tard, ces inégalités renforcent le théorème
de Lefschetz difficile: si ce dernier signifie qu'une certaine
forme bilinéaire est non dégénérée, les inégalités de Hodge--Riemann
en précisent la signature.
En géométrie kählérienne
le théorème de Lefschetz difficile est démontré \emph{avant}
les inégalités de Hodge--Riemann.
C'est a fortiori le cas en géométrie algébrique,
en particulier sur un corps de caractéristique positive où
les inégalités de Hodge--Riemann sont encore une conjecture.
En revanche, il semble que les approches combinatoires
du théorème de Lefschetz difficile ne puissent faire l'économie
des inégalités de Hodge--Riemann.
Outre le théorème de~\citet{karu2004} déjà mentionné,
mentionnons le rôle crucial
que jouent ces inégalités dans la preuve du théorème
de décomposition que proposent~\citet{decataldo-migliorini2005}
(voir aussi l'exposé de~\citet{williamson2017} dans ce séminaire).
Citons enfin la preuve par~\citet{elias-williamson2014}
de la positivité des coefficients des polynômes de Kazhdan--Lusztig
associés à un système de Coxeter général;
cf. également le rapport de~\citet{riche2018} dans ce séminaire.
Les travaux dont il est question dans ce rapport
suggèrent l'intérêt d'un analogue de la
théorie de Hodge en géométrie tropicale.
Je me contente ici de renvoyer à
l'article de~\citet*{itenberg-katzarkov-mikhalkin-zharkov2016}
où sont construits des espaces de $(p,q)$-formes sur
les variétés tropicales.
\subsection{}
L'unimodalité et la log-concavité sont des thèmes prégnants
de la combinatoire énumérative.
L'article de~\cite{stanley1989}, complété par celui de~\cite{brenti1994},
en fournit une excellente introduction.
À propos des résultats de cet exposé,
je renvoie aussi au bref survol \citep*{adiprasito-huh-katz2017}
et surtout à l'article d'exposition de~\citet*{baker2018}.
Je remercie enfin Karim Adiprasito,
Matt Baker, Michel Brion,
Antoine Ducros, Javier Fresán, Olivier Guichard, June Huh, Ilia Itenberg
et Bernard Teissier
pour leurs commentaires sur des premières versions de ce rapport.
\section{Éventails}
\subsection{}
Soit $(\mathscr P,\preceq)$ un ensemble ordonné, disons fini,
et soit $A$ un anneau commutatif. L'algèbre de convolution
$\mathscr A(\mathscr P;A)$ est le $A$-module
des fonctions à valeurs dans~$A$ sur l'ensemble des couples $(x,y)$
d'éléments de~$\mathscr P$ tels que $x\preceq y$,
muni du produit de convolution défini par
\[ \phi* \psi (x,y) = \sum_{x\preceq z\preceq y} \phi(x,z) \psi(z,y). \]
Son élément unité~$\delta$ est l'indicatrice de Kronecker.
Un élément de $\mathscr A(\mathscr P;A)$ est inversible si et seulement
s'il ne prend que des valeurs inversibles en les couples de la forme~$(x,x)$.
La fonction de Möbius de~$\mathscr P$, notée~$\mu$,
est l'inverse de la fonction constante~$\mathbf 1$ de valeur~$1$;
elle est caractérisée par les relations
\begin{gather}
\mu(x,x)=1 \\
\sum_{x\preceq z\preceq y} \mu(x,z) =0
\end{gather}
pour tout $x\in\mathscr P$ d'une part,
et pour tout couple $(x,y)$ d'éléments de~$\mathscr P$ tels que $x\prec y$ d'autre part.
L'algèbre $\mathscr A(\mathscr P;A)$ agit à gauche sur le $A$-module
$\mathscr F(\mathscr P;A)$ des fonctions de~$\mathscr P$ dans~$A$,
par la formule:
\[ \phi * f(x) = \sum_{x\preceq y} \phi(x,y) f(y). \]
La \emph{formule d'inversion de Möbius} est alors,
pour deux éléments $f,g$ de~$\mathscr F(\mathscr P;A)$,
l'équivalence entre les relations $g=\mathbf 1*f$ et $f=\mu*g$;
autrement dit:
\[ g(x) = \sum_{x\preceq y} f(y) \quad\Leftrightarrow\quad
f(x) = \sum_{x\preceq y} \mu(x,y) g(y) .
\]
Lorsque, de plus, $\mathscr P$ est un treillis, on a la relation
(\emph{théorème de Weisner}):
\begin{equation}
\sum_{\sup(x,a)=\sup(\mathscr P)} \mu(\inf(\mathscr P),x) = 0
\end{equation}
pour tout $a\in\mathscr P$ distinct de $\inf(\mathscr P)$.
Et si, de plus, ce treillis est sous-modulaire, le signe
de la fonction de Möbius est donné par :
\begin{equation}\label{eq.rk-rota}
(-1)^{\rk(y)-\rk(x)} \mu(x,y) \geq 0
\end{equation}
pour $x,y\in\mathscr P$ tels que $x\preceq y$.
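À titre d'illustration — il s'agit d'une vérification numérique élémentaire, étrangère au texte — on peut calculer la fonction de Möbius du treillis booléen des parties de $\{0,1,2\}$ par la récurrence qui la caractérise, puis vérifier sur cet exemple la formule d'inversion, le théorème de Weisner et le signe prédit par l'inégalité~\eqref{eq.rk-rota} :

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = subsets({0, 1, 2})          # treillis booléen B_3, ordonné par inclusion
leq = lambda x, y: x <= y       # la relation d'ordre est l'inclusion

# Fonction de Möbius, par la récurrence mu(x,x)=1 et somme des mu(x,z)=0 si x<y
def mobius(x, y):
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in P if leq(x, z) and leq(z, y) and z != y)

# Sur B_3, mu(x,y) = (-1)^{|y|-|x|} : le signe est bien (-1)^{rk(y)-rk(x)}
assert all(mobius(x, y) == (-1) ** (len(y) - len(x))
           for x in P for y in P if leq(x, y))

# Formule d'inversion de Möbius : g = 1*f  <=>  f = mu*g
f = {x: len(x) ** 2 + 1 for x in P}                    # fonction test arbitraire
g = {x: sum(f[y] for y in P if leq(x, y)) for x in P}
f2 = {x: sum(mobius(x, y) * g[y] for y in P if leq(x, y)) for x in P}
assert f == f2

# Théorème de Weisner : somme des mu(0,x) sur {x : sup(x,a) = 1} est nulle
bot, top = frozenset(), frozenset({0, 1, 2})
for a in P:
    if a != bot:
        assert sum(mobius(bot, x) for x in P if x | a == top) == 0
print("inversion, Weisner et signe de Rota vérifiés sur B_3")
```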
\subsection{}
Le polynôme caractéristique d'un matroïde~$M$ s'exprime en termes
de son treillis des plats, par la relation :
\begin{equation}\label{eq.chiM}
\chi_M(T) = \sum_{P\in\mathscr P_M} \mu(\emptyset,P) T^{\cork_M(P)}.
\end{equation}
Cette relation permet aussi de supposer,
dans les questions relatives au polynôme caractéristique~$\chi_M$,
que le matroïde~$M$ est une géométrie combinatoire.
Jointe aux inégalités~\eqref{eq.rk-rota},
elle prouve enfin que les entiers~$w_k(M)$
introduits dans le corollaire~\ref{coro.welsh}
sont positifs ou nuls.
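La relation~\eqref{eq.chiM} se vérifie directement sur un petit exemple (calcul élémentaire, qui ne figure pas dans le texte) : pour le matroïde uniforme $U_{2,3}$, le treillis des plats est formé de l'ensemble vide, des trois singletons et de l'ensemble tout entier, et l'on retrouve $\chi_M(T)=(T-1)(T-2)$ :

```python
# Matroïde uniforme U_{2,3} sur E = {0,1,2} :
# plats = l'ensemble vide, les trois singletons, et E tout entier.
flats = [frozenset(), frozenset({0}), frozenset({1}), frozenset({2}),
         frozenset({0, 1, 2})]
rk = {P: min(len(P), 2) for P in flats}      # rang d'un plat de U_{2,3}
r_top = 2                                    # rk_M(M)
bot = frozenset()

def mobius(x, y):
    # récurrence définissant la fonction de Möbius du treillis des plats
    if x == y:
        return 1
    return -sum(mobius(x, z) for z in flats if x <= z < y)

chi = [0] * (r_top + 1)                      # chi[k] = coefficient de T^k
for P in flats:
    chi[r_top - rk[P]] += mobius(bot, P)     # contribution T^{cork_M(P)}

assert chi == [2, -3, 1]                     # chi_M(T) = T^2 - 3T + 2
w = [abs(chi[r_top - k]) for k in range(r_top + 1)]
assert w == [1, 3, 2]                        # nombres de Whitney, bien >= 0
print("chi_M(T) = T^2 - 3T + 2 = (T-1)(T-2)")
```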
\subsection{}
Soit $M$ une géométrie combinatoire de rang~$\geq 1$ sur un ensemble~$E$;
posons $r=\rk_M(M)-1$ et $n=\Card(E)-1$.
On note $N\simeq \Z^n$ le quotient du $\Z$-module libre~$\Z^E$,
de base canonique $(e_i)_{i\in E}$
par le sous-module engendré par $\sum_{i\in E} e_i$; pour $i\in E$,
on note encore $e_i$ l'image dans~$N$ de l'élément correspondant de~$\Z^E$.
Pour toute partie~$I$ de~$E$, on pose $e_I=\sum_{i\in I} e_i$.
Dans tout ce texte, un \emph{drapeau} de plats de~$M$
sera une famille totalement ordonnée de plats de~$M$
distincts de~$\emptyset$ et~$\abs M$.
Si $\mathscr D$ est un drapeau,
on note alors~$\sigma_{\mathscr D}$ le cône de~$N_\R$
engendré par les vecteurs $e_{P}$, pour $P\in\mathscr D$;
il est de dimension~$\Card(\mathscr D)$.
Notons~$\Sigma_M$ l'ensemble de ces cônes:
c'est l'\emph{éventail de Bergman} du matroïde~$M$
\citep{ardila-klivans2006}.
\begin{prop}
L'ensemble $\Sigma_M$ est un éventail unimodulaire de~$N_\R$,
purement de dimension~$r$.
\end{prop}
Que $\Sigma_M$ soit un \emph{éventail} signifie qu'il n'est pas vide,
que toute face d'un cône de~$\Sigma_M$ appartient à~$\Sigma_M$
et que l'intersection de deux cônes de $\Sigma_M$
est une face commune de ces deux cônes;
qu'il soit \emph{unimodulaire} signifie que les cônes de~$\Sigma_M$
sont engendrés par une partie d'une base de~$N$;
qu'il soit \emph{purement de dimension~$r$} signifie que
tout cône maximal de~$\Sigma_M$ est de dimension~$r$.
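Pour fixer les idées — énumération élémentaire, étrangère au texte — décrivons $\Sigma_M$ lorsque $M=U_{2,3}$ : les plats propres non vides sont les trois singletons, deux à deux incomparables, de sorte que l'éventail est formé de l'origine et de trois rayons (la « droite tropicale ») :

```python
from itertools import combinations

# M = U_{2,3} : plats propres non vides = les trois singletons ; r = rk(M)-1 = 1.
proper_flats = [frozenset({i}) for i in range(3)]

# Un drapeau est une famille totalement ordonnée (par inclusion) de plats propres ;
# deux singletons distincts étant incomparables, les drapeaux sont
# le drapeau vide et les trois drapeaux à un élément.
flags = [[]] + [[P] for P in proper_flats]
for P, Q in combinations(proper_flats, 2):
    assert not (P <= Q or Q <= P)      # incomparables : pas de drapeau {P, Q}

# dim(sigma_D) = Card(D) ; les cônes maximaux sont les trois rayons e_P
dims = [len(D) for D in flags]
assert max(dims) == 1 and dims.count(1) == 3
print("Sigma_M pour U_{2,3} : l'origine et trois rayons (droite tropicale)")
```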
À tout éventail~$\Sigma$
est classiquement associée une \emph{$K$-variété torique} $X_\Sigma$,
aussi notée~$X(\Sigma)$,
obtenue en recollant les variétés
affines $X_\sigma = \Spec( K[N^\vee \cap \sigma^\circ])$,
pour $\sigma$ parcourant~$\Sigma$,
où $\sigma^\circ$ désigne l'ensemble des formes linéaires
sur~$N_\R$ qui sont positives en tout point de~$\sigma$;
je renvoie par exemple à~\citep{fulton1993} pour plus de détails
sur cette théorie.
Noter que si $\tau$ est une face de~$\sigma$,
$X_\tau$ est un ouvert de~$X_\sigma$ pour la topologie de Zariski.
Soit $\sigma$ un cône de~$\Sigma$. Si $\sigma$ est engendré
par une partie d'une base de~$N$, la variété~$X_\sigma$
est isomorphe
à $\A^{\dim(\sigma)} \times \mathbf G_{\mathrm m}^{n-\dim(\sigma)}$,
donc un ouvert de~$\A^n$.
Si l'éventail~$\Sigma$ est unimodulaire,
cette hypothèse est donc vérifiée pour tout cône,
de sorte que la variété torique~$X_\Sigma$ est lisse.
Nous noterons $X_M$ la variété torique lisse associée à l'éventail~$\Sigma_M$.
\begin{rema}
Soit $K$ un corps.
Supposons que le matroïde~$M$ soit associé à l'arrangement
d'hyperplans d'un sous-espace projectif~$V$ de~$\P_{n}$
de dimension~$r$ découpé par les hyperplans de coordonnées
de~$\P_{n}$ (sur le corps~$K$).
Le support~$\abs{\Sigma_M}$ de l'éventail~$\Sigma_M$,
réunion des cônes de~$\Sigma_M$,
s'interprète alors comme la \emph{tropicalisation}
de $V\cap\mathbf G_{\mathrm m}^n$.
Il y a plusieurs façons de définir cette tropicalisation.
Munissons le corps~$K$ de la valeur absolue triviale
et considérons le tore $K$-analytique $(\gm^n)^\an$,
au sens de~\citet{berkovich1990},
c'est-à-dire l'espace
des semi-normes multiplicatives sur la $K$-algèbre
$K[T_1^{\pm1},\dotsc,T_n^{\pm 1}]$ qui sont triviales sur~$K$.
Il est muni d'une application continue et propre de tropicalisation,
$\tau\colon (\gm^n)^\an\to\R^n$
qui applique une semi-norme $\abs{\cdot}$
sur $(\log(\abs{T_1}),\dotsc,\log(\abs{T_n}))$.
L'espace de Berkovich $(V\cap\gm^n)^\an$
de~$V\cap\gm^n$ est un sous-espace de~$(\gm^n)^\an$
et son image par~$\tau$ est égale à~$\abs{\Sigma_M}$.
Peut-être plus élémentairement \citep*[voir][]{einsiedler-kapranov-lind2006},
considérons une extension
algébriquement close~$L$ de~$K$ munie d'une valeur absolue
non archimédienne non triviale, mais triviale sur~$K$,
par exemple une clôture algébrique du
corps $K(\!(z)\!)$ des séries de Laurent.
Alors, $\abs{\Sigma_M}$ est l'\emph{adhérence} de l'image
de~$V(L)\cap (L^\times)^n$
par l'application $(a_1,\dotsc,a_n)\mapsto (\log(\abs{a_1}),\dotsc,\log(\abs{a_n}))$ de~$(L^\times)^n$ dans~$\R^n$.
On peut enfin prendre $K=\C$
et considérer, pour tout nombre réel~$\eps>1$,
l'application $\lambda_\eps\colon (\C^\times)^n\to\R^n$
donnée par $\lambda_\eps(z_1,\dotsc,z_n)=(\log_\eps(\abs{z_1}),\dotsc,\log_\eps(\abs{z_n}))$ (logarithme en base~$\eps$).
Lorsque $\eps$ tend vers~$+\infty$, $\lambda_\eps(V\cap (\C^\times)^n)$
converge vers~$\abs{\Sigma_M}$.
\end{rema}
\subsection{}
Soit $N$ un $\Z$-module libre de rang fini,~$n$,
et soit $\Sigma$ un éventail unimodulaire de~$N$;
notons $r$ la borne supérieure (dans~$\N$)
des dimensions des cônes de~$\Sigma$.
Soit $K$ un corps ; considérons
la $K$-variété torique $X_\Sigma$ associée à l'éventail~$\Sigma$,
et notons $A(X_\Sigma)$ son anneau de Chow.
On note $V_\Sigma$ l'ensemble des \emph{rayons} de~$\Sigma$,
c'est-à-dire des générateurs primitifs des cônes
de dimension~$1$ de~$\Sigma$. À tout~$v\in V_\Sigma$
correspond un diviseur de Cartier irréductible~$D_v$ de~$X_\Sigma$.
Soit $S_\Sigma=\Z[(T_v)_{v\in V_\Sigma}]$
l'anneau gradué des polynômes à coefficients
entiers et à indéterminées dans~$V_\Sigma$.
Pour tout entier~$k$,
on note~$S^k_\Sigma$ sa composante homogène de degré~$k$.
Pour tout cône~$\sigma$ de~$\Sigma$, on
pose $T_\sigma=\prod_{v\in\sigma\cap V_\Sigma} T_v$;
c'est un monôme de degré~$\dim(\sigma)$.
Pour tout entier~$k$, on définit un sous-module
\[ Z^k(\Sigma)=\bigoplus_{\substack{\sigma\in\Sigma \\ \dim(\sigma)=k}}
\Z T_\sigma \ ; \]
c'est un sous-module de $S^k_\Sigma$, nul pour $k>r$.
On pose aussi $Z(\Sigma)=\bigoplus_{k\in\N} Z^k(\Sigma)$.
Soit $I_\Sigma$ l'idéal homogène de~$S_\Sigma$ engendré par les monômes
qui n'appartiennent pas à~$Z(\Sigma)$
et $J_\Sigma$ l'idéal de~$S_\Sigma$ engendré par les polynômes linéaires
$ \sum_{v\in V_\Sigma} \phi(v) T_v $,
pour $\phi\in N^\vee$.
\begin{prop}
L'unique homomorphisme d'anneaux de~$S_\Sigma$ dans~$A(X_\Sigma)$
qui applique $T_v$, pour tout~$v\in V_\Sigma$, sur la classe de~$D_v$
dans~$A^1(X_\Sigma)$ est surjectif.
Son noyau est l'idéal $I_\Sigma+J_\Sigma$.
\end{prop}
L'anneau gradué quotient $A(\Sigma)=S_\Sigma/(I_\Sigma+J_\Sigma)$
sera ainsi appelé
l'\emph{anneau de Chow} de l'éventail~$\Sigma$.
Pour tout entier~$k$, on note $A^k(\Sigma)$
sa composante homogène de degré~$k$;
elle est engendrée par $Z^k(\Sigma)$.
Pour $k>\dim(\Sigma)$, on a $A^k(\Sigma)=0$.
\begin{rema}
On note $\PL(\Sigma)$ (resp. $\PP(\Sigma)$)
l'espace vectoriel (resp. la $\R$-algèbre)
des fonctions de~$\abs\Sigma$ dans~$\R$
dont la restriction à tout cône de~$\Sigma$ est linéaire
(resp. polynomiale); une telle fonction est continue.
On dit, par abus, que ce sont les fonctions
linéaires (resp. polynomiales) par morceaux sur~$\abs\Sigma$.
Pour $v\in V_\Sigma$, il existe
une unique fonction $\phi_v\in\PL(\Sigma)$
telle que pour tout $w\in V_\Sigma$,
on a $\phi_v(w)=1$ si $w=v$, et $\phi_v(w)=0$ sinon.
Ces fonctions~$\phi_v$, parfois appelées \emph{fonctions de Courant},
forment une base de~$\PL(\Sigma)$.
D'après~\citep{billera1989},
l'unique homomorphisme de~$\R\otimes S_\Sigma$ dans~$\PP(\Sigma)$
qui applique $T_v$ sur~$\phi_v$, pour tout $v\in V_\Sigma$,
est surjectif; son noyau est engendré par
les monômes $T_{v_1}\dotsm T_{v_d}$, tels que $v_1,\dotsc,v_d$
n'engendrent pas un cône de~$\Sigma$.
Par suite, $A(\Sigma)_\R$ est le quotient de $\PP(\Sigma)$
par l'idéal engendré par les restrictions à~$\abs\Sigma$
des formes linéaires sur~$N_\R$.
\end{rema}
\begin{lemm}
L'application de~$\PL(\Sigma)$ dans~$A^1(X_\Sigma)_\R$
qui, pour tout $v\in V_\Sigma$, applique~$\phi_v$
sur la classe de~$D_v$, est surjective.
Son noyau est le sous-espace de~$\PL(\Sigma)$ engendré
par les restrictions à~$\abs\Sigma$ des formes linéaires
sur~$N_\R$.
\end{lemm}
À une fonction linéaire par morceaux~$\phi\in\PL(\Sigma)$, on
associe le $\R$-diviseur (invariant par l'action du tore)
$\div(\phi)=\sum_{v\in V_\Sigma} \phi(v) D_v$.
Ce diviseur est effectif si $\phi$ est positive sur~$\abs\Sigma$.
Plus généralement,
soit $\sigma$ un cône de~$\Sigma$
et soit $V_\sigma\subset X_\Sigma$ l'adhérence
de l'orbite du tore qui lui correspond
(on a $\dim(V_\sigma)=\codim(\sigma)$).
On dit que $\phi$ est \emph{convexe} en~$\sigma$
si la classe du diviseur $\div(\phi)|_{V_\sigma}$
sur~$V_\sigma$ est effective.
Cela revient à dire qu'il existe une forme linéaire~$m$
sur~$N_\R$ telle que $\phi=m$ sur~$\sigma$,
et telle que $\phi\geq m$
sur l'\emph{étoile} de~$\sigma$,
c'est-à-dire sur le sous-éventail $\star_\Sigma(\sigma)$
de~$\Sigma$ formé des cônes de~$\Sigma$ contenant une face de~$\sigma$.
On dit que $\phi$ est strictement convexe en~$\sigma$
s'il existe une forme linéaire~$m$ sur~$N_\R$
telle que $\phi=m$ sur~$\sigma$
et telle que $\phi(v)>m(v)$ pour tout rayon~$v$ de l'étoile de~$\sigma$
qui n'appartient pas à~$\sigma$.
L'ensemble des fonctions~$\phi\in\PL(\Sigma)$ qui sont convexes
(resp. strictement convexes)
en tout cône~$\sigma\in\Sigma$ est
un cône (resp. un cône ouvert) de~$\PL(\Sigma)$;
on l'appelle le cône \emph{nef} (resp. le cône \emph{ample})
de~$\PL(\Sigma)$ et on le note $\mathscr N_\Sigma$
(resp. $\mathscr K_\Sigma$).
Ces cônes contiennent l'image du dual de~$N_\R$ ;
on désigne par les mêmes lettres leurs images dans~$A^1(X_\Sigma)_\R$.
Si $\mathscr K_\Sigma$ n'est pas vide,
c'est l'intérieur de $\mathscr N_\Sigma$, lequel est son adhérence.
Cela se produit en particulier lorsque~$\Sigma$ est l'éventail
associé à un matroïde ; en effet, $\Sigma$
est alors un sous-éventail de l'éventail normal d'un polytope.
\subsection{}
Supposons que $\Sigma$ soit l'éventail~$\Sigma_M$
associé au matroïde~$M$. On notera alors $A(M)$, $S_M$, etc.
les objets $A(\Sigma_M)$, $S_{\Sigma_M}$, etc.
Dans ce cas, les rayons de~$\Sigma_M$
sont les vecteurs~$e_P$, où $P$ parcourt l'ensemble~$\mathscr P^*_M$ des plats de~$M$ tels que $P\neq\emptyset$ et $P\neq \abs M$.
On a donc $S_M=\Z[(T_P)_{P\in\mathscr P^*_M}]$.
L'idéal~$I_M$ est engendré par les monômes quadratiques
$T_P T_Q$, où $(P,Q)$ parcourt l'ensemble des couples d'éléments
incomparables de~$\mathscr P^*_M$,
tandis que l'idéal~$J_M$ est engendré par les polynômes linéaires
\[ \sum_{P\ni i} T_P - \sum_{P\ni j} T_P, \]
où $(i,j)$ parcourt l'ensemble des couples d'éléments de~$\abs M$.
On a $A^k(M)=0$ pour $k>r$.
On définit deux éléments $\alpha_M$ et $\beta_M$ de $A^1(M)$ par
\[ \alpha_M = \sum_{P\ni i} T_P, \qquad \beta_M =\sum_{P\not\ni i} T_P, \]
où $i$ est un élément arbitraire de~$\abs M$ (ils n'en dépendent pas).
Si $\mathscr D$ est un drapeau de plats donné,
on voit,
en choisissant~$i$ hors de $\sup(\mathscr D)$, resp. dans $\inf(\mathscr D)$,
que leurs images dans $A^1(M)_\R$
appartiennent au cône nef~$\mathscr N_M$.
\begin{prop}\phantomsection\label{prop.deg-muk}
\begin{enumerate}\def\theenumi{\alph{enumi}}\def\labelenumi{(\theenumi)}
\item
Il existe un unique homomorphisme de groupes
\[ \deg\colon A^r(M)\to\Z \]
tel que $\deg(T_{P_1}\dotsm T_{P_r})=1$ pour toute suite
$(P_1,\dotsc,P_r)$ de plats de~$M$ vérifiant
$\emptyset \subsetneq P_1\subsetneq\dotsb\subsetneq P_r\subsetneq \abs M$.
C'est un isomorphisme.
De plus, $\deg(\alpha_M^r)=1$.
\item
Pour tout entier $k$ tel que $0\leq k\leq r$, on a
\[ \mu^k(M) = \deg(\alpha_M^{r-k}\beta_M^k). \]
\end{enumerate}
\end{prop}
La deuxième partie de la proposition est
la généralisation, dans le contexte combinatoire,
du théorème~\ref{theo.hk}.
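À titre d'exemple élémentaire (calcul qui n'est pas tiré du texte), explicitons ces formules pour la géométrie combinatoire $M=U_{2,3}$, de rang~$2$, donc $r=1$ :

```latex
% M = U_{2,3} : plats propres \{a\}, \{b\}, \{c\}, et r = 1.
% Les relations linéaires (idéal J_M) donnent t_{\{a\}}=t_{\{b\}}=t_{\{c\}}=:t,
% et \deg(t)=1, chaque plat propre formant à lui seul un drapeau maximal.
\[
  \alpha_M = \sum_{P\ni a} t_P = t, \qquad
  \beta_M  = \sum_{P\not\ni a} t_P = t_{\{b\}}+t_{\{c\}} = 2t,
\]
\[
  \mu^0(M) = \deg(\alpha_M) = 1, \qquad
  \mu^1(M) = \deg(\beta_M) = 2,
\]
% en accord avec le polynôme caractéristique réduit
% \overline\chi_M(T) = \chi_M(T)/(T-1) = T-2.
```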
\subsection{}
Soit $A$ une $\Z$-algèbre commutative unifère, graduée, artinienne,
et soit $r$ un entier; on suppose que $A^k=0$ pour $k<0$ ou $k>r$
et on se donne un homomorphisme $\deg\colon A^r\to\Z$.
On dit que $(A,\deg)$ vérifie la \emph{dualité de Poincaré}
si pour tout entier~$k$ tel que $0\leq k\leq r$,
l'application
$a\mapsto (b\mapsto \deg(ab))$ est un isomorphisme de~$A^k$
sur $(A^{r-k})^\vee$.
Soit $\ell$ un élément de $A^1_\R$ et soit $k$ un entier
tel que $k\leq r/2$.
L'application de Lefschetz associée à~$\ell$
est l'application linéaire $\lambda^k\colon a\mapsto \ell^{r-2k} a$
de $A^k_\R$ dans~$A^{r-k}_\R$.
On définit aussi une forme bilinéaire symétrique $Q^k_\ell$ sur~$A^k_\R$
par
\[ Q^k_\ell (a,b) = (-1)^k \deg( a\, \ell^{r-2k}b). \]
On note $P_\ell^k$ le sous-espace de~$A^k_\R$
formé des~$a\in A^k_\R$ tels que $\ell^{r+1-2k}a=0$.
Si $k>r/2$, on pose $P_\ell^k=0$.
On dit que $(A_\R,\ell)$ vérifie le théorème de Lefschetz difficile
si $\lambda^k$ est un isomorphisme pour tout entier~$k$
tel que $0\leq k\leq r/2$.
Alors, pour tout entier~$k$ tel que $0\leq k\leq r/2$,
l'espace~$A^k_\R$ admet la \emph{décomposition de Lefschetz}:
\[ A^k_\R = P^k_\ell \oplus \ell P^{k-1}_\ell \oplus \dotsb \oplus \ell^k P_\ell^0\ ;\]
c'est une décomposition orthogonale pour la forme~$Q^k_\ell$.
Pour tout entier~$k$ tel que $r/2<k\leq r$,
on obtient une décomposition similaire
en écrivant $A^k_\R=\ell^{2k-r} A^{r-k}_\R$:
\[ A^k_\R = \ell^{2k-r} P^{r-k}_\ell \oplus \ell^{2k-r-1} P^{r-k-1}_\ell \oplus \dotsb \oplus \ell^k P_\ell^0.\]
On dit que $(A_\R,\ell)$ vérifie les \emph{relations de Hodge--Riemann}
si la restriction à~$P^k_\ell$
de la forme quadratique associée à~$Q^k_\ell$ est définie positive
pour tout entier~$k$ tel que $0\leq k\leq r/2$.
\begin{exem}
La terminologie provient bien sûr des propriétés de l'algèbre
de cohomologie d'une variété complexe compacte (connexe) kählérienne.
Soit en effet $V$ une telle variété, soit $n$
sa dimension et soit $H(V)$
l'algèbre réelle graduée, gr-commutative, de cohomologie de De Rham.
On a $H^k(V)=0$ si $k>2n$.
La décomposition de Hodge munit $H(V)_\C$ d'une
bigraduation canonique $H^k(V)_\C=\smash{\bigoplus\limits_{p+q=k}} H^{p,q}(V)$;
on a $H^{p,q}(V)=0$ si $p$ ou $q$ n'appartient pas à l'intervalle~$[0,n]$.
On dispose d'un isomorphisme canonique $\int\colon H^{2n}(V)\to \R$.
Pour tout entier~$k$,
l'homomorphisme $a \mapsto (b\mapsto \int a\wedge b)$
induit un isomorphisme de $H^{k}(V)$ sur $H^{2n-k}(V)^\vee$
(dualité de Poincaré)
et de $H^{p,q}(V)$ sur $H^{n-p,n-q}(V)^\vee$.
Soit $\ell$ la classe dans $H^{2}(V)$ d'une forme de Kähler sur~$V$;
elle appartient à $H^{1,1}(V)$.
Alors, pour tout entier~$k$ tel que $k\leq n$,
l'application $a\mapsto \ell^{n-k} \wedge a$
de $H^{k}(V)$ dans $H^{2n-k}(V)$ est un isomorphisme
(théorème de Lefschetz difficile). On note alors $P^k(V)$
le sous-espace primitif de~$H^k(V)$,
noyau de $a\mapsto\ell^{n-k+1}\wedge a$.
La forme bilinéaire~$Q^k$ sur~$H^k(V)$
définie par $Q^k(a,b)=\int \ell^{n-k} \wedge a\wedge b$
est symétrique si $k$ est pair, alternée si $k$ est impair,
de sorte que la forme bilinéaire~$R^k$ sur $H^k(V)_\C$
définie par $R^k(a,b)=i^k Q^k(a,\overline b)$ est hermitienne
(forme de Riemann).
Soit $(p,q)$ un couple d'entiers tels que $p+q=k$.
La restriction à $P^k(V)_\C\cap H^{p,q}(V)$
de cette forme hermitienne est définie,
de signe $(-1)^{k(k-1)/2+q}$ (\emph{relations bilinéaires de Hodge--Riemann}).
Dans le cas particulier où~\mbox{$p=q$,}
l'entier $k=2p$ est pair, le coefficient~$i^{k}$
qui intervient dans la définition de~$R^k$ est égal à~$(-1)^p$,
et sur $P^{2p}(V)_\C\cap H^{p,p}(V)$, la forme de Riemann
est définie positive.
Supposons de plus que $V$ soit projective
et notons $C(V)$ l'anneau des classes de cycles pour l'équivalence
homologique. L'homomorphisme
de classe de cycles $C(V)\to H(V)$ est alors injectif,
de sorte que $C(V)$ vérifie les conditions du paragraphe précédent.
Noter que lorsque $V$ est une variété torique projective lisse,
l'équivalence homologique coïncide avec l'équivalence rationnelle
et l'homomorphisme de classe de cycles est un isomorphisme.
\end{exem}
\begin{theo}\label{theo.hl}
Soit $M$ une géométrie combinatoire de rang~$>0$ et soit $r=\rk(M)-1$.
Soit $\ell\in \mathscr K_M$ une classe ample de~$\PL(M)$.
\begin{enumerate}\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item Le couple $(A(M),\deg)$ vérifie la dualité de Poincaré;
\item Le couple $(A(M)_\R,\ell)$ vérifie le théorème de Lefschetz difficile;
\item Le couple $(A(M)_\R,\ell)$ vérifie les relations de Hodge--Riemann.
\end{enumerate}
\end{theo}
C'est en quelque sorte le théorème principal de l'article
d'\citet*{adiprasito-huh-katz2015}. Nous donnerons
des indications de sa preuve dans la section suivante.
Voyons tout de suite comment il entraîne le théorème~\ref{theo.ahk}.
On commence par en déduire le corollaire suivant,
analogue aux inégalités de Khovanskii--Teissier.
\begin{coro}\label{coro.kt}
Soit $\alpha$ et $\beta$ des classes de~$A^1(M)_\R$.
Si $\alpha$ est nef, alors
\[ \deg(\alpha^{r-2}\beta^{2}) \deg(\alpha^r)
\leq \deg(\alpha^{r-1} \beta)^2. \]
\end{coro}
\begin{proof}
Par passage à la limite, il suffit de traiter le cas où $\alpha$
est ample.
La décomposition de Lefschetz $A^1(M)_\R=
P_\alpha^1(M) \oplus \langle \alpha \rangle$
est orthogonale pour la forme
$Q_\alpha^1$ définie par $Q_\alpha^1(x,y)=-\deg(x \alpha^{r-2} y)$,
laquelle est définie positive
sur~$P_\alpha^1(M)$ et définie négative sur~$\langle\alpha\rangle$.
L'inégalité à vérifier est évidente si $\beta$
est proportionnelle à~$\alpha$.
Sinon, le sous-espace $\langle\alpha,\beta\rangle$
est de dimension~$2$ et la restriction de la forme~$Q_\alpha^1$
y est de signature $(1,1)$. Son discriminant est donc négatif,
d'où le corollaire.
\end{proof}
\subsection{Démonstration du théorème~\ref{theo.ahk}}
On prouve d'abord que pour tout $k\in\{0,\dotsc,r\}$,
l'entier~$\mu^k(M)$ est strictement positif,
par exemple en déduisant du théorème de Weisner
qu'il est égal au nombre de « drapeaux initiaux descendants »
de longueur~$k$, c'est-à-dire de familles
$(P_1,\dotsc,P_k)$ de plats de~$M$
tels que $P_1\subset\dotsb\subset P_k$,
$\rk_M(P_j)=j$ pour tout~$j$,
et $\inf(P_1)>\inf(P_2)>\dotsb>\inf(P_k)>0$.
(Noter que ces conditions imposent $P_1\neq\emptyset$ et $P_k\neq\abs M$.)
D'après la proposition~\ref{prop.deg-muk},
on a $\mu^k(M)=\deg(\alpha_M^{r-k}\beta_M^k)$.
Ainsi, lorsque $k=r-1$,
l'inégalité~(ii) du théorème~\ref{theo.ahk}
n'est autre que celle du corollaire~\ref{coro.kt},
appliquée à $\alpha=\alpha_M$ et $\beta=\beta_M$.
Sinon, on remplace~$M$ par la géométrie
combinatoire associée au matroïde tronqué $\tau(M)$
dont le treillis des plats est l'ensemble des plats de~$M$
dont le rang n'appartient pas à~$[k+2,r]$.
Son rang est égal à~$k+2$ et l'on
a $\mu^j(M)=\mu^j(\tau(M))$ pour tout entier~$j$ tel que $j\leq k+1$.
L'inégalité voulue se déduit donc du cas déjà traité.
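La log-concavité se teste numériquement sur un petit exemple représentable — vérification élémentaire, étrangère au texte : pour le matroïde graphique de $K_4$, calculons $\chi_M$ par la formule des rangs de Whitney, $\chi_M(T)=\sum_{S\subseteq E}(-1)^{\Card S}\,T^{\rk(E)-\rk(S)}$, puis vérifions l'inégalité~(ii) sur les coefficients :

```python
from itertools import combinations

# Matroïde graphique de K_4 : E = arêtes, rk(S) = 4 - (composantes de (V, S)).
V = range(4)
E = [(i, j) for i in V for j in V if i < j]           # les 6 arêtes de K_4

def rank(S):
    # rang d'un sous-ensemble d'arêtes, via une union-find élémentaire
    parent = list(V)
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for i, j in S:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    comps = len({find(x) for x in V})
    return len(V) - comps

r = rank(E)                                           # = 3
chi = [0] * (r + 1)                                   # chi[k] = coeff de T^k
for m in range(len(E) + 1):
    for S in combinations(E, m):
        chi[r - rank(S)] += (-1) ** m

assert chi == [-6, 11, -6, 1]        # chi_M(T) = T^3 - 6T^2 + 11T - 6
w = [abs(chi[r - k]) for k in range(r + 1)]           # (1, 6, 11, 6)
# log-concavité : w_{k-1} w_{k+1} <= w_k^2
assert all(w[k - 1] * w[k + 1] <= w[k] ** 2 for k in range(1, r))
print("chi =", chi, "; suite log-concave :", w)
```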
\section{Filtres}
\subsection{}
La démonstration du théorème~\ref{theo.hl} est combinatoire
et les paragraphes qui suivent ne font guère plus qu'en
décrire le cheminement;
je renvoie à~\citep*[\S6--8]{adiprasito-huh-katz2015} pour les détails.
Cette démonstration consiste à partir
de l'éventail de l'espace projectif,
pour lequel la conclusion du théorème est évidente,
et à le modifier progressivement jusqu'à l'éventail~$\Sigma_M$,
de sorte qu'à chaque étape la conclusion du théorème reste vraie.
Ces modifications exigent d'introduire des éventails un peu plus compliqués.
On dira ainsi qu'un éventail~$\Sigma$ vérifie la dualité de Poincaré
en dimension~$r$
s'il existe un isomorphisme $\deg\colon A^r(\Sigma)\to\Z$
tel que l'anneau de Chow $(A(\Sigma),\deg)$ la vérifie.
On dira alors que $\Sigma$ vérifie le théorème de Lefschetz
difficile, resp. les relations de Hodge--Riemann,
si $(A(\Sigma)_\R,\ell)$ les vérifie pour toute classe
ample $\ell\in\mathscr K_\Sigma$.
\subsection{}
Soit $M$ un matroïde sans boucle et soit $\mathscr P_M$
le treillis de ses plats. On suppose que $M$ est de rang~$>0$
et on pose $r=\rk(M)-1$.
On note $\inf(\mathscr D)$ l'intersection, dans~$\abs M$,
des éléments d'un drapeau $\mathscr D$ de plats de~$M$;
si $\mathscr D=\{P_1,\dotsc,P_d\}$,
où $\emptyset\subsetneq P_1\subsetneq\cdots\subsetneq P_d \subsetneq \abs M$,
on a donc $\inf(\mathscr D)=\abs M$ lorsque $d=0$, et $\inf(\mathscr D)=P_1$ sinon.
Soit $\mathscr D$ un drapeau de plats de~$M$ et soit $I$ une partie de~$\abs M$.
On dit que $I$ et~$\mathscr D$ sont compatibles, et on note $I<\mathscr D$,
si $I$ est une partie stricte de~$\inf(\mathscr D)$.
On dit qu'ils sont compatibles vis-à-vis de~$M$, et on note $I<_M\mathscr D$,
si $\Card(I)<\rk_M(\inf(\mathscr D))$; cela implique qu'ils sont compatibles,
car le rang d'un plat est majoré par son cardinal.
Soit $\mathscr D$ un tel drapeau et soit $I$ une partie de~$\inf(\mathscr D)$.
On note $\sigma_{I,\mathscr D}$ le cône engendré par
les vecteurs~$e_i$, pour $i\in I$, et les vecteurs~$e_P$, pour $P\in\mathscr D$.
Si $I$ et $\mathscr D$ sont compatibles, c'est un cône
de dimension~$\Card(I)+\Card(\mathscr D)$.
\subsection{}
On appelle \emph{filtre}\footnote{%
La terminologie est un peu trompeuse car on n'impose
pas l'hypothèse de stabilité par~$\wedge$;
dans le cas du treillis des parties d'un ensemble,
une telle partie n'est pas nécessairement un filtre au sens usuel.}
de plats sur~$M$
une partie non vide~$\mathscr P$ de~$\mathscr P_M$,
ne contenant pas~$\emptyset$,
qui contient tout plat de~$\abs M$ contenant un élément de~$\mathscr P$.
Soit $\mathscr P$ un filtre de plats de~$M$.
L'\emph{éventail de Bergman} associé à $(M,\mathscr P)$
est l'ensemble~$\Sigma_{M,\mathscr P}$
des cônes~$\sigma_{I,\mathscr D}$,
où $\mathscr D$ parcourt l'ensemble des drapeaux de
plats de~$M$ appartenant à~$\mathscr P$ et
$I$ parcourt l'ensemble des parties de~$\abs M$ compatibles avec~$\mathscr D$
et telles que $\langle I\rangle \not\in\mathscr P$.
L'\emph{éventail de Bergman réduit} $\widetilde\Sigma_{M,\mathscr P}$
est l'ensemble de ces cônes, où l'on exige en outre
que $I$ et $\mathscr D$ soient compatibles vis-à-vis de~$M$.
Si $(I_1,\mathscr D_1)$ et $(I_2,\mathscr D_2)$ sont
des couples tels que $I_1<\mathscr D_1$ et $I_2<\mathscr D_2$,
alors $I_1\cap I_2 <\mathscr D_1\cap\mathscr D_2$,
et de même pour la relation~$<_M$.
De plus, on a $\sigma_{I_1,\mathscr D_1}\cap \sigma_{I_2,\mathscr D_2}
= \sigma_{I_1\cap I_2,\mathscr D_1\cap\mathscr D_2}$.
Cela entraîne que $\Sigma_{M,\mathscr P}$ et $\widetilde \Sigma_{M,\mathscr P}$
sont effectivement des éventails de~$N_\R$.
Lorsque $\mathscr P=\mathscr P_M\setminus\{\emptyset\}$,
la condition $\langle I\rangle \not\in\mathscr P$ équivaut à $I=\emptyset$,
et la condition $I\subsetneq \inf(\mathscr D)$ est vérifiée
car $\inf(\mathscr D)$ n'est pas vide.
Dans ce cas, les deux éventails $\Sigma_{M,\mathscr P}$
et $\widetilde\Sigma_{M,\mathscr P}$ coïncident avec l'éventail~$\Sigma_M$.
Lorsque $M$ est le matroïde booléen sur~$\abs M$
(c'est-à-dire dont toute partie est plate), l'éventail $\Sigma_{M,\mathscr P}$
est l'éventail d'un polytope qui est obtenu par subdivisions étoilées
successives à partir d'un simplexe
\citep*[prop.~2.4]{adiprasito-huh-katz2015}.
\begin{lemm}\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\begin{enumerate}
\item
L'éventail $\Sigma_{M,\mathscr P}$ est un sous-éventail
de l'éventail normal d'un polytope; en particulier, son cône
ample n'est pas vide.
\item
L'éventail $\widetilde\Sigma_{M,\mathscr P}$ est purement de dimension~$r$.
\end{enumerate}
\end{lemm}
\subsection{}
L'éventail $\Sigma_{M,\mathscr P}$
a pour rayons, d'une part, les vecteurs~$e_i$,
pour $i\in\abs M$ tel que $\langle i\rangle \not\in \mathscr P$,
et, d'autre part, les vecteurs~$e_{P}$, où $P\in\mathscr P\setminus\{\abs M\}$.
Son anneau de Chow $A(\Sigma_{M,\mathscr P})$, noté aussi $A(M,\mathscr P)$,
est ainsi le quotient
de l'anneau de polynômes $S_{M,\mathscr P}$ en des
indéterminées~$T_i$ (pour $i\in\abs M$) et $T_P$ (pour $P\in\mathscr P
\setminus\{\abs M\}$)
par l'idéal engendré par les éléments du type suivant:
\begin{enumerate}
\def\theenumi{$R_{\arabic{enumi}}$}\def\labelenumi{(\theenumi)}
\item $T_{P_1} T_{P_2}$, où $P_1,P_2\in\mathscr P$
sont deux plats incomparables;
\item $T_i T_P$, où $P\in\mathscr P$ et $i\in \abs M\setminus P$;
\item $\prod_{i\in I} T_i$, où $I$ est une partie libre non vide de~$\abs M$
telle que $\langle I\rangle \in\mathscr P$;
\item $(T_i+\sum_{P\ni i}T_P)-(T_j+\sum_{P\ni j}T_P)$,
où $i,j$ sont des éléments de~$\abs M$, distincts.
\end{enumerate}
Pour $i\in \abs M$ et $P\in\mathscr P$,
on notera $t_i$, resp. $t_P$, la classe de~$T_i$,
resp. de~$T_P$, dans l'anneau~$A(\Sigma_{M,\mathscr P})$.
L'éventail $\widetilde\Sigma_{M,\mathscr P}$
est un sous-éventail
de $\Sigma_{M,\mathscr P}$.
Son anneau de Chow $A(\widetilde\Sigma_{M,\mathscr P})$
est donc un quotient de l'anneau $A(\Sigma_{M,\mathscr P})$.
En vérifiant que les relations supplémentaires
découlent de celles imposées dans $A(\Sigma_{M,\mathscr P})$,
on démontre la proposition suivante.
\begin{prop}
The inclusion of $X(\widetilde\Sigma_{M,\mathscr P})$ into
$X(\Sigma_{M,\mathscr P})$ induces an isomorphism
from $A(\Sigma_{M,\mathscr P})$ onto $A(\widetilde\Sigma_{M,\mathscr P})$.
In particular, $A^k(\Sigma_{M,\mathscr P})=0$ for $k>r$.
\end{prop}
\begin{exem}
Suppose that $\mathscr P=\{\abs M\}$; to simplify notation,
set $\Sigma=\Sigma_{M,\{M\}}$
and $\widetilde\Sigma=\widetilde\Sigma_{M,\{M\}}$.
The rays of the fan~$\Sigma$ are the vectors~$e_i$,
for $i\in \abs M$.
More generally, a cone~$\sigma_I$ belongs to~$\Sigma$
if and only if $\langle I\rangle \neq\abs M$;
it belongs to~$\widetilde\Sigma$ if and only if $\Card( I)\leq r$.
Setting $n=\Card(\abs M)-1$,
one thus sees that the support of~$\widetilde\Sigma$ is the union
of the cones of dimension~$\leq r$ of the fan of
projective space~$\P_n$.
The Chow ring $A(M,\{M\})$ therefore has
no generator of the form~$T_P$;
the relations of types~($R_1$) and~($R_2$) are then trivial.
The relations of type~($R_4$) imply that
the~$T_i$, for all~$i\in\abs M$, have the same image in~$A(M,\{\abs M\})$;
denote this image by~$t$.
The relations of type~($R_3$) then read $t^{r+1}=0$.
In other words,
\[ A(M,\{\abs M\}) = \Z[t]/(t^{r+1}); \]
this is therefore the Chow ring of projective space~$\P_r$,
and it obviously satisfies the conclusion of Theorem~\ref{theo.hl}.
\end{exem}
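As a toy check of this example (my own illustration, not from the paper), one can realize $\Z[t]/(t^{r+1})$ by truncated polynomial arithmetic and verify directly that Poincaré duality and the hard Lefschetz maps hold in this trivial case:

```python
r = 4  # the dimension; A(M, {|M|}) = Z[t]/(t^{r+1})

def mult(f, g):
    """Multiply coefficient lists of length r+1, truncating t^{r+1} = 0."""
    h = [0] * (r + 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j <= r:
                h[i + j] += a * b
    return h

def t_power(k):
    """The basis element t^k of A^k."""
    return [1 if i == k else 0 for i in range(r + 1)]

# t^{r+1} = 0 in the quotient
assert mult(t_power(r), t_power(1)) == [0] * (r + 1)

# Poincaré duality: the pairing A^k x A^{r-k} -> A^r ~ Z is perfect
for k in range(r + 1):
    assert mult(t_power(k), t_power(r - k))[r] == 1

# hard Lefschetz: multiplication by t^{r-2k} maps A^k isomorphically to A^{r-k}
for k in range(r // 2 + 1):
    assert mult(t_power(r - 2 * k), t_power(k)) == t_power(r - k)
```

Every graded piece is of rank one here, so both statements reduce to $t^k\cdot t^{r-k}=t^r\neq 0$.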
\subsection{}\label{ss.flip}
With these notations in place,
the proof of Theorem~\ref{theo.hl}
proceeds by induction and
consists in examining the behaviour
of the ring $A({M,\mathscr P})$
when one adjoins to the filter~$\mathscr P$ a flat~$P$
which is maximal in~$\mathscr P_M\setminus \mathscr P$.
In the paper~\citep{adiprasito-huh-katz2015},
the resulting modification of fans is called a \emph{matroidal flip}
with center~$P$.
It amounts to removing from~$\Sigma_{M,\mathscr P}$
the cones $\sigma_{I,\mathscr D}$
such that
$I<\mathscr D$, $\langle I\rangle=P$ and $\inf(\mathscr D)\neq P$,
and adding to it the cones $\sigma_{I,\mathscr D}$
where $I<\mathscr D$, $\langle I\rangle\neq P$ and $\inf(\mathscr D)=P$.
Set $\mathscr P'=\mathscr P\cup\{P\}$.
The flat~$P$ is minimal in~$\mathscr P'$.
Recall also that $M\contr P$ (resp. $M\mathord{\,|\,} P$)
denotes the matroid whose flats are those of~$M$
containing~$P$ (resp. contained in~$P$).
One argues by induction, assuming the conclusion
of Theorem~\ref{theo.hl} to hold for
every ring of the form $A(M_1,\mathscr P_1)$
such that either $\rk(M_1)< \rk(M)$,
or $\rk(M_1)=\rk(M)$ and $\Card(\mathscr P_1) < \Card(\mathscr P)$.
\begin{prop}
There is a unique ring homomorphism
\[ \Phi_P\colon A(M,\mathscr P)\to A(M,\mathscr P') \]
such that $\Phi_P(t_Q)=t_Q$ for every $Q\in\mathscr P\setminus\{\abs M\}$,
and such that $\Phi_P(t_i)=t_i+t_P$ if $i\in P$,
and $\Phi_P(t_i)=t_i$ otherwise.
If $\rk_M(P)=1$, it is an isomorphism.
\end{prop}
\begin{prop}
Let $p$ be an integer~$\geq 1$.
\begin{enumerate}\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item
There is a unique group homomorphism
$\Psi_P^p\colon A(M\contr P)\to A(M,{\mathscr P'})$
such that $\Psi_P^p(t_{\mathscr D})=t_P ^p t_{\mathscr D}$
for every flag $\mathscr D$ of flats of~$M_P$;
it is homogeneous of degree~$p$.
\item
There is a unique group homomorphism
$\Gamma_P^p\colon A(M\mathord{\,|\,} P)\to A(M)$
such that $\Gamma_P^p(t_{\mathscr D})=t_P^p t_{\mathscr D}$
for every flag $\mathscr D$ of flats of~$M^P$;
it is homogeneous of degree~$p$.
\end{enumerate}
\end{prop}
\begin{theo}\label{theo.dec}
The homomorphism
\[ \Phi_P + \sum_{p=1}^{\rk_M(P)-1} \Psi_P^p
\colon A(M,\mathscr P) \oplus A(M\contr P)[-p] \to A(M,\mathscr P') \]
is an isomorphism of graded rings,
where the symbol $[-p]$ means that the grading is shifted by~$-p$.
\end{theo}
\begin{coro}
The homomorphism~$\Phi_P$ induces an isomorphism
\[ A^r(M,\mathscr P) \xrightarrow\sim A^r(M,\mathscr P') \]
and the algebra $A(M,\mathscr P')_\R$, equipped with the isomorphism
$\deg\circ\Phi_P\colon A^r(M,\mathscr P')\to\Z$,
satisfies Poincaré duality.
\end{coro}
\subsection{}
The proof of Theorem~\ref{theo.dec}
begins by establishing the surjectivity
of the indicated homomorphism. One then deduces that
the homomorphism $\Psi_P^{\rk_M(P)}$ induces an isomorphism
from $A^{r-\rk_M(P)}(M\contr P)$ onto~$A^r(M,\mathscr P)$.
Under the hypothesis
that Poincaré duality holds for $A(M,\mathscr P)$
and $A(M\contr P)$, a final step proves the injectivity
of the homomorphism.
In view of the corollary, the algebra $A(M,\mathscr P')$ then
satisfies Poincaré duality.
By induction,
this proves that for every matroid~$M$ of rank~$r+1$ and every filter
of flats~$\mathscr P$ on~$M$, the ring $A(M,\mathscr P)$
satisfies Poincaré duality in dimension~$r$.
In particular, for $\mathscr P=\mathscr P_M$,
the ring~$A(M)$ satisfies Poincaré duality.
\subsection{}
Once Poincaré duality is established, the proofs of the
hard Lefschetz theorem and of the Hodge--Riemann
inequalities are carried out together:
the hard Lefschetz theorem
asserts that the Hodge--Riemann bilinear form
is nondegenerate, and the Hodge--Riemann inequalities
specify its signature.
Thanks to the remark
that the signature is a locally constant function
on the space of nondegenerate quadratic forms,
an elementary deformation argument
proves that if the hard Lefschetz theorem holds
for every ample class,
then the Hodge--Riemann inequalities hold
for every ample class if and only if they hold for
\emph{one} ample class.
Let $\Sigma$ be a unimodular fan
satisfying Poincaré duality in dimension~$r$.
For every ray $v\in V_\Sigma$,
the quotient algebra $A(\Sigma)_\R/\ann(t_v)$,
associated with the star $\star_\Sigma(v)$,
satisfies Poincaré duality in dimension~$r-1$,
and
\citet*{adiprasito-huh-katz2015}
introduce the ``local'' variant of the
Hodge--Riemann inequalities, which postulates
that this algebra satisfies these inequalities
for (the image of) every ample class in~$\mathscr K_\Sigma$.
\begin{prop}
The local Hodge--Riemann inequalities imply
the hard Lefschetz theorem.
\end{prop}
\subsection{}
One proves the validity
of the Hodge--Riemann inequalities for the algebra $A(M,\mathscr P)_\R$
by induction, first on the rank of~$M$
and then on the cardinality of~$\mathscr P$. When $\mathscr P=\emptyset$,
we have already mentioned that the algebra $A(M,\mathscr P)_\R$,
isomorphic to the algebra $\R[t]/(t^{r+1})$ associated with~$\P_r$,
satisfies the desired result.
A first reduction allows one to assume that $M$ is a combinatorial
geometry.
Let us then return to the context of a matroidal flip (\S\ref{ss.flip}):
$\mathscr P$ is a filter of flats on~$M$,
$P$ is a flat that is maximal in~$\mathscr P_M\setminus \mathscr P$,
and $\mathscr P'=\mathscr P\cup\{P\}$.
First of all, \citet*[prop.~3.5]{adiprasito-huh-katz2015} observe
that the star $\star_{\Sigma_{M,\mathscr P'}}$
of any ray~$v$ is the product
of the fans $\Sigma_{M\mathord{\,|\,} P,\mathscr P\mathord{\,|\,} P}$ and $\Sigma_{M\contr P}$
if the ray~$v$ is associated with a flat~$P$,
and the fan $\Sigma_{M_i,\mathscr P_i}$ if the ray~$v$
is associated with an element~$i$ of~$\abs M$.
(We write $\mathscr P\mathord{\,|\,} P$ for the set of flats of~$\mathscr P$
contained in~$P$; it is a filter of flats on~$M\mathord{\,|\,} P$.)
In the latter case, the fan satisfies the Hodge--Riemann inequalities
by the induction hypothesis. In the former case,
the two fans involved satisfy them as well,
hence so does their product: this amounts to proving that the algebra
$A(M\mathord{\,|\,} P,\mathscr P\mathord{\,|\,} P)\otimes A(M\contr P)$ satisfies these inequalities,
which one proves by reducing to the case
of the tensor product $\R[t]/(t^{a+1})\otimes \R[t]/(t^{b+1})$
associated with $\P_a\times\P_b$.
Thus the fan $\Sigma_{M,\mathscr P'}$ satisfies the
local Hodge--Riemann inequalities. It therefore satisfies
the hard Lefschetz theorem.
It remains to prove the Hodge--Riemann inequalities,
but it now suffices to verify them for \emph{one} class.
The conclusion of Theorem~\ref{theo.dec}
provides an isomorphism
of graded $\R$-vector spaces
\[ A(M,\mathscr P')_\R \simeq A(M,\mathscr P)_\R \oplus
\big( \R[t]/(t^{\rk_M(P)-1}) \otimes_\R A(M\contr P) \big)[-1] \]
which \citet*{adiprasito-huh-katz2015}
use to construct ample classes that satisfy
the Hodge--Riemann inequalities.
\section{Flats}
\subsection{}
Let us conclude this report by discussing another conjecture, still open,
in the enumerative combinatorics of matroids, to which
the paper of~\cite{huh-wang2017} gives a positive answer
in the representable case.
Let $M$ be a matroid. For every integer~$k$, we write $M^{(k)}$
for the set of flats of rank~$k$ of~$M$ and set $W_k(M)=\Card(M^{(k)})$;
these are the \emph{Whitney numbers of the second kind} of~$M$.
They vanish for $k<0$ or $k>\rk(M)$.
\begin{conj}[Rota--Welsh conjecture]
\label{conj.Wk}
Let $M$ be a matroid and let $r=\rk(M)$.
\begin{enumerate} \def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item
The sequence $(W_0(M),\dotsc,W_{r}(M))$ is log-concave; in particular,
it is unimodal;
\item
For every integer~$k$ such that $0\leq k\leq r/2$, one has $W_k(M)\leq W_{r-k}(M)$;
\item
One has $W_0(M)\leq W_1(M)\leq \dotsb \leq W_{\lfloor r/2\rfloor}(M)$.
\end{enumerate}
\end{conj}
Unimodality was conjectured by \citet{rota1971},
and the suggestion that this sequence
might be log-concave is due to \citet{welsh1976}. \citet{mason1972}
even conjectures that the ratio $W_k(M)^2/W_{k-1}(M)W_{k+1}(M)$
is always at least $(k+1)/k$,
which is the value taken by this ratio
for the matroid structure on $\abs M$
in which every subset is independent (the ``free matroid'').
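These inequalities are easy to test numerically on small examples. The sketch below (my own illustration, not from the sources cited here; the uniform matroid $U_{r,n}$ is an assumed toy case, chosen because its flats are exactly the subsets of size $<r$ together with the ground set) checks the three assertions of the conjecture, and Mason's ratio bound, for $U_{4,9}$:

```python
from math import comb
from fractions import Fraction

def whitney_uniform(n, r):
    # Flats of the uniform matroid U_{r,n}: every subset of size < r,
    # plus the whole ground set (the unique flat of rank r).
    # Hence W_k = C(n, k) for k < r, and W_r = 1.
    return [comb(n, k) for k in range(r)] + [1]

n, r = 9, 4
W = whitney_uniform(n, r)          # [1, 9, 36, 84, 1]

# (i) log-concavity (which implies unimodality for a positive sequence)
assert all(W[k] ** 2 >= W[k - 1] * W[k + 1] for k in range(1, r))

# (ii) top-heaviness: W_k <= W_{r-k} for 0 <= k <= r/2
assert all(W[k] <= W[r - k] for k in range(r // 2 + 1))

# (iii) the lower half of the sequence is nondecreasing
assert all(W[k] <= W[k + 1] for k in range(r // 2))

# Mason's stronger bound: W_k^2 / (W_{k-1} W_{k+1}) >= (k+1)/k
for k in range(1, r):
    assert Fraction(W[k] ** 2, W[k - 1] * W[k + 1]) >= Fraction(k + 1, k)
```

For the free matroid ($r=n$) the Whitney numbers are the binomial coefficients themselves and Mason's ratio is attained with equality.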
\begin{theo}[\citealp{huh-wang2017}]
\label{theo.hw}
Let $M$ be a \emph{representable} matroid and let $r=\rk(M)$.
Let $p,q$ be integers such that $0\leq p\leq \inf(q,r-q)$.
There exists an injective map $\phi\colon M^{(p)}\to M^{(q)}$
such that $x\subset \phi(x) $ for every $x\in M^{(p)}$.
In particular, one has $W_p(M)\leq W_{q}(M)$.
\end{theo}
Taking $p=k$ and $q=p+1$ (resp. $q=r-k$),
one deduces in particular
that assertions~(ii) and~(iii) of Conjecture~\ref{conj.Wk}
hold for a representable matroid.
In the case of a representable matroid of rank~$3$,
one recovers the classical theorem of~\cite{debruijn-erdos1948}
according to which $n$~non-collinear points in a projective plane
determine at least $n$~lines.
\subsection{}
Let $M$ be a matroid, set $n+1=\Card(\abs M)$ and $r=\rk(M)$;
we assume that $\abs M=\{0,\dotsc,n\}$.
Suppose $M$ representable over a field~$K$.
Write $[x_0:\dotsb:x_n]$ for the homogeneous coordinates of~$\P_{n,K}$
and identify the complement of the hyperplane defined by $x_0=0$
with the affine space~$\A^n_K$.
Consider an affine subspace~$L$ of dimension~$r$ of~$K^n$
whose closure~$X$ in~$\P_{n,K}$ represents~$M$, in the sense that
the coordinate hyperplanes of~$\P_{n,K}$
cut out on~$X$ an arrangement of hyperplanes representing~$M$.
For every circuit~$C$ of~$M$, there exists
a family $(a_{C,i})_{i\in C}$ of elements of~$K$, not all zero,
unique up to multiplication by a nonzero element of~$K$,
such that $\sum_{i \in C} a_{C,i} x_i = 0$
on~$X$.
Let $Y$ be the closure of~$L$ in the product
of projective lines~$(\P_{1,K})^n$.
Write $[z_1,w_1], \dotsc,[z_n,w_n]$ for the multi-homogeneous
coordinates of~$\P_{1,K}^n$.
The proof of Theorem~\ref{theo.hw}
rests on the following observation, due to \citet{ardila-boocher2016},
who describe the homogeneous ideal of~$Y$.
\begin{prop}
The variety~$Y$ is defined in $(\P_{1,K})^n$
by the family of multi-homogeneous equations
\[ \sum_{i\in C} a_{C,i} z_i \prod_{j\in C\setminus\{i\}} w_j = 0 , \]
where $C$ runs over the set of circuits of the matroid~$M$.
\end{prop}
For every flat~$P\in\mathscr P_M$, let $Y_P$ be the intersection of~$Y$
with the locally closed subspace of $(\P_{1,K})^n$
defined by $w_i=0$ if and only if $i\not\in P$.
\begin{coro}\label{coro.partition}
The family $(Y_P)_{P\in\mathscr P_M}$ is a partition of~$Y$
into locally closed subsets.
Moreover, for every flat~$P$ of~$M$, $Y_P$
is
isomorphic to the affine space~$\A^{\rk_M(P)}_K$.
\end{coro}
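Since the corollary partitions $Y$ into affine spaces, for a matroid represented over a finite field $\mathbf{F}_q$ one would obtain $\# Y(\mathbf{F}_q)=\sum_{P} q^{\rk_M(P)}=\sum_k W_k(M)\,q^k$. The sketch below (my own illustration; the uniform matroid $U_{3,5}$ is an assumed toy case, and the point count is simply the combinatorial identity, not a computation on $Y$ itself) tabulates that polynomial by enumerating flats:

```python
from itertools import combinations
from math import comb

n, r = 5, 3
ground = range(n)

def closure(S):
    # In U_{r,n}, a set of size < r is its own closure; larger sets span.
    S = frozenset(S)
    return S if len(S) < r else frozenset(ground)

def rank(F):
    return min(len(F), r)

# enumerate all flats as closures of subsets
flats = {closure(S) for k in range(n + 1) for S in combinations(ground, k)}

q = 7  # any prime power; the count is polynomial in q
count = sum(q ** rank(F) for F in flats)

# compare with sum_k W_k(M) q^k using the known Whitney numbers of U_{3,5}
W = [comb(n, k) for k in range(r)] + [1]
assert count == sum(W[k] * q ** k for k in range(r + 1))
```

The coefficient of $q^k$ in this count is exactly $W_k(M)$, matching the dimension of $H^{2k}(Y)$ described below.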
\subsection{}
Although $Y$ is in general singular,
the existence of the partition~$(Y_P)$
has remarkable consequences for its cohomology.
To fix ideas, choose a prime number~$\ell$
different from the characteristic of~$K$
and write $H(Y)$ for the étale cohomology of~$Y_{\overline K}$ with coefficients
in~$\Q_\ell$.
When $K$ has characteristic~$p>0$,
one may reduce to the case where $K$ is an algebraic
closure of a finite field.
When $K$ has characteristic zero,
one may prefer to reduce to the case $K=\C$
and take for $H(Y)$ the singular cohomology
of~$Y(\C)$ equipped with its mixed Hodge structure.
In both cases, the cohomology spaces $H^k(Y)$
carry a weight filtration.
Write also $IH(Y)$ for the \emph{intersection cohomology} of~$Y$
\citep*{goresky-macpherson1983,beilinson-bernstein-deligne1982}:
let us just say that it is the cohomology of a bounded complex
with constructible cohomology on~$Y$,
the shifted intersection complex~$\mathrm{IC}_Y[-r]$,
characterized (in a suitable derived category)
on the one hand by the property
that it extends the constant sheaf on the smooth locus of~$Y$, and on the other hand
by the (shifted) ``perversity conditions'' on the dimension of the supports
of its cohomology sheaves and of those of its Verdier dual.
The spaces $H^k(Y)$ and $IH^k(Y)$ vanish for $k\not\in[0,2r]$. Moreover,
\citet[th.~3.1]{bjorner-ekedahl2009} deduce from Corollary~\ref{coro.partition} that for every integer~$k$:
\begin{enumerate}\def\theenumi{\roman{enumi}}\def\labelenumi{(\theenumi)}
\item
If $k$ is odd, then $H^k(Y)=0$;
\item
If $k$ is even,
the space $H^k(Y)$ is pure of weight~$k$
and of dimension $W_{k/2}(M)$,
generated by the cycle classes $[\overline{Y_P}]$,
for $P\in M^{(k/2)}$;
\item
The canonical homomorphism from~$H^k(Y)$ to $IH^k(Y)$ is injective.
\end{enumerate}
By induction on the cardinality of~$\mathscr P_M$,
the first two assertions follow from the cohomology of affine
spaces and the long exact sequence of cohomology with compact
support associated with a partition of~$Y$ into an open subset
and the complementary closed subset.
The third assertion
is proved by~\citet[th.~2.1]{bjorner-ekedahl2009}
when $K$ is the algebraic closure of a finite field,
and by~\citet[th.~1.8]{weber2004b} when $K=\C$:
the kernel of the canonical homomorphism from $H^k(Y)$
to~$IH^k(Y)$ is the part of weight~$<k$ of~$H^k(Y)$.
This is a consequence of the behaviour of weights
under the six usual cohomological operations
\citep{deligne1974b,deligne1980}
and of the fact that the intermediate extension of a pure
perverse sheaf is a pure perverse sheaf of the same weight.
\subsection{}
The intersection cohomology $IH(Y)$ of~$Y$
is a module over its ordinary cohomology $H(Y)$.
In particular, for every ample line bundle~$\mathscr L$
on~$Y$ and every integer~$k$ such that $0\leq k\leq r/2$,
one may consider the Lefschetz homomorphism
\[ c_1(\mathscr L)^{r-2k}\cap \colon IH^{2k}(Y) \to IH^{2r-2k}(Y). \]
This homomorphism is injective:
the hard Lefschetz theorem
holds for intersection cohomology
\citep*[5.4.10, 6.2.10]{beilinson-bernstein-deligne1982}.
Since the canonical homomorphism
from $H^{2k}(Y)$ to $IH^{2k}(Y)$ is injective,
it follows that the Lefschetz homomorphism
\[ c_1(\mathscr L)^{r-2k}\cap \colon H^{2k}(Y) \to H^{2r-2k}(Y) \]
is again injective. In other words: the hard Lefschetz
theorem holds for the ordinary cohomology of~$Y$.
This already proves the inequality $W_k(M)\leq W_{r-k}(M)$
for every integer~$k$
such that $0\leq k\leq r/2$.
If $p,q$ are integers such that $0\leq p\leq \inf(q,r-q)$, the homomorphism
$c_1(\mathscr L)^{q-p}\cap\colon H^{2p}(Y)\to H^{2q}(Y)$
is a fortiori injective,
which yields the inequality $W_p(M)\leq W_{q}(M)$
of Theorem~\ref{theo.hw}.
\subsection{}
Take for the line bundle~$\mathscr L$ the external tensor
product of the bundles $\mathscr O(1)$ on the $n$~factors
of $(\P_{1,K})^n$,
and consider
the matrix~$\mathit\Lambda=(\mathit\Lambda_{P,Q})$
of the map $c_1(\mathscr L)^{q-p}\cap$
in the bases
$([\overline {Y_P}])_{P\in M^{(p)}}$ of~$H^{2p}(Y)$
and
$([\overline {Y_Q}])_{Q\in M^{(q)}}$ of~$H^{2q}(Y)$.
The intersection
$ c_1(\mathscr L)\cap [\overline {Y_P}]$
is the sum of the terms $[\overline{Y_{\langle P,i\rangle}}]$
for $i\in\{1,\dotsc,n\}\setminus P$.
Consequently, $\mathit\Lambda_{P,Q}=0$
if $P\not\subset Q$.
By the above, the matrix~$\mathit\Lambda$
has maximal rank~$W_p(M)$,
hence at least one of its minors of size $W_p(M)$
is invertible.
This minor is of type~$P\times P'$, where $P'$ is a subset of~$M^{(q)}$
of cardinality $W_p(M)$;
once a total order has been chosen on~$P$ and~$P'$, this minor
expands as a sum of terms of the form
\[ \pm \prod_{P\in M^{(p)}} \mathit\Lambda_{P, \iota(P)} \]
where $\iota$ runs over the set of bijections from~$P$ onto~$P'$.
At least one of these terms is nonzero: it corresponds
to an injective map $\iota\colon M^{(p)}\to M^{(q)}$
such that $P\subset \iota(P)$ for every $P\in M^{(p)}$.
This concludes the proof of Theorem~\ref{theo.hw}.
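The final step, extracting an injection from a nonzero term of the minor expansion, is purely combinatorial: a nonzero term is exactly a matching of $M^{(p)}$ into $M^{(q)}$ along inclusion. The sketch below (my own illustration; the uniform matroid and the augmenting-path algorithm are assumptions of mine, not the paper's argument) finds such an injection directly for $U_{4,6}$ with $p=1$, $q=2$:

```python
from itertools import combinations

n, r, p, q = 6, 4, 1, 2
ground = range(n)
Fp = [frozenset(S) for S in combinations(ground, p)]  # rank-p flats of U_{r,n}
Fq = [frozenset(S) for S in combinations(ground, q)]  # rank-q flats (p, q < r)

# x may only be sent to a rank-q flat containing it
adj = {x: [y for y in Fq if x < y] for x in Fp}
match = {}  # flat in Fq -> the flat in Fp currently matched to it

def augment(x, seen):
    """Kuhn's augmenting-path step for bipartite matching."""
    for y in adj[x]:
        if y not in seen:
            seen.add(y)
            if y not in match or augment(match[y], seen):
                match[y] = x
                return True
    return False

# a perfect matching of Fp into Fq exists (Hall's condition holds here)
assert all(augment(x, set()) for x in Fp)

phi = {x: y for y, x in match.items()}
assert len(set(phi.values())) == len(Fp)   # phi is injective
assert all(x < phi[x] for x in Fp)         # and x is strictly contained in phi(x)
```

The theorem guarantees that such a matching exists for every representable matroid; this code merely exhibits one in a case where Hall's condition is obvious.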
\subsection{}
With a view to generalizing this theorem to non-representable matroids,
one can observe that the cohomology of~$Y$ has a combinatorial model,
described purely in terms of the matroid~$M$.
So let $M$ be a matroid, not necessarily representable.
Write $B(M)$ for the free abelian group on the set of flats of~$M$,
and let $(\delta_P)_{P\in\mathscr P_M}$ be its canonical basis.
We endow $B(M)$ with the grading for which, for every integer~$k$,
$B^k(M)$ is the subgroup generated by the elements~$\delta_P$,
where $P$ runs over the set $M^{(k)}$ of flats of rank~$k$.
We then define a multiplication on~$B(M)$ by
\[ \delta_P\cdot \delta_Q = \begin{cases}
\delta_{P\vee Q} & \text{if $\rk_M(P)+\rk_M(Q)=\rk_M(P\vee Q)$; } \\
0 & \text{otherwise.} \end{cases} \]
With these structures, $B(M)$ is a graded commutative algebra,
which \cite{huh-wang2017} call the \emph{graded Möbius algebra}
of the matroid~$M$.
Set also $\lambda = \sum\limits_{i\in \abs M} \delta_i\in B^1(M)$.
When $M$ is representable, the unique homomorphism
of graded vector spaces from~$B(M)_{\Q_\ell}$ to~$H(Y)$
sending~$\delta_P$ to~$[\overline{Y_P}]$ is an isomorphism
of graded algebras, and the Lefschetz map $c_1(\mathscr L)\cap$
corresponds to multiplication by~$\lambda$.
\cite{huh-wang2017}
then conjecture that for every matroid~$M$ of rank~$r$,
not necessarily representable,
the map $\lambda^{r-2k}\colon B^k(M)\to B^{r-k}(M)$
is injective
for every integer~$k$ such that $0\leq k\leq r/2$.
The conclusion of Theorem~\ref{theo.hw} would then
follow as above.
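This conjecture can at least be tested on small matroids. The sketch below (my own illustration; the uniform matroid $U_{3,5}$ is an assumed example, and is in fact representable, so nothing new is proved) implements the graded Möbius algebra and checks the injectivity of multiplication by $\lambda^{r-2k}$ via a rank computation over $\Q$:

```python
from itertools import combinations
from fractions import Fraction

n, r = 5, 3
ground = frozenset(range(n))

def closure(S):
    S = frozenset(S)
    return S if len(S) < r else ground    # flats of U_{r,n}

def rk(F):
    return min(len(F), r)

flats = {closure(S) for k in range(n + 1) for S in combinations(ground, k)}
basis = {k: sorted((F for F in flats if rk(F) == k), key=sorted)
         for k in range(r + 1)}
atoms = basis[1]

def times_lambda(vec):
    # vec : dict flat -> coefficient, homogeneous of rank k; returns lambda * vec,
    # using delta_F * delta_a = delta_{F v a} when ranks add, and 0 otherwise.
    out = {}
    for F, c in vec.items():
        for a in atoms:
            J = closure(F | a)
            if rk(J) == rk(F) + 1:
                out[J] = out.get(J, 0) + c
    return out

def rank_of(rows):
    # row rank over Q by Gaussian elimination
    rows = [list(map(Fraction, row)) for row in rows]
    rnk = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rnk, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rnk], rows[piv] = rows[piv], rows[rnk]
        for i in range(len(rows)):
            if i != rnk and rows[i][col]:
                f = rows[i][col] / rows[rnk][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rnk])]
        rnk += 1
    return rnk

for k in range(r // 2 + 1):
    images = []
    for P in basis[k]:
        v = {P: 1}
        for _ in range(r - 2 * k):
            v = times_lambda(v)
        images.append([v.get(Q, 0) for Q in basis[r - k]])
    # injectivity of lambda^{r-2k} <=> the images of the basis are independent
    assert rank_of(images) == len(basis[k])
```

For non-representable matroids (the Vámos matroid, say) the same computation would test the conjecture in earnest; the algebraic structure used is exactly the one defined above.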
\bibliographystyle{mynat}
"Sky Fighters, October 1936" by Eugene M. Frandzen
Eugene M. Frandzen painted the covers of Sky Fighters from its first issue in 1932 until he moved on from the pulps in 1939. At this point in the run, the covers were about the planes featured on the cover more than the story depicted. On the October 1936 cover, It's the Morane-Saulnier 27C1 & Roland D2!
The Ships on the Cover
THE French developed the outstanding monoplanes of the Parasol type produced during the war. The early Morane-Saulnier Parasols were very successful and the later war models progressed with the quick advance of aviation but continued the main design features of the original Parasols. The Morane-Saulnier 27 C1 was a single-seater fighting scout which carried substantial strut bracing to the wings instead of the large number of wire braces used on previous monoplane models. The rounded fuselage housed a 160 h.p. Gnome motor.
This understrut bracing of the high wing Parasol has not really been abandoned. Many of our high winged monoplanes, although not strictly Parasols, nevertheless are closely related to the old Moranes. Our Stinson, Bellanca and several others with the understrut bracing, merely have the fuselage and cabin fused in with the top wing. It was a good stunt in the old days. It's a good stunt now. A forerunner of the Morane-Saulnier Parasol was called the Aerostable on account of its inherent stability. It had no ailerons, so it was up to the pilot to shift his weight in his seat to give lateral control.
The Roland Seemed Impractical
The German plane known as the L.F.G. Roland had an original design which was seemingly so impractical that the firm, Luft Fahrzeug Gesellschaft, was safe from anyone stealing their idea. The Roland D2 was a single-seater fighter in which this design was incorporated. The fuselage under the top wing was carried up to the wing, cutting off the pilot's view in front, even though this superstructure did thin out considerably at the top and left room for two windshields, one on each side of the center ridge. Despite this drawback the D2 was a good ship and was still used in many German squadrons in 1918. The 160 h.p. Mercedes may have been partly responsible for the good qualities of the D2's performance, but even with a good power plant it was against all reason to park a solid mass in front of the pilot's eyes. It would be just as practical to put a strip of tin six inches wide smack down the windshield of your car directly in front of the steering wheel. But just a few dozen cockeyed hunches like that thrown into the war crates gave the civil designers plenty of precedents of what not to do when the war was over.
Possibly if war comes in the future it will be a mass of planes against a like mass of enemy's fighting planes. Aces will be a thing of the past. A few men will be outstanding in their flying but it will be hard to observe their deeds in the terrific mixup that must occur far overhead, possibly out of sight. Publicized stunts will be few and far between, but to cut out entirely from the picture the personal element of friendship and cooperation between men of a squadron will be impossible!
Saving a Buddy
The Morane pilot on the cover knew exactly where a buddy, who had been taken prisoner from a cracked-up plane, was confined in a hospital in a small suburb. He communicated with his friend by those devious means that humans will always work out some way, despite the vigilance of the enemy's espionage system. The man in the hospital feigned lameness longer than necessary and at a prearranged moment, while the Morane was landing at an adjacent field, he swung his crutches with vim and vigor. Down went the two unarmed attendants. In ten minutes he was securely tied to the top of the Morane's wing and high in the air headed for home. Another Parasol joined the French plane and drove off two Roland D2s who had sighted the overburdened Morane and considered it easy pickings.
The personal touch was in that quickly executed rescue. War will always have those daring exploits. Friendships formed under fire are lasting and strong.
Sky Fighters, October 1936 by Eugene M. Frandzen
(The Ships on The Cover Page)
Tags: 1936, Eugene M. Frandzen, Morane-Saulnier Type 27 C1, October 1936, Roland D-2, Sky Fighters | Comments (0)
"Famous Sky Fighters, June 1936" by Terry Gilkison
STARTING in the October 1933 issue of Sky Fighters and running almost 5 years, Terry Gilkison's "Famous Sky Fighters" was a staple of the magazine. Each month Gilkison would illustrate in a two page spread different Aces that rose to fame during the Great War.
Although Gilkison was probably better known for his syndicated newspaper work, he also provided black and white story interior illustrations for pulp magazines. His work appeared in Clues, Thrilling Adventures, Texas Rangers, Thrilling Mystery, Thrilling Western, and Popular Western. Gilkison provided similar features in a few other Thrilling Publications—there was "Famous Soldiers of Fortune" and later "Adventure Thrills" in Thrilling Adventures, Famous Crimes" in Thrilling Detective, and the fully illustrated air adventure stories of Buck Barton "The Flying Devil" in The Lone Eagle! He signed most of this work with only his initials "T.G." to maintain a low profile and preserve his reputation as a syndicated newspaper cartoon artist.
The June 1936 installment, from the pages of Sky Fighters, features Canadian Col. William Bishop, Colonel Frank P. Lahm, and observation Ace William Erwin!
Next time in "Famous Sky Fighters", Terry Gilkison features Sir Charles Kingsford-Smith, Captain A.W. Stevens, Captain Boris Sergievsky and the great German inventor, Graf Zeppelin! Don't miss it!
Tags: 1936, Col.William Bishop, Famous Sky Fighters, Frank P. Lahm, June 1936, Robert Short, Sky Fighters, Terry Gilkison, William Erwin | Comments (0)
"Sky Guilt" by Frederick C. Painton
Link - Posted by David on September 6, 2019 @ 6:00 am
THIS week we have a story from the pen of a prolific pulp author and venerated newspaper man—Frederick C. Painton. Mike O'Connor returns from a four day stint of detached service with the Second Corps to find his kid brother in the brig facing a court martial for murdering a fellow pilot in a bar brawl. Mike draws on his pre-war experience as a detective to find the true culprit and takes to the sky to sweat out a confession from the guilty party! From the pages of the November 1933 The Lone Eagle, it's Frederick C. Painton's "Sky Guilt!"
A Gripping Story of Exciting Peril in the Air and a Pilot's Grim Determination!
Download "Sky Guilt" (November 1933, The Lone Eagle)
As he has in previous stories we've posted, Painton has once again named the squadron's operations officer Willie the Ink—Painton uses a similarly named character—Willie the Web—as operations officer in his Squadron of the Dead tales.
Tags: 1933, Frederick C. Painton, November 1933, The Lone Eagle, Willie the Ink | Comments (0)
"When Bishop Fought Richthofen" by Paul J. Bissell
CONTINUING with the Richthofen themed covers, this week we present "When Bishop Fought Richthofen"—The story behind the cover of Paul Bissell's June 1932 cover for Flying Aces! Bissell is mainly known for doing the covers of Flying Aces from 1931 through 1934 when C.B. Mayshark took over duties. For the June 1932 cover Bissell put us right in the action as the planes of Bishop and Richthofen square off!
When Bishop Fought Richthofen
THE early spring of '17 saw Richthofen, the Red Knight of Germany, with almost two-score victories to his credit. For months now, hunter that he was, he had carefully searched the skies for his victims, and steadily built up a record that had already made him leading ace of the German Air Force and air idol of the German public. He had seen several months of duty as an observer at the Front, but it was under the guidance of the famous Boelcke that he started his career as a fighting pilot.
Now at last he was able to satisfy the impulse of the hunter which had always been a part of him. A deadly shot, and an expert flyer, he would climb into the clouds and there stalk his prey as carefully as he did the wild game on his own estate, waiting patiently his opportunity to dive headlong at some unsuspecting "bit of cold meat."
This same spring there landed in a British airdrome on the Western Front a young pilot fresh from the training fields of England. He, too, had already done some four months of duty at the Front as an observer, but without getting the opportunity even to fire a shot. This lad of twenty-three was Lieutenant William Bishop, R.F.C., without a fight to his record, though he was destined in the next few months to pack in more air scraps than any other pilot in a similar length of time. He was, in these same few months, to became the dread of the Germans, the ranking ace of the R.F.C.—to barely escape death time after time, and rise to the rank of major.
He had been at the Front scarcely two weeks when he got his first German, while another two weeks saw him the proud possessor of a bright blue propeller hub-cap, presented to him by his mechanics upon his becoming an ace.
April the thirtieth was a red-letter day for both Bishop and Richthofen, though other days showed larger scores against the enemy for each of them. On this day, Bishop, in one hour and forty-five minutes, before lunch, had the distinction of engaging, single-handed, in nine separate aerial combats, bringing down a two-seater to add to his score, while Richthofen, before his noonday meal, by shooting down two of the enemy, had raised his score to fifty-two planes.
Seated as they were in their respective messes, it is questionable if either Bishop or Richthofen gave a thought one to the other, in fact, it is almost certain that Richthofen had never even heard the name of Bishop. However, fate that afternoon was to bring these two against each other.
It was about two in the afternoon, when Bishop, accompanied by his major, who was flying in another Nieuport, took off from his airport. For almost half an hour they flew steadily eastward without seeing any signs of the enemy; then, noticing some archie fire off to the left, they turned to investigate. Off some distance and below them they saw a German reconnaissance plane, and started the attack, when suddenly, darting in from their right, came four scarlet-nosed Albatross scouts.
SWINGING to avoid the first dive of the enemy, the two Britishers turned back into the battle. The major, with guns blazing, bore down upon the leader of the Germans, who, reversing quickly, avoided the direct fire of the major, and in turn attacked Bishop. It was then that Bishop realized that this plane was solid red, crimson from nose to tail save only for the black crosses standing out strongly in contrast on the wings. It was Richthofen, diving at him, trying to get him full in line with those deadly guns which had meant death to so many Englishmen. Well Bishop knew that only a split second now separated him from death.
Automatically he threw his stick over, and the plane banked up just in time, as Richthofen's tracers went wild. Then began the tail-chasing. Around and around they swung, striving desperately to gain that deadly position behind the other's flippers. Moments came when one or the other, by some quick maneuver, would, for the fraction of a second, find his target in line with his sights.
A burst of flames as the guns spat, but to no avail, and the chase began again.
The major had drifted off to the left, scrapping it out with one of the other Germans. This left two others, beside Richthofen, in this mad fight with Bishop. They, too, fought for a position from which they might fire upon the Britisher without endangering their own comrade and leader.
The circles were now getting tighter and tighter. The pace was terrific, and the other planes, unable to help their comrade, and fearing collision, had withdrawn to the side. Alone, the two masters of the air fought on. Each, finding himself unable to obtain the desired dead spot, was now firing with more abandon, hoping that one stray bullet might find its mark and bring this whirling dance of death to an end. For those two, time had ceased. The world was just themselves, rushing through endless space, madly circling, instinctively using every maneuver, every bit of skill at their command, to gain the desired opening.
They flew now as part of their own machines, and their guns, as part of themselves, spoke, when, for even the barest fraction of a second, their target flashed by.
Suddenly Bishop realized that he was near the end of his ammunition. He could not be sure that his opponent faced the same situation, and decided that he must conserve the few bullets that he had left. His feeling of desperation turned almost to despair, when, at this instant, he discovered three planes diving steeply at him.
Back he pulled on his stick, climbing sharply out of the mad circle, expecting every instant to feel the German bullets begin to spatter his plane, but knowing that he must take this hazard to get away from the new attack.
However, to his surprise, the planes dived past him, and down after the Red Knight, who had headed toward his two companions and Germany. Then Bishop discovered to his relief that the three planes were not Germans, as he had thought, but were three British naval planes which had come up opportunely at this moment.
The fight was over. One of the great air battles of the war was a thing of the past. The sportsman and the hunter had fought to a draw and retired with honor, each to fight many times again for his country, but never again against each other. For yet another year Richthofen continued his victories until he fell with an enemy bullet through his heart, to be buried with full military honors by his admiring foes.
Bishop fought steadily for six more months until, with forty-nine victories, he returned to his homeland, to receive every honor that a grateful king and country could bestow. He survived the war and is today the only living man with a V.C., D.S.O. twice awarded, and M.C.
Flying Aces, June 1932 by Paul Bissell
Tags: 1932, Baron von Richthofen, Billy Bishop, June 1932, Oswald Boelcke, Paul Bissell | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,382 |
{"url":"https:\/\/www.impan.pl\/pl\/wydawnictwa\/czasopisma-i-serie-wydawnicze\/acta-arithmetica\/all\/63\/4\/107790\/on-b-2k-sequences","text":"## On $B_{2k}$-sequences\n\n### Volume 63 \/ 1993\n\nActa Arithmetica 63 (1993), 367-371 DOI: 10.4064\/aa-63-4-367-371\n\n#### Abstract\n\nIntroduction. An old conjecture of P. Erd\u0151s repeated many times with a prize offer states that the counting function A(n) of a $B_r$-sequence A satisfies $\\liminf_{n\\to\\infty} A(n)\/n^{1\/r}=0$. The conjecture was proved for r=2 by P. Erd\u0151s himself (see [5]) and in the cases r=4 and r=6 by J. C. M. Nash in [4] and by Xing-De Jia in [2] respectively. A very interesting proof of the conjecture in the case of all even r=2k by Xing-De Jia is to appear in the Journal of Number Theory [3]. Here we present a different, very short proof of Erd\u0151s' hypothesis for all even r=2k which we developed independently of Jia's version.\n\n\u2022 Martin Helm","date":"2022-08-13 18:14:36","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8291810154914856, \"perplexity\": 5133.69566496642}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882571982.99\/warc\/CC-MAIN-20220813172349-20220813202349-00094.warc.gz\"}"} | null | null
St. Michael the Archangel Church is a small historic parish church in Buchach, in Ternopil Oblast, Ukraine. It stands on Basztowa Street (Ukrainian: вул. Баштова), on the grounds of the Nagórzanka cemetery.
It was built as a Greek Catholic church in 1910 in the village of Nagórzanka, a former suburb of Buchach that was incorporated into the city in 1965. The present church occupies the site of an earlier wooden church, which was already recorded as a parish church in 1664.
See also
St. Nicholas Church in Buchach
St. Michael the Archangel Church
Bibliography
Fr. Sadok Barącz: Pamiątki buczackie. Lwów: Drukarnia Gazety Narodowej, 1882, 168 pp., p. 146.
External links
Pamiątki buczackie.
On the history of the memorial plaque to the soldiers of the Ukrainian Galician Army in the Church of St. Michael the Archangel in Buchach, and of its author
Churches of the Ternopil-Buchach Eparchy
Michael the Archangel
Former Greek Catholic churches in Ternopil Oblast
St. Michael the Archangel Church | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,187 |
package jj.http.server.uri;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.*;
import static io.netty.handler.codec.http.HttpMethod.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import jj.execution.MockTaskRunner;
import jj.http.server.RouteContributor;
import org.junit.Before;
import org.junit.Test;
/**
* @author jason
*
*/
public class RouterTest {
private static final String STATIC = "static";
private static final String SOMETHING = "something";
String welcome = "something.jpg";
MockTaskRunner mockTaskRunner = new MockTaskRunner();
RouterConfiguration config = new RouterConfiguration() {
@Override
public String welcomeFile() {
return welcome;
}
@Override
public List<Route> routes() {
List<Route> result = new ArrayList<>();
result.add(new Route(GET, "/start", STATIC, "/result1"));
result.add(new Route(POST, "/finish", STATIC, "/result1"));
result.add(new Route(GET, "/chat/", STATIC, "/result3"));
result.add(new Route(POST, "/chat/:room", STATIC, "/result4"));
result.add(new Route(DELETE, "/chat/:room", STATIC, "/result5"));
result.add(new Route(GET, "/chat/:room", STATIC, "/result6"));
result.add(new Route(GET, "/chat/:room/*secret", STATIC, "/result7"));
return result;
}
};
Set<RouteContributor> routeContributors;
RouteContributor routeContributor1 = () ->
Collections.singletonList(new Route(GET, "/*path.something", SOMETHING, ""));
RouteContributor routeContributor2 = () -> Arrays.asList(
new Route(POST, "/*path.something", SOMETHING, ""),
new Route(DELETE, "/*path.something", SOMETHING, "")
);
Router router;
@Before
public void before() throws Exception {
routeContributors = new HashSet<>();
routeContributors.add(routeContributor1);
routeContributors.add(routeContributor2);
router = new Router(config, routeContributors, mockTaskRunner);
router.on(null);
mockTaskRunner.runFirstTask();
}
@Test
public void test() {
RouteMatch routeMatch = router.routeRequest(GET, new URIMatch("/start"));
assertThat(routeMatch.route.resourceName(), is(STATIC));
assertThat(routeMatch.route.mapping(), is("/result1"));
assertTrue(routeMatch.params.isEmpty());
routeMatch = router.routeRequest(GET, new URIMatch("/something/../../../../../start"));
assertThat(routeMatch.route.resourceName(), is(STATIC));
assertThat(routeMatch.route.mapping(), is("/result1"));
assertTrue(routeMatch.params.isEmpty());
routeMatch = router.routeRequest(POST, new URIMatch("../finish"));
assertThat(routeMatch.route.resourceName(), is(STATIC));
assertThat(routeMatch.route.mapping(), is("/result1"));
assertTrue(routeMatch.params.isEmpty());
routeMatch = router.routeRequest(GET, new URIMatch("/some/path/to.something"));
assertThat(routeMatch.route.resourceName(), is(SOMETHING));
assertThat(routeMatch.route.mapping(), is(""));
assertThat(routeMatch.params.size(), is(1));
assertThat(routeMatch.params.get("path"), is("some/path/to"));
routeMatch = router.routeRequest(POST, new URIMatch("/some/path/to.something"));
assertThat(routeMatch.route.resourceName(), is(SOMETHING));
assertThat(routeMatch.route.mapping(), is(""));
assertThat(routeMatch.params.size(), is(1));
assertThat(routeMatch.params.get("path"), is("some/path/to"));
routeMatch = router.routeRequest(DELETE, new URIMatch("/some/path/to.something"));
assertThat(routeMatch.route.resourceName(), is(SOMETHING));
assertThat(routeMatch.route.mapping(), is(""));
assertThat(routeMatch.params.size(), is(1));
assertThat(routeMatch.params.get("path"), is("some/path/to"));
// assertThat(router.find("/index"), is("/index"));
// assertThat(router.find("/other"), is("/other"));
// assertThat(router.find("/other/"), is("/other/index"));
// assertThat(router.find("/other/index"), is("/other/index"));
// assertThat(router.find("/other/other"), is("/other/other"));
// assertThat(router.find("../other/"), is("/other/index"));
// assertThat(router.find("../other/index"), is("/other/index"));
// assertThat(router.find("../other/other"), is("/other/other"));
// assertThat(router.find("/../../../other/"), is("/other/index"));
// assertThat(router.find("/../../../other/index"), is("/other/index"));
// assertThat(router.find("/../../../other/other"), is("/other/other"));
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 4,518 |
Corpus of the Aramaic Incantation Bowls
by Charles David Isbell
214 Pages, 5.50 x 8.50 x 0.43 in
Since the 1913 publication of James A. Montgomery's Aramaic Incantation Texts from Nippur, students of the bowls have used that book as the diving platform from which they enter a deep pool of study, In the intervening years, the body of work on incantation (or magic) bowls has continued to grow. Bowls in several ancient languages have attracted the attention of scholars from a variety of countries and traditions. The result has been the publication of a considerable number of translations of additional texts and fragments. Focusing only on those bowls inscribed in Aramaic and even then, only on the seventy-two extant bowls which could be personally read in photographs or facsimiles, Charles Isbell has, in Corpus of the Aramaic Incantation Bowls, compiled an impressive volume of work. Including the complete original texts, full translations, and annotations, Isbell supplements the text with a glossary of all inscribed words, an index of personal names, and a list of quotations from scripture.
Charles David Isbell is Director of Jewish Studies at Louisiana State University and rabbi at Temple Sinai in Lake Charles, Louisiana. He is the author of seven scholarly books and of more than one hundred scholarly articles in the fields of biblical and rabbinic studies. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,170 |
Located on Beau Vallon Beach, one of the most luxurious neighbourhoods on the picturesque island of Mahé, and only 10 minutes from Victoria (the capital), 5 minutes from Morne Seychellois National Park, and 20 minutes from Seychelles International Airport.
Beau Vallon Beach offers 3 km of white sand and turquoise water, and it's the only beach on Mahé that is swimmable all year round.
Our Boutique Resort has a natural reef that you can reach by swimming just 50 m out from the beach.
Create memories worth sharing amid the luxury surroundings of our 5* Boutique Resort. Tranquil, sustainable (green-certified), and culturally connected, not only through its Creole colonial architecture but also through its Seychellois hosts.
Renowned for its azure turquoise ocean, crystal-white beaches and lush emerald greenery – expect nothing but undisturbed views, charming Creole hospitality and a new height of indulgence at this paradise resort.
capital and the best place to buy fresh fruits, fish, vegetables and spices.
architecture and is one of the Seychelles' most visited Monuments.
St. Paul's Cathedral is dedicated to St. Paul the Apostle and was consecrated in 1859.
The Cathedral was then rebuilt in 2001.
it was constructed in March 1851 and was devoted to the Virgin of Immaculate Conception.
The National Museum of History was established in 1964.
artifacts for the public benefit including one of the oldest maps dated 1517.
The largest national park in Seychelles spans more than 20% of Mahé. Its 12 different trails can be explored on either half-day or full-day excursions. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,716 |
Q: Proving that the group of all roots of unity in a number field is finite cyclic

I would be very grateful if someone would check my solution to the following problem.
Let $K$ be a number field (i.e. a finite field extension of $\mathbb{Q}$).
Let $G$ be the group of all roots of unity in $K.$
Claim. $G$ is a finite cyclic group.
Here is my attempt at a solution:
By definition $[K:\mathbb{Q}]$ is finite and this clearly implies that $G$ is finite.
To show that $G$ is cyclic, we proceed by supposing otherwise.
Let $g \in G$ be an element of maximal order, say $m.$
Since $G$ is not cyclic, there exists some $h \in G\setminus\langle g \rangle.$
Let $s$ be the order of $h.$
Then $s$ does not divide $m,$ otherwise $h^m=1$ and so $h \in \langle g \rangle$ (because $\{1,g,\ldots,g^{m-1}\}$ is the complete set of $m$th roots of unity in $K$).
Since $G$ is abelian, it then contains an element of order lcm$(m,s)$ (take a suitable product of a power of $g$ with a power of $h$), and lcm$(m,s)>m$ because $s$ does not divide $m.$
This is a contradiction!
A:
By definition $[K:\mathbb{Q}]$ is finite and this clearly implies that $G$ is finite.
How exactly? I think this is a crucial point to prove, more important than your second part, which is alright yet not specific to the number-field situation.
A:
$G$ is finite
Indeed, the minimal polynomial of every element of $K$ has degree at most $[K:\mathbb{Q}]$. In particular, this applies to the cyclotomic polynomials, which are the minimal polynomials of the elements of $G$; the $n$-th cyclotomic polynomial has degree $\phi(n)$, where $\phi$ is Euler's totient function. Since the inequality $\phi(n)\le [K:\mathbb{Q}]$ has only finitely many solutions $n$, only finitely many cyclotomic polynomials can occur as minimal polynomials of elements of $G$. Each of these has finitely many roots, so $G$ must be finite.
$G$ is cyclic
Indeed, by Lagrange's theorem every $g\in G$ satisfies $g^{n}=1$, where $n=|G|$, so $G$ is a subgroup of $U_n$, the group of $n$-th roots of unity. Since $U_n$ is cyclic, so is $G$ (every subgroup of a cyclic group is cyclic).
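To make the finiteness bound concrete: the $n$-th cyclotomic polynomial has degree $\phi(n)$, and $\phi(n)\ge\sqrt{n/2}$, so a degree-$d$ number field can only contain roots of unity whose order $n$ satisfies $\phi(n)\le d$, and checking $n\le 2d^2$ suffices. A short Python sketch (an editorial illustration, not part of the original thread) lists the possible orders:

```python
def totient(n):
    """Euler's totient function phi(n), via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def possible_root_orders(d):
    """Orders n of roots of unity that can occur in a degree-d number field.

    The n-th cyclotomic polynomial has degree phi(n), so we need phi(n) <= d;
    since phi(n) >= sqrt(n/2), checking n up to 2*d*d is enough.
    """
    return [n for n in range(1, 2 * d * d + 1) if totient(n) <= d]

# A quadratic field can only contain roots of unity of order 1, 2, 3, 4, or 6.
print(possible_root_orders(2))  # [1, 2, 3, 4, 6]
```

For $d=2$ this recovers the familiar fact that the only quadratic fields containing roots of unity other than $\pm 1$ are $\mathbb{Q}(i)$ and $\mathbb{Q}(\zeta_3)$.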
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,747 |
-- Create a sample user and, in the same statement, give them three default
-- projects, using data-modifying CTEs (PostgreSQL syntax).
with example_user as (
    insert into Person (forename, surname, email, password)
    values ('Example', 'User', 'user@example.com', 'pwd')
    returning *
), proj_names as (
    -- the project names to create, as an inline VALUES list
    select * from (values ('Work'), ('School'), ('Personal')) as names (name)
)
insert into Project (person, name)
select example_user.id, proj_names.name
from example_user
-- pair the single inserted user row with every project name;
-- "full join ... on true" behaves here like a cross join
full join proj_names on true;
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,423 |
{"url":"http:\/\/mathhelpforum.com\/statistics\/104075-solved-total-probability-baye-s-rule.html","text":"# Math Help - [SOLVED] Total probability and Baye's rule\n\n1. ## [SOLVED] Total probability and Baye's rule\n\nQ: Five identical bowls are labeled 1, 2, 3, 4, and 5. Bowl i contains i white balls and 5-i black balls, with i=1, 2,...,5. A bowl is randomly selected and two balls are randomly selected (without replacement) from the contents of the bowl.\n\na) What is the probability that both balls selected are white?\n\nb) Given that both balls selected are white, what is the probability that bowl 3 was selected?\n\nEDIT: I got it, sorry for posting this.\n\n2. ## I'm new here but...\n\nHey I'm new here, but I would like to hear the answer to this one.\n\nI'm sure I'm doing this wrong. Could someone correct me?\n\nAll bowls have a 1\/5 chance of being selected. In order to select two white balls bowl 2, 3, 4 or 5 must be selected (since bowl 1 only has 1 white ball).\n\nSo the chances of getting 2 white balls should be:\n1\/5 * (the chance of picking 2 white balls from bowl 2) + 1\/5 * (the chance of picking 2 white balls from bowl 3 + .... + 1\/5 * (the chance of picking 2 white balls from bowl 5)\n\nIf bowl 2 is selected there is a 2\/5 chance of getting a white ball and then a 1\/4 chance of getting another one. So the chance of selecting bowl 2, and getting two white balls out of it is 1\/5 * (2\/5 * 1\/4) = 1\/50. 
Applying this same logic to the other bowls gives me:\n\n${1 \\over 5} \\left ({2 \\over 5} * {1 \\over 4} \\right) + {1 \\over 5} \\left ({3 \\over 5} * {2 \\over 4} \\right ) + {1 \\over 5} \\left ({4 \\over 5} * {3 \\over 4} \\right ) + {1 \\over 5} \\left ({5 \\over 5} * {4 \\over 4} \\right )$\n\n$= {1 \\over 50} + {3 \\over 50} + {3 \\over 25} + {1 \\over 5}$\n$= 2 \\over 5$\n\nSo there is a 40% chance of selecting 2 white balls, right?\n\nSo to answer B) I think it would be that if we selected 2 white balls, we must have selected either bowl 2, 3, 4 or 5. Now we only have 4 possible bowls that were selected. So I believe our equation would end up being\n\n${{1 \\over 4} \\left (P(2 white balls from bowl 3) \\right )\n\\over\n{1 \\over 4} \\left (P(2 white balls from bowl 2) \\right) +... + {1 \\over 4} \\left(P(2 white balls from bowl 5) \\right)}$\n\nTherefore, we end up with:\n${{1 \\over 4} \\left({3 \\over 5} * {2 \\over 4} \\right)\n\\over\n{1 \\over 4} \\left({2 \\over 5} * {1 \\over 4} \\right) + {1 \\over 4} \\left ({3 \\over 5} * {2 \\over 4} \\right) + {1 \\over 4} \\left({4 \\over 5} * {3 \\over 4} \\right) + {1 \\over 4} * \\left({5 \\over 5} * {4 \\over 4} \\right)}$\n\n$= {{3 \\over 40} \\over {1 \\over 40} + {3 \\over 40} + {3 \\over 20} + {1 \\over 4}}\n= {{3 \\over 40} \\over {2 \\over 4}}\n= {3 \\over 20}$\n\nSo that should be a 15% chance of having gotten the 2 white balls from bowl 3. 
Right?\n\nIs that at all correct?","date":"2016-07-30 14:15:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 6, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7570485472679138, \"perplexity\": 360.48477952256644}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-30\/segments\/1469257836399.81\/warc\/CC-MAIN-20160723071036-00209-ip-10-185-27-174.ec2.internal.warc.gz\"}"} | null | null |
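The probabilities worked out in the forum record above can be verified exactly by enumeration. The following short Python sketch (an editorial addition, not part of the original thread) uses exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

# Bowl i holds i white and 5-i black balls; two balls are drawn without
# replacement, so P(both white | bowl i) = C(i,2) / C(5,2).
p_ww_given_bowl = {i: Fraction(comb(i, 2), comb(5, 2)) for i in range(1, 6)}

# Part (a): total probability, with the bowl chosen uniformly at random.
p_ww = sum(Fraction(1, 5) * p for p in p_ww_given_bowl.values())
print(p_ww)  # 2/5

# Part (b): Bayes' rule, P(bowl 3 | both white).
p_bowl3 = Fraction(1, 5) * p_ww_given_bowl[3] / p_ww
print(p_bowl3)  # 3/20
```

This confirms both posted answers: 2/5 (40%) for part (a) and 3/20 (15%) for part (b); note the correct priors are 1/5 per bowl, and the poster's 1/4 weights only give the right answer in part (b) because they cancel in the ratio.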
\section{Introduction}
Throughout, we work over an algebraically closed field $k$ of characteristic 0.
Chow quotients of toric varieties were introduced by Kapranov, Sturmfels, and Zelevinsky in \cite{tquot}. Given a projective normal toric variety $X$ and a subtorus $T_0$ of the defining torus $T$, the \emph{Chow quotient} $X//T_0$ has the property that its normalization is the smallest toric variety which maps onto all GIT quotients of $X$ by $T_0$. We show in this paper that when $T_0$ has rank one, the normalization of $X//T_0$ can be reinterpreted as the coarse moduli space of the stack of stable log maps, introduced by Abramovich and the first author \cite{DF1, DF2}, and independently by Gross and Siebert \cite{grosssiebert}.
We begin by recalling the construction of $X//T_0$. For every point $x\in X$, the closure $Z_x:=\overline{T_0 x}$ of the orbit of $x$ under $T_0$ is a subvariety of $X$. For $x\in T$, the orbit closures $Z_x$ have the same dimension and homology class. We therefore obtain a morphism from $T':=T/T_0$ to the Chow variety $C(X)$ of algebraic cycles of the given dimension and homology class. The Chow quotient $X//T_0$ is defined as the closure of $T'$ in $C(X)$. It is a toric variety and the fan of its normalization is given explicitly in \cite[\S1]{tquot}.
Further assume now that $T_0$ is a rank one torus. Let $Z_1$ be the closure of $T_0$ in $X$. Then its normalization $\widetilde{Z}_1$ is isomorphic to $\PP^1$ and the induced morphism
\[
f_1:\PP^1\longrightarrow X
\]
can be viewed as a stable map with two marked points $\{0,\infty\}=\PP^1\setminus f_1^{-1}(T)$. Let $\beta_0$ be the curve class of the stable map $f_1$ and let $c_0$ and $c_\infty$ be the contact orders of $0$ and $\infty$ with respect to the toric boundary $X\setminus T$. Roughly speaking, $c_0$ and $c_\infty$ are functions which assign to the marked points their orders of tangency with the components of $X\setminus T$ (see \cite{evspace} for more details). In the toric case, the contact orders can be explained as the slopes and weights of the unbounded edges of tropical curves associated to stable log maps, see Section \ref{ss:log-data}.
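% Editorial addition: a toy example illustrating the data just introduced.
For orientation, we record the simplest instance of this setup.
\begin{remark}
Take $X=\PP^1$ with $T_0=T=\mathbb{G}_m$. Then $Z_x=\PP^1$ for every $x\in T$, the map $f_1$ is the identity on $\PP^1$ with marked points $0$ and $\infty$, the curve class is $\beta_0=[\PP^1]$, and the contact orders $c_0$ and $c_\infty$ each record order-one contact with the corresponding torus-fixed boundary point. Since $T'=T/T_0$ is trivial, the Chow quotient $X//T_0$ is a single point.
\end{remark}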
Our primary object of study in this paper is the stack $\cK_{\Gamma_0}(X)$ parameterizing stable log maps from rational curves with two marked points to $X$ such that the curve class is $\beta_0$ and the marked points have contact orders given by $c_0$ and $c_\infty$; here $\Gamma_0:=(0,\beta_0,2,\{c_0,c_\infty\})$ keeps track of the discrete data consisting of genus, curve class, number of marked points, and their tangency conditions. Our main result is:
\begin{theorem}
\label{thm:chowcs}
The normalization of $X//T_0$ is the coarse moduli space of $\cK_{\Gamma_0}(X)$.
\end{theorem}
\begin{remark}
\label{rmk:irred}
In particular, we see that $\cK_{\Gamma_0}(X)$ is irreducible.
\end{remark}
In the process of proving Theorem \ref{thm:chowcs}, we obtain an alternative description of $\cK_{\Gamma_0}(X)$ which is more akin to the construction of the Chow quotient. As we saw above, $X//T_0$ is defined as the closure of $T':=T/T_0$ in the Chow variety $C(X)$. Replacing $C(X)$ by other moduli spaces, we obtain alternate spaces birational to $X//T_0$. Letting $Z_x$ be the orbit closure $\overline{T_0 x}$ as above, we see that for all $x\in T$, the normalization $\widetilde{Z}_x$ is isomorphic to $\PP^1$. Thus, we obtain a stable map
\[
f_x:\PP^1\longrightarrow X
\]
with marked points $\{0,\infty\}=\PP^1\setminus f_x^{-1}(T)$. These $f_x$ all have curve class $\beta_0$, and we obtain an immersion
\[
T'\longrightarrow \fM_{0,2}(X,\beta_0),
\]
where $\fM_{0,2}(X,\beta_0)$ denotes the Kontsevich space of stable maps to $X$ with genus 0, curve class $\beta_0$, and two marked points. In analogy with the construction of the Chow variety, we let $\fM$ denote the closure of $T'$ in $\fM_{0,2}(X,\beta_0)$. Then we have:
\begin{theorem}
\label{thm:comptwostacks}
$\cK_{\Gamma_0}(X)$ is the normalization of $\fM$.
\end{theorem}
\begin{remark}
There is an analogous picture if one assumes that $X$ is an affine normal toric variety and replaces $\fM_{0,2}(X,\beta_0)$ above by the toric Hilbert scheme, as defined in \cite{peevastillman}. That is, for all $x\in T$, the $Z_x$ are $T'$-invariant closed subschemes of $X$ which have the same discrete invariants. We therefore obtain an immersion from $T'$ to an appropriate toric Hilbert scheme. The closure of $T'$ in this toric Hilbert scheme is called the main component. In \cite[Thm 1.7]{loghilb}, Olsson shows that the normalization of the main component has a natural moduli interpretation in terms of log geometry. Theorem \ref{thm:comptwostacks} above can therefore be viewed as an analogue of Olsson's theorem, replacing his use of the toric Hilbert scheme by the Kontsevich space. That is, we show that the normalization of $\fM$ carries a moduli interpretation in terms of stable log maps.
\end{remark}
Recall that given any collection of discrete data $\Gamma=(g,\beta,n,\{c_i\}_{i=1}^n)$, it is shown in \cite{DF1, DF2, grosssiebert} that there is a proper Deligne-Mumford stack $\cK_\Gamma(X)$ which parameterizes stable log maps to $X$ from genus $g$ curves with $n$ marked points having curve class $\beta$ and contact orders given by the $c_i$.\footnote{Strictly speaking, \cite{DF1, DF2} only consider log schemes which are generalized Deligne-Faltings (see Definition \ref{def:genDF}), so to apply their theory, one must first show that the natural log structure on $X$ satisfies this hypothesis. This is done in Proposition \ref{prop:genDF}, which we relegate to an appendix since the theory developed in \cite{grosssiebert} is already known to apply to toric varieties.} We show in Proposition \ref{prop:log-smooth} that if $g=0$, then $\cK_\Gamma(X)$ is log smooth, and in particular normal. This is a key ingredient in the proof of Theorem \ref{thm:comptwostacks}, which we give in Section \ref{sec:logsm}. In Section \ref{sec:tropcurves}, following \cite{NiSi, grosssiebert}, we explain the relationship between tropical curves and stable log maps to toric varieties. While the use of tropical curves is not strictly necessary for this paper, they serve as a convenient tool to study the boundary of $\cK_\Gamma(X)$. Theorem \ref{thm:chowcs} is then proved in Section \ref{sec:cs}.\\
\\
\noindent\textbf{Prerequisites:} We assume the reader is familiar with logarithmic geometry in the sense of Fontaine-Illusie-Kato
(see for example \cite{kato} or \cite{logbook}).\\
\\
\noindent\textbf{Acknowledgments:}
We would like to thank Dan Abramovich, Dustin Cartwright, Anton Geraschenko, Noah Giansiracusa, and Martin Olsson. The first author was partially supported by the Simons Foundation. The second author was partially supported by NSF grant DMS-0943832 and an NSF postdoctoral fellowship (DMS-1103788).
\section{Log smoothness and irreducibility}
\label{sec:logsm}
Throughout this section, $X$ is a projective normal toric variety of dimension $d$ and $\Gamma$ is the discrete data $(0,\beta,n,\{c_i\})$. Let $T$ be the defining torus of $X$ and $M$ be the character lattice of $T$.
\begin{proposition}\label{prop:log-smooth}
$(\cK_\Gamma(X),\MM_{\cK_\Gamma(X)})$ is log smooth over $(k,\OO_k^*)$. Moreover, $\dim \cK_\Gamma(X) = \dim X + n-3$.
\end{proposition}
\begin{proof}
The universal curve on $\cK_\Gamma(X)$ induces a morphism of log stacks:
\[
\pi:(\cK_\Gamma(X),\MM_{\cK_\Gamma(X)})\longrightarrow (\fM_{0,n},\MM_{\fM_{0,n}}),
\]
where $(\fM_{g,n},\MM_{\fM_{g,n}})$ denotes the log stack of $(g,n)$-prestable curves; see \cite{fkato1} and \cite[Thm 1.10]{logcurve} for the definition and construction of this log stack.
Since $(\fM_{g,n},\MM_{\fM_{g,n}})$ is log smooth over $(k,\OO_k^*)$, it suffices to show that $\pi$ is log smooth. By \cite[Thm 4.6]{LogStack}, this is equivalent to showing that the induced morphism
\[
\pi':\cK_\Gamma(X)\longrightarrow\mathcal{L}og_{(\fM_{0,n},\MM_{\fM_{0,n}})}
\]
of stacks is smooth, where $\mathcal{L}og_{(S,\MM_S)}$ is the stack of log morphisms to a log scheme $(S,\MM_S)$, as defined in the introduction of (loc. cit.).\\
\\
Let $i:\Spec A\rightarrow\Spec A'$ be a square zero thickening of Artin local rings and let
\[
\xymatrix{
\Spec A\ar[r]\ar[d]_{i} & \cK_{\Gamma}(X)\ar[d]^{\pi'}\\
\Spec A'\ar[r] & \mathcal{L}og_{(\fM_{0,n},\MM_{\fM_{0,n}})}
}
\]
be a commutative diagram. We may view this as a commutative diagram of log stacks, by endowing the Artin local rings with the log structure pulled back from $\mathcal{L}og_{(\fM_{0,n},\MM_{\fM_{0,n}})}$. Hence the two vertical arrows are strict. Denote the induced log structures on $\Spec A$ and $\Spec A'$ by $\MM_A$ and $\MM_{A'}$, respectively. We therefore have a log smooth curve $h'$, a cartesian diagram
\[
\xymatrix{
(C,\MM_{C})\ar[r]\ar[d]_{h} & (C',\MM_{C'})\ar[d]^{h'}\\
(\Spec A,\MM_A)\ar[r] & (\Spec A',\MM_{A'})
}
\]
and a minimal stable log map $f:(C,\MM_C)\to (X,\MM_X)$, which we must show deforms to a minimal stable log map $f':(C',\MM_{C'})\to (X,\MM_X)$.
Since the minimality condition is open by \cite[Prop 3.5.2]{DF1}, it suffices to show that $f$ deforms as a morphism of log schemes.\\
\\
By standard arguments in deformation theory, it is enough to consider the case where the kernel $\cI$ of $A'\to A$ is principal and killed by the maximal ideal $\mathfrak{m}$ of $A'$. Then the obstruction to deforming $f$ to a morphism of log schemes lies in
\[
\Ext^1(f_0^*\Omega^1_{(X,\MM_X)/k},\cO_{C_0})\otimes_{k} \cI
\]
where $f_0$ denotes the reduction of $f$ mod $\mathfrak{m}$, and $C_0$ denotes the fiber of $C$ over $A'/\mathfrak{m}=k$. By \cite[Ex 5.6]{fkato2},
\[
\Omega^1_{(X,\MM_X)/k}\simeq\cO_X\otimes_\ZZ M.
\]
Therefore,
\[
\Ext^1(f_0^*\Omega^1_{(X,\MM_X)/k},\cO_{C_0})=H^1(\cO_{C_0}^d)=0
\]
where the last equality holds because $C_0$ is a curve of arithmetic genus $0$. This shows that $(\cK_\Gamma(X),\MM_{\cK_\Gamma(X)})$ is log smooth.
To prove the claim about the dimension of $\cK_\Gamma(X)$, note that
\[
\dim \Ext^0(f_0^*\Omega^1_{(X,\MM_X)/k},\cO_{C_0})= \dim H^{0}(\cO_{C_0}^{d}) = d,
\]
and so $\pi$ has relative dimension $d$. Since $\dim \fM_{0,n} = n-3$, we see $\dim\cK_\Gamma(X)=d+n-3$.
\end{proof}
Let $\cK^{\circ}_{\Gamma}(X)$ denote the non-degeneracy locus, that is, the locus of $\cK_{\Gamma}(X)$ where the log structure $\MM_{\cK_\Gamma(X)}$ is trivial. By Proposition \ref{prop:log-smooth} and \cite[Prop 2.6]{niziol}, $\cK^{\circ}_{\Gamma}(X)$ is an open dense subset of $\cK_{\Gamma}(X)$. Consider the Kontsevich moduli space of stable maps $\fM_{0,n}(X,\beta)$. The forgetful map
\[
\Phi:\cK_{\Gamma}(X) \to \fM_{0,n}(X,\beta).
\]
sending a stable log map to its underlying stable map induces a locally closed immersion
\[
\cK_{\Gamma}^{\circ}(X) \to \fM_{0,n}(X,\beta).
\]
Let $\fM_{\Gamma}(X)$ be the closure of $\cK_{\Gamma}^{\circ}(X)$ in $\fM_{0,n}(X,\beta)$. Then $\Phi$ factors through a morphism
\[
\phi:\cK_{\Gamma}(X) \to \fM_{\Gamma}(X).
\]
\begin{lemma}
\label{l:normalization}
$\phi$ is the normalization map.
\end{lemma}
\begin{proof}
By \cite[Corollary 3.10]{DF2} and Proposition \ref{prop:genDF}, the morphism $\Phi$ is representable and finite, and so $\phi$ is as well. Since $(\cK_\Gamma(X),\MM_{\cK_\Gamma(X)})$ is fs and log smooth over $(k,\OO_k^*)$ by Proposition \ref{prop:log-smooth}, it follows that $\cK_\Gamma(X)$ is normal. Since $\phi$ is an isomorphism over $\cK^\circ_\Gamma(X)$, it is birational, and so by Zariski's Main Theorem, $\phi$ is the normalization map.
\end{proof}
For the rest of this section, we return to the setting and notation of the introduction, and let $\Gamma=\Gamma_0$. Just as $X//T_0$ (resp $\fM$) is constructed by taking the closure of $T'$ in the Chow variety (resp the Kontsevich space), we can perform a similar construction with $\cK_\Gamma(X)$. Namely, for $x\in T$, the stable map
\[
f_x: \PP^1\to X
\]
is naturally a stable log map. We therefore obtain a morphism $T'\to \cK_\Gamma(X)$. Let $\mf{X}_\Gamma$ denote the closure of $T'$ in $\cK_\Gamma(X)$.
The forgetful morphism $\Phi$
then induces a map
\[
\phi':\mf{X}_\Gamma\longrightarrow \fM.
\]
\begin{lemma}
\label{l:XGammaopen}
$\mf{X}_\Gamma$ is an open substack of $\cK_\Gamma(X)$, and so $\phi'$ is the normalization map.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{l:normalization}, $\phi'$ is representable and finite. If $\mf{X}_\Gamma$ is an open substack of $\cK_\Gamma(X)$, it is then normal. Since $\phi'$ is an isomorphism over $T'$,
Zariski's Main Theorem shows that it is the normalization map.\\
\\
To show that $\mf{X}_\Gamma$ is open in $\cK_\Gamma(X)$, it suffices to prove that $\cK^{\circ}_\Gamma(X)$ and $\mf{X}^{\circ}_\Gamma:=\mf{X}_\Gamma\cap\cK^{\circ}_\Gamma(X)$ have the same dimension. Since $T'$ is dense in $\mf{X}_\Gamma$, we see that $\mf{X}_\Gamma$ has dimension $d-1$. On the other hand, the map
\[
\pi:(\cK_\Gamma(X),\MM_{\cK_\Gamma(X)})\longrightarrow (\fM_{0,2},\MM_{\fM_{0,2}})
\]
in the proof of Proposition \ref{prop:log-smooth} induces a map
\[
\cK^{\circ}_\Gamma(X)\longrightarrow\fM^{\circ}_{0,2},
\]
where $\fM^{\circ}_{0,2}$ denotes the open substack of $\fM_{0,2}$ with smooth fiber curves. By Proposition \ref{prop:log-smooth}, we see that $\cK^{\circ}_{\Gamma}(X)$ has dimension $d-1$.
\end{proof}
Since $\phi'$ is the normalization map, to prove Theorem \ref{thm:comptwostacks}, we must show $\mf{X}_\Gamma = \cK_\Gamma(X)$. Since $\mf{X}_\Gamma$ is an open and closed substack of $\cK_\Gamma(X)$, the following proposition suffices.
\begin{prop}
\label{prop:irred}
$\cK_{\Gamma}(X)$ is irreducible.
\end{prop}
\begin{proof}
It is enough to prove that $\cK^{\circ}_{\Gamma}(X)$ is irreducible. Let $s\in\cK^{\circ}_{\Gamma}(X)(k)$, and let $f:\mathbb{P}^1\to X$ be the stable log map corresponding to $s$. Note that the log structure of $X$ is non-trivial everywhere along the boundary. Since the log structure is trivial at $s$, the image of $f$ necessarily meets $T$. To prove that $\cK^{\circ}_{\Gamma}(X)$ is irreducible, it is enough to show that we can act on $f$ by an element of $T$ to obtain a map isomorphic to $f_1$ from the introduction.
After acting on $f$ by some element of $T$, we may assume that $f$ sends $1\in\PP^1$ to
$1\in T\subset X$. Choose a maximal cone $\sigma$ in the fan of $X$ such that the associated affine open toric variety $U\subset X$ contains $f(0)$. Restricting $f$ to $U$, we obtain a map $f': V=\Spec k[t] \longrightarrow U$.
Let $P$ be the monoid $\sigma^{\vee}\cap M$ and let $e_1,\dots,e_\ell$ be the irreducible elements of $P$. We see that for each $i$,
\[
f^*(e_i)=t^{c_i}a_i,
\]
where $c_i$ is the contact order prescribed by $\Gamma$ and $a_i$ is some element of $k[t]$. Note that if $\alpha\in k$ is a root of $a_i$, then the point $t=\alpha$ is mapped to the toric boundary; however, the contact orders given by $\Gamma$ imply that $t=0$ is the only point in $V$ which maps to the boundary. Hence, $a_i$ must be a non-zero constant multiple of a power of $t$. But if $a_i$ is divisible by $t$, then the contact order of $t=0$ along $e_i=0$ is greater than $c_i$. Therefore, $a_i$ must be a non-zero constant.
Now observe that the point $1\in T\subset U$ is given by $e_i=1$ for all $i$. Since $f(1)=1$, the equation $f^*(e_i)=t^{c_i}a_i$ shows that $a_i=1$. This shows that $f$ is uniquely determined over $U$. Since $f_1$ also satisfies these constraints, we see that $f$ and $f_1$ agree over $U$. Since $f$ and $f_1$ are two maps from $\PP^1$ to $X$ which agree on a dense open subset of the source, they are equal.
\end{proof}
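To see the mechanism of the proof of Proposition \ref{prop:irred} in coordinates, here is an illustrative example; the chart and contact orders below are hypothetical and are not taken from the source.

```latex
% Illustrative example (hypothetical data): suppose the chart is
% U = \Spec k[e_1,e_2] \cong \AA^2, with e_1, e_2 the irreducible elements
% of P, and suppose \Gamma prescribes contact orders c_1 = 1, c_2 = 2 at
% t = 0. Then
%   f^*(e_1) = t\, a_1, \qquad f^*(e_2) = t^2 a_2,
% with a_1, a_2 non-zero constants by the argument above, and the
% normalization f(1) = 1 (i.e. e_i(f(1)) = 1) forces a_1 = a_2 = 1. Hence
\[
  f' : V=\Spec k[t] \longrightarrow U, \qquad t \longmapsto (t,\, t^2),
\]
% is the monomial map of a one-parameter subgroup, uniquely determined by
% \Gamma and the condition f(1) = 1, exactly as for f_1.
```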
\section{Tropical curves associated to stable log maps}
\label{sec:tropcurves}
The goal of this section is to prove Proposition \ref{prop:chain-ofP1s}. Following \cite{NiSi, grosssiebert}, we explain the connection between tropical curves and stable log maps to toric varieties.
\subsection{Review of tropical curves}
Let $\overline{G}$ be the geometric realization of a weighted, connected finite graph with weight function $\omega$. That is, $\overline{G}$ is the CW complex associated to a finite connected graph with vertex set $\overline{G}^{[0]}$ and edge set $\overline{G}^{[1]}$, and
\[
\omega:\overline{G}^{[1]}\to \NN
\]
is a function. Here we allow $\overline{G}$ to have divalent vertices. Given an edge $l \in \overline{G}^{[1]}$, we denote its set of adjacent vertices by $\partial l$. If $l$ is a loop, then we require $\omega(l)=0$.
Let $\overline{G}^{[0]}_{\infty}\subset \overline{G}^{[0]}$ be the set of one-valent vertices, and let
\[
G := \overline{G} \setminus \overline{G}^{[0]}_{\infty}.
\]
Let $G^{[1]}_{\infty}$ be the set of non-compact edges in $G$, which we refer to as {\em unbounded edges}. A {\em flag} of $G$ is a pair $(v,l)$ where $l$ is an edge and $v\in\partial l$. We let $FG$ be the set of flags of $G$, and for each vertex $v$, we let
\[FG(v) := \{ (v,l) \in FG\}.\]
Let $N$ be a lattice and $M = N^{\vee}$. We let $N_{\QQ}:=N\otimes_{\ZZ}\QQ$ and $N_{\RR}:=N\otimes_{\ZZ}\RR$.
\begin{definition}\label{def:tropical-curve}
A {\em parameterized tropical curve} in $N_{\QQ}$ is a proper map $\varphi: G \to N_{\RR}$ of topological spaces satisfying the following conditions:
\begin{enumerate}
\item For every edge $l$ of $G$, the restriction $\varphi|_{l}$ is a dilation by a factor of $\omega(l)$, with image $\varphi(l)$ contained in an affine line with rational slope. If $\omega(l) = 0$, then $\varphi(l)$ is a point.
\item For every vertex $v$ of $G$, we have $\varphi(v) \in N_{\QQ}$.
\item For each $(v,l) \in FG(v)$, let $u_{v,l}$ be a primitive integral vector emanating from $\varphi(v)$ along the direction of $\varphi(l)$. Then
\[
\epsilon_v:=\sum_{(v,l)\in FG(v)}\omega(l)u_{v,l} = 0,
\]
which we refer to as the {\em balancing condition}.
\end{enumerate}
\end{definition}
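As a sanity check on Definition \ref{def:tropical-curve}, consider the standard tropical line in $\RR^2$; this is a classical example, and the specific weights below are our own illustration.

```latex
% The tropical line: one vertex v at the origin and three unbounded edges
% l_1, l_2, l_3 of weight \omega(l_i) = 1 with primitive directions
%   u_{v,l_1} = (1,0), \quad u_{v,l_2} = (0,1), \quad u_{v,l_3} = (-1,-1).
% The balancing condition at v reads
\[
  \epsilon_v \;=\; 1\cdot(1,0) \,+\, 1\cdot(0,1) \,+\, 1\cdot(-1,-1)
  \;=\; (0,0),
\]
% so this defines a tropical curve. Doubling the weight of a single edge
% would break balancing: the weights are constrained by the directions.
```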
An {\em isomorphism} of tropical curves $\varphi: G \to N_{\RR}$ and $\varphi' : G' \to N_{\RR}$ is a homeomorphism $\Phi: G \to G'$ compatible with the weights of the edges such that $\varphi = \varphi'\circ \Phi$.
A {\em tropical curve} is an isomorphism class of parameterized tropical curves.
\subsection{Tropical curves from non-degenerate stable log maps}\label{ss:log-data}
Let $(X,\MM_X)$ be a toric variety with its standard log structure, and let $T \subset X$ be its defining torus. We denote by $N$ the lattice of one-parameter subgroups of $T$. Let $f: (C,\MM_C) \to (X,\MM_X)$ be a stable log map over $(S,\MM_S)$ with $S$ a geometric point. Further assume that $f$ is non-degenerate; that is, the log structure $\cM_{S}$ is trivial.
In this subsection, we show how to assign a tropical curve $\Trop(f): G\to N_\RR$ to any such non-degenerate stable log map $f$. To begin, let $G$ be the graph with a single vertex $v$, which we think of as being associated to the unique component of $C$, and with one unbounded edge for each marked point of $C$. We let $\Trop(f)(v)=0$.
Let $l$ be an edge corresponding to a marked point $p$ of $C$. If $p$ has trivial contact orders, then we set $\omega(l) = 0$ and let $\Trop(f)$ contract $l$ to $0$. Otherwise, the contact order is equivalent to giving a non-trivial map
\[
c_{l}: \ocM_{X,f(p)} \to \bbar{\MM}_{C,p}=\NN.
\]
Note that we have a surjective cospecialization map of groups
\[
M:=N^{\vee} \to \ocM^{gp}_{X,f(p)}
\]
corresponding to the specialization of the generic point of $T$ to $f(p)$. Composing with $c_l^{gp}$, we obtain a map
\[
\mu_{l}: M \to \ZZ,
\]
which defines an element $\mu_{l} \in N$. Let $u_{l}$ be the primitive vector with slope given by $\mu_{l} \in N$. We define $\omega(l)$ to be the positive integer such that $\mu_{l} = \omega(l) u_{l}$, and define the image $\Trop(f)(l)$ to be the unbounded ray emanating from $0$ along the direction of $u_{l}$. This defines our desired map $\Trop(f): G \to N_{\RR}$ up to reparameterization.
\begin{proposition}
\label{prop:nondegentropcurve}
$\Trop(f): G \to N_{\RR}$ defines a tropical curve.
\end{proposition}
\begin{proof}
It remains to check that the balancing condition holds, i.e.\ that $\epsilon_v=0$. Every $m\in M$ defines a rational function on $C$, and the degree of the associated Cartier divisor is $\epsilon_v(m)$. Since a principal divisor on a proper curve has degree $0$, we see $\epsilon_v(m)=0$ for all $m$, and hence $\epsilon_v=0$ in $N=M^\vee$.
\end{proof}
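For a non-degenerate stable log map, the construction above produces a single vertex at the origin with one ray per marked point. As a hedged illustration (the map, fan, and contact data below are hypothetical):

```latex
% Illustrative example (hypothetical data): let X be a proper toric surface
% whose fan contains rays through (1,2) and (-1,-2) in N = \ZZ^2, and let
% f : \PP^1 \to X be the non-degenerate genus-0 map given on the torus by
% t \mapsto (t, t^2), marked at t = 0 and t = \infty. The construction gives
%   \mu_{l_0} = (1,2), \qquad \mu_{l_\infty} = (-1,-2),
% both primitive, so \omega(l_0) = \omega(l_\infty) = 1 and
\[
  \Trop(f)(G) \;=\; \RR_{\geq 0}\,(1,2) \;\cup\; \RR_{\geq 0}\,(-1,-2),
\]
% the line through 0 with direction (1,2); balancing at the unique vertex
% reads \mu_{l_0} + \mu_{l_\infty} = 0.
```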
\subsection{Tropical curves from stable log maps over the standard log point}
Let $(X,\MM_X)$ be a toric variety with its standard log structure, and let $T\subset X$ be its defining torus. Fix discrete data $\Gamma=(g,\beta,n,\{c_i\})$ and let $f: (C,\MM_C) \to (X,\MM_X)$ be a stable log map with discrete data $\Gamma$ over the standard log point $(S,\MM_S)$; that is, $S$ is a geometric point and $\MM_S$ is the log structure associated to the map $\NN\to\cO_S$ sending $1$ to $0$. This is equivalent to giving a (not necessarily strict) log map
\[(S,\cM_{S}) \to (\cK_{\Gamma}(X), \cM_{\cK_{\Gamma}(X)}),\]
and the stable log map $f$ is obtained by pulling back the universal stable log map over $(\cK_{\Gamma}(X), \cM_{\cK_{\Gamma}(X)})$. In this subsection, we associate a tropical curve
\[
\Trop(f):G\to N_\RR
\]
to $f$ by modifying the construction given in \cite[\S1.3]{grosssiebert}.
We define $G$ to be the dual graph of $C$, with an unbounded edge attached for each marked point. Given a vertex $v$, let $t$ be the generic point of the corresponding component of $C$. We therefore have a morphism
\[
\ocM_{X,f(t)} \to \ocM_{C,t}=\NN
\]
of monoids. Taking the associated groups and composing with the cospecialization map $M \to \bbar{\cM}_{X,f(t)}^{gp}$ yields a map
\[
\tau_v:M\to \ZZ,
\]
and hence a point in $N$. We define $\Trop(f)(v)=\tau_v$.
Let $l$ be an edge of $G$. If $\partial l=\{v,v'\}$ and $v\neq v'$, then we define the image of $l$ under $\Trop(f)$ to be the line segment joining $\tau_v$ and $\tau_{v'}$. In this case, $\tau_{v'}-\tau_v=e_l\mu_l$, where $e_l\in\bbar{\MM}_S=\NN$ is the section which smooths the node corresponding to $l$, and $\mu_l$ is an element of $N$. We define $\omega(l)$ to be the positive integer such that $\mu_l=\omega(l)u_l$, where $u_l$ is a primitive integral vector.
Suppose now that $l$ is an unbounded edge corresponding to a marked point $p$. If $p$ has trivial contact orders, then we set $\omega(l)=0$ and let $\Trop(f)$ contract $l$ to $\tau_v$, where $\partial l=\{v\}$. Otherwise, the contact orders of $p$ define a non-trivial map
\[
c_l:\bbar{\MM}_{X,f(p)}\to\bbar{\MM}_{C,p}=\NN\oplus\bbar{\MM}_S\to \NN,
\]
where the last map is the projection. Again taking the associated groups and composing with the cospecialization map $M \to \bbar{\cM}^{gp}_{X,f(p)}$, we obtain
\[
\mu_{l}: M \to \ZZ.
\]
We define $\omega(l)$ to be the positive integer such that $\mu_l=\omega(l)u_l$, where $u_l\in N$ is a primitive integral vector, and we let $\Trop(f)(l)$ be the unbounded ray emanating from $\tau_v$ in the direction of $u_l$.
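Here is a minimal worked instance of the bounded-edge case of this construction; the numerical data is invented purely for illustration.

```latex
% Illustrative example (hypothetical data): suppose C = C_1 \cup C_2 has
% two components meeting in one node q, so G has one bounded edge l joining
% v_1, v_2 (plus unbounded edges we ignore here). If, in N = \ZZ^2,
%   \tau_{v_1} = (0,0), \qquad \tau_{v_2} = (2,4),
% and the smoothing parameter of q is e_l = 2 \in \bbar{\MM}_S = \NN, then
\[
  \tau_{v_2} - \tau_{v_1} \;=\; (2,4) \;=\; e_l\,\mu_l
  \quad\Longrightarrow\quad \mu_l = (1,2) = \omega(l)\,u_l,
\]
% so \omega(l) = 1 and u_l = (1,2): \Trop(f) maps l to the segment from
% (0,0) to (2,4).
```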
\begin{prop}\label{prop:map-trop-curve}
$\Trop(f): G \to N_{\RR}$ defines a tropical curve.
\end{prop}
\begin{proof}
We must check that the balancing condition holds at each vertex $v$ of $G$. As in the proof of Proposition \ref{prop:nondegentropcurve}, every $m\in M$ defines a rational function on the irreducible component of $C$ corresponding to $v$, and the degree of the associated Cartier divisor is $\epsilon_v(m)$. Since this degree is $0$, we conclude $\epsilon_v=0$; cf.\ \cite[Proposition 1.14]{grosssiebert}.
\end{proof}
\begin{remark}
\label{rmk:specialization}
Let $R$ be the complete local ring of $\AA^1$ at the origin, and let $\MM_R$ be the log structure on $R$ induced by the standard log structure on $\AA^1$. Denote the closed and generic points of $\Spec R$ by $0$ and $\eta$, respectively. Suppose $h:(\cC,\cM_{\cC})\to (X,\cM_X)$ is a stable log map over $R$ with discrete data $\Gamma$ such that $h_0=f$. Note that $h_\eta$ is a non-degenerate stable log map. For each marked section $p:\Spec R\to \cC$, let $l_0$ and $l_\eta$ be the edges of the dual graphs of $\cC_0$ and $\cC_\eta$ corresponding to the marked points $p_0$ and $p_\eta$, respectively. Consider the morphism
\[
\bbar{\cM}_{X}|_{h(p)}\to\bbar{\cM}_{\cC}|_p=\NN\oplus\bbar{\cM}_R\to\NN,
\]
where the last map is the projection. Taking associated groups and precomposing with the map $M\to \bbar{\cM}^{gp}_{X}|_{h(p)}$, we obtain a map $M\to\ZZ$ of constant sheaves on $\Spec R$ whose special and generic fibers are $\mu_{l_0}$ and $\mu_{l_\eta}$. Hence, we see $\mu_{l_0}=\mu_{l_\eta}$.
\end{remark}
\begin{comment}
Fix the discrete data $\Gamma$, and take a non-degenerate stable log map $f: (C,\cM_{C}) \to (X,\cM_{X})$ over a standard log point $(S,\cM_{S})$, i.e.\ the minimal base monoid $\ocM$ of $f$ is trivial. This implies that $C$ is a smooth curve with its generic point mapping to $T$. Consider the associated tropical curve $\Trop(f): G \to N_{\RR}$. First notice that $G$ has a unique vertex $v$ associated to the unique irreducible component of $C$. It follows from the construction of the associated tropical curve that
\[Trop(f)(v) = 0 \in N_{\RR}.\]
Furthermore, notice that all the edges of $G$ are unbounded, and their images under $Trop(f)$ are uniquely determined by the contact orders $c_{l}$. In fact, we have
\begin{corollary}\label{cor:log-data}
\begin{enumerate}
\item The discrete data $\Gamma=(g,\beta,n,\{c_l\})$ and the above log discrete datum (without balancing condition) $h: G \to N_{\RR}$ uniquely determine each other. \footnote{Here the genus $g$ does not necessarily equal zero.}
\item If $\Gamma=(0,\beta,n,\{c_l\})$ is the discrete data associated to some stable log map, then $h: G \to N_{\RR}$ satisfies the balancing condition, hence is a log discrete datum.
\end{enumerate}
\end{corollary}
\begin{proof}
First notice that the existence and uniqueness of the map $Trop(f)$ follows directly from the construction of the associated tropical curves, and does not depend on the existence of the non-degenerate map $f$. The balancing condition follows from Proposition \ref{prop:map-trop-curve}. We also notice that in the toric situation, the curve class $\beta$ and the degree $\Delta_{h}$ of the tropical curve determines each other, see for example \cite[Theorem 3.1]{FS}.
\end{proof}
\begin{rem}
Given arbitrary discrete data $\Gamma$, the stack $\cK_{\Gamma}(X)$ may be empty in general unless the curve class $\beta$ and the contact orders $\{c_{l}\}$ satisfy a compatibility condition as in \cite[(3.6.1)]{DF1} or, more generally, \cite[Proposition 1.14]{grosssiebert}. When the target is a toric variety with its standard log structure, this compatibility condition is exactly the balancing condition for the associated tropical curves. Later, when we mention discrete data, we mean the log discrete datum satisfying the balancing condition.
\end{rem}
Let $\Sigma$ be the fan of the toric variety $X$. Given a cone $\sigma\in\Sigma$, let $\cO_\sigma$ denote the corresponding orbit of $X$ associated to the cone $\sigma$. Then we have the following relationship between $\Trop(f)$ and $\Sigma$.
\begin{lemma}\label{lem:curve-interior}
Let $A$ be a component of $C$ and let $v$ be the corresponding vertex of $G$. Then $\tau_v$ is in the interior of a cone $\sigma\in\Sigma$ if and only if the generic point of $A$ maps to $\cO_\sigma$.
\end{lemma}
\begin{proof}
Let $t$ be the generic point of $A$. Then $t$ maps to $\cO_\sigma$ if and only if $\bbar{\MM}_{X,f(t)}\to \bbar{\MM}_{C,t}$ factors through $P_\sigma/P_{\sigma}^{*}=\bbar{\MM}_{\cO_\sigma,q}$ for a general point $q$ of $\cO_\sigma$. Here $P_{\sigma}:= \sigma^{\vee}\cap M$ and $P^{*}_{\sigma}$ is the group of all invertible elements in $P_{\sigma}$. This occurs if and only if $\tau_v:M\to \NN$ factors through $P_\sigma^{gp}/P_{\sigma}^{*}$, that is, if $\tau_v$ is in the interior of $\sigma$.
\end{proof}
Consider an unbounded edge $l$ corresponding to a marked point $p$. Denote by $e_{l} = (b_{1},\cdots, b_{n})\in N$ the primitive vector associated to $l$. This gives a one-parameter subgroup $\lambda^{e_{l}}: \GG_m\to T$ by $t \mapsto ( t^{b_{1}},\cdots, t^{b_{n}})$. Write $\lambda(e_{l}) = \lim_{t \to 0}\lambda^{e_{l}}(t)$ in $X$. This is a distinguished point of an orbit, which we write $\cO_{e_{l}}$. In fact, if $\sigma_{l}$ is the smallest cone containing $e_{l}$, then $\cO_{\sigma_{l}} = \cO_{e_{l}}$.
\begin{lem}\label{lem:marking-position}
The image of $p$ lies in the closure $\overline{\cO}_{e_{l}}$.
\end{lem}
\begin{proof}
This can be checked by a direct calculation.
\end{proof}
\begin{proposition}\label{prop:curve-structure}
Let $l$ be an edge of $G$ and $l^\circ=l\setminus\partial l$. Then $\Trop(f)(l^\circ)$ is contained in the interior of some cone of $\Sigma$.
\end{proposition}
\begin{proof}
If $l$ corresponds to a node $q\in C$, then $\partial l=\{v,v'\}$. Assume that $v\neq v'$; otherwise the image $\Trop(f)(l^{\circ})$ is a point, and the statement follows automatically. Let $\sigma,\sigma'\in\Sigma$ be such that $\tau_v$ lies in the interior of $\sigma$ and $\tau_{v'}$ lies in the interior of $\sigma'$. Then $f(q)\in \overline{\cO}_\sigma\cap \overline{\cO}_{\sigma'}$, and so $\overline{\cO}_\sigma\cap \overline{\cO}_{\sigma'}\neq\varnothing$. Therefore, there exists a cone $\tau\in\Sigma$ such that $\sigma$ and $\sigma'$ are faces of $\tau$. Then the line segment joining $\tau_v$ and $\tau_{v'}$, namely $\Trop(f)(l^\circ)$, is contained in the interior of $\tau$.
Now suppose $l$ corresponds to a marked point $p$. Let $\partial l=\{v\}$ and let $t$ be the generic point of the component of $C$ corresponding to $v$. Denote by $\sigma$ the smallest cone containing the infinite part of $l$, and by $\tau$ the smallest cone containing $\Trop(f)(v)$. Then by Lemma \ref{lem:marking-position}, we have
\[f(p) \in \overline{\cO}_{\sigma} \subset \overline{\cO}_{e_{l}}.\]
On the other hand, since the marked point is on the component $v$, we have
\[f(p) \in \overline{\cO}_{\tau}.\]
Hence we deduce that $\tau$ is a face of $\sigma$ (possibly $\tau = \sigma$).
\end{proof}
\end{comment}
The following result plays an important role in the proof of Theorem \ref{thm:chowcs}.
\begin{proposition}
\label{prop:chain-ofP1s}
If the discrete data $\Gamma$ is given by $g=0$, $n=2$, and $\beta \neq 0$, then $\Trop(f)$ is an embedding whose image is a line. Moreover, $C$ is a chain of $\PP^1$s and $f$ does not contract any components of $C$.
\end{proposition}
\begin{proof}
Since $\cK_\Gamma(X)$ is log smooth by Proposition \ref{prop:log-smooth}, there exists a stable log map $h:(\cC,\cM_{\cC})\to (X,\cM_X)$ over $(R,\cM_R)$ as in Remark \ref{rmk:specialization}. Let $p,p':\Spec R\to\cC$ be the two marked sections, and let $l_0$, $l'_0$, $l_\eta$, $l'_\eta$ be the corresponding edges of the dual graphs of $C$ and $\cC_\eta$. Since $\beta\neq0$, the two marked points $p_\eta$ and $p'_\eta$ of $\cC_\eta$ have non-trivial contact orders. The balancing condition for $\Trop(h_\eta)$ then shows $\mu_{l'_\eta}=-\mu_{l_\eta}\neq0$. By Remark \ref{rmk:specialization}, we therefore have $\mu_{l'_0}=-\mu_{l_0}\neq0$. In particular, $\Trop(f)$ maps $l_0$ and $l'_0$ to unbounded rays.
We next show that if $l$ is an edge of $G$, then $\Trop(f)(l)$ is a point, or it is a line segment or ray parallel to $\mu_{l_0}$. Suppose $\Trop(f)(l)$ is not a point. If $\Trop(f)(l)$ is unbounded, then $l$ is $l_0$ or $l'_0$, and so $\Trop(f)(l)$ is parallel to $\mu_{l_0}$. Otherwise, $\Trop(f)(l)$ is a line segment and $\partial l=\{v,v_1\}$ with $v\neq v_1$. If $\Trop(f)(l)$ is not parallel to $\mu_{l_0}$, then the balancing condition shows that there is an edge $l_1\neq l$ with $v_1\in \partial l_1$ such that $\Trop(f)(l_1)$ is not a point and not parallel to $\mu_{l_0}$. Since the only unbounded edges are $l_0$ and $l'_0$, the edge $l_1$ is bounded, so $\Trop(f)(l_1)$ is a line segment, and we write $\partial l_1=\{v_1,v_2\}$. Again, the balancing condition shows that there is an edge $l_2$ containing $v_2$ such that $\Trop(f)(l_2)$ is a line segment not parallel to $\mu_{l_0}$. Since $C$ has genus $0$, the edges $l$, $l_1$, and $l_2$ are distinct. Continuing in this manner, we produce an infinite sequence of distinct edges $l_i$ of the dual graph of $C$, a contradiction.
Lastly, we show that every irreducible component $A$ of $C$ has exactly two special points. It follows that $C$ is a chain of $\PP^1$s, that $f$ does not contract any component of $C$, and that $\Trop(f)(G)$ is a line parallel to $\mu_{l_0}$. Suppose $A$ is a component with at least three special points, and let $v$ be the vertex of $G$ corresponding to $A$. Then $G\setminus v$ is a disjoint union of non-empty trees $T_1,T_2,\dots,T_m$ with $m\geq3$. Since $G$ has only two unbounded edges, we may assume that $T_1$ contains only bounded edges. The argument in the preceding paragraph then shows that $\Trop(f)$ maps every edge of $T_1$ to a single point. If $C_1$ denotes the subcurve of $C$ corresponding to $T_1$, then every special point of $C_1$ has trivial contact order, and so $f$ contracts $C_1$. Since $T_1$ is a tree, $C_1$ contains a component with only two special points. This contradicts the stability of $f$.
\end{proof}
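The shape produced by Proposition \ref{prop:chain-ofP1s} can be pictured schematically as follows; this is our own illustration, not an assertion from the proof.

```latex
% Schematic (our illustration): a chain C = C_1 \cup \cdots \cup C_m of
% \PP^1s, with the two marked points on the end components, tropicalizes to
% a subdivided line. Writing v_i for the vertex corresponding to C_i,
\[
  \Trop(f)(G) \;=\;
  \bigl(\tau_{v_1} + \RR_{\geq 0}\,\mu_{l_0}\bigr)
  \,\cup\, [\tau_{v_1},\tau_{v_2}] \,\cup\, \cdots \,\cup\,
  [\tau_{v_{m-1}},\tau_{v_m}]
  \,\cup\, \bigl(\tau_{v_m} + \RR_{\geq 0}\,\mu_{l'_0}\bigr),
\]
% where every segment [\tau_{v_i},\tau_{v_{i+1}}] and both rays are parallel
% to \mu_{l_0} (recall \mu_{l'_0} = -\mu_{l_0}); each interior vertex is
% divalent, matching the two special points on each component C_i.
```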
\section{The Chow quotient as the coarse moduli space}
\label{sec:cs}
Throughout this section, we let $\Gamma=\Gamma_0$, and we let $C(X)$ denote the Chow variety as in the introduction. Let $K$ be the normalization of $X//T_0$. Note that there is a map
\[
F:\cK_\Gamma(X)\longrightarrow C(X)
\]
sending a stable log map $f:(C,\MM_C)\to (X,\MM_X)$ to the image cycle $f_*[C]$. Since $\cK_\Gamma(X)$ is irreducible by Theorem \ref{thm:comptwostacks}, $F$ factors as
\[
\cK_\Gamma(X)\stackrel{F'}{\longrightarrow} X//T_0\stackrel{i}{\longrightarrow} C(X),
\]
where $i$ is the natural inclusion. Since $F$ is an isomorphism over $T'$ and $\cK_\Gamma(X)$ is normal by Proposition \ref{prop:log-smooth}, we obtain an induced morphism
\[
G:\cK_\Gamma(X)\longrightarrow K.
\]
To prove Theorem \ref{thm:chowcs}, we show the following.
\begin{proposition}
\label{prop:coarse}
$G$ is a coarse space morphism.
\end{proposition}
\begin{proof}
Since both $\cK_\Gamma(X)$ and $K$ are normal and proper, and since $G$ is an isomorphism over $T'$, by Zariski's Main Theorem, it suffices to show $G$ is quasi-finite. To do so, it is enough to show $F'$ is quasi-finite at the level of closed points. That is, we show that if $x\in X//T_{0}$ is a closed point and $E_{x}$ denotes the corresponding cycle of $X$, then there are finitely many stable log maps whose image cycles are given by $E_{x}$. Let
\[
E_{x} = \sum a_{i}Z_{i},
\]
where the $a_{i}$ are positive integers and the $Z_{i}$ are reduced irreducible closed subschemes of $X$.
Let $\widetilde{Z}_{i}$ be the normalization of $Z_{i}$. Since $E_{x}$ is of dimension $1$, we have $\widetilde{Z}_{i}\simeq \PP^{1}$.
We claim that if $f:(C,\MM_C)\to (X,\MM_X)$ is a stable log map that defines a closed point of $\cK_\Gamma(X)$ such that the image cycle of $f$ is $E_x$, then
$f$ can only be ramified at the special points of $C$. Given this claim, $F'$ is quasi-finite. Indeed, since Proposition \ref{prop:chain-ofP1s} shows that no component of $C$ is contracted under $f$, the number of irreducible components of $C$ is bounded by $\sum a_{i}$. For each irreducible component $A$ of $C$, the map $f|_A$ factors as
\[
A \longrightarrow \widetilde{Z}_{i} \longrightarrow X
\]
for some $i$. Since the first map $A \to \widetilde{Z}_{i}$ can only be ramified at the two fixed special points, it is determined by the degree of $f|_A$. Thus, there are only finitely many choices for $f$.
It remains to prove the claim. By Proposition \ref{prop:irred}, $\cK_\Gamma(X)$ is irreducible and $T'$ is dense, so there exists a toric morphism $\AA^1\to\cK_\Gamma(X)$ such that the fiber over $0\in\AA^1$ is our given stable log map $f:(C,\MM_C)\to (X,\MM_X)$ whose image cycle is $E_x$. Let $R$ denote the complete local ring $\widehat{\cO}_{\AA^1,0}$ and let
\[
\xymatrix{
\cC\ar[r]^h\ar[d] & X\\
\Spec R &
}
\]
be the associated stable map. Let $\eta\in\Spec R$ be the generic point.
We first handle the case when $X$ is smooth. Let $\cC^{\circ}$ be the open subset of $\cC$ obtained by removing the special points. Note that $\cC^{\circ}$ is normal, and $h|_{\cC^{\circ}}$ is quasi-finite by Proposition \ref{prop:chain-ofP1s}. By the purity of the branch locus theorem \cite[p.461]{altmankleiman}, if $h|_{\cC^{\circ}}$ is ramified, then the ramification locus $D$ is pure of codimension 1. Since $h|_{\cC^{\circ}}$ is not everywhere ramified over the central fiber, $D$ must intersect the generic fiber. However, $h|_{\cC^{\circ}}$ is an embedding over the generic fiber, so we conclude that $D$ is empty.
We now consider the case when $X$ is singular. Let $p: \widetilde{X}\to X$ be a toric resolution. We may replace $R$ by a ramified extension, as this does not affect the set of closed points. Since the natural map $\cK_{\Gamma}(\widetilde{X})\to \cK_\Gamma(X)$ is proper, by the valuative criterion, we can assume we have a stable map $\widetilde{h}:\widetilde{\cC}\to \widetilde{X}$ and a commutative diagram
\[
\xymatrix{
\widetilde{\cC} \ar[r]^{\widetilde{h}} \ar[d]_q & \widetilde{X}\ar[d]^{p}\\
\cC\ar[r]^h & X
}
\]
over $R$. Here $h$ is obtained by taking the stabilization of the prestable map $p\circ\widetilde{h}$. The previous paragraph shows that $\widetilde{h}$ only ramifies at the special points. Since Proposition \ref{prop:chain-ofP1s} shows that $\widetilde{\cC}$ and $\cC$ are both chains of $\PP^1$s, we see that $h$ only ramifies at the special points as well.
\end{proof}
{\section*{Appendix A: Toric varieties have generalized Deligne-Faltings log structures}
\renewcommand{\thesection}{A}
\refstepcounter{section}
\label{sec:genDF}
The theory of moduli spaces of stable log maps $\cK_\Gamma(Y,\MM_Y)$ is developed in \cite{DF1,DF2} and \cite{grosssiebert} for different classes of log schemes $(Y,\MM_Y)$. In \cite{DF1,DF2}, Abramovich and the first author consider log schemes
which are generalized Deligne-Faltings
(see Definition \ref{def:genDF}); in \cite{grosssiebert}, Gross and Siebert
consider log schemes which are quasi-generated Zariski. It is shown in \cite[Prop 4.8]{DF2} that when $(Y,\cM_{Y})$ is both generalized Deligne-Faltings and quasi-generated Zariski, the Abramovich-Chen and Gross-Siebert constructions are identical. Gross and Siebert show that the standard log structure $\MM_X$ on a normal toric variety $X$ is always quasi-generated Zariski. Here we show that if $X$ is also projective, then $\MM_X$ is generalized Deligne-Faltings. Therefore, the two theories agree for projective normal toric varieties.
\begin{definition}
\label{def:genDF}
A log structure $\MM_Y$ on a scheme $Y$ is called \emph{generalized Deligne-Faltings} if there exists a fine saturated sharp monoid $P$ and a morphism $P\to\bbar{\MM}_Y$ which locally lifts to a chart $P\to\MM_Y$.
\end{definition}
\begin{remark}
\label{rmk:genDF}
Given a fine saturated sharp monoid $P$, let $A_P=\Spec k[P]$ with its standard log structure $\MM_{A_P}$. Then there is a natural action of $T_P:=\Spec k[P^{gp}]$ on $(A_P,\MM_{A_P})$ induced by the morphism $P\to P\oplus P^{gp}$ sending $p$ to $(p,p)$. The log structure $\MM_{A_P}$ descends to yield a log structure $\MM_{[A_P/T_P]}$ on the quotient stack $[A_P/T_P]$. By \cite[Rmk 5.15]{LogStack}, a log scheme $(Y,\MM_Y)$ is generalized Deligne-Faltings if and only if there exists a strict morphism
\[
(Y,\MM_Y)\longrightarrow ([A_P/T_P],\MM_{[A_P/T_P]})
\]
for some fine saturated sharp monoid $P$.
\end{remark}
Let $X$ be a projective normal toric variety and let $\MM_X$ be its standard log structure. Let $Q\subset \RR^n$ be a polytope associated to a sufficiently positive projective embedding of $X$. Placing $Q$ at height 1 in $\RR^n\times\RR$ and letting $P$ be the monoid of lattice points in the cone over $Q$, we have $X=\operatorname{Proj} k[P]$. Note that $P$ is fine, saturated, and sharp.
Let $(A_P,\MM_{A_P})$ be as in Remark \ref{rmk:genDF}, let $U$ be the complement of the closed subscheme of $A_P$ defined by the irrelevant ideal of $k[P]$, and let $\MM_U=\MM_{A_P}|_U$. The function $\deg:P\to\ZZ$ sending an element to its height induces a $\GG_m$-action on $(A_P,\MM_{A_P})$. Hence, $\MM_U$ descends to yield a log structure $\MM_P$ on $X$.
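A standard example of this construction may help fix ideas; the specific polytope below is our own illustration.

```latex
% Example (our illustration): take X = \PP^2 with Q = \Delta_2, the
% standard 2-simplex in \RR^2 (the polytope of \cO(1), which is very
% ample). Then P is the monoid of lattice points in the cone over
% Q \times \{1\} \subset \RR^2 \times \RR; it is fine, saturated, and
% sharp, and
\[
  X \;=\; \operatorname{Proj} k[P],
\]
% with \deg : P \to \ZZ the height (last coordinate). The three vertices v
% of Q give the standard affine charts X_v = \Spec k[Q_v] \cong \AA^2 of
% \PP^2, which reappear in the proof of Proposition \ref{prop:genDF}.
```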
\begin{lemma}
\label{l:MPgenDF}
$\MM_P$ is generalized Deligne-Faltings.
\end{lemma}
\begin{proof}
We have a cartesian diagram
\[
\xymatrix{
(U,\MM_U)\ar[r]\ar[d] & (A_P,\MM_{A_P})\ar[d]\\
(X,\MM_P)\ar[r] & ([A_P/\GG_m],\MM_{[A_P/\GG_m]})
}
\]
where all morphisms are strict and the vertical morphisms are smooth covers. Note that the $\GG_m$-action on $(A_P,\MM_{A_P})$ is induced from the morphism $\sigma:P\to P\oplus\ZZ$ defined by $p\mapsto (p,\deg p)$. Since $\sigma$ factors as
\[
P\longrightarrow P\oplus P^{gp}\longrightarrow P\oplus\ZZ
\]
where the first map is $p\mapsto(p,p)$ and the second is $(p,\xi)\mapsto (p,\deg \xi)$, we see that there is a strict smooth cover
\[
([A_P/\GG_m],\MM_{[A_P/\GG_m]})\longrightarrow ([A_P/T_P],\MM_{[A_P/T_P]}).
\]
Hence, Remark \ref{rmk:genDF} shows that $\MM_P$ is generalized Deligne-Faltings.
\end{proof}
Note that $\MM_P|_T=\cO_T^*$, where $T$ is the torus of $X$. We therefore obtain a map
\[
\psi:\MM_P\longrightarrow j_*^{log}\cO_T^*=:\MM_X.
\]
\begin{proposition}
\label{prop:genDF}
$\psi$ is an isomorphism, and so $(X,\MM_X)$ is generalized Deligne-Faltings.
\end{proposition}
\begin{proof}
To show $\psi$ is an isomorphism, it is enough to check Zariski locally on $X$. Note that $X$ has an open cover by the $X_v:=\Spec k[Q_v]$, where $v$ is a vertex of the polytope $Q$ and $Q_v$ is the monoid of lattice points in the cone over $Q-v:=\{q-v\mid q\in Q\subset\RR^n\}$. Let $P_v$ be the submonoid of $P^{gp}$ generated by $P$ and $-v$. Then we have a cartesian diagram
\[
\xymatrix{
A_{P_v}\ar[r]^i\ar[d]_\pi & U\ar[d]\\
X_v\ar[r] & X
}
\]
where $\pi$ is induced from the map $Q_v\to P_v$ embedding $Q_v$ at height 0 in $P_v$, and where the composite of $i$ and $U\to A_P$ is induced from the inclusion $P\to P_v$. Hence,
\[
\MM_{Q_v}=(\MM_{P_v})^{\GG_m}
\]
and so $\psi$ is an isomorphism over $X_v$.
\end{proof}
var w = window.innerWidth;
var h = window.innerHeight;
var canvas = document.getElementById('c');
var context = canvas.getContext('2d');
canvas.width = w;
canvas.height = h;
var pi2x = Math.PI * 2;
window.requestAnimFrame = (function(callback) {
return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.oRequestAnimationFrame || window.msRequestAnimationFrame ||
function(callback) {
window.setTimeout(callback, 1000 / 60);
};
})();
function drawText(text,font,xx,yy){
context.font = font;
context.fillStyle = "rgb(255,255,255)";
context.fillText(text, xx, yy);
var px = [];
var imgData=context.getImageData(0,0,canvas.width,canvas.height);
for(var x=0; x<imgData.width; x++)
{
for(var y=0; y<imgData.height; y++)
{
if(getPixel(imgData,x,y)[3] > 0)
{
px.push( [x,y] );
}
}
}
return px;
}
function getPixel(imgData, x, y) {
var offset = (x + y * imgData.width) * 4;
var r = imgData.data[offset+0];
var g = imgData.data[offset+1];
var b = imgData.data[offset+2];
var a = imgData.data[offset+3];
return [r,g,b,a];
}
function draw(pixels,space,z,moveBy){
context.lineJoin="bevel";
context.lineWidth = 1;
for(var i=0;i<pixels.length;i+=space){
var x = pixels[i][0];
var y = pixels[i][1];
var r = Math.ceil(Math.random()*254);
var g = Math.ceil(Math.random()*254);
var b = Math.ceil(Math.random()*254);
var a = Math.random();
var style = "rgba("+r+","+g+","+b+","+a+")";
context.beginPath();
context.moveTo(x+moveBy,y+moveBy);
context.lineTo(x+z, y+z);
context.strokeStyle = style;
context.closePath();
context.stroke();
}
}
function roundedRectangle(x, y, w, h, radius,color,lineSize,fill)
{
var r = x + w;
var b = y + h;
context.beginPath();
context.strokeStyle=color;
context.lineWidth=lineSize;
context.moveTo(x+radius, y);
context.lineTo(r-radius, y);
context.quadraticCurveTo(r, y, r, y+radius);
context.lineTo(r, y+h-radius);
context.quadraticCurveTo(r, b, r-radius, b);
context.lineTo(x+radius, b);
context.quadraticCurveTo(x, b, x, b-radius);
context.lineTo(x, y+radius);
context.quadraticCurveTo(x, y, x+radius, y);
if(fill){
context.fillStyle=color;
context.fill();
}
context.stroke();
}
function animate(pixels) {
context.clearRect(0, 0, canvas.width, canvas.height);
roundedRectangle(20,20,w-60,h-60,5,"black",2,true);
roundedRectangle(40,40,w-100,h-100,5,"green",2);
roundedRectangle(w/4,h-50,w/2,25,5,"black",2,true);
draw(pixels,Math.ceil((Math.random()*5)+5),10,Math.random()*20);
requestAnimFrame(function() {
animate(pixels);
});
}
var size = 120;
var pixels = drawText("JS1K TV :D","bold "+size+"pt Arial",w/2-(size*4),h/2);
animate(pixels); | {
"redpajama_set_name": "RedPajamaGithub"
} | 9,452 |
<?php
use system\core\Event;
Event::create('test.event', function($args = array()) {
echo 'test event fired ' . implode(',', $args);
});
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,771 |
{"url":"https:\/\/quantumcomputing.stackexchange.com\/questions\/1374\/how-can-quantum-decoherence-be-managed","text":"# How can quantum decoherence be managed?\n\nDecoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath), since every system is loosely coupled with the energetic state of its surroundings.\n\n<...>\n\nDecoherence represents a challenge for the practical realization of quantum computers, since such machines are expected to rely heavily on the undisturbed evolution of quantum coherences. Simply put, they require that coherent states be preserved and that decoherence is managed, in order to actually perform quantum computation.\n\n(emphasis mine)\n\nSo I am wondering how can this loss of information be managed? Does this mean that it should be prevented completely, or is it necessary for quantum computing to actually allow some information loss in order to compute?\n\nThe quantum circuit model describes a quantum computer as a closed quantum system and assumes that there is a system which executes the circuit but is completely isolated from the rest of the universe. In the real world, however, there are no known mechanisms for truly isolating a quantum system from its environment. Real quantum systems are open quantum systems. Open quantum systems couple to their environment and destroy the quantum information in the system through decoherence. When examining the simple evolution of a single quantum system this system-environment coupling appears to cause errors on the quantum system\u2019s evolution (which wouldn't be unitary in this case).\n\nA coin has two states, and makes a good bit but a poor qubit because it cannot remain in superposition of head and tail for very long as it is a classical object. A single nuclear spin can be a very good qubit, because superposition of being aligned with or against an external magnetic field can last for a long time, even days. 
But it can be difficult to build a quantum computer from nuclear spins because their coupling is so small that it is hard to measure the orientation of a single nuclei. The observation that the constraints are opposing in general: a quantum computer has to be well isolated in order to retain its quantum properties, but at the same time its qubits have to be accessible so that they can be manipulated to perform computation and read out the results. A realistic implementation must strike a balance between these constraints.\n\nThe first step towards solving the decoherence problem was taken in 1995 when Shor and Steane independently discovered a quantum analogue of classical error correcting codes. Shor discovered that by encoding quantum information, this information could become more resistant to interaction with its environment. Following this discovery a rigorous theory of quantum error correction was developed. Many different quantum error correcting codes were discovered and this further led to a theory of fault-tolerant quantum computation. Fully fault-tolerant quantum computation describes methods for dealing with system-environment coupling as well as dealing with faulty control of the quantum computer.\n\nOf particular significance was the discovery of the threshold theorem for fault-tolerant quantum computation. The threshold theorem states that if the decoherence interactions are of a certain form and are weaker than the controlling interactions by a certain ratio, quantum computation to any desired precision can be achieved. The threshold theorem for fault-tolerance thus declares a final solution to the question of whether there are theoretical limits to the construction of robust quantum computers.\n\nYes, currently the loss of information is being managed by means of quantum error correction protocols.\n\nIdeally, quantum decoherence and eventual loss of information should be prevented. 
However, in real-world scenarios, it is hard to completely isolate quantum systems from their environment.","date":"2020-05-30 15:18:44","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6221052408218384, \"perplexity\": 330.5574662891831}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590347409337.38\/warc\/CC-MAIN-20200530133926-20200530163926-00019.warc.gz\"}"} | null | null |
Looking for reliable Sydney Digital Marketing services?
At Evolocity, we provide Sydney Digital Marketing services for small and medium businesses. We specialise in Sydney SEO and Sydney Adwords campaigns.
Contact us today for Sydney Google Ads and Sydney SEO services to rocket your business up search engine rankings, generating more website traffic and generating leads and more customers for your business. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,918 |
\section{Introduction}
\label{sec:intro}
Modern photometric galaxy surveys enable weak gravitational lensing measurements capable of constraining cosmological models with high precision \citep{troxel_etal18, Abbott_2018, hikage_etal19, Hamana_2020, Heymans_2021}. These surveys measure the shapes of tens or even hundreds of millions of galaxies to create weak lensing shear maps of the Universe. As weak lensing is sensitive to all forms of matter along the line of sight, these shear maps probe the mass distribution of the Universe. A related quantity is the weak lensing convergence map (or weak lensing mass map) which measures the matter density field weighted by a lensing kernel and integrated along the line of sight.
Nonlinear structure growth in the evolution of the Universe causes the density field (and thus weak lensing fields) to become non-Gaussian. Because working with mass maps is often simpler than working with shear maps, many methods for extracting non-Gaussian information from the weak lensing signal rely on convergence maps. A non-exhaustive list of these methods includes one-point functions \citep{liuetal19,thieleetal20}, three-point statistics \citep{takada_bispectrum, fu_bispectrum, jung2021integrated}, higher-order moments \citep{Peel_2017, Gatti_2020}, Minkowski functionals \citep{Kratochvil_2012, Petri_2013, Vicinanza_2019}, peak statistics \citep{Dietrich_2010, Kratochvil_2010, Peel_2017, Shan_2017}, and various machine learning methods \citep{Gupta_2018, Fluri_2018, Ribli_2019, Jeffrey_2020_inference}. The extraction of these non-Gaussian signals is of critical importance as they contain significant cosmological information that is highly complementary to that contained in two-point statistics. For instance, \cite{Gatti_2020} demonstrated that the higher-order moments of the weak lensing maps from a DES-like survey significantly improve the cosmological constraining power of the DES data relative to a standard two-point analysis.
On account of the scientific value of these maps, mass map reconstruction is standard practice for weak lensing surveys \citep{Oguri_2017, Chang_2018}. Traditionally, reconstruction of the convergence field from the shear field is performed by inverting the theoretical relationship between the two fields as initially developed in \cite{KaiserSquires}. Hereafter, we will refer to this reconstruction method as the Kaiser--Squires reconstruction.
While the Kaiser--Squires algorithm is easily the most commonly used, it has two primary drawbacks: the method fails to properly account for the impact of noise and survey masks on the shear fields. Specifically, noisy shear observations may result in apparent mass map fluctuations inconsistent with a cosmological origin. Secondly, as the Kaiser--Squires transformation is non-local, it is necessary to know the shear field on the entire sky. However, because shear maps are limited to a survey window, the unknown shear field outside of the survey mask introduces masking effects in the reconstruction. These masking effects present themselves as additional noise in the reconstructed maps that must be properly attenuated to avoid biasing mass map reconstructions near the survey boundaries.
Due to the limitations in the quality of the Kaiser--Squires mass map reconstruction, various alternative methods have been proposed. Many of these techniques involve forward modeling the shear field from the convergence field via the Kaiser--Squires transformation while introducing a Bayesian prior over the convergence field \citep[e.g.][]{alsingetal16,alsingetal17}. Some of these proposed techniques include Wiener filtering \citep{Jeffrey_2018}, sparsity priors \citep{Leonard_2014, Price_2019, Jeffrey_2018}, null B-mode priors \citep{y3_mass_map}, and others \citep{Pires_2020, starck2021weak}. More advanced Bayesian methods aim to forward model the shear field by modeling the initial density field, which is then non-linearly evolved and integrated along the line of sight \citep{Jasche_2013, Porqueres_2021}. Lastly, machine learning-based methods have also been demonstrated to effectively recover the convergence field \citep{shirasaki2019decoding, Jeffrey_2020, hong2021weaklensing}.
While we believe that the most physically correct approach to reconstructing the convergence field relies on simulation-based forward modeling \citep[as advocated for instance in][]{Porqueres_2021}, there is still significant value in the development of fast, approximate reconstruction schemes. Specifically, approximate reconstruction methods can be used to study how to incorporate systematics in the forward modeling approach within a much simplified and significantly more numerically efficient framework. Moreover, it is not yet clear to what degree the use of simplified numerical models will compromise our ability to reconstruct accurate mass maps. If the biases incurred due to the use of analytic approximations are small compared to statistical uncertainties, then the tremendous gain in speed of analytic methods would make them extremely attractive.
Here, we propose to replace the simulation-based model of the convergence field with a fast, analytical approximation. The simplest such approximation models the convergence field as a Gaussian random field. In this case, a Wiener filter of the Kaiser--Squires reconstruction produces the maximum a posteriori estimate for the field. This is computationally expeditious, but the reconstruction fails to correctly recover the non-Gaussian fluctuations due to non-linear evolution \citep{Jeffrey_2018}. Motivated by the fact that a lognormal approximation results in a much more accurate description of the convergence field \citep{clerkin_etal17}, we have opted for forward modeling the latter as a homogeneous and isotropic lognormal random field. This is similar to the approach in \cite{B_hm_2017}, though we note there the prior applies to the 3D density field, whereas we use the prior to describe the 2D convergence field \citep[for other applications of the lognormal prior see e.g.][]{jaschekitaura10,kitauraetal10}.
In this paper, we introduce \texttt{KaRMMa}\ (Kappa Reconstruction for Mass Mapping), a new method for performing mass map reconstruction. \texttt{KaRMMa}\ is similar to some of the previously mentioned techniques in that it is a Bayesian reconstruction method. \texttt{KaRMMa}\ introduces a physically motivated lognormal prior \citep{coles_jones_91} on the convergence maps. As a result, \texttt{KaRMMa}\ significantly improves reconstruction quality over Kaiser--Squires, correctly captures the non-Gaussianities of the convergence maps, and recovers the correct two-point statistics. Additionally, unlike many similar techniques, \texttt{KaRMMa}\ generates sample convergence maps from the posterior distribution rather than a single ``best fit'' map. That is, \texttt{KaRMMa}\ fully quantifies the uncertainties in our posteriors. We caution that the current \texttt{KaRMMa}\ algorithm explicitly assumes a cosmological model when implementing the lognormal prior. Consequently, \it the current \texttt{KaRMMa}\ maps cannot be used for cosmological inference. \rm In future work, we intend to enable joint sampling of cosmology and mass maps.
\section{Model}
\label{sec:model}
We forward model the observed shear field as a noisy realization of an underlying, noiseless shear field. The latter is obtained as the ``$\kappa$ to $\gamma$'' (or forward) Kaiser--Squires transformation of the true convergence field. Conceptually, our model parameters are the values of the convergence field in pixels on the sky, which are modeled as a realization of a lognormal random field. In practice, our parameterization is slightly more complicated for reasons that will be made apparent momentarily. For now, it is best to start with this ``conceptual'' parameterization. Specifically, the convergence field $\kappa$ is assumed to be a non-linear transformation of a Gaussian random field $y$ such that \citep[][]{hilbertetal11,Xavier_2016}
\begin{equation}
y_i \equiv \ln(\kappa_i + \lambda) .
\label{eq:y}
\end{equation}
where $y_i$ is the value of $y$-field in pixel $i$. The parameter $\lambda$ above is the "shift" parameter and can be interpreted as the minimum possible value that $\kappa$ can take in a pixel. By definition, the distribution for $y$ is
\begin{equation}
P(\vec{y}) \propto \exp \left[ -\frac{1}{2} (\vec{y} - \vec{\mu})^\top \mathbf{\Sigma}^{-1} (\vec{y} - \vec{\mu}) \right].
\label{eq:Py}
\end{equation}
The covariance matrix between pixels, $\mathbf{\Sigma}$, is fully specified by the correlation function of the \it convergence \rm field $\xi(\theta)$ and the shift parameter $\lambda$ via,
\begin{equation}
\label{eq:covariance}
\Sigma_{ij} = \ln \left( \frac{\xi(\theta_{ij})}{\lambda^2} + 1\right).
\end{equation}
Lastly, the parameter $\mu$ is constrained by the fact that the mean convergence is zero. Specifically,
\begin{equation}
0 = \left< \kappa_i \right> = \exp \left( \mu_i + \frac{\Sigma_{ii}}{2} \right) - \lambda ,
\label{eq:ymean}
\end{equation}
and therefore
\begin{equation}
\mu_i = \ln(\lambda) - \frac{\Sigma_{ii}}{2}.
\label{eq:yvar}
\end{equation}
Together, equations~\ref{eq:y} through \ref{eq:yvar} specify the probability distribution of the Gaussian random field $y$ in terms of the shift parameter $\lambda$ and the convergence power spectrum $C_{\ell}$. The latter can be readily computed using publicly available tools such as CAMB \citep{Lewis_2000} or CLASS \citep{Blas_2011}. To determine the lognormal shift parameter, we fit for the value of $\lambda$ that minimizes the mean squared error between the lognormal pdf and the empirical pdf from the simulations described in section~\ref{sec:sim_tests}. The convergence field of interest is obtained as the inverse of the non-linear transformation in equation~\ref{eq:y}, where $y$ is subject to the Gaussian prior of equation~\ref{eq:Py}.
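As an illustrative sketch (not part of the \texttt{KaRMMa}\ implementation), the mapping from the convergence correlation function and shift parameter to the prior mean and covariance of the $y$-field can be written in a few lines of Python; the correlation values below are placeholders rather than CAMB or CLASS outputs:

```python
import numpy as np

def lognormal_prior_params(xi, lam):
    """Prior mean and covariance of the Gaussian y-field for a lognormal
    convergence field, given the pixel-pair convergence correlation
    matrix xi and the shift parameter lam."""
    sigma = np.log(xi / lam**2 + 1.0)        # Sigma_ij = ln(xi_ij / lam^2 + 1)
    mu = np.log(lam) - 0.5 * np.diag(sigma)  # enforces <kappa_i> = 0
    return mu, sigma

# Illustrative 3-pixel correlation matrix (placeholder values).
xi = np.array([[4e-4, 1e-4, 5e-5],
               [1e-4, 4e-4, 1e-4],
               [5e-5, 1e-4, 4e-4]])
lam = 0.02
mu, sigma = lognormal_prior_params(xi, lam)

# The zero-mean constraint <kappa_i> = exp(mu_i + Sigma_ii / 2) - lam = 0
# holds by construction.
mean_kappa = np.exp(mu + 0.5 * np.diag(sigma)) - lam
```

The final line verifies that the mean-convergence constraint is satisfied identically once $\mu$ is set this way.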
While we aim to reconstruct the pixelized convergence field, our observable is a pixelized shear field. The shear field $\gamma$ generated by the convergence field $\kappa$ is calculated using the usual $\kappa$ to $\gamma$ Kaiser--Squires transformation in harmonic space,
\begin{equation}
\gamma_{\ell,m} = - \sqrt{\frac{(\ell + 2) (\ell - 1)}{\ell (\ell + 1)}} \kappa_{\ell,m}.
\end{equation}
The observed shear field is then modeled as a noisy realization of this predicted shear field. The likelihood is therefore Gaussian, with shape noise in pixel $i$ of variance $\sigma_i^2 = \sigma_\epsilon^2 / N_i$, where $\sigma_\epsilon$ is the shape noise per source and $N_i$ is the number of source galaxies in pixel $i$. We assume the noise is uncorrelated across pixels.
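The harmonic-space prefactor of the forward Kaiser--Squires transformation above can be sketched with numpy alone (applying it to actual $a_{\ell m}$ arrays would additionally require a spherical-harmonic library such as healpy, which we omit here); setting $\ell < 2$ to zero is our illustrative choice, since the monopole and dipole carry no shear:

```python
import numpy as np

def kappa_to_gamma_factor(ell):
    """Harmonic-space prefactor f(ell) of the forward Kaiser--Squires
    transform, gamma_lm = f(ell) * kappa_lm.  The monopole and dipole
    (ell < 2) are set to zero here."""
    ell = np.asarray(ell, dtype=float)
    f = np.zeros_like(ell)
    good = ell >= 2
    f[good] = -np.sqrt((ell[good] + 2.0) * (ell[good] - 1.0)
                       / (ell[good] * (ell[good] + 1.0)))
    return f

# Band limit used in this work: ell <= 2 * N_side with N_side = 128.
ells = np.arange(257)
f = kappa_to_gamma_factor(ells)
```

The prefactor tends to $-1$ at high $\ell$, reflecting the near-equality of shear and convergence power on small scales.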
The full posterior distribution for our model is
\begin{equation}
P(\vec{y} \mid \vec{\gamma}_\mathrm{obs}) \propto P(\vec{y}) \times \exp \left [ -\frac{1}{2} \sum_i \frac{
(\gamma_{i,\mathrm{obs}} - \gamma_i(\vec{\kappa}))^2}{\sigma_i^2} \right].
\end{equation}
We note that for a map with $N_{\rm pix}$ pixels, the covariance matrix $\Sigma$ characterizing the prior is $N_{\rm pix}\times N_{\rm pix}$. For this reason, maps with more than $\sim 10^5$ pixels become numerically intractable. For the current study, we have limited ourselves to maps of $\approx 30'$ resolution, corresponding to HEALpix $N_{\rm side}=128$. At this resolution, our maps contain approximately $17,000$ pixels in total, including the buffer regions around the survey edges as described in section~\ref{sec:mask_effects}. Finally, for reasons that will be made apparent in section~\ref{sec:res_err}, our reconstructed maps are band limited to modes with $\ell \leq 2~N_{\rm side}$.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/mask_panel.png}
\caption{Illustration of masking effects and how \texttt{KaRMMa}\ suppresses them. Top Left: An example $\kappa$ map with the DES Y1 mask applied. When the convergence field is set to zero outside the mask, the shear field produced by the input convergence field is modified relative to that obtained using the full-sky convergence field. We refer to this difference as ``masking noise.'' Top Right: Masking noise in the shear field using convergence fields restricted to the survey mask, as estimated using 1000 mocks. Bottom Left: Sample $\kappa$ map with the DES Y1 mask and an additional 10 pixel buffer applied. Bottom Right: Masking noise recovered after adopting the proposed buffer region. The color scale is the same between the maps.}
\label{fig:masking_effects}
\end{figure*}
In practice, we reparameterize the convergence map by diagonalizing the prior covariance matrix $\mathbf{\Sigma}$ using Singular Value Decomposition (SVD). Our model parameters are therefore the coefficients of the basis vectors spanning the space of convergence maps. This has two benefits: 1) While not strictly necessary, working in a diagonal basis makes sampling simpler; and 2) SVD allows us to truncate singular modes to ensure numerical stability in the posterior calculation. Because the resulting basis vectors are orthonormal, one can think of these basis vectors as being similar to the $a_{lm}$ coefficients of the $y$-field. Truncation of singular modes is performed by plotting the singular values (normalized by the largest singular value) of the covariance matrix (not shown). We find a distinct drop-off of several orders of magnitude in the singular values after the first $\approx 6000$ modes. Modes beyond this point do not contribute meaningfully to the map statistics and are not included in the model. This agrees well with the expected number of principal components.\footnote{Specifically, HEALpix $N_{\rm side}=128$ maps can resolve modes up to $l_{\rm max}=256$, and can therefore be fully specified by the value of $2l_{\rm max}(l_{\rm max}+1)\approx 1.3\times 10^5$ real coefficients. The mass maps in this work are $\approx 1,500\ {\rm deg}^2$, so we expect the number of coefficients required to characterize our mass maps is $\approx (1.3\times 10^5)\times (1,500/40,000)\approx 5,000$.}
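The reparameterization described above can be sketched as follows, with a small toy covariance standing in for the $N_{\rm pix}\times N_{\rm pix}$ pixel covariance $\mathbf{\Sigma}$ (the matrix, truncation threshold, and dimensions here are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric positive semi-definite covariance with a steeply falling
# spectrum, standing in for the pixel-pixel prior covariance Sigma.
n = 50
A = rng.standard_normal((n, n)) * (0.7 ** np.arange(n))  # scaled columns
sigma = A @ A.T

# Diagonalize with SVD and truncate modes whose singular values have
# dropped many orders of magnitude below the leading one.
U, s, _ = np.linalg.svd(sigma)
keep = s / s[0] > 1e-10
U_k, s_k = U[:, keep], s[keep]

# Model parameters x are coefficients of the retained orthonormal basis
# vectors; a y-field draw is recovered as y = U_k @ (sqrt(s_k) * x).
x = rng.standard_normal(keep.sum())
y = U_k @ (np.sqrt(s_k) * x)
```

Because the retained basis vectors are orthonormal, the truncated decomposition reproduces the covariance up to the discarded singular values.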
\section{Numerical Systematics}
\label{sec:num_sys}
We discuss three sources of numerical systematics in our reconstruction algorithm. Of these, the first two must be adequately treated to avoid biases in the posterior distributions, whereas the last one has only a minor impact on the posteriors.
\subsection{Masking Effects}
\label{sec:mask_effects}
The Kaiser--Squires transformation relating the convergence and shear fields is most easily performed in harmonic space. The harmonic decomposition of a map can only be performed on the entire sphere, so one usually ``zeroes out'' all pixels outside the survey boundary for the purposes of computing the harmonic transforms. This cut introduces artifacts near the edges of the mask in the recovered map \citep[e.g.][]{Chang_2018}.
Because we forward model the convergence field, we can readily include as parameters the values of the convergence field outside of the survey footprint, thereby significantly reducing masking effects. For instance, \cite{Mawdsley_2020} applied this technique by modeling the full sky convergence field. However, a full sky reconstruction is computationally demanding and entirely unfeasible in our code, even at modest resolutions. Therefore, we strike a balance between computational feasibility and minimizing masking effects by adding a buffer region around the mask within which we perform our reconstruction. Including this buffer region ensures that the induced effects are primarily limited to the edges of the buffered mask, rather than the survey mask (see figure \ref{fig:masking_effects}).\footnote{All map visualizations in this paper are created using the publicly available SKYMAPPER code: \url{https://github.com/pmelchior/skymapper}} The size of this buffer region is selected such that increasing the size of the buffer provides no appreciable reduction in masking artifacts. The remaining noise due to masking effects is also accounted for in our reconstruction, as detailed in Appendix \ref{sec:bias}.
\subsection{Resolution Error}
\label{sec:res_err}
The resolution of HEALPix maps is set by the variable $N_{\rm side}$. A HEALPix map of $N_{\rm side} = 128$ roughly corresponds to pixels of $30'$ resolution. Doubling the $N_{\rm side}$ will result in a map where each pixel is subdivided into four pixels of equal area. When performing spherical harmonic transformations with HEALPix, modes with $\ell > 2~N_{\rm side}$ are poorly resolved. Consequently, spherical transforms can ``leak'' power from unresolved modes into larger scales. This problem is usually referred to as aliasing. We therefore limit our map reconstruction to well-resolved modes with $\ell \le 2~N_{\rm side}$, resulting in a band-limited map. However, since the observed shear map has no such band-limit, the likelihood in our posterior is incorrect. We calibrate the uncertainty due to lost power using lognormal mocks generated with Flask \citep{Xavier_2016}. This uncertainty is added as an additional source of theoretical noise to the likelihood term, and accounts for the uncertainty associated with the missing power. The variance of this additional noise term (including masking effects and missing power) is 10-15\% of that for the shape noise term. As such, ignoring this term would introduce non-negligible bias in the reconstruction. The details of this calibration are important for mitigating bias in our posterior maps, so we discuss these at some length in Appendix \ref{sec:bias}.
\subsection{Pixel Geometry}
We use the HEALPix \citep{Gorski_2005} pixelization scheme to pixelize the curved sky. The primary benefit of the HEALPix scheme is that all pixels have equal areas. However, this comes at the cost of each pixel having a different shape. Naively, one might expect the covariance matrix between two pixels in equation~\ref{eq:covariance} would depend only on the separation between the pixels. The fact that different pixels have different shapes implies that this is not quite true, and one must account for pixel geometry. To compute $\xi_{ij}$, we subdivide the pixels $i$ and $j$ into sets of pixels at a higher resolution, $i'$ and $j'$. We then compute the covariance between all subpixels $i'$ and $j'$ according to equation~\ref{eq:covariance}, that is, assuming the covariance matrix between these high-resolution pixels depends only on separation. Because the value of the convergence in pixel $i$ is the average value across all subpixels, $\kappa_i = \frac{1}{N} \sum_{i'} \kappa_{i'}$, we can compute the covariance between the original coarse pixels via
\begin{equation}
\mathrm{Cov}(\kappa_i, \kappa_j) = \frac{1}{N^2} \sum_{i'}\sum_{j'} \mathrm{Cov}(\kappa_{i'}, \kappa_{j'}).
\end{equation}
By increasing the subdivision depth to higher resolutions, one can account for the pixel geometry in the prior covariance matrix to arbitrary accuracy. For an $N_{\rm side}=128$ map, the geometry-corrected variance computed at $N_{\rm side}=1024$ results in a $0.57\%$ median absolute deviation and $3.0\%$ maximum deviation from the naive variance over the DES Y1 mask. In practice, we do not find that this term significantly impacts the analysis; however, we retain it for correctness.
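A flat-sky toy version of this subpixel averaging (Euclidean separations in arbitrary units, standing in for angular separations between HEALPix subpixels on the sphere) can be sketched as:

```python
import numpy as np

def coarse_pixel_cov(xi_of_theta, subpix_i, subpix_j):
    """Covariance of two coarse pixels, averaged over all pairs of their
    subpixels.  xi_of_theta maps separation -> covariance; the subpix
    arrays hold (N, 2) subpixel coordinates (flat-sky stand-in)."""
    d = np.linalg.norm(subpix_i[:, None, :] - subpix_j[None, :, :], axis=-1)
    return xi_of_theta(d).mean()  # (1 / N^2) * sum over subpixel pairs

# Toy covariance model and two coarse pixels, each split 2x2.
xi = lambda theta: np.exp(-theta / 0.5)
sub = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
cov_ij = coarse_pixel_cov(xi, sub, sub + np.array([2.0, 0.0]))
```

For a separation-independent covariance the subpixel average reduces to that constant, as expected; finer subdivisions simply enlarge the subpixel arrays.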
\section{Posterior Sampling}
Our parameterization of the convergence maps can have $\sim 10^5$ or even more parameters. Consequently, efficient sampling of this high-dimensional parameter space is strictly necessary. We achieve efficient sampling using Hamiltonian Monte Carlo (HMC) sampling \citep{neal2012mcmc}.
HMC treats the posterior probability distribution as a potential energy function, and introduces momenta as nuisance parameters. The HMC sampler evolves the state of the system according to Hamiltonian mechanics to reach a new proposal sample state. As this process conserves the total ``energy,'' or in our case, probability, the sampler accepts the proposed state with a probability of unity and discards the nuisance parameters. In practice, the acceptance rate is slightly below unity due to numerical integration errors.
The efficiency of HMC samplers is critically sensitive to the choice of the mass matrix $\mathbf{M}$ used in the kinetic energy term of the Hamiltonian
\begin{equation}
T = \frac{1}{2} \vec{p}^\top \mathbf{M}^{-1} \vec{p}.
\end{equation}
Failure to choose an appropriate mass matrix can result in a highly inefficient sampling of the posterior. In such a case, a limited sampling of the posterior will result in a biased chain that does not cover the whole posterior distribution. Optimal efficiency is typically achieved by setting the inverse mass matrix to the covariance matrix of the target distribution \citep{tayloretal08,neal2012mcmc}. Naturally, the momenta $\vec{p}$ should be randomly drawn from the covariance matrix $\mathbf{M}$. For our model, we set the inverse mass matrix to the diagonalized prior covariance matrix. With this setup, we find our posterior chains have an acceptance rate of $0.65$ and a correlation length of $3.2$ samples. Using 1,000 samples per chain, this corresponds to an effective sample size of $\sim 300$ samples. While this number of samples is clearly insufficient for a full sampling of the map posterior space, we have used a chain with 100 times more samples to verify that our default chain length is sufficient for accurately recovering all summary statistics from our maps. We attribute this result to the fact that the sky area we use is quite large, so any individual map already contains many independent realizations of the convergence field within it.
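A minimal sketch of HMC transitions with a diagonal inverse mass matrix set to the target variances illustrates the role of $\mathbf{M}$; this is a toy two-dimensional Gaussian target with hand-picked step size and trajectory length, not the \texttt{KaRMMa}\ sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

def leapfrog(q, p, grad_U, eps, n_steps, m_inv):
    """Leapfrog integration of Hamilton's equations with a diagonal
    inverse mass matrix m_inv (applied element-wise)."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * m_inv * p
        p = p - eps * grad_U(q)
    q = q + eps * m_inv * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# Toy target: independent Gaussians with very different scales, so the
# choice of mass matrix matters.  Inverse mass = target covariance.
var = np.array([1.0, 100.0])
U = lambda q: 0.5 * np.sum(q**2 / var)   # potential = -log posterior
grad_U = lambda q: q / var
m_inv = var

q, accepts, n_iter = np.zeros(2), 0, 200
for _ in range(n_iter):
    p = rng.standard_normal(2) / np.sqrt(m_inv)          # p ~ N(0, M)
    h0 = U(q) + 0.5 * np.sum(m_inv * p**2)
    q_new, p_new = leapfrog(q, p, grad_U, 0.5, 10, m_inv)
    h1 = U(q_new) + 0.5 * np.sum(m_inv * p_new**2)
    if rng.random() < np.exp(h0 - h1):                   # Metropolis step
        q, accepts = q_new, accepts + 1

accept_rate = accepts / n_iter
```

Because the dynamics are nearly energy-conserving when $\mathbf{M}^{-1}$ matches the target covariance, the acceptance rate stays close to unity even for this badly scaled target.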
The HMC sampler used in this work is a custom implementation of the standard HMC sampling algorithm using the PyTorch framework for CUDA acceleration and automatic gradient calculations. This allows us to generate independent samples at a rate of $16,500$ samples/hour using a consumer NVIDIA RTX 2070 Super graphics card.
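To make the preceding description concrete, the sketch below shows a single HMC update for a toy Gaussian target. This is a minimal NumPy illustration, not our PyTorch implementation; the diagonal mass matrix, step size, and trajectory length are illustrative choices.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, mass_diag, step=0.1, n_leap=20, rng=None):
    """One HMC update: draw momenta from N(0, M), integrate Hamilton's
    equations with a leapfrog scheme, then Metropolis accept/reject."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.normal(size=x.size) * np.sqrt(mass_diag)      # p ~ N(0, M)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_log_prob(x_new)            # half momentum step
    for _ in range(n_leap - 1):
        x_new += step * p_new / mass_diag                 # full position step
        p_new += step * grad_log_prob(x_new)              # full momentum step
    x_new += step * p_new / mass_diag
    p_new += 0.5 * step * grad_log_prob(x_new)            # final half step
    # Total energy H = -log P(x) + p^T M^{-1} p / 2 is conserved up to
    # integration error, so the acceptance rate is close to unity.
    h_old = -log_prob(x) + 0.5 * np.sum(p ** 2 / mass_diag)
    h_new = -log_prob(x_new) + 0.5 * np.sum(p_new ** 2 / mass_diag)
    accept = rng.random() < np.exp(min(0.0, h_old - h_new))
    return (x_new, True) if accept else (x, False)
```

Running this kernel on a unit Gaussian target yields acceptance rates near unity, mirroring the behaviour described above.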
\section{Simulation Tests}
\label{sec:sim_tests}
We test our method using the suite of 108 full-sky dark-matter simulations of \citet{Takahashi_2017}. These simulations were generated using a fixed flat $\Lambda$CDM cosmology with parameters $\Omega_\Lambda=0.279$, $\Omega_b=0.046$, $h=0.7$, $\sigma_8=0.82$, and $n_s=0.97$. The mocks are provided at a HEALPix resolution of $N_{\rm side}=4096$. Because our method is computationally restricted to a lower resolution of $N_{\rm side}=128$, we downgrade the simulated maps by averaging the shear of the high resolution maps within each $N_{\rm side}=128$ pixel.
The simulations are given as $\kappa$ maps at several redshift slices. From these maps, we construct DES Y1-like mass maps by adding the individual slices weighted by the Y1 non-tomographic $dn/dz$ \citep{Hoyle_2018}. To generate observed shear maps, we add shape noise consistent with the DES Y1 shear maps \citep{Zuntz_2018} to the downgraded true shear maps. This noise is constructed by Poisson sampling a galaxy number count for each pixel, with the mean count set by a source density of 4.5 galaxies per square arcmin. Gaussian noise is then added to $\gamma_i$ for each pixel $i$ by sampling the shape noise from $\sigma_i = \sigma_\gamma / \sqrt{N_i}$, where $\sigma_\gamma = 0.28$ is the standard deviation of galaxy shapes. In this way, we produce 108 simulated Y1-maps at $N_{\rm side}=128$ resolution. We emphasize that the downgraded shear maps include high-$\ell$ (i.e. $\ell > 256$) power.
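The noise model above can be sketched as follows. This is a minimal NumPy illustration under the stated assumptions (a full-sky complex shear map, with noise drawn independently for both shear components); the function name \texttt{add\_shape\_noise} is ours and not part of any released code.

```python
import numpy as np

def add_shape_noise(gamma_true, nside=128, n_gal_per_arcmin2=4.5,
                    sigma_gamma=0.28, rng=None):
    """Add DES-Y1-like shape noise to a complex full-sky shear map."""
    rng = np.random.default_rng() if rng is None else rng
    # HEALPix pixel area in arcmin^2 (full sky = 4*pi steradians)
    full_sky_arcmin2 = 4.0 * np.pi * (180.0 * 60.0 / np.pi) ** 2
    pix_area = full_sky_arcmin2 / (12 * nside ** 2)
    # Poisson-sample a galaxy count N_i per pixel ...
    n_gal = rng.poisson(n_gal_per_arcmin2 * pix_area, size=gamma_true.size)
    n_gal = np.clip(n_gal, 1, None)           # guard against empty pixels
    # ... then add Gaussian noise with sigma_i = sigma_gamma / sqrt(N_i),
    # applied (by assumption) to each shear component independently.
    sigma_pix = sigma_gamma / np.sqrt(n_gal)
    noise = rng.normal(scale=sigma_pix) + 1j * rng.normal(scale=sigma_pix)
    return gamma_true + noise
```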
We wish to compare our recovered convergence maps to the input mass maps from the numerical simulations. Recall that our model results in band-limited maps at $N_{\rm side}=128$ resolution. Therefore, to compare to simulations we first apply a low-pass filter to the $N_{\rm side}=4096$ convergence map such that $a_{\ell,m}=0$ for all modes unresolved in our analysis ($\ell >2\times 128$). We then downgrade the low-pass filtered $N_{\rm side}=4096$ map to $N_{\rm side}=128$. This low-pass filtered downgraded map is the ``truth'' that our algorithm should recover. Note that while our ``truth'' map is low-pass filtered, our synthetic shear maps are not. Also, we have found that low-pass filtering the high resolution map before downgrading is important to avoid aliasing in the final ``truth'' maps.
In what follows, we will compare our results against the standard Kaiser--Squires algorithm. Specifically, we analyze each of the 108 simulated DES-Y1 data sets, and compare: 1) the residuals between ``truth'' (i.e. the low-resolution simulated convergence maps) and each of the reconstructions (Kaiser--Squires and \texttt{KaRMMa}); and 2) the one- and two-point distributions of the true and reconstructed maps.
\section{Results}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/map_comparison_2x2.png}
\caption{Visual comparison of mass map reconstructions for a single mock realization. Top left: true mass map from the simulation. Top right: one posterior sample map from \texttt{KaRMMa}. Bottom left: Kaiser--Squires reconstruction. Bottom right: mean map computed from the \texttt{KaRMMa}\ posterior. Note the Kaiser--Squires reconstruction is far noisier than the true mass map, with excess small-scale structure. By contrast, the \texttt{KaRMMa}\ sample captures structure at the correct locations and physical scales when compared to the true convergence map. Finally, the mean map smooths out prior-dominated scales, thereby suppressing small-scale structure.}
\label{fig:vis_comp}
\end{figure*}
Before we present our results we need to set up some important conventions. In all the figures in this section, different colors correspond to different types of maps as follows:
\begin{itemize}
\item green: statistics for the Kaiser--Squires reconstruction.
\item orange: statistics for the mean map computed from the \texttt{KaRMMa}\ posteriors.
\item purple: statistics for individual maps in the posterior distribution of \texttt{KaRMMa}\ maps.
\end{itemize}
The difference between orange and purple is important. The ``mean map'' (orange) refers to the process of taking all the maps in the \texttt{KaRMMa}\ posterior and averaging them to obtain a single mean map. This mean map can be thought of as the single best point estimate derived from our posterior. In practice, this mean map will not exactly coincide with the true maximum of the posterior unless the posterior is Gaussian. By contrast, the ``average statistic'' for individual \texttt{KaRMMa}\ samples (purple) corresponds to computing the statistic of interest for each of the \texttt{KaRMMa}\ maps in the posterior distribution, and then averaging across all posterior maps. It is the latter operation that is relevant for testing whether our posterior distributions are biased or not \citep[e.g.][]{hoggetal10}.
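This distinction can be illustrated with a toy example: averaging maps and then computing a statistic is not the same as computing the statistic on each map and then averaging. Below we use the pixel variance of synthetic ``maps'' as the statistic; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy posterior: 500 sample "maps" of 1000 pixels, signal plus sample noise
signal = rng.normal(size=1000)
samples = signal + rng.normal(scale=0.5, size=(500, 1000))

# Orange: compute the statistic (here, pixel variance) of the mean map.
# Averaging suppresses per-sample fluctuations, so variance is lost.
var_of_mean_map = samples.mean(axis=0).var()

# Purple: compute the statistic on each sample map, then average.
mean_of_vars = samples.var(axis=1).mean()

# The mean map under-represents the variance carried by individual samples
assert var_of_mean_map < mean_of_vars
```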
Because the \texttt{KaRMMa}\ maps are low-pass filtered, we compare our posteriors to the statistics of the low-pass filtered simulation maps (see section~\ref{sec:sim_tests}). Further, we also low-pass filter the Kaiser--Squires reconstruction. This way, we can perform a true ``apples-to-apples'' comparison between the two methods. We note that the low-pass filtering significantly reduces the obvious high-frequency noise that is otherwise present in the Kaiser--Squires reconstruction.
Figure~\ref{fig:vis_comp} provides a visual comparison between the maps generated by Kaiser--Squires/\texttt{KaRMMa}\ and truth for a single mock realization. Note that the Kaiser--Squires reconstruction appears to capture the correct locations of large under/over-densities. However, it exhibits a significant amount of noise, particularly at smaller scales. By contrast, the sample map from \texttt{KaRMMa}\ not only captures features at the correct locations, but the relative amount of power at different scales also appears qualitatively similar to that of the true map. When comparing the mean map from the \texttt{KaRMMa}\ posterior, we find that it similarly recovers the locations of large over/under-densities, but that small scale fluctuations are effectively smoothed out. This map is qualitatively similar to the Wiener-filtered map, where prior-dominated scales are smoothed over. Indeed, as mentioned earlier, we can think of the mean map as the best point-estimate for the convergence field.
We now validate the visual impressions from Figure~\ref{fig:vis_comp} at a quantitative level. Figure \ref{fig:resids_hist} compares the distribution of residuals between the true and reconstructed $\kappa$ maps. We see from the figure that the distribution of residuals for the mean \texttt{KaRMMa}\ map (orange) is significantly tighter than that of the Kaiser--Squires reconstruction (green). That is, the mean map is a much better point-estimate of the mass map than the Kaiser--Squires map. As per the above discussion, this makes sense: the averaging procedure dampens unresolved modes, thereby reducing noise. By contrast, the distribution of residuals for \texttt{KaRMMa}\ sample maps is slightly broader than that of the Kaiser--Squires map. This is surprising, as it demonstrates that a random realization from the \texttt{KaRMMa}\ posterior --- which, in accordance with our prior, has added noise relative to our best point estimate --- is nevertheless nearly as good a point-estimate of the true mass map as the original Kaiser--Squires reconstruction! However, the fact that even by eye the Kaiser--Squires map is qualitatively different from the true convergence map (see Figure~\ref{fig:vis_comp}) implies its noise fluctuations do not have the spatial structure expected from a true convergence map. We will confirm this momentarily. Before we do so, we quantify the fidelity of the reconstructed maps using the Pearson correlation coefficient. The correlation between the true maps and each of the reconstructions is $0.771 \pm 0.009$ for Kaiser--Squires, $0.823 \pm 0.010$ for the mean \texttt{KaRMMa}\ map, and $0.676 \pm 0.014$ for \texttt{KaRMMa}\ sample maps. Error bars represent the error in the mean across all 108 simulated maps.
The fact that the correlation coefficient between \texttt{KaRMMa}\ samples and the true convergence field is significantly lower than that of the mean map is expected: the prior term in our posterior adds noise to the mean map so as to recover the expected clustering statistics of a cosmological convergence map. It is this noise which is responsible for decreasing the correlation coefficient relative to our mean mass map.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/resids_hist.png}
\caption{Distribution of pixel-wise residuals between the true mass maps and each reconstruction. The green histogram shows the residuals for the Kaiser--Squires reconstruction. The orange histogram shows the corresponding residuals for the mean of the maps in the posterior, or ``mean map'' for short. The purple histogram shows the distribution of residuals averaged over maps sampled from the \texttt{KaRMMa}\ posterior. We find average Pearson correlation coefficients between the true mass maps and the reconstructions of 0.77 for Kaiser--Squires, 0.68 for \texttt{KaRMMa}\ samples, and 0.82 for the mean \texttt{KaRMMa}\ map.}
\label{fig:resids_hist}
\end{figure}
\subsection{\texttt{KaRMMa}\ Posterior Validation}
\label{sec:posterior_validation}
We wish to verify that the posterior distributions returned by \texttt{KaRMMa}\ correctly capture measurement uncertainties. We consider in particular the uncertainties in the recovered one-point function, two-point functions, and peak/void counts, as determined from the posterior distribution of convergence maps. To do so, consider first the posterior of the one-point function of the convergence map. Assuming this posterior is Gaussian, it is completely characterized by its mean and covariance matrix. We test this hypothesis as follows: 1) given a synthetic data set, for each sample map in the chain we compute its one-point function within the survey footprint. The one-point function is evaluated using 19 convergence bins, resulting in a binned histogram representation of the one-point function. The number of bins is a compromise between the desire for a large number of bins to adequately test the distribution, and the necessity of having sufficient statistics within each bin. 2) Using the full posterior distribution of maps, we compute the average one-point function in each of these bins and the corresponding covariance matrix. 3) For each of our 108 synthetic data sets, we evaluate the $\chi^2$ statistic for the difference between the one-point function of the true convergence field and the mean one-point function determined from our posterior. The chi-squared statistic for data vector $\vec{d}$ of simulation $i$ can be expressed as
\begin{equation}
\chi^2_i = \Delta \vec{d}_i^\top \Sigma^{-1}_i \Delta \vec{d}_i
\end{equation}
where $\Delta \vec d_i = \avg{\vec d_i}_{\rm HMC} - \vec d_{i,{\rm true}}$, and $\avg{\vec d_i}_{\rm HMC}$ is the mean of the posterior for the summary statistic $\vec d$ for simulation $i$ computed from our HMC samples. Likewise, the covariance matrix $\Sigma_i$ is estimated from the HMC samples of $\vec d_i$ constructed using the sampled maps. When calculating the precision matrix, we account for the expected Hartlap correction factor to the $\chi^2$ statistic \citep{Hartlap_2006}. If the posterior for the one-point function were Gaussian, the distribution of $\chi^2$ values across our 108 synthetic observations ought to follow a $\chi^2$ distribution with 19 degrees of freedom. Using a similar algorithm, we can also test whether the posteriors of the two-point function and peak/void counts of the convergence maps are consistent with a Gaussian distribution characterized by the mean and covariance matrix derived from the sample posteriors. We test the posteriors for the two-point function in both real and harmonic space, using 19 bins for consistency in the number of bins across all summary statistics, which makes visualization easier.
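For concreteness, the per-simulation $\chi^2$ computation can be sketched as follows; the function name is ours, and the Hartlap factor $(n_{\rm samp} - n_{\rm bins} - 2)/(n_{\rm samp} - 1)$ multiplies the inverse of the sample covariance.

```python
import numpy as np

def posterior_chi2(stat_samples, stat_true):
    """Chi^2 of a true summary statistic against the posterior mean and
    covariance estimated from chain samples.

    stat_samples: (n_samples, n_bins) array holding the statistic computed
    on every map in the chain. The Hartlap factor debiases the inverse of
    the noisy sample covariance (Hartlap et al.)."""
    n_samp, n_bins = stat_samples.shape
    delta = stat_samples.mean(axis=0) - stat_true
    cov = np.cov(stat_samples, rowvar=False)
    hartlap = (n_samp - n_bins - 2) / (n_samp - 1)
    return hartlap * delta @ np.linalg.solve(cov, delta)
```

Collecting one such $\chi^2$ value per synthetic data set and comparing the resulting histogram against a $\chi^2$ distribution with 19 degrees of freedom reproduces the test described above.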
Figure~\ref{fig:chisqs} compares the distribution of chi-squared statistics from \texttt{KaRMMa}\ for each summary statistic to that of the expected chi-squared distribution. It is clear from the plot that the \texttt{KaRMMa}\ posteriors adequately characterize measurement uncertainties. That is, the simulation-to-simulation variance in the summary statistics is accurately captured by the width of the posterior distributions estimated from a single simulation map. Under the Kolmogorov--Smirnov test, we find that the empirical chi-squared distributions are consistent with the expected distribution with $p$-values of $0.11$, $0.62$, and $0.52$ for the one-point function, correlation function, and power spectrum respectively. The $p$-values for the chi-squared distributions for peak and void counts are $0.027$ and $0.021$ respectively. These correspond to a $\sim 2\sigma$ detection of a difference between the two distributions, though we caution that there is significant noise in this estimate: in each case, there are only a few hundred pixels spread across the 19 bins. As further illustration of the fact that our posteriors correctly capture measurement uncertainties, in the left panel of figure~\ref{fig:samp_cf_and_bias} we compare the width of the \texttt{KaRMMa}\ posterior for the correlation function of a randomly selected synthetic observation to its true value. We see that the true correlation function from the simulation (black) falls within the measurement uncertainty from the posterior distribution (purple band), as expected.
Despite this success, we are marginally able to detect a small residual bias in the recovered clustering statistics. That is to say, while the posterior distribution of all inferred summary statistics are consistent with truth, we are able to detect a bias when considering all 108 independent runs simultaneously. We formalize this statement by considering the difference $\Delta \vec d = \avg{\vec d}_{\rm HMC} - \vec d_{\rm true}$ between the mean summary statistic from each of the 108 posteriors and the corresponding true value. If our posteriors are unbiased, then $\Delta \vec d$ should be consistent with zero across all 108 realizations. We define $\vec \mu$ as the average value of $\Delta \vec d$ across all 108 simulations, and $C$ as the empirical covariance matrix of $\Delta \vec d$ from all 108 simulations. We compute the $\chi^2$ statistic
\begin{equation}
\chi^2 = \vec{\mu}^\top C^{-1} \vec{\mu}.
\end{equation}
Once again, we correct for the Hartlap factor during this calculation. A large $\chi^2$ signals that despite the \texttt{KaRMMa}\ maps being consistent with truth within noise {\it for an individual map}, when considering the full ensemble of 108 simulations we are able to detect residual biases.
As a specific example of these residual biases, the right panel of figure~\ref{fig:samp_cf_and_bias} shows the difference between the mean correlation function from the chain samples and the true correlation function, averaged across all 108 synthetic observations. The purple band is the error on the mean, estimated from the empirical covariance matrix of the differences across the 108 simulations, divided by 108 (the number of simulations). The $\chi^2/dof$ of the hypothesis that the mean difference is zero is $\chi^2/dof \approx 56/19$, so the hypothesis can be rejected at $\approx 2.9\sigma$. We can perform a similar calculation for the one-point function and the harmonic-space two-point function. For these two cases, the $\chi^2$ values per degree of freedom are $\chi^2/dof\approx 136/19$ and $\chi^2/dof \approx 30/19$ respectively. These values correspond to a rejection of the null hypothesis at $\approx 7.2\sigma$ for the one-point function and an acceptable $\approx 1.6\sigma$ for the power spectrum. For the peak/void counts we find values of $\chi^2/dof$ of $\approx 260/19$ and $\approx 125/19$, corresponding to statistically significant deviations of $\approx 13.7\sigma$ and $\approx 6.6\sigma$ respectively. The large $\chi^2$ bias for the one-point function can be understood from Figure~\ref{fig:hist_bias}. There, we show the difference between the mean one-point function from the \texttt{KaRMMa}\ samples and the true one-point function across all 108 mock realizations. Plotted in red is the difference between the one-point function of the lognormal model used in our prior and the true one-point function. Evidently, the lognormal model fails to reproduce the true one-point function in simulations in detail. The peak/void counts show even larger biases, indicating the lognormal model fails to perfectly recover higher-order statistics. In short, the use of our lognormal prior biases the recovered posteriors.
However, we emphasize that these biases are smaller than the statistical uncertainty recovered from a single map, and are only clearly detectable when using the full simulation ensemble.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/chisq_plot.png}
\caption{Left: Distribution of the $\chi^2$ statistics for the difference between the mean one-point/two-point statistics from \texttt{KaRMMa}\ and the true statistics across the 108 mock realizations. Right: Distribution of $\chi^2$ statistics for peak/void counts. In both panels, the bins for different observables are slightly offset from each other for visualization purposes. The $\chi^2$ values are computed using covariances estimated directly from the \texttt{KaRMMa}\ chains. In black we show the expected $\chi^2$ distribution with 19 degrees of freedom (all statistics were computed in 19 bins). The excellent agreement between the empirical distribution of $\chi^2$ values and the expected $\chi^2$ distribution demonstrates that the \texttt{KaRMMa}\ posterior accurately captures the uncertainty in our reconstructions.}
\label{fig:chisqs}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/samp_xi_and_bias_comparison.png}
\caption{Left: Comparison between the correlation function reconstructed by \texttt{KaRMMa}\ (purple) and the true (black) correlation function in a single mock realization. Purple bands represent the $1\sigma$ uncertainty from the \texttt{KaRMMa}\ posterior. Note that the true correlation function falls within the posterior uncertainties. Right: The purple line shows the difference between the mean correlation function from the \texttt{KaRMMa}\ posterior and the true correlation function, averaged across all 108 synthetic data sets. Purple bands represent the $1\sigma$ error on the mean. The small deviation from zero in the above plot indicates marginal evidence ($2.9\sigma$) of bias in our reconstruction. This bias is negligible compared to observational uncertainties in a single mock (grey band).}
\label{fig:samp_cf_and_bias}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/hsit_bias_with_prior.png}
\caption{Difference between the mean one-point function from the \texttt{KaRMMa}\ posterior and the true one-point function (purple line), averaged across all 108 synthetic data sets. The red line shows the average difference between the one-point function of the lognormal model used in our prior and the true one-point function. The non-zero difference demonstrates the log-normal model provides only an approximate description of the convergence field. The purple/red bands represent the 68\% confidence interval of the mean. We detect a bias in the mean \texttt{KaRMMa}\ one-point function at $7.2\sigma$ that is clearly driven by the lognormal model being only an approximate description of the true convergence field. Note, however, that this bias is smaller than the observational uncertainties for a single mock (grey band). A more direct comparison of the one-point functions can be seen in Figure~\ref{fig:1pt_func}.}
\label{fig:hist_bias}
\end{figure}
\subsection{Comparison of \texttt{KaRMMa}\ to Kaiser--Squires}
\label{sec:comparison}
We wish to compare the \texttt{KaRMMa}\ mass maps to those produced using the standard Kaiser--Squires algorithm. We note that the Kaiser--Squires algorithm can be thought of as the maximum posterior map under a flat prior for the convergence field. To perform this comparison, we focus on the one- and two-point statistics of the recovered maps, as well as the peak and void counts. Unfortunately, the comparison is non-trivial because Kaiser--Squires is not a Bayesian reconstruction of the true mass map. Let us focus our discussion on the correlation function of the convergence field as an example. Since the correlation function of the Kaiser--Squires convergence map is a deterministic function of the data, we could focus our investigation on the probability distribution $P(\xi|\kappa_{\rm true})$, where $\xi$ is the correlation function of the recovered mass map. However, this distribution is insensitive to cosmic variance noise in the field, which is one of the principal sources of noise in real surveys. We have chosen instead to compare the distributions for $\xi$ given a set of cosmological parameters $\Omega$, i.e. $P(\xi|\Omega)$. This distribution is especially easy to study since the 108 measurements produced from our simulations constitute a sampling from this distribution.
Given that $P(\xi|\Omega)$ is the only reasonable statistic we can compute for Kaiser--Squires, we will need to construct the corresponding distribution from the \texttt{KaRMMa}\ posteriors. Specifically, we calculate the distribution\footnote{Note that since $\xi$ is a deterministic function of the map $\kappa$, the distribution $P(\xi|\kappa)$ is a Dirac delta function.}
\begin{equation}
P(\xi|\Omega) = \int d\gamma_\mathrm{obs} d\kappa\ P(\xi | \kappa)P(\kappa| \gamma_\mathrm{obs}, \Omega) P(\gamma_\mathrm{obs} | \Omega)
\end{equation}
by computing $\xi$ for each sample map from the chain for each mock realization, and then combining all sample statistics into a single very large chain of sample statistics marginalized over the mock realizations. The statistics from this combined chain constitute a sampling of the distribution $P(\xi|\Omega)$, where $\xi$ is the convergence correlation function from \texttt{KaRMMa}\ maps.
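Operationally, this marginalization is simply a pooling of per-mock chains; the array shapes and variable names below are illustrative, not taken from a released pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical inputs: xi_samples[i] holds the correlation-function samples
# (n_samples, n_bins) computed from the chain of mock realization i.
xi_samples = [rng.normal(size=(100, 19)) for _ in range(108)]

# Marginalize over mock realizations by concatenating every sample
# statistic from every chain; the pooled array samples P(xi | Omega).
pooled = np.concatenate(xi_samples, axis=0)
assert pooled.shape == (108 * 100, 19)
```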
To help interpret the \texttt{KaRMMa}\ posteriors for this comparison, it is worth discussing the expected distribution for $P(\xi|\Omega)$ for \texttt{KaRMMa}\ in the limits of infinite noise and infinite signal-to-noise respectively. In the infinite signal-to-noise limit, the prior is irrelevant. \texttt{KaRMMa}\ will recover the underlying convergence field for each simulation exactly, and the posterior will reflect the cosmic variance in the input density fields. That is, our posterior corresponds to the distribution of correlation functions for maps sampled from the distribution $P_\mathrm{sim}(\kappa|\Omega)$. Conversely, in the limit of infinite noise, the posterior will reflect the $\kappa$ distribution obtained from the prior $P_0(\kappa|\Omega)$. If the log-normal prior $P_0$ were exactly identical to the simulation distribution $P_\mathrm{sim}$, then the posteriors for the infinite signal-to-noise and infinite noise limits would be identical. In other words, the distribution $P(\xi|\Omega)$ for \texttt{KaRMMa}\ should, by construction, reflect cosmic variance uncertainties and nothing else. In that sense, $P(\xi|\Omega)$ is not a useful statistic for evaluating the efficacy of \texttt{KaRMMa}. However, it is the only avenue we have for comparing \texttt{KaRMMa}\ to Kaiser--Squires on equal footing. Of course, a similar argument can be made for the one-point distribution.
With the nuances of this comparison established, we begin by looking at the one-point distributions. Figure \ref{fig:1pt_func} shows that the mean one-point function from the Kaiser--Squires reconstruction (green) is much too broad compared with the mean one-point function from the input simulated maps (black). The Kaiser--Squires one-point function is also much more symmetric than the clearly non-Gaussian true one-point function \citep[see e.g. fig. 8 in][]{y3_mass_map}. By contrast, we see that the mean one-point function of \texttt{KaRMMa}\ sample maps (purple) is nearly identical to that of the true map, as expected based on our discussion in section~\ref{sec:posterior_validation}. If we consider the distribution of the mean \texttt{KaRMMa}\ maps instead (orange), we find that the resulting one-point function is non-Gaussian but much too narrow, reflecting the ``missing variance'' due to unresolved modes being zeroed-out in the averaging procedure.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/1pt_func_comb.png}
\caption{Top Left: The one-point function of the true convergence map (black), the Kaiser--Squires reconstruction (green), the mean \texttt{KaRMMa}\ map (orange), and the means of the posterior estimated from the \texttt{KaRMMa}\ samples (purple). All distributions have been averaged across the 108 simulated data sets. Bottom Left: Percent difference between the one-point distributions of each method and the true one-point distribution. Right: Comparison of one-point distributions with a log-scale. The posterior distribution from \texttt{KaRMMa}\ results in an excellent description of the one-point function of the simulations. The error bars/bands represent the $1\sigma$ standard deviation of the computed statistics across all mock realizations.}
\label{fig:1pt_func}
\end{figure*}
Figures~\ref{fig:2pt_func} and~\ref{fig:cl_plot} show the correlation function and power spectrum for each of our four types of maps: true, Kaiser--Squires, mean \texttt{KaRMMa}\ map, and \texttt{KaRMMa}\ posterior. We find that the correlation function of the Kaiser--Squires reconstruction (green) is biased, consistent with our observations that the spatial structure of these maps ``looks wrong'' by eye. In particular, we note the Kaiser--Squires reconstruction exhibits excess noise at small scales, and under-estimates the power at large scales due to leakage into shear B-modes \citep[for further discussion, see][]{y3_mass_map}. The mean \texttt{KaRMMa}\ map (orange) is a clear improvement over Kaiser--Squires, but the smoothing due to the averaging of the maps suppresses the correlation function and power spectrum at small scales. This makes sense, since it is precisely the small-scale modes that are unresolved. Finally, the mean of the \texttt{KaRMMa}\ posterior for the correlation function (purple) is a near-perfect match to the true correlation function (black).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/2pt_func.png}
\caption{Top: Two-point angular correlation functions of each reconstruction method compared to the true correlation function in the simulations. Bottom: Percent difference between the correlation function and the true two-point function. The \texttt{KaRMMa}\ samples correctly recover the true correlation function in the simulation, whereas the two-point function of the Kaiser--Squires reconstruction is grossly biased.}
\label{fig:2pt_func}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/cl_log_log.png}
\caption{Power spectra from each reconstruction method compared to the true power spectrum from simulations. As with the correlation function, we find that the \texttt{KaRMMa}\ samples easily outperform the Kaiser--Squires reconstruction, resulting in very nearly unbiased power-spectrum estimates. This figure also highlights the suppression of power at small scales in the mean \texttt{KaRMMa}\ map, as well as the artificially high small scale structure in the Kaiser--Squires reconstruction. Interestingly, Kaiser--Squires also fails to recover the correct amount of large scale power.}
\label{fig:cl_plot}
\end{figure}
Peak/void counts are widely used in cosmology as they are simple to compute and capture some of the non-Gaussian information contained within the field. In this paper, we define peaks/voids as pixels in the map whose $\kappa$ value is greater/less than the value of $\kappa$ in all immediately neighboring pixels. In figure \ref{fig:pc_plot}, we compare the peak/void distributions from each reconstruction method to that of the true distributions from simulations. We see that the \texttt{KaRMMa}\ posterior for the peak and void counts match the true counts in the simulations within noise. By contrast, the Kaiser--Squires reconstruction has peak/void counts which are skewed towards higher/lower values of $\kappa$ than the true distribution. We attribute this to the excess high-frequency noise in the $\kappa$ distribution in the Kaiser--Squires maps.
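Our definition of peaks and voids can be illustrated on a flat 2D grid. This is a flat-sky analogue of the HEALPix eight-neighbour test we actually use, and the function name is ours.

```python
import numpy as np

def peak_void_counts(kappa, bins):
    """Histogram peak/void heights on a flat 2D convergence grid.

    A peak (void) is a pixel strictly greater (less) than all eight of
    its immediate neighbours; boundary pixels are excluded for simplicity."""
    c = kappa[1:-1, 1:-1]
    # Stack the eight shifted views of the grid around each interior pixel
    neighbours = np.stack([
        kappa[i:i + c.shape[0], j:j + c.shape[1]]
        for i in (0, 1, 2) for j in (0, 1, 2) if (i, j) != (1, 1)
    ])
    is_peak = c > neighbours.max(axis=0)
    is_void = c < neighbours.min(axis=0)
    peaks = np.histogram(c[is_peak], bins=bins)[0]
    voids = np.histogram(c[is_void], bins=bins)[0]
    return peaks, voids
```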
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/peak_void_stats.png}
\caption{{\it Left:} Peak counts from each reconstruction method compared to the true peak counts from simulations, as labeled. {\it Right:} Similar to the left panel, but for void counts. The distribution of peaks and voids in the Kaiser--Squires map is heavily skewed compared to truth due to the high frequency noise in these maps. By contrast, the \texttt{KaRMMa}\ maps have peak and void counts that are within noise of the true distributions.}
\label{fig:pc_plot}
\end{figure*}
\subsection{Comparison of \texttt{KaRMMa}\ to Gaussian Prior}
The aim of \texttt{KaRMMa}\ is to provide a fast tool for reconstructing mass maps that also accurately recovers the correct map statistics. Key to achieving this goal is the choice of the lognormal model. The lognormal model provides a simple yet relatively accurate approximation of the non-Gaussianities of the convergence field. By contrast, while a Gaussian prior should recover the correct two-point statistics of the map, it should fail to recover non-Gaussian statistics. In this section we compare the performance of \texttt{KaRMMa}\ when using lognormal vs.\ Gaussian priors. We are particularly interested in how well each method reconstructs non-Gaussian statistics.
Figure \ref{fig:gauss_prior} compares the peak/void counts and one-point functions from the \texttt{KaRMMa}\ samples (purple) and Gaussian prior samples (blue) to the true distribution (black) from simulations. From these plots, it is obvious that the use of a Gaussian prior significantly biases the resulting maps. Moreover, this bias is highly significant, as evidenced by the resulting $\chi^2$ distributions of the inferred statistics compared to truth. By comparison, the chi-squared distributions from the \texttt{KaRMMa}\ samples (see fig. \ref{fig:chisqs}) are virtually unbiased. As expected, the two-point statistics from the Gaussian prior samples do follow the expected chi-squared distribution. That is, as is the case for the lognormal model, the adoption of a Gaussian prior does not bias the inferred two point functions.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/gauss_plots.png}
\caption{Comparison of non-Gaussian statistics from the \texttt{KaRMMa}\ posterior samples to samples obtained using a Gaussian prior. Top Left/Right: Peak counts/void counts. Bottom Left: One point function. Bottom Right: Chi-squared distributions for the various Gaussian and non-Gaussian summary statistics for the samples generated using a Gaussian prior. As expected, the two-point functions of the Gaussian maps are consistent with a chi-squared distribution. However, the one-point function and peak/void counts are not properly reconstructed when using a Gaussian prior, resulting in significantly biased chi-squared distributions.}
\label{fig:gauss_prior}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
As mentioned in section \ref{sec:intro}, many other methods have been developed to perform Bayesian mass map reconstruction. Notably, \cite{B_hm_2017} also used a 3D lognormal prior to forward model the shear field. The authors demonstrated that the lognormal prior performed better than the Gaussian prior, particularly at the small non-linear scales. Furthermore, their approach allowed the authors to incorporate redshift uncertainties for individual sources rather than requiring a pixelization scheme on the sky or tomographic binning. However, \cite{klypin_2018} showed that the lognormal model is not an accurate description of the 3D density field and fails at small, very non-linear scales. At these scales, more accurate models may be required. For instance, \cite{Porqueres_2021} aims to reconstruct the lensing signal by forward modelling the non-linearly evolved density field from simulations. Evidently, this simulation-based approach can more accurately model the physics of the non-linear scales than the simple lognormal model.
The approaches from \cite{B_hm_2017} and \cite{Porqueres_2021} are computationally expensive as they involve reconstructing the full 3D density field in order to recover the lensing signal. In particular, \cite{B_hm_2017} only performed maximum a posteriori (MAP) reconstruction by maximizing the posterior. As highlighted in this work, the MAP reconstruction suffers from suppressed power in prior-dominated regimes and therefore results in an unphysical density field with incorrect statistical properties. By contrast, as shown in section \ref{sec:comparison}, sampling the posterior produces maps with accurate statistical properties. An additional consequence of reconstructing the MAP estimate is that uncertainties and covariances are not trivial to recover. The authors solved this by using the Laplace approximation (which assumes that the posterior is Gaussian around the MAP point) to estimate the diagonal of the covariance matrix of the map.
In our approach, we aimed to develop a fast method for recovering the weak lensing signal by directly reconstructing the 2D mass maps rather than reconstructing the entire 3D density field. Similar 2D efforts have appeared in the literature, for instance Wiener filtered maps \citep{Jeffrey_2018}. Here again, a key advantage of our approach is the fact that we sample the full posterior distribution of the maps. Consequently, clustering statistics can be estimated directly from the maps in a nearly unbiased way, while uncertainties in the same are trivial to compute. In particular, section \ref{sec:posterior_validation} demonstrates that the uncertainties and covariances recovered from our method are indeed accurate.
Another noteworthy 2D Bayesian reconstruction technique is GLIMPSE \citep{Leonard_2014, Jeffrey_2018}. GLIMPSE performs a MAP reconstruction and imposes a sparsity prior on the wavelet coefficients of the convergence map. The wavelet filter in the GLIMPSE algorithm is chosen with the goal of adequately modelling halos and small-scale features. However, the GLIMPSE reconstruction ultimately results in mass maps with incorrect clustering statistics due to the reliance on MAP estimates and the fact that the sparsity prior, while it serves to regularize the forward model, does not ensure that small-scale statistics are correctly recovered. By comparison, the prior in our reconstruction technique was selected to provide a simple yet relatively accurate description of the non-Gaussian convergence field.
Recently, machine learning methods have been shown to hold great promise in their ability to generate realistic lensing fields \citep[e.g.][]{remy2020probabilistic}. These are typically limited to small flat sky patches, though this is rapidly changing \citep[][]{deepsphere, deepspheregan}.
It is our view that this broad range of methods is to the benefit of the weak lensing program of the community as a whole. There are clear advantages and disadvantages to each approach, and it seems likely that ideas drawn from several of these algorithms will ultimately be adopted as standard by the community.
\section{Summary and Conclusions}
\label{sec:conclusions}
In this work we present \texttt{KaRMMa}, a new method for reconstructing mass maps from weak lensing shear observations. \texttt{KaRMMa}\ provides a fully Bayesian reconstruction with sample maps drawn from the posterior relying on a physically motivated lognormal prior for the convergence field $\kappa$. \texttt{KaRMMa}\ produces a full library of sample maps from the posterior distribution, making it ideal for use in cross-correlation studies with other astrophysical maps \citep[for example, see][]{Chang_2018}. Because the \texttt{KaRMMa}\ samples accurately reflect the measurement uncertainty (see section~\ref{sec:posterior_validation}), estimation of observational uncertainties in cross-correlation studies becomes trivial: one need only cross-correlate the data of interest with the \texttt{KaRMMa}\ library of posterior samples. Further, we demonstrated that the two-point functions of the posterior maps are very nearly unbiased relative to truth. Indeed, any residual biases are significantly smaller than statistical uncertainties (see right panel in Figure~\ref{fig:samp_cf_and_bias}).
In section~\ref{sec:comparison} we compare the \texttt{KaRMMa}\ and Kaiser--Squires mass map reconstructions on a suite of dark matter simulations. We demonstrate that the best point estimate from \texttt{KaRMMa}\ outperforms the Kaiser--Squires reconstruction in that it exhibits a narrower distribution of residuals when compared to the true mass map. Moreover, unlike the Kaiser--Squires reconstruction, the \texttt{KaRMMa}\ posteriors recover nearly unbiased clustering statistics.
There may be use cases in which the use of the mean \texttt{KaRMMa}\ map is more desirable than the use of the full posterior, for instance, if a single ``best-map'' estimate is desired. In this context, the Kaiser--Squires reconstruction could also be a suitable choice, though we believe that for most if not all applications the mean \texttt{KaRMMa}\ map is preferable to the Kaiser--Squires reconstruction. An obvious possible exception to this rule is cosmological investigations (see below).
While this iteration of \texttt{KaRMMa}\ has successfully demonstrated the value of our forward modeling approach, our current algorithm is limited in important ways. First, \texttt{KaRMMa}\ requires a cosmology to be specified for the prior, thus preventing our mass maps from being used for constraining cosmology without significant additional work. However, one could imagine calibrating the incurred bias as a function of cosmology through a response analysis \citep{Seljak_2017, horowitz2019}. Second, \texttt{KaRMMa}\ can only perform reconstruction for one tomographic bin at a time, and therefore cannot perform a joint reconstruction across multiple tomographic bins. Third, the current parameterization of \texttt{KaRMMa}\ restricts our reconstruction to relatively low resolutions. Future work on this method will address all three issues, allowing \texttt{KaRMMa}\ to perform multi-bin mass map reconstruction at arcminute resolution while simultaneously sampling mass maps and cosmology.
The current version of the \texttt{KaRMMa}\ code is available on GitHub at \url{https://github.com/pierfied/karmma}. This code is provided as a Python package and utilizes PyTorch to drastically improve sampling speed by taking advantage of CUDA GPUs.
\section*{Acknowledgements}
PF and ER are supported by NSF grant 2009401. ER is further supported by DOE grant DE-SC0009913, and by a Cottrell Scholar award.
\section*{Data Availability}
The data underlying this article was derived from \cite{Takahashi_2017} which can be accessed at: \url{http://cosmo.phys.hirosaki-u.ac.jp/takahasi/allskyraytracing/}. The derived data generated in this research will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
Since the very beginning of Green Books, we have always been completely swamped with Q&A. Though we try to give responses to many emails, we simply cannot physically answer them all. Thus our carefully selected batch of 12 or so in each Q&A month. We apologize if you sent in a question that was not answered either on the site or via email.
We find, however, that the vast majority of emails for the last few years are on topics we have already covered. Our old search engine was highly inefficient and didn't allow readers (or us!) to quickly find the answers to these previously covered topics.
Hey presto, you will get more results--from only our pages--than you can shake a stick at.
No doubt many of you are familiar with this method already. For those who aren't, please take some time to play with it before you email us. It will save us inboxes overloaded with questions we've already answered, and it will save you the frustration of mailing us without result. Happy hunting!
Follow me on Facebook for updates between one video and the next!
Tube Driver Alternatives | EP 02 | Hiwatt Tube Distortion - Duration: 8 minutes, 5 seconds.
Toptone DG-2 (Cornish G2 Clone) demo - Duration: 5 minutes, 47 seconds.
The sound of Gilmour: chorus history and setup - Duration: 9 minutes, 18 seconds.
Two Notes Audio Engineering | Captor 8 Loadbox | ITA - Duration: 10 minutes.
Tube Driver Alternatives | EP 01 | Buffalo FX Evolution - Duration: 6 minutes, 11 seconds.
Two Notes Audio Engineering | Captor 8 Loadbox | ENG - Duration: 5 minutes, 53 seconds.
Buffalo FX TD-X Review - Duration: 10 minutes.
Silent Night in David Gilmour style - Merry Christmas by Tony - Duration: 3 minutes, 13 seconds.
DigiTech Ventura Vibe Rotary Speakers + Univibe - Duration: 9 minutes, 47 seconds.
Electro-Harmonix Big Muff V1 "Triangle" Reissue Review - Duration: 5 minutes, 29 seconds.
The Sound of Gilmour (In a budget) - PULSE LIVE lead tones - Duration: 8 minutes, 38 seconds.
The Sound of Gilmour - Clean sounds with boost and compression - Duration: 10 minutes.
A quick view of Taormina, Sicily, 2014 - Duration: 7 minutes, 20 seconds.
Jailhouse Rock - Belli dentro - Sigla TV - Duration: 59 seconds.
Notte prima degli esami - Intro [Cover by Tony] - Duration: 35 seconds.
David Gilmour PULSE Tones with Boss RT-20 - Duration: 5 minutes, 22 seconds.
Comfortably Numb Keyboard (PULSE style) - Duration: 6 minutes, 7 seconds.
Hartman Analog Flanger VS Mooer Eleclady - Duration: 6 minutes, 23 seconds.
The sound of Gilmour: The wall - Duration: 3 minutes, 48 seconds.
The sound of Gilmour: Louder Plexi VS Tube Driver - Duration: 3 minutes, 3 seconds.
"Electric Mistress" Shootout - Analog Flangers Compared - Duration: 5 minutes, 41 seconds.
Electro-Harmonix Big Muff Sovtek "Green Russian" Reissue Review - Duration: 5 minutes, 53 seconds.
BuffaloFX Evolution Demo - Duration: 5 minutes, 53 seconds.
Gibson Les Paul Tribute Series 2018 playing Pink Floyd - Duration: 5 minutes, 38 seconds.
package no.kantega.publishing.common.data.attributes;

import no.kantega.commons.exception.SystemException;
import no.kantega.publishing.common.data.ListOption;
import no.kantega.publishing.common.exception.InvalidTemplateException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.w3c.dom.Element;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * List attribute whose options are populated from the constants of a Java enum,
 * named by the {@code enumclass} attribute in the template configuration.
 */
public class EnumlistAttribute extends ListAttribute {
    private static final Logger log = LoggerFactory.getLogger(EnumlistAttribute.class);

    protected List<ListOption> options = null;

    @Override
    public void setConfig(Element config, Map<String, String> model) throws InvalidTemplateException, SystemException {
        super.setConfig(config, model);
        if (config != null) {
            options = new ArrayList<ListOption>();
            loadEnumValues(config);
        }
    }

    private void loadEnumValues(Element config) {
        // Element.getAttribute returns the empty string, not null, when the attribute is absent
        String enumclassName = config.getAttribute("enumclass");
        if (enumclassName != null && !enumclassName.isEmpty()) {
            try {
                Class<?> enumclass = Class.forName(enumclassName);
                Object[] constants = enumclass.getEnumConstants();
                if (constants == null) {
                    // getEnumConstants returns null when the named class is not an enum type
                    log.error("Class " + enumclassName + " is not an enum type");
                    return;
                }
                for (Object enumValue : constants) {
                    options.add(asListOption(enumValue));
                }
            } catch (ClassNotFoundException e) {
                log.error("Could not load class " + enumclassName, e);
            }
        }
    }

    private ListOption asListOption(Object enumValue) {
        // Store the lower-cased constant name alongside the constant's original name
        return new ListOption(enumValue.toString().toLowerCase(), enumValue.toString(), false);
    }

    protected List<ListOption> getOptions() {
        return options;
    }
}
Porto Murtinho is a municipality in Brazil. It lies in the state of Mato Grosso do Sul, in the southern part of the country, km southwest of the capital Brasília. The number of inhabitants is .
The following communities are located in Porto Murtinho:
Porto Murtinho
The following can otherwise be found in Porto Murtinho:
Morro Anicha-quena (a hill)
Morro do Tigre (a hill)
Morro La Lima (a hill)
Pão de Açúcar (a hill)
Serra São Miguel (a hill)
The area around Porto Murtinho is mainly covered by deciduous broadleaf forest. The region is nearly unpopulated, with fewer than two inhabitants per square kilometre. A savanna climate prevails in the area. The annual mean temperature is °C. The warmest month is September, when the mean temperature is °C, and the coldest is July, at °C. Average annual precipitation is millimetres. The rainiest month is November, with an average of mm of precipitation, and the driest is August, with mm of precipitation.
Sources
Subdivisions of Mato Grosso do Sul
Q: A parabola has focus F and vertex V, where VF=10. Let AB be a chord of length 100 that passes through F. Determine the area of triangle VAB.
A parabola has focus $F$ and vertex $V$, where $VF = 10$. Let $AB$ be a chord of length $100$ that passes through $F$. Determine the area of triangle $V\!AB$.
This is an olympiad question which I came across last week. I really don't have any idea where to start. I think the information provided in the question is not even enough to solve the problem.
I only know that for a parabola $y^2 = 4ax$, the length of the focal chord through $t$ is given by $a\left(t+\dfrac1t\right)^2$.
Can anyone check the problem if it's correct? If yes, then how may I proceed? Any hint would be enough.
A: https://en.wikipedia.org/wiki/Parabola
The polar equation of a parabola is $$r=\frac p{1-\cos\varphi} \tag 1 \label 1$$ where
*
*$p$ – a semi-latus rectum, which is a focal length doubled, $p=2f$,
*$r$ is a distance measured form the focus point $F$,
*$\varphi$ is a direction measured at $F$ from the axis of symmetry ($\varphi = 0$ is a direction towards the opening of parabola).
The focal length in turn is the distance from the focus of a parabola to its vertex, and it is given as $f = VF = 10.$
The endpoints of a chord, which is rotated by $\varphi$ from the parabola's axis, are at the distances given by the parabola equation $\eqref 1$:
$$\begin{cases}
r_1 = FA = \frac p{1-\cos\varphi} \\
r_2 = FB = \frac p{1-\cos(\varphi+\pi)} = \frac p{1+\cos\varphi}
\end{cases}$$
Now, the length of the chord AB, a base of our triangle $\triangle ABV$, is:
$$AB = r_1+r_2 = \frac p{1-\cos\varphi} + \frac p{1+\cos\varphi} \\
= p\,\frac{(1+\cos\varphi)+(1-\cos\varphi)}{(1-\cos\varphi)(1+\cos\varphi)} \\
= \frac{2p}{1-\cos^2\varphi} = \frac{4f}{\sin^2\varphi} \tag 2 \label 2$$
On the other hand, the height of the triangle, i.e. the distance of the vertex V from the line AB, is:
$$h = VF\,\sin\varphi = f\sin\varphi$$
From $\eqref 2$ we get:
$$\sin\varphi = \sqrt{\sin^2\varphi} = \sqrt{\frac{4f}{AB}}$$
so the area of the triangle
$$\frac 12 h\cdot AB = \frac 12 f\sin\varphi\cdot AB = \frac 12 f\cdot AB\cdot\sqrt{\frac{4f}{AB}} $$
$$\boxed{ P_{\triangle ABV} = f\cdot\sqrt{f\cdot AB}}$$
Given $f=10$ and $AB=100$ we get: $$ P_{\triangle ABV} = 100\sqrt{10}.$$
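The boxed formula can be checked numerically. The short Python sketch below is my own check (not part of the original answer; variable names are mine): it places the vertex at the origin and the focus at $(f,0)$ for the parabola $y^2 = 4fx$, recovers the chord endpoints from the polar equation with $p = 2f$, and compares the shoelace area of $\triangle VAB$ against $f\sqrt{f\cdot AB}$.

```python
import math

f = 10.0    # focal length VF
AB = 100.0  # chord length through the focus

# From AB = 4f / sin^2(phi): the chord's angle with the axis of symmetry
phi = math.asin(math.sqrt(4 * f / AB))

# Polar equation about the focus, r = p / (1 - cos t), semi-latus rectum p = 2f
p = 2 * f
r1 = p / (1 - math.cos(phi))   # focus-to-A distance
r2 = p / (1 + math.cos(phi))   # focus-to-B distance
assert abs((r1 + r2) - AB) < 1e-9   # endpoints really span the chord

# Cartesian endpoints for y^2 = 4 f x, vertex V at the origin, focus F = (f, 0)
A = (f + r1 * math.cos(phi), r1 * math.sin(phi))
B = (f - r2 * math.cos(phi), -r2 * math.sin(phi))
for x, y in (A, B):
    assert abs(y ** 2 - 4 * f * x) < 1e-6   # both endpoints lie on the parabola

# Shoelace area of triangle VAB versus the closed-form value
area = 0.5 * abs(A[0] * B[1] - A[1] * B[0])
print(area, f * math.sqrt(f * AB))  # both ~316.2278, i.e. 100*sqrt(10)
```

The two printed values agree, confirming $P_{\triangle ABV} = f\sqrt{f\cdot AB} = 100\sqrt{10}$.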
A: Consider the parabola
$ 4 p y = x^2 $
where $p = 10$
From $(0, p)$ draw a line segment making an angle $\theta$ with the horizontal direction. The parametric equation of the line is
$ q(s) = (0, p) + s (\cos \theta , \sin \theta ) $
Intersect this line with the parabola
$ 4 p (p + s \sin \theta ) = s^2 \cos^2 \theta $
The values of $s$ that are solutions to this quadratic equation are
$ s = \dfrac{1}{2 \cos^2 \theta } \left( 4 p \sin \theta \pm \sqrt{ 16 p^2 \sin^2 \theta + 16 p^2 \cos^2 \theta} \right) = \dfrac{1}{2 \cos^2 \theta } ( 4 p \sin \theta \pm 4 p)$
The difference between the two values of $s$ is the length of the line segment, and it is equal to
$ \Delta s = \dfrac{4 p}{\cos^2 \theta} $
Set this equal to $100$ and solve for $\cos^2 \theta$
$\cos^2 \theta = 0.4 $
Now you can find the two end points of the line segment $AB$, and then calculate the area of $\triangle VAB$.
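The final step can be carried out numerically. The Python sketch below is my own continuation of this answer (not part of the original): it solves the quadratic for $s$, confirms that the root separation is $100$, and evaluates the area of the triangle with the vertex $V$ at the origin.

```python
import math

p = 10.0    # vertex-to-focus distance for 4 p y = x^2
cos2 = 0.4  # cos^2(theta) found by setting the chord length to 100
sin_t, cos_t = math.sqrt(1 - cos2), math.sqrt(cos2)

# Roots of cos^2(t) * s^2 - 4 p sin(t) * s - 4 p^2 = 0
qa, qb, qc = cos2, -4 * p * sin_t, -4 * p * p
disc = math.sqrt(qb * qb - 4 * qa * qc)
s1, s2 = (-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa)
assert abs((s1 - s2) - 100) < 1e-9   # the chord length comes out to 100

# Endpoints on the parabola; the line passes through the focus (0, p)
A = (s1 * cos_t, p + s1 * sin_t)
B = (s2 * cos_t, p + s2 * sin_t)

# Area of triangle VAB with the vertex V at the origin (cross product)
area = 0.5 * abs(A[0] * B[1] - A[1] * B[0])
print(area)  # ~316.2278, i.e. 100*sqrt(10)
```

This agrees with the polar-coordinate answer above: the area is $100\sqrt{10}$.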
A:
Hints:
A and B are on a circle centered at O, the midpoint of AB. You have to find the coordinates of O somehow. Let these coordinates be $x_o, y_o$, then solve the following system of equations:
$\begin{cases}y^2=40 x\\ (x-x_o)^2+(y-y_o)^2=50^2\end{cases}$
From the drawing we have:
$\begin {cases}(x_A, y_A)= (78.7, 56.1)\\(x_B, y_B)=(1.27, -7.1)\end {cases}$
Which gives the coordinates of O: $(x_o, y_o)=(40, 24.5)$.
Now you have the coordinates of $V(0, 0)$, A and B. Find the measures of $VA=b$ and $VB=a$; you have $AB=v=100$; use Heron's formula to find the area of triangle VAB.
I think the coordinates of O must be given.
#import <UIKit/UIKit.h>
#import "RCTBridgeModule.h"
#import "RCTInvalidating.h"
@interface RCTAlertManager : NSObject <RCTBridgeModule, RCTInvalidating>
@end
Marin Lalić (born 30 November 1969) is a Croatian retired football midfielder and current manager of Bjelovar.
He had earlier replaced Mario Kos as Bjelovar's manager in April 2017.
References
External links
1969 births
Living people
People from Bjelovar
Association football midfielders
Yugoslav footballers
Croatian footballers
HNK Hajduk Split players
S.C. Salgueiros players
F.C. Paços de Ferreira players
HNK Suhopolje players
NK Zagreb players
NK Hrvatski Dragovoljac players
NK Inter Zaprešić players
NK Bjelovar players
Yugoslav First League players
Primeira Liga players
Croatian Football League players
First Football League (Croatia) players
Croatian expatriate footballers
Expatriate footballers in Portugal
Croatian expatriate sportspeople in Portugal
Croatian football managers
Once children get on their feet, they're ready to take off, and ride-on toys give your kids the freedom of movement and sense of independence they need, not to mention that they're also loads of fun. These toys are versatile and come in several distinct types, colours and designs based on the child's age as well as interest, and there are toys for indoor as well as outdoor use. Though they might not be ready to hop behind a driver's seat just yet, a tricycle is an excellent place to start.
When it comes to ride-on toys, there are a variety of types from which to select. These include small cars, tricycles, scooters and toy motorcycles. Small children may enjoy taking a ride on an old-fashioned rocking horse, while others may prefer getting behind the wheel of a toy ride-on tractor. Other options include a quad bike, a balance bike and a wagon in which you can pull small children. Toys vary by size, features and details, as some cars may include added embellishments, like working lights. In addition, scooters are available for children to ride on or simply pedal with their feet.
Select a ride-on toy depending on how it's powered. Some options move solely through the use of a child's feet operating the pedals, pushing or kicking the toy. Another option is to select an electric toy that's battery-powered, making it easy for kids to climb in and enjoy the ride. Electric toys include cars, motorcycles, quad bikes and some scooters, while manually-operated toys include scooters, wagons, rocking chairs and pedal cars and tricycles.
You can choose a children's ride-on toy by the brand. Many high-quality manufacturers create an array of ride-on toys, including tractors crafted by John Deere, ride-ons for small kids made by Little Tikes and wagons crafted by Radio Flyer. Other brands to consider include Hasbro, Hot Wheels, VTech, Huffy and Eurotrike. Brands vary by features as well as by price, so consider your budget, your child's age and the design of the toy as you browse brands.
It's always wise to check to ensure that you're purchasing a toy in the right age range to allow your child to fully enjoy the experience. There are ride-ons for kids as young as six months old, and once a child is of walking age (which is generally around 12 months), there are even more options for pedal-operated ride-ons, such as a scooter or tricycle. Some toys accommodate a wider age range or can be adjusted as kids grow, such as a bicycle, tricycle or scooter.
cPanel Email Admin
Welcome to New Life Missions
Meet Rev. Vek & Samoeun
Staff/Volunteer Opportunities
Board of the Mission
Professional & Executive Ministries
Church Planning Ministry
Radio Ministry
Compassion Ministry
Women Ministry
Ordeal in Cambodia top
A true story by Veng Huong Taing as told to Sharon Fischer
The Exodus
One Fish A Day
The New Cambodia
Hopes & Disappointments
Edge of Freekom
Vek Huong Taing's letter
In our village group at Norea, Samoeun and I had come to know a Chinese woman, Mrs. Chhuy Ngeang, whose husband repaired cars for the Khmer Rouge in Battambang. Occasionally she was given permission to visit him.
During one of Mrs. Ngeang's visits to Battambang she had met a leader in the Khmer Rouge named Him Sarom. One day, in the course of conversation, she told Him Sarom about us and about our Christian faith, mentioning our training with Campus Crusade in Manila, and about our desire to get out of Cambodia. Secretly, Him Sarom had replied to Mrs. Ngeang that he was not in agreement with the Khmer Rouge revolution and that he wanted to escape the country too. He told Mrs. Ngeang that he considered us his friends, even though we had never met.
In the afternoon of the day that we prayed to know God?s will about leaving Norea, Mrs. Ngeang slipped us a letter from Him Sarom. The letter read:
To Brother and sister Huong and Samoeun,
I left Cambodia to Thailand ahead [of you]. I was in a hurry. I could not bring you along with me. We are brother(s) and sister of one Heavenly Father, though we [do not] know each other.... I know how much you want to get out, but don't worry. Someday I will come back and bring you out, or send someone to bring you out.
Him Sarom
After reading Him Sarom's letter, Samoeun and I agreed that it was God's answer to our prayer. Even though we had not yet met our new friend, we felt it was God's will to wait for His plan.
Our relationship with Mrs. Ngeang continued to grow, and often I shared coconuts with her and her three children. One day Mrs. Ngeang told me that she was going to seek permission through her husband in Battambang to leave Norea for Watt Kor, a village to the north where she had heard food was more plentiful. She said that she would seek permission for us also, claiming us to be her relatives. Since Mrs. Ngeang knew I was a Christian and because I had shared with her the miracle that had brought us from Kok Trom to Norea, she asked us to pray for a similar miracle as she sought permission to leave Norea.
Soon after Mrs. Ngeang requested permission, she received a letter granting all of us the opportunity to leave for Watt Kor. It was amazing to receive such permission; it almost seemed as if it would be easier to go from one country to another than from village to village in the Khmer Rouge's Cambodia. Even if someone had heard word that his parents, living in another village, had died, he would not have been able to go bury them; the Khmer Rouge felt it would be a waste of time. Going to see a dead body would not raise it to life again!
Not only was it unusual to receive permission so easily, but it was also not common to receive an accompanying letter. Usually the Khmer Rouge province leader just gave a spoken yes or no. When I showed our letter to the village leadership, they were extremely surprised, having never seemed permission by letter.
About one month after Mrs. Ngeang and her three children and Samoeun, Wiphousana and I arrived in Watt Kor, we met a woman who had escaped from Norea. She told me, "Huong, you are very lucky. After you left Norea, the Khmer Rouge executed everyone who had either worked for the Lon Nol government or had been a student during his regime."
Samoeun and I joyfully thanked God for leading us out safely, but when we told Mrs. Ngeang how God had worked, she suddenly became very angry. She told us that we had no right to thank God when it was she who was responsible for saving our lives. She said that many people had offered her gold to help get them out, but because she cared for us she chose to help us, though we had nothing we could offer for payment. Now we had turned around and thanked God instead!
We tried to explain to Mrs. Ngeang that we hadn't meant to exclude her from our appreciation but that she could feel honoured because God had used her in such a special way. But her anger smouldered, and within a day or two she had gone to the Watt Kor village leaders, telling them that we were Christians, that we worked for an American organization (another crime worthy of death) and that we had received training in Manila.
We learned later from an acquaintance of Samoeun's in Watt Kor that when the village leader heard Mrs. Ngeang's accusations, he communicated them to the Khmer Rouge official responsible for ordering executions. The agent contacted one person in the village who knew us before the country's fall: a Lon Nol army pilot's widow who was a classmate of Samoeun's during high school days.
Samoeun's classmate gave the official a very favourable report about us, denying we had ever exhibited disloyalty to the Khmer Rouge or loyalty to western influences. The agent believed her and dropped the charges.
A few weeks later, the Khmer Rouge asked Mrs. Ngeang to return to her husband in Battambang. As she left, she told Samoeun and me that she would be going immediately to the highest Khmer Rouge officials in Battambang with her accusations against us, and recommend that we be killed.
Meanwhile, as had happened in each previous village we lived in, our food rations were progressively cut back. Soon we had very little food. My family had a little relief for a while; one man who was responsible in his group for catching fish would secretly bring us a few fish every day before he took the rest to his group.
Wiphousana was now a year and seven months old. Samoeun's job in Watt Kor was to grind rice in a mill, and she was able to keep Wiphousana with her as she worked. My job was cutting bamboo; those of us performing that task were transported by truck for a couple of weeks at a time into a jungle area only 10 miles from the Thailand-Cambodia border.
When I first learned about my job I vowed to Samoeun that, even though we would be working so near the border, I would not think of escaping without her and Wiphousana. I assured her that if one day I didn't return, she would know I had been killed by the Khmer Rouge or had died from a disease in the jungle. There was no medicine provided if we fell ill, and one of our co-workers had already died of malaria during one of our two-week stays.
Two months after we had moved to Watt Kor, I heard it announced that 30 families were to be moved from Watt Kor to a village called Cheng Kdar, four miles away. All we knew about Cheng Kdar was that it had been established recently and already had a reputation for severe strictness and little food. The people of Watt Kor became very upset at this news, dreading the thought of being sent there.
When Samoeun and I heard the news, we committed the situation to God. A few days later the list was announced, and we were named among the families chosen to leave. There was much anxiety about the move on the part of most of those chosen, but Samoeun and I felt peaceful, believing that God would care for us.
Everyone's fears about Cheng Kdar proved to be justified. We were worked harder there than at any place we had been so far and once again had very little to eat. We were not allowed to catch fish for extra food or keep for ourselves the vegetables we had planted near our place during work breaks. If fish or vegetables were found hidden in our huts, it would mean certain death.
Our "home" in Cheng Kdar was a tiny room in a small house that had once been occupied by Buddhist monks before the accompanying pagoda was razed by the Khmer Rouge. Several families occupied the cramped dwelling. It was Khmer Rouge policy here to deny families any sense of privacy so that we would be more aware of things we could accuse each other of.
Our work schedule carried us from sunup to sundown, and our strength began to drain away. If we became too sick to work, the Khmer Rouge would accuse us of being lazy, taunting that if we could walk and eat, surely we could work!
For a while, Samoeun and I gained some relief from the meagre diet. At Cheng Kdar one of the group operations was the making of sugar from the sap of palm trees. Not long after we arrived, our group leader asked me to help him, secretly and on my free time, with the sugar mill's accounting and paperwork. In return he secretly gave me palm juice and sugar every day, and told me I could have some any time I wanted. From that sugar, we found ourselves each day with increased stamina.
Fear of the Khmer Rouge was intense in Cheng Kdar. Suicides became a frequent occurrence as the weeks and months went by. Many husbands deserted their wives when a chance for escape came, and from their despair some of the abandoned wives hanged themselves. Severe psychological depression had taken root as most people became totally apathetic about tomorrow, never knowing if they would be alive to see it. For Samoeun and me, we had, out of necessity, become apathetic primarily about our bodies, having long since given up bathing, combing our hair or brushing our teeth.
At Cheng Kdar, we had opportunity quietly to share our faith in Christ with about 10 or 20 people, and one of our co-workers committed his life to Christ. When we talked with them, we shared not only God's plan for salvation but also as much as we could about how to grow spiritually, knowing that if they became Christians we would have no way to continue to help them grow.
Many times we asked God, "Why, when we want to serve You so much, do we not have the chance?" When we became discouraged about our ministry limitations, we would often review in our minds a talk from our Campus Crusade training, "How to Rest in God's Plan." Recalling this message helped us to remember that God had a special plan for us, and we could rest in His revelation of it step by step.
Almost as a symbol of my faith I had kept my Campus Crusade ID card secretly, since Takao, hoping that if we reached the border it could be shown to a journalist or someone who could then inform other Campus Crusade staff that we were still alive.
Samoeun's job at Cheng Kdar was to help clear out the jungle areas for planting rice. She would often hum softly to herself as she worked, and at times would sing songs in English that we had learned from our training in Manila. She had no idea that anyone could hear.
On her lunch break one day, after she had eaten, Samoeun's group leader, Ann, brusquely told her that she was being summoned to a k'sang, convening immediately. Samoeun's heart was pounding as she came into the meeting, and she tried to recall what she might have done wrong.
Once inside the meeting room, she was surrounded by Ann, the village leader and some of Ann's assistants. Then Ann started the accusations, charging that although Samoeun was physically strong, many of the older women outworked her. Samoeun was also accused of singing "imperialist" American songs and of being a secret member of the U.S. Central Intelligence Agency. Ann's assistants repeated the accusations, and Samoeun was warned that a second k'sang would mean death.
By the time the group had finished accusing Samoeun, her tears started to flow. But she kept quiet, knowing that if she tried to defend herself against the charges, the group would only intensify the accusations.
Back in our little room, where Samoeun and I usually met after lunch to visit until our lunch break was over, I began to worry. Presently Samoeun came into our room, and when she saw me she burst into tears.
I became very anxious inside when she started crying, out of fear of what she had to tell me, coupled with concern that a neighbour would see her crying and accuse her. But I comforted her, and as I wiped away her tears, Samoeun told me what had transpired at the k'sang.
After she had told me the whole story, I said to her quietly, "Samoeun, we must not be angry with the leaders. We have God, so let's pray now." We stole behind our house to a place hidden from everyone's view. We began to pray there, asking God to take care of us and to forgive Samoeun's accusers. We prayed especially for Ann, that she would become Samoeun's friend.
Not long after Samoeun's k'sang, Ann's husband, who had been a government worker under Lon Nol, and one of his friends, a former professor, came to me secretly. They told me that because of my ties with Campus Crusade, an American organization, I was sure to be killed soon. They asked me if I would like to escape with them. There was a condition, however: if I agreed to go, I couldn't bring my family.
I refused their offer, and soon after I talked to them, I learned that they had escaped, leaving their families, including Ann and her two children, behind.
After her husband left, Ann became very frightened that the communists might accuse her of harbouring disloyal attitudes. She was afraid, too, for the safety of her children because they were still small. One day, as she and Samoeun walked out to the fields together, Ann suddenly broke down. She told Samoeun how depressed and alone she felt since her husband left. "Every day my heart is so sad," she cried. "My life on this earth is nothing. If I wasn't afraid for my children, I would take my life today."
Samoeun whispered to her, "Ann, I have had many upsets and sad things happen. But I have peace in my heart because of Jesus living in me." Samoeun then told Ann how she could come to know Christ too. When she finished, Ann was very quiet, and after a long silence, she wiped the tears from her eyes and walked on ahead of Samoeun to work.
Because of my educational background, my city upbringing and my ties with the western world through Campus Crusade, I was looked upon with great suspicion at Cheng Kdar. Following my position as an "accountant" with the village leader at the sugar mill, I was moved down the job scale. For an entire year my chief responsibility was to gather all the village's excrement for use as fertilizer in the village's agricultural endeavours. I was not given a shovel for the job, but a narrow board, and since I had not used soap since we fled Phnom Penh, I'm sure Samoeun's marriage commitment to me was tested often during that year! During my whole time in that job, a Khmer Rouge officer was assigned to watch me constantly, which may have been a wise idea: escape seemed very attractive to me many times that year.
The official's purpose, of course, in such close surveillance was to find something I could be accused of. But after I had been at the excrement job for a year, the village leaders apparently began to trust me, and I was given a new job, that of teaching the village's children to read.
I became a teacher in name only, because the children were allowed only five hours of instruction per week. The rest of the time I was responsible for overseeing them as they worked in the fields. Then, not long after I began teaching, I was asked to become a leader of 10 village families.
I was very surprised when they asked me to become a group leader, because usually that job was given only to those who had been peasant workers before Cambodia's fall. I even protested the appointment on that basis, but the village leader wouldn't let me say no.
I enjoyed being a group leader, especially because the job left me time to care for Wiphousana. At that time Samoeun was often gone two weeks at a time planting rice some distance away, and though I had to work, too, I was able to keep Wiphousana with me.
I also enjoyed being a group leader because, in some ways, I was able to lead the group as I chose. I had the authority to give my group their allotted days off, and I made certain that they got them consistently. One day, after I had decreed a rest day, Ann, who was in our group, said to me, "Huong, the love you have for us is a very good love!"
As a group leader I also gained some popularity because of the way I conducted the group meetings required every three days. Instead of holding accusation sessions, I used the time to teach my group as much as I could about patience, love, kindness and other Christ-like character qualities.
One day, near the end of 1978, Mrs. Ngeang, the Chinese woman who had left us in vengeful anger at Watt Kor, suddenly appeared in our village. She had come to look for me, having heard that we were living a good life, relatively speaking, at Cheng Kdar, and that we had food to eat.
I was shocked when I saw Mrs. Ngeang; her head was swollen and she was extremely thin, obviously suffering severely from malnutrition. Mrs. Ngeang told me that the Khmer Rouge had killed her husband, having suspected him of working with the Vietnamese who had been threatening an invasion of Cambodia.
Because of my position as a leader, I was able to give Mrs. Ngeang all the extra food I could: potatoes from our small private garden and sugar from the mill. As we expressed kindness and acceptance toward her, she asked our forgiveness for the attitudes she had held toward us. She noted sadly that she had indeed wanted the Khmer Rouge to kill us, but instead they had killed her husband.
We wanted Mrs. Ngeang to stay at Cheng Kdar, but our village leader would not grant her permission. So we gave her as much extra food as we could gather, and she left, returning to the village from which she had come.
Near the end of December 1978, the Vietnamese invaded Cambodia.
\section{Introduction}
Influence and sharp-threshold theorems have proved useful in
the study of
problems in discrete probability. Reliability theory and random graphs
provided early problems of this type, followed by percolation.
Important progress has been made since \cite{BOL, KKL} towards
a general theory, of which one striking aspect has been the
use of discrete Fourier analysis and hypercontractivity.
The reader is referred to \cite{F04, FKST} for a history
and bibliography.
Let $\Omega=\{0,1\}^N$ where $N<\infty$,
and let $\mu_p$ be the product measure on $\Omega$ with
density $p$. Vectors in $\Omega$ are denoted
by $\omega=(\omega(i): 1\le i\le N)$.
For any increasing
subset $A$ of $\Omega$, and any $i\in\{1,2,\dots,N\}$,
we define the {\it\text{conditional influence}\/} $I_A(i)$ by
\begin{equation}
I_A(i) = \mu_p(A\mid X_i=1) - \mu_p(A\mid X_i=0),
\label{inf}
\end{equation}
where $X_i$ is the indicator function of the event
$\{\omega\in\Omega: \omega(i)=1\}$.
It is well known (see \cite{BKKKL, FKST, KKL, Tal94}) that there
exists an absolute positive constant $c$ such that the following
holds. For
all $N$, all $p\in(0,1)$,
and all increasing $A$, there exists $i\in \{1,2,\dots,N\}$
such that
\begin{equation}
I_A(i) \ge c\min\{\mu_p(A), 1-\mu_p(A)\} \frac{\log N}N.
\label{inf0}
\end{equation}
The proof uses discrete Fourier analysis
and a technique known as `hypercontractivity'.
Inequality (\ref{inf0}) is usually stated for
the case $p=\frac12$, but it holds with the same
constant $c$ for
all $p\in(0,1)$.
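As a concrete illustration of definition (\ref{inf}), the conditional influences may be computed by direct enumeration when $N$ is small. The following Python sketch (an illustrative aid only, not part of the mathematical argument; the majority event and the parameters are arbitrary choices) evaluates $I_A(i)$ under the product measure $\mu_p$:

```python
from itertools import product

def mu_p(omega, p):
    """Product measure with density p on {0,1}^N."""
    k = sum(omega)
    return p**k * (1 - p)**(len(omega) - k)

def conditional_influence(A, N, p, i):
    """I_A(i) = mu_p(A | X_i = 1) - mu_p(A | X_i = 0), by enumeration."""
    num = {0: 0.0, 1: 0.0}
    den = {0: 0.0, 1: 0.0}
    for omega in product((0, 1), repeat=N):
        w = mu_p(omega, p)
        den[omega[i]] += w
        if A(omega):
            num[omega[i]] += w
    return num[1] / den[1] - num[0] / den[0]

N, p = 5, 0.5
majority = lambda omega: sum(omega) > N / 2   # an increasing event
infs = [conditional_influence(majority, N, p, i) for i in range(N)]
print(infs)  # by symmetry, all five influences are equal (to 0.375 here)
```

For product measures this quantity coincides with the absolute influence of \cite{BOL, KKL}; the distinction matters only for non-product measures, as discussed in Section 2.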
There is an important application to the theory of sharp
thresholds for product measures. Let $\Pi_N$ be the set of all
permutations of the index set
$I=\{1,2,\dots,N\}$. A subgroup $\mathcal A$ of $\Pi_N$
is said to {\it act transitively\/} on $I$ if, for
all distinct
pairs
$j,k\in I$, there exists $\pi\in\mathcal A$
with $\pi_j=k$. Any $\pi\in\Pi_N$ acts on $\Omega$ by
$\pi\omega=(\omega(\pi_i): 1\le i\le N)$.
An event $A$ is called {\it symmetric\/}
if there exists a subgroup $\mathcal A$ of $\Pi_N$ acting transitively
on $I$ such that $A=\pi A$.
If $A$ is symmetric, then $I_A(j)=I_A(k)$
for all $j$, $k$. By summing (\ref{inf0}) over $i$ we obtain
for symmetric $A$ that
\begin{equation}
\sum_{i=1}^N I_A(i) \ge c\min\{\mu_p(A), 1-\mu_p(A)\} \log N.
\label{suminf}
\end{equation}
It is standard (see the discussion of Russo's formula
in \cite{G99}) that
\begin{equation}
\frac{d}{dp} \mu_p(A) = \sum_{i=1}^N I_A(i),
\label{Russo}
\end{equation}
and it follows as in \cite{FKST} that, for $0<\epsilon<\frac12$,
the function $f(p)=\mu_p(A)$
increases from $\epsilon$ to $1-\epsilon$ over an interval
of values of $p$ whose length is of order at most $1/\log N$.
We refer to such a statement as a `sharp-threshold theorem',
and we note that such results have wide applications
to problems of discrete probability.
For example, the observations above
have been used recently
in \cite{BR1} to obtain a further
proof of the
famous theorem of Harris and Kesten that the critical probability
$p_{\mathrm c}$ of bond percolation on the square
lattice satisfies $p_{\mathrm c}=\frac12$.
Using a similar argument in a second paper,
\cite{BR2},
they have proved the conjecture that the critical probability
of site percolation on a certain Poisson--Voronoi graph in ${\mathbb R}^2$
equals $\frac12$
almost surely.
The principal purpose of the current article is to extend the results
above to probability measures more general than
product measures. We shall prove such results
for measures having a certain condition of
`monotonicity', which
is equivalent to the FKG lattice condition
and is described in the next section.
There are many situations in the probabilistic
theory of statistical mechanics where such measures
are encountered, including the Ising model and the
random-cluster model.
We define monotonic probability measures in Section 2,
and we note there that monotonicity is equivalent to the FKG lattice condition.
This is followed by an influence theorem for monotonic measures.
A monotonic measure $\mu$ may be used as the basis of a certain parametric
family of measures on $\Omega$ indexed by a parameter $p\in(0,1)$.
The influence theorem for $\mu$ may then be used to obtain
a sharp-threshold theorem for this class, as described in Section 3.
The influence theorem on the discrete space
$\Omega$ was extended in \cite{BKKKL} to
product measures on the Euclidean cube $[0,1]^N$.
Using the methods of Section 2, similar
results may be proved for general monotonic measures on $[0,1]^N$.
Unlike the discrete case, such an influence theorem does not
appear to imply a corresponding sharp-threshold theorem.
This is discussed in Section 4.
We turn finally to the random-cluster\ model, which may be viewed as an
extension of percolation and a generalization of the Ising/Potts models for
ferromagnetism, see \cite{G03, G-RC}.
The random-cluster\ measure is defined in Section 5,
and the sharp-threshold theorem is applied to the existence
of box-crossings in two dimensions.
\section{Influence for monotonic measures}
We begin this section with some definitions,
further details of which may be found in \cite{G-RC}.
Let $1\le N<\infty$, and write $I=\{1,2,\dots,N\}$
and $\Omega=\{0,1\}^N$. The set of all subsets of
$\Omega$ is denoted by $\mathcal F$.
A probability measure $\mu$ on $(\Omega,\mathcal F)$ is said to
be {\it positive\/} if $\mu(\omega)>0$ for
all $\omega\in\Omega$. It is said to satisfy the
{\it FKG lattice condition\/} if
\begin{equation}
\mu(\omega_1\vee \omega_2)\mu(\omega_1\wedge\omega_2)\ge \mu(\omega_1)\mu(\omega_2)
\qquad\text{for all } \omega_1,\omega_2\in\Omega,
\label{FKG}
\end{equation}
where
$\omega_1\vee\omega_2$ and
$\omega_1\wedge\omega_2$ are given by
\begin{align*}
\omega_1\vee\omega_2(i) &=\max\{\omega_1(i),\omega_2(i)\}, &i\in I, \\
\omega_1\wedge\omega_2(i) &=\min\{\omega_1(i),\omega_2(i)\}, &i\in I.
\end{align*}
See \cite{FKG, G-RC}.
The set $\Omega$ is a partially ordered set with
the partial order:
$\omega\ge\omega'$ if $\omega(i)\ge \omega'(i)$
for all $i\in I$.
A non-empty event $A \in \mathcal F$ is called {\it increasing\/} if:
$\omega\in A$ whenever there exists $\omega'$ with
$\omega\ge\omega'$ and $\omega'\in A$. It is called
{\it decreasing\/} if its complement is increasing.
For probability measures $\mu_1$, $\mu_2$ on
$(\Omega,\mathcal F)$, we write $\mu_1 \le_{\mathrm{st}} \mu_2$, and say that
$\mu_1$ is dominated stochastically by $\mu_2$, if
$$
\mu_1(A) \le \mu_2(A)\qquad\text{for all increasing events } A.
$$
The indicator function of an event $A$ is denoted by $1_A$.
For $i\in I$, we write $X_i$ for the indicator function
of the event $\{\omega\in\Omega: \omega(i)=1\}$.
A probability measure $\mu$ on $\Omega$
is said to be {\it positively associated\/}
if
$$
\mu(A\cap B) \ge \mu(A)\mu(B)\qquad
\text{for all increasing events $A$, $B$}.
$$
The famous FKG inequality of \cite{FKG}
asserts that a positive probability measure
$\mu$ is positively associated if it satisfies
the FKG lattice condition. It is well known that the FKG lattice condition
is not necessary for positive association, and we explore this next.
We shall for simplicity restrict ourselves henceforth to
positive measures.
The FKG lattice condition is equivalent to a
stronger property termed `strong positive association'.
For $J\subseteq I$ and $\xi\in\Omega$, let
$\Omega_J=\{0,1\}^J$ and
\begin{equation}
\Omega_J^\xi=\{\omega\in\Omega:\omega(j)=\xi(j) \text{ for } j \in I\setminus J\}.
\label{new2.2}
\end{equation}
The set of all subsets of $\Omega_J$ is denoted by $\mathcal F_J$.
Let $\mu$ be a positive probability measure on $(\Omega,\mathcal F)$, and define the
conditional probability measure $\mu_J^\xi$ on $(\Omega_J,\mathcal F_J)$ by
\begin{equation}
\mu_J^\xi(\omega_J) =
\mu\bigl(X_j=\omega_J(j) \text{ for } j \in J\,\big|\,
X_i=\xi(i) \text{ for }
i\in I\setminus J\bigr),
\qquad \omega_J\in\Omega_J.
\label{new2.3}
\end{equation}
We say that $\mu$ is {\it strongly positively-associated\/}
if:
for all $J\subseteq I$ and all $\xi\in\Omega$, the measure
$\mu_J^\xi$ is positively associated.
We call $\mu$ {\it monotonic\/} if: for all $J\subseteq I$, all
increasing subsets $A$ of $\Omega_J$, and
all $\xi,\zeta\in\Omega$,
\begin{equation}
\mu_J^\xi(A) \le
\mu_J^\zeta(A)\qquad\text{whenever } \xi\le\zeta.
\label{new2.3a}
\end{equation}
That is, $\mu$ is monotonic if, for all $J\subseteq I$,
\begin{equation}
\mu_J^\xi\le_{\mathrm{st}}\mu_J^\zeta \qquad\text{whenever } \xi\le\zeta.
\label{new2.4}
\end{equation}
We call $\mu$ 1-{\it monotonic\/} if (\ref{new2.4}) holds for all singleton
sets $J$. That is, $\mu$ is 1-monotonic if and only if, for all $j\in I$,
\begin{equation}
\mu\bigl(X_j=1 \,\big|\, X_i=\xi(i) \text{ for all } i \in I\setminus\{j\}\bigr)
\label{new2.5}
\end{equation}
is non-decreasing in $\xi$.
The following theorem is fairly standard, and the proof may be
found in \cite{G-RC}.
\begin{thm}\label{new2.7}
Let $\mu$ be a positive probability measure on $(\Omega,\mathcal F)$.
The following are equivalent.
\begin{romlist}
\item $\mu$
is strongly positively-associated.
\item $\mu$
satisfies the FKG lattice condition.
\item $\mu$ is monotonic.
\item $\mu$ is $1$-monotonic.
\end{romlist}
\end{thm}
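The equivalence of (ii) and (iv) can be checked numerically for any given positive measure. The sketch below (illustrative only; the Ising-type weights are an arbitrary choice of monotonic measure) verifies both the FKG lattice condition and 1-monotonicity by brute force on $\{0,1\}^3$:

```python
from itertools import product
from math import exp

N, J = 3, 0.7   # J >= 0 gives an attractive, hence monotonic, measure
Omega = list(product((0, 1), repeat=N))

def weight(w):
    # mu(w) proportional to exp(J * sum_{i<j} w(i)w(j)), an Ising-type measure
    return exp(J * sum(w[i] * w[j] for i in range(N) for j in range(i + 1, N)))

Z = sum(weight(w) for w in Omega)
mu = {w: weight(w) / Z for w in Omega}

def fkg_lattice():
    """Condition (ii): mu(a v b) mu(a ^ b) >= mu(a) mu(b) for all a, b."""
    for a in Omega:
        for b in Omega:
            join = tuple(map(max, a, b))
            meet = tuple(map(min, a, b))
            if mu[join] * mu[meet] < mu[a] * mu[b] - 1e-12:
                return False
    return True

def p_up(j, xi):
    """mu(X_j = 1 | X_i = xi(i) for i != j)."""
    up = xi[:j] + (1,) + xi[j + 1:]
    dn = xi[:j] + (0,) + xi[j + 1:]
    return mu[up] / (mu[up] + mu[dn])

def one_monotonic():
    """Condition (iv): p_up(j, .) is non-decreasing in the boundary values."""
    return all(p_up(j, xi) <= p_up(j, zeta) + 1e-12
               for j in range(N) for xi in Omega for zeta in Omega
               if all(x <= z for x, z in zip(xi, zeta)))

print(fkg_lattice(), one_monotonic())  # True True
```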
Our principal influence theorem is as follows. For a positive
probability measure $\mu$ and an increasing
event $A$, the
{\it\text{conditional influence}\/} of the index $i$ ($\in I$)
is given as in (\ref{inf}) by
\begin{equation}
I_A(i) = \mu(A\mid X_i=1) - \mu(A\mid X_i=0).
\label{inf2}
\end{equation}
For a product measure $\mu_p$, the influence of
the index $i$ was defined in \cite{BOL, KKL}
as $\mu_p(\omega^i\in A,\ \omega_i\notin A)$, where $\omega^i$
(respectively, $\omega_i$) denotes the configuration obtained from $\omega$ by
setting $\omega(i)$ equal to 1 (respectively, 0).
We refer to the latter quantity as the {\it\text{absolute influence}\/} of index $i$.
The absolute and conditional influences are equal for product measures,
but one should note that
\begin{equation}
I_A(i) \ne \mu(\omega^i\in A,\ \omega_i\notin A)
\label{inf3}
\end{equation}
for general probability measures $\mu$.
Further discussion of this point is provided after the next theorem.
\begin{thm}[Influence]\label{infthm}
There exists a constant $c$ satisfying $c\in(0,\infty)$
such that the following holds. Let $N\ge 1$ and let $A$
be an increasing
subset of $\Omega=\{0,1\}^N$.
Let $\mu$ be a positive probability
measure on $(\Omega,\mathcal F)$ that is monotonic.
There exists $i\in I$ such that
\begin{equation}
I_A(i) \ge c\min\{\mu(A),1-\mu(A)\} \frac{\log N}N.
\end{equation}
\end{thm}
Since product measures are monotonic, this extends the influence
theorem of \cite{KKL}. In the proof of Theorem \ref{infthm},
we shall encode the measure $\mu$ in terms of Lebesgue measure on
$[0,1]^N$, and we shall appeal to the influence theorem of \cite{BKKKL}.
Thus, we shall require no further arguments of discrete
Fourier analysis than those already present in \cite{BKKKL, KKL}.
We return briefly to the discussion of
absolute and conditional influences. Suppose, for
illustration, that $P$ is chosen at random with
${\mathbb P}(P=\frac13)={\mathbb P}(P=\frac23)=\frac12$ and that,
conditional on the value of $P$, we are provided with
independent Bernoulli random
variables $X_1,X_2,\dots,X_N$ with parameter $P$. Consider the increasing
event $A=\{S_N > \frac12 N\}$, where $S_N=X_1+X_2+\dots+X_N$.
By symmetry, the \text{conditional influence}\ of each index
is the same, as is the \text{absolute influence}\ of each index.
It is an easy calculation that
$$
I_A(1) = \tfrac13 + {\mathrm o}(1)\qquad\text{as } N\to\infty.
$$
On the other hand,
\begin{align*}
{\mathbb P}(\omega^1\in A,\ \omega_1\notin A)&=
{\mathbb P}\left(\tfrac12 N - 1 < \sum_{i=2}^N X_i \le \tfrac12 N\right)\\
&={\mathrm o}(e^{-\gamma N})\qquad\text{as }N\to\infty,
\end{align*}
for some $\gamma>0$. This example indicates not only
that the absolute and conditional influences can
be very different, but
also that the conclusion of Theorem \ref{infthm} would be false if
re-stated for \text{absolute influence} s.
In the proof of Theorem \ref{infthm} following,
we see that monotonicity has the effect of increasing
the influence of each coordinate in $I$.
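The contrast between the two influences in the mixture example can be checked by exact computation at moderate $N$. The following Python sketch (illustrative only) conditions on the value of $P$ and sums binomial tails:

```python
from math import comb

def mixture_influences(N):
    """Conditional and absolute influence of index 1 for A = {S_N > N/2}
    when P equals 1/3 or 2/3 with probability 1/2 each."""
    ps = (1 / 3, 2 / 3)

    def tail(p, n, k):
        # P(Binomial(n, p) >= k)
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    k = N // 2 + 1                      # A occurs iff S_N >= k
    post1 = [p / sum(ps) for p in ps]   # posterior of P given X_1 = 1
    post0 = [(1 - p) / sum(1 - q for q in ps) for p in ps]
    cond = (sum(w * tail(p, N - 1, k - 1) for w, p in zip(post1, ps))
            - sum(w * tail(p, N - 1, k) for w, p in zip(post0, ps)))
    # absolute influence: exactly k-1 of the other N-1 variables equal 1
    absol = sum(0.5 * comb(N - 1, k - 1) * p**(k - 1) * (1 - p)**(N - k)
                for p in ps)
    return cond, absol

cond, absol = mixture_influences(101)
print(cond, absol)  # conditional influence near 1/3; absolute influence tiny
```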
\begin{proof}[Proof of Theorem \ref{infthm}]
Let $A \in\mathcal F$ be an increasing event, and let $\mu$ be
positive and monotonic. Let $\lambda$ denote
Lebesgue measure on the cube $[0,1]^N$.
We propose to construct an increasing
subset $B$ of $[0,1]^N$ with the property that $\lambda(B)=\mu(A)$,
to apply the influence theorem of \cite{BKKKL} to the set $B$, and to deduce
the claim. This will be done via a certain function
$f:[0,1]^N \to \{0,1\}^N$ that we construct next.
Let $\mathbf x=(x_i: 1\le i \le N)\in [0,1]^N$, and let
$f(\mathbf x)=(f_i(\mathbf x): 1\le i\le N)$ be
given recursively as follows. The first coordinate
$f_1(\mathbf x)$ is defined by:
\begin{equation}
\text{with}\quad a_1=\mu(X_1=1),\quad\text{set} \quad f_1(\mathbf x) =
\begin{cases}
1 &\text{if } x_1 > 1-a_1,\\
0 &\text{otherwise}.
\end{cases}
\label{a1}
\end{equation}
Suppose we know $f_i(\mathbf x)$ for $1\le i < k$. Let
\begin{equation}
a_k=\mu(X_k=1\mid X_i = f_i(\mathbf x)\text{ for } 1\le i< k),
\label{a2}
\end{equation}
and define
\begin{equation}
f_k(\mathbf x) =\begin{cases}
1 &\text{if } x_k > 1-a_k,\\
0 &\text{otherwise}.
\end{cases}
\label{a2'}
\end{equation}
Suppose that $\mathbf x\le \mathbf x'$, and write $a_k$ and $a_k'$ for
the corresponding values in (\ref{a1})--(\ref{a2}). Clearly
$a_1=a_1'$, so that $f_1(\mathbf x) \le f_1(\mathbf x')$. Since $\mu$
is monotonic, $a_2\le a_2'$, so that $f_2(\mathbf x)\le f_2(\mathbf x')$.
Continuing inductively, we find that $f_k(\mathbf x)\le f_k(\mathbf x')$
for all $k$, which is to say that $f(\mathbf x)\le f(\mathbf x')$. Therefore,
$f$ is non-decreasing on $[0,1]^N$.
Let $B$ be the increasing subset of $[0,1]^N$ given by
$B=f^{-1}(A)$.
We make four notes concerning the definition of $f$.
\begin{numlist}
\item Each $a_k$ depends only on
$x_1,x_2,\dots,x_{k-1}$.
\item Since $\mu$ is positive, the $a_k$
satisfy $0<a_k<1$ for all $\mathbf x\in[0,1]^N$ and $k\in I$.
\item For any $\mathbf x\in [0,1]^N$ and $k\in I$,
the values $f_k(\mathbf x),f_{k+1}(\mathbf x),\dots,f_N(\mathbf x)$
depend on $x_1,x_2,\dots,x_{k-1}$ only through
the values $f_1(\mathbf x),f_2(\mathbf x),\dots,f_{k-1}(\mathbf x)$.
\item The function $f$ and the event $B$ depend on
the ordering of the set $I$.
\end{numlist}
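The recursion (\ref{a1})--(\ref{a2'}) is, in effect, sequential inverse-transform sampling from the conditional one-point distributions of $\mu$. The following Python sketch (illustrative only; the attractive measure on a path of four vertices is an arbitrary choice) implements the map $f$ and spot-checks its monotonicity on random ordered pairs $\mathbf x\le\mathbf x'$:

```python
import random
from itertools import product
from math import exp

N = 4
Omega = list(product((0, 1), repeat=N))

def weight(w):
    # an attractive (hence monotonic) measure on a path of N vertices
    return exp(0.8 * sum(w[i] * w[i + 1] for i in range(N - 1)))

def a(k, prefix):
    """a_k = mu(X_k = 1 | X_i = prefix[i] for i < k), summing out the tail."""
    def mass(head):
        return sum(weight(head + tail)
                   for tail in product((0, 1), repeat=N - len(head)))
    return mass(prefix + (1,)) / (mass(prefix + (0,)) + mass(prefix + (1,)))

def f(x):
    """f_k(x) = 1 iff x_k > 1 - a_k, built coordinate by coordinate."""
    out = ()
    for k in range(N):
        out += (1 if x[k] > 1 - a(k, out) else 0,)
    return out

rng = random.Random(0)
for _ in range(200):
    x = tuple(rng.random() for _ in range(N))
    xp = tuple(xi + rng.random() * (1 - xi) for xi in x)   # xp >= x
    assert all(u <= v for u, v in zip(f(x), f(xp)))
print("f is monotone on all sampled pairs")
```

Sampling $\mathbf x$ uniformly and returning $f(\mathbf x)$ yields a draw from $\mu$, which is the coupling used in (\ref{a10}).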
Let $U=(U_i: 1\le i\le N)$ be the identity function on
$[0,1]^N$, and note that $U$ has law $\lambda$. By the method
of construction
of the function $f$, $f(U)$ has law $\mu$.
In particular,
\begin{equation}
\mu(A) = \lambda(f(U)\in A) = \lambda(U\in f^{-1}(A)) = \lambda(B).
\label{a10}
\end{equation}
Let
$$
J_B(i) = \lambda(B\mid U_i=1)- \lambda(B\mid U_i=0),
$$
where the conditional probabilities are to be interpreted as
$$
\lambda(B\mid U_i=u) = \lim_{\epsilon\downarrow 0}\left\{
\frac 1\epsilon \lambda(B\mid U_i\in(u-\epsilon,u+\epsilon))\right\},\qquad u=0,1.
$$
Since $B$ is an event with a certain simple structure, this
is the same as $\lambda_{N-1}(B_i^u)$ for $u=0,1$,
where $\lambda_{N-1}$ is $(N-1)$-dimensional
Lebesgue measure and
$B_i^u$ is the set of all $(N-1)$-vectors
$(x_1,\dots,x_{i-1},x_{i+1},\dots,x_N)$ such that
$(x_1,\dots,x_{i-1},u,x_{i+1},\dots,x_N)\in B$.
By Theorem 1 of \cite{BKKKL}, we may find a constant $c>0$,
independent of
the choice of $N$ and $A$, such that: there exists $i\in I$ with
\begin{equation}
J_B(i) \ge c \min\{\lambda(B),1-\lambda(B)\}\frac{\log N}N.
\label{a4}
\end{equation}
We choose $i$ accordingly.
We claim that
\begin{equation}
I_A(j) \ge J_B(j)\qquad\text{for } j\in I.
\label{a5}
\end{equation}
Once (\ref{a5}) is shown, the claim follows from (\ref{a10})
and (\ref{a4}).
We prove next that
\begin{equation}
I_A(1) \ge J_B(1).
\label{a6}
\end{equation}
We have that
\begin{align}
I_A(1) &= \mu(A\mid X_1=1) - \mu(A\mid X_1=0)\nonumber\\
&= \lambda(B\mid f_1(U)=1) - \lambda(B\mid f_1(U)=0)\nonumber\\
&= \lambda(B\mid U_1 > 1-a_1) - \lambda(B\mid U_1\le 1-a_1)\nonumber\\
&= \lambda(B\mid U_1=1) - \lambda(B\mid U_1=0)\nonumber\\
&= J_B(1),
\label{a7}
\end{align}
where we have used notes (2) and (3) above.
This implies (\ref{a6}).
We turn our attention to (\ref{a5}) with $j\ge 2$.
We re-order
the set $I$ to bring the index $j$ to the front. That is, we let
$K$ be the re-ordered index set $K=
(k_1,k_2,\dots,k_N) = (j,1,2,\dots,j-1,j+1,\dots,N)$.
We write $g=(g_{k_i}: 1\le i\le N)$ for the associated function given by
(\ref{a1})--(\ref{a2'}) subject to the new ordering,
and $C=g^{-1}(A)$.
Thinking of (\ref{a1})--(\ref{a2'}) as an algorithm
for constructing $f$, we are applying the same algorithm
to the re-ordered set $K$.
We claim that
\begin{equation}
J_{C}(k_1) \ge J_B(j).
\label{a8}
\end{equation}
By (\ref{a7}) with $I$ replaced by $K$, $J_{C}(k_1) = I_A(j)$,
and (\ref{a5}) follows. It remains to prove
(\ref{a8}), and we shall use monotonicity again for this.
It suffices for (\ref{a8}) to prove that
\begin{equation}
\lambda(C\mid U_j=1) \ge \lambda(B\mid U_j=1),
\label{a9}
\end{equation}
together with the reversed inequality given $U_j=0$.
The conditioning on the left-hand side of (\ref{a9})
refers to the first coordinate encountered by the
algorithm (\ref{a1})--(\ref{a2'}) when applied to the re-ordered
set $K$.
Let
\begin{equation}
\ol U=
(U_1,U_2,\dots,U_{j-1},1,U_{j+1},\dots, U_N).
\label{old1}
\end{equation}
The $0/1$-vector $f(\ol U)=(f_i(\ol U): 1\le i\le N)$
is constructed sequentially (as above) by considering the indices
$1,2,\dots,N$ in turn. At stage $k$, we declare
$f_k(\ol U)$ to equal 1 if $U_k$ exceeds the threshold
$1-a_k$, where $a_k$ is a certain function of the variables $f_i(\ol U)$, $1\le i < k$.
By the monotonicity of $\mu$, this
threshold is non-increasing in these
variables.
The index $j$ plays a special role in that: (i) $f_j(\ol U) = 1$,
and (ii) given this fact, it is more likely than before
that the variables $f_k(\ol U)$, $j<k\le N$, will
take the value 1. The values $f_k(\ol U)$, $1\le k<j$ are unaffected
by the value of $U_j$.
Consider now the $0/1$-vector $g(\ol U) =
(g_{k_r}(\ol U): 1\le r\le N)$, constructed in the same manner
as above but with the new ordering $K$ of the index set $I$.
First we examine index $k_1$ ($=j$), and we
automatically declare $g_{k_1}(\ol U)
=1$ (since $U_j=1$). We then construct $g_{k_r}(\ol U)$,
$2\le r\le N$,
in sequence. Since the $a_k$ are non-decreasing
in the variables constructed so far, we have that
\begin{equation}
g_{k_r}(\ol U) \ge f_{k_r}(\ol U),\qquad
r=2,3,\dots,N.
\label{new23}
\end{equation}
Therefore, $g(\ol U) \ge f(\ol U)$,
implying as required that
\begin{equation}
\lambda(C\mid U_j=1) =\lambda (g(\ol U) \in A) \ge \lambda(f(\ol U)\in A)
=\lambda(B\mid U_j=1).
\label{a11}
\end{equation}
Inequality (\ref{a9}) follows. The same argument implies the
reversed inequality
obtained from (\ref{a9}) by reversing the conditioning to $U_j=0$.
This implies (\ref{a8}).
A formal proof of (\ref{new23}) follows.
Suppose that $r$ is such that $g_{k_s}(\ol U) \ge f_{k_s}(\ol U)$
for $2\le s < r$. By (\ref{a2'}), for $r\le j$,
\begin{align*}
f_{k_r}(\ol U) &= 1\quad\text{if} \quad U_{k_r} > \mu(X_{k_r}=0\mid
X_{k_s}=f_{k_s}(\ol U)\text{ for } 2\le s < r),\\
g_{k_r}(\ol U) &= 1\quad\text{if} \quad U_{k_r} > \mu(X_{k_r}=0\mid
X_{k_s}=g_{k_s}(\ol U)\text{ for } 1\le s < r).
\end{align*}
Now $g_{k_1}(\ol U) =1$ and, by the induction hypothesis and
monotonicity,
\begin{multline*}
\mu(X_{k_r}=0\mid
X_{k_s}=f_{k_s}(\ol U)\text{ for } 2\le s < r)\\
\ge \mu(X_{k_r}=0\mid
X_{k_s}=g_{k_s}(\ol U)\text{ for } 1\le s < r),
\end{multline*}
whence $g_{k_r}(\ol U) \ge f_{k_r}(\ol U)$ as required.
Consider finally the case $j < r\le N$. Then
\begin{align*}
f_{k_r}(\ol U) &= 1\quad\text{if} \quad U_{k_r} > \mu(X_{k_r}=0\mid
X_{k_s}=f_{k_s}(\ol U)\text{ for } 1\le s < r),\\
g_{k_r}(\ol U) &= 1\quad\text{if} \quad U_{k_r} > \mu(X_{k_r}=0\mid
X_{k_s}=g_{k_s}(\ol U)\text{ for } 1\le s < r),
\end{align*}
and the conclusion follows as before.
\end{proof}
\section{Sharp-threshold theorem}\label{sectst}
We consider in this section a family of probability
measures indexed by a parameter
$p\in(0,1)$, and we prove a sharp-threshold theorem
subject to a hypothesis of monotonicity. The motivating
example is the random-cluster model, to which we return in the next section.
Let $1\le N<\infty$, $I=\{1,2,\dots,N\}$,
and let $\Omega=\{0,1\}^N$ and $\mathcal F$ be given
as before.
Let $\mu$ be a positive probability measure on $(\Omega,\mathcal F)$.
For $p\in (0,1)$, we define the probability measure
$\mu_p$ by
\begin{equation}
\mu_p(\omega) = \frac 1{Z_p} \mu(\omega)
\left\{\prod_{i\in I} p^{\omega(i)}(1-p)^{1-\omega(i)}\right\},
\qquad \omega\in\Omega,
\label{mupdef}
\end{equation}
where $Z_p$ is the normalizing constant
\begin{equation}
Z_p = \sum_{\omega\in\Omega}\mu(\omega)
\left\{\prod_{i\in I} p^{\omega(i)}(1-p)^{1-\omega(i)}\right\}.
\end{equation}
It is immediate that $\mu_p$ is positive and that $\mu=\mu_{\frac12}$.
It is
easy to check that $\mu_p$ satisfies the FKG lattice condition
(\ref{FKG}) if and only if $\mu$ satisfies this condition,
and it follows that $\mu$ is monotonic if and only if, for all
$p\in (0,1)$, $\mu_p$ is monotonic.
In order to prove a sharp-threshold theorem for the family
$\mu_p$, we present first a Russo-type formula.
\begin{thm}[\cite{BGK}]\label{russo}
For any event $A\in\mathcal F$,
\begin{equation}
\frac{d}{dp}\mu_p(A) = \frac 1{p(1-p)}
\sum_{i\in I} \mathrm{cov}_p(X_i,1_A),
\label{russodiff}
\end{equation}
where $\mathrm{cov}_p$ denotes covariance with
respect to the measure $\mu_p$.
\end{thm}
\begin{proof}
This may be obtained exactly as in \cite{BGK}, Proposition 4,
see also Section 2.4 of \cite{G-RC}. The details are omitted.
\end{proof}
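Formula (\ref{russodiff}) is easily verified numerically for a small monotonic measure. The sketch below (illustrative only; the reference weights and the event are arbitrary choices) compares a central finite difference of $p\mapsto\mu_p(A)$ with the covariance sum:

```python
from itertools import product
from math import exp

N = 3
Omega = list(product((0, 1), repeat=N))
# the reference measure mu, unnormalised
base = {w: exp(0.5 * sum(w[i] * w[j] for i in range(N) for j in range(i + 1, N)))
        for w in Omega}

def mu_p(p):
    """The tilted measure of Section 3: base weight times p^k (1-p)^(N-k)."""
    wts = {w: base[w] * p**sum(w) * (1 - p)**(N - sum(w)) for w in Omega}
    Z = sum(wts.values())
    return {w: v / Z for w, v in wts.items()}

A = lambda w: sum(w) >= 2                    # an increasing event

def prob(m, event):
    return sum(v for w, v in m.items() if event(w))

def cov(m, i):
    """cov_p(X_i, 1_A)."""
    return (prob(m, lambda w: w[i] == 1 and A(w))
            - prob(m, lambda w: w[i] == 1) * prob(m, A))

p, h = 0.4, 1e-5
lhs = (prob(mu_p(p + h), A) - prob(mu_p(p - h), A)) / (2 * h)
rhs = sum(cov(mu_p(p), i) for i in range(N)) / (p * (1 - p))
print(lhs, rhs)  # the two sides agree up to discretisation error
```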
Let $\mathcal A$ be a subgroup of the permutation group
$\Pi_N$. A probability measure $\phi$ on $(\Omega,\mathcal F)$ is
called {\it $\mathcal A$-invariant\/} if $\phi(\omega)=\phi(\alpha\omega)$ for
all $\alpha\in\mathcal A$. An event $A\in\mathcal F$ is
called {\it $\mathcal A$-invariant\/} if $A=\alpha A$ for
all $\alpha\in\mathcal A$. It is easily seen that, for any subgroup $\mathcal A$,
$\mu$ is $\mathcal A$-invariant if and only if each $\mu_p$ is
$\mathcal A$-invariant.
\begin{thm}[Sharp threshold]\label{genconc}
There exists a constant $c$ satisfying $c\in(0,\infty)$
such that the following holds. Let $N\ge 1$ and let $A\in\mathcal F$
be an increasing
event.
Let $\mu$ be a positive probability
measure on $(\Omega,\mathcal F)$ which is monotonic. If there
exists a subgroup $\mathcal A$ of $\Pi_N$ acting transitively
on $I$ such that $\mu$ and $A$ are $\mathcal A$-invariant, then
\begin{equation}
\frac{d}{dp}\mu_p(A) \ge
\frac{c\xi_p}{p(1-p)}
\min\{\mu_p(A), 1-\mu_p(A)\} \log N,\qquad p\in(0,1),
\label{shpt}
\end{equation}
where $\xi_p =\min\{\mu_p(X_i)(1-\mu_p(X_i)): i\in I\}$.
\end{thm}
We precede the proof with a lemma. Let
$$
I_{p,A}(i) = \mu_p(A \mid X_i=1) - \mu_p(A \mid X_i=0).
$$
\begin{lem}\label{lemma}
Let $A\in\mathcal F$. Suppose there
exists a subgroup $\mathcal A$ of\/ $\Pi_N$ acting transitively
on $I$ such that $\mu$ and $A$ are $\mathcal A$-invariant.
Then $I_{p,A}(i)=I_{p,A}(j)$ for all $i,j\in I$ and all $p\in (0,1)$.
\end{lem}
\begin{proof}[Proof of Lemma \ref{lemma}]
Since $\mu$ is $\mathcal A$-invariant, so is
$\mu_p$ for every $p$.
Let $i,j\in I$, and find $\alpha\in\mathcal A$ such that $\alpha_i=j$.
Under the given conditions,
\begin{align*}
\mu_p(A,\, X_j=1) &= \sum_{\omega\in A}\mu_p(\omega)X_j(\omega)
=\sum_{\omega\in A} \mu_p(\alpha\omega)X_{i}(\alpha\omega)\\
&=\sum_{\omega'\in A} \mu_p(\omega')X_i(\omega')
= \mu_p(A,\, X_i=1).
\end{align*}
Applying this with $A=\Omega$, we find that
$\mu_p(X_j=1)=\mu_p(X_i=1)$. By dividing, we deduce
that $\mu_p(A\mid X_j=1)=\mu_p(A\mid X_i=1)$.
A similar equality holds with 1 replaced by 0, and the
claim follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{genconc}]
By Lemma \ref{lemma},
every index has the same influence.
Since $A$ is increasing,
\begin{align*}
\mathrm{cov}_p(X_i,1_A) &= \mu_p(X_i1_A) - \mu_p(X_i)\mu_p(A)\\
&= \mu_p(X_i)(1-\mu_p(X_i)) I_{p,A}(i)\\
&\ge \xi_p I_{p,A}(i).
\end{align*}
Summing over the index set $I$ as in
(\ref{russodiff}), we deduce (\ref{shpt}) by Theorem
\ref{infthm} applied to the monotonic measure $\mu_p$.
\end{proof}
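The identity $\mathrm{cov}_p(X_i,1_A)=\mu_p(X_i)(1-\mu_p(X_i))\,I_{p,A}(i)$ used in the proof holds for any event and any positive measure, by conditioning on $X_i$. A quick numerical confirmation (with an arbitrarily chosen positive measure) is:

```python
from itertools import product
from math import exp

N = 3
Omega = list(product((0, 1), repeat=N))
wts = {w: exp(0.6 * sum(w)) for w in Omega}       # any positive weights will do
Z = sum(wts.values())
mu = {w: v / Z for w, v in wts.items()}

A = lambda w: sum(w) >= 2                         # an event
i = 0
pA = sum(v for w, v in mu.items() if A(w))
pX = sum(v for w, v in mu.items() if w[i] == 1)
pAX = sum(v for w, v in mu.items() if w[i] == 1 and A(w))

cov = pAX - pX * pA                               # cov(X_i, 1_A)
infl = pAX / pX - (pA - pAX) / (1 - pX)           # I_A(i)
print(abs(cov - pX * (1 - pX) * infl))            # zero up to rounding
```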
\section{Probability measures on the Euclidean cube}
We have so far considered probability measures on
the discrete cube $\{0,1\}^N$ only. The method of proof
of the influence theorem, Theorem \ref{infthm}, may be applied also
to probability measures on the Euclidean cube $[0,1]^N$ that are
absolutely continuous with respect to Lebesgue measure. Any
such measure $\mu$ has a density function $\rho$, which is to say that
$$
\mu(A) = \int_A\rho(\mathbf x)\,\lambda(d\mathbf x),
$$
for (Lebesgue)
measurable subsets $A$ of $[0,1]^N$, with $\lambda$ denoting
Lebesgue measure. Since the density function $\rho$ is non-unique,
we shall phrase the results of this section in terms of
$\rho$ rather than the associated measure $\mu$. Some may regard
this as not entirely satisfactory, arguing that results for
{\it measures\/} should be based on hypotheses for these measures,
rather than for particular versions of their density functions.
One may rewrite the conclusions of this section thus, but
at the expense of greater measure-theoretic detail which obscures the
basic argument.
Let $N\ge 1$, and write $\Omega=[0,1]^N$.
Let $\rho:\Omega\to[0,\infty)$ be (Lebesgue) measurable. We call $\rho$
a {\it density function\/} if
$$
\int_{\Omega}\rho(\mathbf x)\,\lambda(d\mathbf x) = 1,
$$
and in this case we denote by $\mu_\rho$ the corresponding probability
measure,
$$
\mu_\rho(A) = \int_A \rho(\mathbf x)\,\lambda(d\mathbf x).
$$
We call
$\rho$ {\it positive\/} if it
is a strictly positive function on $\Omega$, and we say
it satisfies the {\it (continuous) FKG lattice
condition\/} if
\begin{equation}
\rho(\mathbf x\vee \mathbf y)\rho(\mathbf x\wedge \mathbf y) \ge \rho(\mathbf x)\rho(\mathbf y)\qquad
\text{for all } \mathbf x,\mathbf y\in \Omega,
\label{new24}
\end{equation}
where the operations $\vee$, $\wedge$ are defined as the
coordinate-wise maximum
and minimum, respectively.
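For example, any product density $\rho(\mathbf x)=\prod_{i=1}^N\rho_i(x_i)$
satisfies (\ref{new24}) with equality; more generally, it is standard that a
twice continuously differentiable positive $\rho$ satisfies (\ref{new24}) if
and only if
$$
\frac{\partial^2\log\rho}{\partial x_i\,\partial x_j}\ge 0,\qquad i\ne j,
$$
as holds, for instance, for $\rho(\mathbf x)\propto e^{\beta x_1x_2}$ on
$[0,1]^2$ with $\beta\ge 0$.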
Let $\rho$ be a density function.
We call $\mu_\rho$ {\it positively associated\/}
if
$$
\mu_\rho(A\cap B) \ge \mu_\rho(A)\mu_\rho(B),
$$
for all increasing subsets $A$, $B$ of $\Omega$.
[It is presumably well known that increasing subsets of $\Omega$ are
Lebesgue-measurable but need not be Borel-measurable; see the notes
at the end of this section.]
Let $I=\{1,2,\dots,N\}$.
For $J\subseteq I$, let
$\Omega_J=[0,1]^J$ and
\begin{equation}
\Omega_J^\xi=\{\mathbf x\in\Omega:x_j=\xi_j \text{ for } j \in I\setminus J\},
\qquad \xi\in\Omega.
\label{new4.2}
\end{equation}
The Lebesgue $\sigma$-algebra of $\Omega_J$ is denoted by $\mathcal F_J$.
Let $\rho$ be a positive density function. We define the
conditional probability measure $\mu_{\rho,J}^\xi$ on $(\Omega_J,\mathcal F_J)$ by
\begin{equation}
\mu_{\rho,J}^\xi(E) = \int_{E} \rho_{J}^\xi(\mathbf x)
\,\lambda(d(x_j:j\in J)),
\qquad E\in \mathcal F_J,
\label{new4.3}
\end{equation}
where $\rho_J^\xi$ is the conditional density function
$$
\rho_{J}^\xi(\mathbf x)
= \frac1{Z_J^\xi} \rho(\mathbf x)1_{\Omega_J^\xi}(\mathbf x),\quad
Z_J^\xi = \int_{\Omega_J^\xi}\rho(\mathbf x)\,\lambda(d(x_j:j\in J)).
$$
We sometimes write $\mu_\rho\bigl(E\mid(\xi_j: j\in I\setminus J)\bigr)$
for $\mu_{\rho,J}^\xi(E)$, and we recall the standard fact that
$\mu_\rho\bigl(\cdot\mid (\xi_j:j\in I\setminus J)\bigr)$
is a version of the conditional expectation
given the $\sigma$-field $\mathcal F_{I\setminus J}$.
We say that $\rho$
is {\it strongly positively-associated\/}
if:
for all $J\subseteq I$ and all $\xi\in\Omega$, the measure
$\mu_{\rho,J}^\xi$ is positively associated.
We call $\rho$
{\it monotonic\/} if: for all $J\subseteq I$, all
increasing subsets $A$ of $\Omega_J$, and
all $\xi,\zeta\in\Omega$,
\begin{equation}
\mu_{\rho,J}^\xi(A) \le
\mu_{\rho,J}^\zeta(A)\qquad\text{whenever } \xi\le\zeta.
\label{new4.3a}
\end{equation}
That is, $\rho$ is monotonic if, for all $J\subseteq I$,
\begin{equation}
\mu_{\rho,J}^\xi\le_{\mathrm{st}}\mu_{\rho,J}^\zeta \qquad\text{whenever } \xi\le\zeta.
\label{new4.4}
\end{equation}
Here is a basic result concerning stochastic ordering.
\begin{thm} [\cite{BattyBollmann, Preston}]\label{BBP}
Let $N\ge 1$, and let $f$ and $g$ be density functions on
$\Omega=[0,1]^N$.
If
\[
g(\mathbf x\vee \mathbf y)f(\mathbf x\wedge \mathbf y) \ge g(\mathbf x)f(\mathbf y)\qquad
\text{for all } \mathbf x,\mathbf y\in [0,1]^N,
\]
then $\mu_f \le_{\mathrm{st}} \mu_g$.
\end{thm}
If $\rho$ satisfies the FKG lattice condition and $A$ is an increasing event,
then
$$
1_A(\mathbf x\vee\mathbf y)\rho(\mathbf x\vee \mathbf y)\rho(\mathbf x\wedge \mathbf y) \ge
1_A(\mathbf x)\rho(\mathbf x)\rho(\mathbf y),
$$
whence, by Theorem \ref{BBP},
$$
\mu_\rho(A)\mu_\rho(B) \le \mu_\rho(A \cap B)
$$
for all increasing $A$, $B$. Therefore, $\mu_\rho$ is positively
associated.
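Theorem \ref{BBP} may be illustrated numerically in one dimension, where the hypothesis amounts to monotonicity of the likelihood ratio $g/f$, and the conclusion $\mu_f\le_{\mathrm{st}}\mu_g$ says that the distribution function of $\mu_g$ lies below that of $\mu_f$. The following sketch (the particular densities $f\equiv 1$ and $g\propto e^{\beta x}$ are illustrative assumptions, not taken from the text) discretizes $[0,1]$ and checks both:

```python
# Numerical illustration of Theorem BBP in one dimension.  The densities
# below are illustrative assumptions: f is uniform and g(x) ~ exp(beta*x),
# so g/f is increasing and the lattice hypothesis
#   g(x v y) f(x ^ y) >= g(x) f(y)
# holds.  The conclusion mu_f <=_st mu_g means F_g(t) <= F_f(t) for all t.
import math

n = 10_000                                  # grid resolution on [0, 1]
xs = [(i + 0.5) / n for i in range(n)]

beta = 3.0
f = [1.0] * n                               # uniform density
g_un = [math.exp(beta * x) for x in xs]
Z = sum(g_un) / n                           # Riemann normalisation
g = [v / Z for v in g_un]

# Lattice hypothesis on a sub-grid (x v y = max, x ^ y = min in 1-D).
hypothesis = all(
    g[max(i, j)] * f[min(i, j)] >= g[i] * f[j] - 1e-12
    for i in range(0, n, 250) for j in range(0, n, 250)
)

# Distribution functions as running Riemann sums of the densities.
Ff, Fg = [], []
cf = cg = 0.0
for fi, gi in zip(f, g):
    cf += fi / n
    cg += gi / n
    Ff.append(cf)
    Fg.append(cg)
conclusion = all(b <= a + 1e-9 for a, b in zip(Ff, Fg))
```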
Henceforth we restrict ourselves to {\it positive\/}
density functions. Arguments similar to the above
are valid with $\rho$ (assumed positive) replaced
by the conditional density function $\rho_J^\xi$, and one arrives
thus at the following.
\begin{thm}\label{ctspa}
Let $N\ge 1$, and let $\rho$ be a positive density function
on $\Omega=[0,1]^N$ satisfying the FKG lattice condition (\ref{new24}).
Then $\rho$ is strongly positively-associated and monotonic.
\end{thm}
We turn now to a `continuous' version of Theorem \ref{infthm}.
Let $N\ge 1$, and let
$\rho$ be a monotonic positive density function
on $\Omega=[0,1]^N$.
Let $U=(U_1,U_2,\dots,U_N)$ be the identity function
on $[0,1]^N$.
For an increasing subset $A$ of $\Omega$,
we define the {\it conditional influences\/} by
\begin{equation}
I_A(i) = \mu_\rho(A\mid U_i=1) - \mu_\rho(A\mid U_i=0),\qquad i\in I.
\label{infcontinuous}
\end{equation}
\begin{thm}[Influence]\label{infthm2}
There exists a constant $c$ satisfying $c\in(0,\infty)$
such that the following holds. Let $N\ge 1$ and let $A$
be an increasing
subset of $\Omega=[0,1]^N$.
Let $\rho$ be a positive density function on $[0,1]^N$
that is monotonic.
There exists $i\in I$ such that
\begin{equation}
I_A(i) \ge c\min\{\mu_\rho(A),1-\mu_\rho(A)\} \frac{\log N}N.
\end{equation}
\end{thm}
\begin{proof}
The proof is very similar to that of Theorem \ref{infthm}.
We propose first to construct an increasing
event $B$ such that $\lambda(B)=\mu_\rho(A)$,
by way of a function $f:[0,1]^N\to[0,1]^N$.
Let $\mathbf x=(x_i:1\le i\le N)\in[0,1]^N$, and write
$f(\mathbf x)=(f_1(\mathbf x),f_2(\mathbf x),\dots,f_N(\mathbf x))$.
The first coordinate $f_1(\mathbf x)$ depends on $x_1$ only
and is defined by:
\[
\mu_\rho(U_1>f_1(\mathbf x))=1-x_1.
\]
Since the density function $\rho$ is strictly
positive, $f_1(\mathbf x)$ is a continuous and strictly increasing function of
$x_1$. It is an elementary exercise to check
that the law of $f_1(U)$ under $\lambda$ is the same as
that of $U_1$ under $\mu_\rho$.
Having defined $f_1(\mathbf x)$, we define $f_2(\mathbf x)$ in terms of $x_1$, $x_2$
only by:
\[
\mu_\rho\bigl(U_2>f_2(\mathbf x)\,\big|\, U_1=f_1(\mathbf x)\bigr)
=1-x_2.
\]
The left-hand side is defined according to (\ref{new4.3}).
It is a standard fact that
$\mu_\rho(\cdot\mid U_1=f_1)$ is a version of the conditional
expectation $\mu_\rho(\cdot\mid \sigma(U_1))$, where $\sigma(U_1)$ denotes the $\sigma$-field
generated by $U_1$, and it is an exercise to check that the pair
$(f_1(U),f_2(U))$ has the same law under $\lambda$ as
does the pair $(U_1, U_2)$ under $\mu_\rho$. For each given $x_1\in(0,1)$,
$f_2(\mathbf x)$ is a continuous and strictly increasing function of $x_2$.
[We use the assumptions that $\rho$ is positive and monotonic,
respectively, here.]
We continue inductively.
Suppose we know $f_i(\mathbf x)$ for $1\le i < k$. Then $f_k(\mathbf x)$
depends on $x_1,x_2,\dots,x_k$ and is
given by:
\[
\mu_\rho\bigl(U_k>f_k(\mathbf x)\,\big|\, U_i=f_i(\mathbf x)\text{ for } 1\le i<k\bigr)
=1-x_k.
\]
As above, $f$ is strictly increasing (using the assumption of
monotonicity), and the law of
$f(U)$ under $\lambda$ is the same
as the law of $U$ under $\mu_\rho$. We set $B=f^{-1}(A)$.
Let
$$
J_B(i) = \lambda(B\mid U_i=1) - \lambda(B\mid U_i=0),\qquad i\in I.
$$
Since $f_1$ is continuous and strictly increasing,
$$
\mu_\rho(A\mid U_1=b) = \lambda(B\mid f_1(U_1)=b)=\lambda(B \mid U_1=b),
\qquad b=0,1,
$$
implying that $I_A(1)=J_B(1)$.
It remains to show that $I_A(j) \ge J_B(j)$ for $j\in I$.
Let $j\in I$, $j\ne 1$.
We re-order the coordinate set as
$K=\{j,1,2,\dots,j-1,j+1,\dots,N\}$,
and we construct a continuous
increasing function $g$ as above but subject to the new ordering.
Rather than re-work the details from the
proof of Theorem \ref{infthm}, we prove only part of what is necessary.
We sketch a proof that $\mu_\rho(A\mid U_j=1)
\ge \lambda(B\mid U_j=1)$, a similar argument being valid with 1
replaced by 0 and the inequality reversed.
The main step is to show that $f\le g$ under the assumption that $U_j=1$.
Suppose that $1\le r<j$, and assume it has already been proved
that $f_i(\mathbf x)\le g_i(\mathbf x)$ for $\mathbf x\in\Omega$
and $1\le i<r$.
Let $\mathbf x\in\Omega$. We claim that
\begin{multline}
\mu_\rho(U_r>\xi\mid U_i=f_i(\mathbf x)\text{ for } 1\le i<r)\\
\le
\mu_\rho(U_r>\xi\mid U_j=1,\ U_i=g_i(\mathbf x)\text{ for } 1\le i<r),
\qquad \xi\in [0,1].
\label{new5.1}
\end{multline}
By monotonicity,
\begin{multline}
\mu_{\rho,J}(\cdot\mid U_j=u,\ U_i=f_i(\mathbf x)\text{ for } 1\le i<r)
\\
\le_{\mathrm{st}} \mu_{\rho,J}(\cdot\mid
U_j=1,\ U_i=g_i(\mathbf x)\text{ for } 1\le i<r),\qquad u\in[0,1].
\label{new5.2}
\end{multline}
The left-hand side of (\ref{new5.2}) is a version of the conditional
expectation of the conditional measure $\mu_{\rho,J}(\cdot\mid U_i=f_i(\mathbf x)\text{ for } 1\le i<r)$ given $\sigma(U_j)$. By averaging
over the value of $u$ in (\ref{new5.2}), we obtain (\ref{new5.1}).
The other steps are proved similarly.
\end{proof}
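The first step of the coupling in the proof above is the classical inverse-distribution-function construction. Here is a minimal numerical sketch of that step, under the illustrative assumption that the first marginal of $\mu_\rho$ has density proportional to $e^{\beta u}$ (so that positivity holds and $F$ is continuous and strictly increasing):

```python
# Sketch of the first step of the coupling: f_1 is defined by
#   mu_rho(U_1 > f_1(x)) = 1 - x,
# i.e. f_1 = F^{-1}, where F is the distribution function of the first
# marginal of mu_rho.  The marginal density rho_1(u) ~ exp(beta*u) below
# is an illustrative assumption; it is strictly positive, so F is
# continuous and strictly increasing.
import math
import random

beta = 2.0

def F(t: float) -> float:
    """Distribution function of the marginal density rho_1."""
    return (math.exp(beta * t) - 1.0) / (math.exp(beta) - 1.0)

def f1(x: float) -> float:
    """Invert F by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The law of f1(U) under Lebesgue measure should be that of U_1 under
# mu_rho; equivalently the empirical CDF of f1(U) should track F.
rng = random.Random(0)
samples = sorted(f1(rng.random()) for _ in range(20_000))

def emp_cdf(t: float) -> float:
    return sum(s <= t for s in samples) / len(samples)

errs = [abs(emp_cdf(t) - F(t)) for t in (0.2, 0.5, 0.8)]
```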
Unlike the discrete setting of Section 3,
Theorem \ref{infthm2} does not imply a sharp-threshold theorem.
Any density function $\rho$ on
$[0,1]^N$ may be used to generate a parametric family
$(\rho_p: 0< p< 1)$ of densities given by
$$
\rho_p(\mathbf x) = \frac 1{Z_{\rho,p}}\rho(\mathbf x)\prod_{i=1}^N p^{x_i}(1-p)^{1-x_i},
\qquad \mathbf x=(x_1,x_2,\dots,x_N)\in[0,1]^N,
$$
and we write $\mu_p=\mu_{\rho_p}$.
Let $A$ be an increasing
subset of $[0,1]^N$. The proof of Theorem \ref{russo} may be adapted to
this setting to obtain that
$$
\frac d{dp} \mu_{p}(A)=\frac 1{p(1-p)}\sum_{i=1}^N \mathrm{cov}_p(U_i,1_A),
$$
where $U=(U_1,U_2,\dots,U_N)$ is the identity function on $[0,1]^N$,
and
$\mathrm{cov}_p$ denotes covariance with respect to $\mu_p$.
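The displayed derivative formula can be sanity-checked numerically. The sketch below is an illustration only: it takes $\rho$ to be the constant function and uses the increasing event $A=(N^{-1},1]^N$ considered later in this section, comparing a finite-difference derivative of $\mu_p(A)$ with the covariance sum.

```python
# Illustrative sanity check of the continuous Russo-type formula
#   d/dp mu_p(A) = (1/(p(1-p))) * sum_i cov_p(U_i, 1_A)
# for rho constant, so that rho_p(u) = c * pi**u per coordinate with
# pi = p/(1-p) and c = (1-p) * log(pi)/(2p-1), and for the increasing
# event A = (1/N, 1]^N.
import math

N = 5
p0 = 0.7
a = 1.0 / N

def F(t: float, p: float) -> float:
    """One-dimensional distribution function of rho_p."""
    pi = p / (1.0 - p)
    return (pi ** t - 1.0) / (pi - 1.0)

def muA(p: float) -> float:
    return (1.0 - F(a, p)) ** N

def partial_mean(p: float, lo: float, hi: float, m: int = 100_000) -> float:
    """Midpoint-rule integral of u * rho_p(u) over (lo, hi)."""
    pi = p / (1.0 - p)
    c = (1.0 - p) * math.log(pi) / (2.0 * p - 1.0)
    h = (hi - lo) / m
    return sum((lo + (i + 0.5) * h) * c * pi ** (lo + (i + 0.5) * h)
               for i in range(m)) * h

# Left side: central finite difference of mu_p(A).
h = 1e-5
lhs = (muA(p0 + h) - muA(p0 - h)) / (2.0 * h)

# Right side: by symmetry the N covariances are equal, and
# cov_p(U_i, 1_A) = tail**(N-1) * (E[U; U > a] - E[U] * tail).
tail = 1.0 - F(a, p0)
cov = tail ** (N - 1) * (partial_mean(p0, a, 1.0)
                         - partial_mean(p0, 0.0, 1.0) * tail)
rhs = N * cov / (p0 * (1.0 - p0))
```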
Let $\rho$ be the constant function, so that $\mu_\rho$ is Lebesgue
measure. As above,
let $p\in (0,1)$ and let
$Y_1,Y_2,\dots,Y_N$ be independent random variables
taking values in $[0,1]$ with common density function
$$
\rho_p(x) =
\begin{cases}\dfrac{\log[p/(1-p)]}{2p-1} p^x(1-p)^{1-x}
\quad&\text{if } p\ne \frac12,\ x\in (0,1),\\
1 &\text{if } p=\frac12,\ x\in (0,1).
\end{cases}
$$
It is easily checked that the joint density function
$$
\rho_p(\mathbf x) = \prod_{i=1}^N \rho_p(x_i), \qquad \mathbf x=(x_1,x_2,\dots,x_N)\in[0,1]^N,
$$
satisfies the FKG lattice condition, and is therefore monotonic.
We now choose $A$ by $A= (N^{-1},1]^N$. It is an easy calculation
that
$$
\mu_p(A) =\begin{cases} \left(1-\dfrac{\pi^{1/N}-1}{\pi-1}\right)^N
\quad&\text{if } p\ne\frac12,\\
\left(1-\dfrac1N\right)^N&\text{if } p=\frac12,
\end{cases}
$$
where $\pi=p/(1-p)$.
Therefore, as $N\to\infty$,
$$
\mu_p(A) \to \begin{cases} \pi^{-1/(\pi-1)}\quad&\text{if } p\ne \frac12,\\
e^{-1} &\text{if } p=\frac12.
\end{cases}
$$
In addition,
$$
\mathrm{cov}_{\frac12}(U_i,1_A) = \frac 1{2N}\left(1-\frac1N\right)^{N}
\sim \frac{e^{-1}}{2N}.
$$
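The two displayed formulas for $\mu_p(A)$ and its limit can be checked numerically; the following sketch recomputes the one-coordinate tail probability by direct integration of $\rho_p$ and compares.

```python
# Illustrative numerical check of the displayed formulas: for the tilted
# density rho_p(x) ~ pi**x with pi = p/(1-p), and A = (1/N, 1]^N,
#   mu_p(A) = (1 - (pi**(1/N) - 1)/(pi - 1))**N  ->  pi**(-1/(pi-1)).
import math

def mu_pA(p: float, N: int) -> float:
    if abs(p - 0.5) < 1e-12:
        return (1.0 - 1.0 / N) ** N
    pi = p / (1.0 - p)
    return (1.0 - (pi ** (1.0 / N) - 1.0) / (pi - 1.0)) ** N

def limit(p: float) -> float:
    if abs(p - 0.5) < 1e-12:
        return math.exp(-1.0)
    pi = p / (1.0 - p)
    return pi ** (-1.0 / (pi - 1.0))

def tail_by_integration(p: float, N: int, m: int = 100_000) -> float:
    """P(Y > 1/N) recomputed by midpoint-rule integration of rho_p."""
    pi = p / (1.0 - p)
    c = (1.0 - p) * math.log(pi) / (2.0 * p - 1.0)   # rho_p(u) = c * pi**u
    lo = 1.0 / N
    h = (1.0 - lo) / m
    return sum(c * pi ** (lo + (i + 0.5) * h) for i in range(m)) * h
```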
The influence theorem, Theorem \ref{infthm2}, may be applied
to the event $A$, but there is no sharp threshold for $\mu_p(A)$.
This situation diverges from that of the discrete setting
at the point where a lower bound for the conditional influence $I_A(i)$ is
used to calculate a lower bound for the covariance $\mathrm{cov}_p(U_i,1_A)$.
We return briefly to the measurability of an increasing subset
of $[0,1]^N$.
\begin{thm}\label{Lmeas}
Let $N\ge 2$. Every
increasing subset of $[0,1]^N$ is Lebesgue-measurable.
\end{thm}
Increasing subsets need not be Borel-measurable, as the following
example indicates. Let $M$ be a non-Borel-measurable
subset of $[0,1]$. Consider the increasing subset $A$ of $[0,1]^2$
given by
$$
A= \{(x,y)\in[0,1]^2: x+y>1\} \cup\{(x,1-x): x\in M\}.
$$
The function $h: x\mapsto (x,1-x)$ is a continuous, and hence
Borel-measurable, function
from ${\mathbb R}$ to ${\mathbb R}^2$. If $A$ were Borel-measurable, then so would be
$$
A'=A\cap \{(x,1-x): x\in {\mathbb R}\} = \{(x,1-x): x\in M\}.
$$
This would imply that $h^{-1}(A') = M$ is Borel-measurable, a contradiction.
\begin{proof}[Proof of Theorem \ref{Lmeas}]
The statement is trivially true when $N=1$, and we prove the general
case by induction on $N$. Suppose $n$ is such
that the result holds for $N=n$.
Let $A$ be an increasing subset of $[0,1]^{n+1}$, and let
$g:[0,1]^n\to[0,1]\cup\{\infty\}$ be defined by
\[
g(\mathbf x)=\inf \{y: (\mathbf x,y) \in A\},\qquad \mathbf x\in[0,1]^n.
\]
The function $g$ is decreasing on $[0,1]^n$, and hence, for all $c\in{\mathbb R}$,
the subset $H_c=\{\mathbf x:g(\mathbf x)<c\}$ is increasing. By the induction hypothesis,
each $H_c$ is Lebesgue-measurable in $[0,1]^n$, and therefore
$g$ is a measurable function.
Its graph
$G=\{(\mathbf x,g(\mathbf x)): \mathbf x\in[0,1]^{n}\}$ is (by an approximation
by simple functions, or otherwise) a Lebesgue-measurable set
and is also (by Fubini's Theorem) a null
subset of $[0,1]^{n+1}$. Furthermore, the set
$$
\ol A = \{(\mathbf x,y)\in [0,1]^{n+1}: y > g(\mathbf x)\}
$$
is
Lebesgue-measurable.
Now $A$ differs from $\ol A$ only on a subset of the null set $G$,
and the claim follows.
\end{proof}
\section{The random-cluster model}
The sharp-threshold theorem of Section \ref{sectst} may
be applied as follows to
the random-cluster\ measure.
Let $G=(V,E)$ be a finite graph, assumed for simplicity
to have neither loops nor multiple edges.
We take as configuration space the set
$\Omega=\{0,1\}^E$, and write $\mathcal F$ for the set of its subsets.
For $\omega\in\Omega$, we call
an edge $e$ {\it open\/} (in $\omega$) if $\omega(e)=1$,
and {\it closed\/} otherwise. Let $\eta(\omega)=
\{e\in E: \omega(e)=1\}$ be the set of
open edges, and consider the open graph $G_\omega=(V,\eta(\omega))$.
The connected components of $G_\omega$ are termed
{\it open clusters\/}, and $k(\omega)$ denotes
the number of such clusters (including
any isolated vertices).
Let $q\in(0,\infty)$, and let $\mu$ be the probability measure
on $(\Omega,\mathcal F)$ given by
\begin{equation}
\mu(\omega) = \frac 1 {Z(q)}
q^{k(\omega)}, \qquad \omega\in\Omega,
\label{defmu}
\end{equation}
where $Z(q)$ is the appropriate normalizing constant.
It is clear that $\mu$ is positive, and it is easily checked
that $\mu$ satisfies the FKG lattice condition
if $q\ge 1$. See
\cite{F72b, G-RC}.
(The FKG lattice condition does not hold
when $q<1$ and $G$ contains a circuit.)
{\it We assume henceforth
that $q\ge 1$.}
By Theorem \ref{new2.7},
$\mu$ is monotonic.
The random-cluster\ measure $\phi_{p,q}$ on the graph $G$
with parameters $p\in(0,1)$ and $q\in[1,\infty)$
is given as in (\ref{mupdef}) by
\begin{equation}
\phi_{p,q}(\omega) = \frac 1{Z(p,q)} \left\{
\prod_{e\in E} p^{\omega(e)}(1-p)^{1-\omega(e)}\right\}
q^{k(\omega)},
\qquad \omega\in\Omega.
\label{mupdef2}
\end{equation}
It is well known (see \cite{F72b, G-RC}) that
\begin{equation}
\frac p{p+q(1-p)} \le \phi_{p,q}(X_e=1) \le p,\qquad e\in E.
\label{finen}
\end{equation}
We call $G$ $\mathcal A$-{\it transitive\/} if its automorphism
group possesses a subgroup $\mathcal A$
acting transitively on $E$.
We may apply Theorem \ref{genconc} to obtain the
following.
There exists an absolute
constant $c>0$ such that, for all $\mathcal A$-transitive graphs $G$,
all $p$, $q$, and
any increasing $\mathcal A$-invariant event $A\in\mathcal F$,
\begin{equation}
\frac{d}{dp}\phi_{p,q}(A) \ge
c \min\left\{\frac{q}{\{p+q(1-p)\}^2}, 1\right\}
\min\{\phi_{p,q}(A), 1-\phi_{p,q}(A)\} \log N,
\nonumber
\end{equation}
whence
\begin{equation}
\frac{d}{dp}\phi_{p,q}(A) \ge
\frac cq \min\{\phi_{p,q}(A), 1-\phi_{p,q}(A)\} \log N.
\label{fpqsteep}
\end{equation}
The differential inequality (\ref{fpqsteep}) takes the usual
simpler form when $q=1$, and it
may be integrated
exactly for general $q\ge 1$.
Here is an illustration
of (\ref{fpqsteep}) when integrated. Let $p_1\in(0,1)$
be chosen such that $\phi_{p_1,q}(A)\ge\frac12$, and let $p_1<p_2<1$.
We note that $\phi_{p,q}(A) \ge \frac12$
for $p\in (p_1,p_2)$.
We integrate (\ref{fpqsteep})
over this interval to obtain that
\begin{equation}
\phi_{p_2,q}(A) \ge 1-\tfrac12 N^{-c(p_2-p_1)/q}.
\label{upper}
\end{equation}
Bollob\'as and Riordan have shown in \cite{BR2, BR1} how to apply
the sharp-threshold theorem for product measure to percolation in
two dimensions, thereby obtaining a further proof of the
famous theorem of Harris and Kesten that the critical probability of
bond percolation equals $\frac12$. Their key step
is the proof that there exists a sharp threshold for the
event that a large square is traversed by an open path.
One obtains similarly the following for
the random-cluster\ model on the
square lattice ${\mathbb L}^2$.
Let ${\mathbb Z}=\{\dots,-1,0,1,\dots\}$ be the integers,
and ${\mathbb Z}^2$ the set of all $2$-vectors $x=(x_1,x_2)$
of integers. We turn ${\mathbb Z}^2$ into a graph by placing an edge
between any two vertices $x$, $y$ with $|x-y|=1$,
where
$$
|z| = |z_1| + |z_2|, \qquad z\in {\mathbb Z}^2.
$$
We write ${\mathbb E}^2$ for the set of such edges, and ${\mathbb L}^2=({\mathbb Z}^2,{\mathbb E}^2)$
for the ensuing graph. We shall work on a finite torus of ${\mathbb L}^2$.
Let $n\ge 1$. Consider the square $S_n=[0,n]^2$ (this is a convenient
abbreviation for $\{0,1,2,\dots,n\}^2$) viewed as a subgraph
of ${\mathbb L}^2$. We identify certain pairs of
vertices on the boundary of $S_n$
in order to make it symmetric. More specifically, we identify
any pair of the form $(0,m)$, $(n,m)$ and of
the form $(m,0)$,
$(m,n)$, for $0\le m\le n$, and we merge any parallel
edges that ensue. Let $T_n=(V_n,E_n)$ denote the
resulting toroidal graph.
Let $\mathcal A_n$ be the automorphism group of the graph $T_n$, and note
that $\mathcal A_n$ acts transitively on $E_n$. The configuration space of
the random-cluster\ model on $T_n$ is denoted $\Omega(n)=\{0,1\}^{E_n}$.
Let $p\in(0,1)$ and
$q\in[1,\infty)$. Write $\phi_{n,p}$
for the random-cluster\ measure on $T_n$ with parameters $p$ and $q$,
and note that $\phi_{n,p}$ is $\mathcal A_n$-invariant.
Let
$$
{p_{\mathrm{sd}}}={p_{\mathrm{sd}}}(q) = \frac{\sqrt q}{1+\sqrt q},
$$
the self-dual point of the random-cluster\ model on ${\mathbb L}^2$, see
\cite{G03, G-RC}.
We note that the (Whitney)
dual of $T_n$ is isomorphic to $T_n$, and the random-cluster\ measure
on $T_n$ is self-dual
when $p={p_{\mathrm{sd}}}$.
Let $\omega\in\Omega(n)$.
Any translate in $T_n$ of a rectangle of the form
$[0,r]\times[0,s]$ is said to be
of size $r\times s$. When $r\ne s$, such a translate
is said to be traversed {\it long-ways\/}
(respectively, traversed {\it short-ways\/}) if the two shorter
sides (respectively, longer sides) of the rectangle are
joined within the rectangle by an open path of $\omega$.
Let $k\ge 2$, $n\ge 1$.
Let $R_n=[0,n+1]\times[0,n]$, viewed as a subgraph of $T_{kn}$,
and let $\mathrm{LW}_n$ be the event that $R_n$ is traversed long-ways.
By a standard duality argument,
\begin{equation}
\phi_{kn,{p_{\mathrm{sd}}}}(\mathrm{LW}_n)=\tfrac12,\qquad k\ge 2,\ n\ge 1.
\label{selfdual}
\end{equation}
Let $A_n$ be the event that there exists in $T_{kn}$ some
translate of the square $S_n=[0,n]\times[0,n]$ that
possesses either an open top--bottom crossing or
an open left--right crossing. The event
$A_n$ is $\mathcal A_{kn}$-invariant, and
\begin{equation}
\phi_{kn,{p_{\mathrm{sd}}}}(A_n)\ge \phi_{kn,{p_{\mathrm{sd}}}}(\mathrm{LW}_n) = \tfrac12.
\label{greater}
\end{equation}
We apply (\ref{upper}) to the event
$A_n$, with $p_1={p_{\mathrm{sd}}}$ and with $N=2(kn)^2$ being the number of edges
in $T_{kn}$. This yields
that
\begin{align}
\phi_{kn,p}(A_n) &\ge 1 - \tfrac12 [2(kn)^2]^{-c(p-{p_{\mathrm{sd}}})/q}
\nonumber\\
&\ge 1 - (kn)^{-2c(p-{p_{\mathrm{sd}}})/q},
\qquad {p_{\mathrm{sd}}}<p <1.
\label{upper4}
\end{align}
The event $A_n$ is defined on the whole of the torus. We next
use an argument taken from \cite{BR2, BR1} to obtain a more
locally defined event. We shall for simplicity of notation
treat certain
real-valued quantities as if they were integers. Let $1<\alpha <k$,
and let $H_{n,\alpha} = [0,\alpha n]\times[0,n/\alpha]$ and
$V_{n,\alpha} = [0,n/\alpha]\times[0,\alpha n]$.
Let $h_{n,\alpha}$, $v_{n,\alpha}$ be the sets of vertices
in $T_{kn}$ given by
\begin{align*}
h_{n,\alpha} &= \{ (l_1n(\alpha-1), l_2n(1-\alpha^{-1}))\in V_{kn}: l_1,l_2\in {\mathbb Z}\},\\
v_{n,\alpha} &= \{(l_1n(1-\alpha^{-1}), l_2n(\alpha -1))\in V_{kn}: l_1,l_2\in {\mathbb Z}\}.
\end{align*}
Consider the set $\mathcal H=H_{n,\alpha} + h_{n,\alpha}$ of translates of $H_{n,\alpha}$ by
vectors in $h_{n,\alpha}$, and also the set $\mathcal V=V_{n,\alpha}+v_{n,\alpha}$. If $A_n$
occurs, then some rectangle in $\mathcal H\cup\mathcal V$ is traversed
short-ways. By positive association and symmetry,
\begin{align}
\phi_{kn,p}(\comp{A_n}) &\ge
\phi_{kn,p}(\text{no member of $\mathcal H\cup\mathcal V$ is traversed short-ways})
\nonumber\\
&\ge \{1-\phi_{kn,p}(\mathrm{SW}_{n,\alpha})\}^M,
\label{upper5}
\end{align}
where $\mathrm{SW}_{n,\alpha}$ is the event that $H_{n,\alpha}$ is traversed short-ways,
and
\begin{equation}
M=|h_{n,\alpha}| + |v_{n,\alpha}|.
\label{defM}
\end{equation}
After taking into account the rounding effects above,
we find that
\begin{equation}
M \le 2\left(1+\frac{k}{\alpha-1-n^{-1}}\right)
\left(1+\frac{k}{1-\alpha^{-1}-n^{-1}}\right),
\label{Mlower}
\end{equation}
so that $M$ is approximately $2k^2\alpha/(\alpha-1)^2$
when $k$ and $n$ are large.
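Indeed, a routine check: letting $n\to\infty$ in (\ref{Mlower}) and writing
$1-\alpha^{-1}=(\alpha-1)/\alpha$, the right side becomes
$$
2\left(1+\frac{k}{\alpha-1}\right)\left(1+\frac{k\alpha}{\alpha-1}\right)
=\frac{2k^2\alpha}{(\alpha-1)^2}\bigl(1+O(k^{-1})\bigr)\qquad\text{as } k\to\infty.
$$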
Combining (\ref{upper4})--(\ref{defM}), we arrive at the
following theorem, where $\mathrm{SW}_{n,\alpha}$ is the event
that the rectangle $\bigl[0,\lfloor n\alpha\rfloor\bigr]\times
\bigl[0,\lfloor n/\alpha\rfloor\bigr]$ is crossed short-ways.
\begin{thm}\label{thmexcat} Let $k\ge 2$, $n\ge 1$, and
${p_{\mathrm{sd}}}< p < 1$. We have that
\begin{equation}
\phi_{kn,p}(\mathrm{SW}_{n,\alpha})
\ge 1- e^{- g(p-{p_{\mathrm{sd}}})}
\label{upper6}
\end{equation}
where
$$
g=g(k,n,\alpha,q) = \frac {2c}{Mq} \log(kn).
$$
\end{thm}
In particular, for $p>{p_{\mathrm{sd}}}$, one may make $\phi_{kn,p}(\mathrm{SW}_{n,\alpha})$
large by holding $k$ fixed and sending $n\to\infty$.
It does not seem to be easy to deduce an estimate for $\phi_{p,q}(\mathrm{SW}_{n,\alpha})$
for a random-cluster\ measure $\phi_{p,q}$ on the infinite lattice ${\mathbb L}^2$.
Neither do we know how to use the existence of crossings short-ways
to build crossings long-ways. This is in contrast
to the case of product measure, see \cite{BR1, CC, G99, Ru78, Ru81, SeW}.
\section{The critical point}
There is a famous conjecture that the critical point $p_{\mathrm c}(q)$ of
the random-cluster\ model on ${\mathbb L}^2$ equals ${p_{\mathrm{sd}}} (q)$. We do not
spell out the details necessary to state this conjecture
properly, referring the reader instead to \cite{G03, G-RC}.
The conjecture is known to be valid for $q=1$ (percolation),
$q=2$ (a case corresponding to the Ising model), and for sufficiently
large $q$ (namely $q \ge 21.61$). The conjecture would follow
if one could prove a strengthening of Theorem \ref{thmexcat} in which
short-ways is replaced by long-ways, and with the toroidal
measure replaced by the wired measure on the full lattice.
We finish by explaining this.
The so-called
`wired random-cluster\ measure' on ${\mathbb L}^2$ is denoted by
$\phi_{p,q}^1$, and the reader is referred to the references
above for a definition of $\phi_{p,q}^1$.
\begin{thm}\label{alternativefinaltheorem}
Let $q\ge 1$.
Let $p_k$ be the $\phi_{p,q}^1$-probability that a
$2^k \times 2^{k+1}$
rectangle is crossed long-ways.
Suppose that
\begin{equation}
\prod_{k=1}^{\infty} p_k > 0, \qquad p > {p_{\mathrm{sd}}}(q).
\label{finalcond}
\end{equation}
Then the critical point of the random-cluster\ model on ${\mathbb L}^2$ equals ${p_{\mathrm{sd}}}(q)$.
\end{thm}
By duality,
$1-p_k = \phi_{p',q}^0(\mathrm{SW}(k))$,
where $\mathrm{SW}(k)$ is the event that the rectangle
$[0,2^{k+1}-1]\times[0,2^k +1]$
is traversed short-ways, and $p'$ is the dual value of $p$,
$$
\frac {p'}{1-p'} = \frac{q(1-p)}p.
$$
Therefore, using translation invariance and the fact that a short-ways
crossing of $\mathrm{SW}(k)$ contains an open path of radius at least $2^k+1$
from one of the $2^{k+1}$ vertices of a longer side,
\begin{align*}
\sum_{k=1}^\infty(1-p_k) &\le
\sum_{k=1}^\infty 2^{k+1}\phi_{p',q}^0(\text{rad}(C)\ge 2^k+1)\\
&\le 4\sum_{n=1}^\infty \phi_{p',q}^0(\text{rad}(C)\ge n)\\
&= 4\phi_{p',q}^0(\text{rad}(C)),
\end{align*}
where $\text{rad}(C)$ is the radius of the open cluster $C$ at the origin,
that is, the maximum value of $n$ such that $0$ is joined by an open
path to the boundary of the box $[-n,n]^2$.
It follows that
$$
\phi_{p',q}^0(\text{rad}(C))<\infty,\qquad p < {p_{\mathrm{sd}}}(q),
$$
is sufficient for $p_{\mathrm c}(q)={p_{\mathrm{sd}}}(q)$.
\begin{proof}
We use a construction given in \cite{CC},
which was known earlier to one of the current
authors and to Paul Seymour. For odd $k$,
let $A_k$ be the event that $[0,2^k]\times[0,2^{k+1}]$ is
traversed long-ways. For even $k$, let $A_k$
be the event that $[0,2^{k+1}]\times[0,2^k]$ is
traversed long-ways.
By the positive association and automorphism-invariance
of $\phi_{p,q}^1$,
under (\ref{finalcond}),
$$
\phi_{p,q}^1\left(\bigcap_k A_k\right)\ge \prod_{k=1}^\infty \phi_{p,q}^1(A_k)
> 0,\qquad p>{p_{\mathrm{sd}}}(q).
$$
On the intersection of the $A_k$, there
exists an infinite open cluster, and therefore $p_{\mathrm c}(q)\le {p_{\mathrm{sd}}}(q)$.
It is standard (see \cite{G03, G-RC})
that ${p_{\mathrm{sd}}}(q)\le p_{\mathrm c}(q)$, and therefore equality holds
as claimed.
\end{proof}
\section{Acknowledgment} The work of the first author has
been
supported by the Engineering and Physical Sciences Research Council
through a PhD studentship.
Journal article, Open Access

# Further investigations of random models of Uranus and Neptune

Podolak, M.; Podolak, Ji I.; Marley, M. S.

Published 1 February 2000. DOI: 10.1016/s0032-0633(99)00088-4

Abstract: We present a series of computer models for Uranus and Neptune where the interior density distribution is randomly chosen. The only constraints placed on the distribution are that the density does not decrease with decreasing radius, and that the density distribution fits the observed mass and gravitational moments of these planets. Previous models of these planets all had a density discontinuity at about 70% of the total radius. We use our models to explore the space of density distributions that fit the observed gravitational moments, and set limits on the position and size of this discontinuity. We find that models are possible with no discontinuity in the mantle. In addition a density discontinuity as large as 3 g/cc is possible for Uranus if the discontinuity is inward of about 0.75 Uranus radii. Closer to the surface the discontinuity must be smaller. For Neptune, the larger uncertainties in the measured moments result in coarser limits on the size of the density jump. Other means of limiting the range of acceptable models are discussed.
The homie @GilbereForte gives us a new reel from his EYES OF VERITAS mixtape. Directed by Jerome White.
@TayDoeTV gives us a look @TheGAME's upcoming visual that was recently banned from Viacom's network.
My drinking buddy @JeanGreasy joined @PharoaheMonch during his set at Brooklyn Bowl last night. BLOW! was there to cover it.
Prior to returning to the U.S. for his album release date, Tyler and the Golf Wang hooligans invaded Tim Westwood's show on 1xtra. They discuss B.o.B's alleged diss towards them, being compared to Wu-Tang and more.
@TinieTempah gave a preview of a new joint featuring The Taylor Gang Leader a week back when he was in Toronto. Now he liberates the next single off his album, Disc-overy.
While out in the Bay Area, Spitta got his official JETS chain from Jerm Jilla. Props to AskHowFly.
Buchanan and Hannity's color arousal and racism
Here is a post from a great new website and blog politiclast.com that has a post/article addressing the issue of how the Media Neglects Buchanan and Hannity's Racist Ways.
Posted on 29 March 2008 by McLovin. The article notes:
While the mainstream media still questions the racial implications of the so-called "Jeremiah Wright scandal", two of the mainstream media's pundits are spewing racist rants and showing complete hypocrisy. I have written in earlier posts about the notion that America may not be grown up enough for a real president who will discuss this nation's problems as adults. The mainstream media's inability to hold Hannity and Pat Buchanan accountable in the same manner that they hold a black preacher is deplorable.
Why don't these statements raise the same level of outrage for intolerance that Wright's do?
Barack says we need to have a conversation about race in America. Fair enough. But this time, it has to be a two-way conversation. White America needs to be heard from, not just lectured to. This time, the Silent Majority needs to have its convictions, grievances and demands heard. And among them are these:
First, America has been the best country on earth for black folks. It was here that 600,000 black people, brought from Africa in slave ships, grew into a community of 40 million, were introduced to Christian salvation, and reached the greatest levels of freedom and prosperity blacks have ever known. Wright ought to go down on his knees and thank God he is an American.
Second, no people anywhere has done more to lift up blacks than white Americans. Untold trillions have been spent since the '60s on welfare, food stamps, rent supplements, Section 8 housing, Pell grants, student loans, legal services, Medicaid, Earned Income Tax Credits and poverty programs designed to bring the African-American community into the mainstream."
Are you kidding me? He acts like the African slaves were gathered onto a Princess Cruise liner for new jobs in the new land. The frightening thing is that there are people in the 21st century who will read these comments and say to themselves, "Right on, Pat. You got that right."
This man has the audacity to say that people in the black community should be on their knees thanking the white community for bringing our ancestors to America, subjecting them to slavery, denial of humanity, Jim Crow laws, segregation, denial of civil rights, and denial of opportunity. While the dominant community worked to keep black people within the strict confines of subjugation, the white community reaped the benefits of low cost black labor and the benefits of black people being artificially prevented from competing fairly in a capitalistic environment. The black American today should be thankful that although we live in the richest country in the world and earn wages that exceed many blacks elsewhere, we pay much more than many blacks around the world in order to exist without health care, equal wages, equal opportunities for education and employment, and the other social ills that run the gamut.
To the extent that he acknowledges real problems in the black community, Buchanan seems to dismiss the white community's responsibility for those problems.
"Is white America really responsible for the fact that the crime and incarceration rates for African-Americans are seven times those of white America? Is it really white America's fault that illegitimacy in the African-American community has hit 70 percent and the black dropout rate from high schools in some cities has reached 50 percent? Is that the fault of white America or, first and foremost, a failure of the black community itself?"
Buchanan admits that racism is a problem in America, but he seems to suggest that whites are more likely to be victims than blacks.
"As for racism, its ugliest manifestation is in interracial crime, and especially interracial crimes of violence," he writes. "Is Barack Obama aware that while white criminals choose black victims 3 percent of the time, black criminals choose white victims 45 percent of the time?"
Where is Chris Matthews spending four nights on this? Oh, yes… he is a regular pundit on his show. Where is CNN, or even PBS, on these comments and coverage of this outrageous and hurtful premise? Oh, yes… they are contributors at one time or another on their networks as well.
Sean Hannity may well be one of the biggest scumbags in all of media. I have said, in some form or another, that anyone who listens to Hannity regularly as a loyal listener must have an IQ of about 60 and some significant personality/emotional defects. That aside, Hannity has been back at his racist ways once again. We could start by listing the long list of Hannity's proclamations of racism from black leaders and politicians. For the sake of this post I want to focus on his hypocrisy and the reporting on the Hal Turner story.
Hannity is silent about the racist affiliations of favored guests like Family Research Council president Tony Perkins, Mississippi Republican Governor Haley Barbour and former Republican Congressman Bob Barr, all of whom have spoken before gatherings of America's largest white supremacist group, the Council of Conservative Citizens.
A few days ago, Hannity brought Malik Shabazz of the New Black Panther Party on the show. Shabazz and his organization had previously chosen to endorse Barack Obama, who subsequently rejected the endorsement. It was up to Hannity to make some hay out of this, but the tables got turned very quickly:
Hannity added, "What I don't think you're understanding here, Malik, is that when you hear the minister of him for 20 years, when you hear the associations with Louis Farrakhan, one of the biggest racists and anti-Semites in the country, what you're not understanding is, America hears extremism at its worst."
Shabazz responded, "Let me ask you this. Are you to be judged by your promotion and association with Hal Turner?"
Hannity waved his arm around.
"I don't know anybody named - this is nonsense. I don't…" Then Hannity changed his tune. "Sir, sir… That was a man that was banned from my radio show ten years ago, that ran a Senate campaign in New Jersey."
Then, as Shabazz refused to stop talking or back down, Hannity, in a tacit admission, said, "I'm not running for president."
"A neo Nazi, you backed his career," Shabazz said.
Hannity answered, "That is an absolute, positive, lie and you've been reading the wrong websites…, my friend. Good try."
AAPP: Where is the outrage?
Essex County Work Accident Lawyers | Jeffrey Glassman Injury Lawyers
Methuen Workers' Compensation
According to the Bureau of Labor Statistics, a total of 75,300 people in the civilian labor force work in Methuen, Salem, and Lawrence. Employment statistics reveal that 15.3 percent of workers in Methuen are employed in management, business, and finance occupations, and 26.52 percent are employed in sales, office, or administrative support positions. Other top sectors include healthcare and technology; engineering and computer science; and transportation, production, and material moving.
Workers in these and other industries could be hurt or even killed while performing work tasks. If this occurs, a Methuen workers' compensation attorney should be consulted to help report the injury and make a claim for benefits through workers' compensation. Jeffrey Glassman Injury Lawyers can assist throughout the process of making a work injury claim.
Methuen Workers Face Risks on the Job
Methuen is located in Essex County. As of the 2010 census, the population of Methuen was 47,255. The city was originally part of Haverhill until early residents petitioned to form a new city. Methuen was officially incorporated in 1726 and was named for Sir Paul Methuen, who was a member of the King's Privy Council.
The development of Methuen was heavily influenced by the growth of industry during the 1800s. The Methuen Cotton Mill was constructed in the 1820s, and hats and shoes were manufactured nearby.
The city is bordered by Haverhill, North Andover, Lawrence, Andover, Dracut, Pelham, and Salem, New Hampshire. It is just 30 miles to the northwest of Boston and 25 miles to the southeast of Manchester, New Hampshire. The median income for families in the city is almost $60,000.
Workers at all income levels deserve a safe place to work and employers should follow all regulations set by the Occupational Safety and Health Administration (OSHA). Unfortunately, some employers willfully or negligently fail to protect workers.
One Methuen contractor, for example, was cited by the Occupational Safety and Health Administration for willful failure to provide cave-in protection for employees. An OSHA spokesperson commented: "It's fortunate that no cave-in occurred but that in no way relieves an employer of the responsibility to provide the required protection for workers."
Getting Help From a Methuen Workers' Compensation Lawyer
After a workplace injury, victims should receive coverage for medical treatment and lost wages. There is no requirement to prove that an employer was negligent in order for an injured worker to get benefits through workers' compensation. As long as it can be demonstrated that an illness or injury originated from work tasks, the worker should be entitled to coverage.
In addition to medical treatment costs, workers are also entitled to temporary or permanent benefits for full or partial disabilities if a workplace injury reduces earning potential. If the injury is a fatal one, survivors of the person killed on the job may receive death benefits. Our law firm can assist throughout the process of making a benefits claim and recovering the necessary funds after an injury.
Schedule your free consultation with a Methuen workers' compensation lawyer at Jeffrey Glassman Injury Lawyers. Call (617) 777-7777 or contact us online.
\section{Introduction}
As a simplified model, the Boussinesq approximation of Navier-Stokes-Fourier system is widely used in the study of hydrodynamic stability problems, see \cite{PGD1, AJM1, JP1}, among others. Here we consider the 2D Boussinesq system in the absence of thermal conduction. In Eulerian coordinates, it reads as
\begin{equation}
\label{intp1}
\left\{
\begin{aligned}
&\partial_{t}\mathbf{v} +\mathbf{v}\cdot \nabla \mathbf{v}-\nu\Delta\mathbf{v} +\nabla p = \vartheta\mathbf{e}_{2},\\
&\nabla\cdot\mathbf{v} = 0,\\
&\partial_{t}\vartheta+\mathbf{v}\cdot \nabla \vartheta = 0.
\end{aligned}
\right.
\end{equation}
Here the unknowns $\mathbf{v} = (v_{1}, v_{2}),\ p$ and $\vartheta$ are the velocity, pressure and temperature of the fluid respectively. The positive constant $\nu$ is the viscosity of the fluid and $\mathbf{e}_{2} = (0,1)$ stands for the direction of buoyancy force.
During the last decades, system (\ref{intp1}) has attracted much interest in the research of mathematical fluid mechanics. The global existence, uniqueness and regularity of smooth solution to (\ref{intp1}) under different settings have been investigated by many authors, see \cite{HA1,DC1,RD2,CRD1,LH1,THSK1,TH1,WH1,NJ1,KW,ML1}, among others. In \cite{CRD1} by C. Doering et al., it is shown that the $L^{2}$ norms of the velocity and its first order derivatives converge to zero as time tends to infinity without any smallness restriction on the initial data. Recently, the authors in \cite{LT1} study the stability of hydrostatic equilibrium to (\ref{intp1}) in the periodic domain and establish decay rates of the velocity under an additional assumption on the solution. In the recent work \cite{RW2}, the author shows asymptotic behavior and explicit decay rates for solutions to the perturbed system (\ref{Pu4}) in the whole plane $\mathbb{R}^2$. We refer to \cite{LD1} for more detailed introduction and argument on this topic.
In \cite{LD1}, the underlying domain of fluid motion is assumed to be $\Omega=\mathbb{T}\times(0,1)$. Due to spatial confinement in the horizontal direction, in general the temperature can not decay to the stationary one. In the present work we consider the case $\Omega=\mathbb{R}\times (0,1)$. Under such a setting, we show that as time goes to infinity, the solution to (\ref{intp1}) converges to the specific stationary solution with explicit rates provided the initial data is a small perturbation of it. Especially, we obtain the decay rates both for high order derivatives of the velocity and temperature. Moreover, the convergence rates are sharp in the sense that they coincide with that of the linearized equations. More precisely, we assume that the fluid occupies the two dimensional infinite strip $\Omega=\mathbb{R}\times(0,1)$ and choose the specific stationary solution $(\vartheta_{s}, \mathbf{v}_s, p_{s})$ to (\ref{intp1}) in $\Omega$ as
\begin{equation}\label{steady0}
\vartheta_{s} = y,\ \mathbf{v}_s = 0,\ p_{s} = \frac{1}{2}y^2,\ y \in [0,1].
\end{equation}
System (\ref{intp1}) is supplemented with the initial conditions
\begin{equation}\label{intpi2}
\mathbf{v}(\mathbf{x},0) = \mathbf{v}_{0}(\mathbf{x}),\, \vartheta(\mathbf{x},0)=\vartheta_{0}(\mathbf{x})\text{ in }\Omega,
\end{equation}
together with the boundary conditions
\begin{equation}\label{intp2}
\left(\mathbf{v}\cdot \mathbf{n}\right) (\mathbf{x},t) =0,\ \left(\nabla\times\mathbf{v}\right)(\mathbf{x},t) = 0\text{ on }\partial\Omega,\,t>0,
\end{equation}
where $\mathbf{n}$ is the outward unit normal to $\partial\Omega=\mathbb{R}\times\{y=0,1\}$. By introducing the perturbation
\[
\theta = \vartheta - \vartheta_{s},\ \mathbf{u} = \mathbf{v}-\mathbf{v}_s,\,q= p-p_s,
\]
system (\ref{intp1}) is rewritten as
\begin{equation}\label{Pu4}
\left\{
\begin{aligned}
&\partial_{t}\mathbf{u}+\mathbf{u}\cdot \nabla \mathbf{u} -\nu\Delta\mathbf{u} + \nabla q = \theta \mathbf{e}_2, \\
&\nabla\cdot\mathbf{u} = 0,\\
&\partial_{t}\theta+\mathbf{u}\cdot \nabla \theta = -u_{2},
\end{aligned}
\right.
\end{equation}
together with the initial and boundary conditions
\begin{equation}\label{pu42}
\left\{
\begin{aligned}
& \mathbf{u}(\mathbf{x},0) = \mathbf{u}_{0}(\mathbf{x}),\, \theta(\mathbf{x},0)=\theta_{0}(\mathbf{x})\text{ in }\Omega,\\
&\left(\mathbf{u}\cdot \mathbf{n}\right)(\mathbf{x},t) =0,\ \left(\nabla\times\mathbf{u}\right)(\mathbf{x},t) = 0\text{ on }\partial\Omega,\,t>0.
\end{aligned}
\right.
\end{equation}
To simplify formulation, we introduce the vorticity $\omega =\partial_{1}u_{2} - \partial_{2}u_{1}$ and the stream function $\psi= (-\Delta)^{-1}\omega$, which solves
\begin{equation}
\label{P3}
\left\{
\begin{aligned}
& -\Delta\psi = \omega,\\
& \psi|_{\partial\Omega} = 0.
\end{aligned}
\right.
\end{equation}
Thus (\ref{Pu4}) is reformulated as the following system for $(\omega, \theta)$.
\begin{equation}\label{P4}
\left\{
\begin{aligned}
&\partial_{t}\omega-\nu\Delta\omega +\mathbf{u}\cdot \nabla \omega = \partial_{1}\theta, \\
&\partial_{t}\theta+\mathbf{u}\cdot \nabla \theta = -u_{2},\\
&\mathbf{u} = (\partial_{2}(-\Delta)^{-1}\omega, -\partial_{1}(-\Delta)^{-1}\omega).
\end{aligned}
\right.
\end{equation}
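For the reader's convenience, we recall how $(\ref{P4})_{1}$ is obtained: applying the scalar curl $\partial_{1}(\cdot)_{2}-\partial_{2}(\cdot)_{1}$ to $(\ref{Pu4})_{1}$ annihilates the pressure gradient, the divergence-free condition gives
\[
\partial_{1}\left(\mathbf{u}\cdot\nabla u_{2}\right)-\partial_{2}\left(\mathbf{u}\cdot\nabla u_{1}\right)
= \mathbf{u}\cdot\nabla\omega + \omega\,\nabla\cdot\mathbf{u}
= \mathbf{u}\cdot\nabla\omega,
\]
and the buoyancy term contributes $\partial_{1}\left(\theta\mathbf{e}_{2}\right)_{2} = \partial_{1}\theta$.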
The corresponding initial conditions for $(\omega, \theta)$ and boundary condition for $\omega$ are as follows.
\begin{equation}
\label{P5}
\left\{
\begin{aligned}
& (\omega, \theta)(\mathbf{x},0) = (\omega_{0}, \theta_{0})(\mathbf{x})\text{ in }\Omega,\\
& \omega(\mathbf{x},t) = 0\text{ on }\partial\Omega,\,t>0.
\end{aligned}
\right.
\end{equation}
We now state the main result of this paper.
\begin{Theorem}\label{Mrt1}
Let $m> 32$ be an integer. Assume that
\begin{eqnarray}
&\omega_{0} \in H^{m}\cap W^{5,1}, \ \partial_{2}^{n}\omega_{0} = 0 \text{ on } \partial\Omega,\text{ for } n = 0, 2, \cdots, 2[(m-1)/2],\label{P61}
\\
&\theta_{0}\in H^{m+1}\cap W^{8,1},\ \partial_{2}^{n}\theta_{0} = 0 \text{ on } \partial\Omega
\text{ for } n = 0, 2, \cdots, 2[m/2]. \label{P6}
\end{eqnarray}
There exists $\epsilon_{0}>0$ depending only on $\nu$ and $m$ such that if
\begin{equation}\label{P9}
\|\theta_{0}\|_{W^{8,1}} + \|\theta_{0}\|_{H^{m+1}} +\|\omega_{0}\|_{W^{5,1}} + \|\omega_{0}\|_{H^{m}}\leq \epsilon_{0},
\end{equation}
then there exists a unique global smooth solution to (\ref{P4})-(\ref{P5}) satisfying
\[
\|\theta(t)\|_{H^{4}}\lesssim \langle t\rangle^{-\frac{1}{4}},\
\|\omega(t)\|_{H^{2}}+\|\partial_{1}\theta(t)\|_{H^{2}}\lesssim \langle t\rangle^{-\frac{3}{4}},
\]
\[
\|\partial_{1}\omega(t)\|_{L^{2}} + \|\partial_{11}\theta(t)\|_{L^{2}}\lesssim \langle t\rangle^{-\frac{5}{4}},
\]
\[
\|\theta(t)\|_{L^{\infty}} + \|\partial_{2}\theta(t)\|_{L^{\infty}}\lesssim \langle t\rangle^{-\frac{1}{2}},\ \|\partial_{1}\theta(t)\|_{L^{\infty}} +\|\omega(t)\|_{L^{\infty}}\lesssim \langle t\rangle^{-1}.
\]
\end{Theorem}
\begin{Remark}
\emph{
It is well-known that if a stationary solution $\vartheta_{s}(y)$ satisfies $\vartheta'_{s}(y_0)<0$ for some $y_0\in [0,1]$, which implies that fluid with higher temperature lies below colder fluid, then it is unstable: the Rayleigh--Taylor instability occurs, see \cite{PB1, CRD1,PGD1}, among others.
}
\end{Remark}
\begin{Remark}
\emph{
As pointed out before, we not only show the asymptotic stability of the stationary solution specified in (\ref{steady0}), but also give the explicit decay rates for high order derivatives of temperature and velocity. Going back to the original initial-boundary value problem (\ref{intp1}), (\ref{intpi2}) and (\ref{intp2}), we have
\[
\|\vartheta(t)-y\|_{H^{4}}\lesssim \langle t\rangle^{-\frac{1}{4}},\,
\|\partial_{1}\left(\vartheta(t)-y\right)\|_{H^{2}} +\|v_{1}(t)\|_{H^{3}}\lesssim \langle t\rangle^{-\frac{3}{4}},\
\]
\[
\|\partial_{11}\left(\vartheta(t)-y\right)\|_{L^{2}} + \|v_{2}(t)\|_{H^{2}} + \|\partial_{1}v_{1}(t)\|_{H^{1}} \lesssim \langle t\rangle^{-\frac{5}{4}},
\]
\[
\|\vartheta(t)-y\|_{L^{\infty}} + \|\partial_{2}\left(\vartheta(t)-y\right)\|_{L^{\infty}}\lesssim \langle t\rangle^{-\frac{1}{2}},\
\|\partial_{1}\left(\vartheta(t)-y\right)\|_{L^{\infty}} + \|\mathbf{v}(t)\|_{W^{1,\infty}}\lesssim \langle t\rangle^{-1},
\]
for all $t>0$,
which are consistent with decay rates of the linearized equations (\ref{Eb1more})-(\ref{Eb2}).
}
\end{Remark}
We note that due to the absence of thermal conduction, the mechanism of stabilization in our setting arises from the action of buoyancy. From the linearized equations (\ref{Eb1more})-(\ref{Eb2}), $\theta$ satisfies
\[
\partial_{tt}\theta-\nu\Delta\partial_{t}\theta - \partial_{1}(-\Delta)^{-1}\partial_{1}\theta = 0,
\]
which exhibits dissipation in the horizontal direction. However, the dissipation is very weak: the time decay rate of $\|\partial_{1}\theta\|_{L^{\infty}}$ is at most $\langle t\rangle^{-1}$, which in turn implies that the Lipschitz norm of the velocity decays at best like $\langle t\rangle^{-1}$. Fortunately, this critical decay rate is enough to overcome the difficulties caused by the nonlinear terms in the analysis of system (\ref{P4}). Our proof is based on the construction of suitable energy functionals together with a detailed spectral analysis of the linear equation (\ref{Aa1}).
{\bf Notation.}
Throughout this paper, the bold character $\mathbf{x}$ represents the spatial variable $(x,y)\in \mathbb{R}\times (0,1)$. For $m\in \mathbb{N}$, $p\in [1,\infty]$ we denote the inhomogeneous Sobolev space with derivatives up to order $m$ in $L^{p}(\Omega)$ by $W^{m,p}(\Omega)$ equipped with the norm $\|f\|_{W^{m,p}(\Omega)} = \|f\|_{\dot{W}^{m,p}(\Omega)} + \|f\|_{L^p(\Omega)}$, where $\dot{W}^{m,p}(\Omega)$ is the homogeneous Sobolev space with all $m$-th order derivatives belonging to $L^{p}(\Omega)$. Especially, we denote $H^{m}(\Omega)=W^{m,2}(\Omega)$. Notation $\langle\cdot,\cdot\rangle$ is used as the inner product in $L^{2}(\Omega)$. Moreover, the frequency space $\mathbb{R}\times \mathbb{N}$ is denoted as $\widehat{\Omega}$ and $L^{p}(\widehat{\Omega})(1\leq p \leq \infty)$ is the space of all functions $g(\xi,k)$, $(\xi,k)\in \widehat{\Omega}$ with
$\|g\|_{L^{p}(\widehat{\Omega})}^{p} = \sum_{k\in \mathbb{N}}\int_{\mathbb{R}}|g(\xi,k)|^{p}{\rm d}\xi$. For simplicity, we omit the underlying domain by using $L^{p},\,W^{m,p},\,H^{m}$ and $\widehat{L}^{p}$ etc., to denote the corresponding spaces on $\Omega$ and $\widehat\Omega$ respectively. Finally, $A \lesssim B$ means $A\leq CB$ with a generic constant $C$ and $A\sim B$ means $A\lesssim B$ and $B\lesssim A$.
\section{Preliminaries}
In this section, we give some preliminary results that will be used later.
The following estimates on multiplication of two functions in Sobolev spaces and the interpolation inequality are well-known, see \cite{RA1,AJM2}, among others.
\begin{Lemma}\label{PL1}
Let $m \in \mathbb{N}$.
\begin{itemize}
\item If $f, g \in H^{m} \cap L^{\infty}$, then
\begin{equation}\label{BL11}
\|fg\|_{H^{m}}\lesssim \|f\|_{H^{m}}\|g\|_{L^{\infty}} + \|f\|_{L^{\infty}}\|g\|_{H^{m}}.
\end{equation}
\item If $f, g \in H^{m}$ with $m >1$, then
\begin{equation}\label{BL12}
\|fg\|_{H^{m}}\lesssim \|f\|_{H^{m}}\|g\|_{H^{m}}.
\end{equation}
\item Let $m_{1}\leq m \leq m_{2}$. If $f\in H^{m_{2}}$, then
\begin{equation}\label{BLT1}
\|f\|_{H^{m}}\lesssim \|f\|_{H^{m_{1}}}^{s}\|f\|_{H^{m_{2}}}^{1-s},\text{ with } m= s m_{1} +(1-s)m_{2},\,0\leq s\leq1.
\end{equation}
\item If $f, g \in H^{m}$, then
\begin{equation}\label{BL13}
\|fg\|_{W^{m,1}}\lesssim \|f\|_{H^{m}}\|g\|_{L^{2}} + \|f\|_{L^{2}}\|g\|_{H^{m}}.
\end{equation}
\item
If $f \in H^{m}\cap W^{1,\infty}$, $g \in H^{m-1} \cap L^{\infty}$, then for $m\ge1$ and any $|\alpha|\leq m$
\begin{equation}\label{BL14}
\|\partial^{\alpha}(fg) - f\partial^{\alpha}g\|_{L^{2}}\lesssim \|\nabla f\|_{L^{\infty}}\|g\|_{H^{m-1}} + \|f\|_{H^{m}}\|g\|_{L^{\infty}}.
\end{equation}
\end{itemize}
\end{Lemma}
For $t>0$, let $\langle t \rangle = \max\{1,t\}$. The following lemma can be verified by a direct calculation.
\begin{Lemma}\label{PL3}
Let $\beta, \gamma>0$ be two constants such that $\beta \leq 1+\gamma$. Then
\begin{equation}\label{BL22}
\int_{0}^{t}\frac{{\rm d} \tau}{\langle t-\tau\rangle^{\beta}\langle \tau\rangle^{1+\gamma}} \lesssim\langle t\rangle^{-\beta},\,\int_{0}^{t}e^{-( t-\tau)}\langle \tau\rangle^{-\gamma}{\rm d} \tau \lesssim\langle t\rangle^{-\gamma}.
\end{equation}
\end{Lemma}
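For the reader's convenience, we sketch the proof of the first estimate in (\ref{BL22}). Splitting the integral at $\tau=t/2$, on $[0,t/2]$ one has $\langle t-\tau\rangle\gtrsim \langle t\rangle$, so that
\[
\int_{0}^{t/2}\frac{{\rm d}\tau}{\langle t-\tau\rangle^{\beta}\langle \tau\rangle^{1+\gamma}}
\lesssim \langle t\rangle^{-\beta}\int_{0}^{\infty}\langle \tau\rangle^{-1-\gamma}{\rm d}\tau
\lesssim \langle t\rangle^{-\beta},
\]
while on $[t/2,t]$ one has $\langle \tau\rangle\gtrsim \langle t\rangle$ and
\[
\int_{t/2}^{t}\langle t-\tau\rangle^{-\beta}{\rm d}\tau = \int_{0}^{t/2}\langle s\rangle^{-\beta}{\rm d}s,
\]
which is $\lesssim \langle t\rangle^{1-\beta}$ for $\beta<1$, $\lesssim \log\langle t\rangle$ for $\beta=1$ and $\lesssim 1$ for $\beta>1$; in each case the assumption $\beta\leq 1+\gamma$ (together with $\gamma>0$ when $\beta\leq1$) bounds this part by $\langle t\rangle^{-\beta}$ as well.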
We now turn to the Fourier expansion of functions defined in $\Omega=\mathbb{R}\times (0,1)$ with vanishing Dirichlet/Neumann boundary value.
Following \cite{AD1,AD2}, we introduce the functional spaces for $m\in \mathbb{N}$, $p \in [1,\infty]$,
\begin{equation}\label{F1}
\mathfrak{D}^{m,p} := \{f\in W^{m,p}: \partial_{2}^{n}f|_{\partial\Omega} = 0,\,n = 0, 2, \cdots, 2[(m-1)/2]\},
\end{equation}
\begin{equation}\label{F2}
\mathfrak{N}^{m,p} := \{f\in W^{m,p}: \partial_{2}^{n}f|_{\partial\Omega} = 0,\,n = 1, 3, \cdots, 2[m/2]-1\},
\end{equation}
and $\mathfrak{D}^{m}=\mathfrak{D}^{m,2}$, $\mathfrak{N}^{m}=\mathfrak{N}^{m,2}$. Here $[m/2]=k$ if $m=2k$ or $2k+1$, $k=1,2,3,\cdots$. The Fourier expansion for a function $f\in \mathfrak{D}^m$ reads as
\begin{equation}\label{pr1}
f(x,y) = \frac{1}{\sqrt{\pi }}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}\widehat{f}_{o}(\xi,k)e^{i x \xi}{\rm d}\xi \sin k\pi y,
\end{equation}
\[
\widehat{f}_{o}(\xi,k) = \frac{1}{\sqrt{\pi }}\int_{\mathbb{R}}\int_{0}^{1}f(x,y)e^{- i x \xi}\sin k\pi y{\rm d} y {\rm d} x , \text{ for } (\xi, k)\in \widehat{\Omega},
\]
while for $f\in \mathfrak{N}^m$,
\begin{equation}\label{pr2}
f(x,y) = \frac{1}{\sqrt{\pi }}\sum_{k=0}^{+\infty}\int_{\mathbb{R}}\widehat{f}_{e}(\xi,k)e^{ i x\xi}{\rm d}\xi \cos k\pi y,
\end{equation}
\[
\widehat{f}_{e}(\xi,k)= \frac{1}{\sqrt{\pi }}\int_{\mathbb{R}}\int_{0}^{1}f(x,y)e^{- i x\xi}\cos k\pi y {\rm d} y {\rm d} x ,\ \text{ for } (\xi, k)\in \widehat{\Omega}.
\]
We note that $f\in \mathfrak{D}^m $ (or $\mathfrak{N}^{m}$) implies $\partial_2 f \in \mathfrak{N}^{m-1}$ (or $ \mathfrak{D}^{m-1}$) and
\[
\widehat{(\partial_2f)_e}(\xi,k) = -k \pi \widehat{f_o}(\xi,k)\,\left(\widehat{(\partial_2f)_o}(\xi,k) = k\pi \widehat{f_e}(\xi,k)\right).
\]
For notation convenience, we use $\widehat{f}$ to denote $\widehat{f}_{o}$ or $\widehat{f}_{e}$, which makes no confusion once $f \in \mathfrak{D}^m$ or $\mathfrak{N}^m$ is given.
Accordingly,
\begin{equation}\label{eLL2}
\|f\|_{H^m}\sim \|(1+|\cdot|^2)^{\frac{m}{2}}\widehat{f}(\cdot)\|_{\widehat{L}^2},\,f \in \mathfrak{D}^m \text{ or } \mathfrak{N}^m.
\end{equation}
Moreover,
\begin{equation}\label{eLL3}
\|f\|_{L^\infty}\lesssim \|\widehat{f}\|_{\widehat{L}^1},\,\|\widehat{f}\|_{\widehat{L}^\infty}\lesssim \|{f}\|_{{L}^1},\,\|\widehat{f}\|_{\widehat{L}^1} \lesssim \|f\|_{H^2}\lesssim \|f\|_{W^{m,1}},\ m \ge 3.
\end{equation}
\begin{equation}\label{eLL4}
\|(1+|\cdot|^2)^{\frac{m}{2}}\widehat{f}\|_{\widehat{L}^\infty}\lesssim \|{f}\|_{{W}^{m,1}},\, f \in \mathfrak{D}^{m,1} \text{ or } \mathfrak{N}^{m,1}.
\end{equation}
Note that we use $\|\widehat{f}(\cdot)\|_{\widehat{H}^m}$ to denote $\|(1+|\cdot|^2)^{\frac{m}{2}}\widehat{f}(\cdot)\|_{\widehat{L}^2}$. In the following context, these facts will be used frequently without being referred.
Finally, we consider the following Dirichlet problem for the Laplace equation in $\Omega$.
\begin{equation}
\label{Poise}
\left\{
\begin{aligned}
& -\Delta\varphi= f, \\
& \varphi|_{\partial\Omega} = 0.
\end{aligned}
\right.
\end{equation}
For $f\in L^2$, this boundary value problem is uniquely solvable in $H_0^1\cap H^2$. We denote this unique solution $ \varphi = (-\Delta)^{-1} f$.
\begin{Lemma}\label{Pois1}
Assume that $f \in H^{m}$ for $m \ge 0$.
Then
\begin{equation}\label{eLL1}
\|\varphi\|_{H^{m+2}}\lesssim \|f\|_{H^{m}}.
\end{equation}
\end{Lemma}
\begin{Remark}
\emph{
Inequality (\ref{eLL1}) is nothing but the standard elliptic estimate. For $f\in \mathfrak{D}^m$, (\ref{eLL1}) immediately follows from (\ref{eLL2}) and
\[
\widehat{\varphi}_o(\xi,k) = \frac{1}{\xi^{2}+\pi^{2}k^{2}}\widehat{f}_o(\xi,k),\, \xi\in \mathbb{R},\, k \ge 1.
\]
}
\end{Remark}
Using the stream function $\psi$ in (\ref{P3}) together with Lemma \ref{Pois1}, we have
\begin{Corollary}\label{corollary1}
Assume that $\mathbf{u}\in H^1$ satisfies $\nabla\cdot \mathbf{u} =0$ in $\Omega$ and $\mathbf{u}\cdot \mathbf{n} = 0$ on $\partial\Omega$. If
$\omega=\partial_1 u_2 -\partial_2 u_1 \in H^{m}$, $m \in \mathbb{N}$, then $\mathbf{u}\in H^{m+1}$ and
\begin{equation}\label{FC1}
\|\mathbf{u}\|_{H^{m+1}}\lesssim \|\omega\|_{H^{m}}.
\end{equation}
\end{Corollary}
\section{Decay estimates for solutions to the linearized equations}
The linearized equations of (\ref{P4}) are
\begin{equation}\label{Eb1more}
\left\{
\begin{aligned}
&\partial_{t}w-\nu\Delta w = \partial_{1}\phi, \\
&\partial_{t}\phi + u_{2} = 0, \\
& u_{2} = -\partial_{1}(-\Delta)^{-1} w,
\end{aligned}
\right.
\end{equation}
together with the initial and boundary conditions
\begin{equation}
\label{Eb2}
\left\{
\begin{aligned}
& (w, \phi)(\mathbf{x},0) = (w_{0}, \phi_{0})(\mathbf{x})\text{ in }\Omega, \\
& w(\mathbf{x},t) = 0 \text{ on }\partial\Omega,\,t>0.
\end{aligned}
\right.
\end{equation}
In this section we give explicit decay estimates of the solutions to (\ref{Eb1more})-(\ref{Eb2}).
\subsection{Decay rates for solutions to the linearized equation of temperature}
We decouple system $(\ref{Eb1more})$ by taking the time derivative of $(\ref{Eb1more})_{2}$ and using the remaining equations of $(\ref{Eb1more})$ to find
\begin{equation}\label{Eb1}
\partial_{tt}\phi-\nu\Delta\partial_{t}\phi - \partial_{1}(-\Delta)^{-1}\partial_{1}\phi = 0.
\end{equation}
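For the reader's convenience, we spell out this computation. Equations $(\ref{Eb1more})_{2}$-$(\ref{Eb1more})_{3}$ give $\partial_{t}\phi = -u_{2} = \partial_{1}(-\Delta)^{-1}w$. Differentiating in time and using $(\ref{Eb1more})_{1}$ together with the boundary condition $w|_{\partial\Omega}=0$,
\[
\partial_{tt}\phi = \partial_{1}(-\Delta)^{-1}\left(\nu\Delta w + \partial_{1}\phi\right)
= \nu\Delta\,\partial_{1}(-\Delta)^{-1}w + \partial_{1}(-\Delta)^{-1}\partial_{1}\phi
= \nu\Delta\partial_{t}\phi + \partial_{1}(-\Delta)^{-1}\partial_{1}\phi,
\]
which is exactly (\ref{Eb1}).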
We consider the following inhomogeneous equation in $\Omega$,
\begin{equation}\label{Aa1}
\partial_{tt}\phi-\nu\Delta\partial_{t}\phi - \partial_{1}(-\Delta)^{-1}\partial_{1}\phi =F
\end{equation}
with the initial and boundary conditions
\begin{equation}\label{Aam1}
\left\{
\begin{aligned}
& \phi(\mathbf{x},0)=\phi_{0}(\mathbf{x}), \partial_{t}\phi(\mathbf{x},0)=\phi_{1}(\mathbf{x})\text{ in }\Omega,\\
& \phi(\mathbf{x},t)=0 \text{ on }\partial\Omega,\,t>0.
\end{aligned}
\right.
\end{equation}
To solve the above initial-boundary value problem, for $t>0$ we define the operators $\mathcal{L}_{1}(t)$ and $\mathcal{L}_{2}(t)$ as follows.
\begin{equation}\label{Aa2}
\widehat{\mathcal{L}_{1}(t)f}(\xi,k) = \frac{1}{2}\left(e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})+\sigma\right)t}+e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2} k^{2})-\sigma\right)t}\right)\widehat{f}(\xi,k),
\end{equation}
and
\begin{equation}\label{Aa3}
\widehat{\mathcal{L}_{2}(t)f}(\xi,k) = \frac{1}{\sigma}\left(e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})-\sigma\right)t}-e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})+\sigma\right)t}\right)\widehat{f}(\xi,k).
\end{equation}
Here $\sigma= \sqrt{\nu^{2}(\xi^{2}+ \pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}}$ and $(\xi,k)\in\widehat{\Omega}$.
\begin{Lemma}\label{AL1}
Assume that $\phi_0\in H^2\cap H_0^1$, $\phi_1\in L^2$ and $F\in L_{loc}^1(0, \infty; L^2)$. Then the solution to (\ref{Aa1})-(\ref{Aam1}) is given by
\[
\phi(x,y,t)= \mathcal{L}_{1}(t)\phi_{0}(x,y) + \mathcal{L}_{2}(t)\left(\frac{\nu}{2}(-\Delta)\phi_{0}(x,y) +\phi_{1}(x,y)\right)
\]
\begin{equation}\label{sol}
+ \int_{0}^{t}\mathcal{L}_{2}(t-\tau)F(x,y,\tau){\rm d} \tau.
\end{equation}
\end{Lemma}
{\bf Proof.}
Applying the Fourier transform to (\ref{Aa1}) gives
\begin{equation}\label{Aa4}
\partial_{tt}\widehat{\phi}(\xi,k,t)+ \nu\left(\xi^{2}+ \pi^{2}k^{2}\right)\partial_{t}\widehat{\phi}(\xi,k,t)+\frac{\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}\widehat{\phi}(\xi,k,t)=\widehat{F}(\xi,k,t), \, (\xi,k)\in\widehat{\Omega}.
\end{equation}
Note that
\[
\left(\partial_{tt} + \nu\left(\xi^{2}+ \pi^{2}k^{2}\right)\partial_{t} +\frac{\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}\right)\widehat{\phi}(\xi,k,t)
\]
\[
= \left[ \left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}\right)^{2}-\frac{\nu^{2}(\xi^{2}+ \pi^{2}k^{2})^{2}}{4}+\frac{\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}\right]\widehat{\phi}(\xi,k,t)
\]
\[
=\left(\partial_{t}+\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}+\frac{1}{2}\sqrt{\nu^{2}(\xi^{2}+ \pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}}\right)
\]
\[
\cdot\left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}-\frac{1}{2}\sqrt{\nu^{2}(\xi^{2}+ \pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}}\right)\widehat{\phi}(\xi,k,t).
\]
Let
\begin{equation}\label{EA+}
\psi_{+}(\xi,k,t)=\left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}+\frac{\sigma}{2}\right)\widehat{\phi}(\xi,k,t)
\end{equation}
and
\begin{equation}\label{EA++}
\psi_{-}(\xi,k,t)=\left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}-\frac{\sigma}{2}\right)\widehat{\phi}(\xi,k,t).
\end{equation}
Then
\begin{equation}\label{2+1}
\left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}-\frac{\sigma}{2}\right)\psi_{+}=\widehat{F},\
\left(\partial_{t}+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}+\frac{\sigma}{2}\right)\psi_{-}=\widehat{F}.
\end{equation}
By Duhamel's principle,
\begin{equation}\label{1+3}
\psi_{+}(\xi,k,t) = e^{\left(-\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}+\frac{\sigma}{2}\right)t}\psi_{+}(\xi,k,0)
+\int_{0}^{t}e^{\left(-\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}+\frac{\sigma}{2}\right)(t-\tau)}\widehat{F}(\xi,k,\tau){\rm d} \tau,
\end{equation}
\begin{equation}\label{1+2}
\psi_{-}(\xi,k,t) = e^{\left(-\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}-\frac{\sigma}{2}\right)t}\psi_{-}( \xi,k,0)
+\int_{0}^{t}e^{\left(-\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}-\frac{\sigma}{2}\right)(t-\tau)}\widehat{F}(\xi,k,\tau){\rm d} \tau.
\end{equation}
Moreover, from (\ref{EA+}) and (\ref{EA++}), it follows that
\begin{equation}\label{1+5}
\psi_{+}(\xi,k,0)=\widehat{\phi}_{1}(\xi, k)+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}\widehat{\phi}_{0}(\xi, k)+\frac{\sigma}{2}\widehat{\phi}_{0}(\xi, k),
\end{equation}
\begin{equation}\label{1+4}
\psi_{-}(\xi,k,0)=\widehat{\phi}_{1}(\xi, k)+\frac{\nu(\xi^{2}+ \pi^{2}k^{2})}{2}\widehat{\phi}_{0}(\xi, k)-\frac{\sigma}{2}\widehat{\phi}_{0}(\xi, k).
\end{equation}
Substituting (\ref{1+5}) and (\ref{1+4}) into (\ref{1+3}) and (\ref{1+2}) respectively, together with the fact that $\widehat{\phi} = \frac{\psi_{+}-\psi_{-}}{\sigma}$,
\[
\widehat{\phi} = \frac{1}{2}\left(e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})+\sigma\right)t}+e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})-\sigma\right)t}\right)\widehat{\phi}_{0}
\]
\[
+ \frac{1}{\sigma}\left(e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})-\sigma\right)t}-e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})+\sigma\right)t}\right)\left(\widehat{\phi}_{1}+\frac{\nu}{2}(\xi^{2}+ \pi^{2}k^{2})\widehat{\phi}_{0}\right)
\]
\[ +\int_{0}^{t}\frac{1}{\sigma}\left(e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})-\sigma\right)(t-\tau)}-e^{-\frac{1}{2}\left(\nu(\xi^{2}+ \pi^{2}k^{2})+\sigma\right)(t-\tau)}\right)\widehat{F}(\tau){\rm d} \tau,
\]
which is nothing but (\ref{sol}).
\hfill$\square$
The next lemma is inspired by \cite{TE1}.
\begin{Lemma}\label{AL0}
Let $g\in \mathfrak{D}^{6,1}$ and
\[
\widehat{G}(\xi,k,t) = e^{-\frac{\xi^{2}}{\left(\xi^{2}+\pi^{2}k^{2}\right)^{2}}t}\widehat{g}(\xi,k), \,(\xi,k) \in \widehat{\Omega}.
\]
Then
\begin{equation}\label{P10}
\|\widehat{G}(t)\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-\frac{1}{2}}\|g\|_{W^{4,1}},\ \| \widehat{\partial_1G}(t)\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-1}\|g\|_{W^{6,1}},
\end{equation}
\begin{equation}\label{P101}
\|\widehat{G}(t)\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{1}{4}}\|g\|_{W^{2,1}},\ \| \widehat{\partial_1G}(t)\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{3}{4}}\|g\|_{W^{4,1}}.
\end{equation}
\end{Lemma}
{\bf Proof.}
A straightforward calculation shows that
\[
\|\widehat{G}(t)\|_{\widehat{L}^{1}}
\lesssim \sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}|\widehat{g}(\xi,k)|{\rm d}\xi
\]
\[
\lesssim \|\left(\xi^{2}+\pi^{2}k^{2}\right)^{2}\widehat{g}(\xi,k)\|_{\widehat{L}^{\infty}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}\left(\xi^{2}+\pi^{2} k^{2}\right)^{-2}{\rm d}\xi
\]
\[
\lesssim \int_{\pi}^{+\infty} \int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+\eta^{2}\right)^{2}}t}\left(\xi^{2}+\eta^{2}\right)^{-2}{\rm d}\xi {\rm d}\eta\|g\|_{W^{4,1}}.
\]
Using polar coordinates,
\[
\int_{\pi}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+\eta^{2}\right)^{2}}t}\left(\xi^{2}+\eta^{2}\right)^{-2}{\rm d}\xi {\rm d}\eta
\lesssim \int_{\pi}^{+\infty}\int_{0}^{2\pi}e^{-\frac{\cos^{2}\beta}{r^{2}}t}r^{-3}{\rm d}\beta {\rm d}r.
\]
We decompose the last integral into
\[
\sum_{j=1}^{8}\int_{\pi}^{+\infty}\int_{\frac{(j-1)\pi}{4}}^{\frac{j \pi}{4}}e^{-\frac{\cos^{2}\beta}{r^{2}}t}r^{-3}{\rm d}\beta {\rm d}r = \sum_{j=1}^{8} M_j.
\]
By symmetry, it suffices to consider, say,
\[
M_{2}=\int_{\pi}^{+\infty}\int_{\frac{\pi}{4}}^{\frac{\pi}{2}}e^{-\frac{\cos^{2}\beta}{r^{2}}t}r^{-3}{\rm d}\beta {\rm d}r,\,
M_{4}=\int_{\pi}^{+\infty}\int_{\frac{3\pi}{4}}^{\pi}e^{-\frac{\cos^{2}\beta}{r^{2}}t}r^{-3}{\rm d}\beta {\rm d}r.
\]
It is clear that $M_2 \lesssim 1$ and $M_4 \lesssim 1$. Moreover, by the change of variables $z= \frac{\sqrt{t}}{r}\cos \beta $, noting that $0\leq \cos\beta \leq \frac{\sqrt{2}}{2}$ for $\beta\in\left[\frac{\pi}{4},\frac{\pi}{2}\right]$, so that $1-\frac{r^{2}z^{2}}{t}\geq \frac{1}{2}$ on the domain of integration,
\[
M_{2}\lesssim t^{-\frac{1}{2}}\int_{\pi}^{+\infty}\int_{0}^{\frac{\sqrt{2t}}{2r}}e^{-z^{2}}r^{-2}\frac{1}{\sqrt{1-\frac{r^{2}z^{2}}{t}}}{\rm d}z{\rm d}r
\]
\[
\lesssim t^{-\frac{1}{2}}\int_{\pi}^{+\infty}\frac{1}{r^{2}}{\rm d}r\int_{-\infty}^{+\infty}e^{-z^{2}}{\rm d}z
\lesssim t^{-\frac{1}{2}}.
\]
Hence,
\[
M_2 \lesssim \langle t\rangle^{-\frac{1}{2}}.
\]
For $M_4$, first note that
\[
\frac{\sqrt{2}}{2}\leq |\cos \beta| \leq 1 \text{ for all }\beta \in \left[\frac{3\pi}{4},\pi\right].
\]
Then
\[
M_{4}
\lesssim \int_{\pi}^{+\infty}\int_{\frac{3\pi}{4}}^{\pi}e^{-\frac{t}{2r^{2}}}r^{-3}{\rm d}\beta {\rm d}r
= \int_{\pi}^{+\infty}\int_{\frac{3\pi}{4}}^{\pi}e^{-\frac{t}{2r^{2}}}\frac{t^{\frac{1}{2}}}{r}\frac{r}{t^{\frac{1}{2}}}r^{-3}{\rm d}\beta {\rm d}r
\]
\[
\lesssim t^{-\frac{1}{2}}\int_{\pi}^{+\infty}\frac{1}{r^{2}}{\rm d}r
\lesssim t^{-\frac{1}{2}}.
\]
Thus, the first estimate in (\ref{P10}) is obtained.
In a similar way we have
\[
\|\widehat{\partial_1G}(t)\|_{\widehat{L}^{1}}
\lesssim \sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}|\xi\widehat{g}(\xi,k)|{\rm d}\xi
\]
\[
\lesssim t^{-\frac{1}{2}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}\frac{|\xi|}{\xi^{2}+ \pi^{2}k^{2}}t^{\frac{1}{2}}\left(\xi^{2}+ \pi^{2}k^{2}\right)|\widehat{g}(\xi,k)|{\rm d}\xi
\]
\[
\lesssim t^{-\frac{1}{2}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{2\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}\left(\xi^{2}+ \pi^{2}k^{2}\right)|\widehat{g}(\xi,k)|{\rm d}\xi
\lesssim \langle t \rangle^{-1}\|g\|_{W^{6,1}}.
\]
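The absorption of the factor $\frac{|\xi|\sqrt{t}}{\xi^{2}+\pi^{2}k^{2}}$ above relies on the elementary bound
\[
xe^{-x^{2}}\leq e^{-\frac{x^{2}}{2}}\ \text{ for } x\geq 0,\quad \text{applied with } x = \frac{|\xi|\sqrt{t}}{\xi^{2}+\pi^{2}k^{2}},
\]
which holds since $e^{\frac{x^{2}}{2}}\geq 1+\frac{x^{2}}{2}\geq x$.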
Thus (\ref{P10}) has been verified. Moreover,
\[
\|\widehat{G}(t)\|_{\widehat{L}^{2}}^{2}
\lesssim \sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{2\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}|\widehat{g}(\xi,k)|^{2}{\rm d}\xi
\]
\[
\lesssim \|\left(\xi^{2}+\pi^{2}k^{2}\right)\widehat{g}(\xi,k)\|_{\widehat{L}^{\infty}}^{2}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}e^{-\frac{2\xi^{2}}{\left(\xi^{2}+ \pi^{2}k^{2}\right)^{2}}t}\left(\xi^{2}+\pi^{2} k^{2}\right)^{-2}{\rm d}\xi
\]
\[
\lesssim \int_{\pi}^{+\infty} \int_{\mathbb{R}}e^{-\frac{2\xi^{2}}{\left(\xi^{2}+\eta^{2}\right)^{2}}t}\left(\xi^{2}+\eta^{2}\right)^{-2}{\rm d}\xi {\rm d}\eta\|g\|_{W^{2,1}}^{2}
\lesssim \langle t \rangle^{-\frac{1}{2}}\|g\|_{W^{2,1}}^{2}.
\]
Hence the first estimate in (\ref{P101}) has been proved. The proof of $(\ref{P101})_2$ is similar.
\hfill$\square$
\begin{Remark}\label{Rema1}
\emph{
In the proof of Lemma {\ref{AL0}}, we in fact use $\|g\|_{W^{m,1}}$ to control
\[
\|(1+|\cdot|^2)^{\frac{m}{2}}\widehat{g}\|_{\widehat{L}^{\infty}}
\]
according to (\ref{eLL4}).
}
\end{Remark}
We proceed to give decay estimates of $\mathcal{L}_{1}(t)$, $\mathcal{L}_{2}(t)$.
\begin{Lemma}\label{AL2}
Let $f\in \mathfrak{D}^{6,1}$. Then for any fixed $\nu >0$,
\begin{equation}\label{All1}
\left\|\widehat{\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}} + \left\|\widehat{\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-\frac{1}{2}}\|f\|_{W^{4,1}},
\end{equation}
\begin{equation}\label{All1+}
\left\|\widehat{\partial_2\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}} + \left\|\widehat{\partial_2\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-\frac{1}{2}}\|f\|_{W^{5,1}},
\end{equation}
\begin{equation}\label{All2}
\left\|\widehat{\partial_{t}\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}} + \left\|\widehat{\partial_{t}\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-\frac{3}{2}}\|f\|_{W^{5,1}},
\end{equation}
\begin{equation}\label{All3}
\left\|\widehat{\partial_1\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}} + \left\|\widehat{\partial_1\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}\lesssim \langle t \rangle^{-1}\|f\|_{W^{6,1}},
\end{equation}
\begin{equation}\label{All4}
\left\|\widehat{\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{2}} + \left\|\widehat{\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{1}{4}}\|f\|_{W^{2,1}},
\end{equation}
\begin{equation}\label{All4+}
\left\|\widehat{\partial_{2}\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{2}} + \left\|\widehat{\partial_{2}\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{1}{4}}\|f\|_{W^{3,1}},
\end{equation}
\begin{equation}\label{All5}
\left\|\widehat{\partial_{t}\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{2}} + \left\|\widehat{\partial_{t}\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{5}{4}}\|f\|_{W^{3,1}},
\end{equation}
\begin{equation}\label{All7}
\left\|\widehat{\partial_1\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{2}} + \left\|\widehat{\partial_1\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{3}{4}}\|f\|_{W^{4,1}},
\end{equation}
\begin{equation}\label{All6}
\left\|\widehat{\partial_{11}\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{2}} + \left\|\widehat{\partial_{11}\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{2}}\lesssim \langle t \rangle^{-\frac{5}{4}}\|f\|_{W^{6,1}}.
\end{equation}
\end{Lemma}
{\bf Proof.}
Let
\[
\lambda_{+}= \frac{-\nu(\xi^{2}+\pi^{2}k^{2})+\sigma}{2},~\lambda_{-}= \frac{-\nu(\xi^{2}+\pi^{2}k^{2})-\sigma}{2}.
\]
A straightforward calculation shows that
\[
\widehat{\mathcal{L}_{1}(t)} = \frac{1}{2}\left(e^{\lambda_{+}t}+e^{\lambda_{-}t}\right),\
\widehat{\mathcal{L}_{2}(t)}= \frac{1}{\lambda_{+}-\lambda_{-}}\left(e^{\lambda_{+}t}-e^{\lambda_{-}t}\right),
\]
\[
\widehat{\partial_{t}\mathcal{L}_{1}(t)} = \frac{1}{2}\left(\lambda_{+}e^{\lambda_{+}t}+\lambda_{-}e^{\lambda_{-}t}\right),\
\widehat{\partial_{t}\mathcal{L}_{2}(t)}= \frac{1}{\lambda_{+}-\lambda_{-}}\left(\lambda_{+}e^{\lambda_{+}t}-\lambda_{-}e^{\lambda_{-}t}\right),
\]
where $\lambda_{+}-\lambda_{-} = \sigma$.
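In other words, $\lambda_{\pm}$ are the two roots of the quadratic equation
\[
\lambda^{2} + \nu(\xi^{2}+\pi^{2}k^{2})\lambda + \frac{\xi^{2}}{\xi^{2}+\pi^{2}k^{2}} = 0,
\]
so that $\lambda_{+}+\lambda_{-} = -\nu(\xi^{2}+\pi^{2}k^{2})$ and $\lambda_{+}\lambda_{-} = \frac{\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}$.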
Note that
\[
\nu_{*}^{2} :=\sup_{\xi \in \mathbb{R},k\geq 1}\frac{4\xi^{2}}{(\xi^{2}+\pi^{2}k^{2})^{3}} \in (0,1).
\]
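Indeed, for fixed $\xi$ the quantity $\frac{4\xi^{2}}{(\xi^{2}+\pi^{2}k^{2})^{3}}$ is decreasing in $k$, and maximizing over $\xi$ at $k=1$ gives the critical point $\xi^{2}=\frac{\pi^{2}}{2}$, so that
\[
\nu_{*}^{2} = \frac{4\cdot \frac{\pi^{2}}{2}}{\left(\frac{\pi^{2}}{2}+\pi^{2}\right)^{3}} = \frac{16}{27\pi^{4}}\approx 0.006 \in (0,1).
\]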
We discuss decay estimates of $\mathcal{L}_{1}(t)$, $\mathcal{L}_{2}(t)$ in two cases: $\nu > \nu_{*}$ and $0<\nu \leq\nu_{*}$.
First, for $\nu > \nu_{*}$,
\[
\lambda_{+} = -\frac{1}{2}\left(\nu(\xi^{2}+\pi^{2}k^{2})-\sigma\right) = -\frac{1}{2}\left(\nu(\xi^{2}+\pi^{2}k^{2})-\sqrt{\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}}\right)
\]
\[
= -\frac{2\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\frac{1}{1+\sqrt{1-\frac{4\xi^{2}}{\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{3}}}}.
\]
Hence
\[
-\frac{2\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\leq \lambda_{+} \leq -\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}},\
-\nu(\xi^{2}+\pi^{2}k^{2})\leq\lambda_{-} \leq -\frac{\nu}{2}(\xi^{2}+\pi^{2}k^{2}).
\]
Moreover, since $\sigma^{2} = (\xi^{2}+\pi^{2}k^{2})^{2}\left(\nu^{2}-\frac{4\xi^{2}}{(\xi^{2}+\pi^{2}k^{2})^{3}}\right)\geq (\nu^{2}-\nu_{*}^{2})\pi^{4}$, the constant $\lambda_{0}:=\pi^{2}\sqrt{\nu^{2}-\nu_{*}^{2}}>0$ satisfies
\[
\lambda_{0}\leq\lambda_{+} -\lambda_{-} \leq \nu(\xi^{2}+\pi^{2}k^{2}).
\]
\]
For $k\geq 1$,
\[
0<\widehat{\mathcal{L}_{1}(t)}\lesssim e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t},
\]
\[
0\leq \widehat{\mathcal{L}_{2}(t)}\lesssim \frac{1}{\lambda_{0}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t} + \frac{1}{\lambda_{0}}e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}
\lesssim e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t},
\]
\[
\left|\widehat{\partial_{t}\mathcal{L}_{1}(t)}\right|\lesssim
\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}
+\nu (\xi^{2}+\pi^{2}k^{2})e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t},
\]
\[
\left|\widehat{\partial_{t}\mathcal{L}_{2}(t)}\right|\lesssim
\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}
+\nu(\xi^{2}+\pi^{2}k^{2})e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}.
\]
We use (\ref{eLL3}) and Lemma \ref{AL0} to obtain
\[
\left\|\widehat{\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}}
+\left\|\widehat{\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}
\]
\begin{equation}\label{tg1}
\lesssim \sum_{k=1}^{\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}|\widehat{f}(\xi,k)|{\rm d}\xi
\lesssim \langle t \rangle^{-\frac{1}{2}}\|f\|_{W^{4,1}},
\end{equation}
\[
\left\|\widehat{\partial_2\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}}
+\left\|\widehat{\partial_2\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}
\]
\begin{equation}\label{tg2}
\lesssim \sum_{k=1}^{\infty}\int_{\mathbb{R}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}|k\widehat{f}(\xi,k)|{\rm d}\xi
\lesssim \langle t \rangle^{-\frac{1}{2}}\|f\|_{W^{5,1}},
\end{equation}
\[
\left\|\widehat{\partial_{t}\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}}+\left\|\widehat{\partial_{t}\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}
\]
\[
\lesssim \sum_{k=1}^{\infty}\int_{\mathbb{R}}\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^2k^{2})^{2}}t}|\widehat{f}(\xi,k)|{\rm d}\xi
\]
\[
+ \sum_{k=1}^{\infty}\int_{\mathbb{R}}\nu(\xi^{2}+\pi^{2}k^{2})e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}|\widehat{f}(\xi,k)|{\rm d}\xi
\]
\begin{equation}\label{tg4}
\lesssim \langle t \rangle^{-\frac{3}{2}}\|f\|_{W^{4,1}} + e^{-\frac{\nu}{2}t}\|(\xi^{2}+\pi^{2}k^{2})\widehat{f}\|_{\widehat{L}^{1}}
\lesssim \langle t \rangle^{-\frac{3}{2}}\|f\|_{W^{5,1}},
\end{equation}
\[
\left\|\widehat{\partial_1\mathcal{L}_{1}(t)f}\right\|_{\widehat{L}^{1}}
+\left\|\widehat{\partial_1\mathcal{L}_{2}(t)f}\right\|_{\widehat{L}^{1}}
\]
\begin{equation}\label{tg5}
\lesssim \sum_{k=1}^{\infty}\int_{\mathbb{R}}|\xi|e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}|\widehat{f}(\xi,k)|{\rm d}\xi
\lesssim \langle t \rangle^{-1}\|f\|_{W^{6,1}}.
\end{equation}
Other estimates in (\ref{All4})-(\ref{All6}) can be obtained in a similar way.
Next, for $0<\nu \leq\nu_{*}$,
$
\sigma= \sqrt{\nu^2(\xi^{2}+ \pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+ \pi^{2}k^{2}}}
$
is not necessarily a real number for all $(\xi,k)\in \widehat{\Omega}$. We decompose the frequency space $\widehat{\Omega}$ into four parts as follows.
\[
I_{1}=\left\{(\xi,k) \in \mathbb{R}\times \mathbb{N}^{+}:\xi^2< \frac{\nu^{2}}{16}(\xi^{2}+ \pi^{2}k^{2})^{3}\right\},
\]
\[
I_{2}=\left\{(\xi,k) \in \mathbb{R}\times \mathbb{N}^{+}:\frac{\nu^{2}}{16}(\xi^{2}+ \pi^{2}k^{2})^{3}\leq \xi^{2} < \frac{\nu^{2}}{4}(\xi^{2}+\pi^{2}k^{2})^{3}\right\},
\]
\[
I_{3}=\left\{(\xi,k) \in \mathbb{R}\times \mathbb{N}^{+}:\frac{\nu^{2}}{4}(\xi^{2}+\pi^{2}k^{2})^{3}\leq \xi^{2} < 4\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{3}\right\},
\]
\[
I_{4}=\left\{(\xi,k) \in \mathbb{R}\times \mathbb{N}^{+}:\xi^{2}\geq 4\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{3}\right\}.
\]
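Note that $\widehat{\Omega}=I_{1}\cup I_{2}\cup I_{3}\cup I_{4}$, and that
\[
\sigma^{2}=\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}-\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}\geq 0 \iff \xi^{2}\leq \frac{\nu^{2}}{4}(\xi^{2}+\pi^{2}k^{2})^{3},
\]
so that $\sigma$ is real on $I_{1}\cup I_{2}$ and purely imaginary on $I_{3}\cup I_{4}$.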
For $(\xi,k)\in I_{1}$, note that
\[
\lambda_{+} = -\frac{2\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\frac{1}{1+\sqrt{1-\frac{4\xi^{2}}{\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{3}}}}.
\]
Thus,
\[
-\frac{2\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\leq \lambda_{+} \leq -\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}},\,
-\nu(\xi^{2}+\pi^{2}k^{2})\leq\lambda_{-} \leq -\frac{\nu}{2}(\xi^{2}+\pi^{2}k^{2}),
\]
\[
\frac{\nu}{2}(\xi^{2}+\pi^{2}k^{2})<\lambda_{+} -\lambda_{-} < \nu(\xi^{2}+\pi^{2}k^{2}).
\]
Since $k\geq 1$,
\begin{equation}\label{I11}
0<\widehat{\mathcal{L}_{1}(t)}\lesssim e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t},\,
0\leq\widehat{\mathcal{L}_{2}(t)}\lesssim e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t},
\end{equation}
\begin{equation}\label{I12}
\left|\widehat{\partial_{t}\mathcal{L}_{1}(t)}\right|\lesssim
\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}
+\nu(\xi^{2}+\pi^2k^{2})e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t},
\end{equation}
\begin{equation}\label{I13}
\left|\widehat{\partial_{t}\mathcal{L}_{2}(t)}\right|\lesssim
\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}e^{-\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}t}
+\nu(\xi^{2}+\pi^2k^{2})e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}.
\end{equation}
For $(\xi,k) \in I_{2}$,
\[
-\frac{2\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\leq \lambda_{+} \leq -\frac{\xi^{2}}{\nu(\xi^{2}+\pi^{2}k^{2})^{2}}\leq -\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{16},
\]
\[
-\nu(\xi^{2}+\pi^{2}k^{2})\leq\lambda_{-} \leq -\frac{\nu}{2}(\xi^{2}+\pi^{2}k^{2}),\,
0<\lambda_{+} -\lambda_{-} <\nu(\xi^{2}+\pi^{2}k^{2}).
\]
After a straightforward calculation, we obtain that for $k\geq1$
\begin{equation}\label{I21}
0<\widehat{\mathcal{L}_{1}(t)}\lesssim e^{-\frac{\nu(\xi^2+\pi^{2}k^2)}{16}t},\,\,
\left|\widehat{\partial_{t}\mathcal{L}_{1}(t)}\right|\lesssim
e^{-\frac{\nu(\xi^2+\pi^2k^2)}{32}t},
\end{equation}
\begin{equation}\label{99i}
0\leq\widehat{\mathcal{L}_{2}(t)}\lesssim e^{\lambda_{+}t}t\lesssim e^{-\frac{\nu(\xi^2+\pi^{2}k^2)}{32}t},
\end{equation}
\begin{equation}\label{I22}
\left|\widehat{\partial_{t}\mathcal{L}_{2}(t)}\right|\lesssim \left |e^{\lambda_{+}t}\frac{\lambda_{+}-\lambda_{-}}{\lambda_{+}-\lambda_{-}} \right|+ \left|\lambda_{-}\frac{e^{\lambda_{+}t}-e^{\lambda_{-}t}}{\lambda_{+}-\lambda_{-}}\right|
\lesssim e^{\lambda_{+}t}+|\lambda_{-}|e^{\lambda_{+}t}t \lesssim
e^{-\frac{\nu(\xi^2+\pi^{2}k^2)}{32}t}.
\end{equation}
For $(\xi,k) \in I_{3}$,
\[
\sigma = i\sqrt{\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}-\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}},\
\]
\[
\lambda_{+} = -\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}+\frac{i}{2}\sqrt{\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}-\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}}, \]
\[
\lambda_{-} = -\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}-\frac{i}{2}\sqrt{\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}-\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}}.
\]
Then
\begin{equation}\label{I31}
\left|\widehat{\mathcal{L}_{1}(t)}\right|\lesssim e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t},\,
\left|\widehat{\mathcal{L}_{2}(t)}\right|\lesssim e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}\frac{\left|\sin\left(\frac{|\lambda_{+}-\lambda_{-}|}{2}t\right)\right|}{|\lambda_{+}-\lambda_{-}|}\lesssim t\, e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}\lesssim e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{4}t}.
\end{equation}
Moreover, since
\[
|\lambda_{+}|=|\lambda_{-}|= \sqrt{\frac{\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}}{4}+\frac{1}{4}\left(\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}-\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}\right)} = \frac{|\xi|}{\sqrt{\xi^{2}+\pi^{2}k^{2}}} \leq 1,
\]
one has
\begin{equation}\label{I32}
\left|\widehat{\partial_{t}\mathcal{L}_{1}(t)}\right|\lesssim |\lambda_+|\left|e^{\lambda_+t}\right| + |\lambda_-|\left|e^{\lambda_-t}\right|\lesssim
e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t},
\end{equation}
\begin{equation}\label{I33}
\left|\widehat{\partial_{t}\mathcal{L}_{2}(t)}\right|\lesssim \left |e^{\lambda_{+}t}\frac{\lambda_{+}-\lambda_{-}}{\lambda_{+}-\lambda_{-}} \right|+ \left|\lambda_{-}\frac{e^{\lambda_{+}t}-e^{\lambda_{-}t}}{\lambda_{+}-\lambda_{-}}\right|
\lesssim
e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{4}t}.
\end{equation}
For $(\xi,k) \in I_{4}$, due to the fact that
\[
\left| \lambda_{+}-\lambda_{-} \right| = \sqrt{\frac{4\xi^{2}}{\xi^{2}+\pi^{2}k^{2}}-\nu^{2}(\xi^{2}+\pi^{2}k^{2})^{2}} \ge \sqrt{15}\nu(\xi^{2}+\pi^2k^{2}) \gtrsim \nu,
\]
we have
\begin{equation}\label{I41}
\left|\widehat{\mathcal{L}_{1}(t)}\right|,\, \left|\widehat{\mathcal{L}_{2}(t)}\right|,\,
\left|\widehat{\partial_{t}\mathcal{L}_{1}(t)}\right|,\,
\left|\widehat{\partial_{t}\mathcal{L}_{2}}(t)\right|\lesssim e^{-\frac{\nu(\xi^{2}+\pi^{2}k^{2})}{2}t}.
\end{equation}
Based on the estimates (\ref{I11})-(\ref{I41}) of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, for $0<\nu \leq\nu_{*}$ one can obtain (\ref{All1})-(\ref{All6}) as before, which concludes the proof of Lemma \ref{AL2}.
\hfill$\square$
\begin{Remark}
\emph{
From Lemma \ref{AL2}, we find that each derivative in the $x$-direction improves the decay rate by $\frac{1}{2}$.
}
\end{Remark}
Using Lemmas \ref{PL3}, \ref{AL1} and \ref{AL2}, we obtain the following decay estimates for solutions to the initial-boundary value problem (\ref{Aa1})-(\ref{Aam1}).
\begin{Lemma}\label{EL2}
Assume that $\phi_{0}\in \mathfrak{D}^{8,1}, \phi_{1}\in \mathfrak{D}^{6,1}$ and $F\in L^{\infty}(0,\infty;\mathfrak{D}^{6,1})$ with
\[
\|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}:= \sup_{t>0}\langle t \rangle^{s} \|F(t)\|_{W^{6,1}} < \infty \text{ for some }s>0.
\]
Then the solution to (\ref{Aa1})-(\ref{Aam1}) satisfies for any $t>0$,
\begin{equation}\label{ALe1}
\|\phi(t)\|_{L^{\infty}}\lesssim \langle t \rangle^{-\frac{1}{2}}\left(\|\phi_{0}\|_{W^{6,1}} + \|\phi_{1}\|_{W^{4,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{4,1})}\right), \text{ if } s>1,
\end{equation}
\begin{equation}\label{ALe1tr}
\|\partial_2\phi(t)\|_{L^{\infty}}\lesssim \langle t \rangle^{-\frac{1}{2}}\left(\|\phi_{0}\|_{W^{7,1}} + \|\phi_{1}\|_{W^{5,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{5,1})}\right), \text{ if } s>1,
\end{equation}
\begin{equation}\label{ALe3}
\|\partial_{1}\phi(t)\|_{L^{\infty}}\lesssim \langle t \rangle^{-1}\left(\|\phi_{0}\|_{W^{8,1}} + \|\phi_{1}\|_{W^{6,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}\right), \text{ if } s>1,
\end{equation}
\begin{equation}\label{ALe4}
\|\phi(t)\|_{H^{4}}\lesssim \langle t \rangle^{-\frac{1}{4}}\left(\|\phi_{0}\|_{W^{8,1}} + \|\phi_{1}\|_{W^{6,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}\right), \text{ if } s>1,
\end{equation}
\begin{equation}\label{ALe7}
\|\partial_{1}\phi(t)\|_{H^{2}} \lesssim \langle t \rangle^{-\frac{3}{4}}\left(\|\phi_{0}\|_{W^{8,1}} + \|\phi_{1}\|_{W^{6,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}\right), \text{ if } s>1,
\end{equation}
\begin{equation}\label{ALe9+}
\|\partial_{11}\phi(t)\|_{L^{2}}\lesssim \langle t \rangle^{-\frac{5}{4}}\left(\|\phi_{0}\|_{W^{8,1}} + \|\phi_{1}\|_{W^{6,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}\right), \text{ if } s\geq\frac{5}{4},
\end{equation}
\begin{equation}\label{ALe9}
\|\partial_{t}\phi(t)\|_{H^{3}}\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\|\phi_{0}\|_{W^{8,1}} + \|\phi_{1}\|_{W^{6,1}} + \|\langle \cdot \rangle^{s} F(\cdot)\|_{L^{\infty}(W^{6,1})}\right), \text{ if } s\geq\frac{5}{4}.
\end{equation}
\end{Lemma}
\subsection{Decay rates for solutions to the linearized equation of vorticity}
Going back to (\ref{Eb1more})-(\ref{Eb2}),
we use Duhamel's principle to find
\begin{equation}\label{voint1}
w(x,y,t) = e^{\nu t\Delta}w_{0}(x,y) + \int_{0}^{t}e^{\nu (t-\tau)\Delta}\partial_{1}\phi(x,y,\tau) {\rm d} \tau,
\end{equation}
that is,
\begin{equation}\label{voint2}
\widehat{w}(\xi,k,t) = e^{-\nu(\xi^2+\pi^{2}k^2)t}\widehat{w_0}(\xi,k) + \int_{0}^{t}e^{-\nu(\xi^2+\pi^{2}k^2)(t-\tau)}\xi\widehat{\phi}(\xi,k,\tau){\rm d} \tau,\,(\xi,k)\in \widehat{\Omega}.
\end{equation}
\begin{Lemma}\label{EL3}
Assume that $\phi_0\in \mathfrak{D}^{8,1}$ and $\mathbf{u}_0 = (u_{10},u_{20})$ with $u_{10}\in \mathfrak{N}^{6,1}$, $u_{20}\in \mathfrak{D}^{6,1}$ so that $w_{0}\in \mathfrak{D}^{5,1}$. Then the solution $w$ to (\ref{Eb1more})-(\ref{Eb2}) satisfies
\begin {equation}\label{vor1}
\|w(t)\|_{L^{\infty}}\lesssim \langle t \rangle^{-1}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right),\
\|w(t)\|_{L^{2}}\lesssim \langle t \rangle^{-\frac{3}{4}}\left(\|w_{0}\|_{W^{3,1}} + \|\phi_{0}\|_{W^{6,1}}\right),
\end{equation}
\begin {equation}\label{vor2}
\|\partial_{1}w(t)\|_{L^{2}}\lesssim \langle t \rangle^{-\frac{5}{4}}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right),\
\|\partial_{2}w(t)\|_{L^{2}}\lesssim \langle t \rangle^{-\frac{3}{4}}\left(\|w_{0}\|_{W^{4,1}} + \|\phi_{0}\|_{W^{7,1}}\right),
\end {equation}
\begin{equation}\label{vor3}
\|\nabla w(t)\|_{H^{1}}\lesssim \langle t \rangle^{-\frac{3}{4}}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right),\
\|\partial_{t}w(t)\|_{H^{2}}\lesssim \langle t \rangle^{-\frac{5}{4}}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right).
\end{equation}
\end{Lemma}
{\bf Proof.}
From (\ref{Eb1more})-(\ref{Eb2}), we find
\[
\phi_1(\mathbf{x}) =\partial_t \phi(\mathbf{x},0) = u_2(\mathbf{x},0)=u_{20}(\mathbf{x}) \in \mathfrak{D}^{6,1}.
\]
By Lemma \ref{EL2} and Remark \ref{Rema1},
\[
\|\partial_1\phi(t)\|_{L^{\infty}}\lesssim \|\xi \widehat{\phi}(t)\|_{\widehat{L}^1} \lesssim \langle t \rangle^{-1}\left(\|(\xi^2 + \pi^2k^2)^3 \widehat{\phi_1}\|_{\widehat{L}^{\infty}} + \|\phi_{0}\|_{W^{8,1}}\right)
\]
\begin{equation}\label{theta1}
\lesssim \langle t \rangle^{-1}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right),
\end{equation}
since $\widehat{\phi}_1(\xi,k)=\widehat{u}_{20} (\xi,k)= -\frac{i\xi}{\xi^2+\pi^2k^2}\widehat{w}_0(\xi,k)$.
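Indeed, since $|\widehat{\phi_{1}}(\xi,k)| = \frac{|\xi|}{\xi^{2}+\pi^{2}k^{2}}|\widehat{w}_{0}(\xi,k)|$, one has
\[
\left\|(\xi^{2}+\pi^{2}k^{2})^{3}\widehat{\phi_{1}}\right\|_{\widehat{L}^{\infty}} = \left\||\xi|(\xi^{2}+\pi^{2}k^{2})^{2}\widehat{w}_{0}\right\|_{\widehat{L}^{\infty}} \lesssim \left\|(1+\xi^{2}+k^{2})^{\frac{5}{2}}\widehat{w}_{0}\right\|_{\widehat{L}^{\infty}} \lesssim \|w_{0}\|_{W^{5,1}},
\]
where the last step follows as in Remark \ref{Rema1}.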
By using (\ref{eLL3}), (\ref{voint2}) and (\ref{theta1}),
\[
\|w(t)\|_{L^{\infty}}\lesssim \|\widehat{w}(t)\|_{\widehat{L}^{1}}\lesssim e^{-\nu t}\|w_0\|_{H^2} + \int_0^t e^{-\nu (t- \tau)}\|\xi \widehat{\phi}(\tau)\|_{\widehat{L}^1}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|w_0\|_{H^2} + \int_0^t e^{-\nu (t- \tau)}\langle \tau \rangle^{-1}{\rm d} \tau\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right)
\]
\[
\lesssim \langle t \rangle^{-1}\left(\|w_{0}\|_{W^{5,1}} + \|\phi_{0}\|_{W^{8,1}}\right).
\]
Other estimates in (\ref{vor1})-(\ref{vor3}) can be proved in a similar way.
\hfill$\square$
\begin{Remark}
\emph{
We observe that the decay rates of $w$ coincide with those of $\partial_1 \phi$. Note that $w$ also satisfies
\[
\partial_{tt}w-\nu\Delta\partial_{t}w - \partial_{11}(-\Delta)^{-1}w = 0.
\]
}
\end{Remark}
\section{Nonlinear stability}
The local existence and uniqueness of the solution to (\ref{P4})-(\ref{P5}) can be proved by using the method of \cite{AD1,AD2}. Here we omit the details.
\begin{Proposition}\label{tp1}
Assume that $\theta_{0}\in \mathfrak{D}^{m+1}$ and $\omega_{0} \in \mathfrak{D}^{m}$, $m\ge 2$. There exists $T^*\in (0,\infty]$ such that (\ref{P4})-(\ref{P5}) admits a unique solution
\[
(\omega, \theta) \in C([0,T^*); \mathfrak{D}^{m})\times C([0,T^*);\mathfrak{D}^{m+1}).
\]
\end{Proposition}
\begin{Remark}
\emph{
Let $(\omega, \theta)$ be a sufficiently smooth solution to (\ref{P4})-(\ref{P5}). If we assume $\theta_0\in\mathfrak{D}^{m+1}$ for some integer $m\ge 1$, then it is necessary that $\omega_0\in\mathfrak{D}^m$ and for any $t>0$,
\begin{equation}\label{P8}
\theta(t,\cdot) \in \mathfrak{D}^{m+1}, \, \omega(t,\cdot)\in\mathfrak{D}^{m},\ u_{1}(t,\cdot)\in \mathfrak{N}^{m+1},~u_{2}(t,\cdot) \in\mathfrak{D}^{m+1}.
\end{equation}
We refer to \cite{AD1,AD2,LD1} for a detailed argument.
}
\end{Remark}
Based on Proposition \ref{tp1}, global-in-time existence will follow from uniform-in-time a priori estimates. In the following we focus on decay estimates of the solutions to (\ref{P4})-(\ref{P5}).
\subsection{Integral form of solutions}
From $(\ref{pr1})$ and $(\ref{P8})$, $\theta$ and $\omega$ can be written as
\begin{equation}\label{theta2}
\theta(x,y,t) = \frac{1}{\sqrt{ \pi}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}\widehat{\theta}(\xi,k,t)e^{ i x\xi}{\rm d}\xi\sin k\pi y,
\end{equation}
\begin{equation}\label{vorf2}
\omega(x,y,t) = \frac{1}{\sqrt{\pi}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}\widehat{\omega}(\xi,k,t)e^{ i x\xi}{\rm d}\xi \sin k\pi y.
\end{equation}
Recalling that $u_{1} =\partial_{2}(-\Delta)^{-1}\omega$ and $u_{2} =-\partial_{1}(-\Delta)^{-1}\omega$, we obtain
\begin{equation}\label{vel1}
u_{1}(x,y,t) = \frac{1}{\sqrt{\pi}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}\frac{k\pi}{\xi^{2}+\pi^{2}k^{2}}\widehat{\omega}(\xi,k,t)e^{ix\xi}{\rm d}\xi \cos k\pi y
\end{equation}
and
\begin{equation}\label{vel2}
u_{2}(x,y,t) = \frac{1}{\sqrt{\pi}}\sum_{k=1}^{+\infty}\int_{\mathbb{R}}\frac{-i\xi }{\xi^{2}+ \pi^{2}k^{2}}\widehat{\omega}(\xi,k,t)e^{ix\xi }{\rm d}\xi \sin k\pi y.
\end{equation}
Taking the time derivative of $(\ref{P4})_{2}$ and using
\[
\partial_t u_2 = -\partial_1 (-\Delta)^{-1}\partial_t \omega,\, -\Delta u_2 = \partial_t \Delta \theta + \Delta (\mathbf{u}\cdot\nabla\theta),
\]
the system $(\ref{P4})$ is transformed into
\begin{equation}\label{Eb4}
\left\{
\begin{aligned}
&\partial_{t}\omega-\nu\Delta\omega = f_{1} ,\\
&\partial_{tt}\theta-\nu\Delta\partial_{t}\theta - \partial_{1}(-\Delta)^{-1}\partial_{1}\theta = f_{2}
\end{aligned}
\right.
\end{equation}
with
\begin{equation}\label{Ebf4+}
f_{1} = -\mathbf{u}\cdot\nabla\omega-\partial_{1}\theta,
\end{equation}
\begin{equation}\label{Ebf4++}
f_{2} = -\partial_{t}\mathbf{u}\cdot\nabla\theta - \mathbf{u}\cdot\nabla\partial_{t}\theta - \partial_{1}(-\Delta)^{-1}(\mathbf{u}\cdot\nabla\omega)+
\nu\Delta(\mathbf{u}\cdot\nabla\theta).
\end{equation}
From (\ref{P8}), it is not difficult to find that
\[
f_1 \in \mathfrak{D}^{m-1},\, f_2 \in \mathfrak{D}^{m-2}.
\]
Accordingly,
\begin{equation}\label{Eb6}
\omega(x,y,t)= e^{\nu t\Delta }\omega_{0}(x,y) + \int_{0}^{t}e^{\nu(t-\tau)\Delta}f_{1}(x,y,\tau){\rm d} \tau,
\end{equation}
\[
\theta(x,y,t)= \mathcal{L}_{1}(t)\theta_{0}(x,y) + \mathcal{L}_{2}(t)\left(\frac{\nu}{2}(-\Delta)\theta_{0}(x,y) + \theta_1(x,y)\right)
\]
\begin{equation}\label{Eb7}
+ \int_{0}^{t}\mathcal{L}_{2}(t-\tau)f_{2}(x,y,\tau){\rm d} \tau,
\end{equation}
where
\begin{equation}\label{inithe}
\theta_1(x,y) = \partial_{t}\theta(x,y,0) = -\mathbf{u}_0\cdot\nabla\theta_0 + u_{20},
\end{equation}
\begin{equation}\label{inithe2}
\mathbf{u}_0 =(u_{10}, u_{20}) = (\partial_2(-\Delta)^{-1}\omega_0, -\partial_1(-\Delta)^{-1}\omega_0).
\end{equation}
\subsection{Decay estimates of nonlinear equations}
For $m\in\mathbb{N}$ and a small parameter $\epsilon>0$ to be determined later, we define
\[
\mathcal{F}_{1}(t) = \sup_{0\leq \tau\leq t}\left\{\langle\tau\rangle^{-\epsilon}\left(\|\theta(\tau)\|_{H^{m+1}}+\|\omega(\tau)\|_{H^{m}}\right)\right\},
\]
\[
\mathcal{F}_{2}(t) = \sup_{0\leq \tau\leq t}\{\langle\tau\rangle\left(\|\partial_{1}\theta(\tau)\|_{L^{\infty}}+\|\mathbf{u}(\tau)\|_{W^{1,\infty}} +\|\omega(\tau)\|_{L^{\infty}}\right)
\]
\[
+\langle\tau\rangle^{\frac{1}{2}}\left(\|\partial_{2}\theta(\tau)\|_{L^{\infty}}
+\|\theta(\tau)\|_{L^{\infty}}\right)\},
\]
\begin{equation*}
\begin{split}
\mathcal{F}_{3}(t) =& \sup_{0\leq \tau\leq t}\left\{\langle\tau\rangle^{\frac{1}{4}}\|\theta(\tau)\|_{H^{4}}
+\langle\tau\rangle^{\frac{3}{4}}\left(\|\partial_{1}\theta(\tau)\|_{H^{2}}+ \|u_{1}(\tau)\|_{H^{3}} +\|\omega(\tau)\|_{H^{2}}\right)\right. \\
&\left.+ \langle\tau\rangle^{\frac{5}{4}}\left(\|\partial_{11}\theta(\tau)\|_{L^{2}}+\|u_{2}(\tau)\|_{H^{2}} +
\|\partial_{1}u_{1}(\tau)\|_{H^{1}} +
\|\partial_{1}\omega(\tau)\|_{L^{2}}\right)\right\},
\end{split}
\end{equation*}
\begin{equation*}
\mathcal{F}_4(t) = \sup_{0\leq \tau\leq t}\left\{\langle\tau\rangle^{\frac{5}{4}}\left(\|\partial_{\tau}\theta(\tau)\|_{H^{3}}
+\|\partial_{\tau}\mathbf{u}(\tau)\|_{H^{3}}
+\|\partial_{\tau}\omega(\tau)\|_{H^{2}}\right)\right\},
\end{equation*}
\[
\mathcal{F}_{0} = \|\theta_{0}\|_{W^{8,1}} + \|\theta_{0}\|_{H^{m+1}} +\|\omega_{0}\|_{W^{5,1}} + \|\omega_{0}\|_{H^{m}},
\]
\[
\mathcal{F}(t) = \mathcal{F}_{1}(t) +\mathcal{F}_{2}(t) +\mathcal{F}_{3}(t)+\mathcal{F}_{4}(t).
\]
We establish the estimates of $\mathcal{F}_j(t)$, $j=1,2,3,4$, in the following four lemmas.
\begin{Lemma}\label{EL1}
Let $m\ge 1$. Then
\[
\mathcal{F}_{1}(t) \lesssim \mathcal{F}_{0} + \left(\mathcal{F}^{3}(t) + \mathcal{F}^{4}(t)\right)^{\frac{1}{2}}.
\]
\end{Lemma}
{\bf Proof.}
The basic energy estimate for (\ref{P4}) reads as
\[
\|\nabla\theta(t)\|_{L^{2}}^{2} + \|\omega(t)\|_{L^{2}}^{2} + 2\nu\int_{0}^{t}\|\nabla\omega(\tau)\|_{L^{2}}^{2}{\rm d} \tau
\]
\begin{equation}\label{Eoe2}
\lesssim \|\nabla\theta_{0}\|_{L^{2}}^{2} + \|\omega_{0}\|_{L^{2}}^{2} + \int_{0}^{t}\|\nabla\mathbf{u}(\tau)\|_{L^{\infty}}\|\nabla\theta(\tau)\|_{L^{2}}^{2}{\rm d} \tau,
\end{equation}
which follows from testing the first and second equations of (\ref{P4}) by $\omega$ and $-\Delta\theta$, respectively. Note that here we have used the fact that
\[
\langle \partial_1\theta,\omega \rangle + \langle -u_2, -\Delta\theta\rangle =\langle \partial_1\theta,\omega \rangle + \langle \partial_1(-\Delta)^{-1}\omega, -\Delta\theta\rangle
\]
\[
=\langle \partial_1\theta,\omega \rangle - \langle \Delta\partial_1(-\Delta)^{-1}\omega, \theta\rangle = \langle \partial_1\theta,\omega \rangle - \langle \omega, \partial_1\theta\rangle = 0.
\]
Similarly, for $m\ge 1$,
\begin{equation}\label{Ea+1}
\langle\partial_{t}\partial^{m}\omega, \partial^{m}\omega\rangle + \langle\partial^{m}(\mathbf{u}\cdot \nabla \omega), \partial^{m}\omega\rangle - \nu\langle\Delta\partial^{m}\omega, \partial^{m}\omega\rangle = \langle\partial^{m}\partial_{1}\theta, \partial^{m}\omega\rangle
\end{equation}
and
\begin{equation}\label{Ea+2}
\langle\partial_{t}\partial^{m}\nabla\theta, \partial^{m}\nabla\theta\rangle + \langle\partial^{m}\nabla(\mathbf{u}\cdot \nabla \theta), \partial^{m}\nabla\theta\rangle
= \langle-\partial^{m}\nabla u_{2}, \partial^{m}\nabla\theta\rangle.
\end{equation}
Adding (\ref{Ea+1}) to (\ref{Ea+2}) yields
\begin{equation}\label{Ea1}
\frac{1}{2}\frac{d}{dt}\left(\|\theta\|_{\dot{H}^{m + 1}}^{2} + \|\omega\|_{\dot{H}^{m}}^{2}\right) + \nu\|\omega\|_{\dot{H}^{m + 1}}^{2} = B_{1} + B_{2} + B_{3}
\end{equation}
with
\[
B_{1} = - \langle\partial^{m}(\mathbf{u}\cdot\nabla\omega), \partial^{m}\omega\rangle,
\]
\[
B_{2} = -\langle\partial^{m}\nabla(\mathbf{u}\cdot\nabla\theta), \partial^{m}\nabla\theta\rangle,
\]
\[
B_{3} = \langle-\partial^{m}\nabla u_{2}, \partial^{m}\nabla\theta\rangle + \langle\partial^{m}\partial_{1}\theta, \partial^{m}\omega\rangle.
\]
By the commutator estimate (\ref{eLL1}) in Lemma \ref{PL1} and Corollary \ref{corollary1},
\[
|B_{1}|\lesssim |\langle\partial^{m}(\mathbf{u}\cdot\nabla\omega)-\mathbf{u}\cdot\nabla\partial^{m}\omega , \partial^{m}\omega\rangle|
\]
\[
\lesssim \|\partial^{m}{\rm div}\,(\mathbf{u}\omega)-\mathbf{u}\cdot\nabla\partial^{m}\omega\|_{L^2}\|\partial^{m}\omega\|_{L^2}
\]
\begin{equation}\label{Ea2}
\lesssim \left(\|\nabla\mathbf{u}\|_{L^{\infty}} + \|\omega\|_{L^{\infty}}\right)\|\mathbf{u}\|_{H^{m+1}}\|\omega\|_{H^{m}}
\lesssim \|\nabla\mathbf{u}\|_{L^{\infty}}\|\omega\|_{H^{m}}^{2},
\end{equation}
where we use the fact that
\[
\langle\mathbf{u}\cdot\nabla\partial^{m}\omega, \partial^{m}\omega\rangle = 0.
\]
Applying a similar argument to $B_{2}$ yields
\[
|B_{2}| \lesssim \|\nabla\mathbf{u}\|_{L^{\infty}}\|\theta\|_{H^{m+1}}^{2} + \|\theta\|_{L^{\infty}}\|\mathbf{u}\|_{H^{m+2}}\|\theta\|_{H^{m+1}}
\]
\begin{equation}\label{Ea3}
\lesssim \|\nabla\mathbf{u}\|_{L^{\infty}}\|\theta\|_{H^{m+1}}^{2} + \|\theta\|_{L^{\infty}}\|\omega\|_{H^{m+1}}\|\theta\|_{H^{m+1}}.
\end{equation}
The last term $B_{3}$ vanishes once one replaces $u_{2}$ by $-\partial_{1}(-\Delta)^{-1}\omega$ and integrates by parts twice:
\[
B_{3} = \langle \partial^{m}\nabla \partial_{1}(-\Delta)^{-1}\omega, \partial^{m}\nabla\theta\rangle + \langle\partial^{m}\partial_{1}\theta, \partial^{m}\omega\rangle = 0.
\]
By integrating (\ref{Ea1}) in time from $0$ to $t$ and summing up (\ref{Ea2}), (\ref{Ea3}),
\[
\|\theta(t)\|_{\dot{H}^{m+1}}^{2} + \|\omega(t)\|_{\dot{H}^{m}}^{2} + 2\nu\int_{0}^{t}\|\omega(\tau)\|_{\dot{H}^{m+1}}^{2}{\rm d} \tau
\]
\[
\leq \|\theta_{0}\|_{\dot{H}^{m+1}}^{2}
+ \|\omega_{0}\|_{\dot{H}^{m}}^{2} + C\int_{0}^{t}\|\nabla\mathbf{u}(\tau)\|_{L^{\infty}}(\|\omega(\tau)\|_{H^{m}}^{2}+\|\theta(\tau)\|_{H^{m+1}}^{2}){\rm d} \tau
\]
\begin{equation}\label{Eoe1}
+ \frac{C}{\nu}\int_{0}^{t}\|\theta(\tau)\|_{L^{\infty}}^{2}\|\theta(\tau)\|_{H^{m+1}}^{2}{\rm d} \tau + \nu\int_{0}^{t}\|\omega(\tau)\|_{H^{m+1}}^{2}{\rm d} \tau,
\end{equation}
where Young's inequality is applied.
Adding (\ref{Eoe2}) and (\ref{Eoe1}) and using the Poincar\'{e} inequality,
\[
\|\theta(t)\|_{H^{m+1}}^{2} + \|\omega(t)\|_{H^{m}}^{2} + \nu\int_{0}^{t}\|\omega(\tau)\|_{H^{m+1}}^{2}{\rm d} \tau \lesssim \|\theta_{0}\|_{H^{m+1}}^{2}+\|\omega_{0}\|_{H^{m}}^{2}
\]
\[
+ \int_{0}^{t}\|\nabla\mathbf{u}(\tau)\|_{L^{\infty}}\left(\|\omega(\tau)\|_{H^{m}}^{2}+\|\theta(\tau)\|_{H^{m+1}}^{2}\right){\rm d} \tau + \int_{0}^{t}\|\theta(\tau)\|_{L^{\infty}}^{2}\|\theta(\tau)\|_{H^{m+1}}^{2}{\rm d} \tau.
\]
Hence,
\[
\|\theta(t)\|_{H^{m+1}}^{2} + \|\omega(t)\|_{H^{m}}^{2}
\lesssim \|\theta_{0}\|_{H^{m+1}}^{2}+\|\omega_{0}\|_{H^{m}}^{2}
\]
\[
+
\int_{0}^{t}\langle\tau\rangle^{-1}\langle\tau\rangle^{2\epsilon}\langle\tau\rangle\|\nabla\mathbf{u}(\tau)\|_{L^{\infty}}{\rm d} \tau \sup_{0\leq \tau\leq t}\left\{\langle\tau\rangle^{-2\epsilon}\|\theta(\tau)\|_{H^{m+1}}^{2}+\langle\tau\rangle^{-2\epsilon}\|\omega(\tau)\|_{H^{m}}^{2}\right\}
\]
\[
+\int_{0}^{t}\langle\tau\rangle^{-1}\langle\tau\rangle\|\theta(\tau)\|_{L^{\infty}}^{2}\langle\tau\rangle^{2\epsilon}{\rm d} \tau \sup_{0\leq \tau\leq t}\left\{\langle\tau\rangle^{-2\epsilon}\|\theta(\tau)\|_{H^{m+1}}^{2}+\langle\tau\rangle^{-2\epsilon}\|\omega(\tau)\|_{H^{m}}^{2}\right\}
\]
\[
\lesssim \mathcal{F}_{0}^{2} + \mathcal{F}_{1}^{2}(t)\mathcal{F}_{2}(t)\int_{0}^{t}\langle\tau\rangle^{-1}\langle\tau\rangle^{2\epsilon}{\rm d} \tau +
\mathcal{F}_{1}^{2}(t)\mathcal{F}_{2}^{2}(t)\int_{0}^{t}\langle\tau\rangle^{-1}\langle\tau\rangle^{2\epsilon}{\rm d} \tau
\]
\[
\lesssim \mathcal{F}_{0}^{2} + \langle t\rangle^{2\epsilon}\mathcal{F}^{3}(t)+ \langle t\rangle^{2\epsilon}\mathcal{F}^{4}(t).
\]
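In the last step we used the elementary estimate, valid for $0<\epsilon<\frac{1}{2}$,
\[
\int_{0}^{t}\langle\tau\rangle^{-1+2\epsilon}{\rm d} \tau
\lesssim \int_{0}^{t}(1+\tau)^{-1+2\epsilon}{\rm d} \tau
= \frac{(1+t)^{2\epsilon}-1}{2\epsilon}
\lesssim \langle t\rangle^{2\epsilon}.
\]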
Finally, we conclude that
\[
\mathcal{F}_{1}(t) \lesssim \mathcal{F}_{0} + \left(\mathcal{F}^{3}(t) + \mathcal{F}^{4}(t)\right)^{\frac{1}{2}}.
\]
\hfill$\square$
\begin{Lemma}\label{EL4}
Let $m> 20$. Then for $0<\delta <\frac{2}{5}$,
\[
\mathcal{F}_{2}(t) \lesssim \mathcal{F}_{0} + \mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t) + \mathcal{F}^{2 + \delta}(t).
\]
\end{Lemma}
{\bf Proof.}
From (\ref{Eb7}) and Lemmas \ref{AL1}-\ref{AL2}, we obtain that
\[
\|\partial_{1}\theta(t)\|_{L^{\infty}} \lesssim \|\widehat{\partial_{1}\theta}(t)\|_{\widehat{L}^{1}}
\]
\[
\lesssim
\langle t\rangle^{-1}\|\omega_{0}\|_{W^{5,1}} + \langle t\rangle^{-1}\|\theta_{0}\|_{W^{8,1}} +\langle t\rangle^{-1}\|\theta_{0}\|_{H^{m}}\|\omega_{0}\|_{H^{m}}
+ J_{1} + J_{2} + J_{3} + J_{4},
\]
where
\[
J_{1}=\int_{0}^{t}\langle t-\tau \rangle^{-1}\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{6,1}}{\rm d} \tau,\
J_{2}=\int_{0}^{t}\langle t-\tau\rangle^{-1}\|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{6,1}}{\rm d} \tau,
\]
\[
J_{3}=\int_{0}^{t}\langle t-\tau\rangle^{-1}\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{4,1}}{\rm d} \tau,\
J_{4}=\int_{0}^{t}\langle t-\tau\rangle^{-1}\|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{8,1}}{\rm d} \tau.
\]
For $J_{1}$, according to the interpolation inequality (\ref{BLT1}) together with (\ref{BL13}) in Lemma \ref{PL1},
\[
\|\partial_{t}\mathbf{u}\cdot\nabla\theta\|_{W^{6,1}}
\lesssim \|\partial_{t}\mathbf{u}\|_{L^{2}}\|\nabla\theta\|_{H^{6}} + \|\partial_{t}\mathbf{u}\|_{H^{6}}\|\nabla\theta\|_{L^{2}}
\]
\[
\lesssim \|\partial_{t}\mathbf{u}\|_{L^{2}}\|\nabla\theta\|_{H^{12}}^{\frac{1}{2}}\|\nabla\theta\|_{L^{2}}^{\frac{1}{2}} + \|\partial_{t}\mathbf{u}\|_{H^{\frac{6}{\delta}}}^{\delta}\|\partial_{t}\mathbf{u}\|_{L^{2}}^{1-\delta}\|\nabla\theta\|_{L^{2}},
\]
where $0<\delta < \frac{2}{5}$. Replacing $\|\partial_{t}\mathbf{u}\|_{H^{\frac{6}{\delta}}}$ by $\|\partial_{t}\omega\|_{H^{\frac{6}{\delta}-1}}$ and using $(\ref{P4})_{1}$, one infers that
\[
J_{1}\lesssim \int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau\rangle^{-\frac{5}{4}}\langle \tau\rangle^{\frac{5}{4}}\|\partial_{\tau}\mathbf{u}\|_{L^{2}}\langle \tau\rangle^{-\frac{\epsilon}{2}}\langle \tau \rangle^{\frac{\epsilon}{2}}\|\nabla\theta\|_{H^{12}}^{\frac{1}{2}}
\langle\tau\rangle^{-\frac{1}{8}}\langle\tau\rangle^{\frac{1}{8}}
\|\nabla\theta\|_{L^{2}}^{\frac{1}{2}}{\rm d} \tau
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-1}
\langle \tau \rangle^{-\delta\epsilon}\langle \tau \rangle^{\delta\epsilon}\|\Delta\omega+\partial_{1}\theta\|_{H^{\frac{6}{\delta}-1}}^{\delta}
\langle \tau\rangle^{-\frac{5}{4}(1-\delta)}\langle \tau\rangle^{\frac{5}{4}(1-\delta)}\|\partial_{\tau}\mathbf{u}\|_{L^{2}}^{1-\delta}
\langle\tau\rangle^{-\frac{1}{4}}\langle\tau\rangle^{\frac{1}{4}}\|\nabla\theta\|_{L^{2}}{\rm d} \tau
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-1}
\langle \tau \rangle^{-2\delta\epsilon}\langle \tau \rangle^{2\delta\epsilon}\|\mathbf{u}\cdot\nabla\omega\|_{H^{\frac{6}{\delta}-1}}^{\delta}
\langle \tau\rangle^{-\frac{5}{4}(1-\delta)}
\langle\tau\rangle^{\frac{5}{4}(1-\delta)}\|\partial_{\tau}\mathbf{u}\|_{L^{2}}^{1-\delta}
\langle\tau\rangle^{-\frac{1}{4}}\langle\tau\rangle^{\frac{1}{4}}\|\nabla\theta\|_{L^{2}}{\rm d} \tau
\]
\[
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-1}\langle
\tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}{\rm d} \tau\mathcal{F}_{4}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{3}^{\frac{1}{2}}(t)
+\int_{0}^{t}\langle t-\tau\rangle^{-1} \langle \tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+\delta\epsilon}
{\rm d} \tau\mathcal{F}_{4}^{1-\delta}(t)\mathcal{F}_{1}^{\delta}(t)\mathcal{F}_{3}(t)
\]
\[
+\int_{0}^{t}\langle t-\tau\rangle^{-1}
\langle\tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+2\delta\epsilon} {\rm d} \tau\mathcal{F}_{4}^{1-\delta}(t)\mathcal{F}_{1}^{2\delta}(t)\mathcal{F}_{3}(t)
\]
\begin{equation}\label{JJ1}
\lesssim \langle t\rangle^{-1}\left(\mathcal{F}^{2}(t) + \mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Here $\epsilon >0$ is such that $\frac{5\delta}{4}+2\delta\epsilon < \frac{1}{2}$ according to Lemma \ref{EL2}.
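Here, and repeatedly below, we use the standard convolution estimate: for $a>1$,
\[
\int_{0}^{t}\langle t-\tau\rangle^{-1}\langle\tau\rangle^{-a}{\rm d} \tau
\lesssim \langle t\rangle^{-1}\int_{0}^{\frac{t}{2}}\langle\tau\rangle^{-a}{\rm d} \tau
+ \langle t\rangle^{-a}\int_{\frac{t}{2}}^{t}\langle t-\tau\rangle^{-1}{\rm d} \tau
\lesssim \langle t\rangle^{-1} + \langle t\rangle^{-a}\log\langle t\rangle
\lesssim \langle t\rangle^{-1},
\]
obtained by splitting the integral at $\tau=\frac{t}{2}$ and noting that $\langle t-\tau\rangle\gtrsim\langle t\rangle$ on $[0,\frac{t}{2}]$ while $\langle\tau\rangle\gtrsim\langle t\rangle$ on $[\frac{t}{2},t]$.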
Similarly for $J_{2}$, we have
\[
\|\mathbf{u}\cdot\nabla\partial_{t}\theta\|_{W^{6,1}}
\lesssim \|\mathbf{u}\|_{L^{2}}\|\partial_{t}\nabla\theta\|_{H^{6}} + \|\mathbf{u}\|_{H^{6}}\|\partial_{t}\nabla\theta\|_{L^{2}}
\]
\[
\lesssim \|\mathbf{u}\|_{L^{2}}\|\partial_{t}\nabla\theta\|_{H^{12}}^{\frac{1}{2}}\|\partial_{t}\nabla\theta\|_{L^{2}}^{\frac{1}{2}} + \|\mathbf{u}\|_{H^{12}}^{\frac{1}{2}}\|\mathbf{u}\|_{L^{2}}^{\frac{1}{2}}\|\partial_{t}\nabla\theta\|_{L^{2}}.
\]
By $(\ref{P4})_{2}$ and Lemma \ref{EL2},
\[
J_{2}\lesssim \int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau \rangle^{-\frac{3}{4}}\langle \tau\rangle^{\frac{3}{4}}
\|\mathbf{u}\|_{L^{2}}\langle \tau \rangle^{-\epsilon}\langle \tau \rangle^{\epsilon}\|\mathbf{u}\cdot \nabla \theta\|_{H^{13}}^{\frac{1}{2}}\langle \tau\rangle^{-\frac{5}{8}}\langle \tau\rangle^{\frac{5}{8}}\|\partial_{\tau}\nabla\theta\|_{L^{2}}^{\frac{1}{2}}{\rm d} \tau
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau \rangle^{-\frac{3}{4}}\langle \tau\rangle^{\frac{3}{4}}
\|\mathbf{u}\|_{L^{2}}\langle \tau \rangle^{-\frac{\epsilon}{2}}\langle \tau \rangle^{\frac{\epsilon}{2}}\|u_{2}\|_{H^{13}}^{\frac{1}{2}}\langle \tau\rangle^{-\frac{5}{8}}\langle \tau\rangle^{\frac{5}{8}}\|\partial_{\tau}\nabla\theta\|_{L^{2}}^{\frac{1}{2}}{\rm d} \tau
\]
\[
+\int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau \rangle^{-\frac{3}{8}}\langle \tau\rangle^{\frac{3}{8}}
\|\mathbf{u}\|_{L^{2}}^{\frac{1}{2}}\langle \tau \rangle^{\frac{\epsilon}{2}}\langle \tau \rangle^{-\frac{\epsilon}{2}}\|\mathbf{u}\|_{H^{12}}^{\frac{1}{2}}\langle \tau\rangle^{-\frac{5}{4}}\langle \tau\rangle^{\frac{5}{4}}\|\partial_{\tau}\nabla\theta\|_{L^{2}}{\rm d} \tau
\]
\[
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau\rangle^{-\frac{11}{8}+\epsilon}{\rm d} \tau \mathcal{F}_{3}(t)\mathcal{F}_{1}(t)\mathcal{F}_{4}^{\frac{1}{2}}(t)
+ \int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}{\rm d} \tau
\mathcal{F}_{3}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{4}^{\frac{1}{2}}(t)
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-1}\langle \tau\rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}{\rm d} \tau
\mathcal{F}_{3}^{\frac{1}{2}}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{4}(t)
\]
\begin{equation}\label{JJ2}
\lesssim \langle t\rangle^{-1}\left(\mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)\right).
\end{equation}
For $J_{3}$, it holds that
\[
\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)\|_{W^{4,1}} \lesssim \|\mathbf{u}\cdot\nabla\omega\|_{W^{5,1}}
\lesssim \|u_{1}\partial_{1}\omega\|_{W^{5,1}}
+ \|u_{2}\partial_{2}\omega\|_{W^{5,1}}
\]
\[
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\omega\|_{L^{2}}^{\frac{1}{2}}\|\partial_{1}\omega\|_{H^{10}}^{\frac{1}{2}} + \|u_{1}\|_{L^{2}}^{\frac{1}{2}}\|u_{1}\|_{H^{10}}^{\frac{1}{2}}\|\partial_{1}\omega\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\partial_{2}\omega\|_{L^{2}}^{\frac{1}{2}}\|\partial_{2}\omega\|_{H^{10}}^{\frac{1}{2}} + \|u_{2}\|_{L^{2}}^{\frac{1}{2}}\|u_{2}\|_{H^{10}}^{\frac{1}{2}}\|\partial_{2}\omega\|_{L^{2}}.
\]
In a similar way,
\begin{equation}\label{JJ3}
J_{3}
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-1}\left(\langle \tau \rangle^{-\frac{11}{8}+\frac{\epsilon}{2}} +
\langle \tau \rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}\right){\rm d} \tau\mathcal{F}^{2}(t)
\lesssim \langle t \rangle^{-1}\mathcal{F}^{2}(t).
\end{equation}
For the last term $J_{4}$, using the interpolation inequality (\ref{BLT1}) and (\ref{BL13}) in Lemma \ref{PL1}, we obtain that for all $\delta \in (0,1)$,
\[
\|\mathbf{u}\cdot\nabla\theta\|_{W^{8,1}}
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\theta\|_{H^{8}} + \|u_{1}\|_{H^{8}}\|\partial_{1}\theta\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\partial_{2}\theta\|_{H^{8}} + \|u_{2}\|_{H^{8}}\|\partial_{2}\theta\|_{L^{2}}
\]
\[
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\theta\|_{H^{16}}^{\frac{1}{2}}\|\partial_{1}\theta\|_{L^{2}}^{\frac{1}{2}} + \|u_{1}\|_{H^{16}}^{\frac{1}{2}}\|u_{1}\|_{L^{2}}^{\frac{1}{2}}\|\partial_{1}\theta\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\partial_{2}\theta\|_{H^{16}}^{\frac{1}{2}}\|\partial_{2}\theta\|_{L^{2}}^{\frac{1}{2}} + \|u_{2}\|_{H^{\frac{8}{\delta}}}^{\delta}\|u_{2}\|_{L^{2}}^{1-\delta}\|\partial_{2}\theta\|_{L^{2}}.
\]
By choosing $0<\delta < \frac{2}{5}$,
\begin{equation}\label{JJ4}
J_{4} \lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-1}\left(\langle \tau\rangle^{-\frac{9}{8}+\frac{\epsilon}{2}}+\langle \tau \rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}+\langle \tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+\delta\epsilon}\right){\rm d} \tau \mathcal{F}^{2}(t)
\lesssim \langle t\rangle^{-1}\mathcal{F}^{2}(t).
\end{equation}
Summing up (\ref{JJ1})-(\ref{JJ4}) gives
\begin{equation}\label{Eaa}
\|\partial_{1}\theta(t)\|_{L^{\infty}} \lesssim \|\widehat{\partial_{1}\theta}(t)\|_{\widehat{L}^{1}} \lesssim
\langle t\rangle^{-1}\left(\mathcal{F}_{0} +\mathcal{F}_{0}^{2} +\mathcal{F}^{2}(t)+\mathcal{F}^{2 + \delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right).
\end{equation}
Note that
\[
\|\theta(t)\|_{L^{\infty}} +\|\partial_2\theta(t)\|_{L^{\infty}}\lesssim \|\widehat{\theta}(t)\|_{\widehat{L}^{1}}
+
\|\widehat{\partial_2\theta}(t)\|_{\widehat{L}^{1}}
\]
\[
\lesssim
\langle t\rangle^{-\frac{1}{2}}\|\omega_{0}\|_{W^{4,1}} + \langle t\rangle^{-\frac{1}{2}}\|\theta_{0}\|_{W^{7,1}} +
\langle t\rangle^{-\frac{1}{2}}\|\theta_{0}\|_{H^{m}}
\|\omega_{0}\|_{H^{m}}
\]
\[
+ \int_{0}^{t}\langle t-\tau \rangle^{-\frac{1}{2}}\left(\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{5,1}}
+ \|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{5,1}}\right){\rm d} \tau
\]
\[
+\int_{0}^{t}\langle t-\tau \rangle^{-\frac{1}{2}}\left(
\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{3,1}}
+ \|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{7,1}}\right){\rm d} \tau.
\]
Using the same method as in the estimate of $\|\partial_{1}\theta\|_{L^{\infty}}$, we obtain
\[
\|\theta(t)\|_{L^{\infty}}+\|\partial_{2}\theta(t)\|_{L^{\infty}} \lesssim \|\widehat{\theta}(t)\|_{\widehat{L}^{1}}+\|\widehat{\partial_{2}\theta}(t)\|_{\widehat{L}^{1}} \]
\[
\lesssim
\langle t\rangle^{-\frac{1}{2}}\left(\mathcal{F}_{0} +\mathcal{F}_{0}^{2} +\mathcal{F}^{2}(t)+\mathcal{F}^{2 + \delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right).
\]
Now we turn to the estimate of $\omega$. By (\ref{Eb6}) and (\ref{Eaa}),
\[
\|\omega(t)\|_{L^{\infty}} \lesssim \|\widehat{\omega}(t)\|_{\widehat{L}^{1}}
\]
\[
\lesssim \left\|e^{-\nu\left(\xi^{2}+ \pi^2 k^{2}\right) t}\widehat{\omega}_{0}\right\|_{\widehat{L}^{1}} +
\int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2k^{2}\right)(t-\tau)}\widehat{\mathbf{u}\cdot\nabla\omega}(\tau)\right\|_{\widehat{L}^{1}} {\rm d} \tau
\]
\[
+ \int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2k^{2}\right)(t-\tau)}\widehat{\partial_{1}\theta}(\tau)\right\|_{\widehat{L}^{1}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|\widehat{\omega}_{0}\|_{\widehat{L}^{1}} +
\int_{0}^{t}e^{-\nu(t-\tau)}\|\widehat{\mathbf{u}\cdot\nabla\omega}(\tau)\|_{\widehat{L}^{1}}{\rm d} \tau
+
\int_{0}^{t}e^{-\nu(t-\tau)}\langle\tau\rangle^{-1}\langle\tau\rangle\|\widehat{\partial_{1}\theta}(\tau)\|_{\widehat{L}^{1}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|\omega_{0}\|_{H^{2}} +
\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\mathbf{u}\cdot\nabla\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau
\]
\begin{equation}\label{JJJ1}
+\langle t\rangle^{-1}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t)
+\mathcal{F}^{2+\delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right),
\end{equation}
where (\ref{eLL3}) and the fact that $k\geq1$ are used.
It remains to estimate
\[
\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\mathbf{u}\cdot\nabla\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau.
\]
Note that
\[
\|\mathbf{u}\cdot\nabla\omega\|_{H^{2}} \lesssim \|\mathbf{u}\cdot\nabla\omega\|_{W^{3,1}}
\lesssim \|u_{1}\partial_{1}\omega\|_{W^{3,1}} + \|u_{2}\omega\|_{W^{4,1}}
\]
\[
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\omega\|_{L^{2}}^{\frac{1}{2}}\|\partial_{1}\omega\|_{H^{6}}^{\frac{1}{2}} + \|u_{1}\|_{L^{2}}^{\frac{1}{2}}\|u_{1}\|_{H^{6}}^{\frac{1}{2}}\|\partial_{1}\omega\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\omega\|_{L^{2}}^{\frac{1}{2}}\|\omega\|_{H^{8}}^{\frac{1}{2}} + \|u_{2}\|_{L^{2}}^{\frac{1}{2}}\|u_{2}\|_{H^{8}}^{\frac{1}{2}}\|\omega\|_{L^{2}}.
\]
Then
\[
\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\mathbf{u}\cdot\nabla\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau
\]
\begin{equation}\label{JJJ2}
\lesssim
\int_{0}^{t}e^{-\nu(t-\tau)}\left(\langle \tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}} + \langle \tau\rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}\right){\rm d} \tau\mathcal{F}^{2}(t)
\lesssim \langle t\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}\mathcal{F}^{2}(t).
\end{equation}
From (\ref{JJJ1}) and (\ref{JJJ2}), we find
\begin{equation}\label{ds}
\|\omega(t)\|_{L^{\infty}} \lesssim \|\widehat{\omega}(t)\|_{\widehat{L}^{1}}
\lesssim
\langle t\rangle^{-1}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) +\mathcal{F}^{2+\delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right).
\end{equation}
According to (\ref{vel1})-(\ref{vel2}) and (\ref{ds}) together with the fact that $k\geq1$,
\[
\|\mathbf{u}(t)\|_{L^{\infty}}
\lesssim \left\|\frac{1}{(\xi^{2}+\pi^2k^{2})^{\frac{1}{2}}}\widehat{\omega}(t)\right\|_{\widehat{L}^{1}}
\lesssim \|\widehat{\omega}(t)\|_{\widehat{L}^{1}}
\]
\[
\lesssim
\langle t\rangle^{-1}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) +\mathcal{F}^{2+\delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right),
\]
\[
\|\nabla\mathbf{u}(t)\|_{L^{\infty}}\lesssim \|\widehat{\omega}(t)\|_{\widehat{L}^{1}}
\lesssim
\langle t\rangle^{-1}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) +\mathcal{F}^{2+\delta}(t) +
\mathcal{F}^{\frac{5}{2}}(t)\right).
\]
Thus, we finish the proof of Lemma \ref{EL4}.
\hfill$\square$
\begin{Lemma}\label{EL5}
Let $m>32$. Then for $0<\delta <\frac{1}{5}$,
\[
\mathcal{F}_{3}(t) \lesssim \mathcal{F}_{0} + \mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)
+\mathcal{F}^{2+\delta}(t).
\]
\end{Lemma}
{\bf Proof.}
One deduces from Lemmas \ref{AL1}-\ref{AL2} that
\[
\|\theta(t)\|_{H^{4}} = \|\widehat{\theta}(t)\|_{\widehat{H}^{4}} = \|\left(1+ (\xi^2 + \pi^2k^2)\right)^{2}\widehat{\theta}(t)\|_{\widehat{L}^{2}}
\]
\[
\lesssim
\langle t\rangle^{-\frac{1}{4}}\|\omega_{0}\|_{W^{5,1}} + \langle t\rangle^{-\frac{1}{4}}\|\theta_{0}\|_{W^{8,1}} +
\langle t\rangle^{-\frac{1}{4}}\|\theta_{0}\|_{H^{m}}
\|\omega_{0}\|_{H^{m}}
\]
\[
+ \int_{0}^{t}\langle t-\tau \rangle^{-\frac{1}{4}}\left(\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{6,1}}
+ \|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{6,1}}\right){\rm d} \tau
\]
\[
+\int_{0}^{t}\langle t-\tau \rangle^{-\frac{1}{4}}\left( \|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{4,1}}
+ \|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{8,1}}\right){\rm d} \tau,
\]
\[
\|\partial_{1}\theta(t)\|_{H^{2}} = \|\widehat{\partial_{1}\theta}(t)\|_{\widehat{H}^{2}} \lesssim
\langle t\rangle^{-\frac{3}{4}}\|\omega_{0}\|_{W^{5,1}} + \langle t\rangle^{-\frac{3}{4}}\|\theta_{0}\|_{W^{8,1}} +
\langle t\rangle^{-\frac{3}{4}}\|\theta_{0}\|_{H^{m}}
\|\omega_{0}\|_{H^{m}}
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-\frac{3}{4}}\left(\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{6,1}}
+ \|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{6,1}}\right){\rm d} \tau
\]
\[
+ \int_{0}^{t}\langle t-\tau\rangle^{-\frac{3}{4}}\left(\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{4,1}}
+ \|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{8,1}}\right){\rm d} \tau.
\]
Similarly to the $L^\infty$-estimate of $\partial_1 \theta$ in Lemma \ref{EL4}, we obtain, for $0<\delta <\frac{2}{5}$,
\begin{equation}\label{xb2}
\|\theta(t)\|_{H^{4}} = \|\widehat{\theta}(t)\|_{\widehat{H}^{4}}\lesssim
\langle t\rangle^{-\frac{1}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2}+\mathcal{F}^{2}(t) +\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right),
\end{equation}
\begin{equation}\label{xb1}
\|\partial_{1}\theta(t)\|_{H^{2}}= \|\widehat{\partial_{1}\theta}(t)\|_{\widehat{H}^{2}} \lesssim
\langle t\rangle^{-\frac{3}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)
+\mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Moreover,
\[
\|\partial_{11}\theta(t)\|_{L^{2}} =
\|\widehat{\partial_{11}\theta}(t)\|_{\widehat{L}^{2}} \lesssim
\langle t\rangle^{-\frac{5}{4}}\|\omega_{0}\|_{W^{5,1}} + \langle t\rangle^{-\frac{5}{4}}\|\theta_{0}\|_{W^{8,1}} +
\langle t\rangle^{-\frac{5}{4}}\|\theta_{0}\|_{H^{m}}
\|\omega_{0}\|_{H^{m}}
\]
\[
+ O_{1} + O_{2} + O_{3} + O_{4},
\]
where
\[
O_{1} = \int_{0}^{t}\langle t-\tau \rangle^{-\frac{5}{4}}\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{6,1}}{\rm d} \tau,\,
O_{2} = \int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{6,1}}{\rm d} \tau,
\]
\[
O_{3} = \int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{4,1}}{\rm d} \tau,\,
O_{4} = \int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{8,1}}{\rm d} \tau.
\]
Note that
\[
\|\partial_{t}\mathbf{u}\cdot\nabla\theta\|_{W^{6,1}}
\lesssim \|\partial_{t}\mathbf{u}\|_{L^{2}}\|\nabla\theta\|_{H^{6}} + \|\partial_{t}\mathbf{u}\|_{H^{6}}\|\nabla\theta\|_{L^{2}}
\]
\[
\lesssim \|\partial_{t}\mathbf{u}\|_{L^{2}}\|\nabla\theta\|_{H^{12}}^{\frac{1}{2}}\|\nabla\theta\|_{L^{2}}^{\frac{1}{2}} + \|\partial_{t}\mathbf{u}\|_{H^{\frac{6}{\delta}}}^{\delta}\|\partial_{t}\mathbf{u}\|_{L^{2}}^{1-\delta}\|\nabla\theta\|_{L^{2}}.
\]
Thus
\[
O_{1}
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\langle
\tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}{\rm d} \tau\mathcal{F}_{4}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{3}^{\frac{1}{2}}(t)
\]
\[
+\int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}} \langle \tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+\delta\epsilon}
{\rm d} \tau\mathcal{F}_{4}^{1-\delta}(t)\mathcal{F}_{1}^{\delta}(t)\mathcal{F}_{3}(t)
\]
\begin{equation}\label{DJD1}
+\int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}
\langle\tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+2\delta\epsilon} {\rm d} \tau\mathcal{F}_{4}^{1-\delta}(t)\mathcal{F}_{1}^{2\delta}(t)\mathcal{F}_{3}(t)
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}^{2}(t) + \mathcal{F}^{2+\delta}(t)\right),
\end{equation}
where $0<\delta < \frac{1}{5}$ and $\frac{5\delta}{4}+2\delta\epsilon < \frac{1}{4}$.
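The two conditions on $\delta$ are compatible: the constraint $\frac{5\delta}{4}+2\delta\epsilon<\frac{1}{4}$ is equivalent to
\[
\delta\left(\frac{5}{4}+2\epsilon\right)<\frac{1}{4},
\qquad\text{i.e.}\qquad \delta<\frac{1}{5+8\epsilon},
\]
so once $\epsilon>0$ is fixed sufficiently small, every $\delta\in\left(0,\frac{1}{5+8\epsilon}\right)\subset\left(0,\frac{1}{5}\right)$ is admissible. In particular $-\frac{3}{2}+\frac{5\delta}{4}+2\delta\epsilon<-\frac{5}{4}$, so the $\tau$-integrals above converge with the stated rate.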
Similarly,
\[
O_{2}
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\left(\langle \tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}} +\langle \tau\rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}\right){\rm d} \tau
\mathcal{F}_{3}(t)\mathcal{F}_{1}(t)\mathcal{F}_{4}^{\frac{1}{2}}(t)
\]
\begin{equation}\label{DJD2}
+ \int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\langle \tau\rangle^{-\frac{11}{8}+\epsilon}{\rm d} \tau\mathcal{F}_{4}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{3}^{\frac{1}{2}}(t)
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)\right),
\end{equation}
and
\begin{equation}\label{DJD3}
O_{3}
\lesssim
\int_{0}^{t}\langle t-\tau\rangle^{-\frac{5}{4}}\left(\langle \tau \rangle^{-\frac{11}{8}+\frac{\epsilon}{2}} +
\langle \tau \rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}\right){\rm d} \tau\mathcal{F}^{2}(t)
\lesssim \langle t \rangle^{-\frac{5}{4}}\mathcal{F}^{2}(t).
\end{equation}
Note that
\[
\|\mathbf{u}\cdot\nabla\theta\|_{W^{8,1}}\lesssim \|u_{1}\partial_{1}\theta\|_{W^{8,1}} + \|u_{2}\partial_{2}\theta\|_{W^{8,1}}
\]
\[
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\theta\|_{H^{8}} + \|u_{1}\|_{H^{8}}\|\partial_{1}\theta\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\partial_{2}\theta\|_{H^{8}} + \|u_{2}\|_{H^{8}}\|\partial_{2}\theta\|_{L^{2}}
\]
\[
\lesssim \|u_{1}\|_{L^{2}}\|\partial_{1}\theta\|_{H^{\frac{6+2\delta}{\delta}}}^{\delta}\|\partial_{1}\theta\|_{H^{2}}^{1-\delta} + \|u_{1}\|_{H^{\frac{6+2\delta}{\delta}}}^{\delta}\|u_{1}\|_{H^{2}}^{1-\delta}\|\partial_{1}\theta\|_{L^{2}}
\]
\[
+ \|u_{2}\|_{L^{2}}\|\partial_{2}\theta\|_{H^{8}}^{\frac{1}{2}}\|\partial_{2}\theta\|_{L^{2}}^{\frac{1}{2}} + \|u_{2}\|_{H^{\frac{6+2\delta}{\delta}}}^{\delta}\|u_{2}\|_{H^{2}}^{1-\delta}\|\partial_{2}\theta\|_{L^{2}}.
\]
Taking $\delta$ such that $0<\delta < \frac{1}{5}$ and $\frac{5\delta}{4}+2\delta\epsilon < \frac{1}{4}$, we obtain for $m>32$,
\[
O_{4}
\lesssim
\int_{0}^{t}\langle t-\tau\rangle ^{-\frac{5}{4}}\langle \tau\rangle^{-\frac{3}{2}+\frac{3\delta}{4}+\delta\epsilon}{\rm d} \tau\mathcal{F}_{1}^{\delta}(t)\mathcal{F}_{3}^{2-\delta}(t)
\]
\[
+
\int_{0}^{t}\langle t-\tau\rangle ^{-\frac{5}{4}}\langle \tau\rangle^{-\frac{11}{8}+\frac{\epsilon}{2}}{\rm d} \tau\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{3}^{\frac{3}{2}}(t)
+
\int_{0}^{t}\langle t-\tau\rangle ^{-\frac{5}{4}}\langle \tau\rangle^{-\frac{3}{2}+\frac{5\delta}{4}+\delta\epsilon}{\rm d} \tau\mathcal{F}_{1}^{\delta}(t)\mathcal{F}_{3}^{2-\delta}(t)
\]
\begin{equation}\label{DJD4}
\lesssim \langle t\rangle^{-\frac{5}{4}}\mathcal{F}^{2}(t).
\end{equation}
Summing up (\ref{DJD1})-(\ref{DJD4}) yields
\begin{equation}\label{11theta}
\|\partial_{11}\theta(t)\|_{L^{2}}=
\|\widehat{\partial_{11}\theta}(t)\|_{\widehat{L}^{2}}\lesssim
\langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) +\mathcal{F}^{\frac{5}{2}}(t)+\mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Next, we turn to $H^2$-estimate of $\omega$. According to (\ref{Eb6}), (\ref{JJJ2}) and (\ref{xb1}),
\[
\|\omega(t)\|_{H^{2}} = \|\widehat{\omega}(t)\|_{\widehat{H}^{2}}
\]
\[
\lesssim \left\|e^{-\nu\left(\xi^{2}+ \pi^2 k ^{2}\right) t}\widehat{\omega}_{0}\right\|_{\widehat{H}^{2}} +
\int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2k ^{2}\right)(t-\tau)}\widehat{\mathbf{u}\cdot\nabla\omega}(\tau)\right\|_{\widehat{H}^{2}} {\rm d} \tau
\]
\[
+
\int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2k ^{2}\right)(t-\tau)}\widehat{\partial_{1}\theta}(\tau)\right\|_{\widehat{H}^{2}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|\widehat{\omega}_{0}\|_{\widehat{H}^{2}} +
\int_{0}^{t}e^{-\nu(t-\tau)}\|\widehat{\mathbf{u}\cdot\nabla\omega}(\tau)\|_{\widehat{H}^{2}}{\rm d} \tau +
\int_{0}^{t}e^{-\nu(t-\tau)}\langle\tau\rangle^{-\frac{3}{4}}\langle\tau\rangle^{\frac{3}{4}}\|\widehat{\partial_{1}\theta}(\tau)\|_{\widehat{H}^{2}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|\omega_{0}\|_{H^{2}} +
\langle t\rangle^{-\frac{3}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)+\mathcal{F}^{2+\delta}(t)\right)
\]
\[
+\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\mathbf{u}\cdot\nabla\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau
\]
\begin{equation}\label{WWQ1}
\lesssim
\langle t\rangle^{-\frac{3}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Similarly, we use (\ref{11theta}) to obtain
\begin{equation}\label{WWQ3}
\|\partial_{1}\omega(t)\|_{L^{2}}=\|\widehat{\partial_{1}\omega}(t)\|_{\widehat{L}^{2}}
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)+
\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Furthermore, applying (\ref{vel1})-(\ref{vel2}) and (\ref{WWQ1})-(\ref{WWQ3}) gives
\[
\|u_{1}(t)\|_{H^{3}} = \left\|\frac{k\pi}{\xi^2+ \pi^2k^2}\widehat{\omega}(t)\right\|_{\widehat{H}^{3}}
\lesssim \|\widehat{\omega}(t)\|_{\widehat{H}^{2}}
\]
\[
\lesssim
\langle t\rangle^{-\frac{3}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right),
\]
\[
\|u_{2}(t)\|_{H^{2}} = \left\|\frac{-i \xi}{\xi^2+ \pi^2k^2}\widehat{\omega}(t)\right\|_{\widehat{H}^{2}} \lesssim \|\widehat{\partial_{1}\omega}(t)\|_{\widehat{L}^{2}}
\]
\[
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right),
\]
\[
\|\partial_{1}u_{1}(t)\|_{H^{1}} \lesssim \|\widehat{\partial_{1}\omega}(t)\|_{\widehat{L}^{2}}
\]
\[
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right).
\]
This completes the proof of Lemma \ref{EL5}.
\hfill$\square$
\begin{Lemma}\label{EL6}
Let $m>32$. Then for $0<\delta <\frac{1}{5}$,
\[
\mathcal{F}_{4}(t) \lesssim \mathcal{F}_{0} + \mathcal{F}_{0}^{2} + \mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t).
\]
\end{Lemma}
{\bf Proof.}
Using the fact that $\mathcal{L}_{2}(0) = 0$, we find from (\ref{Eb7}) that
\[
\partial_{t}\theta(x,y,t)= \partial_{t}\mathcal{L}_{1}(t)\theta_{0} + \partial_{t}\mathcal{L}_{2}(t)\left(\frac{1}{2}(-\Delta)\theta_{0} +\partial_{t}\theta(x,y,0)\right)
\]
\begin{equation}\label{Eb41}
+ \int_{0}^{t}\partial_{\tau}\mathcal{L}_{2}(t-\tau)f_{2}(x,y,\tau){\rm d} \tau.
\end{equation}
From Lemma \ref{AL2}, one deduces that
\[
\|\partial_{t}\theta(t)\|_{H^{3}} =
\|\widehat{\partial_{t}\theta}(t)\|_{\widehat{H}^{3}}
\]
\[
\lesssim
\langle t\rangle^{-\frac{5}{4}}\|\omega_{0}\|_{W^{5,1}} +\langle t\rangle^{-\frac{5}{4}}\|\theta_{0}\|_{W^{8,1}} +
\langle t\rangle^{-\frac{5}{4}}\|\theta_{0}\|_{H^{m}}\|\omega_{0}\|_{H^{m}}
\]
\[
+ \int_{0}^{t}\langle t-\tau \rangle^{-\frac{5}{4}}\left(\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{6,1}}+
\|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\theta\right)(\tau)\|_{W^{6,1}}\right){\rm d} \tau
\]
\[
+\int_{0}^{t}\langle
t-\tau\rangle^{-\frac{5}{4}}\left(\|\partial_{1}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{W^{4,1}} +
\|\left(\mathbf{u}\cdot\nabla\theta\right)(\tau)\|_{W^{8,1}}\right){\rm d} \tau.
\]
Similarly to the $L^2$-estimate of $\partial_{11}\theta$ in Lemma \ref{EL5}, we find that, for $0< \delta <\frac{1}{5}$,
\begin{equation}\label{xb7}
\|\partial_{t}\theta(t)\|_{H^{3}} =
\|\widehat{\partial_{t}\theta}(t)\|_{\widehat{H}^{3}} \lesssim
\langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)
+\mathcal{F}^{\frac{5}{2}}(t)
+\mathcal{F}^{2+\delta}(t)\right).
\end{equation}
Taking the time derivative of $(\ref{P4})_{1}$ and using Duhamel's principle,
\begin{equation}\label{vort1}
\partial_{t}\omega(x,y,t)= e^{\nu t\Delta}\partial_{t}\omega(x,y,0) + \int_{0}^{t}e^{\nu(t-\tau)\Delta}\left(\partial_{1}\partial_{\tau}\theta - \partial_{\tau}(\mathbf{u}\cdot \nabla \omega)\right)(x,y,\tau){\rm d} \tau.
\end{equation}
Note that
\[
\partial_{t}\omega(x,y,0)= \nu\Delta\omega_{0} -\mathbf{u}_{0}\cdot \nabla \omega_{0} + \partial_{1}\theta_{0}.
\]
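Consequently, since $H^{2}$ is an algebra in two space dimensions,
\[
\|\partial_{t}\omega(\cdot,0)\|_{H^{2}}
\lesssim \nu\|\omega_{0}\|_{H^{4}} + \|\mathbf{u}_{0}\|_{H^{3}}\|\omega_{0}\|_{H^{3}} + \|\partial_{1}\theta_{0}\|_{H^{2}},
\]
which accounts for the initial-data terms appearing in the estimate below.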
Moreover,
\[
\|\partial_{t}\omega(t)\|_{H^{2}} = \|\widehat{\partial_{t}\omega}(t)\|_{\widehat{H}^{2}}
\]
\[
\lesssim \left\|e^{-\nu\left(\xi^{2}+ \pi^2k^{2}\right) t}\widehat{\partial_{t}\omega(x,y,0)}\right\|_{\widehat{H}^{2}} +
\int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2k^{2}\right)(t-\tau)}\widehat{\partial_{\tau}(\mathbf{u}\cdot\nabla\omega)}(\tau)\right\|_{\widehat{H}^{2}} {\rm d} \tau\]
\[
+
\int_{0}^{t}\left\|e^{-\nu\left(\xi^{2}+ \pi^2
k^{2}\right)(t-\tau)}\widehat{\partial_{1}\partial_{\tau}\theta}(\tau)\right\|_{\widehat{H}^{2}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\|\widehat{\partial_{t}\omega(x,y,0)}\|_{\widehat{H}^{2}} +
\int_{0}^{t}e^{-\nu (t-\tau)}\|\widehat{\partial_{\tau}(\mathbf{u}\cdot\nabla\omega)}(\tau)\|_{\widehat{H}^{2}}{\rm d} \tau
\]
\[
+
\int_{0}^{t}e^{-\nu(t-\tau)}\langle\tau\rangle^{-\frac{5}{4}}\langle\tau\rangle^{\frac{5}{4}}\|\widehat{\partial_{1}\partial_{\tau}\theta}(\tau)\|_{\widehat{H}^{2}}{\rm d} \tau
\]
\[
\lesssim e^{-\nu t}\left(\|\omega_{0}\|_{H^{4}} + \|\partial_{1}\theta_{0}\|_{H^{2}}+ \|\omega_{0}\|_{H^{3}}\|\mathbf{u}_{0}\|_{H^{3}}\right)
+\int_{0}^{t}e^{-\nu(t-\tau)}\|\partial_{\tau}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{H^{2}}{\rm d} \tau
\]
\begin{equation}\label{DDD1}
+
\langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +\mathcal{F}_{0}^{2} +\mathcal{F}^{2}(t) + \mathcal{F}^{\frac{5}{2}}(t)
+\mathcal{F}^{2+\delta}(t)\right),
\end{equation}
where (\ref{xb7}) is used in the last inequality.
Now we need to estimate $\int_{0}^{t}e^{-\nu(t-\tau)}\|\partial_{\tau}(\mathbf{u}\cdot\nabla\omega)(\tau)\|_{H^{2}}{\rm d} \tau$. Obviously,
\[
\|\partial_{t}(\mathbf{u}\cdot\nabla\omega)\|_{H^{2}}\lesssim \|\partial_{t}\mathbf{u}\cdot\nabla\omega\|_{H^{2}} + \|\mathbf{u}\cdot\nabla\partial_{t}\omega\|_{H^{2}}.
\]
By virtue of the interpolation inequality (\ref{BLT1}) in Lemma \ref{PL1},
\[
\|\partial_{t}\mathbf{u}\cdot\nabla\omega\|_{H^{2}}
\lesssim \|\partial_{t}\mathbf{u}\|_{H^{2}}\|\nabla\omega\|_{H^{2}}
\lesssim \|\partial_{t}\mathbf{u}\|_{H^{2}}\|\omega\|_{H^{6}}^{\frac{1}{2}}\|\omega\|_{L^{2}}^{\frac{1}{2}}.
\]
Then
\[
\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\partial_{\tau}\mathbf{u}\cdot\nabla\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau
\]
\begin{equation}\label{DDD2}
\lesssim
\int_{0}^{t}
e^{-\nu(t-\tau)}\langle \tau\rangle^{-\frac{13}{8}+\frac{\epsilon}{2}}
{\rm d} \tau\mathcal{F}_{4}(t)\mathcal{F}_{1}^{\frac{1}{2}}(t)\mathcal{F}_{3}^{\frac{1}{2}}(t)
\lesssim \langle t\rangle^{-\frac{5}{4}} \left(\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)\right).
\end{equation}
Similarly,
\begin{equation}\label{DDD3}
\int_{0}^{t}e^{-\nu(t-\tau)}\|\left(\mathbf{u}\cdot\nabla\partial_{\tau}\omega\right)(\tau)\|_{H^{2}}{\rm d} \tau
\lesssim \langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}^{2}(t)+\mathcal{F}^{\frac{5}{2}}(t)\right).
\end{equation}
Hence, we deduce from (\ref{DDD1})-(\ref{DDD3}) that
\begin{equation}\label{DWd1}
\|\partial_{t}\omega(t)\|_{H^{2}} = \|\widehat{\partial_{t}\omega}(t)\|_{\widehat{H}^{2}} \lesssim
\langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)
+ \mathcal{F}^{\frac{5}{2}}(t)
+ \mathcal{F}^{2+\delta}(t)\right).
\end{equation}
By using (\ref{vel1})-(\ref{vel2}) and (\ref{DWd1}),
\[
\|\partial_{t}\mathbf{u}(t)\|_{H^{3}} \lesssim \left\|\frac{1}{(\xi^{2}+ \pi^2k^{2})^{\frac{1}{2}}}\widehat{\partial_{t}\omega}(t)\right\|_{\widehat{H}^{3}}
\lesssim \|\widehat{\partial_{t}\omega}(t)\|_{\widehat{H}^{2}}
\]
\[
\lesssim
\langle t\rangle^{-\frac{5}{4}}\left(\mathcal{F}_{0} +
\mathcal{F}_{0}^{2} +
\mathcal{F}^{2}(t)
+ \mathcal{F}^{\frac{5}{2}}(t)
+ \mathcal{F}^{2+\delta}(t)\right).
\]
This completes the proof of Lemma \ref{EL6}.
\hfill$\square$
\subsection{Proof of Theorem \ref{Mrt1}}
According to Lemmas \ref{EL1}-\ref{EL6}, there exists a constant $C_{0}\ge 1$ depending only on $\nu$ and $m$ such that
\begin{equation}\label{Ep36}
\mathcal{F}(t) \leq C_{0}\left(\mathcal{F}_{0}+\mathcal{F}_{0}^{2}\right)+ C_{0}\left(\mathcal{F}^{2}(t) + \left(\mathcal{F}^{3}(t) + \mathcal{F}^{4}(t)\right)^{\frac{1}{2}}+ \mathcal{F}^{\frac{5}{2}}(t)+ \mathcal{F}^{2+\delta}(t)\right)
\end{equation}
for $0<\delta<\frac{1}{5}$.
Assume that
\begin{equation}\label{assin1}
\mathcal{F}_{0} = \|\theta_{0}\|_{W^{8,1}} + \|\theta_{0}\|_{H^{m+1}} +\|\omega_{0}\|_{W^{5,1}} + \|\omega_{0}\|_{H^{m}}\leq \epsilon_{0}
\end{equation}
with $\epsilon_{0}\in(0,1)$ to be determined later.
Then by the definition of $\mathcal{F}_i(t)$, $i=1,2,3,4$, there exists $C_1\ge 2C_0$ such that
\begin{equation}\label{DDLL}
\mathcal{F}(0)= \mathcal{F}_{1}(0)+\mathcal{F}_{2}(0)+\mathcal{F}_{3}(0)+\mathcal{F}_{4}(0)
\leq C_1\mathcal{F}_{0}\leq C_{1}\epsilon_{0}.
\end{equation}
From Proposition \ref{tp1}, there exists $T^{*}\in (0,\infty]$ such that
\[
(\omega, \theta) \in C([0,T^*); H^{m})\times C([0,T^*);H^{m+1}).
\]
To complete the proof of Theorem \ref{Mrt1}, we only need to show $T^{*}=\infty$. To this end, we suppose, for contradiction, that $T^*<\infty$, so that
\begin{equation}\label{T*}
\limsup\limits_{t\to T^*}\mathcal{F}_1(t)=\infty.
\end{equation}
Define
\begin{equation}\label{Ttmax1}
\tilde{T} \triangleq \max \{t\in [0,T^{*}):\mathcal{F}(t)\leq 4C_{1} \epsilon_{0}\} < T^*.
\end{equation}
By choosing $\epsilon_{0}$ such that $4C_{1} \epsilon_{0}\leq 1$, we deduce from (\ref{Ep36}) that, for $t\in[0,\tilde{T}]$,
\begin{equation}\label{DDL2}
\mathcal{F}(t)\leq 2C_{0}\mathcal{F}_{0}+ 5C_{0}\mathcal{F}^{\frac{3}{2}}(t).
\end{equation}
By taking $\epsilon_{0}$ so small that $40 C_0 C_{1}^{\frac{1}{2}}\epsilon_{0}^{\frac{1}{2}}\leq 1$, we obtain from (\ref{DDLL}) that
\[
\mathcal{F}(t)\leq C_{1}\epsilon_{0}+ 40 C_0 C_{1}^{\frac{3}{2}}\epsilon_{0}^{\frac{3}{2}}\leq 2C_{1} \epsilon_{0},\,\text{ for all}~t \in [0,\tilde{T}),
\]
which contradicts (\ref{T*}) and the maximality of $\tilde{T}$ in (\ref{Ttmax1}). This in turn implies that $T^{*} = \infty$.
\section{Conclusions}
This paper has shown the asymptotic behavior of solutions to the Boussinesq equations (\ref{intp1}) near the specific stationary solution (\ref{steady0}). For this purpose, we formulate the momentum equation in terms of the vorticity, which forces us to choose the slip boundary condition for the velocity. Also, in order to use spectral analysis, we choose functional spaces admitting high-order compatibility conditions on the boundary. Since the anisotropic structure of the linearized equations is already reflected in the decay rates, we do not pursue this anisotropy in the working functional spaces or in the initial data. Such an approach seems applicable to the three-dimensional setting and will be presented in our subsequent paper \cite{LD2}. However, many problems remain open. For example, we do not know how to handle the case of vanishing Dirichlet boundary conditions for the velocity. It is also interesting to show asymptotic stability in lower-order spaces, even without an exact rate of convergence. Finally, it seems a challenging problem to show stability of the general steady solutions $\vartheta_s(y)$ with $\vartheta_s'(y)>0$. We hope to address these problems in future work.
\centerline{\bf Acknowledgement}
The research is supported by NSFC under Grants No. 12071211 and No. 11771206.
\section{Introduction}
Gravity in higher dimensions has attracted considerable attention with the development of string theory. Concerning the effect of string theory on gravitational physics, one may construct a low-energy effective action which includes both the Einstein-Hilbert Lagrangian (as the first-order term) and higher curvature terms. However, this approach may lead to fourth-order field equations and to ghosts. This problem has been solved by a particular higher curvature gravity theory called Lovelock gravity~\cite{Lovelock}. The field equations in this gravity theory are only second order, and the quantization of Lovelock gravity is free of ghosts~\cite{Boulware}. In this context, it is of interest to investigate both the black hole solutions and their thermodynamics in Lovelock gravity~\cite{Dehghani1}-\cite{Amirabi}. Moreover, it is natural to consider nonlinear terms on the matter side of the action while accepting nonlinear terms on the gravity side~\cite{Dehghani1}. Motivated by this, Ref.~\cite{Dehghani1} presented topological black hole solutions in Lovelock-Born-Infeld gravity. Both the thermodynamics of asymptotically flat black holes for $k=1$ and the thermodynamics of asymptotically AdS rotating black branes with flat horizon were investigated there in detail. However, concerning the charged topological AdS black holes in Lovelock-Born-Infeld gravity, only the temperature was given in Ref.~\cite{Dehghani1}. Ref.~\cite{Decheng2} further studied their entropy and specific heat at constant charge; however, the expression for the entropy seems incomplete, as the factor $k$ is missing, and the thermodynamics in the extended phase space remains to be explored. Probing this issue is important because it is believed that the physics of black holes in higher dimensions is essential for understanding a full theory of quantum gravity.
As is well known, phase transition is a fascinating phenomenon in classical thermodynamics. Over the past decades, phase transitions of black holes have attracted more and more attention. Pioneering research on phase transitions of AdS black holes can be traced back to the discovery of the famous Hawking-Page phase transition between the Schwarzschild AdS black hole and thermal AdS space \cite{Hawking2}. Recently, a revolution in this field has been led by $P-V$ criticality research~\cite{Kubiznak}-\cite{Decheng} in the extended phase space. Kubiz\v{n}\'{a}k et al.~\cite{Kubiznak} perfectly completed the analogy between charged AdS black holes and the liquid-gas system first observed by Chamblin et al.~\cite{Chamblin1,Chamblin2}. The approach of treating the cosmological constant as thermodynamic pressure and its conjugate quantity as thermodynamic volume has become essential with the increasing attention paid to the variation of the cosmological constant in the first law of black hole thermodynamics~\cite{Caldarelli}-\cite{Lu}.
Here, we would like to investigate the thermodynamics and phase transitions of charged topological AdS black holes in Lovelock-Born-Infeld gravity in the extended phase space. Some related efforts have been made recently. $P-V$ criticality of both four-dimensional~\cite{Gunasekaran} and higher-dimensional~\cite{Decheng} Born-Infeld AdS black holes has been investigated. A new parameter called Born-Infeld vacuum polarization was defined to be conjugate to the Born-Infeld parameter~\cite{Gunasekaran}, and it was argued that this quantity is required for the consistency of both the first law of thermodynamics and the Smarr relation. Moreover, Cai et al.~\cite{Cai98} studied the $P-V$ criticality of Gauss-Bonnet AdS black holes. It was found that no $P-V$ criticality can be observed for Ricci flat and hyperbolic Gauss-Bonnet black holes. However, for the spherical case, $P-V$ criticality can be observed even when the charge is absent, implying that the charge may not be an indispensable factor for the existence of $P-V$ criticality. Such an interesting result motivates us to probe third order Lovelock gravity further, to explore whether this is a peculiar property of the higher derivative terms of curvature. So we mainly investigate their effects on the $P-V$ criticality. Moreover, we will probe the combined effects of the higher derivative terms of curvature and the nonlinear electrodynamics.
In Sec. \ref{Sec2}, the solutions of charged topological AdS black holes in Lovelock-Born-Infeld gravity will be briefly reviewed and their thermodynamics will be further investigated. In Sec. \ref{Sec3}, a detailed study will be carried out in the extended phase space for the limit case $\beta\rightarrow\infty$ so that we can concentrate on the effects of third order Lovelock gravity. In Sec. \ref{Sec4}, the effects of nonlinear electrodynamics will also be included. In the end, a brief conclusion will be drawn in Sec. \ref{Sec5}.
\section{Thermodynamics of charged topological black holes in Lovelock-Born-Infeld gravity}
\label{Sec2}
The action of third order Lovelock gravity with nonlinear Born-Infeld electromagnetic field is~\cite{Dehghani1}
\begin{equation}
I_{G}=\frac{1}{16\pi}\int d^{n+1}x\sqrt{-g}\big(-2\Lambda+\mathcal{L}_1+\alpha_2\mathcal{L}_2+\alpha_3\mathcal{L}_3+L(F)\big),\label{1}
\end{equation}
where
\begin{eqnarray}
\mathcal{L}_1&=&R,\label{2}
\\
\mathcal{L}_2&=&R_{\mu\nu\gamma\delta}R^{\mu\nu\gamma\delta}-4R_{\mu\nu}R^{\mu\nu}+R^2, \label{3}
\\
\mathcal{L}_3&=&2R^{\mu\nu\sigma\kappa}R_{\sigma\kappa\rho\tau}R^{\rho\tau}_{\;\;\;\;\mu\nu}+8R^{\mu\nu}_{\;\;\;\;\sigma\rho}R^{\sigma\kappa}_{\;\;\;\;\nu\tau}R^{\rho\tau}_{\;\;\;\;\mu\kappa}+24R^{\mu\nu\sigma\kappa}R_{\sigma\kappa\nu\rho}R^{\rho}_{\;\;\mu} \nonumber
\\
&\,&+3RR^{\mu\nu\sigma\kappa}R_{\sigma\kappa\mu\nu}+24R^{\mu\nu\sigma\kappa}R_{\sigma\mu}R_{\kappa\nu}+16R^{\mu\nu}R_{\nu\sigma}R^{\sigma}_{\;\;\mu}-12RR^{\mu\nu}R_{\mu\nu}+R^3,\label{4}
\\
L(F)&=&4\beta^2\left(1-\sqrt{1+\frac{F^2}{2\beta^2}}\right). \label{5}
\end{eqnarray}
In the above action, $\beta$, $\alpha_2$ and $\alpha_3$ are the Born-Infeld parameter and the second and third order Lovelock coefficients, respectively, while $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$ and $L(F)$ are the Einstein-Hilbert, Gauss-Bonnet, third order Lovelock and Born-Infeld Lagrangians, respectively. Considering the case
\begin{eqnarray}
\alpha_2&=&\frac{\alpha}{(n-2)(n-3)}, \label{6}
\\
\alpha_3&=&\frac{\alpha^2}{72{n-2\choose 4}},\label{7}
\end{eqnarray}
Ref.~\cite{Dehghani1} derived the $(n+1)$-dimensional static solution as
\begin{equation}
ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\Omega^2, \label{8}
\end{equation}
where
\begin{eqnarray}
f(r)&=&k+\frac{r^2}{\alpha}(1-g(r)^{1/3}),\label{9}\\
g(r)&=&1+\frac{3\alpha m}{r^n}-\frac{12\alpha \beta^2}{n(n-1)}\Big[1-\sqrt{1+\eta}-\frac{\Lambda}{2\beta^2}+\frac{(n-1)\eta}{(n-2)}\digamma(\eta)\Big],\label{10}\\
d\Omega^2&=&\left\{
\begin{array}[c]{l}
d\theta_1^2+\overset{n-1}{\underset{i=2}{{\textstyle\sum}}}\overset{i-1}{\underset{j=1}{{\textstyle\prod}}}\sin^2\theta_jd\theta^2_i\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $k$=1}\\
d\theta_1^2+\sinh^2\theta_1d\theta_2^2+\sinh^2\theta_1\overset{n-1}{\underset{i=3}{{\textstyle\sum}}}\overset{i-1}{\underset{j=2}{{\textstyle\prod}}}\sin^2\theta_jd\theta^2_i\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $k$=-1}\\
\overset{n-1}{\underset{i=1}{{\textstyle\sum}}}d\phi^2_i\text{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $k$=0}
\end{array}
\right..
\label{11}
\end{eqnarray}
$d\Omega^2$ denotes the line element of the $(n-1)$-dimensional hypersurface with constant curvature $(n-1)(n-2)k$, and $\digamma(\eta)$ denotes the hypergeometric function as follows:
\begin{equation}
\digamma(\eta)=\,_2F_1\Big(\Big[\frac{1}{2},\frac{n-2}{2n-2}\Big],\Big[\frac{3n-4}{2n-2}\Big],-\eta\Big), \label{12}
\end{equation}
where
\begin{equation}
\eta=\frac{(n-1)(n-2)q^2}{2\beta^2r^{2n-2}}. \label{13}
\end{equation}
The Hawking temperature has been derived in Ref.~\cite{Dehghani1} as
\begin{equation}
T=\frac{(n-1)k[3(n-2)r_+^4+3(n-4)k\alpha r_+^2+(n-6)k^2\alpha^2]+12r_+^6\beta^2(1-\sqrt{1+\eta_+}\,)-6\Lambda r_+^6}{12\pi(n-1)r_+(r_+^2+k\alpha)^2}.\label{14}
\end{equation}
However, the Hawking temperature alone is not enough to discuss the $P-V$ criticality in the extended phase space, so we calculate the other relevant quantities below.
Solving the equation $f(r)=0$, one can obtain the parameter $m$ in terms of the horizon radius $r_+$ as
\begin{equation}
m=\frac{r_+^n}{3\alpha}\left\{-1+\frac{(r_+^2+k\alpha)^3}{r_+^6}+\frac{12\alpha\beta^2\left[1-\frac{\Lambda}{2\beta^2}-\sqrt{1+\eta}+\frac{(n-1)\digamma(\eta)\eta}{n-2}\right]}{n(n-1)}\right\}.\label{15}
\end{equation}
Then the mass of $(n+1)$-dimensional topological AdS black holes can be derived as
\begin{equation}
M=\frac{(n-1)\Sigma_k}{16\pi}m=\frac{(n-1)\Sigma_k r_+^n}{48\pi\alpha}\left\{-1+\frac{(r_+^2+k\alpha)^3}{r_+^6}+\frac{12\alpha\beta^2\left[1-\frac{\Lambda}{2\beta^2}-\sqrt{1+\eta}+\frac{(n-1)\digamma(\eta)\eta}{n-2}\right]}{n(n-1)}\right\},\label{16}
\end{equation}
where $\Sigma_k$ denotes the volume of the $(n-1)$-dimensional hypersurface mentioned above.
The entropy can be calculated as
\begin{equation}
S=\int^{r_+}_{0}\frac{1}{T}\left(\frac{\partial M}{\partial r_+}\right)dr=\frac{\Sigma_k(n-1)r_+^{n-5}}{4}\left(\frac{r_+^4}{n-1}+\frac{2kr_+^2\alpha}{n-3}+\frac{k^2\alpha^2}{n-5}\right).\label{17}
\end{equation}
Note that the above integration is carried out under the condition $n>5$; for $n\leqslant5$ the integral is divergent. So in this paper we mainly investigate the case $n=6$, which corresponds to seven-dimensional black holes. The third term of the entropy in Eq. (\ref{17}) does not appear in the expression for the entropy of Gauss-Bonnet black holes~\cite{Cai98}. Our result also extends the expression in Ref.~\cite{Decheng2}, where $k$ was missing.
In the extended phase space, one may identify the pressure of the black hole as~\cite{Kubiznak}
\begin{equation}
P=-\frac{\Lambda}{8\pi}.\label{18}
\end{equation}
The mass of the black hole should then be interpreted as the enthalpy rather than the internal energy. In this context, the Gibbs free energy can be derived through
\begin{equation}
G=H-TS=M-TS.\label{19}
\end{equation}
After a tedious calculation, we can obtain
\begin{eqnarray}
G&=&\frac{\Sigma_kr_+^{n-6}}{48\pi\alpha(r_+^2+k\alpha)^2}\Big\{(n-1)r_+^6(r_+^2+k\alpha)^2\Big[-1+\frac{(r_+^2+k\alpha)^3}{r_+^6}+\frac{12\alpha\beta^2\left(1-\frac{\Lambda}{2\beta^2}-\sqrt{1+\eta}+\frac{(n-1)\digamma(\eta)\eta}{n-2}\right)}{n(n-1)}\Big]
\nonumber
\\
&\;&-\alpha\left(\frac{r_+^4}{n-1}+\frac{2kr_+^2\alpha}{n-3}+\frac{k^2\alpha^2}{n-5}\right)\Big[(n-1)k\Big(3(n-2)r_+^4+3(n-4)k\alpha r_+^2+(n-6)k^2\alpha^2\Big)-6\Lambda r_+^6
\nonumber
\\
&\;&+12r_+^6\beta^2(1-\sqrt{1+\eta_+}\,)\Big]\Big\}
.\label{20}
\end{eqnarray}
Imitating the approach of Refs.~\cite{Gunasekaran,Cai98}, the first law of thermodynamics in the extended phase space can be rewritten as
\begin{equation}
dM=TdS+\Phi dQ+VdP+\mathcal {A}d\alpha+\mathcal {B}d\beta,\label{21}
\end{equation}
where $\mathcal {A}$ and $\mathcal {B}$ denote the quantities conjugate to the Lovelock coefficient and the Born-Infeld parameter, respectively. They can be obtained as
\begin{eqnarray}
\mathcal {A}&=&\Big(\frac{\partial M}{\partial \alpha}\Big)_{S,Q,P,\beta}=\frac{k^2(n-1)r_+^{n-6}(3r_+^2+2k\alpha)\Sigma_k}{48\pi}
-\frac{1}{2}k(n-1)r_+^{n-5}T\Big(\frac{r_+^2}{n-3}+\frac{k\alpha}{n-5}\Big)\Sigma_k,\label{22}
\\
\mathcal {B}&=&\Big(\frac{\partial M}{\partial \beta}\Big)_{S,Q,P,\alpha}=\frac{\Sigma_kr_+^{-n}}{8n\pi \beta}\Big\{2r_+^{2n}\beta^2\Big(2-\sqrt{4+\frac{2(n-1)(n-2)q^2r_+^{2-2n}}{\beta^2}}\;\Big)
\nonumber
\\
&\;&+(n-2)(n-1)q^2r_+^2\;_2F_1\Big(\Big[\frac{1}{2},\frac{n-2}{2n-2}\Big],\Big[\frac{3n-4}{2n-2}\Big],-\frac{(n-1)(n-2)q^2}{2\beta^2r^{2n-2}}\Big)\Big\}
.\label{23}
\end{eqnarray}
Comparing Eq. (\ref{22}) with the Gauss-Bonnet black holes in Ref.~\cite{Cai98}, one may find extra terms due to the third order Lovelock gravity. Note that Eq. (\ref{21}) is limited to the case of charged topological black holes in Lovelock-Born-Infeld gravity, in which the second and third order Lovelock coefficients are related via the Lovelock coefficient $\alpha$. For the general case and a nice physical interpretation of the quantity conjugate to the Lovelock coefficient, see Ref.~\cite{Kastor2}, where the Smarr relation and the first law of thermodynamics in Lovelock gravity were thoroughly investigated and it was shown that the conjugate quantity $\Psi^{(k)}$ to the Lovelock coefficient $b_k$ consists of three terms related to the mass, the entropy and the anti-symmetric Killing-Lovelock potential, respectively.
\section{$P-V$ criticality of a limit case}
\label{Sec3}
To concentrate on the effects of the third order Lovelock gravity, we would like to investigate an interesting limit case in this section and leave the issue of nonlinear electrodynamics to be further investigated in Sec.~\ref{Sec4}.
When $\beta\rightarrow\infty$, the Born-Infeld Lagrangian reduces to the Maxwell form and $\digamma(\eta)\rightarrow1$. So one can have
\begin{equation}
g(r)\rightarrow1+\frac{3\alpha m}{r^n}+\frac{6\alpha \Lambda}{n(n-1)}-\frac{3\alpha q^2}{r^{2n-2}}.\label{24}
\end{equation}
And the temperature for this limit case can be simplified as
\begin{equation}
T=\frac{(n-1)k[3(n-2)r_+^4+3(n-4)k\alpha r_+^2+(n-6)k^2\alpha^2]-6\Lambda r_+^6-3(n-2)(n-1)q^2r_+^{8-2n}}{12\pi(n-1)r_+(r_+^2+k\alpha)^2}.\label{25}
\end{equation}
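As a cross-check of the quantities in this limit, one may verify symbolically that the temperature (\ref{25}), the entropy (\ref{17}) and the mass (\ref{16}), with $m$ determined by $f(r_+)=0$ and the limit (\ref{24}) of $g(r)$, satisfy the first law $dM=T\,dS$ at fixed $q$ and $\Lambda$. A sketch with the \texttt{sympy} library (our addition, not part of the paper; the normalization $\Sigma_k=1$ is assumed):

```python
import sympy as sp

r, a = sp.symbols('r alpha', positive=True)   # r stands for the horizon radius r_+
k, q, L = sp.symbols('k q Lambda', real=True)
n = 6

# m(r_+) obtained from f(r_+) = 0 together with the Maxwell limit (24) of g(r)
m = r**n/(3*a)*((1 + k*a/r**2)**3 - 1
                - 6*a*L/(n*(n - 1)) + 3*a*q**2/r**(2*n - 2))
M = (n - 1)*m/(16*sp.pi)          # mass, Eq. (16), with Sigma_k = 1

# temperature, Eq. (25)
T = ((n - 1)*k*(3*(n - 2)*r**4 + 3*(n - 4)*k*a*r**2 + (n - 6)*k**2*a**2)
     - 6*L*r**6 - 3*(n - 2)*(n - 1)*q**2*r**(8 - 2*n)) \
    / (12*sp.pi*(n - 1)*r*(r**2 + k*a)**2)

# entropy, Eq. (17), again with Sigma_k = 1
S = (n - 1)*r**(n - 5)/4*(r**4/(n - 1) + 2*k*r**2*a/(n - 3)
                          + k**2*a**2/(n - 5))

first_law_defect = sp.simplify(sp.diff(M, r) - T*sp.diff(S, r))
print(first_law_defect)           # -> 0, i.e. dM/dr_+ = T dS/dr_+
```

The defect vanishes identically in $r_+$, $\alpha$, $k$, $q$ and $\Lambda$, confirming the mutual consistency of (\ref{16}), (\ref{17}) and (\ref{25}) in the $\beta\rightarrow\infty$ limit.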
Substituting Eq.~(\ref{18}) into Eq.~(\ref{25}), one can find the expression for $P$ as
\begin{equation}
P=\frac{n-1}{48\pi}\Big[\frac{12\pi T}{r_+}+\frac{24k\pi \alpha T}{r_+^3}+\frac{12k^2\pi\alpha^2T}{r_+^5}+\frac{3k(2-n)}{r_+^2}+\frac{3k^2\alpha(4-n)}{r_+^4}-\frac{k^3(n-6)\alpha^2}{r_+^6}+3(n-2)q^2r_+^{2-2n}\Big].\label{26}
\end{equation}
We can identify the specific volume $v$ as
\begin{equation}
v=\frac{4r_+}{n-1}.\label{27}
\end{equation}
Then Eq.~(\ref{26}) can be transformed into
\begin{equation}
P=\frac{T}{v}+\frac{32kT\alpha}{(n-1)^2v^3}+\frac{256k^2T\alpha^2}{(n-1)^4v^5}-\frac{k(n-2)}{(n-1)\pi v^2}-\frac{16k^2(n-4)\alpha}{(n-1)^3\pi v^4}-\frac{256k^3(n-6)\alpha^2}{3(n-1)^5\pi v^6}+\frac{16^{n-2}(n-2)q^2}{\pi(n-1)^{2n-3}v^{2n-2}}.\label{28}
\end{equation}
The possible critical point should satisfy the following conditions
\begin{eqnarray}
\frac{\partial P}{\partial v}&=&0,\label{29}\\
\frac{\partial^2 P}{\partial v^2}&=&0.\label{30}
\end{eqnarray}
Firstly, we would focus on the spherical case corresponding to $k=1$. The equation of state reads
\begin{equation}
P=\frac{T}{v}+\frac{32T\alpha}{(n-1)^2v^3}+\frac{256T\alpha^2}{(n-1)^4v^5}-\frac{(n-2)}{(n-1)\pi v^2}-\frac{16(n-4)\alpha}{(n-1)^3\pi v^4}-\frac{256(n-6)\alpha^2}{3(n-1)^5\pi v^6}+\frac{16^{n-2}(n-2)q^2}{\pi(n-1)^{2n-3}v^{2n-2}}.\label{31}
\end{equation}
When $q=0,n=6$, Eqs.~(\ref{29}) and (\ref{30}) can be analytically solved and the corresponding physical quantities can be obtained as
\begin{equation}
T_c=\frac{1}{\pi\sqrt{5\alpha}},\;v_c=\frac{4\sqrt{\alpha}}{\sqrt{5}},\;P_c=\frac{17}{200\pi \alpha},\;\frac{P_cv_c}{T_c}=\frac{17}{50}.\label{32}
\end{equation}
We can see clearly that the critical temperature is inversely proportional to $\sqrt{\alpha}$ while the critical specific volume is proportional to it. The critical pressure is inversely proportional to $\alpha$. However, the ratio $\frac{P_cv_c}{T_c}$ is independent of the parameter $\alpha$. Our results demonstrate again that $P-V$ criticality may exist even in the uncharged case. That may be attributed to the effect of higher derivative terms of curvature.
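The analytic critical point (\ref{32}) can be checked symbolically: substituting $T_c$ and $v_c$ into the equation of state (\ref{31}) with $q=0$, $n=6$, $k=1$ should annihilate both derivatives in (\ref{29})-(\ref{30}) and return $P_c$. A \texttt{sympy} sketch (our addition):

```python
import sympy as sp

T, v, a = sp.symbols('T v alpha', positive=True)
n, k = 6, 1

# equation of state (31) with q = 0 (the charge term drops out)
P = (T/v + 32*k*T*a/((n - 1)**2*v**3) + 256*k**2*T*a**2/((n - 1)**4*v**5)
     - k*(n - 2)/((n - 1)*sp.pi*v**2)
     - 16*k**2*(n - 4)*a/((n - 1)**3*sp.pi*v**4)
     - 256*k**3*(n - 6)*a**2/(3*(n - 1)**5*sp.pi*v**6))

# claimed critical point, Eq. (32)
crit = {T: 1/(sp.pi*sp.sqrt(5*a)), v: 4*sp.sqrt(a)/sp.sqrt(5)}

print(sp.simplify(sp.diff(P, v).subs(crit)),      # -> 0
      sp.simplify(sp.diff(P, v, 2).subs(crit)))   # -> 0
print(sp.simplify(P.subs(crit)))                  # -> 17/(200*pi*alpha)
```

Both criticality conditions are satisfied and the critical pressure $17/(200\pi\alpha)$ is recovered, in agreement with Eq. (\ref{32}).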
When $q\neq0,n=6$, one can obtain the corresponding physical quantities at the critical point, listed in Table \ref{tb1}, by solving Eqs.~(\ref{29}) and (\ref{30}) numerically. From Table \ref{tb1}, one can find that there exists only one critical point for all the cases studied, and that the physical quantities at the critical point, $T_c, v_c, P_c$, depend on both the charge and the parameter $\alpha$, which is related to the second and third order Lovelock coefficients. As $\alpha$ or $q$ increases, both $T_c$ and $P_c$ decrease while $v_c$ increases. However, the ratio $\frac{P_cv_c}{T_c}$ decreases with $\alpha$ but increases with $q$.
\begin{table}[!h]
\tabcolsep 0pt
\caption{Critical values for $k=1,n=6,\beta\rightarrow\infty$}
\vspace*{-12pt}
\begin{center}
\def\temptablewidth{0.5\textwidth}
{\rule{\temptablewidth}{1pt}}
\begin{tabular*}{\temptablewidth}{@{\extracolsep{\fill}}cccccc}
$q$ & $\alpha$ & $T_c$ &$v_c$ &$P_c$ &$\frac{P_cv_c}{T_c}$ \\ \hline
0.5 & 1 &0.14213 & 1.80992& 0.02691& 0.343 \\
2 &1 & 0.13989& 1.97347 & 0.02553& 0.360 \\
1 & 1 &0.14154 & 1.85884& 0.02653 & 0.348 \\
1 & 0.5 &0.19287& 1.53461& 0.04727 & 0.376 \\
1 & 2 &0.10062& 2.53773& 0.01351& 0.341
\end{tabular*}
{\rule{\temptablewidth}{1pt}}
\end{center}
\label{tb1}
\end{table}
To observe the $P-V$ criticality behavior more intuitively, we plot the $P-v$ diagram in Fig. \ref{fg1}. When the temperature is less than the critical temperature $T_c$, the isotherm can be divided into three branches. Both the large radius branch and the small radius branch are stable, corresponding to a positive compression coefficient, while the medium radius branch is unstable, corresponding to a negative compression coefficient. The phase transition between the small black hole and the large black hole is analogous to the van der Waals liquid-gas phase transition. Figs. \ref{1a}, \ref{1b}, \ref{1c} and \ref{1d} show the impact of the charge on the $P-V$ criticality, while Figs. \ref{1c}, \ref{1e} and \ref{1f} show the effect of $\alpha$. The comparisons accord with the numerical results in Table \ref{tb1}. We also plot both the two-dimensional and three-dimensional Gibbs free energy graphs, for $q=0,n=6$ in Fig. \ref{fg2} and for $q=1,n=6$ in Fig. \ref{fg3}. Below the critical temperature, the Gibbs free energy graphs display the classical swallow tail behavior, implying the occurrence of a first order phase transition. Above the critical temperature, there is no swallow tail behavior.
\begin{figure*}
\centerline{\subfigure[]{\label{1a}
\includegraphics[width=8cm,height=6cm]{1a.eps}}
\subfigure[]{\label{1b}
\includegraphics[width=8cm,height=6cm]{1b.eps}}}
\centerline{\subfigure[]{\label{1c}
\includegraphics[width=8cm,height=6cm]{1c.eps}}
\subfigure[]{\label{1d}
\includegraphics[width=8cm,height=6cm]{1d.eps}}}
\centerline{\subfigure[]{\label{1e}
\includegraphics[width=8cm,height=6cm]{1e.eps}}
\subfigure[]{\label{1f}
\includegraphics[width=8cm,height=6cm]{1f.eps}}}
\caption{$P$ vs. $v$ for
(a)$n=6,\alpha=1,q=0$, (b) $n=6,\alpha=1,q=0.5$, (c) $n=6,\alpha=1,q=1$, (d) $n=6,\alpha=1,q=2$, (e) $n=6,\alpha=0.5,q=1$ and (f) $n=6,\alpha=2,q=1$} \label{fg1}
\end{figure*}
\begin{figure*}
\centerline{\subfigure[]{\label{2a}
\includegraphics[width=8cm,height=6cm]{2a.eps}}
\subfigure[]{\label{2b}
\includegraphics[width=8cm,height=6cm]{2b.eps}}}
\caption{(a) $G$ vs. $T$ for $k=1, n=6,\alpha=1,q=0$, "$P=0.015<P_c$, Blue curve", "$P=0.02<P_c$, Black curve", "$P=P_c=0.02706$, Red curve", "$P=0.04>P_c$, Purple curve" (b) $G$ vs. $P$ and $T$ for $k=1, n=6,\alpha=1,q=0$} \label{fg2}
\end{figure*}
\begin{figure*}
\centerline{\subfigure[]{\label{3a}
\includegraphics[width=8cm,height=6cm]{3a.eps}}
\subfigure[]{\label{3b}
\includegraphics[width=8cm,height=6cm]{3b.eps}}}
\caption{(a) $G$ vs. $T$ for $k=1, n=6,\alpha=1,q=1$, "$P=0.015<P_c$, Blue curve", "$P=0.02<P_c$, Black curve", "$P=P_c=0.02653$, Red curve", "$P=0.04>P_c$, Purple curve" (b) $G$ vs. $P$ and $T$ for $k=1, n=6,\alpha=1,q=1$} \label{fg3}
\end{figure*}
Secondly, we would discuss the $k=0$ case corresponding to Ricci flat topology. The equation of state reads
\begin{equation}
P=\frac{T}{v}+\frac{16^{n-2}(n-2)q^2}{\pi(n-1)^{2n-3}v^{2n-2}}.\label{33}
\end{equation}
For $n=6$, utilizing Eq. (\ref{33}), one can obtain
\begin{equation}
\frac{\partial P}{\partial v}=-\frac{T}{v^2}-\frac{524288q^2}{390625\pi v^{11}},\label{34}
\end{equation}
which is always negative for positive temperature. So there would be no $P-V$ criticality for $k=0$.
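This monotonicity can be confirmed symbolically: differentiating (\ref{33}) for $n=6$ reproduces the coefficient in Eq. (\ref{34}). A brief \texttt{sympy} check (our addition):

```python
import sympy as sp

T, v, q = sp.symbols('T v q', positive=True)
n = 6

# equation of state (33) for the Ricci flat case k = 0
P = T/v + sp.Integer(16)**(n - 2)*(n - 2)*q**2 \
        / (sp.pi*(n - 1)**(2*n - 3)*v**(2*n - 2))

dP = sp.diff(P, v)
# compare with Eq. (34): dP/dv = -T/v**2 - (524288/390625) q**2/(pi v**11)
print(sp.simplify(dP + T/v**2
                  + sp.Rational(524288, 390625)*q**2/(sp.pi*v**11)))  # -> 0
```

Every term of $\partial P/\partial v$ is manifestly negative for $T,q,v>0$, so no critical point can exist.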
Thirdly, we would investigate the $k=-1$ case corresponding to hyperbolic topology. The equation of state reads
\begin{equation}
P=\frac{T}{v}-\frac{32T\alpha}{(n-1)^2v^3}+\frac{256T\alpha^2}{(n-1)^4v^5}+\frac{(n-2)}{(n-1)\pi v^2}-\frac{16(n-4)\alpha}{(n-1)^3\pi v^4}+\frac{256(n-6)\alpha^2}{3(n-1)^5\pi v^6}+\frac{16^{n-2}(n-2)q^2}{\pi(n-1)^{2n-3}v^{2n-2}}.\label{35}
\end{equation}
Similarly, when $q=0,n=6$, Eqs.~(\ref{29}) and (\ref{30}) can be analytically solved and the corresponding physical quantities can be obtained as
\begin{equation}
T_c=\frac{1}{2\pi\sqrt{\alpha}},\;v_c=\frac{4\sqrt{\alpha}}{5},\;P_c=\frac{5}{8\pi \alpha},\;\frac{P_cv_c}{T_c}=1.\label{36}
\end{equation}
When $q\neq0,n=6$, one can obtain the numerical solutions of Eqs.~(\ref{29}) and (\ref{30}), as listed in Table \ref{tb2}. These results are quite different from those in the former literature, which demonstrated that $P-V$ criticality exists only in the $k=1$ case for topological black holes in both Einstein gravity and Gauss-Bonnet gravity~\cite{Kubiznak,Cai98}.
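The analytic critical point (\ref{36}) can be verified in the same way as in the spherical case, now with $k=-1$ in Eq. (\ref{28}) and $q=0$, $n=6$; a \texttt{sympy} sketch (our addition):

```python
import sympy as sp

T, v, a = sp.symbols('T v alpha', positive=True)
n, k = 6, -1

# equation of state (35) with q = 0, written via the general form (28)
P = (T/v + 32*k*T*a/((n - 1)**2*v**3) + 256*k**2*T*a**2/((n - 1)**4*v**5)
     - k*(n - 2)/((n - 1)*sp.pi*v**2)
     - 16*k**2*(n - 4)*a/((n - 1)**3*sp.pi*v**4)
     - 256*k**3*(n - 6)*a**2/(3*(n - 1)**5*sp.pi*v**6))

# claimed critical point, Eq. (36)
crit = {T: 1/(2*sp.pi*sp.sqrt(a)), v: 4*sp.sqrt(a)/5}

print(sp.simplify(sp.diff(P, v).subs(crit)),      # -> 0
      sp.simplify(sp.diff(P, v, 2).subs(crit)))   # -> 0
print(sp.simplify(P.subs(crit)))                  # -> 5/(8*pi*alpha)
print(sp.simplify(P.subs(crit)*crit[v]/crit[T]))  # -> 1, the ratio P_c v_c / T_c
```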
\begin{table}[!h]
\tabcolsep 0pt
\caption{Critical values for $k=-1,n=6,\beta\rightarrow\infty$}
\vspace*{-12pt}
\begin{center}
\def\temptablewidth{0.5\textwidth}
{\rule{\temptablewidth}{1pt}}
\begin{tabular*}{\temptablewidth}{@{\extracolsep{\fill}}cccccc}
$q$ & $\alpha$ & $T_c$ &$v_c$ &$P_c$ &$\frac{P_cv_c}{T_c}$ \\ \hline
0.5 & 1 &0.33836 & 1.07752& 0.22718& 0.723 \\
2 &1 & 0.72658& 1.31811 & 0.35029& 0.635 \\
1 & 1 &0.46900 & 1.19335& 0.26507 & 0.674 \\
1 & 0.5 &1.88727& 1.01514& 1.12928 & 0.607 \\
1 & 2 &0.18669& 1.38663& 0.10503& 0.780
\end{tabular*}
{\rule{\temptablewidth}{1pt}}
\end{center}
\label{tb2}
\end{table}
To gain an intuitive picture, we plot the $P-v$ diagram in Fig. \ref{fg4}, which shows strange behaviors different from the van der Waals liquid-gas phase transition. The isotherm at the critical temperature is quite similar to that of the van der Waals liquid-gas system. However, for the uncharged case in Fig. \ref{4a}, the isotherms both below and above the critical temperature behave like the coexistence phase, which is similar to the behavior of the van der Waals liquid-gas system below the critical temperature. For the charged case in Fig. \ref{4b}, the "phase transition" picture is quite the reverse of the van der Waals liquid-gas phase transition: above the critical temperature the behavior is "van der Waals like", while below the critical temperature it is "ideal gas like". This process is achieved by lowering the temperature rather than increasing it. We also plot the Gibbs free energy in Fig. \ref{fg5}, where "swallow tail" behavior can be observed.
\begin{figure*}
\centerline{\subfigure[]{\label{4a}
\includegraphics[width=8cm,height=6cm]{4a.eps}}
\subfigure[]{\label{4b}
\includegraphics[width=8cm,height=6cm]{4b.eps}}}
\caption{$P$ vs. $v$ for
(a) $k=-1, n=6,\alpha=1,q=0$, (b) $k=-1, n=6,\alpha=1,q=1$} \label{fg4}
\end{figure*}
The results above are so strange that they motivate us to check whether they are physical. The non-negative definiteness of the entropy demands that
\begin{equation}
\frac{r_+^4}{n-1}+\frac{2kr_+^2\alpha}{n-3}+\frac{k^2\alpha^2}{n-5}\geq0.\label{37}
\end{equation}
In fact, when $n=6$ and $k=-1$, the L.H.S. of the above inequality can be rewritten by utilizing Eq. (\ref{27}) as
\begin{equation}
\frac{125v^4}{256}-\frac{25v^2\alpha}{24}+\alpha^2.\label{38}
\end{equation}
Denoting $v^2$ as $x$, one can consider the equation
\begin{equation}
\frac{125x^2}{256}-\frac{25\alpha x}{24}+\alpha^2=0,\label{39}
\end{equation}
with the discriminant as
\begin{equation}
\Delta=\left(\frac{25\alpha}{24}\right)^2-4\times\alpha^2\times\frac{125}{256}=-\frac{125\alpha^2}{144}.\label{40}
\end{equation}
Note that for any nontrivial value of $\alpha$, the discriminant of Eq. (\ref{39}) is always negative, implying that the entropy is positive for any specific volume $v$.
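The sign of the discriminant (\ref{40}), and hence the positivity of the quadratic (\ref{38}), can be double-checked symbolically (our addition):

```python
import sympy as sp

x, a = sp.symbols('x alpha', positive=True)

# quadratic (39) in x = v**2
quad = sp.Rational(125, 256)*x**2 - sp.Rational(25, 24)*a*x + a**2

disc = sp.discriminant(quad, x)
print(disc)   # -> -125*alpha**2/144, matching Eq. (40)
# negative discriminant + positive leading coefficient: (38) is positive for all v
```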
\begin{figure*}
\centerline{\subfigure[]{\label{5a}
\includegraphics[width=8cm,height=6cm]{5a.eps}}
\subfigure[]{\label{5b}
\includegraphics[width=8cm,height=6cm]{5b.eps}}}
\caption{$G$ vs. $T$ for
(a)$k=-1, n=6,\alpha=1,q=0$, "$P=0.15<P_c$, Blue curve", "$P=P_c=0.19894$, Red curve", "$P=0.24>P_c$, Black curve" (b) $k=-1, n=6,\alpha=1,q=1$,"$P=0.25<P_c$, Blue curve", "$P=P_c=0.26507$, Red curve", "$P=0.29>P_c$, Black curve","$P=0.32>P_c$, Purple curve"} \label{fg5}
\end{figure*}
\section{Inclusion of the nonlinear electrodynamics}
\label{Sec4}
In this section, we would like to take into account the effect of non-linear electrodynamics to complete the analysis of topological AdS black holes in Lovelock-Born-Infeld gravity.
Utilizing Eqs. (\ref{13}) and (\ref{18}), Eq. (\ref{14}) can be rewritten as
\begin{eqnarray}
P&=&\frac{T}{v}+\frac{32kT\alpha}{(n-1)^2v^3}+\frac{256k^2T\alpha^2}{(n-1)^4v^5}-\frac{k(n-2)}{(n-1)\pi v^2}-\frac{16k^2(n-4)\alpha}{(n-1)^3\pi v^4}-\frac{256k^3(n-6)\alpha^2}{3(n-1)^5\pi v^6}
\nonumber
\\
&\;&-\frac{\beta^2}{4\pi}\left\{1-\sqrt{1+\frac{2^{4n-5}(n-2)(n-1)q^2[(n-1)v]^{2-2n}}{\beta^2}}\right\}.\label{41}
\end{eqnarray}
Similarly, we would discuss the $k=1$ case corresponding to spherical topology first. The equation of state reads
\begin{eqnarray}
P&=&\frac{T}{v}+\frac{32T\alpha}{(n-1)^2v^3}+\frac{256T\alpha^2}{(n-1)^4v^5}-\frac{(n-2)}{(n-1)\pi v^2}-\frac{16(n-4)\alpha}{(n-1)^3\pi v^4}-\frac{256(n-6)\alpha^2}{3(n-1)^5\pi v^6}
\nonumber
\\
&\;&-\frac{\beta^2}{4\pi}\left\{1-\sqrt{1+\frac{2^{4n-5}(n-2)(n-1)q^2[(n-1)v]^{2-2n}}{\beta^2}}\right\}.\label{42}
\end{eqnarray}
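As a consistency check (our addition), the Born-Infeld term in Eq. (\ref{42}) should reduce to the Maxwell charge term of Eq. (\ref{31}) as $\beta\rightarrow\infty$; a \texttt{sympy} sketch for $n=6$:

```python
import sympy as sp

v, q, b = sp.symbols('v q beta', positive=True)
n = 6

# Born-Infeld contribution to the equation of state (42)
bi = -b**2/(4*sp.pi)*(1 - sp.sqrt(1 + 2**(4*n - 5)*(n - 2)*(n - 1)*q**2
                                  *((n - 1)*v)**(2 - 2*n)/b**2))

# Maxwell charge term of the limit equation of state (31)
maxwell = sp.Integer(16)**(n - 2)*(n - 2)*q**2 \
          / (sp.pi*(n - 1)**(2*n - 3)*v**(2*n - 2))

print(sp.simplify(sp.limit(bi, b, sp.oo) - maxwell))   # -> 0
```

The agreement explains why, for the moderate parameter values in Table \ref{tb3}, the Born-Infeld corrections to the critical data are small.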
One can obtain the corresponding physical quantities at the critical point, listed in Table \ref{tb3}, by solving Eqs.~(\ref{29}) and (\ref{30}) numerically for the case $n=6$. As is shown, the physical quantities at the critical point, $T_c, v_c, P_c$, depend on the charge, the Lovelock coefficient $\alpha$ and the Born-Infeld parameter $\beta$. As $\alpha$ or $q$ increases, both $T_c$ and $P_c$ decrease while $v_c$ increases. However, the ratio $\frac{P_cv_c}{T_c}$ decreases with $\alpha$ but increases with $q$. These observations are similar to the limit case $\beta\rightarrow\infty$. As $\beta$ increases, $T_c$ and $P_c$ decrease while $v_c$ and the ratio $\frac{P_cv_c}{T_c}$ increase. However, only slight differences can be observed concerning the impact of the nonlinear electrodynamics, which may be attributed to the parameter region we choose. Readers interested in the "Schwarzschild like" behavior of Born-Infeld black holes are referred to Ref.~\cite{Gunasekaran}. For an intuitive understanding, we plot the $P-v$ diagram in Fig. \ref{6a} and show the effects of the parameters $q$ and $\alpha$ in Fig. \ref{fg7}.
\begin{table}[!h]
\tabcolsep 0pt
\caption{Critical values for $k=1,n=6$ and various $\beta$, $q$, $\alpha$}
\vspace*{-12pt}
\begin{center}
\def\temptablewidth{0.5\textwidth}
{\rule{\temptablewidth}{1pt}}
\begin{tabular*}{\temptablewidth}{@{\extracolsep{\fill}}ccccccc}
$\beta$ & $q$ & $\alpha$ &$T_c$ &$v_c$ &$P_c$ &$\frac{P_cv_c}{T_c}$ \\ \hline
10 & 1 &1 &0.141541& 1.85884& 0.026528 & 0.34839 \\
0.5 & 1 &1 &0.141545& 1.85829& 0.026531 & 0.34832 \\
1 & 1 &1 &0.141542& 1.85871& 0.026529 & 0.34838 \\
1 & 0.5 &1 &0.142126& 1.80991& 0.02691 & 0.343 \\
1 & 2 &1 &0.139898& 1.97286& 0.02554 & 0.360 \\
1 & 1 &0.5 &0.192905& 1.53258& 0.04730 & 0.376 \\
1 & 1 &2 &0.100617& 2.53773& 0.01351 & 0.341
\end{tabular*}
{\rule{\temptablewidth}{1pt}}
\end{center}
\label{tb3}
\end{table}
\begin{figure*}
\centerline{\subfigure[]{\label{6a}
\includegraphics[width=8cm,height=6cm]{6a.eps}}
\subfigure[]{\label{6b}
\includegraphics[width=8cm,height=6cm]{6b.eps}}}
\caption{$P$ vs. $v$ for
(a)$k=1,n=6,\alpha=1,\beta=1,q=1$ and (b) $k=-1,n=6,\alpha=1,\beta=1,q=1$} \label{fg6}
\end{figure*}
\begin{figure*}
\centerline{\subfigure[]{\label{7a}
\includegraphics[width=8cm,height=6cm]{7a.eps}}
\subfigure[]{\label{7b}
\includegraphics[width=8cm,height=6cm]{7b.eps}}}
\caption{Isotherm at the critical temperature for
(a)$k=1,n=6,\beta=1,q=1$ and (b) $k=1,n=6,\beta=1,\alpha=1$} \label{fg7}
\end{figure*}
Secondly, we would discuss the $k=0$ case corresponding to Ricci flat topology. The equation of state reads
\begin{equation}
P=\frac{T}{v}-\frac{\beta^2}{4\pi}\left\{1-\sqrt{1+\frac{2^{4n-5}(n-2)(n-1)q^2[(n-1)v]^{2-2n}}{\beta^2}}\right\}.\label{43}
\end{equation}
There is no $P-V$ criticality in this case because $P$ decreases monotonically with $v$.
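The absence of criticality for $k=0$ can be checked directly from Eq.~(\ref{43}). The sketch below (the function name and the sample parameter values $T=0.1$, $n=6$, $q=\beta=1$ are illustrative choices, not from the paper) evaluates the equation of state on a grid of $v$ and confirms the monotonic decrease:

```python
import math

def pressure_k0(T, v, n=6, q=1.0, beta=1.0):
    """Equation of state (43) for the Ricci-flat (k = 0) case."""
    x = 2**(4*n - 5) * (n - 2) * (n - 1) * q**2 * ((n - 1) * v)**(2 - 2*n) / beta**2
    return T / v - beta**2 / (4 * math.pi) * (1 - math.sqrt(1 + x))

# P decreases monotonically with v at fixed T, so no critical point exists:
vs = [0.5 + 0.1 * i for i in range(30)]
ps = [pressure_k0(0.1, v) for v in vs]
assert all(p1 > p2 for p1, p2 in zip(ps, ps[1:]))
```

Both terms decrease separately: $T/v$ is decreasing in $v$, and the Born-Infeld term is positive and decays as $v^{2-2n}\rightarrow 0$.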
Thirdly, we discuss the $k=-1$ case, corresponding to hyperbolic topology. The equation of state reads
\begin{eqnarray}
P&=&\frac{T}{v}-\frac{32T\alpha}{(n-1)^2v^3}+\frac{256T\alpha^2}{(n-1)^4v^5}+\frac{(n-2)}{(n-1)\pi v^2}-\frac{16(n-4)\alpha}{(n-1)^3\pi v^4}+\frac{256(n-6)\alpha^2}{3(n-1)^5\pi v^6}
\nonumber
\\
&\;&-\frac{\beta^2}{4\pi}\left\{1-\sqrt{1+\frac{2^{4n-5}(n-2)(n-1)q^2[(n-1)v]^{2-2n}}{\beta^2}}\right\}.\label{44}
\end{eqnarray}
Numerical solutions of Eqs.~(\ref{29}) and (\ref{30}) are listed in Table \ref{tb4}, and we also plot the $P-v$ diagram in Fig.~\ref{6b}, in which similarly strange behavior is observed. Note that the entropy analysis also holds because the entropy in Eq.~(\ref{17}) is independent of $\beta$, so we do not repeat it here.
\begin{table}[!h]
\tabcolsep 0pt
\caption{Critical values for various $\beta$, $q$ and $\alpha$ for $k=-1,n=6$}
\vspace*{-12pt}
\begin{center}
\def\temptablewidth{0.5\textwidth}
{\rule{\temptablewidth}{1pt}}
\begin{tabular*}{\temptablewidth}{@{\extracolsep{\fill}}cccccccc}
$\beta$ & $q$ & $\alpha$ &$T_c$ &$v_c$ &$P_c$ &$\frac{P_cv_c}{T_c}$ \\ \hline
10 & 1 &1 &0.468887& 1.19315& 0.26504 & 0.674 \\
0.5 & 1 &1 &0.424488& 1.11470& 0.25296& 0.664 \\
1 & 1 &1 &0.457384& 1.17317& 0.26182& 0.672\\
1 & 0.5 &1 &0.333977& 1.06663&0.22621 & 0.722 \\
1 & 2 &1 &0.690585& 1.28270& 0.33877 & 0.629 \\
1 & 1 &0.5 &1.498501& 0.92425& 0.94270 & 0.581 \\
1 & 1 &2 &0.186104& 1.38311& 0.10496 & 0.780
\end{tabular*}
{\rule{\temptablewidth}{1pt}}
\end{center}
\label{tb4}
\end{table}
\section{Conclusions}
\label{Sec5}
In this paper, topological AdS black holes in Lovelock-Born-Infeld gravity have been investigated in the extended phase space. The black hole solutions are reviewed and their thermodynamics is further explored in the extended phase space. We calculate the entropy by integration and find that the result in the former literature~\cite{Decheng2} was incomplete. Treating the cosmological constant as pressure, we rewrite the first law of thermodynamics for the specific case in which the second order and the third order Lovelock coefficients are related through a single Lovelock coefficient $\alpha$. The quantities conjugate to the Lovelock coefficient and the Born-Infeld parameter are calculated. Comparing these quantities with those in the former literature on Gauss-Bonnet black holes~\cite{Cai98}, we find that extra terms arise due to the third order Lovelock gravity. To make the phase transition clearer, the Gibbs free energy is also calculated.
To figure out the effect of the third order Lovelock gravity on the $P-V$ criticality, a detailed analysis of the limit case $\beta\rightarrow\infty$ has been performed. Since the entropy is convergent only when $n>5$, our investigation is carried out for $n=6$, corresponding to seven-dimensional black holes. It is shown that for the spherical topology, $P-V$ criticality exists even when $q=0$. The critical physical quantities can be solved analytically and they vary with the parameter $\alpha$, although the ratio $\frac{P_cv_c}{T_c}$ is independent of $\alpha$. Our results demonstrate again that charge is not an indispensable condition for $P-V$ criticality; this may be attributed to the effect of higher derivative terms of curvature, since a similar phenomenon was also found for Gauss-Bonnet black holes~\cite{Cai98}. For $q\neq0$, it is shown that the physical quantities at the critical point $T_c, v_c, P_c$ depend on both the charge and the parameter $\alpha$. As $\alpha$ or $q$ increases, both $T_c$ and $P_c$ decrease while $v_c$ increases. However, the ratio $\frac{P_cv_c}{T_c}$ decreases with $\alpha$ but increases with $q$. Behavior similar to the van der Waals liquid-gas phase transition can be observed in the $P-v$ diagram, and the classical swallow tail behavior can be observed in both the two-dimensional and three-dimensional graphs of the Gibbs free energy. These observations indicate that a phase transition between small black holes and large black holes takes place when $k=1$. For $k=0$, no critical point can be found and there is no $P-V$ criticality. Interesting findings occur in the case $k=-1$, in which positive solutions for the critical points are found for both the uncharged and the charged case. However, the $P-v$ diagram is very strange.
For the uncharged case, the isotherms both below and above the critical temperature behave like the coexistence phase, similar to the behavior of the van der Waals liquid-gas system below the critical temperature. For the charged case, the ``phase transition'' picture is quite the reverse of the van der Waals liquid-gas phase transition: above the critical temperature the behavior is ``van der Waals like'', while below the critical temperature it is ``ideal gas like''. This process is achieved by lowering the temperature rather than increasing it. To check whether these findings are physical, we analyze the non-negative definiteness condition of the entropy. It is shown that for any nontrivial value of $\alpha$, the entropy is always positive for any specific volume $v$. We relate the findings in the case $k=-1$ to a peculiar property of the third order Lovelock gravity: the entropy in third order Lovelock gravity contains extra terms that are absent for Gauss-Bonnet black holes, which allows the critical points to satisfy the non-negative definiteness condition of the entropy. We also check the Gibbs free energy graph, in which ``swallow tail'' behavior can be observed.
Moreover, the effect of nonlinear electrodynamics is included in our work. Observations similar to the limit case $\beta\rightarrow\infty$ are made, and only slight differences can be observed for different values of $\beta$, which may be attributed to the parameter region we choose. More interesting findings concerning the ``Schwarzschild like'' behavior can be found in the former literature~\cite{Gunasekaran}; we do not repeat them here because our main motivation is to investigate the impact of the third order Lovelock gravity on the $P-V$ criticality in the extended phase space.
\section*{Acknowledgements}
This research is supported by the National Natural Science
Foundation of China (Grant Nos.~11235003, 11175019, 11178007). It is
also supported by the \textquotedblleft Thousand Hundred
Ten\textquotedblright\ Project of Guangdong Province and the Natural Science Foundation of Zhanjiang Normal University under
Grant No.~QL1104.
\section{Introduction}
\label{Sec:Introduction}
Graphical models are used to model a wide variety of systems, such as gene regulatory networks and social interaction networks. A graph consists of a set of $p$ nodes, each representing a variable, and a set of edges between pairs of nodes. The presence of an edge between two nodes indicates a relationship between the two variables. In this manuscript, we consider two types of graphs: conditional independence graphs and marginal independence graphs. In a conditional independence graph, an edge connects a pair of variables if and only if they are conditionally dependent---dependent conditional upon the other variables. In a marginal independence graph, two nodes are joined by an edge if and only if they are marginally dependent---dependent without conditioning on the other variables.
In recent years, many authors have studied the problem of learning a graphical model in the high-dimensional setting, in which the number of variables $p$ is larger than the number of observations $n$. Let $\mathbf{X}$ be an $n\times p$ matrix, with rows $\mathbf{x}_1,\ldots,\mathbf{x}_n$. Throughout the rest of the text, we will focus on three specific types of graphical models:
\begin{enumerate}
\item A \emph{Gaussian graphical model}, where $\mathbf{x}_1,\ldots,\mathbf{x}_n \stackrel{\small \mathrm{i.i.d.}}\sim N(\mathbf{0},\mathbf{\Sigma})$. In this setting, $(\mathbf{\Sigma}^{-1})_{jj'} = 0$ for some $j\ne j'$ if and only if the $j$th and $j'$th variables are conditionally independent \citep{MKB79}; therefore, the sparsity pattern of $\mathbf{\Sigma}^{-1}$ determines the conditional independence graph.
\item A \emph{Gaussian covariance graph model}, where $\mathbf{x}_1,\ldots,\mathbf{x}_n \stackrel{\small \mathrm{i.i.d.}}\sim N(\mathbf{0},\mathbf{\Sigma})$. Then $\Sigma_{jj'}=0$ for some $j\ne j'$ if and only if the $j$th and $j'$th variables are marginally independent. Therefore, the sparsity pattern of $\mathbf{\Sigma}$ determines the marginal independence graph.
\item A \emph{binary Ising graphical model}, where $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are i.i.d. with density function
\[
p(\mathbf{x},\mathbf{\Theta}) = \frac{1}{Z(\mathbf{\Theta})}\exp \left[ \sum_{j=1}^p \theta_{jj} x_j +\sum_{1\le j < j' \le p} \theta_{jj'} x_j x_{j'} \right],
\]
$\mathbf{\Theta}$ is a $p\times p$ symmetric matrix, and $Z(\mathbf{\Theta})$ is the partition function, which ensures that the density sums to one. Here, $\mathbf{x}$ is a binary vector, and $\theta_{jj'}=0$ if and only if the $j$th and $j'$th variables are conditionally independent.
The sparsity pattern of $\mathbf{\Theta}$ determines the conditional independence graph.
\end{enumerate}
To construct an interpretable graph when $p>n$, many authors have proposed applying an $\ell_1$ penalty to the parameter encoding each edge, in order to encourage sparsity.
For instance, such an approach is taken by
\citet{YuanLin07}, \citet{SparseInv}, \citet{Rothman08}, and \citet{YuanGlasso08} in the Gaussian graphical model; \citet{ElKaroui2008}, \citet{BickelandLevina2008}, \citet{RothmanJASA09}, \citet{BienTibs11}, \citet{CaiLiu2011}, and \citet{Xueetal2012} in the covariance graph model; and
\citet{LeeSIetal2007}, \citet{Hoefling2009}, and \citet{ravikumaretal2010} in the binary model.
However, applying an $\ell_1$ penalty to each edge can be interpreted as placing an independent double-exponential prior on each edge. Consequently, such an approach implicitly assumes that each edge is equally likely and independent of all other edges; this corresponds to an Erd\H{o}s-R\'{e}nyi graph in which most nodes have approximately the same number of edges \citep{erdosrenyi}. This is unrealistic in many real-world networks, in which we believe that certain nodes (which, unfortunately, are not known \emph{a priori}) have a lot more edges than other nodes. An example is the network of webpages in the World Wide Web, where a relatively small number of webpages are connected to many other webpages \citep{barabasi1999}. A number of authors have shown that real-world networks are \emph{scale-free}, in the sense that the number of edges for each node follows a power-law distribution; examples include gene-regulatory networks, social networks, and networks of collaborations among scientists \citep[among others,][]{barabasi1999,barabasi2009,Liljerosetal2001,Jeongetal2001,Newman2000,li2005towards}. More recently, \citet{Haoetal2012} have shown that certain genes, referred to as \emph{super hubs}, regulate hundreds of downstream genes in a gene regulatory network, resulting in far denser connections than are typically seen in a scale-free network.
In this paper, we refer to very densely-connected nodes, such as the ``super hubs" considered in \citet{Haoetal2012}, as \emph{hubs}. When we refer to hubs, we have in mind nodes that are connected to a very substantial number of other nodes in the network---and in particular, we are referring to nodes that are much more densely-connected than even the most highly-connected node in a scale-free network. An example of a network containing hub nodes
is shown in Figure~\ref{Fig:ggmtoy}.
Here we propose a convex penalty function for estimating graphs containing hubs. Our formulation simultaneously identifies the hubs and estimates the entire graph. The penalty function yields a convex optimization problem when combined with a convex loss function. We consider the application of this hub penalty function in modeling Gaussian graphical models, covariance graph models, and binary Ising models. Our formulation does not require that we know \emph{a priori} which nodes in the network are hubs.
In related work, several authors have proposed methods to estimate a scale-free Gaussian graphical model \citep{QiangLiu2011,Defazio2012}. However, those methods do not model hub nodes---the most highly-connected nodes that arise in a scale-free network are far less connected than the hubs that we consider in our formulation. Under a different framework, some authors have proposed a screening-based procedure to identify hub nodes in the context of Gaussian graphical models \citep{HeroandRajaratnam2012,firouzi2013local}. Our proposal outperforms such approaches when hub nodes are present (see discussion in Section~\ref{subSec:other proposal}).
In Figure~\ref{Fig:ggmtoy}, the performance of our proposed approach is shown in a toy example in the context of a Gaussian graphical model. We see that when the true network contains hub nodes (Figure~\ref{Fig:ggmtoy}(a)), our proposed approach (Figure~\ref{Fig:ggmtoy}(b)) is much better able to recover the network than is the graphical lasso (Figure~\ref{Fig:ggmtoy}(c)), a well-studied approach that applies an $\ell_1$ penalty to each edge in the graph \citep{SparseInv}.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.52]{toyexample.pdf}
\end{center}
\caption{(a): Heatmap of the inverse covariance matrix in a toy example of a Gaussian graphical model with four hub nodes. White elements are zero and colored elements are non-zero in the inverse covariance matrix. Thus, colored elements correspond to edges in the graph. (b): Estimate from the \emph{hub graphical lasso}, proposed in this paper. (c): Graphical lasso estimate. }
\label{Fig:ggmtoy}
\end{figure}
We present the hub penalty function in Section~\ref{Sec:Penalty}. We then apply it to the Gaussian graphical model, the covariance graph model, and the binary Ising model in Sections~\ref{Sec:GGM}, \ref{Sec:Covariance}, and \ref{Sec:Binary}, respectively. In Section~\ref{Sec:realdata}, we apply our approach to a webpage data set and a gene expression data set. We close with a discussion in Section~\ref{Sec:Discussion}.
\section{The General Formulation}
\label{Sec:Penalty}
In this section, we present a general framework for accommodating networks with hub nodes.
\subsection{The Hub Penalty Function}
Let $\mathbf{X}$ be an $n \times p$ data matrix, $\mathbf{\Theta}$ a $p\times p$ symmetric matrix containing the parameters of interest, and $\ell(\mathbf{X},\mathbf{\Theta})$ a loss function (assumed to be convex in $\bf \Theta$).
In order to obtain a sparse and interpretable graph estimate, many authors have considered the problem
\begin{eqnarray}
\label{Eq:l1general}
\underset{{\mathbf{\Theta}\in \mathcal{S}}}{\mathrm{minimize}}
& & \left\{ \ell(\mathbf{X},\mathbf{\Theta}) + \lambda \| \mathbf{\Theta} - \mathrm{diag}(\mathbf{\Theta})\|_1 \right \},
\end{eqnarray}
where $\lambda$ is a non-negative tuning parameter, $\mathcal{S}$ is some set depending on the loss function, and $\|\cdot \|_1$ is the sum of the absolute values of the matrix elements. For instance, in the case of a Gaussian graphical model, we could take $\ell(\mathbf{X},\mathbf{\Theta}) = -\log \det {\bf \Theta} + \mbox{trace}({\bf S} {\bf \Theta})$, the negative log-likelihood of the data, where $\bf S$ is the empirical covariance matrix and $\mathcal{S}$ is the set of $p\times p$ positive definite matrices. The solution to (\ref{Eq:l1general}) can then be interpreted as an estimate of the inverse covariance matrix. The $\ell_1$ penalty in (\ref{Eq:l1general}) encourages zeros in the solution. But it typically does not yield an estimate that contains hubs.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.625]{formulation.pdf}
\end{center}
\caption{Decomposition of a symmetric matrix $\mathbf{\Theta}$ into $\mathbf{Z+V+V}^T$, where $\mathbf{Z}$ is sparse, and most columns of $\mathbf{V}$ are entirely zero. Blue, white, green, and red elements are diagonal, zero, non-zero in $\mathbf{Z}$, and non-zero due to two hubs in $\mathbf{V}$, respectively.}
\label{Fig:formulation}
\end{figure}
In order to explicitly model hub nodes in a graph, we wish to replace the $\ell_1$ penalty in (\ref{Eq:l1general}) with a convex penalty that encourages a solution that can be decomposed as $\mathbf{Z}+\mathbf{V}+\mathbf{V}^T$, where $\mathbf{Z}$ is a sparse symmetric matrix, and $\mathbf{V}$ is a matrix whose columns are either entirely zero or almost entirely non-zero (see Figure \ref{Fig:formulation}). The sparse elements of $\mathbf{Z}$ represent edges between non-hub nodes, and the non-zero columns of $\mathbf{V}$ correspond to hub nodes. We achieve this goal via the \emph{hub penalty function}, which takes the form
\begin{equation}
\label{Eq:hubpenalty} \footnotesize
\text{P}(\mathbf{\Theta}) = \underset{\mathbf{V},\mathbf{Z}: \;\; \mathbf{\Theta} = \mathbf{V}+\mathbf{V}^T+\mathbf{Z}} {\text{min}}
\left \{ \lambda_1 \| \mathbf{Z} - \text{diag}(\mathbf{Z})\|_1 +\lambda_2 \| \mathbf{V} - \text{diag}(\mathbf{V})\|_1
+\lambda_3 \sum_{j=1}^p \| (\mathbf{V} - \text{diag}(\mathbf{V}))_j \|_q \right\}.
\end{equation}
Here $\lambda_1, \lambda_2$, and $\lambda_3$ are nonnegative tuning parameters. Sparsity in $\mathbf{Z}$ is encouraged via the $\ell_1$ penalty on its off-diagonal elements, and is controlled by the value of $\lambda_1$. The $\ell_1$ and $\ell_1/ \ell_q$ norms on the columns of $\bf V$ induce group sparsity when $q=2$ \citep{grouplasso,sparsegrouplasso}; $\lambda_3$ controls the selection of hub nodes, and $\lambda_2$ controls the sparsity of each hub node's connections to other nodes.
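To make the roles of the three terms concrete, the following sketch (the function name is hypothetical) evaluates the objective inside the minimization of (\ref{Eq:hubpenalty}) at one particular decomposition $(\mathbf{Z}, \mathbf{V})$; the penalty itself is the minimum over all decompositions with $\mathbf{\Theta} = \mathbf{V}+\mathbf{V}^T+\mathbf{Z}$:

```python
import numpy as np

def hub_penalty_at(Z, V, lam1, lam2, lam3, q=2):
    """Evaluate the three hub-penalty terms at a given (Z, V) decomposition."""
    Z_off = Z - np.diag(np.diag(Z))
    V_off = V - np.diag(np.diag(V))
    sparse_term = lam1 * np.abs(Z_off).sum()         # edges between non-hub nodes
    within_hub = lam2 * np.abs(V_off).sum()          # sparsity within each hub column
    hub_select = lam3 * np.linalg.norm(V_off, ord=q, axis=0).sum()  # group penalty selecting hub columns
    return sparse_term + within_hub + hub_select
```

With $q=2$, the last term is a sum of column $\ell_2$ norms, so entire columns of $\mathbf{V}$ are either zeroed out or retained as hubs.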
The convex penalty (\ref{Eq:hubpenalty}) can be combined with $\ell({\bf X}, {\bf \Theta})$ to yield the convex optimization problem
\begin{eqnarray}
\label{Eq:general}
\underset{{\mathbf{\Theta}\in \mathcal{S}, \mathbf{V, Z}}} {\text{minimize}}
& & \Bigg\{ \ell(\mathbf{X},\mathbf{\Theta}) + \lambda_1 \| \mathbf{Z} - \text{diag}(\mathbf{Z})\|_1 +\lambda_2 \| \mathbf{V} - \text{diag}(\mathbf{V})\|_1 \nonumber \\
&&+ \lambda_3 \sum_{j=1}^p \| (\mathbf{V} - \text{diag}(\mathbf{V}))_j \|_q \Bigg\} \;\; \text{subject to} \;\; \mathbf{\Theta} = \mathbf{V}+\mathbf{V}^T+\mathbf{Z},
\end{eqnarray}
where the set $\mathcal{S}$ depends on the loss function $\ell(\mathbf{X},\mathbf{\Theta})$.
Note that when $\lambda_2\rightarrow \infty$ or $\lambda_3 \rightarrow \infty$, then (\ref{Eq:general}) reduces to (\ref{Eq:l1general}). In this paper, we take $q=2$, which leads to estimation of a network containing dense hub nodes. Other values of $q$ such as $q=\infty$ are also possible
\citep[see, e.g.,][]{Mohanetal2013}. We note that the hub penalty function is closely related to recent work on overlapping group lasso penalties in the context of learning multiple sparse precision matrices \citep{Mohanetal2013}.
\subsection{Algorithm}
In order to solve (\ref{Eq:general}) with $q=2$, we use an \emph{alternating direction method of multipliers} (ADMM) algorithm \citep[see, e.g.,][]{EcksteinADMM92,BoydADMM,ADMMconvergence}. ADMM is an attractive algorithm for this problem, as it allows us to decouple some of the terms in (\ref{Eq:general}) that are difficult to optimize jointly. In order to develop an ADMM algorithm for (\ref{Eq:general}) with guaranteed convergence, we reformulate it as
a consensus problem, as in \citet{Maetal2013ADMM}. The convergence of the algorithm to the optimal solution follows from classical results \citep[see, e.g., the review papers][]{BoydADMM,ADMMconvergence}.
\begin{algorithm}[htp]
\small
\caption{ADMM Algorithm for Solving (\ref{Eq:general}).}
\label{Alg:general}
\begin{enumerate}
\item \textbf{Initialize} the parameters:
\begin{enumerate}
\item primal variables $\mathbf{\Theta,V,Z}, \tilde{\mathbf{\Theta}},\tilde{\mathbf{V}}$, and $\tilde{\mathbf{Z}}$ to the $p \times p$ identity matrix.
\item dual variables $\mathbf{W}_1,\mathbf{W}_2$, and $\mathbf{W}_3$ to the $p \times p$ zero matrix.
\item constants $\rho>0$ and $\tau>0$.\\
\end{enumerate}
\item \textbf{Iterate} until the stopping criterion $\frac{\| {\mathbf{\Theta}}_{t}- {\mathbf{\Theta}}_{t-1} \|_F^2}{\| {\mathbf{\Theta}}_{t-1}\|_F^2} \le \tau$ is met, where ${\bf \Theta}_t$ is the value of $\bf \Theta$ obtained at the $t$th iteration:
\begin{enumerate}
\item Update ${\bf \Theta}, {\bf V}, {\bf Z}$:
\begin{enumerate}
\item $\mathbf{\Theta}= \underset{\mathbf{\Theta}\in \mathcal{S}}{\arg \min} \left \{ \ell(\mathbf{X},\mathbf{\Theta}) + \frac{\rho}{2} \|\mathbf{\Theta}-\tilde{\mathbf{\Theta}}+\mathbf{W}_{1}\|_F^2 \right \}$.
\item $\mathbf{Z}= S(\tilde{\bf Z} - \mathbf{W}_3, \frac{\lambda_1}{\rho})$,
diag$\mathbf{(Z)}= \text{diag}(\tilde{\mathbf{Z}}-\mathbf{W}_3)$. Here $S$ denotes the soft-thresholding operator, applied element-wise to a matrix: $S(A_{ij},b) = \text{sign}(A_{ij}) \max( |A_{ij}|-b, 0)$.
\item $\mathbf{C}= \tilde{\mathbf{V}}-\mathbf{W}_2-\text{diag}(\tilde{\mathbf{V}}-\mathbf{W}_2)$.
\item $\mathbf{V}_j= \max \left(1-\frac{\lambda_3}{\rho \|S(\mathbf{C}_j,\lambda_2/\rho) \|_2}, 0 \right) \cdot S(\mathbf{C}_j, \lambda_2/\rho)$ for $j=1, \ldots, p$.
\item diag$(\mathbf{V}) = \text{diag}(\tilde{\mathbf{V}}-\mathbf{W}_2)$.
\end{enumerate}
\item Update $\tilde{\bf \Theta}, \tilde{\bf V}, \tilde{\bf Z}$:
\begin{enumerate}
\item $\mathbf{\Gamma} = \frac{\rho}{6}\left[ ({\mathbf{\Theta+W}_1}) - (\mathbf{V+W}_2) -(\mathbf{V+W}_2)^T - (\mathbf{Z+W}_3) \right]$.
\item $\tilde{\mathbf{\Theta}}= {\bf \Theta + W}_1 - \frac{1}{\rho}\mathbf{\Gamma}$; \; \;\; iii. $\tilde{\mathbf{V}} = \frac{1}{\rho} (\mathbf{\Gamma+\Gamma}^T) +\mathbf{V+W}_2$; \; \;\; iv. $\tilde{\mathbf{Z}} = \frac{1}{\rho} \mathbf{\Gamma} + {\bf Z+W}_3$.
\end{enumerate}
\item Update $\mathbf{W}_1, {\bf W}_2, {\bf W}_3$:
\begin{enumerate}
\item $\mathbf{W}_1 = \mathbf{W}_1 + \mathbf{\Theta}-\tilde{\bf \Theta}$; \; \;\; ii. $\mathbf{W}_2 = \mathbf{W}_2 + \mathbf{V}-\tilde{\bf V}$; \; \;\; iii. $\mathbf{W}_3 = \mathbf{W}_3 + \mathbf{Z}-\tilde{\bf Z}$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{algorithm}
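The soft-thresholding and group soft-thresholding updates of Step 2(a) can be sketched as follows (a minimal Python rendering of Steps 2(a)ii--v; the function names are ours):

```python
import numpy as np

def soft_threshold(A, b):
    """Element-wise soft-thresholding operator S(A, b) from Step 2(a)ii."""
    return np.sign(A) * np.maximum(np.abs(A) - b, 0.0)

def update_V(V_tilde, W2, lam2, lam3, rho):
    """Steps 2(a)iii-v: column-wise group soft-thresholding update for V."""
    C = V_tilde - W2
    C = C - np.diag(np.diag(C))                 # zero out the diagonal first
    V = np.zeros_like(C)
    for j in range(C.shape[1]):
        Sj = soft_threshold(C[:, j], lam2 / rho)
        nrm = np.linalg.norm(Sj)
        if nrm > 0.0:
            V[:, j] = max(1.0 - lam3 / (rho * nrm), 0.0) * Sj
    np.fill_diagonal(V, np.diag(V_tilde - W2))  # diagonal passes through unchanged
    return V
```

The column-wise scaling factor zeroes out entire columns whose soft-thresholded norm falls below $\lambda_3/\rho$, which is how non-hub columns of $\mathbf{V}$ are eliminated.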
In greater detail, we let $\mathbf{B}=(\mathbf{\Theta},\mathbf{V},\mathbf{Z})$, $\tilde{\mathbf{B}}=(\tilde{\mathbf{\Theta}},\tilde{\mathbf{V}},\tilde{\mathbf{Z}})$,
\[
f(\mathbf{B}) = \ell(\mathbf{X,\Theta}) + \lambda_1 \|\mathbf{Z}-\text{diag}(\mathbf{Z}) \|_1+ \lambda_2 \|\mathbf{V}-\text{diag}(\mathbf{V}) \|_1+ \lambda_3 \sum_{j=1}^p \| (\mathbf{V}-\text{diag}(\mathbf{V}))_j \|_2,
\]
and
\[
g(\tilde{\mathbf{B}})=\begin{cases} 0 & \text{if } \tilde{\mathbf{\Theta}} = \tilde{\mathbf{V}}+\tilde{\mathbf{V}}^T + \tilde{\mathbf{Z}} \\ \infty & \text{otherwise}.\end{cases}
\]
\noindent Then, we can rewrite (\ref{Eq:general}) as
\begin{equation}
\label{Eq:reformulate}
\underset{\mathbf{B},\tilde{\mathbf{B}}}{\text{minimize }} \left\{f(\mathbf{B})+g(\tilde{\mathbf{B}})\right\} \qquad \text{subject to } \mathbf{B}=\tilde{\mathbf{B}}.
\end{equation}
\noindent The scaled augmented Lagrangian for (\ref{Eq:reformulate}) takes the form
\begin{equation*}
\begin{split}
L(\mathbf{B},\tilde{\mathbf{B}},\mathbf{W}) &= \ell(\mathbf{X},\mathbf{\Theta}) + \lambda_1 \| \mathbf{Z} - \text{diag}(\mathbf{Z})\|_1+ \lambda_2 \| \mathbf{V} - \text{diag}(\mathbf{V})\|_1 \\
&+\lambda_3 \sum_{j=1}^p \| (\mathbf{V} - \text{diag}(\mathbf{V}))_j \|_2 +g(\tilde{\mathbf{B}}) +\frac{\rho}{2}\|\mathbf{B}-\tilde{\mathbf{B}}+\mathbf{W} \|^2_F,\\
\end{split}
\end{equation*}
where $\mathbf{B}$ and $\tilde{\mathbf{B}}$ are the primal variables, and ${\bf W}=({\bf W}_1, {\bf W}_2, {\bf W}_3 )$ is the dual variable.
Note that the scaled augmented Lagrangian can be derived from the usual Lagrangian by adding a quadratic term and completing the square \citep{BoydADMM}.
A general algorithm for solving
(\ref{Eq:general}) is provided in Algorithm \ref{Alg:general}. The derivation is in Appendix A. Note that only the update for $\mathbf{\Theta}$ (Step 2(a)i) depends on the form of the convex loss function $\ell(\mathbf{X},\mathbf{\Theta})$. In the following sections, we consider special cases of (\ref{Eq:general}) that lead to estimation of Gaussian graphical models, covariance graph models, and binary networks with hub nodes.
\section{The Hub Graphical Lasso}
\label{Sec:GGM}
Assume that $\mathbf{x}_1,\ldots,\mathbf{x}_n \stackrel{\small\mathrm{i.i.d.}}\sim N(\mathbf{0},\mathbf{\Sigma})$. The well-known \emph{graphical lasso} problem \citep[see, e.g.,][]{SparseInv} takes the form of (\ref{Eq:l1general}) with $\ell(\mathbf{X},\mathbf{\Theta}) = -\log \det {\bf \Theta} + \mbox{trace}({\bf S} {\bf \Theta})$, and $\bf S$ the empirical covariance matrix of $\mathbf{X}$:
\begin{equation}
\label{Eq:l1penalizeggm}
\underset{\mathbf{\Theta}\in \mathcal{S}}{\text{minimize}} \quad \left\{-\log \det \mathbf{\Theta} + \text{trace}(\mathbf{S\Theta}) + \lambda \sum_{j\ne j'} |{\Theta}_{jj'}| \right\},
\end{equation}
where $\mathcal{S} = \{\mathbf{\Theta}: \mathbf{\Theta} \succ 0 \text{ and } \mathbf{\Theta}=\mathbf{\Theta}^T \}$. The solution to this optimization problem serves as an estimate for ${\bf \Sigma}^{-1}$. We now use the hub penalty function to extend the graphical lasso in order to accommodate hub nodes.
\subsection{Formulation and Algorithm}
\label{GGM:formulation}
We propose the \emph{hub graphical lasso} (HGL) optimization problem, which takes the form
\begin{eqnarray}
\label{Eq:ggmhub}
\underset{{\mathbf{\Theta}}\in \mathcal{S}} {\text{minimize}}
& & \left \{ -\log \det \mathbf{\Theta} + \text{trace}(\mathbf{S\Theta}) + \text{P}(\mathbf{\Theta}) \right\}.
\end{eqnarray}
Again, $\mathcal{S} = \{\mathbf{\Theta}:\mathbf{\Theta} \succ 0 \text{ and } \mathbf{\Theta}=\mathbf{\Theta}^T \}$.
\noindent It encourages a solution that contains hub nodes, as well as edges that connect non-hubs (Figure~\ref{Fig:ggmtoy}).
Problem (\ref{Eq:ggmhub}) can be solved using Algorithm \ref{Alg:general}.
The update for $\mathbf{\Theta}$ in Algorithm \ref{Alg:general} (Step 2(a)i) can be derived by minimizing
\begin{equation}
-\log \det \mathbf{\Theta} + \text{trace}(\mathbf{S\Theta}) + \frac{\rho}{2}\| \mathbf{\Theta} - \tilde{\mathbf{\Theta}} + \mathbf{W}_1 \|_F^2
\end{equation}
with respect to $\mathbf{\Theta}$ (note that the constraint $\mathbf{\Theta} \in \mathcal{S}$ in (\ref{Eq:ggmhub}) is treated as an implicit constraint, due to the domain of definition of the $\log \det$ function). This can be shown to have the solution
\[
{\mathbf{\Theta}}=\frac{1}{2}\mathbf{U}\left( \mathbf{D}+ \sqrt{\mathbf{D}^2+\frac{4}{\rho}\mathbf{I}} \right) \mathbf{U}^T,
\]
where $\mathbf{UDU}^T$ denotes the eigen-decomposition of $\tilde{\mathbf{\Theta}}-\mathbf{W}_1 -\frac{1}{\rho}\mathbf{S}$.
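The closed-form update can be sketched in a few lines (the function name is ours; correctness can be checked against the first-order condition $-\mathbf{\Theta}^{-1} + \mathbf{S} + \rho(\mathbf{\Theta} - \tilde{\mathbf{\Theta}} + \mathbf{W}_1) = \mathbf{0}$):

```python
import numpy as np

def update_theta(Theta_tilde, W1, S, rho):
    """Closed-form Theta update (Step 2(a)i) for the Gaussian negative
    log-likelihood loss, via a single eigen-decomposition."""
    D, U = np.linalg.eigh(Theta_tilde - W1 - S / rho)
    D_new = 0.5 * (D + np.sqrt(D**2 + 4.0 / rho))
    return (U * D_new) @ U.T  # U diag(D_new) U^T
```

Since $\frac{1}{2}(d + \sqrt{d^2 + 4/\rho}) > 0$ for every eigenvalue $d$, the update automatically returns a positive definite matrix, so the constraint $\mathbf{\Theta} \in \mathcal{S}$ never needs to be enforced explicitly.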
The complexity of the ADMM algorithm for HGL is $O(p^3)$ per iteration; this is the complexity of the eigen-decomposition for updating $\mathbf{\Theta}$. We now briefly compare the computational time for the ADMM algorithm for solving (\ref{Eq:ggmhub}) to that of an interior point method (using the solver \verb=Sedumi= called from \verb=cvx=). On a 1.86 GHz Intel Core 2 Duo machine, the interior point method takes $\sim 3$ minutes, while ADMM takes only 1 second, on a data set with $p=30$. We present a more extensive run time study for the ADMM algorithm for HGL in Appendix E.
\subsection{Conditions for HGL Solution to be Block Diagonal}
\label{GGM:blockdiagonal}
In order to reduce computations for solving the HGL problem, we now present a necessary condition and a sufficient condition for the HGL solution to be block diagonal, subject to some permutation of the rows and columns. The conditions depend only on the tuning parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$. These conditions build upon similar results in the context of Gaussian graphical models from the recent literature \citep[see, e.g.,][]{WittenFriedman11,MazumderHastie11,yang2012fused,Danaheretal2012,Mohanetal2013}.
Let $C_1, C_2, \ldots, C_K$ denote a partition of the $p$ features.
\begin{theorem}
A sufficient condition for the HGL solution to be block diagonal with blocks given by $C_1,C_2,\ldots,C_K$ is that $\min \left\{\lambda_1, \frac{\lambda_2}{2} \right\} > |S_{jj'}|$ for all $j\in C_k, j' \in C_{k'}, k\ne k'$.
\label{Theorem:BD}
\end{theorem}
\begin{theorem}
A necessary condition for the HGL solution to be block diagonal with blocks given by $C_1,C_2,\ldots,C_K$ is that $\min\left\{\lambda_1, \frac{\lambda_2+\lambda_3}{2}\right\} > |S_{jj'}|$ for all $j\in C_k, j' \in C_{k'}, k\ne k'$.
\label{Theorem:BD2}
\end{theorem}
Theorem~\ref{Theorem:BD} implies that one can screen the empirical covariance matrix $\mathbf{S}$ to check if the HGL solution is block diagonal \citep[using standard algorithms for identifying the connected components of an undirected graph; see, e.g.,][]{tarjan1972depth}. Suppose that the HGL solution is block diagonal with $K$ blocks, containing $p_1,\ldots,p_K$ features, and $\sum_{k=1}^K p_k = p$. Then, one can simply solve the HGL problem on the features within each block separately. Recall that the bottleneck of the HGL algorithm is the eigen-decomposition for updating $\mathbf{\Theta}$. The block diagonal condition leads to massive computational speed-ups for implementing the HGL algorithm: instead of computing an eigen-decomposition for a $p\times p$ matrix in each iteration of the HGL algorithm, we compute the eigen-decomposition of $K$ matrices of dimensions $p_1\times p_1,\ldots,p_K \times p_K$. The computational complexity per-iteration is reduced from $O(p^3)$ to $\sum_{k=1}^K O(p_k^3)$.
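The screening step implied by Theorem~\ref{Theorem:BD} amounts to thresholding $|\mathbf{S}|$ and computing connected components, e.g. by breadth-first search (a minimal sketch; the function name is ours):

```python
import numpy as np
from collections import deque

def screen_blocks(S, lam1, lam2):
    """Theorem 1 screening: threshold |S| at min(lam1, lam2/2), then find
    connected components; HGL can be solved within each block separately."""
    p = S.shape[0]
    adj = np.abs(S) > min(lam1, lam2 / 2.0)
    np.fill_diagonal(adj, False)
    labels = -np.ones(p, dtype=int)
    n_blocks = 0
    for start in range(p):
        if labels[start] >= 0:
            continue
        labels[start] = n_blocks
        queue = deque([start])              # breadth-first search from each unlabeled node
        while queue:
            i = queue.popleft()
            for j in np.flatnonzero(adj[i]):
                if labels[j] < 0:
                    labels[j] = n_blocks
                    queue.append(j)
        n_blocks += 1
    return n_blocks, labels
```

Features sharing a label form one block, and the HGL problem decouples across blocks.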
We illustrate the reduction in computational time due to these results in an example with $p=500$. Without exploiting Theorem~\ref{Theorem:BD}, the ADMM algorithm for HGL (with a particular set of tuning parameter values) takes 159 seconds; in contrast, it takes only 22 seconds when Theorem~\ref{Theorem:BD} is applied. The estimated precision matrix has 107 connected components, the largest of which contains 212 nodes.
\subsection{Some Properties of HGL}
\label{GGM:properties}
We now present several properties of the HGL optimization problem (\ref{Eq:ggmhub}), which can be used to provide guidance on the suitable range for the tuning parameters $\lambda_1$, $\lambda_2$, and $\lambda_3$. In what follows, ${\bf Z}^*$ and ${\bf V}^*$ denote the optimal solutions for $\bf Z$ and $\bf V$ in (\ref{Eq:ggmhub}). Let $\frac{1}{s}+\frac{1}{q} =1$ (recall that $q$ appears in (\ref{Eq:hubpenalty})).
\begin{lemma}
\label{Lemma:DiagonalZ}
A sufficient condition for $\mathbf{Z}^*$ to be a diagonal matrix is that $\lambda_1 > \frac{\lambda_2+\lambda_3}{2}$.
\end{lemma}
\begin{lemma}
\label{Lemma:DiagonalV}
A sufficient condition for $\mathbf{V}^*$ to be a diagonal matrix is that $\lambda_1 < \frac{\lambda_2}{2}+\frac{\lambda_3}{2(p-1)^{1/s}}$.
\end{lemma}
\begin{corollary}
A necessary condition for both $\mathbf{V}^*$ and $\bf{Z}^*$ to be non-diagonal matrices is that $\frac{\lambda_2}{2}+\frac{\lambda_3}{2(p-1)^{1/s}} \le \lambda_1 \le \frac{\lambda_2+\lambda_3}{2}$.
\end{corollary}
Furthermore, (\ref{Eq:ggmhub}) reduces to the graphical lasso problem (\ref{Eq:l1penalizeggm}) under a simple condition.
\begin{lemma}
\label{lemma3}
If $q=1$, then (\ref{Eq:ggmhub}) reduces to (\ref{Eq:l1penalizeggm}) with tuning parameter $\min \left \{\lambda_1, \frac{\lambda_2+\lambda_3}{2} \right\}$.
\end{lemma}
Note also that when $\lambda_2 \rightarrow \infty$ or $\lambda_3 \rightarrow \infty$, (\ref{Eq:ggmhub}) reduces to (\ref{Eq:l1penalizeggm}) with tuning parameter $\lambda_1$. However, throughout the rest of this paper, we assume that $q=2$, and $\lambda_2$ and $\lambda_3$ are finite.
The solution $\hat{\mathbf{\Theta}}$ of (\ref{Eq:ggmhub}) is unique, since (\ref{Eq:ggmhub}) is a strictly convex problem. We now consider the question of whether the decomposition $\hat{\mathbf{\Theta}}= \hat{\mathbf{V}} + \hat{\mathbf{V}}^T + \hat{\mathbf{Z}}$ is unique. We see that the decomposition is unique in a certain regime of the tuning parameters. For instance, according to Lemma~\ref{Lemma:DiagonalZ}, when $\lambda_1 > \frac{\lambda_2+\lambda_3}{2}$, $\hat{\mathbf{Z}}$ is a diagonal matrix and hence
$\hat{\mathbf{V}}$ is unique. Similarly, according to Lemma~\ref{Lemma:DiagonalV}, when $\lambda_1 < \frac{\lambda_2}{2} + \frac{\lambda_3}{2(p-1)^{1/s}}$, $\hat{\mathbf{V}}$ is a diagonal matrix and hence $\hat{\mathbf{Z}}$ is unique.
Studying more general conditions on $\mathbf{S}$ and on $\lambda_1$, $\lambda_2$, and $\lambda_3$ such that the decomposition is guaranteed to be unique is a challenging problem and is outside of the scope of this paper.
\subsection{Tuning Parameter Selection}
\label{Sec:tuning parameter}
In this section, we propose a \emph{Bayesian information criterion} (BIC)-type quantity for tuning parameter selection in (\ref{Eq:ggmhub}). Recall from Section~\ref{Sec:Penalty} that the hub penalty function (\ref{Eq:hubpenalty}) decomposes the parameter of interest into the sum of three matrices, $\mathbf{\Theta} = \mathbf{Z+V+V}^T$, and places an $\ell_1$ penalty on $\mathbf{Z}$, and an $\ell_1/\ell_2$ penalty on $\mathbf{V}$.
For the graphical lasso problem in (\ref{Eq:l1penalizeggm}), many authors have proposed to select the tuning parameter $\lambda$ such that $\hat{\mathbf{\Theta}}$ minimizes the following quantity:
\[
-n \cdot \log \det (\hat{\mathbf{\Theta}}) + n\cdot \text{trace}(\mathbf{S}\hat{\mathbf{\Theta}}) + \log (n) \cdot |\hat{\mathbf{\Theta}}|,
\]
where $|\hat{\mathbf{\Theta}}|$ is the cardinality of $\hat{\mathbf{\Theta}}$, that is, the number of unique non-zeros in $\hat{\mathbf{\Theta}}$ \citep[see, e.g.,][]{YuanLin07}.\footnote{The term $\log(n) \cdot |\hat{\mathbf{\Theta}}|$ is motivated by the fact that the degrees of freedom for an estimate involving the $\ell_1$ penalty can be approximated by the cardinality of the estimated parameter \citep{zou2007degrees}.}
Using a similar idea, we propose the following BIC-type quantity for selecting the set of tuning parameters $(\lambda_1, \lambda_2, \lambda_3)$ for (\ref{Eq:ggmhub}):
\[
\mathrm{BIC} (\hat{\mathbf{\Theta}},\hat{\mathbf{V}},\hat{\mathbf{Z}}) =-n \cdot \log \det (\hat{\mathbf{\Theta}}) + n \cdot \text{trace}(\mathbf{S}\hat{\mathbf{\Theta}}) + \log (n) \cdot |\hat{\mathbf{Z}}| + \log (n) \cdot \left(\nu+ c \cdot [|\hat{\mathbf{V}}| - \nu ]\right),
\]
where $\nu$ is the number of estimated hub nodes, that is, $\nu = \sum_{j=1}^p 1_{\{\|\hat{\mathbf{V}}_j\|_0 >0 \}}$, $c$ is a constant between zero and one, and $|\hat{\mathbf{Z}}|$ and $|\hat{\mathbf{V}}|$ are the cardinalities (the number of unique non-zeros) of $\hat{\mathbf{Z}}$ and $\hat{\mathbf{V}}$, respectively.\footnote{ The term $\log(n) \cdot |\hat{\mathbf{Z}}|$ is motivated by the degrees of freedom from the $\ell_1$ penalty, and the term $\log (n) \cdot \left(\nu+ c \cdot [|\hat{\mathbf{V}}| - \nu ]\right)$ is motivated by an approximation of the degrees of freedom of the $\ell_2$ penalty proposed in \citet{grouplasso}.}
We select the set of tuning parameters $(\lambda_1,\lambda_2,\lambda_3)$ for which the quantity BIC$(\hat{\mathbf{\Theta}},\hat{\mathbf{V}},\hat{\mathbf{Z}})$ is minimized.
Note that when the constant $c$ is small, BIC$(\hat{\mathbf{\Theta}},\hat{\mathbf{V}},\hat{\mathbf{Z}})$ will favor more hub nodes in $\hat{\mathbf{V}}$. In this manuscript, we take $c=0.2$.
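As a concrete sketch of this criterion (numpy assumed; the helper names and the $10^{-5}$ zero-threshold are our own choices, and we count a node as an estimated hub when its column of $\hat{\mathbf{V}}$ has a non-zero off-diagonal entry):

```python
import numpy as np

def card_sym(M, tol=1e-5):
    """Number of unique non-zeros of a symmetric matrix: each off-diagonal
    pair counted once, via the upper triangle including the diagonal."""
    return int(np.sum(np.abs(M[np.triu_indices_from(M)]) > tol))

def hgl_bic(S, Theta, V, Z, n, c=0.2, tol=1e-5):
    """BIC-type criterion for HGL; Theta = Z + V + V.T."""
    _, logdet = np.linalg.slogdet(Theta)
    bic = -n * logdet + n * np.trace(S @ Theta)
    bic += np.log(n) * card_sym(Z, tol)                    # l1 penalty part
    V_off = V - np.diag(np.diag(V))
    nu = int(np.sum(np.any(np.abs(V_off) > tol, axis=0)))  # estimated hub count
    card_V = int(np.sum(np.abs(V) > tol))
    bic += np.log(n) * (nu + c * (card_V - nu))            # l1/l2 penalty part
    return bic
```

One would evaluate `hgl_bic` over a grid of $(\lambda_1, \lambda_2, \lambda_3)$ and keep the fit with the smallest value.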
\subsection{Simulation Study}
\label{GGM:simulation}
In this section, we compare HGL to two sets of proposals: proposals that learn an Erd\H{o}s-R\'{e}nyi Gaussian graphical model, and proposals that learn a Gaussian graphical model in which some nodes are highly-connected.
\subsubsection{Notation and Measures of Performance}
\label{GGM:metric}
We start by defining some notation. Let $\hat{\mathbf{\Theta}}$ be the estimate of ${\bf \Theta}={\bf \Sigma}^{-1}$ from a given proposal, and let $\hat{\mathbf{\Theta}}_j$ be its $j$th column. Let $\mathcal{H}$ denote the set of indices of the hub nodes in $\mathbf{\Theta}$ (that is, this is the set of true hub nodes in the graph), and let $|\mathcal{H}|$ denote the cardinality of the set. In addition, let $\hat{\mathcal{H}}_r$ be the set of \emph{estimated hub nodes}: the set of nodes in $\hat{\bf \Theta}$ that are among the $|\mathcal{H}|$ most highly-connected nodes, and that have at least $r$ edges. The values chosen for $|\mathcal{H}|$ and $r$ depend on the simulation set-up, and will be specified in each simulation study.
We now define several measures of performance that will be used to evaluate the various methods.
\begin{itemize}
\item Number of correctly estimated edges: $\sum_{j <j'} \left(1_{\{ |\hat{\Theta}_{jj'}| > 10^{-5} \text{ and } { | {\Theta}_{jj'}}| \ne 0 \} }\right)$.
\item Proportion of correctly estimated hub edges:
$$\frac{\sum_{j\in \mathcal{H}, j' \ne j} \left(1_{\{ |\hat{\Theta}_{jj'}| > 10^{-5} \text{ and } |\Theta_{jj'}| \ne 0 \} }\right)}{\sum_{j \in \mathcal{H}, j' \ne j} \left( 1_{\{ |\Theta_{jj'}| \ne 0 \} } \right) }.$$
\item Proportion of correctly estimated hub nodes: $\frac{|\hat{\mathcal{H}}_r \cap \mathcal{H} |}{|\mathcal{H}|}$.
\item Sum of squared errors: $\sum_{j<j'} \left(\hat{\Theta}_{jj'} - \Theta_{jj'}\right)^2$.
\end{itemize}
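These measures can be computed as follows (a sketch; \texttt{hub\_idx} plays the role of $\mathcal{H}$, and the $10^{-5}$ threshold matches the definitions above):

```python
import numpy as np

def performance(Theta_hat, Theta, hub_idx, r, tol=1e-5):
    """The four measures of performance defined above (sketch)."""
    p = Theta.shape[0]
    est = np.abs(Theta_hat) > tol           # estimated non-zeros
    true = Theta != 0
    iu = np.triu_indices(p, k=1)
    n_correct_edges = int(np.sum(est[iu] & true[iu]))
    # hub edges: ordered pairs (j, j') with j a true hub and j' != j
    hub = np.zeros(p, dtype=bool)
    hub[list(hub_idx)] = True
    mask = hub[:, None] & ~np.eye(p, dtype=bool)
    prop_hub_edges = np.sum(est & true & mask) / np.sum(true & mask)
    # estimated hubs: among the |H| most-connected nodes, with >= r edges
    deg = est.sum(axis=0) - np.diag(est).astype(int)
    top = np.argsort(-deg)[: len(hub_idx)]
    H_hat = {j for j in top if deg[j] >= r}
    prop_hub_nodes = len(H_hat & set(hub_idx)) / len(hub_idx)
    sse = float(np.sum((Theta_hat[iu] - Theta[iu]) ** 2))
    return n_correct_edges, prop_hub_edges, prop_hub_nodes, sse
```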
\subsubsection{Data Generation}
\label{GGM:datagenerate}
We consider three set-ups for generating a $p\times p$ adjacency matrix $\mathbf{A}$.
\begin{enumerate}[I -]
\item Network with hub nodes: for all $i<j$, we set $A_{ij}=1$ with probability 0.02, and zero otherwise. We then set $A_{ji}$ equal to $A_{ij}$. Next, we randomly select $| \mathcal{H} |$ hub nodes and set the elements of the corresponding rows and columns of $\mathbf{A}$ to equal one with probability 0.7 and zero otherwise.
\item Network with two connected components and hub nodes: the adjacency matrix is generated as $\mathbf{A}=\begin{pmatrix} \mathbf{A}_1 &0 \\ 0 & \mathbf{A}_2 \end{pmatrix}$, with $\mathbf{A}_1$ and $\mathbf{A}_2$ as in Set-up I, each with $|\mathcal{H}|/2$ hub nodes.
\item Scale-free network:\footnote{Recall that our proposal is not intended for estimating a scale-free network.} the probability that a given node has $k$ edges is proportional to $k^{-\alpha}$. \cite{barabasi1999} observed that many real-world networks have $\alpha \in [2.1, 4]$; we took $\alpha=2.5$. Note that there is no natural notion of hub nodes in a scale-free network. While some nodes in a scale-free network have more edges than one would expect in an Erd\H{o}s-R\'{e}nyi graph, there is no clear distinction between ``hub" and ``non-hub" nodes, unlike in Set-ups I and II.
In our simulation settings, we consider any node that is connected to more than 5\% of all other nodes to be a hub node.\footnote{The cutoff threshold of 5\% is chosen in order to capture the most highly-connected nodes in the scale-free network. In our simulation study, around three nodes are connected to at least $0.05 \times p$ other nodes in the network. The precise choice of cut-off threshold has little effect on the results obtained in the figures that follow.}\end{enumerate}
\noindent We then use the adjacency matrix $\bf A$ to create a matrix $\mathbf{E}$, as
\begin{equation*}
E_{ij} \stackrel{\mathrm{i.i.d.}}{\sim}
\begin{cases}
0 & \text{if } A_{ij}=0\\
\text{Unif}([-0.75, -0.25] \cup [0.25, 0.75]) & \text{otherwise},\\
\end{cases}
\end{equation*}
and set $\bar{\mathbf{E}}= \frac{1}{2}(\mathbf{E}+\mathbf{E}^T)$.
Given the matrix $\bar{\mathbf{E}}$, we set $\mathbf{\Sigma}^{-1}$ equal to $\bar{\mathbf{E}}+(0.1-\Lambda_{\min}(\bar{\mathbf{E}}))\mathbf{I}$, where $\Lambda_{\min}(\bar{\mathbf{E}})$ is the smallest eigenvalue of $\bar{\mathbf{E}}$; this shift guarantees that $\mathbf{\Sigma}^{-1}$ is positive definite, with smallest eigenvalue equal to 0.1. We generate the data matrix $\mathbf{X}$ according to $\mathbf{x}_1,\ldots,\mathbf{x}_n \stackrel{\mathrm{i.i.d.}}{\sim} N(\mathbf{0}, \mathbf{\Sigma})$, and then standardize the variables to have standard deviation one.
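Set-up I and the subsequent steps can be sketched as follows (numpy assumed; the function and variable names are ours):

```python
import numpy as np

def setup_I_data(n, p, n_hubs, rng):
    """Generate (X, Theta) under Set-up I: sparse background edges plus
    n_hubs densely-connected rows/columns (sketch)."""
    A = np.triu(rng.random((p, p)) < 0.02, k=1).astype(float)
    A = A + A.T
    for h in rng.choice(p, size=n_hubs, replace=False):
        row = (rng.random(p) < 0.7).astype(float)
        A[h, :] = row
        A[:, h] = row
    np.fill_diagonal(A, 0)
    # E_ij ~ Unif([-0.75, -0.25] U [0.25, 0.75]) wherever A_ij = 1
    E = A * rng.choice([-1.0, 1.0], size=(p, p)) * rng.uniform(0.25, 0.75, (p, p))
    E_bar = (E + E.T) / 2
    # shift the spectrum so the smallest eigenvalue of Theta equals 0.1
    Theta = E_bar + (0.1 - np.linalg.eigvalsh(E_bar).min()) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)
    return X / X.std(axis=0, ddof=1), Theta  # unit standard deviation
```

Set-up II follows by placing two such blocks on the diagonal of the adjacency matrix before generating $\mathbf{E}$.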
\subsubsection{Comparison to Graphical Lasso and Neighbourhood Selection}
\label{GGM:results}
In this subsection, we compare the performance of HGL to two proposals that learn a sparse Gaussian graphical model.
\begin{itemize}
\item The graphical lasso (\ref{Eq:l1penalizeggm}), implemented using the \verb=R= package \verb=glasso=.
\item The neighborhood selection approach of \citet{mb2006}, implemented using the \verb=R= package \verb=glasso=. This approach solves $p$ $\ell_1$-penalized regression problems, each of which regresses one feature onto the others.
\end{itemize}
We consider the three simulation set-ups described in the previous section with $n=1000$, $p=1500$, and $|\mathcal{H}|=30$ hub nodes in Set-ups I and II. Figure~\ref{Fig:simulation1} displays the results, averaged over 100 simulated data sets. Note that the sum of squared errors is not computed for \citet{mb2006}, since it does not directly yield an estimate of $\mathbf{\Theta}=\mathbf{\Sigma}^{-1}$.
HGL has three tuning parameters. To obtain the curves shown in Figure~\ref{Fig:simulation1}, we fixed $\lambda_1=0.4$, considered three values of $\lambda_3$ (each shown in a different color in Figure~\ref{Fig:simulation1}), and used a fine grid of values of $\lambda_2$. The solid black circle in Figure~\ref{Fig:simulation1} corresponds to the set of tuning parameters $(\lambda_1,\lambda_2,\lambda_3)$ for which the BIC defined in Section~\ref{Sec:tuning parameter} is minimized. The graphical lasso and \citet{mb2006} each involve a single tuning parameter; we applied each over a fine grid of tuning parameter values to obtain the curves shown in Figure~\ref{Fig:simulation1}.
Results for Set-up I are displayed in Figures~\ref{Fig:simulation1}-I(a) through \ref{Fig:simulation1}-I(d), where we calculate the proportion of correctly estimated hub nodes as defined in Section~\ref{GGM:metric} with $r=300$. Since this simulation set-up exactly matches the assumptions of HGL, it is not surprising that HGL outperforms the other methods. In particular, HGL is able to identify most of the hub nodes when the number of estimated edges is approximately equal to the true number of edges. We see similar results for Set-up II in Figures~\ref{Fig:simulation1}-II(a) through \ref{Fig:simulation1}-II(d), where the proportion of correctly estimated hub nodes is as defined in Section~\ref{GGM:metric} with $r=150$.
In Set-up III, recall that we define a node that is connected to at least 5\% of all nodes to be a hub. The proportion of correctly estimated hub nodes is then as defined in Section~\ref{GGM:metric} with $r=0.05\times p$. The results are presented in Figures~\ref{Fig:simulation1}-III(a) through \ref{Fig:simulation1}-III(d). In this set-up, only approximately three of the nodes (on average) have more than 50 edges, and the hub nodes are not as highly-connected as in Set-up I or Set-up II. Nonetheless, HGL outperforms the graphical lasso and \citet{mb2006}.
Finally, we see from Figure~\ref{Fig:simulation1} that the set of tuning parameters $(\lambda_1,\lambda_2,\lambda_3)$ selected using BIC performs reasonably well. In particular, the graphical lasso solution always has a larger BIC than HGL, and hence is never selected.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{sim1a.pdf}
\includegraphics[scale=0.51]{sim2a.pdf}
\includegraphics[scale=0.51]{sim3a.pdf}
\end{center}
\caption{Simulation for Gaussian graphical model. Row I: Results for Set-up I. Row II: Results for Set-up II. Row III: Results for Set-up III. The results are for $n=1000$ and $p=1500$. In each panel, the $x$-axis displays the number of estimated edges, and the vertical gray line is the number of edges in the true network. The $y$-axes are as follows: Column (a): Number of correctly estimated edges; Column (b): Proportion of correctly estimated hub edges; Column (c): Proportion of correctly estimated hub nodes;
Column (d): Sum of squared errors. The black solid circles are the results for HGL based on tuning parameters selected using the BIC-type criterion defined in Section~\ref{Sec:tuning parameter}. Colored lines correspond to the graphical lasso {\protect\citep{SparseInv}} (\protect\includegraphics[height=0.4em]{black.png}); HGL with $\lambda_3=0.5$ (\protect\includegraphics[height=0.4em]{orange.png}), $\lambda_3=1$ (\protect\includegraphics[height=0.4em]{pink.png}), and $\lambda_3=2$ (\protect\includegraphics[height=0.4em]{red.png}); neighborhood selection {\protect\citep{mb2006}} (\protect\includegraphics[height=0.4em]{purple.png}).}
\label{Fig:simulation1}
\end{figure}%
\subsubsection{Comparison to Additional Proposals}
\label{subSec:other proposal}
In this subsection, we compare the performance of HGL to three additional proposals:
\begin{itemize}
\item The partial correlation screening procedure of \cite{HeroandRajaratnam2012}. The elements of the partial correlation matrix (computed using a pseudo-inverse when $p >n$) are thresholded based on their absolute value, and a hub node is declared if the number of nonzero elements in the corresponding column of the thresholded partial correlation matrix is sufficiently large. Note that the purpose of \citet{HeroandRajaratnam2012} is to screen for hub nodes, rather than to estimate the individual edges in the network.
\item The scale-free network estimation procedure of \cite{QiangLiu2011}. This is the solution to the non-convex optimization problem
\begin{equation}
\small
\label{Eq:LI}
\underset{\mathbf{\Theta}\in \mathcal{S}}{\mathrm{minimize}} \quad \left \{ -\log \det \mathbf{\Theta} + \text{trace}(\mathbf{S\Theta}) + \alpha \sum_{j=1}^p \log(\|\mathbf{\theta}_{\setminus j}\|_1 + \epsilon_j) + \sum_{j=1}^p \beta_j |\theta_{jj}| \right \},
\end{equation}
where $\theta_{\setminus j}= \{\theta_{jj'}| j' \ne j\}$, and $\epsilon_j$, $\beta_j$, and $\alpha$ are tuning parameters. Here, $\mathcal{S}=\{ \mathbf{\Theta} : \mathbf{\Theta} \succ 0 \text{ and } \mathbf{\Theta}=\mathbf{\Theta}^T \}$.
\item Sparse partial correlation estimation procedure of \citet{Space}, implemented using the \verb=R= package \verb=space=. This is an extension of the neighborhood selection approach of \citet{mb2006} that combines $p$ $\ell_1$-penalized regression problems in order to obtain a symmetric estimator. The authors claimed that the proposal performs well in estimating a scale-free network.
\end{itemize}
We generated data under Set-ups I and III (described in Section~\ref{GGM:datagenerate}) with $n=250$ and $p=500$,\footnote{In this subsection, a small value of $p$ was used due to the computations required to run the R package space, as well as computational demands of the \cite{QiangLiu2011} algorithm.} and with $|\mathcal{H}|=10$ for Set-up I. The results, averaged over 100 data sets, are displayed in Figures~\ref{Fig:simulation1b} and \ref{Fig:simulation1c}.
To obtain Figures~\ref{Fig:simulation1b} and \ref{Fig:simulation1c}, we applied \citet{QiangLiu2011} using a fine grid of $\alpha$ values, and using the choices for $\beta_j$ and $\epsilon_j$ specified by the authors: $\beta_j = 2 \alpha / \epsilon_j$, where $\epsilon_j$ is a small constant specified in \cite{QiangLiu2011}.
There are two tuning parameters in \citet{HeroandRajaratnam2012}: (1) $\rho$, the value used to threshold the partial correlation matrix, and (2) $d$, the number of non-zero elements required for a column of the thresholded matrix to be declared a hub node.
We used $d=\{10,20\}$ in Figures~\ref{Fig:simulation1b} and~\ref{Fig:simulation1c}, and used a fine grid of values for $\rho$. Note that the value of $d$ has no effect on the results for Figures~\ref{Fig:simulation1b}(a)-(b) and Figures~\ref{Fig:simulation1c}(a)-(b), and that larger values of $d$ tend to yield worse results in
Figures~\ref{Fig:simulation1b}(c) and \ref{Fig:simulation1c}(c). For \citet{Space}, we used a fine grid of tuning parameter values to obtain the curves shown in Figures~\ref{Fig:simulation1b} and~\ref{Fig:simulation1c}. The sum of squared errors was not reported for \citet{Space} and \citet{HeroandRajaratnam2012}, since they do not directly yield an estimate of the precision matrix. As a baseline reference, the graphical lasso is included in the comparison.
We see from Figure~\ref{Fig:simulation1b} that HGL outperforms the competitors when the underlying network contains hub nodes. It is not surprising that \citet{QiangLiu2011} yields better results than the graphical lasso, since the former approach is implemented via an iterative procedure: in each iteration, the graphical lasso is performed with an updated tuning parameter based on the estimate obtained in the previous iteration. \citet{HeroandRajaratnam2012} has the worst results in Figures~\ref{Fig:simulation1b}(a)-(b); this is not surprising, since the purpose of \citet{HeroandRajaratnam2012} is to screen for hub nodes, rather than to estimate the individual edges in the network.
From Figure~\ref{Fig:simulation1c}, we see that the performance of HGL is comparable to that of \citet{QiangLiu2011} and \citet{Space} under the assumption of a scale-free network; note that this is the precise setting for which \citet{QiangLiu2011}'s proposal is intended, and \citet{Space} reported that their proposal performs well in this setting. In contrast, HGL is not intended for the scale-free network setting (as mentioned in the Introduction, it is intended for a setting with hub nodes). Again, \citet{QiangLiu2011} and \citet{Space} outperform the graphical lasso, and \cite{HeroandRajaratnam2012} has the worst results in Figures~\ref{Fig:simulation1c}(a)-(b).
Finally, we see from Figures~\ref{Fig:simulation1b} and~\ref{Fig:simulation1c} that the BIC-type criterion for HGL proposed in Section~\ref{Sec:tuning parameter} yields good results.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{hglsimb.pdf}
\end{center}
\caption{Simulation for the Gaussian graphical model. Set-up I was applied with $n=250$ and $p=500$. Details of the axis labels and the solid black circles are as in Figure~\ref{Fig:simulation1}. The colored lines correspond to the graphical lasso {\protect\citep{SparseInv}} (\protect\includegraphics[height=0.5em]{black.png}); HGL with $\lambda_3=1$ (\protect\includegraphics[height=0.5em]{orange.png}), $\lambda_3=2$ (\protect\includegraphics[height=0.5em]{pink.png}), and $\lambda_3=3$ (\protect\includegraphics[height=0.5em]{red.png}); the hub screening procedure {\protect\citep{HeroandRajaratnam2012}} with $d=10$ (\protect\includegraphics[height=0.5em]{slateblue1.png}) and $d=20$ (\protect\includegraphics[height=0.5em]{green.png}); the scale-free network approach {\protect\citep{QiangLiu2011}} (\protect\includegraphics[height=0.5em]{purple.png}); sparse partial correlation estimation {\protect\citep{Space}} (\protect\includegraphics[height=0.5em]{blue.png}).}
\label{Fig:simulation1b}
\end{figure}%
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{hglsimc.pdf}
\end{center}
\caption{Simulation for the Gaussian graphical model. Set-up III was applied with $n=250$ and $p=500$. Details of the axis labels and the solid black circles are as in Figure~\ref{Fig:simulation1}. The colored lines correspond to the graphical lasso {\protect\citep{SparseInv}} (\protect\includegraphics[height=0.5em]{black.png}); HGL with $\lambda_3=1$ (\protect\includegraphics[height=0.5em]{orange.png}), $\lambda_3=2$ (\protect\includegraphics[height=0.5em]{pink.png}), and $\lambda_3=3$ (\protect\includegraphics[height=0.5em]{red.png}); the hub screening procedure {\protect\citep{HeroandRajaratnam2012}} with $d=10$ (\protect\includegraphics[height=0.5em]{slateblue1.png}) and $d=20$ (\protect\includegraphics[height=0.5em]{green.png}); the scale-free network approach {\protect\citep{QiangLiu2011}} (\protect\includegraphics[height=0.5em]{purple.png}); sparse partial correlation estimation {\protect\citep{Space}} (\protect\includegraphics[height=0.5em]{blue.png}).}
\label{Fig:simulation1c}
\end{figure}%
\section{The Hub Covariance Graph}
\label{Sec:Covariance}
In this section, we consider estimation of a covariance matrix under the assumption that $\mathbf{x}_1,\ldots,\mathbf{x}_n \stackrel{\mathrm{i.i.d.}}{\sim} N(\mathbf{0},\mathbf{\Sigma})$; this is of interest because the sparsity pattern of $\mathbf{\Sigma}$ specifies the structure of the marginal independence graph \citep[see, e.g.,][]{Drton2003covgraph,Chaudhurietal2007,DrtonRichardson08}. We extend the covariance estimator of \citet{Xueetal2012} to accommodate hub nodes.
\subsection{Formulation and Algorithm}
\label{Cov:formulation}
\citet{Xueetal2012} proposed to estimate $\bf \Sigma$ using
\begin{equation}
\label{Eq:CovXue}
\hat{\mathbf{\Sigma}} = \underset{\mathbf{\Sigma} \in \mathcal{S}}{\arg\min}\left\{ \frac{1}{2} \| \mathbf{\Sigma}- \mathbf{S} \|_F^2 + \lambda \|\mathbf{\Sigma}\|_1 \right\},
\end{equation}
where $\mathbf{S}$ is the empirical covariance matrix, $\mathcal{S} = \{\mathbf{\Sigma}: \mathbf{\Sigma} \succeq \epsilon \mathbf{I} \text{ and } \mathbf{\Sigma}=\mathbf{\Sigma}^T \}$, and $\epsilon$ is a small positive constant; we take $\epsilon = 10^{-4}$.
We extend (\ref{Eq:CovXue}) to accommodate hubs by imposing the hub penalty function (\ref{Eq:hubpenalty}) on $\mathbf{\Sigma}$. This results in the \emph{hub covariance graph} (HCG) optimization problem,
\begin{equation*}
\underset{\mathbf{\Sigma} \in \mathcal{S} } {\text{minimize}} \qquad
\left\{ \frac{1}{2} \|\mathbf{\Sigma}-\mathbf{S}\|_F^2 + \text{P}(\mathbf{\Sigma}) \right\},
\end{equation*}
which can be solved via Algorithm~\ref{Alg:general}. To update $\mathbf{\Theta}=\mathbf{\Sigma}$ in Step 2(a)i, we note that
\begin{equation*}
\underset{\mathbf{\Sigma} \in \mathcal{S}}{\arg \min} \left \{ \frac{1}{2} \|\mathbf{\Sigma}-\mathbf{S}\|_F^2 + \frac{\rho}{2} \| \mathbf{\Sigma}-\tilde{\mathbf{\Sigma}}+\mathbf{W}_1 \|_F^2 \right \}
= \frac{1}{1+\rho} (\mathbf{S}+\rho\tilde{\mathbf{\Sigma}}-\rho \mathbf{W}_1)^+,
\end{equation*}
where $(\mathbf{A})^+$ is the projection of a matrix $\mathbf{A}$ onto the convex cone $\{ \mathbf{\Sigma} \succeq \epsilon \mathbf{I} \}$. That is, if $\sum_{j=1}^p d_j \mathbf{u}_j \mathbf{u}_j^T$ denotes the eigen-decomposition of the matrix $\mathbf{A}$, then $(\mathbf{A})^+ = \sum_{j=1}^p \max(d_j,\epsilon) \mathbf{u}_j \mathbf{u}_j^T$. The complexity of the ADMM algorithm is $O(p^3)$ per iteration, due to the eigen-decomposition required to update $\mathbf{\Sigma}$.
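In code, the projection and the resulting $\mathbf{\Sigma}$ update look like this (a sketch; variable names mirror the text):

```python
import numpy as np

def proj_cone(A, eps=1e-4):
    """Project a symmetric matrix onto {Sigma : Sigma >= eps * I}
    by flooring its eigenvalues at eps."""
    d, U = np.linalg.eigh(A)
    return (U * np.maximum(d, eps)) @ U.T

def sigma_update(S, Sigma_tilde, W1, rho, eps=1e-4):
    """Step 2(a)i update for Sigma in the HCG ADMM algorithm (sketch)."""
    return proj_cone((S + rho * Sigma_tilde - rho * W1) / (1 + rho), eps)
```

The eigen-decomposition in `proj_cone` is the $O(p^3)$ step that dominates each ADMM iteration.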
\subsection{Simulation Study}
\label{Cov:simulation}
We compare HCG to two competitors for obtaining a sparse estimate of $\mathbf{\Sigma}$:
\begin{enumerate}
\item The non-convex $\ell_1$-penalized log-likelihood approach of \citet{BienTibs11}, using the \verb=R= package \verb=spcov=. This approach solves
\begin{equation*}
\label{Eq:Bien}
\underset{\mathbf{\Sigma} \succ 0 } {\text{minimize}}
\left \{ \log \det \mathbf{\Sigma} + \text{trace}(\mathbf{\Sigma}^{-1}\mathbf{S}) + \lambda \|\mathbf{\Sigma}\|_1\right\}.
\end{equation*}
\item The convex $\ell_1$-penalized approach of \citet{Xueetal2012}, given in (\ref{Eq:CovXue}).
\end{enumerate}
We first generated an adjacency matrix $\mathbf{A}$ as in Set-up I in Section~\ref{GGM:datagenerate}, modified to have $|\mathcal{H}| = 20$ hub nodes. Then $\bar{\mathbf{E}}$ was generated as described in Section~\ref{GGM:datagenerate}, and we set $\mathbf{\Sigma}$ equal to $\bar{\mathbf{E}} +(0.1-\Lambda_{\min}(\bar{\mathbf{E}}))\mathbf{I}$. Next, we generated $\mathbf{x}_1, \ldots, \mathbf{x}_n \stackrel{\mathrm{i.i.d.}}{\sim} N(\mathbf{0},\mathbf{\Sigma})$. Finally, we standardized the variables to have standard deviation one. In this simulation study, we set $n=500$ and $p=1000$.
Figure~\ref{Fig:CovSim1} displays the results, averaged over 100 simulated data sets. We calculated the proportion of correctly estimated hub nodes as defined in Section~\ref{GGM:metric} with $r=200$. We used a fine grid of tuning parameters for \citet{Xueetal2012} in order to obtain the curves shown in each panel of Figure~\ref{Fig:CovSim1}. HCG involves three tuning parameters, $\lambda_1$, $\lambda_2$, and $\lambda_3$. We fixed $\lambda_1 = 0.2$, considered three values of $\lambda_3$ (each shown in a different color), and varied $\lambda_2$ in order to obtain the curves shown in Figure~\ref{Fig:CovSim1}.
Figure~\ref{Fig:CovSim1} does not display the results for the proposal of \citet{BienTibs11}, due to computational constraints in the \verb=spcov= \verb=R= package. Instead, we compared our proposal to that of \citet{BienTibs11} using $n=100$ and $p=200$; those results are presented in Figure~\ref{Fig:CovSimsmall} in Appendix D.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{covsim1.pdf}
\end{center}
\caption{Covariance graph simulation with $n=500$ and $p=1000$. Details of the axis labels are as in Figure~\ref{Fig:simulation1}. The colored lines correspond to the proposal of {\protect\citet{Xueetal2012}} (\protect\includegraphics[height=0.5em]{black.png}); HCG with $\lambda_3=1$ (\protect\includegraphics[height=0.5em]{orange.png}), $\lambda_3=1.5$ (\protect\includegraphics[height=0.5em]{pink.png}), and $\lambda_3=2$ (\protect\includegraphics[height=0.5em]{red.png}).}
\label{Fig:CovSim1}
\end{figure}
We see that HCG outperforms the proposals of \citet{Xueetal2012} (Figures~\ref{Fig:CovSim1} and \ref{Fig:CovSimsmall}) and \citet{BienTibs11} (Figure~\ref{Fig:CovSimsmall}). These results are not surprising, since those other methods do not explicitly model the hub nodes.
\section{The Hub Binary Network}
\label{Sec:Binary}
In this section, we focus on estimating a binary Ising Markov random field, which we refer to as a binary network. We refer the reader to \citet{ravikumaretal2010} for an in-depth discussion of this type of graphical model and its applications.
In this set-up, each entry of the $n\times p$ data matrix $\mathbf{X}$ takes on a value of zero or one. We assume that the observations $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are i.i.d. with density
\begin{equation}
\label{Eq:Isingmodel}
p(\mathbf{x},\mathbf{\Theta}) = \frac{1}{Z(\mathbf{\Theta})}\exp \left[ \sum_{j=1}^p \theta_{jj} x_j +\sum_{1\le j < j' \le p} \theta_{jj'} x_j x_{j'} \right],
\end{equation}
where $Z({\mathbf{\Theta}})$ is the partition function, which ensures that the density sums to one.
Here $\mathbf{\Theta}$ is a $p\times p$ symmetric matrix that specifies the network structure: $\theta_{jj'}=0$ implies that the $j$th and $j'$th variables are conditionally independent.
In order to obtain a sparse graph, \citet{LeeSIetal2007} considered maximizing an $\ell_1$-penalized log-likelihood under this model. Due to the difficulty of computing the log-partition function, several authors have considered alternative approaches. For instance, \citet{ravikumaretal2010} proposed a neighborhood selection approach, which involves solving $p$ logistic regression problems separately; consequently, the estimated parameter matrix is not symmetric. In contrast, several authors have considered maximizing an $\ell_1$-penalized pseudo-likelihood with a symmetry constraint on $\mathbf{\Theta}$ \citep[see, e.g.,][]{Hoefling2009,jianguobinary,guoasymptotic}.
\subsection{Formulation and Algorithm}
\label{Binary:formulation}
Under the model (\ref{Eq:Isingmodel}), the log-pseudo-likelihood for $n$ observations takes the form
\begin{equation}
\label{Eq:logpseudo}
\sum_{j=1}^p \sum_{j'=1}^p \theta_{jj'} (\mathbf{X}^T\mathbf{X})_{jj'} - \sum_{i=1}^n\sum_{j=1}^p \log \left( 1+ \text{exp}\left[\theta_{jj}+ \sum_{j'\ne j} \theta_{jj'}x_{ij'}\right] \right),
\end{equation}
where $\mathbf{x}_i$ is the $i$th row of the $n\times p$ matrix $\mathbf{X}$. The proposal of \citet{Hoefling2009} involves maximizing (\ref{Eq:logpseudo}) subject to
an $\ell_1$ penalty on $\bf \Theta$. We propose to instead impose the hub penalty function (\ref{Eq:hubpenalty}) on $\mathbf{\Theta}$ in (\ref{Eq:logpseudo}) in order to estimate a sparse binary network with hub nodes.
This leads to the optimization problem
\begin{eqnarray}
\small
\label{Eq:binaryformulation}
\begin{aligned}
&\underset{\mathbf{\Theta} \in \mathcal{S}} {\text{minimize}}
&& \left \{ -\sum_{j=1}^p \sum_{j'=1}^p \theta_{jj'} (\mathbf{X}^T\mathbf{X})_{jj'} + \sum_{i=1}^n\sum_{j=1}^p \log \left( 1+ \text{exp}\left[\theta_{jj}+ \sum_{j'\ne j} \theta_{jj'}x_{ij'}\right] \right) + \text{P}(\mathbf{\Theta}) \right\},
\end{aligned}
\end{eqnarray}
where $\mathcal{S} = \{\mathbf{\Theta} : \mathbf{\Theta}=\mathbf{\Theta}^T \}$.
We refer to the solution to (\ref{Eq:binaryformulation}) as the \emph{hub binary network} (HBN).
The ADMM algorithm for solving (\ref{Eq:binaryformulation}) is given in Algorithm~\ref{Alg:general}. We solve the update
for $\mathbf{\Theta}$ in Step 2(a)i
using the Barzilai-Borwein method \citep{barzilai1988two}. The details are given in Appendix F.
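To make the smooth part of the objective concrete, the negative log-pseudo-likelihood appearing in (\ref{Eq:binaryformulation}) can be evaluated as follows (a numpy sketch, omitting the penalty $\text{P}(\mathbf{\Theta})$):

```python
import numpy as np

def neg_log_pseudolikelihood(Theta, X):
    """- sum_{j,j'} theta_jj' (X^T X)_jj'
       + sum_i sum_j log(1 + exp(theta_jj + sum_{j' != j} theta_jj' x_ij'))."""
    term1 = np.sum(Theta * (X.T @ X))
    # eta_ij = theta_jj + sum_{j' != j} theta_jj' x_ij'
    Eta = np.diag(Theta)[None, :] + X @ (Theta - np.diag(np.diag(Theta)))
    return -term1 + np.sum(np.logaddexp(0.0, Eta))  # stable log(1 + exp(.))
```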
\subsection{Simulation Study}
\label{Binary:simulation}
Here we compare the performance of HBN to the proposal of \citet{Hoefling2009}, implemented using the \verb=R= package \verb=BMN=.
We simulated a binary network with $p=50$ and $|\mathcal{H}| = 5$ hub nodes. To generate the parameter matrix $\mathbf{\Theta}$, we created an adjacency matrix $\mathbf{A}$ as in Set-up I of Section~\ref{GGM:datagenerate} with five hub nodes. Then $\bar{\mathbf{E}}$ was generated as in Section~\ref{GGM:datagenerate}, and we set $\mathbf{\Theta}= \bar{\mathbf{E}}$.
Each of $n=100$ observations was generated using Gibbs sampling \citep{ravikumaretal2010,jianguobinary}. Suppose that $x_1^{(t)},\ldots, x_p^{(t)}$ are obtained at the $t$th iteration of the Gibbs sampler. Then, the $(t+1)$th iteration is obtained according to
\[
x_{j}^{(t+1)} \sim \text{Bernoulli} \left( \frac{\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{j'}^{(t)})}{1+\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{j'}^{(t)})} \right) \qquad \text{for } j=1,\ldots, p.
\]
We took the first $10^5$ iterations as our burn-in period, and then collected an observation every $10^4$ iterations, such that the observations were nearly independent \citep{jianguobinary}.
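The sampler can be sketched as follows (burn-in and thinning lengths are parameters; in the simulation above they were $10^5$ and $10^4$):

```python
import numpy as np

def ising_gibbs(Theta, n, burn_in, thin, rng):
    """Draw n near-independent binary samples by Gibbs sampling: each
    coordinate is resampled from its conditional Bernoulli given the rest."""
    p = Theta.shape[0]
    x = rng.integers(0, 2, size=p).astype(float)
    samples = []
    for t in range(burn_in + n * thin):
        for j in range(p):
            # eta = theta_jj + sum_{j' != j} theta_jj' x_j'
            eta = Theta[j, j] + Theta[j] @ x - Theta[j, j] * x[j]
            x[j] = float(rng.random() < 1.0 / (1.0 + np.exp(-eta)))
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(x.copy())
    return np.array(samples)
```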
The results, averaged over 100 data sets, are shown in Figure~\ref{Fig:BinarySim1}. We used a fine grid of values for the $\ell_1$ tuning parameter of \citet{Hoefling2009} to obtain the curves shown in each panel of the figure. For HBN, we fixed $\lambda_1=5$, considered $\lambda_3 \in \{15,25,30\}$, and used a fine grid of values of $\lambda_2$. The proportion of correctly estimated hub nodes was calculated using the definition in Section~\ref{GGM:metric} with $r=20$. Figure~\ref{Fig:BinarySim1} indicates that HBN consistently outperforms the proposal of \citet{Hoefling2009}.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{BinarySim1.pdf}
\end{center}
\caption{Binary network simulation with $n=100$ and $p=50$. Details of the axis labels are as in Figure~\ref{Fig:simulation1}. The colored
lines correspond to the $\ell_1$-penalized pseudo-likelihood proposal of
\protect\citet{Hoefling2009}
(\protect\includegraphics[height=0.5em]{blue.png}); and HBN with
$\lambda_3=15$ (\protect\includegraphics[height=0.5em]{orange.png}), $\lambda_3=25$ (\protect\includegraphics[height=0.5em]{red.png}), and $\lambda_3=30$
(\protect\includegraphics[height=0.5em]{pink.png}).}
\label{Fig:BinarySim1}
\end{figure}
\section{Real Data Application}
\label{Sec:realdata}
We now apply HGL to a university webpage data set, and a brain cancer data set.
\subsection{Application to University Webpage Data}
\label{sec:real_data_analysis} We applied HGL to the university
webpage data set from the ``World Wide Knowledge Base" project at
Carnegie Mellon University. This data set was pre-processed by
\citet{webpage2011}. The
data set consists of the occurrences of various terms (words) on webpages from four computer science departments at
Cornell, Texas, Washington, and Wisconsin. We consider only the 544
student webpages, and select the 100 terms with the largest entropy for
our analysis. In what follows, we model these 100 terms as the nodes in a Gaussian graphical model.
The goal of the analysis is to understand the relationships among
the terms that appear on the student webpages. In particular, we wish to identify terms that are hubs. We are not interested in identifying edges between non-hub nodes. For this reason, we fix the tuning parameter that controls the sparsity of $\mathbf{Z}$ at $\lambda_1 = 0.45$ such that the matrix $\mathbf{Z}$ is sparse. In the interest of a graph that is interpretable, we fix $\lambda_3=1.5$ to obtain only a few hub nodes, and then select a value of $\lambda_2$ ranging from 0.1 to 0.5 using the BIC-type criterion presented in Section~\ref{Sec:tuning parameter}. We performed HGL with the selected tuning parameters $\lambda_1=0.45$, $\lambda_2=0.25$, and $\lambda_3=1.5$.\footnote{The results are qualitatively similar for different values of $\lambda_1$.} The estimated matrices are shown in Figure
\ref{fig:webpage_V_Z}.
Figure \ref{fig:webpage_V_Z}(a) indicates that six hub nodes are detected:
\emph{comput}, \emph{research}, \emph{scienc}, \emph{software}, \emph{system}, and \emph{work}. For instance, the fact that \emph{comput} is a hub indicates that many terms' occurrences are explained by the occurrence of the word \emph{comput}.
From Figure \ref{fig:webpage_V_Z}(b), we see that several pairs of terms take on non-zero values in the matrix $(\mathbf{Z} - \mathrm{diag}(\mathbf{Z}))$. These include \emph{(depart, univers)}; \emph{(home, page)}; \emph{(institut, technolog)}; \emph{(graduat, student)}; \emph{(univers, scienc)}; and \emph{(languag, program)}. These results provide an intuitive explanation of the relationships among the terms in the webpages.
\begin{figure}[htp]
\begin{center}
(a) \hspace{85mm} (b)
\includegraphics[scale=0.68]{webpage.pdf}
\end{center}
\caption{Results for HGL on the webpage data with tuning parameters selected using BIC: $\lambda_1 = 0.45$, $\lambda_2 = 0.25$, $\lambda_3 = 1.5$. Non-zero estimated values are shown, for \emph{(a):} $(\mathbf{V} - \mathrm{diag}(\mathbf{V}))$, and \emph{(b):} $(\mathbf{Z} - \mathrm{diag}(\mathbf{Z}))$.}
\label{fig:webpage_V_Z}
\end{figure}
\subsection{Application to Gene Expression Data}
We applied HGL to a publicly available cancer gene expression
data set \citep{cancer2012}. The data set consists of mRNA expression levels for
17,814 genes in 401 patients with glioblastoma multiforme (GBM), an
extremely aggressive cancer with very poor patient prognosis. Among 7,462 genes known to be associated
with cancer \citep{malacards2013}, we selected 500 genes
with the highest variance.
We
aim to reconstruct the gene regulatory network that represents the interactions among the genes, as well as to identify hub genes that
tend to have many interactions with other genes. Such genes likely play an important role in regulating many other
genes in the network. Identifying such regulatory genes will lead to a better
understanding of brain cancer, and
eventually may lead to new therapeutic targets. Since we are interested in identifying hub genes, and not as interested in identifying edges between non-hub nodes, we fix $\lambda_1=0.6$ so that the matrix $\mathbf{Z}$ is sparse. We fix $\lambda_3=6.5$ to obtain a few hub nodes, and we select $\lambda_2$ from a grid of values ranging from 0.1 to 0.7 using the BIC-type criterion presented in Section~\ref{Sec:tuning parameter}.
We applied HGL with this set of tuning parameters to the empirical covariance matrix corresponding to the $401 \times 500$ data matrix, after standardizing each gene to have variance one.
In Figure~\ref{Figure:gene-network}, we plot the resulting
network (for simplicity, only the 438 genes with at least two neighbors are displayed). Five
genes are identified as hubs: TRIM48, TBC1D2B, PTPN2,
ACRC, and ZNF763, in decreasing order of the number of estimated edges.
Interestingly, some of these genes have known regulatory roles.
PTPN2 is known to be a signaling molecule that regulates a variety
of cellular processes including cell growth, differentiation,
mitotic cycle, and oncogenic transformation~\citep{entrez}.
ZNF763 is a DNA-binding protein that regulates the transcription of
other genes~\citep{entrez}. These genes do not appear to be highly-connected to many other genes in the estimate that results from applying the graphical lasso (\ref{Eq:l1penalizeggm}) to this same data set (results not shown).
These results indicate that HGL can be used to recover known regulators, as well as to suggest other potential regulators that may be targets for follow-up analysis.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.85]{braincancer.pdf}
\includegraphics[scale=0.83]{Legendigraph.pdf}
\end{center}
\caption{ Results for HGL on the GBM data with tuning parameters selected using BIC: $\lambda_1 = 0.6$, $\lambda_2 = 0.4$,
$\lambda_3 = 6.5$. Only nodes with at least two edges in the estimated network are displayed. Nodes displayed in pink were found to be hubs by the HGL algorithm.}
\label{Figure:gene-network}
\end{figure}
\section{Discussion}
\label{Sec:Discussion}
We have proposed a general framework for estimating a network with hubs by way of a convex penalty function. The proposed framework has three tuning parameters, so that it can flexibly accommodate different numbers of hubs, sparsity levels within a hub, and connectivity levels among non-hubs. We have proposed a BIC-type quantity to select tuning parameters for our proposal. We note that tuning parameter selection in unsupervised settings remains a challenging open problem \citep[see, e.g.,][]{FoygelDrton10,StabilitySelection}.
In practice, tuning parameters could also be set based on domain knowledge or a desire for interpretability of the resulting estimates.
The framework proposed in this paper assumes an underlying model involving a set of edges between non-hub nodes, as well as a set of hub nodes.
For instance, it is believed that such hub nodes arise in biology, in which ``super hubs" in transcriptional regulatory networks may play important roles \citep{Haoetal2012}.
We note here that the underlying model of hub nodes assumed in this paper differs fundamentally from a scale-free network in which the degree of connectivity of the nodes follows a power law distribution---scale-free networks simply do not have such very highly-connected hub nodes. In fact, we have shown that existing techniques for estimating a scale-free network, such as
\citet{QiangLiu2011} and \citet{Defazio2012}, cannot accommodate the very dense hubs for which our proposal is intended.
As discussed in Section~\ref{Sec:Penalty}, the hub penalty function involves decomposing a parameter matrix $\bf \Theta$ into ${\bf Z}+{\bf V}+{\bf V}^T$, where $\bf Z$ is a sparse matrix, and $\bf V$ is a matrix whose columns are entirely zero or (almost) entirely non-zero.
In this paper, we used an $\ell_1$ penalty on $\bf Z$ in order to encourage it to be sparse. In effect, this amounts to assuming that the non-hub nodes obey an Erd\H{o}s-R\'{e}nyi network. But our formulation could be easily modified to accommodate a different network
prior for the non-hub nodes. For instance, we could assume that the non-hub nodes obey a scale-free network, using the ideas developed in \citet{QiangLiu2011} and \citet{Defazio2012}. This would amount to modeling a scale-free network with hub nodes.
In this paper, we applied the proposed framework to the tasks of estimating a Gaussian graphical model, a covariance graph model, and a binary network. The proposed framework can also be applied to other types of graphical models, such as the Poisson graphical model \citep{AllenLiu2012} or the exponential family graphical model \citep{Yangetal2012}.
In future work, we will study the theoretical statistical properties of the HGL formulation. For instance, in the context of the graphical lasso, it is known that the rate of statistical convergence depends upon the maximal degree of any node in the network \citep{Ravikumar2011}.
It would be interesting to see whether HGL theoretically outperforms the graphical lasso in the setting in which the true underlying network contains hubs. Furthermore, it will be of interest to study HGL's hub recovery properties from a theoretical perspective.
An \verb=R= package \verb=hglasso= is publicly available on the authors' websites and on \verb=CRAN=.
\acks{We thank three reviewers for helpful comments that improved the quality of this manuscript. We thank Qiang Liu for helpful responses to our inquiries regarding \citet{QiangLiu2011}. The authors acknowledge funding from the following sources: NIH DP5OD009145, NSF CAREER DMS-1252624, and a Sloan Research Fellowship to DW; NSF CAREER ECCS-0847077 to MF; and a Univ. Washington Royalty Research Fund award to DW, MF, and SL.}
\newpage
\section*{Appendix A: Derivation of Algorithm~\ref{Alg:general}}
Recall that the scaled augmented Lagrangian for (\ref{Eq:reformulate}) takes the form
\begin{equation}
\label{Appendix:lagrangian}
\begin{split}
L(\mathbf{B} ,\tilde{\mathbf{B}},\mathbf{W}) &= \ell(\mathbf{X},\mathbf{\Theta}) + \lambda_1 \| \mathbf{Z} - \text{diag}(\mathbf{Z})\|_1+ \lambda_2 \| \mathbf{V} - \text{diag}(\mathbf{V})\|_1 \\
&+\lambda_3 \sum_{j=1}^p \| (\mathbf{V} - \text{diag}(\mathbf{V}))_j \|_2 +g(\tilde{\mathbf{B}}) +\frac{\rho}{2}\|\mathbf{B}-\tilde{\mathbf{B}}+\mathbf{W} \|^2_F.\\
\end{split}
\end{equation}
\noindent The proposed ADMM algorithm requires the following updates:
\begin{enumerate}
\item $\mathbf{B}^{(t+1)} \leftarrow \underset{\mathbf{B}}{\text{argmin }} L(\mathbf{B},\tilde{\mathbf{B}}^{(t)},\mathbf{W}^{(t)})$,
\item $\tilde{\mathbf{B}}^{(t+1)} \leftarrow \underset{\tilde{\mathbf{B}}}{\text{argmin }} L(\mathbf{B}^{(t+1)},\tilde{\mathbf{B}},\mathbf{W}^{(t)})$,
\item $\mathbf{W}^{(t+1)} \leftarrow \mathbf{W}^{(t)}+\mathbf{B}^{(t+1)}-\tilde{\mathbf{B}}^{(t+1)}$.
\end{enumerate}
\noindent We now proceed to derive the updates for $\mathbf{B}$ and $\tilde{\mathbf{B}}$.
\subsection*{Updates for $\mathbf{B}$}
\noindent To obtain updates for $\mathbf{B}=(\mathbf{\Theta,V,Z})$, we exploit the fact that (\ref{Appendix:lagrangian}) is separable in $\mathbf{\Theta}, \mathbf{V}$, and $\mathbf{Z}$. Therefore, we can simply update with respect to $\mathbf{\Theta}, \mathbf{V}$, and $\mathbf{Z}$ one at a time. The update for $\mathbf{\Theta}$ depends on the form of the convex loss function, and is addressed in the main text. The updates for $\bf V$ and $\bf Z$ can easily be seen to take the form given in Algorithm 1.
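For reference, those $\bf V$ and $\bf Z$ updates reduce to two standard proximal operators: elementwise soft-thresholding for the $\ell_1$ terms, and groupwise soft-thresholding for the column penalty in the $q=2$ case. A minimal sketch (the helper names are ours):

```python
import numpy as np

def soft_threshold(A, lam):
    """Elementwise soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def group_soft_threshold(v, lam):
    """Groupwise soft-thresholding: the proximal operator of lam * ||.||_2
    applied to one column of V at a time (the q = 2 case)."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= lam else (1.0 - lam / nrm) * v
```

The group operator either zeroes a column entirely or shrinks it toward zero while preserving its direction, which is what encourages the all-zero versus dense column structure of $\bf V$.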
\subsection*{Updates for $\tilde{\mathbf{B}}$}
Minimizing the function in (\ref{Appendix:lagrangian}) with respect to $\tilde{\mathbf{B}}$ is equivalent to
\begin{equation}
\label{Equation:lagrangian3}
\begin{aligned}
& \underset{{\tilde{\mathbf{\Theta}}},\tilde{\mathbf{V}},\tilde{\mathbf{Z}} } {\text{minimize}}
&&\left\{ \frac{\rho}{2}\|\mathbf{\Theta} -\tilde{\mathbf{\Theta}}+ \mathbf{W}_1 \|^2_F +\frac{\rho}{2}\|\mathbf{V} -\tilde{\mathbf{V}}+ \mathbf{W}_2 \|^2_F + \frac{\rho}{2}\|\mathbf{Z} -\tilde{\mathbf{Z}}+ \mathbf{W}_3 \|^2_F\right\}\\
& \text{subject to}
& & \tilde{\mathbf{\Theta}} = \tilde{\mathbf{Z}}+\tilde{\mathbf{V}} + \tilde{\mathbf{V}}^T.
\end{aligned}
\end{equation}
Let $\mathbf{\Gamma}$ be the $p\times p$ Lagrange multiplier matrix for the equality constraint. Then, the Lagrangian for (\ref{Equation:lagrangian3}) is
\begin{equation*}
\frac{\rho}{2}\|\mathbf{\Theta} -\tilde{\mathbf{\Theta}}+ \mathbf{W}_1 \|^2_F +\frac{\rho}{2}\|\mathbf{V} -\tilde{\mathbf{V}}+ \mathbf{W}_2 \|^2_F + \frac{\rho}{2}\|\mathbf{Z} -\tilde{\mathbf{Z}}+ \mathbf{W}_3 \|^2_F + \langle \mathbf{\Gamma} ,\tilde{\mathbf{\Theta}} - \tilde{\mathbf{Z}}-\tilde{\mathbf{V}} - \tilde{\mathbf{V}}^T\rangle.
\end{equation*}
A little bit of algebra yields
\[
\tilde{\mathbf{\Theta}} = \mathbf{\Theta} + \mathbf{W}_1 - \frac{1}{\rho} \mathbf{\Gamma},
\]
\[
\tilde{\mathbf{V}} = \frac{1}{\rho}( \mathbf{\Gamma+\Gamma}^T) +\mathbf{V} + \mathbf{W}_2,
\]
\[
\tilde{\mathbf{Z}} = \frac{1}{\rho} \mathbf{\Gamma}+ \mathbf{Z} + \mathbf{W}_3,
\]
where $\mathbf{\Gamma} = \frac{\rho}{6}[(\mathbf{\Theta} + \mathbf{W}_1) -(\mathbf{V+W}_2) - (\mathbf{V+W}_2)^T-(\mathbf{Z+W}_3)]$.
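As a sanity check, the closed-form expressions above satisfy the equality constraint $\tilde{\mathbf{\Theta}} = \tilde{\mathbf{Z}}+\tilde{\mathbf{V}} + \tilde{\mathbf{V}}^T$ whenever $\mathbf{\Theta}$, $\mathbf{Z}$, $\mathbf{W}_1$, and $\mathbf{W}_3$ are symmetric (as they are throughout the algorithm), since $\mathbf{\Gamma}$ is then symmetric. This can be verified numerically; the snippet below is an illustration with random symmetric inputs, not part of the algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
p, rho = 5, 2.0
sym = lambda A: (A + A.T) / 2.0

# Theta, Z, W1, W3 are symmetric in the algorithm; V, W2 need not be.
Theta, Z = sym(rng.normal(size=(p, p))), sym(rng.normal(size=(p, p)))
W1, W3 = sym(rng.normal(size=(p, p))), sym(rng.normal(size=(p, p)))
V, W2 = rng.normal(size=(p, p)), rng.normal(size=(p, p))

Gamma = (rho / 6.0) * ((Theta + W1) - (V + W2) - (V + W2).T - (Z + W3))
Theta_t = Theta + W1 - Gamma / rho
V_t = (Gamma + Gamma.T) / rho + V + W2
Z_t = Gamma / rho + Z + W3

# the minimizer satisfies the equality constraint
assert np.allclose(Theta_t, Z_t + V_t + V_t.T)
```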
\section*{Appendix B: Conditions for HGL Solution to be Block-Diagonal }
We begin by introducing some notation.
Let $\|\mathbf{V}\|_{u,v}$ be the $\ell_u / \ell_v$ norm of a matrix $\mathbf{V}$. For instance,
$\| \mathbf{V} \|_{1,q} = \sum_{j=1}^p \| \mathbf{V}_j \|_q $.
We define the support of a matrix $\bf \Theta$ as follows: $\text{supp}(\mathbf{\Theta}) = \{(i,j): \Theta_{ij} \ne 0\}$. We say that $\mathbf{\Theta}$ is supported on a set $\mathcal{G}$ if $\text{supp}(\mathbf{\Theta})\subseteq \mathcal{G}$.
Let $\{C_1,\ldots, C_K\}$ be a partition of the index set $\{1,\ldots,p \}$, and let $\mathcal{T} = \cup_{k=1}^K \{ C_k \times C_k\}$.
We let $\mathbf{A}_{\mathcal{T}}$ denote the restriction of the matrix $\mathbf{A}$ to the set $\mathcal{T}$: that is, $(\mathbf{A}_{\mathcal{T}})_{ij}=0$ if $(i,j)\notin \mathcal{T}$ and $(\mathbf{A}_{\mathcal{T}})_{ij}=A_{ij}$ if $(i,j)\in \mathcal{T}$. Note that any matrix supported on $\mathcal{T}$ is block-diagonal with $K$ blocks, subject to some permutation of its rows and columns. Also, let $S_{\max} = \underset{(i,j)\in \mathcal{T}^c}\max |S_{ij}|$. Define
\begin{eqnarray}
\label{normequation}
\begin{array}{rccl}
\tilde{\mathbf{P}}(\mathbf{\Theta}) &=& \underset{{\mathbf{V, Z}}} {\text{min}} & \| \mathbf{Z}-\text{diag}(\mathbf{Z})\|_1 + \hat{\lambda}_2 \| \mathbf{V}-\text{diag}(\mathbf{V})\|_1 + \hat{\lambda}_3 \|\mathbf{V}-\text{diag}(\mathbf{V}) \|_{1,q}\\
&& \text{subject to} & \mathbf{\Theta} = \mathbf{Z+V+V}^T,
\end{array}
\end{eqnarray}
where $\hat{\lambda}_2=\frac{\lambda_2}{\lambda_1}$ and $\hat{\lambda}_3 = \frac{\lambda_3}{\lambda_1}$. Then, optimization problem (\ref{Eq:ggmhub}) is equivalent to
\begin{eqnarray}
\label{Equation:reduceHGL}
\underset{{\mathbf{\Theta}}\in \mathcal{S}} {\text{minimize}} & -\log \det(\mathbf{\Theta}) +\langle \mathbf{\Theta,S}\rangle + \lambda_1 \tilde{\mathbf{P}}(\mathbf{\Theta}),
\end{eqnarray}
where $\mathcal{S}= \{\mathbf{\Theta}:\mathbf{\Theta} \succ 0 ,\mathbf{\Theta}=\mathbf{\Theta}^T\}$.
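The sufficient condition established below can be used as a screening rule: threshold $|\mathbf{S}|$ at $\min(\lambda_1, \lambda_2/2)$ and take connected components, so that each component can be solved as a separate HGL problem. A sketch using union-find follows; `hgl_blocks` is our own helper name, not part of the hglasso package.

```python
import numpy as np

def hgl_blocks(S, lam1, lam2):
    """Connected components of the graph with an edge (i, j) whenever
    |S_ij| >= min(lam1, lam2 / 2). Across components the sufficient
    condition holds, so the HGL solution is block-diagonal with respect
    to the returned labels (up to a permutation of the variables)."""
    p = S.shape[0]
    thr = min(lam1, lam2 / 2.0)
    parent = list(range(p))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(p):
        for j in range(i + 1, p):
            if abs(S[i, j]) >= thr:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(p)])
```

For instance, a covariance matrix whose off-block entries are all below the threshold yields one label per block, and the estimation then decouples accordingly.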
\subsection*{Proof of Theorem 1 (Sufficient Condition)}
\begin{proof}
First, we note that if $(\mathbf{\Theta,V,Z})$ is a feasible solution to (\ref{Eq:ggmhub}), then $(\mathbf{\Theta_{\mathcal{T}}},\mathbf{V_{\mathcal{T}}},\mathbf{Z_{\mathcal{T}}} )$ is also a feasible solution to (\ref{Eq:ggmhub}). Assume that $(\mathbf{\Theta,V,Z})$ is not supported on $\mathcal{T}$. We want to show that the objective value of (\ref{Eq:ggmhub}) evaluated at $(\mathbf{\Theta}_{\mathcal{T}},\mathbf{V}_{\mathcal{T}},\mathbf{Z}_{\mathcal{T}})$ is smaller than the objective value of (\ref{Eq:ggmhub}) evaluated at $(\mathbf{\Theta,V,Z})$. By Fischer's inequality \citep{HJ85},
\[
-\log \det (\mathbf{\Theta})\ge - \log \det (\mathbf{\Theta_{\mathcal{T}}}).
\]
Therefore, it remains to show that
\begin{eqnarray*} \label{Equation:Thm1}
\begin{array}{rcl}
\langle \mathbf{\Theta, S}\rangle+\lambda_1 \|{\mathbf{Z}} - \text{diag}({\mathbf{Z}})\|_1 + \lambda_2 \|{\mathbf{V}} - \text{diag}({\mathbf{V}})\|_1 + \lambda_3 \|{\mathbf{V}} - \text{diag}({\mathbf{V}})\|_{1,q} &>& \\
\langle \mathbf{\Theta}_{\mathcal{T}}, \mathbf{S}\rangle+\lambda_1 \|{\mathbf{Z}_{\mathcal{T}}} - \text{diag}({\mathbf{Z}_{\mathcal{T}}})\|_1 + \lambda_2 \|{\mathbf{V}_{\mathcal{T}}} - \text{diag}({\mathbf{V}_{\mathcal{T}}})\|_1 + \lambda_3 \|{\mathbf{V}_{\mathcal{T}}} - \text{diag}({\mathbf{V}_{\mathcal{T}}})\|_{1,q}, \\
\end{array}
\end{eqnarray*}
\noindent or equivalently, that
\begin{equation*}
\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S} \rangle + \lambda_1 \| \mathbf{Z}_{\mathcal{T}^c} \|_1+ \lambda_2 \| \mathbf{V}_{\mathcal{T}^c}\|_1+ \lambda_3(\| \mathbf{V}-\text{diag}(\mathbf{V}) \|_{1,q} -\| \mathbf{V}_{\mathcal{T}}-\text{diag}(\mathbf{V}_{\mathcal{T}}) \|_{1,q} ) > 0.
\end{equation*}
\noindent Since $\| \mathbf{V}-\text{diag}(\mathbf{V}) \|_{1,q} \ge \| \mathbf{V}_{\mathcal{T}}-\text{diag}(\mathbf{V}_{\mathcal{T}}) \|_{1,q}$, it suffices to show that
\begin{equation}
\label{Equation:Thm1-2-2}
\begin{split}
\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S} \rangle+ \lambda_1 \| \mathbf{Z}_{\mathcal{T}^c} \|_1+ \lambda_2 \| \mathbf{V}_{\mathcal{T}^c}\|_1 > 0.
\end{split}
\end{equation}
\noindent Note that $\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S} \rangle$ = $\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S}_{\mathcal{T}^c} \rangle$. By the sufficient condition, $S_{\max} < \lambda_1$ and $2 S_{\max} < \lambda_2$.
\noindent In addition, we have that
\begin{equation*}
\begin{split}
|\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S} \rangle| &=|\langle \mathbf{\Theta}_{\mathcal{T}^c}, \mathbf{S}_{\mathcal{T}^c} \rangle| \\
&= |\langle \mathbf{V}_{\mathcal{T}^c}+\mathbf{V}^T_{\mathcal{T}^c}+\mathbf{Z}_{\mathcal{T}^c}, \mathbf{S}_{\mathcal{T}^c} \rangle| \\
&= |\langle 2\mathbf{V}_{\mathcal{T}^c}+\mathbf{Z}_{\mathcal{T}^c}, \mathbf{S}_{\mathcal{T}^c} \rangle| \\
&\le (2 \| \mathbf{V}_{\mathcal{T}^c} \|_1 +\|\mathbf{Z}_{\mathcal{T}^c} \|_1 )S_{\max}\\
&<\ \lambda_2 \| \mathbf{V}_{\mathcal{T}^c} \|_1 +\lambda_1 \|\mathbf{Z}_{\mathcal{T}^c} \|_1,
\end{split}
\end{equation*}
where the last inequality follows from the sufficient condition. We have shown (\ref{Equation:Thm1-2-2}) as desired.
\end{proof}
\subsection*{Proof of Theorem 2 (Necessary Condition)}
We first present a simple lemma for proving Theorem 2. Throughout the proof of Theorem 2, $\| \cdot \|_\infty$ indicates the maximal absolute element of a matrix and $\|\cdot \|_{\infty,s}$ indicates the dual norm of $\| \cdot\|_{1,q}$.
\begin{lemma}
The dual representation of $\tilde{\mathbf{P}}( \mathbf{\Theta})$ in (\ref{normequation}) is
\begin{eqnarray}
\label{dualrepresentation}
\begin{array}{rccl}
\tilde{\mathbf{P}}^*(\mathbf{\Theta}) &=& \underset{\mathbf{X},\mathbf{Y},\mathbf{\Lambda}}\max & \langle \mathbf{\Lambda,\Theta} \rangle \\
&& \text{subject to} & \mathbf{\Lambda} + \mathbf{\Lambda}^T = \hat{\lambda}_2 \mathbf{X} + \hat{\lambda}_3 \mathbf{Y} \\
&& & \|\mathbf{X}\|_{\infty} \leq 1, \|\mathbf{\Lambda}\|_{\infty} \leq 1, \|\mathbf{Y}\|_{\infty,s} \leq 1 \\
&& & {X}_{ii} = 0, {Y}_{ii} = 0, {\Lambda}_{ii} = 0 \; \text{for } i=1,\ldots,p,
\end{array}
\end{eqnarray}
where $\frac{1}{s} + \frac{1}{q} = 1$.
\end{lemma}
\begin{proof}
We first state the dual representations for the norms in (\ref{normequation}):
\begin{eqnarray*}
\begin{array}{rccl}
\|\mathbf{Z}-\text{diag}(\mathbf{Z}) \|_1 &=& \underset{\mathbf{\Lambda}}\max & \langle \mathbf{\Lambda,Z} \rangle \\ && \mbox{\text{subject to}} & \|\mathbf{\Lambda} \|_{\infty} \le 1, \Lambda_{ii} = 0 \text{ for } i=1,\ldots, p,\\
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{rccl}
\|\mathbf{V}-\text{diag}(\mathbf{V}) \|_1 &=& \underset{\mathbf{X}}\max & \langle \mathbf{X,V} \rangle \\ && \mbox{\text{subject to}} & \|\mathbf{X} \|_{\infty} \le 1, X_{ii} = 0 \text{ for } i=1,\ldots, p,\\
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{rccl}
\|\mathbf{V}-\text{diag}(\mathbf{V}) \|_{1,q} &=& \underset{\mathbf{Y}}\max & \langle \mathbf{Y,V} \rangle \\
&& \mbox{\text{subject to}} & \|\mathbf{Y} \|_{\infty,s} \le 1, Y_{ii} = 0 \text{ for } i=1,\ldots, p.\\
\end{array}
\end{eqnarray*}
\noindent Then,
\begin{eqnarray*}
\begin{array}{rccl}
\tilde{ \mathbf{P}}(\mathbf{\Theta}) &=& \underset{{\mathbf{V, Z}}} \min & \| \mathbf{Z}-\text{diag}(\mathbf{Z})\|_1 + \hat{\lambda}_2 \| \mathbf{V}-\text{diag}(\mathbf{V})\|_1 + \hat{\lambda}_3 \|\mathbf{V}-\text{diag}(\mathbf{V}) \|_{1,q}\\
&& \text{subject to} & \mathbf{\Theta} = \mathbf{Z+V+V}^T\\
&=& \underset{{\mathbf{V, Z}}} \min & \underset{\mathbf{\Lambda,X,Y}} \max \langle \mathbf{\Lambda,Z} \rangle + \hat{\lambda}_2\langle \mathbf{X,V} \rangle + \hat{\lambda}_3\langle \mathbf{Y,V} \rangle \\
&& \mbox{\text{subject to}} & \|\mathbf{\Lambda}\|_{\infty} \le 1, \|\mathbf{X}\|_{\infty} \le 1, \|\mathbf{Y}\|_{\infty,s}\le 1 \\
&& & \Lambda_{ii}=0, X_{ii}=0, Y_{ii}=0 \text{ for } i=1,\ldots, p \\
&& & \mathbf{\Theta} = \mathbf{Z+V+V}^T\\
&=& \underset{\mathbf{\Lambda,X,Y}} \max & \underset{{\mathbf{V, Z}}} \min \langle \mathbf{\Lambda,Z} \rangle + \hat{\lambda}_2 \langle \mathbf{X,V} \rangle + \hat{\lambda}_3 \langle \mathbf{Y,V} \rangle \\
&& \mbox{\text{subject to}} & \|\mathbf{\Lambda}\|_{\infty} \le 1, \|\mathbf{X}\|_{\infty} \le 1, \|\mathbf{Y}\|_{\infty,s}\le 1 \\
&& & \Lambda_{ii}=0, X_{ii}=0, Y_{ii}=0 \text{ for } i=1,\ldots, p \\
&& & \mathbf{\Theta} = \mathbf{Z+V+V}^T\\
&=&\underset{\mathbf{\Lambda},\mathbf{X},\mathbf{Y}}\max & \langle \mathbf{\Lambda,\Theta} \rangle \\ && \mbox{\text{subject to}} & \mathbf{\Lambda} + \mathbf{\Lambda}^T = \hat{\lambda}_2 \mathbf{X} + \hat{\lambda}_3 \mathbf{Y} \\
&& & \|\mathbf{X}\|_{\infty} \leq 1, \|\mathbf{\Lambda}\|_{\infty} \leq 1, \|\mathbf{Y}\|_{\infty,s} \leq 1 \\
&& & {X}_{ii} = 0, {Y}_{ii} = 0, {\Lambda}_{ii} = 0 \; \text{for } i=1,\ldots,p.
\end{array}
\end{eqnarray*}
\noindent The third equality holds since the constraints on $(\mathbf{V,Z})$ and on $(\mathbf{\Lambda,X,Y})$ are both compact convex sets and so by the minimax theorem, we can swap max and min. The last equality follows from the fact that
\begin{eqnarray*}
\begin{array}{ccc}
\underset{\mathbf{V},\mathbf{Z}}\min & \langle \mathbf{\Lambda,Z} \rangle + \hat{\lambda}_2 \langle \mathbf{X,V} \rangle + \hat{\lambda}_3 \langle \mathbf{Y,V} \rangle \\
\mbox{subject to} & \mathbf{\Theta} = \mathbf{Z} + \mathbf{V} + \mathbf{V}^T \\
=&
\left\{ \begin{array}{cc} \langle \mathbf{\Lambda,\Theta} \rangle& \mbox{if } \mathbf{\Lambda} + \mathbf{\Lambda}^T = \hat{\lambda}_2 \mathbf{X} + \hat{\lambda}_3 \mathbf{Y} \\
-\infty & \mbox{otherwise}. \end{array} \right.
\end{array}
\end{eqnarray*}
\end{proof}
We now present the proof of Theorem 2.
\begin{proof}
The optimality condition for (\ref{Equation:reduceHGL}) is given by
\begin{equation}
\label{Equation:optimalitycondition}
\mathbf{0} = -\mathbf{\Theta}^{-1} + \mathbf{S} + \lambda_1\mathbf{\Lambda},
\end{equation}
where $\mathbf{\Lambda}$ is a subgradient of $\tilde{\mathbf{P}}(\mathbf{\Theta})$ in (\ref{normequation}) and the left-hand side of the above equation is a zero matrix of size $p\times p$.
Now suppose that $\mathbf{\Theta}^*$ that solves (\ref{Equation:optimalitycondition}) is supported on $\mathcal{T}$, i.e., $\mathbf{\Theta}^*_{\mathcal{T}^c} = 0$. Then for any $(i,j) \in \mathcal{T}^c$, we have that
\begin{equation}
\label{Equation:optimconditionhold}
0 = S_{ij} + \lambda_1 {\Lambda}^*_{ij},
\end{equation}
where $\mathbf{\Lambda}^*$ is a subgradient of $\tilde{\mathbf{P}}(\mathbf{\Theta}^*)$. Note that $\mathbf{\Lambda}^*$ must be an optimal solution to the optimization problem (\ref{dualrepresentation}). Therefore, it is also a feasible solution to (\ref{dualrepresentation}), implying that
\begin{equation*}
\begin{split}
|\Lambda_{ij}^*+\Lambda_{ji}^*| &\le \hat{\lambda}_2 + \hat{\lambda}_3,\\
|\Lambda_{ij}^*| &\le 1.
\end{split}
\end{equation*}
From (\ref{Equation:optimconditionhold}), we have that $\Lambda_{ij}^* = -\frac{S_{ij}}{\lambda_1}$ and thus,
\begin{equation*}
\begin{split}
\lambda_1 &\ge \lambda_1 \underset{(i,j)\in \mathcal{T}^c}\max |\Lambda_{ij}^*| \\
&= \lambda_1 \underset{(i,j)\in \mathcal{T}^c}\max \frac{|S_{ij}|}{\lambda_1}\\
&= S_{\max}.
\end{split}
\end{equation*}
Also, recall that $\hat{\lambda}_2 = \frac{\lambda_2}{\lambda_1}$ and $\hat{\lambda}_3 = \frac{\lambda_3}{\lambda_1}$. We have that
\begin{equation*}
\begin{split}
\lambda_2+\lambda_3 &\ge \lambda_1 \underset{(i,j)\in \mathcal{T}^c}\max |\Lambda_{ij}^*+\Lambda_{ji}^*| \\
&= \lambda_1 \underset{(i,j)\in \mathcal{T}^c}\max \frac{2|S_{ij}|}{\lambda_1}\\
&= 2S_{\max}.
\end{split}
\end{equation*}
Hence, we obtain the desired result.
\end{proof}
\section*{Appendix C: Some Properties of HGL}
\subsection*{Proof of Lemma \ref{Lemma:DiagonalZ}}
\begin{proof} Let $(\mathbf{\Theta}^*,\mathbf{Z}^*,\mathbf{V}^*)$ be the solution to (\ref{Eq:ggmhub}) and suppose that $\mathbf{Z}^*$ is not a diagonal matrix. Note that $\mathbf{Z}^*$ is symmetric since $\mathbf{\Theta} \in \mathcal{S} \equiv \{\mathbf{\Theta}:\mathbf{\Theta}\succ 0 \text{ and } \mathbf{\Theta}= \mathbf{\Theta}^T \}$.
Let $\hat{\mathbf{Z}}=\text{diag}(\mathbf{Z}^*)$, a diagonal matrix that contains the diagonal elements of the matrix $\mathbf{Z}^*$. Also, construct $\hat{\mathbf{V}}$ as follows,
\begin{eqnarray*}
\hat{\mathbf{V}}_{ij} = \left\{\begin{array}{cc} \mathbf{V}^*_{ij} + \frac{\mathbf{Z}^*_{ij}}{2} & \mbox{if } i \neq j \\
\mathbf{V}^*_{jj} & \mbox{otherwise}. \end{array} \right.
\end{eqnarray*}
\noindent Then, we have that $\mathbf{\Theta}^* = \hat{\mathbf{Z}} + \hat{\mathbf{V}} + \hat{\mathbf{V}}^T$. Thus, $(\mathbf{\Theta}^*,\hat{\mathbf{Z}},\hat{\mathbf{V}})$ is a feasible solution to (\ref{Eq:ggmhub}). We now show that $(\mathbf{\Theta}^*,\hat{\mathbf{Z}},\hat{\mathbf{V}})$ has a smaller objective than $(\mathbf{\Theta}^*,\mathbf{Z}^*,\mathbf{V}^*)$ in (\ref{Eq:ggmhub}), giving us a contradiction. Note that
\begin{eqnarray*} \label{eq:bound-1}
\begin{array}{rcl}
\lambda_1 \|\hat{\mathbf{Z}} - \text{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}})\|_1 &=& \lambda_2 \|\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}})\|_1 \\
&=& \lambda_2 \sum_{i \neq j} |\mathbf{V}^*_{ij} + \frac{\mathbf{Z}^*_{ij}}{2}| \\
&\leq& \lambda_2 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1 + \frac{\lambda_2}{2}\|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1,
\end{array}
\end{eqnarray*}
and
\begin{eqnarray*} \label{eq:bound-2}
\begin{array}{rcl}
\lambda_3 \sum_{j=1}^p \|(\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}}))_j\|_q &\leq& \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \text{diag}(\mathbf{V}^*))_j\|_q + \frac{\lambda_3}{2} \sum_{j=1}^p \|(\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*))_j\|_{q} \\
&\leq& \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \text{diag}(\mathbf{V}^*))_j\|_q + \frac{\lambda_3}{2} \|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1,
\end{array}
\end{eqnarray*}
where the last inequality follows from the fact that for any vector $\mathbf{x} \in \mathbb{R}^p$ and $q\ge1$, $\|\mathbf{x}\|_q$ is a nonincreasing function of $q$ \citep{Gentle}.
Summing up the above inequalities, we get that
\begin{eqnarray*}
\begin{array}{rcl}
\lambda_1 \|\hat{\mathbf{Z}} - \text{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}})\|_1 + \lambda_3 \sum_{j=1}^p \|(\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}}))_j\|_q &\leq& \\
\frac{\lambda_2 + \lambda_3}{2}\|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \text{diag}(\mathbf{V}^*))_j\|_q &<& \\
\lambda_1\|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \text{diag}(\mathbf{V}^*))_j\|_q,
\end{array}
\end{eqnarray*}
where the last inequality uses the assumption that $\lambda_1 > \frac{\lambda_2+\lambda_3}{2}$. We arrive at a contradiction and therefore the result holds.
\end{proof}
\subsection*{Proof of Lemma \ref{Lemma:DiagonalV}}
\begin{proof} Let $(\mathbf{\Theta}^*,\mathbf{Z}^*,\mathbf{V}^*)$ be the solution to (\ref{Eq:ggmhub}) and suppose $\mathbf{V}^*$ is not a diagonal matrix.
Let $\hat{\mathbf{V}}=\text{diag}(\mathbf{V}^*)$, a diagonal matrix that contains the diagonal elements of $\mathbf{V}^*$. Also construct $\hat{\mathbf{Z}}$ as follows,
\begin{eqnarray*}
\hat{\mathbf{Z}}_{ij} = \left\{\begin{array}{rc} \mathbf{Z}^*_{ij} + \mathbf{V}^*_{ij} + \mathbf{V}^*_{ji} & \mbox{if } i \neq j \\
\mathbf{Z}^*_{ij} & \mbox{otherwise}. \end{array} \right.
\end{eqnarray*}
Then, we have that $\mathbf{\Theta}^* = \hat{\mathbf{V}} + \hat{\mathbf{V}}^T + \hat{\mathbf{Z}}$. We now show that $(\mathbf{\Theta}^*,\hat{\mathbf{Z}}, \hat{\mathbf{V}})$ has a smaller objective value
than $(\mathbf{\Theta}^*,\mathbf{Z}^*,\mathbf{V}^*)$ in (\ref{Eq:ggmhub}), giving us a contradiction. We start by noting that
\begin{eqnarray*} \label{eq:bound-4}
\begin{array}{rcl}
\lambda_1 \|\hat{\mathbf{Z}} - \text{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}})\|_1 &=& \lambda_1 \|\hat{\mathbf{Z}} - \text{diag}(\hat{\mathbf{Z}})\|_1 \\
&\leq& \lambda_1 \|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + 2\lambda_1 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1.
\end{array}
\end{eqnarray*}
By H\"{o}lder's inequality, we know that $\mathbf{x}^T \mathbf{y} \le \|\mathbf{x}\|_q \| \mathbf{y}\|_s$, where $\frac{1}{s}+\frac{1}{q}=1$ and $\mathbf{x,y} \in \mathbb{R}^{p-1}$. Setting $\mathbf{y} = \text{sign}(\mathbf{x})$, we have that $\|\mathbf{x}\|_1 \le (p-1)^{\frac{1}{s}} \|\mathbf{x}\|_q$. Consequently,
\begin{equation*}
\frac{\lambda_3}{(p-1)^{\frac{1}{s}}} \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*) \|_1 \le \lambda_3 \sum_{j=1}^p \| ( \mathbf{V}^* - \text{diag}(\mathbf{V}^*) )_j \|_q.
\end{equation*}
Combining these results, we have that
\begin{equation*}
\begin{split}
&\lambda_1 \|\hat{\mathbf{Z}} - \text{diag}(\hat{\mathbf{Z}})\|_1 + \lambda_2 \|\hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}})\|_1+ \lambda_3 \sum_{j=1}^p \| ( \hat{\mathbf{V}} - \text{diag}(\hat{\mathbf{V}}) )_j \|_q \\
&\le \lambda_1 \|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + 2\lambda_1 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1\\
&< \lambda_1 \|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + \left( \lambda_2+\frac{\lambda_3}{(p-1)^{\frac{1}{s}}} \right) \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1\\
&\le \lambda_1 \|\mathbf{Z}^* - \text{diag}(\mathbf{Z}^*)\|_1 + \lambda_2 \|\mathbf{V}^* - \text{diag}(\mathbf{V}^*)\|_1 + \lambda_3 \sum_{j=1}^p \|(\mathbf{V}^* - \text{diag}(\mathbf{V}^*))_j \|_q,
\end{split}
\end{equation*}
where we use the assumption that $\lambda_1 < \frac{\lambda_2}{2}+\frac{\lambda_3}{2(p-1)^{\frac{1}{s}}}$. This leads to a contradiction.
\end{proof}
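The norm bound $\|\mathbf{x}\|_1 \le (p-1)^{1/s}\|\mathbf{x}\|_q$ used in the proof above is easy to check numerically; the snippet below does so for $q = 2$ (so $s = 2$), with a random vector standing in for a column of $\mathbf{V}$ without its diagonal entry.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=9)  # stands in for one column of V, p - 1 = 9
q, s = 2.0, 2.0         # 1/s + 1/q = 1
lhs = np.sum(np.abs(x))                                          # ||x||_1
rhs = x.size ** (1.0 / s) * np.sum(np.abs(x) ** q) ** (1.0 / q)  # (p-1)^{1/s} ||x||_q
assert lhs <= rhs + 1e-12
```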
\subsection*{Proof of Lemma \ref{lemma3}}
In this proof, we consider the case when $\lambda_1 > \frac{\lambda_2+\lambda_3}{2}$. A similar proof technique can be used to prove the case when $\lambda_1 < \frac{\lambda_2+\lambda_3}{2}$. \\
\begin{proof} Let $f({\bf \Theta}, {\bf V}, {\bf Z})$ denote the objective of (\ref{Eq:ggmhub}) with $q=1$, and $(\mathbf{\Theta}^*, \mathbf{V}^*, \mathbf{Z}^*)$ the optimal solution. By Lemma~\ref{Lemma:DiagonalZ}, the assumption that
$\lambda_1 > \frac{\lambda_2+\lambda_3}{2}$ implies that
$\mathbf{Z}^*$ is a diagonal matrix. Now let $\hat{\bf V} = \frac{1}{2} \left( {\bf V}^* + ({\bf V}^*)^T \right)$. Then
\begin{eqnarray*}
f(\mathbf{\Theta}^*, \hat{\mathbf{V}}, \mathbf{Z}^*)
&=& -\log \det \mathbf{\Theta}^* + \langle \mathbf{\Theta}^*,\mathbf{S} \rangle + \lambda_1 \| {\mathbf{Z}}^*-\text{diag}({\mathbf{Z}}^*) \|_1 + (\lambda_2+\lambda_3) \| \hat{\mathbf{V}}-\text{diag}(\hat{\mathbf{V}}) \|_1 \nonumber \\
&=&-\log \det \mathbf{\Theta}^* + \langle \mathbf{\Theta}^*,\mathbf{S} \rangle + \frac{\lambda_2+\lambda_3}{2} \| {\mathbf{V}}^*+ {{\mathbf{V}}^*}^T-\text{diag}({\mathbf{V}}^*+{{\mathbf{V}}^*}^T) \|_1 \nonumber \\
& \leq& -\log \det \mathbf{\Theta}^* + \langle \mathbf{\Theta}^*,\mathbf{S} \rangle + ({\lambda_2+\lambda_3}) \| {\mathbf{V}}^*-\text{diag}({\mathbf{V}}^*) \|_1 \nonumber \\
& = & f(\mathbf{\Theta}^*, {\mathbf{V}}^*, \mathbf{Z}^*) \nonumber \\
& \leq & f(\mathbf{\Theta}^*, \hat{\mathbf{V}}, \mathbf{Z}^*),
\end{eqnarray*}
where the last inequality follows from the assumption that ($\mathbf{\Theta}^*, {\mathbf{V}}^*, \mathbf{Z}^*$) solves (\ref{Eq:ggmhub}). By strict convexity of $f$, this means that ${\bf V}^* = \hat{\bf V}$, i.e., $\mathbf{V}^*$ is symmetric.
This implies that
\begin{eqnarray}
\label{my.eq}
f(\mathbf{\Theta}^*, {\mathbf{V}}^*, \mathbf{Z}^*)
&=&-\log \det \mathbf{\Theta}^* + \langle \mathbf{\Theta}^*,\mathbf{S} \rangle + \frac{\lambda_2+\lambda_3}{2} \| {\mathbf{V}}^*+ {{\mathbf{V}}^*}^T-\text{diag}({\mathbf{V}}^*+{{\mathbf{V}}^*}^T) \|_1 \nonumber \\
&=&-\log \det \mathbf{\Theta}^* +\langle \mathbf{\Theta}^*,\mathbf{S} \rangle +\frac{\lambda_2+\lambda_3}{2} \| \mathbf{\Theta}^* - \text{diag}(\mathbf{\Theta}^*)\|_1 \\
&=& g({\bf \Theta}^*),\nonumber
\end{eqnarray}
where $g({\bf \Theta})$ is the objective of the graphical lasso optimization problem, evaluated at $\bf \Theta$, with tuning parameter $\frac{\lambda_2+\lambda_3}{2}$.
Suppose that $\tilde{\bf \Theta}$ minimizes $g({\bf \Theta})$, and ${\bf \Theta}^* \neq \tilde{\bf \Theta}$.
Then, by (\ref{my.eq}) and strict convexity of $g$, $g(\mathbf{\Theta}^*) = f(\mathbf{\Theta}^*,{\mathbf{V}}^*,{\mathbf{Z}}^*) \le f(\tilde{\mathbf{\Theta}},\tilde{\mathbf{\Theta}}/2,\mathbf{0}) = g(\tilde{\mathbf{\Theta}}) < g(\mathbf{\Theta}^*)$, giving us a contradiction. Thus it must be that $\tilde{\mathbf{\Theta}} = \mathbf{\Theta}^*$.
\end{proof}
\section*{Appendix D: Simulation Study for Hub Covariance Graph }
In this section, we present the results for the simulation study described in Section~\ref{Cov:simulation} with $n=100$, $p=200$, and $|\mathcal{H}| = 4$. We calculate the proportion of correctly estimated hub nodes with $r=40$. The results are shown in Figure~\ref{Fig:CovSimsmall}. As we can see from Figure~\ref{Fig:CovSimsmall}, our proposal outperforms \citet{BienTibs11}. In particular, we can see from Figure~\ref{Fig:CovSimsmall}(c) that \citet{BienTibs11} fails to identify hub nodes.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.51]{covsim1small.pdf}
\end{center}
\caption{Covariance graph simulation with $n=100$ and $p=200$. Details of the axis labels are as in Figure~\ref{Fig:simulation1}. The colored lines correspond to the proposal of {\protect\citet{Xueetal2012}} (\protect\includegraphics[height=0.5em]{black.png}); HCG with $\lambda_3=1$ (\protect\includegraphics[height=0.5em]{orange.png}), $\lambda_3=1.5$ (\protect\includegraphics[height=0.5em]{pink.png}), and $\lambda_3=2$ (\protect\includegraphics[height=0.5em]{red.png}); and the proposal of {\protect\citet{BienTibs11}} (\protect\includegraphics[height=0.5em]{blue.png}). }
\label{Fig:CovSimsmall}
\end{figure}
\section*{Appendix E: Run Time Study for the ADMM algorithm for HGL}
In this section, we present a more extensive run time study for the ADMM algorithm for HGL. We ran experiments with $p=100,200,300$ and with $n=p/2$ on a 2.26GHz Intel Core 2 Duo machine. Results averaged over 10 replications are displayed in Figures~\ref{fig:timing}(a)-(b), where the panels depict the run time and number of iterations required for the algorithm to converge, as a function of $\lambda_1$, with $\lambda_2=0.5$ and $\lambda_3=2$ fixed. The number of iterations required for the algorithm to converge is computed as the total number of iterations in Step 2 of Algorithm~\ref{Alg:general}. We see from Figure~\ref{fig:timing}(a) that as $p$ increases from 100 to 300, the run times increase substantially, but never exceed several minutes. Note that these results are without using the block diagonal condition in Theorem 1.
\begin{figure}[htp]
\begin{center}
\hspace{10mm} (a) \hspace{70mm} (b)
\includegraphics[scale=0.36]{timing.pdf}
\includegraphics[scale=0.36]{iteration.pdf}
\end{center}
\caption{(a): Run time (in seconds) of the ADMM algorithm for HGL, as a function of $\lambda_1$, for fixed values of $\lambda_2$ and $\lambda_3$. (b): The total number of iterations required for the ADMM algorithm for HGL to converge, as a function of $\lambda_1$. All results are averaged over 10 simulated data sets. These results are without using the block diagonal condition in Theorem 1.}
\label{fig:timing}
\end{figure}
\section*{Appendix F: Update for $\mathbf{\Theta}$ in Step 2(a)i for Binary Ising Model using Barzilai-Borwein Method}
We consider updating $\mathbf{\Theta}$ in Step 2(a)i of Algorithm~\ref{Alg:general} for binary Ising model.
Let
\[
h(\mathbf{\Theta}) = \left\{ -\sum_{j=1}^p \sum_{j'=1}^p \theta_{jj'} (\mathbf{X}^T\mathbf{X})_{jj'} +\sum_{i=1}^n \sum_{j=1}^p \log \left(1+ \exp\left[ \theta_{jj}+\sum_{j'\ne j} \theta_{jj'} x_{ij'} \right] \right) +\frac{\rho}{2} \|\mathbf{\Theta}-\tilde{\mathbf{\Theta}}+\mathbf{W}_1\|_F^2 \right\}.
\]
Then, the optimization problem for Step 2(a)i of Algorithm~\ref{Alg:general} is
\begin{equation}
\label{Eq:isingalg}
\underset{\mathbf{\Theta} \in \mathcal{S}}{\text{minimize}} \quad h(\mathbf{\Theta}),
\end{equation}
where $\mathcal{S}=\{\mathbf{\Theta}: \mathbf{\Theta}=\mathbf{\Theta}^T\}$. In solving (\ref{Eq:isingalg}), we will treat $\mathbf{\Theta} \in \mathcal{S}$ as an implicit constraint.
The Barzilai-Borwein method is a gradient descent method with the step-size chosen to mimic the secant condition of the BFGS method \citep[see, e.g.,][]{barzilai1988two,nocedal2006numerical}. The convergence of the Barzilai-Borwein method for unconstrained minimization using a non-monotone line search was shown in \citet{raydan1997barzilai}. Recent convergence results for a quadratic cost function can be found in \citet{dai2013new}. To implement the Barzilai-Borwein method, we need to evaluate the gradient of $h(\mathbf{\Theta})$. Let $\nabla h(\mathbf{\Theta})$ be a $p\times p$ matrix, where the $(j,j')$ entry is the gradient of $h(\mathbf{\Theta})$ with respect to $\theta_{jj'}$, computed under the constraint $\mathbf{\Theta} \in \mathcal{S}$, that is, $\theta_{jj'}=\theta_{j'j}$. Then,
\[
(\nabla h(\mathbf{\Theta}))_{jj} =-(\mathbf{X}^T\mathbf{X})_{jj} + \sum_{i=1}^n \left[ \frac{\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{ij'})}{1+\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{ij'})}\right] + \rho (\theta_{jj} - \tilde{\theta}_{jj} + (\mathbf{W}_1)_{jj}),
\]
and
\begin{equation*}
\begin{split}
(\nabla h(\mathbf{\Theta}))_{jj'} &= -2(\mathbf{X}^T\mathbf{X})_{jj'} + 2\rho (\theta_{jj'} - \tilde{\theta}_{jj'} + (\mathbf{W}_1)_{jj'}) \\
&\qquad + \sum_{i=1}^n \left[ \frac{ x_{ij'}\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{ij'})}{1+\exp(\theta_{jj} + \sum_{j'\ne j} \theta_{jj'} x_{ij'})} + \frac{ x_{ij}\exp(\theta_{j'j'} + \sum_{j\ne j'} \theta_{jj'} x_{ij})}{1+\exp(\theta_{j'j'} + \sum_{j\ne j'} \theta_{jj'} x_{ij})} \right].
\end{split}
\end{equation*}
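As a quick numerical sanity check of these gradient formulas, one can compare them against central finite differences with tied symmetric perturbations (since $\theta_{jj'}=\theta_{j'j}$) on a small random instance. The sketch below is purely illustrative and not part of the paper; the dimensions, seed, and value of $\rho$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 8, 4, 1.0
X = rng.integers(0, 2, size=(n, p)).astype(float)          # binary data matrix
sym = lambda M: (M + M.T) / 2
Theta, Tilde, W = (sym(rng.standard_normal((p, p))) for _ in range(3))
S = X.T @ X

def h(T):
    # eta[i, j] = theta_jj + sum_{j' != j} theta_jj' x_ij'
    eta = X @ T + (1 - X) * np.diag(T)
    return (-np.sum(T * S) + np.sum(np.log1p(np.exp(eta)))
            + rho / 2 * np.sum((T - Tilde + W) ** 2))

# Analytic gradient under the symmetry constraint theta_jj' = theta_j'j.
eta = X @ Theta + (1 - X) * np.diag(Theta)
sig = 1 / (1 + np.exp(-eta))                               # logistic probabilities
R = Theta - Tilde + W
M = sig.T @ X                                              # M[j,k] = sum_i sig_ij x_ik
G = -2 * S + M + M.T + 2 * rho * R                         # off-diagonal formula
np.fill_diagonal(G, -np.diag(S) + sig.sum(axis=0) + rho * np.diag(R))

# Central finite differences, perturbing theta_jk and theta_kj together.
eps, num = 1e-6, np.zeros((p, p))
for j in range(p):
    for k in range(j, p):
        E = np.zeros((p, p)); E[j, k] = E[k, j] = 1.0
        num[j, k] = num[k, j] = (h(Theta + eps * E) - h(Theta - eps * E)) / (2 * eps)
print(np.max(np.abs(num - G)) < 1e-4)   # True
```

The agreement confirms that the tied off-diagonal entries pick up the factor of $2$ in the linear and quadratic terms.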
A simple implementation of the Barzilai-Borwein algorithm for solving (\ref{Eq:isingalg}) is detailed in Algorithm~\ref{Alg:bb}. We note that the Barzilai-Borwein algorithm can be improved \citep[see, e.g.,][]{barzilai1988two,wright2009sparse}. We leave such improvement for future work.
\begin{algorithm}[htp]
\small
\caption{Barzilai-Borwein Algorithm for Solving (\ref{Eq:isingalg}).}
\label{Alg:bb}
\begin{enumerate}
\item \textbf{Initialize} the parameters:
\begin{enumerate}
\item $\mathbf{\Theta}_1= \mathbf{I}$ and $\mathbf{\Theta}_{0}= 2\mathbf{I}$.
\item constant $\tau>0$.
\end{enumerate}
\item \textbf{Iterate} until the stopping criterion $\frac{\| {\mathbf{\Theta}}_t- {\mathbf{\Theta}}_{t-1} \|_F^2}{\| {\mathbf{\Theta}}_{t-1}\|_F^2} \le \tau$ is met, where $\mathbf{\Theta}_t$ is the value of $\mathbf{\Theta}$ obtained at the $t$th iteration:
\begin{enumerate}
\item $\alpha_t= \text{trace}\left[ (\mathbf{\Theta}_t-\mathbf{\Theta}_{t-1})^T (\mathbf{\Theta}_t-\mathbf{\Theta}_{t-1})\right] / \text{trace}\left[ (\mathbf{\Theta}_t-\mathbf{\Theta}_{t-1})^T (\nabla h(\mathbf{\Theta}_t) - \nabla h(\mathbf{\Theta}_{t-1}))\right]$.
\item $\mathbf{\Theta}_{t+1} = \mathbf{\Theta}_t - \alpha_t \nabla h(\mathbf{\Theta}_t)$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
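To see the iteration in action, the sketch below runs the same BB update and stopping rule on a toy strongly convex quadratic, with a vector-valued variable standing in for $\mathbf{\Theta}$. The matrix, right-hand side, and tolerance are arbitrary illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 5))
A = Q @ Q.T + 5 * np.eye(5)            # symmetric positive definite
b = rng.standard_normal(5)
grad = lambda x: A @ x - b             # gradient of f(x) = x^T A x / 2 - b^T x

x_prev, x = np.ones(5), np.zeros(5)    # two distinct initial points, as in Step 1
for _ in range(200):
    s = x - x_prev                     # iterate difference
    y = grad(x) - grad(x_prev)         # gradient difference
    alpha = (s @ s) / (s @ y)          # BB step size mimicking the secant condition
    x_prev, x = x, x - alpha * grad(x)
    # stopping criterion of Step 2: ||x_t - x_{t-1}||^2 / ||x_{t-1}||^2 <= tau
    if np.sum((x - x_prev) ** 2) <= 1e-16 * np.sum(x_prev ** 2):
        break

print(np.allclose(x, np.linalg.solve(A, b)))   # True: reached the minimizer
```

For quadratics the BB step size always lies between the reciprocals of the extreme eigenvalues of $A$, which is why no line search is needed in this toy setting.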
\newpage
\bibliographystyle{natbib}
\section{Introduction}
Let $\frg$ be a finite-dimensional simple Lie algebra over $\bbC$. Fix a Cartan subalgebra $\frh$ of $\frg$.
The associated root system is $\Delta=\Delta(\frg, \frh)\subseteq\frh_{\bbR}^*$. Recall that a decomposition
\begin{equation}\label{grading}
\frg=\bigoplus_{i\in \bbZ}\frg(i)
\end{equation}
is a \emph{$\bbZ$-grading} of $\frg$ if $[\frg(i), \frg(j)]\subseteq \frg(i+j)$ for any $i, j\in\bbZ$.
In particular, in such a case, $\frg(0)$ is a Lie subalgebra of $\frg$. Since each derivation of $\frg$ is inner, there exists $h_0\in\frg(0)$ such that $\frg(i)=\{x\in\frg\mid [h_0, x]=i x\}$. The element $h_0$ is said to be \emph{defining} for the grading \eqref{grading}. Without loss of generality, one may assume that $h_0\in\frh$. Then $\frh\subseteq\frg(0)$. Let $\Delta(i)$ be the set of roots in $\frg(i)$. Then we can
choose a set of positive roots $\Delta(0)^+$ for $\Delta(0)$ such that
$$
\Delta^+ :=\Delta(0)^+\sqcup \Delta(1)\sqcup \Delta(2)\sqcup \cdots
$$
is a set of positive roots of $\Delta(\frg, \frh)$. Let $\Pi$ be the
corresponding simple roots, and put $\Pi(i)=\Delta(i)\cap \Pi$. Note
that the grading \eqref{grading} is fully determined by
$\Pi=\bigsqcup_{i\geq 0} \Pi(i)$. We refer the reader to Ch.~3, \S 3
of \cite{GOV} for generalities on gradings of Lie algebras. Each
$\Delta(i)$, $i\geq 1$, inherits a poset structure from the usual
one of $\Delta^+$. That is, let $\alpha$ and $\beta$ be two roots of
$\Delta(i)$, then $\beta\geq\alpha$ if and only if $\beta-\alpha$ is
a nonnegative integer combination of simple roots.
Recently, Panyushev initiated the study of the rich structure of
$\Delta(1)$ in \cite{P}. In particular, he raised five
conjectures concerning the $\mathcal{M}$-polynomial,
$\mathcal{N}$-polynomial and the reverse operator of $\Delta(1)$.
Note that Conjectures 5.1, 5.2 and 5.12 there have been solved by
Weng and the author \cite{DW}. The current paper aims
to handle Conjecture 5.11 of \cite{P}. Let us prepare more notation.
Recall that a subset $I$ of a finite poset $(P, \leq)$ is a
\emph{lower} (resp., \emph{upper}) \emph{ideal} if $x\leq y$ in $P$
and $y\in I$ (resp. $x\in I$) implies that $x\in I$ (resp. $y\in
I$). We collect the lower ideals of $P$ as $J(P)$, which is
partially ordered by inclusion. A subset $A$ of $(P, \leq)$ is an
\emph{antichain} if any two elements in $A$ are non-comparable under
$\leq$. We collect the antichains of $P$ as $\mathrm{An}(P)$. For
any $x\in P$, let $I_{\leq x}=\{y\in P\mid y\leq x\}$. Given an
antichain $A$ of $P$, let $I(A)=\bigcup_{a\in A} I_{\leq a}$. The
\emph{reverse operator} $\mathfrak{X}$ is defined by
$\mathfrak{X}(A)=\min (P\setminus I(A))$. Since antichains of $P$
are in bijection with lower (resp. upper) ideals of $P$, the reverse
operator acts on lower (resp. upper) ideals of $P$ as well. Note
that the current $\mathfrak{X}$ is inverse to the reverse operator
$\mathfrak{X}^{\prime}$ in Definition 1 of \cite{P}, see Lemma
\ref{lemma-inverse-reverse-operator}. Thus replacing $\mathfrak{X}^{\prime}$ by $\mathfrak{X}$ does not affect our
forthcoming discussion on orbits.
We say the $\bbZ$-grading \eqref{grading} is \emph{extra-special} if
\begin{equation}\label{extra-special}
\frg=\frg(-2)\oplus \frg(-1) \oplus \frg(0) \oplus \frg(1)
\oplus \frg(2) \mbox{ and } \dim\frg(2)=1.
\end{equation}
Up to conjugation, any simple Lie algebra $\frg$ has a unique extra-special $\bbZ$-grading. Without loss of generality, we assume that $\Delta(2)=\{\theta\}$, where $\theta$ is the highest root of $\Delta^+$.
Namely, we may assume that the grading \eqref{extra-special} is defined by the element $\theta^{\vee}$, the dual root of $\theta$. In such a case, we have
\begin{equation}\label{Delta-one}
\Delta(1)=\{\alpha\in\Delta^+\mid (\alpha, \theta^{\vee})=1\}.
\end{equation}
Let $\mathrm{ht}$ be the height function. Recall that $h:=\mathrm{ht}(\theta)+1$ is the \emph{Coxeter number}
of $\Delta$. Let $h^*$ be the \emph{dual Coxeter number }of
$\Delta$. That is, $h^*$ is the height of
$\theta^{\vee}$ in $\Delta^{\vee}$. As noted on p.~1203 of \cite{P},
we have $|\Delta(1)|=2h^*-4$. We call a lower (resp. upper) ideal
$I$ of $\Delta(1)$ \emph{Lagrangian} if $|I|=h^*-2$. Write
$\Delta_l$ (resp. $\Pi_l$) for the set of \emph{all} (resp.
\emph{simple}) \emph{long} roots. In the simply-laced cases, all
roots are assumed to be both long and short. Note that $\theta$ is
always long, while $\theta^{\vee}$ is always short.
Now Conjecture 5.11 of \cite{P} is stated as follows.
\medskip
\noindent \textbf{Panyushev conjecture.}\quad In any extra-special
$\bbZ$-grading of $\frg$, the number of
$\mathfrak{X}_{\Delta(1)}$-orbits equals $|\Pi_l|$, and each orbit
is of size $h-1$. Furthermore, if $h$ is even (which only excludes the case $A_{2k}$ where $h=2k+1$), then each
$\mathfrak{X}_{\Delta(1)}$-orbit contains a unique Lagrangian lower
ideal.
\medskip
Originally, the conjecture is stated in terms of upper ideals and
the reverse operator $\mathfrak{X}^{\prime}$. One agrees that we can
equivalently phrase it using lower ideals and $\mathfrak{X}$. The
main result of the current paper is the following.
\begin{thm}\label{thm-main}
Panyushev conjecture is true.
\end{thm}
After collecting necessary preliminaries in Section 2, the above
theorem will be proven in Section 3. Moreover, we note that by our
calculations in Section 3, one checks easily that for any
extra-special $1$-standard $\bbZ$-grading of $\frg$, all the
statements of Conjecture 5.3 in \cite{P} hold.
\medskip
\noindent\textbf{Notation.} Let $\bbN =\{0, 1, 2, \dots\}$, and let
$\mathbb{P}=\{1, 2, \dots\}$. For each $n\in\mathbb{P}$, $[n]$
denotes the poset $(\{1, 2, \dots, n\}, \leq)$.
\section{Preliminaries}
Let us collect some preliminary results in this section. Firstly,
let us compare the two reverse operators. Let $(P, \leq)$ be any
finite poset. For any $x\in P$, let $I_{\geq x}=\{y\in P\mid y\geq
x\}$. For any antichain $A$ of $P$, put $I_{+}(A)=\bigcup_{a\in A}
I_{\geq a}$. Recall that in Definition 1 of \cite{P}, the reverse
operator $\mathfrak{X}^{\prime}$ is given by
$\mathfrak{X}^{\prime}(A)=\max (P\setminus I_{+}(A))$.
\begin{lemma}\label{lemma-inverse-reverse-operator}
The operators $\mathfrak{X}$ and $\mathfrak{X}^{\prime}$ are
inverse to each other.
\end{lemma}
\begin{proof}
Take any antichain $A$ of $P$, note that
$$I_{+}(\min(P\setminus
I(A)))=P\setminus I(A)\mbox{ and } I(\max(P\setminus
I_{+}(A)))=P\setminus I_{+}(A).
$$
Then the lemma follows.
\end{proof}
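Lemma \ref{lemma-inverse-reverse-operator} can also be checked mechanically on a small example. The illustrative Python sketch below (not part of the paper) enumerates all antichains of $[2]\times[2]$ and verifies that $\mathfrak{X}$ and $\mathfrak{X}^{\prime}$ compose to the identity in both orders.

```python
from itertools import combinations, product

P = list(product((1, 2), (1, 2)))        # the poset [2] x [2]
leq = lambda u, v: u[0] <= v[0] and u[1] <= v[1]

def X(A):
    # reverse operator: min(P \ I(A))
    c = set(P) - {y for y in P for a in A if leq(y, a)}
    return frozenset(x for x in c if not any(y != x and leq(y, x) for y in c))

def Xp(A):
    # Panyushev's operator: max(P \ I_+(A))
    c = set(P) - {y for y in P for a in A if leq(a, y)}
    return frozenset(x for x in c if not any(y != x and leq(x, y) for y in c))

# all pairwise-incomparable subsets, i.e. the antichains of P
antichains = [frozenset(s) for k in range(len(P) + 1)
              for s in combinations(P, k)
              if all(not leq(a, b) for a in s for b in s if a != b)]
print(all(Xp(X(A)) == A and X(Xp(A)) == A for A in antichains))   # True
```

There are six antichains in $[2]\times[2]$, and both composites fix each of them.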
Let $(P_i,\leq), i=1, 2$ be two finite posets. One can define a
poset structure on $P_1\times P_2$ by setting $(u_1, v_1)\leq (u_2,
v_2)$ if and only if $u_1\leq u_2$ in $P_1$ and $v_1\leq v_2$ in
$P_2$. We simply denote the resulting poset by $P_1 \times P_2$. The
following well-known lemma describes the lower ideals of
$[m]\times P$.
\begin{lemma}\label{lemma-ideals-CnP}
Let $P$ be a finite poset. Let $I$ be a subset of $[m]\times P$. For
$1\leq i\leq m$, denote $I_i=\{a\in P\mid (i, a)\in I\}$. Then $I$
is a lower ideal of $[m]\times P$ if and only if each $I_i$ is a
lower ideal of $P$, and $I_m\subseteq I_{m-1}\subseteq \cdots
\subseteq I_{1}$.
\end{lemma}
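As a concrete instance of Lemma \ref{lemma-ideals-CnP}, the illustrative sketch below (not from the paper) enumerates the lower ideals of $[2]\times[3]$ by brute force and compares their number with the count of nested pairs $I_2\subseteq I_1$ of lower ideals of the chain $[3]$; such pairs are determined by their sizes $3\geq a\geq b\geq 0$.

```python
from itertools import chain, combinations, product

m, n = 2, 3
P = list(product(range(1, m + 1), range(1, n + 1)))   # the poset [m] x [n]
leq = lambda u, v: u[0] <= v[0] and u[1] <= v[1]

# brute-force enumeration of lower ideals of [m] x [n]
subsets = chain.from_iterable(combinations(P, k) for k in range(len(P) + 1))
ideals = [set(s) for s in subsets
          if all(y in s for x in s for y in P if leq(y, x))]

# by the lemma these correspond to nested pairs of lower ideals of [n],
# i.e. to pairs of sizes n >= a >= b >= 0
nested_pairs = [(a, b) for a in range(n + 1) for b in range(a + 1)]
print(len(ideals), len(nested_pairs))   # 10 10
```

Both counts equal $\binom{m+n}{m}\cdot\frac{?}{}$ in general only for chains; here they agree at $10$.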
In this section, by a \emph{finite graded poset} we always mean a
finite poset $P$ with a rank function $r$ from $P$ to the positive
integers $\mathbb{P}$ such that all the minimal elements have rank
$1$, and $r(x)=r(y)+1$ if $x$ covers $y$. In such a case, let $P_i$
be the set of elements in $P$ with rank $i$. The sets $P_i$ are said
to be the \emph{rank levels} of $P$. Suppose that
$P=\bigsqcup_{j=1}^{d} P_j$. Let $P_0$ be the empty set $\emptyset$.
Put $L_i=\bigsqcup_{j=1}^{i} P_j$ for $1\leq i\leq d$, and let $L_0$ be the empty set.
We call those $L_i$ \emph{rank level lower ideals}.
Let $\mathfrak{X}$ be the reverse operator on $[m]\times P$. In
view of Lemma \ref{lemma-ideals-CnP}, we denote by $(I_1, \cdots,
I_m)$ a general lower ideal of $[m]\times P$, where each $I_i\in
J(P)$ and $I_m\subseteq \cdots \subseteq I_{1}$. We say that the
lower ideal $(I_1, \cdots, I_m)$ is \emph{full rank} if each $I_i$
is a rank level lower ideal of $P$. Let $\mathcal{O}(I_1, \cdots,
I_m)$ be the $\mathfrak{X}_{[m]\times P}$-orbit of $(I_1, \cdots,
I_m)$. The following lemma will be helpful in determining
$\mathfrak{X}_{[m]\times P}$-orbits consisting of rank level lower
ideals.
\begin{lemma}\label{lemma-operator-ideals-CmP}
Keep the notation as above. Then
for any $n_0\in \bbN$, $n_i\in\mathbb{P}$ ($1\leq i\leq s$) such that $\sum_{i=0}^{s} n_i =m$, we have
\begin{equation}\label{rank-level}
\mathfrak{X}_{[m]\times P}(L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})=
(L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, L_0^{n_s-1}),
\end{equation}
where $0\leq i_s<\cdots <i_1<d$, $L_d^{n_0}$ denotes $n_0$ copies of $L_d$ and so on.
\end{lemma}
\begin{proof}
Note that under the above assumptions, $(L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})$ is a lower ideal of $[m]\times P$ in view of Lemma \ref{lemma-ideals-CnP}. Then analyzing the minimal elements of $([m]\times P)\setminus (L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})$ leads one to \eqref{rank-level}.
\end{proof}
\begin{lemma}\label{lemma-operator-types}
Let $(I_1, \cdots, I_m)$ be an arbitrary lower ideal of $[m]\times P$.
Then $(I_1, \cdots, I_m)$ is full rank if and only if each lower ideal
in the orbit $\mathcal{O}(I_1, \cdots, I_m)$ is full rank.
\end{lemma}
\begin{proof}
Use Lemma \ref{lemma-operator-ideals-CmP}.
\end{proof}
The above lemma tells us that there are two types of $\mathfrak{X}$-orbits: in the first type each lower ideal is full rank, while in the second type each lower ideal is not. We call them \emph{type I} and \emph{type II}, respectively.
For any $n\geq 2$, let $K_{n-1}=[n-1]\oplus([1]\sqcup [1])\oplus [n-1]$ (the ordinal sum, see
p.~246 of \cite{St}). We label the elements of $K_{n-1}$ by $1$, $2$, $\cdots$,
$n-1$, $n$, $n^{\prime}$, $n+1$, $\cdots$, $2n-2$, $2n-1$. Figure 1
illustrates the labeling for the Hasse diagram of $K_3$. Note that $L_i$ ($0\leq i\leq 2n-1$) are all the full rank lower ideals. For instance, we have $L_{n}=\{1, 2, \cdots, n, n^{\prime}\}$.
Moreover, we put $I_{n}=\{1, \cdots, n-1,
n\}$ and $I_{n^{\prime}}=\{1, \cdots, n-1, n^{\prime}\}$. The following lemma will be helpful in analyzing the $\mathfrak{X}_{[m]\times K_{n-1}}$-orbits of type II.
\begin{lemma}\label{lemma-operator-ideals-CmK}
Fix $n_0\in \bbN$, $n_i\in\mathbb{P}$ ($1\leq i\leq s$),
$m_j\in\mathbb{P}$ ($0\leq j\leq t$) such that $\sum_{i=0}^{s} n_i +
\sum_{j=0}^{t} m_j=m$. For any $0\leq j_t< \cdots<j_1<n\leq
i_s<\cdots <i_1<2n-1$, we have
\begin{align*}
\mathfrak{X}_{[m]\times K_{n-1}}&(L_{2n-1}^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s}, I_n^{m_0}, L_{j_1}^{m_1}, \cdots, L_{j_t}^{m_t})=\\
&\begin{cases}
( L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, I_{n^{\prime}}^{n_s}, L_{j_1+1}^{m_0},
L_{j_2+1}^{m_1}, \cdots, L_{j_t+1}^{m_{t-1}}, L_0^{m_t-1} ) & \mbox { if } j_1 < n-1;\\
( L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, L_{n}^{n_s}, I_n^{m_0},
\, \, \, \, L_{j_2+1}^{m_1}, \cdots, L_{j_t+1}^{m_{t-1}}, L_0^{m_t-1} )& \mbox { if } j_1 = n-1.
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Analyzing the minimal elements of $$([m]\times K_{n-1})\setminus (L_{2n-1}^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s}, I_n^{m_0}, L_{j_1}^{m_1}, \cdots, L_{j_t}^{m_t})$$ leads one to the desired expression.
\end{proof}
\begin{figure}[]
\centering \scalebox{0.4}{\includegraphics{K3_Labelled.eps}}
\caption{The labeled Hasse diagram of $K_3$}
\end{figure}
\section{Panyushev conjecture}
This section is devoted to proving Theorem \ref{thm-main}.
\noindent \emph{Proof of Theorem \ref{thm-main}.} Note that when
$\frg$ is $A_n$, the extra-special $\Delta(1)\cong [n-1]\sqcup
[n-1]$; when $\frg$ is $C_n$, the extra-special $\Delta(1)\cong
[2n-2]$. One can verify Theorem \ref{thm-main} for these two cases
without much effort. We omit the details.
For $\frg=B_n$, the extra-special $\Delta(1)\cong [2]\times [2n-3]$.
Now $|\Pi_{l}|=n-1$, $h-1=2n-1$, and $h^*-2=2n-3$. As in Section 2,
let $L_i$ ($0\leq i\leq 2n-3$) be the rank level lower ideals. For
simplicity, we denote $\mathfrak{X}_{[2]\times [2n-3]}$ by
$\mathfrak{X}$. For any $1\leq i\leq n-2$, let us analyze the type I
$\mathfrak{X}$-orbit $\mathcal{O}(L_i, L_i)$ with the aid of Lemma
\ref{lemma-operator-ideals-CmP}:
\begin{align*}
\mathfrak{X}(L_i, L_i)&=(L_{i+1}, L_0),\\
\mathfrak{X}^{2n-4-i}(L_{i+1}, L_0)&=(L_{2n-3}, L_{2n-4-i}),\\
\mathfrak{X}(L_{2n-3}, L_{2n-4-i})&=(L_{2n-3-i}, L_{2n-3-i}),\\
\mathfrak{X}(L_{2n-3-i}, L_{2n-3-i})&=(L_{2n-2-i}, L_{0}),\\
\mathfrak{X}^{i-1}(L_{2n-2-i}, L_{0})&=(L_{2n-3}, L_{i-1}),\\
\mathfrak{X}(L_{2n-3}, L_{i-1})&=(L_{i}, L_{i}).
\end{align*}
Thus $\mathcal{O}(L_i, L_i)$ consists of $2n-1$ elements. Moreover,
in this orbit, $(L_{2n-2-\frac{i+1}{2}}, L_{\frac{i-1}{2}})$ is the
unique ideal with size $2n-3$ when $i$ is odd, and $(L_{n+\frac{i}{2}-1},
L_{n-\frac{i}{2}-2})$ is the unique ideal with size $2n-3$ when $i$ is
even. Similarly, the orbit $\mathcal{O}(L_0, L_0)$ consists of
$2n-1$ elements and contains a unique ideal with size $2n-3$:
$(L_{n-1}, L_{n-2})$. Since there are $(n-1)(2n-1)$ lower ideals in
$[2]\times [2n-3]$ by Lemma \ref{lemma-ideals-CnP}, one sees that
all the $\mathfrak{X}$-orbits have been exhausted, and Theorem
\ref{thm-main} holds for $B_{n}$.
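The orbit analysis above can be confirmed by brute force for small $n$. The illustrative sketch below (not part of the paper) iterates the induced action of the reverse operator on lower ideals of $[2]\times[2n-3]$ for $n=4$ and recovers $n-1=3$ orbits, each of size $h-1=2n-1=7$.

```python
from itertools import chain, combinations, product

n = 4                                   # check the case B_4
d = 2 * n - 3
P = list(product((1, 2), range(1, d + 1)))          # the poset [2] x [2n-3]
leq = lambda u, v: u[0] <= v[0] and u[1] <= v[1]

def step(I):
    # induced action of the reverse operator on lower ideals:
    # take the minimal elements of the complement, then their down-closure
    comp = [x for x in P if x not in I]
    mins = [x for x in comp if not any(y != x and leq(y, x) for y in comp)]
    return frozenset(z for z in P for a in mins if leq(z, a))

subsets = chain.from_iterable(combinations(P, k) for k in range(len(P) + 1))
ideals = {frozenset(s) for s in subsets
          if all(y in s for x in s for y in P if leq(y, x))}

orbit_sizes, seen = [], set()
for I in ideals:
    if I in seen:
        continue
    orb, J = [], I
    while J not in orb:                 # the operator is invertible, so we return to I
        orb.append(J)
        J = step(J)
    seen.update(orb)
    orbit_sizes.append(len(orb))

print(len(ideals), sorted(orbit_sizes))   # 21 [7, 7, 7]
```

The $21=(n-1)(2n-1)$ lower ideals split into $n-1$ orbits of size $h-1$, as claimed.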
Let us consider $D_{n+2}$, where the extra-special $\Delta(1)\cong
[2]\times K_{n-1}$. We adopt the notation as in Section 2. For simplicity,
we denote $\mathfrak{X}_{[2]\times K_{n-1}}$ by $\mathfrak{X}$. We propose
the following.
\textbf{Claim.} $\mathcal{O}(L_i, L_i)$, $0\leq i\leq n-1$,
$\mathcal{O}(I_n, I_n)$, and $\mathcal{O}(I_{n^{\prime}},
I_{n^{\prime}})$ exhaust the orbits of $\mathfrak{X}$ on $[2]\times
K_{n-1}$. Moreover, each orbit has size $2n+1$ and contains a unique
lower ideal with size $2n$.
Indeed, firstly, for any $0\leq i\leq n-1$, observe that by Lemma
\ref{lemma-operator-ideals-CmP}, we have
\begin{align*}
\mathfrak{X}(L_i, L_i)&=(L_{i+1}, L_0),\\
\mathfrak{X}^{2n-i-2}(L_{i+1}, L_0)&=(L_{2n-1}, L_{2n-i-2}),\\
\mathfrak{X}(L_{2n-1}, L_{2n-i-2})&=(L_{2n-i-1}, L_{2n-i-1}),\\
\mathfrak{X}(L_{2n-i-1}, L_{2n-i-1})&=(L_{2n-i}, L_{0}),\\
\mathfrak{X}^{i-1}(L_{2n-i}, L_{0})&=(L_{2n-1}, L_{i-1}),\\
\mathfrak{X}(L_{2n-1}, L_{i-1})&=(L_{i}, L_{i}).
\end{align*}
Thus the type I orbit $\mathcal{O}(L_i, L_i)$ consists of $2n+1$ elements. Moreover,
in this orbit, $(L_{2n-i+\frac{i-1}{2}}, L_{\frac{i-1}{2}})$ is the
unique ideal with size $2n$ when $i$ is odd, $(L_{n+\frac{i}{2}},
L_{n-\frac{i}{2}-1})$ is the unique ideal with size $2n$ when $i>0$
is even, while $(L_{n}, L_{n-1})$ is the unique ideal
with size $2n$ when $i=0$.
Secondly, assume that $n$ is even and let us analyze the orbit
$\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \ref{lemma-operator-ideals-CmK}, we have
\begin{align*}
\mathfrak{X}(I_n, I_n)&=(I_{n^{\prime}}, L_0),\\
\mathfrak{X}^{n-1}(I_{n^{\prime}}, L_0)&=(I_{n}, L_{n-1}),\\
\mathfrak{X}(I_{n}, L_{n-1})&=(L_{n}, I_{n}),\\
\mathfrak{X}^{n-1}(L_{n}, I_{n})&=(L_{2n-1}, I_{n^{\prime}}),\\
\mathfrak{X}(L_{2n-1}, I_{n^{\prime}})&=(I_{n}, I_{n}).
\end{align*}
Thus the type II orbit $\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,
in this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The
analysis of the orbit $\mathcal{O}(I_{n^{\prime}}, I_{n^{\prime}})$
is entirely similar.
Finally, assume that $n$ is odd and let us analyze the orbit
$\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \ref{lemma-operator-ideals-CmK}, we have
\begin{align*}
\mathfrak{X}(I_n, I_n)&=(I_{n^{\prime}}, L_0),\\
\mathfrak{X}^{n-1}(I_{n^{\prime}}, L_0)&=(I_{n^{\prime}}, L_{n-1}),\\
\mathfrak{X}(I_{n^{\prime}}, L_{n-1})&=(L_{n}, I_{n^{\prime}}),\\
\mathfrak{X}^{n-1}(L_{n}, I_{n^{\prime}})&=(L_{2n-1}, I_{n^{\prime}}),\\
\mathfrak{X}(L_{2n-1}, I_{n^{\prime}})&=(I_{n}, I_{n}).
\end{align*}
Thus the type II orbit $\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,
in this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The
analysis of the orbit $\mathcal{O}(I_{n^{\prime}}, I_{n^{\prime}})$
is entirely similar.
To sum up, we have verified the claim since there are $(n+2)(2n+1)$
lower ideals in $[2]\times K_{n-1}$ by Lemma \ref{lemma-ideals-CnP}.
Noting that $|\Pi_{l}|=n+2$ and $h=h^*=2n+2$ for $\frg=D_{n+2}$, one sees that Theorem
\ref{thm-main} holds for $D_{n+2}$.
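The claim can likewise be confirmed computationally for small $n$. The following illustrative sketch (not part of the paper) builds $[2]\times K_{n-1}$ for $n=3$ (so $\frg=D_5$) and checks that there are $n+2=5$ orbits, each of size $2n+1=7$ and containing exactly one Lagrangian lower ideal of size $2n=6$.

```python
from itertools import chain, combinations, product

n = 3                                   # check the case D_5
# K_{n-1}: a chain of ranks 1..2n-1 with two incomparable elements at rank n
K = [(r, 0) for r in range(1, 2 * n)] + [(n, 1)]
leq_K = lambda x, y: x == y or x[0] < y[0]
P = list(product((1, 2), K))            # the poset [2] x K_{n-1}
leq = lambda u, v: u[0] <= v[0] and leq_K(u[1], v[1])

def step(I):
    # induced action of the reverse operator on lower ideals
    comp = [x for x in P if x not in I]
    mins = [x for x in comp if not any(y != x and leq(y, x) for y in comp)]
    return frozenset(z for z in P for a in mins if leq(z, a))

subsets = chain.from_iterable(combinations(P, k) for k in range(len(P) + 1))
ideals = {frozenset(s) for s in subsets
          if all(y in s for x in s for y in P if leq(y, x))}

orbits, seen = [], set()
for I in ideals:
    if I in seen:
        continue
    orb, J = [], I
    while J not in orb:
        orb.append(J)
        J = step(J)
    seen.update(orb)
    orbits.append(orb)

# number of Lagrangian ideals (size 2n) in each orbit
lagrangian = [sum(1 for J in orb if len(J) == 2 * n) for orb in orbits]
print(len(orbits), {len(o) for o in orbits}, lagrangian)   # 5 {7} [1, 1, 1, 1, 1]
```

All $35=(n+2)(2n+1)$ lower ideals are accounted for, matching the count from Lemma \ref{lemma-ideals-CnP}.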
Theorem \ref{thm-main} has been verified for all exceptional Lie
algebras using \texttt{Mathematica}. We only present the details for
$E_6$, where $\Delta(1)=[\alpha_2]$, and the Dynkin diagram is as follows.
\begin{figure}[H]
\centering \scalebox{0.5}{\includegraphics{E6-Dynkin.eps}}
\end{figure}
Note that $|\Pi_l|=6$,
$h-1=11$, $h^*-2=10$. On the other hand, $\mathfrak{X}$ has six
orbits on $\Delta(1)$, each has $11$ elements. Moreover, the size of
the lower ideals in each orbit is distributed as follows:
\begin{itemize}
\item[$\bullet$] $0, 1, 2, 4, 7, \textbf{10}, 13, 16, 18, 19, 20$;
\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;
\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;
\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$;
\item[$\bullet$] $5, 6, 6, 8, 9, \textbf{10}, 11, 12, 14, 14, 15$;
\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$.
\end{itemize}
One sees that each orbit has a unique Lagrangian lower ideal.
This finishes the proof of Theorem \ref{thm-main}. \hfill\qed
\medskip
\centerline{\scshape Acknowledgements} The research is supported by
the National Natural Science Foundation of China (grant no.
11571097) and the Fundamental Research Funds for the Central
Universities.
Tuataras are among the most evolutionarily distinct creatures in the class Reptilia. Of the four orders of reptiles (Crocodilia, Sphenodontia, Squamata, and Testudines), Sphenodontia (meaning "wedge tooth"), is devoted entirely to the two living species of tuatara, as well as many extinct species. Sphenodonts flourished 200 million years ago and diversified into a wide array of creatures, such as aquatic pleurosaurs.
An illustration of Pleurosaurus goldfussi, a sphenodontian from the late Jurassic that lived in what is now Germany and France.
Tuataras once flourished throughout New Zealand but are now extinct on the main islands. However, ambitious rat eradication programs have made it possible to reintroduce tuataras to many remote islands off the coast of New Zealand's main islands.
Tuataras, however, retain the ancient lizard-shaped body plan of tetrapods, though they are only distantly related to squamates, the order of reptiles that includes lizards. They used to be widespread throughout New Zealand; however, their numbers declined with the arrival of Polynesians and then Europeans, who brought cats, dogs, and rats to the islands. Rats in particular have decimated tuataras by eating their eggs, and tuataras can now only live on remote islands off the coast of the main islands of New Zealand.
Pronounced parietal eye – Tuataras, like many other chordates, have a parietal eye on the top of their head. Tuataras are distinct, though, in that their "third eye" is relatively well-developed, with a lens and retina. They have, according to Wikipedia, the most pronounced parietal eyes of all extant tetrapods. Their third eyes are only visible as a translucent spot on the top of their heads while they are juveniles; as they mature, pigments and scales cover up the spot. The function of their third eye is unknown; however, it probably plays a role in regulating circadian rhythms and temperature because it is a part of the epithalamus, which controls those processes. It might also manufacture vitamin D.
Primitive hearing organs – Tuataras (along with turtles) lack ear drums and ear holes, and their middle ear cavity, which in most amniotes (reptiles, birds, and mammals) is filled with fluid, is instead full of adipose (fatty) tissue. They also have primitive middle ear bones and unspecialized hair cells in their inner ear. Because of the simple construction of their ears, they can only hear frequencies from 100 to 800 Hz, compared to a range of 20 Hz to 20,000 Hz (20 kHz) for humans.
Funky teeth – Tuataras have two rows of teeth on their upper jaw and one row of teeth on their bottom jaw. The teeth on the bottom jaw fit in between the two upper rows of teeth, allowing tuataras to slice their prey with their teeth. This dental configuration is unique among reptiles; snakes also have two rows of teeth on their upper jaw, but they use them for different purposes. Judging by the information online, the teeth are often said to be extensions of the jaw bone rather than actual teeth. However, Marc Jones informed me in the comments that tuatara teeth are, indeed, real teeth; they are simply fused to the jaw bone, and their enamel is very thin. The teeth wear down as the tuatara ages until only smooth jaw bone is left, so old tuataras stick mostly to soft food like worms and grubs.
Tuatara teeth form serations on the jaws. A row of teeth on the bottom jaw fits between the two rows on the upper jaw. The skull also has a beak-like projection on the front part of the upper jaw.
Ribs – Tuataras are the only tetrapods (a term that includes amphibians, reptiles, mammals, and birds) with well-developed gastralia and uncinate processes. Gastralia are rib-like bones that form a cage on the underside of the abdomen, though they are not actually attached to the spine or ribs. They give tuataras a hard underbelly. The ribs of tuataras are short and have small, hooked projections called uncinate processes, which are present in a less-developed form in birds.
Tuataras have abdominal ribs, called gastralia, that gird their underside. They also have pronounced uncinate processes.
Vertebrae – Tuataras have hourglass-shaped (amphicoelous) vertebrae. Though this shape is typical for fish and amphibians, the tuatara is the only amniote (a term that encompasses reptiles, mammals, and birds) known to have such vertebrae.
Eyes – Though not unique to tuataras, their eyes can focus independently and have "duplex retinas" that contain two types of visual cells for day and night vision. Like many other amniotes, they also have a tapetum lucidum, which is a reflective membrane at the back of the eye that enhances night vision, and they have three eyelids – a top one, a bottom one, and a nictitating membrane, which is a clear membrane that moistens and protects the eye while also allowing sight.
Skull– Tuataras arguably have the most primitive skulls of all the amniotes (turtle skulls might be more primitive), with many of the original amniote features preserved. Most notably, they have two large openings, the temporal fenestrae (meaning "side windows" in Latin), on each side of their skulls [although, as Marc informs me, "The lower temporal bar which forms the lower boundary of the lower temporal fenestra has been secondarily acquired." So the temporal fenestrae are not inherited directly from primitive amniotes, although they are still a somewhat unique feature of the tuatara].
The two large temporal fenestrae on each side of the skull are the most notable primitive features of the tuatara skull.
In the News: You may have heard recently of Henry, a 111-year-old tuatara. For decades Henry was grumpy and aggressive, showing no interest in mating and attacking nearby tuataras. However, in 2002, veterinarians recognized and removed a tumor on Henry's testicles, and by 2008 he had fertilized Mildred, another tuatara who is probably between 70-80 years old. In a boon to the Southland Museum's tuatara breeding program, she laid 12 eggs, 11 of which hatched.
More Information: I have to admit that I owe a lot (almost everything) on this page to the tuatara Wikipedia article. However, the Wikipedia article draws on a ton of erudite sources for its information, which probably explains why it is so in-depth and technical. So in this case, ironically, Wikipedia is a much more dependable source of information than most of the stuff out there on the web. | {
package io.sphere.sdk.facets;
import io.sphere.sdk.categories.Category;
import io.sphere.sdk.categories.CategoryTree;
import io.sphere.sdk.models.Reference;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Optional;
import java.util.function.Function;
import static java.util.Collections.emptyList;
import static java.util.stream.Collectors.toList;
/**
* Mapper that transforms facet options with Category IDs into a hierarchical list of facet options, as defined by the given category list.
* The IDs are then replaced by the Category name, in a language according to the provided locales.
* Any facet option that is not represented in the list of categories, or that has no name for the given locales, is discarded.
*/
public class CategoryTreeFacetOptionMapper implements FacetOptionMapper {
private final List<Category> selectedCategories;
private final CategoryTree categoryTree;
private final List<Locale> locales;
private CategoryTreeFacetOptionMapper(final List<Category> selectedCategories,
final CategoryTree categoryTree, final List<Locale> locales) {
this.selectedCategories = selectedCategories;
this.categoryTree = categoryTree;
this.locales = locales;
}
@Override
public List<FacetOption> apply(final List<FacetOption> facetOptions) {
return getRootCategories().stream()
.map(root -> buildFacetOption(root, facetOptionFinder(facetOptions)))
.filter(Optional::isPresent)
.map(Optional::get)
.collect(toList());
}
public CategoryTreeFacetOptionMapper withCategories(final List<Category> selectedCategories,
final CategoryTree categoryTree, final List<Locale> locales) {
return new CategoryTreeFacetOptionMapper(selectedCategories, categoryTree, locales);
}
public static CategoryTreeFacetOptionMapper of(final List<Category> selectedCategories,
final CategoryTree subcategoryTree, final List<Locale> locales) {
return new CategoryTreeFacetOptionMapper(selectedCategories, subcategoryTree, locales);
}
/**
* Initializes the facet mapper without any category tree associated.
* Notice with this configuration no category will be obtained, so please configure it via {@link #withCategories(List, CategoryTree, List)}.
* @return a new instance of {@link CategoryTreeFacetOptionMapper} without any category tree
*/
public static CategoryTreeFacetOptionMapper ofEmptyTree() {
return of(emptyList(), CategoryTree.of(emptyList()), emptyList());
}
private List<Category> getRootCategories() {
// A category is treated as a root if it has no parent, or if its parent
// is not part of this (possibly partial) category tree.
return categoryTree.getAllAsFlatList().stream().filter(category -> {
final Optional<Reference<Category>> parentRef = Optional.ofNullable(category.getParent());
return parentRef.map(parent -> !categoryTree.findById(parent.getId()).isPresent()).orElse(true);
}).collect(toList());
}
private Optional<FacetOption> buildFacetOption(final Category category, final Function<Category, Optional<FacetOption>> facetOptionFinder) {
Optional<FacetOption> facetOption = facetOptionFinder.apply(category);
final List<Category> children = categoryTree.findChildren(category);
if (!children.isEmpty()) {
facetOption = addChildrenToFacetOption(facetOption, category, children, facetOptionFinder);
}
final Optional<FacetOption> labelledOption = setNameToFacetOptionLabel(facetOption, category, locales);
return setLinkToFacetOptionValue(labelledOption, category, locales);
}
private Optional<FacetOption> addChildrenToFacetOption(final Optional<FacetOption> facetOption,
final Category category, final List<Category> children,
final Function<Category, Optional<FacetOption>> facetOptionFinder) {
// An option counts as selected if its category was explicitly selected
// or if its existing facet option is already marked selected.
boolean selected = selectedCategories.contains(category) || facetOption.map(FacetOption::isSelected).orElse(false);
// Aggregate the counts of all child facet options into this option's count.
long count = facetOption.map(FacetOption::getCount).orElse(0L);
List<FacetOption> childrenFacetOption = new ArrayList<>();
for (final Category child : children) {
final Optional<FacetOption> childFacetOption = buildFacetOption(child, facetOptionFinder);
if (childFacetOption.isPresent()) {
count += childFacetOption.get().getCount();
childrenFacetOption.add(childFacetOption.get());
}
}
return updateFacetOption(facetOption, category, selected, count, childrenFacetOption);
}
private Optional<FacetOption> updateFacetOption(final Optional<FacetOption> facetOption, final Category category,
final boolean selected, final long count, final List<FacetOption> childrenFacetOption) {
FacetOption updatedFacetOption = null;
if (facetOption.isPresent()) {
updatedFacetOption = facetOption.get()
.withCount(count)
.withSelected(selected)
.withChildren(childrenFacetOption);
} else if (!childrenFacetOption.isEmpty()) {
updatedFacetOption = FacetOption.of(category.getId(), count, selected).withChildren(childrenFacetOption);
}
return Optional.ofNullable(updatedFacetOption);
}
private Optional<FacetOption> setNameToFacetOptionLabel(final Optional<FacetOption> facetOptionOptional,
final Category category, final List<Locale> locales) {
return facetOptionOptional.flatMap(facetOption ->
category.getName().find(locales)
.map(facetOption::withLabel));
}
private Optional<FacetOption> setLinkToFacetOptionValue(final Optional<FacetOption> facetOptionOptional,
final Category category, final List<Locale> locales) {
return facetOptionOptional.flatMap(facetOption ->
locales.stream().findFirst()
.flatMap(locale -> category.getSlug().find(locale)
.map(facetOption::withValue)));
}
private Function<Category, Optional<FacetOption>> facetOptionFinder(final List<FacetOption> facetOptions) {
return category -> facetOptions.stream()
.filter(facetOption -> facetOption.getValue().equals(category.getId()))
.findFirst();
}
}
With The Happening, director M. Night Shyamalan asks a lot from
the audience. The premise behind the movie is so odd and strains the limits of believability that most viewers will simply check out. You either accept and go with it or you don't. There is so much interesting atmosphere and skill in the direction that I could give most of it a pass, but still, it is awfully silly.
The Happening begins in Boston as a group of people in a park suddenly become immobilized by some unseen force and then proceed to commit violent suicide. One woman jabs a knitting needle into her neck while her friend watches in horror. But the mystery doesn't stop there as it seems that whatever is going on is spreading further into the city as other citizens begin jumping off of tall buildings, shooting themselves and hanging themselves from trees.
Our main characters in the movie are science teacher Mark Wahlberg and his wife Zooey Deschanel and we follow them, along with various bands of survivors trying to escape the city, as they flee into the country. As likable as Wahlberg and Deschanel are usually, their performances seem curiously muted, except for a bizarre, scenery-chewing Betty Buckley who appears late in the film. With all of the chaos that surrounds them you'd expect a little more hysteria, but for the most part they kinda remain rather deadpan about it all.
So, what is happening in The Happening? Well, the explanation has something to do with a self-defense response from plant life that is trying to kill off a sizable portion of the human population. It seems there is this chemical being released into the atmosphere, but Shyamalan presents this by showing us the breeze blowing through the trees and it's this breeze that all of the characters are trying to outrun. Yeah…but no. Is it even possible to outrun the wind? I kind of doubt it.
As goofy as this story is, Shyamalan is almost able to overcome it. He creates a lot of genuine tension and the movie is at least watchable and never boring; it's just when the explanations of the hows and whys of what is going on are explained, confusion and bewilderment outshine the good stuff.
Many critics and moviegoers jumped on top of The Happening with both feet, considering it a cinematic disaster of biblical proportions. But compared to the films Shyamalan directed both before and after it, Lady in the Water and The Last Airbender, respectively, The Happening is nearly an outright success.
\section{Introduction}
\label{sec:intro}
Constraints play a key role in data management research, e.g., in the study of data quality, data integration and exchange, and query optimization
\cite{Barcelo0R13,Bohannon2007,2012Fan,Fan19,Fan2019a,Fan2016,FrancisL17,IlyasC19}.
As graph-structured data sets proliferate in domains such as social networks, biological networks and knowledge graphs, the study of graph dependencies is also of increasing practical interest \cite{Bonifati2018,Fan19}. This raises new challenges as graphs are typically schemaless, unlike relational data.
Recently, different classes of dependencies for graphs have been proposed such as Graph Functional Dependencies (GFDs~\cite{Fan2016}), Graph Entity Dependencies (GEDs~\cite{Fan2019a}) and Graph Differential Dependencies (GDDs~\cite{Kwashie2019}).
However, these dependencies focus on generalizing functional dependencies (i.e., variations of {\em equality}-generating dependencies) and cannot capture {\em tuple}-generating dependencies (TGDs) for graph data \cite{Fan19}.
As an example, we might want to enforce the constraint on a human resources graph that ``if two {\em people} vertices have the same {\em name} and {\em address} property-values and they both have a {\em works-at} edge to the same {\em company} vertex, then there should be a {\em same-as} edge between the two people.'' This is an example of a TGD on graph data, as satisfaction of the constraint requires the existence of an edge (i.e., the {\em same-as} edge), and when not satisfied, we repair the graph by generating {\em same-as} edges where necessary.
TGDs are important for many applications, e.g., for entity resolution during data cleaning and integration \cite{2012Fan,IlyasC19}.
Indeed, TGDs arise naturally in graph data management applications. Given the lack of
TGDs for graphs in the current study of graph dependencies, we propose a new
class of graph dependencies called Graph Generating Dependencies (GGDs) which
fully supports TGDs for property graphs (i.e., TGDs for graphs where vertices and edges can have associated property values, such as names and addresses in our example above -- the most common data model in practical graph data management systems) and generalizes earlier graph
dependencies. Informally, a GGD expresses a constraint between two (possibly)
different graph patterns enforcing relationships between property values and
topological structure.
In this short paper, we formally define GGDs, analyze the validation
problem for GGDs, and illustrate the utility of GGDs for the entity resolution problem.
We conclude the paper with indications for further study of GGDs.
\section{Related Work}
\label{sec:related}
We place GGDs in the context of relational and graph dependencies.
\textit{Relational data dependencies.}
The classical Functional Dependencies (FDs) have been
widely studied and extended for contemporary applications in data management.
Most closely related to GGDs are the Conditional Functional Dependencies (CFDs~\cite{Bohannon2007,2012Fan}) and the Differential Dependencies (DDs~\cite{Song2011}). CFDs were proposed for data cleaning tasks where the main idea is to enforce an FD only for a set of tuples specified by a condition, unlike the original FDs in which the dependency holds for the whole relation.
The DDs extend the FDs by specifying looser constraints according to user-defined distance functions between attribute values.
\textit{Graph dependencies.}
Previous work in the literature focused on defining FDs for RDF data and TGDs for graph data exchange and eliminating redundancy in RDF\cite{Barcelo0R13,Calvanese2014,FrancisL17,Pichler2010}.
Most closely related to GGDs are the graph functional dependencies (GFDs), graph entity dependencies (GEDs),
and graph differential dependencies (GDDs) \cite{Fan2016,Fan2019a,Kwashie2019}.
The GFDs are formally defined as a pair $(Q[\overline{x}], X \rightarrow Y)$ in which $Q[\overline{x}]$ is a graph pattern that defines a topological constraint while $X, Y$ are two sets of literals that define the property-value functional dependencies of the GFD.
Since graph data is usually schemaless, the property-value dependency is defined for the vertex attributes present in the graph pattern.
The GEDs subsume the GFDs and can express FDs, GFDs, and EGDs. Besides the property-value dependencies present in the GFDs, GEDs also carry special id literals to enable identification of vertices in the graph pattern.
The GDDs extend the GEDs by introducing distance functions instead of equality functions, similar to the DDs for relational data but defined over a topological constraint expressed by a graph pattern.
Similar to the definition of our proposed GGDs, the Graph Repairing Rules (GRRs~\cite{Cheng2018}) were proposed as an automatic repairing semantics for graphs. The semantics of a GRR is: given a source graph pattern it should be repaired to a given target graph pattern.
The graph-pattern association rules (GPARs~\cite{Fan2015}) according to \cite{Fan19} is a specific case of TGDs and has been applied to social media marketing. A GPAR is a constraint of the form $Q(x,y) \Rightarrow q(x,y)$ which states that if there exists an isomorphism from the graph pattern $Q(x,y)$ to a subgraph of the data graph, then an edge labeled $q$ between the vertices $x$ and $y$ is likely to hold.
The main differences of our proposed GGDs compared to previous works are the use of differential constraints (on both source and target side), the treatment of edges as first-class citizens in the graph patterns (in alignment with the property graph model), and the ability to entail the generation of new vertices and edges (see Section \ref{sec:definition} for details).
With these new features of the GGDs, we can encode relations between two graph patterns as well as the (dis)similarity between its vertices and edges properties values.
In general, GGD is the first constraint formalism for \emph{property graphs} supporting both EGDs and TGDs, as well as DDs for property values.
\section{Preliminaries}
\label{sec:preliminaries}
We first summarize standard notation and concepts
\cite{Fan2019a,Song2011,Bonifati2018}.
Let $O$ be a set of objects, $L$ be a finite set of labels, $K$ be a set of property keys, and $N$ be a set of values. We assume these sets to be pairwise disjoint.
A {\bf property graph} is a structure $(V,E,\eta, \lambda, \nu)$ where
\begin{itemize}
\item $V \subseteq O$ is a finite set of objects, called vertices;
\item $E \subseteq O$ is a finite set of objects, called edges;
\item $\eta: E \rightarrow V \times V$ is a function assigning to each edge an ordered pair of vertices;
\item $\lambda: V \cup E \rightarrow P(L)$ is a function assigning to each object a finite set of labels (i.e., $P(S)$ denotes the set of finite subsets of set $S$).
Abusing the notation, we will use $\lambda_v$ for the function assigning labels to vertices and $\lambda_e$ for the function that assigns labels to the edges; and
\item $\nu: (V \cup E) \times K \rightarrow N$ is a partial function assigning values for properties/attributes to objects, such that the object sets $V$ and $E$ are disjoint (i.e., $V \cap E = \emptyset$) and the set of domain values where $\nu$ is defined is finite.
\end{itemize}
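For concreteness, the structure $(V,E,\eta,\lambda,\nu)$ above can be sketched in Python. This is an illustrative encoding, not part of the formalism; all names and sample values are ours.

```python
from dataclasses import dataclass, field

# Minimal sketch of a property graph (V, E, eta, lambda, nu).
@dataclass
class PropertyGraph:
    vertices: set = field(default_factory=set)   # V
    edges: set = field(default_factory=set)      # E
    eta: dict = field(default_factory=dict)      # edge -> (source, target)
    labels: dict = field(default_factory=dict)   # object -> set of labels (lambda)
    nu: dict = field(default_factory=dict)       # (object, key) -> value (partial)

g = PropertyGraph()
g.vertices |= {"v1", "v2"}
g.edges.add("e1")
g.eta["e1"] = ("v1", "v2")        # e1 goes from v1 to v2
g.labels["v1"] = {"Person"}
g.labels["e1"] = {"works"}
g.nu[("v1", "name")] = "Ana"      # nu is only defined where a property is set
```

Note that $\nu$ being partial is captured by simply omitting keys from the dictionary.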
A {\bf graph pattern} is a directed graph $Q[\overline{x}] = (V_Q, E_Q, \lambda_Q)$ where $V_Q$ and $E_Q$ are finite sets of pattern vertices and edges, respectively, and $\lambda_Q$ is a function that assigns a label $\lambda_Q(u)$ to each vertex $u \in V_Q$ or edge $e \in E_Q$.
Abusing notation, we use ${\lambda_v}_Q$ as a function to assign labels to vertices and ${\lambda_e}_Q$ to assign labels to edges.
Additionally, $\overline{x}$ is a list of variables that include all the vertices in $V_Q$ and edges in $E_Q$.
We say a label $l$ {\bf matches} a label $l'\in L$, denoted as $l \asymp l'$, if $l \in L \text{ and } l = l'$, or $l = $ `-' (wildcard).
A match denoted as $h[\overline{x}]$ of a graph pattern $Q[\overline{x}]$ in a graph G is a homomorphism of $Q[\overline{x}]$ to G such that for each vertex $u \in V_Q, {\lambda_v}_Q(u) \asymp {\lambda_v}(h(u))$; and for each edge $e = (u,u') \in E_Q$, there exists an edge $e' = (h(u), h(u'))$ and ${\lambda_e}_Q(e) \asymp {\lambda_e}(e')$.
A {\bf differential function} $\phi[A]$ on attribute $A$ is a constraint of difference over $A$ according to a distance metric \cite{Song2011}. Given two tuples $t_1,t_2$ in an instance I of relation R, $\phi[A]$ is true if the difference between $t_1.A$ and $t_2.A$ agrees with the constraint specified by $\phi[A]$, where $t_1.A$ and $t_2.A$ refer to the value of attribute $A$ in tuples $t_1$ and $t_2$, respectively. We use the differential function idea to define constraints in GGDs.
\section{GGD: Syntax and Semantics}
\label{sec:definition}
A {\bf Graph Generating Dependency} (GGD) is a dependency of the form \[ Q_s[\overline{x}], \phi_s \rightarrow Q_t[\overline{x},\overline{y}],\phi_t\] where:
\begin{itemize}
\item $Q_s[\overline{x}]$ and $Q_t[\overline{x},\overline{y}]$ are graph patterns, called \textbf{source} graph pattern and \textbf{target} graph pattern, respectively;
\item $\phi_s$ is a set of differential constraints defined over the variables $\overline{x}$ (variables of the graph pattern $Q_s$); and
\item $\phi_t$ is a set of differential constraints defined over the variables $\overline{x} \cup \overline{y}$, in which $\overline{x}$ are the variables of the source graph pattern $Q_s$ and $\overline{y}$ are any additional variables of the target graph pattern $Q_t$.
\end{itemize}
A differential constraint in $\phi_s$ on $[\overline{x}]$ (resp., in $\phi_t$ on $[\overline{x},\overline{y}]$) is a constraint of one of the following forms \cite{Kwashie2019,Song2011}:
\begin{enumerate}
\item $\delta_A(x.A,c) \le t_A$
\item $\delta_{A_1A_2}(x.A_1, x'.A_2) \le t_{A_1A_2}$
\item $x = x'$ or $x \neq x'$
\end{enumerate}
where $x, x' \in \overline{x}$ (resp. $\in \overline{x} \cup \overline{y}$) for $Q_s[\overline{x}]$ (resp. for $Q_t[\overline{x},\overline{y}]$), $\delta_A$ is a user defined similarity function for the property $A$ and $x.A$ is the property value of variable $x$ on $A$, $c$ is a constant of the domain of property $A$ and $t_A$ is a pre-defined threshold.
The differential constraints defined by (1) and (2) can use the operators $(=, <, >, \le, \ge, \neq)$. The user-defined distance function $\delta_A$ can be, for example, an edit distance when $A$ is a string or the difference between two numerical values.
The constraint (3) $x = x'$ states that $x$ and $x'$ are the same entity (vertex/edge) and can also use the inequality operator stating that $ x \neq x'$. Since the pattern variables $\overline{x}$ in $Q_s$ (resp. $\overline{x},\overline{y}$ in $Q_t$) includes both vertices and edges, this allows to match vertex-vertex variables, edge-edge and vertex-edge variables.
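A minimal sketch of how the three constraint forms could be evaluated, assuming the user-defined distance $\delta$ is supplied as a callable; all function names here are illustrative, not part of the formalism.

```python
# Form (1): delta_A(x.A, c) <= t_A  -- compare a property value to a constant.
def check_constant(delta, x_value, c, t):
    return delta(x_value, c) <= t

# Form (2): delta_{A1A2}(x.A1, x'.A2) <= t_{A1A2} -- compare two property values.
def check_pair(delta, x_value, y_value, t):
    return delta(x_value, y_value) <= t

# Form (3): x = x' (or x != x') -- identity of vertex/edge objects.
def check_identity(x, y, equal=True):
    return (x == y) if equal else (x != y)

# Example user-defined distance for numeric attributes.
abs_diff = lambda a, b: abs(a - b)
```

For instance, `check_constant(abs_diff, 10, 12, 3)` holds because the distance 2 is within the threshold 3; an edit distance would play the same role for string-valued attributes.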
\begin{figure}
\centering
\includegraphics[width=.78\linewidth]{figures/ggd_eg1.pdf}
\caption{Example GGDs.}
\label{fig:firstExample}
\end{figure}
\textbf{Example 1} (GGD $\sigma_1$ in \autoref{fig:firstExample}).
Here, $\sigma_1$ implies that for the matches of the source graph pattern $Q_s$, if the student type is ``high school'' then there exists a target graph pattern $Q_t$, in which the same matched vertex for teacher has an edge labelled `works' to a `high school' vertex in which the difference/(dis)similarity between the high school name and the student school name should be less than or equal to $1$.
\textbf{Example 2} (GGD $\sigma_2$ in \autoref{fig:firstExample}).
According to $\sigma_2$, for the matches of $Q_s$ if the project department and the department name are (dis)similar according to the threshold ``$2$" then there exists an edge labelled ``manages" linking the department and the project (graph pattern $Q_t$).
\subsection{Semantics of GGDs}
In order to interpret a GGD $Q_s[\overline{x}], \phi_s \rightarrow Q_t[\overline{x},\overline{y}],\phi_t$, we first specify what it means for a graph pattern match to satisfy a set of differential constraints.
Consider a graph pattern $Q[\overline{z}]$, a set of differential constraints $\phi_z$ and a match of this pattern represented by $h[\overline{z}]$ in a graph $G$. The match $h[\overline{z}]$ satisfies ($\models$) a differential constraint $k \in \phi_z$ if
\begin{enumerate}
\item When $k$ is $\delta_{A}(z.A,c) \le t_{A}$ then attribute $z.A$ exists at vertex/edge $z = h(z)$ and $\delta_{A}(z.A,c) \le t_{A}$ meaning that the user defined distance (for property A) $\delta_A$ between a constant $c$ and the attribute A value of vertex/edge z is less or equal than the defined threshold $t_A$.
\item When $k$ is $\delta_{A_1A_2}(z.A_1, z'.A_2) \le t_{A_1A_2}$ then attributes $A_1,A_2$ exist at vertex/edge $z = h(z)$ and $z' = h(z')$ and $\delta_{A_1A_2}(z.A_1, \\z'.A_2)$ $\le t_{A_1A_2}$.
\item When $k$ is $z = z'$, then $h(z)$ and $h(z')$ refer to the same vertex/edge.
\end{enumerate}
The match $h[\overline{z}]$ satisfies $\phi_z$, denoted as $h[\overline{z}] \models \phi_z$ if the match $h[\overline{z}]$ satisfies every differential constraint in $\phi_z$. If $\phi_z = {\emptyset}$ then $h[\overline{z}] \models \phi_z$ for any match of the graph pattern $Q[\overline{z}]$ in $G$.
Given a GGD $Q_s[\overline{x}], \phi_s \rightarrow Q_t[\overline{x},\overline{y}],\phi_t$ we denote the matches of the source graph pattern $Q_s[\overline{x}]$ as $h_s[\overline{x}]$ while the matches of the target graph pattern $Q_t[\overline{x},\overline{y}]$ are denoted by $h_t[\overline{x},\overline{y}]$ which can include the variables from the source graph pattern $\overline{x}$ and additional variables $\overline{y}$ particular to the target graph pattern $Q_t[\overline{x},\overline{y}]$.
A GGD $\sigma = Q_s[\overline{x}], \phi_s \rightarrow Q_t[\overline{x},\overline{y}],\phi_t$ holds in a graph G, denoted as $G \models \sigma$, if and only if for every match $h_s[\overline{x}]$ of the source graph pattern $Q_s[\overline{x}]$ in $G$ satisfying the set of constraints $\phi_s$, there exists a match $h_t[\overline{x},\overline{y}]$ of the graph pattern $Q_t[\overline{x},\overline{y}]$ in $G$ satisfying $\phi_t$ such that for each $x$ in $\overline{x}$ it holds that $h_s(x) = h_t(x)$.
In case a GGD is not satisfied, we typically fix this by \emph{generating} new vertices/edges in $G$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/ggd_eg2.pdf}
\caption{Example GGDs.}
\label{fig:secondExample}
\end{figure}
\textbf{Example 3} (GGD $\sigma_3$ in \autoref{fig:secondExample}). Following the semantics of the GGDs, for every match of $Q_s$ that the number of times an article mentions a person is greater than $10$, there exists a match of $Q_t$ such that the theme type is ``human". Observe that, in this example, we use the property value of the edge variable $m$ in the differential constraint which is possible in GGDs as edges are also considered variables in the graph patterns.
\textbf{Example 4} (GGD $\sigma_4$ in \autoref{fig:secondExample}). This GGD enforces that if the latitude and longitude coordinates of the city $c$ in which a person works and of the city $i$ in which a person lives are the same, then $c$ and $i$ should refer to the same city. Observe that in this case the target graph pattern is empty.
GGDs can express other graph constraints previously proposed in the literature.
\autoref{fig:graphDepToGGDs} shows the relationship between the graph dependencies in terms of expressiveness.
GEDs~\cite{Fan19} subsume GFDs~\cite{Fan2016}, while GDDs~\cite{Kwashie2019} extend GEDs by including differential constraints, represented in the figure by the dashed line.
GGDs can express the GFDs, GEDs and GDDs by considering an empty target graph pattern ($Q_t[\overline{x},\overline{y}]$).
Since GEDs and GFDs only enforce equality between attributes, we can express the equality in GGDs differential constraints by using an equality operator and a threshold value $0$.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/ggd_cons.pdf}
\caption{Expressiveness of GGDs to other graph constraints.}
\label{fig:graphDepToGGDs}
\end{figure}
\noop{
There are three fundamental problems for GGDs:
\begin{itemize}
\item \emph{Satisfiability} - Given a set of GGDs $\Sigma$, does there exist a non-empty graph $G$ on which all GGDs in $\Sigma$ hold, denoted as $G \models \Sigma$?
\item \emph{Implication} - Given a set of GGDs $\Sigma$ and a GGD $\sigma$, does $\Sigma$ imply $\sigma$ (denoted by $\Sigma \models \sigma$) for every non-empty graph $G$ that satisfies $\Sigma$?
\item \emph{Validation} - Given a set of GGDs $\Sigma$ and a non-empty graph $G$, does the set of GGDs $\Sigma$ hold in $G$, denoted as $G \models \Sigma$?
\end{itemize}
In this paper, we discuss the validation problem and its complexity. Satisfiability and implication are left for future work.
}
\section{Validation}
We next discuss the \textbf{validation problem} for GGDs, defined as: Given a finite set $\Sigma$ of GGDs and graph G, does $G \models \Sigma$ (i.e., $G \models \sigma$ for each $\sigma\in\Sigma$)?
We propose an algorithm to validate a GGD $\sigma = Q_s[\overline{x}], \phi_s \rightarrow Q_t[\overline{x},\overline{y}],\phi_t$. The algorithm returns true if $\sigma$ is satisfied and false if $\sigma$ is violated.
We proceed as follows. For each
match $h_s(\overline{x})$ of the graph pattern $Q_s[\overline{x}]$ in $G$:
\begin{enumerate}
\item Check if $h_s(\overline{x})$ satisfies the source constraints (i.e., $h_s(\overline{x}) \models \phi_s$). If yes, then continue.
\item Retrieve all matches $h_t(\overline{x},\overline{y})$ of the target graph pattern $Q_t[\overline{x},\overline{y}]$ where $h_s(x) = h_t(x)$ for all $x\in\overline{x}$. If there are no such matches of the target graph pattern, return false.
\item Verify if $h_t(\overline{x},\overline{y}) \models \phi_t$. If there exists at least one match of the target graph pattern such that $h_t(\overline{x},\overline{y}) \models \phi_t$, then return true, else return false.
\end{enumerate}
\noindent This process is repeated for each $\sigma \in \Sigma$.
For each match on which $\sigma$ is violated, new vertices/edges can be generated in order to repair it (i.e., in order to make the GGD $\sigma$ valid on $G$).
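The validation loop above can be sketched as follows. Matches are represented as dictionaries binding pattern variables to graph objects; pattern matching and constraint evaluation are abstracted into supplied callables. This is a simplified illustration of the three steps, not an efficient implementation.

```python
def validate_ggd(source_matches, phi_s, target_matches, phi_t, shared_vars):
    """Return True iff the GGD holds over the given match sets."""
    for h_s in source_matches:
        if not phi_s(h_s):          # step 1: ignore matches violating phi_s
            continue
        # step 2: target matches that agree with h_s on the shared variables x
        candidates = [h_t for h_t in target_matches
                      if all(h_t.get(x) == h_s[x] for x in shared_vars)]
        # step 3: at least one candidate must satisfy phi_t
        #         (an empty candidate list also means a violation)
        if not any(phi_t(h_t) for h_t in candidates):
            return False
    return True
```

Running `validate_ggd` once per GGD in $\Sigma$ answers the validation question for the whole set.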
We next analyse the complexity of each of the ``operations'' presented in the algorithm separately to analyse the complexity of the validation of a GGD.
Graph pattern matching queries can be expressed as conjunctive queries (CQ) \cite{Bonifati2018} which are well-known to have NP-complete evaluation complexity \cite{Pichler2011}.
The graph pattern matching problem can be solved in PTIME when the graph pattern is bounded with $k$ tree-width~\cite{Fan2016,Pichler2011}.
To analyze the complexity of constraint checking, let $|h_s[\overline{x}]|$ be the number of matches found of the query pattern $Q_s$, $|\phi_s|$ the number of differential functions in $\phi_s$, and $f_i$ the cost to check the user-defined differential function $i$, in which $0 \le i \le |\phi_s|$. The total cost for checking the differential constraints in $\phi_s$ is: $|h_s[\overline{x}]| \sum_{i=0}^{|\phi_s|} f_i$.
For each of the matches that satisfies the differential functions in $\phi_s$, we verify the target side of the differential constraint, $Q_t[\overline{x},\overline{y}],\phi_t$.
Assuming that the cost for checking the differential functions is tractable,
we can show that the complexity of the validation problem of GGDs follows from the evaluation problem for classical relational tuple-generating dependencies, i.e., has $\Pi_{2}$P-complete complexity \cite{Pichler2011}.
Pichler and Skritek have established polynomial time validation complexity for a large subclass of tgds \cite{Pichler2011}, which corresponds to graph patterns
covering over 99\% of graph patterns
observed in practice \cite{BonifatiMT20}.
\section{GGDs for Entity Resolution}
\label{sec:usecases}
The main novelty of the GGDs is in the generation of new vertices or edges in case a GGD is violated. Given this feature, GGDs can be applied in different scenarios. In this section, we show how GGDs can be used in solutions for entity resolution (ER).
ER is the task of identifying and linking entities across (possibly) different data sources that refer to the same real-world entity\cite{2012Fan,IlyasC19}.
The generation of new vertices and/or edges in case a GGD is violated gives the possibility to rewrite ER matching rules or conditions as GGDs. Towards entity resolution we can define the source graph patterns as several disjoint patterns from (possibly) different graph sources and use the target graph pattern specifications as the representation of the deduplicated graphs.
Thus, using this approach, we can also encode more information than just vertex-to-vertex, or row-to-row in relational databases, as we consider all the information in a defined graph pattern.
\textbf{Example 5} (\autoref{fig:exampleER}). As discussed before, the source graph pattern encodes the rules to perform entity resolution over (possibly) different graph sources.
To perform ER, we can add links of type `sameAs' between the matched entities in the target graph pattern. These links will be generated to validate the defined GGD.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{figures/ggd_er1.pdf}
\caption{GGD for Entity Resolution.}
\label{fig:exampleER}
\end{figure}
A second interesting case in which the GGDs can be used to solve entity resolution is when two graph patterns that refer to the same real-world entity have different structures in (possibly) different sources. In this case, we can generate with the GGDs a vertex or a graph pattern that can summarize all the information of these two graph patterns (see \textbf{Example 6}).
An advantage in using GGDs for the ER is the use of edges as variables, allowing to use the information of edge properties also in the matching rules, as it can be observed in the next example.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{figures/ggd_er2.pdf}
\caption{GGD for Entity Resolution.}
\label{fig:exampleER2}
\end{figure}
\textbf{Example 6} (\autoref{fig:exampleER2}). In this case, we have two graph sources that model differently the same act of purchasing a product. In order to deduplicate this data, it is useful to create a vertex in the integrated graph that is able to aggregate the information that matches in both sources.
\section{Conclusion and Future Work}
Motivated by practical applications in graph data management,
we proposed a new class of graph dependencies called Graph Generating Dependencies (GGDs). The GGDs are inspired by the tuple- and equality-generating dependencies from relational data, where constraint satisfaction can generate new vertices and edges.
A GGD defines a graph dependency between two (possibly) different graph patterns and the constraints over the property values are differential constraints.
We also presented the complexity of the validation problem, as well as how GGDs can be applied to the problem of ER.
As future work, we plan to study the satisfiability and implication problems for the GGDs, inference rules, tractable cases, the discovery of GGDs, repair of GGDs, and also further apply the GGDs to other tasks in graph data management.
\begin{acks}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825041.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
KX-P8420 cannot print correctly from WordPerfect for Windows 95 if the file includes a graphic pattern.
1) Launch Notepad and open KX-P8420 PPD file (\Windows\system\Pakx2010.ppd).
2) Edit the file as follows.
3) Select "Save" from the "File" menu.
4) Exit Notepad by selecting "Exit" from the "File" menu.
6) When printing, select 1200 x 1200 dpi for the resolution in the Device Options tab of the printer properties.
If a problem occurs after modifying the PPD file, restore it to the original one.
The original file is located on the CD-ROM at the following directory if you are using the English version of the driver.
Q: How to recreate the += operator without using it

Hi, I'm having a problem that I don't know how to avoid. I'm making a dictionary, and I know the cause of the problem but don't really know how to fix it.
javascript:
show = function(i) {
for (let j = 0; j < dictionary[i].content_titles.length; j++) {
console.log('hi')
document.getElementById('topic_div').innerHTML +=
`<h3>${dictionary[i].content_titles[j]}</h3>`;
}
}
html:
<div class="topic" id="topic_div">
<h3 id="topic_text"></h3>
<p id="content"></p>
<hr>
</div>
Basically, what's happening is that since I'm using the += operator, whenever the function runs it appends the loop's output all over again. I thought of clearing the element first, but I need the h3, p, and hr to stay, because they hold the name and the starting content; that's why I can't delete the HTML and rebuild it in the for loop with the rest of the titles.
So how do I fix this while keeping the same layout intact?
A: Add another span to contain your content_titles:
<div class="topic" id="topic_div">
<h3 id="topic_text"></h3>
<p id="content"></p>
<hr>
<span id="content_titles"></span>
</div>
show = function(i) {
document.getElementById('content_titles').innerHTML = "";
for (let j = 0; j < dictionary[i].content_titles.length; j++) {
document.getElementById('content_titles').innerHTML +=
`<h3>${dictionary[i].content_titles[j]}</h3>`;
}
}
Not sure if this is exactly what you wanted, but try it.
package oled96x96
// ScrollDirection is the type determining the scrolling direction of text
type ScrollDirection byte
// ScrollSpeed is the type determining the speed of scrolling
type ScrollSpeed byte
var (
ScrollLeft ScrollDirection = 0x00
ScrollRight ScrollDirection = 0x01
Scroll2Frames ScrollSpeed = 0x7
Scroll3Frames ScrollSpeed = 0x4
Scroll4Frames ScrollSpeed = 0x5
Scroll5Frames ScrollSpeed = 0x0
Scroll25Frames ScrollSpeed = 0x6
Scroll64Frames ScrollSpeed = 0x1
Scroll128Frames ScrollSpeed = 0x2
Scroll256Frames ScrollSpeed = 0x3
)
var (
// buffer sent to indicate the following data belongs to a command
cmdCmdBuf = []byte{cmdCmd, 0x0}
dataCmdBuf = []byte{dataCmd, 0x0}
)
const (
// VerticalModeFlag = 01
// HorizontalModeFlag = 02
// address is the i2c address of the device
address = 0x3c
cmdCmd byte = 0x80
dataCmd byte = 0x40
lockUnlockCmd byte = 0xFD // takes a 2nd arg byte
startLineCmd byte = 0xA1 // takes a 2nd arg byte
displayOffCmd byte = 0xAE
displayOnCmd byte = 0xAF
displayOffsetCmd byte = 0xA2 // takes a 2nd arg byte
setColAddrCmd byte = 0x15 // takes 3 arg bytes
normalDisplayCmd byte = 0xA4
inverseDisplayCmd byte = 0xA7
activateScrollCmd byte = 0x2F
dectivateScrollCmd byte = 0x2E
contrastLevelCmd byte = 0x81
)
// sendCmd sends the passed data preluded by the command byte
func (o *OLED96x96) sendCmd(buf ...byte) error {
for _, b := range buf {
cmdCmdBuf[1] = b
if err := o.Device.Write(cmdCmdBuf); err != nil {
return err
}
}
return nil
}
// sendData sends the passed data preluded by the data byte
func (o *OLED96x96) sendData(buf ...byte) error {
for _, b := range buf {
dataCmdBuf[1] = b
if err := o.Device.Write(dataCmdBuf); err != nil {
return err
}
}
return nil
}
\section{Introduction}
Only a few low-mass X-ray binaries (LMXBs -- systems where a compact object like a neutron star or a black hole accretes matter from a companion star via Roche lobe overflow) have been studied with polarimetric techniques to date (Charles et al. 1980; Dolan \& Tapia 1989; Gliozzi et al. 1989; Hannikainen et al. 2000; Shultz et al. 2004; Brocksopp et al. 2007; Shahbaz et al. 2008; Russell et al. 2008; Russell et al. 2011). Polarisation provides a powerful diagnostic tool to obtain information about geometrical and physical conditions of these systems, scattering properties of their accretion discs or the presence of strong magnetic fields.
Most of the LMXB radiation is expected to be unpolarized. Optical light from LMXBs is in fact principally made of thermal blackbody radiation from accretion disc and the companion star, and does not possess any preferential direction of oscillation.
Hydrogen in the disc is nevertheless in many cases totally ionised; for this reason a significant (but small) linear polarisation (LP) is expected in the optical (Dolan 1984; Cheng et al. 1988) due to Thomson scattering of emitted unpolarised radiation with free electrons in the disc. This linear polarisation component is usually almost constant for scattering in the nearly symmetrical accretion disc. If there are deviations from axial symmetry, some phase-dependent variations might be expected. Furthermore, radiation emitted from the accretion disc could interact via inverse Compton scattering with the electrons in a hot plasma corona that surrounds the disc itself (Haardt et al. 1993). This phenomenon could induce high frequency polarisation.
Another possible and intriguing origin of a significant polarisation can be synchrotron emission, that arises from emission of a relativistic particle jet. Optically thin synchrotron radiation produces in fact intrinsically linearly polarized light at a high level, up to tens per cent, especially in the NIR (Russell \& Fender 2008). Jets in X-ray binaries are expected to be linked to accretion (disc-jet coupling, Fender 2001b), and for this reason they have been principally observed in persistent systems or during the outbursts of transient LMXBs, especially if containing a black hole. In the past few years, the
evidence for jet emission during quiescence of LMXBs containing both black holes and neutron stars has been reported (Russell et al. 2006; Russell et al. 2007; Russell \& Fender 2008; Baglio et al. 2013; Shahbaz et al. 2013). The detection of a high level of linear polarisation in the NIR is for this reason considered the main route to assess the emission of a relativistic jet.
Radiation from any source can also be polarised by the interaction with interstellar dust. This effect depends on wavelength as described by the Serkowski law (Serkowski et al. 1975) and must be accounted for in the analysis.
Transient LMXBs are generally faint objects in the optical; for this reason, only the brightest ones have been observed polarimetrically. Most of these studies concerned systems during outburst, or systems hosting black holes during quiescence.
In this paper we report the results of the optical multi-band ($ BVRI $) and infrared ($ J $-band) polarimetric observations of the LMXB Cen X-4 during quiescence using the ESO 3.6 m telescope at La Silla and the TNG telescope, respectively. This is the first polarimetric study of a quiescent LMXB containing a NS.
Cen X-4 was discovered during an X-ray outburst in 1969 by the X-ray satellite \textit{Vela 5B} (Conner et al. 1969). During a second outburst in 1979 the source was detected also in the radio band (Hjellming 1979), and its optical counterpart was identified with a blue star that had brightened by 6 mag to $ V $=13 (Canizares et al. 1980). The companion star was later classified as a $ 0.7M_{\odot} $ K5--7 star (Shahbaz et al. 1993; Torres et al. 2002), which has evolved to fill its $ \sim 0.6\,R_{\odot} $ Roche lobe. The $ \sim 15.1 $ hr orbital period was determined from the ellipsoidal variations of the optical light curve (Cowley et al. 1988; Chevalier et al. 1989; McClintock et al. 1990).
Cen X-4 is one of the brightest quiescent systems in the optical known to date ($V$=18.7 mag) and possesses a non-negligible disc component in the optical that contributes $ \sim 80 \% $ in $ B $, $ \sim 30 \% $ in $ V $, $ 25 \% $ in $ R $ and $ 10 \% $ in $ I $ (Shahbaz et al. 1993; Torres et al. 2002; D'avanzo et al. 2005). Cen X-4 is at a distance of $ 1.2 \pm 0.3 $ kpc (Kaluzienski et al. 1980) and the interstellar absorption is low ($ A_{V}=0.3 \,\rm mag $).
These characteristics make Cen X-4 an excellent candidate for polarimetric studies in quiescence.
Throughout the paper all the uncertainties are at $ 68 \% $ confidence level unless stated differently.
\section{Optical polarimetry}\label{Obs_parag}
The system Cen X-4 was observed on 11-12 March 2008 with the ESO 3.6 m telescope at La Silla, using the EFOSC2 instrument in polarimetric mode with the optical $ BVRI $ filters ($ 440 \rm nm -793 \rm nm $). The nights were clear, with seeing $ \lesssim 1'' $.
Image reduction was carried out following standard procedures: subtraction of an averaged bias frame and division by a normalized regular flat frame.
All flux measurements were performed through accurate aperture photometry with {\tt daophot} (Stetson 1987) for all the objects in the field.
The polarimetric calibration was done against polarised and non-polarised polarimetric standard stars provided by the FORS consortium, based on commissioning data taken with FORS1\footnote{\url{www.eso.org/sci/facilities/paranal/instruments/fors/inst/pola.html}}.
A Wollaston prism was inserted in the optical path. The incident radiation was split into two simultaneous and orthogonally polarised ordinary and extraordinary beams (o- and e-beams). Thanks to a Wollaston mask, the different images do not overlap. The use of a rotating half-wave plate (HWP) allowed us to take images at four different angles with respect to the telescope axis ($ \Phi_{i} = 22.5^{\circ}(i-1)$, $i=1,2,3,4$). Alternating the filters, a set of 10 images of 90 s integration each was obtained for each HWP angle in the $ BVI $ filters, and 9 for each angle in the $ R $ band, divided over the two nights of observation and covering about 30\% of the orbital period.
The normalised Stokes parameters $ Q $ and $ U $ for linear polarisation (LP) of the observed radiation are commonly evaluated starting from flux measures in both o- and e- beams ($ f^{o} $, $ f^{e} $) at just two orientation angles of the telescope's axis ($ 0^{\circ} $ and $ 45^{\circ} $):
\begin{equation}
Q=\frac{f^{o}(0^{\circ})-f^{e}(0^{\circ})}{f^{o}(0^{\circ})+f^{e}(0^{\circ})} ; \,\,\,\,\, U=\frac{f^{o}(45^{\circ})-f^{e}(45^{\circ})}{f^{o}(45^{\circ})+f^{e}(45^{\circ})}.
\end{equation}
Thanks to the rotating HWP, a higher accuracy measure of $ Q $ and $ U $ has been possible, using the whole set of possible orientations. If $ \Phi_{i}$ are the HWP orientation angles defined as above, $ Q $ and $ U $ can be obtained from:
\begin{equation}\label{stokes}
Q=\frac{F(\Phi_{1})-F(\Phi_{3})}{2} ; \,\,\,\,\, U=\frac{F(\Phi_{2})-F(\Phi_{4})}{2},
\end{equation}
where
\begin{equation}
F(\Phi_{i})=\frac{f^{o}(\Phi_{i})-f^{e}(\Phi_{i})}{f^{o}(\Phi_{i})+f^{e}(\Phi_{i})}.
\end{equation}
A first raw estimate of the observed polarisation degree $ P_{\rm obs} $ and angle $ \theta $ can then be obtained as:
\begin{equation}\label{Pobs}
P_{\rm obs}=(U^{2}+Q^{2})^{0.5}
\end{equation}
\begin{equation}
\theta = 0.5 \tan^{-1}(U/Q).
\end{equation}
Since the Stokes parameters statistics is not Gaussian (Wardle \& Kronberg 1974; di Serego Alighieri 1998) the calculated values of linear polarisation have to be corrected for a bias factor. The real polarisation degree $ P $ can be obtained from:
\begin{equation}\label{bias}
P=P_{\rm obs}\sqrt{1-\left(\frac{\sigma_{\rm P}}{P_{\rm obs}}\right)^{2} },
\end{equation}
where $ \sigma_{\rm P} $ is the r.m.s. error on the polarisation degree.
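As a concrete illustration of eqs. (2)--(6), the following Python sketch computes the bias-corrected polarisation degree from hypothetical o-/e-beam fluxes; all numbers are illustrative, not the actual Cen X-4 measurements.

```python
import math

# Hypothetical o-/e-beam fluxes at the HWP angles 0, 22.5, 45, 67.5 deg
# (illustrative numbers only, not the actual Cen X-4 measurements).
fluxes = [(1020.0, 1000.0), (1008.0, 1002.0), (995.0, 1005.0), (998.0, 1004.0)]

def F(fo, fe):
    """Normalised flux difference for one HWP orientation (eq. 4)."""
    return (fo - fe) / (fo + fe)

F_vals = [F(fo, fe) for fo, fe in fluxes]

# Stokes parameters from the full set of HWP angles (eq. 2).
Q = (F_vals[0] - F_vals[2]) / 2.0
U = (F_vals[1] - F_vals[3]) / 2.0

# Raw polarisation degree and angle (eqs. 3 and 5).
P_obs = math.hypot(Q, U)
theta = 0.5 * math.degrees(math.atan2(U, Q))

# Bias correction (eq. 6); sigma_P is the r.m.s. error on P_obs
# (an assumed value here, for illustration).
sigma_P = 0.002
P = P_obs * math.sqrt(1.0 - (sigma_P / P_obs) ** 2) if P_obs > sigma_P else 0.0
```

Note that the bias correction only applies when $P_{\rm obs} > \sigma_{\rm P}$; otherwise the polarisation is consistent with zero.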
\subsection{Averaged Stokes parameters for LP}
\begin{figure}
\begin{center}
\includegraphics[scale=0.25]{Q_U_banda_B_cal.png}
\includegraphics[scale=0.25]{filtroV_medie.png}
\includegraphics[scale=0.25]{filtroR_medie.png}
\includegraphics[scale=0.25]{filtroI_medie.png}
\caption{$ U $ vs. $ Q $ for the averaged images of the four optical filters $ BVRI $, for Cen X-4 (red dots) and for eight reference field stars (black squares). The represented Stokes parameters have been already corrected for the effect of instrumental polarisation, as described in Sec. 2.1.}
\label{U_Q_medie}
\end{center}
\end{figure}
We obtained the averaged (over all observations) Stokes parameters $ Q $ and $ U $ (eq. \ref{stokes}) in all the analysed filters, for Cen X-4 and for a selection of eight isolated, non-saturated reference stars assumed to be unpolarised. For $ Q $ and $ U $ separately, a correction with respect to the average Stokes parameters measured for the reference field stars in each filter was then applied. This correction turns out to be small (at most $0.1\%$), supporting our hypothesis that the reference stars are unpolarised, and is crucial in order to eliminate most of the instrumental contribution to the Stokes parameters of the target. The same correction was applied to the reference stars themselves.
The errors on $ Q $ and $ U $ were found simply by propagating the photometric errors with the uncertainties obtained for the unpolarised field stars averaged Stokes parameters.
As shown in Fig. \ref{U_Q_medie}, the Stokes parameters evaluated for the reference stars and corrected as explained above cluster around 0 in each filter. A different behaviour of the target with respect to this selection of unpolarised stars could be due to a higher interstellar absorption (if its distance is larger than that of the stars), or to a real intrinsic polarisation component. This comparative method has been used for gamma-ray burst afterglows (Covino et al. 1999) and for X-ray binaries (Dubus \& Chaty 2006).
The only filters for which Cen X-4 lies at a $ \gtrsim $3$ \sigma $ level from the weighted mean of the reference stars are the $ V $- and $ R $- bands, whereas in $ B $- and $ I $- bands the source's Stokes parameters are comparable with those of the reference stars.
To evaluate the polarisation degree $ P $, one can use eq. \ref{Pobs} with the correction reported in eq. \ref{bias}. This method is nevertheless not advisable when the r.m.s. error on $ P_{\rm obs} $ is comparable to $ P_{\rm obs} $ itself. In our case, the smallest error bar obtained for Cen X-4 corresponds to $ \sim 50 \% $ of the measurement (except in the $ I $ band, where the S/N ratio is higher). The best way to estimate $ P $ and the polarisation angle $ \theta $ in this case is reported in the next section.
\subsection{The $ S $- parameter}\label{S_paragraph}
The simple calculation proposed in eq. \ref{Pobs} does not account for possible interstellar polarisation or for imperfections of the Wollaston prism. For this reason it is necessary to apply a correction to the Stokes parameters obtained with eq. \ref{stokes}, relying on polarimetric standard stars. Alternatively, it is possible to evaluate, for each HWP angle $ \Phi $ defined as in Sec. \ref{Obs_parag}, a parameter $ S\left(\Phi\right) $ starting from the o- and e-fluxes of the target, $ f^{o}(\Phi) $ and $ f^{e}(\Phi) $, and from the averaged ratio between the o- and e-fluxes of some unpolarised field stars, $ f^{o}_{u}(\Phi) $ and $ f^{e}_{u}(\Phi) $. In particular (di Serego Alighieri 1998; Covino et al. 1999):
\begin{equation}
S(\Phi)=\left(\frac{f^{o}(\Phi)/f^{e}(\Phi)}{\left\langle f^{o}_{u}(\Phi)/f^{e}_{u}(\Phi)\right\rangle }-1\right)/\left(\frac{f^{o}(\Phi)/f^{e}(\Phi)}{\left\langle f^{o}_{u}(\Phi)/f^{e}_{u}(\Phi)\right\rangle }+1\right).
\end{equation}
This parameter can be regarded as the component of the normalised Stokes vector that describes LP along the direction selected by the HWP angle $ \Phi $. The relation between $ S $, $ P $ and $ \theta $ is given by:
\begin{equation}\label{fit_cos}
S(\Phi)= P\cos 2\left( \theta -\Phi\right).
\end{equation}
In this way, one can fit the function $ S(\Phi) $ with eq. \ref{fit_cos} and obtain $ P $, $ \theta $ and their errors from the semi-amplitude of the curve and from the $x$-value corresponding to the curve's first maximum, respectively. With this method the measurement of a polarimetric standard star becomes unnecessary, since the parameter $ S $ is already normalised to the non-polarised reference stars, which all cluster around the same point of the ($Q$,$U$) plane in each filter. In particular, the values obtained in this way are automatically corrected for interstellar and instrumental effects, and do not need any bias correction (eq. \ref{bias}). The cosinusoidal fit of $ S(\Phi) $ for the $ V $ band is reported in Fig. \ref{S_V_fit}.
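Since eq. \ref{fit_cos} can be rewritten as $S(\Phi) = a\cos 2\Phi + b\sin 2\Phi$ with $a = P\cos 2\theta$ and $b = P\sin 2\theta$, the fit is linear in $(a, b)$ and can be done with ordinary least squares. A minimal sketch with synthetic data (assumed $P$ and $\theta$ values, not the measured ones):

```python
import numpy as np

# Synthetic S(Phi) values at the four HWP angles (22.5 deg steps),
# generated from assumed P = 0.004, theta = 160 deg, for illustration.
P_true, theta_true = 0.004, np.radians(160.0)
phi = np.radians(np.array([0.0, 22.5, 45.0, 67.5]))
S = P_true * np.cos(2.0 * (theta_true - phi))

# S(Phi) = a*cos(2*Phi) + b*sin(2*Phi) with a = P*cos(2*theta),
# b = P*sin(2*theta): linear in (a, b), so least squares suffices.
A = np.column_stack([np.cos(2.0 * phi), np.sin(2.0 * phi)])
(a, b), *_ = np.linalg.lstsq(A, S, rcond=None)

P_fit = np.hypot(a, b)                      # semi-amplitude of the cosine
theta_fit = 0.5 * np.arctan2(b, a) % np.pi  # angle of the first maximum
```

In practice the four $S(\Phi_i)$ values carry measurement errors, so a weighted fit (or the contour-plot error estimation used in the paper) is needed to propagate the uncertainties on $P$ and $\theta$.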
\begin{figure}
\centering
\includegraphics[scale=0.25]{S_cen_sola.png}
\caption{Cosinusoidal $ V $-band fit of S$ (\Phi) $ for Cen X-4.}
\label{S_V_fit}
\end{figure}
Adopting this method we derived the polarisation levels of Cen X-4 in the four optical bands. Only in the $ V $ and $ R $ bands does the measured polarisation differ from zero by more than 1$\sigma$.
Since only a polarisation degree at least 3$ \sigma $ above zero would constitute evidence of intrinsic polarisation, we decided to quote the 3$ \sigma $ upper limits in all bands (Tab. \ref{pola}). As reported in Tab. \ref{pola}, the 3$ \sigma $ upper limit for the $ I $ band is tightly constraining, owing to the higher S/N ratio measured with respect to the other bands.
No evidence of a significant wavelength dependence of $P$ is observed, since the polarisation measurements in all bands are consistent with each other within 2$ \sigma $.
\begin{table}
\caption{Values of the $ V $- and $ R $-band detected polarisation degrees $ P $, together with the $ BVRI $-band 3$ \sigma $ upper limits and polarisation angles for Cen X-4. The errors have been evaluated using contour plots in each band. In particular, in order to obtain the 68$\%$ c.l. we used $ \Delta \chi^{2} =2.3 $, as appropriate for fits with two free parameters. }
\label{pola}
\centering
\begin{tabular}{|c c c c|}
\hline\hline
$ B $ & $ V $ & $ R $ & $ I $\\
\hline
\multicolumn{4}{|c|}{P ($\% $)}\\ \hline
- & $ 0.36 \pm 0.18 $ & $ 0.19 \pm 0.16 $ & -\\
\hline
\multicolumn{4}{|c|}{P (3$\sigma$ upper limit)} \\ \hline
$1.46 \%$ & $ 0.90\% $ & $ 0.67\% $ & $ 0.46\% $ \\
\hline
\multicolumn{4}{|c|}{$ \theta $ ($ ^{\circ} $)} \\ \hline
$ 159.41 ^{+49.59}_{-51.42} $ & $ 158.75 ^{+14.25}_{-15.75} $ & $ 167.81 ^{+24.19} _{-25.81} $ & $ 189.87\pm 49.13$\\
\hline
\end{tabular}
\end{table}
Following Serkowski et al. (1975) we were then able to evaluate the maximum expected interstellar contribution to the LP ($ P_{\rm max} $) of Cen X-4. In particular, we used the empirical formula $ P_{\rm max}\leq 3A_{V} $, according to which the maximum contribution to the LP of Cen X-4 due to interstellar effects should remain below the $ 0.9 \% $ level, consistent with our results.
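As a quick check with the stated $A_V = 0.3$ mag, the bound and the Serkowski wavelength dependence can be evaluated as follows (the parameters $K$ and $\lambda_{\rm max}$ are typical assumed values, not fitted to this source):

```python
import math

# Empirical Serkowski bound on interstellar polarisation: P_max <= 3 * A_V.
A_V = 0.3             # interstellar absorption toward Cen X-4 (mag)
P_max = 3.0 * A_V     # maximum interstellar LP, in per cent -> 0.9%

# Serkowski wavelength dependence P(lam) = P_max * exp(-K * ln^2(lmax/lam)),
# with typical assumed values K ~ 1.15 and lambda_max ~ 0.55 um.
K, lam_max = 1.15, 0.55

def p_interstellar(lam_um):
    """Interstellar LP (per cent) at wavelength lam_um (micron)."""
    return P_max * math.exp(-K * math.log(lam_max / lam_um) ** 2)
```

The curve peaks at $\lambda_{\rm max}$ and falls off toward the infrared, so the interstellar contribution in the $J$ band is well below the optical bound.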
\subsection{Search for phase dependent variations of $ P $}
The values reported in Table \ref{pola} were obtained by summing all the images taken in each filter; this increased the S/N ratio of our measurements, but lost all information about possible orbital phase-correlated variations of $ P $, $ Q $ and $ U $. In order to investigate the possible variability of the polarisation along $ P_{\rm orb} $, we analysed the single images, calculating the orbital phases from the precise ephemerides of Casares et al. (2007). Due to the large error bars, however, the data proved inconclusive.
The only deviation from the linear fit to the data in all the analysed filters was found during the first epoch in the $ V $ band (Fig. \ref{V_phase_orb}). At phase 0.1 (i.e., when the companion star contribution is near its minimum) we measured a polarisation degree of $ 1.85\% \pm 0.60\% $, i.e. $ 2.3 \sigma$ from the average.
\begin{figure}
\centering
\includegraphics[scale=0.25]{V_phase_orb_fit.png}
\caption{$ V $-band polarisation curve of Cen X-4 (black squares) and of one field star (red dots), with superimposed the linear fit to the data.}
\label{V_phase_orb}
\end{figure}
However, no similar feature was observed in the other bands, and furthermore this sudden increase in polarisation does not coincide with an increase or decrease of the observed flux. For these reasons we conclude that we observed an effect linked to photon statistics, and that any polarisation variability intrinsic to Cen X-4 is too weak to be detected in our dataset, given the relatively low S/N ratio of our observations.
\section{Infrared polarimetry}
The LMXB Cen X-4 was observed in the IR ($ J $ band, 1.27 $\mu$m) on 24 April 2007 with the TNG telescope at La Palma, equipped with the NICS instrument used in polarimetric mode.
A set of 20 images of the field of 180 s integration each was obtained. A Wollaston prism was inserted in the grism wheel, that split incident radiation into the four polarisation components ($ 0^{\circ} $, $ 45^{\circ} $, $ 90^{\circ} $ and $ 135^{\circ} $ with respect to the telescope's axis), which were re-imaged at different positions along the Y-axis.
Image reduction was carried out by subtracting the sky background from each frame.
Because of a technical problem with the instrument derotator, only 25$ \% $ of the images could be used for our purposes. The selected images were summed to obtain a single image with an exposure time of 900 s.
Flux measurements were performed through aperture photometry with {\tt daophot} for all the objects in the field.
The normalised Stokes parameters $ Q $ and $ U $ have been calculated from\footnote{\url{www.tng.iac.es/instruments/nics/files/pol_obs_v03.pdf}}
\begin{equation}
Q=\frac{f(0^{\circ})-f(90^{\circ})}{f(0^{\circ})+f(90^{\circ})}; \,\,\,\,\, U=\frac{f(45^{\circ})-f(135^{\circ})}{f(45^{\circ})+f(135^{\circ})},
\end{equation}
whereas the polarisation degree $ P $ can be obtained from eq. \ref{Pobs}.
\subsection{Results}
We selected a sample of four isolated field stars assumed to be unpolarised, in a region of the image as near as possible to the target, in order to compare with Cen X-4, and we calculated $ Q $ and $ U $ for all of them. As for the optical data, we plotted the Stokes parameters in the $ Q $--$ U $ plane and observed that Cen X-4 is comparable to the field stars, which all cluster around a common value corresponding to the possible instrumental polarisation contribution. In Fig. \ref{Q_U_J} we report the $ Q $ vs. $ U $ plot for Cen X-4 and the four reference stars, with all the parameters corrected for the weighted mean of the field-star Stokes parameters.
Due to the large error bars measured for both the target and the field stars, we decided to directly evaluate a 3$ \sigma $ upper limit to the $ J $-band polarisation. We performed a Monte Carlo simulation starting from the values of $ Q $ and $ U $ of Cen X-4, corrected for the average $ Q $ and $ U $ measured for the field stars, and obtained an upper limit to $ P $ of $\sim 6 \% $ within $ 3\sigma $ (the upper limit is $\sim 4 \% $ at the $ 1\sigma $ confidence level).
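A Monte Carlo upper limit of this kind can be sketched as follows; the $Q$, $U$ values and their errors below are illustrative, not the actual $J$-band measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Measured Stokes parameters and their Gaussian errors (illustrative
# values consistent with polarisation close to zero).
Q, sig_Q = 0.005, 0.015
U, sig_U = -0.003, 0.015

# Draw (Q, U) pairs from Gaussians and build the distribution of P.
n = 100_000
P_samples = np.hypot(rng.normal(Q, sig_Q, n), rng.normal(U, sig_U, n))

# Upper limits at 1-sigma and 3-sigma confidence from the percentiles.
P_1sigma = np.percentile(P_samples, 68.27)
P_3sigma = np.percentile(P_samples, 99.73)
```

Because $P = \sqrt{Q^2+U^2}$ is positive-definite, its distribution is skewed even when the true polarisation is zero, which is why the percentile-based upper limit is preferable to a naive Gaussian error on $P$.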
\begin{figure}
\centering
\includegraphics[scale=0.25]{new_J.png}
\caption{U vs. Q for the $ J $-band observations, for Cen X-4 (red dot) and for four isolated reference stars (black squares). All the Stokes parameters values have been corrected for the weighted mean of $ Q $ and $ U $ of the field stars.}
\label{Q_U_J}
\end{figure}
\section{Discussion}
\subsection{Jets in quiescence?}
As stated in Russell et al. (2011), a polarimetric signature of the emission of a relativistic particle jet in LMXBs is theoretically detectable both in the infrared and in the optical. In black hole and neutron star X-ray binaries, the linear polarisation degree of the emitted radiation is expected not to exceed a few per cent, with evidence of variability on short time-scales. This suggests that a tangled and turbulent (no more than partially ordered) magnetic field is generally present at the base of the jet (the only detection of a highly ordered magnetic field in an X-ray binary is for the persistent system Cyg X-1; Russell \& Shahbaz 2014). Something similar seems to happen in compact jets from active galactic nuclei (AGN), in particular in gigahertz peaked-spectrum sources, where the low level of polarisation ($1-7 \%$) measured in the optically thin regime is interpreted as due to the presence of a tangled magnetic field (O'Dea 1998). The flat-spectrum radio jets of AGN also usually possess a low polarisation degree ($1-5 \%$), similarly to the radio jets of X-ray binaries, suggesting tangled and helical magnetic fields (Helmboldt et al. 2007; Perlman et al. 2011).
For Cen X-4, the measured degree of linear polarisation averaged over all observations is $\lesssim 1.5\% $ in the optical $ BVRI $ filters, and $\lesssim 6\% $ in the $ J $ band within 3$ \sigma $ (Table \ref{pola}). Under the hypothesis that neutron star X-ray binaries behave similarly to black hole X-ray binaries in terms of jet emission, these upper limits are consistent with the possible emission of a jet, with the low polarisation degrees possibly linked to the tangled, non-ordered structure of the magnetic field in a region close to where the jet is launched. However, no observational constraints exist to date on how tangled the magnetic fields of neutron star jets are.
A possible way to search for the presence of a relativistic particle jet is to examine the spectral energy distribution (SED) of the system, as stated in Fender (2001a). In case of jet emission, an excess is expected in the SED in the range of frequencies where the synchrotron emission gives its maximum contribution (NIR). The emission in this case would be characterised by an optically thick synchrotron radio spectrum (corresponding to $ \alpha\geq 0 $, where $ \alpha $ is the spectral index and the flux density $ F_{\nu} $ is proportional to $ \nu^{\alpha} $), which should switch to an optically thin synchrotron spectrum at shorter wavelengths (i.e. $ \alpha \simeq -0.6 $). The break frequency is expected to fall in the mid-infrared (Fender 2001b; Corbel \& Fender 2002; Gallo et al. 2007; Migliari et al. 2007; Pe'er \& Casella 2009; Migliari et al. 2010; Rahoui et al. 2011; Gandhi et al. 2011).
We thus built the SED of the system (Fig. \ref{sed}), starting from IR data taken from the WISE All Sky and 2MASS catalogues\footnote{\url{irsa.ipac.caltech.edu/workspace/TMP_qLHgEi_32284/Gator/irsa/32533/tbview.html}} and from the optical $ V $- and $ B $-band data obtained from Cackett et al. (2013) during quiescence for Cen X-4. The fluxes values considered for the SED are reported in Tab. \ref{fluxes_tab}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{fit_sed_star_r08.png}
\caption{Spectral energy distribution (SED) of Cen X-4 built starting from IR archival data and from the optical fluxes obtained by Cackett et al. (2013). The three datasets are not contemporary. Superimposed, the fit of the data with a black body of an irradiated star.}
\label{sed}
\end{figure}
\begin{table}
\caption{$ IR $ and optical dereddened fluxes obtained from the WISE and 2MASS catalogues and from Cackett et al. (2013), used to build the multi-wavelength SED of Cen X-4 (Fig. \ref{sed}). }
\label{fluxes_tab}
\centering
\begin{tabular}{|c c|}
\hline\hline
Band & Flux ($\rm erg \cdot\rm cm^{-2}\cdot s^{-1}$)\\
\hline
$ W_{\rm 2} $ WISE ($ \lambda = 4.6\, \mu \rm m $) & $ (1.53 \pm 0.12)\times 10^{-13} $ \\
$ W_{\rm 1} $ WISE ($ \lambda = 3.4\, \mu \rm m $) & $ (3.79 \pm 0.13)\times 10^{-13} $ \\
\hline
$ K $ 2MASS ($ \lambda = 2.16\, \mu \rm m $) & $ (1.30 \pm 0.10)\times 10^{-12} $ \\
$ H $ 2MASS ($ \lambda = 1.66\, \mu \rm m $) & $ (1.85 \pm 0.13)\times 10^{-12} $ \\
$ J $ 2MASS ($ \lambda = 1.23\, \mu \rm m $) & $ (2.40 \pm 0.13)\times 10^{-12} $ \\
\hline
$ V $ (Cackett et al. 2013, $ \lambda = 5468\, \AA $) & $ (1.41 \pm 0.14)\times 10^{-12} $ \\
$ B $ (Cackett et al. 2013, $ \lambda = 4392\, \AA $) & $ (1.05 \pm 0.09)\times 10^{-12} $ \\
\hline
\end{tabular}
\end{table}
As shown in Fig. \ref{sed}, the SED is consistent with a single black-body contribution, likely due to the irradiated companion star (a similar result was obtained through the study of the correlations between UV and X-ray variability; Bernardini et al. 2013). No IR excess is observed. In particular, a power-law fit ($ I_{\nu}\propto \nu^{\alpha} $) of the lower-frequency data yields an index $\alpha$ of $1.81\pm 0.14$, which is opposite to what is expected for an IR excess ($ \alpha\leq 0 $) and is instead typical of the low-frequency black-body approximation ($ I_{\nu}\propto \nu^{2} $). Moreover, fitting the SED with a black body of fixed radius (corresponding to the Roche lobe dimension, $ 0.6\, R_{\odot} $) yields a black-body temperature of $4050 \pm 30$ K, consistent with a K-type main-sequence star, and an irradiation luminosity of $ 4.5\,(\pm 5.5)\times 10^{32} \,\rm erg\,s^{-1}$, consistent with the X-ray luminosity reported in Campana et al. (2004).
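The spectral index $\alpha$ can be estimated as the slope of a straight-line fit in log-log space. A sketch with synthetic Rayleigh-Jeans-like flux densities ($\alpha = 2$); the frequencies and normalisation are illustrative, not the actual WISE/2MASS data:

```python
import numpy as np

# Synthetic flux densities F_nu ∝ nu^alpha with alpha = 2 (the low-frequency
# black-body approximation), at three illustrative IR frequencies.
nu = np.array([6.5e13, 8.8e13, 1.4e14])   # Hz
F_nu = 1.0e-40 * nu**2                    # arbitrary normalisation

# alpha is the slope of log10(F_nu) vs log10(nu): a linear fit suffices.
alpha, log_norm = np.polyfit(np.log10(nu), np.log10(F_nu), 1)

# alpha >= 0 over a broad range, breaking to alpha ~ -0.6 at shorter
# wavelengths, would instead indicate optically thick jet synchrotron.
```

With real data, the photometric errors should be propagated into the fit (e.g. via weights), which is how the quoted uncertainty on $\alpha$ is obtained.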
The non-detection of an IR excess cannot, however, totally exclude the emission of a jet, which could theoretically exist in Cen X-4 in quiescence but might only be detectable at radio frequencies.
If we hypothesise that a jet is indeed emitted by Cen X-4, we should expect at most a polarisation degree of a few per cent in the optical (for instance 5$ \% $), typical of X-ray binary jets with tangled magnetic fields. Under this assumption, we considered the most constraining 3$\sigma $ upper limit obtained in our analysis ($P < 0.5 \% $ in the $ I $ band) and evaluated the maximum possible contribution of the relativistic jet to the total $ I $-band flux to be at most 10$ \% $. This upper limit on the jet emission is fairly constraining and fully consistent with the blue IR SED shown in Fig. \ref{sed}.
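The dilution argument behind this estimate is simple: if only the jet component is polarised, the observed polarisation is $P_{\rm obs} = f_{\rm jet}\,P_{\rm jet}$, where $f_{\rm jet}$ is the jet's fractional contribution to the total flux, so the $I$-band limit directly bounds $f_{\rm jet}$:

```python
# Jet flux fraction bound from polarisation dilution:
# P_obs = f_jet * P_jet  =>  f_jet <= P_limit / P_jet.
P_limit = 0.46   # I-band 3-sigma upper limit on P, per cent (Table 1)
P_jet = 5.0      # assumed intrinsic jet polarisation, per cent

f_jet_max = P_limit / P_jet   # maximum jet contribution to the I-band flux
```

With the tabulated $I$-band limit this gives $f_{\rm jet} \lesssim 0.09$, i.e. the $\lesssim 10\%$ quoted in the text.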
Relativistic particle jets have been detected at IR and radio frequencies in some persistent neutron star X-ray binaries (e.g. Migliari et al. 2006), but never during quiescence. In particular, Migliari et al. (2006) predicted a steeper radio/X-ray correlation in neutron star X-ray binaries compared to black hole X-ray binaries, which would imply the emission of very faint jets from quiescent neutron stars, in accordance with our result.
\subsection{Thomson scattering}
As explained in Sec. 1, scattering of radiation off electrons in the ionised disc can result in linear polarisation. The polarisation degree $ P $ deriving from this mechanism is expected to increase with decreasing wavelength and not to exceed a few per cent (Brown et al. 1978; Dolan 1984). In our case, the measured average polarisation degrees in the four filters never exceed the $ 1\% $ level, and are therefore consistent with the low polarisation degree expected for Thomson scattering off the free electrons in the disc. Unfortunately, the low S/N ratio of our measurements does not allow us to verify the expected increase of $ P $ with decreasing wavelength (Fig. \ref{sed_P}).
\begin{figure}
\centering
\includegraphics[scale=0.25]{upper.png}
\caption{Trend of the polarisation degree $ 3\sigma $ upper limits with wavelength for the four optical filters analysed, $ BVRI $.}
\label{sed_P}
\end{figure}
Furthermore, scattering should also produce a linear polarisation degree that varies on time-scales of the orbital period (Kemp \& Barbour 1983; Dolan \& Tapia 1989; Gliozzi et al. 1998).
In our case, an almost constant trend of the polarisation degrees in the four analysed filters has been observed. The only epoch that possesses a polarisation degree significantly higher than the average is in the $ V $-band, at phase 0.1. In this epoch, the companion star's contribution is at its minimum, i.e. the accretion disc should be mainly responsible for the observed variation. This fact is particularly interesting in light of the Thomson scattering hypothesis. Nevertheless, the absence of similar behaviour in adjacent bands (and epochs), which on the contrary showed an almost constant $ P $, makes this observation less compelling.
We conclude that if there was a variation of $ P $ with the system's orbital phase, we were not able to detect it, due to the large error bars, often comparable to the semi-amplitudes of the possible oscillations.
\section{Conclusions}
In this work, we presented the results of an optical ($ BVRI $) and infrared ($ J $) polarimetric study on the LMXB Cen X-4 during quiescence, based on observations obtained in 2008 and 2007, respectively.
We were searching for an intrinsic component of linear polarisation in the optical and IR for this source.
We obtained a low polarisation degree $3\sigma$ upper limit in each optical filter, with the highest value in the $ B $-band ($P_{B}\leq 1.46 \% $) and the lowest in the $ I $-band, where $ P $ is consistent with 0 within $ 1\sigma $. A $3\sigma$ upper limit of $ 6 \% $ on the linear polarisation is obtained in the $ J $-band.
We built the SED of Cen X-4 from literature data, observing that it can be fitted by the black body of the irradiated companion star alone. Assuming a typical expected polarisation degree of at most $5\%$ for a NS X-ray binary jet with a tangled magnetic field, our $ I $-band upper limit on the linear polarisation implies that the contribution from a possible jet should be relatively low ($ \lesssim 10\% $) in terms of flux. This is in agreement with the non-detection of an infrared excess in the SED of Cen X-4.
No variations correlated with the orbital period of the system have been detected and, due to the large error bars caused by the low S/N ratio, it was not possible to draw conclusions on a possible increasing trend of $ P $ with decreasing wavelength, which would support the possibility of Thomson scattering of the radiation off accretion disc particles.
Observations with larger diameter telescopes (8 m class) would provide smaller uncertainties for the Stokes parameters (in particular in the $ B $-band, where Cen X-4 is fainter). This will be crucial to investigate possible orbital-phase correlated variations of $ P $ and its wavelength trend, in order to assess the origin of the polarisation of Cen X-4.
\begin{acknowledgements}
We thank the anonymous referee for useful comments and suggestions.
MCB acknowledges P. M. Pizzochero for supportive
discussions and the INAF-Osservatorio Astronomico di Brera for kind hospitality during her master thesis. PDA acknowledges L. Monaco, E. Matamoros and A. Ederoclite for
their support during his observing run in La Silla. PDA acknowledges V. Lorenzi and
the TNG staff for their support in carrying out the NICS observations in service
mode.
\end{acknowledgements}
This is my first post here on Persona Paper. I've been hearing some good things about the site and decided to check it out. I see many friends here from another site so it seems that they are enjoying the relaxed and calm atmosphere in this neck of the woods. What's not to like about that? My main work is research, teaching and writing. Most writing and side things that I do online are for sanity and mental relief. I think reading and writing are work and play for me. However, I am feeling the need to be outside and moving more since the weather is warmer and overall, just better and more pleasant. I love fun, food, good conversation and good company. Favorite food? Hmmmmm, why do I have to pick just one? Favorite drink is a little easier in warm weather: a refreshing lemonade or mint-filled mojito! Hopefully, this post makes the cut and I hope to be joining this community soon!
I missed you much! Glad to see you here.
I am also a new member here. I want to work here with all. So be updated.
I think it made the cut and I hope to see more of your posts!
I have seen many new members coming from the other site in just the last week. The membership is growing. Welcome!
Another familiar face from Bubblews.
Hey Lady, Welcome to Persona - Glad to see you here too!
Hi, nice to see you here. So glad to see another familiar face.
I am so happy to see that you're on this site. I can't go back to that OTHER site. So disappointed with them. I hope you're doing well.
It seems like it's the beginning of the end over there, unfortunately!
Nice seeing you over here! You will recognize many people from other sites. Hope you enjoy it.
Good to see you here and I hope that you enjoy the site.
No new post from you? Been a while!
Hi, it is nice to meet you on here. I do hope you return to write more and interact with the posts.
By Chris Prener
Family and Baseball: Daniel Murphy and Paternity Leave in Major League Sports
The New York Mets' infielder Daniel Murphy
Julie's post raises some interesting points about what it means to be a professional athlete of any stripe. Another recent incident, this time in Major League Baseball (MLB), serves as both a teachable moment and a segue into a broader debate about parental leave and what it means to be a "good employee."
Daniel Murphy, an infielder for the New York Mets, was recently at the center of a very public debate over taking parental leave while in the midst of a season. Major League Baseball has a pretty long season – players typically report to Spring Training sometime in February, the regular season runs from April (or the very end of March depending on the year) until September, with post-season play taking place in October. This means that players typically have a narrow window for off-season activities. Younger players may play in Winter Leagues in either Arizona, the Caribbean, or in Australia while older players spend the off season getting surgical repairs and readying themselves physically for another grueling season, which features 162 regular season games. Players therefore do not get an enormous amount of time off, and they have a lot of baseball work to do during the typical "off" season as well as pressure to make up for family time lost to the long season.
In his book Moneyball, Michael Lewis describes the subjective evaluations of MLB scouts – typically former players who are employed to identify new talent for teams from among high school, college, and amateur players. A player's girlfriend is a metric Lewis describes; essentially, if a player has an attractive girlfriend, they have confidence. This confidence is important for how they play the game, or so the story goes. Such a hyper-masculine culture is probably not surprising to most readers familiar with professional sports.
What it means for baseball, however, is that there is an ironic tension between the family arrangements that make you appealing to scouts and the "inconvenient" family realities for many athletes. To have a heterosexual partner is to portray a necessary quality that some scouts may use to evaluate a player, making off-the-field relationships just another category of assessment. But these relationships, as Daniel Murphy learned, are meant to stay off the field.
Murphy's wife went into labor with their first child right at the beginning of the season. Murphy, like all MLB players, has 3 days of paternity leave available to him for just such an occasion, and used his leave during the first 2 games of the Mets' season. In fact, MLB players are the only professional athletes in the United States with contractual protections for paternity leave. When players go on paternity leave, teams are allowed to add a replacement player to their roster – meaning that teams are not forced to play shorthanded as they would have in the past if a player wanted to attend his child's birth.
Almost immediately after the news of Murphy's parental (and the associated roster changes for the Mets) leave broke, sports talk radio was filled with condemnation. Here are two of the more telling responses:
Mike Francesa (a radio host and television commentator) – "I don't know why you need three days off. You're a Major League Baseball player. You can hire a nurse to take care of the baby if your wife needs help…what are you gonna do, sit there and look at your wife in the hospital bed for two days?"
Boomer Esiason (a radio host and a former NFL player) – "Quite frankly, I would've said, 'C-section before the season starts. I need to be at opening day. I'm sorry, this is what makes our money. This is how we're going to live our life. This is going to give our child every opportunity to be a success in life. I'll be able to afford any college I want to send my kid to because I'm a baseball player.'"
There's quite a bit of privilege and ignorance in these statements that combine machismo with a particularly rude statement by Esiason that advocated major surgery as a viable alternative to inconveniencing a player's baseball club for 3 days. After the predictable outrage, Esiason was forced to apologize. Just a few weeks later, the Baltimore Orioles' Chris Davis took his own 3-day paternity leave and received no criticism in the media for doing so.
On one hand, we should be applauding MLB for being the only American professional sports league to protect the rights of players when it comes to attending their children's birth. As both Murphy and Davis found out, it means a lot to be there for your partner and your child.
On the other hand, however, we have all of those commentators and fans who don't want to see players taking time – even three days – away from the game. After all, being a good employee means putting your team first, giving 110%, and putting your family and health on the back burner. The 3-day leave is worth emphasis, given that players whose families experience a serious illness or death may take up to 7 days, and most injured players are given at least 15 days to regain their health. The 3-day leave is particularly difficult for players who may not have moved their family to the city where they play. Murphy, for example, had to use part of his 3-day trip just to travel to where his wife was delivering their child. Another complication is that most hospital stays for vaginal deliveries in the U.S. average 2 days in length, with longer stays for cesarean deliveries (on average nearly 4 days). This means players may not be able to be present for the entire birth, or may have to choose between being present for the birth and being there for the baby's first night home.
This is part of what is a distinctly American take on employment and children – we do not give parents legally or – in many cases – contractually protected time off. Ironically, male MLB players have more of a right to paternity leave than many parents, whose best protection is the right to 12 weeks of unpaid leave under the federal Family and Medical Leave Act (and eligibility only kicks in after someone has worked at least 1,250 hours at his or her job). Even when there are protections, particularly for fathers, there is a cultural expectation that an ideal worker puts work over family.
Julie asks whether professional soccer might be the league to take the lead in protecting workplace leave for athletes and workers. Daniel Murphy and MLB are doing just that, but it isn't enough to protect the rights of players like Murphy and Davis if the culture they are a part of is so viciously critical of their choice to take advantage of those rights.
parental leave
The Cub Den said: June 23, 20148:21 pm
In Canada, we can take up to a year for Parental Leave. I think an athlete should have that same right.
http://mlblogscubden.wordpress.com/
import fbchat
session = fbchat.Session.login("<email>", "<password>")
client = fbchat.Client(session=session)
# Fetches a list of all users you're currently chatting with, as `User` objects
users = client.fetch_all_users()
print("users' IDs: {}".format([user.id for user in users]))
print("users' names: {}".format([user.name for user in users]))
# If we have a user id, we can use `fetch_user_info` to fetch a `User` object
user = client.fetch_user_info("<user id>")["<user id>"]
# We can also query multiple users together, which returns a mapping of ids to `User` objects
users = client.fetch_user_info("<1st user id>", "<2nd user id>", "<3rd user id>")
print("user's name: {}".format(user.name))
print("users' names: {}".format([users[k].name for k in users]))
# `search_for_users` searches for the user and gives us a list of the results,
# and then we just take the first one, aka. the most likely one:
user = client.search_for_users("<name of user>")[0]
print("user ID: {}".format(user.id))
print("user's name: {}".format(user.name))
print("user's photo: {}".format(user.photo))
print("Is user client's friend: {}".format(user.is_friend))
# Fetches a list of the 20 top threads you're currently chatting with
threads = client.fetch_thread_list()
# Fetches the next 10 threads
threads += client.fetch_thread_list(offset=20, limit=10)
print("Threads: {}".format(threads))
# If we have a thread id, we can use `fetch_thread_info` to fetch a `Thread` object
thread = client.fetch_thread_info("<thread id>")["<thread id>"]
print("thread's name: {}".format(thread.name))
# Gets the last 10 messages sent to the thread
messages = thread.fetch_messages(limit=10)
# Since the message come in reversed order, reverse them
messages.reverse()
# Prints the content of all the messages
for message in messages:
print(message.text)
# `search_for_threads` works like `search_for_users`, but gives us a list of threads instead
thread = client.search_for_threads("<name of thread>")[0]
print("thread's name: {}".format(thread.name))
# Here should be an example of `getUnread`
# Print image url for up to 20 last images from thread.
images = list(thread.fetch_images(limit=20))
for image in images:
if isinstance(image, fbchat.ImageAttachment):
url = client.fetch_image_url(image.id)
print(url)
These are great seats-behind the Mariner dugout. Looking for a third partner to share full season tickets. This means 27 games per partner or a combination of games for multiple buyers/partners. The seats are on an aisle and highly desirable. Partners share the tickets equitably-#1, #2, #3. (counting off 1,2,3) The tickets available are #3 this year. We rotate the order each year. Playoff tickets would be shared equitably as well.
\section{Introduction}
\label{sec:intro}
In today's information landscape, fake news is used to manipulate public opinion \cite{zhou2018fake} by reshaping readers' opinions on certain issues. In order to achieve this goal, authors of fake news narratives need to capture the interest of the reader. Thus, they put effort into making their news articles look more objective and realistic. This is usually done by adding misleading terms or events that can have a negative or positive impact on the readers' emotions.
False information in short texts, e.g., fake claims or misleading headlines, might be less harmful than full news articles. Such texts may contain eye-catching terms that aim to manipulate the readers' emotions \cite{chakraborty2016stop}. In many cases, identifying this kind of exaggeration in short statements can unmask the fabrication. In fake news articles, on the other hand, the authors exploit the length of the news to conceal their fabricated story. This fact exposes readers to emotional manipulation while reading longer texts that contain several imprecise or fabricated plots. The flow of information has been investigated for different tasks: \newcite{reagan2016emotional} studied the emotional arcs in stories in order to understand complex emotional trajectories; \newcite{maharjan2018letting} model the flow of emotions over a book and quantify its usefulness for predicting success in books; \newcite{kar2018folksonomication} explore the problem of creating tags for movies from plot synopses using emotions.
Unlike previous works \cite{rashkin2017truth,shu2018fakenewsnet,castelo2019topic,ghanem2020emotional} that discarded the chronological order of events in news articles, in this work we propose a model that takes into account the affective changes in texts to detect fake news. We hypothesize that fake news has a different distribution of affective information across the text compared to real news, e.g. more fear emotion in the first part of the article or more overall offensive terms, etc. Therefore, modeling the flow of such information may help discriminate fake from real news. Our model consists of two main sub-modules, topic-based and affective information detection. We combine these two sub-modules since a news article's topic may have a correlation with its affective information. For example, a fake news article about Islam or Black people is likely to provoke fear and express negative sentiment, while another fake news article that is in favor of a particular politician might try to evoke more positive emotions and also express some exaggerations. \\
\noindent
The contributions of our work are as follows:
\begin{itemize}[leftmargin=4mm]
\item We design a model that detects fake news articles by taking into account the flow of affective information\footnote{Available at \href{https://github.com/bilalghanem/fake_flow}{https://github.com/bilalghanem/fake\_flow}}.
\item Extensive experiments on four standard datasets demonstrate the effectiveness of our model over state-of-the-art alternatives.
\item We build a novel fake news dataset, called MultiSourceFake, that is collected from a large set of websites and annotated on the basis of the joint agreement of a set of news sources.
\end{itemize}
\section{Related Work}
Previous work on fake news detection is mainly divided into two main lines, namely with a focus on social media \cite{zubiaga2015towards,aker2017simple,ghanem2019upv} or online news articles \cite{tausczik2010psychological,horne2017just,rashkin2017truth,barron2019proppy}. In this work we focus on the latter one. Fact-checking \cite{karadzhov2017fully, zlatkova2019fact, shu2019defend} is another closely related research topic. However, fact-checking targets only short texts (that is, claims) and focuses on using external resources (e.g. Web, knowledge sources) to verify the factuality of the news.
The focus in previous work on fake news detection is mainly on proposing new feature sets. \newcite{horne2017just} present a set of content-based features, including readability (number of unique words, SMOG readability measure, etc.), stylistic (frequency of part-of-speech tags, number of stop words, etc.) and psycholinguistic features (i.e., several categories from the LIWC dictionary \cite{tausczik2010psychological}). When these features are fed into a Support Vector Machine (SVM) classifier and applied, for instance, to the task of distinguishing satire from real news, they obtain high accuracies.
Using the same features for the task of fake news detection, however, results in somewhat lower scores. \newcite{perez2018automatic} propose a model (FakeNewsDetector) that uses a feature set consisting of unigrams and bigrams, psycholinguistic, readability, punctuation and dependency-based syntactic features, and they evaluate the performance of their model in a cross-domain experiment. \newcite{rashkin2017truth} use a model based on ngram features with a Max-Entropy classifier and apply it to a dataset with different types of fake news articles (e.g., satire, hoax, propaganda, etc.). Similar to the previous work, the authors evaluate their system's performance on in-domain and out-of-domain test sets, respectively.
News, and in particular fake news, are dynamic in nature and change constantly. In order to approach the dynamic nature of news, \newcite{castelo2019topic} propose a topic-agnostic model (TopicAgnostic) that is based on morphological (count of part-of-speech tags), psycholinguistic (personal concerns, affection, and perception categories from the LIWC dictionary), readability (Gunning Fog metric, etc.) and Web-Markup features to capture patterns of the Web pages' layout (frequency of advertisements, presence of an author name, etc.). All of the morphological, psycholinguistic and readability features in the TopicAgnostic model were extracted from headlines and texts of the news articles. The approach obtains a better performance than FakeNewsDetector on three different datasets using a SVM classifier. FakeNewsTracker \cite{shu2019fakenewstracker} is a deep neural network-based model that consists of two branches: one encodes news article texts and the other encodes social media engagements (e.g., tweets and their replies). A similar model called Emotionally Infused Network (EIN) is proposed in \newcite{ghanem2020emotional}.
EIN encodes the text of the article and their affective content, based on several dictionaries, and then combines the two vector representations.
The authors evaluate their model on a multi-class false information dataset and show the effectiveness of using emotion features extracted from the text.
Despite the large variety of features and models that have been explored in previous work, none of these works considers the sequence of affective information in text; instead, they feed the entire news articles as one segment into their models. In contrast, the aim of our work is to evaluate this source of information, using a neural architecture.
\section{The FakeFlow Model}
Given an input document, the FakeFlow model first divides it into $N$ segments. Then it uses both word embeddings and other affective features such as \textit{emotions, hyperbolic words}, etc., in a way that captures the flow of emotions in the document. The model learns to pay attention to the flow of affective information throughout the document, in order to detect whether it is fake or real.
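The segmentation step can be sketched as follows; this is an illustrative implementation under our own assumption of an equal-size word split, not the authors' released code:

```python
# Illustrative sketch (our assumption, not the authors' code): split an
# article into at most n_segments word chunks of roughly equal size, as
# FakeFlow expects a fixed number of segments per document.
def split_into_segments(text, n_segments=10):
    words = text.split()
    size = max(1, -(-len(words) // n_segments))  # ceiling division
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]
```

Short articles may yield fewer than `n_segments` chunks, so some padding strategy would be needed before batching.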
Figure \ref{fig:fakeness_flow} shows the architecture of the FakeFlow model. The neural architecture has two main modules: The first module uses a Convolutional Neural Network (CNN) to extract topic-based information from articles (left branch). The second module models the flow of the affective information within the articles via Bidirectional Gated Recurrent Units (Bi-GRUs) (right branch).
\begin{figure}
\centering
\includegraphics[width=7.6cm]{figures/architicture.png}
\caption{The architecture of the FakeFlow model.}
\label{fig:fakeness_flow}
\end{figure}
\subsection{Topic-based Information}
Given a segment $n \in N$ of words, the model first embeds words to vectors through an embedding matrix. Then it uses a CNN that applies convolution processes and max pooling to get an abstractive representation of the input segment. This representation highlights important words, in which the topic information of the segment is summarized. Then it applies a fully connected layer on the output segments to get a smaller representation ($v_{\mathit{topic}}$) for later concatenation with the representation of affective information:
\begin{equation*}
v_{\mathit{topic}} = f(W_a \: cnn_v + b_a)
\end{equation*}
where $W_a$ and $b_a$ are the corresponding weight matrix and bias terms, and $f$ is an activation function such as ReLU, tanh, etc.
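As a toy illustration of this branch (a minimal NumPy sketch; the shapes and the single convolution window are our assumptions, not the paper's hyperparameters), convolution, max pooling, and the fully connected projection can be written as:

```python
import numpy as np

# Toy sketch of the topic branch: 1-D convolution over a segment's word
# embeddings, max pooling over positions, then a ReLU fully connected layer,
# i.e. v_topic = f(W_a cnn_v + b_a). Shapes here are illustrative only.
def topic_vector(segment_emb, W_conv, W_a, b_a):
    k = W_conv.shape[0]  # convolution window size
    conv = np.stack([
        np.tensordot(segment_emb[i:i + k], W_conv, axes=([0, 1], [0, 1]))
        for i in range(segment_emb.shape[0] - k + 1)
    ])                                   # (positions, n_filters)
    pooled = conv.max(axis=0)            # max pooling -> (n_filters,)
    return np.maximum(0.0, pooled @ W_a + b_a)  # ReLU activation
```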
Key to FakeFlow is its ability to capture the relevance of the affective information with respect to the topics. For this, we concatenate the topic summarized vector $v_{\mathit{topic}}$ with the representation vector $v_{\mathit{affect}}$, aimed at capturing the affective information extracted from each segment (Section \ref{sbsection:flow_info}).
\begin{equation*}
v_{concat} = v_{\mathit{topic}} \oplus v_{\mathit{affect}}
\end{equation*}
To merge the different representations and capture their joint interaction in each segment, the model processes the produced concatenated vector $v_{concat}$ with another fully connected layer:
\begin{equation*}
v_{fc} = f(W_c \: v_{concat} + b_c)
\end{equation*}
To create an attention-focused representation of the segments that highlights important ones, and to provide the model with the ability to weight segments differently according to the similarity of neighboring segments, the model applies a context-aware self-attention mechanism \cite{zheng2018opentag} on $v_{fc}$. This is a crucial step, as the importance of a segment at timestep $t$ is related to the other segments, since they share the same context in the news article. Moreover, applying the attention layer can help us understand which features are most relevant by showing which words the network attends to during learning. The output of the attention layer is an attention matrix $l_{t}$ with scores for each token at each timestep.
\subsection{Affective Flow of Information}
\label{sbsection:flow_info}
To model the affective information flow in the news articles, we choose the following lexical features, under the assumption that they have a different distribution across the articles' segments. We use a term frequency representation weighted by the articles' length to extract the following features from each segment $n$:
\begin{itemize}[leftmargin=4mm]
\item \textit{Emotions}: We use emotions as features to detect their change among articles' segments. For that we use the NRC emotions lexicon \cite{mohammad2010emotions} that contains $\sim$14K words labeled using the eight Plutchik's emotions (\textit{8 Features}).
\item \textit{Sentiment}: We extract the sentiment from the text, \textit{positive} and \textit{negative}, again using the NRC lexicon \cite{mohammad2010emotions} (\textit{2 Features}).
\item \textit{Morality}: We consider cue words from the Moral Foundations Dictionary\footnote{\href{https://moralfoundations.org/other-materials/}{https://moralfoundations.org/other-materials/}} \cite{graham2009liberals} where words are assigned to one (or more) of the following categories: \textit{care, harm, fairness, unfairness (cheating), loyalty, betrayal, authority, subversion, sanctity} and \textit{degradation} (\textit{10 Features}).
\item \textit{Imageability}: We use a list of words rated by their degree of abstractness and imageability\footnote{\href{https://github.com/ytsvetko/metaphor/tree/master/resources/imageability}{https://github.com/ytsvetko/metaphor/tree/master/\\resources/imageability}}. These words have been extracted from the MRC psycholinguistic database \cite{wilson1988mrc} and then using a supervised learning algorithm, the words have been annotated by the degrees of abstractness and imageability. The list contains 4,295 and 1,156 words rated by their degree of abstractness and imageability, respectively (\textit{2 Features}).
\item \textit{Hyperbolic}: We use a list of $\sim$350 hyperbolic words \cite{chakraborty2016stop}, i.e., words with high positive or negative sentiment (e.g., terrifying, breathtakingly, soul-stirring, etc.). The authors extracted these eye-catching words from clickbaits news headlines (\textit{1 Feature}).
\end{itemize}
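A minimal sketch of this extraction step (our illustrative reading of "term frequency weighted by the article's length"; the lexicon contents below are placeholders for the real dictionaries):

```python
# Illustrative sketch: per-segment lexicon feature vector, with raw term
# frequencies weighted by the length of the whole article. The `lexicons`
# dict stands in for the NRC emotion/sentiment, morality, imageability and
# hyperbolic word lists described above.
def segment_features(segment_words, article_len, lexicons):
    feats = []
    for name in sorted(lexicons):  # fixed feature order across segments
        hits = sum(1 for w in segment_words if w.lower() in lexicons[name])
        feats.append(hits / article_len)
    return feats
```

Stacking these 23-dimensional vectors for all segments yields the per-article input sequence for the Bi-GRU described next.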
To model the flow of the above features, we represent each segment of an article by a vector $v_{\mathit{affect}}$ capturing all 23 features listed above. Then we feed the document's vectors to a Bi-GRU network to summarize the contextual flow of the features from both directions\footnote{During prototyping, GRU produced better overall results than LSTM.} to obtain $v_{\mathit{flow}}$.
Given the segments' flow representation ($v_{\mathit{flow}}$) of an article and their relevance to the topics ($l_t$), FakeFlow applies a dot product operation and then averages the output matrix across the segments to get a compact representation $v_{\mathit{compact}}$,
which is then fed into a fully connected layer:
\begin{equation*}
v_{\mathit{final}} = f(W_d \: v_{\mathit{compact}} + b_d)
\end{equation*}
Finally, to generate the overall factuality label of an article, a softmax layer is applied to the output of the fully connected layer.
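The pooling that yields $v_{\mathit{compact}}$ can be illustrated with a toy NumPy snippet; treating the attention output as a single score per segment is our simplification of the attention matrix $l_t$:

```python
import numpy as np

# Toy sketch of the final pooling: attention scores weight the per-segment
# flow vectors (the dot product step), then segments are averaged into
# v_compact. One scalar score per segment is a simplifying assumption.
def attentive_pool(v_flow, scores):
    weighted = v_flow * scores[:, None]  # (n_segments, hidden)
    return weighted.mean(axis=0)         # average across segments
```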
\section{Fake News Datasets}
\label{sec:datasets}
Despite the recent efforts for debunking online fake news, there is a dearth of publicly available datasets. Most of the available datasets are small in size (e.g., the Politifact\footnote{\href{https://www.politifact.com/}{https://www.politifact.com/}} dataset in \cite{shu2018fakenewsnet} has $\sim$700 available articles, the Celebrity dataset in \cite{perez2018automatic} has $\sim$500 articles, etc.), their test parts have not been manually annotated, or have been collected from a very small number of news sources. Nonetheless, we evaluate FakeFlow on three different available datasets to demonstrate its performance. In addition, we create our own dataset. Table \ref{datasets} gives an overview of the datasets that we used in our work.
\vspace{1em} \noindent \textbf{MultiSourceFake}: We rely on different resources for creating the training and test portions of the dataset, so as to provide a challenging benchmark.
For the training part, we use \textit{OpenSources.co} (OS), \textit{MediaBiasFactCheck.com} (MBFC), and PolitiFact\footnote{\href{https://www.politifact.com/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/}{https://www.politifact.com/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/}} news websites' lists. OS list contains 560 domains, MBFC list has 548 domains, and the PolitiFact list has 227 domains.
These lists have been annotated by professional journalists. The lists contain domains of online news websites annotated based on the content type (as in the OS news list: \textit{satire, reliable}, etc.; and in the PolitiFact news list: \textit{imposter, parody, fake news}, etc.) or from a factuality perspective (as in the MBFC news list: low, medium, and high factuality). From the OS list, we select domains that are in one of the following categories: \textit{fake, bias, reliable, hate, satire,} or \textit{conspiracy}. We consider domains under the \textit{reliable} category as real news sources, and the rest as fake. The PolitiFact list is different from the OS list since it has only labels for domains that are either fake or with mixed content. We discard the mixed ones\footnote{The discarded label is ``Some fake stories''.} and map the remaining ones to the fake news label. Finally, we select from the MBFC list those domains that are annotated either as high or low factual news and we map them to real and fake labels, respectively. Out of these three final lists, we select only those domains for our dataset that are annotated in all lists in a consistent way; for example, we discard those domains that are annotated as real in the OS list but their label in the MBFC list is fake (low factuality). The final list contains 85 news websites. We now proceed by projecting the domain-level ground truth onto the content of those domains and randomly sample articles, with a maximum of 100 news articles per domain.\footnote{Some of the websites included less than 100 news articles.}
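The consistency filter over the three source lists can be sketched as follows (our reconstruction; whether a domain must appear in every list, or merely never conflict, is left open in the description above):

```python
# Illustrative sketch (our reconstruction): keep only domains whose
# fake/real label never conflicts across the OS, MBFC and PolitiFact
# lists; conflicting domains are discarded.
def consistent_domains(*labeled_lists):
    merged = {}
    for lst in labeled_lists:
        for domain, label in lst.items():
            merged.setdefault(domain, set()).add(label)
    return {d: labels.pop() for d, labels in merged.items() if len(labels) == 1}
```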
For the test part, we use the \textit{leadstories.com} fact checking website, for which professional journalists annotated online news articles at the article level as fake or real. We do not follow the annotation procedure of the training part here, since projecting the domain-level ground truth inevitably introduces noise. The journalists who annotated \textit{leadstories.com} assigned a set of labels to the fake news articles, e.g., \textit{false, no evidence, satire, misleading}, etc.; we map them all to the \textit{fake} label. In addition, we discard all articles that are multimedia-based. After collecting the news articles, we postprocess them by discarding very short articles (less than 30 words). The test part includes 689 fake news articles. We complement the set with a sample of 1,000 real news articles from the training part. The overall dataset consists of 5,994 real and 5,403 fake news articles. The average document length (number of words) in the MultiSourceFake dataset is 422 words, and the 95th percentile value is 942. Figure \ref{fig:data_len} shows the distribution of document lengths in the dataset.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/data_length.png}
\caption{The distribution of the documents' length in the MultiSourceFake dataset.}
\label{fig:data_len}
\end{figure}
\vspace{1em} \noindent \textbf{TruthShades}: This dataset was proposed in \newcite{rashkin2017truth}. It was crawled from a set of domains annotated by professional journalists as either {\em propaganda, hoax, satire}, or {\em real}. The dataset was built from the English Gigaword corpus for real news, and from seven other unreliable domains, each annotated with one of the three false-information labels above.
\vspace{1em} \noindent \textbf{PoliticalNews}: Since ``a classifier trained using content from articles published at a given time is likely to become ineffective in the future'' \cite{castelo2019topic}, the authors of that work collected a dataset by crawling news websites between 2013 and 2018 in order to evaluate their model's performance across different years.
\vspace{1em} \noindent \textbf{FakeNewsNet}: A fake news repository that consists of two comprehensive datasets, one collected using claims from PolitiFact and the other from the GossipCop fact checking website. Given the large number of true and false claims from these two fact checking websites, \newcite{shu2018fakenewsnet} built news datasets that contain the visual and textual content of news articles together with social media information obtained by searching Twitter for users who shared the news. Of all the collected information, we use only the textual content of the news articles, which is the part we are interested in.
\begin{table}
\small
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Name} & \textbf{Total} & \textbf{Training} & \textbf{Test} \\ \hline
MultiSourceFake & 11,397 & 9,708 & 1,689 \\ \hline
TruthShades & 23,000 & 16,000 & 4,000 / 3,000 \\ \hline
PoliticalNews & 14,240 & 11,392 & 2,848 \\ \hline
FakeNewsNet & 20,208 & 16,156 & 4,039 \\ \hline
\end{tabular}
\caption{Number of articles in the datasets.}
\label{datasets}
\end{table}
\section{Experiments}
\label{sec:experiments}
\noindent \textbf{Experimental setup.}
We split the articles' text into $N$ segments and set the maximum length of segments to 800 words, applying zero padding to the ones shorter than 800 words. Concerning the FakeFlow hyper-parameters, we tune various parameters (\textit{dropout, the size of the dense layers, activation functions, CNN filter sizes and their numbers, pooling size, size of the GRU layer, and the optimization function}) (see Appendix \ref{sec:appendix} for the search space) using early stopping on the validation set. In addition to these hyper-parameters, we also use the validation set to pick the best number of segments ($N$). Regarding the MultiSourceFake dataset, we use 20\% of the training part for validation.
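The segmentation step above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the padding token is an assumption.

```python
def segment_document(tokens, n_segments=10, seg_len=800, pad="<pad>"):
    """Split a tokenized document into n_segments contiguous chunks of
    (roughly) equal size, then pad (or truncate) each to seg_len,
    mirroring the N=10, max-length-800 setup described above."""
    chunk = -(-len(tokens) // n_segments)  # ceiling division
    segments = []
    for i in range(n_segments):
        seg = tokens[i * chunk:(i + 1) * chunk][:seg_len]
        seg = seg + [pad] * (seg_len - len(seg))  # zero padding
        segments.append(seg)
    return segments
```

Documents shorter than `n_segments * seg_len` words simply end in padded segments, so the model always receives a fixed-shape input.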
We represent words using pre-trained word2vec \textit{Google-News-300} embeddings\footnote{\href{https://code.google.com/archive/p/word2vec/}{https://code.google.com/archive/p/word2vec/}}.
For evaluation, we follow the setup from related work. We report accuracy and weighted precision, recall, and F1 score, as well as macro F1 for datasets where the classes are imbalanced.
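The two F1 aggregations used for evaluation can be illustrated with a small self-contained computation; the label names are placeholders.

```python
def f1_scores(y_true, y_pred, labels=("real", "fake")):
    """Per-class F1 combined into the macro (unweighted) and weighted
    (by class support) averages used in the evaluation above."""
    per_class, supports = [], []
    for lab in labels:
        tp = sum(t == p == lab for t, p in zip(y_true, y_pred))
        fp = sum(p == lab and t != lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        supports.append(sum(t == lab for t in y_true))
    macro = sum(per_class) / len(per_class)
    weighted = sum(f * s for f, s in zip(per_class, supports)) / sum(supports)
    return macro, weighted
```

On an imbalanced test set, the macro average penalizes poor minority-class performance more strongly than the weighted average, which is why macro F1 is reported for the imbalanced datasets.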
\vspace{1em} \noindent \textbf{Baselines.} To evaluate the performance of our model, we use a combination of fake news detection models and deep neural network architectures:
\begin{itemize}[leftmargin=4mm]
\item \noindent \textbf{CNN, LSTM}: We use CNN and LSTM models and validate their performance when treating each document as one fragment. We experiment with different hyper-parameters and report results for the ones that performed best on the validation set.
\item \textbf{HAN}: The authors of \cite{yang2016hierarchical} proposed a Hierarchical Attention Networks (HAN) model for long document classification. The proposed model consists of two levels of attention mechanisms, i.e., word and sentence attention. The model splits each document into sentences and learns sentence representations from words.
\item \textbf{BERT}: is a text representation model that showed superior performance on multiple natural language processing (NLP) benchmarks \cite{devlin2019bert}. We use the pre-trained \textit{bert-base-uncased} version which has 12-layers and yields output embeddings with a dimension of size 768. We feed the hidden representation of the special [CLS] token, that BERT uses to summarize the full input sentence, to a softmax layer. Experimentally, we found that fine-tuning BERT layers gives a higher performance. It is worth mentioning that BERT input length is limited to 512 word pieces (sub-words level) \cite{devlin2019bert}, thus, we discard the rest of the text in long news articles.
\item \textbf{Fake News Detection Models}: We compare our model to several fake news detection models. We use \newcite{horne2017just} model, FakeNewsDetector \cite{perez2018automatic}, \newcite{rashkin2017truth} model, and EIN \cite{ghanem2020emotional}.\footnote{We only compare TopicAgnostic on the dataset the authors proposed (PoliticalNews).}
\item \textbf{Longformer}: Given that Transformer-based models (e.g., BERT) are unable to process long sequences, we use Longformer \citep{beltagy2020longformer}, a SOTA model for long-document tasks. In our experiments, we set the max sequence length to 1500 to handle documents that have more than 512 tokens in the MultiSourceFake dataset (see Figure \ref{fig:data_len}). Also, we found that fine-tuning the Longformer model gives better results and a much faster convergence.
\end{itemize}
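As a side note on the BERT baseline's length limitation, the truncation constraint can be sketched as follows; this is a simplified illustration of standard BERT input construction, not the exact tokenization pipeline.

```python
def bert_input(word_pieces, max_len=512):
    """BERT-style input truncation: keep [CLS], up to max_len - 2 word
    pieces, and [SEP]; everything beyond is discarded, which is why
    long articles lose their later segments under this baseline."""
    return ["[CLS]"] + word_pieces[:max_len - 2] + ["[SEP]"]
```

For the MultiSourceFake dataset, where many articles exceed 512 word pieces, this truncation discards a substantial tail of the text, consistent with the weaker BERT results reported below.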
\section{Results and Analysis}
\label{sec:results}
Table \ref{tab:results} presents the results of our proposed model and the baselines on the MultiSourceFake dataset. Our best result was achieved using 10 segments ($N$, as found on the validation data). In Figure \ref{fig:chunk_size} we show the model's performance for different numbers of segments.\footnote{In the case of N=1 in Figure \ref{fig:chunk_size}, we set the maximum segment length to 1500 words instead of 800 to not lose parts of the longer articles.} In general, the results show that models based on either word n-grams or word embeddings perform better than models that use handcrafted features, e.g. \citet{horne2017just}. Also, despite the huge amount of data used to train the BERT model, the results show that BERT performs worse than \textit{FakeFlow} and also fails to outperform some of the other models. We speculate that this is because the input length in BERT is limited to 512 word pieces, as mentioned previously, and a large portion of the news articles in the MultiSourceFake dataset is longer than that. The results of the Longformer model confirm our claim regarding document length and show a significantly higher F1 score than the BERT model. This emphasizes that despite the strong performance of BERT on multiple NLP benchmarks, it is unable to handle long text documents, in contrast, e.g., to vanilla text categorization \cite{adhikari2019docbert}. In addition, Longformer shows a higher F1 score than the FakeFlow model, yet the difference is statistically insignificant.
To isolate the contribution of topical vs.\ affective information, we run two simplified versions of our architecture, each consisting only of the network that captures topical or affective information, respectively. The results show that the flow of affective information performs weakly when used alone; this emphasizes that the affective information of a news article is a meaningful, yet complementary, source of information.
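The significance markers in Table \ref{tab:results} rely on the McNemar test over paired classifier decisions. One common continuity-corrected form of the statistic can be computed as follows; this is a sketch and not necessarily the exact variant used.

```python
def mcnemar(b, c):
    """Continuity-corrected McNemar statistic on paired decisions.
    b = examples model A gets right and model B gets wrong;
    c = the reverse. Under the null hypothesis the statistic is
    chi-square with 1 degree of freedom, so values above 3.84
    indicate p < 0.05."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Only the discordant pairs (b and c) matter: examples both models classify identically carry no information about which model is better.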
\begin{figure}
\includegraphics[width=8cm]{figures/segments_sizes.png}
\caption{The accuracy and F1 results of the FakeFlow model using different $N$ (number of segments).}
\label{fig:chunk_size}
\end{figure}
\begin{table}
\small
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Model} & \textbf{Acc.} & \textbf{Prec.} & \textbf{Rec.} & \textbf{F1$_{macro}$} \\
\hline
Majority Class & 0.59 & 0.35 & 0.59 & 0.37 \\ \hline
{\scriptsize \citet{horne2017just}} & 0.80 & 0.75 & 0.78 & 0.80 \\ \hline
FakeNewsDetector & 0.86 & 0.86 & 0.86 & 0.86 \\ \hline
LSTM & 0.91 & 0.86 & 0.91 & 0.90 \\ \hline
CNN & 0.91 & 0.89 & 0.89 & 0.91 \\ \hline
{\scriptsize \citet{rashkin2017truth}}& 0.92 & 0.92 & 0.92 & 0.92 \\ \hline
BERT & 0.93 & 0.93 & 0.94 & 0.93‡ \\ \hline
EIN & 0.93 & 0.94 & 0.93 & 0.93‡ \\ \hline
HAN & 0.94 & 0.94 & 0.94 & 0.93‡ \\ \hline
Longformer & 0.97 & 0.97 & 0.97 & \textbf{0.97}† \\ \hline
\hline
FakeFlow & 0.96 & 0.93 & 0.97 & 0.96 \\
\hline
\scriptsize{FakeFlow -- Topic only} & 0.91 & 0.89 & 0.90 & 0.90 \\ \hline
\scriptsize{FakeFlow -- Affective only} & 0.61 & 0.38 & 0.60 & 0.40 \\
\hline
\end{tabular}
\caption{Results on the MultiSourceFake dataset. (‡) indicates a statistically significant improvement of \textit{FakeFlow} over the referred model using McNemar test; (†) indicates no statistically significant improvement over \textit{FakeFlow}.}
\label{tab:results}
\end{table}
\vspace{1em}
\noindent
\textbf{Performance on Multiple Datasets.} In Table \ref{tab:datasets} we compare the performance of the FakeFlow model to SOTA results on the other datasets introduced in Section \ref{sec:datasets}. The TruthShades dataset has two test sets, in-domain and out-of-domain. In the in-domain configuration, training and test articles come from the same sources; in the out-of-domain configuration, they come from different sources. The results demonstrate that FakeFlow achieves a better F1 on both test sets. Similarly, the results on the PoliticalNews dataset show that FakeFlow also outperforms the TopicAgnostic model, although the gap is not very large. Finally, regarding the FakeNewsNet dataset, it appears that the deep learning-based model (FakeNewsTracker) does not perform well compared to the other baseline proposed by the authors, a Logistic Regression (LR) classifier over one-hot vectors of the news articles' text. This suggests that a simple word-based model works better than a more sophisticated model that incorporates social media and context information. The FakeFlow model, on the other hand, achieves a better result, outperforming both FakeNewsTracker and the LR baseline.
\begin{table}
\footnotesize
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{TruthShades} & Acc. & Prec. & Rec. & F1$_{macro}$ \\ \hline
\multicolumn{5}{|c|}{\small{Out-of-domain}} \\
\hline
{\scriptsize \citet{rashkin2017truth}} & 0.67 & 0.70 & 0.67 & 0.65 \\ \hline
FakeFlow & 0.68 & 0.69 & 0.68 & \textbf{0.68} \\ \hline
\hline
\multicolumn{5}{|c|}{\small{In-domain}} \\
\hline
{\scriptsize \citet{rashkin2017truth}} & 0.91 & 0.91 & 0.91 & 0.91 \\ \hline
FakeFlow & 0.96 & 0.96 & 0.96 & \textbf{0.96} \\ \hline
\multicolumn{5}{c}{} \\ \hline
\textbf{PoliticalNews} & Acc. & Prec. & Rec. & F1$_{weighted}$ \\
\hline
TopicAgnostic & 0.87 & 0.87 & 0.87 & 0.87 \\ \hline
FakeFlow & 0.88 & 0.88 & 0.88 & \textbf{0.88} \\ \hline
\multicolumn{5}{c}{} \\ \hline
\textbf{FakeNewsNet} & Acc. & Prec. & Rec. & F1$_{weighted}$ \\
\hline
FakeNewsTracker & 0.80 & 0.82 & 0.75 & 0.79 \\ \hline
One-Hot LR & 0.82 & 0.90 & 0.72 & 0.80 \\ \hline
FakeFlow & 0.86 & 0.86 & 0.86 & \textbf{0.85} \\ \hline
\end{tabular}
\caption{Results on multiple datasets. We compare the FakeFlow model to SOTA models on each dataset.}
\label{tab:datasets}
\end{table}
\vspace{1em}
\noindent
\textbf{Topic-Aware Model.} New events are constantly being covered by news agencies, and these events differ from older ones in discourse and topic. Therefore, a fake news detector trained on news articles from years back may be unable to detect recent news. In this experiment, we evaluate our approach on the PoliticalNews dataset, which is built from news distributed across different years (2013 to 2018). Following the experimental setup in \cite{castelo2019topic}, we train the FakeFlow model on news from one year and test it on each of the other years. For example, we train the model on news from 2013 and test on news from 2015. Note that each test set is thus associated with 5 results, one per training year.
Figure \ref{fig:topic_aware} shows the average accuracy for each test set. We compare FakeFlow to the TopicAgnostic model, which proved to be effective at detecting fake news from different years. It is worth mentioning that the features of the TopicAgnostic model were extracted from both the headlines and the text of the news articles. The results show that both models have a similar performance, except for the 2013 test set, where FakeFlow achieves a higher accuracy by a margin of 7\%. The experiment shows that FakeFlow is capable of detecting fake news from different years, with stable performance across the years.
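The train-on-one-year, test-on-the-others protocol can be sketched as follows; the training and evaluation routines are placeholders, not the actual model code.

```python
from collections import defaultdict

def cross_year_eval(articles_by_year, train_fn, eval_fn):
    """For each training year, fit once and evaluate on every other
    year; return the mean accuracy per *test* year, matching the
    'several results per test set, one per training year' setup."""
    scores = defaultdict(list)
    years = sorted(articles_by_year)
    for train_year in years:
        model = train_fn(articles_by_year[train_year])
        for test_year in years:
            if test_year == train_year:
                continue
            scores[test_year].append(
                eval_fn(model, articles_by_year[test_year]))
    return {y: sum(v) / len(v) for y, v in scores.items()}
```

With the six years 2013 to 2018, each test year accumulates five accuracies, whose average is what the figure reports.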
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/topic_aware.png}
\caption{Topic aware experiment's results.}
\label{fig:topic_aware}
\end{figure}
\vspace{1em}
\noindent
\textbf{Attention Weights.} The proposed FakeFlow model shows that taking into account the flow of affective information in fake news is an important perspective for fake news detection.
We argue that being able to better understand the behaviour of the model can make it more transparent to the end-users.
Figure \ref{fig:pipeline} illustrates this by showing the attention weights of a fake news article across the 10 segments (left bar).\footnote{We averaged the attention weight matrix along the timestep (segment) dimension.} The figure shows that FakeFlow attends more to the beginning of the article. For a better understanding, we match the affective information with the attention weights. In the news text in the figure, the \textit{emotions} features\footnote{Words with multiple colors have been annotated with multiple emotion types in the NRC lexicon.} show a clear example of how fake news articles try to manipulate the reader. It appears that the presence of the \textit{fear, sadness}, and \textit{surprise} emotions at the beginning of the article has triggered the attention on this part. Towards the end of the article, on the other hand, such negative emotions are absent, while emotions like \textit{joy} and \textit{anticipation} appear. This exemplifies how fake news articles try to attract the readers' attention in the first part of the text.
Regarding the \textit{morality} features, we only match the word ``kill'' with the \textit{harm} category. Also, for the \textit{hyperbolic} feature, we match the words ``terrifying'' and ``powerful''. In the same manner, both \textit{morality} and \textit{hyperbolic} features match words that occur at the beginning of the article. Lastly, for both \textit{sentiment} and \textit{imageability} features, we are not able to find a clear interpretation in this example where many words across the segments match.
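Collapsing the attention matrix into one weight per segment, as done for the bar in the figure, can be sketched as follows; the matrix layout and the normalization step are assumptions for illustration.

```python
def segment_attention(attn, normalize=True):
    """attn: a (timesteps x n_segments) matrix as a list of rows.
    Average over the timestep dimension to get one weight per segment,
    optionally renormalizing so the weights sum to 1."""
    n_rows = len(attn)
    weights = [sum(col) / n_rows for col in zip(*attn)]
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    return weights
```

The resulting vector is what gets rendered as the per-segment bar next to the article text.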
\begin{figure*}
\centering
\includegraphics[width=13cm]{figures/emotions_fake_story.PNG}
\caption{Emotional interpretation of a \textit{fake} news article by showing the attention weights (the bar on the left) and highlighting the emotions in the text.}
\label{fig:pipeline}
\end{figure*}
\vspace{1em}
\noindent
\textbf{Real vs.\ Fake Analysis.} In Table \ref{tab:feats_statistics} we present an analysis of both real and fake news articles. The analysis gives the reader an intuition of how the features are distributed across the articles' segments. It shows that an emotion like \textit{fear} has, on average, a larger difference between the first and the last segment in fake news than in real news (see Figure \ref{fig:flow} for a visualization). Also, a feature like \textit{hyperbolic} has a higher average value and a lower standard deviation across all segments for fake news than for real news, indicating that fake news contains a larger amount of hyperbolic words, with consistently high values across segments.
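The per-feature statistics in Table \ref{tab:feats_statistics} can be computed as in the sketch below, under the assumption that $\sigma$ $all_{seg.}$ is the deviation of the per-segment means (consistent with the small values reported); this is an illustration, not the authors' analysis script.

```python
import statistics

def flow_stats(segment_values):
    """segment_values: one list per document holding a feature's value
    in each of its segments. Returns (mu_first, mu_last, mu_all,
    sigma_all): the mean first-segment value, mean last-segment value,
    mean over the per-segment means, and their standard deviation."""
    mu_first = statistics.mean(doc[0] for doc in segment_values)
    mu_last = statistics.mean(doc[-1] for doc in segment_values)
    per_segment = [statistics.mean(doc[i] for doc in segment_values)
                   for i in range(len(segment_values[0]))]
    mu_all = statistics.mean(per_segment)
    sigma_all = statistics.stdev(per_segment)
    return mu_first, mu_last, mu_all, sigma_all
```

A large gap between `mu_first` and `mu_last` with a non-trivial `sigma_all`, as observed for \textit{fear} in fake news, is exactly the kind of flow signal the model exploits.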
\begin{figure}[H]
\begin{minipage}{\textwidth}
\includegraphics[width=8cm]{figures/Fear.png}
\end{minipage}\hfill
\caption{The flow of the \textit{Fear} emotion in \textbf{fake} (\RIGHTarrow) and \textbf{real} (\textbullet) news articles in the MultiSourceFake dataset. Y-axis presents the average number of \textit{Fear} emotion words in 0-1 scale; the X-axis presents the document text, divided into 10 segments.}
\label{fig:flow}
\end{figure}
\begin{table*}
\footnotesize
\centering
\begin{tabular}{c|c|c|c|c|c||c|c|c|c}
\hline
& \multirow{2}{*}{\textbf{Features}} & \multicolumn{4}{c|}{\textbf{Real News}} & \multicolumn{4}{|c}{\textbf{Fake News}} \\ \cmidrule{3-10}
& {} & $\mu$ $first_{seg.}$ & $\mu$ $last_{seg.}$ & $\mu$ $all_{seg.}$ & $\sigma$ $all_{seg.}$ & $\mu$ $first_{seg.}$ & $\mu$ $last_{seg.}$ & $\mu$ $all_{seg.}$ & $\sigma$ $all_{seg.}$ \\
\hline
\parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{Emotions}}}
& Anger & 0.175 & 0.167 & 0.170 & 0.003 & 0.183 & 0.170 & 0.171 & 0.008 \\
& Anticipation & 0.301 & 0.315 & 0.264 & 0.025 & 0.293 & 0.305 & 0.260 & 0.022 \\
& Disgust & 0.095 & 0.101 & 0.095 & 0.004 & 0.096 & 0.091 & 0.091 & 0.007 \\
& \cellcolor{lightgray} Fear & \cellcolor{lightgray} 0.254 & \cellcolor{lightgray} 0.250 & \cellcolor{lightgray} 0.238 & \cellcolor{lightgray} 0.010 & \cellcolor{lightgray} 0.265 & \cellcolor{lightgray} 0.226 & \cellcolor{lightgray} 0.238 & \cellcolor{lightgray} 0.011 \\
& Joy & 0.217 & 0.226 & 0.183 & 0.021 & 0.207 & 0.203 & 0.175 & 0.020 \\
& Sadness & 0.161 & 0.158 & 0.160 & 0.006 & 0.155 & 0.155 & 0.158 & 0.007 \\
& Surprise & 0.140 & 0.144 & 0.123 & 0.012 & 0.142 & 0.123 & 0.120 & 0.008 \\
& Trust & 0.446 & 0.466 & 0.400 & 0.031 & 0.461 & 0.421 & 0.401 & 0.029 \\ \hline
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{Senti.}}}
& Positive & 0.599 & 0.623 & 0.558 & 0.030 & 0.608 & 0.591 & 0.554 & 0.032 \\
& Negative & 0.369 & 0.337 & 0.347 & 0.011 & 0.367 & 0.336 & 0.350 & 0.013 \\ \hline
\parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{Morality}}}
& Harm & 0.007 & 0.011 & 0.007 & 0.002 & 0.008 & 0.013 & 0.007 & 0.002 \\
& Care & 0.026 & 0.023 & 0.019 & 0.004 & 0.021 & 0.022 & 0.019 & 0.003 \\
& Fairness & 0.003 & 0.013 & 0.007 & 0.002 & 0.005 & 0.020 & 0.009 & 0.004 \\
& Unfairness & 0.000 & 0.000 & 0.001 & 0.000 & 0.001 & 0.000 & 0.001 & 0.001 \\
& Loyalty & 0.016 & 0.017 & 0.019 & 0.002 & 0.014 & 0.016 & 0.019 & 0.003 \\
& Betrayal & 0.004 & 0.003 & 0.005 & 0.001 & 0.002 & 0.003 & 0.004 & 0.001 \\
& Authority & 0.025 & 0.032 & 0.026 & 0.003 & 0.024 & 0.028 & 0.026 & 0.002 \\
& Subversion & 0.005 & 0.004 & 0.004 & 0.001 & 0.006 & 0.007 & 0.005 & 0.002 \\
& Sanctity & 0.005 & 0.005 & 0.004 & 0.001 & 0.005 & 0.006 & 0.005 & 0.002 \\
& Degradation & 0.003 & 0.004 & 0.003 & 0.001 & 0.006 & 0.004 & 0.003 & 0.001 \\ \hline
\parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{Img}}}
& Imageability & 0.845 & 1.203 & 1.144 & 0.122 & 0.877 & 1.184 & 1.145 & 0.124 \\
& Abstraction & 0.424 & 0.331 & 0.352 & 0.028 & 0.382 & 0.304 & 0.342 & 0.037 \\
\hline
& \cellcolor{lightgray} Hyperbolic & \cellcolor{lightgray} 0.042 & \cellcolor{lightgray} 0.05 & \cellcolor{lightgray} 0.045 & \cellcolor{lightgray} 0.005 & \cellcolor{lightgray} 0.046 & \cellcolor{lightgray} 0.044 & \cellcolor{lightgray} 0.047 & \cellcolor{lightgray} 0.003
\end{tabular}
\caption{A quantitative analysis of the presence of features across articles' segments. We report the average value in the first segment ($\mu$ $first_{seg.}$), the average value in the last segment ($\mu$ $last_{seg.}$), the average value across all 10 segments ($\mu$ $all_{seg.}$), and the standard deviation across the 10 segments ($\sigma$ $all_{seg.}$), for both real and fake news.
}
\label{tab:feats_statistics}
\end{table*}
\section{Conclusion}
In this paper we presented FakeFlow, a model that takes into account the flow of affective information (\textit{emotions, sentiment, hyperbolic words}, etc.) in texts to better detect fake news articles. The model receives as input a text, segmented into smaller units, instead of processing one long fragment.
This enables it to learn the flow of affective information by modeling the interaction between the topic and affective terms in the news article.
We evaluated our model on four different datasets and compared it to several strong baselines. The extensive experiments show the effectiveness of FakeFlow over state-of-the-art models. Although FakeFlow was trained using a limited amount of text, the results demonstrate that it achieves results on par with resource-hungry models (e.g. BERT and Longformer).
In future work, we plan to extend our dataset and study more fine-grained news types, e.g. propaganda, from an emotional perspective. Moreover, we plan to investigate how we can replace the lexicon-based information with language-independent approaches in an attempt to make our model multilingual.
\section*{Acknowledgment}
The first author would like to thank Ines Rehbein and Ana Uban for their valuable comments and suggestions. The work of the third author was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE on MISinformation and MIScommunication in social media: FAKE news and HATE speech (PGC2018- 096212-B-C31) and by the Generalitat Valenciana under the research project DeepPattern (PROMETEO/2019/121).
The XPC nano is Shuttle's smallest product line, at only palm size. Shuttle's NC03U series is built on Intel's Kaby Lake-U platform. The series offers four embedded CPU options (Celeron, Core i3, Core i5, and Core i7) and has two DDR4 SODIMM slots supporting a maximum of 32GB. The built-in Intel HD graphics gives these ultra-compact machines ample performance for 4K/Ultra HD playback at 60Hz via HDMI and DisplayPort. As in the NC02 series, a new mechanical design allows easy installation of memory modules and storage, and the case accommodates one 2.5" HDD (up to 15mm) for more flexible storage capacity. Despite its small size, the NC03U series delivers a full PC experience with extensive connectivity: multiple I/O ports (USB 3.0, USB 2.0, RS232) plus built-in SATA 6G and M.2 high-speed storage interfaces. Its small and elegant design makes this series a portable mini PC fit for home, office, POS, and even digital signage.
William Morris (Walthamstow, 24 March 1834 – Hammersmith, 3 October 1896) was an English textile designer, poet, novelist, translator, and socialist activist. Associated with the British Arts & Crafts movement, he was a major contributor to the revival of the textile arts and traditional methods of production. His literary contributions helped to establish the modern fantasy genre, and he also played a significant role in spreading the socialist movement in Great Britain.

Born in Walthamstow, Essex, to a wealthy middle-class family, Morris was deeply influenced by medievalism while studying Classics at the University of Oxford, where he joined the Birmingham Set. After university he trained as an architect, married Jane Burden, and formed close friendships with the Pre-Raphaelite artists Edward Burne-Jones and Dante Gabriel Rossetti and with the Gothic Revival architect Philip Webb. Webb and Morris designed the Red House, where Morris lived from 1859 to 1865 before moving to Bloomsbury in central London. In 1861, Morris founded a decorative arts firm with Burne-Jones, Rossetti, Webb, and others: Morris, Marshall, Faulkner & Co. Owing to high demand, the firm profoundly influenced interior decoration throughout the Victorian era, selling tapestries, wallpaper, fabrics, furniture, and stained glass designed by Morris. In 1875, Morris assumed sole control of the firm, which was renamed Morris & Co.

Although he kept his house in London, in 1871 Morris rented a rural retreat in the Cotswolds, Oxfordshire. Deeply influenced by visits to Iceland, he produced a series of translations of Icelandic sagas together with Eiríkr Magnússon. He also published a number of epic poems and novels of his own, such as The Earthly Paradise (1868–1870), A Dream of John Ball (1888), the utopia News from Nowhere (1890), and the fantasy novel The Well at the World's End (1896). In 1877 he founded the Society for the Protection of Ancient Buildings to campaign against the damage caused by the restorations of the period. Embracing Marxism and influenced by anarchism, in the 1880s Morris became an activist for revolutionary socialism. After his involvement with the Social Democratic Federation, in 1884 he founded the Socialist League, from which he would split in 1890. In 1891 he founded the Kelmscott Press to publish books inspired by illuminated manuscripts, a cause to which he devoted himself until his death.

Morris is considered one of the most important figures of British culture in the Victorian era. Although during his lifetime he was known chiefly for his poetry, after his death he became better known for his designs. Founded in 1955, the William Morris Society is dedicated to promoting his legacy. Besides the many biographies, much of his work can be seen in museums and art galleries, and a large portion of his designs remain in production.
Life and work

Morris was born in Walthamstow, near London. His family was wealthy, and he went up to Oxford (Exeter College), where he came under the influence of John Ruskin and met his lifelong friends and collaborators Dante Gabriel Rossetti, Edward Burne-Jones, Ford Madox Brown, and Philip Webb. He also met his wife, Jane Burden, a working-class woman whose pale skin and coppery hair Morris and his friends considered the epitome of beauty.

The artistic movement that Morris and the others made famous was the Pre-Raphaelite Brotherhood. They shunned the cheap industrial manufacture of the decorative arts and of architecture, favouring a return to craftsmanship and elevating artisans to the status of artists.

Morris left Oxford to join an architectural firm, but soon found himself increasingly drawn to the decorative arts. He and Webb built the Red House at Bexleyheath in Kent, Morris's wedding gift to Jane. It was here that his design ideas began to take physical shape. The brick clock tower in the centre of Bexleyheath had a bust of Morris placed on it in 1996.

In 1861 he founded the firm Morris, Marshall, Faulkner & Co. with Gabriel Rossetti, Burne-Jones, Madox Brown, and Philip Webb. Throughout his life he continued to work in his own firm, although it changed names. Its most famous incarnation was as Morris and Company. His designs are still sold today under licences granted to Sanderson and Sons and Liberty of London.

In 1877 he founded the Society for the Protection of Ancient Buildings. His preservation work resulted indirectly in the founding of the National Trust.

Morris and his daughter May were among the first socialists in England, working directly with Eleanor Marx and Engels to launch the socialist movement. In 1883 he joined the Social Democratic Federation and, in 1884, organised the Socialist League. One of his best-known works, News from Nowhere, is a utopian novel describing a socialist society. This aspect of Morris's work is discussed at length in the biography (subtitled From Romantic to Revolutionary) written by E. P. Thompson.

Morris and Rossetti rented a country house, Kelmscott Manor near Lechlade, Gloucestershire, as a summer retreat, but it soon became a refuge where Rossetti and Jane Morris carried on a long-lasting affair. To escape the discomfort, Morris frequently travelled to Iceland, where he researched the Icelandic legends that would later form the basis of poems and novels.

Morris's book The Wood Beyond the World is considered to have influenced C. S. Lewis's Narnia series, while J. R. R. Tolkien was inspired by Morris's reconstructions of early Germanic life in The House of the Wolfings and The Roots of the Mountains.

After Tennyson's death in 1892, Morris was offered the position of Poet Laureate, but he declined it.

William Morris died in 1896 and was buried in the churchyard of the village of Kelmscott, Oxfordshire.
Work

Literature

William Morris was a prolific author of poetry, fiction, essays, and translations of medieval and ancient texts. His first poems were published when he was 24 years old, while his last novel, The Sundering Flood, was still being completed on the day of his death. Between 1910 and 1915, his daughter May edited a 24-volume collection of his work (Collected Works), with two additional volumes published in 1936.

Morris began publishing poetry and short stories in 1856 in the Oxford and Cambridge Magazine, which he founded and financed together with friends while they were at university. His first volume, The Defence of Guenevere and Other Poems (1858), was the first book of Pre-Raphaelite poetry ever published. Its sombre poems, set in a world of dark violence, were received with indifference by the critics, which discouraged Morris from publishing again for several years. "The Haystack in the Floods", one of the poems in that collection, is today probably one of his best-known poems. It is a grimly realistic piece set during the Hundred Years' War, in which the doomed lovers Jehane and Robert are separated during a flood. "Masters in this Hall" (1860), one of his early poems, is a Christmas carol written to an old French tune. "The Snow in the Street" is another Christmas poem, adapted from "The Land East of the Sun and West of the Moon" in The Earthly Paradise.

In 1868, Morris met Eiríkr Magnússon, from whom he learned the Icelandic language. The following year, Morris published translations of the Saga of Gunnlaugr Serpent-Tongue and the Grettis Saga and, in 1870, of the Völsunga Saga. A further volume was published in 1873 under the title Three Northern Love Stories.

In the last nine years of his life, Morris wrote a series of imaginative works of fiction generally referred to as "prose romances". These romances, which include The Wood Beyond the World and The Well at the World's End, are considered important milestones in the history of fantasy fiction because, while other authors set their works in distant lands, dream worlds, or the future (as Morris himself did in News from Nowhere), Morris's works were the first to be set in an entirely invented fantasy world. This corresponded to his attempt to revive the genre of medieval romance, and the texts were written in imitation of medieval prose.

Early fantasy authors such as Lord Dunsany, E. R. Eddison, and James Branch Cabell knew Morris's works in depth. The Wood Beyond the World deeply influenced C. S. Lewis's Chronicles of Narnia. J. R. R. Tolkien stated that many of his early works had been influenced by reading Morris, and he was inspired by Morris's reconstructions of the life of the ancient Germanic peoples in The House of the Wolfings and The Roots of the Mountains. As a young man, Tolkien attempted to adapt the story of Kullervo, from the epic Kalevala, in the style of The House of the Wolfings. Aladore, a medieval allegorical novel by Henry Newbolt, was influenced by Morris's fantasies (Robert Reginald, "Sir Henry Newbolt's Aladore", in Xenograffiti: Essays On Fantastic Literature, Wildside Press, 1996, pp. 95-99, ISBN 0-8095-1900-3). James Joyce also drew inspiration from Morris's works.
Design
Throughout his life, Morris produced a remarkable quantity of work in various fields, chiefly interior decoration. He created more than 600 patterns for wallpaper, textiles and embroidery, more than 150 stained-glass designs, three typefaces and around 650 typographic ornaments for the Kelmscott Press. Morris upheld the principle that the design and production of a given piece should not be separated and that, whenever possible, makers should be both designers and craftsmen, not only designing but also producing what they create. Morris also revived a number of disused techniques in textile design, and insisted on hand processing and the use of high-quality raw materials, almost always natural dyes. Many of his patterns were inspired by observation of the natural world, and he insisted on the need to learn production techniques before designing any piece.
Mackail claimed that Morris had become a manufacturer not because he was interested in making money, but because he wished to produce himself what he designed. Morris & Co.'s patterns were fashionable among the British middle and upper classes. The biographer Fiona MacCarthy states that they "had become the safe choice of the intellectual classes, an exercise in political correctness". The firm's main selling points were the immense variety of its catalogue and the notion of artistic control over production that it promoted.
Morris's preference for medieval textiles probably formed, or crystallised, during his apprenticeship with G. E. Street. Street had co-written the book Ecclesiastical Embroidery, published in 1848, and was a staunch advocate of abandoning canvas needlepoint in favour of more expressive embroidery techniques based on Opus Anglicanum, popular in England during the Middle Ages. He was also interested in Persian tapestry and advised the Victoria and Albert Museum on the purchase of Kerman carpets.
Kelmscott Press
In January 1891, Morris founded the Kelmscott Press in Hammersmith, London, to produce examples of improved printing and book design. He developed clear typefaces, such as his Roman 'Golden' type, which was inspired by the type of the early Venetian printer Nicolaus Jenson, and medievalising decorative borders for books, inspired by fifteenth-century incunabula and their woodcut illustrations. The selection of paper and ink, and the concern for the complete integration of type and decoration on the page, made the Kelmscott Press the most famous of the private presses of the Arts and Crafts movement. It operated until 1898, producing 53 volumes and inspiring other private presses. Among book lovers, its edition of The Canterbury Tales is considered one of the most beautiful books ever produced.
Literary works
The Defence of Guinevere, and other Poems (1858)
The Life and Death of Jason (1867)
The Earthly Paradise (1868-70)
The Story of Sigurd the Volsung and the Fall of the Nibelungs (1876)
Love is Enough, or The Freeing of Pharamond (18
A Dream of John Ball (1886)
The House of the Wolfings (1888)
The Roots of the Mountains (1889)
News from Nowhere (1890)
The Story of the Glittering Plain (1890)
The Well at the World's End (1892)
The Wood Beyond the World (1892)
Morris also translated a large number of classical and medieval works, including collections of Icelandic sagas such as Three Northern Love Stories (1875), Virgil's Aeneid (1875) and Homer's Odyssey (1887).
The Morris Societies, in both England and the United States, are active in preserving Morris's work and ideas.
Further reading
External links
- texts by William Morris in electronic formats
(London Borough of Waltham Forest)
(marxists.org)
Graphic designers of the United Kingdom
Writers of the United Kingdom
Socialists of the United Kingdom
Atheists of the United Kingdom
Communists of the United Kingdom
Libertarian socialists
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,168 |
I 'retired' in 2008 after 34 years in the School of Psychological Science at La Trobe University. I retired to write the book "Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis" (Routledge, New York, 2012), which was recently released. My main research interest is statistical cognition: the empirical study of how people understand--or misunderstand--statistical ideas and presentations of data. I am keen to promote the reform of statistical practices, especially a shift from statistical significance testing to estimation and meta-analysis. I enjoy mountain bike riding, woodworking, word games, and spending time with the grandchildren.
"redpajama_set_name": "RedPajamaC4"
} | 5,202 |
\section{Introduction}
For the past few decades, we have extensively studied the gravitational interaction of Dark Matter (DM) and very little doubt remains of its existence (for an overview, see \cite{dmreview:garett,dmreview:bertone, dmreview:lisanti, dmreview:profumo, dmreview:phlen} and references therein). However, the particle nature of DM remains a mystery and we have no clue about its mass, spin, and interactions with other elementary particles. In the early days, Weakly Interacting Massive Particles (WIMPs) were postulated to be the DM candidate, but recent bounds from null results of terrestrial experiments have ruled out almost all of the interesting parameter space \cite{LUX2017}. Several new candidates have been proposed recently which obtain the correct relic abundance and are consistent with present detector bounds. \\
One simple solution is to assume that the DM is light, i.e. its mass is in the sub-GeV domain. In this limit, the local DM cannot produce sufficient nuclear recoil and thus remains undetected in traditional detectors. It has been proposed that electron recoil can be used to probe this parameter space \cite{subGeV:Essig,subGeV:Nanotubes,subGeV:3DDirac}. From the model-building perspective, it was recently proposed that 3-to-2 and 4-to-2 annihilations may be important for MeV- and keV-scale DM respectively \cite{simp:original}. Several interesting follow-ups to this paradigm can be found in \cite{simp:1lest, simp:2bernal1, simp:4z3, simp:5nnlo, simp:6hambye, simp:7z2, simp:8murayama, simp:9ujjal, simp:10split, simp:11vector, simp:lhc}. One of the biggest issues with sub-MeV DM is the conflict with the effective number of relativistic species ($N_{eff}$) during the Big-Bang-Nucleosynthesis (BBN) era \cite{rajendran}. To be consistent, one can assume that the dark sector has a lower temperature than the Standard Model (SM) bath \cite{hidden:original, Foot1,Foot2,Foot3}, or that it freezes in after BBN \cite{berlin}. \\
The standard model of cosmology, $\Lambda$CDM, has been hugely successful in explaining the majority of observed astrophysical phenomena. However, the assumption of cold, collision-less DM runs into what is dubbed the "small-scale crisis". The most prominent issues are the 'core vs. cusp' problem, the missing-satellite problem, and the 'too-big-to-fail' problem. While individual resolutions to each of these problems are possible, the assumption of self-interacting DM can solve some of them simultaneously \cite{sidm1,sidm2,sidm3,sidm4,sidm5,sidm6, sidm7, Khlopov1, Blinnikov1, Blinnikov2}. However, observations of galaxy cluster collisions put a strong bound on this self-interaction. For a recent review, one can refer to \cite{sidm:review} and references therein. For our analysis, we take the often-used limit $\sigma_{SI}/m \sim 0.1 - 1 \text{ cm}^2/g$. \\
The outline of this paper is as follows. In Sec. \ref{model}, we define the low energy limit of the interaction Lagrangian and find the relic density and self-interaction in the model. In Sec. \ref{res}, we study the results and discuss the allowed parameter space before we conclude in Sec. \ref{conc}.
\section{Model Description}
\label{model}
In this paper, we will consider the dark sector to be thermally decoupled from the Standard Model \cite{hidden:Das, BerlinPeV, SigurdsonHHDM, SIWDM}. The temperature asymmetry is characterised by the parameter $ \xi = (T_{d}/ T_{SM}) \leq 1$. Such a decoupling can be achieved if the interactions responsible for thermal equilibrium between the two sectors freeze out at high temperatures. In the absence of such interactions, one can postulate that the two sectors were populated at different temperatures during reheating \cite{Asymmetric}. Because of this temperature asymmetry, smaller masses for the DM are allowed, which are otherwise strictly constrained by the BBN $N_{eff}$. \\
We take the DM to be a Dirac fermion charged under a dark Abelian symmetry, $U(1)_D$. The gauge boson of this new symmetry, $Z^\prime$, acquires a mass from a high-scale spontaneous symmetry breaking. This transition is also responsible for generating a Majorana mass term which splits the dark fermion into two Majorana fermions ($\chi_1$ and $\chi_2$) with a mass gap \cite{iDM1,iDM2,iDM3,iDM4,iDM5}. The lighter of the two Majorana states (say, $\chi_1$) acts as the DM in this model. In this mass basis, the coupling of $Z^\prime$ is purely off-diagonal, as the Majorana states cannot carry any conserved quantum number. We add a light (almost massless) right-handed Dirac fermion ($f$) which is also charged under $U(1)_D$. The Majorana mass term for this light fermion can be avoided either by charge assignments or by assuming additional global symmetries. A detailed model is presented in the Appendix. \\
In the simplified picture, the interaction Lagrangian is given by,
\begin{equation}
\label{lag}
\mathcal{L} \supset -i g_D Z_\mu^\prime \left( \bar{\chi}_1 \gamma^\mu \chi_2 + \bar{f} \gamma^\mu f\right)
\end{equation}
where the coupling constant $g_D \approx 1$ ($\alpha_D = g_D^2/4\pi \approx 0.1$) for the remainder of this paper. We assume the mass hierarchy
\begin{equation}
m_f \approx 0 \ll m_\chi=m_1 < m_2 = m_1(1 + \delta) \ll m_{Z^\prime}.
\end{equation}
As the fermion masses are in the sub-MeV domain and $\xi$ is not infinitesimally small, these particles contribute to the effective relativistic degrees of freedom during the BBN era as,
\begin{equation}
N_{eff} = 3.046 + 2 \times \left(\frac{11}{4}\right)^{4/3} \xi^4.
\end{equation}
The analysis of the Planck data indicates that $N_{eff} = 3.15 \pm 0.23$ \cite{Planck}, which translates to $\xi \leq 0.45~(0.52)$ at the $1\sigma~(2 \sigma)$ level. However, if alternative cosmologies are taken into account, these constraints can be either more severe or more relaxed \cite{wCDM}. Hence, for our analysis, we take two benchmark scenarios, $\xi = 0.5$ and $\xi = 0.3$, as we do not comment upon the source of this temperature asymmetry.
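For reference, inverting the $N_{eff}$ relation above for the upper bound on $\xi$ reproduces the limits just quoted; the short numerical sketch below is an illustrative addition, not part of the original analysis:

```python
# Invert N_eff = 3.046 + 2*(11/4)**(4/3) * xi**4 for the upper bound on xi.
def xi_bound(n_eff_max):
    """Largest temperature ratio xi consistent with a given N_eff limit."""
    delta_n = n_eff_max - 3.046  # extra radiation allowed above the SM value
    return (delta_n / (2.0 * (11.0 / 4.0) ** (4.0 / 3.0))) ** 0.25

# Planck: N_eff = 3.15 +/- 0.23 gives the 1-sigma and 2-sigma upper limits,
# consistent with the xi <= 0.45 (0.52) quoted in the text.
xi_1sigma = xi_bound(3.15 + 0.23)  # ~0.46
xi_2sigma = xi_bound(3.15 + 0.46)  # ~0.52
```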
\subsection{Relic Density from Coannihilation}
In this model, the relic density for $\chi_1$ is obtained from the coannihilations $\chi_1 \chi_2 \rightarrow \bar{f} f$. The importance of co-annihilations has been known for a long time \cite{Griest}, and novel applications were recently realised in \cite{BerlinGUT, Coann, OffDiag}. We follow the prescription in \cite{Griest} and important steps are mentioned for completeness. As $\chi_2$ can decay into $\chi_1$ via $\chi_2 \rightarrow \chi_1 \bar{f} f$, the coupled Boltzmann equations for tracking abundances of $\chi_1$ and $\chi_2$ are approximated by a single differential equation for the total number density $n = n_1 + n_2$ where $n_1$ and $n_2$ are the number densities of $\chi_1$ and $\chi_2$ respectively \cite{Edsjo}. During late times, $n$ is dominated by $n_1$ as most of $\chi_2$ has decayed. The Boltzmann equation for $n$ is,
\begin{equation}
\label{boltz:n}
\dfrac{dn}{dt} + 3 H n = - \langle \sigma v \rangle_{eff} (n^2 - \bar{n}^2).
\end{equation}
where bar indicates the equilibrium density and,
\begin{equation}
\langle \sigma v \rangle_{eff} = \sum_{i j} \langle \sigma_{ij} v_{ij} \rangle \frac{\bar{n}_i \bar{n}_j}{\bar{n}^2}.
\end{equation}
\begin{figure}
\includegraphics[width=6 cm]{Relic}
\caption{\label{fig:relic} The annihilation channel for $\chi_1$ whose freeze-out determines the relic density}
\end{figure}
Due to the off-diagonal interactions of $Z^\prime$, processes such as $\chi_1 \chi_1 \rightarrow \bar{f} f $ are forbidden at tree level, and the only annihilation channel is $\chi_1 \chi_2 \rightarrow \bar{f} f $. Thus the effective cross section is given as,
\begin{equation}
\langle \sigma v \rangle_{eff} = 2 \langle \sigma_{12} v_{12} \rangle \frac{\bar{n}_1 \bar{n}_2}{\bar{n}^2} \approx 2 \langle \sigma_{12} v_{12} \rangle \frac{\bar{n}_2 }{\bar{n}_1}
\end{equation}
where the approximation, obtained by using $\bar{n}_2 \ll \bar{n}_1$, is only indicative and we use the full expression for the numerical analysis. Recently, the utilization of such Boltzmann suppression for light DM has been realised in \cite{light:1:forbidden, light:2:fourth}, albeit with a small mass gap ($\delta<1$). In this paper, we consider a significantly large mass gap between the two states ($\delta \sim 2-6$). We use the following expression for the number density,
\begin{equation}
n_i (m, T) = \frac{T}{2 \pi^2} m^2 K_2\left(\frac{m}{T}\right)
\end{equation}
and the thermal averaged cross section in the s-wave limit is given as,
\begin{equation}
\langle \sigma_{12} v_{12} \rangle = \frac{1}{32 \pi} \frac{g_D^4}{m_{Z^\prime}^4} \left(m_1 + m_2\right)^2
\end{equation}
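As an aside, the equilibrium number density and the s-wave cross section above are straightforward to evaluate numerically. The following Python sketch is an illustrative addition (all masses and temperatures in a common set of natural units), encoding both expressions verbatim:

```python
import numpy as np
from scipy.special import kn  # modified Bessel function K_n of integer order

def n_eq(m, T):
    """Equilibrium number density (per dof) of a non-relativistic species,
    n = (T / 2 pi^2) m^2 K_2(m/T), as in the text."""
    return T * m**2 * kn(2, m / T) / (2.0 * np.pi**2)

def sigma_v_12(g_d, m1, m2, m_zp):
    """s-wave <sigma_12 v_12> for chi_1 chi_2 -> f fbar via a heavy Z':
    g_D^4 (m1 + m2)^2 / (32 pi m_Z'^4)."""
    return g_d**4 * (m1 + m2) ** 2 / (32.0 * np.pi * m_zp**4)
```

The $1/m_{Z^\prime}^4$ scaling makes explicit why a heavier mediator suppresses the annihilation rate.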
One can rewrite \eqref{boltz:n} using the abundance $Y = n/s$ where $s$ denotes the total entropy density of the standard model and the dark sector. As $\xi<1$, the entropy is dominated by the SM bath and to a very good approximation,
$$s \approx s_{SM} = \frac{2 \pi^2}{45} g_\ast^s(T_{SM}) T_{SM}^3.$$
The equilibrium abundance is given by,
\begin{equation}
\label{yeq}
\bar{Y}(x, \xi) = \xi^3 \frac{d_\chi}{g_\ast^s(m_\chi/x \xi)} \frac{45}{4 \pi^4} x^2 K_2(x)
\end{equation}
where $x = m_\chi /T_d$ is a measure of the dark sector temperature. The freeze out occurs when,
\begin{equation}
\label{condition}
\left[ \bar{n} \langle \sigma v \rangle_{eff}\right] _{x_f} = H(\xi, x_f)
\end{equation}
i.e. when the interaction rate becomes less than the Hubble rate $H = 1.66 \sqrt{g_\ast(T_{SM})}\, T_{SM}^2/M_{pl} = 1.66 \sqrt{g_\ast(m_\chi/x\xi)}\, m_\chi^2/ (x^2 \xi ^2 M_{pl})$. The present day abundance, $Y_\infty$, is given as,
\begin{equation}
\label{yinf}
Y_\infty = \frac{ c \bar{Y}(x_f, \xi) }{ 1 + \lambda J(x_f) c \bar{Y}(x_f, \xi)}
\end{equation}
where $c, \lambda,$ and $J(x_f)$ are defined in Appendix A. The relic density of DM is given by,
\begin{equation}
\label{relic}
\Omega h^2 = m_\chi s_0 Y_\infty \frac{h^2}{\rho_c} \approx 282 \left( \frac{m_\chi}{\text{keV}} \right) \left( \frac{T_\gamma}{2.75~K}\right)^3 c~\bar{Y}(x_f, \xi)
\end{equation}
where the approximation is true in the limit $ \lambda J(x_f) c \bar{Y}(x_f, \xi) \ll 1$. We use \eqref{condition} to numerically determine the freeze-out temperature and enforce that $x_f \geq 3$ so that the non-relativistic approximation is valid. This restricts us from taking smaller values for $\xi$ and $m_1$. Then we determine the relic density using \eqref{yeq}, \eqref{yinf}, \eqref{relic}, and compare with the observed value from Planck \cite{Planck},
\begin{equation}
\Omega_\chi h^2 = 0.118 \pm 0.002.
\end{equation}
Understanding that such an estimate is only an approximation to solving the complete Boltzmann equations, we conservatively take an error of 5\% in our analysis.
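The estimate in \eqref{relic} can be scripted directly. The sketch below is an illustrative addition: $g_\ast^s$ is frozen to a constant stand-in value (in the paper it is evaluated at the SM temperature $m_\chi/x\xi$), $c = 0.2$ as in Appendix A, and $T_\gamma = 2.75$ K so that the temperature prefactor is unity:

```python
import numpy as np
from scipy.special import kn

def y_eq(x, xi, d_chi=2.0, g_star_s=10.0):
    """Equilibrium abundance Ybar(x, xi) of Eq. (yeq); g_star_s is a constant
    stand-in for g*_s evaluated at the SM temperature m_chi/(x*xi)."""
    return xi**3 * (d_chi / g_star_s) * (45.0 / (4.0 * np.pi**4)) * x**2 * kn(2, x)

def omega_h2(m_chi_kev, x_f, xi, c=0.2):
    """Approximate relic density of Eq. (relic), valid in the limit
    lambda*J*c*Ybar << 1, with T_gamma = 2.75 K."""
    return 282.0 * m_chi_kev * c * y_eq(x_f, xi)
```

Since $x^2 K_2(x)$ is monotonically decreasing, a later freeze-out (larger $x_f$) always lowers the relic density, as used in the discussion of Sec. \ref{res}.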
\subsection{One-Loop self interaction }
One of the features of this model is that the self-interaction of Dark Matter is not a tree level process. At one loop level, there are eight diagrams that contribute to $\chi_1 \chi_1 \rightarrow \chi_1 \chi_1$ when $\chi_2~\text{and}~Z^\prime$ are in the loop. A representative diagram is shown in Fig. \ref{fig:SI}. In \cite{SSIDM, SSIDM2}, the self interaction of inelastic DM was studied in the limit of large $m_\chi$ and light propagator. In this study, we calculate the self interaction in the limit of small $m_\chi$ and heavy propagator. Since the loop particles are significantly heavier than the external ones, we use the decoupling limit where we ignore the external momenta while evaluating the loop. We use Package-X \cite{PackageX} and the Unitary Gauge to calculate the loop function and the cross section. It was checked that the infinities cancel systematically and we are left with a finite part. The self-interaction cross section in the s-wave approximation is given as,
\begin{equation}
\frac{\sigma_{SI}}{m_1} = \frac{9}{256 \pi^5 }g_D{}^8 \frac{ m_1 \left(m_2{}^6+3 m_2{}^2m_{Z'}{}^4+6
m_2{}^2m_{Z'}{}^4 \log \left(\frac{m_2{}^2}{m_{Z'}{}^2}\right) -4
m_{Z'}{}^6\right){}^2}{m_{Z'}{}^4 \left(m_{Z'}{}^2 - m_2{}^2\right){}^6}.
\end{equation}
The calculation is detailed in Appendix B. The velocity dependence of the self interaction is shown in Fig. \ref{fig:RelStr}. It can be seen that the change is very small for the non-relativistic case ($v < 0.1 c$). Therefore, we use the estimate $\frac{\sigma_{SI}(0)}{m_1} = 0.1 - 1~\text{cm}^2/g$ to constrain the parameter space.
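To connect the closed-form cross section above to the $0.1$--$1~\text{cm}^2/g$ target range, one can evaluate it numerically. In the sketch below, the unit-conversion constants ($\hbar c$ in keV$\cdot$cm and the keV-to-gram mass conversion) are illustrative additions; all masses are in keV:

```python
import numpy as np

HBARC_CM_KEV = 1.97327e-8   # hbar*c in keV*cm, so 1/keV -> cm
KEV_TO_G = 1.78266e-30      # 1 keV/c^2 in grams

def sigma_si_over_m(g_d, m1, m2, m_zp):
    """sigma_SI/m_1 from the closed-form expression in the text,
    converted from natural units (keV^-3) to cm^2/g."""
    L = np.log(m2**2 / m_zp**2)
    num = (m2**6 + 3 * m2**2 * m_zp**4 + 6 * m2**2 * m_zp**4 * L - 4 * m_zp**6) ** 2
    den = m_zp**4 * (m_zp**2 - m2**2) ** 6
    val = 9.0 * g_d**8 * m1 * num / (256.0 * np.pi**5 * den)  # keV^-3
    return val * HBARC_CM_KEV**2 / KEV_TO_G                   # cm^2/g
```

For $m_{Z^\prime} \gg m_2$ the bracket is dominated by the $-4 m_{Z^\prime}^6$ term and $\sigma_{SI}/m_1$ falls roughly as $1/m_{Z^\prime}^4$, which is why the heavier-$Z^\prime$ edge of the allowed band in Fig. \ref{fig:result} corresponds to the smaller self-interaction.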
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{feynSI}
\caption{\label{fig:SI} The Feynman diagram for the self interaction of DM. There are seven other "crossed" diagrams.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=14cm]{RelStrSI}
\caption{\label{fig:RelStr} The relative strength of self interactions i.e. $\sigma_{SI}(v) / \sigma_{SI}(0)$ is shown as a function of velocity for various choices of parameters. }
\end{figure}
\subsection{A comment on the light fermion}
One of the crucial assumptions of this model is the existence of a massless fermionic species. One possibility is that it is part of the radiation component today, although the strong self-interactions would prevent it from being a Hot Dark Matter candidate. The other interesting possibility is that it is a sterile neutrino which also mixes with the active neutrinos. It has been pointed out that, in the presence of self-interactions, the sterile neutrino acquires a large thermal mass in the early universe and the mixing is suppressed \cite{sterile:Dasgupta, sterile:Hannestad}. This allows one to have larger mixing angles in the present era and helps resolve some of the short-baseline neutrino anomalies \cite{LSND}. However, to avoid DM-neutrino scattering in the early universe, we require much smaller vacuum mixing angles that cannot explain these anomalies, but can be probed in future experiments. \\
The role of the light fermion in cosmology would be similar to that of dark radiation. The most stringent bounds on dark radiation come from the BBN $N_{eff}$, which we have already considered. As this light fermion is part of a secluded and colder sector, it plays very little role in structure formation. \\
\section{Results and Discussion}
\label{res}
As pointed out before, we take $g_D \approx 1$ for our analysis. This is a domain where the interactions are strong but perturbativity still holds. In \cite{Yeche}, the bound on the mass of Warm Dark Matter from Lyman-$\alpha$ is determined to be $M_{WDM} \geq \text{few keV}$. We only consider $m_1 > 10$ keV in this work. We analyse the $\delta - m_{Z^\prime}$ parameter space for various masses $m_1$ between $10\text{ keV}$ and $1 \text{ MeV}$ that give the correct relic density and self-interactions. As the self-interactions do not depend on $\xi$, the limits are the same for the two benchmark cases. It is to be noted that a heavier $Z^\prime$ is associated with a smaller self interaction. \\
The dependence of the relic density on $\xi$ can be understood simply as follows. From \eqref{yeq} one can see that $\bar{Y}$ is a monotonically decreasing function of $x$. To compensate for a small $\xi$, one needs a smaller $x_f$. This means that the effective cross section should be smaller, such that freeze-out occurs earlier. This smallness is brought about by a larger Boltzmann suppression due to a heavier $m_2$. In an analogous way, one can argue the dependence of the relic density on $c$. \\
As the DM is part of a secluded sector, one does not anticipate any signals in direct detection experiments or at colliders. This is consistent with the present status of these terrestrial experiments. Such a dark matter can only have gravitational signatures and can be probed through structure formation. Due to the self interactions, the DM behaves as WDM and is consistent with the present understanding. In the future, as the limits on the BBN $N_{eff}$ are tightened, there will be less parameter space for the model to thrive.
\begin{figure}
\includegraphics[width= 14cm]{Result2}
\caption{\label{fig:result} The allowed parameter space for $m_\chi= $ 10 keV (Blue), 100 keV (Green), and 1 MeV (Red) is shown for benchmark models $\xi = 0.5$ (left) and $\xi = 0.3$ (right). The upper (lower) limit of $m_{Z^\prime}$ corresponds to $\sigma_{SI}/m_1 = 0.1 (1.) \text{ cm}^2/g$.}
\end{figure}
\newpage
\section{Conclusion}
\label{conc}
In this paper, we have seen that one can get the correct relic density and appropriate self-interactions for a sub-MeV Dark Matter if it has strong off-diagonal interactions with a heavier spin-1 boson. The annihilation cross section is Boltzmann-suppressed and the self interaction is loop-suppressed thus allowing the mass scales to go as low as $\mathcal{O}(10)$ keV while keeping the gauge coupling constant naturally large. Such a light DM must be part of a decoupled sector at a lower temperature in order to be consistent with BBN.
\subsection*{Appendix A: Calculation of Relic Abundance}
The calculation of the relic abundance of dark matter has been excellently treated in the book "The Early Universe" by E. W. Kolb and M. S. Turner \cite{KolbTurner}. We follow the general prescription laid out by them while making the necessary changes due to the temperature asymmetry. A similar calculation is performed in \cite{hidden:original}; the only difference is that we use the hidden sector temperature to define $x$ while using the SM entropy to define the abundance $Y$. Such a definition is advantageous in models where the hidden sector entropy is not conserved explicitly (e.g. when a minor component decays into SM particles during late times). In such scenarios, the total entropy density, which is mainly SM entropy, is a good proxy for the dilution effect. Otherwise, the treatment is analogous and one can use either definition. \\
The Boltzmann equation for the total number density \eqref{boltz:n} can be conveniently expressed in terms of the abundance
\begin{equation}
Y = \frac{n}{s}
\end{equation}
which is free from the dilution due to expansion. Note that $s$ denotes the total entropy density of the dark and visible sectors. However, due to temperature asymmetry, one can ignore the contribution from the dark sector. Also note that, since the total entropy is conserved, $\dot{s} + 3 H s = 0$. During the radiation dominated era, the scale factor $R \sim t^{1/2}$ which gives us,
\begin{equation}
\dfrac{dx}{dt} = \frac{\tilde{H}(m_\chi, \xi) }{x}
\end{equation}
where $x = m_\chi/ T_d$ and in terms of Planck Mass $M_{pl} = 1.22 \times 10^{25}$ keV
\begin{equation}
\tilde{H}(m_\chi, \xi) = 1.66 \sqrt{ g_\star \left(\frac{m_\chi}{x \xi}\right)} \frac{1}{\xi^2} \frac{m_\chi^2}{M_{pl}}.
\end{equation}
Using \eqref{yeq} and,
\begin{equation}
\tilde{s}(m_\chi, \xi) = \frac{2 \pi^2}{45} g_\star^s \left(\frac{m_\chi}{x \xi}\right) \frac{m^3}{\xi^3},
\end{equation}
the Boltzmann equation for abundance is,
\begin{equation}
\label{boltz:y}
\dfrac{dY}{dx} = - \frac{ \tilde{s}}{\tilde{H}} \frac{\langle \sigma v \rangle_{eff}}{x^2} \left( Y^2 - \bar{Y}^2\right).
\end{equation}
Note that the temperature (hence, $x$) dependence in the effective cross section comes only from the Boltzmann factor and hence one can write,
\begin{equation}
\langle \sigma v \rangle_{eff} = \sigma_0 f(x, \delta)
\end{equation}
where,
\begin{equation}
f(x, \delta) =\frac{ (1 + \delta)^2 K_2(x) K_2( (1 + \delta) x)}{(K_2(x) + (1 + \delta)^2 K_2( (1 + \delta) x))^2}.
\end{equation}
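The Boltzmann factor $f(x,\delta)$ has the expected limits: for degenerate states ($\delta \to 0$) both Bessel factors coincide and $f \to 1/4$, while a large mass gap Boltzmann-suppresses it at a given $x$. A quick numerical check (an illustrative addition, not part of the derivation):

```python
from scipy.special import kn  # modified Bessel function K_2

def f_boltz(x, delta):
    """Boltzmann factor f(x, delta) entering <sigma v>_eff, as defined above."""
    k_light = kn(2, x)
    k_heavy = (1.0 + delta) ** 2 * kn(2, (1.0 + delta) * x)
    return k_light * k_heavy / (k_light + k_heavy) ** 2

# Degenerate limit: f(x, 0) = K_2(x)^2 / (2 K_2(x))^2 = 1/4 for any x;
# increasing delta suppresses the heavy-state factor and hence f.
```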
Using the dimensionless quantity $\lambda = \sigma_0 \tilde{s} / \tilde{H}$ one can simplify \eqref{boltz:y} as,
\begin{equation}
\label{boltz:y2}
\dfrac{dY}{dx} = - \lambda \frac{f(x, \delta)}{x^2} \left( Y^2 - \bar{Y}^2\right).
\end{equation}
which can be further simplified using the difference $\Delta = Y - \bar{Y}$ and approximately solved when $x \gg x_f$ and $\Delta \approx Y \gg \bar{Y}$ which gives,
\begin{equation}
\label{boltz:delta}
\Delta^\prime \approxeq - \lambda \frac{f(x,\delta)}{x^2} \Delta^2.
\end{equation}
Upon integration from freeze-out to the present day of \eqref{boltz:delta}, we get,
\begin{equation}
\frac{1}{Y_\infty} = \frac{1}{\Delta_\infty} = \frac{1}{\Delta_f} + \lambda \int_{x_f}^{\infty} \frac{f(x, \delta)}{x^2} dx = \frac{1}{\Delta_f} + \lambda J
\end{equation}
and the $J$ integral can be performed numerically once $x_f$ is determined. It was shown in \cite{hidden:original} that the approximation
\begin{equation}
\Delta_f = c \bar{Y}(x_f, \xi)
\end{equation}
agrees with the numerical solution of \eqref{boltz:y2} if $c = 0.2~(0.5)$ for $\xi = 0.3~(0.8)$. This gives us the final result,
\begin{equation}
\label{app:yinf}
Y_\infty = \frac{ c \bar{Y}(x_f, \xi) }{ 1 + \lambda J(x_f) c \bar{Y}(x_f, \xi)}
\end{equation}
which was shown in \eqref{yinf}. For our analysis, we assume $c = 0.2$ and note that any change in $c$ will proportionately scale the relic density.
\subsection*{Appendix B: Calculation of Self Interaction}
We calculate the amplitude for the process,
\begin{equation}
\chi_1 (p_1) + \chi_2 (p_2) \rightarrow \chi_1 (k_1) + \chi_2 (k_2)
\end{equation}
where $p_i$ and $k_i$ are the four momentum of the particles. There are eight Feynman diagrams for this process which are related by crossing to the one shown in Fig. \ref{fig:SI}. In the decoupling limit, the amplitude is
\begin{equation}
\mathcal{M}_1 \sim \frac{g^4 \left[ \bar{u}(k_1) \gamma^\mu (\slashed{q} + m_2) \gamma^\alpha u(p_1)\right] \left[ \bar{v}(p_2) \gamma^\beta (\slashed{q} + m_2) \gamma^\nu v(k_2)\right]P_{\alpha \beta} P_{\mu \nu}}{(q^2 - m_{Z^\prime}^2) (q^2 - m_{2}^2)^2}
\end{equation}
where $q$ is the loop momentum and $P_{\mu \nu}$ in the Unitary gauge is given by,
\begin{equation}
P_{\mu \nu} = - g_{\mu \nu} + \frac{q_ \mu q_\nu}{m_{Z^\prime}^2}.
\end{equation}
The other crossed amplitudes ($\mathcal{M}_2 \rightarrow \mathcal{M}_6$) are related to $\mathcal{M}_1$ by $\beta \leftrightarrow \mu$, $\beta \leftrightarrow \nu$, $k_1 \leftrightarrow k_2$. There are two further diagrams pertaining to the colloquial "s-channel" due to the Majorana nature of the incoming fermions. The relative signs of the graphs must be taken correctly for the cancellation of the infinities. One can evaluate the loop integral using Package-X or any other alternative. The final result can be simply expressed in the $\{S, V, T, A, P \}$ basis as,
\begin{equation}
\mathcal{M} =g^4 \sum_{i = S,V,T,A,P}^{} \left( C_i \left[ \bar{u}(k_1) \Gamma_i u(p_1)\right] \left[ \bar{v}(p_2) \Gamma_i v(k_2)\right] + C^\prime_i \left[ \bar{v}(p_2) \Gamma_i u(p_1)\right] \left[ \bar{u}(k_2) \Gamma_i v(k_1)\right] \right)
\end{equation}
Note that the mixed terms (e.g $V-A$) are absent. The only non-zero coefficients are
\begin{equation}
C_A = \frac{6 m_2{}^2 m_{Z'}{}^2 \log
\left(\frac{m_2{}^2}{m_{Z'}{}^2}\right)}{\left(m_2{}^2-m_{Z'}{}^2\right){}^3}-\frac{3 \left(m_2{}^4-m_2{}^2 m_{Z'}{}^2+2 m_{Z'}{}^4\right)}{m_{Z'}{}^2
\left(m_2{}^2-m_{Z'}{}^2\right){}^2}
\end{equation}
\begin{equation}
C_T = -\frac{m_2{}^2 \left(m_2{}^2-3 m_{Z'}{}^2\right)}{m_{Z'}{}^2
\left(m_2{}^2-m_{Z'}{}^2\right){}^2}-\frac{2 m_2{}^2 m_{Z'}{}^2 \log
\left(\frac{m_2{}^2}{m_{Z'}{}^2}\right)}{\left(m_2{}^2-m_{Z'}{}^2\right){}^3}
\end{equation}
\begin{equation}
C^\prime_S = \frac{6 m_2{}^2 \left(m_2{}^2-3 m_{Z'}{}^2\right)}{m_{Z'}{}^2
\left(m_2{}^2-m_{Z'}{}^2\right){}^2}+\frac{12 m_2{}^2 m_{Z'}{}^2 \log
\left(\frac{m_2{}^2}{m_{Z'}{}^2}\right)}{\left(m_2{}^2-m_{Z'}{}^2\right){}^3}
\end{equation}
\begin{equation}
C^\prime_A = \frac{3 m_2{}^2 m_{Z'}{}^2 \log
\left(\frac{m_2{}^2}{m_{Z'}{}^2}\right)}{\left(m_2{}^2-m_{Z'}{}^2\right){}^3}-\frac{3 \left(m_2{}^4-m_2{}^2 m_{Z'}{}^2+2 m_{Z'}{}^4\right)}{2 m_{Z'}{}^2
\left(m_2{}^2-m_{Z'}{}^2\right){}^2}
\end{equation}
In terms of these coefficients, the non-relativistic squared amplitude is
\begin{equation}
\overline{|\mathcal{M}|^2} = 16 m_1^4 \left( 3 C_A + 2 C^\prime_A - 6 C_T \right)^2 - 16 m_1^4 v^2\left(C_A + 2 C^\prime_A - 6 C_T \right) \left(3 C_A + 2 C^\prime_A - 6 C_T \right)
\end{equation}
and the transfer cross section for self interaction is
\begin{equation}
\sigma_{SI} = \int d\Omega \, (1 - \cos\theta) \, \frac{d \sigma}{d \Omega}, \qquad \frac{d \sigma}{d \Omega} = \frac{1}{64 \pi^2 (4 m_\chi^2)} \overline{|\mathcal{M}|^2}, \qquad \sigma_{SI} \approx \frac{1}{64 \pi m_\chi^2}\overline{|\mathcal{M}|^2}
\end{equation}
\subsection*{Appendix C: Possible UV Completion}
In this section we consider a possible UV completion of the simplified model presented above. The standard model gauge group is extended by a $U(1)_D$ symmetry. We add four fermions and a scalar to the model, all of which are singlets under the SM gauge symmetry. Their charges under the new symmetry are given in Table I.
\begin{table}[H]
\centering
\begin{tabular}{|c | c | c | c | c | c |}
\hline
Fields & $\psi_1$ & $\psi_2$ & $f_1$ & $f_2$ & $\phi$ \\
\hline
$Q_D$ & 1 & -1 & $a$ & $-a$ & 2 \\
\hline
\end{tabular}
\caption{The new fields in the dark sector and their charges under $U(1)_D$ symmetry.}
\end{table}
The above choice of charges assures that the model is anomaly free. One can choose $a \approx 1$ but $a \neq 1$ to ensure that $\phi$ does not have a Yukawa-like interaction with $f_1$ or $f_2$. The most general Lagrangian for the dark sector is,
\begin{align}
\mathcal{L} &= \bar{\psi}_1 ( \slashed{D} - m) \psi_1 + \bar{\psi}_2 ( \slashed{D} - m) \psi_2 + \bar{f}_1 ( \slashed{D} - m_f) f_1 + \bar{f}_2 ( \slashed{D} - M_f) f_2 \\
&+ y \phi \bar{\psi}_1 \psi_2 + h.c. \\
&+ (D_\mu \phi)^\dagger (D^\mu \phi) - \frac{1}{4} X^{\mu \nu}X_{\mu \nu} \\
&+ \frac{\epsilon}{4}X^{\mu \nu}F_{\mu \nu} + \eta \phi^\dagger \phi H^\dagger H \\
&- \mathcal{V}(\phi)
\end{align}
where $H$ is the SM Higgs' field, $X_{\mu \nu} = \partial_\mu Z^\prime_\nu - \partial_\nu Z^\prime_\mu$ is the field strength for the $Z^\prime$, and
\begin{equation}
D_\mu = \partial_\mu - i g_D Q_D Z^\prime_\mu
\end{equation}
is the gauge covariant derivative. To begin with, we consider the limit where $\epsilon \rightarrow 0$ and $\eta \rightarrow 0$, which is motivated by the assumption that the dark sector is thermally secluded from the visible sector. Also, these interactions cannot be generated via loops, which allows us to take their coefficients to be vanishingly small. \\
The potential for the new scalar field has the usual form considered for spontaneous symmetry breaking.
\begin{equation}
\mathcal{V} (\phi) = - \mu^2 \phi^\dagger \phi + \lambda ( \phi^\dagger \phi)^2
\end{equation}
The symmetry breaking not only gives mass to the new gauge boson, but also generates an off-diagonal mass term from the Yukawa-like interaction. In the $\psi_1 - \psi_2$ basis, the mass matrix is,
\begin{equation}
\hat{M} = \begin{pmatrix}
m & y v_\phi \\
y v_\phi & m \\
\end{pmatrix}
\end{equation}
which has eigenvalues $m \pm y v_\phi$. One can go to the mass eigenstates by the transformation,
\begin{equation}
\psi_1 \rightarrow \frac{\chi_1 + \chi_2}{\sqrt{2}} \quad \text{and} \quad \psi_2 \rightarrow \frac{\chi_1 - \chi_2}{\sqrt{2}}.
\end{equation}
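The diagonalisation can be verified numerically; the sketch below uses illustrative values for $m$ and $y v_\phi$ (not taken from the paper) and checks that the stated rotation produces the eigenvalues $m \pm y v_\phi$:

```python
import numpy as np

# Mass matrix in the (psi_1, psi_2) basis for illustrative values m, y*v_phi.
m, yv = 100.0, 30.0
M = np.array([[m, yv], [yv, m]])

# Eigenvalues are m - y*v_phi and m + y*v_phi.
evals = np.linalg.eigvalsh(M)

# The rotation psi_{1,2} -> (chi_1 +/- chi_2)/sqrt(2) diagonalises M:
R = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
D = R.T @ M @ R  # diag(m + y*v_phi, m - y*v_phi)
```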
The Lagrangian for $\chi_1$ and $ \chi_2$ is,
\begin{align}
\mathcal{L} &= \bar{\chi}_1 ( \slashed{\partial} - m_1) \chi_1 +\bar{\chi_2} ( \slashed{\partial} - m_2) \chi_2 + i g_D ( \bar{\chi}_1 \slashed{Z}^\prime \chi_2 + \bar{\chi}_2 \slashed{Z}^\prime \chi_1 ) + ...
\end{align}
where the ellipses denote interactions with the Higgs scalar of the dark sector. In terms of the free parameters, one can fix $v_\phi$ given the mass of the $Z^\prime$ boson. However, by varying $\lambda$ one can make the scalar sufficiently heavy such that it does not affect the low scale dynamics. Also, one can speculate that if there are other heavy fields in the dark sector, there may be large radiative corrections to the scalar mass. The mass gap between the two states is determined by the Yukawa coupling ($m_2 = m_1 + 2 y v_\phi$) and can be considered a free parameter. In the limit $M_f \gg m_1, m_2$, this model essentially reduces to the one considered in the paper.
\newpage
\noindent \textbf{Acknowledgements:}
The author would like to thank Prof. Subhendra Mohanty and Prof. Namit Mahajan for several useful discussions and their insights. The author is also grateful to Arnab Dasgupta for going through the manuscript and providing valuable suggestions. The author also thanks the anonymous referee for pointing out important constraints.
(function() {
  const a = document.querySelector("#demo");
  const b = document.querySelector("#n");
  const c = document.querySelector("#u");

  let iceCream = { flavor: "chocolate" };
  let added = { flavor: "vanilla", sprinkles: "lots" };

  // Show the original object before any merging.
  a.innerText = JSON.stringify(iceCream);

  // Underscore.js: _.defaults(iceCream, added) mutates iceCream in place,
  // filling in only the keys it lacks ("sprinkles"); the existing
  // "flavor" is kept, and the mutated object is returned.
  const no = _.defaults(iceCream, added);
  c.innerText = JSON.stringify(no);

  // Plain JS: mutating `added` afterwards does not touch `iceCream`,
  // since the two are separate objects.
  added.flavor = "chocolate";
  b.innerText = JSON.stringify(iceCream);
}())
Our Pre-Construction Navigational Checklist (PCNC) serves as a resource outline for the typical steps normally taken prior to construction of the home itself. The PCNC will walk you step by step through the design phase, from identifying the desired lot through a set of Engineered Blueprints ready to build.
The PCNC can be completed in a few months but may take longer depending on the municipality. You may already have several line items completed, in which case we can easily progress to the next step. The goal of the PCNC is not to be a daunting list but to serve as a transparent checklist and resource that helps you understand what needs to happen next.
We review our building process with you and discuss the timeline for your build.
We will help you identify land and/or talk about your geographical areas of interest.
If you have a design in mind already, we will discuss your wish list and priorities for your home design.
Finally in this phase, we will look at your budget as well as direct you toward qualifying for construction loan financing if needed.
Provide design-build insights for optimal home site location.
Together we will review your simple floor plan sketch and/or your architectural drawings.
We will obtain your authorization to have survey measurements performed on your lot.
We will then hire engineers to test the soil (if applicable).
Environmental Health applications / well & septic (if applicable) will be submitted.
Lastly we will review impervious surface calculations and zoning.
We review the hard lined floor plans and preliminary site plan with you and confirm the final design direction.
We discuss interior material, designs and furniture needs throughout your home.
Lastly we will review and fine-tune all specifications such as Siding/Brick/Shingles Veneer, Exterior Doors/Windows, Garage Door Design, Shutters, etc., that will assist us in narrowing down your budget figures.
Your project may require a property survey identifying boundaries, locations of existing structures, setbacks and areas of impervious coverage.
Survey work includes field work and development of a survey plan, which is then used as necessary to provide data for any engineering work that may be required by the municipality for grading and storm water management.
During this phase, final plans will be reviewed and confirmed.
We will guide you in your budget allowance figures for those items such as Cabinets, Counter-tops, Appliances, Flooring, Lighting, and Plumbing Fixtures.
After confirming all details of your build project, we will execute the contract and specification agreement.
Also we will discuss the utilization of our Online Construction Management Software that will further enhance your home building experience.
We will complete permit applications and make submissions to municipality for all building and mechanical permits required.
An allowance is included in this agreement for the permit fees.
Independent municipal review costs will be billed to client as needed for storm water management and engineering review.
Kirk Jellerson is a former American football player and coach. He served as the interim head football coach at Utica College in 2007 and as the head football coach at Whittier College in 2011.
Playing career
After graduating from St. Paul High School in Santa Fe Springs, California, where he was a member of California state runner-up team in 1978, he went on to play football at Cerritos College and Weber State University. He earned a bachelor's degree from California State University, Long Beach in 1988.
Coaching career
Jellerson began his coaching career at Cerritos College, where he was the linebackers coach from 1983 to 1988. During his time at Cerritos, the team won the South Coast Championship in 1986 and played in three bowl games. He then moved on to Fullerton College, where he was an assistant from 1989 to 1992. After one season as an assistant coach at Whittier College in 1993, he went to Chapman University, where he was an assistant from 1994 to 1996. During his time at Chapman, the Panthers went 26–4–1. Jellerson was the defensive coordinator at Plymouth State University in 1999 and at Kean University from 2000 to 2002. From 2004 to 2006, Jellerson served as the assistant head coach at Utica College before moving on to Western Washington University in 2008. In 2008, Western Washington won the Rotary Bowl championship before the program was discontinued. In 2009 and 2010, Jellerson was the defensive coordinator at Whittier College. On November 18, 2010, he was named the interim head coach at Whittier College. On February 21, 2011, he was named the head coach at Whittier College. He was relieved of his head coaching duties after the 2011 season.
Head coaching record
References
1960s births
Living people
Chapman Panthers football coaches
Kean Cougars football coaches
People from Santa Fe Springs, California
Plymouth State Panthers football coaches
Utica Pioneers football coaches
Weber State Wildcats football players
Western Washington Vikings football coaches
Whittier Poets football coaches
Junior college football coaches in the United States
Cerritos Falcons football players
California State University, Long Beach alumni
Sportspeople from Los Angeles County, California
Players of American football from California
Publisher: Rockstar Games
Use of Drugs and Alcohol
A battle on the streets of New York. The armies of the night number 60,000 strong, and tonight they're all after The Warriors, a street gang wrongly accused of killing a rival gang leader. The Warriors must make their way from one end of New York to their turf on the other side of the city. All that stands between The Warriors and their survival are 20 miles and thousands of street gang members. The army of gangs owns the streets and there's no turning back; they must fight for their lives and learn the meaning of loyalty as danger and uncertainty emerge from the city night.
Rockstar Games proudly presents The Warriors for the PlayStation®2 based on the 1979 Paramount Pictures cult classic movie. Developed by Rockstar Toronto, The Warriors expands the stylized cinematic journey of the film into a gritty interactive experience set in 1970s New York.
The Club for Growth PAC already has spent $17,600 against Stony Rushing, a Union County commissioner and one of eight other Republicans in the special congressional primary.
The Club is the second national group to come into the North Carolina race.
The National Realtors Association PAC is spending $1.3 million on TV, radio and digital ads on behalf of Republican Leigh Brown, a Cabarrus County real estate broker. That's the spending record for a single group in the 9th District.
The Club for Growth and its affiliate groups have their own record of spending big in North Carolina.
In 2018 they spent $446,000 against McCready and $260,000 against Democrat Kathy Manning in the 13th District, according to the Center for Responsive Politics. Manning lost to Republican Rep. Ted Budd, who got $115,000 from Club affiliates.
As with Rushing, the groups sometimes spend against Republicans. In 2016 they spent $788,000 against then U.S. Rep. Renee Ellmers, who had been drawn into a district with fellow GOP Rep. George Holding. She lost the primary.
Bishop already has a financial advantage in the race. Reports filed this month showed he'd raised six times as much as the next candidate, former Mecklenburg Commissioner Matthew Ridenhour.
Early voting starts Wednesday for the May 14 primary.
Cabbage soup isn't just for St. Patrick's Day. Hearty and nourishing, this "comfort food" soup deserves a spot on your recipe list year-round and is perfect on a cold winter's night. Made with cabbage, chicken stock, onions, carrots and tomatoes, it's a rather simple recipe to follow. We top ours with bacon, and that little touch gives it extra flavor.
This is our original recipe for a Hearty Cabbage Soup.
1. Gather the ingredients and prepare the vegetables.
2. In a large Dutch oven, cook bacon until crispy, then remove bacon. Add onions to bacon fat and cook until almost tender. Add garlic to onions and cook for one minute. Add celery and carrots, saute for 10 minutes.
3. Add broth and Bay leaves and bring to a boil. Once boiling, cover and simmer for 15 to 20 minutes.
4. Add oregano and basil and simmer for 2 minutes. Add tomatoes and cabbage. Bring to a boil and reduce heat. Cover and cook until the cabbage is tender. Season to taste with salt and black pepper.
5. Serve in soup bowls and add bacon bits as garnish.
\section{Introduction}
\indent
Matrix models can be reformulated as representing stochastic triangulated
surfaces
and are thus interpreted as quantum gravity theories. They are treated in
the "double scaling limit" $N \rightarrow \infty , g \rightarrow g_c$
\cite{1}-\cite{3}.
$O(N)$ vector sigma models can be understood in a similar manner as describing
statistical ensembles of branched polymers \cite{4}-\cite{7}. These models can
also be
submitted to a double scaling limit. This is done with $N$-renormalization
group
techniques based on exact recursion relations in $N$ \cite{4}-\cite{10}.
Instead we propose to start from saddle point integrals. Partition functions
are then to leading order represented in the form of generalized Airy functions
($D = 0$, see (\ref{63})) or as a partition function with a new field theoretic
action
($D > 0$, see (\ref{91})). We shall refer to the function (respectively:
functional) in the
exponential as "Airy action" (respectively: "Airy field action").
The saddle point integrals arise from singularities in the original action
when the limit $N \rightarrow \infty$ is performed. Such singularities can
be classified \cite{11,12} and form $s$-dimensional families. Each familiy
possesses $s$ moduli as continuous parameters. If $s = 0$, the families are
discrete and are grouped by their symmetry into A, D and E series. The
A-series can be realized in single-vector sigma models and is the object of
our interest in this article. It has been shown \cite{13,14} that D and E
series
can be realized by two-vector sigma models. In the field theoretic literature
only A-series singularities have been identified before us (the triple scaling
cases (i) and (ii) in \cite{15} are separable as $A_1 \times A_2$, respectively
$A_1 \times A_3$).
By application of diffeomorphisms a singularity can be brought to canonical
form. This canonical form contains the full information defining a
"universality
class of multicritical behaviour" ($A_k$: k-critical). Our aim in this article
is to extract universal quantities such as critical indices
and the universal part of the beta functions for the whole A-series
in dimensions $0 \le D < 4$. For $D > 0$ two kinds of boundary conditions
are considered: finite cube periodic boundary conditions and infinite volume.
The spacetime dimension $D$ is interpolated whenever possible. We obtain
closed algebraic expressions in each case (no infinite sums or integrals).
Our treatment of $D \geq 2$ field theories is restricted to "naive double
scaling",
i.e. renormalizations are neglected. These imply logarithmic modifications,
namely
$N^{\sigma}$ is multiplied with a polynomial in $\log N$ \cite{8,9}. We expect
that
these modifications can also be calculated explicitly (i.e. in terms of
algebraic
expressions) for all cases $A_k$. Some preliminary discussions are presented in
\cite{9} (e.g. introduction of counter terms). These logarithmic modifications
are
also of interest for mathematics, they go beyond the concepts of Arnold's
school.
The authors of \cite{7,9} treated the cases $A_k, \; k \geq 3$, incorrectly.
They
eliminated Gaussian degrees of freedom connected with nonvanishing eigenvalues
of
the Hessian by orthogonal projection along the zero mode eigenvectors. The
orthogonal
structure is produced by the Hessian itself, which loses its meaning at higher
orders
of the Taylor expansion. In fact, it is not difficult to see that additional
"curvature terms" arise first at fourth order ($k=3$). The correct method is
explained
in the text. It is based on the "splitting lemma" (\cite{12}, Theorem 4.5)
which is
proved by the implicit function theorem. Gaussian degrees of freedom, which
each
belong to an $A_1$ singularity, have to be integrated out before the relevant
singularity is isolated.
The Airy functions are central and unambiguously derived in our approach. In
the
$N$-renormalization group approach a constrained Airy function depending on
only
one variable is obtained by solving a differential equation. Namely, let the
sum
in (\ref{63}) run over $1 \leq n \leq k-1$ and set
\[
\zeta = -\zeta_1, \; \zeta_2 = \zeta_3 = \cdots = \zeta_{k-1} = 0, \; \epsilon
= +1
\]
then the resulting function $Y(\zeta)$ satisfies
\[
\left( \zeta- \left( \frac{d}{d\zeta} \right)^{k} \right) Y(\zeta) = 0.
\]
In the same way one can derive a system of differential equations for the
general
case \cite{13,14}.
Asymptotic expansions for large $\zeta$ play an important role in the
$N$-renormalization
group approach \cite{7,10,15}. For the singularity $A_2$ one can use the known
expansions of the standard Airy functions Ai and Bi \cite{16}. For the
generalized
Airy functions of the singularities $A_k, \; k \geq 3$, different orderings of
the
large arguments $\{ \zeta_l \}$ are possible, each one obtained by repeated application
of
saddle point integrations along a chain of reductions
\[
A_k \rightarrow A_{k-1} \rightarrow \cdots \rightarrow A_2 \rightarrow A_1
\]
implying a different asymptotic expansion. Degenerate reduction chains with
intermediate singularities skipped can also occur.
In Section 2 the single-O($N$)-vector sigma model is formulated and transformed
into an effective field theory of two scalar fields. The Hessian of this
effective field theory is the basis for the discussion of singularities. Its
corank must be one in order to admit singularities of type $\{A_k\}^{\infty}
_2$. It was shown in \cite{14} that sigma models containing $r$ O($N$)-vectors
can be formulated such that the Hessian is of corank $r$.
The elimination of the Gaussian degrees of freedom which are all degrees
except one for $\{A_k\}^{\infty}_2$, is a major algebraic issue. It is
formulated and solved in Section 3 for finite volume. Moreover we calculate
critical coupling constants and the position of the saddle point ($r_0$ or
b(0)). Both these results can be carried over to the infinite volume case
(Section 4).
The deviations of the coupling constants from their critical value map to
lowest order linearly on the deformation space. This linear map is
denoted as "susceptibility matrix". Its calculation is the major topic of
Section 4. It can also be inverted explicitly. The double scaling limit
is defined as the combined limit when $N$ tends to infinity and the coupling
constants approach their critical values. In detail it involves the
susceptibility matrix and the critical indices. Both enter also the linear
terms of the beta functions which can thus be given explicitly for all
$\{A_k\}^{\infty}_2$.
In Section 5 we study the case when the sigma model is carried by the whole
of $D < 2$ dimensional space time. The momenta form a continuous spectrum.
A momentum scale $\Lambda$ which is tied to the renormalized mass
of the theory separates small momenta $\{| p | < \Lambda \}$ from large
momenta $\{| p | > \Lambda \}$. The latter belong to Gaussian degrees
of freedom that are integrated out, whereas the former are additional
deformation parameters which under double scaling induce the kinetic energy
term in the Airy field action. The critical indices are modified but the
susceptibility matrix remains unchanged as compared with Section 4.
If the sigma model is carried by infinite spacetime of dimensions $2 \le
D < 4$, the double scaling limit can be performed provided the field theory
resulting from the saddle point integral is renormalizable. Counter terms
have to be introduced \cite{17,9} in the course of the limit and the quantity
$N \Lambda$
is the ultraviolet cutoff. This is studied in Section 6. If we use dimensional
regularization for $2 < D < 4$, all critical objects can be shown to be
analytic continuations of the corresponding quantities for $0 < D < 2$.
However, for $D = 2$ we use a subtraction scheme depending on a mass parameter
$\mu ^2$ and all critical quantities are recalculated.
The type $A_k$ of the
singularity is restricted by renormalizability to
\[
k \le \frac{D+2}{D-2}.
\]
Surprisingly we observe that for "exceptional dimensions"
\[
D_n = 2 \frac{n}{n-1}, \quad n \in \{ 3,4,5,... \}
\]
$k$ is further restricted by
\[
k \le \left\{ \begin{array}{l}
n-1, \mbox{n odd,} \\
n-2, \mbox{n even.} \end{array} \right.
\]
This result is a consequence of the analytic form found for the critical
quantities.
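For orientation, the interplay of the two bounds can be checked numerically (an illustrative sketch, not part of the derivation; the exceptional restriction is quoted from the text):

```python
from fractions import Fraction

def generic_bound(D):
    # renormalizability bound from Section 6: k <= (D+2)/(D-2)
    return (D + 2) / (D - 2)

def exceptional_bound(n):
    # stronger restriction at the exceptional dimensions quoted in the text
    return n - 1 if n % 2 == 1 else n - 2

for n in range(3, 10):
    D_n = Fraction(2 * n, n - 1)             # exceptional dimension D_n = 2n/(n-1)
    assert generic_bound(D_n) == 2 * n - 1   # the generic bound evaluates to 2n-1
    assert exceptional_bound(n) < 2 * n - 1  # the extra restriction is strictly stronger
```

At the exceptional dimensions, then, only roughly half of the singularities allowed by power counting actually survive.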
In Section 7 we return to the unstable field theories resulting from saddle
point integrals. Though we make suggestions of how to ascribe a meaning to
them, their properties are unclear. Nevertheless, realistic systems of
statistical mechanics may be described by them and it would be wrong to
neglect them.
\section{The model}
We study conventional sigma models with the action
\begin{equation}
S = \int d^{D}x \left [ \frac12 (\partial _\nu \vec{S}) (\partial _\nu
\vec{S}) + \frac12 \beta^2 \vec{S}^2 + U(\vec{S}^2) \right ]
\label{1}
\end{equation}
$(\vec{S} \in {\bbbr}_N)$
\noindent with the potential
\begin{equation}
U(\sigma) = \sum ^{\infty} _{r=2} \frac{f_r}{r} \sigma^r .
\label{2}
\end{equation}
We shall interpolate the dimension $D$. For the purpose of our investigation it
is not
relevant whether the series (\ref{2}) is finite, analytic or formal.
By a standard functional Fourier transformation and performing some of the
functional integrations we can transform the action (\ref{1}) into an
effective action
\begin{eqnarray}
S_{\mbox{\scriptsize eff}} & = &\int d^Dx \left [ U(\sigma (x)) - i \rho (x)
\sigma (x) \right ]\nonumber \\
& &+ \frac12 \mbox{Tr} \log [- \Delta + \beta^2 + 2i\rho]
\label{3}
\end{eqnarray}
with partition function
\begin{equation}
Z = \int D\sigma D\rho \exp [- N S_{\mbox{\scriptsize eff}} (\sigma ,\rho)].
\label{4}
\end{equation}
This partition function is to be evaluated in the limit $N \rightarrow \infty$.
Application of singularity theory amounts to evaluation of (\ref{4}) by saddle
point integrals.
The system may either be considered on unbounded spacetime or on a cube with
volume $V = L^D$ and periodic boundary conditions. Fourier transforms are
defined by
\begin{equation}
\hat{\alpha}(p) = \int d^Dx \, e^{-ipx} \alpha (x)
\label{5}
\end{equation}
in either case, but the inverse transformations involve either integrations
\[
\int\frac{d^Dp}{(2\pi )^D}
\]
or summations
\[
\frac{1}{V} \sum_{{\mbox{\scriptsize{$p$ from one}}} \atop {\mbox{\scriptsize
Brillouin zone}}}.
\]
In explicit formulas we will always write integrals.
Let us assume that the saddle point of (\ref{4})
\[
(\sigma_0,\rho_0)
\]
is constant over spacetime. The saddle point is then determined by
\begin{equation}
U^{\prime}(\sigma_0) = i\rho_0
\label{6}
\end{equation}
\begin{equation}
\sigma_0 = \int \frac{d^Dp}{(2\pi )^D} (p^2 + m^2)^{-1}
\label{7}
\end{equation}
where
\begin{equation}
m^2 = \beta^2 + 2i\rho_0
\label{8}
\end{equation}
is assumed positive. In order to render the integral (\ref{7}) convergent,
we limit $D$ to $0 \le D < 2$. Only in the sixth section we will abandon this
constraint.
The fields $\sigma$ and $\rho$ fluctuate
\begin{equation}
\sigma(x) = \sigma_0 (1 + \alpha(x))
\label{9}
\end{equation}
\begin{equation}
\rho(x) = \rho_0 (1 + \beta(x))
\label{10}
\end{equation}
and the n'th order term of $S_{\mbox{\scriptsize eff}}$ in the fluctuations is
denoted
$S^{(n)}_{\mbox{\scriptsize eff}}$. Then
\begin{eqnarray}
S^{(2)}_{\mbox{\scriptsize eff}} = \frac12 \int \frac{d^Dp}{(2\pi )^D}
(\hat{\alpha}(-p),
\hat{\beta}(-p)) \nonumber \\
\left( \begin{array}{cc}
\sigma^2_0 U^{\prime\prime}(\sigma_0) & \sigma_0 r_0 \\
\sigma_0 r_0 & -2r^2_0 \Sigma (p)
\end{array}\right)
\left( \begin{array}{cc}
\hat{\alpha}(p)\\
\hat{\beta}(p)
\end{array}\right)
\label{11}
\end{eqnarray}
where the real quantity
\begin{equation}
r_0 = -i \rho_0
\label{12}
\end{equation}
has been introduced and
\begin{equation}
\Sigma(p) = \int \frac{d^Dq}{(2\pi )^D} [(q^2 + m^2)((q - p)^2 + m^2)]^{-1}.
\label{13}
\end{equation}
In terms of coordinate space integrals the n'th order term $S^{(n)}_{\mbox{\scriptsize eff}}$ reads
\begin{eqnarray}
S^{(n)}_{\mbox{\scriptsize eff}} = \frac{\sigma^n_0 U^{(n)}(\sigma_0)}{n!} \int
d^Dx \alpha(x)^n
\nonumber \\
- \frac{1}{2n} (2r_0)^n \mbox{Tr} \left[(- \Delta+m^2)^{-1} \beta(x) \right]^n.
\label{14}
\end{eqnarray}
The Hessian $S^{(2)}_{\mbox{\scriptsize eff}}$ is diagonalized by
\begin{equation} \left(\begin{array}{cc}
\hat{\alpha}(p)\\
\hat{\beta}(p)
\end{array}\right) =
\left(\begin{array}{cc}
a(p)\\
1 \end{array}\right)
\hat{\xi}(p) +
\left(\begin{array}{cc}
b(p)\\
1 \end{array}\right)
\hat{\eta}(p)
\label{15}
\end{equation}
where the two eigenvectors are orthogonal implying
\begin{equation}
a(p)b(p) = -1
\label{16}
\end{equation}
and have norms squared
\begin{eqnarray}
N_+(p) = a(p)^2 + 1 \nonumber \\
N_-(p) = b(p)^2 + 1 \label{17}.
\end{eqnarray}
Then the Hessian assumes the form
\begin{eqnarray}
S^{(2)}_{{\mbox{\scriptsize eff}}} = \frac12 \int \frac{d^Dp}{(2\pi )^D}
\big[\lambda_+(p)
N_+(p) \hat{\xi}(-p) \hat{\xi}(p) \nonumber \\
\quad \quad + \lambda_-(p) N_-(p) \hat{\eta}(-p)\hat{\eta}(p) \big]
\label{18}
\end{eqnarray}
with eigenvalues
\begin{eqnarray}
\lambda_{\pm}(p) = \frac12 \left\{ \sigma^2_0 U^{\prime\prime}(\sigma_0)
- 2r^2_0 \Sigma(p) \right. \nonumber \\
\left. \quad \mp \left[(\sigma^2_0 U^{\prime\prime}(\sigma_0) + 2r^2_0
\Sigma(p))^2 + 4 \sigma^2_0 r^2_0 \right] ^{\frac12} \right\}.
\label{19}
\end{eqnarray}
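As a numerical sanity check of (\ref{19}) (an illustrative sketch with arbitrary sample entries, not taken from the text): both roots must annihilate the characteristic polynomial of the $2\times 2$ matrix in (\ref{11}).

```python
from math import sqrt, isclose

# arbitrary sample values for the three independent entries of the Hessian (11)
A = 1.3     # sigma_0^2 U''(sigma_0)          (top-left entry)
B = 0.7     # 2 r_0^2 Sigma(p)                (minus the bottom-right entry)
c = 0.36    # sigma_0 r_0                     (off-diagonal entry)

def lam(sign):
    # eq. (19): lambda_+ for sign = +1 (minus before the root), lambda_- for sign = -1
    return 0.5 * (A - B - sign * sqrt((A + B) ** 2 + 4 * c ** 2))

def charpoly(x):
    # det of (Hessian - x * identity) for the matrix [[A, c], [c, -B]]
    return (A - x) * (-B - x) - c ** 2

for sign in (+1, -1):
    assert isclose(charpoly(lam(sign)), 0.0, abs_tol=1e-9)
```

With these sample entries one also finds $\lambda_+ < 0 < \lambda_-$, consistent with the sign pattern (\ref{23}), (\ref{24}).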
The $p$-dependence of $N_{\pm}(p), a(p), b(p), \lambda_{\pm}(p)$ originates
from $\Sigma(p)$. Since
\begin{equation}
\Sigma(p) = \Sigma(-p)
\label{20}
\end{equation}
all these quantities have the same symmetry. In the infinite volume
case $\Sigma(p)$ depends only on $p^2$ and decreases monotonically from $p^2=0$
to $p^2= \infty$ where it vanishes.
We will discuss only the case that the Hessian becomes singular at $p=0$.
In this case we must have
\begin{equation}
2 \Sigma(0) U^{\prime\prime} (\sigma_0) + 1 = 0
\label{21}
\end{equation}
implying
\begin{equation}
\lambda_-(0) = 0
\label{22}
\end{equation}
\begin{equation}
\lambda_+(0) < 0.
\label{23}
\end{equation}
{}From (\ref{19}) follows moreover
\begin{equation}
\lambda_-(p) > 0, \quad p \neq 0.
\label{24}
\end{equation}
\section{The critical behaviour for fixed finite volume}
The critical behaviour of a singularity of type $A_k$ is
produced by the limit $N \rightarrow \infty$ and is essentially
independent of whether the volume is finite or infinite. Of course the
finite size leads to finite size corrections which we shall neglect here.
If the volume is finite, the momentum spectrum is discrete and the corank
of the Hessian is one. The possible singularities are of type $A_k, k \in
\bbbn$.
The fluctuations
\begin{eqnarray*}
\hat{\xi}(p), \mbox{ all } p \\
\hat{\eta}(p), \mbox{ all }p \neq 0
\end{eqnarray*}
are Gaussian and must be integrated out first. The saddle point around
which the Gaussian integration is performed is fixed by
\begin{equation}
\frac{\partial S_{\mbox{\scriptsize eff}}}{\partial \hat{\xi}(p)} = 0,
\mbox{ all }p
\label{25}
\end{equation}
\begin{equation}
\frac{\partial S_{\mbox{\scriptsize eff}}}{\partial \hat{\eta}(p)} = 0,
\mbox{ all }p \neq 0
\label{26}
\end{equation}
so that its location depends on $\hat{\eta}(0)$. Because of translational
invariance these conditions imply
\begin{equation}
\hat{\xi}(p) = \hat{\eta}(p) = 0, \quad p \neq 0.
\label{27}
\end{equation}
Inserting this into $S_{\mbox{\scriptsize eff}}$ gives a new action
\begin{equation}
\left.\tilde{S}_{\mbox{\scriptsize red}}(\xi_0,\eta_0) = S_{\mbox{\scriptsize
eff}}(\hat{\xi},\hat{\eta})\right|_
{\hat{\xi}(p) = \hat{\eta}(p) = 0 \atop{(p \neq 0)}}
\label{28}
\end{equation}
where we introduced the new variables
\begin{equation}
\xi_0 = \frac{\hat{\xi}(0)}{V}, \eta_0 = \frac{\hat{\eta}(0)}{V}.
\label{29}
\end{equation}
{}From (\ref{25}) remains
\begin{equation}
\frac{\partial \tilde{S}_{\mbox{\scriptsize red}}}{\partial\xi_0}
(\xi_0,\eta_0) = 0
\label{30}
\end{equation}
or explicitly
\begin{equation}
- \lambda_+(0) N_+(0) \xi_0 = \sum^{\infty}_{n=3} \frac{\partial}
{\partial\xi_0} \tilde{S}^{(n)}_{\mbox{\scriptsize red}} (\xi_0,\eta_0).
\label{31}
\end{equation}
A splitting lemma \cite{12} asserts that this elimination equation
can be solved iteratively
\begin{equation}
\xi_0 = H(\eta_0) = \sum^{\infty}_{n=2} a_n\eta^n_0
\label{32}
\end{equation}
and that $H$ exists in a neighborhood of zero as a function. An explicit
determination of the coefficients $\{a_n\}$ at the critical point (the
$\{a_n\}$ are functions of $\{f_r \}$ from (\ref{2}) which assume critical
values $\{f^c_r\}$) is crucial for any further explicit determination of
critical quantities.
Inserting (\ref{32}) into $\tilde{S}_{\mbox{\scriptsize red}}$ (\ref{28}) gives
the reduced
action (neglecting a constant)
\begin{eqnarray}
S_{\mbox{\scriptsize red}}(\eta_0) & = & \sum^{\infty}_{n=2} \frac{g_n}{n}
\eta^n_0 \nonumber \\
& = & \tilde{S}_{\mbox{\scriptsize red}} (H(\eta_0),\eta_0).
\label{33}
\end{eqnarray}
A singularity $A_k$ arises if
\begin{eqnarray}
g_n = 0, \quad 2 \le n \le k \nonumber \\
g_{k+1} \neq 0.
\label{34}
\end{eqnarray}
Now we start the explicit calculation of the critical quantities. Some
technicalities are unavoidable in this context. The function $\Sigma(p)$
(\ref{13}) can be expanded in a power series of $p^2$ (convergence radius
$4m^2$, only the first terms are needed)
\begin{equation}
\Sigma(p) = \sum^{\infty}_{n=0} B_n(p^2)^n.
\label{35}
\end{equation}
Moreover, we use the standard integrals
\begin{eqnarray}
\Pi_n & = & \int \frac{d^Dp}{(2\pi )^D} (p^2+m^2)^{-n} \nonumber \\
& = & (4\pi)^{-\mu} \frac{\Gamma(n-\mu)}{\Gamma(n)} (m^2)^{\mu-n}
\label{36}
\end{eqnarray}
\[
(\mu = \mbox{$\frac12$} D) .
\]
Then $B_n$ can be expressed in terms of $\Pi_{n+2}$ by
\begin{equation}
B_n = \frac{(-1)^n n!(n+1)!}{(2n+1)!} \Pi_{n+2}.
\label{37}
\end{equation}
We can use $\Pi_1$ and $\Pi_2$ to express all fractional powers of momentum
dimensions, e.g.
\begin{equation}
\Pi_{n+2} = \frac{\Pi_2^{n+1}}{\Pi^n_1} \delta_n
\label{38}
\end{equation}
so that $\delta_n$ is a function of $D$ only (or $\mu = \frac12 D$)
\begin{equation}
\delta_n = \frac{(2-\mu)_n}{(n+1)!(1-\mu)^n}.
\label{39}
\end{equation}
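Combining (\ref{36})--(\ref{38}) indeed collapses to (\ref{39}); a small numerical check (sample values $\mu = 0.3$, $m^2 = 1.7$, chosen arbitrarily) confirms that the prefactors $(4\pi)^{-\mu}$ and the powers of $m^2$ cancel:

```python
from math import gamma, factorial, isclose, pi

mu = 0.3    # mu = D/2, sample value with 0 <= D < 2
m2 = 1.7    # m^2, arbitrary; it must drop out of delta_n

def Pi(n):
    # eq. (36): Pi_n = (4 pi)^{-mu} Gamma(n - mu)/Gamma(n) (m^2)^{mu - n}
    return (4 * pi) ** (-mu) * gamma(n - mu) / gamma(n) * m2 ** (mu - n)

def poch(x, n):
    # Pochhammer symbol (x)_n
    p = 1.0
    for i in range(n):
        p *= x + i
    return p

for n in range(0, 6):
    lhs = Pi(n + 2) * Pi(1) ** n / Pi(2) ** (n + 1)             # eq. (38)
    rhs = poch(2 - mu, n) / (factorial(n + 1) * (1 - mu) ** n)  # eq. (39)
    assert isclose(lhs, rhs, rel_tol=1e-10)
```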
Noting that $\sigma_0 = \Pi_1$ by (\ref{7}) and (\ref{36}), we
normalize the derivatives of the potential at the critical point by
\begin{equation}
\left. \Pi^n_1 U^{(n)} (\Pi_1)\right|_{\mbox{\scriptsize crit}} =
\frac{\Pi^2_1}{2\Pi_2} v_n.
\label{40}
\end{equation}
Then the following result can be derived
\begin{equation}
v_{n+2} = \sum_{\mbox{{\scriptsize partitions of $n$}}} (-1)^{\ell -1}
(n+\ell)! \, \prod^{\infty}_{j=1} \frac{\delta_j^{n_j}}{n_j!}
\label{41}
\end{equation}
\quad $(n \ge 0, v_2 = -1$ from (\ref{21}))
\noindent where $\ell$ is the length of the partition and $n_j$ is the
repetition number of $j$
\begin{equation}
n = \sum^{\infty}_{j=1} jn_j
\label{42}
\end{equation}
\begin{equation}
\ell = \sum^{\infty}_{j=1} n_j.
\label{43}
\end{equation}
This formula has been checked by computer up to $n = 8$. Inserting $\delta_n$
(\ref{39}) into (\ref{41}) gives the simple expression
\begin{equation}
v_{n+2} = (-1)^{n+1} \left( \frac{2-\mu}{1-\mu}\right)_n.
\label{44}
\end{equation}
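The computer check mentioned above (up to $n = 8$) can be reproduced in a few lines of exact rational arithmetic (a sketch; $\mu = 1/2$, i.e.\ $D = 1$, is an arbitrary sample value):

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    # yield the partitions of n as dicts {part j: multiplicity n_j}
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            p = dict(rest)
            p[k] = p.get(k, 0) + 1
            yield p

def delta(j, mu):
    # eq. (39): delta_j = (2-mu)_j / ((j+1)! (1-mu)^j)
    poch = Fraction(1)
    for i in range(j):
        poch *= 2 - mu + i
    return poch / (factorial(j + 1) * (1 - mu) ** j)

def v_partition_sum(n, mu):
    # eq. (41): v_{n+2} as a sum over partitions of n
    total = Fraction(0)
    for p in partitions(n):
        ell = sum(p.values())                 # length of the partition
        sign = -1 if (ell - 1) % 2 else 1     # (-1)^(ell - 1)
        term = Fraction(sign * factorial(n + ell))
        for j, nj in p.items():
            term *= delta(j, mu) ** nj / factorial(nj)
        total += term
    return total

def v_closed(n, mu):
    # eq. (44): v_{n+2} = (-1)^{n+1} ((2-mu)/(1-mu))_n
    x = (2 - mu) / (1 - mu)
    poch = Fraction(1)
    for i in range(n):
        poch *= x + i
    return (-1) ** (n + 1) * poch

mu = Fraction(1, 2)   # D = 1
for n in range(0, 9):
    assert v_partition_sum(n, mu) == v_closed(n, mu)
```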
In the case in which the volume is finite, the definitions
(\ref{38}),(\ref{40})
and the purely algebraic result (\ref{41}) remain valid. But $\delta_n$
(\ref{39})
obtains a finite size correction which is neglected in (\ref{44}). We shall
neglect such corrections also in the sequel.
The next issue is to calculate the critical coupling constants $\{f^c_r\}$
from all $v_n$. For a singularity $A_k$ we normalize the coupling constants
to
\begin{equation}
A_k : f_r = 0 \mbox{ for } r > k.
\label{45}
\end{equation}
In analogy with (\ref{40}) we define
\begin{equation}
\Pi_1^r f^c_r = \frac{\Pi^2_1}{2\Pi_2} p^{(k)}_r(\mu).
\label{46}
\end{equation}
Inverting the system of equations $(n \in \{1,2,...,k-1\})$
\begin{equation}
v_{n+1} = (-1)^n \sum^k_{r=2} (1-r)_n p_r^{(k)}(\mu)
\label{47}
\end{equation}
gives $(r \in \{2,3,...,k\})$
\begin{equation}
p_r^{(k)}(\mu) = \frac{(-1)^{r-1}}{(r-1)!(k-r)!} \cdot \frac{1-\mu}
{(1-\mu)(r-1)+1} \cdot \left(\frac{2-\mu}{1-\mu}\right)_{k-1}.
\label{48}
\end{equation}
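The inversion leading to (\ref{48}) can be verified directly in exact arithmetic (a sketch; $\mu = 1/3$ is an arbitrary sample value, and $v_{n+1}$ on the left of (\ref{47}) is taken from (\ref{44})):

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    # Pochhammer symbol (x)_n
    p = Fraction(1)
    for i in range(n):
        p *= x + i
    return p

def p_r(k, r, mu):
    # eq. (48)
    x = (2 - mu) / (1 - mu)
    return (Fraction((-1) ** (r - 1), factorial(r - 1) * factorial(k - r))
            * (1 - mu) / ((1 - mu) * (r - 1) + 1) * poch(x, k - 1))

mu = Fraction(1, 3)
x = (2 - mu) / (1 - mu)
for k in range(2, 7):
    for n in range(1, k):
        # eq. (47): v_{n+1} = (-1)^n sum_r (1-r)_n p_r^{(k)},
        # with v_{n+1} = (-1)^n (x)_{n-1} from eq. (44)
        lhs = sum(poch(1 - r, n) * p_r(k, r, mu) for r in range(2, k + 1))
        assert lhs == poch(x, n - 1)
```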
In the same context we can calculate the critical value for
\begin{eqnarray}
b(0)& = & \frac{2\Pi_2r_0}{\Pi_1} = - a(0)^{-1} \nonumber \\
& = & - \left(\frac{\Pi^2_1}{2\Pi_2}\right)^{-1} \Pi_1 U^{\prime}(\Pi_1)
\label{49}
\end{eqnarray}
which gives
\begin{eqnarray}
b(0) &=& p_1^{(k)}(\mu) \nonumber \\
&=& (1-\mu) \left[\frac{1}{(k-1)!} \left(\frac{2-\mu}{1-\mu}\right)_{k-1}-1
\right]
\label{50}
\end{eqnarray}
(if $r=1$ is inserted into (\ref{48}) we obtain $p_1^{(k)}(\mu)+(1-\mu)$).
At the critical point also $g_{k+1}$ (\ref{34}) is fixed
\begin{equation}
\frac{g^c_{k+1}}{k+1} = (-1)^{k+1} \frac{(p_1^{(k)}(\mu))^{k+1}}{(k+1)!}
\left(\frac{2-\mu}{1-\mu}\right)_{k-1} \frac{\Pi^2_1}{2\Pi_2}.
\label{51}
\end{equation}
It is important to remark that the second condition (\ref{34}) is indeed
satisfied by (\ref{51}), since for $0 \le \mu < 1$ $(0 \le D < 2)$
\begin{equation}
\left(\frac{2-\mu}{1-\mu}\right)_{k-1} > (k-1)!
\label{52}
\end{equation}
\begin{equation}
p_1^{(k)}(\mu) > 0
\label{53}
\end{equation}
\begin{equation}
\mbox{sign }g^c_{k+1} = (-1)^{k+1}.
\label{54}
\end{equation}
\section{Deformation of the singularity and the double scaling limit for
fixed finite volume}
Singularities can be deformed \cite{11,12}. We assume that this is achieved for
$A_k$ by
\begin{equation}
f_r = f^c_r(1+\Theta_r)
\label{55}
\end{equation}
\[
2 \le r \le k
\]
whereas the parameter $m^2$ is invariant. This is a nonstandard way of
deforming: the standard way would be to keep $f_k = f^c_k$ fixed and let $m^2$
(or $f_1$) vary. The quantity $m^{2}$ is kept constant to simplify the
following
discussion and this is achieved by compensation of the variation of $r_0$ and
$\beta^{2}$ in (\ref{8})
\begin{equation}
m^{2} = \beta^{2} - 2 r_0.
\end{equation}
Invariance of $m^{2}$ implies that
\begin{equation}
\sigma_{0} = \Pi_{1}(m^{2})
\end{equation}
is not varied either, so that in
\begin{equation}
r_0 = -U'(\sigma_{0})
\end{equation}
the variation comes only from (\ref{55}). We conclude that from the $k$
quantities
$\{ f_1=\frac{\beta^{2}}{2}, f_2, \ldots, f_k \}$ only $k-1$ are varied.
In any case the Hessian is diagonalized exactly and the fluctuations ${\hat
\xi},
{\hat \eta}$ are considered independent of the deformation.
Correspondingly the coupling constants $\{g_n\}$ in (\ref{33}) deviate from
the critical values,
\begin{equation}
\frac{g_n}{n} = \sum^k_{r=2} \alpha_{nr}^{(k)} \Theta_r + O_2(\Theta)
\label{56}
\end{equation}
\[
(2 \le n \le k) .
\]
The double scaling limit is obtained by coupling two processes
\quad (1)\quad$N \rightarrow \infty$
\quad (2)\quad$\Theta_r \rightarrow 0, \: \forall r$
\noindent in a particular way. In the momentum spectrum any eigenvalue
$p_1$ neighboring $p = 0$ has a fixed distance $O(L^{-1})$ from $p = 0$,
and therefore the deformation parameters can be restricted to a
neighborhood of zero so small that $\lambda_-(p_1)$ and $\lambda_-(0)$
take values in non-intersecting intervals on this neighborhood.
We study the singular partition function
\begin{equation}
Z_{\mbox{\scriptsize sing}} = \int d\eta_0 e^{-NS_{\mbox{\tiny red}}(\eta_0)}.
\label{57}
\end{equation}
Since $g^c_{k+1} \neq 0$ we can approximate $g_{k+1}$ by $g^c_{k+1}$ over the
whole deformation neighborhood. We introduce a new variable $y$ by
requiring
\begin{equation}
y^{k+1} = N |g^c_{k+1}| \eta_0^{k+1}
\label{58}
\end{equation}
\begin{equation}
\eta_0 = y\left(N|g^c_{k+1}|\right)^{-\frac{1}{k+1}}.
\label{59}
\end{equation}
Inserting (\ref{59}) into $S_{\mbox{\scriptsize red}}(\eta_0)$, we see that we
have to perform
the limit \newpage
\begin{equation}
\zeta_n = \lim N \left(N|g^c_{k+1}|\right)^{-\frac{n}{k+1}}
\sum_r \alpha^{(k)}_{nr} \Theta_r
\label{60}
\end{equation}
\[
(2 \le n \le k).
\]
Thus the double scaling limit is defined via the "susceptibility matrix"
$\alpha^{(k)}$, and each linear combination $(\alpha^{(k)} \Theta)_{n}$ scales
as $N^{-\sigma_n^{(k)}}$
\begin{equation}
\sigma_n^{(k)} = 1 - \frac{n}{k+1}.
\label{61}
\end{equation}
These are the "critical indices". The result of the double scaling limit is
to leading order
\begin{equation}
Z_{\mbox{\scriptsize sing}} = \left(N|g^c_{k+1}|\right)^{-\frac{1}{k+1}}
Y_\epsilon(\zeta_2,
\zeta_3,...\zeta_k)
\label{62}
\end{equation}
where $Y_\epsilon$ is the generalized Airy function,
\begin{equation}
Y_\epsilon(\zeta_2,\zeta_3,...,\zeta_k) = \int_{C^{(k)}} dy \exp
\left\{-\sum^k_{n=2} \zeta_n y^n - \epsilon \frac{y^{k+1}}{k+1} \right\}
\label{63}
\end{equation}
\begin{equation}
\epsilon = \mbox{sign }g^c_{k+1}.
\label{64}
\end{equation}
The contour $C^{(k)}$ is the real axis if
\begin{equation}
\epsilon = + 1, \; k \mbox{ odd}
\label{65}
\end{equation}
and a combination of complex contours, running from infinity to infinity along
which the integral converges exponentially, in all other cases. By a
translation
$y \rightarrow y+a$ we can eliminate the term $y^k$ and produce a term $y^1$
obtaining the standard form of a generalized Airy function.
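For the case (\ref{65}), where the contour is the real axis, the integral (\ref{63}) can be evaluated by elementary quadrature. The sketch below (not part of the paper) takes $k=3$, for which $\epsilon = (-1)^{k+1} = +1$; the deformation values and integration cutoff are illustrative assumptions.

```python
import numpy as np
from math import gamma, sqrt

def Y(zeta2, zeta3, k=3, cutoff=10.0, n=200001):
    """Numerical evaluation of the generalized Airy integral (63) for
    epsilon = +1 and odd k, where the contour C^(k) is the real axis
    (case (65))."""
    y = np.linspace(-cutoff, cutoff, n)
    f = np.exp(-zeta2 * y**2 - zeta3 * y**3 - y**(k + 1) / (k + 1))
    # trapezoidal rule, written out to stay NumPy-version agnostic
    return float(np.dot((f[1:] + f[:-1]) / 2.0, np.diff(y)))

# With all deformations switched off, the k = 3 integral reduces to
#   int exp(-y^4/4) dy = Gamma(1/4)/sqrt(2)   (substitute t = y^4/4).
assert abs(Y(0.0, 0.0) - gamma(0.25) / sqrt(2.0)) < 1e-6
```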
The function $Y_{\epsilon}$, or any function of $Y_{\epsilon}$ such as
$F = \log Y_{\epsilon}$, satisfies a renormalization group equation
\begin{equation}
\left( N \frac{\partial}{\partial N} - \sum^k_{n=2} \beta_n(\Theta) \frac
{\partial}{\partial\Theta_n}\right) F(\zeta_2,\zeta_3,...\zeta_k) = 0
\label{66}
\end{equation}
where each $\zeta_n$ is considered as a function of $N$ and $\{\Theta_r\}$
which in the neighborhood of the singularity is determined by (\ref{60}). The
beta functions $\{\beta_n(\Theta)\}$ are determined from $(2 \le n\le k)$
\begin{equation}
N \frac{\partial\zeta_n}{\partial N} - \sum^k_{r=2} \beta_r(\Theta) \frac
{\partial\zeta_n}{\partial\Theta_r} = 0.
\label{67}
\end{equation}
For small $\{\Theta_r\}$ this is satisfied if
\begin{equation}
\beta_r(\Theta) = \sum^k_{n=2} {\cal N}^{(k)}_{rn} \Theta_n + O_2(\Theta)
\label{68}
\end{equation}
\begin{equation}
{\cal N}^{(k)} = \alpha^{(k),-1} \mbox{diag }\sigma^{(k)} \alpha^{(k)}.
\label{69}
\end{equation}
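Equation (\ref{69}) exhibits ${\cal N}^{(k)}$ as a similarity transform of $\mbox{diag }\sigma^{(k)}$, so its eigenvalues are exactly the critical indices (\ref{61}) for any invertible susceptibility matrix. A small numerical illustration (the random matrix below is a stand-in for $\alpha^{(k)}$, not the physical one):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
n_vals = np.arange(2, k + 1)          # n = 2, ..., k
sigma = 1.0 - n_vals / (k + 1)        # critical indices (61)

# Stand-in susceptibility matrix: any invertible alpha suffices for
# this structural statement; the physical alpha comes from (77)-(83).
alpha = rng.normal(size=(k - 1, k - 1)) + 2.0 * np.eye(k - 1)
N = np.linalg.inv(alpha) @ np.diag(sigma) @ alpha   # eq. (69)

# N is similar to diag(sigma), so its eigenvalues are the sigma_n.
eig = np.sort(np.linalg.eigvals(N).real)
assert np.allclose(eig, np.sort(sigma))
```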
The susceptibility matrix must therefore be invertible. We will verify this by
an explicit calculation. This calculation starts from the following
observation. We can use
\begin{equation}
u_n = U^{(n)}(\Pi_1) - \left.U^{(n)}(\Pi_1)\right|_{crit}
\label{70}
\end{equation}
as parameters of deformation instead of the $\{\Theta_n\}$. Then $\tilde{S}
_{\mbox{\scriptsize red}}$ depends on
\begin{equation}
\tilde{S}_{\mbox{\scriptsize red}} (\xi_0,\eta_0;u_2,u_3,...u_k).
\label{71}
\end{equation}
{}From the constraint (\ref{30}) we obtain the elimination function
\begin{equation}
\xi_0 = H(\eta_0;u_2,u_3,...,u_k)
\label{72}
\end{equation}
and
\begin{equation}
S_{\mbox{\scriptsize red}}(\eta_0;u_2,u_3,...,u_k) =
\tilde{S}_{\mbox{\scriptsize red}}(H(\eta_0;u_2,u_3,...,u_k),
\eta_0;u_2...u_k).
\label{73}
\end{equation}
It follows from (\ref{30}) that in
\begin{equation}
\frac{\partial S_{\mbox{\scriptsize red}}}{\partial u_n} =
\frac{\partial\tilde{S}_{\mbox{\scriptsize red}}}
{\partial \xi_0} \frac{\partial H}{\partial u_n} + \frac{\partial
\tilde{S}_{\mbox{\scriptsize red}}}
{\partial u_n}
\label{74}
\end{equation}
the first term vanishes. The variation of $u_n$ enters the second term either
directly
or via $r_0$. However, it can be shown that at the critical point and for
constant
$\{ a_n \}$ the derivative with respect to $r_0$ vanishes. Thus only the direct
dependence
is left and we obtain
\begin{eqnarray}
\frac1n \frac{\partial g_n}{\partial u_{\ell}} = \frac{1}{\ell !} \left(- \frac
{\Pi_1}{b(0)}\right)^{\ell} \sum_{{\mbox{\scriptsize partitions of n}} \atop
{\mbox{\scriptsize of length $\ell$}}}
\left(\begin{array}{c}
\ell \\
n_1\,n_2\,n_3\ldots
\end{array}\right) \nonumber \\
(-b(0)^2)^n \prod^{\infty}_{j=2} a_j^{n_j}
\label{75}
\end{eqnarray}
where $\ell$ is the length and $n_j$ the repetition number of $j$ (as in
(\ref{42}),(\ref{43})). The coefficients $\{a_j\}$ of the elimination
function $H$ (\ref{32}) at the critical point can be expressed as functions
of $D$ (or $\mu$) by
\begin{eqnarray}
a_{n+1} = - b(0)^{n+2} \sum_{\mbox{\scriptsize partitions of $n$}}
\frac{(n+\ell)!}
{(n+1)!} (1+b(0)^2)^{-\ell} \nonumber \\
\cdot \prod^{\infty}_{j=1} \frac{1}{n_j!} \left(\frac{v_{j+2}}{(j+1)!}
\right)^{n_j}
\label{76}
\end{eqnarray}
with $\ell$ and $n_j$ as in (\ref{42}), (\ref{43}). This formula has been
verified for $n \le 8$.
Next we reduce the susceptibility matrix
\begin{equation}
\alpha_{nr}^{(k)} = \frac{\Pi^2_1}{2\Pi_2} (p_1^{(k)}(\mu))^n p_r^{(k)}(\mu)
\tilde{\alpha}^{(k)}_{nr}\label{77}
\end{equation}
and find from inserting (\ref{76}) into ${\tilde S}_{\mbox{\scriptsize red}}$
\begin{equation}
\tilde{\alpha}_{nr}^{(k)} = \sum^r_{s=2} S^{(k)}_{ns} \left(r-1 \atop{s-1}\right).
\label{78}
\end{equation}
Here $S^{(k)}$ is a lower left triangular matrix
\begin{equation}
S^{(k)}_{nr} = 0, \quad r > n
\label{79}
\end{equation}
and
\begin{equation}
B_{sr} = \left(r-1 \atop{s-1}\right)
\label{80}
\end{equation}
is an upper right triangular matrix. Moreover we have
\begin{equation}
S^{(k)}_{nn} = \frac1n \quad (n \geq 2)
\label{81}
\end{equation}
and for all other elements $n > r \geq 2$
\begin{eqnarray}
S^{(k)}_{nr} = \sum_{\mbox{\scriptsize partitions of $n-r$}} (1
-\frac{n}{n+\ell-1} \delta_{r2})
\frac{(n+\ell-1)!}{n!} (1+b(0)^2)^{-\ell} \nonumber \\
\cdot \prod^{\infty}_{j=1} \frac{1}{n_j!} \left(\frac{v_{j+2}}{(j+1)!}\right)
^{n_j}
\label{83}
\end{eqnarray}
with $\ell$ the length and $n_j$ the repetition number of $j$ of the partition
of
$n-r$.
The representation (\ref{78}) gives for the inverse
\begin{equation}
\tilde{\alpha}^{(k),-1} = B^{-1} S^{(k),-1}
\label{84}
\end{equation}
with
\begin{equation}
B^{-1}_{sr} = (-1)^{s-r} \left(r-1 \atop{s-1} \right)
\label{85}
\end{equation}
and
\begin{eqnarray}
S^{(k),-1}_{rn} &=& - rn S^{(k)}_{rn} + \sum_{s \atop{(r>s>n)}} rsn
S^{(k)}_{rs} S^{(k)}_{sn} \nonumber \\
& & - \sum_{s_1,s_2 \atop{(r>s_1>s_2>n)}} rs_1s_2n
S^{(k)}_{rs_1}S^{(k)}_{s_1s_2}
S^{(k)}_{s_2n} \nonumber \\
& & \mp \ldots \quad (r > n)
\label{86}
\end{eqnarray}
\begin{equation}
S^{(k),-1}_{nn} = n.
\label{87}
\end{equation}
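Both inversion statements can be checked numerically: (\ref{85}) against (\ref{80}), and the alternating series (\ref{86}), (\ref{87}) against a direct matrix inversion. The sketch below fills the strictly lower part of $S^{(k)}$ with random entries, which is all the formulas rely on (triangularity plus the diagonal (\ref{81})):

```python
import numpy as np
from math import comb

k = 7
idx = range(2, k + 1)                 # indices n, r, s run over 2, ..., k
dim = k - 1

# Binomial matrix B of (80) and its claimed inverse (85): B B^{-1} = 1.
B = np.array([[comb(r - 1, s - 1) for r in idx] for s in idx])
B_inv = np.array([[(-1) ** (s - r) * comb(r - 1, s - 1) for r in idx]
                  for s in idx])
assert np.array_equal(B @ B_inv, np.eye(dim, dtype=int))

# Stand-in lower-triangular S with S_nn = 1/n (eq. (81)); the strictly
# lower entries are arbitrary here.
rng = np.random.default_rng(1)
n_vals = np.arange(2, k + 1)
S = np.tril(rng.normal(size=(dim, dim)), -1) + np.diag(1.0 / n_vals)

# Alternating series (86)-(87): with D = diag(1/n) and N the strictly
# lower part, S^{-1} = sum_m (-1)^m D^{-1} (N D^{-1})^m.  Element-wise:
#   n delta_rn - r n S_rn + sum_s r s n S_rs S_sn - ...
D_inv = np.diag(n_vals.astype(float))
N = S - np.diag(np.diag(S))
S_inv = np.zeros_like(S)
term = D_inv.copy()
for _ in range(dim):                  # N is nilpotent: the series terminates
    S_inv += term
    term = -term @ N @ D_inv
assert np.allclose(S_inv, np.linalg.inv(S))
```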
Finally we obtain by inserting (\ref{77}),(\ref{78}), and (\ref{84}) into
(\ref{69})
\begin{equation}
{\cal N}^{(k)}_{rn} = \frac{p_n^{(k)}(\mu)}{p_r^{(k)}(\mu)} \left( B^{-1}
S^{(k),-1} \mbox{diag } \sigma^{(k)}S^{(k)}B\right)_{rn}.
\label{88}
\end{equation}
\section{The case of unbounded volume}
In the case of unbounded volume the momentum spectrum is continuous. The
domain of small momenta
\begin{equation}
|p| < \Lambda
\label{89}
\end{equation}
is considered as an additional deformation. This leads to an additional
term in the generalized Airy function integral (the kinetic energy term) and
modified critical indices
\begin{equation}
\sigma_n^{(k)} \rightarrow \chi_n^{(k)}(\mu).
\label{90}
\end{equation}
The generalized Airy function (\ref{63}) is replaced by a field theoretic
partition function
\begin{eqnarray}
Y_{\phi} = \int D\phi \exp \left\{- \frac12 \int d^Dx \phi(x)(-\Delta+M^2)
\phi(x) \right. \nonumber \\
\left. - \sum^k_{n=3} \zeta_n \int d^Dx \phi(x)^n - \frac{F_{k+1}}{k+1} \int
d^Dx \phi(x)^{k+1} \right\}.
\label{91}
\end{eqnarray}
The kinetic energy term has been normalized in (\ref{91}) instead of the
$(k+1)$-st order term. The dimension $D$ is still in the interval
$0 \leq D <2$.
The reduced action $S_{\mbox{\scriptsize red}}$ depends only on
$\{\hat{\eta}(p) \, | \, |p|<\Lambda\}$
so that all other degrees of freedom must be integrated out by performing
a Gaussian saddle point integration. As the first step we obtain the
half-reduced
action
\begin{equation}
\left.\tilde{S}_{\mbox{\scriptsize red}}(\hat{\xi}(p),\hat{\eta}(p)) =
S_{\mbox{\scriptsize eff}}(\hat{\xi},
\hat{\eta})\right|_{\hat{\xi}(p)=\hat{\eta}(p)=0 \atop{|p|>\Lambda}}.
\label{92}
\end{equation}
We are left with the task of solving
\begin{equation}
\frac{\delta}{\delta\hat{\xi}(p)} \tilde{S}_{\mbox{\scriptsize red}} = 0
\mbox{ for all } |p| < \Lambda
\label{93}
\end{equation}
for $\hat{\xi}(p)$. If $\Lambda \ll m$ the trace terms
\begin{equation}
\mbox{Tr}[(-\Delta+m^2)^{-1} \beta(x)]^n = \int \prod^n_{i=1} \frac{d^Dq_i}
{(2\pi)^D} \frac{\hat{\beta}(q_i-q_{i+1})}{q^2_i+m^2}
\label{94}
\end{equation}
\[
(q_{n+1} = q_1)
\]
reduce to the $n$-fold convolution product of $\hat{\beta}$ at argument zero
times a constant (see (\ref{36})), namely
\begin{equation}
= \Pi_n \hat{\beta}^n_{\ast}(0).
\label{95}
\end{equation}
Thus the elimination (\ref{93}) differs from (\ref{30}), (\ref{31}) only
by the replacement of $\xi_0^{n_1} \eta_0^{n_2}$ by
\begin{equation}
\left(\hat{\xi}_{\ast}^{n_1} \ast \hat{\eta}_{\ast}^{n_2}\right)(0).
\label{96}
\end{equation}
Its solution is
\begin{equation}
\hat{\xi}(p) = \sum^{\infty}_{n=2} a_n \hat{\eta}^n_{\ast}(p)
\label{97}
\end{equation}
with $\{a_n\}$ as in (\ref{32}). Correspondingly the reduced action is
\begin{equation}
S_{\mbox{\scriptsize red}}(\hat{\eta}) = \sum^{\infty}_{n=2} \frac{g_n}{n}
\hat{\eta}^n_{\ast}(0)
\label{98}
\end{equation}
with $\{g_n\}$ as in (\ref{33}). The condition (\ref{34}) for the appearance of
a
singularity $A_k$ remains unchanged.
Momenta and coordinates are scaled by \cite{8,9}
\begin{equation}
p = N^{-\lambda}p^{\prime}
\label{99}
\end{equation}
\begin{equation}
x = N^{+\lambda}x^{\prime}
\label{100}
\end{equation}
where $\lambda >0$ is necessary in order that in the limit $N \rightarrow
\infty$
the domain $\Lambda$ is mapped onto $\bbbr_D$. The fields are renormalized by
\begin{equation}
\phi(x^{\prime}) = C^{(k)} N^{\frac{1+D\lambda}{k+1}} \eta(x)
\label{101}
\end{equation}
so that the term of order $k+1$ in (\ref{91}) acquires a finite coefficient
in the limit $N \rightarrow \infty$. Both $C^{(k)}$ and $\lambda$ are
determined from the kinetic energy term.
We return to
\begin{equation}
\frac12 \int\limits_{|p|<\Lambda} \frac{d^Dp}{(2\pi)^D} \lambda_-(p) N_-(p)
\hat{\eta}(-p)\hat{\eta}(p)
\label{102}
\end{equation}
and expand in the deformation parameters $\{\Theta_n\}$ and in $p$
\begin{eqnarray}
\lambda_-(p)N_-(0) &=& \frac{\Pi^2_1}{2\Pi_2} b(0)^2 \Big\{\frac16(2-\mu)
\frac{p^2}{m^2} \nonumber \\
& &+ \sum^k_{r=2} (r-1) p_r^{(k)}(\mu)\Theta_r \Big\} \mbox{ + higher
order terms}
\label{103}
\end{eqnarray}
where (\ref{19}), (\ref{52}), (\ref{46}) have been used. This implies
(see (\ref{50}))
\begin{equation}
C^{(k)} = p_1^{(k)} (\mu) \left[ \frac{\Pi^2_1}{2\Pi_2} \frac16 (2-\mu)
\frac{1}{m^2}\right]^\frac12
\label{104}
\end{equation}
and \cite{9}
\begin{equation}
\lambda = \frac{k-1}{2(k+1)-D(k-1)}.
\label{105}
\end{equation}
Positivity of $\lambda$ is satisfied as long as
\begin{equation}
D < D_{\infty} = 2 \frac{k+1}{k-1}.
\label{106}
\end{equation}
This is trivially fulfilled for $D < 2$. From the second term in (\ref{103})
we obtain the double scaling limit
\begin{equation}
\lim_{N \rightarrow \infty \atop{\Theta_r \rightarrow 0, \mbox{\scriptsize
{ all r}}}} N^{2\lambda} \left(\sum^k_{r=2}(r-1) p_r^{(k)}(\mu)\Theta_r\right)
=
\frac16 (2-\mu) \frac{M^2}{m^2}.
\label{107}
\end{equation}
Since this limiting procedure is independent of the other double scaling
limits described below (due to the invertibility of the susceptibility matrix)
we can ascribe to $M^2$ any value, in particular any positive value.
The susceptibility matrix $\alpha^{(k)}$ (\ref{56}) enters all other
double scaling limits as usual. For all $3 \le n \le k$ we have
\begin{equation}
\zeta_n = \lim_{N \rightarrow \infty \atop{\Theta_r \rightarrow 0,
\mbox{\scriptsize
{ all r}}}} (C^{(k)})^{-n} N^{\chi_n^{(k)}} \sum^k_{r=2} \alpha^{(k)}_{nr}
\Theta_r
\label{108}
\end{equation}
where the critical indices are now
\begin{equation}
\chi_n^{(k)} = \frac{k+1-n}{(k+1)-\mu(k-1)}
\label{109}
\end{equation}
so that
\begin{equation}
\chi_n^{(k)}(\mu=0) = \sigma_n^{(k)}
\label{110}
\end{equation}
(see (\ref{61})). Moreover we find from (\ref{109})
\begin{equation}
\chi_2^{(k)} = 2\lambda.
\label{111}
\end{equation}
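Relations (\ref{110}) and (\ref{111}) can be confirmed with exact rational arithmetic, assuming the identification $\mu = D/2$ implicit in the ranges quoted below (\ref{51}):

```python
from fractions import Fraction

def lam(k, D):
    """lambda of eq. (105), with integer D for exactness."""
    return Fraction(k - 1, 2 * (k + 1) - D * (k - 1))

def chi(k, n, mu):
    """Critical index chi_n^(k) of eq. (109)."""
    return Fraction(k + 1 - n) / (Fraction(k + 1) - mu * (k - 1))

for k in range(2, 10):
    for D in (0, 1):                     # sample points in 0 <= D < 2
        assert chi(k, 2, Fraction(D, 2)) == 2 * lam(k, D)        # eq. (111)
    for n in range(2, k + 1):
        assert chi(k, n, Fraction(0)) == 1 - Fraction(n, k + 1)  # eq. (110)
```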
If we identify
\begin{equation}
\zeta_2 = \frac12 M^2
\label{112}
\end{equation}
we can incorporate the limit (\ref{107}) into the set of limits (\ref{108})
as the case $n=2$.
Finally we note that (see (\ref{51}), (\ref{101}))
\begin{equation}
F_{k+1} = (C^{(k)})^{-k-1} g^c_{k+1}.
\label{113}
\end{equation}
The partition function is written as a function
\[
Y_{\phi}(\zeta_2,\zeta_3,...,\zeta_k)
\]
of the double scale invariant quantities $\{\zeta_n\}$. Any function of
$Y_{\phi}$ satisfies a renormalization group equation (\ref{66}) with
beta functions obeying (\ref{68}), where, however,
\begin{equation}
{\cal N}^{(k)} = \alpha^{(k),-1} \mbox{ diag} \chi^{(k)} \alpha^{(k)}.
\label{114}
\end{equation}
\section{Dimensions $D \ge 2$}
At $D=2$ the integral $\Pi_1$ (\ref{36}) exhibits a pole of first order
whereas $\Pi_n, n \ge 2$, are holomorphic
\begin{equation}
\Pi_1 = \frac{1}{4\pi} \frac{1}{1-\mu} + \frac{1}{4\pi} \log
\frac{4\pi e^{\Gamma^{\prime}(1)}}{m^2} + O(1-\mu).
\label{115}
\end{equation}
To obtain a regular expression in $2<D<4$ we can simply analytically continue
$\Pi_1$ in $D$:
\begin{equation}
\Pi_1^{\mbox{\scriptsize an}} = \int \frac{d^Dp}{(2\pi)^D}
\left[(p^2+m^2)^{-1}-(p^2)^{-1}\right]
\label{116}
\end{equation}
is the convergent integral representation for this analytic continuation
in $2<D<4$. We can thus renormalize the position of the saddle point
$\sigma_0 \rightarrow \sigma_0^{\mbox{\scriptsize ren}}$
\begin{equation}
\sigma_0^{\mbox{\scriptsize ren}} = \Pi_1^{\mbox{\scriptsize an}}.
\label{117}
\end{equation}
The purely formal subtraction formula
\begin{equation}
\sigma_0^{\mbox{\scriptsize ren}} = \sigma_0 - \sigma_{\infty}
\label{118}
\end{equation}
\begin{equation}
\sigma_{\infty} = \int \frac{d^Dp}{(2\pi)^D} \cdot \frac{1}{p^2} \qquad
{\mbox{(divergent)}}
\label{119}
\end{equation}
suggests how to renormalize coupling constants and mass \cite{17}. We set
\begin{eqnarray}
U^{\prime}(\sigma_0) &=& \sum^{\infty}_{r=2} f_r\sigma_0^{r-1} \nonumber \\
&=& f_1^{\mbox{\scriptsize ren}} + \sum^{\infty}_{r=2} f_r^{\mbox{\scriptsize
ren}}(\sigma_0^{\mbox{\scriptsize ren}})^{r-1}
\label{120}
\end{eqnarray}
where
\begin{equation}
f_n^{\mbox{\scriptsize ren}} = \sum^{\infty}_{r=n} \left(r-1 \atop{n-1}\right)
\sigma^{r-n}
_{\infty} f_r
\label{121}
\end{equation}
and $f_n^{{\mbox{\scriptsize ren}}}, 2 \le n \le k$, are assumed to be finite.
This can be
achieved
when (\ref{45}) is valid by adjusting $\{f_r \, | \, 2 \le r\le k\}$
correspondingly.
We renormalize $\rho_0$ by
\begin{equation}
i\rho^{{\mbox{\scriptsize ren}}} = i\rho_0 - f_1^{{\mbox{\scriptsize ren}}}
\label{122}
\end{equation}
and
\begin{equation}
(m^{{\mbox{\scriptsize ren}}})^2 = m^2-2f_1^{{\mbox{\scriptsize ren}}}.
\label{123}
\end{equation}
Finally we drop the labels "ren" (and "an") and end up with the rule:
replace $\sigma_0$ by $\Pi_1^{{\mbox{\scriptsize an}}}$ and obtain all critical
quantities
in the interval $2<D<4$ by analytic continuation. The consistency of this
rule has still to be investigated.
First we observe from (\ref{103}) that the sign of the kinetic energy term
remains unchanged and $C^{(k)}$ (\ref{104}) stays real. There remains,
however, a problem with the sign of
\begin{equation}
\lambda = \frac12 \chi_2^{(k)}
\label{124}
\end{equation}
((\ref{105}), (\ref{109}), (\ref{112})). We mentioned already that $\lambda$
is positive if $D<D_{\infty}(k)$. On the other hand the field theory
(\ref{91}) is superrenormalizable for $D<D_{\infty}(k)$ and renormalizable
for $D=D_{\infty}(k)$. Thus $D_{\infty}(k)$ is an absolute limit which we
cannot overcome.
In the case $D<D_{\infty}(k)$ we have to subtract some low order Green
functions,
for $D=D_{\infty}(k)$ we need a finite number of subtractions (counter terms)
in the action. The subtractions are implemented via the momentum cutoff
\begin{equation}
|p^{\prime}| \le N^{\lambda} \Lambda, \quad D < D_{\infty}(k).
\label{125}
\end{equation}
For $D=D_{\infty}(k)$ the limit $N \rightarrow \infty$ cannot be performed
at all but $N$ has to be renormalized
\begin{equation}
N^{\prime} = N^{\frac{1}{2(k+1)-D(k-1)}}, \quad D < D_{\infty}(k)
\label{126}
\end{equation}
\begin{equation}
|p^{\prime}| \le (N^{\prime})^{k-1} \Lambda
\label{127}
\end{equation}
and then the limit $N^{\prime} \rightarrow \infty, D \rightarrow D_{\infty}$
is executed. In any case we conclude that:
\begin{enumerate}
\item the parameter $\Lambda$ introduced in the scaling (\ref{99}), (\ref{100})
transforms the $N \rightarrow \infty$ limit to the UV-cutoff removal limit;
\item the double scaling limit makes sense only if the necessary subtractions
dictated by standard renormalization theory are performed in the limiting
procedure.
\end{enumerate}
New insights on the renormalization procedure in general cannot be expected.
For $D=2$ we replace analytic regularization by subtraction of the pole
term, i.e. from (\ref{115}) we derive
\begin{equation}
\Pi_1^{{\mbox{\scriptsize ren}}} = \frac{1}{4\pi} \log \frac{\mu^2}{m^2}.
\label{128}
\end{equation}
Identifying $\sigma_{\infty}$ in (\ref{118}) with the pole term (\ref{115})
we can proceed exactly as for $D > 2$ and renormalize coupling constants and
mass. The calculation of the critical quantities has to be redone in view of
(\ref{128}) and we will outline the results.
For $n \geq 2$ we obtain
\begin{equation}
\Pi_n = \frac{(m^2)^{1-n}}{4\pi(n-1)}.
\label{129}
\end{equation}
Inserting this into an equivalent version of (\ref{41}) we obtain
\begin{equation}
U^{(n)}(\Pi_1) = \frac12 m^2 (-4\pi)^{n-1}
\label{130}
\end{equation}
and correspondingly from (\ref{40}), (\ref{129})
\begin{equation}
v_n = - \left(- \log \frac{\mu^2}{m^2}\right)^{n-2}.
\label{131}
\end{equation}
The critical coupling constants follow from (\ref{130})
\begin{equation}
f^c_n = \frac12 m^2 \frac{(-4\pi)^{n-1}}{(n-1)!} j_{k-n} (\exp; \log
\frac{\mu^2}{m^2})
\label{132}
\end{equation}
where
\begin{equation}
j_n(f;z)
\label{133}
\end{equation}
is the Taylor polynomial ("jet") of degree $n$ for the function $f$ with
variable $z$. From (\ref{132}) we deduce
\begin{equation}
b(0) = \frac{j_{k-1}(\exp; \log \frac{\mu^2}{m^2})-1}{\log\frac{\mu^2}{m^2}}
\label{134}
\end{equation}
and
\begin{equation}
\frac{g^c_{k+1}}{k+1} = \frac{m^2}{8\pi} \frac{(1-j_{k-1}(\exp; \log
\frac{\mu^2}{m^2}))
^{k+1}}{(k+1)!}.
\label{135}
\end{equation}
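The jet polynomial entering (\ref{132})--(\ref{135}) is just a truncated exponential series; a minimal sketch, with the argument $z = \log\frac{\mu^2}{m^2}$ treated as a plain floating-point input:

```python
from math import factorial

def jet_exp(n, z):
    """Taylor polynomial ("jet", eq. (133)) of degree n of exp at z."""
    return sum(z**m / factorial(m) for m in range(n + 1))

def b0(k, z):
    """b(0) at D = 2, eq. (134), with z standing for log(mu^2/m^2)."""
    return (jet_exp(k - 1, z) - 1.0) / z

# Degree-2 jet at z = 1: 1 + 1 + 1/2 = 2.5, hence b(0) = 1.5 for k = 3.
assert abs(jet_exp(2, 1.0) - 2.5) < 1e-12
assert abs(b0(3, 1.0) - 1.5) < 1e-12
```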
The coefficients $\{a_n\}$ of the elimination function $H$ (\ref{32}) at the
critical
point are obtained from (\ref{76}) and (\ref{131})
\begin{eqnarray}
a_{n+1} &=& (-1)^{n+1} \frac{b(0)^{n+2}}{(n+1)!} (\log \frac{\mu^2}{m^2})^n
\nonumber \\
& & \cdot \sum^n_{\ell=1} (-1)^{\ell} \frac{K_{n\ell}}{(1+b(0)^2)^{\ell}}
\label{136}
\end{eqnarray}
where $K_{n\ell}$ are integers
\begin{equation} \label{137}
K_{n\ell} = (n+\ell)! \sum_{{\mbox{\scriptsize partitions of $n$}} \atop
{\mbox{\scriptsize of length $\ell$}}} \; \prod_{j=1}^{\infty}
\frac{1}{{n_j}!} \left( \frac{1}{(j+1)!} \right)^{n_j}
\end{equation}
($n_j$ is the repetition number of $j$ in the partition).
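The integrality of the $K_{n\ell}$ asserted above can be spot-checked with exact rational arithmetic; the partition enumeration below is a straightforward recursive sketch:

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing lists of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def K(n, ell):
    """K_{n,ell} of eq. (137), computed with exact rationals."""
    total = Fraction(0)
    for parts in partitions(n):
        if len(parts) != ell:
            continue
        mult = {}                          # repetition numbers n_j
        for j in parts:
            mult[j] = mult.get(j, 0) + 1
        prod = Fraction(1)
        for j, nj in mult.items():
            prod *= (Fraction(1, factorial(nj))
                     * Fraction(1, factorial(j + 1)) ** nj)
        total += prod
    return factorial(n + ell) * total

# Spot checks: K_{1,1} = 1, K_{2,1} = 1, K_{2,2} = 3, K_{3,2} = 10 ...
assert K(1, 1) == 1 and K(2, 1) == 1 and K(2, 2) == 3 and K(3, 2) == 10
# ... and all K_{n,ell} within reach are indeed integers.
for n in range(1, 8):
    for ell in range(1, n + 1):
        assert K(n, ell).denominator == 1
```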
Instead of (\ref{77}) we reduce the susceptibility matrix by
\begin{equation} \label{138}
\alpha_{nr}^{(k)} = (\Pi_1 b(0))^{n} (4 \pi)^{r} f_{r}^{c}
{\tilde{\alpha}}_{nr}^{(k)}
\end{equation}
and for the reduced matrix (\ref{78})-(\ref{83}) remain valid. Instead of
(\ref{83}) we find an analogous formula with $v_{j+2}$ replaced by $(-1)^{j+1}$
leading to
\begin{equation} \label{139}
S_{nr}^{(k)} = \sum_{\ell=1}^{n-r} (-1)^{n+\ell-r} (1-\frac{n}{n+\ell-1}
\delta_{r2}) \frac{(n+\ell-1)!}{n!}
\frac{K_{n-r,\ell}}{(n+\ell-r)!} (1+b(0)^{2})^{-\ell}.
\end{equation}
Since the double scaling limit for $D \geq 2$
necessitates regularization in the UV momentum domain, as dictated by the known
renormalization theory, we refrain from dealing with the case $D=4$ $(k=3)$
here.
It has been argued that a double scaling limit does not exist at $D=4$
\cite{18,19}.
We emphasize that $D=4$ is an "exceptional dimension" in the sense described
below
($n=2$ in (\ref{142}), $k_{\mbox{\scriptsize max}}=0$ in (\ref{144})). This
also hints at the nonexistence of the standard double scaling limit.
Now we return to the second condition (\ref{34}) in the case $2 \leq D <
\infty$.
In fact from (\ref{51}) we can see that $g_{k+1}^{c}$ vanishes if and only if
\begin{enumerate}
\item $\left( \frac{2-\mu}{1-\mu} \right)_{k-1} = 0$, which is fulfilled for
\begin{equation} \label{140}
\mu = \frac{n}{n-1}, \quad 2 \leq n \leq k
\end{equation}
\item $p_{1}^{(k)}(\mu) = 0$, which occurs only if $k$ is odd and
\begin{equation} \label{141}
\mu = \frac{k+1}{k}
\end{equation}
\end{enumerate}
(that there are no other zeros of $p_{1}^{(k)}(\mu)$ in the interval $1 < \mu <
2$
has been verified by computer up to $k=20$).
Thus there exist exceptional dimensions
\begin{equation} \label{142}
\mu_n = \frac{n}{n-1}, \quad n \geq 3 \; (n \in \bbbz)
\end{equation}
for which the type of the singularity $A_k$ has $k$ not constrained by the
renormalizability limit (see (\ref{106}))
\begin{equation} \label{143}
k \leq k_{\mbox{\scriptsize ren}} = \frac{\mu+1}{\mu-1}
\end{equation}
but by the stronger bound
\begin{equation} \label{144}
k \leq k_{\mbox{\scriptsize max}} = \left\{ {{n-1, \quad n \mbox{ odd,}} \atop
{n-2, \quad n \mbox{ even.}}} \right.
\end{equation}
In particular the physically very interesting case $D=3$ is in this set of
exceptional dimensions with $n=3$ and
\begin{equation} \label{145}
k_{\mbox{\scriptsize max}} = 2.
\end{equation}
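A tiny numeric sketch of the bounds (\ref{143}), (\ref{144}) at the exceptional dimensions (\ref{142}); evaluated at $\mu = \mu_n$, the renormalizability bound reduces to $k_{\mbox{\scriptsize ren}} = 2n-1$:

```python
def k_max(n):
    """Bound (144) at the exceptional dimension mu_n = n/(n-1)."""
    return n - 1 if n % 2 == 1 else n - 2

def k_ren(n):
    """Renormalizability bound (143) evaluated at mu = n/(n-1)."""
    mu = n / (n - 1)
    return (mu + 1) / (mu - 1)        # equals 2n - 1 exactly

# D = 3 (n = 3, mu = 3/2): k_max = 2, well below k_ren = 5.
assert k_max(3) == 2 and abs(k_ren(3) - 5.0) < 1e-9
# D = 4 (n = 2): k_max = 0, as quoted in the text.
assert k_max(2) == 0
```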
\section{Remark: The unstable cases}
All actions (\ref{91}) for which
\begin{equation} \label{146}
\mbox{sign } F_{k+1}^{c} = +1, \quad k \mbox{ odd}
\end{equation}
is {\underline{not}} satisfied are unstable field theories if interpreted
conventionally. However, our derivation of these conditions from saddle point
integrals implies that the fields $\phi$ range over complex contours. For
example, for $k=2$, these are the standard Airy function contours
\begin{eqnarray} \label{147}
C &=& C_{0} -\frac{1}{2} (C_{\frac{1}{3}} + C_{\frac{2}{3}}) \qquad (\epsilon =
+1) \\
\label{148} C &=& -C_{\frac{1}{2}} +\frac{1}{2} (C_{\frac{1}{6}} +
C_{\frac{5}{6}})
\qquad (\epsilon = -1)
\end{eqnarray}
where $C_q, \; q \in \bbbq$, denotes the oriented ray along the argument
\begin{equation} \label{149}
\mbox{arg } C_q = 2 \pi q
\end{equation}
from zero to infinity. Again from $k=2, \; D=0$ we know that there exists a
domain of parameters
\begin{equation}
(\zeta_2, \zeta_3, \ldots, \zeta_k, F_{k+1})
\end{equation}
where the partition function is positive and another one where it oscillates
so that both domains are separated by a hypersurface on which the partition
function vanishes. Whether in the domain of positivity the action defines a
reasonable renormalizable field theory is unknown. \newpage
Q: How can I release the keyboard focus from a fullscreen Citrix Receiver? When Citrix Receiver is fullscreen, Alt+Tab and all other hotkeys are forwarded to the server instead of being interpreted on the client.
One can use the mouse to open the Citrix menu at the top middle of the screen, switch the Citrix Receiver window from fullscreen to a window, and then use Alt+Tab to change keyboard focus to another application on the client PC, but this is tedious.
I'm looking for an easier way to switch from Citrix Receiver to another client program with a keyboard combination (hotkey). The Citrix Receiver user's manual says nothing about keyboard shortcuts.
A: Press Ctrl+F2
There will be no visual feedback, but when you hit Alt+Tab the host OS will pick it up.
This works on Linux.
Newbern may refer to:
Newbern (Alabama)
Newbern (Tennessee)
---
layout: post
categories: blog
title: "Osservazione Astronomica"
date: 2017-05-03 12:00
author: Giorgia Pompeo, Manuela Tulli
redirect_from:
---
On Wednesday, May 3, 2017, the "Osservazione Astronomica" (Astronomical Observation) event took place at the University of Rome Tor Vergata, held at the students' request and in collaboration with Prof. Francesco Berrilli.
At 6:00 p.m. we gathered in the Grassano lecture hall, where Prof. Berrilli outlined the evening's program and shared a few curiosities about the Sun. We then moved to the faculty terrace, where two telescopes fitted with suitable filters were waiting for us to begin the solar observation. The filters let us see the Sun in "red" and in "white", allowing us to observe sunspots and the photosphere!
Afterwards, Prof. Berrilli and other lecturers gave several talks on "The South Pole Solar Observatory", the Moon, Saturn, Jupiter and exoplanets!
The "South Pole Solar Observatory" project, of which Prof. Berrilli is a member, will begin operations for astrophysical research in the field of solar gravity waves and in space weather.
We had dinner and finally observed the Moon, and Jupiter with its four satellites, through the telescopes. Feedback from both lecturers and students was positive; it is an experience worth repeating!
International flags, major cultural institutions and public art works line Benjamin Franklin Parkway, the grand boulevard bisecting Logan Square. Often referred to as "Philadelphia's Champs-Elysees", the Parkway offers magnificent, wide-screen vistas from grand City Hall to the Philadelphia Museum of Art's famous "Rocky" steps. At the center of this gorgeous, quintessentially Philadelphia scene lies Logan Circle, one of Philadelphia's original five city parks with the elegant Swann Memorial Fountain as its regal showpiece.
A dream for arts and culture enthusiasts, Logan Square has an important museum collection for every day of the week. The brand-new Barnes Foundation, the Rodin Museum—home to the largest Rodin collection outside of Paris—and the Philadelphia Museum of Art offer unparalleled riches in Impressionist and early modern art, while the Academy of Sciences and Franklin Institute nurture the left brain with cutting-edge planetarium shows, dinosaur exhibits, and more. Lovers of literature will also enjoy access to the sprawling Parkway Central Library. Opened in 1927, the grandly columned Beaux Arts structure is home to more than 7 million items and many special collections and regularly hosts talks by major writers and public figures.
With abundant row houses, town homes, new condos and residential towers overlooking the Parkway, Logan Square offers attractive housing options for every lifestyle. A prime asset is the neighborhood's central location, as Center City is on your doorstep here, along with easy highway and public transit access when venturing further afield. Of course with the abundant cultural riches, frequent parades and street festivals, and surrounding parks—including LOVE Park, with its iconic Robert Indiana sculpture, and the bloom-rich Sister Cities Park opposite Logan Square—why would you ever leave?
L. Arias, April 14, 2015
On Tuesday and Wednesday Costa Rica's legal team will present its closing arguments on the alleged invasion of territory by the Nicaraguan military, and on the construction of artificial canals in the disputed area. (Courtesy of ICJ)
Final hearings start Tuesday at the International Court of Justice (ICJ) in the long-running border dispute between Costa Rica and Nicaragua.
During the final stage of the legal process at The Hague, Netherlands, which started in 2010, legal teams from each country will present their closing arguments. The hearings are scheduled to last until May 1.
ICJ justices in 2013 decided to join the claims of both countries into a single case, in order to save time and to ensure consistent final rulings on both cases.
A date for the ICJ's ruling has yet to be announced. But experts from both countries have said they believe it will come later this year.
What are they disputing?
Costa Rica v. Nicaragua
In 2010 Costa Rica filed the first complaint against Nicaragua for alleged invasion by military personnel of a 3-square-kilometer territory both countries claim as their own.
Costa Rica calls the site Isla Portillos and Nicaragua calls it Harbour Head. The small area is protected by the RAMSAR Convention on Wetlands of International Importance.
In 2013 Costa Rica expanded its complaint and accused Nicaragua of carrying out dredging works that affected its territory, and of constructing artificial canals in the disputed area with the purpose of connecting the San Juan River with the Caribbean Sea. The ICJ had ordered both countries to keep out of the disputed area.
Nicaragua v. Costa Rica
In 2011 Nicaragua responded by filing a complaint against Costa Rica for alleged environmental damage to the San Juan River, a natural border between the two countries. The damage, Nicaragua claimed, resulted from the construction of a 160-kilometer road, Route 1856, that runs parallel to the river.
Costa Rica built the road in response to complaints from residents of border communities who are forbidden by the Nicaraguan military from using the San Juan River for transportation.
What happens at the final hearings?
The first week of hearings will address the Costa Rica v. Nicaragua case. Costa Rica's legal team will have two days to present its closing arguments and Nicaragua's team will have two days to refute them.
Nicaraguan lawyers will open the second week with two days of oral arguments on Costa Rica's alleged environmental damage, and then Costa Rica will have two days to respond.
The third and last week is open for rejoinders. April 28 and 29 will be dedicated to the Portillos Island case, while April 30 and May 1 are scheduled for the border road case.
Costa Rica's closing arguments will include testimony by two experts from the United Kingdom: Colin Thorne, Professor of Physical Geography at the University of Nottingham and professor Ian Cowx, from University of Hull's International Fisheries Institute.
They both will present the results of studies carried out at Costa Rica's request. The studies are part of the 5,846 pages of evidence the legal team filed at earlier stages of the trial.
Nicaragua has not yet disclosed if it will file new evidence.
Who are the legal teams?
Costa Rica's group of legal advisers during the first week will be headed by Foreign Minister Manuel González Sanz. Then the country's representatives before The Hague court, Edgar Ugalde and Sergio Ugalde, will lead the team. The legal group also includes Foreign Ministry lawyers Arnoldo Brenes, Shara Duncan and Richard Otárola, and three lawyers from Switzerland, England and the U.S.: Marcelo Kohen, Samuel Wordsworth and Kate Parlett.
Nicaragua's legal team will be led by Ambassador to the Netherlands Carlos Argüello Gómez, plus U.S. attorneys Paul Reichler and Lawrence Martin and Professor Alain Pellet of the University of Paris.
What has the court already ruled?
Following Costa Rica's first claim, the ICJ imposed injunctions in March 2011 ordering both countries to completely clear out of the disputed area pending a final ruling.
The measure was reaffirmed in November 2013, following Costa Rica's second complaint against Nicaragua for violating the court's orders by dredging two canals inside the disputed territory. Costa Rica claimed the canals caused environmental damage, a claim later confirmed by RAMSAR experts as "irreparable damage."
Nicaragua complied with the decision, stopped dredging works and started repair work on the canals.
In April 2013 the Court rejected four separate complaints from Nicaragua seeking to halt construction of the border road in Costa Rica. At the time, the ICJ stated that "Nicaragua failed to demonstrate the project's damage to the San Juan River, or alleged dangers to flora and fauna inside Nicaraguan territory."
Justices also rejected Nicaragua's petition to strip Costa Rica of its rights to the jointly-held bay of San Juan del Norte. They also rejected Nicaragua's petition for navigation rights on the Colorado River, located entirely within Costa Rican territory.
Nicaragua claims that because the majority of San Juan River waters flow into the Colorado River, that country should have navigation rights on the river, in exchange for Costa Rican navigation rights on the San Juan.
As a result of these rulings, the ICJ announced it would consolidate both countries' complaints into a single case to expedite the process.
What's happened recently on the ground?
Last year Nicaragua's dredging project leader and former guerrilla Edén Pastora, known as "Comandante Cero," said Nicaragua planned to bring 15 more dredges to the San Juan "in order to continue cleaning the river."
Earlier this month, Costa Rica blocked one of the artificial canals that Nicaragua opened in 2013.
What is Costa Rica expecting from the ICJ rulings?
Costa Rican President Luis Guillermo Solís wants the international court's ok in order to resume construction of Route 1856. Last year Solís vowed to finish the scandal-ridden road project, telling La Nación that the previous administration had "left him a mess."
Border communities say they need the road because they're no longer allowed to freely use the San Juan River as they had traditionally done before the conflict. In recent months Costa Rican officials have received several reports of Nicaraguan military repeatedly forbidding Costa Ricans to travel on the river.
In a 2009 ICJ ruling, the court prohibited armed Costa Rican police officers from using the river for transit. But members of the Nicaraguan army have detained residents, tourists, tourism employees and even scientific groups and international observers from RAMSAR who traveled to the area to assess the alleged damage in Costa Rican territory.
What about the countries' other cases before The Hague?
The final decision in the Costa Rica-Nicaragua case could pave the way for rulings on other territorial disputes involving Nicaragua. Besides Costa Rica, the country has cases pending before the ICJ against Honduras, El Salvador, Panama and Colombia.
In all those cases Nicaragua seeks rights over territories it considers its own.
In February 2014, Costa Rica filed a new lawsuit before the ICJ to define maritime borders with Nicaragua in the Caribbean and Pacific Ocean, where Nicaragua has been looking to expand its territory. The case will be tried in a new separate process at The Hague.
Nicaragua does have one ICJ victory under its belt: in 1986 the court ruled in favor of Nicaragua on its claim of illegal U.S. military and paramilitary occupation of Nicaraguan territory.
The Tico Times reporter Zach Dyer contributed to this story.
L. Arias
Reporter | The Tico Times
<form action="<?= BASE_URL ?>admin/batch/<?= $id ?>/edit" method="post" id="qcsform">
<fieldset>
<legend>Settings</legend>
<label for="state">State</label>
    <select name="state" id="state">
<?php if ($state == 'edit'): ?>
<option value="edit" <?= ($state == 'edit' ? 'selected="selected"' : '') ?>>
<?= Batch::readableState('edit') ?>
</option>
<?php endif; ?>
<option value="active" <?= ($state == 'active' ? 'selected="selected"' : '') ?>>
<?= Batch::readableState('active') ?>
</option>
<?php if ($state == 'active' || $state == 'post'): ?>
<option value="post" <?= ($state == 'post' ? 'selected="selected"' : '') ?>>
<?= Batch::readableState('post') ?>
</option>
<?php endif; ?>
</select>
<?php if ($state == 'edit'): ?>
<p>Changing from "<?= Batch::readableState('edit') ?>" to "<?= Batch::readableState('active') ?>" deletes all result data!</p>
<?php endif; ?>
</fieldset>
<?php if ($state <> 'edit'): ?>
<button id="button_save">Save</button>
<?php endif; ?>
<fieldset>
<legend>QC-Script<?php if ($state <> 'edit'): ?> (read only)<?php endif; ?></legend>
        <textarea id="code" name="qcs"><?= htmlspecialchars($qcs) ?></textarea>
</fieldset>
<?php if ($state == 'edit'): ?>
<button id="button_save">Save</button>
<?php endif; ?>
</form>
<script>
var editor = CodeMirror.fromTextArea(document.getElementById("code"), {
lineNumbers: true,
theme: 'ambiance',
lineWrapping: true,
<?php if ($state <> 'edit'): ?>
readOnly: true,
<?php endif; ?>
});
</script>
\section{Introduction}
Natural language processing has continuously gained attention in recent years, not only for academic research purposes but also for real-world use cases in various industrial sectors. Advanced neural architectures achieve significant improvements on difficult language understanding problems, thus enabling various applications such as named-entity recognition \cite{lample2016neural}, semantic role labeling \cite{he2017deep}, sentiment analysis or opinion mining \cite{zhang2018deep, wang2016attention}, machine translation \cite{wu2016google}, etc.
Thanks to recent advances in language understanding with the help of deep learning, a large number of machine learning projects have turned from academic research outcomes into industrial products. For instance, the neural machine translation system \cite{wu2016google} now delivers very high-quality translations approaching human-level accuracy; well-trained neural models have been offered to business users as prediction services on the cloud to perform difficult tasks such as topic extraction and sentiment analysis. These technological advances have created the need for automatic language analysis in marketing, financial institutions, and other sectors.
However, while typical applications can share a trained model to perform universal tasks such as speech-to-text or translation, most applications require training a custom model with company-owned data and rely strongly on domain-specific knowledge. For example, an insurance company might be interested in investigating customer comments on topics related to the processing time of insurance claims, while for an e-commerce website, reviews about the quality of the goods are more interesting to dive into. It is challenging to analyze these problems without incorporating the specificities of the data and training a custom model on them.
In addition, most advanced neural network architectures require a considerable amount of labeled data for training, and labeling is often tedious and time-consuming. The challenge of labeling data can significantly slow down the development of machine-learning-enabled projects for companies.
In this paper, we address customer review understanding problems within a deep learning framework, with a particular focus on two methods that can accelerate training: adopting a pretrained model and active learning (in Sec.\;\ref{method}). The preliminary results (in Sec.\;\ref{exper}) show that the iterative process of training robust neural network models can be significantly shortened, thus saving a large amount of cost. We provide a conclusion and further directions at the end.
\section{Model architecture}
\label{method}
\subsection{Recurrent neural network with pretrained embedding}
Recurrent neural networks have been widely used for sequence-formed data, owing to their capability of taking inputs of various lengths and learning the dependencies among elements at different positions. The applications of recurrent neural networks range from time series analysis \cite{cui2018modelling} and speech recognition \cite{lin2019lstm} to natural language processing \cite{wang2016attention}, and they continue to gain attention in different research domains.
In the field of natural language processing, one key component is the embedding, or so-called language representation. The objective is to map words into fixed-length vectors, which can later be processed and fed into classifiers. A straightforward idea is one-hot encoding, which represents each word as a sparse vector over the dictionary, with a one indicating its position among all possible words. The distribution of all the words inside a sentence can then be used directly to represent that sentence; such an approach is often referred to as bag-of-words \cite{mikolov2013efficient}. Improvements have been added to this architecture by training a matrix projection over the words \cite{joulin2016bag}. Recent successes focus on embeddings from pretrained language models \cite{devlin2018bert}, which allow a semantic representation of words or tokens in sentences, and the aggregation of embedded vectors often leads to better results than word distributions with one-hot encoding.
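As a toy illustration of the one-hot/bag-of-words idea described above (the vocabulary below is hypothetical and only for demonstration, not taken from our data):

```python
import numpy as np

def bag_of_words(sentence, vocabulary):
    """Represent a sentence as the sum of one-hot word vectors.

    Each word maps to a sparse vector with a single 1 at its index in
    the vocabulary; summing them gives per-word counts for the sentence.
    """
    vec = np.zeros(len(vocabulary), dtype=int)
    for word in sentence.lower().split():
        if word in vocabulary:
            vec[vocabulary[word]] += 1
    return vec

# Hypothetical toy vocabulary.
vocabulary = {"the": 0, "claim": 1, "was": 2, "processed": 3, "slowly": 4}
vec = bag_of_words("The claim was processed slowly", vocabulary)
```

Out-of-vocabulary words are simply dropped here; real systems would map them to a dedicated unknown token.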
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{rnn.png}
\caption{The architecture of our recurrent neural network model.}
\label{fig:model}
%
\end{figure}
Pretrained models, such as BERT \cite{devlin2018bert} and ELMo \cite{peters2018deep}, have shown very promising results on various tasks. With access to the virtually unlimited text written on the Internet, these models can capture the semantic meaning of the language without being dedicated to specific tasks. The outputs of a pretrained model can later be used as preliminary inputs for various tasks, including classification, named entity recognition, question answering, and common-sense language inference.
In our architecture, as illustrated in Fig.\;\ref{fig:model}, we use BERT \cite{devlin2018bert} as our pretrained embedding module. The outputs of the BERT embedding on the tokens (words) are then fed into a recurrent neural network; we use long short-term memory (LSTM) cells \cite{hochreiter1997long} to take into account the long-range dependencies among words. The output of the last LSTM cell is coupled with fully connected layers, with a sigmoid activation function at the final output that allows multi-label outputs. Unlike a softmax layer, which normalizes the output into probabilities assigned to each class (commonly used for single-label classification problems), the sigmoid-activated layer outputs an unnormalized probability in $[0,1]$ for each class, with output values close to $1$ indicating a high probability that the underlying sentence belongs to those classes. The multi-label loss function of our recurrent network is computed as the sum of the binary cross-entropy between the prediction and the true label for each class:
\begin{equation}
\text{\textit{Multi-label loss}} = -\sum_{i = 1}^{C} \left[ y_i \log(\hat{y}_i) + (1- y_i) \log(1 - \hat{y}_i ) \right]
\end{equation}
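For concreteness, this loss can be sketched in plain NumPy. This is an illustrative re-implementation only, not the TensorFlow code used for the experiments; note that each term is the standard (negated) binary cross-entropy:

```python
import numpy as np

def multi_label_loss(y_true, y_pred, eps=1e-12):
    """Sum of binary cross-entropy terms over all C classes.

    y_true: 0/1 vector of length C; y_pred: sigmoid outputs in (0, 1).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8])
loss = multi_label_loss(y_true, y_pred)
```

Each class contributes independently, which is what allows several labels to be active for the same sentence.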
\subsection{Active learning strategy}
In the supervised learning framework, a large amount of labeled data is often required to achieve excellent performance, and this is especially true for training a neural network. However, labeling data can be time-consuming and increases the cost of machine learning projects.
In the conventional data collection process (as illustrated in Fig.\;\ref{fig:active}), human labeling tasks are conducted in a random fashion. The experts use their domain knowledge to label randomly selected data samples from the database and provide the labeled data for training. Further improving the performance often requires a larger amount of labeled data, usually obtained by labeling additional randomly selected batches. Such an iterative process is highly inefficient, as the continuous learning process is cut into disjoint groups of subtasks without communication between them.
In contrast to the conventional data selection method, active learning offers an alternative strategy for collecting the supervised training data, as illustrated in Fig.\;\ref{fig:active}. Active learning strategies choose the samples that need to be labeled, with the aim of maximizing the machine learning algorithm's performance \textit{w.r.t.}~each incremental labeled dataset. These strategies include least confidence \cite{culotta2005reducing}, Bayesian active learning by disagreement \cite{gal2016theoretically}, core-set selection \cite{sener2017active}, etc., and all of them can be defined within a common framework: train the model with the existing labeled data, use the trained model to select (under the proposed measurement) candidates from a pool of unlabeled data, label the selected candidates, and train a new model with the augmented training dataset, as illustrated in Fig.\;\ref{fig:active}.
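The common framework just described (train, select, label, retrain) can be sketched as a generic pool-based loop. Here `train`, `score`, and `oracle_label` are placeholders we introduce for illustration, not functions from our implementation:

```python
def active_learning_loop(labeled, unlabeled, train, score, oracle_label,
                         batch_size, rounds):
    """Generic pool-based active learning loop.

    labeled:      list of (x, y) pairs used for training.
    unlabeled:    pool of unlabeled instances x.
    train:        callable(labeled) -> model.
    score:        callable(model, x) -> informativeness (higher = select first).
    oracle_label: callable(x) -> y, i.e. the human expert.
    """
    model = train(labeled)
    for _ in range(rounds):
        # Rank the pool by informativeness and pick the top batch.
        ranked = sorted(unlabeled, key=lambda x: score(model, x), reverse=True)
        batch, unlabeled = ranked[:batch_size], ranked[batch_size:]
        labeled = labeled + [(x, oracle_label(x)) for x in batch]
        model = train(labeled)  # retrain on the augmented set
    return model, labeled

# Toy demonstration with stand-in callables: the "model" is just the
# training-set size, larger x is treated as more informative, and the
# oracle labels x with 2x.
model, labeled_out = active_learning_loop(
    labeled=[(0, 0)], unlabeled=[1, 2, 3, 4],
    train=lambda data: len(data),
    score=lambda model, x: x,
    oracle_label=lambda x: 2 * x,
    batch_size=2, rounds=1)
```

Any concrete strategy from the list above corresponds to a particular choice of `score`.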
In this paper, in order to show the effectiveness of adopting the active learning framework, we use one straightforward uncertainty-based strategy in the multi-label classification case. The uncertainty score is measured by:
\begin{equation}
\text{\textit{Uncertainty score}} = 1 - \max_{i}(\hat{y}_i)
\label{eq:certain}
\end{equation}
where we choose the unlabeled data with the lowest predicted probabilities among all classes. Intuitively speaking, the model can improve itself by seeing more diverse samples that are not semantically similar to the training set or whose labels it cannot confidently predict. In such a setting, the model selects the data instances that are difficult to assign to any class (uncertainty) for interactive labeling with human experts.
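This selection rule amounts to picking the pool instances with the smallest maximal predicted probability. A sketch follows, where `predict` stands in for the trained model's per-class sigmoid outputs and the toy probabilities are made up for illustration:

```python
import numpy as np

def select_most_uncertain(pool, predict, k):
    """Return the k pool instances with the highest
    uncertainty score 1 - max_i(y_hat_i)."""
    scores = [1.0 - np.max(predict(x)) for x in pool]
    return list(np.argsort(scores)[::-1][:k])

# Toy pool: three "instances" with precomputed class probabilities.
pool = [0, 1, 2]
probs = {0: [0.95, 0.1], 1: [0.4, 0.35], 2: [0.7, 0.6]}
predict = lambda x: np.array(probs[x])
chosen = select_most_uncertain(pool, predict, k=1)
```

Instance 1 is selected because no class probability exceeds 0.4, i.e. the model is least confident about it.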
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{active.png}
\caption{Traditional data collection framework \textit{v.s.} active learning framework.}
\label{fig:active}
%
\end{figure}
\section{Experiments and discussions}
\label{exper}
We evaluate the proposed method using a real-world customer review dataset, with 6929 instances for training and 1456 instances for evaluation. The evaluations are conducted on two separate multi-label classification tasks: aspect categorisation (13 classes) and sentiment analysis (2 classes), as shown in Tab.\;\ref{tab:data}.
\begin{table}[!htb]
\centering
\small
\caption{\small Number of collected data for training and validation for multi-label classification: aspect categorisation and sentiment analysis.}
\begin{tabular}{l|c|c}
\hline
\textbf{Aspect category} & \textbf{ training} & \textbf{ validation} \\
Internet usage & 330 & 67 \\
Global Management & 2562 & 525 \\
Loyalty & 1078 & 203 \\
Contract & 347 & 61 \\
Financial & 776 & 184 \\
Accessibility & 730 & 129 \\
Reception & 959 & 237 \\
Empathy & 1144 & 184 \\
Information provided & 1184 & 215 \\
Processing time & 1845 & 379 \\
Visibility & 603 & 147 \\
Expert & 442 & 92 \\
Repairing & 427 & 94 \\
\hline
\hline
\textbf{Sentiment polarity} & & \\
Positive & 8254 & 1664 \\
Negative & 4173 & 853 \\
\hline
\hline
\textbf{Total instances} & 6929 & 1456
\end{tabular}
\label{tab:data}
\end{table}
We evaluate three different settings: (1) CNN embedding \cite{cui2018modelling,zhang2015character} with random samples; (2) BERT pretrained embedding with random samples; (3) BERT pretrained embedding with samples selected by active learning.
All three settings are connected to a recurrent neural network (two layers of LSTM) and a fully connected layer with a sigmoid activation function for multi-label outputs, implemented with the TensorFlow library \cite{45381} in Python.
We report micro f1 scores under different training sizes in Fig.\;\ref{fig:results}. The micro f1 score is calculated in a multi-label situation using the micro precision and micro recall summed over all classes, as follows:
\begin{equation}
\small
\begin{split}
\text{micro precision} = & \quad \frac{\sum_{i = 1}^{C} \text{true positive}_i}{\sum_{i = 1}^{C} \text{true positive}_i + \sum_{i = 1}^{C} \text{false positive}_i} \\
\text{micro recall} = & \quad \frac{\sum_{i = 1}^{C} \text{true positive}_i}{\sum_{i = 1}^{C} \text{true positive}_i + \sum_{i = 1}^{C} \text{false negative}_i}
\end{split}
\end{equation}
\begin{equation}
\small
\text{micro f1 score} = \quad 2 \times \frac{\text{micro precision} \times \text{micro recall}}{\text{micro precision} + \text{micro recall}}
\end{equation}
This choice of measures follows the standard in the literature for multi-label classification \cite{pontiki2016semeval}, especially in the case of highly unbalanced classes. We report the comparison in Fig.\;\ref{fig:results} using the micro f1 score averaged over three independent experiments.
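The micro-averaged measures above can be computed directly from the counts summed over all classes. The following NumPy sketch is illustrative only; it operates on $0/1$ label matrices:

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for multi-label 0/1 matrices of shape (N, C).

    Counts are summed over all classes before computing precision and
    recall, so frequent classes dominate, which is the standard choice
    for unbalanced multi-label data.
    """
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])
score = micro_f1(y_true, y_pred)
```

With one missed label and one spurious label out of three true positives, both micro precision and micro recall are 2/3 here, and so is the micro f1.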
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{sentiment.png}
\includegraphics[width=0.46\textwidth]{categorisation.png}
\caption{The evaluations on the multi-label classification tasks: aspects categorisation (13 classes) and sentiment analysis (2 classes), the reported f1 score is measured with incremental training sizes.}
\label{fig:results}
\end{figure}
As we can see in Fig.\;\ref{fig:results}, in the case of sentiment classification, the network with CNN embedding yields the worst performance across all training sample sizes, because training the embedding with largely insufficient data leads to ineffective language representations.
The gap between the self-trained embedding and the pretrained BERT embedding is even larger on the more challenging multi-label aspect categorisation task.
When comparing the data selection frameworks, we can see in both tasks that the active learning selection performs better. In other words, to achieve the same performance, the active learning framework needs much less data.
Please note that the reported f1 scores with active learning are based on one straightforward selection strategy, as in Eq.\;\eqref{eq:certain}. Based on the empirical results in the active learning literature \cite{sener2017active}, we firmly believe that the reported results can be further improved by using more sophisticated selection strategies.
\section{Conclusion}
In this paper, we introduce two strategies for boosting performance in real-world natural language processing applications: (1) a pretrained language model, which allows extracting essential features of texts without the extra effort of collecting a large amount of training data; (2) an active learning strategy, which can smartly select the samples that need to be labeled. Comparing the performance of basic recurrent neural networks with networks combined with a pretrained embedding model and an active learning framework, we observe a significant improvement. Such a combined approach can achieve the same accuracy using a significantly smaller amount of labeled data, thus providing cost-effective solutions for company-initiated natural language processing projects. In the future, we would like to investigate more sophisticated active learning strategies, in order to further improve the results at a fixed training size.
\subsubsection*{Acknowledgment}
The authors would like to thank Isabelle DUPUIS, Nathalie CHANSON, Axelle LETERTRE, and Isabelle ROMANO from ``Voice of the Customer" at GMF ASSURANCES for their expertise in customer review analysis and for providing the labeled data used in this paper.
\bibliographystyle{IEEEbib}
History of prehospital care
Departments of Emergency Medicine, Christian Medical College, Vellore, Tamil Nadu, India
Sanjay M, Abhilash KP. History of prehospital care. Curr Med Issues 2019;17:42-3
Sanjay M, Abhilash KP. History of prehospital care. Curr Med Issues [serial online] 2019 [cited 2021 Jan 18];17:42-3. Available from: https://www.cmijournal.org/text.asp?2019/17/2/42/265822
Prehospital care plays a salient role in emergency medical services (EMS) by treating patients at the scene (out-of-hospital treatment) and transporting them to a higher center for definitive management. In the early days, it was also called ambulance services, the first aid squad, or the rescue squad. Dominique Jean Larrey, a French military surgeon, was distressed during the Battle of Spires (a battle between the French and the Prussians) by the fact that wounded soldiers on the battlefield were not treated until hostilities had ceased.[1] He therefore conceived the idea of ambulance services, or the "flying carriage," to provide immediate care to wounded soldiers on the battlefield and to rapidly evacuate them to an area where medical assistance was available. He initiated two- and four-wheeled horse-drawn wagons to rapidly transport soldiers from the battlefield for immediate treatment. Larrey's projects for "flying ambulances" were first approved by the Committee of Public Safety in 1794, and his ambulances were used for the first time during the Italian campaigns in 1796.[1]
In 1832, the introduction of a transport carriage facility to shift patients suffering from cholera in London was considered a major advancement in the evolution of modern ambulance services. Later, in 1865, the first civilian ambulance service was started in Cincinnati, followed by the New York service provided out of Bellevue Hospital, which started in 1869,[1],[2] with ambulances carrying medical equipment such as splints and morphine. Another ambulance service, named the "Vienna Voluntary Rescue Society," was founded by Jaromir V. Mundy, Count J. N. Wilczek, and Eduard Lamezan-Salins in Vienna after the disastrous fire at the Vienna Ringtheater in 1881. Besides providing first aid, in 1887, the St. John Ambulance Brigade also established military-modeled ambulance services for public events in London. With the advent of automobile technology, the first modern motorized ambulance was brought into service in 1899 in Chicago.[1]
After World War I, when two-way radio became available, it enabled efficient radio dispatch of ambulances in many areas. In 1966, President Lyndon B. Johnson received a report, "Accidental Death and Disability: The Neglected Disease of Modern Society,"[1],[2],[3] commonly known as "The White Paper," which identified accidental injuries as the leading cause of death in the first half of the life span.[1],[2] The report concluded that ambulance services in the US varied widely in quality and were often unregulated and unsatisfactory. These studies placed pressure on governments to improve emergency care in general, including the care provided by ambulance services, and thus resulted in the creation of a standard ambulance system with advanced equipment and well-trained EMS personnel. Hence, this report is considered an important milestone in the progress of the advanced prehospital setting in the US.
In the early 1970s, curricula for emergency medical technicians and paramedics were created to impart the necessary knowledge and practical skills; later, the National Registry of Emergency Medical Technicians, a national certifying body, was established to provide a uniform standard of training and to evaluate the competence of EMS practitioners at various levels.[1],[3]
In 2007, Tamorish Kole, the former President of the Society for Emergency Medicine, noticed the ill-functioning EMS system in India and found that 90% of ambulances were operating without any emergency equipment, not even oxygen; moreover, 95% of ambulances were staffed with untrained EMS personnel.[4],[5] After his review, many measures were taken to improve the quality of prehospital care in India. Although prehospital care has been emerging rapidly over the past decade, it is still in its infancy in many developing countries like India.
1. Available from: https://en.wikipedia.org/wiki/Emergency_medical_services. [Last accessed on 2019 Aug 23].
2. Available from: https://www.jems.com/articles/print/volume38/issue-10/features/birth-emshistoryparamedic.html. [Last accessed on 2019 Aug 23].
3. Available from: https://www.emra.org/aboutemra/history/emshistory/. [Last accessed on 2019 Aug 23].
4. Available from: https://www.asianhhm.com/healthcaremanagement/emergency-servicesindia. [Last accessed on 2019 Aug 23].
5. Available from: https://www.jems.com/articles/print/volume-42/issue-4/features/emergence-of-ems-in-india.html. [Last accessed on 2019 Aug 23].
Sanjay M
Heterocerus fenestratus is a species of polyphagan beetle (order Coleoptera) belonging to the family Heteroceridae.
The scientific authority for the species is Thunberg, who described it in 1784.
It is a species present in Portuguese territory.
References
External links
Heterocerus fenestratus - Biodiversity Heritage Library - Bibliography
Heterocerus fenestratus - NCBI Taxonomy Database
Heterocerus fenestratus - Global Biodiversity Information Facility
Heterocerus fenestratus - Encyclopedia of Life
Polyphagan beetles of Portugal
fenestratus
Beetles described in 1784
A molecular dynamics simulation of a single polymer molecule with fixed bond
lengths and an arbitrary number of bonds.
## Motivation
The simulation generates equilibrium configurations of the molecule. These
configurations serve as the end points of geodesic paths in the configuration
space of the molecule.
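For intuition, a configuration of a chain with fixed bond lengths can be sampled as a freely jointed chain. The sketch below is illustrative only and is not the integrator used in this project; it draws each bond as a random unit vector, so every bond length is exactly one:

```python
import numpy as np

def random_chain(n_bonds, rng=None):
    """Monomer positions of a freely jointed chain with unit-length bonds."""
    rng = np.random.default_rng(rng)
    # Draw random bond directions uniformly on the unit sphere.
    bonds = rng.normal(size=(n_bonds, 3))
    bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)
    # Monomer positions: cumulative sum of bonds, starting at the origin.
    return np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])

chain = random_chain(10, rng=0)
bond_lengths = np.linalg.norm(np.diff(chain, axis=0), axis=1)
```

An actual molecular dynamics run would instead integrate the equations of motion under the bond-length constraints; this sketch only shows what one valid configuration looks like.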
## License
Any file of this project is available under [the MIT
License](https://opensource.org/licenses/MIT).
\section{Introduction}\label{f-sec1}
We will work over $\mathbb C$, the complex number field, throughout this paper.
Note that, by the Lefschetz principle,
all the results in
this paper hold over any algebraically closed field $k$ of characteristic zero.
This paper proposes the following
Fujita-type freeness conjecture for projective
semi-log canonical pairs.
\begin{conj}[Fujita-type freeness conjecture for
semi-log canonical pairs]\label{f-conj1.1}
Let $(X, \Delta)$ be an $n$-dimensional
projective semi-log canonical pair and let $D$ be a Cartier divisor on $X$.
We put $A=D-(K_X+\Delta)$.
Assume that
\begin{itemize}
\item[(1)] $(A^n\cdot {X_i})>n^n$ for
every irreducible component $X_i$ of $X$, and
\item[(2)] $(A^d\cdot W)\geq n^d$ for every $d$-dimensional
irreducible subvariety $W$ of $X$ for $1\leq d\leq n-1$.
\end{itemize}
Then the complete linear system $|D|$ is basepoint-free.
\end{conj}
By \cite[Corollary 3.5]{liu-san}, the complete linear system $|D|$ is basepoint-free
if $A^n>\left(\frac{1}{2} n(n+1)\right)^n$ and
$(A^d\cdot W)>
\left(\frac{1}{2}n(n+1)\right)^d$ hold true in Conjecture
\ref{f-conj1.1},
which is
obviously a generalization of Anghern--Siu's effective
freeness (see \cite{anghern-siu} and \cite{fujino-effective}).
Of course, the above conjecture is a naive generalization of
Fujita's celebrated conjecture:
\begin{conj}[Fujita's freeness conjecture]\label{f-conj1.2}
Let $X$ be a smooth projective variety
with $\dim X=n$ and let $H$ be an ample Cartier divisor on $X$.
Then the complete linear system $|K_X+(n+1)H|$ is basepoint-free.
\end{conj}
The main theorem of this paper is:
\begin{thm}[Main theorem, see Theorem \ref{f-thm2.1} and
Theorem \ref{f-thm5.1}]\label{f-thm1.3}
Conjecture \ref{f-conj1.1} holds true in dimension one and two.
\end{thm}
As a corollary of Theorem \ref{f-thm1.3},
we have:
\begin{cor}[{cf.~\cite[Theorem 24]{lr}}]\label{f-cor1.4}
Let $(X, \Delta)$ be a stable surface such that $K_X+\Delta$ is
$\mathbb Q$-Cartier.
Let $I$ be the smallest positive integer such that $I(K_X+\Delta)$
is Cartier. Then $|mI(K_X+\Delta)|$ is basepoint-free
and $3mI(K_X+\Delta)$ is very ample for every
$m\geq 4$.
If $I\geq 2$, then $|mI(K_X+\Delta)|$ is basepoint-free
and $3mI(K_X+\Delta)$ is very ample
for every $m\geq 3$. In particular, $12I(K_X+\Delta)$ is always
very ample and $9I(K_X+\Delta)$ is very
ample if $I\geq 2$.
\end{cor}
Note that a {\em{stable pair}} $(X, \Delta)$ is a projective
semi-log canonical pair $(X, \Delta)$ such that $K_X+\Delta$ is ample.
A {\em{stable surface}} is a $2$-dimensional stable pair.
We also have:
\begin{cor}[Semi-log canonical Fano surfaces]\label{f-cor1.5}
Let $(X, \Delta)$ be a projective semi-log canonical surface such that
$-(K_X+\Delta)$ is an ample $\mathbb Q$-divisor.
Let $I$ be the smallest positive integer such that
$I(K_X+\Delta)$ is Cartier.
Then $|-mI(K_X+\Delta)|$ is basepoint-free
and $-3mI(K_X+\Delta)$ is very ample for every $m\geq 2$.
In particular, $-6I(K_X+\Delta)$ is very ample.
\end{cor}
For log surfaces (see \cite{fujino-surface}),
the following theorem is a reasonable formulation of
the Reider-type freeness theorem.
For a related topic, see \cite{kawachi}.
\begin{thm}[Effective freeness for log surfaces]\label{f-thm1.6}
Let $(X, \Delta)$ be a complete irreducible log surface, let $x\in X$ be a closed
point, and let $D$ be a Cartier divisor on $X$.
We put $A=D-(K_X+\Delta)$. Assume that
$A$ is nef, $A^2>4$, and $A\cdot C\geq 2$ for every curve $C$ on $X$ such that
$x\in C$.
Then $\mathcal O_X(D)$ has a global section not vanishing at $x$.
\end{thm}
We know that the theory of log surfaces initiated in \cite{fujino-surface}
now holds in characteristic $p>0$
(see \cite{fujino-tanaka}, \cite{tanaka-nagoya}, and \cite{tanaka-vanishing}).
Therefore, it is natural to propose:
\begin{conj}\label{f-conj1.7}
Theorem \ref{f-thm1.6} holds in characteristic $p>0$.
\end{conj}
Note that the original form of Fujita's freeness conjecture (see Conjecture \ref{f-conj1.2})
is still open for surfaces in characteristic $p>0$.
The standard approach to
the Fujita-type freeness conjectures is based on
the Kawamata--Viehweg vanishing theorem
(see \cite{ein-lazarsfeld}).
However, we cannot directly apply the Kawamata--Viehweg
vanishing theorem to log canonical pairs and semi-log canonical pairs.
Therefore, we will use the theory of quasi-log schemes (see \cite{fujino-slc},
\cite{fujino-reid-fukuda}, \cite{fujino-foundation}, and so on).
We summarize the contents of this paper.
In Section \ref{f-sec2},
we prove Conjecture \ref{f-conj1.1} for semi-log canonical
curves using the
vanishing theorem obtained in \cite{fujino-slc}.
This section may help the reader to understand the more complicated
arguments in the subsequent sections.
In Section \ref{f-sec3}, we collect some basic definitions.
In Section \ref{f-sec4},
we quickly recall the theory of quasi-log schemes.
Section \ref{f-sec5} is the main part of this paper.
In this section, we prove Conjecture \ref{f-conj1.1} for semi-log canonical
surfaces. Section \ref{f-sec6} is devoted to the
proof of Theorem \ref{f-thm1.6}, which is an effective freeness for
log surfaces.
In Section \ref{f-sec7},
which is independent of the other sections,
we prove
an effective very ampleness lemma.
\begin{ack}
The author was partially supported
by JSPS KAKENHI Grant Numbers JP2468002, JP16H03925, JP16H06337.
He would like to thank Professor J\'anos Koll\'ar for answering his
question. Finally, he thanks the referee very much for many useful comments and
suggestions.
\end{ack}
For the standard
notations and conventions of the
minimal model program, see \cite{fujino-fundamental} and \cite{fujino-foundation}.
For the details of semi-log canonical pairs, see \cite{fujino-slc}.
In this paper, a {\em{scheme}} means a separated scheme of finite type over
$\mathbb C$ and a {\em{variety}} means a reduced scheme.
\section{Semi-log canonical curves}\label{f-sec2}
In this section, we prove Conjecture \ref{f-conj1.1} in dimension one
based on \cite{fujino-slc}.
This section will help the reader to understand the subsequent sections.
\begin{thm}\label{f-thm2.1}
Let $(X, \Delta)$ be a projective semi-log canonical
curve and let $D$ be a Cartier divisor on $X$.
We put $A=D-(K_X+\Delta)$.
Assume that $(A\cdot X_i)>1$ for every irreducible component $X_i$ of $X$.
Then the complete linear system $|D|$ is basepoint-free.
\end{thm}
If $(X, \Delta)$ in Theorem \ref{f-thm2.1} is log canonical, that is, if $X$ is normal,
then the statement is obvious.
However, Theorem \ref{f-thm2.1} seems to be nontrivial when $X$ is not normal.
\begin{proof}[Proof of Theorem \ref{f-thm2.1}]
We will see that
the restriction map
\begin{equation}\label{f-eq2.1}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(P)
\end{equation}
is surjective for every $P\in X$.
Of course, it is sufficient to prove that $H^1(X, \mathcal I_P\otimes \mathcal O_X(D))=0$,
where $\mathcal I_P$ is the defining ideal sheaf of $P$ on $X$.
If $P$ is a zero-dimensional
semi-log canonical center of $(X, \Delta)$, then
we know that
$H^1(X, \mathcal I_P\otimes \mathcal O_X(D))=0$ by \cite[Theorem 1.11]{fujino-slc}.
Therefore, we may assume that $P$ is not a
zero-dimensional semi-log canonical center of $(X, \Delta)$.
Thus, we see that $X$ is normal, that is, smooth, at $P$
(see, for example, \cite[Corollary 3.5]{fujino-slc}).
We put
\begin{equation}
c=1-\mult_P \Delta.
\end{equation}
Then we have $0<c\leq 1$.
We consider $(X, \Delta+cP)$.
Then $(X, \Delta+cP)$ is semi-log canonical and $P$ is a zero-dimensional
semi-log canonical center of $(X, \Delta+cP)$.
Since
\begin{equation}
\left( (D-(K_X+\Delta+cP))\cdot X_i\right)>0
\end{equation}
for every irreducible component $X_i$ of $X$ by the assumption that
$(A\cdot X_i)>1$ and the fact that $c\leq 1$,
we obtain that
$H^1(X, \mathcal I_P\otimes \mathcal O_X(D))=0$
(see \cite[Theorem 1.11]{fujino-slc}).
Therefore, we see that $H^1(X, \mathcal I_P\otimes \mathcal O_X(D))=0$ for
every $P\in X$.
Thus, we have the desired surjection \eqref{f-eq2.1}.
\end{proof}
The above proof of Theorem \ref{f-thm2.1} heavily depends on the vanishing theorem
for semi-log canonical pairs (see \cite[Theorem 1.11]{fujino-slc}),
which follows from
the theory of quasi-log schemes based on
the theory of mixed Hodge structures on cohomology with compact support.
For the details, see \cite{fujino-slc} and \cite{fujino-foundation}.
In dimension two, we will directly use the framework of quasi-log
schemes.
Therefore, the proof in dimension two is much more difficult than that of Theorem \ref{f-thm2.1}.
\section{Preliminaries}\label{f-sec3}
In this section, we collect some basic definitions.
\begin{say}[Operations for $\mathbb R$-divisors]\label{f-say3.1}
Let $D$ be an $\mathbb R$-divisor
on an equidimensional variety $X$, that is,
$D$ is a finite formal $\mathbb R$-linear combination
\begin{equation}
D=\sum _i d_i D_i
\end{equation} of irreducible
reduced subschemes $D_i$ of codimension one, where $D_i\ne D_j$ for $i\ne j$.
We define the {\em{round-up}}
$\lceil D\rceil =\sum _i \lceil d_i \rceil D_i$ (resp.~{\em{round-down}}
$\lfloor D\rfloor =\sum _i \lfloor d_i \rfloor D_i$), where for
every real number $x$, $\lceil x\rceil$ (resp.~$\lfloor x\rfloor$) is the integer
defined by $x\leq \lceil x\rceil <x+1$
(resp.~$x-1<\lfloor x\rfloor \leq x$).
We put
\begin{equation}
D^{<1}=\sum _{d_i<1}d_i D_i \quad {\text{and}}\quad
D^{>1}=\sum _{d_i>1}d_i D_i.
\end{equation}
We call $D$ a {\em{boundary}} (resp.~{\em{subboundary}}) $\mathbb R$-divisor if
$0\leq d_i\leq 1$ (resp.~$d_i\leq 1$) for every $i$.
\end{say}
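For example, these operations act coefficientwise: if $D=\frac{3}{2}D_1+D_2-\frac{1}{2}D_3$, then
\begin{equation}
\lceil D\rceil =2D_1+D_2, \quad
\lfloor D\rfloor =D_1+D_2-D_3, \quad
D^{<1}=-\frac{1}{2}D_3, \quad {\text{and}} \quad
D^{>1}=\frac{3}{2}D_1.
\end{equation}
In particular, $D$ is a subboundary $\mathbb R$-divisor if and only if $D^{>1}=0$.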
\begin{say}[Singularities of pairs]\label{f-say3.2}
Let $X$ be a normal variety and let $\Delta$ be an
$\mathbb R$-divisor on $X$
such that $K_X+\Delta$ is $\mathbb R$-Cartier.
Let $f:Y\to X$ be
a resolution such that $\Exc(f)\cup f^{-1}_*\Delta$,
where $\Exc (f)$ is the exceptional locus of $f$
and $f^{-1}_*\Delta$ is
the strict transform of $\Delta$ on $Y$,
has a simple normal crossing support. We can
write
\begin{equation}\label{f-eq3.1}
K_Y=f^*(K_X+\Delta)+\sum _i a_i E_i.
\end{equation}
We say that $(X, \Delta)$
is {\em{sub log canonical}} ({\em{sub lc}}, for short) if $a_i\geq -1$ for every $i$.
We usually write $a_i= a(E_i, X, \Delta)$
and call it the {\em{discrepancy coefficient}} of
$E_i$ with respect to $(X, \Delta)$.
Note that we can define $a(E, X, \Delta)$ for every prime divisor
$E$ {\em{over}} $X$.
If $(X, \Delta)$ is sub log canonical and $\Delta$ is effective, then
$(X, \Delta)$ is called {\em{log canonical}} ({\em{lc}}, for short).
It is well-known that there is the largest Zariski open subset $U$
of $X$ such that
$(U, \Delta|_U)$ is sub log canonical (see, for example,
\cite[Lemma 2.3.10]{fujino-foundation}).
If there exist a resolution $f:Y\to X$ and a divisor $E$ on $Y$ such
that $a(E, X, \Delta)=-1$ and $f(E)\cap U\ne \emptyset$, then $f(E)$ is called a
{\em{log canonical center}} (an {\em{lc center}}, for short) with respect to $(X, \Delta)$.
A closed subset $C$ of $X$ is called a {\em{log canonical stratum}}
(an {\em{lc stratum}}, for short) of $(X, \Delta)$ if and only if
$C$ is a log canonical center of $(X, \Delta)$ or $C$ is
an irreducible component of $X$.
We note that the {\em{non-lc locus}} of $(X, \Delta)$, which is
denoted by $\Nlc(X, \Delta)$, is $X\setminus U$.
Let $X$ be a normal variety and let $\Delta$ be an effective $\mathbb R$-divisor on
$X$ such that $K_X+\Delta$ is $\mathbb R$-Cartier.
If $a(E, X, \Delta)>-1$ for every divisor $E$ over $X$,
then $(X, \Delta)$ is called {\em{klt}}. If
$a(E, X, \Delta)>-1$ for every exceptional divisor $E$ over $X$,
then $(X, \Delta)$ is called {\em{plt}}.
\end{say}
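A standard example may help to clarify these definitions. Let $X$ be a smooth surface, let $\Delta$ be an effective $\mathbb R$-divisor on $X$, and let $f:Y\to X$ be the blow-up at a closed point $x$ with exceptional curve $E$. Then
\begin{equation}
a(E, X, \Delta)=1-\mult _x \Delta.
\end{equation}
Hence $(X, \Delta)$ is not sub log canonical at $x$ if $\mult _x\Delta >2$; compare the choice of $B$ with $\mult _P B>2$ in the proof of Theorem \ref{f-thm5.1}.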
Let us recall the definitions around {\em{semi-log canonical pairs}}.
\begin{say}[Semi-log canonical pairs]\label{f-say3.3}
Let $X$ be an
equidimensional variety that
satisfies Serre's $S_2$ condition and
is normal crossing in codimension one.
Let $\Delta$ be an effective $\mathbb R$-divisor
whose support does not contain any irreducible components
of the conductor of $X$.
The pair $(X, \Delta)$ is called a {\em{semi-log canonical pair}} (an {\em{slc pair}},
for short)
if
\begin{itemize}
\item[(1)] $K_X+\Delta$ is $\mathbb R$-Cartier, and
\item[(2)] $(X^\nu, \Theta)$ is log canonical,
where $\nu:X^\nu\to X$ is the normalization and $K_{X^\nu}+\Theta=
\nu^*(K_X+\Delta)$, that is, $\Theta$ is the sum of the
inverse images of $\Delta$ and the conductor of $X$.
\end{itemize}
Let $(X, \Delta)$ be a semi-log canonical pair and let $\nu:X^\nu\to X$ be
the normalization.
We set
\begin{equation}
K_{X^\nu}+\Theta=\nu^*(K_X+\Delta)
\end{equation}
as above.
A closed subvariety $W$ of $X$ is called a {\em{semi-log canonical center}}
(an {\em{slc center}}, for short) {\em{with
respect to $(X, \Delta)$}} if there exist a resolution of singularities $f: Y\to X^\nu$ and
a prime divisor $E$ on $Y$ such that
the discrepancy coefficient $a(E, X^\nu, \Theta)=-1$ and $\nu\circ f(E)=W$.
A closed subvariety $W$ of $X$ is called a {\em{semi-log canonical stratum}}
({\em{slc stratum}}, for short) of
the pair $(X, \Delta)$ if
$W$ is a semi-log canonical center with respect to $(X, \Delta)$ or $W$ is an
irreducible component of $X$.
\end{say}
We close this section with the notion of {\em{log surfaces}} (see \cite{fujino-surface}).
\begin{say}[Log surfaces]\label{f-say3.4}
Let $X$ be a normal surface and let $\Delta$ be a boundary $\mathbb R$-divisor on $X$.
Assume that $K_X+\Delta$ is $\mathbb R$-Cartier.
Then the pair $(X, \Delta)$ is called a {\em{log surface}}.
A log surface $(X, \Delta)$ is not always assumed to be log canonical.
In \cite{fujino-surface},
we establish the minimal model program for log surfaces in full generality
under the assumption that $X$ is $\mathbb Q$-factorial or $(X, \Delta)$ has only
log canonical singularities.
For the theory of log surfaces in characteristic $p>0$,
see \cite{fujino-tanaka}, \cite{tanaka-nagoya}, and \cite{tanaka-vanishing}.
\end{say}
\section{On quasi-log structures}\label{f-sec4}
Let us quickly recall the definitions of {\em{globally embedded simple
normal crossing pairs}} and {\em{quasi-log schemes}} for
the reader's convenience.
For the details, see, for example, \cite{fujino-pull}
and \cite[Chapter 5 and Chapter 6]{fujino-foundation}.
\begin{defn}[Globally embedded simple normal crossing
pairs]\label{f-def4.1}
Let $Y$ be a simple normal crossing divisor
on a smooth
variety $M$ and let $D$ be an $\mathbb R$-divisor
on $M$ such that
$\Supp (D+Y)$ is a simple normal crossing divisor on $M$ and that
$D$ and $Y$ have no common irreducible components.
We put $B_Y=D|_Y$ and consider the pair $(Y, B_Y)$.
We call $(Y, B_Y)$ a {\em{globally embedded simple normal
crossing pair}} and $M$ the {\em{ambient space}} of $(Y, B_Y)$.
A {\em{stratum}} of $(Y, B_Y)$ is
the $\nu$-image of a log canonical stratum of $(Y^\nu, \Theta)$
where $\nu:Y^\nu\to Y$ is the normalization and $K_{Y^\nu}+\Theta
=\nu^*(K_Y+B_Y)$, that is, $\Theta$ is the sum of
the inverse images of $B_Y$ and the singular locus of $Y$.
\end{defn}
In this paper, we adopt the following definition of
quasi-log schemes.
\begin{defn}[Quasi-log schemes]\label{f-def4.2}
A {\em{quasi-log scheme}} is a scheme $X$ endowed with an
$\mathbb R$-Cartier divisor
(or $\mathbb R$-line bundle)
$\omega$ on $X$, a proper closed subscheme
$X_{-\infty}\subset X$, and a finite collection $\{C\}$ of reduced
and irreducible subschemes of $X$ such that there is a
proper morphism $f:(Y, B_Y)\to X$ from a globally
embedded simple
normal crossing pair satisfying the following properties:
\begin{itemize}
\item[(1)] $f^*\omega\sim_{\mathbb R}K_Y+B_Y$.
\item[(2)] The natural map
$\mathcal O_X
\to f_*\mathcal O_Y(\lceil -(B_Y^{<1})\rceil)$
induces an isomorphism
$$
\mathcal I_{X_{-\infty}}\overset{\simeq}{\longrightarrow} f_*\mathcal O_Y(\lceil
-(B_Y^{<1})\rceil-\lfloor B_Y^{>1}\rfloor),
$$
where $\mathcal I_{X_{-\infty}}$ is the defining ideal sheaf of
$X_{-\infty}$.
\item[(3)] The collection of subvarieties $\{C\}$ coincides with the images
of the strata of $(Y, B_Y)$ that are not included in $X_{-\infty}$.
\end{itemize}
We simply write $[X, \omega]$ to denote
the above data
$$
\bigl(X, \omega, f:(Y, B_Y)\to X\bigr)
$$
if there is no risk of confusion.
Note that a quasi-log scheme $X$ is the union of $\{C\}$ and $X_{-\infty}$.
We also note that $\omega$ is called the {\em{quasi-log canonical class}}
of $[X, \omega]$, which is defined up to $\mathbb R$-linear equivalence.
We sometimes simply say that
$[X, \omega]$ is a {\em{quasi-log pair}}.
The subvarieties $C$
are called the {\em{qlc strata}} of $[X, \omega]$,
$X_{-\infty}$ is called the {\em{non-qlc locus}}
of $[X, \omega]$, and $f:(Y, B_Y)\to X$ is
called a {\em{quasi-log resolution}}
of $[X, \omega]$.
We sometimes use $\Nqlc(X, \omega)$ to denote
$X_{-\infty}$. A closed subvariety $C$ of $X$ is called a {\em{qlc center}}
of $[X, \omega]$ if $C$ is a qlc stratum of $[X, \omega]$ which is not
an irreducible component of $X$.
Let $[X, \omega]$ be a quasi-log scheme.
Assume that $X_{-\infty}=\emptyset$.
Then we sometimes simply say that $[X, \omega]$ is
a {\em{qlc pair}} or
$[X, \omega]$ is a quasi-log scheme with only {\em{quasi-log canonical
singularities}}.
\end{defn}
\begin{defn}[Nef and log big divisors
for quasi-log schemes]\label{f-def4.3}
Let $L$ be an $\mathbb R$-Cartier divisor (or
$\mathbb R$-line bundle) on a quasi-log pair $[X, \omega]$ and
let $\pi:X\to S$ be a proper morphism between schemes.
Then $L$ is {\em{nef and log big over $S$ with
respect to $[X, \omega]$}} if $L$ is
$\pi$-nef and $L|_C$ is $\pi$-big for every
qlc stratum $C$ of $[X, \omega]$.
\end{defn}
The following theorem is a key result for the theory of quasi-log schemes.
\begin{thm}[Adjunction and vanishing theorem]\label{f-thm4.4}
Let $[X, \omega]$ be a quasi-log scheme and let $X'$ be the union of
$X_{-\infty}$ with a {\em{(}}possibly empty{\em{)}} union of some
qlc strata of $[X, \omega]$. Then
we have the following properties.
\begin{itemize}
\item[(i)] Assume that $X'\ne X_{-\infty}$. Then
$X'$ is a quasi-log scheme with $\omega'=\omega|_{X'}$ and
$X'_{-\infty}=X_{-\infty}$. Moreover, the qlc strata of $[X', \omega']$ are
exactly the qlc strata of $[X, \omega]$ that are included in $X'$.
\item[(ii)] Assume that $\pi:X\to S$ is a proper morphism between schemes.
Let $L$ be a Cartier divisor on $X$ such that
$L-\omega$ is nef and log big over $S$ with respect to $[X, \omega]$.
Then $R^i\pi_*(\mathcal I_{X'}\otimes \mathcal O_X(L))=0$ for every $i>0$,
where $\mathcal I_{X'}$ is the defining ideal sheaf of $X'$ on $X$.
\end{itemize}
\end{thm}
For the proof of Theorem \ref{f-thm4.4}, see,
for example,
\cite[Theorem 3.8]{fujino-reid-fukuda} and \cite[Section 6.3]{fujino-foundation}.
We can slightly generalize Theorem \ref{f-thm4.4} (ii) as follows.
\begin{thm}\label{f-thm4.5}
Let $[X, \omega]$, $X'$, and $\pi:X\to S$ be as in Theorem \ref{f-thm4.4}.
Let $L$ be a Cartier divisor on $X$ such that
$L-\omega$ is
nef over $S$ and that
$(L-\omega)|_W$ is big over $S$ for any qlc stratum $W$ of $[X, \omega]$ which
is not contained in $X'$.
Then $R^i\pi_*(\mathcal I_{X'}\otimes \mathcal O_X(L))=0$ for every
$i>0$, where $\mathcal I_{X'}$ is the defining ideal
sheaf of $X'$ on $X$.
\end{thm}
Theorem \ref{f-thm4.5} follows from the proof of Theorem \ref{f-thm4.4}.
For a related topic, see \cite[Remark 5.2]{fujino-slc}.
Theorem \ref{f-thm4.5} will play a crucial role in the proof of Theorem \ref{f-thm1.6}
in Section \ref{f-sec6}.
Finally,
we prepare a useful lemma, which is new, for the proof of Theorem \ref{f-thm1.3}.
\begin{lem}\label{f-lem4.6}
Let $[X, \omega]$ be a qlc pair such that
$X$ is irreducible.
Let $E$ be an effective $\mathbb R$-Cartier divisor on $X$.
This means that
$$
E=\sum _{i=1}^k e_i E_i
$$
where $E_i$ is an effective Cartier divisor on $X$ and $e_i$ is a positive
real number for every $i$.
Then we can give a quasi-log structure to $[X, \omega+E]$,
which coincides with the original quasi-log structure of $[X, \omega]$ outside
$\Supp E$.
\end{lem}
For the details of the quasi-log structure of $[X, \omega+E]$, see
the construction in the proof below.
\begin{proof}
Let $f:(Z, \Delta_Z)\to [X, \omega]$ be a quasi-log resolution, where
$(Z, \Delta_Z)$ is a globally embedded simple normal crossing pair.
By taking some suitable blow-ups, we may assume that
the union of all strata of $(Z, \Delta_Z)$ mapped
to $\Supp E$, which is denoted by $Z''$, is a union of
some irreducible components of $Z$ (see \cite[Proposition 4.1]{fujino-pull}
and \cite[Section 6.3]{fujino-foundation}).
We put $Z'=Z-Z''$ and
$K_{Z'}+\Delta_{Z'}=(K_Z+\Delta_Z)|_{Z'}$.
We may further assume that
$(Z', \Delta_{Z'}+{f'}^*E)$ is a globally embedded simple normal crossing pair,
where $f'=f|_{Z'}: Z'\to X$.
By construction, we have a natural inclusion
\begin{equation}
\mathcal O_{Z'} (\lceil -(\Delta_{Z'}+{f'}^*E)^{<1}\rceil -
\lfloor (\Delta_{Z'}+{f'}^*E)^{>1}\rfloor)\subset \mathcal
O_Z(\lceil -\Delta_Z^{<1}\rceil).
\end{equation}
This is because
\begin{equation}
-\lfloor (\Delta_{Z'}+f'^*E)^{>1}\rfloor \leq -Z''|_{Z'}
\end{equation}
and
\begin{equation}
\mathcal O_{Z'}(-Z''|_{Z'})\subset \mathcal O_Z.
\end{equation}
Thus, we have
\begin{equation}
f'_*\mathcal O_{Z'} (\lceil -(\Delta_{Z'}+{f'}^*E)^{<1}\rceil -
\lfloor (\Delta_{Z'}+{f'}^*E)^{>1}\rfloor)
\subset f_*\mathcal
O_Z(\lceil -\Delta_Z^{<1}\rceil)\simeq \mathcal O_X.
\end{equation}
By putting
\begin{equation}
\mathcal I_{X_{-\infty}}=
{f'}_*\mathcal O_{Z'} (\lceil -(\Delta_{Z'}+{f'}^*E)^{<1}\rceil -
\lfloor (\Delta_{Z'}+{f'}^*E)^{>1}\rfloor),
\end{equation}
$f': (Z', \Delta_{Z'}+{f'}^*E)\to [X, \omega+E]$ gives a quasi-log structure to
$[X, \omega+E]$.
By construction, it coincides with
the original quasi-log structure of $[X, \omega]$ outside $\Supp E$.
\end{proof}
\section{Semi-log canonical surfaces}\label{f-sec5}
In this section, we prove Conjecture \ref{f-conj1.1} for surfaces.
\begin{thm}\label{f-thm5.1}
Let $(X, \Delta)$ be a projective semi-log canonical
surface and let $D$ be a Cartier divisor on $X$.
We put $A=D-(K_X+\Delta)$.
Assume that $(A^2\cdot {X_i})>4$ for every irreducible component
$X_i$ of $X$ and
that $A\cdot C\geq 2$ for every curve $C$ on $X$.
Then the complete linear system $|D|$ is basepoint-free.
\end{thm}
\begin{rem}\label{f-rem5.2}
By assumption and Nakai's ampleness criterion for $\mathbb R$-divisors
(see \cite{campana-peternell}), $A$ is ample in Theorem \ref{f-thm5.1}.
However, we do not use the ampleness of $A$ in the proof of Theorem \ref{f-thm5.1}.
\end{rem}
Our proof of Theorem \ref{f-thm5.1} uses the theory of quasi-log schemes.
\begin{proof}
We will prove
that the restriction map
$$
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(P)
$$
is surjective for every $P\in X$.
\begin{step}[Quasi-log structure]\label{step1}
By \cite[Theorem 1.2]{fujino-slc}, we can
take a quasi-log resolution $f:(Z, \Delta_Z)\to [X, K_X+\Delta]$.
Precisely speaking,
$(Z, \Delta_Z)$ is a globally embedded simple normal
crossing pair such that
$\Delta_Z$ is a subboundary $\mathbb R$-divisor on $Z$ with the following properties.
\begin{itemize}
\item[(i)] $K_Z+\Delta_Z\sim _{\mathbb R} f^*(K_X+\Delta)$.
\item[(ii)] The natural map $\mathcal O_X\to f_*\mathcal O_Z(\lceil -\Delta^{<1}_Z\rceil)$
is an isomorphism.
\item[(iii)] $\dim Z=2$.
\item[(iv)] $W$ is a semi-log canonical stratum of $(X, \Delta)$ if and only if
$W=f(S)$ for some stratum $S$ of $(Z, \Delta_Z)$.
\end{itemize}
It is worth mentioning that $f:Z\to X$ is not necessarily birational.
This step is nothing but \cite[Theorem 1.2]{fujino-slc}.
\end{step}
\begin{step}
Assume that $P$ is a zero-dimensional
semi-log canonical center of $(X, \Delta)$.
Then $H^i(X, \mathcal I_P\otimes \mathcal O_X(D))=0$ for every $i>0$,
where $\mathcal I_P$ is the defining ideal
sheaf of $P$ on $X$ (see \cite[Theorem 1.11]{fujino-slc} and
Theorem \ref{f-thm4.4}).
Therefore, the restriction map
$$
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(P)
$$
is surjective.
\end{step}
From now on, we may assume that $P$ is not a zero-dimensional
semi-log canonical center of $(X, \Delta)$.
\begin{step}\label{step3}
Assume that there exists a one-dimensional
semi-log canonical center $W$ of $(X, \Delta)$ such that
$P\in W$.
Since $P$ is not a zero-dimensional
semi-log canonical center of $(X, \Delta)$, $W$ is normal, that is,
smooth, at $P$ by \cite[Corollary 3.5]{fujino-slc}.
By adjunction (see Theorem \ref{f-thm4.4}),
$[W, (K_X+\Delta)|_W]$ has a quasi-log structure with only quasi-log
canonical singularities
induced by the quasi-log
structure $f:(Z, \Delta_Z)\to [X, K_X+\Delta]$ constructed in Step \ref{step1}.
Let $g:(Z', \Delta_{Z'})\to [W, (K_X+\Delta)|_W]$ be the induced quasi-log resolution.
We put
\begin{equation}
c=\underset{t\geq 0}{\sup}\left\{t \left|
\begin{array}{l} {\text{the normalization of
$(Z', \Delta_{Z'}+tg^*P)$ is}}\\
{\text{sub log canonical.}}
\end{array}\right. \right\}.
\end{equation}
Then, by \cite[Lemma 3.16]{fujino-reid-fukuda}, we obtain that $0<c<2$.
Note that $P$ is a Cartier divisor on $W$.
Let us consider $g:(Z', \Delta_{Z'}+cg^*P)\to [W, (K_X+\Delta)|_W+cP]$,
which defines a quasi-log structure.
Then, by construction, $P$ is a qlc center of $[W, (K_X+\Delta)|_W+cP]$.
Moreover,
we see that
\begin{equation}
\deg \left(D|_W-((K_X+\Delta)|_W+cP)\right)=(A\cdot W)-c>0
\end{equation}
since $A\cdot W\geq 2$ by assumption and $0<c<2$.
Therefore, we obtain that
\begin{equation}
H^i(W, \mathcal I_P\otimes \mathcal O_W(D))=0
\end{equation}
for every $i>0$ by Theorem \ref{f-thm4.4}, where
$\mathcal I_P$ is the defining ideal sheaf of $P$ on $W$.
Thus, the restriction map
\begin{equation}\label{eq54}
H^0(W, \mathcal O_W(D))\to \mathcal O_W(D)\otimes \mathbb C(P)
\end{equation}
is surjective. On the other hand,
by Theorem \ref{f-thm4.4} again,
we have that
\begin{equation}
H^i(X, \mathcal I_W\otimes \mathcal O_X(D))=0
\end{equation} for
every $i>0$,
where $\mathcal I_W$ is the defining ideal sheaf of $W$ on $X$.
This implies that
the restriction map
\begin{equation}\label{eq56}
H^0(X, \mathcal O_X(D))\to H^0(W, \mathcal O_W(D))
\end{equation}
is surjective.
By combining \eqref{eq54} with \eqref{eq56},
the desired restriction map
\begin{equation}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(P)
\end{equation}
is surjective.
\end{step}
Therefore, from now on, we may assume that no one-dimensional
semi-log canonical center of $(X, \Delta)$ contains $P$.
\begin{step}\label{step4}
In this step, we assume that $P$ is a smooth point of $X$.
Let $X_0$ be the unique irreducible component of $X$ containing $P$.
By adjunction (see Theorem \ref{f-thm4.4}),
$[X_0, (K_X+\Delta)|_{X_0}]$ has a quasi-log structure with only quasi-log
canonical singularities
induced by
the quasi-log structure $f:(Z, \Delta_Z)\to [X, K_X+\Delta]$ constructed in Step \ref{step1}.
By Theorem \ref{f-thm4.4},
\begin{equation}
H^i(X, \mathcal I_{X_0}\otimes \mathcal O_X(D))=0
\end{equation}
for every $i>0$, where $\mathcal I_{X_0}$ is the defining ideal sheaf of
$X_0$ on $X$.
Therefore, the restriction map
\begin{equation}\label{eq59}
H^0(X, \mathcal O_X(D))\to H^0(X_0, \mathcal O_{X_0}(D))
\end{equation}
is surjective. Thus, it is sufficient to prove that the natural restriction map
\begin{equation}
H^0(X_0, \mathcal O_{X_0}(D))\to \mathcal O_{X_0}(D)\otimes \mathbb C(P)
\end{equation}
is surjective.
We put $A_0=A|_{X_0}$.
Since $A_0^2>4$, we can find an effective $\mathbb R$-Cartier divisor
$B$ on $X_0$ such that
$\mult_P B>2$ and that
$B\sim _{\mathbb R} A_0$.
We put $U=X_0\setminus \Sing X_0$ and
define
\begin{equation}
c=\max\{ t\geq 0\, |\, (U, \Delta|_U +tB|_U) \
\text{is log canonical at $P$}\}.
\end{equation}
Then we obtain that $0<c<1$ since $\mult _P B>2$.
By Lemma \ref{f-lem4.6},
we have a quasi-log structure on $[X_0, (K_X+\Delta)|_{X_0} +cB]$.
By construction, there is a qlc center $W$ of $[X_0, (K_X+\Delta)|_{X_0}+cB]$ passing
through $P$.
Let $X'$ be the union of the non-qlc locus
of $[X_0, (K_X+\Delta)|_{X_0}+cB]$ and the minimal qlc center
$W_0$ of $[X_0, (K_X+\Delta)|_{X_0}+cB]$ passing through $P$.
Note that $D|_{X_0}-((K_X+\Delta)|_{X_0}+cB)\sim _{\mathbb R} (1-c)A_0$.
Then, by Theorem \ref{f-thm4.4},
\begin{equation}\label{eq512}
H^i(X_0, \mathcal I_{X'}\otimes \mathcal O_{X_0}(D))=0
\end{equation}
for every $i>0$, where $\mathcal I_{X'}$ is the defining ideal sheaf of $X'$ on $X_0$.
\begin{case}\label{case1}
If $\dim W_0=0$,
then $P$ is isolated in $\Supp \mathcal O_{X_0}/ \mathcal I_{X'}$.
Therefore, the restriction map
\begin{equation}\label{eq513}
H^0(X_0, \mathcal O_{X_0}(D))\to \mathcal O_{X_0}(D)\otimes \mathbb C(P)
\end{equation}
is surjective.
\end{case}
\begin{case}\label{case2}
If $\dim W_0=1$, then let us consider the quasi-log structure of $[X', ((K_X+
\Delta)|_{X_0}+cB)|_{X'}]$ induced by
the quasi-log structure of
$[X_0, (K_X+\Delta)|_{X_0}+cB]$ constructed above by Lemma \ref{f-lem4.6}
(see Theorem \ref{f-thm4.4} (i)).
We will now see that we can take $0<c'\leq 1$ such that
$P$ is a zero-dimensional
qlc center of $[X', ((K_X+\Delta)|_{X_0}+cB)|_{X'}+c'P]$ as in Step \ref{step3}.
By assumption,
$(X, \Delta+cB)$ is plt in a neighborhood of $P$.
We put $\mult _PB=2+a$ with $a>0$.
We write $\Delta+cB=L+\Delta'$, where
$L=W_0$ is the unique one-dimensional
log canonical center of $(X, \Delta)$ passing through $P$ and $\Delta'=\Delta+cB-L$.
We put $\mult _P(\Delta+cB)=1+\delta$ with $\delta \geq 0$, equivalently,
$\delta=\mult _P \Delta' \geq 0$.
Note that
\begin{equation}
1+\delta= \mult _P (\Delta+cB)=\mult _P\Delta+ c(2+a).
\end{equation}
Therefore, we have
\begin{equation}
c=\frac{1+\delta-\alpha}{2+a},
\end{equation}
where $\alpha=\mult _P\Delta \geq 0$.
We also note that
\begin{equation}
\delta\leq \mult _P (\Delta' |_L)<1.
\end{equation}
Then, we can choose $c'= 1-\mult _P (\Delta'|_L)$.
This is because $(X, \Delta+cB+c' H)$ is log canonical
in a neighborhood of $P$ but is not plt at $P$,
where $H$ is a general smooth curve passing through $P$.
In this situation, we have
\begin{equation}\label{eq517}
\begin{split}
&\deg (D|_L -(K_X+\Delta+cB)|_L -c'P) \\
&\geq \left( 1-\frac{1+\delta-\alpha}{2+a} \right) \cdot 2 -(1-\delta)\\
& = \frac{1}{2+a} ((2+a-1-\delta+\alpha) \cdot 2 -(2+a)(1-\delta)) \\
&= \frac{1}{2+a}(a+2\alpha +a\delta) \\
& \geq \frac{a}{2+a} >0.
\end{split}
\end{equation}
Thus, by Theorem \ref{f-thm4.4},
\begin{equation}
H^i(X', \mathcal I_{X''}\otimes \mathcal O_{X'}(D))=0
\end{equation}
for every $i>0$, where $X''$ is the union of the non-qlc locus of $[X',
((K_X+\Delta)|_{X_0} +cB)|_{X'}+c'P]$ and $P$, and
$\mathcal I_{X''}$ is the defining ideal sheaf of $X''$ on $X'$.
Thus, we have that
\begin{equation}\label{eq519}
H^0(X', \mathcal O_{X'} (D))\to \mathcal O_{X'}(D)\otimes
\mathcal O_{X'} / \mathcal I_{X''}
\end{equation}
is surjective.
Note that $P$ is isolated in $\Supp \mathcal O_{X'}/ \mathcal I_{X''}$.
Therefore, we obtain surjections
\begin{equation}
H^0(X, \mathcal O_X(D))
\twoheadrightarrow
H^0(X_0, \mathcal O_{X_0} (D))
\twoheadrightarrow
H^0(X', \mathcal O_{X'}(D)) \twoheadrightarrow
\mathcal O_{X'} (D)\otimes \mathbb C(P)
\end{equation}
by \eqref{eq59}, \eqref{eq512}, and \eqref{eq519}.
This is the desired surjection.
\end{case}
\end{step}
Finally, we further assume that
$P$ is a singular point of $X$.
\begin{step} Note that
$(X, \Delta)$ is klt in a neighborhood of $P$ by assumption.
We will reduce the problem to the situation
as in Step \ref{step4}.
Let $\pi:Y\to X$ be the minimal resolution of $P$.
We put $K_Y+\Delta_Y=\pi^*(K_X+\Delta)$.
Since $\Bs|\pi^*D| =\pi^{-1}\Bs|D|$,
it is sufficient to prove that
$Q\not\in \Bs|\pi^*D|$ for some
$Q\in \pi^{-1}(P)$.
Since $\pi:Y\to X$ is the minimal resolution of $P$,
$f:(Z, \Delta_Z)\to [X, K_X+\Delta]$ factors
through
$[Y, K_Y+\Delta_Y]$ and
$(Z, \Delta_Z)\to [Y, K_Y+\Delta_Y]$
induces a natural quasi-log structure compatible with
the original semi-log canonical structure
of $(Y, \Delta_Y)$ (see Step \ref{step1} and \cite[Theorem 1.2]{fujino-slc}).
We put $Y_0 =\pi^{-1}(X_0)$, where $X_0$ is the irreducible component of $X$ containing $P$ as in Step \ref{step4}.
We can take an effective $\mathbb R$-Cartier divisor
$B'$ on $Y_0$ such that
$B'\sim _{\mathbb R} (\pi|_{Y_0})^*A_0$,
$\mult _Q B'>2$ for some $Q\in \pi^{-1}(P)$, and
$B'=(\pi|_{Y_0})^*B$ for some effective $\mathbb R$-Cartier divisor $B$ on $X_0$.
We put
$U'=Y_0 \setminus \Sing Y_0$.
We set
\begin{equation}
c=\underset{t\geq 0}{\sup}\left\{t \left|
\begin{array}{l} {\text{$(U', (\Delta_Y)|_{U'}+tB'|_{U'})$ is log canonical}}\\
{\text{at any point of $\pi^{-1}(P)$. }}
\end{array}\right. \right\}.
\end{equation}
Then we have $0<c<1$.
By adjunction (see Theorem \ref{f-thm4.4}) and
Lemma \ref{f-lem4.6},
we can consider a quasi-log structure of $[Y_0, (K_Y+\Delta_Y)|_{Y_0}+cB']$.
If there is a one-dimensional qlc center $C$ of
$[Y_0, (K_Y+\Delta_Y)|_{Y_0}+cB']$
such that
\begin{equation}
(\pi^*D-((K_Y+\Delta_Y)|_{Y_0}+cB'))\cdot C=(1-c)(\pi|_{Y_0})^*A_0\cdot C=0,
\end{equation}
then we obtain that $C\subset \pi^{-1}(P)$.
This means that $P$ is a qlc center
of $[X_0, (K_X+\Delta)|_{X_0}+cB]$.
In this case, we obtain surjections
\begin{equation}
H^0(X, \mathcal O_X(D))\twoheadrightarrow
H^0(X_0, \mathcal O_{X_0}(D))\twoheadrightarrow
\mathcal O_{X_0}(D)\otimes \mathbb C (P)
\end{equation}
as in Case \ref{case1} in Step \ref{step4} (see \eqref{eq59} and \eqref{eq513}).
Therefore, we may assume that
\begin{equation}
(\pi^*D-((K_Y+\Delta_Y)|_{Y_0}+cB'))\cdot C>0
\end{equation}
for every one-dimensional qlc center $C$ of $[Y_0, (K_Y+\Delta_Y)|_{Y_0}+cB']$.
Note that
\begin{equation}
(\pi^*D-(K_Y+\Delta_Y))\cdot C =(D-(K_X+\Delta))\cdot
\pi_* C=A\cdot \pi_* C\geq 2
\end{equation}
when $\pi_*C\ne 0$, equivalently, $C$ is not a component of $\pi^{-1}(P)$.
Then we can apply
the arguments in Step \ref{step4} to
$[Y_0, (K_Y+\Delta_Y)|_{Y_0}+cB']$ and $\pi^*D$.
Thus, we obtain that $Q\not\in \Bs|\pi^*D|$ for some $Q\in \pi^{-1}(P)$.
This means that $P\not\in \Bs|D|$.
\end{step}
In any case, we obtain that $P\not\in \Bs|D|$.
\end{proof}
By Theorem \ref{f-thm5.1},
we can quickly prove Corollary \ref{f-cor1.4} as follows.
\begin{proof}[Proof of Corollary \ref{f-cor1.4}]
We put $D=mI(K_X+\Delta)$ and $A=D-(K_X+\Delta)
= (m-1/I)I(K_X+\Delta)$.
Then we obtain that $A\cdot C\geq m-1/I$ for
every curve $C$ on $X$ and that
$(A^2\cdot X_i)\geq (m-1/I)^2$ for every
irreducible component $X_i$ of $X$.
By Theorem \ref{f-thm5.1}, we obtain the desired freeness of
$|mI(K_X+\Delta)|$. The very ampleness part follows from Lemma \ref{f-lem7.1} below.
\end{proof}
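For concreteness, the numerical hypotheses of Theorem \ref{f-thm5.1} hold with room to spare in the proof above: if $m\geq 4$, then
\begin{equation}
A\cdot C\geq m-\frac{1}{I}\geq 3\geq 2 \quad {\text{and}}\quad
(A^2\cdot X_i)\geq \left(m-\frac{1}{I}\right)^2\geq 9>4,
\end{equation}
and if $m\geq 3$ with $I\geq 2$, then $m-1/I\geq 5/2$, so that $A\cdot C\geq 2$ and $(A^2\cdot X_i)\geq 25/4>4$.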
\begin{rem}\label{f-rem5.3}
In Corollary \ref{f-cor1.4}, $\Delta$ is not necessarily reduced.
If $\Delta$ is reduced, then Corollary \ref{f-cor1.4}
is a special case of \cite[Theorem 24]{lr}.
We note that $\Delta$ is always assumed to be reduced in \cite{lr}.
\end{rem}
As a special case of Corollary \ref{f-cor1.4}, we can recover Kodaira's
celebrated result (see \cite{kodaira}).
We state it explicitly for the reader's convenience.
\begin{cor}[Kodaira]\label{f-cor5.4}
Let $X$ be a smooth projective surface such that
$K_X$ is nef and big.
Then $|mK_X|$ is basepoint-free for every $m\geq 4$.
\end{cor}
\begin{proof}[Proof of Corollary \ref{f-cor5.4}]
Apply Corollary \ref{f-cor1.4} to the canonical
model of $X$. Then we obtain the desired freeness.
\end{proof}
We close this section with the proof of Corollary \ref{f-cor1.5}.
\begin{proof}[Proof of Corollary \ref{f-cor1.5}]
We put $D=-mI(K_X+\Delta)$ and $A=D-(K_X+\Delta)=-(m+1/I)I(K_X+\Delta)$.
Then we obtain that $A\cdot C\geq m+1/I$ for every curve $C$ on $X$ and
that $(A^2\cdot X_i)\geq (m+1/I)^2$ for
every irreducible component $X_i$ of $X$.
By Theorem \ref{f-thm5.1}, we obtain the desired freeness of
$|-mI(K_X+\Delta)|$.
The very ampleness part follows from Lemma \ref{f-lem7.1} below.
\end{proof}
\section{Log surfaces}\label{f-sec6}
In this section, we prove Theorem \ref{f-thm1.6}.
\begin{proof}[Proof of Theorem \ref{f-thm1.6}]
The proof is essentially the same as that of Theorem \ref{f-thm5.1}.
However, there are some technical differences.
We will have to use Theorem \ref{f-thm4.5} instead of
Theorem \ref{f-thm4.4} (ii).
So, we describe it for the reader's convenience.
\setcounter{step}{0}
\begin{step}\label{step-s1}
We take a resolution of singularities $f:Z\to X$ such that $\Supp f^{-1}_*\Delta
\cup \Exc(f)$ is a simple normal crossing divisor on $Z$, where
$\Exc(f)$ is the exceptional locus of $f$.
We put $K_Z+\Delta_Z=f^*(K_X+\Delta)$.
Then, $(Z, \Delta_Z)$ gives a natural quasi-log structure on $[X, K_X+\Delta]$.
\end{step}
\begin{step}\label{step-s2}
Assume that $(X, \Delta)$ is not log canonical at $x$.
We put
\begin{equation}
X'=\Nlc (X, \Delta)\cup \bigcup W,
\end{equation}
where $W$ runs over the one-dimensional
log canonical centers of $(X, \Delta)$ such that
$A\cdot W=0$.
Then, by Theorem \ref{f-thm4.5},
we obtain
\begin{equation}
H^i(X, \mathcal I_{X'}\otimes \mathcal O_X(D))=0
\end{equation}
for every $i>0$,
where $\mathcal I_{X'}$ is the defining ideal sheaf of $X'$.
Note that $x$ is isolated in $\Supp \mathcal O_X/\mathcal I_{X'}$.
Therefore, the restriction map
\begin{equation}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(x)
\end{equation}
is surjective.
Thus, we obtain $x\not \in \Bs|D|$.
\end{step}
From now on, we may assume that $(X, \Delta)$ is log canonical
at $x$.
\begin{step}
Assume that $x$ is a zero-dimensional
log canonical center of $(X, \Delta)$.
We put
\begin{equation}
X'=\Nlc (X, \Delta)\cup \bigcup W\cup \{x\},
\end{equation}
where $W$ runs over the one-dimensional
log canonical centers of $(X, \Delta)$ such that
$A\cdot W=0$.
Then, by Theorem \ref{f-thm4.5},
we obtain
\begin{equation}
H^i(X, \mathcal I_{X'}\otimes \mathcal O_X(D))=0
\end{equation}
for every $i>0$.
Note that $x$ is isolated in $\Supp \mathcal O_X/\mathcal I_{X'}$.
Therefore, we obtain $x\not \in \Bs|D|$ as in
Step \ref{step-s2}.
\end{step}
From now on, we may assume that $(X, \Delta)$ is plt at $x$.
\begin{step}\label{step-s4}
Assume that $(X, \Delta)$ is plt but is not klt at $x$.
Let $L$ be the unique one-dimensional
log canonical center of $(X, \Delta)$ passing through $x$.
We put
\begin{equation}
X' =\Nlc (X, \Delta) \cup \bigcup W \cup L
\end{equation}
where $W$ runs over the one-dimensional log canonical
centers of $(X, \Delta)$ such that $A\cdot W=0$.
By Theorem \ref{f-thm4.5},
we obtain that
\begin{equation}
H^i(X, \mathcal I_{X'}\otimes \mathcal O_X(D))=0
\end{equation}
for every $i>0$, as usual.
Therefore, the restriction map
\begin{equation}\label{eq68}
H^0(X, \mathcal O_X(D))\to H^0(X', \mathcal O_{X'}(D))
\end{equation}
is surjective.
By adjunction (see Theorem \ref{f-thm4.4}),
$[X', (K_X+\Delta)|_{X'}]$ has a quasi-log structure
induced by the quasi-log
structure $f:(Z, \Delta_Z)\to [X, K_X+\Delta]$ constructed in Step \ref{step-s1}.
Let $g:(Z', \Delta_{Z'})\to [X', (K_X+\Delta)|_{X'}]$ be the induced quasi-log resolution.
We put
\begin{equation}
c=\underset{t\geq 0}{\sup}\left\{t \left|
\begin{array}{l} {\text{the normalization of
$(Z', \Delta_{Z'}+tg^*x)$ is sub}}\\
{\text{log canonical over $X'\setminus \Nqlc((K_X+\Delta)|_{X'})$.}}
\end{array}\right. \right\}.
\end{equation}
Then, by \cite[Lemma 3.16]{fujino-reid-fukuda}, we obtain that $0<c<2$.
Note that $x$ is a Cartier divisor on $X'$.
Let us consider $g:(Z', \Delta_{Z'}+cg^*x)\to [X', (K_X+\Delta)|_{X'}+cx]$,
which defines a quasi-log structure.
Then, by construction, $x$ is a qlc center of $[X', (K_X+\Delta)|_{X'}+cx]$.
Moreover,
we see that
\begin{equation}
\deg(D|_L-(K_X+\Delta)|_L-cx)=(A\cdot L)-c>0
\end{equation} by assumption.
We put
\begin{equation}
X''=\Nqlc(X', (K_X+\Delta)|_{X'}+cx) \cup \bigcup W \cup \{x\},
\end{equation}
where $W$ runs over the one-dimensional
qlc centers of $[X', (K_X+\Delta)|_{X'}+cx]$ such that
$W\ne L$.
Then, by Theorem \ref{f-thm4.5}, we obtain
\begin{equation}
H^i(X', \mathcal I_{X''}\otimes \mathcal O_{X'}(D))=0
\end{equation}
for every $i>0$.
Note that $x$ is isolated in $\Supp \mathcal O_{X'}/\mathcal I_{X''}$.
Therefore, the restriction map
\begin{equation}\label{eq613}
H^0(X', \mathcal O_{X'}(D))\to \mathcal O_{X'}(D)\otimes \mathbb C(x)
\end{equation}
is surjective.
By combining \eqref{eq68} with \eqref{eq613},
the desired restriction map
\begin{equation}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(x)
\end{equation}
is surjective. This means that
$x\not\in \Bs|D|$.
\end{step}
Thus, from now on, we may assume that $(X, \Delta)$ is klt at $x$.
\begin{step}\label{step-s5}
In this step, we assume that $x$ is a smooth point of $X$.
Since $A^2>4$, we can find an effective $\mathbb R$-Cartier divisor
$B$ on $X$ such that
$\mult_x B>2$ and that
$B\sim _{\mathbb R} A$.
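Such a $B$ exists by the usual dimension count at the smooth point $x$: on the irreducible component of $X$ containing $x$, an asymptotic Riemann--Roch estimate gives, for $k\gg 0$,
\begin{equation}
h^0(X, \mathcal O_X(kA))\geq \frac{A^2}{2}\,k^2+O(k),
\end{equation}
while vanishing at $x$ to order greater than $2k$ imposes at most $2k^2+O(k)$ conditions. Hence $A^2>4$ guarantees, for $k\gg 0$, an effective divisor $B_k\sim _{\mathbb R}kA$ with $\mult _x B_k>2k$, and we may take $B=\frac{1}{k}B_k$.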
We put
\begin{equation}
c=\max\{ t\geq 0\, |\, (X, \Delta+tB) \
\text{is log canonical at $x$}\}.
\end{equation}
Then we obtain that $0<c<1$ since $\mult _x B>2$.
We have a natural quasi-log structure on $[X, K_X+\Delta+cB]$ as in
Step \ref{step-s1}.
By construction,
there is a log canonical center of $[X, K_X+\Delta+cB]$ passing
through $x$.
We put
\begin{equation}
X' =\Nlc (X, \Delta+cB)\cup \bigcup W \cup W_0,
\end{equation}
where $W_0$ is the minimal log canonical
center of $(X, \Delta+cB)$ passing through $x$
and $W$ runs over the one-dimensional log
canonical centers of $(X, \Delta+cB)$ such that
$A\cdot W=0$.
We note that $D-(K_X+\Delta+cB)\sim _{\mathbb R} (1-c)A$.
Then, by Theorem \ref{f-thm4.5},
\begin{equation}\label{eq617}
H^i(X, \mathcal I_{X'}\otimes \mathcal O_X(D))=0
\end{equation}
for every $i>0$, where $\mathcal I_{X'}$ is the defining ideal sheaf of $X'$ on $X$.
\setcounter{case}{0}
\begin{case}\label{case-s1}
If $\dim_x X'=0$,
then $x$ is isolated in $\Supp \mathcal O_X /\mathcal I_{X'}$.
Therefore, the restriction map
\begin{equation}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C(x)
\end{equation}
is surjective.
Thus, we obtain that $x\not\in \Bs|D|$.
\end{case}
\begin{case}\label{case-s2}
If $\dim_x X'=1$, then $(X, \Delta+cB)$ is plt at $x$.
We write $\Delta+cB=L+\Delta'$, where
$L=W_0$ is the unique one-dimensional
log canonical center of $(X, \Delta)$ passing through $x$ and $\Delta'=\Delta+cB-L$.
We put
\begin{equation}
c'=1-\mult _x (\Delta'|_L).
\end{equation}
Then $[X', (K_X+\Delta+cB)|_{X'}+c'x]$ has a quasi-log structure such that
$x$ is a qlc center of
this quasi-log structure as in Case \ref{case2} in Step \ref{step4} in the proof
of Theorem \ref{f-thm5.1}.
We put
\begin{equation}
X''= \Nqlc (X', (K_X+\Delta+cB)|_{X'}+c'x) \cup \bigcup W \cup \{x\},
\end{equation}
where $W$ runs over the one-dimensional qlc centers of
$[X', (K_X+\Delta+cB)|_{X'}+c'x]$ such that $W\ne L$.
By \eqref{eq517} in the proof of
Theorem \ref{f-thm5.1}, we obtain that
\begin{equation}\label{eq621}
\deg (D|_L-(K_X+\Delta+cB)|_L-c'x)>0.
\end{equation}
Then, by \eqref{eq621} and Theorem \ref{f-thm4.5},
\begin{equation}
H^i(X', \mathcal I_{X''}\otimes \mathcal O_{X'}(D))=0
\end{equation}
for every $i>0$, where $\mathcal I_{X''}$ is
the defining ideal sheaf of $X''$ on $X'$.
Thus, we have that
\begin{equation}\label{eq623}
H^0(X', \mathcal O_{X'} (D))\to \mathcal O_{X'}(D)\otimes
\mathcal O_{X'} / \mathcal I_{X''}
\end{equation}
is surjective.
Note that $x$ is isolated in $\Supp \mathcal O_{X'}/ \mathcal I_{X''}$.
Therefore, we obtain surjections
\begin{equation}
\begin{split}
H^0(X, \mathcal O_X(D))\twoheadrightarrow
H^0(X', \mathcal O_{X'}(D)) \twoheadrightarrow
\mathcal O_{X'} (D)\otimes \mathbb C(x)
\end{split}
\end{equation}
by \eqref{eq617} and \eqref{eq623}.
This is the desired surjection.
\end{case}
\end{step}
Finally, we further assume that $x$ is a singular point of $X$.
\begin{step}\label{step-s6}
Let $\pi:Y\to X$ be the minimal resolution of $x$.
We put $K_Y+\Delta_Y=\pi^*(K_X+\Delta)$.
Since $\Bs|\pi^*D| =\pi^{-1}\Bs|D|$,
it is sufficient to prove that
$y\not\in \Bs|\pi^*D|$ for some
$y\in \pi^{-1}(x)$.
Since $\pi:Y\to X$ is the minimal resolution of $x$,
$f:(Z, \Delta_Z)\to [X, K_X+\Delta]$ factors
through
$[Y, K_Y+\Delta_Y]$ and
$(Z, \Delta_Z)\to [Y, K_Y+\Delta_Y]$ induces a natural quasi-log structure on
$[Y, K_Y+\Delta_Y]$.
We can take an effective $\mathbb R$-Cartier divisor
$B'$ on $Y$ such that
$B'\sim _{\mathbb R} \pi^*A$,
$\mult _y B'>2$ for some $y\in \pi^{-1}(x)$, and
$B'=\pi^*B$ for some effective $\mathbb R$-Cartier divisor $B$ on $X$.
We set
\begin{equation}
c=\underset{t\geq 0}{\sup}\left\{t \left|
\begin{array}{l} {\text{$(Y, \Delta_Y+tB')$ is log canonical}}\\
{\text{at any point of $\pi^{-1}(x)$.}}
\end{array}\right. \right\}.
\end{equation}
Then we have $0<c<1$.
As in Step \ref{step-s1},
we can consider a natural quasi-log structure of
$[Y, K_Y+\Delta_Y+cB']$.
If there is a one-dimensional qlc center $C$ of
$[Y, K_Y+\Delta_Y+cB']$
such that $C\cap \pi^{-1}(x)\ne \emptyset$ and that
\begin{equation}
(\pi^*D-(K_Y+\Delta_Y+cB'))\cdot C=(1-c)\pi^*A\cdot C=0,
\end{equation}
then we obtain that $C\subset \pi^{-1}(x)$.
This means that $x$ is a qlc center of $[X, K_X+\Delta+cB]$.
In this case, we have that
\begin{equation}
H^0(X, \mathcal O_X(D))\to \mathcal O_X(D)\otimes \mathbb C (x)
\end{equation}
is surjective as in Case \ref{case-s1} in Step \ref{step-s5}.
Therefore, we may assume that
\begin{equation}
(\pi^*D-(K_Y+\Delta_Y+cB'))\cdot C>0
\end{equation}
for every one-dimensional qlc center $C$ of $[Y, K_Y+\Delta_Y+cB']$
with $C\cap \pi^{-1}(x)\ne \emptyset$.
We note that
\begin{equation}
(\pi^*D-(K_Y+\Delta_Y))\cdot C =(D-(K_X+\Delta))\cdot \pi_* C
= A\cdot \pi_* C\geq 2
\end{equation}
when $\pi_*C\ne 0$.
Then we can apply
the arguments in Step \ref{step-s5} to $[Y, K_Y+\Delta_Y+cB']$ and $\pi^*D$.
Thus, we obtain that $y\not\in \Bs|\pi^*D|$ for some $y\in \pi^{-1}(x)$.
This means that $x\not\in \Bs|D|$.
\end{step}
Anyway, we obtain that $x\not\in \Bs|D|$.
\end{proof}
\section{Effective very ampleness lemma}\label{f-sec7}
In this section, we prove an effective very ampleness lemma.
This section is independent of the other sections.
The statement and the proof of
\cite[1.2 Lemma]{kollar} do not seem to be true as stated.
J\'anos Koll\'ar and the author think that
we need some modifications. So, we
prove the following lemma.
\begin{lem}\label{f-lem7.1}
Let $(X, \Delta)$ be a projective semi-log canonical
pair with $\dim X=n$.
Let $D$ be an ample Cartier divisor on $X$ such that
$|D|$ is basepoint-free.
Assume that $L=D-(K_X+\Delta)$ is nef and log big with
respect to $(X, \Delta)$,
that is,
$L$ is nef and $L|_W$ is big for every slc stratum $W$ of $(X, \Delta)$.
Then $(n+1)D$ is very ample.
\end{lem}
We give a detailed proof of Lemma \ref{f-lem7.1} for the reader's convenience.
\begin{proof}
By the vanishing theorem (see \cite[Theorem 1.10]{fujino-slc}),
we obtain that $H^i(X, \mathcal O_X((n+1-i)D))=0$ for every $i>0$.
Then, by the Castelnuovo--Mumford regularity, we see that
\begin{equation}
H^0(X, \mathcal O_X(D))\otimes H^0(X, \mathcal O_X(mD))
\to H^0(X, \mathcal O_X((m+1)D))
\end{equation}
is surjective for every $m\geq n+1$ (see,
for example, \cite[Chapter II.~Proposition 1]{kleiman}).
Therefore, we obtain that
\begin{equation}\label{eq7.2}
\mathrm{Sym}^kH^0(X, \mathcal O_X((n+1)D))\to H^0(X, \mathcal O_X(k(n+1)D))
\end{equation}
is surjective for every $k\geq 1$.
We put $A=(n+1)D$ and consider $f=\Phi_{|A|}: X\to Y$. Then
there is a very ample Cartier divisor $H$ on $Y$ such that
$A\sim f^*H$.
By construction and the surjection
\eqref{eq7.2}, we have the following commutative diagram
\begin{equation}
\xymatrix{
\mathrm{Sym}^kH^0(Y, \mathcal O_Y(H)) \ar@{->>}[r]\ar[d]&
\mathrm{Sym}^kH^0(X, \mathcal O_X(A))\ar@{->>}[d]\\
H^0(Y, \mathcal O_Y(kH))\ar@{^{(}->}[r]&H^0(X, \mathcal O_X(kA))
}
\end{equation}
for every $k\geq 1$.
This implies that $H^0(Y, \mathcal O_Y(kH))\simeq H^0(X, \mathcal O_X(kA))$ for
every $k\geq 1$. Note that $\mathcal O_Y\simeq f_*\mathcal O_X$ by
\begin{equation}
0\to \mathcal O_Y \to f_*\mathcal O_X\to \delta\to 0
\end{equation}
and
\begin{equation}
\begin{split}
0&
\to H^0(Y, \mathcal O_Y(kH))\to H^0(X, \mathcal O_X(kA))
\\ &
\to H^0(Y,
\delta\otimes \mathcal O_Y(kH)) \to
H^1(Y, \mathcal O_Y(kH))\to \cdots
\end{split}
\end{equation}
for $k\gg 0$.
By the following commutative diagram:
\begin{equation}
\xymatrix{
X\ar[r]^{f}\ar@{^{(}->}[dr]_{\Phi_{|kA|}}& Y\ar@{^{(}->}[d]^{\Phi_{|kH|}} \\
&\mathbb P^N,
}
\end{equation}
where $k$ is a sufficiently large positive integer such that
$kA$ and $kH$ are very ample, we obtain that
$f$ is an isomorphism.
This means that $A=(n+1)D$ is very ample.
\end{proof}
We close this section with a remark on
the very ampleness for
$n$-dimensional stable pairs and
semi-log canonical Fano varieties (see \cite{fujino-kollar-type}).
\begin{rem}\label{f-rem7.2}
Let $(X, \Delta)$ be a projective semi-log canonical pair
with $\dim X=n$.
Assume that $I(K_X+\Delta)$ is an ample Cartier divisor for some
positive integer $I$.
Then we put $D=I(K_X+\Delta)$, $a=2$, and
apply \cite[Remark 1.3 and Corollary 1.4]{fujino-kollar-type}.
We obtain that $NI(K_X+\Delta)$ is very ample, where
$N=(n+1)2^{n+1}(n+1)!(2+n)=2^{n+1}(n+2)!(n+1)$.
Assume that $-I(K_X+\Delta)$ is an ample Cartier divisor for some
positive integer $I$.
Then we put $D=-I(K_X+\Delta)$, $a=1$, and
apply \cite[Remark 1.3 and Corollary 1.4]{fujino-kollar-type}.
We obtain that $-NI(K_X+\Delta)$ is very ample, where
$N=(n+1)2^{n+1}(n+1)!(1+n)=2^{n+1}(n+1)^3n!$.
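Concretely, for $n=2$ these estimates read
\begin{equation}
N=2^{3}\cdot 4!\cdot 3=576 \quad \text{and} \quad N=2^{3}\cdot 3^{3}\cdot 2!=432,
\end{equation}
respectively.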
Our results for surfaces in this paper are much sharper than the above
estimates for $n=2$.
\end{rem}
package com.datastax.sparkstress
import java.io.{FileWriter, Writer}
import java.lang.Math.round
import org.apache.spark.SparkConf
import org.reflections.Reflections
import org.apache.spark.sql.{SparkSession, SaveMode}
import collection.JavaConverters._
import scala.math._
object DistributedDataType extends Enumeration {
val RDD = Value("rdd")
val DataFrame = Value("dataframe")
}
object SaveMethod extends Enumeration {
val Driver = Value("driver")
val Bulk = Value("bulk")
val Parquet = Value("parquet")
val Json = Value("json")
val Csv = Value("csv")
val Text = Value("text")
}
object ValidNumericAnnotations extends Enumeration {
val k = Value("k")
val m = Value("m")
val b = Value("b")
val t = Value("t")
val q = Value("q")
}
case class TableLocation(keyspace: String, table: String)
case class Config(
//Test Options
seed: Long = 4L, // lock in a default for better repeatability, lucky number 4
testName: String ="writeshortrow",
keyspace: String = "ks",
table: String = "tab",
trials: Int = 1,
verboseOutput: Boolean = false,
//Write Options
replicationFactor: Int = 1,
numPartitions: Int = 400,
totalOps: Long = 20 * 1000000,
numTotalKeys: Long = 1 * 1000000,
deleteKeyspace: Boolean = false,
secondaryIndex: Boolean = false,
// csv file to append results
@transient file: Option[Writer] = Option.empty,
saveMethod: SaveMethod.Value = SaveMethod.Driver,
dataframeSaveMode: SaveMode = SaveMode.Append,
distributedDataType: DistributedDataType.Value = DistributedDataType.RDD,
//Spark Options
sparkOps: Map[String,String] = Map.empty,
//Streaming Params
numReceivers: Int = 1,
receiverThroughputPerBatch: Long = 100000,
terminationTimeMinutes: Long = 0,
streamingBatchIntervalSeconds: Int = 5,
inClauseKeys: Int = 2000
)
case class TestResult ( time: Long, ops: Long )
object SparkCassandraStress {
val reflections = new Reflections("com.datastax.sparkstress")
val VALID_TESTS = getValidTestNames()
val KeyGroupings = Seq("none", "replica_set", "partition")
val supportedAnnotationsMsg = s"Ex. 1000, 1k (thousand), 2m (million), 3B (billion), 4t (trillion), 5q (quadrillion)"
def main(args: Array[String]) {
val parser = new scopt.OptionParser[Config]("SparkCassandraStress") {
head("SparkCassandraStress", "1.0")
arg[String]("testName") optional() action { (arg,config) =>
config.copy(testName = arg.toLowerCase)
} text { s"""Tests :
|Write Tests: ${getWriteTestNames.mkString(" , ")}
|Read Tests: ${getReadTestNames.mkString(" , ")}
|Streaming Tests: ${getStreamingTestNames.mkString(" , ")}""".stripMargin}
arg[String]("master") optional() action { (arg,config) =>
config.copy(sparkOps = config.sparkOps + ("spark.master" -> arg))
} text {"Spark Address of Master Node"}
opt[String]('f', "file") optional() action { (arg,config) =>
config.copy(file = Option(new FileWriter(arg, true)))
} text {"Name of the file to append results"}
opt[Unit]('d',"deleteKeyspace") optional() action { (_,config) =>
config.copy(deleteKeyspace = true)
} text {"Delete Keyspace before running"}
opt[Unit]('i',"secondaryIndex") optional() action { (_,config) =>
config.copy(secondaryIndex = true)
} text {"Adds Secondary Indexes to PerfRow DDL"}
opt[String]('k',"keyspace") optional() action { (arg,config) =>
config.copy(keyspace = arg)
} text {"Name of the keyspace to use/create"}
opt[String]('e',"distributedDataType") optional() action { (arg,config) =>
config.copy(distributedDataType = DistributedDataType.withName(arg.toLowerCase))
} text {
"""See 'saveMethod'. **Note**: Use for write workloads.
| rdd: Resilient Distributed Dataset, the basic abstraction in Spark.
| dataframe: A Dataframe is a catalyst backed optimized container.""".stripMargin}
opt[String]('s',"saveMethod") optional() action { (arg,config) =>
config.copy(saveMethod = SaveMethod.withName(arg.toLowerCase))
} text {
"""See 'distributedDataType'. **Note**: Use for write workloads.
| rdd save methods:
| bulk: bulkSaveToCassandra
| driver: saveToCassandra
| **Note**: Folder for DSEFS writes will be named: keyspace.tablename (Default: ks.tab)
| dataframe save methods:
| driver: ds.write...save()
| parquet: data format in DSEFS
| text: data format in DSEFS
| json: data format in DSEFS
| csv: data format in DSEFS""".stripMargin}
opt[String]('u',"dataframeSaveMode") optional() action { (arg,config) =>
config.copy(dataframeSaveMode = SaveMode.valueOf(arg.toLowerCase.capitalize))
} text {
"""See 'distributedDataType'. Specifies the behavior when data or table already exists. Options include:
| overwrite: overwrite the existing data
| append: (default) append the data
| ignore: ignore the operation (i.e. no-op)
| error: throw an exception at runtime""".stripMargin}
opt[Int]('n',"trials")optional() action { (arg,config) =>
config.copy(trials = arg)
} text {"Trials to run"}
opt[String]('o',"totalOps") optional() action { (arg,config) =>
config.copy(totalOps = unpackAnnotatedNumeric(arg))
} text {s"Total number of operations to execute. ${supportedAnnotationsMsg}"}
opt[Int]('p',"numPartitions") optional() action { (arg,config) =>
config.copy(numPartitions = arg)
} text {s"Number of Spark Partitions To Create."}
opt[Long]('c',"seed") optional() action { (arg,config) =>
config.copy(seed = arg)
} text {"Seed used for randomly generating cell data, reuse a previous seed for 100% repeatability between runs. Min: -9223372036854775808, Max: 9223372036854775807"}
opt[Int]('r',"replication") optional() action { (arg,config) =>
config.copy(replicationFactor = arg)
} text {"Replication Factor to set on new keyspace, will not change existing keyspaces"}
opt[String]('t', "table") optional() action { (arg,config) =>
config.copy(table = arg)
} text {"Name of the table to use/create"}
opt[Unit]('v',"verbose") optional() action { (_,config) =>
config.copy(verboseOutput = true)
} text {"Display verbose output for debugging."}
opt[String]('y',"numTotalKeys") optional() action { (arg,config) =>
config.copy(numTotalKeys = unpackAnnotatedNumeric(arg))
} text {s"Total Number of CQL Partition Key Values. ${supportedAnnotationsMsg}"}
opt[Int]('w',"numReceivers") optional() action { (arg,config) =>
config.copy(numReceivers = arg)
} text {"Changes the number of receivers to make in Streaming Tests"}
opt[Long]('x',"receiverThroughputPerBatch") optional() action { (arg,config) =>
config.copy(receiverThroughputPerBatch = arg)
} text {"Changes the number of rows to emit per receiver per batch timing"}
opt[Int]('z',"streamingBatchLength") optional() action { (arg,config) =>
config.copy(streamingBatchIntervalSeconds = arg)
} text {"Batch interval in seconds used for defining a StreamingContext."}
opt[Int]('m',"terminationTimeMinutes") optional() action { (arg,config) =>
config.copy(terminationTimeMinutes = arg)
} text { "The desired runtime (in minutes) for a given workload. WARNING: Not supported with multiple trials or read workloads."}
opt[Int]("inClauseKeys") optional() action { (arg,config) =>
config.copy(inClauseKeys = arg)
} text {s"Number of keys in 'IN' clause, applicable only for tests that execute select queries " +
s"with 'IN' clause."}
arg[String]("connectorOpts") optional() text { """spark-cassandra-connector configs, Ex: --conf "conf1=val1" --conf "conf2=val2" """}
help("help") text {"CLI Help"}
checkConfig{ c => if (VALID_TESTS.contains(c.testName)) success else failure(
s"""${c.testName} is not a valid test :
|Streaming Tests: ${getStreamingTestNames.mkString(" , ")}
|Write Tests: ${getWriteTestNames.mkString(" , ")}
|Read Tests: ${getReadTestNames.mkString(" , ")}""".stripMargin)}
}
parser.parse(args, Config()) map { config =>
if (config.trials > 1 && config.terminationTimeMinutes > 0) {
println("\nERROR: A termination time was specified with multiple trials, this is not supported yet.\n")
} else if (getReadTestNames.contains(config.testName) && config.terminationTimeMinutes > 0) {
println(s"\nERROR: A termination time was specified with '${config.testName} which is a Read test, this is not supported yet.\n")
}else {
runTask(config)
}
} getOrElse {
System.exit(1)
}
}
def unpackAnnotatedNumeric(num: String): Long = {
if (num forall Character.isDigit) {
num.toLong
} else {
val numericPortion = num.init
assert(numericPortion forall Character.isDigit)
val numericAnnotation = num.toLowerCase.last
assert(Character.isLetter(numericAnnotation))
try {
ValidNumericAnnotations.withName(numericAnnotation.toString) match {
case ValidNumericAnnotations.k => numericPortion.toLong * pow(10, 3).toLong // thousand
case ValidNumericAnnotations.m => numericPortion.toLong * pow(10, 6).toLong // million
case ValidNumericAnnotations.b => numericPortion.toLong * pow(10, 9).toLong // billion
case ValidNumericAnnotations.t => numericPortion.toLong * pow(10, 12).toLong // trillion
case ValidNumericAnnotations.q => numericPortion.toLong * pow(10, 15).toLong // quadrillion
}
} catch {
case ex: NoSuchElementException => throw new UnsupportedOperationException(s"${ex}: ${supportedAnnotationsMsg}")
}
}
}
def getReadTests() = {
reflections.getTypesAnnotatedWith(classOf[ReadTest]).asScala.toSet
}
def getWriteTests() = {
reflections.getTypesAnnotatedWith(classOf[WriteTest]).asScala.toSet
}
def getStreamingTests() = {
reflections.getTypesAnnotatedWith(classOf[StreamingTest]).asScala.toSet
}
def getValidTests() = {
Set.empty ++ getReadTests() ++ getWriteTests() ++ getStreamingTests()
}
def getReadTestNames(): Set[String] = {
getReadTests.map(_.getSimpleName.toLowerCase)
}
def getWriteTestNames(): Set[String] = {
getWriteTests.map(_.getSimpleName.toLowerCase)
}
def getStreamingTestNames(): Set[String] = {
getStreamingTests.map(_.getSimpleName.toLowerCase)
}
def getValidTestNames(): Set[String] = {
getReadTestNames() ++ getWriteTestNames() ++ getStreamingTestNames()
}
def getStressTest(config: Config, ss: SparkSession) : StressTask = {
val subClasses = getValidTests.toList
val classMap = subClasses.map(_.getSimpleName.toLowerCase).zip(subClasses).toMap
classMap(config.testName)
.getConstructors
.maxBy(_.getParameterTypes.length)
.newInstance(config, ss)
.asInstanceOf[StressTask]
}
def csvResults(config: Config, time: Seq[Long]) : String = {
time.zipWithIndex.map {case (time,count) => {
val timeSeconds :Double = time / 1000000000.0
val opsPerSecond = config.totalOps/ timeSeconds
Seq(config.testName, config.saveMethod, config.totalOps, config.totalOps/config.numTotalKeys, count, timeSeconds, opsPerSecond, config).mkString("\t")
}}.mkString("\n") + "\n"
}
def runTask(config:Config)
{
val sparkConf =
new SparkConf()
.setAppName("SparkStress_"+config.testName)
//Make sure for streaming that the keep_alive is sufficiently large
.set("spark.cassandra.connection.keep_alive_ms", (config.streamingBatchIntervalSeconds*1000*5).toString)
.set("spark.cassandra.input.metrics", "true")
.set("spark.cassandra.output.metrics", "true")
.setAll(config.sparkOps)
val ss = ConnectHelper.getSparkSession(sparkConf)
if (config.verboseOutput) {
println("\nDumping debugging output")
println(ss.sparkContext.getConf.toDebugString+"\n")
}
val test: StressTask = getStressTest(config, ss)
val timesAndOps: Seq[TestResult]= test.runTrials(ss)
val time = for (x <- timesAndOps) yield {x.time}
val totalCompletedOps = for (x <- timesAndOps) yield {x.ops}
val timeSeconds = time.map{ x => round( x / 1000000000.0 ) }
val timeMillis = time.map{ x => round( x / 1000000.0 ) }
val opsPerSecond = for (i <- timeSeconds.indices) yield {round(totalCompletedOps(i).toDouble/timeSeconds(i))}
test match {
case _: WriteTask[_] => {
println(s"TimeInSeconds : ${timeSeconds.mkString(",")}\n")
println(s"OpsPerSecond : ${opsPerSecond.mkString(",")}\n")
config.file.map(f => {f.write(csvResults(config, time));f.flush })
ss.stop()
}
case _: ReadTask => {
println(s"TimeInSeconds : ${timeSeconds.mkString(",")}\n")
println(s"TimeInMillis : ${timeMillis.mkString(",")}\n")
println(s"Average [ms]: ${timeMillis.sum.toDouble / timeMillis.size}\n")
config.file.map(f => {f.write(csvResults(config, time));f.flush })
ss.stop()
}
case y: StreamingTask[_] => {
println("Streaming Begun")
println(s"Running for ${y.terminationTime} Seconds")
Thread.sleep(y.terminationTime * 1000)
println("Times up, shutting down")
y.ssc.stop(true, true)
}
}
}
}
Q: SQL Select from 2 tables and Group by month

Okay so I've got 2 tables.
First one is called msg and the other one is msg_t
msg (id, send_type, ..)
msg_t (id, msg_id, send_time)
What I am trying to do is to get all of the msg rows where send_type = 1
and to count the msg_t entries for each msg and group it by month
How can I do that?
A: SELECT a.ID, MONTHNAME(b.send_time), COUNT(b.msg_id) totalCount
FROM msg a
LEFT JOIN msg_t b
ON a.ID = b.msg_id
WHERE a.send_type = 1
GROUP BY a.ID, MONTH(b.send_time)
Relevant MySQL date functions:

*MONTHNAME()
*MONTH()
by using LEFT JOIN, a value of zero will be displayed for msg.ID that have no records on table msg_t
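To see the grouping behaviour end to end, here is a small self-contained sketch. It uses SQLite through Python's sqlite3 (so strftime('%m', ...) stands in for the MySQL-only MONTH()/MONTHNAME()), and the sample rows are invented for illustration:

```python
import sqlite3

# In-memory database with the two tables from the question.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE msg   (id INTEGER PRIMARY KEY, send_type INTEGER);
CREATE TABLE msg_t (id INTEGER PRIMARY KEY, msg_id INTEGER, send_time TEXT);

INSERT INTO msg VALUES (1, 1), (2, 1), (3, 2);
INSERT INTO msg_t (msg_id, send_time) VALUES
  (1, '2013-01-05'), (1, '2013-01-20'), (1, '2013-02-01'),
  (2, '2013-03-10');
""")

# Same shape as the answer above: LEFT JOIN, filter on send_type,
# group by message id and month.  COUNT(t.msg_id) ignores NULLs, so a
# msg row with no msg_t entries would show a count of 0.
rows = cur.execute("""
    SELECT m.id,
           strftime('%m', t.send_time) AS month,
           COUNT(t.msg_id)             AS total
    FROM msg m
    LEFT JOIN msg_t t ON m.id = t.msg_id
    WHERE m.send_type = 1
    GROUP BY m.id, month
    ORDER BY m.id, month
""").fetchall()

print(rows)  # [(1, '01', 2), (1, '02', 1), (2, '03', 1)]
```

Note that COUNT(*) instead of COUNT(t.msg_id) would report 1 rather than 0 for a message with no msg_t rows, because the NULL-padded row of the LEFT JOIN still counts.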
A: SELECT
m.id,
MONTH(send_time),
COUNT(*)
FROM msg m
INNER JOIN msg_t t ON m.id = t.msg_id
WHERE m.send_type = 1
GROUP BY m.id,
MONTH(send_time)
\section{\uppercase{Introduction}}
The pseudo-Goldstone boson associated with Peccei-Quinn symmetry
$U_{PQ}(1)$~\cite{Peccei77}, the axion~\cite{WW}, is of interest not only
in theoretical aspects of elementary particle physics, but in some
astrophysical and cosmological ap\-pli\-ca\-tions as
well~\cite{Turner,Raffelt90,Raffelt-book}.
It is also known that astrophysical and cosmological considerations
leave a narrow window on the axion mass~\cite{Raffelt-castle}
\footnote{However in paper~\cite{Rubakov}
a possibility to solve the CP problem of QCD within a GUT model
with a heavy axion $m_a \lesssim 1$ TeV is considered.}:
\begin{equation}
10^{-5}~{\rm eV} \mathrel{\mathpalette\vereq<} m_a \mathrel{\mathpalette\vereq<} 10^{-2}~{\rm eV},
\label{eq:MA}
\end{equation}
\noindent where axions could exist and provide a significant fraction
or all of the cosmic dark matter.
At present the interest in axions as a possible dark matter candidate
stimulates full-scale searches for galactic axions in
experiments~\cite{Kyoto,Livermore}. The negative results of these experiments are
naturally explained by the fact that axions are very weakly coupled
and very long-lived. The axion lifetime in vacuum is gigantic:
\begin{equation}
\tau \sim 6.3 \cdot 10^{42} \, \mbox{s} \,
\left ( {10^{-2}\mbox{eV} \over m_a } \right )^6 \;
\left ( {E_a \over 1 \mbox{MeV} } \right ) .
\label{eq:T0}
\end{equation}
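To illustrate this scale: even for the heaviest mass allowed by the window~(\ref{eq:MA}), $m_a \simeq 10^{-2}$~eV, and for $E_a = 1$~MeV, Eq.~(\ref{eq:T0}) gives
\begin{equation}
\tau \sim 6.3 \cdot 10^{42}~\mbox{s} \sim 10^{25}\, t_U,
\end{equation}
where $t_U \simeq 4 \cdot 10^{17}$~s is the age of the Universe; for lighter allowed masses the lifetime grows further as $m_a^{-6}$.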
On the other hand, in some astrophysical considerations where axions
effects could be substantial, it is important to take into account
the influence of plasma and the magnetic fields.
One of the most physically realistic situations presented in
many astrophysical objects is that when from both these
components of the active medium the plasma dominates.
For the physical circumstances of interest to us, the temperature $T$
appears to be the largest physical parameter. So, we will work in the
limit $T^2 \gg e B \gg m_e^2$, which is well described by the crossed
field case (${\bf E} \perp {\bf B}$, $E = B$), when a great number of the Landau levels
are excited. While the condition $T^2 \gg e B$ is fulfilled,
the magnetic field is still strong, $e B \gg m_e^2$, in comparison with
the known Schwinger value $B_e = m^2_e/e \simeq 4.41 \cdot 10^{13}$~G.
Possible mechanisms of a generation of such strong fields
as $B \sim 10^{15} - 10^{17}$~G in astrophysics were discussed in a
number of papers~\cite{magnetar,toroidal}.
In this paper we investigate the influence of the magnetic field and
plasma on the axion decay into electron-positron pair via a photon
intermediate state $a \to \gamma \to e^+ e^-$ in KSVZ model~\cite{KSVZ}
in which axions have not direct coupling to leptons.
The reason for which this forbidden in vacuum and plasma channel is opened
in the magnetic field is that $e^+e^-$ pair can have both
time-like and space-like total momentum as it occurs in photon splitting
$\gamma \to e^+ e^-$~\cite{Klepikov}.
\section{\uppercase{Matrix Element}}
A diagram describing $a \to e^+ e^-$ decay via the plasmon intermediate
state is shown in Fig.~\ref{fig:agee},
\begin{figure}[tb]
\centerline{\epsfxsize=.38\textwidth \epsffile[145 585 345 685]{agee-p.ps}}
\caption{The diagram describing the axion decay into electron-positron
pair via virtual photon.}
\label{fig:agee}
\end{figure}
where solid double lines imply the influence of the magnetic field
on the electron wave functions and undulating double lines imply the
influence of the medium on the photon propagator.
The Lagrangian describing the axion-photon coupling can be presented
in the form:
\begin{eqnarray}
{\cal L}_{a \gamma}= g_{a\gamma} \,\partial_\mu A_\nu\,
\tilde F_{\nu\mu}\,a\,,
\label{eq:Lag}
\end{eqnarray}
\noindent where $A_\mu$ is the four potential of the quantized
electromagnetic field, $\tilde F$ is the dual external field
tensor, $a$ is the axion field.
Here $g_{a\gamma}$ is the known axion-photon coupling
constant with the dimension (energy)$^{-1}$~\cite{Raffelt90}
$g_{a\gamma}=\alpha\xi/2\pi f_a$ where $\xi$ is a
model-dependent parameter, $f_a$ the Peccei-Quinn scale.
The matrix element of $a \to e^+ e^-$ decay corresponding to the
diagram of Fig.~\ref{fig:agee} can be written as
\begin{equation}
S=\frac{g_{a\gamma}}{\sqrt{2 E_a V}} \, h J
\label{eq:S1}
\end{equation}
\noindent
in terms of the currents
\begin{eqnarray}
J_\alpha & = & \int d^4 x\,
{\bar\psi(p,x)}\,\gamma_{\alpha}\,\psi(-p',x) \, e^{-iqx},
\nonumber \\
h_{\alpha} & = &
- i e (q \tilde F G (q))_\alpha =
- i e q_\mu \tilde F_{\mu\nu} G_{\nu\alpha}(q).
\nonumber
\end{eqnarray}
\noindent
Here, $e>0$ is the elementary charge, $p=(E,{\bf p})$ and
$p'=(E',{\bf p}')$ are the quasi-mo\-men\-ta of final electron
and positron ($p^2={p'}^2=m_e^2$) in an external field;
$q = (E_a,{\bf q})$ is the axion momentum;
$\psi(p,x)$ is the exact solution of the Dirac equation in the
magnetic field. The condition of relative weakness of the
magnetic field, $e B \ll T^2$, means that the plasma influence
mainly determines the properties of the photon propagator
$G_{\alpha \beta}$, which can be presented as a sum of transverse
and longitudinal parts:
\begin{eqnarray}
G_{\alpha \beta} & = & - i \left (
\frac{ {\cal P}^{(T)}_{\alpha\beta}}{q^2 - \Pi^{(T)}}
+ \frac{ {\cal P}^{(L)}_{\alpha\beta}}{q^2 - \Pi^{(L)}}
\right ),
\label{eq:G} \\
{\cal P}^{(T)}_{\alpha\beta} & = & - \sum^2_{\lambda = 1}
t_{\alpha}^{\lambda}\;t_{\beta}^{\lambda},
\nonumber \\
{\cal P}^{(L)}_{\alpha\beta} & = & - \, l_{\alpha}\;l_{\beta},
\nonumber
\end{eqnarray}
\noindent
Here, $\Pi^{(T)}$ and $\Pi^{(L)}$ are the transverse and longitudinal
eigenvalues of the polarization operator;
$t_{\alpha}^{\lambda} = (0, \bf t^{\lambda})$ and $l_{\alpha}$
denote transverse and longitudinal photon polarization vectors:
\begin{eqnarray}
{\bf t}^{(1)} & = & \frac{{\bf q} \times {\bf B}}{|{\bf q}| B \sin \theta}, \quad
{\bf t}^{(2)} = \frac{{\bf q} \times {\bf t}^{(1)}}{|{\bf q}|},
\nonumber \\
l_\alpha & = & \sqrt{\frac{q^2}{(uq)^2 - q^2}}\,
\left(u_\alpha - \frac{uq}{q^2}\,q_\alpha \right),
\nonumber
\end{eqnarray}
\noindent where $u_{\alpha}$ is the four-velocity of the medium, and $\theta$
is the angle between the external magnetic field ${\bf B}$ and the
axion momentum ${\bf q}$.
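As a quick sanity check on the definitions above, ${\bf t}^{(1)}$ and ${\bf t}^{(2)}$ should form an orthonormal pair perpendicular to ${\bf q}$. A minimal pure-Python sketch (the sample values of $\theta$ and $B$ are arbitrary, not taken from the paper):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

theta = 0.7
B = (0.0, 0.0, 3.0)                           # field along z, |B| = 3 (arbitrary units)
q = (math.sin(theta), 0.0, math.cos(theta))   # unit axion momentum at angle theta to B

# t1 = (q x B) / (|q| B sin(theta)),  t2 = (q x t1) / |q|
t1 = tuple(c / (norm(q) * norm(B) * math.sin(theta)) for c in cross(q, B))
t2 = tuple(c / norm(q) for c in cross(q, t1))
```

Both vectors come out unit-normalized and orthogonal to ${\bf q}$ and to each other, as required for the transverse photon modes.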
After integration over the variable $x$, the expression~(\ref{eq:S1})
can be presented in the form:
\begin{eqnarray}
S & = & {(2 \pi)^4 \delta^2({\bf Q}_{\perp}) \; \delta(k Q)
\over \sqrt {2 E_a V \cdot 2 E V \cdot 2 E' V}}\;
\frac{g_{a\gamma}}{\pi (4 \beta)^{1/3}}
\label{eq:S2} \\
& \times & \bar U(p) \; \bigg [ \; \hat h \;\Phi (\eta)
+ \frac{i e \ae_{-}}{2 z m_e^2} \; (\gamma F h) \; \Phi' (\eta)
\nonumber \\
& - & \frac{ e \ae_{+}}{2 z m_e^2} \; \gamma_5 \;
(\gamma \tilde F h) \; \Phi' (\eta)
\nonumber \\
& + & \frac{m_e^2}{2 z^2}\; \frac{\hat k (kh)}{(k p)(k p')}\;
\eta \; \Phi (\eta) \bigg ] \; U (- p'),
\nonumber \\
\ae_\pm & = & \frac{1}{\chi} \pm \frac{1}{\chi'},
\nonumber \\
z & = & \left (\frac{\chi_a}{2 \chi \chi'} \right )^{1/3},
\nonumber \\
\beta & = & \frac{1}{4} u^3 z^3, \quad u^2 = - \frac{e^2 a^2}{m_e^2},
\nonumber \\
\chi^2 & = & \frac{e^2 (p F F p)}{m_e^2}, \quad
{\chi'}^2 = \frac{e^2 (p' F F p')}{m_e^2},
\nonumber \\
\chi_a^2 & = & \frac{e^2 (q F F q)}{m_e^2},
\nonumber
\end{eqnarray}
\noindent
where $Q = q - p - p'$, and
${\bf Q}_\perp$ is the component of ${\bf Q}$ perpendicular to ${\bf k}$
(${\bf Q}_\perp {\bf k}= 0$). With the four-potential
$A_\mu = (kx) a_\mu$, the external field
tensor is $F_{\mu\nu} = k_\mu a_\nu - k_\nu a_\mu$.
Finally, $\Phi (\eta)$ is the Airy function:
\begin{eqnarray}
\Phi (\eta) & = & \int\limits_0^\infty d t \cos \left ( \eta t +
{t^3\over 3} \right ),
\label{eq:Ai} \\
\eta & = & z^2 \; (1 + \tau^2),
\qquad
\tau = - \, \frac{e (p \tilde F q)}{m_e^4 \chi_a},
\nonumber
\end{eqnarray}
\noindent and $\Phi' (\eta) = \partial \Phi (\eta)/ \partial \eta$.
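The integral~(\ref{eq:Ai}) is the standard Airy integral, $\Phi(\eta)=\pi\,{\rm Ai}(\eta)$, so e.g. $\Phi(0)=\pi\,3^{-2/3}/\Gamma(2/3)\approx 1.1154$. A minimal numerical sketch (truncated midpoint rule in pure Python, not the evaluation method used in the paper) reproduces this:

```python
import math

def Phi(eta, T=20.0, dt=1e-4):
    # Midpoint-rule evaluation of Phi(eta) = int_0^inf cos(eta*t + t^3/3) dt,
    # truncated at t = T. The tail is O(1/T^2) because the phase t^3/3 + eta*t
    # oscillates ever faster, so T = 20 already gives ~1e-3 accuracy.
    n = int(T / dt)
    return dt * sum(math.cos(eta * t + t ** 3 / 3.0)
                    for t in ((i + 0.5) * dt for i in range(n)))

print(Phi(0.0))  # ~ 1.1154 = pi * Ai(0)
```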
\section{\uppercase{Decay probability}}
To obtain the decay probability one has to carry out a non-trivial
integration over the phase space of the $e^+ e^-$ pair taking their
specific kinematics in the magnetic field into account.
As the analysis shows, the contribution of the transverse photon mode
to the $a \to e^+ e^-$ decay probability in the ultrarelativistic
case is negligibly small. The main contribution, due to the longitudinal
plasmon intermediate state, has the form:
\begin{eqnarray}
W & = &\frac{g_{a\gamma}^2 (e B)^2}{36 \pi} \,
\frac{E_a^3 \, \cos^2 \theta }
{(E_a^2 - {\cal E}^2)^2 + \gamma^2 {\cal E}^4} \, \rho,
\label{eq:prob} \\
\rho & = & 6 \int\limits_0^1 dx \, x (1 - x) \, (1 - n) \, (1 - \bar n) ,
\nonumber \\
n & = & \left ( \exp \frac{x E_a - \mu}{T} + 1 \right )^{-1} ,
\nonumber \\
\bar n & = & \left ( \exp \frac{(1 - x) E_a + \mu}{T} + 1 \right )^{-1} ,
\nonumber
\end{eqnarray}
\noindent
where $n$ and $\bar n$ are the Fermi-Dirac distributions of electrons
and positrons at a temperature $T$ and a chemical potential $\mu$,
respectively.
The function $\rho (E_a, T, \mu)$ represents the average value of
the suppressing statistical factors and, in the general case, lies in the
interval $0 < \rho < 1$.
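The statistical factor $\rho$ is easy to evaluate numerically; the parameter values below (in MeV) are hypothetical, chosen only to illustrate a nondegenerate plasma:

```python
import math

T, mu, E_a = 10.0, 0.0, 30.0       # hypothetical values in MeV (mu = 0: nondegenerate)

def n(x):                          # electron Fermi-Dirac blocking factor
    return 1.0 / (math.exp((x * E_a - mu) / T) + 1.0)

def nbar(x):                       # positron Fermi-Dirac blocking factor
    return 1.0 / (math.exp(((1.0 - x) * E_a + mu) / T) + 1.0)

# midpoint rule for rho = 6 * int_0^1 x (1-x) (1-n)(1-nbar) dx
N = 2000
rho = sum(6.0 * x * (1.0 - x) * (1.0 - n(x)) * (1.0 - nbar(x))
          for x in ((i + 0.5) / N for i in range(N))) / N
```

With $\mu = 0$ the blocking factors never exceed $1/2$, so $\rho$ indeed falls in the interval $1/4 < \rho < 1$ quoted below for a nondegenerate plasma.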
Eq.~(\ref{eq:prob}) has a resonant behaviour at the point
$(E_a^2)_{res} = {\cal E}^2$, where the axion and longitudinal
plasmon dispersion curves cross (Fig.~\ref{fig:disp}).
\begin{figure}[tb]
\centerline{\epsfxsize=.38\textwidth \epsffile[115 430 395 710]{disp.ps}}
\caption{Dispersion relations $\omega^2=\omega^2_L(k)$ for
longitudinal plasmons (solid line), axions
$E_a^2 = k^2+m^2_a$ (short dashes), and vacuum photons
$\omega=k$ (long dashes).}
\label{fig:disp}
\end{figure}
The dimensionless resonance width $\gamma$ of the $a \to e^+ e^-$
process in Eq.~(\ref{eq:prob}) is
\begin{equation}
\gamma=\frac{{\cal E} \Gamma_L({\cal E})}{q^2 Z_L},
\label{eq:gamma}
\end{equation}
\noindent where $\Gamma_L ({\cal E})$ is the total width of the
longitudinal plasmon, and $Z_L$ is the renormalization factor of the
longitudinal plasmon wave function:
\begin{equation}
Z_L^{-1} = 1 - \frac{\partial \,\Pi^{(L)}}{\partial\,q_0^2 }.
\label{eq:Gamma1}
\end{equation}
\noindent Notice that without the external field, only the plasmon
decay into a neutrino pair takes place. In the presence of a magnetic
field which is, on the one hand, weak, $e B \ll E^2$, and, on the other
hand, strong enough, $e B \gg \alpha^3 E^2$, a novel channel of
longitudinal plasmon decay opens: $\gamma_L\to e^+e^-$.
However, the main contribution to the width $\Gamma_L ({\cal E})$ is
determined by the process of longitudinal
plasmon absorption, $\gamma_L e^- \to e^-$, which becomes possible in this
kinematical region in the magnetic field.
Below we give the expressions for ${\cal E}^2$ and $\gamma$ in two limits:
\newline
i) degenerate plasma
\begin{eqnarray}
{\cal E}^2 & \simeq & \frac{4 \alpha}{\pi} \, \mu^2 \,
\left ( \ln \frac{2 \mu}{m_e} - 1 \right ) ,
\label{eq:degen} \\
\gamma & \simeq & \frac{2 \alpha}{3} \, \frac{\mu^2}{{\cal E}^2} ,
\nonumber
\end{eqnarray}
ii) nondegenerate hot plasma
\begin{eqnarray}
{\cal E}^2 & \simeq & \frac{4 \pi \alpha}{3} \, T^2 \,
\left ( \ln \frac{4 T}{m_e} - 0.647 \right ) ,
\label{eq:nondegen} \\
\gamma & \simeq &\frac{2 \alpha}{3} \, \frac{\mu^2}{{\cal E}^2} .
\nonumber
\end{eqnarray}
Considering possible cosmological applications of the result we have
obtained, it is necessary to take the influence of a hot plasma into
account. Under early-Universe conditions the hot plasma is
nondegenerate ($\mu \ll T$), and the medium parameter
$\rho$ lies in the interval $1/4 < \rho < 1$.
With ${\cal E}^2$ and $\gamma$ from~(\ref{eq:nondegen})
we obtain the following estimate of
the axion lifetime in the resonance region:
\begin{eqnarray}
& & \tau (a \to \gamma_{pl} \to e^+ e^-) \simeq
2.5 \cdot 10^4 \, \mbox{s} \,
\label{eq:time-KSVZ} \\
& & \times
\left ( \frac{10^{-10}}{ g_{a \gamma} \, \mbox{GeV}} \right )^2 \,
\left ( \frac{T}{10 \, \mbox{MeV}} \right ) \,
\left ( \frac{10^{15} \, \mbox{G}}{B} \right )^2.
\nonumber
\end{eqnarray}
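The scaling in~(\ref{eq:time-KSVZ}) is straightforward to evaluate; the helper below is a sketch of that parameter dependence (the function name is ours, not from the paper):

```python
def tau_ksvz(g_agamma_GeV_inv, T_MeV, B_G):
    """Axion lifetime estimate of Eq. (time-KSVZ), in seconds:
    tau ~ 2.5e4 s * (1e-10 / g)^2 * (T / 10 MeV) * (1e15 G / B)^2."""
    return (2.5e4
            * (1e-10 / g_agamma_GeV_inv) ** 2
            * (T_MeV / 10.0)
            * (1e15 / B_G) ** 2)

print(tau_ksvz(1e-10, 10.0, 1e15))  # reference point: 25000.0 s
```

Doubling the field strength $B$ shortens the lifetime by a factor of four, reflecting the $B^{-2}$ scaling.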
It is interesting to compare~(\ref{eq:time-KSVZ}) with the field-induced
axion lifetime~\cite{Ljuba-aff97} in the model~\cite{DFSZ},
where axions couple to electrons at tree level:
\begin{eqnarray}
& & \tau (a \to e^+ e^-) \simeq 3.4 \cdot 10^6 \,\mbox{s}
\label{eq:time-DFSZ} \\
& & \times
\left ( \frac{10^{-13}}{g_{a e}} \right )^2 \,
\left ( \frac{T}{10 \, \mbox{MeV}} \right )^{1/3} \,
\left ( \frac{10^{15} \, \mbox{G}}{B} \right )^{2/3}.
\nonumber
\end{eqnarray}
The expressions~(\ref{eq:time-KSVZ}) and~(\ref{eq:time-DFSZ})
presented here demonstrate the strong catalyzing influence of
the medium, plasma and the magnetic field, on the axion lifetime in
comparison with the vacuum one~(\ref{eq:T0}).
Due to the resonant behaviour of $a \to \gamma_{pl} \to e^+ e^-$
via the longitudinal plasmon, the axion lifetime in the KSVZ model with
the induced axion-electron interaction can be smaller than in the
DFSZ model with direct coupling.
\section*{\uppercase{Acknowledgements}}
N.~Mikheev and L.~Vassilevskaya thank the organizers of the 5-th IFT
Workshop on Axions for their warm hospitality during the visit.
This research was partially supported by INTAS under grant No.~96-0659
and by the Russian Foundation for Basic Research under grant
No.~98-02-16694.
\section{Introduction}
Teaching learning agents how to perform a new task is a central problem in artificial intelligence. One paradigm, namely imitation learning \cite{argall2009survey}, involves showing demonstration(s) of the desired task to the agent, which can then be used by the agent to infer the demonstrator's intent, and hence, learn a policy for the task. However, for each new task, the agent must be given a new set of demonstrations, which is not scalable as the number of tasks grows, particularly because providing demonstrations is often a cumbersome process.
On the other hand, techniques in instruction-following \cite{macmahon2006walk,vogel2010learning,chen2011learning} communicate the target task to a learning agent using natural language. As the complexity of tasks grows, providing intricate details using natural language could become challenging.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/env}
\caption{Example of the setting: The top row shows the \emph{source task}, while the bottom shows the \emph{target task}. Given a demonstration of the source task, and a natural language description of the difference between the two tasks such as ``In the third step, move the green flat block from bottom left to top left.'', our goal is to train an agent to perform the target task \emph{without} any demonstrations of the target task.}
\label{fig:environment}
\end{figure}
This motivates a new paradigm to teach agents, that allows scaling up learning from demonstration to multiple related tasks with a single (or a few) demonstration(s) by using a much more natural modality, namely, language. At the same time, intricate details which are harder to communicate using language alone can be provided using demonstration(s).
To this end, we propose a novel setting---given a demonstration of a task (the \emph{source task}), we want an agent to complete a somewhat different task (the \emph{target task}) in a \textbf{zero-shot} setting, that is, without access to \emph{any} demonstrations for the target task. The difference between the source task and the target task is communicated using language.
The proposed setting requires combining information from both the demonstration and the language, and can therefore serve as an important step towards building systems for more complex tasks which are difficult to communicate using demonstrations or language alone.
For example, consider an environment consisting of objects on different shelves of an organizer, as shown in Figure~\ref{fig:environment}. Suppose the \textbf{source task} (top row) requires moving the green flat block from bottom-right to bottom-left, the blue flat block from middle-left to bottom-right, and then the green flat block from bottom-left to middle-left. The \textbf{target task} (bottom row) is similar, but in the final step, the green flat block should be moved to top-left instead. We posit that given a demonstration for the source task, and a free-form natural language description of the difference between the source and the target tasks,
such as ``In the third step, move the green flat block from bottom left to top left'',
an agent should be able to infer the goal for the target task. We propose a framework that can handle a diverse set of adaptations between the source and the target tasks, such as a missing step, an extra step, and swapping the final positions of two objects.
The environment has a similar structure to several real-world applications, where task adaptation using language could be particularly beneficial. For instance, consider the goal of building service robots for home environments. These robots must be able to learn a wide variety of tasks from non-expert users. Many tasks, such as cooking or assembly, involve a sequence of discrete steps, and such tasks could have several variations, like different cooking recipes or assembling different kinds of furniture. Being able to demonstrate one (or a few) of these tasks, and then communicating the difference between the demonstrated task(s) and other similar tasks could significantly reduce the burden of teaching new skills for the users.
These problems involve planning/control at two levels---high-level planning over the steps, and low-level control for executing each step. Since our proposed algorithm focuses on the high-level planning, we illustrate our approach on the simple environment shown in Figure~\ref{fig:environment}, where the low-level control is abstracted away. However, our framework is general, and can be combined with approaches that perform low-level control.
The proposed setting is challenging for several reasons.
First, most existing approaches in imitation learning and instruction-following infer the goal for a target task from a demonstration or an instruction, respectively. However, in our setting, neither of these modalities is sufficient by itself, and the agent must be able to combine complementary information from the source demonstration(s) and the natural language descriptions, in order to infer the goal for the target task.
Second, in order to understand the natural language description, the agent must be able to map concepts in the description to objects and actions, a problem known as symbol grounding \cite{harnad1990symbol}.
Finally, in order to be scalable, we intend to learn a purely data-driven model that does not require engineering features for the language or the environment, and can learn to infer the goal for the target task directly from data.
We introduce the Language-Aided Reward and Value Adaptation (LARVA) model that takes in a dataset of source demonstrations, linguistic descriptions, and either the reward or optimal value function for the target task, and is trained to predict the reward or optimal value function of the target task given a source demonstration and a linguistic description.
The choice between reward and value prediction could be problem-dependent---for domains with complex transition dynamics, learning a value function requires reasoning about these dynamics, and therefore, it might be better to use LARVA for reward prediction, with a separate policy learning phase using the predicted rewards; for domains with simpler dynamics, a value function could be directly predicted using LARVA, thereby removing the need for a separate policy learning phase.
We experiment with a diverse set of adaptations, and show that the model successfully completes over 95\% target tasks when using synthetically generated language, and about 75\% target tasks when using unconstrained natural language collected using Amazon Mechanical Turk.
\section{Related Work}
\paragraph{Imitation Learning.}
Imitation learning is one of the standard paradigms for teaching new skills to learning agents.
The agent is provided with demonstration(s) of a task, and must infer the demonstrator's intent, and hence, learn to complete the task \cite{argall2009survey}.
Approaches to imitation learning can broadly be classified into behavior cloning \cite{pomerleau1989alvinn,ross2010efficient,ross2011reduction}, inverse reinforcement learning \cite{abbeel2004apprenticeship,ramachandran2007bayesian,ziebart2008maximum,finn2016guided}, and adversarial learning \cite{ho2016generative,fu2017learning}.
Our proposed setting differs from standard imitation learning, since the agent is provided with demonstration(s) of the source task, but needs to infer the reward for a related but different target task, the difference being communicated using language.
\paragraph{Transfer Learning.}
Another closely related problem to our proposed setting is transfer learning, wherein an agent trained on a source task needs to complete a related but different target task.
The agent is \emph{finetuned} on data from the target task in transfer learning, and the objective is to reduce the amount of data needed for the target task by effectively transferring experience from the source task \cite{pan2009survey}.
This is different from our proposed setting, because in our setting, we don't need to transfer \emph{experience} from the source task to the target task; instead, the demonstration(s) for the source task must be used with the description to infer the goal for the target task.
\paragraph{Meta-learning and Few-shot Learning.}
Our setting is also related to the meta-learning and few-shot learning settings \cite{vanschoren2018meta,wang2020generalizing}.
In meta-learning, an agent is given training data from multiple tasks sampled from a distribution of tasks, and it must use these data to generalize to new tasks sampled from the distribution.
The data from these tasks could be used to extract useful features, build models as a pretraining step, or learn a training routine (e.g. how to search over the space of neural network architectures), which can be transferred to the new task to learn from fewer datapoints, learn a more robust model, or converge to a solution faster.
Few-shot learning is a subcategory of techniques within meta-learning, where the goal is to learn a new task from few training examples.
Our approach infers the target reward function for a new target task given a source demonstration and a description in a zero-shot setting, which is a special case of few-shot learning.
\paragraph{Language as Task Description.}
In a large class of problems, which can broadly be termed as \emph{instruction-following}, language is used to communicate the task to a learning agent.
In this setting, the agent is given a natural language command for a task, and is trained to take a sequence of actions that complete the task.
A well-studied problem in this setting is the Vision-and-language Navigation, where the tasks consist of navigating to a desired location in an environment, given a natural language command and an egocentric view from the agent's current position \cite{anderson2018vision,fried2018speaker,wang2019reinforced}.
Another subcategory of instruction-following involves instructing an embodied agent using natural language \cite{tellex2011understanding,hemachandra2015learning,arkin2017contextual,shridhar2020alfred,stepputtis2020language,misra2016tell,sung2018robobarista}.
Our proposed setting is different from instruction-following, in that the goal of the target task is not being communicated using language alone; instead, a demonstration for a related task (source task) is available, and language is used to communicate the difference between the demonstrated task and the target task. Thus, the information in the demonstration and the language complement each other.
\paragraph{Language to Aid Learning.}
Several approaches have been proposed in the past that use language to aid the learning process of an agent. In a reinforcement learning setting, this could take the form of a language-based reward, in addition to the extrinsic reward from the environment \cite{luketina2019survey,goyal2019using,goyal2020pixl2r,kaplan2017beating}, or using language to communicate information about the environment to the agent \cite{wang2021grounding,branavan2012learning,narasimhan2018grounding}.
\textcite{andreas2017learning} use language in a meta-learning setting for few-shot transfer to new tasks. While related to our setting, in this work, language is provided for each task independently, and tasks are deemed similar if their linguistic descriptions are related. In our setting, however, language is used to explicitly describe the difference between two tasks.
\section{Problem Definition}
Consider a \textbf{goal-based task}, which can be defined as a task where the objective is to reach a designated goal state in as few steps as possible.
It can be expressed using the standard Markov Decision Process (MDP) formalism, as
$M = \langle S, A, P, R, \gamma, g\rangle$,
where
$S$ is the set of all states,
$A$ is the set of all actions,
$P : S \times A \times S \rightarrow [0, 1]$ is the transition function,
$R : S \rightarrow \mathds{R}$ is the reward function,
$\gamma \in [0, 1]$ is the discount factor, and
$g \in S$ is the unique goal state.
At timestep $t$, the agent observes a state $s_{t} \in S$, and takes an action $a \in A$, according to some policy $\pi : S \times A \rightarrow [0, 1]$.
The environment transitions to a new state $s_{t+1} \sim P(s_{t}, a_{t}, \cdot)$, and the agent receives a reward $R_{t} = R(s_{t+1})$.
The objective is to learn a policy $\pi^{*}$, such that the expected future return, $G_{t} = \sum_{t'=t}^{t_{max}} \gamma^{t'-t} R_{t'}$, is maximized.
Further, $V^{*}_{R} : S \rightarrow \mathds{R}$ denotes the optimal value function under the reward function $R$, and can be used to act optimally.
The reward function for a goal-based task can be defined as $R(s) = \mathds{1}[s=g]$, where $\mathds{1}[\cdot]$ is the indicator function.
Thus, for $\gamma < 1$, an optimal policy for a goal-based task must reach the goal state $g$ from any state $s \in S$ in as few steps as possible.
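This shortest-path property can be checked on a toy instance (a deterministic 5-state chain, not the Organizer environment of this paper): value iteration under the sparse goal reward $R(s)=\mathds{1}[s=g]$ yields $V^{*}(s)=\gamma^{d(s,g)-1}$, where $d$ is the number of steps to the goal:

```python
gamma, n_states, g = 0.9, 5, 4
V = [0.0] * n_states
for _ in range(100):                      # value iteration to convergence
    for s in range(n_states):
        if s == g:
            continue                      # goal state is absorbing: V(g) = 0
        succ = [max(s - 1, 0), min(s + 1, n_states - 1)]
        # R(s') = 1 iff s' is the goal; the reward is received on arrival
        V[s] = max((1.0 if sp == g else 0.0) + gamma * V[sp] for sp in succ)

print([round(v, 4) for v in V])  # [0.729, 0.81, 0.9, 1.0, 0.0]
```

The optimal value decreases geometrically with distance from the goal, so acting greedily with respect to $V^{*}$ reaches the goal in as few steps as possible.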
Let $\mathcal{T}$\xspace$=\{M_{i}\}_{i=1}^{N}$ be a \textbf{family} of goal-based tasks $M_{i}$, each with a distinct goal $g_{i}$, and the reward function $R_{i}$ defined as above.
The set of states $S_{i}$ and actions $A_{i}$, the transition functions $P_{i}$, and the discount factors $\gamma_{i}$ across different tasks may be related or unrelated \cite{kroemer2019review}.
For instance, in the environment shown in Figure~\ref{fig:environment}, a goal-based task consists of arranging the objects in a specific configuration defined by a goal state $g$,
while $\mathcal{T}$\xspace is the set of all multi-step rearrangement tasks in the environment.
Let $T_{src}$\xspace, $T_{tgt}$\xspace $\in$ $\mathcal{T}$\xspace be two tasks, and $L$ be a natural language description of the difference between the tasks.
Given a demonstration for the source task $T_{src}$\xspace, and the natural language description $L$, our objective is to train an agent to complete the target task $T_{tgt}$\xspace in a \textbf{zero-shot setting}, i.e., without access to the reward function or demonstrations for the target task.
\section{LARVA: Language-Aided Reward and Value Adaptation}
\label{sec:model}
We propose Language-Aided Reward and Value Adaptation (LARVA), a neural network that takes in a source demonstration, $\tau_{src}\xspace$, the difference between the source and target tasks described using natural language, $L$, and a state from the target task, $s \in S_{tgt}$, and is trained to predict either $R_{tgt}(s)$, the reward for the state $s$ in the target task, or $V^{*}_{R_{tgt}}(s)$, the optimal value of the state $s$ under the target reward function $R_{tgt}$.
We assume access to a training set $\mathcal{D} = \{(\tau^{i}_{src}, L^{i}, g^{i}_{tgt})\}_{i=1}^{N}$, where
$\tau^{i}_{src}$ is a demonstration for the source task of the $i^{th}$ datapoint,
$L^{i}$ is the natural language description of the difference between the source task and the target task for the $i^{th}$ datapoint, and
$g^{i}_{tgt}$ is the goal state for the target task.
The details of the dataset and the data collection process are described in Section~\ref{sec:data}.
Next, we describe the network architecture, and training details of LARVA.
\begin{figure}
\centering
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\textwidth]{figs/nn-arch2_.png}
\caption{Full model: The target goal predictor takes in a source demonstration and a description to predict the goal state for the target task. The reward / value network takes this predicted goal state, and another state $s$ from the target task to predict the reward or value of the state $s$ under the target reward function.}
\end{subfigure}
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\textwidth]{figs/nn-arch1.png}
\caption{Target goal predictor}
\end{subfigure}
\caption{Neural network architecture for LARVA}
\label{fig:nn-arch}
\vspace{-10pt}
\end{figure}
\subsection{Network Architecture}
We decompose the learning problem into two subproblems: (1) predicting the goal state for the target task given the source demonstration and the language, and (2) predicting the reward / value of state $s$ given the goal state of the target task.
As such, we propose a neural network architecture that consists of two modules: (1) Target Goal Predictor, and (2) Reward / Value Network (see Figure~\ref{fig:nn-arch}).
This decomposition allows for additional supervision of the target goal predictor, using the ground-truth goal state for the target task.
\subsubsection{Target Goal Predictor}
Given a sequence of images representing a source demonstration ($\tau_{src}$), and a natural language description of the difference between the source and the target task ($L$),
the target goal predictor module is trained to predict the goal state of the target task ($g_{tgt}$).
\paragraph{Demonstration Encoder.}
Each image in the source demonstration is first passed through a convolutional neural network to obtain a $D_1$-dimensional vector representation, where $D_1$ is a hyperparameter tuned using the validation set.
The resulting sequence of vectors is padded to the maximum demonstration length ($T_{max}$) in the training data, and the vectors are then concatenated to obtain a single $T_{max}\cdot D_1$-dimensional vector.
\footnote{We also experimented with an LSTM and a transformer to encode the sequence of vectors, but obtained significantly worse performance compared to a simple concatenation. A possible explanation for this behavior is that encoding the frames into a single vector independently of the language may make it harder to associate information in language with that in individual frames, suggesting that cross-attention between language and the frames might be required. Our preliminary experiments with attention-based models did not work well, but a more thorough analysis is needed.}
\paragraph{Language Encoder.} The natural language description is first converted into a one-hot representation using a vocabulary (see Section~\ref{sec:lang} for details about the vocabulary), which is then passed through a two-layer LSTM module to obtain a vector representation of the description. The hidden size of the LSTM is set to $D_2$, which is tuned using the validation set.
\paragraph{Target Goal Decoder.} The vector representations of the source demonstration and the natural language description are concatenated, and the resulting vector is passed through a deconvolution network to obtain an image representation $\hat{g}$ of the target goal state.
\subsubsection{Reward / Value Network}
The reward or value network takes the predicted target goal state $\hat{g}$ and another state $s$ from the target task as inputs, and is trained to predict the reward or the value respectively of state $s$ under the target reward function.
The predicted goal state $\hat{g}$ and the state $s$ are encoded using the same CNN encoder (i.e. shared weights) used for encoding the demonstration frames in the target goal predictor, to obtain $D_1$-dimensional vector representations.
The reward or value of state $s$ is computed as the cosine similarity between the vector representations of $\hat{g}$ and the state $s$. We represent the ground-truth reward or value as $f(s)$, while the network prediction as $\hat{f}(s)$.
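The scoring step can be sketched in a few lines (a simplified stand-in: the real encodings are CNN feature vectors, here just plain lists):

```python
import math

def cosine(u, v):
    # f_hat(s): cosine similarity between the encoded predicted goal g_hat
    # and the encoded target-task state s
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine([1.0, 0.0], [1.0, 0.0]))  # 1.0 for identical encodings
```

States whose encodings align with the predicted goal encoding thus receive scores near 1, and orthogonal encodings receive 0.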
\subsection{Training}
To train the model, we assume access to a dataset $\mathcal{D} = \{(\tau^{i}_{src}, L^{i}, g^{i}_{tgt})\}_{i=1}^{N}$.
Using the goal state for the $i^{th}$ target task, the reward function and the optimal value function for the target task can be computed, which is used to supervise the model training as described below.
\subsubsection{Training Objectives}
\paragraph{Mean Squared Error.}
Since we want the model to regress to the true reward or value $f(s)$ for states $s \in S^{i}_{tgt}$, a natural choice for the loss function is the mean squared error (MSE),
$\mathcal{L}_{\text{f}} = \frac{1}{N} \sum_{i=1}^{N} \sum_{s \in S^{i}_{tgt}} (f(s) - \hat{f}(s))^2$.
\paragraph{Target Goal Supervision.}
Further, we additionally supervise the Target Goal Predictor using the true goal state $g^{i}_{tgt}$ for the $i^{th}$ target task, using an MSE loss,
$\mathcal{L}_{\text{goal}} = \frac{1}{N} \sum_{i=1}^{N} (g^{i}_{tgt} - \hat{g}^{i}_{tgt})^2$.
Thus, the overall training loss is given by
$ \mathcal{L} = \mathcal{L}_{\text{f}} + \lambda \mathcal{L}_{\text{goal}} $,
where $\lambda$ is a hyperparameter tuned using a validation set.
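In toy form, the combined objective looks as follows (plain lists stand in for batched tensors; the value of $\lambda$ below is illustrative):

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def larva_loss(f_true, f_pred, g_true, g_pred, lam=0.5):
    # L = L_f + lambda * L_goal: reward/value regression plus
    # auxiliary supervision of the target goal predictor
    return mse(f_true, f_pred) + lam * mse(g_true, g_pred)

print(larva_loss([1.0, 0.0], [1.0, 0.0], [1.0], [0.0]))  # 0.5: only the goal term contributes
```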
\subsubsection{Optimization}
For training the model, a datapoint $(\tau^{i}_{src}, L^{i}, g^{i}_{tgt})$ is sampled uniformly at random from $\mathcal{D}$. When predicting the value function, a target state $s$ is sampled uniformly at random from $S^{i}_{tgt}$ at each step of the optimization process. When predicting the reward function, the goal state $g^{i}_{tgt}$ is sampled with 50\% probability, while a non-goal state is sampled uniformly at random otherwise. This is required because the reward function is sparse.
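The sampling scheme can be sketched as follows (an illustrative helper; the function name is ours):

```python
import random

def sample_training_state(goal_state, non_goal_states, predict_reward):
    """One training-step sample.
    Value prediction: uniform over all target-task states.
    Reward prediction: the goal state with probability 0.5, otherwise a
    uniformly sampled non-goal state, counteracting reward sparsity."""
    if predict_reward:
        if random.random() < 0.5:
            return goal_state
        return random.choice(non_goal_states)
    return random.choice([goal_state] + non_goal_states)
```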
We use an Adam optimizer \cite{kingma2014adam} to train the network end-to-end for 50 epochs, with weights initialized randomly according to \textcite{glorot2010understanding}.
A validation set is used to tune hyperparameters via random search.
\section{Environment and Dataset}
\label{sec:data}
In this section, we describe the environment we use in our experiments. While the framework described above is fairly general, in this work, we focus on a simpler setting that is more amenable to analysis. Specifically, we assume discrete state and action spaces $S$ and $A$, and deterministic transitions, i.e., $P(s, a, s') \in \{0, 1\}, \forall (s, a, s') \in S \times A \times S$.
\subsection{The Organizer Environment}
\label{sec:env}
We propose the Organizer Environment, which consists of an organizer with 3 shelves.
There are 8 distinct objects, and each object can take one of 3 colors (red, blue, or green), giving a total of 24 distinct colored objects (see Figure~\ref{fig:objects} in the appendix).
Objects can be placed in each shelf, either to the left or the right, resulting in a total of 6 distinct positions,
$\texttt{POSITIONS} =$ $\{\texttt{Top-Left}, \texttt{Top-Right}, \texttt{Middle-Left}, \texttt{Middle-Right}, \texttt{Bottom-Left}, \texttt{Bottom-Right}\}$.
Objects can be placed in different configurations to create different states. In our experiments, we use tasks with 2 or 3 objects. The total number of states with 2 or 3 objects (i.e. $| \bigcup_{T \in \mathcal{T}} S_{T} |$) is 285,120.
The action space $A$ is common across all tasks, and consists of 30 move actions, $\texttt{MOVE}(p_{i}, p_{j}), p_{i}, p_{j} \in \texttt{POSITIONS}, p_{i} \ne p_{j}$. Finally, there is a $\texttt{STOP}$ action that indicates the termination of an episode.
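The 30 move actions are simply the ordered pairs of distinct positions; enumerating them (a Python sketch) makes the count explicit:

```python
from itertools import permutations

POSITIONS = ["Top-Left", "Top-Right", "Middle-Left",
             "Middle-Right", "Bottom-Left", "Bottom-Right"]

# MOVE(p_i, p_j) for all ordered pairs with p_i != p_j: 6 * 5 = 30 actions
MOVE_ACTIONS = [("MOVE", src, dst) for src, dst in permutations(POSITIONS, 2)]
# plus the STOP action that terminates an episode
ACTIONS = MOVE_ACTIONS + [("STOP",)]
```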
\subsection{Language Data}
\label{sec:lang}
In this work, we experiment with 6 types of adaptations: (1) moving the same object in the source and target tasks, but to different final positions; (2) moving a different object in the source and target tasks, but to the same final position; (3) moving two objects in the source and target tasks, with their final positions swapped in the target task; (4) deleting a step from the source task; (5) inserting a step to the source task; and (6) modifying a step in the source task.
Examples for each of these adaptations are shown in the appendix (Table~\ref{tbl:adaptations}).
For each pair of source and target tasks in the dataset, we need a linguistic description of the difference between the tasks.
We start by generating linguistic descriptions using a set of templates, such as, ``Move $\texttt{obj1}$ instead of $\texttt{obj2}$ to the same position.''
We ensure that for most of these templates, the target task cannot be inferred from the description alone, and thus, the model must use both the demonstration of the source task and the linguistic description to infer the goal for the target task.
Next, we collected natural language for a subset of these synthetic (i.e. template-generated) descriptions using Amazon Mechanical Turk (AMT).
Workers were provided with the template-generated descriptions, and were asked to paraphrase these descriptions.
Importantly, our initial data-collection experiments suggested that providing the workers with the task images resulted in inferior descriptions, wherein many descriptions would describe the target task completely, instead of how it differs from the source task.
As such, the workers were first shown some examples of source and target tasks to familiarize them with the domain, and were then only provided with template-generated descriptions, without the images for the source and target tasks, to obtain the paraphrases. See Section~\ref{sec:amt} in the appendix for more details about the data collection process.
We applied a basic filtering process to remove clearly bad descriptions, such as those with 2 words or less, and those that were identical to the template-generated descriptions they were based on. We did not make any edits to the descriptions, such as correcting grammatical or spelling errors.
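The filtering criteria above can be expressed as a small predicate (a hypothetical helper; the paper does not name its filtering code):

```python
def keep_paraphrase(paraphrase: str, template: str) -> bool:
    """Basic filter: drop descriptions of two words or fewer, and drop
    paraphrases identical to the template they were written from."""
    if len(paraphrase.split()) <= 2:
        return False
    if paraphrase.strip() == template.strip():
        return False
    return True
```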
Some examples of template-generated and natural language descriptions obtained using AMT are shown in Table~\ref{tbl:syn-nat-examples}.
It can be observed that while the first four rows in the table are valid paraphrases, the remaining three paraphrases could be ambiguous depending on the source and target tasks.
For instance, in row 5, the target task involves an \emph{extra} step after the first step, while the natural language paraphrase could be interpreted as \emph{modifying} the second step.
In row 6, the natural language description is not a valid paraphrase, while in row 7, the paraphrase is difficult to comprehend.
We manually analysed a small subset of the collected paraphrases, and found that about 15\% of the annotations were ambiguous / non-informative.
Some of this noise could be addressed by modifying the data-collection pipeline, for example, by providing more information about the source and target tasks to humans, and filtering out non-informative / difficult to comprehend paraphrases.
\input{templates-natural-lang-examples}
A vocabulary was created using the training split of the synthetic and natural language descriptions, discarding words that occurred fewer than 10 times in the corpus.
While encoding a description using the resulting vocabulary, out-of-vocabulary tokens were represented using the \texttt{<unk>} symbol.
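This vocabulary construction and encoding can be sketched as follows (function names are ours; the count threshold matches the one stated above):

```python
from collections import Counter

def build_vocab(tokenized_corpus, min_count=10):
    """Keep tokens seen at least `min_count` times in the training corpus;
    everything else maps to <unk> at encoding time."""
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    vocab = {"<unk>": 0}
    for tok, c in sorted(counts.items()):
        if c >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(tokens, vocab):
    """Map out-of-vocabulary tokens to the <unk> index."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokens]
```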
\section{Experiments}
\label{sec:expts}
\subsection{Details about the setup}
\paragraph{Dataset.} For each adaptation, 6,600 pairs of source and target tasks were generated along with the template-based descriptions.
Of these, 600 template-based descriptions per adaptation were used to collect natural language descriptions using Amazon Mechanical Turk.
Thus, our dataset consisted of 6,600 examples in total for each adaptation, of which 6,000 examples consisted of synthetic language, and 600 consisted of natural language.
Of the 6,000 synthetic examples per adaptation, 5,000 were used for training, 500 for validation, and the remaining 500 for testing.
Similarly, of the 600 natural language examples per adaptation, 500 were used for training, 50 for validation, and 50 for testing.
This gave us a training dataset with 33,000 examples, and validation and test datasets with 3,300 examples each, across all adaptation types.
\paragraph{Evaluation Metrics.} For each experiment, the trained model predicts the reward or value of the given state $s$.
When using the value function, the trained network is used to predict the values for all states $s \in S_{tgt}$.
When using the reward function, the trained network is used to predict the rewards for all states $s \in S_{tgt}$, from which the optimal value function is computed using dynamic programming.
In both cases, if the state with the maximum value matches the goal state for the target task, $g_{tgt}$, the task is deemed to be successful.
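For the reward-prediction variant, the dynamic-programming step can be sketched as value iteration over the deterministic transitions (the discount factor and iteration count here are illustrative choices, not taken from the paper):

```python
import numpy as np

def value_iteration(R, P, gamma=0.95, iters=200):
    """R[s]: predicted reward for state s; P[s][a]: deterministic successor
    of state s under action a. Returns state values; the state of maximum
    value is then compared against the target goal state."""
    V = np.zeros(len(R))
    for _ in range(iters):
        V = np.array([R[s] + gamma * max(V[s2] for s2 in P[s])
                      for s in range(len(R))])
    return V
```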
We train the models using the entire training set (i.e. both synthetic and natural language examples across all adaptations), and report the percentage of successfully completed target tasks for both synthetic and natural language descriptions. For each experiment, we tune the hyperparameters on the validation set, and report the success rate on the test set corresponding to the setting yielding the maximum success rate on the validation set.
\input{results-table}
\subsection{Results}
In this section, we describe the performance of our full model, along with various ablations. Our results are summarized in Table~\ref{tbl:results}.
First, we evaluate our full LARVA model, with both reward and value predictions (rows 1 and 2 in Table~\ref{tbl:results}).
In both cases, the models result in successful completion of the target task more than 97\% of the time with synthetic language, and more than 73\% of the time with natural language.
The drop in performance when using natural language can partly be attributed to the 15\% of paraphrases that are potentially ambiguous or uninformative, as discussed in Section~\ref{sec:lang}, while the remaining performance gap is likely because natural language has more variation than synthetic language.
Better data collection and more complex models could be explored to bridge this gap further. Our experiments to analyze the impact of the quantity of data suggests that increasing the amount of synthetic or natural language data is unlikely to provide a significant improvement on the natural language test set (see Section~\ref{sec:add-expts} in the appendix).
The similar performance when predicting rewards and values is not unexpected---we observed in our experiments that training the target goal prediction module was more challenging than training the reward or value networks. Since the target goal prediction module is identical for both reward and value predictions, the performance in both cases is upper-bounded by the performance of the target goal prediction module. For domains with complex dynamics, reward and value prediction might result in significantly different success rates.
Next, we discuss ablations of our model. We present the results only with value prediction, since as noted, both reward and value predictions perform similarly.
\begin{enumerate}[leftmargin=12pt]
\item To study the effect of target goal supervision for training the target goal predictor, we remove $\mathcal{L}_{goal}$, optimizing the network using the ground-truth values only. Row 3 in Table~\ref{tbl:results} shows that this drastically degrades performance, confirming the efficacy of target goal supervision.
\item To ensure that most tasks require using information from both the source demonstration and the natural language description, we run unimodal baselines, wherein the network is provided with only the source demonstration (row 4) or only the language (row 5). As expected, both the settings result in a substantial drop in performance. Interestingly, using only the source demonstration results in over 20\% successful completions. This is because given the set of adaptations, the source demonstration constrains the space of target configurations more effectively (e.g. if the source task consists of three steps, the target task must contain at least two of those steps, since source and target tasks differ by only one step for all adaptations).
\item Finally, we experiment with an alternate neural network architecture, that does not decompose the learning problem into target goal prediction and value prediction. The source demonstration, the language, and the target state $s$ are all encoded independently, and concatenated, from which the value for state $s$ is predicted. Row 6 shows that the resulting model achieves a very low success on target tasks, demonstrating the importance of decomposing the learning problem as in LARVA.
\end{enumerate}
In the experiments so far, the data were randomly partitioned into training, validation, and test splits.
However, a key aspect of using language is the ability to \emph{compose} concepts.
For instance, humans can learn concepts like ``blue box'' and ``red cylinder'' from independent examples, and can recognize a ``blue cylinder'' by composing these concepts without ever having seen examples of the new concept.
To evaluate whether our proposed model can exhibit the ability to compose concepts seen during training, we create 2 new splits of our data---in both the splits, the training data consists of all datapoints that do not contain any blue cylinders or red boxes.
In the first split, the validation set consists of all datapoints containing blue cylinders, while the test set consists of all datapoints containing red boxes. In the second split, the validation and test sets are swapped.\footnote{Datapoints containing both a blue cylinder and a red box are discarded.}
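The assignment of datapoints to the first compositional split can be sketched as below (validation and test are swapped for the second split; the string argument stands in for the datapoint's full content, and the helper is ours, not the paper's):

```python
def split_bucket(description: str) -> str:
    """Bucket a datapoint by which held-out concepts it mentions."""
    has_bc = "blue cylinder" in description
    has_rb = "red box" in description
    if has_bc and has_rb:
        return "discard"      # datapoints containing both are discarded
    if has_bc:
        return "validation"
    if has_rb:
        return "test"
    return "train"
```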
We train LARVA on these new splits (using value prediction), and report the success rate on the test set in Table~\ref{tbl:results}, rows 7 and 8.
As can be observed, our model is able to successfully complete a large fraction of the target tasks by composing concepts seen during training; however, there is room for further improvement by using richer models.
\section{Conclusions and Future Work}
\label{sec:discussion}
\paragraph{Conclusions.} We proposed a novel framework which allows teaching agents using a combination of demonstrations and language.
Given a demonstration of a source task, and a natural language description of the difference between the source task and a target task, we introduce Language-Aided Reward and Value Adaptation (LARVA), a model that can perform the target task in a zero-shot setting.
We experimented with a diverse set of adaptations on a simple discrete environment, and show that the model is able to complete nearly all target tasks successfully when using synthetic language, and more than 70\% of the target tasks when using free-form natural language.
A key component of LARVA, as demonstrated by the ablation experiments, is decomposing the full problem into two subproblems (target goal prediction and reward / value prediction), which allows for intermediate supervision.
\paragraph{Limitations and Future work.}
First, our experimental evaluation involved a fairly simple setup. While a simple domain allows for faster experimentation and better analysis, richer domains with more visual diversity, complex dynamics, and continuous states and actions ought to provide a more thorough analysis in future work.
Similarly, a relatively simple family of tasks was considered in this work. Generalization to more complex families of tasks needs further experimentation.
It is worth noting that our general framework is applicable to all these variations.
Second, our approach requires about 30,000 pairs of source and target tasks, along with natural language descriptions to learn the model. While this is comparable to related approaches (e.g. \textcite{stepputtis2020language} use 40,000 training examples for instruction-following), we believe that on more realistic tasks, using pretrained vision and language encoders could significantly reduce the data requirements. Further, training the model is a one-time process, that can be performed before deployment; after deployment, the system can be provided with demonstrations and descriptions to perform new tasks without any further training.
Finally, when using natural language, there is a significant performance gap from the template-generated language. A data collection pipeline with better noise-filtering and richer language models (e.g. ones that use attention) could help bridge this gap.
\newpage
\nocite{zhu2020robosuite}
\printbibliography
Q: Making a simple invoice web app with Spring, but which technologies are advisable? I am going to teach myself some Java EE by making a simple web portal where people can generate their own invoices (a PDF library is needed). I am not asking about any code, but can you give advice (examples) on which technologies I can make use of through the process? I have decided to use "Spring MVC" as the framework + Java/Kotlin as the language. Some database + server + email + some microservices are needed, but which should they be? Thank you!
A: If you are trying to implement microservices, I prefer Spring Boot, which has embedded Tomcat along with additional services; for the database you can use the open-source MySQL.
If you are also planning for UI stuff and are new to it, prefer basic HTML, CSS and Bootstrap.
A: If it were me, here are my choices. All of these choices are based on my past 4 complete end-to-end web application projects.
Spring Boot
Using Spring Boot, create microservices. As it has built-in Tomcat, it will be easy to deploy to any environment: a local laptop, an on-premise server or a cloud server.
JPA with Hibernate
If you are looking for something free you can choose MySQL, as it has strong community support: almost all the issues you are going to face will already have been asked and answered on Stack Overflow or somewhere else on the internet. Another thing is that, as you chose JPA, you can switch to any database easily.
React
As of now the simplest and one of the fastest UI frameworks. It also has strong user support; you can find answers to almost all the questions you will have on the internet.
Apart from all that, you can extend any of these technologies. Happy coding!
A: You may want to consider using Jaspersoft for generating your pdf files:
https://www.jaspersoft.com/reporting-software
https://community.jaspersoft.com/wiki/introduction-jaspersoft-studio
There may undoubtedly be other solutions out there, but this is the one I'm most used to.
Marienhospital Aachen is a Catholic hospital in the Aachen district of Burtscheid. As a standard-care hospital and an academic teaching hospital of RWTH Aachen, it has 310 beds. The Marienhospital comprises 9 specialist clinics, 3 sections, 3 affiliated clinics, 9 specialist centers and 5 consultant practices. It is led by a management board consisting of the managing director, the commercial director, the medical director and the nursing director.
Operator
Until the end of 2022 the Marienhospital Aachen was operated by the Katholische Stiftung Marienhospital Aachen. On 1 January 2023 operations passed to the Marienhospital Aachen GmbH, whose shareholders are the Katholische Stiftung Marienhospital Aachen and the Alexianer GmbH Münster.
In addition, a support association was founded in 1998 to raise the funds needed for investments and other measures.
Departments
Specialist clinics
Clinic for Internal Medicine, Gastroenterology and Interventional Endoscopy
Clinic for Internal Medicine, Cardiology and Rhythmology
Clinic for Internal Medicine, Pneumology
Clinic for General, Visceral and Minimally Invasive Surgery
Clinic for Vascular Surgery
Clinic for Orthopedics, Trauma Surgery and Sports Medicine
Clinic for Gynecology and Obstetrics
Clinic for Anesthesiology, Intensive Care Medicine and Pain Therapy
Clinic for Diagnostic and Interventional Radiology
Sections
Section for Plastic Surgery and Reconstructive Microsurgery
Section for Neurosurgery and Spinal Surgery
Section for Gynecological Endoscopy
Affiliated clinics
Clinic for Ophthalmology
Clinic for Oral and Maxillofacial Surgery; Plastic and Aesthetic Operations
Clinic for Ear, Nose and Throat Medicine
Specialist centers
BrustCentrum Aachen – Kreis Heinsberg (breast center)
Endoprosthetics center (maximum-care level)
Heart and vascular center
Certified bowel center
Competence center for hernia surgery
Competence center for minimally invasive surgery
Center for internal medicine
Interdisciplinary intensive care unit
24-hour central emergency department
History
The Marienhospital was founded in 1850 at the instigation of the priests of the two Catholic Burtscheid churches, St. Johann and St. Michael, together with six committed citizens, and was opened on 1 April 1853 as a home for old and sick people of the poorer classes in a wing of the former imperial abbey of Burtscheid. Its founding was prompted by a cholera epidemic that struck Burtscheid in 1849.
The Marienhospital was one of the first Catholic hospitals in the Rhineland and initially had 10 beds.
On 27 January 1853 the Poor Sisters of St. Francis were appointed as nurses. Already during the cholera epidemic of 1849, the two Burtscheid priests Wilhelm Sartorius and Peter Keller had asked the superior of the Poor Sisters of St. Francis, Mother Franziska Schervier, to send several sisters to care for cholera patients in a room with five beds in the Untertor, a gate of the former Burtscheid town fortifications. This small infirmary in the Untertor, which no longer exists today, was thus the predecessor of the present Marienhospital.
As the hospital's legal structure its founders chose the form of a foundation, whose statutes were signed on 11 September 1850 by King Friedrich Wilhelm IV of Prussia.
Through successive extensions the hospital grew considerably over time. The eye clinic was built in 1889 to designs by Eduard Linse. The number of beds rose steadily from the initial 10, reaching 60 by 1870 and 110 by 1900, and stood at 342 in the year 2000. The hospital's original purpose also changed, from an institution for the care of old and sick people to an acute-care hospital. This process began in 1883 with the establishment of an eye department and continued in 1908 with the addition of a surgical department.
During the National Socialist era, the Marienhospital owed it to the chief surgeon Hermann Gatersleben that the regime's eugenic directives were not implemented. Gatersleben had the courage to oppose the provisions of the Law for the Prevention of Hereditarily Diseased Offspring, passed in 1934, and refused all forced sterilizations and similar measures. This led to a temporary ban on health insurance funds referring patients to the Marienhospital. Above all, his influence as a city councillor for the Centre Party prevented the hospital from losing its independence and Gatersleben from being removed from office.
In 1957 a gynecological and obstetric department was established, which allowed the Marianneninstitut, an old and outgrown maternity home for poor women in the city center, to be closed. Further building projects followed from 1999 onwards, including the construction of a new, modern intensive care unit in 2015. A new ward building is currently being built in the inner courtyard, with completion planned for 2023.
After more than 160 years of service to the sick, the order of the Poor Sisters of St. Francis had to end its work for lack of new members. It was taken over by the Indian order Sisters of the Little Flower of Bethany.
External links
Runder Geburtstag – press release of the Marienhospital on its 90th anniversary, 17 April 2015
References
Hospital in Aachen
Hospital complex
Founded in 1853
Building ensemble in Aachen
Building ensemble in Europe
Burtscheid
Marienhospital
Listed monument in Aachen
A surprising theme for most, yet this issue has successfully brought together work that expresses a cheerful lust for life, marked by a silent enjoyment or attesting to an infectious non-conformism. Exploring happiness and its relationship with photography in this issue, there remains a distinct complexity as with any other theme. Undertaking this task of exploration are eight portfolios that are as diverse and surprising as ever, evoking the very sentiment and joyous feeling they were made with.
Also included, an extensive interview with Jeff Wall, discussing his relationship to the complex nature of photography, the inspiration he draws from and the need to struggle with the constraints of photography. Plus an 'On My Mind' feature from 6 cultural figures talking about images on their minds and plenty of book reviews.
# Inference of TensorFlow model without TensorFlow

https://codejamming.org/2019/tensorflow-inference

Training of machine learning models is fun, but it's a useless waste of energy if you are not going to use them afterwards. Usage patterns can be very different. In some cases you are creating Software-as-a-Service, where you run inference in your or somebody's cloud and return customers only the results. In others you need to have the model itself on consumer devices (e.g. mobile phones) and run inference there.

If you were training a model using the popular framework TensorFlow, you're covered in the sense that there are a number of options you can stick to. To name just a few: inference with TensorFlow from Python, C++ inference with the native library (TensorFlow is written in C++ after all), C++ inference with TensorFlow Lite (inference on mobile phones and/or Raspberry Pi) with bindings to Java & co, and TensorFlow Serving (a production-quality C++ HTTP server for inference using the TF native library).

When running on a server you can use whatever is more convenient, so in this article I'm mostly concerned with the second use-case, where you need to run inference on consumer devices.

Let's see which options we have. Actually there are not that many: the TensorFlow native library (C++), the TensorFlow Lite native library (C++) and third-party solutions (like project Tract from the company Snips, written in Rust). The original TensorFlow library compiled for Release on my computer weighs 30MB. Imagine you need to deploy it along with your 100MB model: you have 30% overhead, which might be even worse if you have a smaller model. TensorFlow Lite supports a subset of the operations you can use, and its compilation is fine-tuned for Android projects. Nevertheless it's quite possible to compile it for desktop, and its size in Release was about 3MB. This is already much better, but the problem is that most probably you're not even using half of the features of the TensorFlow computation graph that the library provides.

What is left for us is to write such inference on our own. It's not as hard as it sounds and definitely much more fun!

## Step 1. Export from Python

Originally TensorFlow exports a model into a ProtoBuf-serialized (and optionally compressed) file with all tensors and operations represented as graph nodes and edges. If you're training a feedforward neural network, it is a linear model: the next layer's input is the output of the previous layer. For a linear model's inference you do not need a graph: in the simplest cases you only need the weights and biases of each layer. So we can do exactly this: export all weights and biases from your TensorFlow model:

```python
loaded_graph = tf.Graph()

all_layers = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)

for layer in all_layers:
    print('Saving layer: {0}\n'.format(layer.name))
    array = layer.eval()

    # filesystems do not like files with colons inside
    layer_file = layer.name.replace(':', '_') + '.npz'

    with open(layer_file, 'wb') as file:
        numpy.savez(file, array)
```

This will convert tensors to NumPy arrays and use NumPy's feature to save an NDarray as a zip archive with the extension .npz.

## Step 2. Import in C++

It's relatively easy to import these arrays into C++. You can use the cnpy library, which you can just include as 1 header and 1 cpp file in your project. It allows you to read the shape of the array and its data.

```cpp
#include <cnpy/cnpy.h>

// std::vector like python list (3, 4, 5)
auto shape = array.shape;
// pointer to 1D array of raw data
auto array_data = array.data<T>();
```

There are a few important things to note here. One is that TensorFlow has the weights matrix of a dense layer transposed in memory. That's why you need to transpose it again if you plan to use it in the more "common" way (a dot product with the input vector).

```cpp
auto input_flatten_size = shape[0];
auto output_size = shape[1];

const size_t size = input_flatten_size * output_size;
std::vector<float> transpose(size, 0.f);

int height = input_flatten_size, width = output_size;
// tf.dense behaves by contracting last index of input tensor with first index of weights tensor
// so saved tensor is actually a transposed matrix of shape (inputs_n, outputs_n, 1)
// do reverse transpose in memory
for (int n = 0; n < size; n++) {
    int i = n / height;
    int j = n % height;
    transpose[n] = array_data[width*j + i];
}
```

Another is that it's tricky to read a convolution layer properly: there are too many indices for a 1D array, so you should be careful with them.

```cpp
auto index = 0;
for (auto height = 0; height < filter_height; height++) {
    for (auto width = 0; width < filter_width; width++) {
        for (auto depth = 0; depth < input_depth; depth++) {
            for (auto filter = 0; filter < filters_number; filter++) {
                weights_(filter, height, width, depth) = array_data[index++];
            }
        }
    }
}
```

## Step 3. Inference

It's relatively easy to implement a fully-connected layer as a trivial operation of a dot product of a matrix and a vector. A little bit more effort is required for a convolution layer, but the feedforward() part is quite simple anyway. Recently I started an educational project that will help to understand how to do that: the yannpp library — a small C++-only implementation of a deep neural network (capable of both learning and inference). The code is relatively simple and heavily documented, so it should be easy to start and use it, or to create a better library.

With yannpp you can precreate a model in C++ and it's ready for use. Here you can see an example for the VGG-16 model:

```cpp
using cnn_t = convolution_layer_2d_t<float>;
using fc_t = fully_connected_layer_t<float>;
using pl_t = pooling_layer_t<float>;

layers_list layers = {
    /* ############## CONVOLUTION LAYER 1 ################# */
    std::make_shared<cnn_t>(
        shape3d_t(56, 56, 3),  // input
        shape3d_t(3, 3, 3),    // filter
        64,                    // filters count
        1,                     // stride
        relu_activator,
        m_t{ "conv1_1" }),
    std::make_shared<cnn_t>(
        shape3d_t(56, 56, 64), // input
        shape3d_t(3, 3, 64),   // filter
        64,                    // filters count
        1,                     // stride
        relu_activator,
        m_t{ "conv1_2" }),
    std::make_shared<pl_t>(
        2,                     // window
        2,                     // stride
        m_t{ "pool1" }),
    /* ############## CONVOLUTION LAYER 2 ################# */
    ....
};

yannpp::network2_t<float> network(layers);
auto input = read_image(testsDir + "/test_4.JPEG");
auto output = network.feedforward(input);
```

The most sensible way to use such a library is to statically link it or just include all its files in your project. The resulting size overhead of your executable will be only a couple of kilobytes, even including the cnpy library for reading .npz archives.

[Figure: example of inference of the VGG-16 model]

The performance of this tiny inference engine is not that bad, although it can be heavily optimized by using vectorized operations for dot products.

## Conclusion

There are a couple of options to run inference for TensorFlow-trained models. However, if you're after reducing the size of your deployed application, you will need something tailored right for you. The best you can have is custom inference code, and as you can see from this post it is not so hard to achieve.

Full code of the VGG-16 export, load and inference can be found in this awesome repository.

I would be really happy to read in the comments how you run inference on consumer devices.
My Fellow Americans...A Trip through American History via Presidential Inaugural Addresses
by Kathleen G. Gormley
When in the course of third grade, it becomes necessary for students to examine the structure and purposes of government and understand that the ideals underlying American democracy are designed to promote the freedom of the American people, we must look at a unit developed in a seminar aptly titled, "The Idea of America".
The idea of America, hmm, what exactly is the idea of America? The best place to begin is the beginning. First we will investigate The Declaration of Independence which is the document that announced the thirteen colonies to be independent states and no longer a part of the British Empire, thus creating The United States of America. The Declaration was approved by representatives of the thirteen colonies who were attending the Continental Congress and states that people have certain rights and the ability to alter or abolish the government if those rights are violated. What is a right?
Next we will examine The Constitution of the United States of America; this document sets up the framework for the government, the law of the land! Within this framework, the organization of the federal government is detailed and the connection formed between the federal government, the states, and the citizens. The first three articles establish the three branches of government and the duties and powers of each branch. The Constitution lists some integral freedoms that are granted to the citizens of the United States. Since they are listed in The Constitution they become special and are safe. The Bill of Rights is a part of the Constitution and lists many of these freedoms. What does freedom mean to you?
Article 2 of the Constitution creates the office of the presidency and details the qualifications, duties, and powers of the president. When the president is inaugurated, he promises to preserve, protect, and defend the Constitution. The inaugural address is not mandated by the U.S. Constitution; however, our first president, George Washington, decided that he had a duty to show his appreciation and gratitude to the nation. Our country and government were just being formed, so no one knew which actions would become traditions. John Adams and the other early presidents followed Washington's lead in delivering an inaugural address, and current presidents now feel bound to deliver one as well. Most presidents reveal what they believe will be the overarching theme of their presidency during this address. We can also learn a little more about the current president based on which prior addresses and presidents he respects and chooses to emulate.
In teaching third graders history, it is important to make the past seem real, not some abstract set of facts that are read from books or heard from the adults in their lives. Students need to find a way to make connections to their own lives and the events that are occurring today, as one day these experiences will be woven into the history of America. I will help students make this connection as we create a classroom Constitution and each student writes an inaugural address. What message will they create to share with the nation of our classroom?
Concentrating on our history and civics standards, students will be introduced to the Declaration of Independence and the Constitution of the United States. After we interpret some major themes found in these documents, we will begin to read and analyze the inaugural addresses of selected United States Presidents. Using the presidential inaugural addresses, we will explore what the presidents have chosen to highlight as they address the nation. Are there any common themes? How, if at all, has the message changed as our nation has grown?
The Red Clay Consolidated School District is located in Northern New Castle County, Delaware with a combination of urban and suburban settings. Some of its elementary schools are located in the heart of the largest city in the state. The district is comprised of 28 schools with approximately 1000 teachers. It services over 16,000 students. Of those students, 27% are African American, 4% are Asian, 20% are Hispanic, and 49% are White. Students' needs vary, with almost 15% receiving Special Education Services and 10% receiving English Language support. In addition, 41% of the students come from families with low incomes.
Highlands Elementary is an urban school in the city of Wilmington, Delaware. We are a small K-5 school with an average enrollment of 320 students. Our minority population represents 86% of our student body, with 81% of the students falling into low socio-economic status. I am a third grade teacher with a class size varying between 24-28 students, which is representative of the make-up of the school.
The colonies were becoming increasingly frustrated with their lack of representation in the British Parliament. As they began to set up provincial governing bodies, the British tried to dissolve these and sent troops to reinstate authority. Fighting broke out between the British troops and the colonists. At the Second Continental Congress, held in Philadelphia, Pennsylvania, representatives from the thirteen colonies issued a statement declaring their independence from Britain. Thomas Jefferson, the representative from Virginia, wrote the Declaration. The Declaration listed the grievances the colonies had against the British Empire and affirmed the rights of the people of the colonies. The Declaration was adopted by the Second Continental Congress on July 4, 1776, and Independence Day is celebrated as the birthday of our nation. "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." 1 This sentence may be the most referred to sentence when discussing the idea of America. It has come to represent what the United States attempts to provide to all its citizens. What are the Rights of the American citizen?
The Constitution of the United States of America
The U.S. Constitution was written over 200 years ago and is considered the oldest written national constitution still in use; it is also the shortest, consisting of only 4,400 words. Many countries throughout the world have used our Constitution as a model for their own government. The Founding Fathers knew the document was not perfect and set up a way for the Constitution to be altered or amended. The Bill of Rights comprises the first ten amendments to the Constitution and protects many important ideas, such as allowing you to say what you want and to practice whichever religion you want, and helping to keep you safe. The Constitution was adopted on September 17, 1787, by the Constitutional Convention in Philadelphia, Pennsylvania, and first ratified by the state of Delaware on December 7, 1787. It was signed by two future presidents, George Washington and James Madison. George Washington declared November 26, 1789, a National Day of Thanksgiving in order to give thanks for the Constitution, and this is still recognized today. This document defines the three branches of government and the duties and powers of each. The Bill of Rights has become a symbol of the fundamental freedoms and culture of our nation. What does Freedom mean to you?
George Washington: First President (1789-1797)
Any conversation or unit designed to discuss the presidents and the idea of America should, of course, begin with our first president, George Washington. Washington was elected the first president in 1789 and is commonly referred to as the "Father of Our Country". He took the oath of office in New York City in the newly named Federal Hall. Washington had been the commander in chief of the Continental Army during the Revolution and presided over the Constitutional Convention, which drafted the Constitution. Washington's goal as the first president was to set examples that would preserve a republican form of government after he left office, and he persuaded the American people that their future lay in a union with a strong central government. The first Congress voted to pay him a salary of $25,000, a large sum in those days, and as he was already wealthy and saw himself as a selfless public servant, he refused it. Ultimately he reversed his decision so that it would not appear that only independently wealthy individuals could afford to serve. Washington's vision for this new country outlined a great and powerful nation that would be built on republican lines using federal power. He sought to use the national government to improve the infrastructure, open the western lands, promote commerce, found a permanent capital, and promote a spirit of nationalism. "The name of American", he said, must override any local attachments.
Inaugural Address Passage
"I behold the surest pledges that as on one side no local prejudices or attachments, no separate views nor party animosities, will misdirect the comprehensive and equal eye which ought to watch over this great assemblage of communities and interests, so on another, that the foundation of our national policy will be laid in the pure and immutable principles of private morality, and the preeminence of free government be exemplified by all the attributes which can win the affections of its citizens and command respect of the world." 2
Paraphrased for students
I see the surest of promises that no local powers or attachments, no separate or different views or political party dislikes, will alter or change the complete and equal eye which should watch over this great group of communities and interests, so that the base of our national plan will be set in unchangeable standards of private beliefs, and the superiority of free government can be shown by all the parts which can win the love of the citizens and command respect from other countries.
Thomas Jefferson: Third President (1801-1809)
Thomas Jefferson was a Founding Father who wrote the Declaration of Independence, was governor of Virginia, Secretary of State during George Washington's presidency, and Vice President to John Adams before becoming our third President. Jefferson supported states' rights and believed the federal government should be limited. Jefferson outlawed the importation of slaves and opened the west for exploration when he purchased the Louisiana Territory from France, doubling the size of the United States. He commissioned Lewis and Clark to survey and explore the western territories. Jefferson died on July 4, 1826, the fiftieth anniversary of the adoption of the Declaration of Independence and a few hours before John Adams, the second President. Jefferson believed in reducing the authority of the federal government while protecting civil liberties and minority rights. Jefferson's inaugural address is considered by some historians the finest ever given. He used the phrase "fellow citizens" several times throughout his speech, attempting to include not only the influential men of the time but also those with considerably less access to power. Even with this powerful phrase, Jefferson's idea of "fellow citizens" was not all-inclusive; African Americans, Native Americans, and women were not included in thought or intent.
"About to enter, fellow citizens, on the exercise of duties which comprehend everything dear and valuable to you, it is proper you should understand what I deem essential principles of our government, and consequently those which ought to shape its administration…equal and exact justice to all men, of whatever state or persuasion, religious or political; peace commerce, and honest friendship with all nations, entangling alliances with none; …the honest payment of our debts and sacred preservation of public faith; encouragement of agriculture, and of commerce…freedom of religion, freedom of the press, and freedom of person under the protection of the habeas corpus, and trial by juries impartially selected." 3
As I begin the job of the president, it is right that you should understand what I think to be necessary beliefs of our government, and therefore those that should shape its administration. Equal fairness to all men, of whatever state or religious or political belief. Enter into peaceful and honest friendships and trade partnerships with other nations of the world, not entering into tangled or trapping deals with any nation… the honest payment of our debts so the public still has faith in the government; support of farmers and of businesses (shops)… Jefferson also believed in the rights of men to have freedom to practice the religion of their choosing, freedom for the press to print the news, freedom from being unlawfully held by the government, and freedom to have a trial by a jury that is fair and neutral.
Abraham Lincoln: Sixteenth President (1861-1865)
Abraham Lincoln was the first president to be born outside of the original thirteen states. He was born in the territory of Kentucky and received almost no formal education. Lincoln taught himself law and passed the bar to become a lawyer. He entered the Illinois State legislature and then was elected to the House of Representatives. Lincoln was opposed to the expansion of slavery and won the presidential election in 1860. Abraham Lincoln steered the country through its most difficult time. The country was divided over the issue of slavery. Eleven southern states seceded from the Union and formed the Confederate States of America. In 1863, the Emancipation Proclamation was issued, freeing slaves in the Confederate states that did not return to the Union. Many believed that Lincoln was only concerned with preserving the Union; however, he often stated that he wished that all men everywhere could be free. Union victory was sealed when Lee's army surrendered at Appomattox, Virginia. Five days later Abraham Lincoln was assassinated. In 1865 the 13th Amendment was ratified, completing the abolition of slavery, which had begun with Lincoln's issuing of the Emancipation Proclamation. This passage was taken from Lincoln's second inaugural address, in which he reminds the nation that four years earlier he had been working to avoid the civil war they were now in the middle of. Throughout his time as president, Lincoln worked to improve individual freedoms and access to property, education, and legal remedies.
"On the occasion corresponding to this four years ago all thoughts were anxiously directed to an impending civil war. All dreaded it, all sought to avert it. While the inaugural address was being delivered from this place, devoted altogether to saving the Union without war, insurgent agents were in the city seeking to destroy it without war-seeking to dissolve the Union and divide effects by negotiation. Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came… With malice toward none, with charity for all, with firmness in the right as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation's wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations." 4
On the occasion of my first inauguration, our thoughts were directed toward a coming civil war. All feared it, all looked to prevent it. While I was delivering my speech, committed completely to keeping the nation joined, protestors were looking to destroy the nation without war by leaving the Union without discussion. Both parties disapproved of war, but the South would rather go to war than let the nation change its stand on slavery, and the North would rather go to war than let the nation be divided, so the civil war began… With hatred toward no one and tolerance or kindness for everyone, with the belief that we are right, let us struggle on to finish the work we are in (bring an end to the war), to fix the nation's wounds (the physical, mental, and emotional problems caused by the war), to care for those who have suffered or lost their lives in the battles, to do everything that will bring about, and to value or appreciate, a fair and lasting peace among ourselves (the United States) and with other nations of the world.
Theodore Roosevelt: Twenty-sixth President (1901-1909)
Theodore Roosevelt is the youngest man to become president; he took over the office after the assassination of President McKinley. Roosevelt sought to control the power of large corporations and worked to improve workers' rights. He steered the United States into a more active role in world politics and expanded the Monroe Doctrine to cover all of the Americas as he entered into an agreement with Panama to create a shortcut between the Atlantic and Pacific oceans. Theodore Roosevelt considered himself an outdoorsman, and some of his most effective and lasting achievements were adding to the national forests and preserving lands for public use. Roosevelt was the first American to be awarded the Nobel Peace Prize, for his role in negotiating an end to the Russo-Japanese War. Roosevelt earned a nomination for the Medal of Honor for his leadership in battle in Cuba during the Spanish-American War. The nomination was denied at the time, yet Roosevelt was awarded the medal posthumously on January 16, 2001. Theodore Roosevelt is the only president to receive both the Medal of Honor and the Nobel Peace Prize. Many historians place Theodore Roosevelt among the top presidents who extended the power of the executive office, and he did this without a national crisis or war, unlike Lincoln and F.D. Roosevelt, two other presidents named with this distinction. He developed the "stewardship" theory of the Presidency: the chief executive could and should take all measures necessary for the welfare of the American people, even if they were not specifically mentioned in the Constitution.
"Modern life is both complex and intense, and the tremendous changes wrought by the extraordinary industrial development of the last half century are felt in every fiber of our social and political being. Never before have men tried so vast and formidable an experiment as that of administering the affairs of a continent under the forms of a Democratic republic…Upon the success of our experiment much depends, not only as regards our own welfare, but as regards the welfare of mankind. If we fail, the cause of free self-government throughout the world will rock to its foundations, and therefore our responsibility is heavy, to ourselves, to the world as it is today, and to the generations yet unborn. There is no good reason why we should fear the future, but there is every reason why we should face it seriously, neither hiding from ourselves the gravity of the problems before us nor fearing to approach these problems with the unbending, unflinching purpose to solve them aright… Yet, after all, though the problems are new, though the tasks set before us differ from the tasks set before our fathers who founded and preserved this Republic, the spirit in which these tasks must be undertaken and these problems faced, if our duty is to be well done, remains essentially unchanged. We know that self-government is difficult. We know that no people needs such high traits of character as that people which seeks to govern its affairs aright through the freely expressed will of the freemen who compose it. But we have faith that we shall not prove false to the memories of the men of the mighty past. They did their work, they left us the splendid heritage we now enjoy. We in our turn have an assured confidence that we shall be able to leave this heritage unwasted and enlarged to our children and our children's children.
To do so we must show, not merely in great crises, but in the everyday affairs of life, the qualities of practical intelligence, of courage, of hardihood, and endurance, and above all the power of devotion to a lofty ideal, which made great the men who founded this Republic in the days of Washington, which made great the men who preserved the Republic in the days of Abraham Lincoln" 5
Today life is complicated and intense, and the great changes brought about by the growth of industries in the last fifty years are felt in every part of our lives. Never before have men tried such a difficult job as developing and working to keep the Democratic republic form of government moving forward (a government run by a president elected by the people)… Many things depend on our success in keeping this type of governing moving forward, not only for us but for all mankind. If we fail, self-government (by the people) will suffer. There is no reason we should think we will fail, but we should take our role seriously and not hide from our responsibility, nor should we be inflexible… Even though our problems are different than the problems of the past, it is our duty to keep the idea of self-government moving forward. We know that people with high character are needed in order to govern their own affairs. We have faith in our Constitution, and we need to continue to follow this Constitution so that future generations will benefit from it as well. We need to honor it not only when we are in trouble but also in our everyday life, as Washington did when this government was formed and as Lincoln did when he preserved it during the Civil War.
Franklin D. Roosevelt: Thirty-second President (1933-1945)
Franklin Delano Roosevelt, commonly referred to as FDR, was a fifth cousin of the 26th president, Theodore Roosevelt. He is the only president elected to more than two terms; he was elected to four terms and was president for twelve years. FDR was elected at a time when the United States was in a severe economic crisis, and his persistent optimism and calls to action were appealing to the public. In his first one hundred days, FDR created jobs and programs for economic recovery and pushed for reform and regulation of the banking industry. As the decade progressed and war loomed on the horizon, FDR worked to keep the United States neutral. The US entered WWII when Japan attacked Pearl Harbor on December 7, 1941, "a date which will live in infamy", as FDR stated. During FDR's four terms, he created many government programs, including Social Security. FDR died in April of 1945, just months before the end of WWII.
"I am certain that my fellow Americans expect that on my induction into the presidency I will address them with candor and a decision which the present situation of our nation impels. This is preeminently the time to speak the truth, frankly and boldly. Nor need we shrink from honestly facing conditions in our country today. This great nation will endure as it has endured, will revive and will prosper. So, first of all, let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance. In every dark hour of our national life a leadership of frankness and vigor has met with that understanding and support of the people themselves which is essential to victory. I am convinced that you will again give that support to leadership in these critical days." 6
I am positive that my fellow Americans expect that on my entrance into the presidency I will speak to them with honesty and with the decisiveness which the present situation of our nation requires (the nation was in a state of economic depression). This is overwhelmingly the time to speak the truth. We do not need to hide from honestly facing the state our country is in. This great nation will last as it has lasted; it will recover and will grow. So, first of all, let me state my belief that the only thing we have to fear is fear itself (FDR wanted to give the nation hope and let them know that being afraid or nervous about their situation would only make things worse). Fear is useless and only stops the action needed to change losing progress into forward progress. In every hard time of our national life, leadership with honesty and strength has been supported by the people, and this is important to succeeding. I am convinced that you again will give that support and leadership in these serious days.
John F. Kennedy: Thirty-fifth President (1961-1963)
John F. Kennedy, commonly referred to as JFK, was the youngest person to be elected to the office of the president. During the election, JFK and Richard Nixon participated in the first televised presidential debates. Nixon looked nervous and was perspiring while JFK appeared cool and confident. This was a turning point in the campaign, and Kennedy began to take a slight lead in the polls. During his presidency, JFK inherited strained relations with the Soviet Union and struggled with the threat of communism. Involvement in Vietnam increased, and tensions grew with Cuba. Kennedy is known for developing the Peace Corps and instilling a sense of volunteerism within the citizens of the United States. The United States space program grew, and he announced that we would land a man on the moon. Kennedy began to take a stand on the civil rights movement and proposed the legislation that became the Civil Rights Act of 1964. Kennedy would be assassinated before this came to pass. His assassination has spurred many theories and has proven to be one of the most studied and controversial assassinations in US history.
"Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe, to assure the survival and the success of liberty…To those people in the huts and villages of half the globe struggling to break the bonds of mass misery, we pledge our best efforts to help them help themselves, for whatever period is required — not because the Communists may be doing it, not because we seek their votes, but because it is right. If a free society cannot help the many who are poor, it cannot save the few who are rich… And so, my fellow Americans, ask not what your country can do for you; ask what you can do for your country. My fellow citizens of the world, ask not what America will do for you, but what together we can do for the freedom of man." 7
Let every other country know, whether it wishes us well or not, that we will pay any price, stand up to any problem, meet any difficulty, support any friendly country, fight any enemy, to promise the survival and the success of liberty. To those people in huts and villages in other parts of the world struggling to break free of the ties of misery, we promise our best efforts to help them help themselves, for however long we are needed—not because Communists (supporters of another form of government) may be doing it, not because we want them to vote with us, but because it is the right thing to do. If a free society, like America, cannot help those who are poor, it cannot protect those who are rich either. And so, my fellow Americans, do not ask what the country can do for you; ask what you can do to help this country. My fellow citizens of the world, do not ask what America can do to help you, but what we can do together to help make all people free.
Ronald Reagan: Fortieth President (1981-1989)
Ronald Reagan is the oldest man to have been elected president; he was 69. Reagan believed in lower taxes to stimulate the economy, less government interference in people's lives, and a strong national defense to combat Communism. He was the first sitting president to survive being wounded in an assassination attempt. He appointed the first woman, Sandra Day O'Connor, to the Supreme Court. When he took office, inflation and unemployment levels were high, and Reagan pledged to emphasize economic recovery through lower taxes and reduced government regulation. During Reagan's tenure, military spending increased, the minimum wage was frozen, and welfare became an object of attack, as Reagan believed the federal government should have a lesser role and that the private sector would pick up where the government left off. By the end of the Reagan years, the gap between the rich and the poor had grown dramatically, and the national debt grew from $997 billion to $2.85 trillion during his presidency.
"Well, this Administration's objective will be a healthy, vigorous, growing economy that provides equal opportunities for all Americans with no barriers born of bigotry or discrimination. Putting America back to work means putting all Americans back to work. Ending inflation means freeing all Americans from the terror of runaway living costs… All must share in the productive work of this "new beginning," and all must share in the bounty of a revived economy… With the idealism and fair play which are the core of our system and our strength, we can have a strong and prosperous America at peace with itself and the world. So as we begin, let us take inventory… We are a nation that has a government — not the other way around. And this makes us special among the nations of the earth. Our Government has no power except that granted it by the people. It is time to check and reverse the growth of government which shows signs of having grown beyond the consent of the governed… It is my intention to curb the size and influence of the Federal establishment and to demand recognition of the distinction between the powers granted to the Federal Government and those reserved to the states or to the people… All of us — all of us need to be reminded that the Federal Government did not create the states; the states created the Federal Government." 8
Well, my helpers and I have a goal: a healthy, energetic, growing economy (businesses and customers) that provides equal opportunities for all Americans, with no walls built out of racism or unfairness. Putting America back to work means putting all Americans (the people) back to work. Ending the rise in the cost of things means freeing all Americans from the fear of runaway living costs… All must share in the helpful work of this "new beginning," and all must share in the reward of a recharged economy (businesses and customers).
Barack Obama: Forty-fourth President (2009-)
Barack Obama is the first African American elected President. When Obama took office, the nation was in an economic crisis involving a housing collapse, high unemployment figures, and a banking crisis, as well as two ongoing wars in the Middle East. His campaign had focused on economic reform, the need to research alternative energy sources, educational reform, and reform of the health care system. In his first 100 days in office, Barack Obama reached out to improve relations with many foreign countries, worked toward a resolution of the global economic crisis, and worked to control and end the wars in the Middle East. He was awarded the Nobel Peace Prize in 2009.
"Our challenges may be new. The instruments with which we meet them may be new. But those values upon which our success depends — honesty and hard work, courage and fair play, tolerance and curiosity, loyalty and patriotism — these things are old. These things are true. They have been the quiet force of progress throughout our history… What is demanded, then, is a return to these truths. What is required of us now is a new era of responsibility — a recognition on the part of every American that we have duties to ourselves, our nation and the world; duties that we do not grudgingly accept, but rather seize gladly, firm in the knowledge that there is nothing so satisfying to the spirit, so defining of our character than giving our all to a difficult task… America: In the face of our common dangers, in this winter of our hardship, let us remember these timeless words. With hope and virtue, let us brave once more the icy currents, and endure what storms may come. Let it be said by our children's children that when we were tested we refused to let this journey end, that we did not turn back nor did we falter; and with eyes fixed on the horizon and God's grace upon us, we carried forth that great gift of freedom and delivered it safely to future generations." 9
Our challenges may be new. The tools with which we meet them may be new. But those values upon which our success depends—honesty and hard work, courage and fair play, acceptance and curiosity, loyalty and love of America—these things are old. These things are true. They have been the quiet force of progress throughout our history.
What is required, then, is a return to these truths. What is required of us now is a new time of responsibility—a remembrance on the part of every American that we have duties to ourselves, our nation and the world; duties that we do not meanly accept, but instead grab gladly, firm in the knowledge that there is nothing so filling to the spirit, nothing that explains us and our values so well, as giving our best to a difficult job. America: in the face of danger, in this time of our hardship, let us remember these words that are timeless. With hope and goodness, let us brave the hard times that may come our way. Let it be told to our future generations that when we had to do a hard job we did not fail, we did not give up, we did not fumble, and with our minds set we carried forward the great gift of freedom and delivered it safely to future generations.
My third grade classroom has students with a variety of reading levels ranging from first grade to fourth grade and perhaps beyond. It is my intention to scaffold vocabulary and comprehension instruction in order to provide an entry point for all learners. Many of the inaugural addresses are lengthy, so I have chosen portions I feel to be valuable and worthy of investigation. I will provide the original text and modified versions and allow students to develop interpretations of these addresses. Vocabulary activities will play a large role in dissecting and analyzing the message embedded in the speeches as they relate to The Declaration of Independence and the U.S. Constitution. Students will work in cooperative learning groups composed of students with a variety of levels of ability and learning styles. Each group will be assigned one president and will conduct research to gain basic background information. Next, students will review and interpret the inaugural address segment. Each team member will be held responsible for learning about the subject matter and for helping the other members learn as well. This mutual support creates an atmosphere where all students can achieve and fosters a strong sense of community within the classroom. When the initial investigation is complete, groups will be re-formed and students will share their knowledge with their classmates. We will create a timeline of the presidents and make judgments about whether and how the presidential messages have changed over time.
Vocabulary development is an integral part of all content learning. There is an undeniable link between vocabulary understanding and comprehension. As a teacher in the elementary grades, one must realize that direct and implicit instruction of vocabulary is vital and should occur daily in the classroom. A variety of vocabulary activities can aid in highlighting the most important words for content area comprehension. Two helpful strategies I use in my classroom are explained next.
Student VOC Strategy
This strategy helps students analyze word meanings from context. Create a list of key vocabulary words that are coming up. Have your students write the original sentence in which the vocabulary word is found. Your students should make a prediction of what this new vocabulary word means. They should then consult a friend or a reliable resource, such as a dictionary, to determine the meaning of the word. Students will create an original sentence to show the meaning of the word. Finally they should draw a picture that will help them understand the word and explain it. This is a fantastic way for students to analyze and decode words in a text they don't understand, and a great strategy for tackling the vocabulary in the Inaugural Addresses.
Word Banks
Word Banks are places where students can keep a list of words they have learned so that they can refer to them as needed. I prefer to have students keep their word banks on rings. I use a variety of color coded index cards and assign a specific color to a specific part of speech, such as all nouns are on blue cards. Using the rings enables students to develop alphabetizing skills, parts of speech skills, and is more mobile than a journal. Students should be expected to use the words in their writing and their speaking.
Cooperative Learning Grouping
Think-Pair-Share
During Think-Pair-Share activities, students are given information or a question and must independently Think about how they will react to the prompt. The Think period should last a short time, no longer than 5 minutes. Next, they will Pair with a partner and conference about the prompt. During this period, they may develop new questions or clarify understanding. This period should also last a short time, no longer than 5 minutes. Then they will Share with another partner set, small group, or entire class. All information can be discussed and questions may lead to further investigations. The time frame on this portion will be dependent on the choice of sharing. As the essential questions are posed to stimulate student thinking, we will use the Think-Pair-Share model to inspire understanding and questions about our topic. This will provide a starting point for me as it can identify what the students already know, what they are confused about, what they know little or nothing about, and also what interests them and what they want to learn.
Jigsaw
This form of cooperative learning breaks larger topics or resources into small parts. Each group is given one part of the whole. The students read the given portion, discuss, and prepare a tutorial project for the rest of the class. After modeling how to read and interpret Washington's Inaugural Address, students will be broken into small groups of two to three. They will be assigned one of the remaining seven presidents. They will be responsible for conducting research to obtain some background information about their president. After they have developed a snapshot of their president, they will receive the corresponding vocabulary list and Inaugural Address. As a group, they will determine what they believe the idea or vision for America their president embodied. After this step, groups will be restructured and students will be responsible for sharing their expertise with the other group members and rationale for the determinations they arrived at. These new groups will then be able to create a timeline of the presidents and discuss similarities and differences within their messages.
Three Minute Review
I stop any time during a lecture or discussion and give teams three minutes to review what has been said, ask clarifying questions, or answer questions. Using this strategy, students will be able to have time to digest the information already presented, ask questions to clear up misconceptions, and formulate additional questions to begin to connect to future learning.
To meet the needs of all the learners in my classroom, I will use Differentiated Instruction. Differentiated Instruction is an approach to teaching content in ways that address a variety of learning styles and needs of students while maximizing the potential of all learners. This will help me to accommodate the diversity of academic needs present in my classroom. My instruction as well as the students' research can be differentiated. I will differentiate according to content, process, or product. Through differentiated content students will have access to a varied level of texts and/or websites and could be "buddied" with a partner at a different level to assist with the learning. Differentiated process will involve the students being offered choices about the way they gather information; students will be given access to books, audio tapes, and videos. When differentiating products, students are given learning contracts which present them with a variety of options to create different products, such as plays, poems, or PowerPoint presentations, based on their individualized learning style and interest.
Learning occurs when students can make personal connections to the prior knowledge and future learning. Each student comes to class with prior knowledge; this will be the starting point for new information. While reading, students will make text-to-self, text-to-text, and text-to-world connections. These connections help students to become more aware of different genres, forms, and structures within the text. When students can make a connection to a character within a story, motives, thoughts, and feelings of that character are better understood and history becomes alive for them.
"Curiosity spawns questions. Questions are the master key to understanding. Questions clarify confusion. Questions stimulate research efforts. Questions propel us forward and take us deeper into reading." Teachers need to monitor their students' understanding; the questioning strategy offers teachers an opportunity to check for understanding and clear up any misconceptions. Student formulated questions are an essential component to this process and help determine where the students want to go next in learning of the topic.
It has been said, "A picture is worth a thousand words". Learning to interpret images—also symbols, graphs, and facial expressions—improves comprehension.
Determining Importance
Students need to discriminate between what is important in a reading passage and what is not important; this is the very definition of comprehension. Once students determine what is important, they can begin to apply meaning to the selection and can build reasoning skills.
Lesson One
Essential Question-What are some examples of fundamental rights, responsibilities, and privileges of American citizenship?
Background information- Students will have already investigated and discussed the events leading to the writing of the Declaration of Independence. Students will have already read, discussed, and interpreted the Declaration of Independence. Students will have developed an elementary level of understanding of why this document is important in American history.
Instruction- The class will view a short video about the Preamble to the Constitution. Next we will examine the U.S. Constitution. It will be explained that the Constitution is the framework for our government and outlines the duties and powers of each of the three branches of the government.
Activity-Students will participate in a jigsaw activity. First students will be grouped and each group will explore a different Article of the Constitution. Then we will regroup and each expert will share the information on their Article. Students will discuss which rights are important to them and which rights are important for the class. Groups will create their own classroom Constitution.
Assessment-Each group will create a presentation for their Constitution and will justify what they included in their Constitution.
Lesson Two
Essential Question- What does freedom mean to you?
Background Information- The Founding Fathers realized that the Constitution might need to be changed from time to time and provided a procedure to amend it. The Constitution has 27 Amendments; the first ten are known as the Bill of Rights.
Instruction-Focusing on The Bill of Rights, we will investigate one right at a time. Students will take quick notes about the Bill of Rights. As the notes are taken, examples will be given of the rights that are protected, and we will discuss what could happen if a given right had not been included.
Activity-Students will participate in an activity using the Think-Pair-Share cooperative learning format. As each right is introduced, students will "Think" about what this right means to them, making a connection to themselves. They will then "Pair", meaning they will find a partner and discuss the connections they have made to the right. Finally each pair will "Share" their discussion and connections with the class. Next, students will work in small groups and will receive a copy of the Bill of Rights. They will only be able to select 5 of the rights to start their own country. They must come to a consensus within their group and justify why they chose those rights. This will hopefully give the students an idea of how difficult a task this was for the writers of the Constitution. They will need to present their chosen rights to the class and explain why they chose them.
Assessment-Students will describe three freedoms that are granted to them by the Constitution and how they affect their everyday lives. Students will create a visual interpretation, such as a poster, Google Doodle, or video representation, of one or more of the rights covered in The Bill of Rights.
Lesson Three
Essential Question-How do the ideas of the President shape the ideas of America?
Background information-There have been 44 Presidents elected to represent the interests of the citizens of the United States. The President pledges to "preserve, protect, and defend the Constitution of the United States" during the inauguration. Every president inaugurated after an election since Washington has also given an Inaugural Address. In this address, the president details his vision for America.
Instruction-Modeling how to dissect and interpret a presidential address will be completed as a whole-group activity using Washington's Inaugural Address. A vocabulary list (see Appendix B) will be distributed to the students. A Student VOC (see Appendix C) will be completed in groups, with students able to jigsaw the words. Next the original copy of the selected Inaugural passage will be given to the students. Using the Think-Pair-Share cooperative learning strategy, students will develop an interpretation of the address. A paraphrased passage will then be handed out and students can compare the interpretations. Students will look to identify how the president explains his vision for American rights. As students work through the addresses, they will look for common ideas.
Activity-Once again a jigsaw cooperative strategy will be completed. Small groups of students will be assigned another of the preselected Presidential Inaugural Addresses. Each of these small groups will mimic the procedures modeled from Washington's Address. In addition to examining the address, students will research background information on their president in order to create a picture of what their president was facing. Next, students will be regrouped and will bring their expertise to their new group. Each student will bear the responsibility to impart their new knowledge on the other group members.
Assessment-Student groups will create a timeline of the studied presidents. They will compare and contrast the messages to determine if there are any common ideas that are present throughout the addresses. As a culminating activity, students will then have to write their own Inaugural Address. They will determine what American rights and freedoms are important to them and explain them in a speech.
Administration, National Archives and Records. The Charters of Freedom; "A New World is at Hand". n.d. http://archives.gov/exhibits/charters/constitution.html (accessed July 29, 2011).
Cushman, Jackie Gingrich, ed. The Essential American; A Patriot's Resource. Washington, DC: Regnery Publishing, Inc, 2010.
"Declaration of Independence." Philadelphia, July 4, 1776.
Foner, Eric. The Story of American Freedom. New York: W.W. Norton and Company, Inc, 1998.
Harvey, Stephanie, and Anne Goudvis. Strategies That Work. New York: Stenhouse Publishers, 2000.
Hofstadter, Richard. The American Political Tradition and the Men Who Made It. New York: Random House, 1989.
Humes, James C. My Fellow Americans Presidential Addresses That Shaped History. New York: Praeger Publishers, 1992.
Jefferson, Thomas. "Inaugural Address." Washington DC, March 4, 1801.
Kennedy, John F. "Inaugural Address." Washington DC, January 20, 1961.
Lincoln, Abraham. "Inaugural Address." Washington DC, March 4, 1865.
Mount, Steve. The US Constitution online. March 2011. http://www.usconstitution.net/index.html (accessed July 29, 2011).
Obama, Barack H. "Inaugural Address." Washington DC, January 20, 2009.
Marzano, R., D. Pickering, and J. Pollock. Classroom Instruction that Works: Research-Based Strategies for Increasing Student Achievement. Alexandria: Association for Supervision and Curriculum Development, 2000.
Reagan, Ronald. "Inaugural Address." Washington DC, January 20, 1981.
Remini, Robert V., and Terry Golway. Fellow Citizens; The Penguin Book of U.S. Presidential Inaugural Addresses. New York: Penguin Group, 2008.
Roosevelt, Franklin D. "Inaugural Address." Washington DC, March 4, 1933.
Roosevelt, Theodore. "Inaugural Address." Washington DC, March 4, 1905.
Skarmis, Nancy. Our Presidents Their Lives and Stories. Nashville: Ideals Publications, Inc, 1994.
The Addresses and Messages of the Presidents of the United States Inaugural, Annual, and Special from 1789 to 1846. Vol. 2. 2 vols. New York: Edward Walker, 1846.
The Addresses and Messages of The Presidents of the United States Inaugural, Annual, Special From 1789 to 1846. Vol. 1. 2 vols. New York: Edward Walker, 1846.
The White House; The Presidents. n.d. http://www.whitehouse.gov/about/presidents (accessed July 12, 2011).
Washington, George. "Inaugural Address." New York, April 30, 1789.
DE 3rd Grade Civics Standard #1 Students will examine the structure and purposes of governments with specific emphasis on constitutional democracy.
DE 3rd Grade Civics Standard #2 Students will understand the principles and ideals underlying the American political system.
DE 3rd Grade Civics Standard #3 Students will understand that American citizens have distinct responsibilities and privileges.
DE 3rd Grade Civics Standard #4 Students will develop and employ the civic skills necessary for effective, participatory citizenship.
DE 3rd Grade History Standard #1 Students will employ chronological concepts in analyzing historical phenomena.
DE 3rd Grade History Standard #2 Students will gather, examine, and analyze historical data.
DE 3rd Grade History Standard #3 Students will interpret historical data.
DE 3rd Grade History Standard #4 Students will develop historical knowledge of major events and phenomena in United States history.
DE 3rd Grade ELA Standard #1 Students will use written language and oral English appropriate for various purposes and audiences.
DE 3rd Grade ELA Standard #2 Students will construct, examine, and extend the meaning of literary, informative, and technical texts through listening, reading, and viewing.
DE 3rd Grade ELA Standard #3 Students will access, organize, and evaluate information gained by listening, reading, and viewing.
DE 3rd Grade ELA Standard #4 Students will use literary knowledge assessed through print and visual media to connect self to society and culture.
Washington: behold-see; pledges-promises; prejudices-powers; animosities-dislikes; misdirect-alter or change; comprehensive-complete; foundation-base; policy-plan; immutable-unchangeable; principles-beliefs; morality-goodness; preeminence-superiority; exemplified-shown; attributes-parts
Jefferson: deem-think; essential-necessary; principles-beliefs; consequently-therefore; ought-should; justice-fairness; persuasion-view; commerce-trade or business; entangling-tangled or mixed up with; alliances-deals; preservation-keep; encouragement-support; habeas corpus-unlawful holding; impartiality-fairly
Lincoln: malice-hatred; charity-kindness; firmness-control; strive-struggle; bind-fix; cherish-value; just-fair; impending-coming; avert-prevent; insurgents-fighters; negotiation-discussion; deprecated-disapproved
T. Roosevelt: tremendous-great; wrought- brought about; intense-powerful; half century- fifty years; formidable-difficult
F.D. Roosevelt: certain-positive; induction-entrance; address-speak to; candor-honesty; impels-requires; preeminently-overwhelmingly; shrink-hide; conditions-state; endure-last; revive-restart; prosper-grow; assert- say; paralyzes-stops; convert-change; retreat-losing; frankness-honesty; vigor-strength; essential-important; critical-serious
J.F. Kennedy: nation-country; ill-poorly; bear-stand up to; hardship-difficulty; foe-enemy; bonds-ties; misery-sadness; pledge-promise
Reagan: administrations-the President and his helpers; objective-goal; vigorous- with energy; economy-business and customers; barriers-walls; bigotry-racism; discrimination-unfairness; inflation-rise in the cost of everyday things; terror-fear; productive-helpful; bounty-reward; revived-recharged
Obama: instruments-tools; tolerance-acceptance; patriotism-loyalty to America; demanded-required; era-specific period of time; recognition-remembrance or acknowledgement; grudgingly-meanly; seize-grab; satisfying-filling; virtue-goodness
Vocabulary Word:_________________________________________________
1. Write the sentence where the word is found in the text.
2. Based on the sentence, what do you think the word means?
3. Consult an "expert" for the actual definition (friend, text, dictionary). Expert:
Expert's Definition:
4. Write the word in a sentence of your own.
5. Choose one of the following ways to help you remember the word's meaning: draw a picture; create a movement; connect the word to a story, song, or news report you've heard. Write down how you are going to remember this word.
6. Explain
1 (Declaration of Independence 1776)
2 (Washington 1789)
3 (Jefferson 1801)
4 (Lincoln 1865)
5 (T. Roosevelt 1905)
6 (F. D. Roosevelt 1933)
7 (Kennedy 1961)
8 (Reagan 1981)
9 (Obama 2009)
Michael Klemm (born 4 September 1953 in Bensheim) is a German author, director, and actor.
Life
Michael Klemm studied Egyptology and art history in Munich, and then began three years of acting studies. He is the author of numerous plays. As a director and actor he worked at several German theatres: the Residenztheater München, Schauspiel Bonn, Zimmertheater Heidelberg, Landestheater Schwaben in Memmingen, Fränkisches Theater Schloss Massbach, and the Marburger Schauspiel. In Munich he also founded the Compagnie Molion and then the Theater Scaramouche, and later in Landshut the Theater Café Molière, from which today's Kleines Theater Landshut emerged.
In 1979 he played King Arthur in the ZDF television series Merlin. In the theatre production Die Streiche des Scapin by Molière, recorded for television in 1984, he played Scapin. In the 1990s he made his first feature film, Leonce und Lena, based on the play by Georg Büchner, in which he plays the role of Valerio. This was followed by the documentary film Wo warst Du?, the story of a Jew and a former member of the Waffen-SS who became friends in the 1990s.
Michael Klemm lived for many years in Potsdam, where he founded the Theater Comédie Soleil in 2004. In November 2009 he moved his theatre to a new venue in Werder an der Havel.
His first novel, Schatten der Seele, appeared in 2010, followed in 2012 by a second, autobiographical novel, Die Hofnarren. The novel Pans letztes Lied – Die Michael Jackson Verschwörung was published in February 2015, and the novel BABYLON – die große Täuschung in August 2017.
Since spring 2013 Michael Klemm has been living in Hesse again. Together with Horst Wüst he runs the film production company One World Production Ltd. & Co. KG (Munich/Heidelberg).
Filmography
Acting
1979: Blauer Himmel, den ich nur ahne (TV film; director: Stefan Rinser)
1979: Merlin (TV series; director: Armin Dahlen)
1980: Das kleine Hotel (TV film; director: Eberhard Hauff)
1982: Alarm im Schlossmuseum (three-part TV miniseries; director: Armin Dahlen)
1982: Beim Bund (TV series, episode: Zett Two)
1986: Paradies (director: Doris Dörrie)
1998: Bin ich schön? (director: Doris Dörrie)
2014: Pans letztes Lied (feature film; director: Julian Tyrasa)
2015: Magic Mystery Hotel (feature film; director: Michael Klemm)
2017: Die Komödianten von Heppenstätt (director: Michael Klemm)
Screenplay and production
Pans letztes Lied – Die Michael Jackson Verschwörung, feature film, 2014; director: Julian Tyrasa
Magic Mystery Hotel, feature film, 2015 – producer
Die Komödianten von Heppenstätt, pilot for a TV series – producer
Theatre works
Plays:
Das Leben des Herrn Villon, Munich 1985, Alabamahalle; the story of the French poet and vagabond François Villon.
Der Untergang von Poppenbölling, Munich 1986, Theaterhallen Dachauer Straße.
Das Grauen der Borgia, Munich 1988, Theater Scaramouche; an adaptation of the novel by Alfred Henschke, known under the pseudonym Klabund.
Fata Morgana, Theater – Café Molière Landshut 1989; an absurd journey through the 'desert of human existence'.
Welcome To The Nazi Dome, Orensanz Foundation New York, October 1995; a philosophical 'cleansing'.
Malipiero "Gottes geliebter Narr", Sommerhausen 2001; a play about Luigi Malipiero, founder of the famous Torturmtheater.
America, Sommerhausen 2002; a play about Franz Daniel Pastorius, founder of Germantown in Pennsylvania, the first German settlement in America.
Death Row Valley, Fränkisches Theater Schloss Massbach 2005; the story of the double murderer Gary Gilmore.
Die Guillotine, Comédie Soleil Potsdam 2006; an executioner's monologue.
Paganini oh Paganini, Theater – Comédie Soleil Potsdam 2007; the life story of the brilliant musician and composer.
Arkadien – Ich hatte einen Traum, Theater – Comédie Soleil Potsdam 2008; the life story of Karl Friedrich Schinkel, the great architect of Classicism, staged at the Schlosstheater im Neuen Palais. For this play he received the 2008 Schinkel Prize of the Schinkel Society of Neuruppin (Schinkel's birthplace).
Die Prophezeiung, premiere, 2010, Theater Comédie Soleil, Werder a.d. Havel.
The Joker, premiere, 2012, Theater in der Brotfabrik, Berlin.
Die Flügel des Königs, premiere under the pseudonym Jan van Holbein, 2012, Theater Comédie Soleil, Werder a.d. Havel.
Café Heimat – Rock den Elch, rock-music play, premiere 2016, Bensheim a.d. Bergstraße.
Wenn Theobald kommt, comedy, premiere 2020, Theater Rex Saas-Fee.
Plays for young audiences:
Kiez'n'Kids, Berlin 1998; youth music theatre ABC Köpenick.
No Exit, Berlin 1999; youth music theatre ABC Köpenick.
Way out, Berlin 2000; youth music theatre ABC Köpenick.
Esther, Berlin 2001; youth music theatre ABC Köpenick.
Street Life, Bad Tölz 2014; youth music theatre in cooperation with the Bad Tölz youth services.
External links
Michael Klemm's website
References
Film actor
Stage actor
Theatre director
Author
Literature (20th century)
Literature (21st century)
Literature (German)
Drama
Person (Bensheim)
German
Born 1953
Male
If you have a heart for the Multiethnic Young Life mission, then we have a place for you to serve.
Is God calling you to minister full time to this population of teenagers? Prayerfully consider a full- or part-time position on Multiethnic staff in your area. Our volunteer leaders spend time each week getting to know kids on their turf. Because kids don't care how much you know till they know how much you care, Young Life leaders show they care by going where kids are, meeting them as they are, believing in who they can be. And because some of these kids don't often have the basics of life, Multiethnic Young Life needs folks who can provide transportation to and from club and, sometimes, food before or after club.
Giving of your time is crucial, but financial resources are what keep Young Life's Multiethnic ministry in the lives of kids who need Jesus Christ. Multiethnic Young Life needs the support of those willing to give financially, as well as those who want to serve on the area's Young Life committee and represent the interests of Multiethnic Young Life in the local area.
For more information on any of these areas of service, contact your local Young Life office or the Multiethnic office at 719-381-1867. Learn about starting a Multiethnic club in your area, or giving to the Multiethnic campership fund.
Rebutia buiningiana is an interesting plant of the section Aylostera from northern Argentina, from the border area of the provinces of Jujuy and Salta. It was named in honour of the Dutch cactus expert A. F. H. Buining.
Rebutia buiningiana Rausch
Rausch, Walter; Kakteen und andere Sukkulenten, 23: 98, 1972
Section Aylostera, series Buiningiana
Description
Stem solitary, rarely offsetting, forming small clumps in cultivation, globular, up to 50 mm wide, with fibrous roots; epidermis light grey-green. Ribs up to 20, spiralled, broken up into rounded tubercles about 4 mm wide; areoles round to oval, about 2 mm wide, with white to light brown felt. Radial spines 14–16, widely spreading, 6–10 mm long, thin, brittle, glassy white; central spines 2–3, arranged one above the other in the areole, somewhat stronger, up to 14 mm long, white with a brown tip and a thickened brown base.
Flowers borne on the side of the plant, appearing rather full, 35 mm long, 30 mm wide, pinkish orange, fading over the following days to a soft light orange; receptacle and flower tube orange-pink, with brown scales and white hairs and bristles; outer perianth segments spatulate, light pink with brownish tips; inner perianth segments spatulate, pinkish orange; throat whitish pink; filaments whitish; style yellowish, fused with the tube for about 10 mm; stigma yellow, with 6 lobes. Fruit globular, about 5 mm wide, red-brown, with dark brown to black scales and white hairs and bristles. Seeds of the Aylostera type, roundly helmet-shaped, dark brown, about 1 mm in size.
Varieties and forms
No varieties or forms have been described; the species is apparently known from a single locality, and plants in collections show only very small differences. Only the flower colour changes more noticeably, not between different plants, however, but over the course of flowering: the flowers fade markedly over the following days. Besides the collection from which the type of the species was selected, WR511, the collection KK860 was also labelled R. buiningiana, but given its rather distant place of origin this identification is quite problematic. More recently the collection RH1387 has been labelled R. buiningiana ?, and the collections RH605 and SE61B as R. buiningiana f.; in this last collection the flower colour is pinker, with a higher share of red.
Occurrence and distribution
R. buiningiana comes from northern Argentina; the locality of the type collection lies in Jujuy province near Iruya at an altitude of 2700 m. The locality given for the collection KK860 (Las Cajas, Tarija, 2800 m) seems to suggest otherwise. More recently this collection has been linked with Ritter's find of the same name (now R. archibuiningiana), whose locality is in fact considerably closer to Kníže's original placement of the find. The more recent collections are likewise associated with Iruya: for SE61B the locality was given as Argentina, prov. Jujuy, near Rio Iruya, while for the RH collections it was given as Argentina, prov. Salta, Iruya. The difference in the province stated is not very significant, as Iruya lies close to the border of the two provinces.
Poznámky
Jménem R. buiningiana byly postupně označeny dvě zcela odlišné rostliny. Nejprve toto jméno navrhl F. Ritter pro svůj nález z blízkosti Padcaya (Bolívie, departament Tarija), jméno však nebylo platně publikováno. Proto poté, co W. Rausch platně popsal pod stejným jménem svoji rostlinu, změnil F. Ritter jméno pro svůj nález na R. archibuiningiana (ve smyslu stará, původní buiningiana). Jako zajímavost zaslouží zmínku údaj W. Rausche v prvotním popisu, že se při prvním sběru tohoto druhu domníval, že nalezl novou formu R. marsoneri , což rostliny v našich sbírkách příliš nepotvrzují. V CITES Cact. Checklist (1992) je R. buiningiana připojena mezi synonyma k R. pseudodeminuta, v nejnovějším shrnutí čeledi Cactaceae (D. Hunt, The New Cactus Lexicon, 2006) pak k R. deminuta subsp. kupperiana.
Cultivation
Growing this attractive species requires no special conditions; its needs do not differ in any way from those of most species of the genus. Plenty of light and fresh air are the prerequisites for proper spination and regular flowering. During the dormant period, dryness and cool temperatures must keep the plants from waking prematurely, so that the stems do not become deformed and the spination reduced. Its modest offsetting in cultivation makes vegetative propagation simple: the offsets root without problems and soon reach flowering size. Seeds are occasionally offered as well, but it is worth noting that in many collections entirely different plants, originating from F. Ritter's finds, can still be found under the name R. buiningiana, and the same may occasionally turn up in seed offers.
Literature
Backeberg, Curt; Haage, Walter: Das Kakteenlexikon, p. 503, 1977
Hunt, David; et al.: The New Cactus Lexicon, p. 246, 2006
Pilbeam, John: Rebutia, p. 29, 1997
Šída, Otakar: Atlas kaktusů, tab. 27, 2004
External links
http://rebutia.iglu.cz/sekce2/bui1
http://hornad.fei.tuke.sk/~suba/Reb/idents/buiningiana.htm
Rebutia
Flora of western South America
Endemic flora of Bolivia
The General Council of Haute-Loire (Occitan: Conselh Generau dau Naut Léger) is the deliberative and executive assembly of the French department of Haute-Loire, in the Auvergne-Rhône-Alpes region.
Its seat is in Le Puy-en-Velay (Occitan: Lo Puèi de Velai), and since 2004 its president has been Gérard Roche (Divers droite).
Former presidents
Robert Bouvard (1949-1964)
Georges Billamboz (1964-1973)
Jacques Barrot (1973-2004)
Gérard Roche (2004-)
Composition
As of March 2008, the General Council of Haute-Loire was made up of 35 members elected from the 35 cantons of Haute-Loire.
See also
Regional Council of Auvergne
List of presidents of the General Councils of France
External links
Official website of the General Council of Haute-Loire
Haute-Loire
namespace ipx {
// IPM implements an interior point method based on KKTSolver and Iterate.
// The algorithm is a variant of Mehrotra's [1] predictor-corrector method
// that requires two linear system solves per iteration.
//
// [1] S. Mehrotra, "On the implementation of a primal-dual interior point
// method", SIAM J. Optim., 2 (1992).
class IPM {
public:
explicit IPM(const Control& control);
// Initializes @iterate with a starting point for Driver(). The KKT solver
// must allow Factorize(NULL, info) (see kkt_solver.h).
// On return info->status_ipm is
// IPX_STATUS_not_run if successful,
// IPX_STATUS_time_limit if the KKT solver was interrupted by time limit,
// IPX_STATUS_failed if the KKT solver failed with info->errflag.
// If the method did not terminate successfully, @iterate is unchanged.
void StartingPoint(KKTSolver* kkt, Iterate* iterate, Info* info);
// Updates @iterate by interior point iterations. On return ipm_status is
// IPX_STATUS_optimal if iterate->term_crit_reached() is true,
// IPX_STATUS_iter_limit if info->iter >= maxiter(),
// IPX_STATUS_no_progress if no progress over a number of iterations,
// IPX_STATUS_time_limit if interrupted by time limit,
// IPX_STATUS_failed if the KKT solver failed with info->errflag.
void Driver(KKTSolver* kkt, Iterate* iterate, Info* info);
Int maxiter() const { return maxiter_; }
void maxiter(Int i) { maxiter_ = i; }
private:
struct Step;
void ComputeStartingPoint();
void Predictor(Step& step);
void AddCorrector(Step& step);
void StepSizes(const Step& step);
void MakeStep(const Step& step);
// Reduces the following linear system to KKT form:
// [ AI ] [dx ] [rb]
// [ I -I ] [dxl] = [rl]
// [ I I ] [dxu] [ru]
// [ AI' I -I ] [dy ] [rc]
// [ Zl Xl ] [dzl] [sl]
// [ Zu Xu ] [dzu] [su]
// Each of @rb, @rc, @rl and @ru can be NULL, in which case its entries are
// assumed to be 0.0. This is currently not used, but was implemented for
// computing centrality correctors.
void SolveNewtonSystem(const double* rb, const double* rc,
const double* rl, const double* ru,
const double* sl, const double* su, Step& lhs);
void PrintHeader();
void PrintOutput();
const Control& control_;
KKTSolver* kkt_{nullptr};
Iterate* iterate_{nullptr};
Info* info_{nullptr};
double step_primal_{0.0}, step_dual_{0.0};
// Counts the # bad iterations since the last good iteration. An iteration
// is bad if the primal or dual step size is < 0.05.
Int num_bad_iter_{0};
Int maxiter_{-1};
};
} // namespace ipx
#endif // IPX_IPM_H_
Great taste. 90 calories per serving. 50% more calcium than dairy milk (2% dairy milk data drawn from USDA National Nutrient Database for Standard Reference, Release 24, except vitamin D, based on national market data for 2% dairy milk; data consistent with typical 2% dairy milk). Lactose, gluten & soy-free. Non GMO Project verified. nongmoproject.org.

Get your apron and get cooking with Silk. Did you know Silk Pure Coconut coconutmilk can be used cup-for-cup in recipes in place of dairy milk? So it's a perfect choice for recipes and baking. And with 50% more calcium than dairy milk (see note above), it brings a whole lot more than just great taste. Visit Silk.com for more recipes.

Did You Know? Silk Pure Coconut contains medium chain fatty acids, fats that may be more easily burned as energy than other fats. Our coconutmilk has 3 g per serving. Big coconut taste. Tiny umbrella optional.

Love It Guarantee: Now our vanilla coconutmilk is even more delicious, like a tropical vacation for your taste buds. And we did it without loading up on extra sugar or artificial stuff, so your sip of paradise is truly worry-free.

Honest to Goodness: A promise from Silk. For more than 15 years, we've brought you simple, delicious food. And the philosophy behind it is simple, too: start with ingredients that are grown responsibly, and keep them as close to nature as we can. Today more than ever, we want you to know exactly what that means. No artificial colors. No artificial flavors. No high-fructose corn syrup. Dairy free. Non-GMO ingredients. Responsibly produced. Made without genetically modified ingredients, for your health and the planet's. Give us a shout on Facebook and we'll talk about it together. Choose wisely, drink happily.
Nutrition comparison, per serving:
Silk Pure Coconut Vanilla: Calories 90, Calcium 45% DV, Vitamin D 25% DV, Total Fat 5 g, Cholesterol 0 mg, Sugars 9 g.
2% Dairy Milk (data drawn from USDA National Nutrient Database for Standard Reference, Release 24, except vitamin D, based on national market data for 2% dairy milk; data consistent with typical 2% dairy milk): Calories 120, Calcium 30% DV, Vitamin D 25% DV, Total Fat 5 g, Sugars 12 g.

It's free! Silk Pure Coconut Vanilla is free of dairy, soy, gluten, lactose, cholesterol, eggs, casein, MSG and worries. You still have to pay for it, though.

Recyclable. Facilities may not exist in your area. Visit Silk.com/recycle to see if recyclable in your area.
{"url":"http:\/\/www.zentralblatt-math.org\/zmath\/en\/advanced\/?q=an:1149.62078","text":"Language: \u00a0 Search: \u00a0 Contact\nZentralblatt MATH has released its new interface!\nFor an improved author identification, see the new author database of ZBMATH.\n\nQuery:\nFill in the form and click \u00bbSearch\u00ab...\nFormat:\nDisplay: entries per page entries\nZbl 1149.62078\nLi, Jiexiang; Tran, Lanh Tat\nNonparametric estimation of conditional expectation.\n(English)\n[J] J. Stat. Plann. Inference 139, No. 2, 164-175 (2009). ISSN 0378-3758\n\nSummary: Denote the integer lattice points in the $N$-dimensional Euclidean space by $\\Bbb Z^N$ and assume that $(X_i,Y_i)$, $i\\in\\Bbb Z^N$, is a mixing random field. Estimators of the conditional expectation $r(x)=E[Y_i\\,|\\,X_i=x]$ by nearest neighbor methods are established and investigated. The main analytical result of this study is that, under general mixing assumptions, the estimators considered are asymptotically normal. Many difficulties arise since points in higher dimensional space $N\\geqslant 2$ cannot be linearly ordered. Our result applies to many situations where parametric methods cannot be adopted with confidence.\nMSC 2000:\n*62M40 Statistics of random fields\n62G05 Nonparametric estimation\n62E20 Asymptotic distribution theory in statistics\n62G08 Nonparametric regression\n62M10 Time series, etc. 
(statistics)\n\nKeywords: random field; conditional expectation; nearest neighbor estimator; asymptotic normality\n\nHighlights\nMaster Server","date":"2013-06-19 06:48:02","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7282928824424744, \"perplexity\": 3184.6548635069535}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368708142617\/warc\/CC-MAIN-20130516124222-00053-ip-10-60-113-184.ec2.internal.warc.gz\"}"} | null | null |