| text | meta | __index_level_0__ |
|---|---|---|
Q: AWS S3 "peer not authenticated" I am using the AWS Java SDK in an application that uploads small files about once a minute to S3. The application has been running uninterrupted for two weeks, but has had two major periods in which uploads threw an SSL-related "peer not authenticated" error.
The first period lasted about three days. It wasn't abrupt, but "sputtered out": more and more uploads threw exceptions until, within a few minutes, none succeeded.
The second period began about a week ago and has not ended.
On the AWS forum, an Amazon employee seems to recommend solving this problem by not validating Amazon's certificate.
Please explain if I've misunderstood this, or what else I should try.
A: I had a similar issue a while back, but with my EC2 instance, using the GnuTLS libraries for SSL communication. As mentioned on that thread, you need to code for acceptance of self-signed certificates, or, if the server's identity does not matter to you (i.e., you have no issue with not validating the server), you can disable checking the server's authenticity. It worked fine for me thereafter. As you said, it happens once in a while, and I am still unsure why this behavior occurs.
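For reference, here is a minimal sketch of what "disabling server authenticity checking" amounts to in Java, using the JDK's own javax.net.ssl API rather than anything AWS-specific: an SSLContext initialized with a trust manager that accepts any certificate chain. The class and method names here are illustrative, and this trades away all protection against man-in-the-middle attacks, so treat it as a diagnostic last resort rather than a fix. (The v1 AWS SDK for Java also appears to expose a `com.amazonaws.sdk.disableCertChecking` system property for the same purpose.)

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class TrustAllExample {

    // Builds an SSLContext whose trust manager accepts ANY certificate
    // chain, including self-signed ones. WARNING: this disables server
    // authentication entirely and is vulnerable to MITM attacks.
    static SSLContext trustAllContext() throws Exception {
        TrustManager[] trustAll = {
            new X509TrustManager() {
                @Override
                public X509Certificate[] getAcceptedIssuers() {
                    return new X509Certificate[0];
                }
                @Override
                public void checkClientTrusted(X509Certificate[] chain, String authType) {
                    // accept everything
                }
                @Override
                public void checkServerTrusted(X509Certificate[] chain, String authType) {
                    // accept everything
                }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = trustAllContext();
        // Any HTTPS client that lets you supply an SSLSocketFactory can
        // use ctx.getSocketFactory() to skip certificate validation.
        System.out.println(ctx.getProtocol()); // prints "TLS"
    }
}
```

Whether to actually do this is the crux of the question above: if the failures are transient handshake problems on Amazon's side, retrying the upload is usually a safer workaround than turning off validation.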
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,897
|
Xeroderma pigmentosum (synonyms: melanosis reticularis progressiva, Pick's progressive reticular melanosis) is a hereditary skin disease marked by heightened sensitivity to ultraviolet radiation; it manifests at the age of two to three years and progresses steadily. It is a precancerous condition of the skin. It is rare.
Etiology and pathogenesis
The hereditary factor is of the greatest significance. Some of the literature also describes an occupational toxic/teratogenic factor in the etiology of the disease: parents of affected children having worked in uranium mines and inhaled radon. (Some Western historical and genetic researchers advance the version that in one region of the world the disease arose partly as a consequence of the "Long Walk of the Navajo", the forced deportation of American Indians by the US military and authorities from Arizona to New Mexico, as a result of which the Navajo people fell into a "crowded genetic trap". This was allegedly aggravated by unemployment in the region during the Cold War and the nuclear standoff between the USA and the USSR, which forced many members of the tribe to work in uranium mines, often with no protection whatever from the harmful effects of radiation; the US authorities allegedly knew of the danger of this work but took no measures to protect the miners and their families from inhaling radon, which allegedly led to a large number of children born with the altered gene and this anomaly in the tribe, specifically on that reservation.) The hereditary defect consists in the absence or low activity of the enzymes that repair the damage done to skin cells by ultraviolet radiation. As a result of the mutation, the patient's DNA-repair proteins become inactive, and with every injury, for example ultraviolet exposure, the number of defective DNA molecules grows. The damage accumulates and in time leads to skin cancer. Two kinds of defect have been studied.
In one of them, in addition to high sensitivity to UV radiation, patients also show heightened sensitivity to ionizing radiation. In both cases the result is disturbed pigmentation and keratinization of the skin, atrophic changes of the epidermis and dystrophy of connective-tissue fibers, and ultimately cellular atypia and malignant transformation.
Clinical picture
Clinically, xeroderma passes through three stages. The first stage is seen in children two to three years of age (rarely later), usually in the spring-summer period after exposure to the sun. On exposed areas of skin (face, neck, forearms, hands) a persistent inflammatory reaction arises, characterized by macules, desquamation and the subsequent development of uneven hyperpigmentation of the lentigo or freckle type. Each repeated exposure intensifies these manifestations.
The clinical manifestations of the second stage become pronounced after several years. It is characterized by areas of skin atrophy of varying size and shape, spider veins and uneven pigmentation, which together give the skin a mottled appearance. Externally the skin closely resembles chronic radiation dermatitis. Warty growths, crusts, fissures, ulcerations and spotting may be observed on individual areas. Not only the skin suffers, but also cartilage and connective tissue: the auricles and natural orifices (nasal passages, mouth) become deformed, and the nasal cartilage thins. In addition, ectropion, blepharitis, ulceration of the mucous membrane of the eyelids, loss and impaired regrowth of the eyelashes, clouding of the cornea, lacrimation and photophobia are noted.
The third stage of the disease develops in adolescence or early adulthood. It is characterized by the appearance, within the lesions, of benign and malignant tumors (fibromas, keratomas, angiomas, basal cell carcinomas, melanomas, etc.). The foci of warty growths are distinguished by a high degree of malignancy and by metastasis to internal organs. The prognosis is unfavorable: two thirds of patients die before the age of 15.
Xerodermic idiocy (De Sanctis–Cacchione syndrome) is the most severe form of xeroderma pigmentosum. It is characterized by pronounced disorders of the central nervous system against the background of the skin manifestations. Microcephaly, underdevelopment of the pituitary gland and cerebellum, idiocy, pareses, convulsions, and coordination and reflex disorders are noted. Beyond these manifestations, growth retardation, delayed sexual development and hearing loss are typical of xerodermic idiocy.
In most cases the diagnosis rests on clinical findings: the link between the disease and sun exposure, and lesions of exposed areas of skin (pigmentation and spider veins) with subsequent malignant transformation.
Treatment and prevention
The early stages of the disease are treated on an outpatient basis by a dermatologist. Synthetic antimalarial drugs (such as chingamin) are recommended, as they reduce the skin's sensitivity to sunlight. Vitamins A, PP and the B group are indicated. Corticosteroid ointments are used locally, with cytostatic ointments on areas of warty growths. Photoprotective creams and ointments (5% quinine, 10% salol, etc.) are also applied. If a tumor develops, the patient is treated by an oncologist.
In De Sanctis–Cacchione syndrome, treatment is carried out under the supervision of a neurologist in a specialized hospital.
No primary prevention has been developed. To slow malignant transformation, patients must remain under regular observation by a dermatologist and an oncologist and, when necessary, by an ophthalmologist and a neurologist. Measures to protect exposed parts of the body from irradiation are important (patients are advised to wear wide-brimmed hats and gloves, carry parasols, and use photoprotective creams). If warty growths appear, it is advisable to remove them surgically as early as possible to forestall malignant transformation.
Documentaries
In popular culture
Fictional characters with xeroderma:
Ann and Nicholas, the children of the protagonist of the film The Others, directed by Alejandro Amenábar.
Christopher Snow, the protagonist of Dean Koontz's Moonlight Bay Trilogy.
Luke, a character in Scarlett Thomas's novel Going Out.
Kaoru, the protagonist of the film Midnight Sun, directed by Norihiro Koizumi.
Rick Clayton, the protagonist of the film The Dark Side of the Sun, directed by Božidar Nikolić.
Ethan, the nephew of the protagonist of Jodi Picoult's novel Second Glance.
Romain, the protagonist of the film La Permission de minuit, directed by Delphine Gleize.
One of the secondary characters in the series Ultraviolet.
The girl in Angela Johnson's novel A Cool Moonlight.
Hervé, a character in Victoria Platova's novel Anuk, mon amour…
Katie Price, the protagonist of the film Midnight Sun, directed by Scott Speer.
See also
Radiation sickness
Notes
Literature
Diseases of the skin and its appendages
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 2,377
|
Q: SQL Limit min and max value in database
CREATE TABLE TBL_CD(
    CDnr int identity(1,1),
    CDTitel nvarchar(80) NOT NULL,
    CDduur int,
    CDprijs smallmoney
);
So I am creating this table. Is there any way I can limit the value of CDprijs to be between 0 and 100?
A: Add a check constraint:
CREATE TABLE TBL_CD(
    CDnr int identity(1,1),
    CDTitel nvarchar(80) NOT NULL,
    CDduur int,
    CDprijs smallmoney,
    CHECK (CDprijs BETWEEN 0 AND 100)
);
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 9,747
|
Rajské zahrady ("Paradise Gardens") is a documentary series of Czech Television about historical parks and gardens in the Czech Republic. It was filmed in two cycles. The first was broadcast from 4 October to 27 December 2009 and was presented by the actress Zuzana Slavíková. The second cycle was broadcast from 2 January to 10 April 2010 and was presented by the actors Arnošt Goldflam and Josef Polášek.
Episode overview
Rajské zahrady I.
Rajské zahrady II.
External links
Television travelogues
Czech Television series
Television series broadcast from 2009
Television series broadcast until 2010
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 6,309
|
{"url":"https:\/\/math.stackexchange.com\/questions\/2745152\/using-stokes-theorem-twice","text":"# Using Stokes theorem twice\n\nLet $\\partial S$ be the closed, piece wise smooth curve that travels from $(0,0,0)$, $(2,0,4)$, $(3,2,6)$, $(1,2,2)$ and then back to $(0,0,0)$ in precisely that order. Let $S$ be the planar region with boundary $\\partial S$ (this region is necessarily contained in the plane $z = 2x$). Use Stokes' theorem to calculate the line integral $$\\oint_{\\partial S} \\mathbf{F} \\cdot \\mathrm{d}r$$ where $\\mathbf{F}$ is the vectorfield $\\mathbf{F}(x,y,z) = z (\\cos x) \\cdot \\mathbf{i} + x^2 y z \\cdot \\mathbf{j} + yz \\cdot \\mathbf{k}$.\n\nSimply using Stokes we can parameterize the surface as $$\\mathbf{r}(u,v) = (2u + v, 2v, 4u + 2v) \\ , \\qquad (u,v) \\in [0,1]^2$$ Some very tedious calculations give then $$\\oint_{\\partial S} \\mathbf{F} \\cdot \\mathrm{d}r \\stackrel{\\text{Stokes'}}{=} \\iint_{S} \\mathrm{curl}\\,\\mathbf{F} \\,\\mathrm{d}S = \\iint_{[0,1]^2} \\mathrm{curl}\\,\\mathbf{F}(\\mathbf{r}(u,v)) \\cdot \\left( \\frac{\\partial \\mathrm{r}}{\\partial u}\\times\\frac{\\partial \\mathrm{r}}{\\partial v}\\right) \\mathrm{d}(u,v) = 52$$ I was hoping there was a quicker way to do this. Plotting the function in the $xz$-plane it looks like this.\n\nIs there any reason why I can not use Stokes backwards and instead calculate the line integral over this simpler curve? I should be able to take the projection of S, down into the $xz$-plane as is still a piecewise-smooth boundary to $S$. Any other simpler way would also be appreciated.\n\n\u2022 Unlike your map of the $z=2x$ plane to the $u$-$v$ plane, this map is singular. \u2013\u00a0amd Apr 19 '18 at 23:11\n\u2022 The other way is to simply take the required line integrals, which involve parameterizing each leg of the journey, substituting that data into F, taking the required scalar products then summing the results....which takes longer than calculating the flux of the curl. 
\u2013\u00a0Triatticus Apr 19 '18 at 23:12","date":"2019-08-19 10:17:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9307054877281189, \"perplexity\": 201.41675633400573}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027314721.74\/warc\/CC-MAIN-20190819093231-20190819115231-00174.warc.gz\"}"}
| null | null |
class HuntsController < ApplicationController
before_action :set_hunt, only: [:show, :edit, :destroy, :update]
def index
# Avoid the inline `rescue` modifier, which silently swallows all errors.
@hunts = current_user ? current_user.hunts : Hunt.active
end
def new
@hunt = Hunt.new
@locations = current_user.locations
end
def show
redirect_to hunt_teams_path(hunt_id: @hunt.id) unless current_user
end
def create
@hunt = current_user.hunts.new(hunt_params)
if @hunt.save
redirect_to hunt_path(@hunt), notice: "well done"
else
# A `notice:` option passed to render is ignored; use flash.now instead.
flash.now[:notice] = "Hunt could not be created."
render :new
end
end
def edit
@locations = current_user.locations
end
def update
if @hunt.update(hunt_params) # update_attributes is deprecated in favor of update
redirect_to hunt_path(@hunt)
else
flash.now[:notice] = "Hunt could not be updated. Try again."
render :edit
end
end
def destroy
if @hunt.destroy
redirect_to hunts_path
else
redirect_to hunts_path, notice: "Hunt could not be destroyed. Try again."
end
end
private
def set_hunt
@hunt = Hunt.find(params[:id])
end
def hunt_params
params.require(:hunt).permit(:name, master_path: [])
end
end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,199
|
<?xml version="1.0" encoding="ascii"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>api_shim.core.http.HttpClient</title>
<link rel="stylesheet" href="epydoc.css" type="text/css" />
<script type="text/javascript" src="epydoc.js"></script>
</head>
<body bgcolor="white" text="black" link="blue" vlink="#204080"
alink="#204080">
<!-- ==================== NAVIGATION BAR ==================== -->
<table class="navbar" border="0" width="100%" cellpadding="0"
bgcolor="#a0c0ff" cellspacing="0">
<tr valign="middle">
<!-- Tree link -->
<th> <a
href="module-tree.html">Trees</a> </th>
<!-- Index link -->
<th> <a
href="identifier-index.html">Indices</a> </th>
<!-- Help link -->
<th> <a
href="help.html">Help</a> </th>
<th class="navbar" width="100%"></th>
</tr>
</table>
<table width="100%" cellpadding="0" cellspacing="0">
<tr valign="top">
<td width="100%">
<span class="breadcrumbs">
Package api_shim ::
<a href="api_shim.core-module.html">Package core</a> ::
<a href="api_shim.core.http-module.html">Module http</a> ::
Class HttpClient
</span>
</td>
<td>
<table cellpadding="0" cellspacing="0">
<!-- hide/show private -->
<tr><td align="right"><span class="options">[<a href="javascript:void(0);" class="privatelink"
onclick="toggle_private();">hide private</a>]</span></td></tr>
<tr><td align="right"><span class="options"
>[<a href="frames.html" target="_top">frames</a
>] | <a href="api_shim.core.http.HttpClient-class.html"
target="_top">no frames</a>]</span></td></tr>
</table>
</td>
</tr>
</table>
<!-- ==================== CLASS DESCRIPTION ==================== -->
<h1 class="epydoc">Class HttpClient</h1><p class="nomargin-top"><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient">source code</a></span></p>
<pre class="base-tree">
object --+
|
<a href="api_shim.core.ssl_support.SSLSupport-class.html">ssl_support.SSLSupport</a> --+
|
object --+
|
<a href="api_shim.core.ssl_support.ClientSSLSupport-class.html">ssl_support.ClientSSLSupport</a> --+
|
object --+ |
| |
<a href="api_shim.core.tcp_support.TCPSupport-class.html">tcp_support.TCPSupport</a> --+
|
object --+
|
<strong class="uidshort">HttpClient</strong>
</pre>
<hr />
<p>An HTTP client. A client maintains a pool of connections to a specific
host, at a specific port. The HTTP connections can act as pipelines for
HTTP requests. It is used as a factory for HttpClientRequest instances
which encapsulate the actual HTTP requests. It is also used as a factory
for HTML5 WebSockets.</p>
<!-- ==================== INSTANCE METHODS ==================== -->
<a name="section-InstanceMethods"></a>
<table class="summary" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr bgcolor="#70b0f0" class="table-header">
<td colspan="2" class="table-header">
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr valign="top">
<td align="left"><span class="table-header">Instance Methods</span></td>
<td align="right" valign="top"
><span class="options">[<a href="#section-InstanceMethods"
class="privatelink" onclick="toggle_private();"
>hide private</a>]</span></td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#__init__" class="summary-sig-name">__init__</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">**kwargs</span>)</span></td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.__init__">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#exception_handler" class="summary-sig-name">exception_handler</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
Set the exception handler.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.exception_handler">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a name="get_max_pool_size"></a><span class="summary-sig-name">get_max_pool_size</span>(<span class="summary-sig-arg">self</span>)</span><br />
The maximum number of connections this client will pool.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_max_pool_size">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#set_max_pool_size" class="summary-sig-name">set_max_pool_size</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">val</span>)</span><br />
Set the maximum pool size.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_max_pool_size">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a name="get_keep_alive"></a><span class="summary-sig-name">get_keep_alive</span>(<span class="summary-sig-arg">self</span>)</span><br />
Return whether the client uses keep-alive.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_keep_alive">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#set_keep_alive" class="summary-sig-name">set_keep_alive</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">val</span>)</span><br />
If val is true then, after the request has ended, the connection will
be returned to the pool, where it can be used by another request.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_keep_alive">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a name="get_port"></a><span class="summary-sig-name">get_port</span>(<span class="summary-sig-arg">self</span>)</span><br />
Return the port the client will attempt to connect to.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_port">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#set_port" class="summary-sig-name">set_port</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">val</span>)</span><br />
Set the port that the client will attempt to connect to on the
server.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_port">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a name="get_host"></a><span class="summary-sig-name">get_host</span>(<span class="summary-sig-arg">self</span>)</span></td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_host">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#set_host" class="summary-sig-name">set_host</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">val</span>)</span><br />
Set the host name or IP address that the client will attempt to
connect to on the server.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_host">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a name="get_verify_host"></a><span class="summary-sig-name">get_verify_host</span>(<span class="summary-sig-arg">self</span>)</span></td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_verify_host">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#set_verify_host" class="summary-sig-name">set_verify_host</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">val</span>)</span><br />
If set then the client will try to validate the remote server's certificate
hostname against the requested host.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_verify_host">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#connect_web_socket" class="summary-sig-name">connect_web_socket</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
Attempt to connect an HTML5 websocket to the specified URI.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.connect_web_socket">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#get_now" class="summary-sig-name">get_now</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>,
<span class="summary-sig-arg">**headers</span>)</span><br />
This is a quick version of the get method for when you do not want to do
anything with the request before sending it.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_now">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#options" class="summary-sig-name">options</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP OPTIONS request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.options">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#get" class="summary-sig-name">get</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP GET request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#head" class="summary-sig-name">head</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP HEAD request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.head">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#post" class="summary-sig-name">post</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP POST request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.post">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#put" class="summary-sig-name">put</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP PUT request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.put">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#delete" class="summary-sig-name">delete</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP DELETE request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.delete">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#trace" class="summary-sig-name">trace</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP TRACE request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.trace">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#connect" class="summary-sig-name">connect</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP CONNECT request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.connect">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#patch" class="summary-sig-name">patch</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP PATCH request with the specified uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.patch">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#request" class="summary-sig-name">request</a>(<span class="summary-sig-arg">self</span>,
<span class="summary-sig-arg">method</span>,
<span class="summary-sig-arg">uri</span>,
<span class="summary-sig-arg">handler</span>)</span><br />
This method returns an HttpClientRequest instance which represents an
HTTP request with the specified method and uri.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.request">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr>
<td><span class="summary-sig"><a href="api_shim.core.http.HttpClient-class.html#close" class="summary-sig-name">close</a>(<span class="summary-sig-arg">self</span>)</span><br />
Close the client.</td>
<td align="right" valign="top">
<span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.close">source code</a></span>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td colspan="2" class="summary">
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.ssl_support.ClientSSLSupport-class.html">ssl_support.ClientSSLSupport</a></code></b>:
<code><a href="api_shim.core.ssl_support.ClientSSLSupport-class.html#get_trust_all">get_trust_all</a></code>,
<code><a href="api_shim.core.ssl_support.ClientSSLSupport-class.html#set_trust_all">set_trust_all</a></code>
</p>
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.ssl_support.SSLSupport-class.html">ssl_support.SSLSupport</a></code></b>:
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#get_key_store_password">get_key_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#get_key_store_path">get_key_store_path</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#get_ssl">get_ssl</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#get_trust_store_password">get_trust_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#get_trust_store_path">get_trust_store_path</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#set_key_store_password">set_key_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#set_key_store_path">set_key_store_path</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#set_ssl">set_ssl</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#set_trust_store_password">set_trust_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#set_trust_store_path">set_trust_store_path</a></code>
</p>
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.tcp_support.TCPSupport-class.html">tcp_support.TCPSupport</a></code></b>:
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_receive_buffer_size">get_receive_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_reuse_address">get_reuse_address</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_send_buffer_size">get_send_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_so_linger">get_so_linger</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_tcp_keep_alive">get_tcp_keep_alive</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_tcp_no_delay">get_tcp_no_delay</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_traffic_class">get_traffic_class</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#get_use_pooled_buffers">get_use_pooled_buffers</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_receive_buffer_size">set_receive_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_reuse_address">set_reuse_address</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_send_buffer_size">set_send_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_so_linger">set_so_linger</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_tcp_keep_alive">set_tcp_keep_alive</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_tcp_no_delay">set_tcp_no_delay</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_traffic_class">set_traffic_class</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#set_use_pooled_buffers">set_use_pooled_buffers</a></code>
</p>
</td>
</tr>
</table>
<!-- ==================== CLASS VARIABLES ==================== -->
<a name="section-ClassVariables"></a>
<table class="summary" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr bgcolor="#70b0f0" class="table-header">
<td colspan="2" class="table-header">
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr valign="top">
<td align="left"><span class="table-header">Class Variables</span></td>
<td align="right" valign="top"
><span class="options">[<a href="#section-ClassVariables"
class="privatelink" onclick="toggle_private();"
>hide private</a>]</span></td>
</tr>
</table>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<a name="max_pool_size"></a><span class="summary-name">max_pool_size</span> = <code title="property(get_max_pool_size, set_max_pool_size)">property(get_max_pool_size, set_max_pool_size)</code>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<a name="keep_alive"></a><span class="summary-name">keep_alive</span> = <code title="property(get_keep_alive, set_keep_alive)">property(get_keep_alive, set_keep_alive)</code>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<a name="port"></a><span class="summary-name">port</span> = <code title="property(get_port, set_port)">property(get_port, set_port)</code>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<a name="host"></a><span class="summary-name">host</span> = <code title="property(get_host, set_host)">property(get_host, set_host)</code>
</td>
</tr>
<tr>
<td width="15%" align="right" valign="top" class="summary">
<span class="summary-type"> </span>
</td><td class="summary">
<a name="verify_host"></a><span class="summary-name">verify_host</span> = <code title="property(get_verify_host, set_verify_host)">property(get_verify_host, set_verify_host)</code>
</td>
</tr>
<tr>
<td colspan="2" class="summary">
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.ssl_support.ClientSSLSupport-class.html">ssl_support.ClientSSLSupport</a></code></b>:
<code><a href="api_shim.core.ssl_support.ClientSSLSupport-class.html#trust_all">trust_all</a></code>
</p>
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.ssl_support.SSLSupport-class.html">ssl_support.SSLSupport</a></code></b>:
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#key_store_password">key_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#key_store_path">key_store_path</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#ssl">ssl</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#trust_store_password">trust_store_password</a></code>,
<code><a href="api_shim.core.ssl_support.SSLSupport-class.html#trust_store_path">trust_store_path</a></code>
</p>
<p class="indent-wrapped-lines"><b>Inherited from <code><a href="api_shim.core.tcp_support.TCPSupport-class.html">tcp_support.TCPSupport</a></code></b>:
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#receive_buffer_size">receive_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#reuse_address">reuse_address</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#send_buffer_size">send_buffer_size</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#so_linger">so_linger</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#tcp_keep_alive">tcp_keep_alive</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#tcp_no_delay">tcp_no_delay</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#traffic_class">traffic_class</a></code>,
<code><a href="api_shim.core.tcp_support.TCPSupport-class.html#use_pooled_buffers">use_pooled_buffers</a></code>
</p>
</td>
</tr>
</table>
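The class variables above wire each getter/setter pair into a Python property, so configuration can be written either through the methods or through attribute assignment. A minimal reproduction of that pattern (illustrative only, not the library's source; `ConfigDemo` is a hypothetical name):

```python
class ConfigDemo:
    """Toy class showing the property(get_x, set_x) wiring used above."""

    def __init__(self):
        self._port = 80  # default port, as documented for set_port below

    def get_port(self):
        return self._port

    def set_port(self, val):
        self._port = val

    # Same wiring as the class variables above, e.g.
    # port = property(get_port, set_port)
    port = property(get_port, set_port)

c = ConfigDemo()
c.port = 8080                  # attribute assignment goes through set_port
assert c.get_port() == 8080    # and the getter sees the new value
```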
<!-- ==================== METHOD DETAILS ==================== -->
<a name="section-MethodDetails"></a>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr bgcolor="#70b0f0" class="table-header">
<td colspan="2" class="table-header">
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr valign="top">
<td align="left"><span class="table-header">Method Details</span></td>
<td align="right" valign="top"
><span class="options">[<a href="#section-MethodDetails"
class="privatelink" onclick="toggle_private();"
>hide private</a>]</span></td>
</tr>
</table>
</td>
</tr>
</table>
<a name="__init__"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">__init__</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">**kwargs</span>)</span>
<br /><em class="fname">(Constructor)</em>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.__init__">source code</a></span>
</td>
</tr></table>
<dl class="fields">
<dt>Overrides:
object.__init__
<dd><em class="note">(inherited documentation)</em></dd>
</dt>
</dl>
</td></tr></table>
</div>
<a name="exception_handler"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">exception_handler</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.exception_handler">source code</a></span>
</td>
</tr></table>
<p>Set the exception handler.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>handler</code></strong> - function to be used as the handler</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="set_max_pool_size"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">set_max_pool_size</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">val</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_max_pool_size">source code</a></span>
</td>
</tr></table>
  <p>Set the maximum pool size. The client will maintain up to this number
  of HTTP connections in an internal pool.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
        <li><strong class="pname"><code>val</code></strong> - The maximum number of connections (defaults to 1).</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="set_keep_alive"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">set_keep_alive</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">val</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_keep_alive">source code</a></span>
</td>
</tr></table>
  <p>If val is true then, after the request has ended, the connection will
  be returned to the pool where it can be used by another request. In this
  manner, many HTTP requests can be pipelined over an HTTP connection.
  Keep-alive connections will not be closed until the close method is
  invoked. If val is false then a new connection will be created for each
  request and it will never go in the pool; the connection will be closed
  after the response has been received. Even with no keep-alive, the client
  will not allow more than max_pool_size connections to be created at any
  one time.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>val</code></strong> - The value to use for keep_alive</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
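The keep-alive and max_pool_size semantics described above can be sketched with a small stand-in pool. This is an illustrative model of the documented behaviour, not the library's implementation; `PoolModel` and its methods are hypothetical names:

```python
class PoolModel:
    """Toy model of the keep-alive pooling semantics described above."""

    def __init__(self, max_pool_size=1, keep_alive=True):
        self.max_pool_size = max_pool_size
        self.keep_alive = keep_alive
        self.idle = []          # pooled connections available for reuse
        self.open_count = 0     # connections currently alive

    def acquire(self):
        # With keep-alive, reuse a pooled connection when one is idle.
        if self.keep_alive and self.idle:
            return self.idle.pop()
        # Even without keep-alive, never exceed max_pool_size.
        if self.open_count >= self.max_pool_size:
            raise RuntimeError("max_pool_size reached")
        self.open_count += 1
        return object()  # stand-in for a real connection

    def release(self, conn):
        if self.keep_alive:
            self.idle.append(conn)   # returned to the pool for reuse
        else:
            self.open_count -= 1     # closed after the response

pool = PoolModel(max_pool_size=1, keep_alive=True)
c1 = pool.acquire()
pool.release(c1)
c2 = pool.acquire()
assert c1 is c2                # the same connection was reused
assert pool.open_count == 1    # only one connection was ever created
```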
<a name="set_port"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">set_port</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">val</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_port">source code</a></span>
</td>
</tr></table>
  <p>Set the port that the client will attempt to connect to on the
  server. The default value is 80.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>val</code></strong> - The port value.</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="set_host"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">set_host</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">val</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_host">source code</a></span>
</td>
</tr></table>
  <p>Set the host name or IP address of the server that the client will
  attempt to connect to.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>val</code></strong> - The host name or ip address to connect to.</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="set_verify_host"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">set_verify_host</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">val</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.set_verify_host">source code</a></span>
</td>
</tr></table>
  <p>If set then the client will try to validate the remote server's
  certificate hostname against the requested host. Defaults to true.
  This method should only be used in SSL mode.</p>
  <p>Keyword arguments:</p>
  <dl class="fields">
    <dt>Parameters:</dt>
    <dd><ul class="nomargin-top">
        <li><strong class="pname"><code>val</code></strong> - true if the hostname should be verified</li>
    </ul></dd>
  </dl>
</td></tr></table>
</div>
<a name="connect_web_socket"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">connect_web_socket</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.connect_web_socket">source code</a></span>
</td>
</tr></table>
<p>Attempt to connect an HTML5 websocket to the specified URI. The
connect is done asynchronously and the handler is called with a WebSocket
on success.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to connect the websocket on the host, e.g.
/some/path</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the WebSocket</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="get_now"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">get_now</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>,
<span class="sig-arg">**headers</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get_now">source code</a></span>
</td>
</tr></table>
  <p>This is a quick version of the get method, for use when you do not
  need to do anything with the request before sending it. With this method
  the request is sent immediately. When an HTTP response is received from
  the server the handler is called passing in the response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the GET on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
<li><strong class="pname"><code>headers</code></strong> - A dictionary of headers to pass with the request.</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
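The handler-callback shape used by get_now (and by the other request methods) looks like the sketch below. Since the real client needs a running server and event loop, a stand-in `FakeClient` is used here purely to show the calling pattern; it is not part of the library:

```python
class FakeResponse:
    """Minimal stand-in for an HttpClientResponse."""

    def __init__(self, status_code, body):
        self.status_code = status_code
        self.body = body

class FakeClient:
    """Stand-in for HttpClient: get_now invokes the handler with a response."""

    def get_now(self, uri, handler, **headers):
        # A real client sends the request immediately and calls the
        # handler asynchronously once the response arrives; here we
        # call it synchronously with a canned response.
        handler(FakeResponse(200, "hello from %s" % uri))

results = []
client = FakeClient()
client.get_now("/some/path",
               lambda resp: results.append(resp),
               Accept="text/plain")
assert results[0].status_code == 200
```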
<a name="options"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">options</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.options">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP OPTIONS request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the OPTIONS on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="get"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">get</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.get">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP GET request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the GET on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="head"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">head</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.head">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP HEAD request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the HEAD on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="post"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">post</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.post">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP POST request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the POST on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="put"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">put</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.put">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP PUT request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the PUT on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="delete"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">delete</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.delete">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP DELETE request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the DELETE on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="trace"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">trace</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.trace">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP TRACE request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
        <li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the TRACE on the server.</li>
        <li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="connect"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">connect</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.connect">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP CONNECT request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the CONNECT on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="patch"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">patch</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.patch">source code</a></span>
</td>
</tr></table>
<p>This method returns an HttpClientRequest instance which represents an
HTTP PATCH request with the specified uri. When an HTTP response is
received from the server the handler is called passing in the
response.</p>
<p>Keyword arguments:</p>
<dl class="fields">
<dt>Parameters:</dt>
<dd><ul class="nomargin-top">
<li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the PATCH on the server.</li>
<li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
<a name="request"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">request</span>(<span class="sig-arg">self</span>,
<span class="sig-arg">method</span>,
<span class="sig-arg">uri</span>,
<span class="sig-arg">handler</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.request">source code</a></span>
</td>
</tr></table>
  <p>This method returns an HttpClientRequest instance which represents an
  HTTP request with the specified method and uri. When an HTTP response is
  received from the server the handler is called passing in the
  response.</p>
  <p>Keyword arguments:</p>
  <dl class="fields">
    <dt>Parameters:</dt>
    <dd><ul class="nomargin-top">
        <li><strong class="pname"><code>method</code></strong> - The HTTP method. Can be one of OPTIONS, HEAD, GET, POST, PUT, DELETE, TRACE, CONNECT.</li>
        <li><strong class="pname"><code>uri</code></strong> - A relative URI where to perform the request on the server.</li>
        <li><strong class="pname"><code>handler</code></strong> - The handler to be called with the HttpClientResponse</li>
</ul></dd>
</dl>
</td></tr></table>
</div>
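Each verb-specific convenience method documented above (get, post, put, and so on) is equivalent to calling request with the matching method name. A minimal dispatcher illustrating that equivalence (an illustrative model; `RequestModel` is a hypothetical name, not the library class):

```python
ALLOWED_METHODS = {"OPTIONS", "HEAD", "GET", "POST", "PUT",
                   "DELETE", "TRACE", "CONNECT", "PATCH"}

class RequestModel:
    """Toy client whose verb helpers all delegate to request()."""

    def request(self, method, uri, handler):
        if method not in ALLOWED_METHODS:
            raise ValueError("unsupported HTTP method: %s" % method)
        # A real client would build an HttpClientRequest here; we just
        # record what would be sent.
        return (method, uri, handler)

    def get(self, uri, handler):
        return self.request("GET", uri, handler)

    def post(self, uri, handler):
        return self.request("POST", uri, handler)

handler = lambda resp: None
m = RequestModel()
assert m.get("/items", handler) == ("GET", "/items", handler)
assert m.post("/items", handler)[0] == "POST"
```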
<a name="close"></a>
<div>
<table class="details" border="1" cellpadding="3"
cellspacing="0" width="100%" bgcolor="white">
<tr><td>
<table width="100%" cellpadding="0" cellspacing="0" border="0">
<tr valign="top"><td>
<h3 class="epydoc"><span class="sig"><span class="sig-name">close</span>(<span class="sig-arg">self</span>)</span>
</h3>
</td><td align="right" valign="top"
><span class="codelink"><a href="api_shim.core.http-pysrc.html#HttpClient.close">source code</a></span>
</td>
</tr></table>
<p>Close the client. Any unclosed connections will be closed.</p>
<dl class="fields">
</dl>
</td></tr></table>
</div>
<br />
<!-- ==================== NAVIGATION BAR ==================== -->
<table class="navbar" border="0" width="100%" cellpadding="0"
bgcolor="#a0c0ff" cellspacing="0">
<tr valign="middle">
<!-- Tree link -->
<th> <a
href="module-tree.html">Trees</a> </th>
<!-- Index link -->
<th> <a
href="identifier-index.html">Indices</a> </th>
<!-- Help link -->
<th> <a
href="help.html">Help</a> </th>
<th class="navbar" width="100%"></th>
</tr>
</table>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr>
<td align="left" class="footer">
Generated by Epydoc 3.0.1
on Wed Jul 17 20:24:59 2013
</td>
<td align="right" class="footer">
<a target="mainFrame" href="http://epydoc.sourceforge.net"
>http://epydoc.sourceforge.net</a>
</td>
</tr>
</table>
<script type="text/javascript">
<!--
// Private objects are initially displayed (because if
// javascript is turned off then we want them to be
// visible); but by default, we want to hide them. So hide
// them unless we have a cookie that says to show them.
checkCookie();
// -->
</script>
</body>
</html>
# Is there a way to have MMA simplify (1+x)^p to 1+x^p modulo p, p a prime?

As the title suggests, I want to have Mathematica resolve

    Element[p, Primes]
    PolynomialMod[(1+x)^p, p]

to $1+x^p$. I have a massive symbolic function I would like to reduce in this fashion, and it involves many terms of the form above for two arbitrary primes $p$ and $q$. I can do a replace all with the reductions above, but that'll involve going through the pages of output and manually finding things to replace. I was hoping for something more automatic and elegant. Also, is it possible to have the above reduction occur within a square root?

## 1 Answer

If I'm understanding the documentation for PolynomialMod correctly, it appears to be treating (1+x)^p as a polynomial where the order p term has a coefficient of 1. Hence, that coefficient won't be reduced further modulo anything and MMA just spits back the polynomial you started with.

One simple solution might be to just use ReplaceAll like this:

    (1+x)^p /. Power[1 + y_, q_] -> 1 + Power[y, q]

This can also be extended to square roots (I'm assuming you mean $\sqrt{(1+x)^p} \rightarrow \sqrt{1 + x^p}$), for example:

    {(1 + x)^p, (1 + z)^t + (1 + r)^m, Sqrt[(1 + tasty)^taco]} /. {Sqrt[Power[1 + y_, q_]] -> Sqrt[1 + y^q], Power[1 + y_, q_] -> 1 + y^q}

    (*{1 + x^p, 2 + r^m + z^t, Sqrt[1 + tasty^taco]}*)

As long as you know what the relevant transformations should be, you should be able to use them to define patterns and rules that will give you what you want. This approach won't be able to automate any mathematics behind the scenes if there are other transformations you don't already know.
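The rewrite rule in the answer is justified by the "freshman's dream": for a prime $p$, every binomial coefficient $\binom{p}{k}$ with $0 < k < p$ is divisible by $p$. A quick check in plain Python (ours, independent of Mathematica):

```python
from math import comb

def binomial_expansion_mod(p):
    # Nonzero coefficients of (1 + x)^p reduced modulo p, keyed by power of x.
    coeffs = {k: comb(p, k) % p for k in range(p + 1)}
    return {k: c for k, c in coeffs.items() if c != 0}

# For prime p every middle coefficient C(p, k), 0 < k < p, vanishes mod p,
# so only the constant term and x^p survive: (1 + x)^p == 1 + x^p (mod p).
print(binomial_expansion_mod(7))   # {0: 1, 7: 1}
print(binomial_expansion_mod(11))  # {0: 1, 11: 1}
```

For a composite exponent the middle terms do not all vanish, which is why the identity, and hence the rewrite rule, is specific to primes.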
Troussey is a French commune in the administrative region of Grand Est, in the department of Meuse. It covers an area of 17.23 km².
Communes of Meuse (department)
Employment within a group: the parent company may be considered joint employer along with the French subsidiary
By Mgg Legal
In a recent decision of November 23, 2022, the French Supreme Court reiterated the risks pertaining to the involvement of a parent company in the management of its French subsidiary.
In this case, an employee was hired by a company which was later integrated into an international group. After being made redundant by the French employer company, the employee decided to sue both the French employer company and the group's parent company, arguing that both were liable for the payment of damages for unfair dismissal, given that both should be considered the employee's joint employers.
The main reason for the employee's suing the parent company (which remained in bonis) was that the French subsidiary had filed for bankruptcy, making it more difficult for the employee to challenge the grounds for redundancy and, above all, limiting the amounts that would be paid to the employee by the Public Pay Insurance Fund ("AGS").
The Court of Appeal had upheld the employee's claims and considered that both the parent company and the French subsidiary were joint employers. The parent company challenged that decision before the French Supreme Court.
The Court of Cassation however confirmed the Court of Appeal's decision, stressing again the definition of joint employment: "according to article L.1221-1 of the French Labour Code, notwithstanding the existence of a subordinate relationship, a company belonging to a group may be qualified as joint employer if, beyond the necessary coordination of the business between companies belonging to the same group and the control that such membership may entail, there is a permanent interference by this company in the economic and social management of the employing company, leading to the total loss of autonomy by the latter."
In its decision, the French Supreme Court noted that, in the situation that was put to it:
- the French subsidiary company no longer had any client of its own, the only client being the group;
- the parent company had fully replaced the employer company in HR management;
- the subsidiary company's financial and accounting management was carried out by the parent company.
Groups should be aware of these criteria, which may lead to an increased financial risk if the French subsidiary is deprived of any autonomy in the financial and HR decisions taken.
https://www.courdecassation.fr/decision/637dcb4614982305d4c204cc?search_api_fulltext=20-23.206&op=Rechercher&previousdecisionpage=&previousdecisionindex=&nextdecisionpage=&nextdecisionindex=
\section{Introduction and main results}
Since the pioneering work of R\'enyi \cite{R:1957} and Parry \cite{MR0142719,MR0166332,Par,Par:1979}, an increasing amount of attention has been paid to maps of the unit interval. Their study has provided solutions to practical problems within biology, engineering, information theory and physics. Applications appear in analogue to digital conversion \cite{1011470}, analysis of electroencephalography (EEG) data \cite{Stolz:2017}, data storage \cite{LM:1995}, electronic circuits \cite{BV:2001}, mechanical systems with impacts and friction \cite{MR2015431} and relay systems \cite{MR2001701}.
The concept of topological entropy, now ubiquitous in the study of dynamical systems, was introduced by Adler, Konheim and McAndrews \cite{MR0175106} as a measure of the complexity of a dynamical system and is an invariant under a continuous change of coordinates, called topological conjugation. Bowen \cite{MR0442989} gave a new, but equivalent, definition for a continuous map of a (not necessarily compact) metric space. For our purposes the following formulation, given by Misiurewicz and Szlenk in \cite{MR579440} and consistent with the definition given in \cite{MR0175106}, serves as a definition of the topological
entropy. Let $T$ be a piecewise monotonic interval map, such as a Lorenz map (see Figure~\ref{fig:Lorenz}), the \textsl{topological entropy} $h(T)$ of $T$ is defined by
\begin{align}\label{eq:entropy}
h(T) \coloneqq \lim_{n \to \infty} \frac{1}{n} \ln (\operatorname{Var}(T^{n})),
\end{align}
where $\operatorname{Var}(f)$ denotes the total variation of the function $f$.
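To make the limit in \eqref{eq:entropy} concrete, the following sketch (ours; it uses the doubling map $T(x) = 2x \bmod 1$, for which $h(T) = \ln(2)$, as an assumed example) estimates $\operatorname{Var}(T^{n})$ on a uniform grid:

```python
import math

def doubling(x):
    # The doubling map T(x) = 2x mod 1; a Lorenz map with entropy ln(2).
    return (2.0 * x) % 1.0

def variation_of_iterate(f, n, grid=2**14):
    # Estimate Var(f^n) by summing the absolute increments of f^n
    # over a uniform grid of the unit interval.
    ys = []
    for i in range(grid + 1):
        y = i / grid
        for _ in range(n):
            y = f(y)
        ys.append(y)
    return sum(abs(b - a) for a, b in zip(ys, ys[1:]))

# (1/n) ln Var(T^n) should approach ln(2) ~ 0.6931 as n grows.
for n in (2, 4, 8):
    print(n, math.log(variation_of_iterate(doubling, n)) / n)
```

The convergence is slow (the estimate overshoots $\ln(2)$ by roughly a factor $(n+1)/n$ because downward jumps also contribute to the variation), but the trend towards $\ln(2)$ is already visible for small $n$.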
The problem of comparing the topological entropies of two smooth interval maps which are close to each other, in a suitable sense, has been extensively studied in, for instance, \cite{MR1336987,MR3264762,MR1326374,MR1410784,MR1181083,MR1736945}. This problem has also been studied in the setting of piecewise linear maps with one increasing branch and one decreasing branch, see for example \cite{MR3035348}. We consider this problem for Lorenz maps, a class of interval maps with a single discontinuity and two increasing branches. These maps play an important role in the study of the global dynamics of families of vector fields near homoclinic bifurcations, see \cite{MR556582,MR2178223,MR2217319,MR481632,MR1766514,MR1773551} and references therein.
Lorenz maps and their topological entropy have been and still are investigated intensely, see for instance \cite{BHV,MR3035348,MR1336987,MR3264762,G,GH,GS,H,HR,HS,MR1410784,LSS,MR1181083,MR1736945,SSV}. The simplest example of a Lorenz map is a (normalised) \mbox{$\beta$-transformation}, and the topological entropy of such a transformation is equal to $\ln(\beta)$; this was first shown in \cite{H,Par}. However, for a general Lorenz map the question of determining the topological entropy is much more complicated. In \cite{G} Glendinning showed that every Lorenz map is semi-conjugate to an intermediate $\beta$-transformation and gave a criterion, in terms of kneading sequences, for when the semi-conjugacy is a conjugacy; see also \cite{MR3411531,BHV,DS:2004}. Note, this criterion turns out to be equivalent to topological transitivity.
\begin{figure}
\includegraphics[height=9.5em]{Lorenz_maps.pdf}
\caption{A pair $(f_0, f_1)$ of branch functions and an associated Lorenz map.}
\label{fig:Lorenz}
\end{figure}
\begin{defn}\label{def:Lorenz}
Let $0 < a \leq p \leq b < 1$. An \textsl{upper}, or \textsl{lower}, \textsl{Lorenz map} is a map $T^+ \colon [0,1] \to [0, 1]$, respectively $T^- \colon [0,1] \to [0, 1]$, of the form
\begin{align*}
T^+(x) \coloneqq \begin{cases}
f_0(x) &\text{if} \; 0 \leq x < p, \\
f_1(x) &\text{if} \; p \leq x \leq 1,
\end{cases}
\quad \text{respectively} \quad
T^-(x) \coloneqq \begin{cases}
f_0(x) &\text{if} \; 0 \leq x \leq p, \\
f_1(x) &\text{if} \; p < x \leq 1.
\end{cases}
\end{align*}
where $f_0$ and $f_1$, called the \textsl{branch functions}, satisfy the following conditions.
\begin{enumerate}[leftmargin=2.5em]
\item\label{defn:part:Lorenz1} The functions $f_0 \colon [0,b] \to [0,1]$ and $f_1 \colon [a,1] \to [0,1]$ are continuous, strictly increasing and surjective.
\item\label{defn:part:Lorenz2} There exist constants $C, c > 1$ with $C^{-1}\lvert x-y \rvert \leq \lvert f_i^{-1}(x) - f^{-1}_i(y) \rvert \leq c^{-1} \lvert x-y \rvert$ for $i \in \{ 0,1 \}$ and $x, y \in [0,1]$.
\end{enumerate}
\end{defn}
When we wish to emphasis the point of discontinuity, we write $T^{\pm}_{p}$ for $T^{\pm}$. Further, by definition, we have $h(T^{+}_{p}) = h(T^{-}_{p})$, and hence, for ease of notation, we let $h(T_{p})$ denote this common value. Further, for a fixed pair of branch functions, a direct consequence of \eqref{eq:entropy} is that
\begin{align}\label{eq:entropy_lower_bound}
h(T_{p}) \geq \ln (C)
\end{align}
for all $p \in (a, b)$, where $a, b$ and $C$ are as in Definition~\ref{def:Lorenz}.
The main result of this article is the following.
\begin{theorem}\label{thm:continuity-of-entropy}
For a fixed pair of branch functions, $p \mapsto h(T_p)$ is continuous.
\end{theorem}
This paper is arranged as follows. In Section~\ref{sec:gensetup} we provide necessary definitions and preliminary results required for the proof of Theorem~\ref{thm:continuity-of-entropy}. Section~\ref{sec:proof} is dedicated to the proof of Theorem~\ref{thm:continuity-of-entropy} and in Section~\ref{sec:affine_Lorenz} we include a discussion pertaining to Milnor's monotonicity conjecture in the setting of affine Lorenz maps.
\section{General setup}\label{sec:gensetup}
Throughout we use the convention that $\pm$ means either $+$ or $-$ and when we write $T^{\pm}_{p}$, we require that both $T_{p}^{+}$ and $T_{p}^{-}$ are defined using the same branch functions.
The set of all infinite words over the alphabet $\{0, 1 \}$ is denoted by $\Omega$ and is equipped with the discrete product topology. For $n \in \mathbb{N}$, define $\Omega_{n}$ to be the set of finite words over the alphabet $\{ 0, 1\}$ of length $n$, and set $\Omega^{*} \coloneqq \bigcup_{n \in \mathbb{N}_{0}} \Omega_{n}$, where by convention $\Omega_{0}$ is the set containing only the \textsl{empty word} $\varnothing$. For $\omega = \omega_{0} \cdots \omega_{k}$ and $v = v_{0} \cdots v_{n} \in \Omega^{*}$, we set $\omega v \coloneqq \omega_{0} \cdots \omega_{k} v_{0} \cdots v_{n}$, that is the \textsl{concatenation} of $\omega$ and $v$, and let $\overline{v} \coloneqq v v v \cdots$. The \textsl{length} of $v \in \Omega^{*}$ is denoted by $\lvert v \rvert$ with $\lvert \varnothing \rvert=0$ and,
for a natural number $k \leq \lvert v \rvert$, we set $v\lvert_{k} \coloneqq v_{0} \cdots v_{k-1}$. We use the same notations when $v$ is an infinite word.
The continuous map $S \colon \Omega \to \Omega$ defined by $S( \omega_{0} \omega_{1} \cdots ) \coloneqq \omega_{1} \omega_{2} \cdots$ is called the \mbox{\textsl{left-shift}}. We also allow for $S$ to act on finite words as follows. For $k \in \mathbb{N}_{0}$ and $v = v_{0} \cdots v_{k} \in \Omega^{*}$, we set $S(v) = v_{1} \cdots v_{k}$, if $k \geq 1$ and $S(v) = \varnothing$ otherwise.
The \textsl{upper} and \textsl{lower itinerary maps} $\tau_{p}^{\pm} \colon [0,1] \to \Omega$ encode the orbit of a point $x \in [0, 1]$ under $T_p^\pm$ and are given by $\tau_{p}^{+}(x) = \omega_{0} \omega_{1} \cdots$ and $\tau_{p}^{-}(x) = v_{0} v_{1} \cdots$ where
\begin{align*}
\omega_{k} \coloneqq \begin{cases}
0 & \text{if} \; (T^{+}_{p})^{k}(x) < p,\\
1 & \text{if} \; (T^{+}_{p})^{k}(x) \geq p,
\end{cases}
\quad \text{and} \quad
v_{k} \coloneqq
\begin{cases}
0 & \text{if} \; (T^{-}_{p})^{k}(x) \leq p,\\
1 & \text{if} \; (T^{-}_{p})^{k}(x) > p.
\end{cases}
\end{align*}
Here, for $n \in \mathbb{N}$, we denote by $(T^{\pm}_{p})^{n}$ the $n$-fold composition of $T^{\pm}_{p}$ with itself where $(T^{\pm}_{p})^{0}$ is set to be the identity map. The infinite words $\alpha \coloneqq \tau_{p}^{-}(p)$ and $\beta \coloneqq \tau_{p}^{+}(p)$ are called the \textsl{kneading sequences} of $T^{\pm}_{p}$.
We say that $\tau_{p}^{\pm}(p)$ is periodic if there exists $n \in \mathbb{N}$ such that $(T_{p}^{\pm})^{n}(p) = p$, and the \textsl{period} of $\tau_{p}^{\pm}(p)$ is the smallest $n \in \mathbb{N}$ for which this holds. If $\tau_{p}^{\pm}(p)$ are periodic, then there exists $v, \omega \in \Omega^{*}$ such that $\tau_{p}^{+}(p) = \overline{v}$ and $\tau_{p}^{-}(p) = \overline{\omega}$.
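These itineraries are simple to compute in practice. The following sketch (ours; the doubling map $f_{0}(x) = 2x$, $f_{1}(x) = 2x - 1$ with $p = 1/2$ serves as an assumed example) produces the first $n$ symbols of $\tau_{p}^{\pm}(x)$:

```python
def itinerary(f0, f1, p, x, n, upper=True):
    # First n symbols of tau_p^+ (upper=True) or tau_p^- (upper=False) at x:
    # the upper map assigns 1 when x >= p, the lower map when x > p.
    word = []
    for _ in range(n):
        if (x >= p) if upper else (x > p):
            word.append(1)
            x = f1(x)
        else:
            word.append(0)
            x = f0(x)
    return word

f0 = lambda x: 2 * x        # left branch of the doubling map
f1 = lambda x: 2 * x - 1    # right branch of the doubling map

print(itinerary(f0, f1, 0.5, 0.5, 6, upper=True))   # beta  = [1, 0, 0, 0, 0, 0]
print(itinerary(f0, f1, 0.5, 0.5, 6, upper=False))  # alpha = [0, 1, 1, 1, 1, 1]
```

In this example the two kneading sequences are $\beta = 1\overline{0}$ and $\alpha = 0\overline{1}$, illustrating how $\tau_{p}^{+}$ and $\tau_{p}^{-}$ differ only through the convention at the discontinuity.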
\begin{lemma}[{\cite{BHV,BarnsleyMihalache}}]\label{thm:always-continuous}
The maps $x \mapsto \tau_x^{\pm}(x)$ are strictly increasing. Additionally, $x \mapsto \tau_x^+(x)$ is right-continuous and $x \mapsto \tau_x^-(x)$ is left-continuous.
\end{lemma}
The following lemma extends this result.
\begin{lemma}\label{thm:nonperiodic-to-continuous}
If $p \neq a$ and $\beta$ is non-periodic, then $x \mapsto \tau_x^+(x)$ is continuous at $p$ and if $p \neq b$ and $\alpha$ is non-periodic, then $x \mapsto \tau_x^-(x)$ is continuous at $p$.
\end{lemma}
\begin{proof}
We prove the first statement, as the proof of the second statement is identical. Fix $\varepsilon>0$ and choose a natural number $N > 2$ with $2^{-N} < \varepsilon$. By definition and Lemma~\ref{thm:always-continuous}, it is sufficient to show there exists $\delta \in (0, p-a)$ such that $\tau_{p-\delta'}^+(p-\delta')\vert_N = \tau_p^+(p)\vert_N$ for all $\delta' \in (0, \delta)$. This means we require $\delta >0$ so that for $n \in \{ 0, 1, \dots, N-1 \}$ and $\delta' \in (0, \delta)$ either
\begin{align}\label{eqn:both-on-the-same-side}
\begin{aligned}
(T_{p-\delta'}^+)^n(p-\delta') < &p-\delta' \; \text{and} \; (T_p^+)^n(p) < p \\
&\quad\text{or}\\
(T_{p-\delta'}^+)^n(p-\delta') \geq &p-\delta' \; \text{and} \; (T_p^+)^n(p) > p.
\end{aligned}
\end{align}
To this end, let $c$ and $C$ be as in Definition~\ref{def:Lorenz} and choose $\delta \in (0, p-a)$ such that
\begin{align}\label{eqn:delta_min_sqeeze}
0 < \delta < \min \left\{ {\frac{\lvert (T^+_p)^k(p) - p \rvert}{C^k}} \colon k \in \{ 1, 2, \dots, N - 1 \} \right\}.
\end{align}
Note, since $p \neq a$, the interval $(0, p-a)$ is non-empty, and since $\beta$ is not periodic, $(T^{+}_{p})^{k}(p) \neq p$ for all $k \in \{1, 2, \dots, N-1\}$, so the minimum on the right-hand side of \eqref{eqn:delta_min_sqeeze} is positive. We claim, for all $n \in \{ 0, 1, \dots, N -1 \}$ and $\delta' \in (0, \delta)$, that
\begin{align}\label{eqn:inductive_close}
\delta' c^n \leq (T_p^+)^n(p) - (T_{p-\delta'}^+)^n(p-\delta') \leq \delta' C^n.
\end{align}
The base case, $n=0$, is immediate. Assume \eqref{eqn:inductive_close} holds for some $n \in \{ 0, 1, \dots, N-1 \}$. If $(T_p^+)^n(p) < p$, then \eqref{eqn:inductive_close} implies $(T_{p-\delta'}^+)^n(p-\delta') \leq (T_p^+)^n(p) -\delta' c^n < p-\delta'$. If $(T_p^+)^n(p) \geq p$, then in fact $(T_p^+)^n(p) > p$, since $\beta$ is not periodic, and so \eqref{eqn:delta_min_sqeeze} implies $(T_p^+)^n(p) - p > \delta' C^n$; combining this with \eqref{eqn:inductive_close} yields $(T_{p-\delta'}^+)^n(p-\delta') \geq (T_{p}^{+})^{n}(p) - \delta' C^{n} > p > p-\delta'$. Therefore, by definition, we have \eqref{eqn:inductive_close} for $n+1$. To complete the proof, notice that \eqref{eqn:inductive_close} implies \eqref{eqn:both-on-the-same-side}.
\end{proof}
In our proof of Theorem~\ref{thm:continuity-of-entropy}, we use the following Laurent series which can be thought of as a generating function of the kneading sequences $\alpha = \tau_{p}^{-}(p) = \alpha_{0} \alpha_{1} \cdots$ and $\beta = \tau_{p}^{+}(p) = \beta_{0} \beta_{1} \cdots$. For $z \in \mathbb{C}\setminus\{0\}$, set
\begin{align}\label{eqn:xi}
\xi_{p}(z) \coloneqq \sum_{k=0}^{\infty}(\beta_{k} - \alpha_{k})z^{-k}.
\end{align}
Observe that the interval $(1, 2)$ belongs to the domain of convergence of $\xi_{p}$. Further, we have the following result, which identifies the maximal zero of $\xi_{p}$ and the value $\gamma = \gamma_{p} \coloneqq \exp(h(T_{p}))$.
\begin{theorem}[{\cite{BHV,GH}}]\label{thm:entropy-is-the-zero}
The topological entropy of $T^{\pm}_p$ is equal to $\ln(r)$, where $r$ is the maximal positive real zero of $\xi_p$ in the interval $(1, 2]$. Additionally, if the maximal zero of $\xi_p$ in the interval $(1, 2]$ is not simple, then this is the only zero of $\xi_p$ in the interval $(1, 2]$.
\end{theorem}
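As a sanity check of Theorem~\ref{thm:entropy-is-the-zero} (this worked example is ours, not taken from the cited sources), consider the doubling map given by $f_{0}(x) = 2x$, $f_{1}(x) = 2x - 1$ and $p = 1/2$. Here $\beta = 1\overline{0}$ and $\alpha = 0\overline{1}$, so

```latex
\xi_{1/2}(z)
= 1 - \sum_{k=1}^{\infty} z^{-k}
= 1 - \frac{z^{-1}}{1 - z^{-1}}
= \frac{1 - 2 z^{-1}}{1 - z^{-1}},
```

whose only zero in $(1, 2]$ is $z = 2$, recovering the classical value $h(T_{1/2}) = \ln(2)$.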
\section{Proof of Theorem~\ref{thm:continuity-of-entropy}}\label{sec:proof}
In Section~\ref{sec:nonperiodic} and Section~\ref{sec:periodic} for a fixed pair of branch functions we prove that the map $p \mapsto h(T_{p})$ is left-continuous; right-continuity follows by an identical argument, see Section~\ref{sec:right-continuity} for further details. The proof of left-continuity is subdivided into two \mbox{sub-cases}: when the kneading sequences of $T_{p}^{\pm}$ are not periodic, and when they are periodic. For each sub-case we use the same approach.
\begin{enumerate}[leftmargin=2.5em]
\item\label{strategy_1} Fix $p \in (a, b)$ and $\varepsilon > 0$ with $(\gamma -\varepsilon, \gamma+\varepsilon) \subseteq (1,2)$.
\item\label{strategy_2} Show there exists $\delta>0$ such that $\xi_{p-\delta}(x)$ has a maximal zero $r \in (\gamma-\varepsilon,\gamma+\varepsilon)$.
\item\label{strategy_3} Show there are no zeros larger than $r$.
\end{enumerate}
With this at hand, Theorem~\ref{thm:entropy-is-the-zero} allows us to conclude that $\ln(r) = h(T_{p-\delta})$. Note, in Step~\eqref{strategy_2} we must take into account the multiplicity of $\gamma$; see Figure~\ref{fig:gamma-nbd}. If $\gamma$ has odd multiplicity, one can appeal to the intermediate value theorem, but more care is required in the case when $\gamma$ has even multiplicity.
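Numerically, Steps~\eqref{strategy_2} and \eqref{strategy_3} amount to locating the largest sign change of a truncation of $\xi_{p}$; the following bisection sketch (ours, with the doubling-map kneading sequences $\beta = 1\overline{0}$, $\alpha = 0\overline{1}$ as an assumed example) illustrates this:

```python
def xi_truncated(beta, alpha, x, n):
    # Partial sum sum_{k < n} (beta_k - alpha_k) * x^{-k} of the series xi_p.
    return sum((b - a) * x ** (-k)
               for k, (b, a) in enumerate(zip(beta[:n], alpha[:n])))

def max_zero(beta, alpha, n=40, lo=1.001, hi=2.0, iters=80):
    # Bisection: for this example the truncated series is negative to the
    # left of its largest zero and positive to the right, so we home in
    # on the sign change.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if xi_truncated(beta, alpha, mid, n) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Doubling-map kneading sequences: beta = 1 0 0 0 ..., alpha = 0 1 1 1 ...
beta = [1] + [0] * 63
alpha = [0] + [1] * 63
print(max_zero(beta, alpha))  # ~2.0, so the entropy is ~ln(2)
```

The truncation error is controlled exactly by the geometric-tail bound used in Lemma~\ref{lem:claim1_non-periodic} below, which is why a modest cut-off $n$ already locates the zero to high accuracy.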
\subsection{Case 1: $\beta$ non-periodic}\label{sec:nonperiodic}
Fix $\varepsilon > 0$ with $(\gamma -\varepsilon, \gamma+\varepsilon) \subseteq (1,2)$. By Lemma~\ref{thm:nonperiodic-to-continuous}, the assumption that $\beta$ is non-periodic ensures $x \mapsto \tau_x^{\pm}(x)$ are left-continuous at $p$.
To prove \eqref{strategy_2}, that is, that there exists $\delta>0$ such that $\xi_{p-\delta}$ has a maximal zero $r \in (\gamma-\varepsilon,\gamma+\varepsilon)$, we replace the infinite sum $\xi_p(x)$ with a partial sum (a polynomial) and approximate $\gamma_{p-\delta}$ by a root of this polynomial. To this end, for $n \in \mathbb{N}$, set
\begin{align*}
R_{p, n}(x) \coloneqq \sum_{k=n}^{\infty}(\beta_{k}-\alpha_{k})x^{-k}
\quad \text{so that} \quad
\xi_{p}(x) = \sum_{k=0}^{n-1}(\beta_{k}-\alpha_{k})x^{-k}+R_{p, n}(x).
\end{align*}
\begin{lemma}\label{lem:claim1_non-periodic}
For $n \in \mathbb{N}$, there exists $\delta > 0$ so that, for $\delta' \in (0, \delta)$ and $x \in (1,2)$,
\begin{align}\label{eqn:claim-bound}
\lvert \xi_{p-\delta'}(x) - \xi_p(x) \rvert \leq \frac{2x^{-n}}{1-x^{-1}}.
\end{align}
\end{lemma}
\begin{proof}
For $m \in \mathbb{N}$, we have $\beta_{m}-\alpha_{m} \in \{-1, 0, 1\}$, whence, for $n \in \mathbb{N}$ and $x \in (1, 2)$,
\begin{align}\label{eqn:Rp-bound}
\lvert R_{p, n}(x) \rvert &\leq \sum_{k=n}^{\infty}x^{-k}=\frac{1}{1-x^{-1}}-\frac{1-x^{-n}}{1-x^{-1}}=\frac{x^{-n}}{1-x^{-1}}.
\end{align}
Let $n \in \mathbb{N}$ be fixed. Since the maps $x \mapsto \tau_x^\pm(x)$ are both left-continuous at $p$, there exists $\delta > 0$ such that, if $\delta' \in (0, \delta)$, then $\tau_{p-\delta'}^{\pm}(p-\delta')\vert_{n} = \tau_{p}^{\pm}(p)\vert_{n}$. As \eqref{eqn:Rp-bound} also holds for $R_{p-\delta', n}(x)$, we have established \eqref{eqn:claim-bound}.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}[b]{0.475\linewidth}
\centering\includegraphics[height=7em]{gamma-nbd_a.pdf}
\subcaption{When $\gamma$ has odd multiplicity.}
\end{subfigure}%
\hspace{1em}
\begin{subfigure}[b]{0.475\linewidth}
\centering\raisebox{1.15em}{\includegraphics[height=6em]{gamma-nbd_b.pdf}}
\subcaption{When $\gamma$ has even multiplicity.}
\end{subfigure}
\caption{The graph of $\xi_p$ together with a neighbourhood in which the graph of $\xi_{p-\delta}$ belongs.}
\label{fig:gamma-nbd}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{thm:continuity-of-entropy}: left-continuity with $\beta$ non-periodic]
Note that $\gamma$ is an isolated zero. Indeed, in its domain of convergence, the function $\xi_{p}$ is holomorphic. Consequently, the existence of a sequence of zeros of $\xi_{p}$ converging to $\gamma$ would imply that $\xi_{p}$ is the constant zero function; a contradiction. Let $\varepsilon > 0$ be fixed such that $(\gamma -\varepsilon, \gamma+\varepsilon) \subseteq (1,2)$ and such that $\xi_{p}$ has a single root in this interval, namely at $\gamma$. Let $C > 1$ be as in Definition~\ref{def:Lorenz} and fix $u \in (1,C)$. The specific value of $u$ is not important, so for convenience set $u = (1+C)/2$, and by \eqref{eq:entropy_lower_bound} and Theorem~\ref{thm:entropy-is-the-zero} the real zeros of $\xi_{q}$ are greater than $u$ for $q \in (a, b)$.
Assume that $\gamma$ has odd multiplicity. Let $n \in \mathbb{N}$ be such that
\begin{align*}
\lvert \xi_{p}(\gamma - \varepsilon) \rvert \geq \frac{2u^{-n}}{1 - u^{-1}}
\quad \text{and} \quad
\lvert \xi_{p}(\gamma + \varepsilon) \rvert \geq \frac{2u^{-n}}{1 - u^{-1}},
\end{align*}
and let $\delta$ be chosen in accordance with Lemma~\ref{lem:claim1_non-periodic}. In which case, for all $\delta' \in (0,\delta)$,
\begin{align*}
\left \lvert \xi_{p-\delta'}(\gamma -\varepsilon) - \xi_{p}(\gamma -\varepsilon)\right \rvert
< \frac{2u^{-n}}{1-u^{-1}}
\;\; \text{and} \;\;
\left \lvert \xi_{p-\delta'}(\gamma +\varepsilon) - \xi_{p}(\gamma +\varepsilon)\right \rvert
< \frac{2u^{-n}}{1-u^{-1}},
\end{align*}
which ensures $\operatorname{sgn}(\xi_{p-\delta'}(\gamma \pm \varepsilon)) = \operatorname{sgn}(\xi_{p}(\gamma \pm \varepsilon))$, respectively. This together with the fact that $\xi_{p}$ is smooth and has a single root in $(\gamma -\varepsilon, \gamma+\varepsilon)$ and an application of the intermediate value theorem yields that $\xi_{p-\delta'}$ has a zero in $(\gamma -\varepsilon, \gamma+\varepsilon)$ for all $\delta' \in (0,\delta)$; see Figure~\ref{fig:gamma-nbd}.
Assume that $\gamma$ has even multiplicity. In this case, $\xi_p(x) > 0$ for all $x \neq \gamma$ with $x \in [u, 2]$, since $\xi_{p}$ is smooth, $\xi_{p}(2) > 0$ and, by Theorem~\ref{thm:entropy-is-the-zero}, the function $\xi_p$ has a single zero in the interval $(1, 2]$. Let $\rho \coloneqq \inf \left\{ \xi_{p}(x)/2 \colon x \in [u, \gamma - \varepsilon] \cup [\gamma + \varepsilon, 2] \right\}$, let $n \in \mathbb{N}$ be such that
\begin{align*}
\frac{2u^{-n}}{1 - u^{-1}} < \rho,
\end{align*}
and let $\delta$ be chosen in accordance with Lemma~\ref{lem:claim1_non-periodic}. In which case, for all $\delta' \in (0,\delta)$ and $x \in [u, \gamma - \varepsilon] \cup [\gamma + \varepsilon, 2]$,
\begin{align*}
\xi_{p-\delta'}(x) \geq \xi_{p}(x) - \frac{2u^{-n}}{1-u^{-1}} > \xi_{p}(x) - \rho > 0
\end{align*}
which ensures that $\xi_{p-\delta'}$ has no zeros in $[u, \gamma - \varepsilon] \cup [\gamma + \varepsilon, 2]$. Therefore, by Theorem~\ref{thm:entropy-is-the-zero} and \eqref{eq:entropy_lower_bound}, namely that the real zeros of $\xi_{q}$ belong to the interval $(u, 2]$ for $q \in (a, b)$, the function $\xi_{p-\delta'}$ necessarily has a zero in the interval $(\gamma -\varepsilon, \gamma+\varepsilon)$.
Therefore, regardless of the multiplicity of $\gamma$, it is necessarily the case that $\xi_{p-\delta'}$ has a zero in $(\gamma -\varepsilon, \gamma+\varepsilon)$ for all $\delta' \in (0, \delta)$, whence Theorem~\ref{thm:entropy-is-the-zero} implies that $\lvert h(T_{p})-h(T_{p-\delta'}) \rvert \leq \varepsilon$ for all $\delta' \in (0, \delta)$, as required.
\end{proof}
\subsection{Case 2: $\beta$ periodic}\label{sec:periodic}
In this section we assume $\beta$ is periodic with period $N$, for some $N \in \mathbb{N}$.
\begin{lemma}\label{lem:claim1}
For $n \in \mathbb{N}$ with $n \geq N$, there exists $\delta > 0$ such that, for all $\delta' \in (0, \delta)$, the concatenation of $\tau^+_{p}(p)\vert_{N}$ and $\tau^-_{p}(p)\vert_{n-N}$ is equal to $\tau^+_{p-\delta'}(p-\delta')\vert_n$, namely
\begin{align*}
\tau^+_{p-\delta'}(p-\delta')\vert_n = (\tau^+_{p}(p)\vert_{N}) ( \tau^-_{p}(p)\vert_{n-N}).
\end{align*}
\end{lemma}
\begin{proof}
Let $n \geq N$ denote a fixed integer. By Lemma~\ref{thm:always-continuous} there exists $\eta > 0$ such that, if $\eta' \in (0, \eta)$, then $(T_{p-\eta'}^+)^{j}(p-\eta') \neq p - \eta'$ for all $j \in \{ 1, 2, \dots, N+n-1 \}$. Using the same arguments as in the proof of Lemma~\ref{thm:nonperiodic-to-continuous}, we may choose $\eta$ small enough so that, in addition to this, if $\eta' \in (0, \eta)$, then $\eta' c^j \leq (T_p^+)^j(p) - (T_{p-\eta'}^+)^j (p-\eta') \leq \eta' C^j$ for all $j \in \{ 0, 1, \dots, N \}$; here $c$ and $C$ are as in Definition~\ref{def:Lorenz}. If $j = N$, then, since $\beta$ is periodic with period $N$ and hence $(T_p^+)^N(p) = p$, this yields $\eta' c^N \leq p-(T_{p-\eta'}^+)^N (p-\eta') \leq \eta' C^N$ for $\eta' \in (0, \eta)$. Since $c > 1$, this implies that $0 < (p-\eta') - (T_{p-\eta'}^+)^N(p-\eta') \leq \eta' (C^N-1)$. By Lemma~\ref{thm:always-continuous}, there exists $\lambda > 0$ so that, for $q \in [a, b]$ with $0 < p - q < \lambda$, we have $\tau_{p}^-(p)\vert_n = \tau_{q}^-(q)\vert_n$. Setting $\delta \coloneqq \min\{ \lambda C^{-N}/2, \eta \}$, if $\delta' \in (0, \delta)$, then $0 < (p-\delta') - (T_{p-\delta'}^+)^N (p-\delta') < \lambda/2$, which implies
\begin{align*}
\tau_{p-\delta'}^-(p-\delta')\vert_n
&= \tau_{p-\delta'}^-((T_{p-\delta'}^+)^N (p-\delta'))\vert_n\\
&= \tau_{p-\delta'}^+((T_{p-\delta'}^+)^N (p-\delta'))\vert_n\\
&= S^N(\tau_{p-\delta'}^+(p-\delta'))\vert_n.
\end{align*}
Therefore, $\tau^+_{p-\delta'}(p-\delta')\vert_n = (\tau^+_{p-\delta'}(p-\delta')\vert_{N} ) ( \tau^-_{p-\delta'}(p-\delta')\vert_{n-N})$ for all $\delta' \in (0, \delta)$. By the same arguments as in the proof of Lemma~\ref{thm:nonperiodic-to-continuous}, if $\delta > 0$ is suitably small, then $\tau^+_{p-\delta'}(p-\delta')\vert_{N}=\tau^+_p(p)\vert_{N}$, and by Lemma~\ref{thm:always-continuous}, for $\delta > 0$ suitably small, $\tau^-_{p-\delta'}(p-\delta')\vert_{n-N}=\tau^-_p(p)\vert_{n-N}$.
\end{proof}
\begin{lemma}\label{lem:claim2}
If $\varepsilon > 0$ and $(\gamma-\varepsilon,\gamma+\varepsilon) \subseteq (1,2)$, then there exists $n \in \mathbb{N}$ such that for the $\delta$ guaranteed by Lemma~\ref{lem:claim1}, we have
\begin{align*}
\lvert (1 - x^{-N}) \xi_{p}(x) - \xi_{p-\delta'}(x) \rvert < \varepsilon,
\end{align*}
for all $x \in (u, 2]$ and $\delta' \in (0, \delta)$, where, as in Section~\ref{sec:nonperiodic}, we set $u = (1+ C)/2$.
\end{lemma}
\begin{proof}
Since $\beta$ is periodic with period $N$, for all $x \in (1, 2]$,
\begin{align*}
\xi_{p}(x)
= \sum_{k=0}^{\infty} \beta_{k}x^{-k} - \sum_{k=0}^{\infty} \alpha_{k}x^{-k}
= \frac{1}{1-x^{-N}} \sum_{k=0}^{N-1} \beta_{k} x^{-k} - \sum_{k=0}^{\infty} \alpha_{k}x^{-k}.
\end{align*}
Let $v \in \mathbb{N}$ be fixed. For $x \in (1, 2)$ set $\eta_{1, v}(x) \coloneqq \sum_{k=Nv}^\infty (\beta_k-\alpha_k) x^{-k}$ and observe
\begin{align}\label{eqn:xi_p-decomposition2}
\xi_{p}(x)
= \frac{1-x^{-Nv}}{1-x^{-N}} \sum_{k=0}^{N-1} \beta_{k} x^{-k}
- \sum_{k=0}^{Nv-1} \alpha_{k} x^{-k}
+ \eta_{1, v}(x).
\end{align}
Note, $\eta_{1, v}(x)$ is bounded by the tail of a geometric series, namely we have that $\lvert \eta_{1, v}(x) \rvert \leq x^{-Nv}/(1-x^{-1})$.
Let $n \geq N(v + 1)$. By Lemma~\ref{lem:claim1}, for $\delta' \in (0, \delta)$, an expansion of $\xi_{p-\delta'}(x)$ similar to \eqref{eqn:xi_p-decomposition2} yields
\begin{align}\label{eqn:xi_(p-d)-decomposition}
\xi_{p-\delta'}(x)
= \sum_{k=0}^{N-1} \beta_{k} x^{-k}
- \sum_{k=0}^{Nv-1} \alpha_{k} x^{-k}
+ \sum_{k=N}^{Nv-1} \alpha_{k-N} x^{-k}
+ \eta_{2, v}^{(n)}(x, \delta').
\end{align}
Here $\eta_{2, v}^{(n)}(\cdot, \delta')$ consists of the remaining terms in the expansion of $\xi_{p-\delta'}(x)$. In \eqref{eqn:xi_(p-d)-decomposition} we have used the observation made in the last line of the proof of Lemma~\ref{lem:claim1}. Also, note the remainder term $\eta_{2, v}^{(n)}(\cdot, \delta')$ is bounded by the tail of a geometric series, namely $\lvert \eta_{2, v}^{(n)}(x, \delta') \rvert \leq x^{-Nv}/(1-x^{-1})$. Combining the above, we have
\begin{align}\label{eqn:xi_p-xi_(p-d)}
\!\!\xi_{p}(x)\!-\!\xi_{p-\delta'}(x)
\!=\!\frac{x^{-N}}{1-x^{-N}} \sum_{k=0}^{N-1} \!\beta_{k} x^{-k}
\!- \eta_{3, v}(x)
- \!\sum_{k=N}^{Nv-1}\! \alpha_{k-N} x^{-k}
\!- \eta_{2,v}^{(n)}(x, \delta'),
\end{align}
where $\eta_{3, v}(x) \coloneqq \sum_{k=Nv}^{\infty} \alpha_{k} x^{-k}$. As with $\eta_{1, v}$ and $\eta_{2,v}^{(n)}(\cdot, \delta')$, observe that $\eta_{3, v}$ is bounded by the tail of a geometric series, namely $\lvert \eta_{3, v}(x) \rvert \leq x^{-Nv}/(1-x^{-1})$. Reindexing the second series in \eqref{eqn:xi_p-decomposition2} and rearranging yields
\begin{align*}
\sum_{k=N}^{N(v+1)-1} \alpha_{k-N} x^{-k}
&= x^{-N} \left(\frac{1-x^{-Nv}}{1-x^{-N}} \sum_{k=0}^{N-1} \beta_{k}x^{-k}
- \xi_{p}(x) + \eta_{1, v}(x)\right).
\end{align*}
Substituting this into \eqref{eqn:xi_p-xi_(p-d)} gives, for $\delta' \in (0, \delta)$,
\begin{align*}
\begin{aligned}
&\xi_{p}(x)-\xi_{p-\delta'}(x)\\
&= \frac{x^{-Nv} x^{-N}}{1-x^{-N}} \sum_{k=0}^{N-1} \beta_{k} x^{-k}
- \eta_{3, v}(x)
+ x^{-N} \xi_{p}(x) - x^{-N} \eta_{1, v}(x)
- \eta_{2, v}^{(n)}(x, \delta').
\end{aligned}
\end{align*}
Taking absolute values and using the bounds obtained for $\eta_{1, v}$, $\eta_{2, v}^{(n)}(\cdot, \delta')$ and $\eta_{3, v}$ gives, for a suitable constant $K>0$, that $\left\lvert (1-x^{-N})\xi_{p}(x)-\xi_{p-\delta'}(x) \right\rvert \leq K x^{-Nv}$, for all $\delta' \in (0, \delta)$ and $x \in (u, 2]$, as required.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:continuity-of-entropy}: left-continuity with $\beta$ periodic.]
The proof proceeds as in the proof of left-continuity of Theorem~\ref{thm:continuity-of-entropy} for $\beta$ non-periodic, with the following modification. Instead of choosing $\delta$ in accordance with Lemma~\ref{lem:claim1_non-periodic}, we use Lemma~\ref{lem:claim1} and Lemma~\ref{lem:claim2} to choose $\delta$; the remainder of the proof follows identically.
\end{proof}
\subsection{Right continuity}
\label{sec:right-continuity}
In Sections~\ref{sec:nonperiodic} and~\ref{sec:periodic}, the point $p$ was shifted to the left by subtracting some $\delta > 0$ from $p$. By adding $\delta>0$ to $p$ instead of subtracting it, right continuity can be shown by repeating Sections~\ref{sec:nonperiodic} and~\ref{sec:periodic} with the following substitutions: swap $\tau_p^+$ with $\tau_p^-$, $T_p^+$ with $T_p^-$, and $p-\delta$ with $p+\delta$; one then proceeds by cases according to whether or not $\alpha$ is periodic. Therefore, it follows that $x \mapsto h(T_x)$ is continuous at every point $p \in (a, b)$. This concludes the proof of Theorem~\ref{thm:continuity-of-entropy}.
\begin{figure}[t]
\includegraphics[height=15.25em]{entropy_p}
\caption{Numerical approximation of the topological entropy of the affine Lorenz map $T_{p}$, with first branch $f_{0}(x) \coloneqq 1.1 x$ and second branch $f_{1}(x) \coloneqq 1.9 x - 0.9$, using the algorithm developed in \cite{SSV} -- truncation term: $n=500$; tolerance: $\varepsilon=10^{-7}$.}
\label{fig:affine-entropy}
\includegraphics[height=15.25em]{nonmonotonic}
\caption{Agreement of separate numerical methods to compute topological entropy on the highlighted non-monotonic feature of Figure~\ref{fig:affine-entropy}. The magenta curve uses the algorithm developed in \cite{SSV} and the blue curve computes the lap number of $(T_{p}^{\pm})^{50}$.}
\label{fig:non-monotonic-entropy}
\end{figure}
\section{Monotonicity of topological entropy}\label{sec:affine_Lorenz}
An \textsl{affine Lorenz map} $T_p$ is a Lorenz map whose branch functions $f_0$ and $f_1$ are affine, namely $f_0(x) = b_0 x$ and $f_{1}(x) = 1 - b_{1} + b_{1}x$, where $b_{0}, b_{1} > 1$, $b_{0} + b_{1} > b_{0} b_{1}$ and $1 - b_{1}^{-1} \leq p \leq b_{0}^{-1}$. In the special case when $b_0=b_1$, the map $T_p$ is called a \textsl{uniform Lorenz map} and is denoted by $T^{\pm}_{p,b}$, where $b$ denotes the common value $b_{0} = b_{1}$.
In \cite{H,Par} it is shown that $h(T_{p,b}) = \ln(b)$. Further, Glendinning \cite{G}, Palmer \cite{P:1979} and Parry \cite{Par} proved that a large class of piecewise monotone transformations of the unit interval are topologically conjugate to a uniform Lorenz map.
\begin{theorem}[\cite{G,P:1979,Par}]
Every topologically transitive Lorenz map is topologically conjugate to a uniform Lorenz map.
\end{theorem}
In the following we take a first step towards addressing Milnor's monotonicity conjecture in the setting of affine Lorenz maps. Indeed, numerical experiments show that there exist families of affine Lorenz maps $(T_p)_p$ for which $p \mapsto h(T_p)$ is non-monotonic and non-constant. This phenomenon is exhibited using the algorithm developed in \cite{SSV} and is corroborated by a second algorithm that approximates topological entropy by computing lap numbers, see for instance \cite{MR1736945}.
The algorithm in \cite{SSV} computes topological entropy by comparing the kneading sequences of a given Lorenz map against kneading sequences of uniform Lorenz maps. To verify its validity we compare the results against a separate computation which approximates the topological entropy of a given Lorenz map by computing the lap numbers of iterations of the original Lorenz map, see Figure~\ref{fig:non-monotonic-entropy}.
Here we considered the family of affine Lorenz maps $(T_{p})_{p \in [9/19, 10/11]}$ with branch functions $f_{0}(x) \coloneqq 1.1 x$ and $f_{1}(x) \coloneqq 1.9 x - 0.9$. The graph of the map $p \mapsto h(T_{p})$ is shown in Figure~\ref{fig:affine-entropy}; here $h(T_{p})$ has been computed using the algorithm given in \cite{SSV} with truncation term $n=500$ and tolerance $\varepsilon=10^{-7}$. Figure~\ref{fig:affine-entropy} captures a significant non-monotonic feature; indeed, there are many non-monotonic sections of the graph that exceed the algorithm's error tolerance.
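The lap-number cross-check admits a concise sketch. The following is a minimal illustration only, not the implementation used to produce the figures; the function name and defaults are ours. For an affine Lorenz map, the restriction of $T_p^n$ to each of its monotone branches is affine, so the branches can be tracked exactly: a branch is split whenever its image straddles the break point $p$, and $h(T_p) \approx \ln(L_n)/n$, where $L_n$ denotes the lap number of $T_p^n$.

```python
import math

def lap_entropy(b0, b1, p, n=25):
    """Approximate the topological entropy of the affine Lorenz map
    T(x) = b0*x for x < p and T(x) = 1 - b1 + b1*x for x >= p,
    via the growth rate of lap numbers: h(T) ~ log(L_n)/n."""
    def T(x):
        return b0 * x if x < p else 1.0 - b1 + b1 * x

    Tp_left, Tp_right = b0 * p, 1.0 - b1 + b1 * p  # one-sided limits at p
    # Each entry (l, r, Tl, Tr) records that T^k is increasing and affine
    # on (l, r), with image (Tl, Tr).  Start with the branches of T itself.
    branches = [(0.0, p, 0.0, Tp_left), (p, 1.0, Tp_right, 1.0)]
    for _ in range(1, n):
        new = []
        for (l, r, Tl, Tr) in branches:
            if Tl < p < Tr:  # image straddles the break point: branch splits
                x = l + (p - Tl) / (Tr - Tl) * (r - l)  # point with T^k(x) = p
                new.append((l, x, T(Tl), Tp_left))
                new.append((x, r, Tp_right, T(Tr)))
            else:            # image misses p: apply T to the whole image
                new.append((l, r, T(Tl), T(Tr)))
        branches = new
    return math.log(len(branches)) / n
```

For a uniform Lorenz map $T_{p,b}$ the estimate can be checked against the exact value $h(T_{p,b}) = \ln(b)$ quoted above; since lap numbers are submultiplicative, the estimate always bounds the entropy from above.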
\section*{Acknowledgements}
The authors acknowledge the support of California Polytechnic State University's \textsl{Bill and Linda Frost Fund} and \textsl{College-Based Fees}. Part of this work was completed while T.~Samuel was visiting the Institut Mittag-Leffler as part of the research program \textsl{Fractal Geometry and Dynamics}. He is grateful to the organisers and staff for their very kind hospitality, financial support and a stimulating atmosphere.
\bibliographystyle{plain}
Negotino is a town in the Negotino municipality of Macedonia; its original name (with diacritics) is Negotino. Negotino is the seat of the Negotino municipality in the Negotino area.
Negotino hotel deals also include long-stay accommodation. Take advantage of our exclusive offers and long-stay discounts for suites at selected Negotino hotels.
Welcome to the Negotino Google satellite map! The town of Negotino is situated in the Municipality of Negotino, Macedonia; its geographical coordinates are 41° 29' 4.09" North, 22° 5' 26.42" East.
See Negotino photos and images from satellite below, and explore the aerial photographs of Negotino in Macedonia.
\section{Introduction}\label{introduccion}
In ring theory, a ring $B$ is called {\em Armendariz} (the term was introduced by Rege and Chhawchharia in \cite{RegeChhawchharia1997}), if whenever polynomials $f(x)=a_0+a_1x+\dotsb + a_nx^n$ and $g(x)=b_0+b_1x+\dotsb + b_mx^m\in B[x]$ satisfy $f(x)g(x)=0$, then $a_ib_j=0$, for every $i,j$. The interest of this notion lies in its natural and useful role in understanding the relation between the annihilators of the ring $B$ and those of the polynomial ring $B[x]$. As a matter of fact, in \cite{Armendariz1974}, Lemma 1, Armendariz showed that a reduced ring (a ring with no nonzero nilpotent elements) always satisfies this condition. It is well known that reduced rings are abelian (i.e., every idempotent is central). Now, following \cite{LiuZhao2006}, a ring $B$ is called {\em weak Armendariz}, if whenever two polynomials $p(x)=\sum_{i=0}^{s} a_ix^{i}$ and $q(x)=\sum_{j=0}^{t} b_jx^{j}$ of the polynomial ring $B[x]$ satisfy $pq=0$, then $a_ib_j$ is a nilpotent element of $B$, for each $i, j$. \\
In the context of Ore extensions introduced by Ore in \cite{Ore1933}, for $\alpha$ an endomorphism of a ring $B$, Hong et al. \cite{HongKimKwak2003} called $B$ an $\alpha$-{\em skew Armendariz ring}, if for any two elements $p=\sum_{i=0}^{s} a_ix^{i},\ q=\sum_{j=0}^{t} b_jx^{j}$ of the Ore extension of endomorphism type $B[x;\alpha]$, $pq = 0 \Rightarrow a_i\alpha^{i}(b_j)=0$, for every $i, j$. As a generalization of $\alpha$-skew Armendariz rings, Ouyang \cite{Ouyang2008} defined the {\em weak} $\alpha$-{\em skew Armendariz rings} in the following way: a ring $B$ is said to be {\em weak} $\alpha$-{\em skew Armendariz}, if whenever two polynomials $p=\sum_{i=0}^{s} a_ix^{i},\ q=\sum_{j=0}^{t} b_jx^{j}$ of $B[x;\alpha]$ satisfy $pq=0$, then $a_i\alpha^{i}(b_j)$ is a nilpotent element of $B$, for all $i, j$. It is clear that weak $\alpha$-skew Armendariz rings are more general than weak Armendariz rings and $\alpha$-skew Armendariz rings.\\
On the other hand, following Krempa \cite{Krempa1996}, an endomorphism $\alpha$ of a ring $B$ is called {\em rigid}, if $a\alpha(a)=0\Rightarrow a=0$, for $a\in B$. $B$ is called $\alpha$-rigid, if there exists a rigid endomorphism $\alpha$ of $B$. It is known that any rigid endomorphism of a ring is injective and that $\alpha$-rigid rings are reduced (see Hong et al. \cite{HongKimKwak2000}). Several properties of $\alpha$-rigid rings have been established in the literature (cf. \cite{Krempa1996}, \cite{HongKimKwak2000}, and see \cite{Reyes2015} for detailed references). With this definition in mind, Ouyang \cite{Ouyang2008} defined {\em weak} $\alpha$-{\em rigid rings}, which generalize $\alpha$-rigid rings. More precisely, if $\alpha$ is an endomorphism of a ring $B$, then $B$ is said to be {\em weak} $\alpha$-{\em rigid}, if $a\alpha(a)\in {\rm nil}(B) \Leftrightarrow a\in {\rm nil}(B)$, where ${\rm nil}(B)$ is the set of nilpotent elements of $B$. Ouyang \cite{Ouyang2008}, Proposition 2.2, showed that $B$ is $\alpha$-rigid if and only if $B$ is weak $\alpha$-rigid and reduced. In this way, weak $\alpha$-rigid rings generalize $\alpha$-rigid rings by deleting the condition of being reduced.\\
With the aim of extending the above two notions introduced by Ouyang in \cite{Ouyang2008} to a more general setting than Ore extensions, in this paper we focus on the kind of noncommutative rings known in the literature as skew Poincar\'e-Birkhoff-Witt extensions (briefly, skew PBW extensions). These objects were introduced by Gallego and Lezama in \cite{LezamaGallego2011} and strictly contain Ore extensions of injective type (i.e., when $\sigma$ is an injective endomorphism of $R$; see Examples \ref{mentioned} for different noncommutative rings which are skew PBW extensions but cannot be expressed as Ore extensions). As a matter of fact, skew PBW extensions generalize several families of noncommutative rings defined in the literature, and include as particular cases different examples of remarkable algebras appearing in mathematical physics, representation theory, Hopf algebras, quantum groups, Lie algebras, and others. Briefly, next we mention some of these families of algebras (see \cite{Reyes2013PhD} and \cite{LezamaReyes2014} for a detailed reference for every one of these families): (i) universal enveloping algebras of finite dimensional Lie algebras; (ii) PBW extensions introduced by Bell and Goodearl; (iii) almost normalizing extensions defined by McConnell and Robson; (iv) solvable polynomial rings introduced by Kandri-Rody and Weispfenning; (v) diffusion algebras studied by Isaev, Pyatov, and Rittenberg; (vi) 3-dimensional skew polynomial algebras introduced by Bell and Smith; (vii) the regular graded algebras studied by Kirkman, Kuzmanovich, and Zhang; (viii) different algebras with PBW bases of polynomial type. The greatest difference between skew PBW extensions and all these algebras is that the coefficients do not necessarily commute with the variables, and these coefficients are not necessarily elements of fields (see Definition \ref{gpbwextension} below). 
Due to this fact, skew PBW extensions contain well-known families of algebras such as some types of $G$-algebras studied by Levandovskyy and some PBW algebras defined by Bueso et al. (both $G$-algebras and PBW algebras take coefficients in fields and assume that coefficients commute with variables), Auslander-Gorenstein rings, some Calabi-Yau and skew Calabi-Yau algebras, some Artin-Schelter regular algebras, some Koszul algebras, quantum polynomials, some quantum universal enveloping algebras (see \cite{LezamaReyes2014}, \cite{ReyesSuarez2017FEJM}, \cite{ReyesSuarezMomento2017}, \cite{SuarezLezamaReyes2017}, \cite{SuarezReyes2017JP} and \cite{SuarezReyes2017FJMS} for a considerable list of examples of all these algebras). As we see, skew PBW extensions include a great many noncommutative rings, which means that a theory extending the above two notions to these extensions establishes general results for a class of rings much broader than Ore extensions, and hence contributes to the study of properties of noncommutative algebras. Precisely, this kind of approach has been pursued in several recent works (cf. \cite{ReyesSuarezClifford2017}, \cite{ReyesSuarezskewCY2017}, \cite{ReyesSuarezUMA2018} and \cite{ReyesSuarezYesica2018}).\\
The paper is organized as follows: In Section \ref{definitionexamplesspbw} we establish some useful results about skew PBW extensions for the rest of the paper. Section \ref{weakSigmarigid} contains the first concept of the paper, the {\em weak} $\Sigma$-{\em rigid rings} (Definition \ref{weaksigmarigidring}). These rings generalize both the weak $\alpha$-rigid rings introduced by Ouyang \cite{Ouyang2008} and the $\Sigma$-rigid rings defined by the first author in \cite{Reyes2015}. However, as we will see in Theorem \ref{2008Proposition2.2}, weak $\Sigma$-rigid rings and $\Sigma$-rigid rings coincide when the ring is assumed to be reduced. Several results about these rings are presented in that section. In Section \ref{weakSigmaskewArmendariz} we present the second concept of this paper, the {\em weak} $\Sigma$-{\em skew Armendariz rings} (Definition \ref{weakSigmaskewArmendariz}), which generalize the weak $\alpha$-skew Armendariz rings defined by Ouyang \cite{Ouyang2008} and the $\Sigma$-skew Armendariz rings introduced by the first author in \cite{ReyesSuarez2016UIS}. We prove that when $R$ is a NI ring (a ring $B$ is called NI, if the set ${\rm nil}(B)$ of nilpotent elements of $B$ forms an ideal of $B$), if $R$ is a weak $\Sigma$-rigid ring, then $R$ is a weak $\Sigma$-skew Armendariz ring (Theorem \ref{2008Theorem3.3}). We present an example which illustrates the importance of the NI condition (if we do not assume it, then there exist rings which are weak $\Sigma$-rigid but not weak $\Sigma$-skew Armendariz, see Remark \ref{2008Example3.4}). Finally, Section \ref{futurework} presents some ideas for future work concerning the objects introduced in this article.\\
Throughout the paper, the word ring means a ring (not necessarily commutative) with unity. $\mathbb{C}$ will denote the field of complex numbers.
\section{Skew PBW extensions}\label{definitionexamplesspbw}
In this section we establish some useful results about skew PBW extensions for the rest of the paper.
\begin{definition}[\cite{LezamaGallego2011}, Definition 1]\label{gpbwextension}
Let $R$ and $A$ be rings. We say that $A$ is a {\em skew PBW extension over} $R$, which is denoted by $A:=\sigma(R)\langle
x_1,\dots,x_n\rangle$, if the following conditions hold:
\begin{enumerate}
\item[\rm (i)]$R\subseteq A$;
\item[\rm (ii)]there exist elements $x_1,\dots ,x_n\in A$ such that $A$ is a left free $R$-module, with basis ${\rm Mon}(A):= \{x^{\alpha}=x_1^{\alpha_1}\cdots
x_n^{\alpha_n}\mid \alpha=(\alpha_1,\dots ,\alpha_n)\in
\mathbb{N}^n\}$, and $x_1^{0}\dotsb x_n^{0}:=1\in {\rm Mon}(A)$.
\item[\rm (iii)]For each $1\leq i\leq n$ and any $r\in R\ \backslash\ \{0\}$, there exists an element $c_{i,r}\in R\ \backslash\ \{0\}$ such that $x_ir-c_{i,r}x_i\in R$.
\item[\rm (iv)]For any elements $1\leq i,j\leq n$, there exists $c_{i,j}\in R\ \backslash\ \{0\}$ such that $x_jx_i-c_{i,j}x_ix_j\in R+Rx_1+\cdots +Rx_n$.
\end{enumerate}
\end{definition}
\begin{proposition}[\cite{LezamaGallego2011}, Proposition
3]\label{sigmadefinition}
Let $A$ be a skew PBW extension over $R$. For each $1\leq i\leq
n$, there exist an injective endomorphism $\sigma_i:R\rightarrow
R$ and a $\sigma_i$-derivation $\delta_i:R\rightarrow R$ such that $x_ir=\sigma_i(r)x_i+\delta_i(r)$, for each $r\in R$. From now on, we write $\Sigma:=\{\sigma_1,\dotsc, \sigma_n\}$ and $\Delta:=\{\delta_1,\dotsc, \delta_n\}$.
\end{proposition}
\begin{definition}[\cite{LezamaGallego2011}, Definition 4]\label{sigmapbwderivationtype}
Let $A$ be a skew PBW extension over $R$.
\begin{enumerate}
\item[\rm (i)] $A$ is called \textit{quasi-commutative} if the conditions {\rm(}iii{\rm)} and {\rm(}iv{\rm)} in Definition \ref{gpbwextension} are replaced by the following: (iii') for each $1\leq i\leq n$ and all $r\in R\ \backslash\ \{0\}$, there exists $c_{i,r}\in R\ \backslash\ \{0\}$ such that $x_ir=c_{i,r}x_i$; (iv') for any $1\leq i,j\leq n$, there exists $c_{i,j}\in R\ \backslash\ \{0\}$ such that $x_jx_i=c_{i,j}x_ix_j$.
\item[\rm (ii)] $A$ is called \textit{bijective}, if $\sigma_i$ is bijective for each $1\leq i\leq n$, and $c_{i,j}$ is invertible, for any $1\leq i<j\leq n$.
\item[\rm (iii)] $A$ is called of {\em endomorphism type}, if $\delta_i=0$, for every $i$. In addition, if every $\sigma_i$ is bijective, $A$ is a skew PBW extension of {\em automorphism type}.
\end{enumerate}
\end{definition}
\begin{examples}\label{mentioned}
If $R[x_1;\sigma_1,\delta_1]\dotsb [x_n;\sigma_n,\delta_n]$ is an iterated Ore extension where
\begin{itemize}
\item $\sigma_i$ is injective, for $1\le i\le n$;
\item $\sigma_i(r)$, $\delta_i(r)\in R$, for every $r\in R$ and $1\le i\le n$;
\item $\sigma_j(x_i)=cx_i+d$, for $i < j$, and $c, d\in R$, where $c$ has a left inverse;
\item $\delta_j(x_i)\in R + Rx_1 + \dotsb + Rx_n$, for $i < j$,
\end{itemize}
then $R[x_1;\sigma_1,\delta_1]\dotsb [x_n;\sigma_n, \delta_n] \cong \sigma(R)\langle x_1,\dotsc, x_n\rangle$ (\cite{LezamaReyes2014}, p. 1212). Note that skew PBW extensions of endomorphism type are more general than iterated Ore extensions $R[x_1;\sigma_1]\dotsb [x_n;\sigma_n]$, and, in general, skew PBW extensions are more general than Ore extensions of injective type. More precisely, there are noncommutative rings which are skew PBW extensions but cannot be expressed as iterated Ore extensions, as we show next (see \cite{Reyes2013PhD}, \cite{LezamaReyes2014} or \cite{ReyesSuarezClifford2017} for the reference of every example). Examples of these extensions appearing in noncommutative algebraic geometry and theoretical physics can be found in \cite{ReyesSuarez2017FEJM}, \cite{ReyesSuarezMomento2017}, \cite{ReyesSuarezskewCY2017}, \cite{ReyesSuarezYesica2018}, \cite{SuarezReyes2017JP} and \cite{SuarezReyes2017FJMS}.
\begin{enumerate}
\item[\rm (a)] Let $k$ be a commutative ring and $\mathfrak{g}$ a finite dimensional Lie algebra over $k$ with basis $\{x_1,\dots ,x_n\}$. The \textit{universal enveloping algebra} of $\mathfrak{g}$, denoted $\mbox{${\cal U}$}(\mathfrak{g})$, is a skew PBW extension over $k$, since $x_ir-rx_i=0$, $x_ix_j-x_jx_i=[x_i,x_j]\in \mathfrak{g}=k+kx_1+\cdots+kx_n$, $r\in k$, for $1\leq i,j\leq n$. In particular, the \textit{universal enveloping algebra} \textit{of a Kac-Moody Lie algebra} is a skew PBW extension over a polynomial ring.
\item [\rm (b)] The \textit{universal enveloping ring} $\mbox{${\cal U}$}(V,R ,\Bbbk)$, where $R$ is a $\Bbbk$-algebra, and $V$ is a $\Bbbk$-vector space which is also a Lie ring containing $R$ and $\Bbbk$ as Lie ideals with suitable relations. The enveloping ring $\mbox{${\cal U}$}(V,R,\Bbbk)$ is a finite skew PBW extension over $R$ if ${\rm dim}_\Bbbk\ (V/R)$ is finite.
\item [\rm (c)] Let $k$, $\mathfrak{g}$, $\{x_1,\dots ,x_n\}$ and $\mbox{${\cal U}$}(\mathfrak{g})$ be as in the previous example; let $R$ be a $k$-algebra containing $k$. The \textit{tensor product} $A:=R\ \otimes_k\ \mbox{${\cal U}$}(\mathfrak{g})$ is a skew PBW extension over $R$, and it is a particular case of \textit{crossed product} $R*\mbox{${\cal U}$}(\mathfrak{g})$ of $R$ by $\mbox{${\cal U}$}(\mathfrak{g})$, which is a skew PBW extension over $R$.
\item [\rm (d)] The \textit{twisted or smash product differential operator ring} $R\ \# _{\sigma}\ \mbox{${\cal U}$}(\mathfrak{g})$, where $\mathfrak{g}$ is a finite-dimensional Lie algebra acting on $R$ by derivations, and $\sigma$ is Lie 2-cocycle with values in $R$.
\item [\rm (e)] Diffusion algebras arise in physics as a possible way to understand a large class of $1$-dimen\-sional stochastic processes. A \textit{diffusion algebra} $\mbox{${\cal A}$}$ with parameters $a_{ij}\in \mathbb{C}\ \backslash\ \{0\}$, for $1\le i, j\le n$, is an algebra over $\mathbb{C}$ generated by variables $x_1,\dotsc,x_n$ subject to the relations $a_{ij}x_ix_j-b_{ij}x_jx_i=r_jx_i-r_ix_j$, whenever $i<j$, with $b_{ij}, r_i\in \mathbb{C}$ for all $i<j$. $\mbox{${\cal A}$}$ admits a PBW basis of standard monomials $x_1^{i_1}\dotsb x_n^{i_n}$, that is, $\mbox{${\cal A}$}$ is a diffusion algebra if these standard monomials form a $\mathbb{C}$-vector space basis for $\mbox{${\cal A}$}$. From Definition \ref{gpbwextension}, (iii) and (iv), it is clear that the family of skew PBW extensions is more general than that of diffusion algebras. We denote $q_{ij}:=\frac{b_{ij}}{a_{ij}}$. The parameter $q_{ij}$ can be a root of unity if and only if it is equal to 1; it is therefore reasonable to assume that these parameters are not roots of unity other than 1. If all the coefficients $q_{ij}$ are nonzero, then the corresponding diffusion algebra has a PBW basis of standard monomials $x_1^{i_1}\dotsb x_n^{i_n}$, and hence these algebras are skew PBW extensions. More precisely, $\mbox{${\cal A}$}$ is a skew PBW extension over $\mathbb{C}$ in the indeterminates $x_1,\dotsc,x_n$.
\end{enumerate}
\end{examples}
\begin{definition}[\cite{LezamaGallego2011}, Definition 6]\label{definitioncoefficients}
Let $A$ be a skew PBW extension over $R$. Then:
\begin{enumerate}
\item[\rm (i)]for $\alpha=(\alpha_1,\dots,\alpha_n)\in \mathbb{N}^n$,
$\sigma^{\alpha}:=\sigma_1^{\alpha_1}\cdots \sigma_n^{\alpha_n}$,
$|\alpha|:=\alpha_1+\cdots+\alpha_n$. If
$\beta=(\beta_1,\dots,\beta_n)\in \mathbb{N}^n$, then
$\alpha+\beta:=(\alpha_1+\beta_1,\dots,\alpha_n+\beta_n)$.
\item[\rm (ii)]For $X=x^{\alpha}\in {\rm Mon}(A)$,
$\exp(X):=\alpha$, $\deg(X):=|\alpha|$, and $X_0:=1$. The symbol $\succeq$ will denote a total order defined on ${\rm Mon}(A)$ (a total order on $\mathbb{N}^n$). For an
element $x^{\alpha}\in {\rm Mon}(A)$, ${\rm exp}(x^{\alpha}):=\alpha\in \mathbb{N}^n$. If
$x^{\alpha}\succeq x^{\beta}$ but $x^{\alpha}\neq x^{\beta}$, we
write $x^{\alpha}\succ x^{\beta}$. Every element $f\in A$ can be expressed uniquely as $f=a_0 + a_1X_1+\dotsb +a_mX_m$, with $a_i\in R$ and $X_m\succ \dotsb \succ X_1$ (occasionally, we will use expressions such as $f=a_0 + a_1Y_1+\dotsb +a_mY_m$, with $a_i\in R$ and $Y_m\succ \dotsb \succ Y_1$). With this notation, we define ${\rm
lm}(f):=X_m$, the \textit{leading monomial} of $f$; ${\rm
lc}(f):=a_m$, the \textit{leading coefficient} of $f$; ${\rm
lt}(f):=a_mX_m$, the \textit{leading term} of $f$; ${\rm exp}(f):={\rm exp}(X_m)$, the \textit{order} of $f$; and
$E(f):=\{{\rm exp}(X_i)\mid 1\le i\le m\}$. Note that $\deg(f):={\rm max}\{\deg(X_i)\}_{i=1}^m$. Finally, if $f=0$, then
${\rm lm}(0):=0$, ${\rm lc}(0):=0$, ${\rm lt}(0):=0$. We also
consider $X\succ 0$ for any $X\in {\rm Mon}(A)$. For a detailed description of monomial orders in skew PBW extensions, see \cite{LezamaGallego2011}, Section 3.
\end{enumerate}
\end{definition}
\begin{proposition}[\cite{Reyes2015}, Proposition 2.9] \label{lindass}
If $\alpha=(\alpha_1,\dotsc, \alpha_n)\in \mathbb{N}^{n}$ and $r$ is an element of $R$, then
\begin{align*}
x^{\alpha}r = &\ x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_{n-1}^{\alpha_{n-1}}x_n^{\alpha_n}r = x_1^{\alpha_1}\dotsb x_{n-1}^{\alpha_{n-1}}\biggl(\sum_{j=1}^{\alpha_n}x_n^{\alpha_{n}-j}\delta_n(\sigma_n^{j-1}(r))x_n^{j-1}\biggr)\\
+ &\ x_1^{\alpha_1}\dotsb x_{n-2}^{\alpha_{n-2}}\biggl(\sum_{j=1}^{\alpha_{n-1}}x_{n-1}^{\alpha_{n-1}-j}\delta_{n-1}(\sigma_{n-1}^{j-1}(\sigma_n^{\alpha_n}(r)))x_{n-1}^{j-1}\biggr)x_n^{\alpha_n}\\
+ &\ x_1^{\alpha_1}\dotsb x_{n-3}^{\alpha_{n-3}}\biggl(\sum_{j=1}^{\alpha_{n-2}} x_{n-2}^{\alpha_{n-2}-j}\delta_{n-2}(\sigma_{n-2}^{j-1}(\sigma_{n-1}^{\alpha_{n-1}}(\sigma_n^{\alpha_n}(r))))x_{n-2}^{j-1}\biggr)x_{n-1}^{\alpha_{n-1}}x_n^{\alpha_n}\\
+ &\ \dotsb + x_1^{\alpha_1}\biggl( \sum_{j=1}^{\alpha_2}x_2^{\alpha_2-j}\delta_2(\sigma_2^{j-1}(\sigma_3^{\alpha_3}(\sigma_4^{\alpha_4}(\dotsb (\sigma_n^{\alpha_n}(r))))))x_2^{j-1}\biggr)x_3^{\alpha_3}x_4^{\alpha_4}\dotsb x_{n-1}^{\alpha_{n-1}}x_n^{\alpha_n} \\
+ &\ \sigma_1^{\alpha_1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(r))))x_1^{\alpha_1}\dotsb x_n^{\alpha_n}, \ \ \ \ \ \ \ \ \ \ \sigma_j^{0}:={\rm id}_R\ \ {\rm for}\ \ 1\le j\le n.
\end{align*}
\end{proposition}
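To make the $n = 1$ case of the formula above concrete, the following sketch (our own illustration, not part of the paper) models the Ore extension $R[x;\sigma,\delta]$ with $R = \mathbb{Q}[t]$, $\sigma = \operatorname{id}$ and $\delta = d/dt$ (the first Weyl algebra), and computes $x^{m} r$ by repeatedly applying the defining relation $xr = \sigma(r)x + \delta(r)$.

```python
# A minimal model (our own illustration) of the Ore extension R[x; sigma, delta]
# with R = Q[t], sigma = id and delta = d/dt, i.e. the first Weyl algebra.
# Elements are dicts {x-degree: coefficient}, where a coefficient is a list
# [c0, c1, ...] representing c0 + c1*t + ... in R.

def sigma(r):                      # the identity endomorphism of R
    return list(r)

def delta(r):                      # the sigma-derivation d/dt on R
    return [i * c for i, c in enumerate(r)][1:]

def add(r, s):                     # addition in R, trimming trailing zeros
    out = [(r[i] if i < len(r) else 0) + (s[i] if i < len(s) else 0)
           for i in range(max(len(r), len(s)))]
    while out and out[-1] == 0:
        out.pop()
    return out

def times_x_left(elem):
    """Left-multiply an element of R[x; sigma, delta] by x, using the
    defining relation x*r = sigma(r)*x + delta(r)."""
    out = {}
    for d, r in elem.items():
        out[d + 1] = add(out.get(d + 1, []), sigma(r))
        out[d] = add(out.get(d, []), delta(r))
    return {d: c for d, c in out.items() if c}

def x_pow_times(r, m):
    """Compute x^m * r; this realizes the n = 1 case of the formula above."""
    elem = {0: list(r)}
    for _ in range(m):
        elem = times_x_left(elem)
    return elem
```

For instance, $x^{2}t = tx^{2} + 2x$, in agreement with $\sum_{j=1}^{2} x^{2-j}\delta(\sigma^{j-1}(t))x^{j-1} + \sigma^{2}(t)x^{2} = 2x + tx^{2}$.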
\begin{remark}[\cite{Reyes2015}, Remark 2.10]\label{juradpr}
About Proposition \ref{lindass}, we have the following observation: if $X_i:=x_1^{\alpha_{i1}}\dotsb x_n^{\alpha_{in}}$ and $Y_j:=x_1^{\beta_{j1}}\dotsb x_n^{\beta_{jn}}$, then when we compute every summand of $a_iX_ib_jY_j$ we obtain products of the coefficient $a_i$ with several evaluations of $b_j$ in $\sigma$'s and $\delta$'s depending on the coordinates of $\alpha_i$. This assertion follows from the expression:
\begin{align*}
a_iX_ib_jY_j = &\ a_i\sigma^{\alpha_{i}}(b_j)x^{\alpha_i}x^{\beta_j} + a_ip_{\alpha_{i1}, \sigma_{i2}^{\alpha_{i2}}(\dotsb (\sigma_{in}^{\alpha_{in}}(b_j)))} x_2^{\alpha_{i2}}\dotsb x_n^{\alpha_{in}}x^{\beta_j} \\
+ &\ a_i x_1^{\alpha_{i1}}p_{\alpha_{i2}, \sigma_3^{\alpha_{i3}}(\dotsb (\sigma_{{in}}^{\alpha_{in}}(b_j)))} x_3^{\alpha_{i3}}\dotsb x_n^{\alpha_{in}}x^{\beta_j} \\
+ &\ a_i x_1^{\alpha_{i1}}x_2^{\alpha_{i2}}p_{\alpha_{i3}, \sigma_{i4}^{\alpha_{i4}} (\dotsb (\sigma_{in}^{\alpha_{in}}(b_j)))} x_4^{\alpha_{i4}}\dotsb x_n^{\alpha_{in}}x^{\beta_j}\\
+ &\ \dotsb + a_i x_1^{\alpha_{i1}}x_2^{\alpha_{i2}} \dotsb x_{i(n-2)}^{\alpha_{i(n-2)}}p_{\alpha_{i(n-1)}, \sigma_{in}^{\alpha_{in}}(b_j)}x_n^{\alpha_{in}}x^{\beta_j} \\
+ &\ a_i x_1^{\alpha_{i1}}\dotsb x_{i(n-1)}^{\alpha_{i(n-1)}}p_{\alpha_{in}, b_j}x^{\beta_j}.
\end{align*}
\end{remark}
\section{Weak $\Sigma$-rigid rings}\label
{weakSigmarigid}
For a ring $B$ with a ring endomorphism $\sigma:B\to B$ and a $\sigma$-derivation $\delta:B\to B$, considering the Ore extension $B[x;\sigma,\delta]$, Krempa in \cite{Krempa1996} defined $\sigma$ to be a {\em rigid endomorphism} if $b\sigma(b)=0$ implies $b=0$, for $b\in B$. Krempa called $B$ $\sigma$-rigid if there exists a rigid endomorphism $\sigma$ of $B$. Since Ore extensions of injective type are particular examples of skew PBW extensions, the first author introduced the following definition with the purpose of studying the notion of {\em rigidness} in this more general setting.
\begin{definition}[\cite{Reyes2015}, Definition 3.2] \label{generaldef2015}
Let $B$ be a ring and $\Sigma$ a family of endomorphisms of $B$. $\Sigma$ is called a {\em rigid endomorphisms family} if $r\sigma^{\alpha}(r)=0$ implies $r=0$, for every $r\in B$ and $\alpha\in \mathbb{N}^n$. A ring $B$ is said to be $\Sigma$-{\em rigid} if there exists a rigid endomorphisms family $\Sigma$ of $B$.
\end{definition}
Note that if $\Sigma$ is a rigid endomorphisms family, then every element $\sigma_i\in \Sigma$ is a monomorphism. In fact, $\Sigma$-rigid rings are reduced rings: if $B$ is a $\Sigma$-rigid ring and $r^2=0$ for $r\in B$, then $0=r\sigma^{\alpha}(r^2)\sigma^{\alpha}(\sigma^{\alpha}(r))=r\sigma^{\alpha}(r)\sigma^{\alpha}(r)\sigma^{\alpha}(\sigma^{\alpha}(r))=r\sigma^{\alpha}(r)\sigma^{\alpha}(r\sigma^{\alpha}(r))$, i.e., $r\sigma^{\alpha}(r)=0$ and so $r=0$, that is, $B$ is reduced (note that there exists an endomorphism of a reduced ring which is not a rigid endomorphism, see \cite{HongKimKwak2000}, Example 9). With this in mind, we consider the family of injective endomorphisms $\Sigma$ and the family $\Delta$ of $\Sigma$-derivations in a skew PBW extension $A$ over a ring $R$ (see Proposition \ref{sigmadefinition}). Remarkable examples of $\Sigma$-rigid rings can be found in \cite{ReyesSuarezUMA2018}, Examples 3.3, \cite{Reyes2018}, Examples 2.9 or \cite{ReyesSuarezYesica2018}, Example 2. \\
Now, following the ideas presented by Ouyang \cite{Ouyang2008} for Ore extensions, we present the following definition which extends $\Sigma$-rigid rings.
\begin{definition}\label{weaksigmarigidring}
Let $\Sigma=\{\sigma_1,\dotsc,\sigma_n\}$ and $\Delta=\{\delta_1,\dotsc,\delta_n\}$ be families of endomorphisms and $\Sigma$-derivations of $R$, respectively. $R$ is called a {\em weak} $\Sigma$-{\em rigid ring}, if $a\sigma^{\theta}(a)\in {\rm nil}(R)\Leftrightarrow a\in {\rm nil}(R)$, for each element $a\in R$ and every $\theta\in \mathbb{N}^{n}$.
\end{definition}
\begin{remark}
It is clear that $\Sigma$-rigid rings are weak $\Sigma$-rigid. However, the converse is false, as can be appreciated in the following example taken from \cite{Ouyang2008}, Example 2.1. Let $\sigma$ be an endomorphism of a ring $R$ such that $R$ is a $\sigma$-rigid ring. Consider the ring
\[
R_3:=\biggl\{\begin{pmatrix}
a & b & c \\ 0 & a & d \\ 0 & 0 & a
\end{pmatrix} \mid a, b, c, d\in R
\biggr\}.
\]
If we extend the endomorphism $\sigma$ of $R$ to the endomorphism $\overline{\sigma}:R_3\to R_3$ defined by $\overline{\sigma}((a_{ij})) = (\sigma(a_{ij}))$, then $R_3$ is a weak $\overline{\sigma}$-rigid ring but $R_3$ is not $\overline{\sigma}$-rigid. Therefore, weak $\Sigma$-rigid rings are a generalization of $\Sigma$-rigid rings to the case where the ring of coefficients is not assumed to be reduced (note that the ring $R_3$ is not reduced).
\end{remark}
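The following small computation (our own sketch, taking $R = \mathbb{Z}$, $\sigma = \operatorname{id}$ and arbitrarily chosen entries) illustrates why $R_3$ fails to be reduced: any element of $R_3$ with $a = 0$ is strictly upper triangular and hence nilpotent of index at most $3$.

```python
def mat(a, b, c, d):
    """An element of R_3 over R = Z, in the shape displayed above."""
    return [[a, b, c],
            [0, a, d],
            [0, 0, a]]

def mul(X, Y):
    """3x3 matrix multiplication over Z."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# An element with a = 0 is strictly upper triangular, so its cube vanishes,
# exhibiting a nonzero nilpotent element of R_3.
N = mat(0, 2, 5, -3)
N3 = mul(mul(N, N), N)
```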
The next theorem gives an equivalence between the notions of $\Sigma$-rigid rings and weak $\Sigma$-rigid rings. This result extends \cite{Ouyang2008}, Proposition 2.2.
\begin{theorem}\label{2008Proposition2.2}
Let $\Sigma=\{\sigma_1,\dotsc,\sigma_n\}$ and $\Delta=\{\delta_1,\dotsc,\delta_n\}$ be families of endomorphisms and $\Sigma$-derivations of $R$, respectively. $R$ is $\Sigma$-rigid if and only if $R$ is weak $\Sigma$-rigid and reduced.
\end{theorem}
\begin{proof}
Suppose that $R$ is $\Sigma$-rigid. As we saw above, $R$ is reduced. Let us see that $R$ is weak $\Sigma$-rigid. If $a\in {\rm nil}(R)$, then $a=0$, since $R$ is reduced, whence $a\sigma^{\theta}(a)=0\in {\rm nil}(R)$, for all $\theta\in \mathbb{N}^{n}$. Now, if $a\sigma^{\theta}(a)\in {\rm nil}(R)$, for $a\in R$ and every $\theta\in \mathbb{N}^{n}$, then $a\sigma^{\theta}(a)=0$, for all $\theta\in\mathbb{N}^{n}$, since $R$ is reduced, and hence $a=0$ because $R$ is $\Sigma$-rigid. Then $R$ is weak $\Sigma$-rigid and reduced.
Conversely, suppose that $R$ is weak $\Sigma$-rigid and reduced, and let $a\sigma^{\theta}(a)=0$, for $a\in R$ and $\theta\in \mathbb{N}^{n}$. Then $a\in {\rm nil}(R)$, since $R$ is weak $\Sigma$-rigid, and so $a=0$ because $R$ is reduced. Therefore $R$ is $\Sigma$-rigid.
\end{proof}
The next proposition extends \cite{Ouyang2008}, Proposition 2.3 (compare also with \cite{Reyes2015}, Lemma 3.3).
\begin{proposition}\label{2008Proposition2.3}
If $R$ is a NI ring which is weak $\Sigma$-rigid, then we have the following assertions:
\begin{enumerate}
\item [\rm (1)] If $ab\in {\rm nil}(R)$, then $a\sigma^{\alpha}(b), \sigma^{\beta}(a)b\in {\rm nil}(R)$, for all elements $\alpha, \beta\in \mathbb{N}^{n}$.
\item [\rm (2)] If $\sigma^{\alpha}(a)b\in {\rm nil}(R)$, for some element $\alpha\in \mathbb{N}^{n}$, then $ab, ba\in {\rm nil}(R)$.
\item [\rm (3)] If $a\sigma^{\alpha}(b)\in {\rm nil}(R)$, for some element $\alpha\in \mathbb{N}^{n}$, then $ab, ba\in {\rm nil}(R)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $ab\in {\rm nil}(R)$. Using that $\sigma_i(ab)=\sigma_i(a)\sigma_i(b)\in {\rm nil}(R)$, for every $1\le i\le n$, and that ${\rm nil}(R)$ is an ideal of $R$, we obtain $b\sigma_i(a)\sigma_i(b)\sigma_i^{2}(a)=b\sigma_i(a)\sigma_i(b\sigma_i(a))\in {\rm nil}(R)$, which, since $R$ is weak $\Sigma$-rigid, shows that $b\sigma_i(a)\in {\rm nil}(R)$, whence $\sigma_i(a)b\in {\rm nil}(R)$, for all $i$. Applying this argument repeatedly, we obtain that $\sigma^{\beta}(a)b\in {\rm nil}(R)$, for all $\beta\in \mathbb{N}^{n}$. In a similar way, if $ab\in {\rm nil}(R)$, then $ba\in {\rm nil}(R)$, and so $\sigma^{\alpha}(b)a\in {\rm nil}(R)$, which implies that $a\sigma^{\alpha}(b)\in {\rm nil}(R)$, for every element $\alpha \in \mathbb{N}^{n}$.
(2) Suppose that $\sigma^{\alpha}(a)b\in {\rm nil}(R)$, for some element $\alpha\in \mathbb{N}^{n}$. We have $\sigma^{\alpha}(a)\sigma^{\alpha}(b) = \sigma^{\alpha}(ab) = \sigma_1^{\alpha_1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab)))) = \sigma_1(\sigma_1^{\alpha_1-1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab))))) \in {\rm nil}(R)$, by part (1). Since ${\rm nil}(R)$ is an ideal of $R$, $(\sigma_1^{\alpha_1-1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab)))))\cdot \sigma_1(\sigma_1^{\alpha_1-1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab)))))\in {\rm nil}(R)$, whence we obtain $\sigma_1^{\alpha_1-1}(\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab))))\in {\rm nil}(R)$ by the definition of a weak $\Sigma$-rigid ring. Continuing in this way we can prove that $\sigma_2^{\alpha_2}(\dotsb (\sigma_n^{\alpha_n}(ab)))\in {\rm nil}(R)$. Again, continuing this procedure we can see that $ab\in {\rm nil}(R)$, and hence also $ba\in {\rm nil}(R)$.
(3) The proof uses a similar argument to the considered in part (2).
\end{proof}
The next proposition generalizes \cite{Ouyang2008}, Proposition 2.4 (compare also with \cite{Reyes2015}, Proposition 3.5).
\begin{proposition}\label{2008Proposition2.4}
If $R$ is a NI and weak $\Sigma$-rigid ring, then $\sigma_i(e) = e\ (1\le i\le n)$, for every central idempotent element $e\in R$.
\end{proposition}
\begin{proof}
Consider $e$ a central idempotent of $R$. It is clear that $e(1-e)=0$. By Proposition \ref{2008Proposition2.3} (1) we obtain $\sigma_i(e)(1-e)\in {\rm nil}(R)$, for $1\le i\le n$. This means that there exists some positive integer $k$ such that $0=(\sigma_i(e)(1-e))^{k} = \sigma_i(e)(1-e)$ (for a fixed $i$), since $\sigma_i(e)$ is an idempotent and $1-e$ is a central idempotent. In this way $\sigma_i(e) = \sigma_i(e)e$, for all $i$. Similarly, from $(1-e)e=0$ we obtain $\sigma_i(1-e)e=0$, and so $e=\sigma_i(e)e$, whence $\sigma_i(e)=e$, for all $i$.
\end{proof}
With the aim of establishing the following proposition, an ideal $I$ of $R$ will be called {\em weak} $\Sigma$-rigid, if $a\sigma^{\theta}(a)\in {\rm nil}(R) \Leftrightarrow a\in {\rm nil}(R)$, for every element $a\in I$ and each $\theta\in \mathbb{N}^{n}$. Our Proposition \ref{2008Proposition2.5} extends \cite{Ouyang2008}, Proposition 2.5.
\begin{proposition}\label{2008Proposition2.5}
If $R$ is an abelian ring with $\sigma_i(e)=e$ $(1\le i\le n)$, for every idempotent element $e$ of $R$, then the following assertions are equivalent:
\begin{enumerate}
\item [\rm (1)] $R$ is weak $\Sigma$-rigid.
\item [\rm (2)] $eR$ and $(1-e)R$ are weak $\Sigma$-rigid ideals.
\end{enumerate}
\end{proposition}
\begin{proof}
$(1)\Rightarrow (2)$ It is clear since $eR$ and $(1-e)R$ are subrings of $R$.
$(2)\Rightarrow (1)$ Let $a$ be a nilpotent element of $R$. Then $ea, (1-e)a\in {\rm nil}(R)$. Having in mind that $eR$ and $(1-e)R$ are weak $\Sigma$-rigid, there exist positive integers $k, l$ with $(ea\sigma_i(ea))^{k} = e(a\sigma_i(a))^{k} = 0$ and $((1-e)a\sigma_i((1-e)a))^{l} = (1-e)(a\sigma_i(a))^{l} = 0$, for a fixed $i$. If we take $m:={\rm max}\{k, l\}$, then $e(a\sigma_i(a))^{m} = (1-e)(a\sigma_i(a))^{m} = 0$. Therefore $(a\sigma_i(a))^{m} = 0$, that is, $a\sigma_i(a)\in {\rm nil}(R)$, for all $i$.
Conversely, suppose that $a\sigma^{\theta}(a)\in {\rm nil}(R)$, for $\theta\in \mathbb{N}^{n}$. Then $ea\sigma^{\theta}(ea)\in {\rm nil}(R)$ and $(1-e)a\sigma^{\theta}((1-e)a) \in {\rm nil}(R)$. So, $ea\in {\rm nil}(R)$ and $(1-e)a\in {\rm nil}(R)$, since $eR$ and $(1-e)R$ are weak $\Sigma$-rigid ideals. Hence $a\in {\rm nil}(R)$, that is, $R$ is weak $\Sigma$-rigid.
\end{proof}
\section{Weak $\Sigma$-skew Armendariz rings}\label{weakSigmaskewArmendariz}
In the literature we find the following notions about Armendariz rings in commutative and noncommutative case concerning Ore extensions.
\begin{definition}\label{cardona}
\begin{enumerate}
\item [\rm (i)] (\cite{LiuZhao2006}, Definition 2.1) A ring $B$ is called {\em weak Armendariz}, if whenever polynomials $f=\sum_{i=0}^{s} a_ix^{i}$ and $g=\sum_{j=0}^{t} b_jx^{j} \in B[x]$ satisfy $fg=0$, then $a_ib_j\in {\rm nil}(B)$, for each $i, j$.
\item [\rm (ii)] (\cite{HongKimKwak2003}, p. 104) $B$ is called $\alpha$-{\em skew Armendariz}, if whenever $f=\sum_{i=0}^{s} a_ix^{i}$ and $g=\sum_{j=0}^{t}b_jx^{j}\in B[x;\alpha]$ with $fg=0$, then $a_i\alpha^{i}(b_j)=0$, for every $i, j$.
\item [\rm (iii)] (\cite{Ouyang2008}, p. 110) $B$ is called {\em weak} $\alpha$-skew Armendariz, if whenever $f=\sum_{i=0}^{s} a_ix^{i}$ and $g=\sum_{j=0}^{t} b_jx^{j}\in B[x;\alpha]$ satisfy $fg=0$, then $a_i\alpha^{i}(b_j)\in {\rm nil}(B)$.\end{enumerate}
\end{definition}
In the context of skew PBW extensions, the authors have defined the following Armendariz notions:
\begin{definition}
Let $A$ be a skew PBW extension over a ring $R$. Then:
\begin{enumerate}
\item [\rm (i)] (\cite{NinoReyes2017}, Definition 3.4) $R$ is called a $(\Sigma, \Delta)$-{\em skew Armendariz ring}, if whenever $f = \sum_{i=0}^{t} a_iX_i$, $g = \sum_{j=0}^{s} b_jY_j\in A$ with $fg=0$, then $a_iX_ib_jY_j=0$, for every value of $i$ and $j$.
\item [\rm (ii)](\cite{ReyesSuarez2016UIS}, Definition 3.1) $R$ is called a $\Sigma$-{\em skew Armendariz ring}, if for elements $f=\sum_{i=0}^{m} a_iX_i$ and $g=\sum_{j=0}^{t} b_jY_j$ in $A$, the equality $fg=0$ implies $a_i\sigma^{\alpha_i}(b_j) = 0$, for all $0\le i\le m$ and $0\le j\le t$, where $\alpha_i = {\rm exp}(X_i)$.
\item [\rm (iii)] (\cite{ReyesSuarezClifford2017}, Definition 4.1) $R$ is a {\em skew-Armendariz} ring, if for polynomials $f=a_0+a_1X_1+\dotsb + a_mX_m$ and $g=b_0+b_1Y_1 + \dotsb + b_tY_t$ in $A$, $fg=0$ implies $a_0b_k=0$, for each $0\le k\le t$.
\item [\rm (iv)] (\cite{Reyes2018}, Definition 3.1) $R$ is called a {\em skew}-$\Pi$ {\em Armendariz ring}, if for elements $f=\sum_{i=0}^{m} a_iX_i,\ g=\sum_{j=0}^{t} b_jY_j$ of $A$, $fg\in {\rm nil}(A)$ implies that $a_ib_j\in {\rm nil}(R)$, for every $0\le i\le m$ and $0\le j\le t$.
\end{enumerate}
\end{definition}
Several relations about these four notions of Armendariz rings for coefficient rings of skew PBW extensions can be found in \cite{ReyesSuarez2016UIS}, Section 3, \cite{ReyesSuarezClifford2017}, Sections 3 and 4, and \cite{Reyes2018}, Section 3. Now, with the aim of extending Definition \ref{cardona} (iii) from Ore extensions of endomorphism type to skew PBW extensions of endomorphism type (which are more general, see Examples \ref{mentioned}), and also $\Sigma$-skew Armendariz rings defined by the first author in \cite{ReyesSuarez2016UIS}, Definition 3.1, (at least in the endomorphism case), we present the following definition.
\begin{definition}\label{weaksigmaskewArmendarizPBW}
Let $A$ be a skew PBW extension of endomorphism type over a ring $R$. $R$ is called a {\em weak} $\Sigma$-{\em skew Armendariz ring}, if for elements $f=\sum_{i=0}^{m} a_iX_i$ and $g=\sum_{j=0}^{t} b_jY_j$ in $A$, the equality $fg=0$ implies $a_i\sigma^{\alpha_i}(b_j)\in {\rm nil}(R)$, for all $0\le i\le m$ and $0\le j\le t$, where $\alpha_i = {\rm exp}(X_i)$.
\end{definition}
The following theorem extends \cite{Ouyang2008}, Theorem 3.3. We need to assume that the elements $c_{i,j}$ in Definition \ref{gpbwextension} (iv) are both central and invertible in $R$. We denote ${\rm nil}(R)A:=\{f\in A\mid f= a_0 + a_1X_1 + \dotsb + a_mX_m,\ a_i\in {\rm nil}(R)\}$.
\begin{theorem}\label{2008Theorem3.3}
If $R$ is a NI and weak $\Sigma$-rigid ring, then $R$ is a weak $\Sigma$-skew Armendariz ring.
\end{theorem}
\begin{proof}
Suppose that $fg=0$, where $f=a_0+a_1X_1+\dotsb + a_mX_m$ and $g=b_0+b_1Y_1+\dotsb + b_tY_t$, with the monomial orders $X_1\prec X_2\prec \dotsb \prec X_m$ and $Y_1\prec Y_2\prec \dotsb \prec Y_t$, respectively (Definition \ref{definitioncoefficients} (ii)). Since $fg =\sum_{k=0}^{m+t} \biggl(\sum_{i+j=k} a_iX_ib_jY_j\biggr)$, we have ${\rm lc}(fg)= a_m\sigma^{\alpha_m}(b_t)c_{\alpha_m, \beta_t}=0$. By assumption, the elements $c_{i,j}$ (Definition \ref{gpbwextension} (iv)) are invertible in $R$, so $c_{\alpha_m,\beta_t}$ is also invertible, and hence $a_m\sigma^{\alpha_m}(b_t)=0$, which means that the element $a_m\sigma^{\alpha_m}(b_t)\in {\rm nil}(R)$. The idea is to prove that $a_p\sigma^{\alpha_p}(b_q)\in {\rm nil}(R)$, for all $p, q$. We proceed by downward induction. Suppose that $a_p\sigma^{\alpha_p}(b_q)\in {\rm nil}(R)$, for $p+q=m+t, m+t-1, m+t-2, \dotsc, k+1$, for some $k>0$. From Remark \ref{juradpr} and Proposition \ref{2008Proposition2.3} we obtain that $a_pX_pb_qY_q$ is an element of ${\rm nil}(R)A$, for these values of $p+q$. In this way we only consider the sum of the products $a_uX_ub_vY_v$, where $u+v=k, k-1,k-2,\dotsc, 0$. Fix $u$ and $v$. Consider the sum of all terms of $fg$ having exponent $\alpha_u+\beta_v$. By Proposition \ref{lindass}, Remark \ref{juradpr} and the assumption $fg=0$, we know that the sum of all coefficients of all these terms can be written as
\begin{equation}\label{Federer}
a_u\sigma^{\alpha_u}(b_v)c_{\alpha_u, \beta_v} + \sum_{\alpha_{u'} + \beta_{v'} = \alpha_u + \beta_v} a_{u'}\sigma^{\alpha_{u'}} ({\rm \sigma's\ and\ \delta's\ evaluated\ in}\ b_{v'})c_{\alpha_{u'}, \beta_{v'}} = 0.
\end{equation}
By assumption we know that $a_p\sigma^{\alpha_p}(b_q)\in {\rm nil}(R)$, for $p+q=m+t, m+t-1, \dotsc, k+1$. So, Proposition \ref{2008Proposition2.3} (1) guarantees that the product
\[a_p({\rm \sigma's\ and\ \delta's\ evaluated\ in}\ b_{q})\ \ \ \ \ \ \ ({\rm any\ order\ of}\ \sigma's\ {\rm and}\ \delta's)
\]
is an element of ${\rm nil}(R)$. Proposition \ref{2008Proposition2.3} guarantees that $({\rm \sigma's\ and\ \delta's\ evaluated\ in}\ b_{q})a_p$ is also an element of ${\rm nil}(R)$. In this way, multiplying (\ref{Federer}) by $a_k$, and using the fact that the elements $c_{i,j}$ in Definition \ref{gpbwextension} (iv) are in the center of $R$,
\begin{equation}\label{doooooo}
a_u\sigma^{\alpha_u}(b_v)a_kc_{\alpha_u, \beta_v} + \sum_{\alpha_{u'} + \beta_{v'} = \alpha_u + \beta_v} a_{u'}\sigma^{\alpha_{u'}} ({\rm \sigma's\ and\ \delta's\ evaluated\ in}\ b_{v'})a_kc_{\alpha_{u'}, \beta_{v'}} = 0,
\end{equation}
whence $a_u\sigma^{\alpha_u}(b_0)a_k\in {\rm nil}(R)$. Since $u+v=k$ and $v=0$, then $u=k$, so $a_k\sigma^{\alpha_k}(b_0)a_k\in {\rm nil}(R)$, whence $a_k\sigma^{\alpha_k}(b_0)\in {\rm nil}(R)$ by Proposition \ref{2008Proposition2.3}. Therefore, we now have to study the expression (\ref{Federer}) for $0\le u \le k-1$ and $u+v=k$. If we multiply (\ref{doooooo}) by $a_{k-1}$ we obtain
{\small{\[
a_u\sigma^{\alpha_u}(b_v)a_{k-1}c_{\alpha_u, \beta_v} + \sum_{\alpha_{u'} + \beta_{v'} = \alpha_u + \beta_v} a_{u'}\sigma^{\alpha_{u'}} ({\rm \sigma's\ and\ \delta's\ evaluated\ in}\ b_{v'})a_{k-1}c_{\alpha_{u'}, \beta_{v'}} = 0.
\]}}
Using a similar reasoning as above, together with the assumptions on the elements $c_{\alpha_u,\beta_1}$, we can see that $a_u\sigma^{\alpha_u}(b_1)a_{k-1}\in {\rm nil}(R)$. Now, since $a_u\sigma^{\alpha_u}(b_1)a_{k-1}\in {\rm nil}(R)$ and $u=k-1$, Proposition \ref{2008Proposition2.3} implies that $a_{k-1}\sigma^{\alpha_{k-1}}(b_1)\in {\rm nil}(R)$. Continuing in this way, we prove that $a_i\sigma^{\alpha_i}(b_j)\in {\rm nil}(R)$, for $i+j=k$. Therefore $a_i\sigma^{\alpha_i}(b_j)\in {\rm nil}(R)$, for $0\le i\le m$ and $0\le j\le t$.
\end{proof}
\begin{remark}\label{2008Example3.4}
The importance of the condition NI on $R$ in Theorem \ref{2008Theorem3.3} can be appreciated in the following example taken from \cite{Ouyang2008}, Example 3.4, which presents a noncommutative ring which is weak $\Sigma$-rigid but not weak $\Sigma$-skew Armendariz. Let $R$ be a ring and $M_2(R)$ be the $2\times 2$ matrix ring over $R$. Let
\[
S = \biggl \{\begin{pmatrix} A & B\\ 0 & C\end{pmatrix}\mid A, B, C \in M_2(R) \biggr\}.
\]
It is clear that $S$ is a ring with usual matrix operations. If we consider the endomorphism $\sigma:S\to S$ defined by
\[
\sigma \biggl ( \begin{pmatrix} A & B\\ 0 & C\end{pmatrix}\biggr) = \biggl (\begin{pmatrix} A & -B\\ 0 & C\end{pmatrix} \biggr),\ \ \ \ \begin{pmatrix} A & B\\ 0 & C\end{pmatrix} \in S,
\]
then $S$ is weak $\sigma$-rigid but not weak $\sigma$-skew Armendariz.
\end{remark}
\section{Future work}\label{futurework}
Having in mind that $\Sigma$-rigid rings have been studied in several papers concerning ring theoretical properties such as Armendariz, Baer, quasi-Baer, p.p. and p.q.-Baer rings, zip, McCoy, invariant ideals, ascending chain condition on principal left (resp. right) ideals, and others (c.f. \cite{Reyes2015}, \cite{ReyesSuarez2016Boletin}, \cite{ReyesSuarez2016UIS}, \cite{NinoReyes2017}, \cite{ReyesSuarezClifford2017}, \cite{ReyesSuarezUMA2018}, \cite{Reyes2018} and \cite{ReyesSuarezYesica2018}), there is a considerable number of results about $\Sigma$-rigid rings which can be extended to the more general setting of weak $\Sigma$-rigid rings. This will be our line of thinking in future papers.
\vspace{0.5cm}
\noindent {\bf \Large{Acknowledgements}}
\vspace{0.5cm}
The first author was supported by the research fund of Facultad de Ciencias, Universidad Nacional de Colombia, Bogot\'a, Colombia, HERMES CODE 41535.
Q: Is it possible to have a static index field for Liferay using the solr-web plugin? Can anyone tell me if I can associate a static index field with Liferay using the solr-web plugin? Is there a way to define a static index field in Solr?
I need something similar to the following configuration in Nutch
<property>
<name>index.static</name>
<value>source:nutch</value>
</property>
This will add the field "source" with the value "nutch" to every document indexed by Nutch. Is there anything similar for Liferay + Solr?
A: I'm not sure about the Liferay configuration; however, you can add a default value in schema.xml which will be applied to documents.
<field name="source" type="string" indexed="true" stored="true" default="Nutch" />
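If you want the field stamped on every incoming document at index time (closer to Nutch's index.static), an update request processor chain in solrconfig.xml can also do it. A sketch (the chain name add-static-source is made up; solr.DefaultValueUpdateProcessorFactory fills the field only when the document does not already provide one):

```xml
<updateRequestProcessorChain name="add-static-source">
  <processor class="solr.DefaultValueUpdateProcessorFactory">
    <str name="fieldName">source</str>
    <str name="value">nutch</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

You would then point your update handler at this chain so it runs on every add.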
Would it be a good place to store the info from http://gitorious.org/opensuse/osc-plugin-collab/blobs/master/server/upstream/upstream-tarballs.txt ?
A way to know which packages are outdated (compared to upstream) is something I really miss from the OBS. I think having to submit new entries to gitorious is the reason it's not used more.
As part of the boosters' factory project, it is planned to store for example the upstream url and version in attributes.
This should actually go into the _service file in the near future, to be able to download and build new sources automatically. Otherwise we would need to sync the data between two places, which is not such a good idea.
The version number can already be requested from the source server, btw.
Archer Daniels Midland (ADM) will commit to achieving gender parity across its senior leadership structure by 2030.
The agri-business is partnering with Paradigm for Parity, a coalition of business leaders dedicated to addressing the corporate leadership gender gap.
Through the partnership, ADM is agreeing to address unconscious bias in the workplace; significantly increase the number of women in senior operating roles, with a goal of at least 30% representation in all leadership groups during the short term; and providing sponsors to women well-positioned for long-term success.
The business will also measure targets and maintain accountability by providing regular progress reports, as well as promising to base career progress on business results and performance rather than physical presence in the office.
And it will drive internal efforts on elimination of the wage gap, ensuring fairness in the hiring process, and widespread diversity and inclusion communications.
ADM chief executive Juan Luciano said: "We recognise that our success as a company and as an industry relies on developing, creating and growing an inclusive culture and diverse workforce. We believe that true innovation arises from having many different perspectives and backgrounds represented at the highest levels of an organisation, and we have a comprehensive plan in place to promote inclusion in all roles, at all levels at ADM."
The food industry has long suffered from a gender imbalance at boardroom level and, following the departures of PepsiCo's Indra Nooyi and Mondelēz's Irene Rosenfeld, there are currently just two female chief executives at any of the world's 100 largest food and beverage companies.
Michele Buck has been CEO at Hershey since March 2017.
And Beth Ford took over at Land O' Lakes last August, and in so doing became the first openly gay woman to lead a Fortune 500 company.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 9,596
|
(Theory webinar) Information Choice under Ambiguous Signal Structures
Date: 2021-12-09 (Thursday)
Time: 02:30 PM
Venue: online or Conference Room B110
Host: Professor Yi-Hsuan Lin
Speaker: Professor Norio Takeoka
Speaker Bio: Professor Takeoka received his Ph.D. in Economics from the University of Rochester in 2006. He is currently a Professor at Hitotsubashi University. His research fields are Microeconomics, Decision Theory, Game Theory, and General Equilibrium.
Abstract: As in the growing literature on rational inattention, decision making about information acquisition has been increasingly recognized in economics. If the agent may not be able to form a probabilistic belief over states because of the scarcity of information relevant for decision making, a decision about information acquisition seems even more significant. By adopting the choice-theoretic model of information acquisition provided in de Oliveira, Denti, Mihm, and Ozbek (2017), we argue that one of their axioms, which takes the form of quasi-convexity of preference, excludes ambiguity aversion toward three types of ambiguity sources: ambiguity about priors, ambiguity about posteriors, and ambiguity about the feasibility of information structures. By relaxing their quasi-convexity axiom, we axiomatically characterize a model of information acquisition which allows for the first and the third types of ambiguity aversion.
---
rss: false
layout: section
title: "Exception and Error Handling"
seotitle: "Exception and Error Handling"
description: "TBD description."
introduction: "The Chrome DevTools Console panel exposes a wealth of information about your page's exceptions and errors."
article:
  written_on: 2015-04-14
  updated_on: 2015-04-14
  order: 6
authors:
- megginkearney
priority: 0
collection: javascript
panel: sources
id: exceptions-errors
---
{% wrap content %}
Page exceptions and JavaScript errors are actually quite useful -- if you can get to the details behind them. When a page throws an exception or a script produces an error, the Console provides specific, reliable information to help you locate and correct the problem. In the Console you can track exceptions and trace the execution path that led to them, explicitly or implicitly catch them (or ignore them), and even set error handlers to automatically collect and process exception data.
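For example, explicitly catching an exception lets you log the same name and message details the Console shows for uncaught errors (a minimal sketch; `parseConfig` is an invented helper):

```javascript
// Explicitly catch an exception and log its details to the Console.
function parseConfig(json) {
  try {
    return JSON.parse(json);
  } catch (err) {
    // err.name, err.message, and err.stack are the details the Console displays
    console.error(err.name + ': ' + err.message);
    return null;
  }
}

parseConfig('{"theme": "dark"}'); // returns the parsed object
parseConfig('not json');          // logs a SyntaxError and returns null
```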
{% endwrap %}
The Hesse campaign of 1760 is the set of military operations and battles conducted and fought in the Hesse-Kassel theatre of operations in the year 1760 during the Seven Years' War.
European theatres of operations of the Seven Years' War and the situation on the eve of the campaign
Theatres of operations
The land war in Europe took place on two distinct fronts:
central Europe and eastern Germany on one side, with Silesia, Pomerania, Saxony and Prussia as theatres of operation, where the main opponents were the Kingdom of Prussia, the Duchy of Brunswick-Lüneburg, the Electorate of Saxony, the Archduchy of Austria and the Russian Empire
western Germany (the present-day Länder of Lower Saxony, North Rhine-Westphalia and Hesse) on the other, theatre of the "battle for Hanover", Great Britain's "natural" ally since the accession to the British throne of George of Great Britain, former Duke of Brunswick-Lüneburg, where the Kingdom of France and the Duchy of Württemberg, an ally of Austria, faced the coalition formed by the United Kingdom, the Electorate of Brunswick-Lüneburg and the Landgraviate of Hesse-Kassel.
Previous campaigns in Hesse-Kassel
The Landgraviate of Hesse-Kassel had already been the theatre of the Battle of Hastenbeck in July 1757.
On , a French army under the command of the Comte de Clermont and a coalition army under the orders of the Duke of Brunswick clashed at the Battle of Rheinberg. Nine days later, the Hanoverian troops, commanded by Ferdinand of Brunswick-Lüneburg, brother of the Duke of Brunswick, fought the French troops commanded by Louis de Bourbon-Condé, Comte de Clermont, at Krefeld.
On , Ferdinand, at the head of his Hanoverian troops, met the French troops commanded by Victor-François de Broglie at the Battle of Bergen; the French withstood the Hanoverians but did not exploit their advantage after the latter's withdrawal.
Situation on the eve of the campaign
On , Landgrave William VIII of Hesse-Kassel died at Rinteln. His son Frederick II succeeded him and reorganised the Hessian army.
Campaign of 1760
The campaign of 1760 was marked by a series of marches, coups de main and pitched battles in which the French army most often held the advantage of the initiative.
Combat of Corbach
The combat of Corbach, fought on at Korbach, was the first battle of the campaign and resulted in a French victory over the Hanoverians, the British and their allies.
Battle of Emsdorf
The Battle of Emsdorf took place on at and pitted the allied forces of Hanover, the United Kingdom and Hesse, under the command of Prince Frederick II of Hesse-Kassel, against the French under the command of Baron Chrétien Sigismond de Glaubitz (1711-1765). This time it was the Allies who won the victory.
Battle of Warburg
Battle of Kloster Kampen
This month of was also marked by the death of King George II of Great Britain on the 25th.
Bibliography
Correspondance inédite de Victor-François, duc de Broglie, maréchal de France, avec le prince Xavier de Saxe, comte de Lusace, lieutenant général, pour servir à l'histoire de la guerre de sept ans (campagnes de 1759 à 1761):
Volume I, - , Albin Michel, Paris, 1903-1905 (read online)
Volume II, June-, id. (read online)
Volume III, - , id. (read online)
Carl Renouard: Geschichte des Krieges in Hannover, Hessen und Westfalen von 1757 bis 1763, Zweiter Band: Die Feldzüge von 1759 und 1760. Cassel, 1864 (read online)
External links
Kronoskaf:
1760 - French campaign in West Germany – French offensive in Hesse
1760 - French campaign in West Germany – Winter operations
1760 - French campaign in West Germany – Campaign till the combat of Korbach
Seven Years' War Battles Involving the Hesse-Kassel
Notes and references
Notes
References
Battle of the Seven Years' War
Battle of 1760
\section{Introduction}\label{sec:intro}
This paper discusses two of the major challenges when building open-domain social dialogue systems:
\begin{enumerate}
\item How can we facilitate open domain interaction while still executing control?
\item Which utterance fits best in a given dialogue context?
\end{enumerate}
Early systems for social chat, such as ELIZA \citep{eliza}, were based on carefully handwritten rules, but recent systems are often trained using a variety of (deep) learning techniques over large public data sets, such as OpenSubtitles or Twitter \citep[e.g.][]{vinyals2015neural,sordoni-EtAl:2015:NAACL-HLT,Li2016}.
However, learning directly from data also has its pitfalls when deploying a system to real customers, as recent examples such as Microsoft's Tay bot
demonstrate.
We present a hybrid model, incorporating hand-crafted rules (validated and developed through customer feedback) and machine learning models trained on carefully chosen datasets.
Following previous hybrid systems, \cite[e.g.][]{yu_strategy_2016}, we apply a ranker model to select the most relevant reply from a pool of replies generated by an ensemble of different agents/bots.
It is still an open question how to best define this ranking function. Previous work has manually defined an evaluation function
based on hand-selected turn-level features \citep{yu_strategy_2016,Li2016}.
Other work has experimented with learning from crowdsourced user ratings \citep{lowe:acl2017}.
One major drawback of such previous work is that it only evaluates a possible response locally, i.e.\ per turn, rather than considering its contribution to the overall dialogue outcome (e.g.\ engaging the user). As such, these ranking functions often favour safe, but dull responses \citep{lowe:acl2017}.
We experimented with a variety of ranking functions and datasets as described below.
This resulted in one of the top bots in the competition according to average customer rating, as well as with respect to average dialogue length.
\section{System Design and Architecture}\label{sec:system}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{alexa_architecture.png}
\rule{35em}{0.5pt}
\caption[Alana/ Watt's up Architecture]{Alana is a hybrid hierarchical architecture with ranking} \label{fig:alexa_architecture}
\end{figure}
The system architecture is shown in Fig.~\ref{fig:alexa_architecture}.
We rely on an ensemble of bots. These bots fall into two main categories:
\begin{enumerate}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt,label=\textbf{\arabic*.}]
\item {\bf Data-driven Bots:} We experimented with retrieval-based bots as well as generative Sequence-to-Sequence models (Seq2Seq, see Section \ref{sec:seq2seq}). While the former always produce well-formed sentences (as retrieved from the data set), the latter can generate new and possibly more contextually appropriate replies, however at the expense of needing larger data sets to learn from. We follow previous work by combining both paradigms into an ensemble-based approach \citep{SongYLZZ:2016}.
\item {\bf Rule-based bots} are used to respond to the specific user queries in a controlled and consistent way, (e.g.\ to queries about the personality of our bot, such as favourite things etc., or the weather), using a combination of in-house developed bots and extended versions of 3rd party bots.
\end{enumerate}
These two categories include the following bots:
{\bf Persona:} A rule-based system implemented in AIML\footnote{\url{http://www.alicebot.org/aiml.html}} whose main purpose is to maintain personality-based responses consistent across turns, such as music tastes or other preferences. \textit{Persona} also includes replies to other topics, where we want to guarantee an appropriate response to inappropriate user utterances and topics such as sex, as per the competition rules.
{\bf Eliza:} We extended an existing
Eliza-style chatbot called \textit{Rosie}.\footnote{\url{https://github.com/pandorabots/rosie}} Since the initial \textit{Rosie} bot was designed for
mobile devices, we heavily altered it for the Challenge.
{\bf NewsBot:}
An information retrieval bot based on an open-source framework Lucene.\footnote{\url{https://lucene.apache.org}} We build and continuously populate a search index
of selected news sources provided via NewsAPI.\footnote{\url{https://newsapi.org}}
For indexing as well as for the bot's responses, we use summaries of the news articles extracted with an open-source library called {\em Sumy}.\footnote{\url{https://pypi.python.org/pypi/sumy}}
In order to select a relevant piece of news for a user's query,
we create 1, 2, and 3-grams over the user's utterance and dialogue context.
We employ the BM25 algorithm to score news relevance,
with named entities and noun phrases from the user query boosted using a set of weights adjusted empirically. A re-ranking step is then applied for the top 10 candidates based on the articles' recency.
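A self-contained sketch of this scoring step is given below (the BM25 constants are common defaults, and the boost weights and toy documents are purely illustrative, not the empirically adjusted values used in the deployed system):

```python
import math
from collections import Counter

K1, B = 1.5, 0.75  # common BM25 defaults; the deployed weights were tuned empirically

def bm25_scores(query_terms, docs, boosts=None):
    """Score tokenized documents against query terms; `boosts` maps a
    term (e.g. a named entity or noun phrase) to a weight multiplier."""
    boosts = boosts or {}
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] * (K1 + 1) / (tf[t] + K1 * (1 - B + B * len(d) / avgdl))
            score += boosts.get(t, 1.0) * idf * norm
        scores.append(score)
    return scores

docs = [["nasa", "launches", "new", "mars", "probe"],
        ["stock", "markets", "rally", "today"],
        ["mars", "rover", "finds", "water", "traces"]]
scores = bm25_scores(["mars", "probe"], docs, boosts={"mars": 2.0})
best = max(range(len(docs)), key=scores.__getitem__)  # index of the top story
```

In the deployed system, the top 10 candidates returned by this kind of scoring are then re-ranked by article recency.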
{\bf Factbot -- Fun facts, Jokes, and Stories:} A collection of facts, jokes and stories that get triggered whenever the user specifically asks for them or as a deflection strategy when no suitable response is found.
For the fun facts, the user can also specify a named entity (``{\em Tell me a fact about X}'').
Otherwise, a
fact is chosen randomly. The data was collected from a multitude of online resources.
{\bf Quiz Game:} A hand-crafted system developed using a VoiceXML-based structure. During the game, the user has to guess the right answer to topic-specific questions
(e.g.\ 80's music, science, history, sport and geography).
The user can end the game at any point.
{\bf EVI:} A third party bot retrieving factual information (if applicable) about the user utterance, powered by the EVI question answering engine API.\footnote{\url{https://www.evi.com/}} This bot returns only one candidate if there is one. Some EVI answers which would not be appropriate in a dialogue are filtered out.
{\bf Weatherbot:} A simple rule-based bot that provides the user with weather-related information, if asked for, querying the \textit{OpenWeatherMap API}\footnote{\url{https://openweathermap.org/}} on the fly.
Each of these bots produces a possible system utterance according to its internal rules. Note that not all bots fire at each turn.
All the returned candidates are postprocessed and normalized.
Profanity, single-word and repetitive (news only) candidates are filtered out.
The final system response is selected in three steps:
\begin{enumerate}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt,label=\textbf{\arabic*.}]
\item \textbf{Bot priority list.} Some of the deployed bots are prioritized, i.e.\ if they produce a response, it is always selected. The priority order is the following: \textit{Quiz game, Factbot, Weatherbot, Persona, Evi}.
\item \textbf{Contextual priority.} The NewsBot's response is prioritized if it stays on the topic of a previously mentioned news story.
\item \textbf{Ranking function.} If none of the priority bots produced an answer, the rest of the deployed bots' responses populate the list of candidates and the best response is selected via a ranking function, see Section \ref{sec:devel}.
\end{enumerate}
In the extreme case where none of the bots produced an answer (or all of them were filtered out due to postprocessing rules), the system returns a random fun fact, produced by the \textit{Factbot}.
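The three selection steps and the fallback can be sketched as follows (the bot names mirror the priority list above; the ranker callable stands in for the learned ranking function):

```python
import random

PRIORITY = ["quiz_game", "factbot", "weatherbot", "persona", "evi"]
FUN_FACTS = ["Honey never spoils."]  # stand-in for the Factbot's collection

def select_response(candidates, ranker, news_on_topic=False):
    """candidates maps a bot name to its candidate reply, or None
    if the bot did not fire (or was filtered out) this turn."""
    for bot in PRIORITY:                       # 1. bot priority list
        if candidates.get(bot):
            return candidates[bot]
    if news_on_topic and candidates.get("newsbot"):
        return candidates["newsbot"]           # 2. contextual priority
    pool = [reply for reply in candidates.values() if reply]
    if pool:
        return max(pool, key=ranker)           # 3. ranking function
    return random.choice(FUN_FACTS)            # fallback: a random fun fact

reply = select_response({"eliza": "Tell me more.", "factbot": None}, ranker=len)
```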
Please refer to \cite{alexa_2017} for more details.
\subsection{Other Bots and Data}
We also experimented with other data-driven bots, which were not included in the final system.
\subsubsection{Data Sets for Information Retrieval Bots}\label{ssec:data}
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt]
\item\textbf{OpenSubtitles} \citep{lison_opensubtitles2016:_2016}, with the automatic turn segmentation provided by \citet{lison_automatic_2016}. We used all dialogues of two or more turns and filtered the data as described below.
\item\textbf{Cornell Movies, Jabberwacky, CNN:} the following datasets proved to be too small for our purposes: the Cornell Movie Dataset \citep{danescu-niculescu-mizil_cornell-movies_2011}, the Jabberwacky chatbot chat logs\footnote{\url{http://www.jabberwacky.com/}}, and the CNN chat show transcripts from \cite{yu_strategy_2016,yu_learning_2017}.
\end{itemize}
In order to comply with the competition rules, we first filtered the data for profanities.
However, profanities are often context-dependent
and hard to capture by a purely lexicon-driven approach.
As such, we experimented with restricting the OpenSubtitles data set using age ratings of the movies.
We obtained movie ratings from IMDb
and only included in our dataset the movies with a U.S.\ "G" or U.K.\ "U" ratings ("general", "universal").
Another problem with the OpenSubtitles data was the occurrence of many personal names and other named entities that would appear out of context in a dialogue. We used Stanford NER \citep{finkel_stanford-ner_2005} to detect named entities and filtered out all context-response pairs containing named entities in the response.
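The resulting pair filtering might look like the following sketch (the profanity lexicon is a toy placeholder, and the entity finder argument stands in for the Stanford NER tagger):

```python
PROFANITY = {"damn"}  # toy stand-in for the full profanity lexicon

def clean_pairs(pairs, find_entities):
    """Keep only (context, response) pairs whose response contains
    neither profanities nor named entities."""
    kept = []
    for context, response in pairs:
        words = response.lower().split()
        if any(w.strip(".,!?") in PROFANITY for w in words):
            continue                      # profanity filter
        if find_entities(response):
            continue                      # named-entity filter
        kept.append((context, response))
    return kept

pairs = [("How are you?", "Fine, thanks."),
         ("Who was that?", "That was John Smith."),
         ("Ouch!", "Damn, that hurt.")]
# toy "tagger": flags capitalised words after the first position
toy_ner = lambda text: [w for w in text.split()[1:] if w[:1].isupper()]
kept = clean_pairs(pairs, toy_ner)  # only the first pair survives
```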
However, the downside of this approach is that we ended up with much smaller data sets,
which made data-driven approaches, such as the generative Seq2Seq model, less feasible.
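The named-entity filtering step described above can be sketched as follows. Note that `toy_ner` is a deliberately crude stand-in for Stanford NER (flagging non-initial capitalized tokens), used here only so the sketch is self-contained:

```python
import re

def toy_ner(text):
    """Toy stand-in for Stanford NER: flags capitalized, non-sentence-initial
    tokens as candidate named entities (illustration only)."""
    tokens = text.split()
    return [t for t in tokens[1:] if re.match(r"^[A-Z][a-z]+", t)]

def filter_pairs(pairs):
    """Keep only context-response pairs whose response contains no
    candidate named entities, mirroring the filtering step above."""
    return [(c, r) for c, r in pairs if not toy_ner(r)]

pairs = [("how are you", "fine, thanks"),
         ("where do you live", "I moved to Boston last year")]
print(filter_pairs(pairs))  # keeps only the first pair
```

In the real pipeline, the same filter was applied to the response side only, so named entities could still appear in contexts.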
\subsubsection{Seq2Seq}\label{sec:seq2seq}
Throughout system development, we experimented with a sequence-to-sequence dialogue model \citep{vinyals2015neural}, training it on several datasets.
The first promising behaviour was obtained with Twitter data\footnote{\url{https://github.com/Marsan-Ma/chat_corpus}}: it was interesting and mostly grammatical, yet often offensive or political. We then switched to a subset of Reddit logs~-- over 21,000 conversation snippets in the form of question-answer pairs cleaned from profanity and filtered to only contain small-talk conversation (thanks to Dr.\ Zhuoran Wang).
In order to exclude ungrammatical responses, we disregarded all answers with a low confidence score (defined as the sum of the logits at the decoder's output). We adjusted the confidence threshold empirically on a separate development set of 100 sample user utterances both collected from WoChat\footnote{\url{http://workshop.colips.org/wochat/}} transcripts and paraphrased from a list of popular daily topics provided by Amazon.
The experiment thus resulted in a casual conversation bot: its answers are supposed to be given at times when the user is following up on the previous system's answer or just hesitating.
Due to time constraints, the final version of the seq2seq bot was not deployed into production, and so its possible contribution to the users' ratings is left for future work.
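The confidence filtering described above can be sketched as follows; the threshold and logit values here are illustrative, not the ones tuned on the development set:

```python
def confident_response(candidates, threshold=-15.0):
    """Return the seq2seq candidate with the highest confidence (the sum
    of its output logits, as described above) if it clears a threshold,
    otherwise None so another bot's answer can be used instead."""
    scored = [(sum(logits), resp) for resp, logits in candidates]
    best_score, best_resp = max(scored)
    return best_resp if best_score >= threshold else None

cands = [("i like music too", [-2.1, -3.0, -1.5, -2.4]),
         ("asdf qwer", [-8.0, -9.5, -7.2, -6.1])]
print(confident_response(cands))  # "i like music too"
```

Returning `None` for low-confidence outputs is what lets the ensemble fall back to the other bots rather than emit an ungrammatical response.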
\section{Example Dialogue}\label{sec:examples}
Note: The dialogue presented here does not come from real customer data, but was recreated by interacting with our system (running a text-based version on Telegram). The same structure of interaction and the same named entities are used as occurred in a real dialogue. We also indicate which bot in our ensemble generated the answer.
\ignore{
\noindent\fbox{%
\parbox{\textwidth}{%
U: can we talk about music\\
A: \comment{(Persona)} Who is your favorite band?\\
U: the Doors\\
A: \comment{(Evi)} The Doors were an American rock band formed in 1965
in Los Angeles, with vocalist Jim Morrison, keyboardist Ray Manzarek,
guitarist Robby Krieger, and drummer John Densmore. What you think
about Los Angeles.\\
U: it's an okay place\\
A : \comment{(Eliza)} I can see you've put thought into that.\\
U: and what are your views on it\\
A: \comment{(News API)} I saw this on the news, Los Angeles
strikes deal to host 2028 Summer Olympics: Source. Do you want to
learn more?\\
U: yes sure\\
A: \comment{(Multi-turn News)} I'm so happy that you are interested in this topic.
Here's more. Then in June, the International Olympic Committee decided
that with two strong candidates, Los Angeles and Paris, it would give
one city the 2024 games and the other the 2028 games. This will be Los
Angeles' third time hosting the Olympics. What do you think about Los
Angeles? \VR{Repetition!}\\
U: ...
}%
}
\noindent\fbox{%
\parbox{\textwidth}{%
U: so have you ever played a game\\
A: \comment{(Eliza?)} No I didn't. What's like?\\
U: I can tell you about my favorite game Minecraft\\
A: \comment{(News API)} Did you see the news about Minecraft is
now available for cross-play on any device. With the launch of
Microsoft's new Xbox One X gamers got an update on the latest
Minecraft news -- chiefly that Minecraft is now available across any
device. Do you want to know more?\\
U: yes thanks for telling me that because I didn't know that\\
A: \comment{(Multi-turn News)} That's all right. Let's talk about it more. The new
update also features better graphics and cross-platform support to let
Windows 10 and mobile Minecraft players play the game together. So a
pack bought on mobile will also be available on Xbox and Windows 10
editions too. I was wondering what you think about Minecraft.\\
U: I love it it's a really fun game ....\\
A: ...
}%
}
\noindent\fbox{%
\parbox{\textwidth}{%
U: is there any gossip\\
A : Do you think Miley Cyrus and Liam Hemsworth will last?\\
U: how old is she \comment{(Anaphora resolution)}\\
A Miley Cyrus is 24 years old.\\
User: well is there any news about her \comment{(Anaphora resolution)}\\
A: I've been reading the news and I saw an article about Miley
Cyrus on toning down her image: 'I did feel sexualized'. Miley Cyrus
has said goodbye to her tongue-wagging, twerking days. Do you want to
learn more?\\
U: sure\\
A: I'm so happy that you are interested in this subject. Here's
more. The Malibu singer said she decided to tone down her image after
her sometimes shocking antics left her feeling exploited. It became
something that was expected of me, she said of her over-the-top
fashion sense and wild persona in a new cover story for Harper's
Bazaar. What are your views on Miley Cyrus?\\
}%
}
}
\noindent\fbox{%
\parbox{\textwidth}{%
A: Hi, [...]
What would you like to talk about? \\
U: music \\
A: \comment{(Persona)} Great! Who is your favorite singer? \\
U: Bob Dylan \\
A: \comment{(Evi + additional question)} Bob Dylan is an American songwriter, singer, painter, and writer. What are your opinions on Bob Dylan?\\
U: When was {\bf he} born? \\
A: \comment{(Evi + anaphora resolution)} Bob Dylan was born on Saturday May 24th 1941. \\
U: What's happening with {\bf him}?\\
A: \comment{(News-multi-turn + anaphora resolution)} I heard this on the radio -- Was Dylan too freewheelin' in borrowing for Nobel lecture? The whiff of plagiarism is blowin' in the wind for Bob Dylan. Want to know more?\\
U: sure\\
A: \comment{(News-multi-turn)} It seems that you are interested in this topic. I'm so glad. Here's more. Here's Dylan: Some men who receive injuries are led to God, others are led to bitterness The cases Pitzer found are not blatant or explicit -- there are no verbatim sentences, only identical phrases and similar phrasing. What are your opinions on Bob Dylan?\\
U: I love him! \\
A: \comment{(Persona)} Great! What else do you like? \\
U: cooking \\%\comment{(Transformed into ``I like cooking.")} \\
A: \comment{(Facts)} I heard that every day, Americans eat an estimated 18 acres of pizza
}}
\section{Experiments with Ranking Functions}\label{sec:devel}
The responses proposed by each bot are ranked according to a set of features.
We have experimented with several ranking functions.
\subsection{Hand-engineered Ranker function}\label{sec:handcrafted-ranker}
The hand-engineered ranking function uses the following features:
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt]
\item \textbf{Coherence:} Following \cite{Li2016}, we reward semantic similarity between the user's utterance and the candidates using Word2Vec \citep{word2vec}.
\item \textbf{Flow:} Also similar to \cite{Li2016}, we penalise similarity between consecutive system utterances in order to prevent repetition. Here, we use both Word2Vec
and METEOR word n-gram overlap
as measures of similarity.
\item \textbf{Questions:}
By promoting questions, we aim to incite the user to continue the conversation.
\item \textbf{Named Entities:} We strongly reward utterances containing the same named entities as the user's reply to promote candidates relating to the same topic.
\item \textbf{Noun Phrases:} Similarly, we reward matching noun phrases between the user's and the system's utterances. Noun phrases are identified based on part-of-speech tagging.
\item \textbf{Dullness:} We compare each response to a list of dull responses such as ``I don't know" and penalise Word2Vec similarity between them, since we would like the bot's utterances to be engaging, similarly to \cite{Li2016}.
\item\textbf{Topic Divergence:} We trained a Latent Dirichlet Allocation (LDA) model on a weighted combination of preprocessed versions of the OpenSubtitles and the WashingtonPost datasets. We set the vocabulary size to $20k$ and the number of topics to $200$, and we used a tailored stop-words list. For every proposed answer in the bucket, we compute the topic divergence from the user utterance.
\item\textbf{Sentiment Polarity:} We use the VADER sentiment analyser \citep{gilbert_vader:_2014} from the NLTK toolkit,\footnote{\url{http://www.nltk.org/api/nltk.sentiment.html}} which provides a floating point value indicating sentence sentiment.
\end{itemize}
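For instance, the topic-divergence feature above can be realised as a Jensen-Shannon divergence between the LDA topic distributions of the user utterance and a candidate response; the exact divergence measure is not fixed in the description, so this choice is an assumption:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1]) between two
    LDA topic distributions; one plausible topic-divergence feature."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = js_divergence([0.7, 0.2, 0.1], [0.7, 0.2, 0.1])
diff = js_divergence([0.9, 0.05, 0.05], [0.05, 0.05, 0.9])
print(same, diff)  # ~0 for identical topic mixtures, larger otherwise
```

A response whose topic mixture diverges strongly from the user's utterance would thus be penalised by the final score below.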
These features are calculated using the last two system turns in order
to maintain dialogue context.
The final score is a weighted sum of these features:
\begin{equation}
\begin{split}
score = 0.25*turn_0 + 0.25*turn_1 + 0.25*turn_2 + 0.25*noun\_phrases\\
+ 3*named\_entities - 0.25*topic\_divergence
\end{split}
\end{equation}
where $turn_i$ is computed
using the $i$-th utterance counting from the end of the dialogue history:
\begin{equation}
\begin{split}
turn_i = -0.2*flow_{sem\_similarity} - 3*flow_{METEOR} + 0.1*coherence_{sem\_similarity}\\
- 0.24*dullness + 0.2*question + 0.1*sentiment\_polarity
\end{split}
\end{equation}
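The two equations above translate directly into code; the feature values in the usage example are illustrative, not taken from a real dialogue:

```python
def turn_score(f):
    """Per-turn feature combination, mirroring the second equation above.
    `f` maps feature names to values computed against one past utterance."""
    return (-0.2 * f["flow_sem"] - 3 * f["flow_meteor"]
            + 0.1 * f["coherence_sem"] - 0.24 * f["dullness"]
            + 0.2 * f["question"] + 0.1 * f["sentiment"])

def rank_score(turns, noun_phrases, named_entities, topic_divergence):
    """Final weighted sum, as in the first equation above; `turns` holds
    feature dicts for the last three utterances (turns 0..2)."""
    return (0.25 * sum(turn_score(t) for t in turns)
            + 0.25 * noun_phrases + 3 * named_entities
            - 0.25 * topic_divergence)

neutral = {"flow_sem": 0, "flow_meteor": 0, "coherence_sem": 1,
           "dullness": 0, "question": 1, "sentiment": 0}
print(rank_score([neutral] * 3, noun_phrases=1, named_entities=1,
                 topic_divergence=0.2))  # 3.425
```

The large weight on named entities reflects the design choice above: staying on the user's topic dominates the ranking.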
\subsection{Linear Classifier Ranker}\label{sec:linear-ranker}
In order to use the feedback ratings obtained from real users in the competition, we also trained the VowpalWabbit linear classifier \citep{langford_vowpal_2007} to rank Bucket responses based on the following features:
\begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt]
\item bag-of-n-grams from the context (preceding 3 utterances) and the response (unigrams, bigrams, and trigrams)
\item position-specific n-grams at the beginning of the context and the response (first 5 positions)
\item dialogue flow features, same as for the hand-engineered ranker (see Section~\ref{sec:handcrafted-ranker})
\item bot name.
\end{itemize}
The ranker is trained as a binary classifier, but it outputs a floating-point score in practice. At runtime,
the highest-scoring response is selected for the output.
We initially trained the ranker on Cornell movies, Twitter, and Jabberwacky datasets (see Section~\ref{ssec:data}), with positive examples from the real dialogues and negative ones randomly sampled from the rest of the set, but the ranker only learned to prefer responses similar to data from these datasets; its performance in real dialogues was lacking in our tests.
Therefore, after collecting enough live dialogues during the Alexa Prize competition, we retrained the ranker on {\it real dialogues collected during the competition}. The rating target function is an approximation of human ratings -- we use all context-response pairs from successful dialogues (human rating 4 or 5) as positive examples (value +1) and all pairs from unsuccessful dialogues (rating 1 or 2) as negative (value -1) and train the ranker to mimic this rating.
We collected 60k dialogue instances over one month for training and 7k dialogue instances over 4 days as a development set.
We did not perform any large-scale parameter optimization, but based on performance on the development data, we selected the following VowpalWabbit parameters: \begin{itemize}[itemsep=0pt,topsep=0pt,leftmargin=10pt,labelwidth=10pt]
\item logistic loss function (logistic regression),
\item feature concatenations (context + response n-grams, pairs of n-grams from responses, bot name + response n-grams, bot name + context n-grams, bot name + dialogue flow, bot name + context n-grams + response n-grams),
\item 16-bit feature hash table,
\item 1 pass over the training data.
\end{itemize}
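A sketch of how such examples might be serialised into VowpalWabbit's input format. The namespace names (`c`, `r`, `b`) are our assumption; the feature concatenations listed above would then be declared at training time via `-q` interactions:

```python
def vw_example(label, context_ngrams, response_ngrams, bot_name):
    """One training line in VW input syntax: a +1/-1 label followed by
    namespaces for context n-grams, response n-grams, and bot name."""
    return "{} |c {} |r {} |b {}".format(
        label, " ".join(context_ngrams), " ".join(response_ngrams), bot_name)

line = vw_example(1, ["who_is", "your_favorite"],
                  ["my_favorite", "band_is"], "persona")
print(line)  # 1 |c who_is your_favorite |r my_favorite band_is |b persona

# Training might then use flags matching the setup above, e.g.:
#   vw -b 16 --loss_function logistic -q cr -q br data.vw
```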
This setup reached 69.40\% accuracy in classifying the development data items as positive or negative.
The results of deploying this Linear Ranker are presented in section \ref{results:linear}.
\subsection{Results}\label{results:linear}
The Linear Ranker, trained on the user feedback received during the competition (see Section \ref{sec:linear-ranker}), was deployed on top of Alana v1.1, and evaluated in comparison to the hand-crafted ranking function (see Section~\ref{sec:handcrafted-ranker}).
The results are shown in Table~\ref{tab:results-linear}.
\begin{table*}[ht]
\begin{center}
\begin{tabular}{|c||c|c|}\hline
{\bf System} & average user rating & number of dialogues \\\hline \hline
Alana v1.1 : Hand-engineered Ranker&3.26 & 191 \\\hline
Alana v1.1 : Trained Linear Ranker &3.28& 272 \\\hline
\end{tabular}
\end{center} \caption{Results: Trained Linear Ranker (semi-finals period)}\label{tab:results-linear}
\end{table*}
This shows that we can continuously improve system performance by training on real customer feedback from the competition, even though it is noisy and sparse (ratings are only available for whole dialogues, and not each dialogue turn).
\section{Future Work}
This paper describes our Alexa system as entered in the semi-finals (July-August 2017).
We are now competing as one of three systems in the Amazon Alexa Challenge finals, where we have replaced the linear ranker with a neural model. This neural ranker is trained on an increased number of user ratings, which we were able to gather August-October 2017, and outperforms the linear ranker in terms of accuracy.
\subsubsection*{Acknowledgements}
We would like to thank Helen Hastie and Arash Eshghi for their helpful comments and discussions.
\small
\bibliographystyle{apalike}
\section{Introduction}
The unprecedented imaging and spectroscopic sensitivity and higher spatial
resolution of the {\it Spitzer} Space Telescope ({\it SST}; Werner et al. 2004)
compared to past IR missions such as the Infrared Astronomical Satellite
({\it IRAS}; Neugebauer et al. 1984) and the Infrared Space Observatory
({\it ISO}; Kessler et al. 1996) provide a unique opportunity to probe the
inter-stellar medium (ISM) of optically faint sources such as low surface
brightness (LSB) galaxies. The goal of this study is to use the {\it Spitzer}
data to analyze the IR properties of three LSB optical giants: Malin 1, UGC
6614, and UGC 9024. This is the first time we are able to view these galaxies
in $\sim$3-160 $\mu$m wavelength range.
The LSB galaxies are usually defined as diffuse spiral disks with low $B$-band
central surface brightness (e.g. $\mu_{B,0} \geq 23$ mag arcsec$^{-2}$;
Bothun et al. 1997; Impey \& Bothun 1997). These galaxies are either blue
(B-V $\lesssim 0.5$) or red (B-V $\gtrsim 0.8$) in color (O'Neil et al. 1997),
metal poor ([O/H] $\lesssim 1/3 Z_{\odot}$; McGaugh 1994; de Blok \& van der
Hulst 1998b; de Naray et al. 2004), rich in neutral hydrogen (H I) (Schombert
et al. 1992; O'Neil et al. 2004), deficient in H II emission (McGaugh et al.
1995; de Naray et al. 2004), and have low star formation rate (SFR)
$\lesssim 0.1$ M$_{\odot}$ yr$^{-1}$ (van den Hoek et al. 2000; Burkholder et
al. 2001).
The majority of these galaxies lack molecular (CO) gas (Schombert et al. 1990;
de Blok \& van der Hulst 1998a); only a handful have been reported to have
molecular emission (O'Neil \& Schinnerer 2004; Matthews et al. 2005; Das et al.
2006). The observed properties suggest that LSB disks are relatively unevolved
systems and may have a different evolutionary history compared to their high
surface brightness (HSB) counterparts (McGaugh 1994; van den Hoek et al. 2000;
Vallenari et al. 2005; Zackrisson et al. 2005).
Most of our knowledge regarding the composition and structure of the ISM of
LSB spirals comes from optical (Impey et al. 1996) and H I surveys (O'Neil et
al. 2004). These surveys have demonstrated that the neutral hydrogen is by far
the dominant component of the ISM in these galaxies ($\sim$95\% by mass;
Matthews 2005). While we have improved understanding of the gaseous component
of the ISM from decade long H I surveys, our knowledge of the inter-stellar
dust, the component of the ISM radiating from mid-IR ($\sim$8 $\mu$m) through
sub-millimeter ($\sim$850 $\mu$m) wavelengths, is still very limited. Because
of the scarcity of information in this wavelength range, complementary
observational facts such as low metal abundance (McGaugh 1994), strong
similarities in optical and near-IR morphology (Bergvall et al. 1999; Bell et
al. 2000), transparency of the stellar disks (O'Neil et al. 1998; Matthews et
al. 1999), and deficiency in molecular emission (Schombert et al. 1990; de Blok
\& van der Hulst 1998a) have been used to probe the ISM of LSB galaxies. All
these observations lead to a general consensus that the LSB disks are deficient
in dust and molecular gas.
Given that LSB spirals comprise $\sim$$50\%$ of the local disk galaxy population
(McGaugh et al. 1995b), they deserve equal attention as their HSB cousins.
To develop a consistent picture of the local galaxy populations it is therefore
necessary to probe each population at all wavelength regimes as have been done
in most cases for HSB galaxies. Previous long wavelength studies on LSB galaxies
involved a few cases in sub-millimeter and millimeter wavelengths (de Blok \&
van der Hulst 1998b; Pickering \& van der Hulst 1999; O'Neil \& Schinnerer 2004;
Matthews et al. 2005; Das et al. 2006). Hoeppe et al. (1994) made the first
attempt to investigate long wavelength (60 $\mu$m, 100 $\mu$m, and 20 cm)
properties of LSB dwarfs; however, no study has been made in the mid-IR and
far-IR for LSB disks. In this study we have made the first attempt to explore
$\sim$3-160 $\mu$m properties of three LSB giants: Malin 1, UGC 6614, and UGC
9024, recently observed by the {\it SST}. We focus on the IR morphology to probe
extent of dust in the ISM, on the SEDs for the IR energy budget and dust content,
and on the IR colors to establish the dust temperature of the ISM.
The organization of this paper is as follows: we describe observation and data
reduction in section $\S2$ and present our results in section $\S3$. Discussions
and conclusions are given in section $\S4$.
\section{Observation and Data Reduction}
We describe the Infrared Array Camera (IRAC; Fazio et al. 2004) and Multiband
Imaging Photometer for {\it Spitzer} (MIPS; Rieke et al. 2004) imaging data for
Malin 1, UGC 6614, and UGC 9024. These extended disk galaxies with
radial scale length $\rm h_{r,R}>$ 5 kpc are observed as part of a larger
guaranteed time observing program ({\it Spitzer} PID \#62). The program also
includes two LSB galaxies (UGC 5675 and UGC 6151) with 2 kpc $<\rm h_{r,R}<$ 3
kpc, an edge-on disk (UGC 6879) with $\rm h_{r,R} \sim$2.5 kpc, and a HSB
dwarf (UGC 10445) with $\rm h_{r,R} \sim$1 kpc. The central brightness of UGC
6879 is $\mu_{B,0} \sim 20.4$ mag arcsec$^{-2}$. A simple correction for
inclination, using $\mu_{0, face-on} = \mu_{0, observed} + 2.5 \log (a/b)$
brings its central brightness to $\sim 21.71$ mag arcsec$^{-2}$. It is lower
than the conventional choice ($\mu_0 \approx 23$ mag arcsec$^{-2}$) but close
to the Freeman value ($\mu_0 \sim 21.65$ mag arcsec$^{-2}$; Freeman 1970).
The central brightness of this galaxy falls in between the range of LSB and
HSB galaxies and hence, we include it as a representative of the intermediate
class. The properties of UGC 5675 and UGC 6151 will be explored in a
forthcoming paper. Readers are referred to Hinz et al. (2006) for an
analysis of UGC 10445.
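The inclination correction quoted above can be checked directly. The axis ratio of about 3.34 used below is simply the value implied by the two quoted surface brightnesses, not one taken from a catalogue:

```python
from math import log10

def face_on_mu(mu_observed, axis_ratio):
    """Inclination correction from the text:
    mu_0,face-on = mu_0,observed + 2.5 log10(a/b)."""
    return mu_observed + 2.5 * log10(axis_ratio)

# For UGC 6879, mu_0 = 20.4 mag/arcsec^2 and the quoted corrected value
# 21.71 imply an axis ratio a/b of roughly 3.34:
print(round(face_on_mu(20.4, 3.34), 2))  # ~21.71
```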
The IRAC 3.6, 4.5, 5.8, and 8 $\mu$m images and the MIPS 24, 70, and 160
$\mu$m images were acquired, respectively, in the mapping and photometry
modes. The IRAC images were reduced with the standard {\it Spitzer} Science
Center data pipeline, and aligned, re-sampled, and combined into a mosaic
image using the {\it Mopex}\footnote{http://ssc.spitzer.caltech.edu/postbcd/}
software. The MIPS 24 $\mu$m data required the use of the self-calibration
procedure described in the MIPS Data
Handbook\footnote{http://ssc.spitzer.caltech.edu/mips/dh/} to remove latent
image artifacts. The corrected images were then combined into a mosaic
using {\it Mopex}. Time filtering and column filtering were applied to the
70 $\mu$m images using IDL routines created by D. Fadda. The filtered images
were then combined using {\it Mopex}. The 160 $\mu$m images were combined
into a mosaic using {\it Mopex}. The IRAC spatial resolution is $\sim$2$^{''}$
for all bands. The MIPS spatial resolutions are 6$^{''}$, 18$^{''}$, and
40$^{''}$ for the respective bands.
Sky subtraction was carried out through the use of multiple sky apertures
placed near the source which do not overlap with the faintest isophotes
visible from the galaxy. For each galaxy we measured flux densities from
the sky subtracted images within the aperture covering the entire galaxy.
The flux density contributed by foreground stars within a galaxy aperture
was removed by measuring each such star in a small aperture and subtracting
the result from the total flux within the galaxy aperture.
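A minimal numpy sketch of this aperture photometry on synthetic data; actual measurements would of course use the real mosaics and dedicated photometry tools:

```python
import numpy as np

def aperture_flux(image, center, radius):
    """Sum of pixel values and pixel count inside a circular aperture."""
    y, x = np.indices(image.shape)
    mask = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
    return image[mask].sum(), mask.sum()

img = np.full((50, 50), 0.1)      # uniform fake "sky" of 0.1 per pixel
img[23:28, 23:28] += 1.0          # inject a 5x5 source of total flux 25

src_sum, src_n = aperture_flux(img, (25, 25), 10)  # on-source aperture
sky_sum, sky_n = aperture_flux(img, (5, 5), 4)     # off-source sky aperture
net = src_sum - (sky_sum / sky_n) * src_n          # sky-subtracted flux
print(round(net, 3))  # ~25.0, recovering the injected source
```

Foreground-star removal works the same way: measure each star in its own small aperture and subtract that from the galaxy total.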
The calibration uncertainty in the IRAC flux densities is at the level
of $\sim$10\% (Reach et al. 2005; Dale et al. 2005). Aperture corrections
have been applied to all IRAC flux densities (T. H. Jarrett 2006; private
communication). The MIPS flux density calibration uncertainties are 10\% at
24 $\mu$m and 20\% at 70 and 160 $\mu$m.
Near-IR (1.3, 1.7, and 2.2 $\mu$m) flux densities from the Two Micron All
Sky Survey (2MASS; Jarrett et al. 2000), upper limits on {\it IRAS} flux
densities, and those derived from the IRAC and MIPS bands are given in
Table \ref{flux_table}. Basic properties of the galaxies obtained from the
literature, the NASA/IPAC Extragalactic Database (NED), the Lyon-Meudon
Extragalactic Database (LEDA), and derived in this study are summarized in
Table \ref{basic_table}. The {\it IRAS} flux densities
for UGC 6614 and UGC 6879 were computed using the SCANPI tool available from
IRSA as linked via NED where {\it IRAS} flux density limits represent SCANPI's
inband total measurement, $f_{\nu}(t)$. No {\it IRAS} detections were available
for Malin 1 and UGC 9024. Distances for the galaxies are estimated from
heliocentric radial velocity after correcting for the local group, infall into
the Virgo cluster, the Great Attractor, and the Shapley supercluster following
the Mould et al. (2000) flow model.
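As a minimal sketch of the final step: once the heliocentric velocity has been corrected by the flow model, the distance follows from pure Hubble flow. The value of H0 below is an illustrative choice, not the one adopted in the text:

```python
def hubble_distance(v_corrected, H0=70.0):
    """Distance in Mpc from a flow-corrected recession velocity in km/s,
    d = v / H0. The Mould et al. (2000) corrections for Local Group
    motion and infall are assumed to have been applied to v already."""
    return v_corrected / H0

print(hubble_distance(7000))  # 100.0 Mpc for a 7000 km/s galaxy
```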
\subsection{Contamination from Galactic Cirrus}
A basic concern about faint extragalactic sources with highly diffuse disk
structure is confusion by foreground Galactic ``cirrus'' emission. This is
especially critical in the case of far-IR cool sources (defined below) such
as LSB galaxies. The far-IR ratio of typical local cirrus is
$S_{60 \mu m}/S_{100 \mu m} \leq 0.2$ (Gautier 1986). UGC 6614 and UGC 6879
are well above this limit (Fig. \ref{colo_colo}c). From its Galactic latitude
$b \sim +22^{\circ}$ and the observed $S_{70 \mu m}/S_{160 \mu m}$ ratio, it is
reasonably safe to assume that UGC 9024 has not been affected by cirrus
emission. In the absence of far-IR information it is uncertain for Malin 1.
However, its high galactic latitude $b \sim +14^{\circ}$ can be used to argue
against any cirrus contamination.
We should also stress the fact that whether there is cirrus in the foreground
is less significant as compared to whether the cirrus in the foreground varies
on scales of the IRAC and MIPS field of view. If its spatial variation is
negligible across the IRAC and MIPS mosaic, it gets subtracted out as ``sky''
in the data reduction process.
\section{Results}
In this section we present IR morphology, SEDs, and IR colors of LSB galaxies.
Using this information we estimate dust mass ($\rm M_d$), dust-to-(atomic)
gas ratio ($\mathcal D$), total infrared luminosity ($\rm L_{TIR}$), and star
formation rate (SFR). To obtain a qualitative assessment of the observed
properties of LSB disks compared to their HSB counterparts, we take the
{\it Spitzer} Infrared Nearby Galaxy Survey (SINGS; Kennicutt et al. 2003)
sample as representative of local HSB galaxies. This sample contains
75 galaxies of various Hubble types as well as dwarfs and irregular galaxies,
thus making it a suitable reference for comparative analysis with LSB
spirals.
\subsection{Infrared Morphology}
The IR emission beyond $\sim$25 $\mu$m is dominated by the inter-stellar dust
under various heating conditions. On the other hand, mid-IR ($\sim$5-25 $\mu$m)
emission marks the transition from stellar photospheres to inter-stellar dust
dominating the emission. Whereas the morphology of a galaxy at 3.6 and 4.5
$\mu$m represents the stellar disk, at $\sim$5 $\mu$m and beyond it shows the
structure of the ISM. The IRAC 3.6 and 4.5 $\mu$m bands are sensitive to the
underlying stellar populations typically consisting of red giants and old
stars. In some galaxies 3.6 $\mu$m band is also known to contain emission
from a hot dust component (Bernard et al. 1994; Hunt et al. 2002; Lu et al.
2003; Helou et al. 2004).
The hot dust is also visible in the other IRAC bands. While 3.6 $\mu$m is
only sensitive to hot dust near the sublimation temperatures ($\sim$1000 K),
the longer wavelength IRAC bands can detect dust at lower temperatures down
to several hundred Kelvin.
The IRAC 5.8 and 8 $\mu$m bands are primarily sensitive to the PAH emission
at 6.2, 7.7, and 8.6 $\mu$m (Puget \& Leger 1989; Helou et al. 2000; Lu et
al. 2003). The PAH is the hot component of the inter-stellar dust with
effective temperature $\rm T_d > 100$ K stochastically excited to high energy
levels by stellar photons. Stellar photospheric emission also contributes to
these two IRAC wavebands. The fraction of stellar emission at 5.8 and 8 $\mu$m,
respectively, are $\sim$40\% and $\sim$20\% (Helou et al. 2004; Pahre et al.
2004).
The emission detected by the MIPS bands are coming from dust grains with
different size distributions. Very small grains ($\sim$1-10 nm) emit in the
mid-IR region ($\gtrsim$15 $\mu$m), intermediate between thermal equilibrium
and single-photon heating. Large, classical grains ($\sim$100-200 nm) are in
thermal equilibrium with the radiation field and responsible for far-IR
emission (Desert et al. 1990). To begin with we should bear in mind that the
demarcation lines among various heating environments are {\it ad-hoc} and we
assume the following effective color temperature ranges for large grains in
thermal equilibrium: warm (40 K $\lesssim \rm T_d \lesssim$ 100 K), cool (20
K $\lesssim \rm T_d \lesssim $ 30 K), cold (10 K $\lesssim \rm T_d \lesssim $
20 K), and very cold (10 K $\lesssim \rm T_d$).
The Sloan Digital Sky Survey (SDSS) images of target galaxies are shown in
Fig. \ref{optical}. The {\it Spitzer} images of the LSB galaxies
are shown in Fig. \ref{malin1} (Malin 1), Fig. \ref{ugc6614} (UGC 6614), and
Fig. \ref{ugc9024} (UGC 9024). UGC 6879 is shown in Fig. \ref{ugc6879}. Galaxy
images are shown using Gaussian equalization (Ishida 2004).
The images are oriented such that north is up and east is to the left. Malin 1
is too faint to be detected by the MIPS 70 and 160 $\mu$m channels. The other
three galaxies were detected by all IRAC and MIPS bands. We do not show 5.8
$\mu$m images in these figures since the morphological appearances of each
of these galaxies at 5.8 $\mu$m closely follow that at 8 $\mu$m. For all
galaxies, the IRAC 4.5 and 8 $\mu$m images are shown without subtracting
stellar photospheric emission. The contours represent surface brightness with
intervals of $\sqrt{10}$ where the lowest level is 4$\sigma$ above the
background. The lowest level of contours at different bands are 0.04 (3.6
$\mu$m), 0.05 (4.5 $\mu$m), 0.30 (8 $\mu$m), 0.08 (24 $\mu$m), 1.6 (70 $\mu$m),
and 2.1 (160 $\mu$m) expressed in MJy/Sr.
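The contour levels described above follow a simple geometric progression, which can be sketched as:

```python
import numpy as np

def contour_levels(sigma, n=6):
    """Surface-brightness contour levels as described above: the lowest
    level sits 4 sigma above the (subtracted) background and successive
    levels are spaced by factors of sqrt(10)."""
    return 4 * sigma * np.sqrt(10) ** np.arange(n)

# With sigma = 0.01 MJy/sr the lowest level is 0.04 MJy/sr, matching
# the quoted 3.6 um value; the following levels are ~0.126, 0.4, ...
levels = contour_levels(sigma=0.01)
print(np.round(levels, 3))
```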
The giant optical disks in Malin 1 and UGC 9024 appear as point sources in the
IRAC images. While we detect the stellar bulges of these galaxies, their optically
diffuse disks remain undetected long-ward of the IRAC bands. This suggests that the
low surface brightness structures at larger radii might be photometrically distinct
components rather than smoothed extensions of the normal inner disks (see Barth
2007 for a discussion on Malin 1 based on {\it Hubble} data). That the disks
appear in the $B$-band but are undetected at 3.6 $\mu$m suggests that these disks
have a small population of young stars rather than a large population of old stars.
The bulge spectrum of Malin 1 is consistent with a predominantly old stellar
population (Impey \& Bothun 1989). For both of these galaxies, the mid-IR emissions
at 8 and 24 $\mu$m are concentrated in the central few kpc, within a region of
12\arcsec \ (20 kpc) radius for Malin 1 and 24\arcsec \ (5 kpc) radius
for UGC 9024.
Undetected far-IR emission from the disk of Malin 1 implies that it contains
either very cold ($\rm T_d <10$ K) cirrus-like dust emitting in the submillimeter
and millimeter wavelengths or it lacks cold dust altogether and contains only
neutral gas. For UGC 9024, 70 $\mu$m emission comes from the central region
but 160 $\mu$m emission is very hard to measure because of large-scale diffuse
emission in the field. This results in an upper limit of $S_{160 \ \mu m}<268$
mJy.
The optical morphology of UGC 6614 shows a massive bulge and spiral structure.
A thin ring $\sim$40\arcsec \ from the core is prominent in H$\alpha$ (McGaugh
et al. 1995). The 3.6 and 4.5 $\mu$m emission is spread over the entire disk of
this galaxy. The 3.6 $\mu$m image shows a discernible spiral arm pattern closely
resembling the optical morphology. At 4.5 $\mu$m this feature disappears and
the disk shrinks in radius showing only its inner region. This galaxy appears
markedly different at 5.8 and 8 $\mu$m compared to the other IRAC bands. The 8
$\mu$m morphology suggests that the PAH emission is coming from two distinct
regions: the central bulge and an outer ring surrounding the bulge. The 8 $\mu$m
morphology closely traces the H$\alpha$ image.
The MIPS 24 $\mu$m morphology is similar to the 8 $\mu$m PAH emission although
the outer ring appears a bit more disjointed at this band. The dust emission at
24 $\mu$m is coming mostly ($\sim$70\%) from the central disk.
The lower resolution image at 70 $\mu$m indicates that only $\sim$25\% of the
dust emission is coming from the central part of the galaxy, with the remaining
$\sim$75\% emission co-spatial with the ring of radius $\sim$40\arcsec.
Surprisingly, a dumbbell-shaped region is the dominant source ($\sim$90\%) of 160
$\mu$m emission; these two peaks are located on the NE and NW sides of the ring.
The far-IR images of this galaxy show a small, localized region within the ring,
SW from the center. Whether or not this region coincides with the location
of the CO emission in this galaxy's disk (Das et al. 2006) is not entirely
clear. We will investigate this in a forthcoming paper.
UGC 6879 is an edge-on spiral with a red central part and a blue (optical) disk.
This radial color gradient is perhaps related to greater concentrations of dust
in the nucleus than in the overall disk. Being a transitional disk, with a central
surface brightness in between HSB and LSB spirals, it is not unexpected to find
that UGC 6879 is a strong IR emitting source compared to the LSB galaxies. Both
PAH emission and warm dust (24 $\mu$m ) emission show spatial variation along
the disk. These emissions peak at the central region and diminish toward the edge.
\subsection{Infrared Diagnostics}
\subsubsection{IR Spectral Energy Distributions}
The flux densities obtained from the 2MASS and {\it IRAS} archives and those
estimated from the IRAC and MIPS images are given in Table \ref{flux_table}.
These flux densities are used to construct the observed infrared SEDs of LSB
galaxies as shown in Fig. \ref{obse_seds}. The open and solid circles represent,
respectively, 2MASS and IRAC data. The open triangles represent the MIPS data
whereas the {\it IRAS} upper limits for UGC 6614 and UGC 6879 are shown by filled
triangles.
Since Malin 1 was undetected by the MIPS far-IR channels we show the detection
limits at 70 and 160 $\mu$m for the total integration time ($\sim$252 sec. and
$\sim$42 sec. respectively).
We include four SINGS galaxies with different ISM properties for comparison.
These are NGC 0337 (a normal star forming galaxy), NGC 2798 (a star burst
galaxy), NGC 2976 (a normal galaxy with nuclear H II region), and NGC 3627 (a
Seyfert galaxy). They are shown, respectively, by dotted, dashed, dashed-dotted,
and long dashed lines. The aim is to allow a visual comparison between the SEDs
of LSB and HSB galaxies. All flux densities are normalized by the 3.6 $\mu$m
flux density.
There are several noticeable features in these SEDs.
First, the amplitudes of the mid-IR and far-IR dust emission of LSB galaxies are
lower than those of HSB galaxies. This is expected, since the LSB galaxies have
less dust and hence lower IR emission.
Second, Malin 1 is deficient in the integrated 8 $\mu$m emission compared to its
4.5 $\mu$m emission. This is quite the opposite for the other LSB galaxies and more like
the SED of an elliptical galaxy (see Fig. 4 in Pahre et al. 2004).
Third, the 24 $\mu$m emission is suppressed in both Malin 1 and UGC 9024. For
UGC 6614 it is slightly above the IRAC bands. This feature suggests that the ISM
of UGC 9024 lacks warm dust emission, and is made mostly of cool dust.
Fourth, the SEDs show a tendency to turn over at relatively long wavelengths, a
signature that the low-density, low surface brightness ISM has a low radiation
intensity. On the other hand, the shape of the SED of UGC 9024 is quite similar
to those of the representative HSB galaxies. We discuss this in detail in
\S3.3.
\subsubsection{Dust Mass}
Dust mass is frequently estimated by fitting the far-IR peak by a single
temperature modified blackbody function (Hildebrand 1983). However, the
inability of a single temperature or even two temperature blackbody function
to fit the observed flux densities suggests that a more sophisticated model
of the IR SED is needed. The global SED models of Dale et al. (2001) and Dale
\& Helou (2002; hereafter DH02) provide a robust treatment of the multiple
grain populations that contribute to the IR emission in a galaxy. This model
allows a realistic derivation of dust mass since it combines information from
the full range of heating environments ($\sim$10K-1000K). Previous studies have
shown that dust mass is underestimated by a factor of $\sim$5-10 for quiescent
galaxies (i.e. IR cool) if one simply fits the far-IR and sub-millimeter
continuum data points with a simple single temperature black body instead of
exploiting information from the full range of the SED.
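For reference, the single-temperature estimate referred to above follows the standard Hildebrand (1983) form,
\[
M_{\rm d} = \frac{S_{\nu} D^2}{\kappa_{\nu} B_{\nu}(T_{\rm d})},
\]
where $S_{\nu}$ is the observed far-IR flux density, $D$ is the distance, $\kappa_{\nu}$ is the mass absorption coefficient, and $B_{\nu}(T_{\rm d})$ is the Planck function at the dust temperature $T_{\rm d}$. The DH02 fit, in effect, combines such contributions over the full range of heating intensities.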
Figure \ref{glob_seds} shows the fits to the observed SEDs obtained from the
DH02 model (solid line). The dashed and dotted lines represent, respectively,
empirical stellar SED (Pahre et al. 2004) and stellar synthesis model prediction
fitted only to the 2MASS fluxes from Vazquez \& Leitherer (2005). We use the
model fit (solid line) to estimate $\rm M_d$ and $\mathcal D$, noting that the
DH02 model does not provide ideal fits to very extended, low-density, diffuse
disks like Malin 1 and UGC 9024. However, within the measurement and observational
uncertainties the model fits provide useful insight. A rigorous and detailed
treatment of infrared SEDs of LSB disks will be presented in a future study.
The estimated mass is given in Table \ref{basic_table}. We find that the ISM
of UGC 6614 has the highest amount of dust with dust-to-gas ratio,
$\mathcal D \sim$0.01. Both Malin 1 and UGC 9024 are $\sim$3 times less dusty
than UGC 6614. Given that the IR emission comes only from the central regions of
the latter two galaxies, it is not surprising that they show low dust content.
In a recent study Das et al. (2006) detected CO(1-0) emission localized in a
specific region on the disk of UGC 6614. They estimated molecular gas mass
($\rm M_{H_2} \sim 2.8 \times 10^8 M_{\odot}$) which is almost equal to the
total dust content ($\rm M_d \sim 2.6 \times 10^8$ $M_{\odot}$) that we measure
distributed over the bulge and disk. The difference between $\mathcal D$ and
dust-to-(total) gas mass is negligible.
The (systematic) calibration uncertainty in the observed flux densities and
the uncertainty in the distance estimates result in a $\sim$30\% uncertainty
in $\rm M_d$. Additional uncertainty comes from the mass absorption coefficient.
The different apertures used to measure the IR and H I fluxes will also contribute
additional uncertainty to $\mathcal D$. Moreover, because the long-wavelength end
of the SED is poorly constrained, the estimates of the overall dust mass and
dust-to-gas ratio are somewhat uncertain. All of these errors compound
to make M$_d$ and $\mathcal D$ uncertain by a factor of $\sim$2 or more.
\subsubsection{Infrared Luminosity}
DH02 proposed a simple relation to compute total IR luminosity using the MIPS
bands (see Eq. 4 in DH02). Due to uncertainties in the MIPS flux densities for
Malin 1 and UGC 9024 we use the empirical relation given by Calzetti et al.
(2005) to estimate $\rm L_{TIR}$. Calzetti et al. relate flux densities at 8
$\mu$m and 24 $\mu$m to derive $L_{TIR}$ for M 51, a normal star forming
galaxy.
Estimated total IR luminosities are given in Table \ref{basic_table} with
estimated uncertainty of $\sim$35\%. We find comparable IR luminosity for both
Malin 1 and UGC 6614. UGC 9024 is the least luminous because of its suppressed
24 $\mu$m warm dust emission. In spite of its borderline HSB nature, the IR
output of UGC 6879 resembles that of a normal quiescent galaxy.
We also estimate $\rm L_{TIR}$ using DH02 model fits. We follow Sanders \& Mirabel
(1996) to define $\rm L_{TIR}$ where flux densities at 12, 25, 60, and 100 $\mu$m
are obtained from the model SEDs by interpolation. The result is presented in
Table \ref{basic_table}. Interestingly, these two estimates agree within a factor
of $\sim$2.5, with the model estimates always higher. The largest difference
is found for UGC 9024, a result of the relatively poor fit to its data.
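For completeness, the Sanders \& Mirabel (1996) definition used here combines the four {\it IRAS}-equivalent flux densities (in Jy) as
\[
F_{\rm IR} \simeq 1.8 \times 10^{-14}\, \left(13.48\, f_{12} + 5.16\, f_{25} + 2.58\, f_{60} + f_{100}\right)~{\rm W~m^{-2}},
\]
with $\rm L_{TIR} = 4 \pi D^2 F_{IR}$ representing the nominal 8-1000 $\mu$m output.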
Within the uncertainty, the IR energy output of the LSB galaxies is smaller by a
factor of a few than their $B$-band luminosities $L_B$. The infrared-to-blue
ratio, $\rm L_{TIR}/L_{B}$, compares the luminosity processed by dust to that of
escaping starlight (see Table 2). The ratio ranges from $<$0.01 (in quiescent
galaxies) to $\sim$100 (in ultra-luminous galaxies). It can be used to
characterize optical depth of a system composed of dust and stars as well as
recent ($\sim$100 Myr) SFR to the long term ($\sim$1 Gyr) average rate.
The ratio is in the range 0.3-0.5 (see Table 2), indicating that the current
level of star formation is low, a result consistent with previous studies of
star formation in the LSB ISM (van den Hoek et al. 2000; Burkholder 2001).
However, there is a potential degeneracy in this parameter, and one can make
only an indirect assessment of the ISM of a galaxy unless this degeneracy is
lifted. The fact that $\rm L_{TIR}/L_{B}$ is less than unity can arise from two
different physical conditions. On one hand, a galaxy may be undergoing intense
heating by young stars (large $\rm L_B$) but have very little neutral ISM left
(less IR emission), resulting in low $\rm L_{TIR}/L_{B}$. On the other hand, a
quiescent galaxy may generate most of its IR emission in H I clouds heated by
older stellar populations and will display a similarly low $\rm L_{TIR}/L_{B}$
(Helou 2000).
\subsubsection{Star Formation Rate}
A widely used recipe for estimating SFR from IR luminosity is given by Kennicutt
(1998). However, the far-IR luminosity in the Kennicutt model is based on {\it IRAS}
data. Without a proper calibration between {\it IRAS} and MIPS far-IR flux densities,
the uncertainty would loom large in the SFR estimate. We use a new SFR estimator,
derived recently by Alonso-Herrero et al. (2006; hereafter AH06) using 24 $\mu$m
flux density. Our estimates are given in Table 2. The error associated with this
SFR is $\sim$10\%.
van den Hoek et al. (2000) derived a current SFR ranging $\sim$0.02-0.2
M$_{\odot}$ yr$^{-1}$ from $I$-band photometry for a sample of LSB galaxies
generally found in the field. Their estimates agree within a factor of two with
our results based on infrared data. For Malin 1 and UGC 6614 the infrared SFRs
are $\sim$0.38 M$_{\odot}$ yr$^{-1}$ and $\sim$0.88 M$_{\odot}$ yr$^{-1}$,
respectively, whereas it is $\sim$0.01 M$_{\odot}$ yr$^{-1}$ for UGC 9024.
The fact that these galaxies have low dust content indicates that extinction is
less likely to cause the difference between SFR derived from IR and optical data.
The higher SFR of UGC 6614 compared to the other two galaxies is consistent with
the fact that this galaxy has a prominent H$\alpha$ morphology, which indicates
a modest level of current star formation. The very low SFR for UGC 9024 implies that
more light is scattered off from the central disk than being absorbed by the ISM.
The SFRs of these three LSB galaxies, as derived from IR data, are thus
significantly below the rates of $\sim$5-10 M$_{\odot}$ yr$^{-1}$ derived for HSB
galaxies (Kennicutt 1998) but considerably larger than the rate $\sim$0.001-0.01
$\rm M_{\odot}$ yr$^{-1}$ observed typically in dwarf irregular galaxies (Hunter
\& Elmegreen 2004).
The star formation efficiency (SFE), quantified by $\rm L_{TIR}/M_{H I}$, is
also shown in Table \ref{basic_table}. This measure represents the amount of
unprocessed gas available to be consumed in subsequent star formation. As
expected, the LSB galaxies have an SFE that is $\leq 1/5$ of the SFE of HSB
galaxies.
\subsection{Infrared Colors}
Panels \ref{colo_colo}a and \ref{colo_colo}b show, respectively, mid-IR colors
and the well known PAH-metallicity connection. In these diagrams we highlight
two extreme classes (in terms of their IR SEDs) of HSB galaxies: the dwarf
systems represented by the decimal numerals and the massive elliptical galaxies
represented by the roman numerals. All of these galaxies are obtained from the
SINGS sample.
Panel \ref{colo_colo}a compares PAH emission in various classes of galaxies.
Metal-rich HSB galaxies preferentially show high ratio in $S_{8\mu m}/S_{24\mu m}$
and low ratios of non-stellar 4.5-to-8 $\mu$m dust emission.
Metal poor (e.g. dwarfs and irregulars) and extremely metal poor HSB galaxies
such as blue compact dwarfs (BCDs), show the opposite trend and fall within the
PAH-deficient box (Engelbracht et al. 2005; hereafter E05). Note that E05 only
looked at star-forming galaxies. ``Red and dead'' ellipticals should have little
or no PAH emission, yet are nowhere near the PAH-deficient region. So this region
will not necessarily contain all PAH-deficient galaxies. Note also that the result
of E05 could arise from selection bias, since recent studies have shown that
dwarf galaxies are not necessarily PAH-deficient systems (Rosenberg et al. 2006).
Panel \ref{colo_colo}b, on the other hand, illustrates the PAH-$Z$ connection in
HSB galaxies. E05 showed that galaxies with low PAH emission have relatively
unpolluted ISMs. They noted a sharp boundary between galaxy metallicity with and
without PAH emission, although the trend may have been affected by selection
bias. We show this trend in panel \ref{colo_colo}b, where the PAH deficient
galaxies reside inside the dashed region and galaxies with higher metallicities
avoid two regions in the diagram which are shown by the dotted lines. We are
interested to see where the LSB galaxies fit in these diagrams.
Panel \ref{colo_colo}a shows that the LSB galaxies differ from the
PAH-deficient dwarf galaxies in terms of their mid-IR colors. In the color
space, LSB galaxies occupy a region similar to that of HSB galaxies and reside
significantly farther ($> 3\sigma$, along the horizontal axis) from the PAH-deficient
galaxies. While UGC 6614 stays right in the middle of the locus, both Malin 1 and
UGC 9024 fall at the edge because of the shapes of their SEDs at 8 and 24 $\mu$m.
The mid-IR colors of these galaxies closely resemble those of elliptical galaxies,
which is surprising given their apparently different star formation histories.
The LSB galaxies are metal poor, with $Z \leq 1/3 Z_{\odot}$ (McGaugh 1994; de
Blok \& van der Hulst 1998b; de Naray et al. 2004). McGaugh (1994) provided an
estimate of the oxygen abundance of UGC 6614, but it is highly uncertain. Following
the general trend shown by the LSB galaxies, we assign a one-third solar value to
Malin 1 and a solar value to UGC 6879. The full range and the median values of
the published oxygen abundances are shown for UGC 6614 and UGC 9024. Given such
limited information, it is extremely difficult to explore the PAH-metallicity
connection for these galaxies. We are interested in the question
of whether LSB galaxies, being low-$Z$ systems, will appear close to the PAH-deficient
box or fall in the region shunned by both HSB dwarfs and HSB spirals
with extended disks (dotted regions; Fig. \ref{colo_colo}b). While it is tempting
to give more weight to the latter case, only three data points with large errors
are insufficient to derive any trend. A larger sample of LSB galaxies with 8
$\mu$m detections is needed to shed more light on this topic.
Panel \ref{colo_colo}c shows the connection between mid-IR and far-IR colors
which basically describe the nature of dust emission at these wavelengths.
In this panel, the far-IR color of Malin 1 is shown with respect to a far-IR
flat SED, i.e. same flux density at 70 and 160 $\mu$m. For the other two
galaxies the diagram shows rather low $S_{70 \mu m}/S_{160 \mu m}$ and high
$S_{8 \mu m}/S_{24 \mu m}$ ratios. While the low far-IR color suggests that they
are IR cool sources, the mid-IR color may be linked with the destruction of
very small grains (but not the PAH molecules) and thus can be used as a
parameter which can signal evolutionary stages of an ISM.
A higher value of $S_{8 \mu m}/S_{24 \mu m}$ can mean that $S_{8 \mu m}$ is
high (large amount of PAH) or that $S_{24 \mu m}$ is low (little or no warm
dust). Ellipticals are in the latter category (see Pahre et al. 2004) whereas
LSB systems are in the first category.
It should be noted that the 24 $\mu$m emission is very closely associated
with H II regions (Helou et al. 2004). Therefore, the lack of emission at this
wavelength more likely reflects a deficiency in H II regions, which is quite consistent
with the H$\alpha$ images of LSB galaxies (McGaugh et al. 1995; de Naray et al.
2004).
The vertical arrow in panel \ref{colo_colo}d represents a probable range of
{\it IRAS} far-IR color for UGC 9024 assuming that {\it Spitzer} far-IR ratio
is the same as the {\it IRAS} far-IR ratio. The sequence of IR colors can be
associated with a progression toward greater dust heating intensity and thus
with a sequence of star formation activity.
The cool end of the color sequence corresponds to cool diffuse H I medium
and quiescent molecular clouds, whereas the warm end corresponds to the
colors of H II regions, starbursts, and galaxies with higher $\rm L_{TIR}/L_B$
ratios. Although the IR nature of these three LSB galaxies is evident in
this diagram, the interesting feature is that they are not extreme cases
in terms of their IR SEDs, unlike some of the SINGS galaxies.
Two primary sources have been proposed to explain the heating of the dust
which produces IR luminosity in spiral galaxies - massive young (OB) stars
and associated H II regions (Helou et al. 1985; Devereux \& Young 1990, 1993),
and non-ionizing A and later stars (Lonsdale \& Helou 1986; Walterbos \&
Schwering 1987; Bothun et al. 1989). Some authors, however, suggest
contribution from both sources (Smith 1982; Habing et al. 1984; Rice et al.
1990; Sauvage \& Thuan 1992; Smith et al. 1994; Devereux et al. 1994).
The
dominance of the heating source is, therefore, governed by the availability
of discrete and dense star forming regions in the ISM. While observational
evidence suggests that the global IR emission from luminous IR spiral galaxies
provides a measure of high mass SFR (Kennicutt 1998), the diffuse IR emission
in quiescent galaxies is caused mainly by the thermal radiation of the
interstellar dust heated by the interstellar radiation field (ISRF) (Jura 1982;
Mezner et al. 1982; Cox et al. 1986; Jura et al. 1987).
The dust color temperatures ($\rm T_d$) deduced for galaxies from the {\it IRAS}
60 and 100 $\mu$m flux densities are typically 25-40 K, assuming an emissivity index
$\beta = 2$ (Rahman et al. 2006). This range is similar to the temperature of
dust in Galactic star-forming regions (Wynn-Williams \& Becklin 1974; Scoville
\& Good 1989), and considerably greater than the 15-20 K temperatures expected
for dust heated by the ambient ISRF (Jura 1982; Mezner et al. 1982; Cox et al.
1986; Jura et al. 1987). The color temperatures derived from the MIPS 70 and 160
$\mu$m flux densities of the LSB galaxies span $\sim$17-21 K.
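The color temperatures quoted here follow from numerically inverting the 70-to-160 $\mu$m flux ratio of a modified blackbody with the adopted $\beta = 2$. A minimal illustrative sketch (in Python; the bisection bounds and iteration count are arbitrary assumptions, not part of our analysis pipeline) is:

```python
import math

H_OVER_K = 4.799e-11   # Planck constant / Boltzmann constant [s K]
C = 2.998e8            # speed of light [m/s]

def flux_ratio(T, lam1=70e-6, lam2=160e-6, beta=2.0):
    """S(lam1)/S(lam2) for a modified blackbody S_nu ~ nu^beta B_nu(T)."""
    nu1, nu2 = C / lam1, C / lam2
    x1, x2 = H_OVER_K * nu1 / T, H_OVER_K * nu2 / T
    # (nu1/nu2)^(beta+3): nu^beta emissivity times the 2 h nu^3 / c^2 prefactor
    return (nu1 / nu2) ** (beta + 3.0) * math.expm1(x2) / math.expm1(x1)

def color_temperature(ratio, beta=2.0, lo=5.0, hi=100.0):
    """Invert the 70-to-160 micron flux ratio for T_d by bisection.

    flux_ratio increases monotonically with T, so simple bisection suffices.
    """
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if flux_ratio(mid, beta=beta) < ratio:
            lo = mid   # too cold: predicted ratio below the observed one
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

An observed $S_{70 \mu m}/S_{160 \mu m} \approx 0.2$, for example, corresponds to $T_{\rm d} \approx 20$ K for $\beta = 2$, consistent with the 17-21 K range quoted above.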
From a statistical study of a large sample of optically selected galaxies, Bothun
et al. (1989) demonstrated that in the absence of UV radiation, far-IR color ratio
of $S_{60 \mu m}/S_{100 \mu m} \leq 0.3$ can result from dust which is heated by
old stars. Galaxies with $S_{60 \mu m}/S_{100 \mu m} \approx 0.3-0.5$ require a
steadily increasing proportion of UV-heated dust, while galaxies with
$S_{60 \mu m}/S_{100 \mu m} \geq 0.5$ are entirely dominated by UV-heated dust.
From panel \ref{colo_colo}d we find that the LSB galaxies have cool effective dust
temperatures and therefore lack intense heating from massive stars.
It should be mentioned here that in the IR color analysis we did not subtract the
stellar contributions from the 8 and 24 $\mu$m measurements to exactly reproduce
the E05 diagram. Assuming conservatively that the 3.6 $\mu$m emission is coming
from the stellar photosphere, the stellar contributions obtained from the empirical
stellar SED of Pahre et al. (2004) derived for early type galaxies are
$\sim$59\%, $\sim$38\%, and $\sim$19\% at 4.5, 5.8, and 8 $\mu$m, respectively.
The stellar contribution at 24 $\mu$m emission is $\sim$5\% or less.
Applying these corrections to the corresponding flux densities does not change our
results, since the corrections systematically shift the parameters in the color
space. Note that we use the same set of numbers for each galaxy, ignoring
galaxy-to-galaxy variation in the stellar contribution; the error due to this is
negligible. To determine the stellar contributions, preference is given to the
empirical SED over any stellar synthesis model because of the unknown SFHs and
poor metallicity constraints for LSB galaxies.
In summary, from the four panels of Fig. \ref{colo_colo} we conclude that:
(a) Malin 1, UGC 6614, and the intermediate HSB/LSB object UGC 6879 have mid-IR
colors similar to quiescent HSB disk galaxies; UGC 9024 falls in the region of
HSB elliptical galaxies in this color plane. (b) There is insufficient data to
conclude whether LSB galaxies have PAH emission properties significantly different
from HSB galaxies with comparably low metallicities. Observations of many more
LSB galaxies are needed to settle this. (c) Available far-IR detections and upper
limits indicate that LSB galaxies are far-IR cool sources. The dust temperatures
derived from the MIPS 70 and 160 $\mu$m flux densities span $\rm T_d \sim$17-21
K, similar to many quiescent HSB spiral and elliptical galaxies.
\subsection{Molecular ISM}
The LSB galaxies are rich in neutral hydrogen (H I). Molecular hydrogen ($\rm H_2$)
gas inferred from CO emission has been detected in only a handful of such
galaxies (Matthews \& Gao 2001; O'Neil \& Schinnerer 2004; Matthews et al. 2005;
Das et al. 2006). The low rate of CO detection has been attributed to an ISM with
low dust content and a low surface density of neutral gas.
Dust opacity is crucial for the formation and survival of molecules since it
provides them with the necessary shielding from the ISRF. A larger column density is
needed to self-shield the H$_2$ molecule. A low density and less dusty
environment exposes H$_2$ to UV photons which can easily dissociate these
molecules. The low star formation and far-IR cool nature of LSB galaxies implies
lower energy density of the radiation field and consequently lower dissociation
of CO and H$_2$ (de Blok \& van der Hulst 1998b).
The deficiency of molecular gas detections in LSB galaxies may point to a
dynamical condition such as the absence of
large-scale instability in the disk preventing formation of giant molecular
clouds (Mihos et al. 1997). Local instabilities may lead to cloud condensation
resulting in localized star formation which may escape detection by current
observations (de Blok \& van der Hulst 1998b). Local instability in the disk
invokes energetic phenomena such as SN explosions and the frequency of such
occurrence in LSB galaxies is low (Hoeppe et al. 1994; McGaugh 1994).
The enhanced cooling by molecules is crucial in the onset of instability of
molecular clouds. Therefore, the effect of less efficient cooling of the ISM
can also prevent local instabilities. A long cooling time leads to higher cloud
temperatures and thus makes it difficult for a cold molecular phase to exist
(Gerritson \& de Blok 1999).
If the low density ISM truly lacks H$_2$ and CO molecules, what other types of
molecule can exist in this environment? Are they very cold or very warm
molecules of known types? To which dust components do they belong? Along with
millimeter wavelength observations, can we use currently available IR data to
shed light into these questions?
One of the three LSB galaxies, UGC 6614, has been reported to have CO(1-0)
emission in a localized region of the disk (Das et al. 2006), while the other
two galaxies have not been observed at this wavelength.
On the other hand, the {\it Spitzer} observation of UGC 6614 shows the presence
of enhanced PAH emission on the bulge and almost entirely along the outer disk.
This 8 $\mu$m emission, however, is concentrated in the central regions of the
other two galaxies.
A possible origin of PAH molecules is the dense, high-temperature,
carbon-rich ([C]/[O] $> 1$) environment of circumstellar envelopes surrounding
mass-losing
AGB carbon stars. PAH formation by stars with more normal, oxygen-rich,
photospheric abundances ([C]/[O] $< 1$) will be negligible because nearly all
the available carbon is bound up in the CO molecules (Latter 1991). Therefore,
if the stellar populations responsible for the enrichment of the ISM are
dominated by old carbon-rich stars, warm PAH molecules will be ubiquitous,
resulting in a lower abundance of CO molecules.
While a detailed investigation is beyond the scope of this study, we believe
the observed PAH emission and the lack of CO emission hold a potential clue
for probing not only the ISM but also the SFHs of LSB galaxies.
\subsection{Mid-IR Photometry of UGC 6614}
The optical spectra of large LSB disks show an unexpected high occurrence of
low-level active galactic nucleus (AGN) type activity (Sprayberry et al. 1995;
Schombert 1998). UGC 6614, an optical giant LSB galaxy, is suspected to harbor
a weak AGN from the optical spectrum (Schombert 1998) and from excess emissions
at millimeter (Das et al. 2006) and radio wavelengths (Condon et al. 2000).
Integrated mid-IR photometry provides a robust technique to identify AGN in HSB
galaxies where AGN tend to be redder than normal star forming galaxies in the
mid-IR (Lacy et al. 2004; Stern et al. 2005).
To determine whether one can use a similar technique to detect AGN signatures
in a LSB bulge, we analyze the IRAC colors ([3.6]-[4.5] vs. [5.8]-[8.0]) of UGC
6614. The colors are measured for two regions of radius $\sim$1 and $\sim$5.5
kpc, respectively, encircling the galaxy center (Fig. \ref{agn}; left panel).
We find that in the color space, along the vertical [3.6]-[4.5] axis, the galaxy
resides well outside the Stern et al. ``AGN box''. Although the lower end of
the AGN box is within the error bar of this galaxy, its overall mid-IR color
puts it in a region occupied mostly by star-forming galaxies, suggesting that
broad-band colors may not be an efficient tracer of weak AGNs.
The contribution from an AGN to the measured IRAC fluxes can be estimated by
combining image subtraction with assumptions about the nature of the SED coming
from starlight and AGN emission. We use Pahre et al. (2004) for stellar flux
ratios, and a $\nu^{-1}$ power law SED for AGN (Clavel et al. 2000).
Following the procedure of Howell et al. (2007), a 4.5 $\mu$m image of the
non-stellar emission was constructed (Fig. \ref{agn}; right panel).
Unlike the procedure of E05 which simply measures $S_{4.5} - \alpha S_{3.6}$,
this procedure includes an additional factor to account for the contribution
of non-stellar emission to the 3.6 $\mu$m image.
The non-stellar flux density measured this way indicates that,
within a 12\arcsec \ aperture, the AGN contributes $\sim$12\% of the
light at 4.5 $\mu$m and $\sim$6\% at 3.6 $\mu$m. At 8 $\mu$m, starlight and
the AGN each contribute $\sim$35\% of the light, with PAHs contributing the
remainder. Note that the selected aperture includes only the bulge of the
galaxy, excluding the spiral arm/ring structure.
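The decomposition outlined above reduces to a two-band linear system per pixel. The following schematic sketch (in Python) illustrates the algebra; the stellar color $\alpha = 0.57$ is an assumed stand-in for the Pahre et al. SED ratio, and the $\nu^{-1}$ power law fixes the assumed non-stellar color:

```python
def decompose(s36, s45, alpha=0.57, gamma=4.5 / 3.6):
    """Split observed 3.6 and 4.5 um flux densities into stellar and
    non-stellar parts, assuming a fixed color for each component:

        stellar:     f45_star = alpha * f36_star
        non-stellar: f45_ns   = gamma * f36_ns
                     (S_nu ~ nu^-1 gives gamma = lam_4.5 / lam_3.6)

    Solving the pair of equations
        s36 = f36_star + f36_ns
        s45 = alpha * f36_star + gamma * f36_ns
    yields the non-stellar flux density in each band.
    """
    f36_ns = (s45 - alpha * s36) / (gamma - alpha)
    f45_ns = gamma * f36_ns
    return f36_ns, f45_ns
```

Unlike the simple difference $S_{4.5 \mu m} - \alpha S_{3.6 \mu m}$, the extra $(\gamma - \alpha)^{-1}$ factor accounts for the non-stellar contribution to the 3.6 $\mu$m image itself.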
UGC 6614 illustrates that although [3.6]-[4.5] color can identify strong AGN,
weaker AGN will not be clearly separated from pure stellar sources. The
procedure of E05, measuring $S_{4.5 \mu m}-\alpha S_{3.6 \mu m}$, will identify
regions of non-stellar emission but will not provide a quantitative picture of
stellar emission. Given reasonable assumptions the procedure of Howell et al.
allows a quantitative decomposition of the stellar and non-stellar flux densities.
\section{Summary and Conclusions}
The {\it Spitzer} observations of the three optical giant low surface brightness
galaxies Malin 1, UGC 6614, and UGC 9024 have been examined to study the mid
and far-IR morphology, spectral energy distributions, and IR color to estimate
dust mass, dust-to-(atomic) gas mass ratio, total IR luminosity, and star
formation rate (SFR). We also investigate UGC 6879, which is intermediate
between HSB and LSB galaxies.
The 8 $\mu$m images indicate that polycyclic aromatic hydrocarbon (PAH)
molecules are present in the central regions of all three metal-poor LSB
galaxies. The diffuse optical disks of Malin 1 and
UGC 9024 remain undetected at mid- and far-infrared wavelengths. The
dustiest of the three LSB galaxies, UGC 6614, has infrared morphology that
varies significantly with wavelength; 160 $\mu$m (cool) dust
emission is concentrated in two clumps on the NE and NW sides of a distinct
ring seen in the 24 and 8 $\mu$m images (and a broken ring at 70 $\mu$m)
at a radius of $\sim$40\arcsec \ (18 kpc) from the galaxy center.
The 8 and 24 $\mu$m emission is co-spatial with H$\alpha$ emission
previously observed in the outer ring of UGC 6614. The estimated dust-to-gas
ratios, from less than $10^{-3}$ to $10^{-2}$, support previous indications
that LSB galaxies are relatively dust poor compared to HSB galaxies.
The total infrared luminosities are approximately 1/3 to 1/2 the blue band
luminosities, suggesting that old stellar populations are the primary source
of dust heating in these LSB objects. The SFR estimated from the infrared
data ranges $\sim$$\rm 0.01-0.88~M_\odot~yr^{-1}$, consistent with results
from optical studies. A decomposition of the mid-IR emission of UGC 6614
indicates the presence of a weak AGN in the central bulge.
A natural question is why these LSB galaxies have
$\rm L_{TIR}/L_{B} < 1$. To answer this question we first
note that observables such as stellar populations and SED shapes can be used
to break the degeneracy in infrared-to-blue ratio (Helou 2000). That LSB
galaxies have low infrared-to-blue luminosities, stellar populations spanning
a wide range of mean ages, are not dominated by OB stars (McGaugh 1994), have
less dust than the HSB galaxies (see Fig. \ref{obse_seds}), and are IR cool
sources (see Fig. \ref{colo_colo}) suggest a composite scenario. The LSB disks
are less dusty and the older stellar populations are the primary source of the
IR emission from their ISMs.
The presence of PAH emission in these three galaxies indicates that the ISM
of the emitting regions has undergone significant carbon enrichment over cosmic
time, and that the ISRF must have remained weak enough not to reduce the strength
of the PAH emission. In other words, the small grains are more exposed to the
ISRF, so that their destruction rate is larger than that of the PAH molecules.
The detection of mid- and far-IR emission from a larger sample will be crucial
for understanding the properties of the ISM in LSB galaxies and for probing
their star formation histories. This will have a significant effect on analytical
modeling of galaxy formation and evolution, on the role of different galaxy
populations in observed number counts, and possibly on metallicity effects in
those counts. Whether star formation in LSB disks occurred in a continuous fashion
but with a low rate, or in an exponentially decreasing rate, or as sporadic
bursts with quiescent periods in between is still a matter of debate. Since
each type of formation history will lead to a stellar population
that can be traced by optical photometry (i.e. blue or red), any viable formation
scenario must leave the ISM with significant carbon enrichment and a substantial
amount of dust. The constraints from the {\it SST}, such as the mid-IR 8 $\mu$m
emission and moderate dust mass, could be used as probes of the nature of LSB
spirals.
Metal poor HSB objects such as blue compact dwarf galaxies are PAH deficient
systems (E05; Wu et al. 2006). Their SEDs are markedly different in the
$\sim$5-15 $\mu$m wavelength range compared to metal rich HSB galaxies. The
LSB galaxies are metal poor but have substantial PAH emission. Although these
galaxies are not the extreme cases in metal deficiency such as BCDs, they fill
an interesting niche among local populations, distinct from HSB dwarfs and
from HSB regular galaxies. They may also represent a significant fraction of
the galaxy population at earlier epochs (Zackrisson et al. 2005), and therefore
may have important implications for the interpretation of galaxy number counts
at infrared/submillimeter as well as visible and near-IR wavelengths.
Previous analytical studies suggest that metal abundances have profound
implications on galaxy number counts observed at 24, 70, and 160 $\mu$m
(Lagache et al. 2003; Dale et al. 2005). To date, in analytical models
metallicity effects have been incorporated in an ad hoc manner by artificially
manipulating SEDs of HSB galaxies in the wavelength range mentioned above
(Lagache et al. 2003). When template SEDs of many nearby metal-poor LSB
galaxies become available, one can incorporate these in galaxy evolution models
as an independent class along with various other classes such as normal star
forming, starburst, luminous and ultra-luminous, and AGNs to understand the
observed galaxy number counts and the origin of the IR background.
\acknowledgments
The anonymous referee is thanked for constructive comments and suggestions.
We happily thank D. Dale for his model fits. We also thank Y. Wu, B. R. Brandl,
J. R. Houck for helpful communications. We acknowledge useful discussions from
A. Blain, G. D. Bothun, and S. S. McGaugh on LSB galaxy population. One of us
(NR) gratefully acknowledges the support of a Research Associateship
administered by Oak Ridge Associated Universities (ORAU) during this research.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of Technology,
USA under contract with the National Aeronautics and Space Administration, and
the LEDA database in France. This study is based on observations made with the
{\it Spitzer} Space Telescope, which is operated by the Jet Propulsion
Laboratory, California Institute of Technology under NASA contract 1407.
This study has made use of data products from the Two Micron All Sky Survey,
which is a joint project of the University of Massachusetts and IPAC/Caltech,
funded by NASA and the National Science Foundation. This study also acknowledges
the use of data products from the Sloan Digital Sky Survey.
\begin{figure}
\epsscale{0.80}
\plotone{f01.eps}
\caption{\small
The Sloan Digital Sky Survey (SDSS) composite images of the target galaxies. In each
image north is up and east is to the left. The field of view is $2.5\arcmin \times
2.5\arcmin$ with 0.4\arcsec \ resolution. Note that the diffuse disks of Malin 1
and UGC 9024 are barely visible in these images. To help visualize these extended
disks readers are referred to Barth (2007) for {\it I}-band {\it Hubble} image of
Malin 1 and the following website for deeper {\it B}-band images of UGC 6614 and
UGC 9024 (http://zebu.uoregon.edu/sb2.html).\label{optical}}
\end{figure}
\begin{figure}
\epsscale{0.80}
\plotone{f02.eps}
\figcaption{\small
The {\it Spitzer} view of Malin 1. The IRAC 3.6, 4.5, and 8 $\mu$m images are at
left and the MIPS 24, 70, and 160 $\mu$m images are at right. The IRAC 4.5 and
8 $\mu$m images are shown without subtracting stellar photospheric emission.
In each image north is up and east is to the left. The field of view is
$2.5\arcmin \times 2.5\arcmin$ in all bands. Pixel sizes are $0.61\arcsec$ for
the IRAC bands and $1.8\arcsec$, $4.0\arcsec$, and $8.0\arcsec$ for the MIPS 24,
70, and 160 $\mu$m bands, respectively.
Galaxies from Figs. \ref{ugc6614}-\ref{ugc6879} are presented in a similar
manner. There is no detection of Malin 1 at 70 and 160 $\mu$m and hence the
position of 24 $\mu$m peak emission is shown by the ``+'' sign at these bands.
The contours represent surface brightness (MJy/sr) with intervals of $\sqrt{10}$
where the lowest level is 4$\sigma$ above the background. See text for values of
the lowest contour levels at different bands. \label{malin1}}
\end{figure}
\begin{figure}
\epsscale{0.80}
\plotone{f03.eps}
\caption{\small
The {\it Spitzer} view of UGC 6614. The position of 24 $\mu$m peak emission is
shown by the ``+'' sign at 160 $\mu$m image. \label{ugc6614}}
\end{figure}
\begin{figure}
\epsscale{0.80}
\plotone{f04.eps}
\caption{\small
The {\it Spitzer} view of UGC 9024. The position of 24 $\mu$m peak emission is
shown by the ``+'' sign at 160 $\mu$m image. \label{ugc9024}}
\end{figure}
\begin{figure}
\epsscale{0.80}
\plotone{f05.eps}
\caption{\small
The {\it Spitzer} view of UGC 6879. The $B$-band central surface brightness
$\mu_{B,0}$ of this galaxy is intermediate between LSB and HSB galaxies.
\label{ugc6879}}
\end{figure}
\begin{figure}
\epsscale{1.0}
\plotone{f06.eps}
\figcaption{\small
The observed SEDs of LSB galaxies and UGC 6879 at near, mid, and far-IR
wavelengths. The 2MASS, IRAC, and MIPS points are shown by the open circles,
filled circles, and open triangles, respectively. The {\it IRAS} upper limits
are shown by the filled triangles. For all galaxies the flux densities are
normalized at 3.6 $\mu$m. Dotted, dashed, dashed-dotted, and long dashed
lines are used, respectively, to show the SEDs of:
NGC 0337 (normal star forming galaxy), NGC 2798 (starburst galaxy), NGC 2976
(galaxy with nuclear H II region), and NGC 3627 (seyfert II galaxy). Malin 1
was undetected by the MIPS far-IR channels and hence the detection limits are
shown for the total integration time ($\sim$252 sec. at 70 $\mu$m and $\sim$42
sec. at 160 $\mu$m). \label{obse_seds}}
\end{figure}
\begin{figure}
\epsscale{1.0}
\plotone{f07.eps}
\figcaption{\small
The observed SEDs of LSB galaxies and UGC 6879 using the DH02 model. The symbol
styles are similar to Fig. \ref{obse_seds}. The dashed line represents the empirical
stellar SED of Pahre et al. (2004). The dotted line represents the stellar
synthesis model prediction from Vazquez \& Leitherer (2005) fitted only to the
2MASS fluxes. The dust mass is estimated using the fitted SEDs (solid line).
\label{glob_seds}}
\end{figure}
\begin{figure}
\epsscale{0.75}
\plotone{f08.eps}
\figcaption{\small
The mid and far-IR color-color diagrams highlighting the LSB spirals (shown by
open stars) with respect to different classes of HSB galaxies (shown by black
dots). The SINGS (Kennicutt et al. 2003) sample galaxies are taken as
representative of local HSB galaxies. This sample contains various types of
galaxies such as dwarfs, normal star forming, starburst, Seyfert I, Seyfert II,
and ellipticals.
In all panels galaxies represented by the decimal numerals are dwarf systems
whereas those shown by the roman numerals are ellipticals. The rest of the
points (black dots) represent other population types where one galaxy from each
population are shown by the open circle: NGC 0337 (normal star forming galaxy),
NGC 2798 (starburst galaxy), NGC 2976 (galaxy with nuclear H II region), and
NGC 3627 (Seyfert II galaxy).
In panel (a) $\alpha \approx 0.58$ is the stellar contribution at 4.5 $\mu$m
estimated from the empirically derived stellar SED of Pahre et al. (2004).
The regions covered by the dotted boxes (panel b) are from a similar diagram of
Engelbracht et al. (2005). No HSB galaxies (dwarfs or extended disks) occupy
these regions. Far-IR color of Malin 1 (panel c) is shown with respect to a flat
far-infrared SED. The vertical arrow in panel (d) represents a probable range of
{\it IRAS} color for UGC 9024. A few galaxies shown by numerals do not appear in
panel (d) because of a lack of {\it IRAS} data. Representative error bars based
on calibration uncertainty are shown. \label{colo_colo}}
\end{figure}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=2.45in]{f09.eps}
\includegraphics[width=2.45in]{f10.eps}
\end{center}
\caption{\small
Left panel: The IRAC colors for UGC 6614 as shown by solid symbols. The triangle
(square) represents the color measured within a central region of radius of
2\arcsec \ (12\arcsec); \ this corresponds to a physical radius
of $\sim$1 kpc ($\sim$5.5 kpc). The circle represents integrated flux from the
entire galaxy (Table \ref{flux_table} and \ref{basic_table}).
In this diagram, stars occupy a locus around (0,0) with various exceptions
(Stern et al. 2005). Star forming galaxies mostly lie along the horizontal
axis depending on the amount of 8 $\mu$m PAH emission. UGC 6879 is shown by open
symbols to highlight the behavior of non-AGN type galaxies.
A typical error bar is shown at the bottom. Right panel: an image
(radius 12\arcsec) of non-stellar emission at 4.5 $\mu$m after taking
into account the contribution of similar emission at 3.6 $\mu$m image. See text
for detail. \label{agn}}
\end{figure*}
\section{Introduction}
The coherent manipulation of the electron spin in semiconductor materials via the coupling of the
electron's motion with its spin degree of freedom is a key ingredient in most spintronic
devices.~\cite{Szumniak2012} The special place among them belongs to the spin field effect
transistor (spin-FET)~\cite{Datta1990} in which the electrically tunable
spin-orbit interaction of Rashba~\cite{Rashba1984} is used to control - via the spin rotation - the
electric current between ferromagnetic source and drain. However, the experimental
realization of the functional spin-FET encounters serious physical obstacles, i.e. the low
efficiency of the spin injection from ferromagnet into
semiconductor due to the resistance mismatch~\cite{Schmidt2000} and the spin
relaxation induced mostly by the Dyakonov-Perel mechanism.~\cite{Fabian2007} Both these effects
lead to the low electrical signal and the low ratio of the "on-conductance" to the "off-conductance"
in the first experimental realization of the spin-FET.~\cite{Koo2009,Wunderlich2010} Note that the
maximum value of the on/off conductance ratio defined as $G_{on}/G_{off}=(1+P_SP_D)/(1-P_SP_D)$,
where $P_S(P_D)$ is the spin injection (detection) efficiency in the source (drain), has a value
$2.92$ for $P_S=P_D=70 \%$ (the highest spin injection efficiency reported at room
temperature\cite{Salis2005}), which is insufficient for the electric circuit application.
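These numbers are straightforward to verify; the snippet below (a plain Python check, with both efficiencies taken from the text above) evaluates the ratio directly.

```python
# Direct check of the on/off conductance ratio
# G_on/G_off = (1 + P_S*P_D)/(1 - P_S*P_D) discussed above.

def on_off_ratio(p_s, p_d):
    """On/off conductance ratio for injection/detection efficiencies p_s, p_d."""
    return (1.0 + p_s * p_d) / (1.0 - p_s * p_d)

print(round(on_off_ratio(0.70, 0.70), 2))      # 2.92 for 70% efficiency
print(on_off_ratio(0.999995, 0.999995) > 1e5)  # True: ratio exceeds 10^5
```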
Therefore, the basic condition which has to be met in the experimental setup of the spin-FET is a
spin injection (detection) efficiency of nearly $100 \%$: the ratio $G_{on}/G_{off}=10^5$, adequate
for modern electronics, requires $P_S=P_D=99.9995 \%$.~\cite{Spintronics} This requirement
can be satisfied only by the use of the semiconductor spin filters such as magnetic
resonant tunneling diodes~\cite{Wojcik2012,Wojcik2013} or quantum point contacts
(QPC) with the lateral Rashba SO interaction.~\cite{Debray2009,Khoda2012,Wan2009,Nowak2013} The
latter have been successfully used as the
spin injector and detector in the recent experiment,\cite{Chuang2015, Alomar2016} in which about
$10^5$ times greater conductance oscillations have been observed as compared to the conventional
spin-FET based on ferromagnets.~\cite{Koo2009} The further improvement of the spin transistor
operation involves the suppression of the spin relaxation. For this purpose the layer conduction
channel with two dimensional electron gas (2DEG) should be replaced
by the nanowire~\cite{Wojcik2014} in which the Dyakonov-Perel mechanism of the spin relaxation is
strongly suppressed by the motional narrowing effect.\cite{Holleitner2006,Kwon2007}
Another concept assumes equating the Rashba and Dresselhaus term~\cite{Schliemann2003, Koralek2009}
which generates the persistent spin helix state with extraordinary long spin lifetime. Nevertheless,
this concept~\cite{Schliemann2003,Kunihashi2012,Yoshizumi2016} of the spin transistor is still
waiting for the experimental realization.
The alternative spin transistor design in which
the spin signal is observed over the distance 50~$\mu$m has been recently demonstrated by
Betthausen et al. in Ref.~\onlinecite{Betthausen2012}.
In this design, the spin transistor action is generated by the Landau-Zener transitions, which
occur in the combined homogeneous and helical magnetic fields. The latter is
generated by the ferromagnetic stripes located above the conduction channel made of the magnetic
semiconductor. As shown in Refs.~\onlinecite{Betthausen2012, Saarikoski2014}, by
keeping the transport in the adiabatic regime, the spin state is protected
against the electron scattering on defects. The switching into the non-adiabatic regime generates
the additional conductance dips, which result from the resonant Landau-Zener
transitions.~\cite{Wojcik2015_SST} Although the alternative spin-FET~\cite{Betthausen2012} seems to
be characterized by the long spin lifetime, it requires the application of
the external homogeneous magnetic field, which is difficult to be applied in the integrated
circuit. For this reason, in our recent paper\cite{Wojcik2016} we have proposed an analogous design, in
which the spin transistor action is generated by all-electric means with the use of the lateral
Rashba SO interaction.
Most of the theoretical studies and experimental realizations of the spin transistor reported
so far have been based on 2DEG fabricated in the narrow n-type AlInAs/GaInAs well.\cite{Koo2009,
Wunderlich2010} In the sufficiently narrow quantum well the electrons occupy only the first subband,
i.e. we are dealing with the lowest-energy state occupancy. However,
the recent interest of researchers is directed towards the systems with the wide and coupled
quantum wells\cite{Bentmann2012,Hernandez2013,Hu1999} with double occupancy (two lowest-energy
subbands are occupied), which leads to interesting physical effects such
as band anticrossings or spin mixing. The SO interaction in a 2DEG
quantum well with two subbands has been studied by Bernardes et al. in
Ref.~\onlinecite{Bernardes2007}. The inter-subband-induced SO interaction has been found which
results from coupling between states with opposite parity. This inter-subband SO interaction,
quadratic in the momentum, can give rise to
interesting physical phenomena, e.g. unusual Zitterbewegung\cite{Bernardes2007} or intrinsic spin
Hall effect in symmetric quantum wells.\cite{Hernandez2013,Khaetskii2016} All these new
phenomena motivated us to
investigate the spin-FET based on the conduction channel with double
occupancy and analyze the influence of the inter-subband-induced SO interaction on the spin
transistor operation.
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.6]{fig1.eps}
\caption{(a) Schematic of the spin transistor. Nanowire of width $W$ is located between two
leads acting as the spin polarizer and analyzer. The spin dynamics in the conduction channel is
controlled by the voltages applied to the gates $V_{g1}$ and $V_{g2}$. (b) Cross section
of Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As double quantum well with a central
barrier. (c) Band profile for the double quantum well with the central barrier.
(d) Self-consistent potential energy profile and the corresponding wave functions $\varphi_1$ and
$\varphi _2$. (e) $G(V_g)$ characteristics of the spin transistor with the on and off
states marked.}
\label{fig1}
\end{center}
\end{figure*}
In the present paper, we consider the electron transport in the Datta and Das spin transistor
architecture within the two subband model, which allows us to include the intra- and inter-subband
SO couplings. Starting from the model, in which the values of the SO coupling constants are
treated as the parameters, we analyze the influence of the inter-subband-induced SO interaction on
the conductance and answer the question of how this type of SO interaction affects the operation of the
spin transistor. Next, we consider the realistic double quantum well with the applied external
gate voltages for different electron concentrations. Following the method proposed in
Ref.~\onlinecite{Calsaverini2008}, based on the $8\times 8$ Kane model within the $\mathbf{k}\cdot
\mathbf{p}$ approximation, we determine the intra- and inter-subband induced SO coupling constants
via the self-consistent Schr\"{o}dinger-Poisson procedure. These values are used in the conductance
calculations performed by the scattering matrix method. We reproduce the resonant
behavior of the SO coupling constants reported for a double quantum well.\cite{Calsaverini2008} This
resonant behavior for which the values of the SO parameters change abruptly near the zero gate
voltage is suitable for the spin transistor application in which the on/off transition should be
realized in the narrow voltage range. By calculating the conductance for different gate voltages, we
analyze the spin transistor operation for different electron concentrations $n_e$ and find that for
high $n_e$ the inter-subband-induced SO interaction starts to play a crucial role, leading to the
suppression of the on/off conductance ratio. Finally, the spin transistor operation is analyzed in
the context of the coupling between the quantum wells which is determined by the width
of the central barrier.
The paper is organized as follows: in section ~\ref{sec2} we introduce the model of the
nanostructure and briefly review the Kane Hamiltonian, which leads to the formulas for the intra-
and inter-subband SO coupling constants. Next, we describe the self-consistent
Schr\"{o}dinger-Poisson method used to the SO coupling constants calculations. Finally, we derive
the $4 \times 4$ Hamiltonian in two subband model used to the electronic transport calculations
within the scattering matrix approach.
In section~\ref{sec3} we present our results starting from those obtained for the model
in which the values of the SO coupling constants are treated as the parameters and going to
the realistic double quantum well heterostructure. The summary is contained in sec.~\ref{sec4}.
\section{Theoretical model}
\label{sec2}
\subsection{Model of nanostructure}
\label{sec2a}
We consider the Datta and Das spin transistor architecture. Accordingly, the
nanowire of width $W$ is located between two reflectionless leads acting as the
spin polarizer and analyzer [see Fig.~\ref{fig1}(a)]. In order to ensure the high value of the
on/off conductance ratio we assume 100\% spin injection (detection) efficiency of the
contacts, which, as shown by recent experiments,\cite{Debray2009,Khoda2012,Wan2009} can be achieved
using the QPC with the lateral Rashba SO interaction.
Figure~\ref{fig1}(b) presents the cross-section of the layer heterostructure in the growth direction.
We consider the Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As
double quantum well (width $50$ nm) with a central barrier Al$_{0.3}$In$_{0.7}$As with width $w_b$
which determines the coupling between the conduction electron states in the quantum wells. The
nanostructure contains two $n-$doped layers with donor concentrations $N_{d}=4\times 10
^{18}$~cm$^{-3}$ and width $3$~nm located on either side of the quantum well, $20$ nm away from the
well interface. In this nanodevice, the Rashba SO interaction can be tuned by the
external gates with the lengths $L_g$ located below and above the quantum well, $50$ nm away from
the doping layers. By applying the suitably chosen voltages to these gates the spin transistor can
be electrically switched between the on and off states as shown in Fig.~\ref{fig1}(e).
\subsection{Hamiltonian with SO interaction}
Here we briefly present the derivation of an effective Hamiltonian for conduction electrons with SO
interaction. We start from the $8 \times 8$ Kane Hamiltonian
for the layer heterostructure, which in the block form is given
by\cite{Calsaverini2008,Fabian2007}
\begin{equation}
\label{eq:KH}
H_{8\times 8}=\left (
\begin{array}{cc}
H_c & H_{cv} \\
H^{\dagger}_{cv} & H_v
\end{array}
\right ),
\end{equation}
where $H_c$ is the $2\times 2$ diagonal matrix related to the conduction band [$\Gamma _6$
in the energy band profile, see Fig.~\ref{fig1}(c)] while $H_v$ is the $6\times 6$ diagonal matrix
corresponding to the valence bands ($\Gamma _8$, $\Gamma _7$ in the energy band profile)
\begin{eqnarray}
H_c&=&H_{\Gamma _6}(z)\mathbf{1}_{2\times2}, \\
H_v&=&H_{\Gamma _8}(z)\mathbf{1}_{4\times4} \oplus H_{\Gamma _7}(z)\mathbf{1}_{2\times2}.
\end{eqnarray}
The Hamiltonian $H_{\Gamma _i}(z)$ ($i=6,7,8$) for the band $\Gamma _i$ is expressed as
\begin{equation}
\label{eq:Hi}
H_{\Gamma _i}(z)= -\frac{\hbar ^2}{2m_0} \frac{d^2}{dz^2} + \frac{\hbar ^2 (k_x^2+k_y^2)}{2m_0} +
V_H(z)+V_{\Gamma _i}(z),
\end{equation}
where $m_0$ is the free electron mass and $V_H(z)$ is the Hartree potential.
The potential energy profile $V_{\Gamma _i}(z)$ in Eq.~(\ref{eq:Hi}) is related to the band-offset
and is
given by
\begin{eqnarray}
V_{\Gamma _6}(z)&=&h_6(z), \\
V_{\Gamma _8}(z)&=&-h_8(z)-E_g, \\
V_{\Gamma _7}(z)&=&-h_7(z)-E_g-\Delta_g,
\end{eqnarray}
where $h_i(z)=\delta_i h_{QW}(z)+\delta_{bi} h_b(z)$ with $h_{QW(b)}(z)$ being dimensionless
functions describing the potential energy profile of the quantum well (central barrier), $\delta
_{i(bi)}$ is the band-offset at the quantum well (central barrier) interface
while $E_g$ and $\Delta_g$ are the energy gap and the split-off band gap, respectively. \\
The off-diagonal element $H_{cv}$ of the Hamiltonian (\ref{eq:KH}) has the form
\begin{equation}
H_{cv}=\left ( \begin{array}{cccccc}
\frac{-\kappa_+}{\sqrt{2}} & \sqrt{\frac{2}{3}}\kappa_z & \frac{\kappa_-}{\sqrt{6}} & 0 &
\frac{-\kappa_z}{\sqrt{3}} & \frac{-\kappa_-}{\sqrt{3}} \\
0 & \frac{-\kappa_+}{\sqrt{6}} & \sqrt{\frac{2}{3}}\kappa_z & \frac{\kappa_-}{\sqrt{2}} &
\frac{-\kappa_+}{\sqrt{3}} & \frac{-\kappa_z}{\sqrt{3}}
\end{array}
\right ),
\end{equation}
where $\kappa _{+,-,z}=Pk_{+,-,z}$, $k_{\pm}=k_x\pm ik_y$ and $P=-i\hbar \langle S|p_x|X \rangle /
m_0$ is the conduction to valence band coupling with $|S\rangle$, $|X \rangle$ being the Bloch
functions at the $\Gamma$ point. \\
Using the folding-down transformation, the $8\times 8$ Hamiltonian (\ref{eq:KH}) can
be reduced into the $2\times 2$ effective Hamiltonian for the
conduction band
\begin{equation}
\label{eq:Hc}
\mathcal{H}(E)=H_c+H_{cv}(E-H_v)^{-1}H_{cv}^{\dagger}.
\end{equation}
Since $E_g$ and $\Delta _g$ are the largest energies in the system, we can expand the on- and
off-diagonal elements of the Hamiltonian (\ref{eq:Hc}) in a series limiting to the first
non-zero elements. This procedure leads to the Hamiltonian
\begin{eqnarray}
\label{eq:3DH}
\mathcal{H} &=& \left [ -\frac{\hbar ^2}{2m^*} \frac{d^2}{dz^2} + \frac{\hbar ^2
(k_x^2+k_y^2)}{2m^*} + V_{self}(z) \right ] \mathbf{1}_{2\times 2} \nonumber \\
&+& \alpha(z)
\left (
\begin{array}{cc}
0 & k_y+ik_x \\
k_y-ik_x & 0
\end{array}
\right ),
\end{eqnarray}
where $V_{self}(z)$ is the self-consistent potential energy profile
\begin{equation}
\label{eq:vself}
V_{self}(z)=V_H(z)+\delta _6 h_{QW}(z)+\delta _{b6} h_{b}(z),
\end{equation}
$m^*$ is the effective mass
\begin{equation}
\frac{1}{m^*}=\frac{1}{m_0}+\frac{2P^2}{3\hbar ^2} \left ( \frac{2}{E_g} + \frac{1}{E_g+\Delta_g}
\right ),
\end{equation}
and $\alpha(z)$ is the Rashba SO coupling constant
\begin{equation}
\label{eq:a}
\alpha(z)=\alpha_{QW}\frac{dh_{QW}(z)}{dz}+\alpha_{b}\frac{dh_{b}(z)}{dz}-\alpha_{H}\frac{dV_{H
}(z)}{dz}
\end{equation}
with
\begin{eqnarray}
\label{eq:a1}
\alpha _{QW}&=&\frac{P^2}{3} \left [ \frac{\delta _8}{E_g^2} - \frac{\delta _7}{(E_g+\Delta _g)^2}
\right ], \\
\label{eq:a2}
\alpha _{b}&=&\frac{P^2}{3} \left [ \frac{\delta _{b8}}{E_g^2} - \frac{\delta _{b7}}{(E_g+\Delta
_g)^2}
\right ], \\
\label{eq:a3}
\alpha _{H}&=&\frac{P^2}{3} \left [ \frac{1}{E_g^2} - \frac{1}{(E_g+\Delta _g)^2}
\right ] .
\end{eqnarray}
\subsection{SO coupling constants}
\label{subsec:SOC}
In this subsection, we briefly describe the procedure used to determine $\alpha(z)$ based on
Eq.~(\ref{eq:a}). The main part of this procedure contains the calculations of the self-consistent
potential energy profile $V_{self}(z)$ which includes the band potential energy profile, the
potential generated by the gates and doping and the Hartree potential resulting from the
electron-electron interaction. In our calculations, we
start from the single-electron Hamiltonian without the SO interaction and assume that
the electron is confined in the $z$ direction while in the $x-y$ plane the system is infinite.
This leads to the 1D Schr\"{o}dinger equation in the form
\begin{equation}
\label{eq:RS1D}
\left ( -\frac{\hbar ^2}{2m^*} \frac{d^2}{dz^2} + \frac{\hbar ^2
k_{\parallel}^2}{2m^*} + V_{self}(z) \right )\varphi_n(z)=\mathcal{E} _n\varphi_n(z),
\end{equation}
where $k_{\parallel}^2=k_x^2+k_y^2$.
The eigenproblem (\ref{eq:RS1D}) is solved numerically by the diagonalization in the basis of
infinite quantum well states $\varphi_n(z)=\sum _{j=1}^N c_j \sin(j\pi z/L_z)$, where $L_z$ is the
total length of the heterostructure in the $z$ direction. The Hartree potential $V_H(z)$ [see
Eq.~(\ref{eq:vself})] is calculated from the Poisson equation
\begin{equation}
\label{eq:poisson}
\frac{d^2}{dz^2}V_H(z)=-\frac{e}{\epsilon _0 \epsilon _r} [n_e(z)+n_d(z)],
\end{equation}
where $\epsilon _r$ is the dielectric constant, $n_d(z)$ is the doping profile and $n_e(z)$ is the
electron density, which is given by
\begin{equation}
n_e(z)=\frac{em^*}{\pi \hbar ^2} k_B T \sum _{n} \ln \left [ 1+e^{(E_F - \mathcal{E} _n
)/k_B T}\right ] |\varphi _n(z)|^2,
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ is the temperature and $E_F$ is the Fermi energy.
Equation (\ref{eq:poisson}) is solved by the relaxation
method assuming the Dirichlet boundary conditions determined by the gate voltages. In calculations
we always keep $V_{g1}=0$ as the reference potential.\\
In the self-consistent procedure, equations (\ref{eq:RS1D}) and (\ref{eq:poisson}) are solved
iteratively until the convergence is reached. The self-consistent potential energy profile and the
corresponding wave functions for two lowest states $\varphi_1$ and $\varphi _2$ are presented in
Fig.~\ref{fig1}(d). Then, the SO coupling $\alpha (z)$ is determined from the potential
$V_{self}(z)$ by the use of Eq.~(\ref{eq:a}).
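As a minimal illustration of the Schr\"{o}dinger step of this loop, the sketch below diagonalizes the 1D kinetic-plus-potential operator on a finite-difference grid (rather than in the infinite-well basis used above) for a hard-wall 50~nm well with a central barrier of height $\delta_{b6}=0.21$~eV and $m^*=0.043$; the 4~nm barrier width is an illustrative choice, and the Hartree term is omitted.

```python
import numpy as np

# Finite-difference sketch of the Schrodinger step of the self-consistent
# loop: hard-wall 50 nm well with a 4 nm, 0.21 eV central barrier
# (m* = 0.043 m0, as in the text); the Hartree potential is omitted here.
HBAR2_2M0 = 0.0381                      # hbar^2/(2 m0) in eV nm^2

def solve_1d(V, dz, m_eff):
    """Eigenstates of -(hbar^2/2m*) d^2/dz^2 + V(z) on a uniform grid."""
    n = len(V)
    t = HBAR2_2M0 / (m_eff * dz**2)     # kinetic hopping energy, eV
    H = (np.diag(2.0 * t + V)
         - t * np.eye(n, k=1) - t * np.eye(n, k=-1))
    E, psi = np.linalg.eigh(H)
    return E, psi / np.sqrt(dz)         # normalize: sum |psi|^2 dz = 1

L, dz = 50.0, 0.1                       # nm
z = np.arange(dz, L, dz)
V = np.where(np.abs(z - L / 2) < 2.0, 0.21, 0.0)
E, psi = solve_1d(V, dz, m_eff=0.043)

# E[0], E[1] form the symmetric/antisymmetric doublet whose splitting
# shrinks as the central barrier widens -- the coupling knob w_b above.
print(E[1] - E[0] < E[2] - E[1])   # True
```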
The present calculations have been performed for
Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As double quantum well with the
following material parameters:\cite{Vurgaftman2001} $E_g=0.8161$~eV, $\Delta
_g=0.3296$~eV, $\delta _6=0.52$~eV, $\delta _{7}=0.1637$~eV, $\delta _{8}=0.1935$~eV,$\delta
_{b6}=0.21$~eV, $\delta _{b7}=0.1343$~eV, $\delta _{b8}=0.152$~eV, $m^*=0.043$ and $E_P=2m_0P^2 /
\hbar ^2 =25.3$~eV. The dielectric constant $\epsilon _r=14.013$ is assumed to be
constant in the entire heterostructure.
\subsection{Effective 2D Hamiltonian and conductance calculations}
Now, we derive an effective 2D Hamiltonian in the two subband model starting from its 3D version
given by Eq.~(\ref{eq:3DH}). For this purpose we define the four element basis $\{
|\varphi _1,\uparrow \rangle, |\varphi _1,\downarrow \rangle, |\varphi _2,\uparrow \rangle, |\varphi
_2,\downarrow \rangle\}$ which consists of
the spin-degenerate ground and first excited eigenstate of the Hamiltonian (\ref{eq:RS1D})
without SO interaction. The projection of (\ref{eq:3DH}) onto this basis leads to the $4 \times 4$
Hamiltonian given by
\begin{widetext}
\begin{equation}
\label{eq:2DH}
\mathcal {H} _{2D}= \left ( \begin{array}{cccc}
\frac{\hbar ^2 k _{\parallel}^2}{2m^*} + \varepsilon _1 & \alpha _{11} (k_y+ik_x) & 0 &
\alpha_{12}(k_y+ik_x) \\
\alpha _{11} (k_y-ik_x) & \frac{\hbar ^2 k _{\parallel}^2}{2m^*} + \varepsilon_1 &
\alpha_{12}(k_y-ik_x) & 0 \\
0 & \alpha_{12}(k_y+ik_x) & \frac{\hbar ^2 k _{\parallel}^2}{2m^*} + \varepsilon_2 & \alpha
_{22} (k_y+ik_x)\\
\alpha_{12}(k_y-ik_x) & 0 & \alpha _{22} (k_y-ik_x) & \frac{ \hbar ^2 k
_{\parallel}^2}{2m^*} +
\varepsilon_2
\end{array}
\right )
\end{equation}
\end{widetext}
where $\alpha _{nm} = \langle \varphi _n| \alpha (z) | \varphi _m \rangle$ with $n,m=1,2$.
The calculations of the conductance have been performed within the scattering matrix
method using the Kwant package.~\cite{kwant} For this purpose we have transformed the
Hamiltonian (\ref{eq:2DH}) into the discretized form on the grid $(x_{\mu}, y_{\nu})= \mu dx, \nu
dx$ ($\mu, \nu = 1,2, \ldots$) where $dx$ is the lattice constant.
We introduce the discrete representation of the electron state in the $4 \times 4$ space as follows:
$|\Psi(x_{\mu}, y_{\nu})\rangle
=
\left(|\psi_1^{\uparrow}( x_{\mu},y_{\nu})\rangle
,|\psi_1^{\downarrow}( x_{\mu},y_{\nu})\rangle, |\psi_2^{\uparrow}( x_{\mu},y_{\nu})\rangle,
|\psi_2^{\downarrow}( x_{\mu},y_{\nu})\rangle \right)^T
= |\Psi_{\mu, \nu}\rangle$.
Introducing a set $\boldsymbol{\tau}$ of Pauli-like matrices in the orbital space, the Hamiltonian
(\ref{eq:2DH}) takes on the discretized form
\begin{eqnarray}
\mathcal{H}_{2D}&=& \sum\limits_{\mu\nu} \left [ (4t + \varepsilon _+) \mathbf{1}
\otimes \mathbf{1} - \varepsilon _- \tau _z \otimes \mathbf{1} \right ] |
\Psi_{\mu,\nu } \rangle \langle \Psi_{\mu,\nu}| \nonumber \\
&+& \sum _{{\mu}{\nu}} \bigg \{ -t \mathbf{1} \otimes \mathbf{1} + it_{SO} \bigg [ \alpha_{11}
\frac{1}{2}(\mathbf{1}-\tau _z)\otimes \sigma_y \nonumber \\
&+& \alpha_{22} \frac{1}{2}(\mathbf{1}+\tau _z)\otimes \sigma_y + \alpha _{12}\tau _x \otimes
\sigma _y \bigg ] \bigg \} |\Psi_{\mu +1,\nu } \rangle \langle \Psi_{\mu,\nu}| + H.c.\nonumber \\
&+& \sum _{{\mu}{\nu}} \bigg \{ -t \mathbf{1} \otimes \mathbf{1} + it_{SO} \bigg [ \alpha_{11}
\frac{1}{2}(\mathbf{1}-\tau _z)\otimes \sigma_x \nonumber \\
&+& \alpha_{22} \frac{1}{2}(\mathbf{1}+\tau _z)\otimes \sigma_x + \alpha _{12}\tau _x \otimes
\sigma _x \bigg ] \bigg \} |\Psi_{\mu,\nu +1} \rangle \langle \Psi_{\mu,\nu}| + H.c.
\label{HTB}
\end{eqnarray}
where $t=\hbar ^2/(2m^* dx^2)$, $t_{SO}=1/(2dx)$, $\varepsilon _{\pm}=(\varepsilon _1 \pm \varepsilon _2)/2$ and $\mathbf{1}$ is the $2 \times 2$ unit matrix.
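To make the tensor-product notation concrete, the sketch below assembles one hopping block of the discretized Hamiltonian with placeholder coupling values (the kinetic hopping is written with the conventional $-t$ sign, and the assignment of subbands to the $\tau_z$ projectors is a convention of ours) and verifies that the two subbands decouple when $\alpha_{12}=0$.

```python
import numpy as np

# Assemble a 4x4 hopping block -t*(1 x 1) + i*t_SO*[...] of the discretized
# Hamiltonian (tau: subband space, sigma: spin space). The alpha values
# here are placeholders, not the self-consistently computed ones.
s0 = np.eye(2, dtype=complex)
sy = np.array([[0.0, -1j], [1j, 0.0]])
tau_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
tau_z = np.diag([1.0, -1.0]).astype(complex)

HBAR2_2M0 = 0.0381                       # hbar^2/(2 m0), eV nm^2
dx, m_eff = 2.0, 0.043                   # nm; effective mass in units of m0
t = HBAR2_2M0 / (m_eff * dx**2)          # kinetic hopping energy, eV
t_so = 1.0 / (2.0 * dx)                  # 1/nm

def hop_x(a11, a22, a12):
    """Hopping block along x; Rashba couplings in eV nm."""
    so = (a11 * np.kron((s0 - tau_z) / 2, sy)
          + a22 * np.kron((s0 + tau_z) / 2, sy)
          + a12 * np.kron(tau_x, sy))
    return -t * np.kron(s0, s0) + 1j * t_so * so

# with alpha_12 = 0 the two subbands decouple (off-diagonal blocks vanish):
print(np.allclose(hop_x(0.014, 0.014, 0.0)[:2, 2:], 0))    # True
# a finite alpha_12 mixes them:
print(np.allclose(hop_x(0.014, 0.014, -0.005)[:2, 2:], 0))  # False
```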
Let us assume that the electron with spin up in the first subband is injected from the source
(polarizer) into the conduction channel. The electron can be transmitted via the conduction channel
to the analyzer in one of the four possible processes: (i) intra-subband transmission with spin
conservation $(T_{11}^{\uparrow \uparrow})$, (ii) intra-subband transmission with spin-flip
$(T_{11}^{\uparrow \downarrow})$, (iii) inter-subband transmission with spin conservation
($T_{12}^{\uparrow \uparrow}$) and (iv) inter-subband transmission with spin flip ($T_{12}^{\uparrow
\downarrow}$), where $T_{nm}^{\sigma \sigma'}$ with $\sigma, \sigma'=\uparrow, \downarrow$ and
$n,m=1,2$ denotes the probabilities of the transmission processes (i) - (iv). Similar
scattering processes can be introduced for the spin-up electrons injected from the second
subband. Their probabilities are denoted by $T_{22}^{\uparrow \uparrow}$, $T_{22}^{\uparrow
\downarrow}$, $T_{21}^{\uparrow \uparrow}$, $T_{21}^{\uparrow \downarrow}$. \\
Having determined the transmission coefficients $T^{\sigma \sigma '}_{nm}$
we calculate the conductance in the ballistic regime using the Landauer formula
\begin{equation}
G_{nm}^{\sigma \sigma^{\prime}}=\frac{e^2}{h} \int T_{nm} ^{\sigma \sigma ^{\prime}} (E) \left (
-\frac{\partial f_{FD}(E,E_F)}{\partial E} \right ) dE,
\end{equation}
where $\sigma$, $\sigma^{\prime}$ are the spin indices and $f_{FD}(E,E_F)=1/\{1+\exp[(E-E_F)/k_BT]\}$ is
the Fermi-Dirac distribution function, where $T$ is the temperature and $E_F$ is the Fermi energy.
For the assumed 100\% spin injection (detection)
efficiency of the contacts, the total conductance via the device is
given by
\begin{equation}
G=\sum _{n,m=1}^{2} G_{nm}^{\uparrow \uparrow}.
\end{equation}
The conductance calculations presented in the paper have been performed for $dx=2$~nm and
$T=4.2$~K.
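As a sanity check on the thermal averaging in the Landauer formula, the broadening kernel $-\partial f_{FD}/\partial E$ at $T=4.2$~K can be evaluated numerically; a minimal sketch follows (the Fermi energy value is illustrative).

```python
import numpy as np

# The broadening kernel -df_FD/dE entering the Landauer average:
# at T = 4.2 K it is a few k_B T wide around E_F and integrates to 1.
kB = 8.617e-5                  # Boltzmann constant, eV/K
T, E_F = 4.2, 4.0e-3           # K; E_F in eV (illustrative value)

E = np.linspace(E_F - 0.05, E_F + 0.05, 20001)
f = 1.0 / (1.0 + np.exp((E - E_F) / (kB * T)))
kernel = -np.gradient(f, E)

print(round(kernel.sum() * (E[1] - E[0]), 3))   # 1.0: normalized weight
```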
\section{Results}
\label{sec3}
In this section we study the conductance through the spin transistor including
the intra- and inter-subband SO interaction. We start from
the model, in which the SO coupling constants are treated as the parameters (subsection~A)
and show how the inter-subband-induced SO coupling affects the spin transistor operation. Then, in
subsection B, we introduce the realistic model with the
Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As double quantum well, for which the SO coupling
constants are determined by the Schr\"{o}dinger-Poisson approach
presented in subsec.~\ref{subsec:SOC}.
\subsection{Parametrized model}
We consider the spin transistor with the length $L=800$~nm and the gate attached to the conduction
channel in the middle of the nanostructure. The length of the gate $L_g=400$~nm (see
Fig.~\ref{fig1}).
The energy difference between the two subbands is taken to be $\Delta \varepsilon = \varepsilon _2 -
\varepsilon_1=1$~meV [Eq.~(\ref{eq:2DH})]. We assume the channel width $W=40$~nm, which guarantees
that the energy separation between the two lowest energy states related to the confinement in the
lateral $y$ direction $\Delta \varepsilon _{\perp} \approx \hbar ^2 \pi ^2 / 2 m^* W^2=4.2$~meV is
greater than $\Delta \varepsilon$. All results presented in this subsection have been obtained for
the Fermi energy $E_F=4$~meV, which ensures that only the lowest energy state in the transverse
motion ($y$ direction) is occupied and the two subbands in the growth $z$ direction participate in
the transport. The SO coupling constants, experimentally controlled by the gate voltage, are
treated as the parameters of the calculations. \\
Let us start our study from the case in which the intra-subband SO coupling constants in both
subbands are equal, $\alpha _{11}=\alpha _{22}= \alpha$. Figure~\ref{fig2} presents the conductance
as a function of $\alpha$ for different inter-subband SO coupling constant $\alpha
_{12}$. We assume that $\alpha _{12}$ takes on the negative values which is consistent with the
results for the realistic structure (see subsection~\ref{sec3:real}). As we have checked,
the change of the sign of $\alpha _{12}$ does not change the conductance in any way -- the conductance
depends only on the absolute value of $\alpha _{12}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.40]{fig2.eps}
\caption{Conductance $G$ as a function of intra-subband SO coupling constant $\alpha$ for
different inter-subband couplings $\alpha _{12}$. Vertical dashed line marks the value of
$\alpha$, for which the spin density distributions are depicted in Fig.~\ref{fig5}. Inset: spin
density distributions $s^{1(2)}_z$ for the first (1st) and second (2nd) subband for
$\alpha=14$~meVnm and $\alpha _{12}=0$.}
\label{fig2}
\end{center}
\end{figure}
For the inter-subband SO coupling constant $\alpha _{12}=0$ the spin dynamics in the two
subbands, via which the electrons are transmitted, is independent. The spin of electron flowing
in the vicinity of the gate rotates due to the SO interaction. Since the intra-subband SO couplings
are assumed to be equal, the electron spin in each of the subbands precesses with
the same precession length $\lambda _{SO}=2\pi/\Delta k$, where
$\Delta k=k_F^{\uparrow}-k_F^{\downarrow}=2m^*\alpha /\hbar ^2$. In this case, the slight
difference in the transport conditions through the two subbands can result from the energy
difference $\Delta \varepsilon$; however, it is too small to affect the spin transistor operation.
Hence, if we assume the ideal spin polarizer (analyzer), which transmits only electrons with well
defined spin, the conductance oscillates as a function of $\alpha$ according to the
formula~\cite{Spintronics}
\begin{equation}
\label{eq:Ga0}
G=2G_0 \cos ^2 \left ( \frac{\Delta k L_g}{2} \right )=2G_0 \cos ^2 \left ( \frac{m^* \alpha
L_g}{\hbar ^2} \right ),
\end{equation}
where $G_0=e^2/h$ and the factor $2$ is related to the fact that the electrons are
transmitted via the two subbands.\\
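As an illustrative numerical check (a sketch, not part of the paper's calculations; the effective mass $m^*=0.041\,m_e$ is an assumed InGaAs-like value, and the gate length is treated as a free parameter), Eq.~(\ref{eq:Ga0}) can be evaluated directly:

```python
import math

HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg
E_CH = 1.602176634e-19  # C
M_STAR = 0.041 * M_E    # assumed InGaAs-like effective mass (not from the paper)

def conductance(alpha_mevnm, l_g_nm):
    """G = 2 G0 cos^2(m* alpha L_g / hbar^2) in units of G0 = e^2/h.

    alpha_mevnm: intra-subband Rashba constant (meV*nm), assumed equal
    in both subbands; l_g_nm: gate length (nm).
    """
    alpha = alpha_mevnm * 1e-3 * E_CH * 1e-9  # meV*nm -> J*m
    l_g = l_g_nm * 1e-9                       # nm -> m
    phase = M_STAR * alpha * l_g / HBAR**2    # = Delta_k * L_g / 2
    return 2.0 * math.cos(phase) ** 2

# Maxima (G = 2 e^2/h) occur when the spin precesses an integer number
# of times over the gate, i.e. Delta_k * L_g = 2*N*pi.
print(conductance(0.0, 2000.0))  # alpha = 0: no precession, G = 2.0
```

The period of the oscillations in $\alpha$ is $\pi\hbar^2/(m^* L_g)$, so a longer gate switches the transistor within a narrower range of the Rashba constant.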
These conductance oscillations as a function of the intra-subband SO coupling constant $\alpha$ are
presented in Fig.~\ref{fig2} (black line, $\alpha _{12}=0$). Based on Eq.~(\ref{eq:Ga0}) one can
conclude that the conductance reaches a maximum for $\Delta k L_g = 2 N \pi$, which
corresponds to the process in which the spin of the electron flowing through the conduction
channel precesses an integer number of times. On the other hand, the conductance minimum is reached
for $\Delta k L_g = (2 N+1) \pi$, which corresponds to a half-integer number of rotations of the
electron spin. The former case is depicted in the inset of Fig.~\ref{fig2}, in which we present the
spin density distributions
$s^{1(2)}_z$ in the nanostructure calculated for both subbands for $\alpha=14$~meVnm and $\alpha
_{12}=0$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.45]{fig3.eps}
\caption{(a) Conductance $G$ as a function of intra-subband SO coupling constant $\alpha$ and
inter-subband spin-orbit coupling constant $\alpha _{12}$. (b) Conductance $G$ and (c)
transmission probabilities as a function of $\alpha_{12}$ for $\alpha=0$.}
\label{fig3}
\end{center}
\end{figure}
The regular oscillations of $G(\alpha)$ are modified if we introduce the inter-subband SO coupling
into the system, i.e. $\alpha _{12}\ne0$. As presented in Fig.~\ref{fig2}, for $\alpha
_{12}=-10$~meVnm the change of the conductance becomes significant for large values of $\alpha$. The
black and red lines coincide for small $\alpha$ and diverge for $\alpha>10$~meVnm.
For $\alpha=14$~meVnm, marked by the vertical dashed line, the inter-subband SO
interaction leads to a slight reduction of the conductance. A further change of the
inter-subband SO coupling constant up to $\alpha _{12}=-18$~meVnm leads to an inversion of
the oscillations, namely, the conductance reaches a minimum at the values of $\alpha$ for which it is
maximal in the case of $\alpha _{12}=0$. This inversion is clearly visible in Fig.~\ref{fig3}, which presents
the conductance as a function of the intra- and inter-subband SO coupling constants $G(\alpha,
\alpha _{12})$. The complete inversion is observed for $\alpha _{12}=-18$~meVnm (white dashed line),
for which also the period of the $G(\alpha)$ oscillations
slightly increases. Interestingly, as presented in Fig.~\ref{fig3}(b), even for
$\alpha=0$, corresponding to the symmetric heterostructure, the conductance oscillates as a
function of $\alpha _{12}$. The transmission probabilities
shown in Fig.~\ref{fig3}(c) indicate that this behavior is directly related to the increase of the
inter-subband spin-flip transmission probability. All these results suggest the possible
application of the inter-subband SO interaction in the spin transistor design, although the
experimental control of $\alpha _{12}$ still remains an open issue.
The conductance behavior [Figs.~\ref{fig2} and \ref{fig3}] results from the spin dynamics, which in
the presence of the inter-subband SO interaction becomes much more complicated. Similarly as for
$\alpha _{12}=0$, the spin dynamics is determined by the differences of the Fermi wave vectors $k_F$
between the subbands participating in the transport at the given Fermi energy. These
differences can be determined from the eigenenergies of Hamiltonian (\ref{eq:2DH}), which are given
by
\begin{equation}
E_{ks\rho}= E_0+\frac{\hbar ^2 k^2}{2m^*}+\varepsilon _+ + s \alpha _+ k +\rho \sqrt{(\alpha
_{12}k)^2+(\varepsilon _- + s \alpha _- k)^2},
\end{equation}
where
\begin{equation}
\varepsilon _{\pm}=\frac{1}{2} (\varepsilon _1 \pm \varepsilon_2), \:\:\:\:
\alpha _{\pm}=\frac{1}{2} (\alpha _{11} \pm \alpha _{22}),
\end{equation}
$E_0$ is the energy of the lowest state related to the confinement in the lateral $y$ direction
while $s=\pm 1$ and $\rho=\pm1$ correspond to the spin state and the subband, respectively. \\
Notice that the electron
initially injected into the channel in the spin-up state oscillates between the subbands, changing
its spin. The spin dynamics is a combination of precessions with different precession
lengths, which, in contrast to the case with $\alpha _{12}=0$, depend on the Fermi energy. The
simplest case for which this problem can be solved
analytically is the symmetric structure with zero intra-subband SO coupling ($\alpha=0$),
presented in Fig.~\ref{fig3}(b). Then, the spin precession length is given by
\begin{equation}
\lambda _{SO}=\frac{2 \pi}{\Delta k}=\frac{2 \pi}{k_2 - k_1},
\label{eq:precess}
\end{equation}
where
\begin{widetext}
\begin{eqnarray}
\label{eq:k1}
k_1=\frac{\sqrt{2m^*E_F}}{\hbar} \sqrt{1-\frac{\varepsilon _{+}}{E_F}+\frac{\alpha _{12}^2m^*}{\hbar
^2 E_F} \left ( 1+\sqrt{1+\frac{2\hbar ^2 (E_F-\varepsilon _{-})}{\alpha _{12}^2m^*} +\frac{\hbar
^4 \varepsilon _{-}^2}{\alpha _{12}^4m^{*2}}}\right ) }, \\
\label{eq:k2}
k_2=\frac{\sqrt{2m^*E_F}}{\hbar} \sqrt{1-\frac{\varepsilon _{+}}{E_F}+\frac{\alpha _{12}^2m^*}{\hbar
^2 E_F} \left ( 1-\sqrt{1+\frac{2\hbar ^2 (E_F-\varepsilon _{-})}{\alpha _{12}^2m^*} +\frac{\hbar
^4 \varepsilon _{-}^2}{\alpha _{12}^4m^{*2}}}\right ) }.
\end{eqnarray}
\end{widetext}
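As an illustration, Eqs.~(\ref{eq:k1}), (\ref{eq:k2}) and (\ref{eq:precess}) can be evaluated numerically for the parameters of this subsection (a sketch; the value $\hbar^2/2m^*\approx 929$~meV\,nm$^2$ again assumes $m^*=0.041\,m_e$):

```python
import math

HB2_2M = 929.0  # meV*nm^2; hbar^2/(2 m*) for an assumed m* = 0.041 m_e

def fermi_k(rho, e_f=4.0, eps1=0.0, eps2=1.0, a12=-18.0):
    """Fermi wave vectors of Eqs. (k1)/(k2): rho = +1 gives k_1, rho = -1 gives k_2.

    Energies in meV, a12 in meV*nm, result in 1/nm.  The prefactors are
    converted using hbar^2/m* = 2*HB2_2M.
    """
    eps_p, eps_m = 0.5 * (eps1 + eps2), 0.5 * (eps1 - eps2)
    inner = math.sqrt(1.0 + 4.0 * HB2_2M * (e_f - eps_m) / a12**2
                      + 4.0 * HB2_2M**2 * eps_m**2 / a12**4)
    radicand = (1.0 - eps_p / e_f
                + a12**2 / (2.0 * HB2_2M * e_f) * (1.0 + rho * inner))
    return math.sqrt(e_f / HB2_2M) * math.sqrt(radicand)

def precession_length(**kw):
    """Spin precession length lambda_SO = 2*pi/|k_2 - k_1| in nm."""
    return 2.0 * math.pi / abs(fermi_k(-1, **kw) - fermi_k(+1, **kw))
```

In the limit $\alpha_{12}\to 0$ the two wave vectors reduce to the ordinary Fermi wave vectors of the uncoupled subbands, $k_{1,2}\to\sqrt{2m^*(E_F-\varepsilon_{1,2})}/\hbar$.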
In Fig.~\ref{fig4} we present the $z$ component $s^{1(2)}_z$ of the spin density distribution for
both subbands and $\alpha _{12}=-18$~meVnm corresponding to the conductance minimum in
Fig.~\ref{fig3}(b). The lower panels in Fig.~\ref{fig4} depict
the partial spin density distributions: (I) $s_z^{11}$ and (II) $s_z^{12}$ correspond to
the spin density distribution in the first and second subband, respectively, if the electron with
spin up is injected into the first subband, while $s_z^{21}$ (III) and $s_z^{22}$ (IV) correspond
to the spin density distribution in the first and second subband, if the electron with spin up is
injected into the second subband. These partial spin density distributions give us information
not only about the spin dynamics in the considered subband but also about the spin behavior due to
the inter-subband transitions.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.35]{fig4.eps} \\
\caption{Spin density distribution $s^{1(2)}_z$ for the 1st and 2nd subband (upper panels)
calculated for $\alpha=0$ and $\alpha _{12}=-18$~meVnm. Figures (I) and (II) correspond to
the spin density distribution in the 1st and 2nd subband, respectively, if the electron with spin
up is injected into the first subband, while figures (III) and (IV) correspond to the spin
density distribution in the 1st and 2nd subband if the electron with spin up is injected into the
2nd subband.}
\label{fig4}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.35]{fig5a.eps} \\
\vspace{5mm}
\includegraphics[scale=0.35]{fig5b.eps}
\caption{Spin density distribution $s^{1(2)}_z$ for the 1st and 2nd subband (upper panels)
calculated for $\alpha=14$~meVnm and (a) $\alpha _{12}=-10$~meVnm, (b)
$\alpha _{12}=-18$~meVnm. Figures (I) and (II) correspond to the spin density distribution in
the 1st and 2nd subband, respectively, if the electron with spin up is injected into the first
subband, while figures (III) and (IV) correspond to the spin density distribution in
the 1st and 2nd subband if the electron with spin up is injected into the 2nd subband.}
\label{fig5}
\end{center}
\end{figure}
For both subbands (Fig.~\ref{fig4}) the electrons initially injected with spin up
reverse their spin when flowing through the nanodevice [cf. $s^1_z$ and $s^2_z$]. The spin-down
electrons reaching the output are backscattered from the ideal spin-up polarized contact (analyzer),
which leads to the decrease of the conductance presented in Fig.~\ref{fig3}(b). Since the
intra-subband SO coupling constant $\alpha=0$, the spin of the electron flowing through the subband,
in which it was injected, does not precess [cf. Figs.~\ref{fig4} (I) and (IV)]. Nevertheless, as
presented in Figs.~\ref{fig4} (II) and (III), the electron spin is inverted in the inter-subband
transitions, the probability of which reaches a maximum for the chosen $\alpha _{12}$. In this
case, the spin precession length is given by Eq.~(\ref{eq:precess}).
The spin dynamics near the gate becomes more complicated for the nonzero intra-subband SO coupling
($\alpha \ne 0$). In this case the spin degeneracy of the subbands is lifted. Thus, besides the
reversal of spin related to the inter-subband transition, we expect the intra-subband spin
precession. In Fig.~\ref{fig5} we present the spin density
distributions for $\alpha=14$~meVnm (marked by the vertical dashed line in Fig.~\ref{fig2}) and two
chosen values of the inter-subband SO coupling constant: (a) $\alpha _{12}=-10$~meVnm and (b) $\alpha
_{12}=-18$~meVnm. As shown in Fig.~\ref{fig5}, for $\alpha _{12}=-10$~meVnm the spin
dynamics in the nanostructure differs only slightly from the case without the inter-subband SO
interaction (compare with the inset of Fig.~\ref{fig2}). The electron spin performs one full
rotation and leaves the nanodevice with almost the same spin as at the input. In contrast to the
case with $\alpha _{12}=0$, we observe the inter-subband transition in which the electron
conserves its spin. The spin dynamics drastically changes if we increase the inter-subband SO
coupling. For $\alpha _{12}=-18$~meVnm [Fig.~\ref{fig5}(b)] the electrons initially injected with
spin up reverse their spin on the output leading to the decrease of the conductance. Note that
in contrast to the spin dynamics for $\alpha=0$, for which the spin
flip is related to the inter-subband transition, in the case of $\alpha \ne 0$ the electron
conserves the spin in this type of transitions. Due to the intra-subband SO interaction the spin
precession takes place mainly
in the subband into which the electron is injected. However, this precession is strongly
affected by the inter-subband SO interaction, which significantly changes the precession length.
\begin{figure}[ht]
\begin{center}
\vspace{5mm}
\includegraphics[scale=0.4]{fig6.eps}
\caption{Conductance $G$ as a function of intra-subband SO coupling constant $\alpha$ and
asymmetry parameter $\xi=\alpha_{11} / \alpha_{22}$ for different $\alpha
_{12}$.}
\label{fig6}
\end{center}
\end{figure}
Finally, we have also performed calculations of the conductance for the most general
case, for which the intra-subband spin-orbit coupling constants are different in the two
subbands. For this purpose
we define the asymmetry parameter $\xi=\alpha_{11}/\alpha _{22}$. Fig.~\ref{fig6}
displays the conductance as a function of the intra-subband SO coupling constant $\alpha$ and the
asymmetry parameter $\xi$ for different $\alpha _{12}$. We
see that even in the absence of the inter-subband SO interaction [Fig.~\ref{fig6}(a)] the asymmetry
of the intra-subband SO coupling strongly affects the conductance oscillations making them
irregular. In this case the conductance is symmetric relative to the subband interchange. As shown
in Figs.~\ref{fig6}(b)-(d) this symmetry is lifted by the inter-subband SO interaction.
\subsection{Realistic model}
\label{sec3:real}
In this subsection we study the conductance of the spin transistor with the
conduction channel formed from the Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As double quantum
well presented in Fig.~\ref{fig1}(b). For this heterostructure we have determined the SO
coupling constants using the self-consistent procedure described in subsec.~\ref{subsec:SOC}.
Figure~\ref{fig7} presents the intra- ($\alpha _{11}$ and $\alpha _{22}$) and inter-subband ($\alpha
_{12}$) Rashba SO coupling constants as a function of the gate voltage for different electron
concentrations. We have also calculated the Dresselhaus SO coupling constants, defined as
\begin{equation}
\beta _n=\beta ^{3D} \langle \varphi _n| \hat{k}_z^2 | \varphi _n \rangle,
\end{equation}
where $\beta ^{3D}$ is the bulk Dresselhaus SO coupling constant, taken as $\beta
^{3D}=0.0237$~eVnm$^3$.\cite{Jancu2005} Figure \ref{fig7} shows that for the considered wide
quantum well, the Dresselhaus SO coupling constants $\beta _{1(2)}$ are two orders of magnitude
smaller than the Rashba constants [Fig.~\ref{fig7}(d)]. Therefore, the Dresselhaus SO interaction
is neglected in the conductance calculations presented in the rest of the paper.
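For an order-of-magnitude check (a sketch, not the paper's self-consistent calculation), one can replace the self-consistent wavefunctions $\varphi_n$ by infinite-well states, for which $\langle \varphi_n|\hat{k}_z^2|\varphi_n\rangle = (n\pi/w)^2$; the well width $w=20$~nm and the reading of $\beta^{3D}$ in eV\,nm$^3$ are assumptions of this estimate:

```python
import math

BETA_3D = 0.0237e3  # meV*nm^3 (i.e. 0.0237 eV*nm^3; an assumed unit reading)

def beta_n(n, w=20.0):
    """Infinite-well estimate beta_n = beta3D * (n*pi/w)^2 in meV*nm.

    n is the subband index and w the well width in nm (an illustrative
    value, not taken from the paper's heterostructure profile).
    """
    return BETA_3D * (n * math.pi / w) ** 2

# beta_1 comes out well below 1 meV*nm, and beta_2 = 4 * beta_1, both far
# smaller than typical Rashba constants, supporting the neglect of the
# Dresselhaus term for wide wells.
```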
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.7]{fig7.eps}
\caption{Intra-subband (a) $\alpha _{11}$ and (b) $\alpha _{22}$ and inter-subband (c) $\alpha
_{12}$ SO coupling constants as a function of gate voltage $V_g$ for different electron
densities $n_e$. (d) Dresselhaus SO coupling constants as a function of gate voltage $V_g$ for
$n_e=10 \times 10
^{11}$~cm$^{-2}$.}
\label{fig7}
\end{center}
\end{figure*}
Figure~\ref{fig7}(c) shows that the inter-subband SO coupling constant is an even function of the
gate voltage and exhibits a ``resonant behavior'' around $V_g=0$, corresponding to the symmetric
geometry of the heterostructure.
Simultaneously, at the resonant voltage $V_g=0$, the intra-subband SO coupling constants $\alpha
_{11}$ and $\alpha _{22}$ change the sign.
Similar "resonant behavior" was recently reported by Calsaverini et. al. for
InSb/Al$_{0.12}$In$_{88}$Sb double quantum well.~\cite{Calsaverini2008} The
authors~\cite{Calsaverini2008} argued that this feature results from the dominant role of
the Hartree potential and the overlap between the wavefunctions of the ground and the first excited
state in the quantum well, which for $V_g=0$ becomes maximal. Notice
that the conduction channel, in which the SO coupling constants rapidly change around
$V_g=0$, is preferred for the application in the spin transistor architecture, in which the switching
between the on and off states should be realized in a gate voltage range as narrow as possible
[see Fig.~\ref{fig1}(e)]. We have performed the calculations of Rashba constants for different
electron densities (Fig.~\ref{fig7}) taking care that only the two lowest-energy states in the
quantum well were occupied. As shown in Fig.~\ref{fig7}, the increasing
electron density $n_e$ leads to an increase of the slope of the $\alpha_{11(22)}(V_g)$ curves around
$V_g=0$, making the heterostructure more convenient for the spin transistor application.
Simultaneously, the inter-subband SO coupling $\alpha _{12}$ at $V_g=0$ decreases, which, as we will
show later, also affects the conductance at this gate voltage.
Having the SO coupling constants determined from the Schr\"{o}dinger-Poisson approach, we calculate
the conductance using the scattering matrix method. For this purpose we consider the spin
transistor with length $L=3$~$\mu$m, width $W=40$~nm and the gate located in the middle of
the conduction channel. The length of the gate $L_g=2$~$\mu$m is assumed to be comparable to that
used in a recent experiment.~\cite{Chuang2015} Figure~\ref{fig8}(a) depicts the conductance as a
function of the gate voltage for different electron densities.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{fig8.eps}
\caption{(a) Conductance $G$ as a function of gate voltage $V_g$ for different electron
densities $n_e$. (b) Conductance $G(0)$ for $V_g=0$ as a function of electron density
$n_e$. (c) Energy difference between the subbands $\Delta \varepsilon = \varepsilon_2 -
\varepsilon_1$ as a function of gate voltage $V_g$ for the same electron densities as in
figure (a).}
\label{fig8}
\end{center}
\end{figure}
The conductance $G(V_g)$ exhibits a pronounced peak around $V_g=0$
related to the low-resistance state of the spin transistor (on state). The change of the gate
voltage in the narrow range around $V_g=0$ switches the transistor into the low-conductance state
with high resistance (off state). Notice that the on/off conductance ratio strongly depends
on the electron density and is larger for high $n_e$. The dependence $G(V_g)$ is
determined by the spin dynamics in the conduction channel, which depends on the strength of the
Rashba SO interaction. At $V_g=0$, corresponding to the symmetric geometry of the heterostructure, the
intra-subband SO coupling constants
$\alpha _{11}=\alpha _{22}=0$ [see Figs.~\ref{fig7}(a) and (b)]. Then, in the absence of the inter-subband
SO interaction, the spin of the electron injected from the polarizer does not precess and the
electron leaves the conduction channel with the same spin, matching the polarization of the left
contact (analyzer). Both subbands
transmit the electrons, giving rise to the
conductance $G=2 e^2 /h$. However, as shown in Figs.~\ref{fig7}(a) and (b), the slight deviation of
the gate voltage from $V_g=0$ causes the rapid change of the intra-subband SO coupling constants.
In particular, if the strength of this SO interaction is sufficient to invert the spin of the
electron flowing through the nanostructure, the electron is reflected from the analyzer,
which results in the zero conductance. As shown in Fig.~\ref{fig8}(a) the large changes of the
conductance around $V_g=0$ are strictly related to the abrupt change of the SO coupling constants
presented in Fig.~\ref{fig7}. Outside the close vicinity of $V_g=0$ the conductance is almost
constant, which results from the nearly constant values of the SO coupling in this range
(see Fig.~\ref{fig7}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{fig9.eps}
\caption{Transmission probabilities $T$ as a function of gate voltage $V_g$ for the electron
density (a) $n_e=4\times 10^{11}$~cm$^{-2}$ and (b) $n_e=10\times 10^{11}$~cm$^{-2}$. }
\label{fig9}
\end{center}
\end{figure}
The model of spin dynamics presented above is correct in the absence of the
inter-subband-induced SO interaction or, for the realistic structure, outside the range of the
conductance peak, where the inter-subband SO coupling constant is much smaller than the intra-subband
coupling constants. However, in the gate voltage range in which the conductance peak occurs,
i.~e. around $V_g=0$, the inter-subband SO interaction plays a significant role. Strong evidence
of this interaction is provided by the value of the conductance at $V_g=0$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{fig10.eps}
\caption{Intra-subband (a) $\alpha _{11}$ and (b) $\alpha _{22}$ and inter-subband (c) $\alpha
_{12}$ SO coupling constants as a function of gate voltage $V_g$ for different barrier widths
$w_b$. (d) Energy separation between the subbands $\Delta \varepsilon = \varepsilon_2 -
\varepsilon_1$ at $V_g=0$ as a function of barrier width $w_b$. Results for $n_e=10 \times 10
^{11}$~cm$^{-2}$.}
\label{fig10}
\end{center}
\end{figure}
As mentioned before, at $V_g=0$, for which $\alpha _{11}=\alpha _{22}=0$, the absence of the
inter-subband SO interaction leads to $G(0)=2e^2 /h$. However, as depicted in Fig.~\ref{fig8}(a),
this value of the conductance is reached only for the low electron density $n_e=4 \times 10
^{11}$~cm$^{-2}$, for which the inter-subband SO coupling is low (see Fig.~\ref{fig7}). For higher
electron densities, $G(0)$ gradually decreases, leading to the reduction of the on/off conductance
ratio. As shown in Fig.~\ref{fig4}, for $V_g=0$ the only possible
process, which decreases the conductance, is the inter-subband transmission with spin-flip resulting
from the inter-subband SO interaction. The probability of this process depends not only on the value
of $\alpha _{12}$ but also on the energy separation between the subbands $\Delta \varepsilon
=\varepsilon_2 - \varepsilon_1$. In Fig.~\ref{fig8}(c), we present $\Delta \varepsilon$ versus
$V_g$. Comparing the results of Fig.~\ref{fig7}(c) and Fig.~\ref{fig8}(c), we see that
with increasing $n_e$, $\alpha _{12}$ increases while $\Delta \varepsilon$ decreases. Together, these
effects enhance the inter-subband transitions with spin flip and lead to the
conductance reduction at $V_g=0$. In order to show this, in Fig.~\ref{fig9} we present the
transmission probabilities as a function of the gate voltage for electron densities (a)
$n_e=4\times 10 ^{11}$~cm$^{-2}$ and (b) $n_e=10\times10^{11}$~cm$^{-2}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{fig11.eps}
\caption{(a) Conductance $G$ as a function of gate voltage $V_g$ for different barrier widths
$w_b$. (b) Inter-subband transmission with spin conservation $T_{12}^{\uparrow \uparrow}$ and (c)
inter-subband transmission with spin flip $T_{12}^{\uparrow \downarrow}$ as a function of gate
voltage $V_g$.}
\label{fig11}
\end{center}
\end{figure}
For the low electron density $n_e=4\times10^{11}$~cm$^{-2}$ the
inter-subband transmission is absent both for electrons injected from the first and the second
subband. The decrease of the conductance for $V_g \ne 0$ corresponds to the increase of the
intra-subband transmission with spin flip. For the high electron density
$n_e=10\times 10^{11}$~cm$^{-2}$ the probability of the inter-subband transmissions is nonzero
around $V_g=0$. Notice that at $V_g=0$ the inter-subband transmission is always accompanied by a
spin flip, while the inter-subband transmission with spin conservation vanishes,
$T_{12}^{\uparrow \uparrow}=T_{21}^{\uparrow \uparrow}=0$. It is worth mentioning that the
transmission probabilities for positive and negative gate voltages are not equivalent leading to the
nonsymmetric dependence $G(V_g)$ presented in Fig.~\ref{fig8}. This asymmetry emerges for high
gate voltages, for which the inter-subband SO interaction is weak. Hence, it results from
the asymmetry of the intra-subband SO coupling constants for the ground and first excited states,
which is analogous to that observed in Fig.~\ref{fig6}.
As shown above, the conductance in the vicinity of $V_g=0$ is mainly determined by the inter-subband
transitions which emerge in the system as a result of the inter-subband SO interaction. This leads
to the question how the width of the central barrier $w_b$, which directly determines the coupling
between the quantum wells, affects the conductance in the considered gate voltage range. In
Fig.~\ref{fig10}~(a)-(c) we present the intra- and inter-subband SO
couplings as a function of the gate voltage calculated for different barrier widths.
Figure~\ref{fig10}(c) shows that the resonant behavior of $\alpha _{12}$ is more pronounced
for the wide central barrier, while the barrier width barely changes the value of
$\alpha _{12}$ at $V_g=0$.
In addition, the slopes $d \alpha _{11} / dV_g$ and $d \alpha _{22} / dV_g$ at $V_g=0$ increase
with the increasing $w_b$ making the system more suitable for the spin transistor application.
However, as presented in Fig.~\ref{fig11}(a), the conductance at $V_g=0$ is strongly reduced for
the wide barrier, giving rise to a low on/off conductance ratio. This effect
results from the inter-subband transmissions the probabilities of which are presented in
Figs.~\ref{fig11}(b) and (c). Both these figures clearly indicate that the reduction of $G(0)$ is
due to the inter-subband transmission with spin flip. However, we note that $\alpha _{12}$ at
$V_g=0$ is nearly constant and almost independent of
the barrier width [Fig.~\ref{fig10}(c)]. Therefore, we conclude that the increase of
$T_{12}^{\uparrow \downarrow}$ is mainly caused by the reduction of $\Delta \varepsilon$ [cf.
Fig.~\ref{fig10}(d)], which decreases with the increasing barrier width -- the reduction of the
coupling between the quantum wells considerably weakens the repulsion of the states.
\section{Summary}
\label{sec4}
The inter-subband-induced SO interaction in a quantum well with double occupancy has attracted
growing interest because it can give rise to interesting physical effects, e.g., the unusual
Zitterbewegung. This specific SO interaction is nonzero
even in the symmetric heterostructure, as it arises from the coupling between states with opposite
parity. The strength of this coupling is
comparable to the ordinary Rashba intra-subband SO coupling. In the present paper we have analyzed
the influence of the inter-subband SO interaction on the spin transistor operation. For this
purpose, we have calculated the electron transport in the spin transistor within the two-subband
model including both the intra- and inter-subband SO interaction. We have started from the
model in which the SO coupling constants are treated as the parameters. In the absence of the
inter-subband SO interaction and with equal intra-subband SO coupling constants we have obtained the
regular conductance oscillations, similar to those predicted for the quantum well with the single
occupancy. We have shown that these oscillations are strongly affected by the inter-subband SO
interaction, which makes them irregular and damped. For large $\alpha _{12}$ we have found an
inversion of the oscillations, i.e., the conductance
maxima and minima interchange. Interestingly, we have demonstrated that even for the zero
intra-subband SO coupling related to the symmetric geometry, the conductance oscillates as a
function of the inter-subband SO coupling. This effect has been explained as resulting from the
inter-subband transitions with spin flip. Finally, we have also performed calculations with an
asymmetry of the intra-subband SO coupling constants. As we have found, the inter-subband SO
interaction lifts the symmetry of the conductance with respect to the subband interchange.
In the second part of the paper, we have studied the conductance within the realistic spin
transistor
model with the conduction channel based on the Al$_{0.48}$In$_{0.52}$As/Ga$_{0.47}$In$_{0.53}$As
double quantum well. For the considered nanostructure, by performing detailed self-consistent
calculations, in which we solve both the Poisson and Schr\"{o}dinger equations iteratively, we have
determined the strengths of the SO coupling constants $\alpha _{11}$, $\alpha _{22}$ and
$\alpha_{12}$. The values of these coupling constants contain contributions arising from the
potential-well and barrier offsets, the Hartree potential, the external gate potential and the
modulation doping potential. We have obtained the resonant behavior of $\alpha
_{12}$ versus the gate voltage. Furthermore, the intra-subband SO coupling rapidly changes its sign
and magnitude at $V_g=0$. As we have stated in the paper, such a rapid change of the SO coupling
constants in a narrow voltage range is favorable for the spin-FET application, in which the
on/off conductance switching should be realized in a gate voltage range as narrow as possible. Our
calculations for different electron densities have shown that this effect can be strengthened for
the high electron
concentration in the quantum well. However, for the high electron density the inter-subband SO
interaction becomes dominant. The suppression of the conductance at $V_g=0$, which results from the
inter-subband transition with spin flip, is strong evidence of this interaction. This effect
leads to the reduction of the on/off conductance ratio. A similar effect has been observed for the
wide central barrier, for which the increase of the inter-subband transmissions is mainly due to the
decrease of the energy separation between the two subbands at almost constant $\alpha _{12}$.
In summary, our studies of the influence of the inter-subband SO interaction on the spin transistor
operation show that this SO coupling leads to the reduction of the on/off conductance ratio and thus
decreases the efficiency of the spin transistor.
\begin{acknowledgements}
This work was supported by the funds of the Ministry of Science and Higher Education for 2016 and by
PL-Grid Infrastructure.
\end{acknowledgements}
package org.apache.jackrabbit.jcr2spi.operation;
import java.util.ArrayList;
import java.util.List;
import javax.jcr.AccessDeniedException;
import javax.jcr.ItemExistsException;
import javax.jcr.RepositoryException;
import javax.jcr.UnsupportedRepositoryOperationException;
import javax.jcr.ValueFormatException;
import javax.jcr.lock.LockException;
import javax.jcr.nodetype.ConstraintViolationException;
import javax.jcr.version.VersionException;
import org.apache.jackrabbit.jcr2spi.state.ItemStateValidator;
import org.apache.jackrabbit.jcr2spi.state.NodeState;
import org.apache.jackrabbit.jcr2spi.state.UpdatableItemStateManager;
import org.apache.jackrabbit.spi.Name;
import org.apache.jackrabbit.spi.NodeId;
import org.apache.jackrabbit.spi.QPropertyDefinition;
import org.apache.jackrabbit.spi.QValue;
public class SetTree extends TransientOperation {
/**
* List of operations added to this SetTree operation.
*/
private final List<Operation> operations = new ArrayList<Operation>();
private final NodeState treeState;
private SetTree(NodeState treeState) throws RepositoryException {
super(ItemStateValidator.CHECK_NONE);
this.treeState = treeState;
}
private SetTree(UpdatableItemStateManager itemStateMgr, NodeState parentState, Name nodeName, Name nodeTypeName, String uuid) throws RepositoryException {
super(ItemStateValidator.CHECK_NONE);
Operation addNode = InternalAddNode.create(parentState, nodeName, nodeTypeName, uuid);
operations.add(addNode);
itemStateMgr.execute(addNode);
treeState = (NodeState) ((AddNode) addNode).getAddedStates().get(0);
}
//-----------------------------------------------------------------< Operation >---
/**
* @param visitor
*/
public void accept(OperationVisitor visitor) throws ValueFormatException, LockException, ConstraintViolationException, AccessDeniedException, ItemExistsException, UnsupportedRepositoryOperationException, VersionException, RepositoryException {
assert status == STATUS_PENDING;
visitor.visit(this);
}
/**
* Persisting a SetTree operation involves persisting each individual operation added
* to this SetTree. The concerned operation will assert the status and set it accordingly.
*
* @see Operation#persisted()
*/
@Override
public void persisted() throws RepositoryException {
assert status == STATUS_PENDING;
status = STATUS_PERSISTED;
for (Operation op : operations) {
op.persisted();
}
}
/**
* Undoing a SetTree operation involves undoing all operations added to this SetTree.
* @see Operation#undo()
*/
@Override
public void undo() throws RepositoryException {
assert status == STATUS_PENDING;
status = STATUS_UNDO;
for (Operation op : operations) {
op.undo();
}
}
public NodeId getParentId() throws RepositoryException {
return treeState.getParent().getNodeId();
}
public NodeState getParentState() throws RepositoryException {
return treeState.getParent();
}
public NodeState getTreeState() throws RepositoryException {
return treeState;
}
/**
* Add a child node operation to this {@code setTree} instance.
*
* @param parentState
* @param nodeName
* @param nodeTypeName
* @param uuid
* @return
* @throws RepositoryException
*/
public Operation addChildNode(NodeState parentState, Name nodeName, Name nodeTypeName, String uuid) throws RepositoryException {
Operation addNode = InternalAddNode.create(parentState, nodeName, nodeTypeName, uuid);
operations.add(addNode);
return addNode;
}
/**
* Add a child property operation to this {@code setTree} instance.
*
* @param parentState
* @param propName
* @param propertyType
* @param values
* @param definition
* @return the operation that adds the child property
* @throws RepositoryException
*/
public Operation addChildProperty(NodeState parentState, Name propName,
int propertyType, QValue[] values,
QPropertyDefinition definition) throws RepositoryException {
Operation addProperty = new InternalAddProperty(parentState, propName, propertyType, values, definition);
operations.add(addProperty);
return addProperty;
}
//------------------------------------------------------------< factory >---
public static SetTree create(NodeState treeState) throws RepositoryException {
SetTree operation = new SetTree(treeState);
return operation;
}
public static SetTree create(UpdatableItemStateManager itemStateMgr, NodeState parent, Name nodeName, Name nodeTypeName, String uuid) throws RepositoryException {
return new SetTree(itemStateMgr, parent, nodeName, nodeTypeName, uuid);
}
//--------------------------------------------------------------------------
/**
* Inner class for adding a protected node.
*/
private static final class InternalAddNode extends AddNode implements IgnoreOperation {
/**
* Options that must not be violated for a successful set policy operation.
*/
private final static int ADD_NODE_OPTIONS = ItemStateValidator.CHECK_ACCESS |
ItemStateValidator.CHECK_LOCK |
ItemStateValidator.CHECK_COLLISION |
ItemStateValidator.CHECK_VERSIONING;
private InternalAddNode(NodeState parentState, Name nodeName, Name nodeTypeName, String uuid) throws RepositoryException {
super(parentState, nodeName, nodeTypeName, uuid, ADD_NODE_OPTIONS);
}
public static Operation create(NodeState parentState, Name nodeName, Name nodeTypeName, String uuid) throws RepositoryException {
assertChildNodeEntries(parentState);
InternalAddNode an = new InternalAddNode(parentState, nodeName, nodeTypeName, uuid);
return an;
}
}
/**
* Inner class for adding a protected property.
*/
private static final class InternalAddProperty extends AddProperty implements IgnoreOperation {
private final static int ADD_PROPERTY_OPTIONS = ItemStateValidator.CHECK_ACCESS |
ItemStateValidator.CHECK_LOCK |
ItemStateValidator.CHECK_COLLISION |
ItemStateValidator.CHECK_VERSIONING;
private InternalAddProperty(NodeState parentState, Name propName, int propertyType, QValue[] values, QPropertyDefinition definition) throws RepositoryException {
super(parentState, propName, propertyType, values, definition, ADD_PROPERTY_OPTIONS);
}
}
}
FOX News Radio is a news radio network owned by the Fox News television channel. It is known for its talk-news format, that is, debates on everyday and political topics.
In late 2015, Fox News Radio began offering Fox News Headlines 24/7 exclusively to SiriusXM subscribers on Channel 115. It is a live news station with a dedicated editorial team, providing an overview of the day's news "from Hollywood to Wall Street to Main Street".
External links
Fox News
package com.intellij.util.containers;
/**
* Packs the specified number of bits (1..32) into a chunk which stored in array and allows to get and set these bits atomically.
* Useful for storing related flags together.
* Guarantees are similar to {@link ConcurrentBitSet}, only for bit chunk instead of bit.
*/
public class ConcurrentPackedBitsArray {
private final int bitsPerChunk;
private final ConcurrentBitSet bits = new ConcurrentBitSet();
private final int mask;
private final int chunksPerWord;
public ConcurrentPackedBitsArray(int bitsPerChunk) {
if (bitsPerChunk <= 0 || bitsPerChunk > ConcurrentBitSet.BITS_PER_WORD) {
throw new IllegalArgumentException("Bits-to-pack number must be between 1 and " +
ConcurrentBitSet.BITS_PER_WORD +
", but got: "+bitsPerChunk);
}
this.bitsPerChunk = bitsPerChunk;
mask = bitsPerChunk == Integer.SIZE ? -1 : (1 << bitsPerChunk) - 1;
chunksPerWord = ConcurrentBitSet.BITS_PER_WORD / bitsPerChunk;
}
/**
* returns {@link #bitsPerChunk} bits stored at the offset "id"
* The returned bits are LSB, other (ConcurrentBitSet.BITS_PER_WORD-bitsPerChunk) higher bits are undefined
*/
public long get(int id) {
assert id >= 0 : id;
int bitIndex = id/chunksPerWord * ConcurrentBitSet.BITS_PER_WORD + (id%chunksPerWord)*bitsPerChunk;
// the shift count is implicitly taken mod 32 by Java, which yields the chunk's offset within its word
return bits.getWord(bitIndex) >> bitIndex;
}
// stores chunk atomically, returns previous chunk
public long set(int id, final long flags) {
assert id >= 0 : id;
if ((flags & ~mask) != 0) {
throw new IllegalArgumentException("Flags must be between 0 and "+ mask +" but got:"+flags);
}
final int bitIndex = id/chunksPerWord * ConcurrentBitSet.BITS_PER_WORD + (id%chunksPerWord)*bitsPerChunk;
int prevChunk = bits.changeWord(bitIndex, word -> word & ~(mask << bitIndex) | ((int)flags << bitIndex)) >> bitIndex;
return prevChunk;
}
public void clear() {
bits.clear();
}
}
Ebussuud Efendi (, 30 December 1490 – 23 August 1574) was a Hanafi Maturidi Ottoman jurist and Qur'an exegete, who served as the Qadi (judge) of Istanbul from 1533 to 1537, and the Shaykh al-Islām of the Ottoman Empire from 1545 to 1574. He was also called "El-İmâdî" because his family was from Imâd, a village near Iskilip.
Ebussuud was the son of Iskilipli Sheikh Muhiddin Muhammad Efendi. In the 1530s, Ebussuud served as judge in Bursa, Istanbul and Rumelia, where he brought local laws into conformity with Islamic divine law (sharia). Sultan Suleiman the Magnificent promoted him to Shaykh al-Islām – supreme judge and highest official – in 1545, an office Ebussuud held until his death and which he brought to the peak of its power. He worked closely with the Sultan, issuing judicial opinions that legitimised Suleiman's killings of Yazidis and his successor Selim's attack on Cyprus. Ebussuud also issued legal rulings (fatwās) which labeled the Qizilbash, regardless of whether they lived on Iranian or Ottoman soil, as "heretics", and declared that killing them would be viewed as praiseworthy, rather than merely permitted by law.
Together with Suleiman, the "Lawgiver", Ebussuud reorganized Ottoman jurisprudence and brought it under tighter governmental control, creating a legal framework joining sharia and the Ottoman administrative code (qānūn). While the previously prevailing opinion held that judges were free to interpret sharia, the law that even the ruler was subject to, Ebussuud instituted a framework in which the judicial power was derived from the Sultan and which compelled judges to follow the Sultan's qānūn-nāmes, "law-letters", in their application of the law.
In addition to his judicial reforms, Ebussuud is also remembered for the great variety of fatwās he issued. His opinions allowing Karagöz plays and the consumption of coffee, a novelty at the time, are particularly celebrated. He is also known for a widely contested fatwā permitting monetary dealings involving riba (interest) in certain situations. This opinion is often referenced by contemporary Muslim modernists.
Footnotes
References
Further reading
1490 births
1574 deaths
Hanafis
Maturidis
Political people from the Ottoman Empire
Sheikh-ul-Islams of the Ottoman Empire
Quranic exegesis scholars
Grand Muftis of Istanbul (Ottoman)
16th-century Muslim scholars of Islam
Jurists from the Ottoman Empire
16th-century jurists
People from İskilip
Islamic scholars from the Ottoman Empire
Shaykh al-Islāms
\section{Acknowledgements}
\label{sec:acknowledgments}
This research was supported by the PETRAS IoT hub (EPSRC grant EP/N023242/1).
All opinions, findings and conclusions, or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of
the sponsors.
\section{Overview of \tool}
\label{sec:approach}
\begin{figure}[t]
\centering \includegraphics[scale=.3]{tool_overview.pdf}
\caption{An overview of \tool. The task manager processes URLs from
a queue and instantiates a \textit{crawler} task for each URL. The
crawler collects information about a webpage and the environment
from which the page was retrieved. The output of the crawler is
stored in a dedicated data structure, which is later analyzed with
the module that checks if content was served from a self-hosted
network environment.}
\label{fig:tool_overview}
\end{figure}
\tool is entirely written in Python, and
Figure~\ref{fig:tool_overview} provides an overview of its system
components. In a pipeline, \tool performs the following two steps,
({\em i}\/)\xspace~\emph{Data Collection:} it takes a URL feed, and renders the list
of websites, recording detailed information on all resources loaded,
before complementing the data with RDAP records for the IP address
hosting the website; and ({\em ii}\/)\xspace~\emph{Hosting Detector:} it then passes
this information through a hosting detector, which decides if the
owner of the webpage is the same as the owner of the hosting
infrastructure where the server is located. The outcome is a
structured JSON file that details if the website is self-hosted or
operated by a third-party. In addition to this, \tool also provides
information on the ownership of both the webpage and the hosting
service.
\subsection{Data Collection}
We first present the methods we use to collect the necessary data to
infer ownership. This includes Web data, DNS information and,
finally, RDAP records for all domains loaded.
\paragraph{Web Data Collection}
Upon receiving a list of URLs, the crawler obtains Web content through
Selenium,\footnote{\url{https://www.seleniumhq.org/}} a popular
framework used for testing Web applications. We instrument Selenium to
take a URL and to render it within a fully fledged instance of the
Google Chrome browser. After the page has been loaded and rendered,
our module outputs the retrieved HTML, in addition to a list of all
the HTTP requests/responses (URLs) that were generated during the
process. Each request/response is accompanied by metadata including
the HTTP status code, and HTTP headers (e.g., server, content-type).
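As an illustration, the request/response metadata described above can be read back from Chrome's DevTools performance log, assuming the browser was launched with the `goog:loggingPrefs` capability set to record it. The helper below is a minimal sketch of that parsing step, not \tool's actual implementation:

```python
import json

def requests_from_log(entries):
    """Extract (url, status, headers) tuples for every HTTP response
    recorded in Chrome's DevTools performance log (as returned by
    Selenium's driver.get_log("performance"))."""
    responses = []
    for entry in entries:
        # each log entry wraps a JSON-encoded DevTools protocol message
        msg = json.loads(entry["message"])["message"]
        if msg.get("method") == "Network.responseReceived":
            resp = msg["params"]["response"]
            responses.append((resp["url"], resp["status"], resp.get("headers", {})))
    return responses
```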
An important part of the crawling process is to determine when a
webpage has finished loading. Since we do not know how long this
process will take, we use an adaptive mechanism that leverages the
information logged by the Web browser. We continuously monitor the
browser logs and we consider the page loaded once all the requests to
external resources have received their corresponding response. At the
same time, we also set a hard timeout, after which we close the
browser session regardless of whether the loading succeeded or not.
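This load-completion check can be sketched as follows. `pending_requests` is pure log bookkeeping over DevTools events, while `wait_until_loaded` is a hypothetical polling loop around a Selenium Chrome driver; both names are ours, not \tool's:

```python
import json
import time

def pending_requests(entries):
    """Return the IDs of requests that have not yet received a response
    (or failed), based on Chrome DevTools performance-log events."""
    pending = set()
    for entry in entries:
        msg = json.loads(entry["message"])["message"]
        params = msg.get("params", {})
        if msg.get("method") == "Network.requestWillBeSent":
            pending.add(params["requestId"])
        elif msg.get("method") in ("Network.responseReceived", "Network.loadingFailed"):
            pending.discard(params["requestId"])
    return pending

def wait_until_loaded(driver, hard_timeout=60.0, poll=0.5):
    """Poll the browser log until every observed request has completed,
    or give up once the hard timeout expires."""
    deadline = time.time() + hard_timeout
    seen = []
    while time.time() < deadline:
        seen.extend(driver.get_log("performance"))  # returns only new entries
        if seen and not pending_requests(seen):
            return True
        time.sleep(poll)
    return False
```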
As part of the process, we also follow all the redirects that occur
while loading an URL. This includes not just CNAME or HTTP redirects,
but also those triggered by the refresh meta tag or a script. However,
being primarily a tool for developers who need to check their
own webpages, Selenium does not provide access to the browser
internals. Hence, \tool extracts information about the redirection
chain from the browser log. After filtering out all the requests to
external resources, we identify the landing page and the URL on which
the browser terminated the navigation.
\paragraph{Domain Resolution} Since Web browser logs do not contain
information about domain resolution, \tool launches DNS queries for
all domains encountered (at a modest cost of only around 3.2\%
additional overhead per webpage). An advantage of this system is that
we can use our own DNS server and do not need to rely on the
built-in mechanisms of proprietary resolvers, which might be hardcoded
in the browser.
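A minimal stand-in for this resolution step, using only the standard library (the crawler described above queries its own DNS server rather than the system resolver, so this is an approximation):

```python
import socket

def resolve_a(domain):
    """Return the IPv4 addresses a domain resolves to, or an empty list
    when resolution fails."""
    try:
        infos = socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    # each entry is (family, type, proto, canonname, (address, port))
    return sorted({info[4][0] for info in infos})
```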
\paragraph{IP Ownership} Using the DNS results, we then
determine the ``owner'' of the IP space where the content is hosted.
\tool uses the RDAP protocol to find the network prefix to which an IP
address belongs, and to identify the owner of that range. Our
framework uses a local RDAP cache to overcome rate-limiting issues of
RDAP servers and to avoid querying ranges for which we already have
fresh information. After successfully completing the RDAP resolutions,
all the data generated by the above steps is stored in a JSON data
structure.
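The prefix-level cache can be sketched as below. `fetch_rdap` is a stand-in for a real RDAP client (e.g., an HTTP GET against an RDAP bootstrap service); the class and method names are ours, not \tool's:

```python
import ipaddress

class RdapCache:
    """Cache RDAP answers per network prefix, so a fresh query is issued
    only when no cached prefix covers the IP address."""

    def __init__(self, fetch_rdap):
        self._fetch = fetch_rdap
        self._prefixes = {}  # ip_network -> RDAP record

    def lookup(self, ip):
        addr = ipaddress.ip_address(ip)
        for net, record in self._prefixes.items():
            if addr in net:
                return record          # served from the cache, no query issued
        record = self._fetch(ip)       # e.g. {"cidr": "192.0.2.0/24", "name": ...}
        self._prefixes[ipaddress.ip_network(record["cidr"])] = record
        return record
```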
\subsection{Detecting Hosting Infrastructures}
The next step is to use the above information to detect if a website
is self-hosted, or whether it relies on a third-party
infrastructure. Our tool identifies the organization/company behind a
domain name and a webpage, and searches for evidence that the page
owner is the same as the owner of the network prefix or the AS hosting
the Web server. We do not differentiate among various types of hosting
services (i.e., VPS, CDN, or generic web hosting) and we do not use
any precompiled list of popular or known hosting services. Instead we
extract our information from the URL and the HTML retrieved from the
landing page, and we match this data with the RDAP response of the IP
address that is hosting the web server. This process allows us to
detect the third-party network infrastructures even in the presence of
CDN caches located at ISPs: even if we do not correctly identify the
provider, our algorithm will detect a mismatch in the ownership of the
webpage and the IP range.
To identify the organization that owns a webpage we use both the
information from the URL and the HTML code. In particular, from the
URL component we extract the Effective Second Level Domain (ESLD), and
from the HTML we use the content of the $<$title$>$ tag. Before
retrieving any ownership information for a RDAP response, we first
filter unnecessary details from the data such as ``comments'' or the
``symbolic name of the network'', which can contain references to the
owner of the webpage even when the IP range is assigned to a
completely different organization. After this step, each string
contained in the \textit{HTML title} or the \textit{RDAP fields} has
its leading space delimiters removed, is cleaned from punctuation
characters and stop words, converted into lower case, and finally is
split into tokens on space delimiters. The DNS system does not allow
domain names to contain space delimiters, and it is common to have
domains, such as ``bankofamerica.com'', where the ESLD is a
combination of multiple words. To overcome this issue, the ESLD string
follows the same cleaning process of the title and the RDAP, with the
only difference in the tokenization, which is performed following the
technique described in~\cite{segaran2009beautiful}.
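The cleaning applied to titles and RDAP strings can be approximated as follows. The stop-word list here is illustrative only, and the ESLD's extra word-segmentation step (splitting ``bankofamerica'' into words) is omitted for brevity:

```python
import re

# illustrative subset; the real stop-word list is not given in the paper
STOP_WORDS = {"inc", "llc", "ltd", "the", "of", "and"}

def normalize(text):
    """Lower-case, replace punctuation with spaces, drop stop words, and
    split the result into tokens on space delimiters."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return [t for t in text.split() if t not in STOP_WORDS]
```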
This process results in a series of string tokens that represent ownership
features of both the webpage and the domain/IP address hosting it.
The next step is to compare these tokens to see if they correspond.
Our algorithm does six checks: four with the strings contained in the
title/ESLD/RDAP, and an additional two with the tokenized versions of
those strings. First, the algorithm verifies if the HTML title or the
ESLD appears as a sub-string in any of the RDAP
fields. Subsequently it repeats the same procedure with each string in
the RDAP fields by comparing it both with the HTML title and the
ESLD. The output of this process is binary and if the algorithm
finds a match, it concludes that the owner of the webpage is also the
owner of the network. As a final step, the algorithm checks for the
presence of common tokens among the lists of tokens obtained from the
HTML title/ESLD and the RDAP information. In this case a single match
is not enough to conclude that the same organization owns both the
webpage and the network, and we require that the common tokens
represent at least 50\% of the overall number of tokens in the
shortest list.
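Under the assumption that all inputs have already been cleaned as described above (lower-cased, punctuation and stop words removed), the six checks reduce to a few substring tests plus a token-overlap ratio. This is our paraphrase of the algorithm, not its reference implementation:

```python
def same_owner(title, esld, rdap_strings):
    """Decide whether the webpage owner matches the network owner."""
    for field in rdap_strings:
        # checks 1-2: title or ESLD appears as a substring of an RDAP field
        if (title and title in field) or (esld and esld in field):
            return True
        # checks 3-4: an RDAP field appears inside the title or the ESLD
        if field and (field in title or field in esld):
            return True
    # checks 5-6: common tokens must cover >= 50% of the shorter token list
    rdap_tokens = {t for f in rdap_strings for t in f.split()}
    for tokens in (set(title.split()), set(esld.split())):
        if not tokens or not rdap_tokens:
            continue
        common = tokens & rdap_tokens
        if len(common) / min(len(tokens), len(rdap_tokens)) >= 0.5:
            return True
    return False
```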
\section{Conclusion}
\label{sec:conclusion}
In this work we presented \tool, a tool for collecting information
about a webpage and the environment where the page is hosted. Our
framework extracts information from the retrieved HTML, the DNS and
the ownership information associated to a network prefix. \tool then
exploits this data to infer if the website is self-hosted, or
relies on a third-party operator, e.g., a Content Delivery Network.
We tested \tool on 40,000 URLs and compared the results with similar
applications that detect the presence of known hosting providers. Our
framework is accurate and outperforms all other applications, when
tested on a manually validated groundtruth. \tool is released as open
source and is built in a modular way, which makes it possible to
extend it with new capabilities.
\section{Validation and Evaluation}
\label{sec:evaluation}
\begin{table*}[t]
\tiny
\centering
\begin{tabularx}{\textwidth}{|X|X|X|X|X|X|X|X|X|X|X|X|X|}
\cline{2-13}
\multicolumn{1}{c|}{} & \multicolumn{9}{c|}{\sc URLs (Domains)} & \multicolumn{3}{c|}{\sc IPs}\\
\cline{2-13}
\multicolumn{1}{c|}{} & {All} & {\sc Starting} & \multicolumn{4}{c|}{\sc Crawls} & \multicolumn{3}{c|}{\sc Landing} & \multicolumn{3}{c|}{}\\
\hline
{Format}
& {\sc Tot.} & {\sc Tot.} & {\sc Completed} & {\sc Domain Change} & {\sc Protocol Change} & {\sc AVG. Redirects} & {\sc NON-triggering exceptions} & {\sc self-host} & {\sc 3rd-host}
& {\sc Tot.} & {\sc NON-triggering exceptions} & {\sc Landing Domains}\\
\hline
{http} & 874,574 (46,679) & 10,000 (10,000) & 9,204 & 6,114 & 6,701 & 2.6 & 8,968 (8,897) & 941 (935) & 8,027 (7,962) & 34,332 & 33,728 & 8,178\\
\hline
{http + \newline www} & 885,035 (43,359) & 10,000 (10,000) & 9,280 & 2,606 & 6,627 & 2.4 & 9,033 (8,962) & 977 (971) & 8,056 (7,991) & 32,496 & 31,973 & 8,214\\
\hline
{https} & 745,939 (40,696) & 10,000 (10,000) & 8,215 & 5,147 & 580 & 2.3 & 8,037 (7,989) & 877 (873) & 7,160 (7,116) & 30,640 & 30,217 & 7,353\\
\hline
{https + \newline www} & 794,449 (39,787) & 10,000 (10,000) & 8,673 & 2,229 & 491 & 2.2 & 8,462 (8,404) & 918 (913) & 7,544 (7,491) & 30,177 & 29,781 & 7,683\\
\hline
\hline
{All} & 1,736,929 (54,410) & 40,000 (20,000) & 35,372 & 16,096 & 14,399 & 2.4 & 13,940 (11,253) & 1,559 (1,220) & 12,381 (10,033) & 38,092 & 37,492 & 9,188\\
\hline
\end{tabularx}
\caption{Results of running \tool on the \textsc{top-40k-URLs}
dataset. The values in brackets indicate the unique number of
elements for each entry.}
\label{tbl:data_collection}
\end{table*}
\tool is intended to be both accurate and straightforward to use for
the community. To validate its capabilities we next run it over
multiple datasets.
\subsection{Validating the Data Collection}
Our first goal is to test the efficacy of our crawler in collecting
the necessary data to perform the hosting classification. Hence, we
run \tool over a series of URL lists. A first dataset includes 10,000
unique domain names obtained from a snapshot of the ``Alexa top 10,000
websites'' (\textsc{top-10k}) on 1st of May 2018. A second dataset
(\textsc{top-20k-www}) is an extended version of the previous one,
where domains are extended with the ``www.'' prefix. Finally, a third
dataset (\textsc{top-40k-URLs}) includes all the entries from
\textsc{top-20k-www} expanded with ``http://'' and ``https://''
prefixes.
\paragraph{Data Collection}
We run \tool over the \textsc{top-40k-URLs} to collect information
about their home pages.
We split our dataset into chunks of 350 elements and we process each
chunk separately. Each element of the chunk is a unique domain name,
which is crawled both with and without the ``www.'' prefix and with
the two protocols that \tool supports (HTTP and HTTPS). This means
that when we successfully crawl an entire chunk, we obtain information
for 1400 unique URLs. We refer to the initial URL from which we begin
our crawling, and that we load in the browser, as ``starting URL'';
similarly, the URL on which the crawl terminates is called ``landing
URL''. Once a chunk is processed, \tool waits for 120 seconds before
switching to the next one. Each chunk is analyzed using 20 parallel
instances of our Crawler module, which uses a maximum timeout of 60
seconds while waiting for a webpage to finish loading. Note that when
collecting IP Ownership information from RDAP, we randomize waiting
timeouts, with a maximum of 90 seconds, before retrying a query
that triggered an exception; after 3 consecutive exceptions, the
module marks an IP as ``no info available'' before switching to the
next one. The entire data collection process took place from a single
machine, although we note it is possible to split the dataset and run
parallel instances of \tool on different
machines.
\begin{figure}[t]
\centering
\includegraphics[scale=.32]{cdf_completed_crawls.pdf}
\caption{Cumulative distribution functions of the successful crawls
with different URL formats.}
\label{fig:cdf_successuful_crawls}
\end{figure}
\paragraph{Data Collection Performance}
The overall process of downloading the HTML, resolving DNS names and
collecting RDAP data, took 35 hours to complete for the
\textsc{top-40k-URLs}. Figure~\ref{fig:cdf_successuful_crawls}
presents the Cumulative Distribution Function (CDF) of the number of
successful crawls over time. To be able to use fresh entries
from our local RDAP cache, we crawled the 4 starting URLs linked to
each domain at the same time. Due to this choice, the CDFs of each
``URL format'' have very similar shapes. Hence \textit{four}
distributions in Figure~\ref{fig:cdf_successuful_crawls} are almost
stacked on top of each other, and the red line connecting the
values is the average across those distributions. After 24 hours, we
had crawled only 50\% of the URLs, and in the last 1/3 of the time, we
obtained the information for the remaining half of the dataset. The
dataset entries were shuffled and did not follow any ranking by
domain popularity, so the likelihood of encountering an
``unreachable URL'' is the same at the beginning and at the end of
the crawl. We attribute the spike in the number of downloads after
the 24th hour to the presence of our RDAP cache. As we will later
show, the majority of URLs/domains use a third-party hosting
provider, and as time passes we observe an increase in the number of
RDAP queries that can be resolved with our local cache. Those local
resolutions increase our crawling speed, allowing us to gather
information for the same number of elements in half the time.
In Table~\ref{tbl:data_collection} we summarize the results of the
data collection process using \tool. The first thing to notice is that
85\% of the starting URLs are successfully reached. Overall, our
crawler visited 1,736,929 external URLs, which were retrieved from
54,410 different domains. This suggests an average factor of 42 URLs
per ``starting URL''. It is therefore clear that each HTML page
contains a considerable number of external resources, although it
should be noted that this not only includes links to scripts and
images, but also \textit{redirects} from a starting to a landing
webpage. Independently of the protocol and the presence of the
``www.'' prefix, the 6th column in the Table shows that redirects are
extremely popular. On average we pass via 2.4 intermediate URLs before
reaching a landing page. For this estimate we \textit{only} use
redirects that happen when loading a ``starting URL'' in our browser,
and we excluded any redirects triggered by the external resources
embedded in the HTML of the landing webpage. In general, redirects
seem to be more popular for the HTTP protocol, but the average
difference with HTTPS is minimal.
Related to the redirect phenomenon, we also notice that more than 60\%
of the crawls observed a ``change of the domain'' name among the
``starting URL'' and the ``landing URL''. This happens with only 1/3
of that frequency value if we crawl domains with the ``www.''
prefix. The reason is that often the redirection happens from one
domain to the same domain expanded with the ``www.'' prefix. A
similar trend is observable for the ``change of the protocol'', when
crawling URLs with HTTP (which get upgraded to HTTPS).
The overall number of unique IPs of the landing pages is slightly less
than 10,000, and it reflects the fact that crawling the same domain
with the four different formats, most of the time will lead to the
same landing URL/domain. On average there are 1.22 domains per
landing IP (comparison of columns 7 and 12 in
Table~\ref{tbl:data_collection}). This is explained by the presence of
large hosting providers with many different customers. The same
argument explains why we observe a similar ratio of 1.43
domains per IP when considering the dataset of all
URLs. Finally, the results of \tool indicate that 89\% of the landing
URLs are served from a third-party hosting infrastructure which does
not belong to the owner of the webpage. In the following section we
illustrate how we tested the accuracy of our classification, by using
a manually validated groundtruth and by comparing with similar
applications.
\subsection{Classification Validation}
We next validate the efficacy of our tool by compiling a groundtruth
classification, and comparing it against \tool.
\paragraph{Compiling a Comparative Dataset} To the best of our
knowledge, no groundtruth dataset exists regarding Web hosting. To
build this, we randomly select 324 domain names to manually annotate.
These are taken from the \textsc{top-10k} dataset, crawled with the
HTTP protocol. For each of these domains, we load the landing webpage
in a browser and use search engines to check if the owner of the IP
prefix is an organization offering Web hosting or CDN services to its
customers.
We note that 324 domains are not enough to evaluate our tool. Thus,
we also collect equivalent data from a variety of public tools that
allow users to ``\textit{discover who is hosting a website}''. This
allows us to compare our results against their outputs. Example of
those services include \textsc{HostingCompass.com} which can detect
who is hosting an ESLD, or \textsc{HostingDetector.com} and
\textsc{What's My CDN?} which allow more fine grained queries
including ``www.'' as prefix to the ESLD
\cite{hostingcompass,hostingdetector,whatsmycdn}. We choose to use
these three applications because they are free Web-based services that
do not require any registration. We query those services with the
URLs from our \textsc{top-10k} and \textsc{top-20k-www} datasets,
depending on the service. As the services mentioned above do not
provide any detail about the methodology they use to detect hosting
providers, in addition to those Web applications, we also use
\textsc{cdnfinder}, an open-source project which aims to detect the
usage of CDNs within websites. The tool uses \textsc{phantomjs} and a
hard-coded list of hostnames to load a webpage and detect the presence
of external resources which are hosted on a CDN~\cite{cdnfinder,
phantomjs}. Analogously to the Web services, we downloaded the tool
and ran it on our \textsc{top-40k-URLs} dataset. In total, this
results in 5 datasets to compare \tool against.
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{|l|r|r|r|r|r|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\sc Self-hosting} & \multicolumn{2}{c|}{\sc 3rd-party hosting} & \multicolumn{1}{c}{}\\
\hline
{\sc Tool/Service} & {\sc TP} & {\sc FN} & {\sc TP} & {\sc FN} & {\sc F1-score}\\
\hline
\hline
\tool & 26 & 3 & 279 & 16 & 0.73\\
\hline
hostingcompass & 27 & 2 & 102 & 193 & 0.21\\
\hline
hostingdetector & 12 & 17 & 239 & 56 & 0.25\\
\hline
whatsmycdn & 27 & 2 & 127 & 168 & 0.24\\
\hline
cdnfinder & 28 & 1 & 65 & 230 & 0.2\\
\hline
\end{tabular}
\caption{Performance comparison of \tool and other applications on
our manually validated groundtruth.}
\label{tbl:manual_validation}
\end{table}
\begin{table*}[!htbp]
\scriptsize
\centering
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
{\sc service} & {\sc domain} & {\sc www. + domain} & {\sc http + domain} & {\sc https + domain} & {\sc http + www. + domain} & {\sc https + www. + domain}\\
\hline
\hline
{\sc hostingcompass} & 3,481 (3,283) & {-} & {-} & {-} & {-} & {-} \\
\hline
{\sc hostingdetector} & {3,229 (2,879) } & {5,607 (4,973)} & {-} & {-} & {-} & {-} \\
\hline
{\sc whatsmycdn} & {978 (963)} & {3,268 (3,202)} & {-} & {-} & {-} & {-} \\
\hline
{\sc cdnfinder} & {-} & {-} & {59 (55)} & {403 (395)} & {149 (144)} & {1,849 (1,700)} \\
\hline
\end{tabular}
\caption{Comparison of \tool with similar services/applications when
evaluated on all of our datasets. The values in brackets
indicate the results obtained by \tool.}
\label{tbl:comparison_with_others_on_all_datasets}
\end{table*}
\paragraph{Comparison with Manual Annotations}
Table~\ref{tbl:manual_validation} contains the results of comparing
\tool against the above online services, using the manual annotations
as the groundtruth. Our algorithm was specifically designed for
detecting the presence of ``self-hosting'' environments. Consequently,
a domain will be flagged as ``hosted on third-parties'' in any
situation where the webpage owner differs from the owner of the
network prefix (e.g., a private Web server run at home, where the
broadband ISP is the owner of the network prefix). Despite this
limitation, in the binary classification problem where a website is
either self-hosted or hosted on a third-party service, \tool still
outperforms all the other services that we tested, achieving an
F1-score which is almost three times larger than the average for the
other services. Indeed, on our manual groundtruth we observe an
accuracy of over 95\%, even when, instead of verifying self-hosting,
we focus on the complementary problem of detecting the presence of
third-party hosting providers. \textsc{cdnfinder} performs well in
detecting self hosting, but has a very high false positive rate when
classifying domains as third-party hosting. Similarly,
\textsc{hostingdetector} achieves the highest accuracy in detecting
external hosting services, at a cost of an extremely high false
negative rate (59\%), when a single organization is in control of both
the webpage and the network prefix.
\paragraph{Comparison with Similar Services} To further test the
accuracy of our framework, we compare our results with the four
tools/services mentioned earlier. The results of this comparison are
shown in Table~\ref{tbl:comparison_with_others_on_all_datasets}. The
goal of this is to show that \tool achieves similar results to other
applications. To this end, we narrow our goal to identify all domains
which are hosted on third-party network infrastructures. As mentioned
in the previous sections, \tool follows any kind of redirect. Hence,
Table~\ref{tbl:data_collection} only presents the classification using
the ``landing URLs/domains''. Since we do not know the exact
capabilities of the four services we tested, or how they handle
redirects, we decided to back-propagate the results of our
classification from the ``landing URL'' to the corresponding
``starting URL'', and ``starting domain'', from where the navigation
started. In this way we are able to compare our results with each one
of those services and verify that our framework has a detection rate
close to those of the other services.
For almost all of the domains inspected, \tool achieves an accuracy of
around 90\%, and it identifies a third-party hoster every time one of
the other four services detects its presence. Since the highest number
of misclassified domains originates from the set of domains analyzed
with \textsc{hostingdetector}, we sampled 20 domains without the
``www.'' prefix and another 20 with the ``www.'' prefix. We then
manually verify if they are actually hosted on a third-party
infrastructure. For 27 out of 40 cases, \textsc{hostingdetector}
failed to identify self-hosted domains and \tool correctly labeled
those as ``self-hosting''. Ten of those cases were domains of large
universities with their own network prefixes. For another 7 cases, the
landing page is the home page of large hosting services such as
``Google'', ``Salesforce'' or ``1and1''. \tool correctly labeled these
as self-hosted. For four domains our \tool did not succeed in
downloading the RDAP information, and \tool could not classify those
domains. The remaining 9 domains were hosted on a third-party
infrastructure, but we did not detect them. Based on these results,
we conclude that the accuracy of \tool is in line with that of the
similar services we compared against.
\section{Introduction}
\label{sec:intro}
When deploying a website, companies have the choice to either host it
on their own servers, or to offload the content to third parties,
e.g., Web hosts or Content Delivery Networks (CDNs). Choosing a third
party can have multiple advantages, including cost savings,
reliability, and the ability to sustain larger amounts of traffic
(even during distributed denial of service attacks). We argue that
understanding the Web hosting landscape is important for a number of
reasons. These range from allowing us to assess how critical certain
hosting infrastructures are and to estimate the impact that a network
attack could have on the
Web~\cite{delignat2015network,liang2014https,simenovski2017who}, to
being able to determine who is responsible when incidents (e.g.,
malware hosting) occur~\cite{tajalizadehkhoob2017role}. Despite the
importance of the problem, the research community lacks scalable
methods to map the hosting landscape. Instead, multiple studies tend
to take an ad hoc approach, relying on various assumptions, which
differ between papers. Although there are third party services that
offer this functionality~\cite{netcraft,webhosting}, they do not
disclose their methodology, creating concerns for both
reproducibility, as well as accuracy of their results.
To fill this gap, this paper \textit{presents an open source tool to
the community,\footnote{The source code of \tool is available at
\url{https://bitbucket.org/srdjanmatic/pythia.git}} which can
  determine whether the webpage of an organization is self-hosted or
  hosted by a third-party provider}. Our tool, \tool, leverages
the HTML code of the webpage, domain information, and the network
ownership obtained from RDAP records, to determine if the content is
hosted on a third-party infrastructure. This is done by computing the
ownership of both the webpage \emph{and} the hosting provider, such
that the two can be compared. \tool is built with a modular design and
is capable of obtaining information about the landing webpage even in
the presence of complex HTML structures that use redirects.
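The ownership comparison can be illustrated with a minimal sketch; the token-overlap heuristic below is an assumption for illustration and not necessarily the exact matching rule \tool implements:

```python
# Illustrative comparison of webpage ownership vs. network ownership.
# Naive heuristic (an assumption, not Pythia's actual algorithm):
# shared meaningful tokens between the two organization names
# suggest the content is self-hosted.

STOPWORDS = {"inc", "llc", "ltd", "corp", "the", "of"}

def tokens(name):
    """Lowercase, split, and drop generic corporate suffixes."""
    return {t for t in name.lower().replace(",", " ").split()
            if t not in STOPWORDS}

def is_self_hosted(page_owner, network_owner):
    return bool(tokens(page_owner) & tokens(network_owner))

print(is_self_hosted("Example University",
                     "Example University Network Services"))  # True
print(is_self_hosted("Example University",
                     "Akamai Technologies, Inc."))            # False
```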
To evaluate the efficacy of \tool, we run it on 40,000 URLs generated
from the Alexa top-10k domains~\cite{alexatop1m}. Our validation
process shows that our framework outperforms similar applications
available on the Web, and it achieves an accuracy of 90\% in detecting
when a webpage is hosted by a third party. Furthermore, our
measurement shows that over 89\% of the popular domains we
inspected take advantage of third parties for their hosting needs.
\tool is open source and allows the research community to reproduce
our findings. We intend this to become a shared community effort,
allowing third party researchers to avoid the complexity involved in
devising and building their own independent methodologies for this
commonly encountered task.
\section{Background}
\label{sec:overview}
Understanding and measuring the Web hosting ecosystem is a complex
endeavor. To complete this task, we need both information about the
ownership of domains and the ownership of the IP addresses where
webpages are hosted. In this section we introduce the concepts on
which our approach is based, and the type of data that we retrieve to
determine whether webpages are self-hosted or not.
\subsection{Third-Party Hosting}
Content Delivery Networks (CDNs) and Hosting Services are two popular
mechanisms for delivering content to end users on behalf of other
organizations. By offloading the task of serving the content of a
website to third parties, these solutions are designed to provide
better availability, scalability, faster content loads, redundancy,
and enhanced security. These technologies have become so widespread
that according to recent statistics, more than 60\% of the most
visited websites use CDNs to serve content to their
users~\cite{cdnusage}. In this work we study the deployment of any
kind of solution that delivers Web content for third parties, and for
this reason we use the term \textit{hosting} to refer to the
\textit{network where Web servers offering a service are based}.
There are countless papers that have explored the hosting patterns of
websites, each taking a slightly different approach. A common
approach is to launch large-scale distributed
measurements~\cite{su2009drafting,ager2011web,fanou2016}, which
perform DNS queries around the world to retrieve and classify DNS
responses. This, unfortunately, is extremely complex and costly;
furthermore, it cannot alone confirm if the infrastructure is
third-party operated without further inspection. Calder et
al.~\cite{calder2013mapping} utilized the EDNS-0 Client Subnet
extension to simulate distributed queries towards Google's
CDN. Although it revealed a large number of servers, all were operated
by Google rather than third parties. These techniques also do not work
well for Anycast CDNs~\cite{calder2015analyzing}, which do not
necessarily return DNS responses containing redirects. Another
strategy employed is to utilize domain prefix lists, which map CNAME
responses to their respective CDNs~\cite{scheitle2018}. These,
however, are limited to CDNs that exclusively rely on CNAME redirects
(e.g., this excludes Bing). Furthermore, the list requires constant
maintenance to remain up-to-date. Lastly, some studies utilize IP
address to Autonomous System (AS) mappings~\cite{ibosiola2018movie} or
metadata encoded into DNS records~\cite{Bottger18}; these, however are
vulnerable to misattributing ownership, e.g., when a CDN places a cache
in a third-party network. Such techniques have also been complemented
with manually curated AS annotations, which stipulate the type of
AS~\cite{noroozian2016gets}. Again, these suffer from both manual
annotation errors and require substantial upkeep. We argue that these
diverse ad hoc techniques are driven by the lack of a standardized
tool within the community, which can provide metadata on website
hosting patterns.
\subsection{The RDAP Protocol}
To acquire information about a domain's ownership we use the
Registration Data Access Protocol (RDAP)~\cite{RFC7482}. This protocol
was designed to replace the WHOIS~\cite{RFC3912} protocol as the
authoritative source for registering information about IP addresses,
ASes and domain names. While its predecessor retrieved free text
content, RDAP leverages a RESTful interface to deliver the data in a
machine-readable JSON format. This simplifies the parsing process and
allows us to easily extract information (e.g., the type of entity to
which a range of IP addresses has been assigned, or the description of
an AS). In this paper, we use the RDAP protocol to retrieve
information about the ownership of an IP address to which a domain
resolved.
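As an illustration of the kind of data RDAP delivers, the sketch below extracts the registrant name from a simplified, hypothetical RDAP IP-network response; real responses follow RFC 7483 and embed contact data in vCard arrays:

```python
import json

# Simplified, hypothetical RDAP response for an IP network lookup.
# Real RDAP objects (RFC 7483) nest entity contacts in vCard arrays.
rdap_response = json.loads("""
{
  "handle": "NET-192-0-2-0-1",
  "name": "EXAMPLE-NET",
  "entities": [
    {"roles": ["registrant"],
     "vcardArray": ["vcard", [["fn", {}, "text", "Example Hosting Ltd"]]]}
  ]
}
""")

def registrant_name(obj):
    """Return the formatted name (fn) of the first registrant entity."""
    for entity in obj.get("entities", []):
        if "registrant" in entity.get("roles", []):
            for prop in entity["vcardArray"][1]:
                if prop[0] == "fn":
                    return prop[3]
    return None

print(registrant_name(rdap_response))  # Example Hosting Ltd
```

The machine-readable JSON makes this kind of extraction trivial compared to parsing free-text WHOIS records.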
\section{Related Work}
\label{sec:relatedWork}
A significant amount of research has been done in the field of Content
Delivery Networks and cloud computing. Krishnamurthy et
al.~\cite{krishnamurthy2001use} were the first to analyze the rise of
CDNs and the benefits that they provide to end-users. Several later
studies investigated this
trend~\cite{huang2008measuring,calder2015analyzing,su2009drafting,ager2011web}. Similar
work has tried to uncover cloud usage patterns and which Web services
are running on a cloud-associated IP address~\cite{he2013next,whowas}.
Our technique relies on a mix of methodologies, particularly
exploiting RDAP data. There have been a small set of past papers that
rely on similar data. For example, Cai et al.\ proposed to combine
WHOIS information with the ASN in order to generate a comprehensive
AS-to-organization mapping~\cite{cai2010towards}. Tajalizadehkhoob et
al.\ were the first to explore the identification of hosting
providers by combining passive DNS with WHOIS
information~\cite{tajalizadehkhoob2016apples}. Unfortunately, their
approach leverages a classification of 2,000 ASes to filter out
organizations such as ISPs, educational institutions, and
governments. This list is limited in size and manually generated,
which raises concerns about its reliability over time. Contrary to
previous studies, our work does not use any precompiled list of
organization names and it focuses
on identifying self-hosting environments. \tool allows other
researchers to reproduce our results and it does not require any
manual analysis or a priori knowledge of network prefixes or the ASes
in charge of routing the network traffic.
Q: arrayWithContentsOfFile returns apparently empty array In http://www.raywenderlich.com/21320/objectively-speaking-a-crash-course-in-objective-c-ios6 there is a cut-and-paste XML version of a property list. I have the following code:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
NSString *plistCatPath = [[NSBundle mainBundle] pathForResource:@"quotes" ofType:@"plist"];
self.movieQuotes = [NSMutableArray arrayWithContentsOfFile:plistCatPath];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
-(IBAction)quoteButtonTapped:(id)sender {
int array_total = [self.movieQuotes count];
int index = (arc4random() % array_total);
NSString *my_quote = self.movieQuotes[index][@"quote"];
self.quoteText.text = [NSString stringWithFormat:@"Quote:\n\n%@", my_quote];
}
The third-to-last line in quoteButtonTapped is crashing because I'm trying to take the modulus of 0. That means that self.movieQuotes is registering as empty.
quotes.plist is stored in the root of the directory, and it appears to be the same, modulo one comment and whitespacing, as in the tutorial.
Any ideas what I am doing to have an empty self.movieQuotes?
A: Make sure that the memory-management semantics of the property in which you're storing the array are strong, retain, or copy. Note that if you use copy semantics, you'll need to implement the setter method yourself if the property type is NSMutableArray. (It's not clear from your example whether you really expect or need the array to be mutable, though.)
Of course you should also double-check to make sure that you have a well-formed plist file named quotes.plist that has an array as its root element, and that it's been properly added to your project's Copy Files build phase.
Edit
Finally, make sure that your viewDidLoad method is actually being called by adding a breakpoint or an NSLog statement.
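A defensive version of the loading code makes each of these failure modes visible at run time; this is a sketch, assuming the movieQuotes property is declared strong:

```objectivec
// Sketch: load the plist defensively and log each failure mode.
NSString *plistCatPath = [[NSBundle mainBundle] pathForResource:@"quotes"
                                                         ofType:@"plist"];
if (plistCatPath == nil) {
    // The file never made it into the bundle.
    NSLog(@"quotes.plist missing from bundle (check Copy Bundle Resources).");
} else {
    self.movieQuotes = [NSMutableArray arrayWithContentsOfFile:plistCatPath];
    if (self.movieQuotes == nil) {
        // The file exists but is not a well-formed array plist.
        NSLog(@"quotes.plist found but is not a valid array plist.");
    } else {
        NSLog(@"Loaded %lu quotes.", (unsigned long)[self.movieQuotes count]);
    }
}
```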
We provide service to the City of Tillamook and certain areas of Tillamook County. Our residential and commercial service boundary goes from Juno Hill to the north and Green Timber road to the south. We go to the summit of Hwy 6 (Wilson River Hwy) to the east and Netarts and Cape Lookout State Park to the west.
We provide roll off service in all these areas and Rockaway, Garibaldi, Bay City and Cape Meares.
We offer Open Top Boxes in 13, 20, and 30 yard sizes. We also offer Mobile Storage Units and can make arrangements for commercial account compactors.
This highly detailed bronze sculpture of a Marine can be used as either an award or a gift. The bronze electroplated resin figure sits on an ebony black base forming a stunning contrast. The height of the trophy is 10" and the base is 5.5" wide. We will include an engraved plate at no extra charge to make this award a true bargain!
Source: https://asa2.silverchair.com/anesthesiology/article/103/4/855/624/Preoperative-Clinic-Visits-Reduce-Operating-Room

Background

Anesthesiologist-directed preoperative medicine clinics are used to prepare patients for the administration of anesthesia and surgery. Studies have shown that such a clinic reduces preoperative testing and consults, but few studies have examined the impact of the clinic on the day of surgery. The authors tested whether a visit to an anesthesia preoperative medicine clinic (APMC) would reduce day-of-surgery case cancellations and/or case delays.

Methods

The authors conducted a retrospective chart review of all surgical cases during a 6-month period at the University of Chicago Hospitals. Case cancellations and rates of first-start case delay over the 6-month period were cross-referenced with a database of APMC attendees in both the general operating rooms and the same-day surgery suite. The impact of a clinic visit on case cancellation and delay in both sites was analyzed separately.

Results

A total of 6,524 eligible cases were included. In the same-day surgery suite, 98 of 1,164 (8.4%) APMC-evaluated patients were cancelled, as compared with 366 of 2,252 (16.2%) in the non-APMC group (P < 0.001). In the general operating rooms, 87 of 1,631 (5.3%) APMC-evaluated patients were cancelled, as compared with 192 of 1,477 (13.0%) patients without a clinic visit (P < 0.001). For both operating areas, APMC patients had a significantly earlier room entry time than patients not evaluated in the APMC.

Conclusions

An evaluation in the APMC can significantly impact case cancellations and delays on the day of surgery.

PREOPERATIVE anesthesiology clinics were originally developed to optimize the medical condition of a patient before surgery and anesthetic administration.[1] In the clinic, the anesthesiologist considers the special needs of a patient before the day of surgery and completes a thorough preoperative evaluation. Not surprisingly, preoperative anesthesia clinics have been shown to enhance patient safety[2] and satisfaction.[3,4] They may also improve hospital resource utilization before the day of surgery by reducing preoperative consults and laboratory testing.[4-7] In addition, visits to an anesthesia preoperative medicine clinic (APMC) have been shown to reduce the duration of hospital stay.[8] Although these benefits of a preoperative clinic visit are known, the impact of a preoperative clinic visit on cancellations and delays on the day of surgery has been less well studied. We hypothesized that a preoperative clinic visit would decrease day-of-surgery case cancellations and reduce case delays. The financial impact of even small improvements in operating room efficiency on the day of surgery could be significant to a hospital with a busy operating room schedule because cancelled cases may delay subsequent cases and waste expensive case setups. When case starts are delayed, valuable operating room time may be left unused, and staff time can be wasted.

With institutional review board approval (University of Chicago, Chicago, IL), we conducted a retrospective data analysis of all surgical procedures requiring anesthesia at our institution from July 1 through December 31, 2003. Cases were divided into two groups: those performed in the 8-room same-day surgery suite, and those performed in the 15 general operating rooms. Data on all surgical cases were collected from the operating room scheduling database. Inpatients were excluded from the study because the APMC at our institution serves outpatients only. Also excluded were all cardiac surgery cases because these patients are not typically evaluated in the APMC. For patients with more than one scheduled surgery in the study period, only the first planned operative procedure was included.

A second database, containing all APMC visitors, was then cross-referenced with the operating room schedule to determine which patients had been seen in the clinic. The decision as to whether a patient is seen in the APMC at our institution is made by the referring surgeon. At a clinic visit, patients undergo a history and physical examination by an attending anesthesiologist, necessary preoperative tests and consults are requested, and previous studies are reviewed. An anesthetic plan is then formulated. In addition, patients are counseled regarding their medications and oral intake. American Society of Anesthesiologists (ASA) physical status scores and ASA base billing units were collected for all patients who underwent an operative procedure. APMC records were used to collect ASA physical status scores for patients who had a planned operative procedure that was cancelled on the day of surgery. For patients who were cancelled and not seen in the APMC, ASA physical status scores and ASA base billing units could not be collected. The type of surgery for each patient and their age were also collected from the published operating room schedule.

Cancellation rates at both operating sites during the 6-month period were tabulated. Cancelled cases were defined as cases on the final copy of the published operating room schedule that did not occur. By definition, emergency cases did not appear on the printed schedule, so these cases and cases performed on weekends and holidays were not included in the study.

To determine case delay times, all first-start cases during the 6-month study period were examined. First-start cases were chosen because delays in starting these cases are most likely caused by a problem with the case itself (rather than a previous case causing the delay).

### Statistical Methods

Wilcoxon rank sum tests were used to test differences in ASA physical status scores and ASA base billing unit allocations in patients seen in the clinic and those not seen in the clinic. A two-sample t test was used to compare differences in age, and a Pearson chi-square test was used to compare the difference in types of surgery between the two groups.

Rates of cancellation were compared between APMC visitors and those not seen in the clinic using a Pearson chi-square test. A random-effects logistic regression model that treated days as random events was used to determine whether the cancellation rate was clustered on certain days due to nonrandom events (e.g., surgeons who were ill, operating room closings). Results were verified by applying a t test to the Freeman-Tukey double arcsin transformed cancellation rate per month as suggested by Dexter et al.[9] A chi-square test was used to determine an association between ASA physical status score, type of surgery, and rate of cancellation in APMC-evaluated patients. A two-sample t test analyzed the impact of age on cancellation in APMC-evaluated and non-APMC-evaluated patients separately. Finally, a multiple logistic regression model examined the independent effect of an APMC visit on cancellation rate after adjusting for age, ASA physical status, and type of surgery. The differential effect of the APMC by age was also explored in the framework of logistic regression.

For the delay data, a Wilcoxon rank sum test was used to determine the statistical significance of the difference in median times for room entry between patients who visited the APMC and those who did not. A median regression model with bootstrapped standard error determined the independent effect of an APMC visit on room entry time after controlling for age, type of surgery, ASA physical status, and ASA base billing units.[10,11] All statistical analyses were performed separately for each operative site. A P value of 0.05 or less was considered statistically significant.

The study period included 6,524 cases. Of these, 3,416 were in the same-day surgery suite; the remainder were in the general operating rooms. Overall, the APMC attendance rate was 43% (2,795 of 6,524). During the study, 743 cases (11%) were cancelled.

### Same-day Surgical Suite Cancellations

In the same-day surgery suite, 34% of patients (1,164 of 3,416) were evaluated in the APMC. ASA base billing units and ASA physical status scores for these patients were significantly higher than for patients not evaluated in the APMC (table 1). The average age of APMC-evaluated patients was 54.9 yr (SD, 20.0 yr) as compared with 30.2 yr (SD, 23.7 yr) for a non-APMC-evaluated patient, and this difference was highly significant.

Table 1. Characteristics by APMC Attendance in the Same-day Surgery Suite

Overall, 464 of the 3,416 scheduled procedures (13.6%) were cancelled on the day of surgery. In the APMC group, 98 of 1,164 (8.4%) were cancelled, as compared with 366 of 2,252 (16.2%) in the non-APMC group (odds ratio, 0.47; 95% confidence interval [CI], 0.37–0.60; P < 0.001; table 1). Figure 1 shows the histogram of cancellations per day. On average, 26.9 surgeries (SD, 6.2) were scheduled per day, and 3.7 (SD, 2.0) were cancelled per day. No clustering on date of cancellation was observed (intracluster correlation coefficient = 0; P = 1.0), suggesting little variation in the number of cancellations per day. Our result was verified by applying the t test to the transformed rates per month,[9] which demonstrated that the difference in cancellation rate between APMC visitors and nonvisitors was significant (P < 0.001).

Fig. 1. Histogram of cancellations per day for same-day surgery suite (26.9 ± 6.2 surgeries scheduled per day).

Surgery cancellations occurred more often among APMC patients with a higher ASA physical status score (P < 0.001; fig. 2). After controlling for age, ASA physical status, and type of surgery in a logistic regression model, the adjusted odds ratio of cancellation for an APMC visitor was 0.36 (95% CI, 0.27–0.47; P < 0.001). The impact of an APMC evaluation on cancellation was differentiated by age (P = 0.01), and the benefit with respect to cancellation was more pronounced in older patients. The odds ratios of cancellation in patients aged younger than 18 yr, 18–39 yr, 40–64 yr, and 65 yr or older were 0.74 (95% CI, 0.34–1.64), 0.51 (95% CI, 0.29–0.89), 0.40 (95% CI, 0.27–0.59), and 0.20 (95% CI, 0.12–0.32), respectively (fig. 3).

Fig. 2. Cancellation rate by American Society of Anesthesiologists (ASA) physical status among patients who visited the anesthesia preoperative medicine clinic and had surgery scheduled in the same-day surgery suite. Cancellations were more likely to occur among patients with a higher ASA physical status (P < 0.001).

Fig. 3. Odds ratio (95% confidence interval [CI]) for cancellation for an anesthesia preoperative medicine clinic visitor according to age group in the same-day surgery suite. There was a significant anesthesia preoperative medicine clinic visit by age interaction (P = 0.01).

### Same-day Surgical Suite Delays

Analysis of first-start room entry times in our same-day surgery suite demonstrated that the median in-room time for a patient seen in the clinic was 7:35 am (interquartile range, 7:31–7:42), and that for a patient not seen the median in-room time was 7:36 am (interquartile range, 7:30–7:46). After controlling for ASA physical status, ASA base billing units, type of surgery, and age using a median regression model, the median in-room time for a patient evaluated in the clinic was 3 min less than for a patient not evaluated (P < 0.001). In addition, patients with higher ASA physical status scores had longer delays on the day of surgery (P < 0.001).

### General Operating Room Cancellations

During the study period, 3,108 cases were scheduled in the general operating rooms, and 52% (1,631 of 3,108) of these patients were seen in the APMC. The ASA physical status scores and ASA base billing units were significantly higher among patients seen in the clinic than those not evaluated (table 2). The average age of an APMC patient was 54.6 yr (SD, 17.1 yr), compared with an average age of 38.4 yr (SD, 23.0 yr) for patients not seen. This difference was also highly significant.

Table 2. Characteristics by APMC Attendance in the General Operating Rooms

Overall, 279 cases (9.0%) were cancelled on the day of surgery. Among APMC visitors, 87 of 1,631 (5.3%) were cancelled, as compared with 192 of 1,477 (13.0%) in patients without a clinic visit (odds ratio, 0.38; 95% CI, 0.29–0.49; P < 0.001; table 2). Figure 4 shows the histogram of cancellations per day. On average, 24.3 surgeries (SD, 4.9) were scheduled per day and 2.2 (SD, 1.6) were cancelled per day. As with the same-day surgery patients, there was no clustering on the date of cancellation (intracluster correlation coefficient = 0.02; P = 0.18), suggesting a relatively constant cancellation rate from day to day. A t test was also applied to the transformed cancellation rate per month,[9] and a significant difference in cancellation rate between APMC visitors and non-APMC visitors was observed (P < 0.001).

Fig. 4. Histogram of cancellations per day for general operating rooms (24.3 ± 4.9 surgeries scheduled per day).

Surgery cancellations occurred more often among APMC patients with a higher ASA physical status score (P = 0.06; fig. 5). After controlling for age, ASA physical status, and type of surgery in a logistic regression model, the adjusted odds ratio of cancellation for an APMC visitor was 0.37 (95% CI, 0.28–0.50; P < 0.001). The influence of an APMC visit was differentiated by age (P = 0.006), and the beneficial effect was more pronounced in older patients. The odds ratios of cancellation in patients aged younger than 18 yr, 18–39 yr, 40–64 yr, and 65 yr or older were 0.31 (95% CI, 0.09–1.06), 0.95 (95% CI, 0.52–1.70), 0.33 (95% CI, 0.22–0.50), and 0.24 (95% CI, 0.14–0.41), respectively (fig. 6).

Fig. 5. Cancellation rate by American Society of Anesthesiologists (ASA) physical status among patients who visited the anesthesia preoperative medicine clinic and had surgery scheduled in the general operating rooms. Cancellations were more likely to occur among patients with a higher ASA physical status (P = 0.06).

Fig. 6. Odds ratio (95% confidence interval [CI]) for cancellation for an anesthesia preoperative medicine clinic visitor according to age group in the general operating rooms. There was a significant anesthesia preoperative medicine clinic visit by age interaction (P = 0.006).

### General Operating Room Delays

In the general operating rooms, the median in-room time for non-APMC-evaluated patients was 7:37 am (interquartile range, 7:31–7:46), whereas APMC-evaluated patients had an in-room time of 7:35 am (interquartile range, 7:30–7:43). This difference was statistically significant (P < 0.001). After controlling for ASA physical status, ASA base billing units, type of surgery, and age, the results were similar (median 2 min longer wait for non-APMC-evaluated patients; P = 0.015). In addition, patients with higher ASA physical status scores had longer delays on the day of surgery (P < 0.014).

We found that patients seen in our preoperative clinic were cancelled less often and experienced fewer case delays than patients not seen. This observation was true even though patients seen in the clinic had higher ASA physical status scores, had higher ASA base billing units, and were older. Because ASA physical status score was independently associated with an increased cancellation rate, it is unlikely that our observations were the result of differences in severity of illness between patient groups. These data suggest strongly that our preoperative clinic played a significant role in reducing cancellation rates and case delays in our hospital. Although previous studies have shown that APMC visits reduce preoperative costs[12,13] and improve patient safety and satisfaction,[2,3] our study demonstrates that an APMC visit also decreases unused operating room time on the day of surgery.

A high rate of case cancellations has significant consequences. In addition to the negative impact that a case cancellation has on patient and staff satisfaction, cancellations also have potentially severe financial implications on hospital operations. When a case is cancelled, many dollars are potentially wasted on unnecessary setups, including sterilization, disposable instruments, and sutures. Many more dollars are lost when appropriated operating room time is not billed. Previous studies have suggested that revenues lost from cancellations range from $1,430 to $1,700 per hour plus variable costs in hospitals not on a fixed budget.[9]

A reduction in case delays might also have great financial implications. Although preventing delays alone may not free up sufficient time to add an extra case to the operating room schedule,[14,15] reducing delays could affect hospital staffing costs when operating rooms are running at or above full capacity (> 60% overtime utilization)[14,16] or when staffing costs are paid hourly instead of salaried (a growing trend with per diem staffing arrangements). Because the cost of a surgical minute has been estimated at $10,[17] even small reductions in case delays have the potential to save significant amounts of money when extrapolated across a busy operating room suite. In addition, in an operating room running at or near capacity, daytime delays can result in the use of overtime staffing, which can increase the per-minute cost of operating room utilization by 50–75%.

The sheer volume of daily cases in an institution may make funding of an APMC visit for all patients prohibitive. However, our study suggests that certain populations are more likely than others to benefit from a clinic visit. Older patients should be sent to clinic, because our data show that older patients (aged > 60 yr) had the greatest reduction in cancellation rate (odds ratio, 0.22) when they were seen in the APMC. This study also suggests that patients with more medical comorbidities should visit the clinic preoperatively.

This large, retrospective study of service data has several limitations. Because patients could not be randomly assigned to attend the APMC, selection bias may have skewed our results. To detect bias, we examined ASA physical status (as a measure of medical comorbidity), ASA base billing units (as a crude measure of operative case complexity), age, surgeon, and surgical procedure in both patient groups. We demonstrated that patients with higher ASA physical status scores were more likely to be cancelled on the day of surgery. Nevertheless, patients evaluated in the clinic had more medical comorbidities but were less likely to experience a cancellation than patients who were not seen in the clinic. This finding suggests that if patients were randomly assigned to attend the APMC, the impact of a visit on case cancellation might be even greater. Unfortunately, no ASA physical status scores or base billing units were available for cancelled patients not seen in the clinic. To address this issue, a prospective study could be performed with ASA physical status scores and ASA base billing units attributed to all patients, whether or not they were evaluated in the APMC, before the day of surgery. Finally, reasons for surgery case cancellation and delay were not reliably available in the medical record and therefore could not be assessed in the study.

In summary, we found that an APMC visit can impact the day of surgery by reducing both case cancellations and case delays. Although work in the past has shown that a clinic can reduce hospital costs before and after the day of surgery, this is one of the first studies to show a significant impact of a preoperative clinic visit on the day of surgery. With these results in mind, we believe that an APMC visit should be supported for many or all patients scheduled to undergo surgery.

References:

1. Kopp VJ: Preoperative preparation: Value, perspective and practice in patient care. Anesthesiol Clin North Am 2000; 18:551–74
2. Parsa P, Sweitzer B, Small SD: The contribution of a preoperative evaluation to patient safety in high-risk surgical patients: A pilot study (abstract). Anesth Analg 2004; 100:S-147
3. Hepner DL, Bader AM, Hurwitz S, Gustafson M, Tsen LC: Patient satisfaction with preoperative assessment in a preoperative assessment testing clinic. Anesth Analg 2004; 98:1099–105
4. Parker BM, Tetzlaff JE, Litaker DL, Maurer WG: Redefining the preoperative evaluation process and the role of the anesthesiologist. J Clin Anesth 2000; 12:350–6
5. Starsnic MA, Guarnieri DM, Norris MC: Efficacy and financial benefit of an anesthesiologist-directed university preadmission evaluation center. J Clin Anesth 1997; 9:299–305
6. Fischer SP: Development and effectiveness of an anesthesia preoperative evaluation clinic in a teaching hospital. Anesthesiology 1996; 85:190–206
7. Power LM, Thackray NM: Reduction of preoperative investigations with the introduction of an anaesthetist-led preoperative assessment clinic. Anaesth Intensive Care 1999; 27:481–8
8. Halaszynski TM, Juda R, Silverman DG: Optimizing postoperative outcomes with efficient preoperative assessment and management. Crit Care Med 2004; 32 (suppl):S76–86
9. Dexter F, Marcon E, Epstein RH, Ledolter J: Validation of statistical methods to compare cancellation rates on the day of surgery. Anesth Analg 2005; 101:465–73
10. Narula SC, Wellington JF: The minimum sum of absolute errors regression: A state of art survey. Intl Stat Review 1982; 50:317–26
11. Gould WW: Sg11.1: Quantile regression with bootstrapped standard errors. Statist Tech Bull 1992; 28:14–22
12. Foss JF, Apfelbaum J: Original investigations of preoperative evaluation clinics. Curr Opin Anaesthesiol 2001; 14:559–62
13. Pollard JB, Zboray AL, Mazze RI: Economic benefits attributed to opening a preoperative evaluation clinic for outpatients. Anesth Analg 1996; 83:407–10
14. Dexter F, Abouleish AE, Epstein RH, Whitten CW, Lubarsky DA: Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg 2003; 97:1119–26
15. Abouleish AE, Dexter F, Whitten CW, Zavaleta JR, Prough DS: Quantifying net staffing costs due to longer-than-average surgical case durations.
Anesthesiology 2004; 100:403\u201312\n16.\nEpstein RH, Dexter F: Uncertainty in knowing the operating rooms in which cases were performed has little effect on operating room allocations or efficiency. Anesth Analg 2002; 95:1726\u201330\n17.\nStrum DP, Vargas LG, May JH: Surgical subspecialty block utilization and capacity planning: A minimal cost analysis model. Anesthesiology 1999; 90:1176\u201385","date":"2022-12-02 02:16:45","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.20658399164676666, \"perplexity\": 8645.322706944002}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710890.97\/warc\/CC-MAIN-20221202014312-20221202044312-00714.warc.gz\"}"}
Merten Friese († after 1639) was an organ builder active in Danzig and in the Duchy of Prussia.
Life
Merten Friese was probably a son or descendant of Julius Anthoni Friese, who worked in Danzig as an organ builder until 1584.
He is recorded in Danzig from 1616 to 1629, and afterwards in the Duchy of Prussia.
Whether an organ in Grodno, then part of Poland, was also his work is unclear; a disposition characteristic similar to that of Julius Anthoni Friese and of himself points to their sphere of activity.
Organ cases by Merten Friese are preserved in the Trinitatiskirche and the Marienkirche in Danzig.
List of works (selection)
Literature
Werner Renkewitz, Jan Janca: Geschichte der Orgelbaukunst in Ost- und Westpreußen von 1333 bis 1984. Vol. 1. Weidlich, Würzburg 1984, pp. 86f.
External links
Grodno Lithuanian Historical Organs (in English)
References
Q: Get row id value with DataTables when using pure JSON input In my project I use DataTables to display data directly from the database. My input data is pure JSON, and I have never used any of the back-end processing methods mentioned on the DataTables website
$(document).ready(function() {
var table = $('#example').DataTable( {
"order": [[ -0 ]],
"pageLength": 24,
"oLanguage": { "sSearch": '<a class="btn searchBtn" id="searchBtn"><i class="fa fa-search"></i></a>' },
"lengthChange": false,
"ajax": {
"url": "data.php?content=userdata",
"dataSrc": ""
},
"columns": [
{ "data": "id",
"visible": false,
},
{ "data": "oid" },
{ "data": "name" },
{ "data": "mobile" },
{ "data": "email" },
{ "data": null,
"defaultContent": "<button>test</button>",
}
],
} );
$('#example tbody').on( 'click', 'button', function () {
var data = table.row( $(this).parents('tr') ).data();
alert("oid is: "+ data[ 1] );
} );
} );
and my JSON input is like this
[
  {
    "id": "3",
    "oid": "213",
    "name": "Koh Tien Kit Thomas",
    "mobile": "0123456789",
    "email": "some@mail.com"
  }
]
but when I click the button in a DataTable row, it says the value is undefined. My JSON does not include the data object wrapper that the DataTables site mentions. How do I fix this?
A: The API method row().data() returns the data in its original format. Since you used objects, the OID value is available in data['oid'].
Use the code below instead:
$('#example tbody').on( 'click', 'button', function () {
var data = table.row( $(this).parents('tr') ).data();
alert("oid is: "+ data['oid'] );
} );
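The underlying JavaScript behavior can be seen outside DataTables entirely. This is a minimal sketch (the sample object mirrors the JSON above; nothing here is DataTables-specific):

```javascript
// Row data as a plain object — the shape row().data() returns when
// the table's "columns" were configured with named "data" properties.
const data = {
  id: "3",
  oid: "213",
  name: "Koh Tien Kit Thomas",
  mobile: "0123456789",
  email: "some@mail.com"
};

// Numeric indexing only works when rows were supplied as arrays;
// on a plain object there is no property named "1".
console.log(data[1]);      // undefined
console.log(data["oid"]);  // "213"
```

So whether you index by position or by name must match the shape of the data you fed in.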
The 22nd edition of the Campeonato de Euskadi del Cuatro y Medio, a Basque pelota competition in the first-category professional hand-pelota variant, was held in 2010. It was organized jointly by Asegarce and ASPE, the two main companies in professional hand pelota.
The final was played on 12 December 2010 between Juan Martínez de Irujo and Abel Barriola.
Pelotaris
Seeded players in bold
First round
Round of 16
Quarter-finals
Semifinal round-robin
Round-robin standings
Final
The nonprofit sector is the collective name used to describe institutions and organizations in American society that are neither government nor for-profit business.
Other names often used include:
Not-for-profit sector
Independent sector
Voluntary sector
Nonprofits come in all shapes and sizes. Most are very small, community-based, all-volunteer groups, and some are very large, complex, professionally run businesses. This section of our website will shed light on Maine's diverse and robust nonprofit sector.
Scratch the surface of why people love Maine and you'll find a strong network of nonprofit organizations delivering on their mission. This section shows why we rely on nonprofits to make Maine the way life should be.
Frequently Asked Questions about the nonprofit sector such as: "Why are nonprofits tax-exempt?" and "What's the difference between a nonprofit and a not-for-profit?" Peruse this section to learn more about the history, structure and composition of the nonprofit sector.
How big is Maine's nonprofit sector? How many volunteers does the sector mobilize? How many Maine workers are employed by a nonprofit? This section presents the latest research on Maine nonprofits as well as national research.
Perform a software update to install the most recent software version for your Huawei P9 Lite, which contains the latest functionalities and applications.
The following steps contain instructions on how to update your Huawei P9 Lite to the latest software version over the air.
If this screen appears, choose Turn on.
The Huawei P9 Lite is now searching for available updates.
If this screen appears, the Huawei P9 Lite already has the latest software version.
\section{Introduction}
Throughout this article, we work over an algebraically closed field $k$ of an arbitrary characteristic. We are primarily interested in the nature of the slope stability of vector bundles under a finite surjective generically separable morphism of certain one-dimensional proper Deligne-Mumford stacks. The problem is studied in \cite{BP} for smooth irreducible $k$-curves, further generalized to higher dimensional normal varieties in \cite{BDP}, and in the context of formal orbifold curves in \cite{BKP}. The general idea is as follows: consider a finite surjective generically separable morphism (henceforth referred to as a cover) $f \colon Y \longrightarrow X$ of suitable objects on which vector bundles and slopes of vector bundles are well defined. Using the slopes $\mu_{X}(E)$ for vector bundles $E \in \text{\rm Vect}(X)$, one can define $\mu_X$-(semi/poly)-stability. When the cover $f$ is as in one of the above mentioned articles, it can be seen that for any $E \in \text{\rm Vect}(X)$, the pullback bundle $f^*E \in \text{\rm Vect}(Y)$ has the following properties.
\begin{itemize}
\item $E$ is $\mu_X$-semistable if and only if $f^*E$ is $\mu_Y$-semistable.
\item If $f^*E$ is $\mu_Y$-stable, then $E$ is $\mu_X$-stable.
\item Under suitable conditions on $f$, one can construct a $\mu_X$-stable bundle $E \in \text{\rm Vect}(X)$ such that $f^*E$ is not $\mu_Y$-stable.
\end{itemize}
The question is to find necessary and sufficient conditions on the covers $f$ such that for any $\mu_X$-stable bundle $E \in \text{\rm Vect}(X)$, the pullback $f^*E \in \text{\rm Vect}(Y)$ is $\mu_Y$-stable. This characterization of covers preserving slope stability is completely done in \cite{BP} for smooth projective curves and in \cite{BDP} for normal proper varieties: there covers are called `genuinely ramified covers'.
We will give this complete characterization of covers $f \colon \mathfrak{Y} \longrightarrow (X,P)$ of proper stacky curves (i.e., a connected reduced one-dimensional proper separated Deligne-Mumford stack of finite type over $k$; see Definition~\ref{def_stacky_curve}) with $(X, P)$ an orbifold curve (i.e. a smooth stacky curve or equivalently, a formal orbifold curve defined by a finite data $P$ of certain finite Galois field extensions associated finitely many closed points on a smooth projective connected $k$-curve $X$; see Definition~\ref{def_f_o_c} and Theorem~\ref{thm_equiv}). For this, we prove the necessary properties of the slope (semi/poly)stability (with respect to the slope $\mu_{\mathfrak{X}}$ for a stacky curve $\mathfrak{X}$ and $\mu_P = \mu_{(X,P)}$ for an orbifold curve $(X,P)$; see Section~\ref{sec_stability_stacky_curve}). The following result gives a set of equivalent conditions on a cover to be `genuinely ramified', generalizing the results of \cite{BP}.
\begin{proposition}[{Proposition~\ref{prop_gen_ram_equivalences}; see \cite[Lemma~2.4, Proposition~2.5, Lemma~3.1]{BP} in case of smooth curves, or \cite[Theorem~2.4]{BDP} for a higher dimensional analogue}]\label{prop_intro_1}
Let $\mathfrak{X} = (X,P)$ be a proper orbifold $k$-curve. Let $f \colon \mathfrak{Y} \longrightarrow (X,P)$ be a finite cover of proper stacky curves. The maximal destabilizing sub-bundle (cf. Proposition~\ref{prop_HN_socle}~\eqref{h:2}) $\text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \subset f_* \mathcal{O}_{\mathfrak{Y}}$ is a sheaf of $\mathcal{O}_{(X,P)}$-algebras, and it is a $P$-semistable vector bundle of $P$-degree $0$. Moreover, the following are equivalent for the finite cover $f \colon \mathfrak{Y} \longrightarrow (X,P)$.
\begin{enumerate}
\item $\text{\rm HN}(f_*\mathcal{O}_{\mathfrak{Y}})_1 = \mathcal{O}_{(X,P)}$.\label{e:1}
\item The map $f$ does not factor through any non-trivial \'{e}tale sub-cover.\label{e:2}
\item The homomorphism between \'{e}tale fundamental groups $f_* \colon \pi_1(\mathfrak{Y}) \longrightarrow \pi_1(\mathfrak{X})$ induced by $f$ is a surjection.\label{e:3}
\item The fiber product stacky curve $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$ is connected.\label{e:4}
\item $\text{\rm dim} \, H^0( \mathfrak{Y}, f^* f_* \mathcal{O}_{\mathfrak{Y}}) = 1$.\label{e:5}
\end{enumerate}
Finally, the above conditions imply that the finite cover $f_0$ induced on the Coarse moduli curves is a genuinely ramified morphism.
\end{proposition}
\emph{A cover $f$ as in the above result, satisfying the equivalent conditions, will be referred to as a genuinely ramified cover.}
We mention that the slope stability of vector bundles on orbifold curves has been studied in \cite{BKP} using an equivariant set up; we give the definition of this slope stability using the more intrinsic finite data $P$ on an orbifold curve $(X,P)$, hence concluding finer results (e.g. Proposition~\ref{prop_HN_socle}).
It should also be mentioned that although the stacky curves considered are not necessarily smooth, they admit smooth Coarse moduli curves by definition; also the target of any finite cover considered above is smooth. This is done to make sure that the pushforward coherent sheaf $f_*\mathcal{O}_{\mathfrak{Y}}$ of the structure sheaf $\mathcal{O}_{\mathfrak{Y}}$ is a vector bundle on $\mathfrak{X}$, and we can avoid torsion. Now, we come back to our original question:
\emph{Under what conditions on $f$ does every $\mu_{\mathfrak{X}}$-stable bundle on $\mathfrak{X}$ pull back to a $\mu_{\mathfrak{Y}}$-stable bundle on $\mathfrak{Y}$?}
In \cite{BP}, it was shown that, for a cover of smooth irreducible curves, the precise conditions needed on $f$ are the equivalent conditions of the above proposition. We prove that the exact condition needed in our case is that $f$ is a genuinely ramified cover.
\begin{theorem}[{Proposition~\ref{prop_counter_eg}, Theorem~\ref{thm_main}}]\label{thm_intro}
Let $f \colon \mathfrak{Y} \longrightarrow (X,P)$ be a finite cover from a proper stacky curve to a proper orbifold curve. The cover $f$ is genuinely ramified (i.e. satisfies the equivalent conditions from Proposition~\ref{prop_intro_1}) if and only if the pullback $f^*E$ for any $P$-stable bundle $E \in \text{\rm Vect}(X,P)$ is $\mu_{\mathfrak{Y}}$-stable.
\end{theorem}
The structure of the paper is as follows. In Section~\ref{sec_stacky_curves}, we recall the definitions and conventions for the objects of our interest related to stacky curves, orbifold curves and formal orbifold curves. Section~\ref{sec_slope_stability} is devoted to formalizing the definition and properties of slope stability for vector bundles on a stacky curve or on an orbifold curve. The equivalent conditions in Proposition~\ref{prop_intro_1} are proved in Section~\ref{sec_gen_ram}. In Section~\ref{sec_main}, we prove the characterization of the genuinely ramified maps as the stability preserving morphisms. Appendix~\ref{sec_equiv} is added to provide the complete arguments showing the categorical equivalences of the proper orbifold curves and the vector bundles on them with the proper formal orbifold curves and the vector bundles on them. For the purposes of Sections~\ref{sec_slope_stability}--\ref{sec_main}, we will always work with proper stacky curves.
\section*{Acknowledgement}
I would like to thank Snehajit Misra for valuable discussions. I am indebted to Indranil Biswas, Manish Kumar, Souradeep Majumder and A. J. Parameswaran. The author is supported by NBHM Post-doctoral Fellowship.
\section{Notation and Convention}\label{sec_Not}
Throughout this article, we work over an algebraically closed field $k$ of arbitrary characteristics. In this paper, the curves we consider are reduced $k$-curves. For any $k$-scheme $W$ and a closed point $w \in W$, we denote the completion of the local ring $\mathcal{O}_{W,w}$ of $W$ at $w$ by $\widehat{\mathcal{O}}_{W,w}$. When this complete local ring is a domain, we set $K_{W,w}$ as the quotient field $QF(\widehat{\mathcal{O}}_{W,w})$. For a smooth $k$-curve $X$ and a closed point $x \in X$, any finite extension $L/K_{X,x}$ is understood as an extension in a fixed separable closure $K^{\text{Sep}}_{X,x}$ of $K_{X,x}$.
A \textit{cover} $f \colon Y \longrightarrow X$ of curves refers to a finite surjective morphism $f$ that is generically separable. For a finite group $G$, a $G$-\textit{Galois cover} $Y \longrightarrow X$ is a finite cover together with a $G$-action on $Y$ such that $G$ acts simply transitively on each generic geometric fiber. Any finite cover (in particular, a Galois cover) is \'{e}tale away from finitely many points on the base curve, which may be empty.
For a $G$-Galois cover $f \colon Y \longrightarrow X$, the group $G$ acts transitively on the fiber $f^{-1}(x) \subset Y$ for each point $x \in X$; the stabilizer groups at points in $f^{-1}(x)$ are conjugate to each other in $G$. (Up to conjugacy) we define an \textit{inertia group} above a point $x \in X$ to be a stabilizer group $\text{\rm Stab}_G(y)$ for some $y \in f^{-1}(x)$. In particular, the cover $f$ is \'{e}tale above $x \in X$ if and only if the inertia group above $x$ is the trivial group. When the order of the inertia group at $x$ is invertible in $k$, we say that the cover $f$ is \textit{tamely ramified} over the point $x$.
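As an illustration of inertia groups and tame ramification, consider the Kummer cover of the projective line (a standard example added here; it is not taken from the surrounding text):

```latex
% Standard example: the degree-$n$ Kummer cover of $\mathbb{P}^1_k$.
Let $n$ be invertible in $k$ and consider
\[
  f \colon \mathbb{P}^1_k \longrightarrow \mathbb{P}^1_k, \qquad t \longmapsto t^n ,
\]
which is a $(\mathbb{Z}/n)$-Galois cover for the action $t \mapsto \zeta t$,
where $\zeta$ is a primitive $n$-th root of unity. The cover $f$ is \'{e}tale
away from $\{0, \infty\}$, and the inertia group above each of $0$ and $\infty$
is the full group $\mathbb{Z}/n$; since $n$ is invertible in $k$, the cover is
tamely ramified over both of these points.
```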
\section{Preliminaries}
\subsection{Stacky curves, Orbifold Curves}\label{sec_stacky_curves}
We fix an algebraically closed base field $k$ of an arbitrary characteristic. For the definition and properties of a Deligne-Mumford stack (a DM stack) and a morphism of DM stacks, we refer to \cite{Olsson}, \cite{DM} and \cite[Appendix A]{V}. Our interest will be on the connected proper one-dimensional DM stacks that are generically schematic. We mention some important defining properties and convention used in this article.
We only consider DM stacks that are \emph{separated and of finite type} over $k$. A \textit{representable morphism} $\mathfrak{Y} \longrightarrow \mathfrak{X}$ in this article is a morphism representable by a scheme, i.e. for any scheme $Z$ and a morphism $Z \longrightarrow \mathfrak{X}$ of stacks, the fibre product $\mathfrak{Y} \times_{\mathfrak{X}} Z$ is a scheme. A representable morphism $\mathfrak{Y} \longrightarrow \mathfrak{X}$ is said to be \textit{unramified} if for any scheme $Z$ and a morphism $Z \longrightarrow \mathfrak{X}$ of stacks, the morphism $\mathfrak{Y} \times_{\mathfrak{X}} Z \longrightarrow Z$ is a formally unramified morphism of schemes. For a DM stack $\mathfrak{X}$ over $k$, the following hold.
\begin{enumerate}
\item The diagonal morphism
$$\Delta_{\mathfrak{X}} \colon \mathfrak{X} \longrightarrow \mathfrak{X} \times_{\text{\rm Spec}(k)} \mathfrak{X}$$
is a representable unramified morphism (see \cite[Proposition~7.15]{V}, \cite[Theorem~8.3.3]{Olsson}). The `separated' assumption on $\mathfrak{X}$ means that for any morphism $Y \longrightarrow \mathfrak{X} \times_{\text{\rm Spec}(k)} \mathfrak{X}$ of stacks where $Y$ is a scheme, the morphism
\begin{equation}\label{eq_1}
\mathfrak{X} \times_{\mathfrak{X} \times_{\text{\rm Spec}(k)} \mathfrak{X}} Y \longrightarrow Y
\end{equation}
of schemes is a proper (equivalently, finite) morphism.
\item There exists an \'{e}tale surjective morphism $Z \longrightarrow \mathfrak{X}$ from a scheme $Z$ (the morphism $Z \longrightarrow \mathfrak{X}$ is called an \textit{atlas} of $\mathfrak{X}$). We say that $\mathfrak{X}$ is \textit{smooth} if there exists an atlas $Z \longrightarrow \mathfrak{X}$ where $Z$ is a smooth scheme (equivalently, for every atlas $Z' \longrightarrow \mathfrak{X}$, \, $Z'$ is a smooth scheme; \cite[Section~4, pg. 100]{DM}).
\item The DM stack $\mathfrak{X}$ admits a Coarse moduli scheme $\pi \colon \mathfrak{X} \longrightarrow X$ (\cite[Theorem 11.1.2]{Olsson}) satisfying the following properties.
\begin{enumerate}
\item The morphism $\pi$ is initial among all morphisms from $\mathfrak{X}$ to $k$-schemes.
\item $\pi$ induces a bijective correspondence between the $k$-points of $X$ and the isomorphism classes of $k$-points of $\mathfrak{X}$.
\item $X$ is separated and of finite type over $k$.
\item $\pi$ is a proper morphism of stacks, and $\pi_* \mathcal{O}_{\mathfrak{X}} = \mathcal{O}_X$.
\item (\cite[Theorem~11.3.6]{Olsson}) For any morphism $h \colon X' \longrightarrow X$ of schemes, the Coarse moduli scheme of the fiber product DM stack $X' \times_X \mathfrak{X}$ is universally homeomorphic to $X'$, and it is an isomorphism if either $h$ is flat or $\mathfrak{X}$ is a tame stack.
\end{enumerate}
If the stack $\mathfrak{X}$ is one-dimensional (i.e. it admits an atlas from a $k$-curve), the Coarse moduli space $X$ is also a $k$-curve.
\end{enumerate}
The notion of connectedness and irreducibility are well defined for a DM stack $\mathfrak{X}$; see \cite[Section~4]{DM}. In particular, $\mathfrak{X}$ is a disjoint union of its connected components, and each connected component is a union of its irreducible components (\cite[Proposition~4.13, Proposition~4.15]{DM}) in a unique way. It can be seen that the DM stack $\mathfrak{X}$ is connected if and only if its Coarse moduli scheme $X$ is connected.
\begin{example}\label{eg_quotient_stack}
One important example of a DM stack is a quotient stack. Let $Y$ be a quasi-projective $k$-variety equipped with an action of a finite group $G$ such that the quotient variety $X \coloneqq Y/G$ exists. We can assign a DM stack $[Y/G]$ to this data (see \cite[Example~8.1.12]{Olsson}). The $k$-points of $[Y/G]$ and the closed points of $X$ are both canonically identified with the $G$-orbits of the closed points of $Y$. For each such point $x \in [Y/G](k)$, we obtain a stabilizer group $G_x$, which is the group of automorphisms lying over $\text{\rm Id}_{\text{\rm Spec}\left( k\right)}$. More precisely, the fiber product $[Y/G] \, \times_{\Delta, \, \left( [Y/G] \times_{\text{\rm Spec}(k)} [Y/G] \right) , \, (x,x)} \, \text{\rm Spec}\left( k \right)$ is a constant $k$-group scheme associated to the finite group $G_x$. A point $x \in [Y/G](k)$ is called a \textit{stacky point} if the stabilizer group $G_x$ is non-trivial. The canonical morphism $Y \longrightarrow [Y/G]$ is an atlas, and so $[Y/G]$ is a smooth (respectively, proper) DM stack if and only if $Y$ is smooth (respectively, proper). Moreover, the stack $[Y/G]$ admits a Coarse moduli morphism $[Y/G] \longrightarrow X$.
Assume that $Y$ is a projective curve (not necessarily connected or smooth). Since $G$ is a finite group, the branch locus $B$ of the $G$-Galois cover $Y \longrightarrow X$ consists of finitely many closed points of $X$. Set $U = X - B$. Then a point $x \in [Y/G](k)$ is a stacky point if and only if the image of $x$ in $X$ is in $B$. It follows that $[Y/G] \times_X U \cong U$.
\end{example}
We come to the definition of a stacky curve and an orbifold curve. In some literature (e.g. \cite{VZB}), these objects have the same definition; but for our context, we distinguish them: \emph{an orbifold curve is a smooth stacky curve}.
\begin{definition}[{Stacky Curve}]\label{def_stacky_curve}
A connected reduced separated DM stack $\mathfrak{X}$ of finite type over $k$ is said to be a \textit{stacky curve} if it satisfies the following properties.
\begin{enumerate}
\item every irreducible component of $\mathfrak{X}$ is one-dimensional and is generically an integral $k$-curve;
\item $\mathfrak{X}$ admits an irreducible smooth $k$-curve $X$ as its Coarse moduli space.
\end{enumerate}
A \textit{cover} or \textit{a finite cover} of stacky curves is defined to be a finite surjective morphism that is generically separable.
\end{definition}
Before proceeding, we make the following remarks.
\begin{remark}\label{rmk_Chow}
There is a smooth $k$-curve $Z$ together with a Galois cover $Z \longrightarrow X$ dominating the Coarse moduli morphism $\mathfrak{X} \longrightarrow X$. To see this, we can follow the proof of \cite[Theorem~11.4.1, Chow's Lemma, pg. 233]{Olsson} to find an \'{e}tale surjection $W \longrightarrow \mathfrak{X}$ with $W$ a reduced $k$-curve such that $W \longrightarrow X$ is a finite cover of $k$-curves. Further, taking the normalization of the function field $k(X)$ in suitable Galois extensions containing all the function fields of the generic points of $W$, we conclude the statement.
We will frequently consider stacky curves which are proper; in this case, the above Galois cover $Z \longrightarrow X$ is naturally a cover of smooth projective $k$-curves.
\end{remark}
\begin{remark}\label{rmk_deg_cover}
The definition of a finite morphism of stacky curves $f \colon \mathfrak{Y} \longrightarrow \mathfrak{X}$ is taken analogous to the definition of a proper morphism in \cite[Definition~4.11]{DM}. So a finite morphism is not necessarily representable; the morphism is finite if it is dominated by a morphism $\mathfrak{Y}' \longrightarrow \mathfrak{X}$ of stacky curves that is representable and finite, and $\mathfrak{Y}' \longrightarrow \mathfrak{Y}$ is a surjection. The generically separable condition can be checked for the induced morphism of the moduli curves since a stacky curve is generically isomorphic to its Coarse moduli curve. So a finite cover $f \colon \mathfrak{Y} \longrightarrow \mathfrak{X}$ induces a finite cover $f_0 \colon Y \longrightarrow X$ of the Coarse moduli curves. We set
$$\text{\rm deg}(f) \coloneqq \text{\rm deg}(f_0).$$
Moreover, when $\mathfrak{Y} = Y \times_X \mathfrak{X}$, the projection $\mathfrak{Y} \longrightarrow \mathfrak{X}$ is a finite cover if and only if $f_0$ is a finite cover. This is also equivalent to $Y \times_X Z \longrightarrow Z$ being a finite cover where $Z \longrightarrow X$ is any finite cover dominating the Coarse moduli morphism $\mathfrak{X} \longrightarrow X$.
\end{remark}
\begin{remark}\label{rmk_non-smooth}
We also note that a stacky curve in our definition need not be smooth although its Coarse moduli curve is smooth. Take the example of two distinct lines in $\mathbb{A}^2$ intersecting at a point, with $\mathbb{Z}/2$ acting on this union of lines by interchanging the two lines. The corresponding quotient stacky curve is not smooth as the union of the above lines is not smooth, but the Coarse moduli curve $\mathbb{A}^1$ is smooth.
\end{remark}
\begin{definition}[{Orbifold Curve}]\label{def_orbi_curve}
A stacky curve $\mathfrak{X}$ is said to be an \textit{orbifold curve} if it is smooth (i.e. any atlas is a smooth $k$-curve).
\end{definition}
When $Y \longrightarrow X$ is a $G$-Galois cover of smooth $k$-curves for some finite group $G$ and $X$ is connected, the quotient stack $[Y/G]$ in Example~\ref{eg_quotient_stack} is an orbifold curve.
Let $\mathfrak{X}$ be an orbifold curve. For any point $x \in \mathfrak{X}(k)$, the fiber product $\mathfrak{X} \times_{\Delta_{\mathfrak{X}}, \mathfrak{X} \times_k \mathfrak{X}, (x,x)} \text{\rm Spec}(k) = \underline{\text{\rm Isom}}(x,x)$ is a constant group $k$-scheme associated to a finite group $G_x$ (up to a canonical isomorphism). We say that $G_x$ is the \textit{stabilizer group} at $x$. So a closed geometric point in $\mathfrak{X}$ is identified with a closed point $x$ of $X$ together with the group $G_x$ that acts as the group of automorphisms on $x$. It follows that $G_x$ is the trivial group if and only if $x$ lies in the open sub-scheme of $\mathfrak{X}$.
We note that a stacky curve $\mathfrak{X}$ is \'{e}tale locally a quotient stack (\cite[Theorem~11.3.1]{Olsson}). When $\mathfrak{X}$ is an orbifold curve, this description produces a finite data of certain Galois field extensions associated to finitely many closed points of the Coarse moduli curve, called a formal orbifold curve (introduced in \cite{P} when $k = \mathbb{C}$; generalized over fields of arbitrary characteristic in \cite{KP}).
\begin{definition}\label{def_f_o_c}
A \textit{formal orbifold curve} is a pair $(X,P)$ where $X$ is a smooth $k$-curve and $P$ is a branch data, i.e. a function that to every closed point $x \in X$ associates a finite Galois extension $P(x)$ of $K_{X,x}$ (in some fixed separable algebraic closure of $K_{X,x}$) such that the set $\text{\rm Supp}(P)$, called the \textit{support} of $P$, defined as
$$\text{\rm Supp}(P) \coloneqq \left\{ x \in X \hspace{.2cm} | \hspace{.2cm} P(x) \text{ is a nontrivial extension of } K_{X,x} \right\}$$
is a finite set of closed points of $X$. A formal orbifold curve $(X,P)$ is said to be \textit{connected} (respectively, \textit{projective}) if the $k$-curve $X$ is connected (respectively, projective).
\end{definition}
Thus, a formal orbifold curve is a smooth curve $X$ together with the data of a finite set $B = \text{\rm Supp}(P)$ (which may be empty) of closed points in $X$, and for each $x \in B$, a finite Galois extension $P(x)/K_{X,x}$. When $|\text{\rm Gal}\left( P(x)/K_{X,x} \right)|$ is invertible in $k$ for each closed point $x \in X$, the Galois extensions $P(x)/K_{X,x}$ are uniquely determined by their degree. In these situations, $(X,P)$ is determined by $X$ together with finitely many points and a positive integer (that is invertible in $k$) attached to each of these points.
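To make the definition concrete, here is a standard tame example (added for illustration; it is not part of the original text):

```latex
% The ``football'' orbifold curve, described as a formal orbifold curve.
Take $X = \mathbb{P}^1_k$ and fix an integer $n$ invertible in $k$. Let $t$ be a
local parameter at $0$ and $u = 1/t$ a local parameter at $\infty$, and define
\[
  P(0) = K_{X,0}\big(t^{1/n}\big), \qquad
  P(\infty) = K_{X,\infty}\big(u^{1/n}\big), \qquad
  P(x) = K_{X,x} \ \text{ otherwise},
\]
so that $\text{\rm Supp}(P) = \{0, \infty\}$ and both Galois groups are cyclic
of order $n$. The associated proper orbifold curve is the ``football''
$[\mathbb{P}^1_k / \mu_n]$ for the action $t \mapsto \zeta t$: its Coarse moduli
curve is $\mathbb{P}^1_k$, and the stabilizer group is $\mu_n \cong \mathbb{Z}/n$
exactly at the two stacky points lying over $0$ and $\infty$.
```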
One useful perspective for us is that the assignment of a proper orbifold curve to a formal orbifold curve is an equivalence of categories (Theorem~\ref{thm_equiv}). Moreover, by Theorem~\ref{thm_equiv_bundles}, the category of vector bundles on an orbifold curve in the sense of stacks coincides with the category of vector bundles defined on the corresponding formal orbifold curve as in \cite[Definition~4.4, Definition~4.5]{KM}. These equivalences are described in detail in Appendix~\ref{sec_equiv}.
Finally, we list some useful definitions and notation used throughout this article.
\begin{enumerate}
\item For two branch data $P$ and $Q$ on a smooth projective curve $X$, we write $Q \geq P$ if $P(x) \subset Q(x)$ as extensions of $K_{X,x}$ for every closed point $x \in X$.
\item (\cite[Definition~2.5]{KP}) Given a finite cover $f_0 \colon Y \longrightarrow X$ of smooth projective curves and a branch data $P$ on $X$, we can define the pullback branch data $f_0^*P$ on $Y$: for any closed point $y \in Y$, the field $f_0^*P(y)$ is the compositum $P(f_0(y)) \cdot K_{Y,y}$. By \cite[Lemma~2.12]{KP}, there is a morphism $f \colon (Y,Q) \longrightarrow (X,P)$ of formal orbifold curves if and only if $Q \geq f_0^*P$. Moreover, the induced morphism $f \colon (Y, f_0^*P) \longrightarrow (X,P)$ of formal orbifold curves is \'{e}tale (i.e. $f_0^*P(y) = P(f_0(y))$ as extensions of $K_{X,f_0(y)}$ for all closed points $y \in Y$) if and only if $f_0$ is an essentially \'{e}tale cover of $(X,P)$ (i.e. $K_{Y,y} \subset P(f_0(y))$ for all closed points $y \in Y$).\label{n:1}
\item (\cite[Definition~2.28]{KP}) A connected formal orbifold curve $(X,P)$ is said to be \textit{geometric} if there exists a connected \'{e}tale cover $(Y,O) = Y \longrightarrow (X,P)$ of formal orbifold curves where $O$ is the trivial branch data on $Y$. In this case, $P$ is called a geometric branch data on $X$.
Under the equivalence of Theorem~\ref{thm_equiv}, the proper orbifold curve $\mathfrak{X}$ associated to $(X,P)$ is a quotient stack if and only if $(X,P)$ is geometric. By \cite[Proposition~2.30]{KP}, given any branch data $P$ on $X$, we can always find a branch data $Q$ such that $Q\geq P$ and $Q$ is geometric.\label{n:2}
\end{enumerate}
Vector bundles and their morphisms on an orbifold curve were studied in \cite{KM}. For any closed point $x \in X$, let $R_x$ be the integral closure of $\widehat{\mathcal{O}}_{X,x}$ in $P(x)$, and set $G_x \coloneqq \text{\rm Gal}\left( P(x)/K_{X,x} \right)$. A vector bundle $E = \left( E_0, \Phi, \eta \right)$ on a formal orbifold curve $(X,P)$ is defined (see \cite[Definition~4.4, Definition~4.5]{KM}) as a vector bundle $E_0$ on the curve $X$ together with action maps $\Phi_x$ for a $G_x$-action on $(E_0)_x \otimes_{\mathcal{O}_{X,x}} R_x$, compatible via $G_x$-equivariant isomorphisms $\eta_x$ to the generically trivial actions of $G_x$ on $R_x$. We refer to \cite{KM} for the details and notation. One important notion for us is a sub-bundle of a vector bundle on $(X,P)$.
\begin{definition}\label{def_subsubdle_formal_orbifold_curves}
Let $E = \left( E_0, \Phi, \eta \right) \in \text{\rm Vect}(X,P)$ be a vector bundle. Let $R_x$ and $G_x$ be defined as above. A vector bundle $F = \left( F_0, \Psi, \theta \right) \in \text{\rm Vect}(X,P)$ is called a \textit{sub-bundle} of $E$ if there is a morphism $\left( g, \sigma \right) \colon F \longrightarrow E$ such that $g \colon F_0 \longrightarrow E_0$ is an injective homomorphism making $F_0$ into a sub-bundle of $E_0$ (so the quotient $E_0/g(F_0)$ is a vector bundle) and for each $x \in \text{\rm Supp}(P)$, the $R_x$-module homomorphism $\sigma_x \colon (F_0)_x \otimes_{\mathcal{O}_{X,x}} R_x \longrightarrow (E_0)_x \otimes_{\mathcal{O}_{X,x}} R_x$ is a $G_x$-equivariant monomorphism.
\end{definition}
\begin{remark}\label{rmk_sub-bundle}
For a sub-bundle $F \subset E$ in $\text{\rm Vect}(X,P)$ as above, for any $g \in G_x$ and $a \in (F_0)_x \otimes_{\mathcal{O}_{X,x}} R_x$, we have $\sigma_x(\Psi_x(g) (a)) = \Phi_x(g)(\sigma_x(a))$. Thus for any $g, \, a$, and $r \in R_x$, we have the following.
\begin{equation*}
\begin{array}{rcl}
\Phi_x(g)(r \cdot \sigma_x(a)) & = & \phi_x(g)(r) \cdot \Phi_x(g)(\sigma_x(a)) \\
& = & \phi_x(g)(r) \cdot \sigma_x(\Psi_x(g)(a)) \\
& = & \sigma_x(\Psi_x(g)(r \cdot a)).
\end{array}
\end{equation*}
Thus the action morphisms $\Psi_x$ and the $G_x$-equivariant isomorphisms $\theta_x$ are restrictions of $\Phi_x$ and $\eta_x$, respectively.
\end{remark}
\emph{In whatever follows, we will work over proper orbifold curves and will not distinguish between an orbifold curve and its corresponding formal orbifold curve. Without further mention, we will use the notions of a vector bundle on an orbifold curve as a vector bundle on a stack and as a bundle on the corresponding formal orbifold curve interchangeably.}
\section{Slope Stability}\label{sec_slope_stability}
\subsection{Slope stability for stacky curves}\label{sec_stability_stacky_curve}
In this section, we consider slope stability conditions for vector bundles on a proper stacky curve $\mathfrak{X}$ over $k$, defined in terms of an equivariant set-up. Although this notion is known to experts, we present it in a concise, comprehensive way. We adapt notions from \cite[Definition 7.18]{V}: a vector bundle (respectively, a quasi-coherent or a coherent sheaf) on $\mathfrak{X}$ is the data of a vector bundle (respectively, a quasi-coherent or a coherent sheaf) on each atlas satisfying certain cocycle conditions. The structure sheaf $\mathcal{O}_{\mathfrak{X}}$ on an orbifold curve $\mathfrak{X}$ is the quasi-coherent sheaf defined by associating the structure sheaf $\mathcal{O}_Z$ to every atlas $Z$ of $\mathfrak{X}$. It should be noted that a quasi-coherent sheaf in the above sense is actually a quasi-coherent sheaf of $\mathcal{O}_{\mathfrak{X}}$-modules as in \cite[Definition~9.1.14, Proposition~9.1.15]{Olsson}. Under the hypothesis on $\mathfrak{X}$, we can always find a Galois cover $Z \longrightarrow X$ of projective curves dominating the Coarse moduli morphism (see Remark~\ref{rmk_Chow}). For any morphism $f \colon \mathfrak{Y} \longrightarrow \mathfrak{X}$ of stacky curves, we have the following functors
$$\text{\rm Vect}(\mathfrak{Y}) \overset{f_*} \longrightarrow \text{\rm Vect}(\mathfrak{X}) \hspace{.5cm} \text{and} \hspace{.5cm} \text{\rm Vect}(\mathfrak{X}) \overset{f^*} \longrightarrow \text{\rm Vect}(\mathfrak{Y})$$
of the categories of vector bundles (\cite[Section~9.2.5. pg. 198 and Section~9.3.]{Olsson}; defined up to a canonical natural isomorphism for the choice of charts). We start with the following observations.
First, suppose that $Z$ is a projective (not necessarily smooth or connected) $k$-curve equipped with an action of a finite group $G$ such that $X \coloneqq Z/G$ is a smooth projective connected $k$-curve. Consider the quotient stacky curve $\mathfrak{X} = [Z/G]$ (see Example~\ref{eg_quotient_stack}). Then the $G$-Galois cover $f \colon Z \longrightarrow X$ factors as a composition of a $G$-Galois \'{e}tale cover $g \colon Z \longrightarrow \mathfrak{X}$ followed by the Coarse moduli morphism $\iota \colon \mathfrak{X} \longrightarrow X$. The functor $g^*$ defines an equivalence of categories
\begin{equation}\label{eq_equivalence_orbifold_and_equivariant}
g^* \colon \text{\rm Vect}(\mathfrak{X}) \overset{\sim} \longrightarrow \text{\rm Vect}^G(Z)
\end{equation}
of vector bundles on $\mathfrak{X}$ with the $G$-equivariant vector bundles on $Z$, with a quasi-inverse defined by $g^G_*$ (to see that this defines a quasi-inverse, one can work over charts and use the Galois \'{e}tale descent for schemes).
Now consider a stacky curve $\mathfrak{X}'$ with its Coarse moduli curve $X$. By Remark~\ref{rmk_Chow}, for some finite group $G$, there exists a $G$-Galois cover $f \colon Z \longrightarrow X$ that factors as the composition
$$f \colon Z \overset{g} \longrightarrow \mathfrak{X} \coloneqq [Z/G] \overset{\iota} \longrightarrow \mathfrak{X}' \overset{\iota '} \longrightarrow X$$
where $\iota'$ and $\iota ' \circ \iota$ are the Coarse moduli morphisms. By definition, any vector bundle $E$ on $\mathfrak{X}'$ can be seen as a $G$-equivariant vector bundle on $Z$, and under the equivalence~\eqref{eq_equivalence_orbifold_and_equivariant}, as a vector bundle on $\mathfrak{X}$. Further, for any two vector bundles $E, \, F \in \text{\rm Vect}(\mathfrak{X}')$, by \cite[Proposition~9.3.6, pg. 205]{Olsson} and \cite[Proposition~1.12]{Adjoint}, we have the following.
\begin{equation*}
\begin{array}{rcl}
\Hom_{\text{\rm Vect}(\mathfrak{X})}\left( \iota^* E, \iota^* F \right) & = & \Hom_{\text{\rm Vect}(\mathfrak{X}')}\left( E, \iota_* \iota^* F \right) \\
& = & \Hom_{\text{\rm Vect}(\mathfrak{X}')}\left( E, F \otimes_{\mathcal{O}_{\mathfrak{X}'}} \iota_*\mathcal{O}_{\mathfrak{X}} \right).
\end{array}
\end{equation*}
Since the map $\iota_Z \colon \mathfrak{X} \times_{\mathfrak{X}'} Z = Z \longrightarrow Z$ is the Coarse moduli map, $(\iota_Z)_* \mathcal{O}_{\mathfrak{X} \times_{\mathfrak{X}'} Z} = \mathcal{O}_Z$. We conclude that $\iota_* \mathcal{O}_{\mathfrak{X}} = \mathcal{O}_{\mathfrak{X}'}$. This shows that we have an embedding of categories
\begin{equation}\label{eq_inclusion_for_vect_on_stacky_curves}
\iota^* \colon \text{\rm Vect}(\mathfrak{X}') \hookrightarrow \text{\rm Vect}(\mathfrak{X}).
\end{equation}
In view of the above discussion, we make the following definition.
\begin{definition}\label{def_slope_stacky_curve}
Let $\mathfrak{X}$ be a proper stacky curve with Coarse moduli space $X$. Let $Z \longrightarrow X$ be a Galois cover of curves with group $G$, dominating the Coarse moduli map $\mathfrak{X} \longrightarrow X$. For a vector bundle $E \in \text{\rm Vect}(\mathfrak{X})$, define the \textit{degree} and the \textit{slope} of $E$ as follows.
Let the $G$-equivariant vector bundle $\mathcal{E}$ be the image of $E$ under the inclusion functor $\text{\rm Vect}(\mathfrak{X}) \hookrightarrow \text{\rm Vect}^G(Z)$ (as the composition of the functors in Equations~\eqref{eq_equivalence_orbifold_and_equivariant}, \eqref{eq_inclusion_for_vect_on_stacky_curves}). Define
$$\text{\rm deg}_{\mathfrak{X}}(E) \coloneqq \frac{1}{|G|} \text{\rm deg}(\mathcal{E}),$$
$$\text{and } \, \mu_{\mathfrak{X}}(E) \coloneqq \frac{1}{|G|} \mu(\mathcal{E}).$$
\end{definition}
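For instance (an illustrative computation supplied here, not taken from the cited sources): if $L$ is a line bundle of degree $d$ on the Coarse moduli curve $X$, its pullback to $Z$ along the degree-$|G|$ cover $Z \longrightarrow X$ is a $G$-equivariant line bundle of degree $|G| \cdot d$, so the pullback of $L$ to $\mathfrak{X}$ satisfies
$$\text{\rm deg}_{\mathfrak{X}} = \frac{1}{|G|} \cdot |G| d = d.$$
Thus $\text{\rm deg}_{\mathfrak{X}}$ extends the usual degree on bundles pulled back from $X$.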
\begin{remark}
To see that the above notions are well defined, it is enough to consider the case $\mathfrak{X} = [Z/G]$. Suppose that $[Z/G] = [Z'/G']$, and let $\mathcal{E}'$ be the $G'$-equivariant vector bundle on $Z'$ corresponding to $E$. As $\mathcal{E}$ and $\mathcal{E}'$ pull back to the same equivariant bundle on $Z \times_{\mathfrak{X}} Z'$ of degree $|G'| \text{\rm deg}(\mathcal{E}) = |G| \text{\rm deg}(\mathcal{E}')$, we see that $\text{\rm deg}_{\mathfrak{X}}$ and $\mu_{\mathfrak{X}}$ do not depend on the choice of the cover $Z \longrightarrow X$.
\end{remark}
In view of the above definition, we can define $\mu_{\mathfrak{X}}$-(semi)stable or $\mu_{\mathfrak{X}}$-polystable vector bundles using the slope $\mu_{\mathfrak{X}}$.
\begin{definition}\label{def_stability_condions_stacky_curve}
Let $\mathfrak{X}$ be a proper stacky curve with Coarse moduli curve $X$. A vector bundle $E \in \text{\rm Vect}(\mathfrak{X})$ is called $\mu_{\mathfrak{X}}$-(semi)stable if for any sub-bundle $0 \neq F \subset E$ in $\text{\rm Vect}(\mathfrak{X})$, we have $$
\mu_{\mathfrak{X}}(F) \hspace{.2cm} ( \leq ) \hspace{.2cm} \mu_{\mathfrak{X}}(E).\, \footnote{As in the case of schemes, the notation $(\leq)$ means that $E$ is $\mu_{\mathfrak{X}}$-semistable if we have $\leq$, and it is $\mu_{\mathfrak{X}}$-stable if we have the strict inequality $<$.}$$
A $\mu_{\mathfrak{X}}$-polystable vector bundle on $\mathfrak{X}$ is a $\mu_{\mathfrak{X}}$-semistable vector bundle that is a finite sum of $\mu_{\mathfrak{X}}$-stable vector bundles having the same slope.
\end{definition}
We have the following relation between a vector bundle on $\mathfrak{X}$ and the equivariant vector bundle on $Z$.
\begin{proposition}\label{prop_stacky_equivariant_equivalence}
Let $\mathfrak{X}$ be a proper stacky curve with Coarse moduli curve $X$. Let $Z \longrightarrow X$ be a Galois cover of curves with group $G$, dominating the Coarse moduli map $\mathfrak{X} \longrightarrow X$. Let $E \in \text{\rm Vect}(\mathfrak{X})$. Let the $G$-equivariant vector bundle $\mathcal{E}$ be the image of $E$ under the inclusion functor $\text{\rm Vect}(\mathfrak{X}) \hookrightarrow \text{\rm Vect}^G(Z)$ (as the composition of the functors in Equations~\eqref{eq_equivalence_orbifold_and_equivariant}, \eqref{eq_inclusion_for_vect_on_stacky_curves}). Then $E$ is $\mu_{\mathfrak{X}}$-(semi)stable if and only if $\mathcal{E}$ is $G$-(semi)stable. Moreover, $E$ is $\mu_{\mathfrak{X}}$-polystable if and only if $\mathcal{E}$ is $G$-polystable.
\end{proposition}
\begin{proof}
The first conclusion is immediate from the slope relation $\mu_{\mathfrak{X}} = \frac{1}{|G|} \mu$. This relation, together with the fact that the equivalence and inclusion functors in Equations~\eqref{eq_equivalence_orbifold_and_equivariant} and \eqref{eq_inclusion_for_vect_on_stacky_curves} preserve finite direct sums, implies the second statement.
\end{proof}
\begin{remark}\label{rmk_iota_preserve}
Using the above proposition, we also conclude that the embedding $\iota^*$ in Equation~\eqref{eq_inclusion_for_vect_on_stacky_curves} preserves slope stability conditions; namely, for any $E' \in \text{\rm Vect}(\mathfrak{X}')$, $\iota^* E'$ is $\mu_{\mathfrak{X}}$-(semi)stable (respectively, $\mu_{\mathfrak{X}}$-polystable) if and only if $E'$ is $\mu_{\mathfrak{X}'}$-(semi)stable (respectively, $\mu_{\mathfrak{X}'}$-polystable).
\end{remark}
\begin{remark}
We also note that a $G$-equivariant vector bundle $\mathcal{E}$ on $Z$ is $G$-semistable (respectively, $G$-polystable) if and only if $\mathcal{E}$ is semistable (respectively, polystable) in the usual sense (for example, see \cite[Lemma~2.7]{B}; these follow from the uniqueness of the Harder-Narasimhan filtration and of the socle of a semistable bundle). However, $G$-stability need not be the same as the usual stability -- consider any irreducible $k[G]$-module $V$ of dimension $\geq 2$ and equip the trivial bundle $\mathcal{O}_Z \otimes_k V$ with the diagonal $G$-action. This $G$-equivariant bundle is $G$-stable, but not stable in the usual sense.
\end{remark}
We list some useful properties of the slope stability.
\begin{proposition}\label{prop_properties_stacky}\hfill
\begin{enumerate}
\item Under the hypothesis of Proposition~\ref{prop_stacky_equivariant_equivalence}, we have the following.\label{s:a}
\begin{enumerate}
\item $$\mu_{\mathfrak{X}, \text{\rm max}}(E) \coloneqq \frac{1}{|G|} \mu_{\text{\rm max}}(\mathcal{E})$$
is independent of the choice of the cover $Z \longrightarrow X$.\label{s:a1}
\item If $E$ is also $\mu_{\mathfrak{X}}$-semistable and $\text{\rm Hom}_{\text{\rm Vect}(\mathfrak{X})}(E,F)$ is non-trivial for some $F \in \text{\rm Vect}(\mathfrak{X})$, we have
$$\mu_{\mathfrak{X}}(E) \leq \mu_{\mathfrak{X}, \text{\rm max}}(F).$$\label{s:a2}
\item Suppose that $\mathfrak{X} = [Z/G]$. There is a unique Harder-Narasimhan filtration for $E$, and $\mu_{\mathfrak{X},\text{\rm max}}$ coincides with the slope $\mu_{\mathfrak{X}}$ of the maximal destabilizing sub-bundle. If $E$ is also $\mu_{[Z/G]}$-semistable, there is a unique socle for $E$.\label{s:a3}
\item If $L$ is a line bundle on $\mathfrak{X}$, the tensor product $E \otimes L$ is $\mu_{\mathfrak{X}}$-semistable if and only if $E$ is $\mu_{\mathfrak{X}}$-semistable.\label{s:a4}
\end{enumerate}
\item Let $f \colon \mathfrak{Y} \longrightarrow \mathfrak{X}$ be a cover of proper stacky curves. This necessarily induces a cover $f_0 \colon Y \longrightarrow X$ of the Coarse moduli curves. We have the following.\label{s:b}
\begin{enumerate}
\item $$\text{\rm deg}_{\mathfrak{Y}}(f^* E) = \text{\rm deg}(f_0) \text{\rm deg}_{\mathfrak{X}}(E).$$
The same holds for $\mu_{-}$ and $\mu_{-,\text{\rm max}}$.\label{s:b1}
\item $E$ is $\mu_{\mathfrak{X}}$-semistable (respectively, $\mu_{\mathfrak{X}}$-polystable) if and only if $f^*E \in \text{\rm Vect}(\mathfrak{Y})$ is $\mu_{\mathfrak{Y}}$-semistable (respectively, $\mu_{\mathfrak{Y}}$-polystable).\label{s:b2}
\item If $f^*E \in \text{\rm Vect}(\mathfrak{Y})$ is $\mu_{\mathfrak{Y}}$-stable, then $E$ is $\mu_{\mathfrak{X}}$-stable.\label{s:b3}
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
All of the above are easy consequences of Proposition~\ref{prop_stacky_equivariant_equivalence}, the previous definitions and the usual results for curves.
\end{proof}
In \eqref{s:a3} above, we needed the assumption that $\mathfrak{X}$ is the quotient stacky curve $[Z/G]$. For an arbitrary proper stacky curve, one can still construct a `maximal destabilizing sub-bundle' $E_1 \subseteq E$ or a `socle' $\mathfrak{S}(E_1)$ of $E$ using the slope $\mu_{\mathfrak{X}}$. But we do not know whether the corresponding $G$-equivariant bundles on $Z$ coincide with the maximal destabilizing sub-bundle $\mathcal{E}_1 \subseteq \mathcal{E}$ or the socle $\mathfrak{S}(\mathcal{E}_1)$ of $\mathcal{E}$. We will see that these concepts coincide for an orbifold curve.
\subsection{Slope Stability for Orbifold Curves}\label{sec_stability_f_o_c}
\emph{Throughout the section, any orbifold curve will be assumed to be connected and proper.}
An exposition on the divisors on an orbifold curve is given in \cite[Section~5.4]{VZB}. Due to the equivalences in Theorem~\ref{thm_equiv} and Theorem~\ref{thm_equiv_bundles}, we can define the notion of slope stability for bundles on an orbifold curve in a more intrinsic way, without the dependency of the equivariant set up as in Section~\ref{sec_stability_stacky_curve}. This allows us to obtain finer results and also to develop a Harder-Narasimhan filtration for bundles.
We briefly recall some important notions for our context. Let $\mathfrak{X} = (X,P)$ be a proper orbifold curve. Then for any point $x \in \mathfrak{X}(k) \cong X(k)$, the stabilizer group $G_x$ is equal to the Galois group $\text{\rm Gal} \left( P(x)/K_{X,x} \right)$, and the $P$-degree of the point $x$ is defined to be $\frac{1}{|G_x|} = \frac{1}{[P(x): K_{X,x}]}$. A (Weil) divisor is then a finite formal sum of $k$-points of $\mathfrak{X}$, and hence is an element of the free abelian group $\text{\rm Div}(X,P)$ generated by the set of $0$-dimensional closed $k$-substacks of $\mathfrak{X}$. The $P$-\textit{degree} $\text{\rm deg}_P(D)$ of a divisor $D = \sum\limits_x n_x x \in \text{\rm Div}(X,P)$ is defined linearly as
$$\text{\rm deg}_P(D) \coloneqq \sum\limits_x \frac{n_x}{[P(x) \colon K_{X,x}]}.$$
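As a quick illustration (with hypothetical numerical data chosen by us): if $D = 3x_1 + 2x_2$ where $[P(x_1) \colon K_{X,x_1}] = 2$ and $x_2 \notin \text{\rm Supp}(P)$, then
$$\text{\rm deg}_P(D) = \frac{3}{2} + \frac{2}{1} = \frac{7}{2}.$$
In particular, $\text{\rm deg}_P$ is rational-valued in general, and it agrees with the usual degree when the branch data $P$ is trivial.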
Linear equivalence is well defined for divisors on $(X,P)$ (\cite[Definition~5.4.2]{VZB}), and one can associate a line bundle $\mathcal{O}_{(X,P)}(D)$ to a divisor $D$ on $(X,P)$. The following result is of importance to us.
\begin{lemma}[{\cite[Lemma~5.4.5]{VZB}}]\label{lem4.2}
Every line bundle $L$ on $\mathfrak{X} = (X,P)$ is of the form $\mathcal{O}_{(X,P)}(D)$ for some divisor $D\in \text{\rm Div}(X,P)$. Moreover, $\mathcal{O}_{(X,P)}(D) \cong \mathcal{O}_{(X,P)}(D')$ if and only if $D$ and $D'$ are linearly equivalent.
\end{lemma}
\begin{definition}\label{def_deg_lb_DM}
Let $L$ be a line bundle on a proper orbifold curve $\mathfrak{X} = (X,P)$. Then $L \cong \mathcal{O}_{(X,P)}(D)$ for a divisor $D$ on $(X,P)$, unique up to a linear equivalence (Lemma~\ref{lem4.2}). Define the $P$-degree of $L$ to be
$$\text{\rm deg}_P(L) \coloneqq \text{\rm deg}_P(D).$$
\end{definition}
Now we introduce the notion of $P$-slope stability for any vector bundle on an orbifold curve $\mathfrak{X} = (X,P)$. We will write $E = (E_0, \Phi, \eta)$ for a vector bundle on $(X,P)$ where $E_0$ is the underlying bundle on $X$, $\Phi$ is the data of the action maps, and $\eta$ is the data of equivariant isomorphisms defining the compatible generically trivial actions (cf. \cite[Definition~4.4, Definition~4.5]{KM}).
The \textit{rank} of the bundle $E$ is defined to be the rank of the underlying vector bundle $E_0$ on $X$. To $E$, we naturally associate the \textit{determinant bundle} by
$$\text{\rm det} \left(E \right) \coloneqq (\wedge^n E_0, \wedge^n \Phi, \wedge^n \eta) \in \text{\rm Vect}(X,P)$$
where $n = \text{\rm rank}(E)$. Since $\wedge$ commutes with $\otimes$, we have a line bundle $\text{\rm det} \left(E \right) \in \text{\rm Vect}(X,P)$ whose underlying vector bundle is the usual determinant line bundle $\text{\rm det}(E_0)$ on $X$.
We define the \textit{degree} of $E$ with respect to $P$ to be
$$\text{\rm deg}_P(E) \coloneqq \text{\rm deg}_P\left( \text{\rm det} \left(E\right) \right).$$
The $P$-\textit{slope} of $E$, denoted by $\mu_P(E)$, is defined to be
$$\mu_P(E) \coloneqq \frac{\text{deg}_P(E)}{\text{rank} (E)}.$$
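For example (again with hypothetical data supplied for illustration): if $E$ has rank $2$ and $\text{\rm deg}_P(E) = \frac{7}{2}$, then
$$\mu_P(E) = \frac{7/2}{2} = \frac{7}{4};$$
in particular, unlike the classical case, the $P$-slope need not lie in $\frac{1}{\text{\rm rank}(E)}\mathbb{Z}$.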
We need to check that $\mu_P$ and $\mu_{\mathfrak{X}}$ coincide. First, we see how the degrees with respect to branch data behave for the pullback under a morphism of formal orbifold curves.
\begin{lemma}\label{lem_iota_invariance}
Let $P' \geq P$ be two branch data on a smooth projective connected $k$-curve $X$. The identity map on $X$ induces a morphism $\iota \colon (X,P') \longrightarrow (X,P)$ of orbifold curves. Consider the embedding $\iota^* \colon \text{\rm Vect}(X,P) \longrightarrow \text{\rm Vect}(X,P')$. Then $\text{\rm deg}_{P'}(\iota^*E) = \text{\rm deg}_{P}(E)$ for any $E \in \text{\rm Vect}(X,P)$.
\end{lemma}
\begin{proof}
By definition, it follows that for any $F \in \text{\rm Vect}(X,P)$, we have
$$\iota^*\text{\rm det}(F) = \text{\rm det}(\iota^*F).$$
The $P$-degree of any bundle $F \in \text{\rm Vect}(X,P)$ is the $P$-degree of the line bundle $\text{\rm det}(F)$, which in turn is defined to be the $P$-degree of a divisor associated to it. So it is enough to show that for any closed point $x \in X$ viewed as a divisor on $(X,P)$, we have
$$\text{\rm deg}_{P'} (\iota^* x) = \text{\rm deg}_P(x).$$
This is immediate from the definition of the divisor $\iota^* x$ as $[P'(x) \colon P(x)] x$ whose $P'$-degree is
$$ \frac{[P'(x) \colon P(x)]}{[P'(x) \colon K_{X,x}]} = \frac{1}{[P(x) \colon K_{X,x}]} = \text{\rm deg}_P(x).$$
\end{proof}
\begin{lemma}\label{lem_slope_finite_pullback}
Let $f \colon (Y,Q) \longrightarrow (X,P)$ be a morphism of proper orbifold curves. Then for any $E \in \text{\rm Vect}(X,P)$, we have
$$\text{\rm deg}_{Q}(f^*E) = \text{\rm deg}(f_0)\hspace{1mm}\text{\rm deg}_{P}(E)$$
where $f_0 \colon Y \longrightarrow X$ is the map induced on the Coarse moduli curves.
\end{lemma}
\begin{proof}
The morphism $f$ factors as $(Y,Q) \overset{\jmath} \longrightarrow (Y,f_0^*P) \longrightarrow (X,P)$. Using Lemma~\ref{lem_iota_invariance}, it is enough to consider $Q = f_0^*P$. Using the definition of the $P$-degree, it is again enough to show that for any closed point $x \in X$ viewed as a divisor on $(X,P)$, we have
$$\text{\rm deg}_{f_0^*P}(f^*x) = \text{\rm deg}(f_0) \hspace{1mm} \text{\rm deg}_P(x).$$
This follows since $f^*x = \sum\limits_{f_0(y)=x} [f_0^*P(y) \colon P(x)] y$ whose $f_0^*P$-degree is equal to
$$\sum\limits_{f_0(y)=x} \frac{[f_0^*P(y) \colon P(x)]}{[f_0^*P(y) \colon K_{X,x}]} = \sum\limits_{f_0(y)=x} \frac{1}{[P(x) \colon K_{X,x}]} = \text{\rm deg}(f_0) \hspace{1mm} \text{\rm deg}_P(x).$$
\end{proof}
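As a sanity check, the following example (constructed here for illustration) verifies the formula. Let $k$ have characteristic $\neq 2$, let $f_0 \colon \mathbb{P}^1_k \longrightarrow \mathbb{P}^1_k$ be the degree-$2$ cover $s \mapsto s^2 = t$, and let $P$ be the branch data on the target with $P(0) = K_{X,0}(t^{1/2})$ the tame degree-$2$ extension and $P$ trivial elsewhere. The point $0$ of the source $Y = \mathbb{P}^1_k$ is the only point above $0$, and $K_{Y,0} = K_{X,0}(t^{1/2}) = P(0)$, so $f_0^*P(0) = P(0) \cdot K_{Y,0} = K_{Y,0}$ is the trivial extension. Hence
$$f^*(0) = [f_0^*P(0) \colon P(0)] \cdot 0 = 1 \cdot 0, \hspace{.5cm} \text{\rm deg}_{f_0^*P}(f^*(0)) = 1 = 2 \cdot \frac{1}{2} = \text{\rm deg}(f_0) \hspace{1mm} \text{\rm deg}_P(0),$$
as predicted by the lemma.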
We have the following immediate consequence.
\begin{proposition}\label{prop_slopes_coincide}
Let $\mathfrak{X} = (X,P)$ be a connected projective orbifold curve. For any vector bundle $E \in \text{\rm Vect}(X,P)$, we have
$$\mu_{\mathfrak{X}}(E) = \mu_P(E).$$
\end{proposition}
\begin{proof}
Let $Q \geq P$ be a geometric branch data on $X$ such that there is a finite $G$-Galois \'{e}tale cover $g \colon Z \longrightarrow (X,Q)$ where $Z$ is a smooth projective connected $k$-curve. The $G$-Galois cover $f \colon Z \longrightarrow X$ of smooth projective connected $k$-curves factors as follows.
$$f \colon Z \overset{g} \longrightarrow \mathfrak{X}' \coloneqq (X,Q) \overset{\jmath} \longrightarrow \mathfrak{X} = (X,P) \overset{\iota} \longrightarrow X.$$
Now the result follows from Lemma~\ref{lem_iota_invariance}, Lemma~\ref{lem_slope_finite_pullback} and Definition~\ref{def_slope_stacky_curve}.
\end{proof}
Thus we can interchangeably talk about the stability conditions with respect to $\mu_P$ or $\mu_{\mathfrak{X}}$, and Proposition~\ref{prop_properties_stacky} remains valid with respect to $\mu_P$. One extra advantage of working with formal orbifold curves is that we can define the maximal destabilizing sub-bundle and socle even when the orbifold curve is not a quotient stack (see the discussion after Proposition~\ref{prop_properties_stacky}).
\emph{We will say that a bundle $E \in \text{\rm Vect}(X,P)$ is $P$-(semi)stable or $P$-polystable instead of $\mu_P$-(semi)stable or $\mu_P$-polystable.}
\begin{proposition}\label{prop_HN_socle}
Let $(X,P)$ be a connected projective formal orbifold curve, and $E \in \text{\rm Vect}(X,P)$.
\begin{enumerate}
\item (Harder-Narasimhan Filtration) There is a unique filtration
$$0 = \text{\rm HN}(E)_0 \subset \text{\rm HN}(E)_1 \subset \cdots \subset \text{\rm HN}(E)_l = E$$
such that $\text{\rm HN}(E)_i/\text{\rm HN}(E)_{i-1}$ are $P$-semistable and their slopes satisfy
$$\mu_{P,\text{\rm max}}(E) \coloneqq \mu_P(\text{\rm HN}(E)_1) > \cdots > \mu_P(E/\text{\rm HN}(E)_{l-1}).$$\label{h:1}
\item The `maximal destabilizing sub-bundle' $\text{\rm HN}(E)_1$ has the following property: for any sub-bundle $F \subseteq E$, we have $\mu_P(\text{\rm HN}(E)_1) \geq \mu_P(F)$; when $\mu_P(\text{\rm HN}(E)_1) = \mu_P(F)$, we have $F \subset \text{\rm HN}(E)_1$.\label{h:2}
\item If $E$ is $P$-semistable, there exists a filtration
$$0 = E^{(0)} \subset E^{(1)} \subset \cdots \subset E^{(l-1)} \subset E^{(l)}$$
such that $E^{(i)}/E^{(i-1)}$ are $P$-stable, having the same $P$-slope as $\mu_P(E)$.\label{h:3}
\item If there is a $G$-Galois cover $g \colon Z \longrightarrow (X,P)$, the pullback of the filtration in \eqref{h:1} is the unique Harder-Narasimhan filtration of $g^*(E)$, and $g^*(\text{\rm HN}(E)_1)$ is the maximal destabilizing sub-bundle of $g^*(E)$. When $E$ is also $P$-semistable, the pullback of the filtration in \eqref{h:3} is a Jordan-H\"{o}lder filtration. In particular, $g^*(\oplus_i E^{(i)}/E^{(i-1)})$ is the socle $\mathfrak{S}(g^*E)$ of the semistable $G$-bundle $g^*E$.\label{h:4}
\end{enumerate}
\end{proposition}
\begin{proof}
\eqref{h:1}--\eqref{h:3} are obtained as in the proof of \cite[Lemma~1.3.5, pg. 17 and Proposition~1.5.2, pg. 23]{HL} using the slope $\mu_P$. By Proposition~\ref{prop_slopes_coincide} and Proposition~\ref{prop_properties_stacky}~\eqref{s:a3}, the statement~\eqref{h:4} follows when the map $g$ is also \'{e}tale. So to prove \eqref{h:4}, it is enough to show that for branch data $P' \geq P$ on $X$ with induced morphism $\iota \colon (X,P') \longrightarrow (X,P)$ and $E \in \text{\rm Vect}(X,P)$, we have
$$\text{\rm HN}(\iota^*(E))_1 = \iota^* ( \text{\rm HN}(E)_1),$$
$$ \text{and for } P'{\text{-semistable }} E, \,\mathfrak{S}(\iota^*E) = \iota^* (\mathfrak{S}(E)).$$
We have a vector bundle inclusion
$$\iota^* ( \text{\rm HN}(E)_1) \subseteq \text{\rm HN}(\iota^*(E))_1 \subseteq \iota^*(E)$$
on $(X,P')$. In particular, $\text{\rm HN}(\iota^*(E))_1 = \iota^* F$ for some sub-bundle $F \subseteq E$ (see Remark~\ref{rmk_sub-bundle}). Then $\mu_{P'}(\text{\rm HN}(\iota^*(E))_1) = \mu_P(F) \leq \mu_P(\text{\rm HN}(E)_1)$. By the maximality of $\mu_P(\text{\rm HN}(E)_1)$, we have $\text{\rm HN}(E)_1 = F$. Thus $\iota^* (\text{\rm HN}(E)_1)$ is the maximal destabilizing sub-bundle of $\iota^*E$. Since the Harder-Narasimhan filtration is constructed inductively, we obtain the first equality.
By Remark~\ref{rmk_sub-bundle}, every vector sub-bundle of $\iota^*E$ is of the form $\iota^* F$ for some sub-bundle $F \subseteq E$. So each direct summand of $\mathfrak{S}(\iota^*E)$ is also a direct summand of $\iota^* \mathfrak{S}(E)$. Also, since $\iota^*$ preserves polystability (see Remark~\ref{rmk_iota_preserve}), we obtain the second equality.
\end{proof}
\begin{remark}\label{rmk_parabolic_slope_same_as_P_slope}
When $k = \mathbb{C}$, an orbifold curve is determined by a finite set $B \subset X$ of closed points and a positive integer $n_x$ for each $x \in B$. Let $D = \sum_{\substack{x \in B}} x \in \text{\rm Div}(X)$. By \cite[Proposition~5.15]{KM}, there is an equivalence of categories
\begin{equation}\label{eq_13}
\text{\rm Vect}(X,P) \overset{\sim} \longrightarrow \text{\rm Vect}_{\text{\rm par, rat}}(X,D)
\end{equation}
where $\text{\rm Vect}_{\text{\rm par, rat}}(X,D)$ is the category of parabolic vector bundles on $X$ with respect to the divisor $D$ and over each $x \in B$, the weights are of the form $a/n_x$, \, $0 \leq a < n_x$.
There exists a connected $G$-Galois cover $g \colon Z \longrightarrow X$ of smooth projective connected $k$-curves that is branched over the set $B$, and for each point $x \in B$, the integer $n_x$ divides the ramification index at any point $z \in g^{-1}(x)$. By \cite{B}, \cite{P}, for each parabolic vector bundle $V_* \in \text{\rm Vect}_{\text{\rm par, rat}}(X,D)$, there is a unique $G$-bundle $\hat{V} \in \text{\rm Vect}^G(Z)$, and
$$\mu(\hat{V}) = |G|\mu_{\text{\rm para}}(V_*)$$
where $\mu$ is the usual slope for vector bundles on $Z$ and $\mu_{\text{\rm para}}$ is the parabolic slope. Moreover, the association $V_* \mapsto \hat{V}$ preserves the respective slope stability.
Using Proposition~\ref{prop_stacky_equivariant_equivalence} and Proposition~\ref{prop_slopes_coincide}, we see that under the equivalence~\eqref{eq_13}, the parabolic slope is the same as $P$-slope, and parabolic slope stability conditions are the same as $P$-stability conditions.
\end{remark}
\section{A Genuinely Ramified Morphism}\label{sec_gen_ram}
Historically, the notion of a genuinely ramified morphism arises in the study of covers of the projective line and in the construction of the Hurwitz spaces. Recent study in \cite{BP} on the slope stability of a vector bundle on smooth curves under finite covers shows another importance of such morphisms. Our objective in this section is to extend the definition of a genuinely ramified morphism from smooth curves to stacky curves. Let us start by recalling the definition in the case of curves, following \cite{BP}.
Consider any non-trivial cover $f \colon Y \longrightarrow X$ of smooth projective connected $k$-curves. The pushforward sheaf $f_* \mathcal{O}_Y$ is a vector bundle on $X$ that is a semistable bundle if and only if $f$ is an \'{e}tale cover. The maximal destabilizing sub-bundle $\text{\rm HN}(f_*\mathcal{O}_Y)_1 \subset f_* \mathcal{O}_Y$ is a sheaf of algebras that is a semistable sub-bundle of degree $0$ containing $\mathcal{O}_X$ (\cite[Equation~(2.7), Lemma~2.4]{BP}). The cover $f$ is said to be \textit{genuinely ramified} if $\text{\rm HN}(f_*\mathcal{O}_Y)_1 = \mathcal{O}_X$. This is equivalent to the surjectivity of the homomorphism $f_* \colon \pi_1(Y) \longrightarrow \pi_1(X)$ between \'{e}tale fundamental groups induced by $f$. Other equivalent conditions are given in \cite[Proposition~2.6, Lemma~3.1]{BP}.
Any cover $f$ as above factors as a composition
$$f \colon Y \longrightarrow \hat{X} \coloneqq \underline{\text{\rm Spec}}\left( \text{\rm HN}(f_*\mathcal{O}_Y)_1 \right) \longrightarrow X$$
where $\hat{X} \longrightarrow X$ is the maximal \'{e}tale cover of $X$ via which the map $f$ factors. Moreover, the induced finite cover $Y \longrightarrow \hat{X}$ is genuinely ramified. One of the main results of \cite{BP} gives another important criterion of a map to be genuinely ramified in terms of the slope stability of vector bundles via the pullbacks under covers.
\begin{theorem}[{\cite[Theorem~5.3]{BP}}]
Let $f \colon Y \longrightarrow X$ be a cover of smooth projective connected $k$-curves. The map $f$ is genuinely ramified if and only if for every stable vector bundle $E$ on $X$, the pullback bundle $f^*E$ is stable on $Y$.
\end{theorem}
In the following, we establish equivalent conditions for certain covers of stacky curves, which will serve as the `genuinely ramified morphisms'. We will prove in the next section (see Proposition~\ref{prop_counter_eg} and Theorem~\ref{thm_main}) the result characterizing genuinely ramified covers as those preserving slope stability under pullback, generalizing the above theorem.
\begin{proposition}\label{prop_gen_ram_equivalences}
Let $\mathfrak{X} = (X,P)$ be a connected proper orbifold $k$-curve. Let $f \colon \mathfrak{Y} \longrightarrow (X,P)$ be a finite cover of connected proper stacky curves. The maximal destabilizing sub-bundle (cf. Proposition~\ref{prop_HN_socle}~\eqref{h:2}) $\text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \subset f_* \mathcal{O}_{\mathfrak{Y}}$ is a sheaf of $\mathcal{O}_{(X,P)}$-algebras, and it is a $P$-semistable vector bundle of $P$-degree $0$. Moreover, the following are equivalent for the finite cover $f \colon \mathfrak{Y} \longrightarrow (X,P)$.
\begin{enumerate}
\item $\text{\rm HN}(f_*\mathcal{O}_{\mathfrak{Y}})_1 = \mathcal{O}_{(X,P)}$.\label{equiv:1}
\item The map $f$ does not factor through any non-trivial \'{e}tale sub-cover.\label{equiv:2}
\item The homomorphism between \'{e}tale fundamental groups $f_* \colon \pi_1(\mathfrak{Y}) \longrightarrow \pi_1(\mathfrak{X})$ induced by $f$ is a surjection.\label{equiv:3}
\item The fiber product stacky curve $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$ is connected.\label{equiv:4}
\item $\text{\rm dim} \, H^0( \mathfrak{Y}, f^* f_* \mathcal{O}_{\mathfrak{Y}}) = 1$.\label{equiv:5}
\end{enumerate}
Finally, the above conditions imply that the finite cover $f_0$ induced on the Coarse moduli curves is a genuinely ramified morphism.
\end{proposition}
\begin{proof}
Let $Z \longrightarrow (X,P)$ be a $G$-Galois cover where $Z$ is a smooth projective connected $k$-curve. Choose a finite cover $Z' \longrightarrow \mathfrak{Y} \times_{(X,P)} Z$ where $Z'$ is a projective $k$-curve (since $\mathfrak{Y} \times_{(X,P)} Z$ is a stacky curve, such a cover always exists; see Remark~\ref{rmk_Chow}). Note that \cite[Lemma~2.2 and Lemma~2.4]{BP} are valid even when the source is a singular curve. Thus, under the finite cover $g \colon Z' \longrightarrow Z$, we conclude the following. The maximal destabilizing sub-sheaf of $g_* \mathcal{O}_{Z'}$ is a semistable $G$-equivariant bundle of degree $0$, and is a sheaf of $\mathcal{O}_Z$-algebras containing $\mathcal{O}_Z$. By definition, $f_*\mathcal{O}_{\mathfrak{Y}}$ corresponds to the $G$-equivariant vector bundle $g_*\mathcal{O}_{Z'}$. By Proposition~\ref{prop_HN_socle}\eqref{h:4} and Proposition~\ref{prop_properties_stacky}~\eqref{s:a1}, we conclude that $\text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \subset f_* \mathcal{O}_{\mathfrak{Y}}$ is a sheaf of $\mathcal{O}_{(X,P)}$-algebras, and is a $P$-semistable vector bundle of $P$-degree $0$.
The equivalence between \eqref{equiv:2} and \eqref{equiv:3} is a tautology from the formalism of Galois categories (for a detailed argument for varieties, see \cite[Theorem~2.4]{BDP}). The argument of \cite[Proposition~2.6]{BP}, used in our context, establishes the equivalence of \eqref{equiv:1} and \eqref{equiv:2}.
We show the equivalence \eqref{equiv:4}$\Leftrightarrow$\eqref{equiv:5}. By \cite[Proposition~13.1.9, pg. 122]{LMB} or \cite[Proposition~A.1.7.4]{Brochard}, we have an isomorphism
$$f^*f_* \mathcal{O}_{\mathfrak{Y}} \cong (p_1)_* p_2^* \mathcal{O}_{\mathfrak{Y}}$$
where $p_1$ and $p_2$ are the projection morphisms. As we have $p_2^* \mathcal{O}_{\mathfrak{Y}} = \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}$, we conclude
$$H^0(\mathfrak{Y}, f^* f_* \mathcal{O}_{\mathfrak{Y}}) = H^0(\mathfrak{Y}, (p_1)_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}) = H^0(\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}, \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}).$$
We note that $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$ is a reduced DM stack each of whose irreducible components is one-dimensional and generically an integral curve. The latter cohomology group can be computed using the Leray spectral sequence (see \cite[(11.6.2.2), pg. 237]{Olsson})
$$E_2^{p q} = H^p (S, R^q \pi_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}) \, \Rightarrow \, H^{p+q}(\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}, \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}})$$
where $\pi \colon \mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y} \longrightarrow S$ denotes the Coarse moduli map. In particular, $H^0(\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}, \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}) = H^0(S, \pi_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}) = H^0(S,\mathcal{O}_S)$ using the property of the Coarse moduli space. Since $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$ is connected if and only if $S$ is connected, the desired equivalence follows.
Now we show that \eqref{equiv:4}$\Rightarrow$\eqref{equiv:2}. Suppose that $f$ factors as a composition $\mathfrak{Y} \overset{g} \longrightarrow \mathfrak{X}' \overset{h} \longrightarrow \mathfrak{X}$ where $h$ is a non-trivial finite \'{e}tale cover. Then the fiber product $\mathfrak{X}' \times_{\mathfrak{X}} \mathfrak{X}'$ is disconnected as it contains $\mathfrak{X}'$ as a connected component (the diagonal morphism is an open imbedding) and $\mathfrak{X}' \times_{\mathfrak{X}} \mathfrak{X}' \longrightarrow \mathfrak{X}$ is an \'{e}tale cover of degree $> \text{\rm deg}(h)$. Since the finite morphism $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y} \longrightarrow \mathfrak{X}' \times_{\mathfrak{X}} \mathfrak{X}'$ is surjective, the stack $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$ is disconnected as well.
We will show \eqref{equiv:1}$\Rightarrow$\eqref{equiv:5}. We have
$$\text{\rm HN}(f^*f_* \mathcal{O}_{\mathfrak{Y}})_1 = f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 = f^* \mathcal{O}_{(X,P)} = \mathcal{O}_{\mathfrak{Y}}.$$
By Proposition~\ref{prop_properties_stacky}~\eqref{s:a2}, $\mu_{\mathfrak{Y}, \text{\rm max}} \left( f^*f_* \mathcal{O}_{\mathfrak{Y}} / f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \right) < 0$. So there is no non-zero homomorphism $\mathcal{O}_{\mathfrak{Y}} \longrightarrow f^*f_* \mathcal{O}_{\mathfrak{Y}}/f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1$, and consequently,
\begin{equation}\label{eq_no_section}
H^0\left(\mathfrak{Y}, f^*f_* \mathcal{O}_{\mathfrak{Y}}/f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \right) = 0.
\end{equation}
Now the implication follows from the long exact sequence of cohomologies associated to the following exact sequence of vector bundles on $\mathfrak{Y}$:
$$0 \longrightarrow f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \longrightarrow f^*f_* \mathcal{O}_{\mathfrak{Y}} \longrightarrow f^*f_* \mathcal{O}_{\mathfrak{Y}}/f^* \text{\rm HN}(f_* \mathcal{O}_{\mathfrak{Y}})_1 \longrightarrow 0.$$
To see the last statement, assume that $f_0$ is not genuinely ramified. Then there is a non-trivial \'{e}tale sub-cover $X' \longrightarrow X$ via which $f_0$ factors. Then $X' \times_X (X,P) \longrightarrow (X,P)$ is a non-trivial \'{e}tale sub-cover via which $f$ factors.
\end{proof}
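We record the simplest instance of the above equivalences, again in the classical setting of trivial branch data.
\begin{remark}
Assume ${\rm char}(k) \neq 2$, and let $f \colon Y \longrightarrow X$ be a degree two cover of smooth projective connected $k$-curves ramified over a non-empty branch divisor $B$. Then $f_*\mathcal{O}_Y \cong \mathcal{O}_X \oplus L$ with $L^{\otimes 2} \cong \mathcal{O}_X(-B)$, so that $\text{\rm deg}(L) = -\text{\rm deg}(B)/2 < 0$. Hence $\text{\rm HN}(f_*\mathcal{O}_Y)_1 = \mathcal{O}_X$, and $f$ is genuinely ramified. This is consistent with condition~\eqref{equiv:2}: a degree two cover cannot factor through any non-trivial intermediate cover.
\end{remark}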
\section{Slope Stability under a Pullback}\label{sec_main}
\subsection{Necessary Condition}\label{sec_counter_eg}
We show that if a cover from a stacky curve to an orbifold curve is not genuinely ramified, one can always construct an orbifold stable bundle whose pullback is not stable.
\begin{proposition}\label{prop_counter_eg}
Let $f \colon \mathfrak{Y} \longrightarrow (X,P)$ be a finite cover from a stacky curve to an orbifold curve. Assume that $f$ is not genuinely ramified. Then there is a $P$-stable vector bundle $E \in \text{\rm Vect}(X,P)$ such that the pullback bundle $f^*E \in \text{\rm Vect}(\mathfrak{Y})$ is not $\mu_{\mathfrak{Y}}$-stable.
\end{proposition}
\begin{proof}
First assume that $f$ is a non-trivial \'{e}tale cover and $(X,P)$ is geometric.
We first claim that $\mathfrak{Y}$ is the orbifold curve $(Y,f_0^*P)$ where $Y$ is the Coarse moduli curve for $\mathfrak{Y}$ and $f_0 \colon Y \longrightarrow X$ is the finite cover induced on the Coarse moduli curves by $f$. Consider a finite \'{e}tale cover $Z \longrightarrow (X,P)$ where $Z$ is a smooth projective connected $k$-curve. Then the projection morphism $\mathfrak{Y} \times_{(X,P)} Z \longrightarrow Z$ is a finite \'{e}tale cover. So the fiber product DM stack $\mathfrak{Y} \times_{(X,P)} Z$ is a smooth projective $k$-curve, and it is an atlas of $\mathfrak{Y}$. So $\mathfrak{Y}$ is a smooth stacky curve, i.e. an orbifold curve, and we have $\mathfrak{Y} = (Y,Q)$ for some branch data $Q$ on $Y$. Moreover, for any closed point $y \in Y$, we have (see \cite[Definition~2.5, Definition~2.6]{KP})
$$Q(y) = P(f_0(y)) \cdot K_{X,f_0(y)} = f_0^*P (y).$$
Thus $Q = f_0^*P$, proving the claim.
Now since $f$ is an \'{e}tale cover, we can find a Galois \'{e}tale cover $g \colon Z' \longrightarrow (X,P)$ that factors as a composition of an \'{e}tale cover $Z' \longrightarrow (Y,f_0^*P)$ followed by the \'{e}tale cover $f$. Suppose that $g$ is $G$-Galois for some finite group $G$. Then the cover $Z' \longrightarrow (Y,f_0^*P)$ is an $H$-Galois \'{e}tale cover for some subgroup $H \leq G$. Since $f$ is non-trivial, we have $H \neq G$. To conclude the result under the present hypothesis, it is enough to construct a $G$-stable bundle on $Z'$ that is not $H$-stable under the induced $H$-action. Consider an irreducible one-dimensional $k[H]$-module $V$. This defines a rank one trivial bundle $\mathcal{F} \coloneqq \mathcal{O}_{Z'} \otimes_k V$ with diagonal $H$-action on $Z'$ that is $H$-stable. Let $\{a_1, \ldots, a_s\}$ be a complete set of coset representatives of $G/H$; note that $s \geq 2$. Consider the $G$-equivariant bundle $\mathcal{E} \coloneqq \oplus_{1 \leq i \leq s} \mathcal{F}$ where the $G$-action is given by the $H$-action on $\mathcal{F}$ together with the transitive $G$-action on $G/H$. Then $\mathcal{E}$ is a semistable $G$-equivariant bundle of degree $0$ on $Z'$. Since any sub-bundle of $\mathcal{E}$ of degree $0$ is again a trivial bundle (a detailed argument is given in the proof of \cite[Theorem~2.4, after Equation~2.19]{BDP}), we conclude that $\mathcal{E}$ does not contain any proper $G$-equivariant sub-bundle of degree $0$. Thus, $\mathcal{E}$ is a $G$-stable vector bundle which is not $H$-stable.
Now we consider the general situation: $f$ factors as a composition
$$f \colon \mathfrak{Y} \overset{\beta} \longrightarrow \mathfrak{Y}' \overset{\alpha} \longrightarrow (X,P)$$
where $\alpha$ is a non-trivial \'{e}tale cover. As in the geometric case, we have $\mathfrak{Y}' = (Y,\alpha_0^*P)$, an orbifold curve. There is a maximal geometric branch data $Q$ with $P \geq Q$ such that $f_0$ is an essentially \'{e}tale cover of $(X,Q)$ (the branch data $Q$ constructed in \cite[Proposition~2.30]{KP} has this property). Then we also obtain an \'{e}tale cover $\alpha' \colon (Y,\alpha_0^* Q) \longrightarrow (X,Q)$, and the following diagram is commutative.
\begin{center}
\begin{tikzcd}
\mathfrak{Y} \arrow[r, "\beta"] & \mathfrak{Y}' = (Y,\alpha_0^*P) \arrow[r, "\alpha"] \arrow[d, "\jmath"] & (X,P) \arrow[d, "\iota"]\\
& (Y, \alpha_0^*Q) \arrow[r, "\alpha'"] & (X,Q)
\end{tikzcd}
\end{center}
Here $\iota$ and $\jmath$ are the morphisms induced by the respective identity maps on $X$ and $Y$. Since $(X,Q)$ is geometric, by the first part of the proof there is a $Q$-stable bundle $E$ on $(X,Q)$ such that $(\alpha')^*E$ is not $\alpha_0^*Q$-stable on $(Y,\alpha_0^*Q)$. By Remark~\ref{rmk_iota_preserve}, $\iota^*E$ is a $P$-stable bundle on $(X,P)$, but $\alpha^* \iota^* E = \jmath^* (\alpha')^* E$ is not $\alpha_0^*P$-stable on $(Y,\alpha_0^*P)$. Hence $f^*\iota^*E$ is not $\mu_{\mathfrak{Y}}$-stable on $\mathfrak{Y}$ by Proposition~\ref{prop_properties_stacky}~\eqref{s:b2}.
\end{proof}
\begin{remark}\label{rmk_etale}
We can achieve more using the proof of the above result: for any finite \'{e}tale cover $f \colon \mathfrak{Y} \longrightarrow (X,P)$, the vector bundle $f_* \mathcal{O}_{\mathfrak{Y}} \in \text{\rm Vect}(X,P)$ is a $P$-semistable vector bundle of $P$-degree zero.
We saw that $\mathfrak{Y} = (Y, f_0^*P)$, and for the maximal geometric branch data $Q$ satisfying $P \geq Q$ as in \cite[Proposition~2.30]{KP}, we have the following cartesian square.
\begin{center}
\begin{tikzcd}
(Y, f_0^*P) \arrow[r, "f"] \arrow[d, "\jmath"] \arrow[dr, phantom, "\square"] & (X,P) \arrow[d, "\iota"] \\
(Y, f_0^*Q) \arrow[r, "f'"] & (X,Q)
\end{tikzcd}
\end{center}
where $f'$ is an \'{e}tale cover. Moreover, there is a $G$-Galois \'{e}tale cover $g \colon Z \longrightarrow [Z/G] \cong (X,Q)$ from a smooth projective connected $k$-curve $Z$ that factors as a composition
$$g \colon Z \overset{H\text{-Galois, \'{e}tale}} \longrightarrow (Y,f_0^*Q) \overset{f'} \longrightarrow (X,Q),$$
for some subgroup $H \leq G$. Set $\tilde{Z}$ to be the fiber product stack $(Y, f_0^*Q) \times_{(X,Q)} Z$. Since the projection $p_2 \colon \tilde{Z} \longrightarrow Z$ is a finite \'{e}tale cover, $\tilde{Z}$ is a possibly disconnected smooth projective $k$-curve, and the projection map $\tilde{Z} \longrightarrow (Y, f_0^*Q)$ is a $G$-Galois \'{e}tale cover. We can calculate the pushforward bundle $f'_* \mathcal{O}_{(Y, f_0^*Q)}$ on $(X,Q)$ using the cover $p_2$. The structure sheaf on $(Y, f_0^*Q)$ corresponds to the $G$-equivariant bundle $\mathcal{O}_{\tilde{Z}}$, and the cover $p_2$ restricted to each connected component of $\tilde{Z}$ is a finite \'{e}tale cover of $Z$. So the $G$-equivariant bundle $(p_2)_* \mathcal{O}_{\tilde{Z}}$ on $Z$ is a direct sum of bundles, each summand of which is a semistable bundle of degree zero (\cite[Lemma~2.2]{BP}). By Proposition~\ref{prop_stacky_equivariant_equivalence} and Proposition~\ref{prop_properties_stacky}~\eqref{s:a1}, the vector bundle $f'_* \mathcal{O}_{(Y, f_0^*Q)}$ on $(X,Q)$ is $Q$-semistable of $Q$-degree $0$. Now using \cite[Proposition~13.1.9, pg. 122]{LMB} or \cite[Proposition~A.1.7.4]{Brochard}, we see that
$$ f_* \mathcal{O}_{(Y, f_0^*P)} \cong f_*\jmath^* \mathcal{O}_{(Y, f_0^*Q)} \cong \iota^* f'_* \mathcal{O}_{(Y, f_0^*Q)}$$
is $P$-semistable of $P$-degree zero.
\end{remark}
We have the following consequence of the above observation, similar to the case of curves.
\begin{corollary}\label{cor_maximal_etale}
Let $f \colon (Y,Q) \longrightarrow (X,P)$ be a finite cover of orbifold curves. Then $f$ factors as
$$f \colon (Y,Q) \overset{\hat{g}} \longrightarrow (\hat{X}, g_0^*P) \coloneqq \underline{\text{\rm Spec}}\left( \text{\rm HN}\left( f_* \mathcal{O}_{(Y,Q)} \right)_1 \right) \overset{g} \longrightarrow (X,P)$$
where $\hat{g}$ is a genuinely ramified cover and $g$ is the maximal \'{e}tale cover of $(X,P)$ via which $f$ factors.
\end{corollary}
\begin{proof}
We have observed that any finite \'{e}tale cover of an orbifold curve is an orbifold curve. By \cite[Proposition~2.42]{KP}, we have the existence of the factorization $(Y,Q) \longrightarrow (\hat{X}, g_0^*P) \longrightarrow (X,P)$. We need to prove that $\underline{\text{\rm Spec}}\left( \text{\rm HN}\left( f_* \mathcal{O}_{(Y,Q)} \right)_1 \right) = (\hat{X}, g_0^*P)$. By Remark~\ref{rmk_etale}, $g_* \mathcal{O}_{(\hat{X},g_0^*P)}$ is a $P$-semistable bundle of $P$-degree zero. By Proposition~\ref{prop_gen_ram_equivalences}, the maximal destabilizing sub-sheaf $\text{\rm HN}\left( f_* \mathcal{O}_{(Y,Q)} \right)_1$ is a $P$-semistable bundle of $P$-degree zero. By Proposition~\ref{prop_HN_socle}~\eqref{h:2}, we have $g_* \mathcal{O}_{(\hat{X},g_0^*P)} \subset \text{\rm HN}\left( f_* \mathcal{O}_{(Y,Q)} \right)_1$. Since $g$ is also maximal, this is an equality. By \cite[Theorem~10.2.4, pg. 212]{Olsson}, we have $(\hat{X},g_0^*P) = \underline{\text{\rm Spec}} \left( g_* \mathcal{O}_{(\hat{X},g_0^*P)} \right)$, and the result follows.
\end{proof}
\subsection{Pullback under a Genuinely Ramified Map}\label{sec_main-direction}
In this section, we establish that for a genuinely ramified morphism $f \colon \mathfrak{Y} \longrightarrow (X,P)$ from a stacky curve to an orbifold curve, the pullback of any $P$-stable bundle is $\mu_{\mathfrak{Y}}$-stable. Recall that a finite cover $f$ is called genuinely ramified if it induces a surjection on the \'{e}tale fundamental groups, or satisfies the equivalent conditions from Proposition~\ref{prop_gen_ram_equivalences}. We present a proof following the ideas developed in \cite{BP}. One of the key observations is that when $\mathfrak{Y} = (Y,Q)$ is a geometric orbifold curve and $f$ is a Galois genuinely ramified cover, the bundle $f^* \left( f_* \mathcal{O}_{\mathfrak{Y}} / \mathcal{O}_{(X,P)} \right)$ admits a filtration such that the successive quotients are line bundles of negative $Q$-degree. For the usual curve case, this follows from \cite[Proposition~5.13, pg. 76]{E2}.
\begin{proposition}\label{prop_inclusion}
Let $f \colon (Y,Q) \longrightarrow (X,P)$ be a $G$-Galois morphism, $|G| = d \geq 2$. Then we have an inclusion
$$f^* \left( f_* \mathcal{O}_{(Y,Q)} / \mathcal{O}_{(X,P)} \right) \subset \mathcal{O}^{\oplus (d-1)}_{(Y,Q)}$$
as coherent sheaves on $(Y,Q)$.
\end{proposition}
\begin{proof}
Set $\mathfrak{Y} \coloneqq (Y,Q)$ and $\mathfrak{X} \coloneqq (X,P)$. When $P$ and $Q$ are the trivial branch data, the statement is \cite[Proposition~3.2]{BP}.
We consider the fiber product DM stack $\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}$, and its normalization which is the formal orbifold fiber product (\cite[Proposition~2.14]{KP}), namely $(\widetilde{Y \times_X Y}, Q \times_P Q)$. We have the following commutative diagram.
\begin{center}
\begin{tikzcd}
\tilde{\mathfrak{Y}} \coloneqq (\widetilde{Y \times_X Y}, Q \times_P Q)
\arrow[drr, bend left, "h"]
\arrow[ddr, bend right, "\theta"]
\arrow[dr, "\nu"] & & \\
& \mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y} \arrow[dr, phantom, "\square"] \arrow[r, "p_{2}"] \arrow[d, "p_{1}"]
& \mathfrak{Y} = (Y,Q) \arrow[d, "f"] \\
& \mathfrak{Y} = (Y,Q) \arrow[r, "f"]
& \mathfrak{X} = (X,P)
\end{tikzcd}
\end{center}
We have $(p_1)_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}} \subset \theta_* \mathcal{O}_{\tilde{\mathfrak{Y}}} \cong \mathcal{O}_{\mathfrak{Y}} \otimes k[G]$. Under this inclusion, $\mathcal{O}_{\mathfrak{Y}} \subset (p_1)_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}$ is mapped isomorphically onto $\mathcal{O}_{\mathfrak{Y}} \subset \theta_* \mathcal{O}_{\tilde{\mathfrak{Y}}}$, which in turn is a line sub-bundle $L \cong \mathcal{O}_{\mathfrak{Y}}$ of $\mathcal{O}_{\mathfrak{Y}} \otimes k[G]$. Any subspace of $H^0(\mathfrak{Y}, \mathcal{O}_{\mathfrak{Y}} \otimes k[G]) = k[G]$ is a direct summand. So there exists a trivial sub-bundle $W \subset \mathcal{O}_{\mathfrak{Y}} \otimes k[G]$ such that $\mathcal{O}_{\mathfrak{Y}} \otimes k[G] = W \oplus L$. Thus we obtain an inclusion
$$(p_1)_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}/ \mathcal{O}_{\mathfrak{Y}} \subset W \cong \mathcal{O}_{\mathfrak{Y}}^{d-1}.$$
By the flat base change (\cite[Proposition~13.1.9, pg. 122]{LMB} or \cite[Proposition~A.1.7.4]{Brochard}), we have an isomorphism
$$f^* \left( f_* \mathcal{O}_{(Y,Q)} / \mathcal{O}_{(X,P)} \right) \cong f^*f_* \mathcal{O}_{(Y,Q)}/f^* \mathcal{O}_{(X,P)} \cong (p_1)_* \mathcal{O}_{\mathfrak{Y} \times_{\mathfrak{X}} \mathfrak{Y}}/ \mathcal{O}_{\mathfrak{Y}},$$
and the result follows.
\end{proof}
\begin{proposition}\label{prop_negative}
Let $f \colon (Y,Q) \longrightarrow (X,P)$ be a $G$-Galois genuinely ramified morphism, $|G| = d \geq 2$. Assume that $(Y,Q)$ is geometric. Set $E \coloneqq f^* \left( f_* \mathcal{O}_{(Y,Q)} / \mathcal{O}_{(X,P)} \right) \in \text{\rm Vect}(Y,Q)$. Then $E$ admits a filtration
$$E = E_1 \supset E_2 \supset \ldots \supset E_{d}=0$$
of sub-bundles such that each $E_i/E_{i+1}$ is a line bundle of negative $Q$-degree.
\end{proposition}
\begin{proof}
The proof is a stacky version of \cite[Proposition~5.13, pg. 76]{E2}. By Proposition~\ref{prop_inclusion}, we have an inclusion $\alpha \colon E \hookrightarrow \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}$. By Equation~\eqref{eq_no_section}, $H^0(\mathfrak{Y}, E) = 0$. Thus $\alpha$ cannot be an isomorphism. So $\text{\rm coker}(\alpha)$ is a torsion sheaf. Let $g \colon Z \longrightarrow (Y,Q)$ be a $\Gamma$-Galois \'{e}tale cover where $Z$ is a smooth projective connected $k$-curve. Then we have a $\Gamma$-equivariant inclusion in $\text{\rm Vect}^{\Gamma}(Z)$:
$$g^* \alpha \colon \mathcal{E} \coloneqq g^* E \hookrightarrow \mathcal{O}_Z^{\oplus (d-1)}.$$
So $\text{\rm coker}(g^* \alpha)$ is a $\Gamma$-equivariant torsion sheaf and has a finite $\Gamma$-invariant support. Let $z \in Z$ be in the support. Set $\Gamma z$ for the $\Gamma$-orbit of the point $z$. We can choose a $\Gamma$-equivariant epimorphism $\mathcal{O}_Z^{\oplus (d-1)}/\mathcal{E} \twoheadrightarrow \oplus_{z' \in \Gamma z} \mathcal{O}_{z'}$, where $\mathcal{O}_{z'}$ is the skyscraper sheaf at $z'$. Consider the coherent torsion sheaf $F \coloneqq g_*^{\Gamma}(\oplus_{z' \in \Gamma z} \mathcal{O}_{z'})$ on $\mathfrak{Y}$. Then we obtain an epimorphism $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}/E \twoheadrightarrow F$. Since $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}$ is generated by its global sections, the images of the global sections generate $F$. So the map $k^{d-1} = H^0(\mathfrak{Y}, \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}) \longrightarrow H^0(\mathfrak{Y}, F) = k$ is surjective, and its kernel has dimension $d-2$ (here the cohomologies are calculated using the Leray spectral sequence; see \cite[(11.6.2.2), pg. 237]{Olsson}). Since any subspace of $H^0(\mathfrak{Y}, \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)})$ generates a direct summand, we obtain a summand $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-2)}$ of $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}$ together with a map $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)} \longrightarrow \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}/E$ whose image is the torsion sheaf supported at the point $g(z)$. The map $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)} \longrightarrow F$ factors through the quotient $\mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}/ \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-2)} = \mathcal{O}_{\mathfrak{Y}}$ as in the following diagram.
\begin{center}
\begin{tikzcd}
& \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-2)} \arrow[d] \arrow[dr] & \\
E \arrow[r, hookrightarrow, "\alpha"] \arrow[rd, "\beta"] & \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)} \arrow[r, ->>] \arrow[d] & \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)}/E \arrow[d] \\
& \mathcal{O}_{\mathfrak{Y}} \arrow[r, ->>] & F
\end{tikzcd}
\end{center}
As the composite map $E \longrightarrow F$ is zero, the map $\beta \colon E \overset{\alpha} \hookrightarrow \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-1)} \longrightarrow \mathcal{O}_{\mathfrak{Y}}$ is not a surjection. So the ideal sheaf $L_1 = \beta(E) \subsetneq \mathcal{O}_{\mathfrak{Y}}$ defines a non-empty finite substack $\mathfrak{Y}' \subset \mathfrak{Y}$ corresponding to a $\Gamma$-invariant finite set of points on $Z$, and $\text{\rm deg}_Q(L_1) = - \text{\rm deg}_Q(\mathfrak{Y}') < 0$. Since $(Y,Q)$ is smooth, $L_1$ is a line bundle. Thus we obtain a surjection $E \twoheadrightarrow L_1$ with $\text{\rm deg}_Q(L_1) < 0$. The statement now follows by induction, noting that the kernel $E'$ of the above surjection satisfies $E' \subset \mathcal{O}_{\mathfrak{Y}}^{\oplus (d-2)}$ and $H^0(\mathfrak{Y}, E') = 0$.
\end{proof}
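The case $d = 2$ of the above proposition is already instructive.
\begin{remark}
Under the hypothesis of Proposition~\ref{prop_negative} with $d = 2$, the quotient $N \coloneqq f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)}$ is a line bundle on $(X,P)$. Since $f$ is genuinely ramified, we have $\text{\rm HN}(f_*\mathcal{O}_{(Y,Q)})_1 = \mathcal{O}_{(X,P)}$, so the Harder--Narasimhan property forces $\text{\rm deg}_P(N) < 0$. As the degree gets multiplied by $\text{\rm deg}(f) = 2$ under pullback, $E = f^*N$ is a line bundle with $\text{\rm deg}_Q(E) = 2 \, \text{\rm deg}_P(N) < 0$, and the filtration of the proposition reduces to $E = E_1 \supset E_2 = 0$.
\end{remark}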
We have the following easy consequence.
\begin{corollary}\label{cor_main}
Under the hypothesis of Proposition~\ref{prop_negative}, for any two $P$-semistable vector bundles $E, \, F \in \text{\rm Vect}(X,P)$ with $\mu_P(E) = \mu_P(F)$, we have
$$\text{\rm Hom}_{\text{\rm Vect}(Y,Q)}\left( f^*E, f^*F \right) = \text{\rm Hom}_{\text{\rm Vect}(X,P)}\left(E, F\right).$$
\end{corollary}
\begin{proof}
By \cite[Proposition~9.3.6, pg. 205]{Olsson} and \cite[Proposition~1.12]{Adjoint}, we have
\begin{equation}\label{eq_containment}
\begin{array}{rcl}
\Hom_{\text{\rm Vect}(Y,Q)}\left( f^* E, f^* F \right) & = & \Hom_{\text{\rm Vect}(X,P)}\left( E, f_* f^* F \right) \\
& = & \Hom_{\text{\rm Vect}(X,P)}\left( E, F \otimes_{\mathcal{O}_{(X,P)}} f_*\mathcal{O}_{(Y,Q)} \right).
\end{array}
\end{equation}
We claim that
$$\mu_{P,\text{\rm max}} \left( F \otimes_{\mathcal{O}_{(X,P)}} \left( f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right) \right) < \mu_{P}(F) = \mu_P(E).$$
By Proposition~\ref{prop_negative}, the bundle $f^* \left( f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right)$ on $(Y,Q)$ admits a filtration by sub-bundles whose successive quotients are line bundles $L_i$ of negative $Q$-degree. Tensoring with $f^*F$, we obtain a filtration $\{V_i\}$ of $f^*F \otimes_{\mathcal{O}_{(Y,Q)}} f^* \left( f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right)$ by sub-bundles with $V_i/V_{i+1} = f^*F \otimes L_i$. Then
$$\mu_{Q, \text{\rm max}} \left( f^*F \otimes_{\mathcal{O}_{(Y,Q)}} f^* \left( f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right) \right) \leq \text{\rm Max}_{i} \{ \mu_{Q, \text{\rm max}}\left( f^*F \otimes L_i \right)\} < \mu_{Q, \text{\rm max}}\left( f^*F \right).$$
From this, the claim follows. By Proposition~\ref{prop_properties_stacky}~\eqref{s:a2}, we have
$$\text{\rm Hom}_{\text{\rm Vect}(X,P)}\left( E, F \otimes_{\mathcal{O}_{(X,P)}} \left( f_*\mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right) \right) = 0.$$
Now the result follows from the exact sequence
\begin{eqnarray*}
0 \longrightarrow \text{\rm Hom}_{\text{\rm Vect}(X,P)}\left( E, F \right) \longrightarrow \text{\rm Hom}_{\text{\rm Vect}(X,P)}\left( E, F \otimes_{\mathcal{O}_{(X,P)}} f_*\mathcal{O}_{(Y,Q)} \right) \longrightarrow \\
\longrightarrow \text{\rm Hom}_{\text{\rm Vect}(X,P)}\left( E, F \otimes_{\mathcal{O}_{(X,P)}} \left( f_* \mathcal{O}_{(Y,Q)}/\mathcal{O}_{(X,P)} \right) \right) \longrightarrow 0.
\end{eqnarray*}
\end{proof}
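In particular, taking line bundles in the above corollary yields the following.
\begin{remark}
Under the hypothesis of Proposition~\ref{prop_negative}, the pullback $f^*$ is injective on isomorphism classes of line bundles of a fixed $P$-degree. Indeed, if $f^*E \cong f^*F$ for line bundles $E, F \in \text{\rm Vect}(X,P)$ with $\mu_P(E) = \mu_P(F)$, then Corollary~\ref{cor_main} produces a non-zero homomorphism $E \longrightarrow F$, and a non-zero homomorphism between stable bundles of the same slope is an isomorphism.
\end{remark}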
Now we are ready to prove that the slope stability is preserved under a genuinely ramified morphism.
\begin{theorem}\label{thm_main}
Let $f \colon \mathfrak{Y} \longrightarrow (X,P)$ be a finite genuinely ramified morphism from a stacky curve to an orbifold curve. For any $P$-stable bundle $E \in \text{\rm Vect}(X,P)$, the pullback bundle $f^*E \in \text{\rm Vect}(\mathfrak{Y})$ is $\mu_{\mathfrak{Y}}$-stable.
\end{theorem}
\begin{proof}
Suppose that $E \in \text{\rm Vect}(X,P)$ is a $P$-stable bundle. By Proposition~\ref{prop_properties_stacky}~\eqref{s:b2}, the vector bundle $f^*E$ on $\mathfrak{Y}$ is $\mu_{\mathfrak{Y}}$-polystable.
We denote the cover induced on the Coarse moduli curves by $f_0 \colon Y \longrightarrow X$. Let $\bar{f}_0 \colon \bar{Y} \longrightarrow X$ be its Galois closure. Let $\bar{Z} \longrightarrow \bar{Y}$ be an $A$-Galois cover (for some group $A$) of projective curves dominating the Coarse moduli morphism $\bar{Y} \times_Y \mathfrak{Y} \longrightarrow \bar{Y}$. Take the normalization $\tilde{Z}$ of $\bar{Z}$, and consider the quotient stack $[\tilde{Z}/A]$. Since $A$ acts on $\bar{Z}$ with generically trivial stabilizers, the same is true for the $A$-action on $\tilde{Z}$. As $[\tilde{Z}/A]$ admits $\bar{Y}$ as its Coarse moduli curve, it is a connected DM stack of dimension one, and hence a stacky curve. Moreover, since $\tilde{Z}$ is smooth, $[\tilde{Z}/A]$ is a geometric orbifold curve $(\bar{Y}, \bar{Q})$ for some branch data $\bar{Q}$ on $\bar{Y}$. Then $\bar{f}_0$ induces a morphism $\bar{f} \colon (\bar{Y}, \bar{Q}) \longrightarrow (X,P)$ that need not be genuinely ramified. By Corollary~\ref{cor_maximal_etale}, the Galois cover $\bar{f}$ factors as a composition of a genuinely ramified Galois cover
$$(\bar{Y},\bar{Q}) \overset{\hat{g}} \longrightarrow (\hat{X}, g_0^*P) = \underline{\text{\rm Spec}}\left( \text{\rm HN}(\bar{f}_* \mathcal{O}_{(\bar{Y}, \bar{Q})})_1 \right)$$
followed by the maximal \'{e}tale cover
$$g \colon (\hat{X}, g_0^*P) \longrightarrow (X,P);$$
by the maximality of $g$, the cover $g$ is Galois for some group $G$. We have the following commutative diagram. Set $\hat{P} \coloneqq g_0^*P$.
\begin{center}
\begin{tikzcd}
(\bar{Y},\bar{Q})
\arrow[drr, bend left, "\hat{g}"]
\arrow[ddr, bend right, "\hat{f}"]
\arrow[dr, "\nu"] & & \\
& \hat{\mathfrak{Y}} \coloneqq \mathfrak{Y}\times_{(X,P)} (\hat{X},g_0^*P) \arrow[dr, phantom, "\square"] \arrow[r, "p_2"] \arrow[d, "p_1"]
& (\hat{X},g_0^*P) = (\hat{X}, \hat{P}) \arrow[d, "g"] \\
& \mathfrak{Y} \arrow[r, "f"]
& (X,P)
\end{tikzcd}
\end{center}
Now, let $0 \neq S \subseteq f^*E$ be a $\mu_{\mathfrak{Y}}$-stable sub-bundle of $f^*E$ such that $\mu_{\mathfrak{Y}}(S) = \mu_{\mathfrak{Y}}(f^*E)$. We will show that $f^*E = S$. For this, we will first construct a sub-bundle $V \subseteq g^*E$ with $\mu_{\hat{P}}(V) = \mu_{\hat{P}}(g^*E)$ such that $p_2^*V = p_1^*S$. Using the fact that $p_1^*S \subseteq (f \circ p_1)^*E$ is a $G$-invariant inclusion, we will then descend the bundle $V$ to a sub-bundle of $E$ on $(X,P)$ with $P$-slope equal to $\mu_P(E)$, and this will conclude the proof.
First, taking the pullback under the Galois morphism $\hat{f} \colon (\bar{Y},\bar{Q}) \longrightarrow \mathfrak{Y}$, we obtain a sub-bundle $\hat{f}^* S \subseteq \hat{f}^* f^* E = \bar{f}^* E$. By Proposition~\ref{prop_properties_stacky}~\eqref{s:b1}, we have $\mu_{\bar{Q}}(\hat{f}^* S) = \mu_{\bar{Q}}(\bar{f}^* E)$. Since $S$ is a $\mu_{\mathfrak{Y}}$-stable bundle on $\mathfrak{Y}$ and $E$ is a $P$-stable bundle on $(X,P)$, both $\hat{f}^* S$ and $\bar{f}^*E$ are $\bar{Q}$-polystable by Proposition~\ref{prop_properties_stacky}~\eqref{s:b2}. Define the right ideal $\bar{\Theta}$ of the associative algebra $\text{\rm End}_{\text{\rm Vect}(\bar{Y},\bar{Q})}(\bar{f}^*E)$ by
$$\bar{\Theta} \coloneqq \{ \gamma \in \text{\rm End}_{\text{\rm Vect}(\bar{Y},\bar{Q})}(\bar{f}^*E) \hspace{.2cm} | \hspace{.2cm} \gamma(\bar{f}^* E) \subset \hat{f}^*S \}.$$
Since $\hat{f}^*S$ is a direct summand of $\bar{f}^*E$, the bundle $\hat{f}^* S$ coincides with the vector sub-bundle of $\bar{f}^*E$ generated by the images of the endomorphisms in $\bar{\Theta}$.
Similarly, via the pullback under the $G$-Galois \'{e}tale cover $p_1 \colon \hat{\mathfrak{Y}} \longrightarrow \mathfrak{Y}$, we obtain a sub-bundle $p_1^* S \subseteq p_1^* f^* E$ on $\hat{\mathfrak{Y}}$, both of which are $\mu_{\hat{\mathfrak{Y}}}$-polystable, with $\mu_{\hat{\mathfrak{Y}}}(p_1^*S) = \mu_{\hat{\mathfrak{Y}}}((f \circ p_1)^*E)$. Further, $p_1^*S$ is generated by the images of endomorphisms in the right ideal $\hat{\Theta} \subset \text{\rm End}_{\text{\rm Vect}(\hat{\mathfrak{Y}})}((f \circ p_1)^*E)$ defined as
$$\hat{\Theta} \coloneqq \{ \gamma' \in \text{\rm End}_{\text{\rm Vect}(\hat{\mathfrak{Y}})}((f \circ p_1)^*E) \hspace{.2cm} | \hspace{.2cm} \gamma'((f \circ p_1)^*E) \subset p_1^*S \}.$$
Applying Corollary~\ref{cor_main} to the genuinely ramified Galois map $\hat{g} \colon (\bar{Y},\bar{Q}) \longrightarrow (\hat{X},\hat{P})$, we obtain
\begin{equation}\label{eq_7}
\text{\rm End}_{\text{\rm Vect}(\bar{Y},\bar{Q})}(\bar{f}^*E) = \text{\rm End}_{\text{\rm Vect}(\hat{X},\hat{P})}(g^* E).
\end{equation}
As an element $\gamma \in \text{\rm End}_{\text{\rm Vect}(\hat{X},\hat{P})}(g^* E)$ is mapped to $\hat{g}^* \gamma \in \text{\rm End}_{\text{\rm Vect}(\bar{Y},\bar{Q})}(\bar{f}^*E)$ under $\hat{g}^*$, the associative algebra structures are preserved. Let $\hat{\Theta}' \subset \text{\rm End}_{\text{\rm Vect}(\hat{X},\hat{P})}(g^* E)$ be the right ideal corresponding to $\bar{\Theta}$ under this identification. Since $g^*E$ is a $\hat{P}$-polystable bundle on $(\hat{X}, \hat{P})$, the image of any endomorphism of it is a sub-bundle. Let $V$ be the sub-bundle of $g^* E$ generated by the images $\gamma(g^* E)$ for $\gamma \in \hat{\Theta}'$. Then we have $\hat{g}^* V = \hat{f}^*S$. Moreover, $V$ is again a $\hat{P}$-polystable bundle on $(\hat{X},\hat{P})$, and by Lemma~\ref{lem_slope_finite_pullback}, we have
$$\mu_{\hat{P}} \left( V \right) = \mu_{\hat{P}} \left( g^* E \right).$$
We have the following (note that Equation~\eqref{eq_containment} holds for any cover)
$$\text{\rm End}_{\text{\rm Vect}(\hat{X},\hat{P})}(g^* E) \subseteq \text{\rm End}_{\text{\rm Vect}(\hat{\mathfrak{Y}})}((g \circ p_2)^*E) \subseteq \text{\rm End}_{\text{\rm Vect}(\bar{Y},\bar{Q})}(\bar{f}^*E).$$
By Equation~\eqref{eq_7}, each of the above containments is an equality, and the associative algebra structures are preserved. Thus the ideal $\hat{\Theta}'$ maps onto the ideal $\hat{\Theta}$, which in turn maps onto the ideal $\bar{\Theta}$. Since $V$ is generated by the images of the endomorphisms in $\hat{\Theta}'$ and $p_1^*S$ is generated by the images of the endomorphisms in $\hat{\Theta}$, we have
\begin{equation}\label{eq_8}
p_1^*S = p_2^* V.
\end{equation}
Since $p_1$ is a $G$-Galois \'{e}tale cover, the injective morphism $p_1^*S \subseteq p_1^*f^* E$ is $G$-equivariant. By our construction, the injective morphism $V \subseteq g^*E$ is also $G$-equivariant.
For any atlas $U \longrightarrow (X,P)$, via pullback under $g$, we obtain an atlas $U' = (\hat{X}, \hat{P}) \times_{(X,P)} U$ of $(\hat{X}, \hat{P})$, and the map $U' \longrightarrow U$ is a $G$-Galois \'{e}tale map. The $G$-equivariant sub-bundle map $V \subseteq g^*E$ corresponds to a $G$-equivariant sub-bundle map on $U'$, and hence, by \'{e}tale descent, to a sub-bundle of $\mathcal{E}$ on $U$, where $\mathcal{E}$ is the equivariant bundle on $U$ corresponding to $E$. Since this happens for every atlas, in a compatible way, we obtain a sub-bundle $W \subseteq E$ on $(X,P)$ such that $g^*W = V$. Thus $f^*W = S$. By Lemma~\ref{lem_slope_finite_pullback}, we have
$$\mu_P(W) = \mu_P(E).$$
Since $E$ was assumed to be $P$-stable, we obtain $W = E$, and consequently, $S = f^* E$.
\end{proof}
\begin{remark}\label{rmk_condition}
In \cite{BKP}, the authors imposed conditions on the orbifold curve $(X,P)$ ensuring that the fiber product $Y \times_X (X,P)$ is again an orbifold curve, and obtained the above result in that case.
\end{remark}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 33
|
{"url":"https:\/\/www.trustudies.com\/question\/443\/a-guy-wire-attached-to-a-vertical-pol\/","text":"3 Tutor System\nStarting just at 265\/hour\n\n# A guy wire attached to a vertical pole of height 18 m is 24 m long and has a stake attached to the other end. How far from the base of the pole should the stake be driven so that the wire will be taut?\n\nGiven, a guy wire attached to a vertical pole of height 18 m is 24 m long and has a stake attached to the other end.\n\nLet AB be the pole and AC be the wire.\n\nBy Pythagoras theorem,\n\n$$AC^2 = AB^2 + BC^2$$\n$$24^2 = 18^2 + BC^2$$\n$$BC^2 = 576 \u2013 324$$\n$$BC^2 = 252$$\n$$BC = 6 \\sqrt{7}$$m\n\nTherefore, the distance from the base is $$6 \\sqrt{7}$$ m","date":"2023-03-22 02:39:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6913176774978638, \"perplexity\": 583.4220766354722}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296943749.68\/warc\/CC-MAIN-20230322020215-20230322050215-00298.warc.gz\"}"}
| null | null |
Datacenter/
Metaforic to Give Away Free Versions of MySQL, Apache
Versions will be protected by a lightweight edition of the company's MetaFortress anti-tampering solution
By Gladys Rama
Metaforic recently announced that it is giving free copies of MySQL and Apache to enterprises that request it. The copies will be secured with MetaFortress Open, a "lightweight" version of the company's anti-tampering solution.
MetaFortress Open is designed to protect the LAMP stack against attacks that target the weakest point in a server infrastructure, which is often its open source components.
MetaFortress Open guards against these attacks by planting integrity checks into the source code of an application -- checks that the company said can take many years to remove. If deployed, the solution can increase the time it takes for someone to hack into an application by months or years, the company said.
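As a rough illustration of the general idea (this is not Metaforic's actual technique, whose checks are embedded throughout the code and deliberately hard to locate and strip), a program can record a digest of its own critical code at build time and verify it at runtime. The names `critical_logic`, `EXPECTED`, and `verify_integrity` below are all invented for this toy Python sketch:

```python
import hashlib
import marshal

def critical_logic(x):
    # Stand-in for code worth protecting against tampering.
    return x * 2

# "Build time": record a digest of the function's compiled bytecode.
EXPECTED = hashlib.sha256(marshal.dumps(critical_logic.__code__)).hexdigest()

def verify_integrity(func, expected):
    """Return True if func's bytecode still matches the recorded digest."""
    digest = hashlib.sha256(marshal.dumps(func.__code__)).hexdigest()
    return digest == expected

# "Run time": refuse to proceed if the code has been altered.
if not verify_integrity(critical_logic, EXPECTED):
    raise RuntimeError("tampering detected")
```

A single check like this is trivial to remove; products in this space scatter many interlocking checks that also verify each other, which is what makes stripping them a months-long effort.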
"We are starting with the LAMP stack because there is a critical need for anti-tamper protection there, and it provides organizations with an opportunity to see MetaFortress in action in their environments," said Andrew McLennan, CEO of Metaforic, in a prepared statement. "By delivering MetaFortress Open protected versions of Apache and MySQL, we are showcasing the versatility of our unique anti-tamper and integrity checking technology."
For more information, go to http://www.metaforic.com
Gladys Rama is the senior site producer for Redmondmag.com, RCPmag.com and MCPmag.com.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,808
|
Q: How to get a table's TD offset()s with jQuery 1.5.2 (Drupal 7)? I need to find the correct .offset() position of the TD elements of a table within a Drupal 7-generated HTML site (jQuery_update installed). I use
$('#contenttable td').each(function(){
console.log($(this).offsetParent());
console.log($(this).offset().left);
});
within my
jQuery(function($) {
//$(document).ready(function(){
of my script.js to get the offset().left position, but the console always traces '0' for all TDs.
When I use the Safari Developer Console the output of
jQuery('#contenttable td:nth-child(2)').offset().left
is '1728', so it seems to work in principle. But why doesn't it work from within my DOCUMENT.READY?
The .offsetParent() is BODY, by the way... And changing the position of some of the parent DIVs to absolute or relative didn't change anything either.
THNX!
edit: it seems not to work for any element at all.
A: Unfortunately, jQuery.offset() does not work on hidden elements, because the browser doesn't render them at all. So if possible, you need to make the element visible in order to get its offset. According to jquery: get the offset of hidden element, you should be able to call .show() on the table/content, get the desired offset, then call .hide() on it. This show/hide should not be apparent to the user, as the page won't repaint in between, since both calls happen in the same execution event.
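A minimal sketch of that workaround (assuming `#contenttable` itself is the hidden element; adjust the selector to whatever is actually hidden in your markup):

```javascript
jQuery(function ($) {
  var $table = $('#contenttable');

  // Temporarily reveal the hidden table so the browser computes its layout...
  $table.show();

  // ...read the offsets while it is rendered...
  var offsets = $table.find('td').map(function () {
    return $(this).offset().left;
  }).get();

  // ...and hide it again. Both calls run in the same execution turn,
  // so the user never sees the table flash.
  $table.hide();

  console.log(offsets);
});
```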
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,288
|
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>About - kathyh</title>
<!-- Bootstrap Core CSS -->
<link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="vendor/font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
<link href='https://fonts.googleapis.com/css?family=Kaushan+Script' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Droid+Serif:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Roboto+Slab:400,100,300,700' rel='stylesheet' type='text/css'>
<!-- Theme CSS -->
<link href="css/agency.min.css" rel="stylesheet">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body id="page-top" class="index">
<!-- Navigation -->
<nav id="mainNav" class="navbar navbar-default navbar-custom navbar-fixed-top">
<div class="container">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span> Menu <i class="fa fa-bars"></i>
</button>
<a class="navbar-brand page-scroll" href="#page-top">About kathyh</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
<li class="hidden">
<a href="#page-top"></a>
</li>
<li>
<a class="page-scroll" href="#services">Services</a>
</li>
<li>
<a class="page-scroll" href="#portfolio">Portfolio</a>
</li>
<li>
<a class="page-scroll" href="#about">About</a>
</li>
<li>
<a class="page-scroll" href="#team">Team</a>
</li>
<li>
<a class="page-scroll" href="#contact">Contact</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
<!-- Header -->
<header>
<div class="container">
<div class="intro-text">
<div class="intro-lead-in">kathy herring hayashi</div>
<div class="intro-heading">Software Engineer</div>
<a href="#services" class="page-scroll btn btn-xl">Tell Me More</a>
</div>
</div>
</header>
<!-- Services Section -->
<section id="services">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Services</h2>
<h3 class="section-subheading text-muted">Technology Services</h3>
</div>
</div>
<div class="row text-center">
<div class="col-md-4">
<span class="fa-stack fa-4x">
<i class="fa fa-circle fa-stack-2x text-primary"></i>
<i class="fa fa-shopping-cart fa-stack-1x fa-inverse"></i>
</span>
<h4 class="service-heading">IEEE Women In Engineering</h4>
<p class="text-muted">Founded and Chair of the IEEE Women in Engineering San Diego Affinity Group</p>
</div>
<div class="col-md-4">
<span class="fa-stack fa-4x">
<i class="fa fa-circle fa-stack-2x text-primary"></i>
<i class="fa fa-laptop fa-stack-1x fa-inverse"></i>
</span>
<h4 class="service-heading">Presenter</h4>
<p class="text-muted">Presenter at many technical events</p>
</div>
<div class="col-md-4">
<span class="fa-stack fa-4x">
<i class="fa fa-circle fa-stack-2x text-primary"></i>
<i class="fa fa-lock fa-stack-1x fa-inverse"></i>
</span>
<h4 class="service-heading">Security</h4>
<p class="text-muted">Interest in IOT Security technologies</p>
</div>
</div>
</div>
</section>
<!-- Portfolio Grid Section -->
<section id="portfolio" class="bg-light-gray">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Portfolio</h2>
<h3 class="section-subheading text-muted">Lorem ipsum dolor sit amet consectetur.</h3>
</div>
</div>
<div class="row">
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal1" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/roundicons.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Round Icons</h4>
<p class="text-muted">Graphic Design</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal2" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/startup-framework.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Startup Framework</h4>
<p class="text-muted">Website Design</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal3" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/treehouse.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Treehouse</h4>
<p class="text-muted">Website Design</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal4" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/golden.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Golden</h4>
<p class="text-muted">Website Design</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal5" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/escape.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Escape</h4>
<p class="text-muted">Website Design</p>
</div>
</div>
<div class="col-md-4 col-sm-6 portfolio-item">
<a href="#portfolioModal6" class="portfolio-link" data-toggle="modal">
<div class="portfolio-hover">
<div class="portfolio-hover-content">
<i class="fa fa-plus fa-3x"></i>
</div>
</div>
<img src="img/portfolio/dreams.png" class="img-responsive" alt="">
</a>
<div class="portfolio-caption">
<h4>Dreams</h4>
<p class="text-muted">Website Design</p>
</div>
</div>
</div>
</div>
</section>
<!-- About Section -->
<section id="about">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">About</h2>
<h3 class="section-subheading text-muted">Lorem ipsum dolor sit amet consectetur.</h3>
</div>
</div>
<div class="row">
<div class="col-lg-12">
<ul class="timeline">
<li>
<div class="timeline-image">
<img class="img-circle img-responsive" src="img/about/1.jpg" alt="">
</div>
<div class="timeline-panel">
<div class="timeline-heading">
<h4>1982</h4>
<h4 class="subheading">Era of in-house CAD teams</h4>
</div>
<div class="timeline-body">
<p class="text-muted">Started out strong in the semiconductor industry. Project lead of R&D software teams before EDA companies were a thing. Developed a custom HDL before Cadence bought Gateway. Created an internal AI rules based synthesis tool just as Synopsys was formed. Ahead of our time. </p>
</div>
</div>
</li>
<li class="timeline-inverted">
<div class="timeline-image">
<img class="img-circle img-responsive" src="img/about/2.jpg" alt="">
</div>
<div class="timeline-panel">
<div class="timeline-heading">
<h4>1995</h4>
<h4 class="subheading">Growing EDA companies</h4>
</div>
<div class="timeline-body">
<p class="text-muted">Exciting times going through an acquisition by Cadence Design System. Our Graphical Layout Language was incorporated into a commercial product. Making a difference.</p>
</div>
</div>
</li>
<li>
<div class="timeline-image">
<img class="img-circle img-responsive" src="img/about/3.jpg" alt="">
</div>
<div class="timeline-panel">
<div class="timeline-heading">
<h4>1999</h4>
<h4 class="subheading">Up and Down and UP with a DOT COM</h4>
</div>
<div class="timeline-body">
<p class="text-muted">Just at the end of the DOT COM bubble joined a semiconductor startup that reported and provided detailed analytics dedicated to semiconductor yield. On the leadership team to take the company down to 50% of its size and then back up to break even. Startups can certainly make you focused and strong.</p>
</div>
</div>
</li>
<li class="timeline-inverted">
<div class="timeline-image">
<img class="img-circle img-responsive" src="img/about/4.jpg" alt="">
</div>
<div class="timeline-panel">
<div class="timeline-heading">
<h4>2010</h4>
<h4 class="subheading">Fabless Semiconductors</h4>
</div>
<div class="timeline-body">
<p class="text-muted">Still in semiconductors, but looking now from a corporate IT perspective. Amazing learning experience on a variety of emerging technology flows in global and cloud computing environments. So much to learn!</p>
</div>
</div>
</li>
<li>
<div class="timeline-image">
<img class="img-circle img-responsive" src="img/about/4.jpg" alt="">
</div>
<div class="timeline-panel">
<div class="timeline-heading">
<h4>Now</h4>
<h4 class="subheading">Looking Towards the Future</h4>
</div>
<div class="timeline-body">
<p class="text-muted">Continuing to look at the world from an Engineering/IT perspective and excited about working with all of the emerging technologies which continue to change the world that we live in. Good times.</p>
</div>
</div>
</li>
<li class="timeline-inverted">
<div class="timeline-image">
<h4>Having
<br>Some
<br>Fun!</h4>
</div>
</li>
</ul>
</div>
</div>
</div>
</section>
<!-- Team Section -->
<section id="team" class="bg-light-gray">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Our Amazing Team</h2>
<h3 class="section-subheading text-muted">Lorem ipsum dolor sit amet consectetur.</h3>
</div>
</div>
<div class="row">
<div class="col-sm-4">
<div class="team-member">
<img src="img/team/1.jpg" class="img-responsive img-circle" alt="">
<h4>Kay Garland</h4>
<p class="text-muted">Lead Designer</p>
<ul class="list-inline social-buttons">
<li><a href="#"><i class="fa fa-twitter"></i></a>
</li>
<li><a href="#"><i class="fa fa-facebook"></i></a>
</li>
<li><a href="#"><i class="fa fa-linkedin"></i></a>
</li>
</ul>
</div>
</div>
<div class="col-sm-4">
<div class="team-member">
<img src="img/team/2.jpg" class="img-responsive img-circle" alt="">
<h4>Larry Parker</h4>
<p class="text-muted">Lead Marketer</p>
<ul class="list-inline social-buttons">
<li><a href="#"><i class="fa fa-twitter"></i></a>
</li>
<li><a href="#"><i class="fa fa-facebook"></i></a>
</li>
<li><a href="#"><i class="fa fa-linkedin"></i></a>
</li>
</ul>
</div>
</div>
<div class="col-sm-4">
<div class="team-member">
<img src="img/team/3.jpg" class="img-responsive img-circle" alt="">
<h4>Diana Petersen</h4>
<p class="text-muted">Lead Developer</p>
<ul class="list-inline social-buttons">
<li><a href="#"><i class="fa fa-twitter"></i></a>
</li>
<li><a href="#"><i class="fa fa-facebook"></i></a>
</li>
<li><a href="#"><i class="fa fa-linkedin"></i></a>
</li>
</ul>
</div>
</div>
</div>
<div class="row">
<div class="col-lg-8 col-lg-offset-2 text-center">
<p class="large text-muted">Lorem ipsum dolor sit amet, consectetur adipisicing elit. Aut eaque, laboriosam veritatis, quos non quis ad perspiciatis, totam corporis ea, alias ut unde.</p>
</div>
</div>
</div>
</section>
<!-- Clients Aside -->
<aside class="clients">
<div class="container">
<div class="row">
<div class="col-md-3 col-sm-6">
<a href="#">
<img src="img/logos/envato.jpg" class="img-responsive img-centered" alt="">
</a>
</div>
<div class="col-md-3 col-sm-6">
<a href="#">
<img src="img/logos/designmodo.jpg" class="img-responsive img-centered" alt="">
</a>
</div>
<div class="col-md-3 col-sm-6">
<a href="#">
<img src="img/logos/themeforest.jpg" class="img-responsive img-centered" alt="">
</a>
</div>
<div class="col-md-3 col-sm-6">
<a href="#">
<img src="img/logos/creative-market.jpg" class="img-responsive img-centered" alt="">
</a>
</div>
</div>
</div>
</aside>
<!-- Contact Section -->
<section id="contact">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 class="section-heading">Contact Us</h2>
<h3 class="section-subheading text-muted">Lorem ipsum dolor sit amet consectetur.</h3>
</div>
</div>
<div class="row">
<div class="col-lg-12">
<form name="sentMessage" id="contactForm" novalidate>
<div class="row">
<div class="col-md-6">
<div class="form-group">
<input type="text" class="form-control" placeholder="Your Name *" id="name" required data-validation-required-message="Please enter your name.">
<p class="help-block text-danger"></p>
</div>
<div class="form-group">
<input type="email" class="form-control" placeholder="Your Email *" id="email" required data-validation-required-message="Please enter your email address.">
<p class="help-block text-danger"></p>
</div>
<div class="form-group">
<input type="tel" class="form-control" placeholder="Your Phone *" id="phone" required data-validation-required-message="Please enter your phone number.">
<p class="help-block text-danger"></p>
</div>
</div>
<div class="col-md-6">
<div class="form-group">
<textarea class="form-control" placeholder="Your Message *" id="message" required data-validation-required-message="Please enter a message."></textarea>
<p class="help-block text-danger"></p>
</div>
</div>
<div class="clearfix"></div>
<div class="col-lg-12 text-center">
<div id="success"></div>
<button type="submit" class="btn btn-xl">Send Message</button>
</div>
</div>
</form>
</div>
</div>
</div>
</section>
<footer>
<div class="container">
<div class="row">
<div class="col-md-4">
<span class="copyright">Copyright © Your Website 2016</span>
</div>
<div class="col-md-4">
<ul class="list-inline social-buttons">
<li><a href="#"><i class="fa fa-twitter"></i></a>
</li>
<li><a href="#"><i class="fa fa-facebook"></i></a>
</li>
<li><a href="#"><i class="fa fa-linkedin"></i></a>
</li>
</ul>
</div>
<div class="col-md-4">
<ul class="list-inline quicklinks">
<li><a href="#">Privacy Policy</a>
</li>
<li><a href="#">Terms of Use</a>
</li>
</ul>
</div>
</div>
</div>
</footer>
<!-- Portfolio Modals -->
<!-- Use the modals below to showcase details about your portfolio projects! -->
<!-- Portfolio Modal 1 -->
<div class="portfolio-modal modal fade" id="portfolioModal1" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Project Name</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/roundicons-free.png" alt="">
<p>Use this area to describe your project. Lorem ipsum dolor sit amet, consectetur adipisicing elit. Est blanditiis dolorem culpa incidunt minus dignissimos deserunt repellat aperiam quasi sunt officia expedita beatae cupiditate, maiores repudiandae, nostrum, reiciendis facere nemo!</p>
<p>
<strong>Want these icons in this portfolio item sample?</strong>You can download 60 of them for free, courtesy of <a href="https://getdpd.com/cart/hoplink/18076?referrer=bvbo4kax5k8ogc">RoundIcons.com</a>, or you can purchase the 1500 icon set <a href="https://getdpd.com/cart/hoplink/18076?referrer=bvbo4kax5k8ogc">here</a>.</p>
<ul class="list-inline">
<li>Date: July 2014</li>
<li>Client: Round Icons</li>
<li>Category: Graphic Design</li>
</ul>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 2 -->
<div class="portfolio-modal modal fade" id="portfolioModal2" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<h2>Project Heading</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/startup-framework-preview.png" alt="">
<p><a href="http://designmodo.com/startup/?u=787">Startup Framework</a> is a website builder for professionals. Startup Framework contains components and complex blocks (PSD+HTML Bootstrap themes and templates) which can easily be integrated into almost any design. All of these components are made in the same style, and can easily be integrated into projects, allowing you to create hundreds of solutions for your future projects.</p>
<p>You can preview Startup Framework <a href="http://designmodo.com/startup/?u=787">here</a>.</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 3 -->
<div class="portfolio-modal modal fade" id="portfolioModal3" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Project Name</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/treehouse-preview.png" alt="">
<p>Treehouse is a free PSD web template built by <a href="https://www.behance.net/MathavanJaya">Mathavan Jaya</a>. This is bright and spacious design perfect for people or startup companies looking to showcase their apps or other projects.</p>
<p>You can download the PSD template in this portfolio sample item at <a href="http://freebiesxpress.com/gallery/treehouse-free-psd-web-template/">FreebiesXpress.com</a>.</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 4 -->
<div class="portfolio-modal modal fade" id="portfolioModal4" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Project Name</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/golden-preview.png" alt="">
<p>Start Bootstrap's Agency theme is based on Golden, a free PSD website template built by <a href="https://www.behance.net/MathavanJaya">Mathavan Jaya</a>. Golden is a modern and clean one page web template that was made exclusively for Best PSD Freebies. This template has a great portfolio, timeline, and meet your team sections that can be easily modified to fit your needs.</p>
<p>You can download the PSD template in this portfolio sample item at <a href="http://freebiesxpress.com/gallery/golden-free-one-page-web-template/">FreebiesXpress.com</a>.</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 5 -->
<div class="portfolio-modal modal fade" id="portfolioModal5" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Project Name</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/escape-preview.png" alt="">
<p>Escape is a free PSD web template built by <a href="https://www.behance.net/MathavanJaya">Mathavan Jaya</a>. Escape is a one page web template that was designed with agencies in mind. This template is ideal for those looking for a simple one page solution to describe your business and offer your services.</p>
<p>You can download the PSD template in this portfolio sample item at <a href="http://freebiesxpress.com/gallery/escape-one-page-psd-web-template/">FreebiesXpress.com</a>.</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Portfolio Modal 6 -->
<div class="portfolio-modal modal fade" id="portfolioModal6" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="close-modal" data-dismiss="modal">
<div class="lr">
<div class="rl">
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2">
<div class="modal-body">
<!-- Project Details Go Here -->
<h2>Project Name</h2>
<p class="item-intro text-muted">Lorem ipsum dolor sit amet consectetur.</p>
<img class="img-responsive img-centered" src="img/portfolio/dreams-preview.png" alt="">
<p>Dreams is a free PSD web template built by <a href="https://www.behance.net/MathavanJaya">Mathavan Jaya</a>. Dreams is a modern one page web template designed for almost any purpose. It's a beautiful template that's designed with the Bootstrap framework in mind.</p>
<p>You can download the PSD template in this portfolio sample item at <a href="http://freebiesxpress.com/gallery/dreams-free-one-page-web-template/">FreebiesXpress.com</a>.</p>
<button type="button" class="btn btn-primary" data-dismiss="modal"><i class="fa fa-times"></i> Close Project</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- jQuery -->
<script src="vendor/jquery/jquery.min.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="vendor/bootstrap/js/bootstrap.min.js"></script>
<!-- Plugin JavaScript -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js"></script>
<!-- Contact Form JavaScript -->
<script src="js/jqBootstrapValidation.js"></script>
<script src="js/contact_me.js"></script>
<!-- Theme JavaScript -->
<script src="js/agency.min.js"></script>
</body>
</html>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 623
|
Google Cloud Opens New Region in Toronto
Cloud, Data Center
ITS SECOND CLOUD REGION IN CANADA
Google Cloud has just announced the opening of its new data center region in Toronto, Canada, offering its Google Cloud Platform (GCP) products, including Compute Engine, App Engine, Google Kubernetes Engine, Bigtable, Spanner, and BigQuery.
The new Toronto cloud region joins Google Cloud's flagship Canadian facility in Montreal, which opened in 2018, making it the company's second cloud region in Canada. The new facility also joins 27 existing interconnected Google Cloud regions across the globe.
"We're working to bring you new cloud products and capabilities in Canada, and our goal is to allow you to access those services quickly and easily—wherever you might be in the country. The past year has proved how important easy access to digital infrastructure, technical education, training and support are to helping businesses respond to the pandemic," said Jim Lambe, Managing Director, Canada, Google Cloud.
"We're particularly proud of the teams who faced the unique challenges of building a cloud region during this time to help our customers and community accelerate their digital transformation. To support all of our users, customers and government organizations in Canada, we'll continue to invest in new infrastructure, engineering support and solutions."
The new Toronto cloud region – which features three availability zones – is expected to support businesses in continuity planning by providing the distributed infrastructure needed to meet IT and business requirements for disaster recovery and data sovereignty.
Two months ago, Google Cloud announced the opening of its second cloud region in India, in the Delhi National Capital Region (NCR), which is expected to serve customers and the public sector in the country and across Asia Pacific. With its first Indian cloud region located in Mumbai, both of Google Cloud's India regions feature three availability zones.
In April, Google Cloud also opened a new region in Warsaw, Poland, fulfilling a plan announced in September 2019 that included a partnership with Poland's Domestic Cloud Provider (DCP) to resell Google Cloud services in Poland and build managed-services capabilities around Google Cloud.
Patos (Albania), a town in the Fier District of the Albanian prefecture of the same name
Patos (municipality in Brazil), a municipality in the Brazilian state of Paraíba
Patos (microregion), a microregion in the Brazilian state of Paraíba
By [email protected]_84 8 months ago
Most days, Kelly Ball buzzes around her Lakewood bakery, watching after her staff – she can be spotted with short, brown, curly hair and tortoise blue glasses. A personalized gallery lines the wall with mementos from her travels to places like Serbia and Japan. A warm, colorful lamp hangs above the cash register.
Ball, a long-time lover of baking and cooking, opened Leila Bakery & Cafe two and a half years ago. The European-inspired eatery sells an assortment of quiches, kolaches, muffins and more.
Now settled into her career, Ball has reflected on the journey that brought her to where she is. Ball said the struggles she has endured — including a miscarriage and divorce — brought greater clarity to her purpose in life and the direction she wanted to take in her career.
After years of growth and maturing, Ball uses her own life lessons to create a healthier workplace environment for her staff.
"I say laughingly [that] this came about," she said, "when my life fell apart."
When Ball was a child, growing up in Texas and Louisiana, she found freedom in the kitchen. She was particular about her food and what ingredients went into her meals, but she trusted herself to cook something good — plus, she got to dye her scrambled eggs purple for breakfast.
"I cooked for myself pretty early on and in college and into my adult life," she said. "It was my creative outlet or even stress relief."
Tyler Hollandsworth takes fresh quiche out of the oven at Leila Bakery & Cafe in Dallas, Texas on Saturday, June 11, 2022.(Emil Lippe / Special Contributor)
She didn't plan to make a career out of cooking or baking. In 2007, she graduated from Northwestern University with her undergraduate degree in sociology and cognitive science and then moved to Dallas with her then-husband. She went back to school to take post-baccalaureate classes at the University of Texas Dallas for a pre-med degree.
Around this time, Ball learned she was pregnant.
"I was terrified for about a week, and then I was over the moon," she said.
Ball found out in the second trimester of her pregnancy she had suffered a missed miscarriage, which means the baby either dies or does not develop in the parent's womb, but the parent doesn't have any physical symptoms.
"Miscarriage is devastating, no matter when you learn of it," Ball said. "If you're wanting a child, it's heartbreaking."
Ball and her husband separated around the time their baby would have been due. She tried another semester of pre-med classes, but she said it was too difficult to keep up with the classwork while she was struggling with her mental health.
She eventually found herself working at the Legal Grounds coffee shop, where Ball said she was initially "ashamed" and "lost" because her life wasn't shaping up the way she expected it.
"I had all these expectations of what I thought would bring me happiness that involved being married and having kids and having a good job," she said. "All of those things just fell out from under me at the same time."
But a turning point for Ball came in early 2012, when she met a group of friends she said inspired her with not only how much they pursued their business, but also how much they pursued their hobbies, which she said are just as seriously a part of themselves as their careers.
Composed of musicians, photographers and business owners, this circle of friends opened her eyes to new possibilities for her career, she said. Among them was Nikola Olic, a Dallas-based architectural photographer, whom she has been with for nearly 10 years.
"That was when I started to think differently about what is possible with life," she said. "It's OK to pursue something that you enjoy just because you like it."
So, Ball began wholesale baking and selling her goods at farmers markets while catering on the side. Ball eventually opened her storefront in early 2020 and named it Leila Bakery & Cafe after Nikola's aunt, who lives in Serbia and gifted the lamp that hangs above the register.
Ball is set to open another location of Leila Bakery & Cafe this fall in the White Rock Center. She said the bakery experienced staffing instability in the early winter and spring of last year, and that she was so overwhelmed with her workload that she wanted to sell her business.
Even though the cafe is still experiencing some staffing instability, Ball said she has learned to better manage her staff and delegate responsibilities, which included hiring a manager to take on tasks.
"It was such a huge lesson to have a really strong manager that I can rely on," she said. "It's been a really tough journey, and learning to delegate and rely on people … and be a good manager has been a huge part of this being able to survive."
Kelly Ball converses with a customer at Leila Bakery & Cafe in Dallas, Texas on Saturday, June 11, 2022.(Emil Lippe / Special Contributor)
Ball has been open with her employees about her journey with mental health, and she works to respect and accommodate their mental well-being.
"They feel comfortable sharing their stories with me, and we try to make schedules that allow people to have a healthy lifestyle," Ball said. "There are definitely times when we have overtime, but I try to work with my employees to keep reasonable schedules in place. That's a value of mine. "
On a recent Saturday afternoon, the storefront had music playing in the background, with an occasional staff member dancing and the customers coming through the door. The front-of-house manager, Tyler Hollandsworth, handed out stickers to a child. Ball embraced a cook who had just gone into labor and was heading out the door.
Hollandsworth has worked at the bakery since December. She said it's her favorite industry job she's had, and the care for mental health is "totally different" from her previous jobs.
There are about 20 people on staff, and Hollandsworth said there is an "environment of love and acceptance" at the bakery, where she has been able to make real friendships. Hollandsworth said she loves that it's women-owned and operated.
"We're all very respectful [and] willing to work with any of our employees on making sure that they feel comfortable," she said. "I am constantly talking with the other front of house workers and making sure that they feel valued, that their ideas are heard and that they know their worth in the company."
Rahim Quazi, a full-time musician and songwriter, met Ball about 10 years ago. He's a part of her friend group, which Quazi said is full of creatives who are always supporting one another, whether that be attending a friend's concert or going to a gallery night.
When Ball used to sell her goods at the Good Local Markets, Quazi said it became a "waterhole" for the friend group, and he would take his son with him every other weekend.
"It kind of became our breakfast hook-up with all our friends," he said. "And then if she ever was in a bind or needed help, one of us was there."
Quazi said he watched Ball's business grow little by little over the years, and he makes regular visits to the storefront for baked goods – not only because she's his friend, Quazi said, but because the food is good, too.
"I'm always kind of blown away by her," Quazi said. "She does the same for me, and she comes out to my shows tirelessly and has fun, and I can look at her for a smile."
Ball said in the past decade she has matured and grown in her personal life. She now has a better understanding of what she wants in life and how she wants to treat other people, which she attributes to Nikola's help.
"I have a completely different understanding of what it means to love and to be loved," she said, "and I feel like I am much healthier now than I used to be."
<div class="container">
<div class="page-title">Vendor Locations</div>
<div style="margin-top: -5px;">
<span class="info-msg">
                Total Locations: {{vm.childVendors.length}}
</span>
<div class="list-group">
<div ng-repeat="vendor in vm.childVendors" class="list-group-item {{vendor.itemColor}}">
<div class="list-group-div-text-block">
<h5 class="list-group-item-heading">{{vendor.name}}
<span class="badge background-{{vendor.badgeColor}}">{{vendor.active}}</span>
</h5>
<h6 class="list-group-item-text">
<span>Pin: <strong>{{vendor.pin}}</strong></span>
<span class="left-margin-20">
Phone: <strong>{{vendor.phone | hideChar:"|"}}</strong>
<a href="tel:{{vendor.phone | hideChar:'|'}}">
<img class="view-details-item-img" alt="" src="images/call.png"/>
</a>
</span>
</h6>
</div>
<div class="list-group-div-btn-block" ng-click="vm.onVendorDetailsClick(vendor.vendId)">
<img class="" alt="" src="images/r_arrow.png"/>
</div>
</div>
</div>
</div>
<save-back-buttons submit-label="Add Location" cancel-label="Back"
on-submit-click="vm.onAddLocClick()"
on-cancel-click="vm.onCancelClick()"></save-back-buttons>
</div>
\section{Introduction}
\label{sec:intro}
\begin{figure}[t]
\centering
%
\includegraphics[width=0.69\linewidth]{figs/teaser/teaser_helper} \\[-1ex]
%
%
\begin{tabular}{@{\hskip 0mm}l@{\hskip 0mm}l@{\hskip 0mm}l}
\raisebox{-.5\height}{%
\includegraphics[width=0.47\linewidth]{figs/teaser/teaser_coco/elephant_beta0_28_1.png}} & \dots & %
\raisebox{-.5\height}{%
\includegraphics[width=0.47\linewidth]{figs/teaser/teaser_coco/elephant_beta256_27_0.png}} \\[-3.2ex]
{\color{white} \,28.1dB}
&&
{\color{white} \,27.0dB}\\[-1ex]
\end{tabular}
%
%
%
%
%
\caption{\label{fig:teaser}
Decoding two reconstructions \emph{from the same representation} $\hat y$,
which takes 2345 bytes to store: We use a low realism weight $\beta=0$ for the left reconstruction, and a high $\beta=2.56$ for the right.
Note that increasing $\beta$ leads to a much sharper reconstruction, but the PSNR drops by 1.1dB, consistent with rate-distortion-realism theory~\cite{blau2019rethinking}.
We only show two reconstructions, but our generator $G$ can produce any reconstruction in between
by changing $\beta$.
This allows the user to decide between viewing a reconstruction that is close to the input (left, \ie, high PSNR), or that looks realistic (right).
%
}
\end{figure}
\begin{figure*}
\centering
{
\footnotesize
\begin{tabular}{@{\hskip 0mm}l@{\hskip 1mm}l@{\hskip 0.1mm}l@{\hskip 1mm}l@{\hskip 0mm}}
\toprule
\multicolumn{1}{c}{Input} &
\multicolumn{2}{c}{High-Realism} &
\multicolumn{1}{c}{Low-Distortion} %
\\\midrule
{\scriptsize Kodak: \texttt{kodim20}}
& Ours $\beta{=}2.56$, 0.12bpp, 31.3dB
& HiFiC, 0.12bpp, 29.3dB
& Ours $\beta{=}0$, 0.12bpp, 32.4dB \\
\includegraphics[width=0.245\linewidth]{figs/visual/v1/airplane_input.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/airplane_img2.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/airplane_img3.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/airplane_img4.jpg} \\
{\scriptsize CLIC 2020: \texttt{3f273}}
& Ours $\beta{=}2.56$, 0.085bpp, 32.6dB
& HiFiC, 0.082bpp, 30.5dB
& Ours $\beta{=}0$, 0.085bpp, 33.6dB \\
\includegraphics[width=0.245\linewidth]{figs/visual/v1/sneaker_input.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/sneaker_img2.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/sneaker_img3.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/sneaker_img4.jpg} \\
{\scriptsize CLIC 2020: \texttt{88c58}}
& Ours $\beta{=}2.56$, 0.048bpp, 32.3dB
& HiFiC, 0.092bpp {\color{bppgapcol}($\mathbf{1.92{\boldsymbol\times}}$)}, 33.2dB
& Ours $\beta{=}0$, 0.048bpp, 33.7dB \\
\includegraphics[width=0.245\linewidth]{figs/visual/v1/hair_input.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/hair_img2.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/hair_img3.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/hair_img4.jpg} \\
{\scriptsize MS COCO 30K: \texttt{45962}}
& Ours $\beta{=}2.56$, 0.090bpp, 31.0dB
& HiFiC, 0.17bpp {\color{bppgapcol}($\mathbf{1.86{\boldsymbol\times}}$)}, 32.4dB
& Ours $\beta{=}0$, 0.090bpp, 31.9dB \\
\includegraphics[width=0.245\linewidth]{figs/visual/v1/banana_input.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/banana_img2.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/banana_img3.jpg} &
\includegraphics[width=0.245\linewidth]{figs/visual/v1/banana_img4.jpg} \\[-2ex]
\end{tabular}
}
\caption{\label{fig:visualcomp2}Comparing
input images to reconstructions from our model at $\beta{=}2.56$, the generative state-of-the-art HiFiC, as well as our model at $\beta{=}0$.
Note that both our models always have the same bits-per-pixel (bpp) per row, since for each row, the two reconstructions we show are obtained from the \emph{same} representation---we simply vary $\beta$ for the generator.
Overall, we see how our high-realism reconstructions ($\beta{=}2.56$) closely match the input, more so than HiFiC.
On the airplane (first row), we can read the text in our reconstruction, in contrast to the one from HiFiC.
In the second row, the texture of the sneaker is faithfully preserved.
For the hair, we note that HiFiC uses {\color{bppgapcol}$\mathbf{1.92{\boldsymbol\times}}$} the bitrate of our model to achieve a similar reconstruction. In the last row, HiFiC uses
{\color{bppgapcol}$\mathbf{1.86{\boldsymbol\times}}$} the rate.
In the first two rows, where we have comparable bpp to HiFiC, both of our reconstructions have higher PSNR.
In the rightmost column ($\beta=0$) we can see the low-distortion reconstructions of our model. There we have near state-of-the-art PSNR at the cost of losing the (synthetic) detail.%
}
\end{figure*}
Lossy image compression considers the trade-off between the number of bits used to store an input image and how close the reconstruction (that we obtain from the bits) is to that input image.
As we use more bits, we will be able to get closer to the input.
This idea is formalized in the fundamental rate-distortion trade-off~\cite{shannon1959coding}, where ``rate'' stands for bit-rate, and ``distortion'' is formalized as a pair-wise metric between the input image and the reconstruction (\eg, the mean-squared error, MSE).
While minimizing this trade-off has been the focus of many works starting from JPEG~\cite{jpeg1992wallace} all the way to recent neural~\cite{he2022elic} and non-neural~\cite{vtm17} codecs,
there has been a surge of interest in additionally considering the ``realism'' or ``perceptual quality'' of the reconstructions~\cite{blau2019rethinking,tschannen2018deep,theis2021coding,theis2021advantages,mentzer2020high,po-elic,yang2021perceptual,mentzer2021towards,zhang2021universal,theis2022lossy}.
After all, as we move toward low rates, purely rate-\emph{distortion} optimized systems will produce artifacts in the reconstructions, such as the well known block artifacts of JPEG or blurry patches for neural approaches.
There is simply not enough bitrate available to store all of the details, and if we target, \eg, MSE, the best reconstruction is the average image over all images that map to the given representation since, inevitably, many images will map to the same representation at low rates.
Intuitively, instead of an average image reconstruction, we could prefer a ``realistic'' reconstruction that is sharp and appropriately textured. This reconstruction might have worse MSE than the average image, but users might find it more perceptually pleasing and less artificial.
We can see from this argument that there exists an additional trade-off here, between ``realism'' and ``distortion'', and that distortion will increase as we improve realism.
Following Blau and Michaeli~\cite{blau2019rethinking}, we formalize ``distortion'' as a metric between pairs of images (\eg, MSE) that indicates how close is the reconstruction to the input, while
``realism'' indicates how realistic the reconstructions look (regardless of the input).
We formalize the latter as a divergence $d(p_X,p_{\hat X})$ between the distribution of real images, $X$, and reconstructions, $\hat X$. Note that this can only be measured over a set of images since an accurate estimate of the distribution is needed.
Throughout this text, we use PSNR as a measure of distortion, and FID~\cite{heusel2017gans} as a measure of realism.
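To make these two measures concrete, here is a small numpy sketch (ours, not from the paper's code): PSNR for images in $[0, 255]$, and the closed-form Fréchet distance between one-dimensional Gaussians, which is the scalar analogue of FID (FID evaluates the same expression on multivariate Gaussians fitted to Inception features of the real and reconstructed image sets).

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two images in [0, max_val]."""
    mse = np.mean((np.asarray(x, np.float64) - np.asarray(y, np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Frechet distance between two 1-D Gaussians; FID evaluates the
    multivariate analogue on Inception features of two image sets."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * np.sqrt(var1 * var2)

x = np.full((64, 64), 100.0)
y = np.full((64, 64), 116.0)  # constant error of 16, so MSE = 256
print(round(float(psnr(x, y)), 2))              # 24.05
print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
```

Under this convention, higher PSNR means lower distortion and lower FID means higher realism, matching the axes of the results figures.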
Previous work successfully optimized the triple rate-distortion-realism trade-off~\cite{agustsson2019extreme,mentzer2020high,po-elic,rippel17a,santurkar2017generative}, however, there is one caveat. Since the realism constraint might produce reconstructions that are far away from the input, these systems might be looked at with suspicion because it is not clear which details
are in the original and which were added by the architecture.
We address this caveat by training a decoder that, given a \emph{single} compressed representation,
either produces a reconstruction where little or no detail is generated (like rate-distortion optimized codecs), one where fine-grained detail is generated (like rate-distortion-realism optimized codecs), or anything in between (see Fig.~\ref{fig:teaser}).
We emphasize that the \emph{receiver} can decide how much detail to generate, because we condition the decoder, not the encoder, on a ``realism factor'', $\beta$, and thus the receiver can produce the full spectrum of reconstructions from a single representation, $\hat y$.
%
%
Our main contributions are:
\begin{enumerate}[leftmargin=*,noitemsep]
\item We bridge the generative and non-generative compression worlds, by navigating the trade-off between distortion and realism from a \emph{single} representation using a conditional generator.
%
\item Our method sets a new state-of-the-art in terms of distortion-realism on high-resolution benchmark datasets,
pushing the frontier
of achievable distortion-realism pairs.
Our method achieves better distortions at high realism (low FID) and better realism at low distortion (high PSNR) than ever before (Fig.~\ref{fig:dist_perc}).
%
%
%
%
\end{enumerate}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figs/metrics/psnr_fid_MSCOCO_30K_DALLE}
\includegraphics[width=\linewidth]{figs/metrics/psnr_fid_clic2020}%
\caption{
\label{fig:results}
Results on MS-COCO (top) and CLIC 2020 (bottom).
We show FID as a measure of \emph{realism} (left, lower is better) and PSNR as a measure of \emph{distortion} (right, higher is better).
The chains of dots
%
%
\protect\includegraphics[height=2ex]{figs/supporting/chainofdots.pdf}
%
indicate the realism-distortion points our method can achieve simply by varying $\beta$ on the receiver side, \ie, we have a single model per bitrate.
%
We highlight two values of $\beta$ to make this clear.
%
We observe that for high $\beta$, we match or outperform HiFiC in FID (realism), while having a significantly better PSNR ($>1$dB).
On the low $\beta$ side, our model outperforms Charm and matches VTM in terms of PSNR, while having a significantly better FID.
%
%
%
%
%
%
%
}
\end{figure*}
\section{Related Work}
\label{sec:relwork}
Zhang~\etal~\cite{zhang2021universal} study ``universal'' representations for rate-distortion-realism and provide a theoretical exploration of the bit-rate overhead incurred by using a \emph{single} representation to obtain different points in the distortion-realism plane.\footnote{Note that the terms ``realism'' and ``perception'' can be used interchangeably. Both refer to a divergence between distributions over images. In contrast, ``perceptual quality'' is used more generally as a measure of subjective visual quality, which could be measured by ``realism'' or by an image quality metric like LPIPS or MS-SSIM.}
They formalize the rate overhead of a universal representation compared to different representations for each distortion-realism point, and show that this overhead is zero for scalar Gaussian sources.
For general sources, they show that the rate-distortion-optimal representation can be used to meet any realism constraint by increasing the distortion by no more than a factor of 2.
They present empirical results on MNIST, showing that after training a single encoder-decoder pair, further decoders can be trained given the frozen encoder.
We note that we jointly train an encoder and a conditional decoder that navigates the distortion-realism trade-off by adapting to a side input in the form of a single realism factor, $\beta$.
He~\etal present the ``ELIC'' model~\cite{he2022elic}, which is a state-of-the-art neural compression model for rate-distortion (MSE and MS-SSIM~\cite{wang2003multiscale}) performance amongst practical methods.
Some methods like Koyuncu~\etal~\cite{koyuncu2022contextformer} outperform ELIC at some bit rates, but they use a serial autoregressive context model to improve entropy modeling. Such context models typically lead to 10x slower decode times due to underutilization of the parallel cores of GPUs and TPUs.
Various previous compression methods incorporated adversarial losses to boost realism. Mentzer~\etal developed HiFiC, which combined a conditional GAN~\cite{mirza2014conditional} with a hyperprior-based compression architecture~\cite{balle2018variational} and showed rate savings of 50\% for equal subjective quality compared to MSE-optimized and standard (non-neural) codecs. While ELIC was only optimized for rate-distortion, it was extended to create a ``perceptually-oriented'' variant called PO-ELIC~\cite{po-elic}. This model focused on realism by augmenting the loss function with an adversarial term, a perceptual loss based on LPIPS~\cite{zhang2018unreasonable}, and a patch-based style loss~\cite{gatys2016}. Similarly, Li~\etal~\cite{li2022content} also combine multiple loss terms including a Laplacian loss, MSE, MAE, adversarial loss, and LPIPS, but they merge these terms in a spatially varying, content-adaptive manner based on different detectors (faces, edges, and structure) that run during training. The model is able to learn where to apply each type of loss based on image content, which boosts perceptual quality.
Other methods utilize region-of-interest (ROI) or semantic maps to guide detail and texture synthesis~\cite{Ma2022a,agustsson2019extreme}.
We emphasize that these methods target a single point on the distortion-realism tradeoff, and would require
storing a different representation for each distortion-realism target.
This is in contrast to our method, which only requires a single model and representation, yet can still generate reconstructions targeting any trade-off along the distortion-realism curve.
More related to our approach, Gao~\etal~\cite{gao2022} present an approach for targeting different multi-distortion trade-offs with a single model using semi-amortized inference: first, a model is trained for a single trade-off to predict a latent representation. This representation is then further optimized for a new trade-off at \emph{encode time}. Although effective, this approach has several drawbacks:
(1) the new trade-off parameters must be selected at encode time, not decode time, and
(2) encoding becomes very slow since hundreds or thousands of optimization steps must run for each image. %
Iwai~\etal~\cite{iwai2021fidelity} use network interpolation to achieve different distortion-realism targets, however their method operates in a different regime by targeting extremely low bitrates (${<}0.04$bpp on Kodak). %
Theis~\etal~\cite{theis2017lossy} show promising results for generative compression of small (64x64) images using Gaussian diffusion and reverse channel coding, obtaining state-of-the-art results on ImageNet64. The approach is based on using reverse channel coding~\cite{theis2022algorithms} to transmit samples, which is under active research and currently computationally prohibitive for the large images we consider here.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figs/metrics/dist_perc_MSCOCO_30K_DALLE_var2}\\[2ex]
\includegraphics[width=0.9\linewidth]{figs/metrics/dist_perc_clic2020_var2}
\caption{\label{fig:dist_perc}
Distortion (PSNR) vs.\ realism (FID/256) trade-off on MS-COCO and CLIC 2020, for a \emph{single bitrate}.
This is obtained by slicing Fig.~\ref{fig:results} at a single bitrate
(0.26bpp for MS-COCO and 0.14bpp for CLIC 2020).
In this figure, the optimal point is at the top left, and our method is closer to this optimum than the state-of-the-art generative method HiFiC,
%
as well as the SOTA MSE Baseline.
At high-realism (low FID), we outperform or match HiFiC in FID, while sporting a significantly higher PSNR.
At low-distortion (high PSNR), we reach towards state-of-the-art, with significantly lower FID.
We note that we get a chain of dots for our method because \emph{we can decode multiple reconstructions from the same representation} using different realism weights $\beta$, whereas the shown baselines only have a single dot.
We see similar results for other bitrates, see Sec~\ref{sec:app:dist_realism_rates}.
%
%
%
%
%
%
}
\end{figure}
\section{Method}
\label{sec:method}
\subsection{Background}
\paragraph{Neural Image Compression}
We follow the commonly used~\cite{balle2018variational,
minnen2018joint,
mentzer2018conditional,
minnen2020channel,
cheng2020learned,
he2021checkerboard,
he2022elic,
zou2022devil%
} non-linear transform coding approach~\cite{balle2020nonlinear} to do lossy image compression:
We train an auto-encoder $E,G$, that maps an input image $x$ to a \emph{quantized} representation $\hat y = E(x)$ and back to a reconstruction $\hat x = G(\hat y)$ (we use $G$ for the decoder and call it ``generator'' to avoid confusion with the discriminator $D$ we introduce below).
Training $E,G$ for reconstruction (\eg with a MSE loss) already leads to a compression system, where the sender runs $E$, stores $\hat y$ to disk, the receiver obtains $\hat y$ from disk and runs $G$. However, naively storing $\hat y$ to disk is expensive,
as it takes $\log_2 |\mathcal S|$ bits per symbol (assuming $\hat y_i \in \mathcal S$).
One can do better if the distribution of the symbols is known, as one can then assign shorter bitstrings to more likely symbols.
Given a distribution $p(\hat y)$ estimating the true (unknown) distribution $q(\hat y)$ of the symbols in $\hat y$, we can use entropy coding algorithms to store $\hat y$ using $B(\hat y) = \mathbb{E}_{\hat y \sim q} -\log_2 p(\hat y)$ bits, where $B$ is the cross-entropy between the true data distribution $q$ and our model $p$ (see, \eg, Yang~\etal~\cite{Yang2022a}).
To minimize $B$, recent neural compression approaches train a separate ``entropy model'' $P$ to predict $p(\hat y)$.
Using $P$, we can estimate a bitrate loss during training, $r(\hat y) = B(\hat y)$ and thereby minimize the rate-distortion trade-off by minimizing
\begin{align}
\mathcal L_{RD} = \mathbb E_{x\sim p_X} [
r(\hat y) + \lambda \text{MSE}(x, \hat x)], \label{eq:lossmse}
\end{align}
where $\lambda$ controls the trade-off.
Typically, a set of models is trained by varying $\lambda$, which results in models covering different bitrates.
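As an illustration (our sketch, not the authors' code), the bitrate term and the rate-distortion objective can be written down with a toy categorical distribution standing in for the learned entropy model $P$:

```python
import numpy as np

def rate_bits(symbols, p):
    """Cross-entropy bitrate estimate B(y_hat): sum of -log2 p(y_i)
    over the symbols of the quantized representation."""
    return float(sum(-np.log2(p[s]) for s in symbols))

def rd_loss(x, x_hat, symbols, p, lam):
    """Rate-distortion objective r(y_hat) + lambda * MSE(x, x_hat)."""
    mse = float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))
    return rate_bits(symbols, p) + lam * mse

p = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}  # toy entropy model, uniform over 4 symbols
symbols = [0, 1, 2, 3]
print(rate_bits(symbols, p))                        # 8.0: 2 bits per symbol
print(rd_loss([1.0], [1.0], symbols, p, lam=0.01))  # 8.0: zero distortion adds nothing
```

The closer the model $p$ is to the true symbol distribution $q$, the fewer bits the entropy coder spends, which is exactly why $P$ is trained jointly.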
\paragraph{Generative Image Compression}
Inspired by the theoretical formalisation of ``realism'' as a divergence between the distribution of real images $p_X$ and reconstructions $p_{\hat X}$ (see Introduction),
previous works~\cite{agustsson2019extreme,mentzer2020high} use a generative adversarial network (GAN)~\cite{goodfellow2014generative}-based loss to estimate and minimize this divergence during training.
We follow the formulation of Mentzer~\etal~\cite{mentzer2020high}, which is based on conditional GANs: In addition to $E, G$, a \emph{conditional} discriminator $D(\hat y, x)$ is trained to predict the probability that the given $x$ is a realistic image corresponding to the representation $\hat y$.
We use the patch discriminator from HiFiC~\cite{mentzer2020high}.
Using $D$, we can formulate the GAN losses for $G$ and $D$ as follows:
\begin{align}
\mathcal L_G =& \mathbb E_{\hat y \sim p_Y}[-\log ( D(\hat y, G(\hat y)) )] \\
\mathcal L_D =& \mathbb E_{\hat y \sim p_Y}[-\log (1-D(\hat y,G(\hat y)))] + \\ \nonumber
&\mathbb E_{x\sim p_X }[-\log D(E(x),x)],
\end{align}
where $p_Y$ is the distribution of representations, induced by the encoder transform $E$.
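In scalar form, with the discriminator outputs supplied directly rather than computed by a network, these losses can be sketched as:

```python
import math

def loss_g(d_fake):
    """Non-saturating generator loss: -log D(y_hat, G(y_hat))."""
    return -math.log(d_fake)

def loss_d(d_fake, d_real):
    """Discriminator loss: -log(1 - D(y_hat, G(y_hat))) - log D(E(x), x)."""
    return -math.log(1.0 - d_fake) - math.log(d_real)

# At the GAN equilibrium the discriminator outputs 0.5 everywhere:
print(round(loss_g(0.5), 4))       # 0.6931, i.e. log 2
print(round(loss_d(0.5, 0.5), 4))  # 1.3863, i.e. 2 log 2
```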
\subsection{Our Approach}\label{sec:ourapproach}
Let $\beta \in [0, \beta_\text{max}]$ be the ``realism weight'' that specifies whether our generator $G$ should produce a low distortion or high realism reconstruction. Our goal is to train a single generator $G$ to work well for any $\beta$.
To this end, we base our $E,G$ on the ``ELIC'' architectures introduced by He~\etal~\cite{he2022elic} to achieve state-of-the-art rate-distortion results,
but we make $G$ slightly wider, using $N{=}256$ channels instead of $N{=}192$.\footnote{%
On a one megapixel image using a NVidia V100 GPU, our $G$ runs in 67ms, compared to 43ms for $N{=}192$, and 99ms for HiFiC~\cite{mentzer2020high}.}
%
Additionally, and crucially, we condition $G$ on $\beta$, obtaining a $\beta$-conditional generator $G(\hat y, \beta)$, as shown in Fig.~\ref{fig:archoverview} and described in Sec.~\ref{sec:betacond}.
Thus, to obtain a reconstruction given a representation $\hat y$, the receiver chooses a $\beta \in [0, \beta_\text{max}]$ and runs $G$ to obtain $\hat x_\beta = G(\hat y, \beta)$.
We adopt the channel autoregressive entropy model (Charm) proposed by Minnen~\etal\cite{minnen2020channel} to minimize bitrate, using 10 slices.
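A toy sketch of the channel-autoregressive idea (not the actual Charm implementation): the latent channels are split into slices, and the entropy parameters for each slice may depend only on previously decoded slices, so decoding within a slice stays parallel over spatial locations.

```python
import numpy as np

def split_slices(y_hat, num_slices=10):
    """Split a (C, H, W) latent into channel slices for channel-autoregressive
    entropy modeling (Charm, Minnen et al.)."""
    return np.array_split(y_hat, num_slices, axis=0)

def decode_slices(slices, predict_params):
    """Sketch of the decode order: parameters for slice i are predicted
    from the already-decoded slices 0..i-1."""
    decoded = []
    for s in slices:
        mu, sigma = predict_params(decoded)  # depends only on earlier slices
        decoded.append(s)                    # stand-in for the entropy decode
    return decoded

y_hat = np.zeros((320, 16, 16))
slices = split_slices(y_hat)
out = decode_slices(slices, lambda prev: (0.0, 1.0))
print(len(slices), slices[0].shape, len(out))  # 10 (32, 16, 16) 10
```

Compared to spatially autoregressive context models, only `num_slices` sequential steps are needed, which keeps GPU/TPU cores well utilized.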
Combining the losses from the previous section, we obtain the overall loss for training $E,G$ and $D$:
\begin{equation}
\label{eq:lossbeta}
\begin{split}
\mathcal L_{EGD}(\beta) = & \mathbb E_{x \sim p_X}\Big[
\lambda' r(\hat y)
+
d(x, \hat x_\beta)
+ \\
& \quad %
\beta \big(
\underbrace{%
-\log(D(\hat y, \hat x_\beta))%
}_{\mathcal L_G}
+
C_P \mathcal L_P (x, \hat x_\beta)
\big)
\Big].
\end{split}
\end{equation}
We use $d(x, \hat x_\beta){=}1/100\,\text{MSE}(x, \hat x_\beta)$ (where MSE is calculated on inputs and reconstructions scaled to $\{0, \dots, 255\}$), and $\lambda '{=}100 \lambda$.
Following~\cite{mentzer2020high}, we use $\mathcal L_P{=}\text{LPIPS}$~\cite{zhang2018unreasonable}.
$C_P$ is a hyper-parameter to weight $\mathcal L_P$ relative to $\mathcal L_G$.
We note that in contrast to Eq.~\ref{eq:lossmse}, we have the rate parameter $\lambda'$ on $r(\hat y)$ instead of MSE. This formulation allows us to target different bitrates without changing the relative weight of the distortion compared to other terms. The factor $1/100$ in $d$ and $\lambda '$ is chosen such that this formulation is the same as Eq.~\ref{eq:lossmse} for $\lambda = 1/100$.
During training, we uniformly sample $\beta$ and minimize
$\mathbb E_{\beta\sim U(0, \beta_\text{max})}\mathcal L_{EGD}(\beta)$, using $\beta_\text{max}{=}5.12$ for training.
During inference the receiver can choose $\beta$ freely to navigate the distortion-realism trade-off, obtaining different $\hat x_\beta$ from a fixed $\hat y$.
As motivated in Sec.~\ref{sec:ablations}, we use $\beta_\text{max}{=}2.56$ for inference.
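A pseudocode-level sketch of one evaluation of this objective, with the model components ($E$, $G$, $D$, the rate and perceptual losses) left as hypothetical callables; $\beta$ is sampled per step and enters only the generator:

```python
import math
import random

BETA_MAX_TRAIN = 5.12  # training range; at inference beta is chosen in [0, 2.56]

def train_step(x, E, G, D, rate, mse, lpips, lam_prime, c_p, beta=None):
    """One evaluation of L_EGD(beta) for a sampled realism weight beta."""
    if beta is None:
        beta = random.uniform(0.0, BETA_MAX_TRAIN)
    y_hat = E(x)                      # quantized representation (shared for all beta)
    x_hat = G(y_hat, beta)            # beta-conditional reconstruction
    d = mse(x, x_hat) / 100.0         # d(x, x_hat) = MSE/100 on [0, 255] inputs
    l_g = -math.log(D(y_hat, x_hat))  # adversarial term L_G
    return lam_prime * rate(y_hat) + d + beta * (l_g + c_p * lpips(x, x_hat))

# Stub components just to exercise the arithmetic:
loss = train_step(
    x=0.0, E=lambda x: x, G=lambda y, b: y, D=lambda y, xh: 0.5,
    rate=lambda y: 1.0, mse=lambda a, b: 0.0, lpips=lambda a, b: 0.0,
    lam_prime=1.0, c_p=1.0, beta=1.0)
print(round(loss, 4))  # 1.6931 = 1.0 (rate) + 0 (distortion) + 1.0 * log 2
```

Note that at $\beta{=}0$ the adversarial and perceptual terms vanish and the objective reduces to the plain rate-distortion loss.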
\begin{figure}[b]
\centering
\vspace{3em}
\includegraphics[width=0.7\linewidth]{%
figs/schemata/arch_overview}
\caption{ \label{fig:archoverview} Overview of our architecture. Our encoder $E$ and decoder/generator $G$ are based on the ELIC~\cite{he2022elic} architecture, but we replace every residual block (RB) in $G$ with our conditional RB shown on the right, to let $G$ know which generative weight $\beta$ to target.
We first embed $\beta$ using fourier features into a $d$-dimensional space, then apply a 2-layer MLP to obtain features representing $\beta$, $f(\beta) = \text{MLP}(\text{Fourier}(\beta))$.
Assuming the $i$-th conv.\ layer in the RB has $C_i$ channels, we then project $f(\beta)$ to $C_i$ channels with a learned (per layer) weight $W_i$, obtaining $W_i f(\beta)$. This is added to the output of the conv.\ layer.
%
}
\end{figure}
\subsection{Beta Conditioning}\label{sec:betacond}
To condition $G$, we use the $\beta$-conditioning scheme shown in Fig.~\ref{fig:archoverview}, which we call \textbf{FourierCond}.
It is inspired by how diffusion models condition on the timestep~\cite{ho2020denoising}:
We first obtain global (\ie shared for all layers) features $f(\beta)$ by calculating Fourier features~\cite{vaswani2017attention, mildenhall2021nerf}.
Here, we use the NeRF~\cite{mildenhall2021nerf} approach, using $L{=}10$ in~\cite[Eq.~2]{mildenhall2021nerf}.
Afterwards, we apply a 2-layer MLP (with ReLU activations and 512 channels for each of the two dense layers).
We then learn a projection for each convolutional layer in each residual block in our $G$.
To explore whether it matters how exactly $\beta$ is fed into $G$, we explore a second approach in the ablations, which we named \textbf{TableCond}. It is inspired by multi-rate image compression models~\cite{balle2020nonlinear}, where we use a lookup table indexed by $\beta$ to obtain scaling factors and biases which are applied to each of the (non-residual) convolutions in $G$.
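A minimal sketch of the FourierCond path in Python (toy dimensions and random weights purely for illustration; the real model uses 512-channel dense layers with learned parameters):

```python
import math
import random

def fourier_features(beta, L=10):
    # NeRF-style positional encoding of the scalar beta:
    # [sin(2^l * pi * beta), cos(2^l * pi * beta)] for l = 0..L-1.
    out = []
    for l in range(L):
        out.append(math.sin(2 ** l * math.pi * beta))
        out.append(math.cos(2 ** l * math.pi * beta))
    return out

def dense(x, w, b):
    # Plain fully connected layer: y_j = sum_k w[j][k] * x[k] + b[j].
    return [sum(w_jk * x_k for w_jk, x_k in zip(row, x)) + b_j
            for row, b_j in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

random.seed(0)
d_in, d_hid, c_i = 20, 8, 4  # toy sizes (paper: 512-channel MLP, C_i conv channels)
mat = lambda r, c: [[random.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
w1, b1 = mat(d_hid, d_in), [0.0] * d_hid
w2, b2 = mat(d_hid, d_hid), [0.0] * d_hid
w_i = mat(c_i, d_hid)  # learned per-conv-layer projection W_i

def f_of_beta(beta):
    # f(beta) = MLP(Fourier(beta)), shared by all residual blocks.
    return dense(relu(dense(fourier_features(beta), w1, b1)), w2, b2)

def conditional_bias(beta):
    # W_i f(beta): a C_i-channel vector added to the conv layer's output.
    return dense(f_of_beta(beta), w_i, [0.0] * c_i)
```

Each convolutional layer $i$ owns its projection $W_i$, while the Fourier embedding and the two-layer MLP are shared across the whole generator.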
\section{Experiments}
\label{sec:experiments}
\subsection{Metrics}
We evaluate our models for distortion and realism through PSNR and FID~\cite{heusel2017gans} respectively.
PSNR in RGB is still the most widely used metric to assess distortion-optimized neural codecs, whereas FID is widely used to assess generative models in terms of realism~\cite[\dots]{ramesh2021zero,yu2022scaling,saharia2022photorealistic}.
\subsection{Datasets}
We train our method on 256px crops extracted from a large set of high-resolution images, where each image is randomly resized such that the shorter side is between 500 and 1000 pixels.
We evaluate on the following common benchmark datasets from \emph{image compression}: \textbf{Kodak}~\cite{kodakurl}, 24 images of resolution $512{\times}768$ or $768{\times}512$ and \textbf{CLIC 2020}~\cite{clic2020}, from which we use the \texttt{test} split with
428 high-resolution images. The shorter side is $\approx$1500px for most images (see~\cite[Fig.~A12]{mentzer2020high} for more statistics).
For Kodak, we only report PSNR since it has too few images to reliably estimate FID.
For CLIC 2020, we follow HiFiC~\cite[Sec.~A.7]{mentzer2020high} and report \emph{patched} FID, where we extract $256{\times}256$ patches that cover all images (which we denote ``FID/256'').
This produces ${\approx}$30K overlapping patches (and ${\approx}$15K unique patches), which is of the order of magnitude required to measure FID.
However, neither of these datasets are commonly used for evaluation in the generative modeling literature (\eg, DALL-E~\cite{ramesh2021zero}, Parti~\cite{yu2022scaling} and Imagen~\cite{saharia2022photorealistic}),
where \textbf{MS-COCO 30K} has become the main benchmark dataset, which we thus also use.
The dataset consists of 30\,000 $256{\times}256$ images obtained from the MS-COCO 2014 validation
set.\footnote{%
Like Parti~\cite{yu2022scaling}, we use the Dalle processing~\cite[Sec.~A.2,~Listing~1]{ramesh2021zero}.}
We note that FID/256 is equivalent to vanilla FID on COCO.
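As an illustration, the following sketch computes the top-left corners of a covering set of $256{\times}256$ patches (a hypothetical layout; the exact cropping used by HiFiC may place overlaps differently):

```python
def patch_positions(height, width, patch=256):
    # Top-left corners of patch x patch crops that cover the whole image:
    # stride = patch, with the final row/column shifted back so the last
    # patch ends exactly at the border (hence some overlap there).
    # Assumes both image sides are at least `patch` pixels.
    def starts(size):
        s = list(range(0, size - patch + 1, patch))
        if s[-1] + patch < size:
            s.append(size - patch)
        return s
    return [(y, x) for y in starts(height) for x in starts(width)]

# A typical CLIC 2020 image with a ~1500px shorter side:
positions = patch_positions(1500, 2000)
```

For a $1500{\times}2000$ image this yields $6 \times 8 = 48$ patches; summed over the 428 CLIC 2020 images, a covering of this kind produces tens of thousands of patches.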
\subsection{Building Strong Baselines}\label{sec:strongbase}
No code or set of reconstructions is publicly available for the state-of-the-art non-generative image compression methods, so we aim to match the approach of He~\etal, ELIC~\cite{he2022elic} in PSNR,
as it is state-of-the-art while reporting fast inference on GPU (50ms on a Nvidia Titan XP for a $512{\times}768$ image).\footnote{%
Koyuncu~\etal~\cite{koyuncu2022contextformer} report marginally better PSNR at a significant increase in inference time.}
We use their $E, G$, but, as for our method (Sec.~\ref{sec:ourapproach}), we use $N{=}256$ for $G$, and also use the Charm~\cite{minnen2020channel} entropy model (\ie, in contrast to the paper by He~\etal, we use equally sized slices and no checkerboard).
The resulting model almost matches ELIC in PSNR (there is a ${\approx}0.1$dB difference on Kodak, see Sec.~\ref{sec:results}), and we thus use it as a stand-in for state-of-the-art in PSNR, calling it \textbf{SOTA MSE Baseline}.
We combine this model with the discriminator and GAN formulation from Sec.~\ref{sec:ourapproach} to form our \textbf{GAN baseline}. We train it for Eq.~\ref{eq:lossbeta} using a fixed $\beta{=}2.56$, \ie, this can be viewed as the same as our main model but using a non-conditional $G$ that can only target a single realism weight.
We use this baseline to tune the weights $C_P$ for LPIPS and the GAN weight $\beta$, reported in Sec.~\ref{sec:ablations}, and then use the resulting $C_P$ for our main models.
The GAN baseline method outperforms HiFiC on COCO in FID, and nearly matches it on CLIC 2020, while being significantly stronger in terms of PSNR.
We note that this is despite the fact that we do not port the multi-stage training or rate controller from HiFiC, \ie, we train our models end-to-end from scratch.
Theoretically, a stochastic decoder is necessary to achieve high perceptual quality at low rates~\cite{tschannen2018deep}, so we explored concatenating noise to the representation before decoding in the GAN baseline.
However, we found that this did not affect reconstructions at the rates we are interested in
(intuitively stochasticity is crucial as the bitrate approaches zero).
\subsection{Published Baselines}
Since the publication of HiFiC~\cite{mentzer2020high} in 2020, there has been limited research in improving generative image compression. The few methods that have been published~\cite{po-elic,Ma2022a,iwai2021fidelity} have very limited evaluation (usually only on the validation sets of CLIC'21 (41 images) or '22 (30 images) which do not have enough images to estimate FID), and do not publish code to run on custom datasets.
In contrast, \textbf{HiFiC} has code and reconstructions available.
On high-resolution datasets large enough to estimate FID (CLIC 2020 and MS-COCO 30K), HiFiC remains state-of-the-art in terms of FID prior to the presented work.
On the MSE side, we compare to Minnen~\etal's~\textbf{Charm}~\cite{minnen2020channel}, since code is available for it and it is still close to state-of-the-art.
We compare to \textbf{ELIC}~\cite{he2022elic} in terms of PSNR on Kodak, as well as visually to the two reconstructions they publish in Sec.~\ref{sec:app:eliccmp}.
We additionally compare to
the non-learned \textbf{BPG}~\cite{bpgurl} (based on the HEVC standard) and
\textbf{VTM}~17.1~\cite{vtm17} (the reference implementation of VVC~\cite{bross2021overview}).
VVC/VTM is the state-of-the-art among non-neural image codecs.
We detail how we run the publicly available methods in Sec.~\ref{sec:app:codecinfo}.
\subsection{Our Models}
We train all baselines and ablations for 2M iterations at batch size $8$ on 256px crops.
For the multi-realism models, we train for 3M steps since the decoder needs to simultaneously learn to achieve high and low realism (we note that 3M steps is still less than the cost of training two models that each target a single $\beta$, while our model can target infinitely many $\beta$s).
We use the Adam optimizer, with learning rate $10^{-4}$ and default settings.
As is common in the literature, we train with a 10x higher lambda in the first 15\% of steps, and decay the learning rate by a factor of 10 in the last 15\% of steps. We did not tune these training parameters.
We evaluate our model for $\beta \in \{0.0, ..., 2.56\}$ (motivated in Sec.~\ref{sec:ablations}).
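The training schedule described above can be sketched as follows (the 15\% thresholds and the 10x factors are from the text; the function itself is only an illustration):

```python
def train_hyperparams(step, total_steps, base_lr=1e-4, base_lambda=1.0):
    # First 15% of steps: train with a 10x higher rate weight lambda.
    # Last 15% of steps: decay the learning rate by a factor of 10.
    lam = 10.0 * base_lambda if step < 0.15 * total_steps else base_lambda
    lr = base_lr / 10.0 if step >= 0.85 * total_steps else base_lr
    return lr, lam
```

For the 3M-step multi-realism runs, the lambda boost thus covers the first 450K steps and the decayed learning rate the final 450K.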
\section{Results}
\subsection{Main Results}
As mentioned in Sec.~\ref{sec:strongbase}, the state-of-the-art in image compression~\cite{he2022elic,koyuncu2022contextformer} in terms of MSE
does not provide code or reconstructions.
We thus use our ``SOTA MSE Baseline'' as a stand-in for the state-of-the-art in terms of PSNR.
We establish its strength in Fig.~\ref{fig:resultskodak}, where we show that it is ${\approx}0.0-0.2$dB below ELIC~\cite{he2022elic}.
In Fig.~\ref{fig:results}, we show that our model can achieve a new state-of-the-art in terms of distortion-realism:
On the high-realism side ($\beta{=}2.56$), we
match or outperform the state-of-the-art generative method HiFiC in FID (left plots, note the annotation of $\beta$), while also
significantly outperforming it in PSNR (right plots).
On the low-distortion side ($\beta{=}0$),
we are strong in PSNR, reaching towards the SOTA MSE baseline in terms of PSNR (right plots), while significantly outperforming it in FID (left plots).
We emphasize that this means that
\textit{a)} our model is significantly closer to the input than HiFiC in the high-$\beta$ mode (\ie, it has higher PSNR, see right plots), leading to more faithful reconstructions,
and also
\textit{b)} we have greater realism than state-of-the-art MSE models in the low-$\beta$ mode.
This is even more apparent as we consider Fig.~\ref{fig:dist_perc}, which shows a single rate point and compares PSNR vs.\ FID.
In this figure, it is best to be in the upper left, like in the common rate-distortion plots.
\emph{We can see that our approach reaches closer to this optimum than any previous method}.
We now see more clearly that for low FID (high $\beta$), our method has significantly higher PSNR than HiFiC (${\approx}1$dB), while for high PSNR (low $\beta$), our method has significantly lower FID (about $40\%$) than any of the non-generative methods (VTM, BPG, Charm, and the SOTA MSE baseline).
Comparing our model at $\beta=0$ to the MSE baseline and our model at $\beta=2.56$ to the GAN baseline, we might expect symmetric gaps on both sides. However, as we can see in Fig.~\ref{fig:dist_perc},
our GAN baseline leads to a similar operating point as our model set to $\beta{=}2.56$,
while the MSE baseline has slightly better PSNR. %
It appears that
the multi-task nature of our loss leads to models that slightly favor realism, perhaps not surprisingly, given that we randomly sample $\beta$ during training and a large portion of the optimization thus uses $\beta{>}0$. Indeed, our model at $\beta{=}0$ actually has significantly better FID than the MSE baseline.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/metrics/psnr_fid_kodak}
\caption{Results on Kodak in terms of PSNR (note that Kodak is too small (24 images) to estimate FID).
Our multi-realism model is strong in terms of PSNR in the low-$\beta$ mode while also allowing for high-realism reconstructions (see Fig.~\ref{fig:visualcomp2}).
Additionally, here we can see that our ``SOTA MSE Baseline'' is competitive with the published (state-of-the-art) ELIC~\cite{he2022elic} model; we observe only a tiny gap ($0.0-0.2$dB).
%
%
%
%
\label{fig:resultskodak}
}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/abl/abl_MSCOCO_30K_DALLE}
\caption{
\label{fig:ablations}
Ablations and tuning for our GAN Baseline. See Sec.~\ref{sec:ablations} for details.\vspace{2ex}
}
\end{figure*}
Additionally, in Fig.~\ref{fig:resultskodak}, we compare PSNR on Kodak,
a commonly used benchmark in compression.
Here we can see that our ``SOTA MSE Baseline'' is competitive with state-of-the-art ELIC~\cite{he2022elic}.
Furthermore, despite the fact that we train a \emph{single} model for the full spectrum from high-realism to low-distortion, our method at $\beta{=}0$ is competitive with VTM~\cite{bross2021overview} and Charm~\cite{minnen2020channel} in terms of PSNR, and only ${\approx}0.5dB$ below the state-of-the-art (with significantly better realism as discussed above).
\paragraph{Visual Comparison}
We compare visually to the generative method HiFiC, our strongest contender, in Fig.~\ref{fig:visualcomp2}.
We can observe that our reconstructions at $\beta=2.56$ are closer to the input than HiFiC's when we compare at the same bitrate (first two rows), or we achieve the same visual quality at lower bitrates (third and fourth rows). Our model achieves higher PSNR as we go towards $\beta{=}0$, but we lose detail in the reconstructions, as expected.
For completeness, we compare to the two reconstructions available for the non-generative state-of-the-art ELIC in Sec.~\ref{sec:app:eliccmp}. Visually, the reconstructions look similar to our model at $\beta{=}0$.
Finally, we provide reconstructions on CLIC 2020 in Sec.~\ref{sec:app:clicrecons}.
\vspace{3ex}
\subsection{Ablations \& Tuning}\label{sec:ablations}
In Fig.~\ref{fig:ablations} we show the results for our ablation experiments on MS-COCO.
For the ablations, we decouple the weight on $\mathcal{L}_P$ in Eq.~\ref{eq:lossbeta} by setting $\beta C_P = C_P'$.
\subsubsection{Loss Weights and Charm}
Most ablation experiments are conducted using the Hyperprior~\cite{minnen2018joint} entropy model instead of Charm~\cite{minnen2020channel}, which we denote with ``-Charm''.
We trained the baseline for various rate points with a fixed LPIPS weight $C_P' = 4.26$ (the default weight of HiFiC~\cite{mentzer2020high}) but without the GAN loss, which we refer to as ``Baseline -GAN -Charm''.
We then introduce the GAN loss,
and first vary the weight of the GAN loss $\beta \in \{0.0, 0.08, 0.16, 0.32, \dots, 5.12\}$
(``Baseline +GAN -Charm [vary $\beta$]'').
Here we find that the GAN loss lowers the FID up to $\beta=2.56$ where it starts saturating.
With $\beta$ fixed to this value, we now vary $C_P'\in \{0.0, 1.0, 2.0, 4.26, 8.0\}$ and find that the weight $4.26$ remains a good choice (``Baseline +GAN -Charm [vary $C_P'$]'').
The resulting model is called ``Baseline +GAN -Charm''.
Now we adopt Charm~\cite{minnen2020channel} entropy modeling
which lowers the bitrate (resulting in ``GAN Baseline'').
For the main model, we sampled $\beta$ uniformly in the range $[0, 5.12]$ during training,
but also found that the FID score at inference is lowest for $\beta{=}2.56$.
We thus set $\beta_\text{max}{=}2.56$ for our model during inference.
\subsubsection{Conditioning} When comparing \textit{FourierCond} and \textit{TableCond} we found they give very similar results, see Sec.~\ref{sec:app:table_vs_mlp}.
We choose \textit{FourierCond} for our main model as it leads to a simpler implementation.
\label{sec:results}
\vfill
\section{Conclusion}
We have presented a method which is capable of outputting a single representation for compressed images, from which a receiver can either decode a high-realism reconstruction (high $\beta$) or a high-PSNR reconstruction (low $\beta$). We saw that in terms of distortion (PSNR) vs.\ realism (FID), our method can reach a new state-of-the-art.
To the best of our knowledge, this is the first single decoder method which allows the trade-off between
realism and distortion to happen on the receiver side, with no change in the bitstream.
This means that depending on the use-case, the users may choose to view reconstructions which are as close to the original as possible, or switch to view images with a better level of (generated) detail.
Somewhat surprisingly, we find that we can obtain high realism without sacrificing PSNR by more than ${\approx}1-1.5$dB.
We hope our findings inspire further work to push the boundary of the realism-distortion trade-off.
\FloatBarrier
\newpage
{\small
\bibliographystyle{ieee_fullname}
\subsection{Runtime Benchmarks}
We present runtime benchmark results on three systems. The first system uses an NVIDIA Tesla V100, released in late 2017. The results for this system are presented in Table~\ref{tab:runtime_v100}. The second system employs an NVIDIA 2080 Ti, a consumer GPU targeted primarily at video gaming and released in 2018. The results for this system are summarized in Table~\ref{tab:runtime_2080}.
The third system uses a more powerful NVIDIA 3090 Ti GPU, summarized in Table~\ref{tab:runtime_3090ti}.
To present a realistic usage of the method, we include the CPU runtime needed to encode images (i.e., range coding is included in the total compression/entropy encoding/decoding/total decompression time).
We note that comparing runtime numbers across papers is challenging due to implementation details and platform-specific optimizations.
In our case, we did not aim to provide the fastest runtime numbers and, as such, when implementing the ELIC method~\cite{he2022elic}, we omitted some performance-critical features. Primarily, we did not use checkerboard encoding, nor did we employ uneven grouping for Charm. Both of these should provide better runtime performance for the entropy coding and decoding components.
\begin{table*}[t]
\centering
\small
\begin{tabular}{lcc|cc|cccc}
\toprule
Model & Encoder & \adjustbox{stack=cc}{Entropy\\Coding} &
\adjustbox{stack=cc}{Entropy\\Decoding} & Decoder &
\multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Compression}} & \multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Decompression}} \\
& [ms]
& [ms]
& [ms]
& [ms]
& [ms]
& [mp/s]
& [ms]
& [mp/s]
\\
\midrule
Ours (FourierCond) & 121.3 & 55.6 & 50.6 & 153.8 & 176.8 & 11.2 & 204.4 & 9.7 \\
MSE Baseline (N=192) & 71.2 & 52.3 & 47.6 & 86.3 & 123.6 & 16.0 & 133.9 & 14.8 \\
SOTA MSE Baseline (N=256) & 121.4 & 66.6 & 74.3 & 139.0 & 188.0 & 10.5 & 213.3 & 9.3 \\
HiFiC & 43.2 & 47.4 & 57.4 & 200.5 & 90.6 & 21.9 & 257.9 & 7.7 \\
\bottomrule
\end{tabular}
\caption{Runtime numbers (ms) needed to process a one megapixel image on a \textbf{Tesla V100} using float32 precision. Compression/decompression numbers include predicting entropy parameters on the GPU and running the range coder/decoder on CPU.}
\label{tab:runtime_v100}
\end{table*}
\begin{table*}[t]
\centering
\small
\begin{tabular}{lrr|rr|rrrr}
\toprule
Model & Encoder & \adjustbox{stack=cc}{Entropy\\Coding} &
\adjustbox{stack=cc}{Entropy\\Decoding} & Decoder &
\multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Compression}} & \multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Decompression}} \\
& [ms]
& [ms]
& [ms]
& [ms]
& [ms]
& [mp/s]
& [ms]
& [mp/s]
\\
\midrule
Ours (FourierCond) & 128.4 & 92.3 & 75.2 & 205.9 & 220.7 & 9.0 & 281.1 & 7.1 \\
MSE Baseline (N=192) & 81.4 & 89.1 & 72.4 & 109.2 & 170.5 & 11.6 & 181.6 & 10.9 \\
SOTA MSE Baseline (N=256) & 128.7 & 68.5 & 72.9 & 155.5 & 197.2 & 10.1 & 228.4 & 8.7 \\
HiFiC & 40.5 & 48.9 & 57.5 & 248.3 & 89.4 & 22.2 & 305.8 & 6.5 \\
\bottomrule
\end{tabular}
\caption{Runtime numbers (ms) needed to process a one megapixel image on a \textbf{NVIDIA 2080 Ti} consumer GPU using float32 precision. Compression/decompression numbers include predicting entropy parameters on the GPU and running the range coder/decoder on CPU.}
\label{tab:runtime_2080}
\end{table*}
\begin{table*}[t]
\centering
\small
\begin{tabular}{lrr|rr|rrrr}
\toprule
Model & Encoder & \adjustbox{stack=cc}{Entropy\\Coding} &
\adjustbox{stack=cc}{Entropy\\Decoding} & Decoder &
\multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Compression}} & \multicolumn{2}{c}{\adjustbox{stack=cc}{Total\\Decompression}} \\
& [ms]
& [ms]
& [ms]
& [ms]
& [ms]
& [mp/s]
& [ms]
& [mp/s]
\\
\midrule
Ours (FourierCond) & 62.7 & 45.9 & 37.2 & 80.7 & 108.6 & 18.3 & 117.9 & 16.8 \\
MSE Baseline (N=192) & 44.6 & 46.0 & 37.5 & 47.9 & 90.6 & 21.9 & 85.4 & 23.2 \\
SOTA MSE Baseline (N=256) & 62.9 & 47.2 & 54.0 & 68.8 & 110.1 & 18.0 & 122.8 & 16.1 \\
HiFiC & 20.7 & 33.2 & 46.8 & 109.7 & 54.0 & 36.7 & 156.5 & 12.7 \\
\bottomrule
\end{tabular}
\caption{Runtime numbers (ms) needed to process a one megapixel image on a \textbf{NVIDIA 3090 Ti} consumer GPU using float32 precision. Compression/decompression numbers include predicting entropy parameters on the GPU and running the range coder/decoder on CPU. Please note that certain components don't scale linearly when compared to the \textbf{NVIDIA 2080 Ti}. This is because
entropy encoding/decoding is still serial and the autoregressive components aren't parallelized.}
\label{tab:runtime_3090ti}
\end{table*}
Central Florida Airboat Tours
Rides And Rates
Cocoa, Florida
captain@centralfloridaairboattours.com
The roseate spoonbill is the only spoonbill endemic (native) to the Western Hemisphere (Bjork and Powell 1996). This species can reach a length of 30-40 inches (76-102 centimeters) with a wingspan of 50-53 inches (127-135 centimeters). It has pink wings and underparts (with some red on the tops of the wings) with a white neck and back, and pinkish legs and feet. While the species looks almost entirely pink in flight, they actually have no feathers at all on their heads. The pink coloration comes from the organisms on which they feed, which are full of carotenoids (organic pigments) (Texas Parks and Wildlife Department, n.d.). As the name implies, the roseate spoonbill also has a large, spoon-shaped bill, which it sweeps back and forth in shallow water to capture prey.
The roseate spoonbill is a resident breeder in South America, generally east of the Andes, and coastal areas of Central America, the Caribbean, and the Gulf of Mexico (Dumas 2000). Mangrove islands and occasionally dredge-spoil islands are the preferred nesting habitat for the species. In Florida, the species is found in Florida Bay, Tampa Bay, and Brevard County.
Often misidentified as flamingos
There are six species of Spoonbill in the world, the Roseate is the only one with pink plumage
A flock of Spoonbills is called a "bowl"
Their diet typically consists of small fish, shrimp, crayfish, crabs, insects, and some plant material.
Explore The Swamp.
The Space Coast's only airboat ride that explores the beauty and nature of the St. John's River between Lake Poinsette and Lake Winder.
Ariel Garcé (born 14 July 1979 in Tandil) is an Argentine footballer who played as a defender.
He played most notably for the club River Plate, as well as for the Argentina national team.
Club career
He made his professional debut in 1998 with River Plate, where he spent five seasons and appeared in 92 league matches.
From 2003 to 2013 he played for Monarcas, River Plate, Colón, Olimpo, Rosario Central and Argentinos Juniors.
While at Olimpo, Garcé was suspended for six months by the Argentine Football Association after testing positive for cocaine.
He ended his professional playing career at Atlético Rafaela, where he played from 2013 to 2014.
International career
In 2003 he made his debut in official matches for the Argentina national team. Over an international career that lasted 8 years, he played only 4 matches for the national side.
Garcé played two friendlies for Argentina under Marcelo Bielsa in 2003. He then played a friendly against Haiti under Diego Maradona. On 19 May 2010, Garcé was unexpectedly named among the 23 players in Argentina's squad for the 2010 FIFA World Cup in South Africa, although he did not play in any match. According to Maradona, he had seen the Argentina team winning the World Cup in a dream, and the only face he remembered was Garcé's.
As a member of the squad he took part in the 2010 World Cup in South Africa.
References
Argentine footballers
River Plate footballers
Monarcas Morelia footballers
Colón footballers
Olimpo footballers
Rosario Central footballers
Argentinos Juniors footballers
Atlético Rafaela footballers
Argentine expatriate footballers
Expatriate footballers in Mexico
People from Tandil
Art Print | Be Local Love Hartford - Hartford Prints!
The inspiration for our Be Local, Love Hartford art print came from our 2013 "Local Love" grassroots campaign. We designed and printed black and white posters, which are hung in businesses and public spaces all over the Hartford area. Titled "Eat," "Be," "Grow," "Buy," and "Live," these graphic posters encourage people to live locally. Additionally, we created five fine art prints as companion pieces to each poster. These limited edition, 3-color art prints rework the imagery from the posters in abstract and interesting ways. Each sale of the fine art prints helps to sustain the "Local Love" campaign for years to come! Designed and printed by Adrienne Gale.
{"url":"https:\/\/www.physicsforums.com\/threads\/voltage-division-ac-cct.674600\/","text":"Voltage Division - AC cct.\n\nHomework Statement\n\nHi,\nWe have this cut. :\n\nand in the image below, I10 is calculated by using current division :\n\nThe Attempt at a Solution\n\nI tried to find it using voltage division but the answer is wrong :\nVx(of the upper node) = 100 * $$\\frac{j5}{4+j5}$$ = 60.98 + j48.78 V\n\n=> I10 = $$\\frac{Vx}{10 - j5}$$ = 2.93 + 6.34 = 6.98 $\\ 65.2$\n\nwhich is wrong, WHY?\n\nthanks\n\nRelated Engineering and Comp Sci Homework Help News on Phys.org\ngneill\nMentor\n\nHomework Statement\n\nHi,\nWe have this cut. :\n\nand in the image below, I10 is calculated by using current division :\n\nThe Attempt at a Solution\n\nI tried to find it using voltage division but the answer is wrong :\nVx(of the upper node) = 100 * $$\\frac{j5}{4+j5}$$ = 60.98 + j48.78 V\n\n=> I10 = $$\\frac{Vx}{10 - j5}$$ = 2.93 + 6.34 = 6.98 $\\ 65.2$\n\nwhich is wrong, WHY?\nYour voltage division isn't taking into account the impedance of the 10\u03a9 and -5j capacitor branch that's also connected at the Vx node. 
You might try applying nodal analysis to find Vx...\n\nLast edited:\noh\nthank you gneill","date":"2020-07-06 13:29:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.634850263595581, \"perplexity\": 1466.4383514820684}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593655880616.1\/warc\/CC-MAIN-20200706104839-20200706134839-00552.warc.gz\"}"}
Q: R loop for simple regression I would like to create a function that can work with any data frame, with a minimum of 1 column and a maximum of n columns. The function has to run a simple linear regression for each of the independent variables. I know that I have to use a for loop, but I don't know how to use it.
I try this, but it doesn't work:
>data1<-read.csv(file.choose(),header=TRUE,sep=",")
>n<-nrow(data1)
>PredictorVariables <- paste("x", 1:n, sep="")
>Formula <-paste("y ~ ", PredictorVariables, collapse=" + ",data=data1)
>lm(Formula, data=data1)
A: Here is an approach with lapply(), using the mtcars data set. We will select mpg as the dependent variable, extract the remaining columns from the data set, and then use lapply() to run regression models on each element in the indepVars vector. The output from each model is saved to a list, including the name of the independent variable as well as the resulting model object.
indepVars <- names(mtcars)[!(names(mtcars) %in% "mpg")]
modelList <- lapply(indepVars,function(x){
result <- lm(mpg ~ mtcars[[x]],data=mtcars)
list(variable=x,model=result)
})
# print the first model
modelList[[1]]$variable
summary(modelList[[1]]$model)
The extract operator [[ can then be used to print the content of any of the models.
...and the output:
> # print the first model
> modelList[[1]]$variable
[1] "cyl"
> summary(modelList[[1]]$model)
Call:
lm(formula = mpg ~ mtcars[[x]], data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-4.9814 -2.1185 0.2217 1.0717 7.5186
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 37.8846 2.0738 18.27 < 2e-16 ***
mtcars[[x]] -2.8758 0.3224 -8.92 6.11e-10 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 3.206 on 30 degrees of freedom
Multiple R-squared: 0.7262, Adjusted R-squared: 0.7171
F-statistic: 79.56 on 1 and 30 DF, p-value: 6.113e-10
>
Responding to the comment from the original poster, here is the code necessary to encapsulate the above process within an R function. The function regList() takes a data frame name and a dependent variable string, and then proceeds to run regressions of the dependent variable on each of the remaining variables in the data frame passed to the function.
regList <- function(dataframe,depVar) {
indepVars <- names(dataframe)[!(names(dataframe) %in% depVar)]
modelList <- lapply(indepVars,function(x){
message("x is: ",x)
result <- lm(dataframe[[depVar]] ~ dataframe[[x]],data=dataframe)
list(variable=x,model=result)
})
modelList
}
modelList <- regList(mtcars,"mpg")
# print the first model
modelList[[1]]$variable
summary(modelList[[1]]$model)
One can extract a variety of content from the individual model objects. The output is as follows:
> modelList <- regList(mtcars,"mpg")
> # print the first model
> modelList[[1]]$variable
[1] "cyl"
> summary(modelList[[1]]$model)
Call:
lm(formula = dataframe[[depVar]] ~ dataframe[[x]], data = dataframe)
Residuals:
Min 1Q Median 3Q Max
-4.9814 -2.1185 0.2217 1.0717 7.5186
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 37.8846 2.0738 18.27 < 2e-16 ***
dataframe[[x]] -2.8758 0.3224 -8.92 6.11e-10 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 3.206 on 30 degrees of freedom
Multiple R-squared: 0.7262, Adjusted R-squared: 0.7171
F-statistic: 79.56 on 1 and 30 DF, p-value: 6.113e-10
>
A: How about the following:
First, I create some sample data:
# Sample data
set.seed(2017);
x <- sapply(1:10, function(x) x * seq(1:100) + rnorm(100));
df <- data.frame(Y = rowSums(x), x);
Next I define a custom function:
# Custom function where
# df is the source dataframe
# idx.y is the column index of the response variable in df
# idx.x.min is the column index of the first explanatory variable
# idx.x.max is the column index of the last explanatory variable
# The function returns a list of lm objects
myfit <- function(df, idx.y, idx.x.min, idx.x.max) {
stopifnot(idx.x.min < idx.x.max, idx.x.max <= ncol(df));
res <- list();
for (i in idx.x.min:idx.x.max) {
res[[length(res) + 1]] <- lm(df[, idx.y] ~ df[, i]);
}
return(res);
}
Then I run myfit using the sample data.
lst <- myfit(df, 1, 2, 11);
The return object lst is a list of 11-2+1 = 10 fit results of class lm. For example,
lst[[1]];
#
#Call:
#lm(formula = df[, idx.y] ~ df[, i])
#
#Coefficients:
#(Intercept) df[, i]
# -5.121 55.100
PS
For future posts I recommend having a look at how to ask good questions here on SO, and providing a minimal reproducible example/attempt, including sample data.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 5,798
|
Exceptional PCB assembly service and quality delivered on time, every time. We look forward to hearing from you.
To request a quote, please email us your Bill of Materials and any other assembly files, and we will contact you within 24 hours. As always, we are also available by phone. We look forward to hearing from you!
Come see us in beautiful Houston, Texas, home of the Texans, the Rockets and the Astros! Consistently listed as a top 10 place for businesses and recently proclaimed 'A Great Eating Capital' of America by the New York Times.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,146
|
(206) Hersilia is a main-belt asteroid that orbits the Sun once every 4 years and 197 days at a mean distance of 2.74 AU. It was discovered on 13 October 1879 in Clinton, New York, by Christian Peters. The asteroid is named after Hersilia, the wife of Romulus in Roman mythology.
See also
list of minor planets 1–1000
list of numbered minor planets
Bibliography
External links
Main-belt asteroids
Named minor planets
Astronomical objects discovered in 1879
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 7,652
|
Pax Ben-Gurioni
Last week I had my beef with Susan Jacoby on her reading of the Gaza flotilla raid as a kind of capsule version of Israel-Arab tribal rivalries. This week she hits the mark in a wonderful, highly critical column about Israel's Haredim – or ultra-ultra-orthodox Jews – proving that it is possible to criticize Israel without falling into the myopic, anti-Semitic tropes of people like Jose Saramago.
For the record, I share Jacoby's worry about the Haredim. They are religious extremists dedicated to a Torah-only vision of life on this planet. As an atheist, a secularist and a half-Jew (like Jacoby herself) who cares deeply about the present and future of Israel, I can only applaud her claim that these fanatics imperil Israeli democracy from within.
The sight of thousands of Jews taking to the streets of Israeli cities to fight for the right to wall themselves off in their own ghetto within a Jewish state–and at the expense of that state–is utterly dispiriting. These are people who want to write Baruch Spinoza and Moses Mendelssohn out of Jewish history. They want to shackle their own minds and let other Jews–the Jews who played such a vital role in creating the modern world—do their fighting for them. And they want the rest of us to shut our mouths out of fear that we will be charged with anti-Semitism for saying that their form of religion is rigid, retrograde, and contemptuous of the beliefs of others. That the State of Israel, founded by men and women of far-reaching vision, should tremble in awe of these fearful people is a shame and a disgrace. And it breaks the hearts of those of us who can never forget the hope and pride we once invested in Israel's future. Even more, it breaks the hearts of the sabra grandchildren of the tough, proud, secular Jews–men and women of reason who hated the very idea of spiritual or physical ghettos–who devoted their lives to the creation of Israel.
So these are the same problems dogging countries like the United States and Italy. The US has its evangelical nutjobs, and Italy its criminal Catholic Church which intimidates Italian politicians in a way strikingly similar to that of the Haredim in Israel. Of course, the Church is a multi-national institution representing the world's largest religious denomination, and the Haredim are a small percentage of one of the world's smallest peoples. But they both want theocracy in the end.
So why can't the Israelis stand up to them? The history of the Jewish people is so rich, so ennobling, so varied and engrossing that the Haredi version palls in comparison. To think that Torah, or the Gospels, or the Qur'an is unequivocally the best guide to life in the twenty-first century is beyond laughable. It's dangerous. I'm with Susan on this one.
June 28, 2010 Marc Alan Di Martino
Haredim, Israel, religious extremism, Susan Jacoby
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 108
|
Impflingen () is a municipality in Germany, in the state of Rhineland-Palatinate.
It belongs to the district of Südliche Weinstraße and is administered by the Verbandsgemeinde of Landau-Land. Its population is 799 (as of 31 December 2010) and it covers an area of 5.18 km². Its official code is 07 3 37 043.
References
External links
Official page
Municipalities in Rhineland-Palatinate
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 6,620
|
Q: semantic ui fixed menu inside sidebar I tried (with no success so far) to set up a sidebar with a fixed menu inside.
Example:
<div class="ui vertical right sidebar">
<div class="ui top pointing menu fixed">
<a class="active item">Infos</a>
</div>
<div class="ui segment" style='padding-top:45px;'>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
<p>Eins, zwei, drei, vier
<br/>Fünf, sechs, sieben, acht
<br/>Uno, due
<br/>Três, quatro
<br/>One, two
<br/>Ichi, ni, san, chi
<br/>Adjin, dva, tri
<br/>Li, tva, tri
<br/>
</p>
</div>
</div>
<div class="ui top fixed menu">
<div class="ui title borderless item launch button">Numbers</div>
</div>
<div class="pusher">
</div>
Here is the jsfiddle.
However, when the sidebar content is scrolled, the menu is not fixed.
I don't know if I missed anything, but I took care of putting the sidebar outside the pusher. I also tried with the sticky class, with no luck.
Any idea ?
Thanks in advance
A: I have found a pretty simple CSS solution, adding the following properties to the container segment:
#segm{
overflow-y: scroll;
position: absolute;
height: 100%;
width:100%;
padding: 0;
margin: 0;
}
And it works as expected. See the updated fiddle.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 7,578
|
Q: Correlation between a single item of scale 1 with total score of scale 2 I'm trying to figure out which analysis is most suitable for testing whether a single item of scale 1 is significantly correlated with the total score of scale 2 (mental wellbeing). Scale 1 is ordinal, ranked from 1 to 5, but the total score of scale 2 is the sum of all items in the scale (hence continuous, I'm assuming?).
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,872
|
Q: git: fatal: Unable to create '//Mac/Home/Desktop/TGh/02/.git/index.lock': File exists When I run a git command, it fails with:
fatal: Unable to create '//Mac/Home/Desktop/TGh/02/.git/index.lock': File exists.
A: The error message tells you exactly what to do:
If no other git process is currently running, this probably means a
git process crashed in this repository earlier. Make sure no other git
process is running and remove the file manually to continue.
So, stop any running Git processes you may have (close Source Tree, Xcode, etc), and simply remove the lock file manually, and carry on.
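In practice that just means deleting the lock file by hand once you are sure nothing is using it. A minimal sketch using a throw-away path (`/tmp/demo-repo` is hypothetical; substitute your own repository, e.g. `//Mac/Home/Desktop/TGh/02`):

```shell
# Simulate the stale lock a crashed git process leaves behind
mkdir -p /tmp/demo-repo/.git
touch /tmp/demo-repo/.git/index.lock

# Once no git process is running, removing it by hand is safe
rm -f /tmp/demo-repo/.git/index.lock
```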
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,731
|
Building legal expertise, learning to think like a lawyer, gaining the tools to practice with integrity. "It is a great alternative the school provides for those people who, for whatever reason, were not able to really get the full learning experience the first time round," Gillespie says. The law explains how a person allocates the $200 among his or her various wants in order to maximize satisfaction.
One of the most significant laws regarding the protection of the environment in Oman is the law on the Control of Marine Pollution (Sultani Decree 34/74). Candidates from Tier 3 and Tier 4 schools typically must finish in the top 5-10% in order to meet the hiring criteria for large firms in Texas (although certain Tier 4 schools are favored over others).
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 1,175
|
cask :v1 => 'tal-chorus-lx' do
version :latest
sha256 :no_check
url 'http://kunz.corrupt.ch/downloads/plugins/TAL-Chorus-LX-installer.pkg'
homepage 'http://kunz.corrupt.ch/products/tal-chorus-lx'
license :unknown # todo: change license and remove this comment; ':unknown' is a machine-generated placeholder
pkg 'TAL-Chorus-LX-installer.pkg'
uninstall :pkgutil => [
'ch.corrupt.talchoruslx.*',
'ch.corrupt.talunolxInstaller.TAL-Chorus-LX-64.pkg',
]
end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 6,118
|
\section{Derivation of STM current}
We derive the STM current flowing from the tip to the sample, starting from the Hamiltonian of these two parts,
\begin{eqnarray}
\mathcal{H}=\mathcal{H}_{\rm tip} + \mathcal{H}_{\rm sys} + \mathcal{H}^{\rm tip}_{\rm hyb}.
\end{eqnarray}
$\mathcal{H}_{\rm sys}$ and $\mathcal{H}^{\rm tip}_{\rm hyb}$ are written as
\begin{eqnarray}
\mathcal{H}_{\rm sys}=\mathcal{P}\mathcal{H}_{\rm K}\mathcal{P},
\end{eqnarray}
\begin{eqnarray}
\mathcal{H}^{\rm tip}_{\rm hyb}=\sum_{m,\sigma}v^{\rm t}_{m\sigma}(\alpha^{\dag}_{m\sigma}f_{i\sigma} + {\rm H.c.}),
\end{eqnarray}
as introduced in the main text.
We keep the system in equilibrium with the substrate, with a common chemical potential $\mu$. Meanwhile, we control the electrostatic potential of the tip and set a voltage bias $V$ between the tip and the system.
The voltage bias affects the tip in two ways: it gives (i) a shift of one-particle energy, and (ii) a shift of chemical potential.
Concerning (i), the Hamiltonian of the tip is described by
\begin{eqnarray}
\mathcal{H}_{\rm tip}=\sum_{m,\sigma}(E_m^t-eV)\alpha^{\dag}_{m\sigma}\alpha_{m\sigma},
\end{eqnarray}
where the electron charge is set to be $-e$. Corresponding to (ii), we assume the chemical potential of the tip is given as $\mu_{\rm tip}\equiv\mu - eV$.
Accordingly, the particle distribution is given by the shifted Fermi distribution function, $f(\varepsilon + eV) = \frac{1}{1 + e^{\beta(\varepsilon - \mu + eV)}}$.
To obtain the current, we start with the equation of motion of electric charge at the target site, $i$,
\begin{eqnarray*}
\hat{I}_i\equiv -e\frac{d}{dt}\sum_{\sigma}f^{\dag}_{i\sigma}f_{i\sigma} = i\frac{e}{\hbar}\sum_{m,\sigma} v_{m\sigma}^t(\alpha^{\dag}_{m\sigma}f_{i\sigma} - f^{\dag}_{i\sigma}\alpha_{m\sigma}),
\end{eqnarray*}
which leads to the current expectation value, $I_i$, in terms of the non-equilibrium Green's function~\cite{kamenev2011field}.
\begin{eqnarray}
I_i = -\frac{e}{\hbar}\sum_{m\sigma}v_{m\sigma}^t\int\frac{d\varepsilon}{2\pi}{\rm Re}(g^{\rm K}_{st,im\sigma}(\varepsilon)).
\end{eqnarray}
Here, $g^{\rm K}_{st,im\sigma}(\varepsilon)$ is the Keldysh component of system-tip Green's function, defined from $g^{>}_{st,im\sigma}(t) = -i\langle f_{i\sigma}(t)\alpha^{\dag}_{m\sigma}\rangle, g^{<}_{st,im\sigma}(t) = i\langle \alpha^{\dag}_{m\sigma}f_{i\sigma}(t)\rangle$, and $g^{\rm K}_{st,im\sigma}(t) = g^{>}_{st,im\sigma}(t) + g^{<}_{st,im\sigma}(t)$.
We regard the tunneling Hamiltonian, $\mathcal{H}^{\rm tip}_{\rm hyb}$, as small, and treat it with first-order perturbation theory.
From the standard double-path real-time perturbation theory~\cite{kamenev2011field}, we can attribute $g^{\rm K}_{st,im}(t)$ to the system and tip Green's functions as
\begin{eqnarray*}
g^{\rm K}_{st,im\sigma}(\varepsilon) = v_{m\sigma}^t(g^{\rm R}_{s,i\sigma}(\varepsilon)g^{\rm K}_{t,m\sigma}(\varepsilon) + g^{\rm K}_{s,i\sigma}(\varepsilon)g^{\rm A}_{t,m\sigma}(\varepsilon)).
\end{eqnarray*}
Here, $g^{\rm R}_{s,i\sigma}(\varepsilon)$ and $g^{\rm A}_{t,m\sigma}(\varepsilon)$ are retarded and advanced Green's function in the system and tip, respectively, and they are connected with spin- and level-resolved density of states, as
\begin{eqnarray*}
\rho_{i\sigma}(\varepsilon) = -\frac{1}{\pi}{\rm Im}g^{\rm R}_{s,i\sigma}(\varepsilon),\ \rho_{m\sigma}^t(\varepsilon + eV) = \frac{1}{\pi}{\rm Im}g^{\rm A}_{t,m\sigma}(\varepsilon).
\end{eqnarray*}
Meanwhile, in equilibrium, the Keldysh components are related to distribution function,
\begin{eqnarray}
g^{\rm K}_{s,i\sigma}(\varepsilon) = -2\pi i\rho_{i\sigma}(\varepsilon)(1-2f(\varepsilon))
\end{eqnarray}
\begin{eqnarray}
g^{\rm K}_{t,m\sigma}(\varepsilon) = -2\pi i\rho_{m\sigma}^t(\varepsilon+eV)(1-2f(\varepsilon + eV)).
\end{eqnarray}
Combining these equations together, and assuming simple tunneling matrix elements, $v^t_{m\sigma}\equiv v$, we obtain the electric current,
\begin{eqnarray*}
I_i = -\frac{2\pi e}{\hbar}|v|^2\int\ d\varepsilon\rho^{\rm t}(\varepsilon + eV)\rho_i(\varepsilon)[f(\varepsilon) - f(\varepsilon + eV)].
\end{eqnarray*}
\section{Derivation of hole Green's function}
As introduced in the main text, the hole Green's function of spin $\sigma=\pm1$ is written as
\begin{align}
g_{i\sigma}(t) &= -i\frac{{\rm Tr}[e^{-(\beta-it)\mathcal{H}_{\rm K}}f^{\dag}_{i\sigma}e^{-it\mathcal{H}_{\rm K}}f_{i\sigma}]}{{\rm Tr}\ e^{-\beta\mathcal{H}_{\rm K}}} = -i\frac{{\rm Tr}[e^{-(\beta-it)\frac{i}{4}c_kA_{kk'}c_{k'}}f_{i\sigma}e^{-it\frac{i}{4}c_kA_{kk'}c_{k'}}f^{\dag}_{i\sigma}]}{{\rm Tr}\ e^{-\beta\frac{i}{4}c_kA_{kk'}c_{k'}}}
\end{align}
Given the fermionic kinetic energy is frozen, the fermion annihilation operator, $f_{i\sigma}$, satisfies the following commutation relation with Hamiltonian:
\begin{align}
f^{\dag}_{i\sigma}e^{-it\frac{i}{4}c_kA_{kk'}c_{k'}}f_{i\sigma} &= e^{-it\frac{i}{4}c_kA^i_{kk'}c_{k'}}f^{\dag}_{i\sigma}f_{i\sigma} = e^{-it\frac{i}{4}c_kA^i_{kk'}c_{k'}}(\frac{1}{2} + \sigma S_i^z),
\end{align}
where $\frac{1}{2} + \sigma S_i^z$ is regarded as a projection operator on the state with spin $\sigma$ at site $i$. Accordingly, the Green's function can be transformed into
\begin{align}
g_{i\sigma}(t) = -i\frac{\sum_{\{W_p\}}{\rm Tr}_c[e^{-(\beta-it)\frac{i}{4}c_kA_{kk'}c_{k'}}e^{-it\frac{i}{4}c_kA^i_{kk'}c_{k'}}(i\sigma b_ic_i+\frac{1}{2})]}{\sum_{\{W_p\}}{\rm Tr}_c[e^{-\beta\frac{i}{4}c_kA_{kk'}c_{k'}}]}.
\label{eq:g}
\end{align}
Here, $\sum_{\{W_p\}}$ stands for the summation over the $Z_2$ flux configurations, $\{W_p\}$. ${\rm Tr}_c$ is the trace over $c$-fermions, implicitly involving the projection onto the physical fermion parity~\cite{pedrocchi2011physical}. In Eq.~(\ref{eq:g}), the term involving $b_j$ vanishes, as it changes the conserved flux sector. Accordingly, following the procedure in Ref.~\onlinecite{PhysRevB.98.220404}, we obtain
\begin{align}
g_{i\sigma}(t) &= -\frac{i}{2}\frac{\sum_{\{W_p\}}{\rm Tr}[e^{-(\beta-it)\frac{i}{4}c_kA_{kk'}c_{k'}}e^{-it\frac{i}{4}c_kA^i_{kk'}c_{k'}}]}
{\sum_{\{W_p\}}{\rm Tr}[e^{\beta\frac{i}{4}c_kA_{kk'}c_{k'}}]}\nonumber\\
&=-\frac{i}{2}\frac{\sum_{\{W_p\}}\sqrt{{\rm det}(1 + e^{-(\beta-it)\cdot iA}e^{-it\cdot iA^i})} + (-1)^F\sqrt{{\rm det}(1 - e^{-(\beta-it)\cdot iA}e^{-it\cdot iA^i})}}
{\sum_{\{W_p\}}\sqrt{{\rm det}(1 + e^{-\beta\cdot iA})} + (-1)^F\sqrt{{\rm det}(1 - e^{-\beta\cdot iA})}}.
\label{holegreenfunction}
\end{align}
Here, $(-1)^F$ is the physical fermion parity, which depends on the flux configuration. If we consider only a fixed sector of the flux configuration, we can omit the average over fluxes, and $g_{i\sigma}(t)$ can be simplified as Eq.~(5) in the main text.
\end{document}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 2,129
|
CAD: The Loonie soared following an encouraging set of inflation figures. While the headline figures printed in line with estimates, the Bank of Canada's preferred measure of inflation jumped to 1.96% from 1.9%, which in turn may see the BoC refrain from making a full 180 pivot and keep the option of a rate hike open, albeit not for some time. In response to the CPI report, USDCAD dropped to lows of 1.3270; however, the 1.3250-1.3465 range that has been in place for over a month remains intact for now. Alongside this, the push higher in oil prices also provided underlying support for the Loonie.
NZD: The New Zealand Dollar came under significant selling pressure overnight after Q1 CPI figures disappointed forecasts with the headline reading at 1.5% (Exp. 1.7%). Consequently, money markets priced in an increased likelihood that the RBNZ will ease policy at the next meeting (May 8th), jumping to a near 50/50 chance from 25%. However, NZDUSD soon reclaimed the 0.67 handle, with the RBNZ's Sectoral Factor model showing inflation remained at 1.7%, while encouraging Chinese GDP and production data lifted risk appetite. That said, with RBNZ Governor Orr asserting his easing bias, the likelihood of a rate cut at the May meeting may keep upside limited, which in turn favours moves higher in AUD and CAD vs NZD.
GBP: Softer than expected CPI data kept the Pound under pressure, with GBPUSD making a move towards 1.30 support. However, given the wind-down in Brexit headlines, price action in the Pound has been relatively tame, with implied volatility dropping to multi-month lows.
Oil: Brent and WTI crude futures continued to march higher as yesterday's API inventory report showed a surprise 3.1mln barrel drawdown, which in turn sees WTI edging towards $65/bbl. Elsewhere, with the fighting within Libya continuing to escalate, the geopolitical premium continues to keep oil prices supported, while the risk tone has also been improved by China's encouraging GDP figures for Q1, with signs emerging that Q2 may begin to see a rebound in growth.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,794
|
{"url":"http:\/\/wiki.lazarus.freepascal.org\/Talk:LCL_Internals","text":"# Talk:LCL Internals\n\nThe trayintf.pas unit is not too well explained. If I read this document correctly, it is suposed to be the counterpart of the interfaces unit from the LCL. In this case, I don't think it is necessary to have such a unit, since the registering of the TWSXXXTrayIcon class does all that is needed. (I am not completely sure though) Vincent 19:34, 22 Dec 2005 (CET)\n\nWe can change it then. The example is suposed to be very simple and show how the LCL chooses different widgets without IFDEFS. Any work on it will help. --Sekelsenmat 20:37, 22 Dec 2005 (CET)\nOn the LCL all those files would be used. wstrayicon would even be on a extra \/widgetset\/ directory. --Sekelsenmat 14:19, 26 Dec 2005 (CET)\nthe trayintf.pas file is only needed if you need \"flat\" (non object) calls to some widgetset functions defined in the TxxxWidgetset object. It is at the same level as the winapi files. --Marc 10:50, 12 October 2006 (CEST)\n\n## Some remarks\n\n\u2022 TQtWidgetSet = Class(TWidgetSet):\n\nIn the example of the declaration for TQtWidgetSet, the following functions might be removed in the future:\n\n procedure SetDesigning(AComponent: TComponent); override;\nfunction CreateComponent(Sender\u00a0: TObject): THandle; override; \/\/ deprecated\nfunction CreateTimer(Interval: integer; TimerFunc: TFNTimerProc): integer; override;\nfunction DestroyTimer(TimerHandle: integer): boolean; override;\n\n\u2022 TQtWSWinControl.ShowHide:\n\n if not WSCheckHandleAllocated(AWincontrol, 'SomeProcUsingAWincontrolHandle')\nthen Exit;\n\n\u2022 TQtWidgetSet.ShowWindow:\n\nFunctions like this might move to corntol implementation itself -> TxxxWSControl.Show\n\n\u2022 gtkwstrayicon.pas\n\nIf needed, someone may want to link a private internal class to a WSwidget, like:\n\n RegisterWSComponent(TCustomTrayIcon, TGtkWSTrayIcon, TGtkWSTrayIconPrivate)\n\n\nThis private class can have its own (true) 
inheritence, and it will propagate to all \"derived\" TxxxWSyyy classes\n\n--Marc 10:39, 12 October 2006 (CEST)\n\n\u2022 The section about Adding a new unit to the LCL looks outdated.\n\n--Bart (talk) 22:59, 22 October 2017 (CEST)","date":"2018-01-20 14:54:42","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5814125537872314, \"perplexity\": 6393.979755171968}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-05\/segments\/1516084889660.55\/warc\/CC-MAIN-20180120142458-20180120162458-00280.warc.gz\"}"}
| null | null |
{"url":"http:\/\/clay6.com\/qa\/59939\/the-susceptibility-of-magnetism-at-book-is-1-2-times-10-at-what-temperature","text":"Comment\nShare\nQ)\n\n# The susceptibility of magnetism at book is $1.2 \\times 10^{-5}$ At what temperature will the susceptibility be equal to $1.44 \\times 10^5?$\n\n$\\begin{array}{1 1} 200\\;K \\\\ 150\\;K \\\\ 250\\;K \\\\ 100\\;K \\end{array}$\n\n$\\psi=\\large\\frac{c}{T} \\large\\frac{\\psi _m}{\\psi' m}$\n$\\quad= \\large\\frac{T'} {T}$\n$T'= \\large\\frac{\\psi _m}{\\psi ' _m}$$\\times T \\quad= \\large\\frac{1.2 \\times 10^{-5}}{1.44 \\times 10^{-5}}$$\\times 300$\n$\\quad= 250\\;k$\nAnswer : $250\\;K$","date":"2019-08-21 07:16:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 1.0000100135803223, \"perplexity\": 9084.721661373615}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027315811.47\/warc\/CC-MAIN-20190821065413-20190821091413-00175.warc.gz\"}"}
| null | null |
Aleksander Dębski, pseudonyms "Aleksander", "Gustaw" (born 17 November 1857 in Mogielnica, died 6 March 1935 in Warsaw) – Polish socialist and independence activist.
Youth
He was born into a landowning family. After graduating from the gymnasium in Płock, he left in 1878 for Saint Petersburg, where he studied at the university and then at the Institute of Roads and Communication. There he was active in the Commune of Polish Socialists and, together with Tadeusz Rechniewski and Stanisław Kunicki, in the Polish-Lithuanian Social-Revolutionary Party. In 1883 he took part in the Vilnius congress of socialist circles. He maintained contacts with the First Proletariat and Narodnaya Volya.
Activity in the Great Proletariat
After the arrest of Ludwik Waryński and others, he came to Warsaw and, together with Stanisław Kunicki, took over the leadership of the First Proletariat. He travelled to Saint Petersburg for consultations with Narodnaya Volya. After a clash with the police on 30 July 1884 he had to flee the country. He stayed in Paris and Geneva, where, together with Stanisław Mendelson, he took part in publishing the periodicals "Walka" and "Przedświt".
In 1887 he returned to the country intending to rebuild the party, but was soon arrested near Kraków. After his release he emigrated to Switzerland again. After the preparation of a failed bomb attack on the tsar, he was expelled from Switzerland following an intervention by the Russian legation. He went to Paris, where he studied at the Sorbonne. After S. Padlewski's assassination of Gen. Seliverstov (17 September 1890) he was imprisoned, then released for lack of evidence.
Founder of the PPS
On 17–23 November 1892, representing the Proletariat, he took part in the Paris Congress that created the Union of Polish Socialists Abroad (ZZSP). In 1893 he was appointed to the Centralizacja, the governing body of the ZZSP. After being expelled from France together with all the members of the Centralizacja, he settled in London. He remained on the Centralizacja until 1898. He was a contributor to "Przedświt" and a member of the Polish delegations to the Congress of the Second International in London.
In 1899 he left for the United States, where he founded the Union of Polish Socialists. He was also active in the Polish National Alliance. He supported and promoted the activities of the PPS – Revolutionary Faction among the Polish diaspora. In 1912 he put forward the initiative to establish the Committee of National Defense in the USA, which supported the Polish Military Treasury operating in Galicia and the Provisional Commission of Confederated Independence Parties. He organized a system of financial support for independence activities.
After the outbreak of World War I he came to Kraków as a delegate of the Committee of National Defense. He came a second time in June 1915. He was a member of the Central National Committee in Warsaw (July 1916 – May 1917). On Józef Piłsudski's instructions, he went in 1917 to Stockholm for a congress of independence activists, and then to New York, promoting the independence cause.
In independent Poland
At the end of 1919 he returned to Poland. He took an active part in the work of the Polish Socialist Party. During the Polish-Soviet War of 1920 he served as secretary general of the PPS recruitment bureau. Later, as a representative of the PPS Bureau of Foreign Propaganda, he travelled to the United States and England in support of government policy.
In September 1928, at the 21st PPS Congress in Sosnowiec, he was elected to the Central Party Court, remaining a member until the end of his life. In 1930 he was elected a senator on the PPS ticket.
On 3 June 1933 Aleksander Dębski was awarded the Cross of Independence with Swords "for work in the cause of regaining independence".
He died on 6 March 1935 in Warsaw. He was buried in the Old Powązki Cemetery in Warsaw (section 196-4-25,26).
References
Bibliography
Słownik Biograficzny Działaczy Polskiego Ruchu Robotniczego, Vol. 1.
Barlicki N., Aleksander Dębski. Życie i działalność 1857-1935, Warsaw 1937.
Księga życiorysów działaczy ruchu rewolucyjnego w Polsce, Vol. I, ed. Jan Cynarski-Krzesławski and Adam Próchnik, Wydawn. "Kronika Ruchu Rewolucyjnego", Warsaw 1939.
External links
Publications by Aleksander Dębski at Polona.pl
Members of the Second Proletariat
Members of the Central National Committee in Warsaw (1915–1917)
Members of the Union of Polish Socialists Abroad
Recipients of the Cross of Independence with Swords
Buried at the Powązki Cemetery in Warsaw
Politicians of the Polish Socialist Party (1919–1939)
Participants in the 1892 Paris congress of Polish socialists from the Russian partition
Born in 1857
Died in 1935
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 381
|
{"url":"https:\/\/chemicalstatistician.wordpress.com\/2014\/04\/15\/video-tutorial-rolling-2-dice-an-intuitive-explanation-of-the-central-limit-theorem\/","text":"# Video Tutorial \u2013 Rolling 2 Dice: An Intuitive Explanation of The Central Limit\u00a0Theorem\n\nAccording to the central limit theorem, if\n\n\u2022 $n$ random variables, $X_1, ..., X_n$, are independent and identically distributed,\n\u2022 $n$ is sufficiently large,\n\nthen the distribution of their sample mean, $\\bar{X_n}$, is approximately normal, and this approximation is better as $n$ increases.\n\nOne of the most remarkable aspects of the central limit theorem (CLT) is its validity\u00a0for any parent distribution of $X_1, ..., X_n$. \u00a0In my new Youtube channel, you will find a video tutorial that\u00a0provides an intuitive explanation of why this is true by considering a thought experiment of rolling 2 dice. \u00a0This video focuses on the intuition rather than the mathematics of the CLT. \u00a0In a later video, I will discuss the technical details of the CLT and how it applies to this example.\n\nYou can also watch the video below the fold!\n\n### 2 Responses to Video Tutorial \u2013 Rolling 2 Dice: An Intuitive Explanation of The Central Limit\u00a0Theorem\n\n1. Bob Mrotek says:\n\nEric,\nI loved this lesson. A bight light bulb lit up over my head to light my path. 
Thank you!","date":"2017-01-19 08:39:55","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 6, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5844557285308838, \"perplexity\": 603.9999223234848}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-04\/segments\/1484560280504.74\/warc\/CC-MAIN-20170116095120-00516-ip-10-171-10-70.ec2.internal.warc.gz\"}"}
| null | null |
Q: Implicit partial derivative of wave function I'm working out some QM problems and need to clarify the procedure for calculating the partial derivative of an implicit function. What's needed is to differentiate a wave function twice with respect to $t$. Here's the function: $\phi(x,t) = e^{i(ax - bt)}\psi(x - vt,t)$. My answer differs from the book's. I'm messing up the derivative of $\psi$. My question is: since $t$ shows up twice in $\psi$, how do you handle this? Differentiate each piece separately, then add them? Not sure. Thanks.
A: Your idea is right about breaking up the problem in parts, but it's a bit more sophisticated than just adding them up. First, note that $\phi$ is a product. The derivative of a product is obtained by Leibniz's rule:
$$\partial_t\phi(x,t) = \partial_t(e^{i(ax − bt)}) \psi(x − vt,t) + e^{i(ax − bt)} \partial_t (\psi(x − vt,t))$$
I shortened the notation for the partial derivative with respect to $t$ as $\partial_t$. This can be further worked out to
$$\partial_t\phi(x,t) = -ibe^{i(ax − bt)} \psi(x − vt,t) + e^{i(ax − bt)} \partial_t (\psi(x − vt,t))$$
Second, there is the last factor in the last term, which is a composite function. Here you should use the chain rule. I'm going to notate the derivative with respect to the first variable of $\psi$ as $\partial_1\psi$, likewise for the derivative w.r.t. the second variable $\partial_2\psi$. Then, we have that
$$\partial_t (\psi(x − vt,t)) = -v\partial_1 \psi(x − vt,t) + \partial_2 \psi(x − vt,t)$$
Combining everything we have
$$\partial_t\phi(x,t) = -ibe^{i(ax − bt)} \psi(x − vt,t) -v e^{i(ax − bt)} \partial_1 \psi(x − vt,t) + e^{i(ax − bt)} \partial_2 \psi(x − vt,t)$$
If you still have trouble with this, I suggest you look up the chain rule for partial derivatives in particular.
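As a sanity check, you can verify the product-rule-plus-chain-rule result numerically. Below is a small Python sketch (my own, using a hypothetical concrete choice of $\psi$ and made-up parameter values) that compares the formula above against a central finite difference of $\phi$:

```python
import cmath

# Hypothetical parameter values, chosen only for the check
a, b, v = 0.7, 1.3, 0.4

def psi(u, s):
    # A concrete (hypothetical) envelope standing in for psi
    return cmath.sin(u) * cmath.exp(-s**2)

def d1_psi(u, s, h=1e-6):
    # Partial derivative w.r.t. the first slot (central difference)
    return (psi(u + h, s) - psi(u - h, s)) / (2 * h)

def d2_psi(u, s, h=1e-6):
    # Partial derivative w.r.t. the second slot (central difference)
    return (psi(u, s + h) - psi(u, s - h)) / (2 * h)

def phi(x, t):
    return cmath.exp(1j * (a * x - b * t)) * psi(x - v * t, t)

def dphi_dt_numeric(x, t, h=1e-6):
    # Direct finite-difference derivative of phi w.r.t. t
    return (phi(x, t + h) - phi(x, t - h)) / (2 * h)

def dphi_dt_formula(x, t):
    # The answer's result: -ib e psi - v e d1(psi) + e d2(psi)
    e = cmath.exp(1j * (a * x - b * t))
    w = x - v * t
    return -1j * b * e * psi(w, t) - v * e * d1_psi(w, t) + e * d2_psi(w, t)

print(abs(dphi_dt_numeric(0.9, 0.5) - dphi_dt_formula(0.9, 0.5)))
```

The printed discrepancy is at the level of floating-point finite-difference noise, far below the size of any of the three terms, which is a good sign the decomposition is right.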
## Friday, March 24, 2006

### A newspaper snippet

Rae Ann has scanned and sent me an interesting newspaper snippet that is apparently going to be published on Sunday:

Please don't ask me what it exactly means because I must still study the details.

Update: a reader has sent me a TV screenshot when they informed about the same thing. You can see our experimental colleague from Fermilab.
John Edward Walasek
27-May-1948 - 20-May-2018
Obituary Overview
Walasek, John Edward, age 69, of Dartmouth, passed away on May 20, 2018 in the Dartmouth General Hospital.
Born in Edinburgh, Scotland, he was a son of the late Dr. Jozef and Betty (Coombs) Walasek.
With his family, John came from Scotland to Winnipeg in 1957. They moved to Greensburg, Pennsylvania, four years later, where John was an American Boy Scout and his father, a medical intern. Two years later, they moved to Ontario where he attended high school at CFB Borden and met his future wife. After high school, he began his career with the Canadian Coast Guard in September 1966 at the Coast Guard College in Sydney, Cape Breton. He went on to serve over 35 years and retired on May 29, 2003. He was a Chief Engineer and an Officer who was awarded the Exemplary Service Medal. He served on the CCGS John A. MacDonald on an Arctic mission, the CCGS Daring, the CCGS Provo Wallis and other ships in the Atlantic Fleet. One of his final missions was search, rescue and recovery of Swiss Air 111. During his Coast Guard service, he served as a volunteer firefighter at Cole Harbour Station 17.
John was a dad, friend and hockey coach in Cole Harbour. In retirement, he enjoyed walking his dog "Duffy" in the "A" section of Colby Village and coffee at the Portland Street Tim Hortons.
John is survived by his wife of 48 years, Linda (Rodrigues) Walasek; sons, Christian (Colleen), West Hill, Ontario, Craig (Danielle), Dartmouth; grandchildren Ryan and Molly. He is also survived by his sisters, Helen (Yuri), London, England, Anne (Bill) Barrie, Ontario; mother-in-law Norma Rodrigues; and many nieces and nephews.
In addition to his parents, he was predeceased by his father-in-law Joseph Rodrigues.
The funeral service will take place at 2pm on Friday May 25th in Atlantic Funeral Home, 771 Main Street, Dartmouth, Rev. James Haughn officiating. Reception to follow. Burial in Dartmouth Memorial Gardens.
In lieu of flowers, donations may be made to the Arthritis Society or the Canadian Cancer Society.
Special thanks to the staff of the 4th floor, Dartmouth General, the homecare team from We Care Health Services and our loving neighbours in Colby Village.
Online condolences may be made by visiting the Dartmouth Chapel at www.atlanticfuneralhomes.com
Chapel Service
Atlantic Funeral Home
Event Times:
25-May-2018 2:00 PM - 2:45 PM
Dartmouth Memorial Gardens
Posted by Patrick Wilson | 23-Aug-2018
I worked for John on the CCGS Daring from 1976 to 1978. My deepest condolences to Linda, their sons and family.
Posted by Lorne and Janet Simpson | 25-May-2018
We first met John several years ago as fellow dog walkers in Colby Village. We will miss crossing paths with him and Duffy on our daily walks. Our condolences to Linda and the family.
Sport Psychologist, Dr. Michelle Cleere, talks with us about why training the brain is crucial to the overall junior development process. Her approach to working with young players is to normalize the process and help them understand that they need tools to deal with the ups and downs that naturally happen during a tennis match or during training. If all a child knows how to do is throw a temper tantrum or cry, then they are more likely to give up on the sport. Instead, what if they were able to process and file away their strong emotions so they didn't get in the way of their play?
This is the work that Dr. Michelle does with her young charges. She gives them the tools to develop their own mental game plan. You can learn more about Dr. Michelle at https://drmichellecleere.com. Email her at drmichelle@drmichellecleere.com.
To purchase her book, Beating the Tennis Demons, go to https://www.amazon.com/gp/product/B00WFG2AJG/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B00WFG2AJG&linkCode=as2&tag=pareaces-20&linkId=e407c93a7974ea287379cea8280e9594.
Dr. Michelle helps professional, junior and age group tennis players overcome their performance challenges. It's her passion, mission, and promise.
Ultimately, she works with top athletes to help them unlock the power of the mind and create the mental toughness necessary to be the best. Dr. Michelle's extensive academic background, which includes a PhD in Clinical Psychology and a Masters in Sports Psychology, allows her to help clients deal with performance anxiety, gain more confidence, and build resilience. As many clients attest, their experience with Dr. Michelle is exactly what they needed and more than they expected – it was life changing.
Dr. Michelle's bestseller, Beating the Tennis Demons, helps clients develop practical skills to gain more control over competitive environments and mitigate the interruption in play to overcome intense odds and defeat adversity. She has been involved in many different sports and understands the stress and demands to perform at the top. As a 15-year USAT Coach, she developed simple and effective tools to mentally train her athletes, and they are used by PTR and USPTA coaches around the world.
If you are interested in finding out how Dr. Michelle can help your child beat their tennis demons, email her for a free 30-minute phone consult – drmichelle@drmichellecleere.com . She works with clients remotely via Skype or phone.
Be sure to enter the Sol Schwartz #SaveCollegeTennis All-In Tournament August 12-13 in Baltimore at http://events.universaltennis.com/tournaments/336/. If your child is unable to play but you would still like to support our fund to provide grants to college tennis programs at risk of being cut, you can make a donation using Venmo to my email address: lisa@parentingaces.com.
Visit us online at www.parentingaces.com. Email us at lisa@parentingaces.com.
Q: Starting Tomcat with local user causes JVM_Bind exception I have a tomcat service that starts fine with the local system account but when I switch it over to log on as a local user I am getting a JVM_Bind exception. The service starts and then shuts down.
It looks like the shutdown port is already being used by another service. I can change the tomcat service to use a different shutdown port, but I want to understand why it starts fine with the system account and complains about the shutdown port with the user account.
06-Jun-2017 15:46:13.801 SEVERE [main] org.apache.catalina.core.StandardServer.await StandardServer.await: create[localhost:10006]:
java.net.BindException: Address already in use: JVM_Bind
at java.net.DualStackPlainSocketImpl.bind0(Native Method)
at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at org.apache.catalina.core.StandardServer.await(StandardServer.java:420)
at org.apache.catalina.startup.Catalina.await(Catalina.java:713)
at org.apache.catalina.startup.Catalina.start(Catalina.java:659)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:351)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:485)
A: I found that another service was listening on the wildcard address (0.0.0.0) while the Tomcat service was listening on the loopback address.
According to https://msdn.microsoft.com/en-us/library/windows/desktop/ms740621(v=vs.85).aspx
If both applications are in the same user context, then both can bind to the same port as long as one of them is using the wildcard address (0.0.0.0).
If the applications are in different user contexts, then we get the exception.
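For anyone who wants to see the underlying OS behavior, here is a minimal Python sketch (my own, independent of Tomcat/Java; the port is picked by the OS) reproducing the "Address already in use" condition that JVM_Bind surfaces:

```python
import socket

# Minimal reproduction of the "Address already in use" condition behind
# JVM_Bind: two sockets binding the same address:port pair.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = first.getsockname()[1]
first.listen(1)

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same pair -> OSError (EADDRINUSE)
    conflict = None
except OSError as err:
    conflict = err
finally:
    second.close()
    first.close()

print(conflict)
```

The second `bind` fails with an `OSError`, which is exactly the condition the JVM reports as `java.net.BindException: Address already in use: JVM_Bind`.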
A: Generic solution for Tomcat JVM_Bind exceptions
Windows
1) Some other service might be listening on the same port 8080. In that case, go to %TOMCAT_HOME%/conf/server.xml and change the port to another value such as 8081 or 9080.
2) Open the Windows Task Manager and check for a leftover java process running in the background; if one is running, end the process.
3) You can run the netstat command on Windows to list active connections, check which process is listening on port 8080 (or whatever port you configured), and kill that service.
Linux
1) Use ps -ef | grep java to identify java processes running in the background; if irrelevant processes are running, kill them with kill -9 <pid>.
2) You can run the netstat command on Linux, check which process is listening on port 8080 (or whatever port you configured), and kill that service.
3) You can also change the port number in $TOMCAT_HOME/conf/server.xml to your desired value.
Mac
*Kill the process with kill <pid> in a terminal. To find the process id, use lsof -i:<port>.
*You can also change the port number in $TOMCAT_HOME/conf/server.xml to your desired value.
import Localization from './localization';

namespace Copy {
  type LocalizedCopyStore = { [language: string]: Localization.LocalizedCopy };

  // Languages for which messages are defined under this dir are acceptable
  export const acceptableLanguages = Localization.Locales.approved;

  export let defaultLocale = '';

  // Accept the locale only if it is one of the approved languages.
  export function setDefaultLocale(locale: string) {
    if (acceptableLanguages.indexOf(locale) !== -1) {
      defaultLocale = locale;
      console.log(`Set the default locale to ${locale}`);
    }
  }

  // Look up the copy for a locale; the parameter defaults to the
  // configured default locale.
  export function forLocale(locale: string = defaultLocale): Localization.LocalizedCopy {
    const copy = Localization.Locales.allLocales[locale];
    return copy;
  }
}

export default Copy;
\section{Introduction}
With an ever-increasing capacity to collect and store data, industry, business and government offices all encounter the task of analyzing data of unprecedented size arising from various practical problems such as panel studies of economic, social and natural (such as weather) phenomena, financial market analysis, genetic studies and communications engineering. A significant feature of these data is that the number of variables recorded in each observation is extremely large. Meanwhile, most real data exhibit evidence that
the observations on the same unit over time are temporally dependent. Therefore, many well-developed statistical inference methods for independent and identically distributed (i.i.d.) data may not work here. Those features of modern data bring both opportunities and challenges to statisticians and econometricians.
Covariance matrices provide a measure for the marginal linear dependence among components of a random vector. There are a number of recent works on estimation and hypothesis testing of high-dimensional covariances with i.i.d.\ data. See \cite{Bj_2008a,Bj_2008b}, \cite{QC_2012}, \cite{CLX_2013}, \cite{ChangZhouZhouWang_2017} and references therein. Compared with the marginal dependence, the conditional dependence captures the direct ``link" between two variables when the other variables are conditioned on, which demonstrates the interaction between variables.
Gaussian graphical model (GGM) is a widely used tool to model and analyze the conditional dependence relationship among components of a random vector.
Under the Gaussian assumption of the data,
the precision matrix, defined as the inverse of the covariance matrix, provides an equivalent representation for the conditional dependence. Therefore, analyzing a high-dimensional GGM can be transformed into investigating the structure of the associated precision matrix. Beyond the Gaussian assumption, the bijection relationship between the conditional dependence and the precision matrix might not hold; however, the precision matrix still plays an important role in many statistical applications. Illustrations include the analysis of linear regression models, the Kalman recursions in state-space models, and partial correlation graphs; see Examples 1--3 in Section 2 for more details.
Several methods have been proposed to estimate high-dimensional precision matrix $\boldsymbol{\Omega}$ with i.i.d.\ data in the recent few years. Graphical Lasso \citep{YuanLin_2007,Friedman_2008} is a penalized likelihood estimation approach with an $L_1$ penalty on the entries of $\boldsymbol{\Omega}$. \cite{MB_2006} introduced a neighborhood selection approach, which estimates $\boldsymbol{\Omega}$ by finding the nonzero regression coefficients of each variable on all other variables via Lasso \citep{Tibshirani_1996} or Dantzig estimator \citep{CandesTao_2007}. Also see \cite{CLj_2011}, \cite{XueZou_2012} and \cite{SZ_2013} for other penalized methods. \cite{CXW_2013} investigated the theoretical properties of graphical Lasso with time dependent data. Even though the above mentioned methods can estimate $\boldsymbol{\Omega}$ consistently, they cannot provide statistical inference for $\boldsymbol{\Omega}$ due to a non-negligible bias term incurred by the penalized methods. Recent progress has been made to overcome this issue. Under the
Gaussian assumption, \cite{Ren_2015} proposed a novel estimator for each entry of $\boldsymbol{\Omega}$ based on
pairwise $L_1$ penalized regressions,
and showed their procedure does not incur a bias term. Although their estimator shares some good theoretical advantages, it requires fitting $\frac{p(p-1)}{2}$ high-dimensional regressions, which is computationally intensive when $p$ is large. \cite{Liu_2013} proposed a bias corrected estimator for $\boldsymbol{\Omega}$ based on only $p$ node-wise regressions with Gaussian distributed data. However, neither of those two approaches considers time dependent observations or global level inference for a target sub-region of $\boldsymbol{\Omega}$. Such global inference is essential to understand the structure of the precision matrix. To this end, a precise asymptotic expansion of the estimator is required, which is missing in the previous literature.
Let $\mathcal{S}$ be a given index set of interest, and $n$ be the sample size. The main goal of this paper is to propose a data-driven procedure to determine a class of confidence regions $(\mathcal{C}_{\mathcal{S},\alpha})_{0<\alpha<1}$ for $\boldsymbol{\Omega}_{\mathcal{S}}$ with high-dimensional time dependent data such that $\sup_{0<\alpha<1}|\mathbb{P}(\boldsymbol{\Omega}_{\mathcal{S}}\in\mathcal{C}_{\mathcal{S},\alpha})-\alpha|\rightarrow0$ as $n\rightarrow\infty$, where $\boldsymbol{\Omega}_{\mathcal{S}}$ is a vector whose components are the elements of $\boldsymbol{\Omega}$ indexed by $\mathcal{S}$. Such constructed confidence regions are of great practical importance in many statistical applications. For example, it can be used to test for some specific structures of $\boldsymbol{\Omega}$, to detect and recover the nonzero components in $\boldsymbol{\Omega}$ consistently, and to construct the simultaneous confidence intervals for the elements in $\boldsymbol{\Omega}_{\mathcal{S}}$.
We first propose a bias corrected estimator $\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}$ for $\boldsymbol{\Omega}_{\mathcal{S}}$ via penalized node-wise regressions, and then investigate its asymptotic expansion without requiring either the Gaussian or the stationarity assumption.
Based on the obtained asymptotic expansion, the leading term of $n(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})$ takes the form of a partial sum of an unobservable process.
For any ${\mathbf A}=(a_{j_1,j_2})$, denote by $|{\mathbf A}|_\infty=\max_{j_1,j_2}|a_{j_1,j_2}|$ the elementwise $L_\infty$-norm of ${\mathbf A}$. Inspired by the Gaussian approximation technique developed in \cite{CCK_2013,CCK_2014}, we approximate the distribution of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ by that of the $L_\infty$-norm of a high-dimensional normal distributed random vector with mean zero and covariance formulated as an estimate for the long-run covariance of an unobservable process. In practice, we apply a parametric bootstrap procedure to generate a set of independent samples from such defined multivariate normal distribution, and then use the empirical distribution of the $L_\infty$-norm of those generated random vectors to characterize the probabilistic behavior of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$. Our analysis shows that the kernel-type estimator initially suggested by \cite{Andrews_1991} for the fixed
dimensional long-run covariance still works in the proposed procedure for the high-dimensional scenario without imposing any additional stringent structural assumptions on the long-run covariance. Owing to the form of the kernel-type estimator, we construct a computationally feasible algorithm to implement the proposed procedure. As we will discuss in Section 3, our procedure has four significant advantages: (i) it is fully data-driven; (ii) it is adaptive to any structure of the long-run covariance; (iii) it significantly reduces the computational and storage costs in generating bootstrap samples; and (iv) it performs better in approximating the distribution of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ in finite samples than the limiting distribution of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ does. Most importantly, there is no guarantee for the existence of the limiting distributions for such $L_\infty$-type
statistics in general. The existence of such limiting distributions usually requires additional stringent assumptions that are restrictive and difficult to verify in practice. As an advantage, our procedure is free of those assumptions.
Both theoretical and numerical results demonstrate the promising properties of the proposed procedure.
The rest of the paper is organized as follows. Section \ref{se:background} introduces the problem to be solved and its background. The proposed procedure and its theoretical properties are given in Section \ref{se:GA}. Section \ref{se:app} discusses the applications of our results. Simulation studies and a real data analysis are reported in Sections \ref{se:simulation} and \ref{se:case}, respectively. All the technical proofs are relegated to the Appendix. At the end of this section, we introduce the notation used throughout this paper.
For two real numbers $a$ and $b$, the notation $a \asymp b$ means that they are of the same order. Namely, $0 < c_{1} < |a / b| < c_{2} < \infty$ for two positive constants $c_{1}$ and $c_{2}$.
For a series of random sequences $\{b_{n,j}\}_{n=1}^{\infty}$ for $j \in \mathcal{J}$, and a real sequence $\{a_{n}\}_{n=1}^{\infty}$, $b_{n,j} = o_{p}(a_{n})$ uniformly for $j \in \mathcal{J}$ means $\max_{j\in \mathcal{J}}|b_{n,j} / a_{n}| \xrightarrow{p} 0$ as $n\to\infty$.
Let $|\mathcal{B}|$ be the cardinality of the set $\mathcal{B}$. For an $s_1\times s_2$ matrix ${\mathbf A}=(a_{j_1,j_2})$ and an index set $\mathcal{A}\subset \{1,\ldots,s_1\}\times\{1,\ldots,s_2\}$ with $q=|\mathcal{A}|$, we denote by ${\mathbf A}_{\mathcal{A}}$ a $q$-dimensional vector where the components of ${\mathbf A}_{\mathcal{A}}$ are $a_{j_1,j_2}$'s indexed by $\mathcal{A}$. Recall we defined before, $|{\mathbf A}|_\infty = \max_{j_1,j_2} |a_{j_1,j_2}|$ denotes the elementwise $L_\infty$-norm of ${\mathbf A}$. When ${\mathbf A}={\mathbf a} = (a_{1}, \ldots, a_{s_{1}})^{{ \mathrm{\scriptscriptstyle T} }}$ is an $s_1$-dimensional vector, we define ${\mathbf a}_{\mathcal{A}}$ and $|{\mathbf a}|_\infty$ in the same way with $s_2=1$. Let $\mathbb{I}(\cdot)$ be the indicator function. Let $|{\mathbf a}|_{0} = \sum_{j=1}^{s_1}\mathbb{I}(a_{j} \neq 0)$ and $|{\mathbf a}|_{1} = \sum_{j=1}^{s_1}|a_{j}|$ be the $L_0$- and $L_1$-norms of ${\mathbf a}$, respectively.
\section{Preliminaries}\label{se:background}
Let ${\mathbf y}=(y_1,\ldots,y_{p_n})^{ \mathrm{\scriptscriptstyle T} }$ be a $p_n$-variate random vector with mean $\bmu_n$ and covariance $\boldsymbol{\Sigma}_n$.
Let $\boldsymbol{\Omega}_n=\boldsymbol{\Sigma}_n^{-1}$ be the precision matrix.
Without loss of generality, we assume $\bmu_n = {\mathbf 0}$ in the following analysis.
Let $\mathcal{Y}_n=\{{\mathbf y}_1,\ldots,{\mathbf y}_n\}$ be an observed sample of size $n$ from an $\mathbb{R}^{p_n}$-valued time series, where ${\mathbf y}_t = (y_{1, t}, \ldots, y_{p_n, t})^{{ \mathrm{\scriptscriptstyle T} }}$ and each ${\mathbf y}_t$ has the same first two moments as ${\mathbf y}$, i.e. $\mathbb{E}({\mathbf y}_t)=\bmu_n$ and ${\rm Cov}({\mathbf y}_t)=\boldsymbol{\Sigma}_n$ for each $t$.
Here, we allow the dimension $p_n$ to grow as the sample size $n$ increases. The parameters $\bmu_n$ and $\boldsymbol{\Sigma}_n$ related to the distribution of ${\mathbf y}$ may also change as $n$ increases. The subscript $n$ indicates their dependence on the sample size. We will drop the subscript $n$ for notational simplicity whenever there is no confusion.
We assume the temporal dependence among $\{{\mathbf y}_t\}$ satisfies the $\beta$-mixing condition such that $\beta_k\rightarrow0$ as $k\rightarrow\infty$, where
\[
\beta_k=\sup_t\mathbb{E}\bigg\{\sup_{B\in\mathscr{F}_{t+k}^{\infty}}\big|\mathbb{P}(B|\mathscr{F}_{-\infty}^t)-\mathbb{P}(B)\big|\bigg\}.
\]
Here $\mathscr{F}_{-\infty}^t$ and $\mathscr{F}_{t+k}^{\infty}$ are the $\sigma$-fields generated respectively by $\{{\mathbf y}_{u}\}_{u\leq t}$ and $\{{\mathbf y}_u\}_{u\geq t+k}$. The $\beta$-mixing condition is a mild assumption in time series literature. It is well known that causal ARMA processes with continuous innovation distributions, stationary Markov chains under some mild conditions and stationary GARCH models with finite second moments and continuous innovation distributions all satisfy the $\beta$-mixing condition. We refer to Section 2.6 of \cite{FanYao_2003} for detailed discussion on $\beta$-mixing.
Given a precision matrix $\boldsymbol{\Omega}=(\omega_{j_1,j_2})_{p\times p}$
and an index set $\mathcal{S}\subset \{1,\ldots,p\}^2$ with $r=|\mathcal{S}|$, we are interested to construct a class of confidence regions $(\mathcal{C}_{\mathcal{S},\alpha})_{0<\alpha<1}$ for the components of $\boldsymbol{\Omega}_{\mathcal{S}}$ such that
\begin{equation}\label{eq:cr1}
\sup_{0<\alpha<1}\big|\mathbb{P}(\boldsymbol{\Omega}_{\mathcal{S}}\in \mathcal{C}_{\mathcal{S},\alpha})-\alpha\big|\rightarrow0~~~\textrm{as}~~n\rightarrow\infty
\end{equation}
in the high-dimensional scenario.
As we will discuss in Section \ref{se:app}, such confidence regions can be employed to
(i) test for specific structures of $\boldsymbol{\Omega}$, and (ii) detect and recover the nonzero components in $\boldsymbol{\Omega}$ consistently. Before introducing the methodology to determine $\mathcal{C}_{\mathcal{S},\alpha}$ specified in (\ref{eq:cr1}), we first give some interpretation on the motivation of our focus in this paper through the following examples.
\begin{ex}
(High-dimensional linear regression) Let $z_{t}$ and ${\mathbf x}_t = (x_{1,t}, \ldots, x_{m,t})^{{ \mathrm{\scriptscriptstyle T} }}$ be, respectively, the response variable and explanatory variables with zero mean. Suppose that $z_{t}$ and ${\mathbf x}_{t}$ are linearly related as $z_{t} = {\mathbf x}_{t}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\gamma} + \varepsilon_{t}$ with $\mathbb{E}({\mathbf x}_{t} \varepsilon_{t}) = {\mathbf 0}$ for any $t = 1, \ldots, n$, where $\boldsymbol{\gamma} = (\gamma_{1}, \ldots, \gamma_{m})^{{ \mathrm{\scriptscriptstyle T} }} = \{\mathbb{E}({\mathbf x}_{t} {\mathbf x}_{t}^{{ \mathrm{\scriptscriptstyle T} }})\}^{-1}\mathbb{E}({\mathbf x}_{t} z_{t})$ is the true regression coefficients. The coefficient $\gamma_{l}$ stands for the $l$th covariate effect on the response variable. In high-dimensional regression analysis, one goal is to separate zero covariate effects from the non-zero ones.
Consider to test \begin{equation} H_{0}: \gamma_{l} = 0 \mbox{ \ for any $l \in \mathcal{A}$~~~~~vs.~~~~~} H_{1}: \gamma_{l} \neq 0 \mbox{ \ for some $l \in \mathcal{A}$}, \label{eq:testHDR}\end{equation} where $\mathcal{A} \subset \{1, \ldots, m\}$ is a given index set. This testing problem can be formulated in the framework of the study in this paper. Write ${\mathbf y}_{t} = (z_{t}, {\mathbf x}_{t}^{{ \mathrm{\scriptscriptstyle T} }})^{{ \mathrm{\scriptscriptstyle T} }}$ and denote by $\boldsymbol{\Omega}=(\omega_{j_1,j_2})_{p\times p}$ the
precision matrix of ${\mathbf y}_t$. It can be shown that $(\omega_{1, 2}, \ldots, \omega_{1, p})^{{ \mathrm{\scriptscriptstyle T} }} = -c \boldsymbol{\gamma}$ for $c = [ {\rm Var}(z_t) - \mathbb{E}({\mathbf x}^{{ \mathrm{\scriptscriptstyle T} }}_tz_t) \{\mathbb{E}({\mathbf x}_t {\mathbf x}^{{ \mathrm{\scriptscriptstyle T} }}_t)\}^{-1} \mathbb{E}({\mathbf x}_t z_t) ]^{-1} > 0$. Thus, testing (\ref{eq:testHDR}) is equivalent to testing\begin{equation} H_{0}: \omega_{1, l} = 0 \mbox{ \ for any $l \in \mathcal{S}$~~~~~vs.~~~~~} H_{1}: \omega_{1, l} \neq 0 \mbox{ \ for some $l \in \mathcal{S}$} \label{eq:testHDR1}\end{equation} with $\mathcal{S}=\{(1,l):l-1\in\mathcal{A}\}$.
\end{ex}
\begin{ex}
(Kalman recursions in the state-space model) In recent years state-space representations and the associated Kalman recursions have a profound impact on time series analysis and many related areas. Precision matrices are important in analyzing multivariate state-space models. Denote by $\textrm{WN}({\mathbf 0},{\mathbf K})$ a white noise vector with mean ${\mathbf 0}$ and covariance ${\mathbf K}$. A state-space model for multivariate time series $\{{\mathbf y}_t\}_{t=1}^\infty$ consists two equations:
\[
\left\{ \begin{aligned}
{\mathbf y}_t &= {\mathbf G}{\mathbf x}_t+{\mathbf W}_t, \\
{\mathbf x}_{t+1}&={\mathbf F}{\mathbf x}_t+{\mathbf V}_t,
\end{aligned} \right.
\]
where $\{{\mathbf x}_t\}_{t=1}^\infty$ is a $v$-dimensional unobserved state process, $\{{\mathbf W}_t\}_{t=1}^\infty\sim \textrm{WN}({\mathbf 0},{\mathbf R})$, ${\mathbf G}$ is a $p\times v$ unknown matrix, ${\mathbf F}$ is a $v\times v$ unknown matrix, and $\{{\mathbf V}_t\}_{t=1}^\infty\sim \textrm{WN}({\mathbf 0},{\mathbf Q})$ which is uncorrelated with $\{{\mathbf W}_t\}_{t=1}^\infty$.
Let ${\mathbf J}_t={\mathbf G}{\mathbf A}_t{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf R}$ be the variance of one-step ahead forecast error given $\{{\mathbf y}_{i}\}_{i < t}$, where ${\mathbf A}_{t}={\mathbf F}{\mathbf A}_{t-1}{\mathbf F}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf Q}-{\mathbf B}_{t-1}{\mathbf J}_{t-1}^{-1}{\mathbf B}_{t-1}^{ \mathrm{\scriptscriptstyle T} }$ with ${\mathbf B}_t={\mathbf F}{\mathbf A}_t{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }$ is the filtering variance of ${\mathbf x}_t$ given $\{{\mathbf y}_{i}\}_{i < t}$. The quantity ${\mathbf J}_t^{-1}$ is involved in three fundamental problems associated with the state-space model: prediction, filtering and smoothing. Under the steady-state solution assumption that ${\mathbf A}_t$ is independent of $t$, we can compute ${\mathbf J}_t$ $(t\geq 2)$ via ${\mathbf J}_1$. Notice that ${\mathbf J}_1=\textrm{Var}({\mathbf y}_1)$, thus ${\mathbf J}_1^{-1}$ is essentially the precision matrix $\boldsymbol{\Omega}$. A good estimate for $\boldsymbol{\Omega}$ will lead to good performance of Kalman recursions under high-dimensional settings.
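Under the steady-state assumption, the common value ${\mathbf A}={\mathbf A}_t$ solves the discrete algebraic Riccati equation obtained by substituting ${\mathbf B}_t={\mathbf F}{\mathbf A}{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }$ and ${\mathbf J}_t={\mathbf G}{\mathbf A}{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf R}$ into the recursion for ${\mathbf A}_t$:
\[
{\mathbf A}={\mathbf F}{\mathbf A}{\mathbf F}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf Q}-{\mathbf F}{\mathbf A}{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }({\mathbf G}{\mathbf A}{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf R})^{-1}{\mathbf G}{\mathbf A}{\mathbf F}^{ \mathrm{\scriptscriptstyle T} },
\]
so that ${\mathbf J}_t\equiv{\mathbf J}_1={\mathbf G}{\mathbf A}{\mathbf G}^{ \mathrm{\scriptscriptstyle T} }+{\mathbf R}$ for all $t$.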
To obtain a good estimate for $\boldsymbol{\Omega}$, some prior structural information of $\boldsymbol{\Omega}$ will be helpful. Bandness is one special structure of precision matrices. In many practical problems, the variables have a natural ordering and each variable is related to the ones in its neighborhood. For instance, sales, prices and weather indices depend more on those at nearby locations. The information from farther locations may become redundant given that from the neighbors. See, for example, \cite{CanMeg_1997} for a house price data example which exhibits such a dependence structure. In those cases,
there exists a constant $k > 0$ such that $\omega_{j_1,j_2} = 0$ for any $| j_1-j_2 | > k$. To explore and confirm such a structure, we consider the hypothesis testing problem \begin{equation} H_{0}: \omega_{j_1,j_2} = 0 \mbox{ \ for any $| j_1-j_2 | > k$~~~~~vs.~~~~~} H_{1}: \omega_{j_1,j_2} \neq 0 \mbox{ \ for some $| j_1-j_2 | > k$} \label{eq:banded}\end{equation} for some positive integer $k$ which may diverge to infinity as $n,p\to\infty$.
\cite{Bj_2008a} proposed banding the Cholesky factor to estimate a high-dimensional bandable $\boldsymbol{\Omega}$. Testing the hypothesis (\ref{eq:banded}) provides a practical guideline for confirming whether $\boldsymbol{\Omega}$ lies within the bandable class; if the banded hypothesis is not rejected, the banding estimators may be employed.
\end{ex}
\begin{ex}
(Partial correlation network) Given a precision matrix $\boldsymbol{\Omega}=(\omega_{j_1,j_2})_{p\times p}$, we can define an undirected network $G=(V,E)$ where the vertex set $V=\{1,\ldots,p\}$ represents the $p$ components of ${\mathbf y}$ and the edge set $E=\{(j_1,j_2)\in V\times V:\omega_{j_1,j_2}\neq0\}$ are the pairs of variables with non-zero precision coefficients.
Let $\rho_{j_1,j_2} = \mbox{Corr}(\varepsilon_{j_1},\varepsilon_{j_2})$ be the partial correlation between the $j_1$-th and the $j_2$-th components of ${\mathbf y}$ for any $j_1\neq j_2$, where $\varepsilon_{j_1}$ and $\varepsilon_{j_2}$ are the errors of the best linear predictors of $y_{j_1}$ and $y_{j_2}$ given ${\mathbf y}_{-(j_1,j_2)}=\{y_k: k\neq j_1,j_2 \}$, respectively. It is known that $\rho_{j_1,j_2}=-\frac{\omega_{j_1,j_2}}{\sqrt{\omega_{j_1,j_1}\omega_{j_2,j_2}}}$. Therefore, the network $G = (V, E)$ also represents the partial correlation graph of ${\mathbf y}$: a pair $(j_1, j_2) \notin E$ if and only if $y_{j_1}$ and $y_{j_2}$ are partially uncorrelated. Additionally, if ${\mathbf y}$ follows a multivariate normal distribution, such a network $G$ is a Gaussian graphical model, which encodes the conditional dependence structure among the components of ${\mathbf y}$.
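The identity $\rho_{j_1,j_2}=-\omega_{j_1,j_2}/\sqrt{\omega_{j_1,j_1}\omega_{j_2,j_2}}$ and the induced edge set can be computed directly from a precision matrix; a small illustrative sketch (function names ours):

```python
import numpy as np

def partial_correlations(Omega):
    """Partial correlation matrix: rho_{ij} = -omega_{ij} / sqrt(omega_{ii} omega_{jj})."""
    d = np.sqrt(np.diag(Omega))
    rho = -Omega / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

def edge_set(Omega, tol=1e-10):
    """Edges of the partial correlation network: pairs with omega_{ij} != 0."""
    p = Omega.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(Omega[i, j]) > tol}
```

For a tridiagonal $\boldsymbol{\Omega}$, for instance, only consecutive components are connected.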
Neighborhood and community are two basic features in a network. The neighborhood of the $j$th vertex, denoted by $\mathcal{N}_{j}$, is the set of all the vertices directly connected to it.
For most of the spatial data, it is believed that the partial correlation neighborhood is related to the spatial neighborhood.
Let ${\mathcal{N}}_{j}(k)$ be the set including the first $k$ closest vertices to the $j$th vertex in the spatial domain.
To investigate such a relationship, we consider testing $H_0:\mathcal{N}_j=\mathcal{N}_j(k)$ versus $H_{1}: \mathcal{N}_j \neq \mathcal{N}_j(k)$ for some pre-specified positive integer $k$.
A community in a network is a group of vertices that are more heavily connected within the group than outside it. The community structure builds the fundamental architecture of a network. Empirical evidence in the analysis of stock data indicates that the returns of stocks from the same sector are highly correlated \citep{ChanKarceskiLakonishok_1999,Kenett_2010}. Intuitively, the partial correlation between the returns of two stocks from different sectors should be small. Hence, the sectors can be regarded as different communities in the stock return network. In brain connectivity studies, it has been shown in the literature that brain anatomical regions have dense connections within each lobe, which is considered a community, whereas the between-lobe connections are much fewer \citep{Huang_2010, Power_2011}. However, those between-lobe connections are still important for brain operation and information processing. It has been found that some neurodegenerative diseases, such as Alzheimer's Disease and Autism Spectrum Disorder, reduce the between-lobe connections compared with normal healthy individuals \citep{Huang_2010}.
Therefore, it is of practical importance to explore the connectivity between different communities. Assume the $p$ components of ${\mathbf y}$ are decomposed into $K$ disjoint communities $V_{1}, \ldots, V_{K}$. We are interested in recovering $\mathcal {D} = \{ (k_{1}, k_{2}) : \omega_{j_1,j_2} \neq 0 \mbox{ \ for some $j_1 \in V_{k_{1}}$ and $j_2 \in V_{k_{2}}$} \}$.
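Given a precision matrix (true or estimated) and a community labelling, the set $\mathcal{D}$ of connected community pairs can be read off directly; a naive sketch (function name ours, purely illustrative):

```python
import numpy as np

def connected_communities(Omega, labels, tol=1e-10):
    """Pairs of communities (k1, k2) linked by at least one non-zero
    precision entry between the two groups."""
    p = Omega.shape[0]
    pairs = set()
    for i in range(p):
        for j in range(p):
            if labels[i] != labels[j] and abs(Omega[i, j]) > tol:
                pairs.add(tuple(sorted((labels[i], labels[j]))))
    return pairs
```

For a block-diagonal $\boldsymbol{\Omega}$ with a single cross-block entry, exactly one community pair is recovered.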
\end{ex}
\iffalse
\begin{ex}
(Detecting the neighborhood structure in Markov random fields models) Suppose we observe data on a spatial lattice $D_{p} = \{s_{1}, \ldots, s_{p}\} \subset \mathbf{R}^{2}$ over time, where $s_{i} \in \{(\ell_{1}, \ell_{2}): 1 \leq \ell_{1}, \ell_{2} \leq k\}$ for $p = k^{2}$. Let the response be ${\mathbf y}_t=(y_{1,t},\ldots,y_{p,t})'$ for $t = 1, \ldots, n$, where $y_{j,t}$ denotes the response variable at location $s_{j}$ and time $t$. We order the locations on the lattice first by the row index and then by the column index within each row, such that $s_{(\ell_{1} - 1)k + \ell_{2}} = (\ell_{1}, \ell_{2})$ for $1 \leq \ell_{1}, \ell_{2} \leq k$.
Let $N(s_{j})$ be the spatial neighborhoods of $s_{j}$. For each time point $t$, suppose $\{{\mathbf y}_{t}\}$ follows a Gaussian Markov Random Fields (GMRF) model, where the full conditional distribution of $y_{j,t}$ given all the other responses at time $t$ is equal to the condition distribution given the responses in the spatial neighborhood of $s_{j}$. This means that \begin{equation} f_{j}(\cdot | \{y_{\ell, t}: s_{\ell} \in D_{p}\}, \theta_{j}) = f_{j}(\cdot | \{y_{\ell, t}: s_{\ell} \in N(s_{j})\}, \theta_{j}), \label{eq:MRF}\end{equation} where $f_{j}(\cdot | \cdot, \theta_{j})$ is a normal density function with parameter $\theta_{j}$, standing for the conditional density of $y_{j, t}$. Under such a model, $y_{j, t}$ is conditional independent with $\{y_{\ell, t}: s_{\ell} \notin N(s_{j})\}$ given $\{y_{\ell, t}: s_{\ell} \in N(s_{j})\}$. This implies that for any $s_{\ell} \notin N(s_{j})$, $y_{j, t}$ is independent with $y_{\ell, t}$ given all the other variables ${\mathbf y}_{t} \backslash \{y_{j, t}, y_{\ell,
t}\}$.
Note that the conditional independence implied by the GMRF models is closely related to the precision matrix of the spatial data. Let $\boldsymbol{\Sigma} = \operatorname{Cov}({\mathbf y}_{t})$ be the covariance of ${\mathbf y}_{t}$, and $\boldsymbol{\Omega} = \boldsymbol{\Sigma}^{-1}$ be its precision matrix, where $\boldsymbol{\Sigma} = (\sigma_{\elj_1,\elj_2})$ and $\boldsymbol{\Omega} = (\omega_{\elj_1,\elj_2})$. Under Gaussianity, $y_{\ell_{1}, t}$ and $y_{\ell_{2}, t}$ are conditional independent given ${\mathbf y}_{t} \backslash \{y_{\ell_{1}, t}, y_{\ell_{2}}, t\}$ if and only if $\omega_{\elj_1,\elj_2} = 0$. Furthermore, with centered parameterization, the spatial dependence can be fully represented by the precision matrix. Let \begin{eqnarray} f_{j}(y_{j, t} | \{y_{\ell, t}: s_{\ell} \in N(s_{j})\}, \theta_{j}) = \frac{1}{(2\pi \sigma^{2})^{1/2}}\exp\{-(y_{j, t} - A_{j})^{2} / (2\sigma^{2})\} \nonumber\end{eqnarray} for $j = 1, \ldots, p$, where \begin{equation} A_{j} = A_{j}(\{y_{\ell, t}: s_{\ell} \in N(s_{j})\}, \theta_{j}) = \kappa_{j} + \sum_{s_{\ell} \in N(s_{j})} \eta_{j\ell}(y_{\ell,
t} - \kappa_{\ell}) \label{eq:NaturalParameter}\end{equation} is the conditional mean of $y_{j, t}$ given it neighborhood, $\kappa_{j}$ is the marginal mean of $y_{j, t}$, and $\theta_{j} = \{\sigma^{2}, \kappa_{j}, \kappa_{\ell}, \eta_{j\ell}: s_{\ell} \in N(s_{j})\}$.
Under (\ref{eq:NaturalParameter}), it can be shown that the joint distribution of ${\mathbf y}_{t}$ is $N(\kappa, (I_{p} - C)^{-1} M)$, where $\kappa = (\kappa_{1}, \cdots, \kappa_{p})^{\prime}$, $C = (\eta_{\ell_{1}\ell_{2}})$, $M = \operatorname{diag}(\sigma^{2})$ and $I_{p}$ is the $p \times p$ identity matrix. This means that the dependence structure specified by $\{\eta_{j\ell}\}$ is corresponding to a unique precision matrix of ${\mathbf y}_{t}$. Therefore, testing neighborhood structures in GMRF models is equivalent to testing the structures of the precision matrix $\boldsymbol{\Omega}$.
\begin{itemize}
\item Detection of neighborhood sizes is of great importance in GMRF models. One example is the four nearest and eight nearest neighborhood structures which are commonly used in practice, where for $j = (\ell_{1} - 1)k + \ell_{2}$, \begin{eqnarray}
N_{4}(s_{j}) &= &\{(\ell_{1}, \ell_{2} - 1), (\ell_{1}, \ell_{2} + 1), (\ell_{1} - 1, \ell_{2}), (\ell_{1} + 1, \ell_{2})\} \ \mbox{and} \nonumber \\
N_{8}(s_{j}) &=& N_{4}(s_{j}) \cup \{(\ell_{1} - 1, \ell_{2} - 1), (\ell_{1} - 1, \ell_{2} + 1), (\ell_{1} + 1, \ell_{2} - 1), \nonumber \\
&& (\ell_{1} + 1, \ell_{2} + 1)\} \nonumber \nonumber\end{eqnarray} respectively. To assess the validity of the neighborhood, we consider to test
\begin{equation}\label{eq:NeighborHypothesis}
\begin{split}
&H_{0}: \mbox{ $N_{w}(s_{j})$ is the neighborhood for any $j = 1, \cdots, p$ \ \ versus } \\
&H_{a}: \mbox{ $N_{w}(s_{j})$ is not the neighborhood for all $j = 1, \cdots, p$, }
\end{split}
\end{equation}
where $w = 4$ or $8$. For four nearest and eight nearest neighborhoods, the supports of $\Omega$ are
$$\mathcal{S}_{N_{4}} = \{(\ell, \ell), (\ell - 1, \ell), (\ell + 1, \ell), (\ell + k, \ell), (\ell - k, \ell): 1 \leq \ell \leq p\} \mbox{ \ and}$$
$$\mathcal{S}_{N_{8}} = \mathcal{S}_{N_{4}} \cup \{(\ell - k + 1, \ell), (\ell + k - 1, \ell), (\ell + k + 1, \ell), (\ell - k -1, \ell): 1 \leq \ell \leq p\},$$
respectively. Therefore, (\ref{eq:NeighborHypothesis}) is a special case of (\ref{eq:test}) with ${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,\btheta) = \omega_{j_1,j_2}$ and $S = {S_{N_{w}}}^{c}$.
\item Given the neighborhood sizes, we are interested in the conditional dependence structures within each neighborhood.
For isotropic dependence structures, the conditional dependences of $y_{j, t}$ between each variable in its neighborhood $\{y_{\ell, t}: s_{\ell} \in N(s_{j})\}$ are the same. Under this case, $\eta_{j\ell}$ in (\ref{eq:NaturalParameter}) are constant for all pairs of locations, and \begin{equation} \mbox{Isotropic:} \ \ A_{j} = \kappa_{j} + \eta\sum_{s_{\ell} \in N(s_{j})} (y_{i\ell} - \kappa_{\ell}). \label{eq:Iso}\end{equation} The precision matrix corresponding to the isotropic dependence under four nearest neighborhood structure is within the matrix class \begin{equation} \Lambda_{iso} = \{\Omega = (\omega_{\ell_{1}\ell_{2}}) \in \mathcal{Q}_{4}: \omega_{\ell_{1}\ell_{2}} = c \mbox{ \ for $(\ell_{1}\ell_{2}) \in \mathcal{S}_{N_{4}}$ and $c \in \mathbb{R}$} \}, \label{eq:IsoPresicion}\end{equation} where $\mathcal{Q}_{p}$ is the class of $p \times p$ precision matrices under four nearest neighborhood structure. Testing for the isotropic structure under four nearest neighborhood is equivalent to hypothesis (\ref{eq:test}) with
${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,c) = \omega_{j_1,j_2} - c$ for $(j_1, j_2) \in S_{N_{4}}$ and ${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,c) = \omega_{j_1,j_2}$ for $ (j_1, j_2) \in {S_{N_{4}}}^{c}$.
\item In the case that the dependence structures are directional, the dependence parameters $\eta_{j\ell}$ may be different between the horizontal and vertical neighborhoods of $y_{ij}$.
We have \begin{equation}\mbox{Directional:} \ \ A_{j} = \kappa_{j} + \eta_{u}\sum_{s_{\ell} \in N_{u}(s_{j})} (y_{i\ell} - \kappa_{\ell}) + \eta_{v}\sum_{s_{\ell} \in N_{v}(s_{j})} (y_{i\ell} - \kappa_{\ell}), \label{eq:Dir}\end{equation} where $N_{u}(s_{j})$ and $N_{v}(s_{j})$ are the neighborhoods of $s_{j}$ in horizontal and vertical directions, respectively. Under four nearest neighborhoods, the directional dependence structure in (\ref{eq:Dir}) is equivalent to the class \begin{equation}\begin{split}
\Lambda_{dir} = \{\Omega \in \mathcal{Q}_{4}: &\mbox{ $\omega_{\ell_{1}\ell_{2}} = c_{1}$ for $|\ell_{1}-\ell_{2}| = 1$}, \\
&\mbox{$\omega_{\ell_{1}\ell_{2}} = c_{2}$ for $|\ell_{1}-\ell_{2}| = k$, and $c_{1} \neq c_{2}$} \}.
\end{split}\label{eq:DirPresicion}\end{equation}
Therefore, setting ${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,c_1,c_2) = \omega_{j_1,j_2} - c_1$ for $|\ell_{1}-\ell_{2}| = 1$, ${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,c_1,c_2) = \omega_{j_1,j_2} - c_2$ for $|\ell_{1}-\ell_{2}| = k$ and ${\mathbf g}(\omega_{j_1,j_2};j_1,j_2,c_1,c_2) = \omega_{j_1,j_2}$ for $ (j_1, j_2) \in {S_{N_{4}}}^{c}$ in the hypothesis (\ref{eq:test}) is for testing the directional dependence structure under four nearest neighborhood.
\item A third structure is based on the distance between two locations, where $\eta_{j\ell}$ is reciprocal to the distance between $s_{j}$ and $s_{\ell}$. The corresponding expression of $A_{j}$ in (\ref{eq:NaturalParameter}) is
\begin{equation} \mbox{Distance:} \ \ A_{j} = \kappa_{j} + \frac{\eta}{d_{j\ell}}\sum_{s_{\ell} \in N(s_{j})} (y_{i\ell} - \kappa_{\ell}), \label{eq:Iso}\end{equation} where $d_{j\ell} = \|s_{j} - s_{\ell}\|$ is the Euclidean distance between $s_{j}$ and $s_{\ell}$. And the precision matrix under the distance based dependence structure falls within the class \begin{equation} \Lambda_{dist} = \{\Omega \in \mathcal{Q}_{p}: \omega_{\ell_{1}\ell_{2}} = c / d_{\ell_{1}\ell_{2}} \mbox{ \ for $(\ell_{1}\ell_{2}) \in \mathcal{S}_{N_{w}}$ and $c \in \mathbb{R}$} \}. \label{eq:IsoPresicion}\end{equation} It is clear that testing for the distance based dependence structure coincides with the hypothesis (\ref{eq:test}) with ${\mathbf g}(\omega_{\elj_1,\elj_2};\elj_1,\elj_2,c) = d_{\ell_{1}\ell_{2}}\omega_{\elj_1,\elj_2} - c$ for $(\elj_1, \elj_2) \in S_{N_{w}}$ and ${\mathbf g}(\omega_{\elj_1,\elj_2};\elj_1,\elj_2,c) = \omega_{\elj_1,\elj_2}$ for $ (\elj_1, \elj_2) \in {S_{N_{w}}}^{c}$.
\item One of the most popular variogram model in geostatistics is the Mat\'{e}rn covariance class.
The Mat\'{e}rn covariance function between locations $s_{j}$ and $s_{\ell}$ is defined as \begin{equation} \sigma_{j\ell} = \operatorname{Cov}(y_{j, t}, y_{\ell, t}) = \frac{\sigma^{2}}{2^{\nu-1}, \Gamma(\nu)} (\rho d_{j\ell})^{\nu} K_{\nu}(\rho d_{j\ell}). \label{eq:Matern}\end{equation} Here, $K_{\nu}(\cdot)$ is the modified Bessel function of the second kind and order $\nu>0$, $\rho$ is a scaling parameter and $\sigma^{2}$ is the marginal variance. \cite{LindgrenRueLindstrom_2011} showed that the inverse of the Mat\'{e}rn covariance is approximately sparse, and the Guassian fields with Mat\'{e}rn covariance (\ref{eq:Matern}) can be well approximated by GMRF models. Due to the sparsity of the precision matrix brought by the Markov property, they suggested to use GMRF representation to compute the Guassian fields with Mat\'{e}rn covariance. Meanwhile, we could test the structure of the precision matrix that serves a way to check the Mat\'{e}rn covariance structure. When $\nu = 1$,
\cite{LindgrenRueLindstrom_2011} showed that GMRF representation for the Mat\'{e}rn fields have the precision matrix in the form that $\omega_{\ell\ell} = 4 + c^{2}$, $\omega_{\ell_{1}\ell_{2}} = -2c$ for $|\ell_{1} - \ell_{2}| = 1$ or $k$, $\omega_{\ell_{1}\ell_{2}} = 2$ for $|\ell_{1} - \ell_{2}| = k \pm 1$ and $\omega_{\ell_{1}\ell_{2}} = 1$ for $|\ell_{1} - \ell_{2}| = 2$ or $2k$. Therefore, to test for the Mat\'{e}rn covariance with $\nu = 1$, we set the hypothesis (\ref{eq:test}) in such a way that
\[
{\mathbf g}(\omega_{\ell_1,\ell_2};\ell_1,\ell_2,c) = \left\{ \begin{array}{c l}
\sqrt{\omega_{\ell_1,\ell_2} - 4 } - c, &\textrm{if} \ \ \ell_1 = \ell_2; \\
-\omega_{\ell_1,\ell_2} / 2 - c, &\textrm{if} \ \ |\ell_{1} - \ell_{2}| = 1 \ \textrm{or} \ k; \\
\omega_{\ell_1,\ell_2} - 2, &\textrm{if} \ \ |\ell_{1} - \ell_{2}| = k \pm 1; \\
\omega_{\ell_1,\ell_2} - 1, &\textrm{if} \ \ |\ell_{1} - \ell_{2}| = 2 \ \textrm{or} \ 2k. \\
\end{array} \right.
\]
\end{itemize}
\end{ex}
\fi
\section{Main results}\label{se:GA}
\subsection{Estimation of $\boldsymbol{\Omega}$}
To state our methodology, we first revisit the relationship between the precision matrix and node-wise regressions. For a random vector ${\mathbf y} = (y_{1}, \ldots, y_{p})^{ \mathrm{\scriptscriptstyle T} }$ with mean ${\mathbf 0}$ and covariance $\boldsymbol{\Sigma}$, we consider $p$ node-wise regressions
\begin{equation}\label{eq:regression}
y_{j_1} = \sum_{j_2 \neq j_1}\alpha_{j_1,j_2}y_{j_2} + \epsilon_{j_1} ~~~ (j_1=1,\ldots,p).
\end{equation}
Let ${\mathbf y}_{-j_1} = \{y_{j_2} : j_2 \neq j_1\}$.
The regression error $\epsilon_{j_1}$ is uncorrelated with ${\mathbf y}_{-j_1}$ if and only if $\alpha_{j_1,j_2} = -\frac{\omega_{j_1,j_2}}{\omega_{j_1,j_1}}$ for any $j_2 \neq j_1$. For such specified regression coefficients, it can be shown that $ {{\rm Cov}}(\epsilon_{j_1}, \epsilon_{j_2}) = \frac{\omega_{j_1,j_2}}{\omega_{j_1,j_1}\omega_{j_2,j_2}}$ for any $j_1$ and $j_2$. Let $\boldsymbol{\epsilon} = (\epsilon_{1}, \ldots, \epsilon_{p})^{{ \mathrm{\scriptscriptstyle T} }}$ and ${\mathbf V} = \textrm{Cov}(\boldsymbol{\epsilon})= (v_{j_1,j_2})_{p\times p}$. The precision matrix $\boldsymbol{\Omega}= \boldsymbol{\Sigma}^{-1}$ is proportional to ${\mathbf V}$ as $\boldsymbol{\Omega} = \{\textrm{diag}({\mathbf V})\}^{-1}{\mathbf V}\{\textrm{diag}({\mathbf V})\}^{-1}$. See Lemma 1 of \cite{PengWangZhouZhu_2009} for the above result.
This relationship between $\boldsymbol{\Omega}$ and ${\mathbf V}$ provides a way to learn $\boldsymbol{\Omega}$ by the regression errors in (\ref{eq:regression}).
Since the error vector $\boldsymbol{\epsilon}$ in (\ref{eq:regression}) is unobservable in practice, its ``proxy'' -- the residuals of the node-wise regressions -- can be used to estimate ${\mathbf V}$.
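The proportionality $\boldsymbol{\Omega} = \{\textrm{diag}({\mathbf V})\}^{-1}{\mathbf V}\{\textrm{diag}({\mathbf V})\}^{-1}$ is easy to verify numerically; the sketch below builds an arbitrary positive-definite $\boldsymbol{\Omega}$, forms ${\mathbf V}$ via $v_{j_1,j_2}=\omega_{j_1,j_2}/(\omega_{j_1,j_1}\omega_{j_2,j_2})$, and recovers $\boldsymbol{\Omega}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Omega = M @ M.T + 4.0 * np.eye(4)   # an arbitrary positive-definite precision matrix

d = np.diag(Omega)
V = Omega / np.outer(d, d)          # v_{ij} = omega_{ij} / (omega_{ii} * omega_{jj})

D_inv = np.diag(1.0 / np.diag(V))   # diag(V)^{-1}, since diag(V)_{jj} = 1 / omega_{jj}
Omega_back = D_inv @ V @ D_inv      # equals Omega entrywise
```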
Let $\balpha_j=(\alpha_{j,1},\ldots,\alpha_{j,j-1},-1,\alpha_{j,j+1},\ldots,\alpha_{j,p})^{ \mathrm{\scriptscriptstyle T} }$. For each $j=1,\ldots,p$, we fit the high-dimensional linear regression
\begin{equation}\label{eq:regressionData}
y_{j,t} = \sum_{k \neq j}\alpha_{j,k} y_{k,t} + \epsilon_{j,t}~~~ (t = 1, \ldots, n)
\end{equation}
by Lasso \citep{Tibshirani_1996}, the Dantzig selector \citep{CandesTao_2007} or scaled Lasso \citep{SZ_2012}.
For the case $\bmu \neq {\mathbf 0}$, the regression (\ref{eq:regressionData}) will be conducted on the centered data ${\mathbf y}_{t} - \bar{{\mathbf y}}$, where $\bar{{\mathbf y}} = n^{-1}\sum_{t=1}^{n}{\mathbf y}_{t}$ is the sample mean.
For simplicity, we present our results under Lasso estimator. Other estimators can be applied similarly. Let $\widehat{\balpha}_j$ be the Lasso estimator of $\balpha_j$ defined as follows:
\begin{equation}\label{eq:bestimate}
\widehat{\balpha}_j= \arg\min_{\boldsymbol{\gamma}\in\Theta_j}\bigg[\frac{1}{n}\sum_{t=1}^n (\boldsymbol{\gamma}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t )^2 + 2 \lambda_j|\boldsymbol{\gamma}|_1\bigg],
\end{equation}
where $\Theta_j=\{\boldsymbol{\gamma}=(\gamma_1,\ldots,\gamma_p)^{ \mathrm{\scriptscriptstyle T} }\in\mathbb{R}^p:\gamma_j=-1\}$ and $\lambda_{j}$ is the tuning parameter.
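A minimal sketch of the node-wise fits in (\ref{eq:bestimate}) using a plain coordinate-descent Lasso (a toy implementation for illustration only, not a production solver; any Lasso routine could be substituted):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/n)||y - Xw||^2 + 2*lam*|w|_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_norm2 = (X ** 2).sum(axis=0) / n
    r = y.copy()                                  # current residual y - Xw
    for _ in range(n_iter):
        for k in range(d):
            rho = X[:, k] @ (r + X[:, k] * w[k]) / n
            w_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[k]
            r += X[:, k] * (w[k] - w_new)
            w[k] = w_new
    return w

def nodewise_fit(Y, lam):
    """For each j, regress y_j on y_{-j}; return the n x p residual matrix
    and the p x p coefficient matrix alpha_hat (zero diagonal)."""
    n, p = Y.shape
    alpha_hat, resid = np.zeros((p, p)), np.empty((n, p))
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        coef = lasso_cd(Y[:, idx], Y[:, j], lam)
        alpha_hat[j, idx] = coef
        resid[:, j] = Y[:, j] - Y[:, idx] @ coef
    return resid, alpha_hat
```

The residual column $j$ equals $-\widehat{\balpha}_j^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t$ since $\widehat{\balpha}_j$ carries $-1$ in its $j$th position.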
For each $t$, the residual
\begin{equation}\label{eq:epsilonest}
\widehat{\epsilon}_{j,t} =-\widehat{\balpha}_j^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t
\end{equation}
provides an estimate of $\epsilon_{j,t}$. Write $\widehat{\boldsymbol{\epsilon}}_{t} = (\widehat{\epsilon}_{1,t}, \ldots, \widehat{\epsilon}_{p,t})^{ \mathrm{\scriptscriptstyle T} }$ and let $\widetilde{{\mathbf V}}=(\widetilde{v}_{j_1,j_2})_{p\times p}$ be the sample covariance of $\{\widehat{\boldsymbol{\epsilon}}_{t}\}_{t=1}^{n}$, where $\widetilde{v}_{j_1,j_2} = n^{-1}\sum_{t=1}^{n}\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t}$. It is well known that $n^{-1}\sum_{t=1}^n\epsilon_{j_1,t}\epsilon_{j_2,t}$ is an unbiased estimator of $v_{j_1,j_2}$; however, replacing $\epsilon_{j_1,t}$ by $\widehat{\epsilon}_{j_1,t}$ incurs a bias term. Specifically, as shown in Lemma \ref{la:bias} in the Appendix, it holds that
\begin{equation}\label{eq:v}
\begin{split}
\widetilde{v}_{j_1,j_2}-\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}\epsilon_{j_2,t}=&-(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_2,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&-(\widehat{\alpha}_{j_2,j_1}-\alpha_{j_2,j_1})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&+o_p\{(n\log p)^{-1/2}\}.
\end{split}
\end{equation}
Here the higher order term $o_p\{(n\log p)^{-1/2}\}$ holds uniformly over $j_1$ and $j_2$. Since $n^{-1}\sum_{t=1}^n\epsilon_{j,t}^2$ is $n^{1/2}$-consistent for $v_{j,j}$, (\ref{eq:v}) implies that $\widetilde{v}_{j,j}$ is also $n^{1/2}$-consistent for $v_{j,j}$. However, for any $j_1\neq j_2$, due to the slow convergence rates of the Lasso estimators $\widehat{\alpha}_{j_1,j_2}$ and $\widehat{\alpha}_{j_2,j_1}$, $\widetilde{v}_{j_1,j_2}$ is no longer ${n}^{1/2}$-consistent for $v_{j_1,j_2}$. To eliminate the bias, we propose the following estimator for $v_{j_1,j_2}$:
\begin{equation}\label{eq:hatv}
\widehat{v}_{j_1,j_2}= \left\{ \begin{aligned}
-\frac{1}{n}\sum_{t=1}^n(\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t}+\widehat{\alpha}_{j_1,j_2}\widehat{\epsilon}_{j_2,t}^2+\widehat{\alpha}_{j_2,j_1}\widehat{\epsilon}_{j_1,t}^2),~~ &j_1\neq j_2; \\
\frac{1}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t},~~~~~~~~~~~~~~&j_1=j_2.
\end{aligned} \right.
\end{equation}
Recalling that $\boldsymbol{\Omega}=\{\textrm{diag}({\mathbf V})\}^{-1}{\mathbf V}\{\textrm{diag}({\mathbf V})\}^{-1}$, we estimate $\omega_{j_1,j_2}$ as
\begin{equation}
\widehat{\omega}_{j_1,j_2}=\frac{\widehat{v}_{j_1,j_2}}{\widehat{v}_{j_1,j_1}\widehat{v}_{j_2,j_2}}
\end{equation}
for any $j_1$ and $j_2$.
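Given node-wise residuals $\widehat{\epsilon}_{j,t}$ and coefficients $\widehat{\alpha}_{j_1,j_2}$, the bias-corrected $\widehat{v}_{j_1,j_2}$ in (\ref{eq:hatv}) and the resulting $\widehat{\omega}_{j_1,j_2}$ take only a few lines; a vectorized sketch (function name ours):

```python
import numpy as np

def precision_estimate(resid, alpha_hat):
    """Debiased V-hat and the implied precision estimate:
    v_hat[i,j] = -(1/n) sum_t (e_i e_j + a_ij e_j^2 + a_ji e_i^2) for i != j,
    v_hat[j,j] = (1/n) sum_t e_j^2, omega_hat[i,j] = v_hat[i,j]/(v_hat[i,i] v_hat[j,j])."""
    n, p = resid.shape
    V_tilde = resid.T @ resid / n          # sample covariance of residuals
    s2 = np.diag(V_tilde)                  # (1/n) sum_t e_j^2
    V_hat = -(V_tilde + alpha_hat * s2[None, :] + alpha_hat.T * s2[:, None])
    np.fill_diagonal(V_hat, s2)
    d = np.diag(V_hat)
    Omega_hat = V_hat / np.outer(d, d)
    return V_hat, Omega_hat
```

In the idealized case where the true errors and coefficients are plugged in, the output concentrates around the true $\boldsymbol{\Omega}$.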
To study the theoretical properties of this estimator $\widehat{\omega}_{j_1,j_2}$, we need the following regularity conditions.
\begin{as}\label{as:moment}
There exist constants $K_1>0$, $K_2>1$, $0<\gamma_1\leq 2$ and $0<\gamma_2\leq 2$ independent of $p$ and $n$ such that for each $t=1,\ldots,n$,
\[
\max_{1\leq j\leq p}\mathbb{E}\{\exp(K_1|y_{j,t}|^{\gamma_1})\}\leq K_2~~~\textrm{and}~~~\max_{1\leq j\leq p}\mathbb{E}\{\exp(K_1|\epsilon_{j,t}|^{\gamma_2})\}\leq K_2.
\]
\end{as}
\begin{as}\label{as:cov}
The smallest eigenvalues of $\boldsymbol{\Sigma}$ and $\boldsymbol{\Omega}$ are uniformly bounded away from zero.
\end{as}
\begin{as}\label{as:betamix}
There exist constants $K_3>0$ and $\gamma_3>0$ independent of $p$ and $n$ such that $\beta_k\leq \exp(-K_3k^{\gamma_3})$ for any positive integer $k$.
\end{as}
Condition \ref{as:moment} implies $\max_{1\leq j\leq p}\mathbb{P}(|y_{j,t}|\geq x)\leq K_2\exp(-K_1x^{\gamma_1})$ and $\max_{1\leq j\leq p}\mathbb{P}(|\epsilon_{j,t}|\geq x)\leq K_2\exp(-K_1x^{\gamma_2})$ for any $x>0$ and $t=1,\ldots,n$. It ensures exponential-type
upper bounds for the tail probabilities of the statistics concerned (see for example Lemma 1 in Appendix), which makes our procedure work for $p$ diverging at some exponential rate of $n$.
Condition \ref{as:cov} implies that the eigenvalues of $\boldsymbol{\Sigma}$ and $\boldsymbol{\Omega}$ are bounded away from zero and infinity. Such a regularity condition on the eigenvalues is commonly assumed in the literature on high-dimensional data analysis.
Condition \ref{as:betamix} for the $\beta$-mixing coefficients of $\{{\mathbf y}_{t}\}$ is mild. Causal autoregressive moving average processes with continuous innovation
distributions are $\beta$-mixing with exponentially decaying $\beta_k$. So are stationary Markov chains
satisfying certain conditions. See Section 2.6.1 of \cite{FanYao_2003} and the references therein.
In fact, stationary GARCH models with finite second
moments and continuous innovation distributions are also $\beta$-mixing with exponentially decaying
$\beta_k$; see Proposition 12 of \cite{CarrascoChen_2002}. If we only require $\sup_t\max_{1\leq j\leq p} \mathbb{P}(|y_{j,t} | >
x) = O\{x^{-2(\nu+\iota)}\}$ and $\sup_t\max_{1\leq j\leq p} \mathbb{P}(|\epsilon_{j,t} | >
x) = O\{x^{-2(\nu+\iota)}\}$ for any $x > 0$ in Condition 1 and $\beta_k = O\{k^{-\nu(\nu+\iota)/(2\iota)}\}$ in Condition 3
for some $\nu > 2$ and $\iota> 0$, we can apply Fuk-Nagaev-type inequalities to construct the upper
bounds for the tail probabilities of the statistics if $p$
diverges at some polynomial rate of $n$. We refer to Section 3.2 of \cite{ChangGuoYao_2014} for
the implementation of Fuk-Nagaev-type inequalities in such a scenario. The $\beta$-mixing condition
can be replaced by the $\alpha$-mixing condition, under which we can justify the proposed method for
$p$ diverging at some polynomial rate of $n$ by using Fuk-Nagaev-type inequalities. However, it
remains an open problem to establish the relevant properties under $\alpha$-mixing for $p$ diverging at
some exponential rate of $n$.
The following proposition gives the asymptotic expansion of $\widehat{\omega}_{j_1,j_2}$.
\begin{proposition}\label{pro:1}
Let $s=\max_{1\leq j\leq p}|\balpha_j|_0$ and select the tuning parameter $\lambda_j$ in {\rm(\ref{eq:bestimate})} satisfying $\lambda_j \asymp (n^{-1}\log p)^{1/2}$ for each $j=1,\ldots,p$. Under Conditions {\rm \ref{as:moment}--\ref{as:betamix}}, if $s^2(\log p)^3n^{-1}=o(1)$ and $\log p=o(n^{\varrho_1})$ for a positive constant $\varrho_1$ specified in the proof of this proposition in Appendix, it holds that
\[
\widehat{\omega}_{j_1,j_2}-\omega_{j_1,j_2} = -\frac{\delta_{j_1,j_2}}{v_{j_1,j_1}v_{j_2,j_2}} + o_p\{(n\log p)^{-1/2}\},
\]
where $\delta_{j_1,j_2}=n^{-1}\sum_{t=1}^n(\epsilon_{j_1,t}\epsilon_{j_2,t}-v_{j_1,j_2})$ for any $j_1$ and $j_2$, and $o_p\{(n\log p)^{-1/2}\}$ is a uniform higher order term.
\end{proposition}
We see from Proposition \ref{pro:1} that $\widehat{\omega}_{j_1,j_2}$ is centered at the true parameter $\omega_{j_1,j_2}$ with a standard deviation of order $n^{-1/2}$. Under the condition $s \ll n^{1/2}$, a commonly used assumption in the literature on high-dimensional data analysis, Proposition \ref{pro:1} holds when $\log p=o(n^{c})$ for some positive constant $c$. This means that Proposition \ref{pro:1} remains valid even when the dimension $p$ grows at some exponential rate of the sample size $n$.
\cite{Ren_2015} proposed an estimator for $\omega_{j_1,j_2}$ via pairwise regression. Specifically, they fitted each pair of variables $(y_{j_1},y_{j_2})$ on all other variables ${\mathbf y}_{-(j_1,j_2)}$ via scaled Lasso, which requires $\frac{p(p-1)}{2}$ high-dimensional regressions. In comparison, our proposed estimator (\ref{eq:hatv}) only needs $p$ regressions, which dramatically reduces the computational burden when $p$ is large. \cite{Liu_2013} employed $-{n}^{1/2}c_{j_1,j_2}\widehat{v}_{j_1,j_2}$, for $\widehat{v}_{j_1,j_2}$ defined in (\ref{eq:hatv}) and some specified scale $c_{j_1,j_2}$, as the statistic to detect whether $\omega_{j_1,j_2}=0$ or not. He showed that ${n}^{1/2}c_{j_1,j_2}(-\widehat{v}_{j_1,j_2} + \frac{b_{j_1,j_2} \omega_{j_1,j_2} } {\omega_{j_1,j_1}\omega_{j_2,j_2}})$ is asymptotically normally distributed for some random variable $b_{j_1,j_2}$, which yields the asymptotic normality of $-n^{1/2}c_{j_1,j_2}\widehat{v}_{j_1,j_2}$ only when $\omega_{j_1,j_2}=0$. However, such a result under $\omega_{j_1,j_2}=0$ is not sufficient to construct confidence regions for the non-zero $\omega_{j_1,j_2}$'s. The asymptotic expansion of $\widehat{\omega}_{j_1,j_2}$ in Proposition \ref{pro:1} is a more delicate result, which is necessary for our analysis.
\subsection{Confidence regions}\label{se:cr}
Let $\boldsymbol{\Delta} = -n^{-1}\sum_{t=1}^n(\boldsymbol{\epsilon}_t\boldsymbol{\epsilon}_t^{ \mathrm{\scriptscriptstyle T} }-{\mathbf V})$. It follows from Proposition 1 that
\begin{equation*}\label{eq:asympexp}
\widehat{\boldsymbol{\Omega}}-\boldsymbol{\Omega} = \boldsymbol{\Pi} + \boldsymbol{\Upsilon}
\mbox{ \ for \ } \boldsymbol{\Pi} = \{\textrm{diag}({\mathbf V})\}^{-1}\boldsymbol{\Delta}\{\textrm{diag}({\mathbf V})\}^{-1},
\end{equation*}
where $|\boldsymbol{\Upsilon}|_\infty=o_p\{(n\log p)^{-1/2}\}$. Restricted to a given index set $\mathcal{S}$ with $r=|\mathcal{S}|$, we have
\begin{equation}\label{eq:asyp}
\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}=\boldsymbol{\Pi}_{\mathcal{S}}+\boldsymbol{\Upsilon}_{\mathcal{S}}.
\end{equation}
Based on (\ref{eq:asyp}), we consider two choices below for the confidence regions $\mathcal{C}_{\mathcal{S},\alpha}$ specified in (\ref{eq:cr1}):
\begin{equation}\label{eq:cr2}
\begin{split}
\mathcal{C}_{\mathcal{S},\alpha,1}=&~\{{\mathbf a}\in\mathbb{R}^r:n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf a}|_\infty\leq q_{\mathcal{S},\alpha,1}\},\\
\mathcal{C}_{\mathcal{S},\alpha,2}=&~\{{\mathbf a}\in\mathbb{R}^r:n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf a})|_\infty\leq q_{\mathcal{S},\alpha,2}\},\\
\end{split}
\end{equation}
where $\widehat{{\mathbf D}}$ is an $r\times r$ diagonal matrix specified later in Remark \ref{re:stude}, whose elements are the estimated standard deviations of the $r$ components of $n^{1/2}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})$. Here $q_{\mathcal{S},\alpha,1}$ and $q_{\mathcal{S},\alpha,2}$ are two critical values to be determined. $\mathcal{C}_{\mathcal{S},\alpha,1}$ and $\mathcal{C}_{\mathcal{S},\alpha,2}$ represent the so-called ``non-Studentized-type'' and ``Studentized-type'' confidence regions for $\boldsymbol{\Omega}_{\mathcal{S}}$, respectively. As commented by \cite{ChangZhouZhou_2014a}, the Studentized-type confidence regions perform better than the non-Studentized-type ones when heteroscedasticity is present, while the performance of the non-Studentized-type confidence regions is more stable when the sample size $n$ is fairly small.
In the sequel, we mainly focus on estimating the critical value $q_{\mathcal{S},\alpha,1}$ in (\ref{eq:cr2}); the critical value $q_{\mathcal{S},\alpha,2}$ can be estimated in the same way, as discussed in Remark \ref{re:stude} later. To determine $q_{\mathcal{S},\alpha,1}$, we need to first characterize the probabilistic behavior of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$. Since $\boldsymbol{\Upsilon}_{\mathcal{S}}$ is a higher order term, $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ behaves similarly to $n^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_\infty$ when $n$ is large.
For each $t$, let $\mbox{\boldmath$\varsigma$}_t$ be an $r$-dimensional vector whose $j$th element is $\frac{\epsilon_{\chi_1(j),t}\epsilon_{\chi_2(j),t}-v_{\boldsymbol{\chi}(j)}}{v_{\chi_1(j),\chi_1(j)}v_{\chi_2(j),\chi_2(j)}}$, where $\boldsymbol{\chi}(\cdot)=\{\chi_1(\cdot),\chi_2(\cdot)\}$ is a bijective mapping from $\{1,\ldots,r\}$ to $\mathcal{S}$ such that $\boldsymbol{\Omega}_{\mathcal{S}}=\{\omega_{\boldsymbol{\chi}(1)},\ldots,\omega_{\boldsymbol{\chi}(r)}\}^{ \mathrm{\scriptscriptstyle T} }$. Then, we have
\[
\boldsymbol{\Pi}_{\mathcal{S}}=-\frac{1}{{n}}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t.
\]
Denote by ${\mathbf W}$ the long-run covariance of $\{\mbox{\boldmath$\varsigma$}_t\}_{t=1}^n$, namely,
\begin{equation}\label{eq:W}
{\mathbf W}=\mathbb{E}\bigg\{\bigg(\frac{1}{{n}^{1/2}}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t\bigg)\bigg(\frac{1}{{n}^{1/2}}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t\bigg)^{ \mathrm{\scriptscriptstyle T} }\bigg\}.
\end{equation}
Let $\bfeta_t=(\eta_{1,t},\ldots,\eta_{r,t})^{ \mathrm{\scriptscriptstyle T} }$ where $\eta_{j,t}=\epsilon_{\chi_1(j),t}\epsilon_{\chi_2(j),t}-v_{\boldsymbol{\chi}(j)}$. Then ${\mathbf W}$ specified in (\ref{eq:W}) can be written as
\begin{equation}\label{eq:w}
{\mathbf W}={\mathbf H} \mathbb{E}\bigg\{\bigg(\frac{1}{{n}^{1/2}}\sum_{t=1}^n\bfeta_t\bigg)\bigg(\frac{1}{{n}^{1/2}}\sum_{t=1}^n\bfeta_t\bigg)^{ \mathrm{\scriptscriptstyle T} }\bigg\}{\mathbf H}
\end{equation}
where ${\mathbf H}=\textrm{diag}\{v_{\chi_1(1),\chi_1(1)}^{-1}v_{\chi_2(1),\chi_2(1)}^{-1},\ldots,v_{\chi_1(r),\chi_1(r)}^{-1}v_{\chi_2(r),\chi_2(r)}^{-1}\}$.
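The long-run covariance ${\mathbf W}$ accounts for the autocovariances of $\{\mbox{\boldmath$\varsigma$}_t\}$ at all lags. One generic way to approximate such a quantity from data is a Bartlett lag-window (Newey--West) estimator; the sketch below is purely illustrative and is not necessarily the estimator adopted later in the paper:

```python
import numpy as np

def long_run_cov(X, bandwidth):
    """Bartlett-kernel estimate of the long-run covariance of the rows of
    X (n x r), i.e. Var(n^{-1/2} sum_t x_t) for weakly dependent x_t."""
    n, r = X.shape
    Xc = X - X.mean(axis=0)
    W = Xc.T @ Xc / n                               # lag-0 autocovariance
    for lag in range(1, bandwidth + 1):
        Gamma = Xc[lag:].T @ Xc[:-lag] / n          # lag-l autocovariance
        weight = 1.0 - lag / (bandwidth + 1.0)      # Bartlett taper
        W += weight * (Gamma + Gamma.T)
    return W
```

For serially independent data the lag terms vanish asymptotically and the estimate reduces to the sample covariance.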
To study the asymptotic distribution of the average of the temporally dependent sequence $\{\mbox{\boldmath$\varsigma$}_t\}_{t=1}^n$ and its long-run covariance ${\mathbf W}$, we impose the following condition on $\{\bfeta_t\}_{t=1}^n$.
\begin{as}\label{as:block}
There exist constants $K_4>0$ and $\iota>0$ such that
\[
\begin{split}
&K_4^{-1}<\liminf_{b\rightarrow\infty}\inf_{1\leq \ell\leq n+1-b}\mathbb{E}\bigg(\bigg|\frac{1}{{b}^{1/2}}\sum_{\ell\leq t\leq \ell+b-1}\eta_{j,t}\bigg|^{2+\iota}\bigg)\\
&~~~~~~~~~~~~~~~\leq\limsup_{b\rightarrow\infty}\sup_{1\leq \ell\leq n+1-b}\mathbb{E}\bigg(\bigg|\frac{1}{{b}^{1/2}}\sum_{\ell \leq t\leq \ell+b-1}\eta_{j,t}\bigg|^{2+\iota}\bigg)<K_4
\end{split}
\]
for each $j=1,\ldots,r$.
\end{as}
Condition \ref{as:block} is a technical assumption for the validity of the Gaussian
approximation for dependent data; see \cite{CCK_2014}. Such a condition is mild. Based on Condition 1 and arguing as in Lemma 2 of \cite{ChangTangWu_2013}, we have $\sup_t\max_{1\leq j\leq r}\mathbb{P}(|\eta_{j,t}|>x)\leq C_1\exp(-C_2x^{\gamma_2/2})$ for any $x>0$, where $C_1$ and $C_2$ are two positive constants depending only on $K_1$ and $K_2$ specified in Condition 1. Together with Condition 3, it holds that $b^{-1/2}\sum_{t=\ell}^{\ell+b-1}\eta_{j,t}\rightarrow_dN(0,\sigma_{j,\ell}^2)$, as $b\rightarrow\infty$, for any $j$ and $\ell$. Furthermore, it can be shown that $\mathbb{P}(|b^{-1/2}\sum_{t=\ell}^{\ell+b-1}\eta_{j,t}|\geq x)\leq C_3\exp(-C_4x^{\gamma})$ for any $x>0$, where $C_3$, $C_4$ and $\gamma$ are three positive constants depending only on the uniform constants specified in Conditions 1 and 3. Therefore, $\limsup_{b\rightarrow\infty}\sup_{1\leq \ell\leq n+1-b}\mathbb{E}(|b^{-1/2}\sum_{t=\ell}^{\ell+b-1}\eta_{j,t}|^{2+\iota})<K_4$ holds automatically for some positive constant $K_4$, provided that Conditions 1 and 3 hold. Let $\sigma_{j,\ell}^2(b)=\textrm{Var}(b^{-1/2}\sum_{t=\ell}^{\ell+b-1}\eta_{j,t})$ for any $j$, $\ell$ and $b$. Then for any $j$ and $\ell$, we have $\sigma_{j,\ell}^2(b)\rightarrow\sigma_{j,\ell}^2$ as $b\rightarrow\infty$. If we assume $\sup_{j,\ell}|\sigma_{j,\ell}^2(b)-\sigma_{j,\ell}^2|\rightarrow0$ as $b\rightarrow\infty$, and the $\sigma_{j,\ell}^2$'s are uniformly bounded away from zero for any $j$ and $\ell$, then by Jensen's inequality, $\liminf_{b\rightarrow\infty}\inf_{1\leq \ell\leq n+1-b}\mathbb{E}(|b^{-1/2}\sum_{t=\ell}^{\ell+b-1}\eta_{j,t}|^{2+\iota})>K_4^{-1}$ holds for some positive constant $K_4$. If $\{\eta_{j,t}\}_{t\geq 1}$ is stationary, Conditions 1 and 3 imply $|\sigma_{j,\ell}^2(b)-\sigma_{j,\ell}^2|\leq C_5b^{-1}$ for any $\ell$, where $C_5$ is independent of $j$.
Hence, if $\{\eta_{j,t}\}_{t\geq 1}$ is stationary for each $j$, it holds automatically that $\sup_{j,\ell}|\sigma_{j,\ell}^2(b)-\sigma_{j,\ell}^2|\rightarrow0$ as $b\rightarrow\infty$.
The next theorem shows that the probabilistic behavior of ${n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ can be approximated by that of $|\bxi|_\infty$ for $\bxi\sim N({\mathbf 0},{\mathbf W})$.
\begin{theorem}\label{tm:1}
Let $\bxi\sim N({\mathbf 0},{\mathbf W})$ for ${\mathbf W}$ specified in {\rm(\ref{eq:W})}. Under the conditions of Proposition {\rm\ref{pro:1}} and Condition {\rm\ref{as:block}}, we have
\[
\sup_{x>0}\big|\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)-\mathbb{P}(|\bxi|_\infty>x)\big|\rightarrow0
\]
as $n\rightarrow\infty$, provided that $s^2(\log p)^3n^{-1}=o(1)$ and $\log p=o(n^{\varrho_2})$, where $s=\max_{1\leq j\leq p}|\balpha_j|_0$ and $\varrho_2$ is a positive constant specified in the proof of this theorem in the Appendix.
\end{theorem}
\begin{remark}\label{re:1}
Theorem \ref{tm:1} shows that the Kolmogorov distance between the distributions of ${n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ and $|\bxi|_\infty$ converges to zero in the high-dimensional scenario. More specifically, as shown in the proof of Theorem \ref{tm:1} in the Appendix, the convergence rate is $O(n^{-C})$ for some constant $C>0$.
Under certain conditions, the $L_\infty$-type statistic ${n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ converges weakly to an extreme-value distribution. However, such convergence
usually requires stringent assumptions on the structure of the underlying covariance ${\mathbf W}$ and suffers from low accuracy, since the rate of convergence to the extreme-value distribution is fairly slow. Taking the type I extreme-value distribution as an example, the convergence rate is of order $O\{\log(\log n)/\log n\}$. These defects may cause poor performance of the limiting distribution calibration approach for the $L_\infty$-type statistic in finite samples. In contrast, our approximation strategy does not require such stringent assumptions on ${\mathbf W}$ and enjoys a faster convergence rate.
\end{remark}
Theorem \ref{tm:1} provides a guideline to approximate the distribution of ${n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$. To implement it in practice, we need to propose an estimator for ${\mathbf W}$. Denote by $\boldsymbol{\Xi}$ the matrix sandwiched by the ${\mathbf H}$'s on the right-hand side of (\ref{eq:w}), which is the long-run covariance of $\{\bfeta_t\}_{t=1}^n$. Since $\widehat{v}_{j,j}$ defined in (\ref{eq:hatv}) is $n^{1/2}$-consistent for $v_{j,j}$, we can estimate ${\mathbf H}$ by
\begin{equation}\label{eq:hath}
\widehat{{\mathbf H}}=\textrm{diag}\big\{\widehat{v}_{\chi_1(1),\chi_1(1)}^{-1}\widehat{v}_{\chi_2(1),\chi_2(1)}^{-1},\ldots,\widehat{v}_{\chi_1(r),\chi_1(r)}^{-1}\widehat{v}_{\chi_2(r),\chi_2(r)}^{-1}\big\}.
\end{equation}
Let $\widehat{\bfeta}_t=(\widehat{\eta}_{1,t},\ldots,\widehat{\eta}_{r,t})^{ \mathrm{\scriptscriptstyle T} }$ for $\widehat{\eta}_{j,t}=\widehat{\epsilon}_{\chi_1(j),t}\widehat{\epsilon}_{\chi_2(j),t}-\widehat{v}_{\boldsymbol{\chi}(j)}$, and define
\[
\widehat{\bGamma}_k= \left\{ \begin{aligned}
\frac{1}{n}\sum_{t=k+1}^n\widehat{\bfeta}_t\widehat{\bfeta}_{t-k}^{ \mathrm{\scriptscriptstyle T} },~~~ &k\geq0; \\
\frac{1}{n}\sum_{t=-k+1}^n\widehat{\bfeta}_{t+k}\widehat{\bfeta}_t^{ \mathrm{\scriptscriptstyle T} },~~&k<0.
\end{aligned} \right.
\]
Based on the $\widehat{\bGamma}_k$'s,
we propose a kernel-type estimator suggested by \cite{Andrews_1991} for $\boldsymbol{\Xi}$ as
\begin{equation}\label{eq:hatXi}
\widehat{\boldsymbol{\Xi}}=\sum_{k=-n+1}^{n-1}\mathcal {K}\bigg(\frac{k}{S_n}\bigg)\widehat{\bGamma}_k
\end{equation}
where $S_n$ is the bandwidth, and $\mathcal{K}(\cdot)$ is a symmetric kernel function that is continuous at $0$ and satisfies $\mathcal{K}(0)=1$, $|\mathcal{K}(u)|\leq 1$ for any $u\in\mathbb{R}$, and $\int_{-\infty}^\infty\mathcal{K}^2(u)\,du<\infty$. Given $\widehat{{\mathbf H}}$ and $\widehat{\boldsymbol{\Xi}}$ defined respectively in (\ref{eq:hath}) and (\ref{eq:hatXi}), an estimator for ${\mathbf W}$ is given by
\begin{equation}\label{eq:hatW}
\widehat{{\mathbf W}}=\widehat{{\mathbf H}}\widehat{\boldsymbol{\Xi}}\widehat{{\mathbf H}}.
\end{equation}
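To make the construction concrete, the following sketch computes $\widehat{{\mathbf W}}=\widehat{{\mathbf H}}\widehat{\boldsymbol{\Xi}}\widehat{{\mathbf H}}$ from precomputed residual products; the array names and the Bartlett kernel used here are illustrative choices of ours, not part of the paper's procedure.

```python
import numpy as np

def bartlett(u):
    # Bartlett kernel (1 - |u|)_+ as an illustrative choice of K(.)
    return np.maximum(1.0 - np.abs(u), 0.0)

def long_run_cov_estimate(eta_hat, v_hat_pairs, S_n, kernel=bartlett):
    """W_hat = H_hat Xi_hat H_hat, following (eq:hatXi) and (eq:hatW).

    eta_hat     : (n, r) array; row t holds eta_hat_t
    v_hat_pairs : length-r array; entry j holds the product
                  v_hat_{chi1(j),chi1(j)} * v_hat_{chi2(j),chi2(j)},
                  whose reciprocals form the diagonal of H_hat
    S_n         : bandwidth
    """
    n, r = eta_hat.shape
    xi_hat = np.zeros((r, r))
    for k in range(-n + 1, n):
        if k >= 0:
            gamma_k = eta_hat[k:].T @ eta_hat[:n - k] / n
        else:
            gamma_k = eta_hat[:n + k].T @ eta_hat[-k:] / n
        xi_hat += kernel(k / S_n) * gamma_k
    return xi_hat / np.outer(v_hat_pairs, v_hat_pairs)  # H_hat Xi_hat H_hat
```

Since the Bartlett weights vanish for $|k|>S_n$, the lag loop is effectively of length $O(S_n)$ under this choice.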
In finite samples, Theorem \ref{tm:2} below shows that we can approximate the distribution of ${n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ by that of $|\widehat{\bxi}|_\infty$ for $\widehat{\bxi}\sim N({\mathbf 0},\widehat{{\mathbf W}})$.
\begin{remark}
\cite{Andrews_1991} systematically investigated the theoretical properties of this class of estimators in the fixed dimensional framework, and showed that the Quadratic Spectral kernel
\[
\mathcal{K}_{QS}(u)=\frac{25}{12\pi^2u^2}\bigg\{\frac{\sin(6\pi u/5)}{6\pi u/5}-\cos(6\pi u/5)\bigg\}
\]
is the optimal one for estimating the long-run covariance in the sense of minimizing the asymptotic truncated mean square error. A data-driven bandwidth selection procedure for the Quadratic Spectral kernel was studied in Section 6 of \cite{Andrews_1991}, which can be computed efficiently.
We will adopt the Quadratic Spectral kernel $\mathcal{K}_{QS}(\cdot)$ with its data-driven bandwidth selection procedure in our numerical studies. Both our theoretical and simulation results show that the kernel estimator $\widehat{\boldsymbol{\Xi}}$ performs very well even in high-dimensional scenarios. There also exist various other estimation methods for long-run covariances, including estimators based on the moving block bootstrap \citep{Lahiri_2003,NordmanLahiri_2005}; see also \cite{DenHanLevin_1997} and \cite{Kieferetaj_2000}. Compared to those methods, the kernel-type estimator applied in our procedure has two advantages.
First, it does not require a stationarity assumption on $\{\bfeta_t\}_{t=1}^n$. Second, the form of the kernel-type estimator given in (\ref{eq:hatXi}) can be used to simplify the data generating mechanism for a high-dimensional Gaussian random vector with covariance $\widehat{{\mathbf W}}$, and therefore can significantly improve the computational efficiency when $p$ is large. See Remark \ref{re:gener} below for a more detailed discussion of the computational issues.
\end{remark}
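For completeness, $\mathcal{K}_{QS}(\cdot)$ can be evaluated as follows; the only subtlety is the removable singularity at $u=0$, where the defining limit gives $\mathcal{K}_{QS}(0)=1$. This is an illustrative sketch of ours, not the authors' implementation.

```python
import numpy as np

def qs_kernel(u):
    """Quadratic Spectral kernel of Andrews (1991):

    K_QS(u) = 25/(12 pi^2 u^2) * { sin(6 pi u/5)/(6 pi u/5) - cos(6 pi u/5) },
    with the removable singularity K_QS(0) = 1 handled explicitly.
    """
    u = np.asarray(u, dtype=float)
    z = 6.0 * np.pi * u / 5.0
    with np.errstate(divide="ignore", invalid="ignore"):
        val = 25.0 / (12.0 * np.pi**2 * u**2) * (np.sin(z) / z - np.cos(z))
    return np.where(u == 0.0, 1.0, val)
```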
\begin{theorem}\label{tm:2}
Let $\widehat{\bxi}\sim N({\mathbf 0},\widehat{{\mathbf W}})$ for $\widehat{{\mathbf W}}$ specified in {\rm(\ref{eq:hatW})}. Assume the kernel function $\mathcal{K}(\cdot)$ satisfies $|\mathcal{K}(x)|\asymp |x|^{-\tau}$ as $x\rightarrow\infty$ for some $\tau>1$, and the bandwidth $S_n\asymp n^{\rho}$ for some $0<\rho<\min\{\frac{\tau-1}{3\tau},\frac{\gamma_3}{2\gamma_3+1}\}$ with $\gamma_3$ given in Condition {\rm\ref{as:betamix}}. Under the conditions of Theorem {\rm\ref{tm:1}},
it holds that
\[
\sup_{x>0}\big|\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)-\mathbb{P}\big(|\widehat{\bxi}|_\infty>x|\mathcal{Y}_n\big)\big|\xrightarrow{p}0
\]
as $n\rightarrow\infty$, provided that $s^2(\log p)n^{-1}\max\{S_n^2,(\log p)^2\}=o(1)$ and $\log p=o(n^{\varrho_3})$, where $s=\max_{1\leq j\leq p}|\balpha_j|_0$, $\varrho_3$ is a positive constant specified in the proof of this theorem in the Appendix, and $\mathcal{Y}_n=\{{\mathbf y}_1,\ldots,{\mathbf y}_n\}$.
\end{theorem}
\begin{remark}
From the theoretical properties of the Gaussian approximation technique proposed by \cite{CCK_2013}, we know the result stated in Theorem \ref{tm:2} is valid for any $\widehat{{\mathbf W}}$ satisfying $|\widehat{{\mathbf W}}-{\mathbf W}|_\infty=o_p(1)$. The proposed procedure requires the estimation of a high-dimensional covariance matrix ${\mathbf W}$. However, unlike the widely used estimators in the literature on high-dimensional covariance estimation, our procedure does not require specific structural assumptions, such as sparsity or bandableness, on ${\mathbf W}$. This advantage broadens the applicability of the proposed method.
\end{remark}
In finite samples, we can use Monte Carlo simulation to approximate the distribution of $|\widehat{\bxi}|_\infty$ given the data $\mathcal{Y}_n$. Specifically, let $\widehat{\bxi}_1,\ldots,\widehat{\bxi}_M$ be i.i.d. $r$-dimensional random vectors drawn from $N({\mathbf 0},\widehat{{\mathbf W}})$. Then the conditional distribution of $|\widehat{\bxi}|_\infty$ given $\mathcal{Y}_n$ can be approximated by the empirical distribution of $\{|\widehat{\bxi}_1|_\infty,\ldots,|\widehat{\bxi}_M|_\infty\}$, namely,
\[
\widehat{F}_M(x)=\frac{1}{M}\sum_{m=1}^M\mathbb{I}\big\{|\widehat{\bxi}_m|_\infty\leq x\big\}.
\]
Then, $q_{\mathcal{S},\alpha,1}$ specified in (\ref{eq:cr2}) can be estimated by
\begin{equation}\label{eq:hatq}
\widehat{q}_{\mathcal{S},\alpha,1}=\inf\{x\in \mathbb{R}:\widehat{F}_M(x)\geq 1-\alpha\}.
\end{equation}
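In implementation, $\widehat{q}_{\mathcal{S},\alpha,1}$ is simply the $\lceil(1-\alpha)M\rceil$-th order statistic of the bootstrap maxima; a minimal sketch (the function name is ours):

```python
import math

def empirical_quantile(max_norms, alpha):
    """inf{x : F_hat_M(x) >= 1 - alpha} over the bootstrap values |xi_hat_m|_inf."""
    M = len(max_norms)
    k = math.ceil((1.0 - alpha) * M)   # smallest count achieving F_hat_M >= 1 - alpha
    return sorted(max_norms)[k - 1]    # the k-th order statistic
```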
To improve computational efficiency, we propose the following Kernel based Multiplier Bootstrap (KMB) procedure to generate $\widehat{\bxi}$:
\vspace{-10pt}
\begin{itemize}[leftmargin = 2.35cm, rightmargin=1cm]
\item[{\bf Step 1.}] Let ${\mathbf A}$ be an $n\times n$ matrix whose $(i,j)$-th element is $\mathcal{K}(|i-j|/S_n)$, and generate an $n$-dimensional Gaussian random vector ${\mathbf g}=(g_1,\ldots,g_n)^{ \mathrm{\scriptscriptstyle T} }$ with mean ${\mathbf 0}$ and covariance ${\mathbf A}$.
\vspace{-5pt}
\item[{\bf Step 2.}] Let $\widehat{\bxi}=n^{-1/2}\widehat{{\mathbf H}}(\sum_{t=1}^ng_t\widehat{\bfeta}_t)$ where $\widehat{{\mathbf H}}$ is defined in (\ref{eq:hath}).
\end{itemize}
\vspace{-10pt} It is easy to see that $\widehat{\bxi}\sim N({\mathbf 0},\widehat{{\mathbf W}})$ given $\mathcal{Y}_n$. Note that the KMB procedure only requires generating an $n$-dimensional Gaussian random vector in each bootstrap sample.
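The two steps can be sketched as follows; `eta_hat` and the diagonal of $\widehat{\mathbf H}$ are assumed precomputed, the kernel is supplied by the user, and the small ridge added before the Cholesky factorization is a numerical safeguard of ours, not part of the procedure.

```python
import numpy as np

def kmb_draw(eta_hat, H_diag, S_n, kernel, M, seed=None):
    """M draws of xi_hat ~ N(0, W_hat) given the data, via the KMB procedure.

    Step 1: g ~ N(0, A) with A_{ij} = K(|i - j| / S_n).
    Step 2: xi_hat = n^{-1/2} H_hat (sum_t g_t eta_hat_t).
    """
    rng = np.random.default_rng(seed)
    n, r = eta_hat.shape
    idx = np.arange(n)
    A = kernel(np.abs(idx[:, None] - idx[None, :]) / S_n)
    L = np.linalg.cholesky(A + 1e-10 * np.eye(n))   # ridge guards against round-off
    G = rng.standard_normal((M, n)) @ L.T           # each row is one N(0, A) draw
    return (G @ eta_hat) * H_diag / np.sqrt(n)      # (M, r): each row is one xi_hat
```

Only an $n$-dimensional Gaussian vector is drawn per bootstrap sample, in line with the computational discussion in Remark \ref{re:gener}.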
\begin{remark}\label{re:gener}
The classical approach to drawing a random vector $\widehat{\bxi}\sim N({\mathbf 0},\widehat{{\mathbf W}})$ consists of three steps: (i) perform the Cholesky decomposition of the $r\times r$ matrix $\widehat{{\mathbf W}}={\mathbf L}^{ \mathrm{\scriptscriptstyle T} }{\mathbf L}$, (ii) generate $r$ independent standard normal random variables ${\mathbf z}=(z_1,\ldots,z_r)^{ \mathrm{\scriptscriptstyle T} }$, and (iii) apply the transformation $\widehat{\bxi}={\mathbf L}^{ \mathrm{\scriptscriptstyle T} }{\mathbf z}$. It thus requires storing the matrix $\widehat{{\mathbf W}}$ and $\{\widehat{\bfeta}_t\}_{t=1}^n$, which amounts to storage costs of $O(r^2)$ and $O(rn)$, respectively. The computational complexity is $O(r^2n+r^3)$, mainly due to computing $\widehat{{\mathbf W}}$ and the Cholesky decomposition. When $r=O(p^2)$ and $p$ is large, the classical approach requires intensive computation and large storage. In contrast, the new data generating mechanism KMB makes our procedure practically feasible even when $p$ is large. Notice that the proposed KMB procedure only needs to store $\{\widehat{\bfeta}_t\}_{t=1}^n$ and ${\mathbf A}$, and to draw an $n$-dimensional random vector
${\mathbf g}\sim N({\mathbf 0},{\mathbf A})$ in each bootstrap sample, which amounts to a total storage cost of $O(rn+n^2)$. More significantly, the computational complexity of the KMB procedure is only $O(n^3)$, which is independent of $r$ and $p$.
\end{remark}
\begin{remark}\label{re:stude}
For the Studentized-type confidence regions $\mathcal{C}_{\mathcal{S},\alpha,2}$ defined in (\ref{eq:cr2}), we can choose the diagonal matrix $\widehat{{\mathbf D}}=\{{\rm diag}(\widehat{{\mathbf W}})\}^{1/2}$ for $\widehat{{\mathbf W}}$ specified in (\ref{eq:hatW}). Correspondingly, for $\widehat{\bxi}\sim N({\mathbf 0},\widehat{{\mathbf D}}^{-1}\widehat{{\mathbf W}}\widehat{{\mathbf D}}^{-1})$, it can be proved similarly to Theorem \ref{tm:2} that
\[
\sup_{x>0}\big|\mathbb{P}\big\{{n}^{1/2}|\widehat{{\mathbf D}}^{-1} (\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}) |_\infty>x\big\}-\mathbb{P}(|\widehat{\bxi}|_\infty>x|\mathcal{Y}_n)\big|\xrightarrow{p}0~~\textrm{as}~~n\rightarrow\infty.
\]
Thus, to approximate the distribution of ${n}^{1/2}|\widehat{{\mathbf D}}^{-1} (\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}) |_{\infty}$, we only need to replace the Step 2 in the KMB procedure by \vspace{-10pt}
\begin{itemize}[leftmargin = 2.35cm, rightmargin=1cm]
\item[{\bf Step 2$^{\prime}$.}] Let $\widehat{\bxi} = n^{-1/2}\widehat{{\mathbf D}}^{-1} \widehat{{\mathbf H}}(\sum_{t=1}^n g_t\widehat{\bfeta}_t)$ where $\widehat{{\mathbf H}}$ is defined in (\ref{eq:hath}).
\end{itemize}
\vspace{-10pt} Based on the i.i.d.\ random vectors $\widehat{\bxi}_1,\ldots,\widehat{\bxi}_M$ generated by Steps 1 and $2^{\prime}$, we can estimate $q_{\mathcal{S},\alpha,2}$ via $\widehat{q}_{\mathcal{S},\alpha,2}$, computed in the same way as $\widehat{q}_{\mathcal{S},\alpha,1}$ in (\ref{eq:hatq}). We call the procedure combining Steps 1 and $2^{\prime}$ the Studentized Kernel based Multiplier Bootstrap (SKMB).
\end{remark}
\section{Applications}\label{se:app}
\subsection{Testing structures of $\boldsymbol{\Omega}$}
As discussed in Section \ref{se:background}, in many statistical applications, we are interested in exploring and detecting specific structures of the precision matrix $\boldsymbol{\Omega}=(\omega_{j_1,j_2})_{p\times p}$. Given an index set $\mathcal{S}$ of interest and a set of pre-specified constants $\{c_{j_1,j_2}\}$, we consider testing the hypotheses
\[
H_0:\omega_{j_1,j_2}=c_{j_1,j_2}~~\textrm{for any}~(j_1,j_2)\in\mathcal{S}~~~~~\textrm{vs.}~~~~~H_1:\omega_{j_1,j_2} \neq c_{j_1,j_2}~~\textrm{for some}~(j_1,j_2)\in\mathcal{S}.
\]
Let $r=|\mathcal{S}|$, and ${\mathbf c}=\{c_{\boldsymbol{\chi}(1)},\ldots,c_{\boldsymbol{\chi}(r)}\}^{ \mathrm{\scriptscriptstyle T} }$ where $\boldsymbol{\chi}(\cdot)=\{\chi_1(\cdot),\chi_2(\cdot)\}$ is a bijective mapping from $\{1,\ldots,r\}$ to $\mathcal{S}$ such that $\boldsymbol{\Omega}_{\mathcal{S}}=\{\omega_{\boldsymbol{\chi}(1)},\ldots,\omega_{\boldsymbol{\chi}(r)}\}^{ \mathrm{\scriptscriptstyle T} }$. A usual choice of ${\mathbf c}$ is the zero vector, corresponding to the test for non-zero structures of $\boldsymbol{\Omega}$. Given a prescribed level $\alpha\in(0,1)$, define $\Psi_\alpha=\mathbb{I}\{{\mathbf c}\notin\mathcal{C}_{\mathcal{S}, 1-\alpha, 1}\}$ for $\mathcal{C}_{\mathcal{S}, 1-\alpha, 1}$ specified in (\ref{eq:cr2}). Then, we reject the null hypothesis $H_0$ at level $\alpha$ if $\Psi_\alpha=1$. This procedure is equivalent to the test based on the $L_\infty$-type statistic $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf c}|_\infty$ that rejects $H_0$ if $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf c}|_\infty > \widehat{q}_{\mathcal{S},1-\alpha,1}$. $L_\infty$-type statistics are widely used in testing high-dimensional means and
covariances. See, for example, \cite{CJ_2011}, \cite{CLX_2013} and \cite{ChangZhouZhou_2014a,ChangZhouZhouWang_2017}. The following corollary gives the empirical size and power of the proposed testing procedure $\Psi_\alpha$.
\begin{cy}\label{cy:1}
Assume conditions of Theorem {\rm\ref{tm:2}} hold. It holds that: {\rm(i)} $\mathbb{P}_{H_0}(\Psi_\alpha=1)\rightarrow\alpha$ as $n\rightarrow\infty$; {\rm(ii)} if $\max_{(j_1,j_2)\in\mathcal{S}}|\omega_{j_1,j_2}-c_{j_1,j_2}|\geq C(n^{-1}\log p)^{1/2}\max_{1\leq j\leq r}w_{j,j}^{1/2}$ where $w_{j,j}$ is the $j$th component in the diagonal of ${\mathbf W}$ defined in {\rm(\ref{eq:W})}, and $C$ is a constant larger than $\sqrt{2}$, then $\mathbb{P}_{H_1}(\Psi_\alpha=1)\rightarrow1$ as $n\rightarrow\infty$.
\end{cy}
From Corollary \ref{cy:1}, we see that the empirical size of the proposed testing procedure $\Psi_\alpha$ converges to its nominal level $\alpha$ under $H_0$.
Compared with the proposed test, an $L_\infty$-type test that uses limiting distribution calibration to determine the critical value $\widehat{q}_{\mathcal{S},1-\alpha,1}$ suffers from size distortion, due to the slow convergence rate of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf c}|_\infty$ to its limiting distribution (see Remark \ref{re:1}).
Since the KMB procedure enjoys a much faster convergence rate, the proposed test $\Psi_\alpha$ has more accurate size.
The condition $\max_{(j_1,j_2)\in\mathcal{S}}|\omega_{j_1,j_2}-c_{j_1,j_2}|\geq C(n^{-1}\log p)^{1/2}\max_{1\leq j\leq r}w_{j,j}^{1/2}$ specifies the maximal deviation of the precision matrix from the null hypothesis $H_0:\omega_{j_1,j_2}=c_{j_1,j_2}$ for any $(j_1,j_2)\in\mathcal{S}$.
The power of the proposed test $\Psi_\alpha$ will approach 1 if this maximal signal strength is larger than $C_6(n^{-1}\log p)^{1/2}$ for some positive constant $C_6$.
Such a condition on signal strength is commonly assumed for studying the power of the $L_\infty$-type test. See \cite{CJ_2011}, \cite{CLX_2013} and \cite{ChangZhouZhou_2014a,ChangZhouZhouWang_2017}.
A ``Studentized-type" test can be similarly constructed via replacing $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf c}|_\infty$ and $\widehat{q}_{\mathcal{S},1-\alpha,1}$ by $n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-{\mathbf c})|_\infty$ and $\widehat{q}_{\mathcal{S},1-\alpha,2}$ in (\ref{eq:cr2}), respectively.
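Once the bootstrap critical value $\widehat{q}_{\mathcal{S},1-\alpha,1}$ is in hand, the decision rule $\Psi_\alpha$ reduces to a single comparison; a minimal sketch with illustrative inputs:

```python
import numpy as np

def linf_test(omega_hat_S, c, n, q_hat):
    """Reject H0: Omega_S = c iff n^{1/2} |Omega_hat_S - c|_inf > q_hat."""
    stat = np.sqrt(n) * np.max(np.abs(np.asarray(omega_hat_S) - np.asarray(c)))
    return stat > q_hat, stat
```

Taking `c` to be the zero vector corresponds to testing for the absence of edges over $\mathcal{S}$.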
\subsection{Support recovering of $\boldsymbol{\Omega}$}
In studying partial correlation networks or GGMs, a common interest is to identify the edges between nodes. This is equivalent to recovering the non-zero components in the associated precision matrix. Let $\mathcal{M}_0=\{(j_1,j_2):\omega_{j_1,j_2}\neq 0\}$ be the set of indices with non-zero precision coefficients, and choose $\mathcal{S}=\{1,\ldots,p\}^2$. Note that $\mathcal{C}_{\mathcal{S},\alpha,1}$ provides simultaneous confidence intervals for all entries of $\boldsymbol{\Omega}$. To recover the set $\mathcal{M}_0$ consistently, we choose those precision coefficients whose confidence intervals do not include zero. For any $m$-dimensional vector ${\mathbf u}=(u_1,\ldots,u_m)^{ \mathrm{\scriptscriptstyle T} }$, let $\mbox{supp}({\mathbf u})=\{j:u_j\neq 0\}$ be the support set of ${\mathbf u}$. Recall that $\boldsymbol{\chi}(\cdot)=\{\chi_1(\cdot),\chi_2(\cdot)\}$ is a bijective mapping from $\{1,\ldots,r\}$ to $\mathcal{S}$ such that $\boldsymbol{\Omega}_{\mathcal{S}}=\{\omega_{\boldsymbol{\chi}(1)},\ldots,\omega_{\boldsymbol{\chi}(r)}\}^{ \mathrm{\scriptscriptstyle T} }$. For any $\alpha\in(0,1)$, let
\[
\widehat{\mathcal{M}}_{n,\alpha}=\bigg\{\boldsymbol{\chi}^{-1}(l):l\in\bigcap_{{\mathbf u}\in\mathcal{C}_{\mathcal{S},1-\alpha, 1}}\mbox{supp}({\mathbf u})\bigg\}
\]
be the estimate of $\mathcal{M}_0$.
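Reading the non-Studentized region in (\ref{eq:cr2}) as an $L_\infty$-ball of radius $n^{-1/2}\widehat{q}_{\mathcal{S},1-\alpha,1}$ centered at $\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}$ (our interpretation), the intersection of supports above reduces to keeping the entries whose confidence intervals exclude zero; a minimal sketch:

```python
import numpy as np

def recover_support(omega_hat_S, n, q_hat):
    """Indices l for which 0 lies outside omega_hat_l +/- q_hat / sqrt(n)."""
    return np.flatnonzero(np.abs(np.asarray(omega_hat_S)) > q_hat / np.sqrt(n))
```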
In our context, a false positive means declaring a zero $\omega_{j_1,j_2}$ to be non-zero. Let $\mbox{FP}$ be the number of false positive errors committed by the estimated signal set $\widehat{\mathcal{M}}_{n,\alpha}$, and let the family wise error rate (FWER) be the probability of making any false positive error, namely, $\mbox{FWER} = \mathbb{P}(\mbox{FP} > 0)$. See \cite{HT_2009} for various types of error rates in multiple testing procedures. Notice that $\mathbb{P}(\mbox{FP} > 0) \leq \mathbb{P}(\boldsymbol{\Omega}_{\mathcal{S}} \not\in \mathcal{C}_{\mathcal{S},1-\alpha,1}) = \alpha\{1 + o(1)\}$. This shows that the proposed method is able to control the family wise error rate at level $\alpha$ for any $\alpha\in(0,1)$. The following corollary further shows the consistency of $\widehat{\mathcal{M}}_{n,\alpha}$.
\begin{cy}\label{cy:2}
Assume conditions of Theorem {\rm\ref{tm:2}} hold, and the signals satisfy $\min_{(j_1,j_2)\in\mathcal{M}_0}|\omega_{j_1,j_2}|\geq C(n^{-1}\log p)^{1/2}\max_{1\leq j\leq r}w_{j,j}^{1/2}$ where $w_{j,j}$ is the $j$th component in the diagonal of ${\mathbf W}$ defined in {\rm(\ref{eq:W})}, and $C$ is a constant larger than $\sqrt{2}$. Selecting $\alpha\rightarrow0$ such that $1/\alpha=o(p)$, it holds that $\mathbb{P}(\widehat{\mathcal{M}}_{n,\alpha}=\mathcal{M}_0)\rightarrow1$ as $n\rightarrow\infty$.
\end{cy}
From Corollary \ref{cy:2}, we see that the selected set $\widehat{\mathcal{M}}_{n,\alpha}$ can identify the true set $\mathcal{M}_0$ consistently if the minimum signal strength $\min_{(j_1,j_2)\in\mathcal{M}_0}|\omega_{j_1,j_2}|$ is larger than $C_7(n^{-1}\log p)^{1/2}$ for some positive constant $C_7$.
Notice from Corollary \ref{cy:1} that only the maximum signal is required in the power analysis of the proposed testing procedure.
Compared to signal detection, signal identification is a more challenging problem: full support recovery of $\boldsymbol{\Omega}$ requires all non-zero $|\omega_{j_1,j_2}|$ to be larger than a certain level.
Similarly, we can also define $\widehat{\mathcal{M}}_{n,\alpha}$ via replacing $\mathcal{C}_{\mathcal{S},1-\alpha,1}$ by its ``Studentized-type'' analogue $\mathcal{C}_{\mathcal{S},1-\alpha,2}$ in (\ref{eq:cr2}).
\section{Numerical study}\label{se:simulation}
In this section, we evaluate the performance of the proposed KMB and SKMB procedures in finite samples.
Let $\boldsymbol{\varepsilon}_{1}, \ldots, \boldsymbol{\varepsilon}_{n}$ be i.i.d. $p$-dimensional samples from $N({\mathbf 0}, \boldsymbol{\Sigma})$. The observed data were generated from the model ${\mathbf y}_1=\boldsymbol{\varepsilon}_1$ and ${\mathbf y}_t=\rho{\mathbf y}_{t-1}+(1-\rho^2)^{1/2}\boldsymbol{\varepsilon}_t$ for $t\geq2$. The parameter $\rho$, which controls the strength of temporal dependence among the observations, was set to $0$ and $0.3$. We chose the sample size $n=150$ and $300$, and the dimension $p=100$, $500$ and $1500$ in the simulation. Let $\boldsymbol{\Sigma}=\{\textrm{diag}(\boldsymbol{\Sigma}_*^{-1})\}^{1/2}\boldsymbol{\Sigma}_*\{\textrm{diag}(\boldsymbol{\Sigma}_*^{-1})\}^{1/2}$ for a positive definite matrix $\boldsymbol{\Sigma}_*$. The following two settings were considered for $\boldsymbol{\Sigma}_*=(\sigma_{j_1,j_2}^*)_{1\leq j_1,j_2\leq p}$. \vspace{-10pt}
\begin{itemize}[leftmargin = 1.35cm, rightmargin=1cm]
\item[{\bf A}.] Let $\sigma_{j_1,j_2}^* = 0.5^{|j_1-j_2|}$ for any $1\leq j_1,j_2\leq p$.
\vspace{-5pt}
\item[{\bf B}.] Let $\sigma_{j,j}^*=1$ for any $j=1,\ldots,p$, $\sigma_{j_1,j_2}^*=0.5$ for $5(h-1)+1\leq j_1\neq j_2\leq 5h$, where $h=1,\ldots,p/5$, and $\sigma_{j_1,j_2}^*=0$ otherwise.
\end{itemize}
\vspace{-10pt} Structures A and B lead to, respectively, banded and block diagonal structures for the precision matrix $\boldsymbol{\Omega}=\boldsymbol{\Sigma}^{-1}$. Note that, based on the covariance $\boldsymbol{\Sigma}$ so defined, the diagonal elements of the precision matrix are all equal to one. For each of the precision matrices, we considered two choices for the index set $\mathcal{S}$: (i) all zero components of $\boldsymbol{\Omega}$, i.e. $\mathcal{S}=\{(j_1,j_2):\omega_{j_1,j_2}=0\}$, and (ii) all the components excluding the ones on the main diagonal, i.e. $\mathcal{S}=\{(j_1,j_2):j_1\neq j_2\}$.
Notice that the sets of all zero components in $\boldsymbol{\Omega}$ for structures A and B are $\{(j_1,j_2): |j_1-j_2|>1\}$ and $\cap_{h=1}^{p/5}\{(j_1,j_2): 5(h-1)+1\leq j_1, j_2\leq 5h\}^{c}$, respectively.
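The data generating mechanism above can be sketched as follows; structure A is shown, and structure B only changes the construction of $\boldsymbol{\Sigma}_*$.

```python
import numpy as np

def make_sigma_A(p):
    # Structure A: sigma*_{j1,j2} = 0.5^{|j1 - j2|}, then rescale so that
    # Sigma = diag(Sigma*^{-1})^{1/2} Sigma* diag(Sigma*^{-1})^{1/2},
    # which makes the diagonal of Omega = Sigma^{-1} equal to one.
    idx = np.arange(p)
    sigma_star = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    d = np.sqrt(np.diag(np.linalg.inv(sigma_star)))
    return sigma_star * np.outer(d, d)

def generate_ar1_sample(n, p, rho, seed=None):
    # y_1 = eps_1 and y_t = rho * y_{t-1} + sqrt(1 - rho^2) * eps_t,
    # with eps_t i.i.d. N(0, Sigma)
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(make_sigma_A(p))
    eps = rng.standard_normal((n, p)) @ L.T
    y = np.empty((n, p))
    y[0] = eps[0]
    for t in range(1, n):
        y[t] = rho * y[t - 1] + np.sqrt(1.0 - rho**2) * eps[t]
    return y
```

The $(1-\rho^2)^{1/2}$ scaling keeps the marginal covariance of ${\mathbf y}_t$ equal to $\boldsymbol{\Sigma}$ for every $t$.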
From the asymptotic expansion of $\widehat{\omega}_{j_1,j_2}$ given in Proposition \ref{pro:1}, under the Gaussian assumption of ${\mathbf y}_t$, it holds that $\textrm{Var}\{n^{1/2}(\widehat{\omega}_{j_1,j_2}-\omega_{j_1,j_2})\}=\frac{1+\rho^2}{1-\rho^2}\{1+o(1)\}$ for any $(j_1,j_2)$ satisfying $\omega_{j_1,j_2}=0$.
Therefore, the index sets $\mathcal{S}$ in the setting (i) and (ii) mimic, respectively, the homogeneous and heteroscedastic cases for the variances of $n^{1/2}(\widehat{\omega}_{j_1,j_2}-\omega_{j_1,j_2})$ among $(j_1,j_2)\in\mathcal{S}$.
For each of the cases above, we examined the accuracy of the proposed KMB and SKMB approximations to the distributions of the non-Studentized-type statistic $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ and the Studentized-type statistic $n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})|_\infty$, respectively. More specifically, we first drew 1000 independent samples, each of size $n$, by the data generating mechanism discussed above. We then computed $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ and $n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})|_\infty$ in each sample and employed their empirical distributions as the benchmark for their true distributions. For $\alpha=0.075, 0.050$ and $0.025$, we applied the KMB and SKMB procedures in each sample to estimate the $100(1-\alpha)\%$ quantiles of
$n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ and $n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})|_\infty$, respectively, with $M=3000$ bootstrap replications for each procedure. Based on the benchmark distributions given above, we computed the associated empirical coverages for the estimated quantiles in each sample.
We report the averages and standard deviations of such determined 1000 empirical coverages in Tables \ref{tb:p100}--\ref{tb:p1500} corresponding to the cases $p=100$, $p=500$ and $p=1500$, respectively.
It is worth noting that, to accomplish the statistical computing for large $p$ under the R environment at high speed,
we implemented the generation of random numbers and most loops as C functions, which we called from R via the ``.C()'' routine. However, the computation of the two types of statistics involves fitting the $p$ node-wise regressions. As a consequence, the simulation for large $p$ still requires a large amount of computation time. For instance, out of 1000 simulations, one simulation took more than 6 hours to complete when $p=1500$ and $n=300$. To overcome this time-consuming issue, the computation in this numerical study was undertaken with the assistance of the supercomputer Raijin at the NCI National Facility systems supported by the Australian Government. The supercomputer Raijin comprises 57,864 cores, which allowed us to run a large number of simulations in parallel.
From Tables \ref{tb:p100}--\ref{tb:p1500}, we observe that, for both KMB and SKMB procedures, the overall differences between the empirical coverage rates and the corresponding nominal levels are small, which demonstrates that the KMB and SKMB procedures can provide accurate approximations to the distributions of $n^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ and $n^{1/2}|\widehat{{\mathbf D}}^{-1}(\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}})|_\infty$, respectively. Also note that the coverage rates improve as $n$ increases. And, our results are robust to the temporal dependence parameter $\rho$, which indicates the proposed procedures are adaptive to time dependent observations.
Comparing the simulation results for KMB and SKMB in the category $\mathcal{S}=\{(j_1,j_2):j_1\neq j_2\}$ of Tables \ref{tb:p100}--\ref{tb:p1500}, when the dimension is less than the sample size ($p=100$, $n=150, 300$), we can see that the SKMB procedure has better accuracy than the KMB procedure when heteroscedasticity is present. This finding also holds when the dimension exceeds the sample size and both of them are large ($n=300$, $p=1500$). For the homogeneous case $\mathcal{S}=\{(j_1,j_2):\omega_{j_1,j_2}=0\}$, the KMB procedure provides better accuracy than the SKMB procedure when the sample size is small ($n=150$). However, when the sample size becomes larger $(n=300)$, the accuracy of the SKMB procedure improves significantly and it outperforms the KMB procedure. The phenomenon that the SKMB procedure sometimes cannot beat the KMB procedure might be caused by incorporating the estimated standard deviations of the $\widehat{\omega}_{j_1,j_2}$'s in the denominator of the Studentized-type statistic, which suffer from high variability when the sample size is small. The simulation results suggest that: (i) when the dimension is less than the sample size, or both the dimension and the sample size are very large, the SKMB procedure should be used to construct the confidence regions of $\boldsymbol{\Omega}_{\mathcal{S}}$ if heteroscedasticity is present; (ii) if the sample size is small and there is prior information that heteroscedasticity is absent, then the KMB procedure should be used to construct the confidence regions of $\boldsymbol{\Omega}_{\mathcal{S}}$. However, even in the homogeneous case, the SKMB procedure should still be employed when the sample size is large. In practice, if the dimension is less than the sample size, or both the dimension and the sample size are very large, we may select the SKMB procedure; otherwise, we may select the KMB procedure.
\section{Real data analysis}\label{se:case}
In this section, we follow Example 3 in Section \ref{se:background} to study the partial correlation networks of the Standard and Poors (S\&P) 500 Component Stocks in 2005 (252 trading days, preceding the crisis) and in 2008 (253 trading days, during the crisis), respectively. The reason to analyze those two periods is to understand the structure and dynamic of financial networks affected by the global financial crisis \citep{Schweitzer_2009}. \cite{Ait-Sahalia_2015} analyze the data in 2005 and 2008 as well in order to investigate the influence of the financial crisis.
The S\&P 500 companies are 500 large-capitalization companies. Their stocks are traded on American stock exchanges, and cover about 75\% of the American equity market by capitalization.
We analyze the data from {http://quote.yahoo.com/} via the R package {\it tseries}, which contains the daily closing prices of S\&P 500 stocks. The R command {\it get.hist.quote} can be used to acquire the data. We keep in our analysis the 402 stocks whose closing prices can be downloaded by this R command and have no missing values during 2005 and 2008.
Let $y_{j,t}$ be the $j$th stock price at day $t$. We consider the log return of the stocks, which is defined by $\log(y_{j, t}) - \log(y_{j, t-1})$. Via standardizing the log return of each stock by its mean and standard deviation, we can obtain the standardized log returns ${\mathbf R}_{t} = (R_{1, t}, \ldots, R_{402, t})^{ \mathrm{\scriptscriptstyle T} }$ of all the 402 assets at day $t$.
It is widely acknowledged that the stock return is influenced by its performance in the past time.
Thus, the S\&P 500 data are time dependent.
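As an illustrative sketch (not the authors' code), the standardization step above can be written as follows; the price matrix {\tt prices} here is a hypothetical placeholder rather than the actual S\&P 500 data.

```python
import numpy as np

def standardized_log_returns(prices):
    """prices: (T, p) array of daily closing prices.
    Returns the (T - 1, p) matrix of standardized log returns."""
    log_ret = np.diff(np.log(prices), axis=0)  # log(y_{j,t}) - log(y_{j,t-1})
    # standardize each stock's log returns by its own mean and standard deviation
    return (log_ret - log_ret.mean(axis=0)) / log_ret.std(axis=0, ddof=1)

# hypothetical prices for 3 stocks over 252 trading days
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0, 0.01, size=(252, 3)), axis=0))
R = standardized_log_returns(prices)
```

Each column of the resulting matrix then has mean zero and unit sample standard deviation.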
Let $\boldsymbol{\Omega}=(\omega_{j_1,j_2})_{p\times p}$ be the precision matrix of ${\mathbf R}_{t}$.
By the relationship between partial correlation and precision matrix, the partial correlation network can be constructed by the non-zero precision coefficients $\omega_{j_1,j_2}$ as demonstrated in Example 3 in Section \ref{se:background}.
Under the Gaussian graphical model, a zero precision coefficient represents the conditional independence of two assets given all the other assets.
To learn the structures of $\boldsymbol{\Omega}$, we focus on the Global Industry Classification Standard (GICS) sectors and their sub industries of the S\&P 500 companies, and aim to discover the sub blocks of $\boldsymbol{\Omega}$ which are nonzero. Those blocks can help us build the partial correlation networks of the sectors and sub industries for the S\&P 500 stocks in 2005 and 2008, respectively.
The advantage of investigating the complex financial network system by partial correlation is to overcome the issue that the marginal correlation between two stocks might be a result of their correlations to other mediating stocks \citep{Kenett_2010}.
For example, if two stocks $R_{j_1,t}$ and $R_{j_2,t}$ are both correlated with some stocks in the set ${\mathbf R}_{-(j_1,j_2),t}=\{R_{j,t}: j\neq j_1,j_2 \}$, the partial correlation can suitably remove the linear effect of ${\mathbf R}_{-(j_1,j_2),t}$ on $R_{j_1,t}$ and $R_{j_2,t}$.
Hence, it measures a ``direct'' relationship between $j_1$ and $j_2$ \citep{DeLaFuente_2004}.
The partial correlation analysis is widely used in the study of financial networks \citep{Shapira_2009,Kenett_2010}, as well as the study of gene networks \citep{DeLaFuente_2004,Reverter_2008,Chen_2009}.
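The standard identity relating partial correlations to the entries of the precision matrix, $\rho_{j_1,j_2\cdot}=-\omega_{j_1,j_2}/(\omega_{j_1,j_1}\omega_{j_2,j_2})^{1/2}$, can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def partial_correlations(omega):
    """Partial correlation matrix from a precision matrix Omega:
    rho_{j1,j2} = -omega_{j1,j2} / sqrt(omega_{j1,j1} * omega_{j2,j2})."""
    d = np.sqrt(np.diag(omega))
    rho = -omega / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# tridiagonal precision matrix: assets 1 and 3 are conditionally independent
omega = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
rho = partial_correlations(omega)
```

The zero off-diagonal precision entry $\omega_{1,3}=0$ translates directly into a zero partial correlation, which is exactly the ``direct'' relationship interpretation discussed above.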
Based on the information on Bloomberg and the ``List of S\&P 500 companies'' on Wikipedia, we identify 10 major sectors with 54 sub industries of the S\&P 500 companies (see Table \ref{tb:stock1} and Table \ref{tb:stock2} for detailed categories).
The 10 sectors are Consumer Discretionary, Consumer Staples, Energy, Financials, Health Care, Industrials, Information Technology, Materials, Telecommunication Services and Utilities.
There is 1 company with an unidentified sector and 8 companies with unidentified sub industries due to acquisitions or ticker changes (represented by ``NA'' in Table \ref{tb:stock1} and Table \ref{tb:stock2}).
To explore the partial correlation networks of different sectors and sub industries, we are interested in a set of hypotheses
\begin{equation}\begin{split}
H_{h_1h_2, 0}:& \ \omega_{j_1,j_2} = 0 \mbox{ \ for any $(j_1, j_2) \in I_{h_1} \times I_{h_2}$ \ versus \ } \\
H_{h_1h_2, 1}:& \ \omega_{j_1,j_2} \neq 0 \mbox{ \ for some $(j_1, j_2) \in I_{h_1} \times I_{h_2}$}
\end{split}\label{eq:SP500}\end{equation}
for disjoint index sets $\{I_{1}, \ldots, I_{H}\}$, which represent different sectors or sub industries.
For each of the hypotheses in (\ref{eq:SP500}), we calculate the Studentized-type statistic $\sqrt{n}|\widehat{{\mathbf D}}^{-1}\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}|_\infty$ in (\ref{eq:cr2}), where $\mathcal{S} = I_{h_1} \times I_{h_2}$, and
apply the SKMB procedure to obtain $M = 10000$ bootstrap samples $\widehat{\bxi}_1,\ldots,\widehat{\bxi}_M$.
For each of the bootstrap samples, we compute their maximum norms $\{|\widehat{\bxi}_1|_\infty,\ldots,|\widehat{\bxi}_M|_\infty\}$.
The P-value of the hypothesis (\ref{eq:SP500}) is
$$\operatorname{P-value}_{h_1,h_2} = \frac{1}{M} \sum_{m = 1}^{M} \mathbb{I}\{ |\widehat{\bxi}_m|_\infty \geq \sqrt{n}|\widehat{{\mathbf D}}^{-1}\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}|_\infty \} \mbox{ \ for \ } \mathcal{S} = I_{h_1} \times I_{h_2}.$$
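The P-value computation above is a simple empirical tail fraction of the bootstrap maximum norms. A minimal sketch (with hypothetical bootstrap draws standing in for $|\widehat{\bxi}_1|_\infty,\ldots,|\widehat{\bxi}_M|_\infty$):

```python
import numpy as np

def bootstrap_pvalue(observed_stat, boot_max_norms):
    """Fraction of bootstrap maximum norms that reach the observed
    Studentized statistic sqrt(n) * |D^{-1} Omega_S|_inf."""
    return float(np.mean(np.asarray(boot_max_norms) >= observed_stat))

# hypothetical draws standing in for the M = 10000 SKMB maximum norms
rng = np.random.default_rng(1)
boot_max_norms = np.abs(rng.normal(size=10000))
pval = bootstrap_pvalue(1.96, boot_max_norms)
```

A larger observed statistic relative to the bootstrap distribution yields a smaller P-value, as expected.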
To identify the significant blocks, we apply the multiple testing procedure of \citet{Benjamini_1995}, which controls the false discovery rate (FDR) of (\ref{eq:SP500}) at the level $\alpha=0.1$.
Let $\operatorname{pvalue}_{(1)} \leq \cdots \leq \operatorname{pvalue}_{(K)}$ be the ordered P-values and $H_{(1),0}, \ldots, H_{(K),0}$ be the corresponding null hypotheses, where $K = H(H - 1) / 2$ is the number of hypotheses under our consideration.
Note that $K = 45$ for testing sector blocks and $K = 1431$ for testing sub industry blocks.
We reject $H_{(1),0}, \ldots, H_{(v),0}$ in (\ref{eq:SP500}) for $v = \max\{j : \operatorname{pvalue}_{(j)} \leq \alpha j / K \}$.
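The step-up rule above can be sketched as follows (an illustrative implementation of the \citet{Benjamini_1995} procedure, with hypothetical P-values):

```python
import numpy as np

def bh_rejections(pvalues, alpha=0.1):
    """Benjamini-Hochberg step-up rule: reject the v hypotheses with the
    smallest P-values, where v = max{j : p_(j) <= alpha * j / K}."""
    p = np.asarray(pvalues, dtype=float)
    K = p.size
    order = np.argsort(p)
    # indices (in sorted order) whose P-values fall below the BH line
    passed = np.nonzero(p[order] <= alpha * np.arange(1, K + 1) / K)[0]
    reject = np.zeros(K, dtype=bool)
    if passed.size > 0:
        reject[order[:passed[-1] + 1]] = True
    return reject

rejected = bh_rejections([0.001, 0.02, 0.9, 0.04], alpha=0.1)
```

Note that all hypotheses up to the largest index crossing the line $\alpha j/K$ are rejected, even if some intermediate sorted P-value exceeds its own threshold.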
We construct the partial correlation networks based on the significant blocks from the above multiple testing procedure. The estimated networks are shown in Figures \ref{fg:2005} and \ref{fg:2008}, corresponding to 2005 and 2008, respectively.
The left panels in Figures \ref{fg:2005} and \ref{fg:2008} give the partial correlation networks of the sectors, where the nodes represent the 10 sectors, and two nodes (sectors) $h_1$ and $h_2$ are connected if and only if the precision matrix $\boldsymbol{\Omega}$ on their crossing block $I_{h_1} \times I_{h_2}$ is significantly non-zero.
The right panels are the partial correlation networks of the 54 sub industries, labeled by numbers from 1 to 54. The corresponding name of each sub industry can be found in Tables \ref{tb:stock1} and \ref{tb:stock2}. The shaded areas with different colors represent the 10 major sectors. This network provides more detailed connectivity information within and between sectors.
\begin{figure}[htp!]
\begin{center}
\includegraphics[scale=0.25]{SectorA_1.pdf}
\includegraphics[scale=0.24]{Industry_1.pdf}
\end{center}
\caption{Partial correlation networks of S\&P 500 sectors and sub industries in 2005 (preceding the crisis). The detailed information of the sub industries represented by numbers 1-54 in the right panel can be correspondingly found in Tables \ref{tb:stock1} and \ref{tb:stock2}.}
\label{fg:2005}
\end{figure}
\begin{figure}[htp!]
\begin{center}
\includegraphics[scale=0.25]{SectorA_4.pdf}
\includegraphics[scale=0.24]{Industry_4.pdf}
\end{center}
\caption{Partial correlation networks of S\&P 500 sectors and sub industries in 2008 (during the crisis). The detailed information of the sub industries represented by numbers 1-54 in the right panel can be correspondingly found in Tables \ref{tb:stock1} and \ref{tb:stock2}.}
\label{fg:2008}
\end{figure}
We observe from the left panel of Figure \ref{fg:2005} that preceding the crisis in 2005, the Industrials sector is likely to be a hub connecting to 5 other sectors: Consumer Discretionary, Energy, Health Care, Utilities and Materials. It is the most influential sector with the largest degree, i.e., the total number of links connecting to the Industrials sector in the network. However, during the crisis in 2008, the most influential sector shifts to Consumer Discretionary, as shown in the left panel of Figure \ref{fg:2008}. The Financials sector is separated from all other sectors except Consumer Discretionary in 2008, in contrast with the network connectivity in 2005. The network during the crisis also has fewer edges. A similar situation appears in the partial correlation networks of the S\&P 500 sub industries shown in the right panels of Figures \ref{fg:2005} and \ref{fg:2008}. More specifically, the numbers of edges both within and between sectors for the network of S\&P 500 sub industries in 2008 are significantly smaller than those in 2005 (see Table \ref{tb:degree} for details), which indicates that the market fear during the crisis broke the connections among stock sectors and sub industries. From the perspective of financial network studies, the above analysis confirms that fear froze the market in the 2008 crisis \citep{Reavis_2012}.
\iffalse
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{BoxAR050100.pdf} \\
\includegraphics[scale=0.6]{BoxAR050200.pdf}
\end{center}
\caption{ { Boxplot of the quantiles from KMB samples versus the simulated ``true'' quantiles of $\sqrt{n}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ under Covariance (A) with $\rho_{a} = 0.5$, $\rho = 0$ and $p = 50$. Left is for the maximum statistics over $\mathcal{S} = \{(j_1,j_2): |j_1-j_2| > 1\}$, the set of all the zero precision coefficients. Right is for the maximum statistics over all the coefficients, $\mathcal{S} = \{(j_1,j_2): j_1 \neq j_2\}$. The blue dash line is the $45^{\circ}$ line through the origin.} }
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{BoxBD050100.pdf} \\
\includegraphics[scale=0.6]{BoxBD050200.pdf}
\end{center}
\caption{ { Boxplot of the quantiles from KMB samples versus the simulated ``true'' quantiles of $\sqrt{n}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty$ under Covariance (B) with $\rho_{b} = 0.5$, $\rho = 0$ and $p = 50$. Left is for the maximum statistics over all the zero precision coefficients. Right is for the maximum statistics over all the coefficients. The blue dash line is the $45^{\circ}$ line through the origin.} }
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{BoxWAR050100.pdf} \\
\includegraphics[scale=0.6]{BoxWAR050200.pdf}
\end{center}
\caption{ { Boxplot of the quantiles from SKMB samples versus the simulated ``true'' quantiles of $\sqrt{n}|\widehat{{\mathbf D}}^{-1} (\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}) |_\infty$ under Covariance (A) with $\rho_{a} = 0.5$, $\rho = 0$ and $p = 50$, where $\widehat{{\mathbf D}}$ are
the square root diagonal elements of $\widehat{{\mathbf W}}$ in (\ref{eq:hatW}). Left is for the maximum statistics over all the zero precision coefficients. Right is for the maximum statistics over all the coefficients. The blue dash line is the $45^{\circ}$ line through the origin.} }
\end{figure}
\fi
\section*{Acknowledgments}
The authors are grateful to the Co-Editor, an
Associate Editor and two anonymous referees for
constructive comments and suggestions. This research was undertaken with the assistance of resources provided at the NCI National Facility systems at the Australian National University through the ANU Allocation Scheme supported by the Australian Government. Jinyuan Chang was supported in part by the Fundamental Research Funds for the Central Universities
(Grant No. JBK150501), NSFC (Grant No. 11501462), the Center of Statistical Research
at SWUFE, and the Joint Lab of Data Science and Business Intelligence at SWUFE. Qiwei Yao was supported in part by an EPSRC research grant.
\section*{Appendix}
Throughout the Appendix, let $C$ denote a generic positive constant depending only on the constants specified in Conditions \ref{as:moment}--\ref{as:block}, which may be different in different cases. Let $\rho_1^{-1}=2\gamma_1^{-1}+\gamma_3^{-1}$, $\rho_2^{-1}=2\gamma_2^{-1}+\gamma_3^{-1}$ and $\rho_3^{-1}=\gamma_1^{-1}+\gamma_2^{-1}+\gamma_3^{-1}$. Define $\zeta=\min\{\rho_1,\rho_2,\rho_3\}$ and $\boldsymbol{\Delta}=n^{-1}\sum_{t=1}^n\boldsymbol{\epsilon}_t\boldsymbol{\epsilon}_t^{ \mathrm{\scriptscriptstyle T} }-{\mathbf V}=:(\delta_{j_1,j_2})$.
\begin{lemma}\label{la:1}
Assume Conditions {\rm\ref{as:moment}--\ref{as:betamix}} hold. If $\log p=o\{n^{\zeta/(2-\zeta)}\}$, there exists a uniform constant $A_0>1$ such that
\[
\mathbb{P}\big\{|\widehat{\boldsymbol{\Sigma}}-\boldsymbol{\Sigma}|_\infty>A_1(n^{-1}\log p)^{1/2}\big\}\leq \exp\{-CA_1^{\rho_1}(n\log p)^{\rho_1/2}\}+\exp(-CA_1^2\log p),
\]
\[
\mathbb{P}\big\{|\boldsymbol{\Delta}|_\infty>A_2(n^{-1}\log p)^{1/2}\big\}\leq \exp\{-CA_2^{\rho_2}(n\log p)^{\rho_2/2}\}+\exp(-CA_2^2\log p),
\]
\[
\sup_{1\leq j\leq p}\mathbb{P}\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j,t}^2>A_3v_{j,j}\bigg)\leq \exp(-CA_3^{\rho_2}n^{\rho_2}),
\]
\[
\sup_{1\leq j\leq p}\mathbb{P}\bigg\{\max_{k\neq j}\bigg|\frac{1}{n}\sum_{t=1}^n\epsilon_{j,t}y_{k,t}\bigg|>A_4(n^{-1}\log p)^{1/2}\bigg\}\leq \exp\{-CA_4^{\rho_3}(n\log p)^{\rho_3/2}\}+\exp(-CA_4^2\log p),
\]
for any $A_1,A_2,A_3, A_4>A_0$.
\end{lemma}
\noindent {\bf Proof:} For any given $j_1$ and $j_2$, based on the first part of Condition \ref{as:moment}, Lemma 2 of \cite{ChangTangWu_2013} leads to
\[
\sup_{1\leq t\leq n}\mathbb{P}\big(|y_{j_1,t}y_{j_2,t}-\sigma_{j_1,j_2}|>x\big)\leq C\exp(-Cx^{\gamma_1/2})~~\textrm{for any}~x>0.
\]
Hence, for any $x>0$ such that $nx\rightarrow\infty$, Theorem 1 of \cite{Merlevedeetaj_2011} leads to
\[
\mathbb{P}\bigg(\bigg|\frac{1}{n}\sum_{t=1}^ny_{j_1,t}y_{j_2,t}-\sigma_{j_1,j_2}\bigg|>x\bigg)\leq n\exp(-Cn^{\rho_1}x^{\rho_1})+\exp(-Cnx^2).
\]
By the Bonferroni inequality, we have
\[
\mathbb{P}\big(|\widehat{\boldsymbol{\Sigma}}-\boldsymbol{\Sigma}|_\infty>x\big)\leq np^2\exp(-Cn^{\rho_1}x^{\rho_1})+p^2\exp(-Cnx^2).
\]
Letting $x=A_1(n^{-1}\log p)^{1/2}$, we obtain the first conclusion. Following the same arguments stated above, we can establish the other inequalities. $\hfill\Box$
\begin{lemma}\label{la:lasso}
Assume Conditions {\rm\ref{as:moment}--\ref{as:betamix}} hold. Let $s=\max_{1\leq j\leq p}|\balpha_j|_0$. For some suitable $\lambda_j\asymp (n^{-1}\log p)^{1/2}$ for each $j=1,\ldots,p$, we have
\[
\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1=o_p\{(\log p)^{-1}\}~~~\textrm{and}~~~\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_2=o_p\{(n\log p)^{-1/4}\}
\]
provided that $\log p=o\{n^{\zeta/(2-\zeta)}\}$ and $s^2(\log p)^3n^{-1}=o(1)$.
\end{lemma}
\noindent {\bf Proof:} Define
\[
\mathscr{T}=\bigg\{\max_{1\leq j\leq p}\max_{k\neq j}\bigg|\frac{1}{n}\sum_{t=1}^n\epsilon_{j,t}y_{k,t}\bigg|\leq A_4(n^{-1}\log p)^{1/2}\bigg\}
\]
for some $A_4>A_0$, where $A_0$ is given in Lemma \ref{la:1}. Selecting $\lambda_j\geq 4A_4(n^{-1}\log p)^{1/2}$ for any $j$, Theorem 6.1 and Corollary 6.8 of \cite{BuhlmannvandeGeer_2011} imply that, restricted on $\mathscr{T}$, we have
\begin{equation}\label{eq:l1}
\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1 \leq Cs(n^{-1}\log p)^{1/2}
\end{equation}
and
\begin{equation}\label{eq:l2}
(\widehat{\balpha}_j-\balpha_j)^{ \mathrm{\scriptscriptstyle T} }\widehat{\boldsymbol{\Sigma}}_{-j,-j}(\widehat{\balpha}_j-\balpha_j)\leq Csn^{-1}\log p
\end{equation}
with probability approaching 1. By the Bonferroni inequality and Lemma \ref{la:1},
\[
\begin{split}
\mathbb{P}(\mathscr{T}^c)\leq&~\sum_{j=1}^p\mathbb{P}\bigg\{\sum_{k\neq j}\bigg|\frac{1}{n}\sum_{t=1}^n\epsilon_{j,t}y_{k,t}\bigg|>A_4(n^{-1}\log p)^{1/2}\bigg\}\\
\leq&~p\exp\{-CA_4^{\rho_3}(n\log p)^{\rho_3/2}\}+p\exp(-CA_4^2\log p).
\end{split}
\]
For suitable selection of $A_4$, we have $\mathbb{P}(\mathscr{T}^c)\rightarrow0$ as $n\rightarrow\infty$. Thus, from (\ref{eq:l1}), it holds that
\begin{equation}\label{eq:m1}
\begin{split}
\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1=&~O_p\{s(n^{-1}\log p)^{1/2}\}\\
=&~o_p\{(\log p)^{-1}\}.
\end{split}
\end{equation}
On the other hand, notice that
\[
\begin{split}
(\widehat{\balpha}_j-\balpha_j)^{ \mathrm{\scriptscriptstyle T} }\widehat{\boldsymbol{\Sigma}}_{-j,-j}(\widehat{\balpha}_j-\balpha_j)\geq&~\lambda_{\min}(\boldsymbol{\Sigma}_{-j,-j})|\widehat{\balpha}_j-\balpha_j|_2^2\\
&-|\widehat{\boldsymbol{\Sigma}}_{-j,-j}-\boldsymbol{\Sigma}_{-j,-j}|_\infty|\widehat{\balpha}_j-\balpha_j|_1^2,
\end{split}
\]
by Condition \ref{as:cov}, Lemma \ref{la:1}, (\ref{eq:l2}) and (\ref{eq:m1}), we have
\[
\begin{split}
\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_2=&~O_p\{(sn^{-1}\log p)^{1/2}\}\\
=&~o_p\{(n\log p)^{-1/4}\}.
\end{split}
\]
Hence, we complete the proof. $\hfill\Box$
\begin{lemma}\label{la:bias}
Assume the conditions for Lemmas {\rm\ref{la:1}} and {\rm\ref{la:lasso}} hold, then
\[
\begin{split}
&\frac{1}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t}-\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}\epsilon_{j_2,t}\\
=&-(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_2,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&-(\widehat{\alpha}_{j_2,j_1}-\alpha_{j_2,j_1})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&+o_p\{(n\log p)^{-1/2}\}.
\end{split}
\]
Here the remainder term $o_p\{(n\log p)^{-1/2}\}$ is uniform in $j_1$ and $j_2$.
\end{lemma}
\noindent {\bf Proof:} Notice that $\epsilon_{j,t}=-\balpha_{j}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t$ and $\widehat{\epsilon}_{j,t}=-\widehat{\balpha}_j^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t$ for any $t$, then
\[
\begin{split}
\frac{1}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t}-\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}\epsilon_{j_2,t}=&-\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\epsilon_{j_2,t}\\
&-\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_2}-\balpha_{j_2})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\epsilon_{j_1,t}\\
&+\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t{\mathbf y}_t^{ \mathrm{\scriptscriptstyle T} }(\widehat{\balpha}_{j_2}-\balpha_{j_2}).
\end{split}
\]
Condition \ref{as:cov}, Lemmas \ref{la:1} and \ref{la:lasso} imply that
\[
\begin{split}
&\max_{1\leq j_1,j_2\leq p}\bigg|\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t{\mathbf y}_t^{ \mathrm{\scriptscriptstyle T} }(\widehat{\balpha}_{j_2}-\balpha_{j_2})\bigg|\\
\leq&\max_{1\leq j_1,j_2\leq p}|(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma} (\widehat{\balpha}_{j_2}-\balpha_{j_2})|\\
&+\max_{1\leq j_1,j_2\leq p}|(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }(\widehat{\boldsymbol{\Sigma}}-\boldsymbol{\Sigma}) (\widehat{\balpha}_{j_2}-\balpha_{j_2})|\\
\leq&~C\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_2^2+|\widehat{\boldsymbol{\Sigma}}-\boldsymbol{\Sigma}|_\infty\max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1^2\\
=&~o_p\{(n\log p)^{-1/2}\}.
\end{split}
\]
Meanwhile, by Lemma \ref{la:1}, we have
$
\max_{1\leq j\leq p}\max_{k\neq j}|n^{-1}\sum_{t=1}^n\epsilon_{j,t}y_{k,t}|=O_p\{(n^{-1}\log p)^{1/2}\},
$
which implies that
\[
\begin{split}
&\max_{1\leq j_1,j_2\leq p}\bigg|\sum_{k\neq j_1,j_2}(\widehat{\alpha}_{j_1,k}-\alpha_{j_1,k})\bigg(\frac{1}{n}\sum_{t=1}^ny_{k,t}\epsilon_{j_2,t}\bigg)\bigg|\\
\leq& \max_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1\cdot\max_{1\leq j\leq p}\max_{k\neq j}\bigg|\frac{1}{n}\sum_{t=1}^ny_{k,t}\epsilon_{j,t}\bigg|\\
=&~o_p\{(n\log p)^{-1/2}\}.
\end{split}
\]
Therefore, we have
\begin{equation}\label{eq:eq1}
\begin{split}
&~\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\epsilon_{j_2,t}\\%=\frac{1}{n}\sum_{t=1}^n\sum_{j\neq j_1}(\widehat{\alpha}_{j_1,j}-\alpha_{j_1,j})y_{j,t}\epsilon_{j_2,t}\\
=&~(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^ny_{j_2,t}\epsilon_{j_2,t}\bigg)\mathbb{I}(j_1\neq j_2)\\
&+\sum_{k\neq j_1,j_2}(\widehat{\alpha}_{j_1,k}-\alpha_{j_1,k})\bigg(\frac{1}{n}\sum_{t=1}^ny_{k,t}\epsilon_{j_2,t}\bigg)\\
=&~(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^ny_{j_2,t}\epsilon_{j_2,t}\bigg)\mathbb{I}(j_1\neq j_2)\\
&+o_p\{(n\log p)^{-1/2}\}.
\end{split}
\end{equation}
Here the remainder term is uniform in $j_1$ and $j_2$. On the other hand,
$
{n}^{-1}\sum_{t=1}^ny_{j,t}\epsilon_{j,t}={n}^{-1}\sum_{t=1}^n\epsilon_{j,t}^2+{n}^{-1}\sum_{t=1}^n\balpha_{j,-j}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_{-j,t}\epsilon_{j,t}.
$
By Lemma \ref{la:1}, it yields that
$
{n}^{-1}\sum_{t=1}^ny_{j,t}\epsilon_{j,t}={n}^{-1}\sum_{t=1}^n\epsilon_{j,t}^2+O_p\{(n^{-1}\log p)^{1/2}\}.
$
Here the remainder term is uniform in $j$. Together with (\ref{eq:eq1}), we have
\[
\begin{split}
&~\frac{1}{n}\sum_{t=1}^n(\widehat{\balpha}_{j_1}-\balpha_{j_1})^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\epsilon_{j_2,t}\\
=&~(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_2,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&+o_p\{(n\log p)^{-1/2}\}.
\end{split}
\]
Here the remainder term is also uniform in $j_1$ and $j_2$. Hence,
\[
\begin{split}
&\frac{1}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_1,t}\widehat{\epsilon}_{j_2,t}-\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}\epsilon_{j_2,t}\\
=&-(\widehat{\alpha}_{j_1,j_2}-\alpha_{j_1,j_2})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_2,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&-(\widehat{\alpha}_{j_2,j_1}-\alpha_{j_2,j_1})\bigg(\frac{1}{n}\sum_{t=1}^n\epsilon_{j_1,t}^2\bigg)\mathbb{I}(j_1\neq j_2)\\
&+o_p\{(n\log p)^{-1/2}\}.
\end{split}
\]
We complete the proof. $\hfill\Box$
\bigskip
\noindent {\bf Proof of Proposition \ref{pro:1}:} Notice that $v_{j_1,j_2} = \frac{\omega_{j_1,j_2}}{\omega_{j_1,j_1}\omega_{j_2,j_2}}$ and $\alpha_{j_1,j_2} = -\frac{\omega_{j_1,j_2}}{\omega_{j_1,j_1}}$ for any $j_1$ and $j_2$, (\ref{eq:v}) implies that
\[
\begin{split}
&~\widetilde{v}_{j_1,j_2}+\frac{\widehat{\alpha}_{j_1,j_2}}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_2,t}^2+\frac{\widehat{\alpha}_{j_2,j_1}}{n}\sum_{t=1}^n\widehat{\epsilon}_{j_1,t}^2+v_{j_1,j_2}\\
=&~\frac{1}{n}\sum_{t=1}^n(\epsilon_{j_1,t}\epsilon_{j_2,t}-v_{j_1,j_2})+\frac{\alpha_{j_1,j_2}}{n}\sum_{t=1}^n(\epsilon_{j_2,t}^2-v_{j_2,j_2})\\
&+\frac{\alpha_{j_2,j_1}}{n}\sum_{t=1}^n(\epsilon_{j_1,t}^2-v_{j_1,j_1})+o_p\{(n\log p)^{-1/2}\}
\end{split}
\]
for any $j_1\neq j_2$. Recall $\boldsymbol{\Delta}=n^{-1}\sum_{t=1}^n\boldsymbol{\epsilon}_t\boldsymbol{\epsilon}_t^{ \mathrm{\scriptscriptstyle T} }-{\mathbf V}=:(\delta_{j_1,j_2})$. It follows from Lemma \ref{la:1} that
$
\max_{1\leq j_1,j_2\leq p}|\delta_{j_1,j_2}|=O_p\{(n^{-1}\log p)^{1/2}\}.
$
By Taylor expansion, if $\log p=o\{n^{\zeta/(2-\zeta)}\}$ for $\zeta$ specified in Lemma \ref{la:1} and $s^2(\log p)^3n^{-1}=o(1)$, it holds that
$
\widehat{\omega}_{j_1,j_2}-\omega_{j_1,j_2}
=-\frac{\delta_{j_1,j_2}}{v_{j_1,j_1}v_{j_2,j_2}}+o_p\{(n\log p)^{-1/2}\}
$
for any $j_1\neq j_2$. Meanwhile, by the same arguments, for each $j=1,\ldots,p$, it holds that
$
\widehat{\omega}_{j,j}-\omega_{j,j}=-\frac{\delta_{j,j}}{v_{j,j}^2}+o_p\{(n\log p)^{-1/2}\}.
$
This proves Proposition \ref{pro:1}. $\hfill\Box$
\bigskip
\noindent {\bf Proof of Theorem \ref{tm:1}:} Define
$
d_1=\sup_{x>0}|\mathbb{P}({n}^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_{\infty}>x)-\mathbb{P}(|\bxi|_\infty>x)|.
$
For any $x>0$ and $\varepsilon_1>0$, it yields that
\begin{equation*}
\begin{split}
&~\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)\\
\leq&~ \mathbb{P}({n}^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_{\infty}>x-\varepsilon_1)+\mathbb{P}({n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty>\varepsilon_1)\\
\leq&~\mathbb{P}(|\bxi|_\infty>x-\varepsilon_1)+d_1+\mathbb{P}({n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty>\varepsilon_1)\\
=&~\mathbb{P}(|\bxi|_\infty>x)+\mathbb{P}(x-\varepsilon_1<|\bxi|_\infty\leq x)+d_1\\
&+\mathbb{P}({n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty>\varepsilon_1).
\end{split}
\end{equation*}
On the other hand, noticing that $\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)\geq \mathbb{P}({n}^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_{\infty}>x+\varepsilon_1)-\mathbb{P}({n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty>\varepsilon_1)$ and following the same arguments, we have
\begin{equation}\label{eq:bound1}
\begin{split}
&~\sup_{x>0}\big|\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)-\mathbb{P}(|\bxi|_\infty>x)\big|\\
\leq&~ d_1+\sup_{x>0}\mathbb{P}(x-\varepsilon_1<|\bxi|_\infty\leq x+\varepsilon_1)+\mathbb{P}({n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty>\varepsilon_1).
\end{split}
\end{equation}
By the anti-concentration inequality for Gaussian random vectors \citep{CCK_2015}, it holds that
\begin{equation}\label{eq:anti}
\sup_{x>0}\mathbb{P}(x-\varepsilon_1<|\bxi|_\infty\leq x+\varepsilon_1)\leq C\varepsilon_1(\log p)^{1/2}
\end{equation}
for any $\varepsilon_1>0$. From the proofs of Lemmas \ref{la:lasso} and \ref{la:bias}, we know ${n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty=O_p(sn^{-1/2}\log p)$. Thus, if $s^2(\log p)^3n^{-1}=o(1)$, we can select a suitable $\varepsilon_1$ such that $\varepsilon_1(\log p)^{1/2}\rightarrow0$ and ${n}^{1/2}|\boldsymbol{\Upsilon}_{\mathcal{S}}|_\infty=o_p(\varepsilon_1)$. Therefore, for such an $\varepsilon_1$, (\ref{eq:bound1}) leads to
\begin{equation}
\sup_{x>0}\big|\mathbb{P}\big({n}^{1/2}|\widehat{\boldsymbol{\Omega}}_{\mathcal{S}}-\boldsymbol{\Omega}_{\mathcal{S}}|_\infty>x\big)-\mathbb{P}(|\bxi|_\infty>x)\big|\leq d_1+o(1).
\end{equation}
To prove Theorem \ref{tm:1}, it suffices to show $d_1\rightarrow0$ as $n\rightarrow\infty$. We will show it below.
Write $\boldsymbol{\Pi}_{\mathcal{S}}=(\bar{\varsigma}_1,\ldots,\bar{\varsigma}_r)^{ \mathrm{\scriptscriptstyle T} }$ where $\bar{\varsigma}_j=n^{-1}\sum_{t=1}^n\varsigma_{j,t}$ and $\bxi=(\xi_1,\ldots,\xi_r)^{ \mathrm{\scriptscriptstyle T} }$.
Given a $D_n\rightarrow\infty$, define
$
\varsigma_{j,t}^{+}=\varsigma_{j,t}\mathbb{I}\{|\varsigma_{j,t}|\leq D_n\}-\mathbb{E}[\varsigma_{j,t}\mathbb{I}\{|\varsigma_{j,t}|\leq D_n\}]$ and $
\varsigma_{j,t}^{-}=\varsigma_{j,t}\mathbb{I}\{|\varsigma_{j,t}|> D_n\}-\mathbb{E}[\varsigma_{j,t}\mathbb{I}\{|\varsigma_{j,t}|> D_n\}].
$
Write $\mbox{\boldmath$\varsigma$}_t^+=(\varsigma_{1,t}^+,\ldots,\varsigma_{r,t}^+)^{ \mathrm{\scriptscriptstyle T} }$ and $\mbox{\boldmath$\varsigma$}_t^-=(\varsigma_{1,t}^-,\ldots,\varsigma_{r,t}^-)^{ \mathrm{\scriptscriptstyle T} }$ for each $t$. The diverging rate of $D_n$ will be specified later. Let $L$ be a positive integer satisfying $L\leq n/2$, $L\rightarrow\infty$ and $L=o(n)$. We decompose the sequence $\{1,\ldots,n\}$ into the following $m+1$ blocks, where $m=\lfloor n/L\rfloor$ and $\lfloor\cdot\rfloor$ is the integer truncation operator: $\mathcal{G}_{\ell}=\{(\ell-1)L+1,\ldots,\ell L\}$ $(\ell=1,\ldots,m)$ and $\mathcal{G}_{m+1}=\{mL+1,\ldots,n\}$. Additionally, let $b>h$ be two positive integers such that $L=b+h$, $h\rightarrow\infty$ and $h=o(b)$. We decompose each $\mathcal{G}_{\ell}$ $(\ell=1,\ldots,m)$ into a ``large" block with length $b$ and a ``small" block with length $h$. Specifically, $\mathcal{I}_{\ell}=\{(\ell-1)L+1,\ldots,(\ell-1)L+b\}$ and $\mathcal{J}_{\ell}=\{(\ell-1)L+b+1,\ldots,\ell L\}$ for any $\ell=1,\ldots,m$, and
$\mathcal{J}_{m+1}=\mathcal{G}_{m+1}$. Assume ${\mathbf u}$ is a centered normal random vector such that
\[
{\mathbf u}=({u}_1,\ldots,{u}_{r})^{ \mathrm{\scriptscriptstyle T} }\sim N\bigg[{\mathbf 0},\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\mbox{\boldmath$\varsigma$}_{t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\mbox{\boldmath$\varsigma$}_t^+\bigg)^{ \mathrm{\scriptscriptstyle T} }\bigg\}\bigg].
\]
Our proof consists of two steps. The first step is to show
\begin{equation}\label{eq:step1}
d_2:=\sup_{x>0}\big|\mathbb{P}\big({n}^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_\infty>x\big)-\mathbb{P}(|{\mathbf u}|_\infty>x)\big|=o(1).
\end{equation}
The second step is to show
\begin{equation}\label{eq:step2}
\sup_{x>0}\big|\mathbb{P}(|{\mathbf u}|_\infty>x)-\mathbb{P}(|\bxi|_\infty>x)\big|=o(1).
\end{equation}
From (\ref{eq:step1}) and (\ref{eq:step2}), we have $d_1=o(1)$.
We first show (\ref{eq:step1}). Define
$
d_3=\sup_{x>0}|\mathbb{P}(|n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^+|_\infty>x)-\mathbb{P}(|{\mathbf u}|_\infty>x)|.
$
Notice that ${n}^{1/2}\boldsymbol{\Pi}_{\mathcal{S}}=n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^++n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^-$, by the triangle inequality, it holds that
$
|{n}^{1/2}|\boldsymbol{\Pi}_{\mathcal{S}}|_\infty-|{n}^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^+|_\infty|\leq |{n}^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^-|_\infty.
$
Similar to (\ref{eq:bound1}), we have
\begin{equation}\label{eq:d2}
d_2\leq d_3+\sup_{x>0}\mathbb{P}(x-\varepsilon_2<|{\mathbf u}|_\infty\leq x+\varepsilon_2)+\mathbb{P}\bigg(\bigg|\frac{1}{{n}^{1/2}}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^-\bigg|_\infty>\varepsilon_2\bigg)
\end{equation}
for any $\varepsilon_2>0$. For each $j$, it follows from the Davydov inequality \citep{Davydov_1968} that
\[
\begin{split}
\mathbb{E}\bigg(\bigg|\frac{1}{\sqrt{n}}\sum_{t=1}^n\varsigma_{j,t}^-\bigg|^2\bigg)=&~\frac{1}{n}\sum_{t=1}^n\mathbb{E}\{(\varsigma_{j,t}^-)^2\}+\frac{1}{n}\sum_{t_1\neq t_2}\mathbb{E}(\varsigma_{j,t_1}^-\varsigma_{j,t_2}^-)\\
\leq&~\frac{1}{n}\sum_{t=1}^n\mathbb{E}\{(\varsigma_{j,t}^-)^2\}+\frac{C}{n}\sum_{t_1\neq t_2}[\mathbb{E}\{(\varsigma_{j,t_1}^-)^4\}]^{1/4}[\mathbb{E}\{(\varsigma_{j,t_2}^-)^4\}]^{1/4}\exp(-C|t_1-t_2|^{\gamma_3}).
\end{split}
\]
Applying Lemma 2 of \cite{ChangTangWu_2013}, Conditions \ref{as:moment} and \ref{as:block} imply that $\sup_{j,t}\mathbb{P}(|\varsigma_{j,t}|>x)\leq C\exp(-Cx^{\gamma_2/2})$ for any $x>0$. Then
\begin{equation}\label{eq:tailbound}
\begin{split}
\mathbb{E}\{\varsigma_{j,t}^4\mathbb{I}(|\varsigma_{j,t}|>D_n)\}=&~4\int_0^{D_n}x^3\mathbb{P}(|\varsigma_{j,t}|>D_n)~dx+4\int_{D_n}^\infty x^3\mathbb{P}(|\varsigma_{j,t}|>x)~dx\\
\leq&~CD_n^4\exp(-CD_n^{\gamma_2/2}).
\end{split}
\end{equation}
By the triangle inequality and Jensen's inequality,
\begin{equation}\label{eq:1}
\begin{split}
\mathbb{E}\{(\varsigma_{j,t}^-)^4\}\leq&~ C\mathbb{E}\{\varsigma_{j,t}^4\mathbb{I}(|\varsigma_{j,t}|>D_n)\}+C[\mathbb{E}\{\varsigma_{j,t}\mathbb{I}(|\varsigma_{j,t}|>D_n)\}]^4\\
\leq&~C\mathbb{E}\{\varsigma_{j,t}^4\mathbb{I}(|\varsigma_{j,t}|>D_n)\}\\
\leq&~CD_n^4\exp(-CD_n^{\gamma_2/2}),
\end{split}
\end{equation}
which implies that
\[
\sup_{1\leq j\leq r}\mathbb{E}\bigg(\bigg|\frac{1}{{n}^{1/2}}\sum_{t=1}^n\varsigma_{j,t}^-\bigg|^2\bigg)\leq CD_n^2\exp(-CD_n^{\gamma_2/2}).
\]
Thus, it follows from Markov inequality that
\[
\begin{split}
\mathbb{P}\bigg(\bigg|\frac{1}{{n}^{1/2}}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^-\bigg|_\infty>\varepsilon_2\bigg)\leq&~ \frac{r}{\varepsilon_2^2}\sup_{1\leq j\leq r}\mathbb{E}\bigg(\bigg|\frac{1}{{n}^{1/2}}\sum_{t=1}^n\varsigma_{j,t}^-\bigg|^2\bigg)\\
\leq&~Cr\varepsilon_2^{-2}D_n^2\exp(-CD_n^{\gamma_2/2}).
\end{split}
\]
Similar to (\ref{eq:anti}), it holds that
$
\sup_{x>0}\mathbb{P}(x-\varepsilon_2<|{\mathbf u}|_\infty\leq x+\varepsilon_2)\leq C\varepsilon_2(\log p)^{1/2}.
$
If we choose $\varepsilon_2=(\log p)^{-1}$ and $D_n=C(\log p)^{2/\gamma_2}$ for some sufficiently large $C$, then
$
\sup_{x>0}\mathbb{P}(x-\varepsilon_2<|{\mathbf u}|_\infty\leq x+\varepsilon_2)+\mathbb{P}(|{n}^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^-|_\infty>\varepsilon_2)=o(1).
$
Therefore, (\ref{eq:d2}) implies $d_2\leq d_3+o(1)$. To show (\ref{eq:step1}) that $d_2=o(1)$, it suffices to prove $d_3=o(1)$. Let $\mbox{\boldmath$\varsigma$}_{t}^{+,\textrm{ext}}=(\mbox{\boldmath$\varsigma$}_t^{+,{ \mathrm{\scriptscriptstyle T} }},-\mbox{\boldmath$\varsigma$}_t^{+,{ \mathrm{\scriptscriptstyle T} }})^{ \mathrm{\scriptscriptstyle T} }=(\varsigma_{t,1}^{+,\textrm{ext}},\ldots,\varsigma_{t,2r}^{+,\textrm{ext}})^{ \mathrm{\scriptscriptstyle T} }$ and ${\mathbf u}^{\textrm{ext}}=({\mathbf u}^{{ \mathrm{\scriptscriptstyle T} }},-{\mathbf u}^{{ \mathrm{\scriptscriptstyle T} }})^{ \mathrm{\scriptscriptstyle T} }=(u_1^{\textrm{ext}},\ldots,u_{2r}^{\textrm{ext}})^{ \mathrm{\scriptscriptstyle T} }$. To prove $d_3=\sup_{x>0}|\mathbb{P}(|n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_t^+|_\infty>x)-\mathbb{P}(|{\mathbf u}|_\infty>x)|\rightarrow0$, it is equivalent to show $\sup_{x>0}|\mathbb{P}(\max_{1\leq j\leq 2r}n^{-1/2}\sum_{t=1}^n\varsigma_{t,j}^{+,\textrm{ext}}>x)-\mathbb{P}(\max_{1\leq j\leq 2r}u_{j}^{\textrm{ext}}>x)|\rightarrow0$. From Theorem B.1 of \cite{CCK_2014}, $\sup_{z\in\mathbb{R}}|\mathbb{P}(\max_{1\leq j\leq 2r}n^{-1/2}\sum_{t=1}^n\varsigma_{t,j}^{+,\textrm{ext}}>z)-\mathbb{P}(\max_{1\leq j\leq 2r}u_{j}^{\textrm{ext}}>z)|\rightarrow0$ if $|\textrm{Var}(n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_{t}^{+,\textrm{ext}})-\textrm{Var}({\mathbf u}^{\textrm{ext}})|_\infty\rightarrow0$. Notice that $|\textrm{Var}(n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_{t}^{+,\textrm{ext}})-\textrm{Var}({\mathbf u}^{\textrm{ext}})|_\infty=|\textrm{Var}(n^{-1/2}\sum_{t=1}^n\mbox{\boldmath$\varsigma$}_{t}^+)-\textrm{Var}({\mathbf u})|_\infty$, thus to show $d_3=o(1)$, it suffices to show
\[
d_4:=\sup_{z\in\mathbb{R}}\bigg|\mathbb{P}\bigg(\max_{1\leq j\leq r}n^{-1/2}\sum_{t=1}^n\varsigma_{t,j}^{+}>z\bigg)-\mathbb{P}\bigg(\max_{1\leq j\leq r}u_{j}>z\bigg)\bigg|\rightarrow0.
\]
By Theorem B.1 of \cite{CCK_2014}, it holds that
$
d_4\leq Cn^{-C}+Cm\exp(-Ch^{\gamma_3})
$
provided that \begin{equation} \label{eq:rest} hb^{-1}(\log p)^2\leq Cn^{-\varpi}~~\textrm{and}~~b^2D_n^2\log p+bD_n^2(\log p)^7\leq Cn^{1-2\varpi}
\end{equation}
for some $\varpi\in(0,1/4)$. As we mentioned above, $D_n\asymp (\log p)^{2/\gamma_2}$. To make $p$ diverge as fast as possible, we can take $h\asymp (\log n)^{\vartheta}$ for some $\vartheta>0$. Then (\ref{eq:rest}) becomes
\begin{equation*}
\left\{ \begin{aligned}
C(\log n)^{\vartheta}n^{\varpi}(\log p)^2 &\leq b; \\
C(\log n)^{2\vartheta}(\log p)^{4/\gamma_2+5}&\leq n^{1-4\varpi};\\
C(\log n)^{\vartheta}(\log p)^{4/\gamma_2+9}&\leq n^{1-3\varpi}.
\end{aligned} \right.
\end{equation*}
Therefore, $ \log p=o(n^\varphi)~~\textrm{where}~~\varphi=\min\big\{\tfrac{(1-4\varpi)\gamma_2}{4+5\gamma_2},\tfrac{(1-3\varpi)\gamma_2}{4+9\gamma_2}\big\}. $ Since both terms in the minimum are decreasing in $\varpi$, $\varphi$ approaches its supremum $\gamma_2/(4+9\gamma_2)$ as $\varpi\rightarrow0^+$. Hence, if $\log p=o\{n^{\gamma_2/(4+9\gamma_2)}\}$, it holds that $d_4\rightarrow0$, which establishes (\ref{eq:step1}).
Analogously, to show (\ref{eq:step2}), it suffices to show $\sup_{z\in\mathbb{R}}|\mathbb{P}(\max_{1\leq j\leq r}u_j> z)-\mathbb{P}(\max_{1\leq j\leq r}\xi_j> z)|\rightarrow0$. Let $\widetilde{{\mathbf W}}$ denote the covariance matrix of ${\mathbf u}$, and recall that ${\mathbf W}$ denotes the covariance matrix of $\bxi$. Lemma 3.1 of \cite{CCK_2013} leads to
\begin{equation}\label{eq:2}
\begin{split}
&~\sup_{z\in\mathbb{R}}\bigg|\mathbb{P}\bigg(\max_{1\leq j\leq r}u_j> z\bigg)-\mathbb{P}\bigg(\max_{1\leq j\leq r}\xi_j> z\bigg)\bigg|\\
\leq&~ C|\widetilde{{\mathbf W}}-{\mathbf W}|_\infty^{1/3}\{1\vee \log(r/|\widetilde{{\mathbf W}}-{\mathbf W}|_\infty)\}^{2/3}.
\end{split}
\end{equation}
We will specify the convergence rate of $|\widetilde{{\mathbf W}}-{\mathbf W}|_\infty$ below. Notice that, for any $1\leq j_1,j_2\leq r$, we have
\[
\begin{split}
&\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}\\
&~~~~~~~~~~~~~~-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\\
=&-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\\
&-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\\
&-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}.
\end{split}
\]
With the triangle inequality, it yields that
\[
\begin{split}
&~\bigg|\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}\\
&~~~~~~~~~~~~~~~~~~~-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\\
\leq&~\frac{1}{mb}\sum_{\ell=1}^m\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\bigg|\\
&+\frac{1}{mb}\sum_{\ell=1}^m\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\bigg|\\
&~+\frac{1}{mb}\sum_{\ell=1}^m\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}\bigg|.
\end{split}
\]
For each $\ell=1,\ldots,m$, the following identities hold:
\[
\begin{split}
\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}=&~\sum_{t\in\mathcal{I}_\ell}\mathbb{E}(\varsigma_{j_1,t}^{-}\varsigma_{j_2,t}^{-})+\sum_{t_1\neq t_2}\mathbb{E}(\varsigma_{j_1,t_1}^{-}\varsigma_{j_2,t_2}^{-}), \\
\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}=&~\sum_{t\in\mathcal{I}_\ell}\mathbb{E}(\varsigma_{j_1,t}^{+}\varsigma_{j_2,t}^{-})+\sum_{t_1\neq t_2}\mathbb{E}(\varsigma_{j_1,t_1}^{+}\varsigma_{j_2,t_2}^{-}),\\
\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}=&~\sum_{t\in\mathcal{I}_\ell}\mathbb{E}(\varsigma_{j_1,t}^{-}\varsigma_{j_2,t}^{+})+\sum_{t_1\neq t_2}\mathbb{E}(\varsigma_{j_1,t_1}^{-}\varsigma_{j_2,t_2}^{+}).
\end{split}
\]
Together with the triangle inequality and the Davydov and Cauchy-Schwarz inequalities, we have
\[
\begin{split}
\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\bigg|\leq&~ Cb\sup_{j,t}[\mathbb{E}\{(\varsigma_{j,t}^{-})^4\}]^{1/2},\\
\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^-\bigg)\bigg\}\bigg|\leq&~ Cb\sup_{j,t}[\mathbb{E}\{(\varsigma_{j,t}^+)^4\}]^{1/4}\sup_{j,t}[\mathbb{E}\{(\varsigma_{j,t}^-)^4\}]^{1/4},\\
\bigg|\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^-\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}\bigg|\leq&~ Cb\sup_{j,t}[\mathbb{E}\{(\varsigma_{j,t}^+)^4\}]^{1/4}\sup_{j,t}[\mathbb{E}\{(\varsigma_{j,t}^-)^4\}]^{1/4}.\\
\end{split}
\]
From (\ref{eq:1}), it holds that
\[
\begin{split}
&~\sup_{1\leq j_1,j_2\leq r}\bigg|\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}^+\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}^+\bigg)\bigg\}\\
&~~~~~~~~~~~~~~~~~-\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\leq CD_n\exp(-CD_n^{\gamma_2/2}).
\end{split}
\]
By the proof of Lemma 2 in \cite{ChangChenChen_2015}, we can prove that
\begin{equation}\label{eq:toprove}
\begin{split}
&~\sup_{1\leq j_1,j_2\leq r}\bigg|\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\\
&~~~~~~~~~~~~~~~~~-\frac{1}{n}\mathbb{E}\bigg\{\bigg(\sum_{t=1}^n\varsigma_{j_1,t}\bigg)\bigg(\sum_{t=1}^n\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\leq Ch^{1/2}b^{-1/2}+Cbn^{-1}.
\end{split}
\end{equation}
Specifically, notice that
\begin{equation}\label{eq:p1}
\begin{split}
&~\mathbb{E}\bigg\{\bigg(\sum_{t=1}^n\varsigma_{j_1,t}\bigg)\bigg(\sum_{t=1}^n\varsigma_{j_2,t}\bigg)\bigg\}\\
=&~\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}+\sum_{\ell_1\neq \ell_2}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\}\\
&+\sum_{\ell=1}^{m+1}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{J}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}+\sum_{\ell_1\neq \ell_2}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{J}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\}\\
&+\sum_{\ell=1}^{m+1}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{J}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}+\sum_{\ell_1\neq \ell_2}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{J}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\}\\
&+\sum_{\ell=1}^{m+1}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{J}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{J}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}+\sum_{\ell_1\neq \ell_2}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{J}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{J}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\},
\end{split}
\end{equation}
where we set $\mathcal{I}_{m+1}=\emptyset$. By the Cauchy-Schwarz and Davydov inequalities, we have
\[
\begin{split}
&~\bigg|\frac{1}{mb}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\\
&~~~~~~~~~~~~~~~~~~~~~~-\frac{1}{n}\sum_{\ell=1}^m\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\\
=&~\frac{n-mb}{nm}\sum_{\ell=1}^m\bigg|\mathbb{E}\bigg\{\bigg(\frac{1}{\sqrt{b}}\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_1,t}\bigg)\bigg(\frac{1}{\sqrt{b}}\sum_{t\in\mathcal{I}_\ell}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\\
\leq&~\frac{mh+b}{nm}\times Cm\leq Chb^{-1}+Cbn^{-1},
\end{split}
\]
\[
\begin{split}
&~\bigg|\frac{1}{n}\sum_{\ell_1\neq \ell_2}\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\sum_{t\in\mathcal{I}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\\
\leq&~\frac{b}{n}\sum_{\ell_1\neq\ell_2}\bigg|\mathbb{E}\bigg\{\bigg(\frac{1}{\sqrt{b}}\sum_{t\in\mathcal{I}_{\ell_1}}\varsigma_{j_1,t}\bigg)\bigg(\frac{1}{\sqrt{b}}\sum_{t\in\mathcal{I}_{\ell_2}}\varsigma_{j_2,t}\bigg)\bigg\}\bigg|\\
\leq&~Cbn^{-1}\sum_{\ell_1\neq\ell_2}\exp\{-C|(\ell_1-\ell_2)b|^{\gamma_3}\}\leq Cbn^{-1}.
\end{split}
\]
Similarly, we can bound the other terms in (\ref{eq:p1}). Therefore, (\ref{eq:toprove}) holds, which implies that $ |\widetilde{{\mathbf W}}-{\mathbf W}|_\infty\leq Ch^{1/2}b^{-1/2}+Cbn^{-1}+CD_n\exp(-CD_n^{\gamma_2/2})$. For the $b$, $h$ and $D_n$ specified above, (\ref{eq:2}) implies $\sup_{z\in\mathbb{R}}|\mathbb{P}(\max_{1\leq j\leq r}u_j> z)-\mathbb{P}(\max_{1\leq j\leq r}\xi_j> z)|\rightarrow0$, which establishes (\ref{eq:step2}). Hence, we complete the proof of Theorem \ref{tm:1}. $\hfill\Box$
\begin{lemma}\label{la:4}
Assume Conditions {\rm\ref{as:moment}} and {\rm\ref{as:betamix}} hold, the kernel function $\mathcal{K}(\cdot)$ satisfies $|\mathcal{K}(x)|\asymp |x|^{-\tau}$ as $x\rightarrow\infty$ for some $\tau>1$, and the bandwidth $S_n\asymp n^{\rho}$ for some $0<\rho<\min\{\tfrac{\tau-1}{3\tau},\tfrac{\gamma_3}{2\gamma_3+1}\}$. Let $\kappa=\max\big\{\tfrac{1}{2\gamma_3+1},\tfrac{\rho \tau-\rho+2}{\tau+1+\gamma_3},\tfrac{\rho\tau+1}{\tau}\big\}$, and $\alpha_0$ be the maximizer for the function $f(\alpha)=\min\{1-\alpha-2\rho,2(\alpha-\rho)\tau-2\}$ over $\kappa<\alpha<1-2\rho$. Then
\[
\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }-\mathbb{E}(\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} })\}\bigg]\bigg|_\infty=O_p\big(\{\log (pn)\}^{4/\gamma_2}n^{-f(\alpha_0)/2}\big)
\]
provided that $\log p \leq Cn^{C\delta}$ where $\delta=\min[\tfrac{\gamma_2}{\gamma_2+8}(2\alpha_0\gamma_3+\alpha_0-1), \tfrac{\gamma_2}{8}\{(\alpha_0-\rho)\tau+\alpha_0+\alpha_0\gamma_3+\rho-2\}].$
\end{lemma}
\noindent {\bf Proof:} We first construct an upper bound for
$
\sup_{1\leq j_1,j_2\leq r}\mathbb{P}\{|\sum_{k=0}^{n-1}\mathcal{K}(k/S_n)[n^{-1}\sum_{t=k+1}^n\{\eta_{j_1,t}\eta_{j_2,t-k}-\mathbb{E}(\eta_{j_1,t}\eta_{j_2,t-k})\}]|>x\}
$.
For any $j_1$ and $j_2$, it holds that
\begin{equation}\label{eq:ss1}
\begin{split}
&~\mathbb{P}\bigg\{\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\eta_{j_1,t}\eta_{j_2,t-k}-\mathbb{E}(\eta_{j_1,t}\eta_{j_2,t-k})\}\bigg]\bigg|>x\bigg\}\\
\leq&~\mathbb{P}\bigg\{\sum_{k=0}^{\lfloor Cn^{\alpha}\rfloor}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2}\bigg\}\\
&+\mathbb{P}\bigg\{\sum_{k=\lfloor Cn^{\alpha}\rfloor +1}^{n-1}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2}\bigg\}
\end{split}
\end{equation}
for any $\alpha\in(0,1)$, where $\psi_{t,k}=\eta_{j_1,t+k}\eta_{j_2,t}-\mathbb{E}(\eta_{j_1,t+k}\eta_{j_2,t})$. Following Lemma 2 of \cite{ChangTangWu_2013}, it holds that
\begin{equation}\label{eq:t1}
\sup_{0\leq k\leq n-1}\sup_{1\leq t\leq n-k}\mathbb{P}\left(|\psi_{t,k}|>x\right)\leq C\exp(-Cx^{\gamma_2/4})
\end{equation}
for any $x>0$. Since $S_n\asymp n^\rho$, we have $\max_{\lfloor Cn^{\alpha}\rfloor+1\leq k\leq n-1}|\mathcal{K}(k/S_n)|\leq Cn^{-(\alpha-\rho)\tau}$ if $\alpha>\rho$. Then, (\ref{eq:t1}) leads to
\begin{equation}\label{eq:term1}
\begin{split}
&~\mathbb{P}\bigg\{\sum_{k=\lfloor Cn^{\alpha}\rfloor +1}^{n-1}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2}\bigg\}\\
\leq&~\sum_{k=\lfloor Cn^\alpha\rfloor +1}^{n-1}\mathbb{P}\bigg\{\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>Cxn^{(\alpha-\rho)\tau-1}\bigg\}\\
\leq&~\sum_{k=\lfloor Cn^\alpha\rfloor +1}^{n-1}\sum_{t=1}^{n-k}\mathbb{P}\big\{|\psi_{t,k}|>Cxn^{(\alpha-\rho)\tau-1}\big\}\\
\leq&~Cn^2\exp[-C\{xn^{(\alpha-\rho)\tau-1}\}^{\gamma_2/4}].
\end{split}
\end{equation}
We will specify the upper bound for $\mathbb{P}\{\sum_{k=0}^{\lfloor Cn^{\alpha}\rfloor}|\mathcal{K}(k/S_n)||{n}^{-1}\sum_{t=1}^{n-k}\psi_{t,k}|>{x}/{2}\}$ below. Similar to (\ref{eq:t1}), we have that
\begin{equation}\label{eq:s1}
\sup_{1\leq j_1,j_2\leq r}\sup_{0\leq k\leq n-1}\sup_{1\leq t\leq n-k}\mathbb{P}(|\eta_{j_1,t+k}\eta_{j_2,t}|>x)\leq C\exp(-Cx^{\gamma_2/4})
\end{equation}
for any $x>0$. Denote by $\mathcal{T}$ the event $\{\sup_{0\leq k\leq n-1}\sup_{1\leq t\leq n-k}|\eta_{j_1,t+k}\eta_{j_2,t}|>M\}$. For each $k=0,\ldots,\lfloor Cn^\alpha\rfloor$, let $\psi_{t,k}^+=\eta_{j_1,t+k}\eta_{j_2,t}\mathbb{I}\{|\eta_{j_1,t+k}\eta_{j_2,t}|\leq M\}-\mathbb{E}[\eta_{j_1,t+k}\eta_{j_2,t}\mathbb{I}\{|\eta_{j_1,t+k}\eta_{j_2,t}|\leq M\}]$ for $t=1,\ldots,n-k$. Write $D=\sum_{k=0}^{\lfloor Cn^{\alpha}\rfloor}|\mathcal{K}(k/S_n)|$, then
\begin{equation}\label{eq:term2}
\begin{split}
&~\mathbb{P}\bigg\{\sum_{k=0}^{\lfloor Cn^{\alpha}\rfloor}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2}\bigg\}\\
\leq&~\sum_{k=0}^{\lfloor Cn^{\alpha} \rfloor}\mathbb{P}\bigg(\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2D},~\mathcal{T}^c\bigg) +\mathbb{P}(\mathcal{T})\\
\leq&~\sum_{k=0}^{\lfloor Cn^{\alpha} \rfloor}\mathbb{P}\bigg(\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}^+\bigg|>\frac{x}{4D}\bigg) +\mathbb{P}(\mathcal{T})\\
&+\sum_{k=0}^{\lfloor Cn^{\alpha} \rfloor}\mathbb{P}\bigg(\frac{1}{n}\sum_{t=1}^{n-k}\mathbb{E}[|\eta_{j_1,t+k}\eta_{j_2,t}|\mathbb{I}\{|\eta_{j_1,t+k}\eta_{j_2,t}|> M\}]>\frac{x}{4D}\bigg).
\end{split}
\end{equation}
From (\ref{eq:s1}), we have $\mathbb{P}(\mathcal{T})\leq Cn^2\exp(-CM^{\gamma_2/4})$. Similar to (\ref{eq:tailbound}), we have
\[
\sup_{1\leq j_1,j_2\leq r}\sup_{0\leq k\leq n-1}\sup_{1\leq t\leq n-k}\mathbb{E}[|\eta_{j_1,t+k}\eta_{j_2,t}|\mathbb{I}\{|\eta_{j_1,t+k}\eta_{j_2,t}|> M\}]\leq CM\exp(-CM^{\gamma_2/4}).
\]
If $DMx^{-1}\exp(-CM^{\gamma_2/4})\rightarrow0$, then (\ref{eq:term2}) yields that
\begin{equation}\label{eq:term3}
\begin{split}
&~\mathbb{P}\bigg\{\sum_{k=0}^{\lfloor Cn^{\alpha}\rfloor}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}\bigg|>\frac{x}{2}\bigg\}\\
\leq&~\sum_{k=0}^{\lfloor Cn^{\alpha} \rfloor}\mathbb{P}\bigg(\bigg|\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}^+\bigg|>\frac{x}{4D}\bigg) +Cn^{2}\exp(-CM^{\gamma_2/4}).
\end{split}
\end{equation}
For each $k=0,\ldots,\lfloor Cn^{\alpha} \rfloor$, we first consider $\mathbb{P}\{n^{-1}\sum_{t=1}^{n-k}\psi_{t,k}^+>x/(4D)\}$. By Markov inequality, it holds that
\begin{equation}\label{eq:u2}
\mathbb{P}\bigg(\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}^+>\frac{x}{4D}\bigg)\leq \exp\bigg(-\frac{unx}{4D}\bigg)\mathbb{E}\bigg\{\exp\bigg(\sum_{t=1}^{n-k}u\psi_{t,k}^+\bigg)\bigg\}
\end{equation} for any $u>0$. Let $L$ be a positive integer such that $L\asymp n^{\alpha}$ and $L\geq 3\lfloor Cn^{\alpha}\rfloor$ for $C$ specified in (\ref{eq:ss1}). We decompose the index set $\{1,\ldots, n\}$ into the following $m+1$ blocks, where $m = \lfloor n/L\rfloor$: $\mathcal{G}_\ell= \{(\ell-1)L+1,\ldots, \ell L\}$ $(\ell= 1, \ldots ,m)$ and $\mathcal{G}_{m+1} = \{mL+1,\ldots, n\}$. Additionally, let $b=\lfloor L/2\rfloor$ and $h=L-b$. We then decompose each $\mathcal{G}_\ell$ $(\ell=1,\ldots,m)$ into a block of length $b$ and a block of length $h$. Specifically, $\mathcal{I}_\ell=\{(\ell-1)L+1,\ldots,(\ell-1)L+b\}$ and $\mathcal{J}_\ell=\{(\ell-1)L+b+1,\ldots,\ell L\}$ for any $\ell=1,\ldots,m$, and
$\mathcal{I}_{m+1}=\mathcal{G}_{m+1}$. Based on these notations and Cauchy-Schwarz inequality, it holds that
\[
\begin{split}
\mathbb{E}\bigg\{\exp\bigg(\sum_{t=1}^{n-k}u\psi_{t,k}^+\bigg)\bigg\}\leq&~ \bigg[\mathbb{E}\bigg\{\exp\bigg(\sum_{\ell=1}^{m+1}\sum_{t\in\mathcal{I}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}\bigg]^{1/2}\\
&~~\times\bigg[\mathbb{E}\bigg\{\exp\bigg(\sum_{\ell=1}^{m}\sum_{t\in\mathcal{J}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}\bigg]^{1/2}.
\end{split}
\]
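For concreteness, the big-/small-block index construction just described can be sketched as follows; the values of $n$ and $L$ below are illustrative assumptions only.

```python
def block_decomposition(n, L):
    """Split {1,...,n} into blocks G_1,...,G_{m+1} of length L (last one
    possibly shorter), then split each G_ell (ell <= m) into a big block
    I_ell of length b = L//2 and a small block J_ell of length h = L - b;
    the remainder block G_{m+1} is kept whole as I_{m+1}."""
    m, b = n // L, L // 2
    I = [list(range((l - 1) * L + 1, (l - 1) * L + b + 1)) for l in range(1, m + 1)]
    J = [list(range((l - 1) * L + b + 1, l * L + 1)) for l in range(1, m + 1)]
    I.append(list(range(m * L + 1, n + 1)))  # I_{m+1} = G_{m+1}
    return I, J

I, J = block_decomposition(n=20, L=6)
# The blocks partition {1,...,20}:
flat = sorted(t for blk in I + J for t in blk)
assert flat == list(range(1, 21))
```

The separation of at least $h$ indices between consecutive big blocks is what allows the $\beta$-mixing coefficient to enter the bound below.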
By Lemma 2 of \cite{Merlevedeetaj_2011}, noticing that $b(m+1)\leq 2n$, we have
\begin{equation}\label{eq:u1}
\begin{split}
\mathbb{E}\bigg\{\exp\bigg(\sum_{\ell=1}^{m+1}\sum_{t\in\mathcal{I}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}\leq&~ \prod_{\ell=1}^{m+1}\mathbb{E}\bigg\{\exp\bigg(\sum_{t\in\mathcal{I}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}\\&+CuMn\exp(8uMn-C|b-k|_+^{\gamma_3}).
\end{split}
\end{equation}
Following the inequality $e^x\leq 1+x+{x^2}e^{x\vee0}/2$ for any $x\in\mathbb{R}$, we have that
\[
\begin{split}
\mathbb{E}\bigg\{\exp\bigg(\sum_{t\in\mathcal{I}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}\leq&~ 1+2u^2\mathbb{E}\bigg\{\bigg(\sum_{t\in\mathcal{I}_\ell}\psi^+_{t,k}\bigg)^2\bigg\}\exp(4ubM)\\
\leq&~ 1+Cu^2b^2\exp(4ubM).
\end{split}
\]
Together with (\ref{eq:u1}), following the inequality $(1+x)^{m+1}\leq e^{(m+1)x}$ for any $x>0$, and $bm\leq n/2$, it holds that
\[
\begin{split}
\mathbb{E}\bigg\{\exp\bigg(\sum_{\ell=1}^{m+1}\sum_{t\in\mathcal{I}_\ell}2u\psi_{t,k}^+\bigg)\bigg\}
\leq&~\exp\{Cu^2nb\exp(4ubM)\}\\
&+CuMn\exp(8uMn-C|b-k|_+^{\gamma_3}).
\end{split}
\]
Similarly, we can obtain the same upper bound for $\mathbb{E}\{\exp(\sum_{\ell=1}^{m}\sum_{t\in\mathcal{J}_\ell}2u\psi_{t,k}^+)\}$. Hence,
\[
\begin{split}
\mathbb{E}\bigg\{\exp\bigg(\sum_{t=1}^{n-k}u\psi_{t,k}^+\bigg)\bigg\}\leq&~\exp\{Cu^2nb\exp(4ubM)\}\\
&+CuMn\exp(8uMn-C|b-k|_+^{\gamma_3}).
\end{split}
\]
We restrict $ubM\leq C$. Since $b-k\geq \lfloor Cn^{\alpha}\rfloor/2-1$, we then have
\[
\mathbb{E}\bigg\{\exp\bigg(\sum_{t=1}^{n-k}u\psi_{t,k}^+\bigg)\bigg\}\leq C\exp(Cu^2nb)+CuMn\exp(8uMn-Cn^{\alpha \gamma_3}).
\]
Together with (\ref{eq:u2}), since $D\asymp S_n\asymp n^{\rho}$ and $b\asymp n^{\alpha}$, it holds that
\begin{equation}\label{eq:upper1}
\begin{split}
\mathbb{P}\bigg(\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}^+>\frac{x}{4D}\bigg)\leq&~ C\exp(-Cun^{1-\rho}x+Cu^2n^{1+\alpha})\\
&+CuMn\exp(-Cun^{1-\rho}x+8uMn-Cn^{\alpha\gamma_3}).
\end{split}
\end{equation}
To make the upper bound in the above inequality decay to zero for some $x\rightarrow0^+$ and $M\rightarrow\infty$, we need to require $ uMn^{1-\alpha\gamma_3}\leq C. $ For the first term on the right-hand side of the above inequality, the optimal choice is $u\asymp xn^{-\alpha-\rho}$, which balances the linear and quadratic terms in $u$ in the exponent. Therefore, (\ref{eq:upper1}) simplifies to
\[
\begin{split}
\mathbb{P}\bigg(\frac{1}{n}\sum_{t=1}^{n-k}\psi_{t,k}^+>\frac{x}{4D}\bigg)\leq&~C\exp(-Cn^{1-\alpha-2\rho}x^2)+C\exp(-Cn^{\alpha\gamma_3})
\end{split}
\]
if $xMn^{1-\alpha-\alpha\gamma_3-\rho}\leq C$. The same inequality also holds for $\mathbb{P}\{{n}^{-1}\sum_{t=1}^{n-k}\psi_{t,k}^+<-{x}/(4D)\}$. Combining with (\ref{eq:ss1}), (\ref{eq:term1}) and (\ref{eq:term3}), we obtain
\[
\begin{split}
&~\mathbb{P}\bigg\{\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\eta_{j_1,t}\eta_{j_2,t-k}-\mathbb{E}(\eta_{j_1,t}\eta_{j_2,t-k})\}\bigg]\bigg|>x\bigg\}\\
\leq&~Cn^\alpha\exp(-Cn^{1-\alpha-2\rho}x^2)+Cn^{\alpha}\exp(-Cn^{\alpha\gamma_3})\\
&+Cn^2\exp[-C\{xn^{(\alpha-\rho)\tau-1}\}^{\gamma_2/4}]+Cn^{2}\exp(-CM^{\gamma_2/4})
\end{split}
\]
for any $x>0$ such that $xMn^{1-\alpha-\alpha\gamma_3-\rho}\leq C$. Since the above inequality holds uniformly over $j_1$ and $j_2$, we have
\[
\begin{split}
&~\mathbb{P}\bigg\{\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }-\mathbb{E}(\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} })\}\bigg]\bigg|_\infty>x\bigg\}\\
\leq&~Cp^2n^\alpha\exp(-Cn^{1-\alpha-2\rho}x^2)+Cp^2n^{\alpha}\exp(-Cn^{\alpha\gamma_3})\\
&+Cp^2n^2\exp[-C\{xn^{(\alpha-\rho)\tau-1}\}^{\gamma_2/4}]+Cp^2n^{2}\exp(-CM^{\gamma_2/4}).
\end{split}
\]
To make the upper bound of the above inequality converge to zero, $x$ and $M$ should satisfy the following restrictions:
\begin{equation} \label{eq:restr2}
\left\{ \begin{aligned}
x\geq &~C\bigg[\sqrt{\frac{\log (pn)}{n^{1-\alpha-2\rho}}} \vee \frac{\{\log(pn)\}^{4/\gamma_2}}{n^{(\alpha-\rho)\tau-1}}\bigg], \\
M\geq&~C\{\log(pn)\}^{4/\gamma_2}.
\end{aligned} \right.
\end{equation}
Combining the restriction $xMn^{1-\alpha-\alpha\gamma_3-\rho}\leq C$ with (\ref{eq:restr2}) implies that $\log p\leq Cn^{C\delta} $ where $\delta=\min[\tfrac{\gamma_2}{\gamma_2+8}(2\alpha\gamma_3+\alpha-1),\tfrac{\gamma_2}{8}\{(\alpha-\rho)\tau+\alpha+\alpha\gamma_3+\rho-2\}]. $ To allow $x$ to decay to zero and $p$ to diverge at an exponential rate in $n$, we need to assume $0<\rho<\min\{\frac{\tau-1}{3\tau},\frac{\gamma_3}{2\gamma_3+1}\}$ and $ \kappa<\alpha<1-2\rho. $ Let $f(\alpha)=\min\{1-\alpha-2\rho,2(\alpha-\rho)\tau-2\}$ and $\alpha_0=\arg\max_{\kappa<\alpha<1-2\rho}f(\alpha)$. Selecting $\alpha=\alpha_0$ and $x=C\{\log(pn)\}^{4/\gamma_2}n^{-f(\alpha_0)/2}$, we obtain
\[
\mathbb{P}\bigg\{\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\bfeta_{t}\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }-\mathbb{E}(\bfeta_{t}\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} })\}\bigg]\bigg|_\infty>x\bigg\}\rightarrow0.
\]
Hence, we complete the proof of Lemma \ref{la:4}. $\hfill\Box$
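The maximization of $f(\alpha)$ over $(\kappa,1-2\rho)$ in Lemma \ref{la:4} can be illustrated numerically; the parameter values below are assumptions chosen only to satisfy the stated constraints on $\rho$, $\tau$, $\gamma_2$ and $\gamma_3$.

```python
import numpy as np

# Illustrative evaluation of the rate exponent f(alpha_0) in Lemma 4
# (gamma2, gamma3, tau, rho are assumed example values, not from the paper).
gamma2, gamma3, tau, rho = 2.0, 1.0, 2.0, 0.1

def f(a):  # f(alpha) = min{1 - alpha - 2*rho, 2*(alpha - rho)*tau - 2}
    return min(1 - a - 2 * rho, 2 * (a - rho) * tau - 2)

kappa = max(1 / (2 * gamma3 + 1),
            (rho * tau - rho + 2) / (tau + 1 + gamma3),
            (rho * tau + 1) / tau)
grid = np.linspace(kappa + 1e-6, 1 - 2 * rho - 1e-6, 100001)
alpha0 = grid[np.argmax([f(a) for a in grid])]
# f is the min of an increasing and a decreasing linear function of alpha;
# when their crossing lies inside (kappa, 1-2*rho), as it does here, the
# maximizer equates them: 1 - a - 2*rho = 2*(a - rho)*tau - 2.
alpha_star = (3 - 2 * rho + 2 * rho * tau) / (1 + 2 * tau)
assert abs(alpha0 - alpha_star) < 1e-3
```

For these values $\kappa=0.6$, $\alpha_0=0.64$ and $f(\alpha_0)=0.16$, so the rate in the lemma is $n^{-0.08}$ up to the polylogarithmic factor.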
\bigskip
\noindent{\bf Proof of Theorem \ref{tm:2}:} Similar to the proof of (\ref{eq:step2}), it suffices to prove $|\widehat{{\mathbf W}}-{\mathbf W}|_\infty=o_p(1)$. By Lemmas \ref{la:1} and \ref{la:bias}, we have $\max_{1\leq j\leq p}|\widehat{v}_{j,j}-v_{j,j}|=O_p\{(n^{-1}\log p)^{1/2}\}$. Since the $v_{j,j}$'s are uniformly bounded away from zero, the $\widehat{v}_{j,j}^{-1}$'s are uniformly bounded with probability approaching one. Thus,
\begin{equation}
\begin{split}
|\widehat{{\mathbf W}}-{\mathbf W}|_\infty\leq&~C|\widehat{\boldsymbol{\Xi}}-\boldsymbol{\Xi}|_\infty+C|\widehat{{\mathbf H}}-{\mathbf H}|_\infty\\
=&~C|\widehat{\boldsymbol{\Xi}}-\boldsymbol{\Xi}|_\infty+O_p\{(n^{-1}\log p)^{1/2}\}.
\end{split}
\end{equation}
We will show $|\widehat{\boldsymbol{\Xi}}-\boldsymbol{\Xi}|_\infty=o_p(1)$ below.
Define
\[
\widetilde{\boldsymbol{\Xi}}=\sum_{k=-n+1}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bGamma_k
\]
where
\[
{\bGamma}_k= \left\{ \begin{aligned}
\frac{1}{n}\sum_{t=k+1}^n\mathbb{E}({\bfeta}_t{\bfeta}_{t-k}^{ \mathrm{\scriptscriptstyle T} }),~~~ &k\geq0; \\
\frac{1}{n}\sum_{t=-k+1}^n\mathbb{E}({\bfeta}_{t+k}{\bfeta}_t^{ \mathrm{\scriptscriptstyle T} }),~~&k<0.
\end{aligned} \right.
\]
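As a concrete illustration of the kernel-weighted long-run covariance estimator $\widehat{\boldsymbol{\Xi}}=\sum_{k}\mathcal{K}(k/S_n)\widehat{\bGamma}_k$ analyzed here, a minimal sketch follows; the Bartlett kernel and the data dimensions are assumptions of the example, not the paper's choices.

```python
import numpy as np

def lrv_estimate(eta, S_n):
    """Kernel long-run covariance estimator
    sum_{k=-n+1}^{n-1} K(k/S_n) * Gamma_hat_k, where Gamma_hat_k is the
    lag-k sample autocovariance; the Bartlett kernel is an assumption."""
    n, p = eta.shape
    K = lambda x: max(0.0, 1.0 - abs(x))  # Bartlett kernel
    Xi = np.zeros((p, p))
    for k in range(-n + 1, n):
        w = K(k / S_n)
        if w == 0.0:
            continue
        if k >= 0:
            G = eta[k:].T @ eta[: n - k] / n  # (1/n) sum_t eta_t eta_{t-k}^T
        else:
            G = eta[: n + k].T @ eta[-k:] / n
        Xi += w * G
    return Xi

rng = np.random.default_rng(0)
eta = rng.standard_normal((200, 3))
Xi = lrv_estimate(eta, S_n=5)
assert Xi.shape == (3, 3)
assert np.allclose(Xi, Xi.T)  # symmetric since Gamma_{-k} = Gamma_k^T
```

In the proof, $\bGamma_k$ above is the population analogue of `Gamma_hat_k`, and the symmetric treatment of positive and negative lags mirrors the split of $\widehat{\boldsymbol{\Xi}}-\widetilde{\boldsymbol{\Xi}}$ into the two sums below.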
We will specify the convergence rates of $|\widehat{\boldsymbol{\Xi}}-\widetilde{\boldsymbol{\Xi}}|_\infty$ and $|\widetilde{\boldsymbol{\Xi}}-\boldsymbol{\Xi}|_\infty$, respectively. Notice that
\[
\begin{split}
\widehat{\boldsymbol{\Xi}}-\widetilde{\boldsymbol{\Xi}}=&~\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\big(\widehat{\bGamma}_k-\bGamma_k\big)\\
&~+\sum_{k=-n+1}^{-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\big(\widehat{\bGamma}_k-\bGamma_k\big).
\end{split}
\]
For any $k\geq0$, it holds that
\[
\begin{split}
\widehat{\bGamma}_k=&~\frac{1}{n}\sum_{t=k+1}^n\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }+\frac{1}{n}\sum_{t=k+1}^n\big(\widehat{\bfeta}_t-\bfeta_t\big)\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }\\
&+\frac{1}{n}\sum_{t=k+1}^n\bfeta_t\big(\widehat{\bfeta}_{t-k}-\bfeta_{t-k}\big)^{ \mathrm{\scriptscriptstyle T} }\\
&+\frac{1}{n}\sum_{t=k+1}^n\big(\widehat{\bfeta}_t-\bfeta_t\big)\big(\widehat{\bfeta}_{t-k}-\bfeta_{t-k}\big)^{ \mathrm{\scriptscriptstyle T} },
\end{split}
\]
which implies
\begin{equation}\label{eq:bound}
\begin{split}
\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\big(\widehat{\bGamma}_k-\bGamma_k\big)=&~\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }-\mathbb{E}(\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} })\}\bigg]\\
&+\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg\{\frac{1}{n}\sum_{t=k+1}^n\big(\widehat{\bfeta}_t-\bfeta_t\big)\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }\bigg\}\\
&+\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg\{\frac{1}{n}\sum_{t=k+1}^n\bfeta_t\big(\widehat{\bfeta}_{t-k}-\bfeta_{t-k}\big)^{ \mathrm{\scriptscriptstyle T} }\bigg\}\\
&+\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg\{\frac{1}{n}\sum_{t=k+1}^n\big(\widehat{\bfeta}_t-\bfeta_t\big)\big(\widehat{\bfeta}_{t-k}-\bfeta_{t-k}\big)^{ \mathrm{\scriptscriptstyle T} }\bigg\}.
\end{split}
\end{equation}
We will prove that the $|\cdot|_\infty$-norms of the last three terms on the right-hand side of the above identity are $O_p\{sS_n(n^{-1}\log p)^{1/2}\}$. We only show this rate for one of them; the proofs for the other two are similar. For any $j$ and $t$,
\[
\begin{split}
\widehat{\eta}_{j,t}-\eta_{j,t}=&~\big\{\widehat{\epsilon}_{\chi_1(j),t}\widehat{\epsilon}_{\chi_2(j),t}-\epsilon_{\chi_1(j),t}\epsilon_{\chi_2(j),t}\big\}-\big\{\widehat{v}_{\boldsymbol{\chi}(j)}-v_{\boldsymbol{\chi}(j)}\big\}\\
=&~\widehat{\epsilon}_{\chi_1(j),t}\widehat{\epsilon}_{\chi_2(j),t}-\epsilon_{\chi_1(j),t}\epsilon_{\chi_2(j),t}+O_p\{(n^{-1}\log p)^{1/2}\}\\
=&~\big\{\widehat{\balpha}_{\chi_1(j)}-\balpha_{\chi_1(j)}\big\}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t{\mathbf y}_t^{ \mathrm{\scriptscriptstyle T} }\big\{\widehat{\balpha}_{\chi_2(j)}-\balpha_{\chi_2(j)}\big\}\\
&-\epsilon_{\chi_2(j),t}\big\{\widehat{\balpha}_{\chi_1(j)}-\balpha_{\chi_1(j)}\big\}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\\
&-\epsilon_{\chi_1(j),t}\big\{\widehat{\balpha}_{\chi_2(j)}-\balpha_{\chi_2(j)}\big\}^{ \mathrm{\scriptscriptstyle T} }{\mathbf y}_t\\
&+O_p\{(n^{-1}\log p)^{1/2}\}.
\end{split}
\]
Here the term $O_p\{(n^{-1}\log p)^{1/2}\}$ is uniform in $j$ and $t$. Then the $(j_1,j_2)$-th component of $\sum_{k=0}^{n-1}\mathcal{K}(k/S_n)\{n^{-1}\sum_{t=k+1}^n(\widehat{\bfeta}_t-\bfeta_t)\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }\}$ is
\begin{equation}\label{eq:l1l2}
\begin{split}
&\big\{\widehat{\balpha}_{\chi_1(j_1)}-\balpha_{\chi_1(j_1)}\big\}^{ \mathrm{\scriptscriptstyle T} }\bigg\{\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n\eta_{j_2,t-k}{\mathbf y}_t{\mathbf y}_t^{ \mathrm{\scriptscriptstyle T} }\bigg)\bigg\}\big\{\widehat{\balpha}_{\chi_2(j_2)}-\balpha_{\chi_2(j_2)}\big\}\\
-&\big\{\widehat{\balpha}_{\chi_1(j_1)}-\balpha_{\chi_1(j_1)}\big\}^{ \mathrm{\scriptscriptstyle T} }\bigg\{\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n{\mathbf y}_t\eta_{j_2,t-k}\epsilon_{\chi_2(j_1),t}\bigg)\bigg\}\\
-&\big\{\widehat{\balpha}_{\chi_2(j_1)}-\balpha_{\chi_2(j_1)}\big\}^{ \mathrm{\scriptscriptstyle T} }\bigg\{\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n{\mathbf y}_t\eta_{j_2,t-k}\epsilon_{\chi_1(j_1),t}\bigg)\bigg\}\\
+&\widetilde{R}_{j_1,j_2},
\end{split}
\end{equation}
where
\[
\begin{split}
|\widetilde{R}_{j_1,j_2}|\leq&~ \bigg\{\sum_{k=0}^{n-1}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg(\frac{1}{n}\sum_{t=k+1}^n|\eta_{j_2,t-k}|\bigg)\bigg\}\cdot O_p\{(n^{-1}\log p)^{1/2}\}\\
\leq&~\bigg\{\sum_{k=0}^{n-1}\bigg|\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg|\bigg\}\bigg(\frac{1}{n}\sum_{t=1}^n|\eta_{j_2,t}|\bigg)\cdot O_p\{(n^{-1}\log p)^{1/2}\}\\
=&~O_p\{S_n(n^{-1}\log p)^{1/2}\}.
\end{split}
\]
Here the term $O_p\{S_n(n^{-1}\log p)^{1/2}\}$ is uniform in $j_1$ and $j_2$. Following the same arguments, we have
\[
\begin{split}
\sup_{1\leq j_1,j_2\leq p}\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n\eta_{j_2,t-k}{\mathbf y}_t{\mathbf y}_t^{ \mathrm{\scriptscriptstyle T} }\bigg)\bigg|_\infty\leq&~CS_n,\\
\sup_{1\leq j_1,j_2\leq p}\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n{\mathbf y}_t\eta_{j_2,t-k}\epsilon_{\chi_2(j_1),t}\bigg)\bigg|_\infty\leq&~CS_n,\\
\sup_{1\leq j_1,j_2\leq p}\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg(\frac{1}{n}\sum_{t=k+1}^n{\mathbf y}_t\eta_{j_2,t-k}\epsilon_{\chi_1(j_1),t}\bigg)\bigg|_\infty\leq&~CS_n.
\end{split}
\]
Therefore, the $(j_1,j_2)$-th component of $\sum_{k=0}^{n-1}\mathcal{K}(k/S_n)\{n^{-1}\sum_{t=k+1}^n(\widehat{\bfeta}_t-\bfeta_t)\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }\}$ can be bounded by
$
CS_n\sup_{1\leq j\leq p}|\widehat{\balpha}_j-\balpha_j|_1+O_p\{S_n(n^{-1}\log p)^{1/2}\}=O_p\{sS_n(n^{-1}\log p)^{1/2}\},
$
where the last step follows from (\ref{eq:m1}). Therefore, by (\ref{eq:bound}) and Lemma \ref{la:4}, we have
\[
\begin{split}
&~\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\big(\widehat{\bGamma}_k-\bGamma_k\big)\bigg|_\infty\\
\leq&~\bigg|\sum_{k=0}^{n-1}\mathcal{K}\bigg(\frac{k}{S_n}\bigg)\bigg[\frac{1}{n}\sum_{t=k+1}^n\{\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} }-\mathbb{E}(\bfeta_t\bfeta_{t-k}^{ \mathrm{\scriptscriptstyle T} })\}\bigg]\bigg|_\infty\\
&+O_p\{sS_n(n^{-1}\log p)^{1/2}\}\\
=&~O_p[\{\log (pn)\}^{4/\gamma_2}n^{-f(\alpha_0)/2}]+O_p\{sS_n(n^{-1}\log p)^{1/2}\}.
\end{split}
\]
Analogously, we can prove the same result for $|\sum_{k=-n+1}^{-1}\mathcal{K}({k}/{S_n})(\widehat{\bGamma}_k-\bGamma_k)|_\infty$. Therefore,
$
|\widehat{\boldsymbol{\Xi}}-\widetilde{\boldsymbol{\Xi}}|_\infty=O_p[\{\log (pn)\}^{4/\gamma_2}n^{-f(\alpha_0)/2}]+O_p\{sS_n(n^{-1}\log p)^{1/2}\}.
$
Repeating the proof of Proposition 1(b) in \cite{Andrews_1991}, we see that the convergence in that proposition holds uniformly over the components of $\widetilde{\boldsymbol{\Xi}}-\boldsymbol{\Xi}$. Thus, $|\widetilde{\boldsymbol{\Xi}}-\boldsymbol{\Xi}|_\infty=o(1)$. Then
$
|\widehat{\boldsymbol{\Xi}}-{\boldsymbol{\Xi}}|_\infty=o_p(1).
$
Arguing as in (\ref{eq:2}), we complete the proof. $\hfill\Box$
\bigskip
\noindent {\bf Proof of Corollary \ref{cy:1}:} From Theorem \ref{tm:2}, it holds that $\mathbb{P}_{H_0}({\mathbf c}\in\mathcal{C}_{\mathcal{S},1-\alpha,1})\rightarrow1-\alpha$. Therefore, $\mathbb{P}_{H_0}(\Psi_{\alpha}=1)=\mathbb{P}_{H_0}({\mathbf c}\notin\mathcal{C}_{\mathcal{S},1-\alpha,1})\rightarrow\alpha$, which establishes part (i). For part (ii), the following standard results on the maximum of Gaussian random variables hold:
\[
\mathbb{E}\big(|\widehat{\bxi}|_\infty | \mathcal{Y}_n\big)\leq\{1+(2\log p)^{-1}\}(2\log p)^{1/2}\max_{1\leq j\leq r}\widehat{w}_{j,j}^{1/2}
\]
and
\[
\mathbb{P}\big\{|\widehat{\bxi}|_\infty\geq \mathbb{E}\big(|\widehat{\bxi}|_\infty|\mathcal{Y}_n\big)+u | \mathcal{Y}_n\big\}\leq \exp\bigg(-\frac{u^2}{2\max_{1\leq j\leq r}\widehat{w}_{j,j}}\bigg)
\]
for any $u>0$. Then,
$
\widehat{q}_{\mathcal{S},1-\alpha,1}\leq[\{1+(2\log p)^{-1}\}(2\log p)^{1/2}+\{2\log(1/\alpha)\}^{1/2}]\max_{1\leq j\leq r}\widehat{w}_{j,j}^{1/2}.
$
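As a sanity check, the first Gaussian-maximum bound above can be verified by simulation; the dimension, variances, seed and independence of the coordinates below are assumptions of the example (the bound itself only requires Gaussianity).

```python
import numpy as np

# Monte Carlo check of E(max_j |xi_j|) <= {1+(2 log p)^{-1}}(2 log p)^{1/2} max_j sd_j
# for a centered Gaussian vector; independent coordinates are assumed here.
rng = np.random.default_rng(1)
p = 200
sigma = rng.uniform(0.5, 2.0, size=p)          # coordinate standard deviations
xi = rng.standard_normal((20000, p)) * sigma   # 20000 Monte Carlo draws
emp = np.abs(xi).max(axis=1).mean()            # Monte Carlo estimate of E|xi|_inf
bound = (1 + 1 / (2 * np.log(p))) * np.sqrt(2 * np.log(p)) * sigma.max()
assert emp <= bound
```
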
Let $\mathscr{T}_\varepsilon=\{\max_{1\leq j\leq r}|\widehat{w}_{j,j}^{1/2}-w_{j,j}^{1/2}|/w_{j,j}^{1/2}\leq \varepsilon\}$ for some $\varepsilon>0$. Restricted on $\mathscr{T}_\varepsilon$,
$
\widehat{q}_{\mathcal{S},1-\alpha,1}\leq(1+\varepsilon)[\{1+(2\log p)^{-1}\}(2\log p)^{1/2}+\{2\log(1/\alpha)\}^{1/2}]\max_{1\leq j\leq r}{w}_{j,j}^{1/2}.
$
Let $(\tilde{j}_{1},\tilde{j}_{2})=\arg\max_{(j_1,j_2)\in\mathcal{S}}|\omega_{j_1,j_2}-c_{j_1,j_2}|$. Without loss of generality, we assume $\omega_{\tilde{j}_{1},\tilde{j}_{2}}-c_{\tilde{j}_{1},\tilde{j}_{2}}>0$. Therefore,
\[
\begin{split}
\mathbb{P}_{H_1}(\Psi_\alpha=1)=&~\mathbb{P}_{H_1}\bigg\{\max_{(j_1,j_2)\in\mathcal{S}}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}-c_{j_1,j_2}|>\widehat{q}_{\mathcal{S},1-\alpha,1}\bigg\}\\
\geq&~\mathbb{P}_{H_1}\Big\{{n}^{1/2}(\widehat{\omega}_{\tilde{j}_1,\tilde{j}_2}-c_{\tilde{j}_1,\tilde{j}_2})>\widehat{q}_{\mathcal{S},1-\alpha,1}\Big\}\\
=&~1-\mathbb{P}_{H_1}\Big\{{n}^{1/2}(\widehat{\omega}_{\tilde{j}_1,\tilde{j}_2}-c_{\tilde{j}_1,\tilde{j}_2})\leq\widehat{q}_{\mathcal{S},1-\alpha,1},~\mathscr{T}_\varepsilon\Big\}\\
&~-\mathbb{P}(\mathscr{T}_\varepsilon^c).
\end{split}
\]
On the event $\mathscr{T}_\varepsilon$, if $\varepsilon\rightarrow0$, it holds that
$
\widehat{q}_{\mathcal{S},1-\alpha,1}-(\omega_{\tilde{j}_1,\tilde{j}_2}-c_{\tilde{j}_1,\tilde{j}_2})\leq-C(\log p)^{1/2}\max_{1\leq j\leq r}w_{j,j}^{1/2}
$
for some $C>0$, which implies
\[
\begin{split}
&~\mathbb{P}_{H_1}\Big\{{n}^{1/2}(\widehat{\omega}_{\tilde{j}_1,\tilde{j}_2}-c_{\tilde{j}_1,\tilde{j}_2})\leq\widehat{q}_{\mathcal{S},1-\alpha,1},~\mathscr{T}_\varepsilon\Big\}\\
\leq&~ \mathbb{P}_{H_1}\Big\{{n}^{1/2}(\widehat{\omega}_{\tilde{j}_1,\tilde{j}_2}-\omega_{\tilde{j}_1,\tilde{j}_2})\leq -C(\log p)^{1/2}\max_{1\leq j\leq r}w_{j,j}^{1/2}\Big\}\\
\rightarrow&~0.
\end{split}
\]
From Lemma \ref{la:4}, we know that $\max_{1\leq j\leq r}|\widehat{w}_{j,j}-w_{j,j}|=o_p(1)$, which also implies that $\max_{1\leq j\leq r}|\widehat{w}_{j,j}^{1/2}-w_{j,j}^{1/2}|/w_{j,j}^{1/2}=o_p(1)$. Then we can choose a suitable $\varepsilon\rightarrow0$ such that $\mathbb{P}(\mathscr{T}_\varepsilon^c)\rightarrow0$. This completes part (ii). $\hfill\Box$
\bigskip
\noindent {\bf Proof of Corollary \ref{cy:2}:} Our proof consists of two steps: (i) showing that $\mathbb{P}(\widehat{\mathcal{M}}_{n,\alpha}\subset\mathcal{M}_0)\rightarrow1$, and (ii) showing that $\mathbb{P}(\mathcal{M}_0\subset\widehat{\mathcal{M}}_{n,\alpha})\rightarrow1$. Result (i) is equivalent to $\mathbb{P}(\mathcal{M}_0^c\subset\widehat{\mathcal{M}}_{n,\alpha}^c)\rightarrow1$, which in turn is equivalent to $\mathbb{P}\{\max_{(j_1,j_2)\in\mathcal{M}_0^c}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}|\geq \widehat{q}_{\mathcal{S},1-\alpha,1}\}\rightarrow0$. Since $\mathcal{S}=\{1,\ldots,p\}^2$, it holds that
\[
\begin{split}
&~\mathbb{P}\bigg\{\max_{(j_1,j_2)\in\mathcal{M}_0^c}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}|\geq\widehat{q}_{\mathcal{S},1-\alpha,1}\bigg\}\\
\leq&~\mathbb{P}\bigg\{\max_{(j_1,j_2)\in\mathcal{S}}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}-\omega_{j_1,j_2}|\geq\widehat{q}_{\mathcal{S},1-\alpha,1}\bigg\}\\
\leq&~\alpha+o(1),
\end{split}
\]
which implies $\mathbb{P}\{\max_{(j_1,j_2)\in\mathcal{M}_0^c}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}|\geq\widehat{q}_{\mathcal{S},1-\alpha,1}\}\rightarrow0$. This establishes result (i). Result (ii) is equivalent to $\mathbb{P}\{\min_{(j_1,j_2)\in\mathcal{M}_0}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}|\leq \widehat{q}_{\mathcal{S},1-\alpha,1}\}\rightarrow0$. Let $(\tilde{j}_1,\tilde{j}_2)=\arg\min_{(j_1,j_2)\in\mathcal{M}_0}|\omega_{j_1,j_2}|$. Without loss of generality, we assume $\omega_{\tilde{j}_1,\tilde{j}_2}>0$. Notice that
\[
\begin{split}
&~\mathbb{P}\bigg\{\min_{(j_1,j_2)\in\mathcal{M}_0}{n}^{1/2}|\widehat{\omega}_{j_1,j_2}|\leq \widehat{q}_{\mathcal{S},1-\alpha,1}\bigg\}\\
\leq&~\mathbb{P}\big\{{n}^{1/2}(\widehat{\omega}_{\tilde{j}_1,\tilde{j}_2}-\omega_{\tilde{j}_1,\tilde{j}_2})\leq \widehat{q}_{\mathcal{S},1-\alpha,1}-{n}^{1/2}\omega_{\tilde{j}_1,\tilde{j}_2}\big\},
\end{split}
\]
we can establish result (ii) by following the arguments in the proof of Corollary \ref{cy:1}. $\hfill\Box$
Q: Wake on Lan through router to repeater client to router from outside LAN Let's say I have an Internet connection with a public IP address 123.123.123.123. This connection is going to a router which gives out addresses from 192.168.1.100 to 192.168.1.200.
A repeater is connected to the router as a client. The repeater is then connected to another router, which gives out addresses from 192.168.2.100 to 192.168.2.200.
The IP of the computer I want to wake up, which is on the second router, is 192.168.2.102, and the MAC is 11:aa:11:aa:11:aa.
Wake on LAN works using 192.168.2.102 on local network using MAC 11:aa:11:aa:11:aa.
How do I make Wake on LAN work from an outside IP with this setup?
Here's a chart to illustrate the setup:
Router with Further router with
built-in built-in DHCP.
DHCP server WAN port---| 192.168.2.100-200 range [Target Computer]
| | \->| | |
| | LAN with range 192.168.1.100-200 | | |
External IP --------/ \-------------------(repeater)-----/ \----------- (LAN part 2) ---------
123.123.123.123 WOL does not work for target WOL works for target
A: To be able to do this, your inner router must support Subnet Directed Broadcasts.
Subnet directed broadcasts
A principal limitation of standard broadcast Wake-On-LAN is that
broadcast packets are generally not routed. This prevents the
technique being used in larger networks or over the internet. Subnet
Directed Broadcasts (SDB) may be used to overcome this
limitation. SDB may require changes to intermediate router
configuration. Subnet directed broadcasts are treated as normal
network packets until processed by the final (local) router. This
router converts the packet into a true broadcast packet. This
technique allows a broadcast to be initiated on a remote network but
requires all intervening routers to forward the SDB. When
preparing a network to forward SDB packets, care must be taken to
filter such that only desired (e.g. WoL) SDB packets are
permitted—otherwise the network becomes a participant in DDoS attacks
such as the Smurf Attack.
Refer to your router's firmware documentation to see if it supports this feature.
Your second option is to have a computer in the inner LAN that is always on, listening for a normal TCP connection, and have that computer broadcast the WOL packet (if you have customizable firmware like dd-wrt, the router itself could be that computer). This is what the service LogMeIn does for its WOL: if it detects that two computers on the same network are using the service, it will use the powered-on computer to broadcast a WOL packet to the powered-off computer.
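The "always-on computer" approach boils down to emitting the standard WOL magic packet on the inner LAN. Here is a minimal sketch; the broadcast address `192.168.2.255` and the MAC `11:aa:11:aa:11:aa` come from the question's setup, and port 9 is the conventional UDP port used for WOL:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, addr: str = "192.168.2.255", port: int = 9) -> None:
    # On the always-on machine inside the inner LAN, broadcast on that LAN.
    # (For the subnet-directed-broadcast approach, addr would instead be the
    # inner subnet's broadcast address as seen from outside, and every router
    # in between must be configured to forward it.)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (addr, port))

# send_wol("11:aa:11:aa:11:aa")
```

The listener on the always-on machine would accept the external trigger (any TCP connection you choose) and then call `send_wol`.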
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<meta content="IE=edge;chrome=1" http-equiv="X-UA-Compatible" />
<title>dognews</title>
<meta content="width=device-width, initial-scale=1" name="viewport" />
<link rel="alternate" type="application/atom+xml" title="Atom Feed" href="/feed.xml" /><!--[if lt IE 9]><script src="../../../js/ie8.js" type="text/javascript"></script><![endif]--><link href="../../../css/all.css" media="screen" rel="stylesheet" type="text/css" /><script type="text/javascript">
(function(d,e,j,h,f,c,b){d.GoogleAnalyticsObject=f;d[f]=d[f]||function(){(d[f].q=d[f].q||[]).push(arguments)},d[f].l=1*new Date();c=e.createElement(j),b=e.getElementsByTagName(j)[0];c.async=1;c.src=h;b.parentNode.insertBefore(c,b)})(window,document,"script","//www.google-analytics.com/analytics.js","ga");ga("create","UA-63279904-1", location.hostname);ga("send","pageview");
</script>
<link href="/favicon.png" rel="icon" type="image/png" />
</head>
<body>
<nav class="navbar navbar-inverse navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button class="navbar-toggle collapsed" data-target=".navbar-ex1-collapse" data-toggle="collapse" type="button"><span class="sr-only">Toggle navigation</span><span class="icon-bar"></span><span class="icon-bar"></span><span class="icon-bar"></span></button><a class="navbar-brand" href="/">dognews</a>
</div>
<div class="collapse navbar-collapse navbar-ex1-collapse">
<ul class="nav navbar-nav">
<li>
<a href="/menu1.html"> Über Uns </a>
</li>
<li>
<a href="/menu2.html"> Newsletter! </a>
</li>
<li class="dropdown">
<a aria-expanded="false" class="dropdown-toggle" data-toggle="dropdown" href="#" role="button">Categories <span class="caret"></span></a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="/tags/businessidee.html">businessidee (38)</a>
</li>
<li>
<a href="/tags/deutschland.html">deutschland (596)</a>
</li>
<li>
<a href="/tags/erziehung.html">erziehung (35)</a>
</li>
<li>
<a href="/tags/fotografie.html">fotografie (5)</a>
</li>
<li>
<a href="/tags/freizeit.html">freizeit (83)</a>
</li>
<li>
<a href="/tags/gesetz.html">gesetz (38)</a>
</li>
<li>
<a href="/tags/gesundheit.html">gesundheit (116)</a>
</li>
<li>
<a href="/tags/herdenhunde.html">herdenhunde (10)</a>
</li>
<li>
<a href="/tags/hundesachkunde.html">hundesachkunde (13)</a>
</li>
<li>
<a href="/tags/hundesport.html">hundesport (12)</a>
</li>
<li>
<a href="/tags/kinder.html">kinder (9)</a>
</li>
<li>
<a href="/tags/kurioses.html">kurioses (29)</a>
</li>
<li>
<a href="/tags/oesterreich.html">oesterreich (63)</a>
</li>
<li>
<a href="/tags/rassen.html">rassen (8)</a>
</li>
<li>
<a href="/tags/ratgeber.html">ratgeber (161)</a>
</li>
<li>
<a href="/tags/rettungshunde.html">rettungshunde (3)</a>
</li>
<li>
<a href="/tags/schweiz.html">schweiz (99)</a>
</li>
<li>
<a href="/tags/senioren.html">senioren (10)</a>
</li>
<li>
<a href="/tags/stars.html">stars (11)</a>
</li>
<li>
<a href="/tags/urlaub.html">urlaub (39)</a>
</li>
<li>
<a href="/tags/veranstaltung.html">veranstaltung (1)</a>
</li>
<li>
<a href="/tags/wandern.html">wandern (17)</a>
</li>
<li>
<a href="/tags/wissen.html">wissen (200)</a>
</li>
</ul>
</li>
<li class="dropdown">
<a aria-expanded="false" class="dropdown-toggle" data-toggle="dropdown" href="#" role="button">By Year <span class="caret"></span></a>
<ul class="dropdown-menu" role="menu">
<li>
<a href="/2017.html">2017 (8)</a>
</li>
<li>
<a href="/2016.html">2016 (55)</a>
</li>
<li>
<a href="/2015.html">2015 (458)</a>
</li>
<li>
<a href="/2014.html">2014 (273)</a>
</li>
</ul>
</li>
<ul class="list-unstyled list-inline nav navbar-nav navbar-right">
<li>
<a href="https://twitter.com/cbdognews"><i class="fa fa-lg fa-inverse fa-twitter-square"></i></a>
</li>
</ul>
</ul>
</div>
</div>
</nav>
<div class="container">
<div class="row">
<div class="col-lg-9 col-md-9">
<h1>
Archive for May 2015
</h1>
<p>
Page 5 of 6
</p>
<p>
<a href="/2015/05/page/4.html">Previous page</a>
</p>
<ul>
<h2>
<a href="/2015/05/07/von-hundeleben-rehkitzen-und-pferdegeduld.html">Von Hundeleben, Rehkitzen und Pferdegeduld</a>
</h2>
<p>
<small class="label label-default">wissen</small>
<small class="label label-default">oesterreich</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 7
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Die Veterinärmedizinische Uni Wien wird 250 Jahre alt. Seit damals hat sich in der Beziehung Tier-Mensch viel getan. Das zeigt sich auch an der Vielfalt der einschlägigen akademischen Weiterbildungslehrgänge.
<a href="http://diepresse.com/home/bildung/weiterbildung/4695858/Von-Hundeleben-Rehkitzen-und-Pferdegeduld">Link</a></p>
</div>
</div><a class="btn btn-primary" href="/2015/05/07/von-hundeleben-rehkitzen-und-pferdegeduld.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/06/ungetrubte-fruhlingsgefuhle-so-kommen-hund-und-halter-gesund-durch-die-outdoor-s.html">Ungetrübte Frühlingsgefühle: So kommen Hund und Halter gesund durch die Outdoor-Saison</a>
</h2>
<p>
<small class="label label-default">ratgeber</small>
<small class="label label-default">deutschland</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 6
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Auf ausgedehnten Spaziergängen können Hunde und ihre Besitzer die Wintermüdigkeit abschütteln. "Bei aller Vorfreude auf die schöne Jahreszeit darf jedoch die Sicherheit des Hundes nicht zu kurz kommen. Im Frühling toben nicht nur Fellnasen...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/06/ungetrubte-fruhlingsgefuhle-so-kommen-hund-und-halter-gesund-durch-die-outdoor-s.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/06/sie-sind-nicht-allein-auch-hunde-haben-heuschnupfen.html">Sie sind nicht allein! Auch Hunde haben Heuschnupfen</a>
</h2>
<p>
<small class="label label-default">gesundheit</small>
<small class="label label-default">schweiz</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 6
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Wälzt sich Ihr Hund gerade besondes oft in der Wiese? Tränen vielleicht sogar seine Augen? Dann kann es sein, dass auch er – wie viele Menschen – an eine Pollenallergie leidet. Die Menschen niesen, den Hund juckts Anders als wir Zweibeiner...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/06/sie-sind-nicht-allein-auch-hunde-haben-heuschnupfen.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/06/was-sie-bei-der-hundeerziehung-beachten-sollten.html">Was Sie bei der Hundeerziehung beachten sollten</a>
</h2>
<p>
<small class="label label-default">deutschland</small>
<small class="label label-default">erziehung</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 6
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Die Erziehung eines Hundes kann manchmal schwieriger sein als gedacht. Vor allem, wenn der Hund nicht das tut, was man von ihm möchte, stellt sich schnell die Frage: woran liegt's? Hier sind ein paar Ursachen dafür, dass ihr Hund sie...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/06/was-sie-bei-der-hundeerziehung-beachten-sollten.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/05/wolfe-auf-dem-laufband.html">Wölfe auf dem Laufband</a>
</h2>
<p>
<small class="label label-default">wissen</small>
<small class="label label-default">oesterreich</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 5
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Wölfe können bis zu 100 Kilometer pro Tag laufen. Im Wildpark Ernstbrunn haben sie dafür ein Laufband. Das Wolfsforschungszentrum (WSC) führt Tests durch, die das soziale Laufverhalten von Wölfen und Hunden behandeln...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/05/wolfe-auf-dem-laufband.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/05/einhaltung-der-leinenpflicht-wird-kontrolliert.html">Einhaltung der Leinenpflicht wird kontrolliert</a>
</h2>
<p>
<small class="label label-default">schweiz</small>
<small class="label label-default">gesetz</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 5
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Im Kanton Luzern müssen Hunde zwischen dem 1. April und dem 31. Juli im Wald und am Waldrand an die Leine genommen werden. Damit wird das Wild besser geschützt. Diese Leinenpflicht wurde 2014 erstmals eingeführt. Mit der Leinenpflicht...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/05/einhaltung-der-leinenpflicht-wird-kontrolliert.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/05/ein-hund-braucht-ausreichend-bewegungsfreiheit.html">Ein Hund braucht ausreichend Bewegungsfreiheit</a>
</h2>
<p>
<small class="label label-default">deutschland</small>
<small class="label label-default">gesetz</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 5
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Eine Autobox, die zum Transport von Hunden bestimmt ist, ist kein geeigneter Ort zur täglichen Unterbringung für Hunde. Die tägliche Unterbringung in der Box während der Arbeitszeit des Hundehalters verstößt gegen zwingende Bestimmungen...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/05/ein-hund-braucht-ausreichend-bewegungsfreiheit.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/04/wenn-der-spaziergang-ein-spierutenlauf-ist.html">Wenn der Spaziergang ein Spießrutenlauf ist</a>
</h2>
<p>
<small class="label label-default">ratgeber</small>
<small class="label label-default">oesterreich</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 4
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Probleme beim Gassigehen, Protestpinkeln und Impfschutz - der KURIER-Tiercoach weiß Antworten aus Leserfragen. Warum legt sich der kleine Hund mit großen an? Wieso setzt der eigene Hund nach dem Besuch eines Artgenossen Duftnoten im Zimmer...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/04/wenn-der-spaziergang-ein-spierutenlauf-ist.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/04/pfotenservice-fur-reisende.html">Pfotenservice für Reisende</a>
</h2>
<p>
<small class="label label-default">urlaub</small>
<small class="label label-default">deutschland</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 4
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> An Bord der Reederei TT-Line sind Schwedenreisende mit Haustier gut aufgehoben: Auf den Fähren können Hunde mit Frauchen und Herrchen spazieren gehen und dürfen sogar mit ihnen die Kabine teilen. Bei der Fährlinie genießen Bello und...</p>
</div>
</div><a class="btn btn-primary" href="/2015/05/04/pfotenservice-fur-reisende.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
<h2>
<a href="/2015/05/04/kein-tier-das-uns-ahnlicher-ist-als-der-wolf.html">'Kein Tier, das uns ähnlicher ist als der Wolf'</a>
</h2>
<p>
<small class="label label-default">wissen</small>
<small class="label label-default">oesterreich</small>
</p>
<hr />
<p>
<span class="glyphicon glyphicon-time"></span> Posted on May 4
</p>
<hr />
<div class="row">
<div class="article">
<p><span></span> Verhaltensforscher Kurt Kotrschal stellt seine Tiere aufs Laufband: Hunde wie auch Wölfe joggen auf vier Beinen - untersucht wird, welche Effekte das gemeinsame Laufen auf Herzschlagrate, Motivation und soziale Beziehungen hat.
<a href="http://www.kleinezeitung.at/s/chronik/tiere/4709051/Forschungszentrum-im-Weinviertel_Kein-Tier-das-uns-aehnlicher-ist">Link</a></p>
</div>
</div><a class="btn btn-primary" href="/2015/05/04/kein-tier-das-uns-ahnlicher-ist-als-der-wolf.html">Read More<span class="glyphicon glyphicon-chevron-right"></span></a>
<hr />
</ul>
<p>
<a href="/2015/05/page/6.html">Next page</a>
</p>
<hr />
<aside>
<h3>
Recent Articles
</h3>
<ol>
<li>
<a href="/2017/12/05/nun-ist-es-raus-hunde-sind-kluger-als-katzen.html">Nun ist es raus: Hunde sind klüger als Katzen</a> <span>Dec 5</span>
</li>
<li>
<a href="/2017/07/27/die-macht-der-geruche.html">Die Macht der Gerüche</a> <span>Jul 27</span>
</li>
<li>
<a href="/2017/06/21/vorsicht-giftig-diese-lebensmittel-sollten-hunde-nicht-fressen.html">Vorsicht giftig! Diese Lebensmittel sollten Hunde nicht fressen</a> <span>Jun 21</span>
</li>
<li>
<a href="/2017/03/27/studie-schaferhunde-konnen-brustkrebs-diagnostizieren.html">Studie: Schäferhunde können Brustkrebs diagnostizieren</a> <span>Mar 27</span>
</li>
<li>
<a href="/2017/03/27/atopische-dermatitis-was-tun-wenn-es-juckt-und-kratzt-allergien-belasten-das-woh.html">Atopische Dermatitis: Was tun, wenn es juckt und kratzt? / Allergien belasten das Wohlbefinden ...</a> <span>Mar 27</span>
</li>
<li>
<a href="/2017/02/27/tiermedizin-epilepsie-gen-entdeckt.html">Tiermedizin - Epilepsie-Gen entdeckt</a> <span>Feb 27</span>
</li>
<li>
<a href="/2017/01/17/auch-haustiere-frieren-so-kommt-bello-durch-den-winter.html">Auch Haustiere frieren | So kommt Bello durch den Winter</a> <span>Jan 17</span>
</li>
<li>
<a href="/2017/01/17/hunde-sind-bei-minusgraden-schnell-unterkuhlt.html">Hunde sind bei Minusgraden schnell unterkühlt</a> <span>Jan 17</span>
</li>
<li>
<a href="/2016/12/08/venedig-wo-die-gondeln-hunde-tragen.html">Venedig: Wo die Gondeln Hunde tragen</a> <span>Dec 8</span>
</li>
<li>
<a href="/2016/11/01/hunde-heulten-halbe-stunde-vor-erdbeben.html">Hunde heulten halbe Stunde vor Erdbeben</a> <span>Nov 1</span>
</li>
</ol>
</aside>
<hr />
<p class="text-center">
©2018 <a href="/">dognews</a> - <a href="/footer1.html">Disclaimer</a><br /><span class="small">Powered by <a href="https://cloudburo.net/docs/products.html">Cloudburo Curation Engine</a></span>
</p>
</div>
<div class="col-lg-3 col-md-3">
<div class="well">
<h4>
Categories
</h4>
<ul class="list-unstyled">
<li>
<a href="/tags/businessidee.html">businessidee</a> (38)
</li>
<li>
<a href="/tags/deutschland.html">deutschland</a> (596)
</li>
<li>
<a href="/tags/erziehung.html">erziehung</a> (35)
</li>
<li>
<a href="/tags/fotografie.html">fotografie</a> (5)
</li>
<li>
<a href="/tags/freizeit.html">freizeit</a> (83)
</li>
<li>
<a href="/tags/gesetz.html">gesetz</a> (38)
</li>
<li>
<a href="/tags/gesundheit.html">gesundheit</a> (116)
</li>
<li>
<a href="/tags/herdenhunde.html">herdenhunde</a> (10)
</li>
<li>
<a href="/tags/hundesachkunde.html">hundesachkunde</a> (13)
</li>
<li>
<a href="/tags/hundesport.html">hundesport</a> (12)
</li>
<li>
<a href="/tags/kinder.html">kinder</a> (9)
</li>
<li>
<a href="/tags/kurioses.html">kurioses</a> (29)
</li>
<li>
<a href="/tags/oesterreich.html">oesterreich</a> (63)
</li>
<li>
<a href="/tags/rassen.html">rassen</a> (8)
</li>
<li>
<a href="/tags/ratgeber.html">ratgeber</a> (161)
</li>
<li>
<a href="/tags/rettungshunde.html">rettungshunde</a> (3)
</li>
<li>
<a href="/tags/schweiz.html">schweiz</a> (99)
</li>
<li>
<a href="/tags/senioren.html">senioren</a> (10)
</li>
<li>
<a href="/tags/stars.html">stars</a> (11)
</li>
<li>
<a href="/tags/urlaub.html">urlaub</a> (39)
</li>
<li>
<a href="/tags/veranstaltung.html">veranstaltung</a> (1)
</li>
<li>
<a href="/tags/wandern.html">wandern</a> (17)
</li>
<li>
<a href="/tags/wissen.html">wissen</a> (200)
</li>
</ul>
</div>
<div class="well">
<h4>
By year
</h4>
<ol>
<li>
<a href="/2017.html">2017</a> (8)
</li>
<li>
<a href="/2016.html">2016</a> (55)
</li>
<li>
<a href="/2015.html">2015</a> (458)
</li>
<li>
<a href="/2014.html">2014</a> (273)
</li>
</ol>
</div>
</div>
</div>
</div>
<script src="../../../js/all.js" type="text/javascript"></script>
</body>
</html>
#ifndef MRUBY_ERROR_H
#define MRUBY_ERROR_H
#include "mruby.h" /* for mrb_state and mrb_value; struct RClass and struct RObject may remain incomplete here */

void mrb_sys_fail(mrb_state *mrb, const char *mesg);
mrb_value mrb_exc_new_str(mrb_state *mrb, struct RClass* c, mrb_value str);
mrb_value mrb_make_exception(mrb_state *mrb, int argc, mrb_value *argv);
mrb_value mrb_format(mrb_state *mrb, const char *format, ...);
void mrb_exc_print(mrb_state *mrb, struct RObject *exc);
void mrb_longjmp(mrb_state *mrb);
#endif /* MRUBY_ERROR_H */
\part{Foreword}
\newpage
To my wife, Léa,
To my daughter, Gabrielle,
To my parents, Françoise and Dominique
\begin{center}
\vspace*{1cm}
\large{Acknowledgements}
\end{center}
I wish here to pay tribute and express my deep gratitude to all those who, directly or indirectly, contributed to the completion of this thesis.
My thanks go first to my thesis advisor, Professor Sofiane Aboura. Throughout this work he offered me availability, attentiveness, trust, and precious and well-considered advice, all matching his competence and his human qualities.
I also warmly thank Professors Bertrand Maillet and Yannick Malevergne for the honour they did me in accepting to act as referees for this thesis, as well as Professors Jean-Michel Courtault, Jean-Luc Prigent and Isabelle Rivals for the honour they did me in accepting to act as examiners.
I also thank my colleagues, the research team at John Locke: Denis Grebenkov, Stanislav Kuperstein, Maxime Beucher, Edouard Limouse and Frédéric Herbette. Without them this work would probably never have seen the light of day; their help and support were decisive. My thoughts also go to Thomas Dionysopoulos, with whom, following our discussions in January 2015, I began to consider publishing articles on the correlation matrix.
Finally, on a more personal level, the love of my wife, Léa, and of my daughter, Gabrielle, allowed me to get through the difficult stages.
My thoughts also go to the people who were decisive, twelve years ago, in my career change from nuclear energy to market finance, of which this thesis is one of the milestones. I remember in particular my conversation with Professor Jean-Philippe Bouchaud in May 2006, in the Luxembourg gardens, which was a crucial moment. He was very kind and gave me sound advice: I should by no means resume my studies in derivatives, stochastic calculus and differential equations; I should instead direct my studies towards research, empiricism and market inefficiencies, a subject that was not yet fashionable before the financial crisis. I remember that, with great foresight, he had already anticipated the 2007-2008 subprime crisis, telling me that in his view the assumptions underlying the valuation of complex derivative products had become unrealistic. I have always been impressed by his simplicity, availability and kindness, given his exceptional success. I remember the many days I spent at that time reading his book and his numerous research articles, which allowed me to form my first intuitions about how financial markets work. I also think with emotion of Professors Jacques Prost and Pierre-Gilles de Gennes, just as simple, kind and exceptional, whose recommendation probably led Université Dauphine to accept me, at 28 and with my atypical background, into its master's programme in finance. I also think of Joel Benarroch and François Bonnin, who taught me the trade of discretionary and systematic fund management, and of the only ESPCI professors who truly believed in me, Isabelle Rivals and Léon Personnaz, who trained me in statistics for a year in their laboratory.
They had a great influence on my work. I also think of my closest friends Pierre, Alexandre, Charles-Henri, Olivier, Emmanuel and Antoine, and of my parents and my brothers and sisters, who believed in me and always supported me in this career change; I thank them as well.
\if false
A new method was developed to denoise the correlation matrix of stock returns, based on a constrained principal component analysis exploiting financial data. Portfolios, which I have named ``Fundamental Maximum Variance portfolios'', are built to capture optimally a risk style defined by a financial criterion (``Book'', ``Capitalization'', etc.). The constrained eigenvectors of the correlation matrix, which are then linear combinations of these portfolios, are studied. Thanks to this method, several stylized facts of the matrix were brought to light, among which: \begin{itemize}
\item the increase of the leading eigenvalues with the time scale, from 1 minute to several months, seems to follow the same law for all significant eigenvalues, with two regimes: from a few seconds to a few minutes, the increase in correlations comes from a lag effect, whereas from a few days to several months it comes from a lack of market liquidity and from the herding behaviour of agents.
\item a ``universal'' law seems to govern the composition of all ``Maximum Variance'' portfolios. The optimal weights would be directly proportional to the ranking according to the financial criterion under study. A portfolio built in this way would capture the targeted risk better while minimising the specific risk.
\item the volatility of the volatility of the ``Maximum Variance'' portfolios, which are not orthogonal, would suffice to explain a large part of the diffusion of the correlation matrix. One of the reproduced properties is the shape of the distribution of the eigenvalues of the variations of the correlation matrix, which does not follow Wigner's semicircle law and is not captured by the standard models of the literature, all derived from the Wishart process. The diffusion of the distribution of the eigenvalues of the variations of the correlation matrix would be explained more by the diffusion of the eigenvectors of the correlation matrix than by the diffusion of its eigenvalues. The standard stochastic models of the covariance matrix seem to strongly underestimate the largest eigenvalues of the variations of the correlation matrix, and therefore to underestimate volatility shocks and the probability of extreme losses of certain investment strategies.
\item the leverage effect (increase of the first eigenvalue when the market falls) exists only for the market and does not generalise to the other risk factors. The impact of the leverage effect on the betas, the sensitivity of stocks to the ``market mode'', has also been modelled, as well as its elasticity. When a stock outperforms, its beta decreases. When the market falls, correlations increase. When the volatility of a stock increases more than that of the market, the stock's beta also increases. I show that it is important to take these effects into account in order to measure betas without bias and to build ``market neutral'' portfolios, i.e. portfolios completely insensitive to market moves.
\end{itemize}
These stylized facts must be taken into account to model the correlation matrix properly, which is essential in portfolio management and risk management. This is also useful for asset pricing and for identifying that companies that pay their employees well not only tend to share a common risk but also tend to outperform.
\fi
\newpage
\begin{center}
\vspace*{1cm}
\large{Foreword}
\end{center}
This thesis began in January 2016 in the CEPN laboratory (UMR 7234, CNRS) of Université Paris-XIII. The research work was carried out at John Locke Investments, an independent, human-scale asset-management company (15 employees), for which I continued to work full time as a researcher and as manager of the systematic funds John Locke Equity Market Neutral and John Locke Smart Equity. I was thus able to draw on my hands-on experience of financial markets to adapt my models to reality. I also had to focus on models of definite interest to asset management and to the two funds I manage. Modelling the correlation matrix of stocks is key at John Locke Investments: the optimal portfolios used for trend following rely solely on the correlation matrix, which must be mastered, cleaned, inverted and modelled very carefully in order to amplify weak autocorrelations into robust performance. The skill of some managers may thus come down entirely to modelling the correlation matrix well. The research papers also had to be practical and provide intellectual support to convince the funds' clients of the scientific grounding of my management models.
Trois papiers ont été présentés lors des conférences en 2016 (Liège, Belgique), en 2017 (Valence) et en 2018 (Paris) de l'AFFI et un quatrième sera présenté lors de la conférence en juin 2019 (Laval, Québec):
\begin{itemize}
\item le papier ``Emergence of Correlation between Securities at Short Time Scales'' a été présenté au 35th International Conference of the French Finance Association à l'ESCP à Paris du 20 au 24 mai 2018.
Le papier est présenté en premier au chapitre \ref{emergence} car il explique l'origine physique des corrélations entre actions;
\item le papier ``Fundamental Market Neutral Maximum Variance Portfolios'' a été soumis en janvier 2019 au 36th International Conference of the French Finance Association à Laval au Québec. Le papier justifie la méthodologie utilisée dans la thèse pour débruiter la matrice de corrélation. Le papier est à cheval entre plusieurs spécialités (modèles factoriels, matrices aléatoires, Asset Pricing) et doit être restructuré et découpé en plusieurs projets pour être publiable. Le papier est présenté au chapitre \ref{maxvar};
\item le papier ``The Reactive Beta Model'' a été présenté au 34th International Conference of the French Finance Association à Valence le 31 mai et 1er et 2 juin 2017.
Le papier est présenté au chapitre \ref{beta};
\item le papier ``Should Employers Pay Better their Employees? An Asset Pricing Approach'' a été présenté au 33rd International Conference of the French Finance Association à HEC-Management School of the University of Liege du 23 au 25 mai 2016. Le papier est présenté dans le manuscrit au chapitre \ref{remuneration} comme une application.
\end{itemize}
\tableofcontents
\part{Introduction Générale}
\chapter{Introduction}
La matrice de corrélation des rendements des actions est nécessaire à l'analyse du risque d'un portefeuille. Une modélisation fine est nécessaire pour construire les portefeuilles optimaux robustes (maximiser le gain potentiel tout en minimisant le risque). Les mesures empiriques de la matrice de corrélation sont bruitées du fait d'un nombre trop faible de rendements indépendants et homoscédastiques disponibles et d'un nombre trop grand d'actions. Ainsi il est courant de devoir mesurer les corrélations entre 500 actions ou plus avec beaucoup moins d'un an d'historique\footnote{ce qui correspond à moins de 250 rendements journaliers qui ne sont que très approximativement gaussiens} afin de pouvoir supposer que les corrélations restent à peu près constantes sur cette période. L'échantillon se réduit encore lorsqu'on s'intéresse aux corrélations des rendements mensuels voire annuels, qui importent le plus pour les investisseurs. Les autocorrélations des rendements sont faibles mais suffisantes pour déformer la matrice selon l'horizon de temps et transformer des facteurs de risque négligeables à l'horizon de la journée en facteurs significatifs à l'horizon du mois pour un gérant. Les mesures empiriques, qui se basent sur un échantillon trop petit, capturent des corrélations fallacieuses. Ces mesures fallacieuses peuvent résulter en portefeuilles qui semblent sans risque dans l'échantillon utilisé pour mesurer la matrice mais risqués dans un autre échantillon. La moindre optimisation de portefeuille, qui cherche à minimiser le risque pour une même rentabilité espérée, va forcément privilégier les portefeuilles qui semblent sans risque ou très peu risqués ``in the sample'' et il en résulte un manque de robustesse. Les vraies corrélations sont réputées être de plus très variables en fonction du temps. 
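À titre d'illustration, l'effet du bruit de mesure se reproduit en quelques lignes: avec 500 actions indépendantes (vraie matrice de corrélation égale à l'identité) et seulement 250 rendements, les valeurs propres empiriques s'étalent sur tout le support de Marčenko-Pastur au lieu d'être toutes égales à 1 (esquisse minimale sur données simulées; les ordres de grandeur $N=500$, $T=250$ reprennent ceux cités ci-dessus):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 250                    # 500 actions, moins d'un an de rendements journaliers
X = rng.standard_normal((T, N))    # vraie matrice de corrélation = identité

C = np.corrcoef(X, rowvar=False)   # matrice de corrélation empirique N x N
eig = np.linalg.eigvalsh(C)

# Bornes de Marcenko-Pastur pour q = N/T : lambda_± = (1 ± sqrt(q))^2
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2
lam_minus = (1 - np.sqrt(q)) ** 2
print(eig.min(), eig.max(), lam_minus, lam_plus)
```

La plus grande valeur propre empirique est proche de $(1+\sqrt{2})^2 \approx 5{,}8$ alors que la vraie vaut 1: toute optimisation de portefeuille naïve misera sur ces corrélations fallacieuses.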
Ainsi quand le marché est stressé, les investisseurs se mettent à paniquer et les corrélations ont tendance à augmenter, si bien que toutes les actions sont entrainées par les mouvements des indices. Lorsqu'un facteur de risque devient majeur, lorsqu'un évènement inattendu survient, alors toutes les actions qui capturent ce facteur de risque vont se corréler brusquement. Lorsqu'une action sous-performe ou surperforme, ses corrélations avec les autres actions vont changer. Ainsi on peut parler de corrélations non linéaires car les corrélations dépendent des trajectoires de chaque action mais une grande partie des variations semblent complètement stochastiques. Une grande difficulté est donc de mesurer les corrélations de ``population'' sans erreur et sans retard. Une autre difficulté est aussi de prévoir comment la matrice risque de varier. Malgré les enjeux, de nombreux faits stylisés de la matrice de corrélation, qui sont noyés dans le bruit, restent pourtant encore à découvrir. Cette thèse, qui cherche à mettre en évidence plusieurs faits stylisés, remplit un vide dans la littérature académique et fait le lien entre plusieurs disciplines: les mathématiques (processus stochastiques), la finance (modèles multifactoriels), la gestion d'actifs (portefeuilles optimaux), l'éconophysique (matrices aléatoires) et l'économie (Asset Pricing).
Je me suis d'abord intéressé à l'origine des corrélations des rendements des actions: j'ai ainsi modélisé l'émergence des corrélations des rendements des actions européennes et américaines de 2000 à 2017 sur des échelles courtes (de 1 minute à 1 jour) grâce à un modèle de retard inspiré de la microstructure. À des échelles très courtes, de l'ordre de la seconde, les corrélations sont nulles puis elles apparaissent et augmentent avec l'échelle de temps. L'émergence des corrélations est en fait la conséquence de l'impact des transactions, qui se matérialisent entre les actions similaires via des algorithmes de trading. J'ai mis en place, dans le chapitre \ref{emergence}, un modèle de retard qui reproduit très bien l'effet d'échelle mesuré et qui permet d'extrapoler la vision de la matrice des rendements de 1 minute à la journée.
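Le mécanisme peut s'illustrer par un modèle de retard jouet, qui n'est pas le modèle exact du chapitre \ref{emergence} mais en reprend l'idée: l'action 2 intègre l'information commune avec un retard d'une période, si bien que la corrélation mesurée augmente quand on agrège les rendements sur des échelles plus longues (effet Epps). Les paramètres ci-dessous sont purement illustratifs:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
f = rng.standard_normal(T)                 # flux d'information commun
e1 = rng.standard_normal(T)
e2 = rng.standard_normal(T)

r1 = f + e1                                # l'action 1 intègre l'information immédiatement
# l'action 2 réagit avec retard : moitié tout de suite, moitié à la période suivante
r2 = 0.5 * f + 0.5 * np.roll(f, 1) + e2

def corr_at_scale(x, y, k):
    """Corrélation des rendements agrégés sur k périodes."""
    n = (len(x) // k) * k
    xs = x[:n].reshape(-1, k).sum(axis=1)
    ys = y[:n].reshape(-1, k).sum(axis=1)
    return np.corrcoef(xs, ys)[0, 1]

c1 = corr_at_scale(r1, r2, 1)
c10 = corr_at_scale(r1, r2, 10)
print(c1, c10)   # la corrélation augmente avec l'échelle d'agrégation
```

Avec ces paramètres la corrélation passe d'environ 0.29 à l'échelle élémentaire à environ 0.48 à l'échelle 10, conformément au calcul théorique du modèle de retard.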
Pour identifier la structure et la dynamique des valeurs propres et vecteurs propres, j'ai mis en place, dans le chapitre \ref{maxvar}, une méthodologie basée sur l'analyse par composante principale contrainte qui permet de débruiter la matrice en tirant bénéfice des informations financières, comme, par exemple, le ratio entre la valeur comptable et la valeur de marché de l'action (``Book''), la valeur capitalistique de l'action (``Capitalization'') ou de nombreux autres ratios financiers. L'analyse par composante principale appliquée aux rendements de l'ensemble des actions revient à diagonaliser la matrice de corrélation des rendements. La matrice de corrélation est préférable à la matrice de covariance pour éviter un biais vers les actions les plus volatiles. La diagonalisation permet d'identifier les portefeuilles d'actions décorrélés les uns des autres qui génèrent le plus de volatilité pour un même investissement (mesurée exactement par la volatilité du portefeuille obtenue sans tenir compte des corrélations entre les actions). Les rendements de ces portefeuilles particuliers permettent de modéliser simplement les mouvements principaux du marché: ces portefeuilles particuliers sont réputés proches des combinaisons très bruitées de stratégies de base (les indices pondérés par les capitalisations, les indices sectoriels, les indices investis sur les petites capitalisations, les indices investis sur les entreprises de croissance, les indices ``Min Variance'' investis sur les actions peu volatiles, les indices investis sur les entreprises ``Value'', etc.). Les variances de ces portefeuilles lorsqu'ils sont normalisés sont proportionnelles aux valeurs propres. Les corrélations entre actions sont quasiment toutes positives, ce qui rend l'identification du premier vecteur propre plutôt aisée: le premier vecteur propre est très significatif. 
Il reste proche du portefeuille investi sur chacune des actions avec une valeur propre de l'ordre de 100\footnote{proche de la corrélation moyenne de l'ordre de 0.4 au carré multipliée par le nombre d'actions de l'ordre de 500 dans mon cas}. Les autres vecteurs propres ont des valeurs propres beaucoup plus petites (inférieures à 20) et représentent des portefeuilles ``long/short'' et ``market neutre'' d'abord plutôt sectoriels puis plutôt de style. Cependant l'instabilité de la matrice de corrélation associée au bruit de mesure rend difficile l'interprétation des vecteurs propres ``long/short'' mesurés. Ainsi d'un côté, on devrait réduire la profondeur sur laquelle on mesure la corrélation pour espérer une certaine stabilité des corrélations sur la période de mesure, mais, de l'autre côté, on devrait augmenter la période et la fréquence pour réduire le bruit de mesure.
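Le lien entre diagonalisation et portefeuilles décorrélés se vérifie directement: les rendements des portefeuilles propres ont pour matrice de covariance la matrice diagonale des valeurs propres. Esquisse minimale sur des rendements simulés par un modèle à un facteur (paramètres hypothétiques):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 2000, 8
# rendements corrélés via un mode de marché commun
m = rng.standard_normal((T, 1))
R = 0.6 * m + 0.8 * rng.standard_normal((T, N))

Z = (R - R.mean(0)) / R.std(0)       # rendements standardisés
C = (Z.T @ Z) / T                    # matrice de corrélation empirique (diagonale unité)
w, V = np.linalg.eigh(C)             # valeurs propres (ordre croissant) et vecteurs propres

P = Z @ V                            # rendements des portefeuilles propres
cov_P = (P.T @ P) / T
# les portefeuilles propres sont décorrélés et leur variance vaut la valeur propre associée
print(np.allclose(cov_P, np.diag(w), atol=1e-8))
```

La plus grande valeur propre, portée par le mode de marché, est nettement supérieure à 1 (ici de l'ordre de $1+(N-1)\bar\rho$), comme pour le premier vecteur propre décrit ci-dessus.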
Pour filtrer le bruit de mesure, inhérent à l'analyse par composante principale, j'ai contraint l'analyse au sous espace des facteurs de risque principaux, déjà identifiés dans la littérature, dont j'ai optimisé la construction. J'ai inclus les facteurs de styles principaux (``Momentum'', ``Capitalization'', ``Quality'', etc.) et les facteurs de risque sectoriel. Les facteurs principaux optimisés ont été nommés ``Fundamental Maximum variance market neutral portfolios'', car la variance de leurs rendements a été optimisée par construction. Ces facteurs peuvent aussi être directement utiles dans l'industrie de la gestion d'actifs, car ils optimisent théoriquement le gain ajusté du risque des primes de risque alternatives, qui sont devenues des véhicules d'investissement très populaires. Aussi grâce à l'optimisation, j'ai pu relier les valeurs propres sous contraintes débruitées aux valeurs propres bruitées de la matrice. Cela m'a permis de débruiter la matrice de corrélation et de caractériser finement une loi universelle, selon laquelle, les poids optimaux des facteurs de risque seraient uniformément distribués pour tous les critères financiers, ce qui est particulièrement intriguant (on aurait plutôt attendu une distribution gaussienne, plus logique pour obtenir des vecteurs propres aléatoires, qu'une distribution uniforme). Cette loi universelle a des conséquences importantes dans l'Asset Pricing: la norme dans cette discipline est de construire des portefeuilles ``long/short'' investis à l'achat sur le premier quintile selon le critère financier étudié et à la vente sur le dernier quintile. Si le portefeuille capture une performance significativement différente de zéro alors une anomalie de marché est identifiée. Une construction plus optimale du portefeuille avec une règle d'investissement linéaire au lieu de la marche en escalier peut aider à obtenir des performances plus significatives pour les petites anomalies.
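Le principe de l'ACP sous contrainte linéaire peut s'esquisser ainsi: on projette la matrice de corrélation sur le sous espace engendré par les facteurs, puis on diagonalise la matrice réduite. Il ne s'agit pas ici de la construction exacte des portefeuilles ``Fundamental Maximum variance'' (les facteurs sont simulés, donc hypothétiques), mais le mécanisme de réduction est le même, et l'inégalité de séparation de Poincaré garantit que chaque valeur propre contrainte est majorée par la valeur propre libre de même rang:

```python
import numpy as np

rng = np.random.default_rng(3)
N, k, T = 50, 5, 400
X = rng.standard_normal((T, N))
C = np.corrcoef(X, rowvar=False)      # matrice de corrélation empirique bruitée

F = rng.standard_normal((N, k))       # k facteurs fondamentaux (poids simulés, hypothétiques)
Q, _ = np.linalg.qr(F)                # base orthonormée du sous-espace des facteurs

C_red = Q.T @ C @ Q                   # matrice réduite (k x k), débruitée par projection
w_c, U = np.linalg.eigh(C_red)
V_c = Q @ U                           # vecteurs propres sous contrainte, dans R^N

w = np.linalg.eigvalsh(C)
# séparation de Poincaré : i-ème valeur propre contrainte <= i-ème valeur propre libre
print(w_c[::-1], w[::-1][:k])
```

Les vecteurs propres contraints restent orthonormés et ne vivent que dans le sous espace des facteurs: tout le bruit orthogonal à ce sous espace est éliminé.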
Le filtre du bruit de mesure m'a aussi permis de caractériser finement la dynamique de la matrice de corrélation et d'identifier notamment les violents changements des valeurs propres (la valeur propre du premier mode peut passer de 200 à 30 soit une variation de corrélation moyenne de 0.5 à 0.05 en quelques mois seulement). Ce violent changement peut être modélisé par l'effet de levier (corrélation négative entre les rendements et les volatilités) pour la première valeur propre. J'ai aussi vérifié que les premiers vecteurs propres s'investissaient sur les facteurs fondamentaux de risque les plus risqués qui sont différents selon les périodes. Selon les crises, il peut s'agir du secteur IT, du secteur de la finance, du secteur de l'énergie, des REITs ou des entreprises exposées à la dette. Les entreprises qui sont peu sensibles aux variations de l'indice, et celles qui constituent les composants du facteur ``Momentum'', restent très représentées dans les deuxième et troisième vecteurs propres. Les facteurs ``Capitalization'' et ``Book'' de Fama et French sont très peu représentés dans les premiers vecteurs propres de la matrice de corrélation.
La première application de la méthodologie que j'ai introduite et qui permet de débruiter la matrice de corrélation a consisté à étendre l'étude de l'effet d'échelle sur les valeurs propres de 1 minute à 1 journée sur des échelles de temps plus longues entre 1 jour et plusieurs mois. Les corrélations continuent d'augmenter avec l'échelle de temps. Cela explique par exemple que la norme dans l'Asset Pricing est de se baser sur les rendements mensuels pour estimer les corrélations. En effet, même si l'utilisation des rendements journaliers donnerait des résultats plus robustes, les chercheurs dans l'Asset Pricing préfèrent travailler avec les rendements mensuels, car les corrélations sont réputées plus fortes lorsqu'elles sont mesurées à partir de rendements mensuels qu'à partir des rendements journaliers à cause d'un effet d'échelle que les chercheurs redoutent. La réduction de la matrice de corrélation dans le sous espace généré par les portefeuilles fondamentaux ``Maximum variance'' permet de confirmer cette crainte avec des mesures significatives. Les corrélations ont tendance à continuer à augmenter sur des horizons de temps plus longs. Ce phénomène est expliqué grâce à un modèle d'autocorrélation, qui permet de reproduire l'effet de manque de liquidité du marché. L'illiquidité crée de l'inertie et fait qu'un mouvement de marché dure et peut être prolongé par le comportement moutonnier des investisseurs. Les autocorrélations, introduites dans le chapitre \ref{emergence2}, apparaissent plus robustes que les anomalies non conditionnelles pas toujours significatives telles qu'identifiées dans l'Asset Pricing. Ces anomalies se matérialisent par des primes de risque alternatives pour justifier les incohérences avec le modèle d'évaluation des actifs financiers (MEDAF ou CAPM en anglais), selon lequel, les primes de risque ne doivent dépendre que du beta, sensibilité de l'action avec les variations de l'indice.
La deuxième application de la méthodologie que j'ai introduite et qui permet de débruiter la matrice de corrélation a consisté à caractériser la dynamique de la matrice de corrélation qui est importante à modéliser pour estimer les risques. En effet la matrice de corrélation de population peut changer et cela peut représenter un risque. Le problème est que la matrice est déjà tellement bruitée qu'espérer mesurer ces changements est illusoire, si bien que les modèles stochastiques théoriques ne peuvent pas facilement être validés empiriquement. Dans le chapitre \ref{diffusion}, j'ai réussi à faire plusieurs mesures grâce à la méthodologie qui permet d'utiliser les informations financières pour réduire la taille de la matrice de corrélation et grâce à l'utilisation des rendements 5 minutes. J'ai ainsi mis en évidence certains faits stylisés de la diffusion de la matrice de corrélations très mal connus et très mal reproduits par les modèles standard issus de Wishart, comme la distribution des valeurs propres des incréments de la matrice de corrélations des actions (il s'agit ici précisément de la matrice de corrélation des actions sous sa forme réduite dans le sous espace des 24 facteurs fondamentaux Maximum Variance pour éliminer le bruit de mesure). L'étude de cette distribution permet de caractériser la diffusion des vecteurs propres de la matrice de corrélation des actions. Cette distribution ne suit pas une loi demi-cercle de Wigner mais une distribution avec des queues, qui peuvent être interprétées par la présence de valeurs propres extrêmes. Ces valeurs propres extrêmes expliquent que des corrélations entre actions peuvent changer beaucoup plus brutalement que les modèles classiques ne peuvent le prévoir. J'ai ainsi modélisé l'instabilité de la matrice de corrélation avec un processus empirique plus réaliste. 
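Pour fixer les idées, le point de comparaison gaussien est le suivant: les valeurs propres d'une matrice symétrique à entrées gaussiennes indépendantes (matrice de Wigner) restent confinées dans le support $[-2,2]$ du demi-cercle, sans queues. C'est ce benchmark, esquissé ci-dessous sur données simulées, que les incréments empiriques de la matrice de corrélation violent avec leurs valeurs propres extrêmes:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)    # matrice de Wigner normalisée (variance unité hors diagonale)
ev = np.linalg.eigvalsh(W)

# support du demi-cercle : [-2, 2] ; pas de queues pour un bruit gaussien
print(ev.min(), ev.max())
```

Toute valeur propre d'incrément observée très au-delà de ce support signale donc un choc de corrélation incompatible avec une diffusion gaussienne de la matrice.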
La diffusion dans la composition des premiers vecteurs propres explique en grande partie la distribution des valeurs propres des incréments de la matrice de corrélation. Cette diffusion s'explique quasiment entièrement par la volatilité de la volatilité des portefeuilles fondamentaux ``Maximum variance''. Les portefeuilles n'étant pas orthogonaux, la volatilité de la volatilité permet de répliquer la diffusion des vecteurs propres tout en supposant les corrélations entre portefeuilles fondamentaux fixes.
Une composante particulière de la diffusion de la matrice a aussi fait l'objet d'une grande attention: la dynamique des poids du premier vecteur propre qui sont liés aux beta, qui est la sensibilité des rendements d'une action avec les indices boursiers, a été analysée en profondeur. Les beta constituent par ailleurs une mesure du risque qui est capitale car ils forment une indication d'un risque systématique qui ne peut pas se diversifier ou s'éliminer pour un investisseur classique. Cela justifie intuitivement que les actions à fort beta doivent rémunérer plus les actionnaires et que les primes de risque doivent être proportionnelles au beta qui est à la base du MEDAF. Aussi les fonds alternatifs, qu'on appelle aussi ``hedge funds'', ont la capacité à prendre des positions vendeuses avec des ventes à découvert pour neutraliser l'exposition de leur investissement aux variations des indices boursiers. Cela permet de mieux contrôler le risque et de proposer des investissements diversifiant aux épargnants. Pour construire des portefeuilles immunisés contre les variations de la bourse, qu'on appelle ``beta neutre'', il est extrêmement important de se baser sur des mesures fiables et sans biais des betas d'autant plus que certaines stratégies très populaires ont tendance à amplifier les biais de mesure du beta. Cela m'a motivé à modéliser finement l'effet de levier et l'élasticité des beta, qui décrivent aussi la composante du premier vecteur propre de la matrice. Par exemple, lorsqu'une action sous-performe, son beta va augmenter. Lorsque la volatilité de l'action augmente plus que celle des autres, son beta va augmenter aussi. J'ai mis au point, dans le chapitre \ref{beta}, une méthode réactive de la mesure des beta nécessaire pour construire des facteurs fondamentaux beta et secteur neutre moins biaisés et potentiellement mieux valider le MEDAF. 
Des tests montrent l'intérêt d'un tel modèle par rapport à des méthodes standards (OLS, régression par quantile, DCC GARCH).
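Pour illustrer l'intérêt d'une mesure conditionnelle du beta, voici une esquisse minimale qui n'est pas le Reactive Beta Model du chapitre \ref{beta} mais une simple pondération exponentielle (le facteur d'oubli est hypothétique): quand le vrai beta saute, l'estimateur réactif suit le changement alors que l'OLS sur tout l'échantillon moyenne les deux régimes.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 3000
rm = 0.01 * rng.standard_normal(T)                      # rendements du marché (simulés)
true_beta = np.where(np.arange(T) < T // 2, 0.8, 1.4)   # beta qui saute à mi-période
ri = true_beta * rm + 0.005 * rng.standard_normal(T)    # rendements de l'action

lam = 0.97                    # facteur d'oubli (hypothétique, ~33 observations effectives)
cov = var = 1e-6
betas = np.empty(T)
for t in range(T):
    cov = lam * cov + (1 - lam) * ri[t] * rm[t]
    var = lam * var + (1 - lam) * rm[t] * rm[t]
    betas[t] = cov / var      # beta conditionnel réactif

beta_ols = (ri @ rm) / (rm @ rm)   # OLS sur tout l'échantillon : mélange des deux régimes
print(beta_ols, betas[T // 2 - 1], betas[-1])
```

L'OLS rend un beta proche de 1.1 dans les deux régimes, ce qui biaiserait la couverture ``beta neutre'', tandis que l'estimateur conditionnel retrouve environ 0.8 puis 1.4.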
Enfin une application concrète de mes travaux, dont la portée peut ne pas se limiter à la gestion d'actifs, met en avant l'intérêt de la méthode que j'ai introduite en se révélant assez fine pour distinguer le facteur ``Rémunération'' du bruit. Dans le chapitre \ref{remuneration}, je montre que le facteur ``Rémunération'' s'avère être un facteur de risque commun significatif. Les entreprises qui rémunèrent mieux leurs employés ont un risque en commun. Ces entreprises ont aussi tendance à avoir des meilleures performances. J'ai ainsi découvert une nouvelle anomalie par rapport au MEDAF et aux facteurs de Fama et French qui pourrait avoir une portée managériale voire politique.
Cette thèse peut donc avoir de multiples applications: une meilleure analyse du risque, une optimisation plus robuste d'un portefeuille, une meilleure modélisation des autocorrélations qui sont exploitées par les programmes de trading d'arbitrage de style, une meilleure mesure des anomalies dans l'Asset Pricing, une modélisation plus réaliste de la dynamique de la matrice de corrélations pour évaluer des produits dérivés. Elle peut aussi avoir des implications très concrètes en économie et en management car, par exemple, elle permet de montrer que les entreprises qui rémunèrent bien leurs employés partagent un risque significatif en commun et ont aussi tendance à mieux performer.
\chapter{Revue de la littérature}
Ce travail de recherche s'est articulé autour de six champs disciplinaires relativement cloisonnés, à cheval entre plusieurs disciplines: finance, économie, éconophysique et mathématiques appliquées.
\section{Gestion de portefeuille}
\label{ptfmng}
La bonne estimation des corrélations des rendements des actions est nécessaire pour l'analyse de risque d'un portefeuille et pour son optimisation. La bonne compréhension des variations temporelles des corrélations en cours ou potentielles est aussi critique pour la gestion d'un portefeuille et notamment d'un fonds ``market neutre'' qui utilise un fort effet de levier financier et dont l'arbitrage de style est un des moteurs de performance. L'arbitrage de style est une stratégie de trading qui consiste à investir sur les styles de gestion porteurs. Par exemple si le style de gestion qui consiste à acheter des petites capitalisations et à vendre des grosses capitalisations est profitable, la stratégie va acheter les petites capitalisations et vendre les grandes. Dans le cas inverse, la stratégie va acheter les grosses et vendre les petites. Plus de 240 styles de gestion ou facteurs de risque profitables ont été publiés dans la littérature scientifique décrite dans la section \ref{Assetpricing}. L'intérêt du market timing ne fait pas consensus (\cite{Lee17, Bender18,Bass17}) et certains préfèrent bénéficier simplement de la diversification. \cite{Miguel17} montrent qu'en pratique il suffit, pour construire un portefeuille, de sélectionner 15 critères financiers significatifs sur plus de 100. Les stratégies de market timing peuvent être complexes. Elles s'appuient sur des modèles de prévision. \cite{Hodges17} cherchent des prédicteurs des facteurs dans différents régimes économiques et différentes conditions de marché. Ils trouvent que l'utilisation d'une combinaison d'indicateurs sur le cycle économique, la valorisation, la tendance et la dispersion serait plus efficace que l'utilisation d'indicateurs individuels. 
Ainsi \cite{Dichtl18} fabriquent un portefeuille ``long/short'' grâce à la méthode d'optimisation des paramètres introduite par \cite{Brandt09} en utilisant plusieurs indicateurs de valorisation et de tendance et ils montrent que le market timing permet de surperformer le portefeuille investi équitablement sur les différents styles de gestion dont les primes de risque sont positives. La fragilité de ces résultats vient du risque de surapprentissage. De plus ces stratégies en général peuvent souffrir de chocs de corrélations entre les différents styles de gestion qui peuvent survenir et générer des pics de volatilités. Ainsi les stratégies quantitatives d'habitude non corrélées peuvent se corréler fortement de manière brutale. Cela s'est passé du 8 au 9 août 2007, quand la plupart des fonds d'arbitrage de style ont subi des pertes très significatives brutalement en même temps (\cite{Stein09}). Lors de cet évènement, nommé plus tard ``quant crash'', la plupart des fonds touchés employaient des stratégies ``market neutre'' quantitatives sans exposition au marché ce qui remet en question leur statut ``market neutre'' (\cite{Khandani11}). Il semble en fait que trop de gérants étaient investis sur les mêmes stratégies ``crowded'' avec trop de levier et qu'ils ont tous voulu réduire leurs positions en même temps au même signal. De tels risques de ``crowding'' affectent une grande variété de stratégies, comme le style ``Momentum'' (acheter les actions qui ont surperformé et vendre les actions qui ont sous-performé) car ils ne dépendent pas d'estimation indépendante des valeurs fondamentales des entreprises (\cite{Hong16,Stein09}).
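La politique paramétrique de \cite{Brandt09} consiste à rendre les poids du portefeuille linéaires dans les caractéristiques des actions, $w_{i,t} = \theta^\top x_{i,t}/N$, puis à choisir $\theta$ pour maximiser une utilité moyenne. L'esquisse suivante illustre le principe sur des données simulées entièrement hypothétiques (deux caractéristiques, recherche sur grille au lieu d'une optimisation numérique):

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, K = 100, 120, 2        # N actions, T mois, K caractéristiques (ex. valorisation, tendance)
x = rng.standard_normal((T, N, K))                  # caractéristiques standardisées en coupe
# rendements liés linéairement aux caractéristiques (données simulées, hypothétiques)
r = 0.02 * x[..., 0] - 0.01 * x[..., 1] + 0.05 * rng.standard_normal((T, N))

def perf(theta):
    w = (x @ theta) / N       # politique linéaire de Brandt et al. (portefeuille long/short)
    return (w * r).sum(axis=1)

# petite recherche sur grille pour maximiser une utilité moyenne-variance
best, best_u = None, -np.inf
for t0 in np.linspace(-2, 2, 9):
    for t1 in np.linspace(-2, 2, 9):
        p = perf(np.array([t0, t1]))
        u = p.mean() - 2.5 * p.var()
        if u > best_u:
            best_u, best = u, (t0, t1)
print(best)   # theta optimal : positif sur la 1re caractéristique, négatif sur la 2e
```

Le surapprentissage mentionné ci-dessus apparaît ici naturellement: avec seulement 120 mois, $\theta$ capture aussi le bruit de l'échantillon, d'où la fragilité de ce type de market timing.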
Des centaines de milliards de dollars sont aussi gérées directement en utilisant l'optimisation Mean-Variance introduite par \cite{Markowitz52} en préférant se baser sur des hypothèses simples concernant les espérances des rendements. La valeur ajoutée des gérants viendrait seulement d'une modélisation plus adaptée de la matrice de corrélation et d'une bonne capacité à exécuter les ordres en minimisant l'impact de marché. Ainsi le portefeuille ``Min Variance'' suppose que les espérances des rendements sont toutes identiques et que la matrice de corrélation peut se modéliser simplement, par exemple, avec un modèle à un facteur (\cite{Clarke13}). Le portefeuille ``Max Diversification'' introduit par \cite{Choueifaty08} suppose que les espérances sont proportionnelles au risque. Ces deux derniers portefeuilles nécessitent d'inverser la matrice de corrélation ce qui peut poser problème si la matrice n'est pas proprement modélisée. Le portefeuille ``Equal-Risk Contribution'' introduit par \cite{Maillard10} est moins sensible aux bruits de mesure et est donc plus robuste mais n'est plus théoriquement optimal. \cite{Benichou16} introduisent le portefeuille ``Agnostic Risk Parity''. Les espérances ne sont plus forcément positives mais dépendent des rendements passés. Le portefeuille dépend alors de l'inverse de la racine carrée de la matrice de corrélation multipliée par des signaux qui représentent des indicateurs techniques des tendances. Ce portefeuille ``trend following'' alloue le même risque sur chaque vecteur propre de la matrice de corrélation.
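Le portefeuille ``Min Variance'' mentionné ci-dessus a une forme fermée bien connue, $w \propto \Sigma^{-1}\mathbf{1}$, qui illustre pourquoi l'inversion de la matrice est le point sensible. Esquisse minimale sur une covariance à un facteur simulée (paramètres hypothétiques):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 6
# covariance à un facteur : Sigma_ij = beta_i beta_j var_marché + delta_ij var_spécifique
beta = rng.uniform(0.5, 1.5, N)
Sigma = np.outer(beta, beta) * 0.04 + np.diag(np.full(N, 0.01))

ones = np.ones(N)
w_mv = np.linalg.solve(Sigma, ones)   # résolution directe plutôt qu'inversion explicite
w_mv /= w_mv.sum()                    # portefeuille Min Variance (somme des poids = 1)
w_eq = ones / N                       # portefeuille équipondéré, pour comparaison

var_mv = w_mv @ Sigma @ w_mv
var_eq = w_eq @ Sigma @ w_eq
print(var_mv, var_eq)                 # la variance Min Variance est la plus faible
```

Si $\Sigma$ est une mesure empirique bruitée au lieu du modèle à un facteur, ce même calcul misera ses plus gros paris sur les erreurs de mesure, comme discuté dans la section \ref{mutifact}.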
\section{Econophysique}
Aujourd'hui les performances des différents styles de gestion et les performances sectorielles sont très suivies par tous les acteurs du marché qui ne se contentent plus d'avoir une vue binaire (le marché va-t-il monter ou baisser?) et s'auto-alimentent par un phénomène d'effet moutonnier très bien décrit dans la littérature (\cite{Guedj05,Michard05,Cont00,Wyart07,Lux99}). Ainsi quand tel ou tel style de gestion chute, les acteurs vont le vendre en même temps et accentuer sa chute. La moindre nouvelle macroéconomique va impacter les indices mais aussi les autres facteurs de risque. Quand la Réserve fédérale des États-Unis se dit prête à augmenter les taux d'intérêt, le facteur levier (vente d'actions endettées, achat d'actions peu endettées) sera joué puis d'autres facteurs seront entrainés. \cite{Benzaquem16} mettent en évidence le lien entre le trading et la matrice de corrélation en partant de la microstructure et du cross impact des transactions sur les prix. Les corrélations ne décriraient que l'interaction entre actions par le jeu des traders. Les corrélations sont aussi réputées pour augmenter avec l'échelle de temps: les rendements mensuels sont plus corrélés que les rendements journaliers qui sont plus corrélés que les rendements 1 minute (\cite{Epps79}). \cite{Bouchaud09} avaient déjà proposé dans la partie ``some open problem'' une piste (les rendements de l'action i n'impactent pas instantanément les rendements de l'action j mais avec un certain retard) pour expliquer la dépendance des corrélations à la fréquence mais ne l'avaient pas développée.
\section{Modèles multifactoriels}
\label{mutifact}
Depuis l'article majeur de \cite{Markowitz52}, l'optimisation « Mean Variance » est devenue une méthode rigoureuse pour construire un portefeuille d'investissement. Deux ingrédients fondamentaux sont nécessaires: les espérances des rendements de chaque action et la matrice de covariance des rendements. L'estimation de la matrice de covariance a toujours été un sujet important. La méthode de base se contente d'agréger les rendements historiques et de calculer leurs covariances historiques. Malheureusement cela crée des problèmes bien documentés (\cite{Jobson80}). Pour l'expliquer simplement, quand le nombre d'actions est grand devant le nombre d'observations disponibles, ce qui est généralement le cas, la matrice de corrélation historique comporte beaucoup d'erreurs. Cela implique que les coefficients les plus extrêmes prennent des valeurs extrêmes non pas à cause de la réalité mais à cause d'erreurs extrêmes. Invariablement les optimisations de portefeuille vont miser leurs plus gros paris sur ces erreurs extrêmes ce qui rendra l'optimisation extrêmement peu fiable. \cite{Michaud89} appelle ce phénomène ``error maximization''. De manière alternative on peut considérer une estimation avec beaucoup de contraintes, comme le « single-factor model » de \cite{Sharpe63}. Ces estimateurs de la matrice de corrélation contiennent d'un côté peu d'erreurs mais de l'autre beaucoup d'erreurs de spécification et de biais. Une alternative est le ``Shrinkage'' qui consiste en un mélange entre l'estimation sans contrainte et l'estimation avec la contrainte (\cite{Ledoit03,Ledoit12}). L'APT (``Arbitrage Pricing Theory'') de \cite{Ross76} a généré un intérêt croissant dans les modèles multifactoriels. Ainsi le standard de l'industrie de la gestion d'actifs est d'utiliser des modèles multifactoriels. 
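Le ``Shrinkage'' évoqué ci-dessus s'écrit simplement $\hat C = \delta F + (1-\delta)S$, où $S$ est l'estimateur empirique et $F$ une cible structurée. Esquisse minimale sur données simulées (la vraie matrice est à corrélation constante; l'intensité $\delta$ est fixée arbitrairement ici, alors que Ledoit et Wolf l'estiment de manière optimale):

```python
import numpy as np

rng = np.random.default_rng(8)
N, T = 40, 60
true_C = np.full((N, N), 0.3)
np.fill_diagonal(true_C, 1.0)
L = np.linalg.cholesky(true_C)
X = rng.standard_normal((T, N)) @ L.T      # rendements simulés avec corrélation vraie 0.3

S = np.corrcoef(X, rowvar=False)           # estimateur empirique, très bruité (T ~ N)
# cible structurée : corrélation constante égale à la moyenne hors diagonale
rho = (S.sum() - N) / (N * (N - 1))
F = np.full((N, N), rho)
np.fill_diagonal(F, 1.0)

delta = 0.5                                # intensité de shrinkage (hypothétique, fixée ici)
C_shrunk = delta * F + (1 - delta) * S

err = lambda M: np.linalg.norm(M - true_C) # erreur de Frobenius par rapport à la vraie matrice
print(err(S), err(C_shrunk))
```

L'estimateur mélangé est plus proche de la vraie matrice que l'estimateur empirique seul: le biais introduit par la cible est plus que compensé par la réduction de variance.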
Quelques entreprises, comme APT, Barra et Axioma (\cite{Barra98}) qui sont devenues incontournables dans l'industrie de la gestion d'actifs, proposent à leurs clients des matrices de covariances qui s'adaptent mieux aux optimisations de portefeuille. Ces sociétés ont été accusées d'être à l'origine du ``quant crash'' de 2007, déjà mentionné dans la section \ref{ptfmng}, car elles favorisaient le ``crowding'' en fournissant les même facteurs de risque à tous les gérants. Ces méthodes se basent sur des modèles multifactoriels fondamentaux combinant une cinquantaine de facteurs sectoriels et d'autres risques. Ces facteurs utilisent le rendement des portefeuilles associés à certains critères financiers observables tel que le ``Dividend Yield'', le ``Book to Market'' ratio ou les secteurs d'appartenance. Une autre approche est d'utiliser les facteurs statistiques issus de l'analyse par composante principale, qui est décrite dans la section \ref{acp}, avec un nombre total de facteurs de l'ordre de 5. \cite{Connor95} montre que les modèles multifactoriels ``fondamentaux'' permettent d'expliquer 42\% ($R^2=42\%$ étant le pouvoir explicatif du modèle) des rendements alors qu'une simple analyse par composante principale sur 5 facteurs explique déjà 39\%. \cite{Connor95} trie les facteurs selon leur pouvoir explicatif. Les secteurs permettent d'augmenter le $R^2$ de 18\%, puis le facteur ``Low Volatility'' (proche du facteur ``Low Beta'') augmente le $R^2$ de 0.9\% puis les facteurs ``Momentum'', ``Capitalization'', ``Liquidity'', ``Growth'', ``Earning'', augmentent de moins de 0.8\%. Puis il reste par ordre d'importance décroissant des facteurs plutôt mineurs: le ``Book to market'', le ``Earning Variability'', le ``Leverage'', l'investissement à l'étranger, le coût du travail et enfin le ``Dividend Yield''. 
Toutefois la sélection des facteurs nécessaires et le choix du nombre a fait l'objet de nombreuses controverses (\cite{Roll80,Roll84,Dhrymes84,Luedecke84,Trzcinka86,Conway88,Brown89}). \cite{Connor93} proposent une méthodologie simple pour estimer le nombre de facteurs significatifs: si le rajout d'un facteur ne réduit pas significativement le carré du résidu alors le facteur n'est pas considéré comme significatif. La plupart des études académiques se base sur une analyse historique depuis 1967 en exploitant la base de données du centre de recherche des prix des actions (Center of Research in Security Prices). Cette base de données regroupe principalement les actions cotées à la bourse du New-York Stock Exchange depuis 1926.
\section{Analyse par composante principale}
\label{acp}
L'analyse par composante principale (ACP) prend sa source dans un article de Karl Pearson publié en 1901. Encore connue sous le nom de transformée de Karhunen-Loève ou de transformée de Hotelling, l'ACP a été de nouveau développée et formalisée dans les années 1930 par Harold Hotelling. La puissance mathématique de l'économiste et statisticien américain le conduira aussi à développer l'analyse canonique, généralisation des analyses factorielles dont fait partie l'ACP. Les champs d'application sont aujourd'hui multiples, allant de la biologie à la recherche économique et sociale, et plus récemment le traitement d'images.
La théorie des matrices aléatoires, selon laquelle la distribution des valeurs propres obtenues par l'ACP suit la loi de Mar\v{c}enko-Pastur pour les grandes matrices, modélise les bruits de mesure des corrélations et montre que les petites valeurs propres, en dessous d'une valeur propre critique, sont sous-estimées et ne sont pas significatives (\cite{Laloux98, Plerou99, Plerou02, Potters05, Wang11}). \cite{Bun16} appliquent une méthode théorique introduite par \cite{Ledoit11}, qu'ils appellent ``Rotationally invariant estimator'', pour débiaiser de manière continue les valeurs propres empiriques et ils montrent que la méthode semble plus robuste que celles du ``Clipping'' ou du ``Shrinkage'' qui sont bien documentées par \cite{Ledoit01,Ledoit03}. \cite{Allez12} modélisent l'impact du bruit sur le premier vecteur propre et montrent que ce dernier tourne légèrement autour d'un vecteur fixe. L'angle de rotation dépend du ratio entre la première valeur propre et les autres. En appliquant ce modèle aux autres vecteurs propres, on comprend qu'ils tournent aussi autour d'axes fixes mais avec un angle de rotation bien plus important. Ils sont ainsi très bruités ce qui explique la difficulté à les interpréter.
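Le ``Clipping'' mentionné ci-dessus s'esquisse ainsi: on garde les valeurs propres au-dessus du bord supérieur de Mar\v{c}enko-Pastur $\lambda_+=(1+\sqrt{N/T})^2$ et on remplace les autres, jugées non significatives, par leur moyenne (variante simple; d'autres conventions existent). Esquisse sur données simulées sans vraie structure:

```python
import numpy as np

rng = np.random.default_rng(9)
N, T = 100, 300
X = rng.standard_normal((T, N))
C = np.corrcoef(X, rowvar=False)
w, V = np.linalg.eigh(C)

q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2          # bord supérieur de Marcenko-Pastur
# clipping : les valeurs propres sous le bord sont remplacées par leur moyenne
keep = w > lam_plus
w_clip = w.copy()
if (~keep).any():
    w_clip[~keep] = w[~keep].mean()       # préserve la trace de la matrice

C_clean = V @ np.diag(w_clip) @ V.T
# renormalisation pour garder une diagonale unité (matrice de corrélation valide)
d = np.sqrt(np.diag(C_clean))
C_clean = C_clean / np.outer(d, d)
print(keep.sum())   # nombre de valeurs propres jugées significatives
```

Sur du bruit pur comme ici, quasiment aucune valeur propre ne dépasse le bord: le filtre ramène la matrice près de l'identité, conformément à la théorie.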
PCA with a linear constraint is an alternative to filters derived from random matrix theory for eliminating measurement noise, and has long been fully solved (\cite{Golub73}). In this case the constrained eigenvectors all belong to the solution subspace of the constraint: the constrained eigenvectors are simply the eigenvectors of a matrix that has been reduced and denoised. The whole difficulty lies in defining the factors spanning the constrained subspace so that the constraints mainly affect only the noise in the eigenvalues. To this end, one can draw on the asset-pricing literature described in Section \ref{Assetpricing} and on the multifactor models described in Section \ref{mutifact}.
\section{Asset pricing}
\label{Assetpricing}
\cite{Fama65} led to the theory of efficient markets, according to which prices follow random walks. \cite{Sharpe64} then derived the CAPM from more or less realistic assumptions, such as the absence of transaction costs and the rationality of investors. According to the CAPM, expected returns should theoretically be proportional to beta, the only risk that cannot be diversified away and that must be rewarded. Since 1970, various anomalies with respect to this theory have been observed. The classical factors of \cite{Fama92,Fama93} are invested long in the top 20\% and short in the bottom 20\% according to the financial criterion under study. These factors can capture an anomaly with respect to the efficient market theory if they generate gains significantly different from zero. The top 20\%/bottom 20\% construction is clearly suboptimal, according to \cite{Asness13}, yet paradoxically remains the reference in the field of asset pricing. The regression of \cite{Fama73} is the most widely used method for revealing anomalies with respect to the CAPM. Several models have been developed to provide an economic interpretation of the many anomalies and to improve the CAPM. \cite{Fama93} proposed a three-factor model for expected returns. \cite{Harvey15} listed 316 potential factors, each supposed to capture an anomaly, from 313 articles published since 1967. According to them, most factors may be the result of data mining and would not be robust. Most of these factors overlap, which is why about twenty may suffice, but there is no consensus on the significance level required to characterize anomalies. Academic work first retained financial criteria such as capitalization, the price-earnings ratio, cash flow, book-to-market, growth and momentum.
For example, small-capitalization stocks tend to outperform (\cite{Banz81}). Average volume seems more adequate than size according to \cite{Ciliberti17}. Another important anomaly is the value premium: value companies tend to outperform growth companies (\cite{Fama98}). Profitability, close to cash flow, is also a significant explanatory variable for expected returns (\cite{Fama15}). The ``low volatility'' and ``low beta'' anomalies have also been revealed (\cite{Jordan13, Fu09, Ang06}). The most popular anomaly remains momentum: stocks that have outperformed tend to continue outperforming (\cite{Jegadeesh93}). Anomalies are directly exploited in asset management, whose strategies are described in Section \ref{ptfmng}. \cite{Asness13} thus explain that a basic and very popular investment strategy, simply allocated in part to momentum and in part to the value anomaly, achieves an in-sample Sharpe ratio above 1.
The financial theories put forward to justify such alternative risk premia (lack of liquidity, asymmetry) are called into question because anomalies tend to disappear once published. \cite{McLean15} offer several alternative explanations: the in-sample bias with the problem of over-optimization, or the adaptation of markets.
To my knowledge, no study has yet sought to evidence autocorrelations in the returns of risk factors, which could constitute a subtler and more robust inefficiency of financial markets. One explanation is that autocorrelations are too difficult to characterize significantly. Articles exist, but the significance and robustness of their results are not convincing. Thus \cite{Hodges17} look for factor predictors across different economic regimes and market conditions. They find that using a combination of indicators on the business cycle, valuation, trend and dispersion would be more effective than using individual indicators.
\section{Stochastic processes}
The instability of the population correlation matrix was first modeled with diffusion models for pricing derivatives (\cite{Gauthier11}). Theoretical models were calibrated to recover derivative prices without seeking to understand the true dynamics of the empirical correlation matrix, since the latter is hard to measure with the required precision: ARCH models were initially developed to describe the heteroscedasticity of inflation variations (\cite{Engle82}), but were then used to model the dynamics of stock volatility for option pricing (\cite{Duan95}). ``Dynamic Conditional Correlation'' models (DCC GARCH, \cite{Engle02,Engle16}) extended the one-dimensional GARCH model and were developed to model the dynamics of correlations and volatilities. Similarly, the process introduced by \cite{Cox85}, very popular in finance for describing the dynamics of interest rates and of stock volatility for derivative pricing, has also been extended, starting from the Feller diffusion, to model covariance dynamics: Wishart processes generalize the Feller diffusion to several dimensions. \cite{Gourieroux2007} thus introduces a mean-reversion term into the Wishart process, making it stationary, and generalizes the process of \cite{Cox85}. \cite{Fonseca2008} generalize the model of \cite{Heston93} in the same way to price multi-asset options. A Wishart process can be seen as the square of Brownian motions or, in its stationary version, of Ornstein-Uhlenbeck processes. \cite{Cuchiero11} analyze the foundations of continuous affine stochastic processes on the space of covariance matrices, motivated by the use of such models to price multi-asset options or to describe default intensities.
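As a reminder of the processes mentioned above, the Cox-Ingersoll-Ross (Feller) diffusion reads

\begin{equation}
dr_t = \kappa\,(\theta - r_t)\,dt + \sigma \sqrt{r_t}\, dW_t,
\end{equation}

and one common parametrization of its matrix generalization, the Wishart process, is

\begin{equation}
dX_t = \bigl(\alpha\, Q^{\top} Q + M X_t + X_t M^{\top}\bigr)\, dt + X_t^{1/2}\, dW_t\, Q + Q^{\top}\, dW_t^{\top}\, X_t^{1/2},
\end{equation}

where $X_t$ is a positive semi-definite matrix, $W_t$ a matrix of independent Brownian motions, and the drift matrix $M$ plays the role of the mean-reversion term.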
\cite{Bru91} derives the stochastic equations describing the dynamics of the matrix and of its eigenvalues. Other random matrices are also much studied, such as Gaussian matrices, whose eigenvalue distribution follows Wigner's circular law. \cite{Ahdida13} study random correlation matrices through the Wright-Fisher diffusion to model stock correlations. Algorithms have also been implemented to generate random walks among rotation matrices, which makes it possible to describe the diffusion of the eigenvectors of the correlation matrix. The random walk of \cite{Kac1959} is a fairly efficient algorithm, but it contains no mean reversion, so that after some time the matrix no longer bears any relation to the initial one.
Other rather subtle phenomena, such as the leverage effect, remain poorly modeled in the literature. Asymmetric versions of DCC GARCH-type models have thus been developed to account for the leverage effect. Despite a substantial literature on the leverage effect (when prices fall, volatility rises, according to \cite{Black76,Christie82,Campbell92,Bekaert00,Bouchaud01}), none addresses the reality and complexity of the phenomenon, well described in \cite{Bouchaud01}. Many papers report that betas, the sensitivity of stock prices to index variations, can vary (\cite{Blume71, Fabozzi78,Jagannathan96,Fama97,Bollerslev98,Lettau01,Lewellen06,Ang07}) without establishing a precise relation between the leverage effect and the increase in betas. Highly leveraged stocks are more exposed to an unstable beta (\cite{Galai76,DeJong85}). Properly accounting for the variability of betas also matters for testing asset-pricing models well. \cite{Bali17} thus claim that once betas are properly estimated from a DCC GARCH model, the ``low beta'' anomaly disappears and the CAPM is finally verified empirically (the expected return would indeed be proportional to beta when the latter is well measured).
\chapter{Main contributions}
The research focused on six specialized topics and took the form of six draft articles. The main contributions for each of the six topics are as follows:
\begin{itemize}
\item ``Emergence of Correlation of Securities at Short Time Scales'' (Chapter \ref{emergence}): the article introduces a multifactor lag model that reproduces fairly faithfully the measurements of the time-scale effect on eigenvalues. The model is inspired by the impact model of \cite{Kyle85}. It assumes that trades on risk factors impact stock prices with a certain lag. Under certain assumptions, I derive a simple formula describing the dependence of the eigenvalues on the time scale. The formula contains two parameters for each eigenvalue: the asymptotic eigenvalue and a relaxation time of the order of one minute, reflecting an average lag of the order of a few minutes between stocks and risk factors. Correlations thus emerge from one minute onwards. However, this lag of a few minutes continues to affect the eigenvalues of the correlation matrix of returns at 20 minutes and beyond, because of a power law explained by a relatively subtle mechanism, although the phenomenon saturates. The article therefore identifies a significant market inefficiency, which could generate gains in the theoretical case of zero transaction costs.
\item ``The Fundamental Market Neutral Maximum Variance Portfolios'' (Chapter \ref{maxvar}): the article introduces the ``FCL'' of a portfolio (the ratio between the portfolio variance and the portfolio variance in the case where correlations between stocks are zero). The FCL is a concept close to eigenvalues and has the advantage of applying not only to eigenvectors but to any risk factor. The FCL would be an ideal measure to characterize the significance of a risk factor. I also introduce the ``fundamental max variance'' portfolio, which optimizes the FCL and can be interpreted as an eigenvector of the correlation matrix under a constraint designed to best capture a given style defined by a financial criterion. I show that the optimal weights depend directly on the ranking of stocks according to this criterion and follow a single universal law that applies to all financial criteria. I show that this optimization makes it possible to best replicate the correlation matrix, as well as its dynamics, from a few factors by filtering out the noise. I relate the various FCLs, the constrained eigenvalues and the empirical eigenvalues. Finally, I show that the principal eigenvectors of the correlation matrix load on the factors with the highest FCLs. FCLs are volatile and are well modeled by Ornstein-Uhlenbeck processes with a relaxation time of 60 days. The composition of the eigenvectors is therefore highly variable, which explains why they are difficult to interpret, with the exception of the first. I also show, under certain assumptions, that the Sharpe ratio of the maximum variance portfolios is theoretically optimal. The results of this chapter were obtained in collaboration with Stanislav Kuperstein.
\item ``Time Scale Effect on Correlation at Long Time Horizon'' (Chapter \ref{emergence2}): the article describes a subtler but more robust form of financial market inefficiency than the gaps between unconditional expected returns and CAPM predictions, namely the autocorrelation of risk-factor returns, explained by the illiquidity of financial markets and the herding behavior of investors, who tend to buy products that have performed well. This autocorrelation, which is not described in the literature, makes the eigenvectors and eigenvalues of the correlation matrix sensitive to the time scale.
\item ``The Reactive Beta Model'' (Chapter \ref{beta}): the article describes the model of systematic leverage (correlations rise when the index falls), specific leverage (a stock's beta rises when it underperforms) and elasticity (when relative volatility rises, beta rises). It turns out that a large part of the variability of betas is explained by these phenomena. The approach of normalizing returns to correct for these small phenomena reduces the bias of certain factors (momentum and low beta) relative to direct linear regression on returns. Empirical tests show the superiority of the model over a simple linear regression. Monte Carlo simulations also show the advantage of such a model over robust methods such as quantile regressions and symmetric or asymmetric DCC GARCH-type models. I show that my model seems best suited to the reality of markets, because it was designed to fit well-characterized and well-measured phenomena.
\item ``The Model of Diffusion of the Correlation between Securities'' (Chapter \ref{diffusion}): the article identifies several stylized facts characterizing the diffusion of the markets' empirical eigenvectors. The eigenvectors of the matrix at time $t$ see their correlation, computed using the matrix at time $t+\tau$, increase very slightly with $\tau$. I study the eigenvalue distribution of the increments of the correlation matrix, which differs both from Wigner's semicircle law and from the pointed-hat-like distribution. The standard stochastic equations (Wright-Fisher, Feller) that simulate the correlation matrix directly, as well as other simple methods simulating random trajectories of the rotation matrix around the identity matrix with a mean-reversion term to simulate the diffusion of the eigenvectors, fail to reproduce the empirical eigenvalue distribution. The diffusion of the FCLs, defined in Chapter \ref{maxvar}, makes it possible to generate this distribution simply. The results of this chapter were obtained in collaboration with Stanislav Kuperstein.
\item ``Should Employers Pay Better their Employees? An Asset Pricing Approach'' (Chapter \ref{remuneration}): the compensation factor is identified as a significant common risk factor thanks to its measured FCL, which is significantly greater than 1. The factor is thus as significant as the Fama-French ``book'' factor. The compensation factor also reveals a weak market anomaly: companies that pay their employees well share a common risk and tend to outperform the others. The article calls into question the Fama-French methodology, which would not be fine-grained enough to characterize such an anomaly. It thus seems very important to keep the factor beta-neutral at every moment, and not merely on average, in order to measure the anomaly.
\end{itemize}
\part{Doctoral dissertation}
\chapter{Emergence of Correlation between Securities at Short Time Scales}
\label{emergence}
\includepdf[pages=-]{emergence.pdf}
\chapter{Fundamental Market Neutral Maximum Variance Portfolios}
\label{maxvar}
\footnote{
The results of this chapter were obtained in collaboration with Stanislav Kuperstein.}
\includepdf[pages=-]{MaxVar7.pdf}
\chapter{Time Scale Effect on Correlation between Securities at Long Time Horizon}
\label{emergence2}
\includepdf[pages=-]{Largertimescale2.pdf}
\chapter{The Reactive Beta Model}
\label{beta}
\includepdf[pages=-]{beta_revised6.pdf}
\chapter{The Model of Diffusion of Correlations between Securities}
\label{diffusion}
\footnote{
The results of this chapter were obtained in collaboration with Stanislav Kuperstein.}
\includepdf[pages=-]{DiffusionofMatrice3.pdf}
\chapter{Should Employers Pay Better their Employees? An Asset Pricing Approach}
\label{remuneration}
\includepdf[pages=-]{employers.pdf}
\part{General conclusion}
The empirical properties of the correlation matrix of stock returns are not well documented in the literature because they are drowned in measurement noise. The originality of the method I introduced is that it denoises the correlation matrix by exploiting data available in addition to returns in order to constrain the eigenvectors. It therefore opens new doors. The method is particularly suited to stock correlation matrices, because the first eigenvalue is much larger than the others and many financial data are available (book, capitalization, cash flow, etc.). Denoising the matrix made it possible to bring to light important new properties of the stock correlation matrix: \begin{itemize}
\item the instability of the eigenvalues and eigenvectors. The latter load primarily on the most important risk factors. The importance of a factor is measured through the FCL, a notion I introduced. The FCL is the normalized variance of a risk factor and also corresponds to the average of the eigenvalues weighted by the squared projections of the factor on the various eigenvectors;
\item the diffusion of the logarithm of the FCLs, modeled by simple Ornstein-Uhlenbeck processes, seems sufficient to explain a large part of the diffusion of the correlation matrix. This makes it possible to recover the eigenvalue distribution of the increments of the correlation matrix;
\item the risk-factor weights that optimize the FCLs are uniformly distributed, which is not compatible with a random distribution of the eigenvectors. Indeed, one would a priori have expected a Gaussian distribution of the weights, which would have been natural if the eigenvectors were completely random. This has many applications, notably in the construction of risk-premia portfolios, which have become important in the asset-management industry. These portfolios, which capture a given style, are built, following the Fama-French method, with a ``double Heaviside'' function, i.e. invested long in the top 20\% and short in the bottom 20\% with respect to a given criterion (book, capitalization, momentum, etc.). These portfolios can be optimized with a linear rule compatible with the uniform distribution instead of the Fama-French double Heaviside. I named these optimal portfolios ``Fundamental Market Neutral Maximum Variance Portfolios'' because they capture a given style optimally while minimizing specific risk. They theoretically have an optimal Sharpe ratio and an optimal FCL;
\item the time-scale effect on correlations, with two regimes: \begin{itemize}
\item at short time scales, between a few seconds and a few minutes, a lag effect across all stocks, with a relaxation time of a few minutes, explains the small autocorrelations and the increase of the eigenvalues with the time scale. I developed a lag model and derived a simple formula describing this increase, which curiously involves a power law. The model reproduces the measurements precisely. Correlations between stocks can thus be interpreted as the consequence of interactions between stocks through the intermediary of traders;
\item at long time scales, between one day and several months, a weak autocorrelation is initiated by a lack of liquidity and herding behavior among market participants. Similarly, an autocorrelation model that includes trends following an Ornstein-Uhlenbeck process makes it possible to reproduce the increases of the eigenvalues over long time scales.
\end{itemize}
\item the leverage effect, characterized by the increase of correlations and of the first eigenvalue when the market falls, does not generalize to the other risk factors. When a factor drops, its FCL and the eigenvalues do not increase. This is theoretically interesting economically insofar as, absent a leverage effect, alternative risk factors cannot carry asymmetric risk over a long time horizon because of the law of large numbers, and therefore cannot justify a positive risk premium. Indeed, it is mainly the leverage effect, with or without the fat tails of return distributions, that makes convergence to the Gaussian distribution very slow by maintaining the asymmetry. Without a leverage effect, risk-premia returns must converge more quickly to the Gaussian distribution.
\end{itemize}
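The FCL that appears throughout the conclusions above can be written explicitly (in my notation) for a portfolio with weights $w_i$ on stocks with volatilities $\sigma_i$, covariance matrix $\Sigma$ and correlation matrix $C$:

\begin{equation}
\mathrm{FCL}(w) = \frac{w^{\top}\Sigma\, w}{\sum_i w_i^{2}\,\sigma_i^{2}}
= \frac{y^{\top} C\, y}{y^{\top} y}
= \sum_{k} \lambda_k\, \frac{(y^{\top} v_k)^{2}}{y^{\top} y},
\qquad y_i = w_i\,\sigma_i,
\end{equation}

a Rayleigh quotient of $C$, so that $\mathrm{FCL}(w) \le \lambda_1$ and the eigenvectors of $C$ are exactly its stationary points.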
I also studied in detail the dynamics of beta, the sensitivity of a stock to index variations, which is directly linked to the composition of the first eigenvector of the correlation matrix and constitutes the key risk parameter. I proposed a reactive model with three components incorporating the specific leverage effect (when a stock underperforms, its beta rises), the systematic leverage effect (when the index falls, correlations rise) and the elasticity of betas (when relative volatility rises, betas rise). The three components were calibrated and tested. I tested the bias of the model on four basic market-neutral strategies and showed the superiority of the model over a simple linear regression. I also carried out a Monte Carlo test confirming the superiority of the model over alternative methods (``Minimum Absolute Deviation'', ``Trimean Quantile Regression'' and ``Dynamic Conditional Correlation'' with or without asymmetry).
Finally, I presented a very practical application with concrete implications for corporate management, showing empirically that companies that pay their employees well share a significant part of their risk and tend to outperform. The precision of the measurement method makes it possible to identify this market anomaly and highlights the limitations of the classical Fama-French method. This anomaly, which nonetheless remains of relatively weak significance, seems intuitive and obvious to practitioners.
\bibliographystyle{te}
\nocite{*}
In late summer, housewives used to get very nervous about what they had "put back" to feed their families during the winter. Supermarkets have solved most of the end-of-harvest jitters, but there's still something special about saving the last of summer. Freezing rather than canning lessens cooking time, reduces needed sugar and keeps more of ingredients' fresh taste. Here are several sauces, a couple of them keepers, to let your family enjoy the waning freshness of summer and the bounty of fall.
In the old days of dairy-fat-heavy meals, herb butters were a wonderful way to save garden bounty and a quick way to brighten broiled chicken, fish and baked or boiled potatoes. Instead of herb butters, try the following lighter toppings. Keep extra in the freezer, modern day's answer to the root cellar and canning cupboard. Many frozen herbs darken with freezing, but the taste excels that of dried herbs.
Or pick whatever your garden grows!
In a large bowl, stir to blend the yogurt and mayonnaise. Add chopped herbs and stir gently to mix evenly.
Line a cookie sheet or brownie pan with waxed paper. Scoop the mixture in teaspoonfuls onto the pan, leaving spaces between mounds. Freeze, remove from pan and pack in a covered freezer container or sealed plastic bag. Remove from the freezer when you start dinner; warm food will melt your thawed herb topping.
In a 2 quart saucepan, combine all of the ingredients.
Simmer until fruit is tender; sauce will be somewhat liquid. Package in 1/2 cup freezer containers. Add a container to the baking pan for chicken parts or pork chops or serve warm over broiled meats. Make this chutney-related sauce with apples when good stone fruit is no longer available.
Before the berry season ends, make this quick freezer "jam" to serve on pancakes or spoon over vanilla ice cream during the winter.
In a 2 to 3-quart saucepan heat to a boil: apple juice, lemon juice, mint leaves and sugar. Add all the fruit and return the mixture to a low boil. Reduce heat and simmer for 5 minutes, till berries are cooked through. Package in 1-cup freezer containers and bring the taste of summer back to your table.
Q: Action Bar "no resource found" error. I was trying to run the following code from the Android Studio guide:
<menu xmlns:android="http://schemas.android.com/apk/res/android" >
<!-- Search, should appear as action button -->
<item android:id="@+id/action_search"
android:icon="@drawable/ic_action_search"
android:title="@string/action_search"
android:showAsAction="ifRoom" />
<!-- Settings, should always be in the overflow -->
<item android:id="@+id/action_settings"
android:title="@string/action_settings"
          android:showAsAction="never" />
</menu>
However, for the following code:
android:icon="@drawable/ic_action_search"
android:title="@string/action_search"
It says:
Cannot resolve symbol '@string/action_search' less... (Ctrl+F1)
Validates resource references inside Android XML files
And that no resource matches the given name.
I am not sure what to make of it. I have googled extensively, and this is the possible solution I found:

- Download the image set at "https://developer.android.com/design/downloads/index.html#action-bar-icon-pack". I was unable to find what to download, and where to extract to.

I have tried:

- Setting the minSdk to 11 (and 22) in Gradle and in the manifest
- Redoing the project and reading the guide very closely
If someone could shed some light on this situation I would be very grateful! I have tried doing my own research, googling and experimenting, but to no avail.
A: You need to create the string value for your action_search reference.
<resources>
<string name="app_name">My Application</string>
<string name="edit_message">Enter a message</string>
<string name="button_send">Send</string>
<string name="action_settings">Settings</string>
<string name="action_search">Search</string>
<string name="title_activity_main">MainActivity</string>
<string name="title_activity_display_message">My Message</string>
</resources>
To fix the missing drawable reference, drop an image named after that reference, such as ic_action_search.png, into your drawable folder.
A: I hit a related error:

C:\Users\user\AndroidStudioProjects\FunFacts\app\src\main\res\values\dimens.xml: Error: In DataSet 'main', no data file for changedFile. This is an internal error in the incremental builds code; to work around it, try doing a full clean build.

As Scott Junner said on May 12, 2017: "If you open the Build menu at the top of Android Studio, you will see two options there of interest to you, 'Clean Project' and 'Rebuild Project'. Try those one after the other and see what results you get. Other than that I don't really know."
It works for me.
{"url":"https:\/\/zbmath.org\/?q=an:1199.28047","text":"zbMATH \u2014 the first resource for mathematics\n\nCertain properties of triangular transformations of measures. (English) Zbl\u00a01199.28047\nSummary: We study the convergence of triangular mappings on $$\\mathbb R^n$$, i.e., mappings $$T$$ such that the $$i$$-th coordinate function $$T_i$$ depends only on the variables $$x_1,\\dots,x_i$$. We show that, under broad assumptions, the inverse mapping to a canonical triangular transformation is canonical triangular as well. An example is constructed showing that the convergence in variation of measures is not sufficient for the convergence almost everywhere of the associated canonical triangular transformations. Finally, we show that the weak convergence of absolutely continuous convex measures to an absolutely continuous measure yields the convergence in variation. As a corollary, this implies the convergence in measure of the associated canonical triangular transformations.\nMSC:\n 28C20 Set functions and measures and integrals in infinite-dimensional spaces (Wiener measure, Gaussian measure, etc.) 
46G12 Measures and integration on abstract linear spaces 60B11 Probability theory on linear topological spaces","date":"2021-01-27 17:10:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8942427635192871, \"perplexity\": 429.09711426264664}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-04\/segments\/1610704828358.86\/warc\/CC-MAIN-20210127152334-20210127182334-00218.warc.gz\"}"}
Q: Escaping underscore template syntax in Twig

I have a .twig file with some content that is rendered using Backbone via Underscore's template engine.
index.twig file
<select name="" id="" class='select-specializations'>
<% _.each(itemList, function(item){%>
<option value="<%= item %>"> <%= item %> </option>
<%})%>
</select>
The problem is that Twig doesn't seem to ignore the <% syntax when I autoescape in my template file, throwing the following error:
A block must start with a tag name in..
And if I use the raw block, Underscore doesn't seem to understand the syntax. Is there a way to resolve this conflict of syntaxes between Twig and Underscore?
A: I faced the same issue as Andrew mentioned. The solution is to escape the Underscore template tags. Below is the final code after escaping.
<select name="" id="" class='select-specializations'>
{{ "<% _.each(itemList, function(item){%>" }}
<option value="{{ "<%= item %>" }}"> {{ "<%= item %>" }} </option>
{{ "<%})%>" }}
</select>
You will find more info in the Twig documentation.
A: Surround your underscore code with verbatim.
See http://twig.sensiolabs.org/doc/tags/verbatim.html for details.
Side note: <% is not the problem, but {% is.
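As a sketch of the verbatim approach applied to the template from the question (everything between the verbatim tags is passed through untouched by Twig, so Underscore can later process its own <% %> delimiters):

```twig
{# index.twig #}
<select name="" id="" class='select-specializations'>
    {% verbatim %}
        <% _.each(itemList, function(item){ %>
            <option value="<%= item %>"> <%= item %> </option>
        <% }) %>
    {% endverbatim %}
</select>
```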
Electric influence, or electrostatic induction, is the redistribution of electric charge on an object caused by nearby charges. Induction was discovered by the British scientist John Canton in 1753 and by the Swedish professor Johan Carl Wilcke in 1762. Electrostatic generators, such as the Wimshurst machine, the Van de Graaff generator and the electrophorus, use this principle. Electrostatic induction is distinct from electromagnetic induction, although both are often called simply induction.
When discussing electric influence, the electric field must also be mentioned. The electric field is a region of space in which certain phenomena occur. Besides the electric force, electric influence is also present, and it is of great importance. Electric influence manifests itself in that bodies become electrified in the field created by the action of an electrified body.
Demonstrating its significance
The role and significance of this phenomenon can be shown through various experiments. In one such experiment, the electric field is created by a large sphere resting on an insulating stand. The sphere is given a positive charge from a glass rod. The rod is first electrified by friction and then stroked across the surface of the sphere. As different parts of the rod come into contact with the sphere, the crackling of electric sparks jumping from one surface to the other can be heard. This procedure should be repeated several times. It is to be expected that the larger the charge, the more pronounced the phenomena in the electric field of the sphere will be.
A body shaped like an elongated ellipsoid and made of polished sheet metal is then brought into the electric field. It also rests on an insulator. A specially equipped electroscope is needed for this experiment. The cup and part of the rod of the electroscope are shielded by a mesh connected to the housing of the electroscope, and the housing is grounded. Interestingly, the protective mesh shields the electroscope from the electric field of the sphere, yet still allows us to see what happens in the shielded space.
At the very beginning of the experiment the elongated body is kept far from the electrified sphere. If the body and the inside of the cup of the electroscope are touched alternately with a proof plane, the leaves of the electroscope remain collapsed, no matter how many times the transfer of charge is attempted. After that, the stand with the ellipsoid is brought close to the sphere; it is important not to touch it. First the proof plane touches the end of the ellipsoid closer to the electrified sphere, and then the inside of the cup of the electroscope. The leaves then diverge, and a charge that has appeared on the body due to the action of the electric field of the sphere can be observed.
Next, the proof plane should again touch the end of the ellipsoid closer to the sphere and then the inside of the cup of the electroscope. In this way the leaves approach each other and the distance between them gradually decreases. If the process is repeated several times in succession, the gap between the leaves gradually vanishes and then grows again, and a new cycle begins.
Properties of electric influence
This experiment can serve to determine the actual role of electric influence and to demonstrate that it is one of the phenomena of great importance in physics. Its significance can be established in several ways, and one of the best and most reliable is precisely the experiment described above. From all of the above it can be concluded that both kinds of electricity appeared on the ellipsoid: one kind at the end closer to the sphere and the other at the far end. This appearance of both kinds of charge is a consequence of the action of the electric field of the sphere.
The action of the electric field is called inductive action, and the phenomenon itself is called electric influence.
The cause of the appearance of electric charges on bodies brought into an electric field is the influence, which plays a very important role and considerably affects other processes. If a body carried no charge before being brought into this field, and if it is insulated, equal amounts of positively and negatively charged particles, i.e., of positive and negative electricity, will appear on it.
Furthermore, charges that were originally mixed become partially separated on the body by electric influence. This mutual separation of charges can go very far, depending primarily on the properties of the electric field into which the body has been brought. The electric displacement vector (D) must be mentioned here; it is decisive for the inductive action of the electric field in question.
To show how the electric displacement vector is calculated, take two small plates and place them parallel to the plates of a capacitor; when they are separated, the induced charge on the plates reaches its maximum value. There is thus an orientation of the plates for which the largest induced positive charge (Qmax) is obtained on one plate and an equally large negative charge on the other. This orientation of the plates is determined by the normal (n) to the positively charged plate. If the area of the plate covered with positive electricity after it has been immersed in the electric field is denoted by S, the electric displacement vector is defined as:
D = n · Qmax / S
In magnitude, this vector equals the maximum density of the positive electricity present on the plates, and in direction it coincides with the normal to the positively charged plate.
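As a simple numerical illustration of the definition above (a Python sketch; the charge and area values are invented for the example):

```python
# Magnitude of the electric displacement vector, D = Qmax / S:
# the maximum induced surface charge density on the test plate.
Q_max = 2.0e-9   # maximum induced positive charge, in coulombs (illustrative)
S = 4.0e-4       # plate area, in square metres (illustrative)

D = Q_max / S    # in C/m^2; its direction is along the normal n
print(D)
```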
References

External links

Categories: Electrostatics, Electricity
7 Interesting Jewelry Facts You Didn't Know
We all appreciate jewellery, whether it is precious or artificial, but we don't know much about it. There is a plethora of fascinating facts about the jewellery you wear and cherish.
Here are 7 interesting jewelry facts that you might not have been aware of:
The very first diamond was found in India in the 4th century and India was the only place where diamonds were mined until 1726 when diamonds were discovered in Brazil. Diamonds were found in various parts of the country but most mining took place near the Krishna river (now in Andhra Pradesh) — a fact recorded by Italian explorer Marco Polo and French traveler Jean Baptiste.
The oldest known pieces of jewelry are 100,000 years old and were made out of Nassarius shells. It is assumed that they were used for decorative purposes.
No wedding is complete without an engagement ring in today's time. But this tradition was not always around. It was in 1477 that the Archduke of Austria proposed to the Duchess of Burgundy with a ring as a promise of marriage. Though romantic details of the story aren't available, the trend has caught on since then. And today, it's a billion-dollar business.
Due to an uncanny resemblance to gold, pyrite has earned itself the name fool's gold. Pyrite is used in artificial jewelry. When crushed, pyrite looks greenish-black, whereas gold powder is yellow.
While in the West diamonds and rubies are the most sought-after gemstones, in China it's jade. For the Chinese, jade has been a symbol of status, spirituality, purity, and health for centuries. Jadeite, jade's most expensive and rarest variety, is one of the most expensive gemstones in the world.
Almost all rubies have flaws. Finding a flawless ruby is exceptionally rare and it is priced higher than diamonds of a similar weight and quality.
The most expensive piece of jewelry ever used in a film was the necklace worn by actor Nicole Kidman in the movie Moulin Rouge. The necklace, which cost over $1 million, was made of 1,308 diamonds and platinum.
Keep an eye on our blog for more fascinating jewellery information.
{"url":"https:\/\/www.khanacademy.org\/math\/cc-eighth-grade-math\/cc-8th-linear-equations-functions\/cc-8th-function-intro\/v\/recognizing-functions-example-4","text":"# Checking if an equation represents a\u00a0function\n\nCCSS Math: 8.F.A.1\n\n## Video transcript\n\nIn the relation x is equal to y squared plus 3, can y be represented as a mathematical function of x? So the way they've written it, x is being represented as a mathematical function of y. We could even say that x as a function of y is equal to y squared plus 3. Now, let's see if we can do it the other way around, if we can represent y as a function of x. So one way you could think about it is you could essentially try to solve for y here. So let's do that. So I have x is equal to y squared plus 3. Subtract 3 from both sides, you get x minus 3 is equal to y squared. Now, the next step is going to be tricky, x minus 3 is equal to y squared. So y could be equal to-- and I'm just going to swap the sides. y could be equal to-- if we take the square root of both sides, it could be the positive square root of x minus 3, or it could be the negative square root. Or y could be the negative square root of x minus 3. If you don't believe me, square both sides of this. You'll get y squared is equal to x minus 3. Square both sides of this, you're going to get y squared is equal to-- well, the negative squared is just going to be a positive 1. And you're going to get y squared is equal to x minus 3. So this is a situation here where for a given x, you could actually have 2 y-values. Let me show you. Let me attempt to sketch this graph. So let's say this is our y-axis. I guess I could call it this relation. This is our x-axis. And this right over here, y is a positive square root of x minus 3. That's going to look like this. So if this is x is equal to 3, it's going to look like this. That's y is equal to the positive square root of x minus 3. 
And this over here, y is equal to the negative square root of x minus 3, is going to look something like this. I should make it a little bit more symmetric looking, because it's going to essentially be the mirror image if you flip over the x-axis. So it's going to look something like this-- y is equal to the negative square root of x minus 3. And this right over here, this relationship cannot be-- this right over here is not a function of x. In order to be a function of x, for a given x it has to map to exactly one value for the function. But here you see it's mapping to two values of the function. So, for example, let's say we take x is equal to 4. So x equals 4 could get us to y is equal to 1. 4 minus 3 is 1. Take the positive square root, it could be 1. Or you could have x equals 4, and y is equal to negative 1. So you can't have this situation. If you were making a table x and y as a function of x, you can't have x is equal to 4. And at one point it equals 1. And then in another interpretation of it, when x is equal to 4, you get to negative 1. You can't have one input mapping to two outputs and still be a function. 
So in this case, the relation cannot-- for this relation, y cannot be represented as a mathematical function of x.","date":"2018-12-16 03:21:04","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8398981690406799, \"perplexity\": 148.5470179934989}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-51\/segments\/1544376827252.87\/warc\/CC-MAIN-20181216025802-20181216051802-00229.warc.gz\"}"}
\section{Introduction}
Intensity correlations were discovered first in
astrophysics by R. Hanbury Brown and R. Q. Twiss~\cite{hbt},
who invented this method to determine the angular diameter
of main sequence stars (HBT effect). In particle physics, intensity
correlations of pions were observed by Goldhaber, Goldhaber,
Lee and Pais (GGLP effect)~\cite{gglp}.
Bose-Einstein correlations are intensity
correlations among detected bosons, studied
mainly with the purpose of reconstructing the
space-time picture of particle production.
The analysis of higher-order Bose-Einstein
correlation functions became a focal point of current
research interest.
In particle physics, significant three- or higher-order
Bose-Einstein correlations have been extracted from the data
sampled by the AFS~\cite{afs_n1},
the NA22~\cite{na22_n1,na22_n2,na22_n3}
and the UA1-collaborations ~\cite{ua1_n1}.
These data were used to test the possible existence of
a coherent source in multi-particle physics and to compare
the measured strength of these correlations to the predictions of
the quantum optical (QO) formalism~\cite{glauber,qo,qow}.
As the precision of the measurements improved, the
QO predictions for higher order correlations
were found to be only marginally consistent with the
data on third- and fourth-order Bose-Einstein correlation
functions (BECF-s)~\cite{na22_ng} in $(\pi^+/K^+) + p$ reactions
at CERN SPS. Recently, this basic QO formalism was shown
to be insufficient to simultaneously describe the high precision
UA1 data on two- and three-particle Bose-Einstein correlations
~\cite{ua1_ng}.
In high energy heavy ion physics,
the first experimental determination of the three-particle
correlation function has just been
reported by the NA44 collaboration
~\cite{na44-qm97,bengt-cf98,janus-cris98,na44-3pi}.
NA44 reports that the genuine three-particle correlation is
quite suppressed in the studied reaction, $S+Pb$ collisions.
The genuine three-particle correlation means the part
of the three-particle correlation that does not arise from
combinations of two-particle correlations. This suppression can
be expressed as a phase factor, $\cos(\phi)$, of the three-particle
correlation function in the case of totally incoherent particle
production. In that case this phase factor is related to an
asymmetry of the particle source, not possible to extract
from two-particle correlations. Theoretical estimates
of this asymmetry effect on the phase factor show only very small
departures from $\cos(\phi) = 1$~\cite{3pi-nbi,henning,uli_zhang}.
The large departure from $\cos(\phi) =1$ found by the NA44 collaboration,
$\cos(\phi) =0.2 \pm 0.2$~\cite{na44-3pi}, ought to be due to some other mechanism.
We will discuss the possibility of a partially coherent source in this Letter.
A possible existence of
such an extra phase in the three and higher order correlation
functions was noted already e.g. in papers by the
NA22 collaboration~\cite{na22_ng}, but no experimental
evidence has been put forward for a $\cos(\phi) \ne 1$ value
in particle physics.
From the theoretical side, Cramer and Kadija predicted up to
order 6 the strength of Bose-Einstein correlations for sources with
partially coherent and incoherent components that included
also a possible contamination by mis-identified, non-interfering
particles~\cite{cramer-kadija}. Their formulas were
obtained in the quantum-optical formalism.
Recently, Suzuki and collaborators calculated higher order
exclusive Bose-Einstein correlations from
the generating functional approach to
quantum-optical formalism~\cite{suzuki} for the case that
the source has $M$ incoherent and one coherent component.
Recently, multi-particle symmetrizations up to arbitrary
high order were evaluated exactly by Zhang~\cite{zhang-npi}
for the special case of a pion-laser model proposed by Pratt
in ref.~\cite{plaser}.
Surprisingly, the structure of the $n$-particle inclusive
correlation functions in terms of the Fourier-transformed
inclusive emission function
was found to be the same as the structure of the $n$-particle
exclusive correlation functions in terms of the
single-particle exclusive emission function~\cite{zhang-npi}.
However, this result is valid only in the case
when Bose-Einstein condensation, and hence the development of
partial coherence, is not yet reached~\cite{zjcst-rev}.
A simple recurrence relation was obtained
for the strength of the higher order correlation functions
of core/halo type systems~\cite{nhalo}.
Such systems are boson-emitting
sources where some particles come from an incoherent center of the
particle emission, that is assumed to be resolvable
by the Bose-Einstein microscope. The rest of the particles
is assumed to come from the halo region, that corresponds
to large length-scales not resolvable by intensity interferometry
~\cite{chalo,marburg-halo}.
In ref.~\cite{nhalo}, a prediction was made for the strength
of third order and arbitrary order BECF assuming that the
core has no coherent component.
The purpose of the present Letter is to investigate the
effect of a partially coherent component in the core of
particle emission. We present a generalization of
the earlier recurrence relations of ref.~\cite{nhalo};
the new expressions also yield an easily calculable formula for the
strength of the $n$-th order correlation function with a
partially coherent and a halo component, and we apply
these expressions to the NA44 data on $S+Pb$ collisions.
\section{Basic definitions}
The central assumption of the core/halo model is
that the reduction of the intercept parameter of the $n$-particle
BECF-s
is due only to the presence of the long-lived resonances~\cite{nhalo}.
This assumption was motivated by the success of fully incoherent
event generators like RQMD or VENUS in the description
of two-particle BECF-s.
The emission function of the whole source
can be written as a sum of a contribution
from the core and from the halo, where
halo stands for the decay products of the (unresolvable)
long-lived resonances. The core is indexed by (c),
the halo by (h).
\begin{equation}
S(x,k) = S_c(x,k) + S_h(x,k)
\end{equation}
In earlier studies of the core/halo model it was assumed
that $S_c(x,k)$ describes a fully incoherent (thermal)
source. Now we assume that some fraction of the core
emits bosons in a coherent manner, e.g., due to
the emerging formation of pion lasers, Bose-Einstein
condensates of pions, the production of disoriented
chiral condensates or ..., so we define
\begin{equation}
S_c(x,{\bf k}) = S_c^p(x,{\bf k}) + S_c^i(x,{\bf k})
\end{equation}
where the upper index $p$ stands for coherent
component (p as partial), upper index $i$ stands for incoherent
component of the source.
The invariant spectrum is given by
\begin{equation}
N({\bf k}) = \int d^4x S(x,{\bf k}) = N_c({\bf k}) + N_h({\bf k})
\end{equation}
and the core contribution is a sum of the coherent
and incoherent components:
\begin{equation}
N_c({\bf k}) = \int d^4x S_c(x,{\bf k}) = N_c^p({\bf k}) + N_c^i({\bf k})
\end{equation}
One can introduce the momentum dependent core fractions $f_c({\bf k})$
and partially coherent core fractions $p_c({\bf k})$ as
\begin{eqnarray}
f_c({\bf k}) & = & N_c({\bf k})/N({\bf k}) \label{e:fck} \\
p_c({\bf k}) & = & N_c^p({\bf k}) / N_c({\bf k})
\end{eqnarray}
The halo and the incoherent fractions $f_h, f_i$ are
\begin{eqnarray}
f_h({\bf k}) & = & N_h({\bf k})/N({\bf k}) = 1 - f_c({\bf k}) \\
f_i({\bf k}) & = & N_c^i({\bf k}) / N_c({\bf k}) = 1 - p_c({\bf k})
\label{e:fik}
\end{eqnarray}
Note that our definition of the momentum dependent,
partially coherent core fraction, $p_c({\bf k})$
should be clearly distinguished
from the chaoticity $p$ of Weiner~\cite{weiner},
defined as $p = \langle n_{chao}\rangle / \langle n_{tot} \rangle$,
the ratio of the mean number of particles from the chaotic source
to the mean total multiplicity. If we neglect the momentum dependence
of $f_c({\bf k})$ and $p_c({\bf k})$, the core fraction and the partially
coherent core fraction, formally one obtains $p = 1 - p_c f_c$.
However, we distinguish the resolvable intercept $\lambda_*$ from
the exact intercept $\lambda_{xct}$, in contrast to ref.~\cite{weiner}.
For example, in case of two-particle correlations,
$\lambda_{*,2} = f_c^2 [ (1 - p_c)^2 + 2 p_c (1 - p_c)]$,
while $\lambda_{xct,2} = \lambda_{*,2} + (1 - f_c)^2 +
2 f_c (1 - f_c)$ in our case, while in the case of the quantum
optical formalism without long-lived resonances,
$\lambda_2^{QO} = 2 p (1 - p) + p^2 = 1 - (1 - p)^2$.
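As a numerical illustration of these definitions (a Python sketch; the values $f_c = 0.7$ and $p_c = 0.5$ are assumed for illustration only), one can compare the resolvable intercept, the exact intercept, and the QO intercept:

```python
# Compare the resolvable intercept lambda_{*,2}, the exact intercept
# lambda_{xct,2}, and the quantum-optical intercept lambda_2^{QO}
# for illustrative values of the core and partially coherent fractions.
f_c, p_c = 0.7, 0.5

lam_star2 = f_c**2 * ((1 - p_c)**2 + 2 * p_c * (1 - p_c))
lam_xct2 = lam_star2 + (1 - f_c)**2 + 2 * f_c * (1 - f_c)

p = 1 - p_c * f_c          # formal chaoticity, p = 1 - p_c f_c
lam_QO2 = 1 - (1 - p)**2   # = 2 p (1 - p) + p^2

print(round(lam_star2, 4), round(lam_xct2, 4), round(lam_QO2, 4))
```

With the formal identification $p = 1 - p_c f_c$, both $\lambda_{xct,2}$ and $\lambda_2^{QO}$ reduce algebraically to $1 - (p_c f_c)^2$ and hence coincide, while the resolvable intercept $\lambda_{*,2}$ is substantially smaller.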
\section{ The strength of the $n$-particle correlations, $\lambda_{*,n}$}
We define the $n$-particle correlation function as
\begin{eqnarray}
C_n(1,2,..., n) & = &C_n({\bf k}_1,{\bf k}_2,...,{\bf k}_n) =
\frac{N_n({\bf k}_1,{\bf k}_2,...,{\bf k}_n) }{N_1({\bf k}_1) N_1({\bf k}_2) ... N_1({\bf k}_n)} \\
\null & = &
\frac{N_n(1,2,...,n) }{N_1(1) N_1(2) ... N_1(n)}
\end{eqnarray}
where a symbolic notation for ${\bf k}_i$ is introduced,
only the index of ${\bf k}$ is written out in the argument.
In the forthcoming, we shall apply this notation consistently
for the arguments of various functions of the momenta,
i.e., $f({\bf k}_i,{\bf k}_j, ... , {\bf k}_m)$ is symbolically
denoted by $f(i,j, ... ,m)$.
We find that the intercept of the $n$-particle
correlation function (extrapolated from
finite relative momenta to zero
relative momentum) is given by
the following formula,
\begin{equation}
C_n(k_i=k_j,\forall i,j) = 1 + \lambda_{*,n}= 1 + \sum_{j=2}^n
{\left( \null^{\displaystyle\phantom{|} n}_{\displaystyle\phantom{|} j}\right)}
\alpha_j f_c^j
\left[ (1-p_c)^j + j p_c (1-p_c)^{j-1} \right] ,
\label{e:lamnfp_sim}
\end{equation}
where $\alpha_j $ counts the number
of fully mixing permutations of $j$ elements.
This can be calculated from a simple recurrence,
as obtained in ref.~\cite{nhalo}.
Note that the equations of ref.~\cite{nhalo,chalo}
were given for the purely incoherent core, and they
are modified above for an additional coherent
component in a straightforward manner. In general,
terms proportional to $f_c^j$ in the incoherent case
shall pick up an additional factor
$ [ (1 - p_c)^j + j p_c (1 - p_c)^{j-1} ]$
in case the core has a coherent component.
This extra factor means that either all $j$ particles
must come from the incoherent part of the core,
or one of them can come from the coherent, the remaining
$j-1$ particles from the incoherent part.
If two or more particles come from the coherent component
of the core, the contribution to intensity correlations
vanishes as the intensity correlator for two coherent particles
is zero.
Let us indicate the number of permutations that completely mix exactly
$j$ non-identical elements by $\alpha_j$. There are
exactly $\left( \null^{\displaystyle\phantom{|} n}_{\displaystyle\phantom{|} j} \right)$ different
ways to choose $j$ different elements from among $n$ different elements.
Since all the $n!$ permutations can be written as a sum over the
fully mixing permutations, the counting rule yields
a recurrence relation for $\alpha_j$, ref. ~\cite{nhalo}:
\begin{eqnarray}
\alpha_n & = & n! - 1 - \sum_{j = 1}^{n - 1}
{\left( \null^{\displaystyle\phantom{|} n}_{\displaystyle\phantom{|} j} \right)} \alpha_j.
\label{e:alp}
\end{eqnarray}
The first few values of $\alpha_j$ are given as
\begin{eqnarray}
\alpha_1 & = & 0, \\
\alpha_2 & = & 1, \\
\alpha_3 & = & 2, \\
\alpha_4 & = & 9, \\
\alpha_5 & = & 44, \\
\alpha_6 & = & 265.
\end{eqnarray}
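The recurrence of eq.~(\ref{e:alp}) is easy to evaluate; a short Python sketch (using only the standard library) reproduces the values listed above:

```python
from math import comb, factorial

def alpha(n, _cache={1: 0}):
    """Number of fully mixing permutations (rho_i != i for all i) of n
    elements, from the recurrence alpha_n = n! - 1 - sum_j C(n,j) alpha_j."""
    if n not in _cache:
        _cache[n] = factorial(n) - 1 - sum(comb(n, j) * alpha(j)
                                           for j in range(1, n))
    return _cache[n]

print([alpha(j) for j in range(1, 7)])  # -> [0, 1, 2, 9, 44, 265]
```

These are the subfactorials (derangement numbers), i.e., the numbers of permutations of $j$ elements without any fixed point.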
We have the following explicit expressions for the first few intercept
parameters:
\begin{eqnarray}
\lambda_{*,2} & = & f_c^2
[ (1 - p_c)^2 + 2 p_c (1-p_c)]
\label{e:l2} \\
\lambda_{*,3} & = & 3 f_c^2
[ (1 - p_c)^2 + 2 p_c (1-p_c)] \nonumber
\\
\null & \null & \qquad
+ 2 f_c^3
[ (1 - p_c)^3 + 3 p_c (1-p_c)^2]
\\
\lambda_{*,4} & = & 6 f_c^2
[ (1 - p_c)^2 + 2 p_c (1-p_c)] \nonumber
\\
\null & \null & \qquad
+ 8 f_c^3
[ (1 - p_c)^3 + 3 p_c (1-p_c)^2] \nonumber
\\
\null & \null & \qquad
+ 9 f_c^4
[ (1 - p_c)^4 + 4 p_c (1-p_c)^3]
\\
\lambda_{*,5} & = &
10 f_c^2
[ (1 - p_c)^2 + 2 p_c (1-p_c)] \nonumber
\\
\null & \null & \qquad
+ 20 f_c^3
[ (1 - p_c)^3 + 3 p_c (1-p_c)^2] \nonumber
\\
\null & \null & \qquad
+ 45 f_c^4
[ (1 - p_c)^4 + 4 p_c (1-p_c)^3] \nonumber
\\
\null & \null & \qquad
+ 44 f_c^5
[ (1 - p_c)^5 + 5 p_c (1-p_c)^4]
\label{e:lamfp}
\end{eqnarray}
In the above equations, the effective intercept parameters,
the core fraction and the partially coherent fraction
are evaluated at a mean momentum ${\bf K}$,
$\lambda_{*,n} = \lambda_{*,n}({{\bf K}})$,
$f_c = f_c({{\bf K}})$ and $p_c = p_c({{\bf K}})$.
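The explicit expressions above follow from the general formula; a self-contained Python sketch (the values of $f_c$ and $p_c$ are purely illustrative) evaluates $\lambda_{*,n}$ for any order:

```python
from math import comb, factorial

def alpha(n, _cache={1: 0}):
    # number of fully mixing permutations of n elements, eq. (e:alp)
    if n not in _cache:
        _cache[n] = factorial(n) - 1 - sum(comb(n, j) * alpha(j)
                                           for j in range(1, n))
    return _cache[n]

def lam_star(n, f_c, p_c):
    # effective intercept lambda_{*,n} from the general formula (e:lamnfp_sim)
    return sum(comb(n, j) * alpha(j) * f_c**j
               * ((1 - p_c)**j + j * p_c * (1 - p_c)**(j - 1))
               for j in range(2, n + 1))

f_c, p_c = 0.7, 0.2   # illustrative values
print([round(lam_star(n, f_c, p_c), 4) for n in (2, 3, 4, 5)])
```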
\section{The $n$-body correlation function }
Let us give the closed form for the full correlation
function for arbitrary high order of correlation functions,
generalizing the results of ref.~\cite{nhalo} for an
additional partially coherent component in the source:
Let $\rho^{(n)}$
stand for those permutations of $(1,...,n)$
that are mixing {\it all} the
numbers from 1 to $n$ and let us indicate by
$\rho_i$ the value which is
replaced by $i$ in a given permutation belonging
to the set of permutations $\rho^{(n)}$.
(Superscript indexes a set of
permutations, subscript stands for a given value).
Then we have $\rho_i \ne i$ for all values of $i = 1,..., n$.
If the partially coherent component is
vanishing, the general expression for
the $n$-particle inclusive correlation function
$C_n(1,...,n)$ was given
in ref.~\cite{nhalo} as
\begin{eqnarray}
C_n(1,...,n) & = & 1 + \sum_{j = 2}^n~~
\sum_{i_1, ..., i_j = 1}^{\null \,\,\, n \,\,\, _{\prime}}~~
\sum_{\rho^{(j)}} \prod_{k=1}^j
f_c(i_k) \tilde s_c(i_k,i_{\rho_k}).
\label{e:fmix}
\end{eqnarray}
Here $\sum'$ indicates that the
summation should be taken over those sets of
values of the indices which do not contain any value more than once,
and the Fourier-transformed emission function of the core
is
\begin{eqnarray}
\tilde s_c(i,j)
= {\tilde S_c(i,j) \over \tilde S_c(i,i)}
\end{eqnarray}
which satisfies
\begin{eqnarray}
\tilde s_c(i,j)
=
\tilde s_c^*(j,i) {\tilde S_c(j,j) \over \tilde S_c(i,i)}
\ne \tilde s^*_c(j,i), \label{e:asym}
\end{eqnarray}
In the above equations, the tilde denotes Fourier-transformation
over the relative momenta,
\begin{eqnarray}
\tilde S_c(l,m) = \int d^4 x \exp[i (k_l - k_m) \cdot x ]
S_c(x,{k_l+k_m \over 2})
\end{eqnarray}
and similar expressions hold for the coherent and the
incoherent components of the core.
\footnote{Note that with this definition the normalized Fourier-transformed
emission function becomes asymmetric under the exchange of its
arguments combined with complex conjugation, eq.~(\ref{e:asym}),
although the relationship
$\tilde S_c(i,j) = \tilde S_c^*(j,i)$ is satisfied.}
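The asymmetry of eq.~(\ref{e:asym}) can be checked numerically; the following Python sketch uses a toy one-dimensional Gaussian core emission function with a momentum-dependent normalization (all parameter values are assumptions for illustration):

```python
import numpy as np

R, T = 1.0, 0.5                      # toy source radius and momentum spread
x = np.linspace(-30.0, 30.0, 20001)  # integration grid
dx = x[1] - x[0]

def S(xv, k):
    # toy core emission function: Gaussian profile, momentum-dependent norm
    return np.exp(-k**2 / (2.0 * T)) * np.exp(-xv**2 / (2.0 * R**2))

def S_tilde(kl, km):
    # \tilde S_c(l,m): Fourier transform over the relative momentum
    return np.sum(np.exp(1j * (kl - km) * x) * S(x, 0.5 * (kl + km))) * dx

def s_tilde(ki, kj):
    # normalized Fourier-transformed emission function
    return S_tilde(ki, kj) / S_tilde(ki, ki)

k1, k2 = 0.3, 0.8
lhs = s_tilde(k1, k2)
rhs = np.conj(s_tilde(k2, k1)) * S_tilde(k2, k2) / S_tilde(k1, k1)

print(np.allclose(lhs, rhs))                       # identity of eq. (e:asym)
print(np.allclose(lhs, np.conj(s_tilde(k2, k1))))  # s(i,j) != s*(j,i)
```

The first test confirms the identity; the second fails precisely because $\tilde S_c(i,i) \ne \tilde S_c(j,j)$ for a momentum-dependent single-particle spectrum.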
The expression in eq.~(\ref{e:fmix}) is valid not only
for the case when exactly $n$
bosons are in the system and full symmetrization is performed,
$C_n(1,2,...,n)$ stands for the $n$-particle exclusive
correlation function and
$\tilde S_c(i,j)$ stands for the Fourier-transformed
core emission function without
modifications due to multi-particle symmetrization~\cite{zhang-npi,zjcst-rev}.
In addition, eq.~(\ref{e:fmix}) is also valid
when the only source of correlations between pions is
due to Bose-Einstein symmetrization,
the number of pions is randomly varying from event to event,
and $C_n(1,2, ... ,n)$ is interpreted as the $n$-particle
inclusive correlation function~\cite{zhang-npi,zjcst-rev},
and $\tilde S_c(i,j) $ includes all higher order symmetrization
effects.
However, eq.~(\ref{e:fmix}) is valid only if the core has no partially
coherent component.
If a coherent component is present,
one can introduce the normalized Fourier-transformed {\underline i}ncoherent
and {\underline p}artially coherent core emission functions as
\begin{eqnarray}
\tilde s_c^i(j,k)
& = & {\tilde S^i_c(j,k) \over \tilde S^i_c(j,j)}\\
\tilde s_c^p(j,k)
& = & {\tilde S^p_c(j,k) \over \tilde S^p_c(j,j)}
\end{eqnarray}
and we obtain
\begin{eqnarray}
C_n(1,...,n) & = & 1 + \sum_{j = 2}^n~~
\sum_{m_1, ..., m_j = 1}^{\null \,\,\, n \,\,\, _{\prime}}~~
\sum_{\rho^{(j)}}
\left\{
\prod_{k=1}^{j}
f_c(m_k) [1 - p_c(m_k)] \,\, \tilde s^i_c(m_k,m_{\rho_k})
\right. \nonumber
\\
\null & \null & \null \hspace{-2cm}
\left. +
\sum_{l = 1}^j f_c(m_l) p_c(m_l) \,\, \tilde s^p_c(m_l,m_{\rho_l}) \!
\prod_{k=1, k \ne l}^j
f_c(m_k) [1 - p_c(m_k)] \, \tilde s^i_c(m_k,m_{\rho_k})
\right\}
\label{e:fpmix}
\end{eqnarray}
This expression contains phases in the Fourier-transformed,
normalized source distributions. Actually, two (momentum dependent)
phases are present:
one denoted by $\phi^i({\bf k}_m,{\bf k}_n)$ in the Fourier-transformed
normalized {\it incoherent} core emission function,
$\tilde s_c^i({\bf k}_m,{\bf k}_n)$ and another independent
phase, denoted by $\phi^p({\bf k}_m,{\bf k}_n)$, is present in
the Fourier-transformed
normalized {\it coherent} core emission function,
$\tilde s_c^p({\bf k}_m,{\bf k}_n)$. One can write
\begin{eqnarray}
\tilde s_c^i({\bf k}_m,{\bf k}_n)
& = & |\tilde s_c^i({\bf k}_m,{\bf k}_n)| \exp[i \phi^i({\bf k}_m,{\bf k}_n)],\\
\tilde s_c^p({\bf k}_m,{\bf k}_n)
& = & |\tilde s_c^p({\bf k}_m,{\bf k}_n)| \exp[i \phi^p({\bf k}_m,{\bf k}_n)].
\end{eqnarray}
The shape of both the coherent and the incoherent components
is arbitrary in these equations, but should correspond to the space-time distribution
of particle production.
If the variances of the core are finite,
the emission functions are usually
parameterized by Gaussians.
If the core distributions have power-law like tails,
as in the case of the Lorentzian distribution~\cite{3d},
then the Fourier-transformed emission functions correspond
to exponentials or to power-law structures~\cite{bialas}.
For completeness, we list these possibilities below:
\begin{eqnarray}
|\tilde s_c^i({\bf k}_m,{\bf k}_n)|^2
& = & \exp( - R_{i}^2 Q_{mn}^2) \qquad \mbox{\rm or} \\
|\tilde s_c^i({\bf k}_m,{\bf k}_n)|^2
& = & \exp( - R_{i} Q_{mn}) \qquad \mbox{\rm or} \\
|\tilde s_c^i({\bf k}_m,{\bf k}_n)|^2
& = & a_i (R_{i} Q_{mn})^{b_i} \qquad \mbox{\rm etc ... , } \\
|\tilde s_c^p({\bf k}_m,{\bf k}_n)|^2
& = & \exp( - R_{p}^2 Q_{mn}^2) \qquad \mbox{\rm or} \\
|\tilde s_c^p({\bf k}_m,{\bf k}_n)|^2
& = & \exp( - R_p Q_{mn}) \qquad \mbox{\rm or} \\
|\tilde s_c^p({\bf k}_m,{\bf k}_n)|^2
& = & a_p (R_p Q_{mn})^{b_p} \qquad \mbox{\rm etc ... } .
\end{eqnarray}
In the above equations, subscripts $_i$ and $_p$ index the
parameters belonging to the incoherent or to the partially
coherent components of the core, and $Q_{mn}$ stands for
a certain experimentally defined relative momentum variable
determined from ${\bf k}_m$ and ${\bf k}_n$.
A straightforward counting shows that in the limiting case
when all momenta
are equal, the simple formula of
eq.~(\ref{e:lamnfp_sim}) follows from the shape of the
$n$-particle Bose-Einstein correlation functions of
eq.~(\ref{e:fpmix}), as
$\tilde s^i_c(i,i) = \tilde s^p_c(i,i) = 1. $
\section{Application to three-particle correlation data}
As an application of the above formalism, we attempt to
determine the core fraction $f_c$ and the partially
coherent fraction $p_c$ from the strength of the
NA44 two - and three- particle correlation functions, $\lambda_{*,2}$
and $\lambda_{*,3}$,
in the CERN SPS S + Pb reactions.
The two experimentally determined values are
$\lambda_{*,2} = 0.44 \pm 0.04 $ and
$\lambda_{*,3} = 1.35 \pm 0.12 $ (statistical errors only).~
\footnote{\samepage Coulomb corrections are large in heavy ion collisions and
the value of $\lambda_{*,3}$ was determined with the
help of a newly developed Coulomb 3-particle wave-function
integration method described in ref.~\cite{alt-3}.}
Figure 1 illustrates the 2 $\sigma$ contour plots
in the $(f_c,p_c)$ plane that are obtained for these
parameters from the
experimental values of $\lambda_{*,2}$ and $\lambda_{*,3}$.
\begin{figure}
\begin{center}
\epsfig{file=pcohf.eps,height=10.cm,width=10.cm,angle=270}
\end{center}
\caption{Allowed regions for possible values of the
core fraction $f_c$ and the partially coherent fraction $p_c$,
evaluated at the two $\sigma$ level from the intercepts
of the second order and the third order correlation
functions, $\lambda_{*,2} $ and $\lambda_{*,3}$.
}
\end{figure}
The overlap area in
Fig. 1 shows that a large range of $(f_c,p_c)$
values is able to describe simultaneously
the strength of the two-particle and the three-particle
correlation functions within two standard deviations
of the experimental errors.
Thus neither the fully incoherent nor the partially coherent
source picture can be excluded.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\hline
$f_c$ & $p_c$ & $\lambda_{*,2}$ &
$ \lambda_{*,3}$ & $\lambda_{*,4}$ &$ \lambda_{*,5}$\\
\hline
0.60 & 0.00 & 0.36 & 1.5 & 5.1 & 17.2 \\
0.70 & 0.50 & 0.37 & 1.4 & 4.3 & 11.9 \\
1.00 & 0.75 & 0.44 & 1.6 & 4.3 & 10.5 \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Evaluation of the strength of higher order
correlation functions, $\lambda_{*,n} $, for various core fractions and
partially coherent fractions allowed by NA44 two- and three-particle
correlation data.}
\end{table}
Now we can predict the intercepts of higher order correlations
to see if they become more sensitive to the presence of a partially
coherent source. In Table 1 we have evaluated the $\lambda_{*,2},
\lambda_{*,3},\lambda_{*,4},\lambda_{*,5}$ values for some cases
in the overlap region. We find that $\lambda_{*,5}$
is almost a factor of 2 larger for a completely incoherent source
than for a partially coherent source with no halo component,
although within experimental errors both cases
describe $\lambda_{*,2}$ and $\lambda_{*,3}$.
This is in agreement with Cramer and Kadija, who have pointed out
that for higher values
of $n$ the difference between a partially coherent source
and the fully incoherent source
will become larger and larger~\cite{cramer-kadija}.
The results presented here imply that the measurement
of higher order correlations, up to at least 5th order,
is necessary to determine the value of
the degree of partial coherence of the source in this reaction.
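As an illustrative aside (our sketch, not part of the original text), the entries of Table 1 can be reproduced numerically. The script assumes that in the equal-momentum limit the sum in eq. (e:fpmix) reduces to a sum over fully mixed $j$-tuples, counted by the number of fixed-point-free permutations of $j$ elements; with that assumption it matches every rounded entry of Table 1.

```python
from math import comb

def derangements(j):
    # number of permutations of j elements without fixed points:
    # D(2) = 1, D(3) = 2, D(4) = 9, D(5) = 44
    d = [1, 0]
    for m in range(2, j + 1):
        d.append((m - 1) * (d[m - 1] + d[m - 2]))
    return d[j]

def intercept(n, f_c, p_c):
    # lambda_{*,n}: equal-momentum limit of C_n - 1, summing over fully
    # mixed j-tuples, with one term all-incoherent and j terms carrying
    # exactly one coherent factor (cf. the structure of eq. (e:fpmix))
    return sum(
        comb(n, j) * derangements(j) * f_c ** j
        * ((1 - p_c) ** j + j * p_c * (1 - p_c) ** (j - 1))
        for j in range(2, n + 1)
    )

# the three rows of Table 1
for f_c, p_c in [(0.60, 0.00), (0.70, 0.50), (1.00, 0.75)]:
    print(f_c, p_c, [round(intercept(n, f_c, p_c), 2) for n in range(2, 6)])
```

The loop prints one row per $(f_c, p_c)$ pair of Table 1; for instance the fully incoherent case $f_c = 0.6$, $p_c = 0$ gives 0.36, 1.51, 5.05, 17.17, which round to the tabulated 0.36, 1.5, 5.1, 17.2.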
\section{Summary, conclusions}
In summary, we have found a simple generalization of the
core-halo model for the case when the core has a partially
coherent component. The strength of the $n$-particle
correlation function can be evaluated for arbitrary
value of $n$ with the help of a simple recurrence formula.
The shape of the $n$-particle Bose-Einstein
correlation function was determined in terms of the Fourier-transformed
emission function of the incoherent and the partially
coherent component of the source.
The graph rules for the calculation of these functions
are summarized and illustrated graphically in Appendix A.
We found that the strengths of the second and the third order
Bose-Einstein correlation functions in the NA44 $S + Pb$ reaction
at CERN SPS can be accommodated simultaneously both in
a fully incoherent core picture $(p_c = 0)$ with a halo fraction of
$f_c = 0.6 $ as well as in a partially coherent core picture
that has no halo component, $p_c = 0.75 $ and $f_c = 1.$
However, the strength of the fourth and fifth order
correlation functions is shown to be quite
different in the two scenarios.
\section*{Acknowledgements}
T. Cs. thanks N. Suzuki and W. Kittel for valuable discussions.
T. Cs. and A. Ster would like to express their gratitude
to B. L\"orstad for kind hospitality at University of Lund
and to W. Kittel for kind hospitality at University of Nijmegen.
T. Cs. is grateful to M. Gyulassy for kind hospitality at Columbia
University.
This research was partially supported by the OTKA grant
T026435, by the US - Hungarian Joint Fund MAKA 652/1998
and by the NWO-OTKA grant N25487.
\vfill\eject
8.5.2015 New Music
Stream Bridget Kelly's New EP 'Summer of 17'
Just in time for summer, Bridget Kelly returns with a brand new EP called Summer of 17. The R&B songstress, who split with Roc Nation last year, keeps the feel-good vibes flowing on the '90s-infused set, which she released independently.
The six tracks feature the Mack Wilds-assisted single "Act Like That," with production from Da Internz, Swagg, K-Mack, Jordan Bratton, JDot, and Roc.
"This project is unlike anything I've ever released but I'm so proud of it," Bridget told Billboard. "I wrote/co-wrote every song; it's tailor-made to remind myself that I'm old enough to know better but young enough not to take life [and] love too seriously."
She will celebrate the EP's release by embarking on her "Summer of 17" tour, starting Aug. 8 in Washington, D.C.
Summer of 17 will be available at digital retailers tomorrow. Stream it below.
The Pinto's spinetail (Synallaxis infuscata) is a species of bird (Aves) in the order Passeriformes and the ovenbird family (Furnariidae).
Taxonomy
The species was described by the Brazilian ornithologist Olivério Mário de Oliveira Pinto in 1950.
Distribution
It is found in South America along the Atlantic coast, in the state of Pernambuco in northeastern Brazil. Its natural habitats are subtropical or tropical lowland rainforest and shrubland. It is a resident, non-migratory species.
Description
Its body length is 18 centimetres and it weighs 16-20 grams.
Behaviour
It feeds on arthropods.
Conservation status
Its range is very small and its population, estimated at 250-999 individuals, is declining. It is listed as an Endangered species on the IUCN Red List.
Notes
Sources
External links
Images of the species on the internet
Synallaxis
Bird species
Endemic birds of Brazil
Q: At what point is the gradient equal to (0,-1)? If I drew a level curve of a given function, how would one "loosely" determine at what point/area on that level curve the gradient vector is $(0,-1)$? What's the general idea here?
The level curve I am currently looking at is shaped like the number 8, but tilted a bit to the right, lying in the 1st and 4th quadrants of the xy-plane. It tells me that the gradient vector is $(0,-1)$ at the top of the 8... why?
The answer is D in the following:
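For intuition, here is a small finite-difference check on a made-up function of my own, $f(x,y) = 1 - y - x^2$ (not the one from the exercise): its level curves are downward-opening parabolas, so at the top of such a curve the tangent is horizontal, the gradient must be vertical, and it points downward because $f$ increases as $y$ decreases.

```python
def grad(f, x, y, h=1e-6):
    # central finite differences for the gradient of f at (x, y)
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

# made-up example: level curves f = c are the downward parabolas y = 1 - c - x**2
f = lambda x, y: 1 - y - x ** 2

# (0, 1) is the top of the level curve f = 0: horizontal tangent there
gx, gy = grad(f, 0.0, 1.0)
print(round(gx, 6), round(gy, 6))  # -> 0.0 -1.0
```

The same reasoning applied to the tilted "8": wherever the curve has a horizontal tangent, the gradient is vertical, and its sign is fixed by the direction in which the function increases.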
from django.conf.urls.defaults import *
from tribes.models import Tribe
from wiki import models as wiki_models
wiki_args = {'group_slug_field': 'slug',
'group_qs': Tribe.objects.filter(deleted=False)}
urlpatterns = \
patterns('',
url(r'^create/$', 'tribes.views.create', name="tribe_create"),
url(r'^your_tribes/$', 'tribes.views.your_tribes', name="your_tribes"),
url(r'^$', 'tribes.views.tribes', name="tribe_list"),
url(r'^order/topics/least-topics/$', 'tribes.views.tribes',
{'order': 'least_topics'}, name="tribe_list_least_topics"),
url(r'^order/topics/most-topics/$', 'tribes.views.tribes',
{'order': 'most_topics'}, name="tribe_list_most_topics"),
url(r'^order/members/least-members/$', 'tribes.views.tribes',
{'order': 'least_members'}, name="tribe_list_least_members"),
url(r'^order/members/most-members/$', 'tribes.views.tribes',
{'order': 'most_members'}, name="tribe_list_most_members"),
url(r'^order/name/ascending/$', 'tribes.views.tribes',
{'order': 'name_ascending'}, name="tribe_list_name_ascending"),
url(r'^order/name/descending/$', 'tribes.views.tribes',
{'order': 'name_descending'}, name="tribe_list_name_descending"),
url(r'^order/date/oldest/$', 'tribes.views.tribes',
{'order': 'date_oldest'}, name="tribe_list_date_oldest"),
url(r'^order/date/newest/$', 'tribes.views.tribes',
{'order': 'date_newest'}, name="tribe_list_date_newest"),
# tribe-specific
url(r'^tribe/([-\w]+)/$', 'tribes.views.tribe', name="tribe_detail"),
url(r'^tribe/([-\w]+)/delete/$', 'tribes.views.delete', name="tribe_delete"),
# topics
url(r'^tribe/([-\w]+)/topics/$', 'tribes.views.topics', name="tribe_topics"),
url(r'^topic/(\d+)/edit/$', 'tribes.views.topic', kwargs={"edit": True}, name="tribe_topic_edit"),
url(r'^topic/(\d+)/delete/$', 'tribes.views.topic_delete', name="tribe_topic_delete"),
url(r'^topic/(\d+)/$', 'tribes.views.topic', name="tribe_topic"),
# wiki
url(r'^tribe/(?P<group_slug>\w+)/wiki/', include('wiki.urls'), kwargs=wiki_args),
)
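One subtlety worth noting (this check is illustrative and not part of the original project): the tribe patterns accept hyphenated slugs via `[-\w]+`, but the wiki include's `group_slug` group uses `\w+`, which rejects hyphens, so a tribe whose slug contains a hyphen can never match the wiki prefix.

```python
import re

# the two slug groups from the URLconf above
tribe_detail = re.compile(r'^tribe/([-\w]+)/$')
wiki_prefix = re.compile(r'^tribe/(?P<group_slug>\w+)/wiki/')

# [-\w]+ happily captures a hyphenated slug
assert tribe_detail.match('tribe/my-tribe/').group(1) == 'my-tribe'

# \w+ does not allow hyphens, so a hyphenated slug never reaches the wiki app
assert wiki_prefix.match('tribe/my-tribe/wiki/') is None
assert wiki_prefix.match('tribe/mytribe/wiki/').group('group_slug') == 'mytribe'
```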
\section{Introduction}
Let $u,v$ be real harmonic functions in a simply connected domain
$\Omega$; then the continuous function $f=u+iv$ defined in $\Omega$
is said to be harmonic in $\Omega$. If $f=u+iv$ is harmonic in
$\Omega$ then there exist analytic functions $G,H$ such that $u=Re~
G$ and $v=Im~ H$; therefore $f=u+iv=h+\overline g$, where
$h=\frac{G+H}{2},~ \overline g=\frac{\overline G-\overline H}{2},$
and we call $h$ and $g$ the analytic part and the co-analytic part of $f$,
respectively. The Jacobian of $f$ is given by
$J_f(z)=|h'(z)|^2-|g'(z)|^2$; we also denote by $w(z)$ the dilatation
function of $f$, defined by $w(z)=\frac {g'(z)}{h'(z)}.$ Lewy [6] and
Clunie and Sheil-Small [3] showed that the mapping $z\longrightarrow
f(z)$ is sense preserving and injective in $\Omega$ if and only if
$J_f(z)>0$ in $\Omega$. The function $f=h+\overline g$ is said to be
univalent in $\Omega$ if the mapping $z\longrightarrow f(z)$ is
sense preserving and injective in $\Omega$. Denote by $\mathcal{H}$
the class of all harmonic functions $f=h+\overline g$ that are
univalent and sense preserving in the open unit disk $\mathcal{D}$,
where
\begin{equation}
h(z)=z+\sum_{n=2}^\infty a_nz^n,~ g(z)=\sum_{n=1}^\infty b_n z^n~~
|b_1|<1.
\end{equation}
with the normalization conditions $f(0)=0,~ f_z(0)=1$, where $f_z(0)$
denotes the partial derivative of $f(z)$ with respect to $z$ at $z=0.$
In the case $g=0$ this class reduces to the class $\mathcal{S}$
consisting of all analytic univalent functions.
\begin{definition} \label{th2.2}(See [7] and [9]) Let the function
$f(z)$ be analytic in a simply-connected region of the $z$-plane
containing the origin. The fractional derivative of $f$ of order
$\nu$ is defined by
\[D_z^\nu f(z)=\frac{1}{\Gamma(1-\nu)}\frac{d}{dz}\int_0^z\frac{f(\zeta)}{(z-\zeta)^\nu}d\zeta, ~~0\leq\nu<1\]
where the multiplicity of $(z-\zeta)^\nu$ is removed by requiring
$\log (z-\zeta)$ to be real when $z-\zeta>0 .$
\end{definition}
Making use of the fractional derivative and its known extensions
involving fractional derivatives and fractional integrals, Owa and
Srivastava [8] introduced the operator
$\Omega_z^\nu:\mathcal{A}_0\longrightarrow \mathcal{A}_0$ defined by
\[ \Omega_z^\nu f(z):=\Gamma(2-\nu)z^\nu D_z^\nu f(z) ~~ \nu\neq 2,3,4,...\]
where $\mathcal{A}_0$ denote the class of functions which are
analytic in the unit disk $\mathcal{D}$, satisfying normalization
conditions $f(0)=f'(0)-1=0.$
It is easy to see that
\[ \Omega_z^\nu f(z)=z+\sum_{n=2}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)}{\Gamma(n+1-\nu)}a_nz^n.~~ f\in \mathcal{A}_0\]
\begin{definition} \label{th2.2} Suppose that $f=h+\overline g$ where
$h$ and $g$ are in (1.1), define $\Omega_z^\nu f(z)=\Omega_z^\nu
h(z)+\overline {\Omega_z^\nu g(z)}.$
\end{definition}
Then we obtain
\[\Omega_z^\nu f(z)=z+\sum_{n=2}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)}{\Gamma(n+1-\nu)}a_nz^n+ \sum_{n=1}^\infty \frac{\Gamma(2-\nu)\Gamma(n+1)}
{\Gamma(n+1-\nu)}b_n{\overline z}^n.\]
By making use of Definition 1.2, we introduce a new class of
harmonic univalent functions in the unit disk $\mathcal{D}$ as in
Definition 1.3.
\begin{definition} \label{th2.2} Let $\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)~ (0\leq
k\leq 1,~0<\beta\leq 1,~0\leq\lambda,~0\leq\nu <1)$ be the class of
functions $f\in \mathcal{H}$ satisfying the following inequality:
\[ Re~ \left\{ (1 - \lambda) \frac{\Omega^vf}{z} + \lambda(1-k) \frac{(\Omega^vf)'}{z'} + \lambda k
\frac{(\Omega^vf)''}{z''} \right\} > \beta .~ (z=re^{i\theta})\]
where
\[z'=\frac{\partial}{\partial\theta}\left(re^{i\theta}\right),~~ z''=\frac{\partial}{\partial\theta}( z')
,\] and
\[(\Omega^\nu f(z))'=
\frac{\partial}{\partial\theta}\left(\Omega^\nu
f(z)\right)=iz(\Omega^\nu h(z))'-i\overline {z(\Omega^\nu g(z))'},\]
\[(\Omega^\nu f(z))''=\frac{\partial}{\partial\theta}(\Omega^\nu
f(z))'=-z(\Omega^\nu h(z))'-z^2(\Omega^\nu h(z))''-\overline
{z(\Omega^\nu g(z))'}-\overline {z^2(\Omega^\nu g(z))''},\] also we
denote by $\overline{\mathcal{H}\mathcal{M}}(\beta, \lambda, k,
\nu)$ the subclass of $\mathcal{H}\mathcal{M}(\beta, \lambda, k,
\nu)$ consisting of functions $f=h+\overline g$ such that
\begin{equation}
h(z)=z-\sum_{n=2}^\infty |a_n|z^n,~~ g(z)=\sum_{n=1}^\infty
|b_n|z^n,~~|b_1|<1.
\end{equation}
\end{definition}
In [9] H. M. Srivastava and S. Owa investigated this class for
$p$-valent harmonic functions, with $D^\nu f(z)$ instead of
$\Omega^\nu f(z)$, where $D^\nu f(z)$ is the Ruscheweyh derivative of $f$.
In special cases this class includes the classes studied by
previous authors such as Bhoosnurmath and Swamy [2] and Ahuja and
Jahangiri [1,5].
In this paper the coefficient inequalities for the classes
$\mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu)$ and $\overline
{\mathcal{H}\mathcal{M}}(\beta, \lambda, k, \nu)$ are obtained, and
some other interesting properties of these classes are investigated.
\section{Coefficient Bounds}
In the first theorem we give a sufficient condition for
$f\in\mathcal{H}$ to be in the class $\mathcal{H}\mathcal{M}(\beta,
\lambda, k, \nu).$
\begin{theorem} \label{th2.2} Let $f\in\mathcal{H},$ and
\[\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n |<1-\beta,\]where
\begin{equation}
\phi(n,k,\lambda,\nu):=\frac{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)},
\end{equation}
and
\begin{equation}
\psi(n,k,\lambda,\nu):=\frac{[1-\lambda(n+1)(1-nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)},
\end{equation}
then $f\in \mathcal{H}\mathcal{M}(\beta, \lambda, k, \nu).$ The
result is sharp for the function $f(z)$ given by
\begin{eqnarray}
f(z)&&=z+\sum_{n=2}^\infty\frac{\gamma_n\Gamma(n+1-\nu)z^n}{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}\nonumber\\
&&+\sum_{n=1}^\infty\frac{\delta_n\Gamma(n+1-\nu)}{|1-\lambda(n+1)(1-nk)|\Gamma(n+1)\Gamma(2-\nu)}\overline
z^n\nonumber
\end{eqnarray}
where $\sum_{n=2}^\infty |\gamma_n|+\sum_{n=1}^\infty
|\delta_n|=1-\beta.$
\end{theorem}
\begin{proof} Suppose
\[E(z)=(1-\lambda)\frac{\Omega^\nu f(z)}{z}+\lambda(1-k)\frac{(\Omega^\nu f(z))'}{z'}+\lambda k\frac{(\Omega^\nu f(z))''}{z''}.\]
It suffices to show that $|1-\beta+E(z)|\geq |1+\beta-E(z)|.$ A
simple calculation by substituting for $h$ and $g$ in $E(z)$ shows
\begin{eqnarray}
E(z)&&=1+\sum_{n=2}^\infty\frac{[1+\lambda(n-1)(1+nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}a_nz^{n-1}\nonumber\\&&+
\sum_{n=1}^\infty\frac{[1-\lambda(n+1)(1-nk)]\Gamma(n+1)\Gamma(2-\nu)}{\Gamma(n+1-\nu)}b_n\frac{\overline
z^n}{z},\nonumber
\end{eqnarray}
Considering (2.1) and (2.2) we have
\[
\phi(n,k,\lambda,\nu)=n(n-1)[1+\lambda(n-1)(1+nk)]B(n-1,2-\nu),
\] and
\[
\psi(n,k,\lambda,\nu)=n(n-1)[1-\lambda(n+1)(1-nk)]B(n-1,2-\nu),
\]
where $B(\alpha,\beta)=\int_0^1
t^{\alpha-1}(1-t)^{\beta-1}dt=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$
is the familiar Beta function. Then we obtain
\[E(z)=1+\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}+\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline
z^n}{z}.\] Now we have
\begin{eqnarray}
&&|1-\beta+E(z)|-|1+\beta-E(z)|\nonumber\\
&&=|2-\beta+\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}+\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline
z^n}{z}|\nonumber\\&&-|\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}-\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline
z^n}{z}|\nonumber\\
&&\geq
2-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\
&&-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\
&&=2-2\beta-2\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n||z|^{n-1}-2\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n||z|^{n-1}\nonumber\\
&&>2-2\beta-2\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|-2\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)||b_n|\nonumber\\
&&\geq 0,\nonumber
and the proof is complete.
\end{proof}
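As an informal aside (not part of the paper), the Gamma-Beta identity invoked for $\phi$ and $\psi$ in the proof above is easy to confirm numerically:

```python
from math import gamma

def beta(a, b):
    # Euler Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return gamma(a) * gamma(b) / gamma(a + b)

def phi(n, k, lam, nu):
    return (1 + lam * (n - 1) * (1 + n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

def psi(n, k, lam, nu):
    return (1 - lam * (n + 1) * (1 - n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

k, lam, nu = 0.5, 1.3, 0.25
for n in range(2, 9):
    # phi(n,k,lam,nu) = n(n-1)[1 + lam(n-1)(1+nk)] B(n-1, 2-nu)
    lhs = phi(n, k, lam, nu)
    rhs = n * (n - 1) * (1 + lam * (n - 1) * (1 + n * k)) * beta(n - 1, 2 - nu)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
    # the same Gamma quotient appears in psi
    lhs = psi(n, k, lam, nu)
    rhs = n * (n - 1) * (1 - lam * (n + 1) * (1 - n * k)) * beta(n - 1, 2 - nu)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
print("identity verified for n = 2..8")
```

The same quotient of Gamma functions also shows why $\Omega_z^0$ is the identity: at $\nu = 0$ both $\phi$ and $\psi$ collapse to the bracketed factors.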
In our next theorem we obtain the necessary and sufficient
coefficient condition for $f\in\mathcal{H}$ to be in
$\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$
\begin{theorem} \label{th2.2} Let $f\in\mathcal{H}$ then
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ if and
only if
\begin{equation}
\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_n|<1-\beta.
\end{equation}
\end{theorem}
\begin{proof} Since
$\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)\subset\mathcal{H}\mathcal{M}(\beta,\lambda,k,\nu),$
the ``if'' part of the theorem follows from Theorem 2.1. For the ``only
if'' part we show that if the condition (2.3) does not hold then
$f\notin\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$ Let
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$; then we
have
\begin{eqnarray}
0&&\leq Re~ \left\{(1-\lambda)\frac{\Omega^\nu
f(z)}{z}+\lambda(1-k)\frac{(\Omega^\nu f(z))'}{z'}+\lambda
k\frac{(\Omega^\nu f(z))''}{z''}-\beta\right\}\nonumber\\
&&=Re~
\left\{1-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)a_nz^{n-1}-\sum_{n=1}^\infty\psi(n,k,\lambda,\nu)b_n\frac{\overline
z^n}{z}\right\}.\nonumber
\end{eqnarray}
This inequality holds for all values of $z$ for which $|z|=r<1$, so
we can choose the values of $z$ on the positive real axis such that
$0\leq z=r<1$; therefore we get the following inequality
\[
0\leq 1-\beta-\sum_{n=2}^\infty
\phi(n,k,\lambda,\nu)|a_n|r^{n-1}-\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_n|r^{n-1}.\] Now by letting
$r\longrightarrow 1^-$ we have
\begin{equation}
0\leq
1-\beta-\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|-\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_n|.
\end{equation}
If the condition (2.3) does not hold then the right-hand side of (2.4) is
negative for $r$ sufficiently close to $1.$ Thus there exists a
$z_0=r_0\in (0,1)$ for which the right-hand side of (2.4) is negative.
This contradicts the required condition for
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$, and so the
proof is complete.
\end{proof}
Putting $\lambda=0$ in Theorem 2.2 we get:
\begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,0,k,\nu)=\left\{f:~ Re~ \left(\frac{\Omega^\nu
f(z)}{z}\right)>\beta\right\}$ if and only if
\[
\sum_{n=2}^\infty n(n-1)B(n-1,2-\nu)|a_n|+\sum_{n=1}^\infty
n(n-1)B(n-1,2-\nu)|b_n|< 1-\beta.
\]
\end{corollary}
Putting $\lambda=1$ in Theorem 2.2 we have:
\begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,1,k,\nu)=\left\{f:~ Re~ \left((1-k)\frac{(\Omega^\nu f(z))'}{z'}
+k\frac{(\Omega^\nu f(z))''}{z''}\right)>\beta\right\}$ if and only
if \[\sum_{n=2}^\infty
n^2(n-1)(1-k+nk)B(n-1,2-\nu)|a_n|+\sum_{n=1}^\infty
n^2(n-1)|nk+k-1|B(n-1,2-\nu)|b_n|<1-\beta.\]
\end{corollary}
Putting $k=1$ in Theorem 2.2 we have:
\begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,1,\nu)=\left\{f:~ Re~ \left((1-\lambda)\frac{\Omega^\nu f(z)}{z}
+\lambda\frac{(\Omega^\nu f(z))''}{z''}\right)>\beta\right\}$ if and
only if \[\sum_{n=2}^\infty
n(n-1)[1+\lambda(n^2-1)]B(n-1,2-\nu)(|a_n|+|b_n|)<1-\beta.\]
\end{corollary}
Finally putting $k=0$ in Theorem 2.2 we obtain:
\begin{corollary} \label{th2.2} $f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,0,\nu)=\left\{f:~ Re~ \left((1-\lambda)\frac{\Omega^\nu f(z)}{z}
+\lambda\frac{(\Omega^\nu f(z))'}{z'}\right)>\beta\right\}$ if and
only if \[\sum_{n=2}^\infty
n(n-1)[1+\lambda(n-1)]B(n-1,2-\nu)|a_n|+\sum_{n=1}^\infty
n(n-1)|1-\lambda(n+1)|B(n-1,2-\nu)|b_n|<1-\beta.\]
\end{corollary}
\begin{theorem} \label{th2.2}
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ if and
only if
\begin{equation}
f(z)=t_1z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty s_ng_n(z) ~~
(z\in\mathcal{D}),
\end{equation}
where $t_i\geq 0,~ s_i\geq 0,~ t_1+\sum_{n=2}^\infty
t_n+\sum_{n=1}^\infty s_n=1$ and
\[f_n(z)=z-\frac{1-\beta}{\phi(n,k,\lambda,\nu)}z^n,\]
\[g_n(z)=z+\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}\overline z^n.\]
\end{theorem}
\begin{proof} Let $f$ be of the form (2.5) then we have
\begin{eqnarray}
f(z)&&=t_1z+\sum_{n=2}^\infty
t_n\left(z-\frac{1-\beta}{\phi(n,k,\lambda,\nu)}z^n\right)+\sum_{n=1}^\infty
s_n \left(z+\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}\overline
z^n\right)\nonumber\\
&&=z-\sum_{n=2}^\infty
\frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_nz^n+\sum_{n=1}^\infty
\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\overline z^n.\nonumber
\end{eqnarray}
Therefore we have
\begin{eqnarray}
&&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)
\frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_n+\sum_{n=1}^\infty|\psi(n,k,\lambda,\nu)|
\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\nonumber\\
&&=(1-\beta)\left[\sum_{n=2}^\infty t_n+\sum_{n=1}^\infty
s_n\right]=(1-\beta)(1-t_1)\nonumber\\
&&<1-\beta.\nonumber
\end{eqnarray}
This shows that
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$
Conversely, suppose that
$f\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$. Letting
\[t_1=1-\sum_{n=2}^\infty t_n-\sum_{n=1}^\infty s_n,\]
where
\[t_n=\frac{\phi(n,k,\lambda,\nu)}{1-\beta}|a_n|,~
s_n=\frac{|\psi(n,k,\lambda,\nu)|}{1-\beta}|b_n|,\] we obtain
\begin{eqnarray}
f(z)&&=z-\sum_{n=2}^\infty |a_n|z^n+\sum_{n=1}^\infty |b_n|\overline
z^n\nonumber\\
&&=z-\sum_{n=2}^\infty
\frac{1-\beta}{\phi(n,k,\lambda,\nu)}t_nz^n+\sum_{n=1}^\infty
\frac{1-\beta}{|\psi(n,k,\lambda,\nu)|}s_n\overline z^n.\nonumber\\
&&=z-\sum_{n=2}^\infty(z-f_n(z))t_n+\sum_{n=1}^\infty(g_n(z)-z)s_n\nonumber\\
&&=\left(1-\sum_{n=2}^\infty t_n-\sum_{n=1}^\infty
s_n\right)z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty
s_ng_n(z)\nonumber\\
&&=t_1z+\sum_{n=2}^\infty t_nf_n(z)+\sum_{n=1}^\infty
s_ng_n(z).\nonumber
\end{eqnarray}
This completes the proof.
\end{proof}
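As a quick numerical illustration of the proof above (an informal check of ours, not from the paper), one can build a finite convex combination of the extreme functions $f_n$, $g_n$ and verify that the coefficient sum of condition (2.3) telescopes to $(1-\beta)(1-t_1) < 1-\beta$:

```python
from math import gamma

def phi(n, k, lam, nu):
    return (1 + lam * (n - 1) * (1 + n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

def psi_abs(n, k, lam, nu):
    return abs(1 - lam * (n + 1) * (1 - n * k)) * gamma(n + 1) * gamma(2 - nu) / gamma(n + 1 - nu)

beta_, k, lam, nu = 0.4, 0.5, 1.3, 0.25
t = {2: 0.3, 3: 0.2}    # weights on the extreme functions f_n
s = {1: 0.1, 2: 0.15}   # weights on the extreme functions g_n
t1 = 1 - sum(t.values()) - sum(s.values())

# coefficients of the combination f = t1*z + sum t_n f_n + sum s_n g_n
a = {n: (1 - beta_) / phi(n, k, lam, nu) * t[n] for n in t}
b = {n: (1 - beta_) / psi_abs(n, k, lam, nu) * s[n] for n in s}

total = sum(phi(n, k, lam, nu) * a[n] for n in a) + \
        sum(psi_abs(n, k, lam, nu) * b[n] for n in b)
# the sum telescopes to (1 - beta)(1 - t1), strictly below 1 - beta
print(round(total, 10), round((1 - beta_) * (1 - t1), 10))
```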
\section{Convolution and Convex combinations}
In the present section we investigate the convolution properties of
the class $\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$
The convolution of two harmonic functions $f_1$ and $f_2$ given by
\begin{equation}
f_1(z)=z-\sum_{n=2}^\infty |a_n|z^n+\sum_{n=1}^\infty |b_n|\overline
z^n,\\
f_2(z)=z-\sum_{n=2}^\infty |c_n|z^n+\sum_{n=1}^\infty |d_n|\overline
z^n,
\end{equation}
is defined by
\begin{equation}
(f_1*f_2)(z)=z-\sum_{n=2}^\infty |a_nc_n|z^n+\sum_{n=1}^\infty
|b_nd_n|\overline z^n.
\end{equation}
\begin{theorem} \label{th2.2} For $0\leq\beta<\alpha<1 $ let $f_1,~
f_2$ be of the form (3.1) such that for every $n,~ |c_n|<1,~
|d_n|<1.$ If $f_1,~
f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)$ then
\[f_1*f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)\subset\mathcal{H}\mathcal{M}(\beta,\lambda,k,\nu).\]
\end{theorem}
\begin{proof} Considering (3.2) we have
\begin{eqnarray}
&&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_nc_n|+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_nd_n|\nonumber\\
&&<\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_n|+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_n|\nonumber\\
&&<1-\alpha,
\end{eqnarray}
so, by Theorem 2.2, $f_1*f_2\in\overline{\mathcal{H}\mathcal{M}}(\alpha,\lambda,k,\nu)$
and the proof is complete.
\end{proof}
In the last theorem we examine the convex combination properties of
the elements of
$\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu).$
\begin{theorem} \label{th2.2} The class
$\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$ is closed
under convex combination.
\end{theorem}
\begin{proof} Suppose that
\[f_i(z)=z-\sum_{n=2}^\infty |a_{n,i}|z^n+\sum_{n=1}^\infty
|b_{n,i}|\overline z^n,~ i=1,2,...\] then the convex combinations of
$f_i$ may be written as
\[\sum_{i=1}^\infty t_if_i(z)=z-\sum_{n=2}^\infty \left(\sum_{i=1}^\infty
t_i|a_{n,i}|\right)z^n+\sum_{n=1}^\infty \left(\sum_{i=1}^\infty
t_i|b_{n,i}|\right)\overline z^n,\] where $\sum_{i=1}^\infty t_i=1,~
0\leq t_i\leq 1.$ Since
\[\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_{n,i}|+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_{n,i}|<1-\beta,\] we have
\begin{eqnarray}
&&\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)\left(\sum_{i=1}^\infty
t_i|a_{n,i}|\right)+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)|\left(\sum_{i=1}^\infty t_i|b_{n,i}|\right)\nonumber\\
&&=\sum_{i=1}^\infty
t_i\left\{\sum_{n=2}^\infty\phi(n,k,\lambda,\nu)|a_{n,i}|+\sum_{n=1}^\infty
|\psi(n,k,\lambda,\nu)||b_{n,i}|\right\}\nonumber\\
&&<(1-\beta)\sum_{i=1}^\infty t_i=1-\beta.\nonumber
\end{eqnarray}
This shows that $\sum_{i=1}^\infty
t_if_i(z)\in\overline{\mathcal{H}\mathcal{M}}(\beta,\lambda,k,\nu)$
and the proof is complete.
\end{proof}
SBICs Provide Financial Support and Stability
In 2001, we became a Business Development Company (BDC) and in 2002, we applied for, and received, a Small Business Investment Company (SBIC) license through the U.S. Small Business Administration (SBA). Under this program, Rand Capital SBIC, Inc. was formed as a subsidiary of Rand Capital Corporation.
How do SBICs Work?
The Small Business Administration (SBA) licenses Small Business Investment Companies (SBICs) as part of a program designed to stimulate the flow of private debt and/or equity capital to small businesses. SBICs use funds borrowed from the SBA, together with their own capital, to provide loans to, and make equity investments in, concerns that have a net worth of less than $19.5 million and average net income of less than $6.5 million. The SBIC structure provides access to long-term, low-interest, fixed rate loans from the SBA that are not callable and are pre-payable without penalty. The structure also facilitates partnering with other investment firms, which increases deal sourcing avenues and adds stability.
Piri is a term with several meanings.
Given name and surname
Pirie, Gordon (1931–1991) – English track-and-field athlete, long-distance runner.
Diebitsch Peary, Josephine (1863–1955) – American Arctic explorer and writer, wife of Robert Peary.
Pierie, Kik (born 2000) – Dutch footballer.
Peary, Robert Edwin (1856–1920) – American Arctic explorer.
Pirie, Charles (1897–1960) – Scottish chess player.
Piri Reis (c. 1465 (1470) – 1554 or 1555) – Ottoman (Turkish) navigator, admiral, and cartographer.
Other
Peary – a large ancient crater, the closest large crater to the Moon's north pole (the pole lies on its rim).
Piri – a Korean wind instrument related to the guan.
See also
Port Pirie – a city in Australia.
\section{Introduction}
\label{intro}
\let\thefootnote\relax\footnotetext{\vspace{-6pt}\textit{\\Originally published at the Computer Vision and Pattern Recognition Conference (CVPR), 2022.}}
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=0.47\columnwidth]{images/feature_reuse/feature_reuse-deit_small.pdf} &
\includegraphics[width=0.47\columnwidth]{images/feature_reuse/feature_reuse-swin_tiny.pdf} &
\multirow{2}{*}[3cm]{\includegraphics[width=0.1 \columnwidth ]{images/feature_reuse/colorbar.pdf}}
\\[-1.5mm]
\includegraphics[width=0.47\columnwidth]{images/feature_reuse/feature_reuse-inception_v3.pdf} &
\includegraphics[width=0.47\columnwidth]{images/feature_reuse/feature_reuse-resnet50.pdf} &
\\[-1.5mm]
\end{tabular}
\\[2.5mm]
\begin{tabular}{@{}c@{\hspace{0.5mm}}c@{\hspace{0.5mm}}c@{\hspace{0.5mm}}c@{\hspace{0.5mm}}c@{}}
\includegraphics[width=0.1933\columnwidth]{images/image_examples/aptos_example.png} &
\includegraphics[width=0.1933\columnwidth]{images/image_examples/ddsm_example.png} &
\includegraphics[width=0.1933\columnwidth]{images/image_examples/isic_example.png} &
\includegraphics[width=0.1933\columnwidth]{images/image_examples/chexpert_example.png} &
\includegraphics[width=0.1933\columnwidth]{images/image_examples/camelyon_example.png}\\[-1.5mm]
{\scriptsize APTOS 2019} & {\scriptsize CBIS-DDSM} & {\scriptsize ISIC 2019} & {\scriptsize CheXpert} & {\scriptsize PatchCamelyon} \\
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\emph{Factors affecting the utility of transfer learning from ImageNet to medical domains.}
The size of each dot represents relative increase in performance ($\frac{WT}{RI}$) achieved transferring weights from \textsc{ImageNet}\xspace (WT) compared to random initialization (RI).
The color of the dot indicates how much of the gain can be attributed to feature reuse (relative gains $\frac{WT-ST}{WT}$ from Table \ref{tab:wt_vs_st}, normalized between the minimum and the maximum value for all settings, see Section \ref{methods} for details).
Each panel shows the gains observed by a different model over five runs, in order of increasing inductive biases: {\textsc{DeiT-S}\xspace}, {\textsc{SWIN}\xspace}, {\textsc{Inception}\xspace} and {\textsc{ResNet50}\xspace}.
The benefits from transfer learning increase with (1) reduced data size, (2) smaller distances between the source and target domain, and (3) less inductive bias.
Moreover, feature reuse correlates strongly with observed gains from transfer learning, suggesting that feature reuse plays an essential role -- especially for ViTs which lack the inductive biases of CNNs.
(*) indicates cases where feature reuse is less important, uncovered in \cite{transfusion,neyshabur2020being}.}
\label{fig:feature_reuse}
\vspace{-12mm}
\end{figure}
The goal of transfer learning is to reuse knowledge gained in one domain, the \emph{source} domain, to improve performance in another, the \emph{target} domain.
Transfer learning is often used when data from the target domain is limited.
Such is the case for medical imaging, where the expense of acquisition,
the rarity of certain diseases, and legal and ethical constraints limit dataset size.
The lack of large public datasets
has led to the widespread adoption of transfer learning from \textsc{ImageNet}\xspace \cite{imagenet_cvpr09} to improve performance on medical tasks \cite{tajbakhsh2016convolutional,morid2020scoping,matsoukas2021time}.
Despite its pervasive use, we do not yet fully understand what makes transfer learning from the natural to the medical domain work.
In this paper, we endeavor to paint a more complete picture of which factors enable a successful transfer.
Through a series of comprehensive experiments, we study the effectiveness of transfer learning as a function of dataset size, the distance from the source domain, the model's capacity, and the model's inductive bias.
Our findings, summarized in Figure \ref{fig:feature_reuse}, show that \emph{the benefits from transfer learning increase} with:
\begin{itemize}
\vspace{-1.0mm}
\item reduced data size
\vspace{-1.5mm}
\item smaller distance between the source and target
\vspace{-1.5mm}
\item models with fewer inductive biases
\vspace{-1.5mm}
\item models with more capacity, to a lesser extent.
\vspace{-1.0mm}
\end{itemize}
We also find a strong correlation between the observed benefits from transfer learning and evidence for \emph{feature reuse}.
Much of our understanding about how transfer learning works was, until recently, based on the feature reuse hypothesis.
The feature reuse hypothesis assumes that weights learned in the source domain yield features that can readily be used in the target domain.
In practice, this means that weights learned on ImageNet provide useful features in the target domain, and do not change substantially during fine-tuning
despite differences between the domains \cite{bengio2012deep, bengio2013representation, girshick2014rich, raghu2019rapid}.
This hypothesis was recently challenged when
Raghu \etal demonstrated that gains observed transferring to a medical task could largely be attributed to weight scaling and low-level statistics \cite{transfusion}, which was later confirmed in \cite{neyshabur2020being}.
We aim to bring some clarity to the role of feature reuse in this work.
Because feature reuse is difficult to measure precisely, we examine it from multiple perspectives through a series of experiments.
We find that \emph{when transfer learning works well}:
(1) \emph{weight statistics cannot account for the majority of the gains}, and
(2) \emph{evidence for feature reuse is strongest.}
Our findings do not contradict those of \cite{transfusion, neyshabur2020being}, rather, we show that they uncovered an isolated case (* in Figure \ref{fig:feature_reuse})\footnote{A limitation of \cite{transfusion, neyshabur2020being} was that they only considered CNNs applied to \textsc{CheXpert}\xspace, one of the largest publicly available medical imaging datasets (and a similarly large private retinal image dataset in \cite{transfusion}).} where feature reuse is less important: a large dataset, distant from \textsc{ImageNet}\xspace.
In this scenario, transfer learning yields only marginal benefits which can largely be attributed to the weight statistics.
Our work paints a more complete picture, considering datasets with more variety in size and distance to the source domain, and concludes that feature reuse plays an important role in nearly all cases.
We add to this picture with the finding that vision transformers (ViTs), a rising class of models with fewer inductive biases \cite{dosovitskiy2020image, deit}, show a strong dependence on feature reuse in all the datasets we tested.
We select four families of CNNs and ViTs with progressively stronger inductive biases and find that models with less inductive bias rely more heavily on feature reuse.
Moreover, the \emph{pattern of feature reuse} changes in models with less inductive bias.
Specifically, feature reuse in ViTs is concentrated in early layers, whereas CNNs reuse features more consistently throughout the network.
We share the code to reproduce our experiments, available at
\href{https://github.com/ChrisMats/feature-reuse}{github.com/ChrisMats/feature-reuse}.
\section{Problem Formulation and Methodology}
\label{methods}
The aim of this work is to examine transfer learning from the natural to the medical image domain.
Our central question is:~\emph{what factors determine if transferred representations are effective in the medical domain?}
Under what conditions do they yield improved performance?
Is this affected by the size of the target dataset?
The similarity/dissimilarity to the source dataset?
What role does feature reuse play?
Which of the source features are reused?
And finally, what roles do the model's architecture and inductive biases play?
To investigate these questions, we conduct a series of experiments considering a variety of medical image datasets, initialization strategies, and architectures with different levels of inductive bias.
We also perform several ablation studies to characterize feature reuse at different depths throughout each network.
The details of our methodology are described below.
\vspace{-2mm}
\paragraph{Datasets.}
We select datasets that help us characterize how the efficacy of transfer learning varies with properties of the data.
For the source domain, we use \textsc{ImageNet}\xspace throughout this work.
For the target domain, we select a representative set of five standard medical image classification datasets.
They cover a variety of imaging modalities and tasks, ranging from a few thousand examples to the largest public medical imaging datasets.
\begin{itemize}
\vspace{-1mm}
\item \textbf{\textsc{APTOS2019}\xspace} $(N = 3,662)$ High-resolution diabetic retinopathy images where the task is classification into 5 categories of disease severity \cite{kaggle}.
\vspace{-1mm}
\item \textsc{\textbf{CBIS-DDSM}} $(N=10,239)$
A mammography dataset in which the task is to detect the presence of masses \cite{CBIS_DDSM_Citation,DDSM}.
\vspace{-1mm}
\item \textbf{\textsc{ISIC}\xspace 2019} $(N=25,331)$ Dermoscopic images -- the task is to classify among 9 different diagnostic categories of skin lesions \cite{tschandl2018ham10000,codella2018skin,combalia2019bcn20000}.
\vspace{-1mm}
\item \textbf{\textsc{CheXpert}\xspace} $(N=224,316)$ Chest X-rays with labels over 14 categories of diagnostic observations \cite{chexpert}.
\vspace{-1mm}
\item \textbf{\textsc{PatchCamelyon}\xspace} $(N=327,680)$ Patches of H\&E stained WSIs of lymph node sections. The task is to classify each patch as cancerous or normal \cite{bejnordi2017diagnostic, veeling2018rotation}.
\vspace{-1mm}
\end{itemize}
We compute the Fréchet Inception Distance (FID) \cite{fid} between \textsc{ImageNet}\xspace and the datasets listed above to measure similarity to the source domain (Figure \ref{fig:feature_reuse} and Table \ref{tab:wt_vs_st}).
Although it may not be a perfect measure \cite{lucic2017gans, borji2019pros}, it gives a reasonable indication of relative distances between datasets.
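For reference, the FID between two datasets reduces to a closed-form distance between Gaussian fits of their feature activations. The sketch below assumes the features (e.g.~Inception pool activations) have already been extracted into two arrays; the function name is ours, not taken from the paper's codebase:

```python
import numpy as np

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between two sets of feature vectors (N x D),
    e.g. Inception pool activations extracted from two datasets.
    FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
    Only the trace of the matrix square root is needed, and it equals
    the sum of square roots of the eigenvalues of C_a C_b."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # eigvals of C_a C_b are real and non-negative up to numerical noise
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)
```

Identical feature distributions give a distance near zero; a shift in the feature means increases it quadratically.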
\vspace{-2mm}
\paragraph{Architectures.}
To study the role of network architecture we selected two representative ViT models, \textsc{DeiT} \cite{deit} and \textsc{SWIN} \cite{liu2021swin}, and two representative CNN models, \textsc{ResNet}s \cite{resnet} and \textsc{Inception} \cite{szegedy2016rethinking}.
We selected these model types because they are widely studied and commonly used as backbones for other networks.
To ensure a fair comparison we select architectural variants that are similar in capacity for our main experiments.
Aside from their popularity, another reason we chose these models is to study the role of \emph{inductive bias} in transfer learning -- as each model has a unique set of inductive biases built in.
The models, in increasing order of inductive bias are: \textsc{DeiT}\xspace, \textsc{SWIN}\xspace, \textsc{Inception}\xspace, and \textsc{ResNet}\xspace.
We start with the model with the least inductive biases, the \textsc{DeiT} family.
Like the original ViT \cite{dosovitskiy2020image}, \textsc{DeiT}\xspace is similar in spirit to a pure transformer -- doing away with nearly all image-specific inductive biases, \eg locality, translational equivariance, and hierarchical scale.
According to Dosovitskiy \etal, this causes pure ViTs like \textsc{DeiT}\xspace\footnote{We use \textsc{DeiT}\xspace without the distillation token \cite{deit}.} to generalize poorly when trained on insufficient amounts of data \cite{dosovitskiy2020image}.
Recently, \textsc{SWIN}\xspace transformers were shown to outperform \textsc{DeiT}s\xspace and other ViTs on \textsc{ImageNet}\xspace by reintroducing many inductive biases of CNNs.
Combining self-attention with a hierarchical structure that operates locally at different scales, \textsc{SWIN}\xspace transformers have built locality, translational equivariance, and hierarchical scale into ViTs.
Moving to CNNs, we include \textsc{Inception}\xspace, an older CNN which features an inception block that processes the signal in parallel at multiple scales before propagating it to the next layer.
Finally, we selected the \textsc{ResNet}\xspace family, as it is the most common and highly cited CNN backbone, and recent works have shown that \textsc{ResNet}s are competitive with recent SOTA CNNs when modern training methods are applied \cite{bello2021revisiting}.
\vspace{-2mm}
\paragraph{Initialization methods.}
To understand the mechanism driving the success of transfer learning from \textsc{ImageNet}\xspace to the medical domain, we need to assess \emph{to what extent improvements from transfer learning can be attributed to feature reuse}.
Transfer learning is typically performed by taking an architecture, along with its \textsc{ImageNet}\xspace pretrained weights, and then fine-tuning it on the target task.
Two things are transferred via this process: the model architecture and its learned weights.
Raghu \etal showed that the actual values of the weights are not always necessary for good transfer learning performance \cite{transfusion}.
One can achieve similar performance by initializing the network using its \emph{weight statistics}.
In this setting, transfer amounts to providing a good range of values to randomly initialize the network -- \emph{eliminating feature reuse as a factor}.
To isolate the contribution of feature reuse vs.~weight statistics, we employ three initialization strategies:
\begin{itemize}
\vspace{-1mm}
\item \emph{Weight transfer (WT)} -- transferring \textsc{ImageNet}\xspace pre-trained weights,
\vspace{-1mm}
\item \emph{Stats transfer (ST)} -- sampling weights from a normal distribution whose mean and variance are taken layer-wise from an \textsc{ImageNet}\xspace pre-trained model,
\vspace{-1mm}
\item \emph{Random init.~(RI)} -- Kaiming initialization \cite{he2015delving}.
\vspace{-1mm}
\end{itemize}
Interrogating the differences between models initialized with these methods gives an indication as to what extent the transferred model reuses \textsc{ImageNet}\xspace features.
Furthermore, we can investigate \emph{where feature reuse is beneficial within the network} by transferring weights (WT) up to block $n$ and initializing the remaining $m$ blocks using ST.
We denote this setup WT-ST.
For example, a \textsc{ResNet50}\xspace with weight transfer up to \texttt{conv1} is written ResNet50-WT-ST-1/5\footnote{The number of blocks differs for each model; for CNNs $n=1$ corresponds to the first convolutional layer, for ViTs it refers to the patchifier.}.
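The two transfer-based strategies can be sketched over a plain dictionary of weight arrays. This is a schematic illustration rather than the paper's implementation; in particular, the layer ordering and the block granularity of the WT-ST split are assumptions made here:

```python
import numpy as np

def stats_transfer(pretrained: dict, rng=None) -> dict:
    """Stats transfer (ST): for every layer, sample fresh weights from a
    normal distribution whose mean and std match the pretrained layer.
    The weight *values* (and hence feature reuse) are discarded; only
    the layer-wise statistics survive."""
    rng = rng or np.random.default_rng(0)
    return {name: rng.normal(w.mean(), w.std(), size=w.shape)
            for name, w in pretrained.items()}

def wt_st(pretrained: dict, n_transfer: int, rng=None) -> dict:
    """WT-ST: copy pretrained weights (WT) for the first `n_transfer`
    entries and apply stats transfer to the rest (assumes the dict is
    ordered from input to output)."""
    rng = rng or np.random.default_rng(0)
    st = stats_transfer(pretrained, rng)
    out = {}
    for i, (name, w) in enumerate(pretrained.items()):
        out[name] = w.copy() if i < n_transfer else st[name]
    return out
```

With `n_transfer` equal to the total number of blocks this reduces to WT; with `n_transfer = 0` it reduces to ST.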
\vspace{-2mm}
\paragraph{Representational similarity.}
Looking more closely at feature reuse within the network, we ask the questions:
\emph{how are features organized before and after fine-tuning -- are they similar?
Can feature similarity reveal feature reuse, or lack thereof?}
To answer these questions, we use Centered Kernel Alignment (CKA) to compute similarity between features within and across networks \cite{kornblith2019similarity}.
CKA's properties of invariance to orthogonal transformations and isotropic scaling allow meaningful quantitative comparisons between representations of different size.
We compute CKA pairwise between every layer (in a single network or pair of networks) to provide a visual overview of network similarity.
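The linear variant of CKA has a compact closed form over two activation matrices evaluated on the same examples. The sketch below is one common formulation; the exact kernel and minibatch estimator used in practice may differ:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices X (N x D1) and Y (N x D2)
    computed over the same N examples. Invariant to orthogonal
    transformations and isotropic scaling; returns 1 for identical
    representational geometry."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic / (norm_x * norm_y))
```

Because of the invariances, a layer whose features are merely rotated versions of another's still scores 1, which is what makes CKA suitable for comparing layers of different width.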
\vspace{-2mm}
\paragraph{Resilience of the transfer.}
It is difficult to directly measure whether transferred features are reused after fine-tuning.
But, by investigating how ``sticky'' the transfer was -- how much the weights drifted from their initial transferred values during fine-tuning -- we can gain some insights.
We use two different strategies to quantify the ``stickiness'' of the transfer:
\emph{(1)} we compute the $\ell_2$\xspace distance between the initial weights and the weights after fine-tuning;
\emph{(2)} we measure the impact of resetting a layer's weights to their initial values, a property called \emph{re-initialization robustness} by Zhang \etal \cite{zhang2019all}.
Layers that undergo critical changes during fine-tuning (and thus exhibit low robustness) have either not re-used the transferred weights well or adapted strongly to the new domain.
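Both probes are simple to state over weight dictionaries. The sketch below is schematic: `model_eval` stands in for a full evaluation pass of the model under a given set of weights, which is an assumption of this illustration:

```python
import numpy as np

def weight_drift(init: dict, tuned: dict) -> dict:
    """Per-layer L2 distance between initial and fine-tuned weights.
    Small drift for transferred layers suggests their features were
    reused rather than relearned."""
    return {name: float(np.linalg.norm(tuned[name] - init[name]))
            for name in init}

def reinit_robustness(model_eval, tuned: dict, init: dict) -> dict:
    """Re-initialization robustness (Zhang et al.): reset one layer at a
    time to its initial weights and measure the performance drop.
    `model_eval` maps a weight dict to a scalar score (a stand-in for a
    full validation pass)."""
    base = model_eval(tuned)
    drops = {}
    for name in tuned:
        probe = dict(tuned)          # shallow copy, swap one layer
        probe[name] = init[name]
        drops[name] = base - model_eval(probe)
    return drops
```

A layer with near-zero drop is "robust": resetting it costs nothing, so the fine-tuned weights stayed close to (or interchangeable with) the transferred ones.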
\vspace{-2mm}
\paragraph{Analyzing transferred representations layer-wise.}
The next questions we wish to address are: \emph{Which parts of the network produce/reuse low-level vs.~high-level features?}
And
\textit{how do differences in representation between CNNs and ViTs impact transfer learning?}
The representational power and the effective receptive field of CNNs increase with depth.
ViTs, on the other hand, ``see'' differently \cite{raghu2021vision} -- they maintain more uniform representations throughout, and can utilize both local and global features at every layer.
To investigate these questions, we assess the representational power of the transferred features throughout the network.
After initialization with WT, ST, and WT-ST, we fine-tune on the target dataset and apply a $k$-NN evaluation protocol at the layers in question \cite{caron2021emerging}.
This compares the embedded representation of test samples to the $k = 200$
closest embeddings from the training set using cosine similarity.
Essentially, this test allows us to see when high-level features emerge within the network.
For CNNs, the embedding is obtained using global average pooling at the layer in question.
For ViTs we follow a similar procedure, but with special modifications to handle the \texttt{cls} token in \textsc{DeiT}s\xspace.
The \texttt{cls} token processes information differently than the spatial tokens, carrying much of the information necessary for classification \cite{dosovitskiy2020image, deit, raghu2021vision}.
Therefore we construct the embeddings in three different ways: \textit{(1)} using only the \texttt{cls} token's activations, \textit{(2)} using activations from the spatial tokens, \textit{(3)} concatenating \textit{(1)} and \textit{(2)}.
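The $k$-NN protocol itself is straightforward; below is a minimal sketch over pre-computed embeddings (the vote weighting and tie-breaking details of the original protocol may differ):

```python
import numpy as np

def knn_eval(train_emb, train_y, test_emb, test_y, k=200):
    """k-NN evaluation protocol: classify each test embedding by
    majority vote over the k most cosine-similar training embeddings.
    Applied to layer-wise embeddings (global-average-pooled for CNNs;
    cls and/or spatial tokens for ViTs)."""
    # L2-normalize so the dot product equals cosine similarity
    tr = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    te = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    sims = te @ tr.T                             # (n_test, n_train)
    nn_idx = np.argsort(-sims, axis=1)[:, :k]    # top-k neighbours
    preds = [np.bincount(train_y[idx]).argmax() for idx in nn_idx]
    return float(np.mean(np.array(preds) == test_y))
```

Because no classifier is trained, accuracy at a given layer directly reflects how linearly separable (in cosine distance) the classes already are in that layer's embedding.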
\begin{figure}[t]
\begin{center}
\vspace{-2mm}
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=0.333\columnwidth]{images/wst_all/wst_APTOS2019-all_models.pdf} &
\includegraphics[width=0.333\columnwidth]{images/wst_all/wst_DDSM-all_models.pdf} &
\includegraphics[width=0.333\columnwidth]{images/wst_all/wst_ISIC2019-all_models.pdf}\\[-1.5mm]
\includegraphics[width=0.333\columnwidth]{images/wst_all/wst_CheXpert-all_models.pdf} &
\includegraphics[width=0.333\columnwidth]{images/wst_all/wst_Camelyon-all_models.pdf} &
\includegraphics[width=0.333\columnwidth]{images/wst_all/relative_gain_global.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Which layers benefit from feature reuse?} We evaluate the impact of weight transfer when using WT-ST initialization
(WT fraction from 0 to 1, where 0 = ST and 1 = WT).
Lower performance on the left indicates that the network relies on transferred weights.
$\star$ = RI.
The last panel reports the average relative gains for each model type averaged over all datasets.
Details of WT-ST initialization can be found in Appendix \ref{appdx:wtst_details}.}
\label{fig:wst_all}
\vspace{-4mm}
\end{figure}
\vspace{-2mm}
\paragraph{Training procedure.}
Unless otherwise specified, we used the following training procedure for all experiments.
Each dataset was divided into 80/10/10 train/test/validation splits, with the exception of APTOS2019, which was divided 70/15/15 due to its small size.
Images were normalized and resized to $256\times256$, with the following augmentations applied: color jitter, random vertical and horizontal flips, and random $224\times224$ crops after rescaling.
\textsc{ImageNet}\xspace-pretrained weights were either available in PyTorch \cite{NEURIPS2019_9015} or downloaded from the official repositories, in the cases of {\textsc{DeiT}\xspace} and {\textsc{SWIN}\xspace}.
CNN and ViT models were trained with the Adam \cite{adam} and AdamW \cite{adamw} optimizers respectively, with a batch size of 64.
We performed independent grid searches to find suitable learning rates, and found that $10^{-4}$ works best for both CNNs and ViTs, except for RI which used $3\times10^{-4}$.
We used these as the base learning rates for the optimizers, along with the default 1,000 warm-up iterations.
During training, we reduce the learning rate by a factor of 10 when the validation performance saturates, until we reach a final learning rate of $10^{-6}$.
For transformer models, we used the default patch size of $16\times16$ for the {\textsc{DeiT}\xspace} models and $4\times4$ for {\textsc{SWIN}\xspace}.
For each run, we save the initial checkpoint and the checkpoint with highest validation performance.
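The learning-rate rule described above can be sketched as a single function. The linear shape of the warm-up and the externally supplied plateau counter are assumptions of this sketch (in practice a ReduceLROnPlateau-style monitor would provide the drop count):

```python
def lr_at(step: int, base_lr: float = 1e-4, warmup: int = 1000,
          plateau_drops: int = 0, min_lr: float = 1e-6) -> float:
    """Learning rate at a given iteration: warm-up over the first
    1,000 steps, then a 10x reduction each time validation performance
    saturates, floored at 1e-6."""
    if step < warmup:
        return base_lr * (step + 1) / warmup   # assumed linear warm-up
    return max(base_lr / (10.0 ** plateau_drops), min_lr)
```

For random initialization (RI), the same rule applies with `base_lr = 3e-4`.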
\section{Experiments}
\label{experiments}
In this section, we report our findings related to transfer learning and feature reuse.
Unless otherwise stated, each experiment is repeated 5 times.
We report the mean and standard deviation of the appropriate evaluation metric for each dataset: Quadratic Cohen Kappa for \textsc{APTOS2019}\xspace, Recall for \textsc{ISIC}\xspace, and ROC-AUC for \textsc{DDSM}\xspace, \textsc{CheXpert}\xspace, and \textsc{PatchCamelyon}\xspace.
\addtolength{\tabcolsep}{-5pt}
\begin{table}[t]
\tiny
\input{tables/ir_st_wt}
\caption{
\emph{Performance of the models w.r.t different initializations.}
}
\label{tab:wt_vs_st}
\vspace{-3mm}
\end{table}
\addtolength{\tabcolsep}{3pt}
\vspace{-2mm}
\paragraph{When is transfer learning to medical domains beneficial, and how important is feature reuse?}
To quantify the overall benefit of transfer learning and isolate the contribution of feature reuse, we compare weight transfer (WT), stats transfer (ST), and random initialization (RI).
We also measure the distance between the source domain (\textsc{ImageNet}\xspace) and target domains using Fréchet Inception Distance (FID) \cite{fid}.
The results are reported in Table \ref{tab:wt_vs_st} and Figure \ref{fig:feature_reuse}.
The overall trend we observe is the following: the benefits from transfer learning increase with (1) reduced data size, (2) smaller distances between the source and target domain, and (3) models with fewer inductive biases.
We first consider the case where transfer learning is least beneficial: models with strong inductive biases applied to large datasets that poorly resemble \textsc{ImageNet}\xspace.
Here, gains from transfer learning are insignificant, \eg for \textsc{ResNet50}\xspace and \textsc{Inception}\xspace applied to \textsc{CheXpert}\xspace and \textsc{PatchCamelyon}\xspace.
The small benefits we do observe can be largely attributed to the weight statistics (ST), not feature reuse (WT), confirming previous observations \cite{transfusion, neyshabur2020being}.
However, these findings do not carry over to ViTs.
\emph{ViTs appear to benefit far more from feature reuse than CNNs.}
\textsc{DeiT}\xspace sees a strong boost from transfer learning on \textsc{CheXpert}\xspace and \textsc{PatchCamelyon}\xspace, wholly attributed to weight transfer, implying strong feature reuse.
\textsc{SWIN}\xspace, which re-introduces the inductive biases of CNNs, falls somewhere in the middle.
A possible explanation for this behavior is that, owing to \textsc{DeiT}\xspace's lack of inductive bias, even the largest public medical datasets lack sufficient examples to learn better features than those transferred from \textsc{ImageNet}\xspace.
The picture changes when we turn to small datasets.
Here, transfer learning shows noteworthy gains for all models.
However, the strength of the gains and the importance of feature reuse depends on the inductive biases of the model
and the distance between the domains.
\textsc{DeiT}\xspace and \textsc{SWIN}\xspace observe significant gains across the board, strongly attributed to feature reuse.
\textsc{ResNet50}\xspace and \textsc{Inception}\xspace show reasonable gains from transfer learning on \textsc{APTOS2019}\xspace and \textsc{DDSM}\xspace which can be partially attributed to feature reuse.
Finally \textsc{ISIC}\xspace, the dataset which most closely resembles \textsc{ImageNet}\xspace, shows strong benefits for transfer learning and evidence for feature reuse for \emph{all models}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-APTOS2019-deit_small-tul_12-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-DDSM-deit_small-tul_12-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-ISIC2019-deit_small-tul_12-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-CheXpert-deit_small-tul_12-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-Camelyon-deit_small-tul_12-trained_to_init}
\\[-1.5mm]
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-APTOS2019-resnet50-tul_4-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-DDSM-resnet50-tul_4-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-ISIC2019-resnet50-tul_4-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-CheXpert-resnet50-tul_4-trained_to_init} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS-Camelyon-resnet50-tul_4-trained_to_init}
\\[-1.5mm]
\midrule
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-APTOS2019-deit_small} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-DDSM-deit_small} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-ISIC2019-deit_small} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-CheXpert-deit_small} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-Camelyon-deit_small}
\\[-1.5mm]
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-APTOS2019-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-DDSM-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-ISIC2019-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-CheXpert-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/similarity/WS_cross-Camelyon-resnet50.pdf}
\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Layer-wise feature similarity using CKA.} \textbf{top:} The CKA representational similarity as a function of model depth for WT initialized {\textsc{DeiT-S}\xspace} and {\textsc{ResNet50}\xspace}, before and after fine-tuning. \textbf{bottom:} Feature similarity between ST and WT initialized models after fine-tuning. See text for details. Full results appear in Appendix \ref{sec:sup-figures-feature-similarity}.}
\label{fig:similarity}
\vspace{-4mm}
\end{figure}
\vspace{-2mm}
\paragraph{Which layers benefit from feature reuse?}
We investigate where feature reuse occurs within the network by transferring weights (WT) up to block $n$ and initializing the remaining $m$ blocks using ST.
The results appear in Figure \ref{fig:wst_all}.
Here, we see distinctive trends revealing the differences between CNNs and ViTs.
On large datasets, CNNs exhibit a relatively flat line indicating that, throughout the network, weight transfer (WT) offers no benefit over the statistics (ST).
Here, most of the benefits of transfer learning come from the statistics, not feature reuse.
For smaller datasets, CNNs show a linear trend implying that every layer sees some modest benefit from feature reuse.
\textsc{DeiT}\xspace shows a markedly different trend across all datasets -- a sharp jump in performance in early layers -- indicating a strong dependence on feature reuse in these layers.
This fits with previous works that have shown that local attention, which is crucial for good performance, is learned in early layers
\cite{dosovitskiy2020image, caron2021emerging}.
The importance of early layers we observe might be attributed to reuse of these local features which require huge amounts of data to learn \cite{raghu2021vision}.
\textsc{SWIN}\xspace exhibits properties of both \textsc{DeiT}\xspace and the CNNs, reflecting its mixture of inductive biases.
On small datasets and those similar to \textsc{ImageNet}\xspace, \textsc{SWIN}\xspace closely mirrors \textsc{DeiT}\xspace, but with enough data it shows trends resembling a CNN.
General inductive bias trends can be seen comparing models in the last panel of Figure \ref{fig:wst_all} which shows the average relative gains.
For ViTs, fewer inductive biases necessitate extensive feature reuse, concentrated in the early layers.
CNNs benefit from reused features to a lesser extent, but more consistently throughout the network, reflecting the hierarchical nature of the architecture.
To summarize the findings thus far: the benefits of transfer learning are tied to feature reuse, and depend on the size of the dataset, proximity to \textsc{ImageNet}\xspace, and the model's inductive biases.
Next, we look for further evidence of feature reuse through different perspectives.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_data_avg_deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_data_avg_swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_data_avg_inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_data_avg_resnet50.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{$\ell_2$\xspace distance of weights before and after fine-tuning.}
We report the mean $\ell_2$\xspace distances between the initial and trained weights for different WT-ST initialization schemes, averaged over all datasets. Increased distances indicate that during training the network makes larger changes to the layer weights. More results can be found in Figure \ref{fig:L2_appdx} in Appendix \ref{appdx:L2experiment}.}
\label{fig:L2_dist}
\vspace{-3mm}
\end{figure}
\vspace{-2mm}
\paragraph{What properties of transfer learning are revealed via feature similarity?}
We investigate where similar features occur within the network using CKA, a similarity measure described in Section \ref{methods}.
In Figure \ref{fig:similarity} (top) and Figure \ref{fig:similarity_apx_wt} in the Appendix, we visualize feature similarity resulting from transfer learning (WT), before and after fine-tuning.
Red indicates high feature similarity.
High feature similarity along the diagonal is evidence for feature reuse in the corresponding layers.
For \textsc{DeiT}\xspace, we see feature similarity is strongest in the early- to mid-layers.
In later layers, the trained model adapts to the new task and drifts away from the \textsc{ImageNet}\xspace features.
\textsc{ResNet50}\xspace after transfer learning shows more broad feature similarity -- with the exception of the final layers which must adapt to the new task.
This fits with the compositional nature of CNN features, also reflected in layer-by-layer improvements in Figures \ref{fig:wst_all} and \ref{fig:wst_knn}.
A common trend shared by both ViTs and CNNs is that when more data is available, the transition point from feature reuse to feature adaptation shifts towards earlier layers because the network has sufficient data to adapt more of the transferred \textsc{ImageNet}\xspace features to the new task.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_APTOS2019-deit_small.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_DDSM-deit_small.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_ISIC2019-deit_small.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_CheXpert-deit_small.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_Camelyon-deit_small.pdf}
\\ [-1.5mm]
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_APTOS2019-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_DDSM-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_ISIC2019-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_CheXpert-resnet50.pdf} &
\includegraphics[width=0.20\columnwidth]{images/layer_importance/LI_Camelyon-resnet50.pdf}
\\ [-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Re-initialization robustness}.
We measure the impact of resetting the model's weights to their initial value, one layer at a time.
Drops in performance indicate that during learning, the network made critical changes to the layer weights, implying it has not reused the transferred weights well.
See text for details.
Full results appear in Appendix \ref{sec:sup-figures-layerwise-importance}.}
\label{fig:layer_importance}
\vspace{-4mm}
\end{figure}
\vspace{-2mm}
\paragraph{Which transferred weights change?}
Another way to investigate feature reuse is to measure how much the weights drifted from their initial values during fine-tuning.
In Figure \ref{fig:L2_dist} and Appendix \ref{appdx:L2experiment} we report the $\ell_2$\xspace distance between the initial weights of each network and the weights after fine-tuning.
The general trend is that transferred weights (WT) remain in the same vicinity after fine-tuning, more so when transfer learning gains are strongest (Figure \ref{fig:L2_appdx}).
As the network is progressively initialized more with ST, the transferred weights tend to ``stick'' less well.
Certain layers, however, undergo substantial changes regardless -- early layers in ViTs (the patchifier) and \textsc{Inception}\xspace, and the first block at each scale in \textsc{ResNet50}\xspace.
These are the first layers to encounter the data, or a scale change.
The final way we look at feature reuse is to measure the impact of resetting a layer's weights to its initial values, or its \emph{re-initialization robustness}, reported in Figure \ref{fig:layer_importance} and Figure \ref{fig:LI_apx} of the Appendix.
Layers with low robustness underwent critical changes during fine-tuning.
Those transferred weights could not be reused directly and had to be adapted.
Our main finding is that networks with weight transfer (WT) undergo few critical changes, indicating feature reuse.
When transfer learning is least effective (\textsc{ResNet}\xspace on \textsc{CheXpert}\xspace and \textsc{PatchCamelyon}\xspace) the gap in robustness between WT and ST is at its smallest.
Interestingly, in ViTs with partial weight transfer (WT-ST), critical layers often appear at the transition between WT and ST.
Rather than change the transferred weights, the network quickly adapts.
But following this adaptation, no critical layers appear.
As the data size increases, ViTs make more substantial early changes to adapt to the raw input (or partial WT).
Transferred weights in CNNs, on the other hand, tend to be less ``sticky'' than ViTs.
We see the same general trend where WT is the most robust. But unlike ViTs, where WT was robust throughout the network, \textsc{ResNet50}\xspace exhibits poor robustness at the final layers responsible for classification, and also periodically within the network at critical layers where the scale changes, as observed by \cite{zhang2019all}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-APTOS2019-resnet50-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-DDSM-resnet50-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-ISIC2019-resnet50-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-CheXpert-resnet50-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-Camelyon-resnet50-all.pdf}
\\[-1.5mm]
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-APTOS2019-deit_small-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-DDSM-deit_small-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-ISIC2019-deit_small-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-CheXpert-deit_small-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/knn_wst-Camelyon-deit_small-all.pdf}
\\[-1.5mm]
\midrule
\includegraphics[width=0.20\columnwidth]{images/knn_wst/max_knn_wst-APTOS2019-all_models-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/max_knn_wst-DDSM-all_models-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/max_knn_wst-ISIC2019-all_models-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/max_knn_wst-CheXpert-all_models-all.pdf} &
\includegraphics[width=0.20\columnwidth]{images/knn_wst/max_knn_wst-Camelyon-all_models-all.pdf}
\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Predictive performance of features at different depths using $k$-NN\xspace evaluation}.
\textbf{top:} $k$-NN evaluation performance at different depths for \textsc{ResNet50}\xspace (\textit{row one}) and \textsc{DeiT-S}\xspace (\textit{row two}), with varying WT-ST fractions. \textbf{bottom:} Maximum $k$-NN evaluation score achieved at any depth for corresponding WT-ST initialization fraction, for each model type.
See discussion in the text.
Full results appear in Appendix \ref{sec:sup-figures-knn}.
}
\label{fig:wst_knn}
\vspace{-4mm}
\end{figure}
\paragraph{Are reused features low-level or high-level?}
Above, we employed multiple techniques to investigate when and where feature reuse occurs within the network.
With those experiments in mind, our aim now is to determine what role the reused features play.
Are they low-level or high-level features?
A good indicator for a high-level feature is that it can partition the data for the final task -- a property we can measure layer-wise using the $k$-NN evaluation.
Results of the $k$-NN test are given in Figure \ref{fig:wst_knn}.
First, we consider ViTs.
Previously, we observed that early layers are most crucial for ViT performance (Figure \ref{fig:wst_all}).
In the re-initialization experiment (Figure \ref{fig:layer_importance}) we also noticed that critical changes in ViTs occur either directly after the input, or at the transition between WT and ST.
From the $k$-NN tests in Figure \ref{fig:wst_knn} and \ref{fig:LI_apx} in the Appendix, we see that the relevance of the features increases dramatically within these critical layers.
Later layers do not seem to contribute further to solve the task\footnote{The zig-zag pattern in row 2 of Fig.~\ref{fig:wst_knn} is due to alternating self-attention (+) \& MLP layers ($\cdot$) common in ViT architectures.}.
In the bottom of Figure \ref{fig:wst_knn} we notice that the discriminative power of ViT features increases rapidly as we add more WT layers in the beginning, but it saturates approximately halfway through the network.
Interestingly, in an ablation we present in Appendix \ref{sec:smaller-deit}, we found that the first 5 blocks of \textsc{DeiT}s\xspace performs comparably with the full 12 blocks for transfer learning.
Evidently, early feature reuse in ViTs combined with the small medical data size results in unutilized capacity in the later layers of the ViTs, which can effectively be thrown away.
Thus, we find that \textit{features reused in these critical early layers of ViTs are responsible for the creation of high-level features}.
According to \cite{dosovitskiy2020image, raghu2021vision}, these same critical early layers are responsible for learning a mix of local and global features -- an essential component for good performance which requires very large datasets to learn -- explaining ViT's strong dependence on feature reuse in transfer learning.
In Appendix \ref{appdx:mean_att_distance} we confirm that WT transfer produces a mixture of local and global attention in early ViT layers, whereas ST initialization cannot learn to attend locally.
Next we turn to the CKA experiments at the bottom of Figure \ref{fig:similarity}.
Here, we find that early layers of ST-initialized models are similar to features from the first half of the WT-initialized models.
We see that if the network is denied these essential pre-trained weights, it attempts to learn them rapidly using only a few layers (due to lack of data), resulting in poor performance.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_APTOS2019-resnet.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_DDSM-resnet.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_ISIC2019-resnet.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_CheXpert-resnet.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_Camelyon-resnet.pdf}
\\[-1.5mm]
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_APTOS2019-deit.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_DDSM-deit.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_ISIC2019-deit.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_CheXpert-deit.pdf} &
\includegraphics[width=0.20\columnwidth]{images/capacity_figure/capacity_wst_Camelyon-deit.pdf}
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\emph{The impact of weight transfer for different model capacities.}
We evaluate the impact of weight transfer when using WT-ST initialization as a function of model capacity.
Larger models benefit more from transfer learning, but models of the same architecture family follow similar patterns regardless of capacity.
}
\label{fig:wst_capacity}
\vspace{-4mm}
\end{figure}
The role of transferred features in CNNs is different, as one might expect.
We saw in Figure \ref{fig:wst_all} that performance benefits from feature reuse are more evenly distributed throughout CNNs, while the re-initialization experiment in Figure \ref{fig:layer_importance} revealed that the critical layers are also spread out throughout the network.
The $k$-NN\xspace test in Figure \ref{fig:wst_knn} further supports these findings -- a jump in early layers corresponding to low-level feature extraction is followed by progressive improvements in the features as each layer adds complexity over the previous, until the final layer.
Large periodic $k$-NN\xspace increases correspond to critical layers in Figure \ref{fig:layer_importance}.
These trends nicely follow our understanding of compositional learning in CNNs.
A notable outlier is \textsc{ISIC}\xspace, where $k$-NN\xspace improvement is delayed.
This is likely due to \textsc{ISIC}\xspace's similarity to \textsc{ImageNet}\xspace, which allows mid-level transferred features to be reused more readily.
From the bottom row of Figure \ref{fig:similarity} we further observe that CNNs seem to learn similar features from different initializations, suggesting that their inductive biases may somehow naturally lead to these features (although the final layers used for classification diverge).
We also observe a trend where, given more data, the ST-initialization is able to learn some novel mid- to high-level features not found in \textsc{ImageNet}\xspace.
\vspace{-2mm}
\paragraph{Capacity and convergence.}
In addition to the other transfer learning factors investigated thus far, we consider model capacity.
We repeat our main experiments using \textsc{DeiT}s\xspace and \textsc{ResNet}s\xspace with different capacities
and report the results in Figure \ref{fig:wst_capacity}.
We observe slight increases in transfer learning performance as model size increases, but the patterns exhibited by the individual architectures do not change.
Finally, we investigate the impact of transfer learning on convergence speed.
Validation curves in Figure \ref{fig:convergence} demonstrate the speed-up from transfer learning, which we measure in the last panel.
We observe that convergence speed monotonically increases with the number of WT layers, in line with the finding of \cite{transfusion}.
Furthermore, we observe that CNN convergence speeds up at a roughly linear rate as we include more WT layers, while vision transformers see a rapid increase in convergence speed over the first half of the network and diminishing returns thereafter.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{\hspace{0.5mm}}c@{\hspace{2mm}}|c@{}}
\includegraphics[width=0.3\columnwidth]{images/convergence_figure/convergence-CheXpert-resnet50.pdf} &
\includegraphics[width=0.3\columnwidth]{images/convergence_figure/convergence-CheXpert-deit_small.pdf} &
\multirow{1}{*}[2.0cm]{\includegraphics[width=0.045\columnwidth]{images/convergence_figure/colorbar.pdf}} &
\includegraphics[width=0.3\columnwidth]{images/convergence_figure/convergence_global_noleg.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Transfer learning and convergence speed.}
\textbf{left}: Validation curves of \textsc{ResNet50}\xspace and \textsc{DeiT-S}\xspace on \textsc{CheXpert}\xspace using a constant learning rate. \textbf{right}: Relative convergence speedups as a function of WT transferred layers. As we transfer more layers, the convergence speed of CNNs increases linearly with depth, while for ViTs rapid increases are observed for the first half of the network, followed by a plateau.
}
\label{fig:convergence}
\vspace{-4mm}
\end{figure}
\section{Discussion}
\label{discussion}
\input{4_0_related_end}
\vspace{-2mm}
\paragraph{Factors of transferability.}
In this work, we paint a more complete picture
of transfer learning to the medical domain
by considering more medical modalities, data sizes, and the model's capacity and inductive bias.
It is our conclusion that, for the majority of situations, transfer learning from \textsc{ImageNet}\xspace yields significant performance gains.
Our findings do not contradict those of \cite{transfusion, neyshabur2020being}, rather, we show that they uncovered an isolated case where the yields from transfer learning are minimal and feature reuse is less important.
We identify four factors that influence transfer learning from \textsc{ImageNet}\xspace to the medical domain.
The data size and distance from the source domain are important factors that should not be overlooked.
Smaller datasets always benefit from transfer learning, and so do datasets that are close to the source domain.
The model's capacity has a small effect, but inductive bias is another important factor -- the benefits from transfer learning are negatively correlated with the strength of the model's inductive biases.
Looking at the extremes from our study: \textsc{DeiT}s\xspace, with the weakest inductive bias, heavily depend on transfer learning across the board.
\textsc{ResNet}s\xspace, the models primarily used in previous works with the strongest inductive bias, show only limited improvement for large datasets and datasets that are distant from \textsc{ImageNet}\xspace.
But when the data size is smaller (as is often the case in medical tasks) or more similar to \textsc{ImageNet}\xspace, even \textsc{ResNet}\xspace's benefits become significant.
\vspace{-2mm}
\paragraph{The role of feature reuse.}
The importance of feature reuse in transfer learning has also been recently questioned \cite{transfusion}.
In order to better understand what drives transfer learning, we examined feature reuse from a number of different angles.
Our main take-away is that \textit{when transfer learning works well, there is strong evidence of feature reuse}.
Beyond this, we characterized feature reuse within the network in a number of ways.
We identified that certain critical features are ``sticky'' and less prone to change through transfer learning -- though which particular features stick depends on the architecture.
We observed that early layers are most crucial for ViT performance; these layers reuse a mixture of local and global features learned on \textsc{ImageNet}\xspace to perform competitively.
ViT's inability to relearn these essential features on small medical data sizes explains their strong dependence on feature reuse.
We also found that this pattern of early feature reuse in ViTs means that later layers can be discarded without strongly affecting performance.
CNNs benefit differently from feature reuse.
In CNNs, feature reuse occurs more uniformly, marked by progressive improvements in the features as each layer adds complexity over the previous.
The slope of the improvement varies with data characteristics -- it can even become flat, as found in \cite{transfusion, neyshabur2020being}.
We confirmed through a series of ablations that these differences are primarily associated with the model's inductive bias, rather than its capacity.
\vspace{-2mm}
\paragraph{Limitations and potential negative societal impact.}
An exhaustive study of the factors that impact transfer learning is impossible -- countless models and datasets could have been included.
Nevertheless, we tried to select relevant and representative datasets and model types, covering a more diverse selection than previously studied.
A potential pitfall of this work is the use of FID \cite{fid}, which may not provide a perfect measure of distance between datasets \cite{lucic2017gans, borji2019pros}.
Despite well-meaning intentions, applying deep learning to medical data opens the possibility of unanticipated negative impacts.
Without proper consideration, models can learn to replicate unwanted biases in the data.
Failures can erode the public trust, and models that operate on medical data must take care not to reveal patient information.
\section{Conclusions}
\label{conclusions}
In this work we evaluate the benefits from transfer learning when working with medical images and how feature reuse and other factors, like the dataset and model characteristics, affect its usefulness.
We show that when transfer learning works, it is because of increased reuse of learned representations, and that models with less inductive bias, small datasets and datasets that are closer to \textsc{ImageNet}\xspace see greater gains from it.
We demonstrate that models with low inductive bias rely on reuse of local representations, composed mainly in early layers, to perform competitively with models with high inductive bias, which benefit from feature reuse throughout the network, but often to a lesser extent.
Our work focuses on transfer to the medical domain, but we believe that our findings may apply to other domains, which we leave for future work.
\section{Feature similarity}
\label{sec:sup-figures-feature-similarity}
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-APTOS2019-deit_small-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-APTOS2019-swin_tiny-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-APTOS2019-inception_v3-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-APTOS2019-resnet50-full_transfusion-trained_to_init} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-DDSM-deit_small-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-DDSM-swin_tiny-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-DDSM-inception_v3-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-DDSM-resnet50-full_transfusion-trained_to_init} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-ISIC2019-deit_small-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-ISIC2019-swin_tiny-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-ISIC2019-inception_v3-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-ISIC2019-resnet50-full_transfusion-trained_to_init} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-CheXpert-deit_small-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-CheXpert-swin_tiny-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-CheXpert-inception_v3-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-CheXpert-resnet50-full_transfusion-trained_to_init} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-Camelyon-deit_small-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-Camelyon-swin_tiny-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-Camelyon-inception_v3-full_transfusion-trained_to_init} &
\includegraphics[width=0.25\columnwidth]{images/similarity/WS-Camelyon-resnet50-full_transfusion-trained_to_init} \\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Feature similarity between initial and fine-tuned model for ST initialization.} CKA feature similarity comparison between ST initialized models before and after fine-tuning. Reported for each dataset (rows) and model type (columns).
As we can see, models with no inductive biases exhibit changes throughout the network, while models with increased inductive bias focus more on the mid-to-high layers during training. }
\label{fig:similarity_apx_st}
\vspace{-4mm}
\end{figure}
\paragraph{Details of the feature similarity calculations.}
The feature similarity throughout this work is measured using Centered Kernel Alignment (CKA).
CKA computes the feature similarity between two representations, allowing us to compare those of different layers and/or different models.
For a more detailed description of CKA see \cite{raghu2021vision} and \cite{nguyen2020wide}.
The similarity scores reported in these experiments follow the procedure described in \cite{nguyen2020wide}.
For each setting the values are calculated by measuring the similarity over the full test set, in batches of $128$.
This is done for all five runs of each setting, and we report the mean similarity score averaged over all runs.
The intermediate layers of the models that were used for calculating similarities are listed in Table \ref{tab:cnn_layer_details} and Table \ref{tab:vit_layer_details} in the Appendix, and the results can be found in Figure \ref{fig:similarity} in the main text and Figures \ref{fig:similarity_apx_wt}, \ref{fig:cross_similarity_apx}, \ref{fig:cross_similarity_deit_apx} and \ref{fig:similarity_apx_st} in the Appendix.
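As a concrete reference, the linear variant of CKA can be sketched as follows. This is a minimal illustration on synthetic activations; the variable names and shapes are ours, and our actual computation follows the minibatch procedure of \cite{nguyen2020wide} rather than this single-batch form.

```python
# Minimal sketch of linear CKA between two feature matrices.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between features X [n, d1] and Y [n, d2] over the same n samples."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formulation for the linear kernel
    cross = np.linalg.norm(Y.T @ X, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 64))  # e.g. one batch of layer activations
# Identical representations give a similarity of 1.
assert abs(linear_cka(X, X) - 1.0) < 1e-6
```

A useful sanity check is that the score is invariant to orthogonal transformations of the features, which is why it can compare layers of different widths and models.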
\section{{$k$-NN\xspace} evaluation}
\label{sec:sup-figures-knn}
\begin{figure}[t!]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-APTOS2019-deit_small-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-APTOS2019-swin_tiny-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-APTOS2019-inception_v3-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-APTOS2019-resnet50-all.pdf}\\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-DDSM-deit_small-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-DDSM-swin_tiny-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-DDSM-inception_v3-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-DDSM-resnet50-all.pdf}\\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-ISIC2019-deit_small-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-ISIC2019-swin_tiny-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-ISIC2019-inception_v3-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-ISIC2019-resnet50-all.pdf}\\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-CheXpert-deit_small-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-CheXpert-swin_tiny-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-CheXpert-inception_v3-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-CheXpert-resnet50-all.pdf}\\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-Camelyon-deit_small-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-Camelyon-swin_tiny-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-Camelyon-inception_v3-all.pdf} &
\includegraphics[width=0.25\columnwidth]{images/knn_wst/knn_wst-Camelyon-resnet50-all.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Predictive performance of features at different depths using $k$-NN\xspace evaluation}.
$k$-NN\xspace evaluation performance at different depths for models initialized with varying WT fractions, reported for each dataset (rows) and model type (columns). Overall the $k$-NN\xspace performance increases monotonically with depth for all models and datasets. However, relative performance gains from layer to layer exhibit different patterns. CNNs improve progressively, while ViTs increase rapidly in the beginning and then they reach a plateau. This plateau is observed to appear in association with the first ST-initialized layer in the {WT-ST-$n$/$m$\xspace} experiments.}
\label{fig:wst_knn_apx}
\vspace{-4mm}
\end{figure}
We use $k$-NN\xspace evaluation to investigate the discriminative power of features at different layers throughout the network.
The evaluation is performed by comparing the similarity, in feature space, of samples from the training and test sets.
In particular, we use cosine similarity to measure the distance between data-points.
Labels are then assigned to each query data-point from the test set by considering its $k$ nearest neighbors from the training set.
Throughout this work we use $k=200$.
The layers used to extract the embeddings are listed in Table \ref{tab:cnn_layer_details} for CNNs and Table \ref{tab:vit_layer_details} for ViTs.
The results of the {$k$-NN\xspace} evaluation experiments can be found in Figure \ref{fig:wst_knn} in the main text and Figures \ref{fig:wst_knn_apx}, \ref{fig:wst_max-knn_apx} and \ref{fig:wst_deit-knn_apx} in the Appendix.
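The cosine-similarity $k$-NN\xspace classifier described above can be sketched as follows. The toy data and names are illustrative; in our experiments $k=200$ and the embeddings come from the layers listed in the tables referenced above.

```python
# Sketch of cosine-similarity k-NN evaluation on intermediate embeddings.
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=200):
    """Assign each test point the majority label among its k nearest
    training points under cosine similarity."""
    # L2-normalize so that the dot product equals cosine similarity.
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = te @ tr.T                          # [n_test, n_train]
    k = min(k, tr.shape[0])
    nn = np.argsort(-sims, axis=1)[:, :k]     # indices of the top-k neighbors
    votes = train_labels[nn]                  # [n_test, k] neighbor labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated toy clusters standing in for discriminative embeddings.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(+2.0, size=(50, 8)), rng.normal(-2.0, size=(50, 8))])
labels = np.array([0] * 50 + [1] * 50)
test = np.vstack([rng.normal(+2.0, size=(10, 8)), rng.normal(-2.0, size=(10, 8))])
preds = knn_predict(train, labels, test, k=5)
```

When the features at a given depth separate the classes well, this non-parametric score is high, which is what makes it a convenient layer-wise probe.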
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-APTOS2019-all_models-all.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-Camelyon-all_models-all.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-ISIC2019-all_models-all.pdf}\\[-1.5mm]
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-CheXpert-all_models-all.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-DDSM-all_models-all.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Maximum $k$-NN\xspace predictive performance of intermediate features for different WT-ST-$n$/$m$\xspace initialization schemes}.
Similar to Figure \ref{fig:wst_knn_apx}, relative gains exhibit different patterns of improvement. Once again, ViTs improve quickly early on and then plateau. However, this plateau has a negative slope in some cases, suggesting the presence of strong biases in the high-level features, possibly inherited from the pre-training task.
}
\label{fig:wst_max-knn_apx}
\vspace{-4mm}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-APTOS2019-deit_small-ftypes.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-Camelyon-deit_small-ftypes.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-ISIC2019-deit_small-ftypes.pdf}\\[-1.5mm]
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-CheXpert-deit_small-ftypes.pdf} &
\includegraphics[width=0.333\columnwidth]{images/knn_wst/max_knn_wst-DDSM-deit_small-ftypes.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Maximum $k$-NN\xspace predictive performance of intermediate features for different WT-ST-$n$/$m$\xspace initialization schemes when using different feature types from \textsc{DeiT-S}\xspace for evaluation}.
Maximum $k$-NN\xspace evaluation score achieved at any depth for the corresponding {WT-ST-$n$/$m$\xspace} initialization fraction, for \textsc{DeiT-S}\xspace \textit{(1)} using only the \texttt{cls} token's activations, \textit{(2)} using activations from the spatial tokens, \textit{(3)} concatenating \textit{(1)} and \textit{(2)}.
The different feature embeddings exhibit similar trends, but the \texttt{cls} token often seems to outperform the patch embeddings.
}
\label{fig:wst_deit-knn_apx}
\vspace{-4mm}
\end{figure}
\section{Re-initialization robustness}
\label{sec:sup-figures-layerwise-importance}
\begin{figure}[t!]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_APTOS2019-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_APTOS2019-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_APTOS2019-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_APTOS2019-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_DDSM-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_DDSM-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_DDSM-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_DDSM-resnet50.pdf} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_ISIC2019-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_ISIC2019-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_ISIC2019-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_ISIC2019-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_CheXpert-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_CheXpert-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_CheXpert-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_CheXpert-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_Camelyon-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_Camelyon-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_Camelyon-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/layer_importance/LI_Camelyon-resnet50.pdf}\\ [-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Resilience of trained layers to change}. We report the performance when reverting a fine-tuned layer back to its original state. This is done one layer at a time, for each dataset (rows) and model type (columns), for four different {WT-ST-$n$/$m$\xspace} initialization strategies.
The results show that layers with low robustness underwent critical changes during fine-tuning. In ViTs, critical layers often appear at the transition between WT and ST. CNNs on the other hand, exhibit poor robustness at the final layers responsible for classification, and also periodically within the network at critical layers.
}
\label{fig:LI_apx}
\vspace{-4mm}
\end{figure}
In the layer re-initialization experiments, we consider the impact of reverting individual layers of the models to their initial state.
In detail, we initialize models with different WT-ST-$n$/$m$\xspace schemes. Then, after fine-tuning, we reinitialize a single layer at a time to its original state, while keeping all the other layers unchanged.
Finally, we evaluate the model on the test set and measure the drop in predictive performance.
For {\textsc{DeiT}s\xspace} and {\textsc{SWIN}\xspace}, the intermediate modules include the patchifier and the self-attention layers of each block, taken separately. For {\textsc{ResNet50}\xspace}, the modules include the first convolutional layer and the residual blocks of each stage. For {\textsc{Inception}\xspace}, the modules consist of all the initial convolutional blocks and all the individual inception modules. A detailed description of the layers that were used can be found in Table \ref{tab:cnn_layer_details} and Table \ref{tab:vit_layer_details} in the Appendix. The results of these experiments can be seen in Figure \ref{fig:layer_importance} in the main text for {\textsc{DeiT}\xspace} and Figure \ref{fig:LI_apx} for the other model types.
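At the level of a parameter dictionary, the procedure can be summarized by the following sketch. The \texttt{evaluate} function and all names are stand-ins, not our actual training code; a real implementation would operate on model state dicts and the test set.

```python
# Schematic of the re-initialization robustness test: revert one layer at a
# time to its initial weights, evaluate, and record the performance drop.
import copy
import numpy as np

def reinit_robustness(init_params, tuned_params, evaluate, layers):
    """For each named layer, replace its fine-tuned weights with the initial
    ones (leaving all other layers unchanged) and evaluate the model."""
    scores = {}
    for name in layers:
        probe = copy.deepcopy(tuned_params)
        probe[name] = init_params[name]   # revert this one layer only
        scores[name] = evaluate(probe)
    return scores

# Toy example: "performance" is the negative distance from the tuned solution,
# weighted so that changes to layer "a" matter more than changes to layer "b".
init  = {"a": np.zeros(4), "b": np.zeros(4)}
tuned = {"a": np.ones(4),  "b": np.ones(4)}

def evaluate(p):
    return -(2.0 * np.abs(p["a"] - 1).sum() + np.abs(p["b"] - 1).sum())

drops = reinit_robustness(init, tuned, evaluate, ["a", "b"])
```

Layers whose reversion causes a large performance drop are the "critical" layers discussed above: their fine-tuned weights cannot be replaced by the transferred (or random) initial values.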
\section{$\ell_2$\xspace distance}
\label{appdx:L2experiment}
In order to understand the extent to which a model's weights change during training, we calculate the $\ell_2$\xspace distance between the weights before and after fine-tuning.
In practice, for each layer, we calculate the $\ell_2$\xspace distances between the original and fine-tuned weights and then we divide this value by the number of the weights in the layer.
The details of the layers used for each model can be seen in Table \ref{tab:cnn_layer_details} and Table \ref{tab:vit_layer_details} in the Appendix. Figure \ref{fig:L2_appdx} shows the results for each model and dataset individually, and Figure \ref{fig:L2_dist} in the main text shows the distances averaged over all datasets for each model.
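This per-layer measure amounts to the following sketch (the parameter names are illustrative):

```python
# Sketch of the per-layer weight-change measure: the l2 distance between
# initial and fine-tuned weights, normalized by the layer's parameter count.
import numpy as np

def weight_drift(init_params, tuned_params):
    """Per-layer l2 distance between initial and fine-tuned weights,
    divided by the number of weights in the layer."""
    drift = {}
    for name, w0 in init_params.items():
        w1 = tuned_params[name]
        drift[name] = np.linalg.norm((w1 - w0).ravel()) / w0.size
    return drift

# Toy example: the "patchify" layer moves much further than "block1".
init  = {"patchify": np.zeros((4, 4)), "block1": np.zeros((4, 4))}
tuned = {"patchify": np.full((4, 4), 0.5), "block1": np.full((4, 4), 0.01)}
d = weight_drift(init, tuned)
```

The per-layer normalization makes layers of different widths comparable; a large value means the layer drifted far from its transferred (or random) starting point.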
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_APTOS2019-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_APTOS2019-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_APTOS2019-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_APTOS2019-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_DDSM-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_DDSM-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_DDSM-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_DDSM-resnet50.pdf} \\[-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_ISIC2019-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_ISIC2019-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_ISIC2019-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_ISIC2019-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_CheXpert-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_CheXpert-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_CheXpert-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_CheXpert-resnet50.pdf}\\ [-1.5mm]
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_Camelyon-deit_small.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_Camelyon-swin_tiny.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_Camelyon-inception_v3.pdf} &
\includegraphics[width=0.25\columnwidth]{images/L2_figure/L2_Camelyon-resnet50.pdf}\\ [-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{$\ell_2$\xspace distance of the weights.}
We report the mean $\ell_2$\xspace distances between the initial and trained weights for different models with different initialization schemes. A large distance indicates that the corresponding layer has changed significantly during training.
}
\label{fig:L2_appdx}
\vspace{-4mm}
\end{figure}
\section{Mean attended distance}
\label{appdx:mean_att_distance}
To understand the type of features that emerge at different layer depths as a function of the initialization strategy WT-ST for \textsc{DeiT}s\xspace, we calculate the mean attended distance per layer.
That is, for each attention head of \textsc{DeiT-S}\xspace, we compute the mean of the element-wise product of each query token's attention weights and its distances to the other tokens, similarly to \cite{dosovitskiy2020image}.
Then, we average the calculated distances per layer for all of the WT-ST initialization schemes.
In Figure \ref{fig:distance_appdx} in the Appendix, we report the mean attended distance per layer for all datasets and the average attended distance over all datasets.
The results clearly show that (1) the ST initialization results in global attention throughout the network, (2) after the critical layers the attention is mainly global, and (3) the WT layers introduce a mixture of local and global features.
This suggests that the WT layers provide a mixture of local and global features that the model cannot learn on its own due to the small dataset size.
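The computation itself can be sketched as follows: each head's score is the attention-weighted average of the spatial distances between patch centres. The square grid layout and the 16-pixel patch size are assumptions for illustration, and the CLS token is assumed to have been removed beforehand:

```python
import numpy as np

def mean_attended_distance(attn, grid_size, patch_size=16):
    """Mean attended distance per head, in pixels.

    attn: (heads, tokens, tokens) attention weights over patch tokens,
    where tokens = grid_size ** 2 (CLS token already removed).
    """
    h = w = grid_size
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * patch_size
    # Pairwise Euclidean distances between patch centres: (tokens, tokens).
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Attention-weighted distance per query, averaged over queries per head.
    return (attn * dist[None]).sum(-1).mean(-1)      # shape: (heads,)
```

Heads that attend mostly to nearby patches score low (local attention), while heads spreading attention across the image score high (global attention).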
\begin{figure}[t!]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}c@{}}
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-APTOS2019.pdf} &
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-DDSM.pdf} &
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-ISIC2019.pdf} &
\multirow{2}{*}[1.7cm]{\includegraphics[width=0.075 \columnwidth ]{images/distance_figures/colorbar.pdf}}
\\[-1.5mm]
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-CheXpert.pdf} &
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-Camelyon.pdf} &
\includegraphics[width=0.31\columnwidth]{images/distance_figures/dist-average.pdf} &
\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-4mm}
\caption{\emph{Mean attended distance for different initializations.}
We report the mean attended distance of the fine-tuned \textsc{DeiT-S}\xspace model for all datasets using different WT-ST initializations (WT fraction from 0 to 1, where 0 = ST and 1 = WT).
The bottom-right figure shows the mean attended distance, averaged over all datasets.
Evidently, in the absence of WT layers the attention is mainly global, whilst the \textsc{ImageNet}\xspace pre-trained weights introduce a mixture of local and global attention that the network cannot learn on its own.}
\label{fig:distance_appdx}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence-APTOS2019.pdf} &
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence-DDSM.pdf} &
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence-ISIC2019.pdf} \\[-1.5mm]
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence-CheXpert.pdf} &
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence-Camelyon.pdf} &
\includegraphics[width=0.333\columnwidth]{images/convergence_figure/convergence_global.pdf}\\[-1.5mm]
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{\emph{Convergence speed as a function of WT fraction for different models and datasets.} We report the number of iterations it takes for each model to converge for different WT fractions for each individual dataset. The bottom-right figure shows the relative speedups averaged over all datasets. Evidently, the convergence speed monotonically increases with the number of WT layers for all datasets and models.}
\label{fig:convergence_appdx}
\vspace{-4mm}
\end{figure}
\section{Model convergence}
\label{sec:convergence-study}
We investigate how the convergence behavior of the models changes as we transfer more layers. Figure \ref{fig:convergence_appdx} in the Appendix and Figure \ref{fig:convergence} in the main text show the number of iterations needed for each model to reach its best validation performance. As a general trend for all models, the higher the WT fraction, the faster the model converges. Interestingly, for vision transformers, transferring the first few blocks dramatically increases the convergence speed, while transferring further blocks only slightly speeds up training. CNNs, however, follow a different trend, where transferring more layers improves the convergence speed at a roughly linear rate.
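The convergence measure used here, the iteration at which the best validation score is reached, can be written as a small helper. The function name, the evaluation-interval convention, and the assumption that scores[i] was measured at iteration i * eval_every are all illustrative:

```python
def iterations_to_convergence(val_scores, eval_every=1):
    """Iteration index at which the best validation score occurs.

    val_scores: sequence of validation scores (higher is better),
    with val_scores[i] assumed to be measured at iteration i * eval_every.
    """
    best = max(range(len(val_scores)), key=val_scores.__getitem__)
    return best * eval_every
```

Applied to each WT-fraction run, this yields the iteration counts that the convergence figures plot.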
\section{Model capacity}
\label{appdx:capacity-study}
We investigate the impact that a model's capacity has on transfer learning for {\textsc{DeiT}s\xspace} \cite{deit} and {\textsc{ResNet}s\xspace} \cite{resnet}. To this end, we consider three different capacities for each architecture, which are comparable in the number of parameters and compute time. For the {\textsc{ResNet}\xspace} family, we considered {\textsc{ResNet18}\xspace}, {\textsc{ResNet50}\xspace}, and {\textsc{ResNet152}\xspace}. For the {\textsc{DeiT}\xspace} family, we chose {\textsc{DeiT-T}\xspace}, {\textsc{DeiT-S}\xspace}, and {\textsc{DeiT-B}\xspace}. For each model capacity, we carry out the same WT-ST-$n$/$m$\xspace experiments. That is, we initialize each model with different {WT-ST-$n$/$m$\xspace} initialization schemes and then we fine-tune them on the target task. The training strategy follows exactly the details mentioned in Section \ref{methods}. Please refer to Figure \ref{fig:convergence} and Section \ref{experiments} for the results and discussion.
\addtolength{\tabcolsep}{-2pt}
\begin{table}[t]
\tiny
\input{tables/cnn_layer_details}
\vspace{-2mm}
\caption{\emph{Implementation details for the CNN models}.
\textbf{(Left)} Modules and layers used for \textsc{Inception}\xspace.
\textbf{(Right)} Modules and layers used for \textsc{ResNet}s\xspace.
The first column of each side shows the initialization type that we used.
The module that corresponds to each initialization scheme (second column from each side) is the last module that we initialized with WT. The modules after that were initialized with ST.
The third column from each side lists the layers we used for the $k$-NN\xspace, $\ell_2$\xspace, re-initialization and representation similarity experiments.
}
\label{tab:cnn_layer_details}
\vspace{-3mm}
\end{table}
\addtolength{\tabcolsep}{3pt}
\section{The WT-ST initialization schemes}
\label{appdx:wtst_details}
Here, we provide additional details regarding the WT-ST-$n$/$m$\xspace initialization procedure and the modules we used to investigate where feature reuse occurs within the network.
We transfer weights (WT) up to block $n$ and we initialize the remaining $m$ blocks using ST.
In practice this means that the first $n$ modules use the exact \textsc{ImageNet}\xspace pre-trained weights, while the weights of the next $m$ modules are initialized from a Normal distribution $\mathcal{N}(\mu_i,\,{\sigma_i}^{2})$, where $\mu_i$ and ${\sigma_i}^{2}$ are the mean and variance of the $i$-th \textsc{ImageNet}\xspace pre-trained weight tensor.
Due to the architectural differences, we use a different selection of modules for each model.
For \textsc{DeiT}s\xspace and \textsc{SWIN}s\xspace, we use the input layer (patchifier), each of the transformer blocks and the last normalization layer of the network.
For \textsc{Inception}\xspace we use the first four modules, which include the layers that operate at the same scale and the inception modules that belong to the same stage.
Finally, for \textsc{ResNet}s\xspace we include the input layer, the first normalization layer and the resnet blocks from each scale.
The exact details for each {WT-ST-$n$/$m$\xspace} setting for {\textsc{ResNet50}\xspace}, {\textsc{ResNet18}\xspace}, {\textsc{ResNet152}\xspace} and {\textsc{Inception}\xspace} can be found in Table \ref{tab:cnn_layer_details} in the Appendix. Similarly, the details for {\textsc{DeiT}\xspace}, {\textsc{DeiT-T}\xspace}, {\textsc{DeiT-B}\xspace} and {\textsc{SWIN}\xspace}-T are found in Table \ref{tab:vit_layer_details} in the Appendix.
The results of these experiments are reported in Figure \ref{fig:wst_all} and \ref{fig:wst_capacity} in the main text.
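A minimal NumPy sketch of this initialization scheme is given below. The `module_of` mapping from parameter names to module indices is a hypothetical helper (in practice the module boundaries are those listed in the tables), and the dictionary interface is for illustration only:

```python
import numpy as np

def wtst_init(pretrained, module_of, n, rng=None):
    """WT-ST-n/m initialization sketch.

    Copies the pretrained weights for parameters whose module index is
    below n (weight transfer, WT); samples the rest from a Normal with
    the mean and std of the corresponding pretrained tensor (ST).
    """
    rng = rng or np.random.default_rng(0)
    new_weights = {}
    for name, w in pretrained.items():
        if module_of(name) < n:
            new_weights[name] = w.copy()            # WT: exact pretrained weights
        else:
            mu, sigma = w.mean(), w.std()
            # ST: statistics-preserving random re-initialization.
            new_weights[name] = rng.normal(mu, sigma, size=w.shape)
    return new_weights
```

Sampling from the per-tensor statistics keeps the scale of the re-initialized layers close to the pretrained ones, which is the point of the ST scheme compared to a generic random init.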
\addtolength{\tabcolsep}{-2pt}
\begin{table}[t]
\tiny
\input{tables/vit_layer_details}
\vspace{-2mm}
\caption{\emph{Implementation details for the ViT models}.
\textbf{(Left)} Modules and layers used for \textsc{DeiT}s\xspace.
\textbf{(Right)} Modules and layers used for \textsc{SWIN}\xspace-T.
The first column of each side shows the initialization type that we used.
The module that corresponds to each initialization scheme (second column from each side) is the last module that we initialized with WT. The modules after that were initialized with ST.
The third column from each side lists the layers we used for the $k$-NN\xspace, $\ell_2$\xspace, re-initialization and representation similarity experiments.
}
\label{tab:vit_layer_details}
\vspace{-3mm}
\end{table}
\addtolength{\tabcolsep}{3pt}
\section{The 5-layer {\textsc{DeiT-S}\xspace} model}
\label{sec:smaller-deit}
As we showed in Figure \ref{fig:wst_all} and Figure \ref{fig:wst_knn} of the main text, it appears that vision transformers benefit significantly from weight transfer in their initial-to-middle blocks, while transferring the weights in the later blocks seems to offer little or no benefit.
In fact, transferring weights too deep into the network \emph{may result in worse high-level features}, possibly due to biases learned during the pre-training task.
Furthermore, we noticed from the layer-wise experiments that critical layers often appear at the transition between WT and ST.
This begs the question: \textit{
Can we use a smaller \textsc{DeiT}\xspace model that has been initialized with weight transfer without compromising classification performance?}
To this end, we use a trimmed version of a \textsc{DeiT-S}\xspace model that has only five transformer blocks -- effectively reducing the memory and computational requirements by a factor of 2.
We initialize this model, denoted as \textsc{DeiT-S}\xspace-5b, with weight transfer from \textsc{ImageNet}\xspace and we fine-tune it on the target datasets, using the settings described in Section \ref{methods}.
Surprisingly, our results in Table \ref{tab:trimmed_vs_full} show no significant change in classification performance.
This supports the arguments that: \emph{1)} the initial blocks of ViTs contribute the most to the overall performance of the model, and \emph{2)} feature reuse in the first layers is so strong for \textsc{DeiT}s\xspace that it can compensate for the lack of additional transformer blocks.
This finding might be of further interest for practitioners who work with limited computational and memory budgets.
For example, in medical imaging, there is a need for light-weight models as the large image sizes that are encountered in practice prohibit the utilization of large models.
However, further evaluation is needed to assess the extent to which these benefits are broadly applicable.
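The trimming operation itself is simple. The sketch below assumes a timm-style ViT where the transformer blocks live in an `nn.ModuleList` attribute called `blocks` (the attribute name is an assumption; other implementations may differ):

```python
import torch.nn as nn

def trim_blocks(model, keep=5):
    """Keep only the first `keep` transformer blocks of a ViT-style model.

    Assumes `model.blocks` is an nn.ModuleList of transformer blocks,
    as in timm's DeiT implementation; the rest of the model is untouched.
    """
    model.blocks = nn.ModuleList(list(model.blocks)[:keep])
    return model
```

After trimming, the model is WT-initialized in the kept blocks and fine-tuned exactly as before; the parameter count and compute roughly halve for a 12-block \textsc{DeiT-S}\xspace kept at 5 blocks.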
\vfill\eject
\addtolength{\tabcolsep}{-2.5pt}
\begin{table}[ht]
\tiny
\input{tables/deit5b_results}
\vspace{-2mm}
\caption{
\emph{A trimmed {\textsc{DeiT-S}\xspace} with only 5 blocks performs comparably to the full {\textsc{DeiT-S}\xspace} model.} We keep only the first 5 (out of 12) blocks of {\textsc{DeiT-S}\xspace} and discard the rest. Then, after WT initialization, we fine-tune the model with the same strategy detailed in Section \ref{methods}. It can clearly be seen that the smaller model performs competitively with the full \textsc{DeiT-S}\xspace, even when more than half of the blocks are removed.}
\label{tab:trimmed_vs_full}
\vspace{-3mm}
\end{table}
\addtolength{\tabcolsep}{3pt}
\end{appendices}
Wharton vs. reapply to HBS/Stanford - GMAT Club forum thread ("Admitted - Which BSchool to Choose?")
Source: https://gmatclub.com/forum/wharton-vs-reapply-to-hbs-stanford-149965.html

everton (28 Mar 2013, 00:37):
Just received my round 2 decisions - offer from Wharton, but dinged from Stanford & HBS. I'm a UK applicant and Wharton doesn't have much of a rep over here... could anyone outline the merits of reapplying to HBS/Stanford vs. Wharton? Time is on my side by the way - I'm only 24.

highwyre (28 Mar 2013, 05:04):
Wharton doesn't have a lot of rep in the UK? That sounds pretty crazy to me for some reason... http://essaysnark.com/2012/05/i-got-int ... -stanford/ Why do you feel you'd do better next time? I would first see if Wharton allows you to defer your decision.

CobraKai (28 Mar 2013, 05:13):
#1 HBS and GSB are two of the toughest schools on the planet to get into. Re-applying guarantees nothing.
#2 If you turn down Wharton, there will be plenty of people happy to take your spot, and if they don't let you defer, don't expect them to welcome you back with open arms next application cycle after spurning them once.
#3 If Wharton doesn't have a good rep in the UK, why did you apply there?

everton (28 Mar 2013, 08:14):
Thanks for the responses guys!
highwyre: I think I'd have a better chance next year for a few reasons - a bit older, more (relevant) work experience & also a lot of people from my company were applying this year. I also think I could have done a much better job at the Stanford story & HBS interview (I can't follow that link as I don't subscribe - what is the gist?).
CobraKai:
#1&2 - Understood! I appreciate that trying again might mean I get nothing. If it's a risk that has a 50/50 chance of paying off I'd take it though.
#3 - OK, I'll tone down "Wharton doesn't have a big rep" a bit! Essentially, having looked at the courses, spoken to alums etc., I think HBS & SGSB are much more in line with what I want to get out of an MBA. HBS delivers 10x more than the other 2 on rep, and Stanford's curriculum is much more in line with what I'm looking for. Applying to Wharton was a bit of a marginal decision, but nonetheless I'll definitely go to the admit weekend & check it out.
#4 - LBS / a European school isn't an option.

CobraKai (28 Mar 2013, 08:21):
Seeing how GSB accepts ~6% and HBS ~12%, it's far from a 50/50 chance. Also, as a re-applicant you'll need to be prepared to show how you've become a better candidate, since they'll have this year's application on file.
On #4: just curious as to why? Not enough rep compared to HBS and GSB? Not sure what your goals are post-MBA, but just be aware there are programs outside of HBS and GSB that can help you reach your goals. Best of luck with whatever you decide, though! It's quite an achievement to get into Wharton!

highwyre (28 Mar 2013, 08:34):
Ah, sorry, didn't realize you needed to subscribe to read that one, never mind. Either way, will you be unhappy at Wharton? Will you always wonder "what if"? If so, OK... don't go, and try again. Do I personally think you are crazy? Yes. I doubt 1 year will change your chances much. Simply 1 year of experience (without a sweet promotion) probably won't matter. Just curious, how close did you get to H/S? Were you interviewed?

everton (28 Mar 2013, 08:52):
Yeah, I think that is essentially the crux of it - I feel I would look back on it with regret if I didn't try again (slightly crazy, I appreciate...). Obviously it's a great position to be in though & I'm not knocking Wharton as a school! Quite a lot has changed already since I wrote my application (I will have spent 9 months on secondment at a client by the summer, with a higher level of responsibility than in consulting), so I'm hoping that would make a difference. I got an interview at HBS, not at Stanford.
Re #4 - a big draw for me is going to a US school. By staying somewhere I know well (London) I'd be missing out on quite an essential part of the MBA (widening horizons & network, new culture, etc.). I've covered half of the academic part in my undergrad/work training already & would be able to enter my preferred industry without an MBA (which in reality is what I will most likely do if I don't choose Wharton).

highwyre (28 Mar 2013, 09:09):
Well, I think you answered your own question. Reassess after the welcome weekend, but it's your choice. Just don't make this decision as if it's Wharton this year vs. H/S next year. Your decision is really Wharton this year vs. no MBA plus a glimmer of hope at H/S next year.

everton (28 Mar 2013, 09:13):
Yeah, I'll make sure to see the campus etc. before it's final. Thanks for the advice - & I agree with you - it's more likely than not it's no MBA.

Senior Manager (29 Mar 2013, 08:22):
Just something I'd mention: without a good promotion or title change, your chances may actually be worse next year than they were this year. I know you mentioned you took on some new responsibilities this year, which is good to mention. However, if you're making (roughly) the same salary and have the same title that you did last year, it may not be enough. What I mean is, if you had 2 promotions in 2 years, that is pretty impressive. You apply to schools, don't get in, decide to re-apply. Now you've had 2 promotions in 3 years, which is less impressive. That's just something to consider; maybe it doesn't apply to your case if the changes you've made at work are substantial.
You should also consider how friendly each program is to re-applicants. Stanford indicates "reapplicants are at no disadvantage for having applied and been rejected", which I would tend to interpret as "certainly no advantage, and possibly a slight disadvantage despite what we just said". I think HBS may be more favorable towards re-applicants, but don't quote me on that.
For next year, you should decide whether it's "HBS or bust" or "HBS or Stanford or bust", because you mention that you feel HBS's rep is 10x that of Stanford/Penn, and that Stanford's curriculum is in line with your interests. But which one is more important to you? If next year you get denied at HBS but admitted at Stanford (or vice versa), are you going to be happy at Stanford? Or are you going to be in the same position again?

Kimberly Plaga, Manhattan Review (02 Apr 2013, 13:52):
Wharton does have a great global reputation. I worked for Bain in London after graduation and many of my classmates ended up at top-tier UK and European firms. If you turn down Wharton this year and re-apply next year, do not expect them to welcome your application; they know that you are playing them, so they would most probably play you. HBS and Stanford do not like to admit they made mistakes in their admissions process, so as a re-applicant your chances would not be that great. If you do re-apply, your candidacy would have to be different from the last one that you submitted. What you submitted did not work.

Intern (03 Apr 2013, 05:57):
A girl in my Wharton interview group told us after the interview that she had been admitted the previous year but decided not to go. This year she was dinged. Don't assume that Wharton will be a "bird in hand" during next year's application cycle. In my opinion the girl was kind of assuming that she'd get in again, and it showed in her attitude and lack of preparation for the interview. It's a real possibility that you reapply next year and don't get in anywhere. Ultimately, this crazy process comes down a lot to luck (how many applicants total, how many from your industry/company, who interviews you, etc.) - you might get into H/S next year and be the happiest person in the world that you waited, but it's equally likely that you'll be on the forums this time next year bemoaning your decision to forego one of the best business schools in the world (clearly, I have a biased opinion). Just go into Welcome Weekend with eyes wide open.

efc1 (12 Dec 2014, 10:14):
I'd like to revisit this and thank you all for the advice. As an update: I did turn down Wharton in 2013, changed job and reapplied to H + S this year. I was admitted to both this week. Feeling incredibly lucky to be in this position, and incredibly glad I followed my heart and not my head!

Reply (12 Dec 2014, 12:10):
Congratulations!

Reply (13 Dec 2014, 01:08):
I'm speechless. Good job and congratulations, efc1. No wonder Stanford and HBS admitted you. Question now: HBS or Stanford?

efc1 (13 Dec 2014, 09:00):
Thank you! That's the million dollar question, I guess. My gut feeling is HBS, but I'll think it out over a few Christmas beers. Both schools would be a dream come true.

(03 Jan - 10 Jan 2015):
hibye and two other members asked efc1 to share his profile; efc1 replied "I'll pm it to you", and one poster added congratulations ahead of his own R1 2015 application.
| null | null |
Are you searching for classified advertising?
An online classified advertisement is a small, attention-grabbing ad placed on classified-ad websites to capture the interest of a target visitor. It is a short-term advertisement that can generate a strong response within a very short period. Classifieds are low-cost and deliver good value for money. For instance, an individual can place an ad for a used bike to find the best buyer for it. Web users routinely spend time on quality classified websites looking for something useful and worthwhile. There are many Internet users, whether working online or searching for something tangible, and classified ads are published in abundance. Online classifieds are very useful for generating short-term responses: a classified runs for a short time in a small space, yet it is easily found by visitors within the period specified for it.
Classified ads offer value for money. Even small advertisers can place ads on online classified websites, because these ads are cheap and, on some sites, free. Advertisers get good value for their money and see solid returns. More and more people are selling their used bikes, cars, and mobile phones on classified-ad websites to get maximum returns. Online classifieds are also a good source of used products at good prices: a used bike or a used car is easily found on such sites, and even used mobiles are advertised for user reference. With no room for a middleman, the deal is fair for both buyer and seller; it is a straightforward transaction between two individuals, free of the manipulation a middleman might introduce. Look here for important points https://chugiong.com/.
Stores can advertise their new products or promotional campaigns to draw visitors and win desired customers. Stores also promote their seasonal sales in online classified sections to get a good response. Classified ads are easy to design and well placed to reach the target site visitors. The strong response generated by online classified ads enables these classified-ad sites to grow further and to advertise in ways that produce good returns for the advertiser.
\section{Introduction}
The $\mathcal{CONGEST}$ model is a synchronous, message-passing model of distributed
computation in which the amount of information that a node can transmit along an incident communication link
in one round is restricted to $O(\log n)$ bits, where $n$ is the size of the network \cite{peleg2000distributed}.
As the name suggests, the $\mathcal{CONGEST}$ model focuses on \textit{congestion} as an obstacle to distributed computation.
In this paper, we focus on the design of distributed algorithms in the $\mathcal{CONGEST}$ model
on a \textit{clique} communication network; we call this the \textit{congested clique} model.
In the congested clique model, all information is nearby, i.e., at most one hop away,
and so any difficulty in solving a problem is due to congestion alone.
Let $H = (V, E_H)$ denote the underlying clique communication network.
In general, the input to the problems we consider consists of a $|V| \times |V|$ matrix $M$ of edge-attributes and a length-$|V|$ vector of node attributes.
$M$ represents edge weights (or distances, or costs) and it is initially distributed among the nodes in $V$ in such a way that each node $v \in V$ knows the corresponding
row and column of $M$.
In one typical example, $M$ could simply be the adjacency matrix of a spanning subgraph $G = (V, E)$ of
$H$; in this setting, each node $v \in V$ initially knows all the edges of $G$ incident on it.
A number of classical problems in
distributed computing, e.g., maximal independent set (MIS), vertex coloring, edge coloring, maximal matching, shortest paths, etc., are well-defined in this setting. However, the difficulty of proving lower
bounds in the congested clique model \cite{DruckerKuhnOshmanPODC2014} means that it is not clear how quickly one should be
able to solve any of these problems in this model. Note that the input $G$ can be quite dense (e.g., have
$\Theta(n^2)$ edges)
and therefore any reasonably fast algorithm for the problem will have to be ``truly'' distributed in the sense
that it cannot simply rely on shipping off the problem description to a single node for local computation.
In this setting, the algorithm of Berns et al.~\cite{berns2012arxiv,berns2012facloc} that computes a \textit{2-ruling set} of $G$ in expected-$O(\log \log n)$ rounds is worth mentioning.
(A \textit{t-ruling set} is defined to be an independent set $I \subseteq V$ such that every
node in $V$ is at most $t$ hops in $G$ from some node in $I$.)
In another important class of problems that
we study, the input matrix $M$ represents a metric space $(V, d)$; thus each node $v \in V$
initially has knowledge of distances $d(v, w)$ for all $w \in V$.
Nodes then need to collaborate to solve a problem such as \textit{minimum spanning tree} (MST) or
\textit{metric facility location} (MFL) that are defined on the input metric space.
In this setting, the deterministic MST algorithm of Lotker et al.~\cite{lotker2006distributed} running in $O(\log\log n )$
rounds is worth mentioning.
Thus far the congested clique model has mainly served the theoretical purpose
of helping us understand the role of congestion as an obstacle to
distributed computation.
However, recent papers \cite{KlauckArxiv2013,HegemanPemmarajuSIROCCO2014}
have made connections between congested clique algorithms and
algorithms in popular systems of parallel computing such as MapReduce \cite{DeanGhemavat} and graph processing systems
such as Pregel \cite{MalewiczPregelSIGMOD2010}, thus providing a practical motivation
for the development of fast algorithms on the congested clique.
Specifically, in \cite{HegemanPemmarajuSIROCCO2014}, it is shown that congested clique algorithms
with fairly liberal resource constraints can be efficiently simulated
in a MapReduce model of computation \cite{KarloffSuriVassilvitskii}.
\subsection{Main Results}
\label{section:mainResults}
In this paper we present several constant-time or near-constant-time algorithms for fundamental problems in the congested clique setting.
\begin{itemize}
\item First, we present an algorithm that computes a 3-ruling set of $G$
in expected $O(\log \log \log n)$ rounds, significantly improving the running time of the 2-ruling set
algorithm of Berns et al.~\cite{berns2012arxiv,berns2012facloc}.
\item Via a reduction presented in Berns et al.~\cite{berns2012arxiv,berns2012facloc}, this implies an expected $O(\log \log \log n)$-round algorithm
for computing an $O(1)$-approximation for MFL. Again, this
significantly improves on the running time of the fastest known algorithm for this problem.
\end{itemize}
Distributed algorithms that run in $O(\log \log n)$ rounds are typically analyzed by showing a doubly-exponential rate of progress; such progress, for example, is achieved if the number of nodes that have ``successfully finished'' grows by squaring after each iteration.
The congested clique algorithms for MST due to Lotker et al.~\cite{lotker2006distributed} and the above-mentioned MFL algorithm due to Berns et al.~\cite{berns2012arxiv,berns2012facloc} are both examples of such phenomena. Our algorithm with triply-logarithmic running time, involves new techniques that seem applicable to congested clique algorithms in general. Our result raises the distinct possibility that other problems, e.g., MST, can also be solved in $O(\log \log \log n)$ rounds on a congested clique.
In fact, our next set of results represents progress in this direction.
\begin{itemize}
\item We show how to solve the MIS problem on a congested clique in \textit{constant} rounds on an input graph $G_r$
induced by the metric space $(V, d)$ in which every pair of nodes at distance at most
$r$ (for any $r \ge 0$) are connected by an edge. This result has two implications.
\item First, given a metric space $(V, d)$ of constant doubling dimension, we show that a constant-approximation to
the MST problem on this metric space can be obtained in \textit{constant} rounds on a congested clique
setting.
\item An additional implication of the aforementioned MIS result is that it leads to a \textit{constant}-round constant-approximation to MFL in metric spaces of constant doubling dimension on a congested clique.
\end{itemize}
In order to achieve our results, we use a variety of techniques that balance bandwidth constraints with
the need to make rapid progress. We believe that our techniques will have independent utility in any
distributed setting in which congestion is a bottleneck.
\subsection{Technical Preliminaries}
\label{subsection:techPrelim}
\paragraph{Congested Clique Model.} The underlying communication network is a clique $H = (V, E_H)$ of size $n = |V|$.
Computation proceeds in synchronous rounds and in each round a node (i) receives all messages sent to it in the previous round,
(ii) performs unlimited local computation, and then (iii) sends a, possibly
different, message of size $O(\log n)$ to each of the
other nodes in the network.
We assume that nodes have distinct IDs that can each be represented in $O(\log n)$ bits.
\paragraph{MST and MFL problems.} We assume that the input to the MST problem is a
metric space $(V, d)$. Initially, each node $v \in V$ knows
distances $d(v, w)$ to all nodes $w \in V$. When the algorithm ends, all nodes in $V$
are required to know a spanning tree $T$ of $V$ of minimum weight.
(Note that here we take $d(u, v)$ to be the ``weight'' of edge $\{u, v\}$.)
The input to MFL consists of a metric space $(V, d)$
along with \textit{facility opening costs} $f_v$ associated with
each node $v \in V$.
The goal is to find a subset $F \subseteq V$ of nodes to
\textit{open} as facilities so as to minimize the facility opening costs plus connection
costs, i.e., $\sum_{v \in F} f_v + \sum_{u \in V} D(u, F)$,
where $D(u, F) := \min_{v \in F} d(u, v)$ is the \textit{connection
cost} of node $u$.
Initially, each node $v \in V$ knows facility opening cost $f_v$ and distances $d(v, w)$ for
all $w \in V$.
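As a sanity check on the objective, here is a minimal sketch (toy one-dimensional point set and illustrative opening costs, not from the paper) that evaluates the MFL cost of a candidate facility set:

```python
def mfl_cost(points, d, f, F):
    """MFL objective: sum of opening costs of F plus each node's
    connection cost D(u, F) = min over v in F of d(u, v)."""
    opening = sum(f[v] for v in F)
    connection = sum(min(d(u, v) for v in F) for u in points)
    return opening + connection

# Toy instance: points on a line, uniform opening cost 3 (illustrative).
points = [0, 1, 2, 10]
d = lambda u, v: abs(u - v)
f = {p: 3 for p in points}

print(mfl_cost(points, d, f, {1}))      # 3 + (1 + 0 + 1 + 9) = 14
print(mfl_cost(points, d, f, {1, 10}))  # 6 + (1 + 0 + 1 + 0) = 8
```

On this instance, opening a second facility at the outlier point 10 lowers the total cost, illustrating the opening-cost/connection-cost trade-off.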
Facility location is a well-studied problem in operations research
\cite{Balinski66,CNWBook,EHK77} that arises in
contexts such as locating hospitals in a city or locating distribution centers
in a region.
More recently, the facility location problem has been used as an abstraction for
the problem of locating resources in a wireless network
\cite{FrankBook,PanditPemmarajuICDCN09} and motivated by this application several distributed approximation algorithms for this problem have been
designed \cite{MoscibrodaFLPODC05,GehweilerSPAA2006,HegemanPemmarajuDISC2013}.
\paragraph{$t$-ruling set problem.}
A \textit{$t$-ruling set} of a graph $G = (V, E)$ is an independent set $I \subseteq V$ such that every vertex in $G$ is at most
$t$ hops from some vertex in $I$.
A $t$-ruling set, for constant $t$, is a natural generalization of an MIS and can stand as a proxy for an MIS in many instances.
The input to the $t$-ruling set problem on a congested clique $H = (V, E_H)$ is a spanning subgraph
$G = (V, E)$ of the underlying communication network $H$.
Each node $v \in V$ is initially aware of all its neighbors in $G$.
At the end of the $t$-ruling set algorithm, every node is required to know the identities of
all nodes in the computed $t$-ruling set.
\paragraph{Metric spaces, doubling dimension, and growth-bounded graphs.}
If $M = (V, d)$ is a metric space then we use $B_M(v, r)$ to denote the set of points $w \in V$ such that
$d(v, w) \le r$.
We call $B_M(v, r)$ the \textit{ball of radius $r$ centered at $v$}.
A metric space $M = (V, d)$ has \textit{doubling dimension} $\rho$ if for any $v \in V$ and $r \ge 0$,
$B_M(v, r)$ is contained in the union of at most $2^\rho$ balls $B_M(u, r/2)$, $u \in V$.
In this paper, we work with metric spaces with constant doubling dimension, i.e., $\rho = O(1)$.
Note that constant-dimensional Euclidean metric spaces are natural examples of metric spaces with constant doubling dimension.
In distributed computing literature, metric spaces of constant doubling dimension have
been investigated in the context of wireless networks \cite{damian2006Spanner,KuhnMoscibrodaWattenhoferPODC2005}.
For a graph $G = (V, E)$ and a node $v \in V$, let $B_G(v, r)$ denote the set of all vertices $u \in V$ that are at most $r$ hops from $v$.
A graph $G = (V, E)$ is said to have \textit{bounded growth} (or said to be \textit{growth-bounded}) if the size of any independent set in any ball
$B_G(v, r)$, $v \in V$, $r \ge 0$, is bounded by $O(r^c)$ for some constant $c$.
For any metric space $(V, d)$ and $r \ge 0$, the graph $G_{r} = (V, E_r)$, where
$E_r = \{\{u, v\} \mid d(u, v) \le r\}$
is called a \textit{distance-threshold graph}.
It is easy to see that if $(V,d)$ has constant doubling dimension then a distance-threshold graph
$G_{r}$, for any $r \ge 0$, is growth-bounded; this fact will play an important role in our algorithms.
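For concreteness, a small sketch (toy planar point set, so the metric has constant doubling dimension) that builds the distance-threshold graph $G_r$ as adjacency lists:

```python
import math

def distance_threshold_graph(points, d, r):
    """Edge set E_r = { {u,v} : d(u,v) <= r }, returned as adjacency lists
    indexed by point position."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if d(points[i], points[j]) <= r:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Toy 2D Euclidean metric (constant doubling dimension).
pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
euclid = lambda p, q: math.dist(p, q)
adj = distance_threshold_graph(pts, euclid, r=1.5)
print(sorted(adj[0]))  # [1, 2]: the two points within distance 1.5 of (0, 0)
```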
For a given metric space $(V, d)$ the \emph{aspect ratio} $\lambda(Y)$ of a subset of points $Y \subseteq V$ is the ratio of the maximum pairwise distance between points in $Y$ to the minimum pairwise distance between distinct points in $Y$, i.e.,
$\lambda(Y) = {\max\{d(u,v) \mid u, v\in Y\}}/{\min\{d(u,v)\mid u, v \in Y,\ u \neq v\}}$.
The following fact is easy to prove by applying the definition of doubling dimension:
if $(V, d)$ is a metric with doubling dimension $\rho$ and $Y\subseteq V$ is a subset of points, then $|Y| \leq 2^{\rho\cdot\lceil \log_2 \lambda(Y) \rceil}$ where $\lambda(Y)$ is the
aspect ratio of $Y$.
We refer to this property as the \textit{growth-bounded property} of the metric space $(V, d)$.
Distance-threshold graphs and more generally, growth-bounded graphs have
attracted attention in the
distributed computing community as flexible models of wireless networks \cite{KuhnMoscibrodaWattenhoferPODC2005}.
Schneider and Wattenhofer \cite{schneider2008logstar} present a deterministic algorithm,
running in $O(\log^* n)$ rounds, for computing an MIS on a growth-bounded graph.
\paragraph{Lenzen's routing protocol.}
A key algorithmic tool that allows us to design constant- and near-constant-time round
algorithms is a recent deterministic routing protocol by Lenzen
\cite{lenzen2013routing} that disseminates a large volume of information
on a congested clique in constant rounds.
The specific routing problem, called an \textit{Information Distribution Task},
solved by Lenzen's protocol is the following.
Each node $i \in V$ is given a set of $n' \le n$ messages, each of size $O(\log n)$, $\{m_i^1, m_i^2, \ldots, m_i^{n'}\}$,
with destinations $d(m_i^j) \in V$, $j \in [n']$.
Messages are globally lexicographically ordered by their source $i$, destination $d(m_i^j)$, and $j$.
Each node is also the destination of at most $n$ messages.
Lenzen's routing protocol solves the Information Distribution Task in $O(1)$ rounds.
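As a small aid, a sketch (an illustrative helper, not part of the protocol itself) that checks whether a message pattern satisfies the task's preconditions, namely that each source holds at most $n$ messages and each node is the destination of at most $n$ messages:

```python
from collections import Counter

def valid_idt_instance(messages, n):
    """messages[src] lists the destinations of src's messages.
    Checks the Information Distribution Task preconditions: every
    source holds at most n messages and every node is the destination
    of at most n messages."""
    recv = Counter(dst for src in messages for dst in messages[src])
    return (all(len(dests) <= n for dests in messages.values())
            and all(c <= n for c in recv.values()))

print(valid_idt_instance({0: [1, 2], 1: [2], 2: []}, n=3))  # True
print(valid_idt_instance({0: [1, 1, 1, 1]}, n=3))           # False
```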
\paragraph{General Notation.}
For a subset $S \subseteq V$, $G[S]$ denotes \textit{induced} subgraph of $G$ by set $S$;
thus $G[S] = (S, E')$ where $E' = \{\{u, v\} \mid u, v \in S \mbox{ and } \{u, v\} \in E\}$.
In the context of our MST algorithm we will interpret metric distances $d(u,v)$ as edge weights;
we will use $wt(u, v)$ and $d(u, v)$ interchangeably.
Given an edge-weighted graph $G = (V, E)$ and an edge set $E' \subseteq E$,
we denote the sum of all edge-weights in $E'$ as $wt(E')$.
We use $\Delta$ to denote the maximum degree of a graph; sometimes, to avoid
ambiguity we use $\Delta(G)$ to denote maximum degree of graph $G$.
All logarithms are assumed to have base 2 unless otherwise specified.
We say an event occurs \textit{with high probability} (w.h.p.), if the probability of that event is at least $(1-1/n^c)$ for a constant $c \ge 1$.
\section{3-Ruling Sets in \texorpdfstring{$O(\log \log \log n)$}{O(log log log
n)} Rounds}
\label{sec:ruling}
In this section, we show how nodes in $V$ can use the underlying clique
communication network $H$ to compute, in expected-$O(\log \log \log n)$ rounds,
a $3$-ruling set of an arbitrary spanning subgraph $G$ of $H$.
At a high level, our $3$-ruling set algorithm can be viewed as having three steps.
In the first step, the graph is decomposed into $O(\log \log n)$ degree-based classes and at the end of this step every node knows the class it belongs to.
In the next subsection, we describe this \textit{degree-decomposition step} and show that it runs in expected $O(\log \log \log n)$ rounds.
In the second step, each vertex $v$ of the given graph $G$ joins a set $S$ independently with probability $p_v$,
where $p_v$ depends on $v$'s class as defined in the degree-decomposition step.
This \textit{vertex-selection step} yields a set $S$ that will be shown to have two
properties: (i) the expected number of edges in the induced subgraph $G[S]$ is
$O(n \cdot \poly(\log n))$; and (ii) with high probability, every vertex
in $G$ is either in $S$ or has a neighbor in $S$.
Given the degree-decomposition, the vertex-selection step is elementary and requires no communication.
In the third step, we use the 2-ruling set algorithm of Berns et al.~\cite{berns2012arxiv,berns2012facloc}.
We show that, on an $n$-node graph with $O(n \cdot \mbox{poly}(\log n))$ edges, this algorithm
runs in expected-$O(\log \log \log n)$ rounds.
We will refer to this algorithm from~\cite{berns2012arxiv,berns2012facloc} as the
\textit{2-ruling set algorithm}.
Putting these three steps together yields a $3$-ruling set algorithm that runs in $O(\log \log \log n)$ rounds in expectation.
\subsection{Degree-Decomposition Step}
\label{sub:degree-decomposition}
Let $G = (V, E)$ be an arbitrary graph.
Let $U_1$ be the set of all nodes in $G$ with degrees in the range $[n^{1/2}, n)$.
Let $V_1$ be the remaining nodes, i.e., $V_1 = V \setminus U_1$.
Let $U_2$ be the set of all nodes in $V_1$ with degrees in $G[V_1]$ belonging
to the range $[n^{1/4}, n^{1/2})$.
The decomposition continues in this manner until $V$ is partitioned into sets
$U_1, U_2, \ldots$.
We now provide a more formal description.
For $k = 0, 1, 2, \ldots$, let
$D_k = n^{1 / 2^k}$. The $D_k$'s will serve as degree thresholds and will lead
to a vertex partition. Let $k^* = \lceil \log \log n \rceil$. Note that
$1 < D_{k^*} \leq 2$. Let $V_0 = V$, $G_0 = G$, and
$U_1 = \{v \in V_0 \mid \degree_{G_0}(v) \in [D_1, D_0)\}$. For
$1 \leq k < k^*$, let \[V_k = V_{k-1} \setminus U_k, \qquad G_k = G[V_k],
\qquad U_{k+1} = \{v \in V_k \mid \degree_{G_k}(v) \in [D_{k+1}, D_k)\}\]
Let $V_{k^*} = V_{k^*-1} \setminus U_{k^*}$, $G_{k^*} = G[V_{k^*}]$, and
$U_{k^*+1} = V_{k^*}$.
See Figure~\ref{fig:degree-decomposition} for an illustration of this decomposition.
Let $N_G(v)$ denote the set of neighbors of vertex $v$ in
graph $G$. Here are some easy observations:
\begin{itemize}
\item[(i)] For $0 \leq k \leq k^*$, $\Delta(G_k) < D_k$.
\item[(ii)] For $1 \leq k \leq k^*+1$, if $v \in U_k$ then
$|N_G(v) \cap V_{k-1}| < D_{k-1}$.
\item[(iii)] For $1 \leq k \leq k^*+1$, if $v \in U_k$ then
$|N_G(v) \cap U_j| < D_j$ for $j = 1, 2, \ldots k-1$.
\end{itemize}
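To make the decomposition concrete, here is a centralized sketch (toy graph; the distributed version is what the paper implements) that peels off $U_1, U_2, \ldots$ by residual degree:

```python
import math

def degree_decomposition(n, adj, V):
    """Peel V into classes U_1, ..., U_{k*+1}: in round k, remove every
    node whose residual degree lies in [D_k, D_{k-1}), D_k = n^(1/2^k)."""
    k_star = math.ceil(math.log2(math.log2(n)))
    D = [n ** (1.0 / 2 ** k) for k in range(k_star + 1)]
    remaining = set(V)
    classes = []
    for k in range(1, k_star + 1):
        U_k = {v for v in remaining
               if D[k] <= len(adj[v] & remaining) < D[k - 1]}
        classes.append(U_k)
        remaining -= U_k
    classes.append(remaining)        # U_{k*+1}: everything left over
    return classes

# Toy graph on n = 16 nodes, so k* = ceil(log log 16) = 2 and D = [16, 4, 2].
adj = {v: set() for v in range(16)}
for v in range(1, 6):                    # star at 0 -> deg(0) = 5 in [4, 16)
    adj[0].add(v); adj[v].add(0)
for u, v in [(6, 7), (7, 8), (6, 8)]:    # triangle -> residual degree 2
    adj[u].add(v); adj[v].add(u)

U = degree_decomposition(16, adj, range(16))
print(sorted(U[0]))   # [0]
print(sorted(U[1]))   # [6, 7, 8]
```

Note that the leaves of the star land in $U_3 = U_{k^*+1}$: their degrees in $G$ never fall in a class range once the center is removed, matching the observation that a node's class is determined by residual, not original, degree.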
\begin{figure}
\begin{boxedminipage}{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{degree-decomposition}
\end{boxedminipage}
\caption{Degree-Decomposition Step.
$U_1$ is the set of all nodes in $G$ with degrees in the range $[n^{1/2}, n)$ and
$V_1$ is the remaining nodes.
$U_2$ is the set of all nodes in $V_1$ with degrees in $G[V_1]$ belonging
to the range $[n^{1/4}, n^{1/2})$.
The decomposition continues in this manner until all nodes belong to some $U_k$.
We use $k^*$ to denote $\lceil \log\log n\rceil$. Assuming that $\log\log n = k^*$,
we see that $U_{k^*}$ is the set of nodes that have degree in $G[V_{k^*-1}]$ in the range $[2, 4)$.
Note that a node $v$ that belongs to $U_{k+1}$ could have degree in $G$ that is much
larger than $D_k = n^{1/2^k}$.
\label{fig:degree-decomposition}}
\end{figure}
Now we describe an algorithm to compute this degree-decomposition; in particular,
we precisely describe how each node $v$ computes an index $k(v) \in [k^* + 1]$ such that
$v \in U_{k(v)}$.
Below, we first describe at a high level a 2-phase approach that we use to compute the index
$k(v)$ for each vertex $v$.
Subsequently we will flesh out our approach with necessary details and show that it is correct and can be implemented in $O(\log \log \log n)$ rounds on a congested clique.
\begin{description}
\item[Lazy phase:] Let $t = \lceil 1 + \log \log \log n \rceil$. The sets
$U_1, U_2, \ldots, U_t$ are identified in a leisurely manner, one-by-one, in
$O(\log \log \log n)$ rounds. At the end of this phase each vertex
$v \in \cup_{i=1}^t U_i$ knows the index $k(v) \in [t]$ such that
$v \in U_{k(v)}$.
\item[Speedy phase:] The set of remaining vertices, namely $V_t$, induces a
graph $G_t$ whose maximum degree is less than
\[D_t \leq n^{1 / 2^{1 + \log \log \log n}} = n^{1 / (2 \log \log n)}.\]
This upper bound on the maximum degree helps us compute the index values $k(v)$
for the remaining vertices at a faster rate.
We first show that each vertex $v$
in $G_t$ can acquire knowledge of the graph induced by the ball $B_{G_t}(v,k^*)$
in $O(\log \log \log n)$ rounds via a fast \textit{ball-growing algorithm}. (Recall that
$k^* = \lceil \log \log n \rceil$.) We then show that
$G[B_{G_t}(v,k^*)]$ contains enough information for $v$ to determine
$k(v) \in [k^* + 1]$ via local computation. Therefore, after each vertex
$v \in V_t$ acquires complete knowledge of the radius-$k^*$ ball centered at it,
it can locally compute index $k(v)$ and proceed to the vertex-selection step.
\end{description}
\noindent We now present the \textit{Lazy-phase algorithm} executed by all
vertices $v \in G$.
\begin{algorithm}[H]
\caption{Lazy-phase algorithm at vertex $v$}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\STATE $k(v) \leftarrow 0$
\FOR{$i \leftarrow 1$ \TO $t$}
\STATE $s(v) \leftarrow |\{u \in N_G(v) \mid 1 \leq k(u) < i\}|$
\IF{$degree_G(v) - s(v) \in [D_i, D_{i-1})$}
\STATE $k(v) \leftarrow i$
\STATE Send $k(v)$ to all neighbors
\STATE \textbf{break}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\begin{lemma}
The Lazy-phase algorithm runs in $O(\log \log \log n)$ rounds and at the end of
the algorithm, for each vertex $v \in \cup_{j=1}^t U_j$, $k(v)$ has a value in
$[t]$ such that $v \in U_{k(v)}$. For any vertex $v \notin \cup_{j=1}^t U_j$,
$k(v)$ is set to $0$.
\end{lemma}
\begin{proof}
Given that the sets $U_1, U_2, \ldots, U_i$ have been determined, and that the
members of each are known to every node in the network, each node can locally
determine its degree in $G_i = G[V_i]$ and thus determine its membership in
$U_{i+1}$. Each node can then broadcast whether or not it has joined $U_{i+1}$,
thus providing knowledge of $U_{i+1}$ to every node in the network. It follows
that the implementation of the Lazy-phase algorithm requires exactly
$t = \lceil 1 + \log \log \log n \rceil$ rounds of communication to complete.
\end{proof}
\noindent We now present the \textit{Speedy-phase algorithm} executed by vertex
$v$. Note that the Speedy-phase algorithm is only executed at vertices $v$ for
which $k(v)$ is $0$ after the Lazy-phase algorithm. In other words, the
Speedy-phase algorithm is only executed at vertices $v$ in $G_t$, the graph
induced by vertices not in $\cup_{j=1}^t U_j$.
The key idea of the Speedy-phase algorithm is that once each node $v$ in $G_t$ has acquired knowledge of $G_t[B_{G_t}(v, r)]$, then in constant rounds of communication, each node $v$ can ``double'' its knowledge, i.e., acquire knowledge of $G_t[B_{G_t}(v, 2r)]$. This is done by each node $v$ sending knowledge of $G_t[B_{G_t}(v, r)]$ to all nodes in $B_{G_t}(v, r)$; the key is to establish that this volume of communication can be achieved on a congested clique in constant rounds.
This idea has appeared in a slightly different context in \cite{LenzenWattenhoferBAPODC2010}.
\begin{algorithm}[H]
\caption{Speedy-phase algorithm at vertex $v$}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\STATE \COMMENT{Growing the ball $B_{G_t}(v,k^*)$}
\STATE Each node sends a list of all of its neighbors in $G_t$ to each of
its neighbors (in $G_t$) \COMMENT{After which each $v \in V_t$ knows
$G[B_{G_t}(v,1)]$}
\FOR{$i \leftarrow 0$ \TO $\lceil \log \log \log n \rceil - 1$}
\label{algo:speedy:ballstart}
\STATE Send a description of $G[B_{G_t}(v,2^i)]$ to all nodes in
$B_{G_t}(v,2^i)$
\STATE Construct $G[B_{G_t}(v, 2^{i+1})]$ from $G[B_{G_t}(u, 2^i)]$
received from all $u \in B_{G_t}(v, 2^i)$
\label{algo:speedy:ballend}
\ENDFOR
\STATE Locally compute $k(v) \in [k^* + 1]$ such that $v \in U_{k(v)}$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
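The doubling performed in Lines 4-5 can be illustrated with a small centralized sketch (toy path graph; the sketch tracks only the ball vertex sets, whereas the algorithm also ships the induced edges):

```python
def double_balls(balls):
    """Given balls[v] = B(v, r), one merging round yields B(v, 2r):
    the union of the radius-r balls of all nodes in B(v, r)."""
    return {v: set().union(*(balls[u] for u in balls[v])) for v in balls}

# Path graph 0 - 1 - 2 - 3 - 4
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
balls1 = {v: {v} | adj[v] for v in adj}   # radius-1 balls
balls2 = double_balls(balls1)             # radius-2 balls
print(sorted(balls2[0]))  # [0, 1, 2]
print(sorted(balls2[2]))  # [0, 1, 2, 3, 4]
```

After $\lceil \log \log \log n \rceil$ such rounds the radius reaches $2^{\lceil \log \log \log n \rceil} \ge k^*$, which is exactly the loop bound in the Speedy-phase algorithm.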
\begin{lemma}
The Speedy-phase algorithm above runs in $O(\log \log \log n)$ rounds in the
congested-clique model and when this algorithm completes execution, each vertex $v$
in $G_t$ knows $G[B_{G_t}(v,k^*)]$.
\end{lemma}
\begin{proof}
Line~2 of the Speedy-phase algorithm can be completed in a constant number of rounds using Lenzen's routing protocol because each node needs only to send and receive $O(D_t)$ messages to/from $O(D_t)$ neighbors (each message listing a neighbor and destined for a neighbor), as the maximum degree of $G_t$ is less than $D_t$.
In implementing the Speedy-phase algorithm, the key step is to perform Line~4 in
$O(1)$ rounds of communication. If this can be done, then after
$O(\log \log \log n)$ rounds, each node $v$ remaining in $V_t$ will have
knowledge of its entire neighborhood graph out to a distance of
$2^{\lceil \log \log \log n \rceil} \geq \lceil \log \log n \rceil = k^*$ hops
away from $v$.
Since $G_t$ has maximum degree less than $D_t$, the neighborhood graph
$G[B_{G_t}(v,2^i)]$ can be completely described by listing all $O(D_t^{2^i+1})$
edges. Thus, such a neighborhood can be communicated from $v$ to another node
(in particular, to any other node in $B_{G_t}(v,2^i)$) via
$O(D_t^{2^i + 1}) = O(n^{(2^i + 1) / 2^t})$ messages of size $O(\log n)$.
Therefore, to perform a given iteration of Line~4 within the Speedy-phase
algorithm, each node will need to send (and receive) $O(n^{(2^i + 1) / 2^t})$
messages (of size $O(\log n)$) to $O(D_t^{2^i}) = O(n^{2^{i-t}})$ other nodes in
the network. As above, we can use Lenzen's routing protocol to perform this
task in $O(1)$ rounds as long as the total number of messages to be sent (and
received) by each node is $O(n)$.
Thus, Line~4 of the Speedy-phase algorithm can be executed in a constant number
of rounds if $n^{(2^{i+1}+1) / 2^t}$ $= O(n)$; in other words, if
$2^{i+1} + 1 \leq 2^t$, or $i \leq t - 2 = \lceil \log \log \log n \rceil - 1$.
This lower bound on the maximum value of $i$ that still allows Line~4 to be
completed in $O(1)$ rounds is precisely the final index in the for-loop
(Line~3). This completes the proof.
\end{proof}
\begin{lemma}
For any graph $H$ and a vertex $v$ in $H$, suppose that $v$ knows the graph
induced by $B_H(v,k^*)$. Then $v$ can locally compute the index
$k(v) \in [k^* + 1]$ such that $v \in U_{k(v)}$.
\end{lemma}
\begin{proof}
The proof is by induction. Whether a vertex $u$ is in $U_1$ is determined by its
degree in $H$. Since $v$ knows $H[B_H(v,k^*)]$ it can determine via local
computation which $u \in B_H(v,k^*-1)$ belong to $U_1$ and which don't. As the
inductive hypothesis, suppose that for some $i \geq 1$, $v$ has determined for
all $u \in B_H(v,k^*-i)$ the following information:
\begin{itemize}
\item[(i)] if $u \in \cup_{j=1}^i U_j$, then $v$ knows $k(u) \in [i]$ such that
$u \in U_{k(u)}$.
\item[(ii)] if $u \not\in \cup_{j=1}^i U_j$, then $v$ knows that
$u \not\in \cup_{j=1}^i U_j$.
\end{itemize}
Now consider a vertex $u \in B_H(v,k^*-i-1)$ such that
$u \not\in \cup_{j=1}^i U_j$. In order to determine if $u \in U_{i+1}$, vertex
$v$ needs to check if the \textit{residual degree} of $u$, defined as
\begin{equation}
\label{residualDegree}
r(u) := degree_H(u) - |N_H(u) \cap (\cup_{j=1}^i U_j)|
\end{equation}
belongs to the interval $[D_{i+1}, D_i)$. In other words, we need to check that
the degree of $u$ after we have deleted all neighbors in $\cup_{j=1}^i U_j$ is
in the range $[D_{i+1}, D_i)$. Given the information that $v$ knows about all
$u \in B(v,k^*-i)$ (by the inductive hypothesis), vertex $v$ can compute the
residual degree $r(u)$ for each $u \in B_H(v,k^*-i-1)$. Therefore for all such
$u$, vertex $v$ can determine if $u \in U_{i+1}$ or not. This completes the
inductive step of the proof.
Now since $B_H(v,0) = \{v\}$, it follows from the above inductive argument that
$v$ can determine the index $k(v) \in [k^* + 1]$ such that $v \in U_{k(v)}$.
\end{proof}
\subsection{Vertex-Selection Step}
\label{sub:vertex-selection}
\begin{algorithm}[H]
\caption{Vertex-Selection Step\label{algo:vertex-selection}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}
\IF{$v \in U_k$ for $k = 1, 2, \ldots, k^*$}
\STATE $v$ is selected with probability $\min\left(\frac{2 \log n}{D_k}, 1\right)$
\ENDIF
\IF{$v \in U_{k^*+1}$}
\STATE $v$ is selected with probability 1
\ENDIF
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\noindent
As mentioned earlier, the vertex-selection step randomly and independently samples nodes in $G$, with
each node $v$ sampled with a probability $p_v$ that depends on the class $U_{k(v)}$ it belongs to.
Specifically, if $v$ belongs to $U_k$ then $v$ is independently selected with probability $\min(2\log n/D_k,1)$.
Algorithm~\ref{algo:vertex-selection} shows pseudocode for the vertex-selection step.
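A minimal executable sketch of this sampling rule follows (the node-to-class map `k_of` and the value of $n$ are illustrative; logarithms are base 2, as everywhere in the paper):

```python
import math
import random

def select_vertices(n, k_of, k_star, rng=random):
    """Each node v with class k(v) <= k* joins S independently with
    probability min(2 log n / D_{k(v)}, 1), where D_k = n^(1/2^k);
    nodes of class k* + 1 join S with probability 1."""
    S = set()
    for v, k in k_of.items():
        if k == k_star + 1:
            S.add(v)
            continue
        D_k = n ** (1.0 / 2 ** k)
        p = min(2 * math.log2(n) / D_k, 1.0)
        if rng.random() < p:
            S.add(v)
    return S

k_of = {0: 1, 1: 2, 2: 3}   # node -> class index (here k* = 2)
S = select_vertices(2 ** 20, k_of, k_star=2, rng=random.Random(0))
print(2 in S)               # True: class k* + 1 is always selected
```

For $n = 2^{20}$, class-2 nodes are also selected with probability 1, since $2 \log n / D_2 = 40/32 > 1$.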
Let $S$ be the set of vertices that are selected. Let $e(S)$ denote the set of edges in the induced graph $G[S]$.
\begin{lemma} \label{lemam:sizeeS}
$E[|e(S)|] = O(n \cdot \log^2 n \cdot \log \log n)$.
\end{lemma}
\begin{proof}
Consider an arbitrary vertex $v \in V$ and let $k$, $1 \leq k \leq k^*+1$ be
such that $v \in U_k$. We will show that the expected number of edges between
$v$ and nodes in $\cup_{j \leq k} U_j$ is less than $4 k \cdot \log^2 n$.
In the graph $G$, node $v$ has fewer than $D_{k-1}$ neighbors in $U_k$. Thus, if
$1 \leq k \leq k^*$, the expected number of edges in $e(S)$ between $v$ and
nodes in $U_k$ is at most
\[\frac{2 \log n}{D_k} \cdot \sum_{u \in N_G(v) \cap U_k} \frac{2 \log n}{D_k}
< \frac{4 \log^2 n \cdot D_{k-1}}{D_k^2} = 4 \log^2 n.\]
If $k = k^* + 1$, the number of edges between $v$ and other nodes in $U_{k^*+1}$
is at most $1$.
In the graph $G$, node $v$ has fewer than $D_j$ neighbors in $U_j$, for $j < k$.
Thus, if $1 \leq k \leq k^*$, the expected number of edges in $e(S)$ between $v$
and nodes in $U_j$, $j < k$, is at most
\[\frac{2 \log n}{D_k} \cdot \sum_{u \in N_G(v) \cap U_j} \frac{2 \log n}{D_j}
< \frac{4 \log^2 n}{D_k} \leq 4 \log^2 n.\]
If $k = k^* + 1$, the expected number of edges in $e(S)$ between $v$ and nodes
in $U_j$, $j < k$, is
\[1 \cdot \sum_{u \in N_G(v) \cap U_j} \frac{2 \log n}{D_j} < 2 \log n.\]
Hence, summing over $j$, the expected total number of edges in $e(S)$ between
$v$ and $\cup_{j \leq k} U_j$ is less than $4 k \cdot \log^2 n$. Using the fact
that $k \leq 1 + \log \log n$, we see that the expected total number of edges in
$e(S)$ between $v$ and $\cup_{j \leq k} U_j$ is $O(\log^2 n \cdot \log \log n)$.
The result follows.
\end{proof}
\begin{lemma}
\label{lemma:highProb}
For any $v \in V$,
$\Pr(v \mbox{ is in } S \mbox{ or } v \mbox{ has a neighbor in } S) \geq 1 - 1 /
n^2$.
\end{lemma}
\begin{proof}
Suppose that $v \in U_k$, for some $1 \leq k \leq k^*$. Vertex $v$ has at least
$D_k$ neighbors in $V_{k-1}$. Each such neighbor is selected for $S$ with
probability at least $\min \{(2 \log n) / D_k, 1\}$. If $2 \log n \geq D_k$,
then each of these neighbors is selected for $S$ with probability $1$, so $v$ has
a neighbor in $S$ with probability $1$. Otherwise, we have
\[\Pr(v \mbox{ has no neighbor in } S) \leq \left(1 - \frac{2 \log
n}{D_k}\right)^{D_k}
\leq e^{-2 \log n} \leq \frac{1}{n^{2.8}}.\]
Also, if $v \in U_{k^*+1}$, then $v$ is selected for $S$ with probability $1$.
\end{proof}
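To make the vertex-selection step concrete, the following is a minimal centralized sketch in Python. It assumes the degree thresholds $D_k = n^{1/2^k}$ (so that $D_{k-1} = D_k^2$, as used in the analysis above), takes the logarithm to be natural, and assumes $k^* \approx \log \log n$; the function name and interface are ours, not from the algorithm's formal description.

```python
import math
import random

def select_vertices(classes, n, rng=random.random):
    """Sketch of the vertex-selection step (illustrative interface).

    classes: dict mapping vertex -> class index k(v) in {1, ..., k*+1}.
    A vertex in class k <= k* joins S with probability min(2*log(n)/D_k, 1),
    where D_k = n**(1/2**k); every vertex in class k*+1 joins S outright.
    Assumes n >= 4 so that k* = int(log2(log2 n)) is well defined.
    """
    k_star = int(math.log2(math.log2(n)))    # assumed: k* ~ log log n
    S = set()
    for v, k in classes.items():
        if k == k_star + 1:
            S.add(v)                          # U_{k*+1}: selected w.p. 1
        else:
            d_k = n ** (1.0 / 2 ** k)
            p = min(2 * math.log(n) / d_k, 1.0)
            if rng() < p:
                S.add(v)
    return S
```

For instance, with $n = 16$ the thresholds are $D_1 = 4$ and $D_2 = 2$, so the selection probability is capped at $1$ in both classes.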
\subsection{2-Ruling Set Algorithm}
\label{sub:2ruling}
We now briefly describe the $2$-ruling set algorithm of
Berns et al.~\cite{berns2012arxiv,berns2012facloc} that
runs on a congested clique in expected $O(\log\log n)$ rounds.
This text is borrowed largely from \cite{berns2012arxiv,berns2012facloc} and the
reader is advised to
consult these papers for a more detailed description.
Pseudocode for the algorithm appears in Algorithm~\ref{alg:2RulingSetAlg}.
The algorithm proceeds in
\textit{Iterations} and in each Iteration some number of nodes become
inactive and we
measure progress by the number of edges remaining in the graph
induced by active nodes.
In an Iteration $i$, each active node ($S$ denotes the set of
active nodes) joins a ``Test'' set $T$
independently with probability $q = \sqrt{\frac{n}{m}}$ (Line 6), where
$m$ is the number of edges in the graph induced by active nodes.
The probability $q$ is set such that the expected
number of edges in $G[T]$ is equal to $n$.
If the number of edges in $G[T]$ is no more than $4n$, then we can ship off
$G[T]$ to a single node and have that node locally compute an MIS.
All of this takes a constant number of rounds and
then we delete $T$ and its neighborhood $N(T)$ from the active set $S$.
(Lines 7-10). Because $m$, the number of edges in $G[S]$, decreases,
the probability $q$ increases (Line 12), while the expected number of edges in
$G[T]$ during the next iteration remains bounded above by $n$.
Berns et al.~\cite{berns2012arxiv,berns2012facloc} have analyzed this algorithm to show that it
runs in expected $O(\log\log n)$ rounds.
We now sketch this analysis to observe that for $n$-node graphs with
$O(n \cdot \mbox{poly}(\log n))$ edges, this 2-ruling set algorithm runs in
expected $O(\log\log\log n)$ rounds.
\begin{algorithm}[t]
\caption{\textsc{$2$-RulingSet}}
\label{alg:2RulingSetAlg}
\begin{boxedminipage}{\textwidth}
\small
\textbf{Input:} $(G, S)$\\
\textbf{Output:} A $2$-ruling set $R$ of $G[S]$
\small
\begin{tabbing}
......\=a....\=b....\=c....\=d....\=e....\=f....\=g....\=h......\kill
1.\>$R \leftarrow \emptyset$\\
2.\>$m \leftarrow e[G[S]]$ (Each node $x$ broadcasts its degree in $G[S]$ to all others)\\
3.\>$q \leftarrow \sqrt{\frac{n}{m}}$\\
4.\>\textbf{while} $m > 2 n$ \textbf{do}\\[1mm]
5.\>\>$T \leftarrow \emptyset$\\
6.\>\>Each $x \in S$ joins $T$ independently with probability $q$ and
broadcasts its choice.\\
7.\>\>\textbf{if} $e[G[T]] \leq 4 n$ \textbf{then}\\
8.\>\>\> $L \leftarrow $\textsc{LocalMIS}$(G[T])$ \\
9.\>\>\>$R \leftarrow R \cup L$\\
10.\>\>\>$S \leftarrow S\setminus (T \cup N(T))$ \\
11.\>\>$m \leftarrow e[G[S]]$\\
12.\>\>$q \leftarrow \sqrt{\frac{n}{m}}$\\[1mm]
13.\> $L\leftarrow $\textsc{LocalMIS}$(G[S])$ \\
14.\>$R \leftarrow R \cup L$\\
15.\>\textbf{return} $R$.
\end{tabbing}
\end{boxedminipage}
\end{algorithm}
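For intuition, the iteration structure of \textsc{2-RulingSet} can be sketched as a centralized Python simulation. Shipping $G[T]$ to a single node and running \textsc{LocalMIS} there are collapsed into one process, and a greedy smallest-ID-first MIS stands in for the unspecified \textsc{LocalMIS}; the interface below is ours.

```python
import math
import random

def two_ruling_set(adj, seed=0):
    """Centralized sketch of 2-RulingSet; adj maps vertex -> set of
    neighbours. In the congested clique, G[T] is shipped to one node
    whenever e(G[T]) <= 4n; here that node's work happens in-process."""
    rng = random.Random(seed)
    n = len(adj)
    S = set(adj)
    R = set()

    def num_edges(U):
        return sum(1 for u in U for w in adj[u] if w in U and u < w)

    def local_mis(U):
        I = set()
        for u in sorted(U):              # greedy stand-in for LocalMIS
            if not (adj[u] & I):
                I.add(u)
        return I

    m = num_edges(S)
    while m > 2 * n:
        q = math.sqrt(n / m)
        T = {u for u in S if rng.random() < q}
        if num_edges(T) <= 4 * n:        # G[T] is small: solve it locally
            R |= local_mis(T)
            S -= T | {w for u in T for w in adj[u]}   # drop T and N(T)
        m = num_edges(S)
    return R | local_mis(S)              # finish off the sparse remainder
```

On a sparse input (e.g., a cycle, where $m \le 2n$ already holds) the loop is skipped entirely and a single local MIS is returned, matching Lines 13-14.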
\begin{lemma}
Given an $n$-vertex graph $G$ with $O(n \cdot \poly(\log n))$ edges, Algorithm~\ref{alg:2RulingSetAlg}, \textsc{2-RulingSet} (derived from~\cite{berns2012arxiv,berns2012facloc}),
computes a $2$-ruling set of $G$ in expected-$O(\log \log \log n)$ rounds.
\end{lemma}
\begin{proof}
Algorithm~\ref{alg:2RulingSetAlg}, \textsc{2-RulingSet} computes a
$2$-ruling set on a subgraph of a congested clique in expected-$O(\log \log n)$
rounds~\cite{berns2012arxiv,berns2012facloc}. The analysis of this algorithm proceeds by defining $O(\log \log n)$
threshold values $L_k = n^{1 + 1 / 2^k}$, for $k = 0, 1, 2, \ldots$, and
computing a bound on the expected number of rounds required for the number of
edges remaining in the unprocessed portion of the graph to fall from roughly
$L_i$ to roughly $L_{i+1}$. In Lemma~9 of \cite{berns2012arxiv}, it is proved
that this expected number of rounds is uniformly bounded by a constant for every
$k$.
For our use of the $2$-ruling set algorithm in the present work, we observe
that, if the number of edges in the input subgraph (of the congested clique) is
already sufficiently small, then the expected round-complexity of the $2$-ruling
set algorithm is also much less than would be the case in general. Specifically,
we consider a $2$-ruling set computation on an input subgraph having
$O(n \cdot \poly(\log n))$ edges -- in this case, the computation begins
having already reached threshold $L_{k'}$, where
$n^{1 + 1 / 2^{k'}} \approx n \cdot \log^c n$ (for a constant $c$). More
precisely, let $k' = \lfloor \log \log n - \log \log \log n - \log c \rfloor$;
then $\frac{1}{2^{k'}} \geq \frac{c \log \log n}{\log n} = \log_n \log^c n$ and
$n^{1 + 1 / 2^{k'}} \geq n \log^c n$.
Therefore, using the same analysis as occurs in the proof of Theorem 2 in
\cite{berns2012arxiv} (and in which $\mathcal{T}(k)$ represents the number of
iterations necessary to progress from having at most $L_{k-1}$ edges remaining
to at most $L_k$ edges remaining), we see that the expected running time (in
rounds) of the $2$-Ruling Set algorithm applied to an input graph having only
$O(n \log^c n)$ edges can be written as
\begin{align*}
\E\left[O(1) + \sum\limits_{k=k'}^{\log \log n} %
O(\mathcal{T}(k))\right] &= O(1) + %
\sum\limits_{k=k'}^{\log \log n} O\left(\E[\mathcal{T}(k)]\right)\\[1mm]
&= O(1) + \sum\limits_{k = \lfloor \log \log n - \log \log \log n - \log c
\rfloor}^{\log \log n} O(1)\\[1mm]
&= O(\log \log \log n)
\end{align*}
which completes the proof.
\end{proof}
\subsection{Putting it all together}
We now combine the degree-decomposition step, the vertex-selection step, and
the $2$-ruling set algorithm to obtain a $3$-ruling set algorithm that runs in
$O(\log \log \log n)$ rounds in expectation.
\begin{algorithm}[H]
\caption{3-Ruling Set Algorithm}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\STATE Each node $v \in V$ uses the Lazy-phase and Speedy-phase algorithms to
determine the index $k(v) \in [k^* + 1]$ such that $v \in U_{k(v)}$
\STATE Run the vertex-selection step to compute $S$
\STATE $I \leftarrow $ \textsc{2-RulingSet}$(G[S])$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\begin{lemma}
With probability at least $1 - 1 / n$, $I$ is a $3$-ruling set of $G$.
\end{lemma}
\begin{proof}
Consider a vertex $v \in V$. By Lemma \ref{lemma:highProb}, $v$ is in $S$ or has
a neighbor in $S$ with probability at least $1 - 1 / n^2$. By a union bound over
all $n$ vertices, with probability at least $1 - 1/n$ every vertex of $G$ is
within distance $1$ of $S$. Since $I$ is a $2$-ruling set of $G[S]$, every
vertex of $S$ is within distance $2$ of $I$. Thus, with probability at least
$1 - 1 / n$, every vertex of $G$ is within distance $3$ of $I$, i.e., $I$ is a
$3$-ruling set of $G$.
\end{proof}
\section{MIS in Growth Bounded Graphs in Constant Rounds}
\label{sec:mis_in_doubling}
Given a metric space $(V,d)$ with constant doubling dimension, we show in this section how to compute an MIS
of a distance-threshold graph $G_r = (V, E_r)$, for any real $r \ge 0$, in
a \textit{constant} number of rounds on a congested clique.
\subsection{Simulation of the Schneider-Wattenhofer MIS algorithm.}
\label{subsection:SW-MIS}
Before we describe our MIS algorithm, we describe an algorithmic tool that will
prove quite useful.
We know that $G_r$ is growth-bounded and in particular the size of a largest independent set in a ball $B_{G_r}(v, r)$
for any $v \in V$ is $O(r^\rho)$, where $\rho$ is the doubling dimension of $(V, d)$.
Schneider and Wattenhofer \cite{schneider2008logstar} present a deterministic $O(\log^*n)$-round
algorithm to compute an MIS for growth-bounded graphs in the $\mathcal{CONGEST}$ model.
Suppose that $f$ is a constant such that the Schneider-Wattenhofer algorithm runs in at most
$f \log^*n$ rounds (note that $f$ depends on $\rho$).
We can \textit{simulate} the Schneider-Wattenhofer algorithm in the congested clique model
by (i) having each node $v \in V$ grow a ball of radius $f \log^*n$, i.e., gather a description of the induced graph
$G[B_{G_r}(v, f\log^* n)]$ and then (ii) having each node $v$ \textit{locally simulate}
the Schneider-Wattenhofer algorithm using the description of $G[B_{G_r}(v, f\log^* n)]$.
Note that since the Schneider-Wattenhofer algorithm takes at most $f \log^* n$ rounds,
it suffices for each node $v \in V$ to know the entire topology of $G[B_{G_r}(v, f\log^* n)]$
to determine if it should join the MIS.
The ``ball growing'' step mentioned above can be implemented by using Lenzen's routing protocol as follows, provided
$\Delta$ (the maximum degree of $G_r$) is not too large.
Each node $v$ can describe its neighborhood using at most $\Delta$ messages of size $O(\log n)$ each.
Node $v$ aims to send each of these $\Delta$ messages to every node $w$ such that $d(v, w) \le r \cdot f\log^*n$.
In other words, $v$ aims to send messages to all nodes in $B_M(v, r \cdot f\log^*n)$.
Since $B_{G_r}(v, f \log^*n) \subseteq B_M(v, r \cdot f\log^*n)$, it follows that the messages sent by $v$ are received
by all nodes in $B_{G_r}(v, f \log^*n)$.
We now bound the size of $B_M(v, r \cdot f\log^*n)$ as follows.
Since $M$ has doubling dimension $\rho$, the size of any MIS in $B_M(v, r \cdot f\log^*n)$ is $O((\log^*n)^{\rho})$ and hence
the total number of nodes in $B_M(v, r \cdot f\log^*n)$ is $O(\Delta \cdot (\log^*n)^{\rho})$.
Therefore every node $v$ has $O((\log^*n)^{\rho}\cdot \Delta^2 )$ messages to send, each of size $O(\log n)$.
Every node is the receiver of at most $O((\log^*n)^{\rho} \Delta^2)$ messages by
similar arguments.
Therefore, if $\Delta = O(\sqrt{n}/(\log^* n)^{\rho/2})$, we can use Lenzen's routing protocol to route these messages in $O(1)$ time. We refer to
this simulation of the Schneider-Wattenhofer algorithm \cite{schneider2008logstar}
as Algorithm \textsc{SW-MIS}. The following theorem summarizes this simulation result.
\begin{theorem}\label{thm:logstar}
If $\Delta(G_r) = O(\sqrt{n}/(\log^* n)^{\rho/2})$ then Algorithm
\textsc{SW-MIS} computes an MIS of $G_r$ in $O(1)$ rounds on a congested clique.
\end{theorem}
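The ball-growing step underlying \textsc{SW-MIS} is simply a truncated breadth-first search from each node; the sketch below gives the hop-by-hop view of what Lenzen's routing delivers in $O(1)$ rounds. The function name is ours.

```python
def gather_ball(adj, v, t):
    """Return all vertices within t hops of v in the graph adj
    (vertex -> set of neighbours). Each node gathers such a ball for
    t = f * log* n and then locally simulates the Schneider-Wattenhofer
    algorithm on the induced subgraph."""
    ball, frontier = {v}, {v}
    for _ in range(t):
        frontier = {w for u in frontier for w in adj[u]} - ball
        ball |= frontier
    return ball
```

Once every node knows the topology of its radius-$t$ ball, a $t$-round distributed algorithm can be replayed locally with no further communication.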
\subsection{Constant-Round MIS Algorithm}
\label{subsection:constantRoundMIS}
Our MIS algorithm consists of 4 phases.
Next we describe, at a high level, what each phase accomplishes.
\begin{description}
\item[Phase 1:] We compute vertex-subset $P \subseteq V$ such that (i) every
vertex in $V$ is at most one hop away from some vertex in $P$ and (ii) $G_r[P]$
has maximum degree bounded above by $c\cdot\sqrt{n}$, for some constant $c>0$.
\item[Phase 2:]
We process the graph $G_r[P]$ and compute two subsets $W$ and $Q$ of $P$
such that (i) every vertex in $P$ of degree at least $c\cdot n^{1/4}$ is either in
$W$ or has a neighbor in $W$ and (ii) $Q \subseteq W$ is an independent set
such that every vertex in $W$ is at most 2 hops from some vertex in $Q$.
Thus, if we delete $W$ and all neighbors of vertices in $W$ what remains is a graph
of maximum degree less than $c \cdot n^{1/4}$.
Let $V'$ denote the set $P \setminus (W \cup N(W))$.
Thus, at the end of Phase 2, $Q$ is a 3-ruling set of $G_r[W \cup N(W)]$
and $\Delta(G_r[V']) <c\cdot n^{1/4}$.
\item[Phase 3:] We compute an MIS $R$ of the graph $G_r[V']$
by simply calling \textsc{SW-MIS}.
\item[Phase 4:] Since $Q$ is a 3-ruling set of $G_r[W \cup N(W)]$ and
$R$ is an MIS of $G_r[V']$,
we see that $Q \cup R$ is a 3-ruling set of $G_r[P]$ and thus a 4-ruling set of $G_r$.
In the final phase, we start with the 4-ruling set $Q \cup R$ and expand this into an MIS
$I$ of $G_r$.
\end{description}
Phase 2 is randomized and runs in constant rounds w.h.p. The remaining phases are deterministic and run in constant rounds each.
Algorithm \textsc{LowDimensionalMIS} summarizes our algorithm.
We now describe each phase in more detail.
\begin{algorithm}[t]
\caption{\textsc{LowDimensionalMIS}\label{algo:lowDimensionalMIS}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\REQUIRE {$G_r = (V, E_r)$}
\ENSURE {A maximal independent set $I \subseteq V$ of $G_r$}
\STATE $P \leftarrow \textsc{ReduceDegree}(G_r)$ \COMMENT{Phase 1}
\STATE $(W, Q) \leftarrow \textsc{SampleAndPrune}(G_r, P)$ \COMMENT{Phase 2}
\STATE $V' \leftarrow V \setminus (W \cup N(W))$; $R \leftarrow \textsc{SW-MIS}(G_r, V')$ \COMMENT{Phase 3}
\STATE $S \leftarrow Q \cup R$; $I \leftarrow \textsc{RulingToMIS}(S)$ \COMMENT{Phase 4}
\RETURN $I$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\subsection{Phase 1: Reduce Degree to $O(\sqrt{n})$}
\begin{algorithm}[!ht]
\caption{\textsc{ReduceDegree} (Phase 1)\label{algo:reduceDegree}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\REQUIRE {$G_r = (V,E_r)$}
\ENSURE {$P \subseteq V$ such that (i) $V = P \cup N(P)$ and (ii)
$\Delta(G_r[P]) < c \cdot \sqrt{n}$ for some constant $c > 0$.}
\STATE Partition $V$ (arbitrarily) into $\lceil\sqrt{n}\rceil$ subsets: $V_1,
V_2, \ldots V_{\lceil\sqrt{n}\rceil}$, each of size at most $\sqrt{n}$
\FORALL{$i \leftarrow 1$ \textbf{to} $\lceil \sqrt{n} \rceil$ \textbf{in
parallel}}
\STATE Send $G_r[V_i]$ to a vertex $v_i$ with lowest ID in $V_i$
\STATE Vertex $v_i$ executes $P_i \leftarrow \textsc{LocalMIS}(G_r[V_i])$
\ENDFOR
\STATE $P \leftarrow \cup_{i=1}^{\lceil\sqrt{n}\rceil} P_i$
\RETURN $P$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\noindent
Algorithm \textsc{ReduceDegree} describes Phase~1 of our algorithm.
The algorithm arbitrarily partitions the vertex-set of $G_r$ into $\lceil\sqrt{n}\rceil$ groups of size at most $\sqrt{n}$ each and then separately and in parallel computes an MIS of each part.
Since each part has $\sqrt{n}$ vertices, each part induces a subgraph with at most $n$ edges and
therefore each such subgraph can be shipped off to a distinct node and
MIS on each subgraph can be computed locally.
(The subroutine \textsc{LocalMIS} in Line~4 refers to an unspecified MIS algorithm that is executed
locally at a node.)
Using the fact that $G_r$ is growth-bounded, we show that
the union of all the MIS sets (set $P$, Line~5) induces a graph with maximum degree bounded by $c \cdot \sqrt{n}$
for some constant $c$. Also, we show that Phase~1 runs in constant rounds (Lemma~\ref{lemma:si}).
\begin{lemma}\label{lemma:si}
Algorithm \textsc{ReduceDegree} completes in $O(1)$ rounds and returns a set
$P$ such that $\Delta(G_r[P]) < c \cdot n^{1/2}$ for some constant $c > 0$
(that depends on the doubling dimension of the underlying space).
\end{lemma}
\begin{proof}
Algorithm \textsc{ReduceDegree} starts by arbitrarily
partitioning $V$ into $\lceil \sqrt{n} \rceil$ disjoint subsets $V_1, \ldots,
V_{\lceil\sqrt{n}\rceil}$ each of size at most $\sqrt{n}$ which can be done in
$O(1)$ rounds easily.
Since $|V_i| \le \sqrt{n}$, $G_r[V_i]$ contains at most $n$ edges, for any $i
\in [\lceil \sqrt{n} \rceil]$.
Using Lenzen's routing protocol, all knowledge of $G_r[V_i]$ can be shipped off to a designated
vertex $v_i$ in $V_i$ (e.g., vertex with smallest ID in $V_i$) in $O(1)$ rounds.
The vertex $v_i$ then computes an MIS $P_i$ of $G_r[V_i]$ locally as shown in
Line 4 of Algorithm \textsc{ReduceDegree}.
Finally, $v_i$ informs vertices in $P_i$ of their selection into the MIS.
The union of the $P_i$'s, denoted $P$, is returned by the algorithm.
This discussion shows that Algorithm \textsc{ReduceDegree} completes
in $O(1)$ rounds.
Consider a vertex $u \in P_i$ for some $i\in [\lceil \sqrt{n} \rceil]$. In
$G_r[P]$, vertex $u$ cannot have neighbors in $P_i$ since $P_i$ is an
independent set in $G_r[V_i]$.
Consider a set $P_j$, $j\neq i$.
The distance between any two vertices in $N(u) \cap P_j$ must be more than $r$
(these nodes are independent) and it must be at most $2r$ (by the triangle
inequality).
Since the underlying metric space has doubling dimension $\rho$, it follows that
$|N(u) \cap P_j| \le 2^{\rho}$.
Hence the degree of $u$ in $G_r[P]$ is bounded above by $2^{\rho} \cdot (\lceil
\sqrt{n} \rceil - 1)$.
The result follows.
\end{proof}
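A centralized sketch of Phase 1 in Python follows, with the metric abstracted as an oracle \texttt{within\_r(u, v)} reporting whether $d(u,v) \le r$, and a greedy pass standing in for \textsc{LocalMIS}; all names are illustrative.

```python
import math

def reduce_degree(vertices, within_r):
    """Sketch of ReduceDegree: partition the vertex list into parts of
    size ~sqrt(n) and take a (greedy) MIS of each part; returns the
    union P of the per-part MISes."""
    n = len(vertices)
    size = max(1, math.isqrt(n))
    P = []
    for i in range(0, n, size):
        part, mis = vertices[i:i + size], []
        for u in part:                        # greedy MIS of G_r[V_i]
            if all(not within_r(u, w) for w in mis):
                mis.append(u)
        P.extend(mis)
    return P
```

For points on a line with $r = 1$, two points less than distance $1$ apart in the same part cannot both survive, which is exactly the degree-reduction effect exploited by Lemma~\ref{lemma:si}.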
\subsection{Phase 2: Sample and Prune}
\label{sub:phase2}
\begin{algorithm}[t]
\caption{\textsc{SampleAndPrune} (Phase 2)\label{algo:sampleAndPrune}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\REQUIRE {$(G_r, P)$}
\ENSURE {$(W, Q)$, $W \subseteq P$ such that $\{v \in P \mid
\degree_{G_r[P]}(v) \ge n^{1/4}\} \subseteq W \cup N(W)$;
independent set $Q \subseteq W$ such that $Q$ is a 2-ruling set of $G_r[W]$.}
\FORALL{$v \in P$ \textbf{in parallel}}
\STATE Vertex $v \in P$ adds itself to $W_i$ with probability $1/n^{1/4}$
for $i = 1, 2, \ldots, \lceil 2 \cdot \log n \rceil$.
\ENDFOR
\STATE $W \leftarrow \cup_{i=1}^{\lceil 2 \log n \rceil} W_i$
\FORALL{$i \leftarrow 1$ \textbf{to} $\lceil 2 \log n \rceil$ \textbf{in
parallel}}
\STATE Send $G_r[W_i]$ to a vertex $w_i$, where $w_i$ is the vertex of
rank $i$ in the sequence of vertices in $V$ sorted by increasing ID
\STATE Vertex $w_i$ executes $X_i \leftarrow \textsc{LocalMIS}(G_r[W_i])$
\ENDFOR
\STATE $Q \leftarrow \textsc{SW-MIS}(G_r[\cup_{i=1}^{\lceil 2 \log n \rceil}
X_i])$
\RETURN $(W, Q)$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\noindent
Algorithm \textsc{SampleAndPrune} implements Phase 2 of our MIS algorithm.
It takes the induced subgraph $G_r[P]$ as input and starts by computing a set $W
\subseteq P$ using a simple random sampling approach.
Specifically, for each $i = 1, 2, \ldots, \lceil 2 \cdot \log n \rceil$, each vertex in $P$ simply
adds itself to a set $W_i$ independently, with probability $1/n^{1/4}$.
We start by proving a useful property of $W$.
\begin{lemma}\label{lemma:nice}
Every node $u$ with degree at least $n^{1/4}$ in $G_r[P]$ has a neighbor in $W$ with probability at least
$1-\frac{1}{n^2}$.
\end{lemma}
\begin{proof}
Let $u \in P$ be a node with degree at least $n^{1/4}$ in $G_r[P]$.
For any neighbor $v$ of $u$, $\Pr(v \notin W) \leq \left(1 - \frac{1}{n^{1/4}}\right)^{\lceil 2\log n \rceil}$.
Therefore the probability that no neighbor of $u$ is in $W$ is at most
$\left(1-\frac{1}{n^{1/4}}\right)^{\lceil 2\log n \rceil \cdot n^{1/4}}$.
This is bounded above by $e^{-\lceil 2\log n \rceil}$, which is at most $1/n^2$.
\end{proof}
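The sampling step of \textsc{SampleAndPrune} (Lines 1-4) can be sketched directly; in the code below we take the logarithm to be base 2 and treat the vertex set as a Python collection, both assumptions of ours.

```python
import math
import random

def sample_W(P, n, seed=0):
    """Sketch of the sampling step of SampleAndPrune: ceil(2 log n)
    independent samples W_1, W_2, ..., each vertex of P joining W_i
    with probability n**(-1/4); W is their union."""
    rng = random.Random(seed)
    reps = math.ceil(2 * math.log2(n))
    p = n ** -0.25
    W_list = [{v for v in P if rng.random() < p} for _ in range(reps)]
    return W_list, set().union(*W_list)
```

Repeating the sample $\lceil 2 \log n \rceil$ times is what boosts the per-vertex failure probability of Lemma~\ref{lemma:nice} down to $1/n^2$.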
After using random sampling to compute $W$, Algorithm \textsc{SampleAndPrune}
then ``prunes'' $W$ in constant rounds to construct a subset $Q \subseteq W$ such
that $Q$ is a 2-ruling set of $W$. In the rest of this subsection we prove that
Algorithm \textsc{SampleAndPrune}
does behave as claimed here.
\begin{lemma} \label{lemma:edgew}
The number of edges in $G_r[W_i]$ is $O(n)$ w.h.p., for each $i = 1, 2, \ldots, \lceil 2 \log n \rceil$.
\end{lemma}
\begin{proof}
We first bound the size of the set $W_i$ and the maximum degree of $G_r[W_i]$ for any $i = 1, 2, \ldots, \lceil 2 \log n \rceil$.
Observe that $\E[|W_i|] = n^{3/4}$ and since nodes join $W_i$ independently,
an application of Chernoff's bound~\cite{dubhashiBook} yields
$\Pr(|W_i| \leq 6n^{3/4}) \geq 1 - \frac{1}{n^2}$.
To bound $\Delta(G_r[W_i])$ we use the fact that the degree of any node in $G_r[P]$ is $O(\sqrt{n})$ and therefore the expected degree of any node in
$G_r[W_i]$ is $O(n^{1/4})$.
Another application of Chernoff's bound, followed by a union bound over all
nodes in $W_i$, yields that with probability at least $1 - \frac{1}{n}$ every
node in $G_r[W_i]$ has degree $O(n^{1/4})$.
Hence, with high probability, the number of edges in $G_r[W_i]$ is $O(n)$.
\end{proof}
\begin{lemma}\label{lemma:wimis}
The set $ X := \cup_{i=1}^{\lceil 2\log n \rceil} X_i \subseteq P$ is
computed in constant rounds w.h.p.~in Lines 4-6
of Algorithm \textsc{SampleAndPrune}.
Furthermore, every vertex in $W$ is at most one hop away from some vertex in $X$.
\end{lemma}
\begin{proof}
We argue that Line~5 can be implemented in $O(1)$ rounds w.h.p.
By Lemma~\ref{lemma:edgew}, each node has to send at most $O(n^{1/4})$ messages to $w_i$ and w.h.p.~each $w_i$ receives at most $O(n)$ messages. Therefore, by Lenzen's routing protocol, Line 5 takes $O(1)$ rounds.
To repeat this for each $i = 1, 2, \ldots, \lceil 2\log n \rceil$ in parallel,
every node
has to send at most $\lceil 2\log n \rceil \cdot n^{1/4}$ messages.
Since the $w_i$'s are distinct, no $w_i$ needs to receive more than $O(n)$
messages.
Each $v \in W$ belongs to $W_i$ for some $i$ and is therefore at
most one hop from some vertex in $X_i$.
\end{proof}
\begin{lemma}\label{lemma:qrule}
W.h.p.~it takes constant number of rounds to compute $Q$.
Furthermore, $Q$ is a 2-ruling set of $G_r[W]$.
\end{lemma}
\begin{proof}
Consider a node $v \in \cup_{i=1}^{\lceil 2 \log n\rceil} {X_i}$.
Since each $X_i$ is an independent set, by using the growth-bounded property
of $G_r[X_i]$, we see that the number of neighbors of $v$ in $X_i$ is bounded
above by a constant. Hence, the maximum degree in
$G_r \left[\cup_{i=1}^{\lceil 2 \log n\rceil} {X_i}\right]$ is $O(\log n)$.
Since the maximum degree of this growth-bounded graph is $O(\log n)$, by
Theorem~\ref{thm:logstar} an MIS of this graph can be computed in constant
rounds by using \textsc{SW-MIS}.
A node $v \in W$ belongs to some $W_i$ and is therefore at most one hop from
some node in $X_i$. Also, every node in every $X_i$ is at most one hop from some
node in $Q$. Also, $Q$ is independent and therefore $Q$ is a 2-ruling set of
$G_r[W]$.
\end{proof}
\subsection{Phase 4: Ruling Set to MIS}
\label{sub:phase4}
Algorithm \textsc{RulingToMIS} implements Phase 4 of our MIS algorithm. The
algorithm takes as input the graph $G_r$ and the vertex subset
$S = Q \cup R$ where $Q$ and $R$ are the outputs of Phase~2 and
Phase~3, respectively.
Note that Lemma~\ref{lemma:qrule} implies that $S$ is a 4-ruling set of $G_r$.
This property is used to cover $G_r$ with balls of radius $4r$,
centered at members of $S$.
Consider the graph $G_{9r} = (V, E_{9r})$ where $E_{9r} = \{\{u,v\} \mid u,
v \in V \mbox{ and } d(u, v) \leq 9r\}$.
In Lemma~\ref{lemma:coloring} we prove a constant upper bound on the maximum degree
$\Delta(G_{9r}[S])$. This allows us to compute a proper vertex coloring of $G_{9r}[S]$
using a constant number of colors.
This coloring guides the rest of the algorithm, providing a
schedule for processing the vertices in the aforementioned balls centered at vertices in $S$.
For each color $i$, the algorithm processes all vertices in $S$ colored $i$ in parallel.
For each vertex $v \in S$ colored $i$, let $B_v$ denote the subset of $B(v, 4r)$ of vertices
still ``active''.
The algorithm computes an MIS of the induced subgraph $G_r[B_v]$; this computation occurs in
parallel for each $v$ colored $i$.
Since the vertex coloring is with respect to $G_{9r}$, two balls $B_v$ and $B_{v'}$ that are
processed in parallel do not intersect and in fact are not even connected by an edge.
Thus processing in parallel all of the balls $B_v$ for $v$ colored $i$ has no untoward
consequences.
We note that due to the growth-bounded property, every independent set of $G_r[B_v]$ has a
constant number of vertices.
Hence, we can use a simple sequential algorithm to compute an MIS of $G_r[B_v]$ -- repeatedly
each vertex with smallest ID in its neighborhood joins the MIS and the graph is updated.
We call this MIS algorithm \textsc{SequentialMIS} and use it in Line~9 in Algorithm
\textsc{RulingToMIS}.
Since every vertex in $V$ is at distance at most $4r$ from some vertex in $S$, every vertex
in $V$ is in some ball $B_v$ and is eventually processed.
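\textsc{SequentialMIS} admits a very short sketch: since every independent set in $G_r[B_v]$ has constant size, a greedy smallest-ID-first pass suffices. Adjacency is again a vertex-to-neighbour-set map, and the interface is ours.

```python
def sequential_mis(ball, adj):
    """Greedy sketch of SequentialMIS on the induced subgraph G_r[ball]:
    repeatedly, the smallest-ID vertex none of whose neighbours has been
    selected joins the MIS."""
    I = set()
    for u in sorted(ball):
        if not (adj[u] & I):        # no neighbour already selected
            I.add(u)
    return I
```

Because the MIS of each ball has constantly many vertices, this loop corresponds to a constant number of communication rounds when run distributively within a ball.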
\begin{algorithm}[t]
\caption{\textsc{RulingToMIS} (Phase 4) \label{algo:rulingMIS}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\REQUIRE $(G_r, S=Q\cup R)$
\ENSURE A maximal independent set $I \subseteq V$ of $G_r$
\STATE $E_{9r} \leftarrow \left\{\left\{u, v\right\} \mid u, v \in S
\mbox{ and } d(u,v) \leq 9r\right\}$
\STATE $G_{9r}[S] \leftarrow (S, E_{9r})$
\STATE Send $G_{9r}[S]$ to a vertex $v^*$ with lowest ID in $S$
\STATE Vertex $v^*$ executes $\Psi \leftarrow
\textsc{LocalColoring}(G_{9r}[S])$
with color palette $\{1,2,\ldots, \gamma+1\}$. Here
$\gamma$ is the constant from Lemma~\ref{lemma:coloring}.
\STATE $V' \leftarrow V$
\FOR{$i = 1$ \TO $i = \gamma+1$}
\FORALL{$v \in S$ such that $\Psi(v) = i$ \textbf{in parallel}}
\STATE $B_v \leftarrow \left\{u \mid u \in V' \mbox{ and } d(u, v) \leq
4r\right\}$
\STATE $I_v \leftarrow \textsc{SequentialMIS}(G_r[B_v])$
\ENDFOR
\STATE $V' \leftarrow V'\setminus \left( \cup_{v \in S \wedge
\Psi(v) = i}\left( N(I_v) \right)\right)$
\ENDFOR
\STATE $I \leftarrow \mathop{\cup}_{v\in S} I_v$
\RETURN $I$
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
\begin{lemma}\label{lemma:coloring}
$\Delta(G_{9r}[S]) \leq \gamma$, where $\gamma$ is a constant.
\end{lemma}
\begin{proof}
Consider any node $v \in S$ and neighbors $N_{G_{9r}}(v)$ of $v$
in $G_{9r}[S]$. By the triangle inequality, any pair of nodes in
$N_{G_{9r}}(v)$ are at most distance $18r$ apart and by
Lemma~\ref{lemma:qrule}, at least distance $r$
apart. Hence $N_{G_{9r}}(v) \cup \{v\}$ has a constant aspect
ratio and, by the growth-bounded property, we have $|N_{G_{9r}}(v) \cup \{v\}| \leq 18^{\rho} =
\gamma$.
\end{proof}
\begin{lemma}\label{lemma:constantRounds}
Algorithm \textsc{RulingToMIS} executes in a constant number of rounds.
\end{lemma}
\begin{proof}
Since the maximum degree of $G_{9r}[S]$ is a constant, the entire description
of $G_{9r}[S]$ can be shipped to a designated vertex $v^*$ (e.g., a vertex with
the smallest ID) using Lenzen's routing protocol in $O(1)$ rounds. Then $v^*$ can compute a coloring
of $G_{9r}[S]$ such that no two adjacent vertices have the same color. Notice
that the maximum degree of $G_{9r}[S]$ is bounded above by $\gamma$, hence
$\gamma+1$ colors are sufficient.
The constant upper bound on the size of the color palette implies that the
\textbf{for}-loop starting in Line~6 executes a constant number of iterations.
In each iteration $i$, all nodes $v \in S$ colored $i$ are processed. Specifically,
an MIS of $G_r[B_v]$ is computed and since the size of every independent set in $G_r[B_v]$
is bounded above by a constant (by appealing to the growth-bounded property), Algorithm \textsc{SequentialMIS}
terminates in constant rounds. Hence, each iteration of the outer-\textbf{for}-loop takes
a constant number of rounds of communication.
\end{proof}
\begin{lemma}\label{lemma:correctness}
The set $I$ computed by Algorithm \textsc{RulingToMIS} is an MIS of $G_r$.
\end{lemma}
\begin{proof}
First we show that $I$ is an independent set by contradiction.
Suppose that for some $p, q \in I$, $p$ and $q$ are adjacent in $G_r$.
Then it must be the case that both $p$ and $q$ were selected in the same
iteration of the outer-\textbf{for}-loop; otherwise, the selection of one
of the two nodes would render the other unavailable for selection.
If $p$ and $q$ are selected in the same outer-\textbf{for}-loop iteration,
it must be the case that $p \in B_v$ and $q \in B_{v'}$ where $v \not= v'$, but
$v$ and $v'$ have the same color.
Since $d(p, v) \le 4r$, $d(q, v') \le 4r$, and $d(p, q) \le r$, using the triangle
inequality we see that $d(v, v') \le 9r$.
But, if this is the case then there is an edge between $v$ and $v'$ in $G_{9r}[S]$
and these two vertices would not have the same color, contradicting our earlier
conclusion that $v$ and $v'$ have the same color.
We now prove that $I$ is maximal.
Since $S$ is a 4-ruling set of $G_r$, every node $u \in V$ is in $B(v, 4r)$ for
some $v \in S$.
Suppose that $v$ is colored $i$ and therefore $B_v$ is processed in iteration $i$ of
the outer-\textbf{for}-loop.
If $u \in B_v$ then Algorithm \textsc{SequentialMIS} will pick either $u$ or a neighbor
of $u$ to join the MIS.
Otherwise, if $u \not\in B_v$ then it must be the case that in an earlier iteration
of the outer-\textbf{for}-loop, either $u$ or a neighbor of $u$ was selected to be in
the MIS.
\end{proof}
\section{Constant-Approximation to MST in Constant Rounds}
\label{sec:mst}
For a metric space $(V,d)$, define a \textit{metric graph} $G = (V, E)$ as the clique on set $V$ with each edge $\left\{u,v\right\}$
having weight $d(u,v)$.
In this section we present a constant-round algorithm for computing a constant-factor
approximation of an MST of given metric graph $G = (V, E)$
with constant doubling dimension.
We require that at the end of the MST algorithm, each node in $V$ know the entire spanning tree.
Our overall approach is as follows.
We start by showing how to ``sparsify'' $G$ and construct a spanning subgraph $\Ghat = (V, \hat{E})$, $\hat{E} \subseteq E$, such that
$wt(MST(\Ghat)) = O(wt(MST(G)))$. Thus computing an MST on $\Ghat$ yields an $O(1)$-approximation to an MST on $G$.
The sparsification is achieved via the construction of a collection of maximal
independent sets (MIS) \textit{in parallel} on different distance-threshold subgraphs of $G$.
Thus we have reduced the problem of constructing a constant-approximation of an MST on the metric graph $G$ to two problems:
(i) the MIS problem on distance-threshold graphs and (ii) the problem of computing an MST of
a sparse graph $\Ghat$.
Using the fact that the underlying metric space $(V, d)$ has constant doubling dimension, we show that $\Ghat$ has
linear (in $|V|$) number of edges. As a result, problem (ii) can be easily solved in constant number of rounds by simply shipping $\Ghat$ to a single node for local MST computation.
In Section~\ref{sec:mis_in_doubling}, we have already shown how to compute an MIS of a
distance-threshold graph
in a constant doubling dimensional space on a congested clique in constant number of rounds.
Finally, we show that due to the particular bandwidth usage of our MIS algorithm, we can run all
of the requisite MIS computations in parallel in constant rounds.
\subsection{MST Algorithm}
\label{subsection:algorithm}
We now present our algorithm in detail; the reader is encouraged to follow along the pseudocode in Algorithm \ref{algo:construct}.
We partition the edge set $E$ of the metric graph into two subsets $E_\ell$ (\textit{light} edges) and
$E_\mathpzc{h}$ (\textit{heavy} edges) as follows. Let $d_m = \max\left\{d(u,v) \mid
\left\{u,v\right\} \in E \right\}$ denote the diameter of the metric space
\footnote{If the encoding of distances requires more than $O(\log n)$ bits, it suffices to know only the most significant $\log n$ bits of the encoding of $d_m$, which act as a proxy for $d_m$; this increases the approximation factor by only a constant.}.
Define $E_\ell = \left\{\left\{u,v\right\} \mid d(u,v) \leq
d_m/n^3\right\} $ and $E_\mathpzc{h} = E \setminus E_\ell$.
We deal with these two subsets $E_\ell$ and $E_\mathpzc{h}$ separately.
\begin{algorithm}[t]
\caption{\textsc{MST-Approximation} \label{algo:construct}}
\begin{boxedminipage}{\textwidth}
\small
\begin{algorithmic}[1]
\REQUIRE A metric graph $G=(V,E)$ on metric space $(V,d)$
\ENSURE A tree $\hat{\mathcal{T}}$ such that $wt(\hat{\mathcal{T}}) =
O\left(wt\left(MST\left(G\right)\right)\right)$
\STATE $d_m = \max\{d(u, v) \mid \{u, v\} \in E\}$
\STATE $E_\ell \leftarrow \left\{\left\{u,v\right\} \mid d(u,v) \leq \frac{d_m}{n^3}\right\}$ \COMMENT{Processing light edges}
\STATE $S \leftarrow $ \textsc{ComputeMIS}$(G[E_0])$ where $E_0 \leftarrow \left\{\left\{u,v\right\} \mid d(u,v) \leq \frac{d_m}{n^2}\right\}$
\STATE $\hat{E}_\ell \leftarrow \left\{\left\{u,v\right\} \mid u \in S \mbox{ and } d(u,v) \leq \frac{2\cdot d_m}{n^2}\right\}$
\STATE $E_\mathpzc{h} \leftarrow \left\{\left\{u,v\right\} \mid d(u, v) > \frac{d_m}{n^3}\right\}$ \COMMENT{Processing heavy edges}
\STATE $h \leftarrow \left\lceil \frac{3\log n}{\log c_1} \right\rceil$; $r_0 \leftarrow \frac{d_m}{c_1^h}$
\FOR{$i=1$ \TO $h$ \textbf{in parallel}} \label{algo:construct:forstart}
\STATE $r_i \leftarrow (c_1)^i \cdot r_0$
\STATE $E_i \leftarrow \left\{\left\{u,v\right\} \mid d(u,v) \leq r_i\right\}$
\STATE $V_i \leftarrow $\textsc{ComputeMIS}$(G[E_i])$
\STATE $\hat{E}_i \leftarrow \left\{\left\{u,v\right\} \mid u, v\in V_i \mbox{
and } d(u,v) \leq c_2\cdot r_i\right\}$
\ENDFOR \label{algo:construct:forend}
\STATE $\hat{E}_\mathpzc{h} \leftarrow \cup_{i=1}^{h} \hat{E}_i$; $\hat{E} \leftarrow \hat{E}_\ell \cup \hat{E}_\mathpzc{h}$
\RETURN \textsc{MST-Sparse}$(G[\hat{E}])$ \label{algo:construct:sparsemst}
\end{algorithmic}
\end{boxedminipage}
\end{algorithm}
First consider the set of light edges $E_\ell$ and note that
$G[E_\ell]$ may have several components. We would like to select an edge set
$\hat{E}_\ell$ such that \begin{inparaenum}[(i)]
\item any pair of vertices that are in the same connected component in
$G[E_\ell]$ are also in the same connected component in $G[\hat{E}_\ell]$, and
\item $wt(\hat{E}_\ell) = O(wt(MST(G)))$. \end{inparaenum}
(Note that one can define $\hat{E}_\ell = E_\ell$ to obtain these two properties, but
we want to ``sparsify'' $E_\ell$; ideally we would like $|\hat{E}_\ell| = O(n)$, and we show this for metrics with constant doubling dimension.)
The algorithm for selecting $\hat{E}_\ell$ is as follows.
Let $S$ be an MIS of the distance-threshold graph $G_r$,
where $r = d_m/n^2$.
(Note that this MIS computation is not on the graph induced by $E_\ell$; observe the value of $r$.
This choice is what yields the properties of $\hat{E}_\ell$ described above.)
Define $\hat{E}_\ell = \left\{\left\{u,v\right\} \mid u \in S \mbox{ and } d(u,v) \leq
2\cdot d_m/n^2 \right\}$.
Note that $\hat{E}_\ell$ may not be a subset of $E_\ell$.
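As a concrete illustration of this light-edge selection, consider the following sequential Python sketch (our own illustration, not the congested-clique implementation; the function names are ours, and `greedy_mis` is a sequential stand-in for \textsc{ComputeMIS}):

```python
def greedy_mis(points, threshold, dist):
    """Sequential stand-in for ComputeMIS: a maximal independent set of the
    distance-threshold graph in which u, v are adjacent iff dist(u, v) <= threshold."""
    mis = []
    for p in points:
        if all(dist(p, q) > threshold for q in mis):
            mis.append(p)
    return mis

def light_edge_selection(points, dist, d_m):
    """Select E_hat_ell: take an MIS S of the threshold graph at radius d_m/n^2,
    then keep every pair {u, v} with u in S and dist(u, v) <= 2*d_m/n^2."""
    n = len(points)
    r = d_m / n**2
    S = greedy_mis(points, r, dist)
    return [(u, v) for u in S for v in points
            if u != v and dist(u, v) <= 2 * r]
```

Because $S$ is maximal, every vertex lies within distance $d_m/n^2$ of some vertex of $S$, which is exactly the property used by the connectivity argument for $\hat{E}_\ell$ (Lemma~\ref{lemma:connectivity}).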
Now we consider the set $E_\mathpzc{h}$ of heavy edges.
Let $c_1 > 1$ be a constant.
Let $h$ be the smallest positive integer such that $c_1^h \geq n^3$.
Observe that $h = \left\lceil \frac{3\log n}{\log c_1} \right\rceil$.
Let $r_0 = {d_m}/{c_1^h}$ (note that for any heavy edge $\{u, v\}$, $d(u, v) > r_0$) and let $r_i = c_1 \cdot r_{i-1}$, for $i>0$.
We construct $\hat{E}_\mathpzc{h}$ in \textit{layers} as follows.
Let $V_0 = V$ and, for $0<i\leq h$, let $V_i$ be an MIS of the subgraph $G[E_i]$ where $E_i = \left\{\left\{u,v\right\} \mid d(u,v) \leq r_i\right\}$.
Let $c_2 > c_1+2$ be a constant.
Define $\hat{E}_i$, the edge set at the layer $i$ as:
$\hat{E}_i = \left\{\left\{u,v\right\} \mid u,v \in V_i \mbox{ and } d(u,v) \leq
c_2\cdot r_i\right\}$.
We define $\hat{E}_\mathpzc{h} = \cup_{i=1}^h \hat{E}_i$ and $\hat{E} = \hat{E}_\mathpzc{h} \cup
\hat{E}_\ell$.
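The heavy-edge layering can likewise be sketched sequentially in Python (again our own illustration with hypothetical names; the paper computes the per-layer MIS calls in parallel, and `greedy_mis` stands in for \textsc{ComputeMIS}):

```python
import math
from itertools import combinations

def greedy_mis(points, threshold, dist):
    """Sequential stand-in for ComputeMIS on the distance-threshold graph."""
    mis = []
    for p in points:
        if all(dist(p, q) > threshold for q in mis):
            mis.append(p)
    return mis

def heavy_edge_layers(points, dist, d_m, c1=2.0, c2=5.0):
    """Build E_hat_h as the union over layers i = 1..h of the edges between
    vertices of the layer-i MIS whose length is at most c2 * r_i."""
    n = len(points)
    h = math.ceil(3 * math.log(n) / math.log(c1))  # smallest h with c1^h >= n^3
    r0 = d_m / c1**h
    E_hat = set()
    for i in range(1, h + 1):
        ri = c1**i * r0
        Vi = greedy_mis(points, ri, dist)          # MIS of G[E_i]
        E_hat.update((u, v) for u, v in combinations(Vi, 2)
                     if dist(u, v) <= c2 * ri)
    return E_hat
```

With $c_1 = 2$ and $c_2 = 5 > c_1 + 2$, every distance scale is covered by some layer, which is what the weight argument for heavy edges (Lemma~\ref{lemma:wtbig}) relies on.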
A key feature of our algorithm is that a layer $\hat{E}_i$ does not depend on
other layers and therefore these layers can be constructed in parallel.
We then call an as-yet-unspecified algorithm called \textsc{MST-Sparse}
that quickly computes an exact MST of $\Ghat = G[\hat{E}]$ in the congested clique model.
In the analysis that follows, we separately analyze the processing of light edges and heavy edges.
We first show the \textit{constant-approximation property} of $\Ghat$, which does not require the metric to have constant doubling dimension. Later we show that if the underlying metric has constant doubling dimension, then Algorithm~\ref{algo:construct} runs in a constant number of rounds w.h.p.
\subsection{Constant-Approximation Property}\label{sub:weight}
Let $\mathcal{T}$ be an MST of graph $G=(V,E)$. Let $\hat{\mathcal{T}}$ be an MST of the graph
$\Ghat = (V,\hat{E})$.
We now prove that $wt(\hat{\mathcal{T}})=O(wt(\mathcal{T}))$.
First we claim that the connectivity that edges in $E_\ell$ (i.e., the light edges)
provide is preserved by the edges selected into $\hat{E}_\ell$ (Lemma~\ref{lemma:connectivity})
and the total weight of these selected edges is not too high (Lemma~\ref{lemma:wtsmall}).
Later we prove a similar claim for heavy edges (Lemma~\ref{lemma:wtbig}).
\begin{lemma}\label{lemma:connectivity}
For any vertices $s$ and $t$ in $V$, if
there is an $s$-$t$ path in $G[E_\ell]$ then there exists an $s$-$t$ path
in $G[\hat{E}_\ell]$.
\end{lemma}
\begin{proof}
Consider an edge $\{u, v\} \in E_\ell$.
If $\left\{u,v\right\} \in \hat{E}_\ell$ then we are done. If
$\left\{u,v\right\} \notin \hat{E}_\ell$ then we show that
there exists a vertex $w$ such that $\{u, w\}, \{v, w\}
\in \hat{E}_\ell$.
Since $\{u, v\} \in E_\ell$, $d(u, v) \le d_m/n^3$.
Furthermore, since $\{u,v\} \notin \hat{E}_\ell$, neither $u$ nor $v$ is
in $S$, the MIS of $G_r$ with $r = d_m/n^2$.
Hence there is a vertex $w\in S$ such that
$d(u, w) \leq d_m/n^2$.
By the definition of $\hat{E}_\ell$, $\{u, w\} \in \hat{E}_\ell$.
By the triangle inequality, we have $d(v, w) \leq
d_m/n^2 + d_m/n^3$ which implies $\{v, w\} \in \hat{E}_\ell$.
The lemma follows by repeatedly applying the above result to each edge of the given $s$-$t$ path.
\end{proof}
\begin{lemma}\label{lemma:wtsmall}
$wt(\hat{E}_\ell) = O(wt(\mathcal{T}))$.
\end{lemma}
\begin{proof}
The weight of each edge in $\hat{E}_\ell$ is at most $2d_m/n^2$ and since there are
at most $n^2$ edges in $\hat{E}_\ell$ (trivially), we see that $wt(\hat{E}_\ell) = O(d_m)$.
We obtain the lemma by using the fact that the total weight of any spanning tree is bounded below by $d_m$.
\end{proof}
Consider an edge $\left\{u,v\right\} \in E(\mathcal{T})$. Let
$C(u)$ and $C(v)$ be the components containing $u$ and $v$ respectively
in the graph $\mathcal{T} \setminus \left\{u, v\right\}$.
\begin{lemma}\label{lemma:wtbig}
If $\left\{u,v\right\} \in E(\mathcal{T}) \cap E_\mathpzc{h}$ then
there exists an edge $\{u',v'\} \in \hat{E}$ such that (i) $d(u', v') \le c_2 \cdot d(u, v)$
and (ii) $u' \in C(u)$ and $v' \in C(v)$.
\end{lemma}
\begin{proof}
Let $i$ be the largest integer such that $r_i < d(u, v)$.
Hence $d(u, v) \le r_{i+1} = c_1 \cdot r_i \le (c_2-2) \cdot r_i$ (since
$c_2$ was chosen to be greater than $c_1+2$).
Let $u'$ and $v'$ be the nodes in the MIS $V_i$ of $G[E_i]$ nearest
to $u$ and $v$ respectively. Note that $u'$ could be $u$ and $v'$ could be
$v$.
Thus $d(u,u') \le r_i$ and $d(v,v') \le r_i$.
By the triangle inequality we have,
$d(u',v') \leq d(u',u) + d(u,v) + d(v,v') \leq r_i + (c_2 - 2) \cdot r_i + r_i \le c_2\cdot r_i < c_2 \cdot d(u, v).$
Hence $\{u',v'\} \in \hat{E}_i \subseteq \hat{E}$, and $d(u',v') \le c_2 \cdot r_i <
c_2 \cdot d(u,v)$.
Now note that $\{u, v\}$ is the lightest edge between a vertex in $C(u)$ and a vertex in $C(v)$ by
virtue of being an MST edge. Therefore, it is the case that $u' \in C(u)$ and $v' \in C(v)$ since
$d(u, u') < d(u, v)$ and $d(v, v') < d(u, v)$.
\end{proof}
\noindent
This lemma implies that for every cut $(X, Y)$ of $G$ and an MST edge $\{u, v\}$ that crosses
the cut, there is an edge $\{u', v'\}$ in $\Ghat$ also crossing cut $(X, Y)$ with weight
within a constant factor of the weight of $\{u, v\}$.
The following result follows from this observation and properties of $\hat{E}_\ell$ proved earlier.
\begin{theorem}\label{thm:weight}
Algorithm~\ref{algo:construct} computes a spanning tree $\hat{\mathcal{T}}$ of $G$ such that
$wt(\hat{\mathcal{T}}) = O\left(wt\left(MST\left(G\right)\right)\right)$.
\end{theorem}
\subsection{Constant Running Time}
The result of the previous subsection does not require that the underlying metric space $(V, d)$
have constant doubling dimension.
Now we assume that $(V, d)$ has constant doubling dimension and in this setting
we show that Algorithm \textsc{MST-Approximation} can be implemented in \textit{constant} rounds.
Even though the algorithm is described in a ``sequential'' style in Algorithm \ref{algo:construct},
it is easy to verify that most of the steps can be easily implemented in constant rounds in
the congested clique model.
However, to finish the analysis we need to show:
(i) that \textsc{ComputeMIS} executes in constant rounds,
(ii) that the $h = O(\log n)$ calls to \textsc{ComputeMIS} in Line 10 can be executed in parallel in
constant rounds, and (iii) that \textsc{MST-Sparse} in Line 13 can be implemented in constant
rounds.
In the following, we show (iii) by simply showing that $\Ghat$ has a linear number of edges.
We have shown (i) in the previous section, and we show (ii) later in this section.
We first show $|\hat{E}_\ell| = O(n)$ in Lemma~\ref{lemma:smallsize} and then argue about heavy edges.
\begin{lemma}\label{lemma:smallsize}
$|\hat{E}_\ell| = O(n)$.
\end{lemma}
\begin{proof}
For any edge $\left\{u,v\right\} \in \hat{E}_\ell$ either $u$ or $v$ or both
belong to $S$ (by construction). We orient edges such that an edge is directed
towards the node in $S$. If both end points are in $S$ then we add two
oppositely directed edges. We prove that the out-degree of a node is bounded
by a constant.
\noindent Consider a node $u$. Let $N_o(u)$ be the set of endpoints of all
outgoing edges of $u$. If $|N_o(u)| < 2$ then we are done; therefore
consider the case $|N_o(u)| \geq 2$. Consider any two nodes $v_i, v_j \in
N_o(u)$. By construction we have, $d(u,v_i) \leq 2\cdot d_m/n^2$
and $d(u,v_j) \leq 2\cdot d_m/n^2$. Therefore by the triangle inequality,
$d(v_i, v_j) \leq 4\cdot d_m/n^2$. Also, by the definition of orientation $v_i, v_j
\in S$ and therefore by the definition of $S$ we have, $d(v_i, v_j) > d_m/n^2$.
Hence the aspect ratio of $N_o(u)$ is at most $4$.
By the growth-bounded property, we have $|N_o(u)| = O(1)$. Hence, $|\hat{E}_\ell| = O(n)$.
\end{proof}
Now we show $|\hat{E}_\mathpzc{h}| = O(n)$.
We first show in the following lemma two useful properties of vertex-neighborhoods in the graph induced by $\hat{E}_i$.
\begin{lemma} \label{lemma:neighborhood}
For each $u \in V_i$, (i) $|N_i(u)| \leq c_3$ where $c_3={c_2}^{O(\rho)}$ and (ii)
$N_i(u) \cup \{u\}$ induces a clique in $G[E_j]$ for all $i > 0$ and $ j \geq
i + \delta$ where $\delta = \left\lceil \frac{\log 2c_2}{\log c_1}
\right\rceil$.
\end{lemma}
\begin{proof}
We first show that the aspect ratio of $N_i(u)$ is bounded by $2c_2$.
This follows from two facts:
(a) any two points in $N_i(u)$ are at least distance $r_i$ apart,
and (b) any point in $N_i(u)$ is at distance at most $c_2\cdot r_i$ from $u$
and therefore, by using the triangle inequality, any two points in $N_i(u)$ are at
most $2c_2\cdot r_i$ apart. Then using the bound from
the growth-bounded property~we obtain the result claimed in part (i).
Now we show part (ii) of the claim.
If $|N_i(u)| = 0$ then we are done.
If $|N_i(u)| = 1 $ then let $v \in N_i(u)$.
This implies $d(u,v) \leq c_2\cdot r_i \ < c_1^\delta \cdot r_i = r_{i+\delta}$
which implies $\left\{u,v\right\} \in E_j, j\geq i+\delta$.
\noindent Now assume $|N_i(u)| > 1$.
Consider any two distinct vertices $v, w \in N_i(u)$.
Since $\left\{u,v\right\}, \{u,w\} \in \hat{E}_i$ we have
$d(u,v) \leq c_2\cdot r_i$ and $d(u,w) \leq c_2\cdot r_i$.
By the triangle inequality, $ d(v,w) \leq 2c_2\cdot r_i \leq
c_1^\delta \cdot r_i = r_{i+\delta}$.
Therefore $\{v,w\} \in E_{i+\delta}$ and hence we have
$\{v,w\} \in E_j, \mbox{ for all } j \geq i + \delta$.
\end{proof}
The implication of the above result is that $|\hat{E}_i|$ is linear in size.
Since we use $O(\log n)$ layers in the algorithm, it immediately follows that $|\hat{E}_\mathpzc{h}|$ is $O(n\log n)$.
However, part (ii) of the above result implies that at most one of the nodes in $N_i(u) \cup \{u\}$ will be present in
$V_j$, $j\geq i+\delta$, since $V_j$ is an independent set of $G[E_j]$.
This helps us show the sharper bound of $|\hat{E}_\mathpzc{h}| = O(n)$ in the following.
Without loss of generality assume that $h$ is a multiple of $\delta$ (if not, add at most $\delta-1$
empty layers $\hat{E}_{h+1}, \hat{E}_{h+2},\ldots$ to ensure that this is the case).
Let
$$\beta(j) = \bigcup_{i=(j-1)\delta + 1}^{j\delta} \hat{E}_i\qquad\mbox{ for }j= 1, 2, \ldots, \frac{h}{\delta}$$
be a partition of the layers $\hat{E}_i$ into \textit{bands} of $\delta$ consecutive layers.
Let $\hat{E}_{odd} = \cup_{j: odd} \beta(j)$ and $\hat{E}_{even} = \cup_{j: even} \beta(j)$.
\begin{lemma}\label{lemma:layerlinear}
$|\hat{E}_{odd}| = O(n)$, $|\hat{E}_{even}| = O(n)$ and therefore $|\hat{E}| = O(n)$.
\end{lemma}
\begin{proof}
We prove the claim for $\hat{E}_{odd}$. The proof is essentially the same for $\hat{E}_{even}$.
We aim to prove the following claim by induction on $k$ (for odd $k$): for some constant $C > 0$,
\begin{equation}
\label{eqn:indStep}
\left|\bigcup_{j:odd \ge k} \beta(j)\right| \le C \cdot \left|\bigcup_{j:odd \ge k} V(j)\right|,
\end{equation}
where $V(j)$ is the set of vertices that have at least one incident edge in $\beta(j)$.
Setting $k = 1$ in the above inequality, we see that $|\hat{E}_{odd}| = |\cup_{j:odd \ge k} \beta(j)| = O(n)$.
To prove the base case, let $k'$ be the largest odd integer less than or equal to $h/\delta$. Then,
$\cup_{j :odd\ge k'} \beta(j) = \beta(k')$ and
$\cup_{j :odd\ge k'} V(j) = V(k')$.
Consider a vertex $v \in V(k')$.
By Lemma \ref{lemma:neighborhood}, there are at most $c_3$ edges incident on $v$ from any layer.
There are $\delta$ layers in $\beta(k')$ and therefore there are at most $c_3 \delta$ edges
from $\beta(k')$ incident on any vertex $v \in V(k')$.
Hence, $|\beta(k')| \le c_3 \delta |V(k')|$.
Therefore, for any constant $C \ge c_3\delta$, it is the case that
$|\cup_{j:odd \ge k'} \beta(j)| \le C \cdot |\cup_{j:odd \ge k'} V(j)|$.
Taking (\ref{eqn:indStep}) to be the inductive hypothesis, let us now consider
$|\cup_{j:odd \ge k-2} \beta(j)|$.
Then,
\begin{equation}
\left|\bigcup_{j:odd \ge k-2} \beta(j)\right| \le \left|\bigcup_{j:odd \ge k} \beta(j)\right| + |\beta(k-2)|
\le C \cdot \left|\bigcup_{j:odd \ge k} V(j)\right| + c_3\delta\cdot |V(k-2)|.
\end{equation}
The second inequality is obtained by applying the inductive hypothesis and the inequality $|\beta(k-2)|
\le c_3 \delta |V(k-2)|$.
By Lemma \ref{lemma:neighborhood}, at most half the vertices in $V(k-2)$ appear in $\cup_{j:odd \ge k} V(j)$.
Therefore, $|V(k-2) \setminus (\cup_{j:odd \ge k} V(j))| \ge |V(k-2)|/2$.
Hence,
$$\left|\bigcup_{j:odd \ge k-2} \beta(j)\right| \le C\cdot \left|\bigcup_{j:odd \ge k} V(j)\right| + 2c_3\delta \cdot \left|V(k-2) \setminus (\bigcup_{j:odd \ge k} V(j))\right|.$$
Picking $C \ge 2c_3\delta$, we then see that
$$\left|\bigcup_{j:odd \ge k-2} \beta(j)\right| \le C \cdot \left(\left|\bigcup_{j:odd \ge k} V(j)\right| + \left|V(k-2) \setminus \left(\bigcup_{j:odd \ge k} V(j)\right)\right|\right) = C\cdot \left|\bigcup_{j:odd \ge k-2} V(j)\right|.$$
The result follows by induction.
\end{proof}
\subsection{Many MIS Computations in Parallel}
In this section, we argue that
Algorithm~\ref{algo:lowDimensionalMIS} \textsc{LowDimensionMIS} can be executed
on the $O(\log n)$ different distance threshold graphs in parallel on a congested clique.
Table~\ref{tab:messages} shows the number of messages sent/received per node
in the execution of Algorithm~\ref{algo:lowDimensionalMIS}.
From this it is easy to see that Line 8 of Phase 2 can be executed as-is, using Lenzen's routing protocol, in $O(1)$ rounds for all $O(\log n)$ layers in parallel, owing to its low communication
requirements.
For Lines 4-6 of Phase~2 we do the following load balancing via a
\textit{designated receiver scheme}: each vertex has to send at most
$O(n^{1/4}\log n)$ messages in an execution of Phase~2 for a layer.
Therefore, over all $O(\log n)$ layers, a node is responsible for sending $O(n^{1/4}\log^2 n)$ messages.
Only $\lceil 2 \log n \rceil$ receivers are needed in an execution for a layer,
so over all layers the number of receivers needed is $O(\log^2 n)$.
Hence we can designate distinct receivers such that no receiver gets more than $O(n)$ messages in the execution of Phase~2 over all layers.
A similar designated-receiver scheme is applied for the execution of Phase 1.
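Since only $O(\log^2 n)$ receivers are needed in total while $n$ nodes are available, distinct nodes can be designated for each (layer, slot) pair. A minimal Python sketch of one such assignment (our own illustration; the actual scheme in the algorithm may differ):

```python
def designate_receivers(n, receivers_per_layer, num_layers):
    """Assign a distinct node to each (layer, slot) pair; feasible whenever
    receivers_per_layer * num_layers <= n, so no node plays more than one
    receiver role and per-node load stays balanced."""
    assert receivers_per_layer * num_layers <= n
    return {(layer, slot): layer * receivers_per_layer + slot
            for layer in range(num_layers)
            for slot in range(receivers_per_layer)}
```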
For the parallel execution of Line 9 (\textsc{SequentialMIS}) of Phase 4 for all $O(\log n)$ layers
we use the following \textit{message encoding scheme}:
each vertex $v$ constructs an $O(\log n)$-bit string whose bit at position $\ell$ is 1 if $v$ is in the MIS for layer $\ell$ and 0 otherwise.
Each vertex $v$ broadcasts this string,
and for a layer $\ell$, each vertex considers only the $\ell^{th}$ bit of this
message.
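This encoding can be sketched as follows (a Python illustration under our own naming; each broadcast fits in a single $O(\log n)$-bit message):

```python
def encode_mis_membership(in_mis):
    """Pack per-layer MIS membership (one boolean per layer) into a single
    integer whose bit at position ell is 1 iff the vertex is in the MIS
    of layer ell."""
    word = 0
    for ell, member in enumerate(in_mis):
        if member:
            word |= 1 << ell
    return word

def in_mis_at_layer(word, ell):
    """Receiver side for layer ell: inspect only bit ell of the broadcast word."""
    return (word >> ell) & 1 == 1
```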
\begin{table}[thb]
\caption{Number of messages sent/received per node in the execution of
Algorithm~\ref{algo:lowDimensionalMIS}\label{tab:messages}}
\centering
\begin{tabular}{|l|l|p{0.13\textwidth}|l|l|l|} \hline
Phase & Line & Analysis & \parbox[t]{0.21\textwidth}{Number of messages to
send per node} &
\parbox[t]{0.11\textwidth}{Number of receivers} &
\parbox[t]{0.22\textwidth}{Number of messages to receive per receiver} \\ \hline
1 & 2-4 & Lemma~\ref{lemma:si} & $O(n^{1/2})$ &
$n^{1/2}$ & $O(n)$ \\ \hline
\multirow{2}{*}{2} & 4-6 & Lemma~\ref{lemma:wimis} & $O(n^{1/4}\log n)$ & $\lceil 2\log n \rceil $ & $O(n)$ \\
& 8 & Lemma~\ref{lemma:qrule} & $O\left(\poly(\log n)\right)$ & $n$ & $O\left(\poly(\log n)\right)$ \\ \hline
3 & - & Thm.~\ref{thm:logstar} & $O(n^{1/2}\poly(\log^*n))$ & $ n $ & $O(n^{1/2}\poly(\log^*n))$ \\ \hline
\multirow{2}{*}{4} & 3 & Lemma~\ref{lemma:constantRounds} & $O(1)$ & 1 & $O(n)$
\\
& 9 & Lemma~\ref{lemma:constantRounds} & 1 (1-bit) & $n$ & $n$ \\ \hline
\end{tabular}
\end{table}
\section{Constant-Approximation to MFL}
\label{sec:facilitylocation}
Berns et al.~\cite{berns2012arxiv,berns2012facloc} showed how to compute a constant-factor approximation
to MFL in expected $O(\log\log n)$ rounds. (The algorithm presented in \cite{berns2012facloc}
runs in expected $O(\log\log n \cdot \log^* n)$ rounds, but this was subsequently
improved to expected $O(\log\log n)$ in \cite{berns2012arxiv}.)
A high-level description of this algorithm is as follows.
Each node $v$ locally computes a value $r_v \ge 0$ that is a function of its opening cost $f_v$ and
distances to other nodes $\{d(v, w) \mid w \in V\}$.
Nodes with similar $r_v$-values join the same class; more precisely, a node $v$ with $3^k \cdot r_m \le r_v \le 3^{k+1} \cdot r_m$
joins the class $V_k$.
Here $r_m$ is the minimum $r_u$-value over all nodes $u \in V$.
For nodes in each class $V_k$, we construct a graph $H_k = (V_k, E_k)$, where the edge-set $E_k$ is defined as
$\{\{u, v\} \mid u, v \in V_k, d(u, v) \le r_u + r_v\}$.
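The class construction can be sketched as follows (our own Python illustration; the radii $r_v$ are taken as given inputs, since their exact definition from \cite{berns2012facloc} is not reproduced here, and this sketch assigns boundary values to the lower class):

```python
import math
from itertools import combinations

def mfl_classes(r):
    """Partition nodes into classes V_k by radius: node v with
    3^k * r_m <= r_v < 3^(k+1) * r_m joins class k, where r_m = min radius."""
    r_m = min(r.values())
    classes = {}
    for v, rv in r.items():
        k = int(math.floor(math.log(rv / r_m, 3))) if rv > r_m else 0
        classes.setdefault(k, []).append(v)
    return classes

def class_graph(class_nodes, r, dist):
    """H_k = (V_k, E_k) with {u, v} an edge iff dist(u, v) <= r_u + r_v."""
    return [(u, v) for u, v in combinations(class_nodes, 2)
            if dist(u, v) <= r[u] + r[v]]
```

Since the classes partition the node set, the subsequent ruling-set computations on the graphs $H_k$ touch disjoint vertex sets and can run in parallel.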
In the rest of the algorithm, in order to figure out which nodes to open as facilities,
the algorithm computes a $t$-ruling set on each graph $H_k$.
Analysis in \cite{berns2012arxiv,berns2012facloc} then shows that the solution to facility location produced by
this algorithm is an $O(t)$-approximation.
In \cite{berns2012arxiv} it is shown how to compute a 2-ruling set in expected $O(\log\log n)$ rounds on a
congested clique.
Since the classes $V_k$ form a partition of the nodes, the ruling set computations occur on disjoint sets
of nodes and can proceed in parallel.
This leads to a constant-factor approximation to MFL in expected $O(\log\log n)$ rounds.
The 3-ruling set algorithm and the MIS algorithm in the present paper can replace this slower 2-ruling set
algorithm, and this yields the following result.
\begin{theorem}
There exists a distributed algorithm that computes a constant-approximation to
the metric facility location problem (w.h.p.) in the
congested-clique model and which has an expected running time of
$O(\log \log \log n)$ rounds.
Additionally, if the input metric space has constant doubling dimension, then
a constant-approximation can be computed in constant rounds (w.h.p.).
\label{theorem:FacLocApprox}
\end{theorem}
\section{Conclusion}
In a recent paper, Drucker et al.~\cite{DruckerKuhnOshmanPODC2014} show that the congested clique can simulate powerful classes of bounded-depth circuits, implying that even slightly super-constant lower bounds for the congested clique would give new lower bounds in circuit complexity. This provides some explanation for why there are no
non-trivial lower bounds in the congested clique model. One could view this result as providing motivation
for proving even stronger upper bounds. As shown in this paper, it is possible to design algorithms
that run significantly faster than $\Theta(\log\log n)$ rounds for well-known problems.
Continuing this program, we are interested in designing algorithms running in $o(\log\log n)$ rounds for
MST and related problems such as connectivity verification.
\subsubsection*{Acknowledgments.}
We would like to thank reviewers of DISC 2014 for their careful reading and thoughtful comments.
Lama Ajeenah, the architect of the award-winning work "A Writer Experience: Personal Identity," demonstrates that writers are the sum of their experiences. What a writer is exposed to is what determines their literature. This very concept was the starting point for branding a Saudi fiction writer. Clearly, the hybrid of the dynamic and colorful spirit of Spanish culture and Saudi heritage was her main source of inspiration. The project was influenced by Gaudí's mosaic art, as it reflects the eclectic experience that stands behind her imagination.
The first step in solving any problem is to acknowledge that there is a problem. It is not easy to admit that "you" have a problem, but it is imperative to fully accept that your problems are real and that they exist. The path to recovery is paved with excuses and hard fought lessons, but relief can be attained. Acceptance is the first key to success, but it is far from easy and that is okay. To accept your life completely "as is" without judgement is a quest worth embarking on, and so… the journey begins.
Q: Spring Boot WAR file not running in WebSphere 8.5 I developed a simple Spring Boot application and generated a WAR file, which I deployed on WebSphere, but it is not working.
The same code works fine on Tomcat and other servers. If possible, please provide a link to a working GitHub repository. I have tried every possible configuration but am still facing the issue.
this is my Pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.7.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.demo</groupId>
<artifactId>ClientDemo4</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>war</packaging>
<name>ClientDemo4</name>
<description>nSure Validation Service</description>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<spring-cloud.version>Greenwich.SR2</spring-cloud.version>
<!-- <start-class>com.demo</start-class> -->
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-rest</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-hateoas</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<scope>provided</scope>
</dependency>
<!-- <dependency> -->
<!-- <groupId>com.oracle</groupId> -->
<!-- <artifactId>ojdbc7</artifactId> -->
<!-- <version>12.1.0.2</version> -->
<!-- </dependency> -->
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>javax.el</groupId>
<artifactId>javax.el-api</artifactId>
<version>3.0.0</version>
</dependency>
<!-- swagger -->
<dependency>
<groupId>com.mangofactory</groupId>
<artifactId>swagger-springmvc</artifactId>
<version>0.8.8</version>
</dependency>
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>1.4.3</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
<finalName>ClientDemo4</finalName>
</build>
</project>
This is my main class:
@ComponentScan
@SpringBootApplication(scanBasePackages = { "com.kgisl" })
@OpenAPIDefinition(info = @Info(title = "Towing App", version = "v1", description = "Towing Application", license = @License(name = "MIT License", url = "https://github.com/bchen04/springboot-swagger-rest-api/blob/master/LICENSE"), contact = @Contact(url = "https://in.linkedin.com/company/kgislgss", name = "")))
//@SpringBootApplication
public class CommonValidationsApplication {
public static void main(String[] args) {
SpringApplication.run(CommonValidationsApplication.class, args);
}
}
This is my servlet initializer class:
public class ServletInitializer extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(CommonValidationsApplication.class);
}
}
This is my controller:
@RestController
public class BasicController {
@GetMapping(value="/test")
public String testMethod(String str)
{
return "Your input is: " + str;
}
}