Q: Need to show price information on case page layout using aura I have a requirement to show price information on the case page layout using an Aura component. The object relationship is Case --> Contact --> Price. I have written an Apex class that, given a case Id, returns the price information, but I am stuck on how to pass the case Id to the Lightning component.
Here is the cmp and js I have used.
CMP:
<aura:component implements="force:appHostable,flexipage:availableForAllPageTypes,flexipage:availableForRecordHome,force:hasRecordId,force:lightningQuickAction"
                controller="Product_Price_List" access="global">
    <aura:handler name="init" value="{!this}" action="{!c.doInit}"/>
    <!-- force:hasRecordId auto-populates recordId when the component is placed on a record page -->
    <aura:attribute name="recordId" type="String"/>
    <aura:attribute name="priceLists" type="price_list__c[]"/>
    <lightning:card variant="Narrow" class="slds-box slds-box_x-small slds-theme_shade" title="Product Price List" iconName="standard:opportunity">
        <aura:iteration items="{!v.priceLists}" var="PRC">
            <p>Cost Per Month = {!PRC.Cost_Per_Month__c}</p>
        </aura:iteration>
    </lightning:card>
</aura:component>
controller:
({
    doInit: function(component, event, helper) {
        // Fetch the price list from the Apex controller
        helper.getpriceList(component, event, helper);
    }
})
helper:
({
    getpriceList: function(component, event, helper) {
        var action = component.get('c.getcontactproduct');
        // setParams (capital P); the key must match the Apex method's parameter name
        action.setParams({
            "recordId": component.get("v.recordId")
        });
        action.setCallback(this, function(response) {
            var state = response.getState();
            if (state === "SUCCESS") {
                var result = response.getReturnValue();
                // The attribute name must match the one declared in the component
                component.set('v.priceLists', result);
            }
        });
        $A.enqueueAction(action);
    }
})
Q: Twenty Ten theme: replace the text logo with a picture without modifying theme files I would like to replace the Twenty Ten theme's text logo with a picture. Is there a plugin that allows me to do that?
I know how to do it manually, but every time WordPress is updated (which is often) the code reverts to what it was before I changed it.
Thanks in advance.
A: Your best bet is to make a child theme and copy Twenty Ten's header.php into it, replacing the site title with a link to your image.
How to make a child theme: http://codex.wordpress.org/Child_Themes
It would be something like this (inside the branding div):
<div id="branding" role="banner">
<!-- get_stylesheet_directory_uri() points to the child theme's folder, so the image survives parent-theme updates -->
<div id="logo"><img src="<?php echo get_stylesheet_directory_uri(); ?>/images/YourImage.png" alt=""/></div></div><!-- #branding -->
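If you go the child-theme route, the child theme itself only needs a style.css header along these lines (the theme name and folder layout here are placeholders, not an existing theme):

```css
/*
 Theme Name: Twenty Ten Child
 Template:   twentyten
*/
/* Pull in the parent theme's stylesheet so the original look is kept */
@import url("../twentyten/style.css");
```

Activate the child theme, and updates to Twenty Ten will no longer overwrite your header.php changes.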
# Cubic inches to cubic meters

This free conversion calculator converts cubic inches to cubic meters, i.e. in^3 to m^3 (cu in to m^3), giving the correct conversion between the two measurement scales.

The formula is $V_{m^3} = \frac{2048383}{125000000000} V_{in^3}$, where here $V_{in^3} = 15$.

Therefore, $V_{m^3} = \frac{6145149}{25000000000}$.

Answer: $15 in^{3} = \frac{6145149}{25000000000} m^{3} = 0.00024580596 m^{3}$.
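The arithmetic is easy to verify programmatically; a minimal sketch in Python (the function name is mine; the inch is defined as exactly 0.0254 m, so 1 in^3 = 0.0254^3 m^3):

```python
# Cubic inches -> cubic meters.
# The inch is defined as exactly 0.0254 m, so 1 in^3 = 0.0254**3 m^3.
INCH_IN_METERS = 0.0254

def cubic_inches_to_cubic_meters(v_in3: float) -> float:
    """Return the volume given in cubic inches, expressed in cubic meters."""
    return v_in3 * INCH_IN_METERS ** 3

print(cubic_inches_to_cubic_meters(15))  # ≈ 0.000245806
```

The factor 0.0254**3 = 16387064/10^12 is exactly the fraction 2048383/125000000000 used above.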
DUMPED: Japan's fling with climate finally ends, three years after giving up custody of lovechild Kyoto
Anjali Appadurai November 16, 2013 0 Comments
by Anjali Appadurai
It was a move that had been expected with dread: on Friday morning Tokyo time, the Japanese government announced its new greenhouse gas emissions target for 2020. The numbers are grim. Japan will cut emissions by 3.8 per cent from its 2005 level by 2020; this translates to an increase of 3.1 per cent from its 1990 levels. The government had been reviewing its international emissions pledges in light of the Fukushima nuclear disaster in 2011.
Japan seems to enjoy making public backsliding announcements with ironic timing. In 2010 it announced its withdrawal from the Kyoto Protocol in the opening plenary of the UN climate talks in Cancun, to the shock of the world and delight of the media.
Friday's announcement of Japan's new, frightening emissions target comes again in the midst of the UN climate negotiations, this time in Warsaw. At this year's talks, the stakes are even higher than they were in Cancun. We're edging toward a 2015 deadline to agree on a global, legally binding, fair and ambitious climate deal, and we're nowhere near achieving it. Almost every one of the 190-odd countries gathered in Warsaw has repeated the need to curb emissions as drastically as possible. Japan seems to be laughing in the face of the global community.
There is a bitter lesson to be learned here. Japan's previous emissions target was 25 per cent below 1990 levels, the achievement of which depended upon its nuclear power program, which provided close to a third of the country's electricity. The 2011 Fukushima disaster caused an energy shortfall that the government had to fill quickly. They took the easy way out, and hastily filled the gap with increased coal and gas combustion. Japan's energy needs were so closely linked to dangerous dirty energy that they also had to ramp up fossil fuel imports at great cost. Suddenly the 25 per cent below 1990 target was impossibly out of reach, and Japan's economy firmly dependent on fossil fuels. The lesson is that dependence on dirty energy can often cost far more than an early, sustainable transition to renewable energy.
The disaster also sparked public concerns about safety, which led to an extensive, nation-wide dialogue about the country's energy options and its response to the climate crisis. The government at the time finally approved a new, promising climate policy that mandated a phase-out of nuclear energy and a transition to renewables. Friday's decision was made by a few top officials in a different administration than the one that had developed the climate policy. The decision is neither in line with Japanese public opinion nor Japan's commitments as a developed country in the UN Climate Convention. Japan is also the world's fifth-largest greenhouse gas emitter, and is part of a UN-identified group of thirteen countries with economies in the top twenty in the world. These countries, which include Canada, the EU and Australia, together account for 72% of global emissions. Under the UN Climate Convention, these major emitters have made quantitative pledges to reduce their emissions before 2020.
In addition, Japan owes a historical debt to the less industrialized developing countries that are feeling climate impacts in the present. The Philippines, for example, was hit twice in one year by super-storms of unprecedented size. Countries like the Philippines need a strong, reliable flow of finance and technology in order to deal with the climate crisis. Besides providing this support to developing countries, rich industrialized countries are required to significantly reduce their own emissions in order to stabilize the climate and prevent runaway climate change. Japan has not only failed on both counts, but has backpedaled so far that it has become the climate laughing-stock of the world.
Fukushima was an unmitigated disaster, but Japan has committed the unjustifiable act of carrying on with business-as-usual in the wake of the disaster. Refusing to immediately switch to renewable energy has effectively caused Japan to pass the burden of its own emissions to climate-impacted people and nations.
Like a vindictive ex-lover, Japan has dealt a fatal blow to the multilateral climate effort. Kyoto suffers from stunted growth while the rest of the world has to scramble and make up the lost emissions reductions of one of the largest polluters. Meanwhile, having cast off all commitments, Japan can finally carry on with its mistress, the dirty energy industry, in the open.
local-requirify
===============
Browserify transform that allows both Node modules and client-side scripts to be required from the console.
Last week, COST Action CA16114 – REthinking Sustainability TOwards a Regenerative Economy (RESTORE) organized a mid-term conference (The RESTORE Challenge), core group meeting, and third management committee meeting in Bolzano, Italy.
Dr Michael Burnard, the InnoRenew CoE deputy director and research group leader for Human Health in the Built Environment, joined the two meetings and conference in his capacity as a member of the Action's management committee.
The conference analysed, through multidisciplinary collaboration and an edutainment approach, how RESTORE is aligned (or may be better aligned) with the United Nations 17 Sustainable Development Goals (SDGs) and how the Action can effectively contribute to their achievement. The SDGs were developed and adopted in 2015 by world leaders at an historic UN Summit, where they also adopted the 2030 Agenda for Sustainable Development.
During conference workshops and presentations, more than 50 participants from over 30 countries developed new ideas and discussed the paradigm shift towards restorative sustainability for new and existing buildings.
# GV-linear-code

In coding theory, one is usually concerned with bounds on parameters such as the rate R, the relative distance, and the block length. The Gilbert–Varshamov bound gives a lower bound on the rate of a general code. It is the best known bound, in terms of relative distance, for codes over alphabets of size less than 49.[citation needed]

## Gilbert–Varshamov bound theorem

Theorem: Let $q \ge 2$. For every $0 \le \delta < 1 - \frac{1}{q}$ and $0 < \varepsilon \le 1 - H_q (\delta )$, there exists a code with rate $R \ge 1 - H_q (\delta ) - \varepsilon$ and relative distance $\delta$.

Here $H_q$ is the q-ary entropy function, defined as follows:

$H_q(x) = x\log_q(q-1)-x\log_q x-(1-x)\log_q(1-x).$

The above result was proved by Edgar Gilbert for general codes using the greedy method. For linear codes, Varshamov gave a proof using the probabilistic method applied to a random linear code. That proof is shown below.

High-level proof:

To show the existence of a linear code satisfying the constraints, the probabilistic method is used to construct a random linear code. Specifically, the linear code is chosen randomly by choosing a random generator matrix $G$ whose entries are chosen uniformly over the field $\mathbb{F}_q$. The Hamming distance of a linear code equals the minimum weight of its non-zero codewords, so to prove that the linear code generated by $G$ has Hamming distance $d$, we show that for any $m \in \mathbb{F}_q^k \setminus \{0\}$, $\mathrm{wt}(mG) \ge d$. To prove that, we bound the probability of the opposite event; that is, we show that the probability that the linear code generated by $G$ has Hamming distance less than $d$ is exponentially small in $n$. Then, by the probabilistic method, there exists a linear code satisfying the theorem.

Formal proof:

By the probabilistic method, to show that there exists a linear code with Hamming distance at least $d$, we show that the probability that a random linear code has distance less than $d$ is exponentially small in $n$.

A linear code is defined by its generator matrix, so we use a "random generator matrix" $G$ as a means to describe the randomness of the linear code: a random generator matrix $G$ of size $k \times n$ contains $kn$ elements, chosen independently and uniformly over the field $\mathbb{F}_q$.

Recall that in a linear code the distance equals the minimum weight of a non-zero codeword; this fact is one of the basic properties of linear codes.

Denote by $\mathrm{wt}(y)$ the weight of the codeword $y$. So

\begin{align} P & = {\Pr}_{\text{random }G} [\text{linear code generated by }G\text{ has distance} < d] \\ & = {\Pr}_{\text{random }G} [\text{there exists a codeword }y \ne 0\text{ in the linear code generated by }G\text{ such that }\mathrm{wt}(y) < d]. \end{align}

Also, if a codeword $y$ belongs to the linear code generated by $G$, then $y = mG$ for some vector $m \in \mathbb{F}_q^k$. Therefore

$P = {\Pr}_{\text{random }G} [\text{there exists a vector }m \in \mathbb{F}_q^k \setminus \{ 0\}\text{ such that }\mathrm{wt}(mG) < d].$

By Boole's inequality, we have:

$P \le \sum\limits_{m \in \mathbb{F}_q^k \setminus \{ 0\} } {{\Pr}_{\text{random }G} } [\mathrm{wt}(mG) < d].$

Now, for a given message $m \in \mathbb{F}_q^k \setminus \{ 0\}$, we want to compute $W = {\Pr}_{\text{random }G} [\mathrm{wt}(mG) < d]$.

Denote by $\Delta(m_1,m_2)$ the Hamming distance of two messages $m_1$ and $m_2$. Then for any message $m$ we have $\mathrm{wt}(m) = \Delta(0,m)$. Using this fact, we obtain the following equality:

$W = \sum\limits_{\text{all }y \in \mathbb{F}_q^n \text{ s.t. }\Delta (0,y) \le d - 1} {{\Pr}_{\text{random }G} [mG = y]}.$

Since $m \ne 0$ and the entries of $G$ are independent and uniform, $mG$ is a uniformly random vector in $\mathbb{F}_q^n$. So

${\Pr}_{\text{random }G} [mG = y] = q^{ - n}.$

Let $\text{Vol}_q(r,n)$ be the volume of the Hamming ball of radius $r$. Then:

$W = \frac{\text{Vol}_q(d-1,n)}{q^n} \le \frac{\text{Vol}_q(\delta n,n)}{q^n} \le \frac{q^{nH_q(\delta)}}{q^n}$

(the latter inequality comes from the upper bound on the volume of the Hamming ball). Then

$P \le q^k \cdot W \le q^k \frac{q^{nH_q(\delta)}}{q^n} = q^k q^{-n(1-H_q(\delta))}.$

By choosing $k = (1-H_q(\delta)-\varepsilon)n$, the above inequality becomes

$P \le q^{-\varepsilon n}.$

Finally, $q^{ - \varepsilon n} \ll 1$, i.e., the probability is exponentially small in $n$, which is what we wanted. Then, by the probabilistic method, there exists a linear code $C$ with relative distance $\delta$ and rate $R$ at least $(1-H_q(\delta)-\varepsilon)$, which completes the proof.

1. The Varshamov construction above is not explicit; that is, it does not give a deterministic method to construct a linear code satisfying the Gilbert–Varshamov bound. The naive approach is to go over all generator matrices $G$ of size $k \times n$ over the field $\mathbb{F}_q$ and check whether the corresponding linear code has the required Hamming distance; this leads to an exponential-time algorithm.
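The q-ary entropy function and the resulting lower bound on the rate are easy to evaluate numerically. A minimal sketch in Python (the function names are my own):

```python
import math

def q_ary_entropy(x: float, q: int) -> float:
    """The q-ary entropy H_q(x) = x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x)."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def gv_rate(delta: float, q: int) -> float:
    """Gilbert-Varshamov lower bound: a code of relative distance delta
    exists with rate at least 1 - H_q(delta)."""
    return 1.0 - q_ary_entropy(delta, q)

# For the binary alphabet, relative distance 1/2 drives the guaranteed rate to 0,
# consistent with the restriction delta < 1 - 1/q in the theorem.
print(gv_rate(0.5, 2))
```

Note that $H_q(1 - 1/q) = 1$, which is why the theorem restricts $\delta$ to $[0, 1 - 1/q)$.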
\section{Introduction}
The \emph{Zariski space} $\Zar(K|D)$ of the valuation rings of a field $K$ containing a domain $D$ was introduced (under the name \emph{abstract Riemann surface}) by O. Zariski, who used it to show that resolution of singularities holds for varieties of dimension 2 or 3 over fields of characteristic 0 \cite{zariski_sing,zariski_comp}. In particular, Zariski showed that $\Zar(K|D)$, endowed with a natural topology, is always a compact space \cite[Chapter VI, Theorem 40]{zariski_samuel_II}; this result has been subsequently improved by showing that $\Zar(K|D)$ is a spectral space (in the sense of Hochster \cite{hochster_spectral}), first in the case where $K$ is the quotient field of $D$ \cite{dobbs_fedder_fontana,fontana_krr-abRs}, and then in the general case \cite[Corollary 3.6(3)]{fifolo_transactions}. The topological aspects of the Zariski space have subsequently been used, for example, in real and rigid algebraic geometry \cite{hub-kneb,schwartz-compactification} and in the study of representations of integral domains as intersections of valuation overrings \cite{olberding_noetherianspaces,olberding_affineschemes,olberding_topasp}. In the latter context, i.e., when $K$ is the quotient field of $D$, two important properties to investigate for subspaces of $\Zar(K|D)$ are compactness and Noetherianness.
In this paper, we concentrate on the case where $K$ is the quotient field of $D$, studying subspaces of $\Zar(K|D)=\Zar(D)$ that are \emph{not} compact. The starting point is a criterion based on semistar operations, proved in \cite[Theorems 4.9 and 4.13]{fifolo_transactions} (see also \cite[Proposition 4.5]{topological-cons} for a slightly stronger version) and integrated, as in \cite[Example 3.7]{surveygraz}, with the use of the two-faced definition of the integral closure/$b$-operation, either through valuation overrings or through equations of integral dependence (see e.g. \cite[Chapter 6]{swanson_huneke}). In particular, we analyze sets of the form $\Zar(D)\setminus\{V\}$, where $V$ is a minimal valuation overring of $D$: we show in Section \ref{sect:Vmin} that such a space is compact only if $V$ can be obtained from $D$ in a very specific way (more precisely, as the integral closure of a localization of a finitely generated algebra over $D$), and we follow up in Sections \ref{sect:dimV} and \ref{sect:intersection} by showing that this condition implies a bound on the dimension of $V$ in relation with the dimension of $D$ (Proposition \ref{prop:leq2dim}) and a quite strict condition on the intersection of sets of prime ideals of $D$ (Theorem \ref{teor:intersez-P}). Section \ref{sect:kronecker} is dedicated to a brief application of these criteria to the study of Kronecker function rings (the definition will be recalled later).
In Section \ref{sect:Noeth}, we consider the set $\Over(D)$ of overrings of $D$ (which is known to be itself a spectral space \cite[Proposition 3.5]{finocchiaro-ultrafiltri}). Using the result proved in the previous sections, we show that, when $D$ is a Noetherian domain, some distinguished subspaces of $\Over(D)$ (for example, the subspace of overrings of $D$ that are Noetherian) are not spectral.
\section{Preliminaries and notation}
\subsection{Spectral spaces}
A topological space $X$ is a \emph{spectral space} if there is a ring $R$ such that $X$ is homeomorphic to the prime spectrum $\Spec(R)$, endowed with the Zariski topology. Spectral spaces can be characterized in a purely topological way as those spaces that are $T_0$, compact, with a basis of open and compact subsets that is closed under finite intersections, and such that every irreducible closed subset has a generic point (i.e., it is the closure of a single point) \cite[Proposition 4]{hochster_spectral}.
On a spectral space $X$ it is possible to define two new topologies: the \emph{inverse} and the \emph{constructible} topology.
The \emph{inverse topology} is the topology on $X$ having, as a basis of closed sets, the family of open and compact subspaces of $X$. Endowed with the inverse topology, $X$ is again a spectral space \cite[Proposition 8]{hochster_spectral}; moreover, a subspace $Y\subseteq X$ is closed in the inverse topology if and only if $Y$ is compact (in the original topology) and $Y=Y^{\gen}$ \cite[Remark 2.2 and Proposition 2.6]{fifolo_transactions}, where
\begin{equation*}
\begin{array}{rcl}
Y^\gen & := & \{z\in X\mid z\leq y\text{~for some~}y\in Y\}=\\
& = & \{z\in X\mid y\in\Cl(z)\text{~for some~}y\in Y\},
\end{array}
\end{equation*}
with $\Cl(z)$ denoting the closure of the singleton $\{z\}$ (again, in the original topology) and $\leq$ is the order induced by the original topology \cite[d-1]{encytop}, which coincides on $\Spec(R)$ with the set-theoretic inclusion.
The \emph{constructible topology} on $X$ (also called \emph{patch topology}) is the coarsest topology such that the open and compact subsets of $X$ are both open and closed. Endowed with the constructible topology, $X$ is a spectral space that is also Hausdorff (see \cite[Propositions 3 and 5]{olivier_plats_1}, \cite{olivier_plats_2} or \cite[Proposition 5]{fontana_patch}), and the constructible topology is finer than both the original and the inverse topology. A subset of $X$ closed in the constructible topology is said to be a \emph{proconstructible subset} of $X$; if $Y$ is proconstructible, then it is a spectral space when endowed with the topology induced by the original spectral topology of $X$, and the constructible topology on $Y$ is exactly the topology induced by the constructible topology on $X$ (this follows from \cite[1.9.5(vi-vii)]{EGA4-1}).
\subsection{Noetherian spaces}
A topological space $X$ is \emph{Noetherian} if $X$ satisfies the ascending chain condition on open subsets, or equivalently if every subspace of $X$ is compact. Examples of Noetherian spaces are finite spaces and the prime spectra of Noetherian rings. If $\Spec(R)$ is a Noetherian space, then every proper ideal of $R$ has only finitely many minimal primes (see e.g. the proof of \cite[Chapter 4, Corollary 3, p.102]{bourbaki_ac} or \cite[Chapter 6, Exercises 5 and 7]{atiyah}).
\subsection{Overrings and the Zariski space}
Let $D\subseteq K$ be an extension of integral domains. We denote the set of all rings contained between $D$ and $K$ by $\Over(K|D)$; if $K$ is a field (not necessarily the quotient field of $D$), the set of all valuation rings containing $D$ with quotient field $K$ is denoted by $\Zar(K|D)$, and it is called the \emph{Zariski space} (or the \emph{Zariski-Riemann space}) of $D$.
The \emph{Zariski topology} on $\Over(K|D)$ is the topology having, as a subbasis, the sets of the form
\begin{equation*}
B(x_1,\ldots,x_n):=\{T\in\Over(K|D)\mid x_1,\ldots,x_n\in T\},
\end{equation*}
as $\{x_1,\ldots,x_n\}$ ranges among the finite subsets of $K$. Under this topology, both $\Over(K|D)$ \cite[Proposition 3.5]{finocchiaro-ultrafiltri} and its subspace $\Zar(K|D)$ \cite{fontana_krr-abRs,dobbs_fedder_fontana} are spectral spaces, and the order induced by this topology is the inverse of the set-theoretic inclusion. In particular, every $Y\subseteq\Over(K|D)$ with a minimum element is compact, and, if $Z$ is an arbitrary subset of $\Over(K|D)$, then $Z^{\gen}=\{T\in\Over(K|D)\mid T\supseteq A\text{~for some~}A\in Z\}$.
We denote by $\Zarmin(D)$ the set of minimal elements of $\Zar(D)$; since $\Zar(D)$ is a spectral space, every $V\in\Zar(D)$ contains an element $W\in\Zarmin(D)$.
If $K$ is the quotient field of $D$, then we set $\Over(K|D)=:\Over(D)$ and $\Zar(K|D)=:\Zar(D)$. Elements of $\Over(D)$ are called \emph{overrings} of $D$, elements of $\Zar(D)$ are the \emph{valuation overrings} of $D$ and elements of $\Zarmin(D)$ are the \emph{minimal valuation overrings} of $D$.
The \emph{center map} is the application
\begin{equation*}
\begin{aligned}
\gamma\colon\Zar(K|D) & \longrightarrow \Spec(D)\\
V & \longmapsto \mathfrak{m}_V\cap D,
\end{aligned}
\end{equation*}
where $\mathfrak{m}_V$ is the maximal ideal of $V$. When $\Zar(K|D)$ and $\Spec(D)$ are endowed with the respective Zariski topologies, the map $\gamma$ is continuous (\cite[Chapter VI, \textsection 17, Lemma 1]{zariski_samuel_II} or \cite[Lemma 2.1]{dobbs_fedder_fontana}), surjective (this follows, for example, from \cite[Theorem 5.21]{atiyah} or \cite[Theorem 19.6]{gilmer}) and closed \cite[Theorem 2.5]{dobbs_fedder_fontana}.
\subsection{Semistar operations}\label{sect:semistar}
Let $D$ be a domain with quotient field $K$. Let $\inssubmod(D)$ be the set of $D$-submodules of $K$, $\insfracid(D)$ be the set of fractional ideals of $D$, and $\insfracidfg(D)$ be the set of finitely generated fractional ideals of $D$.
A \emph{semistar operation} on $D$ is a map $\star:\inssubmod(D)\longrightarrow\inssubmod(D)$, $I\mapsto I^\star$, such that, for every $I,J\in\inssubmod(D)$ and every $x\in K$,
\begin{enumerate}
\item $I\subseteq I^\star$;
\item if $I\subseteq J$, then $I^\star\subseteq J^\star$;
\item $(I^\star)^\star=I^\star$;
\item $x\cdot I^\star=(xI)^\star$.
\end{enumerate}
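For instance, for every overring $T\in\Over(D)$, the map $E\mapsto ET$ (for $E\in\inssubmod(D)$) is a semistar operation: properties (1)--(3) follow from the fact that $T$ is a ring containing $D$, while property (4) follows from the equality $x\cdot ET=(xE)T$.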
Given a semistar operation $\star$, the map $\star_f$ is defined on every $E\in\inssubmod(D)$ by
\begin{equation*}
E^{\star_f}=\bigcup\{F^\star\mid F\in\insfracidfg(D), F\subseteq E\}.
\end{equation*}
The map $\star_f$ is always a semistar operation; if $\star=\star_f$, then $\star$ is said to be \emph{of finite type}. Two semistar operations of finite type $\star_1,\star_2$ are equal if and only if $I^{\star_1}=I^{\star_2}$ for every $I\in\insfracidfg(D)$. See \cite{okabe-matsuda} for general information about semistar operations.
If $\Delta\subseteq\Zar(D)$, then $\wedge_\Delta$ is defined as the semistar operation on $D$ such that
\begin{equation*}
I^{\wedge_\Delta}:=\bigcap\{IV\mid V\in\Delta\}
\end{equation*}
for every $D$-submodule $I$ of $K$; a semistar operation of type $\wedge_\Delta$ is said to be a \emph{valuative semistar operation}. By \cite[Proposition 4.5]{topological-cons}, $\wedge_\Delta$ is of finite type if and only if $\Delta$ is compact (in the Zariski topology of $\Zar(D)$). If $\Delta,\Lambda\subseteq\Zar(D)$, then $\wedge_\Delta=\wedge_\Lambda$ if and only if $\Delta^\gen=\Lambda^\gen$ \cite[Lemma 5.8(1)]{spettrali-eab}, while $(\wedge_\Delta)_f=(\wedge_\Lambda)_f$ if and only if $\Delta$ and $\Lambda$ have the same closure with respect to the inverse topology \cite[Theorem 4.9]{fifolo_transactions}. The semistar operation $\wedge_{\Zar(D)}$ is usually denoted by $b$ and called the \emph{$b$-operation}.
\section{The use of minimal valuation domains}\label{sect:Vmin}
The starting point of this paper is the following well-known result.
\begin{prop}[see e.g. {{\protect\cite[Proposition 6.8.2]{swanson_huneke}}}]\label{prop:ic-b}
Let $I$ be an ideal of an integral domain $D$; let $x\in D$. Then, $x\in IV$ for every $V\in\Zar(D)$ if and only if there are $n\geq 1$ and $a_1,\ldots,a_n\in D$ such that $a_i\in I^i$ and
\begin{equation}\label{eq:ic}
x^n+a_1x^{n-1}+\cdots+a_{n-1}x+a_n=0.
\end{equation}
\end{prop}
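As a concrete example, let $D=k[X,Y]$ be a polynomial ring over a field $k$ and let $I=(X^2,Y^2)$. Then $XY\in IV$ for every $V\in\Zar(D)$: indeed, $XY$ satisfies the equation of integral dependence
\begin{equation*}
(XY)^2-X^2Y^2=0,
\end{equation*}
where $-X^2Y^2\in I^2$.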
An inspection of the proof of the previous proposition given in \cite{swanson_huneke} shows that this result does not really rely on the fact that $I$ is an ideal of $D$, or on the fact that $x\in D$; indeed, it applies to every $D$-submodule $I$ of the quotient field $K$, and to every $x\in K$. In the terminology of semistar operations, this means that, for each $I\in\inssubmod(D)$, $I^b=I^{\wedge_{\Zar(D)}}$ is exactly the set of the $x\in K$ that satisfy an equation like \eqref{eq:ic}, with $a_i\in I^i$. We are interested in generalizing that proof in a different way; we need the following definitions.
\begin{defin}
Let $D$ be an integral domain and let $\Delta,\Lambda\subseteq\Over(D)$. We say that $\Lambda$ \emph{dominates} $\Delta$ if, for every $T\in\Delta$ and every $M\in\Max(T)$, there is an $A\in\Lambda$ such that $T\subseteq A$ and $1\notin MA$.
\end{defin}
For example, $\Zar(D)$ dominates every subset of $\Over(D)$, while the set of localizations of $D$ dominates $\{D\}$.
\begin{defin}
Let $D$ be an integral domain. We denote by $D[\insfracidfg]$ the set of finitely generated $D$-algebras in $\Over(D)$, or equivalently
\begin{equation*}
D[\insfracidfg]:=\{D[I]:I\in\insfracidfg(D)\}.
\end{equation*}
\end{defin}
Even though the proof of the following result essentially repeats the proof of \cite[Proposition 6.8.2]{swanson_huneke}, we include it here for clarity.
\begin{prop}\label{prop:domina-Ff-I}
Let $D$ be an integral domain, and suppose that $\Delta\subseteq\Zar(D)$ dominates $D[\insfracidfg]$. Then, for every finitely generated ideal $I$ of $D$, $I^{\wedge_\Delta}=I^b$.
\end{prop}
\begin{proof}
Clearly, $I^b\subseteq I^{\wedge_\Delta}$. Suppose thus that $x\in I^{\wedge_\Delta}$, $x\neq 0$, and let $I=(i_1,\ldots,i_k)D$. Define $J:=x^{-1}I\in\insfracidfg(D)$, and let $A:=D[J]=D[x^{-1}i_1,\ldots,x^{-1}i_k]$; by definition, $J\subseteq A$.
If $JA\neq A$, then there is a maximal ideal $M$ of $A$ containing $J$, and thus, by domination, there is a valuation domain $V\in\Delta$ containing $A$ whose maximal ideal $\mathfrak{m}_V$ is such that $JV\subseteq\mathfrak{m}_V$, and thus $IV\subseteq x\mathfrak{m}_V$. However, $x\in I^b\subseteq IV$, which implies $x\in x\mathfrak{m}_V$, a contradiction.
Hence, $JA=A$, i.e., $1=j_1a_1+\cdots+j_na_n$ for some $j_t\in J$, $a_t\in A$; writing the elements of $A$ explicitly as polynomials in the elements of $J$ and using $J=x^{-1}I$, we find that there must be an $N\in\mathbb{N}$ and elements $i_t\in I^t$ such that $x^N=i_1x^{N-1}+\cdots+i_{N-1}x+i_N$, which gives an equation of integral dependence of $x$ over $I$. Therefore, $x\in I^b$, as requested.
\end{proof}
We can now use the properties of valuative semistar operations to study compactness.
\begin{prop}\label{prop:comp-Zarmin}
Let $D$ be an integral domain, and let $\Delta\subseteq\Zar(D)$ be a set that dominates $D[\insfracidfg]$. Then, $\Delta$ is compact if and only if it contains $\Zarmin(D)$.
\end{prop}
\begin{proof}
If $\Delta$ contains $\Zarmin(D)$, then a family $\mathcal{U}$ of open subsets of $\Zar(D)$ covers $\Delta$ if and only if it covers $\Zar(D)$: indeed, every $V\in\Zar(D)$ contains some $W\in\Zarmin(D)\subseteq\Delta$, and every open set containing $W$ also contains $V$. Thus, $\Delta$ is compact since $\Zar(D)$ is.
Conversely, suppose $\Delta$ is compact. By Proposition \ref{prop:domina-Ff-I}, $I^{\wedge_\Delta}=I^b$ for every finitely generated ideal $I$; hence, $(\wedge_\Delta)_f=b_f=b$. By \cite[Lemma 5.8(1)]{spettrali-eab}, it follows that the closure of $\Delta$ with respect to the inverse topology of $\Zar(D)$ is the whole $\Zar(D)$; however, since $\Delta$ is compact, its closure in the inverse topology is exactly $\Delta^\gen=\Delta^\uparrow=\{W\in\Zar(D)\mid W\supseteq V\text{~for some~}V\in\Delta\}$. Hence, $\Delta$ must contain $\Zarmin(D)$.
\end{proof}
Thus, to find a subset of $\Zar(D)$ that is not compact, it is enough to find a $\Delta$ that dominates $D[\insfracidfg]$ but that does not contain $\Zarmin(D)$. The easiest case where this criterion can be applied is when $\Delta=\Zar(D)\setminus\{V\}$ for some $V\in\Zarmin(D)$.
\begin{teor}\label{teor:Zar-meno-V}
Let $D$ be an integral domain and let $V\in\Zarmin(D)$. If $\Zar(D)\setminus\{V\}$ is compact, then $V$ is the integral closure of $D[x_1,\ldots,x_n]_M$ for some $x_1,\ldots,x_n\in K$ and some $M\in\Max(D[x_1,\ldots,x_n])$.
\end{teor}
\begin{proof}
If $\Delta:=\Zar(D)\setminus\{V\}$ is compact, then by Proposition \ref{prop:comp-Zarmin} it cannot dominate $D[\insfracidfg]$. Hence, there is a finitely generated fractional ideal $I$ such that $\Delta$ does not dominate $A:=D[I]$, and so a maximal ideal $M$ of $A$ such that $1\in MW$ for every $W\in\Delta$. In particular, $A\neq K$ (otherwise $M$ would be $(0)$).
However, there must be a valuation ring containing $A_M$ whose center (on $A_M$) is $MA_M$; any such valuation ring $W$ satisfies $MW\subseteq\mathfrak{m}_W\neq W$, and thus it cannot belong to $\Delta$, so that $W=V$: it follows that $V$ is the unique valuation ring centered on $MA_M$. Now, the integral closure of $A_M$ is the intersection of the valuation rings with center $MA_M$ (since every valuation ring containing $A_M$ contains a valuation ring centered on $MA_M$ \cite[Corollary 19.7]{gilmer}); thus, $V$ is the integral closure of $A_M$.
\end{proof}
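To illustrate the conclusion in the simplest possible case, suppose that $D$ is a Pr\"ufer domain with Noetherian spectrum (as in Example \ref{ex:prufnoeth} below): then, every subset of $\Zar(D)\simeq\Spec(D)$ is compact, and every $V\in\Zarmin(D)$ is already in the form prescribed by the theorem, since
\begin{equation*}
V=D_M\quad\text{for some~}M\in\Max(D),
\end{equation*}
and $D_M$, being a valuation domain, coincides with its integral closure (that is, no elements $x_1,\ldots,x_n$ are needed).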
\section{The dimension of $V$}\label{sect:dimV}
Before embarking on using Theorem \ref{teor:Zar-meno-V}, we prove a simple yet general result.
\begin{prop}\label{prop:spec-noeth}
Let $D$ be an integral domain. If $\Zar(D)$ is a Noetherian space, so is $\Spec(D)$.
\end{prop}
\begin{proof}
The claim follows from the fact that $\Spec(D)$ is the continuous image of $\Zar(D)$ through the center map $\gamma$, and that the continuous image of a Noetherian space is still Noetherian.
\end{proof}
Note that the converse of this proposition is far from being true (this is, for example, a consequence of Proposition \ref{prop:polynomial} or of Proposition \ref{prop:noethdom}).
The problem in using Theorem \ref{teor:Zar-meno-V} is that it is usually difficult to control the behaviour of finitely generated algebras over $D$. We can, however, control the behaviour of the prime spectrum of $D$.
\begin{lemma}\label{lemma:spec-linord}
Let $D$ be an integral domain, and let $V\in\Zar(D)$ be the integral closure of $D_M$, for some $M\in\Spec(D)$. Then, the set of prime ideals of $D$ contained in $M$ is linearly ordered.
\end{lemma}
\begin{proof}
Let $P,Q$ be two prime ideals of $D$ contained in $M$; then, $PD_M,QD_M\in\Spec(D_M)$. Since $D_M\subseteq V$ is an integral extension, $PD_M=P'\cap D_M$ and $QD_M=Q'\cap D_M$ for some $P',Q'\in\Spec(V)$; however, $V$ is a valuation domain, and thus (without loss of generality) $P'\subseteq Q'$. Hence, $PD_M\subseteq QD_M$ and $P\subseteq Q$, as requested.
\end{proof}
\begin{prop}\label{prop:leq2dim}
Let $D$ be an integral domain, let $V\in\Zarmin(D)$ and suppose that $\Zar(D)\setminus\{V\}$ is compact. Let $\iota_V:\Spec(V)\longrightarrow\Spec(D)$ be the canonical spectral map associated to the inclusion $D\hookrightarrow V$. For every $P\in\Spec(D)$, $|\iota_V^{-1}(P)|\leq 2$; in particular, $\dim(V)\leq 2\dim(D)$.
\end{prop}
\begin{proof}
Suppose $|\iota_V^{-1}(P)|>2$: then, there are prime ideals $Q_1\subsetneq Q_2\subsetneq Q_3$ of $V$ such that $\iota_V(Q_1)=\iota_V(Q_2)=\iota_V(Q_3)=:P$. If $\Zar(D)\setminus\{V\}$ is compact, by Theorem \ref{teor:Zar-meno-V} there is a finitely generated $D$-algebra $A:=D[a_1,\ldots,a_n]$ such that $V$ is the integral closure of $A_M$, for some maximal ideal $M$ of $A$. We can write $A_M$ as a quotient $\frac{D[X_1,\ldots,X_n]_\mathfrak{a}}{\mathfrak{b}}$, where $X_1,\ldots,X_n$ are independent indeterminates and $\mathfrak{a},\mathfrak{b}\in\Spec(D[X_1,\ldots,X_n])$. Since $A_M\subseteq V$ is an integral extension, $Q_i\cap A\neq Q_j\cap A$ if $i\neq j$.
For $i\in\{1,2,3\}$, let $\mathfrak{q}_i$ be the prime ideal of $D[X_1,\ldots,X_n]$ whose image in $A$ is $Q_i\cap A$; then, $\mathfrak{q}_1$, $\mathfrak{q}_2$ and $\mathfrak{q}_3$ are distinct, $\mathfrak{q}_i\cap D=P$ for each $i$, and the set of prime ideals between $\mathfrak{q}_1$ and $\mathfrak{q}_3$ is linearly ordered (by Lemma \ref{lemma:spec-linord}). However, the prime ideals of $D[X_1,\ldots,X_n]$ contracting to $P$ are in a bijective and order-preserving correspondence with the prime ideals of $F[X_1,\ldots,X_n]$, where $F$ is the quotient field of $D/P$; since $F[X_1,\ldots,X_n]$ is a Noetherian ring, there are infinitely many prime ideals between the ideals corresponding to $\mathfrak{q}_1$ and $\mathfrak{q}_3$. This is a contradiction; hence, $|\iota_V^{-1}(P)|\leq 2$.
For the ``in particular'' statement, take a chain $(0)\subsetneq Q_1\subsetneq\cdots\subsetneq Q_k$ in $\Spec(V)$. Then, the corresponding chain of the $P_i:=Q_i\cap D$ has length at most $\dim(D)$, and moreover $\iota_V^{-1}((0))=\{(0)\}$. Hence, $k+1\leq 2\dim(D)+1$ and $\dim(V)\leq 2\dim(D)$.
\begin{figure}\label{fig:prop:leq2dim}
\begin{equation*}
\begin{tikzcd}
& D[X_1,\ldots,X_n]\arrow[two heads]{d}\arrow[hook]{r} & D[X_1,\ldots,X_n]_\mathfrak{a}\arrow[two heads]{d}\\
D\arrow[end anchor=south west,hook]{ur}\arrow[hook]{r} & A=D[a_1,\ldots,a_n]\arrow[hook]{r} & A_M\simeq\frac{D[X_1,\ldots,X_n]_\mathfrak{a}}{\mathfrak{b}}\arrow[hook]{r} & V
\end{tikzcd}
\end{equation*}
\caption{Rings involved in the proof of Proposition \ref{prop:leq2dim}.}
\end{figure}
\end{proof}
The \emph{valuative dimension} of $D$, indicated by $\dim_v(D)$, is defined as the supremum of the dimensions of the valuation overrings of $D$; we always have $\dim(D)\leq\dim_v(D)$, and $\dim_v(D)$ can be arbitrarily large with respect to $\dim(D)$ \cite[Section 30, Exercises 16 and 17]{gilmer}. In particular, with the notation of the previous proposition, the cardinality of $\iota_V^{-1}(P)$ can be arbitrarily large: for example, if $(D,\mathfrak{m})$ is local and one-dimensional, then $|\iota_V^{-1}(\mathfrak{m})|=\dim_v(D)$.
\begin{cor}\label{cor:dimv-dim}
Let $D$ be an integral domain such that $\Zar(D)$ is Noetherian. Then, $\dim_v(D)\leq 2\dim(D)$.
\end{cor}
\begin{proof}
If $\Zar(D)$ is Noetherian, then in particular $\Zar(D)\setminus\{V\}$ is compact for every $V\in\Zarmin(D)$. Hence, $\dim(V)\leq 2\dim(D)$ for every $V\in\Zarmin(D)$, by Proposition \ref{prop:leq2dim}; since, if $W\supseteq V$ are valuation domains, then $\dim(W)\leq\dim(V)$, the claim follows.
\end{proof}
\begin{prop}\label{prop:leq2dim-meglio}
Let $D$ be an integral domain, and let $V\in\Zarmin(D)$ be such that $\Zar(D)\setminus\{V\}$ is compact; let $(0)\subsetneq P_1\subsetneq\cdots\subsetneq P_k$ be the chain of prime ideals of $V$ and let $Q_i:=P_i\cap D$. Denote by $ht(P)$ the height of the prime ideal $P$. Then:
\begin{enumerate}[(a)]
\item\label{prop:leq2dim-meglio:alt} for every $0\leq t\leq\dim(D)$, we have
\begin{equation*}
\dim(V)\leq\dim_v(D_{Q_t})+2(\dim(D)-ht(Q_t));
\end{equation*}
\item\label{prop:leq2dim-meglio:val} if $D_{Q_t}$ is a valuation domain, then
\begin{equation*}
\dim(V)\leq 2\dim(D)-ht(Q_t).
\end{equation*}
\end{enumerate}
\end{prop}
\begin{proof}
\ref{prop:leq2dim-meglio:alt} Let $(0)\subsetneq Q^{(1)}\subsetneq Q^{(2)}\subsetneq\cdots\subsetneq Q^{(s)}$ be the chain $(0)\subseteq Q_1\subseteq\cdots\subseteq Q_k$ without the repetitions, and let $a$ be the index such that $Q^{(a)}=Q_t$. For every $b>a$, by the proof of Proposition \ref{prop:leq2dim} there can be at most two prime ideals of $V$ over $Q^{(b)}$; on the other hand, $V_{P_t}$ is a valuation overring of $D_{Q_t}$, and thus $t=\dim(V_{P_t})\leq\dim_v(D_{Q_t})$. Therefore,
\begin{equation*}
\dim(V)\leq t+2(s-a)\leq\dim_v(D_{Q_t})+2(\dim(D)-ht(Q_t))
\end{equation*}
since each ascending chain of prime ideals starting from $Q_t$ has length at most $\dim(D)-ht(Q_t)$.
Point \ref{prop:leq2dim-meglio:val} follows, since $\dim(V)=\dim_v(V)$ for every valuation domain $V$.
\end{proof}
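As a simple numerical illustration of the improvement given by \ref{prop:leq2dim-meglio:val}: if $\dim(D)=3$ and $D_{Q_t}$ is a valuation domain for a prime $Q_t$ of height 2, then
\begin{equation*}
\dim(V)\leq 2\dim(D)-ht(Q_t)=2\cdot 3-2=4,
\end{equation*}
while Proposition \ref{prop:leq2dim} only gives the bound $\dim(V)\leq 2\dim(D)=6$.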
\begin{ex}\label{ex:prufnoeth}
A class of integral domains whose Zariski space is Noetherian is given by the Pr\"ufer domains with Noetherian spectrum. Indeed, if $D$ is a Pr\"ufer domain, then the valuation overrings of $D$ are exactly the localizations of $D$ at its prime ideals; thus, the center map $\gamma$ establishes a homeomorphism between $\Zar(D)$ and $\Spec(D)$. Hence, if the latter is Noetherian, so is the former.
In this case, $\dim(D)=\dim_v(D)$.
\end{ex}
\begin{ex}\label{ex:dim12}
It is also possible to construct domains whose Zariski space is Noetherian but with $\dim(D)\neq\dim_v(D)$. For example, let $L$ be a field, and consider the ring $A:=L+YL(X)[[Y]]$, where $X$ and $Y$ are independent indeterminates. Then, the valuation overrings of $A$ different from $F:=L(X)((Y))$ are the rings of the form $V+YL(X)[[Y]]$, as $V$ ranges among the valuation rings containing $L$ and having quotient field $L(X)$; that is, $\Zar(A)\setminus\{F\}\simeq\Zar(L(X)|L)$. By the following Corollary \ref{cor:FL}, $\Zar(A)$ is a Noetherian space.
From this, we can construct analogous examples of arbitrarily large dimension. Indeed, if $R$ is an integral domain with quotient field $K$, and $T:=R+XK[[X]]$, then as above $\Zar(T)$ consists of $K((X))$ and of the rings of the form $V+XK[[X]]$, as $V$ ranges in $\Zar(R)$; in particular, $\Zar(T)=\{K((X))\}\cup\mathcal{X}$, where $\mathcal{X}\simeq\Zar(R)$. Thus, $\Zar(T)$ is Noetherian if $\Zar(R)$ is. Moreover, $\dim(T)=\dim(R)+1$ and $\dim_v(T)=\dim_v(R)+1$.
Consider now the sequence of rings $R_1:=L+YL(X)[[Y]]$, $R_2:=R_1+Y_2Q(R_1)[[Y_2]]$, \ldots, $R_n:=R_{n-1}+Y_nQ(R_{n-1})[[Y_n]]$, where $Q(R)$ indicates the quotient field of $R$ and each $Y_i$ is an indeterminate over $Q(R_{i-1})((Y_{i-1}))$. Recursively, we see that each $\Zar(R_n)$ is Noetherian, while $\dim(R_n)=n\neq n+1=\dim_v(R_n)$.
\end{ex}
\section{Intersections of prime ideals}\label{sect:intersection}
The results of the previous sections, while very general, are often difficult to apply, because it is usually not easy to determine the valuative dimension of a domain $D$. More applicable criteria, based on the prime spectrum of $D$, are the ones that we will prove next.
\begin{teor}\label{teor:intersez-P}
Let $D$ be a local integral domain, and suppose there is a set $\Delta\subseteq\Spec(D)$ and a prime ideal $Q$ such that:
\begin{enumerate}
\item $Q\notin\Delta$;
\item no two members of $\Delta$ are comparable;
\item $\bigcap\{P\mid P\in\Delta\}=Q$;
\item $D_Q$ is a valuation domain.
\end{enumerate}
Then, for any minimal valuation overring $V$ of $D$ contained in $D_Q$, $\Zar(D)\setminus\{V\}$ is not compact; in particular, $\Zar(D)$ is not Noetherian.
\end{teor}
\begin{proof}
Note first that, since $V$ is a minimal valuation overring, its center on $D$ must be the maximal ideal of $D$ \cite[Corollary 19.7]{gilmer}. Suppose that $\Zar(D)\setminus\{V\}$ is compact: by Theorem \ref{teor:Zar-meno-V}, there is a finitely generated $D$-algebra $A:=D[x_1,\ldots,x_n]$ such that $V$ is the integral closure of $A_M$ for some $M\in\Max(A)$.
Let $I:=x_1^{-1}D\cap\cdots\cap x_n^{-1}D\cap D=(D:_Dx_1)\cap\cdots\cap(D:_Dx_n)$. If $I\subseteq Q$, then $(D:_Dx)\subseteq Q$ for some $x_i:=x$; then, since $D_Q$ is flat over $D$,
\begin{equation*}
(D_Q:_{D_Q}x)=(D:_Dx)D_Q\subseteq QD_Q,
\end{equation*}
and in particular $x\notin D_Q$. However, $V\subseteq D_Q$, and thus $x\notin V$, a contradiction. Hence, we must have $I\nsubseteq Q$.
There must then be a prime ideal $P_1\in\Delta$ not containing $I$ (otherwise, $I$ would be contained in $\bigcap\{P\mid P\in\Delta\}=Q$). Moreover, $I\cap P_1\nsubseteq Q$ (since $Q$ is prime and neither $I$ nor $P_1$ is contained in $Q$), and thus there is another prime $P_2\in\Delta$ not containing $I\cap P_1$; in particular, $P_2\neq P_1$. By Lemma \ref{lemma:spec-linord}, the prime ideals of $A$ inside $M$ are linearly ordered; in particular, we can suppose without loss of generality that $\rad(P_2A)\subseteq\rad(P_1A)$.
Let now $t\in P_2\setminus P_1$; then, $t\in\rad(P_1A)$, and thus there are $p_1,\ldots,p_k\in P_1$ and $a_1,\ldots,a_k\in A$ such that $t^e=p_1a_1+\cdots+p_ka_k$ for some positive integer $e$. For each $i$, $a_i=B_i(x_1,\ldots,x_n)$, where $B_i$ is a polynomial over $D$ of total degree $d_i$; let $d:=\max\{d_1,\ldots,d_k\}$, and take an $r\in I\setminus P_1$ (recall that $I\nsubseteq P_1$). Then, $r^dB_i(x_1,\ldots,x_n)\in D$ for each $i$; therefore,
\begin{equation*}
r^dt^e=p_1r^da_1+\cdots+p_kr^da_k\in p_1D+\cdots+p_kD\subseteq P_1.
\end{equation*}
However, by construction, both $r$ and $t$ lie outside $P_1$; since $P_1$ is prime, this is impossible. Hence, $\Zar(D)\setminus\{V\}$ is not compact, and $\Zar(D)$ is not Noetherian.
\end{proof}
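As an explicit instance of the theorem, let $k$ be a field, let $D:=k[X,Y]_{(X,Y)}$, and take $\Delta:=\Spec^1(D)$ and $Q:=(0)$. The height-1 primes of $D$ are pairwise incomparable, $D_{(0)}=k(X,Y)$ is trivially a valuation domain, and
\begin{equation*}
\bigcap\{P\mid P\in\Spec^1(D)\}=(0),
\end{equation*}
since a nonzero element of $D$ is divisible by only finitely many pairwise non-associated irreducible polynomials. Hence, $\Zar(D)\setminus\{V\}$ is not compact for every $V\in\Zarmin(D)$.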
The first corollaries of this result can be obtained simply by putting $Q=(0)$. Recall that a \emph{G-domain} (or \emph{Goldman domain}) is an integral domain such that the intersection of all nonzero prime ideals is nonzero. They were introduced by Kaplansky for giving a new proof of Hilbert's Nullstellensatz (see for example \cite[Section 1.3]{kaplansky}).
\begin{cor}\label{cor:Goldman}
Let $D$ be a local domain of finite dimension, and suppose that $D$ is not a G-domain. Then, $\Zar(D)\setminus\{V\}$ is not compact for every $V\in\Zarmin(D)$.
\end{cor}
\begin{proof}
Since $D$ is finite-dimensional, every prime ideal of $D$ contains a prime ideal of height 1; since $D$ is not a G-domain, it follows that the intersection of the set $\Spec^1(D)$ of the height-1 prime ideals of $D$ is $(0)$. The localization $D_{(0)}$ is the quotient field of $D$, and thus a valuation domain; therefore, we can apply Theorem \ref{teor:intersez-P} to $\Delta:=\Spec^1(D)$.
\end{proof}
\begin{cor}\label{cor:h1}
Let $D$ be a local domain. If $D$ has infinitely many height-1 primes, then $\Zar(D)$ is not Noetherian.
\end{cor}
\begin{proof}
Let $I$ be the intersection of all height-1 prime ideals of $D$. If $I\neq(0)$, then every height-one prime of $D$ is minimal over $I$; since there are infinitely many of them, $\Spec(D)$ is not Noetherian, and by Proposition \ref{prop:spec-noeth} neither is $\Zar(D)$. Suppose thus $I=(0)$: then, we can apply Theorem \ref{teor:intersez-P} (with $Q=I$ and $\Delta$ equal to the set of height-1 primes of $D$).
\end{proof}
Note that the hypothesis that $D$ is local is needed in Theorem \ref{teor:intersez-P} and in Corollary \ref{cor:h1}: for example, $\insZ$ has infinitely many height-1 primes, and $\bigcap\{P\mid P\in\Spec^1(\insZ)\}=(0)$, but $\Zar(\insZ)\simeq\Spec(\insZ)$ is a Noetherian space.
\begin{prop}\label{prop:polynomial}
Let $D$ be an integral domain. If $D$ is not a field, then $\Zar(D[X])$ is not a Noetherian space.
\end{prop}
\begin{proof}
Since $D$ is not a field, there exists a nonzero prime ideal $P$ of $D$. For any $a\in P$, let $\mathfrak{p}_a$ be the ideal of $D[X]$ generated by $X-a$; then, each $\mathfrak{p}_a$ is a prime ideal of height 1, $\mathfrak{p}_a\neq\mathfrak{p}_b$ if $a\neq b$, and $\bigcap\{\mathfrak{p}_a\mid a\in P\}=(0)$.
The prime ideal $\mathfrak{m}:=PD[X]+XD[X]$ contains every $\mathfrak{p}_a$; by Corollary \ref{cor:h1}, $\Zar(D[X]_\mathfrak{m})$ is not Noetherian. Therefore, $\Zar(D[X])$ is not Noetherian either.
\end{proof}
\begin{cor}\label{cor:FL}
Let $F\subseteq L$ be a transcendental field extension.
\begin{enumerate}[(a)]
\item\label{cor:FL:1} If $\trdeg_F(L)=1$ and $L$ is finitely generated over $F$ then $\Zar(L|F)$ is Noetherian.
\item\label{cor:FL:>1} If $\trdeg_F(L)>1$ then $\Zar(L|F)$ is not Noetherian.
\end{enumerate}
\end{cor}
\begin{proof}
\ref{cor:FL:1} Let $L=F(\alpha_1,\ldots,\alpha_n)$; without loss of generality we can suppose that $\alpha_1$ is transcendental over $F$. Then, the extension $F(\alpha_1)\subseteq L$ is algebraic and finitely generated, and thus finite.
Each $V\in\Zar(L|F)$ must contain either $\alpha_1$ or $\alpha_1^{-1}$; therefore, $\Zar(L|F)=\Zar(L|F[\alpha_1])\cup\Zar(L|F[\alpha_1^{-1}])$. However, $\Zar(L|A)=\Zar(A')$ for every domain $A$, where $A'$ denotes the integral closure of $A$ in $L$; since $F[\alpha_1]$ (respectively, $F[\alpha_1^{-1}]$) is a principal ideal domain and $F(\alpha_1)\subseteq L$ is finite, the integral closure of $F[\alpha_1]$ (resp., $F[\alpha_1^{-1}]$) is a Dedekind domain, and thus $\Zar(L|F[\alpha_1])=\Zar(F[\alpha_1]')\simeq\Spec(F[\alpha_1]')$ is Noetherian. Being the union of two Noetherian spaces, $\Zar(L|F)$ is itself Noetherian.
\ref{cor:FL:>1} Suppose $\trdeg_F(L)>1$. Then, there are $X,Y\in L$ such that $\{X,Y\}$ is an algebraically independent set over $F$; in particular, we have a continuous surjective map $\Zar(L|F)\longrightarrow\Zar(F(X,Y)|F)$ given by $V\mapsto V\cap F(X,Y)$. However, $\Zar(F(X,Y)|F)$ contains $\Zar(F[X,Y])$; by Proposition \ref{prop:polynomial}, the latter is not Noetherian, since $F[X,Y]$ is the polynomial ring over $F[X]$, a domain of dimension 1. Thus, $\Zar(L|F)$ is not Noetherian.
\end{proof}
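The prototypical instance of \ref{cor:FL:1} is the simple transcendental extension $F\subseteq F(t)$: in this case,
\begin{equation*}
\Zar(F(t)|F)=\{F(t)\}\cup\{F[t]_{(p(t))}\mid p(t)\in F[t]\text{~irreducible}\}\cup\{F[t^{-1}]_{(t^{-1})}\},
\end{equation*}
and every proper closed subset of this space is a finite set of closed points; in particular, $\Zar(F(t)|F)$ is Noetherian.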
The condition that $\bigcap\{P\mid P\in\Delta\}=Q$ of Theorem \ref{teor:intersez-P} can be slightly generalized, requiring only that the intersection is contained in $Q$. However, in doing so we can only prove that $\Zar(D)$ is not Noetherian, without always finding a specific $V$ such that $\Zar(D)\setminus\{V\}$ is not compact.
\begin{prop}\label{prop:intersez-contenuto}
Let $D$ be a local integral domain, and suppose there is a set $\Delta\subseteq\Spec(D)$ and a prime ideal $Q$ such that:
\begin{enumerate}
\item $Q\notin\Delta$;
\item no two members of $\Delta$ are comparable;
\item $\bigcap\{P\mid P\in\Delta\}\subseteq Q$;
\item $D_Q$ is a valuation domain.
\end{enumerate}
Then, $\Zar(D)$ is not Noetherian.
\end{prop}
\begin{proof}
If $\Spec(D)$ is not Noetherian, by Proposition \ref{prop:spec-noeth} neither is $\Zar(D)$; suppose that $\Spec(D)$ is Noetherian.
Let $I:=\bigcap\{P\mid P\in\Delta\}$; since every overring of a valuation domain is still a valuation domain (so that $D_{Q'}$ is a valuation domain for every prime ideal $Q'\subseteq Q$), we can suppose that $Q$ is a minimal prime of $I$. Since $D$ has Noetherian spectrum, the radical ideal $I$ has only a finite number of minimal primes, say $Q=:Q_1,Q_2,\ldots,Q_n$; let $\Delta_i:=\{\mathfrak{p}\in\Delta\mid Q_i\subseteq \mathfrak{p}\}$ and $I_i:=\bigcap\{\mathfrak{p}\mid \mathfrak{p}\in\Delta_i\}$. By standard properties of minimal primes, $\Delta=\Delta_1\cup\cdots\cup\Delta_n$ and $I=I_1\cap\cdots\cap I_n$.
In particular, $I_1\cap\cdots\cap I_n\subseteq Q$; hence, $I_k\subseteq Q$ for some $k$. However, $Q_k\subseteq I_k$, and thus $Q_k\subseteq Q$; since different minimal primes of the same ideal are not comparable, $k=1$ and $Q\subseteq I_1\subseteq Q$, i.e., $I_1=Q$. Then, $\Delta_1$ is a family of primes satisfying the hypothesis of Theorem \ref{teor:intersez-P}; in particular, $\Zar(D)$ is not Noetherian.
\end{proof}
An \emph{essential prime} of a domain $D$ is a $P\in\Spec(D)$ such that $D_P$ is a valuation domain; $D$ is an \emph{essential domain} if it is equal to the intersection of the localizations of $D$ at its essential primes. If, moreover, the family of the essential primes is compact, then $D$ is a \emph{Pr\"ufer $v$-multiplication domain} (\emph{P$v$MD} for short) \cite[Corollary 2.7]{fin-tar-PvMD-ess}; note that the original definition of P$v$MDs was given through star operations (more precisely, $D$ is a P$v$MD if and only if $D_P$ is a valuation ring for every $t$-maximal ideal $P$ \cite{griffin_vmultiplication_1967,kang_pvmd}).
\begin{prop}\label{prop:PvMD}
Let $D$ be an essential domain. Then, $\Zar(D)$ is Noetherian if and only if $D$ is a Pr\"ufer domain with Noetherian spectrum.
\end{prop}
\begin{proof}
If $D$ is a Pr\"ufer domain with Noetherian spectrum, then $\Zar(D)\simeq\Spec(D)$ is Noetherian (see Example \ref{ex:prufnoeth}). Conversely, suppose $\Zar(D)$ is Noetherian: by Proposition \ref{prop:spec-noeth}, $\Spec(D)$ is Noetherian. Let $\mathcal{E}$ be the set of essential prime ideals of $D$: since $\Spec(D)$ is Noetherian, $\mathcal{E}$ is compact, and thus $D$ is a P$v$MD.
Suppose by contradiction that $D$ is not a Pr\"ufer domain. Then, there is a maximal ideal $M$ of $D$ such that $D_M$ is not a valuation domain; since the localization of a P$v$MD is a P$v$MD \cite[Theorem 3.11]{kang_pvmd}, and $\Zar(D_M)$ is a subspace of $\Zar(D)$, without loss of generality we can suppose $D=D_M$, i.e., we can suppose that $D$ is local.
Since $\mathcal{E}$ is compact, every $P\in\mathcal{E}$ is contained in a maximal element of $\mathcal{E}$; let $\Delta$ be the set of such maximal elements. Clearly, $D=\bigcap\{D_P\mid P\in\Delta\}$. If $\Delta$ were finite, $D$ would be an intersection of finitely many valuation domains, and thus it would be a Pr\"ufer domain \cite[Theorem 22.8]{gilmer}; hence, we can suppose that $\Delta$ is infinite. Let $I:=\bigcap\{P\mid P\in\Delta\}$.
Each $P\in\Delta$ contains a minimal prime of $I$; however, since $\Spec(D)$ is Noetherian, $I$ has only finitely many minimal primes, and thus (being $\Delta$ infinite) there is a minimal prime $Q$ of $I$ that is contained in at least two elements of $\Delta$; since the elements of $\Delta$ are incomparable, $Q\notin\Delta$. Moreover, $D_Q$ is an overring of some valuation domain $D_P$ (with $P\in\Delta$), and thus it is itself a valuation domain; since also $\bigcap\{P\mid P\in\Delta\}\subseteq Q$, we can apply Proposition \ref{prop:intersez-contenuto}. Hence, $\Zar(D)$ is not Noetherian, which is a contradiction.
\end{proof}
\begin{oss}
The previous proof can be interpreted using the terminology of the theory of star operations. Indeed, any essential prime $P$ is a \emph{$t$-ideal}, i.e., $P=P^t$, where, for any ideal $J$ of $D$, $J^t:=\bigcup\{(D:(D:I))\mid I\subseteq J\text{~is finitely generated}\}$ \cite[Lemma 3.17]{kang_pvmd}; moreover, if $D$ is a P$v$MD, then the set $\Delta$ of the maximal elements of $\mathcal{E}$ is exactly the set of \emph{$t$-maximal ideals}, i.e., the set of the ideals $I$ such that $I=I^t$ and $J\neq J^t$ for every proper ideal $J$ with $I\subsetneq J$.
\end{oss}
\begin{cor}
Let $D$ be a Krull domain. Then, $\Zar(D)$ is Noetherian if and only if $\dim(D)=1$, i.e., if and only if $D$ is a Dedekind domain.
\end{cor}
\begin{proof}
If $\dim(D)=1$, then $D$ is a Dedekind domain; in particular, it is a Pr\"ufer domain with Noetherian spectrum, and thus $\Zar(D)$ is Noetherian (see Example \ref{ex:prufnoeth}). If $\dim(D)>1$, then $D$ is not a Pr\"ufer domain; since each Krull domain is a P$v$MD, we can apply Proposition \ref{prop:PvMD}.
\end{proof}
Note that this corollary can also be proved directly from Corollary \ref{cor:h1} since, if $D$ is Krull, and $P\in\Spec(D)$ has height 2 or more, then $D_P$ has infinitely many height-1 primes.
\section{An application: Kronecker function rings}\label{sect:kronecker}
Let $D$ be an integrally closed integral domain with quotient field $K$. For every $V\in\Zar(D)$, let $V(X):=V[X]_{\mathfrak{m}_V[X]}\subseteq K(X)$, where $\mathfrak{m}_V$ is the maximal ideal of $V$. If $\Delta\subseteq\Zar(D)$, the \emph{Kronecker function ring} of $D$ with respect to $\Delta$ is
\begin{equation*}
\Kr(D,\Delta):=\bigcap\{V(X)\mid V\in\Delta\};
\end{equation*}
equivalently,
\begin{equation*}
\Kr(D,\Delta)=\{f/g\mid f,g\in D[X], g\neq 0, \cont(f)\subseteq(\cont(g))^{\wedge_\Delta}\},
\end{equation*}
where $\cont(f)$ is the content of $f$ and $\wedge_\Delta$ is the semistar operation defined in Section \ref{sect:semistar}. See \cite[Chapter 32]{gilmer} or \cite{fontana_loper} for general properties of Kronecker function rings.
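As a basic example, suppose that $D=V$ is itself a valuation domain. Every finitely generated ideal of $V$ is principal, and thus integrally closed, so $(\cont(g))^{\wedge_{\Zar(V)}}=\cont(g)$ for every nonzero $g\in V[X]$; hence
\begin{equation*}
\Kr(V,\Zar(V))=\{f/g\mid f,g\in V[X],\ g\neq 0,\ \cont(f)\subseteq\cont(g)\}=V(X),
\end{equation*}
where the last equality is the classical description of $V(X)$ for a valuation domain (see \cite[Chapter 32]{gilmer}).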
The set of Kronecker function rings is exactly the set of overrings of the basic Kronecker function ring $\Kr(D,\Zar(D))$; this set is in bijective correspondence with the set of finite-type valuative semistar operations \cite[Remark 32.9]{gilmer}, or equivalently with the set of nonempty subsets of $\Zar(D)$ that are closed in the inverse topology \cite[Theorem 4.9]{fifolo_transactions}.
Let $\Krset(D)$ be the set of Kronecker function rings $T$ of $D$ such that $T\cap K=D$. Then, $\Krset(D)$ is in bijective correspondence with the set of finite-type valuative \emph{star} operations, or equivalently with the set of inverse-closed representations of $D$ through valuation rings, i.e., the sets $\Delta\subseteq\Zar(D)$ that are closed in the inverse topology and such that $\bigcap\{V\mid V\in\Delta\}=D$ \cite[Proposition 5.10]{olberding_affineschemes}.
It has been conjectured \cite{mcg-kronecker-pers} that $\Krset(D)$ is either a singleton (in which case $D$ is said to be a \emph{vacant domain}; see \cite{vacantdomains}) or infinite, and this has been proved to be the case for a wide class of pseudo-valuation domains \cite[Theorem 4.10]{vacantdomains}. As a consequence of the following proposition, we will prove this conjecture for another class of domains.
\begin{prop}\label{prop:Krset-comp}
Let $D$ be an integrally closed integral domain such that $1<|\Krset(D)|<\infty$. Then, there is a minimal valuation overring $V$ of $D$ such that $\Zar(D)\setminus\{V\}$ is compact.
\end{prop}
\begin{proof}
Suppose $|\Krset(D)|>1$. Then, there is an inverse-closed representation $\Delta$ of $D$ different from $\Zar(D)$; let $\Lambda:=\Zar(D)\setminus\Delta$. For each $W\in\Lambda$, let $\Delta(W):=\Delta\cup\{W\}^\uparrow$; then, every $\Delta(W)$ is an inverse-closed representation of $D$, and $\Delta(W)\neq\Delta(W')$ if $W\neq W'$ (since, without loss of generality, $W\nsupseteq W'$, and thus $W\notin\Delta(W')$). Hence, each $W\in\Lambda$ gives rise to a different member of $\Krset(D)$; since $|\Krset(D)|<\infty$, it follows that $\Lambda$ is finite.
If now $V$ is minimal in $\Lambda$, then $\Zar(D)\setminus\{V\}=\Delta\cup(\Lambda\setminus\{V\})$ is closed under generizations; since $\Lambda$ is finite, it follows that $\Zar(D)\setminus\{V\}$ is the union of two compact subspaces, and thus it is itself compact.
\end{proof}
\begin{cor}
Let $D$ be an integrally closed local integral domain, and suppose there exists a set $\Delta\subseteq\Spec(D)$ of incomparable nonzero prime ideals such that $\bigcap\{P\mid P\in\Delta\}=(0)$. Then, $|\Krset(D)|\in\{1,\infty\}$.
\end{cor}
\begin{proof}
By Theorem \ref{teor:intersez-P}, each $\Zar(D)\setminus\{V\}$ is noncompact. The claim now follows from Proposition \ref{prop:Krset-comp}.
\end{proof}
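For an explicit case where the corollary applies, one can take $D:=k[X,Y]_{(X,Y)}$, for $k$ a field: $D$ is local and integrally closed, and the set $\Delta:=\Spec^1(D)$ of height-1 primes consists of pairwise incomparable nonzero primes with
\begin{equation*}
\bigcap\{P\mid P\in\Delta\}=(0);
\end{equation*}
hence, $|\Krset(D)|\in\{1,\infty\}$.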
\section{Overrings of Noetherian domains}\label{sect:Noeth}
If $D$ is a Noetherian domain, Theorem \ref{teor:Zar-meno-V} admits a direct application, without using any of the results proved in Sections \ref{sect:dimV} and \ref{sect:intersection}. Indeed, if $D$ is Noetherian with quotient field $K$, then so is every localization of $D[x_1,\ldots,x_n]$, for arbitrary $x_1,\ldots,x_n\in K$; thus, the integral closure of $D[x_1,\ldots,x_n]_M$ is a Krull domain for each maximal ideal $M$ of $D[x_1,\ldots,x_n]$ (\cite[(33.10)]{nagata_localrings} or \cite[Theorem 4.10.5]{swanson_huneke}). Since a domain that is both Krull and a valuation ring must be a field or a discrete valuation ring, Theorem \ref{teor:Zar-meno-V} implies that $\Zar(D)\setminus\{V\}$ is not compact as soon as $V$ is a minimal valuation overring of dimension 2 or more.
We can actually say more than this; the following is a proof, through Proposition \ref{prop:comp-Zarmin}, of an observation that already appeared in \cite[Example 3.7]{surveygraz}.
\begin{prop}\label{prop:noethdom}
Let $D$ be a Noetherian domain with quotient field $K$, and let $\Delta$ be the set of valuation overrings of $D$ that are Noetherian (i.e., $\Delta$ is the union of $\{K\}$ with the set of discrete valuation overrings of $D$). Then, $\Delta$ is compact if and only if $\dim(D)=1$.
\end{prop}
\begin{proof}
If $\dim(D)=1$, then $\Delta=\Zar(D)$, and thus it is compact.
On the other hand, for every ideal $I$ of $D$, $I^{\wedge_\Delta}=I^b$ \cite[Proposition 6.8.4]{swanson_huneke}; however, if $\dim(D)>1$, then $\Zar(D)$ contains elements of dimension at least 2, and thus $\Zarmin(D)$ contains valuation rings of dimension at least 2, which are not Noetherian; hence, $\Delta$ does not contain $\Zarmin(D)$. The claim now follows as in the proof of Proposition \ref{prop:comp-Zarmin}.
\end{proof}
\begin{oss}
~\\
\begin{enumerate}
\item The equality $I^{\wedge_\Delta}=I^b$ holds also if we restrict $\Delta$ to be the set of discrete valuation overrings of $D$ whose center is a maximal ideal of $D$ \cite[Proposition 6.8.4]{swanson_huneke}. For each prime ideal $P$ of height 2 or more, by passing to $D_P$, we can thus prove that the set of discrete valuation overrings of $D$ with center $P$ is not compact (and in particular it is infinite).
\item The previous proposition also allows a proof of the second part of Corollary \ref{cor:FL} without using Theorem \ref{teor:intersez-P}, since $F[X,Y]$ is a Noetherian domain of dimension 2.
\end{enumerate}
\end{oss}
By Proposition \ref{prop:noethdom}, in particular, the space $\Delta$ of Noetherian valuation overrings of $D$ (where $D$ is Noetherian and $\dim(D)\geq 2$) is not a spectral space, since it is not compact. Our next purpose is to see $\Delta$ as an intersection $X\cap\Zar(D)$, for some subset $X$ of $\Over(D)$, and use this representation to prove facts about $X$. We start with using the inverse topology.
\begin{prop}\label{prop:noeth-chiusinv}
Let $D$ be a Noetherian domain with quotient field $K$, and let:
\begin{itemize}
\item $X_1$ be the set of all overrings of $D$ that are Noetherian and of dimension at most 1;
\item $X_2$ be the set of all overrings of $D$ that are Dedekind domains ($K$ included).
\end{itemize}
For $i\in\{1,2\}$, the following are equivalent:
\begin{enumerate}[(i)]
\item\label{prop:noeth-chiusinv:comp} $X_i$ is compact;
\item\label{prop:noeth-chiusinv:sp} $X_i$ is spectral;
\item\label{prop:noeth-chiusinv:cons} $X_i$ is proconstructible in $\Over(D)$;
\item\label{prop:noeth-chiusinv:dim} $\dim(D)=1$.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{prop:noeth-chiusinv:comp} $\Longrightarrow$ \ref{prop:noeth-chiusinv:cons}. In both cases, $X_i=X_i^{\gen}$: for $X_1$ see \cite[Theorem 93]{kaplansky}, while for $X_2$ see e.g. \cite[Theorem 40.1]{gilmer} (or use the previous result and \cite[Corollary 36.3]{gilmer}); and a compact subset closed under generizations is proconstructible. The implications \ref{prop:noeth-chiusinv:cons} $\Longrightarrow$ \ref{prop:noeth-chiusinv:sp} $\Longrightarrow$ \ref{prop:noeth-chiusinv:comp} always hold.
\ref{prop:noeth-chiusinv:dim} $\Longrightarrow$ \ref{prop:noeth-chiusinv:comp}. If $\dim(D)=1$, then $X_1=\Over(D)$, while $X_2=\Over(D')$, where $D'$ is the integral closure of $D$, and both are compact since they have a minimum.
\ref{prop:noeth-chiusinv:cons} $\Longrightarrow$ \ref{prop:noeth-chiusinv:dim}. If $X_i$ is proconstructible, so is $X_i\cap\Zar(D)$ (since $\Zar(D)$ is also proconstructible), and in particular $X_i\cap\Zar(D)$ is compact. However, in both cases, $X_i\cap\Zar(D)$ is exactly the set of Noetherian valuation overrings of $D$; by Proposition \ref{prop:noethdom}, $\dim(D)=1$.
\end{proof}
\begin{oss}
The equivalence between the first three conditions of Proposition \ref{prop:noeth-chiusinv} holds for every subset $X\subseteq\Over(D)$ such that $X=X^\gen$ (and every domain $D$). In particular, it holds if $X$ is the set of overrings of $D$ that are principal ideal domains and, with the same proof as in the other cases, we can show that if $D$ is Noetherian and these conditions hold, then $\dim(D)=1$. However, it is not clear whether, when $D$ is Noetherian and $\dim(D)=1$, this set is actually compact.
\end{oss}
Another immediate consequence of Proposition \ref{prop:noethdom} is that the set $\NoethOver(D)$ of Noetherian overrings of $D$ is not proconstructible as soon as $D$ is Noetherian and $\dim(D)\geq 2$: indeed, if it were, then $\NoethOver(D)\cap\Zar(D)=\Delta$ would be proconstructible, contradicting the fact that $\Delta$ is not compact. However, this is also a consequence of a more general result. We need a topological lemma.
\begin{lemma}\label{lemma:YX-cons}
Let $Y\subseteq X$ be spectral spaces. Suppose that there is a subbasis $\mathcal{B}$ of $X$ such that, for every $B\in\mathcal{B}$, both $B$ and $B\cap Y$ are compact. Then, $Y$ is a proconstructible subset of $X$.
\end{lemma}
\begin{proof}
The hypothesis on $\mathcal{B}$ implies that the inclusion map $Y\hookrightarrow X$ is a spectral map; by \cite[1.9.5(vii)]{EGA4-1}, it follows that $Y$ is a proconstructible subset of $X$.
\end{proof}
\begin{prop}\label{prop:denso-costr}
Let $D$ be an integral domain with quotient field $K$, and let $D[\insfracidfg]$ be the set of finitely generated $D$-algebras contained in $K$.
\begin{enumerate}[(a)]
\item\label{prop:denso-costr:alg} $D[\insfracidfg]$ is dense in $\Over(D)$, with respect to the inverse topology.
\item\label{prop:denso-costr:sp} Let $X$ be such that $D[\insfracidfg]\subseteq X\subseteq\Over(D)$. Then, $X$ is spectral in the Zariski topology if and only if $X=\Over(D)$.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{prop:denso-costr:alg} A basis of the constructible topology is given by the sets of the form $U\cap(X\setminus V)$, as $U$ and $V$ range over the open and compact subsets of $\Over(D)$. Such a $U$ can be written as $B_1\cup\cdots\cup B_n$, where each $B_i=B(x_1^{(i)},\ldots,x_n^{(i)})$ is a basic open set of $\Over(D)$; thus, we can suppose that $U=B(x_1,\ldots,x_n)$. Suppose $\Omega:=U\cap(X\setminus V)$ is nonempty; we claim that $A:=D[x_1,\ldots,x_n]\in\Omega\cap D[\insfracidfg]$. Clearly $A\in D[\insfracidfg]$ and $A\in U$; let $T\in\Omega$. Then, $T\in U$, and thus $A\subseteq T$; therefore, $A$ is in the closure $\Cl(T)$ of $T$, with respect to the Zariski topology. But $X\setminus V$ is closed, and thus $\Cl(T)\subseteq X\setminus V$; i.e., $A\in X\setminus V$. Hence, $A\in\Omega\cap D[\insfracidfg]$, which in particular is nonempty, and $D[\insfracidfg]$ is dense.
\ref{prop:denso-costr:sp} Suppose $X$ is spectral. For every $x_1,\ldots,x_n$, the set $X\cap B(x_1,\ldots,x_n)$ has a minimum (i.e., $D[x_1,\ldots,x_n]$), so it is compact. Since the family of all $B(x_1,\ldots,x_n)$ is a basis, by Lemma \ref{lemma:YX-cons} it follows that $X$ is proconstructible. By the previous point, we must have $X=\Over(D)$.
\end{proof}
\begin{cor}
Let $D$ be a Noetherian domain. The spaces
\begin{itemize}
\item $\NoethOver(D):=\{T\in\Over(D)\mid T$ is Noetherian$\}$, and
\item $\KrullOver(D):=\{T\in\Over(D)\mid T$ is a Krull domain$\}$
\end{itemize}
are spectral if and only if $\dim(D)=1$.
\end{cor}
\begin{proof}
If $\dim(D)=1$, then the claim follows by Proposition \ref{prop:noeth-chiusinv}.
If $\dim(D)\geq 2$, then $\NoethOver(D)$ is not spectral by Proposition \ref{prop:denso-costr}\ref{prop:denso-costr:sp} and the Hilbert Basis Theorem; the case of $\KrullOver(D)$ follows in the same way, since $\KrullOver(D)\cap B(x_1,\ldots,x_n)$ always has a minimum (i.e., the integral closure of $D[x_1,\ldots,x_n]$).
\end{proof}
More generally, suppose $D$ is Noetherian with $\dim(D)\geq 2$, and consider a property $\mathcal{P}$ of Noetherian domains such that every field and every discrete valuation ring satisfies $\mathcal{P}$; for example, $\mathcal{P}$ may be the property of being regular, Gorenstein or Cohen-Macaulay. Let $X_\mathcal{P}(D)$ be the set of overrings of $D$ satisfying $\mathcal{P}$; then, $X_\mathcal{P}(D)\cap\Zar(D)$ is not compact, and thus $X_\mathcal{P}(D)$ is not proconstructible. On the other hand, if $X_\mathcal{P}(T)$ is compact for every overring $T$ of $D$ that is finitely generated as a $D$-algebra, then by Lemma \ref{lemma:YX-cons} it follows that $X_\mathcal{P}(D)$ cannot be a spectral space. Thus, the assignment $D\mapsto X_\mathcal{P}(D)$ cannot be ``too good'': either some $X_\mathcal{P}(T)$ is not compact, or $X_\mathcal{P}(D)$ is not spectral.
\vspace{5mm}
\textbf{Question.} Let $\mathcal{P}$ be the property of being regular, the property of being Gorenstein or the property of being Cohen-Macaulay. Is it possible to characterize for which Noetherian domains $D$ there is a $T\in\Over(D)$ such that $X_\mathcal{P}(T)$ is not compact and for which $X_\mathcal{P}(D)$ is not spectral?
\begin{comment}
We can even study regular rings, albeit in a very indirect way.
\begin{prop}
Let $D$ be a Noetherian domain such that $\dim(D)>1$. There is a Noetherian overring $T$ of $D$ such that the set of regular overrings of $D$ is not spectral.
\end{prop}
\begin{proof}
Let $\RegOver(T):=\{A\in\Over(T)\mid A\text{~is regular}\}$. If every $\RegOver(T)$ is spectral, then in particular they are all compact; since also $\RegOver(D)$ would be spectral, by Lemma \ref{lemma:YX-cons} it would follow that $\RegOver(D)$ would be proconstructible. Hence, $\RegOver(D)\cap\Zar(D)$ would be proconstructible, contradicting $\dim(D)>1$.
\end{proof}
\end{comment}
\section{Acknowledgments}
Section \ref{sect:kronecker} was inspired by a talk given by Daniel McGregor at the conference ``Recent Advances in Commutative Ring and Module Theory'' in Bressanone (June 13--17, 2017). I also thank the referee for the numerous suggestions, which improved the paper.
\section{Introduction}\label{sec:introduction}
Counting matchings and independent sets in graphs are central problems in the study of exact and approximate counting algorithms. Exact counting of the total number of matchings of a graph, the number of perfect matchings, the number of matchings of a given size, the number of independent sets, and the number of independent sets of a given size are all \#P-hard problems~\cite{valiant1979complexity}, even for many restricted classes of input graphs (bipartite graphs, graphs of bounded degree). A singular exception is the classical algorithm of Kasteleyn for counting the number of perfect matchings of a planar graph~\cite{kasteleyn1967graph}.
Turning to approximate counting, a stark difference emerges between matchings and independent sets. The landmark work of Jerrum and Sinclair gave an FPRAS (fully polynomial-time randomized approximation scheme) for counting (weighted) matchings in general graphs as well as counting matchings of any given size bounded away from the maximum matching~\cite{jerrum1989approximating}. For the special case of bipartite graphs, Jerrum, Sinclair, and Vigoda \cite{jerrum2004polynomial} gave an FPRAS for the number of matchings of any given size, including perfect matchings. Recent work of Alimohammadi, Anari, Shiragur, and Vuong~\cite{alimohammadi2021fractionally} provides an FPRAS for the number of matchings of any given size in planar graphs.
On the other hand, for counting (weighted) independent sets and independent sets of a given size, there is a threshold (in terms of degree, weighting factor, or density) above which the approximation problems are NP-hard and below which efficient approximation algorithms exist~\cite{weitz2006counting,DP21, anari2020spectral}. We mention that approximating the number of perfect matchings in general graphs is an outstanding open problem (see \cite{vstefankovivc2018counting}).
Randomization has played a crucial role in the aforementioned algorithmic results, and especially for matchings, there is a wide gap between what is known to be achievable deterministically and with randomness. For graphs of maximum degree $\Delta$, Bayati, Gamarnik, Katz, Nair, and Tetali~\cite{bayati2007simple} gave an FPTAS (fully polynomial-time approximation scheme) for the number of matchings and for weighted matchings with a bounded weighting factor; the running time of their algorithm is polynomial in $n$ and $1/\epsilon$, where $\epsilon$ is the desired accuracy and where the exponent of the polynomial depends on $\Delta$ and the weighting factor. However, it is not known, for instance, how to deterministically approximate the number of near-maximum matchings in bounded degree graphs. Our first main result addresses this by achieving, for bounded degree graphs, what was previously only known to be possible with randomness: we provide an FPTAS for the number of matchings of any given size bounded away from the maximum matching. Let $m_k(G)$ be the number of matchings of $G$ of size $k$ and let $m^*(G)$ be the size of the maximum matching of $G$.
\begin{theorem}
\label{thm:matching-fptas}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There exists a deterministic algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, an integer $1\leq k \leq (1-\delta)m^*(G)$, and an error parameter $\epsilon \in (0,1)$, outputs an $\epsilon$-relative approximation to $m_k(G)$ in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$.
\end{theorem}
Here, by an $\epsilon$-relative approximation to $m_k(G)$, we mean that the output $A$ satisfies $e^{-\epsilon}m_k(G) \leq A\leq e^{\epsilon}m_k(G)$.
We next prove the corresponding result for independent sets. Recall that the hard-core model on a graph $G = (V,E)$ at fugacity $\lambda \in \mathbb{R}_{\ge 0}$ is the probability distribution on $\mathcal{I}(G)$, the independent sets of $G$, defined by
\[\mu_{G,\lambda}(I) = \frac{\lambda^{|I|}}{Z_G(\lambda)},\]
where $Z_G(\lambda) = \sum_{I \in \mathcal{I}(G)}\lambda^{|I|} $ is the independence polynomial of $G$.
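As a concrete illustration of these definitions (not part of any algorithm in this paper), the independence polynomial of a small graph can be computed by brute-force enumeration; the $4$-cycle below is an arbitrary example.

```python
from itertools import combinations

def independent_sets(V, E):
    """Yield every independent set of the graph (V, E) as a tuple of vertices."""
    adj = {frozenset(e) for e in E}
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                yield S

def Z(V, E, lam):
    """Independence polynomial Z_G(lam) = sum over independent sets I of lam^|I|."""
    return sum(lam ** len(I) for I in independent_sets(V, E))

# Example: the 4-cycle C_4 has independence polynomial 1 + 4*lam + 2*lam^2.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(Z(V, E, 1.0))  # prints 7.0, the total number of independent sets of C_4
```

Evaluating at $\lambda=1$ simply counts all independent sets; the exponential enumeration is feasible only on tiny graphs, which is exactly why the approximation algorithms discussed below are needed.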
For $\Delta \ge 3$, let
$$\lambda_c(\Delta) = \frac{(\Delta-1)^{\Delta-1}}{(\Delta-2)^{\Delta}};$$
this is the uniqueness threshold for the hard-core model on the infinite $\Delta$-regular tree. For $ 0\leq \lambda < \lambda_c(\Delta)$, Weitz gave an FPTAS for $Z_G(\lambda)$ on the class of graphs of maximum degree $\Delta$~\cite{weitz2006counting}. Sly~\cite{sly2010computational}, Sly and Sun~\cite{sly2014counting}, and Galanis, {\v{S}}tefankovi{\v{c}}, and Vigoda~\cite{galanis2016inapproximability} complemented this by showing that for $\lambda > \lambda_c(\Delta)$, no FPRAS for $Z_G(\lambda)$ exists unless $\operatorname{NP}=\operatorname{RP}$.
Recently, Davies and Perkins \cite{DP21} showed an analogous threshold for counting independent sets of a given size in bounded degree graphs. Let
\[ \alpha_c(\Delta) = \frac{\lambda_c(\Delta)}{1+(\Delta+1)\lambda_c(\Delta)} = \frac{(\Delta-1)^{\Delta-1}}{(\Delta-2)^{\Delta}+(\Delta+1)(\Delta-1)^{\Delta-1}};\]
this is the occupancy fraction (i.e.~the expected density of an independent set) for the hard-core model on the clique on $\Delta + 1$ vertices at the critical fugacity $\lambda_c(\Delta)$.
They showed that for $\alpha < \alpha_c(\Delta)$, there is an FPRAS for $i_k(G)$ (the number of independent sets in $G$ of size $k$) for any $G$ of maximum degree $\Delta$ on $n$ vertices and any $k \le \alpha n$; conversely, no FPRAS exists for $k \ge \alpha n$ for $\alpha > \alpha_c(\Delta)$ unless $\operatorname{NP} = \operatorname{RP}$. The algorithm of~\cite{DP21} uses randomness in an essential way and the authors conjectured the existence of an FPTAS for $i_k(G)$ for $k\leq \alpha n$ where $\alpha <\alpha_c(\Delta)$~\cite[Conjecture~1]{DP21}. We prove this conjecture.
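For orientation, the thresholds $\lambda_c(\Delta)$ and $\alpha_c(\Delta)$ are easy to evaluate numerically from the formulas above; the following snippet is purely illustrative.

```python
def lambda_c(D):
    """Uniqueness threshold for the hard-core model on the infinite D-regular tree."""
    return (D - 1) ** (D - 1) / (D - 2) ** D

def alpha_c(D):
    """Critical density alpha_c(D) = lambda_c(D) / (1 + (D + 1) * lambda_c(D))."""
    lc = lambda_c(D)
    return lc / (1 + (D + 1) * lc)

for D in (3, 4, 5):
    print(D, lambda_c(D), alpha_c(D))
# e.g. lambda_c(3) = 4.0 and alpha_c(3) = 4/17, roughly 0.235
```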
\begin{theorem}
\label{thm:independent-fptas}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There exists a deterministic algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, an integer $1\leq k \leq (1-\delta)n\alpha_{c}(\Delta)$, and an error parameter $\epsilon \in (0,1)$, outputs an $\epsilon$-relative approximation to $i_k(G)$ in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$.
\end{theorem}
We remark that a deterministic algorithm for approximating $m_k$ and $i_k$ via the cluster expansion is implicit in~\cite{davies2020proof}, but only for $k $ much smaller than the bounds above.
Next, we turn to the problem of uniformly sampling matchings and independent sets of a given size. For fixed $\Delta \geq 3$, $\delta \in (0,1)$, Davies and Perkins \cite{DP21} gave an algorithm for $\epsilon$-approximately sampling (i.e.~within $\epsilon$ in total variation distance) from the uniform distribution on $\mathcal{I}_k(G)$ (the independent sets of size $k$ in $G$) for a graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$, for all $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$. The running time of their algorithm is $\tilde{O}_{\delta, \Delta}(n^3)$, where the $\tilde{O}$ conceals polylogarithmic factors in $n$ and $1/\epsilon$. For the more restricted range $1\leq k \leq (1-\delta)n/(2(\Delta+1))$, it was already shown by Bubley and Dyer \cite{bubley1997path} that the down-up walk on $\mathcal{I}_k(G)$ has $\epsilon$-mixing time $O_{\delta}(n\log(n/\epsilon))$, which is optimal up to constants (see also~\cite{alev2020improved} which gave fast mixing for a larger range of $k$ in graphs satisfying a spectral condition). The down-up walk is the following Markov chain on $\mathcal{I}_k(G)$: at each step, given the current independent set $I \in \mathcal{I}_k(G)$, choose a uniformly random vertex $v \in I$ and a uniformly random vertex $w \in V$. Let $I' = (I \setminus v) \cup w$. If $I' \in \mathcal{I}_k(G)$, then move to $I'$; else, stay at $I$.
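The down-up walk just described can be sketched in a few lines directly from its definition; the graph, starting set, and number of steps below are illustrative choices. Since $I\setminus v$ is already independent, the acceptance test only needs to check edges incident to the new vertex $w$.

```python
import random

def down_up_step(I, V, adj, rng):
    """One step of the down-up walk on independent sets of fixed size k."""
    I = set(I)
    v = rng.choice(sorted(I))  # uniformly random vertex of the current set
    w = rng.choice(V)          # uniformly random vertex of the graph
    J = (I - {v}) | {w}
    # Move to J only if it is again an independent set of the same size k;
    # only edges incident to w can be violated, since I - {v} is independent.
    if len(J) == len(I) and all(w not in adj[u] for u in J if u != w):
        return J
    return I

# Illustrative run on the 6-cycle, sampling independent sets of size 2.
V = list(range(6))
adj = {v: {(v - 1) % 6, (v + 1) % 6} for v in V}
rng = random.Random(0)
I = {0, 2}
for _ in range(1000):
    I = down_up_step(I, V, adj, rng)
print(sorted(I))
```

Every state visited is a size-$2$ independent set, so the chain indeed lives on $\mathcal{I}_k(G)$; the mixing-time bounds quoted above are, of course, not visible from such a toy run.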
For matchings, a polynomial time algorithm for approximately sampling from the uniform distribution on matchings of size $k$, for all $1\leq k \leq (1-\delta)m^*(G)$, is present in the work of Jerrum and Sinclair \cite{jerrum1989approximating}. The running time of their algorithm scales at least as $n^{7/2}$. In the case of graphs of maximum degree at most $\Delta$, a recent result of Chen, Liu, and Vigoda \cite{CLV20} combined with a rejection sampling procedure (and \cref{lem:lambda-determine} below) provides an algorithm for this task running in time $\tilde{O}_{\delta, \Delta}(n^{3/2})$.
It was conjectured in \cite[Conjecture~2]{DP21} that the down-up walk on $\mathcal{I}_k(G)$ mixes rapidly for all $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$ on all graphs on $n$ vertices of maximum degree at most $\Delta$. A stronger conjecture is that the $\epsilon$-mixing time of the chain for this range of $k$ is $O_{\delta, \Delta}(n\log(n/\epsilon))$. While not resolving this specific conjecture, our next main result provides an approximate sampling algorithm for $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$ (and $1\leq k \leq (1-\delta)m^*(G)$ in the case of matchings) running in quasi-linear time, which matches (up to a small polylogarithmic factor) the conjectured mixing time of the down-up walk.
\begin{theorem}
\label{thm:faster-sampling}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There is a randomized algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, an integer $1 \leq k \leq (1-\delta)n\alpha_c(\Delta)$, and an error parameter $\epsilon \in (0,1)$ outputs a random independent set $I \in \mathcal{I}_k(G)$ such that the total variation distance between the distribution of $I$ and the uniform distribution on $\mathcal{I}_k(G)$ is at most $\epsilon$. The running time of the algorithm is $O_{\delta, \Delta}(n\log(n/\epsilon)(\log n)^3 + n\log(n/\epsilon)\log n\log(1/\epsilon)^{3/2})$.
There is also a randomized algorithm with the same guarantee and running time for matchings of size $k$ for all $1\leq k \leq (1-\delta)m^*(G)$.
\end{theorem}
A natural extension of the unresolved conjecture of~\cite{DP21} is that the down-up walk for matchings of size $k$ is rapidly mixing in the setting of Theorem~\ref{thm:faster-sampling}.
\begin{conjecture}
The down-up walk for matchings of size $k$ mixes in time $O_{\Delta,\delta}(n \log(n/\epsilon))$ for graphs $G$ of maximum degree $\Delta$ and $1\leq k \leq (1-\delta)m^*(G)$.
\end{conjecture}
\subsection{Local central limit theorems} In previous works on approximately counting matchings \cite{jerrum1989approximating} and independent sets \cite{DP21} of a given size, a common approach is followed: sample a matching or independent set from the monomer-dimer model (i.e.~hard-core model on the line graph) or hard-core model where the fugacity $\lambda$ is chosen so that the average size is close to the desired size $k$. If the fugacity $\lambda$ is such that sampling from the corresponding monomer-dimer or hard-core model can be done efficiently, and if one can further show that the probability of obtaining a matching or independent set of size exactly $k$ is only polynomially small, then na\"ive rejection sampling gives an efficient sampling algorithm (which can be converted into an FPRAS for $m_k(G)$ or $i_k(G)$ by standard self-reducibility techniques). In the proofs of \cref{thm:matching-fptas,thm:independent-fptas}, we show how to implement a version of this idea deterministically.
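A toy version of this rejection-sampling scheme can be written down with exact sampling by enumeration standing in for the efficient Markov-chain samplers of the cited works; all concrete choices below (graph, fugacity, target size) are illustrative.

```python
import random
from itertools import combinations

def hardcore_sample(V, E, lam, rng):
    """Exact sample from the hard-core model by enumerating all independent sets.
    Feasible only on tiny graphs; a stand-in for an efficient approximate sampler."""
    adj = {frozenset(e) for e in E}
    sets, weights = [], []
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                sets.append(S)
                weights.append(lam ** r)  # unnormalized weight lam^|I|
    return rng.choices(sets, weights=weights)[0]

def sample_size_k(V, E, lam, k, rng, max_tries=10000):
    """Rejection sampling: draw from the hard-core model until |I| = k."""
    for _ in range(max_tries):
        I = hardcore_sample(V, E, lam, rng)
        if len(I) == k:
            return I
    raise RuntimeError("rejection sampling failed")

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
I = sample_size_k(V, E, lam=1.0, k=2, rng=random.Random(1))
print(sorted(I))  # one of the two size-2 independent sets of C_4
```

The scheme is efficient precisely when the acceptance probability $\mathbb{P}[|I|=k]$ is not too small, which is what the local central limit theorems below guarantee.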
Recall that a sequence of random variables $X_n$ with mean $\mu_n$ and variance $\sigma^2_n$ is said to satisfy a central limit theorem (CLT) if for all $a \leq b \in \mathbb{R}$,
\[\mathbb{P}\left[a \leq \frac{X_n - \mu_n}{\sigma_n} \leq b\right] = \frac{1}{\sqrt{2\pi}}\int_{a}^{b}e^{-x^2/2}dx + o_n(1).\]
In particular, central limit theorems provide control on the probability that $X_n$ lies in an interval of length $\Theta(\sigma_n)$. A much more precise notion is that of a local central limit theorem (LCLT). We say that a sequence of integer-valued random variables $X_n$ with mean $\mu_n$ and variance $\sigma^2_n$ satisfies an LCLT if for all integers $k$,
\[ \mathbb{P}[X_n=k] = \frac{1}{\sqrt{2 \pi} \sigma_n} e^{ -(k-\mu_n)^2/(2 \sigma_n^2)} + o_n \left( \sigma_n^{-1} \right).\]
Returning to the discussion in the previous paragraph, suppose we could deterministically find a fugacity $\lambda$ such that:
\begin{enumerate}[(a)]
\item The expected size (to the nearest integer) of a matching or independent set drawn from the monomer-dimer or hard-core model is $k$.
\item There is an FPTAS for the partition function, expectation, and variance of the monomer-dimer or hard-core model at $\lambda$.
\item The size of a matching or independent set drawn from the monomer-dimer or hard-core model at $\lambda$ satisfies an LCLT.
\end{enumerate}
Then, from (a) and (c), we have that (as long as $\sigma_n = \omega_n(1)$)
\[\frac{m_k(G_n)\lambda^k}{\sum_{j=0}^{n}m_j(G_n)\lambda^j}=\mathbb{P}[Y_n = k] = (1+o_n(1))\frac{1}{\sqrt{2\pi}\sigma_n}e^{-(k-\mu_n)^2/(2\sigma_n^2)},\]
and similarly for independent sets.
Together with (b), this immediately gives a deterministic algorithm for approximating $m_k(G_n)$ or $i_k(G_n)$ to within a factor of $(1+\epsilon)(1+o_n(1))$ in time which is polynomial in $n$ and $1/\epsilon$.
For the range of $k$ covered by \cref{thm:matching-fptas,thm:independent-fptas}, a fugacity $\lambda$ satisfying (a) and (b) does indeed exist and can be found deterministically using (b) along with a binary search procedure. In particular, for the hard-core model, such a $\lambda$ satisfies $\lambda < \lambda_c(\Delta)$. Moreover, by the Heilmann--Lieb theorem~\cite{heilmann1972theory}, the roots of the partition function of the monomer-dimer model are restricted to the negative real line, from which it follows that the size of a random matching is distributed as the sum of independent Bernoulli random variables and thus, for $\lambda$ not too small, satisfies an LCLT~\cite{godsil1981matching}. However, for the hard-core model, the corresponding LCLT for all $\lambda < \lambda_{c}(\Delta)$ was not known. In fact, even the much weaker statement that $\mathbb{P}_{\lambda}[|I| = \lfloor \mu_\lambda \rfloor] = \Omega(\sigma_\lambda^{-1})$ (here, the probability $\mathbb{P}_{\lambda}$, the mean $\mu_{\lambda}$, and the variance $\sigma_{\lambda}^2$ are with respect to the hard-core model with fugacity $\lambda$) was unavailable for all $\lambda < \lambda_c(\Delta)$ and is precisely the content of \cite[Conjecture~3]{DP21}. A key step in our proof is the resolution of this conjecture in the much stronger form of an LCLT.
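Step (a), tuning $\lambda$ so that the mean size is close to $k$, exploits the monotonicity of $\mathbb{E}_{\lambda}Y$ in $\lambda$ and reduces to bisection. The sketch below uses brute-force enumeration for the mean, where step (b) would supply an FPTAS; all concrete inputs are illustrative.

```python
from itertools import combinations

def mean_size(V, E, lam):
    """E_lambda[|I|] for the hard-core model on (V, E), by direct enumeration."""
    adj = {frozenset(e) for e in E}
    num = den = 0.0
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                w = lam ** r
                num += r * w
                den += w
    return num / den

def tune_lambda(V, E, k, lo=0.0, hi=100.0, iters=60):
    """Bisection for lambda with E_lambda[|I|] = k, valid since the mean
    size is strictly increasing in lambda."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_size(V, E, mid) < k:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# On the 4-cycle, the mean size equals 1 exactly at lambda = 1/sqrt(2).
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
lam = tune_lambda(V, E, k=1.0)
print(lam, mean_size(V, E, lam))
```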
\begin{theorem}
\label{thm:independent-lclt}
Fix $\Delta\ge 3$ and $\delta \in (0,1)$. Then for any sequence of graphs $G_n$ on $n$ vertices of maximum degree $\Delta$, and any sequence $\lambda_n \in \mathbb{R}^+$ so that $n \lambda_n \to \infty$ and $\lambda_n \le (1-\delta) \lambda_c(\Delta)$, the size of a random independent set drawn from the hard-core model on $G_n$ at fugacity $\lambda_n$ satisfies a local central limit theorem.
\end{theorem}
In \cref{secLCLT}, we state and prove a quantitative version of this result (\cref{thm:independent-lclt2}). A critical part in the proof of the LCLT is the existence of a suitable zero-free region (in the complex plane) for the independence polynomial, which has previously been exploited using Barvinok's method (cf.~\cite{barvinok2016combinatorics}) to devise an FPTAS for the independence polynomial evaluated at $\lambda$ in a certain complex region containing the interval $[0, \lambda_c(\Delta))$
on graphs of maximum degree $\Delta$ \cite{patel2017deterministic,PR19}.
Given the LCLT for $\lambda$ satisfying (a) and (b), the above discussion immediately leads to a $(1\pm \epsilon)(1\pm o_n(1))$-factor approximation of $m_k(G)$ or $i_k(G)$ in time which is polynomial in $n$ and $1/\epsilon$, where the degree of the polynomial is allowed to depend on $\Delta$ and $\delta$. As such, this is only an EPTAS (efficient polynomial-time approximation scheme) since the finest approximation one can achieve with this method is $(1\pm o_n(1))$ (see \cref{sub:EPTAS} for a more detailed discussion). Nevertheless, we show that the \emph{proof} of the LCLT using Fourier inversion can be converted into an FPTAS, thereby providing a (perhaps surprising) connection between the deterministic approximation of the matching polynomial or independence polynomial at \emph{complex} fugacities and the deterministic approximation of suitable coefficients of the matching polynomial or independence polynomial. We note that the computational complexity of evaluating partition functions at complex parameters has received much recent attention \cite{harvey2018computing,PR19,bezakova2019inapproximability,liu2019ising,buys2021lee}.
In a nutshell, the idea is the following: given a graph $G$ with maximum degree $\Delta$, an error parameter $\epsilon \in (0,1)$, and $k$ as in \cref{thm:independent-fptas}, consider $\lambda < \lambda_{c}(\Delta)$ satisfying (a) and (b). Let $Y=|I|$ denote the size of an independent set drawn from the hard-core model on $G$ at fugacity $\lambda$, let $\mu$ and $\sigma^2$ denote the mean and variance of $Y$, and let $X = (Y-\mu)/\sigma$. By the Fourier inversion formula for lattices, for all $x \in \sigma^{-1}\cdot \mathbb{Z} - \sigma^{-1}\mu$,
\[\mathbb{P}[X = x] = \frac{1}{2\pi \sigma}\int_{-\pi \sigma}^{\pi \sigma} \mathbb{E}[e^{itX}]e^{-itx}dt,\]
where the probability and expectations are with respect to the hard-core model with fugacity $\lambda$.
In order to approximate $\mathbb{P}[X = x]$ we consider $\mathbb{E}[e^{itX}]$. First, observe that
\[\mathbb{E}[e^{itX}] = e^{-it\mu/\sigma}\cdot \frac{Z_G(\lambda e^{it/\sigma})}{Z_G(\lambda)}.\]
If $t/\sigma$ is sufficiently small so that $\lambda e^{it/\sigma}$ is in the zero-free region of $Z_G(\lambda)$ (viewed as a univariate polynomial of a single complex variable), then there is an FPTAS for $Z_G(\lambda e^{it/\sigma})$ by Barvinok's method \cite{barvinok2016combinatorics, PR19, patel2017deterministic}. However, this method does not handle all $t$. For sufficiently large $t$, our proof of the LCLT shows that the corresponding part of the integral is bounded by $(\epsilon/2)\mathbb{P}[X=x]$. It turns out that these two regimes overlap. Therefore, using a Riemann sum approximation in the small-$t$ regime gives an FPTAS for $i_k(G)$.
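This Fourier-inversion computation can be checked end-to-end on a small graph: a midpoint Riemann sum over the characteristic function recovers $\mathbb{P}[Y=k]=i_k(G)\lambda^k/Z_G(\lambda)$, here compared against direct enumeration. The discretization is illustrative and not the error-controlled one needed for the actual FPTAS.

```python
import cmath
from itertools import combinations
from math import pi

def indep_counts(V, E):
    """Coefficients i_k(G) of the independence polynomial, by enumeration."""
    adj = {frozenset(e) for e in E}
    counts = [0] * (len(V) + 1)
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            if all(frozenset(p) not in adj for p in combinations(S, 2)):
                counts[r] += 1
    return counts

def prob_via_fourier(counts, lam, k, steps=2000):
    """Midpoint Riemann sum for P[Y = k], the integral over [-pi, pi] of
    E[e^{itY}] e^{-itk} dt / (2 pi), where E[e^{itY}] = Z(lam e^{it}) / Z(lam)."""
    Zfun = lambda z: sum(c * z ** j for j, c in enumerate(counts))
    total = 0.0
    for s in range(steps):
        t = -pi + 2 * pi * (s + 0.5) / steps
        phi = Zfun(lam * cmath.exp(1j * t)) / Zfun(lam)
        total += (phi * cmath.exp(-1j * t * k)).real
    return total / steps  # = (1/2pi) * (2pi/steps) * sum

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
counts = indep_counts(V, E)  # [1, 4, 2, 0, 0] for the 4-cycle
lam = 0.5
Ztot = sum(c * lam ** j for j, c in enumerate(counts))
print(prob_via_fourier(counts, lam, 1), counts[1] * lam / Ztot)
```

Because the characteristic function here is a trigonometric polynomial of low degree, the equally spaced midpoint sum is exact up to floating-point error; in the algorithm proper, the evaluations of $Z$ at complex arguments are replaced by the Barvinok-type approximations just discussed.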
Towards the proof of \cref{thm:faster-sampling}, given $k \leq (1-\delta)\alpha_c(\Delta)n$, we provide in \cref{lem:lambda-determine} a $\tilde{O}_{\delta, \Delta}(n)$ time randomized algorithm for finding $\lambda$ satisfying $|\mathbb{E}_{\lambda}Y - k| \leq \sqrt{\operatorname{Var}_{\lambda}Y}$. As mentioned earlier, this can be combined with the $\tilde{O}_{\Delta, \delta}(n)$ mixing of the Glauber dynamics for the hard-core model at fugacity $\lambda$ \cite{CLV20} and rejection sampling to approximately sample from the uniform distribution on $\mathcal{I}_k(G)$ in time $\tilde{O}_{\Delta, \delta}(n^{3/2})$, since the acceptance probability is $\tilde{\Omega}_{\Delta, \delta}(1/\sqrt{\operatorname{Var}_{\lambda}|I|})$. The main idea underlying our algorithm is that one may instead perform rejection sampling with the base distribution (effectively) being the hard-core distribution conditioned on $Y\equiv k \bmod p$, with $p = \tilde{\Theta}(\sqrt{\operatorname{Var}_{\lambda}Y})$, while still ensuring that samples from the base distribution can be obtained in time $\tilde{O}(n)$. Given this, the assertion of \cref{thm:faster-sampling} follows since by the LCLT, the acceptance probability is now $\mathbb{P}_{\lambda}[Y=k]/\mathbb{P}_{\lambda}[Y\equiv k \bmod p] = \tilde{\Omega}_{\Delta, \delta}(1)$. Thus, the key step is to show that samples from the hard-core distribution conditioned on $Y\equiv k \bmod p$ may still be obtained in time $\tilde{O}(n)$. For this, we use a multi-stage view of sampling from the hard-core model, motivated directly by the proof of the LCLT.
\subsection{Additional results}
In \cref{thm:independent-fptas,thm:matching-fptas}, we gave an FPTAS for $i_k(G)$ and $m_k(G)$ with running times of the form $(n/\epsilon)^{O_{\Delta, \delta}(1)}$. Now, we provide substantially faster randomized algorithms for the same problems.
In \cite{DP21}, an FPRAS (fully polynomial-time randomized approximation scheme) for $i_k(G)$ was given, for all $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$, with running time $\tilde{O}_{\Delta, \delta}(n^6\epsilon^{-2})$. Combining this algorithm with our near-optimal sampling algorithm improves the running time to $\tilde{O}_{\Delta, \delta}(n^{4}\epsilon^{-2})$. In contrast, it is known that there is an FPRAS for the independence polynomial $Z_G(\lambda)$, for all $0 \leq \lambda \leq (1-\delta)\lambda_c(\Delta)$, with running time $\tilde{O}(n^2\epsilon^{-2})$ (cf.~\cite{vstefankovivc2009adaptive}). Our next result provides an FPRAS for $i_k(G)$ and $m_k(G)$ whose running time exceeds the best-known running time for approximating the independence polynomial or matching polynomial only by a lower order term.
\begin{theorem}
\label{thm:faster-fpras}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There is a randomized algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, an integer $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$ and an error parameter $\epsilon \in (0,1)$ outputs an $\epsilon$-relative approximation to $i_k(G)$ with probability $3/4$ in time
\[T + O_{\Delta, \delta}(n^{3/2}\log n\log (n/\epsilon)\epsilon^{-2}),\]
where $T$ is the time to find an $\epsilon/2$-relative approximation to $Z_G(\lambda)$ for a given $\lambda \in [0, (1-\delta)\lambda_c(\Delta)]$.
Moreover, there exists a constant $C_{\Delta, \delta} > 0$ such that the same conclusion holds for $m_k(G)$ for all $1 \leq k \leq (1-\delta)m^*(G)$, where $T$ is the time to find an $\epsilon/2$-relative approximation to the matching polynomial $Z_G(\lambda)$ for a given $\lambda \in [0, C_{\Delta, \delta}]$.
\end{theorem}
\begin{remark}
For $1 \leq k \leq c_{\Delta}\sqrt{n}$ (where $c_{\Delta}$ is a sufficiently small constant), the running time may be improved to
\[O(k\log{n}\log\log{n}) + O_{\Delta}(k\epsilon^{-2}\log{n}).\]
Moreover, the term $n^{3/2}$ in \cref{thm:faster-fpras} may be replaced by $\tilde{O}(n)$ by using similar ideas as in the proof of \cref{thm:faster-sampling}. However, since the current bounds on $T$ are $\Omega(n^2)$, we have not pursued this improvement.
\end{remark}
Finally, as an intermediate step in obtaining deterministic approximate counting algorithms, we need to deterministically approximate the mean and variance of the size of an independent set or matching drawn from the hard-core or monomer-dimer model, respectively. While such approximations can be obtained via algorithms based on the correlation decay method of Weitz~\cite{weitz2006counting,bayati2007simple} for approximating marginals and joint marginals, we instead provide faster deterministic algorithms by adapting the method of Barvinok~\cite{barvinok2016combinatorics} and Patel--Regts~\cite{patel2017deterministic} to approximate the $k$th cumulant of the size of the random independent set or matching.
The $k$th cumulant of a random variable $Y$ is defined in terms of the coefficients of the cumulant generating function $K_Y(t) = \log \mb{E} e^{tY}$ (when this expectation exists in a neighborhood of $0$). In particular, the $k$th cumulant is
\[ \kappa_k(Y) = K^{(k)}_Y(0) \,. \]
The first and second cumulants are the mean and variance respectively.
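As a sanity check on these definitions (independent of the algorithm of the theorem below), the first two cumulants can be recovered from finite differences of $K_Y$ and compared with the mean and variance computed directly; the distribution used is the hard-core model on the $4$-cycle at $\lambda=1$.

```python
from math import log, exp

# Distribution of Y = |I| under the hard-core model on the 4-cycle at lam = 1:
# i_k = (1, 4, 2), so P[Y = 0], P[Y = 1], P[Y = 2] = 1/7, 4/7, 2/7.
lam = 1.0
counts = [1, 4, 2]
Ztot = sum(c * lam ** k for k, c in enumerate(counts))
p = [c * lam ** k / Ztot for k, c in enumerate(counts)]

def K(t):
    """Cumulant generating function K_Y(t) = log E[e^{tY}]."""
    return log(sum(pk * exp(t * k) for k, pk in enumerate(p)))

# Central finite differences for the first two derivatives of K at 0.
h = 1e-4
kappa1 = (K(h) - K(-h)) / (2 * h)            # first cumulant = mean
kappa2 = (K(h) - 2 * K(0) + K(-h)) / h ** 2  # second cumulant = variance

mean = sum(k * pk for k, pk in enumerate(p))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(p))
print(kappa1, mean, kappa2, var)
```

The numerical derivatives agree with the exact mean $8/7$ and variance $20/49$ up to the $O(h^2)$ discretization error; the algorithmic results below instead extract cumulants via the coefficient-computation framework of Barvinok and Patel--Regts.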
\begin{theorem}
\label{thm:linear-time-cumulants}
Fix $\Delta \ge 3$, $k\ge 1$, and $\delta \in(0,1)$. There is an algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, $0 < \lambda \le (1- \delta) \lambda_c(\Delta)$, and an error parameter $\epsilon \in (0,1)$ outputs an $\epsilon \lambda n$ additive approximation to $\kappa_k(Y)$, where $Y$ is the size of an independent set drawn from the hard-core model on $G$ at fugacity $\lambda$. The algorithm runs in time $O_{\Delta, \delta, k}(n (1/\epsilon)^{O_{\Delta,\delta}(1)})$. In particular, this provides an FPTAS for $\mb{E}_{\lambda} Y$ and $\on{Var}_{\lambda}Y$ running in time linear in $n$.
For claw-free graphs (hence for the size of a random matching $Y$ drawn from the monomer-dimer model), the same holds for all bounded $\lambda > 0$.
\end{theorem}
\subsection{Outline}
We prove \cref{thm:matching-fptas,thm:independent-fptas} in essentially the same way, given as an input the existence of a zero-free region for the corresponding partition function in the complex plane. For matchings, this is provided by the classical Heilmann--Lieb theorem~\cite{heilmann1972theory}, while for independent sets this is provided by the theorems of Shearer~\cite{shearer1985problem} and Peters and Regts~\cite{PR19}. To avoid excessive repetition (and to gain a slight amount of generality) we work with the larger class of independent sets in claw-free graphs (instead of matchings, which are independent sets in line graphs). The generalization of the Heilmann--Lieb theorem to claw-free graphs is due to Chudnovsky and Seymour~\cite{chudnovsky2007roots}.
We record these zero-freeness results in \cref{sec:prelims}, along with some results from the geometry of polynomials on the consequences of zero-freeness, namely a central limit theorem of Michelen and Sahasrabudhe~\cite{MS19} and a deterministic approximation algorithm for $Z_G(\lambda)$ (with $\lambda$ possibly complex) due to Barvinok~\cite{barvinok2016combinatorics} and Patel and Regts~\cite{patel2017deterministic}. We also record a recent result of Chen, Liu, and Vigoda~\cite{CLV20} on the optimal mixing of Glauber dynamics for bounded marginal spin-systems on bounded degree graphs which is used in our randomized algorithms.
In \cref{secLCLT}, we prove the local central limit theorem for the hard-core model (\cref{thm:independent-lclt}). Our proof uses both the aforementioned central limit theorem and a variant of the technique of Dobrushin and Tirozzi~\cite{dobrushin1977central} who proved local central limit theorems for spin models on the integer lattice $\mathbb{Z}^d$.
In \cref{sec:DetAlgorithms}, we prove our deterministic algorithmic results \cref{thm:matching-fptas,thm:independent-fptas}.
In \cref{sec:RandAlgorithms}, we prove our randomized algorithmic results \cref{thm:faster-fpras,thm:faster-sampling}.
In \cref{secCluster}, we provide a proof of \cref{thm:independent-lclt} when $\lambda$ is sufficiently small as a function of $\Delta$ using the cluster expansion, a tool from classical statistical physics. While not necessary for the main results, we include this since the proof is simpler and perhaps more intuitive.
Finally, in \cref{secCumulants}, we prove \cref{thm:linear-time-cumulants}.
\subsection{Notation}\label{sub:notation}
Throughout, we reserve the random variable $Y$ for the size of a random independent set drawn from the hard-core model on a graph $G$. We use subscripts to indicate the fugacity, e.g. $\mb{E}_\lambda Y$ and $\on{Var}_\lambda Y$. We let $\alpha_G(\lambda)$ denote the occupancy fraction of the hard-core model on $G$ at fugacity $\lambda$; that is, $\alpha_G(\lambda) = \frac{\mb{E}_{\lambda} Y }{|V(G)| }$. We let $\mathcal{Z}$ be a standard normal random variable and $\mathcal{N}(x) = e^{-x^2/2}/\sqrt{2\pi}$ denote its density.
Dependence of various constants on input parameters will often be important; we will write $f(n) = O_\Delta(1)$, for instance, to mean that $f$ is bounded by a constant that depends only on $\Delta$.
\section{Preliminaries}\label{sec:prelims}
\begin{definition}
\label{def:approximation}
Let $z_1, z_2 \in \mathbb{C}$.
We say that $z_1$ is a $\delta$-additive, $\epsilon$-relative approximation of $z_2$ if
$z_1 = re^{i\theta}z_2 + z_3$
for some $e^{-\epsilon} \leq r \leq e^{\epsilon}$, $\theta \in \mathbb{R}$, $|\theta| \leq \epsilon$ and $z_3 \in \mathbb{C}, |z_3| \leq \delta$.
When $\delta = 0$, we simply say that $z_1$ is an $\epsilon$-relative approximation of $z_2$.
\end{definition}
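For intuition, when $\delta = 0$ and $\epsilon < \pi$ the decomposition $z_1 = re^{i\theta}z_2$ is unique, so $\epsilon$-relative approximation can be decided mechanically. The following short Python sketch (purely illustrative, not used anywhere in the paper) implements this check.

```python
import cmath
import math

def is_relative_approx(z1: complex, z2: complex, eps: float) -> bool:
    """Decide whether z1 is an eps-relative approximation of z2, i.e.
    z1 = r * exp(i*theta) * z2 with e^{-eps} <= r <= e^{eps} and
    |theta| <= eps (the delta-additive part is taken to be zero)."""
    if z2 == 0:
        return z1 == 0
    ratio = z1 / z2
    r = abs(ratio)
    theta = cmath.phase(ratio)  # principal argument, lies in (-pi, pi]
    return math.exp(-eps) <= r <= math.exp(eps) and abs(theta) <= eps
```

Since $|\theta| \leq \epsilon < \pi$, taking the principal argument loses nothing, and the check is exact.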
The following theorem combines results of Shearer~\cite{shearer1985problem} on the non-vanishing of the independence polynomial in a complex disk, Peters and Regts~\cite{PR19} on the non-vanishing of the independence polynomial in a complex neighborhood of $(0,\lambda_c(\Delta))$, and Chudnovsky and Seymour~\cite{chudnovsky2007roots} on the real-rootedness of the independence polynomial of claw-free graphs (extending the Heilmann--Lieb theorem on the real-rootedness of the matching polynomial~\cite{heilmann1972theory}).
\begin{theorem}[\cite{shearer1985problem,PR19,chudnovsky2007roots,heilmann1972theory}]
\label{thm:zero-free}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There exists $c_{\ref{thm:zero-free}} = c_{\delta, \Delta} > 0$ such that for any graph $G = (V,E)$ with maximum degree at most $\Delta$, the partition function $Z_G(\lambda)$ of the hard-core model does not vanish on the region
\[\mathcal{R}_{\delta, \Delta} := \{z \in \mathbb{C}: 0\leq \Re(z) \leq (1-\delta)\lambda_{c}(\Delta), |\Im(z)| \leq c_{\delta, \Delta}\} \cup \{z \in \mathbb{C} : |z| < (\Delta-1)^{\Delta-1}/\Delta^\Delta\}.\]
Moreover, if $G$ is claw-free, then all of the roots of $Z_G(\lambda)$ are on the negative real axis below $-e/(\Delta + 1)$.
In particular, $Z_G(\lambda)$ does not vanish on the region
\[\mathcal{C}_{\Delta} := \mathbb{C} \setminus \{z \in \mathbb{C}: \Re(z) \leq -e/(\Delta+1)\}.\]
\end{theorem}
Next, we record two consequences of the above zero-free regions of the partition function. The first is an FPTAS for $Z_G(\lambda)$, provided that $\lambda \in \mathbb{C}$ lies in the zero-free region, and follows by using Barvinok's method \cite{barvinok2016combinatorics} and the work of Patel and Regts \cite{patel2017deterministic}.
\begin{theorem}[\cite{barvinok2016combinatorics, patel2017deterministic}]
\label{thm:patel-regts}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. There exists a deterministic algorithm which, on input a graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$, a (possibly complex) fugacity $\lambda \in \mathcal{R}_{\delta, \Delta}$, and an approximation parameter $\epsilon \in (0,1)$, returns an $\epsilon$-relative approximation of $Z_G(\lambda)$ in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$.
Moreover, if $G$ is claw-free, then the same conclusion holds for any $\lambda \in \mathcal{C}_{\Delta}$ with running time $(n/\epsilon)^{O_{\Delta, |\lambda|}(1)}$.
\end{theorem}
The second is a result of Michelen and Sahasrabudhe \cite{MS19} on converting zero-free regions of probability generating functions into central limit theorems. We note that in order to establish our results with slightly worse quantitative dependencies, earlier results of Lebowitz, Pittel, Ruelle, and Speer \cite{LPRS16} are sufficient.
\begin{theorem}[{\cite[Theorem~1.2]{MS19}}]\label{thm:clt-gen}
Let $X$ be a random variable taking values in $\{0,1,\dots,n\}$ with mean $\mu$ and variance $\sigma^2$ and let
$f_X(z) = \sum_{k=0}^{n}\mathbb{P}[X=k]z^k$ denote its probability generating function. Let $\delta = \min_{\zeta}|\zeta-1|$, where $\zeta$ ranges over the (complex) roots of $f_X$. Then,
\[\sup_{t\in\mathbb{R}}|\mathbb{P}[(X-\mu)/\sigma\le t]-\mathbb{P}[\mathcal{Z}\le t]| = O\bigg(\frac{\log n}{\delta\sigma}\bigg).\]
\end{theorem}
For our proof of the LCLT for the hard-core model, we will need the following simple lemma. The precise version we state here appears in work of Berkowitz \cite{Ber16} on quantitative local central limit theorems for triangle counts in $\mathbb{G}(n,p)$ and has been used, for instance, in further work on local central limit theorems for general subgraph counts in random graphs~\cite{SS20}; we include the short proof for the reader's convenience.
\begin{lemma}[{\cite[Lemma~3]{Ber16}}]\label{lem:fourier-convert}
Let $X$ be a random variable supported on the lattice $\mathcal{L} = \alpha + \beta \mathbb{Z}$ and let $\mathcal{N}(x) = e^{-x^2/2}/\sqrt{2\pi}$ denote the density of the standard normal distribution. Then
\[\sup_{x\in \mathcal{L}}|\beta\mathcal{N}(x) - \mathbb{P}[X=x]|\le \beta\int_{-\pi/\beta}^{\pi/\beta}\big|\mathbb{E}[e^{itX}]-\mathbb{E}[e^{it\mathcal{Z}}]\big|dt + e^{-\pi^2/(2\beta^2)}.\]
\end{lemma}
\begin{proof}
Let $\varphi_X(t) = \mathbb{E}e^{itX}$ and $\varphi(t) = \mathbb{E}e^{it\mathcal{Z}}$. By Fourier inversion on lattices and by Fourier inversion on $\mathbb{R}$, we have for $x\in\mathcal{L}$ that
\[\mathbb{P}[X=x] = \frac{\beta}{2\pi}\int_{-\pi/\beta}^{\pi/\beta}\varphi_X(t)e^{-itx}dt,\qquad\mathcal{N}(x) = \frac{1}{2\pi}\int_{-\infty}^\infty\varphi(t)e^{-itx}dt.\]
Therefore
\begin{align*}
|\beta\mathcal{N}(x)-\mathbb{P}[X=x]|&=\frac{\beta}{2\pi}\bigg|\int_{-\infty}^\infty\varphi(t)e^{-itx}dt-\int_{-\pi/\beta}^{\pi/\beta}\varphi_X(t)e^{-itx}dt\bigg|\\
&\le\frac{\beta}{2\pi}\int_{-\pi/\beta}^{\pi/\beta}|\varphi(t)-\varphi_X(t)|dt+\frac{\beta}{2\pi}\bigg|\int_{|t|>\pi/\beta}e^{-itx}\varphi(t)dt\bigg|\\
&\le\beta\int_{-\pi/\beta}^{\pi/\beta}|\varphi(t)-\varphi_X(t)|dt +e^{-\pi^2/(2\beta^2)},
\end{align*}
where we used a standard tail bound on Gaussian integrals in the last line. Taking the supremum over $x \in \mathcal{L}$ completes the proof.
\end{proof}
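The inversion formula for lattice random variables used in the proof above is easy to test numerically. The following Python sketch (illustrative only; a midpoint Riemann sum stands in for the integral) recovers a binomial point probability from its characteristic function.

```python
import cmath
import math

def lattice_inversion(pmf, x, beta=1.0, steps=20000):
    """Approximate P[X = x] via Fourier inversion on the lattice beta*Z:
        P[X = x] = (beta / 2pi) * int_{-pi/beta}^{pi/beta} phi_X(t) e^{-itx} dt,
    using a midpoint Riemann sum.  pmf maps lattice points to probabilities."""
    a = -math.pi / beta
    h = (2 * math.pi / beta) / steps
    total = 0j
    for i in range(steps):
        t = a + (i + 0.5) * h
        phi = sum(p * cmath.exp(1j * t * v) for v, p in pmf.items())
        total += phi * cmath.exp(-1j * t * x) * h
    return (beta / (2 * math.pi)) * total.real
```

For a distribution on finitely many lattice points the integrand is a trigonometric polynomial, so the midpoint rule is exact up to floating-point error.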
For our randomized algorithms, we will make use of recent results of Chen, Liu, and Vigoda \cite{CLV20} establishing optimal mixing of the Glauber dynamics for bounded-degree spin systems.
\begin{theorem}[\cite{CLV20}]
\label{thm:glauber}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. For every graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$ and for every $0 < \lambda \leq (1-\delta)\lambda_c(\Delta)$, the $\epsilon$-mixing time of the Glauber dynamics for the hard-core model on $G$ at fugacity $\lambda$ is $O_{\Delta, \delta}(n\log(n/\epsilon))$.
Moreover, for any $C > 0$, the same conclusion holds for all line graphs $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$ and any $0 < \lambda \leq C$, with the implicit constant depending on $\Delta$ and $C$.
\end{theorem}
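For reference, the Glauber dynamics in question is the standard single-site heat-bath chain: at each step a uniformly random vertex is resampled conditionally on the rest of the configuration. A minimal Python sketch (illustrative only; the adjacency-list encoding, step count, and seed are our own choices, not taken from \cite{CLV20}):

```python
import random

def glauber_step(occ, adj, lam, rng):
    """One heat-bath update: resample a uniformly random vertex v
    conditionally on the rest.  If some neighbour of v is occupied,
    v must be unoccupied; otherwise v is occupied w.p. lam/(1+lam)."""
    v = rng.randrange(len(adj))
    if any(occ[u] for u in adj[v]):
        occ[v] = False
    else:
        occ[v] = rng.random() < lam / (1.0 + lam)

def sample_hardcore(adj, lam, steps, seed=0):
    """Run `steps` Glauber updates from the empty configuration."""
    rng = random.Random(seed)
    occ = [False] * len(adj)
    for _ in range(steps):
        glauber_step(occ, adj, lam, rng)
    return occ
```

By construction the chain never leaves the set of independent sets, and its stationary distribution is the hard-core model at fugacity $\lambda$.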
\section{Proof of the local central limit theorem}
\label{secLCLT}
The goal of this section is to prove the following quantitative version of \cref{thm:independent-lclt}.
\begin{theorem}
\label{thm:independent-lclt2}
Fix $\Delta\ge 3$ and $\delta>0$. Let $\lambda\in(0,\lambda_c(\Delta)-\delta)$. Given a graph $G$ on $n$ vertices with maximum degree at most $\Delta$, draw a random independent set $I$ from the hard-core model at fugacity $\lambda$ and let $Y = |I|$. Let $\mu = \mathbb{E}_{\lambda}Y$ and $\sigma^2 = \operatorname{Var}_{\lambda}Y$. Then
\[\sup_{t\in \mathbb{Z}}|\sigma^{-1}\mathcal{N}((t-\mu)/\sigma) - \mathbb{P}[Y=t]|= O_{\Delta,\delta}\bigg(\min\bigg(\frac{(\log n)^{5/2}}{\sigma^2},\frac{1}{\sigma^{2}} + \frac{\sigma^6(\log n)^2}{n}\bigg)\bigg).\]
Moreover, if $G$ is claw-free, then the same conclusion holds for any $\lambda \in (0,C)$, with the implicit constant depending on $\Delta$ and $C$.
\end{theorem}
\begin{remark}
\cref{lem:var-bound} below shows that $\sigma^{2} = \Theta_{\Delta}(\lambda n)$ (or $\sigma^2 = \Theta_{\Delta, C}(\lambda n)$ in the case of claw-free graphs). Thus, for fixed $\Delta, \delta, C$ and for a sequence of $\lambda_n$ in the appropriate region for which $\lambda_n n \to \infty$, we see that the right-hand side tends to $0$, thereby establishing the qualitative claim of \cref{thm:independent-lclt}.
\end{remark}
The proof of \cref{thm:independent-lclt2} requires a few steps. First, we use the zero-free regions given by \cref{thm:zero-free} in order to bound $\sigma$.
\begin{lemma}\label{lem:var-bound}
Let $\Delta \geq 3$ and $\delta \in (0,1)$. For any graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$ and for all $\lambda \in (0, \lambda_c(\Delta) - \delta)$, we have
\[\frac{ \lambda n}{ (\Delta+1) (1+\lambda)^{2+\Delta } } \le \operatorname{Var}_{\lambda}Y \le C_{\delta, \Delta} \lambda n \,.\]
Moreover, if $G$ is claw-free, then for all $\lambda >0$,
\[\frac{ \lambda n}{ (\Delta+1) (1+\lambda)^{2+\Delta } } \le \operatorname{Var}_{\lambda}Y \le C_{ \Delta} \lambda n \,.\]
\end{lemma}
\begin{proof}
The lower bounds follow from~\cite[Lemma 9]{DP21}.
For the upper bounds, let $N = \alpha(G) \le n$ be the independence number of $G$ (which is also the degree of $Z_G(\lambda)$ as a polynomial in $\lambda$). Since the constant term of the polynomial $Z_G(\lambda)$ is $1$, we can write
\[ Z_G(\lambda) = \prod_{j=1}^N \left( 1- \lambda r_j \right) \]
where $r_1, \dots , r_N$ are the inverses of the complex roots of $Z_G(\lambda)$. Then
\begin{align*}
\operatorname{Var}_{\lambda}Y &= \lambda^2 (\log Z_G(\lambda)) '' + \lambda (\log Z_G(\lambda))' \\
&= -\sum_{j=1}^N \left( \frac{\lambda^2 r_j^2 }{(1- \lambda r_j)^2} + \frac{\lambda r_j}{ 1-\lambda r_j }\right) \\
&= - \lambda \sum_{j=1}^N \frac{r_j}{ (1-\lambda r_j)^2} \,.
\end{align*}
By \cref{thm:zero-free}, $|r_j| = O(\Delta)$ and $|1/(1-\lambda r_j)^2 | = O_{\delta, \Delta}(1)$ when $\lambda \in (0,\lambda_c(\Delta)-\delta)$. In the case of claw-free graphs, $|1/(1-\lambda r_j)^2 | = O_{\Delta}(1)$ for all $\lambda \ge 0$ by \cref{thm:zero-free}. Combining these estimates completes the proof.
\end{proof}
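The first identity in the displayed computation, $\operatorname{Var}_{\lambda}Y = \lambda^2 (\log Z_G(\lambda))'' + \lambda (\log Z_G(\lambda))'$, can be sanity-checked numerically from the coefficients of the independence polynomial. A small Python sketch (illustrative only; $[1,3,1]$ below are the coefficients of $Z_{P_3}(\lambda) = 1 + 3\lambda + \lambda^2$ for the path on three vertices):

```python
def moments_from_distribution(coeffs, lam):
    """(mean, variance) of |I| under the hard-core model at fugacity lam,
    computed directly from P[Y = k] = i_k * lam^k / Z_G(lam)."""
    w = [c * lam**k for k, c in enumerate(coeffs)]
    Z = sum(w)
    mean = sum(k * x for k, x in enumerate(w)) / Z
    second = sum(k * k * x for k, x in enumerate(w)) / Z
    return mean, second - mean * mean

def moments_from_logZ(coeffs, lam):
    """mean = lam * (log Z)'(lam); var = lam^2 * (log Z)''(lam) + mean."""
    Z = sum(c * lam**k for k, c in enumerate(coeffs))
    Z1 = sum(k * c * lam**(k - 1) for k, c in enumerate(coeffs) if k >= 1)
    Z2 = sum(k * (k - 1) * c * lam**(k - 2) for k, c in enumerate(coeffs) if k >= 2)
    dlogZ = Z1 / Z                      # (log Z)'
    d2logZ = Z2 / Z - dlogZ**2          # (log Z)''
    return lam * dlogZ, lam**2 * d2logZ + lam * dlogZ
```

Indeed, $\lambda (\log Z)' = \mathbb{E}Y$ and $\lambda^2(\log Z)'' = \mathbb{E}Y(Y-1) - (\mathbb{E}Y)^2$, so the two computations agree.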
Next, we control the low Fourier phases of our random variable.
\begin{lemma}\label{lem:low-fourier}
Let $\Delta\ge 3$ and $\delta \in (0,1)$. For any graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$ and for any $\lambda \in (0, \lambda_{c}(\Delta)-\delta)$, we have for all $t \in \mathbb{R}$ that
\[|\mathbb{E}[e^{itX}]-\mathbb{E}[e^{it\mathcal{Z}}]| = O_{\Delta,\delta}\bigg(\frac{|t|(\log n)^{3/2}+\log n}{\sigma}\bigg),\]
where $X = (Y-\mu)/\sigma$.
Moreover, if $G$ is claw-free, then the same conclusion holds for all $\lambda > 0$ with the implicit constant depending only on $\Delta$.
\end{lemma}
\begin{proof}
We prove the statement for general graphs; the proof for claw-free graphs is completely analogous. By \cref{thm:clt-gen} combined with the zero-free region from \cref{thm:zero-free} we have for all $t \in \mathbb{R}$ that
\begin{equation}
\label{eq:clt}
|\mathbb{P}[X\le t]-\mathbb{P}[\mathcal{Z}\le t]| = O_{\Delta,\delta}\bigg(\frac{\log n}{\sigma}\bigg).
\end{equation}
Let $X'$ be $X$ convolved with a centered Gaussian of infinitesimally small variance so that $X'$ has density with respect to the Lebesgue measure on $\mathbb{R}$; it suffices to prove the claim for $X'$ and then pass to the limit. For this, we note that
\begin{align*}
\mathbb{E}[e^{itX'}] &= \int_{-\infty}^{\infty}e^{itz}p_{X'}(z)dz\\
&= \int_{|z|\le \tau}e^{itz}p_{X'}(z)dz \pm \mathbb{P}[|X'|\ge \tau]\\
&= \left[e^{itz}\bigg(\int_{-\tau}^{z}p_{X'}(z')dz'\bigg)\right]\bigg|_{z=-\tau}^{z=\tau} -\int_{-\tau}^{\tau}ite^{itz}\bigg(\int_{-\tau}^{z}p_{X'}(z')dz'\bigg)dz\pm \mathbb{P}[|X'|\ge \tau]\\
&= e^{it\tau}-\int_{-\tau}^{\tau}ite^{itz}\bigg(\int_{-\tau}^{z}p_{X'}(z')dz'\bigg)dz\pm \mathbb{P}[|X'|\ge \tau] - e^{it\tau} \mathbb{P}[|X'|\geq \tau]\\
&= e^{it\tau}-\int_{-\tau}^{\tau}ite^{itz}\mathbb{P}[X'\in[-\tau,z]] dz\pm \mathbb{P}[|X'|\ge \tau] - e^{it\tau}\mathbb{P}[|X'|\geq \tau]\\
&= e^{it\tau}-\int_{-\tau}^{\tau}ite^{itz}\mathbb{P}[X'\in[-\tau,z]] dz + e^{i\theta}\cdot O_{\Delta, \delta}\bigg(\frac{\log n}{\sigma} + e^{-\tau^2/4}\bigg),
\end{align*}
for some $\theta \in [0,2\pi)$, where in the last line, we have used \cref{eq:clt} along with a standard Gaussian tail bound.
Applying the same calculation to $\mathcal{Z}$ instead of $X'$ and taking the difference, we find that
\begin{align*}
|\mathbb{E}[e^{itX'}]-\mathbb{E}[e^{it\mathcal{Z}}]|
&\leq |t|\left|\int_{-\tau}^{\tau}\left|\mathbb{P}[X'\in[-\tau,z]] - \mathbb{P}[\mathcal{Z} \in [-\tau, z]]\right| dz\right| + O_{\Delta, \delta}\bigg(\frac{\log n}{\sigma} + e^{-\tau^2/4}\bigg)\\
&\leq O_{\Delta, \delta}\left(\frac{(|\tau t| + 1)\log{n}}{\sigma} + e^{-\tau^2/4}\right).
\end{align*}
Since $\sigma \leq n$, setting $\tau = \sqrt{8\log{n}}$ gives the desired conclusion.
\end{proof}
Finally, we control the high Fourier phases of our random variable, following a similar strategy to that of Dobrushin and Tirozzi~\cite{dobrushin1977central}. We need the following elementary lemma.
\begin{lemma}\label{lem:well-separated}
Let $G = (V,E)$ be a graph on $n$ vertices with maximum degree at most $\Delta$. Then, there exists a subset $S \subseteq V$ of size $\Omega(n/\Delta^3)$ such that any two vertices in $S$ are at graph distance at least $4$ from each other. Moreover, there is an algorithm to find such a subset $S$ in time $O_{\Delta}(n)$.
\end{lemma}
\begin{proof}
Let $v_1,\dots, v_n$ denote an arbitrary enumeration of the vertices. Initialize $S = \emptyset$. Consider the greedy algorithm which, at each step, adds the first available vertex to the set $S$ and removes all vertices within distance $3$ of this vertex from consideration. The algorithm stops when there are no more available vertices. It runs in time $O_{\Delta}(n)$ and outputs a set $S$ such that any two vertices in $S$ have graph distance at least $4$. Moreover, since at most $O(\Delta^3)$ vertices are removed at each step, it follows that $|S| = \Omega(n/\Delta^3)$.
\end{proof}
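The greedy procedure above translates directly into code. The following Python sketch (purely illustrative, not part of the formal results; graphs are encoded as adjacency lists indexed by $0,\dots,n-1$) performs a depth-$3$ breadth-first search from each selected vertex to remove its radius-$3$ ball, which guarantees that selected vertices are pairwise at distance at least $4$.

```python
from collections import deque

def four_separated_set(adj):
    """Greedy 4-separated set: scan vertices in order; keep a vertex if it
    is still available, then discard the whole radius-3 ball around it."""
    available = [True] * len(adj)
    S = []
    for v in range(len(adj)):
        if not available[v]:
            continue
        S.append(v)
        dist = {v: 0}
        queue = deque([v])
        while queue:                     # BFS to depth 3 from v
            u = queue.popleft()
            if dist[u] == 3:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for u in dist:                   # everything within distance 3 of v
            available[u] = False
    return S
```

On the path $0$--$1$--$\cdots$--$8$, for instance, this selects $\{0,4,8\}$.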
\begin{lemma}\label{lem:high-fourier}
Let $\Delta \geq 3$ and $C\geq 1$. There exists $c_{\ref{lem:high-fourier}}(\Delta, C) > 0$ satisfying the following. For any graph $G = (V,E)$ on $n$ vertices with maximum degree at most $\Delta$ and for any $\lambda \in (0,C]$, let $X = (Y-\mu)/\sigma$. Then, for all $t \in [-\pi \sigma, \pi \sigma]$, we have
\[|\mathbb{E}[e^{-itX}]|\le \exp(-c_{\ref{lem:high-fourier}}(\Delta, C)\lambda n t^2/\sigma^2).\]
\end{lemma}
\begin{proof}
Rewriting the claim, it suffices to prove that for all $t\in \mathbb{R}$, $|t|\leq \pi$,
\[|\mathbb{E}[e^{-itY}]|\le \exp(-c_{\Delta, C} \lambda n t^2).\]
Let $S$ be a $4$-separated set of vertices of $G$ of size $s = \Omega(n/\Delta^3)$ coming from \cref{lem:well-separated}.
Let $T$ be the set of vertices that are at distance at least $2$ from $S$ in $G$ and let $G[T]$ denote the graph on $T$ induced by $G$. Let $\nu$ denote the distribution on $\mathcal{I}(G[T])$ induced by the hard-core model. We sample $I$ by first sampling $J \sim \nu$ and then sampling from the conditional distribution (induced by the hard-core model and $J$) on $\mathcal{I}(G[\{v\}\cup N(v)])$ for each $v \in S$. The key observation is that these conditional distributions are mutually independent. In particular, given $J$, we can write
\[Y \stackrel{d.}{=} |J| + X_1+\cdots+X_s\]
where each $X_j$ is an independent random variable with support in $\{0,\ldots,\Delta+1\}$, a probability mass at $0$ of $\Omega_{\Delta,C}(1)$, and a probability mass of $\Omega_{\Delta,C}(\lambda)$ at $1$. Note that the implicit constant in $\Omega$ does not depend on the specific realisation of $J$.
We claim that for all $|t|\leq \pi$ and all $j \in [s]$, for any realisation of $J$,
\[|\mathbb{E}e^{-itX_j}|\le 1-c\lambda t^2\]
for some absolute $c = c_{\Delta,C} > 0$. Indeed, for any realisation of $J$, letting $X_j'$ denote an independent copy of $X_j$, we have
\begin{align*}
|\mathbb{E}e^{-itX_j}|^2
&= \mathbb{E}e^{it(X_j-X_j')} \\
&= \mathbb{P}[X_j = X_j'] + \sum_{k=1}^{\Delta + 1}(\mathbb{P}[X_j - X_j' = k] + \mathbb{P}[X_j'-X_j = k])\cos(kt)
\\
&\leq \mathbb{P}[X_j = X_j'] + \sum_{k=2}^{\Delta + 1}(\mathbb{P}[X_j - X_j' = k] + \mathbb{P}[X_j'-X_j = k]) + 2\mathbb{P}[X_j - X_j' = 1]\cos(t)\\
&= 1 - 2\mathbb{P}[X_j - X_j' = 1] + 2\mathbb{P}[X_j - X_j' = 1]\cos(t)\\
&= 1 - 2\mathbb{P}[X_j - X_j'=1](1-\cos(t))\\
&\leq 1- \frac{1}{4}\mathbb{P}[X_j - X_j'=1]t^2 \leq 1 - \frac{1}{4}\mathbb{P}[X_j = 1]\mathbb{P}[X_j'=0]t^2 \leq 1 - c_{\Delta, C} \lambda t^2,
\end{align*}
as claimed.
Finally, we have that for any $t \in [-\pi, \pi]$,
\begin{align*}
|\mathbb{E}[e^{-itY}]|
&\leq \max_{J}|\mathbb{E}[e^{-itY} \mid J]|\\
& = \max_{J}\prod_{j=1}^{s}|\mathbb{E}e^{-itX_j}|\\
&\leq (1-c_{\Delta, C}\lambda t^2)^{s/2}\\
&\leq \exp(-c'n\lambda t^2),
\end{align*}
for an appropriate $c'=c_{\Delta,C}' > 0$ and the result follows.
\end{proof}
\subsection{Finishing the proof}\label{sub:finishing}
We now prove \cref{thm:independent-lclt2}.
\begin{proof}[{Proof of \cref{thm:independent-lclt2}}]
We prove the result for general graphs; the proof for claw-free graphs is essentially identical. Applying \cref{lem:fourier-convert} to $X = (Y-\mu)/\sigma \in\alpha+\beta\mathbb{Z}$, where $\alpha = -\mu/\sigma$ and $\beta = 1/\sigma$, and using \cref{lem:var-bound,lem:low-fourier,lem:high-fourier} we see that for $\sigma \geq 2$,
\begin{align*}
\sup_{x\in \mathcal{L}}|\beta\mathcal{N}(x) - \mathbb{P}[X=x]|&\le\frac{1}{\sigma}\int_{-\pi\sigma}^{\pi\sigma}\big|\mathbb{E}[e^{itX}]-\mathbb{E}[e^{it\mathcal{Z}}]\big|dt + e^{-\pi^2\sigma^2/2}\\
&\lesssim_{\Delta,\delta}\frac{1}{\sigma}\int_{-\pi\sigma}^{\pi\sigma}\min\bigg(\frac{|t|(\log n)^{3/2}+\log n}{\sigma},e^{-c_{\ref{lem:high-fourier}}\lambda nt^2/\sigma^2}+e^{-t^2/2}\bigg)dt + e^{-\pi^2\sigma^2/2} \\
&\lesssim_{\Delta,\delta}\frac{1}{\sigma}\int_{-\pi\sigma}^{\pi\sigma}\min\bigg(\frac{|t|(\log n)^{3/2}+\log n}{\sigma},e^{-c_{\Delta, \delta}t^2}+e^{-t^2/2}\bigg)dt + e^{-\pi^2\sigma^2/2}\\
&\lesssim_{\Delta,\delta}\frac{1}{\sigma}\int_{-C_{\Delta,\delta}\sqrt{\log \sigma}}^{C_{\Delta,\delta}\sqrt{\log \sigma}}\frac{|t|(\log n)^{3/2}+\log n}{\sigma}dt + \frac{1}{\sigma^2}\\
&\lesssim_{\Delta,\delta}\frac{(\log n)^{5/2}}{\sigma^2}.
\end{align*}
This gives the first term in the minimum in the statement of \cref{thm:independent-lclt2}. For the second term, we may assume that $1 \leq \sigma < \log{n}$. Let $\lambda' = \lambda/(1+\lambda)$ and observe that the hard-core distribution at fugacity $\lambda$ is identical to the product distribution $\operatorname{Ber}(\lambda')^{\otimes V}$ conditioned on the configuration being an independent set. Here, $\operatorname{Ber}(\lambda')$ is the random variable which is $1$ (or occupied) with probability $\lambda'$ and $0$ (or unoccupied) otherwise. A trivial union bound argument shows that a random sample from $\operatorname{Ber}(\lambda')^{\otimes V}$ is an independent set with probability at least $1 - \lambda'^2 \Delta n = 1 - O_{\Delta}(\lambda^2 \Delta n) = 1- O_{\Delta}(\sigma^4/n)$, where we have used \cref{lem:var-bound}. Therefore, the probability of any configuration under the hard-core model is within a factor of $1 \pm O_{\Delta}(\sigma^4/n)$ of the probability of the same configuration under $\operatorname{Ber}(\lambda')^{\otimes V}$.
Let $Y'$ denote the random variable counting the number of $1$s in a random sample from $\operatorname{Ber}(\lambda')^{\otimes V}$, and let $\mu'$ and $\sigma'$ denote the mean and standard deviation of $Y'$. Then, by the classical de Moivre--Laplace central limit theorem (see \cite[Chap.~VII,~Theorem~(4)]{Pet75} for the quantitative version used here), we get that for any integer $k$,
\[\left|\mathbb{P}[Y'=k] - \frac{1}{\sigma'}\mathcal{N}\left(\frac{k-\mu'}{\sigma'}\right) \right| = O\left(\frac{1}{\sigma'^2}\right).\]
Moreover, from the comparison between the hard-core model and $\operatorname{Ber}(\lambda')^{\otimes V}$ mentioned above, as well as the Chernoff bound for the Binomial distribution, we see that
\begin{align*}
\mathbb{P}[Y = k] &= \mathbb{P}[Y' = k] \pm O_{\Delta}(\sigma^4/n)\\
\mu &= \mu'(1 \pm O_{\Delta}(\sigma^4\log{n}/n))\\
\sigma^2 &= \sigma'^2(1 \pm O_{\Delta}(\sigma^6(\log n)^2/n)).
\end{align*}
Substituting this in the above gives the desired conclusion.
\end{proof}
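Though not needed for any of our results, \cref{thm:independent-lclt2} can be probed numerically on graphs whose independence polynomials are exactly computable. The Python sketch below (illustrative only; the path length $n = 600$, the fugacity $\lambda = 1$, and the $10\%$ tolerance are our own choices) computes $i_k(P_n)$ via the recurrence $Z_{P_n} = Z_{P_{n-1}} + \lambda Z_{P_{n-2}}$ and compares $\mathbb{P}[Y = k]$ at the integer nearest $\mu$ with the Gaussian prediction $\sigma^{-1}\mathcal{N}((k-\mu)/\sigma)$.

```python
import math

def path_indep_coeffs(n):
    """Coefficients i_k(P_n) of the independence polynomial of the path P_n,
    via Z_{P_n}(x) = Z_{P_{n-1}}(x) + x * Z_{P_{n-2}}(x)."""
    prev, cur = [1], [1, 1]              # Z_{P_0} = 1, Z_{P_1} = 1 + x
    for _ in range(2, n + 1):
        shifted = [0] + prev             # multiply Z_{P_{n-2}} by x
        L = max(len(cur), len(shifted))
        nxt = [(cur[i] if i < len(cur) else 0) +
               (shifted[i] if i < len(shifted) else 0) for i in range(L)]
        prev, cur = cur, nxt
    return cur

def lclt_gap(n, lam=1.0):
    """Relative gap between P[Y = round(mu)] and its Gaussian prediction."""
    w = [c * lam**k for k, c in enumerate(path_indep_coeffs(n))]
    Z = sum(w)
    mu = sum(k * x for k, x in enumerate(w)) / Z
    var = sum(k * k * x for k, x in enumerate(w)) / Z - mu * mu
    sigma = math.sqrt(var)
    k = round(mu)
    exact = w[k] / Z
    gaussian = math.exp(-((k - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))
    return abs(exact - gaussian) / exact
```

At the mode, the observed gap is on the order of $1/\sigma^2$, consistent with the error term in \cref{thm:independent-lclt2}.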
\section{Deterministic algorithms}\label{sec:DetAlgorithms}
For our deterministic algorithms, we will need the following preliminary lemma, which allows us to find a `good' fugacity at which to either apply the LCLT directly or algorithmize its proof.
\begin{lemma}
\label{lem:find-fugacity}
Let $\Delta \geq 3$ and $\delta \in (0,1/2)$. There exists a constant $C_{\ref{lem:find-fugacity}} = C_{\ref{lem:find-fugacity}}(\Delta, \delta) \geq 1$ for which the following holds.
For any $\alpha \leq (1-\delta)\alpha_{c}(\Delta)$, there exists a unique $\lambda_* < \lambda_{c}(\Delta)$ so that $\alpha_{K_{\Delta+1}}(\lambda_*) = \alpha$. Further, for any graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$ and for any $1\leq k\leq \alpha n$, there exists an integer $t \in \{0,\dots, \lceil C_{\ref{lem:find-fugacity}} n \lambda_* \rceil\}$ such that
\[|n\alpha_G(t/(C_{\ref{lem:find-fugacity}}n)) - k| \leq 1/2.\]
Moreover, if $G$ is claw-free, then for any $1\leq k \leq (1-\delta)i^*(G)$, there exists an integer $t \in \{0,\dots, \lceil n\cdot 8^{(\Delta+1)/\delta^2}\rceil\}$ such that the same conclusion holds.
In either case, such an integer $t$ can be found deterministically in time $n^{O_{\delta, \Delta}(1)}$.
\end{lemma}
\begin{proof}
For general graphs, the existence of such an integer $t$ and constant $C_{\ref{lem:find-fugacity}}$ follows from \cite[Lemma~5]{DP21} (upon replacing \cite[Lemma~9]{DP21} by the optimal variance bound \cref{lem:var-bound}). For claw-free graphs, the existence of such an integer $t$ follows similarly as a simple consequence of the following observations: (i) $\alpha_G(\lambda)$ is monotonically increasing in $\lambda$; (ii) since $i^*(G) \geq n/(\Delta+1)$, we have that for all $\lambda \geq 8^{(\Delta+1)/\delta^2}$
\begin{align*}
\mathbb{P}_{\lambda}[Y\leq (1-\delta/2)i^*(G)] &= \frac{1}{Z_G(\lambda)}\sum_{k=0}^{(1-\delta/2)i^*(G)}\binom{n}{k}\lambda^{k} \leq \frac{1}{Z_G(\lambda)}2^{n} \lambda^{(1-\delta/2)i^*(G)} \\
&\leq \frac{1}{Z_G(\lambda)}\lambda^{i^*(G)}\cdot 2^{n}\lambda^{-\delta n/(2\Delta + 2)} \leq \delta/4
\end{align*}
from which we see that $n\alpha_G(8^{(\Delta+1)/\delta^2}) \geq (1-\delta)i^*(G)$; and
(iii) for all $\lambda > 0$
\[\frac{d}{d\lambda}\alpha_G(\lambda) = \frac{1}{n\lambda}\operatorname{Var}_{\lambda}(Y) \leq C_{\delta, \Delta},\]
where the inequality follows from \cref{lem:var-bound}.
As for the algorithmic claim, note that for each $t$ in the range, we can deterministically approximate $n\alpha_G(t/(C_{\ref{lem:find-fugacity}}n))$ to within an additive error of $1/4$ in time $n^{O_{\delta, \Delta}(1)}$ by \cref{thm:linear-time-cumulants}, so that we may find a $t$ with the desired property in the stated time.
\end{proof}
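Concretely, the algorithmic claim in the last paragraph amounts to a grid search. A toy Python version (illustrative only: the mean $\mathbb{E}_{\lambda}Y = n\alpha_G(\lambda)$ is computed exactly from the coefficients $i_k(G)$ rather than approximated via \cref{thm:linear-time-cumulants}, and the grid spacing $1/C$ stands in for $1/(C_{\ref{lem:find-fugacity}}n)$):

```python
def mean_size(coeffs, lam):
    """E_lam[Y] = n * alpha_G(lam), from the coefficients i_k(G)."""
    w = [c * lam**k for k, c in enumerate(coeffs)]
    return sum(k * x for k, x in enumerate(w)) / sum(w)

def find_grid_point(coeffs, k, C=1000, t_max=100000):
    """First grid point t with |E_{t/C}[Y] - k| <= 1/2, or None."""
    for t in range(t_max + 1):
        if abs(mean_size(coeffs, t / C) - k) <= 0.5:
            return t
    return None
```

Since the mean is continuous and increasing in $\lambda$ and changes by $O(1/C)$ between consecutive grid points, the scan cannot skip over the target window.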
\subsection{EPTAS from the LCLT}
\label{sub:EPTAS}
Recall that an EPTAS (efficient polynomial-time approximation scheme) is a PTAS (polynomial-time approximation scheme) with a running time of the form $f(\epsilon)n^{O(1)}$ i.e.~the degree of the polynomial in $n$ is independent of the error parameter $\epsilon$. By combining our LCLT with \cref{lem:find-fugacity} and \cref{thm:patel-regts,thm:linear-time-cumulants}, we immediately obtain an EPTAS for $i_k(G)$, for all $k$ bounded away from the relevant barrier.
\begin{theorem}
\label{thm:eptas}
Let $\Delta \geq 3$ and $\delta \in (0,1/2)$. There exists a deterministic algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, an integer $1\leq k \leq (1-\delta)\alpha_c(\Delta)n$, and an error parameter $\epsilon \in (0,1)$, outputs an $\epsilon$-relative approximation to $i_k(G)$ in time $n^{O_{\delta, \Delta}(1)}\exp(\tilde{O}_{\delta, \Delta}(\epsilon^{-1}/\sqrt{n}))$.
Moreover, if $G$ is claw-free, then the same conclusion holds for any $1\leq k\leq (1-\delta)i^*(G)$.
\end{theorem}
\begin{proof}
If $k < e^{-5}n/(\Delta+1)$, then an FPTAS for $i_k(G)$ is already implicit in \cite{davies2020proof}. Therefore, we may restrict our attention to $k \geq e^{-5}n/(\Delta + 1)$. We may also assume that $\epsilon^{-1} \leq c_{\delta, \Delta}\sqrt{n}/(\log{n})^3$ for a sufficiently small constant $c_{\delta, \Delta} > 0$; otherwise $\exp(\tilde{O}_{\delta, \Delta}(\epsilon^{-1}/\sqrt{n}))\geq 4^{n}$ so that exhaustive enumeration runs in the claimed time.
Fix $k \geq e^{-5}n/(\Delta+1)$ which also lies in the specified range, let $t_k$ denote the corresponding value of $t$ given by \cref{lem:find-fugacity}, and let $\lambda_k = t_k/(C_{\ref{lem:find-fugacity}}n)$ denote the corresponding fugacity. The upper bound on $\alpha_G'(\lambda)$ in the proof of \cref{lem:find-fugacity} shows that $\lambda_k = \Omega_{\delta, \Delta}(1)$. In particular, $\mu_{k} := \mu_{\lambda_k}, \sigma_{k} := \sigma_{\lambda_k}$ satisfy $\mu_k, \sigma^2_k = \Theta_{\delta, \Delta}(n)$. Moreover, since $|k - \mu_k| \leq 1/2$, it follows by \cref{thm:independent-lclt2} that
\begin{align*}
\frac{i_k(G) \lambda_k^k}{Z_G(\lambda_k)}
&= \mathbb{P}[Y = k]\\
&= \sigma_k^{-1}\mathcal{N}((k-\mu_k)/\sigma_k) \pm O_{\Delta, \delta}((\log n)^{5/2}/n)\\
&= (1\pm \epsilon/1000)\cdot \sigma_k^{-1}\mathcal{N}((k-\mu_k)/\sigma_k)\\
&= (1\pm \epsilon/500)\cdot (\sqrt{2\pi} \sigma_k)^{-1}.
\end{align*}
Therefore, letting $\widehat{Z}_G(\lambda_k)$ and $\widehat{\sigma}_k$ denote $\epsilon/1000$-relative approximations to $Z_G(\lambda_k)$ and $\sigma_{k}$, both of which can be computed deterministically in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$ by \cref{thm:patel-regts,thm:linear-time-cumulants}, it follows that
\[\widehat{i_k}(G) = \lambda_k^{-k} \cdot \widehat{Z}_G(\lambda_k)\cdot (\sqrt{2\pi}\widehat{\sigma}_k)^{-1} \]
is an $\epsilon$-relative approximation to $i_k(G)$, as desired.
\end{proof}
\subsection{FPTAS from the proof of the LCLT}
In the previous subsection, we saw how our LCLT gives an EPTAS for $i_k(G)$, for all $k$ bounded away from the relevant barrier. Extending this to an FPTAS requires bypassing the error term $\tilde{O}(1/\sigma^2)$ in \cref{thm:independent-lclt2}. For this, instead of approximating $\mathbb{P}[Y=k]$ by $\sigma^{-1}\mathcal{N}((k-\mu)/\sigma)$, we will directly approximate $\mathbb{P}[Y=k]$ to the desired accuracy. The key ingredient required for this is the following.
\begin{lemma}
\label{lem:approximate-characteristic-function}
Fix $\Delta \geq 3$, $\delta \in (0,1/2)$, and a parameter $C \geq 1$. There exists a deterministic algorithm which, on input a graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, $n^{-2} \leq \lambda \leq (1-\delta)\lambda_c(\Delta)$, an error parameter $\epsilon \in (0,1/\sqrt{n})$, and $t \in [-C\sqrt{\log{1/\epsilon}}, C\sqrt{\log{1/\epsilon}}]$, outputs an $\epsilon^{50}$-relative, $\epsilon^{50}$-additive approximation to $\mathbb{E}_{\lambda}[e^{itX}]$ in time $(n/\epsilon)^{O_{\delta,\Delta, C}(1)}$, where $X = (Y-\mu_{\lambda})/\sigma_{\lambda}$.
Moreover, if $G$ is claw-free, the same conclusion holds for all $\lambda \geq n^{-2}$ with running time $(n/\epsilon)^{O_{\Delta, C, |\lambda|}(1)}$.
\end{lemma}
\begin{proof}
We provide the proof for general graphs; the proof for claw-free graphs follows by straightforward modifications. For convenience of notation, we denote $\mu_{\lambda}$ and $\sigma_{\lambda}$ by $\mu$ and $\sigma$. By \cref{thm:patel-regts,thm:linear-time-cumulants}, we can compute $\epsilon^{100}$-relative approximations $\widehat{Z}$, $\widehat{\mu}$, and $\widehat{\sigma}$ to $Z_G(\lambda)$, $\mu$, and $\sigma$ in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$.
Note that
\begin{align*}
\mathbb{E}_{\lambda}[e^{itX}]
&= e^{-it\mu/\sigma}\cdot \mathbb{E}_{\lambda}[e^{itY/\sigma}]\\
&= e^{-it\mu/\sigma}\cdot \frac{Z_G(\lambda e^{it/\sigma})}{Z_G(\lambda)}.
\end{align*}
We have the following two cases.
\textbf{Case I: }$\lambda \leq \frac{1}{10\Delta}$. In this case, for any $t \in \mathbb{R}$, $\lambda e^{it/\widehat{\sigma}} \in \mathcal{R}_{\delta, \Delta}$, where $\mathcal{R}_{\delta, \Delta}$ is the zero-free region in \cref{thm:zero-free}.
\textbf{Case II: }$\lambda > \frac{1}{10\Delta}$. In this case, we may assume that $\lambda e^{it/\widehat{\sigma}} \in \mathcal{R}_{\delta, \Delta}$ for all $t \in \mathbb{R}$, $|t|\leq C\sqrt{\log{1/\epsilon}}$. Otherwise, since $\widehat{\sigma}^2 = \Omega_{\delta, \Delta}(n)$ by \cref{lem:var-bound}, it follows that $\epsilon^{-1} = \exp(\Omega_{\delta, \Delta, C}(n))$, so that exhaustive enumeration runs in the claimed time.
Hence, in either case, it follows from \cref{thm:patel-regts} that an $\epsilon^{100}$-relative approximation to $Z_G(\lambda e^{it/\widehat{\sigma}})$, which we denote by $\widehat{Z}_{t}$, can be computed in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$.
We claim that the output
\[e^{-it\widehat{\mu}/\widehat{\sigma}}\cdot \frac{\widehat{Z}_t}{\widehat{Z}}\]
is an approximation to $\mathbb{E}_{\lambda}[e^{itX}]$ of the desired quality. Indeed, by the assumed upper bounds on $|t|$ and $\epsilon$ and the assumed lower bound on $\lambda$, we have for all $n$ sufficiently large (depending on $\delta, \Delta, C$) that
\begin{align*}
\frac{e^{it/\sigma}}{e^{it/\widehat{\sigma}}} = e^{it(\widehat{\sigma} - \sigma)/(\sigma \widehat{\sigma})} = e^{i\theta}, \quad \quad \frac{e^{-it\mu}}{e^{-it\widehat{\mu}}} = e^{i\theta'}
\end{align*}
for some $\theta, \theta' \in \mathbb{R}$ satisfying $|\theta|, |\theta'| \leq \epsilon^{97}$.
Therefore, $\frac{e^{it\widehat{\mu}/\widehat{\sigma}}}{e^{it\mu/\sigma}} = e^{i\theta''}$ for some $\theta''\in \mathbb{R}$, $|\theta''| \leq \epsilon^{90}$ and
\begin{align*}
|Z_G(\lambda e^{it/\widehat{\sigma}}) - Z_G(\lambda e^{it/\sigma})| &= \left|\sum_{k=0}^{n}i_k(G)\lambda^{k}(e^{itk/\widehat{\sigma}} - e^{itk/\sigma})\right|
\leq \sum_{k=0}^{n}i_k(G)\lambda^{k}|e^{ik\theta} - 1|\\
&\leq \epsilon^{94}\sum_{k=0}^{n}i_k(G)\lambda^{k} = \epsilon^{94}Z_G(\lambda).
\end{align*}
Hence,
\begin{align*}
\mathbb{E}_{\lambda}[e^{itX}]
&=
\left(e^{i\theta''}e^{-it\widehat{\mu}/\widehat{\sigma}}\right)\cdot \frac{Z_G(\lambda e^{it/\widehat{\sigma}})}{Z_G(\lambda)} + e^{-it\mu/\sigma}\cdot \frac{Z_G(\lambda e^{it/\sigma}) - Z_G(\lambda e^{it/\widehat{\sigma}})}{Z_G(\lambda)}\\
&= \left(e^{i\theta''}e^{-it\widehat{\mu}/\widehat{\sigma}}\right)\cdot e^{\pm 2\epsilon^{100}}e^{\pm i \epsilon^{100}}\frac{\widehat{Z}_t}{\widehat{Z}} + w_{t}
\end{align*}
where $w_t \in \mathbb{C}$ with $|w_t| \leq \epsilon^{94}$, as desired.
\end{proof}
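The identity $\mathbb{E}_{\lambda}[e^{itX}] = e^{-it\mu/\sigma}\, Z_G(\lambda e^{it/\sigma})/Z_G(\lambda)$ underlying the proof can be verified directly on small inputs. A Python sketch (illustrative only; again $P_3$, with independence polynomial $1 + 3\lambda + \lambda^2$):

```python
import cmath
import math

def char_fn_direct(coeffs, lam, t):
    """E[e^{itX}], X = (Y - mu)/sigma, summed over the distribution."""
    w = [c * lam**k for k, c in enumerate(coeffs)]
    Z = sum(w)
    mu = sum(k * x for k, x in enumerate(w)) / Z
    var = sum(k * k * x for k, x in enumerate(w)) / Z - mu * mu
    sigma = math.sqrt(var)
    val = sum(x * cmath.exp(1j * t * (k - mu) / sigma)
              for k, x in enumerate(w)) / Z
    return val, mu, sigma

def char_fn_via_Z(coeffs, lam, t, mu, sigma):
    """e^{-it*mu/sigma} * Z_G(lam * e^{it/sigma}) / Z_G(lam)."""
    zrot = lam * cmath.exp(1j * t / sigma)
    Z_rot = sum(c * zrot**k for k, c in enumerate(coeffs))
    Z = sum(c * lam**k for k, c in enumerate(coeffs))
    return cmath.exp(-1j * t * mu / sigma) * Z_rot / Z
```

The two computations agree exactly; the algorithm above replaces the exact quantities $Z_G$, $\mu$, $\sigma$ with their efficiently computable approximations.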
Given the preceding lemma, \cref{thm:independent-fptas,thm:matching-fptas} follow from the proof of \cref{thm:independent-lclt2}.
\begin{proof}[Proof of \cref{thm:independent-fptas,thm:matching-fptas}]
Again, we provide the details only for general graphs, i.e.~\cref{thm:independent-fptas}; the argument for claw-free graphs is essentially the same. Let $k \geq 1$ be as in the statement of the theorem. Moreover, we may assume that $\epsilon \in (0,1/n)$, since the statement for larger values of $\epsilon$ follows from the statement for $\epsilon = 1/n$. Let $t_k$ denote the integer returned by \cref{lem:find-fugacity} and let $\lambda:=\lambda_k = t_k/(C_{\ref{lem:find-fugacity}}n)$ denote the corresponding fugacity. Since $\mu := \mu_{\lambda_k} \geq 1/2$, it follows as in the proof of \cref{lem:find-fugacity} that $\lambda = \Omega_{\delta, \Delta}(1/n)$.
Let $\sigma := \sigma_{\lambda_k}$, $X = (Y-\mu)/\sigma$, and $\gamma = \min(\pi \sigma, C\sqrt{\log{1/\epsilon}})$, where $C$ is a sufficiently large constant depending on $\Delta, \delta$. Let $\widehat{Z}, \widehat{\mu}, \widehat{\sigma}$ be $\epsilon^{100}$-relative approximations to $Z_G(\lambda), \mu, \sigma$, which can be found deterministically in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$ by \cref{thm:patel-regts,thm:linear-time-cumulants}.
Let $x = (k-\mu)/\sigma$ and $\widehat{x} = (k-\widehat{\mu})/\widehat{\sigma}$.
Then,
as in the proof of \cref{thm:independent-lclt2}, we have
\begin{align*}
\mathbb{P}_{\lambda}[X=x]
&= \frac{1}{2\pi \sigma}\int_{-\pi \sigma}^{\pi \sigma}\mathbb{E}_{\lambda}[e^{itX}]e^{-itx}dt \\
&= \pm \epsilon^{100} + \Re\left(\frac{1}{2\pi \sigma}\int_{-\gamma}^{\gamma}\mathbb{E}_{\lambda}[e^{itX}]e^{-itx}dt\right) \\
&= \pm 2\epsilon^{75} + \Re\left(\frac{\epsilon^{100}}{2\pi \sigma} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\mathbb{E}_{\lambda}[e^{i \epsilon^{100}\ell X}]e^{-i\epsilon^{100}\ell x}\right)\\
&= \pm 3\epsilon^{75} + \Re\left(\frac{\epsilon^{100}}{2\pi \sigma} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\mathbb{E}_{\lambda}[e^{i \epsilon^{100}\ell X}]e^{-i\epsilon^{100}\ell \widehat{x}}\right),
\end{align*}
where the first line uses the Fourier inversion formula for lattices, the second line uses the definition of $\gamma$, \cref{lem:high-fourier}, and \cref{lem:var-bound}, the third line uses the upper bound on $\epsilon$ and the lower bound on $\lambda$, and the last line uses the lower bound on $\lambda$.
Next, for each integer $\ell \in [-\gamma \epsilon^{-100}, \gamma \epsilon^{-100}]$, let $\widehat{Z}_{\ell}$ denote an $\epsilon^{100}$-additive, $\epsilon^{100}$-relative approximation to $\mathbb{E}_{\lambda}[e^{i\epsilon^{100}\ell X}]$. By \cref{lem:approximate-characteristic-function}, these approximations may be found deterministically in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$. Then,
\begin{align*}
(1\pm \epsilon^{75})\frac{i_k(G)\lambda^k}{\widehat{Z}} &= \frac{i_k(G)\lambda^k}{Z_G(\lambda)} = \mathbb{P}_{\lambda}[X=x]\\
&= \pm 3\epsilon^{75} + \Re\left(\frac{\epsilon^{100}}{2\pi \sigma} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\mathbb{E}_{\lambda}[e^{i \epsilon^{100}\ell X}]e^{-i\epsilon^{100}\ell \widehat{x}}\right)\\
&= \pm 4\epsilon^{75} + (1\pm \epsilon^{50})\Re\left(\frac{\epsilon^{100}}{2\pi \widehat{\sigma}} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\widehat{Z}_{\ell}e^{-i\epsilon^{100}\ell \widehat{x}}\right).
\end{align*}
Finally, since $\mathbb{P}_{\lambda}[X=x] = \mathbb{P}_{\lambda}[Y = k] = \Omega_{\delta, \Delta}(1/\sigma)$, we have that
\begin{align*}
(1\pm \epsilon^{50})\frac{i_k(G)\lambda^k}{\widehat{Z}} = (1\pm \epsilon^{50})\Re\left(\frac{\epsilon^{100}}{2\pi \widehat{\sigma}} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\widehat{Z}_{\ell}e^{-i\epsilon^{100}\ell \widehat{x}}\right),
\end{align*}
so that the quantity
\[\lambda^{-k}\cdot \widehat{Z}\cdot\Re\left(\frac{\epsilon^{100}}{2\pi \widehat{\sigma}} \sum_{\ell = -\gamma\epsilon^{-100}}^{\gamma \epsilon^{-100}}\widehat{Z}_{\ell}e^{-i\epsilon^{100}\ell \widehat{x}}\right),\]
which can be computed deterministically in time $(n/\epsilon)^{O_{\delta, \Delta}(1)}$, is an $\epsilon$-relative approximation to $i_k(G)$. \qedhere
\end{proof}
\section{Randomized algorithms}
\label{sec:RandAlgorithms}
\subsection{A quasi-linear time sampling algorithm}
We proceed to the proof of \cref{thm:faster-sampling}. We will prove the result for independent sets, noting that the proof for matchings follows identically.
We will need the following preliminary lemma, which is an efficient randomized version of \cref{lem:find-fugacity}.
\begin{lemma}\label{lem:lambda-determine}
Let $\Delta \geq 3$ and $\delta \in (0,1/2)$. There exists a constant $c_{\ref{lem:lambda-determine}}(\delta, \Delta) > 0$ and a randomized algorithm with the following property: for any graph $G = (V,E)$ on $n$ vertices of maximum degree at most $\Delta$, for any $1\leq k \leq (1-\delta)n\alpha_c(\Delta)$, and for any $\epsilon \in (0,1)$, the algorithm outputs $\lambda \in (0, (1-c_{\ref{lem:lambda-determine}})\lambda_c(\Delta)]$ satisfying
\[|\mathbb{E}_{\lambda}{Y}-k|\le \sqrt{\operatorname{Var}_{\lambda}{Y}}\]
with probability $1-\epsilon$. The running time of the algorithm is $O_{\Delta,\delta}(n\log(n/\epsilon)(\log n)^3)$.
\end{lemma}
\begin{remark}
If $k \geq n/(3\Delta)$, then it is easily seen that $\lambda \geq 1/(100\Delta)$.
\end{remark}
\begin{proof}
If $1\leq k \leq \sqrt{n}$, then it follows from \cref{corEVbounds} that the deterministic choice $\lambda = k/n$ satisfies the desired conclusion. Therefore, it suffices to consider the case $\sqrt{n} \leq k \leq (1-\delta)n\alpha_c(\Delta)$.
Let $\Lambda = [n^{-1/3}, (1-c_{\ref{lem:lambda-determine}})\lambda_c(\Delta)] \cap (\mathbb{Z}/n^{2})$. By \cref{lem:find-fugacity}, there exists $\lambda \in \Lambda$ satisfying
\[|\mathbb{E}_{\lambda}Y - k| \leq 1/2. \]
Further, for any $\lambda \in \Lambda$, it follows from \cref{thm:clt-gen} and \cref{lem:var-bound} that
\[|\mathbb{E}_{\lambda}Y - \operatorname{med}_{\lambda}Y| = \tilde{O}_{\delta, \Delta}(1),\]
where $\operatorname{med}$ denotes the median. Since $\operatorname{Var}_{\lambda}Y = \Theta_{\delta, \Delta}(n\lambda)$ by \cref{lem:var-bound}, it follows that there exists $\lambda \in \Lambda$ satisfying
\[|\operatorname{med}_{\lambda}Y - k| \leq \frac{1}{2}\sqrt{\operatorname{Var}_{\lambda}Y}\]
and it suffices to output such a $\lambda \in \Lambda$.
For any $\lambda \in \Lambda$, there is a randomized algorithm to estimate $\operatorname{med}_{\lambda}Y$ to within $\sqrt{\operatorname{Var}_{\lambda}Y}/2$ additive error which succeeds with probability $1- (\epsilon/n^3)$ and runs in time $O(n\log(n/\epsilon)(\log n)^2)$. Indeed, by \cref{thm:glauber,thm:clt-gen}, \cref{lem:var-bound} and the Chernoff bound, this may be accomplished by taking the median of $O_{\delta, \Delta}(\log(n/\epsilon))$ independent runs of the Glauber dynamics, each for $\Theta_{\Delta, \delta}(n\log n)$ steps. Therefore, running binary search with the above primitive takes time $O(n\log(n/\epsilon)(\log n)^3)$ and, with probability at least $1-\epsilon$, outputs $\lambda \in \Lambda$ satisfying the desired conclusion.
\end{proof}
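As an illustration, the binary-search structure of the proof can be sketched as follows; the Glauber-dynamics median estimator is abstracted as a hypothetical monotone oracle `estimate_median`, so this is a sketch of the control flow only, not of the paper's actual primitive.

```python
def find_fugacity_grid(grid, estimate_median, k):
    """Binary search over a sorted grid of candidate fugacities.

    estimate_median(lam) stands in for the Glauber-dynamics primitive in
    the proof: it returns an estimate of med_lam(Y), assumed monotone
    non-decreasing in lam.  Returns the smallest grid point whose
    estimated median is at least k.
    """
    lo, hi = 0, len(grid) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if estimate_median(grid[mid]) < k:
            lo = mid + 1
        else:
            hi = mid
    return grid[lo]
```

Each oracle call costs $O(n\log(n/\epsilon)(\log n)^2)$ and the search makes $O(\log n)$ calls over the grid $\Lambda$, which matches the stated $O(n\log(n/\epsilon)(\log n)^3)$ running time.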
We now present our sampling algorithm. For simplicity of notation, we will restrict attention to the case $k > n/(3\Delta)$; the case $k \leq n/(3\Delta + 1)$ is already handled by the down-up walk in \cite{bubley1997path} with asymptotically optimal running time. Below, $c_{\Delta, \delta}, C_{\Delta, \delta}, c'_{\Delta, \delta}, C'_{\Delta, \delta}$ are constants depending on $\Delta, \delta$ whose values can be determined using \emph{a priori} analysis. We will assume that $\epsilon \geq \exp(-n/C'_{\Delta, \delta})$; for smaller $\epsilon$, it follows from \cref{thm:glauber,thm:independent-lclt2} that rejection sampling for the hard-core model at the fugacity $\lambda$ given by \cref{lem:lambda-determine} outputs a distribution within $\epsilon$-TV distance of the uniform distribution on $\mathcal{I}_k[G]$ in time
\begin{align*}
O_{\delta, \Delta}(n\log(n/\epsilon)\cdot \log(n)\cdot \sqrt{n}\log(1/\epsilon)) = O_{\delta, \Delta}(n\log(1/\epsilon)^{5/2}\log n).
\end{align*}
For $1/100 \geq \epsilon \geq \exp(-n/C'_{\Delta, \delta})$, we use the following algorithm.
\begin{itemize}
\item (Preprocessing Step 1) Using \cref{lem:lambda-determine}, find $\lambda \in \Lambda$ such that \[|\mathbb{E}_{\lambda}{Y}-k|\le \sqrt{\operatorname{Var}_{\lambda}{Y}}.\]
\item (Preprocessing Step 2) Using \cref{lem:well-separated}, find $S\subseteq V$ such that $|S| = \Omega(n/\Delta^3)$ and such that the vertices in $S$ are pairwise at distance at least $4$. Let $T = \{v \in V : \operatorname{dist}(v,S)\ge 2\}$.
\item (Preprocessing Step 3) Find an independent set $I_0$ of size $k$ using a similar greedy algorithm as in \cref{lem:well-separated}.
\item (Initialize Core) Run Glauber dynamics for the hard-core model on $G$ at fugacity $\lambda$ for $\Theta_{\Delta, \delta}(n\log(n/\epsilon))$ steps to obtain an independent set $I'$. Let $J = I'\cap T$.
\item (Set parameters) Fix $p = c_{\Delta,\delta} \sqrt{n/\log(1/\epsilon)}$, where $c_{\Delta, \delta} > 0$ is sufficiently small. For each $v \in S$ and for each $K \in \mathcal{I}(G[v\cup N(v)])$, use exhaustive enumeration to compute
\[p_{v,K} = \mathbb{P}_{\lambda}[I \cap {N(v)} = K \mid I \cap T = J].\]
Let
\[q_{v} = \min(p_{v,\emptyset}, p_{v, K_v}),\]
where $K_{v} \in \mathcal{I}[N(v)]$ denotes the independent set with $v$ occupied and all other vertices unoccupied.
Let
\[q = \min_{v \in S}q_{v}.\]
For each $v \in S$, let $W_{v}$ be a random variable taking values in $\mathcal{I}[N(v)] \cup \{?\}$ which takes on the value $?$ with probability $2q$, $\emptyset$ with probability $p_{v,\emptyset} - q$, $K_{v}$ with probability $p_{v,K_{v}} - q$, and $K\notin\{\emptyset, K_{v}\}$ with probability $p_{v, K}$.
\item (Resample Neighborhoods Step 1) For each $v \in S$, independently, sample $W_{v}$. If $W_{v} \neq ?$, then set $I \cap N(v) = W_{v}$. Let $S^{\ast} = \{v : W_{v} = ?\}$. If $|S^{\ast}| \leq c'_{\Delta, \delta}n$, where $c'_{\Delta, \delta} > 0$ is sufficiently small, then proceed to the final step. Else, proceed to the next step.
\item (Resample Neighborhoods Step 2) Let $\ell$ be the current number of vertices chosen and let $k - \ell = k'$. With probability $p\binom{|S^{\ast}|}{k'}2^{-|S^{\ast}|}$ (here, we use the convention that the binomial coefficient is $0$ if $k'\notin [0, |S^{\ast}|]$) sample a random subset of $S^{\ast}$ of size $k'$ and set $I \cap N(v) = K_{v}$ for the vertices in the subset and $I \cap N(v) = \emptyset$ for the vertices not in the subset; with the remaining probability, proceed to the next step.
\item Repeat all steps after preprocessing at most $C_{\Delta,\delta}\log(1/\epsilon)^{3/2}$ times for sufficiently large $C_{\Delta, \delta} > 0$; if no valid sample has been produced, output $I_0$.
\end{itemize}
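The mixture constructed in the Set parameters step can be checked concretely: splitting off mass $q$ from each of $\emptyset$ and $K_v$ into a uniform `?` symbol leaves the original conditional distribution recoverable. The helper below is an illustrative sketch; the dictionary-based interface is ours, not the paper's.

```python
def W_distribution(p, K_v):
    """Distribution of W_v per the 'Set parameters' step.

    p maps each independent set K of N(v) (a frozenset) to p_{v,K}; K_v is
    the configuration with only v occupied.  '?' gets mass 2q with
    q = min(p[empty], p[K_v]); the empty set and K_v each lose mass q;
    every other configuration keeps its mass.  Resolving '?' uniformly
    over {empty, K_v} recovers p exactly, which is what makes the later
    resampling steps valid.
    """
    q = min(p[frozenset()], p[K_v])
    dist = {'?': 2 * q}
    for K, pK in p.items():
        dist[K] = pK - q if K in (frozenset(), K_v) else pK
    return dist
```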
We now show that the above algorithm satisfies the assertion of \cref{thm:faster-sampling} for $k$ and $\epsilon$ in the specified range.
\begin{proof}[Proof of \cref{thm:faster-sampling}]
Before analyzing the correctness of the algorithm, let us quickly bound its running time. By \cref{lem:lambda-determine,lem:well-separated}, the preprocessing steps take time $O_{\Delta, \delta}(n\log(n/\epsilon)(\log n)^3)$. Note that Preprocessing Step 3 can indeed be accomplished using the greedy algorithm in \cref{lem:well-separated} since $\alpha_c(\Delta) \leq 1/(\Delta+1)$. Each run of Initialize Core takes time $O_{\Delta, \delta}(n\log(n/\epsilon)\log n)$ since each step of Glauber dynamics takes time $O_{\Delta, \delta}(\log n)$ to implement. The step Set parameters takes time $O_{\Delta, \delta}(n)$. Resample Neighborhoods Step 1 takes time $O_{\Delta, \delta}(n)$. In Resample Neighborhoods Step 2, we can compute the probability in time $O(n(\log n)^2)$ by \cite{FT15} and then sample from the hypergeometric distribution using sampling without replacement in time $O(n\log{n})$. Thus, we see that the running time of the algorithm is
\[O_{\delta, \Delta}(n\log(n/\epsilon)(\log n)^3 + n\log(n/\epsilon)\log n\log(1/\epsilon)^{3/2}).\]
We now proceed to the proof of correctness. The idea is that the algorithm may be viewed as implementing rejection sampling where the base sampler outputs $I$ according to the distribution $\mu_{G,\lambda}(\cdot \mid |I| \equiv k \bmod p)$ in time $\tilde{O}_{\delta, \Delta}(n)$. This leads to a $\tilde{O}_{\delta, \Delta}(n)$ time algorithm for approximately sampling from the uniform distribution on $\mathcal{I}_k[G]$ since by \cref{lem:lambda-determine} and \cref{thm:independent-lclt2}, $\mathbb{P}_{\lambda}[|I| = k \mid |I| \equiv k \bmod p] = \tilde{\Omega}_{\delta, \Delta}(1)$.
To formalize this, we begin by noting that for all $t \in \{0,\dots, p-1\}$ and for all $J \in \mathcal{I}[G[T]]$, it follows from the calculations and notation in the proof of \cref{lem:high-fourier} that
\begin{align*}
\left|\mathbb{P}_{\lambda}[|I| \equiv t \bmod p \mid I \cap T = J] - \frac{1}{p}\right|
&= \left|\mathbb{P}_{\lambda}[X_{1} + \dots + X_{s} \equiv t - |J| \bmod p] - \frac{1}{p}\right|\\
&\leq \frac{1}{p}\sum_{\ell = 1}^{p-1}\exp(-\Omega_{\delta, \Delta}(n\ell^2/p^2))\\
& \leq \frac{\epsilon}{100p},
\end{align*}
provided that $c_{\Delta, \delta} > 0$ is chosen to be sufficiently small. Hence, for any $J$,
\begin{align*}
\mathbb{P}_{\lambda}[I \cap T = J \mid |I| \equiv k \bmod p] = \mathbb{P}_{\lambda}[I \cap T = J](1\pm\epsilon/50),
\end{align*}
so that up to an $\epsilon/49$-TV distance, the distribution of the set $J$ in Initialize Core is $\mu_{G,\lambda}[I \cap T = \cdot \mid |I| \equiv k \bmod p]$.
Next, for any realisation $\vec{w} = (w_v)_{v\in S}$ of $\vec{W} = (W_v)_{v\in S}$, let $S^*(\vec{w})$ denote the corresponding subset and let $\mu_{\vec{w}}$ denote the uniform distribution on $\{0,1\}^{S^*(\vec{w})}$ with $0$ denoting unoccupied and $1$ denoting occupied. Then, sampling from the conditional distribution $\mu_{G,\lambda}(\cdot \mid I \cap T = J)$ is equivalent to first sampling $\vec{W} = \vec{w}$ according to the distribution specified in Set Parameters and then resampling the $?$ according to $\mu_{\vec{w}}$. For any realisation $\vec{w}$ of $\vec{W}$ with $|S^*(\vec{w})| \geq c'_{\Delta, \delta}n$, it follows as above that for any $t\in \{0,\dots,p-1\}$,
\begin{align*}
\left|\mathbb{P}_{\lambda}[|I|\equiv t \bmod p \mid I \cap T = J, \vec{W} = \vec{w}] - \frac{1}{p}\right|
&\leq \max_{t'} \left|\mathbb{P}[\operatorname{Binomial}(|S^*(\vec{w})|, 1/2) =t' \bmod p]-\frac{1}{p}\right|\\
&\leq \frac{\epsilon}{100p},
\end{align*}
provided that $c_{\Delta, \delta} > 0$ is chosen to be sufficiently small compared to $c'_{\Delta, \delta}$. Since $q = \Omega_{\delta, \Delta}(1)$, it follows by the Chernoff bound that
\[\mathbb{P}_{\lambda}[|S^*(\vec{w})| \leq c'_{\Delta, \delta}n \mid I \cap T = J] \leq \exp(-\Omega_{\Delta, \delta}(n)) \leq \epsilon/n^{3}\]
for all sufficiently large $n$.
This shows that
\begin{enumerate}[(P1)]
\item The probability of moving directly to the final step from Resample Neighborhoods Step 1 is at most $\epsilon/n^3$, and
\item Up to an $\epsilon/49$-TV distance, the $(W_v)_{v\in S}$ in Resample Neighborhoods Step 1 follow the distribution $(W_v)_{v\in S} \mid \{|I|\equiv k \bmod p, I \cap T = J\}$.
\end{enumerate}
Now, we analyze Resample Neighborhoods Step 2.
Observe that for any realisation $\vec{W} = \vec{w}$ with $|S^{\ast}(\vec{w})| \geq c'_{\Delta, \delta}n$,
\begin{align*}
\mathbb{P}_{\lambda}[|I| = k \mid I \cap T = J, \vec{W} = \vec{w}, |I| \equiv k \bmod p]
&= \frac{\mathbb{P}_{\lambda}[|I| = k \mid I \cap T = J, \vec{W} = \vec{w}]}{\mathbb{P}_{\lambda}[|I| \equiv k \bmod p \mid I \cap T = J, \vec{W} = \vec{w}]}\\
&= \frac{\mathbb{P}[\operatorname{Binomial}(|S^{\ast}(\vec{w})|,1/2) = k']}{(1\pm \epsilon/100)/p}\\
&= (1\pm \epsilon/50)p \binom{|S^*(\vec{w})|}{k'}2^{-|S^*(\vec{w})|}.
\end{align*}
Therefore, Resample Neighborhoods Step 2 samples from a distribution within $\epsilon/49$-TV distance of the conditional distribution $\mu_{\lambda, G}[\cdot \mid I \cap T = J, \vec{W} = \vec{w}, |I|\equiv k \bmod p]$ and rejects the sample if $|I| \neq k$.
So far, we have not used the property of $\lambda$ guaranteed by \cref{lem:lambda-determine}. This will now be used to show that the probability of the steps Initialize Core through Resample Neighborhoods Step 2 outputting an independent set of size $k$ is $\Omega_{\Delta, \delta}(1/\sqrt{\log(1/\epsilon)})$. To see this, note that by taking the expectation over $J$ and $\vec{W}$ on both sides in the display equation above and using (P1) and \cref{thm:independent-lclt2}, we have that
\begin{align*}
\mathbb{E}_{J, \vec{W}}\left[\binom{|S^*(\vec{W})|}{k'}p2^{-|S^*(\vec{W})|} \mid |S^*(\vec{W})| \geq c'_{\Delta, \delta}n\right]
&= (1\pm \epsilon/25)\mathbb{P}_{\lambda}[|I| = k \mid |I| \equiv k \bmod p] \\
&= \Omega_{\delta, \Delta}(p/\sqrt{n})\\
&= \Omega_{\delta, \Delta}(1/\sqrt{\log(1/\epsilon)}).
\end{align*}
If $|S^*(\vec{W})| \geq c'_{\Delta, \delta}n$, then the quantity inside the expectation is bounded by $O_{\delta, \Delta}(1/\sqrt{\log(1/\epsilon)})$. Hence, by the reverse Markov inequality, with probability $\Omega_{\delta, \Delta}(1)$ (over the choice of $J, \vec{W}$), the quantity inside the expectation is $\Omega_{\delta, \Delta}(1/\sqrt{\log(1/\epsilon)})$.
To summarize, we have shown the following: a single run of Initialize Core through Resample Neighborhoods Step 2 produces an output with probability $\Omega_{\delta, \Delta}(1/\sqrt{\log(1/\epsilon)})$ and the distribution of this output is within $\epsilon/5$ in TV-distance from the uniform distribution on $\mathcal{I}_k(G)$. Therefore, by the Chernoff bound, repeating this procedure independently $C_{\Delta, \delta}\log(1/\epsilon)^{3/2}$ times, for $C_{\Delta, \delta}$ sufficiently large, produces an output from a distribution on $\mathcal{I}_k(G)$ which is within $\epsilon$ in TV-distance of the uniform distribution on $\mathcal{I}_k(G)$. \qedhere
\end{proof}
\subsection{A faster FPRAS} The FPRAS for $i_k(G)$ and $m_k(G)$ is substantially simpler than the sampling algorithm above. As before, we will present the proof only for $i_k(G)$ with the proof for $m_k(G)$ being similar.
\begin{proof}[Proof of \cref{thm:faster-fpras}]
We have the following two cases:
\textbf{Case I: $1 \leq k \leq c_{\Delta}\sqrt{n}$}, where $c_{\Delta} > 0$ is a sufficiently small constant which can be determined \emph{a priori}. Let
\[p_k := \mathbb{P}[J \in \mathcal{I}_k],\]
where $J$ is a uniformly random subset of $V$ of size exactly $k$. By the union bound, it follows that
\begin{align*}
p_k &= 1 - O\left(n\Delta\cdot \frac{k^2}{n^2}\right) \geq \frac{1}{2},
\end{align*}
provided that $c_{\Delta}$ is sufficiently small.
Since
\[i_k(G) = \binom{n}{k}p_k,\]
it suffices to obtain an $\epsilon$-relative approximation of $p_k$. Let $S_{1},\dots, S_{\ell}$ denote independent samples from the uniform distribution on size $k$ subsets of $V$. Then, by the Chernoff bound,
\[\frac{\mathbbm{1}[S_1 \in \mathcal{I}_k(G)] + \dots + \mathbbm{1}[S_\ell \in \mathcal{I}_k(G)]}{\ell} = (1\pm \epsilon)p_k\]
with probability at least $3/4$, provided that $\ell > C/\epsilon^2$ for a sufficiently large constant $C$.
For the running time, note that sampling a uniformly random subset of size $k$ takes time $O(k\log{n})$, checking whether it is an independent set takes time $O_{\Delta}(k)$, and computing the binomial coefficient $\binom{n}{k}$ takes time $O(k\log{n}\log\log{n})$, so that the total running time is
\[O(k\log{n}\log\log{n}) + O_{\Delta}(k\epsilon^{-2}\log{n}).\]
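Case I is simple enough to state in full as code. The sketch below is our illustration (with a dict-of-neighbour-sets graph representation): it draws uniform $k$-subsets and rescales the empirical hit rate by $\binom{n}{k}$.

```python
import math
import random

def estimate_ik_small_k(adj, k, samples, rng=random):
    """Monte Carlo estimate of i_k(G) for small k (Case I).

    adj: dict mapping each vertex to the set of its neighbours
    (a vertex is assumed not to be its own neighbour).
    Samples uniform k-subsets, counts those that are independent, and
    multiplies the empirical probability p_k by C(n, k).
    """
    V = list(adj)
    hits = 0
    for _ in range(samples):
        S = set(rng.sample(V, k))
        if all(adj[v].isdisjoint(S) for v in S):
            hits += 1
    return math.comb(len(V), k) * hits / samples
```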
\textbf{Case II: $c_{\Delta}\sqrt{n} \leq k \leq (1-\delta)\alpha_c(\Delta)n$}. In this case, we first use \cref{lem:find-fugacity} to find a suitable $\lambda$. By \cref{thm:independent-lclt2},
\begin{align*}
p_k := \frac{i_k(G)\lambda^{k}}{Z_G(\lambda)} = \mathbb{P}_{\lambda}[|I| = k] = \Omega_{\Delta, \delta}\left(\frac{1}{\sqrt{n\lambda}}\right).
\end{align*}
Let $I_1,\dots, I_{\ell}$ denote independent samples obtained by running the Glauber dynamics for the hard-core model at fugacity $\lambda$ for $O_{\Delta, \delta}(n\log n)$ steps. Then, by the Chernoff bound,
\begin{align*}
\frac{\mathbbm{1}[I_1 \in \mathcal{I}_k(G)] + \dots + \mathbbm{1}[I_{\ell} \in \mathcal{I}_k(G)]}{\ell} = (1\pm \epsilon/4)p_k
\end{align*}
with probability at least $3/4$, provided that $\ell > C_{\Delta, \delta}\epsilon^{-2}\sqrt{n\lambda}$, for a sufficiently large constant $C_{\Delta, \delta}$.
For the running time, sampling each $I_i$ takes time $O_{\Delta, \delta}(n(\log n)\log(n/\epsilon))$, finding its size takes time $O_{\Delta}(n)$, computing $\lambda^{k}$ takes time $O(k\log{n}\log\log{n})$ and approximating $Z_G(\lambda)$ to within an $\epsilon/2$-relative approximation takes time $T$, which gives the desired conclusion.
\end{proof}
\section{Cluster expansion}
\label{secCluster}
In this section, we treat the case of small activities $\lambda$ using the cluster expansion, a classical tool from statistical physics. The cluster expansion (or Mayer series~\cite{mayer1941molecular}) is a formal infinite series for $\log Z_G(\lambda)$.
For an introduction to the cluster expansion, see~\cite[Chapter 5]{friedli2017statistical}.
We introduce the cluster expansion in the special case of the hard-core model on a graph $G$. A \textit{cluster} $\Gamma$ is an ordered tuple of vertices from $G$. The size of $\Gamma$, denoted $|\Gamma|$, is the length of the tuple. The incompatibility graph of $\Gamma = (v_1, \dots, v_k)$, $H(\Gamma)$, is the graph with vertex set $\{v_1, \dots, v_k\}$ and an edge between $v_i, v_j$, $i \ne j$, if $v_i \in N(v_j) \cup \{v_j\}$ in $G$. The Ursell function of a graph $H$ is the function
\[ \phi(H) = \frac{1}{|V(H)|!} \sum_{A \subseteq E(H): (V(H), A) \text{ connected}} (-1)^{|A|} \,. \]
The cluster expansion is the formal infinite power series
\[ \log Z_G(\lambda) = \sum_{\Gamma} \phi(H(\Gamma)) \lambda^{|\Gamma|} \, , \]
where the sum is over all clusters of vertices from $G$. In fact, in this setting the cluster expansion is simply the Taylor series for $\log Z_G(\lambda)$ around $0$, with terms organized by clusters instead of grouping all terms of order $k$ together.
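To make these definitions concrete, the following brute-force sketch (exponential time, for illustration only) computes the Ursell function by enumerating connected spanning edge subsets, and sums the cluster expansion over all clusters of size less than $t$; for an edgeless graph it reproduces the Taylor series of $\log(1+\lambda)$ per vertex.

```python
import math
from itertools import combinations, product

def connected(vertices, edges):
    """Is the graph (vertices, edges) connected?  BFS from one vertex."""
    vs = list(vertices)
    adj = {v: set() for v in vs}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, frontier = {vs[0]}, [vs[0]]
    while frontier:
        for w in adj[frontier.pop()]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return len(seen) == len(vs)

def ursell(vertices, edges):
    """phi(H): sum of (-1)^|A| over A in E(H) with (V(H), A) connected,
    divided by |V(H)|!."""
    total = 0
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            if connected(vertices, A):
                total += (-1) ** r
    return total / math.factorial(len(vertices))

def log_Z_truncated(adj, lam, t):
    """Cluster expansion for log Z_G(lam), truncated to clusters of size < t.

    A cluster is an ordered tuple of vertices; the incompatibility graph
    H(Gamma) joins positions i, j when the vertices coincide or are
    adjacent.  Disconnected H(Gamma) contribute phi = 0 and are skipped.
    """
    V = list(adj)
    total = 0.0
    for size in range(1, t):
        for Gamma in product(V, repeat=size):
            verts = list(range(size))
            edges = [(i, j) for i in range(size) for j in range(i + 1, size)
                     if Gamma[i] == Gamma[j] or Gamma[i] in adj[Gamma[j]]]
            if connected(verts, edges):
                total += ursell(verts, edges) * lam ** size
    return total
```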
To use the cluster expansion as an enumeration tool, it is essential to bound its rate of convergence. We will use the convergence criteria of Koteck\'{y} and Preiss~\cite{kotecky1986cluster} (though the zero-freeness result of Shearer~\cite{shearer1985problem} along with the lemma of Barvinok~\cite{barvinok2016combinatorics} on truncating Taylor series would also work).
This lemma bounds the additive error of truncating the cluster expansion after a given number of terms.
\begin{lemma}
\label{lemKPhardcore}
Let $G$ be a graph of maximum degree at most $\Delta$ on $n$ vertices and suppose $ 0 < \lambda < \frac{1}{e(\Delta+1)}$. Then
\begin{equation}
\label{eqKPhc}
\sum_{\Gamma: | \Gamma | \ge k} \left| \phi(H(\Gamma)) \lambda^{|\Gamma|} \right| \le n \left( \lambda e(\Delta+1) \right) ^k \,.
\end{equation}
\end{lemma}
\begin{proof}
This is a consequence of the main result of~\cite{kotecky1986cluster}. We can express the hard-core model as a polymer model in the setting of~\cite{kotecky1986cluster} by defining each vertex to be a polymer with weight $\lambda$. Taking $a(v) =1$ for all $v \in V(G)$ and $\exp(d(v)) = \frac{1}{\lambda e (\Delta+1)}$, we have for all $v \in V(G)$,
\[ \sum_{u \in N(v) \cup \{v\}} \lambda e^{a(v) + d(v) } \le (\Delta+1) \lambda e \frac{1}{\lambda e (\Delta+1) }=1 \,. \]
Then by the main theorem in~\cite{kotecky1986cluster}, for all $v \in V(G)$,
\[ \sum_{\Gamma \ni v} \left |\phi(H(\Gamma)) \lambda^{|\Gamma|} \left( \frac{1}{\lambda e (\Delta+1) }\right)^{|\Gamma|} \right | \le 1\,. \]
Restricting the sum to clusters of size at least $k$ and summing over all $v \in V(G)$ gives~\eqref{eqKPhc}.
\end{proof}
The cluster expansion is a very convenient tool for studying the cumulants of the random variable $Y = | I|$ (see e.g.~\cite{dobrushin1996estimates,cannon2019bipartite,jenssen2020independent}). In particular, when the cluster expansion converges, we have the formula
\begin{equation}
\label{eqCumulantCluster}
\kappa_k(Y) = \sum_{\Gamma} |\Gamma|^k \phi(H(\Gamma)) \lambda^{|\Gamma|} \,.
\end{equation}
\begin{lemma}
\label{lemClusterKbound}
Fix $\Delta\ge 2, \delta >0$ and suppose $\lambda \le \frac{1-\delta}{e (\Delta+1)}$. Then for all fixed $k \ge 1$, and all graphs $G$ of maximum degree $\Delta$ on $n$ vertices,
\[ \sum_{\Gamma} |\Gamma|^k \lambda^{|\Gamma|} \phi(H(\Gamma)) = n \lambda + O_{k,\delta} (n \lambda^2 \Delta^2) \,. \]
\end{lemma}
\begin{proof}
Since the contribution to the left-hand side from clusters of size $1$ is $n \lambda$, it suffices to show that
\[ \sum_{\Gamma: | \Gamma | \ge 2} |\Gamma|^k \lambda^{|\Gamma|} | \phi(H(\Gamma))| = O_{k,\delta} (n \lambda^2 \Delta^2) \,.\]
Applying \cref{lemKPhardcore}, we have
\begin{align*}
\sum_{\Gamma: | \Gamma | \ge 2} |\Gamma|^k \lambda^{|\Gamma|} | \phi(H(\Gamma))| &\le n \sum_{t \ge2} t^k (\lambda e (\Delta+1))^t \le C_{k,\delta} n \lambda^2 (\Delta+1)^2 \,. \qedhere
\end{align*}
\end{proof}
As in~\cite[Corollary 23]{jenssen2020independent}, we can immediately deduce bounds on the mean and variance and a central limit theorem from \cref{lemClusterKbound}.
\begin{corollary}
\label{corEVbounds} Let $G$ be a graph of maximum degree $\Delta$.
If $ \lambda \le \frac{1-\delta}{e (\Delta+1)}$, then the following hold:
\begin{enumerate}
\item $\mb{E}_\lambda Y = n \lambda + O_{\delta}(n \lambda^2 \Delta^2)$.
\item $\on{Var}_{\lambda} Y = n \lambda + O_{\delta}(n \lambda^2 \Delta^2)$.
\end{enumerate}
If in addition $\lambda n \to \infty$ as $n \to \infty$, then
\begin{enumerate}
\setcounter{enumi}{2}
\item The random variable $Y$ satisfies a central limit theorem.
\end{enumerate}
\end{corollary}
\begin{proof}
We prove these statements via the cumulants of $Y$.
\begin{align*}
\mb{E}_\lambda Y = \kappa_1 ( Y) & = \sum_{\Gamma} |\Gamma| \lambda^{|\Gamma|} \phi(H(\Gamma)) = n \lambda + O_{\delta}(n \lambda^2 \Delta^2) \, .
\end{align*}
\begin{align*}
\on{Var}_\lambda Y = \kappa_2 ( Y) & = \sum_{\Gamma} |\Gamma|^2 \lambda^{|\Gamma|} \phi(H(\Gamma)) = n \lambda+ O_{\delta}(n \lambda^2 \Delta^2) \, .
\end{align*}
Now let $X = (Y- \mb{E}_{\lambda} Y)/\sqrt{\on{Var}_{\lambda} Y}$. By definition $\kappa_1(X) =0$ and $\kappa_2(X) =1$. All the cumulants of a standard Gaussian random variable are $0$ except for the second which is $1$, and so to prove a central limit theorem it suffices to show that for any fixed $k \ge 3$, $\kappa_k (X) \to 0$ as $n \to \infty$. We have
\begin{align*}
|\kappa_k(X) | &= \left | \sum_{\Gamma} \frac{|\Gamma|^k}{\on{Var}( Y)^{k/2} } \lambda^{|\Gamma|} \phi(H(\Gamma)) \right | \\
&\le \frac{ 1}{ \on{Var}( Y)^{3/2}} \sum_{\Gamma} |\Gamma|^k \lambda^{|\Gamma|} |\phi(H(\Gamma)) | \\
&=(1+o(1)) (n \lambda )^{-1/2} = o(1) \,. \qedhere
\end{align*}
\end{proof}
Note that if $\lambda n \to \rho >0$ as $n \to \infty$, then \cref{lemClusterKbound} shows that $\kappa_k(Y) \to \rho$ for each fixed $k$, which implies that $Y$ converges in distribution to a Poisson random variable of mean $\rho$.
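For comparison, every cumulant of a Poisson random variable $W$ with mean $\rho$ equals $\rho$, as is immediate from its cumulant generating function:

```latex
\log \mathbb{E}\, e^{tW} \;=\; \rho\left(e^{t}-1\right)
\;=\; \sum_{k \ge 1} \rho\,\frac{t^k}{k!},
\qquad\text{so}\qquad \kappa_k(W) = \rho \quad \text{for all } k \ge 1.
```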
\begin{theorem}
\label{thmClusterLocal}
Fix $\Delta \ge 3$. If $G_n$ is a sequence of graphs of maximum degree $\Delta $ on $n$ vertices and $n^{-1} \ll \lambda \le \frac{1}{100 \Delta^2}$, then the random variable $Y$ satisfies a local central limit theorem as $n \to \infty$.
\end{theorem}
\begin{proof}
We follow the proof strategy of~\cite[Theorem 19]{jenssen2021independent} that proves a local central limit theorem in the setting of polymer models satisfying the equivalent of \cref{lemClusterKbound}.
Let $X = (Y- \mb{E}_\lambda Y)/\sqrt{\on{Var}_{\lambda} Y}$, and let $\phi_X(t) = \mb{E}_{\lambda} e^{itX}$ be the characteristic function of $X$ (and $\phi_Y$ the characteristic function of $Y$).
First we prove that there exists $c>0$ so that for all $t \in [-\pi, \pi]$, $|\phi_X(t)| \le e^{-ct^2 }$. Using the cluster expansion we write
\begin{align*}
\log \phi_Y(t) &= \sum_{\Gamma} \left( e^{it|\Gamma| }-1 \right) \phi(H(\Gamma)) \lambda^{|\Gamma|} \,,
\end{align*}
and so
\begin{align*}
\mathrm{Re} \log \phi_Y(t) &= \sum_{\Gamma} \left( \cos ( t |\Gamma|)-1 \right) \phi(H(\Gamma)) \lambda^{|\Gamma|} \\
&= n \lambda (\cos(t) -1) + \sum_{\Gamma: |\Gamma|\ge 2} \left( \cos ( t |\Gamma|)-1 \right) \phi(H(\Gamma)) \lambda^{|\Gamma|} \\
&\le -\frac{t^2 n \lambda }{5} + \sum_{\Gamma: |\Gamma|\ge 2} t^2 |\Gamma|^2 |\phi(H(\Gamma))| \lambda^{|\Gamma|} \\
&\le -\frac{t^2 n \lambda }{5} + \frac{t^2 n \lambda }{10} \le -\frac{t^2 n \lambda }{10} \,,
\end{align*}
where we have used the bounds $\cos(t)-1 \le -t^2/5$ and $1-\cos(tx) \le (tx)^2$.
Exponentiating then gives $|\phi_Y(t)| \le e^{-t^2 n \lambda/10}$, which, along with \cref{corEVbounds}, implies that $| \phi_X(t) | \le e^{-c t^2}$ for some $c >0$.
To finish the proof we will apply \cref{lem:fourier-convert} with $\alpha = \mb{E} Y$ and $\beta = \sqrt{\on{Var}(Y)}$, which says
\begin{align*}
\sup_{x\in \mathcal{L}}|\beta\mathcal{N}(x) - \mathbb{P}[X=x]| &\le \beta\int_{-\pi/\beta}^{\pi/\beta}\big|\phi_X(t)-\phi_{\mathcal{Z}}(t)\big|dt + e^{-\pi^2/(2\beta^2)} \\
&\le o(\beta) + \beta\int_{-\infty}^{\infty}\big|\phi_X(t)-\phi_{\mathcal{Z}}(t)\big|dt \,,
\end{align*}
and so to prove the LCLT it suffices to show that
\[ \int_{-\infty}^{\infty}\big|\phi_X(t)-\phi_{\mathcal{Z}}(t)\big|dt = o(1) \,. \]
By the central limit theorem of \cref{corEVbounds} we have that $\phi_X(t) \to \phi_{\mathcal{Z}}(t)$ as $n \to \infty$. Moreover $\big| \phi_X(t)-\phi_{\mathcal{Z}}(t) \big |$ is an integrable function since it is bounded by $e^{-ct^2} + e^{-t^2/2}$ from the bound above. Applying dominated convergence completes the proof.
\end{proof}
\section{Deterministic approximation of cumulants in linear time}
\label{secCumulants}
In this section we prove \cref{thm:linear-time-cumulants}. We prove the theorem first in the regime of cluster expansion convergence, then extend to more general zero-free regions. We prove the theorem for general graphs, noting that the proof for claw-free graphs is similar.
\begin{lemma}
\label{thmAlgsmallLam}
For all graphs $G$ of maximum degree $\Delta$, all $0 < \lambda \le \frac{1- \delta}{e (\Delta+1)}$, and all fixed $k \ge 1$ there is a deterministic algorithm to give an $\epsilon \lambda n$ additive approximation to $\kappa_k(Y)$. The algorithm runs in time $O_{\Delta, \delta, k}(n \cdot (1/\epsilon)^{O_{\Delta, \delta}(1)})$.
\end{lemma}
\begin{proof} The algorithm will be to compute a truncation of the cluster expansion for $\kappa_k(Y)$. Recall that in the regime of cluster expansion convergence, we have
\[\kappa_k(Y) = \sum_{\Gamma} |\Gamma|^k \phi(H(\Gamma)) \lambda^{|\Gamma|} \,.\]
Now let $T_t^{(k)} = \sum_{|\Gamma| < t} |\Gamma|^k \phi(H(\Gamma)) \lambda^{|\Gamma|}$ be the truncation keeping only clusters of size less than $t$. By~\eqref{eqCumulantCluster} and~\eqref{eqKPhc} we have
\begin{align*}
\left| \kappa_k(Y) - T_t^{(k)} \right | &\le n \sum_{ j \ge t} j^k \left( \lambda e(\Delta+1) \right) ^j.
\end{align*}
By taking $t = \Omega \left( \frac{ \log (\epsilon) }{\log (\lambda e (\Delta+1))} + \frac{k^2}{\delta^2}\right) $, we have that $\left| \kappa_k (Y) - T_t^{(k)} \right | \le n \lambda \epsilon $. This truncated cluster expansion can be computed in time $n \cdot \exp ( O(t \log \Delta)) = O_{\Delta, \delta, k}(n \cdot (1/\epsilon)^{O_{\Delta, \delta}(1)})$ using the algorithm of~\cite{patel2017deterministic,helmuth2020algorithmic}.
\end{proof}
Next we give a general algorithm when $\lambda$ is not necessarily in the regime of cluster expansion convergence.
\begin{proof}[Proof of \cref{thm:linear-time-cumulants}]
Since \cref{thmAlgsmallLam} covers the case of $\lambda \leq 1/(2e(\Delta+1))$, we will assume here that $\lambda \geq 1/(2e(\Delta+1))$.
The general algorithm is an adaptation of the approximate counting algorithm of Barvinok and Patel and Regts via an expression for $\kappa_k(Y)$ in terms of derivatives of $\log Z_G(\lambda)$. The $k$th cumulant of $Y$ can be written as a linear combination of the first $k$ derivatives of $\log Z_G(\lambda)$ in $\lambda$, where (for $\lambda$ in the considered range) the size of the coefficients in the linear combination can be bounded in terms of only $k$ and $\Delta$. Hence, it suffices to give an $\epsilon n$ additive approximation to $\frac{d^k}{d \lambda^k} \log Z_G(\lambda)$ in the stated running time.
Let $\delta >0$ be small enough so that $\lambda \in \mathcal{R}_{\delta, \Delta}$.
Following Barvinok~\cite{barvinok2016combinatorics} and Peters and Regts~\cite{PR19}, there is a polynomial $f$ of degree $D= D(\delta,\Delta)$ that maps the unit circle in the complex plane into the region $\mathcal{R}_{\delta, \Delta}$, sending $0$ to $0$ and $1$ to $\lambda$. Let
\[ \hat Z (y) = Z_G( f(y) ) \]
so that $\hat Z(1) = Z_G(\lambda) $. In particular, $\hat Z$ is a polynomial in $y$ of degree $N \le D n$. Let $r_1, \dots, r_N$ denote the inverses of the roots of $\hat Z$ so that $\hat Z(y) = \prod_{i=1}^N (1-r_i y)$. By \cref{thm:zero-free}, there is an $\eta = \eta(\delta,\Delta) \in (0,1)$ so that $|r_i| \le \eta$ for $i=1, \dots , N$.
The first $k$ derivatives of $\log Z_G(\lambda)$ with respect to $\lambda$ can be written in terms of the first $k$ derivatives of $\log \hat Z(y)$ with respect to $y$ and those of $f$ with respect to $y$. Using the chain rule we obtain
\[ \frac{d^k \log Z_G(\lambda)}{d \lambda^k} = \sum_{j=1}^k b_j \frac{ d^j \log \hat Z(y) }{ d y^j } \]
where the coefficients $b_j$ depend only on the first $j$ derivatives of the bounded-degree polynomial $f$ and thus are bounded. For instance, we have
\begin{align*}
\frac{d \log Z_G(\lambda)}{d \lambda} &= \frac{ \frac{d \log \hat Z(y)}{dy} }{ \frac{d f(y)}{dy} }
\intertext{and}
\frac{d^2 \log Z_G(\lambda)}{d \lambda^2} &= \frac{ \frac{d^2 \log \hat Z(y)}{dy^2} }{ \left( \frac{d f(y)}{dy} \right)^2 } - \frac{ \frac{d \log \hat Z(y)}{dy} \cdot \frac{d^2 f(y)}{dy^2} }{ \left( \frac{d f(y)}{dy} \right)^3 } \,.
\end{align*}
In particular, it now suffices to compute an $\epsilon n $ additive approximation to $\frac{ d^j \log \hat Z(y) }{ d y^j } $ for $j= 1,\dots,k$. We can write
\begin{align*}
\frac{ d^k \log \hat Z(y) }{ d y^k } &= \frac{ d^k }{ d y^k } \sum_{i=1}^N \log (1- r_i y) = - (k-1)! \sum_{i=1}^N \frac{r_i^k}{(1-r_i y)^k }
\\&= - (k-1)! \sum_{i=1}^N r_i^k \sum_{s=0}^{\infty} \binom{k-1+s}{k-1} (r_i y)^s \, .
\end{align*}
Now setting
\[T_t^{(k)} = -(k-1)! \sum_{i=1}^N r_i^k \sum_{s=0}^{t} \binom{k-1+s}{k-1} r_i^s \]
we have
\begin{align*}
\left|T_t^{(k)} - \frac{ d^k \log \hat Z(y) }{ d y^k } \right| &\le (k-1)! N \sum_{s = t+1}^{\infty} (s+k)^k \eta^s = O (n \eta ^t) \,,
\end{align*}
and so for $t = \Omega_{\Delta, \delta}(\log(1/\epsilon) + k^2)$, the truncation error can be made at most $\epsilon n$ as desired. Moreover, using the algorithm of Patel--Regts~\cite{patel2017deterministic}, $T_t^{(k)}$ can be computed in time $n e^{O_{\Delta,\delta}(t)} = O_{\Delta, \delta, k}(n(1/\epsilon)^{O_{\Delta, \delta}(1)})$ for this choice of $t$.
The FPTAS for the mean and variance follow from the additive approximations and \cref{lem:var-bound}. \qedhere
\end{proof}
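As a purely illustrative sanity check of the truncation step (not part of the algorithm above, where the inverse roots are of course unknown and the Taylor coefficients of $\log Z$ are computed directly via Patel--Regts), the sum $T_t^{(k)}$ can be evaluated at $y = 1$ from given inverse roots; the function name below is hypothetical.

```javascript
// Evaluate T_t^{(k)} = -(k-1)! * sum_i r_i^k * sum_{s=0}^t C(k-1+s, k-1) r_i^s,
// i.e. the degree-t truncation of d^k/dy^k log Z-hat at y = 1, given the
// inverse roots r_i with |r_i| < 1. Illustrative only.
function truncatedLogDeriv(invRoots, k, t) {
  var factK = 1; // (k-1)!
  for (var i = 2; i < k; i++) factK *= i;
  var total = 0;
  invRoots.forEach(function(r) {
    var inner = 0;
    var c = 1;    // binomial C(k-1+s, k-1), updated incrementally over s
    var rPow = 1; // r^s
    for (var s = 0; s <= t; s++) {
      if (s > 0) {
        c = c * (k - 1 + s) / s;
        rPow *= r;
      }
      inner += c * rPow;
    }
    total += Math.pow(r, k) * inner;
  });
  return -factK * total;
}
```

For a single inverse root $r$, the exact value at $y = 1$ is $-(k-1)!\, r^k/(1-r)^k$; e.g. $r = 1/2$ gives $-1$ for both $k=1$ and $k=2$, and the truncation approaches this geometrically in $t$, matching the $O(n\eta^t)$ error bound above.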
\bibliographystyle{amsplain0}
The 1980 United States presidential election in Michigan took place on November 4, 1980, as part of the nationwide 1980 United States presidential election in which all fifty states and the District of Columbia participated. Voters chose 21 electors to the Electoral College, who voted for president and vice president.
Michigan was won by former California Governor Ronald Reagan (R) by a margin of 6.5%. This result nonetheless made Michigan 3.2% more Democratic than the nation at large, despite the state having voted to the right of the nation by over seven points in 1976.
Results
Results by county
See also
United States presidential elections in Michigan
References
'use strict';
var async = require('async');
var fs = require('fs');
var npm = require('npm');
var path = require('path');
var spawn = require('child_process').spawn;
var bitcore = require('bitcore');
var $ = bitcore.util.preconditions;
var _ = bitcore.deps._;
/**
* Will remove a service from bitcore-node.json
* @param {String} configFilePath - The absolute path to the configuration file
* @param {String} service - The name of the module
* @param {Function} done
*/
function removeConfig(configFilePath, service, done) {
$.checkArgument(path.isAbsolute(configFilePath), 'An absolute path is expected');
fs.readFile(configFilePath, function(err, data) {
if (err) {
return done(err);
}
var config = JSON.parse(data);
$.checkState(
Array.isArray(config.services),
'Configuration file is expected to have a services array.'
);
// remove the service from the configuration
// iterate backwards so that splicing an element doesn't skip the next one
for (var i = config.services.length - 1; i >= 0; i--) {
if (config.services[i] === service) {
config.services.splice(i, 1);
}
}
config.services = _.unique(config.services);
config.services.sort(function(a, b) {
// comparator must return a negative/zero/positive number, not a boolean
return a.localeCompare(b);
});
fs.writeFile(configFilePath, JSON.stringify(config, null, 2), done);
});
}
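The prune/dedupe/sort step inside `removeConfig` can also be expressed without in-place mutation, which sidesteps the index bookkeeping of splicing inside a loop. A minimal sketch (function name hypothetical):

```javascript
// Return a new services array with `service` removed, duplicates dropped,
// and entries sorted alphabetically -- the same net effect removeConfig
// applies to config.services before writing the file back.
function pruneServices(services, service) {
  var kept = services.filter(function(s) { return s !== service; });
  var unique = kept.filter(function(s, i) { return kept.indexOf(s) === i; });
  return unique.sort();
}
```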
/**
* Will uninstall a Node.js service and remove it from package.json.
* @param {String} configDir - The absolute configuration directory path
* @param {String} service - The name of the service
* @param {Function} done
*/
function uninstallService(configDir, service, done) {
$.checkArgument(path.isAbsolute(configDir), 'An absolute path is expected');
$.checkArgument(_.isString(service), 'A string is expected for the service argument');
var child = spawn('npm', ['uninstall', service, '--save'], {cwd: configDir});
child.stdout.on('data', function(data) {
process.stdout.write(data);
});
child.stderr.on('data', function(data) {
process.stderr.write(data);
});
child.on('close', function(code) {
if (code !== 0) {
return done(new Error('There was an error uninstalling service(s): ' + service));
} else {
return done();
}
});
}
/**
* Will remove a Node.js service if it is installed.
* @param {String} configDir - The absolute configuration directory path
* @param {String} service - The name of the service
* @param {Function} done
*/
function removeService(configDir, service, done) {
$.checkArgument(path.isAbsolute(configDir), 'An absolute path is expected');
$.checkArgument(_.isString(service), 'A string is expected for the service argument');
// check if the service is installed
npm.load(function(err) {
if (err) {
return done(err);
}
npm.commands.ls([service], true /*silent*/, function(err, data, lite) {
if (err) {
return done(err);
}
if (lite.dependencies) {
uninstallService(configDir, service, done);
} else {
done();
}
});
});
}
/**
* Will remove Node.js service(s) and remove them from the bitcore-node configuration.
* @param {String} options.path - The absolute path to the bitcore-node configuration directory
* @param {Array} options.services - An array of strings of service names
* @param {Function} done - A callback function called when finished
*/
function remove(options, done) {
$.checkArgument(_.isObject(options));
$.checkArgument(_.isFunction(done));
$.checkArgument(
_.isString(options.path) && path.isAbsolute(options.path),
'An absolute path is expected'
);
$.checkArgument(Array.isArray(options.services));
var configPath = options.path;
var services = options.services;
var bitcoreConfigPath = path.resolve(configPath, 'bitcore-node.json');
var packagePath = path.resolve(configPath, 'package.json');
if (!fs.existsSync(bitcoreConfigPath) || !fs.existsSync(packagePath)) {
return done(
new Error('Directory does not have a bitcore-node.json and/or package.json file.')
);
}
async.eachSeries(
services,
function(service, next) {
// if the service is installed remove it
removeService(configPath, service, function(err) {
if (err) {
return next(err);
}
// remove the service from bitcore-node.json
removeConfig(bitcoreConfigPath, service, next);
});
}, done
);
}
module.exports = remove;
# His Every Whim
## Liliana Rhodes
### Contents
Copyright
Get in Touch with Liliana Rhodes
About His Every Whim
His Every Whim
July
1. Ashley
2. Xander
3. Ashley
4. Xander
5. Ashley
6. Xander
7. Ashley
8. Xander
9. Ashley
August
10. Ashley
11. Xander
12. Ashley
13. Ashley
14. Xander
15. Ashley
16. Xander
17. Ashley
18. Xander
19. Ashley
Also by Liliana Rhodes
About the Author
Sign Up For FREE Books
His Every Whim
Copyright © 2013 by Jaded Speck Publishing/Liliana Rhodes
* * *
This book is a work of fiction. The names, characters, places, and incidents are products of the writer's imagination or have been used fictitiously and are not to be construed as real. Any resemblance to persons, living or dead, actual events, locales or organizations is entirely coincidental.
* * *
**All rights reserved.** No part of this book may be reproduced, scanned, or distributed in any manner whatsoever without written permission from the author except in the case of brief quotations embodied in critical articles and reviews.
**_Get news about my new and upcoming releases as well as special offers by signing up for_**
_Liliana's Email Newsletter_
* * *
_For more about me and my books, visit my website at_ _LilianaRhodes.com_
# About His Every Whim
**" I couldn't let her disappear from my life like that. I wanted more of her."**
Curvy Ashley Monroe is having a string of bad luck. Jobless and sleeping on her friend's couch, she takes a waitressing job at an event at the prestigious Boone Art Gallery. While there, she's told one thing--stay away from the mysterious and brooding man sitting by himself. Easier said than done for Ashley who finds herself inexplicably drawn to the gorgeous stranger.
* * *
After spending the evening doing things she normally wouldn't--like letting herself be seduced by this enigmatic man she only knows as Xander--she unknowingly opens her world up to the lavish lifestyle of beautiful mansions, fast cars, and expensive clothes. Because when billionaire Xander Boone sees something he wants, he gets it.
## Part One
# July
## Chapter One
# Ashley
"Ashley, wait! You forgot your tie!"
I turned around when I heard Jackie call me. Taking the black clip-on tie that was part of our uniform from her then clipping it in place, I looked at the other girls in their pressed white shirts, black slacks, ties, and vests. Had anyone told me I'd graduate from college and become a waitress, I would've told them they were full of shit. But there I was, dressed and ready to serve cocktails at the famed Boone Art Gallery, the largest and most prestigious gallery in the city of Canyon Cove.
"Do I look alright? I can't believe you have to wear this outfit whenever you work."
"It pays the bills while I'm in school. Besides Ash, you look great. It really shows off your curves."
I rolled my eyes. She was just trying to make me feel better. I looked like a penguin, and I knew everyone thought the same. I glanced over at Jackie with her dark brown hair up in a ponytail just like mine. That's where the similarities ended. Dressed alike, we looked like the before and after photos of a diet ad.
"Thanks for getting me this job, Jackie. I don't know what I'd do without you."
"No sweat. I know you'd do the same for me. Plus when I heard this party was at the Boone, I knew I had to get you in. Maybe you can see if they're hiring."
I nodded and hoped she'd drop the subject. I needed to put my best foot forward and get through the night. I didn't need to be reminded of my inability to get a job because I "only" had an Art History degree and no "real" work experience.
In a matter of days, my world had come crashing down around me. When my graduate school loans got denied at the last minute, I had to scramble to find a job to keep my apartment. Now, two months later, not only was I still unemployed, but I was sleeping on Jackie's pull-out couch. I really didn't want to think about it.
As if she read my mind, Jackie put her arm around me and gave me a quick hug. We each grabbed a small tray of champagne-filled glasses and headed out of the break room and into the crowded exhibit area.
The Boone Art Gallery buzzed with excitement. The event was invitation only for the grand opening of the Pedro Escamino exhibit--the hottest young sculptor in years. Only the elite of Canyon Cove society made the list. Everywhere I looked, beautiful people posed in their expensive clothes. Men strutted by in perfectly tailored suits and women glowed in couture gowns. It reminded me that being a waitress was the closest I'd ever get to being at this kind of event.
The gallery's large, open, modern space was white and simple in order to allow the artwork to pop and be the focus. Partial walls throughout the room governed the flow of traffic and gave the viewers either the surprise of the next piece or showcased another. Throughout the gallery, cozy alcoves with sleek, low European-style couches and cocktail tables invited the guests to relax.
As I walked around the room with my tray, offering the passing guests a glass of chilled champagne, Jackie rushed over to me, her voice filled with alarm as she whispered even though she had a fake smile in place.
"See that guy over there? In the alcove to the right. Sitting by himself. Don't be obvious about it."
I casually glanced towards the alcove she mentioned and saw a man sitting by himself. His legs were stretched out in front of him as he leaned back on the couch. While he was dressed like the other men in a perfectly tailored suit, he definitely wore it much better. I never realized before how sexy a man could look in a suit.
He looked to be in his forties since his light brown hair had some grey in it, especially at his temples. He wore his hair a little on the longer side, almost shaggy but not unkempt, and he had a closely trimmed beard which made it hard to see his face well. Regardless, it was obvious he was a very handsome man.
Complementing his charcoal grey suit, he wore an ice-blue tie. I couldn't help but wonder what color his eyes were or how rough the hair of his beard would feel against my cheek. With one glance, it was obvious to whoever looked that he refused to enjoy himself, but for some reason I felt myself drawn to him.
"You mean the grump?" I asked.
"Shh! Yes!" she hissed. "He's always at these things. I've never seen him mingle or talk to any guests. He just sits there." She paused for a moment as she snuck a quick peek at him. "Stay away from him. He always yells at the wait staff whenever we offer him anything. I think he likes to make people cry."
"Who is he?"
"Damned if I know. Just some rich asshole I guess, but I wanted to warn you."
Jackie disappeared into the crowd while I stood near an appetizer display taking it all in. I couldn't help it, what she said intrigued me. I regularly darted my eyes over to the man in the alcove. Later, even after I made my way through the crowd again, I still checked to see if anyone spoke to him. He remained completely alone and never once left his seat.
As the evening went on, I started to convince myself that he couldn't be all that bad. I told myself that maybe he was an asshole because no one ever offered him a drink or was nice to him. All Jackie said was that he made everyone cry. If that was the worst he could do, then why not go over?
I hemmed and hawed over it, making sure to walk past his alcove every so often to see if anyone spoke to him. I wasn't the kind of girl to be outgoing, normally I kept to myself and was shy and quiet, but eventually I couldn't help myself. I shoved away the old Ashley and like a moth to a flame, I went over to offer him a glass of champagne.
## Chapter Two
# Xander
Yet another event at the Boone Art Gallery. I only attended these functions out of some bizarre sense of obligation, so there I sat again by myself in my favorite alcove. At least it gave me the chance to people watch and if I hadn't been doing just that, then I wouldn't have seen her.
She stood out from the crowd with her black almond shaped eyes, bee stung lips, and dark brown hair. I imagined she would be even more beautiful with her hair down around her shoulders. Even in that ridiculous catering uniform, she was gorgeous. If all women had curves like that, I'd be a happy man.
It was one of the things with society I didn't understand. We pushed health and eating properly yet the image of beauty that was shoved down our throats was a thin twig of a woman. I preferred women to look healthy, to have some meat on their bones, not be a bag of bones.
So for once I was glad to be at the Boone and alone. It gave me the opportunity to keep admiring her as she walked around offering champagne. I hadn't been this attracted to anyone in a long time. It was a shame I couldn't do anything about it. If this was any other place and if I hadn't hired the caterer, things could be different.
But those were excuses. I chose to not look, to not get attached to anyone. Life was too complicated to bother.
## Chapter Three
# Ashley
Loading my tray with freshly poured champagne, I made my way towards his alcove. As I approached, I took a deep breath and reminded myself the worst he could do was make me cry. I was a big girl, I could handle a few tears if that was what happened. I needed to get closer to him. I entered his alcove and smiled.
"Champagne, sir?"
He grunted and looked up at me. He didn't say a word, just stared. I wondered if I should leave. The old Ashley would've left, but the old Ashley wouldn't have been in his alcove to begin with. I figured I had nothing to lose and set a glass of champagne down on the small table in front of him.
"Did I say I wanted champagne?"
His voice growled and I immediately regretted being there, but I was too stubborn to run away. I summoned up all my courage, stood up straight, and challenged his gaze.
"No, but you didn't say you didn't want it, either."
The brief silence lasted forever, but then he cracked a smile and picked up the glass I had set down. For a moment I was even more frightened of him, thinking he was a wild card and absolutely crazy, until I noticed his eyes. They were ice-blue, like his tie, and cold, but I saw a flash of warmth behind them. I never saw such amazing blue eyes before. I wanted to lose myself in them.
While I braced myself for the worst, he leaned back into his seat and crossed his legs with his ankle resting on his knee. When he took a sip of champagne, I breathed a sigh of relief and turned to head back into the crowd.
"I'm not done with you."
I stopped dead in my tracks. His voice was menacing and sent chills up my spine. Oddly though, I still found myself drawn to him. I couldn't help it. I turned back around to face him and saw the slightest smile playing on his lips.
"Is there anything else I can get for you, sir?"
"Sit."
At first I moved towards the couch. I wanted to do whatever he said, but then I remembered I was working.
"I'm sorry sir, but I need to continue making the rounds."
"Sit. No one is going to say anything to you about it. Join me."
"But--"
"No buts. Sit. You're a waitress, right? I need you to wait on me."
I looked out at the crowded room. I wanted to get back to work and earn some money. I needed to earn money. Again, the old me would've done the responsible thing and returned to work, but I still wasn't ready to do things the old Ashley way. Old Ashley got me nowhere, and it was time to change that. I set my tray of champagne down on the alcove's cocktail table and sat on the couch with him.
"Is there anything I can get you, sir?"
"Don't call me sir, call me Xander. And you are?"
"Ashley."
"Pleasure to meet you, Ashley." He extended his hand to me and I shook it, letting my hand be engulfed by his and overwhelmed when the warmth of his hand penetrated deep into my core. "Didn't they warn you about me?"
I couldn't help it, I giggled and nodded as my cheeks turned hot. It never occurred to me that he might know of his own reputation. I wondered how much he did to maintain it.
"They did. I guess I don't listen very well."
"We'll have to change that," he said.
His voice returned to its menacing tone again and I began to worry. Was he going to turn into the jerk Jackie warned me about? Maybe he planned to report me to the catering manager, Mr. Smithfield.
I was working with Jackie for just the one night but hoped to do it regularly. Being the only recent prospect I had to earn money, I didn't want to mess it up. I kept an eye out for Mr. Smithfield, who Jackie earlier convinced to give me a chance.
"Have you been waitressing for long? I don't think I've seen you here before."
"No sir, tonight is my first night."
"Sir? What did I say?"
"Sorry...Xander."
"And on your first night, you decide to bother the one person they warned you about? Interesting." He was quiet for a moment. "What did you do before this?"
I didn't answer him right away. I didn't know why he asked so many questions. His face was stoic. I couldn't tell if he was trying to be friendly or was simply interrogating me. I didn't think I had a choice, I had to answer him.
"I just graduated college."
"Ahh, so you just graduated college and you're waitressing? What did you major in? A waste like Art History?"
He laughed at his own joke, but I didn't find it funny. I folded my arms in front of my chest. Great, this is how he's going to make me cry.
"It's none of your business, but yes."
He laughed even harder, the sound boomed out of him and I knew everyone heard it, but no one reacted. He took a glass of champagne from my tray and handed it to me.
"I think you need this."
"No, I'm working, I can't."
"Don't make me tell your boss that you refused a simple customer request."
He had a stern look on his face, and I couldn't tell if he was joking or not but based on what Jackie said about him before, I wasn't going to take any chances. I picked up the glass but as I took a small sip, I saw Mr. Smithfield enter the room.
There I sat, the girl Jackie begged him to let waitress, resting and drinking champagne with a guest. I couldn't risk him seeing me and both of us losing our jobs.
"I'm sorry, I have to go."
## Chapter Four
# Xander
"Wait!" I stood up, hoping I could stop her, but it was too late, she was already gone.
Sitting back down, I saw the reason she left. Alan Smithfield headed straight towards me. He wore a light grey suit yet still looked disheveled. His stringy red hair was combed over his bald head like it had been for all the years I had known him. Smithfield was a slight man that would normally go unnoticed except for his Napoleon complex, which made me enjoy putting him in his place.
"Good evening, sir. I see you have some champagne. Is there anything else I can get you? You know my staff is here to please."
"Leave. I'm not in the mood." I growled at him. As he turned to go, I spotted Ashley running through a door at the back of the gallery. I got up, thinking I could catch her. I needed to catch her. "Excuse me Smithfield, I have a pressing engagement."
I pushed him out of the way as I left the alcove and made my way through the party to the back door. I couldn't let her disappear from my life like this. I wanted more of her.
## Chapter Five
# Ashley
I grabbed the tray and quickly ran to the break room to compose myself. Catching a glimpse of myself in the mirror, I saw the clip-on tie, the vest, and my ponytail, which seemed to make my face look even fuller, and I started crying. I knew there were worse jobs, but seeing myself dressed like that reminded me of my dreams for the future and how I had no idea how I'd ever be able to make them happen.
My tears wouldn't stop. I ripped off the clip-on tie and threw it at the wall. I was too upset to continue serving and even though I knew I'd not only lose this job, but that Jackie would be pissed after putting her head on the line for me, I couldn't do it anymore.
I needed to accept what happened and move on. It was time I looked even harder for a job I could support myself with. I couldn't feel sorry for myself anymore, that was old Ashley. Time for something new.
I grabbed my stuff from the locker and as I pulled on my purple hoodie, I noticed a bulletin board with the words "Job Postings" in the corner of the room. On the board were a few index cards with the word "filled" written with red marker on them. I wiped away at my tears and saw one towards the top that looked like it had been there for a long time. The card was discolored and faded but was still legible. I pulled it down to read:
* * *
_Live-In House Manager_
_Must be on call 24/7. Must be flexible and take direction well. Must be presentable. Familiarity with the arts is a plus._
_All living expenses are included. Position includes a monthly stipend. Start immediately._
* * *
It sounded too good to be true. I didn't care that the position didn't go into details about what was needed or expected from a 24/7 House Manager, I only cared that not only would I get a job but also a place of my own. I quickly jotted down the number so I could call in the morning.
"It was rude of you to run off like that."
Xander startled me. I didn't need to see him to recognize that intimidating voice. I closed my eyes and inhaled his musky scent. He stood so close behind me I could feel his warm breath on my ear, and it sent chills down my spine. Good chills.
"I had no choice. I had to leave," I whispered.
"You always have a choice. You just have to trust it. And trust me."
His voice was low and husky. He edged closer to me, and I could feel his strong body against my back and his breath moving down from my ear to my neck. His lips suddenly brushed against my skin and I held my breath, savoring the moment as his touch sent a rush through my body and I felt the area between my legs begin to pulse with excitement.
With one swift move of his hand, he spun me around to face him and pushed me against the wall beside the bulletin board. He stared at me so intensely I couldn't tear my eyes away. My breathing came fast and shallow. I didn't think I had ever felt this excited because of a man before, yet he had barely touched me.
Suddenly his lips crushed mine then forced them apart as he slid his tongue between my lips. I wrapped my arms around his broad back, slipping them between his crisp button-down shirt and the silk lining of his suit jacket, and pulled him closer against me. All I could think about was feeling his lean body against mine.
My head spun. He kept me pressed against the wall, and I could feel his member hardening as it pushed against my hip. While we kissed, he pulled my shirt out of my black pants then unbuttoned my pants.
As his hand slid into the front of my pants, his lips pulled away from mine. His breathing sounded hurried and short. He moved his hand over my silky panties then ran his finger along the lace border that clung to my leg.
Part of me was in shock. It wasn't like me to just let some guy put his hand down my pants. It also wasn't like me to kiss a stranger, but I didn't care about any of that. The pulsing between my legs had taken over my head. I bit my bottom lip as I waited and hoped for his next move.
Still pinned against the wall but not wanting to move, I had nowhere to look but into his intense ice-blue eyes. I felt his fingers slip into the top of my panties and held my breath in anticipation. His touch burned against my bare skin and sent waves of electricity through my body. I felt the heat of his strong hand move further down and over my lips.
I spread my legs a little, and he slid the tips of his fingers over my swollen clit. My knees went a little weak, but he held me up with his other hand. As he slowly rubbed me, I could feel myself getting so wet that it dripped down the curve of my pussy and onto my panties.
My eyes were still glued to his as his fingers moved further down and into my wet entrance. I briefly closed my eyes and moaned as two of his fingers began to slowly fuck me. My breathing came in even faster, shorter breaths.
Suddenly I heard the door at the end of the hall open and a couple of female voices approaching the break room. The door to the break room remained open and we were directly in front of it, at the end of the room.
"Relax," he whispered his command into my ear, and I felt his beard gently scratch my cheek.
I tried to speak, but his fingers were thrusting fast into me. He pressed his body against mine and I began to shake. I bit down hard on my bottom lip as I tried to keep my moans quiet.
"Come for me, Ashley," he whispered.
His lips brushed against my ear as he spoke. I could hear the footsteps and voices coming closer. He slid his hot wet fingers out of me and rubbed quick circles over my clit before squeezing it between his fingers.
Waves shook my body. I began to writhe and as I cried out, his lips covered mine, muffling the sounds of my orgasm.
He held me up, his body pressing mine against the wall as the last of my orgasm shot through me. As I stood there panting, I heard the voices get even closer.
"Turn around and zip up," he growled at me as he let me go.
I heard his footsteps cross the room quickly and exit as I quickly pulled myself back together. The voices had finally reached just outside the break room door.
"Good evening, Mr. Boone," said one of the women.
I stood in shock. Mr. Boone? As in Boone Art Gallery?? No, it couldn't be! I quickly grabbed my things and headed out of the break room and past the women who were still talking to each other. I ran into the exhibit room and looked around for my ice-blue mystery man, but I didn't see him anywhere.
I knew deep down that Xander and Mr. Boone were the same person, but it didn't make sense to me. There was no way someone like that would be interested in me. He could have anyone. I had to know who he was for sure. I quickly walked past the alcove he sat in earlier, but it was empty. Defeated, I figured I might as well head home.
I stepped out into the crisp night air and took a deep breath. The sky was clear, and the full moon brightly lit the city. As I walked down the steps of the gallery, I noticed a silver Audi R8 driving out of the parking garage under the building. I stood and watched as it pulled out onto the street and noticed the license plate--Boone.
## Chapter Six
# Xander
As I headed to the private underground garage, I couldn't help but think that I should go back. It wasn't like me to just leave a woman like that. Then again, it wasn't like me to do what I just did. That woman brought out the animal in me.
I inhaled deeply as I got into my car and for once, instead of the smell of the leather interior, all I could smell was her. Her sweet scent, like jasmine mixed with vanilla, intoxicated me, and I couldn't get enough of it.
Driving to my apartment, Ashley filled my thoughts. I wondered how I would be able to see her again when I didn't even have her last name. I considered calling Alan Smithfield and forcing him to give me all the information on his employees so I could find out where she lived. I knew I could bully him into it. It didn't seem right, but I didn't care.
"Dial Smithfield."
The phone rang several times before a rushed-sounding Alan Smithfield answered. Based on the background noise, I knew he was still at the art gallery.
"Alan, you have contact information for all of your employees, right?"
"Yes sir, of course I do."
"A new girl worked for you tonight. Ashley. Give me her last name, phone number, and address. This stays between us. I find out you spoke of this and you'll never cater in this town again."
"Yes. I understand, sir, but I don't have her information. She's a friend of one of my workers, Jackie Stone. Did she do something to upset you? I know you've had issues with our wait staff before."
"No, she was perfect. Thank you, Alan. I'll make sure there's a bonus in your payment."
I hung up the phone and kept my eyes on the road. How would I get to see her again?
## Chapter Seven
# Ashley
When I got to Jackie's, I immediately turned on my laptop. I had to look up Xander Boone and find out for sure if that was my mystery man's full name. The first result appeared from a recent newspaper article about the Boone Art Gallery's acquisition of the Pedro Escamino exhibit.
I clicked on the link, and a photo slowly loaded but when it finished, I wasn't sure if that was the man I met or not. The picture was a few years old and black and white. The man in it had shorter hair and no beard. I studied the photo for a moment before landing on his eyes. Even in black and white, it was obvious the man in the photo had the most intense ice-blue eyes.
Since I knew nothing about Xander Boone, I scrolled down and read the article. The Boone family was old money. While Xander owned and created Boone Enterprises, a leader in banking industry software and applications, his family operated several huge corporations that were involved in everything. You couldn't purchase something without a Boone company being somehow involved.
Xander immersed himself in the art world, not only having the gallery, but also donating millions to charities and organizations that supported art education and struggling artists. The article summed up with the fact that Xander was one of the wealthiest men in the world, a billionaire.
I did another quick search to see if I could find out anything about his personal life. There were some old articles about 'wild child' Xander Boone who had multiple drug and alcohol related run-ins with the law, but those were from over twenty years ago when he was in college. It was almost as if he didn't exist during the past twenty years.
Hearing the keys at the door, I closed the lid of my laptop. I didn't want Jackie seeing how I was obsessing over this man who was just a grumpy stranger a few hours ago.
"Hey Ash. I'm glad you're here. I was beginning to worry about you."
"You're not mad I left?"
"No way. I saw you talking to that asshole. Figured he probably ruined your night." She shrugged and then walked into her room, talking loudly so I could hear her. "I told you to stay away from him."
"He's not as bad as you think. I found out who he is." I paused a moment to calm my voice. I could hear the excitement in it. "Have you heard of Xander Boone?"
She peeked out of her room, half dressed.
"As in Boone Art Gallery? Boone Enterprises? Boone--"
"Yes okay, so you have. Jeez, I must live under a rock."
We laughed and she came out of her room now dressed in sweatpants and an old t-shirt. She looked me up and down and made a face.
"Why haven't you changed? I thought for sure you'd be out of that uniform by now."
I shrugged. The truth was I was so busy thinking about finding out who he was that it hadn't occurred to me to change. Plus my clothes smelled like him, a mix of musk and spice. I took a deep breath and blushed when I thought of the weight of his body against mine.
* * *
I slept later than usual the next morning. Jackie had already left for class, so I had the place to myself. I got up and poured myself a glass of orange juice and then remembered the phone number I wrote down for the House Manager position, dug it out of my bag, and called.
"Hello?" The voice sounded like it belonged to an older woman with a southern drawl.
"Hi. I'm calling about the House Manager position I saw posted at the Boone Art Gallery. I was wondering if it was still available."
"Yes, it is. I have time for an interview at 10am today. Is that too soon?"
"Oh no, that's perfect. I'll be there." I lied, thinking it wasn't enough time. I started going through my mental closet of interview outfits.
"Let me give you the address. When you get here, ask for me. I'm Tara."
She gave me the address, but it wasn't familiar to me. I pulled it up on a map site and saw that it was on the outskirts of Canyon Cove, a large town on the other side of the city. I'd have to take a cab if I wanted to get there on time. Luckily Jackie didn't mind sharing her closet, so I had my nicer things hung.
Looking through the closet, I decided on a black pencil skirt with a silky teal-colored blouse. The skirt hugged my hips more than I liked, but it still looked nice. I wore my hair loose, letting it brush my shoulders, and applied only minimal make up. I wanted to look professional, responsible, and mature. I didn't want my youth to be a reminder of how inexperienced I was.
The cabby honked his horn to let me know he had arrived. I grabbed my bag, a copy of my resume, and ran out the door. I didn't want to leave the cab waiting too long, I was using what was left of my cash to pay for the ride. I didn't have money to waste.
The car was a traditional yellow cab. The man driving looked like he hadn't taken a shower all month. I was glad for the clear partition between us because I could imagine what he smelled like, and that was bad enough. He wore a cap that covered most of his greasy hair, and his skin looked weathered from too much sun.
"Hi. The address is 3900 Whispering Woods Way."
"You sure about that?" he asked as he inspected my appearance.
"Yes, I'm sure about that. Can you please drive? I don't want to be late. I have an interview."
"Yeah yeah, I'm going. An interview, eh? What for?"
"House Manager." I replied shortly, hoping he'd get the hint that I wasn't in the mood to talk.
"Oh fancy. What's that pay?"
"I don't know, better than nothing."
"Well, good luck. It's a nice looking place there. Lots of them out there."
I thought about asking him why he seemed surprised I was going to that address, but I was already feeling so nervous I didn't want to know anything else. The last thing I needed was to get inside my head and bomb the interview because I was thinking too much. I decided to look out the window and tried to distract myself.
On the freeway, we drove past the white building of an Audi dealership, and I saw an R8 like Xander Boone's sitting on a giant display in the center of their parking lot. I wondered if I'd ever see him again and decided it was unlikely. We would never run in the same circles.
I wondered why he had such a bad reputation of making wait staff cry. There was something so brooding and mysterious about him, it drew me to him. As the cabby drove, I replayed the evening in my head and could feel the heat return to my cheeks.
I couldn't believe I had been so wild. I convinced myself it was best that I never saw him again; if I did, I'd probably die from embarrassment. The way he watched me, his eyes were so intense I felt naked in front of him, like he knew everything about me. He certainly knew how to make me come.
I felt the blood rush even more to my cheeks as I thought about his electric touch and his fingers thrusting inside me. I couldn't believe how wet just the thought of him made me. I crossed my legs, hoping that the cabby wouldn't be able to read my mind or know about the throbbing that had started between my legs.
"It's just right up here, miss."
I looked out the window and saw we were driving on a small country road along acres of fields with grazing horses. A tall wooden picket fence ran along the perimeter and at the edge, on the other side of the farthest fence, was forest. It was absolutely breathtaking.
Looking ahead, I noticed a tiny house sitting in the middle of the field with a large stable behind it and an old red pick-up truck. The home looked much smaller than I imagined, it was more of a cottage. It had a stone front and was surrounded by wildflowers. Why would a house so small need someone to manage it?
"Is that it?" I asked the cabby, who took a glance over in the direction of the cottage.
"What? Oh that? No!" He laughed, and I felt the blood rush to my cheeks from embarrassment. "We're still several minutes away, miss. You can't see the mansion from the road. You know that address is Jefferson Manor, right?"
A mansion? Jefferson Manor? Suddenly I was in over my head. It never occurred to me that I was going to a mansion. Was I dressed well enough? How does one dress for a mansion? And not just any mansion--Jefferson Manor!
Jefferson Manor was the oldest example of Georgian architecture in the state. The mansion itself was considered a work of art. I remembered studying it in a class once. It had stayed in the Jefferson family until about twenty years ago, but when the last of the Jeffersons passed away, people found out that the family had let the interior of the house deteriorate.
Years passed and the mansion and property were purchased via auction, but I didn't know by whom. I remembered reading the home was restored to its former beauty and couldn't wait to see it. The last pictures I had seen showed the decay and neglect of the years.
I went back to looking out the window. The fields were replaced with tall trees. The cab turned onto a tree-lined road with the leaves forming a canopy, and I realized why it was called Whispering Woods Way. The sun glittered through the foliage and dappled the road. I tried to look ahead but couldn't see anything through the greenery.
"It's right up here, miss."
"At the end of the road?"
"At the end of this driveway."
I was looking ahead, over the cabby's shoulder, when I finally caught my first glimpse. Jefferson Manor looked even more impressive and grand than the pictures I had seen in school. It had a light tan exterior with tall white columns flanking the large white wooden door and a slate grey roof.
The home had two stories of windows, each with many panes, but the windows on the ground floor were much taller than those on the upper floor. In the center of the upper floor was a large dome window and as we followed the curve of the gravel driveway, I saw the sparkle of an elegant chandelier. I noticed four chimneys jutting out of the pitched roof and remembered reading that the home had marble fireplaces, originally for heat.
The grounds surrounding the home were perfectly kept. As we slowed down, I noticed a rose garden and what looked like an English hedge maze. Never in my wildest dreams could I have imagined being in that mansion, and here I had the opportunity to live in it.
The cab came to a stop, and I stepped out onto the gravel and looked around in awe. In the distance, I heard a bell tower chime ten times. I ran up to the front door and rang the doorbell. For once, I needed to not be late.
"You Ashley Monroe?" A mature woman's voice with a thick Southern drawl came from behind and startled me.
"Goodness!"
"I'm sorry dear, didn't mean to frighten you. I heard your cab pull up. That's one thing about this gravel, you hear everything. Come into the house. I'm Tara by the way."
Tara didn't look anything like I imagined from hearing her voice on the phone. Pulling up, I began to imagine her as the stereotypical Southern belle, very feminine and girly, just older. She looked far from that though.
Tara had on a worn pair of jeans, a plaid button-down shirt, and a pair of battered cowboy boots. Her long, sandy-colored hair was streaked with grey and pulled into a tight ponytail. As I watched her walk, feet spread apart with a wide gait, I doubted she had a feminine bone in her body.
"Let me show you around. First of all, as far as I know you're the only one who has answered that ad. I'll tell you right now, you've got the job if you want it. I need someone to do it, I already have enough on my hands with the stables."
"Are you kidding? You have no idea how much I need a job. And a place to live."
"I don't kid, Miss Monroe. Alex needs someone to manage the house. I can't do it, we've got over 100 horses here. Some are still racing, some are pasture puffs. I don't have time for anything else."
As we walked, I looked around at the home, which seemed faded underneath a thick layer of dust that covered its restored beauty. The house had a sadness to it, like it was used to neglect and loneliness, something I could relate to. While all the furniture was still covered, cleaners moved through the rooms, removing dust and dirt as they began to make the rooms livable again.
"Doesn't anyone live here? Who would I be managing?"
"You'll need to hire people. Maids, cooks, that kind of thing. Alex has been living in the city when he's not traveling, but he's decided to move back here. I'm sure he'll want to meet you, there's a lot he needs help with so you'll have to be flexible and do whatever he says. Alex has quite the temper, but I stay out of his way and he stays out of mine. You'll do best if you do the same."
I took it all in. We went to the second floor and she showed me a small suite of rooms I would live in with a private staircase that led to the first floor, near the back of the house, by the kitchen. The suite was plain, unlike the rest of the house, and had a living area with a couch, a kitchenette, and a bedroom. The living room and bedroom were separated by glass-paned sliding French doors. To me, it was perfect.
We reached the end of the hall, which opened up into a circular study. Suspended above, in the center, was the crystal chandelier I saw from the driveway. The dome window was even more striking from the inside. I walked to the back of the study, to the second dome window on the opposite side of the room, and looked out.
I could see the stables from there and noticed a man in a suit leaving the cottage and begin walking towards the house. He was too far away for me to make out any features. I was about to ask Tara who he was when she led me down the hall and stopped in front of a pair of closed double doors.
"I need you to listen to me," Tara said. "You'll have free rein to explore the house and the property, but there is one thing I need you to do--do not enter this room."
"What's in there?" I needed to know. I hated not knowing something or not being able to do something.
"All I know is it's the master bedroom."
I nodded. Apparently Alex liked his privacy. I could appreciate that.
I followed her down a large staircase which curved into a formal living room. The room was the picture perfect image of old world elegance with its marble fireplace and ornamental moldings. As we entered it, I could hear footsteps on the gravel outside.
"That must be Alex." Tara walked towards the hall and called out. "Alex? Come to the living room. I'd like to introduce you to your new House Manager."
The sound of Alex's steps echoed on the marble. His long stride quickly carried him across the foyer and into the living room.
"It's about time you found someone," he growled as he entered the room, then stopped dead in his tracks as he looked at me. A smile crossed his lips then quickly disappeared.
My eyes widened with surprise. This couldn't be possible! Standing at the entrance to the living room was Xander.
He crossed the room towards us, his face unreadable.
"Alex, this is Ashley Monroe. She's agreed to take the position," Tara said.
"I've told you to not call me that," he growled at her, and I couldn't help but take a step back. "My mother may have hired you, but that doesn't mean I can't fire you."
He glared at her. She put her hands up and took a couple of steps back.
"Okay, okay, Xander. I think I'll get back to work at the stables now. It was nice meeting you, Miss Monroe."
Tara quickly left the room, leaving me alone with him. I waited for him to say something. I wasn't sure what to say after what happened in the break room the night before and now there I was, his new employee.
"And so we meet again. Miss Monroe, is it?"
## Chapter Eight
# Xander
Unable to sleep the night before, I left my apartment downtown and drove to The Manor. I couldn't get Ashley out of my head, and the only way I could think of to possibly see her again was by throwing another catered event at the gallery.
Part of me wished I hadn't left her in the break room, but I needed to buy her some time, and my leaving was the only thing I could think of that would do that. I knew that if fate ever brought her into my life again, I would make it up to her. Right now though, I needed to get to work on moving back into the house.
I had purchased Jefferson Manor about ten years ago. While I appreciated the architecture, the house was in such disrepair that I had no desire to buy it. To me, it was a money pit, but the choice wasn't all mine. About a million dollars later, it was fully restored with certain additions such as modern plumbing, electricity, renovated bathrooms, and a modern kitchen.
I hadn't lived in Jefferson Manor for almost three years. I left when the demons began to taunt me again as they did when I was younger.
Although I had been thinking about moving back for a while, I kept convincing myself that I didn't need to, that I loved being in the city. I hung the ad for the House Manager at the gallery and several people had shown interest in the position, but I always turned them away. The longer I was without a House Manager, the longer I could stay out of that house; but then my overbearing mother hired Tara to care for the stables and fill the position.
Hearing that Tara found a person gave me mixed feelings. I thought about intimidating the person until they quit, but I knew I had to go back and face the house and its memories. It was the only way I would be able to heal.
I couldn't believe my eyes as I entered the living room. There was Ashley, and she looked even more beautiful than she had the night before. Her hair was down, and it framed her round face. The skirt hugged her hips in such a way that if she was a road, she'd need a 'dangerous curves' sign. Her blouse pooled and shimmied as it rested on her full breasts.
I had to fight every instinct in me to not carry her away like a caveman. I wondered if I could get her alone again when Tara's words finally registered with me--Ashley was my new employee.
I couldn't help myself, just being in the same room as her made my blood begin to pump. I was trying to stay professional, but my mind kept reminding me of the taste of her lips and her soft body crushed against mine.
"I'm sure Tara didn't give you a lot of details about the position. Plus, there are certain responsibilities she doesn't know about. I'd like to go over everything with you now before you make your final decision. As you know, I can be very difficult and demanding."
I grinned at her, hoping the joke would relax her. I could tell by the way her eyes darted and she chewed her bottom lip that she was nervous.
"Yes. I mean, yes, I'd love to hear more about the job. Not yes, you're difficult."
I watched as her cheeks turned the brightest shade of red I'd ever seen. I found her embarrassment sexy. I hoped it meant that she felt the same connection I did. I knew it was wrong to keep thinking of her that way since she was working for me, but I didn't think an attraction like that should be ignored.
"I didn't notice a car, how about I give you a ride home while we talk about the position?"
"Sure, if it's no trouble. I don't have a car, I took a cab here."
I held my arm out to her and she wrapped her hand around the crook of my elbow. Feeling her that close to me was enough to drive me crazy, but I had to control myself.
"A cab? I'm sure Tara could have arranged for a driver. You should have told her."
"No, no, I couldn't do that. I didn't even think of that."
"You'll have to in this position. You'll be in charge of everything at the house. Plus I'm going to need you sometimes at the office."
"Oh? Like secretarial work?"
"Something like that," I said before I stopped and turned to face her. "I'll need you to satisfy my every whim."
My face was barely inches from hers. I knew I crossed a line. I was her employer now, but I couldn't help myself. She was nearly impossible to resist with her sweet vanilla scent and soft skin. I could hear her breathing come a bit faster, and I was again reminded of the break room in the art gallery. All I wanted to do was kiss her...but I couldn't.
## Chapter Nine
# Ashley
_His every whim?_
I wanted to know what he meant, but I couldn't think. His lips were so close I could feel his warm breath against my cheek and just the slightest tickle of his beard. I felt something electric between us, something I never felt with anyone else before. I knew whatever he wanted, I would do in a heartbeat. If only he'd just kiss me again.
I tilted my face up towards his. I wanted to feel his firm lips again. I needed to. I could feel the throbbing begin between my legs, and it pulsed so loud I could hear it in my ears. I imagined it urging me on--kiss him, kiss him!
Only seconds had passed, yet it felt like an eternity. I couldn't control myself anymore. I reached up to touch his beard with my hands and found it surprisingly soft. I couldn't help myself any longer, I pulled his face down to me and pressed my lips against his.
In an instant his arms were around me, pressing me tightly to him as he hungrily kissed me. I opened my mouth and felt his tongue move against mine. My head swam until he pulled away.
"I can't. You need the job more than I need..."
His voice trailed off, and I wanted to argue but I knew he was right. This job would help me get back on my feet and hopefully back in school. I had no choice.
I looked down at the gravel. I couldn't speak, there was a lump in my throat. I felt his hand move to the small of my back and guide me towards a large rust-colored barn. He entered a code on a keypad, and the old wooden double doors slid open.
I stopped in my tracks as we entered, glad for the distraction. The barn was an immaculate garage with cars lined up on either side. Aside from the Audi R8 I saw last night, there was also an Aston Martin DBS Ultimate, a Ford GT, a Bentley Mulsanne, and others I didn't recognize. I walked up to the Aston Martin.
"James Bond's car!"
"Yes. I'm surprised." He walked over to me as I jumped into the car.
"Well, my dad and I were close and he loved fast cars. We'd go to the car show every year."
"Nothing more exciting than a fast car. Except maybe a beautiful woman." He winked at me before walking over to the Audi and tapping the roof. "What about this one?"
"I love Audi. That's Iron Man's car." I giggled. I never talked about my love of cars with anyone before, but Xander made me feel so comfortable.
"Yes, it is. Do you want it?"
"What? The car? I'd never be able to afford one."
"I didn't ask that. I asked if you wanted the car."
"Well, yeah, if I had $100,000 lying around, then of course, but--"
"Then it's yours." He pulled the keys out of his pocket and tossed them to me.
"No! You've got to be joking. No, I can't take this." I held the keys in my hand as I looked at the car while trying to convince myself it was okay to take it. "Absolutely not, it's too much."
"You're working for me now. You need to present yourself in a certain manner, and you need a car. This car solves that. I won't take no for an answer. End of discussion."
"But--"
"I am a very busy man, Ashley. Aside from the corporate world, I am also obligated to attend many social functions. You'll be attending those with me." He stepped closer to me and took my hand with the car keys. "I'll arrange for your things to be picked up tomorrow so you can move in. I'll also arrange for a personal shopper to be in touch with you."
I was too lost in his ice-blue eyes to say anything. He pressed the key fob further in my hand, closed my fingers around it, then brought my hand up to his lips and kissed it, letting his lips linger before he sighed softly, his eyes never leaving mine.
_His every whim._
The words echoed in my head. Despite what he said, I knew this could never be just a simple working relationship. I didn't pull my hand away from him. I didn't want to. I needed to feel his touch. Every second felt right with him. I guessed soon enough I'd discover what his every whim was...I already knew I was willing to do it.
## Part Two
# August
## Chapter Ten
# Ashley
Jefferson Manor spoiled me. Normally I couldn't get out of bed before 10am unless I had something to do. Since living at Jefferson Manor, I found myself waking before dawn so I could sip my lemon tea in an overstuffed chair in the circular study. The dome window facing the stables and the back of the house was the perfect place to watch the sunrise.
Several of those early mornings, I heard a car drive away. Tara was right, with the gravel driveway, nothing was silent. I figured she must have had a guest spend the night then leave before anyone got up. She obviously wanted her privacy so I wouldn't say anything about it...no matter how curious I was.
I sank deeper into the large leather chair, my hands wrapped around the hot mug. My light blue robe was cinched around my waist, and I curled up underneath the soft fluffy cotton.
The dome window was almost as wide as the wall but arched along the top. At night, it reflected the interior of the house but during the day, the view was breathtaking. The open view of the stables and the fields for the horses was countered by the dense woods that framed the property.
As I sipped my tea, the sky began to lighten. It turned from a dark navy and continued to a lighter shade until the orange of the sun began to rise up from behind the trees. It was perfect. I loved the sunrise. I didn't want to be anywhere else at that moment.
The house came to life once I hired the people to complete the painting and cleaning. The dust tarps had been removed, and everything functioned as I imagined it did hundreds of years ago when the mansion was first built.
During the day, I made sure everything ran smoothly. I fielded questions and managed deliveries. I made sure that to the staff I hired from the agency Tara recommended, it looked like I was in charge and knew what I was doing. Underneath it all though, I was a bundle of nerves. It was thanks to Tara's help that things had progressed as far as they did.
At night, since Xander had yet to move in, the staff went home and I had Jefferson Manor to myself. I mostly stayed in my suite of rooms or accessed the kitchen through the small private stairway connecting the two. The mansion was built in such a way that it felt smaller and cozy in my suite, which once was called the "staff quarters".
It was by accident that I discovered the amazing sunrise from the study. Shortly after moving in, I woke up in the middle of the night and couldn't fall back asleep. The full moon shone so brightly through the dome windows that when I saw the way it lit the hall, I thought I left a light on in the study.
That night I fell asleep in the overstuffed chair, admiring the moon as I thought about my luck of going from Jackie's couch to the mansion. When I woke, the sun splashed deep orange streaks across the sky. I couldn't imagine a better way to start the day so I began my habit of tea and the sunrise.
My phone beeped to remind me of that day's agenda, and I groaned as I watched the words "personal shopper" flit across the screen. I had been pushing off this appointment. It wasn't that I didn't enjoy shopping at times, I just didn't think I needed help. I could look perfectly presentable if that's what Xander wanted from me.
Unfortunately, I didn't know what Xander wanted. In the weeks I had been working as his House Manager, I hadn't seen him at all. Not only that, but while I felt comfortable in the house alone watching the sunrise, once the house filled, I remembered I was just Ashley. My idea of what was nice and presentable was probably worlds away from Xander's reality.
At exactly 9am, I heard a car pull up on the gravel driveway. I was dressed in one of my nicest outfits, a maxi dress with a burgundy and black print. I cinched a wide black belt around my waist, making the dress look more like two pieces and accentuating my waist, therefore giving me more of an hourglass figure. I managed to convince myself that the personal shopper would be a fun experience, but I wanted to look good for her. I needed her to see I had my own sense of style even though she was hired to help.
I opened the door to find a thin woman hurriedly typing on her Blackberry. Her silver hair was cropped short and she was dressed in a tan pair of khakis with a pale yellow cardigan. I easily imagined her having tea while her husband played golf at the country club. I bit my lip to stop myself from giggling at the image.
"Lydia Winkler. I have a 9am appointment with Ashley Monroe." Her voice had a slight snobby accent and her eyes never left her handheld device as she spoke. "She should be expecting me."
"Yes, come in. I'll take you to the living room." I led her through the foyer and motioned to the couch for her to sit. "Can I get you something to drink?"
"No, I'd like to get to work. Can you go get her?"
"Umm...I am her."
"What? You?" She stood up and shook her head as she looked at me. "No, I'm not doing this. I don't have the time for...this." She eyed me up and down. "I was told you just needed some new clothing."
"Yes, that's it. I need clothing I can wear on business trips and events."
"No. I've been doing this for years. You need much more than that and trust me, you wouldn't be able to afford me." She grabbed her things and headed towards the door.
"But...wait...but I..." I didn't know what to say. I felt so humiliated. I heard her car drive away, and I looked down at myself and wondered what could be so horribly wrong with me to make her react like that.
My eyes began to sting as I fought to keep the tears back. I heard voices coming down the hall and quickly ran up the steps. I couldn't let anyone see me like this. Entering the study, I caught a glimpse of myself in the reflection of the window.
I looked fat. I hated that I did this to myself. Just minutes before I thought I looked great, or at least good, but now I thought differently. I couldn't help but notice my soft jawline and full cheeks.
I ran to my room, feeling like an idiot. What was I doing thinking I could do this job? Sure, managing the day to day of the mansion was one thing, but Xander wanted me to double as his assistant, too. I couldn't do that.
I slumped onto the floor and let my tears fall. There was no use fighting them anymore. I didn't know what it was that woman saw in me that was so awful, but I knew what I saw in the mirror, and that was bad enough.
I threw my shoes across the floor as I got mad for feeling sorry for myself. I wiped away my tears and took a deep breath. I had to do better than this. I couldn't just give up. Old Ashley would have given up.
I splashed some cold water on my face as I calmed down. As I stood over the sink, I replayed in my head how the personal shopper reacted to me and felt worse. I heard an insistent knock at the door and groaned, wishing I could be alone.
"Just a minute," I called out as I dried my face and hoped my voice sounded normal. I figured it had to be one of the new maids I hired. I hoped she didn't catch me crying.
"Don't ever make me wait again," Xander growled as he walked through the door.
I immediately wished I looked better. I knew my eyes were a little swollen from crying, and there was nothing I could do about it.
"Sorry, I wasn't expecting you. You should've told me--"
"Why were you crying?" he interrupted as he walked past me and into the living area of the suite. "Where's that personal shopper? I expected her to be here. My mother has nothing but good things to say about her."
"She..." I could barely speak. I gulped the word down. I hated myself for being so sensitive and emotional, especially in front of Xander. "She left..." I mumbled the rest of my answer, but I felt so overwhelmed by everything, including him, that I didn't know what words came out of my mouth.
"She left?!" He spun around, and his eyes met mine. His fingers reached up and softly caressed my cheek. He didn't say a word, but his face softened. I wondered if he could read my mind.
His hand slipped behind my neck and he pulled me closer. My body melted into his as he pressed his lips to my forehead, and I lost it. I bawled like a baby.
His hand moved down to the middle of my back, and he held me so tight I could feel his heart beating through his strong muscles and against my chest. Inhaling deeply between sobs, I noticed he smelled like spicy citrus.
I buried my face in his suit jacket and tightly clung to its soft wool lapels. I couldn't help myself despite how pathetic crying made me feel. As he stroked my hair, I felt him kiss the top of my head.
"I'm sorry. I should've known better." His voice lost the growl I had grown accustomed to. It was deep, husky, and soothing. "I'll arrange something for you this afternoon. You'll need a new wardrobe for Hawaii."
"Hawaii??"
"Yes, we leave tomorrow."
He walked out of the room, leaving me in shock with my still wet, tear-stained face.
## Chapter Eleven
# Xander
"Wait, sir! You can't just come in here! Where do you think you're going?"
The young maid ran alongside me in her pressed black skirt and top as I entered Jefferson Manor. I was pleased Ashley had hired help and that they were dressed in the appropriate uniforms, but I couldn't understand why this one insisted on getting in my way.
As I drove up to the house, I saw who I believed was Lydia Winkler walk out of the house and get into her car. I knew Ashley didn't want the extravagance of it, so I thought she sent Lydia home.
"This is my house, and I'll do what I damn well please. Get out of the way!" My voice echoed throughout the first floor, and it scared the maid enough that she backed off.
I made my way to Ashley's suite of rooms and pounded on her door. When she opened it, I stormed past her and into the room.
"Don't ever keep me waiting," I growled. She started to say something, but I wasn't listening. I couldn't focus on her voice. I noticed her puffy red eyes and knew something had happened.
"She..." her voice trailed off and I could see the pain in her eyes. "She left. She said she couldn't help me. I'm sure it's because I'm fat," she whispered in a half mumble.
"She left?!"
I turned and looked at her again. I couldn't believe it. How could that idiot woman take a look at Ashley and think she couldn't buy clothing for her? She was gorgeous!
I pulled her close and kissed her forehead before whispering, "It's not you. You're beautiful. You're not fat. Don't ever think like that." She cried harder and didn't respond, so I held her tight while I thought about how to fix things. "I'll arrange something for you this afternoon. You'll need a new wardrobe for Hawaii."
I let her go and walked out of the room. I had avoided Ashley for weeks because I knew I had feelings for her. Something about her had sparked emotions in me I thought I'd never feel again. I wasn't sure I wanted them.
Heading out to my car, I glared at the maid I yelled at before and hid a smile as she turned and headed in the other direction. Once I got into my car, I dialed my oldest friend Josh to arrange shopping for Ashley then dialed Lydia Winkler's number.
"This is Lydia."
She spoke her name with a superior air she didn't deserve. It angered me even more.
"Lydia," I barked. "This is Alexander Boone."
"Oh hello there, Alex, your mother has told me so much about you. I feel like I know you."
"Well, you don't." I paused to make sure she realized I was serious. "You left your appointment early. You didn't help Miss Monroe."
"Your mother didn't tell me what a mess--"
"She is not a mess. Do you have any idea what you did?"
"Well, if you feel that way, I'll just go back when I have some free time."
"You'll be having a lot of free time, but I don't want you anywhere near her. I'm not sure you realize who I am and who I know. You're done. I'll make sure of it."
"But...but--"
I hung up the phone. I didn't care to hear her grovel. I dialed my old friend again.
"Josh, did you clear everything for this afternoon?"
"Yes, of course. I told you it wouldn't be a problem." Josh's voice had a feminine lilt to it. "Send her over whenever she's ready, and I'll take good care of her. I also reached out to some friends of mine about that Lydia woman. I've heard stuff like that about her before. Trust me, she'll never get private clients again. It's about time someone stood up to her."
"Good. I should've called you to begin with. Thanks Josh, I owe you."
I arrived at the downtown offices of Boone Enterprises. There were a lot of things that needed to be taken care of before leaving for Hawaii the next day. I wasn't looking forward to the conference until I told Ashley she was coming.
It had been years since my last trip to Hawaii, and I was sure my old ghosts would be there. I hoped that having Ashley with me would keep them at bay. When I held her earlier, I remembered how she smelled like vanilla and how her lush curves molded against my body. At that moment, it didn't matter that she worked for me, I needed her and would use any reason to have her near me.
## Chapter Twelve
# Ashley
That afternoon, I drove out to Fashion Plaza like Xander told me to. I had driven past the area many times over the years but never thought I'd ever step foot in it, let alone shop there.
Fashion Plaza was an outdoor mall with stores I had only heard about on TV and the movies. It was a mixture of large, world-renowned stores and small, high-end family owned boutiques. From the outside it didn't look like much of anything, but once I stepped through the path leading from the parking lot to the stores, I couldn't believe my eyes.
The walkway opened up into a circle. Stores lined the circumference, and in the center was a five tier marble fountain. Everything was beautiful. Even the sidewalk had an ornate design that made it look like Italian tile.
I followed the sidewalk to the other side of the fountain and into a rectangular section with stores along both sides and an amazing garden running down the center. The garden was sculpted to look like a jungle with topiary animals.
I wasn't looking forward to shopping. I didn't expect the shop I was going to would carry my size. I was accustomed to going to plus-sized stores and dealing with whatever they sold. Going to malls sometimes made me a little sad. Walking past all those stores with their cute clothes in the windows while I settled for whatever "fashion" the couple of plus-sized stores carried was _not_ my idea of fun.
As I passed a giant topiary lion about to pounce on a gazelle, I spotted the store in the corner--Joyeux. Sometimes I got upset while trying on clothes because they didn't fit me right. My hips were a bit larger than my waist so if one thing fit, the other didn't. I took a deep breath and summoned all my courage before opening the shop door.
The shop was very girly yet exotic, decorated in soft pinks, but not in an irritating kind of way. Sheer panels of fabric billowed across the ceiling and down the walls, giving the store a dreamlike feel. Towards the back, tall mirrors curved around a fitting area while the rest of the shop displayed outfits and accessories. Several silk upholstered seats were scattered throughout the boutique.
"Good day! You must be Miss Monroe." The man spoke with an effeminate tone. He was tall and slim yet muscular. His dark brown hair was slicked back from his forehead, and his bright green eyes looked friendly. He was very handsome in his black slacks and open-collared black-striped shirt. "I'm Joshua."
"Hi, yes, Xander Boone told me to come in?" My nerves were getting the best of me. I glanced around the store and noticed I was the only big girl there. "I'm not sure you can help me though."
"Nonsense! A gorgeous girl like you? I love it when a shapely woman like you comes into the shop. It brings me back to when I played with dolls. Remember when Barbie had curves? Maybe you're too young. Come this way, I've got some fabulous things to show you."
I followed Joshua towards the back of the store and realized what I thought was just a fitting area was much larger. There were more of the upholstered seats around this section, creating a small runway. We sat on one of the chairs, and a woman brought over two glasses of champagne. It dawned on me that not even a month ago I carried the champagne, now I was drinking it.
Suddenly, a girl in a pair of white cargo pants and a white t-shirt with a large drawn flower on the front walked down the runway.
"This is a sample of my casual wear. Show her the pants, darling."
The girl bent down and demonstrated how the pants could be turned into cropped pants just by adjusting the ties around the ankle.
"That's really cute, but I can't wear white."
"They come in different colors, but what do you mean? You can wear whatever color you want, darling."
"No...do you have black? That's better for me."
"Black? Of course, but you should think about the white, you'd look fabulous in it! Why don't you think you can wear white?"
I sighed. "Because I'll look like a marshmallow."
He laughed and shook his head. Normally I would've been offended, but I started laughing, too. It was obvious he wasn't laughing at me, just the image I described.
"You're crazy! You know Xander told me you were beautiful, he even said you had the perfect body. He told me about that horrible bitch Lydia." He sighed and rolled his eyes then smiled at me. "Xander never exaggerates. Come, I'll show you." He took my hand and brought me to a dressing room that was filled with clothes. "I hope you don't mind, but I took the liberty of guessing your size based on what Xander told me."
I couldn't believe how many items were in there. I took the white pants off the hanger and looked at the size--16. Did Xander think I was fat? It was exactly my size. I wanted to crawl into a hole and die.
"How did you know my size? You never saw me. He told you I was fat, didn't he?"
"No!" He looked at me like I was crazy. "You want to know what he said? He said 'Josh, remember Mrs. Porter in eighth grade? Sexy like that.' Trust me my dear, you wanted to be Mrs. Porter in eighth grade, all the boys loved her, even me!" He laughed. "No one ever skipped her class."
My cheeks burned. It surprised me that Xander would talk like that about me. I decided to try on the pants even though I still thought I'd look huge.
"So you've known Xander for a long time?"
"Oh yes! He's my brother from another mother! And thank the Lord I don't have his mother. Have you met her?"
"No, but I think she's the one who hired Lydia. Why didn't he just send me here?"
"Well goodness, if you ever see his mother, trust me, run in the other direction! Think of those horror movies where you just yell at the screen 'Run! Run!' That's my advice to you if she ever shows up." He laughed. "My guess is he didn't want us talking about him." He winked.
Looking at myself in the mirror, I was shocked. The pants fit. Not only did they fit, but somehow they made me look more proportionate instead of accentuating my ass.
"See? Perfect! More people should just listen to me." Joshua grinned. "You're free to take all this home with you if you'd rather try them on there. I picked out what I thought would be perfect for Hawaii plus some business attire and this..." From the back of the clothing rack, he pulled out an exquisite sapphire blue chiffon dress. "Put it on."
I quickly changed into the dress. The thin straps rested perfectly on my shoulders and the top hugged my breasts, making my cleavage look especially full and sexy. The dress was empire waisted and flowed as if it was weightless. The hem was cut so that it was shorter in the front than the back, which made me look taller.
"Wow. This dress is gorgeous!"
"You make it gorgeous, my dear. Now go! I'm sure you need time to pack. I'll arrange for everything to be delivered to Jefferson Manor within the hour."
"Thanks so much! This was really great." Tears filled my eyes, but I blinked them away. I was silly to get emotional, but it was such a relief after all my insecurities. I hugged Joshua tight.
"Now you realize I might need you sometime. I design pieces like that dress all the time and never have anyone to wear them." He waved his hand towards some of the shoppers and rolled his eyes. "Look at all those scrawny things out there! Enjoy your trip, darling!"
I left the store wearing the white pants with the floral accented t-shirt and carrying the dress with a few other items I wanted to take home right away. As I walked past the topiaries, I saw a woman with short grey hair go into one shop, then a minute later leave and enter the next shop. She was carrying a portfolio. I slowed my walk and watched as she entered each successive shop but left looking more frustrated each time. She wore a khaki skirt and lavender cardigan sweater that made me think of country club attire. No...it couldn't be!
My first thought was to turn around and run in the other direction. I calculated I had enough time to turn and walk to the other side of the garden and Lydia Winkler, personal shopper, wouldn't see me. But why should I hide from her? I felt great! I looked fabulous, didn't I? I straightened my shoulders and paced myself so she'd run into me as she came out of the next shop.
I was several feet in front of her as she stepped out of the next store. She took one look at me and stood still, her face turning white. I smiled and walked past her. I had no idea why she was suddenly looking to work at a shop, but after how she treated me earlier, I didn't care.
## Chapter Thirteen
# Ashley
I guess I was in a daze during the entire trip to Hawaii because I barely remember the first class seats or the limo from the airport. It must've been my nerves or all that champagne on the plane. One thing I learned about being in Xander's world--there was a lot of champagne.
As soon as the limo pulled up at the Okika Resort, a short husky man dressed in a tan linen suit came running over as he ordered a bellhop and several workers around. He was bald but wiped the top of his head several times as if he was pushing his non-existent hair back. He pounced on Xander as soon as he stepped out of the car.
"We're so pleased to have you back, sir. It's been much too long. I personally arranged for your usual villa as well as some spa time for Mrs.--"
Xander, wearing a pair of dark jeans and a white button-down shirt, shot the man a look before turning back to the limo and helping me out. He didn't seem to notice the bald man wringing his hands together nervously.
Waves crashed nearby as I looked in awe at the beauty of the Okika. Palm trees lined the road and walkways. A gentle breeze rippled through the floral print skirt I wore with a red short-sleeved knit top, all from Joshua. The sun was strong but felt good on my pale skin, which reminded me I needed to get outside more. Although the waves were close enough to hear, I knew I wouldn't get a glimpse of them until I entered the building.
As I stood waiting for Xander, I inhaled deeply. Everything about the resort was exotic, right down to the air, which was an amazing blend of sea water and something sweet. I looked around, trying to place the scent, hoping to find the reason behind that sweetness, but nothing triggered an answer.
"Pineapples," Xander whispered the word into my ear, sending goose bumps up my spine. This wasn't the first time I thought he could read my mind. "In case you were trying to figure out the air. It's the pineapples. They're harvested nearby."
Our luggage made its way inside, and Xander followed it without another word to me. I quickly caught up to him at the front desk where the clerk, a shapely older woman with her hair up in a sleek bun, seemed to recognize Xander.
"Mr. Boone! It's been such a long time. Are you here alone or did you bring--"
"Sandra, please. The room key...now."
I smiled, hearing his familiar growl. I was curious who he normally brought with him on these trips but knew better than to ask him. For all I knew, he had an assistant who had been with him for years before me.
The bellhop pushed a tall brass cart filled with our luggage into an elevator. Xander and I were behind him, but he waved the bellhop off when he went to hold the elevator for us.
"I'd rather be alone with you."
His voice made my heart skip, and I looked down at my new shoes so he wouldn't catch me blushing and grinning like a schoolgirl. As the elevator rose, he pressed a card into my hand.
"Here's your room key. I have to speak at the conference late tomorrow morning, but otherwise I don't plan on attending."
"What if you need me? Is my room far from yours?"
"Tomorrow night is an important dinner you will be attending with me, otherwise, for the rest of your time here you are free to use the spa or whatever else you'd like. Just charge it to the room. If I need you, you won't be far. We're sharing a room."
My eyes widened. I didn't know what to think. Did he think I would just sleep with him? I knew I wouldn't want to say no, but still. Confusion gripped me as I wondered if Xander suddenly didn't care that he was my boss.
The elevator gently slowed and the doors opened to a long white hallway. Our shoes clicked on the tiled floor as we walked all the way to the end of the hall to an open door.
I stood in the doorway in awe. The room was beige with dark wood moldings, very modern and simple yet elegant. It was a combined living area and kitchen with high end appliances and granite countertops. The room was larger than Jackie's entire apartment. I thought hotel rooms like this only existed in magazines. I never dreamed I'd stay in one.
Just past the living area, the bellhop clicked a button, and the sheer floor-to-ceiling panels that covered the back wall began to open. He then clicked another button and the glass panels that made up that wall began to slide back and disappear.
"I believe you prefer to keep the lanai open, correct, sir?"
Xander nodded, barely looking at the view. And what a view it was! Plush outdoor furniture was arranged on the extended tile floor of the lanai. I walked to the iron railing and looked down at the palm trees on the beach below as the waves gently lapped the sand. In the distance, the sun slowly sank into the ocean.
As the bellhop left, he let a woman with a small tray of drinks enter. On her tray were two tall curved glasses with a pinkish liquid topped with umbrellas and pineapples. She placed the tray on the long high counter that separated the kitchen from the living room and left.
Xander grabbed the two drinks and joined me on the lanai. He set the drinks on a small table between two thickly cushioned lounge chairs and sat down.
"Come sit, have a drink. They're Lava Flows. Basically a piña colada with strawberry, the resort's specialty." He paused, looked at me, and smiled. "Beautiful."
I smiled back as I blushed. The sunset was beautiful, but I didn't think that was what he was talking about. I felt a little embarrassed by his attention since I still wasn't used to it, but I loved every second of it.
Sitting down, I picked up one of the drinks and clinked it against his. We sat in silence with our sweet drinks as the sun set. When I set my drink down, my hand brushed against his and I felt that electricity his touch always gave me. He looked over at me for a long moment then his large hand closed around mine and our fingers locked together.
It was a comfortable silence. Neither of us felt the need to talk, we just sat there and enjoyed the gorgeous view. After a long day of traveling, I fell asleep on the lanai, still holding his hand.
## Chapter Fourteen
# Xander
I couldn't imagine a more perfect ending to a long day than watching the Hawaiian sunset with a gorgeous woman. Her hand felt soft and delicate in mine. I noticed her eyes slowly close and let her sleep.
She looked like an angel asleep in the final glow of the sunset. I slowly slipped my hand from hers and considered getting a blanket, but I didn't know how long she would sleep. Instead, I picked her up and carried her to her room.
I stifled a laugh as I thought of her face when I said we were sharing a room. Try as I might, it was impossible for me to not want her. I flirted and hinted but could tell she either didn't realize it or didn't accept it. I knew it was inappropriate because she was my employee, but I didn't really care. It meant I could see her more often, and I always got what I wanted.
As I laid her down on the bed, I thought about the last time I carried a sleeping woman to bed. Darcy. She was never far from my thoughts. Conflicting feelings filled me as I slipped Ashley's shoes off. Maybe I should just keep things professional. Looking at her, I knew I'd never be able to.
I pulled the blankets up around her and quietly left the room. She needed her rest for tomorrow. There was a lot I planned for her to do.
## Chapter Fifteen
# Ashley
The next morning I woke up and looked around the room as I tried to remember where I was. The cream-colored walls and dark wood furniture were unfamiliar. Xander must have carried me to bed. I rubbed my hand on the soft mint green down comforter and checked to see that I was completely dressed except for my shoes.
I couldn't help but laugh. I thought about how I misunderstood what he meant by sharing a room and how I was fully dressed in bed after a day of drinking a little too much. He didn't even try anything.
I ran my fingers through my long brown hair, untangling it as I entered the living room. On the counter was a note from Xander.
* * *
_Ashley,_
_Last minute changes were made to the slide deck I'm presenting this morning. They should be complete by the time you're up. Download it to this flash drive and bring it to the main conference room ASAP._
_Xander_
* * *
Opening the laptop he gave me, I saw the file had been emailed to me. I saved it to the flash drive and headed down to the conference room, completely aware I was still in yesterday's clothing. I knew better than to keep him waiting.
In the large conference room, Xander was speaking to an attractive man with dirty blond hair. He wore a navy blue pinstriped suit and hadn't noticed me enter yet. I was positive he could wear anything and it would fit him perfectly. The fabric of his suit jacket stretched across his broad shoulders, and I thought back to when he was just the grumpy jerk at the art gallery, a stranger to me.
"Why are you smiling?" he asked.
I giggled as I walked up to him and handed him the flash drive. I was suddenly aware that my hair was disheveled and I didn't have any make-up on. I smoothed down my hair and wrinkled clothes, embarrassed.
"No reason, just thinking."
"You'll be thinking a lot today. I have a lot of errands and meetings for you to take care of for me while I'm here." He handed me a card with the name Shelley Isuzu printed on it. "Bring this to the front desk now, they'll direct you to her."
"I need to change first."
"There's no time. You're representing me, you will not be late. Go!" he growled. I knew what that meant.
I rushed out of the conference room, chewing my lip nervously. I didn't bring the laptop, I didn't have anything with me. I wasn't even sure what these meetings would be about. I vowed to try my hardest to not make a fool of myself.
Sandra, the same woman from the day before, was at the front desk.
"Hello Miss Monroe, I believe you're here for Shelley?"
"Yes, I am. I hope I'm not late."
"No, you're fine, dear." She smiled softly and patted my hand.
She snapped her fingers, and an older woman with jet-black hair pulled back into a ponytail stepped forward. She was dressed in a simple white uniform with white shoes that looked almost medical. She smiled and wrapped her arm around my shoulders in a motherly kind of way.
"Come with me, my dear. We'll start with a facial, then I'll give you a full body massage. I'll finish with a pedicure, hair cut, and make-up application if that's all right. I know you have a special dinner tonight!"
"Special dinner? No, you must be thinking of someone else. All of this can't be for me. I have a business dinner to attend tonight, that's all."
"You're Ashley Monroe, right? Then these appointments are for you. Someone really wants you to be pampered. Enjoy it."
## Chapter Sixteen
# Xander
I glanced at my watch as people filed out of the conference room. Right about now, she was probably finishing up with the massage. I smiled to myself, imagining how surprised she was.
Walking back to the elevator, I stopped by the front desk as I thought of another way to surprise her. As I approached the front desk, Sandra turned and smiled.
"She was so surprised!" She giggled like a woman much younger than her years.
"I want to do something else. Can you arrange for flowers and candlelight after dinner?"
"In your room? Of course! I'll take care of it myself. I just checked on your other arrangements, and everything is absolutely perfect. Shelley will bring her back to the room once she's done. Will you be escorting her to dinner yourself?"
"Yes, I have a gift I'm going to give her before we head to dinner. I'm sure she's confused right now, and I'd like to keep that going." I grinned at her, excited for the evening.
As I got back to the room, my phone began ringing. I sighed when I saw it was my mother calling. It was just like her to call when she knew I was away. She was a selfish woman who was never there for me and only appeared when she felt neglected, which was often.
"Hello, Mother."
"Xander, you must come home right now. I'm in the hospital."
"What?! Are you okay? What happened?"
"I had a stroke in the office, so I drove myself to the hospital. I need you to come home."
None of what she said made sense to me. If she had a stroke, then how did she drive herself to the hospital? How was she talking to me on her cell phone in a hospital? Anger coursed through my veins.
"Fine. I'll leave in the morning. What did the doctor say?"
"Nothing really, they're already discharging me. I guess I'll have to drive myself home. Talk to you tomorrow, Alex."
She hung up as I growled at the phone with clenched teeth. It was impossible for her to have a stroke and get sent home so quickly. I felt torn. I wanted to enjoy the rest of my time in Hawaii with Ashley, but I needed to check on my mother. Guilt consumed me when I thought of staying for the rest of the trip. We would have to head home in the morning.
I entered Ashley's room and opened the closet door. I knew about the sapphire blue dress Joshua designed especially for her and wanted her to wear it. As I hung the dress's hanger over the door, all the anger and frustration caused by my mother's call vanished. Ashley was the only woman on my mind.
Patting the gift I had for her in my pocket, I entered my room to change. Joshua designed a necktie in the exact shade of blue as Ashley's dress, and I planned to wear that with a light grey suit. I quietly closed my door as I heard the main room door open. Ashley was back from her spa day, and I was ready to give her the night of her life.
## Chapter Seventeen
# Ashley
I couldn't believe what an amazing day I had. Never in my wildest dreams did I imagine I would ever have a spa day like that. My skin felt beautiful, my hair felt beautiful, _I_ felt beautiful.
My hair was down in soft waves around my shoulders, and the make-up Shelley applied flawlessly was natural yet somehow she made my round cheeks look higher. She even talked me into wearing a peachy color on my lips when I rarely wore lipstick. I always thought lipstick made my full lips look even larger, but somehow she made it work for me.
I entered our room and quickly looked around as I wondered where Xander was until I saw his closed bedroom door. I still had no idea what to expect. Shelley seemed to think I had a hot date, but Xander called it a business dinner so I didn't want to get my hopes up.
Walking into my room, I saw the sapphire blue dress hanging from the door. I slipped out of the heavy white robe the spa gave me and into a purple strapless bra I knew would work perfectly with the dress along with a matching silk thong.
The dress slipped on easily, perfectly, as if it was made just for me, but I was sure it couldn't have been. I looked at myself in the full-length mirror in the bathroom and spun around. I felt like a princess!
Entering the living room, I saw Xander waiting for me, looking as handsome as ever. His eyes traveled up and down my body, and he smiled bigger than I'd ever seen him smile before.
"You're stunning! How is it you keep getting more beautiful?"
"Stop!" I giggled and twirled for him.
"Joshua sure pulled out all the stops this time. That dress is amazing. You look amazing."
"Mrs. Porter in the eighth grade amazing?"
His laugh consumed his entire body and he shook his head. "I can't believe he told you about that. That man cannot keep his mouth shut. I'm afraid to find out what else he told you."
I grinned and shrugged, not wanting to give anything away. Xander slid his arm around my waist and we left the room. In awe of being on the arm of such a gorgeous man, I let him lead me without question. He didn't tell me where we were going or what to expect, but we walked out of the hotel and towards the beach.
It was dark since the sun had already set. Tiki torches lit the flagstone walkway. Lush greenery and palm trees surrounded the area, and just beyond them was the beach. The salt air mixed with the smoke of the burning torches, and the hypnotic sound of waves crashing on the shore filled the night.
The trees became sparse as the beach surrounded the path. Xander took my hand, and I felt the usual excitement of having him near me. Up ahead, I saw a large white cabana with hanging twinkle lights that looked like stars.
When we reached the cabana, Xander led me to the only table which was set at the edge of the flagstones, just before the sand. The white crested waves crashed onto the beach not far from where we sat.
"I thought you said this was a business dinner."
"I lied." He grinned from ear to ear. "I hope you enjoy everything tonight."
"Today has been the most amazing day. Thank you so much, I just don't understand why you're doing all of this."
"I don't need a reason. Just enjoy it."
Out of the blue, a man yelled and a drum sounded. I turned and on the beach, a group of five men stood in line. Each man wore a short floral sarong around his waist, a shell choker, and had what I guessed were dried palm leaves tied to hang down his calves.
The men took turns, each performing his own fire dance to the strong rhythmic pounding of the drums. The flames seemed to kiss their skin as they threw flaming batons into the air and caught them behind their backs and even with their feet. Each dancer seemed determined to one-up the last. The entire display was breathtaking.
"I meant to give this to you earlier, but I got distracted." Xander pulled a small square box wrapped in glossy black paper with a small red ribbon from his jacket pocket and slid it across the table to me. "Open it. I was hoping you'd wear it tonight."
I took the ribbon off and slid my finger under the tape of the glossy wrapping paper. When I lifted the lid of the wrapped box, it revealed a navy suede box inside. I lifted the hinged top and gasped when I saw a platinum necklace with a single large, round sapphire surrounded by a halo of tiny diamonds.
"It's beautiful, but I can't accept this."
"Yes you can, and you will. You don't have a choice. Let me put it on you."
He removed the necklace from the box and fastened it around my neck. His hands then slid onto my shoulders. Electricity ran up my spine from his strong hands on my bare skin. My mind went back to the break room at the art gallery, causing a warm feeling to course through my body. I wanted him badly.
As if he sensed his power over me, he kissed my neck, just over the thin chain of the necklace. I let out a long sigh of pleasure from that simple gesture of his, which awakened every pore on my body.
The rest of the meal was a blur. I spent most of it lost in his ice-blue eyes as we talked about anything that came to mind. After dessert, we could hear jazz music coming from one of the hotel lounges, and he took my hand and we danced in the light of the twinkle lights and tiki torches.
It was the end of a perfect day, but I wasn't ready for it, I needed more. I was intoxicated by the combination of his spicy musk and his muscular body pressed against me as we swayed to the music. Although it wasn't like me, I was prepared to make the first move. I couldn't control myself around him.
I lifted my head to kiss him, and our lips met as his head came down. His hands immediately went up to my face and into my hair. My legs felt like jelly and I was sure I was floating. His lips moved between and over mine as hungrily as I kissed him.
"Do you want to go back to the room?" His voice was gruff yet soft.
I couldn't speak. My breathing was shallow with excitement and longing. I nodded, and he grabbed my hand and we ran back down the long walkway and to the elevator.
Once alone in the elevator, he bit my earlobe softly then moved his lips slowly down my neck to my collarbone. I unknotted his tie then began unbuttoning his shirt. His chest was sculpted as perfectly as I imagined when I felt it pressed against me. I was about to slide my hands over his pecs when the elevator slowed and the doors opened.
Xander scooped me up into his arms, his long gait making easy work of the long hallway. He opened the door of the villa, and I was shocked to see candles and flowers throughout the living room. He was about to set me down, but I didn't want that.
"No, your bedroom. Go!"
He laughed, hearing me order him around, and carried me into his bedroom before placing me on the king size bed. I pushed open his shirt, and he quickly removed his jacket and shirt.
Finally sliding my hands over his chiseled torso, I let my fingers trace each muscle of his abs before I unbuckled his belt and opened his pants. I wanted to touch him so badly! He stepped out of his pants and I reached for the bulge in his charcoal grey boxer shorts, but he stepped back.
"No, I get what I want, remember?"
The simple sound of his voice was enough to get me more excited. He ran his hand over my hair, held the back of my head, and kissed me. First his lips lingered on mine then they pushed my lips open as his tongue slid against my tongue.
I ran my hands over his beard and through his shaggy hair before they continued down to his shoulders and arms, enjoying the strong feel of his muscles under my fingertips. His lips left mine as he kissed my chin, then the soft spot at the front of my neck where my collarbone met.
His hands traveled roughly over the dress, squeezing my heavy breasts together as he kissed my cleavage. He sat up and gazed at me for a moment before getting off the bed. Taking my foot, he slipped off my sandals then kissed my instep. I giggled uncontrollably.
"Stop! That tickles!"
He laughed then kissed my ankle and my shin as his hands slid up my legs and under the dress. When his hands reached my panties, he curled his fingers around the silky material at my hips and yanked them down and off me. He had me so turned on, I knew my panties were wet.
His lips worked their way up my leg and once he reached my thighs, he pushed them apart and I was suddenly aware of how bare I was underneath my dress without my panties, and it turned me on even more.
As he slid the dress up further, his lips covered the soft sensitive skin of my inner thighs. His hand slid across my trimmed mound, making my breaths come even faster. He nipped the uppermost part of my thigh and I jumped, then he slowly sucked the spot, making the sensation even more intense.
His lips inched up further and as I resigned myself to the torture of anticipation, he slid his tongue between my lower lips and over my clit. Instinctively, my fingers ran through his hair. Using the flat part of his tongue, he teased me with long slow licks, followed by flicks with the tip of his tongue.
Clutching the pillow my head rested on, I moaned and spread my legs further. No longer was I self-conscious of my size; my only thought was the things this sexy man was doing to me and how I wanted more.
He twirled his tongue in small circles as my hips slowly rocked. The pressure was building quickly inside of me and a tingling sensation traveled up my spine. The pulsing between my legs turned into a throb. I wanted more than just his tongue and couldn't believe the thought even entered my head.
He slipped a finger around the outside of my entrance before sliding it inside of me. I couldn't take it much longer, and somehow he knew.
As I let out another moan, his tongue worked its magical circles over my clit and his finger moved quicker as it thrust into me. He pressed his lips around my swollen clit and sucked as I arched my back just a bit and the tingling reached my neck. The pressure within me released and I cried out.
Drawing in a quick breath, I squeezed the pillow tight between my hands as my orgasm overtook me. I writhed while he continued rubbing my clit with his tongue until I was exhausted as I tried to catch my breath, and he moved up beside me.
I reached for his face and kissed him deeply as he wrapped his arms around me. Slowly, I moved my hand down over his tight body to his underwear. I easily slipped my hand into the elastic waistband and over his large member but was stopped by his hand over mine.
"You have no idea how much I want to make love to you, but we can't," he whispered into my ear.
"Why not?"
"Because I want to explore every inch of your body. I don't want to be rushed. I want to take my time and for us both to enjoy every second of it."
He kissed me, and it took all my willpower to not jump on top of him.
"We've got plenty of time though. You said you're not working tomorrow."
"Unfortunately, our plans have changed. We have to head home in the morning."
I sighed as I laid my head down on his chest. He kissed the top of my head and squeezed me tight against him, reminding me of the contrast between his muscular body and my soft one. Through the cracks of the window shades, I could see the sun begin to rise. This was the first sunrise I wished hadn't come at all.
## Chapter Eighteen
# Xander
Being back from Hawaii meant it was time to move back into the house. While I looked forward to being that close to Ashley, I knew it wasn't a good idea. Things got out of hand in Hawaii. I should've known better than to bring her, but I couldn't help myself. She brought out feelings I never thought I'd feel again. She deserved to have a special day, but I didn't plan for it to end the way it did.
I thought about her soft skin and her luscious curves. The way she always smelled like vanilla even though she didn't wear perfume. I felt myself falling for her but I had to stop, especially if I was moving into that house.
Ashley wasn't the reason I hadn't moved back in yet. Setting the house up, hiring staff, all those things amounted to the fact that I wasn't ready to be in the house. The closest I'd come to moving in was living in the cottage by the stables.
Walking towards the door, I grabbed my suit jacket from the hanger which hung on a wall hook. The cottage was only three rooms, but it was all I needed. On the outside, I played the role I needed to play--the billionaire--but inside I knew what was really important.
The cottage had handcrafted wood mission-style furniture. A small couch faced the fireplace, which was the only source of heat. The home was so small, it didn't need much more. Beyond the couch was the kitchen with a small dinette table. The only doorway led to the bedroom and bath.
The entire cottage was smaller than the master bedroom at Jefferson Manor. I grimaced when I thought about that room. It hadn't been touched. Everything was still in place as it had been three years ago when I moved to downtown Canyon Cove. Maybe it was time to face it and the demons that existed in there, but I wasn't ready.
Too much pain existed in that room. Too many memories. I began to think that having the entire room redone might be a good idea. Get a fresh start. I wondered what Ashley could do to that room.
The vibration of the phone in my pocket yanked me out of my reverie. I let out a long sigh when I read the caller ID.
"Good morning, Mother."
"Good morning? That's it? You've been home from Hawaii for two days and don't think of calling your mother? Who raised you?!"
"Actually, the nanny did, Mother."
"Ha! The nanny. That Miss Amy," she scoffed. "Who taught you how to read? I didn't have to do that, you know."
"No you didn't, Mother. You could've let Miss Amy teach me, but you wanted someone to read to you. How many nights did I have to stay up or miss school to read to you while you soaked in your tub?"
"You've always been ungrateful. You're only talking to me like this because of that floozy you hired. I've seen it all before, you know. Heard you took this one to Hawaii. Shame you had to rush back, but you didn't even check on me! I was in the hospital with a stroke!"
"She's not a floozy. I don't know what you're talking about. Yes, she came to Hawaii with me. She's my assistant, and I was speaking at a conference. And as for your stroke, I went to the hospital as soon as I got back. They told me you were fine and had been discharged within an hour, which is what you told me on the phone. Nothing's wrong with you. That shouldn't have surprised me."
"Well, you've had other assistants and have never taken them to Hawaii. Do I need to meet this one?"
"This is different--"
"Oh really?" she interrupted. "I seem to remember another one of these assistants of yours that you took to Hawaii. How do you think she would feel about it? Did you think about that? How would Darcy feel about you taking someone to Hawaii?"
I seethed with anger. My mother knew all the buttons to press. She always did things like this whenever she felt she wasn't getting enough attention.
"Mother! Don't go there. Don't you dare talk about Darcy. You didn't even like her."
"It would break her heart if she knew. You know that. I didn't raise you like that."
"You didn't raise me, remember?"
I hung up the phone and punched the wall. I couldn't speak to her anymore. I didn't want to hear it, but she was right. How could I do this to Darcy? If she knew, she'd be crushed. Hawaii was our place.
I dialed a number as I walked to my car.
"This is Xander Boone. I need a bouquet of two dozen red roses with Hawaiian orchids. Deliver it to the usual address. Yes, 2674 Wordsworth Drive. Thank you."
## Chapter Nineteen
# Ashley
While I loved my new life at Jefferson Manor, I missed Hawaii. The day Xander planned for me was the most incredible day of my life. It was something I would never forget. I hoped we would spend more time together during the rest of the trip, but it simply wasn't meant to be.
We had been back two days and I hadn't seen or heard from him. I knew something important happened that he needed to come back for, but I ached not knowing what was going on. It made me wonder if that night we had in Hawaii was a mistake.
I wandered down the hall of the second floor as I tried to find something to get my mind off of him and ended up in front of the double doors of the master bedroom. Xander planned to move in soon, and I didn't even know what the condition of that room was. Tara told me to stay out, and I did.
It was clear that the nights Xander spent at The Manor, he stayed in the cottage. I realized it was his car I heard coming and going and that I had seen him leave in the mornings since the cottage was visible from the study. I wondered why he didn't sleep in the master bedroom.
Curiosity got the better of me. I paced the floor in front of the double doors at the end of the hall. No one was around. The staff was busy downstairs. I put my hand on the doorknob and turned it. It was unlocked.
As I entered the master bedroom, I saw a completely decorated room. Nothing was covered. It still looked lived in. The antique furniture matched the rest of the house and was adorned with ice-blue accents not much different from the color of Xander's eyes. I pushed back the drapes and watched the dust dance in the sun's rays as they streamed through the windows.
I didn't know what to think. I couldn't see what was so wrong with the bedroom that he would rather stay in a tiny cottage.
In the corner of the room, I spotted a vanity. Why would a man have a vanity in his room? Only one thing sat on the vanity, a picture frame. A wedding picture. Picking it up, I immediately recognized Xander from the old photo in the newspaper. He was clean cut and smiling. The bride next to him was stunning in her antique white lace gown, with loose brown curls around her face.
The frame slipped from my hand as realization dawned on me. He's married? It didn't make sense. Where was she? I went to the closets and saw they were filled with clothing, some with Joshua's label. Joshua didn't mention anything, but...
I suddenly remembered when we arrived at the hotel. The front desk clerk had asked if someone was with him. Did she mean his wife? When the hotel manager ran over, I swore he said "Mrs.", but I thought he assumed we were married. Now I realized he meant Xander's wife.
My heart sank in my chest then ached. How could I think I was special to him? All this time I thought what stopped our relationship was the fact that he was my boss.
The ringing of my cell phone shook me from my pain.
"Yes?"
"Hi, I was looking for Xander Boone, and his office gave me this number. He placed an order for flowers and I'm leaving now to deliver them. Can I verify the address with you?"
"Yes, you can. What address did he give you?" I didn't know what came over me, but I needed that address. I was going to confront him for making a fool of me.
"2674 Wordsworth Drive?"
"Yes, that's the correct address."
"Thank you, the flowers should be there in a few minutes."
I didn't know that address or where it was, I just knew I had to go there. I ran out of the house and got into the Audi Xander gave me. I entered the address on the GPS and left.
The address took me further out of town than I expected. The old roads were bumpy and tree-lined. A few times I thought I had taken the wrong turn but continued following the GPS's directions. Finally I made the last turn on the instructions--entering a cemetery.
"You have arrived at your destination. 2674 Wordsworth Drive."
"You stupid piece of shit!" I yelled at the GPS as I smacked the dashboard.
A car came up behind me so I had to drive forward to get out of its way. As I drove, I thought I spotted Xander's Aston Martin. I pulled up behind the car, recognizing the license plate--Boone2.
Getting out of the car, I looked over and saw Xander holding a bouquet of flowers as he stood, head lowered, in front of a gravestone. As I got closer, I read the stone: Darcy Boone - Beloved Wife.
My heart went out to him. The year on the gravestone said she died only three years ago. Everything made sense now. I knew Xander well enough to know he was sensitive. His mean front was just to protect himself from others. I reached up and put my hand on his shoulder.
"She was everything to me. Part of me died with her. I never thought I would feel for someone even an ounce of what I felt for her." He cleared his throat. "Until I met you."
He didn't move. He stared at the grave. I didn't know what to say. Everything I thought of seemed weak and trite. I rubbed his back softly, hoping he understood my own feelings for him.
"I can't do this. You have no idea how much I want to. How madly I'm falling for you." He turned and glanced at me then looked back at the grave. "I can't go through this again. I can't lose another piece of myself, and it will happen. I can't be around you. This has to end."
"What are you saying? I don't understand."
"You're fired. I can't be near you anymore. The more I see you, the longer I'm with you, the more I'm falling in love with you. I can't do that. I'm better off alone. I can't take the pain of losing someone again."
"But--"
"No." He shook his head, still lowered towards the grave. "Remember the man I was speaking to in the conference room when you brought me the flash drive? I didn't introduce you to him, but he's an old buddy of mine. We started our companies around the same time. He needs an assistant and said he would love to hire you. I know you need a job, so I arranged that for you if you want it. Also, I know you have no place to go so you can stay at The Manor until you find your own place. I'll stay at my condo in the city."
"But Xander, don't I get a say in this?"
"No. This is how it needs to be."
He pulled a single red rose from the giant rose and orchid bouquet. Holding onto the one flower, he gently placed the bouquet on the gravestone before touching it delicately. He then turned around and quickly grabbed me, holding me tight, his lips pressed against my forehead.
The tears streamed down my face uncontrollably. I wrapped my arms around him tightly as I sobbed. Being in his arms always felt so right, and now it was over. I didn't want to let go but I knew I didn't have a choice.
He stepped back, breaking my hold on him. Gently wiping the tears from my cheek with his thumb, he softly kissed my lips then handed me the single rose before walking to his car.
I stood frozen to the ground as I watched his car head out of the cemetery. I didn't care about the job or where I would live, I only cared about losing him. He might have been falling for me, but I had already fallen for him--hard. A part of me left when he did, and I knew I would do anything to get that part back.
* * *
_Thank you for reading His Every Whim! I hope you enjoyed it._
_To read the rest of Xander and Ashley's story at a_ ** _discount_** _, pick up_ _The Billionaire's Whim_ _novel which contains all four parts of the His Every Whim serial._
* * *
# Also by Liliana Rhodes
**_Get news about my new and upcoming releases as well as special offers by signing up for_**
_Liliana's Email Newsletter_
* * *
_For more about me and my books, visit my website at_ _LilianaRhodes.com_
* * *
**_Find Liliana Rhodes on Kobo_**
* * *
**_His Every Whim Series_**
_Billionaire Romance_
His Every Whim, Part 1 (FREE!)
His One Desire, Part 2
His Simple Wish, Part 3
His True Fortune, Part 4
The Billionaire's Whim - Boxed Set
* * *
**_Canyon Cove_**
_Standalone Novels_
Playing Games
No Regrets
Second Chance
Hearts Collide
Perfect Together
* * *
**_Made Man Trilogy_**
_Mafia Romance_
Soldier (FREE!)
Capo
Boss
Dante (Boxed Set)
* * *
**_Gambino Family Novels_**
_Mafia Romance_
Sonny
* * *
**_The Crane Curse Trilogy_**
_Paranormal Romance_
Charming the Alpha
Resisting the Alpha
Needing the Alpha
The Crane Curse Trilogy Boxed Set
* * *
_Paranormal Romance_
Wolf at Her Door (Shifter)
His Immortal Kiss (Vampire)
* * *
**_Writing as Veronica Daye_**
_Steamy, Forbidden Romance_
Sinned
Tease
Temptation
Stepbrother Bad Boy
Bad Boy
# About the Author
Liliana Rhodes is a New York Times and USA Today Bestselling Author who writes Contemporary and Paranormal Romance. Blessed with an overactive imagination, she is always writing and plotting her next stories. She enjoys movies, reading, photography, listening to music, and spending time with her son. After growing up in the Northeast, Liliana now lives in the Southeast with her son, two very spoiled dogs, and a parrot and a fish who are plotting to take over the world.
For more about Liliana Rhodes
* @Liliana_Rhodes
* AuthorLilianaRhodes
www.LilianaRhodes.com
Liliana@LilianaRhodes.com
# Sign Up For FREE Books
**When you sign up for my email newsletter, you'll receive _A TASTE OF DESIRE_, my 4-book series starter boxed set.**
Grab A Taste of Desire by clicking here or on the image above.
### Contents
1. Title Page
2. Copyright
3. Get in Touch with Liliana Rhodes
4. About His Every Whim
5. His Every Whim
1. July
2. 1. Ashley
3. 2. Xander
4. 3. Ashley
5. 4. Xander
6. 5. Ashley
7. 6. Xander
8. 7. Ashley
9. 8. Xander
10. 9. Ashley
11. August
12. 10. Ashley
13. 11. Xander
14. 12. Ashley
15. 13. Ashley
16. 14. Xander
17. 15. Ashley
18. 16. Xander
19. 17. Ashley
20. 18. Xander
21. 19. Ashley
6. Also by Liliana Rhodes
7. About the Author
8. Sign Up For FREE Books
\section{Introduction}
The recently discovered iron-pnictide high temperature superconductors\cite{Kamihara} have several features in common with the much studied cuprate
superconductors. In addition to their overall quasi-two
dimensional nature such features include commensurate antiferromagnetism in close proximity to or even coexisting with superconductivity.
Intriguingly, there is also a common tendency for breaking of the fourfold rotational symmetry of the crystal\cite{Zhao} which in the case of the cuprates is most
likely an intrinsic
electronic property related to the strong correlations.\cite{Kivelson}
For the pnictides (as well as in some cases for the cuprates) there is at low doping a
crystal symmetry breaking from tetragonal to orthorhombic which is closely connected to an accompanying spin density wave
(SDW) order.\cite{Dong,Cruz}
This interplay between atomic and electronic degrees of freedom naturally leads to the question: What is the role of phonons for the electronic properties of these systems?
Focusing on the pnictides, results from density functional theory (DFT)\cite{Singh,Boeri} are credible as they
give an electronic spectral distribution consistent with photoemission experiments,\cite{ARPES} but they also find that the electron-phonon coupling is much too weak to explain the
high T$_c$.\cite{Boeri} Challenging these findings was the recent report of
an isotope shift of T$_c$ (and T$_{SDW}$) when substituting $^{56}$Fe with $^{54}$Fe with an exponent of $\alpha=-\frac{d\ln T_c}{d\ln m}=0.4$
close to the BCS value of $0.5$ for a pure
iron mode.\cite{Liu}
The mechanism for superconductivity in the pnictides is a major unresolved issue and it is clearly essential to understand better the lattice-electron interplay.
In this paper we measure and model one particular Raman active phonon which lives primarily on the Fe atoms and is thus common to all iron-pnictide superconductors.
The purpose is to study the anharmonic structure of the Fe-As plane with two main objectives: First, to investigate whether any non-phonon
contributions are necessary to describe the temperature dependence of
the phonon energy and lifetime. Second, to estimate the magnitude of the lattice expansion that follows from substitution with a lighter mass and the corresponding changes
in electronic hopping integrals, thus investigating the plausibility of a non-phonon related isotope effect in the pnictides.\cite{Fisher,Chakravarty}
In brief, we model the harmonic spectrum of the As-Fe plane and use standard Green's function methods to calculate the self energy of the Fe B$_{1g}$ Raman phonon
to second order in the cubic anharmonic Fe-As interatomic coupling. The harmonic spectrum is based on a minimal parameter fit to spectra derived within the
local density approximation (LDA) of DFT but the
calculation of the temperature dependent contribution from phonon-phonon interactions is beyond the capabilities of that method.
We present Raman results
on (Nd,Ce)FeAsO$_{1-x}$F$_x$ and estimate the anharmonic coupling strength by fitting our model calculations to the measured temperature dependent energy shift.
Also the width agrees well with Raman measurements on CaFe$_2$As$_2$\cite{Choi}
thus effectively ruling out any significant electronic
contribution to the broadening. Based on the magnitude of the anharmonic coupling we estimate the isotope ($^{56}$Fe$\rightarrow$$^{54}$Fe)
shift of the lattice parameters
to $\lesssim 2\cdot 10^{-4}$ and calculate a similar relative decrease of interatomic hopping integrals. Harmonic zero point fluctuations give an even smaller
isotope shift of hopping integrals. In weak coupling theory correspondingly small changes in the electronic density of states (DOS)
are expected to give an isotope exponent $\alpha\sim 10^{-2}$.
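The quoted order of magnitude for $\alpha$ can be checked with a short back-of-the-envelope script. This is a sketch only: it assumes a weak-coupling form $T_c\propto\exp[-1/N(0)V]$ and a representative coupling $\lambda=N(0)V=0.3$, neither of which is specified in the text; only the $\sim 2\cdot 10^{-4}$ relative shift is taken from the estimate above.

```python
import math

# Back-of-the-envelope isotope exponent from a DOS change alone.
# Assumptions (not from the text): Tc ~ exp(-1/lam) with lam = N(0)V = 0.3,
# and |d ln N(0)| set equal to the ~2e-4 relative shift of the hopping
# integrals estimated in the text for 56Fe -> 54Fe substitution.
lam = 0.3                          # assumed dimensionless coupling N(0)V
dlnN = 2e-4                        # relative DOS change (text's estimate)
dlnm = abs(math.log(54.0 / 56.0))  # relative mass change of the substitution

# From Tc ~ exp(-1/lam): |alpha| = |d ln Tc / d ln m| = (1/lam) |d ln N / d ln m|
alpha = dlnN / (lam * dlnm)
print(f"|alpha| ~ {alpha:.3f}")    # of order 1e-2
```

With these assumed inputs the exponent comes out of order $10^{-2}$, consistent with the statement above; a different assumed $\lambda$ rescales it as $1/\lambda$.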
\begin{figure}
\includegraphics[width=8cm]{pnfig01.pdf}
\caption{\label{spectra_Nd} (Color online) Temperature dependent Raman spectra for NdFeAsO$_{1-x}$F$_{x}$ ($x=.12$) showing three
Raman active phonon modes labeled by their symmetry and main atomic displacement.}
\end{figure}
\begin{figure}[b]
\includegraphics[width=8cm]{pnfig1.pdf}
\caption{\label{spectra_Ce} (Color online) Raman spectra for CeFeAsO$_{1-x}$F$_{x}$ ($x=.16$).}
\end{figure}
\section{Raman spectra}
Raman spectra between $100\rm{cm}^{-1}$ and $400 \rm{cm}^{-1}$ were collected at temperatures ranging from $20$K to $300$K for polycrystalline samples of
CeFeAsO$_{1-x}$F$_{x}$ ($x=0.16$) and NdFeAsO$_{1-x}$F$_{x}$ ($x=0.12$).\cite{note_undoped} For sample preparation and
characterization see Chen et al.\cite{samples}
All spectra were recorded using a Dilor-XY800 spectrometer in double subtractive mode.
In all scans the 514.5 nm line from an Ar$^+$ laser with a power of less than 1 mW was focused onto the samples with a spot size of less than 2 $\mu$m. The samples were installed in an LHe-cooled cryostat.
We observe three Raman active modes with energies around 170, 210 and 220 cm$^{-1}$, see Fig. \ref{spectra_Nd} and Fig.
\ref{spectra_Ce}, in agreement with other
Raman studies,\cite{Raman}
that are identified as (Ce/Nd)-A$_{1g}$, As-A$_{1g}$, and Fe-B$_{1g}$ modes respectively. Also in agreement with earlier studies
we found no effect on the phonon energies from crossing into the superconducting phase (T$_c=$35K/45K for Ce/Nd).
As discussed in the introduction we will be interested in analysing the temperature dependence of the energy as
extracted in Fig. \ref{spectra_shifts} and for modeling purposes we consider only the Fe-B$_{1g}$ ($x^2-y^2$) mode which lives primarily
in the Fe-As plane.
\begin{figure}[ht]
\includegraphics[width=8cm]{pnfig02.pdf}
\caption{\label{spectra_shifts} (Color online) Temperature dependence of the phonon peak position for
NdFeAsO$_{0.88}$F$_{0.12}$ (red) as extracted from Fig. \ref{spectra_Nd} and correspondingly for
CeFeAsO$_{0.84}$F$_{0.16}$ (blue) from Fig. \ref{spectra_Ce}.}
\end{figure}
\section{Modeling}
The temperature dependence of the lifetime and energy of a phonon
is ordinarily due to phonon-phonon interactions that arise from anharmonic interatomic
potentials. In general the cubic anharmonicity is the dominant
term\cite{Calandra} and we consider only this.
The Raman ($\vec{q}=0$) intensity for Stokes scattering for the mode $j$ at frequency $\omega$ is given by
$I_S(j,\omega)\propto -(1+n(\omega)) Im {\cal D}_{ret}(\vec{q} =0,j,\omega)$ \cite{Hayes} where ${\cal D}_{ret}$ is the retarded phonon Green's function
which to linear order in the self energy $\Pi=\Delta-i\Gamma$ (minus sign by convention) gives
\begin{equation}
I_S(j,\omega)\propto \frac{(1+n(\omega))\Gamma(\vec{0},j,\omega)}{[\omega_0(\vec{0},j)+\Delta(\vec{0},j,\omega)-\omega]^2+\Gamma^2(\vec{0},j,\omega)}
\end{equation}
where $n(\omega)=(e^{\hbar\omega/k_BT}-1)^{-1}$ is the Bose occupation factor. The measured width (FWHM) is thus ideally given by $\Gamma$ and the shift by $\Delta$.
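As a concrete illustration, this lineshape is simple to tabulate numerically. In the sketch below the values of $\Gamma$, $\Delta$ and $T$ are illustrative placeholders, not fitted quantities; it only demonstrates that the intensity peaks essentially at $\omega_0+\Delta$, with the Bose prefactor producing a negligible skew at these widths.

```python
import numpy as np

# Stokes lineshape I_S(w) ~ (1 + n(w)) * Gamma / [(w0 + Delta - w)^2 + Gamma^2]
# with w in cm^-1. C2 = hc/kB (second radiation constant) converts a
# wavenumber to a thermal ratio hbar*omega/(kB*T) = C2*w/T.
C2 = 1.4388  # cm K

def bose(w, T):
    """Bose occupation for a frequency in cm^-1 at temperature T in K."""
    return 1.0 / np.expm1(C2 * w / T)

def stokes(w, w0=220.0, delta=-6.0, gamma=5.0, T=300.0):
    # illustrative numbers: w0 from the B1g mode, delta/gamma/T are placeholders
    return (1.0 + bose(w, T)) * gamma / ((w0 + delta - w) ** 2 + gamma ** 2)

w = np.linspace(150.0, 300.0, 15001)
peak = w[np.argmax(stokes(w))]
print(f"peak at {peak:.1f} cm^-1")  # essentially at w0 + Delta = 214 cm^-1
```

The prefactor $(1+n(\omega))$ shifts the maximum by well under a wavenumber here, so reading the shift off the peak position, as done for the data below, is safe.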
We calculate the self energy to second order in the interaction
$H_A=\frac{1}{6}\sum_{\{\vec{q}_i\},\{j_i\}}V(\vec{q}_1,j_1;\vec{q}_2,j_2;\vec{q}_3,j_3)A_{\vec{q}_1,j_1}A_{\vec{q}_2,j_2}A_{\vec{q}_3,j_3}\delta_{\sum{\vec{q}_i},0}$
where $A_{\vec{q},j}=a_{\vec{q},j}+a^{\dagger}_{-\vec{q},j}$ is the phonon operator.\cite{Calandra} $V$ are the matrix elements
\begin{widetext}
\begin{equation}
V(\vec{q}_1,j_1;\vec{q}_2,j_2;\vec{q}_3,j_3)=\sum_{r_1,r_2,r_3,\vec{R}_1,\vec{R}_2,\vec{R}_3}\left[\prod_{i=1}^3
\left((\frac{\hbar}{2M_{r_i}\omega(\vec{q}_i,j_i)})^{\frac{1}{2}}e^{i\vec{q}_i\cdot\vec{R}_i}\vec{e}_{\vec{q}_i,j_i}(r_i)\cdot\nabla\right)\right]
\phi(r_1,r_2,r_3,\vec{R}_1,\vec{R}_2,\vec{R}_3)
\label{Vfull}
\end{equation}
\end{widetext}
where the derivatives act on $\phi$, the interatomic potential, with $r_i$ the atomic positions within unit cell $\vec{R}_i$ and $\vec{e}_{\vec{q}_i,j_i}(r_i)$
are the displacement vectors of the respective mode. The expression thus amounts to the third derivative of the interatomic potential with respect to the
atomic displacements along the phonon modes.
The iron atoms are only
weakly coupled to the $(Ce,Nd)O$ layer and since the phonon dispersions in the c-direction are weak\cite{Boeri} we can model
the Fe-B$_{1g}$ mode by considering only the Fe-As plane. (This also makes the procedure quasi-universal as it only depends on
the properties of this plane which is common to all iron-pnictide superconductors.) For the harmonic problem we use a minimal spring model (Fig. \ref{crystal})
with nearest neighbor Fe-As potential $\frac{k}{2}\delta r^2$,
nearest neighbor Fe-Fe potential $\frac{k'}{2}\delta r'^2$, and in-plane As-As coupling $\frac{k''}{2}\delta r''^2$ with $\delta r$, $\delta r'$, and $\delta r''$
the deviations from the
respective equilibrium distances. For the Fe-As
coupling we also add a cubic term $-\frac{g}{6}\delta r^3$, where the magnitude of $g$ is to be estimated from the experimental fit.
The Raman active B$_{1g}$ is here a pure Fe mode as depicted in Fig. \ref{crystal} with energy
$\omega_{B_{1g}}=\sqrt{\frac{4k\sin^2\theta}{m}}$ ($m=56u$) and to get $\omega_{B_{1g}}=220cm^{-1}$ we take $k=8.7 eV/\AA^2$, using $\theta=35^\circ$.\cite{Cruz}
The values $k'=0.3k$ and $k''=0.2k$ give a lower edge of the acoustic branches
in good agreement with LDA calculations.\cite{Boeri}
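The harmonic frequency formula is straightforward to verify numerically. The sketch below is purely a unit-conversion check using standard SI constants; with the quoted $k$ and $\theta$ it reproduces the B$_{1g}$ frequency at roughly the 10\% level, the residual presumably reflecting rounding or conventions not spelled out here.

```python
import math

# Unit check of omega_B1g = sqrt(4 k sin^2(theta) / m) for the pure-Fe mode.
# With k = 8.7 eV/A^2 and theta = 35 deg (quoted values) this lands near
# 236 cm^-1, i.e. the 220 cm^-1 mode at the ~10% level -- an order check only.
EV, ANG, U = 1.602176634e-19, 1e-10, 1.66053906660e-27  # J, m, kg
C_CM = 2.99792458e10                                    # speed of light, cm/s

k = 8.7 * EV / ANG**2          # Fe-As spring constant in N/m
theta = math.radians(35.0)
m = 56.0 * U                   # Fe mass in kg

omega = math.sqrt(4.0 * k * math.sin(theta) ** 2 / m)   # angular freq, rad/s
wavenumber = omega / (2.0 * math.pi * C_CM)             # cm^-1
print(f"omega_B1g ~ {wavenumber:.0f} cm^-1")
```
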
Using this phonon spectrum we calculate the interaction matrix elements through Eq. \ref{Vfull} and the cubic potential.
Sampling the coupling $V(\vec{0},Fe_{B1g};\vec{q},j_1;-\vec{q},j_2)\equiv V_0(\vec{q},j_1,j_2)$
over the Brillouin zone shows that, except at high symmetry points, all 12 modes are coupled quite isotropically, and the coupling does not vary
strongly with $\vec{q}$ if the frequency dependence is included, giving
\begin{equation}
V_0(\vec{q},j_1,j_2)\approx\kappa g (\frac{\hbar}{2m\omega_{B_{1g}}})^{3/2}\equiv V_0\,,
\end{equation}
with the numerical value $\kappa=0.10$.
\begin{figure}
\includegraphics[width=8cm]{pnfig2.pdf}
\caption{\label{crystal} (Color online) Schematic of the Fe-As plane with in-plane lattice parameter $a$. The harmonic interatomic couplings are $k$, $k'$ and $k''$ and
cubic coupling $g$ as described in the text. Arrows indicate the Fe B$_{1g}$ mode.}
\end{figure}
\subsection{Phonon self energy}
Now we are ready to address the frequency shift by calculating the self-energy to second order in $V_0$.
The self
energy is given by the bubble diagram\cite{Calandra} with imaginary part
\begin{widetext}
\begin{eqnarray}
\Gamma(\omega)=\frac{\pi}{2N\hbar^2}\sum_{\vec{q},j_1,j_2}&&|V_0(\vec{q},j_1,j_2)|^2[(1+n(\omega_{\vec{q},j_1})+n(\omega_{\vec{q},j_2}))
[\delta(\omega-\omega_{\vec{q},j_1}-\omega_{\vec{q},j_2})-\delta(\omega+\omega_{\vec{q},j_1}+\omega_{\vec{q},j_2})]\nonumber\\
&+&(n(\omega_{\vec{q},j_1})-n(\omega_{\vec{q},j_2}))[\delta(\omega+\omega_{\vec{q},j_1}-\omega_{\vec{q},j_2})-\delta(\omega-\omega_{\vec{q},j_1}+\omega_{\vec{q},j_2})]]
\end{eqnarray}
\end{widetext}
(The phonon-difference process, the second term, vanishes at zero temperature but can give a significant contribution at finite temperature.)
The simplest standard way to evaluate this expression is to assume
that the scattering is diagonal in the modes $V_0\sim\delta_{j_1,j_2}$, the Klemens model,\cite{Klemens} which gives a characteristic temperature dependence
$\Gamma(\omega)\sim (1+2n(\omega/2))$. As discussed previously we find instead that the scattering
is approximately isotropic between modes and we need to do a more careful analysis.
In principle we could calculate this expression numerically at each
temperature using
our numerical phonon spectrum but instead
we will use an approximate Lorentzian fit for the phonon DOS with the advantage of giving an analytic
expression for both $\Gamma$ and $\Delta$ including the full temperature dependence.
\begin{figure}
\includegraphics[width=8cm]{pnfig3.pdf}
\caption{\label{dos} (Color online) Phonon density of states of the isolated Fe-As plane calculated within the harmonic model discussed in the text and
its approximation in terms of three Lorentzians (solid curve) that contain
4, 3, and 5 modes with increasing energy, respectively. }
\end{figure}
The calculated phonon DOS, plotted in Fig. \ref{dos}, consists of three regions of high density centered around $\omega_1\approx 100cm^{-1}$,
$\omega_2\approx 200cm^{-1}$, and $\omega_3\approx 300cm^{-1}$. The lower part contains $m_1=4$ modes, the intermediate part $m_2=3$
modes, and the upper part $m_3=5$ modes, giving
\begin{equation}
\rho(\omega)\approx \frac{N}{\pi}\sum_{i=1}^3m_i\frac{\gamma}{(\omega-\omega_i)^2+\gamma^2}\equiv\sum_i\rho_i(\omega)\,,
\end{equation}
which is thus normalized by $\int_{-\infty}^{\infty}\rho(\omega)\,d\omega=12N$ and where we estimate $\gamma=25cm^{-1}$.
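The normalization of this fit can be verified directly; the sketch below uses the quoted peak positions, mode counts and width, with $N=1$.

```python
import numpy as np

# Three-Lorentzian fit to the phonon DOS: peaks at 100, 200, 300 cm^-1
# carrying 4, 3 and 5 modes, width gamma = 25 cm^-1 (values from the text).
centers = np.array([100.0, 200.0, 300.0])   # peak positions, cm^-1
mode_counts = np.array([4.0, 3.0, 5.0])     # modes per peak
gamma = 25.0                                # Lorentzian half-width, cm^-1

def dos(w):
    """rho(w) for N = 1: a sum of normalized Lorentzians times mode counts."""
    w = np.asarray(w)[:, None]              # broadcast against the 3 peaks
    return (mode_counts / np.pi * gamma / ((w - centers) ** 2 + gamma**2)).sum(axis=1)

# Trapezoid integration over a wide window; the Lorentzian tails outside
# contribute only ~1e-3 of the total weight.
w = np.linspace(-2.0e4, 2.0e4, 800001)
y = dos(w)
total = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))
print(f"integrated DOS = {total:.2f} modes")  # ~12, as required
```
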
To proceed we evaluate the occupation factor $n(\omega)$ at the peak position of the respective DOS and assume a $\vec{q}$-independent
coupling, giving $\Gamma$ as a sum
over all pairs of DOS peaks with the result
\begin{widetext}
\begin{eqnarray}
\Gamma_0(\omega)=\frac{1}{2\hbar^2}V_0^2\sum_{i,j=1}^3m_im_j&&[(1+n(\omega_i)+n(\omega_j))
[\frac{2\gamma}{(\omega-\omega_i-\omega_j)^2+4\gamma^2}-\frac{2\gamma}{(\omega+\omega_i+\omega_j)^2+4\gamma^2}]\nonumber\\
&&+(n(\omega_i)-n(\omega_j))[\frac{2\gamma}{(\omega+\omega_i-\omega_j)^2+4\gamma^2}-\frac{2\gamma}{(\omega-\omega_i+\omega_j)^2+4\gamma^2}]]\,.
\end{eqnarray}
\end{widetext}
The expression is naturally understood as the scattering of the mode at $\omega$ into two modes within the same or different DOS peaks
(or the corresponding difference process).
Note that $\Gamma_0(\omega_{B_{1g}})$ will be dominated by scattering into the low-energy peak
($i=j=1$) which will give a temperature dependence close to the Klemens model $\Gamma(\omega)\sim (1+2n(\omega/2))$.\cite{Klemens}
Using the Kramers-Kronig relation
$\Delta(\omega)=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\Gamma(\omega')}{\omega'-\omega}d\omega'$,
gives
\begin{widetext}
\begin{eqnarray}
\Delta_0(\omega)=\frac{1}{2\hbar^2}V_0^2\sum_{i,j=1}^3m_im_j&&[(1+n(\omega_i)+n(\omega_j))
[\frac{(\omega-\omega_i-\omega_j)}{(\omega-\omega_i-\omega_j)^2+4\gamma^2}-\frac{(\omega+\omega_i+\omega_j)}{(\omega+\omega_i+\omega_j)^2+4\gamma^2}]\nonumber\\
&&+(n(\omega_i)-n(\omega_j))
[\frac{(\omega+\omega_i-\omega_j)}{(\omega+\omega_i-\omega_j)^2+4\gamma^2}-\frac{(\omega-\omega_i+\omega_j)}{(\omega-\omega_i+\omega_j)^2+4\gamma^2}]]\,.
\label{shift}
\end{eqnarray}
\end{widetext}
\section{Discussion}
Figure \ref{b1gfit} shows the fit from equation \ref{shift} to the experimental values using $\omega=\omega_0+\Delta_0(\omega_{B_{1g}})$,
with fitting parameters $\omega_0$ and $g$ ($\Delta_0\sim g^2$). For the Nd sample we find
$\omega_0=221.7cm^{-1}$ and $g=103eV/\AA^3$ and for the Ce sample $\omega_0=221.0cm^{-1}$ and $g=116eV/\AA^3$, where $\omega_0$ is close
to the assumed harmonic value $\omega_{B_{1g}}=220cm^{-1}$.\cite{note} It is interesting to note that we find
$\Delta_0(T=0)\approx 6cm^{-1}$, i.e. even at zero temperature phonon-phonon interactions give a finite (here 3\%) contribution to the
phonon energy, an effect which LDA calculations of phonon energies based on linear response neglect.
\begin{figure}
\includegraphics[width=8cm]{pnfig4.pdf}
\caption{\label{b1gfit} (Color online) Fit of the model frequency shift ($\Delta_0$) to data for $NdFeAsO_{.88}F_{.12}$ (boxes) and $CeFeAsO_{.84}F_{.16}$ (diamonds).
The inset shows the corresponding model linewidths ($\Gamma_0$) together with linewidth data (triangles) on CaFe$_2$As$_2$ from Choi et al.\cite{Choi}.}
\end{figure}
Due to the polycrystalline nature of our samples we are not able to find reliable phonon widths and compare instead our theoretical results
to Raman studies by Choi {\em et al.}\cite{Choi} on CaFe$_2$As$_2$. In that material the B$_{1g}$ mode has lower energy but the temperature
dependence is similar and the width corresponds well with our calculations without any additional fit.
The temperature dependence is in fact just the Klemens model but the strength of the calculation is that only one parameter $g$ gives both the shift and the width.
In Ref.~\onlinecite{Choi} the linewidth variations
were tentatively assigned to changes in the electronic scattering below T$_{SDW}$. We can quite definitely rule out these speculations;
the anharmonic contribution accounts to good accuracy for both the shift and
linewidth variations of this mode.\cite{note2}
\subsection{Lattice expansion}
The cubic coupling will give rise to an expansion of the lattice that
we estimate by considering an isolated Fe-As bond for which $<0|\delta r|0>=\frac{\hbar g}{4(m*)^{1/2}k^{3/2}}$ with $m^*=Mm/(m+M)$.
We include also a nearest neighbor Fe-Fe anharmonic coupling $g'$ with the same relative strength ($g'=g(k'/k)^{3/2}$).
With this the isotope substitution $^{56}$Fe$\rightarrow$ $^{54}$Fe gives an expansion of the in-plane lattice parameter
$\Delta a=5.1\cdot 10^{-4}{\AA}$ and Fe-As distance $\Delta d_{Fe-As}=2.5\cdot 10^{-4}{\AA}$, which compared to $a\approx 4\AA$ and
$d_{Fe-As}\approx 2.5\AA$\cite{Cruz} gives a relative expansion
$\lesssim 2\cdot 10^{-4}$.
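For the isolated Fe-As bond, the zero-point estimate $<0|\delta r|0>=\frac{\hbar g}{4(m^*)^{1/2}k^{3/2}}$ can be evaluated numerically. This sketch (names are ours) uses only the isolated-bond term with the Nd-sample $g$; the quoted $\Delta a$ and $\Delta d_{Fe-As}$ additionally include the Fe-Fe coupling $g'$ and the lattice geometry, so only the order of magnitude should be compared:

```python
import math

HBAR = 6.582e-16   # eV * s
AMU = 1.0364e-28   # one atomic mass unit in eV * s^2 / Angstrom^2
g = 103.0          # cubic coupling for the Nd sample, eV / Angstrom^3
k = 8.7            # harmonic Fe-As spring constant, eV / Angstrom^2
M_As = 75.0        # atomic mass of As, amu

def bond_stretch(M_Fe):
    """Zero-point anharmonic stretch <0|delta r|0> of an isolated Fe-As bond (Angstrom)."""
    m_star = M_Fe * M_As / (M_Fe + M_As) * AMU
    return HBAR * g / (4.0 * math.sqrt(m_star) * k ** 1.5)

dr56 = bond_stretch(56.0)
dr54 = bond_stretch(54.0)
print(dr56, dr54 - dr56)  # stretch ~1e-2 A; isotope-induced change ~1e-4 A
```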
What is the possible effect of a small lattice expansion on $T_c$ and $T_{SDW}$? There is a direct effect on the electronic hopping integrals $t$, and theories
of an isotope effect based on this have been suggested for the cuprates and C$_{60}$ as well as the pnictides.\cite{Fisher,Chakravarty,Phillips}
Assuming
$t=t_0e^{-q(r/r_0-1)}$ with $q\sim 1$\cite{comment_Vildosola} and where $\vec{r}=r_0\hat{r}+\vec{\delta}$ with small displacement $\vec{\delta}$ we find
$\frac{\delta t}{t_0}=-q\frac{\delta_r}{r_0}-\frac{q}{2}\frac{\delta_\perp^2}{r_0^2}+\frac{q^2}{2}\frac{\delta_r^2}{r_0^2}$ where $\delta_r=\vec{\delta}\cdot\hat{r}$
and $\delta_\perp=|\vec{\delta}\times\hat{r}|$. The linear term has only an anharmonic contribution whereas the quadratic terms will get contributions from zero point
fluctuations in the harmonic approximation. We estimate $\delta^2$ from the ground state energy per atom $\epsilon_0=38.9meV$ and $\epsilon_0=39.2meV$
for $^{56}$Fe and $^{54}$Fe respectively and energy $\epsilon_0/2$ per Fe-As bond.
The fluctuation along a bond is given by $\frac{k}{2}{\delta r}^2\approx \epsilon_0/4$ ($k=8.7eV/\AA^2$), giving the difference
$\delta_r^2\approx \Delta\epsilon_0/(2k)\approx 2.3\cdot 10^{-5}\AA^2$ and for the transverse fluctuations
$\delta_\perp^2\approx 2\delta_r^2\approx 4.6\cdot 10^{-5}\AA^2$.\cite{note_fluc}
The contribution to the shift of the hopping integrals is partially canceled by the different signs and in total smaller by an order of
magnitude compared to the anharmonic contribution.
\subsection{Isotope effect}
Consider a weak coupling (non phonon mediated) SC or SDW transition with $T_c\sim \Omega e^{-1/N(0)V}$,
where $\Omega$ is the relevant energy cut-off, which may be the magnon energy for spin fluctuation mediated pairing,
$V$ is the effective interaction and $N(0)$ the relevant electronic DOS at the Fermi energy.
Focusing on the contribution from the DOS, $N(0)\sim 1/t$ gives the isotope exponent
$\alpha=-\frac{d\ln T_c}{d\ln m}\approx-\frac{1}{N(0)V}\frac{d\ln N(0)}{d\ln m}\approx \frac{0.5\cdot 10^{-2}}{N(0)V}\approx 10^{-2}$.
(Assuming, $N(0)V\approx 0.5$. It cannot be much smaller to get a high T$_c$.)
The interaction strength and $\Omega$ may also depend on the hopping integrals and give isotope shifts of similar magnitude.
Although this is a simple analysis
we expect the order of magnitude estimate to be relevant to any purely electronic microscopic model containing
inter and intra orbital hopping integrals and interactions that only depend indirectly on the lattice parameters.
Alternatively, we may relate the change of lattice parameter to a corresponding pressure of
$dP=\frac{3\Delta a}{a}/\beta\approx 0.4$kbar through the
compressibility $\beta=-\frac{d\ln V}{dP}\approx 1.0\cdot 10^{-3}/$kbar.\cite{Zhao_JACS}
(We have no estimate of the c-axis change related to
the in-plane expansion but only assume that this is of similar relative magnitude.)
Pressure dependence of T$_c$ of around $0.2K/$kbar has been reported in several materials at different dopings although
close to optimal doping it appears that the effect is generally significantly smaller.\cite{Chu_review} Nevertheless, from these considerations
we find an upper estimate of the isotope exponent $\alpha\approx 0.06$.
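The arithmetic behind this upper estimate can be reproduced in a few lines (a sketch; the value of $T_c$ used below is an assumed representative number for these samples, not taken from the text):

```python
Delta_a = 5.1e-4     # isotope-induced in-plane lattice expansion, Angstrom
a = 4.0              # in-plane lattice parameter, Angstrom
beta = 1.0e-3        # compressibility, 1/kbar
dP = (3.0 * Delta_a / a) / beta         # equivalent pressure, kbar (~0.4)

dTc_dP = 0.2         # K/kbar, representative reported pressure dependence
Tc = 36.0            # K; an assumed representative value, not from the text
dm_over_m = 2.0 / 56.0                  # relative mass change, 56Fe -> 54Fe
alpha = (dTc_dP * dP / Tc) / dm_over_m  # upper estimate of -dlnTc/dlnm
print(dP, alpha)
```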
Clearly something more sophisticated is needed to produce $\alpha=0.4$
as found experimentally in Ref.\onlinecite{Liu}. Encouraging perhaps, the sign from the naive weak coupling analysis based on a change in the electronic
DOS does agree with
experiments and would naturally imply a similar exponent for both SC and SDW.\cite{note3}
\section{Summary}
In summary, we find that the temperature dependent shift and width of a Raman active phonon is well represented by the anharmonic contribution,
consistent with weak electron-phonon interactions.
At the same time, we estimate the change of electronic hopping integrals due to Fe isotope substitution and find that these are too small to
generate an
isotope effect on T$_c$ or T$_{SDW}$ of the magnitude reported in Ref.~\onlinecite{Liu} without an explicit phonon contribution.
These apparently contradictory results present a
significant challenge for any theory of superconductivity in the iron-pnictides.
\section{Acknowledgement}
We thank Professor Nan Lin Wang for providing the samples used in the Raman measurements.\\
{\em Note added} After the submission of this work x-ray diffraction (XRD) data on the samples used for the isotope experiments appeared
(Ref. \onlinecite{Liu}, supplemental).
Changes in the lattice parameter of isotope substituted samples are the same within the experimental error of $\sim 1\cdot 10^{-3}\AA$, which is
greater than the change $\Delta a\approx 5\cdot 10^{-4}\AA$ calculated here.
There also appeared a report of a negative Fe-isotope effect with $\alpha=-0.18$.\cite{negative_iso} Again, XRD data find the same in-plane
lattice parameter within experimental error of $\sim 1\cdot 10^{-3}\AA$.\cite{Shirage_private}
Q: Pip getting stuck after starting HTTPS connection

Pip is not installing any dependencies; it just collects and then quits.
I was trying to install the Twisted package on my computer using the pip command. The command starts executing and stops within about a second, like this:

Collecting Twisted==15.0.0
  1 location(s) to search for versions of Twisted:
  * https://pypi.python.org/simple/twisted/
  Getting page https://pypi.python.org/simple/twisted/
  Starting new HTTPS connection (1): pypi.python.org

I tried using
--no-cache-dir
but it made no difference.
I am using Python 3.5.2 because of dependency issues.
I hope someone can help with this. Thank you in advance.
\section{Introduction} \label{section_introduction}
Different forms of the Hermite-Gauss functions have seen wide usage in physics and chemistry,
e.g., in the context of
detection of gravitational waves \cite{Ast_DiPace_Millo_Pichot_Turconi_Christensen_Chaibi_2021,Tao_Green_Fulda_2020},
quantum encoding \cite{Allgaier_Ansari_Donohue_Eigner_Quiring_Ricken_Brecht_Silberhorn_2020} and communication \cite{Perkins_Newell_Schabacker_Richardson_2013},
quantum entanglement with Hermite-Gauss beams \cite{Walborn_Pimentel_2012},
self-healing \cite{AguirreOlivas2015} and non-diffracting \cite{Chabou_Bencheikh_2020} (elegant) Hermite-Gauss beams,
detection beyond the diffraction limit \cite{Singh_Nagar_Roichman_Arie_2017},
Goos–Hänchen shift on reflection of a graphene monolayer \cite{Zhen_Deng_2020},
soft X-ray orbital angular momentum analysis \cite{Lee_Alexander_Kevan_Roy_McMorran_2019},
turbulence-resistant laser beams \cite{Cox_Maqondo_Kara_Milione_Cheng_Forbes_2019},
and for numeric integration \cite{HG_quadrature_6289843}.
This list is far from exhaustive.
The \emph{anisotropic Hermite-Gauss} (AHG) functions have been introduced by \citet{Amari_Kumon_1983}
(using the terminology ``tensorial Hermite-Gauss functions''), and were studied further later by \citet{jastor10.2307.25050684,Holmquist_1996,Ismail_Simeonov_2020}.
By using the quadratic form defined by a given positive definite matrix, these functions form a multivariate extension of the standard univariate Hermite-Gauss (HG) functions.
The positive definite matrix can be used for the representation of spatial deformations, geometric properties and energy tensors of structured optical beams, and potentially other future applications.
In the context of optical coherence theory, it was shown that this anisotropy matrix has a clear physical meaning \cite{Steinberg_hg_2021}:
the spatial coherence of light.
This allows for the representation of a large family of coherence functions using a limited count of AHG modes, making the representation computationally-tractable.
\ifx1\undefined
Ismail and Simeonov \cite{Ismail_Simeonov_2020}
\else
\citet{Ismail_Simeonov_2020}
\fi
have derived certain properties of the AHG functions, including the generating functions, recurrence relations and linearization properties.
The purpose of this paper is to study the properties of these functions from a computational and optical perspective.
In addition to a number of useful identities, we derive closed-form expressions for the linear canonical transform (LCT) of an AHG function,
as well as two important transforms generalized by the LCT: the fractional Fourier transform
and Laplace transform.
In addition, we consider the Wigner-Vile distribution in Hermite-Gauss space.
These transforms are fundamental in Fourier optics, quantum mechanics and signal processing.
We also discuss the eigenfunctions of these transforms and show that the AHG functions are the eigenfunctions for specific cases of the LCT.
These results echo well-known results for the univariate HG functions that have not previously been investigated in the context of the multivariate AHG functions.
\section{Notation and Preliminaries} \label{section_preliminaries}
Let $\Natural=\qty{0,1,2,\ldots}$ represent the set of natural numbers and $\Integer,\Real,\Complex$ represent the set of integers, the real field and
the complex field, respectively.
A vector is denoted as $\va{r}=\qty[r_1,r_2,\ldots,r_n]^\transpose\in\Complex^n$ and the all-ones vector is denoted $\va{1}=\qty[1,1,\ldots,1]^\transpose\in\Complex^n$.
We use $\Real^{n\times m},\Complex^{n\times m}$ to denote the sets of all
real-valued and complex-valued $n\times m$
matrices, respectively.
Let $\mat{I}$ be the identity matrix, $\abs{\mat{A}}$ denote the determinant of a (square) matrix $\mat{A}$ and $\mat{A}^\transpose$ the transpose of $\mat{A}$.
Given $\mat{A}\in\Complex^{n\times m}$, the notation $\mat{A}=[\va{a}_j^\transpose]=[a_{jk}]$ defines $\va{a}_j^\transpose$, $a_{jk}$ to be the row vectors and elements
of $\mat{A}$, respectively.
A matrix $\mat{S}\in\Real^{n\times n}$ is said to be positive definite if it is symmetric and $\va{x}^\transpose\mat{S}\va{x}>0$ for all $0\neq\va{x}\in\Real^n$.
The notation $\mat{S}\succ 0$ indicates that $\mat{S}$ is positive definite.
A multi-index is defined as the $n$-tuple $\vb{\nu}=\qty(\nu_1,\nu_2,\ldots,\nu_n)\in\Natural^n$.
We use the standard multi-index factorial, double factorial, degree and power shorthand, viz.
\begin{alignat}{2}
\vb{\nu}! &\triangleq \prod_j \nu_j!
~,
\qquad&&\qquad
\vb{\nu}!! \triangleq \prod_j \nu_j!!
~,
\\
\abs{\vb{\nu}} &\triangleq \sum_j \nu_j
~,
\qquad&&\qquad
\va{r}^{\vb{\nu}} \triangleq \prod_j r_j^{\nu_j}
~,
\end{alignat}
where the double factorial of a natural integer is $n!!=n\cdot(n-2)\cdot\ldots\cdot 1$ when $n$ is odd and $n!!=n\cdot(n-2)\cdot\ldots\cdot 2$ otherwise (the factorial and double factorial of 0 are both 1).
The partial order $\preceq$ is defined on the set of multi-indices as follows: $\vb{\nu}\preceq\vb{\mu}$ iff $\forall_j \nu_j\leq\mu_j$.
The usual binomial coefficients are generalized to multi-indices as
\begin{align}
\binom{\vb{\nu}}{\vb{\mu}} = \frac{\vb{\nu}!}{\vb{\mu}!\qty(\vb{\nu}-\vb{\mu})!}
~,
\end{align}
the convention being that this binomial coefficient is non-zero iff $\vb{\mu}\preceq\vb{\nu}$.
For a multi-index $\vb{\nu}\in\Natural^n$ and a vector $\va{r}$, we define the partial derivative shorthand as
\begin{align}
\pDv^{\vb{\nu}}_{\va{r}}
&\triangleq
\frac{\partial^{\abs{\vb{\nu}}}}{\prod_j \partial r_j^{\nu_j}}
~.
\end{align}
Similarly, we define the multi-index matrix, $\mat{\Omega}\in\Natural^{n\times m}$, which consists of $n$ rows, each a multi-index, i.e $\mat{\Omega}=[\vb{\omega}_j]=[\omega_{jk}]$.
We define $\mat{\Omega}!=\prod_{j,k} \omega_{jk}!$ and,
given $\mat{A}=[a_{jk}]\in\Complex^{n\times m}$ set $\mat{A}^{\mat{\Omega}}=\prod_{j,k} a_{jk}^{\omega_{jk}}$.
We sometimes slightly abuse notation and write ${\va{1}}^{\transpose}\mat{\Omega}$ and $\mat{\Omega}\va{1}$ to denote the multi-indices that consist of the column sums and row sums of $\mat{\Omega}$, respectively.
Given a pair of $L^2$ functions $f,g$, the inner product (over $\Real^n$) of $f$ and $g$ is denoted by $\@ifstar{\definp}{\definp*}{f}{g} \triangleq \int_{\Real^n} \dd{\va{x}} f(\va{x})g^\star(\va{x})$, with $\star$ being complex conjugation.
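For later computations, the multi-index conventions above translate directly into code; the following minimal Python helpers (a sketch; names are ours) implement the factorial, power and binomial definitions, including the vanishing of the binomial coefficient when $\vb{\mu}\not\preceq\vb{\nu}$:

```python
import math

def mfact(nu):
    """Multi-index factorial: nu! = prod_j nu_j!."""
    out = 1
    for n in nu:
        out *= math.factorial(n)
    return out

def mpow(r, nu):
    """Multi-index power: r^nu = prod_j r_j^{nu_j}."""
    out = 1.0
    for x, n in zip(r, nu):
        out *= x ** n
    return out

def mbinom(nu, mu):
    """Multi-index binomial; zero unless mu <= nu componentwise."""
    if any(m > n for n, m in zip(nu, mu)):
        return 0
    return mfact(nu) // (mfact(mu) * mfact(tuple(n - m for n, m in zip(nu, mu))))

print(mbinom((2, 2), (1, 1)))  # 4
```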
\paragraph{\textbf{The Hermite-Gauss functions}}
The $k$-th order univariate, complex Hermite-Gauss function is defined as
\begin{align}
\hg{k}\qty(z)
&\triangleq
\qty(\sqrt{\mpi} \, 2^k k!)^{-\half} \ee^{-\frac{z^2}{2}} H_k\qty(z)
=
\frac{
\qty(-1)^k \ee^{\frac{z^2}{2}}
}{\sqrt{\sqrt{\mpi} \, 2^k k!}}
\dv[k]{z} \ee^{-z^2}
~,
\label{HG_univariate}
\end{align}
where $z\in\Complex$, $k\in\Natural$ and $H_k$ is the Hermite polynomial of order $k$.
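Numerically, $\hg{k}$ is best evaluated not from \cref{HG_univariate} directly (the factorials overflow quickly) but from the equivalent stable three-term recurrence $\hg{k}(x)=\sqrt{2/k}\,x\,\hg{k-1}(x)-\sqrt{(k-1)/k}\,\hg{k-2}(x)$, which follows from the Hermite recurrence; a minimal sketch (names are ours):

```python
import math

def hg(k, x):
    """k-th univariate Hermite-Gauss function via the stable recurrence."""
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for j in range(2, k + 1):
        h0, h1 = h1, math.sqrt(2.0 / j) * x * h1 - math.sqrt((j - 1) / j) * h0
    return h1

# Spot-check orthonormality on a Riemann grid
dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]
print(sum(hg(2, x) ** 2 for x in xs) * dx)        # ~1
print(sum(hg(2, x) * hg(3, x) for x in xs) * dx)  # ~0
```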
Given a symmetric matrix $\mat{\Theta}\in\Complex^{n\times n}$ with a positive definite real part (i.e. $\Re\mat{\Theta}\succ 0$), we define the $n$-dimensional complex anisotropic Hermite-Gauss function of degree $\vb{\nu}\in\Natural^n$ of order $\abs{\vb{\nu}}$ associated to $\mat{\Theta}$ by
\begin{align}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&\triangleq
\qty(-\frac{1}{\sqrt{2}})^{\abs{\vb{\nu}}}
\frac{
\ee^{\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}
}{
\sqrt{\vb{\nu}!}
\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}
}
\pDv^{\vb{\nu}}_{\va{r}}
\ee^{-\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}
~.
\label{hermite_gauss}
\end{align}
Similarly,
the \emph{dual} of the anisotropic Hermite-Gauss function is defined as
\begin{align}
\HGd[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&\triangleq
\qty(-\frac{1}{\sqrt{2}})^{\abs{\vb{\nu}}}
\frac{
\ee^{\frac{1}{2}\va{s}^\transpose\mat{\Theta}\va{s}}
}{
\sqrt{\vb{\nu}!}
\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}
}
\pDv^{\vb{\nu}}_{\va{s}}
\ee^{-\va{s}^\transpose\mat{\Theta}\va{s}}
~,
\label{hermite_gauss_dual}
\end{align}
with $\va{s}=\mat{\Theta}^{-1}\va{r}$.
The generating functions of the AHG functions are
\begin{subequations}
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&=
\frac{
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{x})}
}{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\label{HG_generating_function}
~,
\\
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\HGd[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&=
\frac{
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \va{x}^\transpose\qty(2\va{r}-\mat{\Theta}\va{x})}
}{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\label{HG_dual_generating_function}
~,
\end{align}
\end{subequations}
for any $\va{x},\va{r}\in\Complex^n$ (see \cite{jastor10.2307.25050684,Ismail_Simeonov_2020}).
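For $n=1$ and $\mat{\Theta}=1$ (where the AHG function reduces to the univariate $\hg{k}$), \cref{HG_generating_function} can be verified numerically by truncating the sum; a sketch (names are ours):

```python
import math

def hg(k, x):  # univariate Hermite-Gauss function (three-term recurrence)
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for j in range(2, k + 1):
        h0, h1 = h1, math.sqrt(2.0 / j) * x * h1 - math.sqrt((j - 1) / j) * h0
    return h1

x, r = 0.3, 0.7
lhs = sum(math.sqrt(2.0 ** k / math.factorial(k)) * x ** k * hg(k, r)
          for k in range(40))
rhs = math.pi ** -0.25 * math.exp(-r * r / 2.0 + x * (2.0 * r - x))
print(abs(lhs - rhs))  # the truncated series matches the closed form
```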
\section{Properties and Identities} \label{section_properties}
We begin with a few simple but useful properties of the AHG functions.
Most of the properties listed in \cref{properties_basic} are known
\cite{Ismail_Simeonov_2020,jastor10.2307.25050684} or easy to prove.
They are included here for completeness.
\begin{property}[Basic properties] \label{properties_basic}
Let $\va{r}\in\Complex^n$, symmetric $\mat{\Theta}\in\Complex^{n\times n}$ such that $\Re\mat{\Theta}\succ 0$.
Then
\begin{tasks}[label-format=,label=\textrm{\thetheorem.\arabic*},label-width=3em,item-indent=5em](2)
\task* \label{basic_property_dual}
$\HGd[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}) =
\abs{\mat{\Theta}}^{-\half}
\HG[\mat{\Theta}^{-1}]{\vb{\nu}}\qty(\mat{\Theta}^{-1}\va{r})$.
\task* \label{basic_property_conjugate}
$\HG[\mat{\Theta}]{\vb{\nu}}(\va{r})^\star = \HG[\mat{\Theta}^\star]{\vb{\nu}}(\va{r}^\star)$.
\task* \label{basic_property_analytic_b}
$
\HG[z^2\mat{\Theta}]{\vb{\nu}}\qty(\va{r}) =
\abs{\mat{\Theta}}^{\frac{1}{4}}
\abs{z^2\mat{\Theta}}^{-\frac{1}{4}}
\qty(\frac{1}{z})^{\abs{\vb{\nu}}}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\frac{1}{z}\va{r})
$ for $0\neq z\in\Complex$.
\task* \label{basic_property_analytic_a}
$\HG[\mat{\Theta}]{\vb{\nu}}\qty(-\va{r}) = (-1)^{\abs{\vb{\nu}}} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})$.
\task* \label{basic_property_real}
if $\mat{\Theta},\va{r}$ are real-valued then $\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})$ is real.
\task* \label{basic_property_decompose_into_univariate}
if $\mat{\Theta}=\mat{I}$, the AHG function decomposes into a product of the univariate HG functions:
$
\HG[\mat{I}]{\vb{\nu}}(\va{r}) = \HGd[\mat{I}]{\vb{\nu}}(\va{r}) = \prod_k \hg{\nu_k}(r_k)
$.
\task* \label{basic_property_even_odd}
$\HG[\mat{I}]{\vb{\nu}}(\va{r})$ is even as a function of $r_j$ iff $\nu_j$ is even, otherwise it is odd.
\end{tasks}
\end{property}
\begin{proof}
\cref{basic_property_dual,basic_property_decompose_into_univariate} follow trivially from the definitions.
\cref{basic_property_analytic_b,basic_property_conjugate} follow from the generating function (\cref{HG_generating_function}).
\cref{basic_property_real} is a consequence of \cref{basic_property_conjugate}.
\cref{basic_property_analytic_a} is a special case of \cref{basic_property_analytic_b}.
\cref{basic_property_even_odd} is a consequence of \cref{basic_property_decompose_into_univariate} and the fact that the univariate HG function $\hg{k}$ is even iff $k$ is even and odd otherwise.
\end{proof}
\begin{property}[Derivatives]
Let $\mat{\Theta}^{-1}=[\va{q}_j]$ be the rows of $\mat{\Theta}^{-1}$.
Then the partial derivative, gradient, Hessian matrix and Laplacian of the AHG function are given by
\begin{tasks}[label-format=,label=\textrm{\thetheorem.\arabic*},label-width=3em,item-indent=5em]
\task \label{derivative_property_partial}
$
\pdv{}{r_j} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
+ \va{q}_j^\transpose\va{r} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
= 2 \va{q}_{j}^\transpose\va{\phi}_{\vb{\nu}}
~,$
\task \label{derivative_property_vec}
$
\pdv{}{\va{r}} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
= \mat{\Theta}^{-1}
\qty[
2\va{\phi}_{\vb{\nu}}
- \va{r}\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
]
~,$
\task \label{derivative_property_hessian}
$
\begin{aligned}[t]
\pdv[order={2}]{}{\va{r}} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
=&
-\mat{\Theta}^{-1} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\\
& +
2\mat{\Theta}^{-1}
\qty(
\mat{\Phi}_{\vb{\nu}}
- \va{r}\va{\phi}_{\vb{\nu}}^\transpose
- \va{\phi}_{\vb{\nu}}\va{r}^\transpose
+ \frac{1}{2}
\va{r}\va{r}^\transpose
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
)\mat{\Theta}^{-1}
~,
\end{aligned}
$
\task \label{derivative_property_laplacian}
$
\begin{aligned}[t]
\laplacian{\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})}
=&
-
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\tr\mat{\Theta}^{-1}
+
2\tr\qty(
\mat{\Theta}^{-2}
\mat{\Phi}_{\vb{\nu}}
)
\\
& +
\qty(\mat{\Theta}^{-1}\va{r})^\transpose
\qty[
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\mat{\Theta}^{-1}\va{r}
-
4 \mat{\Theta}^{-1}\va{\phi}_{\vb{\nu}}
]
~,
\end{aligned}
$
\end{tasks}
where
$\laplacian=\sum_j\pdv[order={2}]{}{r_j}$ is the Laplace operator (taken with respect to $\va{r}$),
$\pdv[order={2}]{}{\va{r}}$ is the Hessian, the matrix $\mat{\Phi}_{\vb{\nu}}$ is given by
\cref{derivate_eq_Phi} and with
\begin{align}
\va{\phi}_{\vb{\nu}}
&=
\frac{1}{\sqrt{2}}
\begin{bmatrix}
\sqrt{\nu_1} \HG[\mat{\Theta}]{\vb{\nu}-\vb{\varepsilon}_1}\qty(\va{r}), &
\sqrt{\nu_2} \HG[\mat{\Theta}]{\vb{\nu}-\vb{\varepsilon}_2}\qty(\va{r}), &
\ldots, &
\sqrt{\nu_n} \HG[\mat{\Theta}]{\vb{\nu}-\vb{\varepsilon}_n}\qty(\va{r})
\end{bmatrix}^\transpose
~,
\end{align}
where $\vb{\varepsilon}_k\in\Natural^n$ is such that $(\varepsilon_k)_j=\kdelta{jk}$, i.e. the multi-index with $1$ at position $k$ and 0 elsewhere.
\begin{remark}
We adopt the convention that the AHG function vanishes identically if its degree contains negative elements.
\end{remark}
\begin{remark}
\cref{derivative_property_vec,derivative_property_partial} were first derived by \citet{jastor10.2307.25050684}.
A proof is provided below for completeness.
\end{remark}
\end{property}
\begin{proof}
Differentiate the generating function (\cref{HG_generating_function}):
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\pdv{}{r_j} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&=
\va{q}_{j}^\transpose\qty(2\va{x} - \va{r})
\frac{
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{x})}
}{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\\
&=
\va{q}_{j}^\transpose\qty(2\va{x} - \va{r})
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\end{align}
and equate the powers of $\va{x}$ on both sides, proving \cref{derivative_property_partial}.
\cref{derivative_property_vec} follows immediately from \cref{derivative_property_partial}.
Differentiate \cref{derivative_property_vec}:
\begin{align}
\pdv[order={2}]{}{\va{r}} \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&=
\mat{\Theta}^{-1}
\pdv{}{\va{r}}\qty[
2\va{\phi}_{\vb{\nu}}
- \va{r}\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
]
\\
&=
\mat{\Theta}^{-1}
\qty[
2\mat{\Phi}_{\vb{\nu}}
- \HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\qty[\mat{I} - \va{r}\qty(\mat{\Theta}^{-1}\va{r})^\transpose]
- 2\va{r}\qty(\mat{\Theta}^{-1}\va{\phi}_{\vb{\nu}})^\transpose
],
\end{align}
where $\mat{\Phi}_{\vb{\nu}}=\pdv{}{\va{r}}\va{\phi}_{\vb{\nu}}$ is the matrix with the following elements:
\begin{align}
\qty[\mat{\Phi}_{\vb{\nu}}]_{jk}
&\triangleq
\begin{cases}
\sqrt{\nu_j\qty(\nu_j-1)} \HG[\mat{\Theta}]{\vb{\nu}-2\vb{\varepsilon}_j}\qty(\va{r}) & \qif j=k \\
\sqrt{\nu_j\nu_k} \HG[\mat{\Theta}]{\vb{\nu}-\vb{\varepsilon}_j-\vb{\varepsilon}_k}\qty(\va{r}) & \qotherwise
\end{cases}
\label{derivate_eq_Phi}
\end{align}
and simplify, yielding \cref{derivative_property_hessian}.
To complete the proof,
note that $\laplacian\equiv\tr\pdv[order={2}]{}{\va{r}}$ and recall
that the trace of an outer product is the inner product. This gives \cref{derivative_property_laplacian}.
\end{proof}
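For $n=1$, $\mat{\Theta}=1$, \cref{derivative_property_partial} reduces to the classical identity $\hg{k}'(x)+x\,\hg{k}(x)=\sqrt{2k}\,\hg{k-1}(x)$, which is easy to spot-check with a central finite difference (a numerical sketch; names are ours):

```python
import math

def hg(k, x):  # univariate Hermite-Gauss function (three-term recurrence)
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for j in range(2, k + 1):
        h0, h1 = h1, math.sqrt(2.0 / j) * x * h1 - math.sqrt((j - 1) / j) * h0
    return h1

k, x, h = 5, 0.9, 1e-5
deriv = (hg(k, x + h) - hg(k, x - h)) / (2.0 * h)   # central difference
lhs = deriv + x * hg(k, x)
rhs = math.sqrt(2.0 * k) * hg(k - 1, x)
print(abs(lhs - rhs))  # limited only by the finite-difference step
```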
\begin{lemma}[Orthogonality and completeness] \label{lemma_orthogonality}
Given a symmetric matrix $\mat{\Theta}$ with a positive definite real part, the anisotropic Hermite-Gauss functions $\HG[\mat{\Theta}]{\vb{\nu}}$ form a complete orthonormal (with respect to their dual) basis of $\Real^n\to\Complex$ $L^2$-functions.
In other words,
\begin{enumerate}
\item
For all $\vb{\nu},\vb{\mu}\in\Natural^n$, $\@ifstar{\definp}{\definp*}{\HG[\mat{\Theta}]{\vb{\nu}}}{\HGd[\mat{\Theta}]{\vb{\mu}}} = \kdelta{\vb{\nu}\vb{\mu}}$,
where $\kdelta$ denotes the Kronecker delta; and
\item
If an $L^2$-function $f$ is orthogonal to all $\HG[\mat{\Theta}]{\vb{\nu}}$, then $f$ vanishes a.e.
\end{enumerate}
\end{lemma}
\begin{proof}
See
\ifx1\undefined
Ismail and Simeonov \cite{Ismail_Simeonov_2020}.
\else
\citet{Ismail_Simeonov_2020}.
\fi
\end{proof}
Our main contributions in this section start with the next lemma, which allows for the
expansion of an AHG function as a finite series of AHG functions with different anisotropy.
\begin{lemma}[Anisotropy transformation] \label{lemma_HG_anisotropy_transform_identity}
Given symmetric $\mat{\Theta}_1,\mat{\Theta}_2\in\Complex^{n\times n}$, with
$\Re\mat{\Theta}_1,\Re\mat{\Theta}_2\succ 0$, we have
\begin{align}
\HG[\mat{\Theta}_1]{\vb{\nu}}\qty(\va{r})
&=
\sqrt{\vb{\nu}! \abs{\mat{T}}}
\sum_{\substack{
\mat{\Omega}=\qty[\vb{\omega}_{j}^\transpose]\in\Natural^{n\times n}
\text{, s.t. }
\va{1}^\transpose\mat{\Omega}=\vb{\nu}
\\
\mathclap{
\text{with }
\vb{\mu} = \qty(\abs{\vb{\omega}_{1}},\abs{\vb{\omega}_{2}},\ldots,\abs{\vb{\omega}_{n}})
}
}}
\frac{\mat{T}^{\mat{\Omega}}}
{\mat{\Omega}!}
\sqrt{\vb{\mu}!}
\HG[\mat{\Theta}_2]{\vb{\mu}}\qty(\mat{T}\va{r})
\label{transformation_HG_identity}
~,
\end{align}
where $\mat{T}=\mat{\Theta}_2^{\half}\mat{\Theta}_1^{-\half}$.
The summation is over all $n\times n$ multi-index matrices $\mat{\Omega}$, with rows $\vb{\omega}_j$, such that the sum of the $k$-th column of $\mat{\Omega}$ is $\nu_k$.
The multi-index $\vb{\mu}\in\Natural^n$ is defined to be the row sums of $\mat{\Omega}$.\\
\begin{remark}
There are $\prod_j \binom{\nu_j+n-1}{n-1}$ such matrices, since each column of $\mat{\Omega}$ with sum $\nu_j$ can be chosen independently (stars and bars); this count grows polynomially in $\abs{\vb{\nu}}$ for fixed $n$.
\end{remark}
\end{lemma}
\begin{proof}
Start with the AHG generating function, \cref{HG_generating_function}, and perform the variable changes $\va{y}=\mat{T}\va{x}$ and $\va{s}=\mat{T}\va{r}$, viz.
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\HG[\mat{\Theta}_1]{\vb{\nu}}\qty(\va{r})
&=
\frac{
\ee^{-\frac{1}{2}\va{s}^\transpose\mat{\Theta}_2^{-1}\va{s} + \va{y}^\transpose\mat{\Theta}_2^{-1}\qty(2\va{s}-\va{y})}
}{\qty(\mpi^n\abs{\mat{\Theta}_1})^{\frac{1}{4}}}
\\
&=
\abs{\mat{T}}^{\half}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{y}^{\vb{\nu}}
\HG[\mat{\Theta}_2]{\vb{\nu}}\qty(\va{s})
~.
\end{align}
Then, by the multinomial theorem:
\begin{align}
y_k^{\nu_k} &=
\sum_{\substack{\;\;\vb{\omega}\in\Natural^n\\\abs{\vb{\omega}}=\nu_k}}
\frac{\nu_k!}{\vb{\omega}!}
\va{t}_k^{\mkern2mu \vb{\omega}}
\va{x}^{\mkern1mu \vb{\omega}}
~,
\end{align}
where the summation is over all multi-indices $\vb{\omega}\in\Natural^n$ with $\abs{\vb{\omega}}=\nu_k$ and we denote $\mat{T}=\qty[\va{t}_{j}^\transpose]$, i.e. $\va{t}_j$ are the rows of $\mat{T}$.
The two equations above yield
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
\HG[\mat{\Theta}_1]{\vb{\nu}}\qty(\va{r})
&=
\abs{\mat{T}}^{\half}
\sum_{\substack{
\mat{\Omega}=\qty[\vb{\omega}_{j}^\transpose]\in\Natural^{n\times n},
\\
\mathclap{
\text{with }
\vb{\mu} = \qty(\abs{\vb{\omega}_{1}},\abs{\vb{\omega}_{2}},\ldots,\abs{\vb{\omega}_{n}})
}
}}
\sqrt{2^{\abs{\vb{\mu}}}\vb{\mu}!}
\HG[\mat{\Theta}_2]{\vb{\mu}}\qty(\va{s})
\prod_k
\frac{
\va{t}_k^{\mkern2mu \vb{\omega}_{k}}
\va{x}^{\mkern1mu \vb{\omega}_{k}}
}{\vb{\omega}_{k}!}
~.
\end{align}
Equating the powers of $\va{x}$ on both sides above gives \cref{transformation_HG_identity}.
\end{proof}
Immediate consequences of the above lemma are the next few corollaries.
The first corollary facilitates the dimensional decomposition of an arbitrary AHG function into (finite) univariate HG functions.
This has useful computational applications.
\begin{corollary}[Dimensional decomposition] \label{corollary_dimensional_decomposition}
With $\va{s}=\mat{\Theta}^{-\half}\va{r}$,
\begin{align}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
&=
\sqrt{\vb{\nu}!}
\abs{\mat{\Theta}}^{-\frac{1}{4}}
\sum_{\substack{
\mat{\Omega}=\qty[\vb{\omega}_{j}^\transpose]\in\Natural^{n\times n}
\text{, s.t. }
\va{1}^\transpose\mat{\Omega}=\vb{\nu}
\\
\mathclap{
\text{with }
\vb{\mu} = \qty(\abs{\vb{\omega}_{1}},\abs{\vb{\omega}_{2}},\ldots,\abs{\vb{\omega}_{n}})
}
}}
\frac{\qty(\mat{\Theta}^{-\half})^{\mat{\Omega}}}
{\mat{\Omega}!}
\sqrt{\vb{\mu}!}
\prod_k
\hg{\mu_k}\qty(s_k)
.
\end{align}
\end{corollary}
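A computational sketch of the corollary for $n=2$ (all function names are ours): the bivariate AHG function is assembled from univariate $\hg{k}$ values and $\mat{\Theta}^{-\half}$, and the result is validated at one point against the generating function \cref{HG_generating_function}, truncated at a total degree where the tail is negligible:

```python
import math

def hg(k, x):
    """Univariate Hermite-Gauss function, stable three-term recurrence."""
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for j in range(2, k + 1):
        h0, h1 = h1, math.sqrt(2.0 / j) * x * h1 - math.sqrt((j - 1) / j) * h0
    return h1

def inv_sqrt_2x2(Theta):
    """Theta^(-1/2) for a symmetric positive definite 2x2 matrix (eigendecomposition)."""
    (a, b), (_, c) = Theta
    disc = math.sqrt((a - c) ** 2 / 4.0 + b * b)
    l1, l2 = (a + c) / 2.0 + disc, (a + c) / 2.0 - disc
    v = (b, l1 - a) if abs(b) > 1e-14 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    n = math.hypot(v[0], v[1])
    v = (v[0] / n, v[1] / n)                 # unit eigenvector for l1
    w = (-v[1], v[0])                        # unit eigenvector for l2
    f1, f2 = l1 ** -0.5, l2 ** -0.5
    off = f1 * v[0] * v[1] + f2 * w[0] * w[1]
    return ((f1 * v[0] ** 2 + f2 * w[0] ** 2, off),
            (off, f1 * v[1] ** 2 + f2 * w[1] ** 2))

def ahg(nu, Theta, r):
    """Bivariate AHG function via the dimensional-decomposition corollary."""
    T = inv_sqrt_2x2(Theta)                  # Theta^(-1/2)
    det = Theta[0][0] * Theta[1][1] - Theta[0][1] ** 2
    s = (T[0][0] * r[0] + T[0][1] * r[1], T[1][0] * r[0] + T[1][1] * r[1])
    n1, n2 = nu
    total = 0.0
    for w11 in range(n1 + 1):                # Omega column sums are (n1, n2)
        w21 = n1 - w11
        for w12 in range(n2 + 1):
            w22 = n2 - w12
            m1, m2 = w11 + w12, w21 + w22    # mu = row sums of Omega
            c = (T[0][0] ** w11 * T[0][1] ** w12 * T[1][0] ** w21 * T[1][1] ** w22
                 / (math.factorial(w11) * math.factorial(w12)
                    * math.factorial(w21) * math.factorial(w22)))
            total += (c * math.sqrt(math.factorial(m1) * math.factorial(m2))
                      * hg(m1, s[0]) * hg(m2, s[1]))
    return math.sqrt(math.factorial(n1) * math.factorial(n2)) * det ** -0.25 * total

# Validate against the generating function at one point (x small, sum truncated)
Theta = ((1.0, 0.3), (0.3, 0.8))
r, x = (0.5, -0.3), (0.2, 0.1)
lhs = sum(math.sqrt(2.0 ** (k1 + k2) / (math.factorial(k1) * math.factorial(k2)))
          * x[0] ** k1 * x[1] ** k2 * ahg((k1, k2), Theta, r)
          for k1 in range(13) for k2 in range(13))
Ti = inv_sqrt_2x2(Theta)
def q(u, v):   # u^T Theta^(-1) v, using Theta^(-1) = Ti Ti
    tu = (Ti[0][0] * u[0] + Ti[0][1] * u[1], Ti[1][0] * u[0] + Ti[1][1] * u[1])
    tv = (Ti[0][0] * v[0] + Ti[0][1] * v[1], Ti[1][0] * v[0] + Ti[1][1] * v[1])
    return tu[0] * tv[0] + tu[1] * tv[1]
det = Theta[0][0] * Theta[1][1] - Theta[0][1] ** 2
rhs = (math.pi ** 2 * det) ** -0.25 * math.exp(-0.5 * q(r, r) + 2.0 * q(x, r) - q(x, x))
print(abs(lhs - rhs))  # tiny
```

For $\mat{\Theta}=\mat{I}$ the sum collapses to the single diagonal $\mat{\Omega}$ and the product form of \cref{basic_property_decompose_into_univariate} is recovered.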
It is often important to evaluate the AHG functions at 0, e.g., for computation of the peak energy of optical beams or the determination of
the total energy carried by a wave ensemble \cite{Steinberg_hg_2021}.
The next corollary provides an explicit expression for the values at $0$ and may admit interesting combinatorics.
\begin{corollary}[The AHG function at 0] \label{corollary_hg_at_zero}
Applying \cref{corollary_dimensional_decomposition} and recalling the values of the Hermite polynomials at 0, viz. $H_k(0)=(-2)^{\frac{k}{2}}(k-1)!!$ when $k$ is even and $H_k(0)=0$ when $k$ is odd, results in
\begin{align}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(0)
&=
\frac{\sqrt{\vb{\nu}!}}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\sum_{\substack{
\mat{\Omega}=\qty[\vb{\omega}_{j}^\transpose]\in\Natural^{n\times n}
\text{, s.t. }
\va{1}^\transpose\mat{\Omega}=\vb{\nu}
\\
\mathclap{
\text{and }
\vb{\mu} = \qty(\abs{\vb{\omega}_{1}},\abs{\vb{\omega}_{2}},\ldots,\abs{\vb{\omega}_{n}})\in(2\Natural)^n
}
}}
\frac{\qty(\mat{\Theta}^{-\half})^{\mat{\Omega}}}
{\mat{\Omega}!}
\ii^{\abs{\vb{\mu}}}
\qty(\vb{\mu}-\vb{1})!!
,
\end{align}
with $\vb{1}=\qty(1,1,\ldots,1)\in\Natural^n$.
\begin{remark}
Note that the summation is now also constrained to multi-index matrices with even row sums.
The double factorial of $-1$ is defined to be 1.
\end{remark}
\end{corollary}
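For $n=1$ the corollary reduces to $\hg{k}(0)=(-1)^{k/2}(k-1)!!\,/\sqrt{k!}\;\mpi^{-1/4}$ for even $k$ (and $0$ for odd $k$), which is easy to spot-check numerically (a sketch; names are ours):

```python
import math

def hg(k, x):  # univariate Hermite-Gauss function (three-term recurrence)
    h0 = math.pi ** -0.25 * math.exp(-x * x / 2.0)
    if k == 0:
        return h0
    h1 = math.sqrt(2.0) * x * h0
    for j in range(2, k + 1):
        h0, h1 = h1, math.sqrt(2.0 / j) * x * h1 - math.sqrt((j - 1) / j) * h0
    return h1

def dfact(n):  # double factorial, with the convention (-1)!! = 1
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def hg_at_zero(k):
    """Closed form for hg_k(0): zero for odd k, signed double factorial otherwise."""
    if k % 2:
        return 0.0
    return ((-1) ** (k // 2) * dfact(k - 1)
            / math.sqrt(math.factorial(k)) * math.pi ** -0.25)

print(hg(4, 0.0), hg_at_zero(4))  # the two values agree
```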
\begin{lemma}[Offseted argument] \label{lemma_offseted}
For an arbitrary $\va{s}\in\Complex^n$:
\begin{align}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}+\va{s})
&=
2^{-\frac{\abs{\vb{\nu}}}{2}}
\qty(\mpi^n \abs{\mat{\Theta}})^{\frac{1}{4}}
\ee^{\frac{1}{2} \qty(\va{r}-\va{s})^\transpose \mat{\Theta}^{-1}\qty(\va{r}-\va{s})}
\nonumber
\\
&\qquad\qquad\quad\times
\sum_{\substack{
\vb{\mu}\in\Natural^n \\
\mathclap{
\text{s.t. }
\vb{\mu}\preceq\vb{\nu}
}
}}
{\binom{\vb{\nu}}{\vb{\mu}}}^{\half}
\HG[\mat{\Theta}]{\vb{\nu}-\vb{\mu}}\qty(\sqrt{2}\va{r})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\sqrt{2}\va{s})
~.
\end{align}
\end{lemma}
\begin{proof}
Via the generating function:
\begin{align}
\!\!\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
&\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}+\va{s})
=
\frac{
\ee^{-\frac{1}{2}\qty(\va{r}+\va{s})^\transpose\mat{\Theta}^{-1}\qty(\va{r}+\va{s}) + \va{x}^\transpose\mat{\Theta}^{-1}\qty(2\qty(\va{r}+\va{s})-\va{x})}
}{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\\
&=
\frac{
\ee^{-\frac{1}{2}\qty(\va{r}-\va{s})^\transpose\mat{\Theta}^{-1}\qty(\va{r}-\va{s})}
}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\ee^{-\frac{1}{2}\va{r}^\transpose\qty(\frac{1}{2}\mat{\Theta})^{-1}\va{r} +
\frac{\va{x}^\transpose}{2}(\frac{1}{2}\mat{\Theta})^{-1}\qty(2\va{r}-\frac{\va{x}}{2})}
\nonumber \\
&\qquad\qquad
\times
\ee^{-\frac{1}{2}\va{s}^\transpose\qty(\frac{1}{2}\mat{\Theta})^{-1}\va{s} +
\frac{\va{x}^\transpose}{2}(\frac{1}{2}\mat{\Theta})^{-1}\qty(2\va{s}-\frac{\va{x}}{2})}
\\
&=
\frac{
\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}
}
{2^{\frac{n}{2}}}
\ee^{-\frac{1}{2}\qty(\va{r}-\va{s})^\transpose\mat{\Theta}^{-1}\qty(\va{r}-\va{s})}
\nonumber
\\
&\qquad \times
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n \vphantom{\vb{\mu}}}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}
\frac{2^{\abs{\vb{\mu}}}}{\vb{\mu}!}}
\qty(\frac{1}{2}\va{x})^{\vb{\nu}+\vb{\mu}}
\HG[\frac{1}{2}\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\HG[\frac{1}{2}\mat{\Theta}]{\vb{\mu}}\qty(\va{s})
~.
\end{align}
Equating the powers of $\va{x}$ and applying \cref{basic_property_analytic_b} yields the desired result.
\end{proof}
\begin{lemma}[Product of AHG functions] \label{lemma_product_of_hg}
\begin{align}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r})
=&
\sqrt{\frac{\vb{\nu}!\vb{\mu}!}{2^{\abs{\vb{\nu}}+\abs{\vb{\mu}}}}}
\frac{\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\nonumber
\\
&\quad\times
\sum_{\substack{
\qquad \mat{\Omega}\in\Natural^{n\times n} ,\qquad \\
\mathclap{
\text{s.t.}\; \vb{\beta} = \vb{\nu} - \mat{\Omega}\va{1} \in \Natural^n,
}\\
\mathclap{
\;\ \vb{\gamma} = \vb{\mu} - \va{1}^\transpose\mat{\Omega} \in \Natural^n
}
}}
\frac{
\qty(2\mat{\Theta}^{-1})^{\mat{\Omega}}
\sqrt{2^{\abs{\vb{\beta}}+\abs{\vb{\gamma}}}\qty(\vb{\beta}+\vb{\gamma})!}
}
{\mat{\Omega}!\vb{\beta}!\vb{\gamma}!}
\HG[\mat{\Theta}]{\vb{\beta}+\vb{\gamma}}\qty(\va{r})
~.
\end{align}
That is, the sum is over the multi-index matrices $\mat{\Omega}$, with $\vb{\beta}$ being $\vb{\nu}$ minus the row sums of $\mat{\Omega}$, $\vb{\gamma}$ being $\vb{\mu}$ minus the column sums of $\mat{\Omega}$ and such that $\vb{\beta},\vb{\gamma}$ are multi-indices
(consisting of non-negative integers).
\end{lemma}
\begin{proof}
\begin{align}
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n} &
\sqrt{\frac{2^{\abs{\vb{\nu}}+\abs{\vb{\mu}}}}{\vb{\nu}!\vb{\mu}!}}
\va{x}^{\vb{\nu}}
\va{y}^{\vb{\mu}}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r})
\nonumber
\\
&=
\frac{
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{x})}
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \va{y}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{y})}
}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{2}}}
\\
&=
\frac{\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{2}}}
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} + \qty(\va{x}+\va{y})^\transpose\mat{\Theta}^{-1}\qty[2\va{r}-\qty(\va{x}+\va{y})]}
\ee^{2\va{x}^\transpose\mat{\Theta}^{-1}\va{y}}
\\
&=
\frac{\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}}
{\qty(\mpi^n\abs{\mat{\Theta}})^{\frac{1}{4}}}
\sum_{\vb{\alpha}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\alpha}}}}{\vb{\alpha}!}}
\qty(\va{x}+\va{y})^{\vb{\alpha}}
\HG[\mat{\Theta}]{\vb{\alpha}}\qty(\va{r})
\sum_{m\geq 0}
\frac{\qty(2\va{x}^\transpose\mat{\Theta}^{-1}\va{y})^m}{m!}
.
\end{align}
Denote by $q_{jk}$ the elements of $\mat{\Theta}^{-1}$, i.e., $\mat{\Theta}^{-1}=\qty[q_{jk}]$, and apply the multinomial theorem again:
\begin{align}
\qty(\va{x}+\va{y})^{\vb{\alpha}}
&=
\sum_{\substack{
\vb{\beta}\in\Natural^n,\\
\mathclap{
\text{s.t. }
\vb{\beta}\preceq\vb{\alpha}
}
}}
\binom{\vb{\alpha}}{\vb{\beta}}
\va{x}^{\vb{\beta}}
\va{y}^{\vb{\alpha}-\vb{\beta}}
~,
\\
\sum_{m\geq 0}
\frac{\qty(2\va{x}^\transpose\mat{\Theta}^{-1}\va{y})^m}{m!}
&=
\sum_{m\geq 0}
\frac{\qty[2 \sum_{jk} q_{jk}x_jy_k]^m}{m!}
=
\sum_{\substack{
\mat{\Omega}\in\Natural^{n\times n}
}}
\frac{\qty(2\mat{\Theta}^{-1})^{\mat{\Omega}}}{\mat{\Omega}!}
\qty(\va{x}\va{y}^\transpose)^{\mat{\Omega}}
~.
\end{align}
Equating the powers of $\va{x}$ and $\va{y}$ proves the lemma.
\end{proof}
\cref{lemma_product_of_hg,lemma_offseted} extend well-known results from the univariate case to the multivariate anisotropic case.
\section{Linear Canonical Transform} \label{section_lct}
The linear canonical transform (LCT) generalizes important well-known integral transforms, such as the (fractional) Fourier transform and the Fresnel transform.
The $n$-dimensional LCT (with unitary, angular-frequency kernels) is defined with respect to a matrix $\mat{A}=\qty[\begin{smallmatrix}a\ b\\c\ d\end{smallmatrix}]\in\Complex^{2\times2}$ with
$\abs{\mat{A}}=1$ as
\begin{align}
\lct[\mat{A}]{f}\qty(\va{\zeta})
&\triangleq
\qty(\frac{1}{2\mpi\ii b})^{\frac{n}{2}}
\ee^{\ii \frac{d}{2b} \va{\zeta}^2}
\int_{\Real^n} \dd{\va{r}^\prime}
f\qty(\va{r}^\prime)
\ee^{-\ii \frac{1}{2 b} \va{r}^\prime\cdot \qty(
2 \va{\zeta} - a \va{r}^\prime
)}
.
\label{lct}
\end{align}
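For intuition, the one-dimensional case of this definition can be evaluated by direct quadrature. The following sketch is our own illustration (not part of the paper): it takes $a=0$, $b=1$, $d=0$, i.e. the Fourier-transform parameter matrix introduced below, and checks that $\ii^{1/2}$ times the LCT of the standard Gaussian recovers its unitary angular-frequency Fourier transform, which is again the same Gaussian.

```python
import numpy as np

# 1-D LCT of f(r) = exp(-r^2/2) with A = [[0, 1], [-1, 0]] (a = 0, b = 1, d = 0).
# Multiplying by i^{1/2} should recover the unitary angular-frequency FT of f,
# which is again exp(-zeta^2/2).  (Illustrative sketch, not from the paper.)
r = np.linspace(-12.0, 12.0, 20001)
dr = r[1] - r[0]
f = np.exp(-r**2 / 2)

a, b, d = 0.0, 1.0, 0.0
for zeta in (0.0, 0.5, 1.3):
    kernel = np.exp(-1j / (2 * b) * r * (2 * zeta - a * r))
    integral = np.sum(f * kernel) * dr          # rectangle rule; f decays fast
    lct = (1 / (2 * np.pi * 1j * b)) ** 0.5 * np.exp(1j * d / (2 * b) * zeta**2) * integral
    ft = 1j**0.5 * lct                          # the i^{n/2} prefactor relating LCT(A_FT) to the FT
    assert np.isclose(ft, np.exp(-zeta**2 / 2), atol=1e-8), (zeta, ft)
print("ok")
```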
Our main result in this section follows:
\begin{theorem}[Linear canonical transform of the AHG function] \label{theorem_lct}
Suppose
$\mat{A}$ is as above. Then
\begin{align}
\lct[\mat{A}]{\HG[\mat{\Theta}]{\vb{\nu}}}\qty(\va{\zeta})
=
\qty(\frac{1}{\ii b})^{\abs{\vb{\nu}}+\frac{n}{2}}
\ee^{-\frac{1}{2} \va{\xi}^\transpose \mat{C} \va{\xi}}
\frac{\abs{\mat{\Xi}}^{\frac{1}{4}}}
{\abs{\mat{\Sigma}}^{\frac{1}{2}}\abs{\mat{\Theta}}^{\frac{1}{4}}}
\HGd[\mat{\Xi}]{\vb{\nu}}\qty(\va{\xi})
\label{eq_lct_hg_thrm}
~,
\end{align}
where
\begin{subequations}
\label{LCT_shorthands}
\begin{alignat}{3}
\mat{\Sigma} &= \mat{\Theta}^{-1}-\ii\frac{a}{b}\mat{I}
~,
& \qquad\qquad
\mat{\Xi} &= b^2[2(\mat{\Theta}\mat{\Sigma}\mat{\Theta})^{-1} - \mat{\Theta}^{-1}]
\label{LCT_shorthands_a}
~,
\\
\mat{C} &= b^{-1} \mat{\Theta} \qty(b^{-1}\mat{\Sigma}-\ii d \mat{\Sigma}^2) \mat{\Theta} - \mat{\Xi}^{-1}
~,
&
\va{\xi} &= \mat{\Sigma}^{-1}\mat{\Theta}^{-1}\va{\zeta}
\label{LCT_shorthands_b}
~,
\end{alignat}
\end{subequations}
under the conditions that $b\neq 0$ and $\mat{\Sigma},\mat{\Xi}$ both have a positive definite real part.
\begin{remark}
A sufficient condition for $\Re\mat{\Sigma}\succ 0$ is $a,b\in\Real$ (as $\Re\mat{\Theta}^{-1}\succ 0$).
\end{remark}
\end{theorem}
\begin{proof}
Take the LCT (with respect to the variable $\va{r}$) of each side of the generating function for $\HG[\mat{\Theta}]{\vb{\nu}}$ (\cref{HG_generating_function}).
Let $\va{y}=2\mat{\Theta}^{-1}\va{x}-\ii\frac{1}{b}\va{\zeta}$ and rewrite the integral as a multidimensional Gaussian integral with a linear term, which admits a well-known closed-form \cite{Stoof2009}
(convergence is ensured by $\Re\mat{\Sigma}\succ 0$). Then
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
&\lct[\mat{A}]{\HG[\mat{\Theta}]{\vb{\nu}}}\qty(\va{\zeta})
\nonumber
\\
&=
\qty(\mpi^n\abs{\mat{\Theta}})^{-\frac{1}{4}}
\lct[\mat{A}]{
\ee^{-\frac{1}{2}\qty(\va{r}^\prime)^\transpose\mat{\Theta}^{-1}\va{r}^\prime - \va{x}^\transpose\mat{\Theta}^{-1}\va{x} + 2\qty(\va{r}^\prime)^\transpose\mat{\Theta}^{-1}\va{x}}
}\qty(\va{\zeta})
\\
&=
\qty(\mpi^n\abs{\mat{\Theta}})^{-\frac{1}{4}}
\qty(\frac{1}{2\mpi\ii b})^{\frac{n}{2}}
\ee^{\ii \frac{d}{2b} \va{\zeta}^2}
\ee^{-\va{x}^\transpose\mat{\Theta}^{-1}\va{x}}
\int_{\Real^n} \dd{\va{r}^\prime}
\ee^{
-\frac{1}{2}\qty(\va{r}^\prime)^\transpose
\mat{\Sigma}
\va{r}^\prime
+
\va{y}^\transpose \va{r}^\prime
}
\\
&= \label{eq_gaussian_integral_thm_lct_proof}
\qty(\mpi^n\abs{\mat{\Theta}})^{-\frac{1}{4}}
\frac{1}{\qty(\ii b)^{\frac{n}{2}} \abs{\mat{\Sigma}}^{\frac{1}{2}}}
\ee^{\ii \frac{d}{2b} \va{\zeta}^2}
\ee^{-\va{x}^\transpose\mat{\Theta}^{-1}\va{x}}
\ee^{
\frac{1}{2}
\va{y}^\transpose \mat{\Sigma}^{-1} \va{y}
}
~.
\end{align}
Rewrite the right-hand side above in terms of $\va{x},\va{\xi},\mat{\Xi}$ in the form of the generating function of the dual AHG function (\cref{HG_dual_generating_function}), i.e.:
\begin{align}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\va{x}^{\vb{\nu}}
&\lct[\mat{A}]{\HG[\mat{\Theta}]{\vb{\nu}}}\qty(\va{\zeta})
\nonumber
\\
=&
\frac{\qty(\mpi^n\abs{\mat{\Theta}})^{-\frac{1}{4}}}
{\qty(\ii b)^{\frac{n}{2}} \abs{\mat{\Sigma}}^{\frac{1}{2}}}
\ee^{-\frac{1}{2} \va{\xi}^\transpose \mat{C} \va{\xi}}
\ee^{
-\frac{1}{2} \va{\xi}^\transpose \mat{\Xi}^{-1} \va{\xi} +
\qty(-\frac{\ii}{b}\va{x})^\transpose
\qty[2\va{\xi} - \mat{\Xi}\qty(-\frac{\ii}{b}\va{x})
]}
\\
=&
\frac{1}{\qty(\ii b)^{\frac{n}{2}}}
\frac{\abs{\mat{\Xi}}^{\frac{1}{4}}}
{\abs{\mat{\Sigma}}^{\frac{1}{2}}\abs{\mat{\Theta}}^{\frac{1}{4}}}
\ee^{-\frac{1}{2} \va{\xi}^\transpose \mat{C} \va{\xi}}
\sum_{\vb{\nu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}}}{\vb{\nu}!}}
\qty(-\frac{\ii\va{x}}{b})^{\vb{\nu}}
\HGd[\mat{\Xi}]{\vb{\nu}}\qty(\va{\xi})
~.
\end{align}
Equating the powers of $\va{x}$ on both sides yields the final result.
\end{proof}
\begin{lemma}[Eigenfunctions of the linear canonical transform] \label{lemma_eigenfunctions_lct}
If $a=d$, $a^2\neq 1$ and $\sqrt{(a^2-1)b^2}\neq -ab$, then set
$\alpha=\ii\frac{b^2}{\sqrt{(a^2-1)b^2}+ab}, \beta=\ii\frac{b^2}{\sqrt{(a^2-1)b^2}}$.
We have
\begin{align}
\lct[\mat{A}]{\HG[\beta\mat{I}]{\vb{\nu}}} &=
\qty(\frac{1}{\ii b})^{\abs{\vb{\nu}}+\frac{n}{2}}
\alpha^{\abs{\vb{\nu}}}
\sqrt{\alpha^n}
\HG[\beta\mat{I}]{\vb{\nu}}
~.
\end{align}
\end{lemma}
\begin{proof}
$\mat{\Theta}=\beta\mat{I}$, therefore \cref{LCT_shorthands_a,LCT_shorthands_b} become
\begin{align}
\mat{\Sigma} = \alpha^{-1}\mat{I}
\text{, } &&
\mat{\Xi} = \alpha^2\mat{\Theta}^{-1}
\text{, }&&
\mat{C} = 0
\text{, } &&
\mat{\Xi}^{-1}\va{\xi} = \alpha^{-1}\va{\zeta}
.
\end{align}
Now consider \cref{eq_lct_hg_thrm} and apply \cref{basic_property_dual,basic_property_analytic_b}:
\begin{align}
\frac{\abs{\mat{\Xi}}^{\frac{1}{4}}}
{\abs{\mat{\Sigma}}^{\frac{1}{2}}\abs{\mat{\Theta}}^{\frac{1}{4}}}
\HGd[\mat{\Xi}]{\vb{\nu}}\qty(\va{\xi})
&=
\frac{\HG[\mat{\Xi}^{-1}]{\vb{\nu}}\qty(\mat{\Xi}^{-1}\va{\xi})}
{\abs{\mat{\Xi}}^{\frac{1}{4}}\abs{\mat{\Sigma}}^{\frac{1}{2}}\abs{\mat{\Theta}}^{\frac{1}{4}}}
=
\frac{\HG[\alpha^{-2}\mat{\Theta}]{\vb{\nu}}\qty(\frac{1}{\alpha}\va{\zeta})}
{\abs{\mat{\Xi}}^{\frac{1}{4}}\abs{\mat{\Sigma}}^{\frac{1}{2}}\abs{\mat{\Theta}}^{\frac{1}{4}}}
=
\alpha^{\abs{\vb{\nu}}}
\sqrt{\alpha^n}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{\zeta})
~,
\end{align}
from which \cref{lemma_eigenfunctions_lct} follows.
\end{proof}
The following corollaries follow from \cref{theorem_lct,lemma_eigenfunctions_lct} as well as the basic properties of the AHG functions.
\begin{corollary}[Fourier Transform] \label{cor_ft}
The LCT reduces to the standard Fourier transform (with unitary, angular frequency kernels) by setting $\mat{A}_\text{FT}=\qty[\begin{smallmatrix}0 & 1 \\ -1 & 0\end{smallmatrix}]$, viz.
$ \frft*{\HG[\mat{\Theta}]{\vb{\nu}}}
\triangleq
\ii^{\frac{n}{2}} \lct*[\mat{A}_\text{FT}]{\HG[\mat{\Theta}]{\vb{\nu}}}
$.
In this case the Fourier transform of the anisotropic Hermite-Gauss function is:
\begin{align}
\frft{\HG[\mat{\Theta}]{\vb{\nu}}}\qty(\va{\zeta})
&=
\qty(-\ii)^{\abs{\vb{\nu}}}
\HGd[\mat{\Theta}^{-1}]{\vb{\nu}}\!\qty(\va{\zeta})
=
\qty(-\ii)^{\abs{\vb{\nu}}}
\abs{\mat{\Theta}}^{\half}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\mat{\Theta}\va{\zeta})
.
\end{align}
In addition,
\begin{tasks}[label-format=,label=\textrm{\thetheorem.\arabic*},label-width=3em,item-indent=5em](1)
\task \label[corollary]{fft_eigenfunctions}
$\HG[\mat{I}]{\vb{\nu}}$ are the eigenfunctions of the Fourier transform with corresponding eigenvalues $(-\ii)^{\abs{\vb{\nu}}}$.
\task \label[corollary]{fft_order_real_imaginary}
If $\mat{\Theta}$ is real, then the Fourier transform of an even-order AHG function is purely real, and of an odd-order AHG function purely imaginary.
\end{tasks}
\end{corollary}
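The eigenfunction statement above can be illustrated numerically in one dimension ($\mat{\Theta}=1$), where the AHG function reduces to the standard normalized Hermite-Gauss function. The sketch below (ours; the helper hg implements that standard 1-D function) checks the eigenvalues $(-\ii)^k$ by quadrature:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

# 1-D check that the Hermite-Gauss functions are eigenfunctions of the unitary
# angular-frequency Fourier transform with eigenvalues (-i)^k.
def hg(k, x):
    norm = (2.0**k * factorial(k) * np.sqrt(pi)) ** -0.5
    return norm * hermval(x, [0.0] * k + [1.0]) * np.exp(-np.asarray(x) ** 2 / 2)

x = np.linspace(-15.0, 15.0, 30001)
dx = x[1] - x[0]
for k in range(5):
    for zeta in (0.4, 1.1):
        ft = np.sum(hg(k, x) * np.exp(-1j * x * zeta)) * dx / np.sqrt(2 * pi)
        assert np.isclose(ft, (-1j) ** k * hg(k, zeta), atol=1e-7), (k, zeta)
print("ok")
```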
\begin{corollary}[Fractional Fourier Transform] \label{cor_frft}
Similarly, the LCT also generalizes the fractional Fourier transform (FrFT) of degree $\gamma$ via the
parameter matrix $\mat{A}_\text{FrFT}(\gamma)=\qty[\begin{smallmatrix}\cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma\end{smallmatrix}]$ by
$ \frft*[\gamma]{\HG[\mat{\Theta}]{\vb{\nu}}}
\triangleq
\ee^{\ii \frac{n}{2}\gamma}
\lct*[\mat{A}_\text{FrFT}(\gamma)]{\HG[\mat{\Theta}]{\vb{\nu}}}
$.
The eigenfunctions of the fractional Fourier transform are $\HG[\mat{I}]{\vb{\nu}}$, with corresponding eigenvalues $\ee^{-\ii\gamma\abs{\vb{\nu}}}$.
\end{corollary}
The fact that the univariate HG functions serve as the eigenfunctions of the (fractional) Fourier transform is well-known.
The corollaries above generalize these results to the multivariate (fractional) Fourier transform and AHG functions.
This has interesting Fourier optics interpretations:
AHG beams remain AHG beams under far-field diffraction (\cref{fft_eigenfunctions}).
Furthermore, only diffracted even-order AHG modes propagate to the far-field while odd-order modes diffract as evanescent waves (consequence of \cref{fft_order_real_imaginary}).
We omit the proof of the following corollary.
\begin{corollary}[Laplace Transform] \label{cor_laplace}
The (two-sided) Laplace transform is a special case of the LCT with $\mat{A}_\text{L}=\qty[\begin{smallmatrix}0 & \ii \\ \ii & 0\end{smallmatrix}]$, viz.
\begin{align}
\lt{\HG[\mat{\Theta}]{\vb{\nu}}}\qty(\va{\zeta})
\triangleq
(-2\mpi)^{\frac{n}{2}} \lct[\mat{A}_\text{L}]{\HG[\mat{\Theta}]{\vb{\nu}}}
&=
\qty(2\mpi)^{\frac{n}{2}}
\ii^{\abs{\vb{\nu}}}
\abs{\mat{\Theta}}^{\half}
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\ii\mat{\Theta}\va{\zeta})
.
\end{align}
The eigenfunctions of the Laplace transform are
$
\HG[\mat{I}]{\vb{\nu}}(\frac{1+\ii}{\sqrt{2}}\va{\zeta})
$, with corresponding eigenvalues $
\qty(2\mpi)^{\frac{n}{2}}
\ii^{\abs{\vb{\nu}}}
\sqrt{(-\ii)^n}
$.
\end{corollary}
\section{Wigner-Ville Distribution} \label{section_wvd}
The \emph{Wigner-Ville Distribution} (WVD) is an integral transform that commonly arises in optics and quantum mechanics and is useful for processing linear frequency-modulated signals.
The WVD of a $\Real^n\to\Complex$ $L^2$-function $f$ is defined as the following Fourier transform:
\begin{align}
\wvd{f}\qty(\va{r},\va{\zeta})
& \triangleq
\frft{f\qty(\va{r}-\frac{1}{2}\va{\xi}) f^\star\qty(\va{r}+\frac{1}{2}\va{\xi})}\qty(\va{\zeta})
,
\end{align}
where the FT is taken with respect to the integration variable $\va{\xi}$.
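As a concrete example (ours, not from the text): with this convention, a short Gaussian-integral calculation gives the WVD of the normalized Gaussian $\mpi^{-1/4}\ee^{-x^2/2}$ as $\sqrt{2/\mpi}\,\ee^{-(r^2+\zeta^2)}$, which the following quadrature sketch confirms.

```python
import numpy as np
from math import pi

# WVD of the normalized Gaussian f(x) = pi^{-1/4} exp(-x^2/2), computed by direct
# quadrature with the unitary angular-frequency FT convention of the text, and
# compared against the closed form sqrt(2/pi) * exp(-(r^2 + zeta^2)).
f = lambda x: pi**-0.25 * np.exp(-(x**2) / 2)

xi = np.linspace(-15.0, 15.0, 30001)
dxi = xi[1] - xi[0]
for r_val in (0.0, 0.7):
    for zeta in (0.0, 1.2):
        integrand = f(r_val - xi / 2) * np.conj(f(r_val + xi / 2)) * np.exp(-1j * xi * zeta)
        wvd = np.sum(integrand) * dxi / np.sqrt(2 * pi)
        closed = np.sqrt(2 / pi) * np.exp(-(r_val**2 + zeta**2))
        assert np.isclose(wvd, closed, atol=1e-8), (r_val, zeta, wvd, closed)
print("ok")
```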
\begin{lemma} \label{wvd_lemma_fft}
Let $\va{r},\va{\zeta}\in\Real^n$, $\mat{\Theta}\in\Real^{n\times n}$ s.t. $\mat{\Theta}\succ 0$. Then
\begin{align}
&\frft{
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}-\frac{1}{2}\va{\xi})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r}+\frac{1}{2}\va{\xi})
}\qty(\va{\zeta})
=
\qty(4^n\mpi^n \abs{\mat{\Theta}}^3)^{\frac{1}{4}}
\ee^{-\frac{1}{2}\va{\zeta}^\transpose\mat{\Theta}\va{\zeta}}
\sum_{\substack{
\vb{\tau}\preceq\vb{\nu},\\
\vb{\sigma}\preceq\vb{\mu}
}}
\qty(-1)^{\abs{\vb{\nu}-\vb{\tau}}}
\nonumber
\\
&\;\qquad\times
\ii^{\abs{\vb{\nu}+\vb{\mu}-\vb{\tau}-\vb{\sigma}}}
\sqrt{
\binom{\vb{\nu}}{\vb{\tau}}
\binom{\vb{\mu}}{\vb{\sigma}}
\binom{\vb{\nu}+\vb{\mu}-\vb{\tau}-\vb{\sigma}}{\vb{\nu}-\vb{\tau}}
}
\HG[\mat{\Theta}]{\vb{\tau}}\qty(\va{r})
\HG[\mat{\Theta}]{\vb{\sigma}}\qty(\va{r})
\HGd[\mat{\Theta}^{-1}]{\vb{\mu}+\vb{\nu}-\vb{\sigma}-\vb{\tau}}\qty(\va{\zeta})
\nonumber
~.
\end{align}
\end{lemma}
\begin{proof}
Take the FT of the generating functions:
\begin{align}
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}+\abs{\vb{\mu}}}}{\vb{\nu}!\vb{\mu}!}}
&\va{x}^{\vb{\nu}}
\va{y}^{\vb{\mu}}
\frft{
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}-\frac{1}{2}\va{\xi})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r}+\frac{1}{2}\va{\xi})
}\qty(\va{\zeta})
\nonumber
\\
=&
\frac{1
}{\sqrt{\mpi^n\abs{\mat{\Theta}}}}
\frft
\Big\{
\ee^{-\frac{1}{2}\qty(\va{r}-\frac{1}{2}\va{\xi})^\transpose\mat{\Theta}^{-1}\qty(\va{r}-\frac{1}{2}\va{\xi}) + \va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{\xi}-\va{x})}
\nonumber \\
&\qquad\qquad\qquad\qquad\times
\ee^{-\frac{1}{2}\qty(\va{r}+\frac{1}{2}\va{\xi})^\transpose\mat{\Theta}^{-1}\qty(\va{r}+\frac{1}{2}\va{\xi}) + \va{y}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}+\va{\xi}-\va{y})}
\Big\}\qty(\va{\zeta})
\\
=&
\frac{1}
{\sqrt{\mpi^n\abs{\mat{\Theta}}}}
\ee^{-\va{r}^\transpose\mat{\Theta}^{-1}\va{r}}
\ee^{\va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{x})}
\ee^{\va{y}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{y})}
\nonumber
\\
&\qquad\qquad\qquad\qquad\qquad\times
\frft{
\ee^{-\frac{1}{2}\va{\xi}^\transpose\qty(2\mat{\Theta})^{-1}\va{\xi}}
\ee^{2\va{\xi}^\transpose\qty(2\mat{\Theta})^{-1}\qty(\va{y}-\va{x})}
}\qty(\va{\zeta})
\label{eq_lemma_wvd_fft_proof_step1}
~,
\end{align}
complete the square and integrate (in similar fashion to \cref{eq_gaussian_integral_thm_lct_proof}):
\begin{align}
\frft{
\ee^{\va{\xi}^\transpose\qty(2\mat{\Theta})^{-1}\qty[-\frac{1}{2}\va{\xi} + 2\qty(\va{y}-\va{x})]}
}\qty(\va{\zeta})
&=
\qty(\frac{1}{2\mpi})^{\frac{n}{2}}
\int \dd{\va{\xi}}
\ee^{-\frac{1}{2} \va{\xi}^\transpose\qty(2\mat{\Theta})^{-1}\va{\xi} + \va{\xi}^\transpose\va{\zeta}^\prime}
\\
&=
\sqrt{2^n\abs{\mat{\Theta}}}
\ee^{\qty(\va{\zeta}^\prime)^\transpose\mat{\Theta}\va{\zeta}^\prime}
\label{eq_lemma_wvd_fft_proof_step2}
\end{align}
where we set $\va{\zeta}^\prime=\mat{\Theta}^{-1}\qty(\va{y}-\va{x}) - \ii\va{\zeta}$.
The FT always converges since $\mat{\Theta}^{-1}\succ 0$.
Then, putting \cref{eq_lemma_wvd_fft_proof_step1,eq_lemma_wvd_fft_proof_step2} together and rewriting the result as the generating functions of
$\HG[\mat{\Theta}]{\vb{\tau}}(\va{r})$,
$\HG[\mat{\Theta}]{\vb{\sigma}}(\va{r})$,
$\HGd[\mat{\Theta}^{-1}]{\vb{\alpha}}(\va{\zeta})$ with variables $\va{x}$, $\va{y}$ and $\va{x}-\va{y}$, respectively, gives
\begin{align}
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\nu}}+\abs{\vb{\mu}}}}{\vb{\nu}!\vb{\mu}!}}
&\va{x}^{\vb{\nu}}
\va{y}^{\vb{\mu}}
\frft{
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}-\frac{1}{2}\va{\xi})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r}+\frac{1}{2}\va{\xi})
}\qty(\va{\zeta})
\nonumber
\\
&=
\qty(\frac{2}{\mpi})^{\frac{n}{2}}
\ee^{-\va{\zeta}^\transpose\mat{\Theta}\va{\zeta}}
\ee^{2\ii\qty(\va{x}-\va{y})^\transpose\va{\zeta}}
\ee^{\qty(\va{x}-\va{y})^\transpose\mat{\Theta}^{-1}\qty(\va{x}-\va{y})}
\nonumber \\
&\quad\qquad\qquad\times
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} +
\va{x}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{x})}
\ee^{-\frac{1}{2}\va{r}^\transpose\mat{\Theta}^{-1}\va{r} +
\va{y}^\transpose\mat{\Theta}^{-1}\qty(2\va{r}-\va{y})}
\\
&=
\qty(4^n\mpi^n \abs{\mat{\Theta}})^{\frac{1}{4}}
\ee^{-\frac{1}{2}\va{\zeta}^\transpose\mat{\Theta}\va{\zeta}}
\sum_{\vb{\alpha}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\alpha}}}}{\vb{\alpha}!}}
\qty(\va{x}-\va{y})^{\vb{\alpha}}
\ii^{\abs{\vb{\alpha}}}
\HGd[\mat{\Theta}^{-1}]{\vb{\alpha}}\qty(\va{\zeta})
\nonumber \\
&\qquad\qquad\qquad\qquad\times
\sum_{\vb{\tau},\vb{\sigma}\in\Natural^n}
\sqrt{\frac{2^{\abs{\vb{\tau}}+\abs{\vb{\sigma}}}}{\vb{\tau}!\vb{\sigma}!}}
\va{x}^{\vb{\tau}}
\va{y}^{\vb{\sigma}}
\HG[\mat{\Theta}]{\vb{\tau}}\qty(\va{r})
\HG[\mat{\Theta}]{\vb{\sigma}}\qty(\va{r})
~.
\end{align}
Finally, apply the multinomial theorem, viz.:
\begin{align}
\qty(\va{x}-\va{y})^{\vb{\alpha}}
&=
\sum_{\substack{
\vb{\beta}\preceq\vb{\alpha}
}}
\binom{\vb{\alpha}}{\vb{\beta}}
\qty(-1)^{\abs{\vb{\beta}}}
\va{x}^{\vb{\alpha} - \vb{\beta}}
\va{y}^{\vb{\beta}}
\end{align}
and equate the powers of $\va{x}$ and $\va{y}$, yielding the lemma.
\end{proof}
As any arbitrary $L^2$-function can be expanded in AHG space (\cref{lemma_orthogonality}), by using \cref{wvd_lemma_fft} we can write an expression for the WVD of that function.
In practice, this allows direct computation of the WVD for functions that can be expressed as a superposition of a limited number of AHG functions (e.g., AHG beams).
\begin{theorem}[WVD in AHG space] \label{wvd_theorem}
Let $\mat{\Theta}\in\Real^{n\times n}$, with $\Re\mat{\Theta}\succ 0$,
and $f(\va{r})=\sum_{\vb{\nu}} a_{\vb{\nu}} \HG[\mat{\Theta}]{\vb{\nu}}(\va{r})$ be an $\Real^n\to\Complex$ $L^2$-function expressed via its AHG-basis coefficients, viz. $a_{\vb{\nu}} = \definp{f}{\HGd[\mat{\Theta}]{\vb{\nu}}}$.
Then,
\begin{align}
&\wvd{f}\qty(\va{r},\va{\zeta})
=
\qty(4^n\mpi^n \abs{\mat{\Theta}})^{\frac{1}{4}}
\ee^{-\frac{1}{2}\va{\zeta}^\transpose\mat{\Theta}\va{\zeta}}
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n}
a_{\vb{\nu}}
a_{\vb{\mu}}^\star
\sum_{\substack{
\vb{\tau}\preceq\vb{\nu},\\
\vb{\sigma}\preceq\vb{\mu}
}}
\qty(-1)^{\abs{\vb{\nu}-\vb{\tau}}}
\nonumber \\
&\qquad\times
\ii^{\abs{\vb{\nu}+\vb{\mu}-\vb{\tau}-\vb{\sigma}}}
\sqrt{
\binom{\vb{\nu}}{\vb{\tau}}
\binom{\vb{\mu}}{\vb{\sigma}}
\binom{\vb{\nu}+\vb{\mu}-\vb{\tau}-\vb{\sigma}}{\vb{\nu}-\vb{\tau}}
}
\HG[\mat{\Theta}]{\vb{\tau}}\qty(\va{r})
\HG[\mat{\Theta}]{\vb{\sigma}}\qty(\va{r})
\HGd[\mat{\Theta}^{-1}]{\vb{\mu}+\vb{\nu}-\vb{\sigma}-\vb{\tau}}\qty(\va{\zeta})
\nonumber
~.
\end{align}
\end{theorem}
\begin{proof}
Write
\begin{align}
\wvd{f}\qty(\va{r},\va{\zeta})
&=
\frft{
\sum_{\vb{\nu},\vb{\mu}\in\Natural^n}
a_{\vb{\nu}} a_{\vb{\mu}}^\star
\HG[\mat{\Theta}]{\vb{\nu}}\qty(\va{r}-\frac{1}{2}\va{\xi})
\HG[\mat{\Theta}]{\vb{\mu}}\qty(\va{r}+\frac{1}{2}\va{\xi})
}\qty(\va{\zeta})
\end{align}
and apply \cref{wvd_lemma_fft}.
\end{proof}
\bibliographystyle{apalike}
Disposta a tutto (Wisegal) is a 2008 television film directed by Jerry Ciccoritti and starring Alyssa Milano, James Caan and Jason Gedrick.
Broadcast in the United States on March 15, 2008 on the cable network Lifetime with over 3 million viewers, in 2009 the film won a WIN Award as best television production, and Alyssa Milano also received a nomination as best leading actress.
Plot
Young Angie, a poor and lonely girl, gives birth to a beautiful baby girl, whom she names Patty. Fearing she cannot bear the financial burden, Angie toys with the idea of abandoning her daughter in front of St. Peter's church in New York, but she immediately changes her mind and decides to keep the child, who grows into a beautiful and intelligent young woman. One day, at the station where she usually stays with her mother, a homeless woman, Patty meets the police officer Dante Montanari, who soon becomes her husband. After a happy marriage blessed by the birth of two children, Dante dies of a tumor. Left alone with her children and a house to look after, Patty struggles to keep up the normal routine that, after many sacrifices, she has managed to build. Unable to find a job at the height of the Christmas season, Patty gives in to the flattery of the charming Frank Russo, a member of Salvatore Palmeri's mob clan, who offers her a position within the clan and promises her the management of a club recently seized from a debtor who was later eliminated. Needing a job to support her family, Patty accepts. She thus enters an infernal circle of violence, money, constant danger and hollow values, alongside Frank, who turns out to be a despicable man without any morals, attracted only by her looks. Tired of a life constantly under pressure, threatened by the guns of the "family", Patty decides to leave Palmieri's mob clan. But the road to safety will not be without difficulties.
Notes
External links
The Ministry of Transport (MOT) is a government agency of Syria responsible for transport in Syria. Its head office is in Damascus. Zouhair Khazim has been the minister since 2020.
References
External links
Ministry of Transport
\section{Introduction}
Throughout this paper, we consider simple, connected and undirected graphs.
Denote a graph by $G = (V, E)$, where $V = V(G)$ is called the vertex
set and $E = E(G)$ is called the edge set. For a vertex $v \in V(G)$, the neighborhood of $v$ is the set $N(v) = N_G(v) = \{w \in V(G) : vw \in E(G)\}$, and $d_G(v)$ (or $d(v)$) denotes the degree of $v$, with $d_{G}(v) = |N(v)|$. We write $n_i$ for the number of vertices of degree $i \geq 0$.
If a graph $G$ contains $n$
vertices and $n-1$ edges, then $G$ is called a tree. For a
vertex $v \in V (T )$ with $2 \leq d_T(v) \leq \Delta(T)-1$, its edge rotating capacity is defined to be $d_T (v)-1$. The total edge rotating capacity of a tree $T$ is equal to the sum of the
edge rotating capacities of its vertices that satisfy the condition $2 \leq d_T(v) \leq \Delta(T)-1$. As usual, denote $P_n$ by the path on $n$ vertices. The maximum vertex degree in the graph $G$ is denoted by $\Delta(G)$.
The degree sequence of $G$ is a sequence of positive integers $\pi=(d_1, d_2, \cdots, d_n)$ such that $d_1, d_2, \cdots, d_n$ are the degrees of the vertices of $G$. In this work, we order the vertex degrees non-increasingly, i.e., $d_1\geq d_2 \geq \cdots \geq d_n$. In addition, a sequence $\pi =(d_1, d_2, \cdots, d_n)$ is called a tree degree sequence if there exists a tree $T$ such that $\pi$ is its degree sequence. Furthermore, it is well known that the sequence $\pi =(d_1, d_2, \cdots, d_n)$ is a degree sequence of a tree with $n$ vertices if and only if
$$\sum_{i=1}^{n}d_i=2(n-1).$$
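This characterization is straightforward to check programmatically; the sketch below (function name ours) tests a few sequences:

```python
# A sequence of positive integers is a tree degree sequence iff it sums to 2(n - 1).
def is_tree_degree_sequence(degrees):
    n = len(degrees)
    return n >= 2 and all(d >= 1 for d in degrees) and sum(degrees) == 2 * (n - 1)

assert is_tree_degree_sequence([1, 1])              # P_2
assert is_tree_degree_sequence([3, 2, 1, 1, 1])     # a 5-vertex tree with one branch vertex
assert not is_tree_degree_sequence([2, 2, 2])       # sums to 2n: a cycle, not a tree
```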
At the interface of mathematics, chemistry and physics,
molecular invariants/descriptors can be useful for the study of quantitative structure-property relationships (QSPR) and quantitative structure-activity relationships (QSAR)
and
for the descriptive presentations of biological and chemical
properties, such as boiling and melting points, toxicity,
physico-chemical, and biological
properties~\cite{Gutman1996,Liu2015,LiuP2015,LiuPX2015,x001,0001,0002}. One of the oldest classes of topological molecular descriptors
are the Zagreb indices~\cite{Gutman1972}, which arise as terms
in approximate formulas for the total
$\pi$-electron energy of conjugated molecules, as follows.
\begin{eqnarray} \nonumber
M_1(G) = \sum_{u \in V(G)} d(u)^2
~\text{ and}
~ M_2(G) = \sum_{uv \in E(G)} d(u)d(v).
\end{eqnarray}
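For a small concrete example (our own illustration), the Zagreb indices of the path $P_4$ can be computed directly from its edge list:

```python
# First and second Zagreb indices of the path P_4 with edges 0-1-2-3 (degrees 1, 2, 2, 1).
edges = [(0, 1), (1, 2), (2, 3)]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
m1 = sum(d * d for d in deg.values())        # M_1: sum of squared degrees
m2 = sum(deg[u] * deg[v] for u, v in edges)  # M_2: sum of degree products over edges
assert m1 == 10      # 1 + 4 + 4 + 1
assert m2 == 8       # 2 + 4 + 2
```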
Based on the successful applications of the Zagreb indices~\cite{Gutman2014},
Todeschini et al. (2010)~\cite{RT20101,RT20102,Wang2015} presented the
following multiplicative variants of these molecular structure
descriptors:
\begin{eqnarray} \nonumber
\prod_1(G) = \prod_{u \in V(G)} d(u)^2 ~
\text{ and}\;\;
\prod_2(G) = \prod_{uv \in E(G)} d(u)d(v) = \prod_{u \in V(G)}
d(u)^{d(u)}.
\end{eqnarray}
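The identity between the edge product and the vertex product in the definition of $\prod_2$ above can be checked on a small example; the sketch below (helper names ours) computes both for the star $K_{1,3}$:

```python
from math import prod

# Multiplicative Zagreb indices of a small tree given as an edge list, verifying
# the identity prod_2(G) = product over vertices of d(u)^{d(u)}.
def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

star4 = [(0, 1), (0, 2), (0, 3)]        # star K_{1,3}: degrees 3, 1, 1, 1
d = degrees(star4)
pi1 = prod(du**2 for du in d.values())
pi2_edges = prod(d[u] * d[v] for u, v in star4)
pi2_vertices = prod(du**du for du in d.values())
assert pi1 == 9                         # 3^2 * 1 * 1 * 1
assert pi2_edges == 27 == pi2_vertices  # both forms of prod_2 agree
```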
Recently, many articles have
explored the multiplicative Zagreb indices at the interface of chemistry and
mathematics~\cite{Hu2005,Li2008,shi2015,BF2014,SM2014,Xu2014,WangJ2015}.
Iranmanesh et al.~\cite{Iranmanesh20102} explored the first and second multiplicative Zagreb indices for a class of chemical dendrimers.
Xu and Hua~\cite{Xu20102} provided a unified approach to
characterize the maximal and minimal trees, unicyclic
graphs and bicyclic graphs with respect to the multiplicative Zagreb
indices.
Wang and Wei~\cite{Wang2015} gave the maximum and minimum values of these indices in $k$-trees, and provided the corresponding extreme graphs.
Liu and Zhang~\cite{Liuz20102} investigated some sharp upper bounds for the
$\prod_1$-index and $\prod_2$-index in terms of graph parameters
such as order, size and radius.
Kazemi~\cite{Ramin2016} studied the bounds for
the moments and the probability generating function of these
indices in a randomly chosen molecular graph with tree structure
of order $n$.
Borovi\'canin et al.~\cite{Borov2016} introduced upper
bounds on the Zagreb indices of trees; moreover, a lower bound for the
first Zagreb index of trees with a given domination number is
determined and the extremal trees are characterized as well.
Borovi\'canin and Lampert~\cite{Bojana2015} provided the maximum and minimum
Zagreb indices of trees with a given
number of vertices of maximum degree.
Motivated by the above results, in this paper we further
investigate the multiplicative Zagreb indices of trees with an arbitrary number of vertices of maximum degree. The maximum and minimum values of $\prod_1(G)$ and $\prod_2(G)$ over such trees are provided. In addition, the
corresponding extreme graphs are characterized.
Our results extend and enrich some known conclusions
obtained in \cite{Bojana2015}.
\section{Preliminaries}
It is known that each tree has at least two vertices of minimum degree, called pendent vertices, as well as
some vertices of maximum degree. It is therefore natural to consider trees with an arbitrary number of
maximum degree vertices.
Let $\mathcal{T}_{n, k}$ be the class of trees with $n$ vertices, in which there exist $k$ vertices having the maximum degree with $n > k > 0$. Note that the path $P_n$ is the unique element of $\mathcal{T}_{n,n-2}$. So, in the following we consider the class $\mathcal{T}_{n,k}$ with $k\leq n-3$.
We first introduce several facts and tools, which are important in the proofs of following sections.
\begin{prop}\cite{Bojana2015} If $T\in \mathcal{T}_{n,k}$ is a tree with $k$ vertices of maximum degree $\Delta$, then $\Delta \leq \lfloor \frac{n-2}{k}\rfloor +1$.
\end{prop}
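Proposition 2.1 can be confirmed by brute force over all labelled trees on a small number of vertices, enumerated via Pr\"ufer sequences (a vertex occurring $t$ times in a sequence has degree $t+1$). This is our own sanity check, not part of the proof:

```python
from itertools import product
from collections import Counter

# Exhaustive check of Delta <= floor((n-2)/k) + 1 over all labelled trees on n vertices.
n = 7
for seq in product(range(n), repeat=n - 2):      # all Pruefer sequences of length n - 2
    counts = Counter(seq)
    deg = [counts.get(v, 0) + 1 for v in range(n)]
    delta = max(deg)
    k = deg.count(delta)                          # number of maximum degree vertices
    assert delta <= (n - 2) // k + 1, (seq, deg)
print("ok")
```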
By the routine calculations, one can derive the following propositions.
\begin{prop}
Let $f(x) = \frac{x}{x+m}$ be a function with $m > 0$. Then $f(x)$ is increasing for $x > 0$.
\end{prop}
\begin{prop}
Let $g(x) = \frac{x^x}{(x+m)^{x+m}}$ be a function with $m > 0$. Then $g(x)$ is decreasing for $x > 0$.
\end{prop}
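Both claims follow from the sign of the logarithmic derivative (e.g., $(\ln g)' = \ln\frac{x}{x+m} < 0$ for $x > 0$); the spot check below (illustrative only) samples both functions on a grid:

```python
# Spot-check the monotonicity claims of Propositions 2.2 and 2.3 for x > 0.
xs = [0.5 + 0.25 * i for i in range(40)]
for m in (1.0, 2.0, 5.0):
    f = [x / (x + m) for x in xs]
    g = [x**x / (x + m) ** (x + m) for x in xs]
    assert all(a < b for a, b in zip(f, f[1:]))      # f strictly increasing
    assert all(a > b for a, b in zip(g, g[1:]))      # g strictly decreasing
```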
Based on the above algebraic tools, we are ready to provide the sharp upper and lower bounds of the first multiplicative Zagreb index of such trees in Section 3, and of the second multiplicative Zagreb index in Section 4. Some of the notation and figures closely follow \cite{Bojana2015}.
\section{The sharp upper and lower bounds of the first multiplicative Zagreb index on trees}
In this section, we obtain the bounds of the first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$.
\subsection{The sharp upper bounds of $\prod_1$ on trees with a given number of vertices of maximum degree}
The first multiplicative Zagreb index of $\mathcal{T}_{n, k}$ can be routinely calculated if the degree sequence is given.
\noindent {\bf Lemma 3.1. } Let ${T}_{min}^1$ be a tree with minimal value of first multiplicative Zagreb index in $\mathcal{T}_{n,k}$. Then $\Delta ({T}_{min}^1) =\lfloor \frac{n-2}{k}\rfloor +1$.
\begin{proof} Let $\Delta $ be the maximum vertex degree in the tree $T_{min}^1$. By Proposition 2.1, we have $\Delta\leq \lfloor \frac{n-2}{k}\rfloor +1$. Denote $ \Delta_{max}=\lfloor \frac{n-2}{k}\rfloor +1$ and $n-2 = k \lfloor \frac{n-2}{k}\rfloor +r$, where $0\leq r <k$.
Firstly, we assume that $\Delta<\Delta_{max}$.
Denote $V({T}_{min}^1)$ =$\{v_1, \cdots, v_n\}$ by the degree sequence: $\pi=(d_1, d_2, \cdots , d_n)$. Then
$$\Delta=d_1=\cdots=d_k=\Delta_{max}-t, \; t>0.$$
Note that \begin{eqnarray}
\sum_{i=1}^{\Delta}n_i=n
\end{eqnarray}
and
\begin{eqnarray}
\sum_{i=1}^{\Delta} i n_i=2(n-1).
\end{eqnarray}
By the relations (1) and (2), we can obtain that
\begin{eqnarray}
n_1\geq 2+(\Delta -2)n_{\Delta}=2+k(\Delta -2).
\end{eqnarray}
Let $n_1=2+k(\Delta -2)+n_1'$ with $n_1'\geq 0$. Using (1) we obtain
$$n=2+k(\Delta -2)+n_1'+\sum_{i=2}^{\Delta-1} n_i+k,$$
which implies that
\begin{eqnarray}
\sum_{i=2}^{\Delta-1} n_i +n_1'=r+kt.
\end{eqnarray}
Also, by the relation (2), one can calculate that
\begin{eqnarray}
\sum_{i=2}^{\Delta-1} in_i +n_1'=2(r+kt).
\end{eqnarray}
Subtracting the relation (4) from (5), we have
\begin{eqnarray}
\sum_{i=2}^{\Delta-1} (i-1)n_i =r+kt\geq kt \geq k.
\end{eqnarray}
Then the total edge rotating capacity of this tree is greater than or equal to $k$.
Let $v_i\in V({T}_{min} ^1)$ ($k<i\leq n)$ be a vertex that has positive edge rotating capacity and let
$d_i$ be its degree with $2\leq d_i \leq \Delta -1$. Define a tree ${T}_1$ with the vertex degree sequence
$\pi_1=(d_1^1, d_2^1, \cdots, d_n^1)$ such that $d_1^1=\Delta +1$, $d_i^1=d_i -1$, and $d_j^1=d_j$ for $j\in \{2,\cdots, n\}$, $j\ne i$.
By the definition of $\prod_1$, we have
\begin{eqnarray}
\nonumber \frac{ \prod_1({T}_1)}{\prod_1({T}_{min}^1)} &&=\frac{(d_1+1)^2d_2^2\cdots(d_i-1)^2\cdots d_n^2}{d_1^2d_2^2 \dots d_i^2 \dots d_n^2}
\\ \nonumber &&= (\frac{d_1+1}{d_1})^2(\frac{d_i-1}{d_i})^2
\\ \nonumber &&= (\frac{\frac{(d_i-1)}{(d_i-1)+1} }{\frac{d_1}{d_1+1}})^2~ \;\;\;\;(\text{by Proposition} \;2.2)
\\
&&<1.
\end{eqnarray}
Therefore, $\prod_1({T}_1) <\prod_1({T}_{min}^1)$. Note that ${T}_1$ has maximum vertex degree $\Delta+1$, so ${T}_1\notin \mathcal{T}_{n,k}$. Since ${T}_1$ still has $k-1$ vertices of degree $\Delta$, and by the relation (6) the total positive edge rotating capacity of the vertices of degrees $2, 3, \dots, \Delta-1$ is at least $k-1$, we can conclude the following.
We recursively apply the above transformation $k-1$ more times, once for each remaining vertex of maximum degree $\Delta$. In step $l$ ($l=2,\cdots,k$, writing ${T}^1={T}_1$), we define a tree ${T}^l$ with degree sequence $\pi_l=(d_1^l, \dots ,d_n^l)$ such that $d_l^l=\Delta+1$, $d_i^l=d_i^{l-1}-1$, and $d_j^l=d_j^{l-1}$ for $j\in \{1, \cdots, n\}$, $j\ne i,\, l$, where $d_i^{l-1}$ is the degree of a vertex $v_i\in V(T^{l-1})$ with $k<i\leq n$ that has positive edge rotating capacity (such a vertex exists because the total edge rotating capacity of $T^{l-1}$ is at least $k-l+1$). After relabeling we may assume that the degrees $d_{k+1}, \cdots, d_n$ are in increasing order. Every such transformation strictly decreases the first multiplicative Zagreb index, so we obtain trees ${T}^2, \cdots, {T}^k$ with maximum vertex degree $\Delta+1=(\Delta_{max}-t)+1$ satisfying $\prod_1({T}^k)<\prod_1({T}^{k-1})< \cdots <\prod_1({T}^1)<\prod_1({T}_{min}^1)$. Since ${T}^k$ has exactly $k$ vertices of degree $\Delta+1$, all of them vertices of maximal degree, we have ${T}^k\in \mathcal{T}_{n,k}$.
Since $\prod_1({T}^k) <\prod_1({T}_{min}^1)$, this contradicts the fact that ${T}_{min}^1$ has the minimal first multiplicative Zagreb index in $\mathcal{T}_{n,k}$. Thus $t=0$, and $\Delta=\Delta_{max}=\lfloor \frac{n-2}{k}\rfloor +1$, which completes the proof.
\end{proof}
Based on the above proof, the following remark is immediate.
\noindent {\bf Remark 3.1. } The statement of Lemma 3.1 also holds for $k=n-2$, i.e., for $\mathcal{T}_{n, n-2} =\{P_n\}$.
The next theorem provides the sharp lower bound of $\prod_1$ and characterizes the extremal trees achieving this bound.
\noindent {\bf Theorem 3.1. } Let ${T}\in \mathcal{T}_{n,k}$, where $1\leq k\leq \frac{n}{2}-1$. Then
$$\prod_1({T})\geq\Delta^{2k}(\Delta-1)^{2p}\mu^{2},$$
where the equality holds if and only if its degree sequence is $(\underbrace{\Delta,\cdots,\Delta}_{k}, \underbrace{\Delta-1,\cdots,\Delta-1}_p, \mu, \underbrace{1,1,\cdots,1}_{n-k-p-1})$ with $\Delta=\lfloor \frac{n-2}{k}\rfloor +1$, $p=\lfloor \frac{n-2-k(\Delta-1)}{\Delta-2}\rfloor $ and $\mu=n-1-k(\Delta-1)-p(\Delta-2)$.
\begin{proof}
Let $\pi=(d_1,d_2,\cdots,d_n)$ be the vertex degree sequence of a tree $T_{min}^1$ with minimal first multiplicative Zagreb index in $\mathcal{T}_{n,k}$. By Lemma 3.1 we have $d_1=d_2=\cdots=d_k=\Delta={\lfloor \frac{n-2}{k}\rfloor+1}$, so $\Delta-1=\lfloor \frac{n-2}{k}\rfloor$ is the integer part of $\frac{n-2}{k}$. As in the previous lemma, write $n-2=k(\Delta-1)+r$, where $0\leq r<k$. From the relation (3), it follows that $n_1\geq k(\Delta-2)+2=n-k-r$, i.e., the tree has at least $n-k-r$ vertices of degree one. Therefore, $d_n=d_{n-1}=\dots=d_{k+r+1}=1$.
Note that $n_{\Delta}=k$ and $n_1=k(\Delta-2)+2+n_1'$ with $n_1'\geq 0$. Setting $t=0$ in the relations (4) and (6), we obtain that
\begin{eqnarray}
\sum_{i=2}^{\Delta-1}n_i+n_1'=r
\end{eqnarray}
and
\begin{eqnarray}
\sum_{i=2}^{\Delta-1}(i-1)n_i=r.
\end{eqnarray}
Since $n_i\geq 0$ with $i=2,3,\cdots,\Delta-1$, from the relation (9) it follows that $p=n_{\Delta-1}\leq\frac{r}{\Delta-2}$. Because $p$ is a non-negative integer number, we obtain $p\leq \lfloor \frac{r}{\Delta-2}\rfloor$.
Now, suppose that $p< \lfloor \frac{r}{\Delta-2}\rfloor$ and $\lfloor \frac{r}{\Delta-2}\rfloor=t'$. Let
\begin{eqnarray}
r=n_2 + 2n_3 +\cdots + (\Delta-3)n_{\Delta-2}+(\Delta-2)p=t'(\Delta-2) + y,\end{eqnarray}
where $0\leq y<(\Delta-2)$. Then $\sum_{i=2}^{\Delta-2}(i-1)n_i\geq {\Delta-2}$.
So there exist $n_i$ and $n_j(2\leq i< j\leq {\Delta-2})$, where $n_i\ne 0$ and $n_j\ne 0$ or $n_i\geq 2$~(where $2\leq i\leq \Delta-2)$ and the equality (9) is satisfied. Furthermore, since $\pi=(\underbrace{\Delta,\Delta,\cdots,\Delta}_k,d_{k+1},\cdots,d_{k+r},1,1,\cdots,1)$, then there exist numbers $d_{k+j_1}$ and $d_{k+i_1} (1\leq j_1< i_1\leq r)$ such that $d_{k+j_1}=j> d_{k+i_1}=i ~(\text{or}~ d_{k+j_1}=d_{k+i_1}=i, ~\text{if}~ n_i\geq 2)$.
Let $\pi'=(d_1',d_2',\cdots,d_n')$ be a sequence of positive integers such that $d_{k+j_1}'=d_{k+j_1}+1=j+1$ and $d_{k+i_1}'=d_{k+i_1}-1=i-1$, with $d_u'=d_u$ for $u\ne {k+j_1}$, $u\ne {k+i_1}$. Clearly, $\sum_{u=1}^{n}d_u'=2n-2$, and $\pi'$ is the vertex degree sequence of a tree $T'$. Also,
\begin{eqnarray}
\nonumber \frac{ \prod_1({T}')}{\prod_1({T}_{min}^1)} &&=\frac{(j+1)^2(i-1)^2}{j^2 i^2}
\\ \nonumber &&= (\frac{\frac{i-1}{(i-1)+1} }{\frac{j}{j+1}})^2 ~\;\;\;\;(\text{by Proposition 2.2})
\\
&&<1.
\end{eqnarray}
Therefore, $\prod_1({T}')<\prod_1({T}_{min}^1)$. This contradicts the choice of ${T}_{min}^1$ which has the minimal first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$. Hence we conclude that $p=n_{\Delta-1}=\lfloor \frac{r}{\Delta-2}\rfloor$.
Next, by the relation (10) we obtain $y=n_2+2n_3+\cdots+(\Delta-3)n_{\Delta-2}$ (for $0\leq y<\Delta-2$), and
\begin{eqnarray}
y=r-p(\Delta-2)\leq \Delta-3.
\end{eqnarray}
According to the analysis of the relation (10), it follows that $n_\mu=1$ for $\mu=y+1=r-p(\Delta-2)+1$, i.e., $\mu=n-1-k(\Delta-1)-p(\Delta-2)$, and $n_i=0$ for every $i\ne \mu$ with $2\leq i\leq{\Delta-2}$, since otherwise we could again construct a tree whose $\prod_1$ is smaller than $\prod_1({T}_{min}^1)$.
Therefore, the tree ${T}_{min}^1$ with minimum first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$ has the vertex degree sequence $\pi=(\underbrace{\Delta,\cdots,\Delta}_{k}, \underbrace{\Delta-1,\cdots,\Delta-1}_p, \mu, \underbrace{1,1,\cdots,1}_{n-k-p-1})$.
\end{proof}
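As a quick consistency check of the parameters in Theorem 3.1 (the snippet below is our own illustration, not part of the paper, and assumes $\Delta\geq 3$ so that the division by $\Delta-2$ is defined), one can verify for small $n$ and $k$ that the stated degree sequence has exactly $n$ entries summing to $2(n-1)$, as any tree degree sequence must:

```java
// Consistency check for the extremal degree sequence of Theorem 3.1:
// (Delta x k, (Delta-1) x p, mu, 1 x (n-k-p-1)) must contain n degrees
// that sum to 2(n-1). Assumes Delta >= 3, i.e. k <= (n-2)/2.
class Theorem31Check {
    // Returns {count, sum} of the degree sequence built from n and k.
    static long[] sequenceStats(int n, int k) {
        int delta = (n - 2) / k + 1;                        // maximum degree
        int p = (n - 2 - k * (delta - 1)) / (delta - 2);    // multiplicity of degree delta-1
        int mu = n - 1 - k * (delta - 1) - p * (delta - 2); // the one remaining degree
        long count = k + p + 1L + (n - k - p - 1);          // k deltas, p (delta-1)s, one mu, rest ones
        long sum = (long) k * delta + (long) p * (delta - 1) + mu + (n - k - p - 1);
        return new long[]{count, sum};
    }
}
```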
The first multiplicative Zagreb index of the tree ${T}_{min}^1$ can now be routinely calculated.
\noindent{\bf Remark 3.2. } The statement of Theorem 3.1 also holds for $k=n-2$, i.e., for $\mathcal{T}_{n,n-2}=\{P_n\}$.
\subsection{The sharp upper bound of $\prod_1$ on trees with given number of vertices of maximum degree}
In the following theorem we will describe the trees that have the maximal first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$.
\noindent {\bf Lemma 3.2. } Let ${T}_{max}^1$ be a tree with maximal first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$, where $1\leq k\leq {\frac{n}{2}-1}$. Then its maximum degree $\Delta$ equals $3$.
\begin{proof}
We first assume that $\Delta \geq 4$ and $u$ is a vertex of maximum degree $\Delta$ in ${T}_{max}^1$.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{fig1.jpg}
\caption{ The graphs ${T}_{max}^1$ and ${T}^1$ in Lemma 3.2.}
\label{fig: te}
\end{figure}
Let $p=v_0v_1\cdots v_{i-1}u(=v_i)v_{i+1}\cdots v_l$ be the longest path in ${T}_{max}^1$ that contains $u$.
Also, let $v_{i-1}$, $v_{i+1}$, $u_1$, $u_2$, $\cdots$, $u_{\Delta-2}$ be the vertices adjacent to $u$ in ${T}_{max}^1$, and let $z_1$ be a pendent vertex connected to $u$ via $u_1$ (it is possible that $z_1\equiv u_1$; see Figure 1). Now we define a tree ${T}^1$ such that
\begin{eqnarray}
{T}^1={T}_{max}^1 - uu_2 + u_2 z_1.
\end{eqnarray}
Then
\begin{eqnarray}
\frac{\prod_1({T}^1)}{\prod_1({T}_{max}^1)} = \frac{(\Delta-1)^2 2^2}{\Delta^2 1^2}
= ({\frac{\Delta-1}{\Delta}})^2 4
>1,
\end{eqnarray}
that is, $\prod_1({T}^1)>\prod_1({T}_{max}^1)$. Obviously, the tree ${T}^1$ has $k-1$ vertices of degree $\Delta$.
Analogously, we apply the transformation described in the relation (13) to every vertex of degree $\Delta$. In each step we obtain from a tree $T^i$ a tree ${T}^{i+1}$ ($1\leq i\leq k-1$) with greater first multiplicative Zagreb index than its predecessor. After $k$ repetitions of these transformations, we arrive at the tree $T^k$, in which $k$ vertices have maximum degree $\Delta-1$. Clearly, ${T}^k\in \mathcal{T}_{n,k}$ and $\prod_1({T}^k)>\prod_1({T}_{max}^1)$.
This contradicts the choice of ${T}_{max}^1$ as the tree maximizing $\prod_1$ in the class $\mathcal{T}_{n,k}$. Hence $\Delta\leq 3$, and since $k\leq \frac{n}{2}-1$ excludes the path (for which $k=n-2$), the tree ${T}_{max}^1$ has maximum degree $3$. This completes our proof.
\end{proof}
\noindent {\bf Theorem 3.2. } Let ${T}\in \mathcal{T}_{n,k}$ with $1\leq k\leq \frac{n}{2}-1$. Then
$$\prod_1({T})\leq 9^{k}4^{(n-2k-2)}=(\frac{9}{16})^{k}4^{n-2},$$
where the equality holds if and only if ${T}$ has degree sequence
$\pi=(\underbrace{3,3,\cdots,3}_k, \underbrace{2,2,\cdots,2}_{n-2k-2}, \underbrace{1,1,\cdots,1}_{k+2})$.
\begin{proof}
Let $T_{max}^1$ be a tree with maximum $\prod_1$ in the class $\mathcal{T}_{n,k}$. By Lemma 3.2, we obtain $\Delta=3$, so the vertex degree sequence of this tree is $\pi=(\underbrace{3,3,\cdots,3}_k, \underbrace{2,2,\cdots,2}_{n_2}, \underbrace{1,1,\cdots,1}_{n_1})$ with $k\leq \frac{n}{2}-1$. Hence, applying the relations (1) and (2), we obtain
\begin{eqnarray}
n_1 + 2n_2 + 3k = 2(n-1) = 2(n_1+n_2+k)-2.
\end{eqnarray}
Furthermore, since $n_3=k$, we have
\begin{eqnarray}
n_1 + 2n_2 + 3k = 2(n-1) = 2n_1 + 2n_2 + 2k -2.
\end{eqnarray}
According to the relation (16), we conclude that $n_1=k+2$ and $n_2=n-(n_1+n_3)=n-2k-2$.
Hence, $$\pi=(\underbrace{3,3,\cdots,3}_k, \underbrace{2,2,\dots,2}_{n-2k-2}, \underbrace{1,1,\cdots,1}_{k+2}),$$
and $$\prod_1({T}_{max}^1)=d_1^2d_2^2\cdots d_n^2={\underbrace{3^2 3^2\cdots 3^2}_k}{\underbrace{2^22^2\cdots 2^2}_{n-2k-2}}{\underbrace{1^21^2 \cdots 1^2}_{k+2}} =9^{k}4^{n-2k-2} =(\frac{9}{16})^k4^{n-2}.$$
Therefore, this completes our proof.
\end{proof}
\noindent {\bf Remark 3.3. } Let ${T}^k$ be a tree with maximal first multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$, and $T^p\in \mathcal{T}_{n,p}$, where $1\leq {k, p}\leq {\frac{n}{2}-1}$. If $k> p$, then $\prod_1({T}^k)< \prod_1({T}^p)$.
\begin{proof} This follows directly from Theorem 3.2, since $(\frac{9}{16})^k 4^{n-2}$ is decreasing in $k$.
\end{proof}
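The closed form in Theorem 3.2 can also be checked numerically for small cases (the following snippet is our own illustration, not part of the paper): compute $\prod_1$ directly from the extremal degree sequence and compare it with $9^k 4^{n-2k-2}$.

```java
// Sanity check for Theorem 3.2: for the extremal degree sequence
// (3 x k, 2 x (n-2k-2), 1 x (k+2)), the first multiplicative
// Zagreb index Prod_1(T) = prod d_i^2 equals 9^k * 4^(n-2k-2).
// Long arithmetic, so keep n small enough to avoid overflow.
class Prod1Check {
    // Product of squared degrees.
    static long prod1(int[] degrees) {
        long p = 1;
        for (int d : degrees) p *= (long) d * d;
        return p;
    }

    // Build the extremal degree sequence of Theorem 3.2 for given n, k.
    static int[] extremalSequence(int n, int k) {
        int[] deg = new int[n];
        int idx = 0;
        for (int i = 0; i < k; i++) deg[idx++] = 3;
        for (int i = 0; i < n - 2 * k - 2; i++) deg[idx++] = 2;
        for (int i = 0; i < k + 2; i++) deg[idx++] = 1;
        return deg;
    }

    // The closed form 9^k * 4^(n-2k-2).
    static long closedForm(int n, int k) {
        long p = 1;
        for (int i = 0; i < k; i++) p *= 9;
        for (int i = 0; i < n - 2 * k - 2; i++) p *= 4;
        return p;
    }
}
```

For example, with $n=10$ and $k=2$ both sides equal $9^2\cdot 4^4=20736$.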
\section{The sharp upper and lower bounds of the second multiplicative Zagreb index on trees}
In this section, we characterize the trees with extremal second multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$; throughout, $\pi=(d_1,d_2,\cdots,d_n)$ denotes the degree sequence of a tree. To prove our main results, we provide a lemma in each subsection and use it to deduce our theorems.
\subsection{The sharp lower bound of $\prod_2$ on trees with given number of vertices of maximum degree}
\noindent{\bf Lemma 4.1. } Let $ T_{min}^2$ be a tree with minimal second multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$. Then, $\Delta(T_{min}^2)=3$.
\begin{proof}
Suppose that $\Delta \geq 4$ and $u$ is a vertex of maximum degree $\Delta$ in ${T}_{min}^2$.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{fig3.jpg}
\caption{ The graphs ${T}_{min}^2$ and ${T}^1$ in Lemma 4.1.}
\label{fig: te}
\end{figure}
Let $p=v_0v_1\cdots v_{i-1}u(=v_i)v_{i+1}\cdots v_l$ be the longest path in ${T}_{min}^2$ that contains $u$. Also, let $v_{i-1}$, $v_{i+1}$, $u_1$, $u_2$, $\cdots$, $u_{\Delta-2}$ be the vertices adjacent to $u$ in ${T}_{min}^2$, and let $z_1$ be a pendent vertex connected to $u$ via $u_1$ (it is possible that $z_1\equiv u_1$; see Figure 2). Likewise, we define a tree ${T}^1$ such that
\begin{eqnarray}
{T}^1= {T}_{min}^2 - uu_2 + u_2z_1.
\end{eqnarray}
Then
\begin{eqnarray}
\nonumber \frac{\prod_2({T}^1)}{\prod_2({T}_{min}^2)} &&= \frac{(d_u-1)^{d_u-1} (d_{z_1}+1)^{d_{z_1}+1}}{d_u^{d_u} d_{z_1}^{d_{z_1}}}
\\ \nonumber &&= \frac{(d_u-1)^{d_u-1}{2^2}}{d_u^{d_u}{1^1}}
\\ \nonumber &&=4 \frac{(\Delta-1)^{\Delta-1}}{\Delta^\Delta} ~\;\;\;\;(\text{by Proposition 2.3 and}~
\Delta-1 \geq 3)
\\ &&\leq 4\frac{3^3}{(3+1)^{3+1}}< 1.
\end{eqnarray}
Therefore, $\prod_2({T}^1)< \prod_2({T}_{min}^2)$.
In the same way, we apply the transformation described in the relation (17) to every vertex of degree $\Delta$, reducing the degrees of the remaining $k-1$ vertices of degree $\Delta$ in ${T}_{min}^2$ to $\Delta-1$. After $k$ repetitions of the transformation we obtain trees ${T}^2$, ${T}^3$, $\cdots$, ${T}^k$ with
$\prod_2({T}^k)<
\prod_2({T}^{k-1})<\cdots<\prod_2({T}_{min}^2)$,
where ${T}^k$ has $k$ vertices of maximum degree $\Delta-1$. Obviously, ${T}^k\in \mathcal{T}_{n,k}$, which contradicts the choice of ${T}_{min}^2$ as the tree that minimizes $\prod_2$ in the class $\mathcal{T}_{n,k}$.
Therefore, $\Delta({T}_{min}^2)=3$.
\end{proof}
\noindent{\bf Theorem 4.1. } Let ${T}\in \mathcal{T}_{n,k}$, where $1\leq k\leq \frac{n}{2}-1$. Then
$$\prod_2({T})\geq {(3^3)^k}{(2^2)}^{n-2k-2}=({\frac{27}{16}})^k 4^{n-2},$$
where the equality holds if and only if ${T}$ has degree sequence
$\pi=(\underbrace{3,3,\cdots,3}_k, \underbrace{2,2,\cdots,2}_{n-2k-2}, \underbrace{1,1,\cdots,1}_{k+2})$.
\begin{proof}
Let ${T}_{min}^2$ be a tree with minimal $\prod_2(G)$ in the class $\mathcal{T}_{n,k}$. According to Lemma 4.1, we obtain $\Delta({T}_{min}^2)=3$.
Thus,
$$\pi=(\underbrace{3,3,\cdots,3}_k, \underbrace{2,2\cdots,2}_{n_2}, \underbrace{1,1,\cdots,1}_{n_1}).$$
Clearly, $n_1+2n_2+3k=2(n-1)=2n_1+2n_2+2k-2.$ Then $n_1=k+2$, $n_2=n-n_1-n_3=n-2k-2$.
Therefore, $\prod_2({T}_{min}^2)={(3^3)^k}{(2^2)^{n_2}}{(1^1)^{n_1}}={27}^k4^{n-2k-2}=(\frac{27}{16})^k 4^{n-2}$.
\end{proof}
\noindent {\bf Remark 4.1. } Let ${T}^k$ be a tree with minimal second multiplicative Zagreb index in $\mathcal{T}_{n,k}$, and ${T}^p$ a tree with minimal second multiplicative Zagreb index in the class $\mathcal{T}_{n,p}$, where $1\leq {k, p}\leq {\frac{n}{2}-1}$. If $k> p$, then $\prod_2({T}^k)> \prod_2({T}^p)$.
\begin{proof} This follows directly from Theorem 4.1, since $(\frac{27}{16})^k 4^{n-2}$ is increasing in $k$.
\end{proof}
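As with Theorem 3.2, the closed form in Theorem 4.1 can be checked numerically for small cases (again, the snippet below is our own illustration, not part of the paper):

```java
// Sanity check for Theorem 4.1: for the extremal degree sequence
// (3 x k, 2 x (n-2k-2), 1 x (k+2)), the second multiplicative
// Zagreb index Prod_2(T) = prod d_i^{d_i} equals 27^k * 4^(n-2k-2).
class Prod2Check {
    // Product of d^d over all degrees.
    static long prod2(int[] degrees) {
        long p = 1;
        for (int d : degrees) {
            long dd = 1;
            for (int i = 0; i < d; i++) dd *= d; // d^d
            p *= dd;
        }
        return p;
    }

    // Build the extremal degree sequence of Theorem 4.1 for given n, k.
    static int[] extremalSequence(int n, int k) {
        int[] deg = new int[n];
        int idx = 0;
        for (int i = 0; i < k; i++) deg[idx++] = 3;
        for (int i = 0; i < n - 2 * k - 2; i++) deg[idx++] = 2;
        for (int i = 0; i < k + 2; i++) deg[idx++] = 1;
        return deg;
    }

    // The closed form 27^k * 4^(n-2k-2).
    static long closedForm(int n, int k) {
        long p = 1;
        for (int i = 0; i < k; i++) p *= 27;
        for (int i = 0; i < n - 2 * k - 2; i++) p *= 4;
        return p;
    }
}
```

For example, with $n=10$ and $k=2$ both sides equal $27^2\cdot 4^4=186624$.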
\subsection{The sharp upper bound of $\prod_2$ on trees with given number of vertices of maximum degree}
\noindent {\bf Lemma 4.2. } Let ${T}_{max}^2$ be a tree with maximum second multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$. Then its maximum vertex degree $\Delta({T}_{max}^2)=\lfloor \frac{n-2}{k}\rfloor +1$.
\begin{proof} Let $\Delta=\Delta({T}_{max}^2)$ be the maximum vertex degree of the tree ${T}_{max}^2$. By Proposition 2.1, $\Delta \leq \lfloor \frac{n-2}{k} \rfloor +1$. Suppose $\Delta < \lfloor \frac{n-2}{k} \rfloor +1$. By an argument similar to the proof of Lemma 3.1, we can construct a tree ${T}^1$ from ${T}_{max}^2$ by raising the degree of a maximum-degree vertex by one and lowering by one the degree of a vertex $v_i$ with positive edge rotating capacity. Then,
\begin{eqnarray}
\nonumber \frac{\prod_2({T}^1)}{\prod_2({T}_{max}^2)} &&= \frac{(\Delta+1)^{\Delta+1} {(d_i-1)^{d_i-1}}}{\Delta^{\Delta} d_i^{d_i}}
\\ \nonumber &&=\frac{\frac{(d_i-1)^{(d_i-1)}}{((d_i-1)+1)^{((d_i-1)+1)}}} {\frac{\Delta^{\Delta}} {(\Delta+1)^{\Delta+1}}}~\;\;\;\; (\text{by Proposition 2.3 and} ~ 2\leq d_i\leq \Delta-1<\Delta )
\\ &&>1.
\end{eqnarray}
Therefore, $\prod_2({T}^1)> \prod_2({T}_{max}^2)$. Repeating the described transformation on every vertex of degree $\Delta$, we obtain a tree in $\mathcal{T}_{n,k}$ whose second multiplicative Zagreb index exceeds that of ${T}_{max}^2$, contradicting the fact that ${T}_{max}^2$ has the maximum second multiplicative Zagreb index in the class $\mathcal{T}_{n,k}$.
This proves that $\Delta({T}_{max}^2)=\lfloor \frac{n-2}{k}\rfloor +1$.
\end{proof}
\noindent {\bf Theorem 4.2. } Let ${T}\in \mathcal{T}_{n,k}$, where $1\leq k\leq
\frac{n}{2}-1$. Then
$$\prod_2( T )\leq \Delta^{k\Delta}(\Delta-1)^{p(\Delta-1)} \mu^\mu,$$
where the equality holds if and only if the tree ${T}$ has a vertex degree sequence
$\pi=(\underbrace{\Delta,\Delta,\cdots,\Delta}_k, \\ \underbrace{(\Delta-1),(\Delta-1),\cdots,(\Delta-1)}_p, \mu, \underbrace{1,1,\cdots,1}_{n-k-p-1})$,
for $\mu=n-1-k(\Delta-1)-p(\Delta-2)$ and $p=\lfloor \frac{n-2-k(\Delta-1)}{\Delta-2}\rfloor.$
\begin{proof}
Let ${T}_{max}^2$ be a tree with maximum $\prod_2$ in the class $\mathcal{T}_{n,k}$. By Lemma 4.2, we have that $\Delta({T}_{max}^2)=\lfloor \frac{n-2}{k}\rfloor +1$.
Let $p=n_{\Delta-1}$; then $p=\lfloor \frac{r}{\Delta-2}\rfloor$.
Otherwise, constructing a tree ${T}'$ by the method used in the proof of Theorem 3.1, we obtain a sequence of positive integers $\pi'=(d_1',d_2',\cdots,d_n')$ such that $d_{k+j_1}'=d_{k+j_1}+1=j+1$ and $d_{k+i_1}'=d_{k+i_1}-1=i-1$, with $d_u'=d_u$ for $u\ne {k+j_1}$, $u\ne {k+i_1}$, and $\pi'$ is the vertex degree sequence of a tree $T'$.
Thus,
\begin{eqnarray}
\nonumber \frac{\prod_2({T}')}{\prod_2({T}_{max}^2)} &&= \frac{(j+1)^{j+1} {(i-1)^{i-1}}}{j^{j} i^{i}}
\\ \nonumber &&=\frac{ \frac{(i-1)^{i-1}} {i^i}} {\frac {j^{j}} {(j+1)^{(j+1)}}}~\;\;\;\; (\text{by Proposition 2.3})
\\ &&>1.
\end{eqnarray}
Therefore, the tree ${T}'$ has larger second multiplicative Zagreb index than ${T}_{max}^2$, a contradiction; hence $p=\lfloor \frac{r}{\Delta-2}\rfloor$. Arguing as before, we obtain
$\prod_2( T )\leq \Delta^{k\Delta}(\Delta-1)^{p(\Delta-1)} \mu^\mu$,
which completes the proof.
\end{proof}
\vskip4mm\noindent{\bf Acknowledgements.}
The work was partially supported by the
National Science Foundation of China under Grant nos. 11271149, 11371162 and 11601006, the Natural
Science Foundation for the Higher Education Institutions of Anhui
Province of China under Grant no. KJ2015A331. Also, it was
partially supported by the Self-determined Research Funds of CCNU
from the colleges' basic research and operation of MOE. Furthermore, the authors are grateful to the anonymous referee for carefully checking the details and for helpful comments that improved this paper.
Qush-e Sarbuzi (, also Romanized as Qūsh-e Sarbūzī; also known as Qūsh Sar Nūrī) is a village in Tajan Rural District, in the Central District of Sarakhs County, Razavi Khorasan Province, Iran. At the 2006 census, its population was 2,032, in 373 families.
References
Populated places in Sarakhs County
I was asked to add this to the EOF forum section so that EOF devs can have a read and see if this can be added into the software, or at least explain it for anyone else wanting to do this double stop bend with different values.
I wanted to add a double stop bend to my CDLC Rock n Roll Star by Oasis. This bend is in the chorus of the song and requires the strings to bend at different values, as shown.
To get the desired effect I tried many things but couldn't get the result I wanted, so I turned to CF's discord for help. Luckily two clever users, I)ark_Seph and Firekorn, were there to help me out.
So in EOF you need to find the bend you want to do this on. You need to highlight the bend and go to Edit Guitar Pro Note (n key); here you split all the notes that are highlighted.
Next you need to save the song and find the track's .xml file, then open it in Notepad (or Notepad++ to make it easier).
Using the timing from EOF, locate the bend code in the file.
Then edit the desired string, e.g. string 3, to a full bend instead of a half bend. Do it to all the code that needs it to be done and save the file.
Then you can compile the song in RStoolkit and test it out in RS.
Sadly, if you need to edit any of your chart in EOF you will have to do all of the above again, as EOF overwrites the changes.
Hope this helps users, and hopefully the devs can add this into EOF with a future patch.
I think you should already be able to define this in EOF by using tech notes. You'd have to place the tech notes at least one ms apart so that you could define a different bend strength for each string, but otherwise it should do the trick.
@raynebc Except that a pre-bend value must be on the note head, so in this specific case where it's a chord, you can't put different values at the exact same time, which is required for the pre-bend.
@raynebc I think that's what I tried first, but no joy.
To make it easier to place the technote on either note if it is not grid snapped, you can place the tech note just after it and while that tech note is selected, use the "Note>Rocksmith>Move t.n. to prev note" function. Then just apply the appropriate bend strength to each of the tech notes.
Would anybody else end up making use of a pre-bend tech note status?
Well, I might use it. Even if the songs that would require it are probably rare, I at least see how it can solve some situations.
By now, a pre-bend tech note status is available to author this.
Nok Air (, SET: NOK) is a Thai low-cost airline headquartered in Bangkok, operating scheduled passenger services within the country and beyond. The carrier's name comes from the Thai word for "bird", pronounced "nok" ().
Forty-nine percent of Nok Air is owned by the country's flag carrier, Thai Airways International. The airline's home base and main hub is Don Mueang International Airport in Bangkok.
The company is an official sponsor of the Thai professional football clubs TTM Chiang Mai, Hat Yai FC and Chiangmai FC.
History
Nok Air was founded in February 2004 and began operations in July of the same year. In March 2007 the company employed 130 people, and by 2011 more than 600. Nok Air opened its first international route on 31 May 2007, launching daily flights between Bangkok and Bangalore (India). The airline later obtained rights to operate scheduled flights to other Indian cities (Chennai, Hyderabad and New Delhi).
In November 2007, following the low-cost airline Jetstar Asia Airways, Nok Air suspended flights to Bangalore. According to Rajiv Bhatia, the company's general director for the Indian region, the suspension was caused by a shortage of available aircraft and the need to redirect operations to the more profitable air-travel market of Southeast Asia, for example Vietnam. Travel-industry experts, however, link the cancellation of the Bangalore flights to the route's passenger load factor falling to 40%.
After developing several international routes, Nok Air's management decided to concentrate on domestic destinations, and by 2011 the airline operated the largest number of flights between Thai airports among all commercial carriers. In 2010 the airline reported a net profit of 618 million baht on annual revenue of 3.97 billion baht. The following year Nok Air expanded its fleet with seven Boeing 737-800 aircraft and four ATR 72 turboprops.
Route network
Domestic destinations
From Bangkok – Don Mueang International Airport (main hub)
Buriram – Buriram Airport
Chiang Mai – Chiang Mai International Airport
Chiang Rai – Chiang Rai International Airport
Chumphon (Pathio) – Chumphon Airport
Hat Yai – Hat Yai International Airport
Khon Kaen – Khon Kaen Airport
Krabi – Krabi Airport
Lampang – Lampang Airport
Loei – Loei Airport
Mae Hong Son – Mae Hong Son Airport
Mae Sot – Airport
Nakhon Phanom – Airport
Nakhon Si Thammarat – Airport
Nan – Airport
Phrae – Airport
Phitsanulok – Airport
Phuket – Phuket International Airport
Ranong – Airport
Roi Et – Airport
Sakon Nakhon – Airport
Surat Thani – Airport
Trang – Airport
Ubon Ratchathani – Airport
Udon Thani – Udon Thani International Airport
From Chiang Mai International Airport
Bangkok – Don Mueang International Airport
Udon Thani – Udon Thani International Airport
International destinations
Myanmar
Yangon – Yangon International Airport
Vietnam
Ho Chi Minh City – Tan Son Nhat International Airport
Hanoi – Hanoi International Airport
Singapore
Singapore – Changi International Airport
Fleet
As of 11 March 2016, the Nok Air fleet consisted of the following aircraft (all of them on lease)
Owners
See also
List of low-cost airlines
Notes
External links
Official website of Nok Air
Social Networks — Updates on Google (Thai and English)
nokair fleet
Airlines of Thailand
Low-cost airlines
package org.consec.dynamicca.rest;
import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;
import org.consec.dynamicca.jpa.EMF;
import org.consec.dynamicca.jpa.entities.Cert;
import org.consec.dynamicca.jpa.entities.CertPK;
import javax.persistence.EntityManager;
import javax.ws.rs.*;
import javax.ws.rs.core.*;
import java.util.Date;
@Path("/cas/{caUid}/certs/{sn}")
public class CertResource {
private String caUid;
private int sn;
@Context
UriInfo uriInfo;
public CertResource(@PathParam("caUid") String caUid,
@PathParam("sn") int sn) {
this.caUid = caUid;
this.sn = sn;
}
@GET
@Produces(MediaType.APPLICATION_JSON)
public JSONObject getCert() throws JSONException {
EntityManager em = EMF.createEntityManager();
try {
CertPK certPK = new CertPK(sn, caUid);
Cert cert = em.find(Cert.class, certPK);
if (cert == null) {
throw new WebApplicationException(Response.Status.NOT_FOUND);
}
JSONObject result = new JSONObject();
result.put("private_key", cert.getPrivateKey());
result.put("certificate", cert.getCertificate());
result.put("serial_number", cert.getCertPK().getSn());
result.put("uri", UriBuilder.fromResource(CertResource.class)
.build(cert.getCa().getUid(), cert.getCertPK().getSn()));
return result;
}
finally {
EMF.closeEntityManager(em);
}
}
@DELETE
public Response revokeCert() {
EntityManager em = EMF.createEntityManager();
try {
CertPK certPK = new CertPK(sn, caUid);
Cert cert = em.find(Cert.class, certPK);
if (cert == null) {
throw new WebApplicationException(Response.Status.NOT_FOUND);
}
if (cert.getRevoked()) {
throw new WebApplicationException(Response.Status.NOT_MODIFIED);
}
em.getTransaction().begin();
cert.setRevoked(true);
cert.setRevocationDate(new Date());
em.getTransaction().commit();
            String message = String.format("The certificate with serial number %d has been revoked successfully.", sn);
            // A 204 No Content response must not carry a body, so return 200 OK with the message instead.
            return Response.ok(message).build();
}
finally {
EMF.closeEntityManager(em);
}
}
}
As a mom of two kids, I find it increasingly hard to keep up with the things they need each year. When their pediatrician recommended that I get my children's eyes checked, I didn't know who to see for an appointment. But my kids' doctor directed us to a wonderful optician in our area who specializes in eye wear for everyone in my family. The optician also made sure to prescribe the right contacts for my teen, as well as stylish eyeglasses for my preteen. I developed this blog to help you and other parents find the right optician for your family's vision care. I hope you find the information beneficial and valuable. Thanks for stopping by.
Q: Core Graphics - What is the appropriate way to create an RGB colorspace? Apple Says:
"If your application runs in iOS or in Mac OS X v10.4 and later, you can use device-independent color spaces or generic color spaces."
So that means I am to use CGColorSpaceCreateWithName(...) and not CGColorSpaceCreateDeviceRGB(...) because the latter is deprecated. However, in this post on stackoverflow, it's said that the generic color space is deprecated. What's the right answer?
A: CGColorSpaceCreateWithName() is the recommended function on MacOS, but kCGColorSpaceGenericRGB isn't available on iOS, so you have to use CGColorSpaceCreateDeviceRGB() instead (and isn't deprecated on that platform).
I read this article and found it very interesting, thought it might be something for you. The article is called Laughter Therapy Clinic: Monday's Notes and is located at https://www.worldlaughtertour.com/laughter-therapy-clinic-mondays-notes/.
A kind word often goes unspoken, but never goes unheard.
Specific practices are core components of laughter therapy as conceived by World Laughter Tour.
Start on the path of GOOD-HEARTED LIVING by practicing paying compliments three times on MONDAYS.
Train yourself to be on the lookout for what's good, then say something about it.
This practice is the antidote for being critical and judgemental.
There are no statues that honor critics.
helps you get along better with others, and brings you peace of mind.
Make this practice a habit and then let it become a way of life for you.
It enables resilience and the ability to successfully navigate adversity.
Soon, all six practices will feel like your natural way of being in the world.
You will find yourself doing any or all of the practices every day, whenever opportunities present themselves.
If you can maintain your sense of the cosmic joke, too, all the better.
the World Laughter Tour website.
are used together, along with Good-Hearted Living.
package com.vladium.jcd.cls.constant;
import java.io.IOException;
import com.vladium.jcd.lib.UDataInputStream;
// ----------------------------------------------------------------------------
/**
* This structure is used in the constant pool to represent dynamic references
* to interface methods. The class_index item of a CONSTANT_InterfaceMethodref_info
* structure must be an interface type that declares the given method.
*
* @see CONSTANT_ref_info
* @see CONSTANT_Fieldref_info
* @see CONSTANT_Methodref_info
*
* @author (C) 2001, Vlad Roubtsov
*/
public
final class CONSTANT_InterfaceMethodref_info extends CONSTANT_ref_info
{
// public: ................................................................
public static final byte TAG = 11;
public CONSTANT_InterfaceMethodref_info (final int class_index, final int name_and_type_index)
{
super (class_index, name_and_type_index);
}
public final byte tag ()
{
return TAG;
}
// Visitor:
public Object accept (final ICONSTANTVisitor visitor, final Object ctx)
{
return visitor.visit (this, ctx);
}
public String toString ()
{
return "CONSTANT_InterfaceMethodref: [class_index = " + m_class_index + ", name_and_type_index = " + m_name_and_type_index + ']';
}
// Cloneable: inherited clone() is Ok
// protected: .............................................................
protected CONSTANT_InterfaceMethodref_info (final UDataInputStream bytes) throws IOException
{
super (bytes);
}
// package: ...............................................................
// private: ...............................................................
} // end of class
// ----------------------------------------------------------------------------
# CMOS logic levels to a NIM logic level line driver?

#### Pinkamena (Joined Apr 20, 2012)

Hi everyone!

The ADCMP600 is a fast comparator with TTL/CMOS compatible outputs (high = Vcc − 0.4 V, low = 0.4 V). The output of this comparator needs to be conveyed over several meters (~3) of coax cable to a 50-ohm terminated NIM crate input. However, the NIM standard sets logic "0" at 0 mA, and logic "1" at −16 mA.

The million dollar question is: what is the best way of converting the voltage-mode TTL logic to the current-mode NIM logic? And how can I best ensure that the signal is transferred with its fast leading edge intact? The NIM crate in question is a fast timetagger, so propagation-delay jitter in the logic level converter/line driver/coax cable is of importance and must be less than 1 ns. The total propagation delay does not matter, only jitter does.

I look forward to your suggestions!

#### crutschow (Joined Mar 14, 2008)

That's not an easy shift (strange that NIM uses a negative logic level, whereas almost all logic outputs are positive). Here's a thread that may help, where Bill Sloman offers some insight.

#### crutschow

Here is the LTspice simulation of a simple circuit that would seem to do what you want. The source Vcomp simulates the comparator output of 0 V to 4.3 V with a 5 V supply. The output current of the comparator is boosted by emitter-follower Q2, which drives the grounded-base level-shifter Q1, generating 0 mA to −16 mA nominal through the 50 Ω load, RLoad.

The simulated output levels were −16.1 mA and +0.3 mA. These current levels can be adjusted by changing the values of R2 and/or R3 if your supply or logic level voltages are different from the simulated values.

The propagation delay is less than 1 ns, so the jitter should certainly be much less than that.

Note that the logic is inverted from input to output, with a logic low input giving a logic high (−16 mA) output.

#### Pinkamena

That's an elegant and simple solution! Thank you.

#### crutschow

Note that for good waveform fidelity at those transition times (which correspond to a frequency of a hundred MHz or so), layout is critical. A sloppy layout with long leads on a standard perf board will not work well. The circuit should be built on a circuit board (vector-board type is okay) with a ground plane, using as short leads as possible. The collector of Q2 and the bottom of resistor R3 should have a 0.1 µF ceramic decoupling capacitor (preferably surface-mount type) directly to the ground plane. The output should be connected directly to the coax connector, with the connector common directly to the ground plane. The comparator must be close to this circuit, also on a ground plane (preferably on the same board as the circuit), with the supply pins decoupled in the same manner.
\section{Introduction}
In social learning strategies, a set of communicating agents seeks to update their opinions as they receive streaming information about a given observed phenomenon~\cite{ChamleyBook, Jad, PoorSPmag2013,ScaglioneSPmag2013}. In most existing methods in the literature, as time evolves, agents' opinions (or beliefs) tend to concentrate on the true state~\cite{AcemogluOzdaglar2011,Jadbabaie2013,Zhao,Salami,NedicTAC2017,Javidi,MattaSantosSayedICASSP2019,MattaBordignonSantosSayed2019}, often at an exponentially fast rate of convergence. However, such remarkable convergence properties have the collateral effect of hindering adaptation.
Let us consider the following example. A network of $10$ agents aims to solve a weather forecast problem using an online social learning algorithm. At each instant, these agents collect data coming from one among three possible hypotheses: ``{\em sunny}'', ``{\em cloudy}'', ``{\em rainy}''. At first, data are consistent with the hypothesis ``{\em sunny}'', but then from instant $i=200$ they indicate that the correct forecast is ``{\em rainy}''. As we see in Fig.~\ref{fig:examplesl} (the curves illustrate the behavior of Agent 1), the social learning algorithm reacts with a considerable inertia to the hypothesis drift.
\begin{figure}[htb]
\centering
\includegraphics[width=3in]{figs/example_sl.pdf}
\caption{{\em Classic} social learning vs. {\em adaptive} social learning. {\em Top panels}: Belief evolution of agent $1$, with $\theta_0$ changing at time $i=200$. {\em Bottom panels}: The instantaneous decision taken by agent $1$ by choosing the hypothesis that maximizes the current belief.}
\label{fig:examplesl}
\end{figure}
In fact, Fig.~\ref{fig:examplesl} shows clearly that the agent learns well until instant $i=200$, whereas from $i=200$ onward the situation changes dramatically: the classic social learning algorithm has a delayed reaction. {First, agents perceive a change only at $i\approx 350$, and initially opt for the wrong hypothesis ``{\em cloudy}''. Then, after a prohibitive number of iterations, at $i\approx 550$, agents manage to overcome their stubbornness and opt for the correct hypothesis ``{\em rainy}''.} To tackle this problem, this work proposes an Adaptive Social Learning (ASL) strategy, whose performance is shown in the second column of the same Fig.~\ref{fig:examplesl} for the same example. We see that the ASL algorithm manages to track the change at instant $i\approx 200$, exhibiting an adaptation capacity that is remarkably higher than that of the classic social learning algorithm.
The main contributions of this work can be summarized as follows. First, we introduce a novel social learning strategy that enables adaptation.
Then, by exploiting recent advances in the field of distributed detection over adaptive networks~\cite{MattaSayedCoopGraphSP2018}, we provide an accurate analytical characterization of this strategy in terms of $i)$ convergence of the system at steady state (Theorem~\ref{theor:steady}); $ii)$ achievability of consistent learning (Theorem~\ref{theor:weaklaw}); $iii)$ a Gaussian approximation for the learning performance (Theorem~\ref{theor:CLT}). Due to space constraints, proofs will be omitted.
\section{ASL Strategy}
Consider a strongly-connected network of $N$ agents trying to infer the true state of nature $\theta_0\in\Theta$ given a set of $H$ hypotheses, $\Theta=\{1,2,\ldots,H\}$. Each agent $k$, at time $i$, observes streaming data $\bm{\xi}_{k,i}$, belonging to a certain space $\mathcal{X}_k$, drawn from a distribution that depends on the underlying hypothesis $\theta_0$. The data are assumed to be independent over time, i.e., across index $i$, whereas they can be dependent across agents. Moreover, it is assumed that the distribution of $\bm{\xi}_{k,i}$ belongs to a set of $H$ admissible models (likelihood functions) that are identified by the hypotheses $\theta\in\Theta=\{1,2,\ldots,H\}$.
The likelihood of agent $k$ evaluated at $\theta$ is denoted by
$
L_k(\xi|\theta)
$ with $\xi\in\mathcal{X}_k$. Note that the likelihoods are allowed to vary across the agents.
We model the network using a strongly-connected graph, with a left-stochastic combination matrix $A\triangleq[a_{\ell k}]$. Element $a_{\ell k}$ weights information received by agent $k$ from agent $\ell$: $a_{\ell k}$ is a non-negative real number and it is equal to zero if $\ell\notin\mathcal{N}_k$,
{where $\mathcal{N}_k$ is the neighborhood of agent $k$ ($k$ included).}
We define the Perron eigenvector $\pi$ such that~\cite{Sayed}:
\begin{equation}
A\pi=\pi,\qquad\mathbbm{1}^\top\pi=1,\qquad\pi\succ 0.
\end{equation}
Agents will incorporate the information contained in their local observations and diffuse it across the network by iteratively updating and exchanging their belief vectors $\bm{\mu}_{k,i}$. The belief vector is a probability vector over the set of hypotheses $\Theta$ and each component $\bm{\mu}_{k,i}(\theta)$ reflects the confidence of agent $k$ at instant $i$ that $\theta$ is the true hypothesis.
\begin{assumption}[Positive initial beliefs]\label{assum:initbel}
All agents start with a strictly positive belief for all hypotheses, i.e., $\bm{\mu}_{k,0}(\theta)>0$ for each agent $k$ and all $\theta\in \Theta$.~\hfill$\square$
\end{assumption}
\subsection{Adaptive Social Learning (ASL) Algorithm}
In the adaptive scenario, system conditions can change over time, e.g., the true state of nature or the network topology might change. To address that setup we now introduce the ASL strategy, which can be described in terms of the following iterative two-step algorithm. In the first step, each agent $k$ constructs an {\em intermediate} belief vector $\bm{\psi}_{k,i}$ by incorporating the current observation $\bm{\xi}_{k,i}$ into the belief of the preceding time epoch, $\bm{\mu}_{k,i-1}$, through the following {\em adaptive Bayesian update}:
\begin{equation}
\bm{\psi}_{k,i}(\theta)=\displaystyle{
\frac{\bm{\mu}^{1-\delta}_{k,i-1}(\theta)L^{\delta}_k(\bm{\xi}_{k,i}|\theta)}
{\sum_{\theta'\in\Theta}\bm{\mu}^{1-\delta}_{k,i-1}(\theta')L^{\delta}_k(\bm{\xi}_{k,i}|\theta')}
}
\label{eq:ASLinterm}
\end{equation}
where $0<\delta<1$ is a parameter that will be referred to as the {\em step-size}. In the second step, each agent $k$ aggregates all intermediate beliefs received from its neighbors into its updated belief vector $\bm{\mu}_{k,i}$ as
\begin{equation}
\bm{\mu}_{k,i}(\theta)=\displaystyle{
\frac{\exp\Big\{\sum_{\ell\in\mathcal{N}_k}a_{\ell k}\log\bm{\psi}_{\ell,i}(\theta)\Big\}}
{\sum_{\theta'\in\Theta}
\exp\Big\{\sum_{\ell\in\mathcal{N}_k}a_{\ell k}\log\bm{\psi}_{\ell,i}(\theta')\Big\}
}
}.
\label{eq:ASLfinal}
\end{equation}
Different from the classic social learning methods employed in~\cite{NedicTAC2017,Javidi,MattaSantosSayedICASSP2019,MattaBordignonSantosSayed2019}, here each agent performs the first step by modulating, through the convex weights $1-\delta$ and $\delta$, the relative weight assigned to the past and to the new information. In particular, relatively large values of $\delta$ give more importance to the new data, whereas small values of $\delta$ give more importance to the past beliefs. A similar form of convex combination appeared in the statistical literature in the definition of the Chernoff information~\cite{Chernoff1952}.
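To make the two-step update concrete, the following is an illustrative NumPy sketch (not part of the formal development; the combination matrix and likelihood values used in any experiment would be placeholders) that implements the adaptive Bayesian update \eqref{eq:ASLinterm} and the combination step \eqref{eq:ASLfinal} in the log domain for numerical stability:

```python
import numpy as np

def asl_step(mu_prev, L, A, delta):
    """One ASL iteration (sketch of Eqs. (2)-(3) in the paper).

    mu_prev: (N, H) beliefs at time i-1 (strictly positive rows summing to 1)
    L:       (N, H) likelihoods L_k(xi_{k,i} | theta) for the current data
    A:       (N, N) left-stochastic combination matrix, A[l, k] = a_{lk}
    delta:   step-size in (0, 1)
    """
    # Adaptive Bayesian update: log psi_k ~ (1-delta) log mu_k + delta log L_k.
    log_psi = (1 - delta) * np.log(mu_prev) + delta * np.log(L)
    # Combination step: log mu_k = sum_l a_{lk} log psi_l (geometric averaging).
    log_mu = A.T @ log_psi
    # Normalize per agent; subtract the max first to avoid overflow.
    log_mu -= log_mu.max(axis=1, keepdims=True)
    mu = np.exp(log_mu)
    return mu / mu.sum(axis=1, keepdims=True)
```

Since the combination matrix is left-stochastic, the per-agent normalization constants of \eqref{eq:ASLinterm} cancel after the final normalization, which is why the sketch can defer all normalization to the last step.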
As usual in the theory of adaptation and learning, the learning performance is characterized in the steady-state regime \cite{Sayed}. In steady state, the true hypothesis $\theta_0$ is kept constant over time, yielding:
\begin{equation}
\bm{\xi}_{k,i}\sim L_k(\xi|\theta_0),~~k=1,2,\ldots,N, ~~i=1,2,\ldots
\end{equation}
and that data $\{\bm{\xi}_{k,i}\}$ are independent and identically distributed (i.i.d.) over time.
Let us define the log-likelihood ratio as
\begin{equation}
\bm{x}_{k,i}(\theta)\triangleq \log\left(\frac{L_k(\bm{\xi}_{k,i}|\theta_0)}{L_k(\bm{\xi}_{k,i}|\theta)}\right).
\label{eq:xkidefin}
\end{equation}
\begin{assumption}[Finiteness of KL divergences]\label{assum:integrable}
For each $k=1,2,\dots,N$ and $\theta\neq\theta_0$:
\begin{equation}
d_k(\theta)\triangleq\mathbb{E}[\bm{x}_{k,i}(\theta)]<\infty.
\label{eq:delldef}
\end{equation}
~\hfill$\square$
\end{assumption}
\vspace{-5pt}
To motivate cooperation among agents, we introduce the following identifiability assumption, which implies that the inference problem need not be locally identifiable.
\begin{assumption}[Global identifiability]\label{assum:globo}
For each wrong hypothesis $\theta\neq\theta_0$, there exists at least {one agent $k_{\theta}$ that has strictly positive KL divergence, $d_{k_{\theta}}(\theta)>0$}.~\hfill$\square$
\end{assumption}
In order to characterize the learning performance, it is useful to introduce the logarithm of the ratio between the belief evaluated at $\theta_0$ and the belief evaluated at $\theta\neq\theta_0$:
\begin{equation}
\bm{\lambda}^{(\delta)}_{k,i}(\theta)\triangleq \log\left(\frac{\bm{\mu}_{k,i}(\theta_0)}{\bm{\mu}_{k,i}(\theta)}\right).
\end{equation}
When we omit the argument $\theta$ and write $\bm{\lambda}^{(\delta)}_{k,i}$, we are referring to the $(H-1)\times 1$ vector concatenating the log-belief ratios $\bm{\lambda}^{(\delta)}_{k,i}(\theta)$ for $\theta\neq \theta_0 \in \Theta$. When we omit the subscript $i$ we are referring to a random variable characterized at the {\em steady state}, i.e., as $i\rightarrow\infty$.
Thus,
$
\bm{\lambda}^{(\delta)}_{k}(\theta) \textnormal{ and } \bm{\lambda}^{(\delta)}_{k}
$
are, respectively, the {\em steady-state} log-belief ratio evaluated at $\theta$, and the {\em steady-state} vector of log-belief ratios.
\begin{remark}[Positive beliefs]
\label{rem:muinit}
In view of Assumption~\ref{assum:initbel} and the ASL algorithm \eqref{eq:ASLinterm}--\eqref{eq:ASLfinal}, the belief $\bm{\mu}_{k,i}(\theta)$ remains strictly positive for every $\theta$ across time. Therefore, under stationary conditions, at any instant $i_0$ the belief vector $\bm{\mu}_{k,i_0}$ fulfills Assumption~\ref{assum:initbel}. This property allows us to perform the steady-state analysis from $i=0$ without loss of generality, and it also ensures that the log-belief ratios are well defined.~\hfill$\square$
\end{remark}
For each $i$, we can define the instantaneous decision of agent $k$ as corresponding to the hypothesis that maximizes the belief, which leads to the following error probability:
\begin{equation}
p^{(\delta)}_{k,i}=\P\left(\argmax_{\theta\in\Theta}\bm{\mu}_{k,i}(\theta)\neq\theta_0\right)\stackrel{i\rightarrow \infty}{\longrightarrow}p_k^{(\delta)},
\label{eq:errprob}
\end{equation}
where $p_k^{(\delta)}$ is the {\em steady-state} error probability.\footnote{{The existence of the limit in \eqref{eq:errprob} relies on the convergence proved in Theorem 1 (details omitted for space constraints).}}
\subsection{Network Average of Log-Likelihood Ratios}
First, a useful concept to introduce is the {\em network average} of log-likelihood ratios and its expectation, for all $\theta\neq\theta_0$:
\begin{IEEEeqnarray}{rCl}
\bm{x}_{\mathrm{ave}}(\theta)&=&\sum_{\ell=1}^N \pi_{\ell} \bm{x}_{\ell,i}(\theta),
\label{eq:avlik}\\
{\sf m}_{\mathrm{ave}}(\theta)&\triangleq &\mathbb{E}[\bm{x}_{\mathrm{ave}}(\theta)]=\sum_{\ell=1}^N \pi_{\ell} d_{\ell}(\theta)
\label{eq:mnet}.
\end{IEEEeqnarray}
Second, if the log-likelihoods have finite variances\footnote{{Remarkably, the existence of second moments is not required in Theorems \ref{theor:steady} and \ref{theor:weaklaw}, and is used only in Theorem \ref{theor:CLT}.}}, we can compute the covariance {between $\bm{x}_{k,i}(\theta)$ and $\bm{x}_{k,i}(\theta')$ as}
\begin{equation}
\rho_{\ell}(\theta,\theta')=\mathbb{E}\left[
\Big(
\bm{x}_{\ell,i}(\theta)
-
d_{\ell}(\theta)
\Big)
\Big(
\bm{x}_{\ell,i}(\theta')
-
d_{\ell}(\theta')
\Big)
\right].
\end{equation}
Finally, if data are {\em independent across the agents}, the covariance between variables $\bm{x}_{\mathrm{ave}}(\theta)$ and $\bm{x}_{\mathrm{ave}}(\theta')$ is given as
{\begin{equation}
{\sf c}_{\mathrm{ave}}(\theta,\theta')\triangleq
\sum_{\ell=1}^N \pi^2_{\ell} \rho_{\ell}(\theta,\theta').
\label{eq:cnet}
\end{equation}}
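For concreteness, the quantities in \eqref{eq:avlik}--\eqref{eq:cnet} can be evaluated as follows (an illustrative sketch, not taken from the paper; the power iteration assumes a primitive combination matrix):

```python
import numpy as np

def perron_vector(A, iters=1000):
    """Perron eigenvector of a left-stochastic matrix A: A pi = pi, sum(pi) = 1.

    Computed by power iteration; assumes A is primitive (strongly connected
    network with at least one self-loop).
    """
    pi = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        pi = A @ pi
        pi /= pi.sum()
    return pi

def network_moments(pi, d, rho):
    """Network mean m_ave and covariance c_ave of the average log-likelihoods.

    pi:  (N,)        Perron eigenvector
    d:   (N, H-1)    per-agent KL divergences d_l(theta)
    rho: (N, H-1, H-1) per-agent covariances rho_l(theta, theta')
    """
    m_ave = pi @ d                                   # sum_l pi_l d_l(theta)
    c_ave = np.einsum("l,lij->ij", pi ** 2, rho)     # sum_l pi_l^2 rho_l
    return m_ave, c_ave
```

Note that `c_ave` implements \eqref{eq:cnet} and therefore presumes data that are independent across the agents.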
\vspace{-10pt}
\section{Steady-State Analysis}
As seen in the introductory example, in the {\em adaptive} setting the belief will not converge (in the almost-sure sense) as $i\rightarrow\infty$. On the contrary, the belief of each agent will exhibit an {\em oscillatory} behavior, a feature that enables adaptation. As we will see, because of this asymptotic {\em random} character, the steady-state analysis is not trivial. The analysis of this random behavior is established in Theorem~\ref{theor:steady}.
Before stating the theorem, let us examine the evolution of the log-belief ratios.
Manipulating \eqref{eq:ASLinterm} and \eqref{eq:ASLfinal} in the log domain, for every $\theta\neq\theta_0$ we have:
\begin{equation}
\bm{\lambda}^{(\delta)}_{k,i}(\theta)=(1-\delta)\sum_{\ell\in\mathcal{N}_k} a_{\ell k}\, \bm{\lambda}^{(\delta)}_{\ell,i-1}(\theta)
+
\delta \sum_{\ell\in\mathcal{N}_k} a_{\ell k} \,\bm{x}_{\ell,i}(\theta).
\label{eq:mainASLrec}
\end{equation}
The recursion in \eqref{eq:mainASLrec} is in the form of a {\em diffusion}
algorithm with {\em step-size} $\delta$ --- see, e.g., \cite{Sayed}. Developing the recursion in \eqref{eq:mainASLrec} we can write, for all $\theta\neq\theta_0$:
\begin{IEEEeqnarray}{rCl}
\bm{\lambda}^{(\delta)}_{k,i}(\theta)
&=&
(1-\delta)^i
\sum_{\ell=1}^N [A^i]_{\ell k} \bm{\lambda}_{\ell,0}(\theta)
\nonumber\\
\quad&+&\delta \sum_{m=0}^{i-1}\sum_{\ell=1}^N (1-\delta)^m [A^{m+1}]_{\ell k}\, \bm{x}_{\ell,i-m}(\theta).
\label{eq:withtransient}
\end{IEEEeqnarray}
Since the first term on the RHS of \eqref{eq:withtransient} vanishes as $i\rightarrow\infty$, for the steady-state analysis we can rewrite with slight abuse of notation:
\begin{equation}
\bm{\lambda}^{(\delta)}_{k,i}(\theta)=\delta \sum_{m=0}^{i-1}\sum_{\ell=1}^N (1-\delta)^m [A^{m+1}]_{\ell k}\, \bm{x}_{\ell,i-m}(\theta).
\label{eq:lambdarec}
\end{equation}
\begin{theorem}[Stability of log-belief ratios]
\label{theor:steady}
Let:
\begin{equation}
\bm{\lambda}^{(\delta)}_{k}(\theta)=
\sum_{\ell=1}^N \delta \sum_{m=0}^{\infty} (1-\delta)^m [A^{m+1}]_{\ell k}\, \bm{x}_{\ell,m+1}(\theta),
\label{eq:convseries}
\end{equation}
where the ordering of the summations in \eqref{eq:convseries} means that the $N$ inner series are all almost-surely convergent. Then, under Assumptions~\ref{assum:initbel} and~\ref{assum:integrable} we have that:
\begin{equation}
\boxed{
\bm{\lambda}^{(\delta)}_{k,i}\stackrel{i\rightarrow\infty}{\rightsquigarrow} \bm{\lambda}^{(\delta)}_k
}
\end{equation}
where $\rightsquigarrow$ indicates convergence in distribution.
\QED
\end{theorem}
Theorem \ref{theor:steady} shows that, as long as the first moment of $\bm{x}_{k,i}$ exists, the statistical distribution of $\bm{\lambda}^{(\delta)}_{k,i}$ converges to the distribution of a stable (i.e., well-defined) random vector $\bm{\lambda}^{(\delta)}_k$ as $i\rightarrow\infty$. We remark that this does not imply that the partial sum in \eqref{eq:lambdarec} converges almost surely to \eqref{eq:convseries} as $i\rightarrow \infty$. The subtlety here is that while
\begin{equation}
\widetilde{\bm{\lambda}}_{k,i}^{(\delta)}(\theta)\triangleq
\delta \sum_{m=0}^{i-1}\sum_{\ell=1}^N (1-\delta)^m [A^{m+1}]_{\ell k}\, \bm{x}_{\ell,m+1}(\theta)
\end{equation}
is almost-surely convergent (which can be deduced from part 1 of Lemma \ref{lem:mainlemma}), the summation in \eqref{eq:lambdarec} is not, due to the reversed ordering of the summands.
\section{Small-$\delta$ Analysis}
We now proceed with the asymptotic analysis of $\bm{\lambda}_{k}^{(\delta)}$ in the regime of small $\delta$. As seen in~\cite{MattaSayedCoopGraphSP2018}, to deal with this asymptotic behavior in the adaptation context, we first introduce a steady-state vector $\bm{\lambda}^{(\delta)}_k$ that already embodies the effect of summing an infinite number of terms. Only then do we characterize the asymptotic behavior of the steady-state random vector $\bm{\lambda}^{(\delta)}_k$ as $\delta$ goes to zero. The results that follow rely on Lemma~\ref{lem:mainlemma}, which is stated in Appendix~\ref{ap:lem}.
\subsection{Consistent Social Learning}
\begin{theorem}[Consistency of ASL]
\label{theor:weaklaw}
Under Assumptions~\ref{assum:initbel} and~\ref{assum:integrable}, we have the following convergence:
\begin{equation}
\bm{\lambda}^{(\delta)}_k\stackrel{\delta\rightarrow 0}{\longrightarrow}{\sf m}_{\mathrm{ave}}~~\textnormal{ in probability}.
\label{eq:wlawintermediate}
\end{equation}
Since under Assumption~\ref{assum:globo} all entries of ${\sf m}_{\mathrm{ave}}$ are strictly positive, Eq. \eqref{eq:wlawintermediate} implies that for all $\theta\neq\theta_0$:
\begin{equation}
\boxed{
\lim_{\delta\rightarrow 0}p_k^{(\delta)}=0
}
\end{equation} i.e., each agent learns the truth as $\delta\rightarrow 0$.\QED
\end{theorem}
The result of Theorem~\ref{theor:weaklaw} relies on the weak law of small step-sizes proved in Lemma~\ref{lem:mainlemma}, part $3)$. This result requires the existence of the first moments $d_{\ell}(\theta)$, which is guaranteed by Assumption~\ref{assum:integrable}. Moreover, it requires that ${\sf m}_{\mathrm{ave}}(\theta)>0$ for all $\theta\neq\theta_0$, which is ensured by Assumption~\ref{assum:globo} and the strict-positivity of the Perron eigenvector.
\subsection{Normal Approximation for Small $\delta$}
Let us examine the behavior of the first two moments of the log-belief ratios. In view of Lemma~\ref{lem:mainlemma}, part $2)$, we conclude that the expectation of the steady-state random vector $\bm{\lambda}^{(\delta)}_k$ can be expressed as:
\begin{equation}
{\sf m}^{(\delta)}_k(\theta)\triangleq \mathbb{E}\left[\bm{\lambda}^{(\delta)}_k(\theta)\right]
={\sf m}_{\mathrm{ave}}(\theta)+O(\delta),
\label{eq:mean}
\end{equation}
where $O(\delta)$ is a quantity such that the ratio $O(\delta)/\delta$ remains bounded as $\delta\rightarrow 0$.
Likewise, using part $4)$ of Lemma~\ref{lem:mainlemma}, we conclude that the covariance of the steady-state random vector $\bm{\lambda}^{(\delta)}_k$ results in:
\begin{IEEEeqnarray}{rCl}
c^{(\delta)}_k(\theta,\theta')&\triangleq& \mathbb{E}\left[
\Big(\bm{\lambda}^{(\delta)}_k(\theta)-{\sf m}^{(\delta)}_k(\theta)\Big)
\Big(\bm{\lambda}^{(\delta)}_k(\theta')-{\sf m}^{(\delta)}_k(\theta')\Big)
\right]\nonumber\\
&=&
{\frac{{\sf c}_{\mathrm{ave}}(\theta,\theta')}{2}}\,\delta
+O(\delta^2).
\label{eq:covar}
\end{IEEEeqnarray}
Note that \eqref{eq:mean} and \eqref{eq:covar} can be rewritten in vector and matrix form, respectively as:
\begin{equation}
{\sf m}^{(\delta)}_k={\sf m}_{\mathrm{ave}}+O(\delta),\quad
{\sf C}^{(\delta)}_k={\frac{{\sf C}_{\mathrm{ave}}}{2}}\,\delta+O(\delta^2)
\label{eq:meancovarmat}
\end{equation}
where ${\sf C}^{(\delta)}_k=[c^{(\delta)}_k(\theta,\theta')]$ and ${\sf C}_{\mathrm{ave}}=[{\sf c}_{\mathrm{ave}}(\theta,\theta')]$.
The first equation in \eqref{eq:meancovarmat} shows that the expectation vector of the steady-state log-belief ratios, ${\sf m}^{(\delta)}_k$, approximates, for small $\delta$, the expectation vector of the average log-likelihood ratios, ${\sf m}_{\mathrm{ave}}$. Moreover, the second equation in \eqref{eq:meancovarmat} reveals that the covariance matrix of the steady-state log-belief ratios, ${\sf C}^{(\delta)}_k$, goes to zero as ${\sf C}_{\mathrm{ave}} \,\delta/2$, where ${\sf C}_{\mathrm{ave}}$ is the covariance matrix of the average log-likelihood ratios.
\begin{theorem}[Asymptotic normality]
\label{theor:CLT}
Assume that the data $\{\bm{\xi}_{k,i}\}$ are independent across the agents (recall that they are always assumed i.i.d. over time), and that the log-likelihood ratios have finite variance.
Then, under Assumptions~\ref{assum:initbel},~\ref{assum:integrable} and~\ref{assum:globo}, the following convergence in distribution holds:
\begin{equation}
\boxed{
\frac{\bm{\lambda}_k^{(\delta)} - {\sf m}_{\mathrm{ave}}}{\sqrt{\delta}}
\stackrel{\delta\rightarrow 0}{\rightsquigarrow} {\mathscr{G}\left(0,\frac{{\sf C}_{\mathrm{ave}}}{2}\right)}
}
\label{eq:CLTstatement2}
\end{equation}
where $\mathscr{G}(0,C)$ is a zero-mean multivariate Gaussian with covariance matrix equal to $C$.\QED
\end{theorem}
The result in Theorem \ref{theor:CLT} comes from Lemma \ref{lem:mainlemma}, part 5). As $\delta\rightarrow 0$, Theorem~\ref{theor:CLT} suggests the approximation:
\begin{equation}
\bm{\lambda}^{(\delta)}_k\approx{\mathscr{G}\left({\sf m}_{\mathrm{ave}}, \frac{{\sf C}_{\mathrm{ave}}}{2}\,\delta\right)}.
\label{eq:CLTfirstapp}
\end{equation}
{Using the expressions in \eqref{eq:mean} and \eqref{eq:covar} instead of the limiting ${\sf m}_{\mathrm{ave}}$ and ${\sf C}_{\mathrm{ave}}\,\delta/2$, we get the alternative approximation:
\begin{equation}
\bm{\lambda}^{(\delta)}_k\approx\mathscr{G}\left({\sf m}^{(\delta)}_k, {\sf C}^{(\delta)}_k\right),
\label{eq:CLTsecondapp}
\end{equation}
which can capture different performance across agents, since ${\sf m}^{(\delta)}_k$ and ${\sf C}^{(\delta)}_k$ depend on $k$.}
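The $\delta$-scaling predicted by \eqref{eq:mean} and \eqref{eq:covar} can be sanity-checked by Monte Carlo simulation of the scalar series underlying the analysis (an illustrative sketch with Gaussian data, not from the paper; here the weights $\alpha_m$ are set to $1$):

```python
import numpy as np

rng = np.random.default_rng(1)
delta, sigma, n_mc, T = 0.05, 1.0, 20_000, 400

# Steady-state series s(delta) = delta * sum_m (1-delta)^m z_m,
# with i.i.d. z_m ~ N(m_z, sigma^2) and m_z = 1; T terms suffice since
# (1-delta)^T is negligible for these values.
w = delta * (1 - delta) ** np.arange(T)
z = rng.normal(1.0, sigma, size=(n_mc, T))
s = z @ w

# Expected: mean -> m_z = 1, variance -> sigma^2 * delta / 2 (to first order).
print(s.mean(), s.var())
```

The empirical mean concentrates around $m_z$ while the empirical variance shrinks like $\sigma^2\delta/2$, matching the first-order expressions above.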
\section{Simulation Results}
\label{sec:1}
{We consider the network topology displayed in Fig. \ref{fig:network} (additionally, we allow a self-loop for each agent).
The} combination matrix is designed using an averaging rule, resulting in a left-stochastic matrix \cite{Sayed}.
\vspace{-10pt}
\begin{figure}[htb]
\centering
\includegraphics[width=2.5in]{figs/network.pdf}
\caption{{Network topology with $10$ agents: agent $1$ is highlighted.}}
\label{fig:network}
\end{figure}
\vspace{-5pt}
The network is faced with the following statistical learning problem.
We consider a family of Laplace likelihood functions with scale parameter equal to $1$, and with different expectations parametrized with $n\in\{1,2,3\}$ as follows:
\begin{equation}
f_n(\xi)=\frac{1}{2}\exp\left\{-|\xi-0.5 n|\right\}.
\label{eq:Gausspdf}
\end{equation}
We assume that the inference problem is {\em not locally identifiable} since we consider the setup in Table \ref{tab:id} for each agent's family of likelihood functions.
\vspace{-10pt}
\begin{table}[htbp]
\def\arraystretch{1.3}%
\caption{Identifiability setup for the network in Fig. \ref{fig:network}.}
\begin{center}
\begin{tabular}{|p{0.1\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|}
\hline
\multirow{2}{*}{\textbf{Agent} $k$}&\multicolumn{3}{|c|}{\textbf{Likelihood Function}: $L_k(\theta)$} \\
\cline{2-4}
&$\theta=1$ & $\theta=2$& $\theta=3$\\
\hline
$1-3$& $f_1(\xi)$& $f_1(\xi)$& $f_3(\xi)$ \\
\hline
$4-6$& $f_1(\xi)$& $f_3(\xi)$& $f_3(\xi)$ \\
\hline
$7-10$& $f_1(\xi)$& $f_2(\xi)$& $f_1(\xi)$ \\
\hline
\end{tabular}
\label{tab:id}
\end{center}
\vspace{-15pt}
\end{table}
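For unit-scale Laplace densities, the KL divergence between two likelihoods with means $m_0$ and $m_1$ admits the closed form $|m_0-m_1| + e^{-|m_0-m_1|} - 1$ (a standard identity for equal scales). The sketch below, offered only as an illustration, evaluates the divergences $d_k(\theta)$ for agents $1$--$3$ of Table~\ref{tab:id} with $\theta_0=1$:

```python
import numpy as np

def kl_laplace(m0, m1, b=1.0):
    """KL divergence KL(Laplace(m0, b) || Laplace(m1, b)) for equal scales b."""
    t = abs(m0 - m1) / b
    return t + np.exp(-t) - 1.0

# Means 0.5 * n for n = 1, 2, 3, as in the Laplace family above; theta_0 = 1.
means = {1: 0.5, 2: 1.0, 3: 1.5}

# Agents 1-3 of Table 1 use L(1) = f1, L(2) = f1, L(3) = f3.
agent1_models = {1: means[1], 2: means[1], 3: means[3]}
d_agent1 = {th: kl_laplace(means[1], m) for th, m in agent1_models.items()}
print(d_agent1)  # d(2) = 0: theta = 2 is locally indistinguishable from theta_0
```

The zero divergence at $\theta=2$ makes the lack of local identifiability for these agents explicit; consistency is recovered only through cooperation, as Assumption~\ref{assum:globo} requires.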
\subsection{Consistency}
We consider that all agents are running the ASL algorithm for a fixed $\theta_0=1$ over $10000$ time samples (after which we consider that they achieved the steady state). From Theorem~\ref{theor:weaklaw}, we saw that as $\delta$ approaches zero, all agents $k$ are able to consistently learn --- see \eqref{eq:wlawintermediate}.
To show this effect, for each value of $\delta$ (50 sample points in the interval $\delta\in[0.001,1]$ are taken), we consider a different realization of the observations. In Fig.~\ref{fig:theo1}, for agent $1$ and $\theta=2, 3$, we show how the log-belief ratios $\bm{\lambda}_1^{(\delta)}(\theta)$ behave for decreasing values of $\delta$.
We see the effect of the {weak law of small step-sizes}, since the limiting log-belief ratios tend to concentrate around ${\sf m}_{\sf ave}$.
\vspace{-5pt}
\begin{figure}[htb]
\centering
\includegraphics[width=3in]{figs/consist_ex.pdf}
\caption{Evolution of steady-state log-belief ratios for agent $1$ as $\delta\rightarrow 0$.}
\label{fig:theo1}
\end{figure}
\vspace{-5pt}
\subsection{Asymptotic Normality}
From Theorem~\ref{theor:CLT}, we saw that we can approximate the steady-state log-belief ratios by a multivariate Gaussian, see Eqs. \eqref{eq:CLTfirstapp} and \eqref{eq:CLTsecondapp}. In Fig.~\ref{fig:theo2}, we display the log-belief ratios for instant $i=10000$. The experiment is repeated over $100$ Monte Carlo runs, such that we obtain $100$ realizations of the steady-state variable $\bm{\lambda}_k^{(\delta)}$.
Moreover, we consider four decreasing values of $\delta$.
\begin{figure}[htb]
\centering
\includegraphics[width=3.2in]{figs/Gaussian_ex.pdf}
\caption{Distribution of data samples at steady state compared with the limiting and empirical Gaussian distributions for decreasing $\delta$.}
\label{fig:theo2}
\end{figure}
In dashed blue lines we see the ellipses representing the confidence intervals relative to one and two standard deviations computed for the empirical Gaussian approximation seen in \eqref{eq:CLTsecondapp}: the smaller ellipse encompasses approximately $68\%$ of the samples whereas the larger ellipse encompasses $95\%$. In red dotted lines, we see the corresponding ellipses for the limiting theoretical Gaussian approximation seen in \eqref{eq:CLTfirstapp}, with the red cross indicating the limiting theoretical expectation ${\sf m}_{\sf ave}$. Note how as $\delta$ decreases, the ellipses tend to be smaller, which is in accordance with the scaling of the covariance matrices by $\delta$ in \eqref{eq:CLTfirstapp} and \eqref{eq:CLTsecondapp}, and the distributions tend to overlap, which is in accordance with the behavior predicted by Theorem~\ref{theor:CLT}.
\vspace{-5pt}
\section{Conclusion}
In this paper, we proposed the Adaptive Social Learning strategy as a way to overcome the significant inertia that classic social learning exhibits under nonstationary conditions. We first characterized the behavior of the algorithm at steady state, showing that the log-belief ratios converge in distribution to a stable random vector. Then, by exploring the regime of small $\delta$, we established the learning consistency of the algorithm and the limiting Gaussian behavior of the steady-state log-belief ratios.
\begin{appendices}
\section{}\label{ap:lem}
\begin{lemma}[Asymptotic properties of a useful random series]
\label{lem:mainlemma}
For $m=0,1,\ldots$, let $\{\bm{z}_m\}$ be a sequence of i.i.d. integrable random variables with ${\sf m}_z\triangleq\mathbb{E}\left[\bm{z}_m\right]$ and $
{\sf m}^{\mathrm{abs}}_z\triangleq\mathbb{E}\left[|\bm{z}_m|\right]<\infty$. Let also $0<\delta<1$, and consider the following partial sums:
\begin{equation}
\bm{s}_i(\delta)=\delta
\sum_{m=0}^{i}(1-\delta)^m \alpha_m \bm{z}_m,
\label{eq:randomseries}
\end{equation}
where $0<\alpha_m<1$, with $\alpha_m$ converging to some value $\alpha$ and obeying the following upper bound for all $m$:
\begin{equation}
|\alpha_m - \alpha| \leq \kappa \beta^m,
\label{eq:exprate}
\end{equation}
for some constant $\kappa>0$ and for some $0<\beta<1$.
Then we have the following asymptotic properties.
\begin{enumerate}
\item
{\bf Steady-state stability}. The partial sums in \eqref{eq:randomseries} are almost-surely absolutely convergent, namely, we can define the (almost-surely) convergent series:
\begin{eqnarray}
\bm{s}^{\mathrm{abs}}(\delta)&\triangleq&\delta\sum_{m=0}^{\infty}(1-\delta)^m \alpha_m |\bm{z}_m|,
\label{eq:limitdef1}\\
\bm{s}(\delta)&\triangleq&\delta\sum_{m=0}^{\infty}(1-\delta)^m \alpha_m \bm{z}_m.
\label{eq:limitdef2}
\end{eqnarray}
\item
{\bf First moment}. The expectation of $\bm{s}(\delta)$ is:
\begin{equation}
\mathbb{E}[\bm{s}(\delta)]={\sf m}_z\delta\sum_{m=0}^{\infty}(1-\delta)^m \alpha_m=\alpha\,{\sf m}_z + O(\delta),
\label{eq:expeclemma}
\end{equation}
where $O(\delta)$ is a quantity such that the ratio $O(\delta)/\delta$ remains bounded as $\delta\rightarrow 0$.
\item
{\bf Weak law of small step-sizes}.
The series $\bm{s}(\delta)$ converges to $\alpha\,{\sf m}_z$ in probability as $\delta\rightarrow 0$, namely, for all $\epsilon>0$ we have that:
\begin{equation}
\lim_{\delta\rightarrow 0}\P\left[|\bm{s}(\delta)-\alpha\,{\sf m}_z|>\epsilon\right]=0.
\label{eq:weaklawequ}
\end{equation}
\item
{\bf Second moment}. If $\sigma^2_z \triangleq{\sf VAR}[\bm{z}_m]<\infty$,\label{eq:VARsingle} then:
\begin{eqnarray}
{\sf VAR}[\bm{s}(\delta)]&=&
\sigma^2_z\delta^2\sum_{m=0}^{\infty}(1-\delta)^{2m} \alpha^2_m\nonumber\\
&=&\frac{\alpha^2\sigma^2_z}{2}\,\delta+O(\delta^2).
\label{eq:VARlemma}
\end{eqnarray}
\item
{\bf Asymptotic normality}. If $\bm{z}_m$ has finite variance $\sigma^2_z$, then the following convergence in distribution holds:
\begin{equation}
\frac{\bm{s}(\delta)-\alpha\,{\sf m}_z}{\sqrt{\delta}}\stackrel{\delta\rightarrow 0}{\rightsquigarrow}
\mathscr{G}\Big(0,\alpha^2\sigma^2_z/2\Big),
\label{eq:CLTlemma}
\end{equation}
and, hence, $\bm{s}(\delta)$ is asymptotically normal as $\delta\rightarrow 0$.
\end{enumerate}
\end{lemma}
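\noindent\emph{Remark (sketch of the leading-order moments).} Replacing $\alpha_m$ by its limit $\alpha$, which is admissible to leading order in view of \eqref{eq:exprate}, the variance in \eqref{eq:VARlemma} reduces to a geometric series:
\[
\sigma^2_z\,\delta^2\sum_{m=0}^{\infty}(1-\delta)^{2m}\alpha^2
=\frac{\alpha^2\sigma^2_z\,\delta^2}{1-(1-\delta)^2}
=\frac{\alpha^2\sigma^2_z\,\delta}{2-\delta}
=\frac{\alpha^2\sigma^2_z}{2}\,\delta+O(\delta^2).
\]
The same computation with $(1-\delta)^{m}$ in place of $(1-\delta)^{2m}$ gives $\delta\sum_{m=0}^{\infty}(1-\delta)^{m}\alpha=\alpha$, consistent with the leading term in \eqref{eq:expeclemma}.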
\end{appendices}
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>image-retrieval on Dang-Khoa's blog </title>
<link>http://dangkhoasdc.github.io/tags/image-retrieval/</link>
<description>Recent content in image-retrieval on Dang-Khoa's blog </description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
    <copyright>&#169; 2017 Dang-Khoa</copyright>
<lastBuildDate>Tue, 12 Sep 2017 00:00:00 +0000</lastBuildDate>
<atom:link href="http://dangkhoasdc.github.io/tags/image-retrieval/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Hashing in content-based image retrieval</title>
<link>http://dangkhoasdc.github.io/project/deep_hashing/</link>
<pubDate>Tue, 12 Sep 2017 00:00:00 +0000</pubDate>
<guid>http://dangkhoasdc.github.io/project/deep_hashing/</guid>
      <description>Since I started working at SUTD, I have been working on content-based image retrieval, a very active topic in academia and industry. So far, I have completed the following works:
Image hashing: a joint optimization for image embedding and hashing (CVPR17 paper). Deep Image Retrieval: a scheme for unified deep learning and image retrieval (ACM MM17 paper). Mobile version of Triangulation Embedding. Deep Hashing Image (in progress). </description>
</item>
</channel>
</rss>
My boyfriend has a 1991 Cadillac Seville. How hard is it to change the brakes on this kind of car? My dad can do a regular car but wasn't sure about this one. Would it be easier just to buy the parts and take it to a mechanic and pay more?
How often do I need to change the oil in my Volkswagen Passat?
Hi! I have a 2006 Volkswagen Passat. I had previously thought that I need to change my oil every 3 months or 3,000 miles. I read online today that I only need to change my oil every 5,000 miles or at least once a year. Can someone clarify that for me?
I'm looking to buy my first car and I want it to be one that lasts. I know that VW is a really good brand and that it will last me a while, but I don't really know about Suzuki. And I like that the GV is an SUV. So my question to you is: how good is Suzuki? Is it dependable? Will it last me?
Knocking noises at load change from pendulum support. Tip: If vehicle is equipped with 6 speed manual transmission (MQ350), see Technical bulletin Instance number 2015981 in ElsaWeb. Technical Background Connection between pendulum support and gearbox may be loose. Engine vibrations affect the connection and may loosen the bolts. Production Solution No production change required.
This kit is intended for use on 1993 and newer XR/CRF100's using CDI ignition. Engine assembly requires special attention to detail. A new BBR cylinder and piston kit has been supplied. The following measurements are critical to proper installation and performance.
Before proceeding with the installation, it is important to know that to validate the 2 year, 100K warranty on your new JR supercharger, you must completely fill out the Moss Motors / Jackson Racing warranty card that comes in every kit, including serial number which is on a small white 'bar code' label on the body of the supercharger. Write down all of the numbers which appear on that label in the appropriate space on the warranty card. Be certain to do this now because once your supercharger is installed, it may be almost impossible to retrieve that serial number.
Shorthand function to execute a HTTP GET request.
----------------------------------------------------------------------
## Usage
leaf.http.get(url, [options]).then([success], [failure])
### Params
| Param | Type | Details |
| --------------- | ------------- | -------------------------------- |
| url | `string` | The URL. |
| options | `Object` | The request options. |
### Options
| Options | Type | Details |
| --------------- | ------------- | -------------------------------- |
| headers | `Array` | The request headers. |
| method | `string` | The request method. |
| password | `string` | The password. |
| url | `string` | The URL. |
| username | `string` | The username. |
----------------------------------------------------------------------
## Example
<html>
<body>
<script src="scripts/leaf.min.js"></script>
<script>
leaf.http.get('people.html').then(
function(data) {
console.log('People.html was requested successfully.');
},
function(status) {
console.log('An error occurred requesting people.html.');
}
);
</script>
</body>
</html>
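For completeness, here is a sketch of the same request with the documented options applied (the header string, username, and password below are placeholder values, not defaults required by the library):

    <html>
    <body>
    <script src="scripts/leaf.min.js"></script>
    <script>
    leaf.http.get('people.html', {
    headers: ['Accept: text/html'], // an Array, per the options table
    username: 'user',               // placeholder credentials
    password: 'secret'
    }).then(
    function(data) {
    console.log('People.html was requested with options successfully.');
    },
    function(status) {
    console.log('An error occurred requesting people.html.');
    }
    );
    </script>
    </body>
    </html>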
The payroll register of Patel Engineering Co. indicates $2,970.00 of social security withheld and $742.50 of Medicare tax withheld on total salaries of $49,500 for the period. Assume earnings subject to state and federal unemployment compensation taxes are $15,300, at the federal rate of 0.8% and the state rate of 5.4%.

Provide the journal entry to record the payroll tax expense for the period. If an amount box does not require an entry, leave it blank. Round to two decimal places.
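A quick sketch of the arithmetic behind the requested entry (assuming the standard treatment where the employer's payroll tax expense is the matching social security and Medicare amounts plus the state and federal unemployment taxes; the variable names are illustrative):

```python
# Employer payroll tax expense for the period (figures from the question).
social_security = 2970.00     # employer's matching social security tax
medicare = 742.50             # employer's matching Medicare tax
unemployment_base = 15300.00  # earnings subject to unemployment taxes

futa = round(unemployment_base * 0.008, 2)  # federal rate: 0.8%
suta = round(unemployment_base * 0.054, 2)  # state rate: 5.4%
payroll_tax_expense = round(social_security + medicare + suta + futa, 2)

print(futa, suta, payroll_tax_expense)  # 122.4 826.2 4661.1
```

The journal entry would then debit Payroll Tax Expense for the total and credit the individual tax payable accounts for the amounts above.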
Q: Currently having trouble with lists, strings and integers. I'm trying to challenge myself by coding a version of the card game 21, but I'm having trouble with the strings and integers. My counter also doesn't work, and that is beyond me.
I've tried defining everything as an integer.
define
cardnum1 = " "
cardnum2 = " "
print("the dealer shuffles the cards and deals the first hand")
import random
list = ["2","3","4","5","6","7","8","9","10","Jack","Queen","King","Ace"]
#1st card
card1 = random.choice(list)
print("your first card is ",card1)
#numbers
if card1 == ["2","3","4","5","6","7","8","9","10"]:
card1=int(cardnum1)
elif card1 == "Jack":
cardnum1=int(10)
elif card1 == "Queen":
cardnum1=int(10)
elif card1 == "King":
cardnum1=int(10)
elif card1 == "Ace":
cardnum1=input("do you want your Ace to be a 1 or a 11?")
card2 = random.choice(list)
print("your second card is ",card2)
#numbers 1 - 10
if card2 == ["2","3","4","5","6","7","8","9","10"]:
card1=int(cardnum1)
#special cards
elif card2 == "Jack":
cardnum2=int(10)
elif card2 == "Queen":
cardnum2=int(10)
elif card2 == "King":
cardnum2=int(10)
elif card2 == "Ace":
cardnum2=input("do you want your Ace to be a 1 or a 11?")
print ("your cards combined are ",cardnum1 + cardnum2)
I'm trying to randomly get 2 cards and then have the program tell me how far I am from 21 and whether I've bust or not.
cardnum1 and cardnum2 don't seem to add properly.
I get the error message:
TypeError: can only concatenate str (not "int") to str
A: The error mentioned is due to the fact that in some circumstances one of those variables is a string and the other is an integer.
Actually, you have several problems:
*
*list is a built-in in Python; do not use the word list as a variable.
*if card1 == ["2","3","4","5","6","7","8","9","10"] is wrong. To check if a variable is in a list, do: if card1 in ["2","3","4","5","6","7","8","9","10"]. Note the in instead of the ==.
*card1=int(cardnum1) does not make sense if cardnum1 is " ". You probably have them switched: cardnum1 = int(card1).
*cardnum1=int(10) is not wrong, but 10 is already an integer. the int() is redundant, totally not needed.
*cardnum2=input("do you want your Ace to be a 1 or a 11?") I suppose here you expect the user to type 1 or 11. If you are using python3, input always return a string, so you must convert it to an integer: cardnum2=int(input("do you want your Ace to be a 1 or a 11?")). If you are using python2 instead is fine.
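To make the first bullet concrete, a minimal illustration of why the check needs in rather than ==:

```python
card1 = "7"
values = ["2", "3", "4", "5", "6", "7", "8", "9", "10"]

print(card1 == values)  # False: a single string never equals a whole list
print(card1 in values)  # True: 'in' tests membership in the list
```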
A: You should use a dict to map numbers to card names, and you can choose 2 cards using random.choices:
import random
cards = {
1: "Ace",
2: "2",
3: "3",
4: "4",
5: "5",
6: "6",
7: "7",
8: "8",
9: "9",
10: "10",
11: "Jack",
12: "Queen",
13: "King"
}
print("the dealer shuffles the cards and deals the first hand")
card_num1, card_num2 = random.choices(list(cards), k=2)
card1, card2 = cards[card_num1], cards[card_num2]
print("your first card is ",card1)
print("your second card is ",card2)
And to put them all into one sentence easily you can use .format() or f-strings in Python 3.6+.
print(f"Your cards combined are {card1} {card2}")
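One stdlib detail worth noting about the code above: random.choices samples with replacement, so the same card can be drawn twice; random.sample draws without replacement:

```python
import random

random.seed(0)  # fixed seed for reproducibility
keys = list(range(1, 14))  # 1..13, like the dict keys above

with_repl = random.choices(keys, k=2)    # duplicates possible
without_repl = random.sample(keys, k=2)  # duplicates impossible

print(len(without_repl), len(set(without_repl)))  # 2 2
```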
A: As I explained in my comment, you were incorrectly checking for list inclusion and you were setting the wrong variables in the if statements. Here's your code fixed:
import random
cardnum1 = " "
cardnum2 = " "
print("the dealer shuffles the cards and deals the first hand")
list = ["2", "3", "4", "5", "6", "7", "8",
"9", "10", "Jack", "Queen", "King", "Ace"]
# 1st card
card1 = random.choice(list)
print("your first card is ", card1)
# numbers
if card1 in ["2", "3", "4", "5", "6", "7", "8", "9", "10"]:
cardnum1 = int(card1)
elif card1 == "Jack":
cardnum1 = int(10)
elif card1 == "Queen":
cardnum1 = int(10)
elif card1 == "King":
cardnum1 = int(10)
elif card1 == "Ace":
    cardnum1 = int(input("do you want your Ace to be a 1 or a 11?"))
card2 = random.choice(list)
print("your second card is ", card2)
# numbers 1 - 10
if card2 in ["2", "3", "4", "5", "6", "7", "8", "9", "10"]:
cardnum2 = int(card2)
# special cards
elif card2 == "Jack":
cardnum2 = int(10)
elif card2 == "Queen":
cardnum2 = int(10)
elif card2 == "King":
cardnum2 = int(10)
elif card2 == "Ace":
    cardnum2 = int(input("do you want your Ace to be a 1 or a 11?"))
print("your cards combined are ", cardnum1 + cardnum2)
A: While others have given working versions of your program, and pointed out the issues with your version, let me offer a simple alternative of how you could do it:
import random
deck = ['Ace', 2, 3, 4, 5, 6, 7, 8, 9, 10, 'Jack', 'Queen', 'King']
card1 = random.choice(deck)
card2 = random.choice(deck)
def card_value(card):
if card == 'Ace':
chosen_value = input("do you want your Ace to be a 1 or a 11?")
return int(chosen_value)
elif card in ['Jack', 'Queen', 'King']:
return 10
else:
# Return the integer as it is
return card
print("Your first card is", card1)
print("Your second card is", card2)
print("Your cards combined are ", card_value(card1) + card_value(card2))
The only major difference between this version and yours is that I use a card_value helper method to derive the value of the card as an integer. If the card value is already an integer, it is just returned as-is.
P.S:
Welcome to the world of programming, and it's super cool that you are trying to challenge yourself this early. Don't worry about the downvotes to your question; It happens. But you can avoid it by taking a methodical approach to your problem:
*
*First write down an algorithm in plain english or pseudocode
*Code and test each stage of the algorithm
*Write tests that check expected output with actual output
In the beginning, you may also find it beneficial to invest time and learn how to think like a programmer, than dig deep into any one language. See these links to know what I mean:
*
*https://zapier.com/blog/think-like-a-programmer/
*https://www.amazon.com/Think-Like-Programmer-Introduction-Creative/dp/1593274246
*https://medium.freecodecamp.org/how-to-think-like-a-programmer-lessons-in-problem-solving-d1d8bf1de7d2
\section{Conclusion}
We propose a new global adversarial example pair concept and formulate the corresponding global adversarial attack problem to assess the robustness of DNNs over the entire input space without human data labeling. We further propose two families of global adversarial attack methods: (\textbf{1}) alternating gradient global adversarial attacks and (\textbf{2}) extreme-value-guided MCMC sampling global attack (GEVMCMC), demonstrating that DNN models, even when trained with local adversarial training, are vulnerable to this new type of global attack. Our attack methods are able to generate diverse and intriguing global adversarial examples, which are very different from typical local attacks and should be taken into consideration when training a robust model. GEVMCMC demonstrates the overall best performance among all proposed global attack methods due to its probabilistic nature.
\subsubsection*{Acknowledgments}
This material is based upon work supported by the Semiconductor Research Corporation (SRC) under Task 2810.024.
The authors would like to thank High Performance Research Computing (HPRC) at Texas A\&M University for providing computing support. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of SRC, Texas A\&M University, and their contractors.
\fi
\bibliographystyle{unsrtnat}
\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) have been applied to many applications including safety-critical tasks such as autonomous driving \citep{luckow2016AutoDL} and unmanned autonomous vehicles (UAVs) \citep{carrio2017UAVreview}, which demand high robustness of decision making. However, recently it has been shown that DNNs are susceptible to attacks by adversarial examples \citep{goodfellow2015explaining}. For image classification, for example, adversarial examples may be generated by adding crafted perturbations indistinguishable to human eyes to legitimate inputs to alter the decision of a trained DNN into an incorrect one. Several studies attempt to reason about the underlying causes of susceptibility of deep neural networks towards adversarial examples, for instance, ascribing vulnerability to linearity of the model \citep{goodfellow2015explaining}
or flatness/curvedness of the decision boundaries \citep{moosavi2017analysis}. A widely agreed consensus is certainly desirable and remains the subject of ongoing research.
\paragraph{Target global adversarial attack problem.}
The main objectives of this work are to reveal potential vulnerability of DNNs by presenting a new type of attacks, namely \emph{global adversarial attacks}, propose methods for generating such global attacks, and finally demonstrate that DNNs enhanced by conventional (local) adversarial training exhibit little defense to the proposed global adversarial examples. While several adversarial attack methods were proposed \citep{goodfellow2015explaining,papernot2016limitations,carlini2016towards,kurakin2016adversarial,madry2018towards} in recent literature, we refer to these methods as \emph{local adversarial attack methods} as they all aim to solve the \emph{local adversarial attack problem} defined as follows.
\begin{definition}
\textbf{Local adversarial attack problem}. Given an input space $\Omega$, one legitimate input example $\mathbf{x}\in\Omega$ with label $y\in\mathcal{Y}$, and a trained DNN $f:\Omega\rightarrow\mathcal{Y}$, find another (adversarial) input example $\mathbf{x}^\prime \in\Omega$ within a radius of $\epsilon$ around $\mathbf{x}$ under a distance measure defined by a norm function $\left\|\cdot\right\|:\left\{\mathbf{x}_a-\mathbf{x}_b\left|\mathbf{x}_a\in\Omega,\mathbf{x}_b\in\Omega\right.\right\}\rightarrow\mathbb{R}_{\geq 0}$ such that $f\left(\mathbf{x}^\prime\right)\neq y, \left\|\mathbf{x}^\prime-\mathbf{x}\right\|\leq\epsilon$.
\end{definition}
Typically, the above problem is solved via optimization governed by a loss function, $\mathcal{L}:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}$, measuring the difference between the predicted label $f\left(\mathbf{x}^\prime\right)$ and $y$:
\begin{equation}
\mathbf{x}^{\prime*}=\argmax_{\left\|\mathbf{x}^\prime-\mathbf{x}\right\|\leq\epsilon, \mathbf{x}^\prime\in\Omega}\mathcal{L}\left(f\left(\mathbf{x}^\prime\right),y\right).
\end{equation}
Importantly, the above problem formulation has two key limitations.
\textbf{(1)} It is deemed \emph{local} in the sense that it only examines model robustness inside a \emph{local} region centered at each given input $\mathbf{x}$, which in practice is chosen from the training or testing dataset. As such, local adversarial attacks are not adequate, since evaluating the DNN robustness around the training and testing data does not provide a complete picture of robustness globally, i.e., in the entire space $\Omega$. On the other hand, assessment of global robustness is essential, e.g., for safety-critical applications. \textbf{(2)} Local attack methods assume that for each clean example $\mathbf{x}$ the label $y$ is known. As a result, they are inapplicable for attacking the DNN around locations where no labeled data are available.
In this paper, we propose a notion of global DNN robustness and a global adversarial attack problem formulation to assess it.
We evaluate global robustness of a given DNN by assessing the potential high sensitivity of its decision function with respect to small input perturbation leading to change in the predicted label, globally in the entire $\Omega$. By solving the global attack problem, we generate multiple \emph{global adversarial example pairs} such that each pair of two examples are close to each other but have different labels predicted by the DNN. The detailed definitions are presented in Section \ref{sec:gadv}.
\paragraph{Related works.}
Apart from the local adversarial attacks, several other approaches for DNN robustness evaluation have been reported.
The Lipschitz constant is utilized to bound DNNs' vulnerability to adversarial attacks \citep{tsuiwei2018clever,yusuke2018lipschitz}. As argued in \citep{ian2018gradmask,todd2018liplimit}, however, there is currently no accurate method for estimating the Lipschitz constant, and the resulting overestimation can easily render its use impractical. \citep{yang2018gan,isaac2019gan} propose to train a generative model for generating unseen samples on which misclassification happens. However, the ground-truth labels of generated examples must be provided for final assessment, and these examples do not capture the model's vulnerability due to high sensitivity to small input perturbation.
\paragraph{Our contributions.}
We propose a new concept of global adversarial examples and several global attack methods. Specifically, we (\textbf{a}) propose a novel concept called global adversarial example pairs and formulate a global adversarial attack problem for assessing the model robustness over the entire input space without extra data labeling; (\textbf{b}) present two families of global adversarial attack methods: (\textbf{1}) alternating gradient adversarial attacks and (\textbf{2}) extreme-value-guided MCMC sampling attack, and demonstrate their effectiveness in generating global adversarial example pairs; (\textbf{c}) using the proposed global attack methods, demonstrate that DNNs hardened using strong projected gradient descent (PGD) based (local) adversarial training are vulnerable towards the proposed global adversarial example pairs, suggesting that global robustness must be considered while training DNNs.
\section{Global adversarial attacks}
\label{sec:gadv}
We formulate a new global adversarial attack problem as follows.
\begin{definition}
\textbf{Global adversarial attack problem}. Given an input space $\Omega$ and an DNN model $f:\Omega\rightarrow\mathcal{Y}$, find one or more global adversarial example pairs $\left(\mathbf{x}_1, \mathbf{x}_2\right)\in\Omega\times\Omega$ within a radius of $\epsilon$ under a distance measure defined by a norm function $\left\|\cdot\right\|:\left\{\mathbf{x}_a-\mathbf{x}_b\left|\mathbf{x}_a\in\Omega,\mathbf{x}_b\in\Omega\right.\right\}\rightarrow\mathbb{R}_{\geq 0}$ such that $f\left(\mathbf{x}_1\right)\neq f\left(\mathbf{x}_2\right), \left\|\mathbf{x}_1-\mathbf{x}_2\right\|\leq\epsilon$.
\end{definition}
When no confusion occurs, \emph{global adversarial example pair} and \emph{global adversarial examples} are used interchangeably throughout this paper.
The above problem formulation can be cast into an optimization problem w.r.t. a certain loss function $\mathcal{L}:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}$ in the following form:
\begin{equation}
\label{eqn:gadv}
\mathbf{x}_1^*,\mathbf{x}_2^*=\argmax_{\left\|\mathbf{x}_1-\mathbf{x}_2\right\|\leq\epsilon, \left(\mathbf{x}_1,\mathbf{x}_2\right)\in\Omega\times\Omega}\mathcal{L}\left(f\left(\mathbf{x}_1\right),f\left(\mathbf{x}_2\right)\right)
\end{equation}
For convenience of notation, we use $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ to denote $\mathcal{L}\left(f\left(\mathbf{x}_1\right),f\left(\mathbf{x}_2\right)\right)$.
The above definition and problem formulation have several favorable characteristics. A robust DNN model should be insensitive to small input perturbations. Therefore, two nearby inputs shall share the same (or similar) model output. Conversely, any large $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ value of two nearby inputs reveals a sharp transition over the decision boundary in a classification task or an unstable region in a regression task.
\begin{wrapfigure}{r}{0.4\linewidth}
\centering
\includegraphics[width=\linewidth]{gadv_ill.pdf}
\caption{Global vs. local adversarial examples.}
\label{fig:gadv}
\end{wrapfigure}
Loosely speaking, the maximum of $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ over the entire input space can serve as a measure of global model robustness. If it is larger than a preset threshold, the DNN may be deemed vulnerable towards global adversarial attacks. On the other hand, (\ref{eqn:gadv}) is a global optimization problem for which multiple near-optimal solutions may be reached by starting from different initial solutions or employing different optimization methods. Practically, any pair of two close inputs $\mathbf{x}_1,\mathbf{x}_2$ with different model predictions in the case of classification, or with a sufficiently large value of $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ in the case of regression, may be considered as a global adversarial example pair.
It is important to note that our problem formulation does not restrict adversarial examples to be around a certain input example, as in the case of the existing local adversarial attacks; it only sets an indistinguishable distance between a pair of two input examples in order to examine the entire input space, e.g., at locations far away from the training or testing dataset. Fig. \ref{fig:gadv} contrasts the conventional local adversarial examples with the proposed global adversarial examples. Importantly, our global attack formulation does not require additional labeled data; it directly measures the model's sensitivity to input perturbation and can be applied globally over the entirety of the input space.
We propose two families of attack methods to solve (\ref{eqn:gadv}) as a way of generating global adversarial examples: \textbf{1}) alternating gradient global adversarial attacks and \textbf{2}) extreme-value-guided MCMC sampling global adversarial attack, as discussed in Section \ref{sec:gbatk} and Section \ref{sec:sampleatk}, respectively.
\section{Alternating gradient global adversarial attacks}
\label{sec:gbatk}
\begin{wrapfigure}{r}{0.18\linewidth}
\centering
\includegraphics[width=\linewidth]{gbatk.pdf}
\caption{Alternating attack illustration.}
\label{fig:gbatk}
\end{wrapfigure}
For global adversarial example pair generation per (\ref{eqn:gadv}), a pair of two examples $\left(\mathbf{x}_1,\mathbf{x}_2\right)$ shall be optimized to maximize the loss $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ under a distance constraint. We propose a family of attack methods called alternating gradient global adversarial attacks, which proceed as follows: 1) start from an initial input pair; 2) fix the first example, and then attack (move) the second under the distance constraint while maximizing the loss $\mathcal{L}_f\left(\mathbf{x}_1,\mathbf{x}_2\right)$ using a gradient-based local adversarial attack method, referred to as a \emph{sub-attack method} here; 3) swap the roles of the first and (updated) second examples, i.e., fix the second example while attacking the first; 4) repeat this process for a number of iterations, as shown in Fig. \ref{fig:gbatk}.
Given a DNN model $f$ and a loss function $\mathcal{L}$, a sub-attack method can be characterized using a function $\mathbf{x}^\prime=f_{s\_attk}\left(\mathbf{x},y,\mathcal{R}_{\mathbf{x},\epsilon};f,\mathcal{L}\right)$ constructing an adversarial example $\mathbf{x}^\prime$ w.r.t example $\mathbf{x}$ and its corresponding label $y$, where $\mathcal{R}_{\mathbf{x},\epsilon}=\left\{\mathbf{x}+\delta|\left\|\delta\right\|\leq\epsilon\right\}$ specifies the region for adversarial sample generation. Suppose we start with an initial example pair $\left(\mathbf{x}_1^{\left(0\right)},\mathbf{x}_2^{\left(0\right)}\right)$, then we get the $\left(i+1\right)$-round example pair from the $\left(i\right)$-round sample pair by:
\begin{eqnarray}
\mathbf{x}_1^{\left(i+1\right)}&=&f_{s\_attk}\left(\mathbf{x}_1^{\left(i\right)},f\left(\mathbf{x}_2^{\left(i\right)}\right),\mathcal{R}_{\mathbf{x}_2^{\left(i\right)},\epsilon};f,\mathcal{L}\right)\label{eqn:ggb1}\\
\mathbf{x}_2^{\left(i+1\right)}&=&f_{s\_attk}\left(\mathbf{x}_2^{\left(i\right)},f\left(\mathbf{x}_1^{\left(i+1\right)}\right),\mathcal{R}_{\mathbf{x}_1^{\left(i+1\right)},\epsilon};f,\mathcal{L}\right)\label{eqn:ggb2}
\end{eqnarray}
Here, to attack one example, we use the other example's model prediction as the label when applying the sub-attack method $f_{s\_attk}$ so that the difference between two examples' model predictions is maximized. In the meanwhile, the search region for attacking one example is constrained (centered) by the other example. Hence, the distance between the two examples is always less than $\epsilon$. Equations \ref{eqn:ggb1} and \ref{eqn:ggb2} are invoked to alternately attack $\mathbf{x}_1^{\left(i\right)}$ and $\mathbf{x}_2^{\left(i\right)}$ to generate an updated example pair for the next round. As the global attack continues, multiple global adversarial sample pairs may be generated along the way.
\begin{algorithm}
\caption{Global PGD attack algorithm for classification.}
\label{algo:G-PGD}
\begin{algorithmic}[1]
\REQUIRE ~~ \\
Initial starting example pair $\left(\mathbf{x}_1^{\left(0\right)},\mathbf{x}_2^{\left(0\right)}\right)$; Example vector dimension $D$; distance constraint $\epsilon$;
Number of total rounds $N$; Number of sub-attack steps $S$; Step size $a$ for the sub-attack.\\
\ENSURE ~~
The set of generated global adversarial sample pairs $T$;
\STATE $T=\left\{\right\}$
\FOR{$i\leftarrow1$ to $N$}
\STATE Sample a uniform distribution perturbation $\delta\sim \mathrm{U}\left[-\epsilon,\epsilon\right]^D$ \label{algo:gpgd:noise1}
\STATE $\mathbf{x}_1^{\left(i\right)}\leftarrow Clip\left(\mathbf{x}_1^{\left(i-1\right)}+\delta,\mathcal{R}_{\mathbf{x}_2^{\left(i-1\right)},\epsilon}\right)$
\FOR{$j\leftarrow1$ to $S$}
\STATE $\mathbf{x}_1^{\left(i\right)}\leftarrow Clip\left(\mathbf{x}_1^{\left(i\right)}+a\cdot\mathrm{sign}\left(\nabla_{\mathbf{x}_1}\mathcal{L}_f\left(\mathbf{x}_1^{\left(i\right)},\mathbf{x}_2^{\left(i-1\right)}\right)\right),\mathcal{R}_{\mathbf{x}_2^{\left(i-1\right)},\epsilon}\right)$
\ENDFOR
\STATE Sample a uniform distribution perturbation $\delta\sim \mathrm{U}\left[-\epsilon,\epsilon\right]^D$ \label{algo:gpgd:noise2}
\STATE $\mathbf{x}_2^{\left(i\right)}\leftarrow Clip\left(\mathbf{x}_2^{\left(i-1\right)}+\delta,\mathcal{R}_{\mathbf{x}_1^{\left(i\right)},\epsilon}\right)$
\FOR{$j\leftarrow1$ to $S$}
\STATE $\mathbf{x}_2^{\left(i\right)}\leftarrow Clip\left(\mathbf{x}_2^{\left(i\right)}+a\cdot\mathrm{sign}\left(\nabla_{\mathbf{x}_2}\mathcal{L}_f\left(\mathbf{x}_2^{\left(i\right)},\mathbf{x}_1^{\left(i\right)}\right)\right),\mathcal{R}_{\mathbf{x}_1^{\left(i\right)},\epsilon}\right)$
\ENDFOR
\STATE $T\leftarrow T\cup \left\{\left(\mathbf{x}_1^{\left(i\right)},\mathbf{x}_2^{\left(i\right)}\right)\right\}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\paragraph{Choice of sub-attack methods.} We leverage the popular gradient-based local adversarial attack methods as sub-attack methods under the above family of global attacks. In particular, the fast gradient sign method (FGSM) \citep{goodfellow2015explaining}, iterative FGSM (IFGSM) \citep{kurakin2016adversarial}, and projected gradient descent (PGD) \citep{madry2018towards} may be considered for the sub-attack method $f_{s\_attk}$. As one example, we present the algorithm flow for the global PGD (G-PGD) attack targeting classifiers, with PGD employed as the sub-attack, in Algorithm \ref{algo:G-PGD}, where the $Clip\left(\mathbf{x},\mathcal{R}\right)$ function projects any input $\mathbf{x}$ lying outside the preset region $\mathcal{R}$ onto the boundary of $\mathcal{R}$. The global IFGSM (G-IFGSM) skips the random noise perturbations (steps \ref{algo:gpgd:noise1} and \ref{algo:gpgd:noise2}). The global FGSM (G-FGSM) sets the number of sub-attack steps $S$ to $1$ and the sub-attack step size $a$ to $\epsilon$, in addition to ignoring the random noise perturbation steps.
\section{Extreme-value-guided MCMC sampling attack}
\label{sec:sampleatk}
While the family of alternating gradient global adversarial attacks discussed in Section~\ref{sec:gbatk} can work effectively in practice, such methods may get trapped at a local maximum, degrading the quality of the global attack. To this end, we propose a stochastic optimization approach based on extreme value distribution theory and the Markov Chain Monte Carlo (MCMC) method, which is more advantageous from a global optimization point of view.
\paragraph{Extreme value distribution.}
Consider sampling a set of i.i.d. input example pairs $\left\{\left(\mathbf{x}_{1,1},\mathbf{x}_{2,1}\right),\cdots,\left(\mathbf{x}_{1,n},\mathbf{x}_{2,n}\right)\right\}$ in the input pair space $\Omega\times\Omega$. The greatest loss value $L^*=\max_{i\in\left[1,n\right]} \mathcal{L}_f\left(\mathbf{x}_{1,i},\mathbf{x}_{2,i}\right)$ can be regarded as a random variable following a certain distribution characterized by its density function $p_{L^*}\left(l\right)$. The Fisher-Tippett-Gnedenko theorem states that the limiting distribution of the sample maximum, if it exists, can only belong to one of three families of extreme value distributions: the Gumbel class, the Fr{\'e}chet class, and the reverse Weibull class \citep{gomes2015extreme}. Hence, $p_{L^*}\left(l\right)$ falls into one of the three families as well, and its cumulative density function (CDF) $F_{L^*}\left(l\right)$ can be written in a unified form, called the generalized extreme value (GEV) distribution:
\begin{equation}
F_{L^*}\left(l\right)=\left\{
\begin{array}{ll}
\exp\left(-\left(1+\xi\frac{l-\mu}{\sigma}\right)^{-1/\xi}\right) & \xi\neq 0 \\
\exp\left(-\exp\left(-\frac{l-\mu}{\sigma}\right)\right) & \xi= 0
\end{array}
\right.
\end{equation}
where $\mu$, $\sigma$ and $\xi$ are the location, scale, and shape parameters of the GEV distribution and may be obtained through maximum likelihood estimation (MLE).
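For concreteness, the GEV CDF above can be evaluated directly. The following sketch (our own stdlib-only helper, not from the paper) implements both branches, including the support handling for $\xi\neq 0$:

```python
import math

def gev_cdf(l, mu, sigma, xi):
    """GEV cumulative distribution F_{L*}(l) for scalar l."""
    z = (l - mu) / sigma
    if xi == 0.0:                     # Gumbel class
        return math.exp(-math.exp(-z))
    t = 1.0 + xi * z
    if t <= 0.0:                      # outside the distribution's support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))
```

Note that as $\xi\to 0$ the general branch converges to the Gumbel branch, so the two cases agree at the boundary.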
Assuming that the desired generalized extreme value (GEV) distribution of the loss function is available, multiple large loss values, corresponding to potential global adversarial example pairs, may be generated by sampling the GEV distribution. The added benefit here is that the inherent randomness in this sampling process may be explored to find more globally optimal solutions.
Nevertheless, since the GEV distribution may not be easily sampled directly, we adopt the Markov Chain Monte Carlo (MCMC) method to sample it \citep{andrieu2003mcmc}. In reality, the GEV distribution is not known \emph{a priori} and must be estimated using MLE based on a sample of data, as described below.
\paragraph{Extreme-value-guided MCMC sampling algorithm (GEVMCMC).}
The proposed GEVMCMC algorithm is shown in Algorithm \ref{algo:GEVMCMC} for the case of classification problems.
There exist two essential components for MCMC sampling: the target distribution, which is in this case the desired GEV distribution, and the proposal distribution, which is a surrogate distribution easy to sample. For each MCMC round, we collect an example from the proposal distribution, and then accept this example or discard it while keeping the previous one based on an acceptance ratio $p_A$ in Step \ref{algo:GEVMCMC:pa} of Algorithm \ref{algo:GEVMCMC} \citep{andrieu2003mcmc}.
In order to sample the block maximum for the extreme value distribution, a block of example pairs are collected from the proposal distribution in each round as in Step \ref{algo:GEVMCMC:block} of the algorithm instead of a single example.
As the MCMC sampling process proceeds, the actual sampling distribution implemented converges to the target (GEV) distribution.
\begin{algorithm}
\caption{Extreme-value-guided MCMC sampling algorithm (GEVMCMC) for global attack.}
\label{algo:GEVMCMC}
\begin{algorithmic}[1]
\REQUIRE ~~
Initial starting example pair $\left(\mathbf{x}_1^{\left(0\right)},\mathbf{x}_2^{\left(0\right)}\right)$; Number of total rounds $N$; Number of warm-up rounds $N_w$; Block size $B$; Number of example pairs $k$ used for GEV distribution update;\\
\ENSURE ~~
The set of the generated global adversarial example pairs $T$;
\STATE Apply G-PGD for $N_w$ rounds to warm up and update $T$. \label{algo:GEVMCMC:warmup}
\FOR{$i\leftarrow N_w+1$ to $N$}
\STATE Sample $B$ i.i.d. samples $\left\{\left(\mathbf{x}_1^{\left[1\right]},\mathbf{x}_2^{\left[1\right]}\right),\cdots,\left(\mathbf{x}_1^{\left[B\right]},\mathbf{x}_2^{\left[B\right]}\right)\right\}$ from the proposal distribution \label{algo:GEVMCMC:block} $q\left(\mathbf{x}_1,\mathbf{x}_2|\mathbf{x}_1^{\left(i-1\right)},\mathbf{x}_2^{\left(i-1\right)}\right)$
\STATE $\left(\mathbf{x}_1^*,\mathbf{x}_2^*\right)\leftarrow\left(\mathbf{x}_1^{\left[j^*\right]},\mathbf{x}_2^{\left[j^*\right]}\right)$ with $j^*=\argmax_{j\in\left[1,B\right]}\mathcal{L}_f\left(\mathbf{x}_1^{\left[j\right]},\mathbf{x}_2^{\left[j\right]}\right)$
\STATE Update the GEV distribution $p_{L^*}\left(l\right)$ using top $k$ loss values in the history. \label{algo:GEVMCMC:EVupdate}
\STATE $p_A\leftarrow\min\left\{1.0,\frac{p_{L^*}\left(\mathcal{L}_f\left(\mathbf{x}_1^*,\mathbf{x}_2^*\right)\right)q\left(\mathbf{x}_1^{\left(i-1\right)},\mathbf{x}_2^{\left(i-1\right)}|\mathbf{x}_1^*,\mathbf{x}_2^*\right)}{p_{L^*}\left(\mathcal{L}_f\left(\mathbf{x}_1^{\left(i-1\right)},\mathbf{x}_2^{\left(i-1\right)}\right)\right)q\left(\mathbf{x}_1^*,\mathbf{x}_2^*|\mathbf{x}_1^{\left(i-1\right)},\mathbf{x}_2^{\left(i-1\right)}\right)}\right\}$ \label{algo:GEVMCMC:pa}
\STATE Sample a uniform random variable $\alpha\sim\mathrm{U}\left[0,1\right]$
\IF{$\alpha \leq p_A$}
\STATE Accept the new example. $\left(\mathbf{x}_1^{\left(i\right)},\mathbf{x}_2^{\left(i\right)}\right)=\left(\mathbf{x}_1^*,\mathbf{x}_2^*\right)$
\ELSE
\STATE Reject and keep the previous example. $\left(\mathbf{x}_1^{\left(i\right)},\mathbf{x}_2^{\left(i\right)}\right)=\left(\mathbf{x}_1^{\left(i-1\right)},\mathbf{x}_2^{\left(i-1\right)}\right)$
\ENDIF
\STATE $T\leftarrow T\cup \left\{\left(\mathbf{x}_1^{\left(i\right)},\mathbf{x}_2^{\left(i\right)}\right)\right\}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
Importantly, the generalized extreme value (GEV) distribution $p_{L^*}\left(l\right)$ of the loss function is not available at the beginning of the algorithm. Therefore, it is estimated during the sampling process, for which two techniques are employed to obtain an accurate GEV distribution. First, a warm-up procedure (Step \ref{algo:GEVMCMC:warmup} in Algorithm \ref{algo:GEVMCMC}) using a few rounds of G-PGD is performed to collect a few global adversarial example pairs of large loss values. Second, in each round, the top $k$ loss values among all example pairs in the history, including the ones in the current block, are selected to estimate $p_{L^*}\left(l\right)$ via MLE in Step \ref{algo:GEVMCMC:EVupdate} of Algorithm \ref{algo:GEVMCMC}.
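The accept/reject logic of the algorithm can be sketched as follows. This is a hedged illustration with our own names: `gev_pdf` stands for the current MLE estimate of $p_{L^*}$, while `q_fwd` and `q_bwd` are the forward and backward proposal densities:

```python
import math
import random

def accept_prob(loss_new, loss_prev, gev_pdf, q_fwd, q_bwd):
    """Metropolis-Hastings acceptance probability p_A for the GEV target.

    gev_pdf: callable returning the estimated GEV density p_{L*}(l)
    q_fwd:   proposal density q(new | prev); q_bwd: q(prev | new)
    """
    num = gev_pdf(loss_new) * q_bwd
    den = gev_pdf(loss_prev) * q_fwd
    return min(1.0, num / max(den, 1e-300))  # guard against underflow

def mh_step(pair_new, pair_prev, loss_new, loss_prev, gev_pdf, q_fwd, q_bwd):
    """Accept the block-maximum pair with probability p_A, else keep the old one."""
    p_a = accept_prob(loss_new, loss_prev, gev_pdf, q_fwd, q_bwd)
    return pair_new if random.random() <= p_a else pair_prev
```

Because the GEV density is monotonically increasing over most of its support, higher-loss pairs are accepted with probability close to one, while occasional downhill moves keep the search global.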
\paragraph{Proposal distribution design.}
The convergence speed of MCMC sampling towards the target distribution critically depends on the proposal distribution~\citep{andrieu2003mcmc}. To efficiently generate high-quality global adversarial example pairs, we consider the following essential aspects in designing the proposal distribution: (\textbf{1}) finding large loss values; (\textbf{2}) enabling global search; (\textbf{3}) constraining two examples in each pair to be within distance $\epsilon$.
\begin{wrapfigure}{r}{0.25\linewidth}
\centering
\includegraphics[width=\linewidth]{MCMC_center_dist.pdf}
\caption{MCMC proposal distribution design illustration.}
\label{fig:mcmcpropdist}
\end{wrapfigure}
We decompose the sampling of an example pair into two sequential steps: sample the center location $\mathbf{x}_c$ and sample the difference vector $\Delta$ between the two examples. Then, we construct the example pair by $\mathbf{x}_1=\mathbf{x}_c+\Delta, \mathbf{x}_2=\mathbf{x}_c-\Delta$, as shown in Fig.~\ref{fig:mcmcpropdist}. The two sampling steps are independent of each other, and hence, the proposal distribution conditioning on the previous example pair $\left(\mathbf{x}_c^p,\Delta^p\right)$ is split into a product of a center proposal and difference proposal distribution:
\begin{equation}
q\left(\mathbf{x}_c,\Delta|\mathbf{x}_c^p,\Delta^p\right)=q_{x^c}\left(\mathbf{x}_c|\mathbf{x}_c^p,\Delta^p\right)q_{\Delta}\left(\Delta|\mathbf{x}_c^p,\Delta^p\right).
\end{equation}
As such, the distance constraint for the two examples is only taken into consideration in the design of $q_{\Delta}\left(\Delta|\mathbf{x}_c^p,\Delta^p\right)$ and no distance constraint is necessary when sampling the center.
We speed up the convergence of MCMC by incorporating the (normalized) gradient information
\begin{equation}
\mathbf{g}=\frac{\nabla_{\mathbf{x}_c}\mathcal{L}_f\left(\mathbf{x}_c^p+\Delta^p,\mathbf{x}_c^p-\Delta^p\right)}{\left\|\nabla_{\mathbf{x}_c}\mathcal{L}_f\left(\mathbf{x}_c^p+\Delta^p,\mathbf{x}_c^p-\Delta^p\right)\right\|}
\end{equation}
into the proposal distribution design. We design the center proposal distribution to be a multi-variate Gaussian distribution centered at $\mathbf{x}_c^p$ with a covariance matrix biased toward sampling along the gradient direction $\mathbf{g}$, which increases the likelihood of finding large loss values while still allowing sampling in other directions:
\begin{equation}
q_{x^c}\left(\mathbf{x}_c|\mathbf{x}_c^p,\Delta^p\right)=\mathcal{N}\left(\mathbf{x}_c^p,\lambda_0^2\mathbf{I}+\left(\lambda_m^2-\lambda_0^2\right)\mathbf{g}\mathbf{g}^{\mathrm{T}}\right),
\end{equation}
where $\lambda_m^2$ sets the largest eigenvalue and $\lambda_0^2 < \lambda_m^2$ sets all other eigenvalues of the covariance matrix. The absolute values of $\lambda_0$ and $\lambda_m$ control the size of the search region while the ratio between $\lambda_m$ and $\lambda_0$ determines to what extent we want to focus on the gradient direction.
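Sampling from this rank-one-updated Gaussian does not require forming the full covariance matrix. A minimal sketch (our own helper, assuming $\mathbf{g}$ is a unit vector): scale the component of a standard normal draw along $\mathbf{g}$ by $\lambda_m$ and the orthogonal components by $\lambda_0$:

```python
import numpy as np

def sample_center(rng, xc_prev, g, lam0, lam_m):
    """Draw xc ~ N(xc_prev, lam0^2 I + (lam_m^2 - lam0^2) g g^T).

    g must be a unit vector (the normalized loss gradient). Components
    orthogonal to g get std lam0; the component along g gets std lam_m.
    """
    z = rng.standard_normal(xc_prev.shape)
    along = np.vdot(g, z)  # projection of z onto the gradient direction
    return xc_prev + lam0 * z + (lam_m - lam0) * along * g
```

One can verify that the resulting covariance is exactly $\lambda_0^2\mathbf{I}+\left(\lambda_m^2-\lambda_0^2\right)\mathbf{g}\mathbf{g}^{\mathrm{T}}$, since $\mathbf{g}\mathbf{g}^{\mathrm{T}}$ is idempotent for unit $\mathbf{g}$.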
The gradient sign information is incorporated into the difference proposal distribution considering the distance constraint $\epsilon$.
Particularly, for the $l_\infty$ norm based distance measure, we propose a Bernoulli distribution with parameter $p_B> 0.5$ for each element of the difference vector:
\begin{equation}
\Delta_i=\left\{\begin{array}{ll}
\frac{\epsilon}{2}\mathrm{sign}\left(g_i\right) & \textrm{with probability $p_B$} \\
-\frac{\epsilon}{2}\mathrm{sign}\left(g_i\right) & \textrm{with probability $1-p_B$},
\end{array}\right.
\end{equation}
which ensures that the pair $\left(\mathbf{x}_1,\mathbf{x}_2\right)$ is within distance $\epsilon$ and that each difference component is more likely to be set according to the corresponding gradient sign component.
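A minimal sketch of the difference proposal (our own naming; it assumes nonzero gradient components, in which case the resulting pair distance is exactly $\epsilon$):

```python
import numpy as np

def sample_diff(rng, g, eps, p_b):
    """Sample the half-difference Delta elementwise: (eps/2)*sign(g_i)
    with probability p_b, and -(eps/2)*sign(g_i) otherwise."""
    flips = np.where(rng.uniform(size=g.shape) < p_b, 1.0, -1.0)
    return 0.5 * eps * flips * np.sign(g)

def make_pair(xc, delta):
    # x1 and x2 differ by 2*Delta, so ||x1 - x2||_inf = eps by construction
    return xc + delta, xc - delta
```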
\section{Experimental results}
\label{sec:res}
\paragraph{Experimental settings.}
We investigate several local and global methods on two popular image classification datasets: MNIST \citep{lecun1998gradient} and CIFAR10 \citep{krizhevsky2009learning}.
To evaluate DNN robustness globally in the input space, we create an additional class ``meaningless'' and append 6,000 and 5,000 random noisy images (one tenth of the original training dataset) under this ``meaningless'' class into the original MNIST and CIFAR10 training datasets, respectively, and refer to the expanded training datasets as the augmented training datasets. All trained DNNs perform classification across 11 classes.
For MNIST, we train a neural network with two convolutional and two fully-connected layers, reaching an accuracy of $99.43\%$ after 40 training epochs. For CIFAR10, a VGG16 \citep{simonyan2014vgg} network is trained for 300 epochs, reaching $94.25\%$ accuracy. Furthermore, we globally attack adversarially-trained models, which are trained using adversarial training based on local adversarial examples. In each epoch of adversarial training, the adversarial samples are generated by attacking the DNN model from the last epoch using a 30-step local white-box PGD attack, which is considered a strong first-order attack \citep{madry2018towards}. An updated model is then trained using both the augmented training set and the generated adversarial images. The adversarial training process is performed for an additional 40 epochs for the MNIST model and 30 epochs for the CIFAR10 model, respectively. The ratio of the weighting parameters between the losses of the examples in the augmented training set and the adversarial examples is $1:1$.
\paragraph{Adversarial attack parameter settings.}
The $l_{\infty}$ norm based perturbation limit (the maximum allowed difference between two close images) is set to $\epsilon_{MNIST} = 0.1$ for MNIST and to $\epsilon_{CIFAR10}=0.005$ for CIFAR10. We add another 10,000 random images with the ``meaningless'' class label into the original testing dataset (10,000 samples) to create an augmented testing dataset.
We experiment with three common local adversarial attack methods: FGSM \citep{goodfellow2015explaining}, IFGSM \citep{kurakin2016adversarial}, and PGD \citep{madry2018towards}, referred to as L-FGSM, L-IFGSM and L-PGD in this paper. Both L-IFGSM and L-PGD perform a 30-step attack with an $l_{\infty}$ step size of $\epsilon/10$.
All local attacks are performed on the augmented testing set.
All four proposed global adversarial attack methods are considered: the alternating gradient global adversarial attacks with the sub-attack methods of FGSM (G-FGSM), IFGSM (G-IFGSM) and PGD (G-PGD), and the extreme-value-guided sampling global attack (GEVMCMC).
For all global attack methods, we randomly pick 100 images from the original testing dataset and from the appended random testing dataset, respectively, to form the first images of the 100 starting pairs. The second image of each pair is obtained by adding small uniformly-distributed random noise bounded by the perturbation size $\epsilon$ to the first image. 100 rounds of optimization are performed by each global adversarial attack method, generating two sets of 10,000 adversarial example pairs, one for each of the two starting conditions: starting from the 100 original testing images and starting from the 100 appended random testing images.
G-IFGSM and G-PGD share the same parameter settings with their local attack counterparts L-IFGSM and L-PGD, respectively. The number of GEVMCMC initial G-PGD warm-up rounds is 10 for MNIST and 30 for CIFAR10. The block size $B$ is set to be 59. Three parameters for the proposal distribution for MNIST are $\lambda_m=1.2\epsilon_{MNIST},\lambda_0=0.3\epsilon_{MNIST}, p_B=0.95$, and for CIFAR10 they are set to be $\lambda_m=4.8\epsilon_{CIFAR10},\lambda_0=0.6\epsilon_{CIFAR10}, p_B=0.99$.
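The construction of a starting pair described above can be sketched as follows (an illustrative helper of our own, assuming pixel values in $[0,1]$):

```python
import numpy as np

def make_starting_pair(rng, x1, eps):
    """The second image is the first plus uniform random noise bounded
    by eps, clipped back to the valid pixel range."""
    noise = rng.uniform(-eps, eps, size=x1.shape)
    return x1, np.clip(x1 + noise, 0.0, 1.0)
```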
\begin{figure}
\centering
\subfloat[\label{sfig::CIFAR10_example:a}]{
\includegraphics[width=0.3\linewidth]{CIFAR10_adv_rand6.pdf}
}
\hfill
\subfloat[\label{sfig::CIFAR10_example:b}]{
\includegraphics[width=0.31\linewidth]{CIFAR10_nat_test1.pdf}
}
\hfill
\subfloat[\label{sfig::CIFAR10_example:c}]{
\includegraphics[width=0.3\linewidth]{CIFAR10_nat_test11.pdf}
}\\
\subfloat[\label{sfig::CIFAR10_example:d}]{
\includegraphics[width=0.3\linewidth]{CIFAR10_adv_rand31.pdf}
}
\hfill
\subfloat[\label{sfig::CIFAR10_example:e}]{
\includegraphics[width=0.3\linewidth]{CIFAR10_adv_test95.pdf}
}
\hfill
\subfloat[\label{sfig::CIFAR10_example:f}]{
\includegraphics[width=0.32\linewidth]{CIFAR10_nat_test62.pdf}
}
\caption{Global adversarial sample pairs for the CIFAR10 model generated by GEVMCMC.}
\label{fig:CIFAR10_example}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{sfig::MNIST_example:a}]{
\includegraphics[width=0.3\linewidth]{MNIST_adv_rand70.pdf}
}
\hfill
\subfloat[\label{sfig::MNIST_example:b}]{
\includegraphics[width=0.3\linewidth]{MNIST_nat_test31.pdf}
}
\hfill
\subfloat[\label{sfig::MNIST_example:c}]{
\includegraphics[width=0.3\linewidth]{MNIST_adv_test75.pdf}
}\\
\subfloat[\label{sfig::MNIST_example:d}]{
\includegraphics[width=0.3\linewidth]{MNIST_nat_test47.pdf}
}
\hfill
\subfloat[\label{sfig::MNIST_example:e}]{
\includegraphics[width=0.3\linewidth]{MNIST_adv_test79.pdf}
}
\hfill
\subfloat[\label{sfig::MNIST_example:f}]{
\includegraphics[width=0.3\linewidth]{MNIST_adv_test46.pdf}
}
\caption{Global adversarial sample pairs for the MNIST model generated by GEVMCMC.}
\label{fig:MNIST_example}
\end{figure}
\paragraph{Generated global adversarial sample pairs.}
The proposed global adversarial attack methods can generate diverse global adversarial pairs which are rather different from typical local attacks, representing a new type of DNN vulnerability.
Fig. \ref{fig:CIFAR10_example} and Fig. \ref{fig:MNIST_example} show a set of global adversarial example pairs generated by GEVMCMC for the two datasets. In each sub-figure, the two images on the top form the starting pair of two identical images. The two at the bottom are the final global adversarial pair generated after 100 rounds of optimization. ``N" indicates that the class label predicted by the model is ``meaningless".
Compared to standard local adversarial attacks, the proposed global adversarial pairs are much more diverse and intriguing.
For instance, it is possible to start with two identical random meaningless images but end up with two other random images that are very similar to each other yet have different legitimate class labels predicted by the model, such as the ones in Fig. \ref{sfig::CIFAR10_example:a} and \ref{sfig::CIFAR10_example:d}. We can also start from two identical images of a legitimate predicted class and end up with two images whose predicted labels differ from each other and from the starting label, as shown in Fig. \ref{sfig::CIFAR10_example:b}, \ref{sfig::CIFAR10_example:e}, \ref{sfig::CIFAR10_example:f}, \ref{sfig::MNIST_example:b} and \ref{sfig::MNIST_example:e}. Clearly, the existing local attacks such as FGSM, IFGSM, and PGD are not able to generate such complex global adversarial scenarios, which reveal additional hidden vulnerabilities of the model.
Importantly, the existing local adversarial attacks cannot explore the input space beyond the training or testing dataset due to the perturbation constraint. In contrast, the proposed global adversarial attacks are very appealing in the following way: they may find a path towards unseen input space and check the model robustness along the way. For instance, we may start from a random testing image (Fig. \ref{sfig::MNIST_example:a}) or an original testing image (Fig. \ref{sfig::MNIST_example:c}) and end up with a completely different image pair that humans may recognize as a single legitimate class but for which the model predicts different labels. For instance, the final pairs in Fig.~\ref{sfig::MNIST_example:a} and Fig.~\ref{sfig::MNIST_example:c} may be recognized as ``8" and ``4", respectively, which are completely different from their starting labels.
\begin{table}
\caption{Natural MNIST model (w/o local adversarial training) adversarial attack results.}
\label{tab:MNIST_nat}
\centering
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Method}&\multicolumn{3}{c}{Start from Original Test Image}&\multicolumn{3}{c}{Start from Random Test Image} \\
\cmidrule(r){2-4}\cmidrule(r){5-7}
& Attack Rate & Max Loss & Avg. Loss & Attack Rate & Max Loss & Avg. Loss\\
\midrule
L-FGSM & 18.18\% & 20.94 & 0.77 & 5.82\% & 6.43 & 0.16 \\
L-IFGSM & 29.80\% & 21.76 & 1.17 & 51.96\% & 7.64 & 1.18 \\
L-PGD & 29.59\% & 21.73 & 1.16 & 49.44\% & 7.57 & 1.13 \\
G-FGSM & 99.14\% & 23.40 & 10.78 & 99.01\% & 20.04 & 9.62 \\
G-IFGSM & 99.92\% & 22.21 & 9.19 & 100.00\% & 12.05 & 5.94 \\
G-PGD & 99.94\% & 20.44 & 10.18 & 100.00\% & 11.94 & 6.58 \\
GEVMCMC & 99.93\% & 24.93 & 14.91 & 100.00\% & 19.98 & 12.67 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Adversarially-trained MNIST model adversarial attack results.}
\label{tab:MNIST_adv}
\centering
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Method}&\multicolumn{3}{c}{Start from Original Test Image}&\multicolumn{3}{c}{Start from Random Test Image} \\
\cmidrule(r){2-4}\cmidrule(r){5-7}
& Attack Rate & Max Loss & Avg. Loss & Attack Rate & Max Loss & Avg. Loss\\
\midrule
L-FGSM & 3.28\% & 15.14 & 0.11 & 0.00\% & 0.00 & 0.00 \\
L-IFGSM & 3.89\% & 15.56 & 0.13 & 0.00\% & 0.06 & 0.00 \\
L-PGD & 3.88\% & 15.57 & 0.13 & 0.00\% & 0.05 & 0.00 \\
G-FGSM & 98.07\% & 17.79 & 8.19 & 97.50\% & 19.09 & 7.63 \\
G-IFGSM & 99.47\% & 14.25 & 7.84 & 99.56\% & 17.81 & 11.43 \\
G-PGD & 99.50\% & 18.08 & 8.61 & 99.58\% & 19.59 & 12.07 \\
GEVMCMC & 99.38\% & 18.28 & 9.91 & 99.55\% & 18.06 & 12.04 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Natural CIFAR10 model (w/o local adversarial training) adversarial attack results.}
\label{tab:CIFAR10_nat}
\centering
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Method}&\multicolumn{3}{c}{Start from Original Test Image}&\multicolumn{3}{c}{Start from Random Test Image} \\
\cmidrule(r){2-4}\cmidrule(r){5-7}
& Attack Rate & Max Loss & Avg. Loss & Attack Rate & Max Loss & Avg. Loss\\
\midrule
L-FGSM & 26.77\% & 11.42 & 1.90 & 0.00\% & 0.01 & 0.00 \\
L-IFGSM & 44.80\% & 12.06 & 3.58 & 0.00\% & 0.02 & 0.00 \\
L-PGD & 43.92\% & 12.04 & 3.51 & 0.00\% & 0.02 & 0.00 \\
G-FGSM & 95.96\% & 12.75 & 8.72 & 95.15\% & 11.10 & 5.48 \\
G-IFGSM & 99.68\% & 12.90 & 11.02 & 98.36\% & 10.76 & 6.45 \\
G-PGD & 99.71\% & 13.55 & 11.30 & 98.33\% & 11.02 & 6.70 \\
GEVMCMC & 99.58\% & 13.18 & 8.94 & 98.37\% & 11.02 & 7.18 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Adversarially-trained CIFAR10 model adversarial attack results.}
\label{tab:CIFAR10_adv}
\centering
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Method}&\multicolumn{3}{c}{Start from Original Test Image}&\multicolumn{3}{c}{Start from Random Test Image} \\
\cmidrule(r){2-4}\cmidrule(r){5-7}
& Attack Rate & Max Loss & Avg. Loss & Attack Rate & Max Loss & Avg. Loss\\
\midrule
L-FGSM & 19.29\% & 10.97 & 0.83 & 0.00\% & 0.00 & 0.00 \\
L-IFGSM & 21.25\% & 11.03 & 0.92 & 0.00\% & 0.00 & 0.00 \\
L-PGD & 21.19\% & 11.03 & 0.92 & 0.00\% & 0.00 & 0.00 \\
G-FGSM & 95.29\% & 9.62 & 4.11 & 90.47\% & 6.57 & 2.52 \\
G-IFGSM & 98.23\% & 9.83 & 4.60 & 95.85\% & 6.99 & 2.94 \\
G-PGD & 98.26\% & 10.54 & 4.77 & 95.88\% & 6.97 & 3.03 \\
GEVMCMC & 98.25\% & 10.89 & 5.47 & 95.89\% & 6.66 & 3.67 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Local vs. global adversarial attacks}
Tables \ref{tab:MNIST_nat}-\ref{tab:CIFAR10_adv} show the global adversarial attack results for the MNIST and CIFAR10 models without and with local adversarial training. If the model predictions for the two examples in a generated global adversarial pair are different, we regard this pair as one successful global attack. In these tables, the attack rate is defined as the ratio between the number of successful attacks and the total number of trials, which is 10,000. The Max loss and Avg. loss are the maximum and average loss values found in the attack process.
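The attack rate can be computed directly from the two arrays of predicted labels (an illustrative helper of our own, not from the paper's code):

```python
import numpy as np

def attack_rate(labels1, labels2):
    """Fraction of generated pairs whose two predicted labels differ,
    i.e. the fraction of successful global attacks."""
    labels1, labels2 = np.asarray(labels1), np.asarray(labels2)
    return float(np.mean(labels1 != labels2))
```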
Attacking the natural MNIST and CIFAR10 models using a local attack method can reach a reasonably high attack rate. For instance, L-PGD has an attack rate of $29.59\%$ on the natural MNIST model, which drops to $3.88\%$ for the adversarially-trained MNIST model, showing that adversarial training using local adversarial examples does improve the defense against such local attacks.
However, all global adversarial attack methods achieve almost $100\%$ attack rates and produce much higher average loss values than the local attack methods, regardless of whether local adversarial training is performed or not. It is evident that adversarial training based on local adversarial examples provides little or no defense against global attacks. This indicates the effectiveness of the proposed global attack methods, and equally importantly, suggests that the global adversarial examples defined in this paper must be coped with when training robust DNN models.
\begin{table}
\caption{Comparison between GEVMCMC and other proposed global attacks when starting from the same 100 original test (``Test") or 100 random testing (``Rand") example pairs. Each entry shows the number of cases out of the total 100 cases where the final adversarial pair generated by GEVMCMC has a loss higher than the one generated by the other method.
}
\label{tab:cnt}
\centering
\begin{tabular}{ccccccccc}
\toprule
\multirow{3}{*}{Method}&\multicolumn{4}{c}{MNIST}&\multicolumn{4}{c}{CIFAR10} \\
\cmidrule(r){2-5}\cmidrule(r){6-9}
& \multicolumn{2}{c}{Natural}&\multicolumn{2}{c}{Adv.-Trained}&\multicolumn{2}{c}{Natural}&\multicolumn{2}{c}{Adv.-Trained}\\
\cmidrule(r){2-3}\cmidrule(r){4-5}\cmidrule(r){6-7}\cmidrule(r){8-9}
& Test & Rand & Test & Rand & Test & Rand & Test & Rand \\
\midrule
G-FGSM & 86 & 95 & 69 & 81 & 33 & 72 & 76 & 67 \\
G-IFGSM & 98 & 100 & 92 & 90 & 69 & 85 & 84 & 92 \\
G-PGD & 97 & 100 & 80 & 67 & 12 & 73 & 74 & 90 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\subfloat[Testing image starting image pair\label{sfig:MNIST_nat_test}]{
\includegraphics[width=0.48\linewidth]{MNIST_nat_test.pdf}
}
\hfill
\subfloat[Random image starting image pair\label{sfig:MNIST_nat_rand}]{
\includegraphics[width=0.48\linewidth]{MNIST_nat_rand.pdf}
}
\caption{Loss value found by global adversarial attack in each round for the natural MNIST model.}
\label{fig:MNIST_nat}
\end{figure}
\begin{figure}
\subfloat[Testing image starting image pair\label{sfig:MNIST_adv_test}]{
\includegraphics[width=0.48\linewidth]{MNIST_adv_test.pdf}
}
\hfill
\subfloat[Random image starting image pair\label{sfig:MNIST_adv_rand}]{
\includegraphics[width=0.48\linewidth]{MNIST_adv_rand.pdf}
}
\caption{Loss value found by global adversarial attack in each round for the adversarially-trained MNIST model.}
\label{fig:MNIST_adv}
\end{figure}
\begin{figure}
\subfloat[Testing image starting image pair\label{sfig:CIFAR10_nat_test}]{
\includegraphics[width=0.48\linewidth]{CIFAR10_nat_test.pdf}
}
\hfill
\subfloat[Random image starting image pair\label{sfig:CIFAR10_nat_rand}]{
\includegraphics[width=0.48\linewidth]{CIFAR10_nat_rand.pdf}
}
\caption{Loss value found by global adversarial attack in each round for the natural CIFAR10 model.}
\label{fig:CIFAR10_nat}
\end{figure}
\begin{figure}
\subfloat[Testing image starting image pair\label{sfig:CIFAR10_adv_test}]{
\includegraphics[width=0.48\linewidth]{CIFAR10_adv_test.pdf}
}
\hfill
\subfloat[Random image starting image pair\label{sfig:CIFAR10_adv_rand}]{
\includegraphics[width=0.48\linewidth]{CIFAR10_adv_rand.pdf}
}
\caption{Loss value found by global adversarial attack in each round for the adversarially-trained CIFAR10 model.}
\label{fig:CIFAR10_adv}
\end{figure}
\paragraph{Comparison of the proposed global adversarial attack methods}
Table \ref{tab:cnt} compares the two types of the proposed global attack methods when starting from the same 100 original test (``Test") or 100 random testing (``Rand") example pairs.
Each entry shows the number of cases out of the total 100 cases where the final adversarial pair generated by GEVMCMC has a loss higher than the one generated by the other method. Most entries in the table are larger than 50, implying that GEVMCMC finds stronger (higher-loss) adversarial example pairs than the other global adversarial attack methods. We further show the maximum loss value and average loss value of the adversarial example pairs in each round in Fig. \ref{fig:MNIST_nat} - \ref{fig:CIFAR10_adv}. After the initial warm-up rounds using G-PGD, the loss found by GEVMCMC increases rapidly and ends up at a much larger value compared to that of the other global adversarial attack methods, which tend to converge to a local maximum. The only case in which GEVMCMC does not beat the other global adversarial attack methods is attacking the natural CIFAR10 model starting with the original testing images. Overall, these results demonstrate the superior effectiveness of GEVMCMC.
// +build linux darwin
package util
import (
"bytes"
"fmt"
"os/exec"
"strconv"
"strings"
"syscall"
"k8s.io/kubernetes/pkg/api/resource"
)
// FsInfo returns (available bytes, byte capacity, byte usage, total inodes, inodes free, inode usage, error)
// for the filesystem that path resides upon.
func FsInfo(path string) (int64, int64, int64, int64, int64, int64, error) {
statfs := &syscall.Statfs_t{}
err := syscall.Statfs(path, statfs)
if err != nil {
return 0, 0, 0, 0, 0, 0, err
}
// Available is blocks available * fragment size
available := int64(statfs.Bavail) * int64(statfs.Bsize)
// Capacity is total block count * fragment size
capacity := int64(statfs.Blocks) * int64(statfs.Bsize)
// Usage is block being used * fragment size (aka block size).
usage := (int64(statfs.Blocks) - int64(statfs.Bfree)) * int64(statfs.Bsize)
inodes := int64(statfs.Files)
inodesFree := int64(statfs.Ffree)
inodesUsed := inodes - inodesFree
return available, capacity, usage, inodes, inodesFree, inodesUsed, nil
}
func Du(path string) (*resource.Quantity, error) {
// Uses the same niceness level as cadvisor.fs does when running du
// Uses -B 1 to always scale to a blocksize of 1 byte
out, err := exec.Command("nice", "-n", "19", "du", "-s", "-B", "1", path).CombinedOutput()
if err != nil {
return nil, fmt.Errorf("failed command 'du' ($ nice -n 19 du -s -B 1) on path %s with error %v", path, err)
}
used, err := resource.ParseQuantity(strings.Fields(string(out))[0])
if err != nil {
return nil, fmt.Errorf("failed to parse 'du' output %s due to error %v", out, err)
}
used.Format = resource.BinarySI
return &used, nil
}
// Find uses the command `find <path> -xdev -printf '.' | wc -c` to count files and directories.
// While this is not an exact measure of inodes used, it is a very good approximation.
func Find(path string) (int64, error) {
var stdout, stdwcerr, stdfinderr bytes.Buffer
var err error
findCmd := exec.Command("find", path, "-xdev", "-printf", ".")
wcCmd := exec.Command("wc", "-c")
if wcCmd.Stdin, err = findCmd.StdoutPipe(); err != nil {
return 0, fmt.Errorf("failed to setup stdout for cmd %v - %v", findCmd.Args, err)
}
wcCmd.Stdout, wcCmd.Stderr, findCmd.Stderr = &stdout, &stdwcerr, &stdfinderr
if err = findCmd.Start(); err != nil {
return 0, fmt.Errorf("failed to exec cmd %v - %v; stderr: %v", findCmd.Args, err, stdfinderr.String())
}
if err = wcCmd.Start(); err != nil {
return 0, fmt.Errorf("failed to exec cmd %v - %v; stderr %v", wcCmd.Args, err, stdwcerr.String())
}
err = findCmd.Wait()
if err != nil {
return 0, fmt.Errorf("cmd %v failed. stderr: %s; err: %v", findCmd.Args, stdfinderr.String(), err)
}
err = wcCmd.Wait()
if err != nil {
return 0, fmt.Errorf("cmd %v failed. stderr: %s; err: %v", wcCmd.Args, stdwcerr.String(), err)
}
inodeUsage, err := strconv.ParseInt(strings.TrimSpace(stdout.String()), 10, 64)
if err != nil {
return 0, fmt.Errorf("cannot parse cmds: %v, %v output %s - %s", findCmd.Args, wcCmd.Args, stdout.String(), err)
}
return inodeUsage, nil
}
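The byte arithmetic used by FsInfo above is easy to sanity-check in isolation. The following standalone sketch (not part of this package; the Statfs-style field values are made up for illustration) reproduces the same available/capacity/usage computation:

```go
// A minimal standalone sketch of FsInfo's conversion arithmetic.
// The Statfs-style field values used here are hypothetical.
package main

import "fmt"

// fsStats mirrors FsInfo's conversions: raw block counts times block size.
func fsStats(blocks, bfree, bavail, bsize int64) (available, capacity, usage int64) {
	available = bavail * bsize       // blocks available to unprivileged users
	capacity = blocks * bsize        // total filesystem size
	usage = (blocks - bfree) * bsize // blocks currently in use
	return
}

func main() {
	// e.g. 1000 blocks of 4096 bytes, 400 free, 300 available to non-root.
	a, c, u := fsStats(1000, 400, 300, 4096)
	fmt.Println(a, c, u)
}
```

The same pattern applies to any statfs-like source of raw block counts.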
Andromeda is a fictional character appearing in American comic books published by Marvel Comics. The character is an Atlantean in Marvel's shared universe, known as the Marvel Universe. She is the illegitimate daughter of Attuma.
Publication history
Andromeda was introduced in The Defenders #143 (March 1985) and was added to the titular supergroup's lineup only a few issues later. Writer Peter B. Gillis later revealed: "My long-range plan was to populate the Defenders with my own set of characters, characters who nevertheless had ties to interesting parts of the Marvel Universe. Andromeda, while not the Sub-Mariner, gave me a connection to Atlantis." However, Andromeda would be the last character Gillis added to the Defenders, since shortly after her debut he was told that the series was being cancelled.
Fictional character biography
A member of the Homo mermanus race, Andromeda is the illegitimate daughter of Attuma of Atlantis by a woman named Lady Gelva. He was unaware of her existence until she confronted him and told him he was her father. Andromeda grew up in Atlantean society, trained in the arts of hunting and war, and surpassed every man except her father in these skills. Despite her abilities, she was considered unworthy of promotion in the Atlantean army because she is a woman, even though she was highly decorated.
Inspired by the tales of Namor, Andromeda moved to the surface world, where she used a serum to give herself a human appearance and the ability to breathe out of water. She took the name Andrea McPhee and passed herself off as a surface woman. When she was revealed to be an Atlantean, she quickly abandoned her masquerade and became a member of the Defenders, joining them against a villain named Hotspur.
She remained with the Defenders (rebranded the "New Defenders") only a short time, revealing to them just part of her background. With them she traveled to outer space and fought the second Star-Thief. She battled Manslaughter when he threatened the team, and later aided the Defenders and the Interloper in the battle against fellow Defender Moondragon and the Dragon of the Moon that possessed her. Andromeda sacrificed her life force, joining with Manslaughter, Valkyrie and the Interloper to drive the Dragon of the Moon from the Earth, and her body turned to stone.
The Dragon would later return, this time bodiless. To stop the Dragon of the Moon, Doctor Strange cast a spell that returned the souls of the Defenders fallen in the battle against the Dragon to the bodies of several recently deceased humans, transforming them into duplicates of the Defenders. Andromeda's soul entered the body of Genevieve Cross, and these Defenders now called themselves the Dragon Circle. Together, the Dragon Circle banished the Dragon from the Earth and Andromeda returned to the oceans.
Andromeda played an important part in the 1989 Atlantis Attacks crossover. She led a rebellion to stop her father Attuma from invading the surface world, but was bested by Attuma in personal combat. While unconscious she was abducted by the Deviant priest Ghaur as one of his "Seven Brides of Set". Under Ghaur's control, she accompanied She-Hulk to acquire a piece of Set's life force. In the end, the Brides of Set won their freedom thanks to the Fantastic Four and the Avengers.
Andromeda then joined forces with Namor. She was part of the short-lived Deep Six, a group of undersea heroes. During this time, her mind and that of Genevieve Cross repeatedly traded control, even turning her body into a copy of Genevieve's. Andromeda sacrificed her own mind to save Namor's soul, leaving Genevieve in control of Andromeda's body. Months later, either Genevieve in Andromeda's body or a restored Andromeda herself aided Namor and the Defenders against Attuma's Deep Six. Andromeda was last seen as an ally of Namor, living in Atlantis.
Andromeda later appears as a member of Namor's Defenders of the Deep.
Powers and abilities
Andromeda has all the faculties inherent to Homo mermanus, but her strength and speed are far greater than those of any ordinary member of her race, though not as great as those of her father, Attuma. Adapted to underwater life, she has gills that allow her to breathe underwater, can swim at high speed, and her body is resistant to the pressure and cold of the deep oceans. Her specially developed vision lets her see clearly in the dark depths of the ocean.
She can survive only 10 minutes out of water unless she uses a special serum that allows her to breathe air. Her stamina, agility and reflexes are reduced when she is out of the water.
She has been trained as an Atlantean warrior and is highly skilled in the arts of hunting and war, wielding a trident as her weapon of choice. She carries a short sword and an 8" dagger as additional weapons.
Andromeda also has an extensive knowledge of biochemistry.
Notes
Genevieve Cross is called Genevieve Cass in the Dragon Circle entry of the Handbook of the Marvel Universe '89 edition.
References
External links
Andromeda at the Appendix to the Handbook of the Marvel Universe
Andromeda on Marvel Database, a Marvel Comics wiki
Marvel Comics Atlanteans (Homo mermanus)
Marvel Comics characters with superhuman strength
Marvel Comics heroes
Marvel Comics heroines
Marvel Comics characters who can move at superhuman speeds
Chemora is a commune in the wilaya of Batna, Algeria, known for its dolmens, located to the north-east of Batna.
Geography
Location
The territory of the commune of Chemora lies in the north-east of the wilaya of Batna.
Relief, geology, hydrography
Hydrography
Water resources are generally of underground origin, from two different aquifers: the first is shallow (75% of its water used for irrigation), the second deep, of which 85% is used for irrigation.
Climate
Chemora has a semi-arid continental climate, with harsh winters with precipitation and dry, hot summers. Temperatures vary from one season to another, sometimes over very wide ranges. Rainfall is low and irregular from one year to the next. Winds are channeled by the neighboring mountain massifs of the Aurès and the Belezma. The sirocco blows during the month of May and lasts between 20 and 40 days.
Localities of the commune
The commune of Chemora is made up of 13 localities:
History
Population
Age pyramid
Demographic evolution
Administration and politics
Health
The Chemora polyclinic is the only one in the entire daïra of Chemora, staffed with medical and paramedical personnel.
Economy
Agriculture
Chemora is a commune with an agricultural vocation.
Heritage
Archaeological heritage
The commune's archaeological site extends over Djebel Bellaboud across 167 ha, occupying the summit and the slopes.
Near Chemora lie the remains of hundreds of burials forming a vast Berber necropolis, notably architectural forms such as tumuli, bazinas and dolmens. The latter are megalithic monuments consisting of a horizontal slab resting on vertical blocks. In some cases the monument is laid out in a circular fashion.
There is also a Byzantine citadel of polygonal shape, as well as chapels and oil presses.
Henchir Fortas holds Roman remains of the ancient town of Gassas (or Guessès).
Daily life
Sport
Amal Baladiat Chemora (ABC) is a football club playing in Régionale 2 of the Batna league. It plays at the Chemora communal stadium.
The commune has a semi-Olympic swimming pool opened in 2010.
People linked to Chemora
Liliane Raspail, an Algerian writer of French origin, born in Chemora in 1919.
Notes and references
See also
Bibliography
Dalila Ouitis, Concis de la toponymie et des noms de lieux de l'Algérie, Ed. Djoussour, Algiers, 2009
Achour Cheurfi, Dictionnaire des localités algériennes, Casbah Éditions, Algiers, 2011
Related articles
Communes of the wilaya of Batna
Daïras of the wilaya of Batna
Commune in the wilaya of Batna
Archaeological site in Algeria
Aurès
\section*{Introduction} Residuated structures, rooted in the work
of Dedekind on the ideal theory of rings, arise in many fields of
mathematics, and are particularly common among algebras
associated with logical systems. They are structures $\langle A,
\odot, \rightarrow, \leq \rangle$ such that $A$ is a nonempty
set, $\leq$ is a partial order on $A$ and $\odot$ and
$\rightarrow$ are binary operations such that the following
relation holds for each $a, b , c$ in $A$: $$a \odot b \leq c
\,\,\,\,\, \mbox{{\em iff}} \,\,\,\,\, a \leq b \rightarrow
c.$$ Important examples of residuated structures related to
logic are Boolean algebras (corresponding to classical logic),
Heyting algebras (corresponding to intuitionism), residuated
lattices (corresponding to logics without contraction rule
\cite{TK}), BL-algebras (corresponding to H\'ajek's basic fuzzy
logic \cite{HAJ}), MV-algebras (corresponding to \L ukasiewicz
many-valued logic \cite{CDM}). All these examples, with the
exception of residuated lattices are {\em hoops\/}
\cite{BlokFer}, i.e., they satisfy the equation $x \odot (x
\rightarrow y) = y \odot (y \rightarrow x)$.
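
A prototypical example is the real unit interval $[0,1]$ with its
usual order and the \L ukasiewicz operations $a \odot b = \max (0,
a + b - 1)$ and $a \rightarrow b = \min (1, 1 - a + b)$: the
residuation condition above holds because $\max (0, a + b - 1)
\leq c$ iff $a \leq \min (1, 1 - b + c)$. This structure,
underlying \L ukasiewicz many-valued logic, is at once an
MV-algebra and a bounded hoop.
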
The aim of this paper is to investigate injectives and absolute
retracts in classes of residuated lattices and bounded hoops. In
\S2 and \S3 we also present some results on injectives in more
general varieties.
The paper is structured as follows. In \S1 we recall some basic
definitions and properties. In \S2 we show that under some mild
hypothesis on a variety $\cal V$ of algebras, the existence of
nontrivial injectives is equivalent to the existence of a
self-injective maximum simple algebra. In \S3 we use ultrapowers
to obtain lattice properties of the injectives in varieties of
ordered algebras. The results of \S2 and \S3 are applied in \S4,
\S7 and in \S14 to the study of injectives in varieties of
residuated lattices, prelinear residuated lattices and bounded
hoops, respectively. In the remaining sections we consider
injectives in several subvarieties of residuated lattices which
appear in the literature. The results obtained are summarized in
Table 1.
\section{Basic Notions}
We recall from \cite{BD} and \cite{Bur} some basic notions
of injectives and universal algebra. Let ${\cal A}$ be a class of
algebras. For all algebras $A, B$ in ${\cal A}$, $[A,B]_{\cal A}$
will denote the set of all homomorphism $g:A\rightarrow B$. An
algebra $A$ in ${\cal A}$ is {\bf injective} iff for every
monomorphism $f \in [B, C]_{{\cal A}}$ and every $g \in [B,
A]_{{\cal A}}$ there exists $h \in [C, A]_{{\cal A}}$ such that
$hf = g$; $A$ is {\bf self-injective} iff every homomorphism from
a subalgebra of $A$ into $A$, extends to an endomorphism of $A$.
An algebra $B$ is a {\bf retract} of an algebra $A$ iff there
exists $g \in [B, A]_{{\cal A}}$ and $f\in [A, B]_{{\cal A}}$
such that $fg = 1_B$. It is well known that a retract of an
injective object is injective. An algebra $B$ is called an {\bf
absolute retract} in ${\cal A}$ iff it is a retract of each of
its extensions in ${\cal A}$. For each algebra $A$, we denote by
$Con(A)$, the congruence lattice of $A$, the diagonal congruence
is denoted by $\Delta$ and the largest congruence $A^2$ is
denoted by $\nabla$. A congruence $\theta_M$ is said to be
maximal iff $\theta_M \not= \nabla$ and there is no congruence
$\theta$ such that $\theta_M \subset \theta \subset \nabla$. An
algebra $I$ is {\bf simple} iff $Con(I) = \{\Delta,\nabla \}$. A
nontrivial algebra $T$ is said to be {\bf minimal} in $\cal A$
iff for each nontrivial algebra $A$ in $\cal A$, there exists a
monomorphism $f:T\rightarrow A$. A simple algebra $I_M$ is said
to be {\bf maximum simple} iff for each simple algebra $I$, $I$
can be embedded in $I_M$. A simple algebra is {\bf hereditarily
simple} iff all its subalgebras are simple. An algebra $A$ is {\bf
semisimple} iff it is a subdirect product of simple algebras. An
algebra $A$ is {\bf rigid} iff the identity homomorphism is the
only automorphism. An algebra $A$ has the {\bf congruence
extension property} (CEP) iff for each subalgebra $B$ and $\theta
\in Con(B)$ there is a $\phi \in Con(A)$ such that $\theta = \phi
\cap A^2$. A variety ${\cal V}$ satisfies CEP iff every algebra
in ${\cal V}$ has the CEP. It is clear that if ${\cal V}$
satisfies CEP then every simple algebra is hereditarily simple.
\section{Injectives and simple algebras}
\begin{definition}
{\rm Let $\cal V$ be a variety. Two constant terms $0,1$ of the
language of $\cal V$ are called {\it distinguished constants}
iff $A\models 0\not=1$ for each nontrivial algebra $A$ in $\cal
V$.}
\end{definition}
\begin{lem}\label{Disting}
Let $\cal A$ be a variety with distinguished constants $0,1$ and let
$A$ be a nontrivial algebra in $\cal A$. Then $A$ has maximal
congruences, and for each simple algebra $I\in \cal A$, all
homomorphisms $f:I\rightarrow A$ are monomorphisms.
\end{lem}
\begin{proof}
For each homomorphism $f:A\rightarrow B$ such that $B$ is a
nontrivial algebra we have $f(0)\not=f(1)$; hence for each $\theta \in
Con(A)\backslash \{A^2\}$, $(1,0)\notin \theta$. Thus a standard
application of Zorn's lemma shows that $Con(A)\backslash \{A^2\}$
has maximal elements. The second claim follows from the
simplicity of $I$ and $f(0)\not=f(1)$. \hfill$\Box$
\end{proof}
\begin{theo} \label{Injective simple}
Let ${\cal A}$ be a variety with distinguished constants $0, 1$
having a minimal algebra. If ${\cal A}$ has nontrivial
injectives, then there exists a maximum simple algebra $I$.
\end{theo}
\begin{proof}
Let $A$ be a nontrivial injective in ${\cal A}$. By
Lemma~\ref{Disting} there is a maximal congruence $\theta$ of $A$.
Let $I = A/\theta$ and $p:A\rightarrow I$ be the canonical
projection. Since ${\cal A}$ has a minimal algebra it is clear
that for each simple algebra $J$, there exists a monomorphism
$h:J \rightarrow A$. Then the composition $ph$ is a monomorphism
from $J$ into $I$. Thus $I$ is a maximum simple algebra. \hfill$\Box$
\end{proof}
\noindent
\\
We want to establish a partial converse of the above theorem.
\\
\begin{theo}\label{Simple injective}
Let ${\cal A}$ be a variety satisfying CEP, with distinguished
constants $0, 1$. If $I$ is a self-injective maximum simple
algebra in ${\cal A}$ then $I$ is injective.
\end{theo}
\begin{proof}
For each monomorphism $g:A\rightarrow B$ we consider the
following diagram in ${\cal A}$:
\begin{center}
\unitlength=1mm
\begin{picture}(60,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(2,16){\makebox(0,0){$A$}} \put(20,16){\makebox(0,0){$I$}}
\put(2,0){\makebox(0,0){$B$}}
\put(2,20){\makebox(17,0){$f$}} \put(2,8){\makebox(-5,0){$g$}}
\end{picture}
\end{center}
By CEP, $I$ is hereditarily simple. Hence $f(A)$ is simple and
$Ker(f)$ is a maximal congruence of $A$ such that $(0,1) \notin
Ker(f)$. Further $Ker(f)$ can be extended to a maximal congruence
$\theta$ in $B$. It is clear that $(0,1)\notin \theta$ and
$\theta \cap A^2 = Ker(f)$. Thus if we consider the canonical
projection $p:B \rightarrow B/\theta$, then there exists a
monomorphism $g':f(A) \rightarrow B/\theta$ such that
\begin{center}
\unitlength=1mm
\begin{picture}(60,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(8,0){\vector(1,0){5}} \put(20,10){\vector(0,-2){5}}
\put(26,16){\vector(3,0){5}}
\put(2,16){\makebox(0,0){$A$}} \put(20,16){\makebox(0,0){$f(A)$}}
\put(2,0){\makebox(0,0){$B$}}
\put(20,0){\makebox(0,0){$B/\theta$}}
\put(36,16){\makebox(0,0){$I$}}
\put(1,10){\makebox(17,-3){$\equiv$}}
\put(2,20){\makebox(17,0){$f$}} \put(2,8){\makebox(-5,0){$g$}}
\put(14,-4){\makebox(-5,2){$p$}} \put(19,7){\makebox(-5,2){$g'$}}
\put(20,20){\makebox(17,0){$1_{f(A)}$}}
\end{picture}
\end{center}
Since $I$ is maximum simple, $B/\theta$ is isomorphic to a
subalgebra of $I$. Therefore, since that $I$ is self-injective,
there exists a monomorphism $\varphi: B/\theta \rightarrow I$
such that $\varphi g' = 1_{f(A)}$. Thus $(\varphi p)g = f$ and $I$
is injective. \hfill$\Box$
\end{proof}
\begin{lem}\label{RIGID}
If $A$ is a rigid simple injective
algebra in a variety, then all the subalgebras of $A$ are rigid. \hfill$\Box$
\end{lem}
\section{Injectives, ultrapowers and lattice properties}
\noindent We recall from \cite{Bir} some basic notions on ordered
sets that will play an important role in what follows. An ordered
set $L$ is called {\bf bounded} provided it has a smallest
element $0$ and a greatest element $1$. The {\bf decreasing
segment} ${\rm ({\it a}]}$ of $L$ is defined as the set $\{\,x\in
L : x\leq a \}$. The increasing segment $[a)$ is defined dually. A
subset $X$ of $L$ is called {\bf down directed} ({\bf upper
directed}) iff for all $a,b\in X$, there exists $x\in X$ such that
$x \leq a$ and $x \leq b$ ($a \leq x$ and $b \leq x$).\\
\begin{lem}
Let $L$ be a lattice and $X$ be a down (upper) directed subset of
$L$ such that $X$ does not have a minimum (maximum) element. If
$\cal F$ is the filter in ${\cal P}(X)$ generated by the
decreasing (increasing) segments of $X$, then there exists a
nonprincipal ultrafilter $\cal U$ such that $\cal F \subseteq \cal
U$.
\end{lem}
\begin{proof}
Let ${\rm ({\it a}]}$, ${\rm ({\it b}]}$ be decreasing segments
of $X$. Since $X$ is a down directed subset, there exists $x\in
X$ such that $x\leq a$ and $x\leq b$, whence $x\in ({\it a}]\cap
({\it b}]$ and $\cal F$ is a proper filter of ${\cal P}(X)$. By
the ultrafilter theorem there exists an ultrafilter $\cal U$ such
that $\cal F \subseteq \cal U$. Suppose that $\cal U$ is the
principal filter generated by ${\rm ({\it c}]}$. Since $X$ does
not have a minimum element, there exists $x\in X$ such that
$x<c$. Thus ${\rm ({\it x}]} \in \cal U$ and it is a proper subset
of ${\rm ({\it c}]}$, a contradiction. Hence $\cal U$ is not a
principal filter. By duality, we can establish the same result
when $X$ is an upper directed set. \hfill$\Box$
\end{proof}
\begin{definition}\label{LA}
{\rm A variety ${\cal V}$ of algebras has {\it lattice-terms} iff
there are terms of the language of ${\cal V}$ defining on each
$A\in {\cal V}$ operations $\lor$, $\land$, such that $\langle
A,\lor,\land\rangle$ is a lattice. ${\cal V}$ has {\it bounded
lattice-terms} if, moreover, there are two constant terms $0$,$1$
of the language of ${\cal V}$ defining on each $A\in {\cal V}$ a
bounded lattice $\langle A,\lor,\land, 0, 1\rangle$. The order in
$A$, denoted by $L(A)$, is called the {\it natural order} of $A$.}
\end{definition}
\noindent Observe that each subvariety of a variety with
(bounded) lattice-terms is also a variety with (bounded)
lattice-terms.
\\
\noindent Let ${\cal V}$ be a variety with lattice-terms and
$A\in {\cal V}$. ${A^X}/{\cal U}$ will always denote the
ultrapower corresponding to a down (upper) directed set $X$ of
$A$ with respect to the natural order, without smallest
(greatest) element and a nonprincipal ultrafilter ${\cal U}$ of
${\cal P}(X)$, containing the filter generated by the decreasing
(increasing) segments of $X$. For each $f\in A^X$, $[f]$ will
denote the ${\cal U}$-equivalence class of $f$. Thus $[1_X]$ is
the ${\cal U}$-equivalence class of the canonical injection
$X\hookrightarrow A$ and for each $a\in A$, $[a]$ is the ${\cal
U}$-equivalence class of the constant function $a$ in $A^X$. It
is well known that $i_A(a) = [a]$ defines a monomorphism
$A\rightarrow {A^X}/{\cal U}$ {\rm (see \cite[Corollary
4.1.13]{CK})}.
\begin{theo}\label{Ultraproducto}
Let ${\cal V}$ be a variety with lattice-terms. If there exists
an absolute retract $A$ in ${\cal V}$, then each down directed
subset $X\subseteq A$ has an infimum, denoted by $\bigwedge X$.
Moreover if $P(x)$ is a first-order positive formula {\rm (see
\cite{CK})} of the language of ${\cal V}$ such that each $a\in X$
satisfies $P(x)$, then $\bigwedge X$ also satisfies $P(x)$.
\end{theo}
\begin{proof}
Let $X$ be a down directed subset of the absolute retract $A$.
Suppose that $X$ does not admit a minimum element and consider an
ultrapower ${A^X}/{\cal U}$. Since $A$ is an absolute retract
there exists a homomorphism $\varphi$ such that the following
diagram is commutative:
\begin{center}
\unitlength=1mm
\begin{picture}(60,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(8,2){\vector(1,1){7}} \put(2,16){\makebox(0,0){$A$}}
\put(20,16){\makebox(0,0){$A$}}
\put(2,0){\makebox(0,0){${A^X}/{\cal U}$}}
\put(2,20){\makebox(17,0){$1_A$}}
\put(2,10){\makebox(13,0){$\equiv$}}
\put(2,8){\makebox(-5,0){$i_A$}}
\put(16,0){\makebox(-5,2){$\varphi$}}
\end{picture}
\end{center}
We first prove that $\varphi ([1_X])$ is a lower bound of $X$.
Let $a\in X$. Then $[1_X]\leq [a]$ since $\{x\in X : 1_X(x)\leq
a(x) \} = \{x\in X : x\leq a\} \in \cal U$. Thus $\varphi
([1_X])\leq \varphi([a]) = a$ and $\varphi ([1_X])$ is a lower
bound of $X$. We proceed now to prove that $\varphi ([1_X])$ is
the greatest lower bound of $X$. In fact, if $b \in A$ is a lower
bound of $X$ then for each $x\in X$ we have $b\leq x$. Thus
$[b]\leq [1_X]$ since $\{x\in X : b(x)\leq 1_X(x)\} = \{x\in X :
b\leq x\} = X \in \cal U$. Now we have $b = \varphi ([b])\leq
\varphi ([1_X])$. This proves that $\varphi ([1_X])= \bigwedge
X$. If each $a\in X$ satisfies the first order formula $P(x)$
then $[1_X]$ satisfies $P(x)$ and, since $P(x)$ is a positive
formula, it follows from {\rm (\cite[Theorem 3.2.4]{CK} )} that
$\varphi ([1_X])$ satisfies $P(x)$. \hfill$\Box$
\end{proof}
\\
\noindent In the same way, we can establish the dual version of
the above theorem. Recalling that a lattice is complete iff there
exists the infimum $\bigwedge X$ (supremum $\bigvee X$), for each
down directed (upper directed) subset $X$, we have the following
corollary:
\begin{coro}
Let ${\cal V}$ be a variety with lattice-terms. If $A$ is an
absolute retract in ${\cal V}$, then $L(A)$ is a complete
lattice. \hfill$\Box$
\end{coro}
\section{Residuated Lattices and Semisimplicity}
\begin{definition}
{\rm A {\it residuated lattice} {\rm \cite{TK}} or {\it
commutative integral residuated $0,1$-lattice} {\rm \cite{JT}},
is an algebra $ \langle A, \land, \lor, \odot, \rightarrow, 0, 1
\rangle$ of type $ \langle 2, 2, 2, 2, 0, 0 \rangle$ satisfying
the following axioms:
\begin{enumerate}
\item
$\langle A,\odot,1 \rangle$ is an abelian monoid,
\item
$L(A) = \langle A, \lor, \land, 0,1 \rangle$ is a bounded lattice,
\item
$(x \odot y)\rightarrow z = x\rightarrow (y\rightarrow z)$,
\item
$((x\rightarrow y)\odot x)\land y = (x\rightarrow y)\odot x$,
\item
$(x\land y)\rightarrow y = 1$.
\end{enumerate}
\noindent $A$ is called an {\it involutive residuated lattice} or
{\it Girard monoid} {\rm \cite{UHO}} if it also satisfies the
equation:
\begin{enumerate}
\item[6.]
$(x\rightarrow 0)\rightarrow 0 = x$.
\end{enumerate}
\noindent $A$ is called {\it distributive} if satisfies 1. -- 5.
as well as:
\begin{enumerate}
\item[7.]
$x\land (y \lor z) = (x\land y)\lor (x\land z)$.
\end{enumerate}
}
\end{definition}
\noindent The variety of residuated lattices is denoted by ${\cal
RL}$, and the subvariety of Girard monoids is denoted by ${\cal GM}$.
Following the notation used in {\rm \cite{JT}}, the variety of
residuated lattices that satisfy the distributive law is denoted
by ${\cal DRL}$, and ${\cal DGM}$ will denote the variety of
distributive Girard monoids. It is clear that $0,1$ are
distinguished constant terms in ${\cal RL}$. Moreover, $\{0,1\}$
is a subalgebra of each nontrivial $A \in {\cal RL}$, which is a
boolean algebra. Hence $\{0,1\}$ with its natural boolean algebra
structure is the minimal algebra in each nontrivial subvariety of
${\cal RL}$. Thus the variety ${\cal BA}$ of boolean algebras is
contained in all nontrivial varieties of residuated lattices. On
each residuated lattice $A$ we can define a unary operation
$\neg$ by $\neg x = x\rightarrow 0$. We also define for all
$a\in A$, $a^1 = a$ and $a^{n+1}= a^n\odot a$. An element $a$ in
$A$ is called {\bf idempotent} iff $a^2 = a$, and it is called
{\bf nilpotent} iff there exists a natural number $n$ such that
$a^n= 0$. The minimum $n$ such that $a^n= 0$ is called {\bf
nilpotence order} of $a$. An element $a$ in $A$ is called {\bf
dense} iff $\neg a = 0$ and it is called a {\bf unity} iff for
all natural numbers $n$, $\neg (a^n)$ is nilpotent. The set of
dense elements of $A$ will be denoted by $Ds(A)$. We recall now
some well-known facts about implicative filters and congruences
on residuated lattices. Let A be a residuated lattice and
$F\subseteq A$. Then $F$ is an {\bf implicative filter} iff it
satisfies the following conditions:
\begin{enumerate}
\item
$1\in F$,
\item
if $x\in F$ and $x\rightarrow y \in F$ then $y\in F$.
\end{enumerate}
\noindent It is easy to verify that a nonempty subset $F$ of a
residuated lattice $A$ is an implicative filter iff for all
$a,b\in A$:
\begin{enumerate}
\item[-]
If $a\in F$ and $a\leq b$ then $b\in F$,
\item[-]
if $a,b\in F$ then $a\odot b \in F$.
\end{enumerate}
Note that an implicative filter $F$ is proper iff $0$ does not
belong to $F$. The intersection of any family of implicative
filters of $A$ is again an implicative filter of $A$. We denote
by $ \langle X \rangle$ the implicative filter generated by
$X\subseteq A$, i.e., the intersection of all implicative filters
of $A$ containing $X$. We abbreviate this as $ \langle a \rangle$
when $X=\{a\}$, and it is easy to verify that $\langle X \rangle =
\{x\in A: \mbox{there exist } w_1, \ldots, w_n \in X
\mbox{ such that } x \geq
w_1 \odot \cdots \odot w_n \}$. For any implicative filter $F$
of $A$, $\theta_F = \{(x,y)\in A^2: x\rightarrow y, y\rightarrow
x \in F \}$ is a congruence on $A$.
\noindent Moreover $F = \{ x \in A : (x,1)\in \theta_F \}$.
Conversely, if $\theta \in Con(A)$ then $F_{\theta} = \{x \in A :
(x,1) \in \theta \}$ is an implicative filter and $(x,y) \in
\theta $ iff $(x \rightarrow y, 1) \in \theta$ and $(y
\rightarrow x, 1) \in \theta$. Thus the correspondence $F
\rightarrow \theta_F$ is a bijection from the set of implicative
filters of $A$ onto the set $Con(A)$. If $F$ is an implicative
filter of $A$, we shall write $A/F$ instead of $A/\theta_F$, and
for each $x\in A$ we shall write $x/\theta_F$ for the equivalence
class of $x$.
\begin{prop}
If ${\cal A}$ is a subvariety of ${\cal RL}$, then ${\cal A}$
satisfies CEP.
\end{prop}
\begin{proof}
This follows from the same argument used in {\rm (\cite[Theorem
1.8]{BlokFer})}.
\hfill$\Box$\\
\end{proof}
\noindent If $A$ is a residuated lattice then we define
$$Rad(A) = \bigcap \{F: F \mbox{ is a maximal implicative filter of } A\}.$$
\noindent It is clear that $A$ is semisimple iff $Rad(A) =
\{1\}$. If ${\cal A}$ is a subvariety of ${\cal RL}$, we denote
by ${\cal S}em({\cal A}) $ the subclass of ${\cal A}$ whose
elements are the semisimple algebras of ${\cal A}$. Thus we have
${\cal S}em({\cal A}) = \{A/Rad(A):A\in {\cal A} \}$.
\begin{prop}\label{Rad}
Let A be a residuated lattice. Then:
\begin{enumerate}
\item
$A$ is simple iff for each $a<1$, $a$ is nilpotent.
\item
$Rad(A) = \{a\in A : a \mbox{ is a unity}\}$.
\item
$Ds(A)$ is an implicative filter in $A$ and $Ds(A) \subseteq
Rad(A)$ .
\end{enumerate}
\end{prop}
\begin{proof}
1)\hspace{0.2 cm} Trivial. 2)\hspace{0.2 cm} See {\rm
(\cite[Lemma 4.6]{UHO} )}. 3) \hspace{0.2 cm}Follows immediately
from 2. \hfill$\Box$
\\
\end{proof}
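\noindent As a concrete illustration of 1., consider the
three-element chain $\{0, \frac{1}{2}, 1\}$ equipped with the
\L ukasiewicz operations $a \odot b = \max (0, a + b - 1)$ and $a
\rightarrow b = \min (1, 1 - a + b)$. Since $(\frac{1}{2})^2 = 0$,
every element strictly below $1$ is nilpotent, so this residuated
lattice is simple; correspondingly, its only proper implicative
filter is $\{1\}$.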
\noindent If $Rad(A)$ has a least element $a$, i.e., $Rad(A)=
[a)$, then $a$ is called the {\bf principal unity} of $A$. It is
clear that a principal unity is an idempotent element and that it
generates the radical.
\begin{lem}\label{IMPUNIT}
Let $A$ be a residuated lattice having a principal unity $a$. If
$x\in Rad(A)$, then $x\rightarrow \neg a = \neg a$.
\end{lem}
\begin{proof}
$x\rightarrow \neg a = \neg (x\odot a) = \neg a$ since $a$ is
the minimum unity. \hfill$\Box$
\end{proof}
\begin{prop}\label{NEGUNIT}
Let $A$ be a linearly ordered residuated lattice. Then:
\begin{enumerate}
\item
$a$ is a unity in $A$ iff $a$ is not a nilpotent
element.
\item
If $a$ is a unity in $A$, then $\neg a < a$.
\end{enumerate}
\end{prop}
\begin{proof}
1)\hspace{0.2 cm} If $a < 1$ and there exists a natural number
$n$ such that $a^n = 0$, then $\neg(a^n) = 1$ and $a$ is not a
unity. Conversely, suppose $a$ is not a unity; then there is an
$n$ such that $\neg(a^n)$ is not nilpotent, whence $\neg(a^n)
\not\leq \neg\neg(a^n)$. Since $A$ is linearly ordered, we must
have $a^n \leq \neg \neg(a^n) < \neg(a^n)$. Hence $a^{2n} = 0$
and $a$ is nilpotent. 2)\hspace{0.2 cm} Is an obvious consequence of 1).
\hfill$\Box$
\end{proof}
\begin{coro}\label{UNMTL}
Let $A$ be a residuated lattice such that there exists an
embedding $f:A \rightarrow \prod_{i\in I} L_i$, with $L_i$ a
linearly ordered residuated lattice for each $i\in I $. Then $a$
is a unity in $A$ iff for each $i\in I$, $a_i =\pi_i f(a)$ is a
unity in $L_i$, where $\pi_i$ is the $i{\rm th}$-projection onto
$L_i$ .
\end{coro}
\begin{proof}
If $a$ is a unity in $A$ then $a_i = \pi_i f(a)$ is a unity in
$L_i$, because homomorphisms preserve unities. Conversely,
suppose that $a$ is not a unity. Therefore there is an $n$ such
that $\neg (a^n)$ is not nilpotent, and hence $\neg (a^n) \not
\leq \neg \neg (a^n)$. Since $f$ is an embedding and since $L_i$
is linearly ordered for each $i\in I$, there exists $j\in I$ such
that $\neg\neg(a_j^n) \leq \neg (a_j^n)$, and by Proposition
\ref{NEGUNIT} $a_j$ is not a unity in $L_j$. \hfill$\Box$
\end{proof}
\begin{prop}\label{RadSub}
Let ${\cal A}$ be a subvariety of ${\cal RL}$. Then ${\cal
S}em({\cal A})$ is a reflective subcategory, and the reflector
{\rm \cite{BD}} preserves monomorphism.
\end{prop}
\begin{proof}
If $A \in {\cal A}$, for each $x\in A$, $[x]$ will denote the
$Rad(A)$-congruence class of $x$. We define ${\cal S}(A)=
A/Rad(A)$, and for each $f \in [A,A']_{\cal A}$, we let ${\cal
S}(f)$ be defined by ${\cal S}(f)([x]) = [f(x)]$ for each $x\in
A$. Since homomorphisms preserve unities, we obtain a well-defined
function ${\cal S}(f): A/Rad(A)\rightarrow A'/Rad(A')$. It is easy
to check that ${\cal S}$ is a functor from ${\cal A}$ to ${\cal
S}em({\cal A})$. To show that ${\cal S}$ is a reflector, note
first that if $p_A:A \rightarrow A/Rad(A)$ is the canonical
projection, then the following diagram is commutative:
\begin{center}
\unitlength=1mm
\begin{picture}(20,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(-4,10){\vector(0,-2){5}}
\put(8,0){\vector(1,0){5}} \put(24,10){\vector(0,-2){5}}
\put(-4,16){\makebox(0,0){$A$}} \put(24,16){\makebox(0,0){$A'$}}
\put(-6,0){\makebox(0,0){$A/Rad(A)$}}
\put(27,0){\makebox(0,0){$A'/Rad(A')$}}
\put(2,10){\makebox(17,-3){$\equiv$}}
\put(2,20){\makebox(17,0){$f$}} \put(-7,8){\makebox(-5,0){$p_A$}}
\put(14,-5){\makebox(-5,2){${\cal S}(f)$}}
\put(33,8){\makebox(-5,2){$p_{A'}$}}
\end{picture}
\end{center}
\hspace{0.2 cm}
Suppose that $B\in {\cal S}em({\cal A})$ and $f\in [A,B]_{{\cal A}}$.
Since $Rad(B) = \{1\}$, the mapping $[x]\mapsto f(x)$ defines a
homomorphism $g:A/Rad(A)\rightarrow B$ that makes the following
diagram commutative:
\begin{center}
\unitlength=1mm
\begin{picture}(20,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(10,4){\vector(1,1){7}}
\put(2,10){\makebox(13,0){$\equiv$}}
\put(2,16){\makebox(0,0){$A$}} \put(20,16){\makebox(0,0){$B$}}
\put(2,0){\makebox(0,0){$A/Rad(A)$}}
\put(2,20){\makebox(17,0){$f$}} \put(2,8){\makebox(-6,0){$p_A$}}
\put(18,2){\makebox(-4,3){$g$}}
\end{picture}
\end{center}
\noindent and it is obvious that $g$ is the only homomorphism in
$[A/Rad(A),B]_{{\cal S}em({\cal A})}$ making the triangle
commutative. Therefore we have proved that ${\cal S}$ is a
reflector. We proceed to prove that ${\cal S}$ preserves
monomorphisms. Let $f\in [A,B]_{{\cal A}}$ be a monomorphism and
suppose that $({\cal S}(f))(x)=({\cal S}(f))(y)$, i.e.,
$[f(x)]=[f(y)]$. Then for each number $n$ there exists a number
$m$ such that $0 = (\neg ((f(x)\rightarrow f(y))^n))^m = f((\neg
((x\rightarrow y)^n))^m)$. Since $f$ is a monomorphism then $(\neg
((x\rightarrow y)^n))^m = 0$ and $x\rightarrow y \in Rad(A)$.
Interchanging $x$ and $y$, we obtain $[x]= [y]$ and ${\cal S}(f)$
is a monomorphism. \hfill$\Box$
\end{proof}
\begin{coro}\label{INJBOT}
Let ${\cal A}$ be a subvariety of ${\cal RL}$. If $A$ is
injective in ${\cal S}em({\cal A})$ then $A$ is injective in
${\cal A}$.
\end{coro}
\begin{proof}
It is well known that if ${\cal D}$ is a reflective subcategory
of ${\cal A}$ such that the reflector preserves monomorphisms
then an injective object in ${\cal D}$ is also injective in
${\cal A}$\hspace{0.2 cm}{\rm \cite[I.18]{BD}}. Then the corollary
follows from Proposition \ref{RadSub}.
\hfill$\Box$ \\
\end{proof}
\noindent We will say that a variety ${\cal A}$ is ${\bf
radical-dense}$ provided that ${\cal A}$ is a subvariety of
${\cal RL}$ and $Rad(A) = Ds(A)$ for each $A$ in ${\cal A}$. An
example of a radical-dense variety is the variety ${\cal H}$ of
Heyting algebras (i.e., ${\cal RL}$ plus the equation $x\odot y =
x\land y$).
\begin{theo}\label{H3}
Let ${\cal A}$ be a radical-dense variety. If $A$ is a
non-semisimple absolute retract in ${\cal A}$, then $A$ has a
principal unity $\epsilon$ and $\{0,\epsilon,1\}$ is a
subalgebra of $A$ isomorphic to the three element Heyting algebra
$H_3$.
\end{theo}
\begin{proof}
Let $A$ be a non-semisimple absolute retract. Unities are
characterized by the first order positive formula $\neg x = 0$
because $Rad(A)= Ds(A)$. Since $Ds(A)$ is a down-directed set, by
Theorem \ref{Ultraproducto} there exists a minimum dense element
$\epsilon$. It is clear that $\epsilon$ is the principal unity
and since $\epsilon < 1$, $\{0,\epsilon,1\}$ is a subalgebra of
$A$, which coincides with the three element Heyting algebra $H_3$.
\hfill$\Box$
\end{proof}
\begin{definition}
{\rm Let ${\cal A}$ be a radical-dense variety. An algebra $T \in
{\cal A}$ is called a {\it $test_d$-algebra} iff there are
$\epsilon, t \in Rad(T) $ such that $\epsilon$ is an idempotent
element, $t < \epsilon$ and $\epsilon \rightarrow t \leq
\epsilon$}.
\end{definition}
\noindent An important example of a $test_d$-algebra is the
totally ordered four element Heyting algebra $H_4 = \{0 < b < a <
1\}$ whose operations are given as follows:
$$
x\odot y = x\land y,
$$
$$
x\rightarrow y = \cases {1, & if $x \leq y$,\cr y, & if $x > y.
$\cr }
$$
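As a direct check (an illustrative verification, not part of the original argument), $H_4$ is indeed a $test_d$-algebra with $\epsilon = a$ and $t = b$:

```latex
% In H_4 = {0 < b < a < 1} with x \odot y = x \wedge y:
% a and b are not nilpotent (a^n = a, b^n = b), so a, b \in Rad(H_4).
$$
\epsilon \odot \epsilon = a \wedge a = a = \epsilon
  \quad\mbox{($\epsilon$ is idempotent)}, \qquad
t = b < a = \epsilon, \qquad
\epsilon \rightarrow t = a \rightarrow b = b \leq a = \epsilon.
$$
```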
\begin{theo}\label{TEST1}
Let ${\cal A}$ be a radical-dense variety. If ${\cal A}$ has a
nontrivial injective and contains a $test_d$-algebra $T$, then
all injectives in ${\cal A}$ are semisimple.
\end{theo}
\begin{proof}
Suppose that there exists a non-semisimple injective $A$ in
${\cal A}$. Then by Lemma \ref{H3}, there is a monomorphism
$\alpha: H_3 \rightarrow A$ such that $\alpha(a)$ is the
principal unity in $A$. Let $i:H_3\rightarrow T$ be the
monomorphism such that $i(a)= \epsilon$. Since $A$ is injective,
there exists a homomorphism $\varphi:T\rightarrow A$ such that
the following diagram commutes
\begin{center}
\unitlength=1mm
\begin{picture}(20,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(8,2){\vector(1,1){7}}
\put(2,10){\makebox(13,0){$\equiv$}}
\put(2,16){\makebox(0,0){$H_3$}} \put(20,16){\makebox(0,0){$A$}}
\put(2,0){\makebox(0,0){$T$}} \put(2,20){\makebox(17,0){$\alpha$}}
\put(2,8){\makebox(-6,0){$i$}}
\put(16,0){\makebox(-4,3){$\varphi$}}
\end{picture}
\end{center}
\noindent Since $i(a) = \epsilon$, commutativity gives
$\varphi(\epsilon) = \alpha(a)$. Moreover $\varphi(t)\in Rad(A)$
and $\varphi(t) \leq \varphi(\epsilon) = \alpha(a)$, so the
minimality of $\alpha(a)$ in $Rad(A)$ yields $\varphi(t)=
\alpha(a)$. Thus $\varphi(\epsilon \rightarrow t) = 1$, which is a
contradiction since by hypothesis $\varphi(\epsilon
\rightarrow t) \leq \varphi(\epsilon) = \alpha(a) < 1$. Hence
${\cal A}$ has only semisimple injectives. \hfill$\Box$
\end{proof}
\section {Injectives in ${\cal RL}$, ${\cal GM}$, ${\cal DRL}$ and ${\cal DGM}$ }
\begin{prop}\label{EXT}
Let $A$ be a residuated lattice. Then the set $A^\diamond =
\{(a,b)\in A\times A: a\leq b \}$ equipped with the operations \\
$(a_1,b_1)\land (a_2,b_2):= (a_1\land a_2, b_1\land b_2)$,
$(a_1,b_1)\lor (a_2,b_2):= (a_1\lor a_2, b_1\lor b_2)$,
$(a_1,b_1)\odot (a_2,b_2):= (a_1\odot a_2, (a_1\odot b_2)\lor
(a_2\odot b_1))$,
$(a_1,b_1)\rightarrow (a_2,b_2):= ((a_1\rightarrow a_2)\land (b_1\rightarrow b_2), a_1\rightarrow b_2)$.\\
\noindent is a residuated lattice, and the following properties
hold:
\begin{enumerate}
\item
The map $i:A\rightarrow A^\diamond$ defined by $i(a)=(a,a)$ is a
monomorphism.
\item
$\neg(a,b)= (\neg b, \neg a)$ and $\neg(0,1)=(0,1)$.
\item
$A$ is a Girard monoid iff $A^\diamond$ is a Girard monoid.
\item
$A$ is distributive iff $A^\diamond$ is distributive.
\end{enumerate}
\end{prop}
\begin{proof}
See {\rm \cite[IV Lemma 3.2.1]{UHO}}. \hfill$\Box$
\end{proof}
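To illustrate the operations of $A^\diamond$, take $A$ to be the two-element boolean algebra $\{0,1\}$, so that $A^\diamond = \{(0,0),(0,1),(1,1)\}$ (a routine computation, included here only as an example):

```latex
% Sample computations in A-diamond for A = {0,1}:
$$
(0,1)\odot(0,1) = (0\odot 0,\ (0\odot 1)\lor(0\odot 1)) = (0,0),
$$
$$
\neg(0,1) = (\neg 1, \neg 0) = (0,1)
  \quad\mbox{(the fixed point of item 2)},
$$
$$
(0,1)\rightarrow(0,0)
  = ((0\rightarrow 0)\land(1\rightarrow 0),\ 0\rightarrow 0)
  = (1\land 0,\ 1) = (0,1).
$$
```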
\begin{definition}
We say that a subvariety ${\cal A}$ of ${\cal RL}$ is {\bf
$\diamond$-closed} iff for all $A \in {\cal A}$, $A^\diamond \in
{\cal A}$.
\end{definition}
\begin{theo} \label{INJRL}
If a subvariety ${\cal A}$ of ${\cal RL}$ is {\bf
$\diamond$-closed}, then ${\cal A}$ has only trivial absolute
retracts.
\end{theo}
\begin{proof}
Suppose that there exists a non-trivial absolute retract $A$ in
${\cal A}$. Then by Proposition \ref{EXT} there exists an
epimorphism $f:A^\diamond \rightarrow A$ such that the following
diagram is commutative
\begin{center}
\unitlength=1mm
\begin{picture}(20,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(8,2){\vector(1,1){7}}
\put(2,10){\makebox(13,0){$\equiv$}}
\put(2,16){\makebox(0,0){$A$}} \put(20,16){\makebox(0,0){$A$}}
\put(2,0){\makebox(0,0){$A^\diamond$}}
\put(2,20){\makebox(17,0){$1_A$}} \put(2,8){\makebox(-6,0){$i$}}
\put(16,0){\makebox(-4,3){$f$}}
\end{picture}
\end{center}
Thus there exists $a\in A$ such that $f(0,1)= a = f(a,a)$. Since
$(0,1)$ is a fixed point of the negation in $A^\diamond$ it
follows that $0<a<1$. We have $f(a,1)=1$. Indeed,
$(0,1)\rightarrow (a,a) = ((0 \rightarrow a)\land (1\rightarrow
a), 0 \rightarrow a) = (a,1)$. Thus $f(a,1)= f((0,1)\rightarrow
(a,a))= f(0,1)\rightarrow f(a,a)= a\rightarrow a =1$. In view of
this we have $1 = f(a,1)\odot f(a,1) = f((a,1) \odot(a,1)) =
f(a\odot a, (a\odot 1)\lor (a\odot 1))= f((a\odot a , a)) \leq
f((a,a))= a$, which is a contradiction since $a<1$. Hence ${\cal
A}$ has only trivial absolute retracts. \hfill$\Box$
\end{proof}
\begin{coro}
${\cal RL}$, ${\cal GM}$, ${\cal DRL}$ and ${\cal DGM}$ have
only trivial absolute retracts and injectives. \hfill$\Box$
\end{coro}
\section {Injectives in SRL-algebras}
\begin{definition}
A SRL-algebra is a residuated lattice satisfying the
equation $$x\land \neg x = 0 \leqno(S)$$ The variety of
SRL-algebras is denoted by ${\cal SRL}$.
\end{definition}
\begin{prop}\label{NILPS}
If $A$ is a SRL-algebra, then $0$ is the only nilpotent in $A$.
\end{prop}
\begin{proof}
Suppose that there exists a nilpotent element $x$ in $A$ such
that $0<x$, having nilpotence order equal to $n$. By the
residuation property we have $x^{n-1}\leq \neg x$. Thus
\hspace{0.1cm} $x^{n-1} = x\land x^{n-1}\leq x \land \neg x =
0$, which is a contradiction since $x$ has nilpotence order equal
to $n$. \hfill$\Box$
\end{proof}
\begin{coro}\label{SIMPSRL}
Let ${\cal A}$ be a subvariety of ${\cal SRL}$. Then the
two-element boolean algebra is the maximum simple algebra in
${\cal A}$ and ${\cal S}em({\cal A}) = {\cal BA}$.
\end{coro}
\begin{proof}
Follows from Propositions \ref{NILPS} and \ref{Rad}. \hfill$\Box$
\end{proof}
\begin{coro}\label{SRLRD}
If ${\cal A}$ is a subvariety of ${\cal SRL}$ then ${\cal A}$ is
a radical-dense variety.
\end{coro}
\begin{proof}
Let $A$ be an algebra in ${\cal A}$ and let $a$ be a unity. Thus
$\neg a$ is nilpotent and hence $\neg a = 0$. \hfill$\Box$
\end{proof}
\begin{coro}\label{SRLINJ}
If ${\cal A}$ is a subvariety of ${\cal SRL}$, then all complete
boolean algebras are injectives in ${\cal A}$.
\end{coro}
\begin{proof}
By Corollary \ref{SIMPSRL} the two-element boolean algebra is the
maximum simple algebra in ${\cal A}$. Since it is self-injective,
by Theorem \ref{Simple injective} it is injective. Since complete
boolean algebras are the retracts of powers of the two-element
boolean algebra, the result is proved. \hfill$\Box$
\end{proof}
\vspace{0.5cm}
\noindent As an application of this corollary we prove the
following results:
\begin{coro}
In ${\cal SRL}$ and ${\cal H}$, the only injectives are complete
boolean algebras.
\end{coro}
\begin{proof}
Follows from Corollary \ref{SRLINJ} and Theorem \ref{TEST1}
because the $test_d$-algebra $H_4$ belongs to both varieties. \hfill$\Box$
\end{proof}
\begin{rema}
{\rm The fact that injective Heyting algebras are exactly
complete boolean algebras was proved in {\rm \cite{BH}} by
different arguments.}
\end{rema}
\section {MTL-algebras and absolute retracts}
\begin{definition}
{\rm An {\it MTL-algebra} {\rm \cite{GE1}} is a residuated
lattice satisfying the pre-linearity equation
$$(x\rightarrow y)\lor (y\rightarrow x) = 1 \leqno(Pl)$$
\noindent The variety of MTL-algebras is denoted by ${\cal MTL}$}.
\end{definition}
\begin {prop}\label{SUBMTL}
Let $A$ be a residuated lattice. Then the following conditions are
equivalent:
\begin{enumerate}
\item
$A \in {\cal MTL}$.
\item
$A$ is a subdirect product of linearly ordered residuated
lattices.
\end{enumerate}
\end {prop}
\begin{proof}
{\rm \cite[Theorem 4.8 p. 76 ]{UHO}}. \hfill$\Box$
\\
\end{proof}
\begin {coro}\label{dist}
${\cal MTL}$ is a subvariety of ${\cal DRL}$. \hfill$\Box$
\end {coro}
\begin {coro}\label{SIMMTL}
Let $A$ be a MTL-algebra.
\begin{enumerate}
\item
If $A$ is simple, then $A$ is linearly ordered.
\item
If $e$ is a unity in $A$, then $\neg e < e$.
\end{enumerate}
\end {coro}
\begin{proof}
1)\hspace{0.2 cm} Is an immediate consequence of Proposition
\ref{SUBMTL}. 2)\hspace{0.2 cm}Consider a subdirect
representation $f:A \rightarrow \prod_{i\in I} L_i$ with each
$L_i$ linearly ordered. For each $i\in I$ the {\it i}{\rm
th}-coordinate $\pi_if(e)$ of $e$ is a unity in $L_i$, so by
Proposition~\ref{NEGUNIT}, $\neg \pi_if(e) <
\pi_if(e)$. Thus $\neg e < e$. \hfill$\Box$
\\
\end{proof}
To obtain the analog of Theorem \ref{H3} for varieties of
MTL-algebras, we cannot use directly Theorem \ref{Ultraproducto},
because the property of being a unity is not a first order
property. We need to adapt the proof of Theorem \ref{H3} to this
case:
\begin {theo}\label{MINUNITY}
Let ${\cal A}$ be a subvariety of ${\cal MTL}$. If $A$ is an
absolute retract in ${\cal A}$, then $A$ has a principal unity
$e$.
\end {theo}
\begin{proof}
By Proposition \ref{SUBMTL} we can consider a subdirect embedding
$f:A \rightarrow \prod_{i\in I} L_i$ such that $L_i$ is linearly
ordered. We define a family $H(L_i)$ in ${\cal A}$ as follows:
for each $i\in I$
\begin{enumerate}
\item[(a)]
if there exists $e_i = min\{u\in L_i : u\ \mbox{is a unity}\}$
then $H(L_i) = L_i$,
\item[(b)]
otherwise, $X = \{u\in L_i : u\ \mbox{is a unity}\}$ is a
down-directed set without least element. Then by
Proposition \ref{Ultraproducto} we can consider an ultraproduct
${L_i^X}_{/{\cal U}}$ of the kind considered after Definition
\ref{LA}. We define $H(L_i) = {L_i^X}_{/{\cal U}}$. It is clear
that $H(L_i)$ is a linearly ordered ${\cal A}$-algebra. If we
take the class $e_i = [1_X]$ then $e_i$ is a unity in $H(L_i)$
since for every natural number $n$, $0 < e_i^n$ iff \hspace{0.1
cm} $\{x\in X : 0 < (1_X(x))^n\} \in {\cal U}$ and $\{x\in X : 0 <
(1_X(x))^n = x^n\} = X \in {\cal U}$.
\end{enumerate}
\noindent
We can take the canonical embedding $j_i: L_i
\rightarrow H(L_i)$ and then for each $i\in I$ we can consider
$e_i$ as a unity in $H(L_i)$ which is a lower bound of the
unities of $L_i$. By Corollary
\ref{UNMTL}, $(e_i)_{i\in I}$ is a unity in $\prod_{i\in I}
H(L_i)$. Let $j:\prod_{i\in I} L_i \rightarrow \prod_{i\in I}
H(L_i)$ be the monomorphism defined by $j((x_i)_{i\in I})=
(j_i(x_i))_{i\in I}$. Since $A$ is an absolute retract there
exists an epimorphism $\varphi:\prod_{i\in I} H(L_i)\rightarrow
A$ such that the following diagram commutes:
\begin{center}
\unitlength=1mm
\begin{picture}(90,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(32,16){\vector(3,0){5}}
\put(52,10){\vector(0,-2){5}} \put(8,12){\vector(3,-1){40}}
\put(2,16){\makebox(0,0){$A$}}
\put(22,16){\makebox(0,0){$\prod_{i\in I} L_i$}}
\put(52,16){\makebox(0,0){$\prod_{i\in I} H(L_i)$}}
\put(52,-2){\makebox(0,0){$A$}}
\put(2,20){\makebox(17,0){$f$}} \put(26,20){\makebox(17,0){$j$}}
\put(28,9){\makebox(13,0){$\equiv$}}
\put(58,8){\makebox(-6,0){$\varphi$}}
\put(26,0){\makebox(-4,3){$1_A$}}
\end{picture}
\end{center}
\noindent
\\ Let $e = \varphi((e_i)_{i\in I})$. It is clear that $e$ is a
unity in $A$ since $\varphi$ is a homomorphism. If $u$ is a
unity in $A$ then $(e_i)_{i\in I}\leq jf(u)$ and, by commutativity
of the above diagram, $e = \varphi((e_i)_{i\in I})\leq \varphi
jf(u)= u$. Thus $e = min\{u\in A : u\ \mbox{is a unity}\}$, so
that $Rad(A) = [e)$.
\hfill$\Box$
\end{proof}
\section {Injectives in WNM-algebras and ${\cal MTL}$}
\begin{definition}
{\rm A {\it WNM-algebra} (weak nilpotent minimum) {\rm
\cite{GE1}} is an MTL-algebra satisfying the equation
$$\neg (x\odot y) \lor ((x\land y )\rightarrow (x\odot y)) = 1 . \leqno(W) $$}
\end{definition}
\noindent The variety of WNM-algebras is denoted by ${\cal WNM}$.
\begin{theo}\label{SIMPLEWNM}
The following conditions are equivalent:
\begin{enumerate}
\item
$I$ is a simple WNM-algebra.
\item
$I$ has a coatom $u$ and its operations are given by
$$
x\odot y = \cases {0, & if $x, y < 1$\cr x, & if $y = 1$\cr y, &
if $x = 1$\cr}
$$
$$
x\rightarrow y = \cases {1, & if $x \leq y$ \cr y, & if $x = 1$\cr
u, & if $y < x < 1 . $\cr }
$$
\end{enumerate}
\end{theo}
\begin{proof}
$\Rightarrow$). For $Card(I)=2$ this result is trivial. If
$Card(I)>2$ then we only need to prove the following steps:
\begin{enumerate}
\item[a)]
\textit{If $x, y < 1$ in $I$ then $x\odot y = 0$}: Since $I$ is
simple, equation (W) implies that $x^2 = 0$ for each $x \in I
\setminus \{1\}$. Hence if $x \leq y < 1$, then $x \odot y \leq y
\odot y = 0$.
\item[b)]
\textit{$I$ has a coatom}: Let $0< x < 1$. We have that $\neg x <
1$ and, since $I$ is simple, we also have $\neg \neg x < 1$. Then
by a) it follows that $ \neg x \leq \neg \neg x \leq \neg \neg
\neg x = \neg x$, i.e., $\neg x = \neg \neg x$. If $0 < x, y <
1$, again by a) we have $\neg x \odot \neg y = 0$. Thus $\neg x
\leq \neg \neg y = \neg y$. By interchanging $x$ and $y$ we
obtain the equality $\neg x = \neg y$. Now it is clear that if
$0 < x <1$, then $u = \neg x$ is the coatom in $I$.
\item[c)]\textit{
If $y < x < 1$ then $x\rightarrow y = u$}: Since $x\rightarrow y =
\bigvee\{t\in I: t\odot x \leq y\}$, this supremum cannot be $1$
because $y < x $. Thus, in view of item a), $x\rightarrow y$ is
the coatom $u$.
\end{enumerate}
\noindent $\Leftarrow$) Immediate.
\hfill$\Box$
\end{proof}
\begin{example}
{\rm We can build simple WNM-algebras having arbitrary
cardinality if we consider an ordinal $\gamma = Suc \hspace{0.05
cm} (Suc \hspace{0.05 cm}(\alpha))$ with the structure given by
Theorem \ref{SIMPLEWNM}, taking $Suc(\alpha)$ as coatom.
These algebras will be called {\it ordinal algebras}.}
\end{example}
\begin{prop}
${\cal WNM}$ and ${\cal MTL}$ have only trivial injectives.
\end{prop}
\begin{proof}
Follows from Proposition \ref{Injective simple} since these
varieties contain all ordinal algebras. \hfill$\Box$
\end{proof}
\section {Injectives in SMTL-algebras}
\begin{definition}
{\rm An {\it SMTL-algebra} {\rm \cite{GE2}} is an MTL-algebra
satisfying equation {\rm ($S$)}. The variety of SMTL-algebras is
denoted by ${\cal SMTL}$}.
\end{definition}
\begin{prop}
The only injectives in ${\cal SMTL}$ are complete boolean
algebras.
\end{prop}
\begin{proof}
Follows from Corollary \ref{SRLINJ} and Theorem \ref{TEST1}
since the $test_d$-algebra $H_4$ belongs to ${\cal SMTL}$. \hfill$\Box$
\end{proof}
\section {Injectives in $\Pi$SMTL-algebras}
\begin{definition}
{\rm A {\it $\Pi$SMTL-algebra} {\rm \cite{GE1}} is an
SMTL-algebra satisfying the equation
$$(\neg \neg z \odot ((x\odot z) \rightarrow (y\odot z)))\rightarrow (x\rightarrow y) = 1. \leqno(\Pi)$$}
\end{definition}
\noindent The variety of $\Pi$SMTL-algebras is denoted by
$\Pi{\cal SMTL}$.
\begin{prop}\label{DENSIDP}
Let $A$ be a $\Pi$SMTL-algebra. Then $1$ is the only idempotent
dense element in $A$.
\end{prop}
\begin{proof}
By equation ($\Pi$) it is easy to prove that, for each dense
element $\epsilon$, if $\epsilon \odot x = \epsilon \odot y$ then
$x = y$. Thus if $\epsilon$ is an idempotent dense element, then
$\epsilon \odot 1 = \epsilon \odot \epsilon$ and $\epsilon = 1$.
\hfill$\Box$
\begin{theo}\label{INJPI}
Let ${\cal A}$ be a subvariety of $\Pi{\cal SMTL}$. Then the
injectives in ${\cal A}$ are exactly the complete boolean
algebras.
\end{theo}
\begin{proof}
Follows from Corollary \ref{SRLINJ}, Theorem \ref{H3} and
Proposition \ref{DENSIDP}. \hfill$\Box$
\end{proof}
\section {Injectives in BL, MV, PL, and in Linear Heyting algebras}
\begin{definition}
{\rm A {\it BL-algebra} {\rm \cite{HAJ}} is an MTL-algebra
satisfying the equation $$x\odot(x\rightarrow y) = x\land y
\leqno(B).$$}
\end{definition}
\noindent We denote by ${\cal BL}$ the variety of BL-algebras.
Important subvarieties of ${\cal BL}$ are the variety ${\cal MV}$
of multi-valued logic algebras (MV-algebras for short),
characterized by the equation $\neg \neg x = x$ \cite{CDM,HAJ},
the variety ${\cal PL}$ of product logic algebras (PL-algebras
for short), characterized by the equations ($\Pi$) plus ($S$)
\cite{HAJ,CT}, and the variety ${\cal HL}$ of linear Heyting
algebras, characterized by the equation $x\odot y = x\land y$
(also known as G\"odel algebras \cite{HAJ}).
\begin{rema}\label{MV}
{\rm It is well known that ${\cal MV}$ is generated by the
MV-algebra $R_{[0,1]}= \langle [0,1], \odot, \rightarrow, \land,
\lor, 0, 1 \rangle$ such that $[0,1]$ is the real unit segment,
$\land$, $\lor$ are the natural meet and join on $[0,1]$ and
$\odot$ and $\rightarrow$ are defined as follows: $x\odot y:=
max(0,x+y-1)$, $x\rightarrow y:= min(1,1-x+y)$. $R_{[0,1]}$ is
the maximum simple algebra in ${\cal MV}$ {\rm (see \cite[Theorem
3.5.1]{CDM})}. Moreover $R_{[0,1]}$ is a rigid algebra {\rm (see
\cite[Corollary 7.2.6]{CDM})}, hence self-injective. Injective
MV-algebras were characterized in {\rm \cite[Corollary
2.11]{GLUS}} as the retracts of powers of $R_{[0,1]}$.}
\end{rema}
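For concreteness, a routine computation in $R_{[0,1]}$ (an illustrative example, not contained in the cited sources):

```latex
% Sample Lukasiewicz computations in R_[0,1]:
$$
0.6 \odot 0.7 = \max(0,\ 0.6+0.7-1) = 0.3, \qquad
0.6 \rightarrow 0.4 = \min(1,\ 1-0.6+0.4) = 0.8,
$$
$$
\neg x = x \rightarrow 0 = 1-x,
  \quad\mbox{hence}\quad \neg\neg x = x.
$$
```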
\begin{prop}
If ${\cal A}$ is a subvariety of ${\cal PL}$, then the only
injectives of ${\cal A}$ are the complete boolean algebras.
\end{prop}
\begin{proof}
Follows from Theorem \ref{INJPI} since ${\cal PL}$ is a
subvariety of $\Pi{\cal SMTL}$. \hfill$\Box$
\end{proof}
\begin{prop}
The only injectives in ${\cal HL}$ are the complete boolean
algebras.
\end{prop}
\begin{proof}
Follows from Corollary \ref{SRLINJ} and Theorem \ref{TEST1}
since the $test_d$-algebra $H_4$ lies in ${\cal HL}$. \hfill$\Box$
\end{proof}
\begin{prop}\label{SIMBL}
${\cal BL}$ is a radical-dense variety.
\end{prop}
\begin{proof}
See {\rm \cite[Theorem 1.7 and Remark 1.9]{CT1}}. \hfill$\Box$
\end{proof}
\begin{prop}
Injectives in ${\cal BL}$ are exactly the retracts of powers of
the MV-algebra $R_{[0,1]}$ .
\end{prop}
\begin{proof}
By Remark \ref{MV} and Propositions \ref{SIMBL} and \ref{Simple
injective}, retracts of powers of $R_{[0,1]}$ are injectives
in ${\cal BL}$. Thus by Theorem \ref{TEST1}, they are the only
possible injectives since $H_4$ lies in ${\cal BL}$. \hfill$\Box$
\end{proof}
\section {Injectives in IMTL-algebras}
\begin{definition}
{\rm An involutive MTL-algebra (or {\it IMTL-algebra}) {\rm
\cite{GE1}} is a MTL-algebra satisfying the equation
$$\neg \neg x = x. \leqno(I)$$}
\end{definition}
\noindent The variety of IMTL-algebras is denoted by ${\cal IMTL}$. \\
An interesting IMTL-algebra, whose role is analogous to that of
$H_3$ in the radical-dense varieties, is the four element chain $I_4$
defined as follows:
\begin{center}
\unitlength=1mm
\begin{picture}(60,20)(0,0)
\put(-26,19){\makebox(0,0){$\odot$}} \put(-29,16){\line(3,0){38}}
\put(-23,20){\line(0,-2){30}}
\put(-28,19){\makebox(17,0){$1$}}
\put(-20,19){\makebox(17,0){$a$}}
\put(-11,19){\makebox(17,0){$b$}} \put(-3,19){\makebox(17,0){$0$}}
\put(-24,13){\makebox(-5,0){$1$}} \put(-24,6){\makebox(-5,0){$a$}}
\put(-24,-1){\makebox(-5,0){$b$}}
\put(-24,-8){\makebox(-5,0){$0$}}
\put(-17,13){\makebox(-5,0){$1$}} \put(-17,6){\makebox(-5,0){$a$}}
\put(-17,-1){\makebox(-5,0){$b$}}
\put(-17,-8){\makebox(-5,0){$0$}}
\put(-9,13){\makebox(-5,0){$a$}} \put(-9,6){\makebox(-5,0){$a$}}
\put(-9,-1){\makebox(-5,0){$0$}} \put(-9,-8){\makebox(-5,0){$0$}}
\put(0,13){\makebox(-5,0){$b$}} \put(0,6){\makebox(-5,0){$0$}}
\put(0,-1){\makebox(-5,0){$0$}} \put(0,-8){\makebox(-5,0){$0$}}
\put(8,13){\makebox(-5,0){$0$}} \put(8,6){\makebox(-5,0){$0$}}
\put(8,-1){\makebox(-5,0){$0$}} \put(8,-8){\makebox(-5,0){$0$}}
\put(21,19){\makebox(0,0){$\rightarrow$}}
\put(19,16){\line(3,0){38}} \put(25,20){\line(0,-2){30}}
\put(21,19){\makebox(17,0){$1$}} \put(28,19){\makebox(17,0){$a$}}
\put(36,19){\makebox(17,0){$b$}} \put(44,19){\makebox(17,0){$0$}}
\put(24,13){\makebox(-5,0){$1$}} \put(24,6){\makebox(-5,0){$a$}}
\put(24,-1){\makebox(-5,0){$b$}} \put(24,-8){\makebox(-5,0){$0$}}
\put(31,13){\makebox(-5,0){$1$}} \put(31,6){\makebox(-5,0){$1$}}
\put(31,-1){\makebox(-5,0){$1$}} \put(31,-8){\makebox(-5,0){$1$}}
\put(39,13){\makebox(-5,0){$a$}} \put(39,6){\makebox(-5,0){$1$}}
\put(39,-1){\makebox(-5,0){$1$}} \put(39,-8){\makebox(-5,0){$1$}}
\put(47,13){\makebox(-5,0){$b$}} \put(47,6){\makebox(-5,0){$b$}}
\put(47,-1){\makebox(-5,0){$1$}} \put(47,-8){\makebox(-5,0){$1$}}
\put(55,13){\makebox(-5,0){$0$}} \put(55,6){\makebox(-5,0){$b$}}
\put(55,-1){\makebox(-5,0){$a$}} \put(55,-8){\makebox(-5,0){$1$}}
\put(72,20){\line(0,-2){30}} \put(72,20){\circle*{1.5}}
\put(72,10){\circle*{1.5}} \put(72,0){\circle*{1.5}}
\put(72,-10){\circle*{1.5}}
\put(78,20){\makebox(-5,0){$1$}} \put(78,10){\makebox(-5,0){$a$}}
\put(83,-0){\makebox(-5,0){$b= \neg a$}}
\put(78,-10){\makebox(-5,0){$0$}}
\end{picture}
\end{center}
\vspace{1cm}
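From the tables one verifies directly (an illustrative check) that $I_4$ is involutive and that $a$ is its principal unity:

```latex
% Reading off the operation tables of I_4 = {0 < b < a < 1}:
$$
\neg a = a\rightarrow 0 = b, \qquad \neg b = b\rightarrow 0 = a,
  \quad\mbox{so}\quad \neg\neg a = a,\ \neg\neg b = b;
$$
$$
a \odot a = a
  \quad\mbox{($a$ is idempotent and not nilpotent)}, \qquad
b \odot b = 0
  \quad\mbox{($b$ is nilpotent)};
$$
$$
\mbox{hence}\quad Rad(I_4) = [a) = \{a, 1\}.
$$
```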
\begin{theo}\label{I4}
Let ${\cal A}$ be a subvariety of ${\cal IMTL}$. If $A$ is a
non-semisimple absolute retract in ${\cal A}$, then $A$ has a
principal unity $\epsilon$ and $\{0, \neg \epsilon,
\epsilon,1\}$ is a subalgebra of $A$ which is isomorphic to $I_4$.
\end{theo}
\begin{proof}
Follows from Theorem \ref{MINUNITY}. \hfill$\Box$
\end{proof}
\begin{definition}
{\rm Let ${\cal A}$ be a subvariety of ${\cal IMTL}$. An algebra
$T$ is called a {\it $test_I$-algebra} iff it has a subalgebra
$\{0,\neg \epsilon, \epsilon, 1\}$ isomorphic to $I_4$ and there
exists $t \in Rad(T)$ such that $t < \epsilon$}.
\end{definition}
\begin{theo}\label{TEST2}
Let ${\cal A}$ be a subvariety of ${\cal IMTL}$. If ${\cal A}$
has a nontrivial injective and contains a $test_I$-algebra, then
injectives are semisimple.
\end{theo}
\begin{proof}
Let $T$ be a $test_I$-algebra and $t\in Rad(T)$ such that $t <
\epsilon$. We can consider a subdirect embedding $f:T \rightarrow
\prod_{j\in J} H_j$ such that each $H_j$ is linearly ordered. Let
$x_j = \pi_j f(x)$ for each $x\in T$, where $\pi_j$ is the {\it
j}{\rm th}-projection. Since $t < \epsilon$, there exists $s\in J$
such that $\neg \epsilon_s < \neg t_s < t_s < \epsilon_s$, and by
Corollary \ref{UNMTL}, $t_s$ and $\epsilon_s$ are unities in the chain
$H_s$ with $\epsilon_s$ idempotent. Note that $H_s$ is also a
$test_I$-algebra. To see that $\epsilon_s \rightarrow t_s \leq
\epsilon$, observe first that $0 < \epsilon_s \odot \neg t_s$
since, if $\epsilon_s \odot \neg t_s = 0$ then $\epsilon_s \leq
\neg \neg t_s = t_s$ which is a contradiction. Consequently,
$\neg \epsilon_s \leq \epsilon_s \odot \neg t_s$ since, if $
\epsilon_s \odot \neg t_s \leq \neg \epsilon_s$ then $\epsilon_s
\odot \neg t_s = (\epsilon_s)^2 \odot \neg t_s \leq \neg
\epsilon \odot \epsilon = 0$. Thus we can conclude that
$\epsilon_s \rightarrow t_s = \neg(\epsilon_s \odot \neg t_s)
\leq \neg \neg \epsilon_s = \epsilon_s$. Suppose that there
exists a non-semisimple injective $A$ in ${\cal A}$. Then by
Theorem \ref{I4}, let $\alpha: I_4 \rightarrow A$ be a
monomorphism such that $\alpha(a)$ is the principal unity in $A$.
Let $i:I_4\rightarrow H_s$ be the monomorphism such that $i(a)=
\epsilon_s$. Since $A$ is injective, there exists a homomorphism
$\varphi:H_s\rightarrow A$ such that the following diagram
commutes:
\begin{center}
\unitlength=1mm
\begin{picture}(20,20)(0,0)
\put(8,16){\vector(3,0){5}} \put(2,10){\vector(0,-2){5}}
\put(8,2){\vector(1,1){7}}
\put(2,10){\makebox(13,0){$\equiv$}}
\put(2,16){\makebox(0,0){$I_4$}} \put(20,16){\makebox(0,0){$A$}}
\put(2,0){\makebox(0,0){$H_s$}}
\put(2,20){\makebox(17,0){$\alpha$}}
\put(2,8){\makebox(-6,0){$i$}}
\put(16,0){\makebox(-4,3){$\varphi$}}
\end{picture}
\end{center}
\noindent Since $i(a) = \epsilon_s$, commutativity gives
$\varphi(\epsilon_s) = \alpha(a)$. Moreover $\varphi(t_s)\in
Rad(A)$ and $\varphi(t_s) \leq \varphi(\epsilon_s) = \alpha(a)$,
so the minimality of $\alpha(a)$ in $Rad(A)$ yields $\varphi(t_s)
= \alpha(a)$. Thus $\varphi(\epsilon_s \rightarrow t_s) = 1$,
which is a contradiction since $\varphi(\epsilon_s
\rightarrow t_s) \leq \varphi(\epsilon_s) = \alpha(a) < 1$. Hence
${\cal A}$ has only semisimple injectives. \hfill$\Box$
\end{proof}
\begin{prop}\label{NOINJIMTL}
${\cal IMTL}$ has only trivial injectives.
\end{prop}
\begin{proof}
Suppose that there exist nontrivial injectives in ${\cal IMTL}$.
By Theorem \ref{Injective simple} there is a maximum simple
algebra $I$ in ${\cal IMTL}$. We consider the six-element
IMTL-chain $I_6$ defined as follows:
\begin{center}
\unitlength=1mm
\begin{picture}(60,20)(0,0)
\put(-29,19){\makebox(0,0){$\odot$}} \put(-32,16){\line(3,0){53}}
\put(-26,20){\line(0,-2){46}}
\put(-31,19){\makebox(17,0){$1$}}
\put(-23,19){\makebox(17,0){$a_1$}}
\put(-14,19){\makebox(17,0){$t$}}
\put(-6,19){\makebox(17,0){$a_2$}}
\put(2,19){\makebox(17,0){$a_3$}} \put(10,19){\makebox(17,0){$0$}}
\put(-27,13){\makebox(-5,0){$1$}}
\put(-27,6){\makebox(-5,0){$a_1$}}
\put(-27,-1){\makebox(-5,0){$t$}}
\put(-27,-8){\makebox(-5,0){$a_2$}}
\put(-27,-15){\makebox(-5,0){$a_3$}}
\put(-27,-22){\makebox(-5,0){$0$}}
\put(-20,13){\makebox(-5,0){$1$}}
\put(-20,6){\makebox(-5,0){$a_1$}}
\put(-20,-1){\makebox(-5,0){$t$}}
\put(-20,-8){\makebox(-5,0){$a_2$}}
\put(-20,-15){\makebox(-5,0){$a_3$}}
\put(-20,-22){\makebox(-5,0){$0$}}
\put(-12,13){\makebox(-5,0){$a_1$}}
\put(-12,6){\makebox(-5,0){$a_2$}}
\put(-12,-1){\makebox(-5,0){$a_3$}}
\put(-12,-8){\makebox(-5,0){$a_3$}}
\put(-12,-15){\makebox(-5,0){$0$}}
\put(-12,-22){\makebox(-5,0){$0$}}
\put(-3,13){\makebox(-5,0){$t$}} \put(-3,6){\makebox(-5,0){$a_3$}}
\put(-3,-1){\makebox(-5,0){$a_3$}}
\put(-3,-8){\makebox(-5,0){$0$}} \put(-3,-15){\makebox(-5,0){$0$}}
\put(-3,-22){\makebox(-5,0){$0$}}
\put(5,13){\makebox(-5,0){$a_2$}} \put(5,6){\makebox(-5,0){$a_3$}}
\put(5,-1){\makebox(-5,0){$0$}} \put(5,-8){\makebox(-5,0){$0$}}
\put(5,-15){\makebox(-5,0){$0$}} \put(5,-22){\makebox(-5,0){$0$}}
\put(13,13){\makebox(-5,0){$a_3$}} \put(13,6){\makebox(-5,0){$0$}}
\put(13,-1){\makebox(-5,0){$0$}} \put(13,-8){\makebox(-5,0){$0$}}
\put(13,-15){\makebox(-5,0){$0$}}
\put(13,-22){\makebox(-5,0){$0$}}
\put(21,13){\makebox(-5,0){$0$}} \put(21,6){\makebox(-5,0){$0$}}
\put(21,-1){\makebox(-5,0){$0$}} \put(21,-8){\makebox(-5,0){$0$}}
\put(21,-15){\makebox(-5,0){$0$}}
\put(21,-22){\makebox(-5,0){$0$}}
\put(30,19){\makebox(0,0){$\rightarrow$}}
\put(28,16){\line(3,0){53}} \put(34,20){\line(0,-2){46}}
\put(29,19){\makebox(17,0){$1$}}
\put(38,19){\makebox(17,0){$a_1$}}
\put(46,19){\makebox(17,0){$t$}}
\put(54,19){\makebox(17,0){$a_2$}}
\put(62,19){\makebox(17,0){$a_3$}}
\put(70,19){\makebox(17,0){$0$}}
\put(32,13){\makebox(-5,0){$1$}} \put(32,6){\makebox(-5,0){$a_1$}}
\put(32,-1){\makebox(-5,0){$t$}}
\put(32,-8){\makebox(-5,0){$a_2$}}
\put(32,-15){\makebox(-5,0){$a_3$}}
\put(32,-22){\makebox(-5,0){$0$}}
\put(40,13){\makebox(-5,0){$1$}} \put(40,6){\makebox(-5,0){$1$}}
\put(40,-1){\makebox(-5,0){$1$}} \put(40,-8){\makebox(-5,0){$1$}}
\put(40,-15){\makebox(-5,0){$1$}}
\put(40,-22){\makebox(-5,0){$1$}}
\put(49,13){\makebox(-5,0){$a_1$}} \put(49,6){\makebox(-5,0){$1$}}
\put(49,-1){\makebox(-5,0){$1$}} \put(49,-8){\makebox(-5,0){$1$}}
\put(49,-15){\makebox(-5,0){$1$}}
\put(49,-22){\makebox(-5,0){$1$}}
\put(57,13){\makebox(-5,0){$t$}} \put(57,6){\makebox(-5,0){$a_1$}}
\put(57,-1){\makebox(-5,0){$1$}} \put(57,-8){\makebox(-5,0){$1$}}
\put(57,-15){\makebox(-5,0){$1$}}
\put(57,-22){\makebox(-5,0){$1$}}
\put(65,13){\makebox(-5,0){$a_2$}}
\put(65,6){\makebox(-5,0){$a_1$}}
\put(65,-1){\makebox(-5,0){$a_1$}}
\put(65,-8){\makebox(-5,0){$1$}} \put(65,-15){\makebox(-5,0){$1$}}
\put(65,-22){\makebox(-5,0){$1$}}
\put(73,13){\makebox(-5,0){$a_3$}} \put(73,6){\makebox(-5,0){$t$}}
\put(73,-1){\makebox(-5,0){$a_1$}}
\put(73,-8){\makebox(-5,0){$a_1$}}
\put(73,-15){\makebox(-5,0){$1$}}
\put(73,-22){\makebox(-5,0){$1$}}
\put(81,13){\makebox(-5,0){$0$}} \put(81,6){\makebox(-5,0){$a_3$}}
\put(81,-1){\makebox(-5,0){$a_2$}}
\put(81,-8){\makebox(-5,0){$t$}}
\put(81,-15){\makebox(-5,0){$a_1$}}
\put(81,-22){\makebox(-5,0){$1$}}
\put(88,20){\line(0,-2){50}} \put(88,20){\circle*{1.5}}
\put(88,10){\circle*{1.5}} \put(88,0){\circle*{1.5}}
\put(88,-10){\circle*{1.5}} \put(88,-20){\circle*{1.5}}
\put(88,-30){\circle*{1.5}}
\put(94,20){\makebox(-5,0){$1$}}
\put(94,10){\makebox(-5,0){$a_1$}}
\put(94,-0){\makebox(-5,0){$t$}}
\put(94,-10){\makebox(-5,0){$a_2$}}
\put(94,-20){\makebox(-5,0){$a_3$}}
\put(94,-30){\makebox(-5,0){$0$}}
\end{picture}
\end{center}
\vspace{3cm}
\noindent Since $I$ is the maximum simple algebra, we can consider
$I_6$ and $R_{[0,1]}$ as subalgebras of $I$. In view of this, and
using the nilpotence order, we have that $1/2 < t < 3/4$ since $I$ is
a chain. Therefore we can consider $u = \bigvee_{R_{[0,1]}} \{ x\in
R_{[0,1]}:x < t\}$ and $v = \bigwedge _{R_{[0,1]}}\{x\in
R_{[0,1]}:x > t\}$ and it is clear that $u,v \in R_{[0,1]}$ since
$R_{[0,1]}$ is a complete algebra. Thus $u < t < v$. This
contradicts the fact that the order of $R_{[0,1]}$ is dense.
Consequently ${\cal IMTL}$ has only trivial injectives. \hfill$\Box$
\end{proof}
\section {Injectives in NM-algebras}
\begin{definition}
{\rm A nilpotent minimum algebra (or {\it NM-algebra}) {\rm
\cite{GE1}} is an IMTL-algebra satisfying the equation {\rm
($W$)}}.
\end{definition}
\noindent The variety of NM-algebras is denoted by ${\cal NM}$. As
an example we consider $N_{[0,1]}= \langle [0,1], \odot,
\rightarrow, \land, \lor, 0, 1 \rangle$ such that $[0,1]$ is the
real unit segment, $\land$, $\lor$ are the natural meet and join
on $[0,1]$ and $\odot$ and $\rightarrow$ are defined as follows:
$$
x\odot y = \cases {x\land y, & if $1 < x + y $\cr 0, & otherwise,
\cr}
$$
$$
x\rightarrow y = \cases {1, & if $x \leq y$ \cr \max(y, 1-x), &
otherwise. \cr }
$$
\vspace{0.5cm}
\noindent Note that $\{0, \frac{1}{2}, 1\}$ is the universe of a
subalgebra of $N_{[0,1]}$, that we denote by $\L_3$. The
subvariety of ${\cal NM}$ generated by $\L_3$ coincides with the
variety ${\cal L}_3$ of three-valued \L ukasiewicz algebras (see
\cite{Mo,Cig}).
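As a quick illustration (our addition, not part of the source text), the restrictions of $\odot$ and $\rightarrow$ to $\{0, \frac{1}{2}, 1\}$ can be tabulated directly, which makes the closure claim explicit:

```latex
% Operation tables of N_[0,1] restricted to {0, 1/2, 1}: every entry
% stays inside the set, so it is the universe of a subalgebra (L_3).
\[
\begin{array}{c|ccc}
\odot & 0 & \tfrac{1}{2} & 1 \\ \hline
0 & 0 & 0 & 0 \\
\tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\
1 & 0 & \tfrac{1}{2} & 1
\end{array}
\qquad
\begin{array}{c|ccc}
\rightarrow & 0 & \tfrac{1}{2} & 1 \\ \hline
0 & 1 & 1 & 1 \\
\tfrac{1}{2} & \tfrac{1}{2} & 1 & 1 \\
1 & 0 & \tfrac{1}{2} & 1
\end{array}
\]
```

In particular, reading off the tables gives $\neg\neg x = x$ on this set, consistent with $\L_3$ being an IMTL-chain.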
\begin{prop}\label{SIMPLENM}
$\L_3$ is the maximum simple algebra in ${\cal NM}$, and it is
self-injective.
\end{prop}
\begin{proof}
Let $I$ be a simple algebra such that $Card(I)>2$. By Theorem
\ref{SIMPLEWNM} $I$ has a coatom $u$ satisfying $\neg x = u$ for
each $0 < x <1$. Thus $x = \neg \neg x = \neg u = u$ for each $0
< x <1$. Consequently $Card(I)= 3$ and $I = \L_3$. \hfill$\Box$
\end{proof}
\begin{coro}\label{LUK}
${\cal S}em({\cal NM}) = {\cal L}_3$. \hfill$\Box$
\end{coro}
\begin{prop}
Injectives in ${\cal NM}$ coincide with complete Post algebras of
order $3$.
\end{prop}
\begin{proof}
By Proposition \ref{SIMPLEWNM}, Theorem \ref{Simple injective}
and Theorem \ref{TEST2} injectives in ${\cal NM}$ are semisimple
since $N_{[0,1]}$ is an algebra $Test_I$. Thus by Proposition
\ref{LUK} and {\rm \cite{Mo}, \cite[Theorem 3.7]{Cig}}, complete
Post algebras of order $3$ are the injectives in ${\cal NM}$. \hfill$\Box$
\end{proof}
\section {Injective bounded hoops}
\begin{definition}
{\rm A {\it hoop} {\rm \cite{BlokFer}} is an algebra $ \langle A,
\odot, \rightarrow, 1 \rangle$ of type $ \langle 2, 2, 0 \rangle$
satisfying the following axioms:
\begin{enumerate}
\item
$\langle A,\odot,1 \rangle$ is an abelian monoid,
\item
$x\rightarrow x = 1$,
\item
$(x\rightarrow y)\odot x = (y\rightarrow x)\odot y$,
\item
$x\rightarrow (y\rightarrow z) = (x\odot y)\rightarrow z$.
\end{enumerate}
}
\end{definition}
\noindent The variety of hoops is denoted by ${\cal HO}$. Every hoop
is a meet semilattice, where the meet operation is given by
$x\land y = x\odot (x\rightarrow y)$. Let $A$ be a hoop. If $A$
has a smallest element $0$, we can define a unary operation $\neg$
by $\neg x = x\rightarrow 0$. A subset $F$ of $A$ is a {\bf
filter} iff $1\in F$ and $F$ is closed under $\odot$. As in
residuated lattices, filters and congruences can be identified
{\rm \cite {BlokFer}}.
\begin{definition}
{\rm A {\it Wajsberg hoop} {\rm \cite {BlokFer}} is a hoop that
satisfies the following equation $$(x\rightarrow y) \rightarrow y
= (y\rightarrow x)\rightarrow x. \leqno(T)$$}
\end{definition}
\noindent Each Wajsberg hoop is a lattice, in which the join
operation is given by $x\lor y = (x\rightarrow y) \rightarrow y$.
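For a concrete instance (our illustration, not part of the source text), consider the standard Wajsberg hoop on $[0,1]$ with $x \rightarrow y = \min(1,\,1-x+y)$; there the join formula specializes to the usual maximum:

```latex
% In the standard example x -> y = min(1, 1 - x + y) on [0,1]:
\[
(x\rightarrow y)\rightarrow y =
\cases{ 1\rightarrow y = y, & if $x \leq y$, \cr
        (1-x+y)\rightarrow y = x, & if $x > y$, \cr}
\]
so $x\lor y = (x\rightarrow y)\rightarrow y = \max(x,y)$, and the
symmetry of $\max$ in its arguments is exactly equation {\rm (T)}.
```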
\begin{definition}
{\rm A {\it bounded hoop} is an algebra $ \langle A, \odot,
\rightarrow, 0, 1 \rangle$ of type $ \langle 2, 2, 0, 0 \rangle$
such that:
\begin{enumerate}
\item
$ \langle A, \odot, \rightarrow, 1 \rangle$ is a hoop
\item
$0\rightarrow x = 1.$
\end{enumerate}
}
\end{definition}
The variety of bounded hoops is denoted by ${\cal BH}_0$. Observe that, since $0$ is in the clone of hoop operations, we require that $f(0)=0$ for each morphism $f$. In the same way as in the case of residuated lattices, for each bounded hoop $A$ we can consider the set $Ds(A)$ of dense elements of $A$, which is an implicative filter of $A$. \\
\begin{prop}\label{SIMPLEHOOP}
A bounded simple hoop is a simple MV-algebra.
\end{prop}
\begin{proof}
Let $I$ be a simple hoop. Then by {\rm \cite[Corollary
2.3]{BlokFer}} it is a totally ordered Wajsberg hoop. If $0$ is
the smallest element in $I$ then by the equation (T), $ \neg \neg
x = (x\rightarrow 0)\rightarrow 0 = (0\rightarrow x) \rightarrow
x = 1\rightarrow x = x$. Hence it is an MV-algebra. Since the
MV-congruences are in correspondence with implicative filters,
$I$ is a simple MV-algebra. \hfill$\Box$
\end{proof}
\begin{table}[h] \begin{center}{\scriptsize
\begin{tabular}{|l|l|l|}\hline
\multicolumn{1}{|c|}{Variety} & \multicolumn{1}{|c|}{Equations} &
\multicolumn{1}{|c|}{Injectives} \\
\hline\hline ${\cal RL}$ &
&
Trivial \\ \hline
${\cal DRL}$ & ${\cal RL} + x\land (y\lor z) = (x\land y) \lor
(x\land z)$ & Trivial \\ \hline
${\cal GM}$ & ${\cal RL} + \neg \neg x = x$ & Trivial \\ \hline
${\cal DGM}$ & ${\cal GM} + x\land (y\lor z) = (x\land y) \lor
(x\land z)$ & Trivial \\ \hline
${\cal MTL}$ & ${\cal RL} + (x\rightarrow y) \lor (y\rightarrow
x) = 1$ & Trivial \\ \hline
${\cal WNM}$ & ${\cal MTL} + \neg(x\odot y)\lor ((x\land y)
\rightarrow (x\odot y ))=1 $ & Trivial \\ \hline
${\cal IMTL}$ & ${\cal MTL} + \neg \neg x = x$ & Trivial \\
\hline
${\cal BL}$ & ${\cal MTL} + x\land y = x \odot (x\rightarrow y)$ &
Retracts of powers of $R_{[0,1]}$ \\ \hline
${\cal MV}$ & ${\cal BL} + \neg \neg x = x$ & Retracts of powers
of $R_{[0,1]}$ \\ \hline
${\cal BH}_0$ & $\lor$-free subreduct of ${\cal RL} + x\land y =
x \odot (x\rightarrow y)$ & Retracts of powers of $R_{[0,1]}$ \\
\hline
${\cal SRL}$ & ${\cal RL} + x\land \neg x = 0$ & Complete boolean
algebras \\ \hline
${\cal SMTL}$ & ${\cal MTL} + x\land \neg x = 0$ & Complete
boolean algebras \\ \hline
$\Pi{\cal SMTL}$ & ${\cal SMTL} + \neg \neg z \odot ((x\odot
z)\rightarrow (y\odot z))\leq (x\rightarrow y )$ & Complete
boolean algebras \\ \hline
${\cal PL}$ & $\Pi{\cal SMTL} + x\land y = x \odot (x\rightarrow
y)$ & Complete boolean algebras \\ \hline
${\cal HL}$ & ${\cal BL} + x\land y = x \odot y$ & Complete
boolean algebras \\ \hline
${\cal NM}$ & ${\cal WNM} + \neg \neg x = x$ & Complete Post
algebras of order 3 \\ \hline
\end{tabular}}
\caption {Injectives in Varieties of Residuated Algebras}
\end{center}
\end{table}
\begin{prop}\label{SIMPLEHOM}
Let $I,J$ be simple hoops with smallest elements $0_I, 0_J$
respectively. If $\varphi: I \rightarrow J$ is a hoop
homomorphism then $\varphi$ is also an MV-homomorphism, i.e.,
$\varphi(0_I)=0_J$.
\end{prop}
\begin{proof}
Suppose that $\varphi(0_I) = a$. Since $J$ is simple, there
exists a natural number $n$ such that $a^n = 0_J$. Thus we have,
$\varphi(0_I) = \varphi(0_I^n) = (\varphi(0_I))^n = a^n = 0_J$.
\hfill$\Box$
\end{proof}
\noindent The following two results are obtained in the same way
as Theorems \ref{H3} and \ref{TEST1} respectively.
\begin{theo}\label{HH3}
Let ${\cal A}$ be a subvariety of ${\cal BH}_0$. If $A$ is a
non-semisimple absolute retract in ${\cal A}$, then $Ds(A)$ has a
least element $\epsilon$, i.e., $Ds(A) = [\epsilon)$, and
$\{0,\epsilon,1\}$ is a subalgebra of $A$ isomorphic to the three
element Heyting algebra $H_3$. \hfill$\Box$
\end{theo}
\begin{theo}\label{TEST3}
Let ${\cal A}$ be a subvariety of ${\cal BH}_0$. If ${\cal A}$
has nontrivial injectives and contains the Heyting algebra
$H_4$, then injectives are semisimple. \hfill$\Box$
\end{theo}
\begin{coro}
Injectives in ${\cal BH}_0$ are exactly the retracts of powers of
the MV-algebra $R_{[0,1]}$.
\end{coro}
\begin{proof}
By Proposition \ref{SIMPLEHOOP}, semisimple bounded hoops are
MV-algebras. Therefore $R_{[0,1]}$ is the maximum simple algebra
and it is self-injective by Proposition \ref{SIMPLEHOM}. Thus by
Theorem \ref{Simple injective} retracts of powers of the
MV-algebra $R_{[0,1]}$ are injectives in ${\cal BH}_0$. By
Theorem \ref{TEST3} they are the only injectives, because $H_4$
lies in ${\cal BH}_0$. \hfill$\Box$
\end{proof}
City Room | Philip Reed, Ex-Councilman, Is Dead at 59
By Sewell Chan and Jonathan P. Hicks
November 7, 2008 12:55 pm
Councilman Philip Reed of Manhattan looked on as Mayor Michael R. Bloomberg signed a bill Mr. Reed had sponsored, in 2002. (Photo: Don Hogan Charles/The New York Times)
Updated, 6:47 p.m. | Philip Reed, a former elevator salesman who devoted himself to health issues as a member of the New York City Council, where he was a pioneer as a black, openly gay and H.I.V.-positive lawmaker representing a largely Latino district, died on Thursday at St. Luke's-Roosevelt Hospital Center. He was 59.
The death was confirmed by Geoff Eaton, who was the councilman's chief of staff and is now an aide to Representative Charles B. Rangel. The cause was complications of pneumonia resulting from leukemia, Mr. Eaton said.
Elected in 1997, Mr. Reed represented East Harlem and Manhattan Valley, and parts of the Upper West Side and the South Bronx. He left office in 2005, unable to seek re-election to a third term because of term limits. He was a Democrat, and the first openly gay black member of the City Council.
Born on Feb. 21, 1949, Mr. Reed, a New York native, was the son of a black father and a white mother. He and a twin sister were "raised by their mother and stepfather, both white, in an upper-middle-class Manhattan world of civil rights activism, prep schools, the Vineyard," according to a 1998 profile in The Times.
Mr. Reed dropped out of Ohio Wesleyan University and received conscientious objector status during the Vietnam War. He was involved in the original Stonewall riots, and then spent 10 years in San Francisco, as a salesman for the Otis Elevator Company and as a gay political activist.
He returned to New York in the late 1970s, and became politically active through a block association, leaving Otis to run a service program in Brooklyn for people with H.I.V. He lived for many years at 425 Central Park West, near 103rd Street. He became a Democratic district leader in the late 1980s.
Mr. Reed became H.I.V.-positive in 1981, when the virus that causes AIDS was first detected, and fought health problems for years; in 1997, he completed chemotherapy for multiple myeloma, a bone marrow cancer.
Mr. Reed had fairly contentious races in both his campaigns for the Council, running unsuccessfully for the State Senate and for the Council before winning a council seat in 1997, in a district that had previously been represented by Adam Clayton Powell IV, who ran for Manhattan borough president. The district has a largely Hispanic population, and Mr. Reed first won by prevailing in a field with four other candidates, all of them Hispanic.
Four years later, in 2001, he prevailed again, but after an even more contentious campaign in the Democratic primary. In that race he faced Felipe Luciano, a television reporter and anchor and a co-founder of the Young Lords.
Mr. Reed was a champion of asthma prevention legislation, frequently pointing out that his district had some of the highest rates of the disease in the state. He was also a passionate advocate for measures to have the city stimulate the development of affordable housing for moderate- and low-income New Yorkers, and a staunch opponent of a plan by the administration of Mayor Rudolph W. Giuliani to relocate the Museum of the City of New York from East Harlem to the old Tweed Courthouse, near City Hall. That relocation was later reversed by Mayor Michael R. Bloomberg. He was also an outspoken critic of the random searches of black men known as racial profiling.
Mr. Reed is survived by his twin sister, Elinor Reed of Manhattan.
Mr. Bloomberg said in a statement:
Like many New Yorkers, I lost a friend yesterday when former New York City Councilman Phil Reed died, but the entire City lost a passionate advocate for important public causes. From the communities of Harlem and East Harlem, from the leadership of the Hetrick Martin Institute, and from the floor of the Council, Phil had the savvy and smarts to get results on the issues he fought for. While Phil and I didn't agree on every issue, we worked together to craft bold AIDS policies, fight childhood asthma, and – once – put on a pretty good Inner Circle Show. If it weren't for Phil Reed, we would never have been able to move the Department of Education next to City Hall, which has been a key part of our ability to introduce accountability into our schools and turn around a system that had failed our children for generations. Phil's legacy will live on in the results of his work, and my thoughts and prayers are with his family and loved ones.
The city comptroller, William C. Thompson Jr., said in a statement:
I am saddened to learn of the death of my former colleague in government Phil Reed. Phil was a man of exceptional conviction, and I was always grateful for the opportunity to work with him to improve the lives of so many New Yorkers. Whether it was to address steep childhood asthma rates or create more affordable housing, Phil brought an unmatched passion to any task at hand. He was a true New York City leader.
The City Council speaker, Christine C. Quinn, said in a statement:
The City Council has learned with profound sorrow of the passing of former Council Member Philip Reed. Council Member Reed was a dear friend to countless New Yorkers and a passionately dedicated public servant for many years.
An ardent and tireless pioneer for LGBT rights, Council Member Reed always proudly delivered what he promised to the constituents of his Upper Manhattan community. Serving the people of East Harlem and Manhattan Valley for eight years, he made an extraordinary difference in the quality of life for all New Yorkers, especially with his work to reduce asthma, increase affordable housing, improve neighborhood parks, and keep local museums and cultural institutions in Harlem.
Through his passion, service and example, Council Member Reed will always be remembered for the indelible mark he left on this body and on our City.
Alan Van Capelle, executive director of the Empire State Pride Agenda, an advocacy group for the gay, lesbian, bisexual and transgender population in New York, said in a statement:
We at the Pride Agenda are very saddened to learn about the passing of our friend, fellow activist and trailblazer Phil Reed. He was a real New Yorker in every way. As a H.I.V.-positive man of color representing East Harlem, Phil Reed broke down barriers. He created dialogues with those who came from different backgrounds and held different perspectives and he highlighted issues few people were talking about. The Empire State Pride Agenda was happy to work alongside him when he was a member of the City Council, and I am grateful for the advice and counsel he has given me after I became the executive director of the Pride Agenda. We will miss him.
David L. NYC November 7, 2008 · 1:37 pm
Thank you, Philip, for all the work that you accomplished on behalf of the citizens of New York City. You'll be missed, but your inspiring example will live on.
Cynthia Doty November 7, 2008 · 1:41 pm
Phil was a true friend and inspiration.
hell's kitchen guy November 7, 2008 · 1:49 pm
He could be loud and abrasive but he was always on the side of the angels.
NYC Architect November 7, 2008 · 1:59 pm
A great representative of the community, so sorry to see him go- but always a great example to the rest of us.
Richard D November 7, 2008 · 2:13 pm
Sad news indeed. Also unfortunate is the egregious error in this article which dates Phil's HIV-positive diagnosis to 1981, three years before the discovery of the virus. Apparently as far as the Times and AIDS are concerned, plus ca change…
Sewell Chan, City Room Bureau Chief November 7, 2008 · 2:18 pm
From the City Room
Richard D —
Thanks for your comment. Our reporting is accurate: Mr. Reed began experiencing the symptoms later associated with H.I.V. in 1981. But you are correct that a "diagnosis" was not possible until later, as scientists began to understand the virus, so we removed the word "diagnosed."
The C.D.C.'s Web site on the history of H.I.V. states, "HIV was first identified in the United States in 1981 after a number of gay men started getting sick with a rare type of cancer. It took several years for scientists to develop a test for the virus, to understand how HIV was transmitted between humans, and to determine what people could do to protect themselves."
Martha, NYC November 7, 2008 · 2:25 pm
I'm sorry to hear about Mr. Reed's death. I didn't realize that he had leukemia. What a blow this must be to his friends and family. Mr. Reed was my councilman, and he represented the neighborhood well. What a loss this is for all New Yorkers.
Robby Davis November 7, 2008 · 2:43 pm
Phil Reed was one of the first leaders in the Black community to boldly step up and address the reality of HIV/AIDS. He worked heroically to establish services and prevention messages that could reach his peers. He was a strong, articulate voice, from the beginning, in ACTUP and helped to found the Living with AIDS Fund in NYC. He loved and was inspired by Bayard Rustin and dedicated much of his adult life to working in the sacred tradition of the "Community Organizer." But like his hero he often toiled quietly and constantly in the background, until more would listen. He brought help to his community, grew to become a dynamic elected representative and won many personal battles with his health to assure that he made a difference in his service and his valiant life. May he find peace and no more pain.
—Robby Davis, Seattle
downtown gal November 7, 2008 · 2:51 pm
I did not know Philip well, and the last time I spoke with him was over the summer. He was enrolling into hospice services. His goal was to live long enough to vote for a black man for president. I am really happy he did.
Three Parks person November 7, 2008 · 3:35 pm
Phil was a good man who was trying. He was great against Giuliani, and spoke more than once with amazement at what that devious man put into place. It was difficult for him to get competent and similarly caring people around him. One climber went to the Congressman, and he was never for the people, others just did not have the drive Phil did. He could have done more if the team around him were really culled from the community, and not hacks handed to him by the machine.
Fernando Ferrer November 7, 2008 · 3:50 pm
I was just thinking about Phil on Tuesday. I'm glad he was around long enough to witness the election.
I was proud to know and work with Phil Reed during his service as a Member of the City Council. He was a first rate public servant — truthful (almost to a fault) and always clear about whom he represented.
I miss his voice.
Tom K. November 7, 2008 · 4:37 pm
I enjoyed volunteering for Phil on election night on what must have been his first, unsuccessful, council race, and working with him as part of the first Dinkins campaign's lesbian and gay committee. Mr. Ferrer's comments are well said. Like him, I will miss Phil's voice. Even more, I'll miss his laughter, which was frequent.
Matty Wilkinson November 7, 2008 · 4:52 pm
Phil was my cousin. He was a great friend and beloved family member. He was empathic, caring, and tough as nails. It is wonderful to read about Phil's work in NYC and beyond. I will always appreciate his passion, his effectiveness, and how respected he was by the constituents he served and people he worked with. I will miss him dearly.
Michael Morris November 7, 2008 · 5:00 pm
Your star shines ever so bright. Even in death it illuminates for others to know the road you traveled. I am one of the fortunate who was able to be within your beautiful light and I shall carry it forth from this day on.
To your family I wish the comfort of peace, for if they only remember your sheer joy of life it will allow them to live on in complete happiness.
You will be missed as you were treasured…..
Tony Glover November 7, 2008 · 5:00 pm
Phil was one of the most creative thinkers (and doers) I have ever met. I knew him first as an activist, and then as a councilman, when I moved to his district that encompassed both East Harlem and Manhattan Valley. Whenever I saw him on the street, he always made it a point to say hello and ask what he could do to better the neighborhood.
He displayed both a great sense of humor and a seriousness about his work, often combining the two to great effect.
I will miss him deeply and hope that his family and closest friends find solace in knowing how many lives he touched and how fondly he will be remembered. Phil was a trailblazer in many ways. May he rest in peace.
Bob Zuckerman November 7, 2008 · 5:03 pm
I worked as a consultant to Phil in early 1997, crunching numbers for him while helping to write his campaign plan to win his council race. Phil was brilliant and principled, and he had a tremendous wit which nicely complemented his very direct approach with people. Never one to mince words, you always knew where you stood with Phil.
I think what I admired most about Phil, however, was his bravery. To run for office as an openly gay, openly HIV positive African-American in a majority Latino district took a lot of guts. Phil prevailed, and served 8 great years in the Council, representing each and every one of his constituents no matter who they were with distinction.
I'm so happy that Phil got to see an African-American elected as President of the United States. That must have put a huge smile on his face. Just like the smile I have now thinking about what a great human being he was. Phil – you will be sorely missed.
Fred Baldassaro November 7, 2008 · 5:06 pm
Phil Reed was a smart, dedicated city servant. He also had a great wit that shined through at tense and tough times. Sorry to see him go.
mpirv@aol.com November 7, 2008 · 5:08 pm
I am deeply saddened to know of the death of Philip Reed, my dear friend and neighbor in Martha's Vineyard. He was a warm and deeply emotional person, and his laugh could wake the next town. I will miss his opinions on all things political and our discussions on love and life and the
need for one in order to appreciate the other. I am so glad we shared a big hug the last time he was here.
Patricia Rainford Irving
Noel Alicea November 7, 2008 · 5:39 pm
Phil was a wonderful person. I am one of the many who will miss his clear thinking, caring leadership, and righteous anger. He had a way of calling out injustice and cutting through bs that flustered some, but it sure made folks take notice.
At the same time, he was so down to earth that it was often easy to lose sight of the fact that he was a trailblazer: an openly gay, black, HIV-positive elected official, possibly the first in the US — or anywhere. Truly one of a kind, yet eager to connect people in celebration of their differences.
Phil was fully committed to organizing the overlapping communities to which he belonged, and to focusing their power and strength to create change.
I hope he was able to celebrate and rejoice in Obama's victory. We'll miss him.
Wendy Howell November 7, 2008 · 6:15 pm
In my days as a baby activist, Councilmember Reed was one of the first leaders to inspire me. I doubt he'd remember meeting me that first time – I was just one of many in the crowd at a GMHC training. But his passion and skill for creating positive change left an indelible mark on me that day, and I will never forget it.
Nor will I forget, in the many years since, as I have gotten more deeply involved in my community and my world, his continued persistence and eloquence on many of the issues I too care about. From affordable housing to LGBT rights to HIV/AIDS issues to standing up for his community, Councilmember Reed was a tireless and effective advocate.
Councilmember Reed, you will be missed, but never forgotten – your legacy will ever live on, and continue to shape New York for generations to come. You inspired countless others to stand up for what they believe in, to never stop fighting for what they think is right. That, perhaps, is the greatest gift you could have given your City and your world.
My prayers to your family and loved ones. America has lost a great man and great leader today.
manny onativia November 7, 2008 · 6:25 pm
Phil Reed is a friend who showed me the meaning of leadership and how to stand up and get involved.
manny manuel onativia
David November 7, 2008 · 6:33 pm
My condolences to his loved ones. He sounded like a great person.
frank November 7, 2008 · 7:10 pm
philly we hardly knew you….
Ben Stock of Brainpower November 7, 2008 · 7:23 pm
Phil ! we will miss you. So rare, a Statesman. Noble, compassionate, a warrior. Thank you for walking among us. peace, Ben Stock of Brainpower xoxoxoox
is the soundtrack album of the American film Melinda, directed by Vincente Minnelli. The album was released on under the Columbia Records label and contains songs performed by Barbra Streisand and Yves Montand, backed by a choir and live orchestration. No commercial single was issued from the soundtrack; however, the reprise version of the album's title track was released by Columbia as a promotional single on 7" vinyl. Produced exclusively by Wally Gold, the album's ten tracks were composed by Burton Lane, with lyrics by Alan Jay Lerner. A CD reissue of the album was released in 2008.
Critics highlighted the album's tracks and Streisand's vocal abilities. Commercially, it marked Streisand's worst entry on the Billboard 200, subsequently peaking at number in . However, gave Montand the only chart entry of his career in the United States.
Background and songs
Most of the album's songs come from the eponymous stage musical, of which Melinda is the adaptation. Two love songs that were not in the original musical were composed for Barbra Streisand. was released on , under Streisand's label, Columbia Records. The songs were recorded in early 1970 by Streisand and her co-star Yves Montand at the Samuel Goldwyn Studios in West Hollywood, California. The lyrics of the album's ten songs are by Alan Jay Lerner and the music is by Burton Lane. On the album's fourth track, Melinda, Luiz Bonfá and Maria Toledo are credited as additional lyricists. Streisand is credited as the sole singer on , , , , , and the reprise of , while Montand is credited on the standard version of the title song as well as on and . The album's second track is an orchestral version of performed by a choir. The actor Jack Nicholson was to sing in certain scenes of the film, but they were cut; those songs are not included on the soundtrack. The album was produced exclusively by Wally Gold and was arranged and conducted by Nelson Riddle.
Although none of the songs was chosen as a , the title track was distributed to radio stations as a promotional single by Columbia Records in . The 7" vinyl contains the reprise version of the song on both sides. The soundtrack was also released on 8-track cartridge with the same tracks in a different order. was released on CD on .
Reception
The soundtrack of received generally positive reviews from the press. The staff of Billboard magazine wrote that Streisand's performances on the title track, , and make the soundtrack worth purchasing. Billboard also predicted chart success for the soundtrack on the Billboard 200 because of the popularity of the associated film. The magazine added that Montand lends his unique charm to the title track and to Come Back to Me. After seeing the film, Vincent Canby of The New York Times considered the most important song in Melinda because of its worthy of the director's Yolanda and the Thief period; among the rest of the soundtrack's tracks, he described , , , and as . William Ruhlmann of AllMusic was more critical of the album, explaining: . Although he gave the album two stars out of five and praised Streisand's powerful voice, Ruhlmann criticized the decision to include most of Montand's contributions on the soundtrack: .
Upon its release, marked Streisand's worst entry on the Billboard 200, while the soundtrack gave Montand the first and only chart entry of his career in the United States. The album debuted at number on the chart in the week of , and peaked six weeks later at number 108, becoming Streisand's first album to miss the top 100. In total, the album spent 24 weeks on the Billboard 200. Later that year, Streisand's other soundtrack, , would fare even worse, peaking at number .
Track listing
Credits
Yves Montand – vocals
Barbra Streisand – vocals
John Arrias – CD restoration
Luiz Bonfá – lyrics
Wally Gold – production
Bernie Grundman – CD remastering
Burton Lane – music
Alan Jay Lerner – lyrics
Don Meehan – recording engineer
Nelson Riddle – musical arrangements, conductor
Maria Toledo – lyrics
Weekly charts
Appendices
References
Bibliography
Barbra Streisand soundtrack albums
Albums released in 1970
\section{Introduction}
\label{sec:intro}
Consider a single target acquisition over a search region of width $B$ and resolution up to width $\delta$. Mathematically, this is the problem of estimating a unit vector $\mathbf{W} \in \{0,1\}^{\frac{B}{\delta}}$ via a sequence of noisy linear measurements
\begin{equation}
\label{eq:linear1}
Y_n = \langle {\mathbf{S}_n},\mathbf{W} + \boldsymbol{\Xi}_n\rangle, \quad n = 1,2, \ldots, \tau,
\end{equation}
where a binary measurement vector $\mathbf{S}_n \in \{0,1\}^{\frac{B}{\delta}}$ denotes the locations inspected and the vector $\boldsymbol{\Xi}_n \in \mathbb{R}^{\frac{B}{\delta}}$ denotes the additive measurement noise per location. More generally, the observation $Y_n$ at time $n$ can be written as
\begin{equation}
\label{eq:linear2}
Y_n = \langle\mathbf{S}_n, \mathbf{W}\rangle + Z_n(\mathbf{S}_n),
\end{equation}
where $Z_n(\mathbf{S}_n)$ is a noise term whose statistics are a function of the measurement vector $\mathbf{S}_n$. The goal is to design the sequence of measurement vectors $\{\mathbf{S}_n\}_{n = 1}^{\tau}$, such that the target location $\textbf{W}$ is estimated with high reliability, while keeping the (expected) number of measurements $\tau$ as low as possible.
In this paper, we first consider the linear model~\eqref{eq:linear1} when the elements of $\boldsymbol{\Xi}_n$ are i.i.d.\ Gaussian with zero mean and variance $\delta \sigma^2$; this means that $Z_n(\mathbf{S}_n)$ in~\eqref{eq:linear2} is distributed as $\mathcal{N}(0, |\mathbf{S}_n| \delta \sigma^2)$. We show that the problem of searching for a target under measurement dependent Gaussian noise $Z_n(\mathbf{S}_n)$ is equivalent to channel coding over a binary additive white Gaussian noise (BAWGN) channel with state and feedback (see Section 4.6 of~\cite{Gallager}). This allows us not only to retrofit known channel coding schemes based on sorted Posterior Matching (sorted PM)~\cite{SungEnChiu} as adaptive search strategies, but also to obtain information theoretic converses that characterize fundamental limits on the target acquisition rate under both adaptive and non-adaptive strategies. As a corollary of the non-asymptotic analysis of our sorted-Posterior-Matching-based adaptive strategy and our converse for non-adaptive strategies, we obtain a lower bound on the adaptivity gain.
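To make the noise model concrete, the following sketch (our illustration; the function and variable names are ours, not the paper's) simulates one observation $Y_n = \langle \mathbf{S}_n, \mathbf{W}\rangle + Z_n(\mathbf{S}_n)$ with $Z_n(\mathbf{S}_n) \sim \mathcal{N}(0, |\mathbf{S}_n|\,\delta\sigma^2)$:

```python
import math
import random

def measure(target, query, delta, sigma, rng):
    """One noisy observation: <S_n, W> plus measurement-dependent noise.

    `query` is the set of inspected locations (the support of S_n); the
    noise variance |S_n| * delta * sigma^2 grows with the query size.
    """
    signal = 1.0 if target in query else 0.0       # <S_n, W> for one-hot W
    noise_std = math.sqrt(len(query) * delta * sigma**2)
    return signal + rng.gauss(0.0, noise_std)

rng = random.Random(0)
num_locations = 16                  # B / delta candidate locations
delta = 1.0 / num_locations         # resolution (here with B = 1)
target = 5                          # true location, unknown to the searcher
y = measure(target, set(range(8)), delta, sigma=1.0, rng=rng)
```

The trade-off this exposes is the essential one: widening the query makes it more likely to cover the target but also inflates the noise variance, which is why the problem behaves like coding over a channel whose state is set by the query size.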
\subsection{Our Contributions}
Our main results are inspired by the analogy between target acquisition under measurement dependent noise and channel coding with state and feedback. This connection was utilized in~\cite{DBLP:journals/corr/KaspiSJ16} under a Bernoulli noise model. In this paper, in Proposition~\ref{prop:connection}, we formalize the connection between our target acquisition problem with Gaussian measurement dependent noise and channel coding over a BAWGN channel with state. Here, the channel state denotes the variance of the measurement dependent noise $|\mathbf{S}_n| \delta \sigma^2$. Feedback codes, i.e., codes that adapt the transmitted codeword to the past channel outputs, are known to increase the capacity of a channel with state. This motivates us to use adaptivity when searching, i.e., to utilize past observations $\{Y_1, Y_2, \ldots, Y_{n-1}\}$ when selecting the next measurement vector $\mathbf{S}_n$. Furthermore, this information theoretic perspective allows us to quantify the increase in the adaptive target acquisition rate. Our analysis of the improvement in the target acquisition rate, as well as of the adaptivity gain, measured as the reduction in the expected number of measurements when using an adaptive strategy over a non-adaptive one, has two components. First, we utilize an information theoretic converse for an optimal non-adaptive search strategy to obtain a non-asymptotic lower bound on the minimum expected number of measurements required to maintain a desired reliability. As a consequence, this provides the best non-adaptive target acquisition rate. Second, we utilize a feedback code based on Posterior Matching as a two-stage adaptive search strategy and obtain a non-asymptotic upper bound on the expected number of measurements while maintaining the desired reliability. Together, these two components allow us to characterize a lower bound on the increase in target acquisition rate due to adaptivity.
Our non-asymptotic analysis of the adaptivity gain reveals two qualitatively different asymptotic regimes. In particular, we show that the adaptivity gain depends on the manner in which the number of locations grows. When refining the search resolution $\delta$ (letting $\delta$ go to zero) while keeping the total search width $B$ fixed, the adaptivity gain grows logarithmically in the number of locations, i.e., as $O\left(\log \frac{B}{\delta} \right)$. On the other hand, when the search width $B$ expands while the search resolution $\delta$ is kept fixed, the adaptivity gain grows in the number of locations as $O\left(\frac{B}{\delta} \log \frac{B}{\delta} \right)$.
The problem of searching for a target under binary measurement dependent noise, whose crossover probability increases with the weight of the measurement vector, was studied in~\cite{DBLP:journals/corr/KaspiSJ16} and analyzed under the sort PM strategy in~\cite{SungEnChiu}. In particular, \cite{DBLP:journals/corr/KaspiSJ16} and~\cite{SungEnChiu} provide an asymptotic analysis of the adaptivity gain for the case where $B = 1$ and $\delta$ approaches zero. Our prior work~\cite{8007098}, by utilizing a (suboptimal) hard decoding of the Gaussian observation $Y_n$, strengthens~\cite{DBLP:journals/corr/KaspiSJ16} and~\cite{SungEnChiu} by also accounting for the regime in which $B$ grows. While the analysis in~\cite{8007098} strengthens the non-asymptotic bounds of~\cite{SungEnChiu} under Bernoulli noise, it fails to provide a tight analysis for our problem with Gaussian observations. In this paper, by strengthening our analysis in~\cite{8007098}, we extend the prior work in three ways: (i) we consider the soft Gaussian observation $Y_n$, (ii) we obtain non-asymptotic achievability and converse analyses, and (iii) we characterize a tight non-asymptotic adaptivity gain in the two asymptotically distinct regimes $B \to \infty$ and $\delta \to 0$.
\subsection{Applications}
Our problem formulation addresses two challenging engineering problems that arise in the context of modern communication systems. We discuss the two problems in the following examples and then review the related state of the art.
\begin{example}[Establishing initial access in mmWave communication] Consider the problem of detecting the direction of arrival for initial access in millimeter wave (mmWave) communications. In mmWave communication, prior to data transmission the base station is tasked with aligning the transmitter and receiver antennas in the angular space. In other words, the base station's antenna pattern can be viewed as a measurement vector $\mathbf{S}_n$ searching the angular space $B \subset (0, 360^{\circ})$. At each time $n$, the noise intensity depends on the base station's antenna pattern $\mathbf{S}_n$, and the noisy observation $Y_n$ is a function of the measurement dependent noise $Z_n(\mathbf{S}_n)$. Here it is natural to characterize the fundamental limit on the measurement time as a function of asymptotically small~$\delta$.
\end{example}
\begin{example}[Spectrum Sensing for Cognitive Radio]
Consider the problem of opportunistically searching for a vacant subband of bandwidth $\delta$ within a total bandwidth $B$. In this problem, a secondary user wishes to locate the single stationary vacant subband quickly and reliably by making measurements $\mathbf{S}_n$ at every time $n$. At each time instant $n$, the noise intensity depends on the number of subbands probed, as dictated by $\mathbf{S}_n$, and the noisy observation $Y_n$ is a function of the measurement dependent noise $Z_n(\mathbf{S}_n)$. Here it is natural to characterize the fundamental limit on the measurement time required for a secondary user to acquire the vacant subband as a function of asymptotically large bandwidth $B$.
\end{example}
Giordani~et~al.~\cite{7460513} compare exhaustive search, such as the Sequential Beamspace Scanning considered by Barati~et~al.~\cite{7421136}, in which the base station sequentially searches through all angular sectors, against a two-stage iterative hierarchical search strategy. In the first stage, an exhaustive search identifies a coarse sector by repeatedly probing each coarse region until a predetermined SNR is achieved. In the second stage, an exhaustive search over the locations identifies the target. Giordani~et~al.~show that in general the adaptive iterative strategy reduces the number of measurements relative to exhaustive search, except when the desired SNR is so high that the number of measurements required at each stage becomes too large. We observe this through our simulations in Section~\ref{sec:num_results}-A. In fact, as confirmed by our simulations, random-coding-based non-adaptive strategies, including the Agile-Link protocol~\cite{Abari_AgileLink}, outperform the repetition-based adaptive strategies.
Past literature on spectrum sensing for cognitive radio~\cite{42_35Multibandjoint, 50AdaptiveMultiband, 55AdaptiveAgileCR} and on support vector recovery~\cite{Nowak_CompSensing,Y_Kim_MACSensing} has focused on the problem where $\textbf{S}_n$ can be real or complex, with measurement independent noise, applying both exhaustive search and multiple adaptive search strategies. In contrast, our work considers a simple binary model, $\textbf{S}_n \in \{0,1\}^{\frac{B}{\delta}}$, but captures the implications of the measurement dependence of the noise, known in the spectrum sensing literature as noise folding. Measurement dependent noise (noise folding) has been investigated in~\cite{Treichler_NoiseFolding}, where a non-adaptive design of complex measurement matrices satisfying the RIP condition was studied. Our work complements this study by characterizing the gain associated with adaptively addressing the measurement dependent noise, albeit for the simpler case of binary measurements. We note that the problem of adaptively finding a subset of a sufficiently large vacant bandwidth with noise folding is considered in~\cite{Sharma_Murthy}, where ideas from group testing and noisy binary search are utilized. Those solutions, however, depend strongly on the availability of a sufficiently large consecutive vacant band and do not apply to our setting.
\noindent\underline{Notations:} Vectors are denoted by boldface letters $\mathbf{A}$, and $\mathbf{A}{(j)}$ is the $j^{th}$ element of a vector. Matrices are denoted by overlined boldface letters. Let $\mathcal{U}_M$ denote the set $\{\mathbf{u}\in \mathbb{R}^M: \mathbf{u}(j)\in\{0,1\}, \; j = 1, 2, \ldots, M \}$. Bern$(p)$ denotes the Bernoulli distribution with parameter $p$, and $h(p) = - p\log p -(1-p)\log(1-p)$ denotes the entropy of a Bernoulli random variable with parameter $p$. Let $G(x; \mu, \sigma^2)$ denote the pdf of a Gaussian random variable with mean $\mu$ and variance $\sigma^2$ evaluated at $x$. Logarithms are to the base 2. Let $[g]_a = g$ if $g \geq a$ and $[g]_a = 0$ otherwise.
\section{Problem Setup}
\label{sec:prob_setup}
In this section, we describe the mathematical formulation of the target acquisition problem followed by the performance criteria.
\subsection{Problem Formulation}
We consider a search agent interested in quickly and reliably finding the true location of a single stationary target by making measurements over time about the target's presence. In particular, we consider a total search region of width $B$ that contains the target in a location of width $\delta$. In other words, the search agent is searching for the target's location among $ \frac{B}{\delta}$ total locations. Let $\mathbf{W} \in \mathcal{U}_{\frac{B}{\delta}}$ denote the true location of the target, where $\mathbf{W}(j)=1$ if and only if the target is located at location $j$. The target location $\mathbf{W}$ is drawn uniformly at random among its $\frac{B}{\delta}$ possible values and remains fixed during the search. A measurement at time $n$ is given by a vector $\mathbf{S}_n \in \mathcal{U}_{\frac{B}{\delta}}$, where $\mathbf{S}_n(j)=1$ if and only if location $j$ is probed. Each measurement can be viewed as producing a clean observation $X_{n} = \textbf{W}^{\intercal} \mathbf{S}_{n} \in \{0,1 \}$ indicating the presence of the target in the measurement vector $\textbf{S}_n$. However, only a noisy version of the clean observation $X_n$ is available to the agent.
The resulting noisy observation $Y_n \in \mathbb{R}$ is given by the following linear model with additive measurement dependent noise
\begin{equation}
Y_n = X_{n}+ {Z}_{n}(\mathbf{S}_n).
\label{eq:noisysearch}
\end{equation}
Here, we assume ${Z}_{n} \sim \mathcal{N}(0, |\textbf{S}_n|\delta \sigma^2)$, which corresponds to i.i.d.\ white Gaussian noise, where $\sigma^2$ denotes the noise variance per unit width. Conditioned on the measurement vector $\mathbf{S}_n$, the noise $Z_{n}$ is independent over time.
A search consisting of $\tau$ measurements can be represented by a measurement matrix $\overline{\mathbf{S}}^{\tau} = [\mathbf{S}_1, \mathbf{S}_2, \ldots , \mathbf{S}_{\tau}]$ which yields the observation vector $\mathbf{Y}^{\tau} = [Y_1, Y_2, \ldots, Y_{\tau}]$. At any time instant $n = 1,2 \ldots, \tau$, the agent selects the measurement vector in general as a function of the past observations and measurements. Mathematically,
\begin{align}
\mathbf{S}_{n} = g_{n}\left(\mathbf{Y}^{n-1}, \overline{\mathbf{S}}^{n-1}\right),
\end{align}
for some causal (possibly random) function $g_{n}: \mathbb{R}^{n-1} \times \mathcal{U}^{n-1}_{\frac{B}{\delta}} \to \mathcal{U}_{\frac{B}{\delta}}$. After observing the noisy observations $\mathbf{Y}^{\tau}$ and measurement matrix $\overline{\mathbf{S}}^{\tau}$, the agent estimates the target location $\mathbf{W}$ as follows
\begin{align}
\hat{\mathbf{W}} = d\left( \mathbf{Y}^{\tau}, \overline{\mathbf{S}}^{\tau}\right),
\end{align}
for some decision function $d: \mathbb{R}^{\tau} \times \mathcal{U}^{\tau}_{\frac{B}{\delta}} \to \mathcal{U}_{\frac{B}{\delta}}$. The probability of error for a search is given by $\mathrm {Pe} = \mathsf{P}(\hat{\mathbf{W}} \neq \mathbf{W} | \mathbf{Y}^{\tau}, \overline{\mathbf{S}}^{\tau})$ and the average probability of error is given by $\overline{\mathrm {Pe}} = \mathsf{P}(\hat{\mathbf{W}} \neq \mathbf{W})$.
We now formally define the notions of a reliable search strategy and an achievable rate:
\begin{definition}[$\epsilon$-Reliable Search Strategy $\mathfrak{c}_{\epsilon}$]
For some $\epsilon \in(0, 1)$, an \textit{$\epsilon$-reliable search strategy}, denoted by $\mathfrak{c}_{\epsilon}$, is defined as a sequence of causal functions $\{g_1, g_2, \ldots, g_{\tau}\}$, with $\tau$ possibly random, according to which the measurement matrix $\overline{\mathbf{S}}^{\tau}$ is selected, together with a decision function $d$ that provides an estimate $\mathbf{\hat{W}}$ of $\mathbf{W}$, such that the average probability of error $\overline{\mathrm {Pe}}$ is at most $\epsilon$.
\end{definition}
\begin{definition}[Achievable Target Acquisition Rate]
A target acquisition rate $R$ is said to be \textit{$\epsilon$-achievable} if for any small $\xi > 0$ and $n$ large enough, there exists an $\epsilon$-reliable search strategy $\mathfrak{c}_{\epsilon}$ such that the following hold
\begin{align}
\expe_{\mathfrak{c}_{\epsilon}}[\tau] &\leq n, \\
\frac{B}{\delta} &\geq 2^{n(R-\xi)}.
\end{align}
A target acquisition rate $R$ is said to be an \textit{achievable target acquisition rate} if it is $\epsilon$-achievable for all $\epsilon \in (0,1)$.
\end{definition}
The above definition is motivated by the information theoretic notion of transmission rate over a communication channel, which captures the exponential rate at which the number of messages grows with the number of channel uses while the receiver can still decode with a small average error probability. Similarly, the target acquisition rate captures the exponential rate at which the number of target locations grows with the number of measurements while a search strategy can still locate the target with a diminishing average error probability.
\begin{definition}[Target Acquisition Capacity]
The supremum of achievable target acquisition rates is called the target acquisition capacity.
\end{definition}
\subsection{Types of Search Strategies and Adaptivity Gain}
Each measurement vector $\mathbf{S}_n$ and the number of total measurements $\tau$ can be selected either based on the past observations $\mathbf{Y}^{n-1}$, or independent of them. Based on these two choices, strategies can be divided into four types i) having fixed length versus variable length number of the measurement matrix $\overline{\mathbf{S}}$, and ii) being adaptive versus non-adaptive.
A \textit{fixed length $\epsilon$-reliable strategy} $\mathfrak{c}_{\epsilon}$ uses a fixed number of measurements $\tau$, predetermined offline independently of the observations, to obtain the estimate $\hat{\mathbf{W}}$. On the other hand, a \textit{variable length $\epsilon$-reliable strategy} $\mathfrak{c}_{\epsilon}$ uses a random number of measurements $\tau$ (possibly determined as a function of the observations $\mathbf{Y}^{\tau}$) to obtain the estimate $\hat{\mathbf{W}}$. For example, $\tau$ can be selected such that the agent achieves $\mathrm {Pe} \leq \epsilon$ in every search, in which case $\tau$ is a random variable that is a function of the past noisy observations. Under an \textit{adaptive strategy} $\mathfrak{c}_{\epsilon} \in \mathcal{C}^A_{\epsilon}$, the agent designs the measurement vector $\textbf{S}_n$ as a function of the past observations $\mathbf{Y}^{n-1}$, i.e., $g_n$ is a function of both $\overline{\textbf{S}}^{n-1}$ and $\mathbf{Y}^{n-1}$.
\begin{definition}
Let $\mathcal{C}^A_{\epsilon}$ be a class of all $\epsilon$-reliable adaptive strategies.
\end{definition}
Under a \textit{non-adaptive strategy}, the agent designs the measurement vector $\textbf{S}_n$ offline, independently of past observations, i.e., $g_n$ depends on neither $\overline{\textbf{S}}^{n-1}$ nor $\mathbf{Y}^{n-1}$.
\begin{definition}
Let $\mathcal{C}^{NA}_{\epsilon}$ be a class of all $\epsilon$-reliable non-adaptive strategies.
\end{definition}
For any $\epsilon$-reliable strategy $\mathfrak{c}_{\epsilon}$, the performance is measured by the expected number of measurements $\expe_{\mathfrak{c}_{\epsilon}}[\tau]$. To achieve better reliability, i.e., a smaller $\epsilon$, the agent in general requires a larger $\expe_{\mathfrak{c}_{\epsilon}}[\tau]$.
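To make the notion of a variable-length adaptive strategy concrete, the sketch below runs a simple illustrative posterior-threshold strategy: it maintains a Bayesian posterior over the locations and stops at the first time the maximum posterior exceeds $1-\epsilon$, so that $\tau$ is a random variable. The greedy probe-the-likely-half rule and all parameter values are hypothetical illustrations, not the sort PM strategy analyzed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters.
M, delta, sigma2, eps = 32, 1.0 / 32, 0.25, 1e-3
w = int(rng.integers(M))                      # true target location (unknown to the agent)

rho = np.full(M, 1.0 / M)                     # uniform prior over the M locations
tau = 0
while rho.max() < 1 - eps and tau < 10_000:
    # Adaptive rule (illustrative): probe the half of the locations with most posterior mass.
    S = np.zeros(M)
    S[np.argsort(rho)[M // 2:]] = 1.0
    var = S.sum() * delta * sigma2            # measurement dependent noise variance
    y = S[w] + rng.normal(0.0, np.sqrt(var))  # noisy observation; clean part is 1 iff target probed
    # Bayesian update: if the target is at location j, then Y ~ N(S(j), var).
    rho *= np.exp(-(y - S) ** 2 / (2 * var))
    rho /= rho.sum()
    tau += 1

w_hat = int(rho.argmax())                     # estimate declared once confident; tau is random
```

Because the stopping time depends on the realized observations, different noise realizations yield different values of $\tau$, which is exactly what distinguishes a variable-length strategy from a fixed-length one.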
\begin{definition}[Adaptivity Gain]
The adaptivity gain is defined as the best reduction in the expected number of measurements when searching with an $\epsilon$-reliable adaptive strategy $\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}$, over an $\epsilon$-reliable non-adaptive strategy $\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}$. Mathematically, it is given as
\begin{align}
\min_{\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}}\expe[\tau] - \min_{\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}}\expe[\tau^{\prime}].
\end{align}
\end{definition}
Hence, characterizing the adaptivity gain quantifies the improvement in target acquisition rate obtained by using adaptive strategies instead of non-adaptive ones.
\section{Preliminaries: Channel Coding with State and Feedback}
\label{sec:prelim}
In this section, we review the fundamentals of channel coding with state and feedback, together with the relevant literature, in order to connect these information theoretic concepts to the problem of searching under measurement dependent noise discussed in the previous section. The aim is to formulate a channel coding model with state and feedback that is equivalent to~(\ref{eq:noisysearch}).
\label{ReviewChannelCoding}
\begin{figure}[!htb]
\centering
\includegraphics[ width=0.7\textwidth]{Channel_Basic_with_Feedback4}
\caption{Transmission over a communication channel with state and feedback}
\label{fig:Basic}
\end{figure}
A communication channel is specified by a set of inputs $\tilde{X} \in \tilde{\mathcal{X}}$, a set of outputs $\tilde{Y} \in \tilde{\mathcal{Y}}$, and a channel transition probability measure $\mathsf{P}(\tilde{y}|\tilde{x})$ for every $\tilde{x} \in \tilde{\mathcal{X}}$ and $\tilde{y} \in \tilde{\mathcal{Y}}$ that expresses the probability of observing a certain output $\tilde{y}$ given that an input $\tilde{x}$ was transmitted \cite{CoverBook2nd}. Throughout this work, we will concentrate on coding over a channel with state and feedback (section 4.6 in~\cite{Gallager}). Formally, at time $n$ the channel state, $\tilde{\mathbf{S}}_n$ belongs to a discrete and finite set $\tilde{\mathcal{A}}$. We assume that the channel state is known at both the encoder and the decoder. For a channel with state, the transition probability at time $n$ is specified by the conditional probability assignment $\mathsf{P}_n\left(\tilde{Y}_n |\tilde{X}_n, \tilde{\mathbf{S}}_n \right)$. Transmission over such a channel is shown in Figure~\ref{fig:Basic}. In general, the channel state $\tilde{\mathbf{S}}_n$ at time $n$ evolves as a function of all past outputs and all past states,
\begin{equation}
\label{eq:state}
\tilde{\mathbf{S}}_n = \tilde{g}_n(\tilde{Y}_1, \tilde{Y}_2,\ldots, \tilde{Y}_{n-1}, \tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2,\ldots, \tilde{\mathbf{S}}_{n-1}).
\end{equation}
The goal is to encode and transmit a uniformly distributed message $\tilde{\mathbf{W}} \in [M] $ over the channel. The encoding function $\phi_n$ at any time $n$ depends on the message to be transmitted $\tilde{\mathbf{W}}$, all past states, and all the past outputs. Thus the next symbol to be transmitted is given by
\begin{equation}
\tilde{X}_n = \phi_n(\tilde{Y}_1, \tilde{Y}_2, \ldots, \tilde{Y}_{n-1}, \tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_{n}, \tilde{\mathbf{W}}).
\end{equation}
The encoder obtains the past outputs from the decoder due to the availability of a noiseless feedback channel from decoder to encoder. In this paper, we assume that both encoder and decoder know the evolution of the channel state, i.e., the sequence $\{\tilde{\mathbf{S}}_n\}_{n \geq 1}$. After $\tau$ channel uses, the decoder uses the noisy observations $\tilde{\mathbf{Y}}^{\tau}$ and state information $\{\tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_{\tau}\}$ to find the best estimate $\tilde{\mathbf{W}}^{\prime}$ of the message $\tilde{\mathbf{W}}$. The probability of error at the end of message transmission is given by $\mathrm {Pe} = \mathsf{P}(\tilde{\mathbf{W}}^{\prime} \neq \tilde{\mathbf{W}} | \tilde{\mathbf{Y}}, \{\tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_{\tau}\})$ and the average probability of error is given by $\overline{\mathrm {Pe}} = \mathsf{P}(\tilde{\mathbf{W}}^{\prime} \neq \tilde{\mathbf{W}})$.
\begin{example}[Binary Additive White Gaussian Noise channel with State and feedback]
\label{ex:bawgn}
Consider a Binary Additive White Gaussian Noise (BAWGN) channel with noisy output $\tilde{Y}_n$ given as the sum of input $\tilde{X}_n \in \{0,1\}$ and Gaussian random variable $\tilde{Z}_n \in \mathbb{R}$ whose distribution is a function of the channel state $\tilde{\mathbf{S}}_n$. Specifically, $\tilde{Z}_n$ is a Gaussian random variable with state dependent noise variance $|\tilde{\mathbf{S}}_n|\delta\sigma^2$ for some $\delta> 0$.
In other words, we have
\begin{equation}
\tilde{Y}_n = \tilde{X}_{n} + \tilde{Z}_{n}(\tilde{\mathbf{S}}_n),
\label{eq:Gaussianoutput}
\end{equation}
where $\tilde{Z}_n \sim \mathcal{N}(0, |\tilde{\mathbf{S}}_n|\delta\sigma^2)$, and the state evolves as $\tilde{\mathbf{S}}_n = \tilde{g}_n(\tilde{Y}_1, \tilde{Y}_2, \ldots, \tilde{Y}_{n-1}, \tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_{n-1})$. Transmission over a BAWGN channel is illustrated in Figure~\ref{fig:Gaussian}.
\begin{figure}[!htb]
\centering
\includegraphics[ width=0.5\textwidth]{Gaussian_Binaryinputchannel.pdf}
\caption{Transmission over a BAWGN channel with binary input $\tilde{X}_n$ and Gaussian noise $\tilde{Z}_n$.}
\label{fig:Gaussian}
\end{figure}
\end{example}
\begin{proposition}
\label{prop:connection}
The problem of searching under measurement dependent Gaussian noise is equivalent to the problem of channel coding over a BAWGN channel with state and feedback. Specifically,
\begin{itemize}
\item
The true location vector $\mathbf{W}$ can be cast as a message $\tilde{\mathbf{W}}$ to be transmitted over the BAWGN channel. Therefore, there are $\frac{B}{\delta}$ possible messages.
\item
An $\epsilon$-reliable search strategy $\mathfrak{c}_{\epsilon}$ provides a sequence of $\{g_1, g_2, \ldots, g_{\tau}\}$ such that $\mathsf{P}(\tilde{\mathbf{W}}^{\prime} \neq \tilde{\mathbf{W}}) \leq \epsilon$. Hence, setting $\tilde{g}_i = g_i$ for all $i \in \{1, 2, \ldots, \tau\}$, the search strategy dictates the evolution of channel states $\tilde{\mathbf{S}}_n$.
\item
The measurement matrix $\overline{\mathbf{S}}^{\tau}$ can be used to define the codebook, i.e., by setting $\{\tilde{\mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_{\tau}\} = \overline{\mathbf{S}}^{\tau}$. Specifically, the codewords are obtained by setting $\tilde{X}_n = \phi_n(\tilde{\mathbf{Y}}^{n-1},\tilde{ \mathbf{S}}_1, \tilde{\mathbf{S}}_2, \ldots, \tilde{\mathbf{S}}_n, \tilde{\mathbf{W}}) = \tilde{\mathbf{W}}^{\intercal}\tilde{\mathbf{S}}_n$.
\item
The measurement vector fixes the channel transition probability measure as $\mathsf{P}(\tilde{Y}_n| \tilde{x}_n,\tilde{\textbf{S}}_n) = G(\tilde{Y}_n; \tilde{x}_n, |\tilde{\textbf{S}}_n|\delta\sigma^2)$ for $\tilde{x}_n \in \{ 0, 1\}$, since the noise is distributed as $\tilde{Z}_n \sim \mathcal{N}(0, |\tilde{\mathbf{S}}_n|\delta \sigma^2)$. Hence, the channel state corresponds to the measurement vector.
\end{itemize}
\end{proposition}
A coding scheme for a channel with state and feedback can thus double as a search strategy. This general approach of searching via channel codes provides an efficient way to design and compare non-adaptive and adaptive search strategies. It also implies that feedback can improve the capacity of a channel with state, which is precisely what we characterize as the adaptivity gain for the problem of searching under measurement dependent noise.
\begin{definition}
The BAWGN capacity with input distribution $\ber(q)$ and noise variance $\sigma^2$ is defined as
\begin{align}
C_{\text{BAWGN}}\left(q, \sigma^2\right)
&:= -\int_{-\infty}^{\infty} \left( (1-q) G(y; 0, \sigma^2) + q G(y; 1, \sigma^2)\right)
\nonumber
\\
&\hspace{0.5cm}\times \log \left( (1-q) G(y; 0, \sigma^2) + q G(y; 1, \sigma^2)\right) dy
\nonumber
\\
& \hspace{0.5cm}- \frac{1}{2}\log(2\pi e \sigma^2).
\end{align}
\end{definition}
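The capacity expression above has no closed form, but it is readily evaluated numerically; the sketch below (an illustrative Riemann-sum quadrature, not part of the formal development) confirms two limiting behaviors: $C_{\text{BAWGN}}(q,\sigma^2) \to h(q)$ as $\sigma^2 \to 0$, and $C_{\text{BAWGN}}(q,\sigma^2) \to 0$ as $\sigma^2 \to \infty$.

```python
import numpy as np

def gauss_pdf(y, mu, var):
    """Gaussian pdf G(y; mu, var)."""
    return np.exp(-(y - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def c_bawgn(q, var, n_grid=200_001):
    """Numerically evaluate C_BAWGN(q, var) in bits.

    C = h(Y) - (1/2) log2(2*pi*e*var), where h(Y) is the differential
    entropy of the mixture output density.
    """
    s = np.sqrt(var)
    y = np.linspace(-10.0 * s, 1.0 + 10.0 * s, n_grid)  # grid covering both conditional densities
    p = (1.0 - q) * gauss_pdf(y, 0.0, var) + q * gauss_pdf(y, 1.0, var)
    dy = y[1] - y[0]
    h_out = -np.sum(p * np.log2(np.maximum(p, 1e-300))) * dy
    return h_out - 0.5 * np.log2(2.0 * np.pi * np.e * var)

# Near-noiseless channel: c_bawgn(0.5, 1e-4) is close to h(1/2) = 1 bit.
# Very noisy channel: c_bawgn(0.5, 100.0) is close to 0.
```

The grid width of ten standard deviations around both conditional means and the floor inside the logarithm are simple numerical safeguards; any standard quadrature routine would serve equally well.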
\begin{corollary}
\label{cor:channelstate}
From channel coding over a BAWGN channel with state and feedback, we obtain that for any small $\xi > 0$ and $n$ large enough, there exists an $\epsilon$-reliable search strategy $\mathfrak{c}_{\epsilon}$ such that the following holds
\begin{align}
\expe_{\mathfrak{c}_{\epsilon}}[\tau] &\leq n, \\
2^{n(C_{\text{BAWGN}}(\frac{1}{2}, \frac{B\sigma^2}{2})-\xi)} &\overset{(a)}\leq \frac{B}{\delta} \overset{(b)}< 2^{nC_{\text{BAWGN}}(\frac{1}{2}, \delta \sigma^2)},
\end{align}
where $(a)$ follows from Theorem 4.6.1 in~\cite{Gallager} and $(b)$ follows by combining the fact that the best channel is obtained when noise variance is the least, i.e., $\delta \sigma^2$, with the converse of the noisy channel coding theorem~\cite{CoverBook2nd}.
\end{corollary}
\section{Main Results}
\label{sec:main_results}
In this section, we characterize a lower bound on the adaptivity gain
$\min_{\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}}\expe[\tau] - \min_{\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}}\expe[\tau^{\prime}]$, i.e., the performance improvement, measured in terms of the reduction in the expected number of measurements, for searching over a width $B$ among $\frac{B}{\delta}$ locations under measurement dependent Gaussian noise.
\begin{theorem}
\label{thm:gain_lower_bound}
Let $\epsilon \in (0,1)$. For any $\epsilon$-reliable non-adaptive strategy $\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}$ searching over a search region of width $B$ among $\frac{B}{\delta}$ locations with $\tau$ number of measurements, there exists an $\epsilon$-reliable adaptive strategy $\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}$ with $\tau^{\prime}$ number of measurements, such that for some small constant $\eta > 0$ the following holds
\begin{align*}
\expe_{\mathfrak{c}_{\epsilon}}[\tau] - \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]
&
\geq \max_{\alpha \in \mathcal{I}_{\frac{B}{\delta}}} \left\{
\log \frac{1}{\alpha} \left(\frac{(1-\epsilon) }{C_{\text{BAWGN}}(q^{\ast}, q^{\ast}B \sigma^2)} \right. -\frac{1}{C_{\text{BAWGN}} \left( q^{\ast}, q^{\ast} B \sigma^2\right) - \eta}
\right)
\nonumber
\\
&
\quad + \log \frac{\alpha B}{\delta} \left(\frac{(1-\epsilon) }{
C_{\text{BAWGN}}(q^{\ast}, q^{\ast}B\sigma^2)}
- \frac{1}{
C_{\text{BAWGN}} \left(\frac{1}{2}, \frac{\alpha B \sigma^2}{2}\right)-\eta} \right)
\\
\nonumber
&\left. \quad
- h(B, \delta, \sigma^2, \alpha, \epsilon, \eta) \right\},
\end{align*}
where
\begin{align*}
h(B, \delta, \sigma^2, \alpha, \epsilon, \eta)
&=
\frac{\log \left( \frac{2}{\epsilon}\right) + \log \log \left( \frac{1}{\alpha}\right) + a_{\eta}}{C_{\text{BAWGN}} \left( q^{\ast}, q^{\ast}B \sigma^2\right) -\eta}
+ \frac{\log \left( \frac{2}{\epsilon}\right) + \log \log \left( \frac{\alpha B}{\delta }\right) + a_{\eta}}{C_{\text{BAWGN}}\left(\frac{1}{2}, \frac{\alpha B \sigma^2}{2} \right) - \eta}
\nonumber
\\
& \quad + \frac{h(\epsilon)}{C_{\text{BAWGN}}(q^{\ast}, q^{\ast}B \sigma^2)},
\end{align*}
$
q^{*}
=
\mathop{\rm argmax}_{q \in \mathcal{I}_{\frac{B}{\delta}}} C_{\text{BAWGN}}(q, qB\sigma^2),
$
and $a_{\eta}$ is the solution of the following equation
\begin{align}
\eta =\frac{a}{a-3}\max_{q \in \mathcal{I}_{\frac{B}{\delta}}}\int_{-\infty}^{\infty} \frac{e^{-\frac{y^2}{2Bq\sigma^2}}}{\sqrt{2 \pi qB \sigma^2}} \left[ \frac{2y-1}{2qB\sigma^2}\right]_{(a-3)} dy .
\end{align}
\end{theorem}
The proof of Theorem~\ref{thm:gain_lower_bound} is obtained by combining Lemma~\ref{lemma:converse_k_1} and Lemma~\ref{lemma:achv}. Theorem~\ref{thm:gain_lower_bound} provides a non-asymptotic lower bound on the adaptivity gain. The bound can be viewed as two parts corresponding to two stages. Intuitively, the first part corresponds to the initial stage of the search, where the agent narrows down the target's location to one of $\frac{1}{\alpha}$ coarse sections of the total search region, i.e., to a section of width $\alpha B$, with high confidence. The second part corresponds to refining the search within the coarse section of width $\alpha B$ obtained from the initial stage. This implies that an adaptive strategy can zoom in and confine the search to a smaller section to reduce the noise intensity, whereas a non-adaptive strategy cannot zoom in and thus performs equally in both stages. We formalize this intuition in Lemma~\ref{lemma:achv}. Optimizing over the fraction $\alpha$ used in the first stage, we obtain the stated bound on the expected number of measurements. We obtain the following corollary as a consequence of Theorem~\ref{thm:gain_lower_bound}.
\begin{corollary}
\label{cor:two_regime_gains}
Let $\epsilon \in (0,1)$. For any $\epsilon$-reliable non-adaptive strategy $\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}$ searching over a search region of width $B$ among $\frac{B}{\delta}$ locations with $\tau$ measurements, there exists an $\epsilon$-reliable adaptive strategy $\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}$ with $\tau^{\prime}$ measurements, such that for fixed $B$ the asymptotic adaptivity gain grows logarithmically with the total number of locations,
\begin{align}
\label{eq:delta_gain}
\lim_{\delta \to 0} \frac{\expe_{\mathfrak{c}_{\epsilon}}[\tau] - \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]}{\log \frac{B}{\delta}}
\geq
\frac{1-\epsilon}{ C_{\text{BAWGN}}(q^{\ast}, q^{\ast}B \sigma^2)} - 1.
\end{align}
For fixed $\delta$, the asymptotic adaptivity gain grows at least as $\frac{B}{\delta} \log \frac{B}{\delta}$ with the total number of locations,
\begin{align}
\label{eq:B_gain}
\lim_{B \to \infty} \frac{\expe_{\mathfrak{c}_{\epsilon}}[\tau] - \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]}{\frac{B}{\delta} \log \frac{B}{\delta}}
\geq
\frac{(1-\epsilon) \sigma^2 \delta }{\log e}.
\end{align}
Furthermore, we have
\begin{align}
\label{eq:B_NA}
\lim_{B \to \infty} \frac{\min_{\mathfrak{c}_{\epsilon} \in \mathcal{C}_{\epsilon}^{NA}}\expe_{\mathfrak{c}_{\epsilon}}[\tau]}{\frac{B}{\delta} \log \frac{B}{\delta}}
\geq
\frac{(1-\epsilon) \sigma^2 \delta }{\log e},
\end{align}
and
\begin{align}
\label{eq:B_A}
\lim_{B \to \infty} \frac{\min_{\mathfrak{c}_{\epsilon} \in \mathcal{C}_{\epsilon}^{A}} \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]}{\frac{B}{\delta} }
= 0.
\end{align}
\end{corollary}
The proof of the above corollary is provided in~Appendix-C.
\begin{remarks}
The above corollary characterizes the two qualitatively different regimes discussed previously. For fixed $B$, as $\delta$ goes to zero, the asymptotic adaptivity gain scales only as $\log \frac{B}{\delta}$, whereas for fixed $\delta$, as $B$ increases, the asymptotic adaptivity gain scales as $\frac{B}{\delta} \log \frac{B}{\delta}$. In other words, the improvement in target acquisition rate is a constant for fixed $B$ as $\delta$ decreases, while it grows linearly with $B$ for fixed $\delta$. Thus, adaptivity provides a larger gain in target acquisition rate in the regime where the total search width grows than in the regime where the total width is fixed and the location widths shrink. In Section~\ref{sec:num_results} we relate this phenomenon to the diminishing capacity of the BAWGN channel as the total noise variance $\frac{B\sigma^2}{2}$ grows.
\end{remarks}
Next we provide the main technical components of the proof of Theorem~\ref{thm:gain_lower_bound}.
\subsection{Converse: Non-Adaptive Search Strategies}
\begin{lemma}
\label{lemma:converse_k_1}
The minimum expected number of measurements required for any $\epsilon$-reliable non-adaptive search strategy can be lower bounded as
\begin{align*}
\min_{\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}}\expe_{\mathfrak{c}_{\epsilon}}[\tau] \geq \frac{(1-\epsilon) \log \left(\frac{B}{\delta}\right) -h(\epsilon)}{C_{\text{BAWGN}}\left(q^{\ast}, q^{\ast} B \sigma^2 \right)}.
\end{align*}
\end{lemma}
The proof of Lemma~\ref{lemma:converse_k_1} is provided in~Appendix-A. The proof follows from the fact that the clean signal $X_i$ and the noise $Z_i$ are independent over time and independent of past observations for $i = 1,2, \ldots, n$, due to the non-adaptive nature of the search strategy. In the absence of information from past observation outcomes, the agent tries to maximize the mutual information $I(X_i; Y_i)$ at every measurement. Since $X_i \sim \ber(q_i)$ and $Z_i \sim \mathcal{N}(0, q_i B\sigma^2)$, the mutual information $I(X_i; Y_i) = C_{\text{BAWGN}}\left(q_i, q_i B \sigma^2 \right)$ is maximized at $q_i = q^{\ast}$.
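For concreteness, the capacity term $C_{\text{BAWGN}}(q, qB\sigma^2)$ in the bound can be evaluated numerically. The sketch below is illustrative only: the integration grid, its span, and the grid search over $q$ are our own implementation choices, not part of the analysis.

```python
import numpy as np

def bawgn_capacity(q, var, grid=20001, span=12.0):
    """I(X; Y) in bits for X ~ Ber(q) and Y = X + N(0, var), computed as
    h(Y) - h(Y|X) by numerically integrating the Gaussian-mixture density."""
    s = np.sqrt(var)
    y = np.linspace(-span * s, 1.0 + span * s, grid)
    dy = y[1] - y[0]
    g0 = np.exp(-y ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    g1 = np.exp(-(y - 1.0) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    p = np.maximum((1 - q) * g0 + q * g1, 1e-300)  # guard log(0) in the tails
    h_y = -np.sum(p * np.log2(p)) * dy             # differential entropy of Y
    h_y_given_x = 0.5 * np.log2(2 * np.pi * np.e * var)
    return h_y - h_y_given_x

def best_q(B, sigma2, qs=None):
    """Grid search for q* maximizing C_BAWGN(q, q * B * sigma^2)."""
    if qs is None:
        qs = np.linspace(0.01, 0.99, 99)
    caps = np.array([bawgn_capacity(q, q * B * sigma2) for q in qs])
    i = int(np.argmax(caps))
    return float(qs[i]), float(caps[i])
```

For small noise the capacity approaches the Bernoulli entropy $H(q)$, and it vanishes as the measurement dependent noise variance $qB\sigma^2$ grows, which is the effect driving the two regimes discussed in this paper.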
\subsection{Achievability: Adaptive Search Strategy}
Consider the following two stage search strategy.
\subsubsection{First Stage (Fixed Composition Strategy $\mathfrak{c}^{1}_{\frac{\epsilon}{2}}$)} We group the $\frac{B}{\delta}$ locations of width $\delta$ into $\frac{1}{\alpha}$ sections of width $\alpha B$. Let $\mathbf{W}^{\prime}$ denote the true location of the target among the sections of width $ \alpha B $. Now, we use a non-adaptive strategy to search for the target location among the $\frac{1}{\alpha}$ sections of width $\alpha B$. In particular, we use a fixed composition strategy where at every time instant $n$, the fraction of total locations probed is fixed to be $q^{\ast}$. In other words, the measurement vector $\mathbf{S}^{\prime}_n$ at every instant $n$ is picked uniformly at random from the set of measurement vectors $\{\mathbf{S}^{\prime} \in \mathcal{U}_{\frac{1}{\alpha}}: |\mathbf{S}^{\prime}| = \lfloor \frac{q^{\ast}}{\alpha} \rfloor \}$. For ease of exposition, we assume that $\frac{q^{\ast}}{\alpha}$ is an integer. Hence, for this strategy, at every $n$, $X_n \sim \ber(q^{\ast})$ and $Z_n \sim \mathcal{N}(0, q^{\ast}B\sigma^2)$. For all $i \in \{1,2, \ldots, \frac{1}{\alpha} \}$, let $\boldsymbol{\rho}^{\prime}_n(i)$ be the posterior probability of the estimate $\hat{\mathbf{W}}^{\prime}(i) = 1$ after reception of $\mathbf{Y}^{n-1}$, i.e., $\boldsymbol{\rho}^{\prime}_n(i) := \mathsf{P} \left( \hat{\mathbf{W}}^{\prime}(i) = 1| \boldsymbol{Y}^{n-1} \right)$, and let $\boldsymbol{\rho}^{\prime}_n := \left\{\boldsymbol{\rho}^{\prime}_n(1), \boldsymbol{\rho}^{\prime}_n(2), \ldots, \boldsymbol{\rho}^{\prime}_{n}\left(\frac{1}{\alpha}\right) \right\}$. Assume that the agent begins with a uniform probability over the $\frac{1}{\alpha}$ sections, i.e., $\boldsymbol{\rho}^{\prime}_0 = \{\alpha, \alpha, \ldots, \alpha \}$. The posterior probability $\boldsymbol{\rho}^{\prime}_{n+1}(i)$ at time $n+1$ when $Y_n = y$ is obtained by the following Bayesian update:
\begin{align}
\label{eq:rho_update1}
\boldsymbol{\rho}^{\prime}_{n+1}(i)
=
\left\{
\begin{array}{ll}
\frac{\boldsymbol{\rho}^{\prime}_{n}(i) G(y; 1, q^{\ast}B\sigma^2)}{\mathcal{D}^{\prime}_n} & \text{if } \boldsymbol{S}^{\prime}_n(i) = 1,\\
\frac{\boldsymbol{\rho}^{\prime}_{n}(i) G(y; 0, q^{\ast}B\sigma^2)}{\mathcal{D}^{\prime}_n} & \text{if } \boldsymbol{S}^{\prime}_n(i) = 0,
\end{array}
\right.
\end{align}
where
\begin{align}
\label{eq:rho_norm1}
\mathcal{D}^{\prime}_n
= \sum_{j: \mathbf{1}_{\{\boldsymbol{S}_n(j) = 1\}}}\boldsymbol{\rho}^{\prime}_{n}(j) G(y; 1, q^{\ast}B\sigma^2)
+ \sum_{j: \mathbf{1}_{\{\boldsymbol{S}_n(j) = 0\}}}\boldsymbol{\rho}^{\prime}_{n}(j) G(y; 0, q^{\ast}B\sigma^2).
\end{align}
Let $\tau^{1} : = \inf\left\{n: \max_{i} \boldsymbol{\rho}^{\prime}_n(i) \geq 1- \frac{\epsilon}{2} \right\}$ be the number of measurements used under stage 1. Note that $\tau^{1}$ is a random variable. Hence, the first stage is a non-adaptive variable length strategy. Now, the expected stopping time $\expe_{\mathfrak{c}^{1}_{\frac{\epsilon}{2}}}[\tau^{1}]$ can be upper bounded using Lemma~\ref{lemm:stage_1_time} from Appendix-B.
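The first stage can be sketched in simulation as follows. This is an illustrative sketch: the parameter values in the test usage are hypothetical, and the update implements the Bayesian rule in (\ref{eq:rho_update1}).

```python
import numpy as np

def stage1_fixed_composition(num_sections, q_star, noise_var, eps, rng,
                             w_true, max_n=100000):
    """First-stage sketch: random fixed-composition probes of
    |S'| = q*/alpha sections, Bayesian posterior updates, stopping once
    some posterior exceeds 1 - eps/2.  Returns (estimate, #measurements)."""
    k = max(1, round(q_star * num_sections))       # sections probed per step
    rho = np.full(num_sections, 1.0 / num_sections)
    for n in range(1, max_n + 1):
        s = np.zeros(num_sections)
        s[rng.choice(num_sections, size=k, replace=False)] = 1.0
        x = s[w_true]                              # clean signal X_n
        y = x + rng.normal(0.0, np.sqrt(noise_var))
        lik = np.where(s == 1.0,
                       np.exp(-(y - 1.0) ** 2 / (2 * noise_var)),
                       np.exp(-y ** 2 / (2 * noise_var)))
        rho = rho * lik
        rho /= rho.sum()                           # normalized Bayesian update
        if rho.max() >= 1.0 - eps / 2:
            return int(np.argmax(rho)), n
    return int(np.argmax(rho)), max_n
```

For instance, with $1/\alpha = 8$ sections, $q^{\ast} = 1/2$, and noise variance $0.05$, the posterior typically concentrates on the true section within a few tens of measurements.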
\subsubsection{Second Stage (Sorted Posterior Matching Strategy $\mathfrak{c}_{\frac{\epsilon}{2}}^2$)} In the second stage, the agent zooms into the $\alpha B$ width section obtained from the first stage and uses an adaptive strategy to search only within this $\alpha B$ section. The agent searches for the target location of width $\delta$ among the remaining $\frac{\alpha B}{\delta}$ locations. In particular, we use the sorted posterior matching strategy proposed in~\cite{SungEnChiu}, which we describe next. Let $\mathbf{W}^{\prime \prime}$ denote the true target location of width $\delta$. For all $i \in \{1,2, \ldots, \frac{\alpha B}{\delta} \}$, let $\boldsymbol{\rho}^{\prime \prime}_n(i)$ be the posterior probability of the estimate $\hat{\mathbf{W}}^{\prime \prime}(i) = 1$ after reception of $\mathbf{Y}^{n-1}$, i.e., $\boldsymbol{\rho}^{\prime \prime}_n(i) := \mathsf{P} \left( \hat{\mathbf{W}}^{\prime \prime}(i) = 1| \mathbf{Y}^{n-1} \right)$, and let $\boldsymbol{\rho}^{\prime \prime}_n := \{\boldsymbol{\rho}^{\prime \prime}_n(1), \boldsymbol{\rho}^{\prime \prime}_n(2), \ldots, \boldsymbol{\rho}^{\prime \prime}_{n}\left(\frac{\alpha B}{\delta}\right) \}$. Assume that the agent begins with a uniform probability over the $\frac{\alpha B}{\delta}$ locations, i.e., $\boldsymbol{\rho}^{\prime \prime}_0 = \left\{\frac{\delta}{\alpha B}, \frac{\delta}{\alpha B}, \ldots, \frac{\delta}{\alpha B} \right\}$. At every time instant $n$, we sort the posterior values in descending order to obtain the sorted posterior vector $\boldsymbol{\rho}^{\downarrow}_n$. Let the vector $I_n$ denote the corresponding ordering of the location indices in the new sorted posterior. Define
\begin{align}
k^{\ast}_n:= \mathop{\rm argmin}_{i} \left| \sum_{j = 1}^{i} \boldsymbol{\rho}^{\downarrow}_n(j)- \frac{1}{2}\right|.
\end{align}
We choose the measurement vector $\mathbf{S}_n^{\prime \prime}$ such that $\mathbf{S}_n^{\prime \prime}(j) = 1$ if and only if $j \in \{I_n(1),I_n(2), \ldots, I_n(k^{\ast}_n)\}$. Note that for this strategy, at every $n$, the noise is $Z_n \sim \mathcal{N}(0, |\mathbf{S}^{\prime \prime}_n|\delta \sigma^2)$ and the worst noise intensity is $\mathcal{N}(0, \frac{\alpha B \sigma^2}{2})$. The posterior probability $\boldsymbol{\rho}^{\prime \prime}_{n+1}(i)$ at time $n+1$ when $Y_n = y$ is obtained by the following Bayesian update:
\begin{align}
\label{eq:rho_update2}
\boldsymbol{\rho}^{\prime \prime}_{n+1}(i)
=
\left\{
\begin{array}{ll}
\frac{\boldsymbol{\rho}^{\prime \prime}_{n}(i) G(y; 1, |\mathbf{S}^{\prime \prime}_n|\delta \sigma^2)}{\mathcal{D}^{\prime \prime}_n} & \text{if } \boldsymbol{S}^{\prime \prime}_n(i) = 1,\\
\frac{\boldsymbol{\rho}^{\prime \prime}_{n}(i) G(y; 0, |\mathbf{S}^{\prime \prime}_n|\delta \sigma^2)}{\mathcal{D}^{\prime \prime}_n} & \text{if } \boldsymbol{S}^{\prime \prime}_n(i) = 0,
\end{array}
\right.
\end{align}
where
\begin{align}
\label{eq:rho_norm2}
\mathcal{D}^{\prime \prime}_n
= \sum_{j: \mathbf{1}_{\{\boldsymbol{S}_n(j) = 1\}}}\boldsymbol{\rho}^{\prime \prime}_{n}(j) G\left(y; 1,|\mathbf{S}^{\prime \prime}_n|\delta \sigma^2\right)
+ \sum_{j: \mathbf{1}_{\{\boldsymbol{S}_n(j) = 0\}}}\boldsymbol{\rho}^{\prime \prime}_{n}(j) G\left(y; 0, |\mathbf{S}^{\prime \prime}_n|\delta \sigma^2\right).
\end{align}
Let $\tau^{2} : = \inf\left\{n: \max_{i} \boldsymbol{\rho}^{\prime \prime}_n(i) \geq 1- \frac{\epsilon}{2} \right \}$ be the number of measurements used under stage 2. Note that $\tau^{2}$ is a random variable. Hence, the second stage is an adaptive variable length strategy. The expected number of measurements $\expe_{\mathfrak{c}^{2}_{\frac{\epsilon}{2}}}[\tau^{2}]$ can be upper bounded using Lemma~\ref{lemm:sortPM_tau} from Appendix-B.
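A simulation sketch of the sorted posterior matching stage follows (the parameter values in the test usage are hypothetical; the probed set is the prefix of the sorted posterior whose mass is closest to $1/2$, and the update implements (\ref{eq:rho_update2})).

```python
import numpy as np

def sorted_pm(num_locs, delta, sigma2, eps, rng, w_true, max_n=100000):
    """Sorted posterior matching sketch with measurement dependent noise
    variance |S| * delta * sigma^2.  Returns (estimate, #measurements)."""
    rho = np.full(num_locs, 1.0 / num_locs)
    for n in range(1, max_n + 1):
        order = np.argsort(-rho)                   # descending posterior sort
        csum = np.cumsum(rho[order])
        k_star = int(np.argmin(np.abs(csum - 0.5))) + 1
        s = np.zeros(num_locs)
        s[order[:k_star]] = 1.0
        var = k_star * delta * sigma2              # noise grows with |S|
        y = s[w_true] + rng.normal(0.0, np.sqrt(var))
        lik = np.where(s == 1.0,
                       np.exp(-(y - 1.0) ** 2 / (2 * var)),
                       np.exp(-y ** 2 / (2 * var)))
        rho = rho * lik
        rho /= rho.sum()
        if rho.max() >= 1.0 - eps / 2:
            return int(np.argmax(rho)), n
    return int(np.argmax(rho)), max_n
```

Note how the noise variance shrinks as the posterior concentrates and the probed set becomes smaller; this is the mechanism behind the adaptivity gain.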
Noting that the total probability of error of the two stage search strategy is less than $\epsilon$ and that the expected stopping time is $\expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}] = \expe_{\mathfrak{c}^1_{\frac{\epsilon}{2}}}[\tau^{1}]+ \expe_{\mathfrak{c}^2_{\frac{\epsilon}{2}}}[\tau^{2}]$, we have the assertion of the following lemma.
\begin{lemma}
\label{lemma:achv}
The minimum expected number of measurements required for the above $\epsilon$-reliable adaptive search strategy $\mathfrak{c}^{\prime}_{\epsilon}$ can be upper bounded as
\begin{align}
\expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]
\leq
\min_{\alpha \in \mathcal{I}_{\frac{B}{\delta}}}\left\{\frac{\log \frac{1}{\alpha} + \log \frac{2}{\epsilon} + \log \log \frac{1}{\alpha} + a_{\eta}}{C_{\text{BAWGN}}\left(q^{\ast}, q^{\ast} B \sigma^2 \right) -
\eta}
+
\frac{\log \frac{\alpha B}{\delta} + \log \frac{2}{\epsilon} + \log \log \frac{\alpha B}{\delta} + a_{\eta}}{C_{\text{BAWGN}}\left(\frac{1}{2}, \frac{\alpha B \sigma^2}{2} \right) -
\eta} \right\}.
\end{align}
\end{lemma}
\begin{remarks}
For the $\epsilon$-reliable adaptive search strategy $\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}$ given by the two stage strategy, the non-asymptotic upper bound provided by Lemma~\ref{lemma:achv} for $\min_{\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}_{\epsilon}^{A}} \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]$ is tighter than the upper bound provided in~\cite{SungEnChiu} using the sorted posterior matching strategy. In fact, for any given $\alpha$, our bound is significantly smaller than the upper bound in~\cite{SungEnChiu}. In the asymptotically dominating terms of the order $\log \frac{B}{\delta}$, our upper bound closely follows the simulations, as illustrated in Section~\ref{sec:num_results}.
\end{remarks}
\begin{remarks}
In the regime of fixed $B$ and diminishing $\delta$, Lemma~\ref{lemma:achv} together with Corollary~\ref{cor:channelstate} establishes the optimality of our proposed algorithm. Further, it characterizes a lower bound on the increase in targeting capacity when utilizing an adaptive strategy over the non-adaptive strategies.
\end{remarks}
\section{Extensions and Generalizations}
\subsection{Generalization to other noise models}
The main results presented in this paper consider the setup where the noise $Z_n$ is distributed as $\mathcal{N}(0, |\mathbf{S}_n|\delta \sigma^2)$. In other words, the variance of the noise, given by $|\mathbf{S}_n|\delta \sigma^2$, is a linear function of the size of the measurement vector $|\mathbf{S}_n|$. This model assumption holds when each target location adds noise equally and independently of other locations when probed together. In general, due to correlation across locations, the additive noise variance can be assumed to scale as a non-decreasing function $f(\cdot)$ of the measurement size $|\mathbf{S}_n|$. In this section, we extend our model to a general formulation for the noise $Z_n \sim \mathcal{N}(0, f(|\mathbf{S}_n|)\delta \sigma^2)$, where $f(\cdot)$ is a non-decreasing function of $|\mathbf{S}_n|$; for example, $f(|\mathbf{S}_n|) = |\mathbf{S}_n|^{\gamma}$ for some $\gamma > 0$. Figure~\ref{fig:capacity} shows the effect of the noise function $f(|\mathbf{S}_n|)$ on the capacity.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\textwidth]{Capacity_func_gamma}
\caption{Behavior of capacity of BAWGN channel with $\sigma^2=0.25$ over a total search region of width $B=10$, location width $\delta =0.1$, as a function of the size of a measurement $|S_n|$.}
\label{fig:capacity}
\end{figure}
\begin{theorem}
\label{thm:fgain_lower_bound}
Let $\epsilon \in (0,1)$ and let $f(\cdot)$ be a non-decreasing function. For any $\epsilon$-reliable non-adaptive strategy $\mathfrak{c}_{\epsilon} \in \mathcal{C}^{NA}_{\epsilon}$ searching over a search region of width $B$ among $\frac{B}{\delta}$ locations with $\tau$ number of measurements, there exists an $\epsilon$-reliable adaptive strategy $\mathfrak{c}^{\prime}_{\epsilon} \in \mathcal{C}^{A}_{\epsilon}$ with $\tau^{\prime}$ number of measurements, such that for some small constant $\eta > 0$ the following holds
\begin{align*}
\expe_{\mathfrak{c}_{\epsilon}}[\tau] - \expe_{\mathfrak{c}^{\prime}_{\epsilon}}[\tau^{\prime}]
&
\geq \left(\max_{\alpha \in \mathcal{I}_{\frac{B}{\delta}}} \left\{
\log \frac{1}{\alpha} \left(\frac{(1-\epsilon) }{C_{\text{BAWGN}}(q^{\ast}, f(\frac{q^{\ast} B}{\delta})\delta \sigma^2)} \right. \right. -\frac{1}{C_{\text{BAWGN}} \left( q^{\ast}, f(\frac{q^{\ast} B}{\delta})\delta \sigma^2\right) - \eta}
\right)
\nonumber
\\
&
\quad + \log \frac{\alpha B}{\delta} \left(\frac{(1-\epsilon) }{
C_{\text{BAWGN}}(q^{\ast}, f(\frac{q^{\ast} B}{\delta})\delta\sigma^2)} \right.
\nonumber
\\
& \left. \left. \left. \hspace{1 cm} - \frac{1}{
C_{\text{BAWGN}} \left(\frac{1}{2}, f(\frac{\alpha B}{2\delta})\delta \sigma^2\right)-\eta} \right) \right\} \right) (1+o(1)),
\end{align*}
where
$
q^{*}
=
\mathop{\rm argmax}_{q \in \mathcal{I}_{\frac{B}{\delta}}} C_{\text{BAWGN}}(q, f(\frac{qB}{\delta})\delta\sigma^2),
$
and $o(1)$ goes to 0 as $\frac{B}{\delta} \to \infty$.
\end{theorem}
\subsection{Multiple Targets}
\label{generalsetup}
The problem formulation and the main results of this paper consider the special case when there exists a single stationary target. Suppose instead the agent aims to find the true locations of $r$ unique targets quickly and reliably. Our problem formulation is easily extended to this general case where there may exist multiple targets. In our generalization to multiple targets under the linear noise model (\ref{eq:noisysearch}), the clean signal indicates the number of targets present in the measurement vector $\mathbf{S}_n$. In particular, let $\mathbf{W}^{(i)} \in \mathcal{U}_{\frac{B}{\delta}}$ be such that $\mathbf{W}^{(i)}(j) = 1$ if and only if the $j$-th location contains the $i$-th target. Then, the noisy observation is given as
\begin{align}
Y_n = \sum_{i = 1}^{r} (\mathbf{W}^{(i)})^{\intercal}\mathbf{S}_n + Z_n,
\end{align}
where $Z_n \sim \mathcal{N}(0, |\mathbf{S}_n|\delta \sigma^2)$. Setting $X_n^{(i)} = (\mathbf{W}^{(i)})^{\intercal}\mathbf{S}_n $ for $i \in [r]$, we have
\begin{align}
Y_n = \sum_{i = 1}^{r} X^{(i)}_n + Z_n.
\end{align}
The problem of searching for multiple targets is equivalent to the problem of channel coding over a Multiple Access Channel (MAC) with state and feedback~\cite{nancy_asilomar}. In other words, we can extend Proposition~1 to channel coding over a MAC with state and feedback with the following constraints: (i) $\mathbf{W}^{(i)}$ can be viewed as the message to be transmitted by the $i$-th transmitter, (ii) the measurement matrix $\overline{\mathbf{S}}_n$ can be viewed as the common codebook shared by all the transmitters, and (iii) a search strategy dictates the evolution of the MAC state. The channel transition is then fixed by the channel state, which is measurement dependent.
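A minimal sketch of the multi-target observation model above (the array sizes and target positions used in the test are hypothetical):

```python
import numpy as np

def observe(W_list, S, delta, sigma2, rng):
    """One noisy observation for r targets:
    Y = sum_i <W_i, S> + Z with Z ~ N(0, |S| * delta * sigma^2)."""
    x = sum(float(W @ S) for W in W_list)   # number of probed targets
    z = rng.normal(0.0, np.sqrt(S.sum() * delta * sigma2))
    return x + z
```

In the noiseless limit $\sigma^2 = 0$ the observation simply counts how many of the $r$ targets fall inside the probed set.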
\textbf{Example $\mathbf{1^{\prime}}$} (Establishing initial access in mm-Wave communications).
In the deployment of mm-Wave links into a cellular or 802.11 network, the base station needs to quickly switch between users and accommodate multiple mobile clients.
In this setup at time $n$ the noisy observation, $Y_n$, is a function of multiple users in the network, in addition to a measurement dependent noise.
\textbf{Example $\boldsymbol{2^{\prime}}$} (Spectrum Sensing for Cognitive Radio). Consider the problem of opportunistically searching for $r$ vacant subbands of bandwidth $\delta$ over a total bandwidth of $B$. In this problem we desire to locate $r$ stationary vacant subbands quickly and reliably, by making measurements over time. Here again the noise intensity depends on the number of subbands probed, $\mathbf{S}_n$, at each time instant $n$.
Searching for multiple targets with measurement dependent noise is a significantly harder problem compared to the single target case, and achievability strategies for this problem even in the absence of noise are far more complex~\cite{Bshouty_kTargetsweighing, Chang_kTargetsCode}.
\section{Numerical Results}
\label{sec:num_results}
In this section we provide a numerical analysis of the proposed search strategies and the bounds derived above.
\subsection{Comparing Search Strategies}
In this section, we numerically compare four strategies proposed in the literature. Besides the sort PM strategy $\mathfrak{c}_{\epsilon}^2$ and the optimal variable length non-adaptive strategy, i.e., the fixed composition strategy $\mathfrak{c}_{\epsilon}^1$, we also consider two noisy variants of the binary search strategy. The noisy binary search applied to our problem proceeds by selecting $\mathbf{S}_n$ as the half of the previous search region $\mathbf{S}_{n-1}$ with higher posterior probability. The first variant we consider, fixed length noisy binary search, resembles the adaptive iterative hierarchical search strategy~\cite{7460513}, where each measurement vector $\mathbf{S}_n$ is used $\alpha_{\epsilon}(\mathbf{S}_n)|\mathbf{S}_n|$ times and $\alpha_{\epsilon}(\mathbf{S}_n)$ is chosen such that the entire search results in an $\epsilon$-reliable search strategy. The second variant is variable length noisy binary search, where each measurement vector $\mathbf{S}_n$ is used until the error probability of each binary decision falls below $\epsilon_p:=\frac{\epsilon}{\log{B/\delta}}$. Table~I provides a quick summary of the search strategies.
\begin{table}[!htb]
\centering
\caption{Candidate Search Strategies}
\label{Table:Strategies}
\begin{tabular}{|l|l|}
\hline
& \\
Strategies $\mathfrak{c}_{\epsilon} \in \mathcal{C}_{\epsilon}$& Description of $\mathbf{S}_n$ selection \\
\hline
Variable Length Random & $\bullet$ Select $\mathbf{S}_n$ s.t. $|\mathbf{S}_n| = \frac{q^{\ast}B}{\delta}$ \T \\
& as dictated by strategy $\mathfrak{c}_{\epsilon}^1$\\
\hline
Fixed Length Noisy Binary& $\bullet$ Select $\mathbf{S}_n$ as dictated by \T\\
& binary search strategy \\
&$\bullet$ Repeat $\alpha_{\epsilon}(\mathbf{S}_n)|\mathbf{S}_n|$ times\\
\hline
Variable Length Noisy Binary& $\bullet$ Select $\mathbf{S}_n$ as dictated by \T\\
& binary search strategy \\
&$\bullet$ Repeat $\tau$ times s.t.\\
&\setlength{\thickmuskip}{0mu} $\tau = \min \{n: \|\boldsymbol{\rho}_n \|{_{\scalebox{0.5}{$\infty$}}} \geq 1- \epsilon_p\}$\\
\hline
Sorted Posterior Matching&$\bullet$ Select $\mathbf{S}_n$ as dictated by \T\\
& Sort PM strategy $\mathfrak{c}_{\epsilon}^2$\\
\hline
\end{tabular}
\end{table}
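The variable length noisy binary search row of Table~I can be sketched as follows (an illustrative sketch with hypothetical parameter values; each halving step is repeated until its own posterior is $\epsilon_p$-reliable, with $\epsilon_p$ taken here as $\epsilon/\log_2(B/\delta)$):

```python
import numpy as np

def vl_noisy_binary_search(num_locs, delta, sigma2, eps, rng, w_true):
    """Variable length noisy binary search sketch: at each level, probe the
    lower half of the candidate interval until the posterior on one half
    exceeds 1 - eps_p, then recurse on that half."""
    eps_p = eps / np.log2(num_locs)
    lo, hi = 0, num_locs                   # candidate interval [lo, hi)
    total = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        var = (mid - lo) * delta * sigma2  # noise scales with probed width
        p = 0.5                            # posterior: target in lower half
        while eps_p < p < 1.0 - eps_p:
            y = (1.0 if lo <= w_true < mid else 0.0) \
                + rng.normal(0.0, np.sqrt(var))
            l1 = np.exp(-(y - 1.0) ** 2 / (2 * var))
            l0 = np.exp(-y ** 2 / (2 * var))
            p = p * l1 / (p * l1 + (1.0 - p) * l0)
            total += 1
        lo, hi = (lo, mid) if p >= 1.0 - eps_p else (mid, hi)
    return lo, total
```

The inner loop is a sequential binary hypothesis test; its expected length grows with the noise variance of the probed half, which is why this variant suffers when large sets must be probed.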
Figure~\ref{fig:EN_strategies} shows the performance of each $\epsilon$-reliable search strategy for fixed parameters $B$, $\delta$, and $\epsilon$. We note that the fixed length noisy binary strategy performs poorly in comparison to the optimal non-adaptive strategy. This shows that randomized non-adaptive search strategies such as the one considered in~\cite{Abari_AgileLink} perform better than both exhaustive search and the iterative hierarchical search strategy. In particular, it performs better than variable length noisy binary search since, even when SNR is high, each measurement is repeated far too many times in order to be $\epsilon$-reliable. The performance of the optimal fully adaptive variable length strategy, sort PM~\cite{SungEnChiu}, is superior to all other strategies even in the non-asymptotic regime.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Strategies2}
\caption{$\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ with $\epsilon = 10^{-4}$, $B=16$, and $\delta=1$, as a function of $\sigma^2$ for various strategies.}
\label{fig:EN_strategies}
\end{figure}
\subsection{Two Distinct Regimes of Operation}
\label{sect:mainsimulations}
In this section, for a fixed $\sigma^2$ we are interested in the expected number of measurements $\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ required by an $\epsilon$-reliable strategy $\mathfrak{c}_{\epsilon}$ in the following two regimes: varying $\delta$ while keeping $B$ fixed, and varying $B$ while keeping $\delta$ fixed. Figures~\ref{fig:EN_varyB} and~\ref{fig:EN_varyDelta} show the simulation results of $\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ as a function of the width $B$ and the resolution $\delta$, respectively, for the fixed composition non-adaptive strategy $\mathfrak{c}_{\epsilon}\in \mathcal{C}_{\epsilon}^{NA}$ and for the sort PM adaptive strategy $\mathfrak{c}_{\epsilon}\in \mathcal{C}_{\epsilon}^{A}$, along with the dominant terms of the lower bound of Lemma~\ref{lemma:converse_k_1} and the upper bound of Lemma~\ref{lemma:achv}, for a fixed noise per unit width $\sigma^2=0.25$. For both of these cases, we see that the adaptivity gain grows as the total number of locations increases, however in a distinctly different manner, as seen in Corollary~\ref{cor:two_regime_gains}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Vary_B}
\caption{$\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ with $\epsilon = 10^{-4}$, $\sigma^2=0.25$, and $\delta=1$, as a function of B.}
\label{fig:EN_varyB}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Vary_delta}
\caption{$\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ with $\epsilon = 10^{-4}$, $\sigma^2=0.25$ and $B=1$, as a function of $\delta$.}
\label{fig:EN_varyDelta}
\end{figure}
\subsection{Relating the Regimes of Operation to Capacity}
In this section, we relate these two regimes of operation to the manner in which the capacity of a BAWGN channel varies. Let the noise be $Z_n \sim \mathcal{N}(0, 2q\sigma^2_{\text{Total}})$, where $q = \frac{|\mathbf{S}_n|\delta}{B}$ is the fraction of the search region measured and $\sigma^2_{\text{Total}} = \frac{B \sigma^2}{2}$ is the half bandwidth variance. Figure~\ref{fig:EN_sigma} shows the effect of the half bandwidth variance on the capacity of a search as a function of $q$. Intuitively, the target acquisition rate of the adaptive strategy relates to the time spent searching sets of size $q$ as $q$ varies from $\frac{1}{2}$ to $\frac{\delta}{B}$. This means that for sufficiently small $\sigma^2_{\text{Total}}$ ($\leq 0.005$ in this example), the adaptivity gain is negligible since $C_{\text{BAWGN}}(\frac{1}{2}, 2q\sigma^2_{\text{Total}})$ is about 1 for all $q$. For medium range $\sigma^2_{\text{Total}}$ (e.g., $0.05$ in this example), adaptivity improves the target acquisition rate from $C_{\text{BAWGN}}(\frac{1}{2}, 2q^{\ast}\sigma^2_{\text{Total}})$ to $C_{\text{BAWGN}}(\frac{1}{2}, 2\frac{\delta}{B}\sigma^2_{\text{Total}})$. When $\sigma^2_{\text{Total}}$ grows significantly, however, the capacity drops rather quickly to zero, forcing the non-adaptive strategies to operate close to exhaustive search, whose measurement time increases linearly in $\frac{B}{\delta}$. This is the regime with the most significant adaptivity gain, as predicted by Corollary~\ref{cor:two_regime_gains}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{Capacity_Func_SigmaTotal}
\caption{For arbitrary $B$ and $\delta$, the capacity $C_{\text{BAWGN}}(\frac{1}{2}, 2q\sigma^2_{\text{Total}})$ as a function of $q$ for different values of the total noise variance $\sigma^2_{\text{Total}}$.}
\label{fig:EN_sigma}
\end{figure}
\subsection{Beyond i.i.d}
In this section, we analyze $\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ under the general noise model presented in Section~VI-A. Recall that $Y_n \sim \mathcal{N}(X_n, f(|\mathbf{S}_n|)\delta \sigma^2)$, where $f$ is a non-decreasing function of the measurement size $|\mathbf{S}_n|$. Figure~\ref{fig:capacity} shows that the behavior of the capacity of a search with fixed parameters $B$, $\delta$, $\mathbf{S}_n$ can be significantly affected by the function $f(\cdot)$. Let us consider the noise function $f(\cdot)$ to be of the form $|\mathbf{S}_n|^{\gamma}$. Figure~\ref{fig:EN_varygamma} shows the plot of the dominant terms of the lower bound of Lemma~\ref{lemma:converse_k_1} and the upper bound of Lemma~\ref{lemma:achv} as a function of $\sigma^2$ for $\gamma \in \{0.5, 1, 2\}$. The adaptivity gain is clearly more significant for larger values of $\gamma$, which validates the need for generalizing the noise function.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{varygamma_varysigma}
\caption{$\mathbb{E}_{\mathfrak{c}_{\epsilon}}[\tau]$ with $\epsilon = 10^{-4}$, $B=25$, and $\delta =1$, as a function of $\sigma^2$ for $\gamma \in \{0.5, 1, 2\}$, when $Z_n \sim \mathcal{N}(0, |\mathbf{S}_n|^{\gamma}\delta \sigma^2)$.}
\label{fig:EN_varygamma}
\end{figure}
\section{Conclusion and Future Work}
We considered the problem of searching for a target's unknown location under measurement dependent Gaussian noise. We showed that this problem is equivalent to channel coding over a BAWGN channel with state and feedback. We used this connection to utilize feedback-code-based adaptive search strategies. We obtained information theoretic converses to characterize the fundamental limits on the target acquisition rate under both adaptive and non-adaptive strategies. As a corollary, we obtained a lower bound on the adaptivity gain. We identified two asymptotic regimes with practical applications where our analysis shows that adaptive strategies are far more critical when either the noise intensity or the total search width is large. In contrast, in scenarios where neither the total width nor the noise intensity is large, non-adaptive strategies might perform quite well. The immediate next step is the extension of this work to a model with $r>1$ target locations, where the problem has been shown to be equivalent to MAC encoding with feedback~\cite{nancy_asilomar}.
\section{Introduction}
\par
In our previous work [1] we derived the linearized Fokker-Planck equation for an incompressible fluid
$$
\int_V n dv_x dv_y dv_z = 0 .
\eqno (1)$$
$$
{\partial n \over \partial t} +
v_j {\partial n \over \partial x_j} -
\alpha\ {\partial \over \partial v_j} (v_j n ) +
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\exp \left[
- {\alpha \over 2k }v_j v_j
\right]\
v_k
{\partial p \over \partial x_k}
= k\ {\partial^2 n \over \partial v_j \partial v_j} .
\eqno (2)$$
\par\noindent
where
\par\noindent
$n = n(t, x_1 , x_2 , x_3 , v_1 , v_2 , v_3 )$ - density;
\par\noindent
$p = p(t, x_1 , x_2 , x_3 )$ - pressure;
\par\noindent
$t$ - time variable;
\par\noindent
$x_1 , x_2 , x_3 $ - space coordinates;
\par\noindent
$v_1 , v_2 , v_3$ - velocities;
\par\noindent
$\alpha$ - coefficient of damping;
\par\noindent
$k$ - coefficient of diffusion.
\par
No attempt was made to solve this equation.
\par
A peculiarity of this equation is that for the two unknown variables $n$ and $p$ we have only one differential equation. This is enough because there is the additional normalization requirement (1) on the variable $n$, and the variable $p$ depends only on the space coordinates $x, y, z$ and the time $t$.
\par
In [2] some simple solutions of the nonlinear equation were studied. In particular, the flow with zero pressure is of interest for our present study, because for this flow the source of nonlinearity is absent. This flow solves both the nonlinear and the linear equations.
\par
The present work is devoted to the solution of the linearized equation. We consider only the case of a parallelepiped with opposite sides identified (i.e., periodic boundary conditions). We formulate and solve the Cauchy problem for the linearized equation.
\section{Fourier decomposition of solution}
\par
We know the form of the Cauchy problem solution for the case of the usual Fokker-Planck equation (see [3]).
$$
n (t, x_j , v_j ) =
\sum_{ m_1 = - \infty }^{ + \infty }
\sum_{ m_2 = - \infty }^{ + \infty }
\sum_{ m_3 = - \infty }^{ + \infty }
\sum_{ p_1 = 0 }^{ \infty }
\sum_{ p_2 = 0 }^{ \infty }
\sum_{ p_3 = 0 }^{ \infty }
A_{ m_1 m_2 m_3 p_1 p_2 p_3 } (t)\ \phi_{ m_1 m_2 m_3 p_1 p_2 p_3 } ,
\eqno (3)$$
\par\noindent
where the eigenfunctions of the usual Fokker-Planck operator are
$$
\phi_{ m_1 m_2 m_3 n_1 n_2 n_3 } =
\prod_{{j=1}}^{{j=3}}
\exp
\left(
2 \pi i {m_j \over a_j }( x_j - {v_j \over \alpha })
\right)
\exp
\left(
- {\alpha \over 2k } v_j^2
\right)
H_{{n}_j}
\left(
\sqrt {\alpha \over { 2k }}
\left(
v_j +
{ {4 \pi i m_j k } \over { \alpha^2 a_j } }
\right)
\right)
.
\eqno (4)$$
\par
It is only natural to seek the solution of the present problem in the same form. We need only add an expression for the new variable $p$
$$
p(t, x_j ) =
\sum_{ m_1 = - \infty }^{ + \infty }
\sum_{ m_2 = - \infty }^{ + \infty }
\sum_{ m_3 = - \infty }^{ + \infty }
P_{ m_1 m_2 m_3 } (t)\
\prod_{{j=1}}^{{j=3}}
\exp
\left(
2 \pi i {m_j \over a_j }x_j
\right) ,
\eqno (5)$$
\par
We represent the coefficient of ${\partial p \over \partial x_k} $ in equation (2) in the same way, as a Fourier series
$$
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\prod_{{j=1}}^{{j=3}}
\exp
\left(
2 \pi i {m_j \over a_j }x_j
\right)
\exp \left[
- {\alpha \over 2k }v_j v_j
\right]\
v_k =
\eqno (6)$$
$$
=
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (k) \phi_{ m_1 m_2 m_3 n_1 n_2 n_3 } .
$$
\par
In expressions (3), (5), and (6) we introduced the following Fourier coefficients:
\par\noindent
$A_{ m_1 m_2 m_3 p_1 p_2 p_3 } (t)$ - coefficients of the decomposition of the unknown variable $n$;
\par\noindent
$P_{ m_1 m_2 m_3 } (t)$ - coefficients of the decomposition of the unknown variable $p$;
\par\noindent
$B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (k)$ - coefficients of the decomposition of the known function multiplying the pressure gradient. Values of $B$ are presented below.
\par
Using these coefficients, we rewrite equation (2) as
$$
{d \over dt }A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
+
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (7)$$
$$
=
2 \pi i {m_k \over a_k }B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (k)
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par
We have thus reduced the partial differential equation (2) to a system of ordinary differential equations for the Fourier coefficients. To proceed with the solution, we need expressions for the known coefficients $B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (k)$.
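When the forcing coefficient $P_{ m_1 m_2 m_3 } (t)$ is regarded as known, each equation of the form (7) is a scalar linear ODE $\frac{dA}{dt} + \lambda A = f(t)$ with a complex forcing, and can be integrated directly. A minimal numerical sketch (the parameter values used below are hypothetical):

```python
def coeff_trajectory(A0, lam, f, t_end, steps=2000):
    """Integrate dA/dt = -lam * A + f(t) for one Fourier mode with a
    classical RK4 scheme; A and f(t) may be complex."""
    dt = t_end / steps
    A, t = complex(A0), 0.0
    for _ in range(steps):
        k1 = -lam * A + f(t)
        k2 = -lam * (A + 0.5 * dt * k1) + f(t + 0.5 * dt)
        k3 = -lam * (A + 0.5 * dt * k2) + f(t + 0.5 * dt)
        k4 = -lam * (A + dt * k3) + f(t + dt)
        A += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return A
```

For a constant forcing $f \equiv c$ the exact solution is $A(t) = A_0 e^{-\lambda t} + (c/\lambda)(1 - e^{-\lambda t})$, which the integrator reproduces to high accuracy.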
\section{Expressions for Fourier coefficients}
\par
In this auxiliary section we find explicit expressions for the Fourier coefficients of some known functions of the velocities. These functions are products of factors, each of which depends on only one independent variable. Therefore we can consider only one velocity variable in this section.
\par
We start from definitions
$$
\phi_{mn} =
\exp \left(
- { {2 \pi i m} \over {\alpha a} } v
\right)\
\exp
\left(
- {\alpha \over {2k} } v^2
\right)
H_n
\left(
\sqrt {\alpha \over { 2k } }
\left(
v +
{ {4 \pi i m k} \over {\alpha^2 a }}
\right)
\right)
.
\eqno (8)$$
$$
\psi_{mn} =
\exp \left(
- { {2 \pi i m} \over {\alpha a} } v
\right)\
H_n
\left(
\sqrt {\alpha \over { 2k }}
\left(
v +
{{ 4 \pi i m k} \over {\alpha^2 a} }
\right)
\right)
.
\eqno (9)$$
\par
The functions $\phi_{mn}$ and $\psi_{mn}$ are of course orthogonal (see [3])
$$
\int_{{-} \infty}^{\infty}
\phi_{mp}\ \ \psi_{mq} dv =
\exp
\left[
- {\alpha \over 2k }\
\left(
{4 \pi m k \over \alpha^2 a}
\right)^2
\right]\
\sqrt {{2 \pi k \over \alpha }}
\delta_{pq} (-2)^p p! .
\eqno (10)$$
\par
Let us find the Fourier coefficients of the following functions:
$$
\exp \left[
- {\alpha \over 2k }v^2
\right] =
\sum_{n=0}^{\infty}
a_{mn} \phi_{mn} .
\eqno (11)$$
$$
\exp \left[
- {\alpha \over 2k }v^2
\right]\
v =
\sum_{n=0}^{\infty}
b_{mn} \phi_{mn} .
\eqno (12)$$
\par
To find these coefficients, we need to calculate integrals
$$
a_{mn} =
\exp
\left[
{\alpha \over 2k }\
\left(
{4 \pi m k \over \alpha^2 a}
\right)^2
\right]\
\sqrt {{\alpha \over 2 \pi k}}
{1 \over (-2)^n n! }
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ \ \psi_{mn} dv .
\eqno (13)$$
$$
b_{mn} =
\exp
\left[
{\alpha \over 2k }\
\left(
{4 \pi m k \over \alpha^2 a}
\right)^2
\right]\
\sqrt {{\alpha \over 2 \pi k}}
{1 \over (-2)^n n! }
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ v\ \psi_{mn} dv .
\eqno (14)$$
\par
We calculated these integrals in our previous work [2]:
$$
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ \psi_{mn} dv =
\eqno (15)$$
$$
=
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ \exp \left(
- {2 \pi i m \over \alpha a} v
\right)\
H_n
\left(
\sqrt {{\alpha \over 2k }}
\left(
v +
{4 \pi i m k \over \alpha^2 a}
\right)
\right)\ dv =
$$
$$
=
\sqrt {{2 \pi k \over a }}
\left(
{2k \over \alpha }\right)^{n/2}
\exp \left[
- {k \over 2 \alpha}
\left(
{2 \pi m \over \alpha a}
\right)^2
\right]\
\left(
{2 \pi i m \over \alpha a}
\right)^n .
$$
$$
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ v\ \psi_{mn} dv =
\eqno (16)$$
$$
=
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ v\ \exp \left(
- {2 \pi i m \over \alpha a} v
\right)\
H_n
\left(
\sqrt {{\alpha \over 2k }}
\left(
v +
{4 \pi i m k \over \alpha^2 a}
\right)
\right)\ dv =
$$
$$
=
\sqrt {{2 \pi k \over a }}
\left(
{2k \over \alpha }\right)^{n/2}
\exp \left[
- {k \over 2 \alpha}
\left(
{2 \pi m \over \alpha a}
\right)^2
\right]\
\left[
-
{k \over \alpha}
\left(
{ 2 \pi i m \over \alpha a}
\right)^{n+1} +
n\
\left(
{ 2 \pi i m \over \alpha a}
\right)^{n-1}
\right]
.
$$
\par
Thus we get the following expressions for the Fourier coefficients
\boxit{
$$
a_{mn} =
\exp \left[
6 {k \over \alpha }\left(
{\pi m \over \alpha a}
\right)^2
\right]\
{1 \over 2^n n! }
\left(
{2k \over \alpha }\right)^{n/2}
\left(
{2 \pi i m \over \alpha a}
\right)^n
.
\eqno (17)$$
}
\boxit{
$$
b_{mn} =
\exp \left[
6 {k \over \alpha }\left(
{\pi m \over \alpha a}
\right)^2
\right]\
{1 \over 2^n n! }
\left(
{2k \over \alpha }\right)^{n/2}
\left[
-
{k \over \alpha}
\left(
{ 2 \pi i m \over \alpha a}
\right)^{n+1} +
n\
\left(
{ 2 \pi i m \over \alpha a}
\right)^{n-1}
\right] .
\eqno (18)$$
}
\section{Dynamics of Fourier coefficients}
\par
In this section we substitute into the main equation (7) the expressions for the known coefficients in terms of the $a_{{m}_2 n_2}$ and $b_{{m}_1 n_1}$ coefficients found in the last section. Namely, we use the expressions
$$
B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (1) =
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
b_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3} .
\eqno (19)$$
$$
B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (2) =
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
a_{{m}_1 n_1}
b_{{m}_2 n_2}
a_{{m}_3 n_3} .
\eqno (20)$$
$$
B_{ m_1 m_2 m_3 n_1 n_2 n_3 } (3) =
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
a_{{m}_1 n_1}
a_{{m}_2 n_2}
b_{{m}_3 n_3} .
\eqno (21)$$
\par
Then (7) reads
$$
{d \over dt }A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
+
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (22)$$
$$
=
2 \pi i\
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\left(
{m_1 \over a_1 }b_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3} +
{m_2 \over a_2 }a_{{m}_1 n_1}
b_{{m}_2 n_2}
a_{{m}_3 n_3} +
\right.
$$
$$
+
\left.
{m_3 \over a_3 }a_{{m}_1 n_1}
a_{{m}_2 n_2}
b_{{m}_3 n_3}
\right)\
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par\noindent
or
$$
{d \over dt }A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
+
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (23)$$
$$
=
2 \pi i\
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
a_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3}\
\left(
{m_1 \over a_1 } {b_{m_1 n_1} \over a_{ m_1 n_1 }} +
{m_2 \over a_2 } {b_{m_2 n_2} \over a_{ m_2 n_2 }}
\right.
+
$$
$$
+
\left.
{m_3 \over a_3 } {b_{m_3 n_3} \over a_{ m_3 n_3 }}
\right)\
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par
This is the main equation, which describes the dynamics of the Fourier coefficients. We delay the actual substitution of $a_{{m}_i n_j}$ and $b_{{m}_i n_j}$ until (31).
\section{Incompressibility condition}
\par
We use the incompressibility condition (1) to eliminate $P_{ m_1 m_2 m_3 } (t) $ from (23).
\par
(1) and (3) imply
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)\
\int_V
\phi_{ m_1 m_2 m_3 n_1 n_2 n_3 }
dv_x dv_y dv_z
= 0 .
\eqno (24)$$
\par
Let us denote
$$
c_{mn} =
\int_{{-} \infty}^{\infty}
\exp \left[
- {\alpha \over 2k }v^2
\right]\ \psi_{mn} dv =
\int_{{-} \infty}^{\infty}
\phi_{mn} dv =
\eqno (25)$$
$$
=
\sqrt {{2 \pi k \over a }}
\left(
{2k \over \alpha }\right)^{n/2}
\exp \left[
- {k \over 2 \alpha}
\left(
{2 \pi m \over \alpha a}
\right)^2
\right]\
\left(
{2 \pi i m \over \alpha a}
\right)^n .
$$
\par
So the incompressibility condition is equivalent to the following equation
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)\
= 0 .
\eqno (26)$$
\par
Let us suppose that the coefficients $A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)$ satisfy (26) at the moment $t$. They must satisfy this equation at the next moment $t+dt$ as well. The values of $A$ at the next moment are defined by the dynamics equation (23), which besides $A(t)$ also contains the pressure $P$. Therefore the incompressibility condition, written for the next moment, will give us an equation for the pressure $P$. The derivation of this equation is a rather long procedure, which ends in equation (42).
\par
Differentiating (26) with respect to time $t$, we get
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
A'_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)\
= 0 .
\eqno (27)$$
\par
Let us find $A'_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)$ from (22) and substitute this value to (27)
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (28)$$
$$
=
2 \pi i\
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
\times
$$
$$
\times\
\left(
{m_1 \over a_1 }b_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3} +
{m_2 \over a_2 }a_{{m}_1 n_1}
b_{{m}_2 n_2}
a_{{m}_3 n_3}
\right.
+
$$
$$
+
\left.
{m_3 \over a_3 }a_{{m}_1 n_1}
a_{{m}_2 n_2}
b_{{m}_3 n_3}
\right)\
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par
Let us use (26) to remove the term with $\left(
{2 \pi m_j \over \alpha a_j }
\right)^2$
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (29)$$
$$
=
2 \pi i\
\left(
{1 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
\times
$$
$$
\times\
\left(
{m_1 \over a_1 }b_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3} +
{m_2 \over a_2 }a_{{m}_1 n_1}
b_{{m}_2 n_2}
a_{{m}_3 n_3}
\right.
+
$$
$$
+
\left.
{m_3 \over a_3 }a_{{m}_1 n_1}
a_{{m}_2 n_2}
b_{{m}_3 n_3}
\right)\
P_{ m_1 m_2 m_3 } (t)\ =
0
.
$$
\par
We can easily calculate the sums over $n_i$ in the RHS of (29). For this purpose let us introduce the partial sum over the group of terms with constant $(n_1 + n_2 + n_3 ) = J$.
$$
S_J
=
2 \pi i\
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sum_{ n_1 + n_2 + n_3 = J }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
\times
\eqno (30)$$
$$
\times
a_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3}
\left(
{m_1 \over a_1 } {b_{m_1 n_1} \over a_{ m_1 n_1 }}
+
{m_2 \over a_2 } {b_{m_2 n_2} \over a_{ m_2 n_2} }
+
{m_3 \over a_3 } {b_{m_3 n_3} \over a_{ m_3 n_3 } }
\right)\ .
$$
\par
Let us substitute into (30) the values of the coefficients $a_{ij} , b_{ij} , c_{ij}$
$$
S_J
=
\left(
{\alpha^2 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
\left(
{k \over \alpha }\right)^J
\times
\eqno (31)$$
$$
\times
\exp \left[
{k \over \alpha }\left(
{2 \pi m_1 \over \alpha a_1}
\right)^2
\right]\
\exp \left[
{k \over \alpha }\left(
{2 \pi m_2 \over \alpha a_2}
\right)^2
\right]\
\exp \left[
{k \over \alpha }\left(
{2 \pi m_3 \over \alpha a_3}
\right)^2
\right]\
\times
$$
$$
\times
\left(
{k \over \alpha}
\left[
\left(
{ 2 \pi m_1 \over \alpha a_1}
\right)^2 +
\left(
{ 2 \pi m_2 \over \alpha a_2}
\right)^2 +
\left(
{ 2 \pi m_3 \over \alpha a_3}
\right)^2
\right]
+
J
\right)
\times
$$
$$
\times
\sum_{ n_1 + n_2 + n_3 = J }
{1 \over {n_1} ! }
{1 \over {n_2} ! }
{1 \over {n_3} ! }
\left(
{2 \pi i m_1 \over \alpha a_1}
\right)^{{{2n}}_1}
\left(
{2 \pi i m_2 \over \alpha a_2}
\right)^{{{2n}}_2}
\left(
{2 \pi i m_3 \over \alpha a_3}
\right)^{{{2n}}_3}
.
$$
\par
By the multinomial theorem (Newton's binomial theorem generalized to three terms) the last sum gives
$$
S_J
=
\left(
{\alpha^2 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
\times
\eqno (32)$$
$$
\times
\exp \left[
{k \over \alpha }\left(
{2 \pi m_1 \over \alpha a_1}
\right)^2
\right]\
\exp \left[
{k \over \alpha }\left(
{2 \pi m_2 \over \alpha a_2}
\right)^2
\right]\
\exp \left[
{k \over \alpha }\left(
{2 \pi m_3 \over \alpha a_3}
\right)^2
\right]\
\times
$$
$$
\times
\left(
{k \over \alpha}
\left[
\left(
{ 2 \pi m_1 \over \alpha a_1}
\right)^2 +
\left(
{ 2 \pi m_2 \over \alpha a_2}
\right)^2 +
\left(
{ 2 \pi m_3 \over \alpha a_3}
\right)^2
\right]
+
J
\right)
\times
$$
$$
\times
{1 \over J! }
\left[
\left(
{k \over \alpha }\right)\
\left(
\left(
{ 2 \pi i m_1 \over \alpha a_1}
\right)^2 +
\left(
{ 2 \pi i m_2 \over \alpha a_2}
\right)^2 +
\left(
{ 2 \pi i m_3 \over \alpha a_3}
\right)^2
\right)
\right]^J
.
$$
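The passage from (31) to (32) rests on the multinomial identity $\sum_{n_1+n_2+n_3=J} x_1^{2n_1} x_2^{2n_2} x_3^{2n_3} / (n_1! n_2! n_3!) = (x_1^2+x_2^2+x_3^2)^J / J!$, applied with $x_j = 2\pi i m_j / (\alpha a_j)$. Since the identity holds for any commuting values, it can be checked exactly with rational test numbers:

```python
from fractions import Fraction
from math import factorial

# Exact check of the multinomial identity used to pass from (31) to (32).
# x1, x2, x3 are arbitrary rational test values (the squares stand in for
# the (2 pi i m_j / alpha a_j)^2 factors of the text).
x1, x2, x3 = Fraction(2), Fraction(3), Fraction(5)

def lhs(J):
    total = Fraction(0)
    for n1 in range(J + 1):
        for n2 in range(J + 1 - n1):
            n3 = J - n1 - n2
            total += (x1**(2 * n1) * x2**(2 * n2) * x3**(2 * n3)
                      / (factorial(n1) * factorial(n2) * factorial(n3)))
    return total

def rhs(J):
    return (x1**2 + x2**2 + x3**2)**J / Fraction(factorial(J))

ok = all(lhs(J) == rhs(J) for J in range(8))
```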
\par
We see that the result $S_J$ depends on two variables $J = n_1 + n_2 + n_3$ and $M$
$$
M =
\left(
{k \over \alpha }\right)\
\left(
\left(
{ 2 \pi m_1 \over \alpha a_1}
\right)^2 +
\left(
{ 2 \pi m_2 \over \alpha a_2}
\right)^2 +
\left(
{ 2 \pi m_3 \over \alpha a_3}
\right)^2
\right) .
\eqno (33)$$
\par
With these variables (32) reads
$$
S_J
=
\left(
{\alpha^2 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
\times
\eqno (34)$$
$$
\times\
e^M
\left(
M + J
\right)
{1 \over J! }
(-M)^J
.
$$
\par
The last effort is to calculate the sum over $J$
$$
\sum_{J=0}^{\infty}
S_J =
\left(
{\alpha^2 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
\times
\eqno (35)$$
$$
\times\
e^M
\sum_{J=0}^{\infty}
\left(
M + J
\right)
{1 \over J! }
(-M)^J ;
$$
$$
\sum_{J=0}^{\infty}
\left(
M + J
\right)
{1 \over J! }
(-M)^J =
M e^{-M} +
(-M) e^{-M} =
0 .
\eqno (36)$$
\par\noindent
that is, the coefficient of $P_{ m_1 m_2 m_3 } (t)$ in (29) is zero, and (29) reads
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
= 0 .
\eqno (37)$$
\par
This result is a little disappointing, because we still have not obtained the desired equation for $P_{ m_1 m_2 m_3 } (t)$. We need persistence to achieve success. The result is already near.
\par
Differentiate (37) with respect to $t$ once again
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )
A'_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
= 0 .
\eqno (38)$$
\par\noindent
and substitute $A'$ from (22)
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (39)$$
$$
=
2 \pi i\
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )
\times
$$
$$
\times
\left(
{m_1 \over a_1 }b_{{m}_1 n_1}
a_{{m}_2 n_2}
a_{{m}_3 n_3} +
{m_2 \over a_2 }a_{{m}_1 n_1}
b_{{m}_2 n_2}
a_{{m}_3 n_3} +
\right.
$$
$$
+
\left.
{m_3 \over a_3 }a_{{m}_1 n_1}
a_{{m}_2 n_2}
b_{{m}_3 n_3}
\right)\
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par
Let us use (37) to remove terms with $\left(
{2 \pi m_j \over \alpha a_j }
\right)^2$ in (39). To calculate the coefficient of $P_{ m_1 m_2 m_3 } (t)$ we perform once again the summation over groups of terms with constant $(n_1 + n_2 + n_3 ) = J$. The sum for each group is calculated as before, but this time it is multiplied by $J$ before the final summation over $J$
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
\alpha
( n_1 + n_2 + n_3 )^2
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (40)$$
$$
=
\left(
{\alpha^2 \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
e^M
\sum_{J=0}^{\infty}
\left(
M + J
\right)
{J \over J! }
(- M)^J
P_{ m_1 m_2 m_3 } (t)\
.
$$
\par
This time the sum over $J$ is not zero
$$
\sum_{J=0}^{\infty}
\left(
M + J
\right)
{J \over J! }
(-M)^J =
- M^2 e^{-M} +
(-M)^2 e^{-M} -
M e^{-M} =
-M e^{-M} ,
\eqno (41)$$
\par\noindent
and we finally get the equation for pressure which follows from the incompressibility condition
\boxit{
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )^2
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (42)$$
$$
=
-
\left(
{\alpha \over k }\right)\
\left( {\alpha \over 2 \pi k} \right)^{3/2}
\sqrt {{2 \pi k \over a_1 }}
\sqrt {{2 \pi k \over a_2 }}
\sqrt {{2 \pi k \over a_3 }}
M
P_{ m_1 m_2 m_3 } (t)\
.
$$
}
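The two series evaluations that decided the fate of the pressure term, (36) and (41), can be checked directly; $M$ is an arbitrary positive test value and the series are truncated at $N$ terms.

```python
from math import exp, factorial

# Direct check of the series (36), which sums to zero, and (41), which
# sums to -M exp(-M).  M is an arbitrary positive test value.
M, N = 1.5, 60

s36 = sum((M + J) * (-M)**J / factorial(J) for J in range(N + 1))
s41 = sum((M + J) * J * (-M)**J / factorial(J) for J in range(N + 1))
```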
\par
(42) is an algebraic equation, but at the same time it is the Fourier transform of a differential equation for the original unknown variable $n$. Namely, according to (33), $M = \left(
{k \over \alpha }\right)\
\left(
\left(
{ 2 \pi m_1 \over \alpha a_1}
\right)^2 +
\left(
{ 2 \pi m_2 \over \alpha a_2}
\right)^2 +
\left(
{ 2 \pi m_3 \over \alpha a_3}
\right)^2
\right)$. Each $m_i^2$ term in (42) is the Fourier transform of the second partial derivative of $p$ with respect to the corresponding space variable; their sum is the Fourier transform of the Laplace operator. Therefore (42) is the Fourier transform of Poisson's equation for the pressure.
\par
Solving this equation for $P_{ m_1 m_2 m_3 } (t)$, we get
$$
P_{ m_1 m_2 m_3 } (t) =
{-1 \over M }\left(
{k \over \alpha }\right)\
\left( {2 \pi k \over \alpha }\right)^{3/2}
\sqrt { {a_1 \over 2 \pi k}}
\sqrt { {a_2 \over 2 \pi k}}
\sqrt { {a_3 \over 2 \pi k}}
\times
\eqno (43)$$
$$
\times
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
c_{{m}_1 n_1}
c_{{m}_2 n_2}
c_{{m}_3 n_3}
( n_1 + n_2 + n_3 )^2
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
.
$$
\par
Let us take the values of $c_{mn}$ from (25)
$$
P_{ m_1 m_2 m_3 } (t) =
{-\exp (- M / 2 ) \over M }\left(
{k \over \alpha }\right)\
\left( {2 \pi k \over \alpha }\right)^{3/2}
\times
\eqno (44)$$
$$
\times
\sum_{ \nu_1 = 0 }^{ \infty }
\sum_{ \nu_2 = 0 }^{ \infty }
\sum_{ \nu_3 = 0 }^{ \infty }
\left(
{2k \over \alpha }\right)^{ ( \nu_1 + \nu_2 + \nu_3 ) /2}
\left(
{2 \pi i m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi i m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi i m_3 \over \alpha a_3}
\right)^{{\nu}_3}
( \nu_1 + \nu_2 + \nu_3 )^2
A_{ m_1 m_2 m_3 \nu_1 \nu_2 \nu_3 } (t)
.
$$
\par\noindent
and substitute the result into (23)
$$
{d \over dt }A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
+
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (45)$$
$$
=
2 \pi i\
a_{m_1 n_1}
a_{m_2 n_2}
a_{m_3 n_3}\
\left(
{m_1 \over a_1 } {b_{m_1 n_1} \over a_{ m_1 n_1}} +
{m_2 \over a_2 } {b_{m_2 n_2} \over a_{ m_2 n_2}} +
{m_3 \over a_3 } {b_{m_3 n_3} \over a_{ m_3 n_3}}
\right)\
{ {\exp (- M / 2 )} \over { (-M) }}
\times
$$
$$
\times
\sum_{ \nu_1 = 0 }^{ \infty }
\sum_{ \nu_2 = 0 }^{ \infty }
\sum_{ \nu_3 = 0 }^{ \infty }
\left(
{2k \over \alpha }\right)^{ ( \nu_1 + \nu_2 + \nu_3 ) /2}
\left(
{2 \pi i m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi i m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi i m_3 \over \alpha a_3}
\right)^{{\nu}_3}
( \nu_1 + \nu_2 + \nu_3 )^2
A_{ m_1 m_2 m_3 \nu_1 \nu_2 \nu_3 } (t)
.
$$
\par
To get the final result, substitute the values of $a_{mn}$ and $b_{mn}$ from (17) and (18)
\boxit{
$$
{d \over dt }A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
+
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
A_{ m_1 m_2 m_3 n_1 n_2 n_3 } (t)
=
\eqno (46)$$
$$
=
{\exp ( M ) \over (-M)}
{1 \over 2^{{n}_1 + n_2 + n_3} n_1 ! n_2 ! n_3 !}
\left(
{-2k \over \alpha }\right)^{{(} n_1 + n_2 + n_3 ) /2}
\times
$$
$$
\times\
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{n}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{n}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{n}_3}
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
\times
$$
$$
\times
\sum_{ \nu_1 = 0 }^{ \infty }
\sum_{ \nu_2 = 0 }^{ \infty }
\sum_{ \nu_3 = 0 }^{ \infty }
\left(
{-2k \over \alpha }\right)^{ ( \nu_1 + \nu_2 + \nu_3 ) /2}
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{\nu}_3}
( \nu_1 + \nu_2 + \nu_3 )^2
A_{ m_1 m_2 m_3 \nu_1 \nu_2 \nu_3 } (t)
.
$$
}
\par
We have completely eliminated the pressure from the dynamic equation. Thus the problem is reduced to the system of ordinary linear differential equations (46). Such systems are solved by Euler's exponential substitution. The only difficulty is that the number of variables is infinite. We integrate (46) using special properties of its matrix.
\section{Special eigenvalue problem}
\par
In this section we consider a purely algebraic eigenvalue problem for a special form of matrices.
\par
Let the matrix have the following form
$$
M_{ij} = D_{ij} + P_{ij} ;
\eqno (47)$$
\par\noindent
where $D_{ij}$ is a diagonal matrix
$$
D_{ij} = \left\{ \matrix { 0 ;\ \ i \ne j \cr d_i ;\ \ i = j} \right.
\eqno (48)$$
\par\noindent
and $P_{ij}$ is a dyadic product matrix
$$
P_{ij} = l_i r_j .
\eqno (49)$$
\par
The eigenvalue problem consists of the following tasks:
\par\noindent
1) To find a set of eigenvectors $x$ such that the matrix product of $M$ and $x$ is proportional to $x$
$$
M_{ij} x_j = \lambda x_i .
\eqno (50)$$
\par\noindent
2) To find the corresponding set of proportionality coefficients, the eigenvalues $\lambda$.
\par
For our special form of the matrix
$$
( D_{ij} - \lambda\ \delta_{ij} ) x_j +
l_i \ ( r_k x_k ) = 0;
\eqno (51)$$
\par
Let us denote
$$
S = r_k x_k ;
\eqno (52)$$
\par
Then
$$
( d_1 - \lambda )\ {x_1 \over l_1 }=
( d_2 - \lambda )\ {x_2 \over l_2 }=
... = - S .
\eqno (53)$$
\par
This gives a very simple expression for the components of the eigenvector, provided the eigenvalue $\lambda$ is known
$$
x_i = {-S l_i \over ( d_i - \lambda )} .
\eqno (54)$$
\par
Substitute this expression into (52) and get
$$
\sum_k
{-S\ l_k r_k \over ( d_k - \lambda )} = S .
\eqno (55)$$
\par
Two cases are possible here. The common case is $S \ne 0$.
$$
\sum_k
{l_k r_k \over ( d_k - \lambda )} + 1 = 0.
\eqno (56)$$
\par
(56) gives an equation for $\lambda$. When all $d_i$ are different, the algebraic equation (56) has degree $n$, where $n$ is the dimension of the matrix $M$.
\par
If all roots of (56) are different, we get the full set of eigenvalues; then from (54) we find the full set of eigenvectors. The exact value of $S$ is of no importance; we can put, for example, $S=1$ in (54).
\par
For example, when all $( l_k r_k )$ are positive or all are negative and all $d_i$ are real (and different - see above), one can guarantee that all $n$ roots of (56) are real and different. This follows from the fact that the $d_i$ separate the roots of (56). Therefore there are $n-1$ roots between consecutive $d_i$ and one more root $\lambda > \max (d_i )$ (the case $l_i r_i > 0$). This fact makes numeric evaluation of the roots rather simple. Unfortunately we deal with quite the opposite case: the signs of our $( l_k r_k )$ alternate. Nevertheless we shall find the roots - see below.
\par
When some $d_i$ are equal, equation (56) has fewer roots than the matrix dimension. We can consider this case as confluent. In this case the additional eigenvalues $\lambda$ (besides the roots of (56)) must be equal to the repeated value $d_i$, and for these eigenvalues we must put $S=0$ in (53). Then all $x_i$, besides those in the columns corresponding to the repeated $d_i$, must be zero. The remaining nonzero $x_i$ must satisfy the orthogonality condition (52) (with $S=0$).
\par
Let us consider the eigenvalue problem for the transposed matrix $M^T$. Let $y$ denote the eigenvectors of $M^T$. Then
$$
( D_{ij} - \lambda\ \delta_{ij} ) y_i +
( l_k y_k ) r_j = 0;
\eqno (57)$$
\par\noindent
or
$$
( d_i - \lambda ) {y_i \over r_i} = - S .
\eqno (58)$$
$$
y_i = {-S r_i \over ( d_i - \lambda )} .
\eqno (59)$$
$$
S = l_k y_k =
\sum_k {-S\ l_k r_k \over ( d_k - \lambda )}
.
\eqno (60)$$
\par
We get for $\lambda$ equation (56) once again: the eigenvalues of the conjugate problems are equal. The components of the eigenvectors of the conjugate problem are calculated from (59), where $S$ is an arbitrary nonzero number, for example $S = 1$.
\par
Eigenvectors of the conjugate problems with different eigenvalues $\lambda$ and $\mu$ are orthogonal:
$$
\sum_k x_k y_k =
\sum_k
{- l_k \over ( d_k - \lambda )}
{- r_k \over ( d_k - \mu )} =
\sum_k
{l_k r_k \over \lambda - \mu}
\left(
{1 \over ( d_k - \lambda )} -
{1 \over ( d_k - \mu )}
\right) =
\eqno (61)$$
$$
=
{1 \over \lambda - \mu}
\left(
\sum_k
{l_k r_k \over ( d_k - \lambda )}
-
\sum_k
{l_k r_k \over ( d_k - \mu )}
\right)
=
{1 \over \lambda - \mu}
\left(
-1 + 1
\right) = 0.
$$
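The theory of this section is easy to illustrate on a small example. The sketch below builds a $4 \times 4$ matrix $D + l r^T$ with all $l_k r_k$ positive (the interlacing case discussed above), locates the roots of the secular equation (56) by bisection in the guaranteed brackets, and compares them with a direct numerical eigenvalue computation; the eigenvector formula (54) is verified as well. All numbers are arbitrary test values.

```python
import numpy as np

# Diagonal-plus-dyadic eigenproblem (47)-(56) with all l_k r_k > 0,
# so the d_i interlace the secular roots.  Arbitrary test values.
d = np.array([1.0, 2.0, 3.0, 4.0])
l = np.array([0.5, 0.3, 0.2, 0.4])
r = np.array([0.2, 0.5, 0.1, 0.3])
K = np.diag(d) + np.outer(l, r)           # M_ij = D_ij + l_i r_j

def f(lam):                               # secular function, equation (56)
    return 1.0 + np.sum(l * r / (d - lam))

def bisect(lo, hi, steps=200):
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps = 1e-9
brackets = [(d[i] + eps, d[i + 1] - eps) for i in range(3)]
brackets.append((d[-1] + eps, d[-1] + np.sum(l * r) + 1.0))  # root above max(d_i)
roots = np.array([bisect(lo, hi) for lo, hi in brackets])

eig = np.sort(np.linalg.eigvals(K).real)  # reference eigenvalues
lam = roots[0]
x = l / (lam - d)                          # eigenvector formula (54) with S = 1
resid = np.max(np.abs(K @ x - lam * x))
```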
\section{Application to equation (46)}
\par
Let us return to our equation (46). Comparing with the previous section, we can identify the matrix components in the following way.
\par
Diagonal components of the matrix are equal to:
$$
d_{{n}_1 n_2 n_3} =
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right] .
\eqno (62)$$
\par
The diagonal components depend only on $M$ (see (33)) and $J = \sum n_l$. Therefore we deal with the confluent case - see the previous section. Some eigenvalues are equal to $d_{{n}_1 n_2 n_3}$.
\par
The nondiagonal components of the matrix are equal to:
$$
l_{{n}_1 n_2 n_3} =
{\exp ( M ) \over M }
{1 \over {2^{n_1 + n_2 + n_3} n_1 ! n_2 ! n_3 !} }
\left(
{-2k \over \alpha }\right)^{{(} n_1 + n_2 + n_3 ) /2}
\times
\eqno (63)$$
$$
\times\
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{n}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{n}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{n}_3}
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
.
$$
$$
r_{{\nu}_1 \nu_2 \nu_3} =
\left(
{-2k \over \alpha }\right)^{ ( \nu_1 + \nu_2 + \nu_3 ) /2}
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{\nu}_3}
( \nu_1 + \nu_2 + \nu_3 )^2
.
\eqno (64)$$
\par
In all cases we replaced the index $i$ with the multi-index $( n_1 n_2 n_3 )$.
\par
The characteristic equation for $\lambda$ is
$$
\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
{l_{{n}_1 n_2 n_3} r_{{n}_1 n_2 n_3}
\over \lambda -
{d_{{n}_1 n_2 n_3} }}
=
\eqno (65)$$
$$
=
{e^M \over M }\sum_{ n_1 = 0 }^{ \infty }
\sum_{ n_2 = 0 }^{ \infty }
\sum_{ n_3 = 0 }^{ \infty }
{ \sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
\over \lambda - \sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\ }
\times
$$
$$
\times\
\left(
{-k \over \alpha }\right)^{ ( n_1 + n_2 + n_3 )}
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{2n}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{2n}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{2n}_3}
{( n_1 + n_2 + n_3 )^2
\over n_1 ! n_2 ! n_3 !} =
1 .
$$
\par
Let us once again calculate the partial sum $S_J$ of the terms for which $n_1 + n_2 + n_3 = J$. We have
$$
S_J =
{e^M \over M }\left(
{\alpha (J + M) J^2 \over \lambda - \alpha (J + M) }
\right)
{(-M)^J \over J! } .
\eqno (66)$$
$$
\sum_{J=0}^{\infty}
S_J =
\left(
1 - M
\right)
- {\lambda \over \alpha }{e^M \over M }M^{{-(} M - \lambda / \alpha )}\
\times
\eqno (67)$$
$$
\times\
\left[
\gamma ( ( 2 + M - \lambda / \alpha ), M) -
\gamma ( ( 1 + M - \lambda / \alpha ), M)
\right]
= 1
.
$$
\par
where $\gamma ( p, z)$ is the incomplete gamma function (see [4], [5])
$$
\gamma ( p, z) =
\int_{t=0}^{t=z}
e^{-t} t^{p-1}
dt .
\eqno (68)$$
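The resummation (66)-(67) can be verified numerically: the truncated series over $J$ is compared with the closed form in terms of the incomplete gamma function (68), the latter evaluated by straightforward quadrature. $M$ and $\xi$ are arbitrary test values ($\alpha$ is set to $1$, so $\lambda = \xi$).

```python
import numpy as np
from math import factorial

# Check of (66)-(67): truncated series over J versus the closed form with
# incomplete gamma functions.  M, xi are arbitrary test values, alpha = 1.
M, xi, N = 1.5, 0.3, 80

series = (np.exp(M) / M) * sum(
    (J + M) * J**2 / (xi - (J + M)) * (-M)**J / factorial(J)
    for J in range(N + 1))

def lower_gamma(p, x, n=200000):
    """Incomplete gamma function gamma(p, x) of (68) by trapezoidal quadrature."""
    t = np.linspace(1e-12, x, n)
    f = np.exp(-t) * t**(p - 1.0)
    return np.trapezoid(f, t) if hasattr(np, "trapezoid") else np.trapz(f, t)

closed = ((1.0 - M)
          - xi * (np.exp(M) / M) * M**(-(M - xi))
          * (lower_gamma(2.0 + M - xi, M) - lower_gamma(1.0 + M - xi, M)))
diff = abs(series - closed)
```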
\par
(65) now reads
$$
M
+ {\lambda \over \alpha }{e^M \over M }M^{{-(} M - \lambda / \alpha )}\
\left[
\gamma ( ( 2 + M - \lambda / \alpha ), M) -
\gamma ( ( 1 + M - \lambda / \alpha ), M)
\right]
= 0
.
\eqno (69)$$
\par
We see that all eigenvalues $\lambda$ are proportional to $\alpha$
$$
\lambda = \xi \alpha ,
\eqno (70)$$
\par
where the coefficients $\xi$ are roots of the equation
\boxit{
$$
M + \xi
{e^M \over M }M^{{-(} M - \xi )}\
\left[
\gamma ( ( 2 + M - \xi ), M) -
\gamma ( ( 1 + M - \xi ), M)
\right]
= 0
.
\eqno (71)$$
}
\par
We can further simplify (71) using the known recurrence (see [4], 9.2)
$$
\gamma (p+1, z) =
p \gamma (p, z) - z^p e^{-z} .
\eqno (72)$$
$$
M + \xi
{e^M \over M } M^{- ( M - \xi )}\
\left[
( 1 + M - \xi ) \gamma ( ( 1 + M - \xi ), M)
\right.
-
\eqno (73)$$
$$
\left.
-
M^{( 1 + M - \xi )} e^{-M} -
\gamma ( ( 1 + M - \xi ), M)
\right]
= 0
.
$$
$$
M + \xi
{e^M \over M }M^{{-(} M - \xi )}\
\left[
( M - \xi ) \gamma ( ( 1 + M - \xi ), M) -
M^{{(} 1 + M - \xi )} e^{-M}
\right]
= 0
.
\eqno (74)$$
$$
M - \xi + \xi
{e^M \over M }M^{{-(} M - \xi )}\
( M - \xi ) \gamma ( ( 1 + M - \xi ), M)
= 0
.
\eqno (75)$$
\par
One exact root of equation (75) can be found easily - this is the root
$$
\xi = M .
\eqno (76)$$
\par
The other roots satisfy the reduced equation
\boxit{
$$
1 + \xi
{e^M \over M }M^{{-(} M - \xi )}\
\gamma ( ( 1 + M - \xi ), M)
= 0
.
\eqno (77)$$
}
\par
Another form of (77) is obtained using the modified incomplete gamma function
$$
\gamma^* (p, x) =
{x^{-p} \over { \Gamma (p) } }
\gamma (p, x) .
\eqno (78)$$
\par
This form is
$$
1 + \xi
{e^M}
\Gamma ( 1 + M - \xi )\
\gamma^* ( ( 1 + M - \xi ), M)
= 0
.
\eqno (79)$$
\par
The advantage of this form is that $\gamma^* ( p, x)$ is a single-valued analytic function of $p$ and $x$ possessing no finite singularities.
\par
We calculate some roots of equation (77) using the series form (66 - 67) of the characteristic function. Namely, we keep only a finite number of terms in the sum (66 - 67) and solve the resulting algebraic equation using Newton's method. As the number of terms increases, the roots converge rapidly.
\par
As an initial approximation for the roots we use the positions of the poles, that is, we search for roots in the close vicinity of a pole. We keep the corresponding term and approximate the contribution of the remaining poles by the first two terms of a Taylor series. This gives a quadratic equation. When this equation has two conjugate complex roots, Newton's iterations converge to two conjugate roots of the full equation. When the quadratic equation has two real roots, one root is really located in the vicinity of the pole and the other is far enough away that Newton's iterations diverge.
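A simplified variant of this procedure, restricted to the real branch where bisection suffices, is sketched below: the characteristic function is truncated in its series form (66), and the real root sitting just above the pole at $\xi = J_0 + M$ is bracketed and bisected. $M = 1$ and $J_0 = 8$ are arbitrary test choices; the complex roots require the Newton continuation described above.

```python
from math import exp, factorial

# Root hunt on the real branch of the truncated characteristic function:
# F(xi) = sum_J S_J - 1 with S_J from (66), alpha = 1, lambda = xi.
# M = 1 and J0 = 8 (pole at xi = 9) are arbitrary test choices.
M, N, J0 = 1.0, 40, 8

def F(xi):
    return (exp(M) / M) * sum(
        (J + M) * J**2 / (xi - (J + M)) * (-M)**J / factorial(J)
        for J in range(1, N + 1)) - 1.0      # J = 0 term vanishes (J^2 = 0)

lo, hi = (J0 + M) + 1e-6, (J0 + M) + 0.5     # F > 0 at lo (positive residue), F < 0 at hi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if F(lo) * F(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)                        # real root just above the pole
```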
\par
The results of our calculations are presented in Table 1 (see APPENDIX). We see that:
\par\noindent
- All roots have positive real part. Therefore there exists a root with the least real part, and the roots can be ordered by their real parts in ascending order. In the following we suppose that such an ordering is done.
\par\noindent
- For each $M$ only a finite number of roots are complex, and the imaginary part of the roots decreases with the root number.
\par\noindent
- Starting from some root, all roots are real (the tail).
\par\noindent
- The real roots are located very close to the poles (natural numbers) and rapidly become indistinguishable from them.
\par
At this stage we shall content ourselves with these experimental results and shall not try to give them a rigorous proof. The theory of the distribution of roots of the analytic functions in question is rather ample (see [6], [7], [8]).
\par
As we saw in the previous section, besides the roots of (77) there exist eigenvalues which are exactly equal to the diagonal values $d_{{n}_1 n_2 n_3}$ (62). They correspond to the zero-pressure solutions from our work [2]. The orthogonality conditions (52), which must be satisfied by the eigenfunctions, according to (64) and (43) mean that $P_{{m}_1 m_2 m_3} (t) = 0$.
\section{Solution of Cauchy problem}
\par
In this section we describe the construction of the solution of the linearized Fokker - Planck equation for incompressible fluid.
\par
1) To set up the Cauchy problem we must set the initial value of the $n$ variable. This initial value $n_0 = n_0 (x_1 , x_2 , x_3 , v_1 , v_2 , v_3 )$ must satisfy the incompressibility condition (1). There is no need to set an initial value of the pressure, because it is fully determined by $n$ (see (42)).
\par
2) Calculate the Fourier coefficients $A_{ m_1 m_2 m_3 p_1 p_2 p_3 } (0)$ (see [3] for details):
$$
A_{ m_1 m_2 m_3 p_1 p_2 p_3 }(0) =
{1 \over 2^{{p}_1 + p_2 + p_3}
p_1 !
p_2 !
p_3 !}\
\left(
{\alpha \over 2 \pi k}
\right)^{ {3 \over 2 }}
{1 \over a_1 a_2 a_3}
\times
\eqno (80)$$
$$
\times
\exp
\left[
{\alpha \over 2k }\
\left(
{4 \pi k \over \alpha^2}
\right)^2
\left(
\left(
{m_1 \over a_1}
\right)^2 +
\left(
{m_2 \over a_2}
\right)^2 +
\left(
{m_3 \over a_3}
\right)^2
\right)
\right]
\times
$$
$$
\times
\int_0^{{a}_1}
dx_1
\int_0^{{a}_2}
dx_2
\int_0^{{a}_3}
dx_3
\int_{{-} \infty}^{\infty}
\int_{{-} \infty}^{\infty}
\int_{{-} \infty}^{\infty}
n_0 (x_1 , x_2 , x_3 , v_1 , v_2 , v_3 )\ \
\psi_{ m_1 m_2 m_3 p_1 p_2 p_3 }
dv_1
dv_2
dv_3
.
$$
\par\noindent
where
$$
\psi_{ m_1 m_2 m_3 n_1 n_2 n_3 } =
\prod_{{j=1}}^{{j=3}}
\exp
\left(
- 2 \pi i {m_j \over a_j }( x_j + {v_j \over \alpha })
\right) \
H_{{n}_j}
\left(
\sqrt {{\alpha \over 2k }}
\left(
v_j +
{4 \pi i {m_j} k \over \alpha^2 a_j}
\right)
\right)
.
\eqno (81)$$
\par
3) Change from the eigenfunctions of the simple Fokker - Planck equation to the eigenfunctions of the linearized Fokker - Planck equation
for incompressible fluid. According to (61) the projection of the vector $A_k$ on the eigenvector $x_{\lambda}$, corresponding to the eigenvalue $\lambda$, is
$$
A_{\lambda} =
{ \sum_k A_k y_k \over \sum_k y_k y_k }
=
\left(
{ \sum_k A_k {r_k \over ( \lambda - d_k )} }
\right)\ \
/\ \
\left(
{ \sum_k {r_k^2 \over ( \lambda - d_k )^2} }
\right)
\eqno (82)$$
\par\noindent
or according to (64)
$$
A_{ m_1 m_2 m_3 \lambda }(0)
=
\left(
\sum_{{\nu}_1 \nu_2 \nu_3}
A_{ m_1 m_2 m_3 \nu_1 \nu_2 \nu_3 }(0)\
\left(
{-2k \over \alpha }\right)^{ ( \nu_1 + \nu_2 + \nu_3 ) /2}
\right.
\times
\eqno (83)$$
$$
\times
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{\nu}_3}
\times
$$
$$
\times
\left.
{ ( \nu_1 + \nu_2 + \nu_3 )^2
\over ( \lambda - d_{{\nu}_1 \nu_2 \nu_3} )}
\right)\ \
/\ \
\left(
{ \sum_{{\nu}_1 \nu_2 \nu_3} {r_{{\nu}_1 \nu_2 \nu_3}^2 \over ( \lambda - d_{{\nu}_1 \nu_2 \nu_3} )^2} }
\right)
$$
\par\noindent
where $d_{{n}_1 n_2 n_3}$ is defined by (62) and $r_{{n}_1 n_2 n_3}$ by (64).
\par
4) The initial field can contain some zero pressure solutions. Let us suppose that there exists a group of coefficients with constant $J = \sum \nu_l$ for which
$$
\sum_{ \nu_1 + \nu_2 + \nu_3 = J}
A_{ m_1 m_2 m_3 \nu_1 \nu_2 \nu_3 }(0)\
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{\nu}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{\nu}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{\nu}_3}
= 0 .
\eqno (84)$$
\par
This group does not contribute to any $A_{ m_1 m_2 m_3 \lambda }$ (see (83)). Such groups, if present, must be considered separately. They correspond to the zero pressure solutions of our work [2].
\par
5) Given the initial values of $A_{\lambda}$, we can calculate their values at an arbitrary moment $t$ according to the exponential law
$$
A_{ m_1 m_2 m_3 \lambda }(t) =
e^{{-} \lambda t }
A_{ m_1 m_2 m_3 \lambda }(0) .
\eqno (85)$$
\par
6) The evolution of a zero pressure solution is determined by the exponential multiplier $e^{{-} \alpha (J+M) t}$.
\par
7) The inverse transition from $A_{\lambda}$ to $A_k$ is
$$
A_k = \sum_{\lambda}
A_{\lambda} {l_k \over ( \lambda - d_k )}
\eqno (86)$$
\par\noindent
or according to (63)
$$
A_{ m_1 m_2 m_3 n_1 n_2 n_3 }(t)
= \sum_{\lambda}
A_{ m_1 m_2 m_3 \lambda }(t)
{\exp ( M ) \over M }\times
\eqno (87)$$
$$
\times
{1 \over 2^{{n}_1 + n_2 + n_3} n_1 ! n_2 ! n_3 !}
\left(
{-2k \over \alpha }\right)^{{(} n_1 + n_2 + n_3 ) /2}
\times
$$
$$
\times\
\left(
{2 \pi m_1 \over \alpha a_1}
\right)^{{n}_1}
\left(
{2 \pi m_2 \over \alpha a_2}
\right)^{{n}_2}
\left(
{2 \pi m_3 \over \alpha a_3}
\right)^{{n}_3}
\sum_{{j=1}}^{{j=3}}
\left[
\alpha n_j +
k\ \left(
{2 \pi m_j \over \alpha a_j }
\right)^2
\right]\
{1
\over ( \lambda - d_{{n}_1 n_2 n_3} )} .
$$
\par\noindent
where $d_{{n}_1 n_2 n_3}$ is defined by (62).
\par
8) Contribution from zero pressure solutions must be added to (87).
\par
9) Pressure for each moment of time is determined by (44).
\shead{DISCUSSION}
\par\noindent
We see that the spectral properties of the linearized Fokker - Planck differential operator for incompressible fluid differ from those of the usual operator. The general spectrum structure is roughly similar, but the eigenvalues nearest to zero are complex. Therefore the most slowly damped modes are the most strongly oscillating ones, a very interesting result. All modes decay with time, so the flows tend to rest.
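As a quick numerical illustration of this point (not part of the text), the sketch below evolves a single spectral coefficient according to the exponential law $A_{\lambda}(t) = e^{- \lambda t} A_{\lambda}(0)$ of (85), using the first complex eigenvalue for $M=1$ from the table in Appendix 1; the unit initial coefficient is an arbitrary choice:

```python
import cmath
import math

# First complex eigenvalue for M = 1 (Appendix 1); its conjugate is also a root.
lam = complex(3.84958810, -1.92315575)

def mode_amplitude(a0, lam, t):
    """Evolution of one spectral coefficient, A(t) = exp(-lam * t) * A(0), cf. (85)."""
    return cmath.exp(-lam * t) * a0

# |A(t)| decays like exp(-Re(lam) * t) while the phase rotates at rate |Im(lam)|:
# the slowest-damped modes are also oscillating.
for t in [0.0, 0.5, 1.0, 2.0]:
    a = mode_amplitude(1.0, lam, t)
    assert abs(abs(a) - math.exp(-lam.real * t)) < 1e-12
```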
\rule{2in}{1pt}
\shead{REFERENCES}
\begin{IPlist}
\IPitem{{[1]}}
Igor A. Tanski. Fokker - Planck equation for incompressible fluid.
arXiv:0812.2303v2 [nlin.CD] 3 Feb 2009
\IPitem{{[2]}}
Igor A. Tanski. Two simple solutions of nonlinear Fokker - Planck equation for incompressible fluid.
arXiv:0812.4795v2 [nlin.CD] 25 Feb 2009
\IPitem{{[3]}}
Igor A. Tanski. Spectral decomposition of 3D Fokker - Planck differential operator.
arXiv:nlin/0607050v3 [nlin.CD] 25 Jun 2007
\IPitem{{[4]}}
H. Bateman, A. Erdelyi. Higher transcendental functions.
vol. 2, Mc Graw-Hill, New York, 1953
\IPitem{{[5]}}
M. Abramovitz, I. A. Stegun, Handbook of Mathematical Functions.
National Bureau of Standards, 1970
\IPitem{{[6]}}
K. S. Koelbig. On the zeros of incomplete gamma function.
Mathematics of Computation, vol. 26, num. 119, Jul 1972
\IPitem{{[7]}}
Walter Gautschi. The incomplete gamma functions since Tricomi.
In Tricomi's Ideas and Contemporary Applied Mathematics, Atti dei Convegni Lincei, n. 147, Accademia Nazionale dei Lincei
\IPitem{{[8]}}
A. M. Sedlecki. Zeros of Mittag-Leffler functions.
Matematicheskie zametki, vol. 68, num. 5, Nov. 2000
\end{IPlist}\newpage
\par
APPENDIX 1
\par
Roots of equation (71)
\par
The root $\xi = M$ is omitted
\begin{tabular} { || l | l | l | l || } \hline \hline
NN&M=1&M=2&M=3\\ \hline \hline
1&3.84958810 - 1.92315575 i&4.52745332 - 3.27206660 i&5.04119504 - 4.34568739 i\\ \hline
2&3.84958810 + 1.92315575 i&4.52745332 + 3.27206660 i&5.04119504 + 4.34568739 i\\ \hline
3&5.94063198 - 1.14455587 i&7.06814092 - 2.67480339 i&7.90915778 - 3.92341797 i\\ \hline
4&5.94063198 + 1.14455587 i&7.06814092 + 2.67480339 i&7.90915778 + 3.92341797 i\\ \hline
5&7.69165960 - 0.29172595 i&9.14966499 - 1.97226724 i&10.24525406 - 3.33703003 i\\ \hline
6&7.69165960 + 0.29172595 i&9.14966499 + 1.97226724 i&10.24525406 + 3.33703003 i\\ \hline
7&11.000932211&11.00141191 - 1.21959567 i&12.31747194 - 2.67114557 i\\ \hline
8&11.999889388&11.00141191 + 1.21959567 i&12.31747194 + 2.67114557 i\\ \hline
9&13.000011755&12.72299911 - 0.42498830 i&14.22286668 - 1.95786294 i\\ \hline
10&13.999998866&12.72299911 + 0.42498830 i&14.22286668 + 1.95786294 i\\ \hline
11&15.000000099&14.07454792&16.00982241 - 1.21325010 i\\ \hline
12&15.999999999&14.98376860&16.00982241 + 1.21325010 i\\ \hline
13&17.000000000&16.00272440&17.72151356 - 0.43464100 i\\ \hline
14&17.999999999&16.99956146&17.72151356 + 0.43464100 i\\ \hline
15&19.000000000&18.00006506&19.98044555\\ \hline
16&19.999999999&18.99999098&19.08192627\\ \hline
17&21.000000000&20.00000116&19.98044554\\ \hline
18&22.000000000&20.99999985&21.00363177\\ \hline
19&23.000000000&22.00000001&21.99933574\\ \hline
20&24.000000000&22.99999999&23.00011373\\ \hline \hline
\end{tabular}
\begin{tabular} { || l | l | l | l || } \hline \hline
NN&M=4&M=5&M=6\\ \hline \hline
1&5.47498406 - 5.26768710 i&5.85889775 - 6.08913226 i&6.20771808 - 6.83731700 i\\ \hline
2&5.47498406 + 5.26768710 i&5.85889775 + 6.08913226 i&6.20771808 + 6.83731700 i\\ \hline
3&8.61344667 - 5.01144505 i&9.23406886 - 5.99054148 i&9.79675137 - 6.88888105 i\\ \hline
4&8.61344667 + 5.01144505 i&9.23406886 + 5.99054148 i&9.79675137 + 6.88888105 i\\ \hline
5&11.15620212 - 4.53812831 i&11.95509246 - 5.62665552 i&12.67703265 - 6.63086922 i\\ \hline
6&11.15620212 + 4.53812831 i&11.95509246 + 5.62665552 i&12.67703265 + 6.63086922 i\\ \hline
7 &13.40577133 - 3.95833350 i&14.35634208 - 5.13111612 i&15.21274497 - 6.21756080 i\\ \hline
8 &13.40577133 + 3.95833350 i&14.35634208 + 5.13111612 i&15.21274497 + 6.21756080 i\\ \hline
9 &15.47089306 - 3.31449740 i&16.55737228 - 4.55585073 i&17.53366817 - 5.70963951 i\\ \hline
10&15.47089306 + 3.31449740 i&16.55737228 + 4.55585073 i&17.53366817 + 5.70963951 i\\ \hline
11&17.40539050 - 2.62778318 i&18.61704841 - 3.92690815 i&19.70341924 - 5.13774003 i\\ \hline
12&17.40539050 + 2.62778318 i&18.61704841 + 3.92690815 i&19.70341924 + 5.13774003 i\\ \hline
13&19.24064677 - 1.91036878 i&20.56955610 - 3.25932488 i&21.75880445 - 4.51960861 i\\ \hline
14&19.24064677 + 1.91036878 i&20.56955610 + 3.25932488 i&21.75880445 + 4.51960861 i\\ \hline
15&20.99699656 - 1.17010968 i&22.43688595 - 2.56260936 i&23.72343062 - 3.86649261 i\\ \hline
16&20.99699656 + 1.17010968 i&22.43688595 + 2.56260936 i&23.72343062 + 3.86649261 i\\ \hline
17&22.70576400 - 0.39532166 i&24.23420403 - 1.84317596 i&25.61354419 - 3.18599046 i\\ \hline
18&22.70576400 + 0.39532166 i&24.23420403 + 1.84317596 i&25.61354419 + 3.18599046 i\\ \hline
19&24.07528691&25.97268205 - 1.10579718 i&27.44091387 - 2.48348667 i\\ \hline
20&24.98146932&25.97268205 + 1.10579718 i&27.44091387 + 2.48348667 i\\ \hline \hline
\end{tabular}
\begin{tabular} { || l | l | l | l || } \hline \hline
NN&M=7&M=8&M=9\\ \hline \hline
1&6.53001156 - 7.52895368 i&6.83127967 - 8.17518985 i&7.11531330 - 8.78391538 i\\ \hline
2&6.53001156 + 7.52895368 i&6.83127967 + 8.17518985 i&7.11531330 + 8.78391538 i\\ \hline
3&10.31618762 - 7.72399088 i&10.80167947 - 8.50772212 i&11.25955129 - 9.24856515 i\\ \hline
4&10.31618762 + 7.72399088 i&10.80167947 + 8.50772212 i&11.25955129 + 9.24856515 i\\ \hline
5&13.34199931 - 7.56851364 i&13.96257983 - 8.45167276 i&14.54728640 - 9.28905550 i\\ \hline
6&13.34199931 + 7.56851364 i&13.96257983 + 8.45167276 i&14.54728640 + 9.28905550 i\\ \hline
7 &15.99974275 - 7.23543754 i&16.73291571 - 8.19691791 i&17.42277980 - 9.11081249 i\\ \hline
8 &15.99974275 + 7.23543754 i&16.73291571 + 8.19691791 i&17.42277980 + 9.11081249 i\\ \hline
9 &18.42897216 - 6.79355442 i&19.26164201 - 7.81977962 i&20.04405940 - 8.79717049 i\\ \hline
10&18.42897216 + 6.79355442 i&19.26164201 + 7.81977962 i&20.04405940 + 8.79717049 i\\ \hline
11&20.69785002 - 6.27781735 i&21.62130874 - 7.35928644 i&22.48793355 - 8.39101050 i\\ \hline
12&20.69785002 + 6.27781735 i&21.62130874 + 7.35928644 i&22.48793355 + 8.39101050 i\\ \hline
13&22.84567896 - 5.70854973 i&23.85362358 - 6.83821861 i&24.79844419 - 7.91745869 i\\ \hline
14&22.84567896 + 5.70854973 i&23.85362358 + 6.83821861 i&24.79844419 + 7.91745869 i\\ \hline
15&24.89759650 - 5.09866262 i&25.98519281 - 6.27108315 i&27.00361964 - 7.39255586 i\\ \hline
16&24.89759650 + 5.09866262 i&25.98519281 + 6.27108315 i&27.00361964 + 7.39255586 i\\ \hline
17&26.87086392 - 4.45689278 i&28.03424697 - 5.66771323 i&29.12262537 - 6.82719343 i\\ \hline
18&26.87086392 + 4.45689278 i&28.03424697 + 5.66771323 i&29.12262537 + 6.82719343 i\\ \hline
19&28.77796298 - 3.78943945 i&30.01394562 - 5.03509475 i&31.16927222 - 6.22911873 i\\ \hline
20&28.77796298 + 3.78943945 i&30.01394562 + 5.03509475 i&31.16927222 + 6.22911873 i\\ \hline \hline
\end{tabular}
\end{document}
\section{Introduction}
Let $(\Omega, {\mathcal F}, P)$ be a probability space and $T>0$ a fixed time.
$(W(t))_{t\geq0}$ will be a cylindrical Brownian motion on ${L^2(0,1)}$.
We consider the stochastic heat equation, written in abstract form in
${L^2(0,1)}$: $X(0)=0$, for all $t\in[0,T]$ $X(t,0)=X(t,1)=0$ and
\begin{equation}dX(t)={{1\over2}{d^2\over dx^2}} X(t)dt+dW(t).\label{eq-X}
\end{equation}
It is well known that this equation admits
a unique weak solution (from the analytical point of view).
Let $N\in\mathbb N^*$ and $h:=T/N$.
Consider $(t_k)_{0\leq k\leq N}$ the uniform subdivision of $[0,T]$
defined by $t_k:=kh$.
We consider the implicit Euler scheme defined as follows:
\def{X^N}{{X^N}}
\def{(t_{k+1})}{{(t_{k+1})}}
\def{(t_k)}{{(t_k)}}
\def\Delta W(k+1){\Delta W(k+1)}
\begin{equation}{X^N}{(t_{k+1})}={X^N}{(t_k)}+h{{1\over2}{d^2\over dx^2}}{X^N}{(t_{k+1})}+\Delta W(k+1),\label{eq-XN}
\end{equation}
where $\Delta W(k+1)=W{(t_{k+1})}-W{(t_k)}$.
\def{\mathbb R}{{\mathbb R}}
\def\left({\left(}
\def\right){\right)}
Let $f:{L^2(0,1)}\rightarrow{\mathbb R}$ be a functional.
The strong error is the study of $E\left|{X^N}(T)-X(T)\right|^2_{L^2(0,1)}$.
The weak error is the study of $\left| Ef\left({X^N}(T)\right)-Ef\left( X(T)\right)\right|$
with respect to the time mesh $h$.
In \cite{Debussche}, A.~Debussche
considers a more general stochastic equation and a more general functional
than the one considered here. He obtains a weak error of order $1/2$,
which is the double of that proved by \cite{PJ} for the strong speed of
convergence.
The novelty of this paper is to prove that, for the square of the norm, the weak
error is better than $1/2$ in negative Sobolev spaces.
\section{Preliminaries and main result}
\subsection*{Notations}
We collect here some of the notations used throughout the paper.
$<.,.>_{{L^2(0,1)}}$ is the inner product in $L^2(0,1)$,
$H^1_0(0,1)$ is the Sobolev space of functions $f$ in ${L^2(0,1)}$
vanishing in 0 and 1 with first derivatives in ${L^2(0,1)}$,
$H^2(0,1)$ is the Sobolev space of functions $f$ in ${L^2(0,1)}$ with first and second derivatives in ${L^2(0,1)}$.
Finally, for $m=1,2,\dots$, let $e_m(x)=\sqrt{2}\sin(m\pi x)$ and
$\lambda_m={{1\over2}}(\pi m)^2$ denote the eigenfunctions and eigenvalues of $-{{1\over2}}\Delta$
with Dirichlet boundary conditions on $(0,1)$.
An ${L^2(0,1)}$-valued stochastic process $\left( X(t)\right)_{t\in[0,T]}$
is said to be a solution of \eqref{eq-X} if:
$X(0)=0$ and for all $g\in H^1_0(0,1)\cap H^2(0,1)$ we have
$$<X(t),g>_{L^2(0,1)}=\int_0^t<X(s),{{1\over2}{d^2\over dx^2}} g>_{L^2(0,1)} ds+<W(t),g>_{L^2(0,1)}.$$
It is well known that \eqref{eq-X} admits a unique solution: see \cite{DaPrato}.
Recall that $(e_m)_{m\geq1}$ is a complete orthonormal basis of ${L^2(0,1)}$.
\def{\lambda_m}{{\lambda_m}}
\defX_\lm{{X_{\lambda_m}}}
Set ${\lambda_m}:={{1\over2}}(\pi m)^2$ and $W_{\lambda_m}(t):=\left<W(t),e_m\right>_{L^2(0,1)}$,
and let $X_\lm(t)$ denote the solution of the evolution equation:
$X_{\lambda_m}(0)=0$ and for $t>0$:
$$dX_\lm(t) = -{\lambda_m} X_\lm(t)dt+dW_{\lambda_m}(t).$$
Then the processes $\left(X_\lm(.)\right)_{m\geq1}$ are independent and
$X(t)=\sum_{m\geq1}X_\lm(t)e_m$ for all $t\geq0$.
A sequence $\left({X^N}{(t_k)}\right)_{k=0,\dots,N}$ of ${L^2(0,1)}$-valued random variables is said to be a
solution of \eqref{eq-XN} if:
${X^N}(t_0)=0$ and for all $k=0,\dots,N-1$ and for all
$g\in H^1_0(0,1)\cap H^2(0,1)$ we have
\def{X^N}{{X^N}}
\begin{align*}
<{X^N}{(t_{k+1})},g>_{L^2(0,1)} =& <{X^N}{(t_k)},g>_{L^2(0,1)} +h<{X^N}{(t_{k+1})},{{1\over2}{d^2\over dx^2}} g>_{L^2(0,1)}\\
&+<\Delta W(k+1),g>_{L^2(0,1)}.
\end{align*}
It is well known that \eqref{eq-XN} has a unique solution and there exists
a constant $C>0$, independent of $N$, such that
$E\left|{X^N}(T)-X(T)\right|^2_{L^2(0,1)}\leq Ch^{{{1\over2}}}$, where $h=T/N$.
\defX^N_\lm{{X^N_{\lambda_m}}}
Now denote by $\left(X^N_\lm{(t_k)}\right)_{k=0,\dots,N}$ the solution of:
$X^N_\lm(t_0)=0$ and for $k=0,\dots,N-1$
$$X^N_\lm{(t_{k+1})} = X^N_\lm{(t_k)} -{\lambda_m} hX^N_\lm{(t_{k+1})} +\Delta W_{\lambda_m}(k+1).$$
The random vectors $(X^N_\lm(t_k), k=0,\dots,N)_{m=1,2,\dots}$ are independent and
${X^N}{(t_k)}=\sum_{m\geq1}X^N_\lm{(t_k)} e_m$.
\def{H^{-p}}{{H^{-p}}}
\def\sum_{m\geq1}{\sum_{m\geq1}}
Let $p\geq0$; we define the spaces ${H^{-p}}$ as the completion of ${L^2(0,1)}$
for the topology induced by the norm
\def\lambda^{-p}_m{\lambda^{-p}_m}
$\left| u\right|^2_{H^{-p}}:=\sum_{m\geq1}\lambda^{-p}_m<u,e_m>_H^2$.
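For illustration, the $H^{-p}$ norm of a finite combination of the eigenfunctions $e_m$ is a simple weighted sum. The sketch below (with hypothetical coefficients) computes it directly from the definition, using $\lambda_m = {{1\over2}}(\pi m)^2$:

```python
import math

def hminus_p_norm_sq(coeffs, p):
    """|u|_{H^{-p}}^2 for u = sum_m coeffs[m-1] * e_m, with lambda_m = (pi*m)^2 / 2."""
    return sum(c * c / (0.5 * (math.pi * (m + 1))**2)**p
               for m, c in enumerate(coeffs))

# p = 0 recovers the squared L^2 norm
assert abs(hminus_p_norm_sq([1.0, 1.0], 0.0) - 2.0) < 1e-12
# for p > 0, higher modes are weighted down more strongly
assert hminus_p_norm_sq([0.0, 1.0], 0.25) < hminus_p_norm_sq([1.0, 0.0], 0.25)
```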
The following theorem improves the speed of convergence of $X^N$ to $X$
in negative Sobolev spaces.
\begin{theorem} \label{th1}
Suppose that $h<1$ and let $p\in[0,{{1\over2}})$.
There exists a constant $C>0$, independent of $N$, such that
$$\left| E\left|{X^N}(T)\right|^2_{H^{-p}}-E\left| X(T)\right|^2_{H^{-p}}\right|\leq C h^{p+{{1\over2}}}.$$
\end{theorem}
\section{Proof of Theorem \ref{th1}}
The proof of the theorem will be done in several steps.
First we recall the weak error of the Ornstein-Uhlenbeck process.
Secondly we prove some technical lemmas.
Then we decompose the weak error and analyse each term of
this decomposition.
\subsection{Weak error of the Ornstein-Uhlenbeck process}
\defW_{\lambda}{W_{\lambda}}
\defX_{\lambda}{X_{\lambda}}
Let $\lambda>0$, let $(W_{\lambda}(t))_{t\geq0}$ be a one dimensional Brownian motion
and let $(X_{\lambda}(t))_{t\geq0}$ be the Ornstein-Uhlenbeck process, solution of the following stochastic
differential equation:
$X_{\lambda}(0)=x\in{\mathbb R}$ and
\begin{equation}dX_{\lambda}(t)=-\lambdaX_{\lambda}(t)dt+dW_{\lambda}(t).\label{eq-ou}
\end{equation}
In this step, we study two properties associated with this process:
the Kolmogorov equation and the implicit Euler scheme.
Let $\left( X^{t,x}_{\lambda}(s)\right)_{t\leq s\leq T}$ be the solution of \eqref{eq-ou}
starting from $x$ at time $t$.
It is well known that $X^{t,x}_{\lambda}(T)$ is a normal random variable:
$$X^{t,x}_{\lambda}(T) \sim
\mathcal{N}\left( e^{-\lambda(T-t)}x,{1-e^{-2\lambda(T-t)}\over 2\lambda}\right).$$
For $t\in[0,T]$ and $x\in{\mathbb R}$ set $u_{\lambda}(t,x):=E\left| X^{t,x}_{\lambda}(T)\right|^2$.
Then $u_{\lambda}$ is the solution of the following partial differential equation,
called Kolmogorov equation:
for all $x\in{\mathbb R}$, $u_{\lambda}(T,x)=\left| x\right|^2$
and for all $(t,x)\in[0,T)\times{\mathbb R}$
\def{\partial\over\partial t}{{\partial\over\partial t}}
\def{\partial\over\partial x}{{\partial\over\partial x}}
\def{\partial^2\over\partial x^2}{{\partial^2\over\partial x^2}}
\begin{equation}
-{\partial\over\partial t} u(t,x)={{1\over2}}{\partial^2\over\partial x^2} u(t,x)-\lambda x{\partial\over\partial x} u(t,x).\label{eq-kol}
\end{equation}
Since $X^{t,x}_{\lambda}(T)$ has a normal law, we can write $u_{\lambda}$
explicitly:
\begin{equation}
u_{\lambda}(t,x)={1-e^{-2\lambda(T-t)}\over2\lambda}
+e^{-2\lambda(T-t)}x^2.\label{eq-u}
\end{equation}
With this expression we see that $u_{\lambda}\in C^{1,2}([0,T]\times{\mathbb R})$
and we have the following derivatives:
\defu_{\lambda}(t,x){u_{\lambda}(t,x)}
\defe^{-2\lambda(T-t)}{e^{-2\lambda(T-t)}}
\def{\partial^2\over\partial t\partial x}{{\partial^2\over\partial t\partial x}}
\begin{align}
{\partial\over\partial x}u_{\lambda}(t,x)=&2e^{-2\lambda(T-t)} x,\label{eq-ux}\\
{\partial^2\over\partial x^2}u_{\lambda}(t,x)=&2e^{-2\lambda(T-t)},\label{eq-uxx}\\
{\partial\over\partial t}u_{\lambda}(t,x)=&-e^{-2\lambda(T-t)}+2\lambdae^{-2\lambda(T-t)} x^2,\label{eq-ut}\\
\dtxu_{\lambda}(t,x) = &4\lambdae^{-2\lambda(T-t)} x.\label{eq-utx}
\end{align}
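As a numerical sanity check (outside the text), one can verify that the explicit expression \eqref{eq-u} and the derivatives above satisfy the Kolmogorov equation \eqref{eq-kol} and the terminal condition; the values of $T$, $\lambda$, $t$ and $x$ below are arbitrary test choices:

```python
import math

def u(t, x, lam, T):
    # explicit expression for u_lambda(t, x)
    tau = T - t
    return (1 - math.exp(-2 * lam * tau)) / (2 * lam) + math.exp(-2 * lam * tau) * x**2

def ux(t, x, lam, T):
    return 2 * math.exp(-2 * lam * (T - t)) * x

def uxx(t, x, lam, T):
    return 2 * math.exp(-2 * lam * (T - t))

def ut(t, x, lam, T):
    tau = T - t
    return -math.exp(-2 * lam * tau) + 2 * lam * math.exp(-2 * lam * tau) * x**2

T, lam = 1.0, 0.5 * math.pi**2  # lam = lambda_1, an arbitrary test value
for t in [0.0, 0.3, 0.9]:
    for x in [-1.0, 0.5, 2.0]:
        # Kolmogorov equation: -u_t = (1/2) u_xx - lam * x * u_x
        lhs = -ut(t, x, lam, T)
        rhs = 0.5 * uxx(t, x, lam, T) - lam * x * ux(t, x, lam, T)
        assert abs(lhs - rhs) < 1e-9
# terminal condition u(T, x) = x^2
assert abs(u(T, 1.5, lam, T) - 1.5**2) < 1e-12
```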
\defX^N_{\lambda}{X^N_{\lambda}}
\def{W_\lambda}{{W_\lambda}}
\def{1\over1+\lambda h}{{1\over1+\lambda h}}
\def\Delta W_{\lambda}{\Delta W_{\lambda}}
The implicit Euler scheme for the Ornstein-Uhlenbeck equation \eqref{eq-ou}
starting from 0 at time $t_0$, is defined as follows:
$X^N_{\lambda}(t_0)=0$ and for $k=0,\dots,N-1$
\begin{equation}X^N_{\lambda}{(t_{k+1})}=X^N_{\lambda}{(t_k)}-\lambda hX^N_{\lambda}{(t_{k+1})}
+\Delta W_{\lambda}(k+1),\label{eq-xnl}
\end{equation}
where $\Delta W_{\lambda}(k+1)={W_\lambda}{(t_{k+1})}-{W_\lambda}{(t_k)}$.
Since the scheme can be rewritten as
\begin{equation}
X^N_{\lambda}{(t_{k+1})}=\ulhX^N_{\lambda}{(t_k)}+{1\over1+\lambda h}\Delta W_{\lambda}(k+1),\label{eq-xnld}
\end{equation}
we see that the scheme is well defined.
\begin{lemma} \label{lem-xsum}
For $k=1,\dots,N$ we have
$X^N_{\lambda}{(t_k)}=\sum_{j=0}^{k-1}{\Delta W_{\lambda}(k-j)\over(1+\lambda h)^{j+1}}.$
\end{lemma}
\proof We proceed by induction.
If $k=1$, we have $X^N_{\lambda}(t_1)={1\over1+\lambda h}\Delta W_{\lambda}(1)$.
Suppose the result true until $k$.
Using \eqref{eq-xnld}, we have
\begin{align*}
X^N_{\lambda}{(t_{k+1})}=&\sum_{j=0}^{k-1}{\Delta W_{\lambda}(k-j)\over(1+\lambda h)^{j+2}}
+{1\over1+\lambda h}\Delta W_{\lambda}(k+1)\\
=&\sum_{l=1}^k{\Delta W_{\lambda}(k+1-l)\over(1+\lambda h)^{l+1}}
+{1\over(1+\lambda h)^{0+1}}\Delta W_{\lambda}(k+1-0),
\end{align*}
which concludes the proof.
\endproof
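Lemma \ref{lem-xsum} can be checked numerically against the recursion \eqref{eq-xnld} for fixed pseudo-random increments; this is only an illustration, and the parameter values below are arbitrary:

```python
import random

random.seed(0)
lam, h, N = 2.0, 0.1, 8
dW = [random.gauss(0.0, h**0.5) for _ in range(N)]  # dW[j] stands for Delta W(j+1)

# recursion: X(t_{k+1}) = (X(t_k) + Delta W(k+1)) / (1 + lam*h)
X = [0.0]
for k in range(N):
    X.append((X[k] + dW[k]) / (1 + lam * h))

# closed form of the lemma: X(t_k) = sum_{j=0}^{k-1} Delta W(k-j) / (1+lam*h)^(j+1)
for k in range(1, N + 1):
    s = sum(dW[k - 1 - j] / (1 + lam * h)**(j + 1) for j in range(k))
    assert abs(X[k] - s) < 1e-12
```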
\begin{lemma}\label{lem-EX}
For all $k=0,\dots,N$, we have the following bound
$E\left|X^N_{\lambda}{(t_k)}\right|^2 \leq {1\over2\lambda}.$
\end{lemma}
\proof
Using the independence of the increments of the Brownian motion and
Lemma \ref{lem-xsum}, we have
\begin{align*}
E\left|X^N_{\lambda}{(t_k)}\right|^2 =& \sum_{j=0}^{k-1} {1\over(1+\lambda h)^{2(j+1)}}
E\left|\Delta W_{\lambda}(k-j)\right|^2
= h\sum_{j=0}^{k-1} {1\over(1+\lambda h)^{2(j+1)}}.
\end{align*}
\def\lambda h{\lambda h}
Let $a:=1/(1+\lambda h)^2$; we deduce that $E\left|X^N_{\lambda}{(t_k)}\right|^2 = ha {1-a^k\over1-a}$.
Simple computations yield $ha/(1-a) = 1/(2\lambda+\lambda^2h)$,
which implies
$$E\left|X^N_{\lambda}{(t_k)}\right|^2 = {1\over2\lambda+\lambda^2h}
\left( 1-{1\over(1+\lambda h)^{2k}}\right).$$
This concludes the proof.
\endproof
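The closed form obtained in this proof can also be checked against the second-moment recursion $E\left|X^N_{\lambda}{(t_{k+1})}\right|^2 = \left( E\left|X^N_{\lambda}{(t_k)}\right|^2 + h\right)/(1+\lambda h)^2$, which follows from \eqref{eq-xnld} and the independence of the increments; the values of $\lambda$ and $h$ below are arbitrary:

```python
lam, h = 3.0, 0.05

m = 0.0  # m_k = E|X^N_lambda(t_k)|^2, with m_0 = 0
for k in range(1, 21):
    m = (m + h) / (1 + lam * h)**2
    # closed form from the proof of the lemma
    closed = (1.0 / (2 * lam + lam**2 * h)) * (1 - 1.0 / (1 + lam * h)**(2 * k))
    assert abs(m - closed) < 1e-12
    assert m <= 1.0 / (2 * lam)  # the bound stated in the lemma
```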
For $t\geq 0$, we denote $\mathcal{F}^{\lambda}_t :=
\sigma\left( W_\lambda(s), s\leq t\right)$ and $D^{1,2}_\lambda$
the Malliavin Sobolev space with respect to $W_\lambda$.
\begin{lemma} \label{lem-mal}
For all $k=1,\dots,N$, we have
$X^N_{\lambda}{(t_k)}\in D_\lambda^{1,2}\cap L^2\left(\mathcal{F}^\lambda_{t_k}\right).$
\end{lemma}
\proof This is a consequence of Lemma \ref{lem-xsum}, the fact that
$L^2\left(\mathcal{F}^\lambda_{t_k}\right)$ and $D_\lambda^{1,2}$ are linear spaces, and that for all
$j=0,\dots,k-1$, $\Delta W_{\lambda}(k-j)\in D_\lambda^{1,2}\cap L^2\left(\mathcal{F}^\lambda_{t_k}\right)$.
\endproof
As usual in the study of weak error, we need to use a continuous process that
interpolates the Euler scheme.
The interpolation process that we use was introduced in \cite{Aboura}.
We recall its construction and prove some of its properties.
Let $k\in\{0,\dots,N-1\}$ be fixed.
In order to interpolate the scheme between the points $\left( t_k,X^N_{\lambda}{(t_k)}\right)$
and $\left( t_{k+1},X^N_{\lambda}{(t_{k+1})}\right)$, we define the process as follows:
for $t\in[t_k,t_{k+1}]$, set
\begin{equation}
X^N_{\lambda}(t) := X^N_{\lambda}{(t_k)}-\lambda E\left(X^N_{\lambda}{(t_{k+1})}|\mathcal{F}_t\right)(t-t_k)
+W_{\lambda}(t)-W_{\lambda}{(t_k)}. \label{eq-xns}
\end{equation}
In the sequel, we will use the following processes:
for $t\in[t_k,t_{k+1}]$
\def\mathcal{F}{\mathcal{F}}
\def{t_{k+1}}{{t_{k+1}}}
\def\beta^{k,N}_{\lambda}{\beta^{k,N}_{\lambda}}
\defz^{k,N}_{\lambda}{z^{k,N}_{\lambda}}
\def\gamma^{k,N}_{\lambda}{\gamma^{k,N}_{\lambda}}
\begin{align}
\beta^{k,N}_{\lambda}(t) :=& -\lambda E\left(X^N_{\lambda}{(t_{k+1})}|\mathcal{F}_t\right), \label{eq-beta}\\
z^{k,N}_{\lambda}(t) :=& -\lambda E\left( D_tX^N_{\lambda}{(t_{k+1})}|\mathcal{F}_t\right), \label{eq-z}\\
\gamma^{k,N}_{\lambda}(t) :=& 1+(t-t_k)z^{k,N}_{\lambda}(t). \label{eq-gamma}
\end{align}
The next lemma relates the above processes.
\defW_{\lambda}{W_{\lambda}}
\begin{lemma}\label{lem-bzX}
Let $k=0,\dots,N-1$.
For $t\in[0,T]$, we have
\begin{align*}
d\beta^{k,N}_{\lambda}(t) =& z^{k,N}_{\lambda}(t)dW_{\lambda}(t),\quad
z^{k,N}_{\lambda}(t) = -{\lambda\over1+\lambda h},\\
\gamma^{k,N}_{\lambda}(t) =& 1 - (t-t_k){\lambda\over1+\lambda h},\quad
dX^N_{\lambda}(t) = \beta^{k,N}_{\lambda}(t)dt + \gamma^{k,N}_{\lambda}(t)dW_{\lambda}(t).
\end{align*}
\end{lemma}
\proof Using the Clark-Ocone formula and Lemma \ref{lem-mal}, we have
$$X^N_{\lambda}{(t_{k+1})} = E\left(X^N_{\lambda}{(t_{k+1})}|\mathcal{F}_t\right)
+ \int_t^{t_{k+1}} E\left( D_sX^N_{\lambda}{(t_{k+1})}|\mathcal{F}_s\right) dW_{\lambda}(s).$$
Multiplying by $(-\lambda)$, we deduce
$$-\lambdaX^N_{\lambda}{(t_{k+1})} = \beta^{k,N}_{\lambda}(t)+\int_t^{t_{k+1}}z^{k,N}_{\lambda}(s) dW_{\lambda}(s),$$
which gives the first identity.
Applying the Malliavin derivative to \eqref{eq-xnld},
we have for $s\in[t_k,{t_{k+1}}]$
$D_sX^N_{\lambda}{(t_{k+1})} = {1\over1+\lambda h}$.
Multiplying by $(-\lambda)$, we deduce the second and third equalities.
Finally, It\^o's formula gives us
$$d\left((t-t_k)\beta^{k,N}_{\lambda}(t)\right)= (t-t_k)z^{k,N}_{\lambda}(t)dW_{\lambda}(t)+\beta^{k,N}_{\lambda}(t)dt,$$
which concludes the proof.
\endproof
\begin{lemma}\label{lem-bx}
Let $k\in\{0,\dots,N-1\}$.
For any $s\in[t_k,{t_{k+1}}]$, we have
\begin{align*}
E\left|\beta^{k,N}_{\lambda}(s)\right|^2 \leq& 2\lambda,\quad
E\left|X^N_{\lambda}(s)\right|^2 \leq {1\over2\lambda} + h,\quad
E\beta^{k,N}_{\lambda}(s)X^N_{\lambda}(s) \leq 1.
\end{align*}
\end{lemma}
\proof
Applying the conditional expectation with respect to $\mathcal{F}_s$
on both sides of \eqref{eq-xnld} for $s\in[t_k,t_{k+1})$ we have
$$E\left(X^N_{\lambda}{(t_{k+1})}|\mathcal{F}_s\right) = {1\over1+\lambda h}\left[X^N_{\lambda}{(t_k)} + (W_{\lambda}(s)-W_{\lambda}{(t_k)})\right].$$
Multiplying by $(-\lambda)$ and using \eqref{eq-beta}, we obtain
\begin{equation}
\beta^{k,N}_{\lambda}(s) = -{\lambda\over 1+\lambda h}X^N_{\lambda}{(t_k)}
-{\lambda\over1+\lambda h}\left(W_{\lambda}(s)-W_{\lambda}{(t_k)}\right). \label{eq-betad}
\end{equation}
The independence of $\mathcal{F}_{t_k}$ and $W_{\lambda}(s)-W_{\lambda}{(t_k)}$ yields
$$E\left|\beta^{k,N}_{\lambda}(s)\right|^2 = {\lambda^2\over(1+\lambda h)^2} E\left|X^N_{\lambda}{(t_k)}\right|^2
+{\lambda^2\over(1+\lambda h)^2}(s-t_k).$$
Using Lemma \ref{lem-EX}, we deduce
$$E\left|\beta^{k,N}_{\lambda}(s)\right|^2 \leq {\lambda\over2(1+\lambda h)^2}
+ {\lambda^2h\over(1+\lambda h)^2},$$
which proves the first upper estimate.
Using \eqref{eq-xns} and \eqref{eq-betad}, we have for $s\in[t_k,t_{k+1}]$
\def\lp1-{\lambda(s-t_k)\over1+\lambda h}\rp{\lp1-{\lambda(s-t_k)\over1+\lambda h}\right)}
\begin{equation}
X^N_{\lambda}(s) = \lp1-{\lambda(s-t_k)\over1+\lambda h}\rp\left[X^N_{\lambda}{(t_k)} + (W_{\lambda}(s)-W_{\lambda}{(t_k)})\right]. \label{eq-xnsd}
\end{equation}
Taking the expectation of the square and using the independence
of $\mathcal{F}_{t_k}$ and $W_{\lambda}(s)-W_{\lambda}{(t_k)}$, we have
\begin{align*}
E\left|X^N_{\lambda}(s)\right|^2 =& \lp1-{\lambda(s-t_k)\over1+\lambda h}\rp^2\left[ E\left|X^N_{\lambda}{(t_k)}\right|^2 + (s-t_k) \right]
\leq E\left|X^N_{\lambda}{(t_k)}\right|^2 + h
\leq {1\over 2\lambda} + h,
\end{align*}
where the last upper estimate follows from Lemma \ref{lem-EX}.
Multiplying \eqref{eq-betad} and \eqref{eq-xnsd} and taking expectations, we obtain
\def{-\lambda\over1+\lambda h}{{-\lambda\over1+\lambda h}}
\def\sqrt{{q\over\alpha}}{\sqrt{{q\over\alpha}}}
\def\left[\sqa\right]{\left[\sqrt{{q\over\alpha}}\right]}
\def{-p}{{-p}}
\defX^N_\lm{X^N_{\lambda_m}}
\defX_\lm{X_{\lambda_m}}
\def\delta^N(k,m){\delta^N(k,m)}
\def\left\lbrace{\left\lbrace}
\def\right\rbrace{\right\rbrace}
\def{t_k}{{t_k}}
$$E\left(X^N_{\lambda}(s)\beta^{k,N}_{\lambda}(s)\right) = {-\lambda\over1+\lambda h}\lp1-{\lambda(s-t_k)\over1+\lambda h}\rp\left[ E\left|X^N_{\lambda}{(t_k)}\right|^2 + (s-t_k)\right].$$
Using Lemma \ref{lem-EX}, we deduce
$$\left| E\left(X^N_{\lambda}(s)\beta^{k,N}_{\lambda}(s)\right)\right| \leq {\lambda\over1+\lambda h}{1\over2\lambda}
+ {\lambda h\over1+\lambda h}.$$
This concludes the proof.
\endproof
\subsection{Some useful analytical lemmas}
We first give a precise upper bound for a series defined in terms of the
eigenvalues of the Laplace operator with Dirichlet boundary conditions.
\begin{lemma}\label{lem-a1}
Let $p\in[0,{{1\over2}})$. There exists a constant $C>0$,
such that for all $\alpha>0$, we have
$$\sum_{m\geq1}\lambda^{-p}_m e^{-2{\lambda_m}\alpha}\leq C\alpha^{p-{{1\over2}}}$$
\end{lemma}
\proof
The function $(x\in{\mathbb R}_+\mapsto x^{-2p}e^{-2x^2\alpha})$ is decreasing.
So by comparison, we obtain
\begin{align*}
\sum_{m\geq1} m^{-2p}e^{-2m^2\alpha} \leq \int_0^{\infty}x^{-2p} e^{-2x^2\alpha}dx
\leq \alpha^{p-{{1\over2}}}\int_0^\infty y^{-2p}e^{-2y^2}dy
= C \alpha^{p-{{1\over2}}}.
\end{align*}
Since ${\lambda_m} = {{1\over2}}(\pi m)^2$, we deduce the desired upper estimate.
\endproof
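The comparison argument above can be checked numerically for the rescaled summand $m^{-2p}e^{-2m^2\alpha}$, using the standard Gamma-integral identity $\int_0^\infty y^{-2p}e^{-2y^2}dy = 2^{p-3/2}\Gamma({{1\over2}}-p)$; the constants coming from $\lambda_m = {{1\over2}}(\pi m)^2$ are absorbed into $C$, and the value $p=1/4$ below is an arbitrary choice:

```python
import math

p = 0.25
C = 2**(p - 1.5) * math.gamma(0.5 - p)  # = int_0^infty y^(-2p) exp(-2 y^2) dy

for alpha in [0.01, 0.1, 1.0, 10.0]:
    series = sum(m**(-2 * p) * math.exp(-2 * m * m * alpha) for m in range(1, 5000))
    # since the summand is decreasing in m, the series is bounded by the integral:
    # sum_{m>=1} f(m) <= int_0^infty f(x) dx = alpha^(p - 1/2) * C
    assert series <= alpha**(p - 0.5) * C + 1e-12
```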
\begin{lemma}\label{lem-ad}
Let $q>0$.
There exists a constant $C>0$, such that for all $\alpha>0$
$$\sum_{m\geq1}\lambda^q_me^{-{\lambda_m}\alpha} \leq C\lp1+{1\over\alpha^{q+{{1\over2}}}}\right).$$
\end{lemma}
\proof Let $f(x) = x^{2q}e^{-x^2\alpha}$.
Its derivative is given by $f'(x) = 2x^{2q-1}e^{-x^2\alpha}(q-\alpha x^2)$.
\textit{Case 1:} $\alpha>q/4$.
Then $f$ is decreasing on $[2,\infty)$ and a standard comparison argument yields
\begin{align*}
\sum_{m\geq1} m^{2q}e^{-m^2\alpha} \leq & e^{-\alpha} + 4^qe^{-4\alpha}
+\sum_{m\geq3}\int_{m-1}^{m}x^{2q}e^{-x^2\alpha}dx \\
\leq & C + \int_0^\infty x^{2q}e^{-x^2\alpha} dx \\
\leq & C + \alpha^{-q-{{1\over2}}}\int_0^\infty y^{2q}e^{-y^2}dy \\
\leq & C (1+\alpha^{-q-{{1\over2}}}).
\end{align*}
\textit{Case 2:} $\alpha \leq q/4$.
The function $f$ is increasing on $[0,\sqrt{q/\alpha}]$.
So for each $m=1,\dots,[\sqrt{{q\over\alpha}}]-1$, we have
$$m^{2q}e^{-m^2\alpha} \leq \int_m^{m+1} x^{2q} e^{-x^2\alpha}dx.$$
On the interval $[\sqrt{{q\over\alpha}},\infty)$, $f$ is decreasing.
So for each integer $m\geq[\sqrt{{q\over\alpha}}]+2$, we have
$$m^{2q}e^{-m^2\alpha} \leq \int_{m-1}^mx^{2q}e^{-x^2\alpha}dx.$$
The above upper estimates yield
\begin{align*}
\sum_{m\geq1} m^{2q}e^{-m^2\alpha} \leq &
\sum_{m\leq[\sqrt{{q\over\alpha}}]-1}\int_m^{m+1} x^{2q} e^{-x^2\alpha}dx
+ \sum_{m\geq[\sqrt{{q\over\alpha}}]+2}\int_{m-1}^m x^{2q} e^{-x^2\alpha}dx\\
& + \sum_{m\in\{[\sqrt{{q\over\alpha}}],[\sqrt{{q\over\alpha}}]+1\}}m^{2q}e^{-m^2\alpha}\\
\leq &\int_0^\infty x^{2q} e^{-x^2\alpha}dx
+ \sum_{m\in\{[\sqrt{{q\over\alpha}}],[\sqrt{{q\over\alpha}}]+1\}}m^{2q}e^{-m^2\alpha}\\
\leq &C\alpha^{-q-{{1\over2}}}
+ \sum_{m\in\{[\sqrt{{q\over\alpha}}],[\sqrt{{q\over\alpha}}]+1\}}m^{2q}e^{-m^2\alpha}
\end{align*}
Now we study each term of the sum in the right hand side.
Since $q\geq\alpha$, we have
\begin{align*}
\left[\sqa\right]^{2q}e^{-\left[\sqa\right]^2\alpha} \leq & \left( {q\over \alpha}\right)^q
\leq \left({q\over\alpha}\right)^{q+{{1\over2}}}
\leq C \alpha^{-q-{{1\over2}}}.
\end{align*}
For the second term, we remark that since $q\geq\alpha$, we have $\left[\sqa\right]+1\leq2\left[\sqa\right]\leq2\sqrt{{q\over\alpha}}$.
This implies
\begin{align*}
\left(\left[\sqa\right]+1\right)^{2q}e^{-\left(\left[\sqa\right]+1\right)^{2}\alpha} \leq \left( 2\sqrt{{q\over\alpha}}\right)^{2q}
\leq C \alpha^{-q-{{1\over2}}}.
\end{align*}
Therefore, in both cases we obtain
$$\sum_{m\geq1} m^{2q}e^{-m^2\alpha} \leq C\left( 1+{1\over\alpha^{q+{{1\over2}}}}\right).$$
Since ${\lambda_m}={{1\over2}}(\pi m)^2$, the proof is complete.
\endproof
\begin{lemma}\label{lem-at}
Let $p\in[0,{{1\over2}})$ and $n\in\mathbb{N}^*$.
Let $\left( v(k,m)\right)_{(k,m)\in\{0,\dots,N-2\}\times\mathbb{N}^*}$ be a sequence
such that for all $k\in\{0,\dots,N-2\}$ and $m\geq 1$, we have
$$0\leq v(k,m) \leq \lambda^{n-p}_mh^{n+1}e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Then, there exists a constant $C>0$, independent of $N$, such that
$$\sum_{m\geq1}\sum_{k=0}^{N-2} v(k,m)\leq C h^{p+{{1\over2}}}.$$
\end{lemma}
\proof
First we remark that $T-{t_{k+1}} = h(N-k-1)$.
Using Lemma \ref{lem-ad}, we deduce the existence of $C$
depending on $n$ and $p$, but independent of $N$, such that
for $k=0,\dots,N-2$ :
\begin{align*}
\sum_{m\geq1} v(k,m) \leq& C h^{n+1}\left( 1 + {1\over h^{n-p+{{1\over2}}}(N-k-1)^{n-p+{{1\over2}}}}\right) \\
\leq& C \left( h^{n+1}+{h^{p+{{1\over2}}}\over (N-k-1)^{n-p+{{1\over2}}}}\right).
\end{align*}
Therefore, there exists a constant $C$ as above
such that
\begin{align*}
\sum_{m\geq1}\sum_{k=0}^{N-2} v(k,m) \leq & C\left( h^n+h^{p+{{1\over2}}}\sum_{k=0}^{N-2}
{1\over(N-k-1)^{n-p+{{1\over2}}}}\right)\\
\leq &C\left( h^n+h^{p+{{1\over2}}}\sum_{l=1}^{N-1}
{1\over l^{n-p+{{1\over2}}}}\right)
\leq C h^{p+{{1\over2}}},
\end{align*}
which concludes the proof.
\endproof
\subsection{Decomposition of the weak error.}
We follow the classical decomposition introduced in \cite{TalayTubaro}.
The definition of $u_{\lambda}(t,x)$ in section 3.1 yields
\begin{align*}
E\left| X^N(T)\right|^2_{H^{-p}} - E\left| X(T)\right|^2_{H^{-p}} = &
\sum_{m\geq1}\lambda^{-p}_m\left( E\left|X^N_\lm(T)\right|^2 - E\left|X_\lm(T)\right|^2\right)\\
=& \sum_{m\geq1}\lambda^{-p}_m\left( E u_{\lambda_m}\left( T,X^N_\lm(T)\right)-u_{\lambda_m}\lp0,X^N_\lm(0)\right)\rp.
\end{align*}
Let $\delta^N(k,m):=\lambda^{-p}_m\left( Eu_{\lambda_m}\left({t_{k+1}},X^N_\lm{(t_{k+1})}\right) - Eu_{\lambda_m}\left( t_k,X^N_\lm{(t_k)}\right)\rp$;
then
$$E\left| X^N(T)\right|^2_{H^{-p}} - E\left| X(T)\right|^2_{H^{-p}} = \sum_{m\geq1}\sum_{k=0}^{N-1}\delta^N(k,m).$$
Note that using Lemmas 3.3, 3.4 and (3.4) we deduce that for any
$k=0,\dots,N-1$
$$E\int_{t_k}^{t_{k+1}}\left| \gamma_{\lambda}^{k,N}(t){\partial u\over\partial x}
(t,X_{\lambda}^N(t))\right|^2 dt <\infty.$$
From now on, we do not justify that the stochastic integrals are centered.
It\^o's formula and Lemma \ref{lem-bzX} imply that for $k=0,\dots,N-1$
\def^{k,N}_\lm{^{k,N}_{\lambda_m}}
\begin{align*}
\delta^N(k,m)=&\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left\lbrace{\partial\over\partial t} u_{\lambda_m}+\beta^{k,N}_{\lambda_m}(t){\partial\over\partial x} u_{\lambda_m}
+{{1\over2}}\left|\gamma^{k,N}_{\lambda_m}(t)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\right\rbrace\left( t,X^N_\lm(t)\right) dt\\
&=\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left\lbrace I^{k,N}_\lm(t)+{{1\over2}} J^{k,N}_\lm(t)\right\rbrace dt,
\end{align*}
where
\begin{align}
I^{k,N}_\lm(t) :=& \left(\beta^{k,N}_{\lambda_m}(t)+{\lambda_m}X^N_\lm(t)\right){\partial\over\partial x} u_{\lambda_m}\left( t,X^N_\lm(t)\right),\label{eq-IkN}\\
J^{k,N}_\lm(t) :=& \left(\left|\gamma^{k,N}_{\lambda_m}(t)\right|^2-1\right){\partial^2\over\partial x^2} u_{\lambda_m}\left( t,X^N_\lm(t)\right). \label{eq-JkN}
\end{align}
This yields the following decomposition:
\def\sum_{k=0}^{N-1}{\sum_{k=0}^{N-1}}
\begin{align}
E\left| X^N(T)\right|^2_{H^{-p}} - E\left| X(T)\right|^2_{H^{-p}} =& \sum_{m\geq1} \delta^N(N-1,m)
+\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}} I^{k,N}_\lm(t)dt \nonumber\\
&+{{1\over2}}\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}} J^{k,N}_\lm(t)dt.\label{eq-WE}
\end{align}
Now we study each term of this decomposition.
\begin{lemma} \label{lem-deltaNm}
There exists a constant $C$, independent of $N$, such that
$$\sum_{m\geq1}\left|\delta^N(N-1,m)\right|\leq Ch^{p+{{1\over2}}}.$$
\end{lemma}
This study is similar to the third step of \cite{Debussche}, page 97.
\proof
Using the definition of $u_{\lambda_m}(t,x)$
\eqref{eq-u} and \eqref{eq-xnld}, we have
\begin{align*}
u_{\lambda_m}\left( t_N, X^N_{\lambda_m}(t_N)\right) =& \left| X^N_{\lambda_m}(t_N)\right|^2
= {1\over\lp1+{\lambda_m} h\right)^2}\left| X^N_{\lambda_m}(t_{N-1})+\Delta W_m(N)\right|^2,\\
u_{\lambda_m}\left( t_{N-1}, X^N_{\lambda_m}(t_{N-1})\right) =& {1-e^{-2{\lambda_m} h}\over2{\lambda_m}}
+e^{-2{\lambda_m} h}\left| X^N_{\lambda_m}(t_{N-1})\right|^2.
\end{align*}
By independence between $\Delta W_m(N)$ and $X^N_{\lambda_m}(t_{N-1})$, we have
\begin{align*}
\delta^N(N-1,m) =& \lambda^{-p}_m\left\lbrace {1\over\lp1+{\lambda_m} h\right)^2}-e^{-2{\lambda_m} h}\right\rbrace
E\left|X^N_\lm(t_{N-1})\right|^2 \\
&+ {h\over\lambda^p_m\lp1+{\lambda_m} h\right)^2} -{1-e^{-2{\lambda_m} h}\over 2\lambda^{1+p}_m}.
\end{align*}
Let $\delta_1\lp\lm\rp := {1-e^{-2{\lambda_m} h}\over2{\lambda_m}^{1+p}}$,
$\delta_2\lp\lm\rp := {h\over{\lambda_m}^p\lp1+{\lambda_m} h\right)^2}$, and
$$\delta_3\lp\lm\rp := \lambda^{-p}_m\left\lbrace{1\over\lp1+{\lambda_m} h\right)^2}-e^{-2{\lambda_m} h}\right\rbrace E\left|X^N_\lm(t_{N-1})\right|^2.$$
With these notations, we have
$$\left|\delta^N(N-1,m)\right|\leq\delta_1\lp\lm\rp+\delta_2\lp\lm\rp+\delta_3\lp\lm\rp.$$
First, we study $\delta_1\lp\lm\rp$.
Since ${1-e^{-2\lambda h}\over2\lambda} = \int_0^he^{-2\lambda x}dx$, using
Lemma \ref{lem-a1}, we obtain
\begin{align}
\sum_{m\geq1}\delta_1\lp\lm\rp =& \int_0^h\sum_{m\geq1}\lambda^{-p}_m e^{-2{\lambda_m} x}dx
\leq C\int_0^h x^{p-{{1\over2}}}dx
= Ch^{p+{{1\over2}}}. \label{eq-dul}
\end{align}
Now we study $\delta_2\lp\lm\rp$.
Since $\left( x\in(0,\infty)\mapsto x^{-2p}(1+x^2h)^{-2}\right)$ is decreasing, we have
for $p\in[0,{{1\over2}})$
\begin{align}
\sum_{m\geq1}\delta_2\lp\lm\rp \leq& Ch\int_0^\infty{1\over x^{2p}\lp1+x^2h\right)^2}dx
\leq Ch^{p+{{1\over2}}}\int_0^\infty {y^{-2p}\over(1+y^2)^2}dy
\leq Ch^{p+{{1\over2}}}. \label{eq-ddl}
\end{align}
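For the reader's convenience, the middle estimate comes from the substitution $y=x\sqrt h$ (a routine step we spell out here; it is not displayed in the computation above):
$$h\int_0^\infty{dx\over x^{2p}\lp1+x^2h\right)^2} = h^{p+{{1\over2}}}\int_0^\infty {y^{-2p}\over\lp1+y^2\right)^2}dy,$$
where the last integral converges for $p\in[0,{{1\over2}})$.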
Finally, we study $\delta_3\lp\lm\rp$.
Using Lemma \ref{lem-EX}, we have
$$\delta_3\lp\lm\rp \leq \lambda^{-p}_m\left\lbrace{1\over(1+{\lambda_m} h)^2}-e^{-2{\lambda_m} h}\right\rbrace{1\over2{\lambda_m}}.$$
Since ${1\over\lp1+\lambda h\right)^2}-e^{-2\lambda h}
=2\lambda\int_0^h\left\lbrace e^{-2\lambda x}-{1\over\lp1+\lambda x\right)^3}\right\rbrace dx$,
we have
$$\delta_3\lp\lm\rp \leq \lambda^{-p}_m\int_0^h\left\lbrace e^{-2{\lambda_m} x}+{1\over(1+{\lambda_m} x)^3}\right\rbrace dx.$$
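The identity above can be verified directly (a small check we add for completeness): both sides vanish at $h=0$, and differentiating in $h$ gives
$${d\over dh}\left\lbrace{1\over\lp1+\lambda h\right)^2}-e^{-2\lambda h}\right\rbrace
= -{2\lambda\over\lp1+\lambda h\right)^3}+2\lambda e^{-2\lambda h}
= 2\lambda\left\lbrace e^{-2\lambda h}-{1\over\lp1+\lambda h\right)^3}\right\rbrace.$$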
Using Lemma \ref{lem-a1}, we have for $p\in[0,{{1\over2}})$
$$\sum_{m\geq1}\lambda^{-p}_m\int_0^h e^{-2{\lambda_m} x}dx \leq C\int_0^h x^{p-{{1\over2}}}dx
\leq Ch^{p+{{1\over2}}}.$$
Now since for $x\geq0$ the map $\left( y\in{\mathbb R}_+\mapsto y^{-2p}(1+y^2x)^{-3}\right)$ is decreasing, we have
for $p\in[0,{{1\over2}})$
\begin{align*}
\sum_{m\geq1}{\lambda^{-p}_m\over(1+{\lambda_m} x)^3} \leq C \int_0^\infty {1\over y^{2p}\lp1+y^2x\right)^3}dy
\leq C x^{p-{{1\over2}}}\int_0^\infty{1\over z^{2p}\lp1+z^2\right)^3}dz
\leq Cx^{p-{{1\over2}}},
\end{align*}
and hence Fubini's theorem yields
$$\sum_{m\geq1}\int_0^h{\lambda^{-p}_m\over\lp1+{\lambda_m} x\right)^3}dx \leq C\int_0^h x^{p-{{1\over2}}}dx
\leq Ch^{p+{{1\over2}}}.$$
The above inequalities imply $\sum_{m\geq1}\delta_3\lp\lm\rp\leq Ch^{p+{{1\over2}}}.$
This inequality, \eqref{eq-dul} and \eqref{eq-ddl} give the stated upper estimate.
\endproof
\begin{lemma}
\label{lem-J}
There exists a constant $C>0$, independent of $N$, such that
$$\sum_{m\geq1}\sum_{k=0}^{N-1}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}} \left| J^{k,N}_\lm(t)\right| dt \leq C h^{p+{{1\over2}}}.$$
\end{lemma}
\proof
Using Lemma \ref{lem-bzX}, we have
$$\left|\gamma^{k,N}_{\lambda_m}(t)\right|^2-1 = -{2(t-t_k){\lambda_m}\over1+{\lambda_m} h}
+{\left| t-t_k\right|^2{\lambda_m}^2\over(1+{\lambda_m} h)^2}.$$
Using \eqref{eq-uxx} and \eqref{eq-JkN}, we have
$$\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left| J^{k,N}_{\lambda_m}(t)\right| dt
\leq C\left(\lambda^{1-p}_mh^2+\lambda^{2-p}_mh^3\right) e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Lemma \ref{lem-at} concludes the proof.
\endproof
\begin{lemma}\label{lem-I}
There exists a constant $C>0$, independent of $N$, such that
$$\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left| I^{k,N}_{\lambda_m}(t)\right| dt
\leq Ch^{p+{{1\over2}}}.$$
\end{lemma}
\proof
Let $I^{k,N}_{1,\lm}(t) := E\beta^{k,N}_{\lambda_m}(t){\partial\over\partial x} u_{\lambda_m}\left( t,X^N_\lm(t)\right) + E{\lambda_m}X^N_\lm{(t_{k+1})}{\partial\over\partial x} u_{\lambda_m}\lp \tkk,\xnlm\ltkk\rp$ and
$I^{k,N}_{2,\lm}(t) := -{\lambda_m} EX^N_\lm{(t_{k+1})}{\partial\over\partial x} u_{\lambda_m}\lp \tkk,\xnlm\ltkk\rp +{\lambda_m} EX^N_\lm(t){\partial\over\partial x} u_{\lambda_m}\left( t,X^N_\lm(t)\right)$.
Using \eqref{eq-IkN}, we have
\begin{equation}
E I^{k,N}_{\lambda_m}(t) = I^{k,N}_{1,\lm}(t) + I^{k,N}_{2,\lm}(t).
\label{eq-Idec}
\end{equation}
First we study $I^{k,N}_{1,\lm}(t)$. Using \eqref{eq-ux}, we know that ${\partial\over\partial x} u_{\lambda_m}\in C^{1,2}$.
So using It\^o's formula and Lemma \ref{lem-bzX}, we have
\begin{align}
d{\partial\over\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp =& \left\lbrace {\partial^2\over\partial t\partial x} u_{\lambda_m}+\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\right\rbrace\lp s,\xnlm(s)\rp ds\nonumber\\
&+\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp dW_{\lambda_m}(s). \label{eq-duX}
\end{align}
Using this equation, Lemma \ref{lem-bzX} and the It\^o formula we deduce
\begin{align*}
d\left[\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m}\right.&\left.\lp s,\xnlm(s)\rp \right]= \left\lbrace\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial t\partial x} u_{\lambda_m}
+\left|\beta^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\right. \\
&\left.+z^{k,N}_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\right\rbrace\lp s,\xnlm(s)\rp ds\\
&+\left\lbrace\beta^{k,N}_{\lambda_m}(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m} +z^{k,N}_\lm(s){\partial\over\partial x} u_{\lambda_m} \right\rbrace
\lp s,\xnlm(s)\rp dW_{\lambda_m}(s).
\end{align*}
Integrating between $t$ and ${t_{k+1}}$, taking expectation,
and using the fact that $\beta^{k,N}_{\lambda_m}(t_{k+1})=-\lambda_m
X^N_{\lambda_m}(t_{k+1})$, so that $I^{k,N}_{1,\lambda_m}(t_{k+1})=0$,
we obtain
\begin{align}
I^{k,N}_{1,\lm}(t) = -E\int_t^{t_{k+1}}&\left\lbrace\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial t\partial x} u_{\lambda_m} +\left|\beta^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\right.\nonumber\\
&\left.+z^{k,N}_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\right\rbrace\lp s,\xnlm(s)\rp ds.
\label{eq-Iu}
\end{align}
Using \eqref{eq-utx} and Lemma \ref{lem-bx}, we have for $s\in[t,{t_{k+1}}]$
$$E\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp = 4{\lambda_m} e^{-2{\lambda_m}(T-s)} E\beta^{k,N}_{\lambda_m}(s)X^N_\lm(s)
\leq C{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})},$$
and hence
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C\lambda^{1-p}_mh^2e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Using Lemma \ref{lem-at}, and the above inequality, we deduce
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq Ch^{p+{{1\over2}}}.
\label{eq-Iu1}
\end{equation}
Using \eqref{eq-uxx} and Lemma \ref{lem-bx}, we have for $s\in[t_k,t_{k+1}]$
$$E\left|\beta^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp = 4{\lambda_m} e^{-2{\lambda_m}(T-s)} \leq 4{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})},$$
so that
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\left|\beta^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C\lambda^{1-p}_mh^2e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Thus, Lemma \ref{lem-at} yields
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2} \lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\left|\beta^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq Ch^{p+{{1\over2}}}.
\label{eq-Iu2}
\end{equation}
Using equation \eqref{eq-uxx} and Lemma \ref{lem-bzX}, we have for all
$s\in[t,{t_{k+1}}]$
\begin{align*}
E\left| z^{k,N}_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp\right| =& {2{\lambda_m}\over1+{\lambda_m} h}
\lp1-{(s-{t_k}){\lambda_m}\over1+{\lambda_m} h}\right) e^{-2{\lambda_m}(T-s)}\\
\leq& C{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})}.
\end{align*}
Therefore, we obtain
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\left| z^{k,N}_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp\right|
\leq C\lambda^{1-p}_mh^2 e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Using once more Lemma \ref{lem-at}, we deduce
$$\sum_{m\geq1}\sum_{k=0}^{N-2}
\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds E\left| z^{k,N}_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp\right|
\leq Ch^{p+{{1\over2}}}.$$
Plugging this inequality together with \eqref{eq-Iu1} and \eqref{eq-Iu2} into \eqref{eq-Iu}
gives us
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left| I^{k,N}_{1,\lm}(t)\right| dt
\leq Ch^{p+{{1\over2}}}.
\label{eq-Iubound}
\end{equation}
Now we study $I^{k,N}_{2,\lm}(t)$.
Using Lemma \ref{lem-bzX}, equation \eqref{eq-duX} and the It\^o formula
we have
\begin{align*}
dX^N_\lm(s){\partial\over\partial x}& u_{\lambda_m}\lp s,\xnlm(s)\rp = \lbX^N_\lm(s){\partial^2\over\partial t\partial x} u_{\lambda_m}
+X^N_\lm(s)\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\right.\\
&\left. +\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m}
+\left|\gamma^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\right\rbrace\lp s,\xnlm(s)\rp ds \\
&+ \left\lbrace \gamma^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m} +X^N_\lm(s)\gamma^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m} \right\rbrace
\lp s,\xnlm(s)\rp dW_{\lambda_m}(s).
\end{align*}
So integrating between $t$ and ${t_{k+1}}$ and taking expectation, we obtain
\begin{align}
I^{k,N}_{2,\lm}(t) = -{\lambda_m} E\int_t^{{t_{k+1}}} &\left\lbrace X^N_\lm(s){\partial^2\over\partial t\partial x} u_{\lambda_m}
+\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m} +X^N_\lm(s)\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\right.\nonumber\\
&\left.+\left|\gamma^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m} \right\rbrace\lp s,\xnlm(s)\rp ds.
\label{eq-Id}
\end{align}
Using equation \eqref{eq-utx} and Lemma \ref{lem-bx}, we have for
all $s\in[t,{t_{k+1}}]$
\begin{align*}
{\lambda_m} EX^N_\lm(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
=& 4\lambda^2_me^{-2{\lambda_m}(T-s)} E\left|X^N_\lm(s)\right|^2\\
\leq& C\lambda^2_m ({1\over{\lambda_m}}+h)e^{-2{\lambda_m}(T-{t_{k+1}})}.
\end{align*}
Therefore,
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} EX^N_\lm(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp \leq
C\left(\lambda^{1-p}_mh^2+\lambda^{2-p}_mh^3\right) e^{-2{\lambda_m}(T-{t_{k+1}})},$$
and using Lemma \ref{lem-at}, we deduce
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2}
\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} EX^N_\lm(s){\partial^2\over\partial t\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp \leq
Ch^{p+{{1\over2}}}.
\label{eq-Id1}
\end{equation}
The equation \eqref{eq-ux} and Lemma \ref{lem-bx} yield for all
$s\in[t,{t_{k+1}}]$
$${\lambda_m} E\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
= 2{\lambda_m} e^{-2{\lambda_m}(T-s)}E\beta^{k,N}_{\lambda_m}(s)X^N_\lm(s)
\leq C{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
This upper estimate implies
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} E\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C\lambda^{1-p}_mh^2e^{-2{\lambda_m}(T-{t_{k+1}})},$$
and Lemma \ref{lem-at} yields
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2}
\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} E\beta^{k,N}_{\lambda_m}(s){\partial\over\partial x} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq Ch^{p+{{1\over2}}}.
\label{eq-Id2}
\end{equation}
Using equation \eqref{eq-uxx} and Lemma \ref{lem-bx}, we have for all
$s\in[t,{t_{k+1}}]$
$${\lambda_m} EX^N_\lm(s)\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
Therefore, we obtain
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} EX^N_\lm(s)\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C\lambda^{1-p}_mh^2e^{-2{\lambda_m}(T-{t_{k+1}})},$$
and Lemma \ref{lem-at} implies
\begin{equation}
\sum_{m\geq1}\sum_{k=0}^{N-2}
\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} EX^N_\lm(s)\beta^{k,N}_{\lambda_m}(s){\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq Ch^{p+{{1\over2}}}.
\label{eq-Id3}
\end{equation}
Finally, \eqref{eq-uxx} and Lemma \ref{lem-bzX} imply that for all $s\in[t,{t_{k+1}}]$
$${\lambda_m} E\left|\gamma^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C{\lambda_m} e^{-2{\lambda_m}(T-{t_{k+1}})}.$$
This yields
$$\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} E\left|\gamma^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq C\lambda^{1-p}_mh^2e^{-2{\lambda_m}(T-{t_{k+1}})},$$
and Lemma \ref{lem-at} implies
$$\sum_{m\geq1}\sum_{k=0}^{N-2}
\lambda^{-p}_m\int_{t_k}^{t_{k+1}} dt\int_t^{t_{k+1}} ds{\lambda_m} E\left|\gamma^{k,N}_{\lambda_m}(s)\right|^2{\partial^2\over\partial x^2} u_{\lambda_m}\lp s,\xnlm(s)\rp
\leq Ch^{p+{{1\over2}}}.$$
Plugging this inequality together with \eqref{eq-Id1} -
\eqref{eq-Id3} into \eqref{eq-Id}, we deduce
$$\sum_{m\geq1}\sum_{k=0}^{N-2}\lambda^{-p}_m E\int_{t_k}^{t_{k+1}}\left|I^{k,N}_{2,\lm}(t)\right| dt
\leq Ch^{p+{{1\over2}}}.$$
This inequality, together with \eqref{eq-Idec} and \eqref{eq-Iubound}, concludes
the proof.
\endproof
Theorem 2.1 is a straightforward consequence of equation \eqref{eq-WE}
and Lemmas \ref{lem-deltaNm}-\ref{lem-I}.
\subsection*{Acknowledgments}
The author wishes to thank Annie Millet for many helpful comments.
\section{Introduction}
Suppose we want to $k$-color a graph $G$. If we already have a $k$-coloring of
an induced subgraph $H$ of $G$, we might try to extend this coloring to all of
$G$. We can view this task as coloring $G-H$ from lists (this is called
\emph{list-coloring}), where each vertex $v$ in $G-H$ gets a list of colors
formed from $\set{1, \ldots, k}$ by removing all colors used on its
neighborhood in $H$.
Often we cannot complete just any $k$-coloring of $H$ to all of
$G$. Instead, we may need to modify the $k$-coloring of $H$ to get a coloring
we can extend. Given rules for how we may modify the $k$-coloring
of $H$, we can view our original problem
as the problem of list coloring $G-H$, where each
vertex gets a list as before, but now we can modify these lists in certain ways.
As an example of this approach, the second author proved \cite{HallGame}
a common generalization of Hall's marriage theorem and Vizing's theorem on
edge-coloring. The present paper generalizes a special case of this result and
puts it into a broader context.
Since we often want to prove coloring results for all graphs having
certain properties, and not just some fixed graph, we only have partial control
over the outcome of a recoloring of $H$. For example, if we swap colors red and
green in a component $C$ of the red-green subgraph (that is, we perform a Kempe
change), we may succeed in making some desired vertex red,
but if $C$ is somewhat arbitrary, we cannot precisely control what happens
to the colors of its other vertices.
We model this lack of control as a two-player game---we move by
recoloring a vertex as we desire and then the other player gets a turn to muck things up.
In the original context, where we want to color $G$, our opponent is the graph
$G$; more precisely, the embedding of $G-H$ in $G$ is one way to describe a
strategy for the second player. The general paradigm that we described above is
for vertex coloring. In the rest of the paper, we consider only the special
case that is edge-coloring (or, equivalently, vertex coloring line graphs).
All of our multigraphs are loopless. Let $G$ be a multigraph, $L$ a list
assignment on $V(G)$, and $\operatorname{pot}(L) = \bigcup_{v\in V(G)} L(v)$. An
\emph{$L$-pot} is a set $X$ containing $\operatorname{pot}(L)$.
We typically let $P$ denote an arbitrary $L$-pot.
An \emph{$L$-edge-coloring}
is an edge-coloring $\pi$ of $G$ such that $\pi(xy) \in L(x) \cap L(y)$ for all
$xy \in E(G)$;
furthermore, we require that $\pi$ assigns distinct colors to distinct edges
sharing an endpoint.
For the maximum degree in $G$ we write $\Delta(G)$, or simply
$\Delta$, when $G$ is clear from context.
For the edge-chromatic number of $G$ we write $\chi'(G)$.
We often denote the set $\{1,\ldots,k\}$ by $[k]$.
\section{Completing edge-colorings}
Our goal is to convert a partial $k$-edge-coloring of a multigraph $M$ into a
$k$-edge-coloring of (all of) $M$. For a partial $k$-edge-coloring $\pi$ of
$M$, let $M_\pi$ be the subgraph of $M$ induced by the uncolored edges and let
$L_\pi$ be the list assignment on the vertices of $M_\pi$ given by
$L_\pi(v) = \irange{k} - \setbs{\tau}{\pi(vx) = \tau \text{ for some edge } vx \in E(M)}$.
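As a concrete illustration (ours, not part of the paper), the residual lists $L_\pi$ can be computed mechanically from a partial coloring; all names below are invented for this sketch.

```python
# Sketch: compute the residual list assignment L_pi of the uncolored
# subgraph M_pi.  Edges are triples (u, v, i) so that parallel edges of a
# multigraph stay distinct; pi maps colored edges to colors in {1,...,k}.
# (All names here are our own, not the paper's.)

def residual_lists(edges, pi, k):
    uncolored = [e for e in edges if e not in pi]
    # Vertices of M_pi are the endpoints of the uncolored edges.
    verts = {w for (u, v, _) in uncolored for w in (u, v)}
    L = {}
    for x in verts:
        used = {pi[e] for e in pi if x in e[:2]}  # colors already seen at x
        L[x] = set(range(1, k + 1)) - used
    return uncolored, L

# Triangle with one edge pre-colored 1 and k = 3 colors:
edges = [('a', 'b', 0), ('b', 'c', 0), ('a', 'c', 0)]
pi = {('a', 'b', 0): 1}
uncolored, L = residual_lists(edges, pi, 3)
```

Here $a$ and $b$ each lose color $1$, while $c$ keeps all three colors: each vertex simply drops the colors already used on its incident edges.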
Kempe chains give a powerful technique for converting a partial
$k$-edge-coloring into a $k$-edge-coloring of the whole graph. The idea is to
repeatedly exchange colors on two-colored paths until the uncolored subgraph
$M_\pi$ has an edge-coloring $\zeta$ from its lists, that is, such that
$\zeta(xy) \in L_\pi(x) \cap
L_\pi(y)$ for all $xy \in E(M_\pi)$. (One advantage of considering the special
case that is edge-coloring is that every Kempe chain is either a path or an even
cycle.) In this sense the original list
assignment $L_\pi$ on $M_\pi$ is \emph{fixable}. In the next section, we give
an abstract definition of this notion that frees us from the embedding in the
containing graph $M$. As we will see, computers enjoy this new freedom.
\subsection{Fixable graphs}
Thinking in terms of a two-player game is a good aid to intuition and we
encourage the reader to continue doing so. However, a simple recursive
definition is equivalent and has far less baggage. For distinct colors $a,b
\in P$, let $S_{L,a,b}$ be all the vertices of $G$ that have exactly one of $a$
or $b$ in their list; more precisely, $S_{L,a,b} =
\setb{v}{V(G)}{\,\card{\set{a,b} \cap L(v)} = 1}$.
\begin{defn}
$G$ is \emph{$(L, P)$-fixable} if either
\begin{enumerate}
\item[(1)] $G$ has an $L$-edge-coloring; or
\item[(2)] there are different colors $a,b \in P$ such that for every partition
$X_1, \ldots, X_t$ of $S_{L,a,b}$ into sets of size at most two, there exists $J
\subseteq \irange{t}$ so that $G$ is $(L', P)$-fixable, where $L'$ is formed
from $L$ by swapping $a$ and $b$ in $L(v)$ for every $v \in \bigcup_{i \in J} X_i$.
\end{enumerate}
\end{defn}
The meaning of (1) is clear. Intuitively, (2) says the following. There is
some pair of colors, $a$ and $b$, such that regardless of how the vertices of
$S_{L,a,b}$ are paired via Kempe chains for colors $a$ and $b$ (or not paired
with any vertex of $S_{L,a,b}$), we can swap the colors on some subset $J$ of
the Kempe chains so that the resulting partial edge-coloring is fixable.
We write $L$-fixable as shorthand for $(L, \operatorname{pot}(L))$-fixable. When $G$ is $(L,
P)$-fixable, the choices of $a,b$, and $J$ in each application of (2) determine
a tree where all leaves have lists satisfying (1). The \emph{height} of $(L,
P)$ is the minimum possible height of such a tree. We write $h_G(L, P)$ for
this height and let $h_G(L, P) = \infty$ when $G$ is not $(L,P)$-fixable.
\begin{lem}\label{FixableCompletesColoring}
If a multigraph $M$ has a partial $k$-edge-coloring $\pi$ such that $M_\pi$ is $(L_\pi, \irange{k})$-fixable, then $M$ is $k$-edge-colorable.
\end{lem}
\begin{proof}
Our proof is by induction on the height of $(L_\pi,[k])$.
Choose a partial $k$-edge-coloring $\pi$ of $M$ such that $M_\pi$ is $(L_\pi,
\irange{k})$-fixable.
If $h_{M_\pi}\parens{L_\pi, \irange{k}} = 0$, then (1) must hold for $M_\pi$
and $L_\pi$; that is, $M_\pi$ has an edge-coloring $\zeta$ such that $\zeta(xy)
\in L_\pi(x) \cap L_\pi(y)$ for all $xy \in E(M_\pi)$. Now
$\pi \cup \zeta$ is the desired $k$-edge-coloring of $M$.
So we may assume that $h_{M_\pi}\parens{L_\pi, \irange{k}} > 0$. Choose colors
$a,b \in \irange{k}$ to satisfy (2) and give a tree of height
$h_{M_\pi}\parens{L_\pi, \irange{k}}$. Let $H$ be the subgraph of $M$ induced
on all edges colored $a$ or $b$,
and let $S$ be the vertices in $M_\pi$ with degree exactly one in $H$.
For each $x \in S$, let $C_x$ be the component of $H$ containing $x$.
Since $\card{V(C_x) \cap S} \in \set{1,2}$, the components of $H$ give a
partition $X_1, \ldots, X_t$ of $S$ into sets of size
at most two. Further, exchanging colors $a$ and $b$ on $C_x$ has the effect
of swapping $a$ and $b$ in $L_\pi(v)$ for each $v \in V(C_x) \cap S$. So we
achieve the needed swapping of colors in the lists in (2) by exchanging
colors on the components of $H$.
By (2) there is $J \subseteq \irange{t}$ so
that $M_\pi$ is $(L', \irange{k})$-fixable, where $L'$ is formed from $L_\pi$ by
swapping $a$ and $b$ in $L_\pi(v)$ for every $v \in \bigcup_{i \in J} X_i$.
In fact, there is a $J$ such that $(L',[k])$ has height less than that of
$(L_\pi,[k])$.
Let $\pi'$ be the partial $k$-edge-coloring of $M$ created from
$\pi$ by performing the color exchanges that create $L'$ from $L_\pi$.
Then $M_{\pi'} = M_\pi$ and $L_{\pi'} = L'$, so the induction hypothesis
applied to $\pi'$ shows that $M$ is $k$-edge-colorable.
\end{proof}
\subsection{Some examples}
A graph $G$ is \emph{$\Delta$-edge-critical}, or simply \emph{edge-critical},
if $\chi'(G)>\Delta$, but $\chi'(G-e)\le\Delta$ for every edge $e$.
A \emph{configuration} is a subgraph $H$, along with specified degrees $d_G(v)$
in the original graph for each vertex of $H$. A configuration $H$ is
\emph{reducible} if there exists an edge $e\in E(H)$ such
that whenever $H$ appears as a subgraph (not necessarily induced) of a graph
$G$, if $G-e$ has a $\Delta$-edge-coloring, then so does $G$.
A central tool for proving reducibility for edge-coloring is Vizing's Adjacency
Lemma. For example, it yields a short proof of Vizing's Theorem that
$\chi'(G)\le \Delta+1$ for every simple graph $G$.
\begin{VAL}
Let $G$ be a $\Delta$-critical graph. If $xy\in E(G)$, then $x$ is adjacent to
at least $\max\{2,\Delta-d(y)+1\}$ vertices of degree $\Delta$.
\end{VAL}
We can view VAL as giving conditions for the degrees of a vertex and its
neighbors that yield a reducible configuration. Our goal now is to prove
similar statements for larger configurations; we'd like a way to talk about
configurations being reducible for $k$-edge-coloring.
Lemma \ref{FixableCompletesColoring} gives us this with respect to a fixed
partial $k$-edge-coloring $\pi$, but we want a condition independent of the
particular coloring. Note that we have a lower bound on the sizes of the lists
in $L_\pi$; specifically, if $\pi$ is a partial $k$-edge-coloring of a
multigraph $M$, then $|L_{\pi}(v)| \ge k + d_{M_\pi}(v) - d_M(v)$ for every $v
\in M_{\pi}$.
This observation motivates the following definition.
\begin{defn}
If $G$ is a graph and $\func{f}{V(G)}{\mathbb{N}}$, then $G$ is \emph{$(f,k)$-fixable}
if $G$ is $(L, \irange{k})$-fixable for every $L$ with $|L(v)| \ge k + d_{G}(v)
- f(v)$ for all $v \in V(G)$.
\end{defn}
This definition enables us to state our desired condition on reducible configurations for
$k$-edge-coloring, which follows directly from Lemma \ref{FixableCompletesColoring}.
\begin{obs}
If $G$ is $(f,k)$-fixable, then $G$ cannot be a subgraph of a
$(k+1)$-edge-critical graph $M$ where $d_M(v) \le f(v)$ for all $v \in V(G)$.
\end{obs}
Now we can talk about a graph $G$ with vertices labeled by $f$ being
$k$-fixable. The computer is extremely good at finding $k$-fixable
graphs. Combined with discharging arguments\footnote{The discharging method is
a counting technique commonly used in coloring proofs to show that the graph
under consideration must contain a reducible configuration. For an introduction
to this method,
see~\cite{discharging13}.}, this gives a powerful method for proving (modulo
trusting the computer) edge-coloring results for small $\Delta$. We'll see
some examples of such proofs later; for now Figure \ref{fig:small3} shows some
$3$-fixable graphs. A gallery of hundreds more fixable graphs is available at
\url{https://dl.dropboxusercontent.com/u/8609833/Web/GraphData/Fixable/index.html}.
\begin{figure}[htb]
\includegraphics[scale=0.25]{Delta3TriangleFree/1_2,2_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/1_3,1_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/011_2,2,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/0011011010_3,3,2,2,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/000110011010010_3,3,2,2,3,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/001010011011000_3,3,2,2,3,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/001010011011000_3,3,3,2,2,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/001110011001000_3,3,2,2,3,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/001110111011000_3,3,3,2,3,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/001110111111000_3,3,3,3,3,3_.pdf}
\includegraphics[scale=0.25]{Delta3TriangleFree/0000110000101000110010001000_3,3,3,2,2,2,3,3_.pdf}
\caption{Some small $3$-fixable graphs. (The label for each vertex specifies
its degree in $G$.)\label{fig:small3}}
\end{figure}
The penultimate graph in Figure \ref{fig:small3} is an example of the more
general fact that a $k$-regular graph with $f(v) = k$ for all $v$ is $k$-fixable
precisely when it is $k$-edge-colorable. That the third graph in Figure
\ref{fig:small3} is reducible follows from Vizing's Adjacency Lemma.
\subsection{A necessary condition}
Since the edges incident to a vertex $v$ must all get different colors,
if $G$ is $(L, P)$-fixable, then $|L(v)| \ge d_G(v)$ for all $v \in V(G)$.
By considering the maximum size of matchings in each color, we get a more
interesting necessary condition.
For each $C \subseteq \operatorname{pot}(L)$ and $H \subseteq G$, let $H_{L, C}$ be the
subgraph of $H$ induced by the vertices $v$ with $L(v) \cap C \ne \emptyset$.
When $L$ is clear from context, we write $H_C$ for $H_{L,C}$. If $C =
\set{\alpha}$, we write $H_\alpha$ for $H_C$. For $H \subseteq G$, let
\[\psi_L(H) = \sum_{\alpha \in \operatorname{pot}(L)} \floor{\frac{\card{H_{L, \alpha}}}{2}}.\]
Each term in the sum gives an upper bound on the size of a matching in color
$\alpha$. So $\psi_L(H)$ is an upper bound on the number of edges in a
partial $L$-edge-coloring of $H$. The pair $(H, L)$ is \emph{abundant} if
$\psi_L(H) \ge \size{H}$ and $(G,L)$ is \emph{superabundant} if for every
$H \subseteq G$, the pair $(H, L)$ is abundant.
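Since $\psi_L(H)$ depends only on $V(H)$, and for a fixed vertex set the induced subgraph has the most edges, superabundance can be checked over induced subgraphs alone. The following brute-force sketch (our own illustration, exponential time, tiny instances only) does exactly that.

```python
from itertools import combinations

def psi(vertices, L):
    """psi_L(H) = sum over colors alpha of floor(|H_alpha| / 2)."""
    pot = set().union(*(L[v] for v in vertices))
    return sum(sum(1 for v in vertices if a in L[v]) // 2 for a in pot)

def is_superabundant(vertices, edges, L):
    """Check psi_L(H) >= ||H|| for every induced subgraph H of G."""
    for r in range(1, len(vertices) + 1):
        for W in map(set, combinations(vertices, r)):
            size_H = sum(1 for (u, v) in edges if u in W and v in W)
            if psi(W, L) < size_H:
                return False
    return True

# Path a-b-c with every list {1, 2}: each color term is floor(3/2) = 1,
# so psi = 2 >= 2 edges, and smaller subgraphs are also abundant.
L = {'a': {1, 2}, 'b': {1, 2}, 'c': {1, 2}}
ok = is_superabundant(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], L)
# A triangle with all lists {1} fails: psi = floor(3/2) = 1 < 3 edges.
bad = is_superabundant(['a', 'b', 'c'],
                       [('a', 'b'), ('b', 'c'), ('a', 'c')],
                       {'a': {1}, 'b': {1}, 'c': {1}})
```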
\begin{lem}
\label{SuperabundanceIsNecessary}
If $G$ is $(L, P)$-fixable, then $(G, L)$ is superabundant.
\end{lem}
\begin{proof}
Suppose instead that $G$ is $(L, P)$-fixable and there is $H \subseteq
G$ such that $(H, L)$ is not abundant. We show that for all distinct $a,b \in
P$ there is a partition $X_1, \ldots, X_t$ of $S_{L,a,b}$ into sets of size at
most two, such that for all $J \subseteq \irange{t}$, the pair $(H,L')$ is not
abundant, where $L'$ is formed from $L$ by swapping $a$ and $b$ in $L(v)$ for
every $v \in \bigcup_{i \in J} X_i$. Since $G$ can only be edge-colored from
a superabundant list assignment, this contradicts that $G$ is $(L,P)$-fixable.
Pick distinct colors $a,b \in P$. Let $S = S_{L,a,b} \cap V(H)$, let $S_a$ be the
set of $v \in S$ with $a \in L(v)$, and let $S_b = S\setminus S_a$; the vertices
of $S_{L,a,b}$ outside $V(H)$ are placed in singletons, since swapping colors
in their lists does not affect $\psi_L(H)$.
In the sum for $\psi_L(H)$, swapping $a$ and $b$ only affects the terms
$\floor{\frac{\card{H_a}}{2}}$ and $\floor{\frac{\card{H_b}}{2}}$, and it leaves
$\card{H_a} + \card{H_b}$ unchanged. So, if $\psi_L(H)$ is increased by the
swapping, it must be that both $|H_a|$ and $|H_b|$ are odd, and after swapping
they are both even; in particular, $|H_a|$ and $|H_b|$ have the same parity.
Say $S_a = \set{a_1, \ldots,a_p}$ and $S_b = \set{b_1, \ldots,b_q}$. By
symmetry, we assume $p \le q$. Since the vertices with both $a$ and $b$ in
their list lie in both $H_a$ and $H_b$, if $p \not\equiv q \pmod 2$ then
$|H_a|$ and $|H_b|$ always have different parities, no swapping increases
$\psi_L(H)$, and any partition works; so we may assume $q-p$ is even. For each
$i \in \irange{p}$, let $X_i = \set{a_i, b_i}$, and for each $j \in
\irange{\frac{q-p}{2}}$, let $X_{p + j} = \set{b_{p + 2j - 1}, b_{p + 2j}}$.
For any $i \in \irange{p}$, swapping $a$ and $b$ in $L(v)$ for every $v \in
X_i$ maintains $|S_a|$ and $|S_b|$. For any $j \in \irange{\frac{q-p}{2}}$,
swapping $a$ and $b$ in $L(v)$ for every $v \in X_{p+j}$ maintains the parity
of $|S_a|$ and $|S_b|$, and hence of $|H_a|$ and $|H_b|$. So no choice of $J$
can increase $\psi_L(H)$. Thus, $(H,L')$ is never abundant.
\end{proof}
In particular, we conclude the following.
\begin{cor}
If $G$ is $(f,k)$-fixable, then $(G,L)$ is superabundant for every $L$ with
$L(v) \subseteq \irange{k}$ and $|L(v)| \ge k + d_{G}(v) - f(v)$ for all $v \in V(G)$.
\end{cor}
Intuitively, superabundance requires the potential for a large enough matching
in each color. If instead we require the existence of a large enough matching
in each color, then we get a stronger condition that has been studied before.
For a multigraph $H$, let $\nu(H)$ be the number of edges in a maximum matching
of $H$. For a list assignment $L$ on $H$, let
$$\eta_L(H) = \sum_{\alpha \in \operatorname{pot}(L)} \nu(H_\alpha).$$
Note that always $\psi_L(H) \ge \eta_L(H)$.
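The quantity $\eta_L$ is likewise easy to compute by brute force on tiny instances; in this sketch (ours, not the paper's) the matching number $\nu$ is found by exhaustive search over edge subsets.

```python
from itertools import combinations

# Sketch: eta_L(H) computed straight from the definition, with nu found by
# brute force -- fine for tiny multigraphs, hopeless for large ones.

def nu(edges):
    """Maximum matching size of a multigraph given as a list of vertex pairs."""
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            ends = [w for e in sub for w in e]
            if len(ends) == len(set(ends)):  # edges pairwise disjoint
                return r
    return 0

def eta(vertices, edges, L):
    pot = set().union(*(L[v] for v in vertices))
    total = 0
    for a in pot:
        Va = {v for v in vertices if a in L[v]}  # vertices of H_alpha
        total += nu([e for e in edges if e[0] in Va and e[1] in Va])
    return total

# Path a-b-c with all lists {1, 2}: each H_alpha is the whole path, whose
# matching number is 1, so eta = 2, which here equals psi.
L = {'a': {1, 2}, 'b': {1, 2}, 'c': {1, 2}}
value = eta(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], L)
```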
The following generalization of Hall's theorem was proved by Marcotte and
Seymour \cite{marcotte1990extending} and independently by Cropper,
Gy{\'a}rf{\'a}s, and Lehel \cite{cropper2003edge}. By a \emph{multitree} we
mean a tree that possibly has edges of multiplicity greater than one.
\begin{lem}[Marcotte and Seymour]\label{MultiTreeHall}
Let $T$ be a multitree and $L$ a list assignment on $V(T)$. If $\eta_L(H) \ge
\size{H}$ for all $H \subseteq T$, then $T$ has an $L$-edge-coloring.
\end{lem}
In \cite{HallGame}, the second author proved that superabundance itself is also a
sufficient condition for fixability, when we restrict our graphs to be
multistars. This result immediately implies the \emph{fan equation}, which is
an extension of Vizing's Adjacency Lemma to multigraphs and a standard tool in
proving reducibility for edge-coloring (see \cite[p.~19ff]{stiebitz2012graph}).
The proof for multistars uses Hall's
theorem to reduce to a smaller star and one might hope that we could do the
same for arbitrary trees, with Lemma \ref{MultiTreeHall} in place of Hall's
theorem (thus giving a short proof that Tashkinov trees are elementary), but we
haven't yet made this work.
\subsection{Fixability of stars}
When $G$ is a star, superabundance implies fixability (provided that $|L(v)|\ge
d_G(v)$ for each vertex $v$), and this result generalizes Vizing fans
\cite{Vizing76}. In \cite{HallGame}, the second author proved a
common generalization of
this and of Hall's theorem; below we reproduce the proof for the special case
of edge-coloring. In the next section we define ``Kierstead-Tashkinov-Vizing
assignments'' and show that they are always superabundant.
\begin{thm}\label{FixabilityOfStars}
If $G$ is a multistar, then $G$ is $L$-fixable if and only if $(G, L)$ is
superabundant and $|L(v)| \ge d_G(v)$ for all $v \in V(G)$.
\end{thm}
\begin{proof}
Our strategy is simply to increase $\eta_L(G)$ if we can; if we cannot, then
Hall's theorem allows us to reduce to a smaller graph. We can view
this strategy as the following double induction. Suppose the theorem is
false and choose a counterexample $(G, L)$ minimizing $\size{G}$ and, subject to
that, maximizing $\eta_L(G)$.
Let $z$ be the center of the multistar $G$. Create a bipartite graph $B$ with
parts $C$ and $D$, where $C$
is the set of colors $\alpha$ that can be used on at least one edge, and $D$ is
the set of edges $e$ with at least one color available on $e$, and a color
$\alpha$ is adjacent to an edge $e$ if $\alpha$ can be used on $e$.
Note that $|C|=\eta_L(G)$.
First, suppose $|C| < \size{G}$. Since $|L(z)| \ge d_G(z) = \size{G}$,
some color $\tau \in L(z)$ cannot be used on any edge. Suppose some
color $\beta \in C$ can be used on at least three edges.
Let $zw$ be some edge that can use $\beta$.
Since $G$ is not $L$-fixable, there is $X \subseteq S_{L, \tau, \beta}$ with
$w \in X$ and $|X| \le 2$ such that $G$ is not $L'$-fixable, where $L'$ is formed
from $L$ by swapping $\tau$ and $\beta$ in $L(v)$ for every $v \in X$. Since
$\beta$ can be used on at least three edges for $L$, it can be used on at
least one edge for $L'$. Further, $\tau$ can also be used on at least one edge
for $L'$. Thus $\eta_{L'}(G) > \eta_L(G)$. Since $(G,L')$ is still
superabundant, this violates maximality of $\eta_L(G)$. Hence,
each color $\beta \in C$ can be used on at most two edges. So, each color in $C$
contributes at most one to $\psi_L(G)$.
Since $|C| < \size{G} \le \psi_L(G)$,
some color $\gamma$ contributes at least 1 to $\psi_L(G)$, but is not in $C$.
More precisely, some $\gamma \not \in C$ satisfies $|G_\gamma - z| \ge 2$. Since $G$
is not $L$-fixable, there is $X \subseteq S_{L, \tau, \gamma}$ with $z \in X$ and
$|X| \le 2$ such that $G$ is not $L'$-fixable where $L'$ is formed from $L$
by swapping $\tau$ and $\gamma$ in $L(v)$ for every $v \in X$. Since
$\nu(G_{L, \tau}) = 0$ and $\nu(G_{L, \gamma}) = 0$ and $\nu(G_{L', \gamma}) =
1$, we have $\eta_{L'}(G) > \eta_L(G)$. Since $(G,L')$ is still superabundant,
this violates maximality of $\eta_L(G)$.
Hence, we must have $|C| \ge \size{G}$. In particular, $\card{N_B(C)} \le
\card{D} \le \size{G} \le |C|$, so we may choose a set of colors $C' \subseteq C$
such that $C'$ is a minimal nonempty set satisfying $\card{N_B(C')} \le \card{C'}$.
If $|C'|\ge |N_B(C')|+1$, then, for any $\rho\in C'$, we have
$\card{C'-\rho}=\card{C'}-1\ge\card{N_B(C')}\ge\card{N_B(C'-\rho)}$, which
contradicts the minimality of $C'$. Thus, $\card{C'}=\card{N_B(C')}$.
Furthermore, by minimality of $C'$, every nonempty $C''\subsetneq C'$ satisfies
$\card{N_B(C'')}>\card{C''}$, so Hall's Theorem yields a perfect matching $M$
between $C'$ and $N_B(C')$.
For each color/edge pair
$\set{\alpha, zw} \in M$, use color $\alpha$ on edge $zw$. Form $G'$ from $G$
by removing all the colored edges and then discarding any isolated vertices,
and form $L'$ from $L$ by deleting the colors of $C'$ from every list. (Since
$M$ saturates $N_B(C')$, every edge that can use a color of $C'$ is colored and
removed; so among the vertices of $G'$, the colors of $C'$ appear only in
$L(z)$.)
Note that $z$ lost exactly $\card{C'}$ colors from its list and also
$d_{G'}(z)=d_G(z)-\card{C'}$, so $\card{L'(z)}=\card{L(z)}-\card{C'}\ge
d_G(z)-\card{C'}=d_{G'}(z)$. Each other vertex $w\in V(G')$ satisfies
$d_{G'}(w)=d_G(w)$ and $\card{L'(w)}=\card{L(w)}$, so $\card{L'(w)}\ge
d_{G'}(w)$.
Since $G$ is not $L$-fixable and $C'$ and $\operatorname{pot}(L')$ are disjoint, it must be
that $G'$ is not $L'$-fixable.
For each $H \subseteq G'$, we have $\psi_{L'}(H) = \psi_{L}(H)$: if
$\alpha\in C'$, then $\floor{\card{H_{L,\alpha}}/2}=0$, since $E(H)\cap
N_B(C')=\emptyset$; and if $\alpha\notin C'$, then each $v\in V(G')$ satisfies
$\alpha\in L'(v)$ if and only if $\alpha\in L(v)$.
Thus, $H$ is abundant for $L'$ precisely because $H$ is abundant for $L$.
But $\size{G'} < \size{G}$, so by minimality of $\size{G}$, $G'$ is
$L'$-fixable, a contradiction.
\end{proof}
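The Hall-type step at the end of the proof — finding a minimal nonempty $C'$ with $\card{N_B(C')}\le\card{C'}$ — can also be sketched by brute force. The encoding below (a hypothetical `can_use` map from colors to the star edges on which they can be used) is ours:

```python
from itertools import combinations

def minimal_tight_color_set(can_use):
    """Return a minimum-size nonempty set C' of colors with
    |N_B(C')| <= |C'|, where N_B(C') is the set of edges usable
    by some color of C' (None if no such set exists)."""
    colors = sorted(can_use)
    for r in range(1, len(colors) + 1):       # smallest first
        for Cp in combinations(colors, r):
            nbrs = set().union(*(can_use[a] for a in Cp))
            if len(nbrs) <= r:
                return set(Cp)
    return None

# Three colors on three star edges 0, 1, 2:
can_use = {'a': {0}, 'b': {0, 1}, 'c': {1, 2}}
# {'a'} is tight: color 'a' can be used only on edge 0.
assert minimal_tight_color_set(can_use) == {'a'}
```

As in the proof, minimality forces $\card{C'}=\card{N_B(C')}$, and Hall's theorem then supplies the perfect matching between $C'$ and $N_B(C')$.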
As shown in \cite{HallGame},
a direct consequence of Theorem \ref{FixabilityOfStars} is the fan equation.
This, in turn, implies most classical edge-coloring results including Vizing's
Adjacency Lemma.
\subsection{Kierstead-Tashkinov-Vizing assignments}
Many edge-coloring results have been proved using a specific kind of
superabundant pair $(G, L)$ where superabundance can be proved via a special
ordering; namely, the orderings given by the definitions of Vizing fans,
Kierstead paths, and Tashkinov trees (these structures are all standard tools in
edge-coloring; definitions and more background are available
in~\cite{stiebitz2012graph}).
In this section, we show how superabundance follows easily from these orderings.
For each vertex $v$, we write $E(v)$ for the set of edges incident to $v$.
A list assignment $L$ on $G$ is a \emph{Kierstead-Tashkinov-Vizing} assignment
(henceforth \emph{KTV-assignment}) if for some edge $xy \in E(G)$, there is a
total ordering `$<$' of $V(G)$ such that
\begin{enumerate}
\item there is an edge-coloring $\pi$ of $G-xy$ such that $\pi(uv) \in L(u)
\cap L(v)$ for each edge $uv \in E(G - xy)$;
\item $x < z$ for all $z \in V(G - x)$;
\item $G\brackets{w \mid w \le z}$ is connected for all $z \in V(G)$;
\item for each edge $wz \in E(G - xy)$, there is a vertex $u < \max\set{w, z}$ such that
$\pi(wz) \in L(u) - \setbs{\pi(e)}{e \in E(u)}$;
\item there are distinct vertices $s, t \in V(G)$ with $L(s) \cap L(t) -
\setbs{\pi(e)}{e \in E(s) \cup E(t)} \ne \emptyset$.
\end{enumerate}
\begin{lem}\label{KTVImpliesSuperabundant}
If $L$ is a KTV-assignment on $G$, then $(G, L)$ is superabundant.
\end{lem}
\begin{proof}
Let $L$ be a KTV-assignment on $G$, and let $H \subseteq G$. We will show that
$(H,L)$ is abundant.
Clearly it suffices to consider the case when $H$ is an induced subgraph, so we
assume this.
Property (1) gives that $G-xy$ has an edge-coloring
$\pi$, so $\psi_L(H)\ge \size{H}-1$; also $\psi_L(H)\ge \size{H}$ if
$\{x,y\}\not\subseteq V(H)$. Furthermore, $\psi_L(H)\ge \size{H}$ if $s$ and
$t$ from property (5) are both in $V(H)$, since then $\psi_L(H)$ gains 1 over
the naive lower bound, due to the color in $L(s)\cap L(t)$. So we may assume
$\{s,t\}\not\subseteq V(H)$; in particular, $V(G)-V(H)\ne \emptyset$.
Now choose a vertex $z \in V(G) - V(H)$ that is smallest under $<$.
Let $H' = G\brackets{w \mid w \le z}$. By the minimality of $z$, we have $H' -
z \subseteq H$. By property (2), $\card{H'} \ge 2$. By property (3), $H'$ is
connected and thus there is $w \in V(H' - z)$ adjacent to $z$. So, we have $w <
z$ and $wz\in E(G)-E(H)$.
Property (4) implies that there exists a vertex $u$ with $u <
\max\set{w, z} = z$ and $\pi(wz) \in L(u)-\{\pi(e)|e\in E(u)\}$. Since $u \in
V(H' - z) \subseteq V(H)$, we again gain 1 over the naive lower bound on
$\psi_L(H)$, due to the color in $L(u)\cap L(w)$. So $\psi_L(H)\ge \size{H}$.
\end{proof}
\subsection{The gap between fixability and reducibility}
By abstracting away the containing graph, we may have lost some power in proving
reducibility results. We surely have when we restrict attention to a particular
class of graphs. For example, with planar graphs, not all Kempe path pairings
are possible (if we add an edge for each pair, the resulting graph must be
planar). But possibly there are graphs that are reducible for all containing
graphs yet are not fixable. We could strengthen ``fixable'' in various ways,
but we have not found the need to do so. One particular strengthening
deserves mention, since it makes fixability more induction-friendly.
\begin{defn}
$G$ is \emph{$(L, P)$-subfixable} if either
\begin{enumerate}
\item[(1)] $G$ is $(L, P)$-fixable; or
\item[(2)] there is $xy \in E(G)$ and $\tau \in L(x) \cap L(y)$ such that
$G-xy$ is $L'$-subfixable, where $L'$ is formed from $L$ by removing $\tau$ from
$L(x)$ and $L(y)$.
\end{enumerate}
\end{defn}
Superabundance is a necessary condition for subfixability because coloring an
edge cannot make a non-abundant subgraph abundant. The conjectures in the rest
of this paper may be easier to prove with subfixable in place of fixable. That would
really be just as good since it would give the exact same results for edge coloring.
\section{Applications of small $k$-fixable graphs}
In this section, we use $k$-fixable graphs to prove a few conjectures about
3-critical and 4-critical graphs.
A \emph{$k$-vertex} is a vertex of degree $k$, and a \emph{$k$-neighbor} of a
vertex $v$ is a $k$-vertex adjacent to $v$.
\subsection{The conjecture of Hilton and Zhao for $\Delta=4$}
For a graph $G$, let $G_\Delta$ be the subgraph of $G$ induced by vertices of
degree $\Delta(G)$. Vizing's Adjacency Lemma implies that $\delta(G_\Delta) \ge
2$ in a critical graph $G$. A natural question is whether or not this is
best possible. For example, can we have $\Delta(G_\Delta) = 2$ in a
critical graph $G$? In fact, Hilton and Zhao have conjectured exactly when
this can happen.
Recall that a graph $G$ is \emph{class 1} if $\chi'(G)=\Delta$ and \emph{class
2} otherwise.
A graph $G$ is \emph{overfull} if $||G|| >
\floor{\frac{|G|}{2}}\Delta(G)$. (The significance of overfull graphs is that
they must be class 2, simply because they have more edges than can be colored by
$\Delta(G)$ matchings, each of size $\floor{\frac{|G|}2}$.)
Let $P^*$ denote the Petersen graph with one
vertex deleted (see Figure \ref{fig:petey}).
\begin{conjecture}[Hilton and Zhao]
A connected graph $G$ with $\Delta(G_\Delta) \le 2$ is class 2 if and only if
$G$ is $P^*$ or $G$ is overfull.
\end{conjecture}
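The overfull condition is a quick arithmetic test. The sketch below (our own parameter encoding: vertex count, edge count, maximum degree) checks the two graphs relevant here:

```python
def is_overfull(n, m, Delta):
    """A graph is overfull if ||G|| > floor(|G|/2) * Delta(G)."""
    return m > (n // 2) * Delta

# K5 - e: 5 vertices, 9 edges, Delta = 4, and 9 > 2*4 = 8.
assert is_overfull(5, 9, 4)
# Petersen minus a vertex (P*): 9 vertices, 12 edges, Delta = 3;
# 12 <= 4*3, so P* is class 2 without being overfull.
assert not is_overfull(9, 12, 3)
```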
\input{pics}
David and Gianfranco Cariolaro~\cite{cariolaro2003colouring} proved this
conjecture when $\Delta=3$. Here we prove it when $\Delta=4$, but we
omit the very long computer-generated proofs of the reducibility of the
graphs in Figure~\ref{fig:hiltonzhao}.
Since we do not include the reducibility proofs, we separate the proof into two
parts. The first does not use the computer at all.
Let $\fancy{H}_4$ be the class of connected graphs with maximum degree 4, minimum
degree 3, each vertex adjacent to at least two 4-vertices, and each 4-vertex
adjacent to exactly two 4-vertices.
\begin{lem}\label{HiltonZhaoLemma}
If $G$ is a graph in $\fancy{H}_4$ and $G$ contains none of the three configurations in
Figure~\ref{fig:hiltonzhao} (not necessarily induced), then $G$ is $K_5-e$.
\end{lem}
\begin{proof}
Let $G$ be a graph in $\fancy{H}_4$. Note that every 4-vertex in $G$ has exactly two
3-neighbors and two 4-neighbors. Let $u$ denote a 4-vertex and let
$v_1,\ldots,v_4$ denote its neighbors, where $d(v_1)=d(v_2)=3$ and $d(v_3)=d(v_4)=4$.
When vertices $x$ and $y$ are adjacent, we write $x\leftrightarrow y$. We assume that $G$
contains none of the configurations in Figure~\ref{fig:hiltonzhao} and show that
$G$ must be $K_5-e$.
First suppose that $u$ has a 3-neighbor and a 4-neighbor that are adjacent. By
symmetry, assume that $v_2\leftrightarrow v_3$. Since
Figure~\ref{fig:hiltonzhao}(a) is forbidden, we have $v_3\leftrightarrow v_1$.
Now consider $v_4$. If $v_4$ has a 3-neighbor distinct from $v_1$ and $v_2$,
then we have a copy of Figure~\ref{fig:hiltonzhao}(c). Hence $v_4\leftrightarrow v_1$ and
$v_4\leftrightarrow v_2$. If $v_3\leftrightarrow v_4$, then $G$ is $K_5-e$. Suppose not, and let
$x$ be a 4-neighbor of $v_4$. Since $G$ has no copy of
Figure~\ref{fig:hiltonzhao}(c), $x$ must be adjacent to $v_1$ and $v_2$. This
is a contradiction, since $v_1$ and $v_2$ are 3-vertices, but now each has at
least four neighbors. Hence, we conclude that each of $v_1$ and $v_2$ is
non-adjacent to each of $v_3$ and $v_4$.
Now consider the 3-neighbors of $v_3$ and $v_4$. If $v_3$ and $v_4$ have at
most one 3-neighbor
in common, then we have a copy of Figure~\ref{fig:hiltonzhao}(b).
Otherwise they have two 3-neighbors in common,
so we have a copy of Figure~\ref{fig:hiltonzhao}(c).
\end{proof}
Since $K_5 - e$ is overfull, the next theorem implies Hilton and Zhao's conjecture
for $\Delta=4$.
\begin{thm}
A connected graph $G$ with $\Delta(G) = 4$ and $\Delta(G_\Delta) \le 2$ is
class 2 if and only if $G$ is $K_5-e$.
\end{thm}
\begin{proof}
Let $G$ be as stated in the theorem.
If $G$ is class $2$, then $G$ has a $4$-critical subgraph $H$. Since
$H$ is $4$-critical, it is connected,
and every vertex has at least two neighbors of degree $4$, by Vizing's
Adjacency Lemma (VAL). Further, since $\Delta(H_\Delta) \le \Delta(G_\Delta)
\le 2$, VAL implies that $H$ has minimum degree $3$.
Thus, $H \in \fancy{H}_4$. By Lemma \ref{HiltonZhaoLemma},
either $H$ is $K_5-e$ or $H$ contains one of
the configurations in Figure~\ref{fig:hiltonzhao}. By computer, each of these
configurations is reducible and hence cannot be a subgraph of the $4$-critical
graph $H$. Thus $H$ is $K_5-e$. Let $x_1,x_2$ be the degree $3$ vertices in
$H$. Each $x_i$ has three degree $4$ neighbors in $H$ and hence $d_G(x_i) \le
3$ since $\Delta(G_\Delta) \le 2$. That is, $x_i$ has no neighbors outside
$H$. Since $G$ is connected, we must have $G = H = K_5 - e$.
\end{proof}
\subsection{Improved lower bounds on the average degree of 3-critical and
4-critical graphs}
Recall that $P^*$ denotes the Petersen graph with a vertex deleted (see Figure \ref{fig:petey}).
Jakobsen~\cite{Jakobsen73,Jakobsen74} noted that $P^*$ is 3-critical and has
average degree $2.\overline{6}$. He showed that every 3-critical graph has average
degree at least $2.\overline{6}$, and asked whether equality holds only for $P^*$.
In~\cite{3criticalCR}, we answered his question affirmatively. More precisely,
we showed that every 3-critical graph other than $P^*$ has average degree at
least $2+\frac{26}{37}=2.\overline{702}$. The proof crucially depends on the
fact that the three leftmost configurations in Figure~\ref{tree1-pic} are
reducible for 3-edge-coloring.
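The arithmetic behind these bounds is easy to verify exactly (a sketch; the counts for $P^*$ come from deleting one vertex of the 3-regular, 15-edge Petersen graph):

```python
from fractions import Fraction

# P* = Petersen minus a vertex: 10-1 = 9 vertices and 15-3 = 12
# edges, so its average degree is 2*12/9 = 8/3 = 2.666...
avg_Pstar = Fraction(2 * 12, 9)
assert avg_Pstar == Fraction(8, 3)

# The bound for every other 3-critical graph, and the comparison
# family with average degree below 2.75:
bound = 2 + Fraction(26, 37)                 # = 2.702702...
assert avg_Pstar < bound < Fraction(11, 4)
```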
As we noted in~\cite{3criticalCR}, by using the computer to prove reducibility
of additional configurations, we can slightly strengthen this result. Specifically,
every 3-critical graph has average degree at least $2+\frac{22}{31} \approx
2.7097$ unless it is $P^*$ or one other exceptional graph, the Haj\'{o}s
join of two copies of $P^*$. (For comparison, there exists an infinite family of
3-critical graphs with average degree less than $2.75$.) This strengthening
relies primarily on the fact that the rightmost configuration in
Figure~\ref{fig:bigtree} is reducible, even if one or more pairs of
its 2-vertices are identified. However, the simplest proof we have of this fact
is computer-generated and fills about 100 pages.
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale = 8, font=\sffamily]
\tikzstyle{VertexStyle} = []
\tikzstyle{EdgeStyle} = []
\tikzstyle{labeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 2.2pt, draw]
\tikzstyle{unlabeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 1.2pt, draw, fill]
\Vertex[style = unlabeledStyle, x = 0.45, y = 0.80, L = \tiny {}]{v0}
\Vertex[style = unlabeledStyle, x = 0.35, y = 0.75, L = \tiny {}]{v1}
\Vertex[style = unlabeledStyle, x = 0.30, y = 0.65, L = \tiny {}]{v2}
\Vertex[style = unlabeledStyle, x = 0.35, y = 0.55, L = \tiny {}]{v3}
\Vertex[style = unlabeledStyle, x = 0.45, y = 0.50, L = \tiny {}]{v4}
\Vertex[style = unlabeledStyle, x = 0.55, y = 0.55, L = \tiny {}]{v5}
\Vertex[style = unlabeledStyle, x = 0.60, y = 0.65, L = \tiny {}]{v6}
\Vertex[style = unlabeledStyle, x = 0.55, y = 0.75, L = \tiny {}]{v7}
\Vertex[style = unlabeledStyle, x = 0.45, y = 0.95, L = \tiny {}]{v8}
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v1)(v2)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v1)(v0)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v7)(v0)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v7)(v6)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v7)(v3)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v5)(v6)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v5)(v4)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v5)(v1)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v3)(v4)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v3)(v2)
\path [line width=1pt]
(v2) edge [bend left=35] (v8)
(v8) edge [bend left=35] (v6);
\end{tikzpicture}
\end{center}
\caption{The Petersen graph with one vertex removed.}
\label{fig:petey}
\end{figure}
\bigskip
\begin{figure}[!htb]
\renewcommand{\ttdefault}{ptm}
\begin{center}
\begin{tikzpicture}[scale = 9, font=\sffamily]
\tikzstyle{VertexStyle} = []
\tikzstyle{EdgeStyle} = []
\tikzstyle{labeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 2.2pt, draw]
\tikzstyle{unlabeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 1.2pt, draw, fill]
\Vertex[style = labeledStyle, x = 0.45, y = 0.75, L = \small {\texttt{2}}]{v0}
\Vertex[style = labeledStyle, x = 0.45, y = 0.65, L = \small {\texttt{3}}]{v1}
\Vertex[style = labeledStyle, x = 0.55, y = 0.65, L = \small {\texttt{3}}]{v2}
\Vertex[style = labeledStyle, x = 0.55, y = 0.75, L = \small {\texttt{2}}]{v3}
\Vertex[style = labeledStyle, x = 0.65, y = 0.75, L = \small {\texttt{2}}]{v4}
\Vertex[style = labeledStyle, x = 0.65, y = 0.65, L = \small {\texttt{3}}]{v5}
\Vertex[style = labeledStyle, x = 0.75, y = 0.65, L = \small {\texttt{3}}]{v6}
\Vertex[style = labeledStyle, x = 0.75, y = 0.75, L = \small {\texttt{2}}]{v7}
\Edge[](v6)(v7)
\Edge[](v6)(v5)
\Edge[](v5)(v4)
\Edge[](v5)(v2)
\Edge[](v2)(v3)
\Edge[](v2)(v1)
\Edge[](v1)(v0)
\end{tikzpicture}
~~~
\begin{tikzpicture}[scale = 9, font=\sffamily]
\tikzstyle{VertexStyle} = []
\tikzstyle{EdgeStyle} = []
\tikzstyle{labeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 2.2pt, draw]
\tikzstyle{unlabeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 1.2pt, draw, fill]
\Vertex[style = labeledStyle, x = 0.60, y = 0.80, L = \small {\texttt{2}}]{v0}
\Vertex[style = labeledStyle, x = 0.50, y = 0.70, L = \small {\texttt{3}}]{v1}
\Vertex[style = labeledStyle, x = 0.60, y = 0.70, L = \small {\texttt{3}}]{v2}
\Vertex[style = labeledStyle, x = 0.70, y = 0.70, L = \small {\texttt{3}}]{v3}
\Vertex[style = labeledStyle, x = 0.60, y = 0.60, L = \small {\texttt{2}}]{v4}
\Edge[](v3)(v2)
\Edge[](v3)(v0)
\Edge[](v1)(v0)
\Edge[](v1)(v2)
\Edge[](v2)(v4)
\end{tikzpicture}
~~~
\begin{tikzpicture}[scale = 9, font=\sffamily]
\tikzstyle{VertexStyle} = []
\tikzstyle{EdgeStyle} = []
\tikzstyle{labeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 2.2pt, draw]
\tikzstyle{unlabeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 1.2pt, draw, fill]
\Vertex[style = labeledStyle, x = 0.60, y = 0.80, L = \small {\texttt{3}}]{v0}
\Vertex[style = labeledStyle, x = 0.50, y = 0.70, L = \small {\texttt{3}}]{v1}
\Vertex[style = labeledStyle, x = 0.60, y = 0.70, L = \small {\texttt{3}}]{v2}
\Vertex[style = labeledStyle, x = 0.70, y = 0.70, L = \small {\texttt{3}}]{v3}
\Vertex[style = labeledStyle, x = 0.70, y = 0.60, L = \small {\texttt{2}}]{v4}
\Vertex[style = labeledStyle, x = 0.60, y = 0.60, L = \small {\texttt{2}}]{v5}
\Vertex[style = labeledStyle, x = 0.50, y = 0.60, L = \small {\texttt{2}}]{v6}
\Edge[](v3)(v4)
\Edge[](v3)(v2)
\Edge[](v3)(v0)
\Edge[](v1)(v6)
\Edge[](v1)(v0)
\Edge[](v1)(v2)
\Edge[](v2)(v5)
\end{tikzpicture}
~~~
\begin{tikzpicture}[scale = 9, font=\sffamily]
\tikzstyle{VertexStyle} = []
\tikzstyle{EdgeStyle} = []
\tikzstyle{labeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 2.2pt, draw]
\tikzstyle{unlabeledStyle}=[shape = circle, minimum size = 6pt, inner sep = 1.2pt, draw, fill]
\Vertex[style = labeledStyle, x = 0.70, y = 0.70, L = \small {\texttt{3}}]{v0}
\Vertex[style = labeledStyle, x = 0.80, y = 0.70, L = \small {\texttt{3}}]{v1}
\Vertex[style = labeledStyle, x = 0.90, y = 0.70, L = \small {\texttt{3}}]{v2}
\Vertex[style = labeledStyle, x = 0.80, y = 0.60, L = \small {\texttt{2}}]{v3}
\Vertex[style = labeledStyle, x = 0.70, y = 0.60, L = \small {\texttt{2}}]{v4}
\Vertex[style = labeledStyle, x = 0.90, y = 0.60, L = \small {\texttt{2}}]{v5}
\Vertex[style = labeledStyle, x = 0.30, y = 0.70, L = \small {\texttt{3}}]{v6}
\Vertex[style = labeledStyle, x = 0.40, y = 0.70, L = \small {\texttt{3}}]{v7}
\Vertex[style = labeledStyle, x = 0.50, y = 0.70, L = \small {\texttt{3}}]{v8}
\Vertex[style = labeledStyle, x = 0.30, y = 0.60, L = \small {\texttt{2}}]{v9}
\Vertex[style = labeledStyle, x = 0.50, y = 0.60, L = \small {\texttt{2}}]{v10}
\Vertex[style = labeledStyle, x = 0.40, y = 0.60, L = \small {\texttt{2}}]{v11}
\Vertex[style = labeledStyle, x = 0.60, y = 0.80, L = \small {\texttt{3}}]{v12}
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v1)(v0)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v1)(v2)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v3)(v1)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v4)(v0)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v5)(v2)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v6)(v7)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v8)(v7)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v8)(v10)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v9)(v6)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v11)(v7)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v12)(v0)
\Edge[label = \tiny {}, labelstyle={auto=right, fill=none}](v12)(v8)
\end{tikzpicture}
\caption{Four subgraphs forbidden from a 3-critical graph $G$.}
\label{tree1-pic}
\label{umbrella-pic}
\label{jellyfish-pic}
\label{fig:bigtree}
\end{center}
\end{figure}
Woodall conjectured \cite{woodall2008average} that the average degree of every
4-critical graph is at least 3.6, which is best possible due to $K_5-e$.
We have proved this conjecture (modulo computer proofs of reducibility).
However, the proof requires 39 reducible configurations, so we defer it to an
appendix; even there, we omit the computer-aided reducibility proofs.
Here, we give a brief outline to illustrate the technique.
Our proof uses the discharging method. Assume that $G$ is a 4-critical graph.
Each vertex begins with an initial charge, which is its degree. We redistribute
the charge (without changing its sum), with the goal that every vertex finishes
with charge at least 3.6. If this is true, then $G$ has the desired average
degree. To redistribute charge, we successively apply the following 3 rules.
\begin{enumerate}
\item[(R1)] Each 2-vertex takes $.8$ from each 4-neighbor.
\item[(R2)] Each 3-vertex with three 4-neighbors takes $.2$ from each 4-neighbor.
Each 3-vertex with two 4-neighbors takes $.3$ from each 4-neighbor.
\item[(R3)] Each 4-vertex with charge in excess of $3.6$ after (R2) splits this
excess evenly among its 4-neighbors with charge less than $3.6$.
\end{enumerate}
By Vizing's Adjacency Lemma (VAL), each neighbor of a 2-vertex $v$ is a
4-neighbor. Thus, $v$ finishes with charge at least $2+2(.8)=3.6$.
Again by VAL, each 3-vertex $v$ has at least two 4-neighbors. So $v$ finishes
with charge at least $3+3(.2)$ or $3+2(.3)$, both of which are at least 3.6.
It is also easy to check that each 4-vertex $v$ finishes with charge at least 3.2;
by VAL, $v$ has at least two 4-neighbors, and if it has a 2-neighbor, then it has
three 4-neighbors. So the remainder of the proof consists in showing that all
4-vertices that finish (R2) with charge less than 3.6 receive enough charge by
(R3). The intuition is simple: if $v$ has few low degree neighbors and
neighbors of neighbors, then $v$ gets enough charge; otherwise, $v$ is contained
in some reducible configuration, which contradicts our choice of $G$ as
4-critical.
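The easy part of this verification can be checked mechanically. The helper below is our own sketch, not the paper's code; it simply recomputes the charge bounds for 2- and 3-vertices quoted above, with the neighbor counts constrained by VAL:

```python
def final_charge_lower_bound(deg, four_nbrs):
    """Charge of a 2- or 3-vertex after rules (R1)-(R2), given its
    number of 4-neighbors (VAL constrains the possible counts)."""
    if deg == 2:            # (R1): takes .8 from each 4-neighbor
        return 2 + 0.8 * four_nbrs
    if deg == 3:            # (R2): .2 each if three 4-nbrs, else .3 each
        rate = 0.2 if four_nbrs == 3 else 0.3
        return 3 + rate * four_nbrs

# VAL: a 2-vertex has two 4-neighbors; a 3-vertex has two or three.
assert final_charge_lower_bound(2, 2) >= 3.6          # 2 + 2(.8)
assert abs(final_charge_lower_bound(3, 3) - 3.6) < 1e-9
assert abs(final_charge_lower_bound(3, 2) - 3.6) < 1e-9
```

Only the 4-vertices require further analysis, which is exactly where the 39 reducible configurations enter.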
\section{Superabundance sufficiency and adjacency lemmas}
In the previous sections, we studied $k$-fixable graphs, which are reducible
configurations for graphs with fixed maximum degree. Here we study a more
general notion that behaves similarly to Vizing Fans, Kierstead Paths, and
Tashkinov Trees. Specifically, we consider graphs that are fixable for all
superabundant list assignments.
\subsection{Superabundant fixability in general}
\begin{defn}
If $G$ is a graph and $\func{f}{V(G)}{\mathbb{N}}$ with $f(v) \ge d_G(v)$ for all $v
\in V(G)$, then $G$ is $f$-fixable if $G$ is $(L, P)$-fixable for every $L$
with $|L(v)| \ge f(v)$ for all $v \in V(G)$ and every $L$-pot $P$ such that
$(G,L)$ is superabundant. If a graph $G$ is $f$-fixable when $f(v)=d_G(v)$ for
each $v$, then $G$ is \emph{degree-fixable}.
\end{defn}
For example, Theorem \ref{FixabilityOfStars} shows that multistars are
degree-fixable. We have also found that the $4$-cycle is degree-fixable.
\begin{problem}
Classify the degree-fixable multigraphs (specifically, containment minimal
ones).
\end{problem}
Since $f(v) \ge d_G(v)$, it is convenient to express the values of $f$ as $d+k$
for a non-negative integer $k$; this means $f(v) = d_G(v) + k$. For brevity,
when $k=0$ we just write $d$, and when $k=1$ we write $d+$, since the figures
only depict the cases $k=0$ and $k=1$.
Looking at the trees in Figures \ref{fig:fixable4}, \ref{fig:fixable5tree}, and
\ref{fig:fixable6tree} we might conjecture that a tree is $f$-fixable whenever
at most one internal vertex is labeled ``$d$''. This conjecture
continues to hold for many more examples.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.25]{Superabundance/all/001011_1,1,1,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/011010_2,1,1,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/011011_2,1,2,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/011110_2,2,2,2_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/1_1,1_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/011_1,1,2_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/111_2,2,2_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/011111_2,2,3,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/all/111111_3,3,3,3_.pdf}
\caption{The fixable graphs on at most 4 vertices.}
\label{fig:fixable3}
\label{fig:fixable4}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/0011001010_2,1,1,1,4_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/0011001010_3,1,1,1,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/0101011000_2,3,1,1,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/0101011000_3,3,1,1,2_.pdf}
\caption{The fixable trees with maximum degree at most 3 on 5 vertices.}
\label{fig:fixable5tree}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/000100010001011_1,1,1,1,3,4_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/000110010001010_2,1,1,1,3,4_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/000110010001010_3,1,1,1,2,4_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/000110010001010_3,1,1,1,3,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/001010011001000_2,3,1,1,1,4_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/001010011001000_3,3,1,1,1,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/001010011010000_2,3,1,1,3,3_.pdf}
\includegraphics[scale=0.25]{Superabundance/MaxDegree3Trees/001010011010000_3,2,1,1,3,3_.pdf}
\caption{The fixable trees with maximum degree at most 3 on 6 vertices.}
\label{fig:fixable6tree}
\end{figure}
\begin{conjecture}
\label{OneHighConjecture}
A tree $T$ is $f$-fixable if $f(v) = d_T(v)$ for at most one non-leaf $v$ of $T$.
\end{conjecture}
Note that by Lemma \ref{KTVImpliesSuperabundant}, this would imply, under the
same degree constraints, that Tashkinov trees are elementary (that is, each color
is absent from at most one vertex of a Tashkinov tree). Can this be
proved in the simpler case when the tree is a path? For paths of length 4, this was
done by Kostochka and Stiebitz; in the next section we conjecture a generalization
of their result to stars with one edge subdivided. One nice feature of the
superabundance formulation is that since there is no need for an ordering as
with Tashkinov trees, we can easily formulate results about graphs with cycles.
The following is the most general statement we might hope is true.
\input{goldberg}
\begin{conjecture}[false]
A multigraph $G$ is $f$-fixable if $f(v) > d_G(v)$ for all $v \in V(G)$.
\label{MoonshineConjecture}
\end{conjecture}
This conjecture is very strong and implies Goldberg's conjecture
(see \cite[p.~155ff]{stiebitz2012graph}), which is one of the major open
problems in edge-coloring.
Unfortunately, Conjecture~\ref{MoonshineConjecture} is false. We can make
counterexamples on a $5$-cycle as in Figure \ref{fig:goldbergce}. We don't yet
have an intuitive explanation for why these are counterexamples, but in each
case the computer has found a strategy preventing the 5-cycle from being colored.
One interesting consequence of these counterexamples
is that $C_5$ is not $f$-fixable for \emph{any}
function $f$. (This contrasts with the case of $(f,k)$-fixable, since now
increasing $f$ need not increase $\psi_L(G)$.)
Let $L$ denote the lists in Figure~\ref{fig:goldbergce}(b).
Given a function $f$, begin with $L$ and add as many ``singletons'' (colors that appear in
only one list) as needed to the lists so that each list is large enough; call
these lists $L'$. Since $L$ is superabundant, clearly so is $L'$.
Now we play an ``extra'' game against the computer using $L$, and we use the
computer's strategy in this extra game to inform our strategy for $L'$ in the
real game. If the colors chosen to swap in the real game are both in $L$, then
we play with the computer's strategy for the extra game. Any color $c\in
L'\setminus L$ at a vertex $v$ in the extra game is a singleton. Since only
three colors of $L$ are non-singletons, and $|L(v)|=4$, each vertex in the
extra game will always have a singleton in its list. Thus, we treat $c$ like
some other singleton $c'$ at $v$ in the extra game, and use the computer's
strategy from the extra game as if $c'$ had been chosen instead.
We have more questions than answers about Conjectures~\ref{OneHighConjecture}
and~\ref{MoonshineConjecture}. For instance,
what if in Conjecture~\ref{MoonshineConjecture} we only look at
superabundant list assignments arising from an edge coloring of $G-e$, for some
edge $e$? The resulting conjecture is also stronger than Goldberg's
conjecture, and at present we have no counterexamples.
\subsection{Stars with one edge subdivided}
The following conjecture would generalize the ``Short Kierstead Paths'' of Kostochka
and Stiebitz (see \cite[p.~46ff]{stiebitz2012graph}). Parts (a) and (b) are
special cases of Conjecture \ref{OneHighConjecture}. We have a rough draft of
a proof for part (a) and we suspect parts (b) and (c) will be similar,
but our draft is long and detailed, and we are still hoping to find a clean
proof, like that for stars.
Recall that the fan equation implies the reducibility for $k$-edge-coloring of
stars with certain specified degrees of the leaves. In this section we show
that the truth of Conjecture \ref{StarWithOneEdgeSubdivided} would imply a
similar equation for stars with one edge subdivided.
\begin{conjecture}
\label{StarWithOneEdgeSubdivided}
Let $G$ be a star with one edge subdivided, where $r$ is the center of the
star, $t$ the vertex at distance two from $r$, and $s$ the intervening vertex.
If $L$ is superabundant and $|L(v)| \ge d_G(v)$ for all
$v \in V(G)$, then $G$ is $L$-fixable if at least one of the following holds:
\begin{enumerate}
\item[(a)] $|L(r)| > d_G(r)$; or
\item[(b)] $|L(s)| > d_G(s)$; or
\item[(c)] $\psi_L(G) > \size{G}$.
\end{enumerate}
\end{conjecture}
For a graph $H$ and $v \in V(H)$, let $E_H(v)$ be the set of edges incident to
$v$ in $H$. Let $Q$ be an edge-critical graph with $\chi'(Q) = \Delta(Q) + 1$
and $G \subseteq Q$. For a $\Delta(Q)$-edge-coloring $\pi$ of $Q - E(G)$, let
$L_\pi(v) = \irange{\Delta(Q)} - \pi\parens{E_Q(v) - E(G)}$ for all $v \in
V(G)$. Graph $G$ is a \emph{$\Psi$-subgraph} of $Q$ if there is a
$\Delta(Q)$-edge-coloring $\pi$ of $Q - E(G)$ such that each $H \subsetneq G$
is abundant for $L_\pi$. Let $E_{L}(H) = \setb{\alpha}{\operatorname{pot}(L)}{\card{H_{L, \alpha}}
\text{ is even}}$ and $O_{L}(H) = \setb{\alpha}{\operatorname{pot}(L)}{\card{H_{L,
\alpha}} \text{ is odd}}$. Note that $\card{\operatorname{pot}(L)} = \card{E_{L}(G)} + \card{O_{L}(G)}$.
\begin{lem}\label{LowPsiGivesManyOddColors}
Let $Q$ be an edge-critical graph with $\chi'(Q) = \Delta(Q) + 1$. If $G
\subseteq Q$ and $\pi$ is a $\Delta(Q)$-edge-coloring of $Q - E(G)$ such that
$\size{G}\ge \psi_L(G) $, then $\card{O_{L_\pi}(G)} \ge \sum_{v \in V(G)}
\Delta(Q) - d_Q(v)$. Furthermore, if $\size{G} > \psi_L(G)$, then
$\card{O_{L_\pi}(G)} > \sum_{v \in V(G)} \Delta(Q) - d_Q(v)$.
\end{lem}
\begin{proof}
The proof is a straightforward counting argument. For fixed degrees and list
sizes, as $\card{O_L(G)}$ gets larger, $\psi_L(G)$ gets smaller (half as
quickly). Here are the details. Let $L = L_\pi$.
Since $\size{G} \ge \psi_L(G)$, we have
\begin{align}
\label{edge-crit1}
\size{G} \ge
\sum_{\alpha \in \operatorname{pot}(L)} \floor{\frac{\card{G_{L, \alpha}}}{2}} =
\sum_{\alpha \in \operatorname{pot}(L)} \frac{\card{G_{L, \alpha}}}{2} - \sum_{\alpha \in
O_L(G)} \frac12
.\end{align}
\noindent
Also,
\begin{align}
\sum_{\alpha \in \operatorname{pot}(L)} \frac{\card{G_{L, \alpha}}}{2}
&= \sum_{v \in V(G)} \frac{\Delta(Q) - (d_Q(v)-d_G(v))}{2} \notag\\
&= \sum_{v \in V(G)} \frac{d_G(v)}{2} + \sum_{v \in V(G)} \frac{\Delta(Q) -
d_Q(v)}{2}\notag\\
&= \size{G} + \sum_{v \in V(G)} \frac{\Delta(Q) - d_Q(v)}{2}.
\label{edge-crit2}
\end{align}
\noindent Now we solve for $\size{G}-
\sum_{\alpha \in \operatorname{pot}(L)} \frac{\card{G_{L, \alpha}}}{2}$ in
\eqref{edge-crit2} and bound the same quantity from below using
\eqref{edge-crit1}; simplifying gives \eqref{edge-crit3}.
\begin{align}
\card{O_L(G)} \ge \sum_{v \in V(G)} \Delta(Q) - d_Q(v).
\label{edge-crit3}
\end{align}
Finally, if the inequality in \eqref{edge-crit1} is strict, then the inequality
in \eqref{edge-crit3} is also strict.
\end{proof}
Again, let $Q$ be an edge-critical graph with $\chi'(Q) = \Delta(Q) + 1$ and $G
\subseteq Q$; recall that $G$ is a \emph{$\Psi$-subgraph} of $Q$ if there is a
$\Delta(Q)$-edge-coloring $\pi$ of $Q - E(G)$ such that each $H \subsetneq G$
is abundant. The point of this definition is that if $G$ is a $\Psi$-subgraph (and
Conjecture~\ref{StarWithOneEdgeSubdivided}(c) holds), then $\size{G}\ge
\psi_{L_\pi}(G)$, so we can apply Lemma~\ref{LowPsiGivesManyOddColors}.
\begin{conjecture}\label{AdjacencyPrecursor}
Let $Q$ be an edge-critical graph with $\chi'(Q) = \Delta(Q) + 1$.
Let $H$ be a star with one edge subdivided; let $r$ be the center of the star,
$t$ the vertex at distance two from $r$, and $s$ the intervening vertex.
If $H$ is a $\Psi$-subgraph of $Q$,
then there exists $X \subseteq N(r)$ with $V(H - r - t)
\subseteq X$ such that \[\sum_{v \in X \cup \set{t}} (d_Q(v) + 1 - \Delta(Q))
\ge 0.\]
\noindent Moreover, if $\set{r,s,t}$ does not induce a triangle in $Q$, then
\[\sum_{v \in X \cup \set{t}} (d_Q(v) + 1 - \Delta(Q)) \ge 1.\]
Furthermore, if $d_Q(r)<\Delta(Q)$ or $d_Q(s)<\Delta(Q)$, then both lower
bounds improve by 1.
\end{conjecture}
\begin{proof}[Proof (assuming Conjecture~\ref{StarWithOneEdgeSubdivided}).]
Let $G$ be a maximal $\Psi$-subgraph of $Q$ containing $H$ such that $G$
is a star with one edge subdivided. Let $\pi$ be a coloring of $Q - E(G)$
showing that $G$ is a $\Psi$-subgraph and let $L = L_\pi$.
We first show that $\card{E_{L}(G)} \ge d_Q(r) - d_G(r) - 1$ if $rst$ induces a
triangle; otherwise, $\card{E_{L}(G)} \ge d_Q(r) - d_G(r)$.
Suppose $rst$ does not induce a triangle; for an arbitrary $x \in N_Q(r) - V(G)$,
let $\alpha=\pi(rx)$. Now consider adding $x$ to $G$.
By assumption, every $J\subsetneq G$ is abundant. Further, if $J\subsetneq G$
is abundant, then $J+x$ is also abundant. Thus, we only need to show that $G$
is abundant. If $\alpha \in O_{L}(G)$, then adding $x$ to $G$ makes $G$
abundant, since now $r$ also has $\alpha$ in its list. This gives a larger
$\Psi$-subgraph of the required form, which contradicts the maximality of
$G$. Hence $\alpha \in E_{L}(G)$. Therefore, $\card{E_{L}(G)} \ge d_Q(r) -
d_G(r)$ as desired. If $rst$ induces a triangle, then we lose one off this
bound from the edge $rt$.
By Conjecture \ref{StarWithOneEdgeSubdivided}(c), we have $\psi_L(G) \le
\size{G}$. Hence, by Lemma \ref{LowPsiGivesManyOddColors}, we have
$\card{O_{L}(G)} \ge \sum_{v \in V(G)} \Delta(Q) - d_Q(v)$. If $rst$ does
not induce a triangle, then
\begin{align*}
\Delta(Q) &\ge \card{\operatorname{pot}(L)}\\
&= \card{E_{L}(G)} + \card{O_{L}(G)}\\
&\ge d_Q(r) - d_G(r) + \sum_{v \in V(G)} \Delta(Q) - d_Q(v) \numberthis
\label{strict-ineq}\\
&= \Delta(Q) - d_G(r) + \sum_{v \in V(G - r)} \Delta(Q) - d_Q(v)\\
&= \Delta(Q) + 1 +\sum_{v \in V(G - r)} \Delta(Q) - 1 - d_Q(v).
\end{align*}
Therefore, $\sum_{v \in V(G - r)} \Delta(Q) - 1 - d_Q(v) \le -1$. Negating
gives the desired inequality. If $rst$ induces a triangle, then we lose one
off the bound. Conjecture \ref{StarWithOneEdgeSubdivided}(a,b) gives the final
statement.
\end{proof}
\section{Algorithm Overview}
Here we describe the basic outline of our algorithm to test if a given graph $G$ is $k$-fixable. To test if $G$ is $(L,P)$-fixable for one $L$, we need to generate the two-player game tree.
Doing this for every $L$ would be a lot of work. With memoization, we can cut this down and get a reasonably efficient algorithm, but we can do much better by changing to a bottom-up strategy; that is, we do dynamic programming as follows.
\begin{enumerate}
\item Generate the set $\mathcal{L}$ of all possible list assignments $L$ on $G$ with $\operatorname{pot}(L) \subseteq \irange{k}$.
\item Create a set $\mathcal{W}$ of \emph{won} assignments, consisting of all $L \in \mathcal{L}$ such that $G$ is $L$-colorable.
\item Put $\mathcal{L} \mathrel{\mathop:}= \mathcal{L} \setminus \mathcal{W}$.
\item For each $L \in \mathcal{L}$, check if there are different colors $a,b \in \irange{k}$ such that for every partition
$X_1, \ldots, X_t$ of $S_{L,a,b}$ into sets of size at most two, there exists $J
\subseteq \irange{t}$ so that $L' \in \mathcal{W}$, where $L'$ is formed
from $L$ by swapping $a$ and $b$ in $L(v)$ for every $v \in \bigcup_{i \in J} X_i$. If so, add $L$ to $\mathcal{W}$.
\item If step (4) modified $\mathcal{W}$, go to step (3).
\item $G$ is $k$-fixable if and only if $\mathcal{L} = \emptyset$.
\end{enumerate}
In step (1), we do not really want to generate \emph{all} list assignments, just list assignments up to color permutation. To do this generation, we put an ordering on the set of list assignments and run an algorithm that outputs only the minimal representative of each color-permutation class. All the code lives in the GitHub repository of WebGraphs at \url{https://github.com/landon/WebGraphs}. Since a lot of this code is optimized for speed and not readability, a reference version is currently being built at \url{https://github.com/landon/Playground/tree/master/Fixability}.
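To illustrate step (1), here is a minimal (unoptimized) sketch of generating list assignments up to color permutation by keeping only a canonical minimal representative of each class. The vertex data is abstracted to a tuple of list sizes; this is an illustrative simplification, not how the WebGraphs code is organized.

```python
from itertools import combinations, permutations, product

def assignments_up_to_color_permutation(list_sizes, k):
    """Enumerate list assignments with pot(L) inside {0,...,k-1}, keeping one
    (lexicographically minimal) representative per color-permutation class."""
    colors = range(k)
    perms = list(permutations(colors))

    def canon(assign):
        # minimal representative of the class of `assign` under color permutations
        return min(tuple(tuple(sorted(p[c] for c in lst)) for lst in assign)
                   for p in perms)

    choices = [list(combinations(colors, s)) for s in list_sizes]
    reps = {canon(assign) for assign in product(*choices)}
    return sorted(reps)
```

For example, two vertices with lists of size one from two colors give only two classes (same color or different colors), instead of four raw assignments.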
\section{Conclusion}
Most work on proving sufficient conditions for $k$-edge-colorability relies on
proving that various configurations are reducible. Although these reducibility
proofs have common themes, they often feel ad hoc and are tailored to the
specific theorem being proved. We have introduced the notion of fixability,
which provides a unifying framework for many of these results. It also
naturally leads to a number of conjectures which, if true, would greatly
increase what we can prove about $k$-edge-coloring. The computer has provided
significant experimental evidence for these conjectures (proving many specific
cases), but offers little guidance toward proving them completely.
To conclude, we mention two consequences if
Conjectures~\ref{StarWithOneEdgeSubdivided} and~\ref{OneHighConjecture}
are true.
Vizing~\cite{vizing68unsolved} conjectured that every $\Delta$-critical graph
has average degree
greater than $\Delta-1$. For large $\Delta$, the best lower bound is about
$\frac23\Delta$, due to Woodall~\cite{woodall2007average}.
His proof relies on a new class of reducible configurations, which would be
implied if both $P_5$ is fixable (a very special case of
Conjecture~\ref{OneHighConjecture}) and
Conjecture~\ref{StarWithOneEdgeSubdivided} is true. Another old conjecture of
Vizing~\cite{vizing65chromatic} is that every $n$-vertex $\Delta$-critical
graph has independence number at most $\frac12n$. The best upper bound known is $\frac35n$,
also due to Woodall~\cite{woodall2011independence}. The proof is relatively
short, but relies on the same reducible configurations just mentioned. Thus, proving
Conjecture~\ref{AdjacencyPrecursor} and
Conjecture~\ref{StarWithOneEdgeSubdivided} (even just for $P_5$) would put the
best bounds for these two old problems into a much broader context.
\bibliographystyle{amsplain}
\section{Introduction}
\noindent It is generally interesting to study in what ways
information about the geometry of a differentiable manifold $\Sigma$
can be extracted as algebraic properties of the algebra of smooth
functions $C^\infty(\Sigma)$. In case $\Sigma$ is a Poisson manifold,
this algebra has a second (apart from the commutative multiplication
of functions) bilinear (non-associative) algebra structure, the
Poisson bracket. The bracket is compatible with the commutative
multiplication via Leibniz rule, thus carrying the basic properties of
a derivation.
On a surface $\Sigma$, with local coordinates $u^1$ and $u^2$, one can
define
\begin{align*}
\{f,h\} = \frac{1}{\sqrt{g}}\parac{\frac{\d f}{\d u^1}\frac{\d h}{\d u^2}-
\frac{\d h}{\d u^1}\frac{\d f}{\d u^2}},
\end{align*}
where $g$ is the determinant of the induced metric tensor, and one
readily checks that $\paraa{C^\infty(\Sigma),\{\cdot,\cdot\}}$ is a
Poisson algebra. Having only this very particular combination of
derivatives at hand, it seems at first unlikely that one can encode
geometric information of $\Sigma$ in Poisson algebraic
expressions. Surprisingly, it turns out that many differential
geometric quantities can be computed in a completely algebraic way,
cp. Theorem \ref{thm:ricciCurvature} and Theorem
\ref{thm:CMNambu}. For instance, the Gaussian curvature of a surface
embedded in $\mathbb{R}^m$ can be written as
\begin{align}\label{eq:introKpb}
K=\sum_{j,k,l=1}^m\parac{\frac{1}{2}\{\{x^j,x^k\},x^k\}\{\{x^j,x^l\},x^l\}
-\frac{1}{4}\{\{x^j,x^k\},x^l\}\{\{x^j,x^k\},x^l\}},
\end{align}
where $x^i(u^1,u^2)$ are the embedding coordinates of the surface.
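As a sanity check, (\ref{eq:introKpb}) can be evaluated numerically on the unit sphere, where $\sqrt{g} = \sin u^1$ for the standard parametrization; the result should be $K=1$ at every point. The parametrization, the hard-coded $\sqrt{g}$, and the finite-difference scheme below are illustrative choices, not part of the paper.

```python
import math

H = 1e-4  # finite-difference step

def coord(i):
    # unit sphere: x(u1,u2) = (sin u1 cos u2, sin u1 sin u2, cos u1)
    fs = [lambda a, b: math.sin(a) * math.cos(b),
          lambda a, b: math.sin(a) * math.sin(b),
          lambda a, b: math.cos(a)]
    return fs[i]

def bracket(f, h):
    """{f,h} = (1/sqrt(g)) (d1 f d2 h - d1 h d2 f), with sqrt(g) = sin(u1)."""
    def out(u1, u2):
        f1 = (f(u1 + H, u2) - f(u1 - H, u2)) / (2 * H)
        f2 = (f(u1, u2 + H) - f(u1, u2 - H)) / (2 * H)
        h1 = (h(u1 + H, u2) - h(u1 - H, u2)) / (2 * H)
        h2 = (h(u1, u2 + H) - h(u1, u2 - H)) / (2 * H)
        return (f1 * h2 - h1 * f2) / math.sin(u1)
    return out

def gauss_curvature(u1, u2):
    # sum over j,k,l of (1/2){{x^j,x^k},x^k}{{x^j,x^l},x^l} - (1/4){{x^j,x^k},x^l}^2
    xs = [coord(i) for i in range(3)]
    total = 0.0
    for j in range(3):
        for k in range(3):
            for l in range(3):
                t1 = bracket(bracket(xs[j], xs[k]), xs[k])(u1, u2)
                t2 = bracket(bracket(xs[j], xs[l]), xs[l])(u1, u2)
                t3 = bracket(bracket(xs[j], xs[k]), xs[l])(u1, u2)
                total += 0.5 * t1 * t2 - 0.25 * t3 * t3
    return total
```

Away from the coordinate singularities $u^1\in\{0,\pi\}$, the nested central differences reproduce $K=1$ to well within the truncation error of the scheme.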
For a general $n$-dimensional manifold $\Sigma$, we are led to
consider Nambu brackets \cite{n:generalizedmech}, i.e. multi-linear
alternating $n$-ary maps from $C^\infty(\Sigma)\times\cdots\times
C^\infty(\Sigma)$ to $C^\infty(\Sigma)$, defined by
\begin{align*}
\{f_1,\ldots,f_n\} = \frac{1}{\sqrt{g}}\varepsilon^{a_1\cdots a_n}\paraa{\d_{a_1}f_1}\cdots\paraa{\d_{a_n} f_n}.
\end{align*}
In the case of surfaces, our initial motivation for studying the
problem came from matrix regularizations of Membrane Theory. Classical
solutions in Membrane Theory are 3-manifolds with vanishing mean
curvature in $\mathbb{R}^{1,d}$. Considering one of the coordinates to be
time, the problem can also be formulated in a dynamical way as
surfaces sweeping out volumes of vanishing mean curvature. In this
context, a regularization was introduced replacing the infinite
dimensional function algebra on the surface by an algebra of $N\times
N$ matrices \cite{h:phdthesis}. If we let $T_\alpha$ be a linear map from
smooth functions to hermitian $N_\alpha\times N_\alpha$ matrices, the main
properties of the regularization are
\begin{align*}
&\lim_{\alpha\to\infty}\norm{T_\alpha(f)T_\alpha(g)-T_\alpha(fg)}=0,\\
&\lim_{\alpha\to\infty}\norm{\frac{1}{i\ha}[T_\alpha(f),T_\alpha(h)]-T_\alpha(\{f,h\})}=0,
\end{align*}
where $\ha$ is a real valued function tending to zero as
$N_\alpha\to\infty$
(see Section \ref{sec:matrixRegularizations} for details), and
therefore it is natural to regularize the system by replacing
(commutative) multiplication of functions by (non-commutative)
multiplication of matrices and Poisson brackets of functions by
commutators of matrices.
Although we may very well consider $T_\alpha(\frac{\d f}{\d u^1})$, its
relation to $T_\alpha(f)$ is in general not simple. However, the particular
combination of derivatives in $T_\alpha(\{f,h\})$ is expressed in terms of
a commutator of $T_\alpha(f)$ and $T_\alpha(h)$. In the context of Membrane
Theory, it is desirable to have geometrical quantities in a form that
can easily be regularized, which is the case for any expression
constructed out of multiplications and Poisson brackets. For
instance, solving the equations of motion for the regularized membrane
gives sequences of matrices that correspond to the embedding
coordinates of the surface. Since the set of solutions contains
regularizations of surfaces of arbitrary topology, one would like to
be able to compute the genus corresponding to particular
solutions. The regularized form of (\ref{eq:introKpb}) provides a way
of resolving this problem.
The paper is organized as follows: In Section \ref{sec:preliminaries}
we introduce the relevant notation by recalling some basic facts about
submanifolds. In Section \ref{sec:nambuPoissonFormulation} we
formulate several basic differential geometric objects in terms of
Nambu brackets, and in Section \ref{sec:normalVectors} we provide a
construction of a set of orthonormal basis vectors of the normal
space. Section \ref{sec:CodazziMainardi} is devoted to the study of
the Codazzi-Mainardi equations and how one can rewrite them in terms
of Nambu brackets. In Section \ref{sec:surfaces} we study the
particular case of surfaces, for which many of the introduced formulas
and concepts are particularly nice and in which case one can construct
the complex structure in terms of Poisson brackets.
In the second part of the paper, starting with Section
\ref{sec:matrixRegularizations}, we study the implications of our
results for matrix regularizations of compact surfaces. In particular,
a discrete version of the Gauss-Bonnet theorem is derived in Section
\ref{sec:discreteGB} and a proof that the discrete Gauss curvature
bounds the eigenvalues of the discrete Laplacian is found in Section
\ref{sec:laplaceBound}.
\section{Preliminaries}\label{sec:preliminaries}
\noindent To introduce the relevant notations, we shall recall some
basic facts about submanifolds, in particular Gauss' and Weingarten's
equations (see
e.g. \cite{kn:foundationsDiffGeometryI,kn:foundationsDiffGeometryII}
for details). For $n\geq 2$, let $\Sigma$ be a $n$-dimensional manifold embedded in a
Riemannian manifold $M$ with $\dim M=n+p\equiv m$. Local coordinates on $M$ will be denoted by
$x^1,\ldots,x^m$, local coordinates on $\Sigma$ by $u^1,\ldots,u^n$,
and we regard $x^1,\ldots,x^m$ as being functions of $u^1,\ldots,u^n$
providing the embedding of $\Sigma$ in $M$. The metric tensor on $M$
is denoted by $\,\bar{\!g}_{ij}$ and the induced metric on $\Sigma$ by
$g_{ab}$; indices $i,j,k,l,n$ run from $1$ to $m$, indices
$a,b,c,d,p,q$ run from $1$ to $n$ and indices $A,B,C,D$ run from $1$
to $p$. Furthermore, the covariant derivative and the Christoffel
symbols in $M$ will be denoted by $\bar{\nabla}$ and $\bar{\Gamma}^{i}_{jk}$
respectively.
The tangent space $T\Sigma$ is regarded as a subspace of the tangent
space $TM$ and at each point of $\Sigma$ one can choose
$e_a=(\d_ax^i)\d_i$ as basis vectors in $T\Sigma$, and in this basis
we define $g_{ab}=\,\bar{\!g}(e_a,e_b)$. Moreover, we choose a set of normal
vectors $N_A$, for $A=1,\ldots,p$, such that
$\,\bar{\!g}(N_A,N_B)=\delta_{AB}$ and $\,\bar{\!g}(N_A,e_a)=0$.
The formulas of Gauss and Weingarten split the covariant derivative in
$M$ into tangential and normal components as
\begin{align}
&\bar{\nabla}_X Y = \nabla_X Y + \alpha(X,Y)\label{eq:GaussFormula}\\
&\bar{\nabla}_XN_A = -W_A(X) + D_XN_A\label{eq:WeingartenFormula}
\end{align}
where $X,Y\in T\Sigma$ and $\nabla_X Y$, $W_A(X)\in T\Sigma$ and
$\alpha(X,Y)$, $D_XN_A\in T\Sigma^\perp$. By expanding $\alpha(X,Y)$ in
the basis $\{N_1,\ldots,N_p\}$ one can write (\ref{eq:GaussFormula}) as
\begin{align}
&\bar{\nabla}_X Y = \nabla_X Y + \sum_{A=1}^ph_A(X,Y)N_A,\label{eq:GaussFormulah}
\end{align}
and we set $h_{A,ab} = h_A(e_a,e_b)$. From the above equations one derives the relation
\begin{align}
h_{A,ab} &= -\,\bar{\!g}\paraa{e_a,\bar{\nabla}_b N_A},
\end{align}
as well as Weingarten's equation
\begin{align}
h_A(X,Y) = \,\bar{\!g}\paraa{W_A(X),Y},
\end{align}
which implies that $(W_A)^a_b = g^{ac}h_{A,cb}$, where $g^{ab}$
denotes the inverse of $g_{ab}$.
From formulas (\ref{eq:GaussFormula}) and (\ref{eq:WeingartenFormula})
one obtains Gauss' equation, i.e. an expression for the curvature $R$
of $\Sigma$ in terms of the curvature $\bar{R}$ of $M$, as
\begin{equation}\label{eq:GaussEquation}
\begin{split}
g\paraa{R(X,Y)Z,V} =
\,\bar{\!g}&\paraa{\bar{R}(X,Y)Z,V}-\,\bar{\!g}\paraa{\alpha(X,Z),\alpha(Y,V)}\\
&+\,\bar{\!g}\paraa{\alpha(Y,Z),\alpha(X,V)},
\end{split}
\end{equation}
where $X,Y,Z,V\in T\Sigma$. As we shall later on consider the Ricci curvature,
let us note that (\ref{eq:GaussEquation}) implies
\begin{align}
\mathcal{R}^p_b = g^{pd}g^{ac}\,\bar{\!g}\paraa{\bar{R}(e_c,e_d)e_b,e_a}
+\sum_{A=1}^p\bracketb{(W_A)^a_a(W_A)_b^p-(W_A^2)_b^p}
\end{align}
where $\mathcal{R}$ is the Ricci curvature of $\Sigma$ considered as a map
$T\Sigma\to T\Sigma$. We also recall the mean curvature vector,
defined as
\begin{align}
H = \frac{1}{n}\sum_{A=1}^p\paraa{\operatorname{tr} W_A}N_A.
\end{align}
\section{Nambu bracket formulation}\label{sec:nambuPoissonFormulation}
\noindent In this section we will prove that one can express many
aspects of the differential geometry of an embedded manifold $\Sigma$
in terms of a Nambu bracket introduced on $C^\infty(\Sigma)$.
Let $\rho:\Sigma\to\mathbb{R}$ be an arbitrary non-vanishing
density and define
\begin{align}\label{eq:PbracketDef}
\{f_1,\ldots,f_n\} = \frac{1}{\rho}\varepsilon^{a_1\cdots a_n}\paraa{\d_{a_1}f_1}\cdots\paraa{\d_{a_n} f_n}
\end{align}
for all $f_1,\ldots,f_n\in C^\infty(\Sigma)$, where $\varepsilon^{a_1\cdots
a_n}$ is the totally antisymmetric Levi-Civita symbol with
$\varepsilon^{12\cdots n}=1$. Together with this multi-linear map, $\Sigma$ is
a Nambu-Poisson manifold.
The above Nambu bracket arises from the choice of a
volume form on $\Sigma$. Namely, let $\omega$ be a volume form and
define $\{f_1,\ldots,f_n\}$ via the formula
\begin{align}\label{eq:nambuVolumeForm}
\{f_1,\ldots,f_n\}\omega = df_1\wedge\cdots\wedge df_n.
\end{align}
Writing $\omega=\rho\, du^1\wedge\cdots\wedge du^n$ in local
coordinates, and evaluating both sides of (\ref{eq:nambuVolumeForm})
on the tangent vectors $\d_{u^1},\ldots,\d_{u^n}$ gives
\begin{align*}
\{f_1,\ldots,f_n\} = \frac{1}{\rho}\det\parac{\frac{\d(f_1,\ldots,f_n)}{\d(u^1,\ldots,u^n)}}
=\frac{1}{\rho}\varepsilon^{a_1\cdots a_n}\paraa{\d_{a_1}f_1}\cdots\paraa{\d_{a_n}f_n}.
\end{align*}
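In other words, the bracket is simply the Jacobian determinant of $(f_1,\ldots,f_n)$ with respect to $(u^1,\ldots,u^n)$, scaled by $1/\rho$. The following small numerical illustration checks this for linear test functions, where the bracket reduces to a constant determinant; the specific functions and finite-difference scheme are ad hoc choices for this sketch.

```python
import math
from itertools import permutations

def parity(p):
    # (-1)^(number of inversions) of the permutation tuple p
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def nambu_bracket(fs, rho, u, h=1e-5):
    """{f_1,...,f_n}(u) = det[ df_i/du^a ] / rho(u), via central differences."""
    n = len(u)
    def partial(f, a):
        up, um = list(u), list(u)
        up[a] += h
        um[a] -= h
        return (f(up) - f(um)) / (2 * h)
    J = [[partial(f, a) for a in range(n)] for f in fs]
    det = sum(parity(p) * math.prod(J[i][p[i]] for i in range(n))
              for p in permutations(range(n)))
    return det / rho(u)
```

For $f_i(u) = \sum_a A_{ia} u^a$ and $\rho\equiv 1$ the bracket equals $\det A$ at every point, which the finite differences reproduce essentially exactly.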
To define the objects which we will consider, it is convenient to
introduce some notation. Let
$x^1(u^1,\ldots,u^n),\ldots,x^m(u^1,\ldots,u^n)$ be the embedding
coordinates of $\Sigma$ into $M$, and let $n_A^i(u^1,\ldots,u^n)$ denote the
components of the orthonormal vectors $N_A$, normal to $T\Sigma$. Using
multi-indices $I=i_1\cdots i_{n-1}$ and $\vec{a}=a_1\cdots a_{n-1}$ we define
\begin{align*}
&\{f,\vec{x}^I\} \equiv \{f,x^{i_1},x^{i_2},\ldots,x^{i_{n-1}}\}\\
&\{f,\vec{n}_A^I\} \equiv \{f,n_A^{i_1},n_A^{i_2},\ldots,n_A^{i_{n-1}}\},
\end{align*}
together with
\begin{align*}
&\d_{\vec{a}}\vec{x}^I \equiv \paraa{\d_{a_1}x^{i_1}}\paraa{\d_{a_2}x^{i_2}}\cdots\paraa{\d_{a_{n-1}}x^{i_{n-1}}}\\
&\paraa{\bar{\nabla}_{\vec{a}}\vec{n}_A}^I \equiv \paraa{\bar{\nabla}_{a_1}N_A}^{i_1}\paraa{\bar{\nabla}_{a_2}N_A}^{i_2}\cdots\paraa{\bar{\nabla}_{a_{n-1}}N_A}^{i_{n-1}}\\
&\,\bar{\!g}_{IJ} \equiv \,\bar{\!g}_{i_1j_1}\,\bar{\!g}_{i_2j_2}\cdots\,\bar{\!g}_{i_{n-1}j_{n-1}}\\
&g_{\vec{a}\vec{c}}\equiv g_{a_1c_1}g_{a_2c_2}\cdots g_{a_{n-1}c_{n-1}}.
\end{align*}
We now introduce the main objects of our study
\begin{align}
\P^{iJ} &= \frac{1}{\sqrt{(n-1)!}}\{x^i,\vec{x}^J\} = \frac{1}{\sqrt{(n-1)!}}\frac{\varepsilon^{a\vec{a}}}{\rho}\paraa{\d_ax^i}\paraa{\d_{\vec{a}}\vec{x}^J}\\
\S_A^{iJ}&=\frac{(-1)^n}{\sqrt{(n-1)!}}\frac{\varepsilon^{a\vec{a}}}{\rho}\paraa{\d_ax^i}\paraa{\bar{\nabla}_{\vec{a}}\vec{n}_A}^J\\
\mathcal{T}_A^{Ij} &=\frac{(-1)^n}{\sqrt{(n-1)!}}\frac{\varepsilon^{\vec{a} a}}{\rho}\paraa{\d_{\vec{a}}\vec{x}^I}\paraa{\bar{\nabla}_aN_A}^j
\end{align}
from which we construct
\begin{align}
\paraa{\P^2}^{ik} &= \P^{iI}\P^{kJ}\,\bar{\!g}_{IJ}\\
\paraa{\mathcal{B}_A}^{ik} &= \P^{iI}(\mathcal{T}_A)^{Jk}\,\bar{\!g}_{IJ}\\
\paraa{\S_A\mathcal{T}_A}^{ik} &=(\S_A)^{iI}(\mathcal{T}_A)^{Jk}\,\bar{\!g}_{IJ}.
\end{align}
By lowering the second index with the metric $\,\bar{\!g}$, we will also
consider $\P^2$, $\mathcal{B}_A$ and $\S_A\mathcal{T}_A$ as maps $TM\to TM$. Note that
both $\S_A$ and $\mathcal{T}_A$ can be written in terms of Nambu brackets, e.g.
\begin{align*}
\mathcal{T}_A^{Ij} = \frac{(-1)^n}{\sqrt{(n-1)!}}\bracketb{\{\vec{x}^I,n_A^j\}+\{\vec{x}^I,x^k\}\bar{\Gamma}^j_{kl}n_A^l}.
\end{align*}
Let us now investigate some properties of the maps defined above. As it will appear frequently, we define
\begin{align}
\gamma = \frac{\sqrt{g}}{\rho}.
\end{align}
It is useful to note that (cp. Proposition \ref{prop:TrPBST})
\begin{align*}
\gamma^2 = \sum_{i,j,I,J=1}^m\frac{1}{n!}\,\bar{\!g}_{ij}\{x^i,\vec{x}^I\}\,\bar{\!g}_{IJ}\{x^j,\vec{x}^J\},
\end{align*}
and to recall the cofactor expansion of the inverse of a matrix:
\begin{lemma}
Let $g^{ab}$ denote the inverse of $g_{ab}$ and $g=\det(g_{ab})$. Then
\begin{align}
gg^{ba} = \frac{1}{(n-1)!}\varepsilon^{aa_1\cdots a_{n-1}}\varepsilon^{bb_1\cdots b_{n-1}}g_{a_1b_1}g_{a_2b_2}\cdots g_{a_{n-1}b_{n-1}}.
\end{align}
\end{lemma}
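The cofactor identity is easy to test numerically; the following sketch checks it for a random symmetric positive-definite $3\times 3$ matrix (the particular construction of $g$ is an arbitrary test case, not from the paper).

```python
import random
from itertools import permutations

def eps(p):
    # Levi-Civita symbol: 0 on repeated indices, otherwise the permutation sign
    if len(set(p)) != len(p):
        return 0
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

n = 3
random.seed(7)
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# g = A^T A + I is symmetric positive definite, hence invertible
g = [[sum(A[k][i] * A[k][j] for k in range(n)) + (1.0 if i == j else 0.0)
      for j in range(n)] for i in range(n)]

# det g via the Leibniz formula
det = sum(eps(p) * g[0][p[0]] * g[1][p[1]] * g[2][p[2]]
          for p in permutations(range(n)))

# Lemma: g g^{ba} = (1/(n-1)!) eps^{a a1 a2} eps^{b b1 b2} g_{a1 b1} g_{a2 b2}
ginv = [[0.0] * n for _ in range(n)]
for a in range(n):
    for b in range(n):
        s = 0.0
        for a1 in range(n):
            for a2 in range(n):
                for b1 in range(n):
                    for b2 in range(n):
                        s += eps((a, a1, a2)) * eps((b, b1, b2)) * g[a1][b1] * g[a2][b2]
        ginv[b][a] = s / (2 * det)   # divide by (n-1)! = 2 and by det g

# ginv should now be the inverse of g
prod = [[sum(g[i][c] * ginv[c][j] for c in range(n)) for j in range(n)]
        for i in range(n)]
```

Multiplying $g$ by the matrix built from the right-hand side recovers the identity matrix to machine precision.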
\begin{proposition}\label{prop:PBSTproperties}
For $X\in TM$ it holds that
\begin{align}
&\P^2(X) = \gamma^2\,\bar{\!g}(X,e_a)g^{ab}e_b\label{eq:P2X}\\
&\mathcal{B}_A(X) = -\gamma^2\,\bar{\!g}(X,\bar{\nabla}_a N_A)g^{ab}e_b\label{eq:BAX}\\
&\S_A\mathcal{T}_A(X) = \gamma^2(\det W_A)\,\bar{\!g}(X,\bar{\nabla}_aN_A)h_A^{ab}e_b,
\end{align}
and for $Y\in T\Sigma$ one obtains
\begin{align}
&\P^2(Y) = \gamma^2Y\label{eq:P2Y}\\
&\mathcal{B}_A(Y) = \gamma^2W_A(Y)\\
&\S_A\mathcal{T}_A(Y) = -\gamma^2(\det W_A)Y.
\end{align}
\end{proposition}
\begin{proof}
Let us provide a proof for equations (\ref{eq:P2X}) and (\ref{eq:P2Y}); the other
formulas can be proved analogously.
\begin{align*}
\P^2(X) &= \P^{iI}\P^{jJ}\,\bar{\!g}_{IJ}\,\bar{\!g}_{jk}X^k\d_i =
\frac{\varepsilon^{a\vec{a}}\varepsilon^{c\vec{c}}}{\rho^2(n-1)!}\paraa{\d_ax^i}\paraa{\d_{\vec{a}}\vec{x}^I}\paraa{\d_cx^j}\paraa{\d_{\vec{c}}\vec{x}^J}\,\bar{\!g}_{IJ}\,\bar{\!g}_{jk}X^k\d_i\\
&= \frac{\varepsilon^{a\vec{a}}\varepsilon^{c\vec{c}}}{\rho^2(n-1)!}g_{a_1c_1}\cdots g_{a_{n-1}c_{n-1}}\paraa{\d_ax^i}\paraa{\d_cx^j}\,\bar{\!g}_{jk}X^k\d_i\\
&= \gamma^2g^{ac}\paraa{\d_ax^i}\paraa{\d_cx^j}\,\bar{\!g}_{jk}X^k\d_i
= \gamma^2\,\bar{\!g}(X,e_c)g^{ca}e_a.
\end{align*}
Choosing a tangent vector $Y=Y^ce_c$ gives immediately that $\P^2(Y)=\gamma^2Y$.
\end{proof}
\noindent For a map $\mathcal{B}:TM\to TM$ we denote the trace by $\operatorname{Tr}\mathcal{B}\equiv
\mathcal{B}^i_i$ and for a map $W:T\Sigma\to T\Sigma$ we denote the trace by $\operatorname{tr} W\equiv W^a_a$.
\begin{proposition}\label{prop:TrPBST}
It holds that
\begin{align}
\frac{1}{n}\operatorname{Tr}\P^2 &= \gamma^2\\
\operatorname{Tr}\mathcal{B}_A &= \gamma^2\operatorname{tr} W_A\\
\frac{1}{n}\operatorname{Tr}\S_A\mathcal{T}_A &= -\gamma^2(\det W_A).
\end{align}
\end{proposition}
\begin{remark}
For a hypersurface (with normal $N=n^i\d_i$) in $\mathbb{R}^{n+1}$,
\begin{align}
\det W &= (-1)^n\frac{\{x^{i_1},\ldots,x^{i_n}\}\{n_{i_1},\ldots,n_{i_n}\}}
{\{x^{k_1},\ldots,x^{k_n}\}\{x_{k_1},\ldots,x_{k_n}\}}\\
&=\frac{1}{\gamma n!}\varepsilon^{i_1\cdots i_n i}\{n_{i_1},\ldots,n_{i_n}\}n_i,\notag
\end{align}
the signed ratio of infinitesimal volumes swept out on $S^n$ (by
$N$), resp $\Sigma$ (which can easily be obtained directly by simply
writing out the determinant of the second fundamental form,
$h=\det(-\d_ax^i\d_bn_i)$); in fact, all the symmetric functions of
the principal curvatures are related to ratios of products of two
Nambu brackets (cp. the paragraph after Proposition
\ref{prop:HyperSurfaceGaussian}). Namely, the $k$'th symmetric
curvature is given by
\begin{align}
(-1)^k\frac{\{x^{i_1},\ldots,x^{i_n}\}
\{n_{i_{1}},\ldots,n_{i_k},x_{i_{k+1}},\ldots,x_{i_n}\}}
{\{x^{k_1},\ldots,x^{k_n}\}\{x_{k_1},\ldots,x_{k_n}\}}.
\end{align}
\end{remark}
\noindent A direct consequence of Propositions
\ref{prop:PBSTproperties} and \ref{prop:TrPBST} is that one can write the projection onto
$T\Sigma$, as well as the mean curvature vector, in terms of Nambu brackets.
\begin{proposition}
The map
\begin{align}
\gamma^{-2}\P^2=\frac{n}{\operatorname{Tr}\P^2}\P^2:TM\to T\Sigma
\end{align}
is the orthogonal projection of $TM$ onto $T\Sigma$. Furthermore,
the mean curvature vector can be written as
\begin{align*}
H = \frac{1}{\operatorname{Tr}\P^2}\sum_{A=1}^p\paraa{\operatorname{Tr}\mathcal{B}_A}N_A.
\end{align*}
\end{proposition}
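For a concrete check of the projection property, take the unit sphere in $\mathbb{R}^3$ with $\rho=\sqrt{g}$ (so $\gamma=1$): then $\P^2$ should fix tangent vectors, annihilate the normal, and have trace $n=2$. The parametrization and finite-difference scheme below are illustrative choices for this sketch.

```python
import math

H = 1e-6  # finite-difference step

def emb(u1, u2):
    # unit sphere embedding; here rho = sqrt(g) = sin(u1), so gamma = 1
    return (math.sin(u1) * math.cos(u2),
            math.sin(u1) * math.sin(u2),
            math.cos(u1))

def tangents(u1, u2):
    # coordinate tangent vectors e_1 = d x/d u1, e_2 = d x/d u2
    e1 = [(emb(u1 + H, u2)[i] - emb(u1 - H, u2)[i]) / (2 * H) for i in range(3)]
    e2 = [(emb(u1, u2 + H)[i] - emb(u1, u2 - H)[i]) / (2 * H) for i in range(3)]
    return e1, e2

def P2(u1, u2):
    # P^{ij} = {x^i, x^j}; (P^2)^{ik} = sum_j P^{ij} P^{kj} (Euclidean metric)
    e1, e2 = tangents(u1, u2)
    s = math.sin(u1)
    P = [[(e1[i] * e2[j] - e1[j] * e2[i]) / s for j in range(3)] for i in range(3)]
    return [[sum(P[i][j] * P[k][j] for j in range(3)) for k in range(3)]
            for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]
```

On the unit sphere the outward normal at a point is the point itself, so one can verify $\P^2 N = 0$ and $\P^2 e_1 = e_1$ directly.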
\noindent Proposition \ref{prop:PBSTproperties} tells us that $\gamma^{-2}\mathcal{B}_A$
equals the Weingarten map $W_A$, when restricted to $T\Sigma$. What is
the geometrical meaning of $\mathcal{B}_A$ acting on a normal vector? It turns
out that the maps $\mathcal{B}_A$ also provide information about the covariant
derivative in the normal space. If one defines $(D_X)_{AB}$ through
\begin{align*}
D_XN_A = \sum_{B=1}^p(D_X)_{AB}N_B
\end{align*}
for $X\in T\Sigma$, then one can prove the following relation to
the maps $\mathcal{B}_A$.
\begin{proposition}
For $X\in T\Sigma$ it holds that
\begin{align}
\,\bar{\!g}\paraa{\mathcal{B}_B(N_A),X}=\gamma^2\paraa{D_X}_{AB}.
\end{align}
\end{proposition}
\begin{proof}
For a vector $X=X^ae_a$, it follows from Weingarten's formula (\ref{eq:WeingartenFormula}) that
\begin{align*}
(D_X)_{AB} = \,\bar{\!g}\paraa{\bar{\nabla}_X N_A,N_B}.
\end{align*}
On the other hand, with the formula from Proposition
\ref{prop:PBSTproperties}, one computes
\begin{align*}
\,\bar{\!g}\paraa{\mathcal{B}_B(N_A),X} &= -\gamma^2\,\bar{\!g}\paraa{N_A,\bar{\nabla}_aN_B}g^{ab}g_{bc}X^c
= -\gamma^2\,\bar{\!g}\paraa{N_A,\bar{\nabla}_XN_B}\\
&= -\gamma^2(D_X)_{BA}=\gamma^2(D_X)_{AB}.
\end{align*}
The last equality is due to the fact that $D$ is a covariant
derivative, which implies that
$0=D_X\,\bar{\!g}(N_A,N_B)=\,\bar{\!g}(D_XN_A,N_B)+\,\bar{\!g}(N_A,D_XN_B)$.
\end{proof}
\noindent Thus, one can write Weingarten's formula as
\begin{align}
\gamma^2\bar{\nabla}_XN_A = -\mathcal{B}_A(X)+\sum_{B=1}^p\,\bar{\!g}\paraa{\mathcal{B}_B(N_A),X}N_B,
\end{align}
and since $h_A(X,Y) = \gamma^{-2}\,\bar{\!g}(\mathcal{B}_A(X),Y)$ Gauss' formula becomes
\begin{align}\label{eq:GaussformulaB}
\bar{\nabla}_XY = \nabla_XY+\frac{1}{\gamma^2}\sum_{A=1}^p\,\bar{\!g}\paraa{\mathcal{B}_A(X),Y}N_A.
\end{align}
Let us now turn our attention to the curvature of $\Sigma$. Since
Nambu brackets involve sums over all vectors in the basis of
$T\Sigma$, one cannot expect to find expressions for quantities that
involve a choice of tangent plane, e.g. the sectional curvature
(unless $\Sigma$ is a surface). However, it turns out that one can
write the Ricci curvature as an expression involving Nambu brackets.
\begin{theorem}\label{thm:ricciCurvature}
Let $\mathcal{R}$ be the Ricci curvature of $\Sigma$, considered as a map
$T\Sigma\to T\Sigma$, and let $R$ denote the scalar curvature. For
any $X\in T\Sigma$ it holds that
\begin{align}
&\mathcal{R}(X) = \frac{1}{\gamma^4}\paraa{\P^2}^{ik}\paraa{\P^2}^{lm}\bar{R}_{ijkl}X^j\d_m
+\frac{1}{\gamma^4}\sum_{A=1}^p\bracketb{(\operatorname{Tr}\mathcal{B}_A)\mathcal{B}_A(X)-\mathcal{B}_A^2(X)}\\
&R = \frac{1}{\gamma^4}\paraa{\P^2}^{ik}\paraa{\P^2}^{jl}\bar{R}_{ijkl}
+\frac{1}{\gamma^4}\sum_{A=1}^p\bracketb{(\operatorname{Tr}\mathcal{B}_A)^2-\operatorname{Tr}\mathcal{B}_A^2},
\end{align}
where $\bar{R}$ is the curvature tensor of $M$.
\end{theorem}
\begin{proof}
The Ricci curvature of $\Sigma$ is defined as
\begin{align*}
\mathcal{R}^p_b = g^{ac}g^{pd}g\paraa{R(e_c,e_d)e_b,e_a}
\end{align*}
and from Gauss' equation (\ref{eq:GaussEquation}) it follows that
\begin{align*}
\mathcal{R}^p_b = g^{pd}g^{ac}\,\bar{\!g}\paraa{\bar{R}(e_c,e_d)e_b,e_a}
+ g^{ac}g^{pd}\sum_{A=1}^p\parab{h_{A,bd}h_{A,ac}-h_{A,bc}h_{A,ad}}.
\end{align*}
Since $(W_A)^a_b = g^{ac}h_{A,cb}$ one obtains
\begin{align*}
\mathcal{R}_b^p = g^{ac}g^{pd}\,\bar{\!g}\paraa{\bar{R}(e_c,e_d)e_b,e_a}
+ \sum_{A=1}^p\bracketb{\paraa{\operatorname{tr} W_A}(W_A)^p_b-(W_A^2)_b^p},
\end{align*}
and as $\mathcal{B}_A(X)=\gamma^2 W_A(X)$ for any $X\in T\Sigma$, and
$\operatorname{Tr}\mathcal{B}_A=\gamma^2\operatorname{tr} W_A$, one has
\begin{equation*}
\mathcal{R}(X) = g^{ac}g^{pd}\,\bar{\!g}\paraa{\bar{R}(e_c,e_d)e_b,e_a}X^be_p
+ \frac{1}{\gamma^4}\sum_{A=1}^p\bracketb{\paraa{\operatorname{Tr} \mathcal{B}_A}\mathcal{B}_A(X)-\mathcal{B}_A^2(X)}.
\end{equation*}
By expanding the first term as
\begin{align*}
g^{ac}&g^{pd}X^b\bar{R}_{ijkl}\paraa{\d_ax^i}\paraa{\d_bx^j}\paraa{\d_cx^k}\paraa{\d_dx^l}\paraa{\d_px^m}\d_m\\
&=\frac{1}{g^2(n-1)!^2}\varepsilon^{p\vec{p}}\varepsilon^{d\vec{d}}g_{\vec{p}\vec{d}}\,\varepsilon^{a\vec{a}}\varepsilon^{c\vec{c}}g_{\vec{a}\vec{c}}
X^b\bar{R}_{ijkl}\paraa{\d_ax^i}\paraa{\d_bx^j}\paraa{\d_cx^k}\paraa{\d_dx^l}\paraa{\d_px^m}\d_m\\
&=\ldots=\frac{1}{\gamma^4}\paraa{\P^2}^{ik}\paraa{\P^2}^{lm}\bar{R}_{ijkl}X^j\d_m
\end{align*}
one obtains the desired result.
\end{proof}
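As a concrete illustration (a hedged numerical addition, not part of the original proof), the scalar-curvature formula can be spot-checked on the unit sphere in $\mathbb{R}^3$, where $\bar{R}=0$ and $\gamma=1$, so that it reduces to $R=(\operatorname{Tr}\mathcal{B})^2-\operatorname{Tr}\mathcal{B}^2$; the surface expression $(\mathcal{B})^i_k=\sum_j\{x^i,x^j\}\{x^j,n^k\}$ assumed below is taken from the surface formulas later in the text:

```python
import sympy as sp

# Spot check (assumptions in the lead-in): for a hypersurface in flat R^3 the
# ambient curvature vanishes, so the theorem's scalar-curvature formula
# reduces to R = gamma^{-4} [(Tr B)^2 - Tr B^2], with gamma = 1 on the sphere.
th, ph = sp.symbols('theta phi')
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def pb(f, g):
    return sp.simplify((sp.diff(f, th)*sp.diff(g, ph)
                        - sp.diff(f, ph)*sp.diff(g, th))/sp.sin(th))

n = x  # outward unit normal
B = [[sum(pb(x[i], x[j])*pb(x[j], n[k]) for j in range(3))
      for k in range(3)] for i in range(3)]
trB = sum(B[i][i] for i in range(3))
trB2 = sum(B[i][k]*B[k][i] for i in range(3) for k in range(3))
R_val = float((trB**2 - trB2).subs({th: sp.Rational(7, 10),
                                    ph: sp.Rational(13, 10)}))
print(R_val)  # approximately 2.0, the scalar curvature of the unit sphere
```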
\subsection{Construction of normal vectors}\label{sec:normalVectors}
\noindent The results in Section \ref{sec:nambuPoissonFormulation}
involve Nambu brackets of the embedding coordinates and the
components of the normal vectors. In this section we will prove that
one can replace sums over normal vectors by sums of Nambu
brackets of the embedding coordinates, thus providing expressions that
do not involve normal vectors.
It will be convenient to introduce yet another multi-index; namely, we
let $\alpha=i_1\ldots i_{p-1}$ consist of $p-1$ indices all taking
values between $1$ and $m$.
\begin{proposition}\label{prop:normalvectors}
For any value of the multi-index $\alpha$, the vector
\begin{align}\label{eq:Zdef}
Z_{\alpha}=\frac{1}{\gamma\paraa{n!\sqrt{(p-1)!}}}\,\bar{\!g}^{ij}\varepsilon_{jk_1\cdots k_n\alpha}\{x^{k_1},\ldots,x^{k_n}\}\d_i,
\end{align}
where $\varepsilon_{i_1\cdots i_m}$ is the Levi-Civita tensor of $M$, is
normal to $T\Sigma$, i.e. $\,\bar{\!g}(Z_{\alpha},e_a)=0$ for
$a=1,2,\ldots,n$. For hypersurfaces ($p=1$), equation (\ref{eq:Zdef})
defines a unique normal vector of unit length.
\end{proposition}
\begin{proof}
To prove that $Z_\alpha$ are normal vectors, one simply notes that
\begin{align*}
\gamma\paraa{n!\sqrt{(p-1)!}}\,\bar{\!g}(Z_\alpha,e_a) &=
\frac{1}{\rho}\varepsilon^{a_1\cdots a_n}\varepsilon_{jk_1\cdots k_n\alpha}\paraa{\d_ax^j}\paraa{\d_{a_1}x^{k_1}}\cdots\paraa{\d_{a_n}x^{k_n}}=0,
\end{align*}
since the $n+1$ indices $a,a_1,\ldots,a_n$ can only take on $n$
different values and since
$(\d_ax^j)(\d_{a_1}x^{k_1})\cdots(\d_{a_n}x^{k_n})$ is contracted
with $\varepsilon_{jk_1\cdots k_n\alpha}$ which is completely antisymmetric
in $j,k_1,\ldots,k_n$. Let us now calculate $|Z|^2\equiv\,\bar{\!g}(Z,Z)$
when $p=1$. Using that\footnote{In our convention, no combinatorial factor is included in the anti-symmetrization; for instance,
$\delta^{[i}_{[k}\delta^{j]}_{l]}=\delta^i_k\delta^j_l-\delta^i_l\delta^j_k$.}
\begin{align*}
\varepsilon_{ik_1\cdots k_n}\varepsilon^{il_1\cdots l_n} = \delta^{[l_1}_{[k_1}\cdots\delta^{l_n]}_{k_n]}
\end{align*}
one obtains
\begin{align*}
|Z|^2 &= \frac{1}{\gamma^2n!^2}\,\bar{\!g}_{l_1l_1'}\cdots\,\bar{\!g}_{l_nl_n'}
\varepsilon_{ik_1\cdots k_n}\varepsilon^{il_1\cdots l_n}
\{x^{k_1},\ldots,x^{k_n}\}\{x^{l_1'},\ldots,x^{l_n'}\}\\
&=\frac{1}{\gamma^2n!^2}\,\bar{\!g}_{l_1l_1'}\cdots\,\bar{\!g}_{l_nl_n'}
\delta^{[l_1}_{[k_1}\cdots\delta^{l_n]}_{k_n]}
\{x^{k_1},\ldots,x^{k_n}\}\{x^{l_1'},\ldots,x^{l_n'}\}\\
&=\frac{1}{\gamma^2n!}\{x^{l_1},\ldots,x^{l_n}\}
\,\bar{\!g}_{l_1l_1'}\cdots\,\bar{\!g}_{l_nl_n'}\{x^{l_1'},\ldots,x^{l_n'}\}\\
&=\frac{1}{\gamma^2n!}(n-1)!\operatorname{Tr}\P^2 = \frac{1}{\gamma^2n!}(n-1)!n\gamma^2=1,
\end{align*}
which proves that $Z$ has unit length.
\end{proof}
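As an illustration (a numerical sketch added for clarity, not part of the original proof), one can verify the hypersurface case of the proposition on the unit sphere in $\mathbb{R}^3$, using the bracket $\{f,g\}=\frac{1}{\sin\theta}\varepsilon^{ab}(\d_af)(\d_bg)$ in spherical coordinates, for which $\gamma=1$:

```python
import sympy as sp

# Sketch (assumptions in the lead-in): for a hypersurface with n = 2 the
# vector Z^i = (1/(gamma*n!)) eps_{ijk} {x^j,x^k} should be the unit normal.
# On the unit sphere, {x^i,x^j} = eps_{ijk} x^k, so one expects Z = x.
th, ph = sp.symbols('theta phi')
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def pb(f, g):
    return sp.simplify((sp.diff(f, th)*sp.diff(g, ph)
                        - sp.diff(f, ph)*sp.diff(g, th))/sp.sin(th))

Z = [sp.simplify(sp.Rational(1, 2)*sum(sp.LeviCivita(i, j, k)*pb(x[j], x[k])
                                       for j in range(3) for k in range(3)))
     for i in range(3)]
# Z should be orthogonal to both coordinate tangent vectors and of unit length
tang_th = sp.simplify(sum(Z[i]*sp.diff(x[i], th) for i in range(3)))
tang_ph = sp.simplify(sum(Z[i]*sp.diff(x[i], ph) for i in range(3)))
length2 = sp.simplify(sum(Z[i]**2 for i in range(3)))
print(tang_th, tang_ph, length2)  # 0 0 1
```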
\noindent If the codimension is greater than one, $Z_\alpha$ defines
more than $p$ non-zero normal vectors that do not in general fulfill any
orthonormality conditions. In principle, one can now apply the
Gram-Schmidt orthonormalization procedure to obtain a set of $p$
orthonormal vectors. However, it turns out that one can use $Z_\alpha$
to construct another set of normal vectors, avoiding explicit use of
the Gram-Schmidt procedure; namely, introduce
\begin{align*}
\mathcal{Z}_{\alpha}^{\beta} = \,\bar{\!g}(Z_{\alpha},Z^\beta),
\end{align*}
and consider it as a matrix over multi-indices $\alpha$ and
$\beta$. As such, the matrix is symmetric (with respect to
$\,\bar{\!g}_{\alpha\beta}\equiv \,\bar{\!g}_{i_1j_1}\cdots\,\bar{\!g}_{i_{p-1}j_{p-1}}$) and
we let ${E_\alpha}^\beta,\mu_\alpha$ denote orthonormal eigenvectors
(i.e. $\,\bar{\!g}_{\delta\sigma}E_\alpha^\delta E_\beta^\sigma=\delta_{\alpha\beta}$)
and their corresponding
eigenvalues. Using these eigenvectors to define
\begin{align*}
\hat{N}_\alpha = E^{\beta}_\alpha Z_\beta
\end{align*}
one finds that
$\,\bar{\!g}(\hat{N}_\alpha,\hat{N}_\beta)=\mu_\alpha\delta_{\alpha\beta}$, i.e. the
vectors are orthogonal.
\begin{proposition}\label{prop:Zprojection}
For $\mathcal{Z}_\alpha^\beta=\,\bar{\!g}_{ij}Z^i_\alpha Z^{j\beta}$ it holds that
\begin{align}
\mathcal{Z}_\alpha^\delta\mathcal{Z}_\delta^\beta = \mathcal{Z}_\alpha^\beta\label{eq:Zidempot}\\
\mathcal{Z}_\alpha^\alpha = p.\label{eq:Ztrace}
\end{align}
\end{proposition}
\begin{proof}
Both statements can be easily proved once one has the following result
\begin{align}
Z^i_\alpha Z^{j\alpha} = \,\bar{\!g}^{ij}-\frac{1}{\gamma^2}\paraa{\P^2}^{ij},\label{eq:ZZproj}
\end{align}
which is obtained by using that
\begin{align*}
\varepsilon_{kk_1\cdots k_n\alpha}\varepsilon^{ll_1\cdots l_n\alpha} =
(p-1)!\parab{\delta^{[l}_{[k}\delta^{l_1}_{k_1}\cdots\delta^{l_n]}_{k_n]}}.
\end{align*}
Formula
(\ref{eq:Ztrace}) is now immediate, and to obtain
(\ref{eq:Zidempot}) one notes that since $Z_\alpha\in T\Sigma^\perp$
it holds that $\P^2(Z_\alpha)=0$, due to the fact that $\P^2$ is
proportional to the projection onto $T\Sigma$.
\end{proof}
\noindent From Proposition \ref{prop:Zprojection} it follows that an
eigenvalue of $\mathcal{Z}$ is either 0 or 1, which implies that $\hat{N}_\alpha=0$
or $\,\bar{\!g}(\hat{N}_\alpha,\hat{N}_\alpha)=1$, and that the number of non-zero
vectors is $\operatorname{Tr}\mathcal{Z} = \mathcal{Z}_\alpha^\alpha=p$. Hence, the $p$ non-zero
vectors among $\hat{N}_\alpha$ constitute an orthonormal basis of
$T\Sigma^\perp$, and it follows that one can replace any sum over
normal vectors $N_A$ by a sum over the multi-index of $\hat{N}_\alpha$.
As an example, let us work out some explicit expressions in the case
when $M=\mathbb{R}^m$.
\begin{proposition}
Assume that $M=\mathbb{R}^m$ and that all repeated indices are summed
over. For any $X\in T\Sigma$ one has
\begin{align}
&\sum_{A=1}^p\paraa{\operatorname{Tr}\mathcal{B}_A}\mathcal{B}_A(X)^i = \frac{1}{(n-1)!^2}\Pi^{jk}
\{\{x^j,\vec{x}^J\},\vec{x}^J\}\{x^i,\vec{x}^I\}\{X^k,\vec{x}^I\}\label{eq:trBABAx}\\
%
&\sum_{A=1}^p\mathcal{B}_A^2(X)^i=\frac{1}{(n-1)!^2}\Pi^{jk}
\{x^i,\vec{x}^I\}\{\{x^j,\vec{x}^J\}\{X^k,\vec{x}^J\},\vec{x}^I\}\\
&\sum_{A=1}^p\paraa{\operatorname{Tr}\mathcal{B}_A}N_A^i=
%
\frac{(-1)^n}{(n-1)!}\Pi^{ik}\{\{x^k,\vec{x}^I\},\vec{x}^I\}
\end{align}
where
\begin{align}
\Pi^{ij} = \delta^{ij}-\frac{1}{\gamma^2}\paraa{\P^2}^{ij}
\end{align}
is the projection onto the normal space.
\end{proposition}
\begin{proof}
Let us prove formula (\ref{eq:trBABAx}); the other formulas can be
proven analogously. One rewrites
\begin{align*}
\paraa{\operatorname{Tr}\mathcal{B}_A}\mathcal{B}_A(X)^i &=
\frac{1}{(n-1)!^2}\{x^j,\vec{x}^J\}\{\vec{x}^J,n^j_A\}\{x^i,\vec{x}^I\}\{\vec{x}^I,n_A^k\}X^k\\
&= \frac{1}{(n-1)!^2}n_A^jn_A^k\{\vec{x}^J,\{x^j,\vec{x}^J\}\}
\{x^i,\vec{x}^I\}\{\vec{x}^I,X^k\}
\end{align*}
since $n_A^j\{x^j,\vec{x}^J\}=n_A^kX^k=0$, due to the fact that $N_A$
is a normal vector. By replacing $n_A^jn_A^k$ with
$\hat{N}_\alpha^j\hat{N}_\alpha^k$ and using the fact that
\begin{align*}
\hat{N}_\alpha^i\hat{N}_{\alpha}^j = \delta^{ij}-\frac{1}{\gamma^2}\paraa{\P^2}^{ij}
\end{align*}
one obtains
\begin{equation*}
\paraa{\operatorname{Tr}\mathcal{B}_A}\mathcal{B}_A(X)^i = \frac{1}{(n-1)!^2}
\Pi^{jk}\{\{x^j,\vec{x}^J\},\vec{x}^J\}\{x^i,\vec{x}^I\}\{X^k,\vec{x}^I\}.\qedhere
\end{equation*}
\end{proof}
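The third formula can likewise be spot-checked numerically (an illustrative addition with assumptions made explicit): on the unit sphere in $\mathbb{R}^3$ one has $n=2$, $p=1$ and $\gamma=1$, and we assume the surface form $(\P^2)^{ik}=\sum_j\{x^i,x^j\}\{x^k,x^j\}$, which is consistent with $\operatorname{Tr}\P^2=n\gamma^2$ there:

```python
import sympy as sp

# Spot check (assumptions in the lead-in): for n = 2, p = 1 on the unit
# sphere the third formula reads (Tr B) n^i = Pi^{ik} sum_l {{x^k,x^l},x^l},
# with Pi^{ik} = delta^{ik} - (P^2)^{ik}.
th, ph = sp.symbols('theta phi')
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def pb(f, g):
    return sp.simplify((sp.diff(f, th)*sp.diff(g, ph)
                        - sp.diff(f, ph)*sp.diff(g, th))/sp.sin(th))

n = x
Pi = [[sp.KroneckerDelta(i, k)
       - sum(pb(x[i], x[j])*pb(x[k], x[j]) for j in range(3))
       for k in range(3)] for i in range(3)]
trB = sum(pb(x[i], x[j])*pb(x[j], n[i]) for i in range(3) for j in range(3))
lhs = [trB*n[i] for i in range(3)]
rhs = [sum(Pi[i][k]*pb(pb(x[k], x[l]), x[l])
           for k in range(3) for l in range(3)) for i in range(3)]
pt = {th: sp.Rational(4, 5), ph: sp.Rational(1, 2)}
res = [float((lhs[i] - rhs[i]).subs(pt)) for i in range(3)]
print(res)  # the residuals vanish up to rounding
```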
\noindent For hypersurfaces in $\mathbb{R}^{n+1}$, the ``Theorema Egregium'' states
that the determinant of the Weingarten map, i.e.\ the ``Gaussian
curvature'', is an invariant (up to a sign when $\Sigma$ is
odd-dimensional) under isometries (this is in fact also true for
hypersurfaces in a manifold of constant sectional curvature). From
Proposition \ref{prop:TrPBST} we know that one can express $\det W_A$
in terms of $\operatorname{Tr}\S_A\mathcal{T}_A$.
\begin{proposition}\label{prop:HyperSurfaceGaussian}
Let $\Sigma$ be a hypersurface in $\mathbb{R}^{n+1}$ and let $W$ denote
the Weingarten map with respect to the unit normal
\begin{align*}
Z = \frac{1}{\gamma n!}\,\bar{\!g}^{ij}\varepsilon_{jkK}\{x^k,\vec{x}^K\}\d_i.
\end{align*}
Then one can write $\det W$ as
\begin{align*}
\det W = -\frac{1}{\gamma(\gamma n!)^{n+1}}&\sum
\varepsilon_{ilL}\varepsilon_{j_1k_1K_1}\cdots\varepsilon_{j_{n-1}k_{n-1}K_{n-1}}\\
&\times\{x^i,\{x^{k_1},\vec{x}^{K_1}\},\ldots,\{x^{k_{n-1}},\vec{x}^{K_{n-1}}\}\}
\{\vec{x}^J,\{x^l,\vec{x}^L\}\}.
\end{align*}
\end{proposition}
\noindent In fact, one can express all the elementary symmetric
functions of the principal curvatures in terms of Nambu brackets as
follows: the elementary symmetric functions of the eigenvalues
of $W$ are given (up to a sign) as the coefficients of the polynomial
$\det(W-t\mid)$. Since $\mathcal{B}(X)=0$ for all $X\in T\Sigma^\perp$ and
$\mathcal{B}(X)=\gamma^2W(X)$ for all $X\in T\Sigma$, it holds that
\begin{align*}
-t\det(W-t\mid_n) = \det(\gamma^{-2}\mathcal{B}-t\mid_{n+1})
=\frac{1}{\gamma^{2(n+1)}}\det(\mathcal{B}-t\gamma^2\mid_{n+1})
\end{align*}
which implies that the coefficient of $t^k$ in $\det(W-t\mid)$ is
given by the coefficient of $t^{k+1}$ in
$-\det(\mathcal{B}-t\gamma^2\mid)\gamma^{2(n-k)}$.
\subsection{The Codazzi-Mainardi equations}\label{sec:CodazziMainardi}
\noindent When studying the geometry of embedded manifolds, the Codazzi-Mainardi
equations are very useful. In this section we reformulate these equations
in terms of Nambu brackets.
The Codazzi-Mainardi equations express the normal component
of $\bar{R}(X,Y)Z$ in terms of the second fundamental forms; namely
\begin{align}\label{eq:CMh}
\begin{split}
\,\bar{\!g}\paraa{&\bar{R}(X,Y)Z,N_A} = \paraa{\nabla_Xh_A}(Y,Z) - \paraa{\nabla_Yh_A}(X,Z)\\
&+\sum_{B=1}^p\bracketb{\,\bar{\!g}(D_XN_B,N_A)h_B(Y,Z)-\,\bar{\!g}(D_YN_B,N_A)h_B(X,Z)},
\end{split}
\end{align}
for $X,Y,Z\in T\Sigma$ and $A=1,\ldots,p$. Defining
\begin{align}
\begin{split}
\mathcal{W}_A&(X,Y) = \paraa{\nabla_XW_A}(Y)-\paraa{\nabla_Y W_A}(X)\\
&+\sum_{B=1}^p\bracketb{\,\bar{\!g}(D_XN_B,N_A)W_B(Y)-\,\bar{\!g}(D_YN_B,N_A)W_B(X)}
\end{split}
\end{align}
one can rewrite the Codazzi-Mainardi equations as follows.
\begin{proposition}\label{prop:CMWPi}
Let $\Pi$ denote the projection onto $T\Sigma^\perp$. Then the
Codazzi-Mainardi equations are equivalent to
\begin{align}\label{eq:CMCA}
\mathcal{W}_A(X,Y) = -(\mid-\Pi)\paraa{\bar{R}(X,Y)N_A}
\end{align}
for $X,Y\in T\Sigma$ and $A=1,\ldots,p$.
\end{proposition}
\begin{proof}
Since $h_A(X,Y)=\,\bar{\!g}(W_A(X),Y)$ (by Weingarten's equation) one can
rewrite (\ref{eq:CMh}) as
\begin{align}
\,\bar{\!g}\paraa{\mathcal{W}_A(X,Y),Z} = \,\bar{\!g}\paraa{\bar{R}(X,Y)Z,N_A},
\end{align}
and since $\,\bar{\!g}(\bar{R}(X,Y)Z,N_A) = -\,\bar{\!g}(\bar{R}(X,Y)N_A,Z)$ this becomes
\begin{align}
\,\bar{\!g}\paraa{\mathcal{W}_A(X,Y)+\bar{R}(X,Y)N_A,Z} = 0.
\end{align}
That this holds for all $Z\in T\Sigma$ is equivalent to saying that
\begin{align}
(\mid-\Pi)\paraa{\mathcal{W}_A(X,Y)+\bar{R}(X,Y)N_A} = 0,
\end{align}
from which (\ref{eq:CMCA}) follows since $\mathcal{W}_A(X,Y)\in T\Sigma$.
\end{proof}
\noindent Note that since $\gamma^{-2}\P^2$ is the projection onto
$T\Sigma$ one can write (\ref{eq:CMCA}) as
\begin{align}\label{eq:CMPgamma}
\gamma^2\mathcal{W}_A(X,Y) = -\P^2\paraa{\bar{R}(X,Y)N_A}.
\end{align}
\noindent Since both $W_A$ and $D_X$ can be expressed in terms of
$\mathcal{B}_A$, one obtains the following expression for $\mathcal{W}_A$:
\begin{proposition}
For $X,Y\in T\Sigma$ one has
\begin{align*}
\gamma^2\mathcal{W}_A(X,Y) = &\paraa{\bar{\nabla}_X\mathcal{B}_A}(Y)-\paraa{\bar{\nabla}_Y\mathcal{B}_A}(X)\\
&-\frac{1}{\gamma^2}\bracketb{\paraa{\nabla_X\gamma^2}\mathcal{B}_A(Y)-\paraa{\nabla_Y\gamma^2}\mathcal{B}_A(X)}\\
&+\frac{1}{\gamma^2}\sum_{B=1}^p\bracketb{\,\bar{\!g}\paraa{\mathcal{B}_A(N_B),X}\mathcal{B}_B(Y)-\,\bar{\!g}\paraa{\mathcal{B}_A(N_B),Y}\mathcal{B}_B(X)}.
\end{align*}
\end{proposition}
\noindent As the aim is to express the Codazzi-Mainardi equations in
terms of Nambu brackets, we will introduce maps $\mathcal{C}_A$ that
are defined in terms of $\mathcal{W}_A$ and can be written as expressions
involving Nambu brackets.
\begin{definition}
The maps $\mathcal{C}_A:C^\infty(\Sigma)\times\cdots\times
C^\infty(\Sigma)\to T\Sigma$ are defined as
\begin{align}
\mathcal{C}_A(f_1,\ldots,f_{n-2}) = \frac{1}{2\rho}\varepsilon^{aba_1\cdots a_{n-2}}
\mathcal{W}_A(e_a,e_b)\paraa{\d_{a_1}f_1}\cdots\paraa{\d_{a_{n-2}}f_{n-2}}
\end{align}
for $A=1,\ldots,p$ and $n\geq 3$. When $n=2$, $\mathcal{C}_A$ is defined as
\begin{align*}
\mathcal{C}_A = \frac{1}{2\rho}\varepsilon^{ab}\mathcal{W}_A(e_a,e_b).
\end{align*}
\end{definition}
\begin{proposition}\label{prop:CANPbracket}
Let $\{g_1,g_2\}_f\equiv\{g_1,g_2,f_1,\ldots,f_{n-2}\}$. Then
\begin{align*}
\mathcal{C}_A(f_1,\ldots,&f_{n-2})^i =
\pb{\gamma^{-2}(\mathcal{B}_A)^i_k,x^k}_f
+\frac{1}{\gamma^2}\pb{x^j,x^l}_f\bracketb{\bar{\Gamma}^i_{jk}(\mathcal{B}_A)^k_l-(\mathcal{B}_A)^i_k\bar{\Gamma}^k_{jl}}\\
&-\frac{1}{\gamma^2}\sum_{B=1}^p\bracketb{
\pb{n_A^k,x^l}_f(\mathcal{B}_B)^i_l+\bar{\Gamma}^k_{lj}\pb{x^l,x^m}_fn_A^j(\mathcal{B}_B)^i_m
}(n_B)_k.
\end{align*}
\end{proposition}
\begin{remark}
\noindent In the case where $\Sigma$ is a hypersurface, the expression for $\mathcal{C}\equiv \mathcal{C}_1$ simplifies to
\begin{align*}
\mathcal{C}(f_1,\ldots,f_{n-2})^i =
&\pb{\gamma^{-2}\mathcal{B}^i_k,x^k}_f
+\frac{1}{\gamma^2}\pb{x^j,x^l}_f\bracketb{\bar{\Gamma}^i_{jk}\mathcal{B}^k_l-\mathcal{B}^i_k\bar{\Gamma}^k_{jl}},
\end{align*}
since $D_XN=0$.
\end{remark}
\noindent It follows from Proposition \ref{prop:CMWPi} that we can
reformulate the Codazzi-Mainardi equations in terms of $\mathcal{C}_A$:
\begin{theorem}\label{thm:CMNambu}
For all $f_1,\ldots,f_{n-2}\in C^\infty(\Sigma)$ it holds that
\begin{align}\label{eq:CMNambu}
\gamma^2\mathcal{C}_A(f_1,\ldots,f_{n-2}) = (\P^2)^{i}_j\bracketb{\{x^k,\bar{\Gamma}^j_{kj'}\}_f-\pb{x^k,x^l}_f\bar{\Gamma}^m_{lj'}\bar{\Gamma}^j_{km}}n_A^{j'}\d_i,
\end{align}
for $A=1,\ldots,p$, where $\{g_1,g_2\}_f=\{g_1,g_2,f_1,\ldots,f_{n-2}\}$.
\end{theorem}
\begin{proof}
As noted previously, one can write the Codazzi-Mainardi equations as
\begin{align*}
\gamma^2\mathcal{W}_A(X,Y) = -\P^2\paraa{\bar{R}(X,Y)N_A}.
\end{align*}
That the above equation holds for all $X,Y\in T\Sigma$ is equivalent to saying that
\begin{align*}
\gamma^2\frac{1}{2\rho}\varepsilon^{aba_1\cdots a_{n-2}}\mathcal{W}_A(e_a,e_b)
=-\frac{1}{2\rho}\varepsilon^{aba_1\cdots a_{n-2}}\P^2\paraa{\bar{R}(e_a,e_b)N_A}
\end{align*}
for all values of $a_1,\ldots,a_{n-2}\in\{1,\ldots,n\}$; furthermore, this is equivalent to
\begin{align*}
\gamma^2\mathcal{C}_A(f_1,\ldots,f_{n-2}) =
-\frac{1}{2\rho}\varepsilon^{aba_1\cdots a_{n-2}}\P^2\paraa{\bar{R}(e_a,e_b)N_A}
(\d_{a_1}f_1)\cdots(\d_{a_{n-2}}f_{n-2})
\end{align*}
for all $f_1,\ldots,f_{n-2}\in C^\infty(\Sigma)$. It is now straightforward to show that
\begin{align*}
-\frac{1}{2\rho}\varepsilon^{aba_1\cdots a_{n-2}}&\paraa{\bar{R}(e_a,e_b)N_A}^i(\d_{a_1}f_1)\cdots(\d_{a_{n-2}}f_{n-2})\\
&=\parab{\{x^k,\bar{\Gamma}^i_{kj}\}_f-\pb{x^k,x^l}_f\bar{\Gamma}^m_{lj}\bar{\Gamma}^i_{km}}n_A^{j},
\end{align*}
which proves the statement.
\end{proof}
\noindent If $M$ is a space of constant curvature (in which case $\,\bar{\!g}(\bar{R}(X,Y)Z,N_A)=0$), then Theorem \ref{thm:CMNambu} states that
\begin{align}
\mathcal{C}_A(f_1,\ldots,f_{n-2}) = 0
\end{align}
for all $f_1,\ldots,f_{n-2}\in C^\infty(\Sigma)$. Furthermore, if $M=\mathbb{R}^m$, then (\ref{eq:CMNambu}) becomes
\begin{align}
\gamma^2\pb{\gamma^{-2}(\mathcal{B}_A)^i_k,x^k}_f-\sum_{B=1}^p\bracketb{
\pb{n_A^k,x^l}_f(\mathcal{B}_B)^i_l}(n_B)_k = 0.
\end{align}
\subsection{Covariant derivatives}\label{sec:covariantDerivatives}
Equation (\ref{eq:GaussformulaB}) tells us that knowing $\bar{\nabla}_XY$,
for $X,Y\in T\Sigma$, one can compute $\nabla_XY$ through the formula
\begin{align*}
\nabla_XY = \bar{\nabla}_XY - \frac{1}{\gamma^2}\sum_{A=1}^p\,\bar{\!g}\paraa{\mathcal{B}_A(X),Y}N_A,
\end{align*}
which requires explicit knowledge about the normal vectors. Are there
other quantities involving $\nabla$ that can be computed solely in
terms of the embedding coordinates? We will now show that the two derivations
\begin{align}
&D^I(u)\equiv\frac{1}{\gamma\sqrt{(n-1)!}}\{u,\vec{x}^I\}\\
&\mathcal{D}^i(u)\equiv\,\bar{\!g}_{IJ}D^I(x^i)D^J(u),
\end{align}
can be considered as analogues of covariant derivatives on
$\Sigma$. Their indices are lowered by the ambient metric
$\,\bar{\!g}_{ij}$. Let us start by showing that several standard formulas
involving covariant derivatives with contracted indices also hold for
our newly defined derivations.
\begin{proposition}\label{prop:covderivFormulas}
For $u,v\in C^\infty(\Sigma)$ it holds that
\begin{align}
\nabla u &= \mathcal{D}^i(u)\d_i=D_I(u)D^I(x^i)\d_i\\
g\paraa{\nabla u,\nabla v} &= \mathcal{D}_i(u)\mathcal{D}^i(v)=D_I(u)D^I(v)\\
\Delta(u)&=\mathcal{D}_i\mathcal{D}^i(u)=D_ID^I(u)\\
|\nabla^2u|^2&=\mathcal{D}_i\mathcal{D}^j(u)\mathcal{D}_j\mathcal{D}^i(u)=D_ID^J(u)D_JD^I(u)\label{eq:nablaSquSq}
\end{align}
\end{proposition}
\begin{proof}
The most convenient way of proving the above identities is to work
in a coordinate system where $u^1,\ldots,u^n$ are normal
coordinates. In particular, this implies that $\Gamma^a_{bc}=0$,
which is equivalent to $\,\bar{\!g}_{ij}(\d_ax^i)\d^2_{bc}x^j=0$. Let us now
prove formula (\ref{eq:nablaSquSq}) for the operators $D^I$.
We first note that in normal coordinates one obtains
\begin{align*}
|\nabla^2u|^2\equiv\paraa{\nabla_a\nabla_bu}\paraa{\nabla_c\nabla_d u}g^{ac}g^{bd}
=g^{ac}g^{bd}\paraa{\d^2_{ab}u}\paraa{\d^2_{cd}u}.
\end{align*}
We now compute
\begin{align*}
&D_ID^J(u)D_JD^I(u) =
\frac{1}{\gamma^2(n-1)!^2}\{\gamma^{-1}\{u,\vec{x}^J\},\vec{x}^K\}\,\bar{\!g}_{KI}
\{\gamma^{-1}\{u,\vec{x}^I\},\vec{x}^L\}\,\bar{\!g}_{LJ}\\
&= \frac{1}{g^2(n-1)!^2}\varepsilon^{a\vec{a}}\d_a\paraa{\varepsilon^{p\vec{p}}(\d_pu)(\d_{\vec{p}}\vec{x}^J)}\paraa{\d_{\vec{a}}\vec{x}^K}\,\bar{\!g}_{KI}
\varepsilon^{c\vec{c}}\d_c\paraa{\varepsilon^{q\vec{q}}(\d_qu)(\d_{\vec{q}}\vec{x}^I)}\paraa{\d_{\vec{c}}\vec{x}^L}\,\bar{\!g}_{LJ}
\end{align*}
The terms involving $\d_a\d_{\vec{p}}\vec{x}^J$ and $\d_c\d_{\vec{q}}\vec{x}^I$
vanish since they appear in combinations such as
$(\d_a\d_{\vec{p}}\vec{x}^J)(\d_{\vec{c}}\vec{x}^L)\,\bar{\!g}_{LJ}$ which is zero due to
the presence of a normal coordinate system. Thus,
\begin{align*}
D_ID^J(u)D_JD^I(u) &=
\frac{1}{g^2(n-1)!^2}\varepsilon^{a\vec{a}}\varepsilon^{q\vec{q}}g_{\vec{a}\vec{q}}\varepsilon^{p\vec{p}}\varepsilon^{c\vec{c}}g_{\vec{p}\vec{c}}
\paraa{\d^2_{ap}u}\paraa{\d^2_{cq}u}\\
&=g^{aq}g^{pc}\paraa{\d^2_{ap}u}\paraa{\d^2_{cq}u}=|\nabla^2u|^2.
\end{align*}
The other formulas can be proved analogously.
\end{proof}
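To illustrate the proposition (a numerical sketch added here, not from the original text), one can check the gradient and Laplacian identities on the unit sphere, where $\gamma=1$ and $D^I(u)=\{u,x^I\}$, for the test function $u=x^3=\cos\theta$:

```python
import sympy as sp

# Spot check (see the lead-in for assumptions): on the unit sphere the
# derivations reduce to D^I(u) = {u, x^I}, and the proposition gives
#   g(grad u, grad u) = sum_I {u,x^I}^2,   Delta(u) = sum_I {{u,x^I}, x^I}.
th, ph = sp.symbols('theta phi')
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def pb(f, g):
    return sp.simplify((sp.diff(f, th)*sp.diff(g, ph)
                        - sp.diff(f, ph)*sp.diff(g, th))/sp.sin(th))

u = sp.cos(th)  # the ambient coordinate x^3 restricted to the sphere
grad2 = sp.simplify(sum(pb(u, x[i])**2 for i in range(3)))
lap = sp.simplify(sum(pb(pb(u, x[i]), x[i]) for i in range(3)))
# Compare with direct computations in the round metric g = diag(1, sin^2):
grad2_direct = sp.simplify(sp.diff(u, th)**2 + sp.diff(u, ph)**2/sp.sin(th)**2)
lap_direct = sp.simplify((sp.diff(sp.sin(th)*sp.diff(u, th), th)
                          + sp.diff(sp.diff(u, ph)/sp.sin(th), ph))/sp.sin(th))
print(grad2, lap)  # sin(theta)**2 and -2*cos(theta): x^3 is an eigenfunction
```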
\noindent By definition, the curvature tensor of $\Sigma$ arises when
one commutes two covariant derivatives. In light of Theorem
\ref{thm:ricciCurvature}, one may ask if there is a similar Nambu
bracket relation which gives rise to the Ricci curvature. A particular
example that introduces curvature is the following:
\begin{equation}\label{eq:curvatureEq}
(\nabla^au)\nabla_a\nabla_b\nabla^bu=(\nabla^au)\nabla_b\nabla_a\nabla^bu
-g(\mathcal{R}(\nabla u),\nabla u).
\end{equation}
\noindent Since $(\nabla^au)\nabla_a\nabla_b\nabla^bu=g(\nabla
u,\nabla\Delta u)$, it follows from Proposition
\ref{prop:covderivFormulas} that one can write it as
\begin{equation}
(\nabla^au)\nabla_a\nabla_b\nabla^bu=
\mathcal{D}_i(u)\mathcal{D}^i\mathcal{D}_j\mathcal{D}^j(u) = D_I(u)D^ID_JD^J(u),
\end{equation}
and the term in (\ref{eq:curvatureEq}) involving the Ricci curvature
is written in terms of Nambu brackets through Theorem
\ref{thm:ricciCurvature}. Using the relation
\begin{equation}
\Delta\paraa{|\nabla u|^2} = 2\paraa{\nabla^au}\nabla^b\nabla_a\nabla_b u
+2|\nabla^2u|^2,
\end{equation}
and (\ref{eq:nablaSquSq}) one obtains
\begin{align*}
\paraa{\nabla^au}\nabla^b\nabla_a\nabla_b u &= \frac{1}{2}\mathcal{D}_i\mathcal{D}^i\paraa{\mathcal{D}_j(u)\mathcal{D}^j(u)}
-\mathcal{D}_i\mathcal{D}^j(u)\mathcal{D}_j\mathcal{D}^i(u)\\
&=\mathcal{D}_i(u)\mathcal{D}^j\mathcal{D}_j\mathcal{D}^i(u)+\fatcom{\mathcal{D}_i,\mathcal{D}^j}(u)\mathcal{D}_i\mathcal{D}^j(u),
\end{align*}
where $\fatcom{\mathcal{D}^i,\mathcal{D}^j}$ denotes the commutator with respect to composition of operators.
Thus, we arrive at the following result:
\begin{proposition}\label{prop:covariantDRicci}
Let $\mathcal{R}$ be the Ricci curvature of $\Sigma$ and let $u\in
C^\infty(\Sigma)$. Then it holds that
\begin{align*}
&\mathcal{D}_i(u)\mathcal{D}^i\mathcal{D}_j\mathcal{D}^j(u) = \mathcal{D}_i(u)\mathcal{D}^j\mathcal{D}_j\mathcal{D}^i(u)+\fatcom{\mathcal{D}_i,\mathcal{D}^j}(u)\mathcal{D}_i\mathcal{D}^j(u)
-g(\mathcal{R}(\nabla u),\nabla u)\\
&D_I(u)D^ID_JD^J(u) = D_I(u)D^JD_JD^I(u)+\fatcom{D_I,D^J}(u)D_ID^J(u)
-g(\mathcal{R}(\nabla u),\nabla u).
\end{align*}
\end{proposition}
\noindent Note that it follows from Theorem \ref{thm:ricciCurvature}
that the term $g(\mathcal{R}(\nabla u),\nabla u)$ can be written in
terms of Nambu brackets. If the formulas in Proposition
\ref{prop:covariantDRicci} are integrated, one arrives at expressions
whose index structure closely resembles that of equation
(\ref{eq:curvatureEq}). Namely, by partial integration one obtains
\begin{align*}
\int\parab{D_I(u)D^JD_JD^I(u)+\fatcom{D_I,D^J}(u)D_ID^J(u)}\sqrt{g}
=\int D_I(u)D_JD^ID^J(u)\sqrt{g},
\end{align*}
which implies
\begin{align}
\int D^I(u)D_ID^JD_J(u)\sqrt{g} =
\int\parab{D_I(u)D_JD^ID^J(u)-g(\mathcal{R}(\nabla u),\nabla u)}\sqrt{g}.
\end{align}
Note that since the operators $D^I$ contain a factor of $\gamma^{-1}$,
the integration is actually performed with respect to $\rho$, as
$\gamma^{-1}\sqrt{g}=\rho$.
The derivations $D^I$ and $\mathcal{D}^i$ have indices of the ambient space
$M$; do they exhibit any tensorial properties? The object $\mathcal{D}^i(u)$
transforms as a tensor in the ambient space $M$, i.e.
\begin{align*}
\mathcal{D}_y^i(u) &= \frac{1}{\gamma^2(n-1)!}\{u,\vec{y}^I\}\,\bar{\!g}_{IJ}(y)\{y^i,\vec{y}^J\}\\
&=\frac{1}{\gamma^2(n-1)!}\frac{\d y^i}{\d x^k}\{u,\vec{x}^I\}\,\bar{\!g}_{IJ}(x)\{x^k,\vec{x}^J\}
=\frac{\d y^i}{\d x^k}\mathcal{D}_x^k(u),
\end{align*}
but this does not hold for the next order derivative
$\mathcal{D}^i\mathcal{D}^j(u)$ due to the second derivatives on the embedding
functions. One can however ``covariantize'' this object by adding
extra terms.
\begin{proposition}
Define $\nabla^{ij}$ acting on $u\in C^\infty(\Sigma)$ as
\begin{align}
\nabla^{ij}(u) = \frac{1}{2}\parab{\mathcal{D}^i\mathcal{D}^j(u)+\mathcal{D}^j\mathcal{D}^i(u)
-\mathcal{D}^u\paraa{\mathcal{D}^i(x^j)}},
\end{align}
where
$\mathcal{D}^u(f)=\frac{1}{\gamma^2(n-1)!}\{f,\vec{x}^I\}\,\bar{\!g}_{IJ}\{u,\vec{x}^J\}$. Then
$\nabla^{ij}(u)$ transforms as a tensor in $M$, i.e.
\begin{align*}
\nabla_y^{ij}(u) = \frac{\d y^i}{\d x^k}\frac{\d y^j}{\d x^l}\nabla^{kl}_x(u),
\end{align*}
and for all $X,Y\in T\Sigma$ it holds that
\begin{align*}
\nabla_{ij}(u)X^iY^j = \paraa{\nabla_a\nabla_bu}X^aY^b.
\end{align*}
In particular, this implies that $\,\bar{\!g}_{ij}\nabla^{ij}(u)=\Delta(u)$ and
$\,\bar{\!g}_{ij}\,\bar{\!g}_{kl}\nabla^{ik}(u)\nabla^{jl}(u)=|\nabla^2u|^2$.
\end{proposition}
\subsection{Embedded surfaces}\label{sec:surfaces}
\noindent Let us now turn to the special case when $\Sigma$ is a
surface. For surfaces, the tensors $\P$, $\S_A$ and $\mathcal{T}_A$ are themselves
maps from $TM$ to $TM$, and $\S_A$ coincides with
$\mathcal{T}_A$. Moreover, since the second fundamental forms can be considered
as $2\times 2$ matrices, one has the identity
\begin{align*}
2\det W_A = \paraa{\operatorname{tr} W_A}^2-\operatorname{tr} W_A^2,
\end{align*}
which implies that the scalar curvature can be written as
\begin{align*}
R &= \frac{1}{\gamma^4}\paraa{\P^2}^{ik}\paraa{\P^2}^{jl}\bar{R}_{ijkl}
+ 2\sum_{A=1}^p\det W_A.
\end{align*}
Thus, defining the Gaussian curvature $K$ to be one half of the above
expression (which also coincides with the sectional curvature), one obtains
\begin{align}
K = \frac{1}{2\gamma^4}\paraa{\P^2}^{ik}\paraa{\P^2}^{jl}\bar{R}_{ijkl}-\frac{1}{2\gamma^2}\sum_{A=1}^p\operatorname{Tr}\S_A^2,
\end{align}
which in the case when $M=\mathbb{R}^m$ becomes
\begin{align}
K =-\frac{1}{2\gamma^2}\sum_{A=1}^p\sum_{i,j=1}^m\{x^i,n_A^j\}\{x^j,n_A^i\},
\end{align}
and by using the normal vectors $Z_\alpha$ the expression for $K$ can
be written as
\begin{equation}
\begin{split}
K &= -\frac{1}{8\gamma^4(p-1)!}
\sum\varepsilon_{jklI}\varepsilon_{imnI}\{x^i,\{x^k,x^l\}\}\{x^j,\{x^m,x^n\}\}\\
&=\frac{1}{\gamma^4}\parac{\frac{1}{2}\{\{x^j,x^k\},x^k\}\{\{x^j,x^l\},x^l\}
-\frac{1}{4}\{\{x^j,x^k\},x^l\}\{\{x^j,x^k\},x^l\}}.
\end{split}
\end{equation}
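The last expression can be spot-checked numerically (an illustrative addition): on the unit sphere in $\mathbb{R}^3$ one has $\gamma=1$ and the formula should return the Gaussian curvature $K=1$:

```python
import sympy as sp

# Spot check (see the lead-in): with gamma = 1 on the unit sphere, the last
# Gaussian-curvature formula reads
#   K = 1/2 sum_j (sum_k {{x^j,x^k},x^k})^2
#       - 1/4 sum_{j,k,l} {{x^j,x^k},x^l}^2 .
th, ph = sp.symbols('theta phi')
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]

def pb(f, g):
    return sp.simplify((sp.diff(f, th)*sp.diff(g, ph)
                        - sp.diff(f, ph)*sp.diff(g, th))/sp.sin(th))

term1 = sum(sp.simplify(sum(pb(pb(x[j], x[k]), x[k]) for k in range(3)))**2
            for j in range(3))
term2 = sum(pb(pb(x[j], x[k]), x[l])**2
            for j in range(3) for k in range(3) for l in range(3))
K = sp.simplify(sp.Rational(1, 2)*term1 - sp.Rational(1, 4)*term2)
print(K)  # -> 1, the Gaussian curvature of the unit sphere
```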
\noindent To every Riemannian metric on $\Sigma$ one can associate an
almost complex structure $\mathcal{J}$ through the formula
\begin{equation*}
\mathcal{J}(X) = \frac{1}{\sqrt{g}}\varepsilon^{ac}g_{cb}X^be_a,
\end{equation*}
and since on a two-dimensional manifold any almost complex structure
is integrable, $\mathcal{J}$ is a complex structure on $\Sigma$. For $X\in TM$ one has
\begin{align}
\P(X) = -\frac{1}{\gamma\sqrt{g}}\,\bar{\!g}\paraa{X,e_a}\varepsilon^{ab}e_b,
\end{align}
and it follows that one can express the complex structure in terms of $\P$.
\begin{theorem}\label{thm:complexstructure}
Defining $\mathcal{J}_M(X)=\gamma\P(X)$ for all $X\in TM$ it holds that
$\mathcal{J}_M(Y)=\mathcal{J}(Y)$ for all $Y\in T\Sigma$. That is, $\gamma\P$ defines a
complex structure on $T\Sigma$.
\end{theorem}
\noindent Let us now turn to the Codazzi-Mainardi equations for
surfaces. In this case, the map $\mathcal{C}_A$ becomes a tangent vector and
one can easily see in Proposition \ref{prop:CANPbracket} that the sum
in the expression for $\mathcal{C}_A$ can be written in a slightly more compact form, namely
\begin{align*}
\mathcal{C}_A = &\pb{\gamma^{-2}(\mathcal{B}_A)^i_k,x^k}\d_i
+\frac{1}{\gamma^2}\pb{x^j,x^l}\bracketb{\bar{\Gamma}^i_{jk}(\mathcal{B}_A)^k_l-(\mathcal{B}_A)^i_k\bar{\Gamma}^k_{jl}}\\
&\qquad+\frac{1}{\gamma^2}\sum_{B=1}^p\mathcal{B}_B\S_A(N_B).
\end{align*}
\noindent Thus, for surfaces embedded in $\mathbb{R}^m$ the Codazzi-Mainardi equations become
\begin{align*}
\sum_{j,k=1}^m\pb{\gamma^{-2}\{x^i,x^j\}\{x^j,n_A^k\},x^k}\d_i+\frac{1}{\gamma^2}\sum_{B=1}^p\mathcal{B}_B\S_A(N_B)=0,
\end{align*}
and in $\mathbb{R}^3$ one has
\begin{align}
\sum_{j,k=1}^3\pba{\gamma^{-2}\{x^i,x^j\}\{x^j,n^k\},x^k} = 0.
\end{align}
\noindent Let us note that one can rewrite these equations using the following result:
\begin{proposition}
For $M=\mathbb{R}^m$ and $i=1,\ldots,m$ it holds that
\begin{align}
\sum_{j,k=1}^m\pba{f\{x^i,x^j\}\{x^j,n^k\},x^k} =
\sum_{j,k=1}^m\pba{f\{x^i,x^j\}\{x^j,x^k\},n^k}
\end{align}
for any normal vector $N=n^i\d_i$ and any $f\in C^\infty(\Sigma)$.
\end{proposition}
\begin{proof}
We start by recalling that for any $g\in C^\infty(\Sigma)$ it holds
that $\sum_{i=1}^m\{g,x^i\}n^i=0$, since it involves the scalar product
$\,\bar{\!g}(e_a,N)$. Moreover, one also has
\begin{align*}
\sum_{k=1}^m\{x^k,n^k\} &= \sum_{k=1}^m\frac{1}{\rho}\varepsilon^{ab}(\d_ax^k)(\d_bn^k)
=\sum_{k=1}^m\frac{1}{\rho}\varepsilon^{ab}\parab{\d_b\paraa{n^k\d_ax^k}-n^k\d^2_{ab}x^k}\\
&=-\sum_{k=1}^m\frac{1}{\rho}\varepsilon^{ab}n^k\d^2_{ab}x^k=0,
\end{align*}
which implies that $\sum_{k=1}^m\{x^k,gn^k\}=0$ for all $g\in C^\infty(\Sigma)$.
By using the above identities together with the Jacobi identity, one
obtains
\begin{align*}
\pba{f\{x^i,x^j\}\{x^j,n^k\},x^k} &=
f\{x^i,x^j\}\pba{\{x^j,n^k\},x^k}+\{x^j,n^k\}\pba{f\{x^i,x^j\},x^k}\\
&= -f\{x^i,x^j\}\pba{\{x^k,x^j\},n^k}-n^k\pba{x^j,\{f\{x^i,x^j\},x^k\}}\\
&= -f\{x^i,x^j\}\pba{\{x^k,x^j\},n^k} + n^k\pba{f\{x^i,x^j\},\{x^k,x^j\}}\\
&= -f\{x^i,x^j\}\pba{\{x^k,x^j\},n^k} - \{x^k,x^j\}\pba{f\{x^i,x^j\},n^k}\\
&= \pba{f\{x^i,x^j\}\{x^j,x^k\},n^k}.\qedhere
\end{align*}
\end{proof}
\noindent Hence, one can rewrite the Codazzi-Mainardi equations for a surface in $\mathbb{R}^3$ as
\begin{align}\label{eq:CMR3P}
\sum_{j,k=1}^3\pba{\gamma^{-2}(\P^2)^{ik},n^k} = 0,
\end{align}
and it is straightforward to show that
\begin{align*}
\sum_{i,j,k=1}^3\paraa{\d_cx^i}\pba{\gamma^{-2}(\P^2)^{ik},n^k} = \frac{1}{\rho}\varepsilon^{ab}\nabla_ah_{bc},
\end{align*}
thus reproducing the classical form of the Codazzi-Mainardi equations.
Is it possible to verify (\ref{eq:CMR3P}) directly using only Poisson
algebraic manipulations? It turns out that the Codazzi-Mainardi
equations in $\mathbb{R}^3$ constitute an identity valid in arbitrary Poisson
algebras, if one assumes that a normal vector is given by
$\frac{1}{2\gamma}\varepsilon_{ijk}\{x^j,x^k\}\d_i$.
\begin{proposition}
Let $\{\cdot,\cdot\}$ be an arbitrary Poisson structure on
$C^\infty(\Sigma)$. Given $x^1,x^2,x^3\in C^\infty(\Sigma)$
it holds that
\begin{align*}
\sum_{j,k,l,n=1}^3\frac{1}{2}\varepsilon_{kln}\pba{\gamma^{-2}\{x^i,x^j\}\{x^j,x^k\},\gamma^{-1}\{x^l,x^n\}}=0
\end{align*}
for $i=1,2,3$, where
\begin{align*}
\gamma^2 = \{x^1,x^2\}^2+\{x^2,x^3\}^2+\{x^3,x^1\}^2.
\end{align*}
\end{proposition}
\begin{proof}
Let $u,v,w$ be a cyclic permutation of $1,2,3$. In the following we
do not sum over repeated indices $u,v,w$. Denoting by $\text{CM}^i$
the $i$'th component of the Codazzi-Mainardi equation, one has
\begin{align*}
&\text{CM}^u = -\pba{\gamma^{-2}\paraa{\{x^u,x^v\}^2+\{x^w,x^u\}^2},\gamma^{-1}\{x^v,x^w\}}\\
&\quad+\pba{\gamma^{-2}\{x^u,x^v\}\{x^v,x^w\},\gamma^{-1}\{x^u,x^v\}}
+\pba{\gamma^{-2}\{x^u,x^w\}\{x^w,x^v\},\gamma^{-1}\{x^w,x^u\}}\\
&\quad= -\pba{1-\gamma^{-2}\{x^v,x^w\}^2,\gamma^{-1}\{x^v,x^w\}}
+\gamma^{-1}\{x^u,x^v\}\pba{\gamma^{-1}\{x^v,x^w\},\gamma^{-1}\{x^u,x^v\}}\\
&\quad+\gamma^{-1}\{x^u,x^w\}\pba{\gamma^{-1}\{x^w,x^v\},\gamma^{-1}\{x^w,x^u\}}\\
&\quad= \frac{1}{2}\pba{\gamma^{-1}\{x^v,x^w\},\gamma^{-2}\paraa{\gamma^2-\{x^v,x^w\}^2}}=0.\qedhere
\end{align*}
\end{proof}
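\noindent Since the proposition holds for an arbitrary Poisson structure, it can be checked symbolically for a concrete choice. The following sketch (assuming SymPy is available) uses the canonical bracket on two variables together with an arbitrarily chosen polynomial embedding $x^1,x^2,x^3$, and evaluates the left-hand side numerically at a sample point; the choice of functions and the sample point are illustrative only.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

def pb(f, h):
    # canonical Poisson bracket {f,h} = (d_u f)(d_v h) - (d_v f)(d_u h)
    return sp.diff(f, u) * sp.diff(h, v) - sp.diff(f, v) * sp.diff(h, u)

# an arbitrary polynomial choice of x^1, x^2, x^3
x = [u, v, u**2 + u * v]

gamma2 = pb(x[0], x[1])**2 + pb(x[1], x[2])**2 + pb(x[2], x[0])**2
gamma = sp.sqrt(gamma2)

residuals = []
for i in range(3):
    expr = sp.S(0)
    for j in range(3):
        for k in range(3):
            for l in range(3):
                for n in range(3):
                    expr += sp.Rational(1, 2) * sp.LeviCivita(k + 1, l + 1, n + 1) \
                        * pb(pb(x[i], x[j]) * pb(x[j], x[k]) / gamma2,
                             pb(x[l], x[n]) / gamma)
    # evaluate numerically at a sample point instead of relying on
    # sympy's ability to simplify the full radical expression to zero
    residuals.append(abs(expr.evalf(subs={u: 0.31, v: 0.57})))

print(residuals)   # all ~ 0
```

The same loop with other polynomial (or transcendental) choices of $x^i$ gives vanishing residuals as well, in accordance with the proposition.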
\noindent Let us end by noting that these results generalize to
arbitrary hypersurfaces in $\mathbb{R}^{n+1}$. Namely,
\begin{align*}
&\pba{\gamma^{-2}\{x^i,\vec{x}^J\}\{\vec{x}^J,n^k\},x^k}_f
=\pba{\gamma^{-2}\{x^i,\vec{x}^J\}\{\vec{x}^J,x^k\},n^k}_f,\\
&(\d_cx^i)\pba{\gamma^{-2}\paraa{\P^2}^{ik},n^k}_f=
-\frac{1}{\rho}\varepsilon^{aba_1\cdots a_{n-2}}\paraa{\nabla_ah_{bc}}\paraa{\d_{a_1}f_1}\cdots\paraa{\d_{a_{n-2}}f_{n-2}},
\end{align*}
and
\begin{align*}
\varepsilon_{klL}\pba{\gamma^{-2}\{x^i,\vec{x}^J\}\{\vec{x}^J,x^k\},\gamma^{-1}\{x^l,\vec{x}^L\}}_f=0
\end{align*}
for arbitrary $x^1,\ldots,x^{n+1}\in C^\infty(\Sigma)$.
\section{Matrix regularizations}\label{sec:matrixRegularizations}
\noindent In physics, ``fuzzy spaces'' have been used for a long time
to regularize quantum theories and to model non-commutativity,
originating in the study of a quantum theory of surfaces (membranes)
sweeping out 3-manifolds of vanishing mean curvature. The main idea
was to replace smooth functions on a surface by sequences of matrices,
approximating the Poisson algebra of functions with increasing
accuracy as the matrix dimension grows. Since the expressions for
geometric quantities derived in Section
\ref{sec:nambuPoissonFormulation} use only the Poisson algebraic
structure of the function algebra, it is natural to study their matrix
analogues in this context.
Let us start by introducing some notation. Let $N_1,N_2,\ldots$ be a
strictly increasing sequence of positive integers and let $T_\alpha$, for
$\alpha=1,2,\ldots$, be linear maps from $C^\infty(\Sigma)$ to
hermitian $N_\alpha\times N_\alpha$ matrices. Moreover, let
$\hbar:\mathbb{R}\to\mathbb{R}$ be a strictly positive decreasing function
such that $\lim_{N\to\infty} N\hbar(N)$ converges, and set $\ha=\hbar(N_\alpha)$.
Introduce the operators
\begin{align*}
\partial^f(h) = \{f,h\}
\end{align*}
as well as the matrix operators
\begin{align*}
\dh_\alpha^f(X) = \frac{1}{i\ha}[X,T_\alpha(f)],
\end{align*}
and write
\begin{align*}
&\partial^{f_1\cdots f_k}(h) = \partial^{f_1}\partial^{f_2}\cdots\partial^{f_k}(h)\\
&\dh_\alpha^{f_1\cdots f_k}(X) = \dh_\alpha^{f_1}\dh_\alpha^{f_2}\cdots\dh_\alpha^{f_k}(X).
\end{align*}
Let us now define what is meant by a matrix regularization of a compact
surface.
\begin{definition}
Let $N_1,N_2,\ldots$ be a strictly increasing sequence of positive
integers, let $\{T_\alpha\}$ for $\alpha=1,2,\ldots$ be linear maps from
$C^\infty(\Sigma,\mathbb{R})$ to hermitian $N_\alpha\times N_\alpha$ matrices and
let $\hbar(N)$ be a real-valued strictly positive decreasing
function such that $\lim_{N\to\infty} N\hbar(N)<\infty$. Furthermore, let
$\omega$ be a symplectic form on $\Sigma$ and let $\{\cdot,\cdot\}$
denote the Poisson bracket induced by $\omega$.
If for all integers $1\leq l\leq k$, $\{T_\alpha\}$ has the following
properties for all $f,f_1,\ldots,f_k,h\in C^\infty(\Sigma)$
\begin{align}
&\lim_{\alpha\to\infty}\norm{T_\alpha(f)}<\infty\label{eq:matrixNorm},\\
&\lim_{\alpha\to\infty}\norm{T_\alpha(fh)-T_\alpha(f)T_\alpha(h)}=0,\label{eq:matrixProduct}\\
&\lim_{\alpha\to\infty}\norm{\dh_\alpha^{f_1\cdots f_l}\paraa{T_\alpha(f)}-T_\alpha\paraa{\partial^{f_1\cdots f_l}(f)}}=0,\label{eq:matrixCommutator}\\
&\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}T_\alpha(f)=\int_\Sigma f \omega,\label{eq:matrixTrace}
\end{align}
where $||\cdot||$ denotes the operator norm and
$\hbar_\alpha=\hbar(N_\alpha)$, then we call the pair $(T_\alpha,\hbar)$ a
\emph{$C^k$-convergent matrix regularization of
$(\Sigma,\omega)$}. If $(T_\alpha,\hbar_\alpha)$ is $C^k$-convergent for all
$k\geq 0$ then $(T_\alpha,\hbar_\alpha)$ is called a \emph{smooth matrix regularization of
$(\Sigma,\omega)$}.
\end{definition}
\noindent In the following, when we speak of a matrix regularization
without any reference to the degree of convergence, we shall always
mean a $C^1$-convergent matrix regularization.
\begin{remark}
In some cases, a $C^1$-convergent matrix regularization is
automatically a smooth matrix regularization. For instance, if it
holds that for any $f,h\in C^\infty(\Sigma)$ there exists $A_k(f,h)\in
C^\infty(\Sigma)$ such that
\begin{align*}
\frac{1}{i\hbar_\alpha}[T_\alpha(f),T_\alpha(h)]=\sum_k c_{k,\alpha}(f,h)T_\alpha\paraa{A_k(f,h)},
\end{align*}
for some $c_{k,\alpha}(f,h)\in\mathbb{R}$, then $C^k$-convergence implies
$C^{k+1}$-convergence. The matrix regularizations for the sphere and
the torus in Section \ref{sec:simpleExamples} both fall into this
category. Hence, they are examples of smooth matrix
regularizations. Note that one can easily destroy the smoothness of
a matrix regularization by slightly deforming it, see Example
\ref{ex:deformedFuzzyTorus}.
\end{remark}
\begin{definition}
A sequence $\{\fh_\alpha\}$ of $N_\alpha\times N_\alpha$ matrices
\emph{converges to $f$} (or \emph{$C^0$-converges to $f$}) if
\begin{align}
\lim_{\alpha\to\infty}\norm{\fh_\alpha-T_\alpha(f)}=0.
\end{align}
Moreover, for any integer $k\geq 1$, a sequence $\{\fh_\alpha\}$ of
$N_\alpha\times N_\alpha$ matrices \emph{$C^k$-converges to $f$} if in addition
\begin{align*}
\lim_{\alpha\to\infty}\norm{\dh_\alpha^{f_1\cdots f_l}(\fh_\alpha)-T_\alpha\paraa{\partial
^{f_1\cdots f_l}(f)}}=0,
\end{align*}
for all $1\leq l\leq k$ and $f_1,\ldots,f_l\in C^\infty(\Sigma)$.
If $\{\fh_\alpha\}$ is $C^k$-convergent for all positive $k$ then we say
that $\{\fh_\alpha\}$ is a smooth sequence.
\end{definition}
\begin{remark}
If the matrix regularization is $C^k$-convergent, it is clear that
the matrix sequence $T_\alpha(f)$ is $C^k$-convergent. It is however easy
to construct, even in a smooth matrix regularization, $C^0$-convergent
sequences that are not $C^1$-convergent; see Example
\ref{ex:nonSmoothSequence}.
\end{remark}
\begin{definition}
A $C^k$-convergent matrix regularization $(T_\alpha,\hbar)$ is called
\emph{unital} if the sequence $\{\mid_{N_\alpha}\}$ $C^k$-converges to the constant function $1$.
\end{definition}
\begin{remark}
Although unital matrix regularizations seem natural, and all our
examples fall into this category, it is easy to construct examples
of non-unital matrix regularizations. Namely, let $(T_\alpha,\hbar)$ be a
matrix regularization and consider the map $\tilde{T}^\alpha$
defined by
\begin{align*}
\tilde{T}^\alpha(f) =
\begin{pmatrix}
& & & & 0\\
& & T_\alpha(f) & & \vdots \\
& & & & \\
0 & & \cdots & & 0
\end{pmatrix}.
\end{align*}
Then $(\tilde{T}^\alpha,\hbar)$ is a matrix regularization which is not unital, since
\begin{align*}
\lim_{\alpha\to\infty}\norm{\tilde{T}^\alpha(1)-\mid_{N_\alpha+1}}\geq 1.
\end{align*}
\end{remark}
\begin{proposition}
Let $(T_\alpha,\hbar)$ be a unital matrix regularization. Then
\begin{align}
\lim_{\alpha\to\infty} 2\pi N_\alpha\ha = \int_\Sigma\omega.
\end{align}
\end{proposition}
\begin{proof}
Let us use formula (\ref{eq:matrixTrace}) with $f=1$.
\begin{align*}
\int_\Sigma\omega &= \lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}T_\alpha(1)
=\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}\bracketb{T_\alpha(1)+\mid_{N_\alpha}-\mid_{N_\alpha}}\\
&= \lim_{\alpha\to\infty}\parab{2\pi\ha N_\alpha + 2\pi\ha\operatorname{Tr}(T_\alpha(1)-\mid_{N_\alpha})}
= \lim_{\alpha\to\infty} 2\pi\ha N_\alpha
\end{align*}
since
\begin{align*}
\lim_{\alpha\to\infty} \abs{2\pi\ha\operatorname{Tr}(T_\alpha(1)-\mid_{N_\alpha})}
\leq \lim_{\alpha\to\infty} 2\pi\ha N_\alpha\norm{T_\alpha(1)-\mid_{N_\alpha}} = 0,
\end{align*}
due to the fact that the matrix regularization is unital.
\end{proof}
\begin{proposition}\label{prop:arbitrarySequences}
Let $(T_\alpha,\hbar_\alpha)$ be a $C^k$-convergent matrix regularization and
assume that $\fh_\alpha$ and $\hh_\alpha$ $C^k$-converge to $f,h\in
C^\infty(\Sigma)$ respectively. Then it holds that $a\fh_\alpha+b\hh_\alpha$
$C^k$-converges to $af+bh$, for any $a,b\in\mathbb{R}$, and $\fh_\alpha\hh_\alpha$
$C^k$-converges to $fh$.
Furthermore, it holds that
\begin{align}
&\lim_{\alpha\to\infty}\norm{\fh_\alpha} = \lim_{\alpha\to\infty}\norm{T_\alpha(f)}\label{eq:multiMatrixNorm}\\
&\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}\paraa{\fh_\alpha\hh_\alpha} = \int_{\Sigma}fh\omega\label{eq:multiMatrixTrace}.
\end{align}
\end{proposition}
\begin{proof}
The fact that $a\hat{f}+b\hat{h}$ $C^k$-converges to $af+bh$ follows
directly from linearity of the maps $T_\alpha$. To prove (\ref{eq:multiMatrixNorm}) one
uses the reverse triangle inequality to deduce
\begin{align*}
\lim_{\alpha\to\infty}\left| ||\fh_\alpha||-\norm{T_\alpha(f)}\right|
\leq \lim_{\alpha\to\infty}\norm{\fh_\alpha-T_\alpha(f)}=0,
\end{align*}
since $\fh_\alpha$ is assumed to converge to $f$. Let us continue
by proving that $\fh_\alpha\hh_\alpha$ $C^0$-converges to $fh$, i.e.
\begin{align*}
&\lim_{\alpha\to\infty}\norm{\fh_\alpha\hh_\alpha-T_\alpha(fh)} =
\lim_{\alpha\to\infty}\norm{\fh_\alpha\hh_\alpha-\fhaT_\alpha(h)+\fhaT_\alpha(h)-T_\alpha(fh)}\\
&\leq\lim_{\alpha\to\infty}\Big(\norm{\fh_\alpha}\norm{\hh_\alpha-T_\alpha(h)}
+\norm{\fhaT_\alpha(h)-T_\alpha(f)T_\alpha(h)+T_\alpha(f)T_\alpha(h)-T_\alpha(fh)}\Big)\\
&\leq\lim_{\alpha\to\infty}\Big(
\norm{\fh_\alpha}\norm{\hh_\alpha-T_\alpha(h)}
+\norm{\fh_\alpha-T_\alpha(f)}\norm{T_\alpha(h)}
+\norm{T_\alpha(f)T_\alpha(h)-T_\alpha(fh)}
\Big)\\
&=0,
\end{align*}
since both $\{\fh_\alpha\}$ and $\{\hh_\alpha\}$ are $C^0$-convergent sequences
and $||\fh_\alpha||$ is bounded by (\ref{eq:multiMatrixNorm}). Using the
fact that $\fh_\alpha\hh_\alpha$ $C^0$-converges to $fh$, it is easy to prove
(\ref{eq:multiMatrixTrace}) by computing
\begin{align*}
\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}\fh_\alpha\hh_\alpha &=
\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}\paraa{\fh_\alpha\hh_\alpha-T_\alpha(fh)+T_\alpha(fh)}\\
&=\lim_{\alpha\to\infty} 2\pi\ha\operatorname{Tr}T_\alpha(fh) = \int_\Sigma fh\omega.
\end{align*}
Finally, we proceed by induction to show that $\fh_\alpha\hh_\alpha$
$C^k$-converges to $fh$. Thus, assume that, for some $0\leq l<k$,
$\uh_\alpha\vh_\alpha$ $C^l$-converges to $uv$ whenever $\uh_\alpha$ and $\vh_\alpha$
$C^l$-converge to $u$ and $v$ respectively. Since
\begin{align*}
\dh_\alpha^{f_1}(\fh_\alpha\hh_\alpha) = \paraa{\dh_\alpha^{f_1}\fh_\alpha}\hh_\alpha+\fh_\alpha\dh_\alpha^{f_1}\hh_\alpha
\end{align*}
we can use the induction hypothesis (together with the assumption
that $\fh_\alpha,\hh_\alpha$ $C^{k>l}$-converge) to conclude that
$\dh_\alpha^{f_1}(\fh_\alpha\hh_\alpha)$ $C^l$-converges, which implies that
$\fh_\alpha\hh_\alpha$ $C^{l+1}$-converges. Hence, it follows that
$\fh_\alpha\hh_\alpha$ $C^k$-converges to $fh$.
\end{proof}
\noindent The above result allows one to easily construct sequences of
matrices converging to any sum of products of functions and Poisson
brackets. Namely, simply substitute for every factor in every term of
the sum, a sequence converging to that function, where Poisson
brackets of functions may be replaced by commutators of
matrices. Proposition \ref{prop:arbitrarySequences} then guarantees
that the matrix sequence obtained in this way converges to the sum of
the products of the corresponding functions, as long as the
appropriate level of convergence is assumed.
\begin{proposition}
Let $(T_\alpha,\hbar)$ be a matrix regularization and let $\{\fh_\alpha\}$ be a
sequence converging to $f$. Then $\lim_{\alpha\to\infty}||\fh_\alpha||=0$ if and only if $f=0$.
\end{proposition}
\begin{proof}
From Proposition \ref{prop:arbitrarySequences} it follows directly
that if $\fh_\alpha$ converges to $0$ then
\begin{align*}
\lim_{\alpha\to\infty}||\fh_\alpha||=\lim_{\alpha\to\infty}||T_\alpha(0)||=0.
\end{align*}
Now, assume that $\lim_{\alpha\to\infty}||\fh_\alpha||=0$. Then it holds that
\begin{align*}
\int f^2\omega = \lim_{\alpha\to\infty} 2\pi\hbar_\alpha\operatorname{Tr}\fh_\alpha^2
\leq \lim_{\alpha\to\infty} 2\pi\hbar_\alpha N_\alpha||\fh_\alpha^2||
\leq \lim_{\alpha\to\infty} 2\pi\hbar_\alpha N_\alpha||\fh_\alpha||^2=0,
\end{align*}
from which we conclude that $f=0$.
\end{proof}
\begin{proposition}\label{prop:uniqueNormZero}
Let $(T_\alpha,\hbar)$ be a matrix regularization and assume that
$\{\fh_\alpha\}$ $C^k$-converges to $f$. Then $\{\fh_\alpha^\dagger\}$
$C^k$-converges to $f$.
\end{proposition}
\begin{proof}
Due to the fact that $||A||=||A^\dagger||$ one sees that
\begin{align*}
\lim_{\alpha\to\infty}&\norm{\dh_\alpha^{f_1\cdots f_k}(\fh_\alpha^\dagger)-T_\alpha\paraa{\partial^{f_1\cdots f_k}(f)}}
=\lim_{\alpha\to\infty}\norm{\dh_\alpha^{f_1\cdots f_k}(\fh_\alpha^\dagger)^\dagger-T_\alpha\paraa{\partial^{f_1\cdots f_k}(f)}}\\
&=\lim_{\alpha\to\infty}\norm{\dh_\alpha^{f_1\cdots f_k}(\fh_\alpha)-T_\alpha\paraa{\partial^{f_1\cdots f_k}(f)}}=0,
\end{align*}
since $\{\fh_\alpha\}$ $C^k$-converges to $f$.
\end{proof}
\begin{proposition}
Let $(T_\alpha,\hbar)$ be a unital matrix regularization and assume that
$f$ is a nowhere vanishing function and that $\{\fh_\alpha\}$
$C^k$-converges to $f$. If $\fh_\alpha^{-1}$ exists and
$||\fh_\alpha^{-1}||$ is uniformly bounded for all $\alpha$, then
$\{\fh_\alpha^{-1}\}$ $C^k$-converges to $1/f$.
\end{proposition}
\begin{proof}
Let us first show that $\fh_\alpha^{-1}$ $C^0$-converges to $1/f$;
one calculates
\begin{align*}
\lim_{\alpha\to\infty}&\norm{\fh_\alpha^{-1}-T_\alpha(1/f)}
\leq \lim_{\alpha\to\infty}\norm{\fh_\alpha^{-1}}\norm{\mid_{N_\alpha}-\fhaT_\alpha(1/f)}\\
&=\lim_{\alpha\to\infty}\norm{\fh_\alpha^{-1}}\norm{\mid_{N_\alpha}-\fhaT_\alpha(1/f)+T_\alpha(1)-T_\alpha(1)}\\
&\leq\lim_{\alpha\to\infty}\norm{\fh_\alpha^{-1}}\parab{\norm{\mid_{N_\alpha}-T_\alpha(1)}
+\norm{\fhaT_\alpha(1/f)-T_\alpha(1)}}\\
&=0,
\end{align*}
since the matrix regularization is unital and $||\fh_\alpha^{-1}||$ is
assumed to be uniformly bounded. Let us now proceed by induction and
assume that $\fh_\alpha^{-1}$ is $C^l$-convergent ($0\leq l<k$). For arbitrary
$h\in C^\infty(\Sigma)$ it holds that
\begin{align*}
[\fh_\alpha^{-1},T_\alpha(h)] = -\fh_\alpha^{-1}[\fh_\alpha,T_\alpha(h)]\fh_\alpha^{-1},
\end{align*}
and since $\fh_\alpha$ is $C^k$-convergent, the above sequence is
$C^l$-convergent by Proposition \ref{prop:arbitrarySequences} which
implies that $\fh_\alpha^{-1}$ is $C^{l+1}$-convergent. Hence, it follows
by induction that $\fh_\alpha^{-1}$ is $C^k$-convergent.
\end{proof}
\subsection{Discrete curvature and the Gauss-Bonnet theorem}\label{sec:discreteGB}
\noindent Let us now consider a surface $\Sigma$ embedded in
$M$ via the embedding coordinates $x^1,\ldots,x^m$, with a symplectic form
\begin{align*}
\omega = \rho(u^1,u^2)du^1\wedge du^2,
\end{align*}
inducing the Poisson bracket
$\{f,h\}=\frac{1}{\rho}\varepsilon^{ab}(\d_af)(\d_b h)$, and let $(T_\alpha,\hbar_\alpha)$
be a matrix regularization of $(\Sigma,\omega)$. Furthermore, we let
$\{\hat{\gamma}_\alpha\}$ be a $C^2$-convergent sequence converging to
$\gamma=\sqrt{g}/\rho$ (and we assume that $\{\hat{\gamma}_\alpha^{-1}\}$
exists and converges to $1/\gamma$), and we set $X_\alpha^i=T_\alpha(x^i)$ as well
as $N_{A\alpha}^i=T_\alpha(n_A^i)$ for $i=1,\ldots,m$. Moreover, given the metric
$\,\bar{\!g}_{ij}$ and the Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ of $M$, we
let $\{\hat{G}_{ij,\alpha}\}$ and $\{\hat{\Gamma}^{i}_{jk,\alpha}\}$ denote
sequences converging to $\,\bar{\!g}_{ij}$ and $\bar{\Gamma}^{i}_{jk}$
respectively. To avoid excess of notation, we shall often suppress the
index $\alpha$ whenever all matrices are considered at a fixed (but
arbitrary) $\alpha$.
Since most formulas in Section \ref{sec:nambuPoissonFormulation} are
expressed in terms of the tensors $\P^i_j$ and $(\S_A)^i_j$ (in the
case of surfaces), we introduce their matrix analogues
\begin{align*}
\hat{\P}^i_j &= \frac{1}{i\hbar}[X^i,X^{j'}]\hat{G}_{j'j}\\
(\hat{\S}_A)^i_j &= \frac{1}{i\hbar}[X^i,N_A^{j'}]\hat{G}_{j'j}+
\frac{1}{i\hbar}[X^i,X^k]\hat{\Gamma}^{j'}_{kl}N_A^l\hat{G}_{j'j},
\end{align*}
as well as their squares
\begin{align*}
(\hat{\P}^2)^i_j = (\hat{\P}^i_k)^\dagger\hat{\P}^k_j
\quad\text{and}\quad
(\hat{\S}_A^2)^i_j=(\hat{\S}_A{}^i_k)^\dagger\hat{\S}_A{}^k_j,
\end{align*}
and corresponding trace
\begin{align*}
\widehat{\operatorname{tr}}\,\hat{\P}^2 =\sum_{i=1}^m(\hat{\P}^2)^i_i
\quad\text{and}\quad
\widehat{\operatorname{tr}}\,\hat{\S}_A^2 =\sum_{i=1}^m(\hat{\S}_A^2)^i_i.
\end{align*}
(The ordinary trace of a matrix $X$ will be denoted by $\operatorname{Tr} X$.) From
Proposition \ref{prop:arbitrarySequences} it follows that one can
easily construct matrix sequences converging to the geometric objects
in Section \ref{sec:nambuPoissonFormulation}, as long as the
appropriate type of convergence is assumed. Let us illustrate this by
investigating matrix sequences related to the curvature of $\Sigma$
and the Gauss-Bonnet theorem.
\begin{definition}
Let $(T_\alpha,\hbar)$ be a matrix regularization of $(\Sigma,\omega)$,
let $K$ be the Gaussian curvature of $\Sigma$ and let $\chi$ be the
Euler characteristic of $\Sigma$. A \emph{Discrete Curvature of
$\Sigma$} is a matrix sequence $\{\hat{K}_1,\hat{K}_2,\hat{K}_3,\ldots\}$
converging to $K$, and a \emph{Discrete Euler Characteristic of
$\Sigma$} is a sequence $\{\hat{\chi}_1,\hat{\chi}_2,\hat{\chi}_3,\ldots\}$ such
that $\displaystyle\lim_{\alpha\to\infty}\hat{\chi}_\alpha=\chi$.
\end{definition}
\noindent From the classical Gauss-Bonnet theorem, it is immediate to derive a
discrete analogue for matrix regularizations.
\begin{theorem}\label{thm:discreteEuler}
Let $(T_\alpha,\hbar)$ be a matrix regularization of $(\Sigma,\omega)$, and let
$\{\hat{K}_1,\hat{K}_2,\ldots\}$ be a discrete curvature of $\Sigma$. Then the sequence
$\hat{\chi}_1,\hat{\chi}_2,\ldots$ defined by
\begin{align}
\hat{\chi}_{\alpha} = \ha\operatorname{Tr}\bracketb{\hat{\gamma}_\alpha\hat{K}_{\alpha}},
\end{align}
is a discrete Euler characteristic of $\Sigma$.
\end{theorem}
\begin{proof}
To prove the statement, we compute $\lim_{\alpha\to\infty} \hat{\chi}_\alpha$ and show that it is equal to $\chi(\Sigma)$. Thus
\begin{align*}
\lim_{\alpha\to\infty}\hat{\chi}_\alpha &= \lim_{\alpha\to\infty}\frac{1}{2\pi}2\pi\ha\operatorname{Tr}\bracketb{\hat{\gamma}_\alpha\hat{K}_{\alpha}},
\end{align*}
and by using Proposition \ref{prop:arbitrarySequences} we can write
\begin{align*}
\lim_{\alpha\to\infty}\hat{\chi}_\alpha &= \frac{1}{2\pi}\int_{\Sigma}K\frac{\sqrt{g}}{\rho}\omega
=\frac{1}{2\pi}\int_{\Sigma}K\frac{\sqrt{g}}{\rho}\rho dudv =
\frac{1}{2\pi}\int_{\Sigma}K\sqrt{g}dudv = \chi(\Sigma),
\end{align*}
where the last equality is the classical Gauss-Bonnet theorem.
\end{proof}
\begin{theorem}\label{thm:discreteCurvature}
Let $(T_\alpha,\hbar)$ be a unital matrix regularization of
$(\Sigma,\omega)$ and let $\hat{R}_{ijkl}$, for each
$i,j,k,l=1,\ldots,m$, be a sequence converging to the corresponding
component of the curvature tensor of $M$. Then the sequence $\hat{K}$
defined by
\begin{align*}
\hat{K} = \hat{\gamma}^{-4}(\hat{\P}^2)^{ik}(\hat{\P}^2)^{jl}\hat{R}_{ijkl}
-\frac{1}{2}\sum_{A=1}^p\paraa{\hat{\gamma}^\dagger}^{-1}\paraa{\widehat{\operatorname{tr}}\,\hat{\S}_A^2}\hat{\gamma}^{-1},
\end{align*}
is a discrete curvature of $\Sigma$. Thus, a discrete Euler
characteristic is given by
\begin{align}
\hat{\chi} = \hbar\operatorname{Tr}\paraa{\hat{\gamma}^{-3}(\hat{\P}^2)^{ik}(\hat{\P}^2)^{jl}\hat{R}_{ijkl}}
-\frac{\hbar}{2}\sum_{A=1}^p\operatorname{Tr}\bracketb{\hat{\gamma}^{-1}\widehat{\operatorname{tr}}\,\hat{\S}_A^2}.
\end{align}
\end{theorem}
\begin{proof}
By using the way of constructing matrix sequences given through
Proposition \ref{prop:arbitrarySequences}, the result follows
immediately from Theorem \ref{thm:ricciCurvature}.
\end{proof}
\noindent In the case $M=\mathbb{R}^m$ it follows from the results in
Section \ref{sec:surfaces} that when $(T_\alpha,\hbar)$ is a
$C^2$-convergent matrix regularization, then the sequence
\begin{equation}
\begin{split}
\hat{K}_\alpha=\frac{1}{\hbar_\alpha^4}\sum_{j,k,l=1}^m\Bigg(\frac{1}{2}&\paraa{\hat{\gamma}_\alpha^\dagger}^{-2}
\Ccom{X_\alpha^j}{X_\alpha^k}{X_\alpha^k}\Ccom{X_\alpha^j}{X_\alpha^l}{X_\alpha^l}\hat{\gamma}_\alpha^{-2}\\
&-\frac{1}{4}\paraa{\hat{\gamma}_\alpha^\dagger}^{-2}\Ccom{X_\alpha^j}{X_\alpha^k}{X_\alpha^l}
\Ccom{X_\alpha^j}{X_\alpha^k}{X_\alpha^l}\hat{\gamma}_\alpha^{-2}\Bigg)
\end{split}
\end{equation}
converges to the Gaussian curvature of $\Sigma$.
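\noindent For the round fuzzy sphere of the next subsection one has $\gamma=1$, and the formula above can be evaluated exactly at finite matrix dimension. The following sketch (assuming NumPy, and using the standard spin-$(N-1)/2$ representation of $su(2)$) finds that $\hat{K}_\alpha$ is the identity matrix, in accordance with the Gaussian curvature $K=1$ of the unit sphere:

```python
import numpy as np

def spin_matrices(N):
    # spin-(N-1)/2 representation of su(2): [S^j,S^k] = i eps^{jkl} S^l
    k = np.arange(1, N)
    Sp = np.diag(np.sqrt(k * (N - k)), 1).astype(complex)    # S^+
    S3 = np.diag((N - 1) / 2 - np.arange(N)).astype(complex)
    return (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j), S3

def comm(A, B):
    return A @ B - B @ A

N = 10
hbar = 2 / np.sqrt(N**2 - 1)
X = [hbar * S for S in spin_matrices(N)]   # X^i = (2/sqrt(N^2-1)) S^i

# the C^2-formula for K-hat, with gamma-hat = identity
K = np.zeros((N, N), dtype=complex)
for j in range(3):
    for k in range(3):
        for l in range(3):
            K += (0.5 * comm(comm(X[j], X[k]), X[k]) @ comm(comm(X[j], X[l]), X[l])
                  - 0.25 * comm(comm(X[j], X[k]), X[l]) @ comm(comm(X[j], X[k]), X[l]))
K /= hbar**4
print(np.linalg.norm(K - np.eye(N)))   # ~ 0 (up to rounding)
```

That the identity holds exactly, and not only in the limit, is special to the round sphere.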
\subsection{Two simple examples}\label{sec:simpleExamples}
\subsubsection{The round fuzzy sphere}\label{sec:fuzzySphere}
\noindent For the sphere embedded in $\mathbb{R}^3$ as
\begin{align}
\vec{x} = (x^1,x^2,x^3) = (\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta)
\end{align}
with the induced metric
\begin{align}
(g_{ab}) =
\begin{pmatrix}
1 & 0 \\ 0 & \sin^2\theta
\end{pmatrix},
\end{align}
it is well known that one can construct a matrix regularization from
representations of $su(2)$. Namely, let $S_1,S_2,S_3$ be hermitian
$N\times N$ matrices such that $[S^j,S^k] = i{\epsilon^{jk}}_lS^l$,
$(S^1)^2+(S^2)^2+(S^3)^2=(N^2-1)/4$, and define
\begin{align}
X^i = \frac{2}{\sqrt{N^2-1}}S^i.
\end{align}
Then there exists a map $T^{(N)}$ (which can be defined through expansion
in spherical harmonics) such that $T^{(N)}(x^i)=X^i$ and
$(T^{(N)},\hbar=2/\sqrt{N^2-1})$ is a unital matrix regularization of
$(S^2,\sqrt{g}d\theta\wedge d\varphi)$ \cite{h:phdthesis}. A unit normal of the sphere in
$\mathbb{R}^3$ is given by $N\in T\mathbb{R}^3$ with $N=x^i\d_i$, which gives
$N^i=X^i$, and one can compute the discrete curvature as
\begin{align}
\hat{K}_N = -\frac{1}{\hbar^2}\sum_{i<j=1}^3[X^i,X^j]^2 = \mid_N
\end{align}
which gives the discrete Euler characteristic
\begin{align}
\hat{\chi}_N &= \hbar\operatorname{Tr}\hat{K}_N = \hbar N = \frac{2N}{\sqrt{N^2-1}},
\end{align}
converging to $2$ as $N\to\infty$.
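\noindent These identities are exact at every matrix dimension and are easy to confirm numerically; a minimal sketch, assuming NumPy and the standard spin-$(N-1)/2$ representation of $su(2)$:

```python
import numpy as np

def spin_matrices(N):
    # spin matrices with [S^j,S^k] = i eps^{jkl} S^l
    k = np.arange(1, N)
    Sp = np.diag(np.sqrt(k * (N - k)), 1).astype(complex)
    S3 = np.diag((N - 1) / 2 - np.arange(N)).astype(complex)
    return (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j), S3

def comm(A, B):
    return A @ B - B @ A

N = 14
hbar = 2 / np.sqrt(N**2 - 1)
X = [hbar * S for S in spin_matrices(N)]

# (X^1)^2 + (X^2)^2 + (X^3)^2 = 1  and  [X^1,X^2] = i*hbar*X^3
assert np.allclose(sum(x @ x for x in X), np.eye(N))
assert np.allclose(comm(X[0], X[1]), 1j * hbar * X[2])

K_N = -sum(comm(X[i], X[j]) @ comm(X[i], X[j])
           for i in range(3) for j in range(i + 1, 3)) / hbar**2
assert np.allclose(K_N, np.eye(N))         # discrete curvature = identity

chi_N = hbar * np.trace(K_N).real           # = 2N/sqrt(N^2-1) -> 2
print(chi_N)
```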
\subsubsection{The fuzzy Clifford torus}\label{sec:fuzzyTorus}
\noindent The Clifford torus in $S^3$ can be regarded as embedded in $\mathbb{R}^4$ through
\begin{align*}
\vec{x} = (x^1,x^2,x^3,x^4) = \frac{1}{\sqrt{2}}(\cos\varphi_1,\sin\varphi_1,\cos\varphi_2,\sin\varphi_2),
\end{align*}
with the induced metric
\begin{align*}
(g_{ab}) = \frac{1}{2}
\begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix},
\end{align*}
and two orthonormal vectors, normal to the tangent plane of the
surface in $T\mathbb{R}^4$, can be written as
\begin{align*}
N_\pm = x^1\d_1 + x^2\d_2 \pm x^3\d_3 \pm x^4\d_4.
\end{align*}
To construct a matrix regularization for the Clifford torus, one
considers the $N\times N$ matrices $g$ and $h$ with non-zero elements
\begin{align*}
&g_{kk} = \omega^{k-1}\quad\text{ for $k=1,\ldots,N$}\\
&h_{k,k+1} = 1\quad\text{ for $k=1,\ldots,N-1$}\\
&h_{N,1} = 1,
\end{align*}
where $\omega=\exp(i2\theta)$ and $\theta=\pi/N$. These matrices satisfy
the relation $hg=\omega gh$. The map $T^{(N)}$ is then defined on the
Fourier modes
\begin{align*}
Y_{\vec{m}}=e^{i\vec{m}\cdot\vec{\vphi}}=e^{im_1\varphi_1+im_2\varphi_2}
\end{align*}
as
\begin{align*}
T^{(N)}(Y_{\vec{m}}) = \omega^{\frac{1}{2}m_1m_2}g^{m_1}h^{m_2},
\end{align*}
and the pair $(T^{(N)},\hbar=\sin\theta)$ is a unital matrix
regularization of the Clifford torus with respect to
$\sqrt{g}d\varphi_1\wedge d\varphi_2$
\cite{ffz:trigonometric,h:diffeomorphism}. Thus, using this map one
finds that
\begin{align*}
&X^1 = T(x^1) = \frac{1}{\sqrt{2}}T(\cos\varphi_1) = \frac{1}{2\sqrt{2}}(g^\dagger+g)\\
&X^2 = T(x^2) = \frac{1}{\sqrt{2}}T(\sin\varphi_1) = \frac{i}{2\sqrt{2}}(g^\dagger-g)\\
&X^3 = T(x^3) = \frac{1}{\sqrt{2}}T(\cos\varphi_2) = \frac{1}{2\sqrt{2}}(h^\dagger+h)\\
&X^4 = T(x^4) = \frac{1}{\sqrt{2}}T(\sin\varphi_2) = \frac{i}{2\sqrt{2}}(h^\dagger-h)
\end{align*}
which implies that $N_\pm^1=X^1$, $N_\pm^2=X^2$, $N_\pm^3=\pm X^3$ and
$N_\pm^4=\pm X^4$. By a straightforward computation one obtains
\begin{align*}
-\frac{1}{\hbar^2}\sum_{i,j=1}^4[X^i,X^j]^2 = 2\mid
\end{align*}
and therefore
\begin{align*}
\frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,N^j_+][X^j,N^i_+]=-\frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,X^j]^2 = \mid,
\end{align*}
and since $[X^1,X^2]=[X^3,X^4]=0$ it follows that
\begin{align*}
\frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,N^j_-][X^j,N^i_-]
=\frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,X^j]^2 = -\mid.
\end{align*}
This implies that the discrete curvature vanishes, i.e.
\begin{align*}
\hat{K}_N = \frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,N^j_+][X^j,N^i_+]
+\frac{1}{2\hbar^2}\sum_{i,j=1}^4[X^i,N^j_-][X^j,N^i_-] = \mid-\mid = 0,
\end{align*}
which immediately gives $\hat{\chi}_N=0$.
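The commutator identities above can likewise be confirmed numerically. In the following sketch (assuming NumPy), $g$, $h$ and $X^i$ are built exactly as in the text, and both the relation $-\hbar^{-2}\sum[X^i,X^j]^2=2\mid$ and the vanishing of the discrete curvature are checked:

```python
import numpy as np

N = 16
theta = np.pi / N
hbar = np.sin(theta)
w = np.exp(2j * theta)

g = np.diag(w ** np.arange(N))             # g_kk = omega^(k-1)
h = np.eye(N, k=1, dtype=complex)          # h_{k,k+1} = 1
h[-1, 0] = 1                               # h_{N,1} = 1
assert np.allclose(h @ g, w * (g @ h))     # hg = omega*gh

c = 1 / (2 * np.sqrt(2))
X = [c * (g.conj().T + g), 1j * c * (g.conj().T - g),
     c * (h.conj().T + h), 1j * c * (h.conj().T - h)]
n_plus = [X[0], X[1], X[2], X[3]]
n_minus = [X[0], X[1], -X[2], -X[3]]

def comm(A, B):
    return A @ B - B @ A

S = sum(comm(X[i], X[j]) @ comm(X[i], X[j])
        for i in range(4) for j in range(4))
assert np.allclose(-S / hbar**2, 2 * np.eye(N))

K_N = sum(comm(X[i], n_plus[j]) @ comm(X[j], n_plus[i])
          + comm(X[i], n_minus[j]) @ comm(X[j], n_minus[i])
          for i in range(4) for j in range(4)) / (2 * hbar**2)
print(np.linalg.norm(K_N))                 # 0: flat discrete curvature
```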
The following two examples will show that even in the smooth matrix
regularization of the torus it is easy to find sequences that are not
smooth, and that the regularization can be deformed into a non-smooth
matrix regularization.
\begin{example}\label{ex:nonSmoothSequence}
Let $(T_\alpha,\hbar_\alpha)$ be the matrix regularization of the Clifford torus
as in Section \ref{sec:fuzzyTorus}. For each $N$, define the matrix
\begin{align*}
\hat{\theta} = \operatorname{diag}(\hbar^s,0,\ldots,0),
\end{align*}
for some fixed $0<s\leq 1$. Clearly, it holds that
\begin{align*}
\lim_{\alpha\to\infty}\norm{\hat{\theta}-T_\alpha(0)}=\lim_{\alpha\to\infty}\norm{\hat{\theta}}=0,
\end{align*}
i.e. $\hat{\theta}$ $C^0$-converges to $0$. Let us show that $\hat{\theta}$
does not $C^1$-converge to $0$. If $\hat{\theta}$ $C^1$-converges to $0$,
then it must hold that
\begin{align*}
\lim_{\alpha\to\infty}\norm{\frac{1}{i\hbar}[\hat{\theta},T_\alpha(f)]-T_\alpha\paraa{\{0,f\}}}=
\lim_{\alpha\to\infty}\norm{\frac{1}{i\hbar}[\hat{\theta},T_\alpha(f)]}=0
\end{align*}
for all $f\in C^\infty(\Sigma)$.
For $H=2\sqrt{2}T^{(N)}(x^3)=h+h^\dagger$ one
computes the eigenvalues of $A=\frac{1}{i\hbar}[\hat{\theta},H]$ to be
\begin{align*}
\lambda_1=\sqrt{2}\hbar^{s-1},\quad
\lambda_2=-\sqrt{2}\hbar^{s-1},\quad
\lambda_3=\cdots=\lambda_N=0.
\end{align*}
Hence, the norm of $A$ does \emph{not} tend to $0$, which implies
that $\hat{\theta}$ is not $C^1$-convergent.
\end{example}
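\noindent Numerically the obstruction is immediate (a sketch assuming NumPy): the spectral norm of $A$ stays at $\sqrt{2}\,\hbar^{s-1}$ no matter how large $N$ becomes.

```python
import numpy as np

def defect(N, s):
    hbar = np.sin(np.pi / N)
    theta_hat = np.zeros((N, N), dtype=complex)
    theta_hat[0, 0] = hbar**s
    h = np.eye(N, k=1, dtype=complex)
    h[-1, 0] = 1
    H = h + h.conj().T                     # H = 2*sqrt(2)*T(x^3)
    A = (theta_hat @ H - H @ theta_hat) / (1j * hbar)
    return np.linalg.norm(A, 2)            # = sqrt(2) * hbar**(s-1)

for N in (8, 32, 128):
    print(defect(N, 1.0))                  # stays at sqrt(2) ~ 1.414
```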
\begin{example}\label{ex:deformedFuzzyTorus}
Let $(T_\alpha,\hbar_\alpha)$ be the matrix regularization of the Clifford torus
as in Section \ref{sec:fuzzyTorus}. For each $N$, define the matrix
\begin{align*}
\hat{\theta} = \operatorname{diag}(\hbar^s,0,\ldots,0),
\end{align*}
for some fixed $1<s\leq 2$. Let us now deform the fuzzy torus to
obtain a $C^1$-convergent matrix regularization that is not
$C^2$-convergent. Defining
\begin{align*}
S_\alpha(f) = T_\alpha(f) + \mu(f)\hat{\theta},
\end{align*}
where $\mu:C^\infty(\Sigma)\to\mathbb{R}$ is an arbitrary linear
functional, one can readily check that $(S_\alpha,\hbar_\alpha)$ is a $C^1$-convergent
matrix regularization of the Clifford torus. Let us now prove that
$(S_\alpha,\hbar_\alpha)$ is not a $C^2$-convergent matrix regularization, and let us for definiteness
choose $\mu$ to be the evaluation map at $\varphi_1=\varphi_2=0$.
In a $C^2$-convergent matrix regularization it holds that
\begin{align*}
\lim_{\alpha\to\infty}\norm{-\frac{1}{\hbar^2}\Ccom{S_\alpha(u)}{S_\alpha(v)}{S_\alpha(w)}
-S_\alpha\paraa{\{\{u,v\},w\}}}=0,
\end{align*}
for all $u,v,w\in C^\infty(\Sigma)$. Choosing
$u=2\sqrt{2}\cos\varphi_2$ and $v=w=2\sqrt{2}\sin\varphi_2$ gives
$S_\alpha(u)=h^\dagger+h+2\sqrt{2}\hat{\theta}$, $S_\alpha(v)=i(h^\dagger-h)$ and
$\{u,v\}=0$. Thus
\begin{align*}
\lim_{\alpha\to\infty}&\norm{-\frac{1}{\hbar^2}\Ccom{S_\alpha(u)}{S_\alpha(v)}{S_\alpha(w)}
-S_\alpha\paraa{\{\{u,v\},w\}}}\\
&=\lim_{\alpha\to\infty}\frac{2\sqrt{2}}{\hbar^2}\norm{\Ccom{\hat{\theta}}{i(h^\dagger-h)}{i(h^\dagger-h)}}
=\lim_{\alpha\to\infty} 2\sqrt{2}\paraa{2+\sqrt{6}}\hbar^{s-2},
\end{align*}
which does not converge to $0$. Hence, $(S_\alpha,\hbar_\alpha)$ is a
$C^1$-convergent, but not $C^2$-convergent, matrix regularization of the Clifford torus.
\end{example}
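\noindent The divergence rate can be confirmed numerically. In the sketch below (assuming NumPy), the $C^2$-defect is computed from $[[\hat{\theta},i(h^\dagger-h)],i(h^\dagger-h)]$, using that the $h^\dagger+h$ part of $S_\alpha(u)$ commutes with $S_\alpha(v)$, and compared with $2\sqrt{2}(2+\sqrt{6})\hbar^{s-2}$:

```python
import numpy as np

N = 32
s = 1.5
hbar = np.sin(np.pi / N)
theta_hat = np.zeros((N, N), dtype=complex)
theta_hat[0, 0] = hbar**s
h = np.eye(N, k=1, dtype=complex)
h[-1, 0] = 1
B = 1j * (h.conj().T - h)                  # S_alpha(v) = i(h^dagger - h)

def comm(A, C):
    return A @ C - C @ A

defect = 2 * np.sqrt(2) * np.linalg.norm(comm(comm(theta_hat, B), B), 2) / hbar**2
ratio = defect / (2 * np.sqrt(2) * (2 + np.sqrt(6)) * hbar**(s - 2))
print(ratio)                               # 1.0
```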
\subsection{Axially symmetric surfaces in $\mathbb{R}^3$}
\noindent Recall the classical description of general axially symmetric surfaces:
\begin{align}\label{axiallyuvparam}
\vec{x} &= \paraa{f(u)\cos v, f(u)\sin v, h(u)}\\
\vec{n} &= \frac{\pm 1}{\sqrt{h'(u)^2+f'(u)^2}}
\paraa{h'(u)\cos v,h'(u)\sin v,-f'(u)}\notag,
\end{align}
which implies
\begin{align*}
\paraa{g_{ab}}=
\begin{pmatrix}
f'^2+h'^2 & 0 \\
0 & f^2
\end{pmatrix}\qquad
\paraa{h_{ab}} =\frac{\pm 1}{\sqrt{h'^2+f'^2}}
\begin{pmatrix}
h'f''-h''f' & 0\\
0 & -fh'
\end{pmatrix},
\end{align*}
where $h_{ab}$ are the components of the second fundamental form. The
Euler characteristic can be computed as
\begin{align}
\chi = \frac{1}{2\pi}\int K\sqrt{g} =
-\int_{u_-}^{u_+}\frac{h'\paraa{h'f''-h''f'}}{\paraa{f'^2+h'^2}^{3/2}}du
=-\frac{f'}{\sqrt{f'^2+h'^2}}\Bigg|_{u_-}^{u_+},
\end{align}
which is equal to zero for tori (due to periodicity) and equal to $+2$ for spherical surfaces ($f'(u_{\pm})=\mp\infty$ when $h(u)=u$).
While a general procedure for constructing matrix analogues of
surfaces embedded in $\mathbb{R}^3$ was obtained in
\cite{abhhs:noncommutative,abhhs:fuzzy} (cp. also \cite{a:repcalg}),
let us now restrict to $h(u)=u=z$, hence describing the axially
symmetric surface $\Sigma$ as a level set, $C=0$, of
\begin{align}
C(\vec{x}) = \frac{1}{2}\paraa{x^2+y^2-f^2(z)},
\end{align}
in order to carry out the construction in detail and make the resulting
formulas explicit. Defining
\begin{align}
\{F(\vec{x}),G(\vec{x})\}_{\mathbb{R}^3} = \nabla C\cdot\paraa{\nabla F\times\nabla G},
\end{align}
one has
\begin{align}
\{x,y\}=-f\!f'(z),\quad\{y,z\}=x,\quad\{z,x\} = y,
\end{align}
respectively
\begin{align}\label{eq:XYZCommutators}
[X,Y] = -i\hbar f\!f'(Z),\quad [Y,Z]=i\hbar X,\quad [Z,X]=i\hbar Y
\end{align}
for the ``quantized'' (``non-commutative'') surface. In terms of the parametrization given in
(\ref{axiallyuvparam}), the above Poisson bracket is equivalent to
\begin{align}
\{F(u,v),G(u,v)\} = \varepsilon^{ab}\paraa{\d_aF}\paraa{\d_b{G}}
\end{align}
where $\d_1=\d_v$ and $\d_2=\d_u$. By finding matrices of increasing
dimension satisfying (\ref{eq:XYZCommutators}), one can construct a
map $T_\alpha$ having the properties (\ref{eq:matrixProduct}) and
(\ref{eq:matrixCommutator}) of a matrix regularization restricted to
polynomial functions in $x,y,z$ (cp. \cite{a:phdthesis}).
For the round 2-sphere, $f(z)^2=1-z^2$, (\ref{eq:XYZCommutators}) gives
the Lie algebra $su(2)$, and its celebrated irreducible
representations satisfy
\begin{align}\label{eq:su2sumsquare}
X^2+Y^2+Z^2 = \mid\quad\text{if}\quad \hbar=\frac{2}{\sqrt{N^2-1}}.
\end{align}
When $f$ is arbitrary, one can still find finite dimensional
representations of (\ref{eq:XYZCommutators}) as follows: rewrite
(\ref{eq:XYZCommutators}) as
\begin{align}
&[Z,W] = \hbar W\label{eq:ZWCommutator}\\
&[W,W^\dagger] = -2\hbar f\!f'(Z)
\end{align}
implying that $z_i-z_j=\hbar$ whenever $W_{ij}\neq 0$ and $Z$ is
diagonal. Assuming $W=X+iY$ with non-zero matrix elements
$W_{k,k+1}=w_k$ for $k=1,\ldots,N-1$, one thus obtains (with
$w_0=w_N=0$)
\begin{align*}
&Z_{kk} = \frac{\hbar}{2}\paraa{N+1-2k}\\
&w_k^2-w_{k-1}^2=-2\hbar f\!f'\paraa{\hbar(N+1-2k)/2}\equiv Q_k,
\end{align*}
which implies that
\begin{align*}
w_k^2 = \sum_{l=1}^kQ_l
\end{align*}
and the only non-trivial problem is to find the analogue of
(\ref{eq:su2sumsquare}). To this end, define
\begin{align}\label{eq:fhdef}
\hat{f}^2 = X^2+Y^2 = \frac{1}{2}\paraa{WW^\dagger+W^\dagger W},
\end{align}
with $W$ given as above. As $Z$ has pairwise different eigenvalues,
the diagonal matrix given in (\ref{eq:fhdef}) can be thought of as a
function of $Z$; hence as $\hat{f}^2(Z)$. It then trivially holds that
\begin{align}
\hat{C} = X^2+Y^2-\hat{f}^2(Z)=0,
\end{align}
for the representation defined above. The quantization of $\hbar$
comes through the requirement that $\hat{f}^2$ should correspond to
$f^2$. While for the \emph{round} 2-sphere $\hat{f}^2$ equals $f^2$,
provided $\hbar$ is chosen as in (\ref{eq:su2sumsquare}), it is easy
to see that in general they can not coincide, as
\begin{align*}
[X^2+Y^2-&f(Z)^2,W] = [(WW^\dagger+W^\dagger W)/2-f(Z)^2,W]\\
&=\frac{1}{2}W[W^\dagger,W]+\frac{1}{2}[W^\dagger,W]W-f(Z)[f(Z),W]-[f(Z),W]f(Z)\\
&=\cdots=f(Z)\paraa{\hbar f'(Z)W-[f(Z),W]}+\paraa{\hbar f'(Z)W-[f(Z),W]}f(Z)
\end{align*}
with off-diagonal elements
\begin{align*}
\paraa{f(z_k)+f(z_{k-1})}\paraa{\hbar f'(z_k)-(f(z_k)-f(z_{k-1}))}
\end{align*}
that are in general non-zero (hence $X^2+Y^2-f^2(Z)$ is usually not
even a Casimir, except in leading order).
How it \emph{does} work is perhaps best illustrated by a non-trivial example, $f(z)=1-z^4$:
\begin{align}
w_k^2 =\frac{\hbar^4}{2}&\parab{(N+1)^3k-3(N+1)^2k(k+1)+\label{eq:wk}\\
&2(N+1)k(k+1)(2k+1)-2k^2(k+1)^2}\notag\\
\hat{f}_k^2 = \frac{1}{2}(w_k^2&+w^2_{k-1}) =
\frac{\hbar^4}{4}\parab{(N+1)^3(2k-1)-6(N+1)^2k^2\notag\\ &\qquad\qquad+4(N+1)k(2k^2+1)-4k^2(k^2+1)}\notag
\end{align}
(note that $w^2_0=w_N^2=0$ is explicit in (\ref{eq:wk})) so that
\begin{align}
\paraa{X^2+Y^2+Z^4}_{kk} = \hbar^4\bracketc{\frac{(N+1)^4}{16}-\frac{(N+1)^3}{4}+k(N+1)-k^2}.
\end{align}
Expressing the last two terms via $Z^2$ (note that the cancellation of
$k^3$ and $k^4$ terms shows the absence of $Z^3$ and higher
corrections) one finds
\begin{align*}
X^2+Y^2+Z^4+\hbar^2Z^2 &= \hbar^4\frac{(N+1)^2}{16}\parab{(N+1)^2-4(N+1)+4}\mid\\
&=\hbar^4\frac{(N^2-1)^2}{16}\mid,
\end{align*}
which equals $\mid$ if $\hbar$ is chosen as $2/\sqrt{N^2-1}$. Note
that this is the \emph{same} expression for $\hbar$ as for the round
sphere, $f^2=1-z^2$ (cp. (\ref{eq:su2sumsquare})).
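The construction above is easy to check numerically. The following Python/NumPy sketch (an illustration of ours, not from the text) builds $Z$ and $W$ from the recursion for $w_k^2$ with $f^2(z)=1-z^4$ and verifies the quantum Casimir relation:

```python
import numpy as np

def quantized_surface(N, hbar):
    """Z, W for f(z)^2 = 1 - z^4, i.e. f f'(z) = -2 z^3, via the w_k recursion."""
    k = np.arange(1, N + 1)
    z = 0.5 * hbar * (N + 1 - 2 * k)     # Z_kk = hbar*(N+1-2k)/2
    Q = -2 * hbar * (-2 * z**3)          # Q_k = -2*hbar*(f f')(z_k)
    w2 = np.cumsum(Q)[:-1]               # w_k^2 for k = 1..N-1 (w_N^2 = 0)
    Z = np.diag(z)
    W = np.diag(np.sqrt(w2), 1)          # W_{k,k+1} = w_k
    return Z, W

N = 9
hbar = 2 / np.sqrt(N**2 - 1)
Z, W = quantized_surface(N, hbar)
X = (W + W.conj().T) / 2
Y = (W - W.conj().T) / 2j

assert np.allclose(Z @ W - W @ Z, hbar * W)     # [Z,W] = hbar*W
# for these matrices [X,Y] = 2i*hbar*Z^3 (= -i*hbar*(f f')(Z) for this f)
assert np.allclose(X @ Y - Y @ X, 2j * hbar * np.linalg.matrix_power(Z, 3))
# the quantum Casimir X^2 + Y^2 + Z^4 + hbar^2 Z^2
C = X @ X + Y @ Y + np.linalg.matrix_power(Z, 4) + hbar**2 * Z @ Z
assert np.allclose(C, hbar**4 * (N**2 - 1)**2 / 16 * np.eye(N))
assert np.allclose(C, np.eye(N))                # since hbar = 2/sqrt(N^2-1)
```

Note that $X^2+Y^2=(WW^\dagger+W^\dagger W)/2$ is exactly diagonal here, so the Casimir check reduces to the diagonal identity derived above.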
A more elegant way to derive the quantum Casimir (cp. also \cite{r:repnonlinear,gps:beyondfuzzy})
\begin{align}
Q = X^2+Y^2+Z^4+\hbar^2Z^2
\end{align}
is to calculate
\begin{align*}
[X^2+Y^2+Z^4,W] &= [(WW^\dagger+W^\dagger W)/2+Z^4,W]\\
&= \cdots = \hbar^2[W,Z^2],
\end{align*}
which determines the terms proportional to $\hbar$ in the Casimir.
Due to the general formula
\begin{align}
\hat{K} = -\frac{1}{8\hbar^4}\varepsilon_{jkl}\varepsilon_{ipq}(\hat{\gamma}^\dagger)^{-2}\coma{X^i,[X^k,X^l]}\coma{X^j,[X^p,X^q]}\hat{\gamma}^{-2}
\end{align}
one obtains, for the axially symmetric surfaces discussed above,
\begin{align}
\hat{K} = \hat{\gamma}^{-2}\parac{(f\!f')^2(Z)+\frac{1}{2\hbar}[W,f\!f'(Z)]W^\dagger+\frac{1}{2\hbar}W^\dagger[W,f\!f'(Z)]}\hat{\gamma}^{-2}
\end{align}
with
\begin{align}
\hat{\gamma}^2 = \frac{1}{2}\paraa{WW^\dagger+W^\dagger W}+(f\!f')^2(Z)
=f(Z)^2\paraa{f'(Z)^2+\mid} + O(\hbar),
\end{align}
giving
\begin{align}
&\hat{K} = -\paraa{f'(Z)^2+\mid}^{-2}f(Z)^{-1}f''(Z) + O(\hbar)
\end{align}
and for $f(z)^2=1-z^4$ one has
\begin{align}
&\hat{K} = \paraa{4Z^6+\mid-Z^4}^{-2}\paraa{6Z^2-2Z^6}+O(\hbar)\\
&\hat{\gamma}^2 = \mid-Z^4+4Z^6+O(\hbar).
\end{align}
Note that (cp. (\ref{eq:ZWCommutator}))
$z_j-z_{j-1}=\hbar$ for arbitrary $f$, and that (due to the axial
symmetry) $\hat{K}$ and $\hat{\gamma}^2$ are \emph{diagonal} matrices, so that
\begin{align*}
\hat{\chi} = \hbar\operatorname{Tr}\paraa{\sqrt{\hat{\gamma}^2}\hat{K}},
\end{align*}
in this case simply being a Riemann sum approximation of $\int
K\sqrt{g}$, indeed converges to 2, the Euler characteristic of
spherical surfaces.
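To leading order in $\hbar$, $\sqrt{\hat\gamma^2}\hat K$ is the diagonal matrix $-f''(Z)\paraa{1+f'(Z)^2}^{-3/2}$, and $\int_{-1}^{1}\paraa{-f''/(1+f'^2)^{3/2}}dz=2$. A short Python check of this Riemann-sum picture (an illustration of ours, using only the leading-order expressions) for $f(z)^2=1-z^4$:

```python
import numpy as np

def gamma_K(z):
    """Leading-order gamma*K = -f''/(1+f'^2)^(3/2) for f(z) = sqrt(1-z^4)."""
    f = np.sqrt(1 - z**4)
    fp = -2 * z**3 / f
    fpp = -6 * z**2 / f - 4 * z**6 / f**3
    return -fpp / (1 + fp**2) ** 1.5

def euler_char(N):
    """hbar * Tr(sqrt(gamma^2) K) evaluated on the spectrum of Z."""
    hbar = 2 / np.sqrt(N**2 - 1)
    z = 0.5 * hbar * (N + 1 - 2 * np.arange(1, N + 1))
    return hbar * np.sum(gamma_K(z))

chis = [euler_char(N) for N in (100, 1000, 10000)]
assert all(abs(c - 2) < 0.05 for c in chis)
assert abs(chis[-1] - 2) < 0.01     # converges to chi(S^2) = 2
```

Since the eigenvalues of $Z$ stay strictly inside $(-1,1)$, the summand is bounded and the Riemann sum converges without any regularization at the poles.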
\subsection{A bound on the eigenvalues of the matrix Laplacian}\label{sec:laplaceBound}
\noindent As we have shown, many of the objects in differential
geometry can be expressed in terms of Nambu brackets. Let us now
illustrate, in the case of surfaces, that some of the techniques used
to prove classical theorems can be implemented for matrix
regularizations. In particular, let us prove that a lower bound on the
discrete Gaussian curvature induces a lower bound for the eigenvalues
of the discrete Laplacian. For simplicity, we shall consider the case
when $M=\mathbb{R}^m$ and, in the following, all repeated indices are
assumed to be summed over the range $1,\ldots,m$.
Let us start by introducing the matrix analogue of the
operator $D^i$:
\begin{align*}
\Dh_\alpha^i(X) = \frac{1}{i\hbar_\alpha}\gammah_\alpha^{-1}[X,X_\alpha^i].
\end{align*}
These operators obey a rule of ``partial integration'', namely
\begin{align}\label{eq:discretePartialInt}
\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(X)Y} = -\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(Y)X},
\end{align}
which is in analogy with the fact that
\begin{align*}
\int_\Sigma \paraa{\gamma D^i(f)h}\omega = -\int_\Sigma\paraa{\gamma D^i(h)f}\omega.
\end{align*}
In view of Proposition \ref{prop:covderivFormulas}, it is natural to
make the following definition:
\begin{definition}
Let $(T_\alpha,\hbar_\alpha)$ be a matrix regularization of $(\Sigma,\omega)$. The
\emph{Discrete Laplacian on $\Sigma$} is a sequence
$\{\Deltah_\alpha\}$ of linear maps defined as
\begin{align*}
\Deltah_\alpha(X) = \Dh_\alpha^j\Dh_\alpha^j(X) =
-\frac{1}{\hbar_\alpha^2}\gammah_\alpha^{-1}\big[\gammah_\alpha^{-1}[X,X_\alpha^j],X_\alpha^j\big],
\end{align*}
where $X$ is an $N_\alpha\times N_\alpha$ matrix. An \emph{eigenmatrix sequence
of $\Deltah_\alpha$} is a convergent sequence $\{\uh_\alpha\}$ such that
$\Deltah_\alpha(\uh_\alpha)=\lambda_\alpha\uh_\alpha$ for all $\alpha$ and
$\displaystyle\lim_{\alpha\to\infty}\lambda_\alpha=\lambda$.
\end{definition}
\begin{proposition}
A $C^2$-convergent eigenmatrix sequence of $\Deltah_\alpha$ converges to
an eigenfunction of $\Delta$ with eigenvalue
$\lambda=\displaystyle\lim_{\alpha\to\infty}\lambda_\alpha$.
\end{proposition}
\begin{proof}
Given the assumption that $\uh_\alpha$ is a $C^2$-convergent matrix
sequence converging to $u$, we want to prove that $\Delta u-\lambda
u=0$. By Proposition \ref{prop:uniqueNormZero} this is equivalent to
proving that $\lim_{\alpha\to\infty}\norm{T_\alpha(\Delta u-\lambda u)}=0$. One obtains
\begin{align*}
\lim_{\alpha\to\infty}&\norm{T_\alpha(\Delta u-\lambda u)} =
\lim_{\alpha\to\infty}\norm{T_\alpha(\Delta u)-\Deltah_\alpha\uh_\alpha+\Deltah_\alpha\uh_\alpha-\lambda T_\alpha(u)+\lambda\uh_\alpha-\lambda\uh_\alpha}\\
&\leq\lim_{\alpha\to\infty}\parac{\norm{T_\alpha(\Delta u)-\Deltah_\alpha\uh_\alpha}+|\lambda|\norm{-T_\alpha(u)+\uh_\alpha}+\norm{\Deltah_\alpha\uh_\alpha-\lambda\uh_\alpha}}\\
&=\lim_{\alpha\to\infty}\norm{\Deltah_\alpha\uh_\alpha-\lambda\uh_\alpha}
\leq\lim_{\alpha\to\infty}\parab{\norm{\Deltah_\alpha\uh_\alpha-\lambda_\alpha\uh_\alpha}+|\lambda-\lambda_\alpha|\norm{\uh_\alpha}}=0,
\end{align*}
since $\Deltah_\alpha\uh_\alpha-\lambda_\alpha\uh_\alpha=0$ and $\lambda_\alpha$ converges to $\lambda$.
\end{proof}
\noindent The way curvature is introduced in the classical proof of
the bound on the eigenvalues, is through the commutation of covariant
derivatives. Let us state the corresponding result for matrix
regularizations.
\begin{proposition}\label{prop:curvatureeq}
Let $(T_\alpha,\hbar_\alpha)$ be a $C^2$-convergent matrix regularization of
$(\Sigma,\omega)$. If $\{\uh_\alpha\}$ is a $C^3$-convergent matrix
sequence then
\begin{align*}
\lim_{\alpha\to\infty}&\Big|\Big|
\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)
-\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^j\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)\\
&\qquad-\ldbrack\Dh_\alpha^i,\Dh_\alpha^j\rdbrack(\uh_\alpha)\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)
+\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\Big|\Big|=0,
\end{align*}
where $\ldbrack\cdot,\cdot\rdbrack$ denotes the commutator with
respect to composition of maps.
\end{proposition}
\begin{proof}
The result follows immediately from Proposition
\ref{prop:covariantDRicci} and Proposition
\ref{prop:arbitrarySequences}. Note that in the case of surfaces it
holds that $\mathcal{R}_{ab}=Kg_{ab}$, where $K$ is the Gaussian curvature of
$\Sigma$.
\end{proof}
\noindent A useful corollary is the following:
\begin{proposition}\label{prop:intCurvatureEq}
Let $(T_\alpha,\hbar_\alpha)$ be a $C^2$-convergent matrix regularization of
$(\Sigma,\omega)$. If $\{\uh_\alpha\}$ is a $C^2$-convergent matrix sequence then
\begin{align*}
\lim_{\alpha\to\infty}&\hbar_\alpha\operatorname{Tr}\parab{\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}=\\
&\lim_{\alpha\to\infty}\hbar_\alpha\operatorname{Tr}\parab{\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)
-\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}
\end{align*}
\end{proposition}
\begin{proof}
It follows from Proposition \ref{prop:curvatureeq} that for a
$C^3$-convergent sequence $\uh_\alpha$ it holds that
\begin{align*}
&\lim_{\alpha\to\infty}\ha\operatorname{Tr}\Big(\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)
-\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\\
&\qquad-\gammah_\alpha\ldbrack\Dh_\alpha^i,\Dh_\alpha^j\rdbrack(\uh_\alpha)\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)
+\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\Big)=0.
\end{align*}
Due to the appearance of a trace, the above holds even for
$C^2$-convergent sequences, since e.g.
\begin{align*}
\ha\operatorname{Tr}\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)
=-\ha\operatorname{Tr}\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha),
\end{align*}
and the latter expression only requires $C^2$-convergence. Thus, one obtains
\begin{align*}
\lim_{\alpha\to\infty}&\hbar_\alpha\operatorname{Tr}\parab{\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}
=\lim_{\alpha\to\infty}\hbar_\alpha\operatorname{Tr}\Big(\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\\
&+\gammah_\alpha\ldbrack\Dh_\alpha^i,\Dh_\alpha^j\rdbrack(\uh_\alpha)\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)
-\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\Big)\\
&=\lim_{\alpha\to\infty}\hbar_\alpha\operatorname{Tr}\parab{\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)
-\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)},
\end{align*}
by using equation (\ref{eq:discretePartialInt}).
\end{proof}
\begin{proposition}\label{prop:laplaceDtwoineq}
Let $(T_\alpha,\hbar_\alpha)$ be a matrix regularization of
$(\Sigma,\omega)$. If $\{\uh_\alpha\}$ is a $C^2$-convergent matrix sequence then
\begin{align*}
\lim_{\alpha\to\infty} \hbar_\alpha\operatorname{Tr}\parab{\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)}\geq
\frac{1}{2}\lim_{\alpha\to\infty} \hbar_\alpha\operatorname{Tr}\paraa{\Deltah_\alpha(\uh_\alpha)}^2.
\end{align*}
\end{proposition}
\begin{proof}
By using the fact that $|\nabla^2u|^2\geq \frac{1}{2}(\Delta u)^2$
(for 2-dimensional manifolds) one obtains
\begin{align*}
\lim_{\alpha\to\infty}&\hbar_\alpha\operatorname{Tr}\parab{\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)}
=\frac{1}{2\pi}\int_\Sigma|\nabla^2u|^2\omega
\geq\frac{1}{4\pi}\int_\Sigma(\Delta u)^2\omega\\
&=\lim_{\alpha\to\infty}\frac{1}{2}\hbar_\alpha\operatorname{Tr}\paraa{\Deltah_\alpha(\uh_\alpha)}^2,
\end{align*}
since $\uh_\alpha$ is assumed to $C^2$-converge to $u$.
\end{proof}
\begin{theorem}
Let $(T_\alpha,\hbar_\alpha)$ be a $C^2$-convergent matrix regularization of
$(\Sigma,\omega)$ and let $\{\uh_\alpha\}$ be a $C^2$-convergent
eigenmatrix sequence of $\Deltah_\alpha$ with eigenvalues $\{-\lambda_\alpha\}$. If
$\hat{K}_\alpha\geq\kappa\mid_{N_\alpha}$ for some $\kappa\in\mathbb{R}$ and all
$\alpha>\alpha_0$, then $\displaystyle\lim_{\alpha\to\infty}\lambda_\alpha\geq
2\kappa$.
\end{theorem}
\begin{proof}
Let $\{\uh_\alpha\}$ be a hermitian eigenmatrix sequence of $\Deltah_\alpha$ with
eigenvalues $\{-\lambda_\alpha\}$. First, one rewrites
\begin{equation}\label{eq:LasqDtwo}
\begin{split}
\operatorname{Tr}\gammah_\alpha\Deltah_\alpha(\uh_\alpha)^2 &= \operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)}\\
&= -\lambda_\alpha\operatorname{Tr}\paraa{\uh_\alpha\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^i(\uh_\alpha)}
= \lambda_\alpha\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}.
\end{split}
\end{equation}
Then, one makes use of Proposition \ref{prop:intCurvatureEq} to write
\begin{align*}
\limainfty&\hbar_\alpha\operatorname{Tr}\gammah_\alpha\Deltah_\alpha(\uh_\alpha)^2 = -\limainfty\hbar_\alpha\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i\Dh_\alpha^j\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}\\
&=\limainfty\hbar_\alpha\operatorname{Tr}\Big(-\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)+\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)\Big)\\
&=\limainfty\hbar_\alpha\operatorname{Tr}\parab{\gammah_\alpha\Dh_\alpha^j\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i\Dh_\alpha^j(\uh_\alpha)+\gammah_\alpha\hat{K}_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}.
\end{align*}
Using the assumption that $\hat{K}_\alpha\geq\kappa\mid$ together with
Proposition \ref{prop:laplaceDtwoineq} one obtains
\begin{align*}
\limainfty\hbar_\alpha\operatorname{Tr}\gammah_\alpha\Deltah_\alpha(\uh_\alpha)^2 &\geq
\limainfty\hbar_\alpha\operatorname{Tr}\parac{\frac{1}{2}\gammah_\alpha\Deltah_\alpha(\uh_\alpha)^2+\kappa\gammah_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}\\
&=\limainfty\parac{\frac{1}{2}\lambda_\alpha+\kappa}\hbar_\alpha\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)},
\end{align*}
where (\ref{eq:LasqDtwo}) has been used. One can now compare the
above inequality with (\ref{eq:LasqDtwo}) to obtain
\begin{align*}
\frac{1}{2}(\lambda-2\kappa)\limainfty\hbar_\alpha\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}\geq 0.
\end{align*}
Since
\begin{align*}
\limainfty\hbar_\alpha\operatorname{Tr}\paraa{\gammah_\alpha\Dh_\alpha^i(\uh_\alpha)\Dh_\alpha^i(\uh_\alpha)}
=\frac{1}{2\pi}\int_\Sigma\gamma|\nabla u|^2\omega\geq 0,
\end{align*}
due to the fact that $\gamma$ is a positive function, it follows
that $\lambda\geq 2\kappa$.
\end{proof}
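For the fuzzy (round) sphere the bound can be checked explicitly: there $\hat\gamma_\alpha$ is the identity, $\hat K_\alpha$ is the identity up to $O(\hbar)$ (so $\kappa=1$), and with $X^j=\hbar L_j$ the rescaled spin matrices the discrete Laplacian reduces to $\Phi\mapsto-\sum_j[L_j,[L_j,\Phi]]$, whose spectrum is $-\ell(\ell+1)$, $\ell=0,\ldots,N-1$; the smallest non-zero $\lambda=2$ saturates $\lambda\geq 2\kappa$. A Python sketch of ours (illustrative, not from the text):

```python
import numpy as np

def spin(N):
    """Standard spin-j matrices, j = (N-1)/2, with [Lx,Ly] = i*Lz etc."""
    j = (N - 1) / 2
    m = j - np.arange(N)
    Lp = np.diag(np.sqrt(j*(j+1) - m[1:]*(m[1:] + 1)), 1)
    return [(Lp + Lp.conj().T) / 2, (Lp - Lp.conj().T) / 2j, np.diag(m)]

N = 6
I = np.eye(N)
# superoperator Phi -> sum_j [L_j, [L_j, Phi]] acting on row-major vec(Phi):
# ad_L corresponds to kron(L, I) - kron(I, L.T)
mats = []
for L in spin(N):
    M = np.kron(L, I) - np.kron(I, L.T)
    mats.append(M @ M)
ad2 = sum(mats)
eigs = np.sort(np.linalg.eigvalsh(ad2))

# spectrum l(l+1), l = 0..N-1, with multiplicity 2l+1
expected = np.sort(np.concatenate([[l * (l + 1)] * (2 * l + 1) for l in range(N)]))
assert np.allclose(eigs, expected)

# discrete Laplacian eigenvalues are -lambda with lambda = l(l+1);
# the smallest non-zero lambda is 2 = 2*kappa for kappa = 1 (unit sphere)
nonzero = eigs[eigs > 1e-9]
assert np.isclose(nonzero.min(), 2.0)
```

This matches the classical Lichnerowicz-type picture: on the round sphere the first non-zero eigenvalue of the Laplacian saturates the curvature bound.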
\noindent Although the above proof depends on the fact that the matrix
regularization is associated to a surface (and therefore, the results
of differential geometry can be employed), we believe that, under
suitable conditions on the matrix algebra, there exists a proof that
is independent of this correspondence.
\section*{Acknowledgments}
\noindent J.A. would like to thank the Institut des Hautes \'Etudes
Scientifiques for hospitality and H. Shimada for discussions on matrix
regularizations, while J.H. thanks M. Bordemann for many discussions
on related topics (and for switching talks at the October 2009 AEI
workshop ``Membranes, Minimal Surfaces and Matrix Limits'').
\bibliographystyle{alpha}
Q: If $S$ is a minimum $k$-restricted-edge-cut of $G$, then must $G-S$ have exactly two connected components? An edge cut is a set of edges that, if removed from a connected graph, will disconnect the graph.
For a connected graph $G=(V,E)$, an edge set $S \subset E$ is a $k$-restricted-edge-cut, if $G-S$ is disconnected and every connected component of $G-S$ has at least $k$ vertices.
We can see that the concept of $k$-restricted-edge-cuts is a generalization of edge cuts. The restricted edge connectivity was proposed by Esfahanian and Hakimi in [1].
[1] Esfahanian A H, Hakimi S L. On computing a conditional edge-connectivity of a graph[J]. Information processing letters, 1988, 27(4): 195-199. doi.org/10.1016/0020-0190(88)90025-7
PS (Off-topic remarks): They give a polynomial algorithm for computing
a minimum $2$-restricted-edge-cut of graphs. But I feel that it may not be
correct. I have been trying to implement this algorithm, but failed; see

- How-To-Compute-The-Restricted edge connectivity
For example, we have a graph as follows.
Then we can check that the edge set $\{(0,1),(0,2)\}$ is a minimum $2$-restricted-edge-cut of the above graph (we note that a cut edge such as $(0,4)$ or $(0,5)$ is not a $2$-restricted-edge-cut).
A minimal edge cut is an edge cut no proper subset of which is also an edge cut.
I know that the following claim holds.
- An edge cut $S$ is a minimal edge cut if and only if $G-S$ has exactly two connected components.
So a natural question is:
Question. If $S$ is a minimum (or minimal) $k$-restricted-edge-cut of $G$, then must $G-S$ have exactly two connected components?
I am looking for counterexamples (or a proof).
A: For brevity, use "CC" for "connected component".
An edge cut $S$ is a minimal edge cut if and only if $G-S$ has exactly two CCs.

While the "only if" direction is true, the "if" direction is wrong since $S$ may contain redundant edges. For example, consider a graph with vertices $a,b,c,d$ and edges $ab,bc,cd,db$. The edge cut $\{ab, bc\}$ is not minimal since $\{ab\}$ is an edge cut, too. Both edge sets "cut" the graph into two CCs.
If $S$ is a minimal $k$-restricted-edge-cut of $G$, then must $G−S$ have exactly two CCs?
Of course. Suppose not; then $G-S$ has at least three CCs. Since $G$ is connected, there is an edge $e$ in $G$ between two of those CCs. $e$ must be in $S$. Consider $S'=S-\{e\}$. Here are the CCs of $G-S'$:

- every CC of $G-S$ other than the two CCs connected by $e$;
- the union of those two CCs of $G-S$ connected by $e$, together with $e$.

So $G-S'$ has at least two CCs, which means $G-S'$ is disconnected. Each CC of $G-S'$ is either a CC of $G-S$ or the union of two CCs of $G-S$ together with $e$, so each has at least $k$ vertices. Hence $S'$ is also a $k$-restricted-edge-cut, which contradicts the minimality of $S$.
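The argument can also be checked by brute force on small graphs. The script below is an illustration of mine (the graph is a made-up example, not the one from the question's missing figure): it enumerates all minimal $2$-restricted-edge-cuts and confirms each leaves exactly two components.

```python
from itertools import combinations

def components(n, edges):
    """Connected components of a graph on vertices 0..n-1 (simple DFS)."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_k_restricted_cut(n, edges, S, k):
    comps = components(n, [e for e in edges if e not in S])
    return len(comps) > 1 and all(len(c) >= k for c in comps)

def minimal_k_restricted_cuts(n, edges, k):
    """All minimal k-restricted-edge-cuts. Testing only one-edge-smaller
    subsets suffices: any superset (inside S) of a k-restricted cut is one."""
    cuts = []
    for r in range(1, len(edges) + 1):
        for T in combinations(edges, r):
            S = set(T)
            if is_k_restricted_cut(n, edges, S, k) and \
               not any(is_k_restricted_cut(n, edges, S - {e}, k) for e in S):
                cuts.append(S)
    return cuts

# two triangles {0,1,2} and {3,4,5} joined by the edges (2,3) and (0,5)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3), (0, 5)]
cuts = minimal_k_restricted_cuts(6, edges, 2)
assert any(S == {(2, 3), (0, 5)} for S in cuts)
for S in cuts:
    assert len(components(6, [e for e in edges if e not in S])) == 2
```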
Q: Android: ArrayList is not cleared using clear() method in DatePickerDialog Some phones give me a problem when clearing an ArrayList. I use the "arrayList.clear()" method to clear the list. It works fine on my phone, but on my client's phone the data in the list is shown twice, while on my phone it is shown only once. Can anyone give me a solution? Thanks in advance.
In the code below, when you select a date, the ArrayList is cleared and a task is run to download data from the API. Every time you select a date, the list is cleared and the task runs again. Please give me a solution for the error mentioned above.
public void daatepopup() {
DatePickerDialog d = new DatePickerDialog(ResultChartActivity.this,
new DatePickerDialog.OnDateSetListener() {
@Override
public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) {
tvDate.setText(year + "/" + (monthOfYear + 1) + "/" + dayOfMonth);
date = tvDate.getText().toString();
if (date.equals("")) {
Toast.makeText(ResultChartActivity.this, "Select Date For Result Chart", Toast.LENGTH_LONG).show();
} else {
if (NetworkAvailablity.chkStatus(ResultChartActivity.this)) {
arrayList.clear();
new GetResultChart().execute();
} else {
Toast.makeText(ResultChartActivity.this, "Check Internet Connection !", Toast.LENGTH_LONG).show();
}
}
}
}, year, month, day);
d.show();
}
package stats
import (
"fmt"
"github.com/docker/docker/api/types"
)
// dockerStatsToContainerStats returns a new ContainerStats object built from
// the docker stats. The stats are validated first, since the cpu usage
// calculation below divides by the number of cores.
func dockerStatsToContainerStats(dockerStats *types.StatsJSON) (*ContainerStats, error) {
	if err := validateDockerStats(dockerStats); err != nil {
		return nil, err
	}
	cpuUsage := (dockerStats.CPUStats.CPUUsage.TotalUsage * 100) / numCores
	memoryUsage := dockerStats.MemoryStats.PrivateWorkingSet
	networkStats := getNetworkStats(dockerStats)
	storageReadBytes := dockerStats.StorageStats.ReadSizeBytes
	storageWriteBytes := dockerStats.StorageStats.WriteSizeBytes
	return &ContainerStats{
		cpuUsage:          cpuUsage,
		memoryUsage:       memoryUsage,
		timestamp:         dockerStats.Read,
		storageReadBytes:  storageReadBytes,
		storageWriteBytes: storageWriteBytes,
		networkStats:      networkStats,
	}, nil
}
// validateDockerStats returns an error when the reported statistics are
// unusable, e.g. when no cpu core count is available (which would lead to a
// division by zero in the cpu usage calculation).
func validateDockerStats(dockerStats *types.StatsJSON) error {
	if numCores == uint64(0) {
		return fmt.Errorf("invalid container statistics reported, no cpu core usage reported")
	}
	return nil
}
Players Aston Villa Could Sign In January Transfer Window
December 9, 2019 (updated December 11, 2019) · SUBHAM · Aston Villa
The 2020 January transfer window is just around the corner, and Aston Villa will be considering what business they will do. The Lions are in the midst of their first season back in the Premier League, and they will be hoping to have a better window than in previous years. In this article, we will have a look at some of the players Aston Villa could sign in the January transfer window.
Here are the 8 players Aston Villa could sign in the January transfer window –
8. Danny Loader
Highly rated at Reading, Danny Loader has often been linked with moves to the top flight over recent years.
Just 19 years of age, he would fit the mould in terms of Villa's transfer strategy under Smith and Christian Purslow – signing younger players with potential, that can accrue high re-sale value.
He was very close to a move to the Midlands over the summer, with the youngster seemingly set for Wolves, only for that deal to fall through.
He's out of contract this summer, so Reading are likely to entertain offers if a new contract can't be agreed.
The only issue Villa may have is other clubs battling for his services, with Manchester United one of the latest sides linked with a move for the promising striker.
7. Rhian Brewster
Liverpool youngster Rhian Brewster could be available for grabs in the winter transfer window. Although it is understood that Villa don't plan to splash the cash to the same extent as in the summer, a loan move could be viable.
According to the Mirror, Villa are keen to take the 19-year-old striker on loan. Dean Smith is among a host of Premier League and Championship managers said to be keeping tabs on Brewster.
Villa fans will be able to get a close look at Brewster later this month with the forward expected to feature in a second string Reds side in the Carabao Cup quarter final at Villa Park.
6. Aleksandar Mitrovic
Villa are scouring the market for a new forward to support Wesley Moraes as January draws nearer and Darren Bent thinks Mitrovic could be the answer.
The 25-year-old has been in stunning form for Fulham this season.
Former Villa frontman Bent told talkSPORT: "Come January, Mitrovic might be gone. I can see a few Premier League teams (taking an interest), like Aston Villa whose centre forwards aren't scoring many goals.
"These teams should look at him because he's too good for the Championship. He's a big, strong powerful target man who scores goals when given the service."
5. Alfredo Morelos
Colombian striker Alfredo Morelos is another man who's been linked with a move to B6 this winter, and it's hard to argue with his goal record so far this season.
He's banged in 22 goals in just 26 appearances for Rangers this season, although that does come with the caveat that he's playing in the Scottish top flight, which is not at the level of the Premier League.
Regardless, the 23-year-old's stock is at an all-time high, and although Rangers boss Steven Gerrard is confident a move won't happen, that won't stop clubs sniffing around.
The striker himself hinted that he'd fancy a move to a more competitive league when speaking on Colombian radio.
4. Said Benrahma
Villa boss Dean Smith knows the Algerian winger very well, having signed him for Brentford from Nice when he was in charge at Griffin Park.
Smith was eager to bring the 24-year-old to Villa Park during the summer, as he prepared his squad for a return to the Premier League, but the club failed to agree a deal with the Bees, who are said to value him at £20m.
Sky Sports suggest Smith could make a fresh bid for Benrahma when the January transfer window opens for business, although the price tag might once again prove a stumbling block.
3. Nico Gaitan
Reports in The Guardian claimed the Argentina international is on the radar of Villa and Premier League rivals West Ham United and Sheffield United.
It was reported that the Hammers hierarchy held preliminary talks with Gaitan's representatives about a move, with the 31-year-old most recently of MLS side Chicago Fire.
However, it has now emerged that Gaitan is in talks over a return to his homeland with Boca Juniors.
2. Christoph Monschein
Reports on the continent suggested the Austria international could be in line for a move to Villa for a cut-price €5 million fee.
He has scored 13 goals in 18 games this season for Austria Wien – and it is claimed that Dean Smith's side have sent scouts to watch him in action.
1. Pedro
Aston Villa are preparing to make a shock move for Chelsea star Pedro, according to reports. The Spanish winger is out of contract at the end of the season and could be available for a cut-priced fee.
Pedro, 32, has featured in just one match since the end of October and is clearly down on Frank Lampard's pecking order – his only goal this term came from the penalty spot against Grimsby.
Christian Pulisic has hit form after arriving from Borussia Dortmund in the summer while Willian is still consistently selected.
Callum Hudson-Odoi is back from injury and is also pressing for a starting spot, coming off the bench in six of the last seven Premier League matches.
The added benefit of Chelsea's transfer ban being lifted, allowing them to buy players in January, is set to be further bad news for Pedro if Lampard decides to recruit in the forward positions.
However, he would add plenty of experience to an Aston Villa squad looking to avoid immediate relegation to the Championship.
Q: user presses back button on device (emulator) control Is there any way to control the previous activity when the user presses the back button on the device?
For example, setting a TextView in the previous activity to clear when the user does so, or ending the previous activity.
Thanks
A: Of course! Android provides some built-in methods that are invoked when events (like pressing the back button and going back to a previous activity) take place. These methods are detailed in the Android Activity life cycle.
One way to solve your problem would be by using the onResume method. Inside this method you can set your TextView to whatever you want once the activity is resumed.
protected void onResume() {
super.onResume();
//now perform any operations on your textview
//for example
fooTextView.setText("I resumed my activity");
}
One side note: this code will also get fired when the user starts the Activity for the first time. You may need to prevent this from happening via a boolean value, or use an alternative method to solve your problem, like the method onActivityResult, if you feel it is appropriate.
Friday - November 10, 2017
A versatile class of plastics celebrates its anniversary
80 years of polyurethane
Covestro drives the success story forward / Innovative and sustainable projects that make the world a brighter place
Polyurethanes have changed the world. We have them to thank for energy-efficient refrigerators, comfortable upholstered furniture, safe car seats, protective coatings and lightweight composites (see video at https://youtu.be/Brwl9ASvSJs). 80 years ago, Dr. Otto Bayer discovered polyurethane chemistry virtually by accident. His perseverance and creativity launched the sweeping success of one of the world's most versatile plastics – and the success story is far from over.
"With curiosity and courage Covestro is advancing the development of polyurethanes to make the world a brighter place," says Daniel Meyer, Global Head of the Polyurethanes segment. "We don't leave anything to chance but are specifically pushing existing boundaries to make more efficient insulating materials, even lighter materials, and even more resource-saving products possible."
All new developments must meet Covestro's sustainability targets. "We take a comprehensive approach to the entire product life cycle, including social, ecological and economic aspects," says Daniel Meyer. "Our products are manufactured on the basis of carbon. Our goal is to draw the maximum benefit from the carbon we use."
Even more efficient refrigerators
Polyurethanes make an important contribution to securing global food supplies: Some 95 percent of the world's refrigerators are thermally insulated with rigid polyurethane foam – and the Baytherm® Microcell polyurethane system can raise their insulating performance by another ten percent. That means greater energy and cost savings in households and reduced CO2 emissions. A leading appliance manufacturer is already using this system in its production.
Carbon dioxide as a raw material
Covestro has developed a method for using the greenhouse gas CO2 to synthesize polyurethane components. It markets these raw materials, known as polyols, under the brand name cardyon™ for the production of flexible polyurethane foam, and operates a new production plant for them at its Dormagen site. Up to 20 percent of the fossil raw materials previously used in these products have been replaced by carbon dioxide. A special catalyst gives the molecule the required level of reactivity.
New model for affordable housing
Providing fast, affordable and sustainable housing is a global challenge. Covestro is breaking new ground in its search for creative solutions. Together with industry partners, governments, government agencies and society, Covestro is developing models for affordable housing and running specific projects locally. One example is a multipurpose building in Bergisch Gladbach, Germany, that was planned and built by the local council, the French prefabricated building manufacturer Logelis and Covestro.
Next-generation rotor blades
In keeping with its sustainability strategy, Covestro develops materials and technologies for generating renewable energy – with a focus on wind power. The company has developed an innovative technology for manufacturing rotor blades for wind turbines. The rotors are fabricated in a special process from a polyurethane resin and a fiberglass fabric. For the resin Covestro recently received the vital DNV GL certification for China and can now supply its products to rotor blade manufacturers there.
Proud past, exciting future
Dr. Otto Bayer could only have dreamed of such developments. But even 80 years ago, he lived out Covestro's corporate values: curious, courageous, colorful. He stubbornly pursued his goal of enhancing the efficiency of plastics manufacturing and en route discovered polyurethane chemistry, which became his passion. He even stuck to his guns when his superiors shook their heads at the bubbly mass he produced in his experiments, saying it was at most a "substitute for Swiss cheese". Far from it! With incredible creativity he and his team discovered a whole string of potential applications.
Polyurethanes: Milestones of a success story
1937 – Otto Bayer invents polyurethane chemistry
1943 – New brands: Desmodur® (isocyanates) and Desmophen® (polyols)
1952 – First flexible foam made of TDI and polyester polyols
1958 – Premium coatings made of Desmodur® and Desmophen® ("DD coatings")
1962 – Premiere of rigid polyurethane foam as an insulating material in refrigerators
1967 – First car with an all-plastic body at the K'67 trade show
1970 – Metal-faced sandwich panels for building envelopes
From 1970 onwards – Introduction of Baydur® polyurethane systems for rigid integral foams
1980 – Car seats with various foam hardness levels
1990 – Viscoelastic foams open up a new dimension in comfort
1995 – Blowing agents with no HCFCs
1998 – Introduction of the Baypreg® spray system for composites
2000 – Polyols for coatings and adhesives based on Impact™ technology
2005 – Advances in polyurethane composites
2012 – Baytherm® Microcell for insulating refrigeration systems - CO2 technology
2016 – Market launch of cardyon™ - First rotor blade made of polyurethane resin in Asia
In the future – continuously pushing the boundaries of innovation
About Covestro:
With 2016 sales of EUR 11.9 billion, Covestro is among the world's largest polymer companies. Business activities are focused on the manufacture of high-tech polymer materials and the development of innovative solutions for products used in many areas of daily life. The main segments served are the automotive, construction, wood processing and furniture, and electrical and electronics industries. Other sectors include sports and leisure, cosmetics, health and the chemical industry itself. Covestro has 30 production sites worldwide and employs approximately 15,600 people (calculated as full-time equivalents) at the end of 2016.
You can find a video at https://youtu.be/Brwl9ASvSJs.
For more information go to www.covestro.com.
Follow us on Twitter: www.twitter.com/CovestroGroup
This news release may contain forward-looking statements based on current assumptions and forecasts made by Covestro AG management. Various known and unknown risks, uncertainties and other factors could lead to material differences between the actual future results, financial situation, development or performance of the company and the estimates given here. These factors include those discussed in Covestro's public reports which are available at www.covestro.com. The company assumes no liability whatsoever to update these forward-looking statements or to conform them to future events or developments.
Copyright © Covestro AG
A plump pout is universally synonymous with youth, and an icon of sensuality. Lip augmentation uses a variety of materials to create fuller, more voluptuous lips, and can also reduce wrinkles in the mouth area, restoring youthfulness.
Lip augmentation is a simple, in-office procedure with little to no downtime, and it is one of the highest-rated procedures in our office for patient satisfaction, with virtually instant results and gratification. Dr. Sule will help you select the lip augmentation method that is best for you and, with his conservative eye and approach, help you achieve beautiful lips that appear natural and full but never artificial or out of place.
As a facial plastic surgeon, Dr. Sule is uniquely qualified to perform this procedure, taking into account the proportions of your facial features so that your lip augmentation enhances not only your lips but your overall facial appearance. Whether you want just a more defined lip border so your lipstick does not bleed and make-up stays in place, or you covet that dream movie-star Angelina Jolie pout, Sule Plastic Surgery has the individualized lip augmentation approach to meet your needs.
If you have any questions about Lip Augmentation, please call our office at (972) 960-2950 to arrange an appointment with Dr. Sule.
Ralts (Japanese: ラルトス Ralts) is a Psychic-type Basic Pokémon card. It was first released as part of the EX Sandstorm expansion.
Initial EX Sandstorm prints of this card have the ID I-18-#. The Dot Code strip contains Pokédex information, a TCG glossary snippet, and a brief area summary for Pokémon Ruby and Sapphire.
It was later reprinted in the English EX Dragon Frontiers expansion, first released in the Japanese Imprison! Gardevoir ex Constructed Standard Deck.
This is one of only four cards in EX Dragon Frontiers which isn't a Delta Species. The others are Kirlia, Larvitar and Pupitar.
Hypnosis is a move in the Pokémon games that Ralts can learn. This card's e-Reader Pokédex entry comes from Pokémon Sapphire.
This page was last edited on 17 December 2018, at 07:37.
Archive for February, 2011

Math Genealogy Project (February 14, 2011)

I traced my mathematical lineage back into the XIV century at The Mathematics Genealogy Project. Imagine my surprise when I discovered that a big branch in the tree of my scientific ancestors is composed not of mathematicians, but of big names in the fields of Physics, Chemistry, Physiology and even Anatomy.

There is some "blue blood" in my family: Garrett Birkhoff, William Burnside (both algebraists). Archibald Hill, who shared the 1922 Nobel Prize in Medicine for his elucidation of the production of mechanical work in muscles. He is regarded, along with Hermann Helmholtz, as one of the founders of Biophysics.

Thomas Huxley (a.k.a. "Darwin's Bulldog", biologist and paleontologist) participated in the famous 1860 debate with the Lord Bishop of Oxford, Samuel Wilberforce. This was a key moment in the wider acceptance of Charles Darwin's Theory of Evolution.

There are some hard-core scientists in the XVIII century, like Joseph Barth and Georg Beer (the latter is notable for inventing the flap operation for cataracts, known today as Beer's operation).

My namesake Franciscus Sylvius, another professor in Medicine, discovered the cleft in the brain now known as Sylvius' fissure (circa 1637). One of his advisors, Jan Baptist van Helmont, is the founder of Pneumatic Chemistry and a disciple of Paracelsus, the father of Toxicology (for some reason, the Mathematics Genealogy Project does not list either of these two in my lineage; I wonder why).

There are other big names among the branches of my scientific genealogy tree, but I will postpone this discovery towards the end of the post, for a nice punch-line.

Posters with your genealogy are available for purchase from the pages of the Mathematics Genealogy Project, but they are not very flexible in terms of layout or design in general. A great option is, of course, doing it yourself. With the aid of Python, GraphViz and the sage library networkx, this becomes a straightforward task.

Basic Statistics in sage (February 13, 2011)

No need to spend big bucks on expensive statistical software packages (SPSS or SAS): the R programming language will do it all for you, and of course sage has a neat way to interact with it. Let me prove its capabilities with an example taken from one of the many textbooks used to teach the practice of basic statistics to researchers in the Social Sciences.

Estimating Mean Weight Change for Anorexic Girls

The example comes from an experimental study that compared various treatments for young girls suffering from anorexia, an eating disorder. For each girl, weight was measured before and after a fixed period of treatment. The variable of interest was the change in weight: weight at the end of the study minus weight at the beginning of the study. The change in weight was positive if the girl gained weight, and negative if she lost weight. The treatments were designed to aid weight gain. The weight changes for the 29 girls undergoing the cognitive behavioral treatment were

$\begin{array}{rrrrrr} 1.7&0.7&-0.1&-0.7&-3.5&14.9\\3.5&17.1&-7.6&1.6&11.7&6.1\\1.1&-4.0&20.9&-9.1&2.1&1.4\\-0.3&-3.7&-1.4&-0.8&2.4&12.6\\1.9&3.9&0.1&15.4&-0.7\end{array}$

A Homework on the Web System (February 4, 2011)

In the early 2000s, frustrated with the behavior of most computer-based homework systems on the market, my advisor, Bradley Lucier, decided to take matters into his own hands and, with the help of a couple of students, developed an amazing tool: it generated a great deal of different problems in Algebra and Trigonometry. A single problem model had enough different variations so that no two students would encounter the same exercise in their sessions. It allowed students to input exact answers, rather than mere calculator approximations. It also allowed you to input your answer in any possible legal way. In case of an error, the system would occasionally indicate where the mistake was produced.

It was solid, elegant, fast... working on this project was sheer delight. The most amazing part of it all: it only took one graduate student to write the code for the problems and check the validity of answers. Only two graduate students worked on the coding of this project, with the assistance of several instructors, and Brad himself. He wrote a fun article explaining how the project came to life, enumerating the details that made it so solid, and showing statistical evidence that students working with this environment benefitted more than with traditional methods of evaluation and grading.
Statistical thermo (diatomic molecule w/harmonic oscillator)

TeethWhitener: A diatomic molecule has only one vibrational mode. The $C_{ij}$ matrix becomes a single number, $k$, the spring constant for the molecule. The harmonic frequency is given by $\omega = \sqrt{\frac{k}{m}}$.
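The relation $\omega = \sqrt{k/m}$ above is straightforward to evaluate numerically. A small illustrative sketch follows; note that for an actual diatomic molecule the appropriate $m$ is the reduced mass $m_1 m_2/(m_1+m_2)$, and the numbers below are made up for illustration, not data for any specific molecule:

```python
import math

def harmonic_frequency(k, m):
    """Angular frequency of a harmonic oscillator: omega = sqrt(k/m)."""
    return math.sqrt(k / m)

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system; the appropriate m for a diatomic."""
    return m1 * m2 / (m1 + m2)

# Illustrative numbers (not a specific molecule): k = 4, m1 = m2 = 2,
# so the reduced mass is 1 and omega = sqrt(4/1) = 2.
omega = harmonic_frequency(k=4.0, m=reduced_mass(2.0, 2.0))
print(omega)  # → 2.0
```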
using Glass.Mapper.Configuration;

namespace Glass.Mapper.Sc.Configuration
{
    /// <summary>
    /// Class SitecoreLinkedConfiguration
    /// </summary>
    public class SitecoreLinkedConfiguration : LinkedConfiguration
    {
        /// <summary>
        /// Indicates whether All, References or Referred items should be loaded.
        /// </summary>
        /// <value>The option.</value>
        public SitecoreLinkedOptions Option { get; set; }
    }
}
Santana's New Restaurant Is So Smooth
By Jill Duran
Published Jul 10, 2010 at 2:08 PM | Updated at 2:15 PM PDT on Jul 10, 2010
Aside from being a world famous rock legend, Carlos Santana is also famous for his authentic style Mexican restaurant chain, Maria Maria, which opened an eatery in Mission Valley on Wednesday.
The other Maria Maria restaurants are in San Francisco; Tempe, Ariz.; and Austin, Texas. The business is a partnership between Santana, Chef Roberto Santibañez and Jeff Dudum.
The Maria Maria in Mission Valley, which is named for a Grammy-winning Santana song, boasts perhaps the best patio of all the restaurants in the chain.
"We chose to open another restaurant in San Diego because we love the area -- there's lots of sun, and the great people make an unbelievable community." Dudum said.
The fact that one of the owners is a music icon is reflected in the entertainment: "stellar regional and national artists [will perform] live several nights a week," according to Maria Maria's website, and "hand-selected playlists [will showcase] the most innovative artists from all over the globe."
The restaurant is the first Southern California site for the small chain that serves "nuevo Mexican" food. The restaurateurs put a modern spin on traditional Mexican dishes, while adding a dash of regional style, depending upon the location.
"The menu at Maria Maria was hand-selected by Carlos Santana," Dudum said. "We started with about 100 dishes and slowly narrowed it down to about 20."
Despite delicious meals, the food is reasonably priced with no dishes more than $19.
"We want a place where everyone can come into," Dudum said. "It's reasonably priced, and right now, people need things to enjoy within a practical price range."
Winner of the British Fantasy Award.
What would the world have become, if Victor Frankenstein's attempts to conquer death had not gone awry? Would some events of the world we know still have emerged, or would it be completely, and utterly, transfigured?
The very name 'Frankenstein' has become a by-word for arrogant tampering with things man was not meant to know. Twenty new stories from today's masters of macabre explore this theme.
The Union of Sahrawi Journalists and Writers (Unión de Periodistas y Escritores Saharauís), created on January 1, 2005, is a non-governmental organization (NGO) that brings together writers and journalists of Western Sahara working in Arabic, French and Spanish, and aims to participate as effectively as possible in the struggle for the independence of the Sahrawi Arab Democratic Republic. The currently elected secretary general is Nafii Ahmed Mohamed, and the members of the executive bureau are Mostafa Mohamed Mahmoud, Jalil Mohamed Lamin, Fadel Salma, Maarouf Fneidou and Hasanna Abed el Aziz, plus two further members from the occupied territories of Western Sahara.
Since February 2010 it has published the magazine Al-Itihad.
External links
Unión de Periodistas y Escritores Saharauís
Q: How to simulate a QTreeWidget itemClicked signal without making a derived class? I am unable to find a proper way to simulate the itemClicked() signal of a QTreeWidget.
Is there a way to trigger it so that the itemClicked signal is emitted?
For example, we can emit itemClicked in a class derived from QTreeWidget, but (as a Qt rule) we cannot emit it from outside that class.
A: You can't use the emit call for class A to emit class B's signals. But note that the documentation for signals and slots says:
"You can connect as many signals as you want to a single slot, and a signal can be connected to as many slots as you need. It is even possible to connect a signal directly to another signal. (This will emit the second signal immediately whenever the first is emitted.)"
So you can work around this by declaring a signal in class A of the same signature as the one you want class B to emit, and connecting the signals together:
connect(
myclass, SIGNAL(itemClicked(QTreeWidgetItem*, int)),
treewidget, SIGNAL(itemClicked(QTreeWidgetItem*, int))
);
Then emit itemClicked from myclass. If I'm not mistaken, it will work for this case...and fire the treewidget's itemClicked signal for you.
\section{Introduction}
In the burgeoning field of autonomous robotics, aerial robots are quickly becoming a useful platform for firefighting, police, search-and-rescue, surveillance, and product delivery. In the deployment of large fleets of such robots, trajectory plans must satisfy the competing criteria of safety and performance. Specifically, the safety requirement of avoiding vehicle-to-vehicle collisions and the performance requirement of minimizing time-in-flight are considered. As the number of robots increases, so too does the complexity of satisfying these requirements, warranting the development of computationally tractable methods to this end.
A wide class of traditional single-agent motion planning methods rely on discretization of the state space and definition of a related state-transition graph \cite{svestka1998}. Optimal feasible paths are then found through a graph search \cite{dijkstra1959, hart1968, wang2012, koenig2002, koenig2004, stentz1993} or other combinatorial solvers \cite{lavalle2006}. While these methods can solve the multi-agent planning problem e.g. as in \cite{turpin2014ar,sharon2015conflict, adler2015, solovey2015, honig2018}, they become computationally intractable quickly as the number of agents increases, leading to an exponential growth of the search space dimensionality \cite{erdmann1986, lavalle2006}. Some methods have been explored which reduce the search space dimensionality \cite{lavalle2006, wagner2012, wagner2015, solovey2015finding}, but are unable to sufficiently reduce the complexity for large numbers of agents \cite{turpin2014}. Other centralized planning approaches based on sequential mixed-integer linear optimization \cite{schouwenaars2001mixed, richards2002aircraft, ayuso2016}, sequential convex programming \cite{gglr2012, chen2015}, semidefinite programming \cite{frazzoli2001}, formation space-based velocity planning \cite{kloder2006}, or path improvement using revolving areas \cite{solomon2018motion} can work well for relatively small teams but do not scale well to large teams due to high computational complexity.
A complementary approach for motion planning is to use local decentralized feedback control laws to satisfy collision avoidance and dynamic constraints \cite{warren1990multiple,fiorini1998,berg2008,berg2011,guy2009,guy2010,cap2014,cap2015,panagou2020}. These require real-time sense-and-avoid capabilities, increasing the system cost and complexity. In general, these schemes lead to difficulties maintaining optimal or near-optimal performance and avoiding undesirable deadlock situations.
It was recently shown by \cite{turpin2014} that when the robots are interchangeable, combining the assignment and planning problems facilitates finding collision-free trajectories. Therein a concurrent assignment and trajectory planning algorithm was proposed which tractably gave collision-free trajectories for large robot teams for sufficiently spaced start and goal locations. { The trajectories generated by that approach are globally optimal with respect to a total \emph{squared distance} metric and under the assumption of synchronized robot motion}, which resulted in trajectories that can be significantly suboptimal in terms of total time-in-motion and may violate minimum velocity constraints associated with certain aerial robots.
Additionally, it was asserted that coupling the assignment and trajectory planning was critical to avoid overwhelming computational complexity.
To overcome these shortcomings, in this work a similar problem setup and centralized planning setting to \cite{turpin2014} is considered, but allows non-simultaneous trajectory end times and attempts to minimizes the total \emph{time-in-motion} via a strategy with partial coupling between goal assignment and trajectory planning. The proposed algorithm designs piecewise polynomial-in-time trajectories with physical feasibility and computational tractability guarantees; physical systems track the generated trajectories without colliding with each other, while the polynomial formulation enables key calculations such as the point-of-minimum-clearance to be executed via well-studied and efficient root finding algorithms.
It is demonstrated that the proposed method significantly reduces total time-in-motion relative to \cite{turpin2014} with only modest additional computational expense which becomes negligible for large numbers of agents.
The authors' prior work \cite{Gravell2018} presented a similar approach, but required a simple kinematic model which assumed an unrealistic ability to instantaneously change positions vertically and instantaneously change velocities horizontally; these assumptions are removed in the present work. Further, more extensive simulations and an experimental implementation on a multi-robot testbed are provided.
The remainder of the paper is organized as follows.
First a variation of the trajectory planning problems given in \cite{turpin2014,Gravell2018} is defined, then a strategy in which goal assignment is performed based on initial trajectory plans is proposed, followed by a refinement step to resolve potential collisions using either time delays or altitude assignment.
Throughout, constraints on the magnitude of time derivatives of position (speed, acceleration, jerk, etc.) are honored.
Finally, simulated and physical experiments on a many-member quadrotor platform are presented that illustrate the effectiveness of the algorithms.
\section{Preliminaries}
{ This work begins by establishing mathematical and notational preliminaries; where applicable, the notation of \cite{turpin2014} is followed.}
Unlike the authors' previous work \cite{Gravell2018} which restricted agents to 2-dimensional planes and assumed instantaneous switching between these planes, now a fully 3-dimensional Euclidean space is considered and accompanying trajectory generation algorithms developed which support implementation on a physical robotic system. { To be clear, in this work all objects and geometry with physical extent reside within a common 3-dimensional Euclidean space.}
The set of integers between $1$ and positive integer $Z$ is denoted by $\mathcal{I}_Z\equiv\{1,2,\dots,Z\}$, and the $Z \times Z$ identity matrix is denoted by $I_Z$. The following symbols are used for certain operators and objects:
\begin{align*}
\land: &\text{ Logical ``and"} &\lor: &\text{ Logical ``or"}\\
\cap: &\text{ Set intersection} &\cup: &\text{ Set union}\\
\emptyset : &\text{ Empty set} &\oplus: &\text{ Minkowski sum}\\
\text{conv} : &\text{ Convex hull}
\end{align*}
Vectors in $\mathbb{R}^N$ are column vectors unless otherwise specified. The Euclidean norm of such a vector $z$ is denoted as $\| z \|$ and defined as $\| z \| = \sqrt{\sum_{i=1}^N z_i^2}$.
Consider the scenario where $n$ agents begin at $n$ start locations and move towards $n$ goal locations in a 3-dimensional Euclidean space { with a fixed origin and coordinates measured in a Cartesian coordinate system.} The first two coordinates of this space are referred to as ``horizontal'' and the third coordinate as ``vertical''.
The $i^{th}$ agent position is given by ${x}_i \in \mathbb{R}^3, i \in \mathcal{I}_n$ with the horizontal and vertical parts denoted as ${x}_{i,12} \in \mathbb{R}^2$ and ${x}_{i,3} \in \mathbb{R}^1$ respectively.
The first four derivatives of position with respect to time are called velocity, acceleration, jerk, and snap. These and higher-order derivatives are collectively referred to as time derivatives. { Derivatives with respect to time are notated either by dots or by numbers enclosed by parentheses above the variable, e.g. ${\dot{x}}_i = \overset{\scriptscriptstyle(1)}{x}_{i}$ is velocity and ${\ddot{x}}_i = \overset{\scriptscriptstyle(2)}{x}_{i}$ is acceleration of agent $i$.}
The collision volume of agent $i$ is represented by a finite cylinder $\mathcal{C}_{i}$ of radius $R_i$ and height $H_i$. Each cylinder is centered at ${x}_i \in \mathbb{R}^3$. The cylinders represent the safe collision volume around an agent; a collision occurs if and only if two cylinders intersect. The cylinders have orientations which remain fixed with the axial direction parallel to the world vertical direction (a ``vertical'' cylinder). The variable height of the cylinders relative to the radius is useful for modeling phenomena such as downwash from the rotors of a quadrotor vehicle. Let the largest radius and height of all vehicles be $R = \max_i(R_i)$ and $H = \max_i(H_i)$.
The $i^{th}$ start location and $j^{th}$ goal location are given by ${s}_i \in \mathbb{R}^3, i \in \mathcal{I}_n$ and ${g}_j \in \mathbb{R}^3, j \in \mathcal{I}_n$ respectively.
The horizontal ground plane is the 2-dimensional set $\{x | x_3=0\}$.
The agents operate in a region $\mathcal{K}$:
\begin{align}
\mathcal{K} \equiv \text{conv}\left(\{{s}_i|i\in \mathcal{I}_n\} \cup \{{g}_j|j\in \mathcal{I}_n\}\right) \oplus \mathcal{C}_{\infty}. \label{eq:region_K}
\end{align}
where $\mathcal{C}_\infty$ is a vertical cylinder with radius $R$ whose horizontal coordinates of the center are on the origin and with vertical extent from the ground to positive infinity.
An altitude $\mathcal{A}(\eta)$ at height $\eta$ is the subset of $\mathcal{K}$ within which any agent at that height is contained i.e. a 3-dimensional horizontal slab volume:
\begin{align}
\mathcal{A}(\eta) &= \mathcal{K} \cap \{x : | x_3-\eta | \leq H/2 \}
\end{align}
Define the $n\times 3$ goal matrix as
\begin{align}
{G}=
\begin{bmatrix}
{{g}_1} & {{g}_2} & \dots & {{g}_n}
\end{bmatrix}^\intercal.
\end{align}
Define the $n\times n$ boolean assignment matrix $\phi$, which assigns agents to goals, as
\begin{equation}
\phi_{ij} =
\begin{cases}
1 & \text{if agent } i \text{ is assigned to goal } j\\
0 & \text{otherwise}
\end{cases}
\end{equation}
Therefore row $i$ of $\phi{G}$, denoted as $(\phi{G})_i$, gives the goal location assigned to agent $i$.
All agents are assigned to goals in a one-to-one mapping so
\begin{equation} \label{eq:assignment_constraint}
\phi^\intercal\phi = I_n.
\end{equation}
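The permutation structure of $\phi$ can be made concrete with a small numerical example. The sketch below (our own illustration, assuming numpy is available) builds $\phi$ for a three-agent assignment and checks both the one-to-one constraint $\phi^\intercal\phi = I_n$ and the row-extraction property that $(\phi G)_i$ is the goal assigned to agent $i$:

```python
import numpy as np

def assignment_matrix(assignment, n):
    """Build the n x n boolean matrix phi with phi[i, j] = 1
    iff agent i is assigned to goal j."""
    phi = np.zeros((n, n))
    for agent, goal in enumerate(assignment):
        phi[agent, goal] = 1.0
    return phi

# Example: agent 0 -> goal 2, agent 1 -> goal 0, agent 2 -> goal 1.
phi = assignment_matrix([2, 0, 1], n=3)
G = np.array([[0.0, 0.0, 0.0],   # goal 1 location
              [1.0, 0.0, 0.0],   # goal 2 location
              [0.0, 1.0, 1.0]])  # goal 3 location

assert np.allclose(phi.T @ phi, np.eye(3))  # one-to-one assignment holds
print((phi @ G)[0])  # goal assigned to agent 0 → [0. 1. 1.]
```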
{ A polynomial $p(t):\mathbb{R}\rightarrow{\mathbb{R}}$ of scalar $t$ with degree $d$, with $d+1$ coefficients $\alpha_i$, is given by
\begin{equation}
p(t) = \alpha_0 + \alpha_1 t + \alpha_2 t^2 + ... + \alpha_d t^d = \sum_{i=0}^d \alpha_i t^i . \label{eq:polynomial}
\end{equation}
}
Minimizing a polynomial over a finite domain interval $[t_0,t_f]$ is a straightforward and computationally efficient procedure which follows from Fermat's theorem for stationary points from differential calculus. A description is provided in Algo. \ref{algo:polynomial_minimization} for completeness since it will be called at various points throughout this work.
Maximization is accomplished by the same algorithm by passing a negated polynomial. Also, both minima and maxima can be found concurrently at little additional computational expense by a simple modification to the algorithm.
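The minimization procedure just described, evaluate the polynomial at the interval endpoints and at the real stationary points inside the interval, then take the smallest value, can be sketched compactly. The following is an illustrative stand-in for the referenced algorithm, not the authors' exact pseudocode, and assumes numpy is available:

```python
import numpy as np

def minimize_polynomial(coeffs, t0, tf):
    """Minimize p(t) = sum_i coeffs[i] * t**i over [t0, tf].
    Returns (t_min, p_min). By Fermat's theorem the minimum is
    attained at an endpoint or at a real root of p'(t)."""
    p = np.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    candidates = [t0, tf]
    for r in np.atleast_1d(dp.roots()):
        if abs(r.imag) < 1e-9 and t0 <= r.real <= tf:
            candidates.append(float(r.real))
    values = [p(t) for t in candidates]
    i = int(np.argmin(values))
    return candidates[i], values[i]

# p(t) = t^2 - 2t has its minimum at t = 1 with value -1 on [0, 3].
t_min, p_min = minimize_polynomial([0.0, -2.0, 1.0], 0.0, 3.0)
print(t_min, p_min)  # → 1.0 -1.0
```

Maximization, as noted above, follows by negating the coefficient vector before calling the same routine.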
Similarly, finding the intervals of a finite domain interval $[t_0,t_f]$ over which a polynomial evaluates { within a range of values i.e. finding the set}
\begin{align}
\{t \ | \ p(t) \in [p_-,p_+] \text{ and } t \in [t_0,t_f] \}
\end{align}
is also straightforward and efficient. The procedure begins by finding domain intervals where the polynomial evaluates above the lower limit $p_-$ and below the upper limit $p_+$ (Alg. \ref{algo:polynomial_above}), then intersects those domain intervals with each other and with the prescribed interval $[t_0,t_f]$ (Alg. \ref{algo:polynomial_interval}). Note that Alg. \ref{algo:polynomial_above} works identically for finding intervals where a polynomial is below an upper value $p_+$ by simply reversing inequalities.
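The interval-finding procedure just described amounts to locating the real roots of $p(t) - p_-$ and $p(t) - p_+$ and testing membership between consecutive breakpoints, where it is constant. A minimal sketch of this idea (our illustration, assuming numpy; not the referenced pseudocode verbatim):

```python
import numpy as np

def intervals_in_range(coeffs, p_lo, p_hi, t0, tf):
    """Return subintervals of [t0, tf] where p(t) lies in [p_lo, p_hi].
    Breakpoints are the real roots of p - p_lo and p - p_hi inside the
    interval; between consecutive breakpoints membership is constant."""
    p = np.polynomial.Polynomial(coeffs)
    breaks = {float(t0), float(tf)}
    for level in (p_lo, p_hi):
        for r in np.atleast_1d((p - level).roots()):
            if abs(r.imag) < 1e-9 and t0 < r.real < tf:
                breaks.add(float(r.real))
    pts = sorted(breaks)
    out = []
    for a, b in zip(pts[:-1], pts[1:]):
        mid = 0.5 * (a + b)  # test at the midpoint of each piece
        if p_lo - 1e-12 <= p(mid) <= p_hi + 1e-12:
            out.append((a, b))
    return out

# p(t) = t^2 on [-2, 2]: p(t) lies in [0.25, 1] on two symmetric pieces,
# approximately [-1, -0.5] and [0.5, 1].
print(intervals_in_range([0.0, 0.0, 1.0], 0.25, 1.0, -2.0, 2.0))
```

A production version would additionally merge adjacent output intervals that share a breakpoint; this sketch keeps them separate for clarity.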
{ The trajectory planning problem and the proposed solution techniques are now introduced.}
\section{Trajectory Planning Problem}
{ The trajectory planning problem requires finding $n$ instances of $3$-dimensional trajectories which guide $n$ agents from start to goal locations. The trajectories are given agent-wise by}
\begin{equation*}
\gamma_i(t): [t_{0,i}, t_{f,i}] \rightarrow {x}_i,\quad i \in \mathcal{I}_n
\end{equation*}
and must satisfy the initial and terminal conditions
\begin{align}
\gamma_i(t_{0,i})&={s}_i,\quad i \in \mathcal{I}_n \label{eq:init_position_constraint}, \\
\gamma_i(t_{f,i})&=\left(\phi{G}\right)_i,\quad i \in \mathcal{I}_n \label{eq:term_position_constraint}.
\end{align}
Agents are considered to be quadrotors, whose center position dynamics linearized about the hover configuration are modeled as a quadruple integrator in horizontal directions (due to the rolling action which must precede lateral acceleration) and a double integrator in the vertical direction \cite{mellinger2011}:
\begin{align} \label{eq:dynamics_constraint}
\overset{\scriptscriptstyle(4)}{x}_{i,12}(t) = u_{\text{horz,i}} \ , \quad \overset{\scriptscriptstyle(2)}{x}_{i,3}(t) = u_{\text{vert,i}}
\end{align}
where $u_{\text{horz,i}}$ and $u_{\text{vert,i}}$ are control inputs. The dynamics are not used explicitly in terms of designing control inputs, but rather are used to motivate the choice of trajectory form, namely piecewise polynomials of a particular order.
{ By choosing a whole number $q$ sufficiently high and imposing constraints on the norm of $q-1$ time derivatives, actuator constraints are honored. The particular choice for $q$ in the case of quadrotors is established in Section \ref{subsec:base_polynomial}.}
These constraints are encoded in a vector $\delta \in \mathbb{R}^{q-1}$ with $\delta_k > 0$ and applied as
\begin{align} \label{eq:derivative_constraint}
\| \overset{\scriptscriptstyle(k)}{\gamma}_i(t) \| \leq \delta_k \text{ for } k \in \mathcal{I}_{q-1}.
\end{align}
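For a polynomial trajectory, the derivative-norm constraint above can be verified axis-by-axis by differentiating the coefficient vector $k$ times and maximizing the magnitude over the time interval, reusing the polynomial-extremum idea from before. A simplified single-axis sketch (our illustration, assuming numpy):

```python
import numpy as np

def max_abs_derivative(coeffs, k, t0, tf):
    """Max of |p^(k)(t)| over [t0, tf] for p(t) = sum_i coeffs[i] t**i.
    The extremum lies at an endpoint or at a real root of p^(k+1)."""
    dk = np.polynomial.Polynomial(coeffs).deriv(k)
    candidates = [t0, tf]
    for r in np.atleast_1d(dk.deriv().roots()):
        if abs(r.imag) < 1e-9 and t0 <= r.real <= tf:
            candidates.append(float(r.real))
    return max(abs(dk(t)) for t in candidates)

def satisfies_limits(coeffs, deltas, t0, tf):
    """Check max |p^(k)(t)| <= delta_k for k = 1..len(deltas)."""
    return all(
        max_abs_derivative(coeffs, k, t0, tf) <= d
        for k, d in enumerate(deltas, start=1)
    )

# p(t) = t^2 on [0, 2]: max |p'| = 4 and max |p''| = 2, so limits
# delta = (4, 2) are met exactly.
print(satisfies_limits([0.0, 0.0, 1.0], [4.0, 2.0], 0.0, 2.0))  # → True
```

For the full 3-dimensional constraint $\| \overset{\scriptscriptstyle(k)}{\gamma}_i(t) \| \leq \delta_k$, the squared norm of the derivative is itself a polynomial in $t$, so the same extremum search applies to it directly.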
Define the global start and end times for which motion may occur over all agents:
\begin{equation*}
\begin{aligned}
t_{0,\text{all}}&=\text{min}(t_{0,1},t_{0,2},\dots,t_{0,n}), \\
t_{f,\text{all}}&=\text{max}(t_{f,1},t_{f,2},\dots,t_{f,n}).
\end{aligned}
\end{equation*}
Ensure collision avoidance by requiring the collision volumes of all agent pairs to be disjoint during the period of possible motion:
\begin{align} \label{eq:collision_free_constraint}
\{x_i(t) \oplus \mathcal{C}_i\} \cap \{x_j(t) \oplus \mathcal{C}_j\} = \emptyset \nonumber \\
\text{ for } t \in [t_{0,\text{all}},t_{f,\text{all}}],\quad i\neq j \in \mathcal{I}_n.
\end{align}
Like the previous work in \cite{Gravell2018}, the proposed method aims to minimize the total, or equivalently average, time-in-flight of all agents. This is a useful cost metric for many applications e.g. product delivery and emergency response. Therefore, the optimization problem seeks trajectories $\gamma^*(t) = [\gamma_1(t),...,\gamma_n(t)]$ and goal assignment $\phi^*$ that minimize total time-in-motion:
\begin{equation} \label{eq:opt_prob_orig}
\begin{aligned}
& \gamma^*(t), \phi^* = &&\underset{\gamma(t), \phi}{\text{argmin}}
\sum_{i=1}^n \int_{t_{0,i}}^{t_{f,i}} dt \\
&&& \text{subject to }
\eqref{eq:assignment_constraint}, \eqref{eq:init_position_constraint}, \eqref{eq:term_position_constraint}, \eqref{eq:dynamics_constraint},\eqref{eq:derivative_constraint},\eqref{eq:collision_free_constraint}
\end{aligned}
\end{equation}
{
\textbf{Assumptions:} The following assumptions are explicitly imposed as part of the problem formulation:
\begin{enumerate}[label=(A\arabic*)]
\item Any assignment of agents to goals is permissible.
\item The collision volume of each agent is the set of points contained in cylinder $\mathcal{C}_{i}$.
\item The effects of any dynamics model mis-specification, imperfect state knowledge, actuation error, and external disturbances are small enough such that the true physical extent of each agent is always fully contained inside the collision volume $\mathcal{C}_{i}$.
\item Continuity and satisfaction of upper bound constraints on $q-1$ time derivatives of position is sufficient to ensure actuator constraints are honored.
\item The region $\mathcal{K}$ in \eqref{eq:region_K} is devoid of any obstacles other than the agents themselves.
\item The region $\mathcal{K}$ in \eqref{eq:region_K} has infinite positive vertical extent.
\item All start and goal locations are fixed on a common ground plane and are spaced at least $2R$ apart:
\begin{align*}
s_{i, 3} = g_{i, 3} &= 0 \ \forall \ i \in \mathcal{I}_n \\
\| s_i - s_j \| &> 2R \ \forall \ i \neq j \in \mathcal{I}_n \\
\| g_i - g_j \| &> 2R \ \forall \ i \neq j \in \mathcal{I}_n
\end{align*}
\end{enumerate}
}
{
The modeling assumption of no uncontrolled obstacles in the operating space is not altogether unreasonable when considering the nearly empty airspace encountered at altitudes above tree tops, buildings, etc. in typical real-world flight scenarios. The use of cylindrical collision volumes renders orientation of each quadrotor irrelevant for the purpose of trajectory planning.
}
{ The solution to the global problem in \eqref{eq:opt_prob_orig} is ultimately not obtained exactly, but rather a suboptimal solution is found using \eqref{eq:opt_prob_orig} to guide generation of trajectories and goal assignment by the approach proposed in the following subsections.
The strategy for finding an approximate solution to this problem proceeds by temporarily ignoring the clearance requirements \eqref{eq:collision_free_constraint} which effectively reduces the domain of trajectories under consideration to the ground plane, choosing a function form for trajectories (piecewise polynomials) to reduce the problem to goal assignment, generating horizontal trajectories, then constructing vertical trajectories using refinement techniques which detect and resolve collisions.
As a result, these trajectories will be shown to be feasible (e.g. collision-free) and computable after a finite number of operations by construction.
As the trajectory generation procedure based on piecewise polynomial functions is used throughout the goal assignment and collision resolution phases, the trajectory generation scheme is described next.}
\subsection{Trajectory Generation} \label{sec:traj}
The trajectory design is motivated by the observation that minimum-time trajectories along a long straight line with maximum speed constraints will naturally partition into three segments; acceleration, constant (max) speed, and deceleration. A similar idea has previously been suggested for point-to-point robot trajectory planning under the name ``Linear Segments with Parabolic Blends" \cite{spong2008}. This idea is generalized to higher-order acceleration (blend) segments. During the acceleration segments, one or more time derivatives of order 2 and higher will be pushed to a constraint maximum, and during the constant max speed segment the higher order time derivatives will be zero. Although physical models involving friction { (i.e. higher fidelity models than that assumed in \eqref{eq:dynamics_constraint})} theoretically allow only asymptotic approach of the maximum speed under actuation constraints e.g. the exponential approach of the speed of a particle in gravitational free-fall to a terminal speed, in practice it was found that the polynomial trajectories were sufficient for reference tracking.
This work does not attempt to optimize control effort during the acceleration segments since the control effort expended during the constant speed segment dominates, e.g. due to air friction and by virtue of the relative duration of this segment over long horizontal paths. If deemed necessary, techniques such as minimum-snap trajectory design via quadratic programming \cite{mellinger2011} could be utilized to further decrease the control effort, possibly at the expense of trajectory duration and computational burden. Any techniques which return polynomial-in-time trajectory segments are fully compatible with the remainder of the proposed method.
{ This work also restricts trajectories to strictly piecewise vertical and horizontal straight-line paths, which permits simplified trajectory planning and collision resolution by treating trajectories as single-dimensional polynomials of time multiplied by a constant unit heading vector. A pair of tuples $\delta_{\text{horz},k}$ and $\delta_{\text{vert},k}$ is defined, and \eqref{eq:derivative_constraint} is applied with $\delta_k$ set to either $\delta_{\text{horz},k}$ or $\delta_{\text{vert},k}$ depending on whether $\overset{\scriptscriptstyle(k)}{\gamma}_i(t)$ is horizontal or vertical at $t$.}
The acceleration segments are individualized polynomials scaled from a base polynomial. The base polynomial is calculated only once at the beginning of the overall routine. Particular whole trajectories are generated by joining acceleration and constant speed segments. { Generation of the base polynomial and individualized polynomials are described in the subsequent two subsections.}
\subsubsection{Base polynomial} \label{subsec:base_polynomial}
{ Recalling the definition of a polynomial of degree $d$ in \eqref{eq:polynomial} and the whole number $q$ which represents the number of time derivatives on which constraints will be enforced, let $2q=d+1$.} It is evident that a given $2q-$tuple of initial and terminal time derivative conditions ($2q$ total point constraints) uniquely specifies a polynomial of degree $d$ so long as the problem is well-posed i.e. if a certain coefficient matrix $A$ is invertible. To ensure continuity of position and $q-1$ time derivatives at the endpoints, specify $q$ constraints at $t=0$ and $q$ constraints at $t=T$. Due to the assumption on the dynamics in \eqref{eq:dynamics_constraint}, by choosing reference trajectories which are piecewise polynomial with degree at least 4 and 2 respectively, open-loop control with sufficient control effort and the absence of disturbances would give perfect tracking. It is also desirable to make the segment transitions smooth to avoid discontinuous control signals. Choosing degree 9 would allow the specification of 5 endpoint time derivative constraints: position, speed, acceleration, jerk, and snap. However, to reduce the computational storage requirement for the trajectories during implementation on actual hardware and reduce computational effort during centralized trajectory planning, a degree of 7 is used. It was found that the difference between the degree 7 and 9 polynomials was extremely slight and in practice the reference tracking error was dominated by other noise sources.
For comparison, degree 1 polynomials represent constant speed trajectories; this was effectively the approach taken in the authors' previous work \cite{Gravell2018}. The procedure for calculating the base polynomial is as follows:
\begin{enumerate}
\item Form the vector of endpoint conditions
\begin{align}
b &= [p(0),\dot{p}(0),\ldots,\overset{\scriptscriptstyle(q-1)}{p}(0), \nonumber \\
&\qquad p(T),\dot{p}(T),\ldots,\overset{\scriptscriptstyle(q-1)}{p}(T)]^\intercal
\end{align}
\item Form the matrix of coefficients $A \in \mathbb{R}^{2q \times 2q}$ as
\begin{align}
A_{ij} &= \left\{\begin{array}{lr}
(i-1)! & \text{ if } i = j \text{ and } i \leq q\\
0 & \text{ if } i\neq j \text{ and } i \leq q\\
\frac{(j-1)!}{(j-(i-q))!} T^k & \text{ if } i-q \leq j \text{ and } i > q\\
0 & \text{ if } i-q > j \text{ and } i > q
\end{array}\right.
\end{align}
where $k={(j-1)-(i-q-1)}$. This follows from simple differentiation of polynomials and matching coefficients according to the endpoint constraints.
As an example, for $d=7$ and $T=1$ one has
\begin{align}
A = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 6 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
0 & 0 & 2 & 6 & 12 & 20 & 30 &42 \\
0 & 0 & 0 & 6 & 24 & 60 & 120 &210
\end{bmatrix}
\end{align}
\item Solve the system of linear equations $A\alpha=b$ to obtain the vector of polynomial coefficients $\alpha$.
\end{enumerate}
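As a sanity check of steps 1)--3), the following NumPy sketch (function names are illustrative, not from any accompanying code release) builds $A$ row-by-row and solves for the coefficient vector $\alpha$; for $q=4$, $T=1$ it reproduces the example matrix above.

```python
import numpy as np
from math import factorial

def endpoint_matrix(q, T):
    """Build the 2q x 2q matrix A relating coefficients of a degree
    d = 2q - 1 polynomial to its endpoint derivative constraints."""
    d = 2 * q - 1
    A = np.zeros((2 * q, 2 * q))
    for i in range(q):                 # derivative order i at t = 0
        A[i, i] = factorial(i)
    for i in range(q):                 # derivative order i at t = T
        for j in range(i, d + 1):      # only terms t^j with j >= i survive
            A[q + i, j] = factorial(j) / factorial(j - i) * T ** (j - i)
    return A

def base_polynomial(q, T, b):
    """Solve A alpha = b for the coefficients alpha (lowest order first)."""
    return np.linalg.solve(endpoint_matrix(q, T), np.asarray(b, float))
```

For instance, the endpoint conditions $b=[0,0,0,0,0.5,1,0,0]^\intercal$ yield coefficients corresponding to $p(t)=2.5t^4-3t^5+t^6$, the degree-7 polynomial analyzed later in this section.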
In this framework, other polynomial bases such as the orthogonal polynomials of Chebyshev or Legendre could be used to improve the conditioning of the $A$ matrix \cite{Mellinger2012} { i.e. to encourage the singular values of the $A$ matrix to remain clustered around unity and ensure numerical stability of the solution to $A\alpha=b$}; for ever-higher degree polynomials the conditioning of the matrix in the monomial basis degrades. However, for simplicity, monomials are used since the error was found to be manageable on the problem instances encountered, i.e. for degree $7$ polynomials.
If position and the first $q-1$ time derivatives are $0$ at $t=0$, the first $q$ coefficients $\alpha_0,\ldots,\alpha_{q-1}$ are also zero, which is evident from the partial diagonal structure of $A$. Indeed, it is desirable to create an acceleration polynomial which has $p(0) = 0$, $\dot{p}(0) = 0$, $p(T) > 0$, $\dot{p}(T) > 0$ and some higher-order time derivatives zero at both endpoints i.e.
\begin{align}
b &= [0,0,\ldots,0, x_f,v_f,0,\ldots,0]^\intercal
\end{align}
Although this procedure will always generate a polynomial which satisfies the endpoint constraints, the behavior between the endpoints is governed by the duration $T$. In particular, there is a unique setting of $T$ which ensures that both the position and velocity monotonically increase from the initial to terminal points, thus ensuring that the endpoints are where the minimum and maximum position and speed occur over the segment. This setting is
\begin{align}
T=2 \left| \frac{p(T)-p(0)}{\dot{p}(T)-\dot{p}(0)} \right|=2 \left| \frac{p(T)}{\dot{p}(T)} \right|. \label{eq:T_setting}
\end{align}
With this choice, as an additional benefit, the polynomial degree is reduced by 1, i.e. $\alpha_d = 0$.
Although proving these facts for arbitrary degree polynomials is difficult, it is now shown, at least for $d=7$ (the case of interest in this work), that the given setting of $T$ in \eqref{eq:T_setting} gives the desired behavior. Assume, without loss of generality, that $p(0)=0$, $\dot{p}(0)=0$, $p(T)=0.5$, $\dot{p}(T)=1$, and by \eqref{eq:T_setting} set $T=1$. Solving for the coefficients of the position polynomial, we obtain
\begin{align}
p(t) &= t^6-3t^5+2.5t^4
\end{align}
and differentiating, the acceleration is
\begin{align}
\ddot{p}(t) &= 30t^4-60t^3+30t^2 \\
&= 30 t^2 (t-1)^2
\end{align}
which is nonnegative for all $t$ and thus on the interval $[0,T]$. Thus the velocity monotonically increases from 0 and so does the position, as desired. Attempting to show this for any other setting of $T$ will fail; a proof of this fact is left to future work, noting that a product-of-squares argument (as here) is insufficient to prove a setting of $T$ gives an acceleration which is somewhere negative.
It is emphasized that the base polynomial only needs to be calculated once at the beginning of the overall routine and can be scaled and translated (in time) as necessary for each particular trajectory.
The base polynomials for vertical and horizontal trajectories are calculated separately to account for differing actuation constraints in each direction. In each case, a unit path length and a terminal speed equal to the max speed are used ($p(0)=0$, $\dot{p}(0)=0$, $p(T)=1$, $\dot{p}(T)=\delta_1$, and $T = 2/\delta_1$ by \eqref{eq:T_setting}). This results in the polynomials $p_\text{base,horz}$ and $p_\text{base,vert}$.
\subsubsection{Individualized polynomials}
Once the base polynomials for an acceleration segment have been found, a piecewise polynomial (sub)trajectory may be generated which connects any two points $x_0$, $x_f$ with a straight line path, subject to the time derivative constraints. Let the distance between $x_0$, $x_f$ be $\ell_\Delta = \|x_f-x_0\|$. The whole piecewise polynomial trajectory for agent $i$ with $n_i$ pieces has the form
\begin{align}
\gamma_i(t) = \Big\{ p_{ik}(t) \hat{h}_{ik} \text{ if } t \in [t_{ik},t_{i,k+1}], k \in \mathcal{I}_{n_i} \Big\}
\end{align}
where $\hat{h}_{ik} \in \mathbb{R}^3$ is a unit heading vector. In this work, this heading will either be horizontal $\hat{h}_{ik} = [a,b,0]^\intercal$ or vertical $\hat{h}_{ik} = [0,0,1]^\intercal$ where $a,b$ are dummy constants satisfying $a^2 + b^2 = 1$. Also, in this work these trajectories are comprised of 2- or 3-segment subtrajectories and 1-segment stationary wait segments. For notational compactness, let
\begin{align}
\gamma_{ik} = \{p_{ik},\hat{h}_{ik},[t_{ik},t_{ik+1}]\}
\end{align}
represent a polynomial trajectory segment which encodes a polynomial, a heading, and a time interval.
Accordingly, with $m$ indexing the segments of agent $i$, the norm of the $k$-th time derivative has the simplified form
\begin{align}
\left\| \overset{\scriptscriptstyle(k)}{\gamma}_i(t) \right\| = \left| \overset{\scriptscriptstyle(k)}{p}_{im}(t) \right| \text{ if } t \in [t_{im},t_{i,m+1}], \ m \in \mathcal{I}_{n_i}
\end{align}
It will be useful to keep in mind the spatial and temporal scaling formulas for polynomials:
\begin{align}
c p(t) &= c \sum_{i=0}^d \alpha_i t^i \label{eq:spatial_scale} \\
p(ct) &= \sum_{i=0}^d \alpha_i (ct)^i = \sum_{i=0}^d \alpha_i c^i t^i \label{eq:time_scale} \\
\frac{1}{c} p(ct) &= c^{-1} \sum_{i=0}^d \alpha_i (ct)^i = \sum_{i=0}^d \alpha_i c^{i-1} t^i \label{eq:combine_scale}
\end{align}
from which it follows that the derivatives satisfy:
\begin{align}
\frac{d^k}{dt^k} (c p(t)) &= c \frac{d^k}{dt^k} p(t) \label{eq:spatial_scale_derv} \\
\frac{d^k}{dt^k} (p(ct)) &= c^k \frac{d^k}{d\tau^k} (p(\tau)) \label{eq:time_scale_derv} \\
\frac{d^k}{dt^k} \left( \frac{1}{c} p(ct) \right) &= c^{k-1} \frac{d^k}{d\tau^k} (p(\tau)) \label{eq:combine_scale_derv}
\end{align}
where $\tau = ct$.
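These identities can be verified numerically; the brief sketch below (using NumPy's polynomial class, an implementation convenience not prescribed by the text) forms $p(ct)$ and $\frac{1}{c}p(ct)$ explicitly by coefficient scaling per \eqref{eq:time_scale} and \eqref{eq:combine_scale}, then checks \eqref{eq:time_scale_derv} and \eqref{eq:combine_scale_derv}.

```python
import numpy as np
P = np.polynomial.Polynomial

p = P([0, 0, 0, 0, 2.5, -3, 1])   # p(t) = 2.5 t^4 - 3 t^5 + t^6
c, k, t = 0.5, 2, 0.8

# p(ct) as a polynomial in t: coefficients alpha_i * c**i
p_ct = P(p.coef * c ** np.arange(len(p.coef)))
# d^k/dt^k p(ct) = c^k p^{(k)}(ct)
assert np.isclose(p_ct.deriv(k)(t), c**k * p.deriv(k)(c * t))

# (1/c) p(ct) as a polynomial in t: coefficients alpha_i * c**(i-1)
p_comb = P(p.coef * c ** (np.arange(len(p.coef)) - 1.0))
# d^k/dt^k [(1/c) p(ct)] = c^{k-1} p^{(k)}(ct)
assert np.isclose(p_comb.deriv(k)(t), c**(k - 1) * p.deriv(k)(c * t))
```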
First, temporal scaling is applied to the acceleration segment in order to ensure the terminal speed is the agent max speed so that
\begin{align}
| \dot{p}(t) | \leq \delta_1 \text{ for } t \in [0,T]
\end{align}
with equality ensured exactly at $t=T$.
The (absolute) maximum time derivative $\max_t(| \dot{p}(t) |)$ of the base polynomial is computed via Algorithm \ref{algo:polynomial_minimization} with the interval $[0,T]$. The scale factor is found as $c = \frac{\delta_1}{\max_t(| \dot{p}(t) |)}$, then temporal scaling is applied as
\begin{align}
p(t) \leftarrow p \left( c t \right) \quad \text{ and } \quad
T \leftarrow T/c
\end{align}
which achieves the proper scaling of speed per \eqref{eq:time_scale_derv} and preserves the path length traversed.
Next, scaling is applied to the acceleration segment in order to satisfy constraints on the higher time derivatives which are denoted by $\delta \in \mathbb{R}^{q-1}$ so that
\begin{align}
| \overset{\scriptscriptstyle(k)}{p}(t) | \leq \delta_k \text{ for } t \in [0,T], k \in \mathcal{I}_{q-1}
\end{align}
with equality ensured in at least one derivative at one time. This minimizes the time taken to traverse the path by taking full advantage of the available time derivatives.
The (absolute) maximum time derivatives $\max_t(| \overset{\scriptscriptstyle(k)}{p}(t) |)$ of the base polynomial are computed via repeated applications of Algorithm \ref{algo:polynomial_minimization} with the interval $[0,T]$. Once the (absolute) maximum time derivatives have been identified, scale factors $\psi_k$ associated with satisfying each time derivative constraint are found by
\begin{align}
\psi_k = \left( \frac{\max_t(| \overset{\scriptscriptstyle(k)}{p}(t) |)}{\delta_k}\right)^{\frac{1}{k-1}} \text{ for } k \in \{2,\ldots,q-1\} \label{eq:scale_factors} .
\end{align}
The maximum of these scale factors is the only one that is needed to ensure all constraints are satisfied, so take $\psi_*=\max_k(\psi_k)$.
The scaling is then applied by
\begin{align}
p(t) \leftarrow \psi_* p(t/\psi_*) \quad \text{ and } \quad
T \leftarrow T \psi_*
\end{align}
which compresses the trajectory temporally and stretches it spatially in equal proportions such that the terminal speed remains the same, per \eqref{eq:combine_scale_derv}, while honoring all higher order time derivative constraints, per \eqref{eq:time_scale_derv}.
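A compact sketch of this scaling step is given below. For brevity the exact polynomial minimization of Algorithm \ref{algo:polynomial_minimization} is replaced by dense sampling (an assumption of this sketch, not the paper's method), and `delta` holds $(\delta_1,\ldots,\delta_{q-1})$; the $k=1$ entry is unused here since the combined scaling leaves the terminal speed unchanged.

```python
import numpy as np
P = np.polynomial.Polynomial

def max_abs_deriv(p, k, T, n=2001):
    """Max of |p^{(k)}| on [0, T] by dense sampling -- a stand-in for
    the paper's exact polynomial-minimization routine."""
    t = np.linspace(0.0, T, n)
    return np.max(np.abs(p.deriv(k)(t)))

def apply_derivative_scaling(p, T, delta):
    """Scale p(t) -> psi* p(t/psi*) per eq. (scale_factors) so that
    |p^{(k)}(t)| <= delta[k-1] for k = 2..q-1, preserving terminal speed."""
    psis = [(max_abs_deriv(p, k, T) / delta[k - 1]) ** (1.0 / (k - 1))
            for k in range(2, len(delta) + 1)]
    psi = max(psis)
    # psi * p(t/psi): coefficient of t^i becomes alpha_i * psi**(1-i)
    coef = p.coef * psi ** (1.0 - np.arange(len(p.coef)))
    return P(coef), T * psi
```

Applied to the base polynomial $2.5t^4-3t^5+t^6$ on $[0,1]$ with an acceleration bound of $1$, the binding constraint is acceleration (whose unscaled maximum is $1.875$ at $t=0.5$), giving $\psi_*=1.875$ and a stretched duration of $1.875$ with the terminal speed still equal to $1$.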
Next, a determination of whether a middle constant speed segment is needed is made. This is accomplished by comparing the path length needed by the acceleration segment to reach max speed and (half) the actual path length between the physical endpoints, i.e. if $2p(T) < \ell_{\Delta}$ then a constant speed segment is needed. This segment is trivial to calculate; it is a constant maximum speed segment whose duration is $T_{\text{cs}} = \frac{\ell_{\Delta}-2p(T)}{\delta_1}$. On the other hand, if $2p(T) \geq \ell_{\Delta}$ then no constant speed segment is needed and the acceleration segments must be scaled again to reduce their path length, in which case the maximum speed will not be attained.
The process continues with a spatial stretch in order to fit the path length exactly:
\begin{align}
p(t) \leftarrow \frac{\ell_{\Delta}}{2p(T)} p(t)
\end{align}
This has the effect of strictly decreasing the time derivatives per \eqref{eq:spatial_scale_derv} since the scale factor is less than 1.
Then new scale factors are calculated similarly to \eqref{eq:scale_factors} and a temporal stretch is applied to further optimize the trajectory by making full use of the available ``capacity'' of higher order time derivatives:
\begin{align}
\psi_k^\prime &= \left( \frac{\max_t(| \overset{\scriptscriptstyle(k)}{p}(t) |)}{\delta_k}\right)^{\frac{1}{k}} \text{ for } k \in \mathcal{I}_{q-1} ,
\psi_*^\prime =\max_k(\psi^\prime_k) \\
p(t) &\leftarrow p(t/\psi_*^\prime) , \quad \text{and} \quad
T \leftarrow T \psi_*^\prime .
\end{align}
The end result of this entire procedure is a piecewise polynomial (sub)trajectory with 2 or 3 pieces whose position and first $q-1$ time derivatives are continuous and which satisfies all initial, terminal, and range constraints. See Fig. \ref{fig:figure_polytraj1} for an illustrative example.
\begin{figure}[pos=h]
\centering
\includegraphics[width=2.5in]{figure_polytraj1.png}
\caption{Plots of position and its time derivatives for an example piecewise polynomial subtrajectory generated by the proposed method. This example uses degree $d=7$ to accommodate $q-1=3$ time derivative constraints, which are depicted as the upper and lower limits of the vertical axes and tightly bound the speed and at least one higher derivative (in this case acceleration). This example has 3 segments: acceleration, constant speed, and deceleration.}
\label{fig:figure_polytraj1}
\end{figure}
\subsection{Goal Assignment} \label{sec:goal_assignment}
{
Having described the piecewise polynomial trajectory generation procedure, it is now possible to reduce the problem in \eqref{eq:opt_prob_orig} to one of a linear assignment (combinatorial) goal assignment problem by fixing the functional form of the trajectories.
As in \cite{Gravell2018}, if the collision avoidance constraint \eqref{eq:collision_free_constraint} is ignored, an argument from the calculus of variations shows that trajectories which minimize the integral of $dt$, which is the time-in-motion, follow straight line paths and achieve the highest average speed possible while satisfying the boundary conditions and position derivative constraints.
Thus the problem reduces to simply connecting each start to each goal with minimum-time trajectories on straight line paths, computing the time-in-motion incurred by each trajectory, and finding the goal assignment which minimizes total time-in-motion. If these minimum-time trajectories are replaced with constant velocities, as in \cite{Gravell2018}, the problem amounts to minimizing the total \emph{non-squared} distance.
Unlike \cite{Gravell2018}, motivated by Section \ref{sec:traj}, we now replace these minimum-time trajectories with piecewise polynomial trajectories, where the expression of the cost in terms of distance is more complicated and is driven by the size of the constraints on the time derivatives (which determine the base polynomial) relative to the distances.
}
Therefore, the optimal assignment is given by
\begin{equation*}
\phi^* = \underset{\phi}{\text{argmin}}\sum_{i=1}^{n}\sum_{j=1}^{n} \phi_{ij}C_{ij}
\end{equation*}
where the cost matrix $C$ encodes the cost of assigning agent $i$ to goal $j$. In accordance with \eqref{eq:opt_prob_orig}, $C$ contains the values of the time-in-motion taken by agent $i$ to travel to goal $j$ along a straight line. These times $T_{ij}$ are found by calculating polynomial segment trajectories for agent $i$ moving from start $s_i$ to goal $g_j$ by the procedure described earlier:
\begin{equation*}
C_{ij} = T_{ij},\quad i \in \mathcal{I}_n, j \in \mathcal{I}_n.
\end{equation*}
Due to the exceptionally simple form of the piecewise polynomial trajectories, calculating the $n^2$ trajectories, one for each start-goal pair, remains computationally tractable compared with the simplified case of constant velocity trajectories.
This problem may be efficiently solved to optimality with a finite number of iterations using the well-known Hungarian algorithm \cite{kuhn1955,munkres1957}, which runs in $\mathcal{O}(n^3)$ time. Alternate algorithms such as the auction algorithm could also be used with the same time complexity, but with the benefit of parallelization \cite{bertsekas1989,bertsekas1991}.
After solving the optimal assignment, the presumptive horizontal trajectories for each agent are simply chosen as those from the cost matrix generation which are selected by the optimal assignment.
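For small $n$ the assignment objective can be made concrete by exhaustive search; the brute-force sketch below ($O(n!)$, illustrative only) evaluates exactly the cost $\sum_i C_{i,\phi(i)}$ that the Hungarian algorithm minimizes in $O(n^3)$ time.

```python
import numpy as np
from itertools import permutations

def assign_goals_bruteforce(C):
    """Exhaustive search over assignments: phi[i] is the goal of agent i.
    For illustration only; the paper solves the same problem with the
    O(n^3) Hungarian algorithm."""
    n = C.shape[0]
    best = min(permutations(range(n)),
               key=lambda phi: sum(C[i, phi[i]] for i in range(n)))
    return list(best), float(sum(C[i, best[i]] for i in range(n)))
```

In practice a polynomial-time solver is used instead; for example, SciPy exposes the Hungarian method as `scipy.optimize.linear_sum_assignment`, which returns the same minimizing assignment.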
A comparison with the C-CAPT algorithm of \cite{turpin2014}, which uses a cost function of the distance traveled squared, is given in \cite{Gravell2018}. The main disadvantages of the C-CAPT algorithm are that the speed of agents is limited due to the requirement of agents to start and arrive at goals at the same time, as well as a minimum separation spacing between starts and between goals of $2\sqrt{2}R$. The advantage of allowing asynchronous goal arrival is highly dependent on the distribution of the start and goal locations; when some trajectory lengths are much larger than others, the ability to arrive earlier than other agents significantly improves utilization of the available actuation resources, e.g. speed. For many practical applications the service area includes goal locations which are both near and far from the start locations, { which necessitates some agents to travel much longer than others, regardless of the goal assignment,} so the advantage is substantial. The results of Section \ref{sec:simulation_results} demonstrate this advantage quantitatively, despite the minor degradation in flight times due to collision detection and resolution, which are discussed next.
\section{Collision detection} \label{sec:collision_detection}
Here the advantage of piecewise polynomial trajectories on straight line paths becomes apparent as the global minimum distance between any pair of agents across their entire trajectories becomes extremely easy and fast to compute. Additionally, the cylindrical collision volume representation synergizes with the restriction that paths are only vertical or horizontal and makes collision checking especially convenient and computationally efficient. Collisions at an instant of time are detected exactly by simply checking if both the radial separation is less than the sum of the radii and the vertical separation is less than the sum of the half-heights. Mathematically, the following equivalent conditions of collision between agents $i$ and $j$ hold:
\begin{numcases}{\mathcal{C}_i \cap \mathcal{C}_j \neq \emptyset \leftrightarrow}
\|x_{j,12}-x_{i,12}\| \leq R_i+R_j \label{eq:horz_collide} \\
\land \ |x_{j,3}-x_{i,3}| \leq \frac{H_i+H_j}{2} \label{eq:vert_collide} .
\end{numcases}
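The conditions \eqref{eq:horz_collide}--\eqref{eq:vert_collide} translate directly into a few lines of code; the following sketch (function name illustrative) tests a single time instant for two vertical-axis cylinders.

```python
import numpy as np

def cylinders_collide(x_i, x_j, R_i, R_j, H_i, H_j):
    """Instantaneous collision test for cylinders centred at x_i, x_j:
    collision iff BOTH radial overlap (sum of radii) and vertical
    overlap (sum of half-heights) hold."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    radial = np.linalg.norm(x_j[:2] - x_i[:2]) <= R_i + R_j
    vertical = abs(x_j[2] - x_i[2]) <= (H_i + H_j) / 2
    return bool(radial and vertical)
```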
For a pair of points moving on straight-line paths whose positions are polynomials in time, the procedure in Alg. \ref{algo:separation_minimization} is used to find the minimum separation distance.
\begin{algorithm} \label{algo:separation_minimization}
\DontPrintSemicolon
\KwIn{Heading unit vectors $\hat{h}_i$ and $\hat{h}_j$, polynomial trajectories $x_i(t) = p_i(t)\hat{h}_i$ and $x_j(t) = p_j(t)\hat{h}_j$ of degree $d$ over a common time interval $\mathcal{T}_{ij} = [t_0,t_f]$.}
Calculate relative position polynomial $x_{ij}(t) = p_j(t)\hat{h}_j-p_i(t)\hat{h}_i.$ \;
Calculate squared separation distance polynomial of degree $2d$ as $p_{ij}(t) = x_{ij}(t)^\intercal x_{ij}(t)$ whose coefficients are computed from multiplication and addition of the appropriate coefficients of $x_{ij}(t)$.\;
Minimize the squared separation distance using Algorithm \ref{algo:polynomial_minimization} with inputs $p_{ij}(t)$ and $[t_0,t_f]$.\;
\KwOut{Minimum separation distance $d^* = \sqrt{\underset{t \in [t_0,t_f]}{\min} x_{ij}(t)^\intercal x_{ij}(t)}$.}
\caption{Separation minimization}
\end{algorithm}
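A minimal realization of Alg. \ref{algo:separation_minimization} is sketched below; here the inner polynomial minimization is implemented by evaluating the squared-distance polynomial at the interval endpoints and at the real roots of its derivative, which is one valid way to realize the referenced minimization routine.

```python
import numpy as np
P = np.polynomial.Polynomial

def min_separation(p_i, h_i, p_j, h_j, t0, tf):
    """Minimum distance between x_i(t) = p_i(t) h_i and x_j(t) = p_j(t) h_j
    over [t0, tf].  The squared separation is a polynomial, so its minimum
    occurs at an interval endpoint or a real critical point."""
    h_i, h_j = np.asarray(h_i, float), np.asarray(h_j, float)
    # squared separation distance polynomial p_ij(t) = x_ij(t)^T x_ij(t)
    d2 = sum((p_j * h_j[a] - p_i * h_i[a]) ** 2 for a in range(3))
    cand = [t0, tf]
    for r in d2.deriv().roots():
        if abs(r.imag) < 1e-9 and t0 <= r.real <= tf:
            cand.append(r.real)
    # clamp tiny negative values caused by floating-point roundoff
    return min(np.sqrt(max(d2(t), 0.0)) for t in cand)
```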
Consequently, Alg. \ref{algo:segment_pair_collision} is used to check for a collision between a pair of agents for a single pair of polynomial segment trajectories. The algorithm uses a default return of ``false'' and terminates immediately whenever ``true'' is returned; this ``short-circuiting'' dramatically improves computational speed. First, it is checked whether the intersection of the time intervals for the segments is nonempty; otherwise the segments are never active at the same time and no collision could occur.
Then it is determined whether both agents are moving vertically, both horizontally, or one of each. Based on this, it is checked if \eqref{eq:horz_collide} or \eqref{eq:vert_collide} are satisfied, and if so Alg. \ref{algo:separation_minimization} is used to obtain the relevant minimum separation distance and that distance is used in \eqref{eq:horz_collide} or \eqref{eq:vert_collide} to determine the presence of a collision.
Now segment pair collision detection is used to detect collisions between all pairs of full composite trajectories of the polynomial segment type described earlier using Alg. \ref{algo:all_agent_collision}. The paths start and end at the start and goal locations on the ground and reside entirely within the planar region with infinite vertical extent passing through the line segment joining the start and goal, i.e. $p_i(t) \in \{x \mid x_{12} \in \ell_i \} \ \forall t$. A ``short-circuit'' of the full polynomial segment collision check is then accomplished by first doing a computationally cheap check which quickly guarantees safety of many trajectory segment pairs. If the minimum distance between the two line segments of the trajectory pair is greater than the sum of agent radii, then a collision is impossible, since no configuration of the agent centers within the assumed planar regions can satisfy \eqref{eq:horz_collide}. Otherwise, collision detection using Alg. \ref{algo:segment_pair_collision} is run. Using this fast preliminary check is critical to obtaining usable performance since, in all but the most highly congested scenarios, this check catches a large portion of segment pairs which are far apart spatially.
The segment collision check is repeated for all polynomial trajectory segment pairs (for a single pair of agents).
As soon as a collision is detected on a single pair of trajectory segments, the pair of agents is flagged as having a collision and the check progresses to the next pair of agents without finishing checking all remaining segments of the current pair of agents (another ``short-circuit'').
This process is repeated for each pair of agents, resulting in a symmetric boolean matrix of collision flags which can be represented by an upper triangular matrix or flattened vector to reduce the storage space by half.
With exact collision results for the entire group of agents and trajectories in hand, the proposed methodology advances on to resolving the detected collisions.
\begin{algorithm} \label{algo:all_agent_collision}
\DontPrintSemicolon
\KwIn{Collection of $n$ trajectories $\gamma_k(t)$ for $k=1,\ldots,n$.}
\ForEach{Pair of agents $i,j$}{
Calculate the minimum distance $\delta_{ij}^*$ between the two line segments joining the starts and goals e.g. via \cite{Lumelsky1985}. \;
\eIf{$\delta_{ij}^* > R_i+R_j$}{
$B_{ij} \gets$ False
}{
\ForEach{Pair of segments $\gamma_{im}$ in $\gamma_i(t)$ and $\gamma_{jn}$ in $\gamma_j(t)$}{
$B_{ij} \gets$ result of Alg. \ref{algo:segment_pair_collision} with inputs $\gamma_{im}$ and $\gamma_{jn}$. \;
\If{$B_{ij} = $ True}{
\Break
}
}
}
}
\KwOut{Boolean matrix $F \in \mathbb{S}^{n \times n}$ of collision flags.}
\caption{All agents collision check}
\end{algorithm}
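The control flow of Alg. \ref{algo:all_agent_collision}, including both short-circuits, can be sketched as follows; `seg_min_dist` and `segment_pair_collides` are assumed helper callables standing in for the line-segment distance computation \cite{Lumelsky1985} and the single-segment-pair check of Alg. \ref{algo:segment_pair_collision}.

```python
import numpy as np
from itertools import combinations

def all_agents_collision_check(trajs, seg_min_dist, segment_pair_collides, radii):
    """Pairwise collision flags for n agents.  trajs[i] is the list of
    polynomial segments of agent i; seg_min_dist(i, j) returns the min
    distance between the agents' start-goal line segments."""
    n = len(trajs)
    B = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        if seg_min_dist(i, j) > radii[i] + radii[j]:
            continue                      # spatially separated: no collision possible
        for si in trajs[i]:
            for sj in trajs[j]:
                if segment_pair_collides(si, sj):
                    B[i, j] = B[j, i] = True
                    break                 # short-circuit remaining segments
            if B[i, j]:
                break                     # move on to the next agent pair
    return B
```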
\section{Collision resolution}
The overall trajectory generation proceeds by using the general collision detection scheme described in the previous section to determine which agents would collide if they all moved at a single common altitude.
After single-altitude collisions are detected, they are resolved by inserting vertical trajectories and time delays and/or additional altitudes.
\subsection{Collision resolution via time delay} \label{sec:collision_resolution_time}
One way to resolve collisions is to send all agents first to a high holding altitude $\mathcal{A}_{\text{hold}}$, then after some delay times have agents descend vertically down, then move horizontally in a single traversal altitude $\mathcal{A}_{\text{trav}}$, then finally descend to the ground altitude $\mathcal{A}_{\text{gnd}}$ at the goal location. In this scheme, a maximum of three altitudes are needed with a total height of $2.5H$ above the ground plane. By construction, given sufficient delay times, all agents can eventually complete their trajectories without colliding, since in the worst case an agent can simply wait in the holding altitude until all other agents have completed their trajectories and landed. See Fig. \ref{fig:figure_delay_times1} for an illustration of this idea in the case when two identical agents must exchange positions. Although such a troublesome goal assignment would never be chosen by the goal assignment procedure in Sec. \ref{sec:goal_assignment}, since the reversal of the assignment gives a lower cost, it is conceptually useful simply to illustrate the ability of time delay to resolve collisions. The image shows a side-view with dashed lines representing paths and the table shows a sequence of positions that the agents pass through at generalized times $t$ along the linear paths.
\begin{figure}[pos=h]
\centering
\subfloat[\label{1a}]{%
\includegraphics[width=1.6in]{figure_delay_times1.png}}
\hfill
\subfloat[\label{1b}]{%
\begin{tabular}[b]{||c c c||}
\hline
$t$ & $x_1$ & $x_2$ \\
\hline\hline
1 & A & D \\
\hline
2 & C & F \\
\hline
3 & B & F \\
\hline
4 & E & F \\
\hline
5 & D & F \\
\hline
6 & D & E \\
\hline
7 & D & B \\
\hline
8 & D & A \\
\hline
\end{tabular}
}
\caption{(a) Diagram and (b) table for a case which motivates the ability of time delay to resolve collisions.}
\label{fig:figure_delay_times1}
\end{figure}
The reason to have agents wait in a high holding altitude rather than on the ground is simply that the start and goal locations of two agents may be within a colliding distance of each other. If the somewhat weak restriction is imposed that
\begin{align}
\|s_i-g_j\| \geq 2 R \ \forall \ i \neq j \in \mathcal{I}_n
\end{align}
then the possibility of landing on top of another agent waiting on the ground is avoided and the holding altitude is unnecessary and agents can wait on the ground i.e. set $\eta_{\text{hold}}=0$ so that the holding and ground altitudes coincide. In either case, the proposed method works the same way.
Choosing the height of the traversal and holding altitudes as $\eta_{\text{trav}} = \eta_{\text{gnd}} + H = H$ and $\eta_{\text{hold}} = \eta_{\text{trav}} + H = 2H$ ensures agents in different altitudes cannot collide. The traversal and holding altitudes are thus $\mathcal{A}_{\text{trav}} = \mathcal{A}(\eta_{\text{trav}})$ and $\mathcal{A}_{\text{hold}} = \mathcal{A}(\eta_{\text{hold}})$.
Each full trajectory is made up of 4 or 5 subtrajectories, each consisting of 2 or 3 polynomial segments, generated according to the procedure in Sec. \ref{sec:traj}:
\begin{enumerate}
\item Vertical ascent from $\mathcal{A}_{\text{gnd}} \rightarrow \mathcal{A}_{\text{hold}}$:
\begin{align} \label{eq:time_delay_segment_1} s_i \rightarrow s_i + [0,0,\eta_{\text{hold}}]^\intercal \end{align}
\item Stationary wait in $\mathcal{A}_{\text{hold}}$ for time $\tau_i$:
\begin{align} \label{eq:time_delay_segment_2} s_i + [0,0,\eta_{\text{hold}}]^\intercal \rightarrow s_i + [0,0,\eta_{\text{hold}}]^\intercal \end{align}
\item Vertical descent from $\mathcal{A}_{\text{hold}} \rightarrow \mathcal{A}_{\text{trav}}$:
\begin{align} \label{eq:time_delay_segment_3} s_i + [0,0,\eta_{\text{hold}}]^\intercal \rightarrow s_i + [0,0,\eta_{\text{trav}}]^\intercal \end{align}
\item Horizontal movement within $\mathcal{A}_{\text{trav}}$:
\begin{align} \label{eq:time_delay_segment_4} s_i + [0,0,\eta_{\text{trav}}]^\intercal \rightarrow g_i + [0,0,\eta_{\text{trav}}]^\intercal \end{align}
\item Vertical descent from $\mathcal{A}_{\text{trav}} \rightarrow \mathcal{A}_{\text{gnd}}$:
\begin{align} \label{eq:time_delay_segment_5} g_i + [0,0,\eta_{\text{trav}}]^\intercal \rightarrow g_i \end{align}
\end{enumerate}
In the case that agents wait on the ground, the subtrajectory in step 1 can be skipped and the agents ascend rather than descend in step 3.
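For concreteness, the waypoint sequence traced by the five subtrajectories above can be sketched as follows. This is a minimal illustration only; the function and variable names are assumptions for exposition, not part of the actual implementation.

```python
def time_delay_waypoints(s, g, eta_trav, eta_hold, wait_on_ground=False):
    """Return the ordered list of 3-D waypoints for one agent under the
    time-delay scheme: ascend, wait, descend to traversal altitude,
    move horizontally, land.  s and g are (x, y, z) start/goal tuples
    in the ground altitude."""
    sx, sy, sz = s
    gx, gy, gz = g
    if wait_on_ground:
        # Steps 1-2 are skipped: the agent waits at its start location,
        # then moves directly to the traversal altitude (step 3).
        pts = [(sx, sy, sz)]
    else:
        pts = [(sx, sy, sz),
               (sx, sy, sz + eta_hold),   # step 1: ascend to holding
               (sx, sy, sz + eta_hold)]   # step 2: wait for tau_i
    pts += [(sx, sy, sz + eta_trav),      # step 3: descend to traversal
            (gx, gy, gz + eta_trav),      # step 4: horizontal move
            (gx, gy, gz)]                 # step 5: land at goal
    return pts
```

Consecutive waypoint pairs correspond to the start and end conditions of each subtrajectory passed to the polynomial trajectory generator.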
The trajectory generation problem is now reduced to finding the set of time delays $\tau_i$ whose sum is minimal and which also resolves all collisions while adhering to the trajectory generation framework described earlier:
\begin{align} \label{eq:delay_opt_problem}
\underset{\tau}{\text{minimize }} &\sum_{i=1}^{n} \tau_i \\
\text{subject to } & \eqref{eq:collision_free_constraint}, \gamma(t) \leftarrow [\eqref{eq:time_delay_segment_1}, \eqref{eq:time_delay_segment_2}, \eqref{eq:time_delay_segment_3}, \eqref{eq:time_delay_segment_4}, \eqref{eq:time_delay_segment_5}] \nonumber
\end{align}
where \eqref{eq:collision_free_constraint} is the boolean collision avoidance constraint whose value is determined by the collision detection scheme in Sec. \ref{sec:collision_detection}.
This problem is nonconvex due to the collision avoidance constraint and has continuous decision variables, so a discretization scheme is used as an effective heuristic. The heuristic begins by ordering the agents randomly, then for each agent the associated delay time is increased by an increment $\tau_\Delta$ until collisions with all agents whose time delays have been fixed are resolved. This is repeated until each agent's delay time has been established. This procedure is expressed in Alg. \ref{algo:time_delays}.
\begin{algorithm} \label{algo:time_delays}
\DontPrintSemicolon
\KwIn{Collection of $n$ trajectories $\gamma_k(t)$ for $k=1,\ldots,n$.}
\For{$i = 1, \ldots, n$ }{
Initialize $\tau_i=0$ \;
\While{Any $F \leftarrow$ Alg. \ref{algo:all_agent_collision} ($\gamma_k(t)$ for $k=1,\ldots,i$)}{
$\tau_i \leftarrow \tau_i + \tau_\Delta$ \;
Apply time delay $\tau_i$ to trajectory $\gamma_i(t)$ }
}
\KwOut{Collection of $n$ collision-free trajectories $\gamma_k(t)$ with included time delays $\tau_k$ for $k=1,\ldots,n$}
\caption{Collision resolution via time delays}
\end{algorithm}
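The greedy structure of Alg. \ref{algo:time_delays} can be sketched in a few lines. Here `collides` is a placeholder for the pairwise collision check of Sec. \ref{sec:collision_detection}; all names and the `tau_max` safety valve are illustrative assumptions, not the authors' implementation.

```python
def assign_time_delays(trajs, collides, tau_delta=0.1, tau_max=1e4):
    """Greedy heuristic: fix agents one at a time, increasing each agent's
    delay by tau_delta until it is collision-free against all previously
    fixed agents.  collides(ti, tj, di, dj) must report whether
    trajectories ti and tj collide when delayed by di and dj seconds."""
    delays = []
    for i, traj in enumerate(trajs):
        tau = 0.0
        while any(collides(traj, trajs[j], tau, delays[j])
                  for j in range(i)):
            tau += tau_delta
            if tau > tau_max:  # guard; finite termination holds by construction
                raise RuntimeError("delay search did not terminate")
        delays.append(tau)
    return delays
```

A toy usage with 1-D "trajectories" modeled as occupancy intervals shows the expected one-by-one scheduling behavior in the worst case.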
Although this heuristic is not assured to find the global minimum of the problem in \eqref{eq:delay_opt_problem}, the solutions found are empirically good and importantly are guaranteed to be found after a finite number of computations. To see this, consider the case depicted in Fig. \ref{fig:figure_delay_times1} where each agent proceeds one-by-one; the first agent descends from the holding altitude and completes its full trajectory while all remaining agents remain in the holding altitude, then the second agent does the same and so forth until all agents have landed.
Let $T_j$ for $j=1,\ldots,n$ be the times taken by each agent to execute trajectory segments 3, 4, 5 in \eqref{eq:time_delay_segment_3}, \eqref{eq:time_delay_segment_4}, \eqref{eq:time_delay_segment_5}, with $T_j$ ordered from greatest to least. Then an upper bound on the number of time delay increments needed for any agent to avoid any single previously fixed agent is $n_{\text{inc}, \max} = \text{ceil}(T_1/\tau_\Delta)$, since for any greater time delay a collision is not possible, as explained earlier in the discussion of Fig. \ref{fig:figure_delay_times1}. Applying this argument iteratively shows that an upper bound on the number of increments for $\tau_i$ is $n_{\text{inc}, i} \leq i \times n_{\text{inc}, \max}$, and thus an upper bound on the total number of increments is
\begin{align}
n_{\text{inc}, \text{tot}} = \sum_{i=1}^n n_{\text{inc}, i} \leq \frac{n(n+1)}{2} n_{\text{inc}, \max} ,
\end{align}
which is clearly $\mathcal{O}(n^2)$.
In practice, many fewer increments are required than this conservative upper bound.
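As a numerical check, the conservative bound can be evaluated directly (function name illustrative):

```python
import math

def max_total_increments(T1, tau_delta, n):
    """Worst-case total number of delay increments across n agents:
    agent i needs at most i * ceil(T1 / tau_delta) increments, and
    summing i = 1..n gives n(n+1)/2 * ceil(T1 / tau_delta)."""
    n_inc_max = math.ceil(T1 / tau_delta)
    return n * (n + 1) // 2 * n_inc_max
```

For example, with a slowest segment time of 2 s, an increment of 0.5 s, and 10 agents, at most 220 increments are ever evaluated, confirming the $\mathcal{O}(n^2)$ growth.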
The ordering of agents could likely be further improved, e.g. by prioritizing according to some metric such as shortest time in horizontal flight, but random ordering was found to give good results.
\subsection{Collision resolution via altitude assignment} \label{sec:collision_resolution_altitude}
Another way to resolve collisions is by finding an assignment to a set of altitudes and sending agents on trajectories that move horizontally only in these altitudes. The altitudes are given sufficient vertical separation to ensure clearance between agents in different altitudes regardless of horizontal position. Additional wait time and holding altitudes are introduced to resolve potential secondary collisions induced by the primary collision resolution.
There are $m$ traversal altitudes $\mathcal{A}_{\text{trav,i}}$ for $i \in \mathcal{I}_m$ and $h$ holding altitudes $\mathcal{A}_{\text{hold,i}}$ which are inserted between traversal altitudes and indexed to match the traversal altitudes, although in general $h \leq m-1$.
In this scheme, a maximum of $n$ traversal altitudes and $n$ holding altitudes are needed in addition to the ground altitude.
Define the $n \times m$ boolean altitude assignment matrix $B$, which assigns agents to altitudes, as
\begin{equation}
B_{ij} =
\begin{cases}
1 & \text{if agent } i \text{ is assigned to altitude } j\\
0 & \text{otherwise}
\end{cases}
\end{equation}
Therefore in row $i$ of $B$, denoted as $B_i$, the index where $B_{ij}=1$ gives the altitude assigned to agent $i$. Alternatively, in column $j$ of $B$ the indices where $B_{ij}=1$ give the agents assigned to altitude $j$.
All agents are assigned to altitudes in a one-to-many mapping, so
\begin{equation} \label{eq:assignment_alts}
B^\intercal B = D_{m}
\end{equation}
where $D_m$ is an $m\times m$ diagonal matrix whose entry $D_{ii}$ is the integer number of agents assigned to altitude $i$.
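The one-to-many property in \eqref{eq:assignment_alts} is easy to verify numerically; a small pure-Python check (illustrative, representing $B$ as nested lists of 0/1 entries):

```python
def check_assignment(B):
    """Verify B^T B is diagonal, returning its diagonal: the number of
    agents assigned to each altitude.  B is a list of n rows, each a
    list of m 0/1 entries with exactly one 1 per row."""
    n, m = len(B), len(B[0])
    # each agent is assigned to exactly one altitude
    assert all(sum(row) == 1 for row in B)
    BtB = [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(m)]
           for i in range(m)]
    # off-diagonal entries vanish since rows of B have a single 1
    assert all(BtB[i][j] == 0 for i in range(m) for j in range(m) if i != j)
    return [BtB[i][i] for i in range(m)]
```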
Each full trajectory is made up of 4 or 6 subtrajectories, each consisting of 2 or 3 polynomial segments, generated according to the procedure in Sec. \ref{sec:traj}:
\begin{enumerate}
\item Vertical ascent from $\mathcal{A}_{\text{gnd}} \rightarrow \mathcal{A}_{\text{trav,i}}$:
\begin{align} \label{eq:altitude_segment_1} s_i \rightarrow s_i + [0,0,\eta_{\text{trav,i}}]^\intercal \end{align}
\item Stationary wait in $\mathcal{A}_{\text{trav,i}}$ until global time $t_1$:
\begin{align} \label{eq:altitude_segment_2} s_i + [0,0,\eta_{\text{trav,i}}]^\intercal \rightarrow s_i + [0,0,\eta_{\text{trav,i}}]^\intercal \end{align}
\item Horizontal movement within $\mathcal{A}_{\text{trav,i}}$:
\begin{align} \label{eq:altitude_segment_3} s_i + [0,0,\eta_{\text{trav,i}}]^\intercal \rightarrow g_i + [0,0,\eta_{\text{trav,i}}]^\intercal \end{align}
\item Vertical descent from $\mathcal{A}_{\text{trav},i} \rightarrow \mathcal{A}_{\text{hold},i}$:
\begin{align} \label{eq:altitude_segment_4} g_i + [0,0,\eta_{\text{trav},i}]^\intercal \rightarrow g_i + [0,0,\eta_{\text{hold},i}]^\intercal \end{align}
\item Stationary wait in $\mathcal{A}_{\text{hold,i}}$ for time $\tau_i$:
\begin{align} \label{eq:altitude_segment_5} g_i + [0,0,\eta_{\text{hold},i}]^\intercal \rightarrow g_i + [0,0,\eta_{\text{hold},i}]^\intercal \end{align}
\item Vertical descent from $\mathcal{A}_{\text{hold},i} \rightarrow \mathcal{A}_{\text{gnd}}$:
\begin{align} \label{eq:altitude_segment_6} g_i + [0,0,\eta_{\text{hold},i}]^\intercal \rightarrow g_i \end{align}
\end{enumerate}
where the final 3 subtrajectories may be collapsed to a single vertical descent from $\mathcal{A}_{\text{trav},i} \rightarrow \mathcal{A}_{\text{gnd}}$.
Similarly to the collision resolution via time delays, the trajectory generation problem is now reduced to finding the altitude assignment and set of time delays $\tau_i$ which minimize the sum of flight times and also resolve all collisions while adhering to the trajectory generation framework described earlier:
\begin{align}
\underset{B, \tau}{\text{minimize }} &\sum_{i=1}^{n} \tau_i \\
\text{subject to } & \eqref{eq:collision_free_constraint}, \gamma(t) \leftarrow [\eqref{eq:altitude_segment_1}, \eqref{eq:altitude_segment_2}, \eqref{eq:altitude_segment_3}, \eqref{eq:altitude_segment_4}, \eqref{eq:altitude_segment_5}, \eqref{eq:altitude_segment_6}] \nonumber
\end{align}
\subsubsection{Primary collisions}
Primary collisions are those resulting from two agents moving horizontally in a shared altitude. These are resolved by altitude assignment. As a heuristic for minimizing the sum of flight times, one might seek to minimize the number of altitudes required so that time spent in vertical motion is minimized. However even finding the optimal altitude assignment which minimizes the number of altitudes is a hard nonconvex combinatorial problem, so a similar procedure as in collision resolution via time delays is used to find the altitude assignment $B$. Agents are prioritized randomly, then each agent is assigned the lowest altitude possible that resolves primary collisions with all previously assigned agents. If no such altitude exists, a new one is created at a height above the previous highest altitude by a vertical spacing of $H$. This is repeated for all agents. By construction, such an assignment guarantees that there will be no collisions during the horizontal movements. Alg. \ref{algo:altitude_assignment} documents this procedure using mathematical notation.
\begin{algorithm} \label{algo:altitude_assignment}
\DontPrintSemicolon
\KwIn{Boolean collision flag matrix $F \in \mathbb{S}^{n \times n}$.}
Initialize $m=1$ \;
\For{$i = 1, \ldots, n$ }{
\For{$j = 1, \ldots, m$ }{
\uIf{not any $F_{i,k}$ for $k \ | \ B_{k,j}==\text{True}$}{
$B(i, j) = $ True \;
\Break
}
\uElseIf{$j==m$}{
$m \leftarrow m+1$ \;
$B(i, m) = $ True \;
}
\Else{
Continue
}
}
}
\KwOut{Boolean altitude assignment matrix $B \in \{0,1\}^{n \times m}$ that resolves primary collisions.}
\caption{Altitude assignment}
\end{algorithm}
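Alg. \ref{algo:altitude_assignment} amounts to a greedy coloring of the collision graph; a compact sketch follows (names illustrative, using 0-indexed altitudes rather than the 1-indexed notation of the algorithm):

```python
def assign_altitudes(F):
    """Greedy altitude assignment.  F[i][j] is truthy iff agents i and j
    would collide when flown in the same altitude.  Returns alt[i], the
    altitude index assigned to agent i; a new altitude is opened only
    when agent i conflicts with every existing altitude."""
    n = len(F)
    alt = []
    num_alts = 0
    for i in range(n):
        for j in range(num_alts):
            # altitude j is usable if no previously assigned agent in it
            # collides with agent i
            if not any(F[i][k] for k in range(i) if alt[k] == j):
                alt.append(j)
                break
        else:
            alt.append(num_alts)  # open a new, higher altitude
            num_alts += 1
    return alt
```

By construction no two agents sharing an altitude have a collision flag set, so primary collisions are resolved.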
\subsubsection{Secondary collisions}
Although altitude assignment resolves primary collisions, the possibility remains of secondary collisions during the vertical descent movements down towards the goals on the ground. These are easily detected by the same collision detection scheme in Sec. \ref{sec:collision_detection}. Secondary collisions are exhaustively partitioned into two types: exit collisions and entrance collisions. In practice, such secondary collisions were found to be exceedingly rare, but they must nevertheless be prevented.
\paragraph{Exit collisions}
In an exit collision, a descending agent is struck by another agent moving horizontally in the same traversal altitude. To resolve this, a simple enlargement of the collision radius is used. The longest time $T_\text{exit}$ that any agent could take to exit its altitude is calculated; this is easily accomplished by generating a trajectory which descends vertically downwards by $H$ (the spacing between two altitudes). This captures the effect of all position derivative constraints imposed on the agents. This is also conservative since some agents may not have to come to a full stop at the altitude below; some agents will continue descending and accelerating which would reduce the time taken to exit the altitude, but this is ignored for simplicity. Next, the greatest distance $L_\text{exit}$ that the fastest agent would traverse horizontally moving at maximum speed over the time $T_\text{exit}$ is calculated. Then the collision radii of all agents are increased by $L_\text{exit}/2$. Thus by using the same collision detection scheme in Sec. \ref{sec:collision_detection} it is ensured that agents maintain an additional horizontal clearance of $L_\text{exit}$ at all times, which by construction ensures that exit collisions are impossible.
Unfortunately this procedure requires the collision radii to be increased by an amount proportional to the maximum speed of the agents, but for agents with high maximum acceleration relative to the maximum speed, such as quadrotors, the detriment is not too severe. The enlargement of the collision radii is performed as the first step of the overall collision resolution, prior to finding the altitude assignment to resolve primary collisions.
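The radius enlargement step can be sketched as follows; in practice $T_\text{exit}$ would come from generating the vertical descent trajectory of height $H$ under the imposed derivative constraints, and the names here are illustrative.

```python
def enlarge_radii(radii, v_max, T_exit):
    """Enlarge every collision radius by L_exit / 2, where L_exit is the
    farthest any agent could travel horizontally at maximum speed v_max
    during the slowest altitude exit of duration T_exit.  Running the
    planar collision check with the enlarged radii enforces an extra
    horizontal clearance of L_exit, ruling out exit collisions."""
    L_exit = v_max * T_exit
    return [R + L_exit / 2.0 for R in radii]
```

Since two enlarged agents must stay $L_\text{exit}$ further apart than before, a horizontally moving agent can never reach a descending one before it has cleared the altitude.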
\paragraph{Entrance collisions}
In an entrance collision, a descending agent enters a lower traversal altitude at the same time as another agent is moving horizontally underneath. To resolve an entrance collision, a holding altitude is placed between the descending agent's traversal altitude and the next lowest traversal altitude (if one does not already exist). This gives the descending agent a place to wait while the other agent moves out of the way. Once the holding altitude has been inserted, new trajectories are generated and the entire check must begin again from the point where the altitude assignment was made. In particular, the offending descending agent is made to come to a full stop and wait in its (newly inserted) holding altitude. If an entrance collision still exists with this agent, delay time is added according to the same scheme as in Section \ref{sec:collision_resolution_time}. Again, by construction, given sufficient delay time all the possible collisions with agents at lower heights will be resolved since those agents can all land. Agents in the lowest traversal altitude will clearly not encounter this type of secondary collision, and so can complete their trajectories without collision. Arguing inductively, since the lowest agents have collision-free trajectories, and entrance collisions can be resolved for agents in each successively higher altitude, all entrance collisions can be resolved. Since there are a finite number of altitudes, it also follows that the time delays required are also finite.
The roles of each agent in an entrance collision are distinguished by the collision detection algorithm simply by noting the heading vector of each agent.
{ As a final remark,} in the worst case $n$ traversal altitudes and $n$ holding altitudes are needed, and thus by construction the computations terminate in finite time.
{
To conclude the algorithmic development, Figure \ref{fig:flow} gives a broad description of all the steps involved in the method and their relationships.
Evaluation of the proposed schemes is presented next, both in computer simulations and in deployment on a physical testbed.
}
\begin{figure}[pos=h]
\centering
\includegraphics[width=2.5in]{flow_diagram.pdf}
\caption{Diagram of information flow for trajectory generation.}
\label{fig:flow}
\end{figure}
\section{Simulation results} \label{sec:simulation_results}
Both the computer simulations and physical experiments were based on the Crazyswarm, a hardware and software platform that serves as a research testbed for quadrotor autonomy, which is described in more detail in Section \ref{sec:experimental_results}.
Throughout the simulations, the vehicle parameters used for trajectory planning were chosen to match the actual Crazyswarm platform used in the physical experiments.
The Crazyflie quadrotor vehicles had a nominal outer diameter of 14 cm and height of 4 cm, while the diameter and height of the cylinders used for trajectory planning were enlarged to 30 cm and 40 cm respectively; see Section \ref{sec:experimental_results} for the rationale of this enlargement.
The kinematic constraints imposed during trajectory generation for all simulations are listed in Table \ref{tab:table_kinematic_constraints}. These correspond to conservative values computed by scaling down the most aggressive values the physical Crazyswarm platform could experience without significant tracking error.
\begin{table}
\caption{Kinematic constraints}
\label{tab:table_kinematic_constraints}
\centering
\normalsize
\begin{tabular}{ c c c }
\toprule
\ & \multicolumn{2}{c}{Upper limit}\\
Time derivative & Horizontal & Vertical \\
\midrule\\
\addlinespace[-3ex]
Speed $(m/s)$ & 0.2 & 0.2
\\
Acceleration $(m/{s^2})$ & 0.5 & 0.5
\\
Jerk $(m/{s^3})$ & 10 & 10
\\
\bottomrule
\end{tabular}
\end{table}
For the time delay increase rule, used by both algorithms discussed in Sections \ref{sec:collision_resolution_time} and \ref{sec:collision_resolution_altitude}, an addition rule with an increment of $\tau_\Delta = 0.1$~s was used, which was found empirically to strike a good balance between computation time and solution quality.
In order to analyze the performance of the proposed algorithms, Monte Carlo trials were performed with start and goal locations generated randomly with uniform probability over a square of side length $S$. In all trials all agents were identical so that collision volume dimensions were $R_i=R$, $H_i=H$ and position derivatives were $\delta_i=\delta$ for all $i \in \mathcal{I}_n$. The number of agents $n$ and the area density $\eta$ were varied, where $\eta$ is defined as the ratio of the summed area of all agents' projection onto the ground to the area on the ground that any projection could occupy:
\begin{equation*}
\eta = \frac {A_{\text{agents}}}{A_{\text{space}}} = \frac{n\pi R^2}{S^2+4RS+\pi R^2}.
\end{equation*}
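The density for a given configuration is computed directly from this formula; the denominator is the area of the side-$S$ square inflated by $R$ (its Minkowski sum with an agent's disc). A minimal helper (name illustrative):

```python
import math

def area_density(n, R, S):
    """Area density eta: total ground footprint of n agents of radius R,
    divided by the area any footprint could occupy, i.e. the side-S
    square inflated by R with area S^2 + 4*R*S + pi*R^2."""
    return n * math.pi * R**2 / (S**2 + 4.0 * R * S + math.pi * R**2)
```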
To ensure initial and terminal configurations were noncolliding, a minimum start-start and goal-goal separation distance of $2R$ was imposed. This led to an upper bound on the density, which occurs when the start locations are hexagonally close packed; proofs of this fact date back to Lagrange in 1773, with the first universally accepted proof delivered by Fejes Tóth in 1942 \cite{Toth1942}. For a separation of $2R$ the upper limit of density is $\frac{\pi}{2\sqrt{3}}\approx0.9068$ and for a separation of $2\sqrt{2}R$ as in~\cite{turpin2014} the limit is $\frac{\pi}{4\sqrt{3}}\approx0.4534$. For reference, a typical area density encountered in commercial aircraft traffic management is on the order of $10^{-5}$~\cite{air2016}. For applications involving many unmanned aerial robots the traffic is considerably more dense, so simulations were performed over a wide range of densities.
\subsection{Time delay distribution}
Fig.~\ref{time_delay_histogram} shows a histogram of time delays using the proposed time delay collision resolution method for $n=1000$ agents and a high density of $\eta = 10^{-1/2} \approx 0.31$ { for a single random Monte Carlo problem instance described in Section \ref{sec:simulation_results}}. This shows that, even when start and goal locations are very dense, most agents have zero or low-valued time delays.
\begin{figure}[pos=h]
\centering
\includegraphics[width=3in]{time_delay_histogram.png}
\caption{Histogram of time delays for 1000 agents with $\eta = 10^{-1/2} \approx 0.31$.}
\label{time_delay_histogram}
\end{figure}
\subsection{Number of altitudes required}
Using altitude assignment, the number of altitudes required to resolve collisions was studied. Fig.~\ref{fig:AltQuantMonteCarlo} shows the number of flight altitudes (altitudes other than the ground) as a function of area density. As expected, the number of altitudes required grew as the density increased as a result of more potential collisions, but only a few altitudes were required even for highly dense scenarios.
\begin{figure}[pos=h]
\centering
\includegraphics[width=2in]{figure_monteCarloAltitudes_num_alts.png}
\caption{Number of flight altitudes required to resolve collisions as a function of area density. The minimum possible number of altitudes was 1. The number of agents was held constant at $n = 100$ and 100 trials were run at each density. Individual data for each agent in each trial are plotted as points. The mean, interquartile range (25th to 75th percentile), and full range (0th to 100th percentile) are plotted as a bold black line, dark shaded region, and light shaded region. The vertical axis is linear scaled and the horizontal axis is log scaled with base 10. }
\label{fig:AltQuantMonteCarlo}
\end{figure}
\subsection{Normalized flight times}
To analyze the relative degradation in flight times due to avoiding collisions (altitude changes and time delays), the flight time data are normalized by dividing by the average time spent in horizontal motion for each trial.
The time spent in horizontal motion can be viewed as unavoidable, since this is the minimum time which must be spent to reach the goals even if collisions were ignored.
\subsubsection{Effect of collision resolution}
From Fig. \ref{fig:monteCarloDelays}, for this class of random scenarios, it is evident that as the density increases, the total time taken increases as a larger portion of the time is spent moving vertically and waiting. From the zoomed portion, it is evident that the average induced degradation is manageable, being virtually negligible at low agent densities and peaking at around $60\%$ worse than the lower bound at an agent density of ${\eta = 10^{-1/2} \approx 0.316}$ which represents a highly congested scenario as can be seen in Figure \ref{fig:example_N_100}.
Fig. \ref{fig:monteCarloAltitudes} shows trends for the altitude assignment method similar to those in Fig. \ref{fig:monteCarloDelays} for the time delay method, but with less time spent waiting, more time spent in vertical motion, and less time spent in total; the average induced degradation is again virtually negligible at low agent densities and peaks at around $20\%$ worse than the lower bound at a density of ${\eta = 10^{-1/2} \approx 0.316}$.
\begin{figure}[pos=h]
\centering
\subfloat[Horizontal motion time\label{fig:monteCarloDelays_a}]{%
\includegraphics[width=1.6in]{figure_monteCarloDelays_t_horz.png}}
\hfill
\subfloat[Vertical motion time\label{fig:monteCarloDelays_b}]{%
\includegraphics[width=1.6in]{figure_monteCarloDelays_t_vert.png}}
\hfill
\subfloat[Waiting time\label{fig:monteCarloDelays_c}]{%
\includegraphics[width=1.6in]{figure_monteCarloDelays_t_wait.png}}
\hfill
\subfloat[Total time \label{fig:monteCarloDelays_d}]{%
\includegraphics[width=1.6in]{figure_monteCarloDelays_t_total.png}}
\hfill
\subfloat[Total time, zoomed \label{fig:monteCarloDelays_e}]{%
\includegraphics[width=1.6in]{figure_monteCarloDelays_t_total_zoom.png}}
\caption{Time spent in (a) horizontal motion only, (b) vertical motion only, (c) waiting only, and (d) total, using collision resolution via time delay. A y-axis zoomed view of (d) is given in (e). The number of agents was held constant at $n = 100$ and 100 trials were run at each density. Individual data for each agent in each trial are plotted as points. The mean, interquartile range (25th to 75th percentile), and full range (0th to 100th percentile) are plotted as a bold black line, dark shaded region, and light shaded region. The vertical axis is linear scaled and the horizontal axis is log scaled with base 10.}
\label{fig:monteCarloDelays}
\end{figure}
\begin{figure}[pos=h]
\centering
\subfloat[Horizontal motion time\label{fig:monteCarloAltitudes_a}]{%
\includegraphics[width=1.6in]{figure_monteCarloAltitudes_t_horz.png}}
\hfill
\subfloat[Vertical motion time\label{fig:monteCarloAltitudes_b}]{%
\includegraphics[width=1.6in]{figure_monteCarloAltitudes_t_vert.png}}
\hfill
\subfloat[Waiting time\label{fig:monteCarloAltitudes_c}]{%
\includegraphics[width=1.6in]{figure_monteCarloAltitudes_t_wait.png}}
\hfill
\subfloat[Total time \label{fig:monteCarloAltitudes_d}]{%
\includegraphics[width=1.6in]{figure_monteCarloAltitudes_t_total.png}}
\hfill
\subfloat[Total time, zoomed \label{fig:monteCarloAltitudes_e}]{%
\includegraphics[width=1.6in]{figure_monteCarloAltitudes_t_total_zoom.png}}
\caption{Time spent in (a) horizontal motion only, (b) vertical motion only, (c) waiting only, and (d) total, using collision resolution via altitude assignment. A y-axis zoomed view of (d) is given in (e). The number of agents was held constant at $n = 100$ and 100 trials were run at each density. Individual data for each agent in each trial are plotted as points. The mean, interquartile range (25th to 75th percentile), and full range (0th to 100th percentile) are plotted as a bold black line, dark shaded region, and light shaded region. The vertical axis is linear scaled and the horizontal axis is log scaled with base 10.}
\label{fig:monteCarloAltitudes}
\end{figure}
In Figures \ref{fig:example_N_100} and \ref{fig:example_N_1000} example trajectories generated by the proposed algorithm are shown. Figure \ref{fig:example_N_100} gives a visualization of the scale of the area density, while Figure \ref{fig:example_N_1000} demonstrates the ability of the proposed algorithm to plan trajectories for a large number of vehicles navigating between arbitrary locations.
\begin{figure}[pos=h]
\centering
\subfloat[${\eta = 10^{-1/2} \approx 0.316}$\label{fig:example_N_100_density_0p3162}]{%
\includegraphics[width=1.6in]{example_N_100_density_0p3162.png}}
\hfill
\subfloat[${\eta = 10^{-3/2} \approx 0.0316}$\label{fig:example_N_100_density_0p03162}]{%
\includegraphics[width=1.6in]{example_N_100_density_0p03162.png}}
\caption{Example trajectories using collision resolution via time delays with 100 agents at two different area densities ($\eta$).}
\label{fig:example_N_100}
\end{figure}
\begin{figure}[pos=h]
\centering
\subfloat[\label{fig:conl1000_1}]{%
\includegraphics[width=1.6in]{conl1000_1.png}}
\hfill
\subfloat[\label{fig:conl1000_2}]{%
\includegraphics[width=1.6in]{conl1000_2.png}}
\hfill
\subfloat[\label{fig:conl1000_3}]{%
\includegraphics[width=1.6in]{conl1000_3.png}}
\hfill
\subfloat[\label{fig:conl1000_4}]{%
\includegraphics[width=1.6in]{conl1000_4.png}}
\hfill
\subfloat[\label{fig:conl1000_5}]{%
\includegraphics[width=1.6in]{conl1000_5.png}}
\hfill
\subfloat[\label{fig:conl1000_6}]{%
\includegraphics[width=1.6in]{conl1000_6.png}}
\caption{Example of 1000 agents spelling out the letters of the authors' lab and university by following collision-free trajectories generated by the proposed algorithm using collision resolution via time delays.}
\label{fig:example_N_1000}
\end{figure}
\subsubsection{Performance relative to alternate methods}
For the purpose of comparing the proposed method to alternate methods e.g. that of \cite{turpin2014}, define the characteristic time $t_c$ which is the time an agent would take to traverse the longest horizontal straight-line path within the space, which for a square space has length $\sqrt{2}S$. For a given trajectory plan, denote the time spent by agent $i$ in horizontal motion and in waiting respectively as $t_{h,i}$ and $t_{w,i}$.
Also define the characteristic normalized time spent in horizontal motion and waiting, $\widetilde{t}_p$, as
\begin{equation*}
\widetilde{t}_p = \frac{\frac{1}{n} \sum_{i=1}^n (t_{h,i}+t_{w,i})}{t_c} .
\end{equation*}
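This metric can be evaluated as follows; the source leaves the traversal speed for $t_c$ implicit, so the assumption here is that the diagonal is traversed at the maximum horizontal speed $v_{\max}$ (names illustrative):

```python
import math

def normalized_time(t_horz, t_wait, S, v_max):
    """Characteristic normalized time: mean horizontal-plus-waiting time
    over all agents, divided by the characteristic time t_c to traverse
    the longest straight-line path in a side-S square (the diagonal,
    length sqrt(2)*S) at speed v_max."""
    n = len(t_horz)
    t_c = math.sqrt(2.0) * S / v_max
    return sum(th + tw for th, tw in zip(t_horz, t_wait)) / n / t_c
```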
For simplicity, collisions were allowed in the simulations using the approach of \cite{turpin2014}, to avoid imposing the $2\sqrt{2}R$ separation condition required for that approach to possess collision-free guarantees; this is conservative in the sense that it could only degrade the apparent advantage of the proposed methods over \cite{turpin2014}. Also for simplicity, degree-1 (constant speed) polynomials were used for trajectory generation.
With respect to the $\widetilde{t}_p$ metric, plotted in Fig.~\ref{fig:TTmonteCarlo}, the proposed altitudes approach gave the best results for all densities. At low densities, the proposed time delay approach gave nearly the same performance as the altitudes approach as a consequence of small time delays, which vanish as the density goes to zero. At higher agent densities, the time delay approach result began to increase as the physical extent of the agents became more influential. Both proposed approaches performed better at all densities than the approach of \cite{turpin2014}.
\begin{figure}[pos=h]
\centering
\includegraphics[width=3in]{TTmonteCarlo.pdf}
\caption{Normalized average flight times using the proposed collision resolution methods vs the approach of \cite{turpin2014}. The mean value from 1000 trials over all $n=100$ agents is plotted.}
\label{fig:TTmonteCarlo}
\end{figure}
\subsection{Computation Time} \label{sec:compSpeed}
Achieving low computation time is an important practical consideration for successful deployment of large robot teams. The simulations were implemented in MATLAB running on a desktop with an AMD Ryzen 7 2700X eight-core processor running at 3.7GHz. The results are given in Figures \ref{fig:CompTime_time_delay} and \ref{fig:CompTime_altitudes}, where reasonable computation times for large teams are observed. Explicitly optimizing the code for performance or parallelization could decrease the computation times even further. The overall computation is split into three major segments: the generation of initial trajectories which occurs when finding the cost matrix for input into the goal assignment, the Hungarian algorithm which actually does the goal assignment, and the combined collision detection and resolution steps. This encompasses nearly all of the computations, with the exception of the base polynomial generation and some post-processing steps which together take negligible time to execute.
The goal assignment computation time grew as $\mathcal{O}(n^3)$, as expected from a standard computational complexity analysis \cite{munkres1957}. The trajectory generation and collision resolution steps grew only as $\mathcal{O}(n^2)$ since the average number of pairwise trajectories and collisions grew with the number of pairs of agents. At ever higher agent numbers it is inevitable that the goal assignment will begin to dominate.
\begin{figure}[pos=h]
\centering
\subfloat[High density ${\eta = 10^{-1/2} \approx 0.316}$\label{fig:figure_monteCarloDelays_compute_time_high_density}]{%
\includegraphics[width=3in]{figure_monteCarloDelays_compute_time_high_density.png}}
\hfill
\subfloat[Low density ${\eta = 10^{-3/2} \approx 0.0316}$\label{fig:figure_monteCarloDelays_compute_time_low_density}]{%
\includegraphics[width=3in]{figure_monteCarloDelays_compute_time_low_density.png}}
\caption{Computational time as a function of number of agents using various trajectories using the time delay collision resolution method. The area density was held constant at (a) ${\eta = 10^{-1/2} \approx 0.316}$ and (b) ${\eta = 10^{-3/2} \approx 0.0316}$. The number of agents varied from 2 to 1024 at each power of 2. The mean value from 10 trials at each number of agents is shown.}
\label{fig:CompTime_time_delay}
\end{figure}
\begin{figure}[pos=h]
\centering
\subfloat[${\eta = 10^{-1/2} \approx 0.316}$\label{fig:figure_monteCarloAltitudes_compute_time_high_density}]{%
\includegraphics[width=3in]{figure_monteCarloAltitudes_compute_time_high_density.png}}
\caption{Computational time as a function of number of agents using various trajectories using the altitudes collision resolution method. The area density was held constant at ${\eta = 10^{-1/2} \approx 0.316}$. The number of agents varied from 2 to 1024 at each power of 2. The mean value from 10 trials at each number of agents is shown.}
\label{fig:CompTime_altitudes}
\end{figure}
\section{Experimental results}
\label{sec:experimental_results}
A series of experiments on physical hardware was performed to validate the performance and safety of the proposed approach.
\subsection{System description}
A proprietary branch of the Crazyswarm system was used, which encompasses both hardware and software \cite{Preiss2017}. State estimation was accomplished by taking position measurements of infrared (IR) markers on each quadrotor with an external Vicon camera system. The point cloud of these individual position measurements was then resolved into body coordinate frame (state) estimates using the object tracker portion of the Crazyswarm software. These state estimates were then used by the Crazyswarm software to generate feedback control signals, which were then broadcast over wireless radios to the flying vehicles and electrically converted to motor voltages, completing the feedback loop. The controller used was the standard ``Mellinger'' controller implemented by the Crazyswarm package, which is a modified version of the nonlinear reference-tracking controller proposed by \cite{mellinger2011} that takes advantage of differential flatness of the quadrotor.
See the documentation at \url{https://github.com/TSummersLab/crazyswarm} for further details of the software and hardware setup.
\begin{figure}[pos=h]
\centering
\includegraphics[width=1.6in]{crazyflie.jpg}
\caption{Photograph of a single Crazyflie quadrotor vehicle.}
\label{fig:crazyflie}
\end{figure}
\subsection{Trajectory tracking errors}
One immediate practical issue was reference trajectory tracking in the presence of noise; although the planned trajectories could be followed perfectly in the absence of disturbances, the presence of disturbances precludes this possibility. Natural sources of disturbances included ambient air currents from air conditioner vents, downwash from other agents, ground aerodynamic effects, other unmodeled dynamics, and sensor (camera) noise. By simple enlargement of the collision volumes and ensuring bounds on the position error of all agents, collision avoidance remained guaranteed. Feedback control within the Crazyswarm package ensured the position deviation of each agent from the desired position remained small at all times. In particular, it was found from the experiments that, during all trajectory traversals, the radial position error was bounded by 8 cm and the vertical position error was bounded by 7 cm; the plot in Figure \ref{fig:position_error} demonstrates satisfaction of these bounds. Additionally, it was found that the effects of downwash were accounted for by extending the bottom of the collision cylinder by an additional 20 cm. Thus, by choosing a collision cylinder with dimensions enlarged by these amounts relative to the physical dimensions of the quadrotor, i.e. a diameter of $14 + 2 \times 8 = 30$ cm and a height of $4 + 2 \times 7 + 20 = 40$ cm, the vehicle was guaranteed to always be strictly contained within the collision volume, maintaining collision avoidance guarantees. This can be observed from Figure \ref{fig:compare_sim_exp_X20_clearance}; the trajectory generation tightly respected the collision constraints, as the minimum clearance approached zero without becoming negative. Likewise, during the physical experiment the agents did not experience any collisions, as evidenced by the strictly positive clearance. This was true for all experiments.
\subsection{Experiment description and findings}
One experiment (``X20'') is now presented with $n = 20$ agents moving in a 2m by 3m room from start locations randomly selected from a grid with 0.5m spacing to goals arranged in an ``X'' configuration roughly 2.8m across the widest section; see Figures \ref{fig:experiment_photo} and \ref{fig:compare_sim_exp_O24_topview}.
In this experiment the time delay collision resolution method was used. This was for the practical reason that the height of the room limited the number of usable altitudes; in outdoor environments the height of the flyable space would be much greater.
The kinematic constraints imposed during trajectory generation were the same as for the simulation results i.e. those listed in Table \ref{tab:table_kinematic_constraints}.
The results in Figures \ref{fig:experiment_photo}, \ref{fig:compare_sim_exp_O24_topview}, \ref{fig:compare_sim_exp_X20_clearance}, and \ref{fig:position_error} demonstrate that the proposed method reliably generated trajectories that could be successfully tracked by a physical quadrotor team and executed in a reasonable time frame with guaranteed absence of collisions.
Videos demonstrating the experiment described in this paper as well as several others are available at \url{https://youtu.be/OapaAQAGWDE}.
The code which implements the algorithms described in this work and which supports both the virtual simulations and physical experiments can be found in \url{https://github.com/TSummersLab/cannon-tags}.
\begin{figure}[pos=h]
\centering
\subfloat[Start\label{fig:X20_top_landed_start}]{%
\includegraphics[width=1.6in]{X20_top_landed_start.png}}
\hfill
\subfloat[End\label{fig:X20_top_landed_end}]{%
\includegraphics[width=1.6in]{X20_top_landed_end.png}}
\hfill
\subfloat[Mid-flight\label{fig:X20_side_midflight}]{%
\includegraphics[width=2.4in]{X20_side_midflight.png}}
\caption{Photographs of experimental setup at the (a) start configuration, top view, (b) end configuration, top view, and (c) mid-flight, side view.}
\label{fig:experiment_photo}
\end{figure}
\begin{figure}[pos=h]
\centering
\includegraphics[width=3in]{compare_sim_exp_X20_topview.png}
\caption{Top-down view of agent centers (filled dots) and trajectories at the beginning of the X20 experiment. Desired trajectory paths shown as dashed lines and actual paths realized by the physical vehicles are shown as solid lines.}
\label{fig:compare_sim_exp_O24_topview}
\end{figure}
\begin{figure}[pos=h]
\centering
\subfloat[Simulation\label{fig:compare_sim_exp_X20_sim_clearance}]{%
\includegraphics[width=1.6in]{compare_sim_exp_O24_sim_clearance.png}}
\hfill
\subfloat[Experiment\label{fig:compare_sim_exp_X20_exp_clearance}]{%
\includegraphics[width=1.6in]{compare_sim_exp_O24_exp_clearance.png}}
\caption{Minimum clearance between each agent and all other agents vs time, showing (a) clearance using the enlarged collision volumes on the disturbance-free simulation trajectories (b) clearance using the actual vehicle boundary volume on the noisy realized physical paths. The minimum clearance of each agent is plotted as a thin line, the mean as a thick line, the range between minimum and maximum shaded in grey, and zero as a dashed line.}
\label{fig:compare_sim_exp_X20_clearance}
\end{figure}
\begin{figure}[pos=h]
\centering
\subfloat[Horizontal\label{fig:position_error_horz}]{%
\includegraphics[width=1.6in]{position_error_horz_X20.png}}
\hfill
\subfloat[Vertical\label{fig:position_error_vert}]{%
\includegraphics[width=1.6in]{position_error_vert_X20.png}}
\caption{Position errors over time for all agents during the X20 experiment with (a) horizontal and (b) vertical components shown. Upper bounds used for trajectory planning are shown as dashed lines.}
\label{fig:position_error}
\end{figure}
\section{Conclusion}
This work demonstrated tractable centralized methods for solving the goal assignment and inter-agent-collision-free trajectory planning problem for multiple robots. The assignment of agents to goals achieved a low total time-in-motion, and the resulting polynomial-in-time trajectories took full advantage of (possibly heterogeneous) speed capabilities. The results of numerical simulations revealed promising decreases in the total time with only mild increases in the computation time over existing approaches, allowing faster task completion in practical terms. The proposed algorithm also allowed us to eliminate restrictions present in other methods such as enforcement of synchronized start and end times and minimum separation of start and goal locations.
Future work is envisioned where the proposed framework would be used as a high-level centralized planner, combined with other decentralized techniques for dealing with lower-level local obstacles and disturbances.
The ability to use different altitudes i.e. all three spatial dimensions is crucial to the proper working of the proposed approach; operating spaces limited to a single 2D plane are not supported. Future work will investigate using curved (polynomial) paths to alleviate this issue while retaining tractability.
Future work also includes extension to agents with more complex dynamics and/or motion constraints, dealing with uncontrolled obstacles, combining time delays with altitudes, reassigning goals dynamically to further reduce would-be collisions, and a parallel implementation to decrease solve times. Investigation of the setting when there are more goals than agents and the setting of multiple stages is also warranted, both requiring dynamic goal assignment and replanning.
Regarding the hardware implementation, refinements to the localization and state estimation furnished by the camera system as well as using more sophisticated controllers which account for downwash and ground effects \cite{Yeo2015,Shi2018} could further reduce the magnitude of the actual position errors and allow shrinkage of the collision volumes.
\clearpage
The estate that Lee Kong Chian built
Lying at the foot of Bukit Timah Hill is a tiny estate that, if not for the Rail Mall that now fronts it and the nearby railway truss bridge, would probably go unnoticed. The estate of 142 households launched a SG50 coffee table book on Sunday, an event to which I was invited and one that also saw the unveiling of a sculpture by Oh Chai Hoo dedicated to the estate. It was at the event that I was to learn that the estate traces its origins to Southeast Asia's "Rubber and Pineapple King", businessman and philanthropist Mr Lee Kong Chian, and that the estate had once been home to Mr S R Nathan (who was to become the sixth President of the Republic of Singapore).
Faces of Fuyong Estate, a SG50 coffee table book produced by residents of the estate.
The name of the estate holds the clue to this origin. Fuyong, or Phoo Yong in Hokkien, and pinyin-ised as Furong (芙蓉) – names by which the estate went – was the village in China's Fujian province from which the illustrious Lee Kong Chian hailed. The land on which the estate now sits was purchased by Lee from a Mr Alexander Edward Hughes. Lee, who pioneered a provident fund based housing scheme to allow his employees to own homes, was persuaded by Mr Lim Koon Teck, his legal adviser and a Progressive Party politician, to allow much needed low cost housing to be built for the public there in the early 1950s, and Phoo Yong Estate was born.
A page in the book. It was on land purchased by Mr Lee Kong Chian, pictured, that Fuyong Estate was developed to serve as much needed low-cost housing in the mid 1950s.
Before the Rail Mall – one of the two rows of single storey houses straddling Jalan Asas in 1989 that have since been converted into The Rail Mall.
Much has changed about the face of the estate and its vicinity since the days when it was known as Phoo Yong, or even in more recent times. In an area once dominated by the factories on the hills, and where the sounds heard through the day once included the rumble of trains and the blasts from the nearby quarries, the estate is today set in an area bathed in the calm of the verdant Bukit Timah Hill, which now paints a much less rowdy backdrop. The rows of houses by the main road, which had housed a mix of businesses that included a coffin shop, have, since the mid 1990s, become the Rail Mall – developed by a subsidiary of the Lee Rubber Company.
The now silent truss bridge, a long-time landmark along Upper Bukit Timah Road.
Two of the estate's oldest residents at the launch event cutting a cake with Dr Vivian Balakrishnan, Minister for Foreign Affairs.
The strong sense of community in the estate was very much in evidence through the launch event, some of which is perhaps embodied in the sculpture that was also unveiled in the estate's Fuyong Park. The piece takes the form of the Chinese character for a person looking forward, and the artist behind it, Oh Chai Hoo, intends it as a symbol of the kampong spirit and the resilience shown by our forefathers.
Taking aim to unveil Oh Chai Hoo's sculpture, which takes the shape of the Chinese character for a person.
The coffee table book is a good little read for anyone interested in the estate and in the area's development. The book traces the estate's transformation and also offers many interesting insights into the estate, such as how Mr Nathan became an early resident. One also learns of the meanings of its road names in Malay: Asas, for example, means foundation; Tumpu, focus; Siap, readiness; and Uji, challenge. There is also a little known fact that gets a mention: having been built as a low cost housing estate, it saw regular visits from the 32-door honey wagon. While there were initial efforts by a resident, Mr Palpoo, to bring in modern sanitation on a private basis in the early 1960s, it wasn't until 1969 that the estate would be fully equipped with flushing toilets – something we in the Singapore of today would find hard to imagine.
A scan from Faces of Fuyong with an aerial view over the estate in 1958. The photograph also shows the railway line, the truss bridge, and Hume Industries and the Ford Factory on the high ground across the road.
The estate is set against a verdant backdrop that gives it an air of calm.
Residents pouring over the book.
Tags: Batu Sembilan, Bukit Timah, Bukit Timah Railway Bridge, Bukit Timan 9MS, Changing Landscapes, Former Rail Corridor, Furong Estate, Fuyong Estate, History, Jalan Asas, Lee Kong Chian, Lim Koon Teck, Ninth Milestone, Phoo Yong Estate, Photography, Rail Corridor, Rail Mall, SG50, Singapore, Steel Truss Railway Bridges, Upper Bukit Timah Road
Categories : Books, Bukit Timah Area, Changing Landscapes, Events, Forgotten Places, Railway, Reminders of Yesterday, Singapore
The changing landscape at the ninth mile
One part of Singapore where the landscape has seemed to be in a state of constant flux – at least in more recent times, is the area from the 9th to the 10th milestone of Bukit Timah. The area is one that has long been associated with the old railway, being one of two locations in the Bukit Timah area where an overhead railway truss bridge can be found, and where the train used to run quite visibly along large stretches of the length of the road.
Seeing the tail end of the trains the area is very much associated with – train operations ceased on 30 June 2011.
A passing train in the 9 1/2 mile area being captured by a crowd in June 2011 – the stretch was one where the trains running close to the road were quite visible.
The truss bridge at the 9th milestone.
The ninth milestone area is now in a state of change.
Another view northwards – road widening work is very noticeable.
A train running across the bridge seen just before the closure of the railway in 2011.
Now abandoned by the railway – which ceased operations through Singapore when its terminal moved to Woodlands Train Checkpoint on 1 July 2011 – the bridge remains as perhaps one of two reminders of the railway, the other being the two rows of single storey houses facing Upper Bukit Timah Road straddling Jalan Asas which we now know as The Rail Mall and which, in being named after the railway, does help to preserve its memory.
The row of single storey houses straddling Jalan Asas in 1989. The houses have since been converted into The Rail Mall (photograph used with the kind permission of Henry Cordeiro).
Another photograph of what today has become The Rail Mall (photograph used with the kind permission of Henry Cordeiro).
All around the area, the construction of Phase II of the Downtown Line (DTL) of the Mass Rapid Transit (MRT) system, which started before the railway abandoned it, is very much in evidence. The work being done has left little in its wake untouched, with a wedge being driven between the two carriageways which make up Upper Bukit Timah Road at its junction with Hillview Road, just north of The Rail Mall, and disfiguring much of the area as we once knew it.
9 1/4 milestone Bukit Timah now dominated by new kids on the block as well as cranes and construction equipment.
Local model and TV host Denise Keller with sister Nadine seen during a Green Corridor organised walk in the area on the final weekend before the train operations ceased in June 2011 – even since then, there has been quite a fair bit of change that has come to the area.
Looking down Hillview Road from the junction, we now see that two landmarks in the area which had survived until fairly recently have also fallen victim to the developments, which will also see roads being widened – a major widening exercise is currently taking place along Upper Bukit Timah Road. A railway girder bridge, which looked as if it was a gateway to the area it hid – one with a housing estate and factories which came up around the 1950s and 1960s – has already been dismantled. That went soon after the railway did. Its removal does pave the way for the road to eventually be widened, thus permitting the private residential developments intended for the vacant plot of land that was occupied by the former Princess Elizabeth Estate. The land for the estate, based on newspaper reports from the 1950s, was a donation made by Credit Foncier to the Singapore Improvement Trust in 1950 and intended for public housing, and has somewhat sacrilegiously been sold off to the highest bidder.
A train crossing the now missing girder bridge at Hillvew Road in early 2011.
Along with the bridge, a building that has long been associated with the corner of Upper Bukit Timah and Hillview Roads is another structure we would soon have to bid farewell to. Completed in 1957 as a branch of the Chartered Bank (which later became Standard Chartered Bank), the building has also long been one of the constants in the area. When the branch vacated the premises early this month, it would have seen some fifty-six years and two months of operation at the building, having opened on 6 April 1957.
The recently closed Chartered Bank branch building with a notice of its closure.
Rendered insignificant by hoardings, towering cranes and construction equipment – as well as more recent buildings in the vicinity that now dominate the landscape – the bank building occupying the corner of Hillview Road on a little elevation was one that, in greener and quieter days, could not be missed. It provided great help to me as a landmark on the bus journeys I took to visit a friend's house up at Chestnut Drive, two bus stops north, back in the 1980s.
The bank as it looked in 2010.
It would probably take a few more years for the dust in the area to settle. And judging by the way developments seem to be taking most of what did once seem familiar, by the time the dust does settle, there may be little for us to make that connection with the world the area did host in days that already seem forgotten.
A last look at a landmark soon to vanish.
Tags: 10th Milestone, 9th Milestone, Bukit Timah, Bukit Timah Branch, Bukit Timah Railway Bridge, Bukit Timah Road, Changing Landscapes, Chartered Bank, Former Rail Corridor, Former Railway Land, Green Corridor, Hill View, Hillview, Hillview Avenue, Hillview Road, Hillview Road Railway Bridge, Old Photographs, Old Singapore, Photographs, Photography, Princess Elizabeth Estate, Rail Bridges, Rail Corridor, Railway, Railway Corridor, Singapore, Standard Chartered Bank, Steel Truss Railway Bridges, Upper Bukit Timah Road
Categories : Bukit Timah Area, Forgotten Buildings, Forgotten Places, Rail Corridor, Railway Land, Reminders of Yesterday, Singapore
Bukit Timah Railway Station revisited
It was in the final days of the Malayan Railway's operations through Singapore, just over a year and a half ago, that the former Bukit Timah Railway Station drew crowds that it had not seen before. The station, built in 1932 as part of the Railway Deviation which took the railway towards a new terminal close to the docks at Tanjong Pagar, was one that was long forgotten. Once where prized racehorses bound for the nearby Turf Club were offloaded, the station's role had over time diminished. Its sole purpose had in the years leading up to its final moments been reduced to that of a point at which authority for the tracks north of the station to Woodlands and south of it to Tanjong Pagar was exchanged through a key token system. This was an archaic signalling practice made necessary by the single track which outbound and inbound trains shared. It had in its final days been the last point along the Malayan Railway at which the practice was still in use, and added to the impression one always had of time leaving the station and its surroundings behind. It was for that sense of the old world – a world which, if not for the railway, might not have existed any more – that it had, in its calmer days, been a place where one could find an escape from the concrete world which in recent years was never far away. It was a world in which the sanity which often eludes the citizens of the concrete world could be rediscovered. It is a world, despite the green mesh fencing now reminding us of its place in the concrete world, which still offers that escape, albeit one which will no longer come with those little reminders of a time we otherwise might have long forgotten.
Scenes from the station's gentler days
Tags: 1932, Bukit Timah, Bukit Timah Railway Bridge, Bukit Timah Railway Station, Bukit Timah railway station building at Blackmore Drive, Bukit Timah Road, Bukit Timah Station, Deviation of Railway Line, Federated Malay States Railway, FMSR, KTM Railway Land, KTMB, Old Turf Club, Photographs, Photography, Railway Journeys, Railway Land, Railway Stations, Road Journeys, Shift of KTMB Station, Singapore, Singapore Railway Realignment of 1932, Squatters on KTM Railway Land, Steel Truss Railway Bridges, Transport of Racehorses by Rail, Transport of Racehorses by Train
Categories : Bukit Timah Area, Forgotten Buildings, Forgotten Places, Rail Corridor, Railway Land, Reflections, Reminders of Yesterday, Singapore
The silence of a world forgotten
I recently had a look in and around the former Bukit Timah Railway Station, lying quiet and abandoned while plans have not been made for its future use. The station, the last on the old Malayan Railway (known in more recent times as Keretapi Tanah Melayu or KTM), where the old key token exchange system was employed, was vacated on 1 July 2011 when the southern terminal of the railway was moved to Woodlands, and is now a conserved building.
A bridge that's now too far.
A world that almost seems forgotten.
The station is one that was built as part of the 1932 railway deviation. The deviation raised the line (hence the four bridges south of Bukit Panjang – one of which, a girder bridge over Hillview Road, has since been removed), as well as turned it towards Holland Road and the docks at Tanjong Pagar. Bukit Timah Railway Station in more recent times prior to its closure operated almost forgotten, seen mainly by passengers on passing trains, operating only in a signalling role. It was only as the closure of the railway line through Singapore loomed that more took notice of the station and the archaic practice of exchanging key tokens.
A window into the forgotten world.
The ghost of station masters past?
Together with the nearby truss bridge – one of two longer span railway bridges in the Bukit Timah area, which in some respects gives the area some of its character – the station lies today somewhat forgotten. The frenzy that accompanied the last days of the railway and the days that followed prior to the removal of the tracks has since died down (the post-track-removal turfing work intended to level the terrain and prevent collection of rain water has probably served to do the opposite and rendered the ground too soft and mushy for a pleasant walk).
The tracks along much of the rail corridor has since been removed with only short sections such as this one at the truss bridge at close to Bukit Timah Railway Station left behind.
Through broken panes, the last half dozen of more than 30 levers that were once found in the signalling room of the station is seen.
While interest in the rail corridor seems to have faded with the passage of time, there may yet be motivation to pay a visit to it in the next month or so. A recent announcement (see Removal of structures along Rail Corridor dated 23 Nov 2012) made by the Singapore Land Authority (SLA) points to the removal of unsound structures. These unsound structures include two of the signal huts at the former level crossings, one of which does have a memorial of sorts to the last day of railway operations and the last train. Besides the huts, some buildings that served as lodgings, including the ones at Blackmore Drive, will also be demolished. Work on removal of the structures, based on the announcement, is to be completed by the end of January 2013, and this December probably offers the last opportunity to see the affected areas of the rail corridor as they might once have been.
A Brahminy Kite flies over the former railway station.
Tags: Blackmore Drive, Brahminy Kite, Bukit Timah Railway Bridge, Bukit Timah Railway Station, Forgotten Buildings, Forgotten Places, Forgotten Structures, Key Token, Photographs, Photography, Rail Corridor, Removal of Unsound Structures, Singapore Land Authority, SLA, Steel Truss Railway Bridges
Categories : Bukit Timah Area, Forgotten Buildings, Forgotten Places, Quiet Moments, Rail Corridor, Railway Land, Reminders of Yesterday, Singapore
The sun sets over the rail corridor
The 17th of July was a day when the railway corridor would have been seen in its original state for the very last time. The corridor, having been one of the few places in Singapore where time has stood still – little had changed over the eight decades since the railway deviation of 1932 – would after the 17th see an alteration that would erase much of the memory of the railway, barely two weeks after the cessation of rail services through Singapore and into Tanjong Pagar. It was a railway that had served to remind us in Singapore of our historical links with the states of the Malayan Peninsula – the land through which the railway ran having been transferred to the Malayan Railway through a 1918 Ordinance, a reminder that has endured well into the fifth decade of our independence.
The 17th of July offered most in Singapore a last chance to walk the tracks ... removal work started the following day with only a short 3km stretch of the tracks opened to the public until the end of July.
It was in the pale light of the moon that my last encounter with the railway tracks in the Bukit Timah Station area began.
The corridor is one that I have had many memories of, having had many encounters with it from the numerous train journeys that I made through Tanjong Pagar, some from my younger days watching from the backseat of my father's car, and also those that I had clothed in the camouflage green of the army during my National Service. There are many parts of it that are special in some way or another to me, having always associated them with that railway we will no longer see, and the last day on which I could be reminded of this warranted a last glance at it – one that got me up well before the break of dawn, so that I could see it as I would always want to remember it.
A scene that would soon only be a memory - the rail corridor on the 17th of July 2011.
It was a short but very pretty stretch of the corridor that I decided to have a last glance at – a stretch that starts at the now empty and silent building that once served as Bukit Timah Station and continues south for another two kilometres or so. It is one marked by some of the most abundant greenery to be found along the corridor, which even from the vantage of the train is always a joy to glance at. Arriving in the darkness of the early morning, it was only the glow of the waning but almost full moon that guided me towards the station, which is now encircled by a green fence that I could barely make out. I was greeted by a menacing red light that shone from the end of the building, one that came from the security camera that even in the dark seemed out of place on the quaint structure that had been the last place along the line where the old fashioned practice of exchanging a key token took place. The crisp morning air and the peace and calm that had eluded the corridor over the two weeks that followed the cessation of railway operations was just what I had woken up for, and I quickly continued on my way down towards the concrete road bridge over the railway at Holland Road.
First light on the 17th along the corridor near Holland Green.
It wasn't long before first light transformed the scene before me into the scene that I desired – one that, through the lifting mist, revealed a picture of calm and serenity that often eludes us as we interact with our urban world. It is a world that I have developed a fondness for, and one in which I could frolic with the colourful butterflies and dragonflies to the songs of joy that the numerous birds that inhabit the area entertain us with. It was a brief but joyous last glance – it wasn't too long before the calm with which the morning started descended into the frenzy of the crowds that the closing of the railway had brought. That did not matter to me, as I had that last glance of the corridor just as I had wanted to remember it, with that air of serenity that I have known it for, leaving it with that and the view of the warm glow of the silent tracks bathed in the golden light of the rising sun etched forever in my memory.
First signs of the crowd that the closing of the railway brought.
A last chance to see the corridor as it might have been for 79 years.
For some, it was a last chance to get that 'planking' shot.
Signs of what lay ahead ... the secondary forest being cleared in the Clementi woodland area to provide access for removal works on the railway tracks in the area.
Weapons of rail destruction being put in place.
The scene at the truss bridge over Bukit Timah Road as I left ...
Despite coming away with how I had wanted to remember the rail corridor, I did take another look at one more stretch of it that evening. It was the stretch just north of the level crossing at Kranji, one that in days now passed would have led to a village on stilts that extended beyond the shoreline, one of the last on our northern shores. The village, Kampong Lorong Fatimah, now lies partly buried under the new CIQ complex, and had stood by the side of the old immigration complex. Today, all that is left of it beyond the CIQ complex is a barren and somewhat desolate-looking piece of land, one that feels cut off from the rest of Singapore. This stretch is where the last two kilometres of the line run before reaching Woodlands Train Checkpoint, an area that is restricted and into which it is not possible to venture. And it is there that all train journeys now end – at a cold and imposing place that doesn't resemble a station in any way.
What's become of the last level crossing to be used in Singapore - the scene at Kranji Level Crossing with road widening works already underway.
Another view of the former level crossing, concrete blocks occupy the spot where the yellow signal hut once stood.
An outhouse - the last remnant of the crossing left standing.
Walking through the area, it is not hard to notice what is left of the huge mangrove swamp that once dominated it – evidence of which lies beyond a girder bridge (the northernmost railway bridge in Singapore and one of three that would be removed) that crosses Sungei Mandai Besar some 700 metres north of the level crossing. The corridor here is rather narrow for the first kilometre or so, with green patches and cylindrical tanks to the east of it and a muddy slope rising to what looks like an industrial area to the west. It is through this area that I passed what was a semaphore signal pole – the northernmost one – before coming to the bridge.
The scene just north of the crossing.
The northernmost semaphore signal for the crossing in Singapore.
The last trolley on the tracks?
The northernmost railway bridge - the girder bridge over Sungei Mandai Besar. The bridge is one of three along the line that will be removed.
Sungei Mandai Besar.
It is about 200 metres beyond the bridge that the corridor starts to fan out to accommodate a loop line, which looked as if it had been in a state of disuse, with sleepers and rails missing from it. To the east of this widened area, tall trees and grassland line the corridor, and to the west, a line of dense trees and shrubs partially obscures part of the mangrove that had once stretched down to Sungei Kadut. It is just north of this that the relatively short trek comes to an abrupt end. On the approach to Woodlands Train Checkpoint, sandbags over what had been the main line and a huge red warning sign serve as reminders of what lies ahead. At the approach to the checkpoint, two signs serve as barriers to entry. Beyond this, one can see a newly installed buffer at the end of the main line, and it is in seeing this that the realisation sets in that this is now the end of the line – not just for the railway that ran through Singapore, but also for that grand old station which now lies cut off from the railway that was meant to elevate it to a status beyond all the stations of the Far East. With the physical link now severed, that promise will never be fulfilled, and all that is left is a building that has lost its soul and now stands in solitude, looking somewhat forlorn.
200 metres north of the bridge, the corridor widens to accommodate a loop line.
Evidence of the mangrove that once dominated the area right down to Sungei Kadut.
The northernmost stretch of the corridor.
Walking the bicycle over the wide stretch just short of Woodlands checkpoint.
Dismantling work that was already in evidence.
Sandbags on what was the main line and a warning posted ...
The end of the line- Woodlands Train Checkpoint lies beyond the signs.
It was at this point that I turned back, walking quietly into the glow that the setting sun had cast on the railway corridor. At Kranji, the setting sun and the skies above seemed to have conspired to provide a fitting and brilliant show over the place where there had once been an equally colourful crossing, with its yellow hut and old-fashioned gate. It was in the golden glow of the sunset that I spotted a familiar face, a fellow traveller on that tearful final journey out of Tanjong Pagar on the morning of the last day of train operations through Singapore, Mr Toh. Mr Toh, who had been travelling on the trains out of and back into Tanjong Pagar since he was a year old, was on his own nostalgia-motivated final journey that day just as I was, and was at Kranji to complete the final leg of his exploration of the entire length of the tracks through Singapore. We exchanged our goodbyes, at the same time saying one last goodbye to the railway, as night fell on the last level crossing used in Singapore, and on the railway corridor as we had known it for one last time.
A track back into the colours of the setting sun.
A final look south towards Kranji Road.
The view of the setting of the sun over the railway at Kranji Road.
Night falls over the railway corridor as we knew it for one last time.
Posts on the Railway through Singapore and on the proposal on the Green Corridor:
I have also put together a collection of experiences and memories of the railway in Singapore and of my journeys through the grand old station which can be found through this page: "Journeys through Tanjong Pagar".
Do also take a look at the proposal by the Nature Society (Singapore) to retain the green areas that have been preserved by the existence of the railway through Singapore and maintain it as a Green Corridor, at the Green Corridor's website and show your support by liking the Green Corridor's Facebook page. My own series of posts on the Green Corridor are at: "Support the Green Corridor".
Tags: Bukit Timah Railway Bridge, Bukit Timah Railway Station, Clementi Woodland, Federated Malay States Railway, Green Corridor, Holland Road, Journeys through Tanjong Pagar, Keretapi Tanah Melayu, Kranji Level Crossing, Kranji Road, Kranji Road Level Crossing, KTM Land, KTMB, Last Days of Tanjong Pagar, Malayan Railway, Railway Girder Bridges, Railway Land, Railway Tracks, Railway Trek, Shift of KTMB Station, Singapore, Sungei Mandai Besar, Train Journeys, We Support the Green Corridor, Woodlands Train Checkpoint
Categories : Bukit Timah Area, Forgotten Places, History, Mangroves, Railway, Railway Land, Singapore, Things I loved about Singapore, Woodlands Road
I took a walk the morning after the party along the corridor that welcomed the last train to pull into Tanjong Pagar, the train which became the last ever Malayan Railway train that the same crowd waved into the darkness of the night. It was a walk to take in the wonderfully fresh and green world that close to eight decades of the railway corridor as we know it today had given us. It is a world that I first became acquainted with through my numerous journeys on the trains we now see no more, a world apart from the modern one we have since become comfortable fitting into, and one that I for one often chose to escape to. The station at Bukit Timah and the area around it is a stretch of the corridor that, in its relative obscurity, has often provided me with a welcome respite from the hectic world that lay just 200 metres away down a narrow path from the station. What is to become of this wonderful world now that we have sent the last of the trains through it off, we don't yet know, but there are certainly many who wish to see the charm of the green world that lines the corridor kept as it now is, providing a world that many in Singapore can, as I have done, run off to …
The former rail corridor provides an escape from the urban world we live in.
The greenery is a refreshing change to the grey world we live in.
The walk down the stretch that I covered started at the bridge at Holland Road, through an area that is possibly one of the more scenic stretches of the corridor, leading to that quiet little building that has in the last month come alive with many hoping to bid farewell to the railway and the wonderful people who ran it through Singapore. As I walked through the clearing mist, I felt a surreal sense of peace, one brought by the silence that has descended on a corridor after eight decades punctured by the occasional sound of steam engines, horns and whistles, and more recently by the drone of the diesel engines and the air-operated horns and whistles that most will now remember. The air of calmness was all-encompassing, and that, with the cool of the morning air, made the walk down that stretch especially invigorating.
The lifting mist enhanced the surreal feel to the now surreally silent corridor.
It wasn't long before I reached the tiny building that served as Bukit Timah Railway Station … and again, the silence that greeted me was somewhat surreal, in stark contrast to the amazing and frenzied scenes of the night's send-off just eight hours before my arrival at the now silent building. The flags that had fluttered from the flagpoles standing between the station's building and the platform were missing, the station's door was firmly shut, and the station stood forlornly alone in a world that no longer has a use for it as a station. At the north end of the station, the sight of a burly security guard against the backdrop of the now silent station and tracks and the green Singapore Land Authority sign confirmed the station's demise … no longer would we see the men in blue working tirelessly, passing and receiving the looped piece of wire rope with a pouch at the end. With the passing of the last train in, the last of the old-fashioned practice of handing authority to the trains on the single stretch of track by means of the key token had also passed into history on the Malayan Railway line …
Bukit Timah Station now sits in silence and wears the forlorn look of an unwanted structure, in contrast to scenes just 8 hours before when a frenzied crowd had gathered in the dark of night to send the last train off. The building is now a conserved building.
The signs are now up … just hours after the handover and a security detail is in place.
The flags have stopped fluttering in the wind and the doors are now closed.
The security detail is provided to guard against any attempts to remove items (some of which are KTM property) from and to prevent vandalism at the station.
The passing of the trains provided what was, I guess, a first opportunity to walk on the bridges – something that many had risked their lives doing when the line was still active, despite the warnings that had been given. Now it is safe, as is the narrow northern stretch of the corridor that lies beyond the truss bridge near Bukit Timah Station. I did just that, walking the narrow three-kilometre length towards the next truss bridge close to where the Rail Mall is. My most recent encounter with it was of course through the opened door of the train, through which the rushing greenery, the yellow of the kilometre markers and the wind blowing in my face provided me with a different perspective to the one I could now take in at leisure. The stretch is one on which work to remove the tracks would come later, with some parts of the track laid with monitoring equipment for the Downtown MRT line, which is being constructed almost parallel to the old railway line. It is the sense of peace and quiet that surrounded me that I enjoyed, together with the wonderful green that made the walk all worthwhile … and while I do feel a deep sense of loss for a railway that I so love, I do hope to see at least the memory of it kept by preserving this ready-made escape from the hectic world we spend too much of our time in. In news that came through on the afternoon after my walk, the URA and SLA have, in an encouraging move, responded to requests by the public to allow the tracks and corridor to be explored by opening up the railway corridor to the public for two weeks. In the same news release, the URA and SLA are also seeking public feedback on the use of the railway land. More information can be found in the URA's news release below.
A last look before the station is fenced off ….
The cessation of train services to Tanjong Pagar allows access for the public to the previously dangerous bridges.
A window into the wonderfully green and peaceful world beyond the road bridges at Rifle Range Road.
The fence of the former Yeo Hiap Seng factory still lines the railway corridor by Rifle Range Road.
The stretch from Rifle Range Road to Hindhede.
A colourful resident of the Green Corridor.
On top of the girder bridge over Hindhede Road – one of the bridges that would be retained.
The approach to the end point of my morning after walk …. the truss bridge near the Rail Mall.
Posts on the Railway through Singapore and on the Green Corridor:
URA/SLA's Press Release
Public works and future plans for former railway land
The lands previously occupied by Keretapi Tanah Melayu (KTM) for railway use have been vested in the Singapore Government with effect from 1 July 2011.
As agreed with Malaysia, Singapore will remove the tracks and ancillary structures of the KTM railway and hand them over to Malaysia. The Singapore Land Authority (SLA) will commence these removal works as well as conduct maintenance works around the various railway sites shortly.
Public Can Access the Railway Tracks
Nevertheless, in response to requests for an opportunity for the public to trek along and experience the tracks, the SLA will be staging its works. From 1 Jul 2011 to 17 Jul 2011, the entire line of railway tracks will be open to the public for 2 weeks, except for some localised areas.
After 17 Jul 2011, a 3km stretch of railway tracks from Rifle Range Road to the Rail Mall will continue to be open to the public till 31 Jul 2011.
As the railway tracks can be narrow and rough at certain locations, members of the public are advised to exercise caution when walking along the track.
The Tanjong Pagar Railway Station and Bukit Timah Railway Station will be closed temporarily to facilitate the moving out of the furniture and equipment by the KTM and its tenants. The SLA will also carry out maintenance works and structural inspection. More information on their re-opening will be provided to the public in due course.
Removal Works along the Railway Tracks
From 1 Jul to 17 Jul 2011, minor works will be carried out at the Bukit Timah Railway Station and the railway crossings at Kranji Road, Sungei Kadut Avenue, Choa Chu Kang Road, Stagmont Ring and Gombak Drive. Members of the public should avoid these work areas which will be cordoned off.
Works to remove the railway tracks along the rest of the former railway line, except for the 3km stretch from Rifle Range Road to the Rail Mall, will commence from 18 July 2011. The removal works include the clearance of minor buildings, sleepers, tracks, cables, gates, posts and debris around the various sites from Tanjong Pagar to Woodlands. Other items to be removed include railway equipment, such as signal lights, level crossings, controllers and traffic lights. The removal works are to be fully completed by 31 December 2011.
Due to these extensive removal works, the affected areas will be secured and cordoned off. For safety reasons, members of the public are advised to keep away from these areas whilst the removal works are ongoing.
Public Feedback Sought
The Urban Redevelopment Authority (URA) will comprehensively review and chart the development plans for the former railway lands and their surrounding areas. As part of its review, the URA will study the possibility of marrying development and greenery, such as applying innovative strategies to maintain a continuous green link along the rail corridor without affecting the development potential of the lands.
The URA welcomes feedback and ideas from the community in shaping the future development plans for the railway lands. The members of the public are invited to visit and provide their ideas at www.ura.gov.sg/railcorridor/.
Singapore Land Authority & Urban Redevelopment Authority
Tags: Bukit Timah Railway Bridge, Bukit Timah Road, Federated Malay States Railway, Green Corridor, Green Corridor Walks, Hindhede Railway Bridge, Journeys through Tanjong Pagar, Keretapi Tanah Melayu, KTMB, Last Day of Operation of KTM in Singapore, Last Day of Operations of Tanjong Pagar Station, Last Days of Tanjong Pagar, Malayan Railway, Nature Society (Singapore), NSS, Preserving the Green Corridor, Public Access to Railway Tracks, Rail Bridges, Railway Corridor, Railway Journeys, Redevelopment of Railway Land, Shift of KTMB Station, Singapore, SLA, Steel Truss Railway Bridges, Train Journeys, Upper Bukit Timah Road, URA, We Support the Green Corridor
Categories : Bukit Timah Area, Forgotten Buildings, Forgotten Places, Railway, Railway Land, Reminders of Yesterday, Singapore, Things I loved about Singapore
A send off at the weekend for our old friends …
Singapore residents were out in force to wave goodbye to the Malayan Railway, which has been very much a part of the island's landscape for over a century, during the final weekend of its operations. It wasn't just at Tanjong Pagar Railway Station – which, possibly because of the last day of operations of its food stalls today, has seen a large increase in visitors over the last week – but at many other places along the line. At the We Support the Green Corridor walk in the morning, the largest crowd seen in the series of walks conducted over several months to raise awareness of the proposal by the Nature Society (Singapore), or NSS, to retain the soon-to-be-vacated railway corridor as a continuous green corridor through Singapore – more than 120 people, including local model and TV host Denise Keller – gathered at the Rail Mall at 8 am to take a three-kilometre walk north. The walk was not only to acquaint themselves with glimpses of the green corridor, but also with an area of historical significance to the first days of Tanjong Pagar Railway Station, being the area from which the first train into Tanjong Pagar had departed with its load of passengers that included Sir Cecil Clementi, the then Governor of Singapore, who opened Tanjong Pagar Railway Station on the 2nd of May 1932.
Among the more than 120 participants in the We Support the Green Corridor Walk was local TV personality and model Denise Keller.
The starting point of the We Support the Green Corridor walk was in the shadow of one of two truss bridges that give the Bukit Timah area its character, which was referred to in a comment left on the Facebook page of We Support the Green Corridor by the Minister of State for National Development, Tan Chuan Jin. The comment seemed to indicate that it, along with the bridge at Bukit Timah Road near Bukit Timah Station and the bridge at Hindhede (at the entrance to Bukit Timah Hill Nature Reserve), would be retained. The news of this was certainly greeted by many with relief and even expressions of joy. The ending point of the walk was at the Bukit Panjang level crossing, the widest level crossing in Singapore, close to where that first train to Tanjong Pagar had departed from a station that no longer exists, Bukit Panjang. Through much of the walk, there were signs of the massive construction efforts for what is ironically a new railway, in the form of the Downtown MRT Line, which takes a course for much of its way along what was the original Singapore to Kranji Line before the line was deviated to turn it towards Tanjong Pagar. It is also ironic that the new railway would in all probability hasten the greying of a corridor that the old railway has for so many years kept green for us.
Participants on a We Support the Green Corridor walk caught a glimpse of a southbound train on the black truss bridge over Upper Bukit Timah Road. Many on the walk expressed relief when they learnt that this bridge was not part of the structures to be removed under the tender awarded to Indeco to dismantle the tracks and ancillary structures, work scheduled to be carried out from July to November 2011.
Through much of the accessible parts of the green corridor and at Bukit Timah Station, there were indeed many who were seen greeting the passing trains – a last chance for many to see trains pass through Singapore and to bid farewell to a railway that will leave many who have ridden it through the archways of the magnificent station at Tanjong Pagar with a sense of sadness and loss, and to a group of people whose dedication has given Singapore a wonderful association with the railway going back to 1903, when the Singapore to Kranji Line was completed. The outpouring of feeling is perhaps driven by the sense of loss not just for a railway that has served us for so long, but also for a landscape that could change drastically once the railway stops operating through Singapore. It is this landscape that many hope will be preserved. There is of course a balance between development and conservation to be found in all this, and while the railway land does free up development opportunities in many parts of Singapore, the benefits of maintaining a continuous green corridor as a shared recreational space – one that can also serve as an uninterrupted path from the north to the south of the island, making the bicycle viable as a means of transport – cannot be understated. It is therefore encouraging that Mr Tan Chuan Jin has in his comments stated that the authorities "remain committed to working closely with NSS and others who love this stretch of land so that we can develop this sensibly together".
Many gathered at many places along the line to wave at the drivers of passing trains.
Many others were seen walking down the tracks for one last time ...
With that, there certainly is hope for a solution that would, as we wave our goodbyes and extend our gratitude to a railway and the men of the railway that we will soon lose, see some of the wonderful places and spaces that the railway leaves behind retained as they are, not just for us but also for our future generations – preserving at least the fond memory of an old railway line that once ran right through the heart of Singapore.
The crowd at Bukit Timah Station.
... a passage to the north which on the 30th of June will no longer be used ...
Information related to the station and its architecture can be found on a previous post: "A final look at Tanjong Pagar Station". In addition to that, I have also put together a collection of experiences and memories of the railway in Singapore and of my journeys through the grand old station which can be found through this page: "Journeys through Tanjong Pagar".
Comments made by Minister of State for National Development Mr Tan Chuan Jin on the We Support the Green Corridor's Facebook Page:
These 3 bridges are part of the agreement that will go back to Malaysia (Sg Mandai, Junction 10 and over Hill View Road). It has been a long negotiation process over many many things. We have retained what we can, including stretches of railway in areas near the stations. I am sure you know that these 3 are not the same as the iconic steel girder (believe he meant "truss") bridges across Upper Bt Timah and Bt Timah Rds. The one at Hinhede will also remain. The other one close to Sunset Way that spans across Ulu Pandan Canal already belongs to us and will remain so.
We remain committed to working closely with NSS and others who love this stretch of land so that we can develop this sensibly together.
Our friends at URA and NParks care for the environment and heritage as much as many of you do but they also have to grapple with the dilemmas of ensuring living space for the many young Singaporeans who will be coming of age in the years ahead. As I have pointed out in my note, we are actively greening and blueing where we can and to work with the environment as much as possible.
Tags: Bukit Timah Railway Bridge, Bukit Timah Road, Denise Keller, Eugene Tay, Green Corridor, Green Corridor Walks, Hillview Road Railway Bridge, Hindhede Railway Bridge, Last Day of Operation of KTM in Singapore, Last Day of Operations of Tanjong Pagar Station, Last Days of Tanjong Pagar, Ministry of National Development, Nature Society (Singapore), NSS, Preserving the Green Corridor, Rail Bridges, Railway Corridor, Shift of KTMB Station, Steel Truss Railway Bridges, Tan Chuan Jin, Tanjong Pagar Railway Station, Upper Bukit Timah Road, We Support the Green Corridor
Categories : Bukit Timah Area, Railway, Railway Land, Reminders of Yesterday, Singapore, Tanjong Pagar
|
Q: PHP security function to filter out malicious code is stripping out legit characters I have a security function which is part of a script. It's supposed to filter out malicious code from being executed via an input form. It works without a problem with normal characters from A-Z, but it rejects inputs with characters such as á, ñ, ö, etc.
What can I do so that form inputs with these characters are not rejected? Here is the function:
function add_special_chars($string, $no_quotes = FALSE)
{
    $patterns = array(
        "/(?i)javascript:.+>/",
        "/(?i)vbscript:.+>/",
        "/(?i)<img.+onload.+>/",
        "/(?i)<body.+onload.+>/",
        "/(?i)<layer.+src.+>/",
        "/(?i)<meta.+>/",
        "/(?i)<style.+import.+>/",
        "/(?i)<style.+url.+>/"
    );
    // Note: the replacement entities in the next six lines were decoded
    // during extraction; they are reconstructed here as the HTML entities
    // the function name and the later html_entity_decode() call imply.
    $string = str_ireplace("&", "&amp;", $string);
    if (!$no_quotes) $string = str_ireplace("'", "&#039;", $string);
    $string = str_ireplace('"', '&quot;', $string);
    $string = str_ireplace('<', '&lt;', $string);
    $string = str_ireplace('>', '&gt;', $string);
    $string = str_ireplace(' ', '&nbsp;', $string);
    foreach ($patterns as $pattern)
    {
        if (preg_match($pattern, $string))
        {
            $string = strip_tags($string);
        }
    }
    $string = preg_replace('#(&\#*\w+)[\x00-\x20]+;#u', "$1;", $string);
    $string = preg_replace('#(&\#x*)([0-9A-F]+);*#iu', "$1$2;", $string);
    $string = html_entity_decode($string, ENT_COMPAT, LANG_CODEPAGE);
    $string = preg_replace('#(<[^>]+[\x00-\x20\"\'\/])(on|xmlns)[^>]*>#iUu', "$1>", $string);
    $string = preg_replace('#([a-z]*)[\x00-\x20\/]*=[\x00-\x20\/]*([\`\'\"]*)[\x00-\x20\/]*j[\x00-\x20]*a[\x00-\x20]*v[\x00-\x20]*a[\x00-\x20]*s[\x00-\x20]*c[\x00-\x20]*r[\x00-\x20]*i[\x00-\x20]*p[\x00-\x20]*t[\x00-\x20]*:#iUu', '$1=$2nojavascript...', $string);
    $string = preg_replace('#([a-z]*)[\x00-\x20\/]*=[\x00-\x20\/]*([\`\'\"]*)[\x00-\x20\/]*v[\x00-\x20]*b[\x00-\x20]*s[\x00-\x20]*c[\x00-\x20]*r[\x00-\x20]*i[\x00-\x20]*p[\x00-\x20]*t[\x00-\x20]*:#iUu', '$1=$2novbscript...', $string);
    $string = preg_replace('#([a-z]*)[\x00-\x20\/]*=[\x00-\x20\/]*([\`\'\"]*)[\x00-\x20\/]*-moz-binding[\x00-\x20]*:#Uu', '$1=$2nomozbinding...', $string);
    $string = preg_replace('#([a-z]*)[\x00-\x20\/]*=[\x00-\x20\/]*([\`\'\"]*)[\x00-\x20\/]*data[\x00-\x20]*:#Uu', '$1=$2nodata...', $string);
    $string = preg_replace('#(<[^>]+[\x00-\x20\"\'\/])style[^>]*>#iUu', "$1>", $string);
    $string = preg_replace('#</*\w+:\w[^>]*>#i', "", $string);
    do
    {
        $original_string = $string;
        $string = preg_replace('#</*(applet|meta|xml|blink|link|embed|object|iframe|frame|frameset|ilayer|layer|bgsound|title|base)[^>]*>#i', "", $string);
    }
    while ($original_string != $string);
    return $string;
}
UPDATE: I found that the following line seems to be causing the problem, but not sure why:
$string = preg_replace('#(<[^>]+[\x00-\x20\"\'\/])style[^>]*>#iUu', "$1>", $string);
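A side note on that culprit line (my own hedged diagnosis, not stated anywhere in this thread): the `u` modifier tells PCRE to treat the subject as UTF-8, and `preg_replace()` returns `NULL` when the subject is not valid UTF-8. Characters like á, ñ and ö submitted in a single-byte encoding such as ISO-8859-1 are invalid UTF-8, so the whole string is wiped, which would look exactly like the input being rejected. A minimal sketch, assuming Latin-1 input and the mbstring extension:

```php
<?php
// "\xE1" is "á" in ISO-8859-1, but it is not valid UTF-8.
$latin1 = "caf\xE1";

// With the /u (PCRE_UTF8) modifier, preg_replace() bails out and
// returns NULL on an invalid-UTF-8 subject – the whole value is lost,
// not just the accented character.
var_dump(preg_replace('#z#u', '', $latin1)); // NULL

// One possible fix (assumes the input really is Latin-1): normalise
// to UTF-8 before any /u regexes run.
$utf8 = mb_convert_encoding($latin1, 'UTF-8', 'ISO-8859-1');
var_dump(preg_replace('#z#u', '', $utf8) === $utf8); // bool(true)
```

If this is the cause, converting or validating the input encoding before the `/u` regexes run would let the accented characters through.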
A: This is a bad idea. The worst part of your function is the html_entity_decode() halfway through, which undermines the first half of this function entirely. The attacker can just encode the quote marks and brackets, and you'll build the payload for the attacker. strip_tags() is a joke and is not a good way to protect against XSS. The main problem with this function is that it is far too simple. HTMLPurifier is made up of thousands of regular expressions and it does a much better job, but it isn't perfect.
You are hardly addressing the most common forms of XSS. XSS is an output problem; you can't expect to pass all input through some magical function and assume it's safe. XSS depends on how the data is used.
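To make the output-escaping point concrete, here is a minimal sketch (my own illustration, not from the original thread; the helper name `e()` is an assumption): store user input as-is and escape at the point of output for the HTML context. This also leaves characters like á, ñ and ö untouched.

```php
<?php
// Escape at output time, for the HTML context, instead of trying to
// sanitise all input up front. ENT_QUOTES covers both ' and "; declaring
// UTF-8 keeps accented characters such as á, ñ, ö intact.
function e(string $value): string
{
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

$comment = "Señor <img src=x onerror=alert(1)>";
echo '<p>' . e($comment) . "</p>\n";
// Prints: <p>Señor &lt;img src=x onerror=alert(1)&gt;</p>
```

The same value would need a different escaper in a different context (URL, JavaScript string, CSS), which is exactly why a single input-side "magic function" cannot work.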
Without actually running your code, I think something like this would bypass it:
<a href='javA%3bS%3bcript:%3balert(1)'>so very broken</a>
or maybe even something more simplistic:
<img src=x onerror=alert(1) />
Like I said, this is a gross oversimplification of an extremely complex problem.
Q: Books similar to "Primes of the form $x^2+ny^2$"

Are there any other books which are similar to the book "Primes of the form $x^2+ny^2$"? Basically, I want a book which starts with a very important classical problem (in this case, which primes can be represented in the form $x^2+ny^2$) and uses that problem as motivation to introduce one mathematical topic (in this case, class field theory and complex multiplication).

Comments:
- Alex Youcis: Does it need to follow through with the problem statement throughout the book like Cox does? There are many books that give great, simple problems as motivation to study the subject, but don't follow through as much. For example, I believe it is Koblitz who motivates the study of modular forms by the consideration of the congruent triangle problem.
- Alexander Gruber: This thread may be of use.
- HSN: The question Alex poses is a relevant one. I know quite some books that start out with an interesting problem as motivation, but none that follow the problem throughout. A nice example would be Washington's book on Elliptic Curves, which starts out with a problem about stacking (cannon-)balls.
- Mohan: That is not necessary. But it is preferable if it gives some sort of historical (or any other) motivation for each topic described in the book.
- Stahl: Siegel's first book in his "Topics in Complex Function Theory" series begins with the problem of trying to find an addition formula for certain integrals that aren't expressible in terms of elementary functions, and how that developed into the study of the Weierstrass $\wp$ function (and more generally elliptic functions). It also develops the beginnings of the theory of Riemann surfaces. Be aware though: the book is definitely not written from a modern perspective.
Inside the new issue of The CEO Magazine Asia, we find out how Atlassian's accidental billionaire Mike Cannon-Brookes went from bootstraps to bonanza, and how…
Inside the first digital-only issue of The CEO Magazine Asia, we take a look at Zoom's phenomenal growth during the pandemic and find out why…
Inside the October issue of The CEO Magazine Asia, find out how Executive Director and Co-CEO Jessica Tan has transformed Ping An Group into one…
Inside the September issue of The CEO Magazine Asia, find out how Daniel Ek turned Spotify into a US$67 billion company and what other startups…
In the August issue of The CEO Magazine Asia, Mary Barra, CEO of multibillion-dollar powerhouse General Motors, reveals what drives her passion for the EV…
July 2021 Issue – The CEO Magazine Asia
Find out how Kering's François-Henri Pinault is leading the charge towards a more sustainable future, in the July issue of The CEO Magazine Asia. Available…
June 2021 Issue – The CEO Magazine Asia
Inside the June issue of The CEO Magazine Asia, we explore how AI impacts our everyday lives and how it's used for social good, plus…
May 2021 Issue – The CEO Magazine Asia
In the May issue of The CEO Magazine Asia, Zilingo Co-Founder and CEO Ankiti Bose tells how she's championing gender equality while building a billion-dollar…
Inside the April issue of The CEO Magazine Asia, we look at how Chairman and CEO Kais Marzouki believes Nestlé Philippines should give back to…
Inside the March issue of The CEO Magazine Asia, read our inspiring interviews to celebrate International Women's Day with the female game-changers who are shaking…
Jan/Feb 2021 Issue – The CEO Magazine Asia
Discover how entrepreneur Patrick Tsang is bridging the gap between the East and West in the January/February issue of The CEO Magazine Asia.
Discover how audacious maverick Bernard Arnault rose to become the head of LVMH and richest man in France in the December issue of The CEO…
In the November issue of The CEO Magazine Asia, we take a look at Joey Wat's incredible career journey, from her humble beginnings as a…
Discover how Uniqlo Founder and CEO Tadashi Yanai built his global fashion empire and became Japan's richest person in the October issue of The CEO…
Sept 2020 Issue – The CEO Magazine Asia
In the September issue of The CEO Magazine Asia, find out how Apple CEO Tim Cook is bouncing back from adversity and fighting for racial…
Discover how TikTok Founder Zhang Yiming became one of China's richest people in the August issue of The CEO Magazine Asia.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,338
|
require File.expand_path(File.dirname(__FILE__) + '/../test_helper.rb')
class AlbumsControllerTest < ActionController::TestCase
tests AlbumsController
with_common :is_user, :a_site_with_album
def default_params
{ :section_id => @album.id }
end
def photo_params
default_params.merge(:photo_id => @photo.id)
end
test "is kind of base_controller" do
@controller.should be_kind_of(BaseController)
end
# FIXME GET to index with sets and tags
describe "GET to index" do
action { get :index, default_params }
it_assigns :section
it_assigns :photos
it_renders_template 'albums'
it_caches_the_page :track => ['@photo', '@photos', '@set', '@commentable', {'@site' => :tag_counts, '@section' => :tag_counts}]
end
describe "GET to show, preview without permissions" do
with :an_unpublished_photo do
action { get :show, photo_params }
it_assigns_flash_cookie :error => :not_nil
it_redirects_to { album_url(@photo.section) }
end
end
describe "GET to show, preview with permissions" do
with :is_superuser, :an_unpublished_photo do
action { get :show, photo_params }
it_assigns :section
it_assigns :photo
it_renders_template 'show'
it_does_not_cache_the_page
end
end
describe "GET to show" do
with :a_published_photo do
action { get :show, photo_params }
it_assigns :section
it_assigns :photo
it_renders_template 'show'
it_caches_the_page :track => ['@photo', '@photos', '@set', '@commentable', {'@site' => :tag_counts, '@section' => :tag_counts}]
end
end
# FIXME uncomment when we are going to implement an atom feed for photos
#
# describe "Atom feeds" do
# describe "GET to /albums/1.atom" do
# act! { request_to :get, "/albums/#{@album.id}.atom" }
# it_renders_template 'index', :format => :atom
# it_gets_page_cached
# end
#
# describe "GET to /albums/1/tags/tagged.atom" do
# act! { request_to :get, "/albums/#{@album.id}/tags/tagged.atom" }
# it_renders_template 'index', :format => :atom
# it_gets_page_cached
# end
#
# describe "GET to /albums/1/sets/summer.atom" do
# act! { request_to :get, "/albums/#{@album.id}/sets/summer.atom" }
# it_renders_template 'index', :format => :atom
# it_gets_page_cached
# end
#
# describe "GET to /albums/1/photos/1.atom" do
# act! { request_to :get, "/albums/#{@album.id}/photos/#{@photo.id}.atom" }
# it_renders_template 'comments/comments', :format => :atom
# it_gets_page_cached
# end
#
# describe "GET to /albums/1/comments.atom" do
# act! { request_to :get, "/albums/#{@album.id}/comments.atom" }
# it_renders_template 'comments/comments', :format => :atom
# it_gets_page_cached
# end
#
# describe "GET to /albums/1/photos/1/comments.atom" do
# act! { request_to :get, "/albums/#{@album.id}/photos/#{@photo.id}.atom" }
# it_renders_template 'comments/comments', :format => :atom
# it_gets_page_cached
# end
# end
end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,733
|
Cooper is a security researcher and Senior Staff Technologist at EFF. He has worked on projects such as Privacy Badger, Canary Watch, and analysis of state-sponsored malware. He has also performed security trainings for activists, non-profit workers and ordinary folks around the world. He previously worked building websites for non-profits, such as Greenpeace, Adbusters, and the Chelsea Manning Support Network. He also was a co-founder of the Hackbloc hacktivist collective. In his spare time he enjoys playing music and participating in street protests.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 2,030
|
elasticsearch-cluster CHANGELOG
===============================
This file is used to list changes made in each version of the elasticsearch-cluster cookbook.
0.5.6
-----
### Breaking Changes
- cookbook by default now uses `node['elasticsearch']['config_v5']` for ES v5.x configuration
### Changes
- PR #80 - Grant Ridder
* fix rubocop offenses
* fix install repo for ES 5.x
* use correct java version for ES 5+
* fix ES v5 plugin installation
- PR #87 - Virender Khatri
* update default elasticsearch version to v5.1.2, #83
* ubuntu update ca certificates, #82
* use default config_v5 attributes for elasticsearch v5.x, #84
* attributes cleanup, #81
* MAX_OPEN_FILES should be at least 65536 for v5, #85
* add kitchen tests for v1, v2 and v5, #86
* ci fix, rubocop, specs
0.5.5
-----
### Changes
- Wei Wu - add jvm options file
- Wei Wu - skip index settings in elasticsearch.yml.erb
- Wei Wu - add v5 configuration attributes
- Wei Wu - fix cluster name validation
- Wei Wu - fix spec and change `set` to `normal` to remove warnings
- Grant Ridder - update Travis to Ruby 2.3.1
- Grant Ridder - update CI configuration
- Grant Ridder - fix lint
0.5.4
-----
### Changes
- Virender Khatri - bump es version v2.2.0
- Martin Tomes - use the correct path to the tar file for the Elasticsearch distribution.
- Virender Khatri - added checksum for missing versions tar file
- Virender Khatri - fix for issue #75
- Virender Khatri - fix lint and kitchen test default config
0.5.3
-----
### Changes
- Virender Khatri - tarball_file delete should be a file resource #71
- Virender Khatri - add ES v2.1.1 support #70
0.5.2
-----
### Changes
- Virender Khatri - #65, update metadata for issues source url
- Virender Khatri - #64, bump elasticsearch version to 2.1.0
- Virender Khatri - #63, :delayed is default timer for notify service
- Virender Khatri - #62, fix for ruby_block always notify service start
0.5.1
-----
### Changes
- Virender Khatri - #53, bump es version to 2.0.0
- Virender Khatri - #54, es 2.x repo url change
- Virender Khatri - #55, fix es 2.0 invalid parameter CONF_FILE
- Virender Khatri - #56, add sha256sum for 2.0.0 tar
- Virender Khatri - #57, update package repository to use http://packages.elastic.co
- Virender Khatri - #58, updated es 2.x initd
- Virender Khatri - #59, es 2.x new tarball url
0.5.0
-----
## Breaking Changes
- resource `elasticsearch_plugin` attribute `install_name` has been renamed to `install_source`
### Changes
- Virender Khatri - #49, fix for plugin syntax changed in es 2.x
- Virender Khatri - #50, bump es version to 1.7.3
- Virender Khatri - #51, plugins dir should be created under install_dir for tarball installation
- Virender Khatri - #52, rename resource plugin attribute install_name to install_source
0.4.7
-----
### Changes
- Virender Khatri - #39, script.disable_dynamic is not supported in v2.x
- Virender Khatri - #40, update init.d scripts resource type from template to cookbook_file
- Virender Khatri - #41, update cookbook for wrapper cookbook support
- Virender Khatri - #42, move default attribute tarball_url to recipe tarball
- Virender Khatri - #43, manage logging.yml configuration file
- Virender Khatri - #44, add attribute to enable sensitive
- Virender Khatri - #46, link node['elasticsearch']['install_dir'] should always notify service restart and does
not require `service` restart
- Virender Khatri - #47, added path.scripts directory resource config
- Virender Khatri - #48, add resource script
0.3.8
-----
### Changes
- WEI WU - set databag config related attributes before generating template file. Instead of storing in chef node.
- WEI WU - add sensitive flag only if chef respond to this flag
- Virender Khatri - #32, move tarball checksum to helper method
- Virender Khatri - #33, move install recipe directory resources to install method recipe
- Virender Khatri - #34, bump elasticsearch version to 1.7.2
- Virender Khatri - #35, manage es config dir in favor of tarball install method
- Virender Khatri - #36, update sysv init templates
- Virender Khatri - #37, add feature to purge old tarball es revisions
0.3.0
-----
### Changes
- WEI WU - Fixed config file location issue on CentOS7
- Grant Ridder - Updated .rubocop.yml
- Grant Ridder - Fixed ruby syntax
- Grant Ridder - Updated test kitchen platforms
0.2.7
-----
### Changes
- Virender Khatri - issue #23, Consider using directory-based plugin detection
- Virender Khatri - issue #24, add attribute host & port to plugin resource
- Virender Khatri - issue #25, plugin version is not the same as installed version
- Virender Khatri - issue #26, bump elasticsearch version to 1.7.1
0.2.5
-----
### Changes
- Michael Klishin - Update README.md
- Virender Khatri - issue #15, kopf and bigdesk are always installed, removed default plugins
- Virender Khatri - issue #17, update elasticsearch service resource to delayed start
- Virender Khatri - issue #18, add attribute ignore_error to plugin resource for issue #16
- Virender Khatri - issue #19, add attribute notify_restart to plugin resource
0.2.2
-----
### Changes
- Michael Klishin - Provide dependency info in the README
- Virender Khatri - issue #13, added 1.7.0 checksum attribute default['elasticsearch']['tarball_checksum']['1.7.0']
0.2.0
-----
### Changes
- Virender Khatri - added lwrp `elasticsearch_plugin`
- Virender Khatri - bumped elasticsearch version to 1.7.0
- Virender Khatri - added attribute default[elasticsearch][bin_dir]
- Virender Khatri - move common directory resources to recipe install
0.1.8
-----
### Changes
- Virender Khatri - added os support and README update
- Virender Khatri - issue #1, package install needs to manage ES data, work and log dir
- Virender Khatri - issue #2, add ES_JAVA_OPTS with double quotes
- Virender Khatri - issue #3, added missing attribute default['elasticsearch']['cookbook']
- Virender Khatri - issue #4, added config attribute name for elasticsearch.yml file
- Virender Khatri - issue #5, disable auto_java_memory by default
- Virender Khatri - issue #6, set default ES_HEAP_SIZE value to half of the memory
0.1.1
-----
### Changes
- Virender Khatri - README and minor updates
0.1.0
-----
### Changes
- Virender Khatri - Initial release of elasticsearch-cluster
- - -
Check the [Markdown Syntax Guide](http://daringfireball.net/projects/markdown/syntax) for help with Markdown.
The [Github Flavored Markdown page](http://github.github.com/github-flavored-markdown/) describes the differences between markdown on github and standard markdown.
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,301
|
Q: Multiple ul with single array in ember handlebars template Let's say I have an array with 6 items and I want to print them 3 per list
Ex
//arr = [1,2,3,4,5,6];
//html
<div class="first">
<ul>
<li>1</li>
<li>2</li>
<li>3</li>
</ul>
</div>
<div class="second">
<ul>
<li>4</li>
<li>5</li>
<li>6</li>
</ul>
</div>
How can I accomplish that with ember/handlebars?
Thanks
A: One option would be to write a computed property on your controller that splits the larger array into an array of arrays.
Then you could iterate through the arrays of the computed property and use a component to display each of the smaller arrays.
I'll leave this as an exercise to you unless you have other questions.
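For what it's worth, a minimal sketch of that splitting step, assuming a plain helper named `chunk` (my name, not an Ember API) that the computed property would call:

```javascript
// Split an array into consecutive groups of `size` elements.
// The last group may be shorter if the array length isn't a multiple of `size`.
function chunk(arr, size) {
  var groups = [];
  for (var i = 0; i < arr.length; i += size) {
    groups.push(arr.slice(i, i + size));
  }
  return groups;
}

// Inside a controller, the computed property could then be something like:
//   groupedModel: function() {
//     return chunk(this.get('model'), 3);
//   }.property('model')
```

`chunk([1,2,3,4,5,6], 3)` gives `[[1,2,3],[4,5,6]]`, which the template can iterate with nested `{{#each}}` blocks as in the other answers.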
A: Similar to what @Oren is saying, you would need to decorate your model (the array) inside the controller and then display the decorated model in the handlebars (as there is no way to perform logic inside the handlebars itself).
So, something like:
App.IndexController = Ember.ArrayController.extend({
decoratedModel: function(){
var model = this.get('model');
return [
Ember.Object.create({
className: "first",
arr: model.slice(0, 3)
}),
Ember.Object.create({
className: "second",
arr: model.slice(3)
})
];
}.property('model')
});
Then, you can display that in your template as follows:
<script type="text/x-handlebars" data-template-name="index">
{{#each item in decoratedModel}}
<div {{bind-attr class=item.className}}>
<ul>
{{#each thing in item.arr }}
<li>{{thing}}</li>
{{/each}}
</ul>
</div>
{{/each}}
</script>
A: I ended up doing this:
in template.hbs
{{#each set in arraySets}}
<div class="col-sm-6">
<ul class="list-unstyled">
{{#each item in set }}
<li>{{item.name}}</li>
{{/each}}
</ul>
</div>
{{/each}}
in the related controller
import Ember from "ember";
import Collection from "../../utils/collection";
export default Ember.ObjectController.extend({
// ....
arraySets: function() {
var infos = this.get('model.infos');
return Collection.create({ content: infos }).divide();
}.property('model.infos')
});
and who does the hard work is the utils/collection.js
import Ember from "ember";
var Collection = Ember.ArrayProxy.extend({
divide: function (size = 2) {
var array = this.get('content');
var length = array.length;
var limit = Math.ceil(length / size);
var sets = Ember.A([]);
for (var i = 0; i < Math.min(size, length); i++) {
sets.pushObject(array.slice(i * limit, (i + 1) * limit));
}
return Collection.create({content: sets});
}
});
export default Collection;
I hope that this could help someone else!
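In case it helps anyone porting this outside Ember: the core of `divide` is just slicing by a computed limit, so here is a standalone plain-JavaScript sketch of the same logic (no Ember involved, names are mine):

```javascript
// Divide an array into `size` roughly equal consecutive parts,
// mirroring the slicing in utils/collection.js above.
function divide(array, size) {
  size = size || 2;                             // default to two sets
  var limit = Math.ceil(array.length / size);   // elements per set
  var sets = [];
  for (var i = 0; i < Math.min(size, array.length); i++) {
    sets.push(array.slice(i * limit, (i + 1) * limit));
  }
  return sets;
}

// divide([1, 2, 3, 4, 5, 6], 2) -> [[1, 2, 3], [4, 5, 6]]
// divide([1, 2, 3, 4, 5], 2)    -> [[1, 2, 3], [4, 5]]
```

The `Math.min(size, array.length)` guard is what keeps short arrays from producing empty trailing sets, e.g. `divide([1], 2)` is just `[[1]]`.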
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,222
|
[Leuren Moret is an internationally recognized geoscientist and critic of nuclear power who has maintained a long interest in Japan's nuclear power program. As she points out in this article, Japan is the world's 3rd largest nuclear producer, with 52 reactors (versus 72 in France and 118 in the United States). Japan's reactors produce about 30 percent of the country's electricity. Japan is also one of the most earthquake-prone countries in the world, with a multiplicity of active fault zones. In persuasive detail spelled out in a map, Moret shows that Japan's nuclear industry has generally neglected the earthquake threat and built its reactors close to fault zones. She shows that Japanese government and industry has no serious emergency planning in the event of a disaster. For example, Japan's most seismically dangerous nuclear plant - the Hamaoka reactor in Shizuoka Prefecture - has Emergency Response Centres (ERCs) equipped with tiny decontamination showers that would be of little avail in the event of a serious emergency. In fact, planning for a very serious nuclear emergency is in many respects not possible. According to Moret, the scale of the disaster would be of such magnitude as to render any conceivable emergency response totally inadequate and ineffective. She shows why the only adequate response ultimately is to prevent accidents by turning away from nuclear energy.
The Japanese archipelago is located on the so-called Pacific Rim of Fire, a large active volcanic and tectonic zone ringing North and South America, Asia and island arcs in Southeast Asia. The major earthquakes and active volcanoes occurring there are caused by the westward movement of the Pacific tectonic plate and other plates leading to subduction under Asia. Japan sits on top of four tectonic plates, at the edge of the subduction zone, and is in one of the most tectonically active regions of the world. It was extreme pressures and temperatures, resulting from the violent plate movements beneath the seafloor, that created the beautiful islands and volcanoes of Japan.
Nonetheless, like many countries around the world -- where General Electric and Westinghouse designs are used in 85 percent of all commercial reactors -- Japan has turned to nuclear power as a major energy source. In fact the three top nuclear-energy countries are the United States, where the existence of 118 reactors was acknowledged by the Department of Energy in 2000, France with 72, and Japan, where 52 active reactors were cited in a December 2003 Cabinet White Paper.
The 52 reactors in Japan -- which generate a little over 30 percent of its electricity -- are located in an area the size of California, many within 150 km of each other and almost all built along the coast where seawater is available to cool them. However, many of those reactors have been negligently sited on active faults, particularly in the subduction zone along the Pacific coast, where major earthquakes of magnitude 7-8 or more on the Richter scale occur frequently. The periodicity of major earthquakes in Japan is less than 10 years. There is almost no geologic setting in the world more dangerous for nuclear power than Japan -- the third-ranked country in the world for nuclear reactors.
"I think the situation right now is very scary," says Ishibashi Katsuhiko, a seismologist and professor at Kobe University. "It's like a kamikaze terrorist wrapped in bombs just waiting to explode."
In summer 2003, I visited Hamaoka nuclear power plant in Shizuoka Prefecture, at the request of citizens concerned about the danger of a major earthquake. I spoke about my findings at press conferences afterward.
Together with local citizens, I spent the day walking around the facility, collecting rocks, studying the soft sediments it sits on and tracing the nearly vertical faults through the area -- evidence of violent tectonic movements.
The next day I was surprised to see so many reporters attending the two press conferences held at Kakegawa City Hall and Shizuoka Prefecture Hall. When I asked the reporters why they had come so far from Tokyo to hear an American geoscientist, I was told it was because no foreigner had ever come to tell them how dangerous Japan's nuclear power plants are.
I told them that this is the power of gaiatsu (foreign pressure). Because citizens in the United States with similar concerns attract little media attention, we invite a Japanese to speak for us when we want media coverage -- someone like the famous seismologist Professor Ishibashi!
"The structures of the nuclear plant are directly rooted in the rock bed and can tolerate a quake of magnitude 8.5 on the Richter scale," the utility claimed on its Web site.
From my research and the investigation I conducted of the rocks in the area, I found that that the sedimentary beds underlying the plant were badly faulted.
When I held up samples of the rocks the plant was sitting on, they crumbled like sugar in my fingers. "But the power company told us these were really solid rocks!" the reporters said. I asked, "Do you think these are really solid?" and they started laughing.
On July 7 2003, the same day of my visit to Hamaoka, Ishibashi warned of the danger of an earthquake-induced nuclear disaster, not only to Japan but globally, at an International Union of Geodesy and Geophysics conference held in Sapporo. He said: "The seismic designs of nuclear facilities are based on standards that are too old from the viewpoint of modern seismology and are insufficient. The authorities must admit the possibility that an earthquake-nuclear disaster could happen and weigh the risks objectively."
After the greatest nuclear power plant disaster in Japan's history at Tokai, Ibaraki Prefecture, in September 1999, large, expensive Emergency Response Centers were built near nuclear power plants to calm nearby residents.
After visiting the center a few kilometers from Hamaoka, I realized that Japan has no real nuclear-disaster plan in the event that an earthquake damaged a reactor's water-cooling system and triggered a reactor meltdown.
Additionally, but not even mentioned by Emergency Response Center (ERC) officials, there is an extreme danger of an earthquake causing a loss of water coolant in the pools where spent fuel rods are kept. As reported last year in the journal Science and Global Security, based on a 2001 study by the U.S. Nuclear Regulatory Commission, if the heat-removing function of those pools is seriously compromised -- by, for example, the water in them draining out -- and the fuel rods heat up enough to combust, the radiation inside them will then be released into the atmosphere. This may create a nuclear disaster even greater than Chernobyl.
If a nuclear disaster occurred, power-plant workers as well as emergency-response personnel in the Hamaoka ERC would immediately be exposed to lethal radiation.
During my visit, ERC engineers showed us a tiny shower at the center, which they said would be used for "decontamination' of personnel. However, it would be useless for internally exposed emergency-response workers who inhaled radiation. When I asked ERC officials how they planned to evacuate millions of people from Shizuoka Prefecture and beyond after a Kobe-magnitude earthquake (Kobe is on the same subduction zone as Hamaoka) destroyed communication lines, roads, railroads, drinking-water supplies and sewage lines, they had no answer.
Last year, James Lee Witt, former director of the U.S. Federal Emergency Management Agency, was hired by New York citizens to assess the U.S. government's emergency-response plan for a nuclear power plant disaster. Citizens were shocked to learn that there was no government plan adequate to respond to a disaster at the Indian Point nuclear reactor, just 80 km from New York City.
In 1998, Kei Sugaoka, 51, a Japanese-American senior field engineer who worked for General Electric in the United States from 1980 until being dismissed in 1998 for whistle-blowing there, alerted Japanese nuclear regulators to a 1989 reactor inspection problem he claimed had been withheld by GE from their customer, Tokyo Electric Power Company. This led to nuclear-plant shutdowns and reforms of Japan's power industry.
Later it was revealed from GE documents that they had in fact informed TEPCO -- but that company did not notify government regulators of the hazards.
Kikuchi Yoichi, a Japanese nuclear engineer who also became a whistle-blower, has told me personally of many safety problems at Japan's nuclear power plants, such as cracks in pipes in the cooling system from vibrations in the reactor. He said the electric companies are "gambling in a dangerous game to increase profits and decrease government oversight."
Sugaoka agreed, saying, "The scariest thing, on top of all the other problems, is that all nuclear power plants are aging, causing a deterioration of piping and joints which are always exposed to strong radiation and heat."
Additionally, on March 26 2004 -- the eve of the 25th anniversary of the worst nuclear disaster in U.S. history, at the Three Mile Island plant in Pennsylvania -- the Radiation and Public Health Project released new data on the effects of that event. This showed rises in infant deaths up to 53 percent, and in thyroid cancer of more than 70 percent in downwind counties -- data which, like all that concerning both the short- and long-term health effects, has never been forthcoming from the U.S. government.
Considering the extreme danger of major earthquakes, the many serious safety and waste-disposal issues, it is timely and urgent -- with about half its reactors currently shut down -- for Japan to convert nuclear power plants to fossil fuels such as natural gas. This process is less expensive than building new power plants and, with political and other hurdles overcome, natural gas from the huge Siberian reserves could be piped in at relatively low cost. Several U.S. nuclear plants have been converted to natural gas after citizen pressure forced energy companies to make changeovers.
Commenting on this way out of the nuclear trap, Ernest Sternglass, a renowned U.S. scientist who helped to stop atmospheric testing in America, notes that, "Most recently the Fort St. Vrain reactor in Colorado was converted to fossil fuel, actually natural gas, after repeated problems with the reactor. An earlier reactor was the Zimmer Power Plant in Cincinnati, which was originally designed as a nuclear plant but it was converted to natural gas before it began operating. This conversion can be done on any plant at a small fraction [20-30 percent] of the cost of building a new plant. Existing turbines, transmission facilities and land can be used."
After converting to natural gas, the Fort St. Vrain plant produced twice as much electricity much more efficiently and cheaply than from nuclear energy -- with no nuclear hazard at all, of course.
This is a slightly edited version of an article that appeared in The Japan Times, May 23, 2004. First posted at Japan Focus on November 29, 2005.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 9,866
|
tv Outnumbered FOX News August 11, 2017 9:00am-10:00am PDT
>> jon: we are back in an hour. "outnumbered" starts now. >> sandra: fox news alert. president trump telling north korea, get your act together as he doubles down on his warnings to kim jong-un to stop threatening the united states and its allies. mr. trump saying just this morning, america's military solutions are in place and today, he's meeting with rex tillerson and h.r. mcmaster as well as u.n. ambassador nikki haley. this is "outnumbered," i'm sandra smith. here today, from the fox business network, dagen mcdowell, also from fbn, anchor of "the intelligence report," trish regan, gillian turner, and today's
#oneluckyguy, iraq veteran pete hegseth is here and he is outnumbered, and he made it by the hair of his chinny chin chin. >> pete: glad to be here, barely made it. >> dagen: i called you hunky in the tease, by the way. >> sandra: president trump doubling down on his threat to unleash fire and fury as north korea continues to threaten the u.s. after pyongyang announced possible plans to launch missiles toward the u.s. territory of guam. reporters asking the president if his initial tone was too harsh. listen. >> frankly, the people who are questioning that statement, is it too tough, maybe it wasn't tough enough. if north korea does anything in terms of thinking about attack of anybody that we love or we
represent or our allies or us, they should be very, very nervous. they should be nervous because things will happen to them like they never thought possible. >> sandra: this morning, president trump tweeting that military solutions are now fully in place, locked and loaded, should north korea act unwisely. hopefully kim jong-un will find another path. north korea's state news agency insisting today trump is driving the situation to "the brink of a nuclear war." meantime, james mattis telling
reporters diplomacy is the way forward. >> the american effort is diplomatically led, it has diplomatic traction, it is gaining diplomatic results. i want to stay right there right now. the tragedy of war is well enough known, it doesn't need another characterization beyond the fact that it would be catastrophic. >> sandra: first off, the
rhetoric from the president, fire and fury, then he says those words may not have been tough enough. >> pete: then he says locked and loaded. this is a guy who understands tough talk is what is needed right now. you need the credible threat of military force, which we never had under barack obama and barely had ever in north korea, so we've diplomatically sanctioned our way to a nuclear bomb, where we are now under threat on our own shores. in 20 years, we could face perpetual blackmail from the korean peninsula, not a couple missiles, not a dozen, hundreds of missiles with miniaturized nuclear weapons pointed at our cities. the same could happen in iran. this is a put-up-or-shut-up moment. this is serious, his instincts are right. we've done nothing but bad deals in that area, it's time to come to the table and stop it now.
increased the presence in south korea. he increased the number of american troops in south korea, so how could you say that there is no credible military force? >> pete: you can move a landing base and north korea. >> gillian: we are talking about having the threat be visible to the north koreans, having the thread be immediate, having the threat be in close proximity, how was to convey that, aside from having military there? >> sandra: there is one group taking it with the rhetoric and that is democrats. we have this excerpt of a letter signed by 64 house democrats condemning his language, urging rex tillerson to rein in that rhetoric, saying this in the letter. these statements are irresponsible and dangerous and also provide a boom to propaganda. we respectfully but firmly urge you to do everything --
>> dagen: caution and strength do not work. they want warm and fuzzy, come over here and let me give you a big hug. this is a language that should resonate with the kim regime and even president clinton back in 1994 discussed or thought about, the threat of military action and was really talked away by president jimmy carter and then we put in place something that had no checks and balances in terms of what north korea was allowed to do. there are a lot of things at issue. we have diplomacy, we have at least signaled for the president and his advisors that we are at the ready to shoot down missiles. we are at the ready with military presence, but there are so many things we could do. mike pompeo has been talking for weeks about preparing for the north korea threat, including
covert operations in the country potentially. we can up our missile defenses, the trump budget actually reduces the funding for missile defense in this country, anti-proliferation, we would board ships if we think they're carrying nuclear material. there is so much we could do and so much is being talked about. >> sandra: as far as for their sanctions, which we've talked about on this couch. we talked about it this morning, pete. colonel oliver north was on fox news morning, sanctions take too long. >> trish: we should have sanctioned the heck out of north korea. there is a lot of opportunity. we sanctioned one banker from china recently, but there is a whole lot more they're doing business here and we should be going after china hard. we should demand that they play ball i north korea. as far as rhetoric goes, this is a crazy man, he might as well know that we can do what we say
we do in terms of the rhetoric itself. there is nothing wrong with conveying what we have. we don't want to get to that state, but if he doesn't know that we are serious, then he will not take us seriously. this has been the whole problem. >> pete: that's why doubling down on sanctions at this point may help bring china along, but it won't solve things. how many miniaturized nukes he has -- if you give up your nukes, you die in a ditch. he gave up the nukes after the iraq war and that's what happened. stopping them before they get them and can proliferate them is so critically important. i go back to iran because iran is even worse. they're collaborating with north korea to get these very same weapons. under perpetual blackmail from these two states --
we're talking about preventing north korea from developing these weapons. i don't know why we talk about that. it's important to talk about -- i agree 100%, but i think it's more important to look at what's on the table right now because that's what president trump is looking at. he is exercising his own kind of, this is his own version of amped-up strategic deterrence. he's not changing the policy so much because he's maneuvering as much as he can right now without launching a ground invasion inside north korea. that's a stylistic difference, not a policy one. >> pete: do you agree with susan rice that we need to tolerate it? >> gillian: i never said anything remotely like that. >> pete: that's effectively what you're saying. >> gillian: no it's not. absolutely not. all i'm saying is that everybody is making a big deal, it's a
tremendous divergence. i'm pointing out that if you look at what's on the ground, it's not. he's changing the rhetoric. he's not actually doing anything differently than what president obama did. i'm not saying we need to tolerate north korea. >> dagen: how about the united nations over the weekend, they did not go far enough, $1 billion out of $3 billion in exports coming out of north korea. the fact that russia and china signed on to those, you're seeing so much from kim jong-un at this point. the fact that they sanctioned that small bank, it was kind of overlooked by a lot of the media before kim fired the missiles in july, but that's a big step because it signals that more could come. >> trish: we are making these threats now, do you want a situation where you may have this redline? we all remember
that well, and if you don't -- the challenge with rhetoric, and this is why other presidents haven't used this, and i hear your point that the policy hasn't changed, but now that you're out there saying these things, what does that mean if he decides to go and lob those things? >> pete: that's where president trump has been so responsible where obama was reckless. he's been very careful to say this is not the point because that's where you force yourself into a corner where you have to do something you don't want to do or you look foolish by setting that redline. president trump is taking a responsible approach. it's part of how you get a good outcome. >> dagen: here's the problem, they're called ground-based intercepts. we have mixed success with those, about a 50% success rate over their lifetime, so if he shoots missiles and we don't get all of them, then what? >> sandra: more on this coming up as tensions escalate, china's
state media warning north korea and the u.s. not to play with fire. president trump calling out beijing, saying it must do more to rein in its nuclear neighbor. whether the u.s. can truly pressure beijing for help. plus president trump now suggesting mitch mcconnell should step down if he fails to deliver on his agenda items. a good way to hold him accountable or is he alienating a key ally? we will debate. ♪ you don't let anything keep you sidelined. that's why you drink ensure. with 9 grams of protein and 26 vitamins and minerals. for the strength and energy to get back to doing... ...what you love. ensure. always be you. yeah, 'cause i got allstate. if you total your new bike, they replace it with a brand new one. that's cool. i got a new helmet. we know steve. switching to allstate is worth it.
a trip back to the doctor's office may mean just a shot. but why go back there, when you can stay home... ...with neulasta onpro? strong chemo can put you at risk of serious infection. neulasta helps reduce infection risk by boosting your white blood cell count, which strengthens your immune system. in a key study, neulasta reduced the risk of infection from 17% to 1%... ...a 94% decrease. applied the day of chemo, neulasta onpro is designed to deliver neulasta the next day. neulasta is for certain cancer patients receiving strong chemotherapy. do not take neulasta if you're allergic to neulasta or neupogen (filgrastim). ruptured spleen, sometimes fatal as well as serious lung problems, allergic reactions, kidney injuries, and capillary leak syndrome have occurred. report abdominal or shoulder tip pain, trouble breathing or allergic reactions to your doctor right away. in patients with sickle cell disorders, serious, sometimes fatal crises can occur. the most common side effect is bone and muscle ache. so why go back there? if you'd rather be home, ask your doctor about neulasta onpro.
hey you've gotta see this. c'mon. alright, see you down there. mmm, fine. okay, what do we got? okay, watch this. do the thing we talked about. what do we say? it's going to be great. watch. remember what we were just saying? go irish! see that? yes!
i'm gonna just go back to doing what i was doing. find your awesome with the xfinity x1 voice remote. >> i think china will do a lot more. look, we have trade with china, we lose hundreds of billions of dollars a year on trade with china, they know how i feel. it's not going to continue like that, but if china helps us, i would feel a lot differently. a lot differently toward trade. >> sandra: that was president trump yesterday stepping up pressure on china to do more to help the standoff with north korea, but beijing says both countries shouldn't play with fire. it's also suggesting it will block any preemptive attacks on north korea. the global times, the chinese state run newspaper writing "china should also make clear
that if north korea launches missiles that threaten u.s. soil first and the u.s. retaliates, china will stay neutral. if the u.s. and south korea carry out strikes and try to overthrow the north korean regime and change the political pattern of the peninsula, china will prevent them from doing so." meantime, a chinese foreign ministry spokesman says that both sides need to take steps to ease the situation. let's start here with china. what is china's role? >> pete: it's substantial. you have to escalate before you de-escalate. they have to come to the table based on shared interest. they don't want us to agitate north korea, we don't want to create an overthrown regime, so where does that point meet? i'm not sure. that's for the experts in the white house to determine, but you have to push that envelope because china is not interested in our interests, they are not our friend.
they have their own ambitions regionally and globally. they have a long future where they're ascendant and america's descendant; that's the view of chinese leadership right now, over a 50-year perspective. this is part of that game that they are playing, and being decisive with them, standing strong, using trade as a lever will be key. >> sandra: gillian's trying to jump in, but dagen, to you first. >> dagen: china hopes the north korean threat drives us out of northeast asia. there is so much more that we can do in terms of putting financial pressure on china and on north korea. what the sanctions over the weekend did not do: they did not cut the supply of oil and refined oil products, gasoline, into north korea from china. in terms of the secondary sanctions, you start cracking down on financial institutions,
any companies that are propping up the missile and nuclear programs in china, you make it hurt. you make china hurt in their pocketbook and it could affect things. >> trish: it was interesting, he said look, we have to go after china on a lot of these issues, i.e., trade, et cetera, but maybe i'll back off if they cooperate on north korea. you think about china, they're counterfeiting goods and they are a source for counterfeit goods unlike any other. this is costing american businesses and it's a big issue because not only that, you run the risk doing business in china that your executives can be jailed on trumped-up charges, so there are a lot of issues that we need to start addressing and that is our leverage. that is the threat that we have,
we can actually start going after them on all of these economic issues. >> gillian: i think part of the reason is that we share with north korea a common interest in having a -- excuse me, we share with china a common interest in having a nuclear-free north korea, but that is a relationship based on convenience and it only goes up to a point and then we sharply diverge. our national security interests in this are much stronger than china's because china is a neighbor and looks to north korea for major trade, major investment. they have a vested interest in seeing north korea's economy remain strong, remain stable, and remain independent. the north korean economy collapsing would present a major, major crisis for china; for us, not so much.
i think every american president comes into office and they think, i'm going to hit china really hard. i'm going to go down that path as far as i can and maybe i'll be the one to turn it around, and they're not. again, our national security interests get to a certain point and then sharply diverge. >> dagen: financially and economically, we don't have leverage over china. we do rely on imported goods, cheap goods that people can buy at discount stores. i don't mean to sound simplistic about this, but china is the second largest foreign holder of our debt because we are a debtor nation. we rely on them to not only continue to buy u.s. treasuries, but to not sell what they own. >> trish: simultaneously, china needs us to do well because they need that to get
repaid. >> sandra: as the tensions continue to mount with north korea, chris wallace will talk to mike pompeo on "fox news sunday." check your local listings for channel and time on the fox broadcast network and you can catch it sunday at 2:00 p.m. eastern right here on fnc. plus, two of president trump's critics in the senate now facing primary challengers. should other trump critics be worried and is it time for more republicans to support the president? we debate. and then you totaled him. you two had been through everything together. two boyfriends, three jobs... you're like nothing can replace brad. then liberty mutual calls... and you break into your happy dance.
if you sign up for better car replacement™, we'll pay for a car that's a model year newer with 15,000 fewer miles than your old one. liberty stands with you™. liberty mutual insurance. ugh. heartburn. sorry ma'am. no burning here. try alka-seltzer heartburn relief gummies. they don't taste chalky and work fast. mmmm. incredible. can i try? she doesn't have heartburn. alka-seltzer heartburn relief gummies. enjoy the relief. you can use whipped topping made ...but real joyful moments.. are shared over the real cream in reddi-wip. ♪ reddi-wip. share the joy.
get your ancestrydna kit.here. spit. mail it in. learn about you and the people and places that led to you. go explore your roots. take a walk through the past. meet new relatives and see how a place and its people are all a part of you. ancestrydna. save 30% through august 15th at ancestrydna.com. if you have bad breath and your mouth lacks moisture when you speak or swallow, you may suffer from dry mouth. try biotène®, the #1 dentist recommended dry mouth brand. biotène® provides immediate relief from dry mouth symptoms that last for up to four hours. in fact, biotène® is the only leading brand clinically proven
to soothe, moisturize, and freshen breath. don't just manage dry mouth symptoms with water, soothe, moisturize and freshen your breath, with biotène®. this has been medifacts for biotène®.
♪ >> dagen: a feud still brewing between mr. trump and mitch mcconnell. he raises the possibility of mr. mcconnell resigning from
his post if he fails to get the agenda through the senate. when asked yesterday if he thinks mcconnell should step down, he had this to say. >> if you didn't get repeal and replace done and taxes done, meaning cuts and reform, and if he doesn't get a very easy infrastructure, if he doesn't get them done, then you can ask me that question. i said mitch, get to work and let's get it done. they should have had this last one done. they lost by one vote; for a thing like that to happen is a disgrace. >> dagen: orrin hatch's office tweeting this: senate majority leader mitch mcconnell has been the best leader we've had in my time in the senate. when it comes to whose side the american people are on, a new poll shows the president is doing a better job than congress
is, even at 38%, he is up 18 points over congress' approval rating of 20%. newt gingrich said it not just this morning, but in other interviews: he's encouraging the president to read mitch mcconnell's book, the long game, and mcconnell is not going anywhere, and they would need to make nice. >> pete: if it's such a long game that we are all dead, it's too long. losing is losing, failing is failing. very few grassroots conservatives out there lay the
responsibility at the feet of president trump. whether it's susan collins or lisa murkowski, or their staff, rather than passing conservative ideas. i don't get it. >> dagen: my theory is the market is not selling off on the north korea rhetoric. we've got a little bit of a bounce back, but it may be the worst week since the end of march. it's because there is tension between trump and mcconnell and the agenda is at risk. they're raising the debt ceiling next month, keeping the government open. >> sandra: the tensions are there. >> dagen: you have mitch mcconnell taking a shot at donald trump and trump taking shots back at mcconnell. >> trish: as far as political risk, typically, it's hard to know in these situations, but
typically the market will always bounce back from that. you look at this over the long term and yes, we've had geopolitical problems before. it always comes back, so there's always something to this. that's going to have an effect on earnings, the market, and perhaps that's what underlies all of this. >> sandra: the rhetoric, does the president risk alienating those he needs? that is a real risk, not getting tax reform done. >> gillian: i feel like i'm missing something. isn't mitch mcconnell one of the only republicans, certainly in a leadership position who stood by the president, has been on board with him since the campaign?
what he said the other day was barely a criticism of the president. it's the first time i've heard mitch mcconnell say anything that went contrary to what the president has been saying. he has been right there with him every step of the way on the agenda. he said himself, i agree with the president on health care. is it smart for the president, if you're looking at strategies, is it smart for him to try and potentially throw mitch mcconnell under the bus? >> pete: if the republican party -- we had kevin brady on the radio, if the republican party can't do tax reform and they can't do obamacare repeal and replace -- >> dagen: you're getting ahead of yourself. they need to keep the government open. i'll predict this, this ain't getting done this year.
this is a disaster. by the way, to call mitch mcconnell a leader is a misnomer because he's not leading. he's had seven years. >> sandra: it's very frustrating. >> trish: meantime, a group tied to the trump campaign is backing one of the primary challengers to jeff flake. this according to the hill, which reports that trump's allies are ramping up their efforts to take down the arizona republican who has emerged as one of the president's most vocal critics on capitol hill. and senator dean heller from nevada is also facing a 2018 primary challenger. meantime, ahead of the rnc meeting this week, a warning to republicans that their seats are in danger if they oppose president trump. >> if you look at 2016, the senators who did not support the president, let's look at two,
they fell short in all senate races. there is a cautionary tale. voters want to support the president and his agenda. >> trish: voters are frustrated, they see dysfunction in washington. that's one of the reasons they sent him there. when you look at examples like kelly ayotte, she flip-flopped. that did not serve her well, but it's symptomatic of a lot of people that we see in the party right now. it's hard for them to jump in with both feet when they're not certain of how they're going to be looked upon by the media, by their peers, and ultimately voters. >> pete: they're all ultimately self-interested, that's what you're telling me.
>> trish: if they were really self-interested, they would actually be more in his corner, but i think it's hard for them. think of how he campaigned and how awkward it made some people feel; they're not used to this. they've never seen anything like it. it's hard for them to say we are going to back him when they're not certain what the ramifications will be. >> dagen: i think as evidence, you look at the ads that the pro-trump group america first policies started running against dean heller. when it looked like he was not going to get on board with health care reform, they started running anti-dean heller ads and they had something like a seven-figure budget in place. they quickly pulled them once dean heller started going back to the table to negotiate with colleagues on that, so it's effective.
it's a warning to any republicans out there who think this might be a bluff. >> trish: they need to get behind the president. but this g.o.p. civil war, who does that serve? it doesn't serve the president or the congress. congress is at 10-20% approval.
>> pete: what does it serve jeff flake, a conservative republican, running around talking about how terrible president trump is? this president wants to do all the things they said they would do on the campaign trail. pull on your big boy pants. get something done. >> trish: the president said, for better or worse, just have at it, do what you've been working on for the last seven years and i'll sign whatever you put in front of me, and they still can't get it done. he gobbled up republicans right and left like m&ms. he destroyed the party and they resent it. >> trish: the fbi raids of
paul manafort's home. what the president has to say about the special counsel and what he says about reports that he's considering firing robert mueller. ♪ sidekick! so when your "side glass" gets damaged... [dog barks] trust safelite autoglass to fix it fast. it's easy! just bring it to us, or let us come to you, and we'll get you back on the road! >> woman: thank you so much. >> safelite tech: my pleasure. >> announcer: 'cause we care about you... and your co-pilot. [dog barks] ♪safelite repair, safelite replace.♪ you the law?
we've had some complaints of... is that a fire? there's your payoff, deputy. git! velveeta shells & cheese. there's gold in them thar shells.
the unpredictability of a flare may weigh on your mind. thinking about what to avoid, where to go, and how to work around your uc. that's how i thought it had to be. but then i talked to my doctor about humira,
and learned humira can help get and keep uc under control... when certain medications haven't worked well enough. humira can lower your ability to fight infections, including tuberculosis. serious, sometimes fatal infections and cancers, including lymphoma, have happened; as have blood, liver, and nervous system problems, serious allergic reactions, and new or worsening heart failure. before treatment, get tested for tb. tell your doctor if you've been to areas where certain fungal infections are common and if you've had tb, hepatitis b, are prone to infections, or have flu-like symptoms or sores. don't start humira if you have an infection. raise your expectations and ask your gastroenterologist if humira may be right for you. with humira, control is possible. ♪ [brother] any last words? [boy] karma, danny... ...karma! [vo] progress is seizing the moment.
your summer moment awaits you, now that the summer of audi sales event is here. audi will cover your first month's lease payment on select models during the summer of audi sales event. >> sandra: president trump making it clear he has no intention of firing robert mueller as special counsel for the probe into the russia election meddling, despite calling the investigation a witch hunt. listen. >> i haven't given it any thought. i've been reading about it, because they say i'm dismissing him. i'm not dismissing anybody. there is no collusion because -- i won because i was a much better candidate than her. i won because i went to wisconsin, michigan, i won pennsylvania.
i fought a smart battle, i didn't win because of russia. russia had nothing to do with me winning. >> sandra: politico is reporting federal investigators are trying to rattle paul manafort, going after his son-in-law to increase the pressure with a predawn raid. >> i've always found paul manafort to be a decent man. he probably gets consultant fees from all over the place, i don't know. it's pretty tough stuff. to wake him up, perhaps his family was there, i think that's really bad. >> sandra: meanwhile, john dowd said these are tactics typically found in russia, not america. this invasive tool was employed for its shock value to try to
intimidate mr. manafort and bring him to his knees. first off, your thoughts on the president's strong words. >> pete: he's saying exactly what he means as he always does. he knows he never colluded, he knows it's a witch hunt. he's made a strategic decision: rather than take on mueller and fire him, let it run its course. let the senate committees come out with their findings, because he is going to be liberated by the fact that there's nothing there. during the break, you said washington, d.c., would go nuts; i say let them go nuts. when d.c. goes nuts, i don't care and neither does anyone else in america. this is a witch hunt and i think he should let it run its course, and it's good that he set that aside and can otherwise focus on things that matter. >> gillian: the best thing he could do, bar none, is to take the investigation, put it aside, let
it run its course. i don't want to live in a world where the president fires mueller. if i was an advisor, i would say please, mr. president, please take that option off the table. you would feel the wrath of washington and the governing class in this country if you fired the special investigator. >> trish: he has; that was the point of that sound bite we just ran. he's saying, this is something you guys are talking about, and some pundits on the right have called for that. get rid of the whole investigation? i don't think he's necessarily saying that. i think he understands the political risk. i don't think he anticipated the backlash he would get when he let james comey go, but that turned into a big can of worms
and resulted in robert mueller. >> dagen: it was the twitter threats, "i might have some tapes," and then comey leaks that information to "the new york times." >> trish: absolutely horrendous and outrageous that he did leak that way, especially when everyone was complaining about leakers. nonetheless, i think he understands that that would not be well received. >> dagen: the president sounded incredibly measured when asked about paul manafort. we talk about the politics of possibly firing robert mueller; think about it from a legal perspective, because if you look back to watergate, you have to be careful what happens. leon jaworski, who was put in his place, was much tougher than archibald cox.
you may like robert mueller, but you may hate the next guy or gal even more. >> sandra: and looking at these tactics employed by mueller's team, does that back up president trump calling this a witch hunt? >> pete: yes, they are searching for a crime. this is why it's so frustrating for jeff sessions, because he's not able to keep it inside and make sure they're focusing on russian interference in the election; instead they're going down this rabbit trail or that one. it's part of intimidation. >> dagen: we should point out that this team had to persuade a judge that manafort was somehow not being faithful or truthful with them in terms of the information he was turning over. fbi agents know the danger because they could be prosecuted. >> sandra: remember this big tease on fox news radio? "time" magazine's latest cover
declaring new white house chief of staff john kelly trump's last best hope. is that a fair assessment or is it more mainstream media bias? we'll debate that next. ♪
your brain changes as you get older. but prevagen helps your brain with an ingredient originally discovered... in jellyfish. in clinical trials, prevagen has been shown to improve short-term memory. prevagen. the name to remember.
your insurance on time. tap one little bumper, and up go your rates. what good is having insurance if you get punished for using it? news flash: nobody's perfect. for drivers with accident forgiveness, liberty mutual won't raise your rates due to your first accident. switch and you could save $782 on home and auto insurance. call for a free quote today. liberty stands with you™
liberty mutual insurance. >> sandra: more of this friday "outnumbered" in just a moment, but first, jon scott with what's coming up in the second hour, "happening now." >> jon: in the next hour, president trump turns a simple photo op into an all-out news conference, the topics he covered and how the change in plans highlights his m.o. isis is paying for their operations using a popular online marketplace. how they're using the site to funnel money. a freak accident at a football practice has an entire community reeling. how a simple conditioning exercise cost a 16-year-old his life. all that coming up on "happening now." >> gillian: we have more on mainstream media bias. john kelly is called
president trump's last best hope. the article, which is full of criticism of the president's white house, reads in part: "kelly urged his staff to stop the infighting and set their egos and agendas and any leaking aside." he told his audience that they must start serving a hierarchy that puts the nation and not the president first: country, president, self. so began a new era at president trump's white house, one that might be his last and best chance for success. i feel like the chief of staff role is a thankless job, and you only ever think about the chief of staff or hear about them in the mainstream media if something horrible is happening. meaning if you're doing a great job and everything is running on time, everybody's doing their job, the staff is serving the president well, you don't even know who they are. if something goes down, when there's a crisis, then all of a
sudden the chief of staff is always at the forefront of the news. i think in general, it's probably not fair to say that general kelly is the president's last and best hope, but he is really important and his role is super important, but that's the same. >> pete: that title seems -- last best hope. structure would help president trump advance his agenda. i don't use the term mainstream media anymore, ever; it's left-wing media. they create these stories to make the president look bad. there is no balance in most of these newsrooms at all. this is meant to make him look bad: put up a picture of someone who can save the trump presidency that they say is doomed to fail six months in. they won't look at what he's done to succeed, and they won't look at the headwinds he's faced from almost every single news outlet. he's had a great deal of trouble. >> dagen: what moronic news
magazine editor would put someone on the cover of their issue that no one recognized? people are like, who the heck is that guy? >> trish: i don't think he was well served by his previous chief of staff. there is room for improvement. he has a lot of promise and a lot of talent and some of that needs to be channeled. it's important who is around him, and you point out structure: you look at president trump on his international trip and he knocked it out of the park. there was a lot of structure, and he has an opportunity to do that in a way back here at home that he can succeed at, but he needs the right people, he needs more people around him. >> sandra: the article, i was trying to find -- i don't have
my highlighter today. it goes into detail on john kelly's past: where he comes from, how he was raised, his family. it does a good job of talking about what a strong leader this man is, and he is a hopeful presence for this white house. >> gillian: this is a serious guy. there is a lot of value there because he can say: shut up, you are pursuing your own interest, this is how we get the president's agenda passed. whereas before, there were camps, but he gets to rise above that. >> gillian: unlike reince priebus, he's not coming into this locked into a larger web of political donors and operatives; he's coming at this from the military. he is approaching this, i think
it's a little too early to judge his tenure as chief of staff. >> sandra: happy friday again; more "outnumbered" will be back in just a moment. so the incredibly minor accident that i had tonight- four weeks without the car. okay, yup. good night. with accident forgiveness your rates won't go up just because of an accident. switching to allstate is worth it. you don't let anything keep you sidelined. that's why you drink ensure. with 9 grams of protein and 26 vitamins and minerals. for the strength and energy to get back to doing... ...what you love. ensure. always be you. back in the 90's, when billy wanted to ask madeline out on a date, he would call her corded house telephone and get permission to speak to her. today is a lot different. billy just slides into madeline's dm and she'll respond with "oh hayyy! swing by 4 dinnr! smiley face heart emoji" even though courtship has become less strict,
hebrew national hot dogs remain strict as ever when it comes to our standards. made with premium cuts of 100% kosher beef, it's sure to please whoever your daughter brings over last minute for dinner. hebrew national. we remain strict. (woman) in type 2 diabetes, there's a moment of truth. and now with victoza®, a better moment of proof. victoza® lowers my a1c and blood sugar better than the leading branded pill, which didn't get me to my goal. lowers my a1c better than the leading branded injectable. the one i used to take. victoza® lowers blood sugar in three ways. and while it isn't for weight loss, victoza® may help you lose some weight. non-insulin victoza® comes in a pen and is taken once a day. (announcer) victoza® is not recommended as the first medication to treat diabetes and is not for people with type 1 diabetes or diabetic ketoacidosis. do not take victoza® if you have a personal or family history of medullary thyroid cancer, multiple endocrine neoplasia syndrome type 2,
or if you are allergic to victoza® or any of its ingredients. stop taking victoza® and call your doctor right away if you get a lump or swelling in your neck or if you develop any allergic symptoms including itching, rash, or difficulty breathing. serious side effects may happen, including pancreatitis, so stop taking victoza® and call your doctor right away if you have severe pain in your stomach area. tell your doctor your medical history. taking victoza® with a sulfonylurea or insulin may cause low blood sugar. the most common side effects are headache, nausea, diarrhea, and vomiting. side effects can lead to dehydration, which may cause kidney problems. now's the time for a better moment of proof. ask your doctor about victoza®. it's straight talk. if you love your phone but hate your bill. do something about it! no, not that. straight talk wireless let's you keep your phone, number and 4g lte network for a lot less,
with the bring your own phone activation kit. it's time to ask yourself... why haven't i switched? unlimited talk, text and data for forty-five dollars a month, no contract. straight talk wireless. only at walmart. at the lexus golden opportunity sales event before it ends. choose from the is turbo, es 350 or nx turbo for $299 a month for 36 months if you lease now. experience amazing at your lexus dealer. trust #1 doctor recommended dulcolax. use dulcolax tablets for gentle dependable relief. suppositories for relief in minutes. and dulcoease for comfortable relief of hard stools. dulcolax. designed for dependable relief. >> thanks to pete hegseth, our #oneluckyguy. >> he was extra lucky today.
>> i will see you again at 2:00 this afternoon, we are back on tv this hour on monday at noon eastern. "happening now" starts right now. >> melissa: we have a fox news alert right now from fox news global headquarters in new york, vice president mike pence speaking in indianapolis, delivering the keynote address at the ten-point coalition annual luncheon. >> jon: coming in the wake of fresh comments from president trump about the north korea nuclear crisis as well as senate majority leader mitch mcconnell. we are covering all the news happening now. >> i greatly appreciate the fact that they have been able to. >> jon: president trump's shocking praise for vladimir putin. plus, heavy rains and high winds bring flash flooding as a monsoon season storm
FOX News August 11, 2017 9:00am-10:00am PDT
A news show featuring the top headlines of the day from pop culture to politics, which are discussed by a rotating panel of four women and one man.
China 32, North Korea 29, Victoza 11, Mitch Mcconnell 10, Trump 9, U.s. 6, Sandra 6, America 5, Russia 5, Heller 4, Robert Mueller 4, Us 4, John Kelly 3, Neulasta 3, Mueller 3, Neulasta Onpro 3, Mcconnell 3, Dry Mouth 3, Allstate 3, Audi 3
Q: How to run macros in Excel irrespective of the file extension it is saved in I would like to know: if an Excel file with macro/VBA code is saved in any format (.xls, .xlsx, etc.), can the macros still run?
Or is there a way to run macros irrespective of the file format?
A: No. A workbook saved as .xlsx cannot contain a VBA project; that format discards macros when the file is saved. To keep macros runnable, save the workbook in a macro-enabled format instead: .xlsm (Open XML macro-enabled workbook), .xlsb (binary workbook), or the legacy .xls format.
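To keep a workbook's macros, it must be saved in a macro-enabled format such as .xlsm. A minimal VBA sketch of that save step (the file path and procedure name here are illustrative):

```vba
' Illustrative only: save the current workbook as .xlsm so its VBA project is kept.
Sub SaveAsMacroEnabled()
    ActiveWorkbook.SaveAs _
        Filename:="C:\Temp\MyWorkbook.xlsm", _
        FileFormat:=xlOpenXMLWorkbookMacroEnabled ' enumeration value 52
End Sub
```

Saving the same workbook with FileFormat:=xlOpenXMLWorkbook (.xlsx) would silently drop the VBA project after Excel's warning prompt.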
using System.Collections.Generic;
using Sabio.Web.Domain;
using Sabio.Web.Models.Requests;
namespace Sabio.Web.Requests
{
/// <summary>
/// CRUD, search, and lookup operations for Blog records.
/// </summary>
public interface IBlogService
{
void Delete(int id);
int Insert(BlogAddRequest model);
List<Blog> SelectAll();
Blog SelectById(int id);
void Update(BlogUpdateRequest model);
List<Blog> Search(string searchStr);
List<Blog> SelectAllByUserId();
}
}
package co.cask.cdap.etl.common.db;
import co.cask.cdap.api.annotation.Description;
import co.cask.cdap.api.plugin.PluginConfig;
import javax.annotation.Nullable;
/**
* Defines a base {@link PluginConfig} that both the RDBMS source and sink can re-use
*/
public class DBConfig extends PluginConfig {
@Description("JDBC connection string including database name.")
public String connectionString;
@Description("User to use to connect to the specified database. Required for databases that " +
"need authentication. Optional for databases that do not require authentication.")
@Nullable
public String user;
@Description("Password to use to connect to the specified database. Required for databases that " +
"need authentication. Optional for databases that do not require authentication.")
@Nullable
public String password;
@Description("Name of the JDBC plugin to use. This is the value of the 'name' key defined in the JSON file " +
"for the JDBC plugin.")
public String jdbcPluginName;
@Description("Name of the database table to use as a source or a sink.")
public String tableName;
@Description("Type of the JDBC plugin to use. This is the value of the 'type' key defined in the JSON file " +
"for the JDBC plugin. Defaults to 'jdbc'.")
@Nullable
public String jdbcPluginType = "jdbc";
@Description("Sets the case of the column names returned from the query. " +
"Possible options are upper or lower. By default or for any other input, the column names are not modified and " +
"the names returned from the database are used as-is. Note that setting this property provides predictability " +
"of column name cases across different databases but might result in column name conflicts if multiple column " +
"names are the same when the case is ignored.")
@Nullable
public String columnNameCase;
public DBConfig() {
jdbcPluginType = "jdbc";
}
}
Q: Collecting all links in an HTML document Please advise: I want to collect all the links from an HTML document.
If I understand correctly, collecting the links should start with finding the href attribute of the <a> tag.
If so, how can I determine whether href contains a relative or an absolute link?
Am I right that:

*An absolute link will ALWAYS start with "http://", "https://", or "www."

And that a relative link can be told apart from an absolute one by this criterion?
A: If we consider using regular expressions, a regular expression of the following form can be used.
<a\b[^>]+?href=["'][a-z]+://[^>]+>
This regular expression will find only the complete opening <a> tag. Note that an absolute URL is identified by its scheme (such as "http://" or "https://"); a value that starts with "www." but carries no scheme is still a relative link.
In this regular expression:

*\b — a word boundary, so that only the <a> tag counts as the target tag
*[^>]+? — matches any non-> characters. Lazy matching: it stops as soon as the next part of the expression matches
*["'] — one of the two characters allowed to wrap the attribute value
*[a-z]+:// — matches the scheme part of the URL at the start of the href value
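Beyond regular expressions, the same collection can be sketched with Python's standard-library HTML parser. The class and function names here are my own, and absoluteness is decided by the presence of a URL scheme rather than a "www." prefix:

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class LinkCollector(HTMLParser):
    """Collect every href from <a> tags and note whether it is absolute."""
    def __init__(self):
        super().__init__()
        self.links = []  # list of (href, is_absolute) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value is not None:
                    # A URL is absolute exactly when it carries a scheme;
                    # "www.example.com" without a scheme is still relative.
                    self.links.append((value, urlsplit(value).scheme != ""))

def collect_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

For example, collect_links('<a href="http://example.com/">x</a> <a href="page.html">y</a>') marks the first href absolute and the second relative.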
### generalised ratio of uniforms

May 14, 2012

A recent arXiv posting of the paper "On the Generalized Ratio of Uniforms as a Combination of Transformed Rejection and Extended Inverse of Density Sampling" by Martino, Luengo, and Míguez from Madrid rekindled my interest in this rather peculiar simulation method. The ratio of uniforms samples uniformly on the ... [Read more...]
# Interdisciplinary Applied Mathematics

Modeling of liquids in microdomains, see Chapters 7-9, requires a different approach. In mesoscopic scales a continuum description suffices (see Chapter 14), whereas in submicron dimensions atomistic modeling is required (see Chapter 16). We have already discussed slip phenomena in liquids in Section 1.2; however, other phenomena may be present, for example:

wetting,
Louny District (Czech: Okres Louny) is one of the seven districts (okres) of the Ústí nad Labem Region (Ústecký kraj) in the Czech Republic. Its capital is the city of Louny. It has an area of 1,117.65 km² and a population of 88,247 (2009), giving a density of 78.96 inhabitants/km².
Municipalities (2018 population)
References
External links
Louny
Ústí nad Labem Region
# Maximum entropy distribution of a proportion with known mean and variance? Is it a beta?

Given a proportion and its standard error, what distributional assumption minimizes assumptions/maximizes entropy? Is it the beta (and can I use the method of moments to estimate its parameters)? Or something else?

It's a truncated Normal distribution. This is a consequence of Boltzmann's Theorem.

The following analysis provides the details needed to implement a practical solution.

A Normal$(\mu,\sigma)$ distribution $F$ truncated to the interval $[0,1]$ arises by taking a standard Normal variable $X$ with probability distribution $\Phi$, scaling it by $\sigma$, shifting it to $\mu$, and truncating it to $[0,1]$. Equivalently--working backwards--the original variable $X$ must have been truncated to the interval $[-\mu/\sigma, (1-\mu)/\sigma]$ where it had a total probability of

$$C = \Phi\left(\frac{1-\mu}{\sigma}\right) - \Phi\left(\frac{-\mu}{\sigma}\right),\tag{1}$$

expectation

$$\mu_1=\frac{1}{C\sqrt{2\pi}}\int_\frac{-\mu}{\sigma}^\frac{1-\mu}{\sigma} x\exp\left(\frac{-x^2}{2}\right)\mathrm{d}x,$$

and second (raw) moment

$$\mu_2 = \frac{1}{C\sqrt{2\pi}}\int_\frac{-\mu}{\sigma}^\frac{1-\mu}{\sigma} x^2\exp\left(\frac{-x^2}{2}\right)\mathrm{d}x.$$

Presumably your "standard error" is either $\sqrt{\mu_2-\mu_1^2}$ or some constant multiple of it.

These integrals can be computed in terms of

$$\mu_1(z) = \frac{1}{C\sqrt{2\pi}}\int_{-\infty}^z x\exp\left(\frac{-x^2}{2}\right)\mathrm{d}x = -\frac{1}{C\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)\tag{2}$$

and, integrating by parts,

$$\eqalign{ \mu_2(z) &= \frac{1}{C\sqrt{2\pi}}\int_{-\infty}^z (x)\left(x\exp\left(\frac{-x^2}{2}\right)\right)\mathrm{d}x \\ &= \frac{1}{C\sqrt{2\pi}}\left(\left.x\left(-\exp\left(-\frac{x^2}{2}\right)\right)\right|_{-\infty}^z - \int_{-\infty}^z -\exp\left(-\frac{x^2}{2}\right)\mathrm{d}x \right)\\ &=-\frac{1}{C\sqrt{2\pi}}z\exp\left(-\frac{z^2}{2}\right) + \frac{1}{C}\Phi(z).\tag{3} }$$

Thus

$$\mu_1 = \mu_1\left(\frac{1-\mu}{\sigma}\right) - \mu_1\left(\frac{-\mu}{\sigma}\right)$$

and

$$\mu_2 = \mu_2\left(\frac{1-\mu}{\sigma}\right) - \mu_2\left(\frac{-\mu}{\sigma}\right).$$

These calculations $(1)$, $(2)$, and $(3)$ can be implemented in any software where exponentials, square roots, and $\Phi$ are available. This permits application in any fitting procedure, such as method of moments or maximum likelihood. Either would require numerical solutions.
Contract of the Dark
02/19/2014 By Serealities in Contract of the Dark Tags: new stories, new story, online soap opera, online telenovela, Serial stories, serials
In the previous episode, readers voted that hiding in the shadows of the subway station watching Grim get punished by Gamma is Mr. Red, another karayan.
In the dead silence of the darkened subway station, the maddening darkness is drawn back to its source, bringing the nightmare to an end. Light returns to the dreary subway, casting its faint glow on those who stand on the tracks. Gamma stands with a small grin underneath his black hood. The childish karayan enjoys seeing the product of his terrible power.
Across the rails stands the shaman who attempted to take Raphael's life. His legs quiver under his robes, appearing as though they will give out at any moment. At his feet lies a pile of rags and crimson-colored cotton, the remnants of his minion, the human-sized rag doll. Of course, Grim is just as battered. A steady drip of blood comes from his shoulder where his arm was violently ripped from his body. Bruises cover his body, and his hoarse breathing suggests a number of broken ribs.
"It appears that you can no longer complete our agreement," a voice calls out from the shadows.
Emerging from the darkness is Mr. Red. A carnivorous grin spreads across the karayan's lips, as he steps closer to the wounded shaman. He stops next to Grim, adjusting his tie while looking with a hungry stare.
Seeing him there makes the shaman's eyes flare in anger, "This is not what we agreed upon!"
Mr. Red only gives a devious smirk, which sends Grim over the edge. Seething with fury, he reaches into his tattered robes and grabs an ancient dagger. With a blood-boiling battle cry, he drives the blade into the chest of the well-dressed karayan. Steel digs deep into his heart, burying itself up to the hilt.
Crimson eyes fall upon the dagger jutting out of his torso, appearing slightly amused. Grim looks the karayan in the eye, but finds no fear in his gaze, nor any sign of death. Growing fear propels the shaman out of his furious mindset, and he backs away.
"I grow tired of this game," Mr. Red speaks calmly.
As if on cue, the dagger begins to move around in his chest. Strange, yet loud crunching sounds come from the wound, as the weapon is pulled deeper into his body. Grim stares dumbfounded, horrified by this supernatural sight. The sound of gnashing continues as Mr. Red tears a hole in his dress shirt. He reveals that where his heart should be is a beast-like mouth filled with razor sharp teeth. It finishes eating the dagger, and then growls at the shaman.
Grim is frozen in fear, but gasps, "Y-you are not human. Y-you are like Gam…"
Before he has a chance to finish speaking, Mr. Red has a firm grip on the shaman's throat, tightening like a vice. The karayan's crimson eyes flare, "Do not compare me to him!"
Growling from both of his mouths, he slams the strangled Grim into the tracks. The shaman's body creates a crater three feet deep, causing the metal rails to twist and break from the rest of the track. It reveals the brutal strength a karayan possesses.
Dusting his hands off, Mr. Red looks down the tunnel, expecting to see Gamma waiting for him. He sees no one there. Scanning the entire station shows it to be empty once more. The childish karayan has given him the slip. Mr. Red snarls, looking down at the body of Grim with a greater anger. But then, all that fury disappears, as he returns to his collected self.
"I will have to report this to Mr. De Phial," he says, adjusting his tie. Before he walks away, he looks at the corpse one more time. The carnivorous smile returns to his lips, "After I grab a small bite to eat."
On the other side of the city in a penthouse suite sits Raphael Fantassa, relaxing on the couch while fooling around on his laptop computer. The screen jumps from one story to another, as he conducts research of strange occurrences around town as of late. After finishing another story, he rests his head on the cushion, his brain processing everything he has witnessed today.
Unfortunately, his thoughts are shattered when the door to his penthouse opens. He looks to find a young woman standing at the door. She is dressed in formal business attire, with a folder of files tucked under her arm. Seeing her brings a smile to Raphael's face.
To someone else in the suite, she is unwelcome. From the shadows, darkness stretches toward her like many blackened tendrils. Before the young lady realizes what is happening, she is surrounded by a strange darkness. Maddening eyes open one after another, all trained on her.
"Gamma stop," Raphael scolds in a calm, yet stern voice.
The violet eyes turn to him, surprised. Slowly, they return to the shadows from whence they came, coming back to Gamma. The silhouette of the karayan can be seen in the corner of the room, the body surrounded by the eyes. They appear to quietly judge her.
"I'm sorry, Ms. D," Raphael says, looking at her, "Gamma is a bit overprotective."
"I understand," she answers, showing no signs of fear. "Anyways, sir, I've brought the files you wanted."
Raphael gets up from his seat and eagerly snatches the papers. Without saying a word, he pushes Ms. D out, slamming the door behind her. His total attention turns to the documents, which he excitedly flips through. The papers hold information about three individuals that have been under surveillance. First is an older male by the name of L. Strider. Next he reads over the file of a teenager called Arron Fellmind. And lastly, he flips through the papers on a strange girl, Rune. Twenty minutes pass before he sets the files down, and he takes a seat.
Sinking back into the couch, he ponders aloud, "I think any of them will do, but who to send Gamma after? And then there was that pale fellow you saw. What was his name, Gamma?"
The chorus of voices sings in harmony, "His name is Mr. Red. He is dangerous."
"And you know who he has a contract with?" Raphael questions.
Gamma's silhouette nods, "Vance De Phial."
Raphael Fantassa sits upright upon hearing that name, "Is that so? Well, I guess I need to pay my old friend a visit."
A rush of energy fills his body, as he leaps from his seat. Steadily, he walks to the window that overlooks the city. Behind him, Gamma's darkness follows; all eyes on the young man.
"Now then," Raphael grins as he looks down on New York, "What shall be my first move?" What shall Raphael do? Please return to the top of the page to vote!
Lawrence Bean1
d. before 1975
Father Harry L. Bean1 d. 1945
Mother Ruth B. Karns1 b. 20 Oct 1897, d. 30 Jun 1975
Lawrence Bean was the son of Harry L. Bean and Ruth B. Karns.1 He died before 1975.1
[S8] "Mrs. Beightol Dies; Was Retired Teacher", The Oil City Derrick, Oil City, PA, 2 Jul 1975, p. 12, Oil City Library, Oil City, Venango County, Pennsylvania. Hereinafter cited as The Derrick.
Norman Schrenckengost1
Norman Schrenckengost married Charlotte Bean, daughter of Harry L. Bean and Ruth B. Karns.1
Charlotte Bean
Ellwood Schrenckengost1
Ellwood Schrenckengost married Martha Bean, daughter of Harry L. Bean and Ruth B. Karns.1
Martha Bean
David Stallsmith1
David Stallsmith married Ruth Ann Bean, daughter of Harry L. Bean and Ruth B. Karns.1
Ruth Ann Bean
Richard Dunn1
Richard Dunn married Margaret Bean, daughter of Harry L. Bean and Ruth B. Karns.1
Margaret Bean
Robert Dilworth Morrison1,2,3
b. 28 June 1826, d. 12 November 1914
Father Joseph Morrison2,3 b. c 1802, d. 21 Feb 1874
Mother Ann Beatty2,3 b. c 1802, d. bt 1860 - 1870
Robert Dilworth Morrison was born on 28 June 1826 in Venango County, Pennsylvania.4,5 He was the son of Joseph Morrison and Ann Beatty.2,3 He married Isabella Jane Morrison.6,7 He died on 12 November 1914 in Deerfield Township, Warren County, Pennsylvania, at age 88.4,8 He was buried in Sutton Hill Cemetery, Deerfield Township, Warren County, Pennsylvania.1
Robert Dilworth Morrison served in Company K, 199th Regiment of the Pennsylvania Volunteer Infantry during the Civil War.1 He and Isabella Jane Morrison lived in Deerfield Township, Warren County, Pennsylvania.9,10,11,12,13
Isabella Jane Morrison b. 2 Jan 1827, d. 8 Mar 1899
Ida Elizabeth Morrison+9 b. 7 Dec 1851, d. 4 Dec 1936
Clara Salina Morrison+14 b. 30 Aug 1853, d. 12 May 1921
Moralla N. Morrison12,7 b. 23 Jul 1857, d. 18 May 1916
Joseph J. Morrison12 b. May 1859
Charles K. Morrison12 b. Jan 1864, d. bt 1920 - 1930
Minnie J. Morrison12 b. Aug 1867
[S1100] Burial Record, Robert D. Morrison, Sutton Hill Cemetery, Grand Valley, Warren Co., PA, viewed at www.findagrave.com, (record created by J. Rowan, 10 Jul 2009 (photo: Robert D. Morrison, b. 28 Jun 1926, d. 12 Nov 1914, Co K, 199th Regt, PA Vol. Inf.)).
[S1456] Warren County Intestate Docket, 1:32-33 (Estate of Joseph Morrison), Registrar & Recorder's Office, Courthouse, Warren, Pennsylvania.
[S1462] Robert Dilworth Morrison, Pennsylvania Death Certificate 110718 (14 Nov 1914), (informant: Chas. Morrison Tidioute, PA), viewed at Ancestry.com, online www.ancestry.com, accessed 4 May 2014; original at Pennsylvania Dept. of Health, Division of Vital Records, New Castle, PA; (parents Joseph L. Morrison and Ann Batey).
[S1462] Robert Dilworth Morrison, Pennsylvania Death Certificate 110718 (14 Nov 1914), (informant: Chas. Morrison Tidioute, PA), viewed at Ancestry.com, online www.ancestry.com, accessed 4 May 2014; original at Pennsylvania Dept. of Health, Division of Vital Records, New Castle, PA; (born 30 June 1828 in Venango Co., PA).
[S1100] Burial Record, Isabella Jane Morrison, Sutton Hill Cemetery, Grand Valley, Warren Co., PA, viewed at www.findagrave.com, (record created by Anne Stugis, 30 Oct 2007 (photo: Isabella Jane Morrison, b. 2 Jan 1827, d. 8 Mar 1899)).
[S1462] Moralla N. Morrison, Pennsylvania Death Certificate 56393 (20 May 1916), (informant: Mr. C. K. Morrison, Tidioute, PA), viewed at Ancestry.com, online www.ancestry.com, accessed 9 May 2014; original at Pennsylvania Dept. of Health, Division of Vital Records, New Castle, PA; (parents Robert D. Morrison and Elizabeth Jane Morrison).
[S1462] Robert Dilworth Morrison, Pennsylvania Death Certificate 110718 (14 Nov 1914), (informant: Chas. Morrison Tidioute, PA), viewed at Ancestry.com, online www.ancestry.com, accessed 4 May 2014; original at Pennsylvania Dept. of Health, Division of Vital Records, New Castle, PA.
[S155] 1860 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., Robert E. Morrison household, National Archives, (Robert E. Morrision, 34, PA, farmer, $300/250; Jane I., 33, PA; Ida E., 8, PA; Clara S., 6, PA; Marilla A., 3, PA; J. J., 1, PA).
[S146] 1870 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., Robt D. Morrison household, National Archives, (Robt D. Morrision, 44, PA, laborer, $-/200; Jane, 43, PA; Clara S., 16, PA; Mary A., 12, PA; Joseph, 11, PA; Charles K., 5, PA; Minnie, 2, PA; Joseph, 70, PA, carpenter, $600/100).
[S151] 1880 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., Robert Morrison household, National Archives, (Robert Morrision, 53, PA, farmer; Jane, 53, PA; Marilla, 22, PA; Joseph, 21, PA, farmer; Charles, 16, PA, laborer; Minnie, 12, PA).
[S152] 1900 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., Robert D. Morrison household, National Archives, (Robert D. Morrision, Jun 1826, PA/PA/PA, wd; Marilla A., Jul 1857, PA/PA/PA, dau, single; Joseph J., May 1859, PA/PA/PA, single, son, farmer; Charles K., Jan 1864, PA/PA/PA, single, son, farmer; Minnie J., Aug 1867, PA/PA/PA, dau, single).
[S207] 1910 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., Robert D. Morrison household, National Archives, (Robert D. Morrision, 88, PA/PA/PA, wd; Joseph J., 50, PA/PA/PA, single, son, farmer; Charles K., 47, PA/PA/PA, single, son, farmer; Marilla A., 52, PA/PA/PA, dau, single; Minnie J., 42, PA/PA/PA, dau, single; Lawrence W. Conklin, 17, PA/PA/PA, great-grandson).
[S1458] Clara Salina Greenlee, death certificate 12463 (14 May 1921), (informant: Webb City, MO Stella Greenlee), Missouri State Board of Health, Bureau of Vital Statistics.
Isabella Jane Morrison1
b. 2 January 1827, d. 8 March 1899
Isabella Jane Morrison was born on 2 January 1827 in Pennsylvania.1 She married Robert Dilworth Morrison, son of Joseph Morrison and Ann Beatty.1,2 She died on 8 March 1899 at age 72.1 She was buried in Sutton Hill Cemetery, Deerfield Township, Warren County, Pennsylvania.1
Isabella Jane Morrison and Robert Dilworth Morrison lived in Deerfield Township, Warren County, Pennsylvania.3,4,5,6,7
Robert Dilworth Morrison b. 28 Jun 1826, d. 12 Nov 1914
Clara Salina Morrison+8 b. 30 Aug 1853, d. 12 May 1921
Moralla N. Morrison6,2 b. 23 Jul 1857, d. 18 May 1916
Joseph J. Morrison6 b. May 1859
Charles K. Morrison6 b. Jan 1864, d. bt 1920 - 1930
Minnie J. Morrison6 b. Aug 1867
Ida Elizabeth Morrison1,2
b. 7 December 1851, d. 4 December 1936
Father Robert Dilworth Morrison1 b. 28 Jun 1826, d. 12 Nov 1914
Mother Isabella Jane Morrison1 b. 2 Jan 1827, d. 8 Mar 1899
Ida Elizabeth Morrison was born on 7 December 1851 in Deerfield Township, Warren County, Pennsylvania.1,2,3 She was the daughter of Robert Dilworth Morrison and Isabella Jane Morrison.1 She married James Price Conklin circa 1868.2,3 She died on 4 December 1936 in Deerfield Township, Warren County, Pennsylvania, at age 84.1,2 She was buried in Sutton Hill Cemetery, Deerfield Township.1,2
Ida Elizabeth Morrison and James Price Conklin lived in Deerfield Township, Warren County, Pennsylvania.4,3
James Price Conklin b. 14 Jul 1846, d. 26 Oct 1926
Leslie Melville Conklin+4 b. c 1869
Nellie J. Conklin+3 b. Dec 1880
Katherine Mae Conklin+3 b. 30 Dec 1882, d. 30 Jun 1955
Stella G. Conklin+3 b. 7 Jan 1885
[S1457] Pat Heinen, "Ancestry of Patricia (Foster) Heinen", 9 Jun 1980 (Mercer, PA).
[S152] 1900 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., James P. Conklin household, National Archives, (James P. Conklin, Jul 1846, PA/PA/PA, md32y, sawyer (wood); Ida E., Dec 1851, PA/PA/PA, wife, 4child/3lvg; Nellie J., Dec 1880, PA/PA/PA, single, dau; Katherine M., Dec 1882, PA/PA/PA, single, dau; Stella G., Jan 1885, PA/PA/PA, dau, single; Lawrence M., Apr 1893, PA/PA/NY, grandson).
[S151] 1880 U.S. Census, Pennsylvania, Warren Co., Deerfield Tp., James Conklin household, National Archives, (James Conklin, 33, PA/PA/NY, farmer; Ida, 27, PA/PA/PA, wife; Leslie, 11, PA/PA/PA, son).
James Price Conklin1
b. 14 July 1846, d. 26 October 1926
James Price Conklin was born on 14 July 1846 in Pittsfield, Warren County, Pennsylvania.1,2 He married Ida Elizabeth Morrison, daughter of Robert Dilworth Morrison and Isabella Jane Morrison, circa 1868.1,2 He died on 26 October 1926 in Deerfield Township, Warren County, Pennsylvania, at age 80.1 He was buried in Sutton Hill Cemetery, Deerfield Township, Warren County.1
James Price Conklin and Ida Elizabeth Morrison lived in Deerfield Township, Warren County, Pennsylvania.3,2
Ida Elizabeth Morrison b. 7 Dec 1851, d. 4 Dec 1936
Leslie Melville Conklin1
Father James Price Conklin1 b. 14 Jul 1846, d. 26 Oct 1926
Mother Ida Elizabeth Morrison1 b. 7 Dec 1851, d. 4 Dec 1936
Leslie Melville Conklin was born circa 1869 in Pennsylvania.1 He was the son of James Price Conklin and Ida Elizabeth Morrison.1 He married Jennie Pauline Wilcox on 29 May 1892 in Sheffield Township, Warren County, Pennsylvania.2
Jennie Pauline Wilcox
Lawrence M. Conklin3 b. Apr 1893
[S485] Marriage License, Leslie Melville Conklin and Jennie Pauline Wilcox, Registrar & Recorder's Office, Courthouse, Warren, Pennsylvania.
Jennie Pauline Wilcox1
Jennie Pauline Wilcox married Leslie Melville Conklin, son of James Price Conklin and Ida Elizabeth Morrison, on 29 May 1892 in Sheffield Township, Warren County, Pennsylvania.1
Leslie Melville Conklin b. c 1869
Lawrence M. Conklin1
Father Leslie Melville Conklin1 b. c 1869
Mother Jennie Pauline Wilcox1
Lawrence M. Conklin was born in April 1893 in New York.1 He was the son of Leslie Melville Conklin and Jennie Pauline Wilcox.1
Nellie J. Conklin1
Nellie J. Conklin was born in December 1880 in Pennsylvania.1 She was the daughter of James Price Conklin and Ida Elizabeth Morrison.1 She married Mearl Waters.2
Mearl Waters
Mabel J. Waters+2 b. 1907, d. 1953
Paul D. Waters+3 b. 19 Feb 1913, d. 6 Nov 1996
[S1273] Marriage license of Mabel J. Waters and Otis C. Chapman, Chautauqua Co., NY, 1926, "New York, County Marriages, 1908-1935," index and images", viewed at FamilySearch (familysearch.org), (parents of Mabel J. Waters: Mearl Waters and Nellie Conklin).
[S702] 1920 U.S. Census, New York, Chautauqua Co., Westfield, Mearl Watters household, National Archives, (Mearl Watters, 34, IA/NY/NY, manager, dairy farm; Nellie, 36, PA/US/US; Mabel, 12, NY/IA/PA, dau; Paul, 6, NY/IA/PA, son; 2 lodgers).
Katherine Mae Conklin1,2
b. 30 December 1882, d. 30 June 1955
Katherine Mae Conklin was born on 30 December 1882 in Deerfield Township, Warren County, Pennsylvania.1,2,3 She was the daughter of James Price Conklin and Ida Elizabeth Morrison.2 She married Leroy Everet Ellis on 15 September 1910 in Jamestown, New York.3 She died on 30 June 1955 in Titusville, Crawford County, Pennsylvania, at age 72.1
Katherine Mae Conklin and Leroy Everet Ellis lived in Eldred Township, Warren County, Pennsylvania.4,5
Leroy Everet Ellis b. 4 Sep 1884
Kenneth L. Ellis+4 b. 5 Jan 1911, d. 5 Mar 1995
Hazel Louise Ellis+1 b. 24 Sep 1917, d. 18 Oct 1978
Helen M. Ellis+5 b. 18 Feb 1920, d. 21 Sep 1983
Jean M. Ellis5
[S485] Marriage License, Katherine M. Conklin and Roy E. Ellis, Registrar & Recorder's Office, Courthouse, Warren, Pennsylvania.
[S433] 1920 U.S. Census, Pennsylvania, Warren Co., Eldred Tp., Roy E. Ellis household, National Archives, (Roy E. Ellis, 35, PA/Can/PA, farmer; Katherine M., 37, PA/PA/PA, wife; Kenneth L., 9, PA/PA/PA, son; Hazel L., 2 4/12, PA/PA/PA, dau; Stella Oliver, 35, PA/PA/PA, sis-in-law, md, clerk, candy store; Gertrude I. Oliver, 15, PA/PA/PA, neice; Merle E. Oliver, 7, PA/PA/PA, nephew).
[S969] 1930 U.S. Census, Pennsylvania, Warren Co., Eldred Tp., Roy E. Ellis household, National Archives, (Roy E. Ellis, 45, PA/Can/PA, care taker, state road; Katherine M., 47, PA/PA/PA, wife; Hazel L., 12, PA/PA/PA, dau; Helen M., 10, PA, dau; Jean M., 2 9/12, PA, dau).
Leroy Everet Ellis1,2
b. 4 September 1884
Leroy Everet Ellis was born on 4 September 1884 in Warren County, Pennsylvania.1 He married Katherine Mae Conklin, daughter of James Price Conklin and Ida Elizabeth Morrison, on 15 September 1910 in Jamestown, New York.1
Leroy Everet Ellis and Katherine Mae Conklin lived in Eldred Township, Warren County, Pennsylvania.3,4
Katherine Mae Conklin b. 30 Dec 1882, d. 30 Jun 1955
Stella G. Conklin1
b. 7 January 1885
Stella G. Conklin was born on 7 January 1885 in Deerfield Township, Warren County, Pennsylvania.1,2 She was the daughter of James Price Conklin and Ida Elizabeth Morrison.1 She married Lemuel E. Oliver on 20 August 1902 in Youngsville, Warren County, Pennsylvania.2
Stella G. Conklin was living in Eldred Township, Warren County, Pennsylvania, in 1920 with her sister Katherine.3
Lemuel E. Oliver b. 5 Apr 1880
Gertrude I. Oliver3
Merle E. Oliver3
[S485] Marriage License, Stella G. Conklin and Lem E. Oliver, Registrar & Recorder's Office, Courthouse, Warren, Pennsylvania.
Lemuel E. Oliver1
b. 5 April 1880
Lemuel E. Oliver was born on 5 April 1880.2 He married Stella G. Conklin, daughter of James Price Conklin and Ida Elizabeth Morrison, on 20 August 1902 in Youngsville, Warren County, Pennsylvania.1
Stella G. Conklin b. 7 Jan 1885
[S485] Marriage License, Stella G. Conklin and Lem E. Oliver, Registrar & Recorder's Office, Courthouse, Warren, Pennsylvania.
Mearl Waters1
Mearl Waters married Nellie J. Conklin, daughter of James Price Conklin and Ida Elizabeth Morrison.1
Nellie J. Conklin b. Dec 1880
Mabel J. Waters1
b. 1907, d. 1953
Father Mearl Waters1
Mother Nellie J. Conklin1 b. Dec 1880
Mabel J. Waters was born in 1907 in Ripley, Chautauqua County, New York.1,2 She was the daughter of Mearl Waters and Nellie J. Conklin.1 She married Otis C. Chapman on 29 August 1926 in Ripley, Chautauqua County, New York.3 She died in 1953.2 She was buried in East Ripley Cemetery, Ripley, Chautauqua County, New York.2
Otis C. Chapman
Arthur Chapman4
Eileen Chapman4
[S1100] Burial Record, Mabel Waters Chapman, East Ripley Cemetery, Ripley, NY, viewed at www.findagrave.com, (record created by Carole F. Blakeslee, 30 Jul 2012 (photo: Mabel Waters Chapman 1907-1953)).
[S1273] Marriage license of Mabel J. Waters and Otis C. Chapman, Chautauqua Co., NY, 1926, "New York, County Marriages, 1908-1935," index and images", viewed at FamilySearch (familysearch.org).
[S1386] 1940 U.S. Census, New York, Erie Co., Cheektowago Town, Otis Chapman household, National Archives, (Otis Chapman, 36, NY, school principal, public school; Mabel, 32, NY; Arthur, 11, NY; Eileen, 8, NY).
Otis C. Chapman1
Otis C. Chapman married Mabel J. Waters, daughter of Mearl Waters and Nellie J. Conklin, on 29 August 1926 in Ripley, Chautauqua County, New York.1
Mabel J. Waters b. 1907, d. 1953
Paul D. Waters1
b. 19 February 1913, d. 6 November 1996
Paul D. Waters was born on 19 February 1913 in Chautauqua County, New York.1,2 He was the son of Mearl Waters and Nellie J. Conklin.1 He married Frances Wolf.3 He died on 6 November 1996 in Chautauqua County, New York, at age 83.2
Audrey Waters3
[S114] Paul D. Waters, Social Security Death Index, Master File, www.ancestry.com (Orem, UT: Ancestry, Inc., 1999). Hereinafter cited as SSDI.
[S1386] 1940 U.S. Census, New York, Chautauqua Co., Ripley, Frances W. Waters household, National Archives, (Frances W. Waters, 29, NY, runs farm, own; Paul, 27, NY, husband, farmer; Audrey, 4, NY; Ralph B. Wolf, 54, PA, father, maintenance, state road; Bessie M. Wolf, 55, NY, mother; Nelson R. Wolf, 25, NY, brother).
Frances Wolf1
Frances Wolf married Paul D. Waters, son of Mearl Waters and Nellie J. Conklin.2
Paul D. Waters b. 19 Feb 1913, d. 6 Nov 1996
[S1386] 1940 U.S. Census, New York, Chautauqua Co., Ripley, Frances W. Watters household, National Archives, (Frances W. Wolf, 29, NY, runs farm, own; Paul, 27, NY, husband, farmer; Audrey, 4, NY; Ralph B. Wolf, 54, PA, father, maintenance, state road; Bessie M. Wolf, 55, NY, mother; Nelson R. Wolf, 25, NY, brother).
Father Paul D. Waters1 b. 19 Feb 1913, d. 6 Nov 1996
Mother Frances Wolf1
Audrey Waters is the daughter of Paul D. Waters and Frances Wolf.1
Father Otis C. Chapman1
Mother Mabel J. Waters1 b. 1907, d. 1953
Arthur Chapman is the son of Otis C. Chapman and Mabel J. Waters.1
Eileen Chapman is the daughter of Otis C. Chapman and Mabel J. Waters.1
Father Lemuel E. Oliver1 b. 5 Apr 1880
Mother Stella G. Conklin1 b. 7 Jan 1885
Gertrude I. Oliver is the daughter of Lemuel E. Oliver and Stella G. Conklin.1
Merle E. Oliver is the son of Lemuel E. Oliver and Stella G. Conklin.1
Kenneth L. Ellis1
Father Leroy Everet Ellis1 b. 4 Sep 1884
Mother Katherine Mae Conklin1 b. 30 Dec 1882, d. 30 Jun 1955
Kenneth L. Ellis was born on 5 January 1911 in Warren County, Pennsylvania.1,2 He was the son of Leroy Everet Ellis and Katherine Mae Conklin.1 He married Mildred (?)3 He died on 5 March 1995 in Youngsville, Warren County, Pennsylvania, at age 84.2
Mildred (?)
Frederick Ellis3
[S114] Kenneth L. Ellis, Social Security Death Index, Master File, www.ancestry.com (Orem, UT: Ancestry, Inc., 1999). Hereinafter cited as SSDI.
[S1348] 1940 U.S. Census, Pennsylvania, Warren Co., Eldred Tp., Kenneth Ellis household, National Archives, (Kenneth Ellis, 29, PA, clerk; Mildred, 25, PA; Frederick, 5, PA).
Mildred (?)1
Mildred (?) married Kenneth L. Ellis, son of Leroy Everet Ellis and Katherine Mae Conklin.1
Kenneth L. Ellis b. 5 Jan 1911, d. 5 Mar 1995
Father Kenneth L. Ellis1 b. 5 Jan 1911, d. 5 Mar 1995
Mother Mildred (?)1
Frederick Ellis is the son of Kenneth L. Ellis and Mildred (?)1
The Verein Deutscher Zementwerke e.V. (VDZ) is the association of the cement-producing companies in Germany. Its mission is to safeguard and promote the common economic interests of the German cement industry and to advance science and technology. This concerns in particular research and development in the production of cement and other hydraulic binders and their application, e.g. in concrete and mortar. To this end the VDZ operates the Research Institute of the Cement Industry in Düsseldorf. In addition, the association represents the cement industry on committees and standardization bodies and contributes to the drafting of technical guidelines. It also acts in an advisory capacity to its members. The association is headquartered in Düsseldorf.
History
On 24 January 1877, 23 cement works joined together as the Verein Deutscher Cement-Fabrikanten. Together they campaigned for the creation of a cement standard and the definition of minimum requirements for product properties. After internal disputes over the use of additional main constituents in cement, the association was renamed Verein Deutscher Portland-Cement-Fabrikanten in 1889. The manufacturers of iron Portland cement and blast-furnace cement, who had been excluded during the dispute, founded their own associations in 1902 and 1913, respectively. From then on the three cement associations maintained their own technical-scientific institutions and laboratories. Only in 1941 did the three associations move closer together again, merging into a working group from which the Verein Deutscher Portland- und Hüttenzementwerke emerged in 1948. In 1952 this technical-scientific association was renamed Verein Deutscher Zementwerke. Finally, in 2012 the Bundesverband der Deutschen Zementindustrie e.V. (BDZ), until then the economic-policy association of the German cement manufacturers, merged with the VDZ.
Members
CEMEX Deutschland AG
Deuna Zement GmbH
Dyckerhoff GmbH
HeidelbergCement AG
Holcim (Deutschland) GmbH
Holcim (Süddeutschland) GmbH
Märker Zement GmbH
OPTERRA Zement GmbH
OPTERRA Wössingen GmbH
PHOENIX Zementwerke Krogbeumker GmbH & Co. KG
Portlandzementwerk Wittekind Hugo Miebach Söhne
Portlandzementwerk "Wotan" H. Schneider KG
SCHWENK Zement KG
Sebald Zement GmbH
Solnhofer Portland-Zementwerke GmbH & Co. KG
Spenner GmbH & Co. KG
Spenner Zementwerk Berlin GmbH & Co. KG
Südbayerisches Portland-Zementwerk Gebr. Wiesböck & Co. GmbH
In addition, five domestic and 22 foreign companies belong to the VDZ e.V. as extraordinary members.
Literature
Verein Deutscher Zementwerke e.V.: 125 Jahre Forschung für Qualität und Fortschritt, Verein Deutscher Zementwerke e.V. (ed.), ISBN 3-7640-0439-8, Berlin 2002.
External links
Official website of the VDZ
References
Association (federal association)
Technically oriented research institute
Cement manufacturers
Research institute in Germany
Founded in 1877
Association (Düsseldorf)
The Stranger from Within is the second single by the musical project Ayreon, released in 1996. The track comes from the project's second studio album, Actual Fantasy.
Track listing
The Stranger From Within [single version] – 3:39
The Dawn of Man – 7:32
The Stranger from Within [album version] – 7:39
Personnel
Arjen Lucassen – vocals, guitar, bass guitar, synthesizer
Edward Reekers – vocals
Okkie Huysdens – vocals
Robert Soeterboek – vocals
Cleem Determeijer – keyboards
Rene Merkelbach – keyboards
The drum track was created using a drum machine.
External links
Cover art
Ayreon singles
Singles released in 1996
Libya's pottery industry struggles to make sales
Pottery has long been the pride of the city of Gharyan, in northwestern Libya, though the craft has declined over the years.
To try to save their tradition, the craftsmen now rely on online sales.
Potteries in Gharyan, a city of 160,000 people, essentially stopped developing in the 1980s and are struggling to keep pace with modernization.
Businesses across Libya face daunting logistical challenges and an archaic banking system -- a challenge potter Muayyad al-Shabani overcomes by receiving payment through an account in Europe.
On one of the main avenues of the city nicknamed by Libyans "capital of pottery", the tradition is quite evident.
For a long time, their major obstacle has been the transportation of pots to various markets across the country and externally.
"We tried to make it easier to transport the pottery to Libyans living outside the country - in Britain, Germany, America and Canada, as a first step - it was a great experience at first, then we experienced problems with shipping, such as packaging, as we didn't have very good materials for packaging, so we invested in that part of the project," said Muayyad al-Shabani, a potter from Gharyan.
Orders are placed directly on dedicated Facebook and Instagram pages, run by a dozen people, from sales to packaging to shipping.
This industry now faces imminent closure due to a shortage of laborers, a rise in the price of raw materials, sluggish marketing and increasingly tough competition.
"There's a lack of basic materials, which we have to import at high prices, and there are also few workers because of a lack of craft schools. Also the new generation (are not learning the craft), so we are relying on the older generation, who are leaving this profession as they advance in age, and then there's foreign labor and sometimes the market is good while other times there's a recession," said Ali Al-Zarqani, potter from Gharyan.
Ceramics have long been part of the identity of many Libyans. The industry has changed the lives of many families and individuals, employing thousands of managers, marketers and craftsmen across the North African country.
Canadian Mathematical Society, cms.math.ca

# Non-Abelian Generalizations of the Erdős-Kac Theorem

Published: 2004-04-01 (printed Apr 2004)
Authors: M. Ram Murty, Filip Saidak

## Abstract

Let $a$ be a natural number greater than $1$. Let $f_a(n)$ be the order of $a$ mod $n$. Denote by $\omega(n)$ the number of distinct prime factors of $n$. Assuming a weak form of the generalised Riemann hypothesis, we prove the following conjecture of Erdős and Pomerance: The number of $n\leq x$ coprime to $a$ satisfying
$$\alpha \leq \frac{\omega(f_a(n)) - (\log \log n)^2/2}{(\log \log n)^{3/2}/\sqrt{3}} \leq \beta$$
is asymptotic to
$$\left(\frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-t^2/2}\,dt\right) \frac{x\phi(a)}{a},$$
as $x$ tends to infinity.

Keywords: Turán's theorem, Erdős-Kac theorem, Chebotarev density theorem, Erdős-Pomerance conjecture
MSC Classifications: 11K36 (well-distributed sequences and other variations), 11K99

© Canadian Mathematical Society, 2013
\section{Introduction}
Quantum state engineering in atoms and molecules traditionally uses three
basic techniques for transfer of population, complete or partial, from one
bound energy state to another, single or superposition state: resonant
pulses of precise areas (e.g. $\pi $\ pulses in a two-state system or
generalized $\pi $ pulses for multiple states) \cite{Shore}, adiabatic
passage using one or more level crossings \cite{ARPC}, or stimulated Raman
adiabatic passage (STIRAP) and its extensions \cite{STIRAP}. All these
techniques require the system to be initially in a single energy state; such
a state can be easily prepared experimentally, e.g. by optical pumping. Some
of these techniques are \textquotedblleft tuned\textquotedblright\ to a
specific initial condition: for example, STIRAP requires a counterintuitive
pulse sequence to transfer population from state 1 to 3 in a 1-2-3 linkage,
but it is largely irrelevant if the system starts in states 2 or 3 (with
some exceptions for state 3) \cite{STIRAP}. In other words, STIRAP is (very)
useful in producing only one column (the first) of the unitary propagator.
Similar conclusions apply, to a large extent, also to the other two
techniques using pulse areas and level crossings.
These traditional techniques resolve only a small (although important) part
of the general problem of quantum state engineering: given the initial and
final states of an $N$-state system, find a physical set of operations that
connect them. This problem requires the construction of the \emph{entire
propagator}, not just a single column or row.
In this paper we introduce a technique for full quantum state engineering,
which produces in a systematic manner a propagator that can connect any two
preselected superposition states of an $N$-state quantum system,
representing a \emph{qunit} in quantum information \cite{QI}. The two states
can be pure as well as mixed, and the latter may have the same or different
sets of dynamic invariants (constants of motion). The solution consists of
two steps: first, find a propagator that connects the two states, and
second, find a physical realization of this propagator.
The \emph{first} part is the mathematical solution of this inverse problem
in quantum mechanics, and the solution is different for three types of
problems: (i) pure-to-pure states; (ii) mixed-to-mixed states with the same
invariants; (iii) mixed-to-mixed states with different invariants. The case
(iii), for instance, contains the important problem of engineering an
arbitrary presected mixed state and we pay special attention to it. In this
latter respect our \emph{exact analytic} results are alternative to the
(approximate) numeric optimization procedure proposed by Karpati \emph{et al}
\cite{Karpati}; moreover, our approach allows one to engineer \emph{any}
preselected mixed state, whereas the method of Karpati \emph{et al} \cite%
{Karpati} can only produce a class of mixed states.
The \emph{second} part of the solution is the physical realization of the
respective propagator. For this we use the recently introduced physical
implementation of the quantum Householder reflection (QHR) \cite%
{Kyoseva,Ivanov} and we show that QHR is a very powerful tool for quantum
state engineering. Remarkably, in case (i) only a single QHR is needed to
connect two pure states. In case (ii), a general U($N$) propagator is
necessary in the general case, which requires $N$ QHRs. In case (iii), some
sort of incoherent process is required in order to equalize the different
dynamic invariants of the initial and final mixed states, and the remaining
coherent U($N$) part is realized by QHRs. We describe the use of two such
incoherent processes: pure dephasing and spontaneous emission.
The Householder reflection \cite{Householder} is a powerful and numerically
very robust unitary transformation, which has many applications in classical
data analysis, e.g., in solving systems of linear algebraic equations,
finding eigenvalues of high-dimensional matrices, least-square optimization,
QR decomposition, etc. \cite{Householder applications}. In its quantum
mechanical implementation \cite{Kyoseva,Ivanov} it consists of a single
interaction step involving $N$\ simultaneous pulsed fields of precise areas
and detunings in an $N$-pod linkage pattern, wherein the $N$ states of our
system are coupled to each other via an ancillary excited state, as
displayed in Fig. \ref{Fig-Npod}. We use two types of QHRs: standard and
generalized; the latter involves an additional phase factor. The standard
QHR can operate on or off resonance, whereas the generalized QHR requires
specific detunings. \emph{Any} unitary matrix can be decomposed into (and
therefore, synthesized by) $N-1$\ standard QHRs and a phase gate, or into $N$%
\ generalized QHRs, without a phase gate; hence only $N$ physical operations
are needed, which allows one to greatly reduce the number of physical steps,
from $O(N^{2})$\ in existing U(2) realizations \cite{qudits-SU(2)} to only $%
O(N)$\ with QHRs.
\begin{figure}[tbp]
\includegraphics[width=60mm]{fig1.eps}
\caption{Physical realization of the quantum Householder reflection: $N$
degenerate (in RWA sense) ground states, forming the \emph{qunit},
coherently coupled via a common excited state by pulsed external fields of
the same time dependence and the same detuning, but possibly different
amplitudes and phases. }
\label{Fig-Npod}
\end{figure}
This paper is organized as follows. In Sec. \ref{Sec-QHR} we review the
standard and generalized QHR gates and their physical implementations. In
Sec. \ref{Sec-pure} we show how two pure states can be connected by means of
standard and generalized QHRs. In Sec. \ref{Sec-mixed} we construct the
propagator connecting two arbitrary mixed states with the same dynamic
invariants. Engineering of an arbitrary preselected mixed qunit state is
presented in Sec. \ref{Sec-engineering}. The conclusions are summarized in
Sec. \ref{Sec-conclusions}.
\section{The tool: Quantum Householder Reflection \label{Sec-QHR}}
\subsection{Definition}
The \emph{standard} QHR is defined as%
\begin{equation}
\mathbf{M}(v)=\mathbf{I}-2\left\vert v\right\rangle \left\langle
v\right\vert , \label{SQHR}
\end{equation}%
where $\mathbf{I}$ is the identity operator and $\left\vert v\right\rangle $
is an $N$-dimensional normalized complex column-vector. The QHR (\ref{SQHR})
is both hermitian and unitary, $\mathbf{M}=\mathbf{M}^{^{\dagger }}=\mathbf{M%
}^{-1}$, which means that $\mathbf{M}$ is involutary, $\mathbf{M}^{2}=%
\mathbf{I}$. In addition, $\det \mathbf{M}=-1$. For real $\left\vert
v\right\rangle $ the Householder transformation (\ref{SQHR}) has a simple
geometric interpretation: reflection with respect to an $(N-1)$-dimensional
plane with a normal vector $\left\vert v\right\rangle $. In general, the
vector $\left\vert v\right\rangle $ is complex and it is characterized by $%
2N-2$ real parameters (with the normalization condition and the unimportant
global phase accounted for).
The \emph{generalized} QHR is defined as%
\begin{equation}
\mathbf{M}(v;\varphi )=\mathbf{I}+\left( e^{i\varphi }-1\right) \left\vert
v\right\rangle \left\langle v\right\vert , \label{GQHR}
\end{equation}%
where $\varphi $ is an arbitrary phase. The standard QHR (\ref{SQHR}) is a
special case of the generalized QHR (\ref{GQHR}) for $\varphi =\pi $: $%
\mathbf{M}(v;\pi )\equiv \mathbf{M}(v)$. The generalized QHR is unitary, $%
\mathbf{M}(v;\varphi )^{-1}=\mathbf{M}(v;\varphi )^{\dagger }=\mathbf{M}%
(v;-\varphi )$, and its determinant is $\det \mathbf{M}=e^{i\varphi }$.
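As a quick numerical check of these definitions, the following NumPy sketch builds the matrix of Eq. (\ref{GQHR}) and verifies the stated algebraic properties (hermiticity and involution for $\varphi =\pi $, unitarity, and the determinant). The helper name and the random test vector are illustrative choices of ours.

```python
import numpy as np

def qhr(v, phi=np.pi):
    """Generalized QHR M(v; phi), Eq. (GQHR); phi = pi gives the standard QHR."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)                        # |v> must be normalized
    return np.eye(len(v)) + (np.exp(1j * phi) - 1) * np.outer(v, v.conj())

rng = np.random.default_rng(1)
v = rng.normal(size=3) + 1j * rng.normal(size=3)     # arbitrary qutrit vector

M = qhr(v)                                           # standard QHR
assert np.allclose(M, M.conj().T)                    # hermitian
assert np.allclose(M @ M, np.eye(3))                 # involutary, hence unitary
assert np.isclose(np.linalg.det(M), -1)              # det M = -1

G = qhr(v, phi=0.7)                                  # generalized QHR
assert np.allclose(G @ G.conj().T, np.eye(3))        # unitary
assert np.isclose(np.linalg.det(G), np.exp(0.7j))    # det M = e^{i phi}
```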
\subsection{Physical implementation}
We have shown recently \cite{Kyoseva,Ivanov} that the standard and
generalized QHR operators can be realized physically in a coherently coupled
$N$-pod system shown in Fig. \ref{Fig-Npod}. The $N$ degenerate [in the
rotating-wave approximation (RWA) sense \cite{Shore}] ground states $%
\left\vert n\right\rangle $ ($n=1,2,\ldots ,N$), which represent the \emph{%
qunit}, are coupled coherently by $N$ external fields to an ancillary,
excited state $\left\vert e\right\rangle \equiv \left\vert N+1\right\rangle $
\cite{Kyoseva}. The excited state $\left\vert e\right\rangle $ can generally
be off resonance by a detuning $\Delta \left( t\right) $ \cite{Kyoseva},
which must be the same for all fields. The Rabi frequencies $\Omega
_{1}(t),\ldots ,\Omega _{N}(t)$ of the couplings between the ground states
and the excited state have the same pulse-shaped time dependence $f\left(
t\right) $, but possibly different phases $\beta _{n}$ and amplitudes $\chi
_{n}$,
\begin{equation}
\Omega _{n}(t)=\chi _{n}f\left( t\right) e^{i\beta _{n}}\text{\quad }%
(n=1,2,\ldots ,N). \label{Omega}
\end{equation}%
The\ qunit+ancilla RWA Hamiltonian reads%
\begin{equation}
\mathbf{H}(t)=\frac{\hbar }{2}\left[
\begin{array}{ccccc}
0 & 0 & \cdots & 0 & \Omega _{1}\left( t\right) \\
0 & 0 & \cdots & 0 & \Omega _{2}\left( t\right) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & \Omega _{N}\left( t\right) \\
\Omega _{1}^{\ast }\left( t\right) & \Omega _{2}^{\ast }\left( t\right) &
\cdots & \Omega _{N}^{\ast }\left( t\right) & 2\Delta \left( t\right)
\end{array}%
\right] , \label{Hamiltonian}
\end{equation}%
The exact solution to the Schr\"{o}dinger equation for the propagator $%
\mathbf{U}(t)$,%
\begin{equation}
i\hbar \frac{d}{dt}\mathbf{U}(t)=\mathbf{H}(t)\mathbf{U}(t). \label{SEq}
\end{equation}
can be found in \cite{Kyoseva}.
The \emph{standard QHR }$\mathbf{M}(v)$ is realized on exact resonance ($%
\Delta =0$), for any pulse shape $f\left( t\right) $, and for
root-mean-square (rms) pulse area%
\begin{equation}
A=2\left( 2k+1\right) \pi \quad \left( k=0,1,2,\ldots \right) ,
\label{A-resonance}
\end{equation}%
where%
\begin{equation}
A=\int_{-\infty }^{\infty }\Omega (t)dt, \label{rms area}
\end{equation}%
with $\Omega (t)=\left[ \sum_{n=1}^{N}\left\vert \Omega _{n}(t)\right\vert
^{2}\right] ^{1/2}$. Then the transition probabilities to the ancilla state
vanish and the propagator within the qunit space is given exactly by the
standard QHR $\mathbf{M}(v)$ (\ref{SQHR}). The components of the $N$%
-dimensional normalized complex vector $\left\vert v\right\rangle $ are the
Rabi frequencies, with the accompanying phases \cite{Ivanov},%
\begin{equation}
\left\vert v\right\rangle =\frac{1}{\chi }\left[ \chi _{1}e^{i\beta
_{1}},\chi _{2}e^{i\beta _{2}},\ldots ,\chi _{N}e^{i\beta _{N}}\right] ^{T},
\label{V}
\end{equation}%
where $\chi =\left( \sum_{n=1}^{N}\chi _{n}^{2}\right) ^{1/2}$. Hence the
qunit propagator represents a physical realization of the standard QHR\emph{%
\ }in a single interaction step. Any QHR vector (\ref{V}) can be produced on
demand by appropriately selecting the peak couplings $\chi _{n}$ and the
phases $\beta _{n}$ of the external fields.
The \emph{generalized QHR} $\mathbf{M}(v;\varphi )$ can be realized in the
same $N$-pod system, but for specific detunings off resonance. Again the
transition probabilities to the ancilla state must vanish; the corresponding
rms pulse areas (\ref{rms area}) in general depend on the pulse shape and
differ from the resonance values (\ref{A-resonance}). The propagator within
the qunit space is the generalized QHR (\ref{GQHR}), wherein the phase $%
\varphi $ depends on the interaction parameters. Although the parameters
(i.e. the rms area and the detuning) of any needed generalized QHR can be
found numerically for essentially any pulse shape, it is very convenient to
use a hyperbolic-secant pulse shape, for which there is a simple exact
analytic solution: the Rosen-Zener model \cite{RZ},
\begin{subequations}
\label{parameters RZ}
\begin{eqnarray}
f\left( t\right) &=&\text{sech}\left( t/T\right) , \label{pshape} \\
\Delta \left( t\right) &=&\Delta _{0}. \label{detuning}
\end{eqnarray}%
For this pulse shape, the rms area (\ref{rms area}) is $A=\pi \chi T$. A
generalized QHR transformation $\mathbf{M}(v;\varphi )$ (\ref{GQHR}) is
realized when the interactions satisfy again Eq. (\ref{V}), and the pulse
area and the detuning obey \cite{Kyoseva,Ivanov}
\end{subequations}
\begin{subequations}
\begin{eqnarray}
A &=&2\pi l\quad \left( l=1,2,\ldots \right) , \label{RZ area} \\
\varphi &=&2\arg \prod_{k=0}^{l-1}\left[ \Delta _{0}T+i\left( 2k+1\right) %
\right] . \label{RZ detuning}
\end{eqnarray}%
For any given $\varphi $, there are $l$ values of $\Delta _{0}$, which
satisfy Eq. (\ref{RZ detuning}) \cite{Kyoseva}. This is also the case for $%
\varphi =\pi $, i.e. for the standard QHR, for which one of the solutions is
$\Delta _{0}=0$. Hence the standard QHR $\mathbf{M}(v)$ can be realized both
on and off resonance, whereas the generalized QHR $\mathbf{M}(v;\varphi )$
can only be realized for nonzero $\Delta _{0}$. The advantage of tuning off
resonance is the lower transient population in the ancilla excited state,
which would reduce the population losses if the lifetime of this state is
short compared to the interaction duration.
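The relation between the static detuning and the QHR phase, Eqs. (\ref{RZ area})--(\ref{RZ detuning}), is easy to evaluate numerically; for $l=1$ it even inverts in closed form, $\Delta _{0}T=\cot (\varphi /2)$. The helper function below is a sketch of ours.

```python
import numpy as np

def rz_phase(delta0T, l):
    """QHR phase phi of Eq. (RZ detuning) for rms area A = 2*pi*l."""
    ks = np.arange(l)
    return 2 * np.angle(np.prod(delta0T + 1j * (2 * ks + 1)))

# For l = 1:  phi = 2*arctan(1/(Delta_0 T))  =>  Delta_0 T = cot(phi/2).
phi = 0.9
delta0T = 1.0 / np.tan(phi / 2)
assert np.isclose(rz_phase(delta0T, l=1), phi)
```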
This implementation is particularly suited for a qutrit ($N=3$) formed of
the magnetic sublevels of an atomic level with angular momentum $J=1$; then
the ancilla state should be a $J=0$ state. The three pulsed fields can be
delivered from the same laser by using beam splitters and polarizers, which
would authomatically ensure that all of them have the same detuning and
pulse shape. Moreover, with femtosecond pulses it would be possible to use
pulse shapers \cite{femto}, which can easily deliver pulses with the desired
areas. Of course, the use of femtosecond pulses offers another advantage:
decoherence is irrelevant on such time scales.
\subsection{Householder decomposition of a U($N$) propagator}
The standard QHR $\mathbf{M}(v)$ and the generalized QHR $\mathbf{M}%
(v;\varphi )$ can be used for U($N$) decomposition \cite{Ivanov}. Any $N$%
-dimensional unitary matrix $\mathbf{U}$ ($\mathbf{U}^{-1}=\mathbf{U}%
^{\dagger }$) can be expressed as a product of $N-1$ standard QHRs $\mathbf{M%
}\left( v_{n}\right) $ $(n=1,2,\ldots N-1)$ and a phase gate,
\end{subequations}
\begin{equation}
\Phi \left( \phi _{1},\ldots ,\phi _{N}\right) =\sum_{n=1}^{N}e^{i\phi
_{n}}\left\vert n\right\rangle \left\langle n\right\vert =\text{diag}\left\{
e^{i\phi _{1}},\ldots ,e^{i\phi _{N}}\right\} , \label{phase gate}
\end{equation}%
as%
\begin{equation}
\mathbf{U=M}\left( v_{1}\right) \mathbf{M}\left( v_{2}\right) \ldots \mathbf{%
M}\left( v_{N-1}\right) \Phi \left( \phi _{1},\phi _{2},\ldots ,\phi
_{N}\right) , \label{standard}
\end{equation}%
or as a product of $N$ generalized QHRs,
\begin{equation}
\mathbf{U=M}\left( v_{1};\varphi _{1}\right) \mathbf{M}\left( v_{2};\varphi
_{2}\right) \ldots \mathbf{M}\left( v_{N};\varphi _{N}\right) .
\label{generalized}
\end{equation}
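A numerical sketch of the factorization (\ref{standard}): left-multiplying by suitably chosen standard QHRs reduces a unitary, column by column, to a phase gate, and involution then yields the product form. NumPy is assumed; the function names and the random test matrix are ours.

```python
import numpy as np

def qhr(v):
    """Standard QHR M(v), Eq. (SQHR); v need not be pre-normalized."""
    v = np.asarray(v, dtype=complex)
    return np.eye(len(v)) - 2 * np.outer(v, v.conj()) / np.vdot(v, v)

def householder_decompose(U):
    """Return QHR vectors v_1..v_{N-1} and the phase-gate phases of Eq. (standard)."""
    N = U.shape[0]
    A = U.astype(complex).copy()
    vs = []
    for n in range(N - 1):
        u = A[:, n].copy()
        e = np.zeros(N, dtype=complex)
        e[n] = np.exp(1j * np.angle(u[n]))   # e^{i phi_n} |n>
        v = u - e
        if np.linalg.norm(v) > 1e-12:
            A = qhr(v) @ A                   # maps column n onto e^{i phi_n} |n>
            vs.append(v)
        else:
            vs.append(None)                  # column already in place: M = identity
    return vs, np.angle(np.diag(A))

rng = np.random.default_rng(2)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)                       # a random 4x4 unitary test matrix

vs, phis = householder_decompose(U)
R = np.diag(np.exp(1j * phis))               # the phase gate Phi
for v in reversed(vs):                       # U = M(v_1)...M(v_{N-1}) Phi
    if v is not None:
        R = qhr(v) @ R
assert np.allclose(R, U)
```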
\section{Transition between pure states\label{Sec-pure}}
The recipe designed above for constructing a general U($N$) transformation makes
it possible to solve the important quantum-mechanical problem of transferring
an $N$-state quantum system from one arbitrary preselected initial
superposition state to another such state, i.e. the inverse problem of
quantum-state engineering. The cases of pure and mixed states require
separate analyses.
\subsection{Transition by standard QHR}
\subsubsection{General case}
A pure qunit state\emph{\ }is described by a state vector $\left\vert \Psi
\right\rangle =\sum_{n=1}^{N}c_{n}\left\vert n\right\rangle $, where the
vectors $\left\vert n\right\rangle $ represent the qunit basis states, and $%
c_{n}$ is the complex-valued probability amplitude of state $\left\vert
n\right\rangle $. Given the preselected initial state $\left\vert \Psi
_{i}\right\rangle $ and the final state $\left\vert \Psi _{f}\right\rangle $
of the qunit, we wish to find a propagator $\mathbf{U}$, such that%
\begin{equation}
\left\vert \Psi _{f}\right\rangle =\mathbf{U}\left\vert \Psi
_{i}\right\rangle . \label{cf=Uci}
\end{equation}
We shall show that one of the possible solutions of Eq. (\ref{cf=Uci}) reads%
\begin{equation}
\mathbf{U}=\mathbf{M}(v_{f})\mathbf{DM}(v_{i}), \label{Us}
\end{equation}%
where $\mathbf{M}(v_{i})$ and $\mathbf{M}(v_{f})$ are standard QHRs. Here $%
\mathbf{D}$ is an $N$-dimensional unitary matrix, which, when acting upon a
single qunit basis state $\left\vert n\right\rangle $, only shifts its phase,%
\begin{equation}
\mathbf{D}\left\vert n\right\rangle =e^{i\phi _{n}}\left\vert n\right\rangle
. \label{D}
\end{equation}%
For example, $\mathbf{D}$ can be an arbitrary $N$-dimensional phase gate (%
\ref{phase gate}). Alternatively, $\mathbf{D}$ can be an arbitrary $N$%
-dimensional generalized QHR $\mathbf{M}\left( v;\varphi \right) $ with
vector $\left\vert v\right\rangle $ orthogonal to the qunit state $%
\left\vert n\right\rangle $, $\langle v\left\vert n\right\rangle =0$.
Finally $\mathbf{D}$ can be the identity $\mathbf{D}=\mathbf{I}$.
In order to prove Eq. (\ref{Us}) we first define the vector%
\begin{equation}
\left\vert v_{\alpha n}\right\rangle =\frac{\left\vert \Psi _{\alpha
}\right\rangle -e^{i\varphi _{\alpha n}}\left\vert n\right\rangle }{\sqrt{2%
\left[ 1-\text{Re}\left( \langle \Psi _{\alpha }\left\vert n\right\rangle
e^{i\varphi _{\alpha n}}\right) \right] }}, \label{vi}
\end{equation}%
where $\left\vert n\right\rangle $ is an arbitrarily chosen basis qunit
state, $\varphi _{\alpha n}=\arg [\Psi _{\alpha }]_{n}$ and $\alpha =i,f$.
The QHR $\mathbf{M}(v_{in})$ acting upon $\left\vert \Psi _{i}\right\rangle $
reflects it onto the single qunit state $\left\vert n\right\rangle $,
\begin{equation}
\mathbf{M}(v_{in})\left\vert \Psi _{i}\right\rangle =e^{i\varphi
_{in}}\left\vert n\right\rangle . \label{Mi}
\end{equation}%
The action of $\mathbf{D}$ upon $\left\vert n\right\rangle $ only shifts its
phase, see Eq. (\ref{D}). The action of $\mathbf{M}(v_{fn})$ upon $%
\left\vert n\right\rangle $ reflects this vector onto the final state,
\begin{equation}
\mathbf{M}(v_{fn})\left\vert n\right\rangle =e^{-i\varphi _{fn}}\left\vert
\Psi _{f}\right\rangle . \label{Mfe1}
\end{equation}%
Equations (\ref{Mi}), (\ref{D}) and (\ref{Mfe1}) imply that%
\begin{equation}
\mathbf{M}(v_{fn})\mathbf{DM}(v_{in})\left\vert \Psi _{i}\right\rangle
=e^{i\left( \varphi _{in}-\varphi _{fn}+\phi _{n}\right) }\left\vert \Psi
_{f}\right\rangle , \label{solution}
\end{equation}%
which, up to an unimportant phase, proves Eq. (\ref{Us}).
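For completeness, the first step, Eq. (\ref{Mi}), follows directly from the
Householder form of the standard QHR, $\mathbf{M}(v)=\mathbf{I}-2\left\vert
v\right\rangle \left\langle v\right\vert $ (assumed here, as is
conventional). Because $\varphi _{in}=\arg [\Psi _{i}]_{n}$, the product $%
\langle \Psi _{i}\left\vert n\right\rangle e^{i\varphi _{in}}=\left\vert
[\Psi _{i}]_{n}\right\vert $ is real, whence $\langle v_{in}\left\vert \Psi
_{i}\right\rangle =\mathcal{N}/2$, with $\mathcal{N}=\sqrt{2\left(
1-\left\vert [\Psi _{i}]_{n}\right\vert \right) }$ the normalization in Eq. (%
\ref{vi}); therefore%
\begin{equation}
\mathbf{M}(v_{in})\left\vert \Psi _{i}\right\rangle =\left\vert \Psi
_{i}\right\rangle -2\langle v_{in}\left\vert \Psi _{i}\right\rangle
\left\vert v_{in}\right\rangle =e^{i\varphi _{in}}\left\vert n\right\rangle ,
\end{equation}%
in agreement with Eq. (\ref{Mi}); Eq. (\ref{Mfe1}) is verified in the same
manner.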
The arbitrariness in the choice of the unitary matrix $\mathbf{D}$, and the
intermediate basis state $\left\vert n\right\rangle $, means that the
solution (\ref{Us}) for $\mathbf{U}$ is \emph{not unique}. However, what is
important is that it always exists. In fact the availability of multiple
solutions offers some flexibility for a physical implementation. In
particular we can always choose $\mathbf{D}=\mathbf{I}$; then the physical
realization of the propagator $\mathbf{U}$ requires \emph{only two} \emph{%
standard} \emph{QHRs}.
\subsubsection{Special cases}
In several important special cases \emph{only a single standard QHR} is
needed for a pure-to-pure transition.
1. If the qunit is in a \emph{single initial} basis state $\left\vert \Psi
_{i}\right\rangle =\left\vert n\right\rangle $ then, as follows from Eq. (%
\ref{Mfe1}), only one standard QHR $\mathbf{M}(v_{fn})$ is sufficient to
transfer it into an arbitrary superposition state $\left\vert \Psi
_{f}\right\rangle $, with $\left\vert v_{fn}\right\rangle $ given by Eq. (%
\ref{vi}).
2. Likewise, an arbitrary initial superposition state $\left\vert \Psi
_{i}\right\rangle $ can be linked to any \emph{single final} state $%
\left\vert \Psi _{f}\right\rangle =\left\vert n\right\rangle $ by only one
standard QHR $\mathbf{M}(v_{in})$, with $\left\vert v_{in}\right\rangle $
given by Eq. (\ref{vi}).
3. If $\left\vert \Psi _{i}\right\rangle $ and $\left\vert \Psi
_{f}\right\rangle $ are \emph{orthogonal} ($\langle \Psi _{f}\left\vert \Psi
_{i}\right\rangle =0$), then again only a single standard QHR $\mathbf{M}(v)$%
, with $\left\vert v\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert \Psi
_{f}\right\rangle -\left\vert \Psi _{i}\right\rangle \right) $, is
sufficient to connect them.
4. If $\left\vert \Psi _{i}\right\rangle $ and $\left\vert \Psi
_{f}\right\rangle $ are superpositional states with \emph{real }%
coefficients, then again a single standard QHR $\mathbf{M}(v)$, with $%
\left\vert v\right\rangle =\left( \left\vert \Psi _{f}\right\rangle
-\left\vert \Psi _{i}\right\rangle \right) /\sqrt{2\left( 1-\langle \Psi
_{f}\left\vert \Psi _{i}\right\rangle \right) }$, is sufficient to link them.
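In case 3, for instance, the orthogonality $\langle \Psi _{f}\left\vert \Psi
_{i}\right\rangle =0$ gives $\langle v\left\vert \Psi _{i}\right\rangle =-1/%
\sqrt{2}$, and the Householder form $\mathbf{M}(v)=\mathbf{I}-2\left\vert
v\right\rangle \left\langle v\right\vert $ (assumed here) yields%
\begin{equation}
\mathbf{M}(v)\left\vert \Psi _{i}\right\rangle =\left\vert \Psi
_{i}\right\rangle +\sqrt{2}\left\vert v\right\rangle =\left\vert \Psi
_{f}\right\rangle ,
\end{equation}%
with no residual phase factor; case 4 is verified analogously.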
\subsection{Transition by generalized QHR}
The generalized QHR is ideally suited for a pure-to-pure transition
because, as is easily seen, \emph{only one} \emph{generalized} \emph{QHR} is
sufficient to reflect state $\left\vert \Psi _{i}\right\rangle $ onto $%
\left\vert \Psi _{f}\right\rangle $,%
\begin{equation}
\mathbf{U}=\mathbf{M}(v;\varphi ), \label{gQHR}
\end{equation}%
where
\begin{subequations}
\begin{eqnarray}
\left\vert v\right\rangle &=&\frac{\left\vert \Psi _{f}\right\rangle
-\left\vert \Psi _{i}\right\rangle }{\sqrt{2\left( 1-\text{Re}\left\langle
\Psi _{f}|\Psi _{i}\right\rangle \right) }}, \\
\varphi &=&2\arg \left( 1-\langle \Psi _{f}\left\vert \Psi
_{i}\right\rangle \right) +\pi .
\end{eqnarray}%
In comparison with (\ref{Us}) the solution (\ref{gQHR}) is unique; there is
no arbitrariness in the choice of the QHR vector $\left\vert v\right\rangle $
(up to an unimportant global phase) and the phase $\varphi $.
\subsection{Examples}
\begin{figure}[tbp]
\includegraphics[height=90mm]{fig2.eps}
\caption{(color online) Time evolution of the pulsed fields (top) and the
populations (bottom) for the transition (\protect\ref{1-3}) in a qutrit. We
have assumed sech pulses and rms pulse area $A=4\protect\pi $ ($\protect\chi %
T=4$). The individual couplings $\protect\chi _{n}$ $(n=1,2,3)$ are given by
the components of the QHR vector (\protect\ref{vector1}), each multiplied by
$\protect\chi $. The detuning is $\Delta _{0}T$ $=1.732$ (which gives $%
\protect\varphi =\protect\pi $). The thick curve is the state mismatch (%
\protect\ref{mismatch}).}
\label{Fig 1-3}
\end{figure}
We consider a \emph{qutrit} ($N=3$), for which the QHR implementation is
particularly suitable. As a first example, the transition from a single
state to a superposition state,
\end{subequations}
\begin{equation}
\left\vert \Psi _{i}\right\rangle =\left\vert 1\right\rangle \longrightarrow
\frac{\left\vert 1\right\rangle +\left\vert 2\right\rangle +\left\vert
3\right\rangle }{\sqrt{3}}=\left\vert \Psi _{f}\right\rangle , \label{1-3}
\end{equation}%
is performed by a \emph{single QHR }$\mathbf{M}(v)$, with
\begin{equation}
\left\vert v\right\rangle =\frac{1}{2}\sqrt{1+\frac{1}{\sqrt{3}}}\left[
\sqrt{3}-1,-1,-1\right] ^{T}. \label{vector1}
\end{equation}%
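A direct check confirms the action of this QHR (in the Householder form $%
\mathbf{M}(v)=\mathbf{I}-2\left\vert v\right\rangle \left\langle
v\right\vert $): the components of the vector (\ref{vector1}) satisfy $%
2v_{1}^{2}=1-1/\sqrt{3}$ and $2v_{1}v_{2}=2v_{1}v_{3}=-1/\sqrt{3}$, so that%
\begin{equation}
\mathbf{M}(v)\left\vert 1\right\rangle =\left\vert 1\right\rangle
-2v_{1}\left\vert v\right\rangle =\frac{\left\vert 1\right\rangle
+\left\vert 2\right\rangle +\left\vert 3\right\rangle }{\sqrt{3}},
\end{equation}%
which is exactly the target state in Eq. (\ref{1-3}).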
Figure \ref{Fig 1-3} shows the corresponding time evolutions of the
populations and the state mismatch $D$. The latter is defined as the
distance between the qutrit state vector $\left\vert \Psi (t)\right\rangle $
and the target state $\left\vert \Psi _{f}\right\rangle $,%
\begin{equation}
D(t)=\frac{\sum_{mn}\left\vert \rho _{mn}(t)-\rho _{mn}^{f}\right\vert }{%
\sum_{mn}\left\vert \rho _{mn}^{i}-\rho _{mn}^{f}\right\vert },
\label{mismatch}
\end{equation}%
where $\rho _{mn}$ are the elements of the qutrit density matrix $\mathbf{%
\rho }$. This definition of $D$ applies to both pure and mixed states.
The behavior of $D$ allows us to verify that not only the populations but
also the phases of the probability amplitudes of the target state $%
\left\vert \Psi _{f}\right\rangle $ are produced by the QHR. Indeed, as time
progresses, $D$ approaches zero, which implies that $\left\vert \Psi
(t)\right\rangle $ aligns with $\left\vert \Psi _{f}\right\rangle $.
\begin{figure}[tbp]
\includegraphics[height=90mm]{fig3.eps}
\caption{(color online) Time evolution of the pulsed fields (top) and the
populations (bottom) for the transition (\protect\ref{2-3}) in a qutrit. We
have assumed sech pulse shapes and rms pulse area $A=2\protect\pi $ ($%
\protect\chi T=2$). The individual couplings $\protect\chi _{n}$ $(n=1,2,3)$
are given by the components of the standard QHRs (\protect\ref{2-3 standard}%
), each multiplied by $\protect\chi $. The detuning is $\Delta T=0$. The
thick curve shows the state mismatch (\protect\ref{mismatch}). }
\label{Fig 2-3 standard}
\end{figure}
In another example, we transfer a two-state superposition to a three-state
superposition,%
\begin{equation}
\frac{\left\vert 1\right\rangle +\left\vert 3\right\rangle }{\sqrt{2}}%
\longrightarrow \frac{\left\vert 1\right\rangle +e^{i\pi /3}\left\vert
2\right\rangle +e^{i\pi /7}\left\vert 3\right\rangle }{\sqrt{3}},
\label{2-3}
\end{equation}%
by two standard QHRs, $\mathbf{U}=\mathbf{M}(v_{f})\mathbf{M}(v_{i})$, with
\begin{subequations}
\label{2-3 standard}
\begin{eqnarray}
\left\vert v_{i}\right\rangle &=&\left[ -0.383,0,0.924\right] ^{T},
\label{1} \\
\left\vert v_{f}\right\rangle &=&\left[ -0.460,0.628e^{i\pi
/3},0.628e^{i\pi /7}\right] ^{T}, \label{2}
\end{eqnarray}%
or by one generalized QHR, $\mathbf{U}=\mathbf{M}(v;\varphi )$, with
\end{subequations}
\begin{equation}
\left\vert v\right\rangle =\left[ 0.194e^{0.213\pi i},0.863e^{-0.454\pi
i},0.467e^{-0.083\pi i}\right] ^{T}, \label{2-3 generalized}
\end{equation}%
and $\varphi =0.574\pi $. Figure \ref{Fig 2-3 standard} shows the time
evolution of the populations and the state mismatch (\ref{mismatch}) for a
standard-QHR implementation, and Fig. \ref{Fig 2-3 generalized} for the
generalized QHR. In both cases, the mismatch $D$ vanishes, indicating the
creation of the desired superposition (\ref{2-3}). The generalized-QHR
implementation is clearly superior because it creates the target state in a
single step.
\begin{figure}[tbp]
\includegraphics[height=90mm]{fig4.eps}
\caption{(color online) Time evolution of the pulsed fields (top) and the
populations (bottom) for the transition (\protect\ref{2-3}) in a qutrit. We
have assumed sech pulse shapes and rms pulse area $A=2\protect\pi $ ($%
\protect\chi T=2$). The individual couplings $\protect\chi _{n}$ $(n=1,2,3)$
are given by the components of the generalized QHR (\protect\ref{2-3
generalized}), each multiplied by $\protect\chi $. The detuning is $\Delta T$
$=0.791$, which produces the desired phase $\protect\varphi =0.574\protect%
\pi $. The thick curve shows the state mismatch (\protect\ref{mismatch}).}
\label{Fig 2-3 generalized}
\end{figure}
To conclude this section, we have demonstrated that any two pure
superposition qunit states can be connected by just a single generalized
QHR, or by two standard QHRs. This suggests that QHR, and particularly the
generalized version, is the most convenient and efficient tool for
pure-to-pure state navigation in Hilbert space. We now turn our attention to
the problem of connecting two arbitrary mixed states.
\section{Coherent navigation between mixed states\label{Sec-mixed}}
A mixed qunit state can be described by its density matrix $\mathbf{\rho }$,
whose spectral decomposition reads
\begin{equation}
\mathbf{\rho }=\sum_{n=1}^{N}r_{n}\left\vert \psi _{n}\right\rangle
\left\langle \psi _{n}\right\vert . \label{dm}
\end{equation}%
The eigenvalues $r_{n}$ of $\mathbf{\rho }$ satisfy $\sum_{n=1}^{N}r_{n}=1$,
and $\left\vert \psi _{n}\right\rangle $ are the orthonormalized $\left(
\langle \psi _{k}\left\vert \psi _{n}\right\rangle =\delta _{kn}\right) $
complex eigenvectors of $\mathbf{\rho }$. The density matrix is hermitian;
hence it can be parameterized by $N^{2}-1$ real parameters.
A hermitian Hamiltonian induces unitary evolution between an initial mixed
state $\mathbf{\rho }_{i}$ and a final state $\mathbf{\rho }_{f}$,
\begin{equation}
\mathbf{\rho }_{f}=\mathbf{U}\mathbf{\rho }_{i}\mathbf{U}^{\dagger }.
\label{udmu1}
\end{equation}%
A unitary evolution does not change the eigenvalues $\left\{ r_{n}\right\}
_{n=1}^{N}$, as is easily seen from Eq. (\ref{udmu1}); they are therefore
dynamic invariants (an equivalent set of dynamic invariants is $\left\{
\text{Tr}\mathbf{\rho }^{n}\right\} _{n=1}^{N}$).
Therefore, a unitary propagator $\mathbf{U}$ can only connect mixed states
with the same set of invariants $\left\{ r_{n}\right\} _{n=1}^{N}$. In order
to connect mixed states with different invariants we need an incoherent
process; we shall return to this problem in the next section. Here we shall
find the solution to the problem of linking two mixed states with the same
invariants.
Because the eigenvalues $\left\{ r_{n}\right\} _{n=1}^{N}$ of $\mathbf{\rho }%
_{i}$ and $\mathbf{\rho }_{f}$ are the same, we should have%
\begin{equation}
\mathbf{R}_{i}^{\dagger }\mathbf{\rho }_{i}\mathbf{R}_{i}=\mathbf{R}%
_{f}^{\dagger }\mathbf{\rho }_{f}\mathbf{R}_{f}=\mathbf{\rho }_{0},
\label{Rif}
\end{equation}%
where the unitary matrices $\mathbf{R}_{i}$ and $\mathbf{R}_{f}$ diagonalize
respectively $\mathbf{\rho }_{i}$ and $\mathbf{\rho }_{f}$, and $\mathbf{%
\rho }_{0}=$diag$\{r_{1},r_{2},\ldots ,r_{N}\}$. By replacing Eq. (\ref{Rif}%
) into Eq. (\ref{udmu1}) we find
\begin{subequations}
\begin{eqnarray}
\mathbf{\rho }_{0} &=&\mathbf{D}\mathbf{\rho }_{0}\mathbf{D}^{\dagger },
\label{DrhoD} \\
\mathbf{D} &=&\mathbf{R}_{f}^{\dagger }\mathbf{UR}_{i}. \label{D=RUR}
\end{eqnarray}%
Because $\mathbf{D}$ is a unitary matrix, Eq. (\ref{DrhoD}) implies $%
\mathbf{\rho }_{0}\mathbf{D}=\mathbf{D\rho }_{0}$. Since $\mathbf{\rho }_{0}$
is diagonal, $\mathbf{D}$ must be diagonal too (for nondegenerate
eigenvalues $r_{n}$; if some eigenvalues coincide, $\mathbf{D}$ may be block
diagonal, and a diagonal choice still suffices). There are no other
restrictions on $\mathbf{D}$; hence $\mathbf{D}$ can be an arbitrary
diagonal unitary matrix. It follows
from Eq. (\ref{D=RUR}) that the solution for the unitary propagator in Eq. (%
\ref{udmu1}) is given by
\end{subequations}
\begin{equation}
\mathbf{U}=\mathbf{R}_{f}\mathbf{DR}_{i}^{\dagger }. \label{U}
\end{equation}
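Indeed, substitution of Eq. (\ref{U}) into Eq. (\ref{udmu1}), together with
Eqs. (\ref{Rif}) and (\ref{DrhoD}), gives%
\begin{equation}
\mathbf{U\rho }_{i}\mathbf{U}^{\dagger }=\mathbf{R}_{f}\mathbf{DR}%
_{i}^{\dagger }\mathbf{\rho }_{i}\mathbf{R}_{i}\mathbf{D}^{\dagger }\mathbf{R%
}_{f}^{\dagger }=\mathbf{R}_{f}\mathbf{D\rho }_{0}\mathbf{D}^{\dagger }%
\mathbf{R}_{f}^{\dagger }=\mathbf{R}_{f}\mathbf{\rho }_{0}\mathbf{R}%
_{f}^{\dagger }=\mathbf{\rho }_{f},
\end{equation}%
as required.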
The propagator (\ref{U}) is not unique; it depends on the choice of the
diagonal matrix $\mathbf{D}$. In particular, we can always choose $\mathbf{D}%
=\mathbf{I}$. Hence, the transfer between two mixed states requires
a general U($N$) propagator. The latter can be expressed as a product of $N-1
$ standard QHRs $\mathbf{M}(v_{n})$ ($n=1,2,\ldots ,N-1$) and a phase gate $%
\mathbf{\Phi }\left( \phi _{1},\phi _{2},\ldots ,\phi _{N}\right) $, Eq. (%
\ref{standard}), or by $N$ generalized QHRs $\mathbf{M}(v_{n};\varphi _{n})$
($n=1,2,\ldots ,N$), Eq. (\ref{generalized}) \cite{Ivanov}.
We take as an example a \emph{qutrit}, with the arbitrarily chosen initial
and final density matrices
\begin{subequations}
\label{rho-mixed}
\begin{equation}
\mathbf{\rho }_{i}=\left[
\begin{array}{ccc}
0.490 & 0.115e^{-0.789\pi i} & 0.158e^{0.107\pi i} \\
0.115e^{0.789\pi i} & 0.336 & 0.018e^{-0.675\pi i} \\
0.158e^{-0.107\pi i} & 0.018e^{0.675\pi i} & 0.175%
\end{array}%
\right] , \label{rho initial}
\end{equation}%
\begin{equation}
\mathbf{\rho }_{f}=\left[
\begin{array}{ccc}
0.298 & 0.022e^{0.689\pi i} & 0.033e^{0.319\pi i} \\
0.022e^{-0.689\pi i} & 0.180 & 0.177e^{0.909\pi i} \\
0.033e^{-0.319\pi i} & 0.177e^{-0.909\pi i} & 0.523%
\end{array}%
\right] . \label{rho final}
\end{equation}%
These density matrices can be connected by the unitary propagator (\ref{U})
with $\mathbf{D}=\mathbf{I}$: $\mathbf{U}=\mathbf{R}_{f}\mathbf{R}%
_{i}^{\dagger }$. The latter can be expressed as a product of two standard
QHRs and one phase gate $\mathbf{U}=\mathbf{M}(v_{1})\mathbf{M}(v_{2})%
\mathbf{\Phi }$, with
\end{subequations}
\begin{subequations}
\begin{eqnarray}
\left\vert v_{1}\right\rangle &=&\left[ 0.612e^{0.532\pi
i},0.091e^{0.211\pi i},0.785e^{0.690\pi i}\right] ^{T}, \label{QHR mixed} \\
\left\vert v_{2}\right\rangle &=&\left[ 0,0.533e^{-0.181\pi
i},0.846e^{0.859\pi i}\right] ^{T}, \\
\mathbf{\Phi } &=&\text{diag}\{e^{-0.468\pi i},e^{0.819\pi i},e^{-0.350\pi
i}\},
\end{eqnarray}%
or by three generalized QHRs, $\mathbf{U}=\mathbf{M}(v_{1};\varphi _{1})%
\mathbf{M}(v_{2};\varphi _{2})\mathbf{M}(v_{3};\varphi _{3})$, with
\end{subequations}
\begin{subequations}
\label{mixed generalized}
\begin{eqnarray}
\left\vert v_{1}\right\rangle &=&\left[ 0.721e^{0.659\pi
i},0.080e^{-0.209\pi i},0.689e^{0.270\pi i}\right] ^{T}, \label{mixed v1} \\
\left\vert v_{2}\right\rangle &=&\left[ 0,0.813e^{0.469\pi
i},0.582e^{-0.261\pi i}\right] ^{T}, \label{mixed v2} \\
\left\vert v_{3}\right\rangle &=&\left[ 0,0,1\right] ^{T}, \label{mixed v3}
\\
\varphi _{1} &=&-0.841\pi ,\text{ }\varphi _{2}=0.969\pi ,\text{ }\varphi
_{3}=-0.128\pi . \label{mixed phi}
\end{eqnarray}
\begin{figure}[tbp]
\includegraphics[height=90mm]{fig5.eps}
\caption{(color online) Time evolution of the pulsed fields (top), the
populations and the state mismatch (bottom) for the transition between
states (\protect\ref{rho initial}) and (\protect\ref{rho final}) in a
qutrit. We have assumed sech pulse shapes and rms pulse area $A=2\protect\pi
$ ($\protect\chi T=2$). The individual couplings $\protect\chi _{n}$ $%
(n=1,2,3)$ are given by the components of the generalized QHR (\protect\ref%
{mixed generalized}), each multiplied by $\protect\chi $. The detunings are $%
\Delta _{1}T=-0.255$, $\Delta _{2}T=0.049$, and $\Delta _{3}T=-4.918$, which
produce the QHR phases (\protect\ref{mixed phi}).}
\label{Fig-mixed}
\end{figure}
Figure \ref{Fig-mixed} shows the respective time evolution of the
populations and the state mismatch (\ref{mismatch}) for the generalized-QHR
realization (\ref{mixed generalized}). The first QHR $\mathbf{M}%
(v_{3};\varphi _{3})$ does not cause population changes because it is in
fact a phase gate. As time progresses, the mismatch decreases and the target
density matrix (\ref{rho final}) is approached.
\section{Synthesis of arbitrary preselected mixed states\label%
{Sec-engineering}}
As was shown in the previous sections, by applying one or more QHRs one
can connect any two arbitrary pure states, or two arbitrary mixed states
with the same dynamic invariants $\left\{ r_{n}\right\} _{n=1}^{N}$. Mixed
states with different invariants cannot be connected by coherent hermitian
evolution because these invariants are constants of motion. Hence in order to
connect mixed states with different invariants we need a mechanism with
non-hermitian dynamics, which can alter the dynamic invariants.
In this section we shall describe two techniques for engineering an
arbitrary mixed state, starting from a single pure state. This is the most
interesting special case of the general problem of connecting two arbitrary
mixed states, because the initial state can be prepared routinely by optical
pumping. Moreover, the general mixed-to-mixed problem can be reduced to the
single-to-mixed problem by optically pumping the initial mixed state into a
single state.
The two techniques use a combination of coherent and incoherent evolutions.
The coherent evolution uses QHRs, whereas the incoherent non-hermitian
evolution is induced either by pure dephasing or spontaneous emission. We
shall consider the two techniques separately.
\subsection{Using dephasing}
We assume that the qunit is initially in the single qunit state $\mathbf{%
\rho }_{i}=\left\vert i\right\rangle \left\langle i\right\vert $, and we
wish to transform the system to an arbitrary mixed state $\mathbf{\rho }_{f}$%
. Let us denote the eigenvalues of $\mathbf{\rho }_{f}$ by $r_{n}$ ($%
n=1,2,\ldots ,N$). We proceed as follows.
\begin{itemize}
\item First, using the prescription from Sec. \ref{Sec-pure}, we apply a
single QHR to transfer state $\left\vert i\right\rangle $ to a pure
superposition state, in which the populations are equal to the eigenvalues
of $\mathbf{\rho }_{f}$: $\rho _{nn}=r_{n}$ ($n=1,2,\ldots ,N$). The phases
of this superposition are irrelevant.
\item In the second step we switch the dephasing on and let all coherences
decay to zero. This can be done, for example, by using phase-fluctuating
far-off-resonance laser fields. At the end of this process, the density
matrix will be diagonal, with the eigenvalues $r_{n}$ of $\mathbf{\rho }_{f}$
on the diagonal, which implies that it will have the same dynamic invariants
as $\mathbf{\rho }_{f}$.
\item The third step is to connect this intermediate state to the desired
state $\rho _{f}$ by a sequence of QHRs, as explained in the previous Sec. %
\ref{Sec-mixed}.
\end{itemize}
In summary, we need \emph{three} steps: a single QHR, a dephasing process,
and a sequence of $N$ QHRs. Figure \ref{Fig-engineering} shows the evolution
of the populations and the state mismatch (\ref{mismatch}) during the
engineering of the mixed state (\ref{rho final}) by the dephasing technique.
The first step is the single QHR $\mathbf{M}(v)$, with QHR vector
\end{subequations}
\begin{equation}
\left\vert v\right\rangle =\left[ -0.336,0.816,0.471\right] ^{T},
\label{engineering v}
\end{equation}%
which transfers the single initial state $\left\vert 1\right\rangle $ to the
pure superposition state
\begin{equation}
\mathbf{\rho }_{1}=\left[
\begin{array}{ccc}
0.6 & \sqrt{0.18} & \sqrt{0.06} \\
\sqrt{0.18} & 0.3 & \sqrt{0.03} \\
\sqrt{0.06} & \sqrt{0.03} & 0.1%
\end{array}%
\right] .
\end{equation}%
The second step is the pure dephasing process, which nullifies all
coherences and leaves the density matrix in a diagonal form,%
\begin{equation}
\mathbf{\rho }_{2}=\text{diag}\left\{ 0.6,0.3,0.1\right\} .
\end{equation}%
The third step is a sequence of two generalized QHRs, which transfer $%
\mathbf{\rho }_{2}$ into the desired final density matrix $\mathbf{\rho }_{f}
$, Eq. (\ref{rho final}). The QHR components read
\begin{subequations}
\label{mixed engineering}
\begin{eqnarray}
\left\vert v_{1}\right\rangle &=&\left[ 0.689e^{0.454\pi
i},0.280e^{0.436\pi i},0.668e^{-0.477\pi i}\right] ^{T},
\label{engineering v1} \\
\left\vert v_{2}\right\rangle &=&\left[ 0,0.793e^{0.740\pi
i},0.609e^{0.025\pi i}\right] ^{T}, \label{engineering v2} \\
\varphi _{1} &=&0.950\pi ,\quad \varphi _{2}=-0.760\pi .
\label{engineering phases}
\end{eqnarray}
\begin{figure}[tbp]
\includegraphics[width=70mm]{fig6.eps}
\caption{(color online) Time evolution of the pulsed fields (top), the
populations and the state mismatch (\protect\ref{mismatch}) (bottom) for
mixed state engineering in a qutrit. The qutrit starts in state $\left\vert
1\right\rangle $ and the target final state is given by Eq. (\protect\ref%
{rho final}). We have assumed sech pulse shapes and rms pulse area $A=2%
\protect\pi $ ($\protect\chi T=2$). The individual couplings $\protect\chi %
_{n}$ $(n=1,2,3)$ are given by the components of the generalized QHR (%
\protect\ref{mixed engineering}), each multiplied by $\protect\chi $. The
detunings are $\Delta _{1}T=0.072$ and $\Delta _{2}T=-0.396$, which produce
the desired QHR phases (\protect\ref{engineering phases}). The dephasing
rate is $\Gamma =2/T$.}
\label{Fig-engineering}
\end{figure}
\subsection{Using spontaneous emission}
In the method that uses spontaneous emission, we start again in a single
qunit state $\mathbf{\rho }_{i}=\left\vert i\right\rangle \left\langle
i\right\vert $, and the target is the arbitrary mixed state $\mathbf{\rho }%
_{f}$. The procedure now consists of only two steps: incoherent and
coherent. It is particularly well suited for a qutrit, which we shall
describe here, although it is readily extended to larger systems. This method
requires a closed qunit-ancilla transition; if the ancilla state can decay
to other levels then the fidelity will be reduced accordingly.
It is possible here to apply directly the incoherent step, which produces a
density matrix with the desired final dynamic invariants, without first
preparing a coherent qunit superposition, as in the dephasing method
above. The idea is to use laser-induced spontaneous emission from the
ancilla excited state to prepare a completely incoherent superposition of
the qunit states with populations $\rho _{nn}$ equal to the eigenvalues $%
r_{n}$ of $\mathbf{\rho }_{f}$,
\end{subequations}
\begin{equation}
\mathbf{\rho }=\sum_{n=1}^{3}r_{n}\left\vert n\right\rangle \left\langle
n\right\vert . \label{rho intermediate}
\end{equation}%
For this we apply a sequence of appropriately chosen laser pulses from the
qunit states to the excited state, which decays back to the qunit states and
redistributes the population among them.
Various scenarios can produce the desired incoherent qunit superposition.
Here we describe one that is particularly simple and easy to implement for
the qutrit formed of the
magnetic sublevels $M=-1,0,1$ of a $J=1$ level and an ancilla excited level
with $J=0$ (this implies also equal spontaneous decay branch ratios from the
$J=0$ level to the $M$ sublevels of the qutrit). For definiteness, and
without loss of generality, we assume that the eigenvalues of $\mathbf{\rho }%
_{f}$ are ordered as $r_{1}\geqq r_{2}\geqq r_{3}$. We need three pulses: a
short pulse from state $\left\vert 1\right\rangle $, a long pulse from state
$\left\vert 3\right\rangle $ and again a short pulse from state $\left\vert
1\right\rangle $ (here ``short'' and ``long'' are relative to the lifetime
of the excited state).
The short pulse from the initially populated state $\left\vert
1\right\rangle $, with excitation probability $p_{1}$, transfers population $%
p_{1}$ to the excited state, 1/3 of which decays back to each of the qutrit
states. The ensuing density matrix reads
\begin{equation}
\mathbf{\rho }_{1}=\text{diag}\left\{ 1-\frac{2}{3}p_{1},\frac{1}{3}p_{1},%
\frac{1}{3}p_{1}\right\} . \label{rho1}
\end{equation}%
We then apply a sufficiently long pulse from state $\left\vert
3\right\rangle $, so that its population is completely depleted and, by the
symmetry of the decay branchings, distributed equally between states $%
\left\vert 1\right\rangle $ and $\left\vert 2\right\rangle $. The resulting
density matrix is
\begin{equation}
\mathbf{\rho }_{2}=\text{diag}\left\{ 1-\frac{1}{2}p_{1},\frac{1}{2}%
p_{1},0\right\} .
\end{equation}%
We now apply again a short pulse from state $\left\vert 1\right\rangle $,
with a different probability $p_{2}$, and then wait for spontaneous emission
from the excited state. The result is%
\begin{eqnarray}
\mathbf{\rho }_{3} &=&\text{diag}\left\{ \left( 1-\frac{1}{2}p_{1}\right)
\left( 1-\frac{2}{3}p_{2}\right) ,\right. \notag \\
&&\left. \frac{1}{2}p_{1}+\frac{1}{3}p_{2}\left( 1-\frac{1}{2}p_{1}\right) ,%
\frac{1}{3}p_{2}\left( 1-\frac{1}{2}p_{1}\right) \right\} .
\end{eqnarray}%
It is easy to show that in order to create the mixed state (\ref{rho
intermediate}) we should have the probabilities
\begin{subequations}
\begin{eqnarray}
p_{1} &=&2\left( r_{2}-r_{3}\right) , \\
p_{2} &=&\frac{3r_{3}}{r_{1}+2r_{3}}.
\end{eqnarray}%
Because we assumed that $r_{1}\geqq r_{2}\geqq r_{3}$ the probabilities $%
p_{1}$ and $p_{2}$ belong to the interval $\left[ 0,1\right] $ and are
therefore well defined. Such probabilities can be produced by resonant
pulses with appropriate pulse areas $A_{n}$. These pulses should be short
compared to the lifetime of the excited state in order to avoid spontaneous
emission during their action.
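The verification is straightforward. With $p_{1}=2\left( r_{2}-r_{3}\right)
$ and $r_{1}+r_{2}+r_{3}=1$ one has $1-\frac{1}{2}p_{1}=1-r_{2}+r_{3}=r_{1}+2r_{3}$, and hence, with $p_{2}=3r_{3}/\left( r_{1}+2r_{3}\right) $,%
\begin{eqnarray}
\frac{1}{3}p_{2}\left( 1-\frac{1}{2}p_{1}\right) &=&r_{3}, \notag \\
\frac{1}{2}p_{1}+\frac{1}{3}p_{2}\left( 1-\frac{1}{2}p_{1}\right) &=&r_{2},
\notag \\
\left( 1-\frac{1}{2}p_{1}\right) \left( 1-\frac{2}{3}p_{2}\right)
&=&r_{1}+2r_{3}-2r_{3}=r_{1}, \notag
\end{eqnarray}%
so that $\mathbf{\rho }_{3}$ coincides with the desired incoherent mixture (%
\ref{rho intermediate}).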
Once we have prepared the mixed qutrit state (\ref{rho intermediate}), which
has the same invariants as $\mathbf{\rho }_{f}$, we can apply QHRs to
transfer this state into the desired final state $\mathbf{\rho }_{f}$, as
described in Sec. \ref{Sec-mixed}.
\section{Conclusions\label{Sec-conclusions}}
In this paper we have proposed a technique that allows one to connect any
two quantum superposition states, pure or mixed, of an $N$-state atom. This
solution of the inverse problem in quantum mechanics contains two stages:
(i) mathematical derivation of the propagator that links the desired initial
and final density matrices, and (ii) physical realization of this
propagator. In the most general case of arbitrary mixed states, the
implementations combine coherent hermitian and incoherent non-hermitian
interactions induced by pulsed laser fields. In general, the propagator is
not unique, which reflects the multitude of paths between two qunit states;
this also allows some flexibility in the choice of the most convenient path.
The physical realization uses an $N$-pod configuration of $N$ lower states,
forming the qunit, and an ancillary upper state. It is particularly
convenient for a qutrit, where the $N=3$ states are the magnetic sublevels
of a $J=1$ level and the ancilla state is a $J=0$ level. Then only a single
tunable laser is needed to provide the necessary polarized laser pulses.
The hermitian part uses a sequence of sets of short coherent laser pulses
with appropriate pulse areas and detunings. For each set, the propagator of
the $N$-pod represents a quantum Householder reflection (QHR). A sequence of
\emph{at most} $N$ suitably chosen QHRs can synthesize any desired unitary
propagator.
We have shown that two arbitrary preselected \emph{pure} superposition
states can be connected by a \emph{single} QHR only, because the respective
propagator has exactly the QHR symmetry. Two mixed states, with the same set
of dynamic invariants, require a general U($N$) transformation, which can be
realized by at most $N$ QHRs. This is a significant improvement over the
existing setups involving $O(N^{2})$\ operations, which can be crucial in
making quantum state engineering and operations with qunits experimentally
feasible.
The most general case of two arbitrary mixed states with different dynamic
invariants requires an incoherent step, which equalizes the invariants of
the initial density matrix to those of the final density matrix. We have
demonstrated how this can be done by using pure dephasing or spontaneous
decay of the ancillary upper state. Once the invariants are equalized, the
problem is reduced to the one of connecting two mixed states with the same
invariants, which, as explained above, can be done by at most $N$ QHRs. This
method has been described for a qutrit, but it is easily generalized to an
arbitrary qunit.
The present results can have important applications in the storage of
quantum information. For example, a qubit can encode two continuous
parameters: the population ratio of the two qubit states and the relative
phase of their amplitudes.\ A qunit in a \emph{pure }state can encode $2(N-1)
$\ parameters ($N-1$\ populations and $N-1$\ relative phases), i.e. by using
qunits information can be encoded in significantly fewer particles than with
qubits. Moreover, a \emph{mixed }qunit state can encode as many as $N^{2}-1$
real parameters. This may be particularly interesting if the number of
particles that can be used is restricted, e.g., due to decoherence \cite{QI}.
\acknowledgments
This work is supported by the European Union's ToK project CAMEL and RTN
project EMALI, and the Alexander von Humboldt Foundation.
What is Charcot-Marie-Tooth disease (CMT)?
Charcot-Marie-Tooth disease (CMT) is a spectrum of nerve disorders named after the three physicians who first described it in 1886 — Jean-Martin Charcot and Pierre Marie of France, and Howard Henry Tooth of the United Kingdom. The term "CMT" is regarded as synonymous with hereditary motor sensory neuropathy (HMSN).
The overall estimated prevalence of CMT is approximately 19 cases per 100,000 people, a figure that varies between populations. CMT causes damage to the peripheral nerves, which carry signals from the brain and spinal cord to the muscles and relay sensations, such as pain and touch, to the brain and spinal cord from the rest of the body. There are a number of types of CMT.
What are the symptoms of CMT?
CMT causes muscle weakness and wasting (atrophy), and some loss of sensation in the lower legs and feet. Sometimes the hands, wrists, and forearms are affected as well. CMT also often causes contractures (stiffened joints due to abnormal tightening of muscles and associated tissues) and, sometimes, curvature of the spine (scoliosis or kyphosis).
At the severe end of the CMT spectrum, the disease can affect nerves other than those that go to and from the extremities. If the nerves that go to and from the diaphragm or intercostal (between the ribs) muscles are affected, respiratory impairment can result. For more, see Signs and Symptoms.
What causes CMT?
CMT is caused by defects in the genes that are responsible for creating and maintaining the myelin (insulating sheath around many nerves, increasing conductivity) and axonal structures.
More than 30 genes have been implicated in CMT, each one linked to a specific type (and in many cases, more than one type) of the disease.2 The vast majority of cases are attributed to mutations in just four of these genes: PMP22, MPZ, GJB1, and MFN2.3
CMT can be inherited in several ways: autosomal dominant (through a faulty gene contributed by either parent); autosomal recessive (through a faulty gene contributed by each parent); or X-linked (through a gene on the X chromosome contributed by either parent).4,5 For more on causes and inheritance patterns in CMT, see Causes/Inheritance.
What is the progression of CMT?
Depending on the type of CMT, onset can be from birth to adulthood, and progression is typically slow. CMT usually isn't life-threatening, and it rarely affects the brain.
What is the status of research on CMT?
CMT research is focused on exploring the effects of defects in genes related to the peripheral nervous system and devising strategies to combat these effects.
References
1. Understanding Neuromuscular Disease Care. IQVIA Institute, Parsippany, NJ (2018).
2. Klein, C. J., Duan, X. & Shy, M. E. Inherited neuropathies: Clinical overview and update. Muscle and Nerve (2013). doi:10.1002/mus.23775
3. Saporta, A. S. D. et al. Charcot-Marie-Tooth disease subtypes and genetic testing strategies. Ann. Neurol. (2011). doi:10.1002/ana.22166
4. Hahn, A. F., Brown, W. F., Koopman, W. J. & Feasby, T. E. X-linked dominant hereditary motor and sensory neuropathy. Brain (1990). doi:10.1093/brain/113.5.1511
5. Tazir, M., Bellatache, M., Nouioua, S. & Vallat, J. M. Autosomal recessive Charcot-Marie-Tooth disease: From genes to phenotypes. Journal of the Peripheral Nervous System (2013). doi:10.1111/jns5.12026
\section{Introduction}
We start by recalling some standard notation. Let $X$ be a Tychonoff space. By $C_p(X)$ we denote the space of all continuous real-valued functions on $X$ endowed with the pointwise topology. $C_p^*(X)$ denotes the subspace of $C_p(X)$ consisting of all bounded functions. If $\mu$ is a Borel measure on $X$ and $f\in C_p(X)$, then we write $\mu(f)=\int_Xfd\mu$. We also say that $\mu$ is finitely supported if it can be written in the form $\mu=\sum_{i=1}^n\alpha_i\delta_{x_i}$ for some points $x_1,\ldots,x_n\in X$ and real numbers $\alpha_1,\ldots,\alpha_n\in\mathbb{R}$, $n\in\omega$; in this case we write $\|\mu\|=\sum_{i=1}^n\big|\alpha_i\big|$ (see Section \ref{sec:prelim} for more details).
Let $F$ be a free filter on $\omega$. Endow the set $N_F=\omega\cup\big\{p_F\big\}$, where $p_F$ is a fixed point not belonging to $\omega$, with the topology defined in the following way:
\begin{itemize}
\item every point of $\omega$ is isolated in $N_F$, i.e. $\{n\}$ is open for every $n\in\omega}%{\in\N$,
\item every open neighborhood of $p_F$ in $N_F$ is of the form $A\cup\big\{p_F\big\}$ for some $A\in F$.
\end{itemize}
It is immediate that $N_F$ is a countable non-discrete Tychonoff space.
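To fix intuition, let us record two standard extreme examples (a routine verification, which we state without a formal proof). If $F=Fr$ is the Fr\'echet filter, then every neighborhood of $p_{Fr}$ is cofinite, so the sequence $\seqn{n}$ converges to $p_{Fr}$ and $N_{Fr}$ is homeomorphic to the convergent sequence $\omega+1$. On the other hand, if $F$ is a free ultrafilter, then for every $A\in\wp(\omega)$ either $A\cup\big\{p_F\big\}$ or $A^c\cup\big\{p_F\big\}$ is an open neighborhood of $p_F$, from which one easily deduces that no non-trivial sequence in $N_F$ converges.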
Spaces of the form $N_F$ naturally occur in many settings, e.g. they play an essential role in $C_p$-theory, where they have been used to provide many important examples of metrizable spaces $C_p(X)$, see e.g. \cite{vMill01}. The main purpose of this paper is to study for which free filters $F$ on $\omega$ the space $N_F$ has any of the following two properties.
\begin{definition}\label{def:jnp_bjnp}
A Tychonoff space $X$ has \textit{the Josefson--Nissenzweig property} (resp. \textit{the bounded Josefson--Nissenzweig property}), or shortly \textit{the JNP} (resp. \textit{the BJNP}), if $X$ admits a sequence $\seqn{\mu_n}$ of finitely supported measures such that $\big\|\mu_n\big\|=1$ for every $n\in\omega$ and $\mu_n(f)\to0$ for every $f\in C_p(X)$ (resp. $f\in C_p^*(X)$).
\end{definition}
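To see the definition at work, suppose that $\seqn{x_n}$ is a non-trivial sequence in $X$ converging to a point $x\in X$ with $x_n\neq x$ for every $n\in\omega$. Putting
\[\mu_n=\frac{1}{2}\big(\delta_{x_n}-\delta_x\big),\]
we get $\big\|\mu_n\big\|=1$ for every $n\in\omega$ and, by continuity, $\mu_n(f)=\frac{1}{2}\big(f(x_n)-f(x)\big)\to0$ for every $f\in C_p(X)$, so $X$ has the JNP. This is the standard computation behind Fact \ref{fact:conv_seq_jnp} mentioned below.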
These two properties are closely related to the famous Josefson--Nissenzweig theorem from Banach space theory stating that for every infinite-dimensional Banach space $E$ there exists a sequence $\seqn{x_n^*}$ in the dual space $E^*$ such that $\big\|x_n^*\big\|=1$ for every $n\in\omega$ and $x_n^*(x)\to0$ for every $x\in E$ (that is, $\seqn{x_n^*}$ is weak* null). They were introduced and studied in \cite{BKS1}, \cite{KSZ}, and \cite{KMSZ}, in the context of the Separable Quotient Problem for spaces $C_p(X)$ as well as in order to investigate Grothendieck Banach spaces of the form $C(K)$ (see below for details). Both contexts, despite originating in functional analysis, have deep connections with set theory (as demonstrated e.g. in \cite{KS18}, \cite{BKS0}, \cite{HLO87}, \cite{Bre06}, \cite{SZForExt}, and \cite{DSsurv}).
\medskip
For many examples of spaces with or without the (B)JNP, we refer the reader to \cite{KSZ} and \cite{KMSZ}. It is however immediate that if a space $X$ contains a non-trivial convergent sequence, then $X$ has the JNP (see Fact \ref{fact:conv_seq_jnp}), though the converse does not hold (see e.g. Example \ref{example:schachermayer}). On the other hand, $\beta\omega$, the \v{C}ech--Stone compactification of $\omega$, is an example of a space without the JNP (see Fact \ref{fact:bo_bjnp}). The JNP and BJNP coincide, of course, in the class of pseudocompact spaces but not in the class of all Tychonoff spaces---in \cite[Example 4.2]{KMSZ} it is proved that the space $N_{F_d}$, where $F_d$ stands for the asymptotic density filter (see Remark \ref{rem:density_bjnp_no_jnp} for the definition), has the BJNP but not the JNP.
\medskip
Let us describe our main results. Since, as mentioned above, the existence of convergent sequences in a given space implies that it has the JNP, in Proposition \ref{prop:nf_convseq_char} we list five conditions on a filter $F$ which are all equivalent to the statement that the space $N_F$ contains a convergent sequence. Using this characterization, we obtain that, for the class of spaces of the form $N_F$, the existence of a non-trivial convergent sequence is actually equivalent to the JNP.
\begin{restatable*}{corollary}{cornfjnp}\label{cor:nf_jnp}
For every free filter $F$ on $\omega$, the space $N_F$ has the JNP if and only if $N_F$ contains a non-trivial convergent sequence.
\end{restatable*}
For any non-trivial convergent sequence $\seqn{x_n\in\omega}$ in a given space $N_F$, the sequence $\seqn{\mu_n}$ of finitely supported probability measures defined for every $n\in\omega$ by the equality $\mu_n=\delta_{x_n}$ satisfies the condition that $\lim_{n\to\infty}\mu_n(A)=1$ for every $A\in F$. It appears that a similar statement actually characterizes the bounded Josefson--Nissenzweig property for spaces of the form $N_F$.
\begin{restatable*}{corollary}{cornfbjnpchar}\label{cor:nf_bjnp_char}
Let $F$ be a free filter on $\omega$. Then, $N_F$ has the BJNP if and only if there is a sequence $\seqn{\mu_n}$ of finitely supported probability measures on $N_F$ such that:
\begin{enumerate}
\item
$\supp\big(\mu_n\big)\subseteq\omega$ for every $n\in\omega$,
\item $\lim_{n\to\infty}\mu_n(A)=1$ for every $A\in F$.
\end{enumerate}
\end{restatable*}
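To illustrate the criterion, consider the asymptotic density filter $F_d=\big\{A\subseteq\omega\colon\ d(A)=1\big\}$, where $d(A)=\lim_{n\to\infty}\big|A\cap[0,n)\big|/n$ (provided the limit exists); the following computation is a standard sketch. Let $\mu_n$ be the uniform probability measure on the dyadic block $[2^n,2^{n+1})$, i.e. $\mu_n=2^{-n}\sum_{k\in[2^n,2^{n+1})}\delta_k$. If $A\in F_d$, then
\[\mu_n(A)=\frac{\big|A\cap[2^n,2^{n+1})\big|}{2^n}\ge\frac{\big|A\cap[0,2^{n+1})\big|-2^n}{2^n}=2\cdot\frac{\big|A\cap[0,2^{n+1})\big|}{2^{n+1}}-1\xrightarrow{n\to\infty}1,\]
so the sequence $\seqn{\mu_n}$ witnesses conditions (1) and (2) above and hence $N_{F_d}$ has the BJNP (cf. \cite[Example 4.2]{KMSZ}).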
As a consequence of Corollaries \ref{cor:nf_jnp} and \ref{cor:nf_bjnp_char}, we get, e.g., that, for two filters $F$ and $G$, if $N_G$ has the (B)JNP and $G$ is above $F$ with respect to the Kat\v{e}tov preorder, then $N_F$ has the (B)JNP, too (Proposition \ref{prop:rk_bjnp}).
We also investigate the Borel complexity of those filters $F$, thought of as subspaces of the Cantor space $2^\omega$, for which their spaces $N_F$ have the BJNP. The starting point here is the observation that if, for a filter $F$ on $\omega$, the space $N_F$ has the BJNP, then $F$ is contained in some $\mathbb{F}_{\sigma\delta}$ filter $G$ for which the space $N_G$ also has the BJNP, see the discussion preceding Proposition \ref{prop:nf_bjnp_fsd} for more details. As an immediate consequence of this observation we get, using the classical results of Talagrand and Sierpi\'nski, that $F$ must be meager and of measure zero (Corollary \ref{cor:nf_bjnp_meager_meas0}). To conduct this study further, we use submeasures $\varphi$ on $\omega$ and the associated ideals: the finite ideals $\Fin(\varphi)$ and the exhaustive ideals $\Exh(\varphi)$. We obtain the following characterization.
\begin{restatable*}{theorem}{theoremnfbjnpnonpath}\label{theorem:nf_bjnp_nonpath}
Let $F$ be a free filter on $\omega$. Then, the following are equivalent:
\begin{enumerate}
\item $N_F$ has the BJNP;
\item there is a density submeasure $\varphi$ on $\omega$ such that $F\subseteq\Exh(\varphi)^*$;
\item there is a non-pathological lsc submeasure $\varphi$ on $\omega$ such that $F\subseteq\Exh(\varphi)^*$.
\end{enumerate}
\end{restatable*}
Consequently, we get that if $F$ is the dual filter to a density ideal or to a summable ideal, then $N_F$ has the BJNP (Corollaries \ref{cor:density_bjnp} and \ref{cor:summable_bjnp}). It also follows that every non-pathological ideal is contained in some density ideal (Corollary \ref{cor:nonpath_sub_dens}).
Next, we study the question how many different (that is, non-homeomorphic) spaces of the form $N_F$ having the JNP or the BJNP there exist. Utilizing summable ideals related to the functions $f_p(n)=1/(n+1)^p$ for $p\in(0,1]$, we provide the optimal answers both in the case of Borel as well as of non-Borel filters.
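For the reader's convenience let us recall the standard definitions involved here and sketch why such filters fall under the above criterion. Given $f\colon\omega\to[0,\infty)$ with $\sum_{n\in\omega}f(n)=\infty$, \textit{the summable ideal} associated with $f$ is
\[\mathcal{I}_f=\Big\{A\subseteq\omega\colon\ \sum_{n\in A}f(n)<\infty\Big\};\]
for $f_p(n)=1/(n+1)^p$ with $p\in(0,1]$ the series $\sum_nf_p(n)$ diverges, so $\mathcal{I}_{f_p}$ is a proper $\mathbb{F}_\sigma$ P-ideal. If $F=\mathcal{I}_f^*$, choose pairwise disjoint intervals $[a_n,b_n)$ with $a_n\to\infty$ and $\sum_{k\in[a_n,b_n)}f(k)\ge1$, and put
\[\mu_n=\Big(\sum_{k\in[a_n,b_n)}f(k)\Big)^{-1}\sum_{k\in[a_n,b_n)}f(k)\delta_k.\]
For every $A\in F$ we have $\sum_{k\in A^c}f(k)<\infty$, so $\mu_n\big(A^c\big)\le\sum_{k\in A^c,\,k\ge a_n}f(k)\to0$, hence $\mu_n(A)\to1$ and $N_F$ has the BJNP by Corollary \ref{cor:nf_bjnp_char}.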
\begin{restatable*}{theorem}{theoremcontinuummany}\label{theorem:continuum_many}
There are families $\mathcal{F}_1$ and $\mathcal{F}_2$, each consisting of continuum many pairwise non-isomorphic free $\mathbb{F}_{\sigma}$ P-filters on $\omega$, such that:
\begin{enumerate}[(A)]
\item for every $F\in\mathcal{F}_1$ the space $N_F$ has the JNP;
\item for every $F\in\mathcal{F}_2$ the space $N_F$ has the BJNP but does not have the JNP.
\end{enumerate}
\end{restatable*}
\begin{restatable*}{theorem}{theoremtwotocontinuummany}\label{theorem:two_to_continuum_many}
There exist families $\mathcal{F}_3$, $\mathcal{F}_4$ and $\mathcal{F}_5$ consisting of $2^{2^\omega}$ many pairwise non-isomorphic free filters on $\omega$, such that:
\begin{enumerate}[(a)]
\item for every $F\in\mathcal{F}_3$ the space $N_F$ has the JNP;
\item for every $F\in\mathcal{F}_4$ the space $N_F$ has the BJNP but does not have the JNP;
\item for every $F\in\mathcal{F}_5$ the space $N_F$ does not have the BJNP.
\end{enumerate}
\end{restatable*}
Let $\mathcal{F}_{BJNP}$ denote the family of all those filters $F$ for which their spaces $N_F$ have the BJNP. As we have already stated, $\mathcal{F}_{BJNP}$ is closed downwards with respect to the Kat\v{e}tov preorder. It turns out that this family has maximal elements, the standard asymptotic density filter $F_d$ being among them (see Section \ref{sec:bwp} for details).
\begin{restatable*}{theorem}{theoremeuideals}\label{thm: EU ideals}
If $F$ is a free filter on $\omega$ such that the dual ideal $F^*$ is a density ideal without the Bolzano--Weierstrass property, then $F$ is a maximal element of $\mathcal{F}_{BJNP}$ with respect to the Kat\v{e}tov preorder.
\end{restatable*}
Finally, we provide some applications of our results to analysis. The first one is related to the famous long-standing open Separable Quotient Problem (for Banach spaces), asking whether every infinite-dimensional Banach space admits a separable infinite-dimensional quotient. The problem has a positive answer in the case of Banach spaces $C(K)$ of continuous real-valued functions on infinite compact spaces $K$ endowed with the supremum norm. It is hence natural to pose its $C_p$-analogue and ask whether for every infinite space $X$ the space $C_p(X)$ admits a separable infinite-dimensional quotient. This version of the problem has recently gained much attention, see e.g. \cite{KS18}, \cite{BKS0}, \cite{BKS1}, and is also open. One of the most remarkable results here is due to Banakh, K\k{a}kol, and \'Sliwa \cite{BKS1}, who proved that, for an infinite space $X$, the space $C_p(X)$ contains a complemented copy of the space $(c_0)_p=\big\{x\in\mathbb{R}^\omega\colon\ x(n)\to0\big\}$ endowed with the pointwise topology if and only if $X$ has the JNP. The $C_p^*$-variant of this theorem was obtained in \cite{KMSZ} and asserts that, for an infinite space $X$, the space $C_p^*(X)$ contains a complemented copy of $(c_0)_p$ if and only if $X$ has the BJNP. Using the latter characterization and the results presented above, we obtain the following sufficient condition for spaces $C_p^*(X)$ (and hence for spaces $C_p(X)$ for $X$ pseudocompact) to contain a complemented copy of $(c_0)_p$.
\begin{restatable*}{corollary}{corczeropcomplnfx}\label{cor:c0p_compl_nf_x}
Let $F$ be a free filter on $\omega$ such that the space $N_F$ has the BJNP (e.g. $F$ is contained in the filter dual to a density ideal or a summable ideal). If $X$ is a space such that $N_F$ homeomorphically embeds into $X$, then $C_p^*(X)$ contains a complemented copy of the space $(c_0)_p$. \hfill$\Box$
\end{restatable*}
The second main application of our results concerns Grothendieck Banach spaces. Recall that a Banach space $E$ is \textit{Grothendieck} (or has \textit{the Grothendieck property}) if every weak* null sequence in the dual space $E^*$ is also weakly null. For (non)-examples of Grothendieck spaces we refer the reader to \cite[Introduction]{KSZ}, here we only mention that it is an open question for which compact spaces $K$ the space $C(K)$ is Grothendieck (see \cite[Section 3]{Die73}). Note that it is well-known that if a compact space $K$ contains a non-trivial convergent sequence, then the space $C(K)$ is not Grothendieck. The latter fact may be stated in the following filter-like way: if the space $N_{Fr}$, where $Fr$ denotes the Fr\'echet filter on $\omega$ (see Section \ref{sec:prelim}), homeomorphically embeds into a compact space $K$, then $C(K)$ is not a Grothendieck space. We generalize this result as follows.
\begin{restatable*}{corollary}{corellonegrnfx}\label{cor:ell1_gr_nf_x}
Let $F$ be a free filter on $\omega$ such that the space $N_F$ has the BJNP (e.g. $F$ is contained in the filter dual to a density ideal or a summable ideal). If $K$ is a compact space such that $N_F$ homeomorphically embeds into $K$, then $C(K)$ is not a Grothendieck space. \hfill$\Box$
\end{restatable*}
\medskip
The paper is organized as follows. In Section \ref{sec:af_sf_nf} we recall basic topological facts concerning spaces of the form $N_F$ and their \v{C}ech--Stone compactifications $S_F=\beta\big(N_F\big)$. Most of these facts are folklore, but we provide them to keep the paper self-contained and for the reader's convenience. Section \ref{sec:conv_seq} is devoted to the question for which filters $F$ the spaces $N_F$ and $S_F$ contain non-trivial convergent sequences. In Section \ref{sec:nf_char_jnp_bjnp} we provide several reformulations and characterizations of the (bounded) Josefson--Nissenzweig property for spaces of the form $N_F$ and $S_F$. In Section \ref{sec:complexity} we investigate the Borel complexity of those filters $F$ for which their spaces $N_F$ have the BJNP. In Section \ref{sec:tychonoff} we apply the obtained results to general Tychonoff spaces.
\subsection*{Acknowledgements}
The authors would like to thank Piotr Borodulin-Nadzieja, Piotr Koszmider, Arturo Mart\'{\i}nez-Celis, Grzegorz Plebanek, Jacek Tryba, and Lyubomyr Zdomskyy for providing helpful comments and ideas which allowed the authors to obtain the results presented in this paper.
\subsection{Preliminaries\label{sec:prelim}}
By $\omega$ we denote the first infinite (countable) cardinal number. By $\mathfrak{c}$ we denote the continuum, i.e. the cardinality of the real line $\mathbb{R}$. For every $k,n\in\omega$ such that $k<n$ we set $[k,n]=\{k,k+1,\ldots,n\}$ and $[k,n)=[k,n]\setminus\{n\}$.
If $X$ is a set, then $|X|$ denotes its cardinality. The symbols $\wp(X)$, $\finsub{X}$ and $\ctblsub{X}$ denote the families of all subsets of $X$, all finite subsets of $X$ and all countably infinite subsets of $X$, respectively. As usual, $\id_X$ denotes the identity function on $X$. The complement $X\setminus Y$ of a subset $Y$ of $X$ is denoted by $Y^c$. For a family $S\subseteq\wp(X)$ we put $S^*=\{Y\colon X\setminus Y\in S\}$---$S^*$ is called \textit{the dual family of $S$}. If $A$ is a subset of $X$, then we also put $S\restriction A=\{B\cap A\colon B\in S\}$. For two sets $A$ and $B$ the relation $A\subseteq^*B$ means that the difference $A\setminus B$ is finite.
If $A$ is a non-empty set, then a family $F\subseteq\wp(A)$ is \textit{a filter on $A$} if $\emptyset\not\in F$, $A\in F$, $\big|\bigcap F\big|\le 1$, and $F$ is closed under finite intersections and taking supersets. A family $I\subseteq\wp(A)$ is \textit{an ideal on $A$} if $I^*$ is a filter on $A$. If $A$ is infinite, then by $Fr(A)$ we denote \textit{the Fr\'echet filter} on $A$, i.e. $Fr(A)=\big\{B\in\wp(A)\colon A\setminus B\text{ is finite}\big\}$. If $A=\omega$, then we simply write $Fr=Fr(\omega)$. If $A\subseteq B$ are both infinite sets, then $Fr(A,B)$ denotes the filter on $B$ generated by $Fr(A)$, that is, $Fr(A,B)=\big\{X\subseteq B\colon A\setminus X\text{ is finite}\big\}$; in particular, $A\in Fr(A,B)$ and $X\cap A\in Fr(A)$ for every $X\in Fr(A,B)$. We have $Fr=Fr(\omega,\omega)$. The dual ideal to $Fr$ will be denoted by $Fin$; note that simply $Fin=\finsub{\omega}$.
If $F$ is a filter on a set $A$, then $F$ is \textit{free} if $\bigcap F=\emptyset$, and $F$ is \textit{principal} if $\bigcap F$ is a singleton and $\bigcap F\in F$. Note that, for a filter $F$ on $\omega$, $F$ is free if and only if $Fr\subseteq F$ if and only if $Fin\subseteq F^*$. $F$ is \textit{an ultrafilter on $A$} if $F$ is a maximal filter (with respect to inclusion) or, equivalently, if for every $B\in\wp(A)$ either $B\in F$ or $B^c\in F$.
A filter $F$ on $\omega$ is \textit{a P-filter} if for every sequence $\seqn{A_n\in F}$ there is $A\in F$ such that $A\subseteq^* A_n$ for every $n\in\omega$. An ideal $I$ on $\omega$ is \textit{a P-ideal} if its dual filter $I^*$ is a P-filter, that is, if for every sequence $\seqn{A_n\in I}$ there is $A\in I$ such that $A_n\subseteq^*A$ for every $n\in\omega$.
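For example, the ideal $\mathcal{Z}=\big\{A\subseteq\omega\colon\ \lim_{m\to\infty}\big|A\cap[0,m)\big|/m=0\big\}$ of sets of asymptotic density zero is a P-ideal (a standard diagonalization, which we only sketch): given $A_n\in\mathcal{Z}$, choose $m_0<m_1<\cdots$ such that $\big|\big(A_0\cup\ldots\cup A_n\big)\cap[0,m)\big|/m\le2^{-n}$ for every $m\ge m_n$, and put $A=\bigcup_{n\in\omega}\big(A_0\cup\ldots\cup A_n\big)\cap[m_n,m_{n+1})$. Then $A_n\setminus A\subseteq[0,m_n)$ for every $n\in\omega$, and $\big|A\cap[0,m)\big|/m\le2^{-n}$ for every $m\in[m_n,m_{n+1})$, so $A\in\mathcal{Z}$ and $A_n\subseteq^*A$ for every $n\in\omega$.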
\medskip
Throughout the paper we assume that every topological space considered by us is \textbf{Tychonoff}, that is, completely regular and Hausdorff. In particular, all compact spaces are normal. For a subset $Y$ of a space $X$ its closure in $X$ is denoted by $\overline{Y}^X$. Given two spaces $X$ and $Y$, $X\cong Y$ denotes that they are homeomorphic. If $X$ is a space, then $\beta X$ denotes its \v{C}ech--Stone compactification. As usual, we write shortly $\os=\beta\omega\setminus\omega$. A subset $Y$ of a space $X$ is \textit{a P-set} if the intersection of countably many open sets containing $Y$ contains $Y$ in its interior. A point $x\in X$ is \textit{a P-point} in $X$ if the singleton $\{x\}$ is a P-set in $X$.
A sequence $\seqn{x_n}$ of points in a space $X$ is \textit{non-trivial} if $x_n\neq x_m$ for every $n\neq m\in\omega$.
If $X$ is a topological space, then by $C(X)$ and $C^*(X)$ we denote the space of all continuous real-valued functions on $X$ and the space of all bounded continuous real-valued functions on $X$, respectively. A subspace $Y$ of a space $X$ is \textit{$C$-embedded} (resp. \textit{$C^*$-embedded}) in $X$ if for every function $f\in C(Y)$ (resp. $f\in C^*(Y)$) there is a function $f'\in C(X)$ (resp. $f'\in C^*(X)$) such that $f=f'\restriction Y$. $C_p(X)$ and $C_p^*(X)$ denote the spaces $C(X)$ and $C^*(X)$ endowed with the pointwise topology.
We denote the Cantor space with its standard topology by $2^\omega$. When we speak about measurability properties of subsets of $2^\omega$, then we always mean \textit{the standard product measure} on $2^\omega$. Each subset $A\in\wp(\omega)$ can be associated with its characteristic function $\chi_A\colon\omega\to\{0,1\}$ and hence $A$ may be treated as an element of $2^\omega$. Thus, every filter on $\omega$ may be considered as a subset of $2^\omega$ and so we can talk about its topological and measure-theoretic features like meagerness, measurability, Borel complexity, etc. Similar comments of course also hold for any arbitrary set $X$, the product space $2^X$, and the power set $\wp(X)$.
\medskip
Let $\mathcal{A}$ be a Boolean algebra (with operations $\wedge$, $\vee$, and $^c$, and zero $0_\mathcal{A}$ and unit $1_\mathcal{A}$). A family $\mathcal{U}\subseteq\mathcal{A}$ is \textit{an ultrafilter} on $\mathcal{A}$ if it is a maximal family (with respect to inclusion) such that $0_\mathcal{A}\not\in\mathcal{U}$, for every $A,B\in\mathcal{U}$ we have $A\wedge B\in\mathcal{U}$, and for every $A,B\in\mathcal{A}$ if $A\in\mathcal{U}$ and $A\le B$, then $B\in\mathcal{U}$. $St(\mathcal{A})$ denotes the Stone space of $\mathcal{A}$, i.e. the space of all ultrafilters on $\mathcal{A}$ endowed with the standard topology. If $A\in\mathcal{A}$, then $\clopen{A}_\mathcal{A}$ denotes the clopen subset of $St(\mathcal{A})$ corresponding via the Stone duality to $A$. Note that $\wp(\omega)$ is a Boolean algebra when endowed with the standard set-theoretic operations and $0_{\wp(\omega)}=\emptyset$ and $1_{\wp(\omega)}=\omega$. Recall that $St(\wp(\omega))\cong\beta\omega$ and $St(\wp(\omega)/Fin)\cong\os$. For every $A\in\wp(\omega)$ we will write $\clopen{A}_\omega$ instead of $\clopen{A}_{\wp(\omega)}$ and $\clopen{A}_\omega^*$ instead of $\clopen{A}_{\wp(\omega)/Fin}$ for the corresponding clopen subsets of $\beta\omega$ and $\os$, respectively. Of course, $\clopen{A}_\omega^*=\clopen{A}_\omega\setminus\omega$. Note also that a free P-filter on $\omega$ which is an ultrafilter on $\wp(\omega)$ is a P-point in $\os$.
\medskip
Let $X$ be a (Tychonoff) space. When we say that $\mu$ is \textit{a measure on $X$}, then we mean that $\mu$ is a $\sigma$-additive regular signed measure defined on the Borel $\sigma$-algebra $Bor(X)$ of $X$ which has bounded total variation, that is, $\|\mu\|=\sup\big\{|\mu(A)|+|\mu(B)|\colon\ A,B\in Bor(X), A\cap B=\emptyset\big\}<\infty$. By $|\mu|(\cdot)$ we denote \textit{the variation} of $\mu$; note that $\|\mu\|=|\mu|(X)$. $\supp(\mu)$ denotes \textit{the support} of $\mu$. If $A\in\wp(X)$, then $\mu\restriction A$ is defined by the formula $(\mu\restriction A)(B)=\mu(A\cap B)$ for every $B\in Bor(X)$. $\mu$ is \textit{a probability measure} if $\mu(A)\ge0$ for every $A\in Bor(X)$ and $\mu(X)=1$.
If $X$ is a space and $x\in X$, then $\delta_x$ denotes the one-point measure on $X$ concentrated at $x$. We say that a measure $\mu$ on $X$ is \textit{finitely supported} if there is a sequence $x_1,\ldots,x_n$ of mutually distinct points of $X$ and a sequence $\alpha_1,\ldots,\alpha_n\in\mathbb{R}\setminus\{0\}$ such that $\mu=\sum_{i=1}^n\alpha_i\cdot\delta_{x_i}$. It follows that $\supp(\mu)=\big\{x_1,\ldots,x_n\big\}$ and that $\|\mu\|=\sum_{i=1}^n\big|\alpha_i\big|$. A sequence $\seqn{\mu_n}$ of measures on $X$ is \textit{disjointly supported} if $\supp\big(\mu_k\big)\cap\supp\big(\mu_n\big)=\emptyset$ for every $k\neq n\in\omega$.
If $\mu$ is a measure on a space $X$ and $f\in C(X)$, then we set $\mu(f)=\int_X fd\mu$.
\section{Spaces related to filters on $\omega$\label{sec:af_sf_nf}}
We will now present a general class of totally disconnected topological spaces related to filters on $\omega$, which will constitute the main interest of this paper.
Let $F$ be a \underline{free} filter on $\omega$. By $\mathcal{A}_F$ we denote the following Boolean algebra:
\[\mathcal{A}_F=\big\{A\in\wp(\omega)\colon\ A\in F\text{ or }A^c\in F\big\},\]
endowed with the standard set-theoretic operations. Of course, $\mathcal{A}_F$ is a Boolean subalgebra of $\wp(\omega)$ and $F$ is an ultrafilter in $\mathcal{A}_F$. Put $S_F=St\big(\mathcal{A}_F\big)$, i.e. $S_F$ denotes the Stone space of $\mathcal{A}_F$ (the totally disconnected compact space of all ultrafilters on $\mathcal{A}_F$). Trivially, $\mathcal{A}_F=\wp(\omega)$ if and only if $F$ is an ultrafilter. For every $A\in\mathcal{A}_F$ by $\clopen{A}_F$ we will denote the clopen set in $S_F$ corresponding via the Stone duality to the set $A$. Let also $\pi_F\colon\beta\omega\to S_F$ denote the canonical continuous mapping defined for every ultrafilter $x\in\beta\omega$ by the formula $\pi_F(x)=x\cap\mathcal{A}_F$.
Note that $S_F$ contains a countable discrete dense subspace consisting of isolated points which we can associate with $\omega$, therefore we can put $S_F^*=S_F\setminus\omega$. One can show that $S_F^*$ is homeomorphic to the Stone space $St\big(\mathcal{A}_F/Fin\big)$ of the quotient Boolean algebra $\mathcal{A}_F$ modulo the ideal $Fin$, or, equivalently, that the Boolean algebra of clopen subsets of $S_F^*$ is isomorphic to $\mathcal{A}_F/Fin$. For every $A\in\wp(\omega)$ we will also simply write $A$ for the corresponding subset of $\omega\subseteq S_F$. If $A\in\mathcal{A}_F$, then $\overline{A}^{S_F}=\clopen{A}_F$, and conversely, if $A\in\wp(\omega)$ is such that $\overline{A}^{S_F}$ is clopen, then $A\in\mathcal{A}_F$ (and hence again $\overline{A}^{S_F}=\clopen{A}_F$). For $A\in\mathcal{A}_F$ we also write $\clopen{A}_F^*=\clopen{A}_F\setminus\omega$.
There is also a special unique point $p_F\in S_F$ such that for every $A\in\mathcal{A}_F$, $p_F\in\clopen{A}_F$ if and only if $A\in F$. Formally, of course, $F=p_F$, but to focus the attention we will use the symbol $F$ when talking about the filter on $\omega$, and $p_F$ when talking about the point in $S_F$. Note that every point $x\in S_F^*\setminus\big\{p_F\big\}$ has a clopen neighborhood $U$ in $S_F$ (in $S_F^*$) not containing $p_F$ and homeomorphic to $\beta\omega$ (to $\omega^*$).
\begin{lemma}\label{lemma:sf_frechet_bo}
Let $F$ be a free filter on $\omega$. Then,
\begin{enumerate}
\item If $F=Fr$, then $S_F^*=\big\{p_F\big\}$.
\item If $F\neq Fr$, then $S_F^*\setminus\big\{p_F\big\}\neq\emptyset$ and every point of $S_F^*\setminus\big\{p_F\big\}$ has a clopen neighborhood in $S_F$ homeomorphic to $\beta\omega$ and a clopen neighborhood in $S_F^*$ homeomorphic to $\omega^*$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) If $F=Fr$, then $S_F$ is just the one-point compactification of $\omega$, so $S_F^*=\big\{p_F\big\}$.
(2) Since $F\neq Fr$, there is $A\in F$ such that $A^c\in\ctblsub{\omega}$. Obviously, every $B\in\wp(A^c)$ is in $\mathcal{A}_F$ but not in $F$. Since $A^c$ is infinite, it follows that $\clopen{A^c}_F$ is homeomorphic to $\beta\omega$ and hence $\omega^*\cong\clopen{A^c}_F\setminus\omega\subseteq S_F^*\setminus\big\{p_F\big\}$, so $S_F^*\setminus\big\{p_F\big\}\neq\emptyset$. Let $x\in S_F^*$ be a point distinct from $p_F$. Then, there is $A\in x$ such that $A^c\in F$. By similar arguments we conclude that $\clopen{A}_F$ is homeomorphic to $\beta\omega$ and $\clopen{A}_F^*$ is homeomorphic to $\omega^*$.
\end{proof}
A general relation between $\beta\omega$ and spaces $S_F$ is described by the following proposition. Intuitively, $S_F$ is made from $\beta\omega$ by gluing together all the ultrafilters in $\beta\omega$ extending $F$.
\begin{proposition}\label{prop:sf_comes_from_bo}
Let $F$ be a free filter on $\omega$. Let $\mathcal{F}$ denote the subset of $\beta\omega$ consisting of all ultrafilters $x\in\beta\omega$ such that $F\subseteq x$. Then, $\mathcal{F}$ is closed in $\beta\omega$ and the mapping $\varphi\colon\beta\omega/\mathcal{F}\to S_F$ given for every $x\in\beta\omega$ by the formula $\varphi\big([x]_{\mathcal{F}}\big)=x\cap\mathcal{A}_F$ ($=\pi_F(x)$), where $[x]_{\mathcal{F}}$ denotes the equivalence class of $x$ in $\beta\omega/\mathcal{F}$, is a homeomorphism. Moreover, if $\pi\colon\beta\omega\to\beta\omega/\mathcal{F}$ is the canonical quotient map, then $\varphi\circ\pi=\pi_F$ and $\pi^{-1}\big(\varphi^{-1}\big(p_F\big)\big)=\mathcal{F}$.
\end{proposition}
\begin{proof}
The fact that $\mathcal{F}$ is a closed subset of $\beta\omega$ is obvious. It is also easy to see that $\varphi$ is a well-defined function $\beta\omega/\mathcal{F}\to S_F$. Indeed, note that if $x\in\mathcal{F}$, then $\varphi\big([x]_{\mathcal{F}}\big)=x\cap\mathcal{A}_F$ is an ultrafilter in $\mathcal{A}_F$ containing the filter $F$, which is also an ultrafilter in $\mathcal{A}_F$, so $\varphi\big([x]_{\mathcal{F}}\big)=F=p_F$. If on the other hand $x\not\in\mathcal{F}$, then $[x]_{\mathcal{F}}$ is a singleton. By the definition of the mapping $\pi_F$ we get that $\varphi\circ\pi=\pi_F$.
Similarly, if $\varphi\big([x]_{\mathcal{F}}\big)=p_F$ for some ultrafilter $x\in\beta\omega$, then $x\cap\mathcal{A}_F=F$, which means that $x$ extends $F$ and thus $x\in\mathcal{F}$. It follows that $\pi^{-1}\big(\varphi^{-1}\big(p_F\big)\big)=\mathcal{F}$.
We check that $\varphi$ is injective. Let $x\neq y\in\beta\omega$. Assume first that $x,y\not\in\mathcal{F}$; then there are $A_x,A_y\in F$ with $A_x\not\in x$ and $A_y\not\in y$, so $A=A_x\cap A_y\in F$ satisfies $A\not\in x$ and $A\not\in y$, or, equivalently, $A^c\in x$ and $A^c\in y$. Since $\wp(A^c)\subseteq\mathcal{A}_F$, if $x\cap\mathcal{A}_F=y\cap\mathcal{A}_F$, then $x\restriction A^c=y\restriction A^c$, which implies that $x=y$, a contradiction proving that $\varphi\big([x]_{\mathcal{F}}\big)\neq\varphi\big([y]_{\mathcal{F}}\big)$. Assume then that $x\not\in\mathcal{F}$ and $y\in\mathcal{F}$. If $x\cap\mathcal{A}_F=p_F=y\cap\mathcal{A}_F$, then $F\subseteq x$, which is impossible, so again $\varphi\big([x]_{\mathcal{F}}\big)\neq\varphi\big([y]_{\mathcal{F}}\big)$. The case when $x\in\mathcal{F}$ and $y\not\in\mathcal{F}$ is similar.
Note that for every $n\in\omega$ we have $\varphi\big([n]_{\mathcal{F}}\big)=n$, so the image of $\varphi$ is dense in $S_F$. Since $\beta\omega/\mathcal{F}$ is compact, to finish the proof it is sufficient to show that $\varphi$ is continuous.
So let $A\in\mathcal{A}_F$. Then,
\[\pi^{-1}\Big[\varphi^{-1}\big[\clopen{A}_F\big]\Big]=(\varphi\circ\pi)^{-1}\big[\clopen{A}_F\big]=\pi_F^{-1}\big[\clopen{A}_F\big],\]
and since $\pi_F$ is continuous, $\pi^{-1}\Big[\varphi^{-1}\big[\clopen{A}_F\big]\Big]$ is open, and thus $\varphi^{-1}\big[\clopen{A}_F\big]$ is open, too. This way we prove that $\varphi$ is indeed a continuous mapping.
\end{proof}
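A concrete instance of Proposition \ref{prop:sf_comes_from_bo} may be instructive (a routine verification). Let $E\subseteq\omega$ be the set of even numbers and $F=Fr(E,\omega)=\big\{X\subseteq\omega\colon\ E\setminus X\text{ is finite}\big\}$. An ultrafilter $x\in\beta\omega$ extends $F$ if and only if $x$ is free and $E\in x$, so here $\mathcal{F}=\clopen{E}_\omega^*$ and $S_F\cong\beta\omega/\clopen{E}_\omega^*$: the copy of $\beta\omega$ sitting over $E^c$ is left untouched, while the whole remainder over $E$ is collapsed to the single point $p_F$. In particular, the even numbers form a non-trivial sequence converging to $p_F$ in $N_F$.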
The following lemma and corollary will be useful in the sequel.
\begin{lemma}\label{lem:nwd_fr}
Let $F$ be a free filter on $\omega$. Let $\mathcal{F}$ denote the closed subset of $\os$ as in Proposition \ref{prop:sf_comes_from_bo}.
Then, for every $A\in\ctblsub{\omega}$, the following are equivalent:
\begin{enumerate}
\item $A\setminus B$ is finite for every $B\in F$,
\item $F\restriction A=Fr(A)$,
\item $\clopen{A}_\omega^*\subseteq\mathcal{F}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence (1)$\Leftrightarrow$(2) is obvious.
Fix $A\in\ctblsub{\omega}$ and let us assume first that $F\restriction A=Fr(A)$. If there is $x\in\clopen{A}_\omega^*\setminus\mathcal{F}$, then there is an infinite set $B\in x$ such that $B\subseteq A$ and $\mathcal{F}\subseteq\clopen{\omega\setminus B}_\omega^*$. It follows that $B^c\in F$, although $A\setminus B^c=B$ is infinite, which is a contradiction. Thus, (2)$\Rightarrow$(3) holds.
Assume now that $\clopen{A}_\omega^*\subseteq\mathcal{F}$. If there is $B\in F$ such that $A\setminus B$ is infinite, then $\emptyset\neq\clopen{A\setminus B}_\omega^*\subseteq\clopen{A}_\omega^*\subseteq\mathcal{F}$, so any $x\in\clopen{A\setminus B}_\omega^*$ satisfies $B\in x$ (as $x$ extends $F$) and $A\setminus B\in x$, whence $\emptyset=(A\setminus B)\cap B\in x$, a contradiction. Hence, (3)$\Rightarrow$(2) holds, too.
\end{proof}
\begin{corollary}\label{cor:nwd_fr}
Let $F$ be a free filter on $\omega$. Let $\mathcal{F}$ denote the closed subset of $\os$ as in Proposition \ref{prop:sf_comes_from_bo}. Then, the following are equivalent:
\begin{enumerate}
\item for every $A\in\ctblsub{\omega}$ there is $B\in F$ such that $A\setminus B$ is infinite;
\item $F\restriction A\neq Fr(A)$ for every $A\in\ctblsub{\omega}$;
\item $\mathcal{F}$ is nowhere dense in $\omega^*$.\hfill$\Box$
\end{enumerate}
\end{corollary}
The converse to Proposition \ref{prop:sf_comes_from_bo} also holds.
\begin{proposition}\label{prop:sf_comes_from_bo_converse}
For every non-empty closed subset $\mathcal{F}$ of $\omega^*$ and the free filter $F=\bigcap\mathcal{F}$, the spaces ${\beta\omega}/\mathcal{F}$ and $S_F$ are homeomorphic.
\end{proposition}
\begin{proof}
If $\mathcal{F}$ is a non-empty closed subset of $\omega^*$, then $F=\bigcap\mathcal{F}$ is a free filter on $\omega$ such that for every ultrafilter $x\in\omega^*$ the following equivalence holds: $x\in\mathcal{F}$ if and only if $F\subseteq x$. Thus, $\mathcal{F}$ is the closed subset of $\omega^*$ consisting of all ultrafilters extending $F$, and hence, by Proposition \ref{prop:sf_comes_from_bo}, ${\beta\omega}/\mathcal{F}$ and $S_F$ are homeomorphic.
\end{proof}
\begin{corollary}\label{cor:correspondence}
There is a natural (many-to-one) correspondence between non-empty closed subsets of $\omega^*$ and spaces of the form $S_F$.
\end{corollary}
Note that the correspondence given by Propositions \ref{prop:sf_comes_from_bo} and \ref{prop:sf_comes_from_bo_converse} is not one-to-one, as for every two ultrafilters $x\neq y\in\omega^*$ we still have $S_x=S_y$.
Let us note that the equivalence ``$S_F={\beta\omega}$ if and only if $F$ is maximal'' holds also up to homeomorphism.
\begin{proposition}\label{prop:sf_bo_ultrafilter}
Let $F$ be a free filter on $\omega$. Then, $S_F\cong{\beta\omega}$ if and only if $F$ is an ultrafilter.
\end{proposition}
\begin{proof}
If $F$ is an ultrafilter, then $\mathcal{A}_F={\wp(\omega)}$, so $S_F=St\big({\wp(\omega)}\big)={\beta\omega}$, thus trivially $S_F\cong{\beta\omega}$.
Assume then that $F$ is not an ultrafilter, so there is $A\in{\wp(\omega)}$ such that $A,A^c\not\in F$. For the sake of contradiction assume that $S_F\cong{\beta\omega}$. A point $x\in S_F$ is isolated if and only if $x\in\omega$, so $\overline{A}^{S_F}\cap\overline{A^c}^{S_F}=\emptyset$ and $\overline{A}^{S_F}\cup\overline{A^c}^{S_F}=\overline{A\cup A^c}^{S_F}=\overline{\omega}^{S_F}=S_F$, which yields that both sets $\overline{A}^{S_F}$ and $\overline{A^c}^{S_F}$ are clopen and hence $A,A^c\in\mathcal{A}_F$.
It follows that either $A\in F$, or $A^c\in F$, a contradiction.
\end{proof}
\begin{remark}
The implication from left to right in Proposition \ref{prop:sf_bo_ultrafilter} does not hold anymore if one exchanges ${\beta\omega}$ and $S_F$ with $\os$ and $S_F^*$, respectively, that is, there is a filter $F$ on $\omega$ such that $S_F^*$ is homeomorphic to $\os$, but $F$ is not maximal. Indeed, let $A\in\ctblsub{\omega}$ be a co-infinite set and $x\in\os$ an ultrafilter such that $A\not\in x$. Put $\mathcal{F}=\big(\overline{A}^{{\beta\omega}}\setminus A\big)\cup\{x\}$ and $F=\bigcap\mathcal{F}$. Obviously, $F$ is not an ultrafilter, but it follows that the quotient algebra $\mathcal{A}_F/Fin$ is isomorphic to ${\wp(\omega)}/Fin$, so $S_F^*$ is homeomorphic to $\os$.
\end{remark}
\begin{remark}
If $F$ is a P-filter on $\omega$ such that $F\restriction A\neq Fr(A)$ for every $A\in\ctblsub{\omega}$ (e.g. $F$ is the density filter $F_d$, see Remark \ref{rem:density_bjnp_no_jnp}), then whether the space $S_F^*$ is homeomorphic to $\os$ depends on the assumed system of axioms:
(1) If the Continuum Hypothesis holds, then $S_F^*\cong\os$. Indeed, let $\mathcal{F}$ be the set of all free ultrafilters on $\omega$ extending $F$. By Corollary \ref{cor:nwd_fr}, $\mathcal{F}$ is a closed nowhere dense subset of $\os$. Of course, $\mathcal{F}$ is a P-set, too, so by \cite[Lemma 1.4.1]{vMillHBK}, $\os/\mathcal{F}$ is an F-space. By \cite[Lemma 1.4.2]{vMillHBK}, $\os/\mathcal{F}$ has the property that each non-empty $G_\delta$-subset has infinite interior (since $\os$ has this property, see \cite[Lemmas 1.1.2 and 1.2.3]{vMillHBK}), so by Parovi\v{c}enko's theorem (\cite[Theorem 1.2.4]{vMillHBK}) $\os/\mathcal{F}$ is homeomorphic to $\os$. Now, Proposition \ref{prop:sf_comes_from_bo} yields that $S_F^*\cong\os$.
(2) Let $M$ be a model of set theory in which there are no P-points in $\os$, e.g. the Silver model (cf. \cite{CG19}). Since $F$ is a free P-filter on $\omega$, $p_F$ is a P-point in the space $S_F^*$. Since $\os$ contains no P-points, it cannot be homeomorphic to $S_F^*$.
\end{remark}
\medskip
For every free filter $F$ on $\omega$ we distinguish a special countable subspace of $S_F$:
\[N_F=\omega\cup\big\{p_F\big\}.\]
Note that the topology of $N_F$ inherited from $S_F$ can be described as follows: every point of $\omega$ is isolated in $N_F$ (as it is isolated in $S_F$) and a local open base at $p_F$ consists of all (clopen) sets of the form $A\cup\big\{p_F\big\}$, where $A\in F$. Note that every open neighborhood of $p_F$ in $N_F$ is a clopen subset of $N_F$. It follows from Lemma \ref{lemma:sf_frechet_bo} that $N_F=S_F$ if and only if $F=Fr$. For more information on spaces of the form $N_F$ see \cite[Section 4M]{GJ60} and \cite{DMM}, where properties of associated spaces of functions were studied.
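For instance, if $F=Fr$, then a local base at $p_{Fr}$ consists of the sets $A\cup\big\{p_{Fr}\big\}$ with $A$ cofinite, so $N_{Fr}$ is homeomorphic to the one-point compactification of $\omega$, that is,
\[N_{Fr}\cong\omega+1,\]
a non-trivial convergent sequence together with its limit.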
The main topological relations between the spaces $N_F$ and $S_F$ are described by the following results.
\begin{lemma}\label{lemma:nf_cstar_embedded_sf}
For every free filter $F$ the space $N_F$ is $C^*$-embedded in $S_F$.
\end{lemma}
\begin{proof}
By \cite[Theorem 6.4]{GJ60}, a dense subspace $X$ of a space $Y$ is $C^*$-embedded in $Y$ if and only if every two disjoint zero-sets in $X$ have disjoint closures in $Y$. Note that a subset of $N_F$ is a zero-set if and only if it is closed (cf. \cite[Problem 4M.1]{GJ60}). Let thus $A$ and $B$ be two disjoint closed subsets of $N_F$. It follows that either $p_F\not\in A$, or $p_F\not\in B$ (or both). Without loss of generality we may assume that $p_F\not\in A$, i.e. $p_F\in N_F\setminus A=A^c\cup\big\{p_F\big\}$. Since $A^c\cup\big\{p_F\big\}$ is an open neighborhood of $p_F$, $A^c\in F$ and thus $A,A^c\in\mathcal{A}_F$. We also get that $B\subseteq A^c\cup\big\{p_F\big\}$, so we have:
\[\overline{A}^{S_F}\cap\overline{B}^{S_F}\subseteq\overline{A}^{S_F}\cap\overline{A^c}^{S_F}=\clopen{A}_F\cap\clopen{A^c}_F=\clopen{A\cap A^c}_F=\clopen{\emptyset}_F=\emptyset.\]
It follows that $N_F$ is $C^*$-embedded in $S_F$.
\end{proof}
\begin{corollary}\label{cor:sf_betanf}
$S_F=\beta\big(N_F\big)$, i.e. $S_F$ is the \v{C}ech--Stone compactification of $N_F$.
\end{corollary}
\begin{proof}
By \cite[Section 6.9, page 89]{GJ60}, a subspace $Y$ of a space $X$ is $C^*$-embedded in $X$ if and only if $\overline{Y}^{\beta X}=\beta Y$. Hence, by Lemma \ref{lemma:nf_cstar_embedded_sf}, $S_F=\overline{N_F}^{S_F}=\overline{N_F}^{\beta(S_F)}=\beta\big(N_F\big)$.
\end{proof}
\begin{lemma}
Let $F$ be a free filter on $\omega$. For every $A\in F$, $\overline{A}^{N_F}\cong N_F$ if and only if $\omega\setminus A$ is finite or $Fr(A,\omega)\subsetneq F$.
\end{lemma}
\begin{proof}
Let us first assume that $\overline{A}^{N_F}\cong N_F$. Suppose also that $Fr(A,\omega)=F$. It follows that the set $\overline{A}^{N_F}=A\cup\big\{p_F\big\}$ is compact and hence $N_F$ is compact, too. Now, if $\omega\setminus A$ was infinite, then the compact space $N_F$ would contain an infinite closed discrete subspace, which is impossible. So, $\omega\setminus A$ is finite.
We now prove the reverse implication. Assume first that $Fr(A,\omega)\subsetneq F$. It follows that there is $B\in F\cap\ctblsub{A}$ such that $A\setminus B$ is infinite. Let $h\colon\omega\setminus B\to A\setminus B$ be any bijection and define the function $\varphi\colon N_F\to\overline{A}^{N_F}$ as follows:
\[\varphi(x)=
\begin{cases}
h(x),&\text{ if }x\in\omega\setminus B,\\
x,&\text{ if }x\in B\cup\big\{p_F\big\}.
\end{cases}\]
Since $\omega\setminus B$ and $B\cup\big\{p_F\big\}$ are clopen subsets of $N_F$ and the space $\omega\setminus B$ is discrete, the function $\varphi$ is a homeomorphism between $N_F$ and $\overline{A}^{N_F}$.
Assume now that $Fr(A,\omega)=F$ and $\omega\setminus A$ is finite. It follows immediately that both spaces $N_F$ and $\overline{A}^{N_F}$ are homeomorphic to $N_{Fr}$, so the proof is finished.
\end{proof}
Let us recall here the notions of the \textit{Kat\v{e}tov preorder} and the \textit{Rudin--Keisler preorder} of filters on $\omega$, useful and well understood tools for studying the complexity of filters. If $f\colon\omega\to\omega$ is a function and $\mathcal{A}\subseteq{\wp(\omega)}$, then we set
\[f(\mathcal{A})=\big\{A\in{\wp(\omega)}\colon\ f^{-1}[A]\in\mathcal{A}\big\}.\]
Let $F$ and $G$ be free filters on $\omega$. We say that $F$ is \textit{Kat\v{e}tov below} $G$, denoting $F\le_K G$, if there is a function $f\colon\omega\to\omega$ such that $F\subseteq f(G)$. Note that if $G\neq Fr$ (equivalently, there exists co-infinite $A\in G$), then we may assume that $f$ is a surjection. If $F\le_K G$ and $G\le_K F$, then we say that $F$ and $G$ are \textit{Kat\v{e}tov equivalent} (in short, \textit{K-equivalent}), denoting $F\equiv_K G$. Note that if $F\subseteq G$, then $F\le_K G$ and that fact is witnessed by the identity function on $\omega$. The Fr\'echet filter $Fr$ is Kat\v{e}tov below any free filter $G$.
Similarly, we say that $F$ is \textit{Rudin--Keisler below} $G$, denoting $F\le_{RK} G$, if there is a function $f\colon\omega\to\omega$ such that $f(G)=F$. Obviously, if $F\le_{RK} G$, then $F\le_K G$. If $F\le_{RK} G$ and $G\le_{RK} F$, then we say that $F$ and $G$ are \textit{Rudin--Keisler equivalent} (in short, \textit{RK-equivalent}), denoting $F\equiv_{RK} G$.
We say that filters $F$ and $G$ are \textit{isomorphic} if there is a bijection $f\colon\omega\to\omega$ such that $f(G)=F$. Obviously, if $F$ and $G$ are isomorphic, then they are RK-equivalent. If $F$ and $G$ are ultrafilters, then we have an equivalence: $F\equiv_{RK} G$ if and only if there is such a bijection. If $F$ and $G$ are filters which are not necessarily ultrafilters, then such a bijection may not exist.
We also apply the above nomenclature in the natural way to ideals (via their dual filters).
For details concerning the preorders, we refer the reader e.g. to \cite{Bla73}, \cite{Hru11}, or \cite{Uzc19}.
Fix free filters $F$ and $G$ on $\omega$. If $f\colon\omega\to\omega$ is a function such that $F\subseteq f(G)$ (so $F\le_K G$), then we define the function $\varphi_f\colon N_G\to N_F$ as follows: $\varphi_f\big(p_G\big)=p_F$ and $\varphi_f(n)=f(n)$ for every $n\in\omega$. It follows immediately that $\varphi_f$ is continuous and $\varphi_f^{-1}\big(p_F\big)=\big\{p_G\big\}$. Conversely, if $\varphi\colon N_G\to N_F$ is a continuous function such that $\varphi^{-1}\big(p_F\big)=\big\{p_G\big\}$, then the function $f_\varphi\colon\omega\to\omega$ defined as $f_\varphi=\varphi\restriction\omega$ is such that $F\subseteq f_\varphi(G)$ (so $F\le_K G$).
\begin{proposition}\label{prop:ordering_nf}
Let $F$ and $G$ be free filters on $\omega$ and $f\colon\omega\to\omega$ a function.
\begin{enumerate}
\item If $F\subseteq f(G)$ (hence $F\le_K G$) and $f$ is a surjection, then $\varphi_f$ maps continuously $N_G$ onto $N_F$. In particular, if $F\le_K G$ and $G\neq Fr$, then $N_G$ can be continuously mapped onto $N_F$.
\item $F=f(G)$ and $f$ is a bijection (hence $F\equiv_{RK}G$) if and only if $\varphi_f\colon N_G\to N_F$ is a homeomorphism.
\item $F\subseteq G$ if and only if $\varphi_{\id_\omega}$ maps continuously $N_G$ onto $N_F$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) and (3) are clear.
(2) Assume that $f$ is a bijection and $F=f(G)$. It follows that $\varphi_f$ is a continuous bijection. Since $F=f(G)$ and $f$ is a bijection, we get that $f^{-1}(F)=G$ and $\varphi_f^{-1}=\varphi_{f^{-1}}$, so $\varphi_f^{-1}$ is also continuous, and hence $\varphi_f$ is a homeomorphism. The other implication is proved similarly.
\end{proof}
Regarding Proposition \ref{prop:ordering_nf}.(2), note that it is not true that the equivalence $F\equiv_{RK}G$ alone implies that $N_F$ and $N_G$ are homeomorphic. To see this, fix a co-infinite set $A\in\ctblsub{\omega}$ and let $G=Fr$ and $F=Fr(A,\omega)$. Then, $F\equiv_{RK}G$ but $N_G$ and $N_F$ are not homeomorphic, since $N_G$ is compact and $N_F$ is not. It follows in this case that $S_G$ and $S_F$ are not homeomorphic as well, or even that $S_F$ is not a continuous image of $S_G$, since $S_G$ ($=N_G$) is countable and $S_F$ contains a copy of ${\beta\omega}$.
We also have an analogue of Proposition \ref{prop:ordering_nf} for the spaces $S_F$ and $S_F^*$.
\begin{proposition}\label{prop:ordering_sf}
Let $F$ and $G$ be free filters on $\omega$.
\begin{enumerate}
\item If $F\le_K G$ and $G\neq Fr$, then $S_F$ is a continuous image of $S_G$.
\item If there is a bijection $f\colon\omega\to\omega$ such that $F=f(G)$ (so $F\equiv_{RK} G$), then $S_F\cong S_G$ and $S_F^*\cong S_G^*$.
\item If $F\subseteq G$, then $S_F$ is a continuous image of $S_G$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) If $F\le_KG$ and $G\neq Fr$, then there is a surjection $f\colon\omega\to\omega$ such that $F\subseteq f(G)$. We define a continuous surjection $\psi\colon S_G\to S_F$ by setting, for every $x\in S_G$:
\[\psi(x)=\big\{A\in\mathcal{A}_F\colon\ f^{-1}[A]\in x\big\}.\]
It is immediate that $\psi\big(p_G\big)=p_F$ and $\psi(n)=f(n)$ for every $n\in\omega$, so $\psi\restriction N_G=\varphi_f$. The continuity follows from the Stone duality, see \cite[Chapter 36]{GH09} for details. Since $S_G$ is compact and $N_F=\psi\big[N_G\big]$ is dense in $S_F$, $\psi$ is indeed a surjection.
(2) If there is a bijection $f\colon\omega\to\omega$ such that $F=f(G)$, then by Proposition \ref{prop:ordering_nf}.(2) we have $N_F\cong N_G$, so $S_F=\beta\big(N_F\big)\cong\beta\big(N_G\big)=S_G$, hence $S_F\cong S_G$. Since homeomorphisms preserve isolated points, $S_F^*\cong S_G^*$, too.
(3) If $F\subseteq G$, then $F\le_KG$. If $G=Fr$, then $F=Fr$, so $S_G=S_F$ and $S_G^*=S_F^*$. If $G\neq Fr$, then we use (1).
\end{proof}
The function $\psi$ in the proof of Proposition \ref{prop:ordering_sf}.(1) does not have to satisfy the equality $\psi\big[S_G^*\big]=S_F^*$. To see this, let $A,B\in\ctblsub{\omega}$ be arbitrary disjoint sets such that $\omega=A\cup B$, and consider the filters $G=Fr(A,\omega)$ and $F=Fr$. Obviously, $F\subseteq G\neq Fr$. Let now $f\colon\omega\to\omega$ be a surjection such that $f\restriction A$ is a bijection onto $\omega\setminus\{35\}$ and $f[B]=\{35\}$. Of course, $F\subseteq f(G)$. Let $x$ be any free ultrafilter on $\mathcal{A}_G$ such that $B\in x$. It follows that $x\in S_G^*$. However, $f^{-1}[\{35\}]=B\in x$, which implies that $\psi(x)=35$, so $\psi\big[S_G^*\big]\cap\omega\neq\emptyset$.
Note also that if $F$ and $G$ are two free ultrafilters on $\omega$ which are not RK-equivalent, or even not comparable in the sense of Kat\v{e}tov, then still $S_F={\beta\omega}=S_G$, so the converse statements to Propositions \ref{prop:ordering_sf}.(1)--(3) do not hold. By Proposition \ref{prop:ordering_nf}.(2), it follows also that two non-homeomorphic spaces $N_F$ and $N_G$ may still have homeomorphic, or even equal, \v{C}ech--Stone compactifications.
\section{Non-trivial convergent sequences in spaces $S_F$\label{sec:conv_seq}}
We study in this section for which filters $F$ the spaces $N_F$, $S_F$, and $S_F^*$ contain non-trivial convergent sequences. We start with the following simple, but useful, observation yielding that there may be at most one point in $S_F$, namely $p_F$, that can be the limit of a non-trivial convergent sequence.
\begin{lemma}\label{lemma:sf_convseq_pf}
Let $F$ be a free filter on $\omega$ such that the space $S_F$ contains a non-trivial convergent sequence $\seqn{x_n}$. Then, $p_F=\lim_{n\to\infty}x_n$.
\end{lemma}
\begin{proof}
Let $x=\lim_{n\to\infty}x_n$. Every point $n\in\omega\subseteq S_F$ is isolated, so $x\in S_F^*$. We consider the following two cases and appeal to Lemma \ref{lemma:sf_frechet_bo}:
\begin{enumerate}
\item If $F=Fr$, then $p_F$ is the only point in $S_F^*$, so $x=p_F$.
\item If $F\neq Fr$, then every point of $S_F^*\setminus\big\{p_F\big\}$ has a clopen neighborhood in $S_F$ homeomorphic to ${\beta\omega}$, so it cannot be the limit of a non-trivial convergent sequence. It follows that $x=p_F$.
\end{enumerate}
\end{proof}
The next results show that the space $N_F$ contains a non-trivial convergent sequence if and only if $F$ is ``Fr\'echet-like''.
\begin{lemma}\label{lemma:nf_convseq_frechet}
Let $F$ be a free filter on $\omega$. Let $\seqn{x_n}$ be a sequence in $N_F$ such that $x_n\neq x_m$ for every $n\neq m\in\omega$. Then, $\seqn{x_n}$ is convergent if and only if for every $A\in F$ there is $N\in\omega$ such that $x_n\in A$ for every $n>N$.
In particular, the space $N_F$ contains a non-trivial convergent sequence if and only if there is $X\in\ctblsub{\omega}$ such that $X\subseteq^*A$ for every $A\in F$.
\end{lemma}
\begin{proof}
Assume that the space $N_F$ contains a non-trivial convergent sequence $\seqn{x_n}$. By Lemma \ref{lemma:sf_convseq_pf}, $\lim_{n\to\infty}x_n=p_F$, so we may assume that for every $n\in\omega$ we have $x_n\neq p_F$ and hence $x_n\in\omega$. It follows that for every $A\in F$ there is $N\in\omega$ such that $x_n\in A$ for every $n>N$. The converse is obvious.
The second statement follows by putting $X=\big\{x_n\colon\ n\in\omega\big\}$.
\end{proof}
Note that the set $X$ in the above lemma need not belong to $F$, so $Fr(X,\omega)$ need not be a subset of $F$. It turns out, however, that $F$ must be K-equivalent to $Fr$.
\begin{lemma}\label{lem:f_kequiv_fr}
Let $F$ be a free filter on $\omega$. Then $F\equiv_K Fr$ if and only if there is $X\in\ctblsub{\omega}$ such that $X\subseteq^*A$ for every $A\in F$.
\end{lemma}
\begin{proof}
For the right-to-left implication, notice that $Fr\subseteq F$ since $F$ is free (so $Fr\le_K F$), and that $F\subseteq f(Fr)$ for any bijection $f\colon\omega\to X$ (so $F\le_K Fr$).
For the left-to-right implication, take $f\colon\omega\to\omega$ such that $F\subseteq f(Fr)$ and consider the set $X=f[\omega]$.
\end{proof}
It is well-known that a free filter $F$ on $\omega$ is K-equivalent to $Fr$ if and only if its dual ideal $F^*$ is not tall. Recall that an ideal $I$ is \textit{tall} (or \textit{dense}) if for every $A\in\ctblsub{\omega}$ there is $B\in I\setminus Fin$ contained in $A$. Trivially, every maximal ideal is tall but $Fin$ itself is not tall. As a consequence, with the aid of Corollary \ref{cor:nwd_fr}, we obtain the following characterization of the existence of non-trivial convergent sequences in spaces $N_F$.
\begin{proposition}\label{prop:nf_convseq_char}
Let $F$ be a free filter on $\omega$. Let $\mathcal{F}$ denote the closed subset of $\os$ consisting of all those ultrafilters on $\omega$ which extend $F$. Then, the following are equivalent:
\begin{enumerate}
\item $N_F$ contains a non-trivial convergent sequence;
\item there is $X\in\ctblsub{\omega}$ such that $X\subseteq^*A$ for every $A\in F$;
\item there is $X\in\ctblsub{\omega}$ such that $F\restriction X=Fr(X)$;
\item $F\equiv_K Fr$;
\item the dual ideal $F^*$ is not tall;
\item $\mathcal{F}$ has non-empty interior in $\os$.\hfill$\Box$
\end{enumerate}
\end{proposition}
We will now investigate the case when the space $S_F^*$ for a given filter $F$ contains non-trivial convergent sequences. Let us thus introduce a general scheme for constructing filters $F$ on $\omega$ such that the points $p_F$ are limits of convergent sequences in the spaces $S_F$.
Let $\seqn{A_n}$ be a sequence of pairwise disjoint (finite or infinite) non-empty subsets of $\omega$. For every $n\in\omega$ let $F_n$ be a (not necessarily free) filter on $A_n$---we will say that the sequence \textit{$\seqn{F_n}$ is based on the sequence $\seqn{A_n}$}. Let us now define the \textit{limit filter} $LF\big(\seqn{F_n}\big)$ as follows:
\[LF\big(\seqn{F_n}\big)=\Big\{A\in\ctblsub{\omega}\colon\ \big\{n\in\omega\colon\ A\cap A_n\in F_n\big\}\in Fr\Big\}.\]
Limit filters were studied e.g. in \cite{CDM93}, \cite{Fre07} or \cite{KR13}. We will usually write simply $LF\big(F_n\big)$ instead of $LF\big(\seqn{F_n}\big)$. The following is immediate.
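To illustrate the definition, consider the following simple example: let $A_n=\{2n,2n+1\}$ and let $F_n$ be the principal filter on $A_n$ generated by $\{2n\}$ for every $n\in\omega$. Then,
\[LF\big(F_n\big)=\big\{A\subseteq\omega\colon\ 2n\in A\text{ for almost all }n\in\omega\big\}=Fr(E,\omega),\]
where $E$ denotes the set of even numbers.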
\begin{fact}
For every $N\in\omega$, $\bigcup_{n\ge N}A_n\in LF\big(F_n\big)$. In particular, $LF\big(F_n\big)$ is a free filter on $\omega$.\hfill$\Box$
\end{fact}
Note that if a sequence $\seqn{F_n}$ of filters is based on some sequence $\seqn{A_n}$, then $A_n\in\mathcal{A}_{LF(F_n)}$ but $A_n\not\in LF\big(F_n\big)$ for every $n\in\omega$.
Finite modifications of the sequence $\seqn{F_n}$ have no impact on $LF\big(F_n\big)$.
\begin{lemma}
Let $\seqn{F_n}$ be a sequence of filters based on a sequence $\seqn{A_n}$. Then,
\begin{enumerate}
\item for every $k\in\omega$, $LF\big(\seqn{F_n}\big)=LF\big(\seqn{F_{n+k}}\big)$;
\item for every co-infinite $X\in\ctblsub{\omega}$, $LF\big(\seqn{F_n}\big)\subsetneq LF\big(\seq{F_n}{n\in X}\big)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) and the inclusion in (2) follow immediately from the definition of $LF\big(F_n\big)$---to see that $LF\big(\seqn{F_n}\big)\neq LF\big(\seq{F_n}{n\in X}\big)$, consider any set $A\in LF\big(\seq{F_n}{n\in X}\big)$ such that $A\subseteq\bigcup_{n\in X}A_n$.
\end{proof}
The next lemma provides sufficient conditions for a filter $F$ so that $S_F$ has a non-trivial convergent sequence.
\begin{lemma}\label{lemma:sf_f_convseq}
For every sequence $\seqn{F_n}$ based on a sequence $\seqn{A_n}$ and a filter $F$ on $\omega$ such that $F\subseteq LF\big(F_n\big)$, the point $p_F$ in the space $S_F$ is the limit of a non-trivial convergent sequence. More precisely:
\begin{enumerate}
\item if there is a subsequence $\seqk{n_k}$ such that for every $k\in\omega$ the filter $F_{n_k}$ is a principal filter on $A_{n_k}$, then there is a non-trivial sequence in $\omega$ convergent to $p_F$;
\item if there is a subsequence $\seqk{n_k}$ such that for every $k\in\omega$ the filter $F_{n_k}$ is a free filter on $A_{n_k}$, then there is a non-trivial sequence in $S_F^*$ convergent to $p_F$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) For every $k\in\omega$ let $x_k\in\omega$ be a point of $\bigcap F_{n_k}$. Then, for every $A\in F$ there is $K\in\omega$ such that $x_k\in A$ for every $k>K$. It follows by Lemma \ref{lemma:nf_convseq_frechet} that $\lim_{k\to\infty}x_k=p_F$.
(2) For every $k\in\omega$ there is an ultrafilter $\mathcal{U}_k$ on $\mathcal{A}_F$ such that $\mathcal{U}_k\in S_F^*$, $\mathcal{U}_k\neq p_F$ and $F_{n_k}\subseteq\mathcal{U}_k$. It follows that $\lim_{k\to\infty}\mathcal{U}_k=p_F$. Indeed, if $A\in F$, then $A\in LF\big(F_n\big)$, so for almost all $k\in\omega$ we have $A\cap A_{n_k}\in F_{n_k}$, and hence for almost all $k\in\omega$ the point $\mathcal{U}_k$ belongs to the clopen subset of $S_F$ induced by $A$.
\end{proof}
In fact, in the proof of (2) we have shown more: if, for each $k\in\omega$, $X_k$ denotes the closed subset of $S_F^*$ consisting of all ultrafilters on $\mathcal{A}_F$ extending $F_{n_k}$, then the sequence $\seqk{X_k}$ converges to $p_F$.
The converse to Lemma \ref{lemma:sf_f_convseq} is also true.
\begin{lemma}\label{lemma:sf_convseq_f}
Let $F$ be a free filter on $\omega$ such that $S_F$ contains a non-trivial convergent sequence $\seqn{x_n}$. Then, there is a sequence $\seqn{F_n}$ of filters based on some sequence $\seqn{A_n}$ such that $F\subseteq LF\big(F_n\big)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:sf_convseq_pf}, $\lim_{n\to\infty}x_n=p_F$, so we may assume that $x_n\neq p_F$ for every $n\in\omega$. Using the Tietze extension theorem we may find $f\in C\big(S_F\big)$ such that $f\big(p_F\big)=0$ and $f\big(x_n\big)=1/(n+1)$ for every $n\in\omega$, and hence easily construct a sequence $\seqn{A_n}$ of pairwise disjoint subsets of $\omega$ such that $A_n\in x_n$ and $A_n^c\in F$ ($A_n$'s may be finite or infinite). For every $n\in\omega$ let $F_n$ be a filter on $A_n$ defined as follows: $F_n=x_n\restriction A_n$ if $x_n\in\omega$, and $F_n=\big(x_n\cap F\big)\restriction A_n$ otherwise. We claim that $F\subseteq LF\big(F_n\big)$. Indeed, let $A\in F$; then there is $N\in\omega$ such that for every $n>N$ we have $A\in x_n$ and hence $A\cap A_n\in F_n$. This yields that $A\in LF\big(F_n\big)$.
\end{proof}
Putting Lemmas \ref{lemma:sf_f_convseq} and \ref{lemma:sf_convseq_f} together we obtain the following characterization of those filters $F$ for which the space $S_F$ contains a non-trivial convergent sequence (with the limit $p_F$, by Lemma \ref{lemma:sf_convseq_pf}).
\begin{corollary}
Let $F$ be a free filter on $\omega$. The space $S_F$ contains a non-trivial convergent sequence if and only if $F\subseteq LF\big(F_n\big)$ for some sequence $\seqn{F_n}$ based on some $\seqn{A_n}$.\hfill$\Box$
\end{corollary}
The following lemma provides a sufficient condition for a sequence $\seqn{F_n}$ so that the space $N_{LF(F_n)}$ does not have any non-trivial convergent sequences (cf. condition (2) in Lemma \ref{lemma:sf_f_convseq}).
\begin{lemma}\label{lemma:lf_no_frechet}
For every sequence $\seqn{F_n}$ of free filters based on a sequence $\seqn{A_n}$ of (necessarily infinite) subsets of $\omega$ and $F=LF\big(F_n\big)$, there is no $X\in\ctblsub{\omega}$ such that $X\subseteq^*A$ for every $A\in F$. In particular, $S_F^*$ contains a non-trivial convergent sequence and $N_F$ does not.
\end{lemma}
\begin{proof}
Let $X\in\ctblsub{\omega}$. If $X\cap A_n\neq\emptyset$ for at most finitely many $n\in\omega$, then $X\not\subseteq^*\bigcup_{n\ge N}A_n\in F$ for a sufficiently large $N\in\omega$. So let $M\in\ctblsub{\omega}$ be such that $X\cap A_n\neq\emptyset$ for all $n\in M$. For each $n\in M$ let $x_n\in X\cap A_n$. Put $Y=\omega\setminus\big\{x_n\colon n\in M\big\}$. Since for every $n\in\omega$ we have $Y\cap A_n=A_n$ or $Y\cap A_n=A_n\setminus\{x_n\}$, and each filter $F_n$ is free, we have $Y\cap A_n\in F_n$ for every $n\in\omega$, so $Y\in F$. But $x_n\in X\setminus Y$ for every $n\in M$ and $M$ is infinite, so $X\not\subseteq^* Y$.
The second statement follows from Lemmas \ref{lemma:sf_f_convseq}.(2) and \ref{lemma:nf_convseq_frechet}.
\end{proof}
We will use Lemma \ref{lemma:lf_no_frechet} to obtain a space $S_F$ with the JNP (induced by a convergent sequence in $S_F^*$) but such that $N_F$ does not have the BJNP (see Corollary \ref{cor:sf_jnp_nf_no_bjnp}).
We also have a counterpart to condition (1) of Lemma \ref{lemma:sf_f_convseq}.
\begin{lemma}\label{lemma:lf_no_convseq_sf}
For every sequence $\seqn{F_n}$ of principal filters based on some sequence $\seqn{A_n}$ of (finite or infinite) subsets of $\omega$ and $F=LF\big(F_n\big)$, there is $X\in\ctblsub{\omega}\cap F$ such that $X\subseteq^*A$ for every $A\in F$, so $N_F$ contains a non-trivial convergent sequence, but there is no non-trivial convergent sequence in $S_F^*$.
\end{lemma}
\begin{proof}
For every $n\in\omega$ let $z_n$ be the only point of $\bigcap F_n$---the set $X=\big\{z_n\colon\ n\in\omega\big\}$ satisfies the required condition. Moreover, $X\in F$, since each $F_n$ is principal. It follows by Lemma \ref{lemma:nf_convseq_frechet} that $N_F$ contains a non-trivial sequence convergent to $p_F$.
We now show that there is no non-trivial convergent sequence in $S_F^*$. If $\omega\subseteq^*X$, then $F=Fr$ and $S_F^*=\big\{p_F\big\}$, so we are done. Assume then that $X^c\in\ctblsub{\omega}$ and, for the sake of contradiction, suppose there is a non-trivial sequence $\seqn{x_n\in S_F^*\setminus\big\{p_F\big\}}$ such that $\lim_{n\to\infty}x_n=p_F$.
Since $\seqn{x_n}$ converges to $p_F$ and $X\in F$, there is $N\in\omega$ such that $X\in x_n$ for every $n>N$. But each $x_n$ is free, so $X\setminus\{0,\ldots,k\}\in x_n$ for every $k\in\omega$ and $n>N$. Since for every $A\in F$ we have $X\subseteq^* A$, for every $n>N$ and $A\in F$ we have $X\cap A\in x_n$, and hence $A\in x_n$, which implies that we cannot separate $p_F$ from any of the points $x_n$ with $n>N$---a contradiction, since $S_F$ is Hausdorff.
\end{proof}
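To illustrate Lemma \ref{lemma:lf_no_convseq_sf}, let $A_n=\{n\}$ and $F_n=\big\{\{n\}\big\}$ for every $n\in\omega$. Then, $A\cap A_n\in F_n$ if and only if $n\in A$, so $LF\big(F_n\big)=Fr$ and $X=\omega$. Accordingly, $N_{Fr}=S_{Fr}$ is the one-point compactification of $\omega$ and $S_{Fr}^*=\big\{p_{Fr}\big\}$, so indeed there is no non-trivial convergent sequence in $S_{Fr}^*$.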
\section{General characterizations of the BJNP and JNP of spaces $N_F$ and $S_F$\label{sec:nf_char_jnp_bjnp}}
In this section we present several various sufficient and necessary conditions for spaces $N_F$, $S_F$, and $S_F^*$ implying that they have the BJNP or the JNP.
We start the section by recalling the following important facts which will be frequently used in the sequel.
\begin{fact}\label{fact:conv_seq_jnp}
If a space $X$ contains a non-trivial convergent sequence, then it has the JNP.
\end{fact}
\begin{proof}
If $\seqn{x_n}$ is a non-trivial convergent sequence in a space $X$, then the sequence $\seqn{\mu_n}$ defined for every $n\in\omega$ by the formula $\mu_n=\frac{1}{2}\big(\delta_{x_{2n}}-\delta_{x_{2n+1}}\big)$ is a JN-sequence on the space $X$.
\end{proof}
\begin{fact}\label{fact:bo_bjnp}
${\beta\omega}$ and $\omega^*$ do not have the BJNP. In particular, they do not have the JNP.
\end{fact}
\begin{proof}
It is well known that both ${\beta\omega}$ and $\omega^*$ are Grothendieck spaces, so by \cite[Section 6]{KSZ} (cf. also \cite{BKS1}) they cannot have the JNP. Since both spaces are compact, every continuous function on them is bounded, so they cannot have the BJNP either.
\end{proof}
\begin{lemma}\label{lemma:bjn_disjoint_supps}
Let $X$ be a space. If $X$ has the JNP (resp. BJNP), then it is witnessed by a JN-sequence (resp. BJN-sequence) with disjoint supports.
\end{lemma}
\begin{proof}
See \cite[Section 4.1]{KSZ} for the case of the JNP---the proof works without any changes for the BJNP, too.
\end{proof}
\begin{lemma}\label{lemma:discrete_no_bjn}
If $X$ is a discrete space, then $X$ does not have the BJNP.
\end{lemma}
\begin{proof}
This may be proved in many ways, but the most direct one is the following. Assume there is a BJN-sequence $\seqn{\mu_n}$ on $X$. By Lemma \ref{lemma:bjn_disjoint_supps}, we may assume that $\supp\big(\mu_n\big)\cap\supp\big(\mu_{n'}\big)=\emptyset$ for every $n\neq n'\in\omega$. For each $n\in\omega$ let $F_n$ be a subset of $\supp\big(\mu_n\big)$ such that $\big|\mu_n\big(F_n\big)\big|>1/4$. Let $F=\bigcup_{n\in\omega}F_n$. It follows that $\chi_F\in C^*(X)$ and $\big|\mu_n\big(\chi_F\big)\big|>1/4$ for every $n\in\omega$, a contradiction.
\end{proof}
Recall that a subset $A$ of a space $X$ is \textit{bounded}\footnote{Note that some authors use the name \textit{functionally bounded}.} in $X$ if for every $f\in C(X)$ we have $f\restriction A\in C^*(A)$.
\begin{lemma}\label{lemma:bounded_supports}
Let $X$ be a space and $\seqn{\mu_n}$ a sequence of finitely supported measures on $X$. Then, the following hold:
\begin{enumerate}
\item if $\seqn{\mu_n}$ is a JN-sequence on $X$, then the union $\bigcup_{n\in\omega}\supp\big(\mu_n\big)$ is bounded in $X$;
\item if $\seqn{\mu_n}$ is a BJN-sequence on $X$ such that the union $\bigcup_{n\in\omega}\supp\big(\mu_n\big)$ is contained in a subset $Y$ which is bounded in $X$, then it is a JN-sequence on $X$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the proof of (1), see \cite[Lemma 4.11]{KSZ} or \cite[Proposition 4.1]{KMSZ}.
We prove (2). Let $f\in C(X)$. There is $M>0$ such that $|f(x)|<M$ for every $x\in Y$. Let $r\colon\mathbb{R}\to[-M,M]$ be a retraction and put $g=r\circ f$. Since $g(x)=f(x)$ for every $x\in Y$ and $g\in C^*(X)$, we have $\lim_{n\to\infty}\mu_n(f)=\lim_{n\to\infty}\mu_n(g)=0$, which proves that $\seqn{\mu_n}$ is a JN-sequence on $X$.
\end{proof}
Of course, the union of supports of measures from a (B)JN-sequence on a given space must necessarily be infinite.
BJN-sequences and JN-sequences on spaces $N_F$ and $S_F$ are characterized by the following results.
\begin{theorem}\label{prop:nf_bjnseq_char}
Let $F$ be a free filter on $\omega$ and $\seqn{\mu_n}$ a sequence of finitely supported measures on $N_F$. For each $n\in\omega$ let $P_n=\big\{x\in\supp\big(\mu_n\big)\colon\ \mu_n(\{x\})>0\big\}$ and $N_n=\supp\big(\mu_n\big)\setminus P_n$. Then, $\seqn{\mu_n}$ is a BJN-sequence on $N_F$ if and only if the following three conditions simultaneously hold:
\begin{enumerate}
\item $\big\|\mu_n\big\|=1$ for every $n\in\omega$,
\item $\lim_{n\to\infty}\big\|\mu_n\restriction P_n\big\|=\lim_{n\to\infty}\big\|\mu_n\restriction N_n\big\|=1/2$,
\item $\lim_{n\to\infty}\big\|\mu_n\restriction(\omega\setminus A)\big\|=0$ for every $A\in F$.
\end{enumerate}
\end{theorem}
\begin{proof}
Assume that $\seqn{\mu_n}$ is a BJN-sequence on $N_F$. Then, (1) holds by the definition and (2) was essentially proved in \cite[Lemma 4.2]{KSZ} (based on the fact that $\mu_n\big(N_F\big)\to 0$).
If $A\in F$
and there is a subsequence $\seqk{\mu_{n_k}}$ such that $\big\|\mu_{n_k}\restriction(\omega\setminus A)\big\|>0$ for every $k\in\omega$ and $\lim_{k\to\infty}\big\|\mu_{n_k}\restriction(\omega\setminus A)\big\|>0$, then the sequence
\[\seqk{\big(\mu_{n_k}\restriction(\omega\setminus A)\big)/\big\|\mu_{n_k}\restriction(\omega\setminus A)\big\|}\]
is a BJN-sequence on the discrete clopen subspace $\omega\setminus A$ of $N_F$, contradicting Lemma \ref{lemma:discrete_no_bjn}. This proves (3).
Assume now that $\seqn{\mu_n}$ is a sequence of finitely supported measures on $N_F$ satisfying conditions (1)--(3). We will first show that $\mu_n(f)\to 0$ for every $f\in C^*\big(N_F\big)$ such that $f\big(p_F\big)=0$. Let thus $f\in C^*\big(N_F\big)$ be such a function and fix $\varepsilon>0$. By the continuity of $f$, there is $A\in F$ such that $|f(n)|<\varepsilon/2$ for every $n\in A$. By (3), there is $N\in\omega$ such that for every $n>N$ we have $\big\|\mu_n\restriction(\omega\setminus A)\big\|<\varepsilon/(2\|f\|_\infty)$. We then have:
\[\big|\mu_n(f)\big|\le\big|\big(\mu_n\restriction(\omega\setminus A)\big)(f)\big|+\big|\big(\mu_n\restriction\overline{A}^{N_F}\big)(f)\big|\le\]
\[\|f\|_\infty\cdot\big\|\mu_n\restriction(\omega\setminus A)\big\|+\big\|f\restriction\overline{A}^{N_F}\big\|_\infty\cdot\big\|\mu_n\restriction\overline{A}^{N_F}\big\|<\varepsilon/2+\varepsilon/2\cdot1=\varepsilon\]
for every $n>N$, so $\lim_{n\to\infty}\mu_n(f)=0$.
Let now $f\in C^*\big(N_F\big)$ be arbitrary. Notice that for every $n\in\omega$ we have:
\[\mu_n\big(f-f\big(p_F\big)\cdot\chi_{N_F}\big)=\mu_n(f)-f\big(p_F\big)\cdot\mu_n\big(\chi_{N_F}\big)=\mu_n(f)-f\big(p_F\big)\Big(\mu_n\big(P_n\big)+\mu_n\big(N_n\big)\Big),\]
so, by (2) and the equality (proved above) \[\lim_{n\to\infty}\mu_n\big(f-f\big(p_F\big)\cdot\chi_{N_F}\big)=0,\]
we get that $\lim_{n\to\infty}\mu_n(f)=0$. This proves that $\seqn{\mu_n}$ is a BJN-sequence on $N_F$.
\end{proof}
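Conditions (1) and (2) of the theorem are finitary and can be verified mechanically for any single finitely supported measure. The following Python sketch illustrates this under a hypothetical encoding of such a measure as a dictionary from points to rational weights; the function \texttt{norms} and the toy measure \texttt{mu} are our own illustration, not part of the text.

```python
from fractions import Fraction

# Hypothetical encoding: a finitely supported measure is a dict {point: weight}.
def norms(mu):
    """Return (||mu||, ||mu restricted to P_n||, ||mu restricted to N_n||)."""
    total = sum(abs(w) for w in mu.values())        # condition (1): should be 1
    pos = sum(w for w in mu.values() if w > 0)      # mass on P_n
    neg = sum(-w for w in mu.values() if w < 0)     # mass on N_n
    return total, pos, neg

# A toy measure satisfying conditions (1) and (2) exactly:
mu = {0: Fraction(1, 2), 1: Fraction(-1, 4), 2: Fraction(-1, 4)}
```

Here \texttt{norms(mu)} returns $(1,1/2,1/2)$, matching conditions (1) and (2) exactly (rather than only in the limit).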
\noindent Note that condition (3) in the above theorem may be equivalently stated as follows:
(3') $\lim_{n\to\infty}\big\|\mu_n\restriction\overline{A}^{N_F}\big\|=1$ for every $A\in F$.
\noindent Recall that for each $A\in F$ we have $\overline{A}^{N_F}=A\cup\big\{p_F\big\}$.
Recall also that for every $A$ belonging to a free filter $F$ on $\omega$ the clopen subset $\clopen{\omega\setminus A}_F$ (resp. $\clopen{\omega\setminus A}_F^*$) of $S_F$ (resp. of $S_F^*$) is either finite or homeomorphic to $\beta\omega$ (resp. to $\omega^*$), hence it does not have the JNP (by Fact \ref{fact:bo_bjnp}). The proof of the next theorem is thus basically identical to the previous one, so we omit it.
\begin{theorem}\label{prop:sf_jnseq_char}
Let $F$ be a free filter on $\omega$ and let $X=S_F$ or $X=S_F^*$. Let $\seqn{\mu_n}$ be a sequence of finitely supported measures on $X$. For each $n\in\omega$ let $P_n=\big\{x\in\supp\big(\mu_n\big)\colon\ \mu_n(\{x\})>0\big\}$ and $N_n=\supp\big(\mu_n\big)\setminus P_n$. Then, $\seqn{\mu_n}$ is a JN-sequence on $X$ if and only if the following three conditions simultaneously hold:
\begin{enumerate}
\item $\big\|\mu_n\big\|=1$ for every $n\in\omega$,
\item $\lim_{n\to\infty}\big\|\mu_n\restriction P_n\big\|=\lim_{n\to\infty}\big\|\mu_n\restriction N_n\big\|=1/2$,
\item if $X=S_F$, then $\lim_{n\to\infty}\big\|\mu_n\restriction[\omega\setminus A]_F\big\|=0$ for every $A\in F$, or,\\ if $X=S_F^*$, then $\lim_{n\to\infty}\big\|\mu_n\restriction[\omega\setminus A]_F^*\big\|=0$ for every $A\in F$.\hfill$\Box$
\end{enumerate}
\end{theorem}
To ensure that a given sequence of measures on a space $N_F$ is a JN-sequence, we need to add one supplementary condition to Theorem \ref{prop:nf_bjnseq_char}.
\begin{theorem}\label{prop:nf_jnseq_char}
Let $F$ be a free filter on $\omega$ and $\seqn{\mu_n}$ a sequence of finitely supported measures on $N_F$. Then, $\seqn{\mu_n}$ is a JN-sequence on $N_F$ if and only if it satisfies conditions (1)--(3) of Theorem \ref{prop:nf_bjnseq_char} and, in addition, the following one:
\begin{enumerate}
\setcounter{enumi}{3}
\item $\bigcup_{n\in\omega}\supp\big(\mu_n\big)\subseteq^* A$ for every $A\in F$.
\end{enumerate}
\end{theorem}
\begin{proof}
Assume that $\seqn{\mu_n}$ satisfies conditions (1)--(3) of Theorem \ref{prop:nf_bjnseq_char} and additionally condition (4). By Theorem \ref{prop:nf_bjnseq_char}, $\seqn{\mu_n}$ is a BJN-sequence on $N_F$. Put
\[Y=\bigcup_{n\in\omega}\supp\big(\mu_n\big)\setminus\big\{p_F\big\}.\]
It follows, by (4), that $Y\subseteq^* A$ for every $A\in F$, which means, by Lemmas \ref{lemma:nf_convseq_frechet} and \ref{lemma:sf_convseq_pf}, that $\seq{n}{n\in Y}$ is a sequence convergent to $p_F$. Put $X=Y\cup\big\{p_F\big\}$.
Since $X$ with the inherited topology is compact, it is bounded in $N_F$ and hence, by Lemma \ref{lemma:bounded_supports}.(2), $\seqn{\mu_n}$ is a JN-sequence on $N_F$.
Conversely, assume that $\seqn{\mu_n}$ is a JN-sequence on $N_F$. Since it is then trivially a BJN-sequence on $N_F$, conditions (1)--(3) hold by Theorem \ref{prop:nf_bjnseq_char}, so we only need to prove condition (4). For the sake of contradiction, suppose there is $A\in F$ such that the set
\[X=\bigcup_{n\in\omega}\supp\big(\mu_n\big)\cap(\omega\setminus A)\]
is infinite. We will construct an unbounded function $f\in C\big(N_F\big)$ such that $f\restriction A\equiv 0$ and $\limsup_{n\to\infty}\big|\mu_n\big(f\restriction(\omega\setminus A)\big)\big|>0$. Since $X$ is infinite and each $\supp\big(\mu_n\big)$ is finite, we can find sequences $\seqk{x_k\in X}$ and $\seqk{n_k\in\omega}$ such that
\[x_k\in\supp\big(\mu_{n_k}\big)\setminus\bigcup_{i=0}^{k-1}\supp\big(\mu_{n_i}\big)\]
for every $k\in\omega$. We first define $f$ inductively on $\big\{x_k\colon k\in\omega\big\}$. For $k=0$ let $f\big(x_0\big)=0$
and for $k>0$ define $f\big(x_k\big)$ as follows:
\[f\big(x_k\big)=\Big(1-\sum_{i=0}^{k-1}f\big(x_i\big)\cdot\mu_{n_k}\big(\big\{x_i\big\}\big)\Big)\cdot\mu_{n_k}\big(\big\{x_k\big\}\big)^{-1}.\]
For $x\in X\setminus\big\{x_k\colon\ k\in\omega\big\}$ or $x\in\overline{A}^{N_F}$, let again $f(x)=0$, so $f\restriction A\equiv 0$. Since $A\in F$ and each $x_k$ is isolated in the discrete clopen space $N_F\setminus\overline{A}^{N_F}$, $f$ is continuous and so $f\in C\big(N_F\big)$. Then, as for every $k>0$ we have:
\[\supp\big(\mu_{n_k}\big)\cap\big\{x_i\colon\ i\in\omega\big\}\subseteq\big\{x_0,\ldots,x_k\big\},\]
it also holds:
\[\mu_{n_k}(f)=\sum_{i=0}^{k-1}f\big(x_i\big)\cdot\mu_{n_k}\big(\big\{x_i\big\}\big)+f\big(x_k\big)\cdot\mu_{n_k}\big(\big\{x_k\big\}\big)=\]
\[\sum_{i=0}^{k-1}f\big(x_i\big)\cdot\mu_{n_k}\big(\big\{x_i\big\}\big)+\Big(1-\sum_{i=0}^{k-1}f\big(x_i\big)\cdot\mu_{n_k}\big(\big\{x_i\big\}\big)\Big)\cdot\mu_{n_k}\big(\big\{x_k\big\}\big)^{-1}\cdot\mu_{n_k}\big(\big\{x_k\big\}\big)=1,\]
which implies that $\limsup_{n\to\infty}\mu_n(f)\ge1$, contradicting the fact that $\seqn{\mu_n}$ is a JN-sequence on $N_F$. It follows that $\bigcup_{n\in\omega}\supp\big(\mu_n\big)\subseteq^*A$ for every $A\in F$ and hence (4) holds.
\end{proof}
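The inductive construction of $f$ in the second half of the proof is entirely computational. The following Python sketch runs it on a hypothetical toy family of measures (the data \texttt{mu}, \texttt{x} below are our own illustration) and confirms that $\mu_{n_k}(f)=1$ for every $k>0$:

```python
from fractions import Fraction

# Hypothetical encoding: mu[k] is a finitely supported measure {point: weight},
# and x[k] is a point of supp(mu[k]) avoiding the supports of mu[0], ..., mu[k-1].
def unbounded_witness(mu, x):
    """Define f on x[0], x[1], ... inductively, as in the proof."""
    f = {x[0]: Fraction(0)}
    for k in range(1, len(x)):
        partial = sum(f[x[i]] * mu[k].get(x[i], Fraction(0)) for i in range(k))
        f[x[k]] = (1 - partial) / mu[k][x[k]]   # x[k] is in supp(mu[k]), so nonzero
    return f

mu = [
    {0: Fraction(1, 2), 1: Fraction(-1, 2)},
    {0: Fraction(1, 4), 2: Fraction(3, 4)},
    {1: Fraction(-1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)},
]
x = [0, 2, 3]
f = unbounded_witness(mu, x)

def integrate(mu_k, f):
    # f vanishes off the x[k]'s, so missing points contribute 0.
    return sum(f.get(p, Fraction(0)) * w for p, w in mu_k.items())
```

For this toy data, \texttt{integrate(mu[k], f)} equals $1$ for $k=1,2$, exactly as the displayed computation predicts.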
\cornfjnp
\begin{proof}
If $\seqn{\mu_n}$ is a JN-sequence on $N_F$, then, by Theorem \ref{prop:nf_jnseq_char}.(4),
\[\bigcup_{n\in\omega}\supp\big(\mu_n\big)\subseteq^*A\]
for every $A\in F$, which, by Lemma \ref{lemma:nf_convseq_frechet}, means that $N_F$ contains a non-trivial convergent sequence. The converse is obvious.
\end{proof}
Recall that in Proposition \ref{prop:nf_convseq_char} we have provided several equivalent conditions for a space $N_F$ to contain a non-trivial convergent sequence.
From Theorem \ref{prop:nf_bjnseq_char} we can derive the following result, which can be thought of as a BJNP analogue of Corollary \ref{cor:nf_jnp}, since any non-trivial convergent sequence $\seqn{x_n\in\omega}$ in a given space $N_F$ induces the sequence $\seqn{\delta_{x_n}}$ of trivial finitely supported probability measures satisfying conditions (1) and (2) below.
\cornfbjnpchar
\begin{proof}
Assume that $N_F$ has the BJNP. By Lemma \ref{lemma:bjn_disjoint_supps}, there is a disjointly supported BJN-sequence $\seqn{\theta_n}$ on $N_F$, so we may assume that $p_F\not\in\supp\big(\theta_n\big)$ for every $n\in\omega$. For each $n\in\omega$ put $\mu_n=\big|\theta_n\big|$. It follows that $\big\|\mu_n\big\|=1$ and $\mu_n\ge0$ for each $n\in\omega$, so $\mu_n$ is a probability measure on $N_F$, and, by Theorem \ref{prop:nf_bjnseq_char}.(3), that
\[\lim_{n\to\infty}\mu_n(A)=\lim_{n\to\infty}\big\|\mu_n\restriction A\big\|=1\]
for every $A\in F$. Thus, conditions (1) and (2) are satisfied.
Assume now that there is a sequence $\seqn{\mu_n}$ of finitely supported probability measures on $N_F$ satisfying conditions (1) and (2). For each $n\in\omega$ put:
\[\nu_n=\frac{1}{2}\big(\delta_{p_F}-\mu_n\big).\]
By Theorem \ref{prop:nf_bjnseq_char}, it is immediate that $\seqn{\nu_n}$ is a BJN-sequence on $N_F$.
\end{proof}
Note that by the proof of Corollary \ref{cor:nf_bjnp_char} we might additionally require in its statement that the sequence $\seqn{\mu_n}$ is disjointly supported. However, for convenience of possible applications we omit this assumption.
The proof of the corollary also implies that condition (2) can be rephrased in the following alternative way:
(2') $\mu_n(f)\to\delta_{p_F}(f)$ for every $f\in C_p^*(X)$.
\noindent (Recall that $\delta_{p_F}(f)=f\big(p_F\big)$.)
\medskip
Corollary \ref{cor:nf_bjnp_char} may be used to obtain a characterization of filters $F$ such that $N_F$ has the BJNP in terms of single probability measures on $\omega$ and closed subsets of $\os$. Note that a closed subset $\mathcal{F}$ of $\os$, having properties similar to those described in Propositions \ref{prop:one_measure_bjnp} and \ref{prop:bjnp_one_measure} and Remark \ref{rem:one_measure_bjnp_constr}, was used in \cite[Section 7]{KSZ} to construct a compact space $K$ such that its Banach space $C(K)$ of continuous functions has the $\ell_1$-Grothendieck property but does not have the Grothendieck property.
\begin{proposition}\label{prop:one_measure_bjnp}
Let $\mu$ be a probability measure on $\beta\omega$ with $\mu(\omega)=1$ and let $\seqn{A_n}$ be a sequence of pairwise disjoint subsets of $\omega$ such that $\mu\big(A_n\big)>0$ for every $n\in\omega$. Assume that $\mathcal{F}$ is a closed subset of $\os$ having the following property ($\dagger$): for every clopen subset $U$ of $\beta\omega$ such that $\mathcal{F}\subseteq U$ we have:
\[\lim_{n\to\infty}\frac{\mu\big(A_n\cap U\big)}{\mu\big(A_n\big)}=1.\]
Let $F$ be the free filter on $\omega$ defined by $F=\bigcap\mathcal{F}$. Then, $N_F$ has the BJNP.
\end{proposition}
\begin{proof}
Some of the sets $A_n$ may be infinite, so we first shrink them to finite pieces having relatively large measure $\mu$. Namely, for each $n\in\omega$ let $B_n$ be a finite subset of $A_n$ such that
\[\mu\big(A_n\setminus B_n\big)/\mu\big(A_n\big)<1/2^{n+1}.\]
Then, for each $n\in\omega$ and $A\in\wp(\omega)$, set:
\[\mu_n(A)=\mu\big(B_n\cap A\big)/\mu\big(B_n\big).\]
It follows that each $\mu_n$ is a finitely supported probability measure with $\supp\big(\mu_n\big)\subseteq\omega$.
Fix $A\in F$. Then, for the corresponding clopen subset of $\beta\omega$, we have $\mathcal{F}\subseteq\clopen{A}_\omega$. Trivially,
\[\tag{$*$}1\ge\mu_n(A)\ge\mu\big(B_n\cap A\big)/\mu\big(A_n\big),\]
and
\[\mu\big(A_n\cap A\big)-\mu\big(B_n\cap A\big)=\mu\big(\big(A_n\setminus B_n\big)\cap A\big)\le\mu\big(A_n\setminus B_n\big)<\mu\big(A_n\big)/2^{n+1},\]
so, by ($\dagger$),
\[\lim_{n\to\infty}\mu\big(B_n\cap A\big)/\mu\big(A_n\big)=1,\]
and hence, by ($*$), $\lim_{n\to\infty}\mu_n(A)=1$, too. By Corollary \ref{cor:nf_bjnp_char} the proof is finished.
\end{proof}
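The first step of the proof, shrinking each $A_n$ to a finite $B_n$ with $\mu\big(A_n\setminus B_n\big)/\mu\big(A_n\big)<1/2^{n+1}$, can be sketched greedily in Python; the encoding of $\mu$ as a weight dictionary and the concrete weights below are hypothetical illustrations, not part of the proof.

```python
from fractions import Fraction

# Hypothetical encoding: mu is a dict {point: weight} describing the restriction
# of the measure to omega; A is a (here finite) set with mu(A) > 0.
def shrink(mu, A, n):
    """Greedily pick B subset of A with mu(A \\ B)/mu(A) < 1/2**(n+1)."""
    total = sum(mu[p] for p in A)
    B, acc = set(), Fraction(0)
    for p in sorted(A, key=lambda q: -mu[q]):   # heaviest points first
        if (total - acc) * 2 ** (n + 1) < total:
            break                               # remainder is already small enough
        B.add(p)
        acc += mu[p]
    return B

mu = {p: Fraction(1, 2 ** (p + 1)) for p in range(8)}   # weights 1/2, 1/4, ...
B = shrink(mu, set(range(8)), n=2)
```

For these weights the sketch keeps only the three heaviest points, and the leftover mass ratio is below $1/2^{3}$, as required.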
\begin{remark}\label{rem:one_measure_bjnp_constr}
Let $\mu$ and $\seqn{A_n}$ be as in Proposition \ref{prop:one_measure_bjnp}. Set:
\[\mathfrak{F}=\big\{\mathcal{F}\colon\ \mathcal{F}\subseteq\os\text{ is a closed subset having property }(\dagger)\big\}.\]
For every non-empty $\mathfrak{G}\subseteq\mathfrak{F}$ its intersection $\bigcap\mathfrak{G}$ has property ($\dagger$), i.e. $\bigcap\mathfrak{G}\in\mathfrak{F}$. (To see this, first prove it for finite $\mathfrak{G}$'s by induction on $|\mathfrak{G}|$, then assume that $\bigcap\mathfrak{G}\not\in\mathfrak{F}$ and use a compactness argument.) In particular, $\bigcap\mathfrak{F}\in\mathfrak{F}$. Also, if $\mathcal{F},\mathcal{F}'$ are closed subsets of $\os$ such that $\mathcal{F}\subseteq\mathcal{F}'$ and $\mathcal{F}\in\mathfrak{F}$, then $\mathcal{F}'\in\mathfrak{F}$, too. In other words, $\mathfrak{F}$ is a principal filter in the family of all closed subsets of $\os$.
Let $\mathcal{G}$ be the closed subset of $\os$ defined by the following equivalence:
\[x\in\mathcal{G}\quad\Longleftrightarrow\quad\forall\text{ clopen neighborhood }U\text{ of }x\text{ in }\beta\omega\colon\ \limsup_{n\to\infty}\frac{\mu\big(A_n\cap U\big)}{\mu\big(A_n\big)}>0.\]
Then, the following equality holds:
\[\mathcal{G}=\bigcap\mathfrak{F}.\]
In particular, $\mathcal{G}$ has property ($\dagger$), too.
\end{remark}
The converse to Proposition \ref{prop:one_measure_bjnp} also holds.
\begin{proposition}\label{prop:bjnp_one_measure}
Let $F$ be a free filter on $\omega$ such that the space $N_F$ has the BJNP. Then, there exist a probability measure $\mu$ on $\beta\omega$ such that $\supp(\mu)\subseteq\omega$, and a sequence $\seqn{A_n}$ of pairwise disjoint subsets of $\omega$ such that $\mu\big(A_n\big)>0$ for every $n\in\omega$ and
\[\lim_{n\to\infty}\frac{\mu\big(A_n\cap U\big)}{\mu\big(A_n\big)}=1\]
for every clopen subset $U$ of $\beta\omega$ such that $\mathcal{F}\subseteq U$, where $\mathcal{F}$ is the closed subset of $\os$ consisting of all ultrafilters extending $F$.
\end{proposition}
\begin{proof}
Let $\seqn{\mu_n}$ be a sequence of disjointly supported probability measures on $N_F$ from Corollary \ref{cor:nf_bjnp_char}. Set:
\[\mu=\sum_{n\in\omega}\mu_n/2^{n+1}.\]
(Here, we treat every $\mu_n$ as a measure on $\beta\omega$.) Then, $\mu$ is a probability measure on $\beta\omega$ such that $\supp(\mu)\subseteq\omega$. For each $n\in\omega$, let $A_n=\supp\big(\mu_n\big)$, so $\mu\big(A_n\big)=1/2^{n+1}>0$.
Let $U$ be a clopen subset of $\beta\omega$ containing $\mathcal{F}$. There is $A\in\wp(\omega)$ such that $U=\clopen{A}_\omega$. Consequently, $A\in F$, and so
\[\lim_{n\to\infty}\frac{\mu\big(A_n\cap U\big)}{\mu\big(A_n\big)}=\lim_{n\to\infty}\mu_n(A)=1,\]
which finishes the proof.
\end{proof}
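The construction $\mu=\sum_n\mu_n/2^{n+1}$ is easy to mimic numerically. The following Python sketch (with hypothetical disjointly supported toy measures) verifies the two facts used in the proof, namely $\mu\big(A_n\big)=1/2^{n+1}$ and $\mu\big(A_n\cap B\big)/\mu\big(A_n\big)=\mu_n(B)$ for $B\subseteq\omega$:

```python
from fractions import Fraction

# Hypothetical disjointly supported probability measures mu_n on omega,
# each encoded as a dict {point: weight} with weights summing to 1.
mus = [
    {0: Fraction(1)},
    {1: Fraction(1, 2), 2: Fraction(1, 2)},
    {3: Fraction(1, 3), 4: Fraction(1, 3), 5: Fraction(1, 3)},
]

# mu = sum_n mu_n / 2^(n+1), as in the proof.
mu = {}
for n, m in enumerate(mus):
    for p, w in m.items():
        mu[p] = mu.get(p, Fraction(0)) + w / 2 ** (n + 1)

A = [set(m) for m in mus]                      # A_n = supp(mu_n)

def measure(S):
    return sum(mu.get(p, Fraction(0)) for p in S)
```

For instance, \texttt{measure(A[1])} gives $1/4=1/2^{2}$, and restricting $A_2$ to a half of its support gives the ratio $\mu_2$ would assign to it.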
\medskip
The following proposition is a counterpart of Corollary \ref{cor:nf_jnp} for spaces $S_F$ and $S_F^*$. Note that in the corollary we did not put any restrictions on the sizes of supports.
\begin{proposition}\label{prop:sf_convseq_supps2}
Let $F$ be a free filter on $\omega$ and let $X=S_F$ or $X=S_F^*$. Then, the space $X$ contains a non-trivial sequence convergent to $p_F$ if and only if there is a JN-sequence $\seqn{\mu_n}$ on $X$ such that $\big|\supp\big(\mu_n\big)\big|=2$ for every $n\in\omega$.
\end{proposition}
\begin{proof}
We will prove the proposition for $X=S_F$ only as the proof for $X=S_F^*$ is similar.
The implication in the right direction follows from Fact \ref{fact:conv_seq_jnp}, so assume that $S_F$ admits a JN-sequence $\seqn{\mu_n}$ such that $\big|\supp\big(\mu_n\big)\big|=2$ for every $n\in\omega$. By \cite[Lemma 4.15]{KSZ}, we may assume that $\seqn{\mu_n}$ is disjointly supported. It follows that for every $A\in F$ we have $\supp\big(\mu_n\big)\subseteq\clopen{A}_F$ for almost all $n\in\omega$. Indeed, if not, then either there are $A\in F$ and infinitely many $n\in\omega$ such that $\supp\big(\mu_n\big)\cap\clopen{A}_F=\emptyset$, which implies that the clopen set $\clopen{A^c}_F$ admits a JN-sequence, even though it is homeomorphic to $\beta\omega$, or there is $A\in F$ such that for infinitely many $n\in\omega$ we have $\big|\supp\big(\mu_n\big)\cap\clopen{A}_F\big|=1$, which yields that $\limsup_{n\to\infty}\big|\mu_n\big(\clopen{A}_F\big)\big|\ge 1/2$ (see \cite[Lemma 4.2]{KSZ}), contradicting the fact that $\clopen{A}_F$ is a clopen subset of $S_F$ and $\seqn{\mu_n}$ is a JN-sequence. For every $n\in\omega$ pick $x_n\in\supp\big(\mu_n\big)$---since for every $A\in F$ there is $N\in\omega$ such that $x_n\in\clopen{A}_F$ for every $n>N$, we get that $\lim_{n\to\infty}x_n=p_F$.
\end{proof}
By \cite[Theorem 5.13]{KSZ}, in Proposition \ref{prop:sf_convseq_supps2} we may exchange the condition that $\big|\supp\big(\mu_n\big)\big|=2$ for every $n\in\omega$ for the condition that there exists $M\ge2$ such that $\big|\supp\big(\mu_n\big)\big|\le M$ for every $n\in\omega$. In \cite[Section 4]{BKS1}, it is proved that the space $S_{F_d}$, where $F_d$ is the density filter (see Remark \ref{rem:density_bjnp_no_jnp}), has the JNP but does not contain any non-trivial convergent sequences---it follows that every JN-sequence on $S_{F_d}$ consists of measures with supports having sizes not bounded by any constant $M$ (cf. \cite[Proposition 5.2]{KSZ}). Notice that in order to prove that $S_{F_d}$ has the JNP, a sequence of the form $\frac{1}{2}\big(\delta_{p_F}-\mu_n\big)$ was used.
Applying Theorem \ref{prop:sf_jnseq_char}, we obtain the following result---we leave the proof to the reader as it is almost identical to that of Corollary \ref{cor:nf_bjnp_char}. Note that, similarly as before, we might require that the sequence $\seqn{\mu_n}$ in the statement of the corollary is disjointly supported; however, again, for the purpose of applications, we omit this assumption.
\begin{corollary}\label{cor:sf_jnp_char}
Let $F$ be a free filter on $\omega$ and let $X=S_F$ or $X=S_F^*$. Then, $X$ has the JNP if and only if there is a sequence $\seqn{\mu_n}$ of finitely supported probability measures on $X$ such that:
\begin{enumerate}
\item
$p_F\not\in\supp\big(\mu_n\big)$ for every $n\in\omega$,
\item if $X=S_F$, then $\lim_{n\to\infty}\mu_n\big(\clopen{A}_F\big)=1$ for every $A\in F$, or,\\ if $X=S_F^*$, then $\lim_{n\to\infty}\mu_n\big(\clopen{A}_F^*\big)=1$ for every $A\in F$.\hfill$\Box$
\end{enumerate}
\end{corollary}
We will now briefly study relations between the spaces $S_F$ and $N_F$ in the context of the BJNP and the JNP.
\begin{proposition}\label{prop:nf_bjnp_sf_jnp}
For every free filter $F$, if there is a BJN-sequence of measures on $N_F$, then the same sequence is a JN-sequence on $S_F$. In particular, if the space $N_F$ has the BJNP, then $S_F$ has the JNP.
\end{proposition}
\begin{proof}
Let $F$ be a free filter on $\omega$ such that $N_F$ has the BJNP witnessed by a sequence $\seqn{\mu_n}$. Since $S_F$ is compact, it is bounded in itself. It follows by Lemma \ref{lemma:bounded_supports}.(2) that $\seqn{\mu_n}$ is a JN-sequence on $S_F$.
\end{proof}
\begin{theorem}\label{cor:sf_jnp_nf_sfs}
Let $F$ be a free filter on $\omega$. Then, the space $S_F$ has the JNP if and only if $N_F$ has the BJNP or $S_F^*$ has the JNP.
\end{theorem}
\begin{proof}
If $S_F^*$ has the JNP, then obviously $S_F$ has it, too. Similarly, if $N_F$ has the BJNP, then by Proposition \ref{prop:nf_bjnp_sf_jnp} $S_F$ has the JNP.
Assume now that neither $N_F$ has the BJNP nor $S_F^*$ has the JNP. We claim that $S_F$ does not have the JNP either, so for the sake of contradiction assume that it has the property and let $\seqn{\mu_n}$ be a sequence of probability measures like in Corollary \ref{cor:sf_jnp_char}.
We consider two cases.
1) There exist $\varepsilon>0$ and a subsequence $\seqk{\mu_{n_k}}$ such that $\mu_{n_k}(\omega)>\varepsilon$ for every $k\in\omega$. If for every $k\in\omega$ we put:
\[\nu_k=\big(\mu_{n_k}\restriction\omega\big)\big/\mu_{n_k}(\omega),\]
then $\nu_k$ is a finitely supported probability measure such that $\supp\big(\nu_k\big)\subseteq\omega$, as well as for every $A\in F$ we have:
\[\nu_k(\omega\setminus A)=\mu_{n_k}(\omega\setminus A)\big/\mu_{n_k}(\omega)\le\mu_{n_k}\big(\clopen{\omega\setminus A}_F\big)/\varepsilon,\]
which, by condition (2) of Corollary \ref{cor:sf_jnp_char}, converges to $0$ as $k\to\infty$. By Corollary \ref{cor:nf_bjnp_char} $N_F$ has the BJNP, a contradiction.
2) We have $\limsup_{n\to\infty}\mu_n(\omega)=0$, so there exists $N\in\omega$ such that for every $n>N$ we have $\mu_n(\omega)<1/2$ and hence $\mu_n\big(S_F^*\big)>1/2$. For every $n>N$ put:
\[\nu_n=\big(\mu_n\restriction S_F^*\big)\big/\mu_n\big(S_F^*\big),\]
so $\nu_n$ is a finitely supported probability measure on $S_F^*$ such that $p_F\not\in\supp\big(\nu_n\big)$ (by condition (1) of Corollary \ref{cor:sf_jnp_char}). Again, for every $A\in F$ and $n>N$ we have:
\[\nu_n\big(\clopen{\omega\setminus A}_F^*\big)=\mu_n\big(\clopen{\omega\setminus A}_F\setminus\omega\big)\big/\mu_n\big(S_F^*\big)\le2\mu_n\big(\clopen{\omega\setminus A}_F\big),\]
which, by condition (2) of Corollary \ref{cor:sf_jnp_char}, converges to $0$ as $n\to\infty$. By Corollary \ref{cor:sf_jnp_char} $S_F^*$ has the JNP, which is not true.
\end{proof}
A converse to Proposition \ref{prop:nf_bjnp_sf_jnp} need not hold. Corollary \ref{cor:sf_jnp_nf_no_bjnp} shows that $S_F$ may have the JNP induced by a convergent sequence contained in $S_F^*$, while $N_F$ fails to have the BJNP. However, by the next proposition, if $S_F$ has the JNP witnessed by a JN-sequence with supports contained in $\omega$, then $N_F$ has the BJNP.
\begin{proposition}\label{prop:sf_jnp_nf_bjnp}
Let $F$ be a free filter on $\omega$ and $\seqn{\mu_n}$ a sequence of measures on $S_F$ such that $\supp\big(\mu_n\big)\subseteq N_F$ for every $n\in\omega$. If $\seqn{\mu_n}$ witnesses that $S_F$ has the JNP, then it witnesses that $N_F$ has the BJNP.
\end{proposition}
\begin{proof}
Assume that $\seqn{\mu_n}$ is a JN-sequence on $S_F$ and let $f\in C^*\big(N_F\big)$. By Lemma \ref{lemma:nf_cstar_embedded_sf}, there is $f'\in C^*\big(S_F\big)=C\big(S_F\big)$ such that $f=f'\restriction N_F$. Since $\supp\big(\mu_n\big)\subseteq N_F$ for every $n\in\omega$, it holds $\mu_n(f)=\mu_n\big(f'\restriction N_F\big)=\mu_n(f')\to0$ as $n\to\infty$. It follows that $\lim_{n\to\infty}\mu_n(f)=0$ for every $f\in C^*\big(N_F\big)$, so $N_F$ has the BJNP.
\end{proof}
Propositions \ref{prop:nf_bjnp_sf_jnp} and \ref{prop:sf_jnp_nf_bjnp} can be used to obtain yet another characterization of those filters $F$ on $\omega$ for which their spaces $N_F$ contain non-trivial convergent sequences, analogous to Proposition \ref{prop:sf_convseq_supps2}.
\begin{corollary}\label{cor:nf_bjnp_supp_m}
Let $F$ be a free filter on $\omega$. Then, the space $N_F$ contains a non-trivial convergent sequence if and only if there are a BJN-sequence $\seqn{\mu_n}$ on $N_F$ and an integer $M\ge 2$ such that $\big|\supp\big(\mu_n\big)\big|=M$ for every $n\in\omega$.
\end{corollary}
\begin{proof}
The implication in the right direction is clear (see the proof of Fact \ref{fact:conv_seq_jnp}). To see the converse, let $\seqn{\mu_n}$ be a BJN-sequence on $N_F$ such that for some natural number $M\ge2$ and every $n\in\omega$ we have $\big|\supp\big(\mu_n\big)\big|=M$. For each $n\in\omega$ write $\supp\big(\mu_n\big)=\big\{x_1^n,\ldots,x_M^n\big\}$. Proposition \ref{prop:nf_bjnp_sf_jnp} yields that $\seqn{\mu_n}$ is a JN-sequence on $S_F$. By \cite[Proposition 5.7]{KSZ}, there are a JN-sequence $\seqk{\nu_k}$ on $S_F$, a strictly increasing sequence $\seqk{n_k\in\omega}$, and a finite sequence $\alpha_1,\ldots,\alpha_M\in\mathbb{R}$ such that for every $k\in\omega$ we have $\supp\big(\nu_k\big)\subseteq\supp\big(\mu_{n_k}\big)\subseteq N_F$ and $\nu_k=\sum_{i=1}^M\alpha_i\delta_{x_i^{n_k}}$.
By Proposition \ref{prop:sf_jnp_nf_bjnp}, the sequence $\seqk{\nu_k}$ is a BJN-sequence on $N_F$. Since all non-zero $\alpha_i$'s are separated from $0$, Theorem \ref{prop:nf_bjnseq_char}.(3) implies that for every $A\in F$ there is $K\in\omega$ such that $\supp\big(\nu_k\big)\subseteq A$ for every $k>K$. In other words, for every $A\in F$ we have $\bigcup_{k\in\omega}\supp\big(\nu_k\big)\subseteq^* A$. But then it follows from Theorem \ref{prop:nf_jnseq_char} that $\seqk{\nu_k}$ is a JN-sequence on $N_F$. Corollary \ref{cor:nf_jnp} implies that $N_F$ contains a non-trivial convergent sequence.
\end{proof}
We will now focus on the issue concerning the Kat\v{e}tov preorder (see Section \ref{sec:af_sf_nf}) and transferring the BJNP or JNP from one space onto another. Let us recall here that the inclusion $F\subseteq G$, for filters on $\omega$, implies that $F\le_{K}G$.
\begin{proposition}\label{prop:rk_bjnp}
Let $F$ and $G$ be free filters on $\omega$ such that $F\le_{K}G$. Then,
\begin{enumerate}
\item if $N_G$ has the JNP, then $N_F$ has the JNP;
\item if $N_G$ has the BJNP, then $N_F$ has the BJNP;
\item if $S_G$ has the JNP, then $S_F$ has the JNP.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) Let $f\colon\omega\to\omega$ be a witness for $F\le_K G$. Assume that $N_G$ has the JNP. By Corollary \ref{cor:nf_jnp} and Lemma \ref{lemma:nf_convseq_frechet}, there is $X\in\ctblsub{\omega}$ such that $X\subseteq^*A$ for every $A\in G$. Let $Y=f[X]$. It follows that $Y\subseteq^*A$ for every $A\in F$. Indeed, if there is $A\in F$ such that $Y\setminus A$ is infinite, then $X\setminus f^{-1}[A]$ is infinite, a contradiction, since $f^{-1}[A]\in G$. One can show in a similar way that $Y$ is infinite. Thus, $N_F$ has the JNP, too.
(2) If $G=Fr$, then $N_G$ has the JNP, so by (1) $N_F$ has the JNP, and in particular the BJNP, too. So assume that $G\neq Fr$ and that $N_G$ has the BJNP. Let $f\colon\omega\to\omega$ be a function such that $F\subseteq f(G)$. Let $\seqn{\mu_n}$ be a sequence of finitely supported probability measures on $N_G$ as in Corollary \ref{cor:nf_bjnp_char}. For each $n\in\omega$ put:
\[\mu_n'=\sum_{x\in\supp(\mu_n)}\mu_n(\{x\})\cdot\delta_{f(x)};\]
then, $\big\|\mu_n'\big\|=1$, $\supp\big(\mu_n'\big)\subseteq\omega$ and $\mu_n'\ge0$. We also have that $\lim_{n\to\infty}\mu_n'(A)=1$ for every $A\in F$. Indeed, if there was $A\in F$ such that $\limsup_{n\to\infty}\mu_n'(\omega\setminus A)>0$, then $f^{-1}[A]\in G$ and $\limsup_{n\to\infty}\mu_n\big(\omega\setminus f^{-1}[A]\big)>0$, which contradicts condition (2) of Corollary \ref{cor:nf_bjnp_char}.
Corollary \ref{cor:nf_bjnp_char} implies that $N_F$ has the BJNP.
(3) Using (2) and Proposition \ref{prop:nf_bjnp_sf_jnp}, we may again assume that $G\neq Fr$. Let $f\colon\omega\to\omega$ be a surjection such that $F\subseteq f(G)$. Assume that $S_G$ has the JNP and let $\seqn{\mu_n}$ be a sequence of finitely supported probability measures on $S_G$ as in Corollary \ref{cor:sf_jnp_char}. Let $\psi\colon S_G\to S_F$ be a continuous surjection defined for every $x\in S_G$ as follows (cf. the proof of Proposition \ref{prop:ordering_sf}):
\[\psi(x)=\big\{A\in\mathcal{A}_F\colon f^{-1}[A]\in x\big\}.\]
For each $n\in\omega$ we define the measure $\mu_n'$ on $S_F$ by the formula:
\[\mu_n'=\sum_{x\in\supp(\mu_n)}\mu_n(\{x\})\cdot\delta_{\psi(x)},\]
and proceed similarly as in (2).
\end{proof}
Let us note that it may happen that $F\le_{RK}G$ and $N_G$ does not have the BJNP or it has the BJNP but not the JNP, but $N_F$ still has the JNP. Indeed, recall that $N_F$ for $F=Fr$ has the JNP and notice that if $G$ is meager, then $Fr\le_{RK}G$ by Talagrand's characterization of meager filters (see Theorem \ref{theorem:nonmeager_filters} in the next section), so it suffices to take such meager $G$ that $N_G$ does not have the BJNP (see Example \ref{example: Fsigma, nonBJNP}) or it has the BJNP but not the JNP (see Remark \ref{rem:density_bjnp_no_jnp} or Theorem \ref{theorem:continuum_many}.(2)). Thus, the converse to Proposition \ref{prop:rk_bjnp} does not hold.
\medskip
We finish this section with some observations concerning the free sums of filters.
Recall that given free filters $F_0$ and $F_1$ on $\omega$, their \textit{free sum} $F_0\oplus F_1$ is a filter on $\omega\times\{0,1\}$ defined by
\[ F_0\oplus F_1 = \big\{(A_0\times\{0\})\cup (A_1\times\{1\})\colon\ A_i\in F_i,\ i=0,1\big\}.\]
By mapping $\omega\times\{0,1\}$ onto $\omega$ via a bijection, we can assume that $F_0\oplus F_1$ is actually a filter on $\omega$. Using this identification, we can unambiguously abuse the notation and speak about the space $N_{F_0\oplus F_1}$. Observe then that $N_{F_0\oplus F_1}$ is the union of two closed subspaces $N'_{F_i} = (\omega\times\{i\})\cup\big\{p_{F_0\oplus F_1}\big\}$, $i=0,1$, such that $N'_{F_0}\cap N'_{F_1}=\big\{p_{F_0\oplus F_1}\big\}$, and which can be identified with the spaces $N_{F_i}$. For each $i=0,1$, the map $r_i\colon N_{F_0\oplus F_1}\to N'_{F_i}$ sending the subset $N'_{F_{1-i}}$ to the point $p_{F_0\oplus F_1}$ and being the identity on $N'_{F_i}$ is a continuous retraction of $N_{F_0\oplus F_1}$ onto $N'_{F_i}$. Therefore, the subspaces $N'_{F_i}$ are $C$-embedded in $N_{F_0\oplus F_1}$, and we can extend any (bounded) continuous function on each $N'_{F_i}$ to a (bounded) continuous function on $N_{F_0\oplus F_1}$.
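For concreteness, under one hypothetical choice of bijection $\omega\times\{0,1\}\to\omega$, namely $(n,i)\mapsto 2n+i$, a generator of the free sum becomes an ordinary subset of $\omega$; the following Python sketch is our own illustration of this identification:

```python
# Identify omega x {0, 1} with omega via the bijection (n, i) -> 2n + i
# (a hypothetical choice; any bijection works). A generator
# (A0 x {0}) union (A1 x {1}) of F0 (+) F1 then corresponds to:
def free_sum_generator(A0, A1):
    return {2 * n for n in A0} | {2 * n + 1 for n in A1}
```

With this identification, the copies $\omega\times\{0\}$ and $\omega\times\{1\}$ become the even and the odd numbers, respectively, i.e. the subspaces $N'_{F_0}$ and $N'_{F_1}$ minus the point $p_{F_0\oplus F_1}$.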
\begin{proposition}\label{prop:product_filters}
Let $F_0$ and $F_1$ be free filters on $\omega$. Then, the space $N_{F_0\oplus F_1}$ has the JNP (BJNP) if and only if $N_{F_0}$ has the JNP (BJNP) or $N_{F_1}$ has the JNP (BJNP).
\end{proposition}
\begin{proof}
The implication in the left direction follows immediately from the obvious fact that if a subspace $Y$ of a space $X$ has the JNP (BJNP), then $X$ also has this property.
To prove the implication in the right direction, let us assume that $N_{F_0\oplus F_1}$ has the BJNP and take a sequence $\seqn{\mu_n}$ of finitely supported probability measures like in Corollary \ref{cor:nf_bjnp_char}. Note that $p_{F_0\oplus F_1}\not\in\supp\big(\mu_n\big)$ for every $n\in\omega$. There is a subsequence $\seqk{\mu_{n_k}}$ such that the limit $\lim_{k\to\infty}\mu_{n_k}\big(N'_{F_0}\big)$ exists---denote it by $\alpha$. We also have $\lim_{k\to\infty}\mu_{n_k}\big(N'_{F_1}\big)=1-\alpha$. If $\alpha>0$, let $i=0$ and $\alpha_0 = \alpha$, otherwise set $i=1$ and $\alpha_1 = 1 - \alpha=1$. By omitting the first several elements of the sequence, we may assume that $\mu_{n_k}\big(N'_{F_i}\big)>0$ for every $k\in\omega$.
We claim that for every $A\in F_0\oplus F_1$ we have $\lim_{k\to\infty}\mu_{n_k}\big(N'_{F_i}\cap A\big)=\alpha_i$. If not, then there are $A\in F_0\oplus F_1$, a subsequence $\seql{\mu_{n_{k_l}}}$, and $\beta\in[0,\alpha_i)$, such that $\lim_{l\to\infty}\mu_{n_{k_l}}\big(N'_{F_i}\cap A\big)=\beta$. But then \[\lim_{l\to\infty}\mu_{n_{k_l}}\big(N'_{F_{1-i}}\cap A\big)=1-\beta>1-\alpha_i= \lim_{l\to\infty}\mu_{n_{k_l}}\big(N'_{F_{1-i}}\big),\]
which is impossible.
For each $k\in\omega$ let $\nu_k=\big(\mu_{n_k}\restriction N'_{F_i}\big)\big/\mu_{n_k}\big(N'_{F_i}\big)$. It follows that each $\nu_k$ is a finitely supported probability measure on $N'_{F_i}$, with $\supp\big(\nu_k\big)\subseteq\omega$, and that for every $A\in F_i$, identifying $N_{F_i}$ with $N'_{F_i}$ and abusing the notation, we have:
\[\lim_{k\to\infty}\nu_k(A)=\lim_{k\to\infty}\big(\mu_{n_k}\restriction N'_{F_i}\big)(A)\big/\mu_{n_k}\big(N'_{F_i}\big)=\lim_{k\to\infty}\mu_{n_k}\big( N'_{F_i}\cap A\big)\big/\mu_{n_k}\big(N'_{F_i}\big)=1,\]
so, by Corollary \ref{cor:nf_bjnp_char}, the space $N'_{F_i}$ has the BJNP.
The case of the JNP is similar but, thanks to Corollary \ref{cor:nf_jnp}, much easier.
\end{proof}
The proof of the following simple lemma is left to the reader.
\begin{lemma}\label{lemma:fr_f1_f2_iso}
Let $F_1$ and $F_2$ be free filters on $\omega$ such that $F_i\restriction A\neq Fr(A)$ for any $A\in\ctblsub{\omega}$ and $i=1,2$. Then, the filters $F_1\oplus Fr$ and $F_2\oplus Fr$ are isomorphic if and only if $F_1$ and $F_2$ are isomorphic.\hfill$\Box$
\end{lemma}
\section{Complexity of filters $F$ whose spaces $N_F$ have the BJNP\label{sec:complexity}}
We will now focus on the question for which filters $F$ the space $N_F$ has the BJNP---the situation is apparently more intricate than that of the JNP (cf. Proposition \ref{prop:nf_convseq_char} and Corollary \ref{cor:nf_jnp}).
Let us start with the following motivational observation. Assume that $F$ is a free filter on $\omega$ such that the space $N_F$ has the BJNP. Let $\seqn{\mu_n}$ be a sequence of finitely supported probability measures like in Corollary \ref{cor:nf_bjnp_char}. Put:
\[G=\big\{A\in{\wp(\omega)}\colon\ \lim_{n\to\infty}\mu_n(A)=1\big\}.\]
It is immediate that $G$ is a filter on $\omega$ containing $F$ and that $N_G$ has the BJNP. It is also an $\mathbb{F}_{\sigma\delta}$ subset of $2^\omega$---to see this note that $G$ satisfies the equality:
\[G=\bigcap_{k\in\omega}\bigcup_{N\in\omega}\bigcap_{n\ge N}\Big\{A\in{\wp(\omega)}\colon\ \mu_n(A)\ge1-\frac{1}{k+1}\Big\},\]
where all the sets on the right hand side are closed subsets of $2^\omega$. We thus get the following proposition.
\begin{proposition}\label{prop:nf_bjnp_fsd}
Let $F$ be a free filter on $\omega$. If $N_F$ has the BJNP, then there exists an $\mathbb{F}_{\sigma\delta}$ filter $G$ on $\omega$ such that $F\subseteq G$ and $N_G$ has the BJNP, too.\hfill$\Box$
\end{proposition}
Proposition \ref{prop:nf_bjnp_fsd} has several immediate consequences. Let us recall the following well-known characterization of non-meager filters on $\omega$ due to Talagrand \cite[Theorem 21]{Tal80}.
\begin{theorem}[Talagrand]\label{theorem:nonmeager_filters}
For every free filter $F$ on $\omega$ the following are equivalent:
\begin{enumerate}
\item $F$ is non-meager;
\item $F$ does not have the Baire property;
\item for every strictly increasing sequence $\seqk{n_k}$ of natural numbers there is a set $A\in F$ such that $A\cap\big[n_k,n_{k+1}\big)=\emptyset$ for infinitely many $k\in\omega$.\hfill$\Box$
\end{enumerate}
\end{theorem}
Sierpi\'nski \cite{Sie38} proved that every free ultrafilter on $\omega$ is non-measurable. Bartoszy\'nski \cite[Theorem 1.1]{Bar92} generalized this result to free filters.
\begin{theorem}[Sierpi\'nski, Bartoszy\'nski]\label{theorem:sierpinski}
If $F$ is a free filter on $\omega$, then either $F$ is of measure zero or it is non-measurable.\hfill$\Box$
\end{theorem}
Since Borel (or, more generally, analytic) subsets of $2^\omega$ have the Baire property and are (universally) measurable, by Proposition \ref{prop:nf_bjnp_fsd} combined with Talagrand's and Bartoszy\'nski--Sierpi\'nski's results we immediately obtain the following corollary.
\begin{corollary}\label{cor:nf_bjnp_meager_meas0}
Let $F$ be a free filter on $\omega$. If $N_F$ has the BJNP, then $F$ is meager and of measure zero.\hfill$\Box$
\end{corollary}
It is also well known that every free ultrafilter on $\omega$ is non-meager.
\begin{corollary}\label{cor:ultrafilter_no_bjnp}
If $F$ is a free ultrafilter on $\omega$, then $N_F$ does not have the BJNP.\hfill$\Box$
\end{corollary}
Note that the last corollary also follows from Propositions \ref{prop:nf_bjnp_sf_jnp} and \ref{prop:sf_bo_ultrafilter} and Fact \ref{fact:bo_bjnp}.
\medskip
Having established the above motivational results and corollaries, we now move to the main part of the section. Recall that in Proposition \ref{prop:nf_bjnp_fsd} we proved that every free filter $F$ for which the space $N_F$ has the BJNP is contained in an $\mathbb{F}_{\sigma\delta}$ filter. It appears that the converse does not hold---in Example \ref{example: Fsigma, nonBJNP} below we present a construction of a free $\mathbb{F}_{\sigma}$ filter $F$ such that $N_F$ does not have the BJNP. Since we also exhibit a large collection of free $\mathbb{F}_{\sigma}$ filters $F$ whose $N_F$ spaces have the BJNP (see Section \ref{sec:summable}), it follows that neither the topological properties of a free filter $F$ on $\omega$, nor the topological properties of the corresponding function spaces $C_p\big(N_F\big)$ and $C^*_p\big(N_F\big)$ determine whether the space $N_F$ has the JNP or the BJNP. Namely, if $F$ and $G$ are any uncountable $\mathbb{F}_{\sigma}$ filters, then they are homeomorphic to the space $\mathbb{Q}\times 2^\omega$ (see \cite{vE}), and all of the spaces $C_p\big(N_F\big)$, $C_p\big(N_G\big)$, $C^*_p\big(N_F\big)$, and $C^*_p\big(N_G\big)$ are homeomorphic (see \cite{DMM}). This means that in order to find a characterization of those free filters $F$ for which their spaces $N_F$ have the BJNP we need to consider more sophisticated properties of filters.
Let us recall some standard definitions concerning ideals and submeasures on $\omega$. A function $\varphi\colon{\wp(\omega)}\to[0,+\infty]$ is \textit{a submeasure} if $\varphi(\emptyset)=0$, $\varphi(\{n\})<\infty$ for every $n\in\omega$, and $\varphi(A)\le\varphi(A\cup B)\le\varphi(A)+\varphi(B)$ for every $A,B\in{\wp(\omega)}$. Every non-negative measure $\mu$ on $\omega$ is a submeasure. A submeasure $\varphi$ is \textit{finite} if $\varphi(\omega)<\infty$, and is \textit{lower semi-continuous} (\textit{lsc}) if $\varphi(A)=\lim_{n\to\infty}\varphi(A\cap[0,n])$ for every $A\in{\wp(\omega)}$. For a lsc submeasure $\varphi$ we define the following two ideals:
\[\Fin(\varphi)=\big\{A\in{\wp(\omega)}\colon\ \varphi(A)<\infty\big\}\]
and
\[\Exh(\varphi)=\big\{A\in{\wp(\omega)}\colon\ \lim_{n\to\infty}\varphi(A\setminus[0,n])=0\big\},\]
called \textit{the finite ideal} and \textit{the exhaustive ideal} of $\varphi$, respectively. Trivially,
\[\finsub{\omega}=Fin\subseteq\Exh(\varphi)\subseteq\Fin(\varphi).\]
One can also easily show that $\Exh(\varphi)$ is an $\mathbb{F}_{\sigma\delta}$ P-ideal and $\Fin(\varphi)$ is an $\mathbb{F}_{\sigma}$ ideal (see \cite[Lemma 1.2.2]{Far00}). Of course, the dual filters $\Fin(\varphi)^*$ and $\Exh(\varphi)^*$ have the same Borel complexity as their dual ideals (and are free).
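As an illustration (our own numerical sketch, not from the paper), membership in $\Fin(\varphi)$ and $\Exh(\varphi)$ can be probed for the summable submeasure $\varphi(A)=\sum_{n\in A}f(n)$ with $f(n)=1/(n+1)^2$; the truncation of $\omega$ to a finite range and the chosen cut points are illustrative assumptions.

```python
def phi(A, f):
    # The summable lsc submeasure generated by the weight function f.
    return sum(f(n) for n in A)

def tail_values(A, f, cuts):
    # Values phi(A \ [0, n]) for several cut points n; membership in
    # Exh(phi) means these values tend to 0 as n grows.
    return [phi({k for k in A if k > n}, f) for n in cuts]

f = lambda n: 1.0 / (n + 1) ** 2      # generates a summable ideal
A = set(range(10000))                 # finite stand-in for omega itself
print(round(phi(A, f), 3))            # about 1.645 (close to pi^2/6), so finite
tails = tail_values(A, f, [10, 100, 1000])
print(tails)                          # decreasing towards 0
```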
Mazur \cite[Lemma 1.2]{Maz91} and Solecki \cite[Theorems 3.1 and 3.4]{Sol99} found the following characterizations of $\mathbb{F}_{\sigma}$ ideals and analytic P-ideals on $\omega$ in terms of lsc submeasures.
\begin{theorem}[Mazur]\label{theorem:mazur}
Let $I$ be an ideal on $\omega$ containing $Fin$. Then, $I$ is an $\mathbb{F}_{\sigma}$ ideal if and only if there is a lsc submeasure $\varphi$ such that $I=\Fin(\varphi)$.\hfill$\Box$
\end{theorem}
\begin{theorem}[Solecki]\label{theorem:solecki}
Let $I$ be an ideal on $\omega$ containing $Fin$.
\begin{enumerate}
\item $I$ is an analytic P-ideal if and only if there is a finite lsc submeasure $\varphi$ such that $I=\Exh(\varphi)$.
\item $I$ is an $\mathbb{F}_{\sigma}$ P-ideal if and only if there is a lsc submeasure $\varphi$ such that $I=\Fin(\varphi)=\Exh(\varphi).$\hfill$\Box$
\end{enumerate}
\end{theorem}
An immediate consequence of Solecki's theorem is that every analytic P-ideal containing $Fin$ is an $\mathbb{F}_{\sigma\delta}$ ideal.
For two submeasures $\varphi$ and $\psi$ on $\omega$ we write $\psi\le\varphi$ if $\psi(A)\le\varphi(A)$ for every $A\in{\wp(\omega)}$. Following Farah \cite[page 21]{Far00}, we call a submeasure $\varphi$ \textit{non-pathological} if for every $A\in{\wp(\omega)}$ we have:
\[\varphi(A)=\sup\big\{\mu(A)\colon\ \mu\text{ is a non-negative measure on }\omega\text{ such that }\mu\le\varphi\big\}.\]
An ideal $I$ on $\omega$ is \textit{non-pathological} if $I=\Exh(\varphi)$ for some non-pathological lsc submeasure $\varphi$. Note that in this case the formula $\psi=\min(\varphi,1)$ defines a non-pathological lsc submeasure such that $I=\Exh(\varphi)=\Exh(\psi)$, so without loss of generality we may always assume that a submeasure defining a non-pathological ideal is finite. An ideal $I$ is \textit{pathological} if it is not non-pathological. Finally, a filter $F$ on $\omega$ is \textit{non-pathological} (resp. \textit{pathological}) if its dual ideal $F^*$ is non-pathological (resp. pathological).
For various characterizations of non-pathological ideals, we refer the reader to \cite[Corollary 5.26]{Hru11} and \cite[Theorem 5.4]{BNFP}.
Density ideals constitute an important subclass of non-pathological ideals. Recall that a submeasure $\varphi$ on $\omega$ is \textit{a density submeasure} if there exists a sequence $\seqn{\mu_n}$ of finitely supported non-negative measures on $\omega$ with pairwise disjoint supports such that:
\[\varphi=\sup_{n\in\omega}\mu_n.\]
An ideal $I$ on $\omega$ is \textit{a density ideal} if there is a density submeasure $\varphi$ such that $I=\Exh(\varphi)$. We then also say that $I$ is \textit{generated by} or \textit{associated to} the sequence $\seqn{\mu_n}$. For basic information concerning density ideals, see \cite[Section 1.13]{Far00}.
We are ready to prove the main theorem of this section providing a characterization of those free filters $F$ for which the spaces $N_F$ have the BJNP. Proposition \ref{prop:nf_bjnp_fsd} and Corollaries \ref{cor:nf_bjnp_meager_meas0} and \ref{cor:ultrafilter_no_bjnp} are immediate consequences of this result.
\theoremnfbjnpnonpath
\begin{proof}
(1)$\Rightarrow$(2) Assume that $N_F$ has the BJNP. Let $\seqn{\mu_n}$ be a sequence of finitely supported probability measures like in Corollary \ref{cor:nf_bjnp_char}. By the remark after the corollary, without loss of generality we may assume that $\mu_n$'s have pairwise disjoint supports such that
\[\tag{$*$}\max\big(\supp\big(\mu_n\big)\big)<\min\big(\supp\big(\mu_{n+1}\big)\big)\]
for every $n\in\omega$. For each $A\in{\wp(\omega)}$ define:
\[\varphi(A)=\sup_{n\in\omega}\mu_n(A).\]
Then, $\varphi$ is a density submeasure.
We need to show that $F\subseteq\Exh(\varphi)^*$. Let $A\in F$ and fix $\varepsilon>0$. By the properties of $\seqn{\mu_n}$, there is $N\in\omega$ such that $\mu_n(\omega\setminus A)<\varepsilon$ for every $n\ge N$. Let
\[K=\min\big(\supp\big(\mu_N\big)\big).\]
By ($*$), for every $n<N$ we have $\supp\big(\mu_n\big)\subseteq[0,K]$, so
\[\mu_n\big((\omega\setminus A)\setminus[0,k]\big)=0\]
for every $k\ge K$. On the other hand, each measure $\mu_n$ is non-negative, so for every $k\ge K$ and $n\ge N$ we have:
\[\mu_n\big((\omega\setminus A)\setminus[0,k]\big)<\varepsilon.\]
It follows that for every $k\ge K$ we have
$\varphi\big((\omega\setminus A)\setminus[0,k]\big)\le\varepsilon.$
Since $\varepsilon$ was arbitrary, we get that
\[\lim_{k\to\infty}\varphi\big((\omega\setminus A)\setminus[0,k]\big)=0,\]
which implies that $\omega\setminus A\in\Exh(\varphi)$ or, equivalently, that $A\in\Exh(\varphi)^*$. We have thus showed that $F\subseteq\Exh(\varphi)^*$.
\medskip
(2)$\Rightarrow$(3) Obvious.
\medskip
(3)$\Rightarrow$(1) Assume that there is a non-pathological lsc submeasure $\varphi$ such that $F$ is contained in the dual filter $\Exh(\varphi)^*$. Without loss of generality we may assume that $\varphi$ is finite (see the sentence after the definition of a non-pathological ideal). Put $I=\Exh(\varphi)$. Since the inclusion $F\subseteq I^*$ implies the relation $F\le_K I^*$, by Proposition \ref{prop:rk_bjnp}.(2), it is enough to prove that $N_{I^*}$ has the BJNP.
Set
\[\alpha=\lim_{n\to\infty}\varphi(\omega\setminus[0,n]),\]
and note that $\alpha>0$ (since otherwise $I={\wp(\omega)}$) and that $\alpha\le\varphi(\omega)<\infty$ (since $\varphi$ is finite). By the monotonicity of $\varphi$, for every $n\in\omega$ we have:
\[\tag{$**$}\varphi(\omega\setminus[0,n])>\alpha/2.\]
Put $n_0=0$. Since $\varphi$ is lower semi-continuous, there exists $n_1>n_0$ such that $\varphi\big(\big[n_0,n_1\big]\big)>\alpha/2$.
By ($**$) we have $\varphi\big(\omega\setminus\big[0,n_1\big]\big)>\alpha/2$, so again, by the lower semi-continuity of $\varphi$, there is $n_2>n_1$ such that $\varphi\big(\big[n_1,n_2\big]\big)>\alpha/2$. We continue in this way until we get a strictly increasing sequence $\seqk{n_k\in\omega}$ satisfying for every $k\in\omega$ the inequality
\[\varphi\big(\big[n_k,n_{k+1}\big]\big)>\alpha/2.\]
The submeasure $\varphi$ is non-pathological, so for each $k\in\omega$ there exists a non-negative measure $\mu_k$ on $\omega$ such that $\mu_k\le\varphi$, $\supp\big(\mu_k\big)\subseteq\big[n_k,n_{k+1}\big]$, and
\[\mu_k\big(\big[n_k,n_{k+1}\big]\big)>\alpha/4.\]
Note that $\alpha/4<\big\|\mu_k\big\|<\infty$ and set $\nu_k=\mu_k\big/\big\|\mu_k\big\|$. The function $\nu_k$ is a finitely supported probability measure on $\omega$.
We claim that the sequence $\seqk{\nu_k}$ satisfies, for every $A\in I^*$, the equality $\lim_{k\to\infty}\nu_k(A)=1$, and thus, by Corollary \ref{cor:nf_bjnp_char}, the space $N_{I^*}$ has the BJNP. Let $B\in I$ and $\varepsilon>0$. Since $\lim_{n\to\infty}\varphi(B\setminus[0,n])=0$, there is $M\in\omega$ such that $\varphi(B\setminus[0,n])<\varepsilon$ for every $n\ge M$. Let $k\in\omega$ be such that $n_k>M$. Then:
\[\nu_k(B)=\mu_k(B)\big/\big\|\mu_k\big\|=\mu_k\big(B\cap\big[n_k,n_{k+1}\big]\big)\big/\big\|\mu_k\big\|\le\varphi\big(B\cap\big[n_k,n_{k+1}\big]\big)\big/\big\|\mu_k\big\|\le\]
\[\varphi\big(B\setminus[0,M]\big)\big/\big\|\mu_k\big\|<4\varepsilon/\alpha.\]
It follows that $\lim_{k\to\infty}\nu_k(B)=0$ for every $B\in I$, hence $\lim_{k\to\infty}\nu_k(A)=1$ for every $A\in I^*$. The proof is thus finished.
\end{proof}
The following results are immediate consequences of Theorem \ref{theorem:nf_bjnp_nonpath}.
\begin{corollary}\label{cor:density_bjnp}
If $I$ is a density ideal, then $N_{I^*}$ has the BJNP.\hfill$\Box$
\end{corollary}
\begin{corollary}\label{cor:nonpath_sub_dens}
Every non-pathological ideal is contained in some density ideal.\hfill$\Box$
\end{corollary}
In Proposition \ref{prop:nf_convseq_char} and Corollary \ref{cor:nf_jnp} we proved that for a given free filter $F$ the space $N_F$ has the JNP if and only if the dual ideal $F^*$ is not tall. Note that in the case of ideals of the form $\Exh(\varphi)$ we have an easy characterization of tallness: for a lsc submeasure $\varphi$ the ideal $\Exh(\varphi)$ is tall if and only if $\lim_{n\to\infty}\varphi(\{n\})=0$.
\begin{proposition}\label{prop:nf_jnp_exh_tall}
Let $\varphi$ be a lsc submeasure on $\omega$. The following are equivalent:
\begin{enumerate}
\item $N_{\Exh(\varphi)^*}$ has the JNP;
\item $\Exh(\varphi)$ is not tall;
\item $\limsup_{n\to\infty}\varphi(\{n\})>0$.\hfill$\Box$
\end{enumerate}
\end{proposition}
\begin{remark}\label{rem:density_bjnp_no_jnp}
A prototype ideal for the class of density ideals is \textit{the (asymptotic) density zero ideal} $I_{d}=\Exh\big(\varphi_d\big)$, where \textit{the asymptotic density submeasure $\varphi_d$} is defined for every $A\in{\wp(\omega)}$ as follows:
\[\varphi_d(A)=\sup_{n\in\omega}\frac{\big|A\cap\big[2^n,2^{n+1}\big)\big|}{2^n}.\]
One can show that for every $A\in{\wp(\omega)}$ the following equivalence holds:
\[A\in\Exh\big(\varphi_d\big)\quad\Longleftrightarrow\quad\limsup_{n\to\infty}\frac{\big|A\cap[0,n)\big|}{n}=0.\]
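The displayed equivalence can be checked numerically; in the following sketch of ours (outside the paper's argument), the squares have upper asymptotic density tending to $0$, while the even numbers keep density $1/2$, so only the former set lies in $\Exh\big(\varphi_d\big)$.

```python
# Numerical sketch (ours): the squares have upper asymptotic density 0,
# hence lie in Exh(phi_d), while the even numbers have density 1/2.

def density_along(A, ns):
    # |A intersect [0, n)| / n for several values of n.
    return [len([a for a in A if a < n]) / n for n in ns]

N = 2 ** 16
squares = {k * k for k in range(256)}
evens = {n for n in range(N) if n % 2 == 0}
ns = [2 ** 8, 2 ** 12, 2 ** 16]
print(density_along(squares, ns))  # [0.0625, 0.015625, 0.00390625] -> 0
print(density_along(evens, ns))    # [0.5, 0.5, 0.5]
```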
Let us call the dual filter $F_d=\Exh\big(\varphi_d\big)^*$ simply \textit{the (asymptotic) density filter}. The BJNP and JNP of the spaces associated to $F_d$ were already studied in \cite[Section 4]{BKS1}, \cite[Section 5.1]{KSZ}, and \cite[Example 4.2]{KMSZ}, where it was among others proved that:
\begin{itemize}
\item $S_{F_d}$ has no non-trivial convergent sequences but contains many copies of ${\beta\omega}$;
\item $S_{F_d}$ has the JNP and there is a JN-sequence $\seqn{\mu_n}$ on $S_{F_d}$ such that $\supp\big(\mu_n\big)\subseteq\omega$ for every $n\in\omega$;
\item every JN-sequence $\seqn{\mu_n}$ on $S_{F_d}$ satisfies the condition $\lim_{n\to\infty}\big|\supp\big(\mu_n\big)\big|=\infty$;
\item $N_{F_d}$ has the BJNP but not the JNP.
\end{itemize}
Note that most of the above facts may be easily deduced from more general results presented in this paper (cf. e.g. Proposition \ref{prop:sf_convseq_supps2}). In particular, the last fact follows from Theorem \ref{theorem:nf_bjnp_nonpath} and Proposition \ref{prop:nf_jnp_exh_tall}.
\end{remark}
\subsection{Summable ideals\label{sec:summable}}
Summable ideals constitute yet another class of non-pathological ideals. Recall that an ideal $I$ is \textit{summable} if there exists a function $f\colon\omega\to[0,\infty)$ such that for the non-pathological lsc submeasure $\varphi_f$ defined, for every $A\in{\wp(\omega)}$, by the formula:
\[\varphi_f(A)=\sum_{n\in A}f(n),\]
we have $I=\Exh\big(\varphi_f\big)$. We also say that $I$ is \textit{generated by} or \textit{associated to} $f$. Notice that for every $A\in{\wp(\omega)}$ we have:
\[A\in\Exh\big(\varphi_f\big)\quad\Longleftrightarrow\quad\sum_{n\in A}f(n)<\infty,\]
so $\Exh\big(\varphi_f\big)=\Fin\big(\varphi_f\big)$. In particular, every summable ideal is an $\mathbb{F}_{\sigma}$ P-ideal.
The following corollary follows immediately from the fact that summable ideals are non-pathological.
\begin{corollary}\label{cor:summable_bjnp}
If $I$ is a summable ideal, then $N_{I^*}$ has the BJNP.\hfill$\Box$
\end{corollary}
The next result was mentioned without a proof in \cite[Remark 4.3]{Mar95}. It yields that there are continuum many non-homeomorphic spaces $N_F$, induced by non-isomorphic summable ideals, which have the BJNP but do not have the JNP (see Theorem \ref{theorem:continuum_many}).
\begin{theorem}\label{theorem:continuum_summable}
For each $p\in(0,1]$ define the function $f_p\colon\omega\to[0,\infty)$ as follows:
\[f_p(n)=1/(n+1)^p,\]
where $n\in\omega$, and set $F_p=\Exh\big(\varphi_{f_p}\big)^*$. Then, for every $0<p<q\le 1$, the filters $F_p$ and $F_q$ are not isomorphic and hence the spaces $N_{F_p}$ and $N_{F_q}$ are not homeomorphic.
\end{theorem}
Since $F_p=\Exh\big(\varphi_{f_p}\big)^* = \Fin\big(\varphi_{f_p}\big)^*$, the above theorem follows immediately from the following lemma.
\begin{lemma}\label{lemma:continuum_summable}
Let $f\colon \omega\to \omega$ be a bijection and let $0<p<q\le 1$. Then, there exists $A\subseteq \omega$ such that
\[\sum_{n\in A} 1/(n+1)^p = \infty\quad\text{and}\quad\sum_{n\in f[A]} 1/(n+1)^q<\infty.\]
\end{lemma}
\begin{proof}
Let $k\in\omega$. From the fact that
\[f\big[\big\{n\in [0,2k)\colon\ f(n) < n/2\big\}\big] \subseteq [0,k)\]
it easily follows that
\[ \label{LSC1} \tag{P.1} \big|\big\{n\in [0,2k)\colon\ f(n) \ge n/2\big\}\big|\ge k.\]
Put
\[H_k = \big\{n\in [0,2k)\colon\ f(n) \ge n/2\big\};\]
so, by (\ref{LSC1}), $\big|H_k\big|\ge k$.
By induction we will construct a sequence $\seqn{A_n}$ of pairwise disjoint finite subsets of $\omega$ such that, for every $n\in\omega$, the following two inequalities hold:
\[\label{LSC2}\tag{P.2} \sum_{i\in A_n} 1/(i+1)^p\ge 1\]
and
\[\label{LSC3}\tag{P.3} \sum_{j\in f[A_n]} 1/(j+1)^q\le 2^{-n}.\]
We start with $A_0 = \{0\}$. Suppose that we have found pairwise disjoint finite sets $A_0,\dots,A_n$ satisfying (\ref{LSC2}) and (\ref{LSC3}). Take $l_0\in \omega$ such that
\[(l_0+1)^{p-q} < 2^{-(n+2+q)},\]
which implies that for every $i\ge l_0$ it holds:
\[\label{LSC4}\tag{P.4} \big(2/(i+1)\big)^q < 2^{-(n+2)}\big(1/(i+1)\big)^p .\]
Next, find $l_1\in \omega$ such that $l_1^{1-p} > 4^p$. Then, $l_1>1$ and for every $k\ge l_1$ we have $k\big(1/(4k)\big)^p>1$, so
\[\label{LSC5} \tag{P.5} \sum_{i=3k}^{4k-1} 1/(i+1)^p > k\big(1/(4k)\big)^p > 1 .\]
Finally, set $l_2 = \max\big(\bigcup_{i=0}^n A_i\big) + 1$ and $m = \max\big(l_0,l_1,l_2\big)$.
By (\ref{LSC1}), we have $\big|H_{2m}\big| \ge 2m$. Since $\bigcup_{i=0}^n A_i \subseteq \big[0,l_2\big)$ and $l_2\le m$, the set
\[B = H_{2m}\setminus [0,m)\]
satisfies the following conditions:
\[\label{LSC6} \tag{P.6} |B|\ge m\quad \mbox{and}\quad B\cap \bigcup_{i=0}^n A_i = \emptyset .\]
We have $H_{2m}\subseteq [0,4m)$, therefore, by (\ref{LSC5}) and (\ref{LSC6}), it holds
\[ \sum_{i\in B} 1/(i+1)^p \ge \sum_{i=3m}^{4m-1} 1/(i+1)^p > 1 .\]
Let $A_{n+1}$ be a subset of $B$ of minimal cardinality such that
\[\sum_{i\in A_{n+1}} 1/(i+1)^p \ge 1,\]
so (\ref{LSC2}) is satisfied for $n+1$. Then,
\[ \label{LSC7} \tag{P.7} 2 > \sum_{i\in A_{n+1}} 1/(i+1)^p \ge 1.\]
From (\ref{LSC4}) and the definitions of the sets $H_{2m}$ and $B$, it follows that for every $i\in B$ we have:
\[ \label{LSC8} \tag{P.8} 1/\big(f(i)+1\big)^q < 1/\big(i/2 + 1\big)^q < \big(2/(i+1)\big)^q < 2^{-(n+2)}\big(1/(i+1)\big)^p .\]
Hence, by (\ref{LSC7}) and (\ref{LSC8}), we obtain
\[ \sum_{j\in f[A_{n+1}]} 1/(j+1)^q < 2^{-(n+2)} \sum_{i\in A_{n+1}} 1/(i+1)^p < 2^{-(n+2)}2 = 2^{-(n+1)},\]
so (\ref{LSC3}) is satisfied for $n+1$.
To finish the proof, put $A = \bigcup_{n\in\omega} A_n$.
\end{proof}
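The lemma is needed for an arbitrary bijection $f$, but already its simplest instance, $f=\mathrm{id}$ with $p=1/2$ and $q=1$, is witnessed by the set of squares: the $p$-sums over the squares behave like the harmonic series while the $q$-sums converge. The following quick numerical check is our own illustration, not part of the proof.

```python
# Partial sums over A = {k^2 : 1 <= k <= K} for p = 1/2 and q = 1 and the
# identity bijection: the p-sums grow without bound (like log K), while the
# q-sums stabilize, so A separates the two summable ideals.

def partial_sum(A, p):
    return sum(1.0 / (n + 1) ** p for n in A)

for K in (10 ** 2, 10 ** 3, 10 ** 4):
    A = [k * k for k in range(1, K + 1)]
    print(K, round(partial_sum(A, 0.5), 3), round(partial_sum(A, 1.0), 3))
# the middle column keeps growing; the last column converges
```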
\medskip
\theoremcontinuummany
\begin{proof}
Let $\mathcal{F}_2$ be the family of filters $F_p$, $0<p\le 1$, from Theorem \ref{theorem:continuum_summable}. It follows that for each $F\in\mathcal{F}_2$ the space $N_F$ has the BJNP but, by Proposition \ref{prop:nf_jnp_exh_tall}, it does not have the JNP. Obviously, $\big|\mathcal{F}_2\big|=\mathfrak{c}$. Condition (B) is thus satisfied.
We put $\mathcal{F}_1 = \{F\oplus Fr\colon F\in \mathcal{F}_2\}$ (see the final part of Section \ref{sec:nf_char_jnp_bjnp}).
It is clear that each $G\in \mathcal{F}_1$ is an $\mathbb{F}_{\sigma}$ P-filter such that $N_G$ has the JNP, see Proposition \ref{prop:product_filters}. By Lemma \ref{lemma:fr_f1_f2_iso}, no two members of $\mathcal{F}_1$ are isomorphic.
\end{proof}
Dropping the assumption concerning the descriptive complexity of filters in Theorem \ref{theorem:continuum_many}, we may obtain families of cardinality $2^{\mathfrak{c}}$ containing pairwise non-isomorphic filters on $\omega$ whose spaces $N_F$ have the BJNP but not the JNP, etc.
\theoremtwotocontinuummany
\begin{proof}
Let $\mathcal{G}_5$ be the family of all free ultrafilters on $\omega$ and let $F$ be a fixed element of the family $\mathcal{F}_2$ from Theorem \ref{theorem:continuum_many} (e.g. $F=F_{1/2}$). We put:
\begin{align*}
\mathcal{G}_3 &= \{G\oplus Fr\colon G\in \mathcal{G}_5\},\text{ and}\\
\mathcal{G}_4 &= \{G\oplus F\colon G\in \mathcal{G}_5\}\,.
\end{align*}
Using Corollary \ref{cor:ultrafilter_no_bjnp} and Proposition \ref{prop:product_filters}, one can easily check that, for any $H\in \mathcal{G}_i,\ i=3,4,5$, the space $N_H$ has the properties declared in conditions (a)--(c). Since $|\mathcal{G}_5| = 2^{\mathfrak{c}}$ and each family of pairwise isomorphic filters has cardinality at most $\mathfrak{c}$, for each $i=3,4,5$ we can select a subfamily $\mathcal{F}_i\subseteq \mathcal{G}_i$ consisting of $2^{\mathfrak{c}}$ many pairwise non-isomorphic filters.
\end{proof}
\noindent Proposition \ref{prop:ordering_nf}.(2) implies that for $i=1,\dots,5$ and each pair of distinct filters $F,G$ belonging to the family $\mathcal{F}_i$ from Theorems \ref{theorem:continuum_many} and \ref{theorem:two_to_continuum_many} their spaces $N_F$ and $N_G$ are not homeomorphic.
\medskip
\subsection{Density ideals without the Bolzano--Weierstrass property\label{sec:bwp}}
Let $I$ be an ideal on $\omega$. Recall that a sequence $\seqn{x_n\in\mathbb{R}}$ is \textit{$I$-convergent} to $x\in \mathbb{R}$ if for every $\varepsilon>0$ we have $\big\{n\in\omega\colon\ \big|x_n - x\big| > \varepsilon\big\}\in I$. We say that $I$ has \textit{the Bolzano--Weierstrass property} (or, in short, \textit{the BWP}) if for any bounded sequence $\seqn{x_n\in\mathbb{R}}$ there is an $I$-convergent subsequence $\seq{x_n}{n\in A}$ with $A\not\in I$ (see \cite{FMRS07}).
We will be interested in density ideals without the Bolzano--Weierstrass property. A standard example of such an ideal is the density zero ideal $I_{d}$ (see Remark \ref{rem:density_bjnp_no_jnp}). Observe that each ideal without the BWP is tall.
In Section \ref{sec:af_sf_nf} we recalled the Kat\v{e}tov and Rudin--Keisler preorders on filters (equivalently, on ideals). For the purposes of this subsection we additionally recall that an ideal $I$ is \textit{Rudin--Blass below} an ideal $J$, denoted by $I\le_{RB} J$, if there is a finite-to-one function $f\colon\omega \to \omega$ such that:
\[I=\big\{A\in{\wp(\omega)}\colon\ f^{-1}[A]\in J\big\},\]
that is, $I=f(J)$. It follows trivially that if $I\le_{RB} J$, then $I\le_K J$. We say that $I$ and $J$ are \textit{Rudin--Blass equivalent} (in short, $RB$-equivalent) if $I\le_{RB} J$ and $J\le_{RB} I$. Of course, using dual filters we may also apply the above notions to filters.
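The pushforward $f(J)$ can be sketched computationally as follows (our own illustration on a finite truncation of $\omega$; the stand-in for the ideal of finite sets and the threshold are assumptions of the sketch).

```python
# Sketch (ours) of the pushforward ideal f(J) = {A : f^{-1}[A] in J} for a
# finite-to-one f, on a truncated omega; "J" is a crude finite stand-in for
# the ideal of finite sets.

def pushforward(f, in_J, N):
    # Membership predicate for f(J) on subsets of {0, ..., N-1}.
    def in_fJ(A):
        preimage = {n for n in range(N) if f(n) in A}
        return in_J(preimage)
    return in_fJ

in_J = lambda A: len(A) <= 3   # "small" sets of the truncated ideal J
f = lambda n: n // 2           # a finite-to-one (2-to-1) function
in_fJ = pushforward(f, in_J, 20)
print(in_fJ({0}))              # True: the preimage {0, 1} is small
print(in_fJ(set(range(10))))   # False: the preimage is all of {0, ..., 19}
```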
Let $\mathcal{F}_{BJNP}$ (resp. $\mathcal{F}_{JNP}$) denote the family of all free filters $F$ on $\omega$ such that the space $N_F$ has the BJNP (resp. the JNP). From Corollary \ref{cor:nf_jnp} and Proposition \ref{prop:nf_convseq_char} it follows that every two elements of $\mathcal{F}_{JNP}$ are Kat\v{e}tov equivalent, so every element of $\mathcal{F}_{JNP}$ is maximal in the Kat\v{e}tov preorder. In Theorem \ref{theorem:nf_bjnp_nonpath} we showed that for any filter $F\in\mathcal{F}_{BJNP}$ there is a density ideal $I$ such that $F\subseteq I^*$, so in particular $F\le_K I^*$. Building on \cite[Lemma 1.13.10]{Far00}, Tryba \cite[Corollary 3.17]{Try21} proved that any two density ideals $I$ and $J$ without the BWP are Rudin--Blass equivalent. Using an argument similar to that in the proof of \cite[Lemma 1.13.10]{Far00}, we can obtain the following result concerning the maximal elements of $\mathcal{F}_{BJNP}$.
\theoremeuideals
To simplify the notation in the proof, for a non-negative measure $\mu$ supported on a finite set $X$ we define the following number (cf. \cite[Section 1.13]{Far00}):
\[ \at(\mu) = \max\big\{\mu(x)\colon x\in X\big\}.\]
We will need the following simple lemma.
\begin{lemma}\label{Lemma: EU ideals}
Let $\lambda$ be a probability measure on a finite non-empty set $A$ and let $\varepsilon > 0$. If $\mu$ is a probability measure on a finite set $B$ such that $\at(\mu) < \varepsilon/(2|A|)$, then there exists a function $f\colon B\to A$ such that for every $C\subseteq A$ we have:
\[\big|\lambda(C) - \mu\big(f^{-1}[C]\big)\big| < \varepsilon.\]
\end{lemma}
\begin{proof}
Using our estimate for $\at(\mu)$ we can find, for each $a\in A$, a (possibly empty) set $X_a\subseteq B$ such that
\[\lambda(\{a\}) - \varepsilon/(2|A|) < \mu(X_a) \le \lambda(\{a\})\]
and $X_a\cap X_{a'} = \emptyset$ for every $a\ne a'$.
Put:
\[Y = B\setminus \bigcup\big\{X_a: a\in A\big\},\]
and note that the above conditions imply that $\mu(Y) < \varepsilon/2$.
Fix any $a_0\in A$ and define the function $f\colon B\to A$ for every $b\in B$ by the formula:
\[f(b) = \begin{cases}
a,& \text{if } b\in X_a,\\
a_0,& \text{if } b\in Y.
\end{cases}
\]
A routine calculation shows that $f$ has the desired property.
\end{proof}
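The proof amounts to a greedy packing of the small atoms of $\mu$ into blocks $X_a$ whose mass approximates $\lambda(\{a\})$ from below; the following sketch of ours (function names and the concrete measures are illustrative assumptions) implements this packing and checks the resulting error bound on an example.

```python
# Sketch (ours) of the greedy packing behind the lemma: atoms of mu are
# grouped into blocks X_a whose mu-mass approximates lambda({a}) from below;
# the leftover set Y is sent to a fixed point a0.

def build_f(lam, mu):
    # lam: dict a -> lambda({a}); mu: dict b -> mu({b}); both probabilities.
    f, remaining = {}, list(mu)
    for a, target in lam.items():
        mass = 0.0
        while remaining and mass + mu[remaining[0]] <= target:
            b = remaining.pop(0)
            f[b] = a
            mass += mu[b]
    a0 = next(iter(lam))
    for b in remaining:          # the leftover set Y
        f[b] = a0
    return f

lam = {0: 0.5, 1: 0.3, 2: 0.2}
B = range(100)
mu = {b: 0.01 for b in B}        # at(mu) = 0.01 < eps/(2|A|) for eps = 0.1
f = build_f(lam, mu)
err = max(abs(lam[a] - sum(mu[b] for b in B if f[b] == a)) for a in lam)
print(err < 0.1)                 # True: the pushforward of mu approximates lam
```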
\begin{proof}[Proof of Theorem \ref{thm: EU ideals}]
Let $I$ be a density ideal without the BWP and let $I_{d}$ be the density zero ideal (see Remark \ref{rem:density_bjnp_no_jnp}). Since $I_{d}\le_{RB} I$ by \cite[Corollary 3.17]{Try21}, the dual filters satisfy $I^*_{d}\le_{K} I^*$. Hence, it is enough to show that, for any filter $G$ in $\mathcal{F}_{BJNP}$, we have $G\le_K F_d$, where $F_d= I^*_{d}$.
Let $\seqn{\mu_n}$ be the standard sequence of probability measures on $\omega$ associated with $I_{d}$, i.e., each $\mu_n$ is supported on the interval $B_n =\big [2^n,2^{n+1}\big)$, and all singletons in $\big[2^n,2^{n+1}\big)$ have the same $\mu_n$-measure $1/2^n$, so that $\varphi_d = \sup_{n\in\omega} \mu_n$ and $I_{d} = \Exh(\varphi_d)$ (see again Remark \ref{rem:density_bjnp_no_jnp}). We will use the following two properties of the sequence $\seqn{\mu_n}$:
\[\label{LEU1} \tag{$*$} \lim_{n\to\infty} \at\big(\mu_n\big) = 0,\]
and
\[ F_d = \Big\{A\subseteq \omega\colon \lim_{n\to\infty} \mu_n(A) = 1\Big\}.\]
Let $G$ be any filter in $\mathcal{F}_{BJNP}$.
By Corollary \ref{cor:nf_bjnp_char} and the remark following it, there is a sequence $\seqn{\lambda_n}$ of probability measures on $\omega$ supported on pairwise disjoint finite sets such that
\[\label{LEU3} G \subseteq\Big\{B\subseteq \omega\colon \lim_{n\to\infty} \lambda_n(B) = 1\Big\}.\]
For each $n\in\omega$ let $A_n = \supp(\lambda_n)$. Without loss of generality, we can assume that the sequence $\seqn{\big|A_n\big|}$ is non-decreasing. By (\ref{LEU1}), for each $n\in\omega$, we can define
\[\label{LEU4} i_n = \min\Big\{i\in\omega\colon\ \at\big(\mu_j\big) < 1\big/\big(2(n+1)|A_n|\big)\text{ for each }j\ge i\Big\}.\]
Observe that the sequence $\seqn{i_n}$ is non-decreasing and unbounded. For each $n\in\omega$ and $j\in\big[i_n,i_{n+1}\big)$, we apply Lemma \ref{Lemma: EU ideals} for the measures $\lambda_n$ and $\mu_j$, and $\varepsilon_n = 1/(n+1)$, obtaining a function $f_j\colon B_j\to A_n$ such that for every $C\subseteq A_n$ we have:
\[\label{LEU5} \tag{$**$}\big|\lambda_n(C) - \mu_j\big(f_j^{-1}[C]\big)\big| < 1/(n+1).\]
Observe that $\omega\setminus \bigcup_{n\in\omega} B_n = \{0\}$ and that $\mu_n(\{0\}) = 0$ for every $n\in\omega$. Now, we can define a finite-to-one function $f\colon \omega \to \omega$ by the formula
\[\label{LEU6} f(l) = \begin{cases}
f_j(l), & \text{if } l\in B_j,\\
0,& \text{if } l = 0,
\end{cases}
\]
for every $l\in\omega$. It remains to verify that $f$ is a witness for $G\le_K F_d$, i.e., $f^{-1}[X]\in F_d$ for any $X\in G$. So, fix $X\in G$ and $p\in\omega$. There is $m\in\omega$, $m\ge p$, such that for every $n\ge m$ we have:
\[\label{LEU7}\lambda_n(X) > 1 - 1/(p+1).\]
Then, by (\ref{LEU5}) and the definition of $f$, for every $j\ge i_m$, we obtain:
\[\label{LEU8} \mu_j\big(f^{-1}[X]\big) > 1 - 2/(p+1),
\]
which shows that $\lim_{j\to\infty} \mu_j\big(f^{-1}[X]\big) = 1$ and hence that $f^{-1}[X]\in F_d$.
\end{proof}
Note that since the function $f$ witnessing $G\le_K F_d$, constructed in the above proof, is finite-to-one and $F_d\le_{RB} I^*$, we can also find a finite-to-one function $g\colon \omega \to \omega$ witnessing $G\le_K I^*$. This observation can be used to show that the space $N_{F_d}$ corresponding to the density filter $F_d$ is universal in some sense for the family of all spaces $N_G$ having the BJNP.
\begin{corollary}\label{cor: EU ideals}
Let $F$ be a filter dual to a density ideal $I$ without the BWP. Then, for any $G\in\mathcal{F}_{BJNP}$ there exists a continuous finite-to-one surjection $\varphi\colon N_F\to N_G$ such that $\varphi^{-1}\big(p_G\big) = \big\{p_F\big\}$.
\end{corollary}
\begin{proof}
Since the ideal $I$ is tall, we can find an infinite set $X\in I$. Clearly, $(\omega\setminus X)\in F$ and $X$ is a closed and discrete subset of $N_F$.
Let $g\colon \omega \to \omega$ be a finite-to-one function witnessing $G\le_K F$, see the discussion above. Let $h \colon X \to \omega$ be any bijection. One can easily verify that the function $\varphi\colon N_F\to N_G$ defined for every $x\in N_F$ by the formula:
\[\varphi(x) = \begin{cases}
p_G,& \text{if } x = p_F,\\
g(x),& \text{if } x \in \omega\setminus X,\\
h(x),& \text{if } x\in X,
\end{cases}
\]
has the required properties.
\end{proof}
In the case of the JNP, using Corollary \ref{cor:nf_jnp}, a similar but stronger result may be easily obtained for filters of the form $Fr(A,\omega)$, where $A\in\wp(\omega)$ is infinite and co-infinite: for every filter $G\in\mathcal{F}_{JNP}\setminus\{Fr\}$ there is a continuous bijection $\varphi\colon N_{Fr(A,\omega)}\to N_G$.
\subsection{An $\mathbb{F}_{\sigma}$ filter $F$ such that $N_F$ does not have the BJNP}
At the beginning of this section we argued that any free filter $F$ on $\omega$ whose space $N_F$ has the BJNP is contained in some $\mathbb{F}_{\sd}$ filter. This also follows from Theorem \ref{theorem:nf_bjnp_nonpath}. The converse however does not hold---Filip\'ow and Szuca \cite[Example 3.6]{FS10} constructed an example of an $\mathbb{F}_{\sigma}$ P-ideal which cannot be covered by a summable ideal and recently it was proved by Filip\'ow and Tryba \cite[Theorem 4.12]{FT19} that the same ideal cannot be covered by a non-pathological ideal.
Using different tools than those in \cite{FS10} and \cite{FT19}, we present below yet another construction of an $\mathbb{F}_{\sigma}$ P-ideal which cannot be covered by a non-pathological ideal. Note that both of the constructions (that is, one in \cite{FS10} as well as ours) strengthen the results of Mazur \cite{Maz91} who constructed an $\mathbb{F}_{\sigma}$ ideal which is not contained in any summable ideal and of Farah \cite[Sections 1.9 and 1.11]{Far00} who provided an example of an $\mathbb{F}_{\sigma}$ P-ideal which is pathological (and hence not summable)\footnote{It must however be noted that the general idea standing behind all the four constructions seems to be quite similar.}.
We will need the following result of Herer and Christensen \cite[Theorem 1]{HCh}.
\begin{theorem}[Herer--Christensen]\label{thm_HCh}
For each $\varepsilon> 0$, there exist a finite set $X$ and a submeasure $s$ defined on the algebra $\wp(X)$ such that $s(X)=1$ and any non-negative measure $\mu$ defined on $\wp(X)$ and dominated by $s$ (i.e., $\mu\le s$) satisfies $\mu(X)<\varepsilon$.\hfill$\Box$
\end{theorem}
\begin{lemma}\label{HCh_sub}
Let $\varepsilon> 0$ and let $s$ be a submeasure on a set $X$ given by Theorem \ref{thm_HCh}. Then, for any non-negative finite measure $\mu$ on $\wp(X)$, there exists a set $A\subseteq X$ such that $\mu(A)\ge \mu(X)/2$ and $s(A)\le 2\varepsilon$.
\end{lemma}
\begin{proof}
Put $a=\mu(X)$; without loss of generality we can assume that $a>0$. Observe that, by the properties of $s$, the measure $(\varepsilon/a)\mu$ is not dominated by $s$, so there is $B\subseteq X$ such that $\mu(B) > (a/\varepsilon)s(B)$.
Let $A$ be a subset of $X$ that is maximal with respect to inclusion among those satisfying
\[\mu(A) > \big(a/(2\varepsilon)\big)s(A).\]
By the maximality of $A$, for every $B\subseteq X\setminus A$, we have:
\[\label{sub1}\tag{$*$}\mu(B) \le \big(a/(2\varepsilon)\big)s(B).\]
Let $b=\mu(A)$. We will show that $b\ge a/2$. Suppose to the contrary that $b < a/2$, then $a-b = \mu(X\setminus A) > 0$. Hence, we can consider a measure $\nu$ on $\wp(X)$ defined for every $C\subseteq X$ by the formula
\[\nu(C) = \big(\varepsilon/(a-b)\big)\mu(C\setminus A).\]
We have $\nu(X) = \varepsilon$, therefore $\nu$ is not dominated by $s$, so there is a set $B\subseteq X$ such that
\[\label{sub2}\tag{$**$}\nu(B) > s(B).\]
Since
\[\nu(B\setminus A)=\nu(B)>s(B)\ge s(B\setminus A),\]
we can assume that $B\subseteq X\setminus A$. Using the definition of $\nu$ and combining inequalities (\ref{sub1}) and (\ref{sub2}), we obtain:
\[\big(\varepsilon/(a-b)\big)\cdot\big(a/(2\varepsilon)\big)s(B)\ge \big(\varepsilon/(a-b)\big)\cdot\mu(B) > s(B),\]
which, after dividing the outer sides by $s(B)$ (note that $s(B)>0$, as otherwise the displayed chain would be contradictory) and simplifying, leads to the inequality $2b > a$, contradicting our assumption on $b$.
It remains to observe that the inequality $\mu(A) > \big(a/(2\varepsilon)\big)s(A)$ together with $a \ge \mu(A)$ gives the desired estimate $s(A)\le 2\varepsilon$.
\end{proof}
\begin{example}\label{example: Fsigma, nonBJNP}
\textit{There exists a free $\mathbb{F}_{\sigma}$ P-filter $F$ on $\omega$ such that the space $N_F$ does not have the BJNP.}
\end{example}
\begin{proof}
For each $n\in\omega$, let $A_n$ be a finite set and $s_n$ be a submeasure on $\wp(A_n)$ given by Theorem \ref{thm_HCh} applied for $\varepsilon = 2^{-n}$. Without loss of generality we can assume that the sets $A_n$ are pairwise disjoint subsets of $\omega$ and that $\omega=\bigcup_{n\in\omega}A_n$. For every $A\in\wp(\omega)$ set:
\[\varphi(A)=\sum_{n\in\omega}s_n\big(A_n\cap A\big),\]
and notice that $\varphi$ is an lsc submeasure on $\omega$ such that $\Fin(\varphi)=\Exh(\varphi)$. Put $I=\Fin(\varphi)$ and $F=I^*$. By Theorem \ref{theorem:solecki}.(2), $F$ is an $\mathbb{F}_{\sigma}$ P-filter. Of course, $F$ is free.
We will show that the space $N_F$ does not have the BJNP. So assume that it has the BJNP and let $\seqk{\mu_k}$ be a sequence of finitely supported probability measures on $N_F$ as in Corollary \ref{cor:nf_bjnp_char}. For each $k\in\omega$, set $B_k = \supp\big(\mu_k\big)$.
As noticed after the proof of the corollary, we can additionally assume that the sets $B_k$ are pairwise disjoint. Passing to a subsequence of $\seqk{\mu_k}$, if necessary, we can also require that each set $A_n$ intersects at most one set $B_k$. Let thus $P$ be the set of all those $n\in\omega$ for which there exists (a unique) $k(n)\in\omega$ such that $A_n\cap B_{k(n)}\ne \emptyset$. For every $n\in P$, put $C_n = A_n\cap B_{k(n)}$.
Applying Lemma \ref{HCh_sub} for the submeasure $s_n$ and the measure $\mu_{k(n)}\restriction C_{n}$, $n\in P$, we can find a subset $D_{n}$ of $C_{n}$ such that
\[\mu_{k(n)}\big(D_{n}\big)\ge \mu_{k(n)}\big(C_{n}\big)/2\]
and
\[s_n\big(D_n\big)\le 2\cdot 2^{-n} = 2^{-n+1}.\]
Let $D = \bigcup_{n\in P} D_n$. For each $k\in\omega$, put $F_k=\big\{n\in P\colon\ k=k(n)\big\}$ and note that $B_k=\bigcup\big\{C_n\colon n\in F_k\big\}$. From the inequality $s_n(D_n)\le 2^{-n+1}$ for $n\in P$, we conclude that $D$ belongs to the ideal $I$. On the other hand, for each $k\in\omega$, we have:
\[\mu_{k}(D) = \sum_{n\in F_k}\mu_k\big(D_n\big)\ge\sum_{n\in F_k}\mu_k\big(C_n\big)/2=(1/2)\mu_k\big(B_k\big) = 1/2,\]
a contradiction with the condition that $\lim_{k\to\infty}\mu_k(D)=0$.
\end{proof}
We do not know if one can construct a family of size continuum consisting of pairwise non-isomorphic $\mathbb{F}_{\sigma}$ P-filters $F$ such that the space $N_F$ does not have the BJNP (cf. Theorem \ref{theorem:continuum_many}).
\subsection{Non-meagerness of $LF\big(F_n\big)$-filters}
In this short final subsection of Section \ref{sec:complexity}, we show an example of a free filter $F$ on $\omega$ such that the space $S_F^*$ contains a non-trivial convergent sequence, but the space $N_F$ does not have the BJNP.
\begin{lemma}\label{lemma:lf_is_nonmeager}
Let $\seqn{A_n}$ be a sequence of pairwise disjoint infinite subsets of $\omega$. Let $\seqn{F_n}$ be a sequence of free filters based on $\seqn{A_n}$ such that each $F_n$ is non-meager on $A_n$. Then, the filter $LF\big(F_n\big)$ is non-meager.
\end{lemma}
\begin{proof}
For each $n\in\omega$ set
\[G_n=\big\{A\in\wp(\omega)\colon\ A\cap A_n\in F_n\big\}.\]
Then, $G_n$ is a non-meager free filter on $\omega$ (e.g. by Talagrand's characterization). We have:
\[\bigcap_{n\in\omega}G_n\subseteq LF\big(F_n\big),\]
and since any countable intersection of non-meager filters on $\omega$ is non-meager (see \cite{Bar99}), we get that $LF\big(F_n\big)$ is non-meager.
\end{proof}
\begin{corollary}\label{cor:sf_jnp_nf_no_bjnp}
There is a non-meager free filter $F$ on $\omega$ such that $S_F^*$ contains a non-trivial sequence $\seqn{x_n}$ convergent to $p_F$ (in particular, $S_F$ has the JNP) and such that $N_F$ does not have the BJNP.
\end{corollary}
\begin{proof}
Let $\seqn{F_n}$ be any sequence of free ultrafilters based on some sequence $\seqn{A_n}$ of pairwise disjoint infinite subsets of $\omega$ and put $F=LF\big(F_n\big)$. By Lemma \ref{lemma:lf_no_frechet}, the point $p_F$ in $S_F$ is the limit of a convergent sequence contained entirely in $S_F^*$---it follows that $S_F$ has the JNP. Due to Lemma \ref{lemma:lf_is_nonmeager}, $F$ is non-meager, so, by Corollary \ref{cor:nf_bjnp_meager_meas0}, the space $N_F$ does not have the BJNP.
\end{proof}
Corollary \ref{cor:sf_jnp_nf_no_bjnp} implies that to study which spaces among those of the form $S_F^*$ have the JNP we need to apply completely different techniques and tools from those described in this section.
\section{General Tychonoff spaces with the JNP and the BJNP \label{sec:tychonoff}}
Let $X$ be a space and assume that $\seqn{x_n}$ is a non-trivial sequence in $X$ convergent to some point $p\in X\setminus\big\{x_n\colon n\in\omega\big\}$. By Fact \ref{fact:conv_seq_jnp}, $X$ has the JNP. This consequence can also be seen from a more general point of view. Namely, put $Z=\big\{x_n\colon\ n\in\omega\big\}\cup\{p\}$ and endow it with the subspace topology. Obviously, in $Z$, the sequence $\seqn{x_n}$ is still convergent to $p$.
For each open neighborhood $U$ of $p$ in $Z$, let $A(U)=\big\{n\in\omega\colon\ x_n\in U\big\}$, and set $F=\big\{A(U)\colon U\text{ is an open neighborhood of }p\text{ in }Z\big\}$. It is immediate that $F=Fr$ and that the space $N_{Fr}$ is homeomorphic to $Z$. Hence, $X$ contains a subspace homeomorphic to some space $N_G$, where $G$ is a free filter on $\omega$, which has the JNP, and hence $X$ itself must have the JNP. In this section we study this situation in more detail and in more general settings. In particular, we provide several criteria deciding when a given space contains a subspace homeomorphic to some space $N_G$, where $G$ is a free filter on $\omega$, which has the JNP or the BJNP.
We need to introduce a piece of notation. Fix a (Tychonoff) space $X$, its infinite countable subset $Y$, and a point $x\in\overline{Y}\setminus Y$. Set $Z_X(x,Y)=Y\cup\{x\}$ and endow it with the subspace topology. By $\mathfrak{N}_X(x)$ we denote the neighborhood system of $x$ in $X$, that is, the collection of all (not necessarily open) subsets $U$ of $X$ such that $x\in\intt U$. We then put:
\[\mathfrak{F}_X(x,Y)=\big\{U\cap Y\colon U\in\mathfrak{N}_X(x)\big\}.\]
Note that since $x\not\in Y$ and $Y$ is infinite countable, $\mathfrak{F}_X(x,Y)$ is a free filter on the \textit{set} $Y$. If $f\colon\omega\to Y$ is a bijection, then $F=\big\{f^{-1}[V]\colon\ V\in\mathfrak{F}_X(x,Y)\big\}$ is a free filter on $\omega$---we will say in this case that $F$ is $f$-\textit{associated} (or, shortly, \textit{associated}, if $f$ is not important) \textit{to} $\mathfrak{F}_X(x,Y)$. The bijection $f$ gives rise to the injective continuous function $\varphi_f\colon N_F\to Z_X(x,Y)$ such that $\varphi_f\restriction\omega=f$ and $\varphi_f\big(p_F)=x$.
\begin{theorem}\label{theorem:tychonoff_x_bjnp}
Suppose $X$ is a space. Let $Y$ be its countable subset and $x\in\overline{Y}\setminus Y$. Let $f\colon\omega\to Y$ be a bijection and $F$ a free filter on $\omega$ $f$-associated to $\mathfrak{F}_X(x,Y)$.
Then, the space $N_F$ has the BJNP if and only if $X$ admits a BJN-sequence $\seqn{\mu_n}$ such that $\supp\big(\mu_n\big)\subseteq Z_X(x,Y)$ for every $n\in\omega$ and $\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=0$ for every $V\in\mathfrak{F}_X(x,Y)$.
In particular, if $N_F$ has the BJNP, then $X$ has the BJNP.
\end{theorem}
\begin{proof}
Assume first that $N_F$ has the BJNP. Let $\seqn{\nu_n}$ be a BJN-sequence of measures on $N_F$.
For each $n\in\omega$ define the measure $\mu_n$ on $X$ by the formula
\[\mu_n(B)=\nu_n\big(\varphi_f^{-1}\big[B\cap Z_X(x,Y)\big]\big),\]
where $B$ is a Borel subset of $X$; it follows that $\big\|\mu_n\big\|=1$ and that $\supp\big(\mu_n\big)$ is finite and contained in $Z_X(x,Y)$.
Since $\varphi_f$ is continuous, given any $g\in C_p^*(X)$, we get that $g\circ\varphi_f\in C_p^*\big(N_F\big)$, hence $\mu_n(g)=\nu_n\big(g\circ \varphi_f\big)\to 0$ as $n\to\infty$. Consequently, $\seqn{\mu_n}$ is a BJN-sequence on $X$ and so $X$ has the BJNP. Finally, if $V\in\mathfrak{F}_X(x,Y)$, then $f^{-1}[V]\in F$, so by Theorem \ref{prop:nf_bjnseq_char}.(3) it holds:
\[\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=\lim_{n\to\infty}\big\|\nu_n\restriction\big(\omega\setminus f^{-1}[V]\big)\big\|=0.\]
Assume now that $X$ admits a BJN-sequence $\seqn{\mu_n}$ of measures such that $\supp\big(\mu_n\big)\subseteq Z_X(x,Y)$ for every $n\in\omega$ and $\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=0$ for every $V\in\mathfrak{F}_X(x,Y)$. For each $n\in\omega$ and $z\in N_F$, set $\nu_n(\{z\})=\mu_n\big(\varphi_f(\{z\})\big)$. This defines a finitely supported measure on $N_F$ such that $\big\|\nu_n\big\|=1$. Also, for every $A\in F$, we have:
\[\lim_{n\to\infty}\big\|\nu_n\restriction(\omega\setminus A)\big\|=\lim_{n\to\infty}\big\|\mu_n\restriction\big(Y\setminus f[A]\big)\big\|=0,\]
since $f[A]\in\mathfrak{F}_X(x,Y)$. Put $P_n=\big\{x\in\supp\big(\nu_n\big)\colon \nu_n(\{x\})>0\big\}$ and $N_n=\supp\big(\nu_n\big)\setminus P_n$. Since $\seqn{\mu_n}$ is a BJN-sequence, for $\seqn{\nu_n}$ we get (cf. \cite[Lemma 4.2]{KSZ}):
\[\lim_{n\to\infty}\big\|\nu_n\restriction P_n\big\|=\lim_{n\to\infty}\big\|\nu_n\restriction N_n\big\|=1/2.\]
Appealing to Theorem \ref{prop:nf_bjnseq_char}, we learn that $\seqn{\nu_n}$ is a BJN-sequence on $N_F$ and hence $N_F$ has the BJNP.
\end{proof}
Let us note that without loss of generality we may require that the sequence $\seqn{\mu_n}$ in Theorem \ref{theorem:tychonoff_x_bjnp} is disjointly supported (by Lemma \ref{lemma:bjn_disjoint_supps} applied to $N_F$). Also, applying Corollary \ref{cor:nf_bjnp_char} and the methods similar to those used in the proof of Theorem \ref{theorem:tychonoff_x_bjnp}, one can obtain the following sufficient conditions for a space to have the BJNP.
\begin{corollary}\label{cor:tychonoff_x_prob_meas}
Suppose $X$ is a space. Let $Y$ be its countable subset and $x\in\overline{Y}\setminus Y$. Let $F$ be a free filter on $\omega$ associated to $\mathfrak{F}_X(x,Y)$.
Then, the space $N_F$ has the BJNP if and only if $X$ admits a (disjointly supported) sequence $\seqn{\mu_n}$ of finitely supported probability measures such that $\supp\big(\mu_n\big)\subseteq Z_X(x,Y)$ for every $n\in\omega$ and $\lim_{n\to\infty}\mu_n(V)=1$ for every $V\in\mathfrak{F}_X(x,Y)$.
In particular, if $X$ admits a sequence $\seqn{\mu_n}$ as above, then $X$ has the BJNP.\hfill$\Box$
\end{corollary}
\begin{corollary}\label{cor:cont_nf_x}
Suppose $X$ is a space and $G$ is a free filter on $\omega$. Let $\varphi\colon N_G\to X$ be a continuous function such that $\varphi^{-1}\big(\varphi\big(p_G\big)\big)=\big\{p_G\big\}$ (e.g. $\varphi$ is an injection). If $N_G$ has the BJNP, then $X$ has the BJNP, too.
\end{corollary}
\begin{proof}
Put $Y=\varphi[\omega]$ and $x=\varphi\big(p_G\big)$. It follows that $Y$ is an infinite countable subset of $X$ and that $x\in\overline{Y}\setminus Y$. Let $F$ be a filter on $\omega$ associated to $\mathfrak{F}_X(x,Y)$. Since $\varphi$ is continuous, $F$ is Kat\v{e}tov below $G$ (see the discussion before Proposition \ref{prop:ordering_nf}). By Proposition \ref{prop:rk_bjnp}.(2), $N_F$ has the BJNP, so by Theorem \ref{theorem:tychonoff_x_bjnp} $X$ has the BJNP, too.
\end{proof}
We leave the proof of the next theorem to the reader as it is very similar to the one of Theorem \ref{theorem:tychonoff_x_bjnp} (for the last part use Theorem \ref{prop:nf_jnseq_char} and Corollary \ref{cor:nf_jnp}).
\begin{theorem}\label{theorem:tychonoff_x_jnp}
Let $X$, $Y$, $x$, $f$, and $F$ be as in Theorem \ref{theorem:tychonoff_x_bjnp}.
Then, the space $N_F$ has the JNP if and only if $X$ admits a JN-sequence $\seqn{\mu_n}$ such that $\supp\big(\mu_n\big)\subseteq Z_X(x,Y)$ for every $n\in\omega$ and for every $V\in\mathfrak{F}_X(x,Y)$ we have $\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=0$ and $\bigcup_{n\in\omega}\supp\big(\mu_n\big)\subseteq^* V$.
In particular, if $N_F$ has the JNP, then $X$ contains a non-trivial convergent sequence and it has the JNP.\hfill$\Box$
\end{theorem}
Theorems \ref{theorem:tychonoff_x_bjnp} and \ref{theorem:tychonoff_x_jnp} can be used to obtain sufficient conditions implying that for a given space $X$ the spaces $C_p(X)$ and $C_p^*(X)$ contain complemented copies of the space $(c_0)_p$, and that for a given compact space $K$ the space $C(K)$ does not have the Grothendieck property. We will need here two results: 1) if $X$ is an infinite space, then the space $C_p(X)$ (resp. $C_p^*(X)$) contains a complemented copy of $(c_0)_p$ if and only if $X$ has the JNP (resp. the BJNP) (see \cite[Theorem 1]{BKS1} and \cite[Theorem 4.4]{KMSZ}), and 2) if $K$ is an infinite compact space, then the space $C(K)$ does not have the $\ell_1$-Grothendieck property if and only if $K$ has the JNP (see \cite[Theorem 6.7]{KSZ}). Recall that the $\ell_1$-Grothendieck property is a natural weakening of the Grothendieck property, which can be described as follows: for a compact space $K$, the space $C(K)$ has \textit{the $\ell_1$-Grothendieck property} if, for every sequence $\seqn{\mu_n}$ of finitely supported Radon measures on $K$ such that $\mu_n(f)\to 0$ for every $f\in C(K)$, we also have $\mu_n(B)\to0$ for every Borel subset $B\subseteq K$. Obviously, the lack of the $\ell_1$-Grothendieck property implies the lack of the Grothendieck property, but the converse is false (see \cite[Section 7]{KSZ}).
\begin{corollary}\label{cor:c0p_compl}
Let $X$ be a space. If there is a countable subset $Y\subseteq X$ and a point $x\in\overline{Y}\setminus Y$ such that, for a free filter $F$ on $\omega$ associated to the filter $\mathfrak{F}_X(x,Y)$, the space $N_F$ has the JNP (resp. the BJNP), then $C_p(X)$ (resp. $C_p^*(X)$) contains a complemented copy of the space $(c_0)_p$. \hfill$\Box$
\end{corollary}
\begin{corollary}\label{cor:ell1_gr}
Let $K$ be a compact space. If there is a countable subset $Y\subseteq K$ and a point $x\in\overline{Y}\setminus Y$ such that, for a free filter $F$ on $\omega$ associated to the filter $\mathfrak{F}_K(x,Y)$, the space $N_F$ has the BJNP, then $C(K)$ does not have the $\ell_1$-Grothendieck property and hence it does not have the Grothendieck property. \hfill$\Box$
\end{corollary}
It is natural to ask whether Theorem \ref{theorem:tychonoff_x_bjnp} can be strengthened by dropping from the right hand side of the conclusion the condition that $\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=0$ for every $V\in\mathfrak{F}_X(x,Y)$, that is, in other words, whether it is true that a space $X$ has the BJNP if and only if there exist an infinite countable subset $Y\subseteq X$ and a point $x\in\overline{Y}\setminus Y$ such that for a filter $F$ associated to $\mathfrak{F}_X(x,Y)$ the space $N_F$ has the BJNP. Unfortunately, as the next example shows, it is not possible.
It is also worth noting that for disjoint countable subsets $Y_1$ and $Y_2$ of a space $X$ and a point $x\in X$ such that $x\in\overline{Y_1}\setminus Y_1$ and $x\in\overline{Y_2}\setminus Y_2$ it may easily be true that the space $Z_X\big(x,Y_1\big)$ does not have the BJNP while the space $Z_X\big(x,Y_2\big)$ contains a non-trivial sequence convergent to $x$ (see Corollary \ref{cor:sf_jnp_nf_no_bjnp} for a relevant example).
\begin{example}\label{example:schachermayer}
Let $\mathcal{S}$ denote Schachermayer's algebra, that is, the Boolean subalgebra of $\wp(\omega)$ such that, for every $A\in\wp(\omega)$, $A\in\mathcal{S}$ if and only if there is $K\in\omega$ such that for every $k\ge K$ we have: $2k\in A$ if and only if $2k+1\in A$. It is easy to see that the Stone space $St(\mathcal{S})$ is a compactification of $\omega$.
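As an illustrative (and of course only finitary) aside, the defining condition of $\mathcal{S}$ can be tested mechanically for sets given by explicit membership predicates; the helper below (names, examples and the truncation horizon are ours) checks whether the pairs $\{2k,2k+1\}$ stay together from some point on:

```python
def pairs_glued(A, K, horizon):
    """Check, up to `horizon`, that 2k in A iff 2k+1 in A for all k >= K
    (a necessary finitary test for membership in S with witness K)."""
    return all(A(2 * k) == A(2 * k + 1) for k in range(K, horizon))

# A union of blocks [2 k_{2n}, 2 k_{2n+1}) with even endpoints never separates
# a pair {2k, 2k+1}; the set of even numbers separates every pair.
ks = [2 ** n for n in range(16)]          # some strictly increasing sequence k_n (ours)

def blocks(m):
    return any(2 * ks[2 * n] <= m < 2 * ks[2 * n + 1] for n in range(8))

evens = lambda m: m % 2 == 0
```

This is exactly the mechanism used later for the sets $E$ and $O$: blocks with even endpoints belong to $\mathcal{S}$, while the evens do not.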
The algebra was introduced by Schachermayer in \cite[Example 4.10]{Sch82} (see also Remark \ref{rem:schachermayer_bereznitski}), where it was proved that its Stone space does not have the Grothendieck property and that the remainder $L=St(\mathcal{S})\setminus\omega$ is homeomorphic to $\os$. Further properties of $\mathcal{S}$ are studied in \cite[Section 5.1]{KSZ}; in particular, it is pointed out there that $St(\mathcal{S})$ admits a JN-sequence $\seqn{\mu_n}$ such that $\supp\big(\mu_n\big)\subseteq\omega$ and $\big|\supp\big(\mu_n\big)\big|=2$ for every $n\in\omega$ (consider simply the measures $\mu_n=\frac{1}{2}\big(\delta_{2n}-\delta_{2n+1}\big)$). We will now briefly show that $St(\mathcal{S})$ does not contain any infinite countable subspace $Y$ and point $x\in\overline{Y}\setminus Y$ such that the space $N_F$, where $F$ is a free filter on $\omega$ associated to $\mathfrak{F}_{St(\mathcal{S})}(x,Y)$, has the BJNP. So, for the sake of contradiction, let us assume that such $Y$ and $x$ exist. Of course, $x\in L$, since otherwise $x$ would be isolated.
Set $Z=Z_{St(\mathcal{S})}(x,Y)$. Let $f\colon\omega\to Y$ be a bijection and let $F$ be a free filter on $\omega$ $f$-associated to $\mathfrak{F}_{St(\mathcal{S})}(x,Y)$. Since $N_F$ has the BJNP and the mapping $\varphi_f\colon N_F\to Z$ satisfies the assumptions of Corollary \ref{cor:cont_nf_x}, the space $Z$ has the BJNP, too.
First note that if $Y\setminus L$ is finite, then Lemma \ref{lemma:bjn_disjoint_supps} yields that $Z$ admits a BJN-sequence with supports contained in $L$. It follows that the compact space $L$ has the JNP, which is impossible since $L\cong\os$ and $\os$ does not have the JNP (see Fact \ref{fact:bo_bjnp}). Moreover, if $Y\cap L$ is infinite, then using Corollary \ref{cor:tychonoff_x_prob_meas}, the fact that $Z\cap L$ does not have the BJNP (being contained in $L\cong\os$), and an argument similar to the one contained in the proof of Proposition \ref{prop:product_filters}, one can show that for any BJN-sequence $\seqn{\mu_n}$ on $Z$ we have $\big\|\mu_n\restriction L\big\|\to0$ as $n\to\infty$. Thus, without loss of generality, we may assume that $Y\subseteq\omega$.
Since $Z$ has the BJNP, by Theorem \ref{theorem:tychonoff_x_bjnp} $Z$ admits a BJN-sequence $\seqn{\mu_n}$ such that
\[\tag{$*$}\lim_{n\to\infty}\big\|\mu_n\restriction(Y\setminus V)\big\|=0\]
for every $V\in\mathfrak{F}_{St(\mathcal{S})}(x,Y)$. By the remark after Theorem \ref{theorem:tychonoff_x_bjnp}, without loss of generality we may assume that $\seqn{\mu_n}$ is disjointly supported and, by going to a subsequence, that there exists a strictly increasing sequence $\seqn{k_n}$ of natural numbers such that
\[\supp\big(\mu_n\big)\subseteq\big[2k_n,2k_{n+1}\big)\subseteq\omega\]
for every $n\in\omega$.
Put:
\[E=\bigcup_{n\in\omega}\big[2k_{2n},2k_{2n+1}\big)\quad\text{and}\quad O=\omega\setminus E=\bigcup_{n\in\omega}\big[2k_{2n+1},2k_{2n+2}\big).\]
Both $E$ and $O$ are elements of the algebra $\mathcal{S}$. Since $x$ is an ultrafilter on $\mathcal{S}$, either $E\in x$ or $O\in x$. Without loss of generality we may assume that $E\in x$, so $Y\cap E\in\mathfrak{F}_{St(\mathcal{S})}(x,Y)$. We then have:
\begin{align*}
\limsup_{n\to\infty}\big\|\mu_n\restriction\big(Y\setminus(Y\cap E)\big)\big\|&=\limsup_{n\to\infty}\big\|\mu_n\restriction(Y\cap O)\big\|\\
&=\lim_{n\to\infty}\big\|\mu_{2n+1}\restriction \big[2k_{2n+1},2k_{2n+2}\big)\big\|=1,
\end{align*}
which contradicts ($*$). Thus, $Z$ cannot have the BJNP and the proof is finished.
\end{example}
\begin{remark}\label{rem:schachermayer_bereznitski}
It is easy to see that the Stone space $St(\mathcal{S})$ does not contain any non-trivial convergent sequences. Since it admits a JN-sequence $\seqn{\mu_n}$ such that $\big|\supp\big(\mu_n\big)\big|=2$ for every $n\in\omega$, it follows from Proposition \ref{prop:sf_convseq_supps2} that $St(\mathcal{S})$ is not homeomorphic to any space of the form $S_F$ where $F$ is a free filter on $\omega$. In particular, by Proposition \ref{prop:sf_comes_from_bo_converse}, $St(\mathcal{S})$ is not homeomorphic to any space of the form ${\beta\omega}/\mathcal{F}$ where $\mathcal{F}$ is a non-empty closed subset of $\os$.
The space $St(\mathcal{S})$ can, however, be described in another way. Denote $\mathbb{E}=\{2n\colon n\in\omega\}$ and $\O=\{2n+1\colon n\in\omega\}$, and set $B_{\mathbb{E}}=\overline{\mathbb{E}}^{\beta\omega}\setminus\mathbb{E}$ and $B_{\O}=\overline{\O}^{\beta\omega}\setminus\O$. Of course, $B_{\mathbb{E}}\cong B_{\O}\cong\os$. The bijection $h\colon\mathbb{E}\to\O$, defined for each $n\in\omega$ by $h(2n)=2n+1$, gives rise to the natural homeomorphism $H\colon B_{\mathbb{E}}\to B_{\O}$. We define an equivalence relation $R$ on ${\beta\omega}$ by declaring its equivalence classes in the following way: for each $n\in\omega$ set $[n]_R=\{n\}$, and for each $x\in B_{\mathbb{E}}$ set $[x]_R=\{x,H(x)\}$. Then, one can show that $St(\mathcal{S})$ is homeomorphic to the quotient space ${\beta\omega}/R$.
The space ${\beta\omega}/R$ was first studied by Bereznitski\u{\i} \cite{Ber71}. Arkhangel'ski\u{\i} asked whether there exists a compact infinite space $K$ such that the space $C_p(K)$ is not linearly homeomorphic to the product $C_p(K)\times\mathbb{R}$, and suggested that ${\beta\omega}/R$ might be such a space (see \cite[page 93]{Ark87}). In \cite{Mar97} the first author claimed without a proof that $C_p({\beta\omega}/R)$ is linearly homeomorphic to $C_p({\beta\omega}/R)\times\mathbb{R}$, and provided a proper example of $K$ for which the spaces $C_p(K)$ and $C_p(K)\times\mathbb{R}$ are not linearly homeomorphic.
We are now able to justify briefly the statement that $C_p({\beta\omega}/R)$ is linearly homeomorphic to $C_p({\beta\omega}/R)\times\mathbb{R}$. Since ${\beta\omega}/R$ has the JNP, it follows by \cite[Theorem 1]{BKS1} that the space $C_p({\beta\omega}/R)$ contains a complemented copy of the space $(c_0)_p$, that is, there exists a closed linear subspace $Y$ of $C_p({\beta\omega}/R)$ such that $C_p({\beta\omega}/R)$ is linearly homeomorphic to the product topological vector space $Y\times(c_0)_p$ and both the projections are continuous. Since $(c_0)_p$ is linearly homeomorphic to $(c_0)_p\times\mathbb{R}$, we get that $C_p({\beta\omega}/R)$ is linearly homeomorphic to $Y\times(c_0)_p\times\mathbb{R}$ and hence to $C_p({\beta\omega}/R)\times\mathbb{R}$.
\end{remark}
=============================
OpenStack-Ansible pip install
=============================
This role installs pip using the upstream pip installation script. As part of
the installation, the role creates a ``.pip`` directory in the deploying
user's home folder, together with a blank selfcheck JSON file that pip uses to
keep track of versions.
It can also configure pip links that will restrict the package sources to
the OpenStack-Ansible repository.
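For illustration only, such a restricted configuration might resemble the
following ``pip.conf`` fragment. The host name here is hypothetical, and the
actual file contents are governed by the role's default variables:

.. code-block:: ini

   [global]
   index-url = http://repo.example.openstack.local:8181/simple
   trusted-host = repo.example.openstack.local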
To clone or view the source code for this repository, visit the role repository
for `pip_install <https://github.com/openstack/openstack-ansible-pip_install>`_.
Default variables
~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../defaults/main.yml
:language: yaml
:start-after: under the License.
Required variables
~~~~~~~~~~~~~~~~~~
None
Example playbook
~~~~~~~~~~~~~~~~
.. literalinclude:: ../../examples/playbook.yml
:language: yaml
\section{Dynamical mean-field theory}
\label{sec:DMFT}
To set the stage, we first discuss the familiar case of DMFT.
For concreteness, we focus on the two-dimensional (2D) paramagnetic Hubbard model on the square lattice with nearest-neighbor hopping given by the Hamiltonian
\begin{align}
\label{hubbardmodel}
H = &-t\sum_{\langle ij\rangle\sigma} c^\dagger_{i\sigma}c^{}_{j\sigma}+ U\sum_{i} n_{i\ensuremath{\uparrow}} n_{i\ensuremath{\downarrow}}.
\end{align}
Here $i,j$ label lattice sites. The local Hubbard interaction has strength $U$. We use the hopping $t=1$ as the unit of energy and denote Green's function as $G_{ij}$ in real space and $G_{\ensuremath{\mathbf{k}}}$ in momentum space respectively (when it is not ambiguous, we omit the frequency dependence for brevity).
DMFT is a local approximation to the exact Luttinger-Ward functional $\Phi[G_{ij}]\approx\sum_{i}\phi[G_{\text{loc}}]$, which is therefore conserving in the Baym-Kadanoff sense~\cite{Georges96,Baym62}.
As a result of the local approximation, the self-energy is local: $\Sigma_{ij}=\delta\Phi[G_{i'j'}]/\delta G_{ji}=\delta\phi[G_{\text{loc}}]/\delta G_{\text{loc}}\, \delta_{ji}$ and we note that the same holds for the irreducible vertex: $-\Gamma_{ijkl}=\delta^2\Phi[G_{i'j'}]/\delta G_{ji}\delta G_{lk}=\delta^2\phi[G_{\text{loc}}]/\delta G_{\text{loc}}^{2}\, \delta_{li}\delta_{lj}\delta_{lk}$~\footnote{A minus sign arises in the functional relation of $\Gamma$ due to the definitions chosen in this publication.}.
If we know the local Green's function, the problem is solved: In this case we can evaluate the local functional and its derivatives at the local Green's function and hence compute the self-energy.
Since we do not know the local Green's function a priori and the self-energy is a functional of the latter, we have to solve this problem self-consistently:
we vary $G_{\text{loc}}$ until the local Green's function computed from the self-energy equals $G_{\text{loc}}$.
We can employ an auxiliary local model as a \emph{tool} to accomplish this and to sum the diagrams of this local functional exactly.
This means letting $\phi[G_{\text{loc}}]\equiv \phi_{\text{imp}}[g_{\text{imp}}]$ and $\Sigma[G_{\text{loc}}]\equiv \Sigma_{\text{imp}}[g_{\text{imp}}]$.
The desired solution is evidently obtained when the DMFT self-consistency condition is satisfied,
\begin{align}
g_{\text{imp},\nu}=G_{\text{loc},\nu}.
\label{eq:DMFTsc}
\end{align}
(Where unambiguous, we drop labels 'imp' and 'lat' in what follows.)
In practice, an Anderson impurity model (AIM) is often employed for this purpose, whose action reads
\begin{align}
\hspace*{-0.18cm}S_{\text{AIM}}=&\hspace*{-0.01cm}-\hspace*{-0.01cm}\sum_{\nu\sigma}c^*_{\nu\sigma}(\imath\nu+\mu-\Delta_\nu)c^{}_{\nu\sigma}+\hspace*{-0.01cm}U\sum_{\omega}n_{-\omega\uparrow} n_{\omega\downarrow}.
\label{eq:AIM}
\end{align}
Here $\Delta_{\nu}$ denotes the electronic hybridization, $\mu$ is the chemical potential and $\nu$ ($\omega$) denote the discrete fermionic (bosonic) Matsubara frequencies $\nu_n=(2n+1)\pi/\beta$ and $\omega_m=2m\pi/\beta$, respectively. $\beta=1/T$ is the inverse temperature. The AIM has the same local interaction $U$ as the lattice model.
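As a purely illustrative aside, the self-consistency cycle around Eq.~\eqref{eq:DMFTsc} can be sketched in a few lines of code. The sketch below is \emph{not} taken from the text: it uses a crude second-order (IPT-like) approximation as a stand-in for an exact impurity solver, and all parameters, grids and discretized Fourier transforms are our own choices.

```python
import numpy as np

# Illustrative DMFT loop (our parameters/grids): half-filled Hubbard model on
# the square lattice; mu = U/2 is absorbed (particle-hole symmetry).
t, U, beta = 1.0, 2.0, 5.0
n_iw, n_tau, L = 256, 512, 24
wn = np.pi / beta * (2 * np.arange(n_iw) + 1)      # positive fermionic nu_n
iw = 1j * wn
tau = np.linspace(0.0, beta, n_tau)

# Square-lattice dispersion eps_k = -2t(cos kx + cos ky) on an L x L grid
k = 2.0 * np.pi * np.arange(L) / L
eps = -2.0 * t * (np.cos(k)[:, None] + np.cos(k)[None, :])

# Crude Fourier kernels between Matsubara and imaginary time
ker_wt = np.exp(-1j * np.outer(wn, tau))           # G(i nu)  -> G(tau)
ker_tw = np.exp(1j * np.outer(tau, wn))            # Sig(tau) -> Sig(i nu)
w_tr = np.full(n_tau, beta / (n_tau - 1))          # trapezoid weights
w_tr[[0, -1]] *= 0.5

def to_tau(g_iw):
    # subtract the known 1/(i nu) tail, which transforms to -1/2 on (0, beta)
    return 2.0 / beta * np.real((g_iw - 1.0 / iw) @ ker_wt) - 0.5

def to_iw(f_tau):
    return (w_tr * f_tau) @ ker_tw

sigma = np.zeros(n_iw, dtype=complex)
converged = False
for _ in range(200):
    # local lattice Green's function for the current (local) self-energy
    g_loc = np.mean(1.0 / (iw[:, None, None] - eps[None] - sigma[:, None, None]),
                    axis=(1, 2))
    g0 = 1.0 / (1.0 / g_loc + sigma)               # Weiss field from g_imp = G_loc
    g0_tau = to_tau(g0)
    # IPT-like second-order self-energy: Sigma(tau) = U^2 g0(tau)^2 g0(beta - tau)
    sigma_new = to_iw(U**2 * g0_tau**2 * g0_tau[::-1])
    if np.max(np.abs(sigma_new - sigma)) < 1e-8:
        converged = True
        break
    sigma = 0.5 * sigma + 0.5 * sigma_new          # linear mixing
```

At weak coupling the loop converges quickly to a causal, purely imaginary local self-energy; replacing the stand-in solver by an exact impurity solver recovers the scheme described in the text.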
Let us now take a \emph{practical} viewpoint.
Assume we have a non-trivial model that we can solve exactly, such as the AIM described by the action~\eqref{eq:AIM}. From this model we can obtain the local impurity self-energy and irreducible vertex function. We can now ask the question of how to construct a conserving approximation given these quantities.
We recall that \emph{local} conservation of charge and spin means that the following continuity equations for the charge ($\rho^{0}$) and spin densities ($\rho^{x,y,z}$) hold:
\begin{align}
\partial_\tau\rho^\alpha=-[\rho^\alpha,H].
\label{eq:conteq}
\end{align}
We have introduced the index $\alpha=0,x,y,z$ to label the charge and spin channels.
The corresponding charge and spin density operators are defined as $\rho^\alpha = \sum_{\sigma\sigma'}c^\dagger_{\sigma}s^\alpha_{\sigma\sigma'}c_{\sigma'}$ with the Pauli matrices $s^\alpha$,
such that $\rho^0=n=n_\uparrow+n_\downarrow$ and $\rho^{x,y,z}=2S^{x,y,z}$.
On the lattice we can formulate the following Ward identities (cf. Appendix~\ref{app:Wardlat}), which are the Green's function analogues of the continuity equations~\eqref{eq:conteq}:
\begin{align}
\Sigma_{k+q}-\Sigma_{k}=-\sum_{k'}\Gamma^{\alpha}_{kk'q}[G_{k'+q}-G_{k'}].
\label{eq:Ward}
\end{align}
Here we have introduced four-vector notation $k\equiv(\ensuremath{\mathbf{k}},\nu)$ and $q\equiv(\ensuremath{\mathbf{q}},\omega)$. Summations over frequencies and momenta imply factors $\beta^{-1}$ and $N^{-1}$, respectively, with $N$ being the number of sites.
$\Sigma$ and $G$ are the exact lattice self-energy and Green's function, respectively, and $\Gamma^{\alpha}$ denotes the \textit{irreducible} (horizontal) \textit{particle-hole} vertex.
The irreducible vertices in the charge and spin channels are explicitly defined as $\Gamma^{0}=\Gamma^{\uparrow\uparrow\uparrow\uparrow}+\Gamma^{\uparrow\uparrow\downarrow\downarrow}$,
$\Gamma^{z}=\Gamma^{\uparrow\uparrow\uparrow\uparrow}-\Gamma^{\uparrow\uparrow\downarrow\downarrow}$
and $\Gamma^{x}=\Gamma^{y}=\frac{1}{2}(\Gamma^{\uparrow\downarrow\downarrow\uparrow}+\Gamma^{\downarrow\uparrow\uparrow\downarrow})=\Gamma^{\uparrow\downarrow\downarrow\uparrow}$.
In a local approximation such as DMFT, where $\Sigma_k\equiv\Sigma_\nu$ and $\Gamma^{\alpha}_{kk'q}\equiv\gamma^{\alpha}_{\nu\nu'\omega}$, all momentum dependence drops out of the Ward identities~\eqref{eq:Ward} and we obtain~\footnote{In an earlier publication by Hettler et al. the right-hand side of Eq.~\eqref{eq:dmft_definition} was expressed in terms of the \textit{reducible} vertex function~\cite{Hettler00}.
In that form one does not straightforwardly recognize the momentum independence of Eq.~\eqref{eq:dmft_definition}, which misled the authors into believing that DMFT does not satisfy the Ward identities.}
\begin{align}
\Sigma_{\nu+\omega}-\Sigma_{\nu}=-\sum_{\nu'}\gamma^{\alpha}_{\nu\nu'\omega}[G_{\text{loc},\nu'+\omega}-G_{\text{loc},\nu'}].\label{eq:dmft_definition}
\end{align}
An analogous Ward identity holds for the AIM (see Appendix~\ref{app:Wardlocal}),
\begin{align}
\Sigma_{\nu+\omega}-\Sigma_\nu = -\sum_{\nu'}\gamma_{\nu\nu'\omega}^{\alpha}[g_{\nu'+\omega}-g_{\nu'}],
\label{eq:Wardimp}
\end{align}
where $\Sigma_\nu, g_\nu$ and $\gamma^\alpha_{\nu\nu'\omega}$ are the self-energy, Green's function and the irreducible vertex of the AIM, respectively.
Hence the DMFT approximation is evidently conserving when the self-consistency condition~\eqref{eq:DMFTsc} holds.
Remarkably, DMFT arises when we attempt to construct a locally conserving approximation based on the AIM~\eqref{eq:AIM}.
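The structure of the local Ward identity~\eqref{eq:Wardimp} can be checked numerically. The following sketch evaluates the residual between its two sides on a truncated frequency grid; the input arrays are placeholders for data obtained from an actual impurity solver:

```python
import numpy as np

def ward_residual(sigma, gamma, g, beta, m):
    """Residual of the local Ward identity at bosonic index m.

    sigma, g: impurity self-energy and Green's function on a truncated
    fermionic Matsubara grid; gamma: irreducible vertex gamma[n, n'] at
    fixed bosonic frequency omega_m. In the exact identity the internal
    frequency sum is unbounded; here it is truncated to the grid.
    """
    N = len(sigma)
    lhs = sigma[m:] - sigma[:N - m]             # Sigma_{nu+omega} - Sigma_nu
    dg = g[m:] - g[:N - m]                      # g_{nu'+omega} - g_{nu'}
    rhs = -(gamma[:N - m, :N - m] @ dg) / beta  # frequency sum carries 1/beta
    return lhs - rhs
```

A vanishing residual signals that the identity holds on the given grid; this is the type of check displayed in Figs.~\ref{fig:ward3d} and~\ref{fig:ward} below.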
Let us consider further properties of the DMFT approximation. To this end, we introduce the (connected) susceptibilities
\begin{align}
X^\alpha_q = -\langle \bar{\rho}^\alpha_{-q}\bar{\rho}^\alpha_q\rangle= 2\sum_{kk'}X_{kk'q}^{\alpha},
\end{align}
which are defined in terms of density fluctuations, $\bar{\rho}^\alpha(\tau)=\rho^\alpha(\tau)-\langle\rho^\alpha\rangle$.
Their local parts are given by $X^\alpha_\text{loc} = \sum_{\ensuremath{\mathbf{q}}} X^\alpha_q$.
The generalized susceptibility $X^\alpha_{kk'q}$ is related to the irreducible vertex function via the integral equation
\begin{align}
X_{kk'q}^{\alpha}=G_{k}G_{k+q}\left[\beta \delta_{kk'}-\sum_{k''}\Gamma_{kk''q}^{\alpha}X_{k''k'q}^\alpha\right].
\label{eq:lthroughgamma}
\end{align}
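At fixed bosonic frequency, Eq.~\eqref{eq:lthroughgamma} in a local approximation is a linear matrix equation in the two fermionic indices and can be solved by direct inversion. A minimal sketch with toy (randomly chosen) local ingredients:

```python
import numpy as np

beta = 2.0
N = 32
rng = np.random.default_rng(0)

# hypothetical local ingredients at one bosonic frequency omega:
nu = (2 * np.arange(-N // 2, N // 2) + 1) * np.pi / beta
G = 1.0 / (1j * nu + 0.3)                   # toy local Green's function
bubble = np.diag(G * np.roll(G, -1))        # crude stand-in for G_nu G_{nu+omega}
gamma = 0.1 * rng.standard_normal((N, N))   # toy irreducible vertex

# Eq. (lthroughgamma) in matrix form, X = bubble (beta*I - gamma X / beta),
# the internal frequency sum carrying 1/beta; solved by linear algebra:
X = beta * np.linalg.solve(np.eye(N) + bubble @ gamma / beta, bubble)

# susceptibility: trace out both fermionic indices (each sum carries 1/beta)
X_phys = 2.0 * X.sum() / beta**2
```

The same linear-algebra step underlies the computation of $X^{\text{DMFT},\alpha}$ in Sec.~\ref{section:scdb}.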
Now consider the kinetic energy of the lattice. It is expressed through single-particle quantities as $E^{\text{lat}}_\text{kin}=\sum_{\ensuremath{\mathbf{k}}\sigma}\varepsilon_{\ensuremath{\mathbf{k}}}\langle n_{\ensuremath{\mathbf{k}}\sigma}\rangle$.
In Appendix~\ref{app:asymptotefromward} we establish a relation that expresses the kinetic energy in terms of a two-particle quantity, more precisely the high-frequency behavior of the local susceptibility. The relation follows directly from the Ward identities, Eq.~\eqref{eq:Ward}:
\begin{align}
\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2 X^{\alpha}_{\text{loc},\omega} &=-2E^{\text{lat}}_\text{kin}\label{eq:xloc_asymptote}.
\end{align}
Like the Ward identities themselves, this relation connects single- and two-particle quantities.
The local impurity Ward identities~\eqref{eq:Wardimp} imply an analogous relation (see Appendix~\ref{app:WardAIM}),
\begin{align}
\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2\chi^{\alpha}_\omega &=-2E^{\text{imp}}_\text{kin},\label{eq:imp_asymptote}
\end{align}
where $\chi^\alpha_\omega=-\langle\bar{\rho}^\alpha_{-\omega}\bar{\rho}^\alpha_\omega\rangle_{\text{imp}}$ is the impurity susceptibility and the kinetic energy of the impurity model is given by~\cite{Haule07}
\begin{equation}
E^{\text{imp}}_\text{kin}=2\sum_{\nu}\Delta_{\nu}g_{\nu}.
\label{eq:Ekinimp}
\end{equation}
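Eq.~\eqref{eq:Ekinimp} is straightforward to evaluate numerically. The sketch below does so for a hypothetical single-level bath at $U=0$, where the Matsubara sum can be checked against the exact resonant-level result:

```python
import numpy as np

beta = 2.0
n = np.arange(-4096, 4096)
nu = (2 * n + 1) * np.pi / beta

# hypothetical single bath level at energy eps with coupling V:
V, eps, mu = 0.5, 0.3, 0.0
delta = V**2 / (1j * nu - eps)      # hybridization Delta(i nu)
g = 1.0 / (1j * nu + mu - delta)    # resonant-level Green's function (U = 0)

# Eq. (Ekinimp): E_kin^imp = 2 sum_nu Delta_nu g_nu, the sum carrying 1/beta
E_kin = 2.0 * (delta * g).sum().real / beta
```

Since $\Delta_\nu g_\nu$ decays as $\nu^{-2}$, the truncated sum converges; a production code would treat the tail analytically.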
DMFT is not two-particle self-consistent. As a consequence, the impurity and local lattice susceptibilities differ in general.
Remarkably, however, their asymptotes are the same.
Decomposing the susceptibility into a contribution from the impurity susceptibility and a momentum-dependent correction~\cite{Hafermann14-2},
$X^{\text{DMFT}}_{\text{loc}}=\chi+X'_{\text{loc}}$, one can show that $X'_{\text{loc}}$ decays at least as $\omega^{-4}$.
Therefore, $\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2 X_{\text{loc},\omega} = \lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2 \chi_\omega$.
We demonstrate this numerically in the left panel of Fig.~\ref{fig:dmft:asymptote} in the section on numerical results.
As a consequence, $E^{\text{lat}}_\text{kin}=E^{\text{imp}}_\text{kin}$ and the kinetic energy can be determined from the impurity model in DMFT.
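The tail relations~\eqref{eq:xloc_asymptote} and~\eqref{eq:imp_asymptote} suggest a simple numerical estimator for the kinetic energy: evaluate $(\imath\omega)^2\chi_\omega$ at the largest available bosonic frequency. A sketch with a hypothetical single-pole susceptibility:

```python
import numpy as np

beta = 2.0
m = np.arange(1, 512)
w = 2 * m * np.pi / beta          # bosonic Matsubara frequencies

# hypothetical local susceptibility with a single pole pair: chi ~ A/(w^2 + w0^2),
# whose exact asymptote gives E_kin = A/2 via Eq. (imp_asymptote)
A, w0 = 1.6, 3.0
chi = A / (w**2 + w0**2)

# (i w)^2 chi -> -2 E_kin for large w
E_kin_est = -((1j * w[-1]) ** 2 * chi[-1]).real / 2
```

For Monte Carlo data the last few frequencies would be noisy, and one would fit the tail over a frequency window instead.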
Next, we consider the potential energy $E_{\text{pot}}=Ud_{\text{lat}}$ where $d_{\text{lat}}=\av{n_{\ensuremath{\uparrow}} n_{\ensuremath{\downarrow}}}$ is the double occupancy of the lattice.
As a two-particle correlation function, $d$ is naturally computed from two-particle quantities. We denote this by a superscript '2P'. We have the following relations:
\begin{align}
X^0_{\text{loc},\tau=0}&=-\av{\bar{\rho}^0\bar{\rho}^0}=-(\av{n}+2d^{\text{2P}}_{\text{lat}}-\av{n}^2),\label{eq:docc2P}\\
X^z_{\text{loc},\tau=0}&=-\av{\bar{\rho}^z\bar{\rho}^z}=-(\av{n}-2d^{\text{2P}}_{\text{lat}}),
\end{align}
where $\rho^0=n=n_\ensuremath{\uparrow}+n_\ensuremath{\downarrow}, \rho^z=m=n_\ensuremath{\uparrow}-n_\ensuremath{\downarrow}$ and $\av{m}=0$. Hence the double occupancy can be expressed in terms of the susceptibilities as
\begin{align}
d^{\text{2P}}_\text{lat} &= -\frac{1}{4}\left[X^0_{\text{loc},\tau=0}-X^z_{\text{loc},\tau=0}-\av{n}^2_\text{lat}\right].
\label{eq:dlat2}
\end{align}
Similarly, $d$ may be obtained from the impurity as
\begin{align}
d^{\text{2P}}_\text{imp} &= -\frac{1}{4}\left[\chi^0_{\tau=0}-\chi^z_{\tau=0}-\av{n}^2_\text{imp}\right]\label{eq:dimp2}.
\end{align}
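Given the equal-time susceptibilities and the density, Eqs.~\eqref{eq:dlat2} and~\eqref{eq:dimp2} reduce to simple arithmetic, applicable to lattice and impurity quantities alike:

```python
def docc_from_susceptibilities(X0_tau0, Xz_tau0, n):
    """Double occupancy from equal-time charge/spin susceptibilities,
    d = -(1/4) [X^0(tau=0) - X^z(tau=0) - <n>^2]."""
    return -0.25 * (X0_tau0 - Xz_tau0 - n**2)
```

By construction, inserting $X^0_{\tau=0}=-(\av{n}+2d-\av{n}^2)$ and $X^z_{\tau=0}=-(\av{n}-2d)$ recovers $d$ exactly.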
By virtue of the single-particle self-consistency condition~\eqref{eq:DMFTsc} we have $\av{n}_{\text{lat}}=\av{n}_{\text{imp}}$.
Due to the missing two-particle self-consistency in DMFT however, the susceptibilities differ and we have in general $d^{\text{2P}}_\text{lat}\neq d^{\text{2P}}_\text{imp}$~\cite{vanLoon16}.
On the other hand, we can compute the double occupancies from single-particle quantities via the Migdal-Galitskii formula for the Hubbard model~\cite{Galitskii58},
\begin{align}
d^{\text{1P}}_\text{lat} &= \frac{1}{U}\sum_{\ensuremath{\mathbf{k}}\nu}G_{\ensuremath{\mathbf{k}}\nu} \Sigma_{\ensuremath{\mathbf{k}}\nu},\label{eq:dlat1}
\end{align}
and its counterpart of the Anderson impurity model,
\begin{align}
d^{\text{1P}}_\text{imp}=\frac{1}{U}\sum_\nu g_\nu \Sigma_\nu\label{eq:dimp1}.
\end{align}
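As a consistency check of Eq.~\eqref{eq:dimp1}, one may evaluate it for the Hubbard atom at half filling, where $g_\nu$, $\Sigma_\nu$ and the double occupancy are known in closed form. Note that the constant (Hartree) part of the self-energy requires the convergence factor $e^{\imath\nu 0^+}$; in the sketch below it is therefore summed analytically against the density:

```python
import numpy as np

beta, U = 2.0, 6.0
n = np.arange(-4096, 4096)
nu = (2 * n + 1) * np.pi / beta

# Hubbard atom at half filling (mu = U/2): exact g and Sigma
g = 1j * nu / ((1j * nu) ** 2 - U**2 / 4)
sigma = U / 2 + (U**2 / 4) / (1j * nu)    # Hartree term + dynamic part
density = 0.5                             # <n_sigma> per spin

# d = (1/U) sum_nu g Sigma; the constant Hartree part is summed
# analytically against the density (convergence factor), the dynamic
# part numerically on the truncated grid
sigma_hartree = U / 2
d = ((g * (sigma - sigma_hartree)).sum().real / beta
     + sigma_hartree * density) / U
```

The result can be compared with the exact atomic-limit value $d=1/(2+2e^{\beta U/2})$.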
Making use of the single-particle self-consistency condition~\eqref{eq:DMFTsc} and of the locality of the self-energy, $\Sigma_{\ensuremath{\mathbf{k}}\nu}=\Sigma_\nu$, we see that lattice and impurity double occupancies computed in this way are the same.
In summary, DMFT arises when one attempts to construct a conserving, single-particle self-consistent approximation based on the AIM.
The kinetic energy of the lattice model is equal to the kinetic energy of the impurity model.
It can be obtained from the asymptote of the local lattice susceptibility, a general feature of conserving approximations,
while in DMFT, it may also be obtained from the impurity susceptibility.
An ambiguity arises in the calculation of the double occupancy from single- and two-particle quantities: $d_\text{imp}=d^{\text{1P}}_\text{lat}\neq d^{\text{2P}}_\text{lat}$ in DMFT as a consequence of the lack of two-particle self-consistency.
We call different ways of obtaining a quantity thermodynamically consistent~\cite{Aichhorn06,Janis17} if they yield one and the same result.
In general, only an exact solution guarantees that different ways of calculating a quantity agree.
The kinetic energy and the $f$-sum rule (see, e.g.,~\cite{Vilk97}, see Appendix~\ref{app:fsumrule_derivation}) are examples where thermodynamic consistency between one- and two-particle level is ensured through the Ward identities~\eqref{eq:Ward}.
Obviously, the Ward identities are insufficient for consistency in other cases, as we have seen for the double occupancy, whose value is ambiguous in DMFT.
Another important example is the inconsistency of the Schwinger-Dyson equation with the Ward identities when the reducible vertex is computed from the irreducible one through the Bethe-Salpeter equation~\cite{Janis17}.
The recently proposed QUADRILEX approach has been reported to be free of this inconsistency~\cite{Ayral16}.
We will examine in the following section to what extent the deficiencies of DMFT can be cured by two-particle self-consistency.
\section{Two-particle self-consistency}
\label{sec:2psc}
Two-particle self-consistent approximations based on an impurity model go back to extended dynamical mean-field theory (EDMFT) and its precursors~\cite{Sengupta95,Otsuki13-2,sachdev93,Si96,Kajueter96,Smith00,Chitra01}.
In these approximations, a frequency-dependent interaction is introduced in the impurity model, and its values are fixed through a self-consistency condition on a two-particle (bosonic) correlation function such as the susceptibility.
In general, we can augment the AIM of~\eqref{eq:AIM} by a dynamical interaction in all four (one charge and three spin) channels as follows:
\begin{align}
S_{\text{BFK}}=& S_{\text{AIM}}+
\frac{1}{2}\sum_{\alpha\omega}\bar{\rho}^\alpha_{-\omega} \Lambda^\alpha_\omega\bar{\rho}^\alpha_\omega.\label{BFK}
\end{align}
We refer to this model as the Bose-Fermi-Kondo impurity (BFK) model.
$\Lambda^\alpha_{\omega}$ is a dynamical interaction which can be viewed as a bosonic bath or hybridization. We consider approximations to the Green's function $G$ and to the susceptibility $X$ that are locally conserving and two-particle self-consistent. In analogy to the single-particle self-consistency condition~\eqref{eq:DMFTsc}, the retarded interactions in~\eqref{BFK} are determined through the following condition~\cite{vanLoon16-2,Stepanov16}:
\begin{align}
\chi^{\alpha}_{\omega} = X_{\text{loc},\omega}^{\alpha}.
\label{eq:2psc}
\end{align}
This self-consistency condition provides a bounded double occupancy by construction [cf. Eqs.~\eqref{eq:dlat2} and~\eqref{eq:dimp2}],
which is sufficient to suppress magnetic phase transitions in two dimensions, as shown by Vilk and Tremblay~\cite{Vilk97}.
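In practice, condition~\eqref{eq:2psc} can be enforced iteratively by adjusting the retarded interactions. The simple mixing update below is a hypothetical choice for illustration only; the condition itself fixes $\Lambda^\alpha_\omega$ merely implicitly, and other update schemes are possible:

```python
import numpy as np

def update_retarded_interaction(lam, chi_imp, X_loc, mixing=0.5):
    """One hypothetical fixed-point update for Lambda^alpha_omega.

    Drives X_loc towards chi_imp: since a larger Lambda suppresses the
    lattice susceptibility, Lambda is increased when X_loc > chi_imp.
    """
    return lam + mixing * (1.0 / chi_imp - 1.0 / X_loc)
```

At convergence, $\chi^\alpha_\omega=X^\alpha_{\text{loc},\omega}$ and the update leaves $\Lambda^\alpha_\omega$ unchanged.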
The Ward identities~\eqref{eq:Ward} relate single-particle quantities (Green's functions and self-energies) to two-particle quantities (vertex functions and susceptibilities).
We study the interplay between the requirement of local conservation and the self-consistency conditions~\eqref{eq:DMFTsc} and~\eqref{eq:2psc}.
We differentiate between two kinds of approaches: (i) The case of Ising-type ($S_{z}$) coupling is characterized by a finite $\Lambda^z_\omega$ and $\Lambda^{0}_\omega$, while we set $\Lambda^{x,y}_\omega = 0$. That is, we require self-consistency~\eqref{eq:2psc} only in the charge channel and one of the spin channels, i.e., for $\alpha=0,z$.
(ii) For rotationally invariant Heisenberg-type coupling, all retarded interactions $\Lambda_{\omega}^{\alpha}$ for $\alpha=0,x,y,z$ are finite and determined by~\eqref{eq:2psc}.
The retarded interactions cause shifts in the Hubbard interaction and the chemical potential which are discussed in Appendix~\ref{app:impurity:hamiltonian}.
\subsection{Ising-type coupling}
\label{section:sz-coupling}
In the discussion of the kinetic energy in the context of DMFT we have shown that, assuming local conservation, it can be expressed in terms of the asymptotic behavior of the susceptibility.
For the lattice we have by virtue of local conservation $\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2 X^{0,z}_{\text{loc},\omega} =-2E_\text{kin}^{\text{lat}}$, while on the impurity,
$\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2\chi^{0,z}_\omega =-2E_{\text{kin}}^{\text{imp}}$ holds (cf. Appendix~\ref{app:asymptotics:impurity}).
In this case, the kinetic energies computed in the two ways are equal by means of the two-particle self-consistency~\eqref{eq:2psc}.
The kinetic energy can therefore be obtained from the impurity model, as in DMFT. Note that $E_\text{kin}^{\text{lat}}=E_{\text{kin}}^{\text{imp}}$ may be determined from the charge or spin susceptibility alike.
We saw previously that the double occupancy computed from two-particle quantities, $d_{\text{lat}}^{\text{2P}}$ and $d_{\text{imp}}^{\text{2P}}$, can be expressed in terms of the local susceptibilities $X_{\text{loc}}^{0,z}$ and $\chi^{0,z}$, respectively [Eqs.~\eqref{eq:dlat2} and~\eqref{eq:dimp2}].
While these differ in DMFT in general, the two-particle self-consistency ensures that $d_{\text{lat}}^{\text{2P}}=d_{\text{imp}}^{\text{2P}}$.
Using single-particle quantities, we can still compute it on the lattice using the Migdal-Galitskii formula, Eq.~\eqref{eq:dlat1}.
In a local approximation to the self-energy and with the single-particle self-consistency condition~\eqref{eq:DMFTsc}, we can express the double occupancy in terms of the impurity self-energy and Green's function, $d^{\text{1P}}_\text{lat} = (1/U)\sum_\nu g_\nu \Sigma_\nu$.
In contrast to DMFT, however, this expression is \emph{not} equal to the impurity double occupancy.
Because the Migdal-Galitskii formula involves the potential energy, it is natural that the retarded interactions affect it. In Appendix~\ref{app:migdal} we derive the double occupancy for the Bose-Fermi-Kondo model, with the result
\begin{align}
d^{\text{1P}}_\text{imp} &= \frac{1}{2\tilde{U}}\left[2\sum_\nu g_\nu \Sigma_\nu+\sum_{\omega,\alpha}\tilde{\Lambda}^\alpha_\omega(\chi^\alpha_\omega-\beta\av{\rho^\alpha}^2\delta_\omega)\right].
\label{eq:dimp1_bfk}
\end{align}
Here $\tilde{U}=U+\Lambda^0_\infty-\Lambda^z_\infty$ contains the asymptotic part of the retarded interaction $\Lambda^{\alpha}_\omega=\Lambda^{\alpha}_\infty+\tilde{\Lambda}^{\alpha}_\omega$ (cf. Appendix~\ref{app:impurity:hamiltonian}). The summation in the second term in brackets in Eq.~\eqref{eq:dimp1_bfk} in general runs over all channels, $\alpha=0,x,y,z$. In the case of Ising-type coupling, only the retarded interaction in the two channels $\alpha=0,z$ is non-zero.
Because we assume that we solve the impurity model exactly, we have $d^{\text{1P}}_\text{imp}=d^{\text{2P}}_\text{imp}\equiv d_\text{imp}$ (so that we can drop the superscript indices). While $d^{\text{2P}}_\text{lat}=d_\text{imp}$, the second term in~\eqref{eq:dimp1_bfk} will in general lead to $d^{\text{1P}}_\text{lat}\neq d_\text{imp}$. The double occupancy computed from two-particle quantities $d^{\text{2P}}$ is consistent with that of the impurity (because of two-particle self-consistency), while that obtained from single-particle quantities is not.
This situation is exactly opposite to DMFT, where $d^{\text{1P}}$ is consistent. We see that while the retarded interactions allow us to enforce consistency of $d^{\text{2P}}$, they simultaneously undermine the consistency of $d^{\text{1P}}$.
We demonstrate this numerically in Fig.~\ref{fig:docc} for the two-particle self-consistent approximation presented in Sec.~\ref{section:scdb} representative of Ising-type coupling and compare to DMFT.
\subsection{Heisenberg-type coupling}
\label{section:3d-coupling}
In the case of a Heisenberg-type coupling, all retarded interactions $\Lambda_{\omega}^{\alpha}$ are in general non-zero. We fix their values through the self-consistency condition~\eqref{eq:2psc}, as before, and consider the $SU(2)$-symmetric case with $\Lambda^x=\Lambda^y=\Lambda^z$.
We further assume that the Ward identities hold. As a consequence, the relation (cf. Appendix~\ref{app:asymptotefromward})
\begin{align}
\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2 X^{\alpha}_{\text{loc},\omega} =-2E_\text{kin}\label{eq:xloc_asymptote_3d}
\end{align}
holds in all channels $\alpha=0,x,y,z$.
On the impurity model, contrary to the case of Ising-type coupling, we now have separate relations for the charge and spin susceptibilities (Appendix~\ref{app:asymptotics:impurity}):
\begin{align}
\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2\chi^0_\omega &=-4\sum_\nu\Delta_\nu g_\nu,\label{eq:imp_asymptote_3dch}\\
\lim\limits_{\omega\rightarrow\infty}(\imath\omega)^2\chi^z_\omega &=-4\sum_\nu\Delta_\nu g_\nu+4\sum_{\omega',\alpha=x,y}\tilde{\Lambda}^\alpha_{\omega'}\chi^\alpha_{\omega'}.\label{eq:imp_asymptote_3dsp}
\end{align}
The corresponding relations for $\chi^{x,y}$ are obtained by permuting $x,y,z$ in~\eqref{eq:imp_asymptote_3dsp}.
We see that in the presence of retarded spin interactions the asymptote of the impurity spin susceptibility in Eq.~\eqref{eq:imp_asymptote_3dsp} no longer equals the kinetic energy.
In addition, the asymptotes of the charge and spin channels are different.
By virtue of the self-consistency condition, $\chi_{\omega}^{\alpha}=X_{\text{loc},\omega}^{\alpha}$, this must also hold for the asymptote of $X_{\text{loc},\omega}^{\alpha}$.
Eq.~\eqref{eq:xloc_asymptote_3d}, on the other hand, implies that the asymptote of the local susceptibility must be equal in all channels. We therefore conclude that there is no two-particle self-consistent approximation employing the self-consistency condition~\eqref{eq:2psc}, which at the same time is locally conserving
~\footnote{This only holds if $\sum_{\omega',\alpha=x,y}\tilde{\Lambda}^\alpha_{\omega'}\chi^\alpha_{\omega'}$ is nonzero. The term does not vanish in our calculations and can only do so for an unphysical $\tilde{\Lambda}^\alpha_{\omega}$ which changes its sign for different $\omega$ [as $\chi$ does not, cf. Eq.~\eqref{discretelambda}].}.
We note that in the case of Heisenberg-type coupling, the conclusions regarding the potential energy remain the same as in the Ising-type coupling. In particular the equation~\eqref{eq:dimp1_bfk} still holds.
We provide a numerical example of Eqs.~\eqref{eq:imp_asymptote_3dch} and~\eqref{eq:imp_asymptote_3dsp} in the right panel of Fig.~\ref{fig:doped:chi:imp:asymptote} in Appendix~\ref{app:asymptotics:impurity}.
\section{Ward identities and retarded spin-spin interactions}
\label{section:conservation_scdb}
In this section we show that the Ward identities of the Bose-Fermi-Kondo model are incompatible with the Ward identities~\eqref{eq:Ward} of the Hubbard model.
We identify this as the root cause of our earlier finding in the previous section, that no locally conserving approximation can be obtained in case of a Heisenberg-type coupling.
In the Hamiltonian formulation of the Bose-Fermi-Kondo model (cf. Appendix~\ref{bose_anderson}) the retarded interactions enter as density-boson couplings $\propto \phi^\alpha \rho^\alpha$.
Here, $\phi=b^\dagger+b$ are bosonic operators which commute with all fermions. Integrating out the bosons in the functional integral yields the effective impurity action~\eqref{BFK}.
Since the Ward identities are Green's function equivalents of the continuity equations, they describe conservation of the charge- and spin-currents.
These currents may be caused by kinetic and interaction contributions.
\begin{figure}[t]
\includegraphics[]{ward_3d}
\caption{(Color online) Test of Eq.~\eqref{eq:Wardimp} for an isotropic retarded spin-spin interaction $\tilde{\Lambda}^x=\tilde{\Lambda}^y=\tilde{\Lambda}^z$ and a retarded charge-charge interaction $\tilde{\Lambda}^0$.
The imaginary parts of the left-hand side (dashed black lines) and of the right-hand side (symbols) of Eq.~\eqref{eq:Wardimp} are shown at the first two bosonic Matsubara frequencies $\omega_{m=1,2}$.
Eq.~\eqref{eq:Wardimp} holds in the charge channel (open green symbols) but is violated in the spin channels (filled blue symbols).
This test was performed at $\beta=2$ and $U=6$ with a conducting bath $\Delta$.
The violation of Eq.~\eqref{eq:Wardimp} in the spin channels depends on the magnitude of $\tilde{\Lambda}^{x,y,z}$, which was chosen large for demonstration purposes.
The data shown in this figure were produced with the CTQMC solver presented in Ref.~\cite{Otsuki13}.}
\label{fig:ward3d}
\end{figure}
Regarding the latter, we notice two properties: (i) None of the retarded interactions contribute to the charge-current, that is $[\rho^{0},\phi^\alpha \rho^\alpha]=\phi^\alpha [n,\rho^\alpha]=0$ for $\alpha=0,x,y,z$.
(ii) The spin-current on the other hand has contributions from the retarded spin-spin interactions $\Lambda^\beta$ due to non-commutativity of the spin operators,
$[\rho^{\alpha},\phi^\beta \rho^\beta]=2\imath\phi^\beta\sum_\gamma\varepsilon_{\alpha\beta\gamma}\rho^\gamma$ for $\alpha,\beta=x,y,z$.
We show in Appendix~\ref{app:Wardlocal} that the resulting Ward identities contain an additional term that couples the retarded spin interaction to a three-particle correlation function~\cite{rostamiTBP}.
As a consequence, they cannot be brought into the form of the local Ward identities~\eqref{eq:Wardimp}.
We emphasize that this does not imply a violation of spin conservation in the Bose-Fermi-Kondo model.
The issue is instead that the Ward identities accounting for spin conservation simply have a form different from Eq.~\eqref{eq:Wardimp}.
It therefore seems plausible that conservation on the level of the BFK model does not imply that the local Ward identities~\eqref{eq:Wardimp} are fulfilled.
That they are indeed violated in general is illustrated numerically in Fig.~\ref{fig:ward3d} by plotting the left- and right-hand sides of Eq.~\eqref{eq:Wardimp} for finite $\tilde{\Lambda}^{x,y,z}$.
In order to understand the consequences for constructing conserving approximations based on an impurity model, we recall that in a local approximation to the self-energy and irreducible vertex function the local Ward identities~\eqref{eq:dmft_definition} are sufficient to guarantee that the approximation is conserving.
In the case of DMFT, with the self-consistency condition $G_{\text{loc}}=g$, they \emph{coincide} with the Ward identities of the AIM, so that DMFT is conserving. In the presence of retarded spin-spin interactions this is no longer the case and, as we have seen numerically, this equation is in general violated. This can be seen as follows: Eq.~\eqref{eq:Wardimp} implies that the tails of the local susceptibilities must be identical independent of the channel index $\alpha$. We show this in Appendices~\ref{app:asymptotefromward} and~\ref{app:WardAIM}. In the previous section we have seen, however, that for the Heisenberg-type coupling they differ because of the retarded interaction [cf. Eqs.~\eqref{eq:imp_asymptote_3dch} and~\eqref{eq:imp_asymptote_3dsp}]. Eq.~\eqref{eq:Wardimp} must therefore be violated and the approximation is not conserving.
In the case of Ising-type coupling, the retarded interaction $\Lambda^z$ in the longitudinal spin channel contributes to the currents in the transversal spin channels of the impurity.
The violation of the local Ward identities~\eqref{eq:Wardimp} thus affects only the transversal spin channels, while the longitudinal spin channel itself remains unaffected.
That the Ward identity in the longitudinal spin channel indeed holds under these circumstances is demonstrated in Fig.~\ref{fig:ward}.
\section{An example: Two-particle self-consistent DMFT}
\label{section:scdb}
\begin{figure}[t]
\includegraphics[]{ward}
\caption{(Color online) Test of Eq.~\eqref{eq:Wardimp} for a retarded spin-spin interaction $\tilde{\Lambda}^z$ in the $z$-channel
and a retarded charge-charge interaction $\tilde{\Lambda}^0$. Eq.~\eqref{eq:Wardimp} holds in the channels $\alpha=0,z$.
Parameters as in Fig.~\ref{fig:ward3d}, except $\tilde{\Lambda}^x=\tilde{\Lambda}^y=0$.}
\label{fig:ward}
\end{figure}
We have discussed the general conditions for a conserving and two-particle self-consistent approximation in Sec.~\ref{sec:2psc}. Here we construct a concrete example.
As we have seen, an approximation that satisfies the two-particle self-consistency condition~\eqref{eq:2psc} can be conserving in the charge and at most one of the spin channels. One may refer to this approximation as two-particle self-consistent DMFT.
We compute the lattice susceptibility in this approach according to
\begin{align}
X^{\alpha}_{\ensuremath{\mathbf{q}}\omega}=\left[\left(X^{\text{DMFT},\alpha}_{\ensuremath{\mathbf{q}}\omega}\right)^{-1}+\Lambda^{\alpha}_\omega\right]^{-1}.\label{eq:xdb}
\end{align}
The particular form~\eqref{eq:xdb} of the susceptibility can be motivated in the DB approach~\cite{Rubtsov12}. In this form the retarded interaction is reminiscent of the Moriya~$\Lambda$ correction employed in D$\Gamma$A.
Here $\Lambda$ however depends on frequency, while in D$\Gamma$A it is instantaneous.
We emphasize that the way of calculating the susceptibility and its particular form do not change the conserving character of the theory (see Sec.~\ref{section:sz-coupling} and results in Sec.~\ref{sec:results} below), but will of course affect the results.
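A minimal implementation of Eq.~\eqref{eq:xdb} amounts to elementwise arithmetic once $X^{\text{DMFT},\alpha}$ and $\Lambda^\alpha_\omega$ are known:

```python
import numpy as np

def corrected_susceptibility(X_dmft, lam):
    """Lattice susceptibility of Eq. (xdb): X = [(X^DMFT)^(-1) + Lambda]^(-1).

    X_dmft may be an array over (frequency, momentum); lam is the
    frequency-dependent (Moriya-like, but dynamical) correction.
    """
    return 1.0 / (1.0 / X_dmft + lam)
```

For $\Lambda^\alpha_\omega=0$ the DMFT susceptibility is recovered.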
\begin{figure}[t]
\includegraphics[]{x_wn1}
\caption{(Color online) Susceptibility at the first Matsubara frequency in the two-particle self-consistent DMFT. Shown is a momentum cross-section at $q_y=0$ for different values of $U$ ($\beta=2$).
The vanishing of the susceptibility for $\ensuremath{\mathbf{q}}\rightarrow 0$ with $X\sim |\ensuremath{\mathbf{q}}|^2$ is a necessary condition for global conservation, see text.}
\label{fig:x_wn1}
\end{figure}
In the above, $X^{\text{DMFT},\alpha}$ denotes the susceptibility computed as in DMFT in the standard way~\cite{Georges96}, including vertex corrections.
This amounts to approximating the irreducible vertex function of the lattice with its local counterpart on the impurity, $\Gamma^{\alpha}_{kk'q}\equiv\gamma^{\alpha}_{\nu\nu'\omega}$, in the channels $\alpha=0,z$.
We compute the generalized susceptibility from the integral equation $X_{kk'q}^{\alpha}=G_{k}G_{k+q}\left[\beta\delta_{kk'}-\sum_{k''}\gamma_{\nu\nu''\omega}^{\alpha}X_{k''k'q}^\alpha\right]$. The susceptibilities are obtained from the latter by tracing out $k,k'$: $X^{\text{DMFT},\alpha}_q=2\sum_{kk'}X_{kk'q}^{\alpha}$.
We emphasize that the label 'DMFT' merely indicates that $X^{\text{DMFT},\alpha}$ is computed as in DMFT. Its value will differ from the DMFT susceptibility, because the impurity model is different.
The BFK model can be solved accurately using a suitably generalized continuous-time quantum Monte Carlo (CTQMC) algorithm. In weak-coupling CTQMC, the inclusion of these terms is straightforward~\cite{Rubtsov05}. In strong-coupling CTQMC the impurity model can be solved in the segment representation when only $\Lambda^{0}$ or $\Lambda^{z}$ are included~\cite{Werner07,Werner10,Hafermann13}. For the general case of a vector bosonic field (not considered in the numerical results of this section), the algorithm simultaneously performs a hybridization expansion and an interaction expansion with respect to the spin-off-diagonal interactions $\Lambda^{x,y}$~\cite{Otsuki13}.
Here we compute the correlation functions $g_\nu$ and $\chi^\alpha_\omega$, the self-energy $\Sigma_\nu$ and the irreducible vertex function $\gamma_{\nu\nu'\omega}^{\alpha}$ for $\alpha=0,z$ using a strong coupling quantum Monte Carlo solver~\cite{Hafermann13} with improved estimators adapted to treat the retarded interactions~\cite{Hafermann12,Hafermann14}.
\begin{figure}[t]
\centering
\begin{minipage}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{chi_asymptote_dmft}
\end{minipage}
\hfill
\begin{minipage}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{chi_asymptote_scdb}
\end{minipage}
\caption{\label{fig:dmft:asymptote} (Color online) High frequency behavior of $\chi$ (open triangles, bold lines) and $X_{\text{loc}}$ (filled triangles, dashed lines) in DMFT (left) and in the two-particle self-consistent DMFT (right).
The dashed black lines indicate the asymptotes $-2E_\text{kin}/\omega^2$ computed from~\eqref{eq:imp_asymptote}.
The charge (green) and spin (blue) susceptibility approach the same asymptote in both approximations.
}
\end{figure}
The calculation procedure is as follows: We start from initial values for the hybridization $\Delta_{\nu}$ and retarded interactions $\Lambda^{\alpha}_{\omega}$, which specify the BFK impurity model~\eqref{BFK}. After solving the model, we evaluate the lattice susceptibility~\eqref{eq:xdb}. The Green's function is computed from the impurity self-energy in the same way as in DMFT:
\begin{align}
G_{\ensuremath{\mathbf{k}}\nu}^{-1} = i\nu+\mu-\epsilon_{\ensuremath{\mathbf{k}}}-\Sigma_{\nu}^{\text{imp}}.
\end{align}
The local parts of $G_{\ensuremath{\mathbf{k}}\nu}$ and $X^{\alpha}_{\ensuremath{\mathbf{q}}\omega}$ will in general be different from the impurity quantities $g_\nu$ and $\chi^{\alpha}_{\omega}$. We update the hybridization $\Delta_{\nu}$ and retarded interactions $\Lambda^{0}_{\omega}$, $\Lambda^{z}_{\omega}$ simultaneously and iteratively, until the conditions $G_{\text{loc},\nu}=g_\nu$ and $X^\alpha_{\text{loc},\omega}=\chi^\alpha_\omega$ for $\alpha=0,z$ are satisfied.
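The procedure can be summarized in the following schematic loop. The impurity solver, its return values and the simple mixing updates are hypothetical placeholders; an actual implementation would use a CTQMC solver for the BFK model and more careful update and convergence schemes:

```python
import numpy as np

def two_particle_scdmft_loop(solve_impurity, eps_k, mu, beta,
                             delta, lam, n_iter=30, mixing=0.5):
    """Skeleton of the iteration described in the text (illustrative sketch).

    solve_impurity(delta, lam) is assumed to return (g, sigma, chi, X_dmft):
    impurity Green's function and self-energy on the fermionic grid, impurity
    susceptibility on the bosonic grid, and X^DMFT over (frequency, momentum).
    """
    nu = (2 * np.arange(-len(delta) // 2, len(delta) // 2) + 1) * np.pi / beta
    for _ in range(n_iter):
        g, sigma, chi, X_dmft = solve_impurity(delta, lam)
        # lattice Green's function from the local self-energy, as in DMFT
        G = 1.0 / (1j * nu[:, None] + mu - eps_k[None, :] - sigma[:, None])
        G_loc = G.mean(axis=1)
        # corrected lattice susceptibility, Eq. (xdb), and its local part
        X = 1.0 / (1.0 / X_dmft + lam[:, None])
        X_loc = X.mean(axis=1)
        # update hybridization and retarded interactions simultaneously
        delta = delta + mixing * (1.0 / g - 1.0 / G_loc)
        lam = lam + mixing * (1.0 / chi - 1.0 / X_loc)
    return delta, lam
```

For a non-interacting stub solver ($\Sigma=0$, trivial susceptibilities) the loop reduces to the standard DMFT hybridization update and converges to $\Delta_\nu=\imath\nu+\mu-G_{\text{loc},\nu}^{-1}$.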
\subsection{Numerical results}
\label{sec:results}
Let us now turn to the discussion of numerical results of the two-particle self-consistent DMFT.
In the following we use parameters $U=6$, $T=0.5$ (in units of $t$), which is somewhat above the DMFT N\'eel temperature $T_{\text{N}}\approx 0.35$.
In Fig.~\ref{fig:ward} we illustrate numerically that contrary to Heisenberg-type coupling (cf. Fig.~\ref{fig:ward3d}) the local Ward identities~\eqref{eq:Wardimp} hold in the considered channels, $\alpha=0,z$.
As shown in Appendix~\ref{app:asymptotefromward}, this implies $X^{\text{DMFT},\alpha}_{\ensuremath{\mathbf{q}}=0,\omega\neq 0}=0$. Inserting this into Eq.~\eqref{eq:xdb} it follows that $X^\alpha_{\ensuremath{\mathbf{q}}=0,\omega\neq 0}=0$, which is a necessary condition for conservation of the total density, i.e., $\omega\rho_{\ensuremath{\mathbf{q}}=0,\omega}=0$.
Thus, $X^0$ and $X^z$ are at least globally conserving.
\begin{figure}
\includegraphics[width=0.49\textwidth]{doped_sz_susc}
\caption{Top: Static susceptibilities of the Hubbard model~\eqref{hubbardmodel} at $U=6, \beta=2$
in DMFT (triangles) and in the 2P self-consistent approximation~\eqref{eq:xdb} (circles) as a function of the density $\langle n\rangle$.
Top left: Lattice susceptibility $X^z_{\mathbf{Q},\omega=0}$ at $\mathbf{Q}=(\pi,\pi)$. Top right: Local lattice ($X^z_{\text{loc}}$) and impurity ($\chi^z$) susceptibility.
In DMFT (open and filled triangles), the local susceptibility $X^z_{\text{loc}}$ (full triangles, dashed lines) is larger than the impurity susceptibility $\chi^z$ (open triangles, bold lines). They coincide in the 2P self-consistent approximation (open and filled circles). Bottom, left: Static component of the retarded spin-spin interaction $\Lambda^z_{\omega=0}$. Bottom, right: Effective Hubbard repulsion $\tilde{U}$.
}
\label{fig:doped:X:sz}
\end{figure}
Fig.~\ref{fig:x_wn1} illustrates that $X^0$ and $X^z$ indeed vanish in the limit $|\ensuremath{\mathbf{q}}|\to 0$ for finite frequencies ($\omega_{m=1}$ in this case).
We note that due to the retarded spin interaction $\tilde{\Lambda}^z$ this approximation is not conserving in the $x$ and $y$ channels (cf. discussion in Sec.~\ref{section:conservation_scdb} and Appendix~\ref{app:Wardlocal}).
The right panel of Fig.~\ref{fig:dmft:asymptote} demonstrates the equivalence of the impurity and local lattice susceptibility in two-particle self-consistent DMFT.
In this approximation the charge and the longitudinal spin susceptibility approach the same asymptote.
This is required by the conservation laws and also satisfied in DMFT (see left panel, cf. Sec.~\ref{sec:DMFT} and~\ref{section:sz-coupling}).
In the top left panel of Fig.~\ref{fig:doped:X:sz} we compare the static spin susceptibility $X_{\mathbf{Q},\omega=0}$ at $\mathbf{Q}=(\pi,\pi)$ in DMFT,
two-particle self-consistent DMFT and the quantity $X_{\mathbf{Q},\omega=0}^{\text{DMFT}}$ in~\eqref{eq:xdb} at and near half-filling.
The increase of $X_{\mathbf{Q},\omega=0}^{\text{DMFT}}$ compared to its value in standard DMFT is consistent with an enhanced local interaction on the impurity model $\tilde{U}-U=\Lambda^0_\infty-\Lambda^z_\infty>0$ as seen in the bottom right panel of Fig.~\ref{fig:doped:X:sz}.
Concomitantly, we also find a larger leading eigenvalue of the Bethe-Salpeter equation in the two-particle self-consistent DMFT (not shown).
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{docc}
\caption{\label{fig:docc} (Color online) Double occupancy as a function of filling. The figure illustrates the inconsistency of $d_\text{imp}$ and $d^{\text{2P}}_\text{lat}$ in DMFT and of $d_\text{imp}$ and $d^{\text{1P}}_\text{lat}$ in 2PSC.
The impurity double occupancy computed from the susceptibilities~\eqref{eq:dimp2} and the Migdal-Galitskii formula~\eqref{eq:dimp1_bfk} yield the same result because we solve the impurity model exactly.}
\end{figure}
In the top right panel we see that the local susceptibility of the converged solution lies between the two values of DMFT.
The effect of the two-particle self-consistency is larger close to half-filling, where antiferromagnetic fluctuations are strongest.
Compared to DMFT, the increase of $\chi_{\omega=0}$ in the two-particle self-consistent method correlates with the large enhancement $U\rightarrow\tilde{U}$ of the on-site interaction in the impurity model.
Finally, we see in the top left panel of Fig.~\ref{fig:doped:X:sz} that $X^z_{\mathbf{Q},\omega=0}$ is significantly reduced compared to $X^{\text{DMFT}}$ due to $\Lambda$ which acts as a cutoff.
This reduction (marked by downward arrows) becomes larger with a larger absolute value of the cutoff $\Lambda_{\omega=0}$, as can be seen in the bottom left panel.
While the lattice susceptibility~\eqref{eq:xdb} could in principle be defined without the dynamical cutoff $\Lambda$, the solution including the cutoff gives results closer to benchmarks:
The double occupancy computed from the susceptibilities according to Eq.~\eqref{eq:dlat2} or~\eqref{eq:dimp2} gives a result that is closer to DCA benchmarks than either of the two values that are obtained in DMFT~\cite{vanLoon16}.
Despite two-particle self-consistency, the double occupancy is nevertheless inconsistent between one- and two-particle level, as discussed in Sec.~\ref{section:sz-coupling} and demonstrated in Fig.~\ref{fig:docc}.
We note that this approximation has several issues:
First, the dynamic part of the retarded interaction in the spin channel $\tilde{\Lambda}_\omega=\Lambda_\omega-\Lambda_\infty$ is positive.
This corresponds to negative energies of the bosons [cf.~\eqref{discretelambda}] and is unphysical. The impurity model can nevertheless be solved with the QMC solver in the segment picture.
Second, the asymptotic behavior of the self-energy is modified by the retarded interactions~\cite{Hafermann14}. Since DMFT produces the correct asymptotic behavior~\cite{Rohringer16}, the high-frequency tail of the self-energy in this approximation is no longer exact.
Finally, even though the approximation suppresses a magnetic phase transition in two dimensions, the momentum-independent cutoff leads to an unphysical plateau of the susceptibility $X_{\ensuremath{\mathbf{q}},\omega=0}=\Lambda^{-1}_{\omega=0}$ for all momenta $\ensuremath{\mathbf{q}}$ in the vicinity of $\vc{Q}$ for which $X^{\text{DMFT}}_{\ensuremath{\mathbf{q}},\omega=0}$ diverges when approaching the DMFT N\'eel temperature.
\section{Discussion}
\label{sec:discussion}
We have discussed the conservation of charge and spin in two-particle self-consistent extensions of DMFT for the Hubbard model.
For large interaction, the Hubbard model approximately maps to the Heisenberg model and is hence dominated by spin fluctuations.
The motivation for including a retarded spin-spin interaction into the impurity model is to account for these fluctuations.
As we have seen however, introducing a retarded interaction in the longitudinal spin channel leads to a violation of conservation in the transversal spin channels of the lattice approximation.
Moreover, a retarded interaction in all three spin channels violates conservation on the lattice in all spin channels (cf. Table~\ref{table:conservation}).
We have argued that this is related to the fact that the Ward identities of the lattice and impurity are incompatible.
To make sense of this physically, we recall that an interaction only conserves the local charge- or spin-density if it commutes with the corresponding observable.
The retarded charge-charge interaction commutes with the charge density and therefore preserves charge on the impurity.
In other words, the bosons that mediate the retarded interaction do not carry a charge.
On the other hand, the transversal components of the retarded spin-spin interaction do \emph{not} commute with the longitudinal spin density operator.
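In terms of the standard spin algebra (with $\hbar=1$; these are general operator identities, not specific to the present model),
\begin{equation*}
[\hat{n},\hat{n}]=0,\qquad [\hat{S}^{z},\hat{S}^{\pm}]=\pm\hat{S}^{\pm}\neq 0 .
\end{equation*}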
Consequently, the spin bosons carry spin: Acting with the operator $S^{+}$ or $S^{-}$ on the impurity flips the spin of an electron by one quantum, which is carried by the boson. This leads to spin currents onto and off the impurity, which manifest themselves in the impurity Ward identities.
These currents have no analogue in the Hubbard model, where the motion of spin inevitably involves motion of charge. The latter is accounted for by the fermionic hybridization function.
In essence, the introduction of the retarded spin-spin interactions in order to achieve two-particle self-consistency causes a ``spin leak'' in the lattice approximation.
One may speculate that if the interaction of a lattice model conserves a local density, the local reference system should have the same property.
\begin{table}[t]
\begin{tabular}{ l | c || c | c || c | c | c}
& $\Lambda$ & $d^{\text{1P}}$ & $d^{\text{2P}}$ & charge & spin-$z$ & spin-$x,y$ \\\hline
DMFT & - & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & - & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; \\\hline
Ising & $0,z$ & - & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & - \\\hline
Heisenberg & $0,x,y,z$ & - & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & \tikz\fill[scale=0.4,color=red](0,.35) -- (.25,0) -- (1,.6) -- (.25,.15) -- cycle; & - & - \\\hline
\end{tabular}
\caption{\label{table:conservation} A summary of the main results for DMFT and 2PSC methods, respectively.
The second column indicates which retarded interactions $\Lambda^\alpha$ act on the impurity.
The third and fourth column show if the double occupancy $d$ is consistent between impurity and lattice on the 1P and 2P level, respectively [see Sec.~\ref{sec:DMFT} and~\ref{sec:2psc}].
The remaining columns list the channels in which local conservation is satisfied [see Eqs.~\eqref{eq:Ward}-\eqref{eq:Wardimp} for DMFT, and Sec.~\ref{section:3d-coupling} and~\ref{section:conservation_scdb} for 2PSC].
}
\end{table}
Even though the Hubbard model maps to a Heisenberg model for strong coupling, spin conservation is violated due to the exchange interaction on the impurity.
This is no contradiction because the Heisenberg model is an effective low-energy model.
The Ward identities however imply the equivalence of the charge- and spin susceptibilities and excitations at high energies for any finite value of $U$ [cf. Eq.~\eqref{eq:xloc_asymptote} in Sec.~\ref{sec:DMFT}]. Indeed, the effective exchange coupling $J=-4t^{2}/U$ involves two hopping processes and thus virtual high-energy charge excitations.
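As a reminder, this coupling arises in second-order perturbation theory in $t/U$: two antiparallel neighboring spins gain the virtual-hopping energy
\begin{equation*}
\Delta E_{\uparrow\downarrow}=-\frac{2t^{2}}{U},\qquad \Delta E_{\uparrow\uparrow}=0,
\end{equation*}
with an intermediate doubly occupied site of energy $U$; mapping this onto a Heisenberg coupling of the neighboring spins reproduces the textbook superexchange magnitude $|J|=4t^{2}/U$.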
In the $t$-$J$ model, on the other hand, part of the spin currents is caused by the exchange interaction on the lattice. This part is decoupled from the charge current.
Contrary to the Hubbard model, a spin current on the impurity that is decoupled from the charge current a priori poses no problem and is even necessary.
However, there remains the problem of finding a two-particle self-consistency condition which satisfies the Ward identities of the $t$-$J$ model.
This brings us to a last point, namely a possible way out of this dilemma.
We recall that our conclusions about the conserving character of the considered approximations follow from the two-particle self-consistency condition $\chi^{\alpha}_{\omega} = X_{\text{loc},\omega}^{\alpha}$, which seems like a natural choice.
We can therefore not rule out the possibility that a different prescription exists, such that conservation is satisfied.
In view of the above arguments, this seems unlikely in case of the Hubbard model, but more promising for the $t$-$J$ model.
\section{Conclusions}
\label{sec:conclusions}
We have investigated the interplay between the requirement of conservation of an approximation and two-particle self-consistency.
Retarded interactions are required to enforce two-particle self-consistency, but their presence leads to problems. While the ambiguity in computing the DMFT double occupancy from two-particle quantities is resolved, the retarded interaction instead introduces an ambiguity in the calculation of the double occupancy from single-particle quantities.
More importantly, the Ward identities of the resulting impurity model are no longer compatible with the lattice Ward identities.
As a consequence, we found that it is impossible to construct a two-particle self-consistent approximation to the Hubbard model which simultaneously fulfills the lattice Ward identities in the charge and all spin channels.
A conserving two-particle self-consistent approximation can be obtained when restricting self-consistency to the charge and one of the spin channels.
We have used this to construct a two-particle self-consistent version of DMFT, which provably obeys global conservation laws and which resolves the ambiguity in the calculation of the double occupancy on the two-particle level.
While this approximation suppresses a magnetic phase transition in two dimensions and yields results for the double occupancy which are closer to benchmarks than either of the two DMFT values,
it however has several issues which make it impractical, in particular at low temperature.
Our results imply constraints for the construction of two-particle self-consistent diagrammatic extensions.
Finally, we have seen that DMFT arises naturally when constructing a conserving approximation based on the Anderson impurity model.
It may be possible to derive the cellular DMFT from the Ward identities, avoiding the cavity construction.
\acknowledgments
We thank the referees for constructive suggestions that have led to an improvement of this work.
F.K. likes to thank G. Rohringer and A. Toschi for a useful discussion on Ward identities.
F.K. and A.L. are supported by the DFG-SFB668 program. E.G.C.P.v.L. and M.I.K. acknowledge support from ERC Advanced Grant 338957 FEMTO/NANO.
The auxiliary impurity model was solved using a modified version of the open source CT-HYB solver~\cite{Hafermann13} based on the ALPS libraries~\cite{ALPS2}.
Computational resources were provided by the HLRN-cluster under Project No. hhp 00030.
Q: Sliding side menu in PyQt. I have the application code below, and I want a menu with items 1, 2, 3, etc. to open or slide out from left to right when pushButton_13 is pressed.
Is there any way to accomplish this?
Code of nvutimain.py:
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from nvutidesign import Ui_MainWindow


class ExampleApp(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = ExampleApp()
    window.show()
    sys.exit(app.exec_())
nvutidesign.py
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'e:/nvuti/nvutidesign.ui'
#
# Created by: PyQt5 UI code generator 5.14.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(870, 569)
MainWindow.setMinimumSize(QtCore.QSize(870, 569))
MainWindow.setMaximumSize(QtCore.QSize(870, 569))
font = QtGui.QFont()
font.setStyleStrategy(QtGui.QFont.PreferAntialias)
MainWindow.setFont(font)
MainWindow.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
MainWindow.setStyleSheet("background-color: qlineargradient(spread:pad, x1:0.011, y1:1, x2:1, y2:0, stop:0 rgba(182, 29, 212, 255), stop:1 rgba(48, 7, 182, 255));\n"
"")
MainWindow.setAnimated(True)
MainWindow.setDocumentMode(False)
MainWindow.setTabShape(QtWidgets.QTabWidget.Rounded)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setStyleSheet("")
self.centralwidget.setObjectName("centralwidget")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(170, 380, 221, 71))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(20)
font.setBold(False)
font.setWeight(50)
self.pushButton.setFont(font)
self.pushButton.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton.setStyleSheet("background-color: rgb(141, 142, 161);\n"
"border-radius: 10px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"\n"
"\n"
"")
self.pushButton.setObjectName("pushButton")
self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_2.setGeometry(QtCore.QRect(470, 380, 221, 71))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(20)
font.setBold(False)
font.setWeight(50)
self.pushButton_2.setFont(font)
self.pushButton_2.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_2.setStyleSheet("background-color: rgb(141, 142, 161);\n"
"border-radius: 10px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_2.setObjectName("pushButton_2")
self.label_3 = QtWidgets.QLabel(self.centralwidget)
self.label_3.setGeometry(QtCore.QRect(330, 220, 201, 23))
font = QtGui.QFont()
font.setPointSize(14)
self.label_3.setFont(font)
self.label_3.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label_3.setAlignment(QtCore.Qt.AlignCenter)
self.label_3.setObjectName("label_3")
self.label_4 = QtWidgets.QLabel(self.centralwidget)
self.label_4.setGeometry(QtCore.QRect(20, 80, 781, 141))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(72)
self.label_4.setFont(font)
self.label_4.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(97, 118, 255);\n"
"\n"
"")
self.label_4.setAlignment(QtCore.Qt.AlignCenter)
self.label_4.setObjectName("label_4")
self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)
self.lineEdit.setGeometry(QtCore.QRect(170, 260, 221, 51))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(15)
font.setStyleStrategy(QtGui.QFont.PreferAntialias)
self.lineEdit.setFont(font)
self.lineEdit.setStyleSheet("background-color: rgb(255, 255, 255);\n"
"border-radius: 10px;\n"
"border: 2px solid gray; \n"
"border-radius: 10px;\n"
"pIntValidator.setRange(1, 95)")
self.lineEdit.setMaxLength(99999)
self.lineEdit.setAlignment(QtCore.Qt.AlignCenter)
self.lineEdit.setReadOnly(False)
self.lineEdit.setClearButtonEnabled(True)
self.lineEdit.setObjectName("lineEdit")
self.pushButton_3 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_3.setGeometry(QtCore.QRect(210, 460, 61, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_3.setFont(font)
self.pushButton_3.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_3.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_3.setObjectName("pushButton_3")
self.pushButton_4 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_4.setGeometry(QtCore.QRect(280, 460, 61, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_4.setFont(font)
self.pushButton_4.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_4.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_4.setObjectName("pushButton_4")
self.pushButton_5 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_5.setGeometry(QtCore.QRect(230, 490, 41, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_5.setFont(font)
self.pushButton_5.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_5.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_5.setObjectName("pushButton_5")
self.pushButton_6 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_6.setGeometry(QtCore.QRect(280, 490, 41, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_6.setFont(font)
self.pushButton_6.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_6.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_6.setObjectName("pushButton_6")
self.pushButton_7 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_7.setGeometry(QtCore.QRect(590, 460, 61, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_7.setFont(font)
self.pushButton_7.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_7.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_7.setObjectName("pushButton_7")
self.pushButton_8 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_8.setGeometry(QtCore.QRect(520, 460, 61, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_8.setFont(font)
self.pushButton_8.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_8.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_8.setObjectName("pushButton_8")
self.pushButton_9 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_9.setGeometry(QtCore.QRect(590, 490, 41, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_9.setFont(font)
self.pushButton_9.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_9.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_9.setObjectName("pushButton_9")
self.pushButton_10 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_10.setGeometry(QtCore.QRect(540, 490, 41, 21))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_10.setFont(font)
self.pushButton_10.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_10.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_10.setObjectName("pushButton_10")
self.label_5 = QtWidgets.QLabel(self.centralwidget)
self.label_5.setGeometry(QtCore.QRect(10, 500, 841, 61))
font = QtGui.QFont()
font.setPointSize(50)
font.setStyleStrategy(QtGui.QFont.PreferAntialias)
self.label_5.setFont(font)
self.label_5.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.label_5.setAlignment(QtCore.Qt.AlignCenter)
self.label_5.setObjectName("label_5")
self.label_6 = QtWidgets.QLabel(self.centralwidget)
self.label_6.setGeometry(QtCore.QRect(400, 480, 61, 16))
font = QtGui.QFont()
font.setPointSize(13)
font.setStyleStrategy(QtGui.QFont.PreferAntialias)
self.label_6.setFont(font)
self.label_6.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(231, 231, 231);")
self.label_6.setObjectName("label_6")
self.pushButton_11 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_11.setGeometry(QtCore.QRect(320, 20, 101, 31))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_11.setFont(font)
self.pushButton_11.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_11.setStyleSheet("background-color: rgb(162, 162, 162);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_11.setObjectName("pushButton_11")
self.pushButton_12 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_12.setGeometry(QtCore.QRect(440, 20, 101, 31))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(9)
font.setBold(False)
font.setWeight(50)
self.pushButton_12.setFont(font)
self.pushButton_12.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor))
self.pushButton_12.setStyleSheet("background-color: rgb(170, 170, 170);\n"
"border-radius: 5px;\n"
"color: rgb(255, 255, 255);\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(138, 138, 138);\n"
"")
self.pushButton_12.setObjectName("pushButton_12")
self.label_7 = QtWidgets.QLabel(self.centralwidget)
self.label_7.setGeometry(QtCore.QRect(0, 550, 61, 21))
self.label_7.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label_7.setObjectName("label_7")
self.label_2 = QtWidgets.QLabel(self.centralwidget)
self.label_2.setGeometry(QtCore.QRect(260, 350, 28, 23))
font = QtGui.QFont()
font.setPointSize(14)
self.label_2.setFont(font)
self.label_2.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label_2.setAlignment(QtCore.Qt.AlignCenter)
self.label_2.setObjectName("label_2")
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setGeometry(QtCore.QRect(507, 350, 161, 23))
font = QtGui.QFont()
font.setPointSize(14)
self.label.setFont(font)
self.label.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label.setAlignment(QtCore.Qt.AlignCenter)
self.label.setObjectName("label")
self.spinBox = QtWidgets.QSpinBox(self.centralwidget)
self.spinBox.setGeometry(QtCore.QRect(470, 260, 221, 51))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(15)
self.spinBox.setFont(font)
self.spinBox.setStyleSheet("background-color: rgb(255, 255, 255);\n"
"border-radius: 10px;\n"
"border: 2px solid gray;\n"
"border-radius: 10px;")
self.spinBox.setWrapping(False)
self.spinBox.setAlignment(QtCore.Qt.AlignCenter)
self.spinBox.setButtonSymbols(QtWidgets.QAbstractSpinBox.NoButtons)
self.spinBox.setPrefix("")
self.spinBox.setMinimum(1)
self.spinBox.setMaximum(95)
self.spinBox.setSingleStep(1)
self.spinBox.setProperty("value", 90)
self.spinBox.setObjectName("spinBox")
self.label_8 = QtWidgets.QLabel(self.centralwidget)
self.label_8.setGeometry(QtCore.QRect(490, 310, 211, 16))
font = QtGui.QFont()
font.setFamily("Arial")
font.setPointSize(10)
font.setStyleStrategy(QtGui.QFont.PreferAntialias)
self.label_8.setFont(font)
self.label_8.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label_8.setObjectName("label_8")
self.label_9 = QtWidgets.QLabel(self.centralwidget)
self.label_9.setGeometry(QtCore.QRect(480, 310, 47, 21))
font = QtGui.QFont()
font.setPointSize(12)
self.label_9.setFont(font)
self.label_9.setStyleSheet("background-color: rgb(alpha 0.5);\n"
"color: rgb(255, 255, 255);")
self.label_9.setObjectName("label_9")
self.lineEdit_3 = QtWidgets.QLineEdit(self.centralwidget)
self.lineEdit_3.setGeometry(QtCore.QRect(0, 0, 21, 20))
self.lineEdit_3.setStyleSheet("background-color: rgb(255, 255, 255);\n"
"border-radius: 1px;")
self.lineEdit_3.setReadOnly(True)
self.lineEdit_3.setObjectName("lineEdit_3")
self.pushButton_13 = QtWidgets.QPushButton(self.centralwidget)
self.pushButton_13.setGeometry(QtCore.QRect(0, 0, 51, 41))
self.pushButton_13.setStyleSheet("background-color: rgb(255, 255, 255);\n"
"border-radius: 5px;\n"
"}\n"
"\n"
"QPushButton:hover {\n"
" background-color: rgb(191, 191, 191);\n"
"}\n"
"\n"
"QPushButton:clicked {\n"
" background-color: rgb(255, 255, 0);\n"
"\n"
"")
self.pushButton_13.setText("")
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("e:/nvuti\\123.png"), QtGui.QIcon.Normal, QtGui.QIcon.Off)
self.pushButton_13.setIcon(icon)
self.pushButton_13.setIconSize(QtCore.QSize(55, 55))
self.pushButton_13.setFlat(False)
self.pushButton_13.setObjectName("pushButton_13")
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "Nvuti"))
self.pushButton.setText(_translate("MainWindow", "Меньше"))
self.pushButton_2.setText(_translate("MainWindow", "Больше"))
self.label_3.setText(_translate("MainWindow", "Возможный выигрыш"))
self.label_4.setText(_translate("MainWindow", "0.00"))
self.lineEdit.setPlaceholderText(_translate("MainWindow", "Сумма"))
self.pushButton_3.setText(_translate("MainWindow", "Удвоить"))
self.pushButton_4.setText(_translate("MainWindow", "Половина"))
self.pushButton_5.setText(_translate("MainWindow", "Макс."))
self.pushButton_6.setText(_translate("MainWindow", "Мин."))
self.pushButton_7.setText(_translate("MainWindow", "Половина"))
self.pushButton_8.setText(_translate("MainWindow", "Удвоить"))
self.pushButton_9.setText(_translate("MainWindow", "Мин."))
self.pushButton_10.setText(_translate("MainWindow", "Макс."))
self.label_5.setText(_translate("MainWindow", "700"))
self.label_6.setText(_translate("MainWindow", "Баланс"))
self.pushButton_11.setText(_translate("MainWindow", "Пополнить"))
self.pushButton_12.setText(_translate("MainWindow", "Вывести"))
self.label_7.setText(_translate("MainWindow", "Nvuti 2020©"))
self.label_2.setText(_translate("MainWindow", "0-0"))
self.label.setText(_translate("MainWindow", "999999 - 999999"))
self.spinBox.setSuffix(_translate("MainWindow", "%"))
self.label_8.setText(_translate("MainWindow", "Максимальный процент - 95%"))
self.label_9.setText(_translate("MainWindow", "*"))
Something like in Skype.
A: You are not using layouts, so it is premature to talk about anything sliding.
As for a menu, here you go; just remove everything related to pushButton_13 from the nvutidesign.py module, and note lineEdit_3, which was hidden behind pushButton_13: I shifted it out a little so that you would notice it.
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from nvutidesign import Ui_MainWindow


class ExampleApp(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.menu_bar = QtWidgets.QMenuBar(self.centralwidget)
        self.menu_bar.setGeometry(QtCore.QRect(1, 1, 51, 21))
        self.menu = QtWidgets.QMenu('MENU', self)
        self.menu_action = QtWidgets.QAction('Option #1', self)
        # setData sets the action's internal data to the given userData.
        self.menu_action.setData('option1')
        self.menu_action2 = QtWidgets.QAction('Option #2', self)
        self.menu_action3 = QtWidgets.QAction('Option #3', self)
        self.menu_action.triggered.connect(self.actionClicked)
        self.menu_action2.triggered.connect(self.actionClicked)
        self.menu_action3.triggered.connect(self.actionClicked)
        self.menu.addAction(self.menu_action)
        self.menu.addAction(self.menu_action2)
        self.menu.addAction(self.menu_action3)
        self.menu_bar.addMenu(self.menu)

    def actionClicked(self):
        action = self.sender()
        print(action.text())
        print(action.data())


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = ExampleApp()
    window.show()
    sys.exit(app.exec_())
Can the size of Option #1, #2 and #3 be changed, in width and height?
Yes, that is entirely possible.
import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from nvutidesign import Ui_MainWindow


class ExampleApp(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self):
        super().__init__()
        self.setupUi(self)
        self.menu_bar = QtWidgets.QMenuBar(self.centralwidget)
        # Do not use the platform's native menu bar (this matters on macOS).
        self.menu_bar.setNativeMenuBar(False)
        self.menu_bar.setGeometry(QtCore.QRect(1, 1, 51, 21))
        self.menu = QtWidgets.QMenu('MENU', self)
        self.menu_action = QtWidgets.QAction('Option #1', self)
        # setData sets the action's internal data to the given userData.
        self.menu_action.setData('option1')
        self.menu_action2 = QtWidgets.QAction('Option #2', self)
        self.menu_action3 = QtWidgets.QAction('Option #3', self)
        self.action4 = QtWidgets.QWidgetAction(self)
        self.label4 = QtWidgets.QLabel('Hello \nWorld')
        self.action4.setDefaultWidget(self.label4)
        self.action4.setText('Hello World')
        self.label4.setStyleSheet("""
            QLabel {
                background-color: red;
                padding: 10px 12px 10px 12px;
                min-width: 200px;
                color: #000;
                font: italic bold 16px;
            }
            QLabel:hover { background-color: #C10000; color: #fff; }
        """)
        self.menu_action.triggered.connect(self.actionClicked)
        self.menu_action2.triggered.connect(self.actionClicked)
        self.menu_action3.triggered.connect(self.actionClicked)
        self.action4.triggered.connect(self.actionClicked)
        self.menu.addAction(self.menu_action)
        self.menu.addAction(self.menu_action2)
        self.menu.addAction(self.menu_action3)
        self.menu.addAction(self.action4)
        self.menu_bar.addMenu(self.menu)

    def actionClicked(self):
        action = self.sender()
        print(action.text())
        print(action.data())


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = ExampleApp()
    window.show()
    sys.exit(app.exec_())
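Coming back to the original ask (a panel that actually slides out from the left when pushButton_13 is clicked), one common approach is to animate the widget's geometry property with QPropertyAnimation. The sketch below is self-contained and hypothetical: the SlidingMenu class, its WIDTH constant, and the stand-in button are illustration only, not part of the code above.

```python
import os
import sys

# Run headless too (e.g. on a machine without a display); remove this line in a real desktop app.
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")

from PyQt5 import QtCore, QtWidgets


class SlidingMenu(QtWidgets.QListWidget):
    """A list of menu items that slides in from the left edge of its parent."""

    WIDTH = 180  # width of the slide-out panel, in pixels

    def __init__(self, parent):
        super().__init__(parent)
        self.addItems(["Option #1", "Option #2", "Option #3"])
        # Start fully hidden, just off the left edge of the parent.
        self.setGeometry(-self.WIDTH, 0, self.WIDTH, parent.height())
        self.anim = QtCore.QPropertyAnimation(self, b"geometry")
        self.anim.setDuration(250)
        self.anim.setEasingCurve(QtCore.QEasingCurve.OutCubic)
        self._open = False

    def toggle(self):
        # Animate between the hidden (x = -WIDTH) and shown (x = 0) positions.
        h = self.parent().height()
        hidden = QtCore.QRect(-self.WIDTH, 0, self.WIDTH, h)
        shown = QtCore.QRect(0, 0, self.WIDTH, h)
        self.anim.stop()
        self.anim.setStartValue(hidden if not self._open else shown)
        self.anim.setEndValue(shown if not self._open else hidden)
        self.anim.start()
        self._open = not self._open


app = QtWidgets.QApplication(sys.argv)
window = QtWidgets.QMainWindow()
window.resize(870, 569)
menu = SlidingMenu(window)
button = QtWidgets.QPushButton("menu", window)  # stands in for pushButton_13
button.clicked.connect(menu.toggle)
button.click()  # simulating a click starts the slide-out animation
```

In the real application you would connect self.pushButton_13.clicked to such a toggle slot inside ExampleApp and then enter the event loop with app.exec_() as usual; the loop is omitted here so the snippet terminates on its own.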
<div>
The password used to authenticate with the proxy server (if the proxy server requires authentication).
<p/>
<a href="https://confluence.atlassian.com/display/CROWD/The+crowd.properties+file">Crowd configuration documentation</a>
</div>
Printable Blank Map of Australia – Outline
August 3, 2018 3 Mins Read
The Commonwealth of Australia, or simply Australia, is a country located on the Australian continent. It comprises the mainland together with smaller islands such as the island of Tasmania, making it the largest country of the continent.
Australia is the world's sixth-largest country by area and has a population of approximately 26 million, which is highly urbanized. Canberra is the official capital of Australia, and Sydney is the country's largest city.
In this article we have compiled several kinds of printable Australian maps for learners of Australian geography. These printable templates will assist users in learning the geography of Australia.
Blank Australia Map
Australia Map with States
Australia is divided into six states or provinces. Each of these states differs from the others in almost everything, from its geography to the laws that apply there.
Check out our attached map to learn the geography of the different Australian states on the continent of Australia.
Australia Map with States and Cities
If you are looking for a quick view of the Australian map, here we have the complete map of Australia. On this map you can view the entire geography of the country with the utmost clarity, from the number of islands to the other features of its landscape.
All the provinces of Australia have been included on the map for the thorough understanding of users.
There are around seven to eight major cities in Australia, where a significant share of the country's population resides. All of its major cities are highly urbanized, with a high standard of living.
If you want to learn the locations of all the Australian cities on the map of Australia, here we have a dedicated map of Australia with cities. On this map you can learn everything about the geography of Australia's cities.
Australia Map Outline Printable
The Australian continent is one of the biggest continents on Earth and, being an island continent, is largely surrounded by water.
If you want to study the geography of Australia from the perspective of its location on the world map, here we have a guide map that highlights Australia on the map of the world.
Some of our users do not want to draw the full map of Australia themselves, yet still want a ready-to-use map of the country. For them we have a full printable map of Australia, which can be obtained with a single click.
You can use this map for any purpose you like, such as a school assignment.
Physical Australia Map
There is no doubt that a map is the best source for learning the geography of any region, whether a local area or the whole world. If you want to learn every detail of Australia's geography, check out our special physical map of Australia.
On this map you will be able to see and study the complete geography of Australia.
If you want to draw the Australian map on your own, using just the simple outline or shape of the country, here we have this blank template for you.
The template provides the basic structure of Australia, with the help of which you can draw an accurate map of the country using your knowledge of its geography.
\section{Introduction}
The problem of photon-atom scattering was first tackled using
quantum theory in the mid-to-late 1920s, resulting in the
development of the Kramers-Heisenberg-Waller matrix
elements~\cite{sakurai67a}. These describe the fundamental
Rayleigh and Raman processes, however, are also known to
suffer from an infra-red divergence problem when computing
Compton scattering cross-sections~\cite{heitler54a}.
In this paper we avoid this problem by building a
computational photon-plus-atom-in-a-box, which effectively
results in an upper photon wavelength based on the size of
the box, and we are able to obtain the total
photon-hydrogen scattering cross-sections.
The real and two different imaginary dipole transition polarizabilities
are set up in the present methodology and used to compute the set of
non-relativistic low-\textit{frequency}
photon-hydrogen cross-sections.
The incident photon field \textit{strengths} are assumed to lie in the
weak-to-intermediate regime where a collision involving two-incident photons
are unlikely, and where second-order perturbative treatments of a photon-atom
collision are applicable~\cite{delone00a}.
Photon-atomic hydrogen experiments are notoriously challenging~\cite{fried98a}.
Previously three-photon ionization experiments have been performed~\cite{kryala91a},
and recently experiments have been performed further into the strong-field
regime using ultrafast laser pulses~\cite{kielpinski14a}, neither of which do
we attempt to connect our results to since (high-order perturbative) multiphoton
treatments are required.
The fundamental computational approach taken in this paper to compute
the cross-sections is to use the transition matrix elements connecting
two states via a complete set of intermediate states summed over both
bound eigenstates and pseudostates.
These are schematically shown in Fig.~\ref{fig:schematic} for the cases of Rayleigh and
Compton scattering where an $\ell = 1$
state is given as an example of the
intermediate state and some possible dipole decay pathways from this state
are also shown that would impact the transition linewidth.
We present cross-sections here without resolving either fine or
hyperfine structure, which in future work could be included~\cite{delserieys08a}.
\begin{figure}[bth]
\caption{Schematic of some photon scattering processes from the hydrogen $1s$ state.
The incoming photon frequency, $\omega$, is depicted here as lying
above the ionization threshold.
Series (a) indicates Rayleigh scattering where one of the perturbative
terms involves the physical ($4p$) eigenstate with the absorption / spontaneous
emission of an $\omega$ photon.
The allowed dipole decays from the $4p$ state are shown in dashed-lines.
Series (b) shows Compton scattering, where one of the perturbative
terms involves an $E>0$ $\ell=1$ pseudostate, and then spontaneous decay
of frequency $\omega^\prime$ shown down into a $\ell=2$ pseudostate
(which would subsequently ionize).
Some allowed dipole decays from the $\ell=1$ pseudostate
are shown in dashed-lines. These impact the linewidths of these
processes (during Compton scattering the final $\ell=2$ pseudostate would
also have a number of decay channels that impacts its linewidth).}
\label{fig:schematic}
\vspace{0.1cm}
\includegraphics[width=85mm,angle=0]{figure1.eps}
\end{figure}
This paper was initially motivated by the incompleteness in the
compiled set of theoretical photon-hydrogen total cross-sections
in Fig.~29 of Bergstrom \textit{et al.}~\cite{bergstrom93a}.
There they did not present Raman scattering cross-sections,
whilst Compton cross-sections were presented over a limited frequency
range based on a low-energy (infra-red) photon truncation of the analytic
differential cross-sections from Gavrila~\cite{gavrila69b,gavrila72a,gavrila74a}.
They consequently noted that ``the total Compton cross-section
does not continue to fall at low energies''~\cite{bergstrom93a} as one
would predict.
This problem was more recently tackled
by Drukarev \textit{et al.}~\cite{drukarev10a},
who also used the work of Gavrila, and also an approximation
to numerically avoid the infra-red divergence.
Their total Compton cross-section does fall at low energies,
however, their highest-energy cross-sections at $100$~eV
are mismatched by an order-of-magnitude against those
in Ref.~\cite{bergstrom93a}.
Both of these calculations, furthermore,
lie vastly below those in the
``FFAST: Form Factor, Attenuation, \& Scattering Tables"
computed by Chantler~\cite{chantler95a}.
The FFAST database covers all atoms and are used for a variety of applications
spanning medical and materials science applications~\cite{pratt04a,pratt14a}.
Our results in this paper are qualitatively
similar to, yet $30-60$\% larger than,
the FFAST scattering cross-sections, and this is just
for the neutral hydrogen atom.
\textit{Photon-Atom Methodology} ---
Our numerical method begins by diagonalizing our hydrogen
atom using a finite-sized (orthogonal) Laguerre basis set that
provides some kind of a soft-walled potential atom-in-a-box~\cite{mitroy08e}.
This discretizes the continuum and provides both bound state and
`pseudostate' information that can be used
to compute frequency-dependent polarizabilities
below the ionization threshold~\cite{mitroy10a,tang13a},
and for dispersion coefficients~\cite{jiang15a}.
The pseudostate information can also be exploited to
perform, e.g., lepton-atom scattering~\cite{mitroy07a,mitroy08e}.
The use of pseudostates for above threshold photon-atom
scattering was explored by Langhoff \textit{et al.}
for Rayleigh scattering and one-photon
photoionization~\cite{langhoff70a,langhoff73a,langhoff74a,langhoff74b,langhoff76a}.
We extend these methods to also perform calculations of
Raman scattering and, furthermore, of Compton scattering.
The photon-atom scattering cross-sections are based on the
Kramers-Heisenberg-Waller matrix elements involving
the electromagnetic coupling
$H_c = (2mc^2)^{-1}e^2A^2 - (mc)^{-1}e \vec{p}\cdot\vec{A}$~\cite{heitler54a,sakurai67a}.
We ignore the $A^2$ `seagull' term (the Waller matrix element) since that is
only important at $\gtrsim$~keV photon energies~\cite{eisenberger70a,bergstrom93a}.
The Kramers-Heisenberg matrix element is determined here
as a transition polarizability, $\alpha_{ji}(\omega)$, between some
initial state $|i;L_iS\rangle$ and final state $|j;L_jS\rangle$ through
a complete set of intermediate states $|t;L_tS\rangle$~\cite{chandrasekharan81a}.
The details of our algorithms are given in the Supplemental Material, Sections I and II.
In brief, we use reduced matrix elements,
assuming linear polarization~\cite{bonin97a,delserieys08a}, such that
\begin{equation}
\begin{split}
\alpha_{ji}(\omega) \approx \sum_{t} C_{L_i,L_t,L_j} \Bigg[ & \frac{\langle j || z || t \rangle \langle t || z ||i \rangle}
{\varepsilon_{ti}-\omega-i \frac12 \Gamma_{ti}(\omega)} \\
+ & \frac{\langle j || z || t \rangle \langle t || z || i \rangle}
{\varepsilon_{tj}-(-\omega)-i \frac12 \Gamma_{tj}(\omega)} \Bigg] ,
\label{eqn:alphatrans}
\end{split}
\end{equation}
where $\varepsilon_{ab} = E_b - E_a$, and all quantities above
are in atomic units (a.u.). We use the expressions in Ref.~\cite{delserieys08a},
to find that $C_{0,1,0} = \frac13$, whilst $C_{0,1,2} = \frac13$.
Each $\alpha_{ji}(\omega)$ is calculated as a sum over intermediate states $t$,
which are either bound or pseudostates. When the intermediate state is a bound
state, we compute an imaginary term (denoted $\mathrm{Im}_0$) by the damping of
the oscillator through the linewidth(s) of the
atomic bound states~\cite{heitler54a,bonin97a,lekien13a,lepers14a}.
Wijers has argued~\cite{wijers04a} that the decay rates
must be frequency dependent to ensure
$\Gamma_{ab}(\omega) \to 0$ as $\omega \to 0$.
Thus we use
$\Gamma_{ab}(\omega) = (\Gamma_a + \Gamma_b)
(2\varepsilon_{ab}^2 \omega^2/(\varepsilon^4_{ab} + \omega^4))$.
This form of the resonant damping includes the case where
the state that absorbs the photon has a non-zero decay rate to
other bound states.
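For reference (our arithmetic, not taken from Ref.~\cite{wijers04a}), the interpolating factor has the intended limits:

```latex
\frac{2\varepsilon_{ab}^{2}\omega^{2}}{\varepsilon_{ab}^{4}+\omega^{4}}
\;\to\; 0 \ \ (\omega \to 0 \ \text{or} \ \omega \to \infty),
\qquad
\frac{2\varepsilon_{ab}^{2}\,\varepsilon_{ab}^{2}}{\varepsilon_{ab}^{4}+\varepsilon_{ab}^{4}} = 1
\;\Longrightarrow\;
\Gamma_{ab}(\varepsilon_{ab}) = \Gamma_{a}+\Gamma_{b} .
```

That is, the damping vanishes in the static limit and recovers the full natural linewidth exactly on resonance.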
If the intermediate state in Eqn.~\ref{eqn:alphatrans} is
a pseudostate, then the linewidth is not included
as continuum states do not have a physical linewidth.
For $\omega > |E_{1s}|$, this results in unphysical
singularities in the continuum when $\omega = \varepsilon_{ti}$.
These are removed by assuming infinitesimally small pseudostate linewidths $\Gamma_{ti}\to 0^+$,
which results in real and imaginary (denoted $\mathrm{Im}_1$) terms~\cite{langhoff74a}.
This enables us to compute one real and the two different imaginary polarizabilities.
Our calculations of $\alpha_{ji}$ where $i \equiv 1s$
were all performed with a fixed number of Laguerre-type orbitals $N_\ell=120$ for each angular momentum,
which gives $18$ $\ell=1$ $(E<0)$ bound states and $102$ $\ell=1$ $(E>0)$ pseudostates.
Convergence studies against other basis sets are relegated to Supplemental Section IV.
\begin{figure}[tbh]
\caption{Rayleigh and photoionization scattering cross-sections for
photon-hydrogen scattering (in units of $\sigma_T$).
The $\sigma_{e}$ and $\sigma_{I}$ are shown as a function of
incident photon energy $\omega$ in atomic units
(ie. up to $\approx 27$~eV). Our $\sigma_{e}$ results agree with the
analytic results of Gavrila~\cite{gavrila67b}, with some discrepancy
against the (as digitized by us) numerical results of Bergstrom \textit{et al.}~\cite{bergstrom93a}.
The FFAST $\sigma_I$ results lie a few percent below ours~\cite{chantler95a}.
}
\includegraphics[width=85mm]{figure2.eps}
\label{fig:rayleigh}
\end{figure}
\textit{Cross-section results} ---
For (elastic) Rayleigh scattering of a photon with frequency $\omega$ we have~\cite{langhoff74a,sadeghpour92b,delserieys08a}
\begin{equation}
\sigma_e(\omega) = \sigma_T \omega^4 \Big|\mathrm{Re}\left[\alpha_{ii}(\omega)\right]
+ i\mathrm{Im}_0\left[\alpha_{ii}(\omega)\right]\Big|^2 ,
\label{eqn:crossrayleigh}
\end{equation}
where $\sigma_T$ is the Thomson scattering cross-section
of a photon with a free electron
(for reference, $\sigma_T \approx 6.65 \times 10^{-25}$ cm$^2$),
whilst $\omega$ and $\alpha_{ii}$ are both in a.u..
The photoionization cross-section is given by~\cite{langhoff74b}
\begin{equation}
\label{eqn:crossoptical1}
\sigma_{I}(\omega) = \sigma_T \frac32 c^3 \omega \: \mathrm{Im}_1\left[\alpha_{ii}(\omega)\right] ,
\end{equation}
i.e. the optical theorem. The speed-of-light, $c\approx 137$ (in a.u.),
gives the massive enhancement factor of $\sigma_{I}$ over that of $\sigma_e$.
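For a rough sense of scale (a back-of-the-envelope evaluation of the prefactor, not a value quoted in the references):

```latex
\tfrac{3}{2}\,c^{3} \;\approx\; \tfrac{3}{2}\,(137.036)^{3} \;\approx\; 3.9\times 10^{6}
\quad \text{(a.u.)} .
```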
Our results for these cross-sections for validation purposes are shown in Fig.~\ref{fig:rayleigh},
where our results agree with the (not-shown) analytic $\sigma_I$ function~\cite{sobelman96a}.
Fig.~\ref{fig:rayleigh} also shows physical resonances and
that $\sigma_e(\omega\to\infty) \approx \sigma_T$ in the non-relativistic limit.
The Rayleigh scattering cross-sections between $\omega = 0.37-0.55$~a.u.,
and up to $\omega = 100$~a.u., are given in Supplemental Figs.~1 and 3.
For (inelastic) Raman scattering to (physical) state $j$
\begin{equation}
\sigma_{r;j}(\omega) = \sigma_T \; \omega \left(\omega_j^\prime \right)^3
\Big|\mathrm{Re}[\alpha_{ij}(\omega)] + i \mathrm{Im}_0[\alpha_{ij}(\omega)]\Big|^2 ,
\label{eqn:crossrayram}
\end{equation}
with spontaneously emitted photon frequency $\omega_j^\prime = \omega - \omega_{ij}$.
Thus the total Raman cross-section for an atom in an initial state $i$ is
$\sigma_{R}(\omega) = \sum_{(j;E_j < \Delta)} \sigma_{r;j}(\omega)$
(see supplemental material for our basis-set-based choice of
ionization location $E = \Delta$, which effectively demarcates the
bound states from the pseudostates). Our Raman codes were
able to be validated at energies below ionization $\omega < 0.5$~a.u.,
where previous photon-hydrogen Raman calculations have computed
H($1s$) $\to$ H($2s$) scattering~\cite{sadeghpour92b},
and excited initial state H($3s$) $\to$ H($3d$) `Rayleigh' scattering~\cite{florescu85a}.
Our H($1s$) $\to$ H($2s$), H($1s$) $\to$ H($3d$),
and total Raman cross-sections are shown in Fig.~\ref{fig:raman}.
The individual H($1s$) $\to$ H($ns$) Raman cross-sections above
threshold rapidly vanish and revive as each of their matrix elements
pass through a different `tune-out' wavelength where $|\alpha_{ij}(\omega)|\approx 0$.
The H($1s$) $\to$ H($nd$) cross-sections are monotonically
decreasing above threshold.
The total Raman cross-section above $\omega = 0.5$~a.u.
monotonically decreases such that $\sigma_{R}(\omega) \to 0$ as $\omega \to \infty$
(see Supplemental Figs.~2 and 4 for cross-sections for $\omega = 0.37-0.55$~a.u.,
and up to $\omega = 100$~a.u.).
\begin{figure}[tbh]
\caption{Raman scattering cross-sections for
photon-hydrogen scattering (in units of $\sigma_T$).
These are shown as a function of incident photon energy $\omega$ in
atomic units (ie. up to $\approx 27$ eV).
Our results are compared against the available ($\omega < 0.5$)
results for H($1s$) $\to$ H($2s$) of Sadeghpour and Dalgarno~\cite{sadeghpour92b}.
The cross-section for H($1s$) $\to$ H($3d$) is also shown.
The sum of the individual cross-sections $\sigma_{R}(\omega)$ is also shown
as the solid line, and is seen to monotonically decrease.
}
\includegraphics[width=85mm]{figure3.eps}
\label{fig:raman}
\end{figure}
The final process considered here is that of
Compton scattering, which opens up for frequencies above threshold $\omega>|E_i|$,
and analytically requires the \textit{differential} cross-section
to be integrated over all possible outgoing photon frequencies $\omega^\prime$~\cite{drukarev10a},
\begin{equation}
\sigma_{C}(\omega) = \int_{\omega^\prime_{\mathrm{min}}}^{\omega^\prime_{\mathrm{max}}} \frac{d \sigma_C}{d\omega}\Bigg|_{\omega^\prime} d\omega^\prime
\approx \sum_{(j;E_j>\Delta)} \sigma_{r;j}(\omega) ,
\label{eqn:completecompton}
\end{equation}
where the largest emitted photon frequency $\omega^\prime_{\mathrm{max}} = \omega-|E_i|$,
whilst the smallest $\omega^\prime_{\mathrm{min}} = 0$. The infra-red problem is
that $\frac{d \sigma_C}{d\omega}$ diverges as $\omega^\prime \to 0$, resulting
in an infinite cross-section, and thus previous analytic calculations
of Bergstrom~\textit{et al.}~\cite{bergstrom93a} assumed $\omega^\prime_{\mathrm{min}} = 10$~eV,
whilst Drukarev~\textit{et al.}~\cite{drukarev10a} assumed $\omega^\prime_{\mathrm{min}} = 1$~eV.
We, instead, adopt the approximation in Eqn.~\ref{eqn:completecompton}, adapting the same
formulae from the Raman case since the finite set of pseudostates gives us a discrete sum.
Note that our Raman vs Compton delineation is in the same spirit as previous work
on excitation Raman vs ionization Raman photon-helium scattering~\cite{grosges99a}.
The convergence of the sum towards the integral can be understood by considering that,
as the basis size $N_\ell$ is increased, more pseudostates are included
and the magnitude of the individual cross-sections $\sigma_{r;j}(\omega)$ decreases,
whilst the $\sigma_{C}(\omega)$ tends to remain constant (see Supplemental Fig.~5).
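The same behaviour can be illustrated with a generic quadrature toy model (our illustrative sketch, not the production code): refining the discretization shrinks each individual term while leaving the sum, which approximates the integral, essentially unchanged.

```python
import numpy as np

def discrete_sum(f, a, b, n):
    """Midpoint-rule terms mimicking a pseudostate expansion with n 'states'."""
    edges = np.linspace(a, b, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    weights = np.diff(edges)          # analogue of the energy spacings
    return f(mids) * weights          # analogue of the individual sigma_{r;j}

f = lambda x: np.exp(-x)              # stand-in for a smooth integrand
exact = 1.0 - np.exp(-5.0)            # integral of exp(-x) on [0, 5]

coarse = discrete_sum(f, 0.0, 5.0, 10)
fine = discrete_sum(f, 0.0, 5.0, 100)

# Individual terms shrink roughly tenfold, but the summed value is stable:
print(coarse.max(), fine.max())
print(coarse.sum(), fine.sum(), exact)
```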
\begin{figure}[tbh]
\caption{Comparisons of Compton scattering cross-sections up
to high energies for photon-hydrogen scattering (in units of $\sigma_T$).
The summed $\sigma_{C}(\omega)$ is shown as a function of
incident photon energy $\omega$ in atomic units
(ie. up to $\approx 2700$ eV).
The summed cross-section is also shown when only the
$L_j = 0$ contributions are included, which approximately
agrees with the results of Bergstrom~\textit{et al.}~\cite{bergstrom93a} and
Drukarev~\textit{et al.}~\cite{drukarev10a}
(their results were digitized by us).
The FFAST results are also shown~\cite{chantler95a}.
}
\includegraphics[width=85mm]{figure4.eps}
\label{fig:comptoncompare}
\end{figure}
Our results for Compton scattering are shown in Fig.~\ref{fig:comptoncompare}
where our results completely disagree by over three orders-of-magnitude with
the previous results of Bergstrom~\textit{et al.}~\cite{bergstrom93a}
and Drukarev~\textit{et al.}~\cite{drukarev10a}. We, however, ran
our calculations by including only the states with final $L_j = 0$,
ie. ignoring the $L_j = 2$ contributions which turn out to dominate
the sum in Eqn.~\ref{eqn:completecompton}.
In doing so we find qualitative agreement of our $0\to 1\to 0$ cross-section
with these previous calculations, which relied on numerical truncation
of analytic differential cross-sections.
Our total results instead broadly agree with the FFAST
coherent + incoherent results~\cite{chantler95a}.
Coherent scattering generally refers to elastic (Rayleigh) scattering,
whilst incoherent scattering means inelastic (Compton) scattering~\cite{pratt14a}.
\textit{Total Cross-section} ---
The total cross-section summing all Rayleigh, Raman, Compton processes
is shown in Fig.~\ref{fig:total}, showing qualitative agreement with
the FFAST data. The NIST-based XCOM and XAAMDI database results are
presented in Fig.~\ref{fig:total}~\cite{hubbell75a,hubbell79a,berger10a,hubbell04a}.
The photoionization cross-section only appears in Fig.~\ref{fig:total}
towards keV energies where it drops down in magnitude to be below the others.
Our data shows that Wentzel's rule~\cite{kaplan76a} is never applicable
to hydrogen, that is, the sum over the elastic
and inelastic total cross-sections does not tend to that of a free electron
(ie. the Thomson cross-section) over the energy range before higher-order
effects take over~\cite{bergstrom93a}.
\begin{figure}[t]
\caption{Comparisons of scattering cross-sections up
to high energies for photon-hydrogen scattering (in units of $\sigma_T$).
These are shown as a function of incident photon energy
$\omega$ in atomic units (ie. up to $\approx 2700$ eV).
The coherent, incoherent, and photoionization data from FFAST~\cite{chantler95a}
and XCOM~\cite{berger10a} is shown, as well as photoionization from
XAAMDI~\cite{hubbell04a}. Note that here the $y$-axis is not shown on a log scale.
}
\includegraphics[width=85mm]{figure5.eps}
\label{fig:total}
\end{figure}
\textit{Conclusion} --- We have introduced a computational method
for computing various cross-sections of photon-atom scattering
for any initial state from a single atomic structure calculation.
We find the expected behaviour of the Compton scattering
towards low-energies and our calculations do not appear to suffer from
the infra-red catastrophe. Effectively a computational atom-in-box
sets a maximum wavelength that can be `measured'~\cite{gavrila72a,gavrila72b}.
Our results are able to reach up to energies where the
beyond-dipole-approximation and relativistic/retardation effects
become important and thus provide a benchmark for future
work. Our methods can be extended to compute the cross-sections
for atoms in various initial states, and where knowledge of the
possible scattering processes will help guide experimentalists when
designing atomic and molecular experiments.
In particular, it will be worthwhile to compute Rayleigh, Raman, and
Compton scattering cross-sections for multi-electron atoms where
Cooper minima in the photoionization cross-sections and resonances
can occur~\cite{pratt14a}.
\begin{acknowledgments}
The work of MWJB was supported by an Australian Research Council
Future Fellowship (FT100100905).
YC was supported by the National Natural
Science Foundation of China (Grant No. 11304063).
The work of YC was also partially enabled by a Discovery Project (DP-1092620)
of Prof. Jim Mitroy (deceased).
We thank Dr. Julian Berengut, A/Prof. Tom Stace,
and Prof. Dave Kielpinski for useful conversations and
Dr. Sergey Novikov for correspondence about matrix elements.
\end{acknowledgments}
\section{Formulae}
Our atom-in-a-box calculation generates a large number of transition matrix elements.
These connect the $N_\ell$ number of Laguerre basis functions for each
partial wave to each other through dipole allowed reduced matrix elements
of operator $\hat{z} \equiv r C^1(\hat{r})$,
that are independent of the magnetic quantum number.
Given $C^1(\hat{r})$ is the spherical tensor of rank-$1$,
for eigenstate $i$ and eigenstate $j$,
the reduced matrix elements are
\begin{equation}
\begin{split}
\langle j|| r C^1(\hat{r}) || i \rangle
&= \int\int Y_{\ell_j}(\theta,\phi) C^1(\theta,\phi) Y_{\ell_i}(\theta,\phi) \sin \theta d\theta d\phi
\int \psi_j(r) r \psi_i(r) r^2 dr \\
&= (-1)^{\ell_j} \sqrt{(2\ell_j+1)(2\ell_i+1)}
\left( \begin{array}{ccc}
\ell_j & 1 & \ell_i \\
0 & 0 & 0
\end{array} \right)
\int \psi_j(r) r \psi_i(r) r^2 dr \; .
\label{eqn:reduced}
\end{split}
\end{equation}
The orthogonal Laguerre basis functions that we choose to use all
have the same long-range parameterisation, $\exp(-\lambda_\ell r)$,
for each partial-wave, $\ell$. By providing a set of basis functions with fixed $\lambda = 0.5$,
they are explicitly not a set of eigenstates of hydrogen
(as hydrogen eigenstates have $\lambda_n = \frac12 n$~\cite{bromley03a}).
Thus we generate a set of
low-lying bound states as well as a
(reasonably well-defined) set of higher-lying Rydberg states,
along with a set of ($E>0$) pseudostates that reflect the finite
nature of the effective box induced by having a finite number of
basis functions.
In our code the radial integrals are performed numerically~\cite{bromley01a},
which has allowed us in the past to perform numerous calculations
of one- and two-electron atoms including mixtures of Laguerre and Slater
Type Orbitals to represent frozen Hartree-Fock-based core electrons~\cite{bromley01a,mitroy03e}.
Here we use a large radial grid spanning to a maximum of $\approx 1000$~a.u.
using $4096$ grid points with $16$-point Gauss-Legendre radial integration quadrature~\cite{bromley01a}.
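As an independent sanity check of such radial integrals (an illustrative sketch of ours, not the authors' code), the hydrogen $\langle 2p||r||1s\rangle$ radial integral can be reproduced by direct quadrature against the textbook value $2^{7}\sqrt{6}/3^{5}\approx 1.2903$~a.u.:

```python
import numpy as np

# Hydrogen radial wavefunctions in atomic units (standard textbook forms)
def R_1s(r):
    return 2.0 * np.exp(-r)

def R_2p(r):
    return r * np.exp(-r / 2.0) / (2.0 * np.sqrt(6.0))

# Radial dipole integral  int_0^inf R_2p(r) r R_1s(r) r^2 dr, trapezoidal rule
r = np.linspace(0.0, 60.0, 200_001)        # extends well past the wavefunction tails
f = R_2p(r) * r * R_1s(r) * r**2
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

analytic = 2.0**7 * np.sqrt(6.0) / 3.0**5  # = 128*sqrt(6)/243 ~ 1.2903 a.u.
print(numeric, analytic)
```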
The `Kramers-Heisenberg' matrix element is determined here
as a transition polarizability between some initial state
$|i;L_iS\rangle$ and final state $|j;L_jS\rangle$ through a complete
set of intermediate states $|t;L_tS\rangle$.
In terms of sums over reduced matrix elements with
$LS$-coupled wavefunctions and assuming linear photon polarization~\cite{delserieys08a},
\begin{equation}
\begin{split}
\alpha_{ji}(\omega) & = \sum_{t}^{\infty} C_{L_i,L_t,L_j} \Bigg[ \frac{\langle j || z || t \rangle \langle t || z ||i \rangle}
{\varepsilon_{ti}-\omega-i \frac12 \Gamma_{ti}(\omega)} \Bigg]
+ \sum_{t}^{\infty} C_{L_i,L_t,L_j} \Bigg[ \frac{\langle j || z || t \rangle \langle t || z || i \rangle}
{\varepsilon_{tj}-(-\omega)-i \frac12 \Gamma_{tj}(\omega)} \Bigg] \\
& + \int_{0}^{\infty} \frac{C_{L_i,L_\epsilon,L_j}}{\rho(\epsilon)}
\Bigg[ \frac{\langle j || z || \epsilon \rangle \langle \epsilon || z ||i \rangle}
{\varepsilon_{\epsilon i}} \Bigg] d\epsilon
+ \int_{0}^{\infty} \frac{C_{L_i,L_\epsilon,L_j}}{\rho(\epsilon)}
\Bigg[ \frac{\langle j || z || \epsilon \rangle \langle \epsilon || z || i \rangle}
{\varepsilon_{\epsilon j}-(-\omega)} \Bigg] d\epsilon ,
\label{eqn:alphatrans}
\end{split}
\end{equation}
where, in atomic units, $\varepsilon_{ab} = E_b - E_a$.
The bound eigenstates, denoted by $t$, form an infinite set.
As the sums have bound intermediate states with physical linewidths, the decay rates $\Gamma_{ti}$ or $\Gamma_{tj}$ are included.
The integrals describe the transitions where the intermediate states are continuum states, denoted by energy $\epsilon$.
The transition matrix elements in the continuum integrals are normalized by the energy density $\rho(\epsilon)$
of the continuum states.
The terms with $-\omega$ in the denominator indicate photon absorption followed by emission.
The terms with $-(-\omega)$ in the denominator indicate photon emission followed by absorption.
For the H($1s$) initial state considered here,
$C_{0,1,0} = \frac13$ and also $C_{0,1,2} = \frac13$, where
we only have intermediate $L_\epsilon = L_t = 1$.
Whilst Eqn.~(\ref{eqn:alphatrans}) is for the Raman case, it covers
the Rayleigh case where $j = i$.
Here we break up the terms into the bound and continuum contributions. We sum up the bound states using
the notation $T_{ba} = \langle b || z || a \rangle$, as
\begin{equation}
\mathrm{Re}_0\left(\alpha_{ji}(\omega)\right) + i \mathrm{Im}_0\left(\alpha_{ji}(\omega)\right)
\approx \sum_{b=1}^{N_b} C_{L_i,L_b,L_j} \Bigg[
\frac{T_{jb} T_{bi}}{\varepsilon_{bi}-\omega-i \frac12 \Gamma_{bi}(\omega)}
+ \frac{T_{jb} T_{bi}}{\varepsilon_{bj}-(-\omega)-i \frac12 \Gamma_{bj}(\omega)} \Bigg],
\end{equation}
where $N_b$ is the finite number of (intermediate) bound states in a given calculation.
Next we consider the first continuum integral in Eqn.~(\ref{eqn:alphatrans}), both below and above ionization threshold.
For the first continuum integral in Eqn.~(\ref{eqn:alphatrans}), when $\omega$ lies below the ionization threshold,
\begin{equation}
\mathrm{Re}_1\left(\alpha_{ji}(\omega)\right)
= \int_{0}^{\infty} \frac{C_{L_i,L_\epsilon,L_j}}{\rho(\epsilon)}
\Bigg[ \frac{T_{j\epsilon} T_{\epsilon i}}{\varepsilon_{\epsilon i}-\omega} \Bigg] d\epsilon
\approx \sum_{p=1}^{N_p} \frac{C_{L_i,L_p,L_j}}{\Delta \varepsilon_p}
\Bigg[ \frac{T_{j p} T_{p i}}{\varepsilon_{p i}-\omega} \Bigg] \Delta \varepsilon_p ,
\end{equation}
where $\rho(\epsilon) \approx \Delta \varepsilon_p$. Note that the sum over the pseudostates
is only an approximation to the integral, however,
the sum tends to monotonically converge as the basis set increases.
When $\omega$ lies above the ionization threshold,
unphysical poles occur at $\omega = \varepsilon_{\epsilon i}$.
We have to integrate around the pole by
introducing a small complex term into the frequency $z = \omega + i0^+$ where the $+$ indicates
approaching zero from above. This means that we apply the Cauchy principal-value theorem,
\begin{equation}
\begin{split}
\mathrm{Re}_1\left(\alpha_{ji}(z)\right) - i \mathrm{Im}_1\left(\alpha_{ji}(z)\right)
= \int_{0}^{\infty}
\frac{\mathcal{F}(\epsilon)}{\varepsilon_{\epsilon i}-\omega-i 0^{+}} d\epsilon
\equiv \mathcal{P} \! \! \! \int_{0}^{\infty}
\frac{\mathcal{F}(\epsilon)}{\varepsilon_{\epsilon i}-\omega} d\epsilon
- i \pi \mathcal{F}(\epsilon)\bigg|_{\epsilon=\omega + E_i},
\end{split}
\end{equation}
where $\mathcal{F}(\epsilon) = C_{L_i,L_\epsilon,L_j} T_{j\epsilon} T_{\epsilon i} / \rho(\epsilon)$.
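The pole-handling step can be checked numerically with a toy integrand (a Gaussian stand-in for $\mathcal{F}$, unrelated to the actual matrix elements): the imaginary part of $\int g(x)\,[x-a-i\eta]^{-1}\,dx$ approaches $\pi g(a)$ as $\eta \to 0^{+}$.

```python
import numpy as np

g = lambda x: np.exp(-(x - 1.0)**2)       # toy stand-in for F(eps)
a, eta = 1.0, 1e-3                        # pole location and small imaginary shift

x = np.linspace(-9.0, 11.0, 2_000_001)    # spacing 1e-5 << eta resolves the pole
dx = x[1] - x[0]
# Im[1/(x - a - i*eta)] = eta / ((x - a)^2 + eta^2), a nascent delta function
im_part = np.sum(g(x) * eta / ((x - a)**2 + eta**2)) * dx

print(im_part, np.pi * g(a))              # agreement improves as eta -> 0+
```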
Again, however, we have a discrete set of pseudostates. Thus we evaluate the above
threshold values \textbf{only} at discrete frequencies $\omega_{ni} = E_n - E_i$,
which correspond to the frequencies required to resonantly excite the pseudostates,
and we remove the single singular term from the sum:
\begin{equation}
\mathrm{Re}_1\left(\alpha_{ji}(\omega_{ni})\right) - i \mathrm{Im}_1\left(\alpha_{ji}(\omega_{ni})\right)
\approx
\left(\sum_{p \ne n}^{N_p} \frac{C_{L_i,L_p,L_j}}{\Delta \varepsilon_p}
\Bigg[ \frac{T_{j p} T_{p i}}{\varepsilon_{pi}-\omega_{ni}} \Bigg]\Delta \varepsilon_p \right)
- i \pi C_{L_i,L_n,L_j} \frac{T_{j n} T_{n i}}{\frac12\left(E_{(n+1)}-E_{(n-1)}\right)} ,
\end{equation}
with modification of the finite differencing when computing at the first $n=1$ or the last $n=N_p$ pseudostates.
The second continuum integral in Eqn.~(\ref{eqn:alphatrans}),
corresponds first to photon emission, followed by absorption and
thus has no such pole due to the continuum. This integral can be
evaluated both above and below threshold at any incident photon energy as
\begin{equation}
\mathrm{Re}_2\left(\alpha_{ji}(\omega)\right) = \int_{0}^{\infty} \frac{C_{L_i,L_\epsilon,L_j}}{\rho(\epsilon)}
\Bigg[ \frac{\langle j || z || \epsilon \rangle \langle \epsilon || z || i \rangle}
{\varepsilon_{\epsilon j}-(-\omega)} \Bigg] d\epsilon
\approx \sum_{p=1}^{N_p} \frac{C_{L_i,L_p,L_j}}{\Delta \varepsilon_p}
\Bigg[ \frac{T_{j p} T_{p i}}{\omega_{pj}+\omega} \Bigg]\Delta \varepsilon_p .
\end{equation}
This term only contains a divergence
for the case of resonant `anti-Stokes' Raman processes, where the final state lies at
an energy below the initial state, which are not considered here.
Thus for either Rayleigh ($i=j$) or Raman ($i \ne j,$ where $j \in [1,N_b]$) we can compute the
transition polarizability at any frequency $\omega$ below threshold:
\begin{equation}
\alpha_{ji}(\omega) = \mathrm{Re}_0\left(\alpha_{ji}(\omega)\right) + \mathrm{Re}_1\left(\alpha_{ji}(\omega)\right) + \mathrm{Re}_2\left(\alpha_{ji}(\omega)\right)
+ i \mathrm{Im}_0\left(\alpha_{ji}(\omega)\right),
\end{equation}
whilst for $\omega$ above threshold, we evaluate at the discrete pseudostate frequencies
\begin{equation}
\alpha_{ji}(\omega_{ni}) = \left[\mathrm{Re}_0\left(\alpha_{ji}(\omega_{ni})\right) + \mathrm{Re}_1\left(\alpha_{ji}(\omega_{ni})\right) + \mathrm{Re}_2\left(\alpha_{ji}(\omega_{ni})\right)\right] + i \mathrm{Im}_0\left(\alpha_{ji}(\omega_{ni})\right) - i \mathrm{Im}_1\left(\alpha_{ji}(\omega_{ni})\right).
\end{equation}
For the case of Compton scattering, which is only available when the final state lies
in the continuum (i.e.\ $j \to \epsilon_j$),
the cross-section can only be computed when the incident photon frequency satisfies $\omega \geq \varepsilon_{ji}$.
However, we simply apply all of the same pseudostate machinery as outlined above when computing these
transition polarizabilities. That is, we compute the integrals by assuming that both the
intermediate states and final states are discrete pseudostates
and evaluate these
at the discretized frequencies $\omega_{ni} = E_n - E_i$,
where $n$ is the intermediate $L=1$ pseudostate.
\section{Choice of which eigenstates are bound states vs pseudostates}
Since we are computing the non-relativistic hydrogen atom, with the
basis set employed here we obtain $E_{1s} = -0.5$~a.u. to near machine
precision. The natural ionization potential is then located
exactly at $E = 0$. Table~\ref{tab:transitiondata} shows the
transition data from the H($1s$) state to the lowest $25$ $p$-states
from the largest $N=120$ calculation. This table shows that in
a single-shot Laguerre calculation we are able to accurately
reproduce up to approximately the $n=15$ Rydberg state.
Looking at the oscillator strength pattern, it reaches a minimum
for the $n=15$ state, after which the oscillator strengths start
increasing again. Not shown is that they reach another maximum
at the $85$-th $\ell=1$ eigenstate ($E_{86p} = 0.409641$~a.u.,
with $f_{if} = 0.007975$), before dropping down towards zero.
This corresponds to photon frequencies of $\omega \approx 0.9$~a.u.,
which is approximately where the (above threshold) Rayleigh and
Compton scattering peaks occur.
\begin{table}[h]
\caption{The $1s-np$ transition data for the first $25$ $\ell=1$
eigenstates from our $N=120$ Laguerre basis set calculation,
whose results are compared to the exact Rydberg values.
The $n_i \to n_f$ column indicates the initial to final state transition.
The $E_f$ (calc) column gives the eigenenergies in atomic units
of the final state from the calculation, whilst $E_f$ (exact) column gives
the exact energies based on the Rydberg formula ($E_{n} = -0.5/n^2$).
The transition oscillator strengths $f_{if}$ (calc) are from the
calculation, whilst the $f_{if}$ (exact) are from the
analytic formulae~\cite{bethe57a}.
The final column gives the relative difference between
the oscillator strengths
($=(f_{if} \mathrm{(calc)}-f_{if} \mathrm{(exact)})/f_{if} \mathrm{(exact)}$).
Two row demarcations are given, the first to indicate where the
$E<0$ bound states have a minimum in the oscillator strengths and
start to lose accuracy, the second where the $E>0$ pseudostates begin.
}
\label{tab:transitiondata}
\vspace{0.2cm}
\begin{tabular}{l|c|c|c|c|c}
\hline \hline
$n_i \to n_f$ & $E_f$~(calc) & $E_{f}$~(exact) & $f_{if}$~(calc) & $f_{if}$~(exact) & (rel. diff.) \\
\hline
$1s \to 2p$ & -0.125000 & -0.125000 & 0.416197 & 0.416197 & 0.000000 \\
$1s \to 3p$ & -0.055556 & -0.055556 & 0.079102 & 0.079102 & 0.000000 \\
$1s \to 4p$ & -0.031250 & -0.031250 & 0.028991 & 0.028991 & 0.000000 \\
$1s \to 5p$ & -0.020000 & -0.020000 & 0.013938 & 0.013938 & 0.000000 \\
$1s \to 6p$ & -0.013889 & -0.013889 & 0.007799 & 0.007799 & 0.000000 \\
$1s \to 7p$ & -0.010204 & -0.010204 & 0.004814 & 0.004814 & 0.000000 \\
$1s \to 8p$ & -0.007813 & -0.007813 & 0.003183 & 0.003183 & 0.000000 \\
$1s \to 9p$ & -0.006173 & -0.006173 & 0.002216 & 0.002216 & 0.000000 \\
$1s \to 10p$ & -0.005000 & -0.005000 & 0.001605 & 0.001605 & 0.000000 \\
$1s \to 11p$ & -0.004132 & -0.004132 & 0.001201 & 0.001201 & 0.000000 \\
$1s \to 12p$ & -0.003472 & -0.003472 & 0.000921 & 0.000921 & 0.000000 \\
$1s \to 13p$ & -0.002959 & -0.002959 & 0.000723 & 0.000723 & 0.000076 \\
$1s \to 14p$ & -0.002550 & -0.002551 & 0.000582 & 0.000577 & 0.007629 \\
$1s \to 15p$ & -0.002206 & -0.002222 & 0.000531 & 0.000469 & 0.132738 \\
\hline
$1s \to 16p$ & -0.001850 & -0.001953 & 0.000609 & 0.000386 & 0.578355 \\
$1s \to 17p$ & -0.001428 & -0.001730 & 0.000723 & 0.000321 & 1.253279 \\
$1s \to 18p$ & -0.000933 & -0.001543 & 0.000831 & 0.000270 & 2.073712 \\
$1s \to 19p$ & -0.000372 & -0.001385 & 0.000930 & 0.000230 & 3.050423 \\
\hline
$1s \to 20p$ & 0.000254 & - & 0.001024 & - & - \\
$1s \to 21p$ & 0.000940 & - & 0.001116 & - & - \\
$1s \to 22p$ & 0.001688 & - & 0.001205 & - & - \\
$1s \to 23p$ & 0.002496 & - & 0.001293 & - & - \\
$1s \to 24p$ & 0.003365 & - & 0.001381 & - & - \\
$1s \to 25p$ & 0.004295 & - & 0.001468 & - & - \\
$1s \to 26p$ & 0.005287 & - & 0.001555 & - & - \\
\hline
\hline
\end{tabular}
\end{table}
We were able to use Table~\ref{tab:transitiondata} to demarcate where the
bound states end and where the pseudostates begin. We choose this energy
to be just above where the oscillator strength reaches its minimum.
For the present $N_\ell=120$ calculations this is at
$\Delta_{120} \approx (-0.002206 - 0.001850)/2 = -0.002028$~a.u.
The Raman final states are thus those with $E_n < -0.00203$~a.u., whereas the
Compton final states have $E_n > -0.00203$~a.u.
This choice, rather than the exact $\Delta =0$, means that our effective
ionization potential in our box has a small residual error in it.
However, moving the states with $\Delta_{N_\ell} < E_n < 0$ to be
counted as Compton rather than Raman removes spurious cross-sections
that were polluting the convergence of the total Raman cross-section
at photon-frequencies above threshold. Essentially, the wavefunctions
corresponding to the $16p$ to $19p$ states appear to contain a
significant mixture of both Rydberg and continuum information.
This does mean that the $\mathrm{Im}_1$ contribution also starts at the
correspondingly lower photon energy $0.5-0.00203=0.49797$~a.u.,
and it does mean that our calculations have a large error
in this relatively small energy range around threshold.
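The demarcation arithmetic above is simple enough to check directly; the sketch below reproduces the $\Delta_{120}$ value and the resulting bound/continuum split, where the short \texttt{energies} list is just an illustrative subset of Table~\ref{tab:transitiondata}:

```python
# Energies (a.u.) of the 15th and 16th l=1 eigenstates from Table 1
E_15p, E_16p = -0.002206, -0.001850
delta = 0.5 * (E_15p + E_16p)   # demarcation energy, -0.002028 a.u.

# Illustrative subset of eigenenergies from Table 1
energies = [-0.125000, -0.002206, -0.001850, 0.000254]
raman_final   = [E for E in energies if E < delta]    # counted as Raman
compton_final = [E for E in energies if E >= delta]   # counted as Compton
```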
\section{A zoomed-in view of the cross-sections}
Fig.~\ref{fig:rayleighzoom} shows the Rayleigh cross-section between frequencies $0.37-0.55$~a.u.,
which spans the atomic (physical) resonances and also up into the continuum.
\begin{figure}[tbh]
\caption{Rayleigh scattering cross-sections for
photon-hydrogen scattering (in units of $\sigma_T$).
The $\sigma_{e}$ are shown as a function of
incident photon energy $\omega$ in atomic units. Our $\sigma_{e}$ results agree with the
analytic results of Gavrila~\cite{gavrila67b}, with some discrepancy
against the (as digitized by us) numerical results of Bergstrom \textit{et al.}~\cite{bergstrom93a}.}
\includegraphics[width=85mm]{supplemental-figure1.eps}
\label{fig:rayleighzoom}
\end{figure}
Fig.~\ref{fig:ramanzoom} shows the Raman cross-section across the same frequency range.
This gives a closer look at the behaviour of the Raman cross-section near threshold.
\begin{figure}[tbh]
\caption{Raman scattering cross-sections for
photon-hydrogen scattering (in units of $\sigma_T$).
These are shown as a function of incident photon energy $\omega$ in
atomic units.
Our results are shown and compared against the (below threshold only)
results for H($1s$) $\to$ H($2s$) of Sadeghpour and Dalgarno~\cite{sadeghpour92b}.
The cross-section for H($1s$) $\to$ H($3d$) is also shown.}
\includegraphics[width=85mm]{supplemental-figure2.eps}
\label{fig:ramanzoom}
\end{figure}
Table~\ref{tab:crosssectiondata} gives three data points for the Rayleigh and Raman cross-sections. The photoionization cross-section
is also given for the frequency above threshold. Above threshold the cross-sections must be computed at a pseudostate energy.
Therefore, the values at $0.52$~a.u. were determined by interpolating linearly between two pseudostate frequencies.
This introduces an uncertainty in the value, resulting in only a few significant figures. Table~\ref{tab:crosssectiondata} gives
the cross-sections at $\omega = 0.52$~a.u. as calculated for three different basis sets to show the convergence. For both the Raman and Compton cases
this shows that the convergence is not monotonic with respect to the number of Laguerre functions in the basis set; this will be examined in
future work.
\begin{table}[tbh]
\caption{The Rayleigh, Raman, Photoionization and Compton cross-sections (in units of $\sigma_T$) are given for three frequencies.
The cross-sections at frequency $\omega = 0.52$~a.u. are presented for basis sets with $N=80$, $N=100$ and $N=120$ Laguerre functions
for each partial wave.}
\begin{tabular}{cccccc} \hline \hline
$\omega$ (a.u.) & $N$ & Rayleigh $\sigma_e/\sigma_T$ & Raman $\sigma_R/\sigma_T$ & Photoionisation $\sigma_I/\sigma_T$ & Compton $\sigma_C/\sigma_T$ \\ \hline
0.40 & 80 & 7.2448357561099925 & 0.0677208258644729 & & \\
0.40 & 100 & 7.2448357561099925 & 0.0677208258644729 & & \\
0.40 & 120 & 7.2448357561099925 & 0.0677208258644729 & & \\
0.45 & 80 & 12.769933387763700 & 0.5657804519972952 & & \\
0.45 & 100 & 12.769933387763700 & 0.5657804519972952 & & \\
0.45 & 120 & 12.769933387763700 & 0.5657804519972952 & & \\
0.52 & 80 & 1.244 & 0.0456 & $8.531\times 10^6$ & 0.1524 \\
0.52 & 100 & 1.242 & 0.0462 & $8.530\times 10^6$ & 0.1517 \\
0.52 & 120 & 1.240 & 0.0461 & $8.528\times 10^6$ & 0.1519 \\ \hline \hline
\end{tabular}
\label{tab:crosssectiondata}
\end{table}
\section{Convergence of cross-sections with Laguerre-basis size}
We ran a series of calculations with the inclusion of various
numbers of Laguerre functions, $N_\ell$, for each partial wave.
The convergence of the Rayleigh and photoionization scattering
cross-sections is shown in Fig.~\ref{fig:rayleigh-converges}.
Note that the highest three eigenenergies in the $N=120$ calculation
are $E_{119p}=59.7669$, $E_{120p}=119.823$, $E_{121p}=357.323$~a.u.,
and thus it is no surprise that the polarizabilities develop
observable kinks around the $\omega \approx 100$~a.u. range.
\begin{figure}[tbh]
\caption{Convergence of Rayleigh and photoionization scattering cross-sections
for photon-hydrogen scattering (in units of $\sigma_T$).
The $\sigma_{e}$ and $\sigma_{I}$ are shown as a function of
incident photon energy $\omega$ in atomic units
(i.e.\ up to $\approx 2700$~eV) for various Laguerre-basis set sizes $N$.
Our results broadly agree with previous (purely analytic) results as shown.
Two plots are given to show the behaviour of Rayleigh and photoionization (left) as well as
to highlight the small peak in the Rayleigh cross-section (right) by plotting on a smaller linear scale.
}
\includegraphics[width=85mm]{supplemental-figure3a.eps}
\includegraphics[width=85mm]{supplemental-figure3b.eps}
\label{fig:rayleigh-converges}
\end{figure}
The convergence of the Raman scattering cross-sections is shown
in Fig.~\ref{fig:ramanconvergence} for the total
$\sigma_{R}(\omega) = \sum_{(j;E_j < \Delta_N)} \sigma_{r;j}(\omega)$
for various $N_\ell$ (left) and for the summed cross-section when
only the $L_j = 0$ or $L_j = 2$ contributions are included (right).
Fig.~\ref{fig:ramanconvergence} shows that the $L_j = 0$ contribution
rapidly declines just above threshold, and then increases rapidly.
The $L_j = 2$ contribution exhibits a small peak which changes with
basis set size. This uncertainty in the (relatively small) cross-section
will be studied further in future work.
\begin{figure}[tbh]
\caption{Convergence of Raman scattering cross-sections for
photon-hydrogen scattering (in units of $\sigma_T$) for
various Laguerre-basis set sizes $N$.
These are shown as a function of incident photon energy $\omega$ in
atomic units (i.e.\ up to $\approx 2700$~eV).
Our results are shown for the sum of the individual cross-sections
$\sigma_{R}(\omega)$ (left) and for the $L_j=0$ or $L_j = 2$ contributions (right).
The total cross-section (left) has a bi-modal decay above
threshold due to the two different sets of contributions.
Note also that there are residual issues with the convergence
of the (relatively small) Raman cross-sections above threshold
due to the small pollution of the highest-lying `bound' state/s
which have a small mixture of pseudostate character which should
really be contributing to the (much larger) Compton cross-sections.
This can be seen by comparing the frequency location of the two Raman peaks
in the (right) plot against the location of the equivalent two Compton peaks in
Fig.~\ref{fig:comptoncompare}(right).
}
\includegraphics[width=85mm]{supplemental-figure4a.eps}
\includegraphics[width=85mm]{supplemental-figure4b.eps}
\label{fig:ramanconvergence}
\end{figure}
The convergence of the Compton scattering cross-sections is
shown in Fig.~\ref{fig:comptoncompare} when only the $L_j = 0$ or $L_j = 2$ contributions are included
for various $N_\ell$. Here we show, as in the main paper, that the $L_j = 2$ contribution
dominates the total Compton cross-section. Fig.~\ref{fig:comptoncompare} demonstrates that
there is only a small change in the
Compton cross-section when the size of the basis set is varied.
\begin{figure}[tbh]
\caption{Convergence of Compton scattering cross-sections
for photon-hydrogen scattering (in units of $\sigma_T$)
for various Laguerre-basis set sizes $N$.
The summed cross-section is shown when only the
$L_j = 0$ contributions are included, which approximately
agrees with the results of Bergstrom \textit{et al.}~\cite{bergstrom93a} and
Drukarev \textit{et al.}~\cite{drukarev10a} which are also
shown. The summed cross-section when only the
$L_j = 2$ contributions are included is also shown.
The Compton cross-sections are plotted on a y-logscale (left)
to show the behaviour of the $L = 2$ as well as the smaller $L = 0$ cross-sections.
This is also plotted on a linear scale (right), to show the
convergence behaviour of the $N=120$, $N=100$ and $N=80$ calculations,
not visible on a logscale.
}
\includegraphics[width=85mm]{supplemental-figure5a.eps}
\includegraphics[width=85mm]{supplemental-figure5b.eps}
\label{fig:comptoncompare}
\end{figure}
Q: How to dynamically add new rows to a table with date picker fields in the new rows? I am working on a timesheet application with Struts 1.2 and am implementing jQuery inside it. I would greatly appreciate a quick reply, as I am still on the learning curve with both jQuery and Struts.
The requirement is this:
On clicking the (+) button, insert a row which is populated dynamically and has two timepickers in it: one used for determining the time-from, the other the time-to, based on which the duration is to be calculated. Before clicking on the (-) button, the user needs to check at least one row for deleting that record. Below are the screenshots. The timepicker plugin I used is Trent Richardson's. Is this possible using jTable? Or jqGrid too? Please advise.
A: I worked around it using some of the ideas from the other users. From the perspective of a newbie to Struts and Java, I found that in the Struts tag library the id and class attributes are given as styleId and styleClass respectively. For example, an attribute that would be written as id on a plain HTML tag is written as styleId on the corresponding Struts tag.
Though I am still interested in knowing whether the below can be corrected further:
http://jsfiddle.net/manmohan_menon/6YkyY/4/
Any advice will be greatly appreciated. Thanks! :)
\section{Introduction}
Quantum chromodynamics (QCD) is the theory of the strong interactions.
Although QCD has proven remarkably successful,
there is a blemish called the strong $CP$ problem.
The strong $CP$ problem is that the effective Lagrangian
of QCD has $CP$ violating term but it is not observed,
i.e.,
the experimental value of neutron electric dipole moment is
smaller than expected
by many orders of magnitude.
Peccei and Quinn proposed an attractive solution to solve
this problem
\cite{axion-bible1,axion-bible2,axion-bible3,axion-bible4,axion-bible5}.
They introduced a new global $U(1)$ symmetry,
Peccei--Quinn (PQ) symmetry.
When PQ symmetry is spontaneously broken,
a new effective term arises in QCD Lagrangian
which cancels the $CP$ violation term.
The solution also predicts a new pseudo Nambu--Goldstone
boson, axion.
The expected behavior of an axion is characterized
mostly by the scaling factor of the PQ symmetry breaking, $f_a$,
and so its mass, $m_a$, which is directly related to $f_a$ by
$m_a=6\times10^{15} [\mathrm{eV}^2]/f_a$.
Axions are expected to be produced in stellar core through their coupling to photons with energies
of order keV.
Especially,
the sun can be a powerful source of axions
and the so-called `axion helioscope' technique
may enable us to detect such axions directly
\cite{sikivie1983,bibber1989}.
The principle of the axion helioscope is illustrated
in Fig.~\ref{fig:principle}.
Axions would be produced
through the Primakoff process in the solar core.
The differential flux of solar axions
at the Earth is approximated by
\cite{bahcall2004,raffelt2005}
\begin{eqnarray}
\d \Phi_a/\d E&=&6.020\times10^{10}[\mathrm{cm^{-2}s^{-1}keV^{-1}}]
\nonumber\\
&&{}\times\left(\gagg\over10^{-10}\mathrm{GeV}^{-1}\right)^2
\left( \frac{E}{1\,\mathrm{keV}}\right)^{2.481}
\exp \left( -\frac{E}{1.205\,\mathrm{keV}}\right),
\label{eq:aflux}
\end{eqnarray}
where
$\gagg$ is the axion-photon coupling constant%
\footnote{The formula is different from the one
in our previous paper \cite{sumico1997,sumico2000},
but both coincide numerically with good approximation.}%
.
Their average energy is 4.2\,keV,
reflecting the core temperature of the sun ($\sim 3kT$),
since low-energy axion production is suppressed by screening effects\cite{raffelt1986}.
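As a cross-check of the quoted average energy, Eq.~(\ref{eq:aflux}) can be integrated numerically. The sketch below (pure Python, a simple Riemann sum with energies in keV, truncated where the spectrum is negligible) recovers a mean of about 4.2\,keV:

```python
import math

def dphi_de(E_keV, g10=1.0):
    """Differential solar axion flux [cm^-2 s^-1 keV^-1] of Eq. (1);
    g10 is g_agg in units of 1e-10 GeV^-1."""
    return 6.020e10 * g10**2 * E_keV**2.481 * math.exp(-E_keV / 1.205)

# Mean energy by a simple Riemann sum; the flux is negligible above ~30 keV
dE = 1e-3
norm = mean = 0.0
for k in range(1, 30001):
    E = k * dE
    f = dphi_de(E)
    norm += f * dE
    mean += E * f * dE
mean /= norm   # about 4.19 keV, consistent with the quoted 4.2 keV
```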
Then, they would be coherently converted into X-rays
through the inverse process
in a strong magnetic field at a laboratory.
The conversion rate is given by
\begin{equation}
P_{a\to\gamma} = \left| \frac{g_{a\gamma\gamma}}{2}
\exp\left[-\int_{0}^{L} \mathrm{d}z \Gamma/2 \right] \times \int_{0}^{L}dz B_\bot\exp
\left[i \int_{0}^{z} \mathrm{d}z^{\prime}\left( q - \frac{i\Gamma}{2}\right) \right]\right|^2,
\label{eq:prob}
\end{equation}
where
$z$ and $z^{\prime}$ are coordinates along the direction of the incident solar axion,
$B_\bot$ is the strength of the transverse magnetic field,
$L$ is the length of the field along $z$-axis,
$\Gamma$ is the X-ray absorption coefficient of helium,
$q=(m_\gamma^2-m_a^2)/2E$ is the momentum transfer
by the virtual photon,
and
$m_\gamma$ is the effective mass of the photon
which equals zero in vacuum.
Eq.~(\ref{eq:prob}) reduces to
\begin{equation}
P_{a\to\gamma} = \left( \frac{g_{a\gamma\gamma}B_\bot L}{2}\right)^2
\left[ \frac{\sin(qL/2)}{qL/2}\right]^2,
\label{eq:prob_plain}
\end{equation}
in the case where $q$ and $B_\bot$ are constant along the $z$-axis and $\Gamma=0$.
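To illustrate the scale of Eq.~(\ref{eq:prob_plain}), the sketch below evaluates it in natural units for the present magnet ($B=4$\,T over $L=2.3$\,m). The unit-conversion constants ($1\,\mathrm{T}\approx 195.35\,\mathrm{eV}^2$, $1\,\mathrm{m}\approx 5.068\times10^{6}\,\mathrm{eV}^{-1}$) are assumptions of this sketch, not values from the text:

```python
import math

# Assumed natural-unit conversions: 1 T ~ 195.35 eV^2, 1 m ~ 5.068e6 eV^-1
T_TO_EV2 = 195.35
M_TO_INV_EV = 5.068e6

def p_conversion(g_inv_gev, B_tesla, L_m, q_ev=0.0):
    """Reduced axion-photon conversion probability, evaluated in natural units."""
    g = g_inv_gev * 1.0e-9          # GeV^-1 -> eV^-1
    B = B_tesla * T_TO_EV2          # T -> eV^2
    L = L_m * M_TO_INV_EV           # m -> eV^-1
    x = q_ev * L / 2.0              # qL/2, dimensionless
    form = 1.0 if x == 0.0 else (math.sin(x) / x) ** 2
    return (g * B * L / 2.0) ** 2 * form

# Coherent (q = 0) case for the present magnet: B = 4 T over L = 2.3 m
P0 = p_conversion(1.0e-10, 4.0, 2.3)   # of order 2e-19 for g = 1e-10 GeV^-1
```

Any nonzero momentum transfer $q$ suppresses the probability through the $\mathrm{sinc}^2$ factor, which is the coherence loss the dispersion-matching gas is designed to repair.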
In 1997,
the first phase measurement \cite{sumico1997}
was performed
using an axion helioscope with a dedicated superconducting magnet
which is identical to the one used in the present experiment,
except that the gas container was absent and
the conversion region was vacuum.
Its sensitivity was limited to the axion mass region of $m_a<0.03\rm\,eV$
due to a loss of coherence by non-zero $q$ in
Eq.~(\ref{eq:prob_plain}).
If one can adjust $m_\gamma$ to $m_a$,
coherence will be restored
for non-zero mass axions.
This is achieved by filling the conversion region with gas.
A photon in the X-ray region acquires a positive effective mass
in a medium.
In light gas,
such as hydrogen or helium,
it is well approximated by
\begin{equation}
m_\gamma=\sqrt{4\pi\alpha N_e\over m_e},
\end{equation}
where $\alpha$ is the fine structure constant,
$m_e$ is the electron mass,
and $N_e$ is the number density of electrons.
We adopted cold helium gas as a dispersion-matching medium.
Here, a light gas was preferred since it minimizes self-absorption
of the converted X-rays.
It is worth noting that
helium remains gaseous even at 5--6\,K,
the operating temperature of our magnet.
Since the bore of the magnet is limited in space,
the easiest way is to keep the gas
at the same temperature as the magnet.
Moreover,
axions as heavy as a few electronvolts
can be reached
with helium gas of only about one atmosphere
at this temperature.
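A rough consistency check of these numbers can be made from the effective-mass formula above. The conversion constants and the ideal-gas approximation below are assumptions of this sketch (the experiment instead interpolated NIST tables for the real-gas density):

```python
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
M_E_EV = 5.110e5               # electron mass [eV]
EV_TO_INV_CM = 5.0677e4        # assumed: 1 eV of momentum ~ 5.0677e4 cm^-1
K_B = 1.380649e-23             # Boltzmann constant [J/K]

def electron_density_for(m_gamma_ev):
    """Electron number density [cm^-3] giving effective photon mass m_gamma."""
    n_ev3 = m_gamma_ev ** 2 * M_E_EV / (4.0 * math.pi * ALPHA)   # in eV^3
    return n_ev3 * EV_TO_INV_CM ** 3

n_e = electron_density_for(1.0)       # ~7.3e20 cm^-3 for m_gamma = 1 eV
n_he = n_e / 2.0                      # helium carries 2 electrons per atom
# Ideal-gas pressure estimate at the 6 K operating temperature:
# about 0.03 MPa, i.e. well below one atmosphere for m_gamma = 1 eV
P_MPa = n_he * 1.0e6 * K_B * 6.0 / 1.0e6   # cm^-3 -> m^-3, then Pa -> MPa
```

Scaling the pressure as $m_\gamma^2$ shows why gas of only about one atmosphere at this temperature already reaches axion masses of a few electronvolts.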
In this way,
in 2000,
the second phase measurement \cite{sumico2000}
was performed
to search for sub-electronvolt axions.
This experiment,
together with the first phase measurement of 1997 \cite{sumico1997}
with vacuum conversion region,
yielded an upper limit of
$\gagg<\hbox{6.0--10.5}\times10^{-10}\rm GeV^{-1}$ (95\% CL)
for $m_a<0.27\rm\,eV$.
In this Letter, we will present the result of
the third phase measurement
in which we scanned the mass region between
$0.84<m_a<1.00\rm\,eV$
using the upgraded apparatus to withstand higher pressure gas.
\section{Experimental apparatus}
\label{sec:app}
The schematic figure of the axion helioscope is shown
in Fig.~\ref{fig:sumico}.
It is designed to track the sun in order to achieve
long exposure time.
It consists of
a superconducting magnet, X-ray detectors, a gas container,
and an altazimuth mounting.
In the following paragraphs,
we will describe each part in due order.
The superconducting magnet \cite{sato1997}
consists of two 2.3-m long
race-track shaped coils running parallel
with a 20-mm wide gap between them.
The magnetic field in the gap is 4\,T
perpendicular to the helioscope axis.
The coils are kept at 5--6\,K during operation.
In order to make it easy to swing this large cryogenic apparatus,
two design features are employed.
First, the magnet was made cryogen-free
by using two Gifford-McMahon refrigerators
to cool it directly by conduction.
Second, a persistent-current switch was installed.
Thanks to this, the magnet can be freed from
thick current leads after excitation,
and the magnetic field is very stable for a long period of time
without supplying current.
The container to hold dispersion-matching gas is inserted
in the $20\times92\,\mathrm{mm^2}$
aperture of the magnet.
Its body is made of four 2.3-m long 0.8-mm thick
stainless-steel square pipes
welded side by side to each other.
The entire body is wrapped with 5N high purity
aluminium sheet to achieve high uniformity of temperature.
The measured thermal conductance between the both ends was
$1\times10^{-2}\mathrm{W/K}$ at 6\,K.
One end at the forward side of the container is sealed
with welded plugs
and is suspended firmly by three Kevlar cords,
so that thermal flow through this end is highly suppressed.
Combined thermal conductance of three Kevlar cords is estimated to be
smaller than that of the container itself by four orders of magnitude.
Since the place-to-place temperature difference of the magnet is likely to be
much less than 1\,K, the temperature of the container is calculated to be
uniform along its length to within the order of 0.1\,mK, which is fairly
small and has only a negligible effect on the coherence of the axion to
photon conversion. Radiative thermal flow between the gas container and
the magnet is the same order of magnitude as the conductive flow through
the Kevlar cords, and hence negligible, too.
The opposite side nearer to the X-ray detectors
is flanged and fixed to the magnet.
At this end of the container, gas is separated from vacuum
with an X-ray window manufactured by METOREX
which is transparent to X-rays above 2\,keV
and can hold gas up to 0.3\,MPa at liquid helium temperature.
To have automatic sequential pressure settings of the dispersion-matching gas
for the scan of the axion mass region around 1\,eV,
a gas handling system is built with
3 Piezo valves (two HORIBASTEC PV1101 and a PV1302)
and a precision pressure gauge (YOKOGAWA MU101-AH1N).
The temperature of the gas container was measured by a Lakeshore CGR thermistor.
Required pressure for a given mass was determined by the corresponding gas density
and the temperature
based on interpolation of the tables from NIST \cite{nist}.
The error of the target mass is estimated to be less than 5\,meV
by the errors of the pressure and the temperature.
Since we are scanning a range of the axion mass,
the error is only crucial to the lower and upper edges of the mass range.
Helium gas is fed to the container through the Piezo valve to have a specified
pressure setting.
If a lower pressure setting is required, then another Piezo valve is opened
to suck the gas by a vacuum pump connected to the valve.
Once a proper pressure setting is settled, all the valves are closed and the helium gas
is kept confined to have a constant electron number density in the container
until the measurement for the setting is completed.
The uniformity of the temperature guarantees the homogeneous
density along the length of the container.
The whole process is done step by step automatically to scan the axion mass.
Absorption of X-ray in the helium gas is not negligible and the effect is properly
calculated in Eq. (\ref{eq:prob}).
The electron number density might vary slightly along the container
from the one end to the other
because of the gravity
when the inclination is high,
and the coherence could be partly lost accordingly.
The effect is also taken into account in Eq. (\ref{eq:prob}).
The decreases of the conversion probability $P_{a\to \gamma}$
due to the absorption and the gravity are less than 23\% and 1\%,
respectively,
when $m_\gamma$ is tuned to 1.0\,eV at the center of the gas container.
For emergency exhaust of the gas in case of rapid temperature increase
due to a magnet quenching,
a rupture disk, which is designed to break at 0.248 MPa,
is introduced into the gas handling system
to avoid destruction of the X-ray window by the over pressure.
Sixteen PIN photodiodes, Hamamatsu Photonics S3590-06-SPL,
are used as the X-ray detectors,
whose chip sizes are $11\times11\times0.5\rm\,mm^3$ each.
In the present measurement, however, only twelve of them are used for the analysis
because four became defective through thermal stress since the previous phase measurement.
The effective area of a photodiode was measured
formerly using a pencil-beam X-ray source,
and found to be larger than $9\times9\,\mathrm{mm^2}$.
It has an inactive surface layer of
$0.35\,\mu\mathrm{m}$ \cite{akimotoPIN}.
Each chip is mounted on a Kapton film
bonded to an Invar plate with cryogenic compatible adhesive.
The X-ray detectors
are mounted in a 10-mm thick radiation shielding box made of
oxygen-free high conductivity copper (OFHC Cu),
which is then surrounded by a lead shield of about 150\,mm thick.
The copper shield is operated at about 60\,K,
so that
it also functions as a cold finger for the X-ray detectors.
Details on the X-ray detector are given
in Refs.\ \cite{naniwaPIN,akimotoPIN}.
The output from each photodiode is fed
to a charge-sensitive preamplifier whose first-stage
FET is at the cryogenic stage near the photodiode chip,
and the preamplifier outputs are digitized using
CAMAC flash analog-to-digital convertors (FADCs), REPIC RPC-081's,
at a sampling rate of 10\,MHz.
The preamplifier outputs are also fed to
shaping amplifiers, Clear Pulse CP4026,
whose outputs are then discriminated to generate triggers.
Thus, waveforms of the sixteen preamplifier outputs
are recorded simultaneously
over 50 $\mu$s before and after each trigger
to be committed to later off-line analysis.
Each detector was calibrated by 5.9-keV Mn X-rays
from a \nuc{55}{Fe} source installed in front of them.
The source is manipulated from the outside
and is completely retracted behind the shield
during the axion observations, i.e. during solar tracking.
The entire axion detector is constructed in a vacuum vessel
and the vessel is mounted on an altazimuth mount.
Its trackable altitude ranges from $-28^\circ$ to $+28^\circ$
and its azimuthal direction is designed to be limited only
by a limiter which prevents the helioscope from endless rotation.
However, in the present measurement, the azimuthal range is restricted to
about 60$^\circ$ because a cable handling system for its
unmanned operation is not completed yet.
The range corresponds to an exposure time
of about a quarter of a day in observing the sun.
This is enough for the time being,
since background is measured during the other three quarters of a day.
When the entire cable handling system is complete, running time per pressure setting
can be shortened by a factor of more than two.
This helioscope mount is driven by two AC servo motors
controlled by a computer (PC).
The PC also monitors the azimuthal and altitudinal directions
of the helioscope regularly
by two precision rotary encoders
and forms a feedback controlling loop as a whole.
The US Naval Observatory Vector Astronomy Subroutines (NOVAS) \cite{novas}
were used to calculate the solar position.
The altitudinal origin was determined from a spirit level.
Since the sun is not directly visible from the laboratory
on the basement floor,
the azimuthal origin was first determined
by a gyrocompass, which detects the north direction by the
rotation of the earth within an error of $\pm8"$,
and then it was introduced to the laboratory with a theodolite.
Since the effective aperture of the helioscope is narrow,
it is crucial to determine its accurate geometry.
The axis of the helioscope is defined by
two cross hairs at the edge of the vacuum vessel.
The position of each part of the helioscope was measured
relative to these cross hairs
from their exterior
using the theodolite
when they were installed.
The positions of the PIN photodiodes were determined relative to
the copper shielding box from a photo image
taken prior to the installation.
As it is hard to estimate analytically
the effect of the geometrical errors
as well as the effect of the size of the axion source,
we performed a Monte Carlo simulation and
found that the overall effective area is larger than $371\,\mathrm{mm}^2$
at 99\% confidence level.
It is smaller than the nominal total area of all the working PIN
diodes mainly because the line of sight is partly shaded by the walls of the
four square pipes of the gas container from the axion source region of
the sun. Also included are many other small effects like geometrical
misalignment of the container and the PIN diodes, distortion of the
square pipes, absorption by thick supporting grids of the X-ray window,
and so on.
\section{Measurement and Analysis}
From December 2007 through April 2008,
a measurement employing dispersion-matching gas was performed
for 34 photon mass settings with about three days of running time per setting
to scan around 1\,eV,
as listed in Table~\ref{tab:settings}.
Before obtaining energy spectra,
each event was categorized into one of two major groups:
solar observation and background.
Events recorded while the measured direction agreed with the sun
are counted as the former.
When the sun is completely out of the magnet aperture,
events are counted as the latter.
Otherwise, events are discarded.
We applied numerical pulse shaping to the raw waveforms
using the Wiener filter.
The energy of an X-ray is given by the peak height of
a wave after the shaping.
The shaped waveform is given by
\begin{equation}
\label{eq:wiener}
U(\omega)=
{S^\ast(\omega) C(\omega)
\over|N(\omega)|^2},
\end{equation}
where $U(\omega)$, $S(\omega)$, $C(\omega)$,
and $N(\omega)$ are Fourier transformations of
the shaped waveform,
the ideal signal waveform,
the measured waveform,
and the noise, respectively.
Noises are obtained by gathering waveforms
while no trigger exists,
and the ideal signal waveform is
approximated by averaging signals from 5.9-keV X-rays.
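A minimal sketch of this shaping in the frequency domain follows, using a pure-Python DFT; the exponential pulse shape and flat noise spectrum are illustrative assumptions (in the experiment the template is the averaged 5.9-keV calibration pulse and the noise spectrum is measured from trigger-free waveforms):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

def wiener_shape(measured, template, noise_power):
    """Shaped waveform U(w) = S*(w) C(w) / |N(w)|^2, back in the time domain."""
    C, S = dft(measured), dft(template)
    U = [S[j].conjugate() * C[j] / noise_power[j] for j in range(len(C))]
    return idft(U)

# Illustrative exponential pulse with a flat (white) noise spectrum
template = [math.exp(-k / 4.0) for k in range(16)]
measured = [3.0 * v for v in template]        # a pulse 3x the template height
flat_noise = [1.0] * 16
peak_ratio = (max(wiener_shape(measured, template, flat_noise))
              / max(wiener_shape(template, template, flat_noise)))  # ~3.0
```

Because the shaping is linear, the peak height of the shaped output scales with the pulse amplitude, which is what makes it a usable energy estimator.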
The response function of this waveform analysis,
i.e., non-linearity, gain walk by trigger timing, etc.,
was investigated thoroughly using simulated pulses
which were obtained by adding
the template waveform to the noise waveforms.
A correction was made
based on this numerical simulation.
Saturation arose at about 25\,keV;
therefore, the region $E>20\,\mathrm{keV}$ was not used
in the later analysis.
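The Wiener shaping above is a per-frequency-bin operation once the Fourier coefficients are in hand. The following is an illustrative sketch, not the experiment's actual code; it assumes the template spectrum $S$, the measured spectrum $C$, and the noise power $|N|^2$ are already available as arrays.

```java
// Sketch of the Wiener shaping U(w) = S*(w) C(w) / |N(w)|^2, applied
// bin by bin in the frequency domain. All inputs are assumed to be
// precomputed Fourier coefficients; this is not the experiment's code.
final class WienerShaping {

    /**
     * sRe/sIm: template spectrum S; cRe/cIm: measured spectrum C;
     * noisePower: |N|^2 per bin (real and positive). Returns {uRe, uIm}.
     */
    static double[][] shape(double[] sRe, double[] sIm,
                            double[] cRe, double[] cIm,
                            double[] noisePower) {
        int n = sRe.length;
        double[] uRe = new double[n];
        double[] uIm = new double[n];
        for (int k = 0; k < n; k++) {
            // conj(S_k) * C_k, written out in real/imaginary parts
            double re = sRe[k] * cRe[k] + sIm[k] * cIm[k];
            double im = sRe[k] * cIm[k] - sIm[k] * cRe[k];
            uRe[k] = re / noisePower[k];
            uIm[k] = im / noisePower[k];
        }
        return new double[][] { uRe, uIm };
    }
}
```

The energy of a pulse would then be read off the peak of the inverse-transformed shaped waveform.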
The event reduction process was applied
in the same way as in the second phase measurement \cite{sumico2000}
in order to remove bad events such as microphonics.
Applying the same cuts to the $^{55}$Fe source data,
we found the loss of axion detection efficiency due to the reduction to be less than 1.5\%.
The background level was
about $1.0\times10^{-5}\mathrm{keV^{-1}s^{-1}/PIN(81\,\mathrm{mm}^2)}$
at $E=5\hbox{--}10\,\mathrm{keV}$.
By analysing the calibration data,
we found the energy resolution of each photodiode
to be
0.8--1.2\,keV (FWHM)
for 5.9-keV photons.
In Fig.~\ref{fig:spec},
the energy spectrum of the solar observation
with the gas density for $m_\gamma=1.004\rm\,eV$
is shown together with the background spectrum.
We searched for expected axion signals
which scale with $\gagg^4$
for various $m_a$
in these spectra.
The smooth curve in the figure represents an example for
the expected axion signal
where $m_a=m_\gamma=1.004\rm\,eV$
and $\gagg=7.7\times10^{-10}\rm GeV^{-1}$,
which corresponds to the upper limit
estimated as follows.
A series of least $\chi^2$ fittings was performed assuming various
$m_a$ values.
Data from the 34 different gas density settings
were combined by using the summed $\chi^2$ of the 34.
The energy region of 4--20\,keV was used for fitting
where the efficiency of the trigger system is almost 100\%
and the FADCs do not saturate.
As a result, no significant excess was seen for any $m_a$,
and thus an upper limit on $\gagg$ at 95\% confidence level
was given following the Bayesian scheme.
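The limit-setting procedure just described can be sketched schematically: a summed $\chi^2$ over all density settings for a signal amplitude proportional to $\gagg^4$, followed by a Bayesian 95\% upper limit. The code below is a simplified illustration only; it assumes a flat prior on the non-negative amplitude and a plain grid integration, details which the text does not specify.

```java
// Schematic limit-setting: a summed chi^2 over all density settings for a
// signal amplitude a (proportional to g^4), then a Bayesian 95% CL upper
// limit assuming a flat prior on a >= 0. A grid integration stands in for
// whatever numerical scheme the actual analysis used.
final class UpperLimitSketch {

    /** chi^2 summed over settings i and energy bins j for amplitude a. */
    static double chi2(double a, double[][] data, double[][] err, double[][] sig) {
        double sum = 0.0;
        for (int i = 0; i < data.length; i++) {
            for (int j = 0; j < data[i].length; j++) {
                double r = (data[i][j] - a * sig[i][j]) / err[i][j];
                sum += r * r;
            }
        }
        return sum;
    }

    /** 95% CL upper limit on a, from the posterior exp(-chi^2/2) on a grid. */
    static double upperLimit(double aMax, int steps,
                             double[][] data, double[][] err, double[][] sig) {
        double da = aMax / steps;
        double ref = chi2(0.0, data, err, sig); // subtract a constant for numerical safety
        double[] post = new double[steps + 1];
        double total = 0.0;
        for (int k = 0; k <= steps; k++) {
            post[k] = Math.exp(-0.5 * (chi2(k * da, data, err, sig) - ref));
            total += post[k];
        }
        double cum = 0.0;
        for (int k = 0; k <= steps; k++) {
            cum += post[k];
            if (cum >= 0.95 * total) return k * da;
        }
        return aMax;
    }
}
```

With a single bin of data 0, error 1, and unit signal shape, the posterior is a half-Gaussian and the returned limit lands near the expected value of about 1.96.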
Fig.~\ref{fig:exclusion} shows
the limit plotted as a function of $m_a$.
Our previous limits from the first \cite{sumico1997}
and the second \cite{sumico2000} phase measurements
and some other bounds are also
plotted in the same figure.
The shown previous limits have been updated using newly measured
inactive surface layer thickness of the PIN photodiode~\cite{akimotoPIN};
the difference is, however, marginal.
SOLAX~\cite{solax1999}, COSME~\cite{cosme2002} and DAMA~\cite{DAMA2001} are solar axion experiments
which exploit coherent conversion (i.e., axion Bragg scattering \cite{Pascos})
on the crystalline planes of germanium and NaI detectors.
The experiments by Lazarus \etal~\cite{Lazarus} and by CAST~\cite{CAST}
are of the same kind as ours.
The latter utilizes a large decommissioned magnet of the LHC at CERN.
Its limit is better than our previous limits by a factor of seven
in the low-$m_a$ region
due to its larger $B$ and $L$ in Eq.~(\ref{eq:prob_plain}).
In the region $m_a > 0.14\rm\,eV$, however,
our previous and present limits surpass the limit of CAST.
The limit $\gagg<2.3\times10^{-9}\rm GeV^{-1}$ is
the solar limit inferred from solar age considerations,
and the limit $\gagg<1\times10^{-9}\rm GeV^{-1}$
is a more stringent one reported
by Schlattl \etal~\cite{schlattl1999}
based on a comparison between
the helioseismological sound-speed profile
and
standard solar evolution models with energy losses by solar axions.
Watanabe and Shibahashi \cite{watanabe2001} have argued
that the helioseismological bound
can be lowered to $\gagg<4.0\times10^{-10}\rm GeV^{-1}$
if the `seismic solar model' and
the observed solar neutrino flux are combined.
\section{Conclusion}
The axion mass around 1\,eV has been scanned
with an axion helioscope with cold helium gas
as the dispersion-matching medium
in the $4{\rm\,T}\times2.3\rm\,m$ magnetic field,
but no evidence for solar axions was seen.
A new limit on $\gagg$ shown in Fig. \ref{fig:exclusion}
was set for $0.84<m_a<1.00\rm\,eV$.
It is the first result of a search for the axion in the
$\gagg$-$m_a$ parameter region of the preferred axion models~\cite{GUT_axion}
with a magnetic helioscope.
When the complete unmanned operation is ready
with the automatic cable handling system,
the mass scan will be continued to cover still wider mass range around 1\,eV.
\begin{ack}
The authors thank the former director general of KEK, Professor H. Sugawara,
for his support at the beginning of the helioscope experiment.
This research was partially supported
by the Japanese Ministry of Education, Science, Sports and Culture,
Grant-in-Aid for COE Research and Grant-in-Aid for Scientific Research (B),
and also by the Matsuo Foundation.
\end{ack}
Q: How to proceed to test a function that receives a "Class" parameter using jMockIt?

I'm developing a function that uses reflection on a Class<?> object passed as a parameter and returns a POJO with some fields populated, something like this:
public MyPojo functionDeveloper(Class<?> targetClass) { /*...*/ }
This function works fine and does what it needs to do, so no problems on this side.
Now, I need to create a unit test for this function, but I can't really figure out how to proceed. We are supposed to mock as much as we can (which basically rules out creating a dummy parameter). With some random parameter from a generic class I would go like this:
@Tested
TestedClass testedClassInstance;
@Mocked
private MyGenericClass myGenericClass;
@Mocked
private Field[] fields;
@Test
public void testFunction() {
    new Expectations(testedClassInstance) {{
        myGenericClass.getDeclaredFields();
        result = fields;
    }};
    /* assertions here */
}
...and my intention with the Class<?> parameter was the same: being able to say "when the code calls targetClass.getDeclaredFields(), return the mocked fields object I declared before". But jMockIt complains about not being able to mock the Class<?> object.
So, how do I proceed here? I get that java.lang.Class is "special" and all that, and there's probably something I'm missing from how jMockIt works. Any idea?
A: When you use @Mocked on a class, the annotated field holds a mocked instance automatically created by the JMockit library.
So, try myGenericClass.getClass().getDeclaredFields()
More details : https://jmockit.github.io/tutorial/Mocking.html#mocked
A: You have a really simple case of a function. Functions are incredibly easy to test and rarely need mocks. They receive some input and return some output. What you need to do is test that a given input produces the expected output.
@Tested
TestedClass testedClassInstance;
@Test
public void returnsAllFieldsOfTheProvidedClass() {
    MyPojo result = testedClassInstance.functionDeveloper(MyGenericClass.class);
/* assertions here*/
}
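To make the direct-testing advice concrete, here is a self-contained sketch. MyPojo, MyGenericClass, and the functionDeveloper body below are hypothetical stand-ins (the asker's real implementation is not shown in the question); the point is only that a pure function over Class<?> can be tested through its output, with no mocks at all.

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Everything below is a hypothetical stand-in for the asker's classes; the
// real functionDeveloper body is not shown in the question. The idea is
// that a pure function over Class<?> can be verified via its output alone.
class DirectTestSketch {

    static class MyPojo {
        final List<String> fieldNames = new ArrayList<>();
    }

    static class MyGenericClass { // sample input class for the test
        int id;
        String name;
    }

    // A plausible (assumed) implementation: collect declared field names.
    static MyPojo functionDeveloper(Class<?> targetClass) {
        MyPojo pojo = new MyPojo();
        for (Field f : targetClass.getDeclaredFields()) {
            pojo.fieldNames.add(f.getName());
        }
        return pojo;
    }

    // The whole "unit test": call with a real Class literal, inspect the result.
    static boolean returnsAllDeclaredFields() {
        MyPojo result = functionDeveloper(MyGenericClass.class);
        return result.fieldNames.contains("id") && result.fieldNames.contains("name");
    }
}
```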