Bioconductor support forum thread (https://support.bioconductor.org/p/9143010/), 2022-07-01:

Question (Xuebing, United States):

How can I download all bigwig files for TCGA samples? I noticed an answer was provided previously for recount2 but it doesn't work for recount3:

Recount2 Bigwigs for TCGA

Thanks!

Tags: recount3, recount, TCGA

Answer:

Hi,

Thank you for your interest in recount3 and recount2. The easiest option in recount3 to find the URLs for BigWig files is to use the recount3::create_rse() function, which will include a colData() column called BigWigURL, as shown at https://github.com/LieberInstitute/recount3/issues/21#issuecomment-1074156958. Here's a short extract:

    as.data.frame(colData(rse)[1, c("external_id", "study", "BigWigURL")])
    #>                                                 external_id study
    #> GTEX-T6MN-0011-R1A-SM-32QOY.1 GTEX-T6MN-0011-R1A-SM-32QOY.1 BRAIN
    #>                                                             BigWigURL
    #> GTEX-T6MN-0011-R1A-SM-32QOY.1 http://duffel.rail.bio/recount3/human/data_sources/gtex/base_sums/IN/BRAIN/OY/gtex.base_sums.BRAIN_GTEX-T6MN-0011-R1A-SM-32QOY.1.ALL.bw

You could also use recount3::locate_url(); however, as noted at https://github.com/LieberInstitute/recount3/issues/21#issuecomment-1074156958, that function doesn't guarantee that the result is a valid URL due to programmatic reasons on the data-host side (IDIES at JHU).

Using recount3::create_rse() at the gene level might be a bit too much data to download for a large project such as TCGA (which is split by tissue, as is GTEx), so you might prefer to dive into the internal code of recount3::create_rse_manual() and re-use it (https://github.com/LieberInstitute/recount3/blob/6eb14b844062ebdf45fe5a356577e3ea0483c97e/R/create_rse_manual.R#L156-L165) after downloading the TCGA metadata files.

As you can see, there are a few different options, with different degrees of complexity.

Once you have located the URLs, you can use recount3::file_retrieve(), which internally uses BiocFileCache::bfcrpath() (https://github.com/LieberInstitute/recount3/blob/6eb14b844062ebdf45fe5a356577e3ea0483c97e/R/file_retrieve.R#L80), or download them through some other way, including recount::download_retry(), which internally uses downloader::download() (https://github.com/leekgroup/recount/blob/10f29f9d44906f798aa3a7655ae40ac269c36ae5/R/download_retry.R#L39).

Best, Leo
\section{Introduction}
Recently, the search for low-amplitude signals in radial velocity time-series
has reached the point where detection of Doppler signals at the level of 1m/s or
less is technically possible \citep{pepe:2011, tuomi:2013}. Along with this
rise in precision have come claims, and counter-claims, of the detection of
planetary systems containing very low-mass planets (e.g. $\alpha$~Centauri,
\citealp{dumusque:2012}, \citealp{hatzes:2013}; HD~41248
\citealp{jenkins:2013b,jenkins:2014}, \citealp{santos:2014}; GJ~581
\citealp{mayor:gj581:2009}, \citealp{robertson:2014a}, \citealp{anglada:2015a}).
Given the sensitive nature of these works, it is clear that more work must be done
to develop a rigorous framework for what constitutes a Doppler signal detection
and what does not.
It is known that stellar activity can induce spurious signals in precision
Doppler measurements \citep[e.g.][]{queloz:2001}. In particular, variability in
chromospheric activity indices is thought to originate from localized active
regions on stars. Changes in the local properties of the visible surface of
stars can induce apparent Doppler shifts that do not necessarily average out
over time, producing apparent signals that might be mistaken for planets
\citep[e.g.][]{hatzes:2002,bonfils:2007}. Theoretical arguments and numerical simulations
suggest that variability in some of these indices should linearly correlate with
apparent radial velocity shifts \citep{boisse:2011,dumusque:2014}.
\citet{robertson:2014a} exploited this expected linear correlation to propose
that the planet candidate GJ~581d was an artifact of stellar variability, by showing
correlations of activity indices with residual time-series (i.e., with all other
signals removed). Since residual time-series are not representative of the
original data, these conclusions were challenged by \citet{anglada:2015a}. In
response, \citet{robertson:2015a} admitted inconsistencies in their statistical
analysis but claimed that their interpretation of the data was physically more
sound. Along these lines, in \citet{robertson:2014b} and
\citet{robertson:2015b} (RM15 hereafter) similar qualitative arguments were
provided to argue that several super-Earth mass planet candidates orbiting
nearby M-dwarf stars were likely to be spurious. In this paper we show that the
claims in \citet{robertson:2015b} are unsupported by a global fit to the data,
so such results should be regarded as inconclusive.
The data used in this paper come directly from RM15, in order to replicate their setup as
closely as possible. The datasets in RM15 contain measurements obtained with the
HARPS and HIRES spectrometers. These differ from the ones in
\citet{anglada:2014a} in that RM15 includes additional spectroscopic
indices and, additionally, three HARPS epochs (out of 95) were removed. We also
include the analysis of historical V-magnitude photometric measurements obtained
by the ASAS project \citep{asas}. A more detailed description of the
measurements is given in both papers and references therein. We start by
reviewing possible periodic signals in the activity indices presented by RM15 in
Section \ref{sec:activity}. Section \ref{sec:model} introduces a minimal Doppler
model to include linear correlation terms caused by activity. To remove
ambiguities about the framework used, we perform the analyses in a frequentist
(Section \ref{sec:likelihood}) and a Bayesian framework (Section
\ref{sec:bayesian}); both frameworks provide a consistent picture of no
significant correlations. Section \ref{sec:nocorrelations} discusses the discrepancy between
our results and the analysis presented in RM15. A summary and concluding remarks
are given in Section \ref{sec:conclusions}.
\section{Possible signals in activity indices and ASAS photometry}
\label{sec:activity}
\begin{figure}[]
\center
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_bis.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_fwhm.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_ha.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_na.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_s.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_night_asas.eps}
\caption{Likelihood periodograms of the activity indices in
RM15 (from top to bottom; BIS, FWHM, $I_\alpha$, Na D,
S-index) and ASAS V band photometry (bottom). With the
exception of BIS, variability above the 1\% FAP threshold
(horizontal line) is detected in all indices. The most relevant
candidate periods in each activity index are flagged with
arrows. The similarities between the periodograms of different
indices (a long-period trend, and possible signals between 80
and 300 days) suggest similar, not strictly periodic stellar
variability on these time-scales, but do not point to a
clearly preferred signal. }\label{fig:actper}
\end{figure}
We perform a likelihood periodogram analysis of the activity indices as provided
by RM15 to verify the claim of a \textit{clear} rotation period at 143 days.
Likelihood-ratio periodograms solve for all the free parameters of the model
simultaneously as a signal is injected over a list of trial periods (x-axis).
Such periodograms are a generalization of Lomb-Scargle periodograms
\citep{scargle:1982} to account for models more complex than a single sinusoid
\citep{baluev:2009}, including parameters of the noise model (e.g., extra white
noise for the activity data). The signal producing the highest improvement of
the maximum log-likelihood statistic (y-axis) is the preferred one, and its
significance can then be assessed using the recipes introduced by
\citet{baluev:2009,baluev:2013}, which produce analytic estimates of the false
alarm probability (FAP) of a detection. As a general rule, signals above a FAP
threshold of 1\% can be considered significant, but a more conservative
threshold of 0.1\% is sometimes used. We show both thresholds in all the
periodograms throughout the paper. In the case of the activity data, we model the
signal with one constant (equivalent to the mean of the time-series),
one sinusoid (with phase and amplitude as free parameters), and an extra white-noise
parameter added in quadrature to the nominal uncertainties of each measurement.
As mentioned by RM15, nights with several measurements might be overweighted and
bias the signal searches. To account for this, we present the analysis using
nightly averages only (45 independent epochs). Our conclusions, however, do not
differ substantially if all data points are included.
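The following sketch writes out this model explicitly; the $(A, B)$ parameterization of the sinusoid is one common choice (equivalent to a free amplitude and phase) and the notation is ours, not necessarily that of RM15:
\begin{eqnarray}
I(t_i) &=& \gamma + A \cos \frac{2\pi t_i}{P} + B \sin \frac{2\pi t_i}{P}, \nonumber \\
\ln L &=& -\frac{1}{2} \sum_{i} \left[ \ln 2\pi \left(\sigma_i^2 + \sigma_J^2 \right) + \frac{\left( y_i - I(t_i) \right)^2}{\sigma_i^2 + \sigma_J^2} \right] , \nonumber
\end{eqnarray}
\noindent where $y_i$ are the measured index values, $\gamma$ is the constant offset, $\sigma_i$ are the nominal uncertainties, and $\sigma_J$ is the extra white-noise parameter added in quadrature. The maximum of $\ln L$ over all free parameters at each trial period $P$ gives the y-axis of the periodograms in Fig.~\ref{fig:actper}.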
The activity indices provided in RM15 include BIS, FWHM, I$_\alpha$, Na D,
S-index. The first two are measurements of the shape of the mean spectral line
(BIS and FWHM represent asymmetry and width respectively), which can potentially
trace activity-induced features on the stellar photosphere. The last three are
measurements of the chromospheric emission of the star at the H$_\alpha$
(I$_\alpha$), Sodium D$_1$ and D$_2$ lines (Na D), and Calcium H+K lines
(S-index). Chromospheric indices are also thought to trace the presence of
active regions on the star that might be responsible for apparent Doppler
shifts. More precise definitions and possible connections to activity-induced
signals are given in RM15 and references therein. The results of signal
searches on the five indices used by RM15 (plus available V band photometry from
the ASAS survey) are summarized in Figure \ref{fig:actper}.
No significant periodicity is detectable in BIS. Several other indices show
multiple peaks above the 1\% and 0.1\% FAP thresholds (horizontal dashed and
solid lines, respectively). However, several of the peaks have similar $\Delta
\ln L$ values, meaning that they fit the data similarly well. The only
exception is the long-period trend (marked as 5000+ days in
Fig~\ref{fig:actper}), which in some cases produces a much larger improvement of
the likelihood (e.g., FWHM and $I_\alpha$; second and third panels from the top,
respectively). Although the periodograms in RM15 also show a likely long-period
trend in several indices, this evidence was disregarded as irrelevant in RM15
on generic grounds that are not supported by the literature: most stars
in the M-dwarf sub-sample of the HARPS-GTO program (of which Kapteyn's star is
part) were found by \citet{dasilva:2012} to show chromospheric variability in
similar indices over long time-scales.
In summary, signals at 5000+, 1100, 270, 135 and 88 days would explain the
activity data equally well (or even better, depending on the index). Given this
ambiguity in the preferred periods of the various activity indices, the choice made
by RM15 of a rotation period at 143 days seems rather arbitrary.
\section{Search for correlations in the Doppler data}
\subsection{Model}\label{sec:model}
The next step in RM15's analysis was to assess the significance of linear
correlations of the Doppler signals with the activity indices. We implement
linear correlations between the radial velocities and the activity data
using the following model
\begin{eqnarray}
v(t) = M(\vec{\theta}, t) + \sum_i c_i\,I_i ,
\label{eq:model} \end{eqnarray}
\noindent where $M$ contains all the Doppler variability
modeled by Keplerian signals, and $\vec{\theta}$ lists the
usual parameters used in RV modelling \citep[see][as an
example]{tuomi:2013}. Activity measurements obtained
simultaneously with $v(t)$ are denoted $I_i$, where the index $i$ runs over all
the activity indices under consideration. As discussed before,
these indices are BIS, FWHM, I$_\alpha$, Na D, and the S-index.
Given a model, one can search for the combination of parameters
that optimize a figure of merit (global optimization), and then
decide whether the inclusion of a correlation term or a planet
is warranted given the improvement of the reference statistic.
As long as global optimization is applied (all parameters
adjusted simultaneously), there are various ways to assess
significance of planetary signals or correlations using either
\textit{Bayesian} or \textit{frequentist} approaches
\citep{anglada:2012c}. A Bayesian approach consists of
assessing which model has the highest probability given the
data. Frequentist confidence tests evaluate the chances of
obtaining an improvement of a statistic by an unfortunate
combination of random errors. While RM15 show some apparent
correlations when plotting one Doppler signal against some
of their activity data, the significance of those correlations
was never established using model comparison. The next two sections
show that the correlations claimed in RM15 are not significant
when a global fit to the data is obtained in either framework.
\subsection{Frequentist analysis}
\label{sec:likelihood}
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_allcirc_c.eps}
\includegraphics[width=0.45\textwidth, clip]{likelihood_periodogram_allcirc.eps}
\caption{Likelihood-ratio periodograms for first (top,
Kapteyn's c, k=1 planet) and second Doppler signals (bottom, Kapteyn's
b, k=2 planets), without linear correlations (gray) and
including linear correlations with the $I_{\alpha}$ index
(connected black dots). The peaks for the Doppler signals
remain above the 1\% and 0.1\% FAP thresholds in both cases.}
\label{fig:periodograms}
\end{figure}
In RM15, the strongest apparent correlation was reported to be in the
chromospheric flux as measured by their $I_\alpha$ index. In
Fig.~\ref{fig:periodograms} we present likelihood ratio periodograms of the
combined HARPS and HIRES data (each data-set has its own linear correlation
coefficient as a free parameter). As shown in Fig.~\ref{fig:periodograms}, the
significance of both signals (120 and 48.6 days) remains well above the 0.1\% FAP
threshold, even when linear correlations are included in the model.
If linear correlations could explain the data better, adding a Keplerian signal
would not improve the fit substantially, and its peak would be suppressed below
the threshold. A similar result is obtained using the other activity indices from
RM15 (omitted here for brevity). In summary, the likelihood analysis indicates
that the linear correlation model cannot account for the presence of either Doppler
signal.
\subsection{Bayesian analysis}
\label{sec:bayesian}
\begin{figure*}[htb]
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kc01.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kc02.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kc03.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kc04.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kc05.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kb01.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kb02.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kb03.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kb04.ps}
\includegraphics[angle=270, width=0.195\textwidth]{rv_GJ191_02_mc_co_Kb05.ps}
\caption{Posterior densities and equiprobability contours of the semi-amplitudes
of the planet candidates $K_c$ (top) and $K_b$ (bottom) against the linear
correlation terms defined in the text (x-axis). The contours contain 50\%, 95\%,
and 99\% of the probability density, respectively. The 3$\sigma$ and 5$\sigma$
intervals of the distributions are shown for $K_b$ and $K_c$ to demonstrate how
significantly $K_b$ and $K_c$ differ from $0$. On the other hand, all $c_i$ are
found to be broadly consistent with $0$.}\label{fig:distributions}
\end{figure*}
In this section we perform a Bayesian analysis to evaluate the significance of
correlations of the RV data with activity indices again assuming the linear
model in Eq.~\ref{eq:model}. As before, we use exactly the values provided in
RM15 for simplicity of the discussion. All linear correlation terms ($c_1$
corresponds to HARPS BIS; $c_2$ to HARPS FWHM; $c_3$ to HARPS I$_\alpha$; $c_4$
to HARPS Na D, and $c_5$ to the HARPS S-index) were tested at the same time by
simultaneously including them all as free parameters. As a figure of merit for
model comparison, we obtained the integrated likelihoods of models with and
without signals and linear correlation terms. These integrated likelihoods
(sometimes called \textit{evidences}, $E$) were calculated using the priors
discussed in \citet{tuomi:2013}, with uniform priors for the parameters $c_{i}$.
The algorithm used for the estimation of the integral is based on a mixture of
Markov Chain Monte Carlo samples from both the posterior and prior
\citep{newton:raftery:1994}.
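For reference, the integrated likelihood (evidence) of a model $\mathcal{M}_k$ with parameter vector $\vec{\theta}$ and prior density $\pi(\vec{\theta})$ is defined as (standard definition; the notation here is ours):
\begin{eqnarray}
E_k = \int L(\mathrm{data} \,|\, \vec{\theta}, \mathcal{M}_k) \, \pi(\vec{\theta}) \, d\vec{\theta} , \nonumber
\end{eqnarray}
\noindent and the ratio $E_k / E_{k-1}$ between models with $k$ and $k-1$ Keplerian signals is the quantity reported in Table \ref{tab:evidences}.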
Fig.~\ref{fig:distributions} illustrates the posterior
densities of each correlation coefficient c$_i$ against the $K$
semi-amplitudes of the signals at 48.6 (Kapteyn's b) and 120
days (Kapteyn's c). The posterior densities were sampled using
the adaptive-Metropolis posterior sampling algorithm
\citep{haario:2001}. Two features would be expected for a
radial velocity signal traced by an activity index.
Firstly, the posterior densities in
Fig.~\ref{fig:distributions} would show a tilted elliptical
shape and the value of the corresponding $c_i$ would be
significantly different from $0$; secondly, $K$ would be
consistent with $0$, in the sense that the 95\% (or 99\%)
equiprobability contours would overlap with zero. Some of the
plots show mild hints of correlation (tilted ellipses),
but all distributions for the $c_i$ are broadly consistent with
$0$. In contrast, the expected value of the
semi-amplitude of Kapteyn's b is distinct from $0$ at a $\sim$
5$\sigma$ level (even higher for Kapteyn's c), where $\sigma$
is the standard deviation of the posterior density of each
$K$ (see Fig~\ref{fig:distributions}). The reason for the
apparent contradiction with the claims in RM15 is explained in
the next section.
Table \ref{tab:evidences} summarizes the model probabilities with linear
correlations and planet signals included. The evidence ratios between models
with $k$ and $k-1$ signals remain well above any reasonable significance
threshold (e.g., model probabilities larger than the factors of 150--1000 usually
required to claim a confident detection). The models including linear
correlations (right) have slightly better integrated probabilities than those
without (left), but the improvement is only a factor of $\sim$ 12 when comparing
the models with $k=2$. This negligible level of significance of correlated
variability is again consistent with the confidence level contours of
Fig.~\ref{fig:distributions}, which imply that all $c_i$ are compatible with
$0$.
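As a consistency check of the factor quoted above (a simple calculation using the $\ln E$ values listed in Table \ref{tab:evidences}; the superscripts distinguishing the two $k=2$ columns are our notation):
\begin{eqnarray}
\left| \ln E_{2}^{\rm corr} - \ln E_{2}^{\rm Kep} \right| = \left| -241.3 - (-238.8) \right| = 2.5, \qquad e^{2.5} \approx 12 , \nonumber
\end{eqnarray}
\noindent which is the factor of $\sim$ 12 quoted above, far below the 150--1000 range required for a confident detection of correlated variability.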
\begin{deluxetable}{c|cc|cc}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablewidth{0pt}
\tablecaption{Natural logarithms of the integrated model probabilities $E$ and their ratios.
\label{tab:evidences}}
\tablehead{ Number of Planets
& \multicolumn{2}{|l|}{Keplerian only} & \multicolumn{2}{|l}{Keplerian + correlations}\\
k& $\ln E_k$ & $\ln (E_{k}/E_{k-1})$ & $\ln E_k$ & $\ln (E_{k}/E_{k-1})$
}
\startdata
0 & -277.7 & - & -273.6 & - \\
1 & -260.1 & +17.6 & -254.9 & +18.7 \\
2 & -238.8 & +21.3 & -241.3 & +13.6$^{\dagger}$
\enddata
\vspace{-0.8cm}
\tablecomments{$^\dagger$As a reference, a $\ln (E_{k}/E_{k-1})$ of +13.6 indicates that the
model with $k$ planets has a higher probability than a model with $k-1$ planets
by a factor $e^{+13.6} = 8.1 \times 10^5$.}
\end{deluxetable}
\section{Origin of the correlation proposed by RM15}\label{sec:nocorrelations}
\begin{figure}[htb]
\center
\includegraphics[width=0.45\textwidth, clip]{correlation_plot.eps}
\caption{Correlation between the I$_\alpha$ index and the
RVs once all signals except Kapteyn's b have been removed
from the data. The thin violet line is the maximum-likelihood
fit to the data that we obtained, and the thick violet lines represent
alternative fits within 1$\sigma$ of the obtained
correlation coefficient. The fit proposed by RM15 is represented
by a red line, and the 1$\sigma$ representations of their relation are
illustrated as dotted red lines. }\label{fig:corplot}
\end{figure}
There is a fundamental difference between the procedure we have used here to assess
the presence of correlations and the one used by RM15: while we used a
global fit to the data to constrain the coefficients, RM15 used the predictions
of the two-planet model (with no errors) to perform their analysis. Specifically,
RM15's Figure 3 (top-central panel) shows $I_\alpha$ against the Doppler
\textit{model} of planet $b$. In our Figure \ref{fig:corplot}, we show the same
plot but present the radial velocity measurements after removing all signals
except planet b. The linear correlation relation derived from our Bayesian analysis
in the previous section is presented in violet. Models with values of
the correlation coefficients at the $\pm$ 1$\sigma$ intervals are also represented
as thick violet lines, which visually illustrate the large uncertainty in
those coefficients. The best correlation relation proposed by RM15 is shown as a red line, and red
dotted lines show values of the coefficient at their reported $\pm$ 1$\sigma$
values. While the linear correlation relation reported by RM15 is well within our
1$\sigma$ interval, their reported uncertainties are severely underestimated,
producing the spurious appearance of a significant correlation. This is a direct
consequence of using the RV model predictions (which carry no uncertainties) instead of
the actual data when testing for the existence of potential correlations. We note,
for example, that even the Doppler model contains uncertainties, which were
ignored in RM15.
\section{Discussion}\label{sec:conclusions}
We have shown that linear correlations of the RVs of Kapteyn's star with activity
indicators in the currently existing data are insignificant when a global
fit to the data is obtained. This stands in contrast to the claims made in
RM15, which were based on a number of approximate physical assumptions and the
implementation of \textit{ad hoc} procedures. We also want to stress that the
interpretation of the 143 d periodicity found by RM15 in several indicators as the
rotation period seems premature: alternative periods of 88 d, 135 d or 270 d are
similarly likely, and long-term activity trends cannot be ruled out either. Even
if, for the moment, we assume that the star rotates with a period of 143 d, it is not
straightforward to use this as an argument against a Doppler signal close to
$P_{rot}/3$, because there is no activity signal at $P_{rot}/2$ or $P_{rot}/3$.
Given all these caveats, we consider that the current Doppler data of Kapteyn's
star are most easily explained by the presence of two planets, as proposed in
\citet{anglada:2014a}, rather than by activity-induced variability as proposed by
RM15.
A clear distinction must be made between the statistical significance of RV
signals and the physical presence of planets (together with the merit of their
detection or falsification). We advocate for comprehensive scientific
discussions about the former instead of rushing into premature and unsupported
statements about the latter. We conclude by emphasizing that the intention of
this paper is not to rescue the planetary status of Kapteyn's b or any other
planet detection, but to stress the importance of objective global analysis
techniques in serious scientific discussions.
\bibliographystyle{apj}
# nLab 2-poset of partial maps

## Definition

Given a dagger 2-poset $A$, the 2-poset of partial maps $Map_\bot(A)$ is the sub-2-poset whose objects are the objects of $A$ and whose morphisms are the functional morphisms of $A$.

## Examples

* For the dagger 2-poset Rel of sets and relations, the 2-poset of partial maps $Map_\bot(Rel)$ is equivalent to the category of sets and partial functions $Set_\bot$.

(Source: https://ncatlab.org/nlab/show/2-poset+of+partial+maps, 2022-11-30)
/* Create your own CSS rules */
.persoo--topProducts__tabs {
width:100%;
display:inline-block;
border-bottom: 1px solid #dfdfdf;
}
.persoo--topProducts__tabs a {
float:left;
margin-right:5px;
display:inline-block;
cursor: pointer;
padding:10px;
line-height:1em;
text-decoration:none;
}
.persoo--topProducts__tabs a {
margin-top: 1px;
color: #000000;
background: #EEEEEE;
}
.persoo--topProducts__tabs a[data-active="true"] {
font-weight:bold;
margin-top: 0;
color: #cb1040;
background: #FFFFFF;
margin-bottom:-1px;
border: 1px solid #dfdfdf;
border-bottom-color: #EEEEEE;
}
.persoo--topProducts__tabs a:hover {
background: #f7f7f7;
}
.persoo--topProducts__tabs a[data-active="true"]:hover {
background: #FFFFFF;
}
.persoo--topProducts__tabs a:last-of-type {
margin-right:0;
}
.persoo--topProducts__list {
display:none;
}
.persoo--topProducts__list[data-active="true"] {
display:block;
}
.persoo--topProducts__list__moreProducts {
display: block;
height: 0;
width: 100%;
overflow: hidden;
-webkit-transition: all 0.3s;
-moz-transition: all 0.3s;
-o-transition: all 0.3s;
transition: all 0.3s;
}
.persoo--topProducts__list__moreProducts__inner {
width:100%;
}
/* Item Box Spacing */
.persoo--topProducts__list__item {
display: table;
margin: 8px 0 0 0;
padding: 5px 0;
width: 100%;
min-height: 60px;
border: 1px solid #dfdfdf;
clear:both;
background-color: #FFFFFF;
}
.persoo--topProducts__showMore {
margin:8px 0 0 0;
border-top: 1px solid #dfdfdf;
}
.persoo--topProducts__list__item__number {
display: table-cell;
vertical-align: middle;
width: 35px;
padding: 19px 0;
}
.persoo--topProducts__list__item__number div {
float: left;
width: 2em;
height: 1.8em;
font-size: 0.85em;
line-height: 1.8em;
letter-spacing: 0;
text-align: center;
-webkit-border-radius: 0 4px 4px 0;
-moz-border-radius: 0 4px 4px 0;
border-radius: 0 4px 4px 0;
color: #000000;
background-color: #dfdfdf;
}
.persoo--topProducts__list__item__image {
display: table-cell;
vertical-align: middle;
padding: 3px 5px;
text-align: center;
width: 54px;
}
.persoo--topProducts__list__item__image img {
width: auto;
height: auto;
max-width: 54px;
max-height: 54px;
}
.persoo--topProducts__list__item__textWrapper {
display: table-cell;
vertical-align: middle;
margin: 0 0;
padding: 3px 0;
text-align: left;
}
.persoo--topProducts__list__item__textWrapper2 {
display: table;
width: 100%;
}
.persoo--topProducts__list__item__text {
display: table-cell;
vertical-align: middle;
margin: 0;
margin-right: 120px;
padding: 0;
text-align: left;
}
.persoo--topProducts__list__item__text__title a {
font-weight: bold;
max-width: 100%;
height: 1.1em;
line-height: 1.1em;
overflow: hidden;
white-space: nowrap;
display: inline-block;
text-decoration: none;
-ms-text-overflow: ellipsis;
-o-text-overflow: ellipsis;
text-overflow: ellipsis;
color: #cb1040;
}
.persoo--topProducts__list__item__text__title a:hover {
text-decoration: underline;
opacity: 0.8;
filter: brightness(50%);
}
.persoo--topProducts__list__item__text__description {
line-height: 1.1em;
height: 1.1em; /* Multiply by description line count */
overflow: hidden;
margin: 0;
padding: 0;
display: inline-block;
-ms-text-overflow: ellipsis;
-o-text-overflow: ellipsis;
text-overflow: ellipsis;
font-size: smaller;
color: #333333;
}
.persoo--topProducts__list__item__price {
display: table-cell;
vertical-align: middle;
width: 100px;
text-align: right;
padding: 3px 8px 3px 0;
}
.persoo--topProducts__list__item__price {
font-weight: bold;
}
.persoo--topProducts__list__item__price small {
font-size:0.8em;
}
.persoo--topProducts__list__item__price > del {
font-size: smaller;
float: left;
width: 100%;
font-weight: bold;
vertical-align: sub;
line-height: 1.1em;
color: #cccccc;
}
.persoo--topProducts__list__item__price > div {
float:left;
width:100%;
vertical-align:sub;
color: #cb1040;
}
.persoo--topProducts__list__item__text__price > .persoo-clear {
float: none;
clear: both;
}
.persoo--topProducts__list__item:hover .persoo--topProducts__list__item__text__description a {
color: #cb1040;
}
.persoo--topProducts__showMore {
width:100%;
display:block!important;
text-align:center;
}
.persoo--topProducts__showMore__toggle {
width:90px;
height:30px;
overflow:hidden;
display:inline-block;
cursor: pointer;
background-color: #cb1040;
-webkit-border-bottom-right-radius: 7px;
-webkit-border-bottom-left-radius: 7px;
border-bottom-right-radius: 7px;
border-bottom-left-radius: 7px;
}
.persoo--topProducts__showMore__toggle__icon {
width: 100%;
height: 100%;
}
.persoo--topProducts__showMore__toggle[data-open="false"] .persoo--topProducts__showMore__toggle__icon {
/* change color and size in decoded SVG. Note: svg must be encoded because of IE. */
background: url("data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Axlink%3D%22http%3A%2F%2Fwww.w3.org%2F1999%2Fxlink%22%20version%3D%221.1%22%20style%3D%27fill%3Awhite%3B%20stroke%3Anone%27%20width%3D%2224%22%20height%3D%2224%22%20viewBox%3D%220%200%2024%2024%22%3E%3Cpath%20d%3D%22M7.41%2C8.58L12%2C13.17L16.59%2C8.58L18%2C10L12%2C16L6%2C10L7.41%2C8.58Z%22%20fill%3D%27%23fff%27%20fill-rule%3D%27evenodd%27%2F%3E%3C%2Fsvg%3E") no-repeat center center / contain;
}
.persoo--topProducts__showMore__toggle[data-open="true"] .persoo--topProducts__showMore__toggle__icon {
/* change color and size in decoded SVG. Note: svg must be encoded because of IE. */
background: url("data:image/svg+xml;charset=utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20xmlns%3Axlink%3D%22http%3A%2F%2Fwww.w3.org%2F1999%2Fxlink%22%20version%3D%221.1%22%20style%3D%27fill%3Awhite%3B%20stroke%3Anone%27%20width%3D%2224%22%20height%3D%2224%22%20viewBox%3D%220%200%2024%2024%22%3E%3Cpath%20d%3D%22M7.41%2C15.41L12%2C10.83L16.59%2C15.41L18%2C14L12%2C8L6%2C14L7.41%2C15.41Z%22%20fill%3D%27%23fff%27%20fill-rule%3D%27evenodd%27%20%2F%3E%3C%2Fsvg%3E") no-repeat center center / contain;
}
/* move priceBox from the top right corner to below content */
@media (max-width:964px){
.persoo--topProducts__list__item__text {
display: block;
margin-right: 5px;
}
.persoo--topProducts__list__item__price {
display: block;
right: auto;
width: auto;
margin: 0 5px 0 0;
padding: 0 0;
text-align:left;
}
.persoo--topProducts__list__item__price del {
width:auto;
padding-right:5px;
}
.persoo--topProducts__list__item__price div {
width:auto;
}
}
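The toggle icons above embed percent-encoded SVG in CSS data URIs (the comments note that the SVG must be encoded for IE compatibility). A minimal sketch of producing such an encoded `url()` value, assuming Python's standard library:

```python
from urllib.parse import quote

# A chevron icon similar to the ones used above (illustrative markup).
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" '
    'viewBox="0 0 24 24">'
    '<path d="M7.41,8.58L12,13.17L16.59,8.58L18,10L12,16L6,10L7.41,8.58Z" '
    "fill='#fff'/></svg>"
)

# Percent-encode every reserved character so the data URI is safe in CSS.
encoded = quote(svg, safe="")
css_value = f'url("data:image/svg+xml;charset=utf8,{encoded}")'
```

Changing the fill color or size is then a matter of editing the decoded SVG and re-encoding it.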
\section{Introduction}
Solar polar plumes are bright rays located in coronal holes or polar regions. Their apparent heights depend on the wavelength of observation. In the ultraviolet (e.g. the STEREO EUVI telescope), solar polar plumes are visible from the base up to approximately 1.2 $R_s$. X-ray observations (e.g. {\it Hinode} XRT) mainly capture hot gas distributed on the solar surface and the bright points at the bases of polar plumes.
Because they are located in coronal hole regions, where the fast solar wind originates, solar polar plumes have been considered a possible source of the fast solar wind. However, evidence for this connection in terms of the underlying physical process is still lacking. \citet{rao07} showed that plumes have a higher electron density than interplumes, but that it approaches the interplume value with increasing height. \citet{gio97,koh95} also showed that the H I Ly$\alpha$ line is narrower in plumes than in interplumes, corresponding to a lower temperature. \citet{has97} found the same result as \citet{gio97} by analyzing different UV lines (the O VI $\lambda$1032 line width is lower by 10\%--15\%). \citet{wil98,you99} studied intensity ratios of UV spectral lines at low altitudes in the corona. They found a temperature that increases with height in the background corona (interplumes), but a similar temperature in the lower parts of plumes.
The geometry of polar plumes has been studied and debated for a long time. There are mainly two views regarding their shape: quasi-cylindrical plumes and curtain plumes. Curtain plumes are denser plasma sheets that appear as radial rays when observed edge-on; \citet{gab09} expanded this curtain-plume model with a network of microplumes. The quasi-cylindrical geometry has been more widely accepted, and many authors have investigated how plumes expand cylindrically. Some work has shown that the density structure of coronal holes or polar plumes follows a radial expansion \citep{woo99}, while others concluded a superradial expansion \citep{def97,fgu95}. However, a quantitative model of the plume shape is still lacking; this is what this paper investigates.
\section{Observations and Data analysis}
The STEREO observatory was launched on October 25, 2006. The data used in this paper to study the widths of plume cross sections were all taken by STEREO SECCHI at a wavelength of 171 \AA. I first assumed that plumes have an expanding cylindrical tube shape with a circular cross section. Observations show that the light intensity along the central axis of a plume decreases with altitude, and the intensity variation across a plume at a given height appears roughly as a Gaussian curve. A cross-section width, or diameter, was measured by the full width at half maximum (FWHM) at four different heights from the center of the solar disk: 1.04 $R_s$, 1.10 $R_s$, 1.16 $R_s$ and 1.20 $R_s$, for a total of 31 plumes. Some measurements at 1.20 $R_s$ were excluded because they were too fuzzy.
Because of limited statistics, a polynomial function with four parameters was assumed to approximate how the plume diameter varies with increasing height. These four unknown parameters were calculated analytically from the four measured average cross-section diameters. Note that this polynomial model only describes the lower part of plumes, specifically below the 1.2 $R_s$ normally observed in the EUV. The standard deviation of each calculated average diameter is also shown in Table 1. This standard deviation may be linked to the internal structure of plumes and cannot be treated purely as a statistical error. Figure 1 shows how a width was measured by the FWHM and where the plot of pixel value versus circular circumference was taken from the image. Figure \ref{fig:myfig3} shows the model curve together with the data points; since the model interpolates the four measurements, the data points lie exactly on the curve. The final model is the following, where $d$ is the plume diameter and $r$ is the height from the Sun's center:
\begin{equation}
d=20.5-55.5\,r+50.0\,r^2-14.9\,r^3
\end{equation}
\begin{deluxetable}{ccc}
\tablecolumns{3}
\tablewidth{0pc}
\tablecaption{Measured plume widths and their standard deviations}
\tablehead{\colhead{Plume height from the Sun center ($R_s$)} & \colhead{Average value ($R_s$)} & \colhead{Standard deviation ($R_s$)}}
\startdata
1.04 & 0.0614516 & 0.0171694\\
1.10 & 0.0703548 & 0.0213363\\
1.16 & 0.0841613 & 0.0242818\\
1.20 & 0.0865333 & 0.0200067
\enddata
\end{deluxetable}
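As a numerical cross-check of the analytic solution described above, the cubic through the four averaged measurements can be recovered with a short script (a sketch assuming NumPy; the coefficients quoted in the text are rounded, so the full-precision values differ slightly):

```python
import numpy as np

# Averaged plume diameters at four heights, both in units of R_s (Table 1)
r = np.array([1.04, 1.10, 1.16, 1.20])
d = np.array([0.0614516, 0.0703548, 0.0841613, 0.0865333])

# A cubic through four points is an exact interpolation; polyfit solves
# the corresponding 4x4 Vandermonde system (highest power first).
coeffs = np.polyfit(r, d, deg=3)

def diameter(height):
    """Model plume diameter d(r) for heights in roughly 1.0-1.2 R_s."""
    return np.polyval(coeffs, height)
```

Because the system is exactly determined, the fitted curve passes through all four data points by construction.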
\begin{figure}
\subfigure{
\includegraphics[height=60mm]{fig1.eps}\label{fig:myfig1}
}
\subfigure{
\includegraphics[height=60mm]{fig2.eps} \label{fig:myfig2}
}
\caption{The left panel shows the line on the image along which the data were taken; the right panel shows the fluctuation of pixel values across the plume cross section. \label{fig:myfig}}
\end{figure}
\begin{figure}
\includegraphics[height=60mm]{fig3.eps}\label{fig:myfig3}
\caption{The model result is the solid curve, with the data represented by crosses.}
\end{figure}
\section{Conclusion and Discussion}
The diameters of each plume at different heights were measured by hand in this paper. The sample contains just 31 plumes, balancing accuracy against the time required. With an automatic recognition program, a much larger sample could be analyzed to obtain better statistics.
\acknowledgments
I want to thank my previous advisor Jonathan Cirtain at MSFC for initiating this project and funding me for two semesters.
Great thanks to the faculty of the Physics \& Astronomy Department at The University of Alabama and to all my friends here. This work would never have been possible without all your help.
\clearpage
Fan Lü, Bo Tan and Jun Wu, "Univoque sets for real numbers", Fundamenta Mathematicae 227 (2014), 69-83. MSC: Primary 11K55. DOI: 10.4064/fm227-1-5.

Abstract: For $x\in (0,1)$, the univoque set for $x$, denoted $\mathcal{U}(x)$, is defined to be the set of $\beta \in (1,2)$ such that $x$ has only one representation of the form $x=x_{1}/\beta +x_{2}/\beta^{2}+\cdots$ with $x_{i}\in \{0,1\}$. We prove that for any $x\in (0,1)$, $\mathcal{U}(x)$ contains a sequence $\{\beta_{k}\}_{k\geq 1}$ increasing to $2$. Moreover, $\mathcal{U}(x)$ is a Lebesgue null set of Hausdorff dimension $1$; both $\mathcal{U}(x)$ and its closure $\overline{\mathcal{U}(x)}$ are nowhere dense.

Authors: Fan Lü, Bo Tan, Jun Wu (School of Mathematics and Statistics, Huazhong University of Science and Technology, 430074, Wuhan, P.R. China).
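The representations in question can be made concrete with a greedy-expansion sketch (my own illustration, not part of the paper): for $1<\beta<2$ and $x\in(0,1)$ the greedy algorithm produces one valid digit sequence, while the univoque parameters $\beta$ are exactly those for which no other digit sequence represents $x$.

```python
def greedy_digits(x, beta, n=50):
    """Greedy {0,1}-digit expansion of x in base beta (1 < beta < 2).

    Returns digits d_1..d_n with x ~ sum(d_k / beta**k); for typical x and
    beta there are many other valid digit sequences, which is what makes
    the univoque set a thin (null, nowhere dense) set.
    """
    digits = []
    y = x
    for _ in range(n):
        y *= beta
        d = 1 if y >= 1.0 else 0  # take a 1 whenever possible
        y -= d
        digits.append(d)
    return digits

def evaluate(digits, beta):
    """Reconstruct sum(d_k / beta**k) from a digit list."""
    return sum(d / beta ** (k + 1) for k, d in enumerate(digits))
```

The truncation error after n digits is at most beta**(-n), so the partial sums converge to x.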
How Steve Cohen's ownership could impact New York Mets
Sam Slackman
Photo by Twitter
Steve Cohen's wealth could help improve the Mets, as he is worth $14.6 billion
On Sept. 14, it was announced that Steve Cohen, a hedge fund manager from Long Island, New York, reached an agreement to purchase the New York Mets.
Cohen tried to acquire the team this past winter but the deal fell through. After months of speculation, Cohen looks to have become the majority owner after having been a minority owner for the past few years.
In a statement released by the Mets, Cohen expressed enthusiasm in becoming the majority owner. "I am excited to have reached an agreement with the Wilpon and Katz families to purchase the New York Mets," Cohen said.
The Wilpon family had been part of the ownership since 1980. The sale, at a value of $2.4 billion, is the most expensive for a North American professional sports team.
Cohen is valued at $14.6 billion, according to Forbes, which would make him the richest owner in the league. Washington Nationals owner Ted Lerner, worth $4.8 billion, would be the second richest. Given Cohen's wealth, New York could have the resources to improve.
New ownership could mean a new beginning for the Mets.
In 2015, they lost the World Series. In 2016, they made the National League Wild Card Game, only to lose 3-0 to the San Francisco Giants. Since then, New York has not made the playoffs and has finished in the top three in their division once.
Cohen brings wealth to the Mets and a fresh reputation.
A Broken World
Edited by the bestselling author of Birdsong and Dr Hope Wolf, this is an original and illuminating non-fiction anthology of writing on the First World War.
A lieutenant writes of digging through bodies that have the consistency of Camembert cheese; a mother sends flower seeds to her son at the Front, hoping that one day someone may see them grow; a nurse tends a man back to health knowing he will be court-martialled and shot as soon as he is fit.
In this extraordinarily powerful and diverse selection of diaries, letters and memories – many of which have never been published before – privates and officers, seamen and airmen, munitions workers and mothers, nurses and pacifists, prisoners-of-war and conscientious objectors appear alongside each other.
The war involved people from so many different backgrounds and countries and included here are, among others, British, German, Russian and Indian voices. Alongside testament from the many ordinary people whose lives were transformed by the events of 1914-18, there are extracts from names that have become synonymous with the war, such as Siegfried Sassoon and T.E. Lawrence. What unites them is a desire to express something of the horror, the loss, the confusion and the desire to help – or to protest.
A Broken World is an original collection of personal and defining moments that offer an unprecedented insight into the Great War as it was experienced and as it was remembered.
\section{Introduction}
\label{sec:intro}
The performance of charged particle track reconstruction in dense environments, such as the core of high $p_{T}$ jets, is important for many analyses and performance studies. Examples include $b$-tagging and boosted $\tau$ reconstruction, jet energy scale and mass determination, and analyses using jet substructure information. The leading source of systematic uncertainty is in many cases the uncertainty on track reconstruction efficiency. Thus, it is necessary to maintain a high tracking efficiency also inside of jets, and to determine the (in-)efficiency precisely.
Tracks are reconstructed from hit clusters in the inner detector. Clusters shared between multiple tracks are penalized during reconstruction, to ensure high quality tracks, as one of the tracks in this case is likely to be fake. However, in dense environments this is a disadvantage, since clusters from close-by tracks can naturally merge. To account for this, an artificial neural network is trained to identify and not penalize such merged clusters~\cite{tide,tidenn}. This greatly increases the percentage of correct associations of clusters to tracks at small track separations, and improves $b$-tagging and $\tau$ reconstruction performance. To determine the remaining inefficiency of tracking in dense environments, a data driven method is applied, where the fraction of lost tracks is determined from the energy deposition $dE/dx$ in the pixel detector (section~\ref{sec:method}).
\newcommand{ATLAS uses a right handed coordinate system, centered around the nominal interaction point (IP) at the center of the detector. The $x$ axis points towards the center of the LHC, the $y$ axis points upwards, and the $z$ axis in parallel to the proton beams. In the transverse plane, cylindrical coordinates $(r,\varphi)$ are used, where $\varphi$ is the polar angle around the $z$ axis. Instead of azimuthal angle $\theta$, pseudorapidity $\eta = - \ln \tan \theta/2$ is often used.}{ATLAS uses a right handed coordinate system, centered around the nominal interaction point (IP) at the center of the detector. The $x$ axis points towards the center of the LHC, the $y$ axis points upwards, and the $z$ axis in parallel to the proton beams. In the transverse plane, cylindrical coordinates $(r,\varphi)$ are used, where $\varphi$ is the polar angle around the $z$ axis. Instead of azimuthal angle $\theta$, pseudorapidity $\eta = - \ln \tan \theta/2$ is often used.}
The ATLAS~\cite{atlas} pixel detector is part of the inner detector system, together with the semiconductor tracker (SCT) and the transition radiation tracker (TRT). The pixel detector is built in a barrel and disc geometry, and has a pseudorapidity coverage\footnote{ATLAS uses a right handed coordinate system, centered around the nominal interaction point (IP) at the center of the detector. The $x$ axis points towards the center of the LHC, the $y$ axis points upwards, and the $z$ axis in parallel to the proton beams. In the transverse plane, cylindrical coordinates $(r,\varphi)$ are used, where $\varphi$ is the polar angle around the $z$ axis. Instead of azimuthal angle $\theta$, pseudorapidity $\eta = - \ln \tan \theta/2$ is often used.} of $|\eta| < 2.5$. It is built mostly from planar silicon pixel modules. The insertable B-Layer (IBL), which was added after a long shutdown (2013--2015), includes planar and 3D sensors~\cite{atlasibl}. There are four barrels, including the IBL, situated at $r=33.2$, $50.5$, $88.5$ and $122.5\;$mm. The forward region is instrumented with three disk pairs at $z = \pm 495$, $580$ and $650\;$mm. Sensor pixels are typically $50\;$\textmu{}m in transverse direction and $400\;$\textmu{}m in longitudinal direction, whereas pixels of the IBL are only $250\;$\textmu{}m longitudinally.
A measure of energy deposition $dE/dx$ is given by the pixel detector time-over-threshold (ToT). This is the time that a pulse, caused by a particle, spends over a given threshold, and is approximately proportional to the collected charge. In particle reconstruction, pixels are grouped by a clustering algorithm into clusters. The $dE/dx$ value of a cluster is determined by the total collected charge.
Since the magnetic field of the detector bends particle trajectories apart as they move outwards, nearby clusters are more likely to merge close to the interaction point, which would favor the innermost layer for this measurement. However, the IBL encodes ToT information in only 4 bits, whereas the next layer, the B-layer, uses 8 bits and provides a better ToT resolution. For that reason, the $dE/dx$ information from clusters in the B-layer is used in the following.
\section{Samples}
\label{sec:samples}
For this analysis, data samples recorded by the ATLAS detector in 2015 (Run II) of proton-proton collisions produced by the LHC at $\sqrt{s}=13\;$TeV were used, corresponding to an integrated luminosity of $2.8\;\mathrm{fb}^{-1}$. Events were selected passing single jet triggers, with a minimal jet $p_{T}$ threshold of 100\;GeV. The triggers were subject to a prescaling depending on the instantaneous luminosity and the energy of the jet triggered on. This suppresses low $p_{T}$ jets, while keeping all events including a jet with at least $p_{T} > 1\;$TeV, leading to a more uniform transverse momentum spectrum. Events were required to pass standard data quality requirements, and contain at least one reconstructed primary vertex, associated to at least three tracks.
Data is compared with a Monte Carlo simulation, generated by \textsc{Pythia}~8.186~\cite{pythia}. Generator parameters were set according to the A14 tune for parton showering and hadronization, and parton distribution functions (PDF) were taken from \textsc{NNPDF23LO}~\cite{nnpdf}. For comparison, samples were also generated using \textsc{Herwig++}~2.7.1~\cite{herwigpp} with the UEEE5 tune and the \textsc{CTEQ6L1} PDF set~\cite{cteq}, as well as \textsc{Sherpa}~2.1~\cite{sherpa} using \textsc{CT10} PDFs~\cite{ct10}. Events are digitized using a \textsc{GEANT4} based simulation of the ATLAS detector, and then reconstructed using the same reconstruction algorithms as used for data. Monte Carlo events are finally reweighted to match the number of events triggered on in data.
\section{Object Selection}
Jets used were seeded from topological clusters~\cite{topocell} and reconstructed by the anti-$k_T$ algorithm~\cite{antikt} with a cone radius of $R=0.4$. They were required to have a transverse momentum of $p_T^\mathrm{jet} \geq 200\;\mathrm{GeV}$ and lie in the region of $|\eta^\mathrm{jet}| < 2.5$. Jets have been calibrated to the hadronic jet energy scale using a calibration derived from Monte Carlo~\cite{jes}. It has been shown previously that simulated jet properties agree well with data \cite{jetprop}.
Tracks are reconstructed using an iterative algorithm. They are seeded using combined measurements from the silicon detectors, and reconstructed using a combinatorial Kalman filter together with a stringent ambiguity solver~\cite{kalman,kalmanapplication}. The following cuts are applied to tracks:
\pagebreak
\begin{itemize}
\item $p_T^\mathrm{trk} > 10\;\mathrm{GeV}$
\item $|\eta^\mathrm{trk}| < 1.2$
\item $|d_0^\mathrm{BL}| < 1.5\;\mathrm{mm}\,,$ where $d_0^\mathrm{BL}$ is the transverse impact parameter w.r.t. the beamline position
\item $|z_0^\mathrm{BL} \sin \theta| < 1.5\;\mathrm{mm}\,,$ where $z_0^\mathrm{BL}$ is the distance in $z$ direction between the track's point of closest approach and the primary vertex, and $\theta$ is the polar angle of the track at this point
\item Number of SCT hits $\geq 6$
\item Number of pixel holes\footnote{A pixel hole is defined as an expected hit, where the reconstructed track crosses the detector surface but no hit is recorded. Inactive parts such as sensor edges or disabled modules are excluded from the definition and do not create holes.} $\leq 1$
\end{itemize}
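For concreteness, the cuts above can be sketched as a selection predicate (a Python sketch with illustrative field names, not the actual ATLAS software interface):

```python
import math

def passes_track_selection(trk):
    """Apply the track cuts listed above; dictionary keys are illustrative."""
    return (
        trk["pt"] > 10.0                                      # GeV
        and abs(trk["eta"]) < 1.2
        and abs(trk["d0_bl"]) < 1.5                           # mm, w.r.t. beamline
        and abs(trk["z0_bl"] * math.sin(trk["theta"])) < 1.5  # mm, w.r.t. primary vertex
        and trk["n_sct_hits"] >= 6
        and trk["n_pix_holes"] <= 1
    )
```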
\section{Template Fit Method}
\label{sec:method}
The goal of this method is to determine the fraction of tracks lost due to merged clusters in jet cores. The energy deposition $dE/dx$ of pixel clusters follows a Landau distribution~\cite{PDG}, assuming the material is sufficiently thin and only single particles hit the clusters. The peak of the distribution is around the minimally ionizing particle (MIP) energy. In the case where two particles contribute to the same cluster, a second peak at $2\times$ the MIP energy is visible. A third weaker peak can appear for three particles hitting the same cluster.
When a cluster is assigned to only one reconstructed track (not multiply used), two situations are possible: The cluster was indeed hit by only one particle, or it was hit by one reconstructed particle, and another missed one. It is impossible to distinguish both situations on a per-cluster basis, but one can determine statistically how often each situation occurs, by comparing the two peaks in the $dE/dx$ distribution. From this, the probability that a track is lost due to merging can be computed.
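A toy version of this statistical decomposition can be written as a binned two-template fit (an illustrative least-squares sketch assuming NumPy/SciPy, not the actual ATLAS fitting code):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_lost_fraction(data_hist, single_tmpl, multi_tmpl):
    """Model the dE/dx histogram of once-used clusters as
    N * [(1 - f) * single + f * multi] and return the fitted
    multiple-track fraction f."""
    n = data_hist.sum()
    s = single_tmpl / single_tmpl.sum()   # normalized single-track template
    m = multi_tmpl / multi_tmpl.sum()     # normalized multiple-track template

    def chi2(f):
        model = n * ((1.0 - f) * s + f * m)
        err2 = np.where(data_hist > 0, data_hist, 1.0)  # Poisson variance
        return np.sum((data_hist - model) ** 2 / err2)

    res = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
    return res.x
```

The fitted fraction of the multiple-track template then gives the fraction of clusters where a second particle was missed.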
\begin{figure}
\centering
{\includegraphics[width=0.6\textwidth,trim=0 0 0 1.5cm]{figures/fig_01}}
\caption{\label{fig:def} Definition of template and data distributions. This and all further figures taken from~\cite{pubnote}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/fig_02a}
\includegraphics[width=0.45\textwidth]{figures/fig_02b}
\caption{
\label{fig:templates}
Left: Single and multiple track templates, derived from data. Right: Energy loss $dE/dx$ distribution from pixel clusters in jet cores, to be fit with the templates. From~\cite{pubnote}.}
\end{figure}
To determine the individual contributions of both cases, a template fit is used. The data to be fitted is the $dE/dx$ distribution of not multiply used clusters, in the core of jets (angular separation\footnote{Angular distance is given by $\Delta R = \sqrt{\Delta \varphi^2 + \Delta \eta^2}$} between center of jet and track $\Delta R(\mathrm{trk}, \mathrm{jet}) < 0.05$). To get a sample that is enriched in clusters hit by single tracks (single-track template), a selection outside of the jet core ($\Delta R(\mathrm{trk}, \mathrm{jet}) > 0.1$) is applied. An enriched multiple-track template is obtained by staying in the jet core, but using multiply-used clusters instead (that is, clusters that have multiple confirmed reconstructed tracks). The selection is summarized in figure~\ref{fig:def}. Data is separated into seven $p_{T}^\mathrm{jet}$ bins ranging from $200$--$1600\;$GeV. Since at high $p_T^\mathrm{jet}$ the available statistics is low, the templates taken at $p_T^\mathrm{jet}=200$--$400\;$GeV are used to fit all distributions in data. The single- and multiple-track templates, and a data distribution can be seen in figure~\ref{fig:templates}. To minimize the influence of clusters which were hit by three tracks, the fit was performed in a reduced region of $0.8$--$3.2 \;\mathrm{MeV}\,\mathrm{g}^{-1}\,\mathrm{cm}^{2}$ for MC, and $0.67$--$3.07 \;\mathrm{MeV}\,\mathrm{g}^{-1}\,\mathrm{cm}^{2}$ for data. The regions were chosen such that the fraction of all clusters they contain is the same in data as in MC.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/fig_05a}
\includegraphics[width=0.45\textwidth]{figures/fig_05b}
\caption{\label{fig:datafit}
Result of the template fit for data, for values of jet $p_T=200$--$400\;$GeV (left) and $p_T=1000$--$1200\;$GeV (right). From~\cite{pubnote}.}
\end{figure}
The fraction of lost tracks $F_\mathrm{lost}$ is given directly by the fit fraction of the multiple-track template. The fit result for data can be seen in figure~\ref{fig:datafit} for two different $p_T$ bins. Similar plots for Monte Carlo simulation are shown in~\cite{pubnote}. The fraction of lost tracks depending on $p_T^\mathrm{jet}$ is shown in figure~\ref{fig:floss}, for data and simulation. The loss fraction increases with $p_T^\mathrm{jet}$, and shows agreement between data and simulation over the whole range, within the systematic uncertainties outlined in section~\ref{sec:systs}. The discrepancy between the central values of data and simulation is approximately 25\%~\cite{pubnote}.
\section{Systematic Uncertainties}
\label{sec:systs}
The systematic uncertainties for simulation are dominated by Monte Carlo generator differences. This uncertainty has been evaluated by comparing the fit results from \textsc{Pythia8}, \textsc{Sherpa} and \textsc{Herwig++} samples. The relative systematic uncertainty ranges from 41\% at 200--400\;GeV to 5\% at 1000--1200\;GeV. For details, see~\cite{pubnote}.
An additional uncertainty comes from the choice of fit region. It was found that varying the upper edge of the region changes $F_\mathrm{lost}$, however only significantly in data. A systematic uncertainty of the size of the maximal change in $F_\mathrm{lost}$ has been applied in each $p_T^\mathrm{jet}$ bin, varying between 12\% and 25\%.
In data, an uncertainty results from using low $p_T^\mathrm{jet}$ templates to fit high $p_T^\mathrm{jet}$ data. A check has been carried out with a simulation of high statistics, and it was found that the fraction of clusters with three contributing tracks varies as a function of $p_T^\mathrm{jet}$. This leads to a small bias in the resulting value of $F_\mathrm{lost}$ which has been taken into account as a systematic uncertainty. The size of this uncertainty is between 11\% and 17\%.
\section{Conclusions}
The tracking inefficiency in jet cores has been determined using measurements of energy deposition in the ATLAS pixel detector, on $\sqrt{s} = 13\;$TeV LHC data taken in 2015. It was found that the fraction of lost tracks due to cluster merging is between $1\%$--$5\%$ for jet $p_T=200$--$1600\;$GeV. The data and simulation are found to agree within 25\% in the investigated jet $p_T$ range.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/fig_06}
\caption{\label{fig:floss} Fraction of lost tracks due to merged clusters, determined in data and simulation for varying values of $p_T^\mathrm{jet}$. Shaded areas show total uncertainty including systematic uncertainties as described in section~\ref{sec:systs}. From~\cite{pubnote}.}
\end{figure}
\acknowledgments
The author would like to thank the conference organizers for an interesting and enjoyable conference, the authors of the presented results~\cite{pubnote} for their excellent work, and the ATLAS collaboration for the opportunity to present it. The work is partly
supported by the National Natural Science Foundation of China (Grant
No. 11575200).
Could Gisborne District Council please advise our ratepayers and protesters if the planned/ considered drop of 1080 in our only water supply catchment for the city has been decided yes or no?
Our people are saying that GDC tell us all they have consulted with the public on many issues, and then do what they want. This is not proper process or democratic.
DoC is not dropping 1080 at Waingake and has never approached the council to do so.
The council is exploring approaches to pest control at Waingake. We will be consulting with the District Health Board, Ngai Tamanuhiri, QEII National Trusts and other stakeholders before any decision is made.
The council has not used 1080 since 2001 and would only consider its use if it was absolutely necessary and where sites are considered carefully under specific conditions. This would include where sites are inaccessible by foot and in the event of animals being infected with TB.
When you live in a rural community where agriculture is a major industry and bush is widespread, it's inevitable that the populace will be routinely exposed to powerful chemicals designed to control fauna and flora.
The answer is to go and live in a city where you can enjoy other pollutants which will also shorten your life.
There can be little wonder why so many people are queuing up to take a one-way trip to Mars in an effort escape Earth's toxic atmosphere. Problem is Mars has no atmosphere at all.
The 1080 label says to avoid contamination of any water supply. Breaching the MSDS represents an illegal and dangerous act.
111) Let Your Light Shine
Many people have given up on faith in Jesus Christ because of what they have seen in the lives of Christians. In these two passages from Narrative of the Life of Frederick Douglass, Douglass describes the importance of distinguishing between believing in Christ, and believing in those who say they follow Christ. The lives of Christians should be a positive witness to the Savior in whom they believe. But when people call themselves Christians and live wicked lives, we must focus not on their wickedness, but on Jesus. And we must live our lives in a way that would make others want to know more about our Savior. "Let your light shine before others," Jesus said, "that they may see your good deeds and glorify your Father in heaven." (See also yesterday's meditation #110)
I assert most unhesitatingly, that the religion of the south is a mere covering for the most horrid crimes, – a justifier of the most appalling barbarity, – a sanctifier of the most hateful frauds, – and a dark shelter under which the darkest, foulest, grossest, and most infernal deeds of the slaveholders find the strongest protection. Were I to be again reduced to the chains of slavery, I should regard being the slave of a religious master the greatest calamity that could befall me. For of all slaveholders with whom I have ever met, religious slaveholders are the worst. I have ever found them the meanest and basest, the most cruel and cowardly, of all others.
I find, since reading over the foregoing Narrative, that I have, in several instances, spoken in such a tone and manner, respecting religion, as may lead those unacquainted with my religious views to suppose me an opponent of all religion. To remove such misapprehension, I deem it proper to append the following brief explanation. What I have said respecting and against religion, I mean strictly to apply to the slaveholding religion of this land, and with no possible reference to Christianity proper; for, between the Christianity of this land, and the Christianity of Christ, I recognize the widest possible difference– so wide, that to receive the one as good, pure, and holy, is of necessity to reject the other as bad, corrupt, and wicked. To be the friend of the one, is of necessity to be the enemy of the other. I love the pure, peaceable, and impartial Christianity of Christ: I therefore hate the corrupt, slaveholding, women-whipping, cradle-plundering, partial and hypocritical Christianity of this land.
Titus 1:16 — They claim to know God, but by their actions they deny him. They are detestable, disobedient and unfit for doing anything good.
Matthew 5:14, 16 — (Jesus said), "You are the light of the world. A town built on a hill cannot be hidden. Neither do people light a lamp and put it under a bowl. Instead they put it on its stand, and it gives light to everyone in the house. In the same way, let your light shine before others, that they may see your good deeds and glorify your Father in heaven.
I Peter 2:12 — Live such good lives among the pagans that, though they accuse you of doing wrong, they may see your good deeds and glorify God on the day he visits us.
Dear Jesus, help me to spread Thy fragrance everywhere I go. Flood my soul with Thy spirit and love. Possess my whole being so utterly that all my life may be a radiance of Thine.
Shine through me and be so in me that every soul I come in contact with may feel Thy presence in my soul. Let them look up and see no longer me but only Jesus. Stay with me and then I shall begin to shine as you shine, so to shine as to be a light to others. Amen. –Mother Teresa
110) From "Life and Times of Frederick Douglass"
Portrait of Frederick Douglass as a young man
Frederick Douglass (1818-1895) was born into slavery in Maryland. He escaped slavery as a young man, and became the most prominent black abolitionist of his time. He is one of the most important figures in African-American history, and was a powerful orator. He was a firm believer in the equality of all people and often said, "I would unite with anybody to do right and with nobody to do wrong." In this selection from his autobiography (pages 82-84), Douglass declares that it was faith that enabled him to endure the sufferings of slavery, and it was faith that gave him the hope that he would someday be free. He was an ordained minister of the African Methodist Episcopal Church.
Previously to my contemplation of the anti-slavery movement, my mind had been seriously awakened to the subject of religion. I was not more than thirteen years old, when in my loneliness and destitution I longed for some one to whom I could go, as to a father and protector. The preaching of a white Methodist minister, named Hanson, was the means of causing me to feel that in God I had such a friend. He thought that all men, great and small, bond and free, were sinners in the sight of God: that they were by nature rebels against His government; and that they must repent of their sins, and be reconciled to God through Christ. I cannot say that I had a very distinct notion of what was required of me, but one thing I did know well: I was wretched and had no means of making myself otherwise.
I consulted a good old colored man named Charles Lawson, and in tones of holy affection he told me to pray, and to "cast all my care upon God." This I sought to do; and though for weeks I was a poor, broken-hearted mourner, traveling through doubts and fears, I finally found my burden lightened, and my heart relieved. I loved all mankind, slaveholders not excepted, though I abhorred slavery more than ever. I saw the world in a new light, and my great concern was to have everybody converted. My desire to learn increased, and especially did I want a thorough acquaintance with the contents of the Bible. I have gathered scattered pages of the Bible from the filthy street-gutters, and washed and dried them, that in moments of leisure I might get a word or two of wisdom from them.
While thus religiously seeking knowledge, I continued my acquaintance with Lawson. This man not only prayed three times a day, but he prayed as he walked through the streets, at his work, on his dray– everywhere. His life was a life of prayer, and his words when he spoke to any one, were about a better world. Uncle Lawson lived near Master Hugh's house, and becoming deeply attached to him, I went often with him to prayer-meeting, and spent much of my leisure time with him on Sunday. The old man could read a little, and I was a great help to him in making out the hard words, for I was a better reader than he. I could teach him "the letter," but he could teach me "the spirit," and refreshing times we had together, in singing and praying. These meetings went on for a long time without the knowledge of Master Hugh or my mistress. Both knew, however, that I had become religious, and seemed to respect my conscientious piety.
…Uncle Lawson was my spiritual father and I loved him intensely, and was at his house every chance I could get… The good old man had told me that the "Lord had a great work for me to do," and I must prepare to do it; that he had been shown that I must preach the gospel. His words made a very deep impression upon me, and I verily felt that some such work was before me, though I could not see how I could ever engage in its performance. "The good Lord would bring it to pass in his own good time," he said, and that I must go on reading and studying the scriptures. This advice and these suggestions were not without their influence on my character and destiny. He fanned my already intense love of knowledge into a flame by assuring me that I was to be a useful man in the world. When I would say to him, "How can these things be? and what can I do?" his simple reply, was, "Trust in the Lord." When I would tell him, "I am a slave, and a slave for life, how can I do anything?" he would quietly answer, "The Lord can make you free, my dear; all things are possible with Him; only have faith in God. 'Ask, and it shall be given you.' If you want liberty, ask the Lord for it in FAITH, and he will give it to you."
Thus assured and thus cheered on under the inspiration of hope, I worked and prayed with a light heart, believing that my life was under the guidance of a wisdom higher than my own. With all other blessings sought at the mercy seat, I always prayed that God would, of his great mercy and in his own good time, deliver me from my bondage.
I Peter 5:6-7 — Humble yourselves, therefore, under God's mighty hand, that he may lift you up in due time. Cast all your anxiety on him because he cares for you.
Luke 11:9-10 — (Jesus said), "So I say to you: Ask and it will be given to you; seek and you will find; knock and the door will be opened to you. For everyone who asks receives; he who seeks finds; and to him who knocks, the door will be opened."
John 8:36 — So if the Son sets you free, you will be free indeed.
"I prayed for freedom for twenty years, but received no answer until I prayed with my legs."
Frederick Douglass's Prayer for Freedom:
O, why was I born a man, of whom to make a brute!… O God, save me! God, deliver me! Let me be free! Is there any God? Why am I a slave? I will run away. I will not stand it. Get caught, or get clear, I'll try it… I have only one life to lose. I had as well be killed running as die standing. Only think of it; 100 miles straight north, and I am free! Try it? Yes! God is helping me, and I will. It cannot be that I shall live and die a slave.
109) Which Way?
Many people have chosen the following poem (or a variation of it) for the epitaph on their gravestone. This one was found in a cemetery in Waynesville, North Carolina:
Effie Jean Robinson 1897-1922
Come blooming youths, as you pass by,
And on these lines do cast an eye.
As you are now, so once was I;
As I am now, so must you be;
Prepare for death and follow me.
Underneath, someone added:
To follow you;
I am not content,
Unless I know
Which way you went.
Joshua 24:14-15 — (Joshua said to the people), "Now fear the Lord and serve him with all faithfulness. Throw away the gods your ancestors worshiped beyond the Euphrates River and in Egypt, and serve the Lord. But if serving the Lord seems undesirable to you, then choose for yourselves this day whom you will serve, whether the gods your ancestors served beyond the Euphrates, or the gods of the Amorites, in whose land you are living. But as for me and my household, we will serve the Lord."
Philippians 3:10-14 — I want to know Christ– yes, to know the power of his resurrection and participation in his sufferings, becoming like him in his death, and so, somehow, attaining to the resurrection from the dead. Not that I have already obtained all this, or have already arrived at my goal, but I press on to take hold of that for which Christ Jesus took hold of me. Brothers and sisters, I do not consider myself yet to have taken hold of it. But one thing I do: Forgetting what is behind and straining toward what is ahead, I press on toward the goal to win the prize for which God has called me heavenward in Christ Jesus.
Hebrews 9:27-28 — Just as people are destined to die once, and after that to face judgment, so Christ was sacrificed once to take away the sins of many; and he will appear a second time, not to bear sin, but to bring salvation to those who are waiting for him.
O Lord, support us all the day long of this troubled life, until the shadows lengthen, and the evening comes, and the busy world is hushed; and the fever of life is over, and our work is done. Then, Lord, in thy mercy, grant us a safe lodging, and a holy rest, and peace at the last; through Jesus Christ our Lord. Amen. –Book of Common Prayer
108) A Welcoming Congregation?
I received this story in an email yesterday. www.snopes.com cannot confirm or deny the truth of it. Snopes has not been able to track down any source for this account, but they do list another, similar story that is known to be true. Whether or not this really happened, it should raise in our minds the question of how welcome outsiders are in our congregations. How welcoming are you?
Pastor Jeremiah Steepek let his beard grow for a few days, dressed shabbily like a homeless person, and then went to the 10,000-member church where he was to be introduced that morning as the new senior pastor. Still a stranger to everyone there, he walked around for 30 minutes while it was filling with people for the service. Only three people out of the thousands there said hello to him. He asked people for change to buy food. No one in the church gave him change. He went into the sanctuary to sit down in the front of the church and was asked by the ushers if he would please sit in the back. He greeted people, only to be greeted back with stares and dirty looks.
He sat in the back of the church and listened to the church announcements. When all that was done, the elders went up and were excited to introduce the new pastor of the church to the congregation. They said, "We would like to introduce to you Pastor Jeremiah Steepek." The congregation looked around clapping with joy and anticipation. The homeless man sitting in the back stood up and started walking down the aisle. The clapping stopped and all eyes were on him. He walked up to the altar and took the microphone from the elders (who were in on this) and paused for a moment. Then he recited these words from Matthew 25:
"Then the King will say to those on his right, 'Come, you who are blessed by my Father; take your inheritance, the kingdom prepared for you since the creation of the world. For I was hungry and you gave me something to eat, I was thirsty and you gave me something to drink, I was a stranger and you invited me in, I needed clothes and you clothed me, I was sick and you looked after me, I was in prison and you came to visit me.' "Then the righteous will answer him, 'Lord, when did we see you hungry and feed you, or thirsty and give you something to drink? When did we see you a stranger and invite you in, or needing clothes and clothe you? When did we see you sick or in prison and go to visit you?' "The King will reply, 'Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.'
After he recited this, he looked towards the congregation and told them all what he had experienced that morning. Many heads were bowed in shame. He then said, "Today I see a gathering of people, not a church of Jesus Christ. The world has enough people, but not enough disciples. When will you decide to become disciples?" He then dismissed service until next week. Being a Christian is more than something you claim. It's something you live by and share with others.
Luke 6:36-37 — (Jesus said), "Be merciful, just as your Father is merciful. Do not judge, and you will not be judged. Do not condemn, and you will not be condemned. Forgive, and you will be forgiven."
Matthew 7:1-3 — (Jesus said), "Do not judge, or you too will be judged. For in the same way you judge others, you will be judged, and with the measure you use, it will be measured to you. Why do you look at the speck of sawdust in your brother's eye and pay no attention to the plank in your own eye?"
Matthew 25:45 — "The Master will reply, 'Truly I tell you, whatever you did not do for one of the least of these, you did not do for me.'
"May the road rise to meet you,
May the wind be always at your back,
May the sun shine warm on your face,
The rain fall softly on your fields;
And until we meet again,
May God hold you in the palm of His hand."
–Gaelic blessing (Ireland)
107) Looking for the Great Spirit
The Clergy of America: Anecdotes, 1869, pp. 69-71
The following narrative was given by a gentleman of the United States, when on a visit to England, and was published in that country in 1838:
It was in the autumn of 1832, in the regions of the far West, by the waters of the Columbian River, that a traveler was led by commerce to seek out the tribe of Indians dwelling upon its borders (commonly called the 'Flathead Indians'). He appeared at the entrance of a wigwam and asked for food and water, in broken accents, but in their own language. When the traveler was rested and refreshed, the wigwam owner asked his errand, and when he said he was there with items to trade, that made him very welcome to these children of the wilderness.
The Indian who received him was tall, erect, and finely formed, with an expression of intelligence about his eyes and forehead. "You are weary," he said to the stranger, "and it was well that you reached our shelter before the voice of the great Eagle was abroad upon the mountains."
"What do you mean?" asked his guest, looking at the clouded sky, "and what is the voice of the great Eagle?"
"Hear it now," replied the Indian, as the first peal of thunder rolled and echoed round the hills. "The great Spirit is riding down the waterfall! Do you not hear him in the wind? I am afraid of him, and so surely you must be. Let us speak against his harm."
"I fear nothing," replied the hardy wanderer. "But is this spirit a good or a bad spirit?– and have you more spirits than one in your country?"
"We have a good Spirit," was the answer, "but we never speak to him– he will do us no evil. And we have a bad spirit, who is the great Eagle I told you of; and we pray to him, that he may not work us harm. What spirits have you in your country?"
"I come," said the stranger, "from the Ohio River; and the men in those parts have a book which teaches them a new way to heaven; or, as you would call it, to the sky. They say that they shall live again after they die, and live up there– that is, if they please their Great Spirit."
"What is a book? I should like to see it," said the Indian. "And about living after death, I want to know more. How far is it to the Ohio?"
"It is 3,000 miles," replied the traveler, "and all through the desert. You would never reach the Ohio. But all I have said to you is true."
The Indian turned into his hut to sleep, but he could not sleep at all. When the storm was hushed, he walked out again into the clear, still moonlight to think about the book which could teach people the way into the sky. The next morning he repeated what the traveler had said to two men in his tribe, and he asked them if they would go with him to fetch such a book from beyond the mountains. They agreed, and after a season when the traveler went on his way, they too took their journey in an opposite direction. They lived by the chase, endured innumerable perils, and were six months on the road;– but at last they arrived at their destination, and entreated to see the book of which they had heard, and to be taught that which they did not know.
Their story excited great interest. They were welcomed and instructed. But after many months had passed, the Indian who had first heard the good news from the traveler, worn out with the fatigue and hardships of his journey, fell ill and died. This was not, however, before he had listened to the glad tidings of salvation by Jesus Christ, and declared that he believed the book. A missionary offered himself to return with the two others to their homes, and did accompany them back to the Columbian River. Accounts were received from him of his safe arrival, his joyful reception by the tribe, and of his beginning to distribute among them the water of life.
Flathead delegation in Washington, D.C. with interpreter, 1884
Psalm 22:27-28 — All the ends of the earth will remember and turn to the Lord, and all the families of the nations will bow down before him, for dominion belongs to the Lord and he rules over the nations.
Isaiah 45:22 — "Turn to me and be saved, all you ends of the earth; for I am God, and there is no other."
Psalm 61:1-2 — Hear my cry, O God; listen to my prayer. From the ends of the earth I call to you, I call as my heart grows faint; lead me to the rock that is higher than I.
Acts 13:47 — For this is what the Lord has commanded us: "I have made you a light for the Gentiles, that you may bring salvation to the ends of the earth."
THANKSGIVING FOR THE MISSION OF THE CHURCH: Almighty God, you sent your Son Jesus Christ to reconcile the world to yourself: We praise and bless you for those whom you have sent in the power of the Spirit to preach the Gospel to all nations. We thank you that in all parts of the earth a community of love has been gathered together by their prayers and labors, and that in every place your servants call upon your Name; for the kingdom and the power and the glory are yours for ever. Amen. —Book of Common Prayer
106) When it is a Struggle to 'Keep the Faith'
Faith in Jesus is not the same as feelings for Jesus. Faith is stronger than feelings. It is stronger than knowledge. Faith often becomes a sheer act of will. A person may say, "I don't feel like believing, but I want to believe." And it may very well be that if a person were to say, "I don't know whether this Christianity is true or not, but with all my heart I want it to be true," then in God's sight he has faith. A man cried to Jesus, "I believe, help thou my unbelief." God himself is the giver of faith. By myself I cannot believe in Jesus or come to him, but the Holy Spirit works through Word and Sacrament to give me faith.
He makes it possible for me to be a believer. To be sure, feelings are important. In fact, Jesus will give us deep and lasting feelings. He will help us to feel joy, to feel repentance, to feel hope, to feel love, to feel faith. But when the dark days come, and these feelings seem to slip away, be sure of this: Jesus has not abandoned us. He does not make feelings a condition for his being with us. He is with us, even in those gloomy and depressed days when we hardly dare to think that he cares at all.
A man came once to me and said, "I feel that God has left me." I replied, "Perhaps that does not make any difference to God." After all, God is our Father, Jesus is our great Brother and Savior… He has promised never to leave us or abandon us. He has given us his Word. We rest there. –Alvin Rogness, The Jesus Life, pages 26-27.
"What people don't realize is how much religion costs. They think faith is a big electric blanket, when of course, it is the cross. It is much harder to believe than not to believe, so you must at least do this: keep an open mind. Keep it open toward faith, keep wanting it, keep asking for it, and leave the rest to God." –Flannery O'Connor, Habit of Being, page 354.
"I need not exert myself and try to force myself to believe or try to chase doubt out of my heart. Both are equally useless. I have let Jesus into my heart, and he will fulfill my heart's desire. I need only to tell Jesus how weak my faith is."
–O. Hallesby
Mark 9:24 — …The boy's father exclaimed, "I do believe; help me overcome my unbelief!"
Proverbs 3:5-6 — Trust in the Lord with all your heart and lean not on your own understanding; in all your ways submit to him, and he will make your paths straight.
Romans 10:17 — Consequently, faith comes from hearing the message, and the message is heard through the word about Christ.
O Most High, Glorious God, enlighten the darkness of my heart and give me a right faith, a certain hope, and a perfect love, understanding, and knowledge; O Lord, that I may carry out your holy and true command. Amen. –St. Francis of Assisi
105) Divine Dealings with Sinners
From The Clergy of America: Anecdotes, 1869, pages 169-170
A sermon illustration from a New England minister in the 1700s.
A clergyman, sitting in his study, saw some boys in his garden stealing melons. He quietly arose, and walking into his garden, called out to them, "Boys, boys." They immediately fled with the utmost speed, tearing through the shrubbery, and tumbling over the fences. "Boys," cried out the gentleman, "stop, do not be afraid. You may have as many melons as you want. I have more than I know what to do with."
The boys, urged by the consciousness of their guilt, fled with increasing speed. They did not like to trust themselves in the gentleman's hands; neither did they exactly relish the idea of receiving favors from one whose garden they were robbing.
The clergyman continued to entreat them to stop, assuring them that they should not be hurt, and that they might have as many melons as they wished for. But the very sound of his voice added wings to their speed. They scampered on in every direction, with as determined an avoidance as though the gentleman were pursuing them with a horsewhip. He determined, however, that they should be convinced that he was sincere in his offers, and therefore pursued them. Two little fellows who could not climb over the fence were caught by the minister. He led them back, telling them they were welcome to melons whenever they wanted any, and gave to each of them a couple, and then allowed them to go home. He sent by them a message to the other boys, that whenever they wanted any melons, they were welcome to them, if they would but come to him.
The other boys, when they heard of the favors with which the two had been laden, were loud in the expression of their indignation. They accused the clergyman of partiality, in giving to some without giving to all; and when reminded that they would not accept his offers, but ran away from him as fast as they could, they replied, "What of that? He caught these two boys, and why should he have selected them instead of the rest of us? If he had only run a little faster, he might have caught us, too. It was mean of him to show such partiality."
Again they were reminded that the clergyman was ready to serve them as he did the other two he caught, and give them as many melons as they wanted, if they would only go and ask him for them.
Still, the boys would not go near him, but accused the generous man of injustice and partiality in doing for two, that which he did not do for all.
So it is with the sinner. God finds all guilty, and invites them to come to him and be forgiven, and receive the richest blessings heaven can afford. They all run from him, and the louder he calls, the more furious do they rush in their endeavors to escape. By his grace he pursues, and some he overtakes. He loads them with favors, and sends them back to invite their fellow-sinners to return and receive the same. They refuse to come, and yet never cease to abuse his mercy and insult his goodness. They say, "Why does God select some and not others? Why does he overtake others who are just as bad as we are, and allow us to escape? This election of some and not others, is unjust and partial."
And when the minister of God replies, "The invitation is extended to you: Whosoever will, let him come and take of the water of life freely" (Revelation 22:17), the sinner heeds it not, but goes on in his sins, still complaining of the injustice and partiality of God, in saving some and not saving all.
Psalm 103:8-10 — The Lord is compassionate and gracious, slow to anger, abounding in love. He will not always accuse, nor will he harbor his anger forever; he does not treat us as our sins deserve or repay us according to our iniquities.
Isaiah 55:7 — Let the wicked forsake his way and the evil man his thoughts. Let him turn to the Lord, and he will have mercy on him, and to our God, for he will freely pardon.
Matthew 18:14 — (Jesus said), "It is not the will of your Father which is in heaven, that any one of these little ones should perish."
A Morning Prayer by Robert Louis Stevenson (1850-1894): The day returns and brings us the petty round of irritating concerns and duties. Help us, O Lord, to perform them with laughter and kind faces, and let cheerfulness abound with industry. Give to us to go blithely on our business all this day, bring us to our resting beds weary and contented and undishonored, and grant us in the end the gift of sleep. Amen.
104) Religion and Wealth
John Wesley (1703-1791): "O ye lovers of money, hear the Word of the Lord. Do you suppose that money, though multiplied as the sand of the sea, can give happiness? Then you are 'given up to a strong delusion to believe a lie'– a palpable lie, confuted daily by a thousand experiments. Open your eyes! Look all around you! Are the richest men the happiest? Have those the largest share of contentment who have the largest possessions? Is not the very reverse true? Is it not a common observation that the richest men are, in general, the most discontented, the most miserable? Had not the far greater part of them been more contented when they had less money? Look inside yourselves. If you are increased in goods, are you proportionately increased in happiness? You have more substance; but have you more contentment? You know that in seeking happiness from riches, you are only striving to drink out of empty cups. And let them be painted and gilded ever so finely, they are empty still."
Dean Kelly, Why Conservative Churches are Growing, 1972, page 55: The Wesleyan revival made former beggars and roustabouts into such honest and self-respecting citizens that their neighbors took to entrusting to them the valuables they didn't trust themselves not to squander! Unfortunately, the virtues of Wesley's followers also helped them to prosper, and as they ascended in the esteem of their neighbors, they tended to place their religious commitments in perspective with other concerns, which took on increasing importance. John Wesley, the founder of the movement, has summed up this process in what might be called Wesley's Law:
"Wherever riches have increased, the essence of religion has decreased in the same proportion. Therefore, I do not see how it is possible, in the nature of things, for any revival of religion to continue long. For religion must necessarily produce both industry and frugality, and these cannot but produce riches. But as riches increase, so will pride, anger, and love of the world in all its branches… Is there no way to prevent this– this continual decay of pure religion?"
"Religion brought forth prosperity, and the daughter destroyed the mother… There is danger, lest the enchantments of this world make us forget our errand into the wilderness." –Cotton Mather, 1702, Early American clergyman
As a solution to this problem, Wesley said Christians should guard themselves against temptation by giving away all the money that they possibly can, saying, "We ought not to prevent people from being diligent and frugal; we must exhort all Christians to gain all they can, and to save all they can, in order to give all they can." Observing the verse "Lay not up for yourselves treasures upon earth," Wesley himself lived on only 30 pounds a year, even though he at times earned 1,400 pounds a year from the sale of his books. He gave all the rest away. He said, "When I have money, I get rid of it as quickly as possible, lest it find a way into my heart." Wesley also said, "It is no more sinful to be rich than to be poor. But it is dangerous beyond expression. Therefore, I remind all of you who are of this number, who have the conveniences of life, and something left over, that you walk upon slippery ground…"
Deuteronomy 8:6-7, 9-14, 17-18 — (Moses said), "Observe the commands of the Lord your God, walking in his ways and revering him. For the Lord your God is bringing you into a good land… a land where bread will not be scarce and you will lack nothing… When you have eaten and are satisfied, praise the Lord your God for the good land he has given you. Be careful that you do not forget the Lord your God, failing to observe his commands, his laws and decrees that I am giving you this day. Otherwise, when you eat and are satisfied, when you build fine houses and settle down, and when your herds and flocks grow large and your silver and gold increase and all you have is multiplied, then your heart will become proud, and you will forget the Lord your God, who brought you out of the land of Egypt, out of the land of slavery… You may say to yourself, 'My power and the strength of my hands have produced this wealth for me.' But remember the Lord your God, for it is he who gives you the ability to produce wealth, and so confirms his covenant…"
Proverbs 30:7-9 — Two things I ask of you, O Lord; do not refuse me before I die: Keep falsehood and lies far from me; give me neither poverty nor riches, but give me only my daily bread. Otherwise, I may have too much and disown you and say, 'Who is the Lord?' Or I may become poor and steal, and so dishonor the name of my God.
O God, we beg you to save us this day from the distractions of vanity and the false lure of inordinate desires. Grant us the grace of a quiet and humble mind, and may we learn from Jesus to be meek and lowly of heart. May we not join the throng of those who seek after things that never satisfy and who draw others after them in the fever of covetousness. Save us from adding our influence to the drag of temptation. If the fierce tide of greed beats against our soul, may we rest at peace in your higher contentment. In the press of life may we pass from duty to duty in tranquility of heart, and spread your quietness and peace to all who come near… Amen
–Walter Rauschenbusch (1861-1918)
103) Our Constitution and Religion
Adapted from Our 'Godless Constitution;' The Complicated Truth, by Eric Metaxas, www.Breakpoint.org July 16, 2013
One Fourth of July ad would have you believe that the Founding Fathers sought to shield our nation's government from Christianity. Clever, but not true.
This Fourth of July, I opened up the New York Times and I found an extremely misleading ad sponsored by the Freedom From Religion Foundation. "Celebrate Our Godless Constitution," it read. The ad featured pictures of six founding fathers, and cherry-picked quotes that made it appear that these men were die-hard atheists– or at least, did not approve of Christianity influencing our new nation's government.
Now, it's quite true our Constitution is secular; the founders were well aware of what can happen when kings and countries force a particular religion on their citizens. Think Iran today.
But there is a big difference between believing a Constitution should be secular, and believing that religion– in this case, Christianity– should have no influence on one's country and its laws. Five of the six founders listed in the ad strongly believed that America would not survive if her people were godless.
For instance, John Adams warned in 1798, "Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other."
George Washington shared this view. In his Farewell Address, the old general said, "Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports." And, he added, "Reason and experience both forbid us to expect that national morality can prevail in exclusion of religious principle."
Benjamin Franklin– no doubt a rather worldly man– urged participants in the Constitutional Convention to pray, because, he said, "the longer I live, the more convincing proofs I see of this truth– that God governs in the affairs of men. And if a sparrow cannot fall to the ground without his notice, is it probable that an empire can rise without his aid?"
While James Madison, like Franklin, was against a state-imposed religion, as of course I am, our fourth president also noted that "before any man can be considered as a member of civil society, he must be considered as a subject of the governor of the universe"– that is, God.
So while the Constitution cannot be considered a religious document, many of our founders' religious views deeply informed their thinking about the kind of government America should embrace. To suggest otherwise is intellectually dishonest. They also considered freedom of religion so important that they enshrined it in the First Amendment to the Constitution…
Ironically, the New York Times itself revealed how silly and misleading the Freedom from Religion Foundation ad is just a few pages later. Every Fourth of July, it reprints the Declaration of Independence. Allow me to read from the text: "We, therefore, the Representatives of the United States of America… [appeal] to the Supreme Judge of the world for the rectitude of our intentions." The Declaration's signers also wrote that they were acting "with a firm reliance on the protection of divine Providence."
We need to understand the complicated truth about religion and America's founders– and with "firm reliance" appeal to God to both bless America, and to forgive the sins of our nation. Because Franklin was right: God does govern the affairs of men.
Isaiah 40:15 — Surely the nations are like a drop in a bucket (to the Lord); they are regarded as dust on the scales; He weighs the islands as though they were fine dust.
I Timothy 2:1-5 — I urge, then, first of all, that petitions, prayers, intercession and thanksgiving be made for all people– for kings and all those in authority, that we may live peaceful and quiet lives in all godliness and holiness. This is good, and pleases God our Savior, who wants all people to be saved and to come to a knowledge of the truth. For there is one God and one mediator between God and mankind, the man Christ Jesus.
II Chronicles 7:14 — If my people, who are called by my name, will humble themselves and pray and seek my face and turn from their wicked ways, then I will hear from heaven, and I will forgive their sin and will heal their land.
BILLY GRAHAM'S PRAYER FOR AMERICA:
Our Father and Our God, we praise You for Your goodness to our nation, giving us blessings far beyond what we deserve. Yet we know all is not right with America. We deeply need a moral and spiritual renewal to help us meet the many problems we face. Convict us of sin. Help us to turn to You in repentance and faith. Set our feet on the path of Your righteousness and peace. We pray today for our nation's leaders. Give them the wisdom to know what is right, and the courage to do it. You have said, "Blessed is the nation whose God is the Lord." May this be a new era for America, as we humble ourselves and acknowledge You alone as our Savior and Lord. This we pray in Your holy name. Amen.
102) A Religion Designed to Last
By John Steinbeck, Travels with Charley: In Search of America
(Penguin Books, 1962), 77-79.
Sunday morning, in a Vermont town… I looked for a church to attend. Several I eliminated for reasons I do not now remember, but on seeing a John Knox church I drove into a side street and parked… I took my seat in the rear of the spotless, polished place of worship. The prayers were to the point, directing the attention of the Almighty to certain weaknesses and undivine tendencies I know to be mine and could only suppose were shared by the others gathered there. The service did my heart and I hope my soul good. It had been a long time since I had heard such an approach.
It is our practice now, at least in the large cities, to find from our psychiatric priesthood that our sins are not really sins at all but accidents that are set in motion by forces beyond our control. There was no such nonsense in this church. The minister… opened up with prayer and reassured us that we were a pretty sorry lot. And he was right. We didn't amount to much to start with, and due to our own tawdry efforts we had been slipping ever since… Having proved that we, or perhaps only I, were no damn good he painted with cool certainty what was likely to happen to us if we didn't make some basic reorganizations for which he didn't hold out much hope. He spoke of hell as an expert, not the mush-mush hell of these soft days, but a well-stoked, white-hot hell served by technicians of the first order…
For some years now God has been a pal to us, practicing togetherness… but this Vermont God cared enough about me to go to a lot of trouble kicking the hell out of me. He put my sins in a new perspective. Whereas they had been small and mean and nasty and best forgotten, this minister gave them some size and bloom and dignity… I wasn't a naughty child, but a first-rate sinner, and I was going to catch it.
All across the country I went to church on Sundays, a different denomination every week; but nowhere did I find the quality of that Vermont preacher. He forged a religion designed to last, not some predigested obsolescence.
I John 1:8-10 — If we claim to be without sin, we deceive ourselves and the truth is not in us. If we confess our sins, God is faithful and just and will forgive us our sins and purify us from all unrighteousness. If we claim we have not sinned, we make him out to be a liar and his word has no place in our lives.
Revelation 21:1-8 — Then I saw a new heaven and a new earth, for the first heaven and the first earth had passed away, and there was no longer any sea. I saw the Holy City, the new Jerusalem, coming down out of heaven from God, prepared as a bride beautifully dressed for her husband. And I heard a loud voice from the throne saying, "Now the dwelling of God is with men, and he will live with them. They will be his people, and God himself will be with them and be their God. He will wipe every tear from their eyes. There will be no more death or mourning or crying or pain, for the old order of things has passed away." He who was seated on the throne said, "I am making everything new!" Then he said, "Write this down, for these words are trustworthy and true." He said to me: "It is done. I am the Alpha and the Omega, the Beginning and the End. To him who is thirsty I will give to drink without cost from the spring of the water of life. He who overcomes will inherit all this, and I will be his God and he will be my son. But the cowardly, the unbelieving, the vile, the murderers, the sexually immoral, those who practice magic arts, the idolaters and all liars– their place will be in the fiery lake of burning sulfur. This is the second death."
O Lord God, eternal and Almighty Father, we confess and acknowledge before thy holy majesty that we are poor sinners, conceived and born in iniquity and corruption, prone to do evil, incapable of any good, and that in our depravity we transgress thy holy commandments without end or ceasing. Wherefore we purchase for ourselves, through thy righteous judgment, our ruin and perdition. Nevertheless, O Lord, we are grieved that we have offended thee; and we condemn ourselves and our sins with true repentance, pleading for thy grace to relieve our distress. O God and Father, most gracious and full of compassion, have mercy upon us in the name of thy dear Son, our Lord Jesus Christ. And as thou dost blot out our sins and stains, magnify and increase in us day by day the grace of thy Holy Spirit, that as we acknowledge our unrighteousness with all our heart, we may be moved by that sorrow which shall bring forth true repentance in us, mortifying all our sins, and producing in us the fruits of righteousness and innocence which are pleasing unto thee; through the same Jesus Christ, our Lord. Amen. –John Calvin, The Geneva Liturgy, 1542
\section{Introduction}
One of the major problems in knot theory is distinguishing knots in $\mathbb{S}^{3}$.
There are many polynomial invariants, such as the Alexander polynomial, the colored Jones polynomials, and the HOMFLY polynomial, each utilizing properties of knot diagrams, knot exteriors, knot groups, etc.
The $A$-polynomial is an algebraic-geometric knot invariant closely related to the $SL_{2}\mathbb{C}$-character variety and the strongly detected boundary slopes of the knot.
Certain knot families have explicit formulas for their $A$-polynomials, such as $n$-twist knots~\cite{hs_2004}, iterated torus knots~\cite{nz_2017}, and $r$-twisted Whitehead doubles over torus knots~\cite{ruppe_2016}.
Other families of interest have non-explicit formulas, such as double-twist knots~\cite{petersen_2015}, two-bridge knots~\cite{hs_2007}, $(-2,3,2n+1)$-pretzel knots~\cite{gm_2011,ty_2004}, and some families of hyperbolic knots.
The $A$-polynomials of general satellite knots are less studied than those of hyperbolic knots and torus knots.
We define the {\it rational pseudo-graph knots} to be the family of knots whose $A$-polynomial factors so that each factor is the sum of two monomials in $L$ and $M$, that is, $L^{q}M^{p}-\delta$ or $L^{q}-\delta M^{p}$ for relatively prime $p,q$ with $q>0$ and $\delta\in\{\pm1\}$:
\begin{align*}
\mathcal{G}_{\mathbb{Q}}:=&\left\{K\subset \mathbb{S}^{3}\middle|A_{K}\doteq\prod_{j\in J}(L^{q_{j}}M^{p_{j}}-\delta_{j}),p_{j},q_{j}\in\mathbb{Z},\delta_{j}\in\{\pm1\},(p_{j},q_{j})=1,q_{j}>0\right\},
\end{align*}
where $J$ is some finite indexing set.
The symbol $\doteq$ denotes equivalence up to normalization in $\mathbb{Z}[L,M]$, that is $f(L,M)\doteq g(L,M)$ if $f(L,M)=\sigma L^{a}M^{b}g(L,M)$ for some integers $a,b$ and $\sigma\in\{\pm1\}$, so $L^{q}M^{p}-\delta\doteq L^{q}-\delta M^{-p}$.
We also write the {\it reduced polynomial} obtained from $f(L,M)$ in $\mathbb{Z}[L,M]$ by removing repeated factors as $\mathrm{Red}[f(L,M)]$.
Contained inside this set of knots is the set of \textit{integer pseudo-graph knots} where each $q_{j}=1$:
\begin{align*}
\mathcal{G}_{\mathbb{Z}}:=&\left\{K\subset \mathbb{S}^{3}\middle|A_{K}\doteq\prod_{j\in J}(LM^{r_{j}}-\delta_{j}),r_{j}\in\mathbb{Z},\delta_{j}\in\{\pm1\}\right\}.
\end{align*}
As we will show in Corollary~\ref{graphknotsgz}, contained inside $\mathcal{G}_{\mathbb{Z}}$ is the set of {\it graph knots} $\mathcal{G}_{0}$, that is, knots whose complements are graph manifolds; these knots are combinations of $(p,q)$-cables and connected sums over the unknot $U$, which will be discussed further in Section~\ref{knotfam}.
Also of interest, the logarithmic Mahler measure of a multivariable polynomial $P(z_{1},\ldots,z_{n})$ is denoted by:
$$
\mathrm{m}(P):=\frac{1}{(2\pi)^{n}}\int_{[0,2\pi]^{n}}\ln\left|P\left(e^{i\theta_{1}},\ldots,e^{i\theta_{n}}\right)\right|\,\mathrm{d}\theta_{1}\cdots\mathrm{d}\theta_{n}.
$$
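As a quick numerical sanity check (not part of any proof), the following Python sketch approximates $\mathrm{m}(P)$ by a midpoint rule on the torus; the helper name \texttt{log\_mahler} is ours, and the quadrature is only reliable when $P$ has no zeros on the unit torus, since the logarithmic singularities otherwise defeat naive sampling.

```python
import math

def log_mahler(poly, n=256):
    """Midpoint-rule approximation of the logarithmic Mahler measure
    m(P) of a two-variable polynomial P(L, M) over the unit torus.
    Reliable only when P has no zeros with |L| = |M| = 1."""
    h = 2 * math.pi / n
    total = 0.0
    for j in range(n):
        for k in range(n):
            L = complex(math.cos((j + 0.5) * h), math.sin((j + 0.5) * h))
            M = complex(math.cos((k + 0.5) * h), math.sin((k + 0.5) * h))
            total += math.log(abs(poly(L, M)))
    return total / n ** 2

# m(L - 2) = ln 2, and additivity gives m((L - 2)(M - 3)) = ln 2 + ln 3.
```

For smooth periodic integrands the midpoint rule converges rapidly, so $n=256$ already recovers $\mathrm{m}(L-2)=\ln 2$ to high precision.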
The logarithmic Mahler measures of knot polynomials appear to have connections to the geometry of the knot, so let the set of knots whose $A$-polynomial have logarithmic Mahler measure zero be denoted
\begin{align*}
\mathfrak{M}_{0}:=\left\{K\subset\mathbb{S}^{3}\middle|0=\mathrm{m}\left(A_{K}\right)=\frac{1}{(2\pi)^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\ln\left|A_{K}(e^{i\theta},e^{i\phi})\right|\,\mathrm{d}\theta\,\mathrm{d}\phi\right\}.
\end{align*}
Simple computation of these integrals shows that $\mathcal{G}_{\mathbb{Q}}\subset\mathfrak{M}_{0}$, and hence the containments are given by $\mathcal{G}_{\mathbb{Z}}\subset\mathcal{G}_{\mathbb{Q}}\subset\mathfrak{M}_{0}$.
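Concretely, for a single factor $L^{q}M^{p}-\delta$ with $q>0$ and $\delta\in\{\pm1\}$, Jensen's formula applied to the inner integral gives
\begin{align*}
\mathrm{m}\left(L^{q}M^{p}-\delta\right)&=\frac{1}{(2\pi)^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\ln\left|e^{iq\theta}-\delta e^{-ip\phi}\right|\,\mathrm{d}\theta\,\mathrm{d}\phi\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}\ln^{+}\left|\delta e^{-ip\phi}\right|\,\mathrm{d}\phi=0,
\end{align*}
since $\left|\delta e^{-ip\phi}\right|=1$; additivity of $\mathrm{m}$ over products then gives $\mathrm{m}(A_{K})=0$ for every $K\in\mathcal{G}_{\mathbb{Q}}$.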
The main satellite operations considered in this paper are $(p,q)$-cables $[(p,q),K]$ and connected sums $K_{1}\#K_{2}$; however, we will also discuss certain winding number zero satellite operations, such as $n$-twisted Whitehead doubles and $(m,n)$-double twisted doubles.
For $(p,q)$-cables, the convention used is that $q\geq2$ is the {\it winding number} of the cable and $p$ is any nonzero integer relatively prime to $q$.
Since it is unknown at this time whether $\mathcal{G}_{\mathbb{Z}}$ is a proper subset of $\mathcal{G}_{\mathbb{Q}}$, we will focus primarily on results about $\mathcal{G}_{0}$ and $\mathcal{G}_{\mathbb{Z}}$.
The first result is the computation of $A$-polynomials of connected sums and $(p,q)$-cables of knots in $\mathcal{G}_{\mathbb{Z}}$:
\begin{theorem}
\label{gzconnect}
Let $K_{1},K_{2}\in\mathcal{G}_{\mathbb{Z}}$ be nontrivial knots with $A_{K_{1}}\doteq\underset{i\in I}{\prod}\left(LM^{r_{i}}-\delta_{i}\right)$ and $A_{K_{2}}\doteq\underset{j\in J}{\prod}\left(LM^{s_{j}}-\delta_{j}\right)$ as above.
Then the $A$-polynomial of their connected sum $K_{1}\# K_{2}$ is given by:
\begin{align*}
A_{K_{1}\#K_{2}}\doteq\mathrm{Red}\left[\underset{(i,j)\in I\times J}{\prod}\left(LM^{r_{i}+s_{j}}-\delta_{i}\delta_{j}\right)\right]
\end{align*}
and so $K_{1}\#K_{2}\in\mathcal{G}_{\mathbb{Z}}$.
\end{theorem}
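The factor bookkeeping in Theorem~\ref{gzconnect} is mechanical; as an illustration only (helper names ours), the following sketch encodes each factor $LM^{r}-\delta$ as a pair $(r,\delta)$ and forms the reduced pairwise products.

```python
def connected_sum_factors(factors1, factors2):
    """Factors L*M**r - delta of A_K for K in G_Z, encoded as pairs
    (r, delta) with delta in {+1, -1}.  Per the connected-sum formula,
    combine factors pairwise and discard repeats (the reduction Red)."""
    combined = {(r1 + r2, d1 * d2)
                for (r1, d1) in factors1 for (r2, d2) in factors2}
    return sorted(combined)

# Trefoil T(3,2): A = (L - 1)(L M^6 + 1), i.e. pairs (0, +1) and (6, -1).
trefoil = [(0, 1), (6, -1)]
# Granny knot T(3,2) # T(3,2), per the theorem:
print(connected_sum_factors(trefoil, trefoil))  # → [(0, 1), (6, -1), (12, 1)]
```

In this encoding the theorem predicts $A_{T(3,2)\#T(3,2)}\doteq(L-1)(LM^{6}+1)(LM^{12}-1)$.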
\begin{theorem}
\label{gzcable}
For a nontrivial knot $C\in\mathcal{G}_{\mathbb{Z}}$ with $A_{C}\doteq\underset{j\in J}{\prod}\left(LM^{r_{j}}-\delta_{j}\right)$ as above, the $A$-polynomial of the $(p,q)$-cable over $C$ is given by:
\begin{align*}
A_{[(p,q),C]}\doteq\mathrm{Red}\left[(L-1)F_{(p,q)}(L,M)\underset{j\in J}{\prod}\left(LM^{r_{j}q^{2}}-{\delta_{j}}^{q}\right)\right],
\end{align*}
where $F_{(p,q)}(L,M)$ is defined in Remark \ref{iteratedtorus}, and so $[(p,q),C]\in\mathcal{G}_{\mathbb{Z}}$.
\end{theorem}
\begin{corollary}
\label{graphknotsgz}
Every graph knot is an integer pseudo-graph knot, $\mathcal{G}_{0}\subset\mathcal{G}_{\mathbb{Z}}$.
\end{corollary}
This gives us the containment $\mathcal{G}_{0}\subset\mathcal{G}_{\mathbb{Z}}\subset\mathcal{G}_{\mathbb{Q}}\subset\mathfrak{M}_{0}$.
Recall that the hyperbolic volume $\mathrm{Vol}(\mathcal{M}_{K})$ of the exterior of a knot is the sum of the volumes of the hyperbolic pieces in its JSJ-decomposition.
Since the graph knots are exactly the knots whose exteriors have zero hyperbolic volume, Theorems~\ref{gzconnect} and~\ref{gzcable} imply the forward direction of the following conjecture:
\begin{conjecture}
\label{apolyvol}
A knot exterior $\mathcal{M}_{K}$ has $\mathrm{Vol}(\mathcal{M}_{K})=0$ if and only if $\mathrm{m}(A_{K})=0$.
Equivalently,
$\mathcal{G}_{0}=\mathfrak{M}_{0}$.
\end{conjecture}
This conjecture comes from the above containments and a connection between hyperbolic volume and the logarithmic Mahler measure in the case of the $A$-polynomial $A_{K}(L,M)$, as discussed in \cite{gm_2018}.
Notice that $\mathrm{m}(P\cdot Q)=\mathrm{m}(P)+\mathrm{m}(Q)$ and that the logarithmic Mahler measure is invariant under normalization.
By the following remarks, no hyperbolic knot can lie in $\mathfrak{M}_{0}$; here we use the fact that the $A$-polynomial of a knot is a primitive polynomial, since it can be explicitly computed via resultants~\cite{dl_2006}:
\begin{remark}\cite[Corollary 2.4]{nz_2017}
\label{nohyp}
If $K$ is a hyperbolic knot, then there is a balanced-irreducible factor $f_{K}$ of $A_{K}$ which is not the sum of two monomials in $L$ and $M$.
\end{remark}
\begin{remark}
\label{mahler0prim}
By \cite[Theorem 3.10]{ew_1999}, for any primitive polynomial $F(x_{1},\ldots,x_{n})\in\mathbb{Z}[x_{1}^{\pm1},\ldots,x_{n}^{\pm1}]$, $\mathrm{m}(F)=0$ if and only if $F$ is a monomial times a product of cyclotomic polynomials evaluated on monomials.
Recall that a polynomial $f(L,M)\in \mathbb{Z}[L,M]$ is {\it primitive} if its content is the unit ideal $(1)$, that is, if the greatest common divisor of its coefficients is 1.
\end{remark}
\begin{remark}
\label{nohypcomp}
In Section~\ref{knotfam}, we discuss satellite knots $K=\mathrm{Sat}(P,C,f)$ for a companion knot $C$ and a pattern knot $P$ embedded in a solid torus $V$.
By \cite[Proposition 2.7]{nz_2017}, if the winding number $w$ of the embedded pattern knot $f(P)\subset V$ is nonzero, then every balanced-irreducible factor $f_{C}|A_{C}$ extends to some factor $f_{K}|A_{K}$ given by the following
$$
f_{K}(L,M)=\begin{cases}
\mathrm{Red}\left[\mathrm{Res}_{\overline{L}}\left[f_{C}(\overline{L},M^{w}),L-\overline{L}^{w}\right]\right]&:\hspace{4pt}\deg_{\overline{L}}f_{C}(\overline{L},M)\neq0,\\
f_{C}(M^{w})&:\hspace{4pt}\deg_{\overline{L}}f_{C}(\overline{L},M)=0.
\end{cases}
$$
\end{remark}
Recall that a slope on a torus $T=\partial\mathcal{M}_{K}$ is a simple closed curve $\gamma\subset\partial\mathcal{M}_{K}$ up to isotopy which does not bound a disc in $\partial\mathcal{M}_{K}$; a slope $\gamma$ can be denoted by a number $p/q\in\mathbb{Q}\cup\{\infty\}$ where $[\gamma]=[\lambda^{q}\mu^{p}]$ for the preferred framing $(\lambda,\mu)$ of $\partial\mathcal{M}_{K}$.
Note that the slope $\infty$ corresponds to the meridian $[\mu]$.
A {\it boundary slope} of a knot $K$ is a slope $\gamma$ in $\partial\mathcal{M}_{K}$ that is also the boundary of an essential surface in the knot exterior $\mathcal{M}_{K}$; a boundary slope can also be described using a number $p/q\in\mathbb{Q}\cup\{\infty\}$, similarly.
Here, a properly embedded surface $S$ in a 3-manifold is {\it essential} if $S$ is incompressible, orientable, boundary incompressible, not boundary parallel, and not a sphere.
The set of boundary slopes of the exterior of a knot $K$ is denoted $\mathcal{BS}_{K}$.
For a link $L$ of $n$-components, the set of boundary slope tuples $\mathcal{BS}_{L}$ is a collection of tuples $(m_{1},\ldots,m_{n})$ where each $m_{i}\in\mathbb{Q}\cup\{\infty,\varnothing\}$ corresponds to the slope of an essential surface along the $i$-th boundary component of $\mathcal{M}_{L}$, with $\varnothing$ denoting non-intersection with a particular component.
For a two-variable polynomial $f(L,M)=\sum_{i,j}c_{ij}L^{i}M^{j}$ the Newton polygon is the convex hull of the set of points $\left\{(i,j)\middle|c_{ij}\neq0\right\}$, denoted $\mathrm{Newt}(f)$.
The {\it strongly detected} boundary slopes of a knot are exactly the slopes of the edges of $\mathrm{Newt}(A_{K})$.
We denote the subset of strongly detected boundary slopes of a knot $K$ by $\mathcal{DS}_{K}$ to distinguish them from $\mathcal{BS}_{K}$.
Since $\mathrm{Newt}(A_{K})$ is the Minkowski sum of the Newton polygons of its factors, a factor $(LM^{r}-\delta)|A_{K}$ with $r\in\mathbb{Z}$ and $\delta\in\{\pm1\}$ contributes $r\in\mathcal{DS}_{K}$, sometimes called a {\it killing slope}.
For a knot $K\in\mathcal{G}_{\mathbb{Z}}$, the strongly detected boundary slopes $\mathcal{DS}_{K}$ can be read off by the power of $M$ in each factor, where at most two factors $LM^{r}+1$ or $LM^{r}-1$ contribute the same killing slope $r\in\mathcal{DS}_{K}$, allowing $r\in\mathbb{Z}$ up to normalization.
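As an illustration (helper names ours), the strongly detected slopes can be read off computationally from the support of $A_{K}$ via a convex hull:

```python
from fractions import Fraction

def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def detected_slopes(support):
    """Slopes of the edges of Newt(A_K), given the support
    {(i, j) : c_ij != 0}; vertical edges are reported as 'inf'."""
    hull = convex_hull(support)
    slopes = set()
    for a, b in zip(hull, hull[1:] + hull[:1]):
        slopes.add('inf' if a[0] == b[0] else Fraction(b[1] - a[1], b[0] - a[0]))
    return slopes

# Trefoil: A = (L - 1)(L M^6 + 1) has support {(0,0), (1,0), (1,6), (2,6)},
# recovering the boundary slopes {0, 6}.
```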
By Thurston's Geometrization Theorem, knots in $\mathbb{S}^{3}$ are either torus, hyperbolic, or satellite.
Every torus knot is a graph knot, and so will be in $\mathfrak{M}_{0}$ by Corollary~\ref{graphknotsgz}.
The balanced-irreducible factor $f_{K}$ from Remark~\ref{nohyp} cannot have $\mathrm{Newt}(f_{K})$ be a single edge, and so this factor $f_{K}$ will not be a cyclotomic polynomial evaluated on a Laurent monomial in $L$ and $M$; hence $\mathfrak{M}_{0}$ contains no hyperbolic knots.
Also, satellite knots $K=\mathrm{Sat}(P,C,f)$ with a hyperbolic companion knot $C$ and embedded pattern knot $f(P)$ with nonzero winding number are not contained in $\mathfrak{M}_{0}$, since the factor $f_{C}$ from Remark~\ref{nohyp} will extend to a factor $f_{K}$ of the satellite knot with $\mathrm{m}(f_{K})>0$ by Remark~\ref{nohypcomp}.
To address Conjecture~\ref{apolyvol}, it suffices to understand which satellite knots are in $\mathfrak{M}_{0}$ and if any of them have positive hyperbolic volume.
\begin{corollary}
\label{m0nohyp}
There are no hyperbolic knots in $\mathfrak{M}_{0}$.
\end{corollary}
\begin{corollary}
\label{m0nohypsat}
If the winding number $w$ of an embedded pattern knot $f(P)\subset V$ is nonzero and $C$ is a hyperbolic companion knot, then $\mathrm{Sat}(P,C,f)$ is not in $\mathfrak{M}_{0}$.
\end{corollary}
Our primary focus will be satellite knots $\mathrm{Sat}(P,C,f)$ with $f(P)\subset V$ winding number zero and companion knot $C$ a graph knot.
Additionally, we will calculate a special case of when $C$ is the figure-eight knot and the satellite operation is the $r$-twisted Whitehead double for $-11\leq r\leq11$.
Since every knot $K$ has the factor $(L-1)|A_{K}$ corresponding to the component in the representation variety $R(\mathcal{M}_{K})$ containing the abelian representations, the nontrivial factor of $A_{K}$ is denoted by $\widetilde{A}_{K}=(L-1)^{-1}A_{K}$.
By~\cite{nz_2017}, $A_{P}|A_{K}$ for any satellite knot $K=\mathrm{Sat}(P,C,f)$, so we denote by $\widetilde{F}_{K}=(A_{P})^{-1}A_{K}$ the factor of $A_{K}$ that is not contributed by the $A$-polynomial of the pattern knot; computation of $A_{\mathrm{Sat}(P,C,f)}$ then reduces to computing $A_{P}$ and $\widetilde{F}_{K}$.
For a killing slope $r\in\mathcal{DS}_{C}$, we will be interested in the knot $f(P)_{r}$ obtained from $f(P)$ in the 3-sphere $V(1/r)$ after $(1/r)$-Dehn filling; the knot exterior $\mathcal{M}_{f(P)_{r}}\cong V(1/r)-\overset{\circ}{N}(f(P))$ which is explained further in Section~\ref{zerodouble}.
There is an interesting connection between the $A$-polynomials $A_{f(P)_{r}}$ for each $r\in\mathcal{DS}_{C}$ and the $A$-polynomial of the satellite knot $A_{\mathrm{Sat}(P,C,f)}$ which suggests an approach to calculating the $A$-polynomials of many satellite knots.
\begin{theorem}
\label{wnzthm}
Let $C\in\mathcal{G}_{0}$ with strongly detected boundary slopes $\mathcal{DS}_{C}$, and let $K=\mathrm{Sat}(P,C,f)$ be a satellite knot whose embedded pattern knot $f(P)\subset V$ has winding number zero in $V$.
For each integer $r\in\mathcal{DS}_{C}$, if $V(1/r)$ is the $(1/r)$-Dehn filling of $V$, then $V(1/r)-\overset{\circ}{N}\left(f(P)\right)\cong\mathcal{M}_{f(P)_{r}}$ is the exterior of the knot $f(P)_{r}$.
The $A$-polynomial of $K=\mathrm{Sat}(P,C,f)$ is given in terms of the $A$-polynomials of $f(P)_{r}$ for each $r\in\mathcal{DS}_{C}$:
$$
A_{K}=\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right].
$$
\end{theorem}
Notice that the $A$-polynomial of the pattern knot appears as a factor, $A_{P}|A_{K}$, in agreement with this result: since $(LM^{0}-1)|A_{C}$, we have $0\in\mathcal{DS}_{C}$, and hence $A_{f(P)_{0}}=A_{P}$ is contained in the product on the right.
Also, for a given factor $(LM^{r}-\delta)$ for some $r\in\mathcal{DS}_{C}$, the choice of $\delta\in\{\pm1\}$ does not affect the corresponding factor, $\widetilde{A}_{f(P)_{r}}$.
Furthermore, we conjecture that this equality holds for every $C\in\mathcal{G}_{\mathbb{Z}}$:
\begin{conjecture}
\label{wnzconj}
Let $C\in\mathcal{G}_{\mathbb{Z}}$ with strongly detected boundary slopes $\mathcal{DS}_{C}$, and let $K=\mathrm{Sat}(P,C,f)$ be a satellite knot whose embedded pattern knot $f(P)\subset V$ has winding number zero in $V$.
Following the notation of Theorem~\ref{wnzthm}, the $A$-polynomial of $K=\mathrm{Sat}(P,C,f)$ is given by
$$
A_{K}=\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right].
$$
\end{conjecture}
This conjecture will be discussed in Section~\ref{proof2} after the proof of Theorem~\ref{wnzthm}; however, since the graph knots are contained in $\mathcal{G}_{\mathbb{Z}}$, Conjecture~\ref{apolyvol} would also imply the above conjecture.
The simplest nontrivial family of $A$-polynomials from Theorem~\ref{wnzthm} are the $n$-twisted Whitehead doubles of graph knots, written in terms of the $A$-polynomials of twist knots $K(n)$:
\begin{theorem}
\label{doublellin}
Let $C\in\mathcal{G}_{0}$ and let $\mathcal{DS}_{C}$ be the set of its strongly detected boundary slopes. Then the $n$-twisted Whitehead double of $C$, denoted $D_{n}(C)$, has $A$-polynomial:
\begin{align*}
A_{D_{n}(C)}=(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{K(n-r)}.
\end{align*}
\end{theorem}
Notice that this theorem omits the polynomial reduction.
The general construction of the $n$-twisted Whitehead double is given by Figure~\ref{fig1} in Section~\ref{zerodouble}, but this theorem can be used to immediately find many interesting families of $n$-twisted Whitehead doubles of graph knots, such as iterated torus knots and connected sums of torus knots, in terms of the $A$-polynomials of twist knots, which have known formulas by~\cite{hs_2004} and~\cite{mathews_2014}.
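For example, since $A_{T(3,2)}\doteq(L-1)(LM^{6}+1)$ gives $\mathcal{DS}_{T(3,2)}=\{0,6\}$, Theorem~\ref{doublellin} expresses the $A$-polynomial of the $n$-twisted Whitehead double of the trefoil as
$$
A_{D_{n}(T(3,2))}=(L-1)\,\widetilde{A}_{K(n)}\,\widetilde{A}_{K(n-6)}.
$$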
Theorem~\ref{doublellin} tells us that for any nontrivial knot $C$, $D_{n}(C)\not\in\mathcal{G}_{\mathbb{Z}}$, as discussed in Section~\ref{zerodouble}.
Generalizing further, we have the following result in terms of the $A$-polynomials of double twist knots $J(2m,2n)$ whose embedding is described with Figure~\ref{fig2} in Section~\ref{zerodouble}:
\begin{theorem}
\label{doubletwisteddouble}
Let $C\in\mathcal{G}_{0}$ and let $\mathcal{DS}_{C}$ be its set of strongly detected boundary slopes. Then the $(m,n)$-double twisted double of $C$, denoted $D_{m,n}(C)$, has $A$-polynomial:
\begin{align*}
A_{D_{m,n}(C)}=(L-1)\mathrm{Red}\left[\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{J(2m,2(n-r))}\right].
\end{align*}
\end{theorem}
Here, the $n$-twisted Whitehead double of any knot $C$ is the special case $m=1$: $D_{n}(C)=D_{1,n}(C)$.
Other examples such as $(m_{1},\ldots,m_{k},n)$-twisted two-bridge doubles and $n$-twisted pretzel doubles can be constructed in terms of $A_{f(P)_{r}}$ following Theorem~\ref{wnzthm}.
\begin{remark}
Explicit formulas for the $A$-polynomials of all $(m,n)$-double twist knots are not currently known; however, when $m$ is sufficiently small or $m=n$, we have formulas from Petersen~\cite{petersen_2015}.
Also, there are known symmetries of the double twist knots, such as $J(m,n)^{\ast}=J(-m,-n)$ and $J(m,n)=J(n,m)$, so we may assume that $m$ is always even (if both $m,n$ are odd, then $J(m,n)$ has two components).
\end{remark}
\begin{remark}
In Section \ref{whiteheadtwist}, we show that the $A$-polynomial of the Whitehead double over an arbitrary knot $C$ not in $\mathfrak{M}_{0}$ is much more involved.
For the figure-eight knot $C=K(-1)$, there are already difficulties in computing $A_{D_{n}(K(-1))}$ using resultant methods (or Groebner bases).
Note that the figure-eight is the simplest case of the more general problem of computing $A_{D_{n}(K(m))}$ for the twist knot $K(m)$ with $m\neq0,1$.
\end{remark}
In Section~\ref{apoly}, we remind the reader of the $A$-polynomial for knots in $\mathbb{S}^{3}$ and some of their properties.
In Section~\ref{knotfam}, we describe some families of knots, including torus knots, twist knots, satellite knots, graph knots, and integer pseudo-graph knots, as well as list relevant results about those knots.
In Section~\ref{proof1}, we prove Theorems~\ref{gzconnect} and~\ref{gzcable}, showing that all graph knots are in $\mathcal{G}_{\mathbb{Z}}$.
In Section~\ref{zerodouble}, we describe winding number zero satellite operations, discuss gaps in $A_{K}$, and show some results about representation varieties over winding number zero satellite knots when the companion knot is a graph knot.
In Section~\ref{rtwistedcomps}, we describe the twisted gluing relation used for explicit computations of $A_{D_{n}(C)}$, which can be used to computationally verify the results for when $C\in\mathcal{G}_{\mathbb{Z}}$ and is necessary for the calculations of $A_{D_{r}(K(-1))}$ from Section~\ref{whiteheadtwist}.
In Section~\ref{proof2}, we prove Theorem~\ref{wnzthm} with Theorems~\ref{doublellin} and~\ref{doubletwisteddouble} as special examples with known factors, and discuss Conjecture~\ref{wnzconj}.
In Section~\ref{whiteheadtwist}, we outline the resultant method for computing the $A$-polynomials of $D_{r}(K(-1))$.
In this case, a factor $Q_{K(-1),r}(L,M)$ appears in this resultant which cannot divide the $A$-polynomial because its Newton polygon has edges with slopes not in $\mathcal{BS}_{D_{r}(K(-1))}$.
Finally, in Section~\ref{conclusion}, we summarize and offer some remarks about further directions of investigation.
\section{The $A$-Polynomial}
\label{apoly}
The $A$-polynomial was defined by Cooper, Culler, Gillet, Long, and Shalen~\cite{cooperetal_1994}; we recall its construction here.
For a knot $K\subset\mathbb{S}^{3}$, its knot exterior is denoted $\mathcal{M}_{K}=\mathbb{S}^{3}-\overset{\circ}{N}(K)$ and its associated knot group, $\pi_{1}(\mathcal{M}_{K})$.
Within the knot group, the peripheral subgroup is denoted $\pi_{1}(\partial\mathcal{M}_{K})\cong\langle\mu_{K}\rangle\oplus\langle\lambda_{K}\rangle$ with generators $\mu_{K}$ (the meridian) and $\lambda_{K}$ (the preferred longitude) of $\partial\mathcal{M}_{K}$, and we call $(\lambda_{K},\mu_{K})$ the preferred framing of $\mathcal{M}_{K}$; here $\lambda_{K}$ is the homologically trivial longitudinal curve in $\pi_{1}(\mathcal{M}_{K})$ up to orientation.
The $SL_{2}\mathbb{C}$-representation variety of $\mathcal{M}_{K}$ is denoted $R(\mathcal{M}_{K})=\mathrm{Hom}\,(\pi_{1}(\mathcal{M}_{K}),SL_{2}\mathbb{C})$.
Taking our representations $\rho$ up to conjugacy class, we may find representations within those conjugacy classes which are upper-triangular on the peripheral subgroup and which satisfy the following, since $\mu_{K},\lambda_{K}$ commute:
\begin{align*}
\rho(\mu_{K})&=\begin{pmatrix}
M&\ast\\
0&M^{-1}\end{pmatrix}
&\rho(\lambda_{K})&=\begin{pmatrix}
L&\ast\\
0&L^{-1}\end{pmatrix}.
\end{align*}
The set of these representations is denoted by $R_{U}(\mathcal{M}_{K})$, and the projection map $\xi:R_{U}(\mathcal{M}_{K})\to\mathbb{C}^{2}$ given by $\xi(\rho)=(L,M)$ is well-defined, and the Zariski closure of the image $\overline{\mathrm{im}\,\xi}$ is a complex curve from which we can define a two-variable polynomial $A_{K}\in\mathbb{Z}[L,M]$ (unique up to sign) with:
\begin{enumerate}[(1)]
\item
$\overline{\mathrm{im}\,\xi}$ is the zero set of $A_{K}(L,M)$, that is $\overline{\mathrm{im}\,\xi}=\mathbb{V}(A_{K})$ where $\mathbb{V}(f)$ denotes the zero locus of polynomial $f$;
\item
the polynomial $A_{K}$ has no repeated factors and is in $\mathbb{Z}[L,M]$ after nonzero scaling;
\item
the polynomial $A_{K}$ can be normalized so that the coefficients are relatively prime.
\end{enumerate}
This polynomial is the {\it$A$-polynomial of $K$}, and $A_{K}$ is known to have only even powers of $M$:
$$
A_{K}(L,M)=\underset{i,j}{\sum}a_{i,2j}L^{i}M^{2j}.
$$
Here, we will only consider knots in $\mathbb{S}^{3}$, but for a more in-depth discussion, see~\cite{cooperetal_1994}.
For $(L,M)\in\mathbb{C}^{\ast}\times\mathbb{C}^{\ast}$, denote the involution $\tau(L,M)=(L^{-1},M^{-1})$ and say that a polynomial $f(L,M)$ is {\it balanced} if $f\circ\tau\doteq f$, that is,
$$
(f\circ\tau)(L,M)=\sigma L^{a}M^{b}f(L,M)
$$
for some $a,b\in\mathbb{Z}$ and $\sigma\in\{\pm1\}$.
\begin{remark}\cite{cooperetal_1994}
\label{trivfactor}
For any knot $K$, $(L-1)|A_{K}$; that is, $0\in\mathcal{DS}_{K}$.
\end{remark}
\begin{remark}\cite{cl_1997}
\label{symmetry}
For any knot $K$, $A_{K}$ is balanced.
\end{remark}
Therefore, for any irreducible factor $f|A_{K}$, either $f$ is balanced or its involution $(f\circ\tau)$ is also a factor of $A_{K}$.
We note that an irreducible factor which is the sum of two monomials in $L$ and $M$ is balanced.
\begin{remark}\cite{cl_1997}
\label{mirrored}
For any knot $K$, its mirror image $K^{\ast}$ has $A$-polynomial given by $A_{K^{\ast}}(L,M)\doteq A_{K}\left(L,M^{-1}\right)$.
\end{remark}
We will also make use of the {\it $SL_{2}\mathbb{C}$-character variety} of $\mathcal{M}_{K}$, where each character $\chi_{\rho}:\pi_{1}(\mathcal{M}_{K})\to\mathbb{C}$ is given by $\chi_{\rho}(g)=\mathrm{tr}\rho(g)$, and the character variety is denoted
$$
X(\mathcal{M}_{K})=\{\chi_{\rho}|\rho\in R(\mathcal{M}_{K})\}.
$$
A construction of the $A$-polynomial based on the character variety is provided in~\cite{cooperetal_1994}, which will be summarized here.
Note that for every balanced-irreducible factor $f_{0}|\widetilde{A}_{K}$, there is a component $X_{0}$ in $X(\mathcal{M}_{K})$ which contributes this factor.
The inclusion $i:\partial\mathcal{M}_{K}\to\mathcal{M}_{K}$ induces the map $\widehat{i}_{\ast}:X(\mathcal{M}_{K})\to X(\partial\mathcal{M}_{K})$, and the algebraic map $\tau:R(\partial\mathcal{M}_{K})\to X(\partial\mathcal{M}_{K})$ given by $\tau(\rho)=\chi_{\rho}$ restricts to a degree 2 regular surjective map on the subset $\Lambda\subset R(\partial\mathcal{M}_{K})$ consisting of representations which are diagonal on the generators $\mu_{K},\lambda_{K}$.
$$
\begin{tikzcd}
&R(\mathcal{M}_{K})\arrow[r,"\tau"]\arrow[d]&X(\mathcal{M}_{K})\arrow[d,"\widehat{i}_{\ast}"]\\
\mathbb{C}^{2}\supset\mathbb{C}^{\ast}\times\mathbb{C}^{\ast}&R(\partial\mathcal{M}_{K})\arrow[r,"\tau"]\arrow[l,"\xi"]&X(\partial\mathcal{M}_{K})\\
&\Lambda\arrow[u]\arrow[ru,"\tau|_{\Lambda}" below]\arrow[lu,"\xi"]&
\end{tikzcd}
$$
The Zariski closure $\overline{\xi\left(\left(\tau|_{\Lambda}\right)^{-1}\left(\overline{\widehat{i}_{\ast}(X_{0})}\right)\right)}=D_{0}$ is a 1-dimensional variety in $\mathbb{C}^{2}$ given by $\mathbb{V}(f_{0})=D_{0}$.
The projective completion $\widetilde{X}_{0}$ and ideal points $\widetilde{x}\in\widetilde{X}_{0}$ will be used in Section~\ref{zerodouble} in the discussion of gaps of $A_{K}$.
\section{Some Families of Knots}
\label{knotfam}
Let $T(p,q)$ denote the $(p,q)$-torus knot which is an embedded simple closed curve on an unknotted torus $T^{2}$ in $\mathbb{S}^{3}$ in the homotopy class $[\mu^{p}\lambda^{q}]\in\pi_{1}(T^{2})$ where $\mu,\lambda$ are the standard meridian and longitude curves on the torus and $p,q$ are relatively prime integers.
Also notice that $T(p,q)=T(q,p)$ (using the complementary solid torus in $\mathbb{S}^{3}$), so to avoid repetition we take the $(p,q)$-torus knot with $|p|>q\geq2$ for relatively prime $p,q$.
Notice that its mirror image $T(p,q)^{\ast}=T(-p,q)$.
The family of 2-bridge knots $J(k,\ell)$ with $k$ vertical half-twists and $\ell$ horizontal half-twists is referred to as the family of double twist knots, depicted below; for example, the right-handed trefoil knot is $3_{1}^{+}=T(3,2)=J(2,2)$.
$$
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(-1.45,.6) to ++(.3,0);
\strand
(.55,.6) to ++(.3,0);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\flipcrossings{1,3}
\end{knot}
\draw[very thick, fill=white] (0,.7) to ++(0,-.8) to ++(-1.2,0) to ++(0,.8) to ++(1.2,0) -- cycle;
\draw[very thick, fill=white] (-.2,3.3) to ++(-.8,0) to ++(0,-1.2) to ++(.8,0) to ++(0,1.2) -- cycle;
\draw[very thick, -latex] (-2.3,1.2) to ++(.1,0);
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\node[left] at (-2.9,1.2) {$a$};
\node[right] at (1.7,1.2) {$b$};
\node at (-.6,.3) {$\ell$};
\node at (-.6,2.7) {$k$};
\end{tikzpicture}
\hspace{20pt}
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(-1.45,.6) to ++(.3,0);
\strand
(.55,.6) to ++(.3,0);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-.25,0)
.. controls +(-.15,0) and +(.15,0) .. ++(-.55,-.3)
.. controls +(-.15,0) and +(.15,0) .. ++(-.3,.3)
to ++(-.25,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(.4,0)
.. controls +(.15,0) and +(-.15,0) .. ++(.3,.3)
.. controls +(.15,0) and +(-.15,0) .. ++(.3,-.3);
\flipcrossings{2,3}
\end{knot}
\draw[dotted] (0,-.1) to ++(0,.8) to ++(-1.2,0) to ++(0,-.8) to ++(1.2,0);
\draw[dotted] (-.2,2.1) to ++(-.8,0) to ++(0,1.2) to ++(.8,0) to ++(0,-1.2);
\draw[very thick, -latex] (-2.3,1.2) to ++(.1,0);
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\node[left] at (-2.9,1.2) {$a$};
\node[right] at (1.7,1.2) {$b$};
\node at (-.6,.9) {$+2$};
\node at (0,2.7) {$+2$};
\end{tikzpicture}
$$
The figure-eight knot is another double-twist knot, instead written as $4_{1}=J(2,-2)$.
More generally, for $n\in\mathbb{Z}$, we denote the $n$-twist knot as $K(n)=J(2,2n)$.
\begin{remark}
Here, we consider only the case where both $k,\ell$ are even, although there is also some interest in the case $\ell=2n+1$.
Using symmetry properties of the double twist knots, one can rewrite $J(k,\ell)=J(\ell,k)$, $J(k,\ell)^{\ast}=J(-k,-\ell)$, and $J(2,2n+1)=J(-2,2n)$.
When $k,\ell$ are both odd, $J(k,\ell)$ is a two component link, so these are not considered here.
\end{remark}
We denote the $(p,q)$-cabling over a knot $C$ by $[(p,q),C]$, whose construction is given in~\cite{nz_2017}.
If $C=T(r,s)$ is an $(r,s)$-torus knot, we may simply denote $[(p,q),T(r,s)]=[(p,q),(r,s)]$ and refer to this as an iterated torus knot.
A general iterated torus knot is similarly denoted by $[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$, an iterated cable over a $(p_{n},q_{n})$-torus knot.
Note that each $(p_{i},q_{i})$-cable only requires $p_{i},q_{i}$ relatively prime and $q_{i}\geq2$, but the last $T(p_{n},q_{n})$ additionally requires $\left|p_{n}\right|>q_{n}\geq2$ to be a torus knot.
We also note that the $(p,q)$-cable over the unknot is the $(p,q)$-torus knot $T(p,q)$ when $|p|>1$ and the unknot for $|p|=1$.
For two knots $K_{1},K_{2}$, we denote their connected sum $K_{1}\# K_{2}$.
Beginning with the unknot, the {\it graph knots} are then the collection of all knots closed under $(p,q)$-cabling and connected sums:
$$
\mathcal{G}_{0}:=\big\langle U\big|[(p,q),-],\#\big\rangle.
$$
Equivalently, a knot $K$ is a graph knot if and only if $\mathcal{M}_{K}$ is a graph manifold, {\it i.e.} the hyperbolic volume $\mathrm{Vol}(\mathcal{M}_{K})$ is zero.
Recall that the hyperbolic volume of a knot $K$ is the sum of the volumes of the hyperbolic pieces $\mathcal{M}_{i}$ in the JSJ-decomposition, $\mathrm{Vol}(\mathcal{M}_{K})=\sum_{i}\mathrm{Vol}(\mathcal{M}_{i})$.
The $(p,q)$-cabling and connected sum operations are examples of satellite operations.
In general, a satellite knot is a knot whose exterior contains an incompressible, non-boundary parallel torus.
These knots can be constructed from a companion knot $C\subset \mathbb{S}^{3}$, a pattern knot $P$, and a homeomorphism $f:\mathbb{S}^{3}\to\mathbb{S}^{3}$ so that $f(P)$ is contained in an unknotted solid torus $V$ satisfying
\begin{enumerate}
\item
$f(P)$ is not contained in a 3-ball in $V$,
\item
$f(P)$ is not isotopic to the core curve of $V$, and
\item
$f(P)$ is isotopic to $P$ when viewed in $\mathbb{S}^{3}$.
\end{enumerate}
The gluing $\phi$ is an ``untwisted'' embedding $\phi:V\to N(C)$, that is, a homeomorphism from $V$ to a regular neighborhood of $C$ that sends the meridian of $V$ to the meridian of $N(C)$, and likewise for the preferred longitudes.
We denote the satellite knot by $\mathrm{Sat}(P,C,f)=\phi(f(P))$.
The following guarantees the existence of certain factors of the $A$-polynomial of a satellite knot:
\begin{remark}\cite{nz_2017}
\label{patternfactor}
For a satellite knot $K=\mathrm{Sat}(P,C,f)$ with companion knot $C$ and pattern knot $P$, $A_{P}|A_{K}$.
\end{remark}
For a satellite knot $K=\mathrm{Sat}(P,C,f)$, we denote the factor of the $A$-polynomial not contributed by the pattern knot by $\widetilde{F}_{K}=(A_{P})^{-1}A_{K}$.
Since $K_{1}\#K_{2}$ is a satellite knot where either $K_{1}$ or $K_{2}$ can be considered as the pattern knot and the other as the companion knot, we note the following corollary.
\begin{corollary}
\label{connectedfactor}
For the connected sum $K_{1}\#K_{2}$ of two knots $K_{i}$, we have $A_{K_{1}}|A_{K_{1}\#K_{2}}$ and $A_{K_{2}}|A_{K_{1}\#K_{2}}$; in particular, $\mathrm{Red}\left[(L-1)\widetilde{A}_{K_{1}}\widetilde{A}_{K_{2}}\right]\Big|A_{K_{1}\#K_{2}}$.
\end{corollary}
However, there may be other factors in $\widetilde{F}_{K_{1}\#K_{2}}$, and so the difficulty in computing $A_{K_{1}\#K_{2}}$ is computing these factors or showing none exist.
We now focus on the {\it integer pseudo-graph knots} $\mathcal{G}_{\mathbb{Z}}$, the family of knots $K$ where every irreducible factor of $A_{K}$ is of the form $(LM^{r}-\delta)$ for $r\in\mathbb{Z}$ and $\delta\in\{\pm1\}$:
\begin{align*}
\mathcal{G}_{\mathbb{Z}}:=&\left\{K\subset \mathbb{S}^{3}\middle|A_{K}\doteq\prod_{j\in J}(LM^{r_{j}}-\delta_{j}),r_{j}\in\mathbb{Z},\delta_{j}\in\{\pm1\}\right\}.
\end{align*}
Remark~\ref{trivfactor} tells us that the factor $(L-1)$ with $r=0$ and $\delta=1$ will always occur in the $A$-polynomial.
The torus knots and unknot are contained in $\mathcal{G}_{\mathbb{Z}}$ by~\cite{cooperetal_1994}, with the formulas for their $A$-polynomials given below; furthermore, the formula for $A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]}$ from~\cite{nz_2017} given below implies that every iterated torus knot is also in $\mathcal{G}_{\mathbb{Z}}$.
\begin{remark}\cite{nz_2017}
\label{iteratedtorus}
\begin{enumerate}[(1)]
\item
The $A$-polynomial of a $(p,q)$-torus knot $T(p,q)$ is
$$
A_{T(p,q)}=(L-1)F_{(p,q)}(L,M).
$$
\item
The $A$-polynomial of an iterated torus knot $[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$ is
$$
A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]}=(L-1)\underset{i=1}{\overset{k}{\prod}}F_{(p_{i},q_{i})}\left(L,M^{\prod_{j=1}^{i-1}q_{j}^{2}}\right)\cdot\underset{i=k+1}{\overset{n}{\prod}}G_{(p_{i},q_{i})}\left(L,M^{\prod_{j=1}^{i-1}q_{j}^{2}}\right),
$$
\end{enumerate}
where $q_{k}$ is the first even integer in the iterated cabling and the functions $F_{(p,q)}$, $G_{(p,q)}$ are as described below:
\begin{align*}
F_{(p,q)}(L,M)&=
\begin{cases}
LM^{2p}+1&:\hspace{4pt}q=2,p>0\\
L+M^{-2p}&:\hspace{4pt}q=2,p<0\\
L^{2}M^{2pq}-1&:\hspace{4pt}q>2,p>0\\
L^{2}-M^{-2pq}&:\hspace{4pt}q>2,p<0,
\end{cases}
&G_{(p,q)}(L,M)&=
\begin{cases}
LM^{pq}-1&:\hspace{4pt}p>0\\
L-M^{-pq}&:\hspace{4pt}p<0.
\end{cases}
\end{align*}
\end{remark}
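As a concrete illustration of these formulas, the right-handed trefoil $T(3,2)$ has $q=2$ and $p=3>0$, so
$$
A_{T(3,2)}=(L-1)F_{(3,2)}(L,M)=(L-1)(LM^{6}+1),
$$
while $T(5,3)$ has $q=3>2$ and $p=5>0$, so $A_{T(5,3)}=(L-1)(L^{2}M^{30}-1)$; in each case every irreducible factor is a sum of two monomials in $L$ and $M$, so these knots lie in $\mathcal{G}_{\mathbb{Z}}$.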
We may also consider the ``non-normalized'' forms of $F_{(p,q)},G_{(p,q)}$ as
\begin{align*}
F_{(p,q)}(L,M)&\doteq
\begin{cases}
LM^{2p}+1&:\hspace{4pt}q=2\\
L^{2}M^{2pq}-1&:\hspace{4pt}q>2,
\end{cases}
&G_{(p,q)}(L,M)&\doteq
LM^{pq}-1.
\end{align*}
It is also worth noting that these polynomials are products of cyclotomic polynomials $\Phi_{n}(t)$ evaluated on monomials in $L$ and $M$:
\begin{align*}
F_{(p,q)}(L,M)&\doteq\begin{cases}
\Phi_{2}(LM^{2p})&:\hspace{4pt}q=2\\
\Phi_{2}(LM^{pq})\Phi_{1}(LM^{pq})&:\hspace{4pt}q>2,
\end{cases}
&G_{(p,q)}(L,M)&\doteq\Phi_{1}(LM^{pq}).
\end{align*}
\section{Proofs of Theorems~\ref{gzconnect} and~\ref{gzcable}}
\label{proof1}
To prove Theorem~\ref{gzconnect}, we recall some ideas about connected sums and utilize the notation of an amalgamated representation $\rho_{1}\ast\rho_{2}$ from Cooper, Long~\cite{cl_1997}.
For an $SL_{2}\mathbb{C}$-representation over an amalgamated product $\rho:G_{1}\ast_{H}G_{2}\to SL_{2}\mathbb{C}$, if $\rho$ restricts to representations on the subgroups $G_{i}$ as $\rho|_{G_{i}}=\rho_{i}$ such that these representations agree along the group $H$, $\rho_{1}|_{H}=\rho_{2}|_{H}$, then we may simply write $\rho=\rho_{1}\ast\rho_{2}$ when the amalgamation is understood.
For a connected sum of knots $K_{1}\#K_{2}$, it is known that the knot exterior decomposes as $\mathcal{M}_{K_{1}\#K_{2}}=\mathcal{M}_{K_{1}}\cup_{A}\mathcal{M}_{K_{2}}$ along a properly embedded gluing annulus $A$ whose boundary $\partial A$ consists of two meridian curves, one in $\partial\mathcal{M}_{K_{1}}$ and one in $\partial\mathcal{M}_{K_{2}}$.
In either knot exterior $\mathcal{M}_{K_{i}}$, the preferred framing can be taken to be $(\lambda_{i},\mu)$ where $\mu$ is one of the components of $\partial A$ and $\lambda_{i}$ is the boundary of a properly embedded Seifert surface $F_{i}$ in $\mathcal{M}_{K_{i}}$.
We may also isotope the surfaces so that $F_{1}\cap A=F_{2}\cap A$ is a collection of curves from one boundary component of $A$ to the other.
A minimal Seifert surface $F$ in $\mathcal{M}_{K_{1}\#K_{2}}$ can then be obtained as the band connected sum of $F_{1}$ and $F_{2}$ along their common intersection with $A$.
The homotopy class $[\partial F]$ in $\pi_{1}(\mathcal{M}_{K_{1}})\ast_{\pi_{1}(A)}\pi_{1}(\mathcal{M}_{K_{2}})$ can be represented by the preferred longitude $\lambda=\lambda_{1}\lambda_{2}$, and therefore the preferred framing of $\mathcal{M}_{K_{1}\#K_{2}}$ is $(\lambda_{1}\lambda_{2},\mu)$ since $\mathcal{M}_{K_{i}}$ can be assumed to have a common meridian $\mu$ component of $\partial A$.
If $\rho_{i}:\pi_{1}(\mathcal{M}_{K_{i}})\to SL_{2}\mathbb{C}$ are representations which agree on the common meridian $\mu$ as above, then we may conjugate so that $\rho_{i}(\mu)$ is upper-triangular, which implies that each $\rho_{i}(\lambda_{i})$ is also upper-triangular.
Since $\rho_{1}(\mu)=\rho_{2}(\mu)$, note that the eigenvalue maps $\xi:R(\mathcal{M}_{K_{i}})\to\mathbb{C}^{2}$ will have $\xi(\rho_{i})=(L_{i},M)$.
Since these representations agree along the gluing annulus, they will extend to a representation $\rho=\rho_{1}\ast\rho_{2}\in R(\mathcal{M}_{K_{1}\#K_{2}})$ such that $\rho(\lambda_{i})=\rho_{i}(\lambda_{i})$, and therefore $\rho(\lambda)=\rho_{1}(\lambda_{1})\rho_{2}(\lambda_{2})$.
Hence, the eigenvalue map $\xi:R(\mathcal{M}_{K_{1}\#K_{2}})\to\mathbb{C}^{2}$ will satisfy $\xi(\rho)=(L_{1}L_{2},M)$, as described in Cooper, Long:
\begin{lemma}\cite{cl_1997}
\label{cooperlong}
For two knots $K_{1},K_{2}$ with representations $\rho_{i}:\pi_{1}(\mathcal{M}_{K_{i}})\to SL_{2}\mathbb{C}$ and eigenvalue maps $\xi(\rho_{i})=(L_{i},M)$, the amalgamated representation $\rho=\rho_{1}\ast\rho_{2}$ over their connected sum exists if and only if $\rho_{1},\rho_{2}$ agree on the meridian.
In this case, $\xi(\rho)=(L_{1}L_{2},M)$.
\end{lemma}
\subsection*{Proof of Theorem~\ref{gzconnect}}
By Lemma \ref{cooperlong}, there is a representation $\rho=\rho_{1}\ast\rho_{2}\in R(\mathcal{M}_{K_{1}\#K_{2}})$ if and only if there are representations $\rho_{i}$ which agree along the meridian; in this case, the eigenvalue maps $\xi(\rho_{i})=(L_{i},M)$ extend to $\xi(\rho)=(L_{1}L_{2},M)$, and hence $L=L_{1}L_{2}$.
This implies that we have the following three equations in variables $L,L_{1},L_{2},M$: $A_{K_{1}}(L_{1},M)=0$, $A_{K_{2}}(L_{2},M)=0$, and $L-L_{1}L_{2}=0$.
Assuming that $K_{1},K_{2}\in\mathcal{G}_{\mathbb{Z}}$, let $A_{K_{1}}\doteq\prod_{i\in I}(LM^{r_{i}}-\delta_{i})$ and $A_{K_{2}}\doteq\prod_{j\in J}(LM^{s_{j}}-\delta_{j})$ for $r_{i},s_{j}\in\mathbb{Z}$, $\delta_{i},\delta_{j}\in\{\pm1\}$, and finite indexing sets $I,J$.
Hence, for every pair of irreducible factors $f_{i}=(LM^{r_{i}}-\delta_{i})|A_{K_{1}}$ and $g_{j}=(LM^{s_{j}}-\delta_{j})|A_{K_{2}}$, there is a corresponding polynomial factor of $A_{K_{1}\#K_{2}}$.
If the factor $f_{i}(L_{1},M)=L_{1}-1$, then we find $L_{1}=1$ which contributes $g_{j}(L,M)|A_{K_{1}\# K_{2}}$, which is already known by Corollary \ref{connectedfactor}; similarly, the factor $g_{j}(L_{2},M)=L_{2}-1$ contributes the known factor $f_{i}(L,M)|A_{K_{1}\#K_{2}}$.
Otherwise, let $f_{i}=(L_{1}M^{r_{i}}-\delta_{i})|A_{K_{1}}$ and $g_{j}=(L_{2}M^{s_{j}}-\delta_{j})|A_{K_{2}}$ be generic factors respectively, with $r_{i},s_{j}\in\mathbb{Z}$ and $\delta_{i},\delta_{j}\in\{\pm1\}$.
Solving $f_{i}(L_{1},M)=0$ and $g_{j}(L_{2},M)=0$ for $L_{i}$ gives $L_{1}=\delta_{i}M^{-r_{i}}$ and $L_{2}=\delta_{j}M^{-s_{j}}$; hence $L=L_{1}L_{2}=(\delta_{i}M^{-r_{i}})(\delta_{j}M^{-s_{j}})$ and so $LM^{r_{i}+s_{j}}-\delta_{i}\delta_{j}=0$.
Therefore, up to normalization, $(LM^{r_{i}+s_{j}}-\delta_{i}\delta_{j})|A_{K_{1}\#K_{2}}$ and so $\mathrm{Red}\left[(L-1)\prod_{i,j}(LM^{r_{i}+s_{j}}-\delta_{i}\delta_{j})\right]$ divides $A_{K_{1}\#K_{2}}$.
To make sure that isolated points $(L_{i},M)=\xi(\rho_{i})$ for $\rho_{i}\in R(\mathcal{M}_{K_{i}})$ do not contribute new factors of $A_{K_{1}\#K_{2}}$, we let $(L_{1},M)=\xi(\rho_{1})$ be an isolated point, hence $M\in\mathbb{C}^{\ast}$ must be fixed.
If $\rho_{1}$ extends to some representation $\rho=\rho_{1}\ast\rho_{2}\in R(\mathcal{M}_{K_{1}\#K_{2}})$, then there must exist a representation $\rho_{2}\in R(\mathcal{M}_{K_{2}})$ so that $\xi(\rho_{2})=(L_{2},M)$ for some $L_{2}\in\mathbb{C}^{\ast}$, however we either have $(L_{2},M)$ also an isolated point or $(L_{2},M)\in\mathbb{V}(g_{j})$ for some factor $g_{j}|A_{K_{2}}$ which uniquely determines $L_{2}=\delta_{j}M^{-s_{j}}$.
Hence if the representation $\rho_{1}$ extends to $\rho\in R(\mathcal{M}_{K_{1}\#K_{2}})$, the point $(L_{1}L_{2},M)=\xi(\rho)$ is still an isolated point.
A similar argument shows isolated points $(L_{2},M)=\xi(\rho_{2})$ will contribute only isolated points $(L_{1}L_{2},M)$.
Thus, there are no other factors, which proves the formula for computation of $A_{K_{1}\# K_{2}}$ for $K_{i}\in\mathcal{G}_{\mathbb{Z}}$.
\null\hfill$\square$\\\\
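As a small worked example of Theorem~\ref{gzconnect} (computed here directly from the factor data), take $K_{1}=K_{2}=T(3,2)$, so that $\widetilde{A}_{K_{i}}\doteq LM^{6}+1$ with $(r,\delta)=(6,-1)$.
The pairs of factors contribute the trivial factor $(L-1)$, the factors $(LM^{6}+1)$ already guaranteed by Corollary~\ref{connectedfactor}, and the new factor with $r_{i}+s_{j}=12$ and $\delta_{i}\delta_{j}=1$:
$$
A_{T(3,2)\#T(3,2)}\doteq\mathrm{Red}\left[(L-1)(LM^{6}+1)^{2}(LM^{12}-1)\right]\doteq(L-1)(LM^{6}+1)(LM^{12}-1).
$$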
We can use the above proof to construct an unreduced, non-normalized formula for the $A$-polynomial of connected sums of integer pseudo-graph knots, noting that $L-1$ is one of the factors in this product.
Notice that Theorem~\ref{gzconnect} can be generalized inductively to an arbitrary number of connected sums very easily:
\begin{corollary}
\label{torconnected}
Let $K_{1},\ldots,K_{n}\in\mathcal{G}_{\mathbb{Z}}$ where $A_{K_{i}}\doteq\underset{j_{i}\in J_{i}}{\prod}(LM^{r_{j_{i}}}-\delta_{j_{i}})$ with $r_{j_{i}}\in\mathbb{Z}$ and $\delta_{j_{i}}\in\{\pm1\}$ for $i=1,\ldots,n$.
Denote by $\mathbf{j}=(j_{1},\ldots,j_{n})$ where the $i$-th component $j_{i}$ corresponds to some factor $(LM^{r_{j_{i}}}-\delta_{j_{i}})$ of $A_{K_{i}}$, and let $\mathbf{J}=J_{1}\times\cdots\times J_{n}$ be the indexing set of all such $\mathbf{j}$, then
$$
A_{\#_{i=1}^{n}K_{i}}\doteq\mathrm{Red}\left[\underset{\mathbf{j}\in\mathbf{J}}{\prod}\left(LM^{\underset{i=1}{\overset{n}{\sum}}r_{j_{i}}}-\underset{i=1}{\overset{n}{\prod}}\delta_{j_{i}}\right)\right].
$$
\end{corollary}
This implies that we may take connected sums of as many knots in $\mathcal{G}_{\mathbb{Z}}$ as desired and the resulting $A$-polynomial can be found by considering combinations of factors from each component.
Also, notice that since $\widetilde{A}_{T(p,q)}=F_{(p,q)}(L,M)$ is explicitly given by Remark~\ref{iteratedtorus}, we immediately see that Corollary \ref{torconnected} is a consequence of Theorem~\ref{gzconnect}.
\begin{remark}
It is worth noting that the $A$-polynomial does not completely distinguish knots in $\mathcal{G}_{0}$.
Different torus knots can have equivalent $A$-polynomials, for example $A_{T(10,3)}=A_{T(6,5)}$.
Furthermore, by the work of Ni and Zhang, distinct cables over torus knots can have equivalent $A$-polynomials, such as $A_{[(13,15),(11,7)]}=A_{[(65,3),(275,7)]}$.
From Theorem~\ref{gzconnect} and the immediate Corollary~\ref{torconnected}, we find that there are infinitely many distinct connected sums of torus knots with equivalent $A$-polynomials.
For example, $A_{T(15,7)\# T(17,11)}=A_{T(21,5)\# T(17,11)}$.
\end{remark}
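The combinatorics behind such coincidences are easy to check by machine. The following Python sketch (our own illustration, not code from the cited references) records the $A$-polynomial of a knot in $\mathcal{G}_{\mathbb{Z}}$ as a set of pairs $(r,\delta)$, one per factor $(LM^{r}-\delta)$, and implements the reduced product of Corollary~\ref{torconnected}:

```python
from itertools import product

def torus_factors(p, q):
    """Factor data of A_{T(p,q)}: pairs (r, delta) encoding (L*M^r - delta).

    Up to the \\doteq normalization, F_{(p,2)} = L*M^{2p} + 1 and, for q > 2,
    F_{(p,q)} = L^2*M^{2pq} - 1 = (L*M^{pq} - 1)(L*M^{pq} + 1); the trivial
    factor (L - 1) is the pair (0, 1).  This covers both signs of p.
    """
    if q == 2:
        return {(0, 1), (2 * p, -1)}
    return {(0, 1), (p * q, 1), (p * q, -1)}

def connected_sum(*summands):
    """Reduced factor set of A_{K_1 # ... # K_n}: for each choice of one
    factor per summand, slopes add and signs multiply."""
    factors = set()
    for choice in product(*summands):
        r = sum(f[0] for f in choice)
        delta = 1
        for _, d in choice:
            delta *= d
        factors.add((r, delta))
    return factors

# The coincidence noted above: F_{(15,7)} and F_{(21,5)} are both
# L^2*M^{210} - 1, so the connected sums with T(17,11) agree.
lhs = connected_sum(torus_factors(15, 7), torus_factors(17, 11))
rhs = connected_sum(torus_factors(21, 5), torus_factors(17, 11))
print(lhs == rhs)  # True
```

For instance, `connected_sum(torus_factors(3, 2), torus_factors(3, 2))` returns $\{(0,1),(6,-1),(12,1)\}$, i.e.\ the factor data of $(L-1)(LM^{6}+1)(LM^{12}-1)$.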
This process can be used for arbitrary connected sums of torus knots $\#_{i=1}^{n}T(p_{i},q_{i})$ by noticing that any factor $(L^{2}M^{2r}-1)=(LM^{r}+1)(LM^{r}-1)$ splits, so each linear factor can be handled separately; when we ``combine'' any factor $(LM^{r_{1}}-\delta_{1})$ with $(L^{2}M^{2r_{2}}-1)$, we get a new factor $(L^{2}M^{2(r_{1}+r_{2})}-1)$ independent of $\delta_{1}$.
Similar combinatorial formulas will emerge as consequences of this connected sum formula, but we now move on to the proof of closure of $\mathcal{G}_{\mathbb{Z}}$ under the $(p,q)$-cabling operation.
\begin{lemma}\cite[Theorem 2.8]{nz_2017}
\label{generalcabling}
For $q\geq2$, the $(p,q)$-cable $[(p,q),C]$ over any companion knot $C$ has $A$-polynomial
$$
A_{[(p,q),C]}=\begin{cases}
\mathrm{Red}\left[(L-1)F_{(p,q)}(L,M)\mathrm{Res}_{\overline{L}}\left[\widetilde{A}_{C}\left(\overline{L},M^{q}\right),L-\overline{L}^{q}\right]\right]&:\deg_{L}(\widetilde{A}_{C})\neq0\\
(L-1)F_{(p,q)}(L,M)\widetilde{A}_{C}(M^{q})&:\deg_{L}(\widetilde{A}_{C})=0.
\end{cases}
$$
\end{lemma}
Since knots in $\mathcal{G}_{\mathbb{Z}}$ will not have $\deg_{L}\left(\widetilde{A}_{C}\right)=0$ unless $C=U$, we can now prove Theorem~\ref{gzcable}:
\subsection*{Proof of Theorem \ref{gzcable}}
Let $C\in\mathcal{G}_{\mathbb{Z}}$ such that $A_{C}\doteq\prod_{j\in J}(LM^{r_{j}}-\delta_{j})$.
In the case that $\deg_{L}(\widetilde{A}_{C})=0$, we have $C=U$, and so we consider $[(p,q),U]$, which is either a torus knot or the unknot and hence in $\mathcal{G}_{\mathbb{Z}}$ by Remark~\ref{iteratedtorus} (1).
When $\deg_{L}(\widetilde{A}_{C})\neq0$, Lemma~\ref{generalcabling} implies that $F_{(p,q)}(L,M)|A_{[(p,q),C]}$ as before, and each factor $(LM^{r_{j}}-\delta_{j})$ of $\widetilde{A}_{C}$ contributes a factor of $A_{[(p,q),C]}$ given by the resultant $\mathrm{Res}_{\overline{L}}\left[\widetilde{A}_{C}\left(\overline{L},M^{q}\right),L-\overline{L}^{q}\right]$.
By general properties of the resultant and the definition of $\widetilde{A}_{C}$, we know
\begin{align*}
\mathrm{Res}_{\overline{L}}\left[\widetilde{A}_{C}(\overline{L},M^{q}),L-\overline{L}^{q}\right]&=\mathrm{Res}_{\overline{L}}\left[\underset{j\in J}{\prod}(\overline{L}(M^{q})^{r_{j}}-\delta_{j}),L-\overline{L}^{q}\right]\\
&=\mathrm{Red}\left[\underset{j\in J}{\prod}\mathrm{Res}_{\overline{L}}\left[\overline{L}M^{r_{j}q}-\delta_{j},L-\overline{L}^{q}\right]\right].
\end{align*}
We can take this resultant directly from the Sylvester matrix:
\begin{align*}
\mathrm{Res}_{\overline{L}}\left[\overline{L}M^{r_{j}q}-\delta_{j},L-\overline{L}^{q}\right]&\doteq
\det\begin{pmatrix}
-\delta_{j}&M^{r_{j}q}&&0\\
&\ddots&\ddots&\\
0&&-\delta_{j}&M^{r_{j}q}\\
L&0&0&-1
\end{pmatrix}\\
&\doteq(-1)^{q}L\left(M^{r_{j}q}\right)^{q}-(-\delta_{j})^{q}\\
&\doteq LM^{r_{j}q^{2}}-\delta_{j}^{q}.
\end{align*}
Again, we find that the corresponding factors of $A_{[(p,q),C]}$ will kill the integer slope $r_{j}q^{2}\in\mathbb{Z}$, and therefore, every such factor $(LM^{r_{j}q^{2}}-\delta_{j}^{q})|A_{[(p,q),C]}$.
Since all of the factors of $A_{[(p,q),C]}$ up to polynomial reduction are of this form by~\cite{nz_2017}, it follows that $[(p,q),C]\in\mathcal{G}_{\mathbb{Z}}$.
\null\hfill$\square$\\\\
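The resultant computed above via the Sylvester matrix can also be checked with a computer algebra system. The following sympy sketch (illustrative only, with sample values for $r$, $q$, $\delta$) confirms that $\mathrm{Res}_{\overline{L}}\left[\overline{L}M^{rq}-\delta,L-\overline{L}^{q}\right]$ agrees with $LM^{rq^{2}}-\delta^{q}$ up to sign:

```python
from sympy import symbols, resultant, expand

L, Lbar, M = symbols("L Lbar M")

def cabled_factor(r, q, delta):
    """Res_{Lbar}[Lbar*M^(r*q) - delta, L - Lbar^q]; up to sign this should
    be L*M^(r*q^2) - delta^q, matching the Sylvester-matrix computation."""
    return resultant(Lbar * M**(r * q) - delta, L - Lbar**q, Lbar)

# Sample check with r = 1, q = 3, delta = -1: expect +/- (L*M^9 + 1).
res = cabled_factor(1, 3, -1)
expected = L * M**9 - (-1)**3
print(expand(res - expected) == 0 or expand(res + expected) == 0)  # True
```

The same call with $r=6$, $q=3$, $\delta=-1$ recovers, up to sign, the factor $LM^{54}+1$ computed for the satellite example later in this section.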
As with Theorem~\ref{gzconnect}, a simple argument gives a similar formula for the $A$-polynomial of an iterated cable over an integer pseudo-graph knot:
\begin{corollary}
For $C\in\mathcal{G}_{\mathbb{Z}}$ with $A_{C}\doteq\prod_{j\in J}(LM^{r_{j}}-\delta_{j})$, where $r_{j}\in\mathbb{Z}$ and $\delta_{j}\in\{-1,1\}$, and for $p_{i},q_{i}$ relatively prime with $q_{i}\geq2$ for each $i=1,\ldots,n$ and $\left|p_{n}\right|>q_{n}\geq2$,
$$
A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n}),C]}\doteq
\mathrm{Red}\left[A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]}\underset{j\in J}{\prod}\left(LM^{r_{j}\underset{i=1}{\overset{n}{\prod}}q_{i}^{2}}-\delta_{j}^{\underset{i=1}{\overset{n}{\prod}}q_{i}}\right)\right].
$$
\end{corollary}
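For example (computed here from the corollary and cross-checked against Remark~\ref{iteratedtorus}), let $C=T(3,2)$, whose nontrivial factor is $(LM^{6}+1)$, i.e. $(r,\delta)=(6,-1)$, and take the $(7,2)$-cable, so $n=1$ and $q_{1}=2$.
The prefactor is $A_{T(7,2)}=(L-1)(LM^{14}+1)$, the trivial factor of $A_{C}$ reproduces $(L-1)$, and the nontrivial factor contributes $\left(LM^{6\cdot2^{2}}-(-1)^{2}\right)$, giving
$$
A_{[(7,2),T(3,2)]}\doteq(L-1)(LM^{14}+1)(LM^{24}-1),
$$
in agreement with Remark~\ref{iteratedtorus} applied to the iterated torus knot $[(7,2),(3,2)]$, where $G_{(3,2)}\left(L,M^{4}\right)=LM^{24}-1$.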
The factor of $A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]}$ is consistent with Remark~\ref{patternfactor} since we may think of the iterated cabling $[(p_{1},q_{1}),\ldots,(p_{n},q_{n}),C]$ as having a pattern knot $P=[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$ when $T(p_{n},q_{n})$ is a torus knot, and therefore it follows directly from $A_{P}|A_{[(p_{1},q_{1}),\ldots,(p_{n},q_{n}),C]}$.
By Theorems \ref{gzconnect} and \ref{gzcable}, we see that $\mathcal{G}_{\mathbb{Z}}$ is closed under connected sums and $(p,q)$-cabling, and thus $\mathcal{G}_{0}\subset\mathcal{G}_{\mathbb{Z}}$; furthermore, the above corollaries provide a strategy for computing the $A$-polynomials of combinations of $(p,q)$-cables and connected sums of knots in $\mathcal{G}_{\mathbb{Z}}$.
As mentioned before, since every knot $K\in\mathcal{G}_{\mathbb{Z}}$ has an $A$-polynomial where each irreducible factor can be written as the sum of two monomials in $L$ and $M$, there are no hyperbolic knots in $\mathcal{G}_{\mathbb{Z}}$; more generally, recall there are no hyperbolic knots in $\mathfrak{M}_{0}$ by Corollary~\ref{m0nohyp}.
It suffices to understand whether any satellite knots which are not graph knots are in $\mathcal{G}_{\mathbb{Z}}$.
As we will show in Section~\ref{proof2}, Theorem~\ref{wnzthm} implies that $n$-twisted Whitehead doubles of graph knots are not in $\mathcal{G}_{\mathbb{Z}}$, as well as several other families of satellite knots.
So far, the graph knots $\mathcal{G}_{0}$ are the only known examples of knots in $\mathcal{G}_{\mathbb{Z}}$, and more widely in $\mathfrak{M}_{0}$; because all graph knots have zero hyperbolic volume, $\mathrm{Vol}(\mathcal{M}_{K})=0$, the known examples of knots in $\mathcal{G}_{\mathbb{Z}}$ support Conjecture~\ref{apolyvol}.
Since every graph knot has logarithmic Mahler measure zero by Theorems~\ref{gzconnect} and~\ref{gzcable}, the assertion of Conjecture~\ref{apolyvol} is that, conversely, if the $A$-polynomial of a knot $K$ satisfies $\mathrm{m}(A_{K})=0$, then $K$ is a graph knot.
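Indeed, the forward direction is a short computation (standard, recorded here for convenience): for any factor $(LM^{r}-\delta)$, Jensen's formula gives
$$
\mathrm{m}(LM^{r}-\delta)=\int_{0}^{1}\!\!\int_{0}^{1}\log\left|e^{2\pi i\theta}e^{2\pi ir\phi}-\delta\right|\,d\theta\,d\phi=\int_{0}^{1}\log^{+}\left|\delta e^{-2\pi ir\phi}\right|\,d\phi=0,
$$
and since $\mathrm{m}$ is additive over products, every $K\in\mathcal{G}_{\mathbb{Z}}$ has $\mathrm{m}(A_{K})=0$.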
In the next section, we will examine winding number zero satellites of graph knots; other examples to consider are satellite knots $\mathrm{Sat}(P,C,f)$ of nonzero winding number whose ``satellite space'' $V-\overset{\circ}{N}(f(P))$ has positive hyperbolic volume, for example $K=\mathrm{Sat}(U,T(3,2),f)$, where the embedding of the unknot $U$ in $V$ is given by the closure of the following solid cylinder:
$$
\begin{tikzpicture}[scale=2]
\draw[thick] (0,0) -- ++(1.1,0) arc (-90:90:.2 and .6) -- ++(-1.1,0) arc (90:270:.2 and .6);
\draw[thick, dashed] (0,0) arc (-90:90:.2 and .6);
\begin{knot}[
line width=1pt,
line join=round,
clip width=1,
scale=1,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1pt,
line cap=none
}
]
\strand
(0,.2) -- ++(.15,0)
.. controls +(.2,0) and +(-.2,0) .. ++(.4,.4)
.. controls +(.2,0) and +(-.2,0) .. ++(.4,.4)
-- ++(.15,0);
\strand
(0,.6) -- ++(.15,0)
.. controls +(.2,0) and +(-.2,0) .. ++(.4,-.4)
-- ++(.4,0)
-- ++(.15,0);
\strand
(0,1) -- ++(.15,0)
-- ++(.4,0)
.. controls +(.2,0) and +(-.2,0) .. ++(.4,-.4)
-- ++(.15,0);
\flipcrossings{1}
\end{knot}
\draw[thick] (1.1,0) arc (270:90:.2 and .6);
\draw[thick,fill=black] (0,.2) circle (1pt);
\draw[thick,fill=black] (0,.6) circle (1pt);
\draw[thick,fill=black] (0,1) circle (1pt);
\draw[thick,fill=black] (1.1,.2) circle (1pt);
\draw[thick,fill=black] (1.1,.6) circle (1pt);
\draw[thick,fill=black] (1.1,1) circle (1pt);
\node[above] at (.55,1.2) {$V$};
\node[above] at (.5,.6) {$U$};
\end{tikzpicture}
$$
Since $\mathrm{Vol}(\mathcal{M}_{K})=\mathrm{Vol}(V-\overset{\circ}{N}(f(U)))+\mathrm{Vol}(\mathcal{M}_{T(3,2)})>0$ (from SnapPy), we know $K$ is not a graph knot.
By Remark~\ref{nohypcomp}, since the winding number of $f(U)$ in $V$ is 3, each factor of $A_{T(3,2)}(\overline{L},M)=(\overline{L}-1)(\overline{L}M^{6}+1)$ extends to factors which are the sums of monomials in $L,M$ while $A_{P}=A_{U}=(L-1)$ contributes no nontrivial factor to $\widetilde{A}_{K}$.
The factor $(\overline{L}-1)$ contributes $\mathrm{Red}\left[\mathrm{Res}_{\overline{L}}\left[\overline{L}-1,L-\overline{L}^{3}\right]\right]=L-1$, and the factor $(\overline{L}M^{6}+1)$ contributes $\mathrm{Red}\left[\mathrm{Res}_{\overline{L}}\left[\overline{L}(M^{3})^{6}+1,L-\overline{L}^{3}\right]\right]=LM^{54}+1$.
This implies $(LM^{54}+1)|\widetilde{F}_{\mathrm{Sat}(U,T(3,2),f)}$ for the factor $\widetilde{F}_{\mathrm{Sat}(P,C,f)}=(A_{P})^{-1}A_{\mathrm{Sat}(P,C,f)}$ mentioned in Section~\ref{knotfam}; however, this factor may contain more nontrivial factors which may not be the sum of two monomials in $L,M$.
To see whether there are other factors, we need to know whether the irreducible representations $\rho_{2}\in R^{\ast}(\mathcal{M}_{2})$ can extend to representations on the companion knot side.
\section{Winding Number Zero Satellite Operations}
\label{zerodouble}
We call a satellite knot $K=\mathrm{Sat}(P,C,f)$ a {\it winding number zero satellite} if the embedded knot $f(P)\subset V$ has winding number zero in $V$.
An example of a winding number zero satellite is the $n$-twisted Whitehead double of $C$, $D_{n}(C)$.
To visualize the satellite operations, we illustrate the pattern knot $f(P)=\ell_{x}$ and the unknot $\ell_{y}$ so that the solid torus $V=\mathcal{M}_{\ell_{y}}$.
To construct the Whitehead double, we consider the untwisted Whitehead link $W=\ell_{x}\cup\ell_{y}$, where both $\ell_{x},\ell_{y}$ are unknots, or the $n$-twisted Whitehead link:
\begin{figure}[th]
$$
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(.85,.6) to ++(-.3,0);
\strand
(-.35,.5) to ++(.3,0);
\strand
(-.2,.5) arc (0:90:.15)
to ++(-.2,0)
arc (90:180:.15)
to ++(0,-.8);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\strand
(-.2,.5) to ++(0,-.8)
arc (0:-90:.15)
to ++(-.2,0)
arc (-90:-180:.15);
\flipcrossings{1,2}
\end{knot}
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\draw[very thick, -latex] (-.7,1) to ++(-.1,0);
\node[right] at (1.8,1.2) {$x$};
\node[right] at (0,1) {$y$};
\node at (1.7,2) {$\ell_{x}$};
\node at (-.1,-.4) {$\ell_{y}$};
\end{tikzpicture}
\hspace{10pt}
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(.85,.6) to ++(-.3,0);
\strand
(-.35,.5) to ++(.3,0);
\strand
(-.2,.5) arc (0:90:.15)
to ++(-.2,0)
arc (90:180:.15)
to ++(0,-.8);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\strand
(-.2,.5) to ++(0,-.8)
arc (0:-90:.15)
to ++(-.2,0)
arc (-90:-180:.15);
\flipcrossings{1,2}
\end{knot}
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\draw[very thick, -latex] (-.7,1) to ++(-.1,0);
\draw[thick,fill=white] (-2.8,.9) -- ++(1,0) -- ++(0,.6) -- ++(-1,0) -- ++(0,-.6) -- cycle;
\node[right] at (1.8,1.2) {$x$};
\node[right] at (0,1) {$y$};
\node at (1.7,2) {$\ell_{x}$};
\node at (-.1,-.4) {$\ell_{y}$};
\node at (-2.3,1.2) {$2n$};
\end{tikzpicture}
$$
\vspace*{0pt}
\caption{Untwisted Whitehead Link $W$ on the left and $n$-Twisted Whitehead Link on the right.\label{fig1}}
\end{figure}
The link exterior $\mathcal{M}_{W}=\mathbb{S}^{3}-\overset{\circ}{N}(W)$ will use the embedded $f(P)=\ell_{x}$ as the pattern knot for the untwisted double embedded into the solid torus $V=\mathcal{M}_{\ell_{y}}$.
The link group $\pi_{1}(\mathcal{M}_{W})$ has the following presentation,
$$
\pi_{1}(\mathcal{M}_{W})\cong\langle x,y|\Omega=\Omega^{\ast}\rangle
$$
where $x$ is the meridian generator coming from $\ell_{x}$, $y$ is the meridian generator coming from $\ell_{y}$, $x^{-1}=X$, $y^{-1}=Y$, the word $\Omega=yxYxyXyx$, and $\Omega^{\ast}$ denotes the reverse word of $\Omega$.
We also understand that a preferred framing of the two boundary tori $\partial N(\ell_{x})$ and $\partial N(\ell_{y})$ is given by meridians and longitudes
\begin{align*}
\mu_{x}&=x&\lambda_{x}&=XY\Omega YX=YxyXyxYX&\text{for }&\partial N(\ell_{x})\\
\mu_{y}&=y&\lambda_{y}&=YX\Omega XY=YXyxYxyX&\text{for }&\partial N(\ell_{y}).
\end{align*}
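As a quick consistency check, note that both longitudes have exponent sum zero in each generator; for instance, in $\lambda_{x}=YxyXyxYX$,
$$
\operatorname{exp}_{x}(\lambda_{x})=1-1+1-1=0,\qquad\operatorname{exp}_{y}(\lambda_{x})=-1+1+1-1=0,
$$
so $[\lambda_{x}]=0$ in $H_{1}(\mathcal{M}_{W};\mathbb{Z})$, consistent with the linking number of $\ell_{x}$ and $\ell_{y}$ being zero; the same computation applies to $\lambda_{y}$.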
For a knot $C$, the {\it$n$-twisted Whitehead double} $D_{n}(C)$ is given by $\mathrm{Sat}(K(n),C,f)$ where the embedded twist knot $f(K(n))=\ell_{x}$ in $V=\mathcal{M}_{\ell_{y}}$ is given as shown in Figure~\ref{fig1} with $n$-vertical full-twists.
Thus, the fundamental group of the knot exterior $\mathcal{M}_{D_{n}(C)}$ is the amalgamated free-product given by the Van Kampen theorem,
$$
\pi_{1}(\mathcal{M}_{D_{n}(C)})\cong\pi_{1}(\mathcal{M}_{1})\ast_{\pi_{1}(\partial N(\ell_{y}))}\pi_{1}(\mathcal{M}_{2}),
$$
where $\mathcal{M}_{1}=\mathcal{M}_{C}$ is called the {\it companion space} and $\mathcal{M}_{2}=V-\overset{\circ}{N}(\ell_{x})$ is called the {\it satellite space}.
Generalizing slightly, the Borromean rings (shown below) give us a way of understanding a more general family of winding number zero doubles, the {\it $(m,n)$-double twisted doubles}, denoted $D_{m,n}(C)$:
\begin{figure}[th]
$$
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(0,1.5) to ++(-.6,0);
\strand
(.3,1.3) to ++(-.3,0);
\strand
(.85,.6) to ++(-.3,0);
\strand
(-.35,.5) to ++(.3,0);
\strand
(-.2,.5) arc (0:90:.15)
to ++(-.2,0)
arc (90:180:.15)
to ++(0,-.8);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
to +(0,-.6)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
to +(0,.6)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\strand
(-.2,.5) to ++(0,-.8)
arc (0:-90:.15)
to ++(-.2,0)
arc (-90:-180:.15);
\strand
(0,1.5)
arc (90:0:.15)
to ++(0,-.1)
arc (0:-90:.15)
to ++(-.6,0)
arc (-90:-180:.15)
to ++(0,.1)
arc (180:90:.15);
\flipcrossings{1,2,5,6}
\end{knot}
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\draw[very thick, -latex] (-.7,1) to ++(-.1,0);
\draw[very thick, -latex] (0,2.6) to ++(-.1,0);
\node[right] at (1.8,1.2) {$x$};
\node[right] at (0,1) {$y$};
\node[right] at (.6,2.6) {$z$};
\node at (1.7,2) {$\ell_{x}$};
\node at (-.1,-.4) {$\ell_{y}$};
\node at (-1.7,2.6) {$\ell_{z}$};
\end{tikzpicture}
\hspace{10pt}
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(.85,.6) to ++(-.3,0);
\strand
(-.35,.5) to ++(.3,0);
\strand
(-.2,.5) arc (0:90:.15)
to ++(-.2,0)
arc (90:180:.15)
to ++(0,-.8);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\strand
(-.2,.5) to ++(0,-.8)
arc (0:-90:.15)
to ++(-.2,0)
arc (-90:-180:.15);
\flipcrossings{1,2}
\end{knot}
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\draw[very thick, -latex] (-.7,1) to ++(-.1,0);
\draw[thick,fill=white] (-2.8,.9) -- ++(1,0) -- ++(0,.6) -- ++(-1,0) -- ++(0,-.6) -- cycle;
\draw[very thick, fill=white] (-.2,3.3) to ++(-.8,0) to ++(0,-1.2) to ++(.8,0) to ++(0,1.2) -- cycle;
\node[right] at (1.8,1.2) {$x$};
\node[right] at (0,1) {$y$};
\node at (1.7,2) {$\ell_{x}$};
\node at (-.1,-.4) {$\ell_{y}$};
\node at (-2.3,1.2) {$2n$};
\node at (-.6,2.7) {$2m$};
\end{tikzpicture}
$$
\vspace*{0pt}
\caption{Borromean rings $B$ on the left and $(m,n)$-double twisted double satellite space on the right.\label{fig2}}
\end{figure}
We find that the fundamental group of the Borromean rings is
$$
\pi_{1}(\mathcal{M}_{B})\cong\langle x,y,z|[x,\lambda_{x}]=[y,\lambda_{y}]=[z,\lambda_{z}]=e\rangle,
$$
where
\begin{align*}
\lambda_{x}&=ZyzY&\lambda_{y}&=zxZX&\lambda_{z}&=YXyx.
\end{align*}
Performing $(1/m)$-Dehn surgery on $\mathcal{M}_{B}$ along the boundary component $\partial N(\ell_{z})$ affects the fundamental group by setting $z={\lambda_{z}}^{-m}=(YXyx)^{-m}$, and therefore the fundamental group of the satellite space on the right is given by:
$$
\pi_{1}(\mathcal{M}_{2})\cong\left\langle x,y\middle|[x,(YXyx)^{m}y(YXyx)^{-m}Y]=[y,(YXyx)^{-m}x(YXyx)^{m}X]=e\right\rangle.
$$
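As a sanity check on this presentation, setting $m=0$ makes both relators trivial, since $(YXyx)^{0}=e$:
$$
[x,(YXyx)^{0}y(YXyx)^{0}Y]=[x,yY]=[x,e]=e,
$$
and similarly for the second relator, so the presentation reduces to the free group on $x$ and $y$, as expected when the $2m$-twist box carries no twists.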
For winding number zero satellite knots in full generality, we let $f(P)=\ell_{x}$ be an embedded pattern knot in $V=\mathcal{M}_{\ell_{y}}$ so that $f(P)$ has winding number zero in $V$; thus $f(P)$ bounds a Seifert surface in $V$.
To understand how killing slopes $r\in\mathcal{DS}_{C}$ extend to representations in the satellite space, we consider the quotient map obtained from $(1/r)$-Dehn filling along $\partial V$, that is $V(1/r)-\overset{\circ}{N}(f(P))=\mathcal{M}_{f(P)_{r}}$.
The quotient map $Q_{r}:\mathcal{M}_{2}\to \mathcal{M}_{f(P)_{r}}$ given by $Q_{r}(\lambda_{y}^{r}\mu_{y})=e$ induces an onto homomorphism $Q_{r\ast}:\pi_{1}(\mathcal{M}_{2})\to\pi_{1}(\mathcal{M}_{f(P)_{r}})$ also satisfying $Q_{r\ast}(\lambda_{y}^{r}\mu_{y})=e$.
Here, we denote the companion knot exterior $\mathcal{M}_{1}=\mathcal{M}_{C}$ and satellite space $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$.
The image of the Seifert surface $S$ under $Q_{r}$ will remain a Seifert surface of $\mathcal{M}_{f(P)_{r}}$, hence the preferred framing $(\lambda_{x},\mu_{x})$ of $\partial N(\ell_{x})$ can be thought of as the preferred framing of $\mathcal{M}_{f(P)_{r}}$.
We will refer to the boundary components of $\mathcal{M}_{2}$ as the $x$- or $y$-boundary component, denoted $\partial_{x}\mathcal{M}_{2}=\partial N(\ell_{x})$ and $\partial_{y}\mathcal{M}_{2}=\partial N(\ell_{y})$ respectively.
By the Van Kampen theorem, the fundamental group of $\mathcal{M}_{\mathrm{Sat}(P,C,f)}$ is the amalgamated free-product given by
$$
\pi_{1}(\mathcal{M}_{\mathrm{Sat}(P,C,f)})\cong\pi_{1}(\mathcal{M}_{1})\ast_{\pi_{1}(\partial_{y}\mathcal{M}_{2})}\pi_{1}(\mathcal{M}_{2}),
$$
where the gluing $\phi:\partial\mathcal{M}_{1}\to\partial_{y}\mathcal{M}_{2}$ is given by $\phi(\mu_{C})=\lambda_{y}$ and $\phi(\lambda_{C})=\mu_{y}{\lambda_{y}}^{-n}$.
Hence, we may consider $\lambda_{y}=\mu_{C}$ and $\mu_{y}=\lambda_{C}$ in $\pi_{1}(\mathcal{M}_{\mathrm{Sat}(P,C,f)})$.
Then, $\rho_{1}\in R(\mathcal{M}_{1}),\rho_{2}\in R(\mathcal{M}_{2})$ agree on the boundary by satisfying the gluing relations, $\rho_{1}(\mu_{C})=\rho_{2}(\lambda_{y})$ and $\rho_{1}(\lambda_{C})=\rho_{2}(\mu_{y})$.
Also notice that for every $\rho\in R(\mathcal{M}_{\mathrm{Sat}(P,C,f)})$, the representation will restrict to representations $\rho_{1}=\rho|_{\pi_{1}(\mathcal{M}_{1})}$ and $\rho_{2}=\rho|_{\pi_{1}(\mathcal{M}_{2})}$ which satisfy the gluing relations, hence $\rho=\rho_{1}\ast\rho_{2}$.
We note here that every representation $\sigma\in R(\mathcal{M}_{f(P)_{r}})$ will lift to a representation $\rho_{2}\in R(\mathcal{M}_{2})$ by composition with $Q_{r\ast}$:
$$
\begin{tikzcd}
\pi_{1}(\mathcal{M}_{2})\arrow[d, "Q_{r\ast}"]\arrow[dr, "\rho_{2}"]\\
\pi_{1}(\mathcal{M}_{f(P)_{r}})\arrow[r, "\sigma"]&SL_{2}\mathbb{C}
\end{tikzcd}
$$
The resulting representation $\rho_{2}$ will also satisfy $\rho_{2}(\lambda_{y}^{r}\mu_{y})=I$.
For any abelian representation $\varepsilon:\pi_{1}(\mathcal{M}_{2})\to\{\pm I\}$, $\varepsilon$ is determined by its images $\varepsilon(\mu_{x})$ and $\varepsilon(\mu_{y})$ because $[\mu_{x}]$ and $[\mu_{y}]$ generate the first homology $H_{1}(\mathcal{M}_{2};\mathbb{Z})$.
Therefore, a simple calculation shows that $\rho_{2}^{\varepsilon}=\varepsilon\cdot\rho_{2}$ is still a representation, and $\rho^{\varepsilon}_{2}\in R(\mathcal{M}_{2})$ can be constructed to satisfy $\rho^{\varepsilon}_{2}(\lambda_{y}^{r}\mu_{y})=\delta I$ for $\delta\in\{\pm1\}$ by taking $\varepsilon(\mu_{x})=I$ and $\varepsilon(\mu_{y})=\delta I$.
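To see that $\rho_{2}^{\varepsilon}$ is again a homomorphism, note that $\varepsilon$ takes values in the center $\{\pm I\}$ of $SL_{2}\mathbb{C}$, so for any $g,h\in\pi_{1}(\mathcal{M}_{2})$,
$$
\rho_{2}^{\varepsilon}(gh)=\varepsilon(gh)\rho_{2}(gh)=\varepsilon(g)\varepsilon(h)\rho_{2}(g)\rho_{2}(h)=\big(\varepsilon(g)\rho_{2}(g)\big)\big(\varepsilon(h)\rho_{2}(h)\big)=\rho_{2}^{\varepsilon}(g)\rho_{2}^{\varepsilon}(h).
$$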
Also notice that $[\lambda_{x}]=w[\mu_{y}]$ and $[\lambda_{y}]=w[\mu_{x}]$ in a general winding number $w$ satellite space, so for the winding number zero case, $\varepsilon(\lambda_{x})=I$ and $\varepsilon(\lambda_{y})=I$.
To prove Theorem~\ref{wnzthm}, a family of representations $\rho_{2}^{\varepsilon}\in R(\mathcal{M}_{2})$ must extend to a family of representations $\rho_{K}=\rho_{1}\ast\rho_{2}^{\varepsilon}\in R(\mathcal{M}_{K})$, for $\rho_{1}\in R(\mathcal{M}_{1})$ which agrees with $\rho_{2}^{\varepsilon}$ along the gluing torus.
Given any $\rho_{2}^{\varepsilon}(\lambda_{y})\in SL_{2}\mathbb{C}$ as above, we will show that for $C\in\mathcal{G}_{0}$ there exists a representation $\rho_{1}\in R(\mathcal{M}_{1})$ such that $\rho_{1}(\lambda_{C})=\rho_{2}^{\varepsilon}(\lambda_{y})$ and $\rho_{1}(\lambda_{C}\mu_{C}^{r})=\delta I$ for a given factor $(LM^{r}-\delta)|A_{C}$.
To address this, we say that a representation $\rho_{0}\in R(\mathcal{M}_{K})$ {\it realizes} a point $(L_{0},M_{0})\in\mathbb{V}(A_{K})$ if $\xi(\rho_{0})=(L_{0},M_{0})$.
We say that $R(\mathcal{M}_{K})$ {\it realizes} $A_{K}$ if every $(L_{0},M_{0})\in\mathbb{V}(A_{K})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some $\rho_{0}\in R(\mathcal{M}_{K})$.
For a killing slope $r\in\mathcal{DS}_{K}$ with balanced-irreducible factor $f_{0}=(LM^{r}-\delta)|A_{K}$, we say that $f_{0}$ {\it has no gaps} if every $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ is realized by a representation $\rho_{0}\in R(\mathcal{M}_{K})$ such that $\rho_{0}(\lambda_{K}\mu_{K}^{r})=\delta I$ and $\rho_{0}(\mu_{K})\neq\pm I$.
For $K\in\mathcal{G}_{0}$, we say that $A_{K}$ {\it has no gaps} if each balanced-irreducible factor $f_{0}|A_{K}$ has no gaps.
We recall the action of $\pi_{1}(\mathcal{M}_{K})$ on a simplicial tree $T$ from~\cite{cgls_1987} and the following:
\begin{remark}
\cite[Proposition 1.3.8]{cgls_1987}
\label{vertexstab}
If no point of a simplicial tree $T$ is fixed by $\pi_{1}(\mathcal{M}_{K})$, then there exists an essential surface $S$ in $\mathcal{M}_{K}$ associated to the action.
Furthermore, if $C$ is a connected subcomplex of $\partial\mathcal{M}_{K}$ such that the image of $\pi_{1}(C)$ in $\pi_{1}(\partial\mathcal{M}_{K})$ is contained in a vertex stabilizer, then $S$ may be taken to be disjoint from $C$.
\end{remark}
\begin{lemma}
\label{smallnogap}
If $K$ is a small knot in $\mathbb{S}^{3}$, then $R(\mathcal{M}_{K})$ realizes $A_{K}$.
In particular, for each balanced-irreducible factor $f_{0}|A_{K}$, each point $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some $\rho_{0}\in R_{f_{0}}\subset R(\mathcal{M}_{K})$ in the component of $R(\mathcal{M}_{K})$ contributing $f_{0}$.
\end{lemma}
\begin{proof}
Let $K\subset\mathbb{S}^{3}$ be a small knot, that is, $\mathcal{M}_{K}$ contains no closed essential surfaces, and let $f_{0}|A_{K}$ be a balanced-irreducible factor of its $A$-polynomial with corresponding component $R_{f_{0}}\subset R(\mathcal{M}_{K})$.
By the construction of $A_{K}$ as $\overline{\mathrm{im}\,\xi}$, there are at most finitely many points $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ which are not realized by a representation in $R_{f_{0}}$.
Assume for contradiction that $(L_{0},M_{0})$ is such a point; then there is a sequence of representations $\{\rho_{i}\}$ in $R_{f_{0}}$ such that $\xi(\rho_{i})=(L_{i},M_{i})$ with $(L_{i},M_{i})\to(L_{0},M_{0})$.
Therefore, the traces approach finite values, $\mathrm{tr}\rho_{i}(\mu_{K})=\chi_{\rho_{i}}(\mu_{K})=M_{i}+M_{i}^{-1}\to M_{0}+M_{0}^{-1}$ and $\mathrm{tr}\rho_{i}(\lambda_{K})=\chi_{\rho_{i}}(\lambda_{K})=L_{i}+L_{i}^{-1}\to L_{0}+L_{0}^{-1}$.
However, $\rho_{i}$ does not have a limit in $R_{f_{0}}$ by assumption, hence the corresponding sequence of characters $\{\chi_{\rho_{i}}\}$ does not converge in the component $X_{f_{0}}$ of the character variety.
Since $f_{0}$ is a balanced-irreducible factor, if $(L_{0},M_{0})$ is in $\mathbb{V}(f_{0})\cap\left(\mathbb{C}^{\ast}\times\mathbb{C}^{\ast}\right)$, then so is $({L_{0}}^{-1},{M_{0}}^{-1})$, and a representation $\rho_{0}\in\Lambda$ with $\xi(\rho_{0})=({L_{0}}^{-1},{M_{0}}^{-1})$ can be conjugated by $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ to get $\xi(A\rho_{0}A^{-1})=(L_{0},M_{0})$.
By~\cite{cs_1983}, $\{\chi_{\rho_{i}}\}$ converges in the projective completion $\widetilde{X}_{f_{0}}$ to an ideal point $\widetilde{x}_{f_{0}}$.
This implies that there is an essential surface $S\subset\mathcal{M}_{K}$ associated to this ideal point and a corresponding nontrivial action of $\pi_{1}(\mathcal{M}_{K})$ on a simplicial tree $T$; furthermore, for every $\gamma\in\pi_{1}(\partial\mathcal{M}_{K})$, the sequence $\{\chi_{\rho_{i}}(\gamma)\}$ is bounded and so $\pi_{1}(\partial\mathcal{M}_{K})$ is contained in a vertex stabilizer.
This implies by Remark~\ref{vertexstab} that the surface $S$ is disjoint from $\partial\mathcal{M}_{K}$, and since $S$ is properly embedded, $S$ must be closed.
This contradicts the assumption that $K$ is a small knot; hence no such point $(L_{0},M_{0})$ can exist.
Therefore, every point $(L_{0},M_{0})\in\mathbb{V}(A_{K})\cap(\mathbb{C}^{\ast})^{2}$ is realized, and specifically every $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some representation $\rho_{0}\in R_{f_{0}}$.
\end{proof}
In particular, for every torus knot $T(m,n)$, two-bridge knot, or Montesinos knot of at most three rational tangle summands, we have that each factor $f_{0}|A_{K}$ has no gaps.
By Remark~\ref{iteratedtorus}, $A_{T(m,n)}=(L-1)F_{(m,n)}(L,M)$ with
$$
F_{(m,n)}(L,M)\doteq\begin{cases}
LM^{mn}+1&:n=2\\
L^{2}M^{2mn}-1&:n>2.
\end{cases}
$$
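For instance, instantiating this formula for the trefoil $T(3,2)$ (so $n=2$ and $mn=6$) gives
$$
A_{T(3,2)}\doteq(L-1)(LM^{6}+1),
$$
whose nonabelian factor $LM^{6}+1$ has the form $LM^{r}-\delta$ with $r=6$ and $\delta=-1$.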
Therefore, for $f_{0}=(LM^{r}-\delta)|A_{T(m,n)}$, if $\rho_{0}\in R_{f_{0}}$ realizes $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ for $M_{0}\neq\pm1$, then up to conjugation, we may take $\rho_{0}(\mu_{K})$ to be a diagonal matrix; thus, $\rho_{0}(\lambda_{K}\mu_{K}^{r})=\delta I$.
The following lemma addresses when $M_{0}=\pm1$.
\begin{lemma}
\label{irrednogap}
Let $f_{0}(L,M)=(LM^{r}-\delta)|A_{K}(L,M)$ be a balanced-irreducible factor such that every $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ is realized by a representation $\rho_{0}\in R_{f_{0}}$.
Then for $M_{0}=\pm1$ and $r\neq0$, any representation $\rho_{0}\in R_{f_{0}}$ with $\xi(\rho_{0})=(L_{0},M_{0})$, where $L_{0}=\delta M_{0}^{-r}$, is irreducible.
In particular, $\rho_{0}(\mu_{K})\neq\pm I$.
\end{lemma}
\begin{proof}
Assume for contradiction that such a representation $\rho_{0}\in R_{f_{0}}$ is reducible; then, since $R_{f_{0}}$ is at least 4-dimensional, there is a reducible nonabelian representation $\rho_{1}\in R_{f_{0}}$ such that $\chi_{\rho_{1}}=\chi_{\rho_{0}}$.
In particular, $\mathrm{tr}\rho_{1}(\mu_{K})=2M_{0}$ which is either $2$ or $-2$.
By~\cite{burde_1967} and~\cite{derham_1967}, this implies that $M_{0}^{2}=1$ must be a root of the Alexander polynomial of $K$, that is, $\Delta_{K}(1)=0$; however, $\Delta_{K}(1)=\pm1$ for every knot $K$, a contradiction.
Hence, $\rho_{0}$ must be irreducible, and thus $\rho_{0}(\mu_{K})\neq\pm I$.
\end{proof}
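The fact that $\Delta_{K}(1)=\pm1$ can be seen concretely for the trefoil, whose Alexander polynomial is $\Delta(t)\doteq t-1+t^{-1}$, so that $\Delta(1)=1\neq0$.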
Since every torus knot $T(m,n)$ is a small knot, we have the following corollary, which serves as the base case for our induction on the graph knots:
\begin{corollary}
\label{torusnogaps}
For every torus knot $T(m,n)$, $A_{T(m,n)}$ has no gaps.
\end{corollary}
The above corollary guarantees that for each factor $f_{0}\doteq(LM^{r}-\delta)|A_{T(m,n)}$, each $(L_{0},M_{0})\in\mathbb{V}(f_{0})\cap(\mathbb{C}^{\ast})^{2}$ is realized by a representation $\rho_{0}\in R_{f_{0}}$ such that $\xi(\rho_{0})=(L_{0},M_{0})$, $\rho_{0}(\lambda\mu^{r})=\delta I$, and $\rho_{0}(\mu)\neq\pm I$.
\begin{lemma}
\label{sumnogaps}
If $K_{1},K_{2}\in\mathcal{G}_{0}$ are knots where $A_{K_{1}},A_{K_{2}}$ have no gaps, then $A_{K_{1}\#K_{2}}$ has no gaps.
\end{lemma}
\begin{proof}
Let $K=K_{1}\#K_{2}$, then each pair of factors $(L_{1}M^{r_{i}}-\delta_{i})|A_{K_{1}}$ and $(L_{2}M^{s_{j}}-\delta_{j})|A_{K_{2}}$ contributes the factor $(LM^{r_{i}+s_{j}}-\delta_{i}\delta_{j})|A_{K}$ by Theorem~\ref{gzconnect}.
For a given $M_{0}\in\mathbb{C}^{\ast}$, this determines $L_{1}=\delta_{i}M_{0}^{-r_{i}}$, $L_{2}=\delta_{j}M_{0}^{-s_{j}}$, and therefore $L_{0}=\delta_{i}\delta_{j}M_{0}^{-r_{i}-s_{j}}$.
Since each $A_{K_{i}}$ has no gaps and $L_{1},L_{2}\in\mathbb{C}^{\ast}$, there exist representations $\rho_{i}\in R(\mathcal{M}_{K_{i}})$ that realize $(L_{i},M_{0})$ with $\rho_{i}(\mu_{K})\neq\pm I$ for $i=1,2$.
Furthermore, $\rho_{1}(\lambda_{1}\mu_{K}^{r_{i}})=\delta_{i}I$ and $\rho_{2}(\lambda_{2}\mu_{K}^{s_{j}})=\delta_{j}I$, and so these representations will also satisfy $\rho_{1}(\lambda_{1})=\delta_{i}\rho_{1}(\mu_{K})^{-r_{i}}$ and $\rho_{2}(\lambda_{2})=\delta_{j}\rho_{2}(\mu_{K})^{-s_{j}}$.
If $M_{0}\neq\pm1$, that is $\mathrm{tr}\rho_{i}(\mu_{K})\neq\pm2$, then both $\rho_{i}(\mu_{K})$ are diagonalizable, so up to conjugation we have $\rho_{i}(\mu_{K})=\begin{pmatrix}M_{0}&0\\0&M_{0}^{-1}\end{pmatrix}\neq\pm I$ for $i=1,2$.
If $M_{0}=\pm1$, then up to conjugation, $\rho_{i}(\mu_{K})=M_{0}\begin{pmatrix}1&1\\0&1\end{pmatrix}$ for $i=1,2$ since $(L_{i},M_{0})$ is not a gap of $A_{K_{i}}$ and so $\rho_{i}(\mu_{K})\neq\pm I$.
Hence, in either case these representations agree on the gluing annulus $A$ of the connected sum, and so $\rho_{1}\ast\rho_{2}\in R(\mathcal{M}_{K_{1}\#K_{2}})$ with $\xi(\rho_{1}\ast\rho_{2})=(L_{1}L_{2},M_{0})=(\delta_{i}\delta_{j}M_{0}^{-r_{i}-s_{j}},M_{0})$, $(\rho_{1}\ast\rho_{2})(\mu_{K})\neq\pm I$, and $(\rho_{1}\ast\rho_{2})(\lambda_{1}\lambda_{2}\mu_{K}^{r_{i}+s_{j}})=\delta_{i}\delta_{j}I$.
Therefore, $A_{K_{1}\#K_{2}}$ has no gaps.
\end{proof}
\begin{lemma}
\label{cablenogaps}
If $C\in\mathcal{G}_{0}$ is a knot where $A_{C}$ has no gaps, then $A_{[(p,q),C]}$ has no gaps.
In particular, the factor $F_{(p,q)}$ has no gaps.
\end{lemma}
\begin{proof}
Let $K=[(p,q),C]$; then each factor $f_{0}(\overline{L},\overline{M})=(\overline{L}\,\overline{M}^{r}-\delta)$ of $A_{C}$ contributes a factor $g_{0}(L,M)=(LM^{rq^{2}}-\delta^{q})$ of $A_{K}$; additionally, there is the factor $F_{(p,q)}(L,M)$ of $A_{K}$.
It suffices to show that every $(L_{0},M_{0})\in\mathbb{V}(g_{0})\cap(\mathbb{C}^{\ast})^{2}$ and every $(L_{0},M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some $\rho\in R(\mathcal{M}_{K})$.
We begin with the factor $F_{(p,q)}$ using a modified argument of Claim 2.9 in~\cite{nz_2017}.
If $|p|>1$, then notice that by Corollary~\ref{torusnogaps}, for any $M_{0}\in\mathbb{C}^{\ast}$, there is a representation $\sigma_{M_{0}}\in R(\mathcal{M}_{T(p,q)})$ realizing $(L_{0},M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$ with $\sigma_{M_{0}}(\lambda_{K}\mu_{K}^{pq})=\pm I$ and $\sigma_{M_{0}}(\mu_{K})\neq\pm I$.
Composing with the induced quotient homomorphism $Q_{0\ast}:\pi_{1}(\mathcal{M}_{2})\to\pi_{1}(\mathcal{M}_{T(p,q)})$ gives the representation $\rho_{2}=\sigma_{M_{0}}\circ Q_{0\ast}\in R(\mathcal{M}_{2})$, which extends to the representation $\rho_{K}=\mathrm{id}\ast\rho_{2}\in R(\mathcal{M}_{K})$.
Hence, every $(L_{0},M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some $\rho_{K}\in R(\mathcal{M}_{K})$ for $|p|>1$ with $\rho_{K}(\lambda_{K}\mu_{K}^{pq})=\pm I$ and $\rho_{K}(\mu_{K})\neq\pm I$.
If $|p|=1$, then since the quotient map would give us the unknot $T(p,q)\cong U$, we recall from the discussion in~\cite{nz_2017} the fundamental group of the cable space for $p=\pm1$:
$$
\pi_{1}(\mathcal{M}_{2})\cong\langle\alpha,\beta|\gamma_{C}=\alpha^{q},\gamma_{C}\beta=\beta\gamma_{C}\rangle,
$$
where $\gamma_{C}=\alpha^{q}$ is a Seifert fiber of $\mathcal{M}_{2}$ lying in $\partial V$, and $\gamma_{K}=\lambda_{K}\mu_{K}^{pq}$ is a Seifert fiber of $\mathcal{M}_{2}$ lying in $\partial\mathcal{M}_{K}$.
As described in~\cite{nz_2017}, $\mu_{K}=\alpha\beta$ and $\lambda_{K}=\gamma_{K}\mu_{K}^{-pq}$, with $\rho(\gamma_{C})=\rho(\gamma_{K})=\pm I$ for any irreducible representation $\rho\in R(\mathcal{M}_{2})$.
Let $(L_{0},M_{0})\in\mathbb{V}(LM^{pq}-1)$ for $q>2$; then because $\mathcal{M}_{C}(p/q)$ is a homology sphere, by~\cite{km_2004} there must exist an irreducible $SU(2)$-representation of $\pi_{1}(\mathcal{M}_{C}(p/q))$ and hence an irreducible representation $\rho_{1}\in R(\mathcal{M}_{C}(p/q))\subset R(\mathcal{M}_{C})$ satisfying $\rho_{1}(\lambda_{C}^{q}\mu_{C}^{p})=I$ and $\mathrm{tr}\rho_{1}(\lambda_{C})\neq\pm2$.
Hence, up to conjugation we may assume that $\rho_{1}$ satisfies the following:
\begin{align*}
\rho_{1}(\lambda_{C})&=\begin{pmatrix}\ell&0\\0&\ell^{-1}\end{pmatrix}&\rho_{1}(\mu_{C})&=\begin{pmatrix}\ell^{-pq}&0\\0&\ell^{pq}\end{pmatrix},
\end{align*}
for some choice of $\ell\neq\pm1$ and $\ell^{q}\neq\pm1$.
We define $\rho_{2}(\beta)=\rho_{1}(\lambda_{C})$ and $\rho_{2}(\alpha)=ABA^{-1}$ for matrices $A,B\in SL_{2}\mathbb{C}$ such that $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$, $B=\begin{pmatrix}z&0\\0&z^{-1}\end{pmatrix}$ with $z^{q}=1$ and $z\neq\pm1$.
A simple calculation shows we may take $a\in\mathbb{C}^{\ast}$, $b=1$, $c=\frac{M_{0}+M_{0}^{-1}-\ell z-\ell^{-1}z^{-1}}{(\ell-\ell^{-1})(z-z^{-1})}\neq0$, and $d=\frac{c+1}{a}$.
Notice that we may choose $\ell,z\in\mathbb{C}-\{\pm1\}$ so that $M_{0}\neq\ell z$ and $M_{0}\neq(\ell z)^{-1}$, and therefore $c\neq0$.
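One may verify the resulting trace directly: writing $D=\rho_{1}(\lambda_{C})=\begin{pmatrix}\ell&0\\0&\ell^{-1}\end{pmatrix}$ and using $ad-bc=1$,
$$
\mathrm{tr}\left(ABA^{-1}D\right)=\ell z+\ell^{-1}z^{-1}+bc(\ell-\ell^{-1})(z-z^{-1}),
$$
so with $b=1$ and the choice of $c$ above, this equals $\ell z+\ell^{-1}z^{-1}+\left(M_{0}+M_{0}^{-1}-\ell z-\ell^{-1}z^{-1}\right)=M_{0}+M_{0}^{-1}$.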
Thus, $\mathrm{tr}\rho_{2}(\alpha\beta)=M_{0}+M_{0}^{-1}$ and $\mathrm{tr}\rho_{2}(\beta)=\mathrm{tr}\rho_{1}(\lambda_{C})=\ell+\ell^{-1}$ with
$$
\rho_{2}(\lambda_{K})=\rho_{2}(\alpha^{q}\mu_{K}^{-pq})=I\cdot\rho_{2}(\mu_{K})^{-pq}=\rho_{2}(\mu_{K})^{-pq}
$$
since $\rho_{2}(\alpha^{q})=I$ by construction.
Hence, up to conjugation, we may extend $\rho_{1}$ to $\rho_{K}=\rho_{1}\ast\rho_{2}$ so that $\rho_{K}(\mu_{K})=\begin{pmatrix}M_{0}&0\\0&M_{0}^{-1}\end{pmatrix}$ and thus $\rho_{K}(\lambda_{K})=\rho_{K}(\mu_{K})^{-pq}$ for all $M_{0}\neq\pm1$, and so $\rho_{K}(\lambda_{K}\mu_{K}^{pq})=I$.
However, if $M_{0}=\pm1$, then since $\ell\neq\pm1$, $\rho_{1}(\lambda_{C})\neq\pm I$ and so $\rho_{2}(\alpha\beta)\neq\pm I$.
Since $\mathrm{tr}\rho_{2}(\mu_{K})=M_{0}+M_{0}^{-1}=\pm2$, it follows that $\rho_{2}(\mu_{K})=M_{0}\begin{pmatrix}1&1\\0&1\end{pmatrix}$ up to conjugation, and thus $\rho_{2}(\lambda_{K})=\rho_{2}(\alpha^{q}\mu_{K}^{-pq})=\rho_{2}(\mu_{K})^{-pq}$ as before.
Therefore, $(LM^{pq}-1)$ does not have any gaps.
If $(L_{0},M_{0})\in\mathbb{V}(LM^{pq}+1)$ for $q\geq2$, then we construct the representation $\rho_{K}$ similarly, instead using $z^{q}=-1$ so that $z^{2q}=1$; hence $\rho_{K}(\alpha^{q})=-I$.
Therefore every $(L_{0},M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$ is realized by a representation $\rho_{K}\in R(\mathcal{M}_{K})$ with $\rho_{K}(\mu_{K})\neq\pm I$ and $\rho_{K}(\lambda_{K}\mu_{K}^{pq})=\pm I$; hence, $F_{(p,q)}|A_{[(p,q),C]}$ has no gaps.
For the factor $g_{0}=(LM^{rq^{2}}-\delta^{q})|A_{K}$ contributed by $f_{0}=(\overline{L}\,\overline{M}^{r}-\delta)|A_{C}$, recall that every $(\overline{L},\overline{M})\in\mathbb{V}(\overline{L}\,\overline{M}^{r}-\delta)$ is realized by some representation $\rho_{1}\in R(\mathcal{M}_{C})$ with $\rho_{1}(\mu_{C})\neq\pm I$ and $\rho_{1}(\lambda_{C}\mu_{C}^{r})=\delta I$.
For $(L_{0},M_{0})\in\mathbb{V}(g_{0})\cap(\mathbb{C}^{\ast})^{2}$, if $M_{0}^{q}=\overline{M}\neq\pm1$, then up to conjugation, $\rho_{1}(\mu_{C})$ and $\rho_{1}(\lambda_{C})$ are diagonal, and we may extend $\rho_{1}$ to $\rho_{K}=\rho_{1}\ast\rho_{2}$ via the abelian representation $\rho_{2}$:
\begin{align*}
\rho_{2}(\lambda_{K})&=\rho_{1}(\lambda_{C})^{q}=\delta^{q}\begin{pmatrix}M_{0}^{-rq^{2}}&0\\0&M_{0}^{rq^{2}}\end{pmatrix}&\rho_{2}(\mu_{K})&=\begin{pmatrix}M_{0}&0\\0&M_{0}^{-1}\end{pmatrix}\\
\rho_{2}(\mu_{C})&=\rho_{2}(\mu_{K})^{q}=\begin{pmatrix}M_{0}^{q}&0\\0&M_{0}^{-q}\end{pmatrix}&\rho_{2}(\beta)&=\rho_{1}(\lambda_{C})=\delta\begin{pmatrix}M_{0}^{-rq}&0\\0&M_{0}^{rq}\end{pmatrix},
\end{align*}
and hence $(L_{0},M_{0})$ for $M_{0}^{q}\neq\pm1$ is realized by some representation $\rho_{K}\in R(\mathcal{M}_{K})$ with $\rho_{K}(\mu_{K})\neq\pm I$ and $\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\delta^{q}I$.
Similarly, if $M_{0}=\pm1$, then for $q$ even, we have $\overline{M}=M_{0}^{q}=1$ and $L_{0}=\delta^{q}=1$, so we may take the abelian representation $\rho_{2}$ given by
\begin{align*}
\rho_{2}(\lambda_{K})&=\rho_{1}(\lambda_{C})^{q}=\begin{pmatrix}1&-rq^{2}\\0&1\end{pmatrix}&\rho_{2}(\mu_{K})&=M_{0}\begin{pmatrix}1&1\\0&1\end{pmatrix}\\
\rho_{2}(\mu_{C})&=\rho_{2}(\mu_{K})^{q}=\begin{pmatrix}1&q\\0&1\end{pmatrix}&\rho_{2}(\beta)&=\rho_{1}(\lambda_{C})=\delta\begin{pmatrix}1&-rq\\0&1\end{pmatrix}.
\end{align*}
This representation agrees with the irreducible representation $\rho_{1}\in R(\mathcal{M}_{C})$ realizing the point $(\delta,1)\in\mathbb{V}(\overline{L}\,\overline{M}^{r}-\delta)\cap(\mathbb{C}^{\ast})^{2}$, and hence $(1,\pm1)$ is realized by $\rho_{K}\in R(\mathcal{M}_{K})$ with $\rho_{K}(\mu_{K})\neq\pm I$ and $\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\delta^{q}I$.
If $M_{0}=\pm1$ and $q$ is odd, notice that $\overline{M}=M_{0}^{q}$ and $\overline{L}=\delta M_{0}^{-rq}$, and since $A_{C}$ has no gaps, the point $(\delta M_{0}^{-rq},M_{0}^{q})\in\mathbb{V}(\overline{L}\,\overline{M}^{r}-\delta)$ is realized by some $\rho_{1}\in R(\mathcal{M}_{C})$ up to conjugation so that
\begin{align*}
\rho_{1}(\lambda_{C})&=\delta\rho_{1}(\mu_{C})^{-r}=\delta M_{0}^{-rq}\begin{pmatrix}1&-rq\\0&1\end{pmatrix}&\rho_{1}(\mu_{C})&=M_{0}^{q}\begin{pmatrix}1&q\\0&1\end{pmatrix}
\end{align*}
and this representation can be extended to $\rho_{K}=\rho_{1}\ast\rho_{2}\in R(\mathcal{M}_{K})$ via the abelian representation $\rho_{2}$ given by
\begin{align*}
\rho_{2}(\lambda_{K})&=\rho_{1}(\lambda_{C})^{q}=\delta^{q}M_{0}^{-rq^{2}}\begin{pmatrix}1&-rq^{2}\\0&1\end{pmatrix}&\rho_{2}(\mu_{K})&=M_{0}\begin{pmatrix}1&1\\0&1\end{pmatrix}\\
\rho_{2}(\mu_{C})&=\rho_{2}(\mu_{K})^{q}=M_{0}^{q}\begin{pmatrix}1&q\\0&1\end{pmatrix}&\rho_{2}(\beta)&=\rho_{1}(\lambda_{C})=\delta M_{0}^{-rq}\begin{pmatrix}1&-rq\\0&1\end{pmatrix}.
\end{align*}
Lastly, we consider $M_{0}\neq\pm1$ with $L_{0}=\pm1$; if $L_{0}=1$, then we may take the abelian representation $\rho_{K}\in R(\mathcal{M}_{K})$ such that $\rho_{K}(\lambda_{K})=I$ and $\rho_{K}(\mu_{K})=\begin{pmatrix}M_{0}&0\\0&M_{0}^{-1}\end{pmatrix}$.
However, if $L_{0}=-1$ and $M_{0}^{q}=\pm1$, then notice $(-1,M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$.
For $M_{0}=\zeta\neq\pm1$ such that $\zeta^{q}=1$, notice that $q=2$ would force $\zeta=\pm1$, hence $q>2$; in particular, $F_{(p,q)}\doteq(LM^{pq}-1)(LM^{pq}+1)$.
Furthermore, $(-1)(\zeta)^{rq^{2}}-\delta^{q}=0$ implies that $\delta^{q}=-1$ and therefore $\delta=-1$ and $q$ is odd.
Notice that $(-1)(\zeta)^{pq}+1=0$, and hence $(-1,\zeta)\in\mathbb{V}(LM^{pq}+1)\cap(\mathbb{C}^{\ast})^{2}$.
For $M_{0}=\eta\neq\pm1$ such that $\eta^{q}=-1$, the equation $(-1)(\eta)^{rq^{2}}-\delta^{q}=0$ implies $\delta^{q}=(-1)(-1)^{rq}$, and so $q$ is odd.
Therefore, $\delta=(-1)^{r+1}$ and $q>2$, and so $F_{(p,q)}\doteq(LM^{pq}-1)(LM^{pq}+1)$ as before.
Since $p=\pm1$ is also odd, $(-1)(\eta)^{pq}-1=0$, and so $(-1,\eta)\in\mathbb{V}(LM^{pq}-1)\cap(\mathbb{C}^{\ast})^{2}$, and therefore, $(-1,\eta)\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$.
Hence, for every $(-1,M_{0})\in\mathbb{V}(LM^{rq^{2}}-\delta^{q})\cap(\mathbb{C}^{\ast})^{2}$ with $M_{0}^{q}=\pm1$ for $M_{0}\neq\pm1$, $(-1,M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$.
However, $F_{(p,q)}|A_{K}$ has no gaps, so every point $(L_{0},M_{0})\in\mathbb{V}(F_{(p,q)})\cap(\mathbb{C}^{\ast})^{2}$ is realized by a representation $\rho_{K}\in R(\mathcal{M}_{K})$ such that $\rho_{K}(\mu_{K})\neq\pm I$ and $\rho_{K}(\lambda_{K}\mu_{K}^{pq})=\pm I$.
We see that such a representation will also satisfy $\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\delta^{q}I$.
If $(-1,\zeta)\in\mathbb{V}(LM^{rq^{2}}-\delta^{q})$ as before, we find
$$
\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\rho_{K}(\lambda_{K})\rho_{K}(\mu_{K})^{rq^{2}}=(-I)(I)^{rq}=-I=\delta^{q}I.
$$
Similarly, if $(-1,\eta)\in\mathbb{V}(LM^{rq^{2}}-\delta^{q})$ as before, we find that $r$ even implies $\delta=-1$ and $r$ odd implies $\delta=1$, so
$$
\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\begin{cases}
(-I)(-I)^{rq}=-I=\delta^{q}I&:\hspace{4pt}r\text{ is even,}\\
(-I)(-I)^{rq}=I=\delta^{q}I&:\hspace{4pt}r\text{ is odd.}
\end{cases}
$$
Hence, every $(-1,\zeta)$ and $(-1,\eta)$ in $\mathbb{V}(LM^{rq^{2}}-\delta^{q})\cap(\mathbb{C}^{\ast})^{2}$ is realized by some representation $\rho_{K}\in R(\mathcal{M}_{K})$ with $\rho_{K}(\mu_{K})\neq\pm I$ such that $\rho_{K}(\lambda_{K}\mu_{K}^{rq^{2}})=\delta^{q}I$.
Since $g_{0}=LM^{rq^{2}}-\delta^{q}$ was an arbitrary factor of $A_{K}$ contributed by $A_{C}$, $A_{K}$ has no gaps.
\end{proof}
A simple induction on $(p,q)$-cables and connected sums of torus knots, using Corollary~\ref{torusnogaps} and Lemmas~\ref{sumnogaps} and~\ref{cablenogaps}, yields the following theorem:
\begin{theorem}
\label{g0nogaps}
For every graph knot $K\in\mathcal{G}_{0}$, $A_{K}$ has no gaps.
\end{theorem}
By this theorem, we will be able to extend each representation $\rho_{2}^{\varepsilon}$ from the earlier discussion to a representation $\rho_{K}=\rho_{1}\ast\rho_{2}^{\varepsilon}$.
To do this, we require the following lemmas about the image of the projection map $\xi$ which considers three types of representations in $R_{U}(\mathcal{M}_{\mathrm{Sat}(P,C,f)})=R_{0}\cup R_{1}\cup R_{2}$, following the notation of Ruppe~\cite{ruppe_2016}:
\begin{align}
R_{0}&=\{\rho=\rho_{1}\ast\rho_{2}|\rho_{2}\text{ reducible}\}\\
R_{1}&=\{\rho=\rho_{1}\ast\rho_{2}|\rho_{2}\text{ irreducible and }\rho_{1}\text{ reducible}\}\\
R_{2}&=\{\rho=\rho_{1}\ast\rho_{2}|\rho_{2}\text{ irreducible and }\rho_{1}\text{ irreducible}\}.
\end{align}
Recall that our satellite space $\mathcal{M}_{2}$ has $\partial_{x}\mathcal{M}_{2}$ a torus with preferred framing $(\lambda_{x},x)=(\lambda_{K},\mu_{K})$ and $\partial_{y}\mathcal{M}_{2}$ a torus with preferred framing $(\lambda_{y},y)=(\mu_{C},\lambda_{C})$, following the gluing relation.
\begin{lemma}
\label{reps0}
Let $K=\mathrm{Sat}(P,C,f)$ be a winding number zero satellite where $\mathcal{M}_{K}=\mathcal{M}_{1}\cup_{\partial N(\ell_{y})}\mathcal{M}_{2}$ with $\mathcal{M}_{1}=\mathcal{M}_{C}$ and $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$, then
$$\overline{\xi(R_{0})}=\mathbb{V}(L-1).$$
\end{lemma}
\begin{proof}
Let $\rho_{1}\ast\rho_{2}\in R_{0}$, then up to conjugation, let $\rho_{2}$ be upper-triangular on $\pi_{1}(\mathcal{M}_{2})$, and since $\rho_{2}$ must have the same character as an abelian representation, we see that $\mathrm{tr}\rho_{2}(\lambda_{x})=2$ since $\lambda_{x}$ is null-homologous in $\mathcal{M}_{2}$.
Thus $L=1$ and so $\xi(\rho_{1}\ast\rho_{2})\in\mathbb{V}(L-1)$.
Considering all abelian representations with $\rho_{2}(x)=\begin{pmatrix}M&0\\0&M^{-1}\end{pmatrix}$ and $\rho_{2}(y)=I$, we find that $\rho_{2}$ extends via the trivial representation $\mathrm{id}_{1}$ with $\mathrm{id}_{1}(\pi_{1}(\mathcal{M}_{1}))=\{I\}$.
Therefore, $\mathrm{id}_{1}\ast\rho_{2}\in R_{0}$ and $\xi(\mathrm{id}_{1}\ast\rho_{2})=(1,M)$ for all $M\in\mathbb{C}^{\ast}$.
Hence, $\overline{\xi(R_{0})}=\mathbb{V}(L-1)$.
\end{proof}
\begin{lemma}
\label{reps01}
Let $K=\mathrm{Sat}(P,C,f)$ be a winding number zero satellite where $\mathcal{M}_{K}=\mathcal{M}_{1}\cup_{\partial N(\ell_{y})}\mathcal{M}_{2}$ with $\mathcal{M}_{1}=\mathcal{M}_{C}$ and $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$, then
$$\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}=\mathbb{V}(A_{P}).$$
\end{lemma}
\begin{proof}
By Lemma~\ref{reps0} and Remark~\ref{trivfactor}, $\overline{\xi(R_{0})}=\mathbb{V}(L-1)\subset\mathbb{V}(A_{P})$.
Let $\rho_{1}\ast\rho_{2}\in R_{1}$ and up to conjugation, let $\rho_{1}$ be lower-triangular (since $\rho_{1}$ is reducible).
Since $\rho_{1}\in R(\mathcal{M}_{1})$ is reducible, $\rho_{1}(\lambda_{C})=I$ and thus $\rho_{2}(y)=I$ by the gluing relation.
Hence, we let $\mathcal{M}_{P}=V(1/0)-\overset{\circ}{N}(f(P))$ be the quotient of $\mathcal{M}_{2}$ by $(1/0)$-Dehn filling along $\partial_{y}\mathcal{M}_{2}$.
The quotient map $Q_{0}:\mathcal{M}_{2}\to\mathcal{M}_{P}$ induces an epimorphism $Q_{0\ast}:\pi_{1}(\mathcal{M}_{2})\to\pi_{1}(\mathcal{M}_{P})$ satisfying $Q_{0\ast}(y)=e$.
Since $\rho_{2}(y)=I$ and the kernel of $Q_{0\ast}$ is normally generated by $y$, the representation $\rho_{2}$ factors through the quotient; that is, there is an irreducible representation $\sigma\in R(\mathcal{M}_{P})$ such that $\rho_{2}=\sigma\circ Q_{0\ast}$.
Hence, $\xi(\rho_{1}\ast\rho_{2})=\xi(\sigma)$ since $Q_{0\ast}(x)=\mu_{P}$ and $Q_{0\ast}(\lambda_{x})=\lambda_{P}$, and $\xi(\rho_{1}\ast\rho_{2})$ is either an isolated point or in a component $R_{f_{0}}\subset R(\mathcal{M}_{P})$.
In the latter case, we find that $\xi(\rho_{1}\ast\rho_{2})=\xi(\sigma)\in\mathbb{V}(A_{P})$, and therefore $\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\subset\mathbb{V}(A_{P})$.
Note that the isolated points $\xi(\sigma)$ will only lift to isolated points in $\xi(R_{1})$ and so no other factors will appear.
For any $\sigma\in R(\mathcal{M}_{P})$ with $\xi(\sigma)\in\mathbb{V}(f_{0})$ for a balanced-irreducible factor $f_{0}|\widetilde{A}_{P}$, the representation $\sigma$ lifts to a representation $\rho_{2}\in R(\mathcal{M}_{2})$ satisfying $\rho_{2}(y)=I$ via the quotient map, $\rho_{2}=\sigma\circ Q_{0\ast}$.
$$
\begin{tikzcd}
\pi_{1}(\mathcal{M}_{2})\arrow[d, "Q_{0\ast}"]\arrow[dr, "\rho_{2}"]\\
\pi_{1}(\mathcal{M}_{P})\arrow[r, "\sigma"]&SL_{2}\mathbb{C}
\end{tikzcd}
$$
Conjugating $\rho_{2}$ so that $\rho_{2}$ is lower-triangular on $\partial_{y}\mathcal{M}_{2}$, we may take an abelian representation $\rho_{1}\in R(\mathcal{M}_{1})$ to send $\rho_{1}(\lambda_{C})=I$ and $\rho_{1}(\mu_{C})=\rho_{2}(\lambda_{y})$.
Therefore, we have a representation $\rho_{1}$ which agrees with $\rho_{2}$ along the gluing boundary, and so $\rho_{1}\ast\rho_{2}\in R_{1}$. Up to conjugation, the representation $\rho_{1}\ast\rho_{2}$ can be made upper-triangular on $\partial_{x}\mathcal{M}_{2}$ with $\xi(\rho_{1}\ast\rho_{2})=\xi(\sigma)\in\mathbb{V}(\widetilde{A}_{P})$, which completes the proof: $\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}=\mathbb{V}(A_{P})$.
\end{proof}
\begin{lemma}
\label{reps012}
Let $K=\mathrm{Sat}(P,C,f)$ be a winding number zero satellite knot with companion knot $C\in\mathcal{G}_{\mathbb{Z}}$.
Let $\mathcal{M}_{K}=\mathcal{M}_{1}\cup_{\partial N(\ell_{y})}\mathcal{M}_{2}$ with $\mathcal{M}_{1}=\mathcal{M}_{C}$ and $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$, and let $f(P)_{r}$ be the knot whose exterior is given by $V(1/r)-\overset{\circ}{N}(f(P))$. Then
$$
\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\cup\overline{\xi(R_{2})}\subset\mathbb{V}\left(\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right]\right).
$$
\end{lemma}
\begin{proof}
By Lemma~\ref{reps01}, we know that $\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}=\mathbb{V}(A_{P})=\mathbb{V}\left((L-1)\widetilde{A}_{f(P)_{0}}\right)$ by definition of $f(P)_{0}$, so these factors will appear in the variety on the right.
Let $\rho_{1}\ast\rho_{2}\in R_{2}$, then since $C\in\mathcal{G}_{\mathbb{Z}}$ we may assume that $\rho_{1}\in R^{\ast}(\mathcal{M}_{1})$ satisfies $\rho_{1}(\lambda_{C}\mu_{C}^{r})=\delta I$ for some slope $r\in\mathcal{DS}_{C}$ and $\delta\in\{\pm1\}$.
Then $\rho_{2}(y\lambda_{y}^{r})=\delta I$ by the gluing relation, and up to conjugation we may take $\rho_{2}$ to be lower-triangular on $\partial_{y}\mathcal{M}_{2}$,
\begin{align*}
\rho_{2}(y)&=\begin{pmatrix}
u&0\\t&u^{-1}
\end{pmatrix}&
\rho_{2}(\lambda_{y})&=\begin{pmatrix}
v&0\\s&v^{-1}
\end{pmatrix},
\end{align*}
where the entries satisfy $\rho_{2}(y\lambda_{y}^{r})=\delta I$.
The quotient $Q_{r}:\mathcal{M}_{2}\to\mathcal{M}_{f(P)_{r}}$ by $(1/r)$-Dehn filling along $\partial_{y}\mathcal{M}_{2}$ induces the map $Q_{r\ast}:\pi_{1}(\mathcal{M}_{2})\to\pi_{1}(\mathcal{M}_{f(P)_{r}})$, which satisfies $Q_{r\ast}(y\lambda_{y}^{r})=e$.
Letting $\varepsilon_{2}:\pi_{1}(\mathcal{M}_{2})\to\{\pm I\}$ be the abelian representation given by $\varepsilon_{2}(y)=\delta I$ and $\varepsilon_{2}(x)=I$, we find that $\rho^{\varepsilon_{2}}_{2}=\varepsilon_{2}\cdot\rho_{2}$ is a representation of $\pi_{1}(\mathcal{M}_{2})$ satisfying
$$
\rho^{\varepsilon_{2}}_{2}(y\lambda_{y}^{r})=\varepsilon_{2}(y\lambda_{y}^{r})\cdot\rho_{2}(y{\lambda_{y}}^{r})=\delta I\cdot \delta I=I.
$$
Since $\rho^{\varepsilon_{2}}_{2}(y\lambda_{y}^{r})=I$ and the kernel of $Q_{r\ast}$ is normally generated by $y\lambda_{y}^{r}$ (as in the proof of Lemma~\ref{reps01}), there is some irreducible $\sigma\in R(\mathcal{M}_{f(P)_{r}})$ such that $\rho^{\varepsilon_{2}}_{2}=\sigma\circ Q_{r\ast}$.
$$
\begin{tikzcd}
\pi_{1}(\mathcal{M}_{2})\arrow[d, "Q_{r\ast}"]\arrow[dr, "\rho^{\varepsilon_{2}}_{2}"]\\
\pi_{1}(\mathcal{M}_{f(P)_{r}})\arrow[r, "\sigma"]&SL_{2}\mathbb{C}
\end{tikzcd}
$$
Therefore, $\xi(\rho_{1}\ast\rho_{2})=\xi(\sigma)$ since $Q_{r\ast}(x)=\mu_{f(P)_{r}}$ and $Q_{r\ast}(\lambda_{x})=\lambda_{f(P)_{r}}$, and each $\xi(\sigma)$ is either an isolated point or lies in a component contributing a factor to $\mathbb{V}(A_{f(P)_{r}})$.
As with the earlier proof, the isolated points will only lift to isolated points, but for every $\xi(\sigma)$ in a component of $\mathbb{V}(A_{f(P)_{r}})$, the lifted point $\xi(\rho_{1}\ast\rho_{2})$ will still be in a component of $\mathbb{V}(A_{f(P)_{r}})$, and therefore
$$
\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\cup\overline{\xi(R_{2})}\subset\mathbb{V}\left(\mathrm{Red}\left[(L-1)\prod_{r\in\mathcal{DS}_{C}}\widetilde{A}_{f(P)_{r}}\right]\right).
$$
\end{proof}
As an immediate consequence of these lemmas, we can find a polynomial multiple of the $A$-polynomial of such winding number zero satellite knots where the companion knot $C\in\mathcal{G}_{\mathbb{Z}}$:
\begin{theorem}
Let $K=\mathrm{Sat}(P,C,f)$ be a winding number zero satellite knot with companion knot $C\in\mathcal{G}_{\mathbb{Z}}$, let $\mathcal{M}_{K}=\mathcal{M}_{1}\cup_{\partial N(\ell_{y})}\mathcal{M}_{2}$ where $\mathcal{M}_{1}=\mathcal{M}_{C}$ and $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$, and let $f(P)_{r}$ be the knot whose exterior is given by $V(1/r)-\overset{\circ}{N}(f(P))$. Then
$$
A_{K}\Bigg|\mathrm{Red}\left[(L-1)\prod_{r\in\mathcal{DS}_{C}}\widetilde{A}_{f(P)_{r}}\right].
$$
\end{theorem}
To show that each of the factors on the right is a factor of $A_{K}$, we will utilize Theorem~\ref{g0nogaps}, that is, that the $A$-polynomial of a graph knot has no gaps.
\begin{lemma}
\label{reps012d}
Let $K=\mathrm{Sat}(P,C,f)$ be a winding number zero satellite knot with companion knot $C\in\mathcal{G}_{0}$.
Let $\mathcal{M}_{K}=\mathcal{M}_{1}\cup_{\partial N(\ell_{y})}\mathcal{M}_{2}$ with $\mathcal{M}_{1}=\mathcal{M}_{C}$ and $\mathcal{M}_{2}=V-\overset{\circ}{N}(f(P))$, and let $f(P)_{r}$ be the knot whose exterior is given by $V(1/r)-\overset{\circ}{N}(f(P))$. Then
$$
\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\cup\overline{\xi(R_{2})}=\mathbb{V}\left(\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right]\right).
$$
\end{lemma}
\begin{proof}
By Lemma~\ref{reps012}, it suffices to show the other direction of containment.
Notice that Lemma~\ref{reps01} implies that $\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}=\mathbb{V}((L-1)\widetilde{A}_{f(P)_{0}})$.
Now let $\widetilde{A}_{f(P)_{r}}$ be a factor with corresponding $f_{0}=(LM^{r}-\delta)|\widetilde{A}_{C}$, so that $r\in\mathcal{DS}_{C}$ and $\delta\in\{\pm1\}$ are given.
Let $\mathcal{M}_{f(P)_{r}}$, $Q_{r}$, and $Q_{r\ast}$ be as in the previous proof, and so $Q_{r\ast}(\lambda_{y}^{r}y)=e$.
For each balanced-irreducible factor $g_{0}|\widetilde{A}_{f(P)_{r}}$, there is a family of representations $\sigma\in R(\mathcal{M}_{f(P)_{r}})$ such that $\xi(\sigma)=(L,M)\in\mathbb{V}(g_{0})$ for all but finitely many points.
Let $\rho_{2}=\sigma\circ Q_{r\ast}$ be the lift of such a representation, as in the proof of Lemma~\ref{reps012}.
To find a representation $\rho^{\varepsilon_{2}}_{2}\in R^{\ast}(\mathcal{M}_{2})$ which agrees with some $\rho_{1}\in R^{\ast}(\mathcal{M}_{1})$ along the gluing torus, we use the same abelian representation $\varepsilon_{2}$ from the proof of Lemma~\ref{reps012}, $\varepsilon_{2}:\pi_{1}(\mathcal{M}_{2})\to\{\pm I\}$ given by $\varepsilon_{2}(y)=\delta I$ and $\varepsilon_{2}(x)=I$.
This gives $\rho^{\varepsilon_{2}}_{2}=\varepsilon_{2}\cdot\rho_{2}$ satisfying $\rho^{\varepsilon_{2}}_{2}(y\lambda_{y}^{r})=\delta I$.
Additionally, we note that $\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\rho_{2}(\lambda_{y})$ and $\rho^{\varepsilon_{2}}_{2}(\lambda_{x})=\rho_{2}(\lambda_{x})$ since $\mathcal{M}_{2}$ is a winding number zero satellite space.
We now show that every $\rho^{\varepsilon_{2}}_{2}\in R(\mathcal{M}_{2})$ arising from this family of representations $\sigma\in R^{\ast}(\mathcal{M}_{f(P)_{r}})$ will extend to a representation $\rho_{1}\in R(\mathcal{M}_{C})$, since $A_{C}$ has no gaps.
If $\mathrm{tr}\rho^{\varepsilon_{2}}_{2}(\lambda_{y})\neq\pm2$, then we may conjugate $\rho^{\varepsilon_{2}}_{2}$ so that $\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\begin{pmatrix}\overline{M}&0\\0&\overline{M}^{-1}\end{pmatrix}$, and thus by the quotient identity, $\rho^{\varepsilon_{2}}_{2}(y)=\delta\rho^{\varepsilon_{2}}_{2}(\lambda_{y}^{-r})=\delta\begin{pmatrix}\overline{M}^{-r}&0\\0&\overline{M}^{r}\end{pmatrix}$.
Since $C\in\mathcal{G}_{0}$ and $A_{C}$ has no gaps by Theorem~\ref{g0nogaps}, there is a representation $\rho_{1}\in R(\mathcal{M}_{1})$ such that $\rho_{1}(\mu_{C})=\begin{pmatrix}\overline{M}&0\\0&\overline{M}^{-1}\end{pmatrix}$ and $\rho_{1}(\lambda_{C})=\delta\begin{pmatrix}\overline{M}^{-r}&0\\0&\overline{M}^{r}\end{pmatrix}$.
Therefore the representation $\rho^{\varepsilon_{2}}_{2}$ extends to a representation $\rho_{K}=\rho_{1}\ast\rho^{\varepsilon_{2}}_{2}\in R(\mathcal{M}_{K})$ with $\xi(\rho_{K})=\xi(\sigma)$.
If $\mathrm{tr}\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\pm2=2\overline{M}$, then up to conjugation, either $\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\overline{M} I$ or $\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\overline{M}\begin{pmatrix}1&1\\0&1\end{pmatrix}$.
In the latter case, the gluing relation implies that
\begin{align*}
\rho^{\varepsilon_{2}}_{2}(\lambda_{y})&=\overline{M}\begin{pmatrix}
1&1\\
0&1
\end{pmatrix},
&
\rho^{\varepsilon_{2}}_{2}(\mu_{y})&=\delta\overline{M}^{-r}\begin{pmatrix}
1&-r\\
0&1
\end{pmatrix}.
\end{align*}
Since $(\delta\overline{M}^{-r},\overline{M})\in\mathbb{V}(LM^{r}-\delta)\cap(\mathbb{C}^{\ast})^{2}$, there is a representation $\rho_{1}\in R(\mathcal{M}_{C})$ such that $\rho_{1}(\lambda_{C}\mu_{C}^{r})=\delta I$ and $\rho_{1}(\mu_{C})\neq\pm I$ with $\mathrm{tr}\rho_{1}(\mu_{C})=2\overline{M}$.
Therefore, the representation will extend to $\rho_{K}=\rho_{1}\ast\rho^{\varepsilon_{2}}_{2}\in R(\mathcal{M}_{K})$ such that $\xi(\rho_{K})=\xi(\sigma)$.
In the former case, where $\rho^{\varepsilon_{2}}_{2}(\lambda_{y})=\pm I$, we instead use some abelian representation $\varepsilon$ so that $\rho^{\varepsilon}_{2}(y)=I$, which will naturally extend to $\rho_{K}=\varepsilon_{1}\ast\rho^{\varepsilon}_{2}$ via the abelian representation $\varepsilon_{1}:\pi_{1}(\mathcal{M}_{C})\to\{\pm I\}$ given by $\varepsilon_{1}(\mu_{C})=\rho^{\varepsilon}_{2}(\lambda_{y})$.
Therefore, every representation $\sigma\in R^{\ast}(\mathcal{M}_{f(P)_{r}})$ will extend to some $\rho_{K}\in R(\mathcal{M}_{K})$ with $\xi(\rho_{K})=\xi(\sigma)$; hence each factor $\mathbb{V}(g_{0})\subset\mathbb{V}(\widetilde{A}_{K})$ and therefore the lemma is proven.
\end{proof}
We see that Theorem~\ref{wnzthm} follows from Lemmas~\ref{reps012} and~\ref{reps012d} which will be discussed in Section~\ref{proof2}.
Furthermore, Theorems~\ref{doublellin} and~\ref{doubletwisteddouble} follow as consequences of Theorem~\ref{wnzthm}; there we may omit polynomial reduction of $A_{D_{n}(C)}$ for $C\in\mathcal{G}_{0}$ because the factors $\widetilde{A}_{f(P)_{r}}=\widetilde{A}_{K(n-r)}$ are irreducible and distinct, as also discussed in Section~\ref{proof2}.
\section{$r$-Twisted Gluing Relations}
\label{rtwistedcomps}
In the special case of $r$-twisted Whitehead doubles, the subset of importance in $R(\mathcal{M}_{D_{r}(K)})$ is $\overline{\xi(R_{2})}$, given by representations $\rho=\rho_{1}\ast\rho_{2}$ where both $\rho_{1}$ and $\rho_{2}$ are irreducible, since Remark~\ref{patternfactor} already gives us the known factor $A_{K(r)}|A_{D_{r}(K)}$ for any $r$-twisted Whitehead double.
For explicit computation when both $\rho_{1},\rho_{2}$ are irreducible, we may conjugate $\rho_{2}$ so that
\begin{align*}
\rho_{2}(\mu_{x})&=\begin{pmatrix}
M&1\\
0&M^{-1}
\end{pmatrix}=\rho_{2}(x),&
\rho_{2}(\mu_{y})&=\begin{pmatrix}
u&0\\
t&u^{-1}
\end{pmatrix}=\rho_{2}(y).
\end{align*}
Since $\mu_{x},\lambda_{x}$ commute and $\mu_{y},\lambda_{y}$ commute, we have
\begin{align*}
\rho_{2}(\lambda_{x})&=\begin{pmatrix}
L&\ast\\
0&L^{-1}
\end{pmatrix},&
\rho_{2}(\lambda_{y})&=\begin{pmatrix}
v&0\\
s&v^{-1}
\end{pmatrix}\overset{\phi_{r}}{=}\rho_{1}(\mu_{K})
=\begin{pmatrix}
m&0\\
\ast&m^{-1}
\end{pmatrix}.
\end{align*}
Here, we use the $r$-twisted gluing relation $\phi_{r}:\partial\mathcal{M}_{K}\to\partial V$, given by
\begin{align*}
\phi_{r}(\mu_{K})&=\lambda_{y},&\phi_{r}(\lambda_{K})&=\mu_{y}\lambda_{y}^{-r}.
\end{align*}
Furthermore, since $\mu_{K},\lambda_{K}$ commute and $\rho_{1}(\mu_{K})=\rho_{2}(\lambda_{y})$ is lower-triangular by the gluing, $\rho_{1}(\lambda_{K})$ is also lower-triangular; label its $(1,1)$-entry $\ell$.
Hence, we have $\rho_{2}(\lambda_{y})=\rho_{1}(\mu_{K})$ and $\rho_{2}(\mu_{y}\lambda_{y}^{-r})=\rho_{1}(\lambda_{K})$, whose $(1,1)$-entries give us the relations:
\begin{align*}
m&=v,&
\ell&=\rho_{1}(\lambda_{K})_{1,1}=\rho_{2}(y\lambda_{y}^{-r})_{1,1}=uv^{-r}.
\end{align*}
Hence, if $\rho\in R_{2}$, then the $(1,1)$-entries of $\rho_{1}(\mu_{K})$ and $\rho_{1}(\lambda_{K})$ must satisfy $\widetilde{A}_{K}\left(\ell,m\right)=0$, or alternatively
\begin{align}
f_{K,r}(M,t,u)&=\widetilde{A}_{K}\left(uv^{-r},v\right)=0,
\end{align}
where
\begin{align}
v&=\tfrac{-M t^{2}+M^{3} t^{2}-t u+2 M^{2} t u-M^{4} t u+M^{2} t^{3} u+M u^{2}+M t^{2} u^{2}-2 M^{3} t^{2} u^{2}-M^{2} t u^{3}+M^{4} t u^{3}}{M u^{2}},
\end{align}
and
\begin{align}
s&=\rho_{2}(\lambda_{y})_{2,1}.
\end{align}
The Whitehead relation gives us $\rho_{2}(\Omega)=\rho_{2}(\Omega^{\ast})$, which holds so long as a single polynomial equation is satisfied:
\begin{align}
f_{W}(M,t,u)=\begin{matrix}M^2 t - M^4 t - M u + M^3 u \\- M t^2 u + 2 M^3 t^2 u + t u^2 - 4 M^2 t u^2 \\+ M^4 t u^2 - M^2 t^3 u^2 + M u^3 - M^3 u^3 \\+ 2 M t^2 u^3 - M^3 t^2 u^3 - t u^4 + M^2 t u^4\end{matrix}=0.
\end{align}
Lastly, $\rho_{2}(\lambda_{x})_{1,1}=\rho_{2}(XY\Omega YX)_{1,1}$ gives us an additional polynomial equation:
\begin{align}
F_{W}(L,M,t,u)=\begin{matrix}M t - M^{3} t - t^{2} u + 2 M^{2} t^{2} u - 2 M t u^{2} + M^{3} t u^{2} \\
- M t^{3} u^{2} - M^{2} u^{3} + L M^{2} u^{3} + t^{2} u^{3} - M^{2} t^{2} u^{3} + M t u^{4}\end{matrix}=0.
\end{align}
The $r$-twisted gluing $\phi_{r}$ was chosen with the same Whitehead link $W$ precisely so that as many of these defining equations as possible remain unchanged as $r$ varies.
From this, we see that if $\overline{\xi(R_{2})}$ contributes a factor $\widetilde{P}_{K,r}$ of the $A$-polynomial, then its variety $\mathbb{V}(\widetilde{P}_{K,r})\subset\mathbb{V}(\mathrm{Res}_{u,t}(f_{K,r},f_{W},F_{W}))$.
From these three polynomials, we can apply resultant methods to find (by explicit computation) a polynomial which contains $\widetilde{P}_{K,r}$ as a factor: $\mathrm{Res}_{u}\left[\mathrm{Res}_{t}[f_{K,r},f_{W}],\mathrm{Res}_{t}[f_{W},F_{W}]\right]$.
Removing isolated points and impossible factors (since $M\neq0$, $u\neq0$, $t\neq0$, etc.), as well as checking against possible boundary slopes, we may eliminate incorrect factors from this ``iterated resultant.''
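This elimination can be sketched in a computer algebra system. The following is a minimal illustration of the iterated-resultant pattern using toy stand-in polynomials (not the actual $f_{K,r}$, $f_{W}$, $F_{W}$ displayed above); the point is only the order of elimination of the auxiliary variables $t$ and $u$:

```python
import sympy as sp

L, M, t, u = sp.symbols('L M t u')

# Toy stand-ins for f_{K,r}, f_W, F_W: three relations involving the
# auxiliary variables t and u (NOT the actual polynomials of the paper).
f1 = M*t**2 + u - L     # plays the role of f_{K,r}
f2 = t*u - M            # plays the role of f_W
f3 = L*u**2 - t + M**2  # plays the role of F_W

# Eliminate t first, then u: Res_u[ Res_t[f1, f2], Res_t[f2, f3] ].
g1 = sp.resultant(f1, f2, t)
g2 = sp.resultant(f2, f3, t)
iterated = sp.expand(sp.resultant(g1, g2, u))

# The iterated resultant is a polynomial in L and M alone; in the paper's
# setting its irreducible factors are the candidates to be vetted.
```

Any factor produced this way must still be checked against the restrictions above (nonvanishing of $M$, $u$, $t$, and compatible boundary slopes) before it can be accepted as a factor of the $A$-polynomial.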
The connection between $A_{D_{r}(K)}$ and the $A$-polynomial of $n$-twist knots is given clearly by Theorem~\ref{doublellin}; a recursive formula for $A_{K(n)}$ was first found by Hoste and Shanahan~\cite{hs_2004}, and later an explicit formula by Mathews~\cite{mathews_2014}:
\begin{theorem}\cite{mathews_2014}
For any $n$-twist knot $K(n)$, its $A$-polynomial is given explicitly as:
\label{twistpolynomial}
$$
\textstyle
\widetilde{A}_{K(n)}=\begin{cases}
M^{2n}(L+M^{2})^{2n-1}\times\\
\hspace{10pt}\times\underset{i=0}{\overset{2n-1}{\sum}}\binom{n+\left\lfloor\frac{i-1}{2}\right\rfloor}{i}\left(\tfrac{M^{2}-1}{L+M^{2}}\right)^{i}(1-L)^{\left\lfloor\frac{i}{2}\right\rfloor}(M^{2}-\tfrac{L}{M^{2}})^{\left\lfloor\frac{i+1}{2}\right\rfloor}
&:\hspace{4pt}n\geq0\\
M^{-2n}(L+M^{2})^{-2n}\times\\
\hspace{10pt}\times\underset{i=0}{\overset{-2n}{\sum}}\binom{-n+\left\lfloor\frac{i}{2}\right\rfloor}{i}\left(\tfrac{1-M^{2}}{L+M^{2}}\right)^{i}(1-L)^{\left\lfloor\frac{i}{2}\right\rfloor}(M^{2}-\tfrac{L}{M^{2}})^{\left\lfloor\frac{i+1}{2}\right\rfloor}
&:\hspace{4pt}n\leq0.
\end{cases}
$$
\end{theorem}
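As a quick check of the formula, for $n=1$ the sum contains only the terms $i=0,1$, giving
$$
\widetilde{A}_{K(1)}=M^{2}(L+M^{2})\left[1+\left(\tfrac{M^{2}-1}{L+M^{2}}\right)\left(M^{2}-\tfrac{L}{M^{2}}\right)\right]=LM^{2}+M^{4}+(M^{4}-M^{2})\left(M^{2}-\tfrac{L}{M^{2}}\right)=L+M^{6},
$$
in agreement with the base case $\widetilde{A}_{K(1)}=L+M^{6}$ of the recursion in Theorem~\ref{twistpolynomialrec}.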
\begin{theorem}\cite{hs_2004}
\label{twistpolynomialrec}
For any $n$-twist knot $K(n)$, its $A$-polynomial is given recursively as:
$$
\widetilde{A}_{K(n)}=\begin{cases}
x\widetilde{A}_{K\left(n-\tfrac{n}{|n|}\right)}-y\widetilde{A}_{K\left(n-\tfrac{2n}{|n|}\right)}&:\hspace{4pt}n\neq-1,0,1,2\\
M^{4}+L(-1+M^{2}+2M^{4}+M^{6}-M^{8})+L^{2}M^{4}&:\hspace{4pt}n=-1\\
1&:\hspace{4pt}n=0\\
L+M^{6}&:\hspace{4pt}n=1\\
M^{14}+L(M^{4}-M^{6}+2M^{10}+2M^{12}-M^{14})\\+L^{2}(-1+2M^{2}+2M^{4}-M^{8}+M^{10})+L^{3}&:\hspace{4pt}n=2
\end{cases}
$$
where
\begin{align*}
x&=L^{2}(M^{4}+1)+L(-M^{8}+2M^{6}+2M^{4}+2M^{2}-1)+M^{4}\\
&=(L+M^{2})\widetilde{A}_{K(1)}+\widetilde{A}_{K(-1)}\\
y&=M^{4}(L+M^{2})^{4}.
\end{align*}
\end{theorem}
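For example, one step of the recursion expresses the $3$-twist knot polynomial in terms of the base cases:
$$
\widetilde{A}_{K(3)}=x\widetilde{A}_{K(2)}-y\widetilde{A}_{K(1)}=x\widetilde{A}_{K(2)}-M^{4}(L+M^{2})^{4}(L+M^{6}),
$$
and similarly $\widetilde{A}_{K(-2)}=x\widetilde{A}_{K(-1)}-y\widetilde{A}_{K(0)}=x\widetilde{A}_{K(-1)}-M^{4}(L+M^{2})^{4}$, using $\widetilde{A}_{K(0)}=1$.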
Note that Hoste and Shanahan's convention actually gives $\widetilde{A}_{K(n)^{\ast}}$ under the notation in this paper; to remedy this, the mirror image is found by Remark~\ref{mirrored}, which will not matter for $K(-1)$.
In general, $K(n)^{\ast}=J(2,2n)^{\ast}=J(-2,-2n)=J(-2n,2)$.
Further examples of winding number zero satellites can be described with links where $V=\mathcal{M}_{\ell_{y}}$ and so $f(P)=\ell_{x}\subset V$.
The first generalization of the $r$-twisted Whitehead doubles we consider are the $(m,n)$-double twisted doubles, whose pattern knot embedding is shown in Figure~\ref{fig2} with $m$ and $n$ vertical full-twists:
\begin{figure}[th]
$$
\begin{tikzpicture}
\begin{knot}[
line width=1.5pt,
line join=round,
clip width=1,
scale=2,
background color=white,
consider self intersections,
only when rendering/.style={
draw=white,
double=black,
double distance=1.5pt,
line cap=none
}
]
\strand
(.85,.6) to ++(-.3,0);
\strand
(-.35,.5) to ++(.3,0);
\strand
(-.2,.5) arc (0:90:.15)
to ++(-.2,0)
arc (90:180:.15)
to ++(0,-.8);
\strand
(0,0) to +(.4,0)
arc (-90:0:.3)
to +(0,1.2)
arc (0:90:.3)
to +(-.4,0)
arc (-270:-180:.15)
.. controls +(0,-.15) and +(0,.15) .. ++(-.3,-.3)
.. controls +(0,-.15) and +(0,.15) .. ++(.3,-.3)
arc (-180:-90:.15)
to +(.25,0)
arc (90:0:.15)
to +(0,-.3)
arc (0:-90:.15)
to ++(-1.1,0)
arc (-90:-180:.15)
to ++(0,.3)
arc (180:90:.15)
to ++(.25,0)
arc (-90:0:.15)
.. controls +(0,.15) and +(0,-.15) .. ++(.3,.3)
.. controls +(0,.15) and +(0,-.15) .. ++(-.3,.3)
arc (0:90:.15)
to ++(-.4,0)
arc (90:180:.3)
to ++(0,-1.2)
arc (-180:-90:.3)
to ++(1,0);
\strand
(-.2,.5) to ++(0,-.8)
arc (0:-90:.15)
to ++(-.2,0)
arc (-90:-180:.15);
\flipcrossings{1,2}
\end{knot}
\draw[very thick, -latex] (1.1,1.2) to ++(-.1,0);
\draw[very thick, -latex] (-.7,1) to ++(-.1,0);
\draw[thick,fill=white] (-2.8,.9) -- ++(1,0) -- ++(0,.6) -- ++(-1,0) -- ++(0,-.6) -- cycle;
\draw[thick,fill=white] (-1.1,2.1) -- ++(1,0) -- ++(0,1.2) -- ++(-1,0) -- ++(0,-1.2) -- cycle;
\node[right] at (1.8,1.2) {$x$};
\node[right] at (0,1) {$y$};
\node at (1.7,2) {$\ell_{x}$};
\node at (-.1,-.4) {$\ell_{y}$};
\node at (-2.3,1.2) {$n$};
\node at (-.6,2.7) {$m$};
\end{tikzpicture}
$$
\vspace*{0pt}
\caption{The $(m,n)$-Double Twisted Double Pattern.
\label{fig2}}
\end{figure}
\section{Proof of Theorems~\ref{wnzthm} and~\ref{doublellin}}
\label{proof2}
We prove the more general result about the $r$-twisted Whitehead doubles of graph knots, then refer to specific examples as corollaries of this result.
We begin with a theorem from Ruppe's thesis:
\begin{theorem}~\cite{ruppe_2016}
\label{ruppethm}
For a $(p,q)$-torus knot $T(p,q)$,
\begin{align}
A_{D_{r}(T(p,q))}=(L-1)\widetilde{A}_{K(r)}\widetilde{A}_{K(r-pq)}.
\end{align}
\end{theorem}
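For instance, for the trefoil $T(2,3)$ and the untwisted double ($r=0$), the theorem reads
$$
A_{D_{0}(T(2,3))}=(L-1)\widetilde{A}_{K(0)}\widetilde{A}_{K(-6)}=(L-1)\widetilde{A}_{K(-6)},
$$
since $\widetilde{A}_{K(0)}=1$.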
As one possible generalization of this result, we will show for an iterated torus knot $[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$, the $A$-polynomial of the $r$-twisted Whitehead double of this knot is given by Corollary~\ref{iteratedtorusdouble}; another generalization is the $r$-twisted Whitehead double of the connected sum of two torus knots, given in Corollary~\ref{doubleconnectedsum}.
In general, for a graph knot $C\in\mathcal{G}_{0}$, Theorem~\ref{doublellin} will give the $A$-polynomial of any $n$-twisted Whitehead double $A_{D_{n}(C)}$.
Notice that in Theorem~\ref{ruppethm}, the first two factors of $A_{D_{r}(T(p,q))}$ are $(L-1)$ and $\widetilde{A}_{K(r)}$, which is a restatement of the fact that $A_{P}|A_{\mathrm{Sat}(P,C,f)}$.
Recalling the earlier notation, the $A$-polynomial of the satellite knot $\mathrm{Sat}(P,C,f)$, for embedded pattern knot $P$ and companion knot $C$, can be written in the form:
$$
A_{\mathrm{Sat}(P,C,f)}=\mathrm{Red}[A_{P}\widetilde{F}_{\mathrm{Sat}(P,C,f)}].
$$
For $P=K(r)$, it is the last factor $\widetilde{F}_{\mathrm{Sat}(K(r),C,f)}$ of $A_{\mathrm{Sat}(K(r),C,f)}=A_{D_{r}(C)}$ that requires Theorem~\ref{wnzthm}, which is a consequence of the results from Section~\ref{zerodouble}:
\subsection*{Proof of Theorem~\ref{wnzthm}}
By Lemmas~\ref{reps0},~\ref{reps01}, and~\ref{reps012}, for a winding number zero satellite knot $K=\mathrm{Sat}(P,C,f)$ with $C\in\mathcal{G}_{0}$, we find that
$$
\overline{\xi(R_{U}(\mathcal{M}_{K}))}\subset\mathbb{V}\left((L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right).
$$
By Lemma~\ref{reps012d}, we see that $\widetilde{A}_{f(P)_{r}}\big|A_{K}$ for each slope $r\in\mathcal{DS}_{C}$, and hence
$$
\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right]\Big|A_{K}.
$$
Furthermore, for all but finitely many points $(L,M)$ in the zero locus of the product, we find that there will be representations $\rho\in R_{U}(\mathcal{M}_{K})$ such that $\xi(\rho)=(L,M)$ and hence
$$
A_{K}=\mathrm{Red}\left[(L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(P)_{r}}\right].
$$
We must reduce this polynomial formula in general since $A_{K}$ does not contain any repeated factors, while, depending on the different $A$-polynomials $\widetilde{A}_{f(P)_{r}}$, the product may contain repeated factors.
This completes the proof.
\null\hfill$\square$\\\\
Some additional lemmas to omit polynomial reduction for $A_{D_{n}(C)}$ are:
\begin{lemma}~\cite{hs_2004}
\label{irreducibletwists}
For any $n$-twist knot $K(n)$, $\widetilde{A}_{K(n)}$ is irreducible.
\end{lemma}
\begin{lemma}~\cite{hs_2004}
\label{uniquetwists}
For two integers $m\neq n$, $A_{K(m)}\neq A_{K(n)}$.
\end{lemma}
\subsection*{Proof of Theorem~\ref{doublellin}}
By Lemma~\ref{reps012d}, we know that
$$
\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\cup\overline{\xi(R_{2})}=\mathbb{V}\left((L-1)\underset{r\in\mathcal{DS}_{C}}{\prod}\widetilde{A}_{f(K(n))_{r}}\right),
$$
where $f(K(n))_{r}$ is the knot obtained from the embedding of $K(n)$ into the solid torus given by Figure~\ref{fig1} with the $(1/r)$-Dehn filling $V(1/r)$.
Notice that $f(K(n))_{r}=K(n-r)$ since the $(1/r)$-Dehn filling can be understood as $-r$ full-twists on the boundary of $V$, hence,
$$
\overline{\xi(R_{0})}\cup\overline{\xi(R_{1})}\cup\overline{\xi(R_{2})}=\mathbb{V}\left((L-1)\prod_{r\in\mathcal{DS}_{C}}\widetilde{A}_{K(n-r)}\right).
$$
Each $\widetilde{A}_{K(n-r)}$ is irreducible by Lemma~\ref{irreducibletwists} and these are distinct by Lemma~\ref{uniquetwists}, with $\widetilde{A}_{K(0)}=1$; since each slope $r\in\mathcal{DS}_{C}$ is distinct, we see that $\mathrm{Red}\left[\widetilde{A}_{K(n-r)}\widetilde{A}_{K(n-s)}\right]=\widetilde{A}_{K(n-r)}\widetilde{A}_{K(n-s)}$ for all $r\neq s$.
Therefore,
$$
A_{D_{n}(C)}=(L-1)\prod_{r\in\mathcal{DS}_{C}}\widetilde{A}_{K(n-r)},
$$
as claimed, which completes the proof.
\null\hfill$\square$\\\\
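For example, when $C=T(p,q)$, the strongly detected boundary slopes are $\mathcal{DS}_{C}=\{0,pq\}$, so the formula specializes to
$$
A_{D_{n}(T(p,q))}=(L-1)\widetilde{A}_{K(n)}\widetilde{A}_{K(n-pq)},
$$
recovering Theorem~\ref{ruppethm}.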
The following corollaries provide many computational examples and are immediate consequences of Theorem~\ref{doublellin}, Remark~\ref{iteratedtorus}, and Corollary~\ref{torconnected}.
The strongly detected boundary slopes of iterated torus knots $[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$ were noted in~\cite{nz_2017} as $p_{i}q_{i}\prod_{j=1}^{i-1}q_{j}^{2}$, which were also shown to be distinct.
The slopes of $T(p,q)\#T(p',q')$ are also easy to find given a calculation of $A_{T(p,q)\#T(p',q')}$ from Theorem~\ref{gzconnect}.
\begin{corollary}
\label{iteratedtorusdouble}
The $A$-polynomial of the $r$-twisted Whitehead double of an iterated torus knot, denoted $D_{r}[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]$, is given by
\begin{align}
A_{D_{r}[(p_{1},q_{1}),\ldots,(p_{n},q_{n})]}=(L-1)\widetilde{A}_{K(r)}\underset{i=1}{\overset{n}{\prod}}\widetilde{A}_{K\left(r-p_{i}q_{i}\Pi_{j=1}^{i-1}q_{j}^{2}\right)}.
\end{align}
\end{corollary}
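For example, for the iterated torus knot $[(p_{1},q_{1}),(p_{2},q_{2})]=[(2,3),(3,5)]$, the detected slopes are $p_{1}q_{1}=6$ and $p_{2}q_{2}q_{1}^{2}=135$, so
$$
A_{D_{r}[(2,3),(3,5)]}=(L-1)\widetilde{A}_{K(r)}\widetilde{A}_{K(r-6)}\widetilde{A}_{K(r-135)}.
$$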
\begin{corollary}
\label{doubleconnectedsum}
For torus knots $T(p,q),T(p',q')$ with $q\geq q'$, the $A$-polynomial of the $n$-twisted Whitehead double of their connected sum $K=T(p,q)\#T(p',q')$ is given by
\begin{align*}
A_{D_{n}(K)}&\doteq\begin{cases}
(L-1)\widetilde{A}_{K(n)}\widetilde{A}_{K(n-pq)}\widetilde{A}_{K(n-p'q')}\widetilde{A}_{K(n-(pq+p'q'))}&:\hspace{4pt}|p|q\neq|p'|q',\\
(L-1)\widetilde{A}_{K(n)}\widetilde{A}_{K(n-pq)}\widetilde{A}_{K(n-2pq)}&:\hspace{4pt}pq=p'q',\\
(L-1)\widetilde{A}_{K(n)}\widetilde{A}_{K(n-pq)}\widetilde{A}_{K(n+pq)}&:\hspace{4pt}pq=-p'q'.
\end{cases}
\end{align*}
\end{corollary}
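For instance, for $K=T(2,3)\#T(2,5)$ we have $pq=6$ and $p'q'=10$, so the first case applies and
$$
A_{D_{n}(K)}\doteq(L-1)\widetilde{A}_{K(n)}\widetilde{A}_{K(n-6)}\widetilde{A}_{K(n-10)}\widetilde{A}_{K(n-16)}.
$$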
\section{The $r$-Twisted Whitehead Double of $n$-Twist Knots}
\label{whiteheadtwist}
When $C=K(n)$ for $n\neq0,1$, we notice that $\mathrm{Vol}(\mathcal{M}_{C})>0$ since $K(n)$ is hyperbolic; additionally, the $r$-twisted Whitehead double $D_{r}(C)$ will have satellite space $\mathcal{M}_{2}$ in its JSJ-decomposition with positive hyperbolic volume, $\mathrm{Vol}(\mathcal{M}_{2})>0$.
The $A$-polynomial of $K=D_{r}(C)$ will be more difficult to compute than in the case of graph knots, though it can be found as a factor of the iterated resultant (which typically factors into multiple irreducible polynomial factors):
$$
\mathrm{Res}_{u}\left[\mathrm{Res}_{t}\left[A_{C}(uv^{-r},v),f_{W}\right],\mathrm{Res}_{t}\left[f_{W},F_{W}\right]\right]=
[P_{K(n),r}(L,M)]^{2}[Q_{K(n),r}(L,M)]^{2}.
$$
\begin{remark}
For our computations, we will not heavily distinguish between $D_{r}(K(n))$ and $D_{r}(K(n))^{\ast}$ since it is clear that $D_{r}(C)^{\ast}=D_{-r}(C^{\ast})$ and therefore $D_{r}(K(n))^{\ast}=D_{-r}(J(2,2n)^{\ast})=D_{-r}(J(-2,-2n))$.
However, in this particular case, $K(-1)^{\ast}=K(-1)$ hence we see that $D_{r}(K(-1))^{\ast}=D_{-r}(K(-1))$.
\end{remark}
Since we know that $A_{K(r)}|A_{D_{r}(K(n))}$, we will denote the remaining factor $A_{K(r)}^{-1}\widetilde{A}_{D_{r}(K(n))}$ by $\widetilde{P}_{K(n),r}=\widetilde{F}_{\mathrm{Sat}(K(r),K(n),f)}$, following the previous notation from Section~\ref{proof1} for $\widetilde{F}_{\mathrm{Sat}(P,C,f)}$.
Specific calculations show this factor is computationally equivalent to $P_{K(n),r}$ as stated in Remark~\ref{fig8dbl}; the other factor $Q_{K(n),r}$ is a byproduct of the iterated resultant computations.
\label{slopecomputations}
To verify that the factor $Q_{K(n),r}$ is invalid, we use Hoste and Shanahan's table of boundary slope computations~\cite{hs_2007} (here using the particular link $\mathcal{L}_{3/8}=W$ with $k=1$ in their notation $\mathcal{L}_{\frac{4k-1}{8k}}$) to find boundary slope pairs for $\mathcal{BS}_{W}$:
$$
\begin{tabular}{@{}ccc@{}}
\multicolumn{3}{@{}c}{Table 1. Boundary Slope Pairs for $\mathcal{L}_{3/8}=W$}\\ \hline
\multicolumn{2}{@{}c}{$\partial$-Slopes} & Restrictions \\ \hline
$(0,\varnothing)$ & $(\varnothing,0)$ & \\
$(-4,\varnothing)$ & $(\varnothing,-4)$ & \\
\multicolumn{2}{@{}c}{$(2t^{-1},2t)$} & $0\leq t\leq\infty$ \\
\multicolumn{2}{@{}c}{$(-2t^{-1}-2,-2t)$} & $0\leq t\leq 1$ \\
\multicolumn{2}{@{}c}{$(-2t^{-1},-2-2t)$} & $1\leq t\leq\infty$ \\
\multicolumn{2}{@{}c}{$(-3+s,-3-s)$} & $-1\leq s\leq 1$ \\ \hline
\end{tabular}
$$
For example, we realize that the boundary slopes $0,-4$ will always occur in $\mathcal{BS}_{D_{r}(K(n))}$ by the first two lines of Table 1, and without loss of generality, we use the convention that the attaching boundary slope is in the second component.
We verify that the boundary slope pairs given by~\cite{hs_2007} provide us with the means to compute the boundary slopes of $\mathcal{BS}_{D_{r}(K(n))}$ by the following well-known result:
\begin{lemma}
\label{bdyslopegluing}
Let $C$ be a nontrivial knot, let $L=\ell_{x}\cup\ell_{y}$ with $\ell_{y}$ an unknot, and $f:P\hookrightarrow V$ an embedding with $f(P)=\ell_{x}$ such that $\mathcal{M}_{L}=V-N(f(P))$, let $\phi:\partial\mathcal{M}_{C}\to\partial_{y}\mathcal{M}_{L}$ be the standard gluing map with $\partial_{y}\mathcal{M}_{L}=\partial N(\ell_{y})$,
\begin{align*}
\phi(\mu_{K})&=\lambda_{y}&
\phi(\lambda_{K})&=\mu_{y},
\end{align*}
and so $K=\mathrm{Sat}(P,C,f)$ with $V=\mathcal{M}_{\ell_{y}}$.
Then,
$$
\mathcal{BS}_{K}=\left\{m_{x}\middle|\exists m, m_{y}:m\in\mathcal{BS}_{C},(m_{x},m_{y})\in\mathcal{BS}_{L},\tfrac{1}{m}=m_{y}\right\}\cup\left\{m_{x}\middle|(m_{x},\varnothing)\in\mathcal{BS}_{L}\right\}.
$$
\end{lemma}
\begin{proof}
Recall that $m=p/q\in\mathbb{Q}\cup\{\infty\}$ is in $\mathcal{BS}_{K}$ if there is a properly embedded essential surface $(F,\partial F)\subset(\mathcal{M}_{K},\partial\mathcal{M}_{K})$ with $\partial F$ a collection of parallel simple closed curves with slope $p/q$.
Likewise, a slope-pair $(m_{x},m_{y})\in\mathcal{BS}_{L}$ if there is a properly embedded essential surface $F$ in $\mathcal{M}_{L}$ with $\partial_{x} F$ and $\partial_{y}F$ collections of parallel simple closed curves with slopes $m_{x}$ and $m_{y}$ respectively.
We see immediately that $\left\{m_{x}\middle|(m_{x},\varnothing)\in\mathcal{BS}_{L}\right\}\subset\mathcal{BS}_{K}$ since any such pair $(m_{x},\varnothing)$ has an associated essential surface $F$ which can also be embedded into $\mathcal{M}_{K}$.
Likewise, for any slope pair $(m_{x},m_{y})\in\mathcal{BS}_{L}$ with $m_{y}=\tfrac{1}{m}$ for some $m\in\mathcal{BS}_{C}$, we have corresponding essential surfaces $F_{C},F_{L}$, and we may take the necessary parallel copies of these surfaces until they agree on the number of boundary components along the gluing torus.
This new surface will be essential in $\mathcal{M}_{K}$ since its components are essential in their respective submanifolds, thus $m_{x}\in\mathcal{BS}_{K}$.
Conversely, if $m_{x}\in\mathcal{BS}_{K}$, then there exists a properly embedded essential surface $F$ with slope $m_{x}$ along $\partial\mathcal{M}_{K}$.
If $F$ does not intersect $\mathcal{M}_{C}$, or if $F$ can be isotoped in $\mathcal{M}_{K}$ so as to not intersect $\mathcal{M}_{C}$, then $(m_{x},\varnothing)\in\mathcal{BS}_{L}$.
However, if $F\cap\mathcal{M}_{C}$ is a nontrivial intersection, then $F\cap\partial\mathcal{M}_{C}$ is a collection of parallel simple closed curves on the torus $\partial\mathcal{M}_{C}$, {\it i.e.} some slope $m$.
This implies that $F=F_{C}\cup_{\phi}F_{L}$ where $F_{C}$ is a properly embedded essential surface with slope $m$ along $\partial\mathcal{M}_{C}$.
The other component $F_{L}$ will exhibit a boundary slope pair $(m_{x},m_{y})\in\mathcal{BS}_{L}$ which must satisfy the gluing relation $\phi$; hence $m_{y}=\tfrac{1}{m}$ and the lemma is proven.
\end{proof}
\begin{remark}
\label{fig8dbl}
We have verified the following formula for $A_{D_{r}(K(-1))}$ for twists $-11\leq r\leq11$:
$$
A_{D_{r}(K(-1))}=(L-1)\widetilde{A}_{K(r)}\widetilde{P}_{K(-1),r},
$$
where the last factor $\widetilde{P}_{K(-1),r}$ in the verified cases is equal to the polynomial $P_{K(-1),r}$ below, computed via resultant methods for $-11\leq r\leq11$,
$$
P_{K(-1),r}=\widetilde{A}_{K(r-4)}\widetilde{A}_{K(r+4)}-L(M^{2}-1)^{3}(M^{2}+1)(L-M^{4})x^{2}y(2x^{2}-y)y^{k(r)}(L+M^{2})^{\varepsilon(r)},
$$
with the polynomial factors $x,y$ as given in Hoste and Shanahan~\cite{hs_2004},
\begin{align*}
x&=(L+M^{2})\widetilde{A}_{K(1)}+\widetilde{A}_{K(-1)},\\
y&=M^{4}(L+M^{2})^{4},
\end{align*}
and the exponents $k(r),\varepsilon(r)$ are given by:
$$
k(r)=\begin{cases}
r-4&:\hspace{4pt}r>4\\
0&:\hspace{4pt}-4< r\leq4\\
-r-4&:\hspace{4pt}r\leq-4,
\end{cases}\hspace{20pt}
\varepsilon(r)=\begin{cases}
-1&:\hspace{4pt}r>4\\
0&:\hspace{4pt}-4< r\leq4\\
1&:\hspace{4pt}r\leq-4.
\end{cases}
$$
\end{remark}
Returning to twist-knot exteriors with attaching map $\phi_{r}$: a boundary slope $p/q\in\mathcal{BS}_{K(n)}$ corresponds to an essential surface in $\mathcal{M}_{K(n)}$ whose boundary lies in the class $\mu_{K}^{p}\lambda_{K}^{q}\in\pi_{1}(\partial\mathcal{M}_{K(n)})$.
In the $r$-twisted Whitehead double, our boundary slopes will come from $(m_{x},m_{y})\in\mathcal{BS}_{W}$ which correspond to essential surfaces in $\mathcal{M}_{W}$ where the boundary components on $\partial N(\ell_{x})$ have parallel slopes $m_{x}$ and similarly, the boundary components on $\partial N(\ell_{y})$ have parallel slopes $m_{y}$.
We naturally expect to encounter boundary slopes of the form $(m_{x},\varnothing)$, corresponding to essential surfaces that can be isotoped off the identified torus $\partial N(\ell_{y})=\partial\mathcal{M}_{K(n)}$.
These boundary slopes justify why $0,-4\in\mathcal{BS}_{D_{r}(K(n))}$ for all values of $n,r\in\mathbb{Z}$.
The more interesting boundary slopes we encounter are derived from boundary slopes $(m_{x},m_{y})\in\mathcal{BS}_{W}$ where $\phi_{r\ast}(m)=m_{y}$ for some $m\in\mathcal{BS}_{K(n)}$ by the above remark.
These boundary slopes will come from the gluing $\phi_{r}$, and so we expect to see:
$$
\phi_{r\ast}(p,q)=[\phi(\mu_{K}^{p}\lambda_{K}^{q})]=[(\lambda_{y})^{p}(\mu_{y}\lambda_{y}^{-r})^{q}]=[\mu_{y}^{q}\lambda_{y}^{p-qr}]=(q,p-qr).
$$
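For example, the meridian and longitude of $K$ are sent to
$$
\phi_{r\ast}(1,0)=(0,1),\hspace{20pt}\phi_{r\ast}(0,1)=(1,-r),
$$
that is, $\mu_{K}\mapsto\lambda_{y}$ and $\lambda_{K}\mapsto\mu_{y}\lambda_{y}^{-r}$, matching the defining relations of $\phi_{r}$.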
Thus, the boundary slopes in $\mathcal{BS}_{D_{r}(K(n))}$ corresponding to essential surfaces that nontrivially intersect the gluing torus are those $m_{x}\in\mathbb{Q}\cup\{\infty\}$ for which $(m_{x},m_{y})\in\mathcal{BS}_{W}$ with $m_{y}=q/(p-qr)$ for some $p/q\in\mathcal{BS}_{K(n)}$.
This means that we may explicitly compute the possible boundary slopes using a modified version of the table from Hoste and Shanahan, by checking when $m_{y}=q/(p-qr)$ for some $p/q\in\mathcal{BS}_{K(n)}$ and recording which pairs $(m_{x},m_{y})$ appear in the table.
Included below are two tables of the computed boundary slopes for all cases of $r$ and $n\leq-1$ using the fact that the boundary slopes of $n$-twist knots are known~\cite{ho_1989}:
\begin{align*}
\mathcal{BS}_{K(n)}=\begin{cases}
\{-4,0,-4n\}&:n\leq-1\\
\{0\}&:n=0\\
\{0,-6\}&:n=1\\
\{-4,0,-4n-2\}&:n\geq2.
\end{cases}
\end{align*}
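For instance, when $n=-1$ (the case treated in Remark~\ref{fig8dbl}) this gives $\mathcal{BS}_{K(-1)}=\{-4,0,4\}$, and when $n=2$ it gives $\mathcal{BS}_{K(2)}=\{-4,0,-10\}$.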
$$
\begin{tabular}{@{}cccccr@{}}
\multicolumn{6}{@{}c}{Table 2. Boundary Slope Table for $D_{r}(K(n))$ with $n\leq-1$ via Lemma~\ref{bdyslopegluing}}\\ \hline
$\varnothing$ & $\varnothing$ & $1/(-4-r)$ & $1/(-r)$ & $1/(-4n-r)$ & \\ \hline
$-4$ & $0$ & $-4r-16$ & $-4r$ & $-4r-16n$ & $r<-4$ \\
$-4$ & $0$ & $0$ & $32$ & $-16n+16$ & $r=-4$ \\
$-4$ & $0$ & $-4r-18$ & $-4r$ & $-4r-16n$ & $-4<r<0$ \\
$-4$ & $0$ & $-18$ & $0$ & $-16n$ & $r=0$ \\
$-4$ & $0$ & $-4r-18$ & $-4r-2$ & $-4r-16n$ & $0<r<-4n$ \\
$-4$ & $0$ & $16n-18$ & $16n-2$ & $0$ & $r=-4n$ \\
$-4$ & $0$ & $-4r-18$ & $-4r-2$ & $-4r-16n-2$ & $r>-4n$\\ \hline
\end{tabular}
$$
$$
\begin{tabular}{@{}cccccr@{}}
\multicolumn{6}{@{}c}{Table 3. Boundary Slope Table for $D_{r}(K(n))$ with $n\geq2$ via Lemma~\ref{bdyslopegluing}}\\ \hline
$\varnothing$ & $\varnothing$ & $1/(-4-r)$ & $1/(-r)$ & $1/(-4n-2-r)$ & \\ \hline
$-4$ & $0$ & $-4r-16$ & $-4r$ & $-4r-16n-8$ & $r<-4n-2$ \\
$-4$ & $0$ & $16n-8$ & $16n+32$ & $0$ & $r=-4n-2$ \\
$-4$ & $0$ & $-4r-16$ & $-4r$ & $-4r-16n-10$ & $-4n-2<r<-4$ \\
$-4$ & $0$ & $0$ & $16$ & $-16n+6$ & $r=-4$ \\
$-4$ & $0$ & $-4r-18$ & $-4r$ & $-4r-16n-10$ & $-4<r<0$ \\
$-4$ & $0$ & $-18$ & $0$ & $-16n-10$ & $r=0$ \\
$-4$ & $0$ & $-4r-18$ & $-4r-2$ & $-4r-16n-10$ & $r>0$\\ \hline
\end{tabular}
$$
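As a check on the column headings of Table~2: for $n\leq-1$ the slopes $-4,0,-4n\in\mathcal{BS}_{K(n)}$ are sent by the computation $\phi_{r\ast}(p,q)=(q,p-qr)$ to
$$
m_{y}=\frac{1}{-4-r},\hspace{20pt}m_{y}=\frac{1}{-r},\hspace{20pt}m_{y}=\frac{1}{-4n-r},
$$
respectively, which are exactly the slope-pair columns appearing in the table.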
We see that the boundary slope corresponding to $1/(-r)$ is $-4r$ when $r\leq0$, and $-4r-2$ when $r>0$ (regardless of choice of $n$), which are the boundary slopes coming from $\mathcal{BS}_{K(r)}$.
Hence, we see that $\widetilde{P}_{K(-1),r}$ for $r\in\mathbb{Z}$ cannot be equal to $\widetilde{A}_{K(m)}$ for any $m\neq-4$ since the boundary slopes from $K(m)$ are $\{-4,0,-4m\}$ while the strongly detected boundary slopes coming from $P_{K(-1),r}$ are
$$
\begin{cases}
\{-4,0,-4r-16,16-4r\}&:r<-4\\
\{-4,0,32\}&:r=-4\\
\{-4,0,-4r-18,16-4r\}&:-4<r<0\\
\{-4,0,-18,16\}&:r=0\\
\{-4,0,-4r-18,16-4r\}&:0<r<4\\
\{-4,0,-34\}&:r=4\\
\{-4,0,-4r-18,-4r+14\}&:r>4.
\end{cases}
$$
Notice that in the special cases $r=\pm4$, the factor $P_{K(-1),\pm4}$ has slopes identical to those of $\widetilde{A}_{K(\pm8)}$; however, the polynomials themselves differ by direct computation, and so $P_{K(-1),r}\neq\widetilde{A}_{K(m)}$ for any $m$.
In practice, we find that exactly one factor $P_{K(-1),r}$ has Newton polygon $\mathrm{Newt}(P_{K(-1),r})$ exhibiting these slopes, while the only other observed factor $Q_{K(-1),r}$ has Newton polygon $\mathrm{Newt}(Q_{K(-1),r})$ exhibiting a slope of $2$ (which never appears among the predicted slopes for any $r$).
While the computation remains difficult, a formula for the simple case when $n=-1$ is presented (having been verified for $r=-11,\ldots,11$ using the boundary slopes) in Remark~\ref{fig8dbl}.
From the computed examples, it is apparent that the $A$-polynomial of the $r$-twisted Whitehead double of a non-graph knot is not a straightforward computation; more optimistically, the $A$-polynomials of $r$-twisted Whitehead doubles still exhibit some connection to the $A$-polynomials of twist knots, as seen in the $\widetilde{A}_{K(r-4)}\widetilde{A}_{K(r+4)}$ term of the expression.
\section{Conclusion}
\label{conclusion}
In summary, we have provided formulas for computing $A$-polynomials of several families of satellite knots; namely, connected sums and iterated cables of pseudo-graph knots and all winding number zero satellites of graph knots.
From this, the $A$-polynomials of all graph knots can be computed once the construction of the graph knot as cables and connected sums is understood, and will have zero logarithmic Mahler measure.
For graph knots, the main property which allows winding number zero satellites to be computed is that their $A$-polynomials have no gaps and they have killing slopes.
Further calculations show that these killing slopes are connected to the knots $f(P)_{r}$ obtained from $(1/q)$-Dehn filling on $\partial V$.
One future goal is a strategy for understanding how to more generally compute the factor $\widetilde{F}_{\mathrm{Sat}(P,C,f)}$ for various families of knots, either recursively or explicitly, broadening the understanding of the $A$-polynomials of satellite knots.
Another direction is to find explicit formulas for the $A$-polynomials of certain knots, thereby extending the applications of the cabling formula and eliminating the need for polynomial reduction in certain cases.
As mentioned, it is unclear whether the $A$-polynomial of a graph knot can also be the $A$-polynomial of a knot with positive hyperbolic volume; more generally, it is unclear whether a satellite knot $K=\mathrm{Sat}(P,C,f)$ with $\mathrm{m}(A_{K})=0$ must also have $\mathrm{Vol}(\mathcal{M}_{K})=0$.
However, Corollary~\ref{graphknotsgz} implies the converse, and counterexamples remain difficult to find.
| { "redpajama_set_name": "RedPajamaArXiv" } | 4,716 |
Atelier LWD was an architecture studio led by Guy Lagneau, Jean Dimitrijevic and Michel Weill that was active from 1952 to 1985.
It later took the name of "Atelier d'Etudes Architecturales" (ATEA) (Architectural Studies Workshop) with the addition of Paul Cordoliani, Henri Coulomb (1927–2006), Renzo Moro and Ivan Seifert (1926–2008).
The studio originated many public buildings in France and Africa.
History
Guy Lagneau (1915–1996) and Michel Weill (1914–2001) met in the studio that Auguste Perret established at the National School of Fine Arts in 1943. They participated with Perret in the reconstruction of Le Havre from 1946, work that was later declared a World Heritage Site by UNESCO.
Lagneau was particularly influenced by Scandinavian architecture, especially its use of steel.
Jean Dimitrijevic (1926–2010) joined the agency in 1947 after meeting Guy Lagneau in a Fine Arts workshop he was running. He graduated in 1957 and completed his training at the Massachusetts Institute of Technology in 1959. He then became a partner of the workshop.
The architects created the ATEA in association with a consulting firm, Société d'études techniques et d'aménagements planifiés (SETAP). ATEA-SETAP was involved in many planning operations in Africa, including Guinea, Mauritania and Senegal. At the same time, they accepted numerous public commissions for museums, prefectures, and shopping centers in France. Lagneau also participated as an individual in the preparation of master plans and urban development in the Paris region between 1962 and 1965, contributing to the creation of new towns.
On many occasions the agency worked with Jean Prouvé in creating innovative metal structures and with the designer Charlotte Perriand for interior design.
Key achievements
The partners were responsible for many significant projects, including:
1952–1955: Paul Bert d'Aplemont School group in Le Havre.
1953–1954: Hotel de France (Conakry) with Jean Prouvé and Charlotte Perriand
1955–1961: Musée des Beaux-Arts André Malraux in Le Havre (with Jean Prouvé and Charlotte Perriand)
1957: Ore port of Boké in Guinea.
1957: Taiba Mbaye Senegal.
1958: Prototype of the House of the Sahara for the Ideal Home Exhibition at the Grand Palais in Paris with Jean Prouvé and Charlotte Perriand (destroyed).
1958–1959: "Les Buffets" housing on the Avenue du Marechal Foch in Fontenay-aux-Roses (Hauts-de-Seine), in collaboration with John Perrottet
1958: City of Cansado in Mauritania.
1960: Agency office building for Air France, rue Scribe in the 9th district of Paris, in collaboration with Charlotte Perriand
1961: The town of Maspalomas, Gran Canaria, Spain
1963: Office building of the Union de Transports Aériens (UTA) Boulevard Malesherbes in Paris
1964–1967: Faculty of Letters of Nice
1965: Primary School Balizy in Longjumeau (Essonne) in collaboration with Jean Prouvé
1965–1967: Normal School and High School of Bamako in Mali (1200 students)
1965–1985: Administrative center of Evry (Essonne) (Prefecture, County Council, Courthouse).
1971–1976: Administrative offices, joint distribution centers EDF-GDF, Val-d'Oise
1971–1987: Vacation rentals Marines Cogolin
1972–1985: Les Quatre Temps shopping center at La Défense near Paris.
1979–1982: Inner station of Cergy-Pontoise (Val-d'Oise), in association with Ivan Seifert
1981–1985: Office of the Bank of France in Marne-la-Vallée (Seine-et-Marne)
Architecture firms of France
| { "redpajama_set_name": "RedPajamaWikipedia" } | 6,435 |
Yukon District – an administrative region of Canada's Northwest Territories, created in 1882 and covering the northwestern reaches of the territories. In 1887 the district was converted into the Yukon Territory.
Municipal districts of Canada
Yukon
| { "redpajama_set_name": "RedPajamaWikipedia" } | 6,237 |
Black ion-plated stainless steel case and link bracelet. Fixed black ion-plated bezel showing tachymeter markings. Black dial with luminous hands and index hour markers. Minute markers around the outer rim. Luminescent hands and markers. Date display appears at the 4 o'clock position. Chronograph - three sub-dials displaying: 12 hours (dual time), 60 minutes and 60 seconds. Quartz movement. Scratch resistant mineral crystal. Solid case back. Case diameter: 44 mm. Case thickness: 13 mm. Deployment clasp. Water resistant at 100 meters/ 330 feet. Functions: hours, minutes, seconds, date, chronograph, tachymeter. Fossil Black Stainless Steel Chronograph Mens Watch CH2601.
| { "redpajama_set_name": "RedPajamaC4" } | 8,448 |
// @flow
import React from 'react';
import { translate } from '../../../../base/i18n';
import JitsiScreen from '../../../../base/modal/components/JitsiScreen';
import { connect } from '../../../../base/redux';
import HeaderNavigationButton
from '../../../../mobile/navigation/components/HeaderNavigationButton';
import { goBack } from
'../../../../mobile/navigation/components/conference/ConferenceNavigationContainerRef';
import { RECORDING_TYPES } from '../../../constants';
import AbstractStartRecordingDialog, {
type Props,
mapStateToProps
} from '../AbstractStartRecordingDialog';
import StartRecordingDialogContent from '../StartRecordingDialogContent';
import styles from '../styles.native';
/**
* React Component for getting confirmation to start a file recording session in
* progress.
*
* @augments Component
*/
class StartRecordingDialog extends AbstractStartRecordingDialog<Props> {
/**
* Constructor of the component.
*
* @inheritdoc
*/
constructor(props: Props) {
super(props);
this._onStartPress = this._onStartPress.bind(this);
}
/**
* Implements React's {@link Component#componentDidMount()}. Invoked
* immediately after this component is mounted.
*
* @inheritdoc
* @returns {void}
*/
componentDidMount() {
super.componentDidMount();
const { navigation, t } = this.props;
navigation.setOptions({
headerRight: () => (
<HeaderNavigationButton
disabled = { this.isStartRecordingDisabled() }
label = { t('dialog.start') }
onPress = { this._onStartPress }
twoActions = { true } />
)
});
}
/**
* Implements React's {@link Component#componentDidUpdate()}. Invoked
* immediately after this component is updated.
*
* @inheritdoc
* @returns {void}
*/
componentDidUpdate(prevProps) {
super.componentDidUpdate(prevProps);
const { navigation, t } = this.props;
navigation.setOptions({
// eslint-disable-next-line react/no-multi-comp
headerRight: () => (
<HeaderNavigationButton
disabled = { this.isStartRecordingDisabled() }
label = { t('dialog.start') }
onPress = { this._onStartPress }
twoActions = { true } />
)
});
}
_onStartPress: () => void;
/**
* Starts recording session and goes back to the previous screen.
*
* @returns {void}
*/
_onStartPress() {
this._onSubmit() && goBack();
}
isStartRecordingDisabled: () => boolean;
/**
* Disables start recording button.
*
* @returns {boolean}
*/
isStartRecordingDisabled() {
const { isTokenValid, selectedRecordingService } = this.state;
// The Jitsi recording service requires no extra setup, so the start
// button is always enabled for it. For Dropbox, the button is enabled
// only once a valid token is available. Otherwise (no recording
// service selected yet), keep the button disabled.
if (selectedRecordingService === RECORDING_TYPES.JITSI_REC_SERVICE) {
return false;
} else if (selectedRecordingService === RECORDING_TYPES.DROPBOX) {
return !isTokenValid;
}
return true;
}
/**
* Implements React's {@link Component#render()}.
*
* @inheritdoc
*/
render() {
const {
isTokenValid,
isValidating,
selectedRecordingService,
sharingEnabled,
spaceLeft,
userName
} = this.state;
const {
_fileRecordingsServiceEnabled,
_fileRecordingsServiceSharingEnabled
} = this.props;
return (
<JitsiScreen style = { styles.startRecodingContainer }>
<StartRecordingDialogContent
fileRecordingsServiceEnabled = { _fileRecordingsServiceEnabled }
fileRecordingsServiceSharingEnabled = { _fileRecordingsServiceSharingEnabled }
integrationsEnabled = { this._areIntegrationsEnabled() }
isTokenValid = { isTokenValid }
isValidating = { isValidating }
onChange = { this._onSelectedRecordingServiceChanged }
onSharingSettingChanged = { this._onSharingSettingChanged }
selectedRecordingService = { selectedRecordingService }
sharingSetting = { sharingEnabled }
spaceLeft = { spaceLeft }
userName = { userName } />
</JitsiScreen>
);
}
_areIntegrationsEnabled: () => boolean;
_onSubmit: () => boolean;
_onSelectedRecordingServiceChanged: (string) => void;
_onSharingSettingChanged: () => void;
}
export default translate(connect(mapStateToProps)(StartRecordingDialog));
| { "redpajama_set_name": "RedPajamaGithub" } | 9,701 |
package github
import (
"context"
"encoding/json"
"fmt"
"net/http"
"reflect"
"strings"
"testing"
)
func TestRepositoriesService_List_authenticatedUser(t *testing.T) {
setup()
defer teardown()
acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeCodesOfConductPreview, mediaTypeTopicsPreview}
mux.HandleFunc("/user/repos", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", strings.Join(acceptHeaders, ", "))
fmt.Fprint(w, `[{"id":1},{"id":2}]`)
})
repos, _, err := client.Repositories.List(context.Background(), "", nil)
if err != nil {
t.Errorf("Repositories.List returned error: %v", err)
}
want := []*Repository{{ID: Int(1)}, {ID: Int(2)}}
if !reflect.DeepEqual(repos, want) {
t.Errorf("Repositories.List returned %+v, want %+v", repos, want)
}
}
func TestRepositoriesService_List_specifiedUser(t *testing.T) {
setup()
defer teardown()
acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeCodesOfConductPreview, mediaTypeTopicsPreview}
mux.HandleFunc("/users/u/repos", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", strings.Join(acceptHeaders, ", "))
testFormValues(t, r, values{
"visibility": "public",
"affiliation": "owner,collaborator",
"sort": "created",
"direction": "asc",
"page": "2",
})
fmt.Fprint(w, `[{"id":1}]`)
})
opt := &RepositoryListOptions{
Visibility: "public",
Affiliation: "owner,collaborator",
Sort: "created",
Direction: "asc",
ListOptions: ListOptions{Page: 2},
}
repos, _, err := client.Repositories.List(context.Background(), "u", opt)
if err != nil {
t.Errorf("Repositories.List returned error: %v", err)
}
want := []*Repository{{ID: Int(1)}}
if !reflect.DeepEqual(repos, want) {
t.Errorf("Repositories.List returned %+v, want %+v", repos, want)
}
}
func TestRepositoriesService_List_specifiedUser_type(t *testing.T) {
setup()
defer teardown()
acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeCodesOfConductPreview, mediaTypeTopicsPreview}
mux.HandleFunc("/users/u/repos", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", strings.Join(acceptHeaders, ", "))
testFormValues(t, r, values{
"type": "owner",
})
fmt.Fprint(w, `[{"id":1}]`)
})
opt := &RepositoryListOptions{
Type: "owner",
}
repos, _, err := client.Repositories.List(context.Background(), "u", opt)
if err != nil {
t.Errorf("Repositories.List returned error: %v", err)
}
want := []*Repository{{ID: Int(1)}}
if !reflect.DeepEqual(repos, want) {
t.Errorf("Repositories.List returned %+v, want %+v", repos, want)
}
}
func TestRepositoriesService_List_invalidUser(t *testing.T) {
_, _, err := client.Repositories.List(context.Background(), "%", nil)
testURLParseError(t, err)
}
func TestRepositoriesService_ListByOrg(t *testing.T) {
setup()
defer teardown()
acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeCodesOfConductPreview, mediaTypeTopicsPreview}
mux.HandleFunc("/orgs/o/repos", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", strings.Join(acceptHeaders, ", "))
testFormValues(t, r, values{
"type": "forks",
"page": "2",
})
fmt.Fprint(w, `[{"id":1}]`)
})
opt := &RepositoryListByOrgOptions{"forks", ListOptions{Page: 2}}
repos, _, err := client.Repositories.ListByOrg(context.Background(), "o", opt)
if err != nil {
t.Errorf("Repositories.ListByOrg returned error: %v", err)
}
want := []*Repository{{ID: Int(1)}}
if !reflect.DeepEqual(repos, want) {
t.Errorf("Repositories.ListByOrg returned %+v, want %+v", repos, want)
}
}
func TestRepositoriesService_ListByOrg_invalidOrg(t *testing.T) {
_, _, err := client.Repositories.ListByOrg(context.Background(), "%", nil)
testURLParseError(t, err)
}
func TestRepositoriesService_ListAll(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repositories", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testFormValues(t, r, values{
"since": "1",
})
fmt.Fprint(w, `[{"id":1}]`)
})
opt := &RepositoryListAllOptions{1}
repos, _, err := client.Repositories.ListAll(context.Background(), opt)
if err != nil {
t.Errorf("Repositories.ListAll returned error: %v", err)
}
want := []*Repository{{ID: Int(1)}}
if !reflect.DeepEqual(repos, want) {
t.Errorf("Repositories.ListAll returned %+v, want %+v", repos, want)
}
}
func TestRepositoriesService_Create_user(t *testing.T) {
setup()
defer teardown()
input := &Repository{Name: String("n")}
mux.HandleFunc("/user/repos", func(w http.ResponseWriter, r *http.Request) {
v := new(Repository)
json.NewDecoder(r.Body).Decode(v)
testMethod(t, r, "POST")
if !reflect.DeepEqual(v, input) {
t.Errorf("Request body = %+v, want %+v", v, input)
}
fmt.Fprint(w, `{"id":1}`)
})
repo, _, err := client.Repositories.Create(context.Background(), "", input)
if err != nil {
t.Errorf("Repositories.Create returned error: %v", err)
}
want := &Repository{ID: Int(1)}
if !reflect.DeepEqual(repo, want) {
t.Errorf("Repositories.Create returned %+v, want %+v", repo, want)
}
}
func TestRepositoriesService_Create_org(t *testing.T) {
setup()
defer teardown()
input := &Repository{Name: String("n")}
mux.HandleFunc("/orgs/o/repos", func(w http.ResponseWriter, r *http.Request) {
v := new(Repository)
json.NewDecoder(r.Body).Decode(v)
testMethod(t, r, "POST")
if !reflect.DeepEqual(v, input) {
t.Errorf("Request body = %+v, want %+v", v, input)
}
fmt.Fprint(w, `{"id":1}`)
})
repo, _, err := client.Repositories.Create(context.Background(), "o", input)
if err != nil {
t.Errorf("Repositories.Create returned error: %v", err)
}
want := &Repository{ID: Int(1)}
if !reflect.DeepEqual(repo, want) {
t.Errorf("Repositories.Create returned %+v, want %+v", repo, want)
}
}
func TestRepositoriesService_Create_invalidOrg(t *testing.T) {
_, _, err := client.Repositories.Create(context.Background(), "%", nil)
testURLParseError(t, err)
}
func TestRepositoriesService_Get(t *testing.T) {
setup()
defer teardown()
acceptHeaders := []string{mediaTypeLicensesPreview, mediaTypeCodesOfConductPreview, mediaTypeTopicsPreview}
mux.HandleFunc("/repos/o/r", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", strings.Join(acceptHeaders, ", "))
fmt.Fprint(w, `{"id":1,"name":"n","description":"d","owner":{"login":"l"},"license":{"key":"mit"}}`)
})
repo, _, err := client.Repositories.Get(context.Background(), "o", "r")
if err != nil {
t.Errorf("Repositories.Get returned error: %v", err)
}
want := &Repository{ID: Int(1), Name: String("n"), Description: String("d"), Owner: &User{Login: String("l")}, License: &License{Key: String("mit")}}
if !reflect.DeepEqual(repo, want) {
t.Errorf("Repositories.Get returned %+v, want %+v", repo, want)
}
}
func TestRepositoriesService_GetCodeOfConduct(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/community/code_of_conduct", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", mediaTypeCodesOfConductPreview)
fmt.Fprint(w, `{
"key": "key",
"name": "name",
"url": "url",
"body": "body"}`,
)
})
coc, _, err := client.Repositories.GetCodeOfConduct(context.Background(), "o", "r")
if err != nil {
t.Errorf("Repositories.GetCodeOfConduct returned error: %v", err)
}
want := &CodeOfConduct{
Key: String("key"),
Name: String("name"),
URL: String("url"),
Body: String("body"),
}
if !reflect.DeepEqual(coc, want) {
t.Errorf("Repositories.Get returned %+v, want %+v", coc, want)
}
}
func TestRepositoriesService_GetByID(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repositories/1", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", mediaTypeLicensesPreview)
fmt.Fprint(w, `{"id":1,"name":"n","description":"d","owner":{"login":"l"},"license":{"key":"mit"}}`)
})
repo, _, err := client.Repositories.GetByID(context.Background(), 1)
if err != nil {
t.Fatalf("Repositories.GetByID returned error: %v", err)
}
want := &Repository{ID: Int(1), Name: String("n"), Description: String("d"), Owner: &User{Login: String("l")}, License: &License{Key: String("mit")}}
if !reflect.DeepEqual(repo, want) {
t.Errorf("Repositories.GetByID returned %+v, want %+v", repo, want)
}
}
func TestRepositoriesService_Edit(t *testing.T) {
setup()
defer teardown()
i := true
input := &Repository{HasIssues: &i}
mux.HandleFunc("/repos/o/r", func(w http.ResponseWriter, r *http.Request) {
v := new(Repository)
json.NewDecoder(r.Body).Decode(v)
testMethod(t, r, "PATCH")
if !reflect.DeepEqual(v, input) {
t.Errorf("Request body = %+v, want %+v", v, input)
}
fmt.Fprint(w, `{"id":1}`)
})
repo, _, err := client.Repositories.Edit(context.Background(), "o", "r", input)
if err != nil {
t.Errorf("Repositories.Edit returned error: %v", err)
}
want := &Repository{ID: Int(1)}
if !reflect.DeepEqual(repo, want) {
t.Errorf("Repositories.Edit returned %+v, want %+v", repo, want)
}
}
func TestRepositoriesService_Delete(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "DELETE")
})
_, err := client.Repositories.Delete(context.Background(), "o", "r")
if err != nil {
t.Errorf("Repositories.Delete returned error: %v", err)
}
}
func TestRepositoriesService_Get_invalidOwner(t *testing.T) {
_, _, err := client.Repositories.Get(context.Background(), "%", "r")
testURLParseError(t, err)
}
func TestRepositoriesService_Edit_invalidOwner(t *testing.T) {
_, _, err := client.Repositories.Edit(context.Background(), "%", "r", nil)
testURLParseError(t, err)
}
func TestRepositoriesService_ListContributors(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/contributors", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testFormValues(t, r, values{
"anon": "true",
"page": "2",
})
fmt.Fprint(w, `[{"contributions":42}]`)
})
opts := &ListContributorsOptions{Anon: "true", ListOptions: ListOptions{Page: 2}}
contributors, _, err := client.Repositories.ListContributors(context.Background(), "o", "r", opts)
if err != nil {
t.Errorf("Repositories.ListContributors returned error: %v", err)
}
want := []*Contributor{{Contributions: Int(42)}}
if !reflect.DeepEqual(contributors, want) {
t.Errorf("Repositories.ListContributors returned %+v, want %+v", contributors, want)
}
}
func TestRepositoriesService_ListLanguages(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/languages", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
fmt.Fprint(w, `{"go":1}`)
})
languages, _, err := client.Repositories.ListLanguages(context.Background(), "o", "r")
if err != nil {
t.Errorf("Repositories.ListLanguages returned error: %v", err)
}
want := map[string]int{"go": 1}
if !reflect.DeepEqual(languages, want) {
t.Errorf("Repositories.ListLanguages returned %+v, want %+v", languages, want)
}
}
func TestRepositoriesService_ListTeams(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/teams", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testFormValues(t, r, values{"page": "2"})
fmt.Fprint(w, `[{"id":1}]`)
})
opt := &ListOptions{Page: 2}
teams, _, err := client.Repositories.ListTeams(context.Background(), "o", "r", opt)
if err != nil {
t.Errorf("Repositories.ListTeams returned error: %v", err)
}
want := []*Team{{ID: Int(1)}}
if !reflect.DeepEqual(teams, want) {
t.Errorf("Repositories.ListTeams returned %+v, want %+v", teams, want)
}
}
func TestRepositoriesService_ListTags(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/tags", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testFormValues(t, r, values{"page": "2"})
fmt.Fprint(w, `[{"name":"n", "commit" : {"sha" : "s", "url" : "u"}, "zipball_url": "z", "tarball_url": "t"}]`)
})
opt := &ListOptions{Page: 2}
tags, _, err := client.Repositories.ListTags(context.Background(), "o", "r", opt)
if err != nil {
t.Errorf("Repositories.ListTags returned error: %v", err)
}
want := []*RepositoryTag{
{
Name: String("n"),
Commit: &Commit{
SHA: String("s"),
URL: String("u"),
},
ZipballURL: String("z"),
TarballURL: String("t"),
},
}
if !reflect.DeepEqual(tags, want) {
t.Errorf("Repositories.ListTags returned %+v, want %+v", tags, want)
}
}
func TestRepositoriesService_ListBranches(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/branches", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
testFormValues(t, r, values{"page": "2"})
fmt.Fprint(w, `[{"name":"master", "commit" : {"sha" : "a57781", "url" : "https://api.github.com/repos/o/r/commits/a57781"}}]`)
})
opt := &ListOptions{Page: 2}
branches, _, err := client.Repositories.ListBranches(context.Background(), "o", "r", opt)
if err != nil {
t.Errorf("Repositories.ListBranches returned error: %v", err)
}
want := []*Branch{{Name: String("master"), Commit: &RepositoryCommit{SHA: String("a57781"), URL: String("https://api.github.com/repos/o/r/commits/a57781")}}}
if !reflect.DeepEqual(branches, want) {
t.Errorf("Repositories.ListBranches returned %+v, want %+v", branches, want)
}
}
func TestRepositoriesService_GetBranch(t *testing.T) {
setup()
defer teardown()
mux.HandleFunc("/repos/o/r/branches/b", func(w http.ResponseWriter, r *http.Request) {
testMethod(t, r, "GET")
testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
fmt.Fprint(w, `{"name":"n", "commit":{"sha":"s","commit":{"message":"m"}}, "protected":true}`)
})
branch, _, err := client.Repositories.GetBranch(context.Background(), "o", "r", "b")
if err != nil {
t.Errorf("Repositories.GetBranch returned error: %v", err)
}
want := &Branch{
Name: String("n"),
Commit: &RepositoryCommit{
SHA: String("s"),
Commit: &Commit{
Message: String("m"),
},
},
Protected: Bool(true),
}
if !reflect.DeepEqual(branch, want) {
t.Errorf("Repositories.GetBranch returned %+v, want %+v", branch, want)
}
}
func TestRepositoriesService_GetBranchProtection(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"required_status_checks":{"strict":true,"contexts":["continuous-integration"]},"required_pull_request_reviews":{"dismissal_restrictions":{"users":[{"id":3,"login":"u"}],"teams":[{"id":4,"slug":"t"}]},"dismiss_stale_reviews":true,"require_code_owner_reviews":true},"enforce_admins":{"url":"/repos/o/r/branches/b/protection/enforce_admins","enabled":true},"restrictions":{"users":[{"id":1,"login":"u"}],"teams":[{"id":2,"slug":"t"}]}}`)
	})

	protection, _, err := client.Repositories.GetBranchProtection(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.GetBranchProtection returned error: %v", err)
	}

	want := &Protection{
		RequiredStatusChecks: &RequiredStatusChecks{
			Strict: true,
			Contexts: []string{"continuous-integration"},
		},
		RequiredPullRequestReviews: &PullRequestReviewsEnforcement{
			DismissStaleReviews: true,
			DismissalRestrictions: DismissalRestrictions{
				Users: []*User{
					{Login: String("u"), ID: Int(3)},
				},
				Teams: []*Team{
					{Slug: String("t"), ID: Int(4)},
				},
			},
			RequireCodeOwnerReviews: true,
		},
		EnforceAdmins: &AdminEnforcement{
			URL: String("/repos/o/r/branches/b/protection/enforce_admins"),
			Enabled: true,
		},
		Restrictions: &BranchRestrictions{
			Users: []*User{
				{Login: String("u"), ID: Int(1)},
			},
			Teams: []*Team{
				{Slug: String("t"), ID: Int(2)},
			},
		},
	}
	if !reflect.DeepEqual(protection, want) {
		t.Errorf("Repositories.GetBranchProtection returned %+v, want %+v", protection, want)
	}
}
func TestRepositoriesService_UpdateBranchProtection(t *testing.T) {
	setup()
	defer teardown()

	input := &ProtectionRequest{
		RequiredStatusChecks: &RequiredStatusChecks{
			Strict: true,
			Contexts: []string{"continuous-integration"},
		},
		RequiredPullRequestReviews: &PullRequestReviewsEnforcementRequest{
			DismissStaleReviews: true,
			DismissalRestrictionsRequest: &DismissalRestrictionsRequest{
				Users: []string{"uu"},
				Teams: []string{"tt"},
			},
		},
		Restrictions: &BranchRestrictionsRequest{
			Users: []string{"u"},
			Teams: []string{"t"},
		},
	}

	mux.HandleFunc("/repos/o/r/branches/b/protection", func(w http.ResponseWriter, r *http.Request) {
		v := new(ProtectionRequest)
		if err := json.NewDecoder(r.Body).Decode(v); err != nil {
			t.Errorf("decoding request body: %v", err)
		}

		testMethod(t, r, "PUT")
		if !reflect.DeepEqual(v, input) {
			t.Errorf("Request body = %+v, want %+v", v, input)
		}
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"required_status_checks":{"strict":true,"contexts":["continuous-integration"]},"required_pull_request_reviews":{"dismissal_restrictions":{"users":[{"id":3,"login":"uu"}],"teams":[{"id":4,"slug":"tt"}]},"dismiss_stale_reviews":true,"require_code_owner_reviews":true},"restrictions":{"users":[{"id":1,"login":"u"}],"teams":[{"id":2,"slug":"t"}]}}`)
	})

	protection, _, err := client.Repositories.UpdateBranchProtection(context.Background(), "o", "r", "b", input)
	if err != nil {
		t.Errorf("Repositories.UpdateBranchProtection returned error: %v", err)
	}

	want := &Protection{
		RequiredStatusChecks: &RequiredStatusChecks{
			Strict: true,
			Contexts: []string{"continuous-integration"},
		},
		RequiredPullRequestReviews: &PullRequestReviewsEnforcement{
			DismissStaleReviews: true,
			DismissalRestrictions: DismissalRestrictions{
				Users: []*User{
					{Login: String("uu"), ID: Int(3)},
				},
				Teams: []*Team{
					{Slug: String("tt"), ID: Int(4)},
				},
			},
			RequireCodeOwnerReviews: true,
		},
		Restrictions: &BranchRestrictions{
			Users: []*User{
				{Login: String("u"), ID: Int(1)},
			},
			Teams: []*Team{
				{Slug: String("t"), ID: Int(2)},
			},
		},
	}
	if !reflect.DeepEqual(protection, want) {
		t.Errorf("Repositories.UpdateBranchProtection returned %+v, want %+v", protection, want)
	}
}
func TestRepositoriesService_RemoveBranchProtection(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "DELETE")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		w.WriteHeader(http.StatusNoContent)
	})

	_, err := client.Repositories.RemoveBranchProtection(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.RemoveBranchProtection returned error: %v", err)
	}
}
func TestRepositoriesService_ListLanguages_invalidOwner(t *testing.T) {
	_, _, err := client.Repositories.ListLanguages(context.Background(), "%", "%")
	testURLParseError(t, err)
}
func TestRepositoriesService_License(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/license", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		fmt.Fprint(w, `{"name": "LICENSE", "path": "LICENSE", "license":{"key":"mit","name":"MIT License","spdx_id":"MIT","url":"https://api.github.com/licenses/mit","featured":true}}`)
	})

	got, _, err := client.Repositories.License(context.Background(), "o", "r")
	if err != nil {
		t.Errorf("Repositories.License returned error: %v", err)
	}

	want := &RepositoryLicense{
		Name: String("LICENSE"),
		Path: String("LICENSE"),
		License: &License{
			Name: String("MIT License"),
			Key: String("mit"),
			SPDXID: String("MIT"),
			URL: String("https://api.github.com/licenses/mit"),
			Featured: Bool(true),
		},
	}
	if !reflect.DeepEqual(got, want) {
		t.Errorf("Repositories.License returned %+v, want %+v", got, want)
	}
}
func TestRepositoriesService_GetRequiredStatusChecks(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_status_checks", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"strict": true,"contexts": ["x","y","z"]}`)
	})

	checks, _, err := client.Repositories.GetRequiredStatusChecks(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.GetRequiredStatusChecks returned error: %v", err)
	}

	want := &RequiredStatusChecks{
		Strict: true,
		Contexts: []string{"x", "y", "z"},
	}
	if !reflect.DeepEqual(checks, want) {
		t.Errorf("Repositories.GetRequiredStatusChecks returned %+v, want %+v", checks, want)
	}
}
func TestRepositoriesService_ListRequiredStatusChecksContexts(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_status_checks/contexts", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `["x", "y", "z"]`)
	})

	contexts, _, err := client.Repositories.ListRequiredStatusChecksContexts(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.ListRequiredStatusChecksContexts returned error: %v", err)
	}

	want := []string{"x", "y", "z"}
	if !reflect.DeepEqual(contexts, want) {
		t.Errorf("Repositories.ListRequiredStatusChecksContexts returned %+v, want %+v", contexts, want)
	}
}
func TestRepositoriesService_GetPullRequestReviewEnforcement(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_pull_request_reviews", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"dismissal_restrictions":{"users":[{"id":1,"login":"u"}],"teams":[{"id":2,"slug":"t"}]},"dismiss_stale_reviews":true,"require_code_owner_reviews":true}`)
	})

	enforcement, _, err := client.Repositories.GetPullRequestReviewEnforcement(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.GetPullRequestReviewEnforcement returned error: %v", err)
	}

	want := &PullRequestReviewsEnforcement{
		DismissStaleReviews: true,
		DismissalRestrictions: DismissalRestrictions{
			Users: []*User{
				{Login: String("u"), ID: Int(1)},
			},
			Teams: []*Team{
				{Slug: String("t"), ID: Int(2)},
			},
		},
		RequireCodeOwnerReviews: true,
	}
	if !reflect.DeepEqual(enforcement, want) {
		t.Errorf("Repositories.GetPullRequestReviewEnforcement returned %+v, want %+v", enforcement, want)
	}
}
func TestRepositoriesService_UpdatePullRequestReviewEnforcement(t *testing.T) {
	setup()
	defer teardown()

	input := &PullRequestReviewsEnforcementUpdate{
		DismissalRestrictionsRequest: &DismissalRestrictionsRequest{
			Users: []string{"u"},
			Teams: []string{"t"},
		},
	}

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_pull_request_reviews", func(w http.ResponseWriter, r *http.Request) {
		v := new(PullRequestReviewsEnforcementUpdate)
		if err := json.NewDecoder(r.Body).Decode(v); err != nil {
			t.Errorf("decoding request body: %v", err)
		}

		testMethod(t, r, "PATCH")
		if !reflect.DeepEqual(v, input) {
			t.Errorf("Request body = %+v, want %+v", v, input)
		}
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"dismissal_restrictions":{"users":[{"id":1,"login":"u"}],"teams":[{"id":2,"slug":"t"}]},"dismiss_stale_reviews":true,"require_code_owner_reviews":true}`)
	})

	enforcement, _, err := client.Repositories.UpdatePullRequestReviewEnforcement(context.Background(), "o", "r", "b", input)
	if err != nil {
		t.Errorf("Repositories.UpdatePullRequestReviewEnforcement returned error: %v", err)
	}

	want := &PullRequestReviewsEnforcement{
		DismissStaleReviews: true,
		DismissalRestrictions: DismissalRestrictions{
			Users: []*User{
				{Login: String("u"), ID: Int(1)},
			},
			Teams: []*Team{
				{Slug: String("t"), ID: Int(2)},
			},
		},
		RequireCodeOwnerReviews: true,
	}
	if !reflect.DeepEqual(enforcement, want) {
		t.Errorf("Repositories.UpdatePullRequestReviewEnforcement returned %+v, want %+v", enforcement, want)
	}
}
func TestRepositoriesService_DisableDismissalRestrictions(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_pull_request_reviews", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "PATCH")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		testBody(t, r, `{"dismissal_restrictions":[]}`+"\n")
		fmt.Fprint(w, `{"dismissal_restrictions":{"users":[],"teams":[]},"dismiss_stale_reviews":true,"require_code_owner_reviews":true}`)
	})

	enforcement, _, err := client.Repositories.DisableDismissalRestrictions(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.DisableDismissalRestrictions returned error: %v", err)
	}

	want := &PullRequestReviewsEnforcement{
		DismissStaleReviews: true,
		DismissalRestrictions: DismissalRestrictions{
			Users: []*User{},
			Teams: []*Team{},
		},
		RequireCodeOwnerReviews: true,
	}
	if !reflect.DeepEqual(enforcement, want) {
		t.Errorf("Repositories.DisableDismissalRestrictions returned %+v, want %+v", enforcement, want)
	}
}
func TestRepositoriesService_RemovePullRequestReviewEnforcement(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/required_pull_request_reviews", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "DELETE")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		w.WriteHeader(http.StatusNoContent)
	})

	_, err := client.Repositories.RemovePullRequestReviewEnforcement(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.RemovePullRequestReviewEnforcement returned error: %v", err)
	}
}
func TestRepositoriesService_GetAdminEnforcement(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/enforce_admins", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"url":"/repos/o/r/branches/b/protection/enforce_admins","enabled":true}`)
	})

	enforcement, _, err := client.Repositories.GetAdminEnforcement(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.GetAdminEnforcement returned error: %v", err)
	}

	want := &AdminEnforcement{
		URL: String("/repos/o/r/branches/b/protection/enforce_admins"),
		Enabled: true,
	}
	if !reflect.DeepEqual(enforcement, want) {
		t.Errorf("Repositories.GetAdminEnforcement returned %+v, want %+v", enforcement, want)
	}
}
func TestRepositoriesService_AddAdminEnforcement(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/enforce_admins", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "POST")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		fmt.Fprint(w, `{"url":"/repos/o/r/branches/b/protection/enforce_admins","enabled":true}`)
	})

	enforcement, _, err := client.Repositories.AddAdminEnforcement(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.AddAdminEnforcement returned error: %v", err)
	}

	want := &AdminEnforcement{
		URL: String("/repos/o/r/branches/b/protection/enforce_admins"),
		Enabled: true,
	}
	if !reflect.DeepEqual(enforcement, want) {
		t.Errorf("Repositories.AddAdminEnforcement returned %+v, want %+v", enforcement, want)
	}
}
func TestRepositoriesService_RemoveAdminEnforcement(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/branches/b/protection/enforce_admins", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "DELETE")
		testHeader(t, r, "Accept", mediaTypeProtectedBranchesPreview)
		w.WriteHeader(http.StatusNoContent)
	})

	_, err := client.Repositories.RemoveAdminEnforcement(context.Background(), "o", "r", "b")
	if err != nil {
		t.Errorf("Repositories.RemoveAdminEnforcement returned error: %v", err)
	}
}
func TestPullRequestReviewsEnforcementRequest_MarshalJSON_nilDismissalRestrictions(t *testing.T) {
	req := PullRequestReviewsEnforcementRequest{}

	// Use a name other than "json" so the encoding/json package is not shadowed.
	got, err := json.Marshal(req)
	if err != nil {
		t.Errorf("PullRequestReviewsEnforcementRequest.MarshalJSON returned error: %v", err)
	}

	want := `{"dismissal_restrictions":[],"dismiss_stale_reviews":false,"require_code_owner_reviews":false}`
	if want != string(got) {
		t.Errorf("PullRequestReviewsEnforcementRequest.MarshalJSON returned %+v, want %+v", string(got), want)
	}
}
func TestRepositoriesService_ListAllTopics(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/topics", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "GET")
		testHeader(t, r, "Accept", mediaTypeTopicsPreview)
		fmt.Fprint(w, `{"names":["go", "go-github", "github"]}`)
	})

	got, _, err := client.Repositories.ListAllTopics(context.Background(), "o", "r")
	if err != nil {
		t.Fatalf("Repositories.ListAllTopics returned error: %v", err)
	}

	want := &Topics{Names: []string{"go", "go-github", "github"}}
	if !reflect.DeepEqual(got, want) {
		t.Errorf("Repositories.ListAllTopics returned %+v, want %+v", got, want)
	}
}
func TestRepositoriesService_ReplaceAllTopics(t *testing.T) {
	setup()
	defer teardown()

	mux.HandleFunc("/repos/o/r/topics", func(w http.ResponseWriter, r *http.Request) {
		testMethod(t, r, "PUT")
		testHeader(t, r, "Accept", mediaTypeTopicsPreview)
		fmt.Fprint(w, `{"names":["go", "go-github", "github"]}`)
	})

	got, _, err := client.Repositories.ReplaceAllTopics(context.Background(), "o", "r", &Topics{Names: []string{"go", "go-github", "github"}})
	if err != nil {
		t.Fatalf("Repositories.ReplaceAllTopics returned error: %v", err)
	}

	want := &Topics{Names: []string{"go", "go-github", "github"}}
	if !reflect.DeepEqual(got, want) {
		t.Errorf("Repositories.ReplaceAllTopics returned %+v, want %+v", got, want)
	}
}
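All of the tests above lean on `setup()`/`teardown()` helpers (defined elsewhere in the package) that wire a `mux` and a `client` to a local test server. A minimal, self-contained sketch of that pattern using only the standard library's `net/http/httptest` — the path, handler, and `fetchBranch` helper here are illustrative stand-ins, not go-github's actual helpers:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetchBranch spins up a throwaway test server (the setup() analogue),
// issues the request a test would make, and returns the raw JSON body.
func fetchBranch() string {
	// A mux lets each test register a handler for the exact path it exercises.
	mux := http.NewServeMux()
	mux.HandleFunc("/repos/o/r/branches/b", func(w http.ResponseWriter, r *http.Request) {
		// Assert on the request, then write a canned response,
		// mirroring the testMethod/testHeader checks above.
		if r.Method != "GET" {
			panic("unexpected method: " + r.Method)
		}
		fmt.Fprint(w, `{"name":"b","protected":true}`)
	})

	// httptest.NewServer binds the mux to an ephemeral local port; a real
	// API client would be configured with server.URL as its base URL.
	server := httptest.NewServer(mux)
	defer server.Close() // the teardown() analogue

	resp, err := http.Get(server.URL + "/repos/o/r/branches/b")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchBranch())
}
```

Because the server lives only for the duration of the call, each test stays hermetic: no network access, and assertions on the request happen inside the handler itself.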
mvn -DaltDeploymentRepository=snapshot-repo::default::file:../maven-repo/snapshots clean deploy
HIST 101-103 - World Civilizations
A guide to literature and other resources for students of HIST 101, 102, and 103.
World History Association
World History: Bibliographic Resources
World History: Areas of Specialization
World History: General Online Resources
Find Books & Articles
Scholarly Versus Popular Sources
The American Historical Association is the largest professional organization serving historians in all fields and all professions. The AHA has become a trusted voice for history education, the professional work of historians, and the critical role of historical thinking in public life. Learn more about our work on behalf of the entire discipline and connect with the staff that is dedicated to advancing history and historical thinking for the benefit of all.
Teaching and Learning Section of AHA
Jobs and Professional Development
Publications and Directories
Dr Levine's CWU webpage
Dr. Levine's Webpage
Zotero Information
Zotero User Guide
Zotero Subject Guide from Harvard
Without individual memory, a person literally loses his or her identity, and would not know how to act in encounters with others. Imagine waking up one morning unable to tell total strangers from family and friends! Collective memory is similar, though its loss does not immediately paralyze everyday private activity. But ignorance of history (that is, absent or defective collective memory) does deprive us of the best available guide for public action, especially in encounters with outsiders, whether the outsiders are another nation, another civilization, or some special group within national borders. . . . Historical knowledge is no more and no less than carefully and critically constructed collective memory. . . . Clearly we need careful reflection about, and search for, enduring patterns and critical turning points in the past, for these are the historical facts that everyone needs to know.
-William McNeill, Why Study History? (1985)
History 302 Autumn 2018 Syllabus
Why Study History? Entire article by William McNeill can be downloaded here.
Why Study History? from Stearns
In the past, history has been justified for reasons we would no longer accept. For instance, one of the reasons history holds its place in current education is because earlier leaders believed that a knowledge of certain historical facts helped distinguish the educated from the uneducated; the person who could reel off the date of the Norman conquest of England (1066) or the name of the person who came up with the theory of evolution at about the same time that Darwin did (Wallace) was deemed superior, a better candidate for law school or even a business promotion. Knowledge of historical facts has been used as a screening device in many societies, from China to the United States, and the habit is still with us to some extent. Unfortunately, this use can encourage mindless memorization, a real but not very appealing aspect of the discipline.
History should be studied because it is essential to individuals and to society, and because it harbors beauty.
-Peter Stearns, "Why Study History?"
History of the World History Association (WHA)
The WHA is the foremost organization for the promotion of world history through the encouragement of teaching, research, and publication. It was founded in 1982 by a group of teachers and academics determined to address the needs and interests of what was then a newly emerging historical subdiscipline and teaching field.
The new world history emerged out of the shift in higher and secondary education away from a sole emphasis on national and regional histories toward broader cross-cultural, comparative, and global approaches. By the 1980s, instructors who had been asked to create new courses in this field, as well as scholars who had already begun laying its theoretical groundwork, came together in founding a new type of professional association, one that united the schools and the universities, teaching with research.
Since then, the WHA has grown four-fold, has garnered accolades for its award-winning Journal of World History, and has played a seminal role in shaping the field in the U.S. and around the world. Important for American secondary education, WHA members have been instrumental in establishing standards for world history teaching at the national and state levels as well as designing the AP World History course. At present, although its membership is still predominantly North American, the WHA is represented in over 35 countries and has an affiliate relationship with world history societies in Europe, Asia, and Africa.
Most important, the WHA brings together university professors, college and community college instructors, school teachers, graduate students, and independent scholars in a collegial camaraderie rarely found in more narrowly focused academic and professional societies. Still motivated by a larger sense of mission in preparing students and the public for an interdependent world, the WHA has been unique in bridging the gap between secondary and post-secondary educators.
World History: Bibliographic Resources
Recommended books and publications from the WHA.
Ecological Imperialism by Alfred W. Crosby (Contribution by); Donald Worster (Contribution by)
Guns, Germs, and Steel by Jared Diamond
The Human Web by J. R. McNeill; William H. McNeill
The Great Divergence by Kenneth Pomeranz
Shapes of World History in Twentieth-Century Scholarship by Jerry H. Bentley; Michael Adas (Editor)
The Perspective of the World by Fernand Braudel; Siân Reynolds (Translator)
The New World History by Ross E. Dunn (Editor); Laura J. Mitchell (Editor); Kerry Ward (Editor)
Reorient: Global Economy in the Asian Age by Andre Gunder Frank
Myth of Continents - A Critique of Metageography by Martin W. Lewis; K. Wigen
World History: Areas of Specialization
http://www.thewha.org/about-wha/areas-of-specialization-in-world-history/
Disciplines Related to World History
See Web page to expand
Regions in World History
Eras of World History by Dates
The Paleolithic Era of Human History (c.250,000-10,000 BCE)
The Early Agrarian Era (c.10,000-3,000 BCE)
The Later Agrarian Era (c.3,000 BCE-500CE)
The Post-Classical Era (c.500-1400CE)
The Early Modern Era (c.1400-1750CE)
The Industrial Era or Age (1750-1900CE)
The Modern Revolution (1750-2010 CE)
The Twentieth Century (1900-2000CE)
The Twenty-First Century (From 2000CE)
Genres of World History
Big History
Comparative World History
Environmental World History
World Systems Theory
Universal History
Cultural-Social Themes of World History
Cross-Cultural Encounters and Exchanges
High Culture
Social Structures
Economic Themes of World History
Globalization
Maritime Trade
Monocultural economies
Nomadic and pastoral peoples, impact on trade
Overland Trade
Plantation economies
Southernization
Trade Diasporas
Transoceanic voyaging
Political Themes of World History
Colonialism/imperialism
Empire/Empires
Frontiers and Borderlands
Rise, Decline and Fall of Civilizations
Westernization
War and Diplomacy
Environmental Themes of World History
Human Impact on the Environment
Internet History Sourcebooks Project: The Internet History Sourcebooks Project is a collection of public domain and copy-permitted historical texts presented cleanly (without advertising or excessive layout) for educational use.
Avalon Project: Documents in Law, History, and Diplomacy: Yale Law School hosts this collection of primary materials dating from 4000 B.C.E. to the 21st century. Included are the full text of laws, colony charters, acts, and declarations, presidential proclamations, treaties and formal negotiations affecting the United States judiciary system and governmental foreign policy in general.
Best of History Web Sites: This site has links to over 1,200 history-related websites that have been reviewed for quality, accuracy, and usefulness. Also included are links to K-12 history lesson plans, teacher guides, activities, games, quizzes, and more.
Mapping History: A project at the University of Oregon, this website contains modern maps illustrating historical topics in American, European, Latin American, and African history. Requires Shockwave Player 11.0, a free installation from Adobe.
PAIS International: Public Affairs Information Service: The PAIS (Public Affairs Information Service) International database covers a wide range of current and past public policy issues in countries throughout the world, emphasizing factual and statistical information.
World History for Us All: This website provides "a comprehensive model curriculum for teaching world history in middle and high schools." It features an overview of the integrative approach to world history, lesson plans arranged by "Big Eras," a glossary, and links to related websites.
WorldNews Network: Currently, World News has indexed over 130 million pages covering news about film, sports, entertainment, science, business, health, and every region on Earth. World News Network presents news from more than 1,000 reputable sources including mainstream providers (BBC, CNN, Reuters, Washington Post, Al Jazeera, etc.) and more regional sources (the Independent, the Guardian, the New York Times, the Times of India).
Electronic Cultural Atlas Initiative: "ECAI uses time and space to enhance understanding and preservation of human culture." A collection of digital atlas projects produced by scholars from around the world.
H-Net: Humanities and Social Sciences Online: "An international interdisciplinary organization of scholars and teachers dedicated to developing the enormous educational potential of the web."
Last Updated: Jan 8, 2021 11:34 AM
URL: https://libguides.lib.cwu.edu/worldciv
\section{Introduction}
The idea of the Landau-Ginzburg model (LG model from now on) of a variety originated in the work of physicists. In studying conformal field theories over projective hypersurfaces (the zeros of a certain function), they noticed that the function itself can be used to calculate various quantities in the theory of the hypersurface. This revealed a mysterious relation between smooth projective varieties and functions with isolated singularities, the superpotentials, as they are called in the literature. We cite, out of many references, the work of D.~Gepner, E.~Witten, and T.~Eguchi and his school as being very influential for mathematicians \cite{Ge}, \cite{Wi}, \cite{EgHoXi}.
The LG models appeared in the mathematics literature for the first time probably in the work of A.~Givental \cite{Gi}. He has introduced the notion of the quantum differential equation associated to a smooth projective variety, and expressed its solutions for some classes of Fano varieties as oscillating integrals of a certain function, the "LG model" of these Fano varieties.
Later the concept of the LG model became a part of the Homological Mirror Symmetry Conjecture, which is due to M.~Kontsevich.
Despite a large volume of serious work by many people, there is no commonly accepted definition of the LG model for a given Fano variety. There are a number of examples of LG models for Fano varieties, but each is an example in its own sense.
In developing the theory of Frobenius manifolds, B.~Dubrovin used two main sources of examples. The first is the quantum cohomology of a smooth projective variety, just a variety from now on, and the other is the work of K.~Saito \cite{Sai}, an important development of singularity theory, which produces a non-trivial example of a Frobenius manifold associated to a polynomial superpotential with an isolated singularity. This suggests another natural approach to finding the LG model for a variety $X$: one could look for a function on a non-compact manifold such that some generalization of the K.~Saito construction would produce a Frobenius manifold isomorphic to the one the quantum cohomology of $X$ produces. This approach to the LG model was detailed in the book of Yu.~Manin \cite{Ma}. It was actually made to work for the weighted projective spaces in the papers of A.~Douai, C.~Sabbah, and E.~Mann \cite{DoSa1}, \cite{DoSa2}, \cite{M}.
In this note we expand on \cite{DoSa2} by considering varieties which are not toric. Namely, we take one of the Laurent polynomials known in the literature which is believed to be the LG model for an odd-dimensional quadric (see \cite{HoVa}, \cite{Pr}), and investigate the possibility of making it an example of a LG model for the quadric in the sense of the Frobenius manifold structure. From now on we will refer to this potential as "the standard potential".
We attempted to formulate a rule for finding such a potential in general. In the case of the quadric, as well as in some other examples available in the literature, the partial derivatives of the standard potential partially reproduce the relations generated by multiplication by the generator of the second cohomology. But here a problem arises: the standard potential may have fewer critical points than the rank of the cohomology of the variety it supposedly models, as is observed in the quadric setting. Therefore this LG potential has the wrong Jacobi ring and, in terms of the corresponding Frobenius manifolds, cannot be the correct LG model. This problem for the LG model of the Grassmannians was pointed out in the paper \cite{EgHoXi} a long time ago. In the same paper it was suggested how to remedy the situation. One needs to search for a partial compactification of the domain of the standard potential to gain the required number of critical points. There is no procedure for doing this in general, at least as far as we know, but in the examples we have looked at, including the quadric, it was always possible to find such a compactification by trial and error. The compactification of the domain changes the superpotential, and it is no longer a Laurent polynomial.
We follow the generalization, due to C.~Sabbah and A.~Douai, of K.~Saito's work. This reduces the problem of constructing a Frobenius manifold to studying the Gauss-Manin system associated to the potential. The first step is to solve the Birkhoff problem for the Gauss-Manin system at one point of the parameter space for the potential, to make sure it matches the initial conditions of the quantum cohomology Frobenius manifold of the appropriate variety. In the case of the quadric this can be done, but with a bit of effort. With the kind help of C.~Sabbah and A.~Nemethi we check a certain property of our potential, the so-called cohomological tameness, which helped a lot with solving the Birkhoff problem.
For the final step, we need to build the Frobenius manifold in a neighbourhood of that one point in the parameter space, which would automatically match the Frobenius manifold of the quadric. This amounts to finding a good deformation of our potential, and this in turn requires one to have a good hold on the behaviour of the potential at infinity. It is not clear at present what the best method is to handle this purely analytic problem in its most general setting. There are, however, some sufficient conditions -- for example the so-called $M$-tameness of the potential -- that guarantee such behaviour. In the case of Laurent polynomials this property can be checked and is used in \cite{DoSa1}, \cite{DoSa2}. For a general regular function it appears to be a difficult analytical problem. We have not been able to check the $M$-tameness for our potential; however, it seems interesting to us that proving the existence of the LG model hinges on the delicate analytical properties of the potential.
\smallskip
Putting it all together, in this note we have proved that the standard potential for the quadric after a suitable modification defines the initial conditions of the quantum cohomology Frobenius manifold of the quadric.
\smallskip
Throughout the whole text we restrict ourselves to the case of a three-dimensional quadric $Q_3$ for the sake of making the exposition compact. In fact, all the results hold for an arbitrary odd-dimensional quadric.
\medskip
\textbf{Remark.} After the first version of this preprint was posted on the arXiv, an interesting preprint by C.~Pech and K.~Rietsch \cite{PeRi} appeared, in which it is shown that the superpotential considered in this paper for an odd-dimensional quadric is a particular case of a general construction proposed by K.~Rietsch \cite{Ri} for a homogeneous space $G/P$. Some results of Section \ref{Sec.: Frobenius for Quadric} are used in \cite{PeRi} to establish a part of a conjecture made in \cite{Ri} (see \textit{loc.cit.} for details). It would be interesting to further bridge the work done in this preprint with the approach initiated by K.~Rietsch.
\medskip
\textbf{Acknowledgements.} First, we would like to thank Yu.~I.~Manin for initiating this project and sharing generously his ideas. We are indebted to C.~Sabbah and A.~Nemethi for the proof of Lemma \ref{SubSubSec.: Lemma Tameness}, and help with general questions about the subject of this paper. Special thanks go to P.~Bressler and to E.~Shinder for various discussions related to this work.
Both authors benefited a lot from the excellent research environment of the MPIM, Bonn. Some part of this work was done while the second author visited the IHES, Paris whose hospitality is gratefully acknowledged.
\section{Background and notation}
\label{Sec.: Background and notations}
\subsection{Quantum cohomology.}
\label{SubSubSec.: Quantum cohomology}
Here we very briefly recall some facts about quantum cohomology. For the general theory of Frobenius manifolds we refer to \cite{Ma}, \cite{He}, and references therein.
Let $X$ be a smooth projective complex algebraic variety and $QH(X)$ its big quantum cohomology. If $\Delta_0, \dots, \Delta_r$ is a graded basis and $x_0, \dots, x_r$ dual coordinates, then the quantum product is defined as
\begin{align}\label{Eq.: Quantum Product}
\Delta_i \circ \Delta_j = \sum_{k,l} \Phi_{ijk} g^{kl} \Delta_l,
\end{align}
where $\Phi$ is the Gromov-Witten potential, $\Phi_{ijk}=\frac{\partial^3\Phi}{\partial x_i \partial x_j \partial x_k}$, and $g$ is the Poincar\'e pairing on $H=H^*(X,\mathbb{C})$. The full structure of the quantum cohomology of $X$ endows $H$ with a structure of a Frobenius manifold. In fact, one needs to work with formal manifolds because $\Phi$ is not known to be convergent.
Disregarding convergence issues one can think of $QH(X)$ as a family of multiplications on $H$ parametrized by $H$ itself, i.e. as a multiplication on $\mathcal{T}_H$, where we consider $H$ as a complex manifold. The Poincar\'e pairing on $H$ defines a constant pairing on $\mathcal{T}_H$ which is multiplication invariant and flat.
By the small quantum cohomology one means the restriction of the above picture to $H^2(X, \mathbb{C}) \subset H$. In terms of coordinates this means that we reduce all formulas modulo the ideal generated by the coordinates dual to those $\Delta_i$ not lying in $H^2(X, \mathbb{C})$.
\subsubsection{Spectral cover.}
\label{SubSubSec.: Spectral Cover for QH}
Assume that $H=H^*(X, \mathbb{C})$ is of dimension one in each even degree and zero otherwise. In this situation one can consider an algebraic torus $\textbf{T}$, a locally closed subvariety of the dual space $H^t$ of $H$.
Namely, let $\Delta_0, \dots, \Delta_r$ be a graded basis of $H$, such that $\Delta_0$ is the identity element, and consider
$$
H^t = \text{Spec}\,(S^{\bullet}(H)) = \text{Spec}\,(\mathbb{C}[\Delta_0, \dots , \Delta_r]).
$$
In $H^t$ we have an affine subspace $\{\Delta_0=1\} \simeq \text{Spec}\,(\mathbb{C}[\Delta_1, \dots , \Delta_r])$, and inside this affine subspace we have the torus $\textbf{T}=\text{Spec}\,(\mathbb{C}[\Delta_1^{\pm 1}, \dots , \Delta_r^{\pm 1}])$. This torus does not depend on the choice of $\Delta_1, \dots, \Delta_r$ and will play an important role in the construction of LG models.
Equations that define the spectral cover $\text{Spec}\,(QH(X))$ as a subvariety of $H \times H^t$ are given just by the multiplication table. One of the equations is always $\Delta_0=1$. Hence, the spectral cover always lives inside the affine space $\{\Delta_0=1\} \simeq \text{Spec}\,(\mathbb{C}[\Delta_1, \dots , \Delta_r])$.
One can summarize it in the diagram
\begin{align*}
\xymatrix{
\text{Spec}\,(QH(X)) \ar[r]^<<<<<i \ar[rd]& H \times H^t \ar[d] & \ar[l]_<<<<j\ar[ld] H \times \textbf{T}\\
& H
}
\end{align*}
where $i$ and $j$ are embeddings.
In some cases $\text{Spec}\,(QH(X))$ lies in $H \times \textbf{T}$. For example, this is true for projective spaces (at least in the small quantum cohomology). It is not true for the odd-dimensional quadrics considered in this paper.
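To illustrate, let $X=\mathbb{P}^1$ and let $\Delta_1$ be the hyperplane class. In the small quantum cohomology the multiplication table reduces to the single relation $\Delta_1\Delta_1=q\Delta_0$, so the spectral cover is cut out by
\begin{align*}
\Delta_0=1, \qquad \Delta_1^2=q,
\end{align*}
i.e. it consists of the two points $\Delta_1=\pm\sqrt{q}$, which for $q\neq 0$ indeed lie in the torus $\textbf{T}=\{\Delta_1\neq 0\}$.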
\subsection{Landau-Ginzburg models.}
\label{SubSec.: LG models}
Let $X$ be a Fano variety and $QH(X)$ its quantum cohomology.
A Saito framework (see \cite[III.8]{Ma}) is called a Landau-Ginzburg model for $X$ iff it is isomorphic to $QH(X)$ as a Frobenius manifold.
Consider a pair $(U, f)$ consisting of a smooth complex affine variety and a regular function on it. It is called a Landau-Ginzburg model for $X$ iff there exist a deformation of $(U, f)$ and a Saito framework attached to it which is isomorphic to $QH(X)$ as a Frobenius manifold.
These definitions are quite restrictive. One can relax both of them as follows: we require only the existence of a point in $QH(X)$ and a point in the Saito framework such that the germs of Frobenius manifolds at these points are isomorphic.
\subsection{Gauss-Manin systems.}
\label{SubSec.: GM system}
Here we will give a brief account of Gauss-Manin systems, Brieskorn lattices, higher residue pairings and the Birkhoff problem. Mainly we will be setting up the notation used in Section \ref{Sec.: Frobenius for Quadric}; we refer to \cite{Do1}, \cite{Do2} for details.
\smallskip
Consider a projective line with a chosen coordinate $\theta$. We will denote it by $\mathbb{P}^1_{\theta}$. Let $\mathbb{P}^1_{\theta}=U_0\cup U_{\infty}$ be the standard open cover, i.e. $U_0=\mathbb{A}^1_{\theta}$, $U_{\infty}=\mathbb{A}^1_{\theta^{-1}}$, and $W=U_0 \cap U_{\infty}=\mathbb{A}^1_{\theta}-\{0\}$. Here $\mathbb{A}^1_{\theta}$ stands for $\text{Spec}(\mathbb{C}[\theta])$ and $\{0\}$ for $\{\theta=0\}$. Sometimes we will write $\tau$ for $\theta^{-1}$. We will use this notation when working with Gauss-Manin systems throughout the article.
\subsubsection{Definition.}
Let $X$ be a smooth affine variety of dimension $n$ with a regular function $h$ on it. The main example relevant for mirror symmetry is $X=(\mathbf{G}_m)^n$ with $h$ a Laurent polynomial. To such a function $h$ one can attach its \textit{Gauss-Manin system}
\begin{align}\notag
G=\Omega^n(X)[\theta, \theta^{-1}]/(\theta d-dh\wedge) \Omega^{n-1}(X)[\theta, \theta^{-1}],
\end{align}
which is a free $\mathbb{C}[\theta, \theta^{-1}]$-module of finite rank with a flat connection $\nabla$ defined as follows. Let $\sum_i \omega_i\theta^i$ be a representative of some class $\gamma \in G$, i.e. $\gamma=[\,\sum_i \omega_i\theta^i\,]$. Then
\begin{align}\notag
\theta^2\nabla_{\frac{\partial}{\partial \theta}}(\gamma)=\left[\sum_i h\omega_i\theta^i+\sum_i i \omega_i\theta^{i+1}\right],
\end{align}
where the brackets $[\,\,\,\,]$ denote taking class in $G$.
It is a general fact that $(G, \nabla)$ always has a regular singularity at $\theta=\infty$ and a possibly irregular singularity at $\theta=0$, where the connection has a pole of order 2.
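Let us illustrate the definition on the standard example $X=\mathbf{G}_m$, $h=t+\frac{q}{t}$ (the classical mirror of $\mathbb{P}^1$). Put $\omega_0=\frac{dt}{t}$ and $\omega_1=dt$. Applying $\theta d-dh\wedge$ to the $0$-forms $1$ and $t$ gives the relations $[\omega_1]=q[t^{-1}\omega_0]$ and $[t\omega_1]=\theta[\omega_1]+q[\omega_0]$ in $G$, whence
\begin{align*}
\theta^2\nabla_{\frac{\partial}{\partial \theta}}[\omega_0]=[h\omega_0]=2[\omega_1], \qquad \theta^2\nabla_{\frac{\partial}{\partial \theta}}[\omega_1]=[h\omega_1]=2q[\omega_0]+\theta[\omega_1].
\end{align*}
In the basis $[\omega_0], [\omega_1]$ the matrix of $\theta^2\nabla_{\frac{\partial}{\partial \theta}}$ is thus $\left(\begin{smallmatrix} 0 & 2q \\ 2 & 0\end{smallmatrix}\right)+\theta\left(\begin{smallmatrix} 0 & 0 \\ 0 & 1\end{smallmatrix}\right)$, matching the small quantum multiplication by $-K_{\mathbb{P}^1}=2\Delta_1$.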
\subsubsection{Brieskorn lattice.}
At $\theta=0$ the Gauss-Manin system $(G, \nabla)$ has a natural lattice
\begin{align}\label{Eq.: Brieskorn Lattice}
G_0=\Omega^n(X)[\theta]/(\theta d-dh\wedge) \Omega^{n-1}(X)[\theta],
\end{align}
the \textit{Brieskorn lattice} of $h$. This means that $G_0$ is a $\mathbb{C}[\theta]$-module such that $G_0 \otimes_{\mathbb{C}[\theta]}\mathbb{C}[\theta, \theta^{-1}] \simeq G $.
The connection $\nabla$ naturally restricted (or extended) to $G_0$ has a pole of order 2 at $\theta=0$, i.e.
\begin{align*}
&\theta^2\nabla_{\frac{\partial}{\partial \theta}}(G_0) \subset G_0.
\end{align*}
Abusing notation we will denote this meromorphic connection again by $\nabla$.
In general $G_0$ has no torsion but may fail to be finitely generated. If $h$ is \textit{cohomologically tame} (see Section \ref{SubSubSec.: Tameness} and \cite{Sa2}), then it is finitely generated and free. Therefore, in the cohomologically tame case the pair $(G_0, \nabla)$ gives a meromorphic extension of $(G, \nabla)$ to $\mathbb{A}^1_{\theta}$ with a pole of order 2 at the origin.
\subsubsection{Extension to $\mathbb{P}^1_{\theta}$.}
\label{SubSubSec.: Extension to P1}
The aim is to find an extension of $(G_0, \nabla)$ to a free $\mathcal{O}_{\mathbb{P}^1_{\theta}}$-module with a meromorphic connection on $\mathbb{P}^1_{\theta}$ having a pole of order at most 1 at infinity. We will denote such an extension by $(\mathcal{F}, \nabla)$. This type of question is known as \textit{the Birkhoff problem} (cf. \cite[Ch. 4]{Sa1}).
More concretely it means that we need to find a $\mathbb{C}[\theta^{-1}]$-module $G_{\infty} \subset G$ such that:
\begin{itemize}
\item $G_{\infty} \otimes_{\mathbb{C}[\theta^{-1}]}\mathbb{C}[\theta, \theta^{-1}] \simeq G $
\item $G_0 = \left( G_0 \cap G_{\infty} \right) \oplus \theta G_0$
\item $\tau\nabla_{\frac{\partial}{\partial \tau}}(G_{\infty}) \subset G_{\infty}$
\end{itemize}
(recall that $\tau=\theta^{-1}$).
\medskip
Even more concretely, and this is the way we will do it in Section \ref{Sec.: Frobenius for Quadric}, it can be done as follows. We need to find a $\mathbb{C}[\theta]$-basis of $G_0$ such that the connection matrix in this basis takes the form
\begin{align}\label{Eq.: Connection matrix Birkhoff I}
\left(\frac{A_0}{\theta}+A_{\infty}\right)\frac{d\theta}{\theta},
\end{align}
where $A_0$ and $A_{\infty}$ are constant matrices. To get the desired extension one just needs to consider these basis elements inside $G$ and define $G_{\infty}$ as a $\mathbb{C}[\theta^{-1}]$-submodule generated by them.
\subsubsection{Pairing.}
\label{SubSubSec.: Pairing}
If $h$ is a cohomologically tame function, then there exists a non-degenerate bilinear pairing (cf. \cite{DoSa1})
\begin{align}\label{Eq.: Pairing II}
S_W \colon G \otimes j^*G\to \mathbb{C}[\theta, \theta^{-1}],
\end{align}
where $j\colon W \to W$ is given by $\theta \mapsto -\theta$.\footnote{The morphism $j$ extends uniquely to a morphism $\mathbb{P}^1_{\theta} \to \mathbb{P}^1_{\theta}$. Abusing notation we will denote this morphism again by~$j$, and we will use the same notation for its restrictions if it does not lead to confusion.}
It satisfies
\begin{align}
\label{Eq.: Derivative Of Pairing}
&\frac{d}{d\theta}S_W(g_1, g_2)=S_W(\partial_{\theta}g_1,g_2)+S_W(g_1,\partial_{\theta} g_2),
\end{align}
i.e. it is a horizontal section of the sheaf $\mathcal{H}om_{\mathcal{O}_W}(\mathcal{F}_W \otimes j^*\mathcal{F}_W,\mathcal{O}_W) $ equipped with its natural connection, and
\begin{align}
\label{Eq.: Swap Arguments in the Pairing}
&S_W(g_1, g_2)=(-1)^n \overline{S_W(g_2, g_1)},
\end{align}
where we used the notation $\overline{P(\theta, \theta^{-1})}:=P(-\theta, -\theta^{-1})$ for a Laurent polynomial $P(\theta, \theta^{-1})$.
\medskip
Moreover, \eqref{Eq.: Pairing II} has the property
\begin{align}\label{Eq.: teta^n}
S_W(G_0,j^*G_0) \subset \theta^n\mathbb{C}[\theta] \subset \mathbb{C}[\theta, \theta^{-1}],
\end{align}
and therefore we get a natural extension
\begin{align}\label{Eq.: Pairing III}
&S_{U_0} \colon G_0 \otimes j^*G_0 \to \mathbb{C}[\theta].
\end{align}
On $G_0$ we can write
$$S_{U_0}=\sum_{i\geq n} S_i \theta^i,$$
where $S_i\colon G_0 \otimes j^*G_0 \to \mathbb{C} \, \theta^i$ are \textit{higher residue pairings} of K. Saito; $S_n$ is the Grothendieck residue pairing. For a modern overview of K. Saito's works on this subject we refer to \cite{SaTa}.
\smallskip
Let $(\mathcal{F}, \nabla)$ be an extension as in Section \ref{SubSubSec.: Extension to P1}. We would like to extend (\ref{Eq.: Pairing III}) to a pairing
\begin{align}\notag
S \colon \mathcal{F} \otimes j^*\mathcal{F} \to \mathcal{O}_{\mathbb{P}^1_{\theta}}.
\end{align}
There exists $d\in \mathbb{Z}$ such that $S_W(G_{\infty},j^*G_{\infty}) \subset \tau^{-d} \mathcal{O}_{U_{\infty}}$ and therefore (\ref{Eq.: Pairing III}) always extends to
\begin{align}\label{Eq.: Pairing IV}
S \colon \mathcal{F} \otimes j^*\mathcal{F} \to \mathcal{O}_{\mathbb{P}^1_{\theta}}(- n \cdot \{0\} + d \cdot \{\infty \}).
\end{align}
Here $\mathcal{O}_{\mathbb{P}^1_{\theta}}(-n \cdot \{0\} + d \cdot \{\infty \})$ is the invertible subsheaf of the sheaf $K_{\mathbb{P}^1_{\theta}}$ of rational functions on ${\mathbb{P}^1_{\theta}}$ locally generated by $\theta^{n}$ and $\tau^{-d}$. It is isomorphic to $\mathcal{O}_{\mathbb{P}^1_{\theta}} $ if and only if $d=n$. The choice of $d$ in (\ref{Eq.: Pairing IV}) is not unique but there exists the minimal possible $d$.
By (\ref{Eq.: teta^n}) we know that $d\geq n$ and therefore (\ref{Eq.: Pairing IV}) produces a pairing with values in $\mathcal{O}_{\mathbb{P}^1_{\theta}}$ iff $d=n$. The latter condition is equivalent to the existence of a global basis $e_1, \dots, e_{\mu}$ of $\mathcal{F}$ such that $S_{U_0}({e_i}_{|U_0}, {e_j}_{|U_0})\in \theta^n \mathbb{C}$.
\subsubsection{$V$-filtration.}
\label{SubSubSec.: V-filtration}
Let $X$ be a smooth algebraic variety, $Y$ its closed smooth subvariety of codimension one, and $I$ the ideal sheaf of $Y$ in $X$. First define an increasing filtration $V_{\bullet} \mathcal{O}_X$ by putting $V_{i} \mathcal{O}_X=\mathcal{O}_X$ if $i\geq 0$ and $V_{i} \mathcal{O}_X=I^{-i}$ if $i < 0$. Now let $V_{\bullet} \mathcal{D}_X$ be an increasing filtration defined as
\begin{align*}
V_i\mathcal{D}_X=\{P\in \mathcal{D}_X \, | \, P(V_{m} \mathcal{O}_X)\subset V_{m+i} \mathcal{O}_X, \quad \forall \,\, m\in \mathbb{Z}\}.
\end{align*}
One can locally describe it more explicitly as follows (cf. \cite{PeSt}). Let $(y_1,\dots,$ $ y_n, x)$ be a local coordinate system on $X$ such that in this neighbourhood $Y$ is given by the equation $x=0$. Then $V_0\mathcal{D}_X$ is a subsheaf of rings of $\mathcal{D}_X$ locally generated by $\mathcal{O}_X$, the vector fields $\frac{\partial}{\partial y_1}, \dots , \frac{\partial}{\partial y_n}$, and $x\frac{\partial}{\partial x}$. If we denote $\partial_{x}=\frac{\partial}{\partial x}$, then $V_k\mathcal{D}_X$ is the $V_0\mathcal{D}_X$-module generated by the operators $x^i\partial_{x}^j$ with $i-j\geq -k$.
\smallskip
Let $\mathcal{M}$ be a (left) $\mathcal{D}_X$-module and $V_{\bullet}\mathcal{M}$ a
discrete exhaustive increasing filtration indexed by $\mathbb{Q}$. It is called \textit{$V$-filtration} iff
\smallskip
1. it is compatible with the filtration $V_{\bullet}\mathcal{D}_X$, i.e. $\left(V_i\mathcal{D}_X\right)\left(V_{\alpha}\mathcal{M}\right) \subset V_{\alpha+i}\mathcal{M}$ for all $\alpha$ and~$i$; furthermore, the inclusion $I \left( V_{\alpha}\mathcal{M} \right) \subset V_{\alpha-1}\mathcal{M}$ should be an equality for $\alpha < 0$.
\smallskip
2. the action of $x \partial_{x}+\alpha$ on $\text{Gr}_{\alpha}^V\mathcal{M}$ is nilpotent.
\medskip
\noindent If such a filtration exists, then it is unique (cf. \cite{Bu}).
\medskip
The Gauss-Manin system $G$ considered as a $\mathbb{C}[\tau]\langle \partial_{\tau}\rangle$-module\footnote{This just means that we consider not $G$ itself but its push-forward as a $D$-module with respect to the open inclusion $U_0\cap U_{\infty} \to U_{\infty}$.} always has a $V$-filtration along $\{\tau=0\}$, and pairing (\ref{Eq.: Pairing II}) satisfies
\begin{align}\label{Eq.: Pairing Property wrt V-filtration}
S_W(V_0G, \overline{V_{<1}G}) \subset \mathbb{C}[\tau].
\end{align}
For more details we refer to \cite{DoSa1}.
\subsubsection{Tameness.}
\label{SubSubSec.: Tameness}
Let $X$ be a smooth algebraic variety and $h \colon X \to \mathbb{A}^1$ a morphism. By a partial compactification we mean a commutative diagram
\begin{align}\notag
\xymatrix{
X \ar[r]^j \ar[d]^h & \overline{X} \ar[dl]^{\overline{h}}\\
\mathbb{A}^1
}
\end{align}
where $\overline{X}$ is an algebraic variety (not necessarily smooth), $j$ is an open embedding, and $\overline{h}$ is proper.
\smallskip
The morphism $h$ is called \textit{cohomologically tame} iff there exists a partial compactification such that, for all $a \in \mathbb{A}^1$, the support of $\Phi_{\overline{h}-a}(Rj_*\mathbb{C}_X)$ is finite and contained in $X_a=h^{-1}(a)$. We refer to \cite{Sa2} for more details.
\subsection{Initial conditions.}
\label{SubSec.: Initial conditions}
Let $(M,\, \circ,\, e,\, g,\, E)$ be a Frobenius manifold with an Euler field. In this setting one defines two endomorphisms of $\mathcal{T}_M$ as
\begin{align}\label{Eq.: Operator mathcalUandV}
&\mathcal{U}(X)=E \circ X, \hspace{30pt} \mathcal{V}(X)=\nabla_X(E)-\frac{D}{2}X,
\end{align}
where $D$ is defined by $Lie_E(g)=D g$ (see \cite[II.1]{Ma}).
If $p\in M$ is a semi-simple point of $M$, i.e. the algebra $(T_pM, \, \circ_p)$ is semi-simple (isomorphic to $\mathbb{C}^n$), then in a neighborhood of this point the tuple $(M,\, \circ,\, e,\, g,\, E)$ is uniquely determined by the data
\begin{align}\label{Eq.: Initial Conditions General Form}
(T, \, \mathcal{U}, \, \mathcal{V}, \, g, \, e),
\end{align}
where $T=T_pM$, $\mathcal{U}$ and $\mathcal{V}$ are endomorphisms of $T$ induced by \eqref{Eq.: Operator mathcalUandV}, $g$ is a non-degenerate symmetric bilinear pairing on $T$ induced by the metric, and $e$ is an element in $T$ induced by the identity vector field. This follows from \cite[Main Th., p.188]{Du} or \cite[Th. VII.4.2]{Sa1}.
\section{Construction of Landau-Ginzburg potentials}
\label{Sec.: LG Potentials}
We start by summarizing some facts about quantum cohomology of a smooth three-dimensional quadric~$Q_3$ (see~\cite{BaMa} for details). Then we explain how to obtain the standard LG potential for $Q_3$ from its quantum cohomology. As we already mentioned, this LG potential does not have enough critical points to be an honest LG model in the sense of Section \ref{SubSec.: LG models}, and we present its \textit{ad hoc} partial compactification.
\subsection{Quantum cohomology of $Q_{3}$.}
Let $V=Q_{3}$ be a smooth Fano hypersurface in $\mathbb{P}^{4}$ given by a non-degenerate homogeneous polynomial of degree 2. The singular cohomology groups $H^i(V, \mathbb{Z})$ are free of rank one in each even degree and vanish in odd degrees. Consider a graded basis $\Delta_0, \Delta_1, \Delta_2, \Delta_3$ of $H^*(V,\mathbb{Z})$ such that $\Delta_0$ is the identity, $\Delta_1$ is the hyperplane class, and $\Delta_1 \cup \Delta_{2}=\Delta_{3}$, where $\Delta_{3}$ is Poincar\'e dual to the class of a point.
The table of quantum multiplication by $\Delta_1$ in the small quantum cohomology is
\begin{align}\label{Eq.: Multiplication Table Odd Quadric}
&\Delta_1\Delta_0=\Delta_1\\ \notag
&\Delta_1\Delta_{1}=2\Delta_{2}\\ \notag
&\Delta_1\Delta_{2}=\Delta_{3}+q\Delta_0\\ \notag
&\Delta_1\Delta_{3}=q\Delta_1.
\end{align}
Hence, the spectral cover consists of $4$ reduced points
\begin{align*}
&P_0=(1,\,0,\, 0,\, -q)\\
&P_i=\left(1,\, \xi_i,\, \frac{\xi_i^{2}}{2},\, q\right),
\end{align*}
where $\xi_i$ are roots of $\xi^{3}=4q$, and $1 \leq i \leq 3$. The point $P_0$ does not lie on the torus $\textbf{T}$ (cf. Section \ref{SubSubSec.: Spectral Cover for QH}).
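For the reader's convenience let us verify that the points $P_i$ satisfy the relations \eqref{Eq.: Multiplication Table Odd Quadric}. Using $\xi_i^3=4q$ we get
\begin{align*}
\Delta_1\Delta_1=\xi_i^2=2\Delta_2, \qquad \Delta_1\Delta_2=\frac{\xi_i^3}{2}=2q=\Delta_3+q, \qquad \Delta_1\Delta_3=q\xi_i=q\Delta_1.
\end{align*}
For $P_0$ all the relations hold trivially, the only nontrivial one being $\Delta_1\Delta_2=0=\Delta_3+q$.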
\subsubsection{Initial conditions.}
\label{SubSubSec.: InitCondQuad}
We can express the anti-canonical class as
$$
-K_V=3\Delta_1.
$$
In the basis of the $\Delta_i$'s the initial conditions take the form
\begin{align*}
\mathcal{U}=\left(
\begin{array}{rrrr}
0 & 0 & 3q & 0 \\
3 & 0 & 0 & 3q \\
0 & 6 & 0 & 0 \\
0 & 0 & 3 & 0 \\
\end{array}
\right)\\
\intertext{and}
\mathcal{V}=\left(
\begin{array}{rrrr}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -2 \\
\end{array}
\right)+\frac{1}{2}.
\end{align*}
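Note that the characteristic polynomial of $\mathcal{U}$ is $\lambda^4-108q\lambda=\lambda(\lambda^3-108q)$. Hence the eigenvalues of $\mathcal{U}$ are $0$ and $3\xi_i$ with $\xi_i^3=4q$, i.e. exactly the values of $-K_V=3\Delta_1$ at the four points $P_0, P_1, P_2, P_3$ of the spectral cover.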
\subsection{Standard LG potential.}
Restricting to the torus $\textbf{T}$ we can rewrite system \eqref{Eq.: Multiplication Table Odd Quadric} as
\begin{align*}
&\Delta_1=\frac{2\Delta_2}{\Delta_1}\\
&\Delta_1=\frac{\Delta_{3}+q}{\Delta_{2}}\\
&\Delta_1=\frac{q\Delta_1}{\Delta_{3}}.
\end{align*}
The above system can be rewritten as
\begin{align}\label{Eq.: Multiplication table III}
&\Delta_1=\frac{2\Delta_2}{\Delta_1}\\ \notag
&\frac{2\Delta_{2}}{\Delta_{1}}=\frac{(\Delta_{3}+q)^2}{2\Delta_{2}\Delta_{3}} \\ \notag
&\frac{\Delta_{3}^2}{2\Delta_{2}\Delta_{3}}=\frac{q^2}{2\Delta_{2}\Delta_{3}}.
\end{align}
It is easy to see that if we define
\begin{align}\label{Eq.: Non-compactified potential I}
f=\Delta_1+\frac{2\Delta_2}{\Delta_1} + \frac{(\Delta_{3} + q)^2}{2\Delta_{2}\Delta_{3}},
\end{align}
then the system $\Delta_i\frac{\partial f}{\partial \Delta_i}=0$ coincides with \eqref{Eq.: Multiplication table III}. In this sense $f$ ``integrates'' the multiplication table.
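For example, the last equation of \eqref{Eq.: Multiplication table III} is the relation $\Delta_3\frac{\partial f}{\partial \Delta_3}=0$; indeed,
\begin{align*}
\Delta_3\frac{\partial f}{\partial \Delta_3}=\Delta_3\cdot\frac{(\Delta_3+q)\bigl(2\Delta_3-(\Delta_3+q)\bigr)}{2\Delta_2\Delta_3^2}=\frac{(\Delta_3+q)(\Delta_3-q)}{2\Delta_2\Delta_3}=\frac{\Delta_3^2-q^2}{2\Delta_2\Delta_3}.
\end{align*}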
\medskip
We claim that \eqref{Eq.: Non-compactified potential I} is the standard LG potential for $Q_3$ proposed in \cite{EgHoXi}. Indeed, consider another coordinate system on the torus $\textbf{T}=\{\Delta_1\Delta_2 \Delta_{3} \neq 0\}$ given by
$$
Y_1=\Delta_1, \quad Y_2=\frac{2\Delta_2}{\Delta_1}, \quad Y_{3}=\Delta_{3}.
$$
Rewriting \eqref{Eq.: Non-compactified potential I} in terms of these coordinates we get
\begin{align}\label{Eq.: Non-compactified potential II}
f=Y_1+Y_{2}+\frac{(Y_{3}+q)^2}{Y_1Y_2Y_3},
\end{align}
which is exactly the LG potential proposed in \textit{loc.cit.}
\subsection{Compactification.}
By construction \eqref{Eq.: Non-compactified potential I} has 3 critical points but a Landau-Ginzburg potential for $Q_3$ in the sense of Section \ref{SubSec.: LG models} must necessarily have 4 critical points. Below we will give an \textit{ad hoc} partial compactification of $f$ to a new LG potential $\widetilde{f}$ which has the correct number of critical points. In Section \ref{Sec.: Frobenius for Quadric} we will study the Gauss-Manin system of $\widetilde{f}$ and show that it deserves the name LG potential.
\smallskip
Consider the affine space $\mathbb{A}^3=\text{Spec}\,\mathbb{C}[Y_1, Y_2, Y_3]$. Expression \eqref{Eq.: Non-compactified potential II} gives a regular function on the torus $\{Y_1Y_2Y_3 \neq 0\} \subset \mathbb{A}^3$. Functions $x, y, z$ given by
\begin{align} \label{Eq.: Coordinate Change}
&x=\frac{Y_3+q}{qY_1}\\ \notag
&y=Y_1\\ \notag
&z=\frac{Y_2}{Y_1}-1
\end{align}
define another coordinate system on this torus. Rewriting $f$ in terms of these coordinates we get
\begin{align}\label{Eq.: Compactified LG model without parameters}
&\widetilde{f}=y(2+z)+\frac{qx^2}{(xy-1)(1+z)}.
\end{align}
One can interpret \eqref{Eq.: Compactified LG model without parameters} as a regular function on an open subvariety of $\mathbb{A}^{3}=\text{Spec}\,\mathbb{C}[x, y,z]$ defined by $\{(xy-1)(1+z)\neq 0\}$. The torus $\{Y_1Y_2Y_{3} \neq 0\}$ is embedded into this space by formulas \eqref{Eq.: Coordinate Change}.
It is easy to check that the critical locus of $\widetilde{f}$ consists of $4$ points
$$
P_0=(0,\, 0,\, -2) \quad \text{and} \quad P_i=\left(\frac{2}{\xi_i},\, \xi_i,\, 0\right),
$$
where $\xi_i^3=4q$.
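Let us note that under \eqref{Eq.: Coordinate Change} the points $P_i$, $1\leq i\leq 3$, lie in the torus and correspond to the critical points of $f$: indeed, $Y_1=y=\xi_i$, $Y_2=y(1+z)=\xi_i$ and $Y_3=q(xy-1)=q$, which in the coordinates $\Delta_1, \Delta_2, \Delta_3$ is the point $(\xi_i, \frac{\xi_i^2}{2}, q)$ of the spectral cover. The point $P_0$ lies outside the torus, matching the fact that the spectral cover point $P_0$ does not lie on $\textbf{T}$.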
\subsection{General case.}
The above considerations work for an arbitrary smooth quadric $Q_{2n+1}$ of odd dimension $2n+1$. Let $\Delta_0, \dots, \Delta_{2n+1}$ be a graded basis of $H^*(Q_{2n+1},\mathbb{Z})$, such that $\Delta_0$ is the identity, $\Delta_1$ is the hyperplane class, $\Delta_i=\Delta_1^{\cup i}$ for $i\leq n$, and $\Delta_i \cup \Delta_{2n+1-i}=\Delta_{2n+1}$, where $\Delta_{2n+1}$ is Poincar\'e dual to the class of a point.
One can write down the quantum multiplication by $\Delta_1$. Then one can show that
\begin{align*}
f=\Delta_1+\frac{\Delta_2}{\Delta_1}+\dots +\frac{\Delta_n}{\Delta_{n-1}}+\frac{2\Delta_{n+1}}{\Delta_n}+\frac{\Delta_{n+2}}{\Delta_{n+1}} +\dots + \frac{\Delta_{2n}}{\Delta_{2n-1}} + \frac{(\Delta_{2n+1} + q)^2}{2\Delta_{2n}\Delta_{2n+1}}
\end{align*}
is the analogue of \eqref{Eq.: Non-compactified potential I}, and
\begin{align*}
&\widetilde{f}=\sum_{i=1}^ny_i(2+z_i)+\frac{qx^2}{(xy_1\dots y_n-1)(1+z_1)\dots (1+z_n)}
\end{align*}
is the analogue of \eqref{Eq.: Compactified LG model without parameters}.
\section{Gauss-Manin system of $\widetilde{f}$}
\label{Sec.: Frobenius for Quadric}
\subsection{Notation.}
For convenience we repeat the partial compactification in a somewhat backwards order. Namely, consider $\mathbb{A}^3=\text{Spec}\, \mathbb{C}[x,y,z]$ and let $\widetilde{U}$ be the open subvariety defined by $\{(xy-1)(1+z)\neq 0\}$. On $\widetilde{U}$ we have the regular function
\begin{align}\label{Eq.: F for Q_3}
&\widetilde{f}=y(2+z)+\frac{qx^2}{(xy-1)(1+z)},
\end{align}
which is our partially compactified potential (\ref{Eq.: Compactified LG model without parameters}).
Consider functions $\Delta_1, \Delta_2, \Delta_3$ on $\mathbb{A}^3$ given by
\begin{align}\label{Eq.: Formulas for Delta_i}
\Delta_1=y, \quad \Delta_2=\frac{y^2}{2}(z+1), \quad \Delta_3=qxy-q,
\end{align}
which form a coordinate system on the subset $\{y \neq 0\} \subset \mathbb{A}^3$. The inverse coordinate change is given by
\begin{align}\label{Eq.: xyz via deltas}
x=\frac{\Delta_3+q}{q\Delta_1}, \quad y=\Delta_1, \quad z=\frac{2\Delta_2}{\Delta_1^2}-1.
\end{align}
Let $U$ be the intersection $\widetilde{U} \cap \{y \neq 0\}$. On $U$ function (\ref{Eq.: F for Q_3}) can be rewritten in terms of $\Delta_1, \Delta_2, \Delta_3$ as
\begin{align}\label{Eq.: f for Q_3}
f:=\widetilde{f}_{|U}=\Delta_1+\frac{2\Delta_2}{\Delta_1}+\frac{(\Delta_3+q)^2}{2\Delta_2\Delta_3}.
\end{align}
Formulas \eqref{Eq.: Formulas for Delta_i} give an isomorphism of $U$ with the algebraic torus $\textbf{G}_m^3=\text{Spec}\, \mathbb{C}[t_1^{\pm 1}, t_2^{\pm 1}, t_3^{\pm 1}]$, such that $t_i$'s correspond to $\Delta_i$'s. Formula \eqref{Eq.: f for Q_3} gives the LG potential before the compactification as in \eqref{Eq.: Non-compactified potential I}.
\subsubsection{Lemma.}
\label{SubSubSec.: Lemma Tameness}
{\it Function (\ref{Eq.: F for Q_3}) is cohomologically tame.\footnote{It is also true that $f$ has isolated singularities at infinity in the sense of \cite{Do2}.}
}
\smallskip
\textbf{Proof. } See Appendix A. $\blacksquare$
\subsubsection{Lemma.}
{\it The Gauss-Manin system $G^{\widetilde{f}}$ has the following properties:
\smallskip
(i) $G^{\widetilde{f}}$ is a free $\mathbb{C}[\theta,\theta^{-1}]$-module of rank 4;
\smallskip
(ii) $G_0^{\widetilde{f}}$ is a free $\mathbb{C}[\theta]$-module of rank 4.
}
\smallskip
\textbf{Proof. } For both properties it is essential that $\widetilde{f}$ is cohomologically tame.
\smallskip
(i) For a function with isolated critical points the module $G$ is always free of finite rank. If, moreover, the function is cohomologically tame, then the rank is equal to the Milnor number (\cite{Do2}, Th. 5.2.3). In our case it is 4.
\smallskip
(ii) For a function with (cohomologically) isolated critical points at infinity, Corollary 5.2.6 of \cite{Do2} states that $G_0$ is free and of finite type iff the function is cohomologically tame.
Applying this corollary to $\widetilde{f}$ we get that $G_0^{\widetilde{f}}$ is a free $\mathbb{C}[\theta]$-module of finite rank. Hence, its rank equals the dimension of the fiber at zero. Using Proposition 5.1.1 of \cite{Do2} we see that this rank is equal to the Milnor number.~$\blacksquare$
\subsubsection{Lemma.}
\label{SubSubSec.: GM Iso}
{\it The natural morphism of $\mathcal{D}_W$-modules $G^{\widetilde{f}} \to G^f$ given by the restriction of differential forms from $\widetilde{U}$ to $U$ is an isomorphism.\footnote{Recall from Section \ref{SubSec.: GM system} that $W=\mathbb{A}^1_{\theta}-\{0\}$.}
}
\smallskip
\textbf{Proof. } Restriction of differential forms from $\widetilde{U}$ to $U$ defines the morphism
$$
\Omega^i(\widetilde{U}) \to \Omega^i(U),
$$
which is injective but not surjective; it is the localization morphism given by inverting $\Delta_1$. One can check directly that the induced morphism $G^{\widetilde{f}} \to G^f$ on the Gauss-Manin systems is also injective.
\smallskip
By Theorem 5.2.3 of \cite{Do2} the rank of $G^f$ is 4 (we use here that $f$ has one isolated singularity at infinity).
\smallskip
Consider the short exact sequence of $\mathcal{O}_W$-coherent $\mathcal{D}_W$-modules
\begin{align*}
0 \to G^{\widetilde{f}} \to G^f \to G^f/G^{\widetilde{f}} \to 0.
\end{align*}
Since $\text{rk }G^{\widetilde{f}}=\text{rk }G^f$ the quotient is an $\mathcal{O}_W$-module of rank zero. Therefore, by the standard fact that for a smooth algebraic variety $X$ any $\mathcal{O}_X$-coherent $\mathcal{D}_X$-module is a locally free $\mathcal{O}_X$-module (see \cite{Be}, Lect. 2, 1.a), we get that $G^f/G^{\widetilde{f}}$ is locally free of rank zero and hence vanishes. $\blacksquare$
\subsection{Birkhoff problem.}
\label{SubSec.: Our Birkhoff problem }
Consider the following 3-form on $\widetilde{U}$
\begin{align}\notag
&\omega_0=\frac{dx\wedge dy \wedge dz}{(xy-1)(z+1)},
\end{align}
and let $\omega_i=\Delta_i \omega_0$. Note also that
\begin{align}\notag
&{\omega_0}_{|U}=\frac{d \Delta_1}{\Delta_1}\wedge\frac{d \Delta_2}{\Delta_2}\wedge\frac{d \Delta_3}{\Delta_3}.
\end{align}
If $\omega$ is a 3-form, we let $[\omega]$ denote its class in $G_0$. In the above formulas by $\Delta_i$ we mean ${\Delta_i}_{|\widetilde{U}}$ and ${\Delta_i}_{|U}$ respectively. We will continue to use this notation if it does not lead to confusion.
\subsubsection{Lemma.}
\label{SubSubSec.: Identities in G}
{\it In $G^f$ we have the following identities}
\begin{align*}
&[\Delta_if'_{\Delta_i}\omega_0]=0\\
&[\Delta_i\Delta_jf'_{\Delta_i}\omega_0]=0\\
&[\Delta_i^2f'_{\Delta_i}\omega_0]=\theta[\omega_i].
\end{align*}
\textbf{Proof. } Let us only prove the third identity for $i=2$. The other cases are analogous.
We have the following equality of differential forms
\begin{align*}
\Delta_2^2f'_{\Delta_2}\omega_0=-df\wedge \eta, \qquad \text{where }\eta=\Delta_2\frac{d \Delta_1}{\Delta_1}\wedge\frac{d \Delta_3}{\Delta_3}.
\end{align*}
Since $[df\wedge\eta]=\theta[d\eta]$ in $G^f$ by the defining relation of the Gauss-Manin system, and $d\eta=-\Delta_2\omega_0$, we get $[\Delta_2^2f'_{\Delta_2}\omega_0]=-\theta[d\eta]=\theta[\omega_2]$. $\blacksquare$
\subsubsection{Lemma.}
\label{SubSubSec.: Lemma Linear Independence}
{\it Elements $[\omega_0], \dots, [\omega_3]$ are $\mathbb{C}[\theta]$-linearly independent in $G_0^{\widetilde{f}}$.}
\smallskip
\textbf{Proof. } The vector space $G_0^{\widetilde{f}}/\theta G_0^{\widetilde{f}}$ can be identified with the Milnor ring of $\widetilde{f}$ by mapping the class $[\omega_0]$ to $1$. Under this identification the class $[\omega_i]$ goes to the class of $\Delta_i$. Since $1, \Delta_1, \Delta_2, \Delta_3$ form a basis of the Milnor ring, the classes $[\omega_0], \dots, [\omega_3]$ form a basis of $G_0^{\widetilde{f}}/\theta G_0^{\widetilde{f}}$. This implies the statement. $\blacksquare$
\subsubsection{Lemma.}
\label{SubSubSec.: Lemma A_0, A_inf}
{\it
(i) Elements $[\omega_0], \dots, [\omega_3]$ freely generate in $G^{\widetilde{f}}$ an $\mathcal{O}_W$-submodule $H^{\widetilde{f}}$ of rank 4;
\smallskip
(ii) The following identities hold
\begin{align*}
&\theta^2 \partial_{\theta}[\omega_0]=3[\omega_1]\\
&\theta^2 \partial_{\theta}[\omega_1]=6[\omega_2]+\theta[\omega_1]\\
&\theta^2 \partial_{\theta}[\omega_2]=3[\omega_3]+3q[\omega_0]+2\theta[\omega_2]\\
&\theta^2 \partial_{\theta}[\omega_3]=3q[\omega_1]+3\theta[\omega_3],
\end{align*}
and therefore $H^{\widetilde{f}}$ is a $\mathcal{D}_W$-submodule;
\smallskip
(iii) $G^{\widetilde{f}}=H^{\widetilde{f}}$.
\smallskip
(iv) The connection matrix in the basis $[\omega_0], \dots, [\omega_3]$ takes the form
\begin{align}\notag
\left(\frac{A_0}{\theta}+A_{\infty}\right)\frac{d\theta}{\theta},
\end{align}
where
\begin{align}\notag
A_0=
\left(
\begin{array}{rrrr}
0 & 0 & 3q & 0 \\
3 & 0 & 0 & 3q \\
0 & 6 & 0 & 0 \\
0 & 0 & 3 & 0 \\
\end{array}
\right)
\intertext{and}\notag
A_{\infty}=
\left(
\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 3 \\
\end{array}
\right)
\end{align}
}
\medskip
\textbf{Proof. } (i) By Lemma \ref{SubSubSec.: Lemma Linear Independence} $[\omega_0], \dots, [\omega_3]$ are linearly independent in
$G_0^{\widetilde{f}}$, and hence also in $G^{\widetilde{f}}$ and $G^f$. Therefore, they generate a submodule of rank 4 in $G^{\widetilde{f}}$ (and in $G^f$).
\smallskip
(ii) Because of the natural isomorphism $G^{\widetilde{f}} \to G^f$ we can check these identities in $G^f$.
First, note that the following identities hold in the ring of functions on $U$
\begin{align*}
&f=3\Delta_1-2\Delta_1f'_{\Delta_1}-\Delta_2f'_{\Delta_2}\\
&\Delta_1\Delta_1=2\Delta_2+\Delta_1^2f'_{\Delta_1}\\
&\Delta_1\Delta_2=(\Delta_3+q)+\Delta_2(\Delta_1f'_{\Delta_1}+\Delta_2f'_{\Delta_2}-\Delta_3f'_{\Delta_3})\\
&\Delta_1\Delta_3=q\Delta_1+\Delta_3(\Delta_1f'_{\Delta_1}+\Delta_2f'_{\Delta_2}+\Delta_3f'_{\Delta_3})-q(\Delta_1f'_{\Delta_1}+\Delta_2f'_{\Delta_2}-\Delta_3f'_{\Delta_3}).
\end{align*}
These identities can be checked by direct computations.
Using the first identity we get
\begin{align*}
&\theta^2 \partial_{\theta}[\omega_0]=[f\omega_0]=[(3\Delta_1-2\Delta_1f'_{\Delta_1}-\Delta_2f'_{\Delta_2})\omega_0]=\\
&3[\Delta_1\omega_0]-2[\Delta_1f'_{\Delta_1}\omega_0]-[\Delta_2f'_{\Delta_2}\omega_0].
\end{align*}
Applying Lemma \ref{SubSubSec.: Identities in G} we get
\begin{align*}
&\theta^2 \partial_{\theta}[\omega_0]=3[\Delta_1\omega_0]=3[\omega_1].
\end{align*}
Using the first two identities and Lemma \ref{SubSubSec.: Identities in G} we get
\begin{align*}
&\theta^2 \partial_{\theta}[\omega_1]=[f\Delta_1\omega_0]=[6\Delta_2\omega_0+\Delta_1^2f'_{\Delta_1}\omega_0-\Delta_1\Delta_2f'_{\Delta_2}\omega_0]=6[\omega_2]+\theta[\omega_1].
\end{align*}
The remaining two formulas are obtained analogously.
\smallskip
(iii) Since $H^{\widetilde{f}}$ and $G^{\widetilde{f}}$ are $\mathcal{O}_W$-coherent $\mathcal{D}_W$-modules of the same rank, they coincide (as in the proof of Lemma \ref{SubSubSec.: GM Iso}).
\smallskip
(iv) It follows from (ii). $\blacksquare$
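\smallskip
Let us note that $A_0$ coincides with the matrix $\mathcal{U}$ of Section \ref{SubSubSec.: InitCondQuad}, and a direct check shows that $A_{\infty}=\frac{3}{2}\,\mathrm{Id}-\mathcal{V}$. This already matches part of the initial conditions \eqref{Eq.: Initial Conditions General Form} for the quantum cohomology of $Q_3$.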
\subsubsection{Lemma.}
\label{SubSubSec.: Lemma about Basis}
{\it The classes $[\omega_0],\dots, [\omega_3]$ form a $\mathbb{C}[\theta]$-basis in $G_0^{\widetilde{f}}$.}
\medskip
\textbf{Proof. } Let $H_0^{\widetilde{f}}$ be the $\mathcal{O}_{\mathbb{A}^1_{\theta}}$-submodule of $ G_0^{\widetilde{f}}$ generated by $[\omega_0], \dots, [\omega_3]$. We have the short exact sequence of $\mathcal{O}_{\mathbb{A}^1_{\theta}}$-modules
\begin{align}\label{Eq.: ShortExactSequence 0}
0 \to H_0^{\widetilde{f}} \to G_0^{\widetilde{f}} \to Q_0^{\widetilde{f}} \to 0,
\end{align}
and we need to show that $Q_0^{\widetilde{f}}=0$.
Since ${Q_0^{\widetilde{f}}}_{|\mathbb{A}^1_{\theta}-\{0\}}=0$ by Lemma \ref{SubSubSec.: Lemma A_0, A_inf}, and $Q_0^{\widetilde{f}}$ is finitely generated, it is enough to prove that the fiber at zero vanishes, i.e. $Q_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta)=0$.
\smallskip
Tensoring \eqref{Eq.: ShortExactSequence 0} with $\mathbb{C}[\theta]/(\theta)$ we get an exact sequence (the tensor product is right exact)
\begin{align*}
H_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta) \to G_0^{\widetilde{f}} \otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta)\to Q_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta) \to 0,
\end{align*}
which can be rewritten as
\begin{align*}
H_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta) \to \Omega^n(\widetilde{U})/d\widetilde{f}\wedge \Omega^{n-1}(\widetilde{U}) \to Q_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta) \to 0.
\end{align*}
Since the classes of $\omega_0, \dots, \omega_3$ generate $\Omega^n(\widetilde{U})/d\widetilde{f}\wedge \Omega^{n-1}(\widetilde{U})$, the first map is surjective. Therefore, $Q_0^{\widetilde{f}}\otimes_{\mathbb{C}[\theta]} \mathbb{C}[\theta]/(\theta)=0$, and, finally, $Q_0^{\widetilde{f}}=0$. $\blacksquare$
\subsection{Pairing.}
\label{SubSec.: Our pairing}
In this section we study the pairing \eqref{Eq.: Pairing II} in our setup. Since it makes no difference here, we drop the subscripts in the notation of the Gauss-Manin systems and simply write $G$.
\subsubsection{Lemma.}
{\it The $V$-filtration on $G$ along $\{\tau=0\}$ is given by
\begin{align*}
&V_0G=\bigoplus_{i=0}^{3}\mathbb{C}[\tau]e_i\\
&V_pG=\tau^{-p}V_0G,
\end{align*}
where $e_i=\tau^i[\omega_i]$.
}
\medskip
\textbf{Proof. }
This lemma, as well as most of the results of this article, first appeared in \cite{Sm}. There, the proof of this lemma contains a mistake, which is corrected here.
\smallskip
It is enough to show that this filtration satisfies the conditions of Section \ref{SubSubSec.: V-filtration}.
\smallskip
\textit{1. Compatibility of filtrations.}
\textit{1a.} It is clear that $\tau (V_pG) \subset V_{p-1}G$, and using that $\partial_{\tau}=-\theta^2 \partial_{\theta}$ and applying Lemma \ref{SubSubSec.: Lemma A_0, A_inf} it is not difficult to see that $\partial_{\tau} (V_pG) \subset V_{p+1}G$.
\smallskip
These two facts imply that $(V_m\mathcal{D}_{\mathbb{A}^1_{\tau}})(V_pG) \subset V_{p+m}G$.
\smallskip
\textit{1b.} It is clear that the condition $\tau \, V_pG = V_{p-1}G$ for $p<0$ holds.
\smallskip
\textit{2. Nilpotence.} Classes of $\tau^{-p}e_0,\dots, \tau^{-p}e_3$ form a basis in $\text{Gr}_p^VG$. Using Lemma \ref{SubSubSec.: Lemma A_0, A_inf} one can see that in this basis the operator induced by $\tau \partial_{\tau}+p$ on $\text{Gr}_p^VG$ is given by the matrix
\begin{align}\notag
\left(
\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
-3 & 0 & 0 & 0 \\
0 & -6 & 0 & 0 \\
0 & 0 & -3 & 0 \\
\end{array}
\right)
\end{align}
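Denoting this matrix by $N$, a direct computation of its powers gives
\begin{align}\notag
N^2=\left(
\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
18 & 0 & 0 & 0 \\
0 & 18 & 0 & 0 \\
\end{array}
\right), \qquad
N^3=\left(
\begin{array}{rrrr}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
-54 & 0 & 0 & 0 \\
\end{array}
\right), \qquad
N^4=0.
\end{align}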
It is clearly nilpotent. $\blacksquare$
\subsubsection{Lemma.}
\label{SubSubSec.: PairingComputation}
{\it The pairing $S_W$ satisfies\footnote{The overline over the second argument of $S_W$ stresses that the element is considered as a section of $j^*\mathcal{F}_W$. Therefore, $\tau$ and $\nabla_{\frac{\partial}{\partial\tau}}$ act with the opposite sign.}
\begin{align}
S_W([\omega_k],\overline{[\omega_l]})=
\left\lbrace \begin{array}{ll}
S_W([\omega_0],\overline{[\omega_3]}) & \text{if} \quad k+l=3 \\
0 & \text{otherwise}
\end{array} \right.
\end{align}
and $S_W([\omega_0],\overline{[\omega_3]}) \in \tau^{-3} \mathbb{C}$.
}
\medskip
\textbf{Proof. } To simplify the notation we will be writing $S$ instead of $S_W$. By \eqref{Eq.: teta^n} we know that
$$
S([\omega_k],\overline{[\omega_l]})\in \tau^{-3}\mathbb{C}[\tau^{-1}].
$$
On the other hand, by \eqref{Eq.: Pairing Property wrt V-filtration} we get
$$
S(e_k,\overline{e_l})=\tau^k(-\tau)^l S([\omega_k],\overline{[\omega_l]})\in \mathbb{C}[\tau],
$$
therefore $S([\omega_k],\overline{[\omega_l]})\in \tau^{-(k+l)}\mathbb{C}[\tau]$. Hence
\begin{align}\label{Eq.: Vanishing By V-filtration}
& S([\omega_k],\overline{[\omega_l]})=0 \hspace{37pt} \text{if} \quad k+l < 3, \\ \notag
& S([\omega_k],\overline{[\omega_l]}) \in \tau^{-3} \mathbb{C} \hspace{20pt} \text{if} \quad k+l = 3.
\end{align}
To show vanishing in the remaining 4 cases with $k+l > 3$ one just combines \eqref{Eq.: Derivative Of Pairing} and \eqref{Eq.: Vanishing By V-filtration}. Let us consider the case $k=1$, $l=3$ only. Applying \eqref{Eq.: Derivative Of Pairing} to $S([\omega_0], \overline{[\omega_3]})\in \tau^{-3}\mathbb{C}$ and using \eqref{Eq.: Vanishing By V-filtration} we get
\begin{align}\notag
&-3\tau^{-1}S([\omega_0], \overline{[\omega_3]})=\frac{d}{d\tau}S([\omega_0], \overline{[\omega_3]})=S(\partial_{\tau} [\omega_0], \overline{[\omega_3]})-S([\omega_0], \overline{\partial_{\tau}[\omega_3]})=\\ \notag
& \hspace{85pt} =-S(3[\omega_1], \overline{[\omega_3]})+S([\omega_0], \overline{3q[\omega_1]+3\theta[\omega_3]})=\\ \notag
& \hspace{85pt} =-3S([\omega_1], \overline{[\omega_3]})-3\tau^{-1}S([\omega_0], \overline{[\omega_3]}),
\end{align}
and therefore $S([\omega_1], \overline{[\omega_3]})=0$.
\smallskip
Similarly one can show that $S([\omega_0], \overline{[\omega_3]})=S([\omega_2], \overline{[\omega_1]})$. Moreover, by \eqref{Eq.: Swap Arguments in the Pairing} we have $S([\omega_0], \overline{[\omega_3]})=S([\omega_3], \overline{[\omega_0]})$ and $S([\omega_2], \overline{[\omega_1]})=S([\omega_1], \overline{[\omega_2]})$. $\blacksquare$
\subsection{Canonical solution to the Birkhoff problem.}
The problem of extending $(G_0, \nabla)$ to $\mathbb{P}^1_{\theta}$ described in Section \ref{SubSec.: GM system} has a canonical solution given by Hodge theory. Here we will show that our solution, given by the basis $\omega_0, \dots, \omega_3$, is canonical in this sense. We more or less keep the notation of \cite[Sec.~5]{DoSa2}. Details can be found in loc. cit. and references therein.
It is a general fact (see \cite{Sa2}) that the vector space
\begin{align}
H= \bigoplus_{\alpha \in [0,1) } \text{Gr}_{\alpha}^V G
\end{align}
carries a mixed Hodge structure, i.e., $H$ has a rational structure $H_{\mathbb{Q}}$, an increasing weight filtration $W_{\bullet}H_{\mathbb{Q}}$, and a decreasing Hodge filtration $F^{\bullet}H$, such that the Hodge filtration induces a pure Hodge structure of weight $m$ on $\text{Gr}_m^WH$ for all $m$.
For the function $\widetilde{f}$ we have
\begin{align*}
&H=\text{Gr}_0^VG = \bigoplus_{i=0}^{3} \mathbb{C} \, e_i \\
&F^{p}H=\bigoplus_{i=0}^{3-p} \mathbb{C} \, e_i,
\end{align*}
where abusing notation we write $e_i$'s meaning classes of $e_i$'s in $H$. The complexification of the weight filtration is
\begin{align}\label{Eq.: Weight Filtration}
0 \subset \mathbb{C} e_3 \subset \mathbb{C} e_3 \subset \mathbb{C} e_2 \oplus \mathbb{C} e_3 \subset \mathbb{C} e_2 \oplus \mathbb{C} e_3 \subset \mathbb{C} e_1 \oplus \mathbb{C} e_2 \oplus \mathbb{C} e_3 \subset \mathbb{C} e_1 \oplus \mathbb{C} e_2 \oplus \mathbb{C} e_3 \subset H,
\end{align}
where the first term on the left is $W_{-1}H$. The only non-trivial associated graded objects are $\text{Gr}_0^WH$, $\text{Gr}_2^WH$ and $\text{Gr}_4^WH$.
In the case of the function $\widetilde{f}$, Saito's canonical opposite filtration (a filtration on $H$) is defined as
\begin{align*}
H_{\text{Saito}}^{\bullet}=\sum_q \overline{F^qH} \cap W_{3+q-\bullet}H.
\end{align*}
From \eqref{Eq.: Weight Filtration} and the fact that the weight filtration is stable under conjugation we have
\begin{align}\label{Eq.: Conjugation}
\overline{e}_i=\sum_{r=i}^3 a_{ri}e_r,
\end{align}
with $a_{ii} \neq 0$. Using \eqref{Eq.: Conjugation} one can show that
\begin{align}\label{Eq.: Saito's filtration}
H_{\text{Saito}}^{p}=\bigoplus_{i=0}^{3-p} \mathbb{C} e_{3-i}.
\end{align}
To any solution of the Birkhoff problem one can attach a filtration $H^{\bullet}$ on $H$ by the formula
\begin{align*}
&H^i:=(G_{\infty}^i \cap V_0G)/(G_{\infty}^i \cap V_{-1}G),
\end{align*}
where $G^{k}_{\infty}=\tau^k G_{\infty}$. To prove that this solution of the Birkhoff problem is canonical one needs to show that the filtrations $H^{\bullet}$ and $H_{\text{Saito}}^{\bullet}$ coincide.
In our case it is easy to see that
\begin{align*}
&H^p=\bigoplus_{i=0}^{3-p} \mathbb{C} \, e_{3-i},
\end{align*}
which coincides with \eqref{Eq.: Saito's filtration}. Hence, our solution to the Birkhoff problem is canonical.
\subsection{Frobenius manifold.}
Ideally one would like to show the existence of (or to exhibit) a deformation of $\widetilde{f}$ producing a Saito framework isomorphic to $QH(Q_3)$. We have not been able to achieve this goal so far.
There is a general construction of such Saito frameworks due to A. Douai and C. Sabbah (see \cite{DoSa1}), but it requires some additional properties of $\widetilde{f}$ that we have not been able to check. Namely, one needs to show that $\widetilde{f}$ is $M$-tame. We refer to loc. cit. for details.
Assume that such a Saito framework $(M, \circ, e, g_{\omega}, E)$ exists. Then the initial conditions for $M$ at the origin are
$$
T=T_{x_0}M, \quad \mathcal{U}=A_0, \quad \mathcal{V}=-A_{\infty}+\frac{3}{2}\text{Id}, \quad g_{\omega}, \quad e.
$$
Therefore, by Lemmas \ref{SubSubSec.: Lemma A_0, A_inf} and \ref{SubSubSec.: PairingComputation} we see that these initial conditions coincide with those of Section \ref{SubSubSec.: InitCondQuad}.
As human beings, we all seem to have entitlement issues. We believe we are entitled to married parents. To an easy life. To having both parents present in our lives. To a job, a house, annual increases and regular promotions at work. We believe we are entitled to a loving, caring spouse and a marriage that somehow just works without any effort.
We think everyone must like us, that people will behave like we do. That other cultures must fit into our societal paradigm. We believe that no one SHOULD ever lie to us, steal from us or embezzle our pensions. We believe we have a right to loyalty, honesty, and fidelity from our friends, family, and spouses. We believe that we are entitled to our God-given three score and 10 years (70). That no-one should ever bury a child. We believe that our lives should just flow and the universe must just provide.
So how's that working for you?
We are entitled to a full life. A life full of happiness and despair. Filled with ecstasy and depression. Good times and bad. Fulfilled and unfulfilled. The truth is that as humans we are entitled to the full ambit of life experiences. Gut wrenching grief and moments our hearts could burst with joy. The truth is that some people will love you and others will hate you. That you will love some and hate others. The truth is that most of your achievements will come from actions that you have initiated and acted upon. The truth is that most of your failures will come from actions that you have initiated and acted upon too.
You are entitled to feedback. Negative feedback tells you that you are out of sync with authenticity. Positive feedback tells you that you are in sync with authenticity. Your mastery will come from constant tweaking, constant trial-and-error. Doing more of what works and less of what does not.
You are entitled to balance. When you are egotistical the universe will pull you down. When you are down-in-the-dumps, it will conspire to lift you up. You will be arrogant and humble and then grateful when this occurs.
When you start following your internal compass and tweak accordingly, constantly improving, you start to live in a state of grace. A state of fulfillment.
When you are doing more of what you love and less of what you hate, you build your self-confidence and positive self-image. You create health, wealth, and fulfillment.
When you are doing more of what you hate and less of what you love, you kill your self-confidence and create a negative self-image. If you do enough of this you will probably get ill and shorten your lifespan (and hate life while you wait to die).
Fulfillment is a chemically induced state. It is the result of an internal chemical reaction. It is created when you live a life filled with the things you truly love and appreciate. When you sub-contract the things you really dislike doing to someone who actually loves doing them.
Most importantly, it is a state achieved by yourself for yourself. It's not your parents' duty. It's not your government's duty. It's not your Boss's duty. Nobody wakes up every morning to fulfill your needs. Only you can create that.
Author, public speaker, trainer and coach, Stephen van Basten, is a past Karate World Champion, a yoga enthusiast, and recovering golfer. Stephen has owned his own company, worked in his family's business, and been employed by small and large businesses like Shell SA and the BTG Group.
This article is excerpted from a post on Stephen's blog. Follow @stephenvb on Twitter and find his Author Page on Facebook. Lastly, click here for a free download of his most recent title, So you're engaged, now what?
# Characteristic not implies isomorph-free in finite group

This article gives the statement, and proof, of a non-implication relation between two subgroup properties, when the big group is a finite group. That is, it states that in a finite group, every subgroup satisfying the first subgroup property (i.e., characteristic subgroup) need not satisfy the second subgroup property (i.e., isomorph-free subgroup).

## Statement

### Statement with symbols

There exists a finite group $G$ and a characteristic subgroup $H$ of $G$ such that $H$ is not an isomorph-free subgroup of $G$. In other words, there exists another subgroup $K$ of $G$ that is isomorphic to $H$.

## Proof

### Example of the dihedral group

Further information: dihedral group:D8, subgroup structure of dihedral group:D8, center of dihedral group:D8

Let $G$ be the dihedral group of order eight, given as follows, where $e$ denotes the identity element of $G$:

$G = \langle a,x \mid a^4 = x^2 = e, xax = a^{-1} \rangle$.

Let $H$ be the center of $G$. $H$ is a subgroup of order two generated by $a^2$.

* $H$ is characteristic.
* $H$ is not isomorph-free: the subgroup $\langle x \rangle$ of $G$ is isomorphic to $H$.
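The example can also be verified by brute force. Below is an illustrative Python sketch; the encoding of the dihedral group of order eight as pairs $(r, s)$ (rotation part $a^r$, flip part $x^s$) is our own and not part of the article:

```python
# Brute-force check in the dihedral group of order 8.
# Elements are pairs (r, s) meaning a^r x^s, with a^4 = x^2 = e and xax = a^{-1}.
# This encoding is an illustration, not part of the original article.

def mul(g, h):
    r1, s1 = g
    r2, s2 = h
    # x a x^{-1} = a^{-1}: a flip negates the rotation part it passes over
    return ((r1 + (r2 if s1 == 0 else -r2)) % 4, (s1 + s2) % 2)

G = [(r, s) for r in range(4) for s in range(2)]

# The center: elements commuting with every element of G
center = {g for g in G if all(mul(g, h) == mul(h, g) for h in G)}

def generated(g):
    """Cyclic subgroup generated by g."""
    seen, cur = {(0, 0)}, g
    while cur not in seen:
        seen.add(cur)
        cur = mul(cur, g)
    return seen

H = center             # Z(G) = <a^2> = {e, a^2}
K = generated((0, 1))  # <x>   = {e, x}

# H and K both have order 2, hence are isomorphic, yet K != H,
# so the (characteristic) center is not isomorph-free.
assert sorted(H) == [(0, 0), (2, 0)]
assert sorted(K) == [(0, 0), (0, 1)]
assert H != K
```

Since any two groups of order two are isomorphic, exhibiting a second order-two subgroup distinct from the center is exactly what "not isomorph-free" requires; characteristicness of the center is a general fact and is not checked here.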
EFSA publishes all its scientific outputs, including its scientific opinions, in the EFSA Journal. It also issues a range of supporting publications. See also Definitions of EFSA Scientific Outputs and Supporting Publications.
Scientific Opinion
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the Notification (Reference C/NL/98/11) for the placing on the market of herbicide-tolerant oilseed rape GT73, for import and processing, under Part C of Directive 2001/18/EC from Monsanto.
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a question from the Commission related to the Austrian notification of national legislation governing GMOs under Article 95(5) of the Treaty.
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the safety of foods and food ingredients derived from herbicide-tolerant genetically modified maize NK603, for which a request for placing on the market was submitted under Article 4 of the Novel Food Regulation (EC) No 258/97 by Monsanto
Opinion of the Scientific Panel on Genetically Modified Organisms on a request from the Commission related to guidance notes supplementing Part B of Annex II to Council Directive 90/219/EEC, as amended by Directive 98/81/EC, on the contained use of genetically modified micro-organisms
Opinion of the Scientific Panel on Genetically Modified Organisms on the use of antibiotic resistance genes as marker genes in genetically modified plants
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the Notification (Reference C/DE/02/9) for the placing on the market of insect-protected genetically modified maize MON 863 and MON 863 x MON 810, for import and processing, under Part C of Directive 2001/18/EC from Monsanto.
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the safety of foods and food ingredients derived from insect-protected genetically modified maize MON 863 and MON 863 x MON 810, for which a request for placing on the market was submitted under Article 4 of the Novel Food Regulation (EC) No 258/97 by Monsanto
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the Austrian invoke of Article 23 of Directive 2001/18/EC
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the Greek invoke of Article 23 of Directive 2001/18/EC
Opinion of the Scientific Panel on genetically modified organisms [GMO] on a request from the Commission related to the notification (Reference C/NL/00/10) for the placing on the market of insect-tolerant genetically modified maize 1507, for import and processing, under Part C of Directive 2001/18/EC from Pioneer Hi-Bred International/Mycogen Seeds*.
Need advice on config
By kcv2012, 23 Nov 2019 at 9:37pm
7 333 By kcv2012
want to purchase but on budget
By jadoncolston, 24 Nov 2019 at 7:32pm
Question about cases....
By Whthawke66, 24 Nov 2019 at 2:40pm
New VR build...help!
By hispeed7721, 22 Nov 2019 at 6:35pm
Replacement gaming machine
By Whthawke66, 22 Nov 2019 at 11:29am
Is this over the top for a student?
By snowflake, 21 Nov 2019 at 11:21pm
A Build For Audio Production?
By ProjectTrinity, 19 Nov 2019 at 9:36am
3 210 By ProjectTrinity
First build w/DS... no more apples for me!!!
By Funnel-Designs, 16 Sep 2019 at 10:03pm
By Williamsde10, 19 Nov 2019 at 4:28pm
By Caellwin, 19 Nov 2019 at 11:30am
How is this build for VR / Art?
By FlareStar, 18 Nov 2019 at 4:42am
Hows my build?
By Digitroni, 16 Nov 2019 at 11:16am
Help with selecting a case for my configuration?.
By iosman123, 16 Nov 2019 at 11:47pm
1 180 By Snaike
Lynx Storage Questions
By DHPlayer, 15 Nov 2019 at 8:00pm
gaming center
By alsoofi88, 13 Nov 2019 at 11:13pm
First Time To Get A Gaming PC
By darrellv17, 12 Nov 2019 at 1:24pm
I need some help
By retrowolf98, 09 Nov 2019 at 6:23pm
Some Advice Needed
By Wholly_Knight, 31 Oct 2019 at 4:47pm
28 790 By Alex
New DS to Replace my apollo
By Gunner GzR, 10 Nov 2019 at 11:14am
Buy new or Upgrade
By Reagan, 06 Oct 2019 at 8:47am
Beginning to Design a New PC
By sbradfor, 07 Nov 2019 at 6:18pm
First time buyer what you think?
By Chriside420, 30 Sep 2019 at 4:22pm
34 1222 By Chriside420
My Aventum X configuration
By Lef, 06 Nov 2019 at 3:40am
Aventum X changes?
By Jayboy, 06 Nov 2019 at 2:55pm
Let cool products embrace China
## setup
1. Install WeChat (weixin) on your mobile phone
2. Follow the "producthunt" WeChat (weixin) public account
## usage
1. search:keyword - search products based on keyword
2. 1/2/3 - view top voted products by day/week/month
Send 'help' to producthunt in WeChat (weixin) to get the help information
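The commands above map naturally onto a small dispatcher. A hypothetical sketch (the function name and return format are ours, not the project's actual WeChat handler):

```python
# Hypothetical dispatcher for the text commands listed above.
# The project's actual message handler may look different.
PERIODS = {"1": "day", "2": "week", "3": "month"}

def dispatch(text):
    """Map an incoming WeChat text message to a (command, argument) pair."""
    text = text.strip()
    if text.startswith("search:"):
        return ("search", text[len("search:"):].strip())
    if text in PERIODS:
        return ("top", PERIODS[text])
    # anything else (including 'help') gets the help text
    return ("help", None)
```

A real handler would then render the matching reply (search results, top-voted lists, or help text) back to the user.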
## setup development environment
* Create a virtualenv
* `cd weixin_producthunt`
* `virtualenv .venv`
* `source .venv/bin/activate`
* Install the dependencies
* `pip install -r requirements.txt`
* Configuration (_adjust it according to your needs_)
* For development copy `productporter/configs/development.py.example` to `productporter/configs/development.py`
* Database creation
* `python manage.py createall`
* Run the development server
* `python manage.py runserver`
* Visit [localhost:5000/porter/posts](http://localhost:5000/porter/posts)
## deploy
* copy `fabfile.py.example` to `fabfile.py`, modify accordingly
* copy `deploy/production.py.example` to `deploy/production.py` and modify accordingly
### bootstrap deploy
If you are deploying for the first time or setting up a new server instance, please follow these steps:
* install nginx/uwsgi/pip and other tools.
`sudo apt-get -y install python-pip nginx-full uwsgi uwsgi-plugin-python`
* setup nginx config file on remote server, please refer to `deploy/nginxconf.example`
* setup uwsgi config file on remote server, please refer to `deploy/uwsgiconf.example`
* run `fab -H newserver.example.com bootstrap`
* create a crontab to pull products periodically
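For example, a crontab entry along these lines could pull products hourly (the `manage.py` sub-command name is hypothetical; use whatever pull command the project exposes):

```text
# min hour dom mon dow  command
0 * * * * cd /home/kamidox/work/weixin_producthunt && .venv/bin/python manage.py pull
```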
### production
Since SQLite does not support Flask-Migrate, we suggest using MySQL in production mode.
* install mysql and mysqldb.
`sudo apt-get -y install mysql-server python-mysqldb`
* create database before bootstrap deploy
use `mysql -uuser -ppassword` to connect to the MySQL server, then run `CREATE DATABASE productporter;` to create a database named **productporter**.
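The production config then needs to point SQLAlchemy at this database. A hypothetical entry (the key name assumes Flask-SQLAlchemy; check the example config files for the real keys, and replace `user`/`password` with real credentials):

```python
# Hypothetical production database setting; key name assumes Flask-SQLAlchemy.
SQLALCHEMY_DATABASE_URI = "mysql://user:password@localhost/productporter"
```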
### continuous deploy
* run `fab deploy`
### more on deploy
I deploy this app on AWS with nginx + uwsgi. Let me explain more about the nginx config and uwsgi config. As `deploy/nginxconf.example` shows, I deploy multiple apps on one AWS instance. This app is deployed at `kamidox.com/pp` as:
```text
location /pp/ {
include uwsgi_params;
uwsgi_pass unix://tmp/uwsgi.pp.sock;
}
```
For uwsgi, it's important to note the following parameters:
```xml
<virtualenv>/home/kamidox/work/weixin_producthunt/.venv</virtualenv>
<pythonpath>/home/kamidox/work/weixin_producthunt</pythonpath>
<module>run</module>
<callable>app</callable>
```
`virtualenv` sets up the virtualenv of the app. uwsgi will look for `run.py` in `/home/kamidox/work/weixin_producthunt`, find the object called `app`, and use it as the WSGI callable to serve the application.
Review of Harvest on Main Restaurant in Downtown Blue Ridge, GA
June 13, 2022 February 17, 2020 by Bret and Mary
[Updated November 9, 2021] Visiting Blue Ridge GA in the '90s (and even as recently as 2009), there was little indication that this tiny town would soon become a tourism hotspot.
Spanning just 2.6 square miles, with a population of less than 1300 people at the time, it felt like any number of other sleepy towns in the Blue Ridge Mountains.
Even on a Saturday during the Christmas holidays, the quaint downtown area was quiet and uncrowded, with few restaurants, but several antique and curio shops to explore. The Blue Ridge Scenic Railway was really the town's only major tourist attraction.
Lots of locals credit James Beard Award-nominated chef Danny Mellman and Michelle Moran, the partners behind Harvest on Main and several other Blue Ridge restaurants, with turning the town into a foodie-friendly mecca.
Relocating to Blue Ridge in 2009 after 25 years in Florida, the success of the couple's Lit'l Pond Hospitality Group has revitalized the North Georgia food scene.
Here's a deeper look at Harvest on Main's menu, interior design, and why we think it ranks among the very best restaurants in Blue Ridge, GA.
HARVEST ON MAIN INFO
ADDRESS: 576 East Main Street, Blue Ridge, GA 30513
HOURS: Like those of many Blue Ridge restaurants, Harvest on Main's hours are seasonal. For February 2020, their hours are as follows:
Monday: Lunch 11am to 3pm / Dinner 4pm to 8pm
Tuesday-Wednesday: CLOSED
Thursday: Lunch 11am to 3pm / Dinner 4pm to 8pm
Friday-Saturday: Lunch 11am to 3pm / Dinner 4pm to 9pm
Sunday: Lunch 11am – 3pm / Dinner 4pm – 8pm
• Danny Mellman is a self-taught chef who honed his skills in England, France & Italy.
• He was formerly Executive Chef of The Mad Batter in Cape May, NJ, where he drew national attention for his innovative use of game and wild herbs.
• Mellman moved to Captiva Island, FL and opened The Greenhouse Grill in the late '80s. He spent 25 years and opened several restaurants there, earning a James Beard nomination for Best Chef in the South.
• His partner, Michelle Moran, is a longtime gourmet food writer and oversees the couple's agricultural project, The Cook's Farm in nearby Morganton.
READ MORE: The 20 Best Things to Do in Blue Ridge, GA
THE ATMOSPHERE AT HARVEST ON MAIN
Located on Main Street in the heart of downtown Blue Ridge, Harvest on Main is designed to look like a cozy mountain lodge from the outside.
With river stone steps, massive wooden beams, Deer antlers, and strings of lights illuminating the entrance, it sets the stage for the upscale rustic look of the restaurant's interior. The wafting smell of smoking meats definitely entices passers-by to venture inside.
The design really brings the spirit of outdoor adventure indoors. A large wooden canoe and paddles take up the wall above the door.
Antique fishing rods adorn the wall to the left of the huge hearth stone fireplace, while antique hunting rifles adorn the wall to the right.
Wooden tables, chairs, floors, and a cathedral ceiling are offset by Elk antlers, Moose and Deer trophies, with paintings of nature scattered about.
With the dim lighting and mellow music, it sets an intimate, romantic mood immediately as we settle at a table right by the fireplace that dominates the center of the dining room.
There's a bar in the back that serves an array of cocktails and an extensive list of wines, and bustles with action during the busy summer months.
The place was relatively quiet when we visited on a Thursday night in mid-winter, but we could hear occasional bursts of laughter from the private event space on the restaurant's bottom level.
HARVEST ON MAIN MENU
Chef Mellman's Harvest on Main menu centers on what he calls globally influenced cuisine with a Southern twang.
It generally adheres to the Farm to Table ethos, sourcing the majority of ingredients from local farmers and artisinal suppliers. As such, the menu changes seasonally, with appetizer and entree specials that change daily based on what's fresh and available.
Mellman and Moran also own The Cook's Farm, which produces quite a few of the ingredients that appear on the restaurant's menu. These include eggs, honey, specialty produce, Quail, Pheasant, Rabbit, Geese, and Duroc/Tamworth Pigs.
We started our meal with a Charcuterie Platter special, which featured House Smoked Bacon, Smoked Duroc Sausage, Smoked Tasso Ham, House Spiced Spanish Olives, House Pickles, cornbread, spicy mustard, and more.
The flavors of the farm-fresh meats were uniformly intense and smoky, with just the right amount of peppery spice. The Pimento Cheese appetizer, which is served with House Pickles and crackers, seemed somewhat tame by comparison.
READ MORE: The Top 10 Treehouse Rentals in the Georgia Mountains
For me, the true measure of a chef lies not in how inventive and "out there" his/her culinary concoctions are (remember molecular gastronomy?), but how well he or she puts an original spin on a classic dish you've tasted a million times before.
So for our entrees we started with their Wild Caught Gulf Shrimp & Grits, a masterful revelation. The Logan Turnpike Grits were whipped 'til they tasted like fluffy clouds of creaminess, and the peppery Tasso Ham gave it just the right amount of kick. The homemade Buttermilk Cornbread on top, baked and then pan-fried in butter, was simply divine.
We followed it with Locally Farmed Rainbow Trout, which is served with House Smoked Bacon, Sweet Potato Succotash, and topped with Chickpea & Pickled Tomato Salad. The fish tasted so fresh it might've been caught that afternoon, and the succotash added a sweet finish.
I also highly recommend a side of Harvest's "3 Pork" Collard Greens. Fresh from the farm, they're lightly sweetened with molasses– an unusual, but refreshing change from the typical vineger-drenched approach of most Southern chefs.
By that point we were really too stuffed for dessert, but could not resist the temptation of a Blueberry Bread Pudding topped with caramel ice cream. It was just as luscious as it sounds, calories be damned!
PRIVATE EVENTS AT HARVEST ON MAIN
As mentioned above, there's a private event space downstairs at Harvest On Main, which offers full-service customized menus tailored to each event's needs.
The company also offers off-site catering, as well as hosting larger private events at The Cook's Farm in Morganton.
The Farm is also home to an annual Sunday Supper farm-to-table series, and a "Farm-to-Fork" summer camp for kids.
After an exquisite dinner at Harvest On Main, we're looking forward to returning to Blue Ridge in May to review Cucina Rustica, Masseria Kitchen & Bar, and their other Lit'l Pond Hospitality Group restaurants. –by Bret Love; photos by Bret Love & Mary Gabbett
READ MORE: Lake Blue Ridge Boat Rentals, Cabin Rentals, Camping & Fishing
Categories Blue Ridge GA, Blue Ridge GA Restaurants, North Georgia Mountains
Wood Haven Retreat: Blue Ridge Cabin Rental 5 Minutes from Downtown
Calendar of Blue Ridge, GA Events for 2020 | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 1,224 |
Dawid i Sandy is a Polish-Swedish animated film from 1988.
Voice cast
Plot
The film tells the story of a young boy, Dawid, who lives with his parents and his dog Pips on the edge of the jungle. It turns out that a Huntress prowls there, catching animals and locking them in cages; her boss, the Big Chief, sells them to zoos. With the money earned this way, they plan to build a gold mine in the heart of the jungle. Dawid comes across an eagle chick, Sandy, whom he wants to return to its parents, but they have been captured by the Huntress. When aliens searching for pearls appear, the woman poses as a defender of animals in order to get what she wants. Together they try to defeat the poachers and free the captured animals. The Big Chief's flight through the Kingdom of the Underworld and his passage through the magic mirrors end in his death. The jungle returns to normal life, and Sandy finds his parents; together they fly back to the nest.
Film awards
1988 – Wiesław Zięba, Poznań (Children's Film Festival) – Children's Jury Award
1987 – Wiesław Zięba, Head of Cinematography Award in the field of animated film
References
External links
Polish adventure films
Polish animated films of 1988
Polish animated fantasy films
Animated fantasy films
Swedish adventure films
Swedish animated films
Fantasy films of 1988
Polish animated feature films | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 1,498 |
Home Band Members Old Members DAVE MUSTAINE Says: "I Got Over METALLICA Using My Songs A Long Time Ago"
DAVE MUSTAINE Says: "I Got Over METALLICA Using My Songs A Long Time Ago"
Dave Mustaine is featured in Rolling Stone magazine's newest installment of "My Life In 15 Songs", for which he selected songs that represent the turning points in his career, from early METALLICA numbers to MEGADETH classics spanning the band's entire three-decade-plus existence to date.
One of the songs Mustaine picked was METALLICA's "Ride The Lightning", of which he said: "There's certain riffs that you hear, and you just know who the songwriter is. And I'm not talking about just when I write. So there are certain parts of 'Ride The Lightning' and 'Leper Messiah' and the first album, all that stuff, you can tell little things that are similar with MEGADETH's guitar playing 'cause you know there's so much you can do with an instrument. I think they did great with it.
"I didn't write all of the music in 'Ride The Lightning'. Lars [Ulrich, METALLICA drummer] wrote the melodic intro, and then the next part I wrote and then the next part I wrote and then the next part and then it went back to his part and then it went back to my next three parts and then at that point… who's keeping score?"
He continued: "I got over [METALLICA] using my songs a long time ago. You can obsess on shit like that or you can let it go, and nothing is gonna change it. You've got two great bands. We're friends. Stuff happened. Fuck, I forgave [MEGADETH bassist] David Ellefson after he sued me for 18 and a half million dollars, I can forgive those guys for using my songs. And honestly if it hadn't have been for that vehicle, what we started — and I mean we with a capital fucking W-E — you know, I think they did great. I'm really proud of them."
Mustaine added: "By the time [MEGADETH] put out 'Killing [Is My Business… and Business Is Good!]' in 1985, I had moved on. But I had all this stuff in my mind, in my catalog that I didn't get a chance to show those guys. We were progressing down a very simplistic road with that band. I can't remember who said it, but someone very prominent, very smart said, 'METALLICA is like to the RAMONES what MEGADETH are to THE CLASH.' And I thought that's probably one of the best things I've ever heard. I saw another comparison 'METALLICA is IRON MAIDEN to MEGADETH is LED ZEPPELIN' and I thought, 'Hey that's really a good way to look at it, too.' Because we are a little bit more twisty and turny."
Mustaine was a member of METALLICA for less than two years, from 1982 to 1983, before being dismissed and replaced by Kirk Hammett.
Dave feuded with the members of METALLICA for more than twenty years before finally patching things up over the last few years. He has jammed along with his ex-bandmates on a number of occasions during "Big 4" shows and at METALLICA's 30th-anniversary live shows in 2011.
Mustaine was not inducted into the Rock And Roll Hall Of Fame with the band during the April 2009 ceremony at Cleveland, Ohio's Public Auditorium.
Jan 12, 2017 Metallica Online
Hetfield Doesn't See The Point In Holding A Grudge Against Megadeth's Dave Mustaine
Watch: METALLICA To Be Joined By Chinese Concert Pianist LANG LANG for Beijing Show, Will Reprise Grammy Performance
Is DAVE MUSTAINE Gonna Watch METALLICA Perform With LADY GAGA At This Weekend's GRAMMY AWARDS?
Watch Video Of METALLICA's "For Whom The Bells Tolls" Covered Only With BELLS
January 12, 2017 | interviews, Old Members | dave mustaine, heavy metal, megadeth, metallica
According to Kirk Hammett Guns N' Roses Have Become a "Nostalgia Act"
What If Old-School SLAYER Wrote And Play METALLICA's "Spit Out The Bone"?
Watch Metallica Perform "Master Of Puppets" Live In Singapore
Subscribe to our Metallica mailing list and get important news and updates before everyone else.
JAMES HETFIELD Says: "I Understand That A Lot Of Bands Don't Get A Career Like We Have Had"
Sina Drums And Jadyn Rylee Cover "Nothing Else Matters" And It Is Hauntingly Beautiful – Watch It!
Metallica's (Sir) Lars Ulrich Has Been Knighted in Denmark
Metallica are publishing an illustrated children's book
Metallica Covered Rammstein Over The Weekend And It Was Glorious
Watch Metallica cover 'I Wanna Be Adored' by The Stone Roses | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,344 |
After years of impasse, bipartisan Lansing drive for criminal justice reform
Originally Published Here
Republican and Democratic leaders in Lansing have officially moved on from divided government's early kumbayas into trash talking over Line 5 and proposed taxes on business and gas.
Criminal justice reform, however, seems to be staying above the fray.
Multiple bipartisan bills seeking to reduce the state criminal justice system's reach have been introduced since the beginning of the year. Those bills — including proposed changes to civil asset forfeiture, the age of adulthood for prosecution and cash bail policies — likely will have a better chance to pass than in previous years.
Advocates each have their own theories as to why the moment is ripe for reform: Some say it can be chalked up to long-term negotiations over policy, others attribute it to changing attitudes about punishment and incarceration. All say they are thrilled there's an appetite for change on a topic that has seen growing political consensus nationally in recent years, including among members of the public.
Seeing eye-to-eye on prisons
Rep. Leslie Love, D-Detroit, and Rep. Roger Hauck, R-Union Township, sat side by side before the House Judiciary committee Tuesday, pitching their peers on legislation that would raise the age of who is considered a "juvenile" in the criminal justice system from 17 to 18 years old.
"Our knowledge of human development and behavior has changed dramatically, yet our policy towards 17-year-olds has not changed when it comes to criminal justice," said Hauck, noting that Michigan began treating 17-year-olds as adults in 1912.
People who are tried in juvenile court are less likely to commit another crime than those tried as adults, Love said. "That represents long-term savings for our state and counties … and that means better outcomes for our communities, our state and Michigan families."
Big changes from unlikely bedfellows are becoming a common picture around the Capitol complex, as bipartisan bill sponsors lobby for criminal justice reforms that often appeal to diverse constituents and for different reasons.
In addition to the 8-bill "raise the age" package (there's a twin version in both the House and Senate), the Legislature is moving bipartisan bills that would rein in civil asset forfeiture by police and the cash bail system, while another bill would allow elderly and very sick inmates with a life sentence to be released to nursing homes.
Members of the House and Senate Judiciary committees said bipartisan legislation to expunge low-level crimes for people who don't reoffend will be introduced soon.
Many Democrats have pushed criminal justice reforms for social justice reasons; in the case of the "raise the age" legislation, the sponsors argued it would protect young offenders from older ones, ensure they have access to rehabilitation programs and help ease the over-incarceration of young prisoners of color.
And the "tough on crime" mindset of the '80s and '90s is now evolving among many Republicans, said Joe Haveman, a former Republican state legislator who is now director of government relations for the Hope Network, which helps ex-offenders rejoin the workforce. Now, Haveman said, Republicans are motivated at least in part by a faith-based push to reduce incarceration and the possibility of saving taxpayers money by reducing the prison population.
Criminal justice reform "allows people more freedom and puts people back into the workforce. It ultimately lowers crime by getting people out and back into their communities and homes and lowers recidivism rates," Haveman said.
Sen. Stephanie Chang, D-Detroit, is minority vice chair of the Senate Judiciary committee. She said this legislative session is producing new political agreement — between leaders of both Judiciary committees and among the new class of senators — over criminal justice reform.
It is not the first time around the block for most of the proposed criminal justice reforms introduced this year. Previous versions have been discussed over and over in Michigan, only to die somewhere before reaching the governor's desk.
Advocates aren't sure whether any of the legislation will become law this year, but they say it feels like the timing is right.
State senators Peter Lucido, R-Shelby Township, and Stephanie Chang, D-Detroit, (chair and minority vice chair of the Senate Judiciary committee, respectively) told Bridge this week that the makeup of the Senate has changed; it's now more friendly to criminal justice reforms.
Chang said leaders of the House and Senate Judiciary committees now agree on most criminal justice priorities, which helps legislation move more quickly. For the last several years, former Sen. Rick Jones, who worked more than 30 years in law enforcement, helmed the Senate committee.
"With all due respect to Sen. Jones, he just had a different point of view," Chang said. "Because he chaired Senate Judiciary for four years there were just some issues that were not going to move forward."
Another reason, said Rep. David LaGrand, D-Grand Rapids, minority vice chair of the House Judiciary committee, may be a part of the natural pattern of political change.
"This is a pendulum. In the '90s, the pendulum swung towards mass incarceration. And there's been a general societal realization that the pendulum swung too far. Now it's swinging back," LaGrand said.
Rep. Graham Filler, R-DeWitt, chair of the House Judiciary committee, said with a Democrat in the Governor's office, it's fortunate that there's bipartisan support for criminal justice priorities: "We have shared government, why not sort of grease the skids and keep these things moving?"
Rep. Roger Hauck, R-Union Township, argued before the House Judiciary committee Tuesday that the state would save money in the long run if it raised the age of who is considered a "juvenile" from 17 to 18. He is one of the sponsors of the so-called Raise the Age legislation.
Sticking points
While shared legislative priorities are the first to get traction under divided government, some reforms continue to divide Lansing.
Gun control legislation, such as policies that would implement extreme risk protection orders and expanded background checks for would-be gun owners, are unlikely to be popular with Republican legislators, Chang said.
Peter Henning, a law professor at Wayne State University who specializes in criminal law, said industry interests — such as bail bond companies that profit off of the current system or communities whose main employers are prisons — can slow the trend toward a criminal justice system that focuses more on rehabilitation and less on incarceration.
So can high-profile criminal cases such as the Larry Nassar sex abuse scandal, he said, which inspired legislation to increase punishments for child sexual abuse.
"Because if you oppose them, you were (portrayed as) in league with Larry Nassar, and that's a career ender right there," Henning said.
Haveman, the Republican former legislator who has pushed for criminal justice reform, argues the state also hasn't gone far enough in changing sentences for juvenile lifers after a Supreme Court case determined it was unconstitutional to give juveniles life without parole.
In any criminal justice standoff, Haveman said, it can be an uphill battle to convince legislators it's worth taking the risk of upsetting law-and-order-minded constituents.
"Most legislators want to do what they feel is politically safe. And unless there's an outcry from their public for things that are in the media all the time… they don't want to stick their neck out," Haveman said.
"The public is never going to have an outcry saying please let people out of prison. That doesn't mean it's not morally the right thing to do or, from a public policy point of view, the smart thing to do." | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,993 |
The Top Six Documentary Oscar Contenders
Nicole LaPorte
In a face-off between fashion divas, food activists, Broadway dancers, and guerrilla videographers, a film about a Bourne Identity-like mission to rescue dolphins, triumphs.
When the Oscar short list for the Best Documentary category was announced recently, there was much guffawing. Where was Anvil! The Story of Anvil, a funny and touching movie about an aging Canadian big-hair band that is still head-banging as though it were 1986? Where was Tyson, a critically acclaimed film about another, more famous, banger of heads? And how was it possible that documentarian emeritus, Michael Moore—and his latest diatribe, Capitalism: A Love Story—had been snubbed? Then there was the glaring irony that one of the most talked-about documentaries in years— This Is It, the MJ film—was ineligible because it was released after September 30th.
Yet a look at the 15 films that actually are eligible to win, shows that the Academy didn't get it all wrong. Six films, in particular, stand out for their superior achievement, and The Daily Beast proudly nominates them as the year's best. The group couldn't be more varied in terms of style and sensibility, from Robert Kenner's terrifying, organic-all-the-way Food Inc.; to Matt Tyrnauer's ode to high fashion and male divas, Valentino: The Last Emperor; to Every Little Step, a behind-the-scenes look at the angst of auditioning for a Broadway musical; to Anders Ostergaard's Burma VJ, a film made from footage compiled by renegade vj's—video journalists—who surreptitiously chronicled the monk-led demonstrations against Myanmar's military government in 2007.
But the most compelling documentary of the year is The Cove. Directed by Louie Psihoyos, a National Geographic photographer, The Cove is, on its face, eco-advocacy at its best. But even more so, it's an edge-of-your-seat thriller, complete with villains—the Japanese fishermen who slaughter 23,000 dolphins a year in the sleepy town of Taiji, as well as the government officials who protect them—and a Bourne Identity-like mission, led by a crack team of activists, divers and filmmakers to document the killings and expose them to the world.
Most important, The Cove has a heart, and it belongs to Ric O'Barry, the original Flipper dolphin trainer, who has spent the last 35 years trying to make up for what he sees as his hand in the booming dolphin captivity industry by risking his life to rescue the animals from bondage; he's even served prison time for setting dolphins free. O'Barry has an Old Man and the Sea quality—weathered face, arm tattoos, eyes that alternately flare up in outrage and well up in sorrow—and Psihoyos wisely allows him to tell the story of heartbreak, horror, and hopeful, if not certain, redemption. In The New York Times, Jeannette Catsoulis wrote that The Cove "is no angry enviro-rant but a living, breathing movie whose horrifying disclosures feel fully earned." Variety's Justin Chang called the film "a love letter to a beloved species, an eye-opening primer on worldwide dolphin captivity, a playful paranoid thriller and a work of deep-seated (if sometimes hot-headed) moral outrage."
Nicole LaPorte is the senior West Coast correspondent for The Daily Beast. A former film reporter for Variety, she has also written for The New Yorker, the Los Angeles Times Magazine, The New York Times, The New York Observer, and W. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,271 |
Automobile Gazette
Automobiles and Cars News
Cars Tuning
Ford Focus has already won 12 prestigious awards
January 21, 2020 - by Viliyana - Leave a Comment
The new generation Ford Focus has already won 12 prestigious awards, including two "Car of the Year" titles, since the start of its sales in Europe in the summer. According to official data, 42,100 new cars had been sold by November, with the Focus ST-Line accounting for more than 40% of sales.
One in six customers selected Chrome Blue or Desert Island Blue, in addition to the traditional and popular Magnetic and Frozen White. The new 8-speed automatic transmission is also proving more popular with customers than expected.
Produced at Ford's factory in Saarlouis, Germany and using ultra-modern processes, the all-new Ford Focus is available in 4-door and 5-door versions, as well as a wagon version. In addition to the popular ST-Line version, the range includes the stylish Focus Trend and Titanium and the luxurious Vignale. The first SUV-inspired Focus Active Focus Crossover is also available for order.
"The media, automotive specialists, and most importantly, customers are all praising the new Focus, and this is reflected both in the variety of awards the model wins and in the speed at which cars leave the showrooms", said Glen Goold, Focus chief program engineer, Ford of Europe. "Being the best in such a competitive segment is a very high degree of recognition", he added.
The inspiring driving behavior of the all-new Focus, developed after close cooperation with customers, has been recognized by the media and organizations in Europe, through the "Car of the Year 2019" in Croatia and Finland.
The benefits for business customers, including optimized economy from advanced EcoBoost gasoline and EcoBlue diesel engines, and the ability to wirelessly charge compatible mobile phones are reflected in the winner of the "Entrepreneurial Car of the Year" Awards in Spain and the "Best Car Compact" of the Business Cars Awards in the UK.
Providing more interior space combined with high-quality materials and craftsmanship, the all-new Focus offers the widest range of driver assistance technologies available to Ford customers so far, such as Adaptive Cruise Control with Stop & Go, Speed Limit Traffic Detection and Vehicle Lane System for Traffic Unleashing and Ford's First Headset.
Earlier, the Focus was awarded the maximum five-star safety rating by the independent Euro NCAP, one of the first cars to pass the organization's new, more stringent tests.
TaggedFord
LaFerrari Aperta available for sale at auction
Why BMW M3 F80 is the best M3 ever?
Volkswagen develops real electric SUV
Previous Article Mercedes-AMG A45 S 4Matic gets 415 hp
Next Article Polestar 2 is 400 hp electric hatchback
About Viliyana
Extensive knowledge of writing and managing business journals. Likes luxury cars.
View all posts by Viliyana →
Latest Cars News
Mercedes-Benz launched diesel G-Class
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 457 |
require 'initforthe-cookies/version'
require 'initforthe-cookies/helpers'
require 'initforthe-cookies/hooks'
require 'initforthe-cookies/engine'
module Initforthe
  module Cookies
    # Site-wide settings, presumably consumed by the engine's views:
    # the site name, the cookie-policy URL, and CSS classes for buttons.
    mattr_accessor :site_name
    mattr_accessor :policy_url
    mattr_accessor :button_classes

    # Yields the module itself so a host app can assign the
    # accessors above from an initializer.
    def self.setup
      yield self
    end
  end
end | {
"redpajama_set_name": "RedPajamaGithub"
} | 3,080 |
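The gem above exposes three `mattr_accessor`s and a `setup` method that yields the module itself, the usual Rails-engine configuration pattern. Below is a minimal sketch of how a host app would consume it; the module definition here is a plain-Ruby stand-in for the real gem so the snippet is self-contained, and all configuration values are hypothetical examples, not gem defaults.

```ruby
# Stand-in for the gem so this sketch runs on its own; in a real app,
# `require 'initforthe-cookies'` provides this module (via ActiveSupport's
# mattr_accessor rather than the plain accessors used here).
module Initforthe
  module Cookies
    class << self
      attr_accessor :site_name, :policy_url, :button_classes
    end

    def self.setup
      yield self
    end
  end
end

# What a host app's config/initializers/initforthe_cookies.rb might look
# like -- all values below are hypothetical.
Initforthe::Cookies.setup do |config|
  config.site_name      = 'Example Site'
  config.policy_url     = '/cookie-policy'
  config.button_classes = 'btn btn-primary'
end
```

In a real application only the `setup` block would live in the initializer; the engine's views would then read the three attributes at render time.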
No great Sheikhs
By Tony Bingham, 2002-03-15T00:00:00+00:00
A case in the High Court provides an interesting angle on the obedience owed by parties to an adjudicator's decision. Let's hope they appreciate it in Qatar …
I am obliged to John Wright of solicitor Berwin Leighton Paisner for drawing my attention to the adjudication judgment in the High Court called Solland International vs Daraydan Holdings (case no. 92 in our series). It was an attempt by Daraydan to torpedo an adjudicator's decision to order the employer to pay Solland, the builder, a whopping £659,000. Daraydan, which acts as an intermediary for Sheikh Mohammed Khalifa Hamad Al-Thane, the deputy prime minister of Qatar, said "shan't obey". Solland took Daraydan to court. I will tell you the end of the story now. Daraydan was ordered to pay up. It is, of course, quite ordinary for the court to insist on obedience to the adjudicator's decision.
But there is an interesting angle to this case.
A few weeks ago (1 February), I told you that it wouldn't be very long before the argument used in David McLean Housing Contractors vs Swansea Housing Association would be run out again.
It was just so in this Solland case. The argument worked in McLean; it flopped in Solland. In McLean, the adjudicator told Swansea to pay up £613,000. It only paid £420,000, saying it had a claim for liquidated damages for the balance and could set off. The judge in Solland said that although he agreed with the judge in McLean, he would not allow liquidated damages to be set against the adjudicator's award. How come?
First, Daraydan had no quarrel with adjudicator Dominic Helps' decision. His decision on the sum due was the same as the architect's certificate. What the Sheikh's intermediary company was trying to do was persuade the court that it could withhold 54 weeks' damages at £15,000 per week. It looked at what happened in McLean and said to the judge in Scotland "follow that". He wouldn't.
Let's see if we can fathom the difference. In Solland the adjudicator had to decide whether he agreed with the valuation in interim no. 59 and with the architect's certificate for sums due. So the adjudication here was all about the value of work done, just like an ordinary valuation.
In their contract, parties agree to abide by, or comply with, the decision of the adjudicator. There is no question of using some device such as a withholding notice to contradict the express promise to comply. If in Solland the employer thinks it is entitled to liquidated damages, it can deduct them from future sums due but not from valuation no. 59. That valuation was the subject of the adjudication and shall be paid. And if the employer by now has nothing to deduct from, it is open to it to call for an adjudicator to make another decision. If it succeeds in showing liquidated damages are payable, it is due a cheque from Solland. Easy, isn't it?
The Swansea affair was a dispute about the final account and extension of time and loss and expense due to the builder. Do you see the immediate difference to Solland? It wasn't a mere valuation. The adjudicator in McLean did his stuff on the amount of extension of time. Once decided, that extension was binding. Aha, said the employer. Since we now have a binding decision on the extended date and since the builder reached practical completion on a later date, we can now calculate the liquidated damages. And it jolly well did. It came to £130,000. And now comes the crucial question: can Swansea HA deduct that from the adjudicator's order to pay the amount due in his decision? The answer is, yes. The reason is twofold. First, the adjudicator had of course decided the date when Swansea's builder ought to have been completed. That was binding. He did not decide, nor was he asked, if that meant Swansea could deduct liquidated damages. Instead, and this is the second reason, Swansea began an action in the High Court for payment of the £130,000 liquidated damages. Swansea got summary judgment on the same day as David McLean was seeking to enforce the adjudicator's decision for the full amount. It used the adjudicator's decision to confirm late completion.
So, if I have figured all this out, we can still say that the parties have agreed when carrying out construction contracts that they will obey an adjudicator's decision. No set-off is allowed. But nothing stops a party from bringing an action in court, which uses the adjudicator's decision, on the same day as enforcement is sought to prove a breach has occurred. Of course, you have to come with an open-and-shut case. The Sheikh's company didn't have such a case. It had to pay up without deduction.
Tony Bingham is a barrister and arbitrator specialising in construction. You can write to him at 3 Paper Buildings, Temple, London EC4 7EY, or email him on info@tonybingham.co.uk.
Tony Bingham is a barrister and arbitrator at 3 Paper Buildings, Temple
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,246 |
# Hypergeometric distribution

Next we will look at one of the most common distributions in probability, the hypergeometric distribution, which we explain below.

This distribution arises when drawing a random sample of size n, without replacement and without regard to order, from a set of N objects. The draws are therefore dependent. Of the N objects, r have the feature that interests us, and the random variable X is the number of objects in the sample that have that feature.

To make this clearer: there are N \choose n equally likely ways to select n objects. To obtain x successes, the x objects must be selected from among the r that have the feature, which can be done in r \choose x ways, and the remaining n - x objects from the N - r objects that do not have it, which can be done in {N-r} \choose {n-x} ways.

## Hypergeometric distribution formulas

Using the classical probability formula and the multiplication rule, the probability density is obtained as follows:

P[X = x] = \cfrac{ {r \choose x}{N-r \choose n-x} }{ N \choose n }, \qquad \max[0, n-(N-r)] \le x \le \min(n, r)

Its most important characteristics are the expectation and the variance:

E[X] = n \left( \cfrac{r}{N} \right)

Var[X] = n \left( \cfrac{r}{N} \right) \left( \cfrac{N-r}{N} \right) \left( \cfrac{N-n}{N-1} \right)

The symbols in the formulas above mean the following:

• N is the batch population
• r is the number of defective units per batch
• n is the number of units being tested
• x is the value for which we calculate the probability

## Hypergeometric distribution example

A foundry ships blocks in batches of 20 units. No manufacturing process is perfect, so bad blocks are inevitable; however, identifying the defect requires destroying the unit. Three units are selected and tested before a lot is accepted. Suppose a given lot includes five defective units.

a) Express the density function.

For this exercise we have the following data:

• N = 20 units
• r = 5 defective units
• n = 3 units tested

With these data we can write the density formula:

P[X = x] = f(x) = \cfrac{ {r \choose x}{N-r \choose n-x} }{ N \choose n } = \cfrac{ {5 \choose x}{15 \choose 3-x} }{ 20 \choose 3 }, \qquad x = 0, 1, 2, 3

We now calculate each probability, i.e. the probability that none, one, two or three of the tested units are defective:

f(0) = \cfrac{ {5 \choose 0}{15 \choose 3} }{ 20 \choose 3 } = \cfrac{91}{228} \approx 0.399, so there is a 39.9% chance that zero units are defective.

f(1) = \cfrac{ {5 \choose 1}{15 \choose 2} }{ 20 \choose 3 } = \cfrac{35}{76} \approx 0.461, a 46.1% chance that one unit is defective.

f(2) = \cfrac{ {5 \choose 2}{15 \choose 1} }{ 20 \choose 3 } = \cfrac{5}{38} \approx 0.132, a 13.2% chance that two units are defective.

f(3) = \cfrac{ {5 \choose 3}{15 \choose 0} }{ 20 \choose 3 } = \cfrac{1}{114} \approx 0.009, a 0.9% chance that all three units are defective.

b) Find the expected number of defective units.

Applying the expectation formula:

E(X) = n \left( \cfrac{r}{N} \right) = 3 \left( \cfrac{5}{20} \right) = \cfrac{3}{4} = 0.75

So we expect 0.75 defective units in a sample of three.

c) Find the variance for this case.

Applying the variance formula:

Var(X) = n \left( \cfrac{r}{N} \right) \left( \cfrac{N-r}{N} \right) \left( \cfrac{N-n}{N-1} \right) = 3 \left( \cfrac{5}{20} \right) \left( \cfrac{15}{20} \right) \left( \cfrac{17}{19} \right) = \cfrac{153}{304} \approx 0.5033

That's all; we hope the hypergeometric distribution is now as clear as possible. Thank you for being with us. | null | null |
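The foundry example (N = 20, r = 5, n = 3) is easy to check numerically. A minimal sketch in Ruby (the method names `comb` and `hypergeom_pmf` are our own, not from the source):

```ruby
# Numerical check of the foundry example: N = 20 blocks per batch,
# r = 5 defective, n = 3 tested.

# Exact binomial coefficient using only integer arithmetic
# (each partial product acc * (n - k + i) is divisible by i).
def comb(n, k)
  return 0 if k.negative? || k > n
  (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }
end

# P[X = x] for a hypergeometric draw of `draws` items from `total`,
# of which `marked` carry the feature of interest.
def hypergeom_pmf(x, total:, marked:, draws:)
  (comb(marked, x) * comb(total - marked, draws - x)).fdiv(comb(total, draws))
end

probs = (0..3).map { |x| hypergeom_pmf(x, total: 20, marked: 5, draws: 3) }
mean  = (0..3).sum { |x| x * hypergeom_pmf(x, total: 20, marked: 5, draws: 3) }
# probs ≈ [0.399, 0.461, 0.132, 0.009]; mean ≈ 0.75, matching E[X] = n(r/N)
```

The four probabilities sum to 1, and the sample mean matches the closed-form expectation, which is a quick sanity check on the formulas above.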
The Six Wives of Henry VIII is the first solo album by Rick Wakeman, keyboardist of the English progressive rock group Yes.
Released in January 1973 by A&M Records, The Six Wives is an instrumental concept album consisting of six tracks (one for each of the "six wives of Henry VIII"). The album also features the other instrumentalists from the two main Yes line-ups of the early 1970s (Howe, Bruford, Squire and White). On this debut, Wakeman displays all the styles that make up his considerable repertoire, from classically tinged rock to a Bach-style fugue on pipe organ. The album remains, to this day, one of the best-known moments of Wakeman's prolific solo career, and it is also one of the most innovative and important records in all of progressive rock, being the first entirely instrumental album centered on keyboards. Excerpts from its six tracks were often performed at Yes concerts as well (for example, they appear as "Excerpts from the Six Wives of Henry VIII" on the group's most famous live album, Yessongs).
In 2009, The Six Wives of Henry VIII - Live at Hampton Court Palace was released, a live version of the album performed at Hampton Court Palace that same year.
Tracks
1. Catherine of Aragon (3:45)
2. Anne of Cleves (7:50)
3. Catherine Howard (6:36)
4. Jane Seymour (4:44)
5. Anne Boleyn 'The Day Thou Gavest Lord Hath Ended' (6:31)
6. Catherine Parr (7:00)
Musicians
Rick Wakeman: piano, organ, harpsichord, synthesizer, Mellotron
Bill Bruford: drums (1-5)
Ray Cooper: percussion
David Cousins: electric banjo (3)
Chas Cronk: bass (3)
Barry de Souza: drums (3)
Mike Egan: guitar (1-2-5-6)
Steve Howe: guitar (1)
Les Hurdle: bass (1-5)
Dave Lambert: guitar (3)
Laura Lee: backing vocals (5)
Sylvia McNeill: backing vocals (5)
Judy Powell: backing vocals (1)
Frank Ricotti: percussion (2-3-6)
Barry St.John: backing vocals (1)
Chris Squire: bass (1)
Liza Strike: backing vocals (1-5)
Alan White: drums (2-4-6)
Notes
See also
The Six Wives of Henry VIII - Live at Hampton Court Palace
External links
Concept album | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,791 |
The M48 Patton is a tank, the third of the Patton series, named after General George S. Patton, commander of the US Third Army during World War II and one of the American officers who advocated the use of tanks on the battlefield. The M48 Patton also served as an interim tank until it was replaced by the US Army's first "main battle tank", the M60 Patton. The M48 served with the US Army and Marines as a main battle tank during the Vietnam War. It was an armored vehicle widely used by US allies during the Cold War, especially by other NATO countries.
The M48 Patton was designed to replace the earlier M47 Patton and M4 Sherman. Although it bore some resemblance to the M47, the M48 was an entirely new design. Some M48A5 models served well into the 1980s in the United States. Elsewhere in the world, M48s remain in service with several armed forces. The M48 was the last American tank to mount the 90 mm gun. The M48A5 variant was modernized to carry the standard 105 mm gun of the M60.
The Turkish Army is currently the largest operator of the M48; its tanks have been modernized, although they are not in active service but in reserve, and are expected to be retired soon.
Development
In February 1951, the Army began design of the new tank, designated the 90 mm gun tank T-48 (the "T" prefix would later be replaced by the "X" prefix beginning with the M60 tank series).
By January 1952, Army officials were considering whether the lighter T42 medium tank better suited the doctrine preferred by the Ordnance Department, which called for lighter, more agile tanks.
A deeper modernization than the M46 and M47, the M48 featured a new hemispherical turret, a redesigned hull similar to that of the T43 heavy tank, and improved suspension. The hull machine gunner's position was eliminated, reducing the crew to four. In April 1953, the Army standardized the latest of the Patton series as the 90 mm Gun Tank M48 Patton.
In April 1952, Chrysler Corporation began production of the M48 at its Newark, Delaware plant. The tank was named after the late General George S. Patton at its public debut at the Chrysler plant in July. General Motors and Ford Motor Company produced the tank in Michigan. Also in July, the Army awarded American Locomotive Company a $200 million contract to produce the tank. In December, Chrysler received orders originally intended for American Locomotive after the Army ordered production cuts to its tank program. Under Secretary of Defense Charles Erwin Wilson's "single efficient producer" model, the Army was directed to reduce the number of contractors producing each tank model. General Motors underbid Chrysler, and in September 1953 Secretary of the Army Robert T. Stevens awarded GM's Fisher Body division a $200 million contract to become the sole producer of the M48. The decision drew skepticism from legislators. Senator Estes Kefauver noted that the move would leave GM as the sole producer of light and medium tanks once Chrysler ended M48 production in April 1954. The Department of Defense was called before the Senate Armed Services Committee in January 1954 to defend the single-producer decision. During the hearings, Under Secretary of the Army John Slezak said the move reduced costs and that multiple producers were not needed to meet the Army's shrinking requirements for new tanks.
Months later, Chrysler underbid GM in the next round of proposals. In September 1954, the Army awarded Chrysler an exclusive $160.6 million contract to restart production. In November 1955, the Army awarded Alco Products a $73 million contract to begin producing 600 M48A2s the following year. Alco chose to close its tank business when its contract ended in July 1957. In May 1957, the Army awarded Chrysler, the sole bidder, a $119 million contract to continue M48A2 production in Delaware and Muskegon, Michigan.
In 1960, the Government Accountability Office, investigating the performance of Army and Marine tanks, found the M48 and M48A1 to be "seriously defective vehicles". In November, a House Armed Services investigation largely corroborated the GAO report, which had been disputed by Secretary of the Army Wilber M. Brucker.
Nearly 12,000 M48s were built from 1952 to 1959. Early models, up to the M48A2C, were powered by a 12-cylinder gasoline engine together with a 1-cylinder auxiliary generator (nicknamed "Little Joe"). The gasoline-engined versions gave the tank a shorter operating range and were more prone to catching fire when hit. Although considered less reliable than the diesel versions, many saw combat use in several Arab-Israeli conflicts. The hydraulic fluid used in the recoil mechanisms and in the hydraulic systems for traversing the turret and aiming had a flash point below 212 °F (100 °C) and could produce a fireball in the crew compartment if the lines were ruptured. The fluid was not unique to the M48 and is no longer used in armored fighting vehicles, having been replaced by a fire-resistant hydraulic fluid. Beginning in 1959, most American M48s were upgraded to the M48A3 model, which featured a more reliable, longer-range diesel power plant. Gasoline-engined M48s, however, remained in use with the US Army until 1968, and until 1975 with many West German Army units.
M48A3
In February 1963, the US Army accepted the first of 600 M48 Patton tanks converted to the M48A3 standard, and in 1964 the US Marine Corps received 419 Patton tanks. The A3 model introduced the diesel engine, countering the earlier versions' tendency to catch fire. These Pattons were deployed for battle in Vietnam. Because all M48A3 tanks were conversions of earlier models, many features varied among individual examples of the type: M48A3 tanks could have three or five return rollers on each side and could have either the earlier or the later headlight type.
M48A5
In the mid-1970s, the vehicle was modified to carry the heavier 105 mm gun. The program's original designation was XM736. The designation was later changed to M48A3E1 and finally standardized as M48A5. As many M60A1 components as possible were used. Anniston Army Depot was awarded a contract to convert 501 M48A3 tanks to the M48A5 standard, and this was completed in December 1976. These early M48A5s were essentially M48A3 tanks with the 105 mm gun added. They retained the M1 cupola armed with a .50-caliber machine gun.
Based on Israeli experience upgrading M48-series tanks, additional changes were introduced from August 1976. These included replacing the M1 cupola with a low-profile Urdan-type cupola mounting an M60D machine gun for the tank commander's use. A second M60D machine gun was mounted on the turret roof for use by the loader. Internal ammunition stowage for the 105 mm main gun was also increased to 54 rounds. These tanks initially received the designation M48A5API; but after the early conversions were brought up to the later standard, the API was dropped and these tanks were known simply as M48A5.
In addition to converting M48A3 tanks, a further conversion process was developed to bring M48A1 tanks up to the M48A5 standard. By March 1978, 708 M48A5 tanks had been converted from the M48A1 model.
Work continued until December 1979, by which time 2,069 M48A5s had been converted.
The great majority of M48A5 tanks in service with US Army units were assigned to National Guard and Army Reserve units. A notable exception was the 2nd Infantry Division in the Republic of Korea, which replaced its M60A1 tanks with M48A5s arriving in June and July 1978. On the 2nd Infantry Division's M48A5 tanks, the commander's M60D was replaced by a .50-caliber M2 machine gun.
By the mid-1990s, the M48 had been phased out of US service. Many other countries, however, continued to use these M48 models.
Combat history
Vietnam
Before Vietnam, it was operational with the Berlin Brigade and was seen in the Checkpoint Charlie incident, where several M48 Pattons and T-55s faced each other at less than 200 meters, at maximum tension in case the enemy made any sudden move. The M48 saw extensive action with the US Army during the Vietnam War. More than 600 Pattons were deployed with US forces during that war. The first M48s landed with the US 1st and 3rd Marine tank battalions in 1965, with the 5th Marine Tank Battalion later serving as a backup/reinforcement unit. The remaining Pattons deployed to South Vietnam were in three US Army battalions, namely the 1-77 Armor near the DMZ (67 M48A2Cs; 23 tanks supplied by the US Army Training Center at Fort Knox, KY, were used by the 77th Armor from August 1968 to January 1969, and were later replaced by M48A3s), the 1-69 Armor in the Central Highlands of central South Vietnam, and the 2-34 Armor stationed near the Mekong Delta. Each battalion consisted of approximately 57 tanks. M48s were also used by armored cavalry squadrons in Vietnam until they were replaced by the M551 Sheridan Armored Reconnaissance Airborne Assault Vehicles (ARAAVs) in the divisional cavalry squadrons. M48A3 tanks remained in service with the 11th Armored Cavalry Regiment until the unit withdrew from the conflict. The M67A1 flame tank (nicknamed the Zippo) was an M48 variant used in Vietnam. From 1965 to 1968, 120 US M48A3 tanks were written off.
The M48 Patton has the distinction of playing a unique role in an event destined to radically alter the conduct of armored warfare. When US forces began redeployment operations, many of the M48A3 Pattons were handed over to Army of the Republic of Vietnam (ARVN) forces, notably forming the battalion-sized ARVN 20th Tank Regiment, which supplemented its M41 Walker Bulldog units. During the North Vietnamese Army (NVA) Easter Offensive of 1972, tank clashes between NVA T-54/PT-76 units and ARVN M48/M41 units became common. But on 23 April 1972, an NVA infantry group attacked the tankers of the 20th Tank Regiment with the new 9M14M Malyutka anti-tank missile (NATO designation: Sagger). In this battle, one M48A3 Patton tank and one M113 Armored Cavalry Assault Vehicle (ACAV) were destroyed, becoming the first losses to the Sagger missile; losses that would be echoed on an even larger scale a year later during the 1973 Yom Kippur War in the Middle East. By 2 May, the 20th Tank Regiment had lost all of its tanks to enemy fire. During the first month of the First Battle of Quang Tri, a total of 110 ARVN M48 Pattons were lost.
The M48 performed admirably in Vietnam in the infantry-support role. There were, however, few real tank-versus-tank battles. One took place between the US 1-69 Armor and PT-76 light amphibious tanks of the NVA 202nd Armored Regiment at Ben Het Camp in March 1969. The M48 provided adequate crew protection against small arms, mines and rocket-propelled grenades. South Vietnamese M48s and M41s fought in the 1975 Spring Offensive. In several engagements the ARVN successfully defeated NVA T-34 and T-55 tanks and even slowed the Northern offensive. However, because of the fuel and ammunition shortages facing the South Vietnamese military after the US Congress banned further funding and supply of military equipment and logistics to the country, the American-built tanks soon ran out of ammunition and fuel and were quickly abandoned to the NVA, which put them into service after the war ended in May 1975. In total, 250 of the ARVN's M48A3s were destroyed or captured, and the captured examples (at least 30) were used only briefly before being retired and turned into war memorial displays across Vietnam.
The M48s, together with the Australian armoured regiment's 20-pounder (84 mm) Centurions, were the only vehicles used by the anti-communist side in the Vietnam War that could reasonably protect their crews from land mines. They were often used for mine-clearing operations along Highway 19 in the Central Highlands, a two-lane paved road between An Khe and Pleiku. Daily convoys moved in both directions along Highway 19. These convoys were halted each morning while the road was swept for mines, with soldiers walking slowly along the dirt shoulders of the road carrying handheld mine detectors. During this slow process the convoys became a dangerous target for the enemy, especially its guerrillas and supporters. As a result, a faster method was improvised, the "Thunder Run", in which an M48 lined up on each side of the road, with one track on the dirt shoulder and the other on the asphalt, and then, with all guns firing, raced to a designated position miles away. If the M48s made it without hitting a mine, the road was clear and the convoys could proceed. In most cases, an M48 that struck a land mine in these operations lost only one or two road wheels in the blast; there was rarely hull damage that could be considered "totaling" (completely destroying) the tank.
Indo-Pakistani Wars
M47s and M48s were used in tank warfare by the Pakistani Army against Soviet T-55s, British Centurions and US M4 Shermans in both the 1965 Indo-Pakistani War and the following war in 1971, with at least some good results. During Operation Grand Slam, Pakistani tank forces, comprising mainly M47 and M48 Patton tanks, pushed through Indian defensive lines quickly and rapidly defeated the Indian Army's armored counterattacks. The Pakistanis used roughly a division's worth of tanks in the operation, although not all were Pattons, with upgraded Shermans included as well. By contrast, Pakistan's Patton tanks failed to live up to their high expectations at the Battle of Asal Uttar in September 1965, where nearly 97 Pakistani tanks, most of them Pattons (M47s and M48s), were lost. Later, the Patton was the principal Pakistani tank at the Battle of Chawinda, and its performance in that battle was considered satisfactory against Indian armor.
The Patton was used by Pakistan again in the 1971 Indo-Pakistani War. A counterattack led by the 13th Lancers and 31st Cavalry units was defeated by the Indian 54th Division at the Battle of Barapind in December 1971. India later set up a temporary war memorial called "Patton Nagar" ("Patton City") in the Khemkaran district of Punjab, where the captured Pakistani Patton tanks were displayed for a short time before being scrapped or sent across India for use as war monuments and military memorials.
Analyzing its overall performance in the wars with India, the Pakistani Army maintained that both sides held the Patton in fairly high regard and that combat tactics were to blame for its rout and the ensuing debacle at Asal Uttar. However, a postwar US study of the tank battles in South Asia concluded that the Patton's armor could in fact be penetrated by the Centurion's 20-pounder (84 mm) tank gun (later replaced by the even more successful 105 mm L7 in the Mk. 7 version, which India also possessed), as well as by the 75 mm gun of the AMX-13 light tank.
Middle East
M48s were also used with mixed results during the 1967 Six-Day War. On the Sinai front, Israeli M48s armed with the then-advanced 105 mm L7 gun were used with considerable success against Egyptian IS-3s, T-54/55s, T-34/85s and SU-100s supplied by the Soviet Union during the 1950s and 1960s, as at the Second Battle of Abu-Ageila. On the West Bank front, however, Jordan's M48s (Jordan, like Israel, operated the M48 Patton in the same period) were often defeated by 105 mm-armed Centurions and by Israel's upgraded World War II-era M4 Shermans (the M-51s, armed with French-built 105 mm tank guns, not to be confused with the British 105 mm L7 tank gun). In purely technical terms, the Pattons were far superior to the much older Shermans, with shots from more than 1,000 meters simply bouncing off the M48's armor. However, the 105 mm main gun of Israel's Shermans fired a HEAT round designed to defeat the Soviet T-62 tank, which was the USSR's answer to the M48's successor in US service, the M60 Patton. The overall failure of the Jordanian Pattons in the West Bank could also be attributed to excellent Israeli air superiority. The Israeli army captured about 100 Jordanian M48 and M48A1 tanks and put them into service with its own units after the war, as it did with the Jordanian M113 APCs taken during the war.
Israel used 445 M48 tanks in 1973 during the Yom Kippur War. From 15 to 18 October, M48 tanks took part in the largest tank battle of the war: the Battle of the Chinese Farm. The battle involved the Egyptian 21st Armored Division (136 tanks) and the Israeli 143rd and 162nd Armored Divisions (over 300 tanks), and ended in an Israeli victory. Both sides lost large numbers of tanks in this battle. On the night of 15/16 October, the Israeli 14th Brigade of the 143rd Division lost 70 of its 97 tanks. Between 09:00 on the 16th and 14:00 on the 17th, the Israeli 143rd and 162nd Divisions lost 96 tanks. By 18 October, the Egyptian 21st Armored Division had no more than 40 tanks remaining of the 136 available at the start of the battle.
Apart from the Israel Defense Forces (IDF), the M48 was also operated by the Lebanese Army, the Christian Lebanese Forces militia, the Druze People's Liberation Army militia of the Progressive Socialist Party, and the Israeli-backed South Lebanon Army (SLA) during the war. On 10 June 1982, eight Israeli M48A3s, two M60A1s and at least three M113 APCs were lost in a successful ambush by Syrian T-55 tanks and BMP-1 infantry fighting vehicles (IFVs) during the 1982 battle of Sultan Yacoub.
The Lebanese Army still operates around 100 M48s. In 2007, during the conflict in northern Lebanon, Lebanese Army M48s shelled militant outposts located in a refugee camp.
Together with the M47, M48 tanks were used by the Turkish Armed Forces during the 1974 Turkish invasion of Cyprus. The Turkish Armed Forces in northern Cyprus continue to use M48 tanks today.
When the Kurdish-Turkish conflict began, the Turkish Armed Forces had several M48s. These were used throughout the 1980s and 1990s as static artillery to defend military base perimeters from enemy attacks.
Iranian M48 tanks were used extensively in the Iran-Iraq War from 1980 to 1988, where they faced Iraqi T-55s, T-62s and T-72s, alongside M60 Pattons, in fierce, hard fighting with their Iraqi enemies, with mixed results. M48s of the 37th Armored Brigade were used in the Battle of Abadan. Approximately 150 M48s were lost in that tank battle alone.
Africa
In 1973, Morocco received its first M48A3s. By the late 1970s, new deliveries of M48A5s had taken place, and the upgrade to M48A5 was carried out locally with the help of US consultants. In 1987, a final shipment of 100 M48A5 tanks from the Wisconsin National Guard was delivered to the Moroccan Army. There are unconfirmed reports of Israeli M48A5 deliveries during the 1980s. The tanks were used in the Western Sahara desert against Polisario guerrillas with great success. The M48's superior fire-control system and APFSDS rounds proved fatal to the Polisario's T-55s.
Pakistan used M48 Pattons while reinforcing US troops during the Battle of Mogadishu in 1993.
Operators
Current
850 M48A5K, replaced by the K1A1
390 M48A5 MOLF, retired in 2008.
80, used as the basis for the design and construction of a locally built tank, the Zulfiqar.
561, converted into the Magach 5
200.
104 M48A1 and M48A5.
345 M48A5.
Locally manufactured with improvements to the gun and fire-control system: 450 CM-11, 100 CM-12
150 M48A5.
28.
(1,200) 525 M48, 250 M48, 1,350 M48 and 750 M48 in the process of being retired, as of 2012.
20 M48.
See also
Notes
Bibliography
Steven J. Zaloga, Tony Bryan, Jim Laurier. "M26–M46 Pershing Tank 1943–1953". Osprey Publishing (New Vanguard 35), 2000. ISBN 1-84176-202-4.
Keith W. Nolan. "Into Laos: Operation Lam Son 719 and Dewey Canyon II". Presidio Press, 1986. Account of the US Army's final offensive of the Vietnam War.
Abraham Rabinovich. "The Battle for Jerusalem June 5–7, 1967". Sefer Ve Sefer Publishing, Jerusalem, 2004. ISBN 965-7287-07-3.
Starry, Donn A., General. "Mounted Combat in Vietnam"; Vietnam Studies. Department of the Army, 1989.
Hunnicutt, R. P. "Patton: A History of the American Main Battle Tank". Presidio Press, 1984. ISBN 0-89141-230-1.
Dunstan, Simon. "Vietnam Tracks: Armor in Battle". Osprey Publishing, 1982. ISBN 0-89141-171-2.
Bowden, Mark. "Black Hawk Down: A Story of Modern War". Signet, 2001. ISBN 0-451-20393-3.
External links
AFV Database: M48 Patton (in English)
GlobalSecurity.org: M48 Patton (in English)
Patton-Mania (in English)
Tanks
United States military vehicles
Tanks of the United States
Medium tanks | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 856 |
This ring was displayed at the 2018 Oscars GBK celebrity gift lounge! The last photo shows it being worn by the amazing Taura Stinson!
This is an absolutely huge and stunning AAA reverse watermelon tourmaline set in a beautiful and unique ring with a wide, thick 3/8-inch band hand-forged in solid sterling silver. The band was then distressed, oxidized and polished, and the stone is set with a modified tension setting in heavy rose gold-filled wire.
The ring is roughly a size 7.5 with a flattened top due to the way it is set, and as with all wide-band rings, you may wish to size up a bit. | {
"redpajama_set_name": "RedPajamaC4"
} | 8,045 |
Chumby preps IPTV set-top that uses Android devices as remote
Sep 9, 2011 — by Eric Brown — from the LinuxDevices Archive
Chumby is readying a Linux-based IPTV set-top box that can be remotely controlled by a Wi-Fi-connected Android device. Soon to be offered as an open development platform, the NeTV is equipped with an 800MHz Marvell processor, and it will include both a Webkit browser that can overlay content on video and a personalized news crawler.
Chumby has yet to formally announce the NeTV, but is expected to release a developer's version later this month, with the potential for a consumer release in the future. This is according to Engadget, which was tipped to the NeTV development pages that have appeared on Chumby's site (see link at end).
NeTV
Like other IPTV devices, the NeTV can display web pages and other digital content on a TV. The key innovation here is the ability to overlay content over standard video sources, including cable or satellite TV, or Blu-ray movies.
Overlays include a personalized news ticker that apparently will draw content from Chumby's online service of personalized push content (previously offered via more than 1,000 non-interactive "widget" applications). Chumby "widgets" are found on a Chumby Android app, as well as on a variety of Linux-based Chumby devices. These include the company's own Chumby One digital picture frame (DPF) style device (pictured), and the somewhat similar Sony Bravia-enhanced Sony Dash.
Another selling point here is the ability to control the content with an Android phone or tablet via a Wi-Fi connection. For example, users can click on a widget to bring up a web page via the built-in Webkit browser, which can play videos at 480p, 720p, or 1080p (24fps), according to Chumby. Users can also upload images from their Android phones for display on TV.
The NeTV box will also ship with an IR remote control as an alternative to an Android phone. iOS support is promised in the future.
The NeTV widgets will include Twitter, Facebook, sports scores, and news updates, says Chumby. Android users, meanwhile, can also view SMS and email updates on the news banner.
There does not appear to be much interactivity available on content displayed on the TV, aside from web browsing, displaying images, or playing movies. The NeTV UI is pitched as a positive, however: "Event display UI is non-intrusive and passive, so it can be left on all the time," according to the Chumby NeTV website.
NeTV, internal view
Like earlier Chumby devices, this one is built on a Marvell processor. The 800MHz CPU is likely one of Marvell's Armada processors, a close cousin to the Sheeva Architecture processors used in Marvell Plug Computer designs such as the Pogoplug.
In addition, the NeTV device incorporates a Xilinx Spartan 6 field programmable gate array (FPGA) chip to handle the overlay duty. Chumby offers a technical page on the Spartan 6 and promises more information. It does not appear, however, that most app developers need to get their hands dirty with FPGA programming.
The NeTV development system is powered by a 5V supply with a micro-USB connector, and can usually be powered via a USB connection to a computer. (Problems can occur in this configuration, however, according to Chumby.)
According to notes provided for developers, when it is booted the NeTV configures its micro-USB port to act as "an Ethernet gadget." It also includes an integral DHCP server, the company adds.
From a developer point of view, the NeTV has numerous applications beyond a consumer IPTV box, says Chumby. The platform has "very strong potential applications in education, digital signage, smart energy, and low-cost computing," says the NeTV developer page. Schematics and other details have been posted for developers in separate sections for firmware and application development.
The firmware will be available as a full professional software development kit (SDK), as well as a hobbyist "local compilation" version. Application development is divided into local UI development, web services development, and event architecture (crawler development), says the company.
A demo of the Chumby NeTV
Source: Chumby via This Is My Next
The NeTV developers version will ship later this month, according to Engadget. More information may be found at Chumby's NeTV Wiki site.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,617 |
The 2016–17 Liga ASOBAL is the 27th edition of the Liga ASOBAL, the top division of Spanish handball, contested by 16 participating teams.
Teams
External links
Official website
Liga ASOBAL
2016 in handball
2017 in handball
Sport in Spain in 2016
Sport in Spain in 2017
The Career Development Center (CDC) offers a variety of services to undergraduates, graduate students and alumni to assist them in the career decision-making and job search process. Use of the CDC's services is voluntary, with students/alumni selecting the services that are best suited to their needs. However, faculty and staff have made attendance at CDC workshops and programs a required, integral part of coursework (arrangements can be made with the CDC).
Class presentations - The professional staff of the CDC accepts invitations from faculty and staff members to present workshops or to speak to their classes on career, employment and graduate school related topics. To request a class presentation, please complete the Workshop Request Form.
Employment - The CDC helps students and alumni find employment by teaching them job search skills, communicating job openings, hosting on-campus employer information sessions/tables/interviews, and by making candidate resumes available to employing organizations.
Graduate School - The CDC staff support the faculty with helping juniors and seniors apply to graduate and professional schools. Information on scholarships, fellowships and writing personal statements is also maintained in the center. A graduate school fair and a law school fair are held during the Fall semester. Faculty members who want to get students started on the graduate school selection process may refer them to the CDC.
Instructional Materials/Handouts - The center maintains a supply of handouts (also found online) written by our staff for use as quick reference tool by students who are career planning or job searching. The handouts consist of advice and strategies on job searching, applying to graduate school, and descriptions of various CDC services. Faculty and staff members may request multiple copies of handouts for use in advisement, curricular support, and recruitment activities.
HELPFUL LINKS - The CDC will post helpful faculty and/or staff links here.
Dufresne is an Italian post-hardcore band hailing from Vicenza.
History
Dufresne formed in January 2004 with Nicola "Dominik" Cerantola on lead vocals, Davide Zenorini on drums, Matteo "Ciube" Tabacco on bass and vocals, Luca Dal Lago on guitar and Alessandro Costa on keyboards. The band took its name from Andy Dufresne, the main character of Stephen King's novella and its later film adaptation by Frank Darabont, The Shawshank Redemption.
In 2005 the band recorded a twelve-song demo album, completely sung in Italian. Later the band re-recorded six tracks from the demo in English.
In 2006 Dufresne signed with V2 Records and released their debut album Atlantic. To promote the album the band participated in the Taste of Chaos Tour, supporting Underoath in Italy, and then embarked on a European tour that included France, Germany, Belgium, the Netherlands and the United Kingdom.
In October, 2007 the band went to Richmond, Virginia to record their second album, titled Lovers, with producer Andreas Magnusson. Lovers was released on April 11, 2008 via V2 Records/Universal Music Group. After the release of Lovers, Dufresne toured Italy supporting headliners Linea 77.
In 2009 Dufresne signed with Wynona Records. On May 14, 2010 the band released their third studio album, AM:PM.
In July 2013 it was announced on their Facebook page that Dufresne went on hiatus.
Members
Nicola "Dominik" Cerantola – lead vocals
Matteo "Ciube" Tabacco – bass and vocals
Luca Dal Lago – guitar
Davide Zenorini – drums
Alessandro Costa – keyboards and synthesizer
Discography
Atlantic (V2 Records, 2006)
Lovers (Universal Music/V2 Records, 2008)
AM:PM (Wynona Records, 2010)
References
External links
Official website
Official Facebook page
Italian musical groups
Post-hardcore groups
Alternative metal musical groups
Screamo musical groups
Italian alternative rock groups
Italian hardcore punk groups
Musical groups established in 2004
Italian metalcore musical groups
module MuckEngine # :nodoc:
  module Models # :nodoc:
    module Matchers

      # Ensures that the model can scope by 'source'
      # requires that the class have a factory and that a user factory exist
      # Tests:
      #   scope :source, lambda { |item_object| {:conditions => ["items.source_id = ? AND items.source_type = ?", item_object.id, item_object.class.to_s]} }
      # Examples:
      #   it { should scope_source }
      def scope_source
        CreatedByMatcher.new(:source, :source)
      end

      # Ensures that the model can scope by created_by
      # requires that the class have a factory and that a user factory exist
      # Tests:
      #   scope :created_by, lambda { |user| where(['user_id = ?', user.id]) }
      # Examples:
      #   it { should scope_created_by }
      def scope_created_by
        CreatedByMatcher.new(:created_by, :user)
      end

      # Ensures that the model can scope by creator
      # requires that the class have a factory and that a user factory exist
      # Tests:
      #   scope :by_creator, lambda { |creator| where(['creator_id = ?', creator.id]) }
      # Examples:
      #   it { should scope_by_creator }
      def scope_by_creator
        CreatedByMatcher.new(:by_creator, :creator)
      end

      class CreatedByMatcher < MuckMatcherBase # :nodoc:
        def initialize(scope, field)
          @scope = scope
          @field = field
        end

        def matches?(subject)
          @subject = subject
          @subject.class.delete_all
          @user  = Factory(:user)
          @user1 = Factory(:user)
          @item  = Factory(factory_name, @field => @user)
          @item1 = Factory(factory_name, @field => @user1)
          items = @subject.class.send(@scope, @user)
          items.include?(@item) && !items.include?(@item1)
        end

        def failure_message
          "Expected #{factory_name} to have scope #{@scope} and to find only records associated with the given #{@field}"
        end

        def description
          "scope #{@scope}"
        end
      end
    end
  end
end
@echo off
pyinstaller --noconfirm hg-incpush.spec
Q: C++ convert boost multiprecision int to 64 bit length hex
Let's imagine that we have code:
boost::multiprecision::mpz_int one(1);
and I would like to convert that value to 64 bit (?) hex value so the result could be:
0000000000000000000000000000000000000000000000000000000000000001
I'm sure that there is a solution for this, but I'm not familiar with Boost
Example:
I have value:
boost::multiprecision::mpz_int value(8612844778121073626993440829679478604092771119379437256704)
and I want to create a 64-character hex string from value,
I tried
printf("%064X\n", value);
but it doesn't work
A: Even with the extra example it is still unclear what you want to do:
a) Do you want to write the binary representation of the integer which you can think of as a string of 64 characters 0 or 1 as your first example suggests?
b) Or do you want the hex representation which would be a 16 character string?
What do you want to do if your integer does not fit into 64 bit?
I assume you want to print the hex representation of the integer filled with zeros, i.e. (b). Then the solution is as easy as (you claim to use C++):
std::cout << std::setfill('0') << std::setw( 16 ) << std::hex << value;
However this solution does not cover the unspecified behaviour of overflowing the 64 bit and it does not cover negative numbers. For this you need to be more precise...
Update:
According to the comment the output should be a 64 character string which could represent a 256 bit integer. This can be accomplished with:
std::cout << std::setfill('0') << std::setw( 64 ) << std::hex << value;
A: I don't have boost available atm but you could write a small function that formats the type for you.
If boost::multiprecision::mpz_int supports bitshift and the & operator this might be a solution:
#include <algorithm>    // std::reverse
#include <string>
#include <type_traits>  // std::make_unsigned_t

template<typename T>
std::string toHex64(const T& value, size_t padding = 1)
{
using UT = typename std::make_unsigned_t<T>;
std::string hex;
UT uvalue = static_cast<UT>(value);
bool bPositive = std::is_same<T, UT>::value || value == uvalue;
while (uvalue > UT(0))
{
char current = uvalue & 0xF; uvalue >>= 4;
hex += (current < 0xA) ? (current + '0') : ((current-10) + 'A');
}
if (hex.size() < padding)
{
if (bPositive)
{
hex += std::string(padding - hex.size(), '0');
}
else
{
hex += std::string(padding - hex.size(), 'F');
}
}
std::reverse(hex.begin(), hex.end());
return hex;
}
Ronald J. Bacigal is an American legal scholar and professor of law at the University of Richmond School of Law. He is "nationally recognized as one of the leading scholars of Fourth Amendment Law."
Bacigal graduated from Concord University and Washington and Lee University School of Law. In addition, he spent time at The Hague as a Fulbright Scholar. Professor Bacigal has taught at Richmond since 1971 and has been a professor since 1973. He is the reporter for criminal law decisions of the Court of Appeals of Virginia.
He has earned a number of awards, including the 2008 Harry L. Carrico Professionalism Award (presented by the Virginia State Bar), the Outstanding Faculty Award from the Virginia State Council of Higher Education in 1990, and is a two-time recipient of the University of Richmond Distinguished Educator Award. Bacigal has authored a number of texts used in both educational and legal settings, including Criminal Procedure: Cases, Problems, Exercises (West Publishing Co. 3rd ed. 2007) (2nd ed. 2004) (1st ed. 2001) (with four other authors and annual supplements) and Criminal Law and Procedure: An Introduction (West Publishing Co. 2nd ed. 2001) (1st ed. 1996), May It Please The Court: A Biography of Judge Robert R. Merhige, Jr. (University Press of America 1992), The Limits of Litigation: The Dalkon Shield Controversy (Carolina Academic Press 1990) and many books concerning Virginia law and procedure. He has also published numerous papers.
Notes
Living people
Washington and Lee University School of Law alumni
University of Richmond faculty
Concord University alumni
American legal scholars
Year of birth missing (living people)
\section{Introduction}
\noindent Large-scale knowledge graphs (KG) such as FreeBase~\cite{bollacker2008freebase}, YAGO~\cite{suchanek2007yago} and WordNet~\cite{miller1995wordnet} provide effective basis for many important AI tasks such as semantic search, recommendation~\cite{zhang2016collaborative} and question answering~\cite{cui2017kbqa}. A KG is typically a multi-relational graph containing entities as nodes and relations as edges. Each edge is represented as a triplet (\textit{head entity}, relation, \textit{tail entity}) ($(h, r, t)$ for short), indicating the relation between two entities, e.g., (\textit{Steve Jobs}, founded, \textit{Apple Inc.}). Despite their effectiveness, knowledge graphs are still far from being complete. This problem motivates the task of \textit{knowledge graph completion}, which is targeted at assessing the plausibility of triples not present in a knowledge graph.
Much research work has been devoted to knowledge graph completion. A common approach is called knowledge graph embedding which represents entities and relations in triples as real-valued vectors and assess triples' plausibility with these vectors~\cite{wang2017knowledge}. However, most knowledge graph embedding models only use structure information in observed triple facts, which suffer from the sparseness of knowledge graphs. Some recent studies incorporate textual information to enrich knowledge representation~\cite{socher2013reasoning,xie2016representation,xiao2017ssp}, but they learn unique text embedding for the same entity/relation in different triples, which ignore contextual information. For instance, different words in the description of \textit{Steve Jobs} should have distinct importance weights connected to two relations ``founded" and ``isCitizenOf", the relation ``wroteMusicFor" can have two different meanings ``writes lyrics" and ``composes musical compositions" given different entities. On the other hand, syntactic and semantic information in large-scale text data is not fully utilized, as they only employ entity descriptions, relation mentions or word co-occurrence with entities~\cite{wang2016text,xu2017knowledge,an2018accurate}.
Recently, pre-trained language models such as ELMo~\cite{peters2018deep}, GPT~\cite{radford2018improving}, BERT~\cite{devlin2019bert} and XLNet~\cite{yang2019xlnet} have shown great success in natural language processing (NLP), these models can learn contextualized word embeddings with large amount of free text data and achieve state-of-the-art performance in many language understanding tasks. Among them, BERT is the most prominent one by pre-training the bidirectional Transformer encoder through masked language modeling and next sentence prediction. It can capture rich linguistic knowledge in pre-trained model weights.
In this study, we propose a novel method for knowledge graph completion using pre-trained language models. Specifically, we first treat entities, relations and triples as textual sequences and turn knowledge graph completion into a sequence classification problem. We then fine-tune BERT model on these sequences for predicting the plausibility of a triple or a relation. The method can achieve strong performance in several KG completion tasks. Our source code is available at \url{https://github.com/yao8839836/kg-bert}. Our contributions are summarized as follows:
\begin{itemize}
\item We propose a new language modeling method for knowledge graph completion. To the best of our knowledge, this is the first study to model triples' plausibility with a pre-trained contextual language model.
\item Results on several benchmark datasets show that our method can achieve state-of-the-art results in triple classification, relation prediction and link prediction tasks.
\end{itemize}
\section{Related Work}
\subsection{Knowledge Graph Embedding}
A literature survey of knowledge graph embedding methods has been conducted by~\cite{wang2017knowledge}. These methods can be classified into translational distance models and semantic matching models based on different scoring functions for a triple $(h,r,t)$. Translational distance models use distance-based scoring functions. They assess the plausibility of a triple $(h,r,t)$ by the distance between the two entity vectors $\mathbf{h}$ and $\mathbf{t}$, typically after a translation performed by the relation vector $\mathbf{r}$. The representative models are TransE~\cite{bordes2013translating} and its extensions including TransH~\cite{wang2014knowledge}. For TransE, the scoring function is defined as the negative translational distance $f(h,r,t) = - || \mathbf{h} + \mathbf{r} - \mathbf{t}||$. Semantic matching models employ similarity-based scoring functions. The representative models are RESCAL~\cite{nickel2011three}, DistMult~\cite{yang2015embedding} and their extensions. For DistMult, the scoring function is defined as a bilinear function $f(h,r,t) = \langle\mathbf{h}, \mathbf{r}, \mathbf{t}\rangle$. Recently, convolutional neural networks also show promising results for knowledge graph completion~\cite{dettmers2018convolutional,SWJ318,schlichtkrull2018modeling}.
The above methods conduct knowledge graph completion using only structural information observed in triples, while different kinds of external information like entity types, logical rules and textual descriptions can be introduced to improve the performance~\cite{wang2017knowledge}. For textual descriptions, \cite{socher2013reasoning} firstly represented entities by averaging the word embeddings contained in their names, where the word embeddings are learned from an external corpus. \cite{wang2014knowledgeb} proposed to jointly embed entities and words into the same vector space by aligning Wikipedia anchors and entity names. \cite{xie2016representation} use convolutional neural networks (CNN) to encode word sequences in entity descriptions. \cite{xiao2017ssp} proposed semantic space projection (SSP) which jointly learns topics and KG embeddings by characterizing the strong correlations between fact triples and textual descriptions. Despite their success, these models learn the same textual representations of entities and relations while words in entity/relation descriptions can have different meanings or importance weights in different triples.
To address the above problems, \cite{wang2016text} presented a text-enhanced KG embedding model TEKE which can assign different embeddings to a relation in different triples. TEKE utilizes co-occurrences of entities and words in an entity-annotated text corpus. \cite{xu2017knowledge} used an LSTM encoder with attention mechanism to construct contextual text representations given different relations. \cite{an2018accurate} proposed an accurate text-enhanced KG embedding method by exploiting triple specific relation mentions and a mutual attention mechanism between relation mention and entity description. Although these methods can handle the semantic variety of entities and relations in distinct triples, they could not make full use of syntactic and semantic information in large scale free text data, as only entity descriptions, relation mentions and word co-occurrence with entities are utilized. Compared with these methods, our method can learn context-aware text embeddings with rich language information via pre-trained language models.
\subsection{Language Model Pre-training}
Pre-trained language representation models can be divided into two categories: feature-based and fine tuning approaches. Traditional word embedding methods such as Word2Vec~\cite{mikolov2013distributed} and Glove~\cite{pennington2014glove} aimed at adopting feature-based approaches to learn context-independent words vectors. ELMo~\cite{peters2018deep} generalized traditional word embeddings to context-aware word embeddings, where word polysemy can be properly handled. Different from feature-based approaches, fine tuning approaches like GPT~\cite{radford2018improving} and BERT~\cite{devlin2019bert} used the pre-trained model architecture and parameters as a starting point for specific NLP tasks. The pre-trained models capture rich semantic patterns from free text. Recently, pre-trained language models have also been explored in the context of KG. \cite{wang2018dolores} learned contextual embeddings on entity-relation chains (sentences) generated from random walks in KG, then used the embeddings as initialization of KG embeddings models like TransE. \cite{zhang-etal-2019-ernie} incorporated informative entities in KG to enhance BERT language representation. \cite{bosselut-etal-2019-comet} used GPT to generate tail phrase tokens given head phrases and relation types in a common sense knowledge base which does not cleanly fit into a schema comparing two entities with a known relation. The method focuses on generating new entities and relations. Unlike these studies, we use names or descriptions of entities and relations as input and fine-tune BERT to compute plausibility scores of triples.
\section{Method}
\subsection{Bidirectional
Encoder Representations from Transformers (BERT)}
BERT~\cite{devlin2019bert} is a state-of-the-art pre-trained contextual language representation model built on a multi-layer bidirectional Transformer encoder~\cite{vaswani2017attention}. The Transformer encoder is based on self-attention mechanism. There are two steps in BERT framework: \textit{pre-training} and \textit{fine-tuning}. During pre-training, BERT is trained on large-scale unlabeled general domain corpus (3,300M words from BooksCorpus and English Wikipedia) over two self-supervised tasks: masked language modeling and next sentence prediction. In masked language modeling, BERT predicts randomly masked input tokens. In next sentence prediction, BERT predicts whether two input sentences are consecutive. For fine-tuning, BERT is initialized with the pre-trained parameter weights, and all of the parameters are fine-tuned using labeled data from downstream tasks such as sentence pair classification, question answering and sequence labeling.
\subsection{Knowledge Graph BERT (KG-BERT)}
\begin{figure*}[t]
\centering
\includegraphics[width = 0.78 \textwidth]{kg_bert_fig.pdf}
\caption{Illustrations of fine-tuning KG-BERT for predicting the plausibility of a triple.}
\label{fig:framework}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width = 0.40 \textwidth]{kg_bert_rel.pdf}
\caption{Illustrations of fine-tuning KG-BERT for predicting the relation between two entities.}
\label{fig:framework_rel}
\end{figure}
To take full advantage of contextual representation with rich language patterns, we fine-tune pre-trained BERT for knowledge graph completion. We represent entities and relations as their names or descriptions, then take the name/description word sequences as the input sentences of the BERT model for fine-tuning. As in the original BERT, a ``sentence'' can be an arbitrary span of contiguous text or word sequence, rather than an actual linguistic sentence. To model the plausibility of a triple, we pack the sentences of $(h,r,t)$ as a single sequence. A ``sequence'' means the input token sequence to BERT, which may be two entity name/description sentences or the three sentences of $(h,r,t)$ packed together.
The architecture of the KG-BERT for modeling triples is shown in Figure 1. We name this KG-BERT version KG-BERT(a).
The first token of every input sequence is always a special classification token [CLS]. The head entity is represented as a sentence containing tokens Tok$_1^{h}$, ..., Tok$_a^{h}$, e.g., ``\textit{Steven Paul Jobs was an American business magnate, entrepreneur and investor.}" or ``\textit{Steve Jobs}", the relation is represented as a sentence containing tokens Tok$_1^{r}$, ..., Tok$_b^{r}$, e.g., ``founded", the tail entity is represented as a sentence containing tokens Tok$_1^{t}$, ..., Tok$_c^{t}$, e.g., ``\textit{Apple Inc. is an American multinational technology company headquartered in Cupertino, California.}" or ``\textit{Apple Inc.}". The sentences of entities and relations are separated by a special token [SEP]. For a given token, its input representation is constructed by summing the corresponding token, segment and position embeddings. Different elements separated by [SEP] have different segment embeddings, the tokens in sentences of head and tail entity share the same segment embedding $e_{A}$, while the tokens in relation sentence have a different segment embedding $e_{B}$. Different tokens in the same position $i \in \{1,2,3, \ldots, 512\}$ have a same position embedding. Each input token $i$ has a input representation $E_i$. The token representations are fed into the BERT model architecture which is a multi-layer bidirectional Transformer encoder based on the original implementation described in~\cite{vaswani2017attention}. The final hidden vector of the special [CLS] token and $i$-th input token are denoted as $C \in \mathbb{R}^H$ and $T_i \in \mathbb{R}^H$, where $H$ is the hidden state size in pre-trained BERT. The final hidden state $C$ corresponding to [CLS] is used as the aggregate sequence representation for computing triple scores. The only new parameters introduced during triple classification fine-tuning are classification layer weights $W \in \mathbb{R}^{2 \times H}$. 
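As a concrete illustration of this packing, the short Python sketch below builds the token, segment and position sequences for one triple. It is illustrative only: whitespace splitting stands in for BERT's WordPiece tokenizer, and assigning the relation's trailing [SEP] to segment B is our assumption.

```python
def pack_triple(head_text, rel_text, tail_text):
    """Pack (h, r, t) sentences into one BERT-style input sequence.

    Head and tail tokens share segment A (0); relation tokens use
    segment B (1), following the segment-embedding scheme described
    in the text.
    """
    tokens, segments = ["[CLS]"], [0]
    for text, seg in ((head_text, 0), (rel_text, 1), (tail_text, 0)):
        words = text.split()          # stand-in for WordPiece tokenization
        tokens += words + ["[SEP]"]
        segments += [seg] * (len(words) + 1)
    positions = list(range(len(tokens)))
    return tokens, segments, positions

tokens, segments, positions = pack_triple("Steve Jobs", "founded", "Apple Inc.")
```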
The scoring function for a triple $\tau = (h,r,t)$ is $\mathbf{s_{\tau}} = f(h,r,t) = \text{sigmoid}(CW^T)$, $\mathbf{s}_{\tau} \in \mathbb{R}^2$ is a 2-dimensional real vector with $s_{\tau 0}, s_{\tau 1} \in [0,1]$ and $s_{\tau 0} + s_{\tau 1} = 1$. Given the positive triple set $\mathbb{D}^+$ and a negative triple set $\mathbb{D}^-$ constructed accordingly, we compute a cross-entropy loss with $\mathbf{s}_{\tau}$ and triple labels:
\begin{equation}
\mathcal{L} = -\sum_{\tau \in \mathbb{D}^+ \cup \mathbb{D}^-}{(y_{\tau}\log(s_{\tau 0}) + (1 - y_{\tau})\log(s_{\tau 1}))}
\end{equation}
where $y_{\tau} \in \{0,1\}$ is the label (negative or positive) of that triple. The negative triple set $\mathbb{D}^-$ is simply generated by replacing head entity $h$ or tail entity $t$ in a positive triple $(h,r,t) \in \mathbb{D}^+$ with a random entity $h'$ or $t'$, i.e.,
\begin{equation}
\begin{aligned}
\mathbb{D}^- = \{(h',r,t)| h' \in \mathbb{E} \land h' \ne h \land (h',r, t) \notin \mathbb{D}^+ \} \\ \cup \{(h,r,t')| t' \in \mathbb{E} \land t' \ne t \land (h,r,t') \notin \mathbb{D}^+\}
\end{aligned}
\end{equation}
where $\mathbb{E}$ is the set of entities. Note that a triple will not be treated as a negative example if it is already in positive set $\mathbb{D}^+$. The pre-trained parameter weights and new weights $W$ can be updated via gradient descent.
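The corruption procedure of Eq. (2) can be sketched as follows (illustrative Python under uniform entity sampling, not the authors' released code):

```python
import random

def corrupt_triples(positive, entities, n_neg=1, seed=0):
    """For each positive (h, r, t), replace the head or the tail with a
    random entity, rejecting corruptions that are themselves known
    positives, as Eq. (2) requires."""
    rng = random.Random(seed)
    known = set(positive)
    negatives = []
    for h, r, t in positive:
        for _ in range(n_neg):
            while True:
                e = rng.choice(entities)
                cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
                if cand not in known:   # also rules out cand == (h, r, t)
                    negatives.append(cand)
                    break
    return negatives
```

Sampling one negative per positive keeps the two classes balanced for binary triple classification.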
The architecture of the KG-BERT for predicting relations is shown in Figure 2. We name this KG-BERT version KG-BERT(b). We only use the sentences of the two entities $h$ and $t$ to predict the relation $r$ between them. In our preliminary experiment, we found that predicting relations directly from the two entities is better than using KG-BERT(a) with relation corruption, i.e., generating negative triples by replacing relation $r$ with a random relation $r'$. As in KG-BERT(a), the final hidden state $C$ corresponding to [CLS] is used as the representation of the two entities. The only new parameters introduced in relation prediction fine-tuning are classification layer weights $W' \in \mathbb{R}^{R \times H}$, where $R$ is the number of relations in a KG. The scoring function for a triple $\tau = (h,r,t)$ is $\mathbf{s_{\tau}'} = f(h,r,t) = \text{softmax}(CW'^T)$, $\mathbf{s_{\tau}'} \in \mathbb{R}^R$ is an $R$-dimensional real vector with $s'_{\tau i} \in [0,1]$ and $\sum_{i}^R s'_{\tau i} = 1$. We compute the following cross-entropy loss with $\mathbf{s'}_{\tau}$ and relation labels:
\begin{equation}
\mathcal{L'} = -\sum_{\tau \in \mathbb{D}^+ }\sum_{i=1}^R{y'_{\tau i}\log(s'_{\tau i})}
\end{equation}
where $\tau$ is an observed positive triple, $y'_{\tau i}$ is the relation indicator for the triple $\tau$, $y'_{\tau i} = 1$ when $r=i$ and $y'_{\tau i} = 0$ when $r \ne i$.
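For a single training triple, this loss reduces to the negative log of the softmax probability assigned to the observed relation. A small numerically stabilised sketch (illustrative only, with a plain Python list standing in for the logits $CW'^T$):

```python
import math

def relation_loss(logits, true_rel):
    """Negative log-softmax probability of the observed relation,
    i.e. the single-triple form of the cross-entropy loss above."""
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(s - m) for s in logits]
    z = sum(exps)
    return -math.log(exps[true_rel] / z)

loss = relation_loss([2.0, 0.5, -1.0], true_rel=0)
```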
\section{Experiments}
In this section we evaluate our KG-BERT on three experimental tasks. Specifically we want to determine:
\begin{itemize}
\item Can our model judge whether an unseen triple fact $(h,r,t)$ is true or not?
\item Can our model predict an entity given another entity and a specific relation?
\item Can our model predict relations given two entities?
\end{itemize}
\paragraph{Datasets.}
We ran our experiments on six widely used benchmark KG datasets: WN11~\cite{socher2013reasoning}, FB13~\cite{socher2013reasoning}, FB15K~\cite{bordes2013translating}, WN18RR, FB15k-237 and UMLS~\cite{dettmers2018convolutional}. WN11 and WN18RR are two subsets of WordNet; FB15K and FB15k-237 are two subsets of Freebase. WordNet is a large lexical KG of English where each entity is a synset consisting of several words and corresponding to a distinct word sense. Freebase is a large knowledge graph of general world facts. UMLS is a medical semantic network containing semantic types (entities) and semantic relations. The test sets of WN11 and FB13 contain positive and negative triplets which can be used for triple classification. The test sets of WN18RR, FB15K, FB15k-237 and UMLS only contain correct triples; we perform link (entity) prediction and relation prediction on these datasets. Table 1 provides statistics of all datasets we used.
For WN18RR, we use synsets definitions as entity sentences. For WN11, FB15K and UMLS, we use entity names as input sentences. For FB13, we use entity descriptions in Wikipedia as input sentences. For FB15k-237, we used entity descriptions made by ~\cite{xie2016representation}. For all datasets, we use relation names as relation sentences.
{\small
\begin{table}[t]\footnotesize
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|ccccccc}
\hline
\bf{Dataset}& \bf{\# Ent} & \bf{\# Rel}& \bf{\# Train}& \bf{\# Dev} & \bf{\# Test} \\
\hline
WN11 & 38,696 & 11 & 112,581 & 2,609 & 10,544\\
FB13 & 75,043 & 13 & 316,232 & 5,908 & 23,733 \\
WN18RR& 40,943 & 11 & 86,835 & 3,034 & 3,134\\
FB15K& 14,951 & 1,345 & 483,142 & 50,000 & 59,071 \\
FB15k-237& 14,541 & 237 & 272,115 & 17,535 & 20,466\\
UMLS & 135 & 46 & 5,216 & 652 & 661\\
\hline
\end{tabular}
\caption{Summary statistics of datasets.}
\label{tab:statistics}
\end{table}
}
\paragraph{Baselines.}
We compare our KG-BERT with multiple state-of-the-art KG embedding methods as follows: TransE and its extensions TransH~\cite{wang2014knowledge}, TransD~\cite{ji2015knowledge}, TransR~\cite{lin2015learning}, TransG~\cite{xiao2016transg}, TranSparse~\cite{ji2016knowledge} and PTransE~\cite{lin2015modeling}, DistMult and its extension DistMult-HRS~\cite{zhang2018knowledge} which only used structural information in KG. The neural tensor network NTN~\cite{socher2013reasoning} and its simplified version ProjE~\cite{shi2017proje}. CNN models: ConvKB~\cite{SWJ318}, ConvE~\cite{dettmers2018convolutional} and R-GCN~\cite{schlichtkrull2018modeling}. KG embeddings with textual information: TEKE~\cite{wang2016text}, DKRL~\cite{xie2016representation}, SSP~\cite{xiao2017ssp}, AATE~\cite{an2018accurate}. KG embeddings with entity hierarchical types: TKRL~\cite{xie2016representationijcai}. Contextualized KG embeddings: DOLORES~\cite{wang2018dolores}. Complex-valued KG embeddings ComplEx~\cite{trouillon2016complex} and RotatE~\cite{sun2019rotate}. Adversarial learning framework: KBGAN~\cite{cai2018kbgan}.
\paragraph{Settings.}
We choose pre-trained BERT-Base model with 12 layers, 12 self-attention heads and $H=768$ as the initialization of KG-BERT, then fine tune KG-BERT with Adam implemented in BERT. In our preliminary experiment, we found BERT-Base model can achieve better results than BERT-Large in general, and BERT-Base is simpler and less sensitive to hyper-parameter choices. Following original BERT, we set the following hyper-parameters in KG-BERT fine-tuning: batch size: 32, learning rate: 5e-5, dropout rate: 0.1. We also tried other values of these hyper-parameters in~\cite{devlin2019bert} but didn't find much difference. We tuned number of epochs for different tasks: 3 for triple classification, 5 for link (entity) prediction and 20 for relation prediction. We found more epochs can lead to better results in relation prediction but not in other two tasks. For triple classification training, we sample 1 negative triple for a positive triple which can ensure class balance in binary classification. For link (entity) prediction training, we sample 5 negative triples for a positive triple, we tried 1, 3, 5 and 10 and found 5 is the best.
{\small
\begin{table}[h]\scriptsize
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|cc|c}
\hline
\bf{Method}& WN11& FB13& Avg. \\
\hline
NTN~\cite{socher2013reasoning}& 86.2 & 90.0 & 88.1\\
TransE~\cite{wang2014knowledge} & 75.9 & 81.5 & 78.7 \\
TransH~\cite{wang2014knowledge} & 78.8 & 83.3 & 81.1 \\
TransR~\cite{lin2015learning} & 85.9 & 82.5 & 84.2 \\
TransD~\cite{ji2015knowledge} & 86.4 & 89.1 & 87.8 \\
TEKE~\cite{wang2016text} & 86.1 & 84.2 & 85.2 \\
TransG~\cite{xiao2016transg} & 87.4 & 87.3 & 87.4 \\
TranSparse-S~\cite{ji2016knowledge} & 86.4& 88.2& 87.3\\
DistMult~\cite{zhang2018knowledge} & 87.1& 86.2 &86.7 \\
DistMult-HRS~\cite{zhang2018knowledge} & 88.9& 89.0 &89.0 \\
AATE~\cite{an2018accurate} & 88.0 & 87.2 & 87.6 \\
ConvKB~\cite{SWJ318} & 87.6 & 88.8 & 88.2 \\
DOLORES~\cite{wang2018dolores} & 87.5 & 89.3 & 88.4 \\
KG-BERT(a) & \textbf{93.5} & \textbf{90.4} & \textbf{91.9} \\
\hline
\end{tabular}
\caption{Triple classification accuracy (in percentage) for different embedding methods. The baseline results are obtained from corresponding papers.}
\label{tab:triple-classification}
\end{table}
}
{\small
\begin{table*}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|cc|cc|cc}
\hline
\multirow{2}*{Method} &\multicolumn{2}{c|}{WN18RR} & \multicolumn{2}{c|}{FB15k-237} & \multicolumn{2}{c}{UMLS}\\
\cline{2-7}
& MR & Hits@10 & MR & Hits@10 & MR & Hits@10\\
\hline
TransE (our results) & 2365 & 50.5 &223 & 47.4 &1.84&98.9\\
TransH (our results)& 2524 & 50.3 & 255& 48.6 &1.80& \textbf{99.5}\\
TransR (our results)& 3166 & 50.7 &237 &51.1 &1.81& 99.4\\
TransD (our results)& 2768 & 50.7 & 246 & 48.4 &1.71 & 99.3 \\
DistMult (our results) & 3704& 47.7 &411 & 41.9&5.52 & 84.6\\
ComplEx (our results)& 3921 & 48.3 & 508& 43.4 &2.59 & 96.7\\
ConvE~\cite{dettmers2018convolutional} & 5277 & 48 & 246& 49.1& -- & --\\
ConvKB~\cite{SWJ318} & 2554 & 52.5 & 257& 51.7 &--&-- \\
R-GCN~\cite{schlichtkrull2018modeling} & -- & -- & --& 41.7 &--& --\\
KBGAN~\cite{cai2018kbgan} & --& 48.1 & --& 45.8& --&--\\
RotatE~\cite{sun2019rotate} &3340& \textbf{57.1} & 177& \textbf{53.3}& --&--\\
KG-BERT(a) & \textbf{97} &52.4 & \textbf{153} &42.0 &\textbf{1.47}& 99.0\\
\hline
\end{tabular}
\caption{Link prediction results on WN18RR, FB15k-237 and UMLS datasets. The baseline models denoted (our results) are implemented using OpenKE toolkit~\cite{han2018openke}, other baseline results are taken from the original papers.}
\label{tab:link-prediction}
\end{table*}
}
{\small
\begin{table}[h]\scriptsize
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{l|cc}
\hline
\bf{Method}& Mean Rank & Hits@1 \\
\hline
TransE~\cite{lin2015modeling} & 2.5 & 84.3 \\
TransR~\cite{xie2016representationijcai} &2.1 & 91.6\\
DKRL (CNN)~\cite{xie2016representation}& 2.5 & 89.0 \\
DKRL (CNN) + TransE~\cite{xie2016representation}& 2.0 & 90.8 \\
DKRL (CBOW)~\cite{xie2016representation} & 2.5 & 82.7 \\
TKRL (RHE)~\cite{xie2016representationijcai} & 1.7 & 92.8 \\
TKRL (WHE)~\cite{xie2016representationijcai} & 1.8 & 92.5 \\
PTransE (ADD, len-2 path)~\cite{lin2015modeling} & \textbf{1.2}& 93.6 \\
PTransE (RNN, len-2 path)~\cite{lin2015modeling} & 1.4 & 93.2 \\
PTransE (ADD, len-3 path)~\cite{lin2015modeling} & 1.4 & 94.0 \\
SSP~\cite{xiao2017ssp} &\textbf{1.2}&--\\
ProjE (pointwise)~\cite{shi2017proje} & 1.3 & 95.6 \\
ProjE (listwise)~\cite{shi2017proje} & \textbf{1.2} & 95.7 \\
ProjE (wlistwise)~\cite{shi2017proje} & \textbf{1.2} & 95.6 \\
KG-BERT (b) & \textbf{1.2} & \textbf{96.0} \\
\hline
\end{tabular}
\caption{Relation prediction results on FB15K dataset. The baseline results are obtained from corresponding papers. }
\label{tab:relation-prediction}
\end{table}
}
\paragraph{Triple Classification.}
Triple classification aims to judge whether a given triple $(h, r, t)$ is correct or not. Table 2 presents the triple classification accuracy of different methods on WN11 and FB13. We can see that KG-BERT(a) clearly outperforms all baselines by a large margin, which shows the effectiveness of our method. We ran our models 10 times and found that the standard deviations are less than 0.2 and the improvements are significant ($p < 0.01$). To our knowledge, KG-BERT(a) achieves the best results so far. For a more in-depth performance analysis, we note that TransE could not achieve high accuracy scores because it cannot deal with 1-to-N, N-to-1, and N-to-N relations. TransH, TransR, TransD, TranSparse and TransG outperform TransE by introducing relation-specific parameters. DistMult performs relatively well and can be further improved by the hierarchical relation structure information used in DistMult-HRS. ConvKB shows decent results, which suggests that CNN models can capture global interactions among the entity and relation embeddings. DOLORES further improves on ConvKB by incorporating contextual information from entity-relation random walk chains. NTN also achieves competitive performance, especially on FB13, which suggests that it is an expressive model and that representing entities with word embeddings is helpful. The text-enhanced KG embeddings TEKE and AATE outperform their base models TransE and TransH, which demonstrates the benefit of external text data. However, their improvements are still limited due to less thorough utilization of rich language patterns. The improvement of KG-BERT(a) over the baselines on WN11 is larger than on FB13, because WordNet is a linguistic knowledge graph that is closer to the linguistic patterns contained in pre-trained language models.
Figure 3 reports triple classification accuracy with 5$\%$, 10$\%$, 15$\%$, 20$\%$ and 30$\%$ of the original WN11 and FB13 training triples. We note that KG-BERT(a) can achieve higher test accuracy with limited training triples. For instance, KG-BERT(a) achieves a test accuracy of 88.1$\%$ on FB13 with only 5$\%$ of the training triples and a test accuracy of 87.0$\%$ on WN11 with only 10$\%$ of the training triples, which is higher than the accuracy of some baseline models (including text-enhanced models) trained on the full training set. These encouraging results suggest that KG-BERT(a) can fully utilize the rich linguistic patterns in large external text corpora to overcome the sparseness of knowledge graphs.
The main reasons why KG-BERT(a) performs well are fourfold: 1) the input sequence contains both entity and relation word sequences; 2) the triple classification task is very similar to the next sentence prediction task in BERT pre-training, which captures the relationship between two sentences in large free-text corpora, so the pre-trained BERT weights are well positioned for inferring the relationships among the different elements of a triple; 3) the token hidden vectors are contextual embeddings: the same token can have different hidden vectors in different triples, so contextual information is used explicitly; and 4) the self-attention mechanism can discover the most important words connected to the triple fact.
\begin{figure}[t]
\centering
\subfigure[WN11]{
\label{fig:proportion:a} %
\includegraphics[height = 28 mm]{proportion_WN11.pdf}}
\subfigure[FB13]{
\label{fig:proportion:b} %
\includegraphics[height = 28 mm]{proportion_FB13.pdf}}
\caption{Test accuracy of triple classification by varying training data proportions.}
\label{fig:proportion}
\end{figure}
\paragraph{Link Prediction.}
The link (entity) prediction task predicts the head entity $h$ given $(?, r, t)$ or the tail entity $t$ given $(h, r, ?)$, where $?$ denotes the missing element. The results are evaluated using a ranking produced by the scoring function $f(h, r, t)$ ($s_{\tau 0}$ in our method) on test triples. Each correct test triple $(h, r, t)$ is corrupted by replacing either its head or tail entity with every entity $e \in \mathbb{E}$; these candidates are then ranked in descending order of their plausibility scores. We report two common metrics: Mean Rank (MR) of the correct entities and Hits@10, i.e., the proportion of correct entities ranked in the top 10. A lower MR is better, while a higher Hits@10 is better. Following~\cite{nguyen2018novel}, we only report results under the \textit{filtered} setting~\cite{bordes2013translating}, which removes all corrupted triples that appear in the training, development, or test set before computing the ranking lists.
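The filtered evaluation protocol can be made concrete with a short sketch. The scoring function below is a toy stand-in (not KG-BERT's $s_{\tau 0}$); only the ranking and metric logic follows the description above:

```python
def filtered_rank(score, test_triple, all_entities, known_triples, corrupt="tail"):
    """Rank the correct entity against all corruptions of the test triple,
    in the *filtered* setting: corruptions that form known true triples
    are removed before ranking. `score` returns a plausibility score;
    higher means more plausible."""
    h, r, t = test_triple
    true_score = score(test_triple)
    rank = 1
    for e in all_entities:
        cand = (h, r, e) if corrupt == "tail" else (e, r, t)
        if cand == test_triple or cand in known_triples:
            continue  # the filtered setting skips known true triples
        if score(cand) > true_score:
            rank += 1
    return rank

def mean_rank_and_hits_at_10(ranks):
    mr = sum(ranks) / len(ranks)
    hits10 = sum(r <= 10 for r in ranks) / len(ranks)
    return mr, hits10

# Toy example: a scoring function that ranks the correct tail first.
entities = ["animal", "plant", "mineral"]
known = {("dog", "hypernym", "animal")}
score = lambda trip: 1.0 if trip in known else 0.0
rank = filtered_rank(score, ("dog", "hypernym", "animal"), entities, known)
mr, hits10 = mean_rank_and_hits_at_10([rank])
print(rank, mr, hits10)  # -> 1 1.0 1.0
```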
Table 3 shows the link prediction performance of various models. We test some classical baseline models with the OpenKE toolkit~\cite{han2018openke}\footnote{https://github.com/thunlp/OpenKE}; the other results are taken from the original papers. We can observe that: 1) KG-BERT(a) achieves lower MR than the baseline models, and it attains the lowest mean ranks on WN18RR and FB15k-237 to our knowledge. 2) The Hits@10 scores of KG-BERT(a) are lower than those of some state-of-the-art methods. KG-BERT(a) can avoid very high ranks thanks to the semantic relatedness of entity and relation sentences, but it does not explicitly model the KG structure information, so it may fail to rank some neighbor entities of a given entity in the top 10. The CNN models ConvE and ConvKB perform better than the graph convolutional network R-GCN. ComplEx does not perform well on WN18RR and FB15k-237, but can be improved using adversarial negative sampling, as in KBGAN and RotatE.
\paragraph{Relation Prediction.}
This task predicts the relation between two given entities, i.e., $(h, ?, t)$. The procedure is similar to link prediction, except that we rank the candidates by the relation scores $\mathbf{s_{\tau}'}$. We evaluate the relation ranking using Mean Rank (MR) and Hits@1 under the \textit{filtered} setting.
Table 4 reports relation prediction results on FB15K. We note that KG-BERT(b) also shows promising results and achieves the highest Hits@1 so far. KG-BERT(b) is analogous to sentence pair classification in BERT fine-tuning and can likewise benefit from BERT pre-training. The text-enhanced models DKRL and SSP also outperform the structure-only methods TransE and TransH. TKRL and PTransE work well with hierarchical entity categories and extended path information, respectively. ProjE achieves very competitive results by treating KG completion as a ranking problem and optimizing ranking score vectors.
\begin{figure}[h]
\centering
\includegraphics[width = 0.40 \textwidth]{example_triple.pdf}
\caption{Illustrations of attention patterns of KG-BERT(a). A positive training triple (\textit{\underline{~~~~}twenty\underline{~~}dollar\underline{~~}bill\underline{~~}NN\underline{~~}1}, ~\underline{~~}hypernym, \textit{\underline{~~~~}note\underline{~~}NN\underline{~~}6}) from WN18RR is used as the example. Different colors mean different attention heads. Transparencies of colors reflect the attention scores. We show the attention weights between [CLS] and other tokens in layer 11 of the Transformer model.}
\label{fig:attention-a}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width = 0.40 \textwidth]{relation_examples.pdf}
\caption{Illustrations of attention patterns of KG-BERT(b). The example is taken from FB15K. Two entities \textit{20th century} and \textit{World War II} are used as input, the relation label is /time/event/includes\underline{~~}event. }
\label{fig:attention-b}
\end{figure}
\paragraph{Attention Visualization.}
We show the attention patterns of KG-BERT in Figure 4 and Figure 5, using the visualization tool released by~\cite{vig2019transformervis}\footnote{\url{https://github.com/jessevig/bertviz}}. Figure 4 depicts the attention patterns of KG-BERT(a). A positive training triple (\textit{\underline{~~~~}twenty\underline{~~}dollar\underline{~~}bill\underline{~~}NN\underline{~~}1}, ~\underline{~~}hypernym, \textit{\underline{~~~~}note\underline{~~}NN\underline{~~}6}) from WN18RR is taken as the example. The entity descriptions ``a United States bill worth 20 dollars'' and ``a piece of paper money'' as well as the relation name ``hypernym'' are used as the input sequence. We observe that some informative words such as ``paper'' and ``money'' have higher attention scores with respect to the label token [CLS], while less related words like ``united'' and ``states'' receive less attention. On the other hand, different attention heads focus on different tokens: [SEP] is highlighted by the same six attention heads, ``a'' and ``piece'' are highlighted by the same three attention heads, while ``paper'' and ``money'' are highlighted by four other attention heads. As mentioned in~\cite{vaswani2017attention}, multi-head attention allows KG-BERT to jointly attend to information from different representation subspaces at different positions; the different attention heads are concatenated to compute the final attention values.
Figure 5 illustrates the attention patterns of KG-BERT(b). The triple (\textit{20th century}, /time/event/includes\underline{~~}event, \textit{World War II}) from FB15K is taken as input. We can see attention patterns similar to those in KG-BERT(a): six attention heads attend to ``century'' in the head entity, while three other attention heads focus on ``war'' and ``ii'' in the tail entity. Multi-head attention can attend to different aspects of the two entities in a triple.
\paragraph{Discussions.}
From the experimental results, we note that KG-BERT achieves strong performance on the three KG completion tasks. However, a major limitation is that the BERT model is computationally expensive, which makes the link prediction evaluation very time-consuming: the evaluation needs to replace the head or tail entity with almost every entity, and all of the corrupted triple sequences must be fed into the 12-layer Transformer model. Possible solutions include introducing 1-N scoring models like ConvE or using lightweight language models.
\section{Conclusion and Future Work}
In this work, we propose a novel knowledge graph completion method termed Knowledge Graph BERT (KG-BERT). We represent entities and relations by their name/description textual sequences, and turn the knowledge graph completion problem into a sequence classification problem. KG-BERT can make use of the rich language information in large amounts of free text and highlight the most important words connected to a triple. The proposed method demonstrates promising results, outperforming state-of-the-art methods on multiple benchmark KG datasets.
Future directions include improving the results by jointly modeling textual information with KG structures, and utilizing pre-trained models trained on more text data, such as XLNet. Applying KG-BERT as a knowledge-enhanced language model to language understanding tasks is another interesting direction we plan to explore.
\bibliographystyle{aaai}
GPALIB_DECL GPA_Status GPA_RegisterLoggingDebugCallback(GPA_Log_Debug_Type loggingType, GPA_LoggingDebugCallbackPtrType callbackFuncPtr);
/// \brief Internal function. Pass draw call counts to GPA for internal purposes.
/// \param[in] iCounts The draw call counts for the current frame.
/// \return The GPA result status of the operation. GPA_STATUS_OK is returned if the operation is successful.
GPALIB_DECL GPA_Status GPA_InternalSetDrawCallCounts(const int iCounts);
#endif // AMDT_INTERNAL
// *INDENT-ON*
/// \brief Internal function. Unsupported and may be removed from the API at any time.
///
/// \return The GPA result status of the operation. GPA_STATUS_OK is returned if the operation is successful.
GPALIB_DECL GPA_Status GPA_InternalProfileStart();
/// \brief Internal function. Unsupported and may be removed from the API at any time.
///
/// \param pFilename The name of the file to which the profile results will be written.
/// \return The GPA result status of the operation. GPA_STATUS_OK is returned if the operation is successful.
GPALIB_DECL GPA_Status GPA_InternalProfileStop(const char* pFilename);
#endif // _GPUPERFAPI_PRIVATE_H_
Rebel MLAs camping in Mumbai likely to board plane to Bengaluru in afternoon
PTI, Jul 11, 2019, 1:10 PM IST
Mumbai: Fourteen Karnataka rebel MLAs staying at a hotel in Mumbai may fly back to Bengaluru on Thursday afternoon to meet the Assembly Speaker, sources said.
The move comes after the Supreme Court allowed the rebel MLAs of the Congress-JD(S) coalition in Karnataka to meet the Speaker at 6 pm to convey to him their decision to resign.
"The rebel MLAs can now appear before the Speaker in Karnataka. They are planning to book a flight at 2 pm to Bengaluru so that they can meet the speaker and put forth their statement," a source said.
Asked if more ruling coalition MLAs in Karnataka were likely to switch sides, the source said, "They will join these legislators in Bengaluru itself. The future course of action will be decided after the MLAs meet the Speaker."
Fourteen MLAs — including those of the Congress, the JD(S) and Independents — have been staying at the Renaissance Hotel in Powai after resigning from the Karnataka Assembly and withdrawing support to the coalition government.
Trillium vaseyi is a species of perennial plant in the family Melanthiaceae. The species is not threatened with extinction.

Distribution

The species occurs in the southeastern United States, mostly in the southern Appalachian Mountains, although a few populations are found further south.

References

Trillium
Madan Mohan Mishra (Devanagari: मदनमोहन मिश्र; Lalitpur, 12 December 1931 – 4 July 2013) was a Nepali writer and humorist, known for his epic poetry, satirical writings, and children's songs. He wrote in Nepali, Nepal Bhasa, and English.

Biography

Mishra was born in Lalitpur to Pandit Madhusudan and Maheswari Mishra. He was educated in Sanskrit.

Mishra wrote more than a dozen books, including academic works on art, culture, and sculpture. His Gajiguluya Mhagasay Pashupatinath (गजिगुलुया म्हगसय् पशुपतिनाथ, 'Pashupatinath in the Dreams of a Marijuana Smoker'), published in 1975, is one of his best-loved works in Nepal Bhasa. The first edition was confiscated by the Panchayat regime.

He was honored with the title Khyali Ratna ('jewel among humorists') by Khyaligulu Guthi, an association of humorists.

References

External links

Humorists of Nepal
Writers of Nepal
20th-century writers
20th-century Nepali people
Hand drawn fonts are a well-known decorative element, especially for vintage/retro-style designs.

Typography is a huge topic on the internet; it is no longer just a matter of fonts and text. Designers are interested in learning what typography is and how it works across different mediums.

If you are a type lover, you may already know hand drawn fonts. Handwritten fonts are great decorative fonts for web and print. Realism is the defining character of handwritten fonts, which makes them popular for retro designs. If you are planning to design a fancy old-style website, a poster, or anything similar, you might need a good handwritten font to blend the content in.

Finding the right handwritten font is a time-consuming process. We have put together the best free hand drawn fonts in one place, so you don't need to look anywhere else.
# The unit cell of a metal of atomic mass 108 and density $10.5\ gm/cm^3$ is a cube with edge-length of 409 pm. Find the structure of the crystal lattice.

FCC
Hence (A) is the correct answer.
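The stated answer can be verified by computing the number of atoms per unit cell, $Z = \rho a^3 N_A / M$; four atoms per cell corresponds to a face-centred cubic (FCC) lattice. A quick check in Python, with the values taken from the problem statement:

```python
# Z = rho * a^3 * N_A / M gives the number of atoms per unit cell.
N_A = 6.022e23      # Avogadro's number, mol^-1
M = 108.0           # atomic mass, g/mol
rho = 10.5          # density, g/cm^3
a = 409e-10         # edge length: 409 pm expressed in cm

Z = rho * a**3 * N_A / M
print(round(Z))  # -> 4 atoms per cell, i.e. face-centred cubic (FCC)
```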
\section{Introduction} \label{sec:intro}
\subsection{Background} \label{ssec:background}
Distribution testing has its roots in statistical hypothesis testing~\cite{NeymanP,lehmann2005testing}
and was initiated in~\cite{GRexp:00, BFR+:00}.
The paradigmatic problem in this area is the following: given sample access to an arbitrary distribution
$P$ over a domain of size $N$, determine whether $P$ has some global property or is ``far''
from any distribution having the property. A natural way to solve this problem would be to
learn the distribution in question to good accuracy, and then check if the
corresponding hypothesis is close to one with the desired property.
However, this testing-via-learning approach requires $\Omega(N)$ samples and is typically suboptimal.
The main goal in this area is to obtain {\em sample-optimal} testers -- ideally, testers that draw
$o(N)$ samples from the underlying distribution.
During the past two decades, a wide range of properties have been studied,
and we now have sample-optimal testers for many of these properties~\cite{Paninski:08, CDVV14, VV14, DK:16, DiakonikolasGPP16}.
We remark that even for the simplest properties, e.g., identity testing,
at least $\Omega(\sqrt{N})$ samples are required for
arbitrary distributions over $N$ atoms.
While this is an improvement over the $\Omega(N)$ samples required
to learn the distribution, a sample upper bound of $O(\sqrt{N})$ is still impractical if $N$ is very large.
For example, suppose that the unknown distribution is supported on $\{0, 1\}^n$. For this high-dimensional setting,
a sample complexity bound of $\Theta(2^{n/2})$ quickly becomes prohibitive, when the dimension increases.
Notably, the aforementioned $\Omega(\sqrt{N})$ sample lower bound
characterizes worst-case instances, which in many cases are unlikely to arise in real-world data.
This observation motivates the study of testing {\em structured} distribution families,
where significantly improved testers may be possible. Hence, the following natural question arises:
{\em Can we exploit the structure of the data to perform the desired testing task more efficiently?}
A natural formalization of this question involves
viewing the data as samples from a {\em probabilistic model} -- a model
that we believe represents the random process generating the samples.
The usual assumption is that there exists a known
family of probabilistic models -- describing a set of probability distributions --
and that the data are random samples drawn from an unknown distribution in the family.
In this context, the distribution testing problem is the following:
Let $\mathcal{C}$ be a family of {probabilistic models}.
The {\em testing algorithm} has access to independent samples from an unknown $P \in \mathcal{C}$,
and its goal is to output ``yes'' if $P$ has some property $\mathcal{P}$,
and output ``no'' if the total variation distance, $d_{\mathrm TV}(P,Q) \stackrel{{\mathrm {\footnotesize def}}}{=} (1/2) \normone{P-Q}$,
\new{where $\| \cdot \|_1$ denotes the $L_1$-norm,} is at least $\epsilon$ to {\em every} $Q \in \mathcal{C}$ that has property $\mathcal{P}$.
The sample complexity of this structured testing problem depends on the underlying family
$\mathcal{C}$, and we are interested in obtaining efficient algorithms that are {\em sample optimal} for $\mathcal{C}$.
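For concreteness, the total variation distance used throughout is easy to compute for explicitly given discrete distributions. A small illustration (the distributions are toy examples; this is not part of any of the testers discussed):

```python
def total_variation(p, q):
    """d_TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)| for discrete
    distributions given as probability vectors over the same domain."""
    assert len(p) == len(q)
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# An epsilon-tester must accept when P has the property and reject
# when P is at total variation distance >= epsilon from the property.
p = [0.5, 0.5, 0.0]
q = [0.25, 0.25, 0.5]
print(total_variation(p, q))  # -> 0.5
```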
More than a decade ago, Batu, Kumar, and Rubinfeld~\cite{BKR:04} considered a specific instantiation of this broad question --
testing the equivalence between two unknown discrete monotone distributions -- and obtained
a tester whose sample complexity is poly-logarithmic in the domain size. A recent sequence of works~\cite{DDSVV13, DKN:15, DKN:15:FOCS}
developed a framework to obtain sample-optimal estimators
for testing the identity of structured distributions {\em over total orders} (e.g., univariate multi-modal or log-concave distributions).
The main lesson of these works is that, under reasonable structural assumptions,
the sample complexity of testing may dramatically improve --
becoming sub-logarithmic or even independent of the support size. Moreover, in all studied cases,
one obtains testers with {\em sub-learning} sample complexity.
\subsection{This Work: Testing High-Dimensional Structured Distributions} \label{ssec:this-work}
This paper initiates a systematic investigation of testing properties of {\em high-dimensional} structured
distributions. One of the most general formalisms to succinctly represent such distributions
is provided by probabilistic graphical models~\cite{Wainwright:2008, Koller:2009}.
Graphical models compactly encode joint probability distributions in high dimensions.
Formally, a graphical model is a graph where we associate a random variable with each node.
The key property is that the edge-structure of the graph determines the dependence relation
between the nodes.
The general problem of inference in graphical models is of fundamental importance
and arises in many applications across several scientific disciplines,
see~\cite{Wainwright:2008} and references therein.
In particular, the task of learning graphical models
has been extensively studied~\cite{Neapolitan:2003, RQS11}.
A range of information-theoretic and algorithmic results have been developed
during the past five decades in various settings,
see, e.g.,~\cite{Chow68, Dasgupta97, Friedman96, Friedman1997, Friedman00, Cheng02,
Chickering02, Margaritis03, Abbeel:2006, WainwrightRL06, AnandkumarHHK12,
SanthanamW12, LohW12, DiakonikolasKS16b} for a few references.
In contrast,
the general question of {\em testing} graphical models has received less attention.
We propose the following broad set of questions:
\begin{question} \label{q:gm}
{\em Let $\mathcal{C}$ be a family of high-dimensional graphical models and $\mathcal{P}$ be a property of
$\mathcal{C}$. What is the {\em sample complexity} of testing whether an unknown $P \in \mathcal{C}$ has property
$\mathcal{P}$? Can we develop testers for $\mathcal{P}$ with {\em sub-learning} sample complexity?
Can we design {\em sample-optimal} and {\em computationally efficient} testers?}
\end{question}
We believe that Question~\ref{q:gm} points to a fundamental research direction that warrants study for its own sake.
Moreover, as we explain in the following paragraphs, such estimation tasks
arise directly in various practical applications across the data sciences,
where sample efficiency is of critical importance. Hence, improved estimators for these tasks
may have implications for the analysis of datasets in these areas.
For concreteness, Question~\ref{q:gm} refers to a single unknown distribution that we have sample access to.
We are also naturally interested in the broader setting of testing properties for {\em collections} of distributions in
$\mathcal{C}$. Before we proceed to describe our contributions,
a few comments are in order: As previously mentioned, for all global properties of interest (e.g., identity,
independence, etc.), the sample complexity of testing the property is bounded from above by the sample complexity
of learning an arbitrary distribution from $\mathcal{C}$. Hence, the overarching goal is to obtain
testers that use fewer samples than \new{are required to actually learn} the model -- or to prove that this is impossible.
On a related note, in the well-studied setting of testing arbitrary discrete distributions,
the main challenge has been to devise sample-optimal testers; the algorithmic aspects are typically straightforward.
This is no longer the case in the high-dimensional setting, where the combinatorial structure of the underlying model
may pose non-trivial algorithmic challenges.
In this work, we start this line of inquiry by focusing on testing
{\em Bayesian networks}~\cite{Pearl88} ({\em Bayes nets} or BN for brevity),
the prototypical family of {\em directed} graphical models. Bayesian networks are used for modeling beliefs
in many fields including robotics, computer vision, computational biology, natural language processing,
and medicine~\cite{Jensen:2007, Koller:2009}. Formally, a Bayesian network
is defined by a directed acyclic graph (DAG) $\mathcal{S} = (V, E)$,
where we associate a random variable with each node. Moreover, the value
at any particular node is conditionally independent of all the other {non-descendant} nodes once its parents are fixed.
Hence, for a fixed topology, it suffices to specify the conditional distribution for each node
for each configuration of values for its parents.
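As a toy illustration of this factorization (our own example, with maximum in-degree $d = 1$ over Bernoulli variables), the joint probability under a Bayes net is the product of each node's conditional probability given its parents:

```python
from itertools import product

# Toy Bayesian network X1 -> X2, X1 -> X3 over Bernoulli variables.
# Each CPT entry stores P(node = 1 | parent values).
parents = {"X1": (), "X2": ("X1",), "X3": ("X1",)}
cpt = {
    "X1": {(): 0.6},
    "X2": {(0,): 0.1, (1,): 0.8},
    "X3": {(0,): 0.3, (1,): 0.5},
}

def joint_prob(assignment):
    """P(x) factorizes as the product over nodes v of P(x_v | x_parents(v))."""
    prob = 1.0
    for v, pa in parents.items():
        p1 = cpt[v][tuple(assignment[u] for u in pa)]
        prob *= p1 if assignment[v] == 1 else 1.0 - p1
    return prob

# Sanity check: the joint distribution sums to 1 over {0,1}^3.
total = sum(joint_prob(dict(zip(parents, bits)))
            for bits in product([0, 1], repeat=len(parents)))
print(abs(total - 1.0) < 1e-12)  # -> True
```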
The main problems that we study in this setting
are the related tasks of testing {\em identity} and {\em closeness}:
In identity testing, we are given samples from an unknown Bayes net $P$
and we want to distinguish between the case that it is equal to
versus significantly different from an explicitly given Bayes net $Q$.
In closeness testing, we want to test whether two unknown Bayes nets $P, Q$
are identical versus significantly different. \new{We believe that our techniques can be naturally
adapted to test other related properties (e.g., independence),
but we have not pursued this direction in the current paper.}
A related testing problem that we consider is that of {\em structure testing}:
given samples from an unknown Bayes net $P$, we want to
test whether it can be represented with a given graph structure $\mathcal{S}$
or is far from any Bayes net with this structure.
In the prior work on testing {\em unstructured} discrete distributions,
the natural complexity measure was the domain size of the unknown distributions.
For the case of Bayes nets, the natural complexity measures are the number of variables
(nodes of the DAG) -- denoted by $n$ -- the maximum in-degree of the DAG -- denoted by $d$ --
and the alphabet size of the discrete distributions on the nodes. To avoid clutter in the expressions,
we focus on the natural setting where the random variable associated with each node is Bernoulli,
i.e., the domain of the underlying distributions is $\{0, 1\}^n$. (As we will point out, our bounds
straightforwardly extend to the case of general alphabets with a necessary
polynomial dependence on the alphabet size.)
We note that Bayes nets are a universal representation scheme:
{\em Any} distribution over $\{0, 1\}^n$ can be presented as a BN,
if the maximum in-degree $d$ of the graph is unbounded.
(Indeed, for $d = n-1$, \new{one} can encode all distributions over $\{0, 1\}^n$.)
In fact, as we will see, the sample complexity of testing scales exponentially with $d$.
Therefore, an upper bound on the maximum in-degree is {\em necessary} to obtain non-trivial upper bounds.
Indeed, the most interesting regime is the setting where the number of nodes $n$ is large
and the degree $d$ is small. In applications of interest, this assumption will be automatically satisfied.
In fact, \new{as we explain in the following subsection,}
in many relevant applications the maximum in-degree is either
$1$ (i.e., the graph is a tree) or bounded by a small constant.
\subsection{Related Work} \label{ssec:related}
\new{We partition the related work into three groups corresponding to research efforts by different communities.}
\vspace{-0.4cm}
\paragraph{Computer Science.}
\new{
A large body of work in computer science has focused on designing
statistically and computationally efficient algorithms for learning structured distributions
in both low and high dimensions~\cite{Dasgupta:99, FreundMansour:99short, AroraKannan:01, VempalaWang:02, CGG:02,
MosselRoch:05, MoitraValiant:10, BelkinSinha:10, DDS12soda, DDS12stoc, CDSS13,
DDOST13focs, CDSS14, CDSS14b, HardtP15, ADLS15, DDS15, DDKT15, DKS15, DKS16}. On the other hand,
} the vast majority of the literature in distribution property testing during the past two decades focused on
arbitrary discrete distributions, where the main complexity measure was the domain size.
See~\cite{BFR+:00, BFFKRW:01, Batu01, BDKR:02, BKR:04, Paninski:08, ValiantValiant:11,
DDSVV13, DJOP11, LRR11, ILR12, CDVV14, VV14, ADK15, CDGR16, DK:16} for a sample of works, or~\cite{Rub12,Canonne15} for surveys.
A line of work~\cite{BKR:04, DDSVV13, DKN:15, DKN:15:FOCS} studied properties of one-dimensional
structured distribution families under various ``shape restrictions'' on the underlying density.
In the high-dimensional setting, Rubinfeld and Servedio~\cite{RubinfeldServedio:05}
studied the identity testing problem for monotone distributions over $\{0, 1\}^n$.
It was shown in~\cite{RubinfeldServedio:05} that $\mathrm{poly}(n)$ samples suffice for the case of uniformity testing,
but the more general problems of identity testing and independence testing require $2^{\Omega(n)}$ samples.
Subsequently, Adamaszek, Czumaj, and Sohler~\cite{AdamaszekCS10}
generalized these results to continuous monotone distributions over $[0, 1]^n$.
A related, yet distinct, line of work studied the problem of testing {\em whether} a probability distribution
has a certain structure~\cite{BKR:04, BFRV11, ADK15, CDGR16}.
The sample complexity bounds in these works scale exponentially with the dimension.
\vspace{-0.4cm}
\paragraph{Statistics.}
The area of hypothesis testing for high-dimensional models has a long history in statistics
and is currently an active topic of study. A sequence of early and recent works,
starting with~\cite{Weiss60, Bickel69, LS93}, \new{has} studied the problem
of testing the equivalence between two nonparametric high-dimensional
distributions in the asymptotic regime. In the parametric setting, Hotelling's
T-squared statistic~\cite{Hotelling1931} is the classical test for the equivalence
of two high-dimensional Gaussians (with known and identical covariance).
However, Hotelling's test has the serious defect that it fails when the
sample size is smaller than the dimension of the data~\cite{BaiS96}.
Recent work has obtained testers that, under a high-dimensional Gaussian model (with known covariance),
succeed in the sub-linear regime for testing identity~\cite{Sriv08}
and closeness~\cite{chen2010}. A number of more recent works study properties of
covariance matrices~\cite{Cai2013}, regression~\cite{Jav14}, and linear independence testing
~\cite{RamdasISW16}.
\vspace{-0.4cm}
\paragraph{Applications.}
The problems of testing identity and closeness of Bayesian networks arise in a number of applications
where sample efficiency is critical~\cite{Friedman00, GWJ03, Sobel03, Almudevar10, Nguyen2011, SEG14, StadM15, Yin2015}.
In bioinformatics applications (e.g., gene set analysis), each sample corresponds to an experiment that
may be costly or ethically questionable~\cite{Yin2015}. Specifically,~\cite{Yin2015} emphasizes
the need to make accurate inferences on {\em tree structured} Bayesian networks, using an extremely small
sample size -- significantly smaller than the number of variables (nodes). \cite{Almudevar10} studies
the problem of testing closeness between two unknown Bayesian network models
in the context of a biology application, where Bayes nets are used to model gene expression data.
The motivation in~\cite{Almudevar10} comes from the need to compare network models for a common set of genes
under varying phenotypes, which can be formulated as the problem of testing closeness between two unknown Bayes nets.
As argued in~\cite{Almudevar10}, due to the small sample size available, it is not feasible to directly
learn each BN separately.
\vspace{-0.3cm}
\paragraph{Basic Notation and Definitions.}
Consider a directed acyclic graph (DAG), $\mathcal{S}$, with $n$ vertices that are \new{topologically sorted, i.e.,} labelled \new{from the set}
$[n] \stackrel{{\mathrm {\footnotesize def}}}{=} \{1,2,\ldots,n\}$ so that all directed edges \new{of $\mathcal{S}$} point from vertices with smaller \new{label} to vertices with larger \new{label}.
A probability distribution $P$ over $\{0,1\}^n$ is defined to be
a \emph{Bayesian network} \new{(or Bayes net)} with \new{dependency} graph $\mathcal{S}$
if for each $i \in [n]$, we have that $\Pr_{X\sim P}\left[X_i = 1 \mid X_1,\ldots,X_{i-1}\right]$
depends only on the values $X_j$,
where $j$ is a parent of $i$ in $\mathcal{S}$. Such a distribution $P$ can be specified by
its {\em conditional probability table}, i.e., the vector of conditional probabilities of $X_i=1$
conditioned on every possible combination of values \new{of} the coordinates of $X$
at the parents of $i$.
\new{
To formalize the above description, we use the following terminology.
We will denote by $\parent{i}$ the set of parents of node $i$ in $\mathcal{S}$.
For a vector $X = (X_1, \ldots, X_n)$ and a subset $A \subseteq [n]$, we use $X_A$ to denote the
vector $(X_i)_{i \in A}$. We can now give the following definition:
}
\begin{definition} \label{def:BN-terminology}
Let $S$ be the set $\{(i,a): i \in [n], a\in \{0,1\}^{\new{|}\parent{i}\new{|}}\}$ and $m=|S|$.
For $(i,a)\in S$, the \emph{parental configuration} $\Pi_{i,a}$
is defined to be the event that $X_{\parent{i}} =a$.
Once $\mathcal{S}$ is fixed, we may associate to a Bayesian network $P$
the conditional probability table $p\in [0,1]^{S}$ given by
$p_{i,a}=\Pr_{X\sim P}\left[X_i=1 \mid \Pi_{i,a}\right]$, \new{for $(i, a) \in S$.}
We note that the distribution $P$ is determined by $p$.
We will frequently index $p$ as a vector.
That is, we will use the notation $p_k$, for $1 \leq k \leq m$,
and the associated events $\Pi_k$, where each $k$ stands for an $(i,a) \in S$ lexicographically ordered.
\end{definition}
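To make Definition~\ref{def:BN-terminology} concrete, here is a minimal Python sketch (all function and variable names are ours, purely for illustration) that represents a Bayes net by its topologically sorted DAG and conditional probability table, and draws one sample by visiting the nodes in order:

```python
import random

def sample_bayes_net(parents, cpt, rng=random):
    """Draw one sample from a Bayes net over {0,1}^n.

    parents[i]: tuple of parent indices of node i (all < i, topological order).
    cpt[i]: dict mapping each parental configuration (a tuple of bits)
            to Pr[X_i = 1 | X_parents = configuration].
    """
    x = [0] * len(parents)
    for i, pa in enumerate(parents):
        config = tuple(x[j] for j in pa)  # the parental configuration Pi_{i,a}
        x[i] = 1 if rng.random() < cpt[i][config] else 0
    return x

# A tree-structured (degree-1) example on 3 nodes with edges 0 -> 1 and 0 -> 2.
parents = [(), (0,), (0,)]
cpt = [{(): 0.5},
       {(0,): 0.2, (1,): 0.8},
       {(0,): 0.3, (1,): 0.7}]
sample = sample_bayes_net(parents, cpt, random.Random(0))
```

Sampling in topological order is valid precisely because each conditional probability depends only on coordinates already drawn.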
\section{Our Results and Techniques} \label{sec:results-techniques}
The structure of this section is as follows: In Section~\ref{ssec:results}, we provide the statements
of our main results in tandem with a brief explanation of their context and the relations between them.
In Section~\ref{ssec:techniques}, we give a detailed outline of our algorithmic and lower bound
techniques.
\subsection{Main Results} \label{ssec:results}
The focus of this paper is on the properties of \emph{identity testing} and \emph{closeness testing}
of Bayes nets. We give the first non-trivial efficient
testing algorithms and matching information-theoretic lower bounds for these problems.
For a wide range of parameter settings, our algorithms achieve sub-learning sample complexity and
are sample-optimal (up to constant factors).
For concreteness, we consider Bayes nets over Bernoulli random variables.
We note that our upper bounds straightforwardly extend to general alphabets with a
polynomial dependence on the alphabet size \new{(see Remark~\ref{rem:alph})}.
Let $\mathcal{BN}_{n, d}$ denote the family of Bernoulli
Bayes nets on $n$ variables such that the corresponding DAG
has maximum in-degree at most $d$. For most of our results, we will think
of the dimension $n$ as being large and the maximum degree
$d$ as being comparably small (say, bounded from above by a constant
or at most logarithmic in $n$).
For the inference problems of learning and testing Bayes nets, there are two versions
of the problem: The first version corresponds to the setting
where the structure of the graph is fixed (and known a priori to the algorithm).
In the second version, both the graph and the parameters are unknown to the algorithm.
We note that both versions of the problem are interesting, depending on the application.
The {\em unknown structure} setting is clearly at least as hard, and typically includes an algorithm
for the {\em fixed structure} case plus additional algorithmic ingredients.
Before we give the statements of our main testing results, we record a nearly tight
bound on the sample complexity of learning $\mathcal{BN}_{n, d}$. This bound
will be used as a baseline to compare against our efficient testers:
\begin{fact} \label{fact:learning-sample}
The sample complexity of learning $\mathcal{BN}_{n, d}$, within total
variation distance $\epsilon$, with confidence probability $9/10$, is:
(i) $\widetilde{\Theta}(2^d \cdot n/\epsilon^2)$, for all $d \leq n/2$, in the fixed structure setting,
and (ii) $\widetilde{\Theta}(2^d \cdot n/\epsilon^2)$ in the unknown structure setting.
\end{fact}
We give a proof of this fact in~\cref{sec:learn}.
\cref{fact:learning-sample} characterizes the sample complexity of learning Bayes nets
(up to logarithmic factors). We remark that our information-theoretic upper bound
for the fixed structure case also yields a simple computationally efficient algorithm.
The unknown structure regime is much more challenging computationally.
For this setting, we provide a nearly tight information-theoretic upper bound
that is non-constructive. (The corresponding algorithm runs in exponential time.)
In fact, we note that no sample-optimal computationally efficient algorithm is known
for unknown structure Bayes nets.
\medskip
Our first main result concerns the fixed structure regime.
For technical reasons, we focus on Bayes nets that satisfy
a natural balancedness condition. Roughly speaking, our balancedness condition
ensures that the conditional probabilities are bounded away from $0$ and $1$,
and that each parental configuration happens with some minimum probability.
Formally, we have:
\begin{definition} \label{def:balanced-net}
A Bayes net $P$ over $\{0,1\}^n$ with structure $\mathcal{S}$ is called
\emph{$(c,C)$-balanced} if, for all $k$, we have that (i) $p_k\in[c,1-c]$,
and (ii) $\probaDistrOf{P}{\Pi_k} \geq C$.
\end{definition}
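For illustration only, the balancedness conditions can be checked directly when $n$ is small enough to enumerate $\{0,1\}^n$; the following brute-force sketch (our own helper names, exponential time in $n$) verifies both conditions of Definition~\ref{def:balanced-net}:

```python
from itertools import product

def is_balanced(parents, cpt, c, C):
    """Check (c, C)-balancedness by brute-force enumeration of {0,1}^n.

    Condition (i): every conditional probability p_k lies in [c, 1-c].
    Condition (ii): every parental configuration Pi_k has probability >= C.
    """
    n = len(parents)

    def prob(x):
        # Joint probability of a full assignment via the chain rule.
        p = 1.0
        for i, pa in enumerate(parents):
            pi = cpt[i][tuple(x[j] for j in pa)]
            p *= pi if x[i] == 1 else 1.0 - pi
        return p

    for i, pa in enumerate(parents):
        for a in product((0, 1), repeat=len(pa)):
            if not (c <= cpt[i][a] <= 1.0 - c):       # condition (i)
                return False
            mass = sum(prob(x) for x in product((0, 1), repeat=n)
                       if tuple(x[j] for j in pa) == a)
            if mass < C:                               # condition (ii)
                return False
    return True

# Same illustrative 3-node tree as before.
parents = [(), (0,), (0,)]
cpt = [{(): 0.5},
       {(0,): 0.2, (1,): 0.8},
       {(0,): 0.3, (1,): 0.7}]
```

Here every parental configuration has probability $1/2$, so this net is $(0.2, 0.4)$-balanced but not $(0.25, 0.4)$-balanced.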
Under a mild condition on the balancedness, we give sample-optimal and
computationally efficient algorithms for testing identity and closeness of Bayes nets.
Specifically, for the problem of identity testing against an explicit distribution,
we require that the explicit distribution be balanced (no assumption is needed for the unknown Bayes net).
For the problem of closeness testing, we require that one of the two unknown distributions be balanced.
We are now ready to state our first main theorem:
\begin{theorem}[Testing Identity and Closeness of Fixed--Structure Bayes Nets]\label{thm:informal-identity-closeness-known}
For testing identity and closeness of fixed structure Bayes nets $P, Q$
with $n$ nodes and maximum in-degree $d$, there is an efficient
algorithm that uses $\bigO{2^{d/2}\sqrt{n}/\epsilon^2}$ samples and, assuming that one of $P, Q$
is $(c,C)$-balanced with $c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$,
correctly distinguishes between the cases that $P=Q$ versus $\normone{P-Q} > \epsilon$,
with probability at least $2/3$.
Moreover, this sample size is information-theoretically optimal, up to constant factors,
for all $d < n/2$, even for the case of uniformity testing.
\end{theorem}
\noindent The conceptual message of~\cref{thm:informal-identity-closeness-known}
is that, for the case of fixed structure, testing is information-theoretically easier than learning.
Specifically, our result establishes a quadratic gap between learning and identity testing,
reminiscent of the analogous gap in the setting of unstructured discrete distributions.
We remark here that the information-theoretic lower bounds of Fact~\ref{fact:learning-sample} (i)
hold even for Bayes nets with constant balancedness.
We now turn our attention to the case of unknown structure.
Motivated by~\cref{thm:informal-identity-closeness-known}, it would be tempting
to conjecture that one can obtain testers with sub-learning sample complexity
in this setting as well. Our first main result for unknown structure testing
is an information-theoretic lower bound, showing that this is not the case.
Specifically, even for the most basic case of tree-structured Bayes nets ($d=1$)
with unknown structure, uniformity testing requires $\Omega(n/\epsilon^2)$ samples.
It should be noted that our lower bound applies even for Bayes nets with {\em constant}
balancedness. Formally, we have:
\begin{theorem}[Sample Lower Bound for Uniformity Testing of Unknown Tree-Structured Bayes Nets]
\label{thm:informal-uniformity-lower-unknown}
Any algorithm that, given sample access to a balanced tree-structured Bayes net $P$ over $\{0, 1\}^n$,
distinguishes between the cases $P=U$ and $\normone{P-U} > \epsilon$ \new{(where $U$ denotes the uniform distribution over $\{0,1\}^n$)},
with probability $2/3$, requires $\Omega(n/\epsilon^2)$ samples from $P$.
\end{theorem}
At the conceptual level, our above lower bound implies
that in the unknown topology case -- even for the simplest non-trivial case of degree-$1$ Bayes nets --
identity testing is information-theoretically essentially as hard as learning.
That is, in some cases, no tester with sub-learning sample complexity exists.
We view this fact as an interesting phenomenon that is absent from the previously
studied setting of testing unstructured discrete distributions.
\cref{thm:informal-uniformity-lower-unknown} shows that
testing Bayes nets can be as hard as learning. However, it is still possible that
testing is easier than learning in \emph{most} natural situations. For the sake of intuition,
let us examine our aforementioned lower bound more carefully.
We note that the difficulty of the problem originates from the fact that
the explicit distribution is the uniform distribution, which can be thought of
as having any of a large number of possible structures. We claim that
this impediment can be circumvented if the explicit distribution satisfies
some non-degeneracy conditions. Intuitively, we want these conditions
to ensure \emph{robust identifiability} of the structure: that is, that any (unknown)
Bayes net sufficiently close to a non-degenerate Bayes net $Q$
must also share the same structure.
For tree structures, there is a very simple non-degeneracy condition.
Namely, that for each node, the two conditional probabilities for that node
(depending on the value of its parent) are non-trivially far from each other.
For Bayes nets of degree more than one, our non-degeneracy condition
is somewhat more complicated to state, but the intuition is still simple:
By definition, non-equivalent Bayesian network structures satisfy
different conditional independence constraints. Our
non-degeneracy condition requires that some of these possible
new conditional independence constraints
be \emph{far} from being satisfied
by the non-degenerate Bayesian network.
Let $\gamma >0$ be a parameter quantifying non-degeneracy.
Under our non-degeneracy condition, we can design a {\em structure tester}
with the following performance guarantee:
\begin{theorem}[Structure Testing for Non-Degenerate Bayes Nets]\label{thm:informal-struct-test}
Let $\mathcal{S}$ be a structure of degree at most $d$ and $P$
be a degree at most $d$ Bayes net over $\{0, 1\}^n$ with structure $\mathcal{S}'$
whose underlying undirected graph has no more edges than $\mathcal{S}$. There is an algorithm that uses
$\bigO{(2^d+ d \log n)/\gamma^2}$ samples from $P$, runs in time $\bigO{n^{d+3}/\gamma^2}$,
and distinguishes between the following two cases with probability at least $2/3$:
(i) $P$ can be expressed as a degree-$d$ Bayes net
with structure $\mathcal{S}$ that is $\gamma$-non-degenerate;
or (ii) $P$ cannot be expressed as a Bayes net with structure $\mathcal{S}$.
\end{theorem}
By invoking the structure test of the above theorem, we can reduce the identity testing with unknown structure
to the case of known structure, obtaining the following:
\begin{theorem}[Testing Identity of Non-Degenerate Unknown Structure Bayes Nets]
\label{thm:informal-upper-identity-unknown-nondegen}
There exists an algorithm with the following guarantees.
Given the description of a degree-$d$ Bayes net $Q$ over $\{0, 1\}^n$,
which is $(c,C)$-balanced and $\gamma$-non-degenerate for
$c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$, $\epsilon >0$,
and sample access to a distribution $P$, promised to be a degree-$d$ Bayes net
with no more edges than $Q$, the algorithm takes
$\bigO{2^{d/2}\sqrt{n}/\epsilon^2+(2^d+ d \log n)/\gamma^2}$ samples from $P$,
runs in time $\bigO{n^{d+3}(1/\gamma^2+1/\epsilon^2)}$,
and distinguishes with probability at least $2/3$ between (i) $P=Q$ and (ii) $\normone{P-Q} > \epsilon$.
\end{theorem}
\noindent We remark that we can obtain an analogous result for the problem of testing closeness.
See~\cref{sec:closeness}.
We have shown that, without any assumptions, testing is almost as hard as learning for the case of trees.
An interesting question is whether this holds for higher degrees as well. We show that, for the case
of high degree, sub-learning sample complexity is indeed possible.
We give an identity testing algorithm for degree-$d$ Bayes nets with unknown structure,
\emph{without} balancedness or degeneracy assumptions.
While the dependence on the number of nodes $n$ of this tester is suboptimal,
it does essentially achieve the ``right'' dependence on the degree $d$, that is $2^{d/2}$:
\begin{theorem}[Sample Complexity Upper Bound of Identity Testing] \label{thm:informal-sample-identity-general}
Given the description of a degree-$d$ Bayes net $Q$ over $\{0, 1\}^n$, $\epsilon>0$,
and sample access to a degree-$d$ Bayes net $P$, we can distinguish between the cases
that $P=Q$ and $\normone{P-Q} > \epsilon$, with probability at least $2/3$, using
$2^{d/2}\mathrm{poly}(n,1/\epsilon)$ samples from $P$.
\end{theorem}
The message of this result is that when the degree $d$ increases, specifically for $d = \Omega(\log n)$,
the sample complexity of testing becomes lower than the sample complexity of learning.
We also show an analogue of Theorem~\ref{thm:informal-sample-identity-general}
for closeness testing of two unknown Bayes nets, under the additional assumption
that we know the topological ordering of the unknown DAGs.
\subsection{Overview of Algorithmic and Lower Bound Techniques} \label{ssec:techniques}
In this section, we provide a high-level outline of
the main ideas that come into our testing algorithms
and our information-theoretic lower bounds.
\paragraph{Finding the Right Parameter Distance.}
In order to design sample-efficient testing algorithms,
we need to devise a statistic that can accurately
detect when our high-dimensional distributions are non-trivially
separated in total variation distance. The first obstacle to this attempt is that
of actually understanding the behavior of the variational distance between our
distributions. We remark that this difficulty does not appear in the one-dimensional
unstructured case, and is a consequence of the structured high-dimensional setting.
Unfortunately, it seems difficult to find an exact closed-form
expression for the variational distance between two Bayes nets in terms
of their parameters, even if the underlying graph structures
are the same and explicitly known. To make things manageable, we handle this issue
by finding an appropriate proxy for the total variation distance.
A natural candidate is the Kullback--Leibler (KL) Divergence
for the following reasons: (1) it can be used to bound from above the variation
distance, and (2) it is relatively easy to compute for a pair of Bayesian networks with the same
underlying structure. However, the KL-Divergence has the disadvantage
that it depends on logarithms of the relevant conditional probabilities,
which are hard to take advantage of. To deal with this, we bound from above
the KL-Divergence by a ``chi-squared-like quantity''.
For the sake of intuition, let us first consider the basic setting that the underlying Bayes nets
$P, Q$ have no edges, i.e., they correspond to {\em product} distributions over $\{0, 1\}^n$.
It turns out that this case captures some of the core difficulties of the high-dimensional
setting, and it is instructive to understand as a first step.
For product distributions $P$ and $Q$ with mean vectors $p, q$,
we bound from above the variational distance
in terms of the chi-squared distance between the mean vectors, namely
$\sum_i (p_i-q_i)^2/(q_i(1-q_i))$ (\cref{lem:prod:kl,lem:prod:dtv:kl}).
For the case of general Bayes nets, we use an appropriate generalization of this
bound (\cref{lemma:hellinger:bn}) involving instead the conditional probabilities, where additionally,
each term is weighted by the probability of seeing that particular parental configuration.
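As a quick illustration, both the product-case proxy and its weighted Bayes-net analogue are straightforward to compute from the parameters; the sketch below uses our own names, and the exact weighting in the paper's lemmas may differ in constants:

```python
def chi2_proxy_product(p, q):
    """Chi-squared-like proxy sum_i (p_i - q_i)^2 / (q_i * (1 - q_i))
    bounding the distance between product distributions with means p, q."""
    return sum((pi - qi) ** 2 / (qi * (1.0 - qi)) for pi, qi in zip(p, q))

def chi2_proxy_bayes(p, q, config_prob):
    """Bayes-net analogue: each conditional-probability term is weighted by
    config_prob[k], the probability of its parental configuration."""
    return sum(w * (pk - qk) ** 2 / (qk * (1.0 - qk))
               for pk, qk, w in zip(p, q, config_prob))
```

For instance, a single coordinate with $p_1 = 0.6$, $q_1 = 0.5$ contributes $(0.1)^2 / 0.25 = 0.04$ to the product proxy.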
\paragraph{Relation to Unstructured Discrete Setting and Tolerant Testing.}
In this paragraph, we take a moment to point out a very useful analogy between
(i) a special case of testing Bayes nets, and (ii) testing discrete unstructured distributions.
In particular, we again consider the special case of testing (identity or closeness)
of product distributions on $\{0,1\}^n$. It turns out that this setting
is quite similar to testing unstructured distributions on $[n]$.
Specifically, in both cases, the underlying distribution is parameterized
by a set of parameters $p_i$ in $[0,1]$, and in both cases the variational distance
between two such distributions can be bounded by an identical-looking
chi-squared quantity. Moreover, in both cases, taking samples allows us to
come up with independent estimates of the $p_i$'s (for discrete
distributions we need to employ Poissonization, while for products the
samples from different coordinates are automatically independent).
This analogy becomes even stronger when one considers product
distributions where the sum of the $p_i$'s is approximately $1$. In this
setting, there is a formal correspondence between the two problems.
Specifically, to each discrete distribution we can associate
the product distribution obtained by taking
$\mathop{\textnormal{Poi}}\nolimits(\lambda)$ samples from $P$ and noting the bins that received at least
one sample. This observation turns out to give a reduction of testing discrete
distributions to testing ``light'' product distributions that nearly preserves
variational distance. This reduction can be formalized, which allows
us to port some lower bounds from one setting to the other,
specifically for the question of {\em tolerant} testing.
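A minimal sketch of this correspondence (names ours): under $\mathop{\textnormal{Poi}}\nolimits(\lambda)$ samples from a discrete distribution $q$, bin $i$ is hit at least once with probability $1 - e^{-\lambda q_i}$, independently across bins, which defines the associated product distribution.

```python
import math

def discrete_to_product(q, lam):
    """Map a discrete distribution q over [n] to the mean vector of the
    associated product distribution: with Poi(lam) samples, bin i receives
    at least one sample with probability 1 - exp(-lam * q_i),
    independently of the other bins."""
    return [1.0 - math.exp(-lam * qi) for qi in q]

# For small lam * q_i the marginal is approximately lam * q_i, so the
# product's mean vector sums to roughly lam (about 1 for lam = 1),
# the "light" regime discussed above.
means = discrete_to_product([0.25, 0.25, 0.25, 0.25], lam=1.0)
```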
\paragraph{Warm-up: Testing Identity and Closeness of Product Distributions.}
Our first set of results (see Section~\ref{sec:product}) involves sample-optimal testers and matching
information-theoretic lower bounds for testing identity and closeness of product distributions
over $\{0, 1\}^n$. Our results for this setting can be viewed as discrete analogues
of testing identity and closeness of high-dimensional spherical Gaussians,
that have been studied in the statistics literature~\cite{Hotelling1931, BaiS96, Sriv08, chen2010}.
We note that the Gaussian setting is simpler since the total variation distance can be bounded
by the Euclidean distance between the mean vectors, instead of the chi-squared distance.
We start with the problem of testing the identity of an unknown product $P$
with mean vector $p$ against an explicit product distribution $Q$ with mean vector $q$.
Our tester relies on a statistic providing an unbiased estimator of $\sum_i
(p_i-q_i)^2/(q_i(1-q_i))$. Essentially, every draw from $P$ gives us an
independent sample from each of the coordinate random variables.
In order to relate our tester more easily to the analogous testers for
unstructured distributions over finite domains, we consider $\mathop{\textnormal{Poi}}\nolimits(m)$
samples from each of these coordinate distributions. From there, we
construct a random variable $Z$ that provides an unbiased estimator of
our chi-squared statistic, and a careful analysis of the variance of $Z$
shows that with $O(\sqrt{n}/\epsilon^2)$ samples we can
distinguish between $P=Q$ and $P$ being $\epsilon$-far from $Q$.
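The following simulation sketches the idea (our own code, not the paper's exact statistic): for Poissonized counts $X_i \sim \mathop{\textnormal{Poi}}\nolimits(m p_i)$, the quantity $((X_i - m q_i)^2 - X_i)/(q_i(1-q_i))$ is an unbiased estimator of $m^2 (p_i - q_i)^2/(q_i(1-q_i))$, so the sum over coordinates concentrates near $0$ when $P = Q$ and becomes large when $P$ is far from $Q$.

```python
import math
import random

def poisson(lam, rng):
    """Poi(lam) sample via Knuth's method (adequate for moderate lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def z_statistic(counts, q, m):
    """For counts[i] ~ Poi(m * p_i), an unbiased estimator of
    m^2 * sum_i (p_i - q_i)^2 / (q_i * (1 - q_i))."""
    return sum(((x - m * qi) ** 2 - x) / (qi * (1.0 - qi))
               for x, qi in zip(counts, q))

def average_z(p, q, m, trials, rng):
    # Average the statistic over independent trials to see its expectation.
    return sum(z_statistic([poisson(m * pi, rng) for pi in p], q, m)
               for _ in range(trials)) / trials

rng = random.Random(0)
q = [0.5] * 10
avg_null = average_z([0.5] * 10, q, m=200, trials=500, rng=rng)  # near 0
avg_far = average_z([0.6] * 10, q, m=200, trials=500, rng=rng)   # near 16000
```

Under the alternative above, the expectation is $m^2 \sum_i (0.1)^2/0.25 = 200^2 \cdot 0.4 = 16000$, well separated from the null.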
Testing closeness between two unknown product distributions is
somewhat more complicated. As is the case when comparing unknown
discrete distributions on $[n]$, we have the difficulty that we do not
know how to scale our approximations to the $(p_i-q_i)^2$ terms. We are
thus forced to rescale using the total number of samples drawn
with $x_i=1$ as a proxy for $1/q_i$. This leaves us with a statistic
reminiscent of that used in~\cite{CDVV14}, which can be shown to
work with a similar analysis. Unfortunately, in our
setting, it is no longer the case that the sum of the $q_i$'s is
$O(1)$, and this ends up affecting the analysis
making our sample complexity depend on $n^{3/4}$, instead of $n^{2/3}$ as in the unstructured
case.
Perhaps surprisingly, we show that this added complexity is in fact necessary: both our identity tester and our closeness tester
are sample-optimal up to constant factors. To prove our lower bounds,
we use the information-theoretic technique from~\cite{DK:16}:
Given a candidate hard instance, we proceed by bounding from above
the mutual information between appropriate random variables.
More specifically, we construct an appropriate family of
hard instances (distributions)
and show that a set of $k$ samples taken from a distribution
from the chosen family has small shared
information with whether or not the distributions are the same.
Recall that the hard family of instances for distinguishing discrete distributions over $[n]$
had (a) many ``light'' bins (domain elements) of probability mass
approximately $1/n$, where either $p_i=q_i$ in each bin
or $p_i = q_i(1 \pm \epsilon)$ in each bin, and (b) a number of ``heavy'' bins
where $p_i=q_i \approx 1/k$ (where $k$ was the number of
samples taken). The goal of the heavy bins was to ``add noise''
and hide the signal from the light bins.
In the case of discrete distributions over $[n]$,
we could only have $k$ such heavy bins.
In the case of product distributions,
there is no such restriction,
and we can have $n/2$ of them in our hard instance.
The added noise leads to an increased sample complexity of testing closeness
in the high-dimensional setting.
An interesting difference with the unstructured discrete distribution setting
is that our identity tester is in fact {\em tolerant} for a wide range of settings --
in particular when $Q$ is uniform or, more generally, {\em balanced},
i.e., has mean vector $q$ with coordinates bounded away from $0$ and $1$
(see Remark~\ref{remark:tolerance:tradeoff}).
Tolerant testing is the (harder) problem of distinguishing between
$\normone{P-Q} > \epsilon$ and $\normone{P-Q} < \epsilon/2$. Unlike the case of unstructured discrete
distributions over $[n]$, for product distributions
there exists a tolerant uniformity tester with strongly sublinear sample complexity.
This is essentially due to the fact that the variational distance from the
uniform distribution is proportional to the $\ell_2$ distance between the
mean vectors, which can be accurately approximated with $O(\sqrt{n}/\epsilon^2)$ samples.
The same holds when the explicit product distribution is balanced
or when we want to test closeness of two unknown balanced products.
On the other hand, when the distributions are unbalanced, tolerant testing requires
$\Omega(n/\log n)$ samples, via the foregoing reduction from
the case of discrete distributions on $[n]$ (\cref{theo:tradeoff:balancedness:tolerance}).
\medskip
In the following paragraphs, we describe how to generalize our previous
results for product distributions to testing general Bayes nets.
The case of known structure turns out to be manageable, and at a technical
level a generalization of our testers for product distributions. The case
of unknown structure poses various complications and requires a number of
non-trivial new ideas.
\paragraph{Testing Identity and Closeness of Fixed Structure Bayes Nets.}
Our testers and matching lower bounds for the fixed structure regime
are given in~\cref{sec:identity-known,sec:closeness-known}.
For concreteness, let us consider the case of testing identity
of a tree-structured ($d=1$) Bayes net $P$ against an explicit tree-structured
Bayes net $Q$ with the same structure.
Recall that we are using as a proxy for \new{the distance
$\normone{P-Q}$} an appropriate chi-squared-like quantity.
A major difficulty in generalizing our identity tester for products is that
the chi-squared statistic depends not on the probabilities of the various coordinates,
but on the conditional probabilities of these coordinates based on all possible parental
configurations. This fact produces a major wrinkle in our analysis for the following reason:
while in the product distribution case each sample provides information
about each coordinate probability, in the Bayes net case a sample
only provides information about conditional probabilities for parental
configurations that actually occurred in that sample.
This issue can be especially problematic to handle if there are uncommon
parental configurations about which we will have difficulty gathering much information
(with a small sized sample). Fortunately, the probabilities conditioned on such parental
configurations will have a correspondingly smaller effect on the final
distribution and thus, we will not need to know them to quite the same
accuracy. So while this issue can be essentially avoided, we will require
some technical assumptions about {\em balancedness} to let us know that none
of the parental configurations are too rare. Using these ideas, we
develop an identity tester for tree-structured Bayes nets that uses an optimal
$\Theta(\sqrt{n}/\epsilon^2)$ samples. For known structure Bayes nets of degree $d>1$, the sample complexity
will also depend exponentially on the degree $d$.
Specifically, each coordinate will have as many as $2^d$ parental configurations.
Thus, instead of having only $n$ coordinate probabilities to worry about,
we will need to keep track of $2^d n$ conditional probabilities. This will require that our sample
complexity also scale like $2^{d/2}$. The final complexity of our identity and closeness testers
will thus be $O(2^{d/2}\sqrt{n}/\epsilon^2)$.
We now briefly comment on our matching lower bounds.
Our sample complexity lower bound of $\Omega(\sqrt{n}/\epsilon^2)$ for the product
case can be generalized in a black-box manner to yield a tight lower bound
$\Omega(2^{d/2}\sqrt{n}/\epsilon^2)$ for testing uniformity of degree-$d$ Bayes nets.
The basic idea is to consider degree-$d$ Bayes nets with the following structure:
The first $d$ nodes are all independent (with marginal probability $1/2$ each),
and will form in some sense a ``pointer'' to one of $2^d$ arbitrary product distributions.
The remaining $n-d$ nodes will each depend on all
of the first $d$. The resulting distribution is now an (evenly weighted) disjoint mixture of $2^d$
product distributions on the $(n-d)$-dimensional hypercube.
In other words, there are $2^d$ product distributions $p_1,\dots,p_{2^d}$,
and our distribution returns a random $i$ (encoded in binary) followed by a random sample from $p_i$.
By using the fact that the $p_i$'s can be arbitrary product distributions, we obtain our desired
sample complexity lower bound.
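The construction can be sketched as a sampler (illustrative names; the specific product distributions below are placeholders, not the hard instance itself):

```python
import random

def sample_pointer_net(product_dists, rng=random):
    """Sample from the 'pointer' construction: d independent fair bits
    select one of 2^d product distributions for the remaining coordinates."""
    k = len(product_dists)
    d = k.bit_length() - 1
    assert 1 << d == k, "need exactly 2^d product distributions"
    idx_bits = [rng.randrange(2) for _ in range(d)]
    idx = 0
    for b in idx_bits:                 # the pointer, encoded in binary
        idx = 2 * idx + b
    tail = [1 if rng.random() < pi else 0 for pi in product_dists[idx]]
    return idx_bits + tail

# d = 1: one fair bit chooses between two product distributions on 3 bits.
sample = sample_pointer_net([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]],
                            rng=random.Random(1))
```

The result is the evenly weighted disjoint mixture described above: the first $d$ coordinates are uniform and the rest follow the selected product distribution.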
\paragraph{Testing Identity and Closeness of Unknown Structure Bayes Nets.}
As we show in~\cref{sec:identity-uknown,sec:closeness-unknown}, this situation changes substantially when we do not know the
underlying structure of the nets involved. In particular, we show that
even for Bayes nets of degree $1$, uniformity testing requires
$\Omega(n/\epsilon^2)$ samples.
The lower bound construction for this case is actually quite simple:
The adversarial distribution $P$ will be developed by taking
a {\em random matching} of the vertices and making each
matched pair of vertices randomly $1 \pm \epsilon/\sqrt{n}$ correlated. If the
matching were known by the algorithm, the testing procedure could
proceed by approximating these $n/2$ correlations. However, not knowing
the structure, our algorithm would be forced to consider all $\binom{n}{2}$
pairwise correlations, substantially increasing the amount of noise
involved. To actually prove this lower bound, we consider the
distribution $X$ obtained by taking $k$ samples from a randomly chosen $P$
and $Y$ from taking $k$ samples from the uniform distribution.
Roughly speaking, we wish to show that $\chi^2(X,Y)$ is approximately $1$.
This amounts to showing that for a randomly chosen pair of distributions
$P$ and $P'$ from this family, we have that
$\mathbb{E}[P^k(x)P'^k(x)]$ is approximately $1$.
Intuitively, we show that this expectation is only large
if $P$ and $P'$ share many edges in common. In
fact, this expectation can be computed exactly in terms of the lengths
of the cycles formed by the graph obtained by taking the union of the
edges from $P$ and $P'$. Noting that $P$ and $P'$ typically share only about
$1$ edge, this allows us to prove our desired lower bound.
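To make the construction concrete, it can be sketched in a few lines of code (an illustrative sketch in our own notation; the helper names `draw_hard_instance` and `sample_from` are hypothetical):

```python
import math
import random

def draw_hard_instance(n, rng):
    # Choose a uniformly random perfect matching of [n] and, for each
    # matched pair, an independent random sign for the correlation.
    perm = list(range(n))
    rng.shuffle(perm)
    pairs = [(perm[2 * i], perm[2 * i + 1]) for i in range(n // 2)]
    signs = [rng.choice([-1, 1]) for _ in pairs]
    return pairs, signs

def sample_from(pairs, signs, n, eps, rng):
    # Within each pair, the two bits agree with probability
    # 1/2 + s * eps / sqrt(n): each marginal stays uniform, while the
    # pair becomes (+/- eps/sqrt(n))-correlated.
    delta = eps / math.sqrt(n)
    x = [0] * n
    for (i, j), s in zip(pairs, signs):
        xi = rng.randrange(2)
        x[i] = xi
        x[j] = xi if rng.random() < 0.5 + s * delta else 1 - xi
    return x
```

A tester that knew the matching could simply estimate the $n/2$ pair correlations; without it, all $\binom{n}{2}$ pairs are candidates, which is the source of the hardness.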
However, the hardness of the situation described above is not generic
and can be avoided if the explicit distribution $Q$
satisfies some non-degeneracy assumptions.
Morally, a Bayes net $Q$ is non-degenerate if it is not close in
variational distance to any other Bayes net of no greater
complexity and non-equivalent underlying structure. For tree
structures, our condition is that for each node the two conditional probabilities
for that node (depending on the value of its parent) are far from each other.
If this is the case, even knowing approximately what the pairwise distributions of coordinates are
will suffice to determine the structure. One way to see this is the following: the analysis of the Chow-Liu algorithm~\cite{Chow68}
shows that the tree-structure for $P$ is the maximum spanning tree of the graph whose edge weights
are given by the shared information of the nodes involved. This tree will have the property that each edge, $e$,
has higher weight than any other edge connecting the two halves of the tree.
We show that our non-degeneracy assumption implies that this edge has higher weight by a noticeable margin,
and thus that it is possible to verify that we have the correct tree with only rough approximations
to the pairwise shared information of variables.
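The Chow--Liu step described above can be turned into a small prototype: estimate all pairwise mutual informations empirically and take a maximum spanning tree (an illustrative sketch with our own helper names, not the paper's algorithm; Kruskal's algorithm stands in for any maximum-spanning-tree routine):

```python
import math
from itertools import combinations

def pairwise_mi(samples, i, j):
    # Empirical mutual information (in bits) between binary coordinates i, j.
    n = len(samples)
    joint = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for x in samples:
        joint[(x[i], x[j])] += 1
    pi = [sum(joint[(a, b)] for b in (0, 1)) / n for a in (0, 1)]
    pj = [sum(joint[(a, b)] for a in (0, 1)) / n for b in (0, 1)]
    mi = 0.0
    for (a, b), c in joint.items():
        if c > 0:
            p = c / n
            mi += p * math.log2(p / (pi[a] * pj[b]))
    return mi

def chow_liu_tree(samples, n):
    # Maximum spanning tree (Kruskal with union-find) over
    # mutual-information edge weights.
    edges = sorted(((pairwise_mi(samples, i, j), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    root = list(range(n))
    def find(u):
        while root[u] != u:
            root[u] = root[root[u]]
            u = root[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            root[ri] = rj
            tree.append((i, j))
    return tree
```

As the text notes, under non-degeneracy the true tree edges win by a noticeable margin, so rough empirical estimates of the pairwise mutual informations already suffice.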
For Bayes nets of higher degree, the analysis is somewhat more difficult. We need a slightly more complicated notion of
non-degeneracy, essentially boiling down to a sizeable number of
not-approximately-conditionally-independent assumptions.
For example, a pair of nodes can be positively identified as having an edge between them in the underlying graph
if they are not conditionally independent upon any set of $d$ other nodes. By requiring that for each edge
the relevant coordinate variables are not close to being conditionally independent,
we can verify the identity of the edges of $\mathcal{S}$ with relatively few samples.
Unfortunately, this is not quite enough, as with higher degree Bayesian networks,
simply knowing the underlying undirected graph
is not sufficient to determine its structure.
We must also be able to correctly identify the so-called $\lor$-structures.
To do this, we will need to impose more not-close-to-conditionally-independent
assumptions that allow us to robustly determine these as well.
Assuming that $Q$ satisfies such a non-degeneracy condition, testing
identity to it is actually quite easy. First one verifies that the
distribution $P$ has all of its pairwise (or $(d+2)$-wise) probabilities close to the
corresponding probabilities for $Q$. By non-degeneracy, this will imply
that $P$ must have the same (or at least an equivalent) structure as $Q$. Once this has been
established, the testing algorithms for the known structure
can be employed.
\paragraph{Sample Complexity of Testing High-Degree Bayes Nets.}
One further direction of research is that of
understanding the dependence on degree of the sample complexity
of testing identity and closeness for degree-$d$ Bayes nets without additional assumptions.
For $d=1$, we showed that these problems can be as hard as learning the distribution.
For the general case, we give an algorithm with sample complexity
$2^{d/2}\mathrm{poly}(n,1/\epsilon)$ for identity testing (and $2^{2d/3}\mathrm{poly}(n,1/\epsilon)$ for closeness testing).
The conceptual message of this result is that, when the
degree increases, testing becomes easier than learning information-theoretically.
It is a plausible conjecture that the correct
answer for identity testing is $\Theta(2^{d/2}n/\epsilon^2)$
and for closeness testing is $\Theta(2^{2d/3}n/\epsilon^2)$.
We suspect that our lower bound techniques can be
generalized to match these quantities, but the constructions
will likely be substantially more intricate.
The basic idea of our $2^{d/2}\mathrm{poly}(n,1/\epsilon)$ sample upper bound for identity testing
is this: We enumerate over all possible structures for $P$, running a different tester for
each of them by comparing the relevant conditional probabilities.
Unfortunately, in this domain, our simple formula for the
KL-divergence between the two distributions will no longer hold.
However, we can show that using the old formula will be sufficient by
showing that if there are large discrepancies when computing the KL-divergence,
then there must be a large gap between the entropies $H(P)$ and $H(Q)$
in a particular direction.
As the gap cannot exist both ways, this
suffices for our purposes.
\subsection{Organization} \label{sec:struc}
This paper is organized as follows:
In \cref{sec:prelim}, we give the necessary definitions and tools we will require.
\cref{sec:product} gives our matching upper and lower bounds for identity and closeness testing of product distributions.
In~\cref{sec:identity-known} we study identity testing for Bayes nets with known structure:
We give an identity tester that works under a mild balancedness condition on the explicit Bayes net distribution,
and also show that the sample complexity of our algorithm is optimal, up to constant factors.
In~\cref{sec:identity-uknown}, we study identity testing for unknown structure Bayes nets:
We start by proving a sample complexity lower bound showing that, for the unknown structure regime,
uniformity testing is information-theoretically as hard as learning -- even for the case of trees.
We then show that this lower bound can be circumvented under a natural non-degeneracy condition
on the explicit Bayes net distribution. Specifically, we give an identity tester with
sub-learning sample complexity for all low-degree non-degenerate Bayes nets.
Our identity tester for unknown structure non-degenerate Bayes nets relies on a novel structure tester
that may be of interest in its own right. \cref{sec:closeness} studies the corresponding closeness testing problems
for both known and unknown structure Bayes nets.
Finally, in~\cref{sec:it:ub} we consider the case of high-degree Bayes nets and
obtain testers for identity and closeness of unknown-structure Bayes nets.
Our testers in this section have optimal (and sub-learning) sample complexity as a function of
the maximum in-degree $d$
and polynomial dependence on the dimension $n$.
\section{Preliminaries} \label{sec:prelim}
\new{
In this section, we record the basic
definitions and technical tools that will be used throughout this paper.
\vspace{-0.2cm}
\paragraph{Basic Notation and Definitions.}
The $L_1$-distance between two discrete probability distributions $P, Q$ supported on a set $A$ is defined as
$\normone{P-Q} = \sum_{x \in A} \abs{P(x)-Q(x)}$. Our arguments will make essential use
of related distance measures, specifically the KL-divergence, defined as
$\dkl{P}{Q} = \sum_{x \in A} P(x) \log \frac{P(x)}{Q(x)}$, and the Hellinger distance, defined as
$\hellinger{P}{Q} = (1/\sqrt{2}) \cdot \sqrt{ \sum_{x \in A} (\sqrt{P(x)} - \sqrt{Q(x)})^2}$.
We write $\log$ and $\ln$ for the binary and natural logarithms, respectively, and by $H(X)$ the (Shannon)
entropy of a discrete random variable $X$ (as well as, by extension, $H(P)$ for the entropy of a discrete distribution $P$).
We denote by $\mutualinfo{X}{Y}$ the mutual information between two random variables $X$ and $Y$,
defined as $\mutualinfo{X}{Y} = \sum_{x,y} \probaOf{(X,Y) = (x,y) } \log \frac{\probaOf{(X,Y) = (x,y)}}{\probaOf{X=x}\probaOf{Y=y}}$.
For a probability distribution $P$, we write $X\sim P$ to indicate that $X$ is distributed according to $P$.
For probability distributions $P, Q$, we will use $P\otimes Q$ to denote the product distribution with marginals $P$ and $Q$.
}
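For explicit discrete distributions, these three quantities are straightforward to compute (a minimal sketch with our own function names; following the paper's convention, logarithms are binary, and we adopt $0 \log 0 = 0$ for the KL-divergence):

```python
import math

def l1_distance(p, q):
    # ||P - Q||_1 = sum_x |P(x) - Q(x)|
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    # D_KL(P || Q) with binary logs and the convention 0 log 0 = 0;
    # assumes supp(P) is contained in supp(Q).
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p, q):
    # H(P, Q) = (1/sqrt(2)) * sqrt( sum_x (sqrt(P(x)) - sqrt(Q(x)))^2 )
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s / 2)
```

As a sanity check, Pinsker's inequality $\normone{P-Q}^2 \leq 2\dkl{P}{Q}$ (stated later in this section) can be verified numerically on any pair of distributions.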
\vspace{-0.2cm}
\paragraph{Identity and Closeness Testing.}
We now formally define the testing problems that we study.
\begin{definition}[Identity testing]
An \emph{\new{identity} testing algorithm of distributions belonging to a class $\mathcal{C}$}
is a randomized algorithm which satisfies the following. Given a parameter $0< \epsilon <1$
and the explicit description of a reference distribution $Q\in \mathcal{C}$, as well as
access to independent samples from an unknown distribution $P\in \mathcal{C}$,
the algorithm outputs either $\textsf{accept}$ or $\textsf{reject}$ such that the following holds:
\begin{itemize}
\item(Completeness) if $P=Q$, then the algorithm outputs $\textsf{accept}$ with probability at least $2/3$;
\item(Soundness) if $\normone{P-Q} \geq \epsilon$, then the algorithm outputs $\textsf{reject}$ with probability at least $2/3$.
\end{itemize}
\end{definition}
Note that by the above definition the algorithm is allowed to answer arbitrarily if neither the completeness nor the soundness cases hold.
The closeness testing problem is similar, except that now both $P,Q$ are unknown
and are only available through independent samples.
\begin{definition}[Closeness testing]
A \emph{\new{closeness} testing algorithm of distributions belonging to a class $\mathcal{C}$}
is a randomized algorithm which satisfies the following. Given a parameter $0< \epsilon <1$
and access to independent samples from two unknown distributions $P, Q\in \mathcal{C}$,
the algorithm outputs either $\textsf{accept}$ or $\textsf{reject}$ such that the following holds:
\begin{itemize}
\item(Completeness) if $P=Q$, then the algorithm outputs $\textsf{accept}$ with probability at least $2/3$;
\item(Soundness) if $\normone{P-Q} \geq \epsilon$, then the algorithm outputs $\textsf{reject}$ with probability at least $2/3$.
\end{itemize}
\end{definition}
Finally, we also consider a third related question, that of \emph{structure testing}:
\begin{definition}[Structure testing]
\new{Let $\mathcal{C}$ be a family of Bayes nets.}
A \emph{\new{structure} testing algorithm of Bayes nets belonging to $\mathcal{C}$}
is a randomized algorithm which satisfies the following. Given a parameter $0< \epsilon <1$
and the explicit description of a DAG $\mathcal{S}$, as well as access to independent samples
from an unknown $P\in \mathcal{C}$,
the algorithm outputs either $\textsf{accept}$ or $\textsf{reject}$ such that the following holds:
\begin{itemize}
\item(Completeness) if $P$ can be expressed as a Bayes net with structure $\mathcal{S}$,
then the algorithm outputs $\textsf{accept}$ with probability at least $2/3$;
\item(Soundness) if $\normone{P-Q} > \epsilon$ for every $Q\in\mathcal{C}$ with structure $\mathcal{S}$,
then the algorithm outputs $\textsf{reject}$ with probability at least $2/3$.
\end{itemize}
\end{definition}
In all cases the two relevant complexity measures are the \emph{sample complexity}, i.e., the number of samples drawn by the algorithm,
and the \emph{time complexity} of the algorithm. The gold standard is to achieve sample complexity
that is information-theoretically optimal
and time-complexity linear in the sample complexity.
\new{
In this work, the family $\mathcal{C}$ will correspond to the family of Bayes nets over $\{0,1\}^n$,
where we will impose an upper bound $d$ on the maximum in-degree of each node.
For $d=0$, i.e., when the underlying graph has no edges,
we obtain the family of product distributions over $\{0,1\}^n$.}
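To make the family $\mathcal{C}$ concrete: a Bayes net over $\{0,1\}^n$ can be sampled by traversing the nodes in topological order and flipping each coordinate according to its conditional probability table. A minimal sketch (the representation and names are ours):

```python
import random

def sample_bayes_net(parents, cpt, rng):
    """Draw one sample from a Bayes net over {0,1}^n.

    parents[i]: tuple of parent indices of node i (nodes in topological order).
    cpt[i]: maps each assignment of parents[i] to Pr[X_i = 1 | parent values].
    """
    x = []
    for i in range(len(parents)):
        pa = tuple(x[j] for j in parents[i])
        x.append(1 if rng.random() < cpt[i][pa] else 0)
    return x

# Degree d = 0 (no edges) recovers a product distribution,
# here with mean vector (0.3, 0.8):
product_parents = [(), ()]
product_cpt = [{(): 0.3}, {(): 0.8}]
```

The maximum in-degree $d$ bounds the length of each `parents[i]`, so each conditional probability table has at most $2^d$ rows.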
\paragraph{Relations between Distances.} We will require a number of
inequalities relating the $L_1$-distance, the KL-divergence, and the Hellinger distance between distributions,
which we will use extensively in our arguments.
The simple proofs are deferred to \cref{sec:prelim:distances:proofs}.
A binary product distribution is a distribution over $\{0, 1\}^n$ whose coordinates are independent.
Note that such a distribution is determined by its mean vector.
We have the following:
\begin{lemma}\label{lem:prod:kl}
Let $P, Q$ be binary product distributions with mean vectors $p, q \in (0, 1)^n.$
We have that
\begin{equation}\label{eq:prod:kl}
2\sum_{i=1}^n (p_i-q_i)^2 \leq \dkl{P}{Q} \leq \sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i(1-q_i)} \;.
\end{equation}
In particular, if there exists $\alpha > 0$ such that $q\in[\alpha,1-\alpha]^n$, we obtain
\begin{equation}\label{eq:prod:kl:balanced}
2\normtwo{p-q}^2 \leq \dkl{P}{Q} \leq \frac{1}{\alpha(1-\alpha)}\normtwo{p-q}^2 \;.
\end{equation}
\end{lemma}
\noindent Recall that for any pair of distributions $P, Q$,
Pinsker's inequality states that $\normone{P-Q}^2 \leq 2\dkl{P}{Q}$.
This directly implies the following:
\begin{corollary}\label{lem:prod:dtv:kl}
Let $P, Q$ be binary product distributions with mean vectors $p, q \in (0, 1)^n.$
We have that
\[
\normone{P-Q}^2 \leq 2\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i(1-q_i)} \;.
\]
\end{corollary}
\noindent The following lemma states an incomparable and symmetric upper bound on the $L_1$-distance,
as well as a lower bound.
\begin{lemma}\label{lem:prod:dtv:hellinger}
Let $P, Q$ be binary product distributions with mean vectors $p, q \in (0, 1)^n.$
Then it holds that
\[
\min\left( c, \normtwo{p-q}^4 \right) \leq \normone{P-Q}^2 \leq 8 \sum_{i=1}^n \frac{(p_i-q_i)^2}{(p_i+q_i)(2-p_i-q_i)} \;.
\]
for some absolute constant $c > 0$. (Moreover, one can take $c = 4(1-e^{-3/2}) \simeq 3.11$.)
\end{lemma}
\noindent While the above is specific to product distributions,
we will require analogous inequalities for Bayes nets.
We start with the following simple lemma:
\begin{lemma}\label{lemma:kl:bn}
Let $P$ and $Q$ be Bayes nets with the same dependency graph.
In terms of the conditional probability tables $p$ and $q$ of $P$ and $Q$, we have:
\[
2\sum_{k=1}^m \probaDistrOf{P}{ \Pi_k } (p_k-q_k)^2 \leq \dkl{P}{Q} \leq \sum_{k=1}^m \probaDistrOf{P}{ \Pi_k } \frac{(p_k-q_k)^2}{q_k(1-q_k)} \;.
\]
\end{lemma}
Finally, we state an alternative bound, expressed with respect to the Hellinger distance between two Bayes nets:
\begin{lemma}[{\cite[Lemma 4]{DiakonikolasKS16b}}]\label{lemma:hellinger:bn}
Let $P$ and $Q$ be Bayes nets with the same dependency graph.
In terms of the conditional probability tables $p$ and $q$ of $P$ and $Q$, we have:
\[
\hellinger{P}{Q}^2 \leq 2\sum_{k=1}^m \sqrt{\probaDistrOf{P}{ \Pi_k }\probaDistrOf{Q}{ \Pi_k }} \frac{(p_k-q_k)^2}{(p_k+q_k)(2-p_k-q_k)} \;.
\]
\end{lemma}
\section{Testing Identity and Closeness of Product Distributions} \label{sec:product}
\new{The structure of this section is as follows: In Section~\ref{ssec:product-upper}, we give an identity testing
algorithm for $n$-dimensional binary product distributions with sample complexity $O(\sqrt{n}/\epsilon^2)$.
In Section~\ref{ssec:product-lower}, we show that this sample bound is information-theoretically optimal.
Sections~\ref{sec:closeness:product:ub} and~\ref{sec:closeness:product:lb} contain
matching upper and lower bounds respectively for the task of closeness testing between product distributions.
}
\subsection{Identity Testing Algorithm} \label{ssec:product-upper}
In this section, we prove the following theorem:
\begin{restatable}{theorem}{identityproductub}\label{theo:identity:product:ub}
There exists a computationally efficient algorithm which, given an \new{explicit} product distribution $Q$ \new{(via its mean vector)},
and sample access to an unknown product distribution $P$ over $\{0,1\}^n$, has the following guarantees:
For any $\epsilon >0$, the algorithm takes $\bigO{\sqrt{n}/\epsilon^2}$ samples from $P$,
and distinguishes with probability $2/3$
between the cases that $P=Q$ versus $\normone{P-Q} > \epsilon$.
\end{restatable}
Let $Q=Q_1\otimes\cdots\otimes Q_n$ be a known product distribution over $\{0,1\}^n$ with \new{mean vector $q$},
and $P=P_1\otimes\cdots\otimes P_n$ be an unknown product distribution on $\{0,1\}^n$ \new{with unknown mean vector $p$}.
The goal is to distinguish, given independent samples from $P$, between $P=Q$, and $\normone{P-Q} > \epsilon$.
\new{Let $0 < \gamma < 1/2$. We say that a product distribution $P$ over $\{0,1\}^n$ is $\gamma$-balanced if its mean vector
$p$ satisfies $p_i \in [\gamma, 1-\gamma]$ for all $i \in [n]$.} To prove Theorem~\ref{theo:identity:product:ub},
we can assume without loss of generality that $P, Q$ are $\gamma_0$-balanced for $\gamma_0 \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\epsilon}{16n}$.
Indeed, given sample access to a product distribution $P$, we can simulate access to the $\gamma_0$-balanced product distribution $P'$
by re-randomizing independently each coordinate with probability $2\gamma_0$,
choosing it then to be uniform in $\{0,1\}$. The resulting product distribution $P'$ is $\gamma_0$-balanced,
and satisfies $\normone{P-P'} \leq n\cdot \gamma_0 \leq \frac{\epsilon}{4}$.
Therefore, to test the identity of a product distribution $P$ against a product distribution $Q$ with parameter $\epsilon$,
it is sufficient to test the identity of the $\gamma_0$-balanced product distributions $P', Q'$ (with parameter $\frac{\epsilon}{2}$).
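This re-randomization reduction is a one-liner in code (a sketch; the function name is ours):

```python
import random

def rebalance(sample, gamma0, rng):
    # Map a sample of P to a sample of the gamma0-balanced distribution P':
    # each coordinate is independently replaced, with probability 2*gamma0,
    # by a uniform bit.
    return [rng.randrange(2) if rng.random() < 2 * gamma0 else b
            for b in sample]
```

Each coordinate's mean becomes $(1-2\gamma_0)p_i + \gamma_0$, which indeed lies in $[\gamma_0, 1-\gamma_0]$.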
\paragraph{Preprocessing.} \new{We also note that by flipping the coordinates $i$ such that $q_i > 1/2$,
we can assume that $q_i \in [\gamma_0, 1/2]$ for all $i \in [n]$.}
This can be done without loss of generality, as $q$ is explicitly given.
For any $i$ such that $q_i>\frac{1}{2}$, we replace $q_i$ by $1-q_i$
and work with the corresponding distribution $Q^\prime$ instead.
By flipping the $i$-th bit of all samples we receive from $P$,
it only remains to test identity of the resulting distribution
$P^\prime$ to $Q^\prime$, as all distances are preserved.
\paragraph{Proof of Correctness.}
Let $m\geq \new{2716}\frac{\sqrt{n}}{\epsilon^2}$, and let $M_1,\dots,M_n$ be i.i.d. $\poisson{m}$ random variables.
We set $M=\max_{i\in[n]} M_i$ and note that $M \leq 2m$ with probability $1-e^{-\Omega(m)}$ (by a union bound).
We condition hereafter on $M \leq 2m$ (our tester will reject otherwise)
and take $M$ samples $X^{(1)},\dots, X^{(M)}$ drawn from $P$. We define the following statistic:
\[
W = \sum_{i=1}^n \frac{(W_i - mq_i)^2- W_i}{q_i(1-q_i)} \;,
\]
where we write $W_i \stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{j=1}^{M_i} X^{(j)}_i$ for all $i\in[n]$.
We note that the $W_i$'s are independent, as $P$ is a product distribution and the $M_i$'s are independent.
\new{The pseudocode for our algorithm is given in Figure~\ref{algo:identity:product}.}
Our identity tester is reminiscent of the ``chi-squared type'' testers that have been designed for the unstructured
univariate discrete setting~\cite{CDVV14, DKN:15, ADK15}.
\begin{figure}
\begin{framed}
\begin{description}
\item[Input] Error tolerance $\epsilon$, dimension $n$, balancedness parameter $\gamma \geq \new{\frac{\epsilon}{16n}}$, \new{mean vector}
$q = (q_1,\dots,q_n)\in [\gamma, 1/2]^n$ of an explicit product distribution $Q$ over $\{0,1\}^n$,
and sampling access to a product distribution $P$ over $\{0,1\}^n$.
\item[-] Set $\tau \gets \frac{1}{4} \epsilon^2$, $m \gets \left\lceil \frac{\new{2716}\sqrt{n}}{\epsilon^2}\right\rceil$.
\item[-] Draw $M_1,\dots,M_n\sim\poisson{m}$ independently, and let $M\gets \max_{i\in[n]} M_i$.
\item[If] $M > 2m$ set
$
W = \tau m^2
$
\item[Else] Take $M$ samples $X^{(1)},\dots,X^{(M)}$ from $P$, and define
\[
W = \sum_{i=1}^n \frac{(W_i - mq_i)^2- W_i}{q_i(1-q_i)}
\]
where $W_i \gets \sum_{j=1}^{M_i} X^{(j)}_i$ for $i\in[n]$.
\item[If] $W \geq \tau m^2$ return $\textsf{reject}$.
\item[Otherwise] return $\textsf{accept}$.
\end{description}
\end{framed}
\caption{Identity testing of an unknown product distribution $P$ against a given product distribution $Q$.}
\label{algo:identity:product}
\end{figure}
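The tester of Figure~\ref{algo:identity:product} can be prototyped as follows (an illustrative sketch, our own code: we approximate each $\poisson{m}$ draw by a normal, which is accurate for the large $m$ used here, and we simulate only the per-coordinate counts $W_i$ rather than full $n$-bit samples):

```python
import math
import random

def identity_test(q, sample_count, eps, rng):
    """q: mean vector of the reference product distribution, each q_i in (0, 1/2].
    sample_count(i, M): returns W_i, the number of 1's among M independent
    draws of coordinate i of the unknown product distribution P."""
    n = len(q)
    m = math.ceil(2716 * math.sqrt(n) / eps ** 2)
    tau = eps ** 2 / 4
    W = 0.0
    for i in range(n):
        # Poisson(m) ~ Normal(m, sqrt(m)) for large m (approximation).
        M_i = max(0, round(rng.gauss(m, math.sqrt(m))))
        if M_i > 2 * m:
            # Corresponds to the figure's "M > 2m" branch, which rejects.
            return "reject"
        W_i = sample_count(i, M_i)
        W += ((W_i - m * q[i]) ** 2 - W_i) / (q[i] * (1 - q[i]))
    return "reject" if W >= tau * m ** 2 else "accept"
```

Poissonization makes the counts $W_i$ (approximately) independent $\poisson{m p_i}$ variables, which is what the variance analysis below exploits.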
\new{
We start with a simple formula for the expected value of our statistic:}
\begin{lemma}
$\expect{W} = m^2 \sum_{i=1}^n\frac{(p_i-q_i)^2}{q_i(1-q_i)}$.
\end{lemma}
\begin{proof}
Since $W_i \sim \poisson{mp_i}$ for all $i$, we can write
\begin{align*}
\expect{(W_i - mq_i)^2} &= \expect{W_i^2}-2mq_i\expect{W_i} + m^2q_i^2
= mp_i + m^2(p_i - q_i)^2 \;,
\end{align*}
and therefore
$
\expect{W} = \sum_{i=1}^n \frac{\expect{(W_i-mq_i)^2} - \expect{W_i}}{q_i(1-q_i)}
= m^2\sum_{i=1}^n\frac{(p_i-q_i)^2}{q_i(1-q_i)}.
$
\end{proof}
\new{
As a corollary we obtain:}
\begin{claim}\label{claim:toy:2:distance:means}
If $P=Q$ then $\expect{W}=0$. Moreover, whenever $\normone{P-Q} > \epsilon$
we have $\expect{W} > \frac{1}{2}m^2\epsilon^2$.
\end{claim}
\begin{proof}
The first part is immediate from the expression of $\expect{W}$.
The second follows from~\cref{lem:prod:dtv:kl}, as $m^2\normone{P-Q}^2 \leq 2m^2\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i(1-q_i)} = 2\expect{W}$.
\end{proof}
\new{
We now proceed to bound from above the variance of our statistic. The completeness case is quite simple:
}
\begin{claim}\label{claim:toy:2:variance:completeness}
If $P=Q$, then $\mathop{\textnormal{Var}}\nolimits[W] \leq 8m^2n$.
\end{claim}
\begin{proof}
Suppose that $P=Q$, i.e., $p=q$.
From independence, we have that $\mathop{\textnormal{Var}}\nolimits[W] = \sum_{i=1}^n \frac{\mathop{\textnormal{Var}}\nolimits[(W_i-mq_i)^2 - W_i]}{q_i^2(1-q_i)^2}$.
Using the fact that $\expect{(W_i-mq_i)^2 - W_i} = 0$, we get
$\mathop{\textnormal{Var}}\nolimits[(W_i-mq_i)^2 - W_i] = \expect{((W_i-mq_i)^2 - W_i)^2} = 2 m^2 q_i^2$,
where the last equality follows from standard computations involving the moments of a Poisson random variable.
From there, recalling that $q_i \in (0,1/2]$ for all $i\in[n]$, we obtain
$\mathop{\textnormal{Var}}\nolimits[W] = 2m^2\sum_{i=1}^n \frac{1}{(1-q_i)^2} \leq 8m^2n.$
\end{proof}
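\noindent For the reader's convenience, we spell out the ``standard computations'' invoked above. Writing $\lambda = mq_i$, so that $W_i \sim \poisson{\lambda}$, and using the Poisson central moments $\expect{(W_i-\lambda)^2} = \expect{(W_i-\lambda)^3} = \lambda$ and $\expect{(W_i-\lambda)^4} = \lambda + 3\lambda^2$, we get
\begin{align*}
\expect{((W_i-\lambda)^2 - W_i)^2}
&= \expect{(W_i-\lambda)^4} - 2\expect{(W_i-\lambda)^2 W_i} + \expect{W_i^2} \\
&= (\lambda + 3\lambda^2) - 2\big(\expect{(W_i-\lambda)^3} + \lambda\expect{(W_i-\lambda)^2}\big) + (\lambda + \lambda^2) \\
&= (\lambda + 3\lambda^2) - 2(\lambda + \lambda^2) + (\lambda + \lambda^2) = 2\lambda^2 = 2m^2q_i^2 \;,
\end{align*}
where the second equality substitutes $W_i = (W_i-\lambda) + \lambda$ and $\expect{W_i^2} = \lambda + \lambda^2$.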
\new{For the soundness case, the following lemma bounds the variance of our statistic from above. We note that the upper bound depends
on the balancedness parameter $\gamma$.}
\begin{lemma}\label{claim:toy:2:variance:soundness}
We have that $\mathop{\textnormal{Var}}\nolimits[W] \leq 16nm^2 + \left(\frac{32}{\gamma} + 16\sqrt{2n}m\right) \expect{W} + \frac{32}{\sqrt{\gamma}}\expect{W}^{3/2}$.
\end{lemma}
\begin{proof}
For general $p,q$, we have that
\begin{align*}
\mathop{\textnormal{Var}}\nolimits[(W_i-mq_i)^2 - W_i] &= \expect{((W_i-mq_i)^2 - W_i)^2} - m^4(p_i-q_i)^4 = 2 m^2 p_i^2 + 4 m^3 p_i (p_i-q_i)^2 \;,
\end{align*}
where as before the last equality follows from standard computations involving the moments of a Poisson random variable.
This leads to
\begin{align*}
\mathop{\textnormal{Var}}\nolimits[W] = 2m^2\sum_{i=1}^n \frac{p_i^2}{q_i^2(1-q_i)^2} + 4m^3\sum_{i=1}^n \frac{p_i (p_i-q_i)^2}{q_i^2(1-q_i)^2}
\leq 8m^2\sum_{i=1}^n \frac{p_i^2}{q_i^2} + 16m^3\sum_{i=1}^n \frac{p_i (p_i-q_i)^2}{q_i^2}.
\end{align*}
\new{We handle the two terms separately, in a fashion similar to ~\cite[Lemma 2]{ADK15}. For the first term, we can write:}
\begin{align*}
\sum_{i=1}^n \frac{p_i^2}{q_i^2} &= \sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i^2} + \sum_{i=1}^n \frac{2p_iq_i-q_i^2}{q_i^2}
= \sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i^2} + \sum_{i=1}^n \frac{2q_i(p_i - q_i)+q_i^2}{q_i^2} \\
&= n+\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i^2} + \sum_{i=1}^n \frac{2(p_i - q_i)}{q_i} \\
&\operatorname*{\leq}_{\text{(AM-GM)}} n+\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i^2} + \sum_{i=1}^n \left( 1+\frac{(p_i - q_i)^2}{q_i^2} \right)
= 2n+2\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i^2} \\
&\leq 2n+\frac{2}{\gamma}\sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i}
\leq 2n+\frac{4}{m^2\gamma}\expect{W}.
\end{align*}
\new{We bound the second term from above as follows:}
\begin{align*}
\sum_{i=1}^n \frac{p_i (p_i-q_i)^2}{q_i^2} &\leq \sum_{i=1}^n \frac{p_i}{q_i}\cdot \frac{p_i (p_i-q_i)^2}{q_i} \\
&\leq \sqrt{ \sum_{i=1}^n \frac{p_i^2}{q_i^2} }\sqrt{ \sum_{i=1}^n \frac{(p_i-q_i)^4}{q_i^2} } \tag{by the Cauchy--Schwarz inequality} \\
&\leq \left( \sqrt{2n}+\frac{2}{m\sqrt{\gamma}}\sqrt{\expect{W}} \right){ \sum_{i=1}^n \frac{(p_i-q_i)^2}{q_i} } \tag{from the monotonicity of $\ell_p$-norms} \\
&= \frac{1}{m^2}\left( \sqrt{2n}+\frac{2}{m\sqrt{\gamma}}\sqrt{\expect{W}} \right)\cdot\expect{W}.
\end{align*}
Overall, we obtain
\begin{align*}
\mathop{\textnormal{Var}}\nolimits[W] &\leq 16nm^2 + \frac{32}{\gamma} \expect{W} + 16m \left( \sqrt{2n}+\frac{2}{m\sqrt{\gamma}}\sqrt{\expect{W}} \right)\cdot\expect{W} \\
&= 16nm^2 + \left(\frac{32}{\gamma} + 16\sqrt{2n}m\right) \expect{W} + \frac{32}{\sqrt{\gamma}}\expect{W}^{3/2}.
\end{align*}
\end{proof}
\noindent \new{We are now ready to prove correctness.}
\begin{lemma}\label{lemma:product:correctness}
Set $\tau \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\epsilon^2}{4}$.
Then we have the following:
\begin{itemize}
\item If $\normone{P-Q} = 0$, then $\probaOf{ W \geq \tau m^2 } \leq \frac{1}{3}$.
\item If $\normone{P-Q} > \epsilon$, then $\probaOf{ W < \tau m^2 } \leq \frac{1}{3}$.
\end{itemize}
\end{lemma}
\begin{proof}
We start with the soundness case, i.e., assuming $\normone{P-Q} > \epsilon$.
In this case, \cref{claim:toy:2:distance:means} implies $\expect{W} > 2\tau m^2$.
Since $\gamma \geq \frac{\epsilon}{16n}$ and for $m \geq \frac{16}{\epsilon}\sqrt{2n}$, Lemma~\ref{claim:toy:2:variance:soundness} implies that
$$\mathop{\textnormal{Var}}\nolimits[W] \leq 16nm^2 + 32\sqrt{2n}m\expect{W} + 32\cdot 4\sqrt{\frac{n}{\epsilon}}\expect{W}^{3/2}.$$
By Chebyshev's inequality, we have that
\begin{align}
\probaOf{ W < \tau m^2 } &\leq \probaOf{ \expect{W} - W > \frac{1}{2}\expect{W} }
\leq \frac{4\mathop{\textnormal{Var}}\nolimits[W]}{\expect{W}^2} \notag\\
&\leq \frac{64nm^2}{\expect{W}^2} + \frac{128\sqrt{2n}m}{\expect{W}^{\vphantom{1}}}
+ \frac{\new{4\cdot 128\sqrt{n/\epsilon}}}{\expect{W}^{1/2}} \notag\\
&\leq
\frac{4\cdot 64n}{m^2\epsilon^4}
+ \frac{2\cdot 128\sqrt{2n}}{m\epsilon^2}
+ \frac{\new{4}\sqrt{2}\cdot128\sqrt{n}}{m\epsilon^{\new{3/2}}}
\leq 128\left(\frac{2}{C^2} + \frac{\new{5}\sqrt{2}}{C} \right) \;, \notag
\end{align}
which is at most $1/3$ as long as \new{$C\geq 2716$, that is $m \geq 2716\frac{\sqrt{n}}{\epsilon^2}$.}
Turning to the completeness, we suppose $\normone{P-Q} = 0$. Then, by Chebyshev's inequality,
and~\cref{claim:toy:2:variance:completeness} we have that
\begin{align*}
\probaOf{ W \geq \tau m^2 } &= \probaOf{ W \geq \expect{W} + \tau m^2 }
\leq \frac{\mathop{\textnormal{Var}}\nolimits[W]}{\tau^2 m^4} \leq \frac{128n}{\epsilon^4 m^2} \;,
\end{align*}
which is no more than $1/3$ as long as $m \geq 8\sqrt{6}\frac{\sqrt{n}}{\epsilon^2}$.
\end{proof}
\begin{remark}\label{remark:tolerance:tradeoff}
\emph{
We observe that the aforementioned analysis -- specifically~\cref{claim:toy:2:distance:means} and~\cref{lemma:product:correctness} --
can be adapted to provide some tolerance guarantees in the completeness case; that is, it implies a tester that distinguishes
between $\normone{P-Q} \leq \epsilon'$ and $\normone{P-Q} > \epsilon$, where $\epsilon' = O(\epsilon^2)$.
This extension, however, requires the assumption that $Q$ be balanced: indeed,
the exact dependence between $\epsilon'$ and $\epsilon^2$ will depend on this balancedness parameter,
leading to a tradeoff between tolerance and balancedness.
Further, as shown in~\cref{ssec:product-lower:tolerance}, this tradeoff is in fact necessary,
as tolerant testing of arbitrary product distributions requires $\Omega(n/\log n)$ samples.}
\end{remark}
\subsection{Sample Complexity Lower Bound for Identity Testing} \label{ssec:product-lower}
\new{
In this section, we prove our matching information-theoretic lower bound for identity testing.
In Theorem~\ref{theo:lb:product:uniform}, we give a lower bound for uniformity testing
of a product distribution, while Theorem~\ref{theo:lb:product:identity:unbalanced} shows a quantitatively similar lower bound for identity testing
against the product distribution with mean vector $q = (1/n, \ldots, 1/n)$.}
\begin{restatable}{theorem}{uniformityproductlb} \label{theo:lb:product:uniform}
There exists an absolute constant $\epsilon_0 > 0$ such that, for any $0 < \epsilon \leq \epsilon_0$, the following holds:
Any algorithm that has sample access to an unknown product distribution $P$ over $\{0,1\}^n$
and distinguishes between the cases that $P=U$ and $\normone{P-U} > \epsilon$ \new{with probability $2/3$}
requires $\Omega(\sqrt{n}/\epsilon^2)$ samples.
\end{restatable}
\begin{proof}
We follow the information-theoretic framework of~\cite{DK:16} for proving distribution testing lower bounds, first defining two distributions ${\cal Y},{\cal N}$ over product distributions:
\begin{itemize}
\item ${\cal Y}$ is the distribution that puts probability mass $1$ on the uniform distribution, $U=\bernoulli{1/2}^{\otimes n}$;
\item ${\cal N}$ is the uniform distribution over the set
\[
\setOfSuchThat{ \bigotimes_{j=1}^n \bernoulli{\frac{1}{2} + (-1)^{b_j} \frac{\epsilon}{\sqrt{n}}} }{ (b_1,\dots,b_n)\in\{0,1\}^n } \;.
\]
\end{itemize}
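Sampling from ${\cal N}$ amounts to flipping $n$ fair coins for the perturbation signs (an illustrative sketch, our own code):

```python
import math
import random

def draw_no_instance(n, eps, rng):
    # Draw the mean vector of a distribution from the 'no' family N:
    # coordinate j has mean 1/2 + (-1)^{b_j} * eps/sqrt(n) for a uniform
    # bit vector b in {0,1}^n.
    delta = eps / math.sqrt(n)
    return [0.5 + delta if rng.randrange(2) == 0 else 0.5 - delta
            for _ in range(n)]
```

Each coordinate individually is only $\epsilon/\sqrt{n}$-biased, yet, as the next lemma shows, the product of the $n$ biases puts every such distribution $\Omega(\epsilon)$-far from uniform.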
\begin{lemma}\label{lemma:noinstances:far}
${\cal N}$ is supported on distributions that are $\Omega(\epsilon)$-far from $U$.
\end{lemma}
\begin{proof}
By symmetry, it is sufficient to consider the distribution $P\stackrel{{\mathrm {\footnotesize def}}}{=} \bigotimes_{j=1}^n \bernoulli{\frac{1}{2} + \frac{\epsilon}{\sqrt{n}}}$.
We explicitly bound from below the expression of $\normone{P-U}$
\begin{align*}
\normone{P-U}
&= \sum_{x\in\{0,1\}^n} \bigabs{ \left( \frac{1}{2} + \frac{\epsilon}{\sqrt{n}} \right)^{\abs{x}}\left( \frac{1}{2} - \frac{\epsilon}{\sqrt{n}} \right)^{n-\abs{x}} - \frac{1}{2^n} } \\
&= \frac{1}{2^n}\sum_{k=0}^n \binom{n}{k}\bigabs{ \left( 1 + \frac{2\epsilon}{\sqrt{n}} \right)^{k}\left( 1 - \frac{2\epsilon}{\sqrt{n}} \right)^{n-k} - 1 } \\
&\geq \frac{1}{2^n}\sum_{k=\frac{n}{2}+\sqrt{n}}^{\frac{n}{2}+2\sqrt{n}} \binom{n}{k}\bigabs{ \left( 1 + \frac{2\epsilon}{\sqrt{n}} \right)^{k}\left( 1 - \frac{2\epsilon}{\sqrt{n}} \right)^{n-k} - 1 } \\
&\geq \frac{C}{\sqrt{n}}\sum_{k=\frac{n}{2}+\sqrt{n}}^{\frac{n}{2}+2\sqrt{n}} \bigabs{ \left( 1 + \frac{2\epsilon}{\sqrt{n}} \right)^{k}\left( 1 - \frac{2\epsilon}{\sqrt{n}} \right)^{n-k} - 1 } \;,
\end{align*}
where $C>0$ is an absolute constant. We bound from below each summand separately: fixing $k$, and writing $\ell = k-\frac{n}{2} \in [\sqrt{n},2\sqrt{n}]$,
\begin{align*}
\left( 1 + \frac{2\epsilon}{\sqrt{n}} \right)^{k}\left( 1 - \frac{2\epsilon}{\sqrt{n}} \right)^{n-k}
&= \left( 1 - \frac{4\epsilon^2}{n} \right)^{n/2}\left( \frac{1 + \frac{2\epsilon}{\sqrt{n}}}{1 - \frac{2\epsilon}{\sqrt{n}}}\right)^\ell \\
&\geq \left( 1 - \frac{4\epsilon^2}{n} \right)^{n/2}\left( \frac{1 + \frac{2\epsilon}{\sqrt{n}}}{1 - \frac{2\epsilon}{\sqrt{n}}}\right)^{\sqrt{n}}
\xrightarrow[n\to\infty]{} e^{4\epsilon-2\epsilon^2} \;,
\end{align*}
so that each summand is bounded from below by a quantity that converges (as $n\to \infty$) to $e^{4\epsilon-2\epsilon^2}-1 > 4\epsilon-2\epsilon^2 > 2\epsilon$,
implying that each is $\Omega(\epsilon)$.
\new{Combining the above gives}
\begin{align*}
\normone{P-U} &\geq \frac{C}{\sqrt{n}}\sum_{k=\frac{n}{2}+\sqrt{n}}^{\frac{n}{2}+2\sqrt{n}} \Omega(\epsilon) = \Omega(\epsilon)
\end{align*}
as claimed.
\end{proof}
We will make a further simplification, namely that instead of drawing $k$ samples from $P=P_1\otimes\dots\otimes P_n$, the algorithm is given $k_i$ samples from each $P_i$, where $k_1,\dots,k_n$ are independent $\poisson{k}$ random variables. This does not affect the lower bound, as this implies a lower bound on algorithms taking $k^\ast\stackrel{{\mathrm {\footnotesize def}}}{=}\max(k_1,\dots, k_n)$ samples from $P$ (where the $k_i$'s are as above), and $k^\ast \geq \frac{k}{2}$ with probability $1-2^{-\Omega(n)}$.
We now consider the following process: letting $X\sim\bernoulli{1/2}$ be a uniformly random bit, we choose a distribution $P$ over $\{0,1\}^n$ by
\begin{itemize}
\item Drawing $P\sim{\cal Y}$ if $X=0$, and;
\item Drawing $P\sim{\cal N}$ if $X=1$;
\item Drawing $k_1,\dots,k_n\sim \poisson{k}$, and returning $k_1$ samples from $P_1$, \dots, $k_n$ samples from $P_n$.
\end{itemize}
For $i\in[n]$, we let $N_i$ denote the number of $1$'s among the $k_i$ samples drawn from $P_i$, and write $N=(N_1,\dots, N_n)\in\mathbb{N}^n$. We will rely on the following standard fact, as stated in~\cite{DK:16}:
\begin{fact}\label{fact:main:fano}
Let $X$ be a uniform random bit and $Y$ a random variable taking value in some set $\mathcal{S}$.
If there exists a function $f\colon \mathcal{S}\to \{0,1\}$ such that $\probaOf{ f(Y)=X }\geq 0.51$, then $\mutualinfo{X}{Y} = \Omega(1)$.
\end{fact}
\begin{proof}
By Fano's inequality, letting $q=\probaOf{ f(Y) \neq X }$, we have $h(q) = h(q)+ q \log (\abs{\{0,1\}}-1) \geq \condentropy{X}{Y}$. This implies
$\mutualinfo{X}{Y} = \entropy{X} - \condentropy{X}{Y} = 1 - \condentropy{X}{Y} \geq 1-h(q) \geq 1 - h(0.49) \geq 2\cdot 10^{-4}$.
\end{proof}
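The closing numeric inequality can be verified directly; a short check of $1-h(0.49)$, with $h$ the binary entropy in bits (not part of the proof):

```python
import math

def h(q):
    """Binary entropy (in bits) of a Bernoulli(q) random variable."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

gap = 1 - h(0.49)
print(gap)  # roughly 2.9e-4, indeed at least 2e-4
```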
The next step is then to bound $\mutualinfo{X}{N}$ from above, in order to conclude that it will be $o(1)$ unless $k$ is taken big enough, and then invoke~\cref{fact:main:fano}. By the foregoing discussion and the relaxation on the $k_i$'s, we have that, conditioned on $X$, the $N_i$'s are independent (with $N_i \sim \poisson{kp_i}$). Recall now that if $X,Y_1,Y_2$ are random variables such that $Y_1$ and $Y_2$ are independent conditioned on $X$, then by the chain rule we have that
\begin{align*}
\condentropy{(Y_1,Y_2)}{X} = \condentropy{Y_1}{X} + \condentropy{Y_2}{X,Y_1} = \condentropy{Y_1}{X} + \condentropy{Y_2}{X} \;,
\end{align*}
where the second equality follows from conditional independence, and therefore
\begin{align*}
\mutualinfo{X}{(Y_1,Y_2)} &= \entropy{(Y_1,Y_2)} - \condentropy{(Y_1,Y_2)}{X} = \entropy{Y_1} + \condentropy{Y_2}{Y_1} - (\condentropy{Y_1}{X} + \condentropy{Y_2}{X}) \\
&\leq \entropy{Y_1} + \entropy{Y_2} - (\condentropy{Y_1}{X} + \condentropy{Y_2}{X}) = (\entropy{Y_1}-\condentropy{Y_1}{X}) + (\entropy{Y_2}-\condentropy{Y_2}{X})\\
&= \mutualinfo{X}{Y_1}+\mutualinfo{X}{Y_2}.
\end{align*}
This implies that
\begin{equation}\label{eq:bound:mutual:info:sum}
\mutualinfo{X}{N} \leq \sum_{i=1}^n \mutualinfo{X}{N_i} \;,
\end{equation}
so that it suffices to bound each $\mutualinfo{X}{N_i}$ separately.
\begin{lemma}\label{lemma:lb:key}
Fix any $i\in [n]$, and let $X, N_i$ be as above. Then $\mutualinfo{X}{N_i} = O( k^2\epsilon^4/n^2 )$.
\end{lemma}
\begin{proof}
By symmetry it suffices to consider only the case of $i=1$, so that we let $A=N_1$.\medskip
The first step is to bound from above $\mutualinfo{X}{A}$ by a more manageable quantity:
\begin{fact}\label{fact:mutualinfo:expansion}
We have that
\begin{equation}
\mutualinfo{X}{A} \leq \sum_{a=0}^\infty \probaOf{ A=a }\left(1-\frac{ \probaCond{A=a }{ X=1} }{ \probaCond{A=a }{ X=0} }\right)^2.
\end{equation}
\end{fact}
\noindent The proof of this fact is given in~\cref{sec:misc:proofs}.
Since $A\sim \poisson{kp_1}$ with $p_1 = 1/2$ if $X=0$ and uniformly $\frac{1}{2}\pm\frac{\epsilon}{\sqrt{n}}$ if $X=1$, a simple computation yields that
\begin{align*}
\probaCond{A=\ell }{ X=0 } &= e^{-k/2}\frac{(k/2)^\ell}{\ell!} \\
\probaCond{A=\ell }{ X=1 } &= \left(e^{-k/2}\frac{(k/2)^\ell}{\ell!}\right) \left(\frac{e^{-k\epsilon/\sqrt{n}}(1+2\frac{\epsilon}{\sqrt{n}})^\ell+e^{k\epsilon/\sqrt{n}}(1-2\frac{\epsilon}{\sqrt{n}})^\ell}{2} \right).
\end{align*}
Writing out $\varphi(\epsilon,\ell) = \frac{ \probaCond{ A=\ell }{ X=1 } }{\probaCond{A=\ell }{ X=0 }}$
as a function of $\epsilon/\sqrt{n}$, we see that it is even. Thus, expanding it as
a Taylor series in $\alpha\stackrel{{\mathrm {\footnotesize def}}}{=}\epsilon/\sqrt{n}$,
the odd degree terms will cancel. Moreover, we can write
\begin{align*}
\sum_{\ell=0}^\infty \probaOf{ A=\ell }\left(1-\varphi(\epsilon,\ell)\right)^2
&= \mathbb{E}_A\left[ \left(1-\varphi(\epsilon,A)\right)^2 \right] \\
&= \frac{1}{2}\mathbb{E}_{A\sim \poisson{k/2}}\left[ \left(1-\varphi(\epsilon,A)\right)^2 \right] \\
&+ \frac{1}{4}\mathbb{E}_{A\sim \poisson{k(1/2+\alpha)}}\left[ \left(1-\varphi(\epsilon,A)\right)^2 \right] \\
&+ \frac{1}{4}\mathbb{E}_{A\sim \poisson{k(1/2-\alpha)}}\left[ \left(1-\varphi(\epsilon,A)\right)^2 \right] \;.
\end{align*}
Now, we can rewrite
\begin{align*}
\left(1-\varphi(\epsilon,A)\right)^2 &= \left( 1 - \frac{e^{-k\alpha}(1+2\alpha)^A+e^{k\alpha}(1-2\alpha)^A}{2} \right)^2 \\
&= 1 - \left( e^{-k\alpha}(1+2\alpha)^A+e^{k\alpha}(1-2\alpha)^A \right) + \frac{e^{-2k\alpha}(1+2\alpha)^{2A}+ 2(1-4\alpha^2)^A +e^{2k\alpha}(1-2\alpha)^{2A}}{4} \;.
\end{align*}
For $b \in \{-1,0,1\}$, we have $\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ 1 \right] = 1$ (!), and (from the MGF of a Poisson distribution)
\begin{align*}
e^{-k\alpha}\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ (1+2\alpha)^A \right]
&= e^{-k\alpha} e^{k(1/2+b\alpha)\cdot 2\alpha} = e^{b\cdot 2\alpha^2 k} \\
e^{k\alpha}\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ (1-2\alpha)^A \right]
&= e^{k\alpha} e^{k(1/2+b\alpha)\cdot -2\alpha} = e^{-b\cdot 2\alpha^2 k} \;,
\end{align*}
as well as
\begin{align*}
e^{-2k\alpha}\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ (1+2\alpha)^{2A} \right]
&= e^{-2k\alpha} e^{k(1/2+b\alpha)\cdot (4\alpha+4\alpha^2)} = e^{2k\alpha^2(1+2b+2b\alpha)} \\
e^{2k\alpha}\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ (1-2\alpha)^{2A} \right]
&= e^{2k\alpha} e^{k(1/2+b\alpha)\cdot (-4\alpha+4\alpha^2)} = e^{2k\alpha^2(1-2b+2b\alpha)} \\
2\mathbb{E}_{A\sim \poisson{k(1/2+b\alpha)}}\left[ (1-4\alpha^2)^{A} \right]
&= 2 e^{k(1/2+b\alpha)\cdot -4\alpha^2} = 2e^{-2k\alpha^2 - 4kb\alpha^3} \;.
\end{align*}
Gathering the terms, we get
\begin{align*}
\mathbb{E}_A\left[ \left(1-\varphi(\epsilon,A)\right)^2 \right]
&= \frac{1}{4}\Big( 2\left( 1 - 2 + \frac{e^{2k\alpha^2}+e^{-2k\alpha^2}}{2} \right) \\
&+ \left( 1 - (e^{2k\alpha^2}+e^{-2k\alpha^2}) + \frac{ e^{2k\alpha^2(3+2\alpha)}+e^{2k\alpha^2(-1+2\alpha)} + 2e^{-2k\alpha^2 - 4k\alpha^3} }{4} \right) \\
&+ \left( 1 - (e^{-2k\alpha^2}+e^{2k\alpha^2}) + \frac{ e^{-2k\alpha^2(1+2\alpha)}+e^{2k\alpha^2(3-2\alpha)} + 2e^{-2k\alpha^2 + 4k\alpha^3} }{4} \right)
\Big)\\
&= \frac{1}{16}\Big( -4(e^{2k\alpha^2}+e^{-2k\alpha^2})
+ e^{2k\alpha^2(3+2\alpha)}+e^{2k\alpha^2(-1+2\alpha)} + 2e^{-2k\alpha^2 - 4k\alpha^3} \\
&+ e^{-2k\alpha^2(1+2\alpha)}+e^{2k\alpha^2(3-2\alpha)} + 2e^{-2k\alpha^2 + 4k\alpha^3}
\Big) \\
&= O( k^2 \alpha^4 ) \tag{Taylor series expansion in $\alpha$} \;,
\end{align*}
giving that indeed
\begin{align*}
\sum_{\ell=0}^\infty \probaOf{ A=\ell }\left(1-\varphi(\epsilon,\ell)\right)^2
&= O\left( \frac{\epsilon^4 k^2}{n^2} \right).
\end{align*}
This completes the proof.
\end{proof}
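As a numerical sanity check (not part of the proof), the final bracketed expression can be evaluated directly and compared against $2k^2\alpha^4$, the leading term of its Taylor expansion in $\alpha$:

```python
import math

def bracket(k, a):
    """The (1/16)(...) expression from the end of the proof, with alpha = a."""
    x = 2 * k * a * a   # shorthand: x = 2 k alpha^2
    return (-4 * (math.exp(x) + math.exp(-x))
            + math.exp(x * (3 + 2 * a)) + math.exp(x * (-1 + 2 * a))
            + 2 * math.exp(-x - 4 * k * a ** 3)
            + math.exp(-x * (1 + 2 * a)) + math.exp(x * (3 - 2 * a))
            + 2 * math.exp(-x + 4 * k * a ** 3)) / 16

k = 100
for a in (1e-2, 1e-3, 1e-4):
    print(a, bracket(k, a) / (2 * k ** 2 * a ** 4))  # ratio tends to 1
```

The constant $0$ and linear terms in the expansion cancel, and the ratio to $2k^2\alpha^4$ approaches $1$ as $\alpha\to 0$, confirming the $O(k^2\alpha^4)$ behavior.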
This lemma, along with~\cref{eq:bound:mutual:info:sum}, gives the desired result, that is
\begin{equation}
\mutualinfo{X}{N} \leq \sum_{i=1}^n O\!\left( \frac{\epsilon^4 k^2}{n^2} \right) = O\!\left(\frac{\epsilon^4 k^2}{n} \right) \;,
\end{equation}
which is $o(1)$ unless $k = \Omega(\sqrt{n}/{\epsilon^2})$.
\end{proof}
\begin{theorem}\label{theo:lb:product:identity:unbalanced}
There exists an absolute constant $\epsilon_0 > 0$ such that, for any $\epsilon \in(0,\epsilon_0)$,
distinguishing $P=P^\ast$ and $\normone{P-P^\ast} > \epsilon$ \new{with probability $2/3$}
requires $\Omega(\sqrt{n}/\epsilon^2)$ samples, where $P^\ast \stackrel{{\mathrm {\footnotesize def}}}{=} \bernoulli{1/n}^{\otimes n}$.
\end{theorem}
\begin{proof}
The proof follows the same outline as that of~\cref{theo:lb:product:uniform}, first defining two distributions over product distributions, ${\cal Y}$ and ${\cal N}$:
\begin{itemize}
\item ${\cal Y}$ is the distribution that puts probability mass $1$ on $P^\ast$;
\item ${\cal N}$ is the uniform distribution over the set
\[
\setOfSuchThat{ \bigotimes_{j=1}^n \bernoulli{\frac{1}{n}\left( 1 + (-1)^{b_j}\epsilon\right) }}{ (b_1,\dots,b_n)\in\{0,1\}^n } \;.
\]
\end{itemize}
\begin{lemma}\label{lemma:noinstances:far:unbalanced}
With probability $1-2^{-\Omega(n)}$, ${\cal N}$ is supported on distributions that are $\Omega(\epsilon)$-far from $P^\ast$.
\end{lemma}
\begin{proof}
Using Hellinger distance as a proxy will only result in an $\Omega(\epsilon^2)$ lower bound on the distance,
so we compute it explicitly instead: in what follows, $e^{(j)}\in\{0,1\}^n$ denotes the basis vector with $e^{(j)}_i = \indic{\{i=j\}}$.
Fix any vector $b=(b_1,\dots, b_n)\in\{0,1\}^n$ such that $\abs{b}\in[n/3, 2n/3]$,
and let $P$ be the corresponding distribution from the support of ${\cal N}$.
\begin{align*}
\normone{P-P^\ast} &\geq \sum_{j=1}^n \abs{ P(e^{(j)}) - P^\ast(e^{(j)}) }
= \sum_{j=1}^n \bigabs{ \frac{1+(-1)^{b_j}\epsilon}{n}\prod_{i\neq j}\left(1-\frac{1+(-1)^{b_i}\epsilon}{n}\right) - \frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1} } \\
&= \frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1}\sum_{j=1}^n \bigabs{ ( 1+(-1)^{b_j}\epsilon )\prod_{i\neq j}\left(1-\frac{(-1)^{b_i}\epsilon}{n-1}\right) - 1 } \;.
\end{align*}
The product appearing in each summand can be bounded as follows:
\begin{align*}
\left( 1 - \frac{\epsilon}{n-1}\right)^{2n/3} \leq \prod_{i\neq j}\left(1-\frac{(-1)^{b_i}\epsilon}{n-1}\right) \leq \left( 1+ \frac{\epsilon}{n-1}\right)^{2n/3} \;,
\end{align*}
where both inequalities follow from our assumption on $\abs{b}$ (each of the two values of $b_i$ occurs at most $\frac{2n}{3}$ times). In turn, this gives that
\begin{itemize}
\item If $b_j = 0$,
\[
(1+(-1)^{b_j}\epsilon )\prod_{i\neq j}\left(1-\frac{(-1)^{b_i}\epsilon}{n-1}\right) - 1
\geq (1+\epsilon)\left( 1 - \frac{\epsilon}{n-1}\right)^{2n/3} - 1 = \Omega\left(\epsilon\right) \;.
\]
\item If $b_j = 1$,
\[
1 - (1+(-1)^{b_j}\epsilon )\prod_{i\neq j}\left(1-\frac{(-1)^{b_i}\epsilon}{n-1}\right)
\geq 1- (1-\epsilon)\left( 1 + \frac{\epsilon}{n-1}\right)^{2n/3} = \Omega\left(\epsilon\right) \;.
\]
\end{itemize}
Since $\frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1} = \frac{e^{-1}+o(1)}{n}$, we get $\normone{P-P^\ast} = \Omega(\epsilon)$.
The lemma now follows from observing that a uniformly random $b\in\{0,1\}^n$ satisfies $\abs{b}\in[n/3, 2n/3]$ with probability $1-2^{-\Omega(n)}$.
\end{proof}
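A quick numeric illustration of this bound (an illustrative sketch with arbitrary parameters): summing the discrepancy over the basis vectors $e^{(j)}$ alone already yields a constant fraction of $\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 30, 0.1
b = rng.integers(0, 2, size=n)              # a random sign pattern
p_star = np.full(n, 1.0 / n)                # marginals of P* = Bern(1/n)^n
p = (1 + (-1.0) ** b * eps) / n             # marginals of a P drawn from N

def mass_on_basis_vector(marginals, j):
    """Probability of e^{(j)} under the product Bernoulli(marginals)."""
    others = np.delete(1 - marginals, j)
    return marginals[j] * np.prod(others)

lower_bound = sum(abs(mass_on_basis_vector(p, j) - mass_on_basis_vector(p_star, j))
                  for j in range(n))
print(lower_bound / eps)   # a constant (roughly e^{-1}), so ||P - P*||_1 = Omega(eps)
```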
The only ingredient missing to conclude the proof is the analogue of~\cref{lemma:lb:key}:
\begin{lemma}\label{lemma:lb:key:unbalanced}
Suppose $\frac{k\epsilon^2}{n}\leq 1$. Fix any $i\in [n]$, and let $X, N_i$ be as above. Then $\mutualinfo{X}{N_i} = O( k^2\epsilon^4/n^2 )$.
\end{lemma}
\begin{proof}
The proof is similar to that of~\cite[Lemma 3.3]{DK:16}, replacing (their) $mn$ by (our) $n$.
For completeness, we provide an alternative proof in~\cref{sec:misc:proofs}.
\end{proof}
\subsection{Closeness Testing Algorithm}\label{sec:closeness:product:ub}
In this section, we prove the following theorem:
\begin{restatable}{theorem}{closenessproductub}\label{theo:closeness:product:ub}
There exists an efficient algorithm which, given sample access to two unknown
product distributions $P, Q$ over $\{0,1\}^n$, has the following guarantees.
For any $\epsilon\in(0,1)$, the algorithm takes
$
\bigO{ \max\left(\sqrt{n}/\epsilon^2, n^{3/4}/\new{\epsilon} \right) }
$
samples from $P$ and $Q$, and distinguishes with probability $2/3$
between (i)~$\normone{P-Q} = 0$ and (ii)~$\normone{P-Q} > \epsilon$.
\end{restatable}
The rest of this section is devoted to the proof of the above theorem.
Let $P,Q$ be two product distributions on $\{0,1\}^n$ with mean vectors $p,q\in[0,1]^n$.
For $S\subseteq [n]$, we denote by $P_S$ and $Q_S$ the product distributions on $\{0,1\}^{\abs{S}}$
obtained by restricting $P$ and $Q$ to the coordinates in $S$.
Similarly, we write $p_S, q_S\in[0,1]^{\abs{S}}$ for the vectors obtained by restricting $p,q$ to the coordinates in $S$,
so that $P_S$ has mean vector $p_S$.
\paragraph{High-level Idea.} The basic idea of the algorithm is to divide the coordinates into two bins $U,V$:
one containing the indices where both distributions have marginals very close to $0$
(specifically, at most $1/m$, where $m$ is our eventual sample complexity),
and one containing the remaining indices, on which at least one of the two distributions is roughly balanced.
Since $P$ and $Q$ can only be far from each other if at least one of $\normone{P_U-Q_U}$, $\normone{P_V-Q_V}$ is big,
we will test separately each case. Specifically, we will apply two different testers:
one ``$\chi^2$-based tester'' (with sample complexity $\bigO{\sqrt{n}/\epsilon^2}$) to the ``heavy bin'' $U$ --
which relies on the fact that the marginals of $P,Q$ on $U$ are balanced by construction --
and one ``$\lp[2]$-tester'' (with sample complexity $\bigO{{n}^{3/4}/\epsilon}$) to the ``light bin'' $V$ --
relying on the fact that $\normtwo{p_V}$, $\normtwo{q_V}$ are small.
\new{The pseudocode of our algorithm is given in Figure~\ref{algo:product:closeness}.}
\paragraph{Sample Complexity.} Hereafter, we let
\[
m \stackrel{{\mathrm {\footnotesize def}}}{=} C\max\left(\frac{\sqrt{n}}{\epsilon^2}, \frac{n^{3/4}}{\epsilon} \right) \;,
\]
for some absolute constant $C>0$ to be determined in the course of the analysis.
We let $M_1,\dots,M_n$ and $M'_1,\dots,M'_n$ be i.i.d. $\poisson{m}$ random variables,
and set $M=\max_{i\in[n]} M_i$ and $M'=\max_{i\in[n]} M'_i$; by a union bound, $M,M' \leq 2m$ with probability $1-e^{-\Omega(m)}$.
We will condition hereafter on the event that $M,M' \leq 2m$, and our tester will reject otherwise.
\noindent Without loss of generality, as in the previous sections,
we will assume that $\frac{\epsilon}{16n}\leq p_i,q_i \leq \frac{3}{4}$ for every $i\in[n]$.
Indeed, this can be ensured by the simple preprocessing step below.
\paragraph{Preprocessing.}
Using $O(\log n)$ samples from $P$ and $Q$, we can ensure without loss of generality that all $p_i,q_i$
are at most $3/4$ (with probability $9/10$).
Namely, we estimate every $p_i, q_i$ to an additive $1/64$, and proceed as follows:
\begin{itemize}
\item If the estimate of $q_i$ is not within an additive $\pm \frac{1}{32}$ of that of $p_i$, we output $\textsf{reject}$ and stop;
\item If the estimate of $p_i$ is more than $43/64$, mark $i$ as ``swapped'' and replace $X_i$ by $1-X_i$ (for $P$)
and $Y_i$ by $1-Y_i$ (for $Q$) in all future samples.
\end{itemize}
Assuming correctness of the estimates (which holds with probability at least $9/10$),
if we pass this step then $\abs{p_i - q_i} < \frac{1}{16}$ for all $i$.
Moreover, if $i$ was not swapped, then it means that we had $p_i \leq 43/64+1/64 < 3/4$,
and therefore $q_i < 43/64+1/64+1/16 = 3/4$.
Now, if we had $q_i > 3/4$, then $p_i > 3/4-1/16$ and the estimate of $p_i$ would be more than $3/4-1/16-1/64 = 43/64$.
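The preprocessing step above can be sketched as follows (an illustrative sketch only: the estimation from $O(\log n)$ samples is abstracted into empirical means, and the function names are ours):

```python
import numpy as np

def preprocess(samples_p, samples_q):
    """Sketch of the preprocessing step. samples_p, samples_q are 0/1 arrays
    of shape (num_samples, n). Returns (swap_mask, ok): ok=False means reject,
    and swap_mask marks coordinates to flip in all future samples."""
    p_hat = samples_p.mean(axis=0)   # additive-1/64 estimates of the p_i
    q_hat = samples_q.mean(axis=0)   # (estimation accuracy assumed, not enforced)
    # Reject if some pair of estimates differs by more than 1/32.
    if np.any(np.abs(p_hat - q_hat) > 1 / 32):
        return None, False
    # Mark coordinate i as "swapped" when the estimate of p_i exceeds 43/64.
    swap_mask = p_hat > 43 / 64
    return swap_mask, True

def apply_swaps(samples, swap_mask):
    """Replace X_i by 1 - X_i on the swapped coordinates of future samples."""
    out = samples.copy()
    out[:, swap_mask] = 1 - out[:, swap_mask]
    return out
```

After this step, as argued above, all (effective) marginals are at most $3/4$.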
Our closeness tester is described in the following pseudocode.
\begin{figure}[H]
\begin{framed}
\begin{description}
\item[Input] Error tolerance $\epsilon \in (0,1)$, dimension $n$, and sampling access to two product distributions $P,Q$ over $\{0,1\}^n$.
\item[-] Preprocess $P,Q$ so that $p_i,q_{i} \leq \frac{3}{4}$ for all $i\in[n]$, and return $\textsf{reject}$ if a discrepancy appears.
\item[-] Set $m \stackrel{{\mathrm {\footnotesize def}}}{=} C\max\left(\frac{\sqrt{n}}{\epsilon^2}, \frac{n^{3/4}}{\epsilon} \right)$.
\item[-] Define $M, M'$ as follows:
Draw $M_1,\dots,M_n$, $M'_1,\dots,M'_n$ i.i.d. $\poisson{m}$ random variables, and set $M=\max_{i\in[n]} M_i$, $M'=\max_{i\in[n]} M'_i$.
\item[-] Take $m$ samples from both $P$ and $Q$, and let $U',V'\subseteq[n]$ be respectively the set of coordinates $i$ whose value is $1$ in at least one of these samples (from either $P$ or $Q$), and its complement.
\item[If] $\max(M,M') > 2m$, return $\textsf{reject}$.
\item[-] Take $M$ (resp. $M'$) samples $X^{(1)},\dots, X^{(M)}$ from $P_{U'}$ (resp. $Y^{(1)},\dots, Y^{(M')}$ from $Q_{U'}$), and define
\[
W_{\rm heavy} = \sum_{i\in U'} \frac{ (W_i - V_i)^2- (W_i+V_i) }{W_i+V_i} \;,
\]
for $V_i,W_i$ defined as $W_i = \sum_{j=1}^{M_i} X^{(j)}_i$ and $V_i = \sum_{j=1}^{M'_i} Y^{(j)}_i$ for all $i\in U'$ (with the convention that a summand with $W_i+V_i=0$ equals $0$).
\item[If] $W_{\rm heavy} \geq \frac{m\epsilon^2}{12000}$ return $\textsf{reject}$.
\item[-] Take $M$ (resp. $M'$) samples $X^{'(1)},\dots, X^{'(M)}$ from $P_{V'}$ (resp. $Y^{'(1)},\dots, Y^{'(M')}$ from $Q_{V'}$), and define
\[
W_{\rm light} = \sum_{i\in V'} \left( (W'_i - V'_i)^2- (W'_i+V'_i) \right) \;,
\]
for $V'_i,W'_i$ defined as $W'_i = \sum_{j=1}^{M_i} X^{'(j)}_i$, $V'_i = \sum_{j=1}^{M'_i} Y^{'(j)}_i$ for all $i\in V'$.
\item[If] $W_{\rm light} \geq \frac{m^2\epsilon^2}{600n}$ return $\textsf{reject}$.
\item return $\textsf{accept}$.
\end{description}
\end{framed}
\caption{Closeness testing between two unknown product distributions $P,Q$ over $\{0,1\}^n$.}\label{algo:product:closeness}
\end{figure}
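For concreteness, the tester above can be sketched in a few lines (an illustrative sketch only: Poissonization of the per-coordinate sample sizes is omitted, preprocessing is assumed done, and the light-bin threshold carries an $m^2$ normalization since $W_{\rm light}$ is an unbiased estimator of $m^2\normtwo{p_{V'}-q_{V'}}^2$, as noted in the analysis of Case 2):

```python
import numpy as np

def closeness_test(X1, Y1, X2, Y2, eps):
    """X1, Y1: (m, n) 0/1 sample matrices from P and Q used to split the
    coordinates into bins; X2, Y2: fresh (m, n) samples for the statistics."""
    m, n = X1.shape
    heavy = (X1.sum(axis=0) + Y1.sum(axis=0)) > 0   # U': seen set to 1 at least once
    W = X2.sum(axis=0).astype(float)                # per-coordinate counts from P
    V = Y2.sum(axis=0).astype(float)                # per-coordinate counts from Q
    # chi^2-type statistic on the heavy bin (a summand with W_i + V_i = 0 is 0).
    d = W[heavy] + V[heavy]
    w_heavy = np.sum(np.divide((W[heavy] - V[heavy]) ** 2 - d, d,
                               out=np.zeros_like(d), where=d > 0))
    if w_heavy >= m * eps ** 2 / 12000:
        return "reject"
    # l2-type statistic on the light bin.
    light = ~heavy
    w_light = np.sum((W[light] - V[light]) ** 2 - (W[light] + V[light]))
    if w_light >= m ** 2 * eps ** 2 / (600 * n):
        return "reject"
    return "accept"
```

The thresholds are the ones from the figure; at toy sample sizes they are of course not meaningful, but the structure of the two statistics is.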
\paragraph{Proof of Correctness.}
For $m$ as above, define $U,V\subseteq [n]$ by $V \stackrel{{\mathrm {\footnotesize def}}}{=} \setOfSuchThat{ i \in [n]}{ \max(p_i,q_i) < \frac{1}{m} }$ and $U\stackrel{{\mathrm {\footnotesize def}}}{=} [n]\setminus V$.
We start with the following simple claim:
\begin{claim}\label{claim:closeness:twobuckets:UV}
Assume $\normone{P-Q} > \epsilon$. Then, at least one of the following must hold:
(i) $\normtwo{p_V-q_V}^2 > \frac{\epsilon^2}{16n}$, or (ii) $\sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{64}$.
\end{claim}
\begin{proof}
Since $\epsilon < \normone{P-Q} \leq \normone{P_U-Q_U}+\normone{P_V-Q_V}$,
at least one of the two terms in the RHS must exceed $\frac{\epsilon}{2}$.
We now recall that, by~\cref{lem:prod:dtv:hellinger},
$\normone{P_U-Q_U}^2 \leq 8 \sum_{i\in U} \frac{(p_i-q_i)^2}{(p_i+q_i)(2-p_i-q_i)}$;
combined with the assumption that $p_i,q_i \leq \frac{3}{4}$ (so that $2-p_i-q_i \geq \frac{1}{2}$), this yields
$\normone{P_U-Q_U}^2 \leq 16 \sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i}$.
\noindent Using subadditivity and the Cauchy--Schwarz inequality, we also have
\[
\normone{P_V-Q_V} \leq \sum_{i\in V} \normone{P_{i}-Q_{i}}
= 2\sum_{i\in V} \abs{p_i-q_i} = 2\normone{p_V-q_V}
\leq 2\sqrt{\abs{V}} \normtwo{p_V-q_V}\leq 2\sqrt{n} \normtwo{p_V-q_V} \;,
\]
from where we derive that
$\normtwo{p_V-q_V}^2 \geq \frac{1}{4n}\normone{P_V-Q_V}^2 .$
This completes the proof.
\end{proof}
We now define $U',V'\subseteq [n]$ (our ``proxies'' for $U,V$) as follows:
Taking $m$ samples from both $P$ and $Q$,
we let $V'$ be the set of indices which were never seen set to one in any sample,
and $U'$ be its complement. We have the following:
\begin{claim}\label{claim:closeness:twobuckets:UVprime}
Assume $\normone{P-Q} > \epsilon$. Then, at least one of the following must hold:
(i) $\expect{\normtwo{p_{V'}-q_{V'}}^2} > \frac{\epsilon^2}{150n}$,
or (ii) $\expect{ \sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i} } > \frac{\epsilon^2}{128}$.
\end{claim}
\begin{proof}
By definition, any fixed $i$ belongs to $V'$ with probability $(1-p_i)^m(1-q_i)^m$, and so
\begin{align*}
\expect{\normtwo{p_{V'}-q_{V'}}^2} &= \sum_{i=1}^n (p_i-q_i)^2 \cdot (1-p_i)^m(1-q_i)^m
\geq \sum_{i\in V} (p_i-q_i)^2 \cdot (1-p_i)^m(1-q_i)^m \\
&\geq \left(1-\frac{1}{m}\right)^{2m} \sum_{i\in V} (p_i-q_i)^2 = \left(1-\frac{1}{m}\right)^{2m}\normtwo{p_V-q_V}^2
\geq \frac{1}{9} \normtwo{p_V-q_V}^2 \;,
\end{align*}
for $m\geq 10$. Similarly,
\begin{align*}
\expect{ \sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i} } &= \sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i} \cdot (1-(1-p_i)^m(1-q_i)^m)
\geq \left(1-\left(1-\frac{1}{m}\right)^{2m}\right) \sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i} \\
&\geq \frac{1}{2} \sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i} \;,
\end{align*}
and in both cases the proof follows by~\cref{claim:closeness:twobuckets:UV}.
\end{proof}
We will require the following implication:
\begin{claim}\label{claim:closeness:twobuckets:UVprime:whp}
Assume $\normone{P-Q} > \epsilon$. Then, at least one of the following must hold with probability at least $4/5$ (over the choice of $U',V'$):
(i) $\normtwo{p_{V'}-q_{V'}}^2 > \frac{\epsilon^2}{300n}$, or (ii) $\sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{2000}$.
\end{claim}
\begin{proof}
First, assume that $\normtwo{p_V-q_V}^2 > \frac{\epsilon^2}{16n}$,
and let $V''$ denote the random variable $V'\cap V$.
By (the proof of)~\cref{claim:closeness:twobuckets:UVprime}, we have
$\expect{\normtwo{p_{V''}-q_{V''}}^2} \geq \frac{1}{9} \normtwo{p_V-q_V}^2 > \frac{\epsilon^2}{150n}$.
Writing $m^2\normtwo{p_{V''}-q_{V''}}^2 = \sum_{i=1}^n m^2(p_i-q_i)^2\indic{i\in V''}$ (note that each summand is in $[0,1]$), we then get by a Chernoff bound that
\[
\probaOf{ \normtwo{p_{V''}-q_{V''}}^2 < \frac{\epsilon^2}{300n} } < e^{-\frac{1}{8}\frac{m^2 \epsilon^2}{150n}} < e^{-\frac{C}{1200}\frac{1}{\epsilon^2}} < \frac{1}{5} \;,
\]
using our setting of $m$ (for an appropriate choice of the constant $C>0$).
Suppose now that $\sum_{i\in U} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{64}$. We divide the proof in two cases.
\begin{itemize}
\item Case 1: there exists $i^\ast\in U$ such that $\frac{(p_{i^\ast}-q_{i^\ast})^2}{p_{i^\ast}+q_{i^\ast}} > \frac{\epsilon^2}{2000}$.
Since $(p_{i^\ast}-q_{i^\ast})^2 \leq (p_{i^\ast}+q_{i^\ast})^2$, this implies $p_{i^\ast}+q_{i^\ast} > \frac{\epsilon^2}{2000}$, and therefore $\probaOf{ \sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{2000} } \geq \probaOf{ i^\ast \in U' } \geq 1- e^{-m(p_{i^\ast}+q_{i^\ast})} \geq 1-e^{-\frac{m\epsilon^2}{2000}} > \frac{4}{5}$, the last inequality by our choice of $m$ (for $C$ large enough).
\item Case 2: $\frac{(p_i-q_i)^2}{p_i+q_i} \leq \frac{\epsilon^2}{2000}$ for all $i\in U$.
Then, writing $X_i \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{2000}{\epsilon^2}\frac{(p_i-q_i)^2}{p_i+q_i} \indic{i\in U'\cap U} \in [0,1]$ for all $i\in[n]$, we have
$\expect{\sum_{i=1}^n X_i} \geq \frac{2000}{128}$ by~\cref{claim:closeness:twobuckets:UVprime}, and a multiplicative Chernoff bound ensures that
\[
\probaOf{ \sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i} < \frac{\epsilon^2}{2000} }
\leq \probaOf{ \sum_{i=1}^n X_i < 1 } \leq e^{-\frac{2000}{8\cdot 128}} < \frac{1}{5} \;,
\]
\end{itemize}
concluding the proof.
\end{proof}
Finally, we will need to bound the expected $\lp[2]$-norm of $p_{V'}$ and $q_{V'}$.
\begin{claim}\label{claim:closeness:bound:2norm:vprime}
For $U',V'$ defined as above, we have $\expect{\normtwo{p_{V'}}^2},\expect{\normtwo{q_{V'}}^2} \leq \frac{n}{m^2}$.
\end{claim}
\begin{proof}
By symmetry, it is sufficient to bound $\expect{\normtwo{p_{V'}}^2}$. We have
\begin{align*}
\expect{\normtwo{p_{V'}}^2} &= \sum_{i=1}^n p_i^2 \cdot (1-p_i)^m(1-q_i)^m
\leq \sum_{i=1}^n p_i^2 \cdot (1-p_i)^m.
\end{align*}
Studying the auxiliary function $f\colon x\in[0,1]\mapsto x^2(1-x)^m$,
we see that it achieves a maximum at $\frac{2}{m+2}$.
We can then bound
\[
\expect{\normtwo{p_{V'}}^2} \leq n\cdot f\left(\frac{2}{m+2}\right) \sim_{m\to\infty} \frac{4n}{e^2 m^2} \;,
\]
and so $\expect{\normtwo{p_{V'}}^2} \leq \frac{n}{m^2}$ for $m$ large enough
(and this actually holds for any $m\geq 1$).
\end{proof}
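A two-line check of this bound (for one arbitrary value of $m$): the maximum of $f(x)=x^2(1-x)^m$, scaled by $m^2$, is about $4/e^2 \approx 0.54$, safely below $1$.

```python
m = 1000
f = lambda x: x * x * (1 - x) ** m
x_star = 2 / (m + 2)            # the maximizer computed in the proof
peak = f(x_star) * m ** 2
print(peak)                     # about 0.54, i.e. f(x*) <= 1/m^2
```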
\paragraph{Case 1: discrepancy in $U'$.}
We assume that Algorithm~\ref{algo:product:closeness} reached the line where $W_{\rm heavy}$ is computed,
and show the following:
\begin{lemma} \label{lem:case1-close}
If $P=Q$, then with probability at least $9/10$ we have $W_{\rm heavy} \leq \frac{m\epsilon^2}{12000}$.
Conversely, if $\sum_{i\in U\cap U'} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{2000}$,
then $W_{\rm heavy} \geq \frac{m\epsilon^2}{12000}$ with probability at least $9/10$.
\end{lemma}
\begin{proof}
Recall that the $W_i$'s are independent, as $P$ is a product distribution and the $M_i$'s are independent. Similarly for the $V_i$'s.
We have:
\begin{claim}\label{claim:product:closeness:expectation}
If $P=Q$, then $\expect{W_{\rm heavy}} = 0$.
Moreover, if $\sum_{i\in U\cap U'} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{2000}$,
then $\expect{W_{\rm heavy}} > \frac{m\epsilon^2}{6000}$.
\end{claim}
\begin{proof}
Note that $W_i \sim \poisson{mp_i}$ and $V_i \sim \poisson{mq_i}$ for all $i\in U'$. From there, we can compute (as in~\cite{CDVV14})
\begin{align*}
\expect{\frac{ (W_i - V_i)^2- (W_i+V_i) }{W_i+V_i}} &= m\frac{(p_i-q_i)^2}{p_i+q_i}\left( 1-\frac{1-e^{-m(p_i+q_i)}}{m(p_i+q_i)} \right) \;,
\end{align*}
by first conditioning on $W_i+V_i$.
This immediately gives the first part of the claim.
As for the second, observing that $1-\frac{1-e^{-x}}{x} \geq \frac{1}{3}\min(1,x)$ for $x\geq 0$,
and that $p_i+q_i \geq \frac{1}{m}$ for all $i\in U$, by definition we get
\begin{align*}
\expect{W_{\rm heavy}} &= m\sum_{i\in U'} \frac{(p_i-q_i)^2}{p_i+q_i}\left( 1-\frac{1-e^{-m(p_i+q_i)}}{m(p_i+q_i)} \right)
\geq \frac{1}{3} m\sum_{i\in U\cap U'} \frac{(p_i-q_i)^2}{p_i+q_i}
\geq \frac{m\epsilon^2}{6000} \;.
\end{align*}
\end{proof}
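The conditional-expectation identity used above is easy to check by simulation (an illustrative sketch with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, q, N = 50, 0.12, 0.05, 200_000

W = rng.poisson(m * p, size=N)
V = rng.poisson(m * q, size=N)
s = W + V
# The statistic, with a value of 0 when W + V = 0.
f = np.where(s > 0, ((W - V) ** 2 - s) / np.maximum(s, 1), 0.0)

lam = m * (p + q)
theory = m * (p - q) ** 2 / (p + q) * (1 - (1 - np.exp(-lam)) / lam)
print(f.mean(), theory)  # the two agree up to Monte Carlo error
```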
\noindent We can now bound the variance of our estimator:
\begin{claim}\label{claim:product:closeness:variance}
$\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}] \leq 2n + 5m\sum_{i\in U'} \frac{(p_i-q_i)^2}{p_i+q_i} \leq 7n + 15\expect{W_{\rm heavy}}$.
In particular, if $P=Q$ then $\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}] \leq 2n$.
\end{claim}
\begin{proof}
The proof of the first inequality is similar to that in \cite[Lemma 5]{CDVV14},
with a difference in the final bound due to the fact that the $p_i$'s and $q_i$'s
no longer sum to one.
For completeness, we give the proof below.
First, note that by independence of the $V_i$'s and $W_i$'s, we have
$\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}] = \sum_{i\in U'} \mathop{\textnormal{Var}}\nolimits\left[ \frac{ (W_i - V_i)^2- (W_i+V_i) }{W_i+V_i} \right]$,
so it is sufficient to bound each summand individually.
In order to do so, we split the variance calculation into two parts:
the variance conditioned on $W_i+V_i=j$,
and the component of the variance due to the variation in $j$.
Writing for convenience
$$
f(W_i,V_i)\stackrel{{\mathrm {\footnotesize def}}}{=} \frac{(W_i-V_i)^2-W_i-V_i}{W_i+V_i} \;,
$$
we have that
\[
\mathop{\textnormal{Var}}\nolimits[f(X,Y)] \leq \max_j\left( \mathop{\textnormal{Var}}\nolimits[f(X,Y) \mid X+Y=j]\right)+\mathop{\textnormal{Var}}\nolimits[\expect{f(X,Y) \mid X+Y=j}] \;.
\]
We now bound the first term.
Since $(W_i - V_i)^2 = (j - 2V_i)^2$, and $V_i$ is distributed as $\binomial{j}{\alpha}$
(where for conciseness we let $\alpha\stackrel{{\mathrm {\footnotesize def}}}{=}\frac{q_i}{p_i+q_i}$),
we can compute the variance of $(j -2V_i)^2$ from standard expressions for the moments
of the Binomial distribution as
$
\mathop{\textnormal{Var}}\nolimits[(j - 2V_i)^2] =16j(j -1)\alpha(1 -\alpha)\left( (j-\frac{3}{2})(1-2\alpha)^2+\frac{1}{2}\right).
$
Since $\alpha(1-\alpha) \leq \frac{1}{4}$ and $j-\frac{3}{2} < j-1 < j$,
this in turn is at most $j^2(2+4j(1-2\alpha)^2)$.
Because the denominator is $W_i + V_i$ which equals $j$,
we must divide this by $j^2$, make it $0$ when $j = 0$,
and take its expectation as $j$ is distributed as $\operatorname{Poi}(m(p_i + q_i))$.
This leads to
\[
\mathop{\textnormal{Var}}\nolimits[f(W_i,V_i) \mid W_i+V_i=j] \leq 2(1 - e^{-m(p_i+q_i)}) + 4m\frac{(p_i-q_i)^2}{p_i+q_i} \;.
\]
We now consider the second component of the variance, namely the contribution due to the variation
in the sum $W_i + V_i$. Since for fixed $j$, as noted above, we have $V_i$ distributed as $\binomial{j}{\alpha}$,
we have
\[
\mathbb{E}[(W_i - V_i)^2] = \mathbb{E}[j^2-4jV_i+4V_i^2]=j^2-4j^2\alpha+4(j\alpha-j\alpha^2+j^2\alpha^2)=j^2(1-2\alpha)^2+4j\alpha(1-\alpha) \;.
\]
We finally subtract $W_i+V_i=j$ and divide by $j$ to yield $(j - 1)(1 - 2 \alpha)^2$,
except with a value of $0$ when $j = 0$ by definition.
However, note that replacing the value at $j=0$ with $0$ can only lower the variance.
Since the sum $j=W_i+V_i$ is drawn from a Poisson distribution with parameter $m(p_i + q_i)$,
we thus have:
\[
\mathop{\textnormal{Var}}\nolimits \left[ \mathbb{E}[f(W_i,V_i)|W_i+V_i=j] \right] \leq m(p_i + q_i)(1 - 2\alpha)^4 \leq m(p_i + q_i)(1 - 2\alpha)^2 = m\frac{(p_i-q_i)^2}{p_i+q_i} \;.
\]
\noindent Summing the final expressions of the previous two paragraphs
yields a bound on the variance of $f(W_i,V_i)$ of
\[
2(1 - e^{-m(p_i+q_i)}) + 5m\frac{(p_i-q_i)^2}{p_i+q_i} \leq 2 + 5m\frac{(p_i-q_i)^2}{p_i+q_i} \;,
\]
as $1 - e^{-x}\leq 1$ for all $x\geq 0$. This shows that
\begin{align*}
\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}]
&\leq 2n + 5m\sum_{i\in U'} \frac{(p_i-q_i)^2}{p_i+q_i}
= 2n + 5m\sum_{i\in U'\cap U} \frac{(p_i-q_i)^2}{p_i+q_i}+ 5m\sum_{i\in U'\cap V} \frac{(p_i-q_i)^2}{p_i+q_i}\\
&\leq 2n + 15\expect{W_{\rm heavy}} + 5m\sum_{i\in U'\cap V} \frac{(p_i-q_i)^2}{p_i+q_i} \;,
\end{align*}
so it only remains to bound the last term.
But by definition, $i\in V$ implies $0\leq p_i,q_i < \frac{1}{m}$,
from which
\[
5m\sum_{i\in U'\cap V} \frac{(p_i-q_i)^2}{p_i+q_i}
\leq 5\sum_{i\in U'\cap V} \frac{\abs{p_i-q_i}}{p_i+q_i}
\leq 5\abs{U'\cap V} \leq 5n \;.
\]
This completes the proof.
\end{proof}
\noindent With these two claims in hand, we are ready to conclude the proof of \new{Lemma~\ref{lem:case1-close}}.
We start with the soundness case, i.e. assuming $\sum_{i\in U\cap U'} \frac{(p_i-q_i)^2}{p_i+q_i} > \frac{\epsilon^2}{2000}$.
Then, by Chebyshev's inequality and~\cref{claim:product:closeness:expectation} we have that
\begin{align}
\probaOf{ W_{\rm heavy} < \frac{m\epsilon^2}{12000} }
&\leq \probaOf{ \expect{W_{\rm heavy}} - W_{\rm heavy} > \frac{1}{2}\expect{W_{\rm heavy}} }
\leq \frac{4\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}]}{\expect{W_{\rm heavy}}^2} \notag\\
&\leq \frac{28n}{\expect{W_{\rm heavy}}^2} + \frac{60}{\expect{W_{\rm heavy}}} \tag{by \cref{claim:product:closeness:variance}} \\
&\leq
\frac{9\cdot 2000^2\cdot 28n}{m^2\epsilon^4} + \frac{60\cdot 6000}{\epsilon^2 m}
= \bigO{ \frac{n}{m^2\epsilon^4}+\frac{1}{\epsilon^2 m} } \;.
\end{align}
We want to bound this quantity by $1/10$,
for which it suffices to have $m > C\frac{\sqrt{n}}{\epsilon^2}$
for an appropriate choice of the absolute constant $C>0$
in our setting of $m$.
Turning to the completeness, assume that $\normone{P-Q} = 0$.
Then, by Chebyshev's inequality, and invoking~\cref{claim:product:closeness:variance} we have:
\begin{align*}
\probaOf{ W_{\rm heavy} \geq \frac{m\epsilon^2}{12000} } &= \probaOf{ W_{\rm heavy} \geq \expect{W_{\rm heavy}} + \frac{m\epsilon^2}{12000} }
\leq \frac{36\cdot 2000^2\mathop{\textnormal{Var}}\nolimits[W_{\rm heavy}]}{\epsilon^4 m^2} = \bigO{\frac{n}{\epsilon^4 m^2}} \;,
\end{align*}
which is no more than $1/10$ for the same choice of $m$.
\paragraph{Case 2: discrepancy in $V'$.}
We now assume that Algorithm~\ref{algo:product:closeness} reached the line where $W_{\rm light}$ is computed,
and show the following:
\begin{lemma}
If $P=Q$, then with probability at least $9/10$
we have $W_{\rm light} \leq \frac{m^2\epsilon^2}{600n}$.
Conversely, if $\normtwo{p_{V'}-q_{V'}}^2 > \frac{\epsilon^2}{300n}$,
then $W_{\rm light} \geq \frac{m^2\epsilon^2}{600n}$ with probability at least $9/10$.
\end{lemma}
\begin{proof}
We condition on $\normtwo{p_{V'}}^2,\normtwo{q_{V'}}^2 \leq \frac{20n}{m^2}$,
which by~\cref{claim:closeness:bound:2norm:vprime}, a union bound, and Markov's inequality
happens with probability at least $19/20$.
The analysis is similar to~\cite[Section 3]{CDVV14},
observing that the $(W'_i)_{i\in V'},(V'_i)_{i\in V'}$ are mutually independent Poisson random variables,
$W'_i$ (resp. $V'_i$) having mean $mp_i$ (resp. $mq_i$).
Namely, following their analysis, the statistic $W_{\rm light}$
is an unbiased estimator for $m^2\normtwo{p_{V'}-q_{V'}}^2$ with variance
\[\mathop{\textnormal{Var}}\nolimits[W_{\rm light}] \leq 8m^3\sqrt{b}\normtwo{p_{V'}-q_{V'}}^2+8m^2b \;,\]
where $b\stackrel{{\mathrm {\footnotesize def}}}{=} \frac{20n}{m^2}$ is our upper bound on $\normtwo{p_{V'}}^2,\normtwo{q_{V'}}^2$.
From there, setting $\epsilon'\stackrel{{\mathrm {\footnotesize def}}}{=}\frac{\epsilon}{\sqrt{n}}$ and applying Chebyshev's inequality,
we get that there exists an absolute constant $C'>0$
such that the completeness and soundness guarantees of the lemma hold with probability at least $19/20$,
provided that $m> C'\frac{\sqrt{b}}{{\epsilon'}^2}$, i.e.,
\[m > C' \frac{n}{\epsilon^2}\cdot\frac{\sqrt{20n}}{m} = \sqrt{20}C'\frac{n^{3/2}}{m\epsilon^2} \;.\]
Solving for $m$ shows that choosing $m\geq C\frac{n^{3/4}}{\epsilon}$
for some absolute constant $C>0$ is enough.
A union bound then allows us to conclude the proof of the lemma,
guaranteeing correctness with probability at least $1-\frac{1}{20}-\frac{1}{20}=\frac{9}{10}$.
\end{proof}
\end{proof}
\subsection{Sample Complexity Lower Bound for Closeness Testing}\label{sec:closeness:product:lb}
In this section, we prove a matching information-theoretic lower bound
for testing closeness of two unknown arbitrary product distributions.
\begin{restatable}{theorem}{closenessproductlb} \label{theo:lb:product:closeness}
There exists an absolute constant $\epsilon_0 > 0$ such that, for any $0 < \epsilon \leq \epsilon_0$, the following holds:
Any algorithm that has sample access to two unknown \emph{product} distributions $P,Q$ over $\{0,1\}^n$
and distinguishes between the cases that $P=Q$ and $\normone{P-Q} > \epsilon$ requires $\Omega(\max(\sqrt{n}/\epsilon^2,n^{3/4}/\epsilon))$ samples.
\end{restatable}
\begin{proof}
The first part of the lower bound, $\Omega(\sqrt{n}/\epsilon^2)$, follows from~\cref{theo:lb:product:uniform}; we focus here on the second term, $\Omega(n^{3/4}/\epsilon)$, \new{and consequently assume hereafter that $\sqrt{n}/\epsilon^2 < n^{3/4}/\epsilon$. Let $k\geq 1$ be fixed, and suppose we have a tester that takes $k=o(n^{3/4}/\epsilon)$ samples: we will show that it cannot be correct with probability bounded away from $1/2$.}
We will again follow the information-theoretic framework of~\cite{DK:16} for proving distribution testing lower bounds, first defining two distributions over pairs of product distributions ${\cal Y},{\cal N}$:
\begin{itemize}
\item ${\cal Y}$: for every $i\in[n]$, independently choose $(p_i,q_i)$ to be either $p_i=q_i = \frac{1}{\new{k}}$ with probability $1/2$, or $p_i=q_i = \frac{1}{n}$ otherwise; and set $P\stackrel{{\mathrm {\footnotesize def}}}{=} \bigotimes_{i=1}^n \bernoulli{p_i}$, $Q\stackrel{{\mathrm {\footnotesize def}}}{=} \bigotimes_{i=1}^n \bernoulli{q_i}$.
\item ${\cal N}$: for every $i\in[n]$, independently choose $(p_i,q_i)$ to be $p_i=q_i = \frac{1}{\new{k}}$ with probability $1/2$, and otherwise $(\frac{1+\epsilon}{n},\frac{1-\epsilon}{n})$ or $(\frac{1-\epsilon}{n},\frac{1+\epsilon}{n})$ uniformly at random; and set $P\stackrel{{\mathrm {\footnotesize def}}}{=} \bigotimes_{i=1}^n \bernoulli{p_i}$, $Q\stackrel{{\mathrm {\footnotesize def}}}{=} \bigotimes_{i=1}^n \bernoulli{q_i}$.
\end{itemize}
Note that in both ${\cal Y}$ and ${\cal N}$, with overwhelming probability the pairs $(P,Q)$ have roughly $n/2$ marginals with (equal) parameter $1/\new{k}$, and roughly $n/2$ marginals with parameter $\bigTheta{1}/n$.
\begin{lemma}\label{lemma:noinstances:far:again}
With probability $1-2^{-\Omega(n)}$, a uniformly chosen pair $(P,Q)\sim {\cal N}$ satisfies $\normone{P-Q} = \Omega(\epsilon)$.
\end{lemma}
\begin{proof}
Similar to that of~\cref{lemma:noinstances:far:unbalanced}.
\end{proof}
As before, we make a further simplification: namely, instead of drawing $k$ samples from $P=P_1\otimes\dots\otimes P_n$ and $Q=Q_1\otimes\dots\otimes Q_n$, we assume the algorithm is given $k_i$ samples from each $P_i$ (resp. $k'_i$ samples from each $Q_i$), where $k_1,\dots,k_n,k'_1,\dots,k'_n$ are independent $\poisson{k}$ random variables. We now consider the following process: letting $X\sim\bernoulli{1/2}$ be a uniformly random bit, we choose a pair of distributions $(P,Q)$ (both $P$ and $Q$ being probability distributions over $\{0,1\}^n$) by
\begin{itemize}
\item Drawing $(P,Q)\sim{\cal Y}$ if $X=0$, and;
\item Drawing $(P,Q)\sim{\cal N}$ if $X=1$;
\item Drawing $k_1,k'_1,\dots,k_n,k'_n\sim \poisson{k}$, and returning $k_i$ samples from each $P_i$ and $k'_i$ samples from each $Q_i$
\end{itemize}
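For intuition only (this is not part of the formal argument), the generative process above can be sketched in a few lines of Python; the helper names \texttt{poisson} and \texttt{draw\_counts}, and the parameter choices in the usage below, are ours.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below
    # e^{-lam}; fine for the small rates k*p_i <= 1 used here.
    L, j, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return j
        j += 1

def draw_counts(n, k, eps, rng):
    # One run of the process: draw X, then (P, Q) coordinate-wise from the
    # Y- or N-ensemble, then the Poissonized counts of 1's, N_i and M_i.
    X = rng.randrange(2)
    N, M = [], []
    for _ in range(n):
        if rng.random() < 0.5:
            p_i = q_i = 1.0 / k              # matching "heavy" marginal
        elif X == 0:
            p_i = q_i = 1.0 / n              # Y-ensemble: light marginals match too
        else:
            s = rng.choice([-1, 1])          # N-ensemble: opposite perturbations
            p_i, q_i = (1 + s * eps) / n, (1 - s * eps) / n
        # The number of 1's among Poi(k) draws from Bernoulli(p_i) is Poi(k * p_i).
        N.append(poisson(k * p_i, rng))
        M.append(poisson(k * q_i, rng))
    return X, N, M
```

Note how, thanks to Poissonization, each count is drawn independently given $X$, which is exactly the independence used in the mutual-information bound below.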
For $i\in[n]$, we let $N_i$ and $M_i$ denote respectively the number of $1$'s among the $k_i$ samples drawn from $P_i$ and the $k'_i$ samples drawn from $Q_i$, and write $N=(N_1,\dots, N_n)\in\mathbb{N}^n$ (and similarly $M\in \mathbb{N}^n$ for $Q$). The next step is to upper bound $\mutualinfo{X}{(N,M)}$, showing it to be $o(1)$ unless $k$ is taken big enough, in order to invoke~\cref{fact:main:fano}. By the foregoing discussion and the relaxation on the $k_i$'s, we have that, conditioned on $X$, the $N_i$'s (and $M_i$'s) are independent (with $N_i \sim \poisson{kp_i}$ and $M_i \sim \poisson{kq_i}$). This implies that
\begin{equation}\label{eq:bound:mutual:info:sum:again}
\mutualinfo{X}{(N,M)} \leq \sum_{i=1}^n \mutualinfo{X}{(N_i,M_i)}
\end{equation}
so that it suffices to bound each $\mutualinfo{X}{(N_i,M_i)}$ separately.
\begin{lemma}\label{lemma:lb:key:again}
Fix any $i\in [n]$, and let $X, N_i,M_i$ be as above. Then $\mutualinfo{X}{(N_i,M_i)} = O( k^4\epsilon^4/n^4 )$.
\end{lemma}
\begin{proof}
By symmetry it is enough to consider only the case of $i=1$, so that we let $(A,B)=(N_1,M_1)$.\medskip
Since $A\sim \poisson{kp_1}$ and $B\sim \poisson{kq_1}$ with $(p_1,q_1) = (1/\new{k}, 1/\new{k})$ or $(p_1,q_1) = (1/n, 1/n)$ uniformly if $X=0$, and
\[
(p_1,q_1) = \begin{cases}
(\frac{1}{\new{k}},\frac{1}{\new{k}}) & \text{ w.p. } \frac{1}{2}\\
(\frac{1+\epsilon}{n},\frac{1-\epsilon}{n}) & \text{ w.p. } \frac{1}{4} \\
(\frac{1-\epsilon}{n},\frac{1+\epsilon}{n}) & \text{ w.p. } \frac{1}{4}
\end{cases}
\]
if $X=1$, a computation similar to that of~\cite[Proposition 3.8]{DK:16} yields that, for any $i,j\in\mathbb{N}$,
\begin{align*}
\probaCond{(A,B)=(i,j)}{X=0} &= \frac{1}{2i!j!}\left( e^{-2k/\new{k}} \left(\frac{k}{\new{k}}\right)^{i+j} + e^{-2k/n} \left(\frac{k}{n}\right)^{i+j} \right)
= \frac{1}{2i!j!}\left( e^{-2} + e^{-2k/n} \left(\frac{k}{n}\right)^{i+j} \right) \\
\probaCond{(A,B)=(i,j)}{X=1} &= \frac{1}{2i!j!}\left( e^{-2k/\new{k}} \left(\frac{k}{\new{k}}\right)^{i+j} + e^{-2k/n} \left(\frac{k}{n}\right)^{i+j}\left( \frac{(1+\epsilon)^i (1-\epsilon)^j + (1-\epsilon)^i (1+\epsilon)^j }{2} \right) \right) \\
&= \frac{1}{2i!j!}\left( e^{-2} + e^{-2k/n} \left(\frac{k}{n}\right)^{i+j}\left( \frac{(1+\epsilon)^i (1-\epsilon)^j + (1-\epsilon)^i (1+\epsilon)^j }{2} \right) \right).
\end{align*}
Note in particular that for $0\leq i+j \leq 1$, this implies that $\probaCond{(A,B)=(i,j)}{X=0}=\probaCond{(A,B)=(i,j)}{X=1}$. From the above, we obtain
\begin{align*}
\mutualinfo{X}{(A,B)}
&= O(1)\cdot \sum_{i,j\geq 0} \frac{\left( \probaCond{(A,B)=(i,j)}{X=0}-\probaCond{(A,B)=(i,j)}{X=1} \right)^2}{\probaCond{(A,B)=(i,j)}{X=0}+\probaCond{(A,B)=(i,j)}{X=1} } \\
&= O(1)\cdot \sum_{i+j\geq 2} \frac{\left( \probaCond{(A,B)=(i,j)}{X=0}-\probaCond{(A,B)=(i,j)}{X=1} \right)^2}{\probaCond{(A,B)=(i,j)}{X=0}+\probaCond{(A,B)=(i,j)}{X=1} } \\
&= O(1)\cdot \sum_{i+j\geq 2} e^{-\frac{4k}{n}} \frac{\left(\frac{k}{n}\right)^{2(i+j)}}{2i!j!}\frac{(1-\frac{1}{2}( (1+\epsilon)^i (1-\epsilon)^j + (1-\epsilon)^i (1+\epsilon)^j ))^2}{2e^{-2}+o(1)} \\
&= \bigO{\left({k\epsilon}/{n}\right)^4}
\end{align*}
where the last two steps use $k=o(n)$. (Which is the case, as $\sqrt{n}/\epsilon^2 < n^{3/4}/\epsilon$ implies that $n^{3/4}/\epsilon < n$, and we assumed $k=o(n^{3/4}/\epsilon)$.)
\end{proof}
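As a numerical sanity check of the lemma (an illustration only, not part of the proof), one can truncate the sum over $i+j\geq 2$ from the displayed computation and verify the $\epsilon^4$ scaling; the function name \texttt{info\_proxy}, the truncation cutoff, and the parameter values are ours.

```python
import math

def info_proxy(k, n, eps, cutoff=25):
    # Truncated version of the sum over i + j >= 2 bounding I(X ; (A, B));
    # the i + j <= 1 terms vanish since the two conditional pmfs agree there.
    total = 0.0
    r = k / n
    for i in range(cutoff):
        for j in range(cutoff):
            if i + j < 2:
                continue
            base = 1.0 / (2 * math.factorial(i) * math.factorial(j))
            heavy = math.exp(-2.0)                   # 1/k-marginal branch
            light = math.exp(-2 * r) * r ** (i + j)  # Theta(1/n)-marginal branch
            mix = ((1 + eps) ** i * (1 - eps) ** j
                   + (1 - eps) ** i * (1 + eps) ** j) / 2
            p0 = base * (heavy + light)        # Pr[(A,B)=(i,j) | X = 0]
            p1 = base * (heavy + light * mix)  # Pr[(A,B)=(i,j) | X = 1]
            total += (p0 - p1) ** 2 / (p0 + p1)
    return total

# Halving eps should shrink the proxy by roughly 2**4 = 16, matching
# the O((k * eps / n)**4) bound from the lemma.
ratio = info_proxy(10, 1000, 0.1) / info_proxy(10, 1000, 0.05)
```

For $k \ll n$ the $i+j=2$ terms dominate, and there the numerator is proportional to $\epsilon^4$ exactly, so the ratio should be very close to $16$.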
This lemma, along with~\cref{eq:bound:mutual:info:sum:again}, immediately implies the result:
\begin{equation}
\mutualinfo{X}{(N,M)} \leq \sum_{i=1}^n \bigO{\left(\frac{k\epsilon}{n}\right)^4} = \bigO{\frac{k^4\epsilon^4}{n^3}}
\end{equation}
which is $o(1)$ unless $k = \Omega(n^{3/4}/{\epsilon})$.
\end{proof}
\subsection{Ruling Out Tolerant Testing Without Balancedness} \label{ssec:product-lower:tolerance}
\new{In this section, we show that any \emph{tolerant} identity testing algorithm for product distributions
must have sample complexity near-linear in $n$ if the explicitly given distribution
is very biased.}
\begin{theorem}\label{theo:tradeoff:balancedness:tolerance}
There exists an absolute constant $\epsilon_0 < 1$ such that the following holds.
Any algorithm that, given a parameter $\epsilon\in(0,\epsilon_0]$ and sample access to
product distributions $P,Q$ over $\{0,1\}^n$, distinguishes between $\normone{P-Q} < \epsilon/2$
and $\normone{P-Q} > \epsilon$ with probability at least $2/3$
requires $\bigOmega{n/\log n}$ samples.
Moreover, the lower bound still holds in the case where $Q$ is known, and provided as an explicit parameter.
\end{theorem}
\begin{proof}
The basic idea will be to reduce to the case of tolerant testing of two arbitrary distributions $p$ and $q$ over $[n]$.
In order to do this, we define the following function from distributions of one type to distributions of the other:
If $p$ is a distribution over $[n]$, define $F_{\delta}(p)$ to be the distribution over $\{0,1\}^n$
obtained by taking $\mathop{\textnormal{Poi}}\nolimits(\delta)$ samples from $p$ and returning the vector $x$
where $x_i = 1$ if and only if $i$ was one of these samples drawn.
Note that, because of the Poissonization, $F_\delta(p)$ is a product distribution.
We have the following simple claim:
\begin{claim}
For any $\delta\in(0,1]$ and distributions $p,q$ on $[n]$, $d_{\mathrm TV}(F_\delta(p),F_\delta(q)) = (\delta+O(\delta^2))d_{\mathrm TV}(p,q)$.
\end{claim}
\begin{proof}
In one direction, we can take correlated samples from $F_\delta(p)$ and $F_\delta(q)$
by sampling $a$ from $\mathop{\textnormal{Poi}}\nolimits(\delta)$ and then taking $a$ samples
from each of $p$ and $q$, using these to generate our samples from $F_\delta(p),F_\delta(q)$.
For fixed $a$, the variation distance between $F_\delta(p)$ and $F_\delta(q)$
conditioned on that value of $a$ is clearly at most $ad_{\mathrm TV}(p,q)$.
Therefore, $d_{\mathrm TV}(F_\delta(p),F_\delta(q))\leq \mathbb{E}[a]d_{\mathrm TV}(p,q) = \delta d_{\mathrm TV}(p,q).$
In the other direction, note that $F_\delta(p)$ and $F_\delta(q)$ each have probability $\delta+O(\delta^2)$ of returning a vector of weight $1$.
This is because $\mathop{\textnormal{Poi}}\nolimits(\delta)=1$ with probability $\delta e^{-\delta} = \delta+O(\delta^2)$,
while $\mathop{\textnormal{Poi}}\nolimits(\delta)>1$ with probability $O(\delta^2)$.
Let $G(p)$ and $G(q)$ denote the distributions $F_\delta(p)$ and $F_\delta(q)$
conditioned on returning a vector of weight $1$. By the above, we have that
$d_{\mathrm TV}(F_\delta(p),F_\delta(q)) \geq (\delta+O(\delta^2))d_{\mathrm TV}(G(p),G(q))$.
Letting $p_i$ (resp. $q_i$) be the probability that $p$ (resp. $q$) assigns to $i\in[n]$,
we get that for any fixed $i\in[n]$ the probability that $F_\delta(p)$ returns $e_i$ is
$$
(1-e^{-\delta p_i})\prod_{j\neq i} e^{-\delta p_j} = (e^{\delta p_i}-1) \prod_{j=1}^n e^{-\delta p_j} \;.
$$
Therefore $G(p)$ puts on $e_i$ probability proportional to $(e^{\delta p_i}-1) = (\delta+O(\delta^2))p_i$.
Similarly, the probability that $G(q)$ puts on $e_i$ is proportional to $(\delta+O(\delta^2))q_i$
(where in both cases, the constant of proportionality is $(\delta +O(\delta^2))^{-1}$). Therefore,
\begin{align*}
d_{\mathrm TV}(G(p),G(q)) & = \delta^{-1}(1+O(\delta))\sum_{i=1}^n |(\delta+O(\delta^2))p_i - (\delta+O(\delta^2))q_i| \\
& = \delta^{-1}(1+O(\delta))\sum_{i=1}^n ( \delta|p_i-q_i|+O(\delta^2)(p_i+q_i) )\\
&= \delta^{-1}(1+O(\delta))(\delta\, d_{\mathrm TV}(p,q)+O(\delta^2))\\
& = d_{\mathrm TV}(p,q)+O(\delta) \;.
\end{align*}
Thus, $d_{\mathrm TV}(F_\delta(p),F_\delta(q)) \geq (\delta+O(\delta^2))d_{\mathrm TV}(p,q).$ This completes the proof.
\end{proof}
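The map $F_\delta$ and the weight-one probability computed in the proof above can be sketched as follows (an illustration only; the helper names are ours, and \texttt{poisson} uses Knuth's method, adequate for $\delta\leq 1$).

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^{-lam}.
    L, j, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return j
        j += 1

def F_delta(p, delta, rng):
    # One sample from F_delta(p): take Poi(delta) draws from p over [n]
    # and return the indicator vector of the set of indices seen.
    x = [0] * len(p)
    for _ in range(poisson(delta, rng)):
        i = rng.choices(range(len(p)), weights=p)[0]
        x[i] = 1
    return x

def weight_one_prob(p, delta, i):
    # Pr[F_delta(p) = e_i] = (1 - e^{-delta p_i}) * prod_{j != i} e^{-delta p_j},
    # as computed in the proof above.
    return (1 - math.exp(-delta * p[i])) * math.prod(
        math.exp(-delta * p[j]) for j in range(len(p)) if j != i)
```

The identity $(1-e^{-\delta p_i})\prod_{j\neq i} e^{-\delta p_j} = (e^{\delta p_i}-1)\prod_{j} e^{-\delta p_j}$ used in the proof can be checked numerically against this helper.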
The above claim guarantees the existence of some constant $\delta_0\in(0,1]$ such that
$d_{\mathrm TV}(F_{\delta_0}(p),F_{\delta_0}(q)) \in [0.9{\delta_0} d_{\mathrm TV}(p,q),1.1{\delta_0} d_{\mathrm TV}(p,q)].$
However, it is known~\cite{ValiantValiant:11} that for any sufficiently small $\epsilon>0$
there exist distributions $p$ and $q$ over $[n]$ such that one must take at least $c\frac{n}{\log n}$ samples
(where $c>0$ is an absolute constant) to distinguish between
$d_{\mathrm TV}(p,q) \leq \epsilon/(2\cdot 0.9{\delta_0})$ and $d_{\mathrm TV}(p,q) \geq \epsilon/(1.1{\delta_0})$.
Given $s$ samples from $p$ and $q$, we can with high probability simulate $c's$ samples from
$P=F_{\delta_0}(p)$ and $Q= F_{{\delta_0}}(q)$ (where $c'=c'(\delta_0)>0$ is another absolute constant).
Therefore, we cannot distinguish between the cases $d_{\mathrm TV}(P,Q)\leq \epsilon/2$ and $d_{\mathrm TV}(P,Q)\geq \epsilon$ with fewer than $c'\cdot c \frac{n}{\log n}$ samples,
as doing so would enable us to distinguish between $p$ and $q$ with fewer than $c\frac{n}{\log n}$ samples -- yielding a contradiction.
Moreover, the above still holds when $q$ is explicitly known, specifically even when $q$ is taken to be the uniform distribution on $[n]$.
\end{proof}
\section{Testing Identity of Fixed Structure Bayes Nets} \label{sec:identity-known}
\new{
In this section, we prove our matching upper and lower bounds for testing the identity of Bayes nets with known graph structure.
In Section~\ref{ssec:identity-known-upper}, we describe an identity algorithm that uses $\bigO{2^{d/2}\sqrt{n}/\epsilon^2}$
samples, where $d$ is the maximum in-degree and $n$ the number of nodes (dimension).
In Section~\ref{ssec:identity-known-lower}, we show that this sample upper bound is tight, up to constant factors,
even for uniformity testing.
}
\subsection{Identity Testing Algorithm} \label{ssec:identity-known-upper}
In this section, we establish the upper bound part of~\cref{thm:informal-identity-closeness-known} for identity, namely testing identity to a fixed Bayes net given sample access to an unknown Bayes net with the same underlying structure. In order to state our results, we recall the definition of \emph{balancedness} of a Bayes net:
\begin{definition}
A Bayes net $P$ over $\{0,1\}^n$ with structure $\mathcal{S}$ is said to be \emph{$(c,C)$-balanced} if,
for all $k$, it is the case that (i) $p_k\in[c,1-c]$ and (ii) $\probaDistrOf{P}{\Pi_k} \geq C$.
\end{definition}
\noindent Roughly speaking, the above conditions ensure that the conditional probabilities of the Bayes net
are bounded away from $0$ and $1$, and that each parental configuration
occurs with some minimum probability.
With this definition in hand, we are ready to state and prove the main theorem of this section:
\begin{restatable}{theorem}{identityknowndegreedub}\label{theo:upper:knowndegreed:identity}
There exists a computationally efficient algorithm with the following guarantees.
Given as input (i) a DAG $\mathcal{S}$ with $n$ nodes and maximum in-degree $d$
and a known $(c,C)$-balanced Bayes net $Q$ with structure $\mathcal{S}$,
where $c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$;
(ii) a parameter $\epsilon > 0$, and (iii) sample access to an unknown Bayes net $P$ with structure $\mathcal{S}$,
the algorithm takes $\bigO{2^{d/2}\sqrt{n}/\epsilon^2}$ samples from $P$,
and distinguishes with probability at least $2/3$ between the cases $P=Q$ and $\normone{P-Q} > \epsilon$.
\end{restatable}
We choose $m\geq \alpha\frac{2^{d/2}\sqrt{n}}{\epsilon^2}$,
where $\alpha>0$ is an absolute constant to be determined in the course of the analysis.
Let $\mathcal{S}$ and $Q$ be as in the statement of the theorem, for $c\geq \beta\frac{\log n}{ \sqrt{n} } \geq \beta\frac{\log n}{m}$ and $C\geq \beta\frac{d+\log n}{m}$,
for an appropriate absolute constant $\beta>0$.
\new{Recall that $S$ denotes the set $\{(i,a): i \in [n], a\in \{0,1\}^{\new{|}\parent{i}\new{|}}\}$. By assumption,
we have that $|\parent{i}| \leq d$ for all $i \in [n]$.}
For each $(i,a)\in S$, corresponding to the parental configuration $\Pi_{i,a}=\{ X_{\parent{i}} = a\}$,
we define the value $N_{i,a} \stackrel{{\mathrm {\footnotesize def}}}{=} m \probaDistrOf{Q}{\Pi_{i,a}}/\sqrt{2}$. \new{Intuitively,
$N_{i,a}$ is equal to a small constant factor times} the number of samples satisfying $\Pi_{i,a}$ one
would expect to see among $m$ independent samples,
if the unknown distribution $P$ were equal to $Q$.
We will also use the notation $p_{i,a} \stackrel{{\mathrm {\footnotesize def}}}{=} \probaCond{X_i=1}{X_{\parent{i}}=a}$,
where $X\sim P$, and $q_{i,a} \stackrel{{\mathrm {\footnotesize def}}}{=} \probaCond{X_i=1}{X_{\parent{i}}=a}$, where $X\sim Q$.
Given $m$ independent samples $X^{(1)},\dots, X^{(m)}$ from a Bayes net $P$ with structure $\mathcal{S}$,
we define the estimators $Z_{i,a},Y_{i,a}$ for every $i\in[n]$, $a\in\{0,1\}^{\new{|\parent{i}|}}$ as follows.
For every $(i,a)$ such that the number of samples $X^{(j)}$ satisfying the configuration $\Pi_{i,a}$ is between $N_{i,a}$ and $2N_{i,a}$
(that is, neither too few nor too many), we look only at the first $N_{i,a}$ such samples $X^{(j_1)},\dots,X^{(j_{N_{i,a}})}$, and let
\begin{align*}
Z_{i,a} &\stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{\ell=1}^{N_{i,a}} \indicSet{ X^{(j_\ell)}_i = 1} \\
Y_{i,a} &\stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{\ell=1}^{N_{i,a}} \indicSet{ X^{(j_\ell)}_i = 0} \;.
\end{align*}
We note that $Z_{i,a}+Y_{i,a}=N_{i,a}$ by construction.
We then define the quantity
\[
W_{i,a} \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{((1-q_{i,a})Z_{i,a} - q_{i,a}Y_{i,a})^2 + (2q_{i,a}-1)Z_{i,a} - q_{i,a}^2(Z_{i,a}+Y_{i,a})}{N_{i,a}(N_{i,a}-1)}\indic{N_{i,a}>1} + (p_{i,a}-q_{i,a})^2\indic{N_{i,a}\leq 1}.
\]
On the other hand, for every $(i,a)$ such that the number of samples $X^{(j)}$ satisfying the configuration $\Pi_{i,a}$ is less than $N_{i,a}$ or more than $2N_{i,a}$, we continue as a thought experiment and keep on getting samples until we see $N_{i,a}$ samples with the right configuration, and act as above (although the actual algorithm will stop and output $\textsf{reject}$ whenever this happens).
From there, we finally consider the statistic $W$:
\begin{equation}
W \stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{i=1}^n \sum_{a\in\{0,1\}^{\new{|\parent{i}|}}} \frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q_{i,a}(1-q_{i,a})} W_{i,a}
\end{equation}
Observe that the algorithm will output $\textsf{reject}$
as soon as at least one parental configuration $\Pi_{i,a}$
was not seen enough times, or seen too many times, among the $m$ samples.
The pseudocode of our algorithm is given in the following figure.
\begin{figure}[h]
\begin{framed}
\begin{description}
\item[Input] Error tolerance $\epsilon \in (0,1)$, dimension $n$, description $\mathcal{S}$ of a DAG with maximum in-degree $d$
and of a $(c,C)$-balanced Bayes net $Q$ with structure $\mathcal{S}$ (where $c\geq \beta\frac{\log n}{m}$ and $C\geq \beta\frac{d+\log n}{m}$),
and sampling access to a distribution $P$ over $\{0,1\}^n$ with structure $\mathcal{S}$.
\item[-] Preprocess $Q$ so that $q_{i,a} \leq \frac{1}{2}$ for all $(i,a)\in[n]\times\{0,1\}^d$ (and apply the same transformation to all samples taken from $P$)
\item[-] Set $m \gets \lceil\alpha\frac{2^{d/2}\sqrt{n}}{\epsilon^2}\rceil$, and take $m$ samples $X^{(1)},\dots,X^{(m)}$ from $P$.
\item[-] Let $N_{i,a} \gets { m \probaDistrOf{Q}{\Pi_{i,a}} }/\sqrt{2}$ for all $(i,a)\in[n]\times\{0,1\}^d$.
\item[-] Define $Z_{i,a}, Y_{i,a}, W_{i,a}$
as above, and $W \stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{i=1}^n \sum_{a\in\{0,1\}^{\new{|\parent{i}|}}} \probaDistrOf{Q}{\Pi_{i,a}} \frac{W_{i,a}}{q_{i,a}(1-q_{i,a})}$.
\item \textit{(At this point, if any configuration $\Pi_{i,a}$ was satisfied by less than $N_{i,a}$ or more than $2N_{i,a}$ of the $m$ samples, then the algorithm
has rejected already.)}
\item[If] $W \geq \frac{\epsilon^2}{32}$ return $\textsf{reject}$.
\item[Otherwise] return $\textsf{accept}$.
\end{description}
\end{framed}
\caption{Testing identity against a known-structure balanced Bayes net.}\label{algo:bn:identity:known}
\end{figure}
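For intuition, the per-configuration estimator $W_{i,a}$ (in the branch $N_{i,a}>1$) and its unbiasedness can be sketched in Python; this is an illustration only, with helper names of our choosing, and it enumerates the binomial law exactly rather than sampling.

```python
from math import comb

def W_config(Z, N, q):
    # Per-configuration statistic for N > 1: with Y = N - Z,
    # ((1-q)Z - qY)^2 simplifies to (Z - qN)^2 since Z + Y = N.
    assert N > 1
    return ((Z - q * N) ** 2 + (2 * q - 1) * Z - q ** 2 * N) / (N * (N - 1))

def expected_W(p, q, N):
    # Exact expectation of W_config over Z ~ Bin(N, p);
    # per the expectation lemma below, this should equal (p - q)^2.
    return sum(comb(N, z) * p ** z * (1 - p) ** (N - z) * W_config(z, N, q)
               for z in range(N + 1))
```

In particular, when $p=q$ the exact expectation is $0$, matching the completeness case of the analysis.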
\paragraph{Preprocessing.} We will henceforth assume that $q_{i,a} \leq \frac{1}{2}$ for all $(i,a)\in[n]\times\{0,1\}^d$.
This can be done without loss of generality, as $Q$ is explicitly known.
For any $i$ such that $q_{i,a}>\frac{1}{2}$, we replace $q_{i,a}$ by $1-q_{i,a}$ and work with the corresponding distribution $Q^\prime$ instead.
By flipping the corresponding bit of all samples we receive from $P$, it only remains to test identity of the resulting distribution $P^\prime$ to $Q^\prime$, as
all distances are preserved.
\paragraph{First Observation.} If $P=Q$, then we want to argue that with probability at least $9/10$ none of the $W_{i,a}$'s will be such that too few samples satisfied $\Pi_{i,a}$ (as this will immediately cause rejection). To see why this is the case, observe that as long as
$m \probaDistrOf{Q}{\Pi_{i,a}} \geq \beta(d+\log n)$ (for an appropriate choice of absolute constant $\beta > 0$),
the number $m_{i,a}$ of samples satisfying $\Pi_{i,a}$ among the $m$ we draw will, by a Chernoff bound, be such that $m_{i,a} \geq m \probaDistrOf{Q}{\Pi_{i,a}}/\sqrt{2} = N_{i,a}$ with probability at least $\new{1 -} \frac{1}{2^dn}\cdot\frac{1}{10}$. A union bound over the at most $2^d n$ possible parental configurations
will yield the desired conclusion. And indeed, since $P=Q$ and $Q$ is $(c,C)$-balanced, we have $\probaDistrOf{Q}{\Pi_{i,a}} \geq C \geq \beta \frac{d+\log n}{m}$, the last inequality holding by our choice of $C$.
Therefore, it will be sufficient to continue our analysis, assuming that none of the $W_{i,a}$'s caused rejection
because of an insufficient number of samples satisfying $\Pi_{i,a}$. As we argued above,
this came at the cost of only $1/10$ of probability of success in the completeness case,
and can only increase the probability of rejection, i.e., success, in the soundness case.
\medskip
Moreover, in the analysis of the expectation and variance of $W$, we assume that for every $(i,a)\in S$,
we have $\probaDistrOf{P}{\Pi_{i,a}} \leq 4 \probaDistrOf{Q}{\Pi_{i,a}}$.
This is justified by the following two lemmas, which ensure respectively that if it is not the case,
then we will have rejected with high probability (this time because \emph{too many} samples satisfied $\Pi_{i,a}$);
and that we still have not rejected (with high probability) if $P=Q$.
\begin{lemma}\label{lemma:estimating:parental:configuration:soundness}
Let $P$ be as in the statement of~\cref{theo:upper:knowndegreed:identity},
and suppose there exists a parental configuration $(i^\ast,a^\ast)\in \new{S}$
such that $\probaDistrOf{P}{\Pi_{i^\ast,a^\ast}} > 4 \probaDistrOf{Q}{\Pi_{i^\ast,a^\ast}}$.
Then, with probability at least $9/10$, the number of samples $m_{i^\ast,a^\ast}$ satisfying $\Pi_{i^\ast,a^\ast}$
among the $m$ samples taken will be more than $2N_{i^\ast,a^\ast}$.
\end{lemma}
\begin{proof}
This follows easily from a Chernoff bound, as
$$\probaOf{ m_{i^\ast,a^\ast} < 2m\probaDistrOf{Q}{\Pi_{i^\ast,a^\ast}} } < \probaOf{ m_{i^\ast,a^\ast}
< \frac{1}{2}m\probaDistrOf{P}{\Pi_{i^\ast,a^\ast}} }
= \probaOf{ m_{i^\ast,a^\ast} < \frac{1}{2}\expect{m_{i^\ast,a^\ast}} } \;,$$ and $\expect{m_{i^\ast,a^\ast}} > \beta (d+\log n).$
\end{proof}
\begin{lemma}\label{lemma:estimating:parental:configuration:completeness}
Suppose $P=Q$. Then, with probability at least $9/10$, for every parental configuration
$(i,a)\in \new{S}$ the number of samples $m_{i,a}$ satisfying $\Pi_{i,a}$ among the $m$ samples taken will be at most $2N_{i,a}$.
\end{lemma}
\begin{proof}
This again follows from a Chernoff bound and a union bound over all $2^dn$ configurations, as we have $\probaOf{ m_{i,a} > 2m\probaDistrOf{Q}{\Pi_{i,a}} } = \probaOf{ m_{i,a} > 2\expect{m_{i,a}} }$, and $\expect{m_{i,a}} > \beta (d+\log n)$.
\end{proof}
\paragraph{Expectation and Variance Analysis.}
We start with a simple closed form formula for the expectation of our statistic:
\begin{lemma}\label{identity:lemma:bn:tree:expectation}
We have that
$\expect{W} = \sum_{i,a} \probaDistrOf{Q}{\Pi_{i,a}} \frac{ (p_{i,a} - q_{i,a})^2 }{q_{i,a}(1-q_{i,a})}$.
(In particular, if $P=Q$ then $\expect{W} = 0$.)
\end{lemma}
\begin{proof}
Fix any $(i,a)\in \new{S}$. Since $Z_{i,a}$ follows a $\binomial{N_{i,a} }{ p_{i,a}}$ distribution, we get
\begin{align*}
\expect{ W_{i,a} } &= \expect{ (Z_{i,a}-q_{i,a}N_{i,a})^2+(2q_{i,a}-1)Z_{i,a}-q_{i,a}^2 N_{i,a} }\frac{\indic{N_{i,a}>1}}{N_{i,a}(N_{i,a}-1)}
+ \expect{ (p_{i,a}-q_{i,a})^2 }\indic{N_{i,a}\leq 1} \\
&= (p_{i,a}-q_{i,a})^2 \indic{N_{i,a}>1}+(p_{i,a}-q_{i,a})^2\indic{N_{i,a}\leq 1}
= (p_{i,a}-q_{i,a})^2 \;,
\end{align*}
giving the result by linearity of expectation.
The last part follows from the fact that $p_{i,a}=q_{i,a}$ for all $(i,a)$ if $P=Q$.
\end{proof}
As a simple corollary, we obtain:
\begin{claim}\label{identity:lemma:bn:tree:expectation:soundness}
If $\normone{P-Q}\geq \epsilon$, then $\expect{W} \geq \frac{\epsilon^2}{16}$.
\end{claim}
\begin{proof}
The claim follows from~Pinsker's inequality and~\cref{lemma:kl:bn},
along with our assumption that $\probaDistrOf{P}{\Pi_{i,a}} \leq 4\cdot \probaDistrOf{Q}{\Pi_{i,a}}$ for every $(i,a)$:
\[ \normone{P-Q}^2 \leq 2\dkl{P}{Q} \leq 2\sum_{(i,a)} \probaDistrOf{P}{ \Pi_{i,a} } \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})}
\leq 8\sum_{(i,a)} \probaDistrOf{Q}{ \Pi_{i,a} } \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})} \;.
\]
\end{proof}
We now turn to bounding from above the variance of our statistic.
This will be done by controlling the covariances and variances of the summands individually,
and specifically showing that the former are zero.
We have the following:
\begin{claim}\label{identity:lemma:bn:tree:variances}
If $(i,a)\neq (j,b)$, then $\operatorname{Cov}(W_{i,a}, W_{j,b}) = 0$;
and the variance satisfies
\[
\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q_{i,a}(1-q_{i,a})} W_{i,a}\right] \leq \frac{4}{m} \probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a})}{q_{i,a}^2(1-q_{i,a})^2 }(p_{i,a}-q_{i,a})^2+\frac{4}{m^2} \frac{p_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1} \;.
\]
(Moreover, if $P=Q$ then $\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q_{i,a}(1-q_{i,a})} W_{i,a}\right] \leq \frac{4}{m^2}$.)
\end{claim}
\begin{proof} The key point is to observe that, because of the way we defined the $Z_{i,a}$'s and $Y_{i,a}$'s
(only considering the $N_{i,a}$ first samples satisfying the desired parental configuration),
we have that $W_{i,a}$ and $W_{j,b}$ are independent whenever $(i,a)\neq (j,b)$. This directly implies the first part of the claim, i.e.,
\begin{align*}
\new{\operatorname{Cov}(W_{i,a}, W_{j,b})} = \expect{\left( W_{i,a} - \expect{ W_{i,a} } \right)\left( W_{j,b} - \expect{ W_{j,b} } \right)} = 0 \;,
\end{align*}
when $(i,a)\neq (j,b)$.
We then consider $\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q_{i,a}(1-q_{i,a})} W_{i,a}\right]$. Note that
\begin{align*}
\expect{ W_{i,a}^2 }
&= \expect{ ((Z_{i,a}-q_{i,a}N_{i,a})^2+(2q_{i,a}-1)Z_{i,a}-q_{i,a}^2 N_{i,a})^2 }\frac{\indic{N_{i,a}>1}}{N_{i,a}^2(N_{i,a}-1)^2} + (p_{i,a}-q_{i,a})^4\indic{N_{i,a}\leq 1} \;,
\end{align*}
so that, writing $p,q,N,Z$ for $p_{i,a},q_{i,a},N_{i,a},Z_{i,a}$ respectively (for readability):
\begin{align*}
\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q(1-q)} W_{i,a}\right]
&=\left(\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q(1-q)}\right)^2\left( \expect{ W_{i,a}^2 } - \expect{W_{i,a}}^2 \right) \\
&= \left(\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q(1-q)}\right)^2\left( \expect{ W_{i,a}^2 }- (p-q)^4 \right) \\
&= \expect{ ((Z-qN)^2+(2q-1)Z-q^2 N)^2 - N^2(N-1)^2(p-q)^4 }\frac{\probaDistrOf{Q}{\Pi_{i,a}}^2 \indic{N>1}}{N^2(N-1)^2 q^2(1-q)^2 }\\
&= \frac{1}{m^2}\expect{ ((Z-qN)^2+(2q-1)Z-q^2 N)^2 - N^2(N-1)^2(p-q)^4 }\frac{\indic{N>1}}{(N-1)^2 q^2(1-q)^2 }\\
&= \frac{1}{m^2}\frac{2N}{N-1}\frac{p(1-p)}{q^2(1-q)^2 }\indic{N>1}\left((2N-3)p^2 + 2(N-1)q^2-4(N-1)pq + p\right) \;.
\end{align*}
If $p=q$, then this becomes $\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q(1-q)} W_{i,a}\right] = \frac{1}{m^2}\frac{2N}{N-1}\indic{N_{i,a}>1} \leq \frac{4}{m^2}$, providing the second part of the claim. In the general case, we can bound the variance as follows:
\begin{align*}
\mathop{\textnormal{Var}}\nolimits\left[\frac{\probaDistrOf{Q}{\Pi_{i,a}}}{q(1-q)} W_{i,a}\right]
&= \frac{1}{m^2}\frac{2N}{N-1}\frac{p(1-p)}{q^2(1-q)^2 }\indic{N>1}\left(2(N-1)(p^2+q^2-2pq)-p^2+p\right)\\
&= \frac{1}{m^2}\frac{2N}{N-1}\frac{p(1-p)}{q^2(1-q)^2 }\indic{N>1}\left(2(N-1)(p-q)^2+p(1-p)\right)\\
&= \frac{4N}{m^2}\frac{p(1-p)}{q^2(1-q)^2 }(p-q)^2\indic{N>1} + \frac{1}{m^2}\frac{2N}{N-1}\frac{p^2(1-p)^2}{q^2(1-q)^2 }\indic{N>1}\\
&\leq \frac{4N}{m^2}\frac{p(1-p)}{q^2(1-q)^2 }(p-q)^2 + \frac{4}{m^2}\frac{p^2(1-p)^2}{q^2(1-q)^2 }\indic{N>1}\\
&=\frac{4}{m} \probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a})}{q_{i,a}^2(1-q_{i,a})^2 }(p_{i,a}-q_{i,a})^2+\frac{4}{m^2}\frac{p_{i,a}^2(1-p_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1} \\
&\leq \frac{4}{m} \probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a})}{q_{i,a}^2(1-q_{i,a})^2 }(p_{i,a}-q_{i,a})^2+\frac{4}{m^2} \frac{p_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1}.
\end{align*}
This completes the proof.
\end{proof}
Using this claim, we now state the upper bound it allows us to obtain:
\begin{lemma}\label{identity:coro:bn:tree:variance}
We have that
$\mathop{\textnormal{Var}}\nolimits[W] \leq \new{24} \frac{2^d n}{m^2} + \new{26} \frac{\expect{W}}{cm}$.
(Moreover, if $P=Q$ we have $\mathop{\textnormal{Var}}\nolimits[W] \leq 4\frac{2^dn}{m^2}$.)
\end{lemma}
\begin{proof}
This will follow from~\cref{identity:lemma:bn:tree:variances},
which guarantees that if $P=Q$, $\mathop{\textnormal{Var}}\nolimits[W] \leq 2^dn\cdot \frac{4}{m^2} = 4\frac{2^d n}{m^2}$.
Moreover, in the general case,
\[
\mathop{\textnormal{Var}}\nolimits[W] \leq \frac{4}{m} \sum_{(i,a)}\probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a})}{q_{i,a}^2(1-q_{i,a})^2 }(p_{i,a}-q_{i,a})^2+ \frac{4}{m^2} \sum_{(i,a)}\frac{p_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1} \;.
\]
We deal with the two terms separately, as follows:
\begin{itemize}
\item For the second term, we will show that
\[
\frac{4}{m^2} \sum_{(i,a)}\frac{p_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1} \leq 24\frac{2^d n}{m^2} + \frac{24\expect{W}}{cm} \;.
\]
This follows from the following sequence of (in-)equalities:
\begin{align*}
\sum_{(i,a)}\frac{p_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2 }\indic{N_{i,a}>1}
&= \sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} + \sum_{(i,a)} \frac{2p_{i,a}q_{i,a}-q_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} \\
&= \sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} + \sum_{(i,a)} \frac{2q_{i,a}(p_{i,a} - q_{i,a})+q_{i,a}^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} \\
&\leq 4\cdot 2^d n+\sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} + \sum_{(i,a)} \frac{2(p_{i,a} - q_{i,a})}{q_{i,a}(1-q_{i,a})^2}\indic{N_{i,a}>1} \\
&\leq 4\cdot 2^d n+\sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} + 4\sum_{(i,a)} \frac{p_{i,a} - q_{i,a}}{q_{i,a}(1-q_{i,a})}\indic{N_{i,a}>1} \\
&\operatorname*{\leq}_{\text{(AM-GM)}} 4\cdot 2^d n +\sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} + 2\sum_{(i,a)} \left( 1+\frac{(p_{i,a} - q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2} \right)\indic{N_{i,a}>1} \\
&\leq 6\cdot 2^d n +3\sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}\indic{N_{i,a}>1} \\
&\leq 6\cdot 2^d n+\frac{6}{c}\sum_{(i,a)} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})}\indic{N_{i,a}>1} \\
&\leq 6\cdot 2^d n+\frac{6m}{c}\sum_{(i,a)} \frac{N_{i,a}}{m} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})}\indic{N_{i,a}>1} \\
&\leq 6\cdot 2^d n+\frac{6m}{c}\sum_{(i,a)} \probaDistrOf{Q}{\Pi_{i,a}} \frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})}\indic{N_{i,a}>1}
\leq 6\cdot 2^d n+\frac{6m}{c}\expect{W} \;,
\end{align*}
using our assumption that $q_{i,a} \leq \frac{1}{2}$ for all $(i,a)$.
\item For the first term, we will show that
$$\frac{4}{m} \sum_{(i,a)}\probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a})}{q_{i,a}^2(1-q_{i,a})^2 }(p_{i,a}-q_{i,a})^2 \leq \frac{2}{cm}\expect{W} \;.$$
This is shown as follows:
\begin{align*}
\sum_{(i,a)} \probaDistrOf{Q}{\Pi_{i,a}}\frac{p_{i,a}(1-p_{i,a}) (p_{i,a}-q_{i,a})^2}{q_{i,a}^2(1-q_{i,a})^2}
&\leq \frac{1}{4}\sum_{(i,a)} \frac{1}{q_{i,a}(1-q_{i,a})}\cdot \probaDistrOf{Q}{\Pi_{i,a}}\frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})} \\
&\leq \frac{1}{2c}\sum_{(i,a)}\probaDistrOf{Q}{\Pi_{i,a}}\frac{(p_{i,a}-q_{i,a})^2}{q_{i,a}(1-q_{i,a})} \\
&= \frac{1}{2c}\expect{W} \;.
\end{align*}
\end{itemize}
Combining the above, we conclude that $\mathop{\textnormal{Var}}\nolimits[W] \leq \new{24} \frac{2^d n}{m^2} + \new{26} \frac{\expect{W}}{cm}$.
\end{proof}
We now have all the tools we require to establish the completeness and soundness of the tester.
\begin{lemma}[Completeness]
If $P=Q$, then the algorithm outputs $\textsf{accept}$ with probability at least $2/3$.
\end{lemma}
\begin{proof}
We first note that, as per the foregoing discussion and~\cref{lemma:estimating:parental:configuration:completeness},
with probability at least $8/10$ we have between $N_{i,a}$ and $2N_{i,a}$ samples for every parental configuration $(i,a)\in S$,
and therefore have not outputted $\textsf{reject}$.
By Chebyshev's inequality and~\cref{identity:coro:bn:tree:variance},
\[
\probaOf{ W \geq \frac{\epsilon^2}{32} } \leq \new{4096}\frac{2^d n}{m^2\epsilon^4} \leq \frac{4}{30}
\]
for a suitable choice of $\alpha>0$. Therefore, by a union bound the algorithm will output $\textsf{reject}$ with probability at most $\frac{4}{30}+\frac{2}{10} = \frac{1}{3}$.
\end{proof}
\begin{lemma}[Soundness]
If $\normone{P-Q} \geq \epsilon$, then the algorithm outputs $\textsf{reject}$ with probability at least $2/3$.
\end{lemma}
\begin{proof}
As noted before, it is sufficient to show that, conditioned on having between $N_{i,a}$ and $2N_{i,a}$ samples for every parental configuration and on $\probaDistrOf{P}{\Pi_{i,a}} \leq 4 \probaDistrOf{Q}{\Pi_{i,a}}$ for all $(i,a)$, the algorithm rejects with probability at least $2/3+1/10 = 23/30$. Indeed, whenever too few or too many samples from a given parental configuration are seen the algorithm rejects automatically, and by~\cref{lemma:estimating:parental:configuration:soundness} this happens with probability at least $9/10$ if some parental configuration $(i^\ast,a^\ast)$ is such that $\probaDistrOf{P}{\Pi_{i^\ast,a^\ast}} > 4 \probaDistrOf{Q}{\Pi_{i^\ast,a^\ast}}$.
Conditioning on this case, by Chebyshev's inequality,
\[
\probaOf{ W \leq \frac{\epsilon^2}{32} } \leq \probaOf{ \abs{ W - \expect{W} } \geq \frac{1}{2}\expect{W} }
\leq \frac{4\mathop{\textnormal{Var}}\nolimits[W]}{ \expect{W}^2 }
\leq \new{96} \frac{2^d n}{m^2\expect{W}^2} + \new{104}\frac{1}{cm\expect{W}} \;,
\]
from~\cref{identity:coro:bn:tree:variance}. Since $\expect{W} \geq \frac{\epsilon^2}{16}$ by~\cref{identity:lemma:bn:tree:expectation:soundness}, we then get $\probaOf{ W \leq \frac{\epsilon^2}{32} } = \bigO{\frac{2^d n}{m^2\epsilon^4} + \frac{1}{cm\epsilon^2} } \leq \frac{17}{30}$, again for a suitable choice of $\alpha>0$ and $\beta>0$ (recalling that $c\geq \beta\frac{\log n}{ \sqrt{n} }$).
\end{proof}
\begin{remark} \label{rem:alph}
{\em We note that we can reduce the problem of testing degree-$d$
Bayes nets over alphabet $\Sigma$ to that of testing Bayes nets of
degree $(d+1) \lceil \log_2(|\Sigma|) \rceil -1$
over an alphabet of size $2$. First consider the case where $|\Sigma| = 2^b$.
Then it suffices to have $nb$ bits in $n$ clusters of size $b$.
Each cluster of $b$ bits will represent a
single variable in the initial model with each of the $2^b$ possibilities
denoting a single letter. Then each bit will need to potentially be
dependent on each other bit in its cluster and on each bit in each
cluster that its cluster is dependent on. Therefore, we need degree
$(d+1)b-1$. Note that this operation preserves balancedness.
Now if $|\Sigma|$ is not a power of $2$, we need to pad the alphabet.
The obvious way to do this is to create a set of unused letters
until the alphabet size is a power of $2$. Unfortunately, this creates
an unbalanced model. To create a balanced one, we proceed as follows:
we split a number of the letters in $\Sigma$ in two. So, instead of having
alphabet $a, b, c, \ldots$, we have $a_1,a_2,b_1,b_2,c,\ldots$.
We make it so that when a word would have an $a$ in a certain position,
we map this to a new word that has either $a_1$ or $a_2$ in that position,
each with equal probability. We note that this operation preserves $L_1$ distance,
and maintains the balancedness properties.
}
\end{remark}
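The in-degree count in the remark above can be checked mechanically. The following snippet (illustrative only; the function name is ours) counts, for the worst-case bit of a cluster, the $b-1$ other bits of its own cluster plus the $b$ bits of each of the $d$ parent clusters:

```python
# Illustrative sanity check of the degree bound in the remark: the worst-case
# bit of a cluster may depend on the b-1 other bits of its own cluster and on
# all b bits of each of the d parent clusters, for an in-degree of (d+1)*b - 1.
def binary_encoding_degree(d: int, b: int) -> int:
    own_cluster = b - 1       # other bits within the same cluster
    parent_clusters = d * b   # every bit of every one of the d parent clusters
    return own_cluster + parent_clusters

for d in range(6):
    for b in range(1, 6):
        assert binary_encoding_degree(d, b) == (d + 1) * b - 1
```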
\subsection{Sample Complexity Lower Bound} \label{ssec:identity-known-lower}
Here we prove a matching information-theoretic lower bound:
\begin{restatable}{theorem}{uniformityknownlb} \label{theo:lb:known:uniform}
There exists an absolute constant $\epsilon_0 > 0$ such that, for any $0 < \epsilon \leq \epsilon_0$, the following holds:
Any algorithm that has sample access to an unknown Bayes net $P$ over $\{0,1\}^n$ with known structure $\mathcal{S}$ of
maximum in-degree at most $d < n/2$, and distinguishes between the cases that $P=U$ and $\normone{P-U} > \epsilon$
requires $\Omega(2^{d/2} n^{1/2}/\epsilon^2)$ samples.
\end{restatable}
\begin{proof}
Our lower bound will be derived from families of Bayes nets with the following structure:
The first $d$ nodes are all independent (and will in fact have marginal probability $1/2$ each),
and will form in some sense a ``pointer'' to one of $2^d$ arbitrary product distributions.
The remaining $n-d$ nodes will each depend on all
of the first $d$. The resulting distribution is now an (evenly weighted) disjoint mixture of $2^d$
product distributions on the $(n-d)$-dimensional hypercube.
In other words, there are $2^d$ product distributions $p_1,\dots,p_{2^d}$,
and our distribution returns a random $i$ (encoded in binary) followed by a random sample from $p_i$.
Note that the $p_i$ can be arbitrary product distributions.
The unknown distribution $P$ to test is obtained as follows:
let $X$ be a Bernoulli random variable with parameter $1/2$.
If $X=0$, $P$ is the uniform distribution on $\{0,1\}^n$, i.e., each of the $2^d$ distributions $p_i$
is uniform on $\{0,1\}^{n-d}$. Otherwise, if $X=1$, then every $p_i$ is a product distribution
on $\{0,1\}^{n-d}$ with, for each coordinate, a parameter chosen uniformly and independently
to be either $\frac{1}{2}+\frac{\epsilon}{\sqrt{n}}$ or $\frac{1}{2}-\frac{\epsilon}{\sqrt{n}}$.
We will show that the shared information between a sample of size $o(2^{d/2}n^{1/2}/\epsilon^2)$ and $X$ is small.
In view of this, let $\sigma_i$ (for $1\leq i \leq 2^d$) be the set of indices of the samples that were drawn from $p_i$.
Note that since $X$ is independent of the $\sigma_i$'s, and as the $\sigma_i$ are a function of the samples,
$\mutualinfo{X}{S} = \mutualinfo{X}{S \mid \sigma_i}$. This is because $\mutualinfo{X}{S} = H(X) - H(X \mid S) =
H(X\mid \sigma_i) - H(X \mid S,\sigma_i) = \mutualinfo{X}{S \mid \sigma_i}$.
Now, for fixed $\sigma_i$, the samples we draw from $p_i$ are mutually independent of $X$.
Let $S_i$ denote the tuple of these $\abs{\sigma_i}$ samples. Thus, we have that
$\mutualinfo{X}{S \mid \sigma_i} \leq \sum_i \mutualinfo{X}{S_i \mid \sigma_i}$.
By the same analysis as in the proof of~\cref{theo:lb:product:uniform},
this latter term is $O(\binom{\abs{\sigma_i}}{2}\frac{\epsilon^4}{n})$. Therefore,
$$\mutualinfo{X}{S \mid \sigma_i} \leq \expect{ \sum_i \binom{\abs{\sigma_i}}{2} }O\left(\frac{\epsilon^4}{n}\right) = O\left(\frac{m^2 \epsilon^4}{n2^d}\right) \;,$$
where we used the fact that $\abs{\sigma_i}$ is $\binomial{m}{1/2^d}$ distributed.
Note that the above RHS is $o(1)$ unless $m = \Omega(2^{d/2}n^{1/2}/\epsilon^2)$, which completes the proof.
\end{proof}
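The moment computation used in the last step, $\expect{ \sum_i \binom{\abs{\sigma_i}}{2} } = O(m^2/2^d)$, can be verified numerically. The sketch below (illustrative only; function names are ours) uses the exact second factorial moment $\expect{X(X-1)} = m(m-1)p^2$ of a Binomial and compares it against direct enumeration:

```python
# Numerical sanity check (not part of the proof) of the step
# E[sum_i C(|sigma_i|, 2)] = 2^d * C(m, 2) / 2^{2d} = O(m^2 / 2^d):
# each |sigma_i| ~ Binomial(m, 1/2^d), and E[X(X-1)] = m(m-1)p^2 exactly.
from math import comb

def expected_pairs(m: int, d: int) -> float:
    p = 1.0 / 2**d
    per_bucket = m * (m - 1) * p * p / 2.0   # E[C(X, 2)] for X ~ Bin(m, p)
    return 2**d * per_bucket                 # summed over the 2^d buckets

def expected_pairs_exhaustive(m: int, d: int) -> float:
    # Exact enumeration over the Binomial(m, p) distribution, for small m.
    p = 1.0 / 2**d
    e = sum(comb(m, k) * p**k * (1 - p)**(m - k) * comb(k, 2)
            for k in range(m + 1))
    return 2**d * e

for m in range(2, 12):
    for d in range(1, 4):
        assert abs(expected_pairs(m, d) - expected_pairs_exhaustive(m, d)) < 1e-9
```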
\section{Testing Identity of Unknown Structure Bayes Nets} \label{sec:identity-uknown}
\new{
In this section, we give our algorithms and lower bounds for testing the identity of low-degree Bayes nets with unknown structure.
In Section~\ref{ssec:identity-unknown-lower}, we start by showing that -- even for the case of trees --
uniformity testing of $n$-node Bayes nets requires $\Omega(n/\epsilon^2)$ samples. In Section~\ref{ssec:identity-unknown-upper},
we design efficient identity testers with sample complexity sublinear in the dimension $n$, under some non-degeneracy
assumptions on the explicit Bayes net.
}
\subsection{Sample Complexity Lower Bound} \label{ssec:identity-unknown-lower}
\new{In this section, we establish a tight lower bound on identity testing of Bayes nets in the unknown structure case.
Our lower bound holds even for \emph{balanced} Bayes nets with a \emph{tree} structure.
In order to state our theorem, we first give a specialized definition of balancedness for the case of trees.
We say that a Bayes net with tree structure is $c$-balanced if it satisfies $p_k\in[c,1-c]$ for all $k$ (note that this immediately implies it is $(c,C)$-balanced).}
\begin{restatable}{theorem}{uniformitybntreelb}\label{theo:lb:unknown:structure:bayes:tree:uniform}
There exist absolute constants $c > 0$ and $\epsilon_0>0$ such that, for any $\epsilon \in(0,\epsilon_0)$ and given samples from an unknown $c$-balanced Bayes net $P$ over $\{0,1\}^n$ with unknown tree structure, distinguishing between the cases $P=U$ and $\normone{P-U} > \epsilon$ (where $U$ is the uniform distribution over $\{0,1\}^n$)
\new{with probability $2/3$} requires $\Omega(n/\epsilon^2)$ samples. (Moreover, one can take $c=1/3$.)
\end{restatable}
\new{Hence, without any assumptions about the explicit distribution, identity testing is information-theoretically as hard as learning.
This section is devoted to the proof of Theorem~\ref{theo:lb:unknown:structure:bayes:tree:uniform}.
}
Fix any integer $m\geq 1$. We will define a family of $\textsf{no}$-instances consisting of distributions $\{P_\lambda\}_\lambda$ over $\{0,1\}^{n}$ such that:
\begin{enumerate}
\item\label{item:lb:farness} every $P_\lambda$ is $\epsilon$-far from the uniform distribution $U$ on $\{0,1\}^{n}$: $\| P_\lambda - U\|_1 = \bigOmega{\epsilon}$;
\item\label{item:lb:treebn} every $P_\lambda$ is a Bayes net with a tree structure;
\item\label{item:lb:indisting} unless $m=\bigOmega{\frac{n}{\epsilon^2}}$, no algorithm taking $m$ samples can distinguish with probability $2/3$ between a uniformly chosen distribution from $\{P_\lambda\}_\lambda$ and $U$; or, equivalently, no algorithm taking \emph{one} sample can distinguish with probability $2/3$ between $P_\lambda^{\otimes m}$ and $U^{\otimes m}$, when $P_\lambda$ is chosen uniformly at random from $\{P_\lambda\}_\lambda$.
\end{enumerate}
The family is defined as follows. We let $\delta \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\epsilon}{\sqrt{n}}$, and let a \emph{matching-orientation parameter} $\lambda$ consist of (i) a matching $\lambda^{(1)}$ of $[n]$ (partition of $[n]$ into $\frac{n}{2}$ disjoint pairs $(i,j)$ with $i<j$) and (ii) a vector $\lambda^{(2)}$ of $\frac{n}{2}$ bits. The distribution $P_\lambda$ is then defined as the distribution over $\{0,1\}^{n}$ with uniform marginals, and tree structure with edges corresponding to the pairs $\lambda^{(1)}$; and such that for every $\lambda^{(1)}_k = (i,j)\in \lambda^{(1)}$, $\operatorname{cov}(X_i, X_j) = (-1)^{\lambda^{(2)}_k}\delta$.
\paragraph{Notations.}
For $\lambda=(\lambda^{(1)},\lambda^{(2)})$ as above and $x\in\{0,1\}^n$, we define the \emph{agreement count of $x$ for $\lambda$}, $c(\lambda,x)$, as the number of pairs $(i,j)$ in $\lambda^{(1)}$ such that $(x_i,x_j)$ ``agrees'' with the correlation suggested by $\lambda^{(2)}$. Specifically:
\[
c(\lambda,x) \stackrel{{\mathrm {\footnotesize def}}}{=} \abs{ \setOfSuchThat{ (i,j) \in [n]^2 }{ \exists \ell \in [n/2], \quad \lambda^{(1)}_\ell = (i,j) \text{ and } (-1)^{x_i+x_j} = (-1)^{\lambda^{(2)}_\ell} } }.
\]
Moreover, for $\lambda,\mu$ two matching-orientation parameters, we define the sets $A=A_{\lambda,\mu},B=B_{\lambda,\mu},C=C_{\lambda,\mu}$ as
\begin{align*}
A &\stackrel{{\mathrm {\footnotesize def}}}{=} \setOfSuchThat{ (s,t)\in[n/2]^2 }{ \lambda^{(1)}_s=\mu^{(1)}_t, \quad \lambda^{(2)}_s = \mu^{(2)}_t } \tag{common pairs with same orientations}\\
B &\stackrel{{\mathrm {\footnotesize def}}}{=} \setOfSuchThat{ (s,t)\in[n/2]^2 }{ \lambda^{(1)}_s=\mu^{(1)}_t, \quad \lambda^{(2)}_s\neq \mu^{(2)}_t } \tag{common pairs with different orientations}\\
C &\stackrel{{\mathrm {\footnotesize def}}}{=} (\lambda^{(1)}\cup\mu^{(1)})\setminus(A\cup B) \tag{pairs unique to $\lambda$ or $\mu$}
\end{align*}
so that $2(\abs{A}+\abs{B})+\abs{C} = n$.
\paragraph{Proof of~\cref{item:lb:farness}.} Fix any matching-orientation parameter $\lambda$. We have
\begin{align*}
\normone{P_\lambda-U} &= \sum_{x\in\{0,1\}^n} \abs{P_\lambda(x) - U(x)} = \sum_{x\in\{0,1\}^n} \abs{U(x) (1+2\delta)^{c(\lambda,x)}(1-2\delta)^{\frac{n}{2}-c(\lambda,x)} - U(x)} \\
&= \frac{1}{2^n}\sum_{x\in\{0,1\}^n} \abs{(1+2\delta)^{c(\lambda,x)}(1-2\delta)^{\frac{n}{2}-c(\lambda,x)} - 1}
= \frac{1}{2^n}\sum_{k=0}^{\frac{n}{2}}\sum_{x\colon c(\lambda,x)=k} \abs{(1+2\delta)^{k}(1-2\delta)^{\frac{n}{2}-k} - 1} \\
&= \frac{1}{2^n}\sum_{k=0}^{\frac{n}{2}} 2^{\frac{n}{2}}\binom{\frac{n}{2}}{k} \abs{(1+2\delta)^{k}(1-2\delta)^{\frac{n}{2}-k} - 1}
= \sum_{k=0}^{\frac{n}{2}} \binom{\frac{n}{2}}{k} \abs{\left(\frac{1+2\delta}{2}\right)^{k}\left(\frac{1-2\delta}{2}\right)^{\frac{n}{2}-k} - \frac{1}{2^{\frac{n}{2}}} } \\
&= 2\totalvardist{ \binomial{\frac{n}{2}}{\frac{1}{2}} }{ \binomial{\frac{n}{2}}{\frac{1}{2}+\delta} }
= \bigOmega{\epsilon} \;,
\end{align*}
where the last equality follows from~\cref{lemma:noinstances:far}.
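This computation can be sanity-checked numerically for small $n$: the sketch below (illustrative only; function names are ours) evaluates $\normone{P_\lambda-U}$ by brute-force enumeration over $\{0,1\}^n$ for a concrete matching-orientation parameter, and compares it with the binomial expression in the last line:

```python
# Illustrative numerical check (not part of the proof) that, for any
# matching-orientation parameter lambda,
#   ||P_lambda - U||_1 = sum_k C(n/2, k) | (1/2+d)^k (1/2-d)^{n/2-k} - 2^{-n/2} |.
from itertools import product
from math import comb

def l1_exhaustive(n, delta, matching, orientations):
    # P_lambda(x) = U(x) * (1+2d)^{c(lambda,x)} * (1-2d)^{n/2 - c(lambda,x)}
    total = 0.0
    for x in product((0, 1), repeat=n):
        c = sum(1 for (i, j), o in zip(matching, orientations)
                if (-1) ** (x[i] + x[j]) == (-1) ** o)
        total += abs((1 + 2*delta)**c * (1 - 2*delta)**(n//2 - c) - 1) / 2**n
    return total

def l1_binomial_formula(n, delta):
    s = n // 2
    return sum(comb(s, k) * abs((0.5 + delta)**k * (0.5 - delta)**(s - k) - 0.5**s)
               for k in range(s + 1))

n, delta = 6, 0.1
matching = [(0, 1), (2, 3), (4, 5)]
for orientations in product((0, 1), repeat=3):
    assert abs(l1_exhaustive(n, delta, matching, list(orientations))
               - l1_binomial_formula(n, delta)) < 1e-12
```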
\paragraph{Proof of~\cref{item:lb:indisting}.} Let the distribution $Q$ over $(\{0,1\}^n)^m$ be the uniform mixture
\[
Q \stackrel{{\mathrm {\footnotesize def}}}{=} \mathbb{E}_\lambda[ P_\lambda^{\otimes m} ] \;,
\]
where $P_\lambda$ is the distribution on $\{0,1\}^n$ corresponding to the matching-orientation parameter $\lambda$. In particular, for any $x\in\{0,1\}^n$ we have
\[
P_\lambda(x) = U(x) (1+2\delta)^{c(\lambda,x)}(1-2\delta)^{\frac{n}{2}-c(\lambda,x)}
\]
with $U$ being the uniform distribution on $\{0,1\}^n$ and $c(\lambda,x)$, the agreement count of $x$ for $\lambda$, defined as before. Now, this leads to
\[
\frac{dP_\lambda}{dU}(x) = 1+G(\lambda,x) \;,
\]
where $G(\lambda,x) \stackrel{{\mathrm {\footnotesize def}}}{=} (1-2\delta)^{\frac{n}{2}}\left(\frac{1+2\delta}{1-2\delta}\right)^{c(\lambda,x)}-1$. For two matching-orientation parameters $\lambda,\mu$, we can define the covariance $\tau(\lambda,\mu)\stackrel{{\mathrm {\footnotesize def}}}{=} \mathbb{E}_{x\sim U}[ G(\lambda,x)G(\mu,x) ]$. By the minimax approach (as in~\cite[Chapter 3]{Pollard:2003}), it is sufficient to bound the \new{$L_1$}-distance between $Q$ and $U^{\otimes m}$ by a small constant. Moreover, we have
\begin{equation}\label{eq:minimax:mixture:bound}
\normone{Q-U^{\otimes m}} \leq \mathbb{E}_{\lambda,\mu}\left[ (1+\tau(\lambda,\mu))^m \right] - 1
\end{equation}
and to show the lower bound it is sufficient to prove that the RHS is less than $\frac{1}{10}$ unless $m=\bigOmega{\frac{n}{\epsilon^2}}$.
\noindent Setting $z\stackrel{{\mathrm {\footnotesize def}}}{=}\frac{1+2\delta}{1-2\delta}$, we can derive, by expanding the definition of $\tau(\lambda,\mu)$,
\begin{align*}
\tau(\lambda,\mu) &= 1+(1-2\delta)^n\mathbb{E}_{x\sim U}[ z^{c(\lambda,x)+c(\mu,x)} ]-2(1-2\delta)^{\frac{n}{2}}\mathbb{E}_{x\sim U}[ z^{c(\lambda,x)} ].
\end{align*}
Since, when $x$ is uniformly drawn in $\{0,1\}^n$, $c(\lambda,x)$ follows a $\binomial{\frac{n}{2}}{\frac{1}{2}}$ distribution, we can compute the last term as
\begin{align*}
2(1-2\delta)^{\frac{n}{2}}\mathbb{E}_{x\sim U}[ z^{c(\lambda,x)} ]
&= 2(1-2\delta)^{\frac{n}{2}}\left(\frac{1+z}{2}\right)^{\frac{n}{2}} = 2(1-2\delta)^{\frac{n}{2}}\frac{1}{(1-2\delta)^{\frac{n}{2}}} = 2 \;,
\end{align*}
where we used the expression of the probability-generating function of a Binomial. This leads to
\begin{align*}
1+\tau(\lambda,\mu) &= (1-2\delta)^n \mathbb{E}_{x\sim U}[ z^{c(\lambda,x)+c(\mu,x)} ] \\
&= (1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \prod_{\sigma\text{ cycle}\colon\abs{\sigma} \geq 4} \mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ] \;,
\end{align*}
where ``cycle'' and the probability distribution $\mathcal{B}_{\lambda,\mu}(\sigma)$ are defined as follows. Recall that $\lambda$ and $\mu$ define a weighted multigraph
over $n$ vertices, where each vertex has degree exactly $2$, the edges are from the pairs $\lambda^{(1)}_i$'s and \new{$\mu^{(1)}_i$}'s, and the weights are in $\{0,1\}$ according to the $\lambda^{(2)}_i$'s and \new{$\mu^{(2)}_i$}'s. That multigraph $G_{\lambda,\mu}$ is better seen as the disjoint union of cycles (\new{and indeed, $A\cup B$ corresponds to the cycles of length $2$, while $C$ corresponds to cycles of length at least $4$)}.
For such a cycle $\sigma$ in $G_{\lambda,\mu}$, we let $\mathcal{B}_{\lambda,\mu}(\sigma)$ be the distribution below. If the number of negative covariances -- the number of edges with label $\lambda^{(2)}_\ell=1$ or $\mu^{(2)}_\ell=1$ -- along $\sigma$ is even (resp. odd), then
$\mathcal{B}_{\lambda,\mu}(\sigma)$ is a $\binomial{\abs{\sigma}}{\frac{1}{2}}$ conditioned on being even (resp. odd).\medskip
Instead of the above, we first consider the related quantity with the conditioning removed (indeed, as we will see in~\cref{claim:binomial:removing:evenodd:conditioning}, this new quantity is within a $1+O(\epsilon^2)$ factor of the actual one):
\begin{align*}
1+\tilde{\tau}(\lambda,\mu)
&= (1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \prod_{\sigma\text{ cycle}\colon\abs{\sigma} \geq 4} \mathbb{E}_{\alpha\sim \binomial{\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ]\\
&= (1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \mathbb{E}_{\alpha\sim \binomial{\sum_{\sigma\colon\abs{\sigma} \geq 4}\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ] \\
&= (1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \mathbb{E}_{\alpha\sim \binomial{\abs{C}}{\frac{1}{2}}}[ z^{\alpha} ] \\
&= (1-2\delta)^n z^{\abs{B}} \left(\frac{1+z^2}{2}\right)^{\abs{A}}\left(\frac{1+z}{2}\right)^{\abs{C}} \\
&= \left( (1-2\delta)^2z \right)^{\abs{B}} \left((1-2\delta)^2\frac{1+z^2}{2}\right)^{\abs{A}}{\underbrace{\left((1-2\delta)\frac{1+z}{2}\right)}_{=1}}^{\abs{C}} \tag{$2\abs{A}+2\abs{B}+\abs{C}=n$}\\
&= \left( 1-4\delta^2\right)^{\abs{B}} \left(1+4\delta^2\right)^{\abs{A}}.
\end{align*}
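The probability-generating-function step and the three algebraic identities collapsing the product above are easy to check numerically; the sketch below (illustrative only; function names are ours) verifies both:

```python
# Illustrative sanity check (not part of the proof). With z = (1+2d)/(1-2d):
#   (i)  E[z^K] = ((1+z)/2)^s for K ~ Binomial(s, 1/2), so that
#        2 (1-2d)^s E[z^K] = 2, as used to evaluate the last term of tau;
#   (ii) the three identities that collapse 1 + tau~ to (1-4d^2)^|B| (1+4d^2)^|A|.
from math import comb

def pgf_term(s: int, delta: float) -> float:
    z = (1 + 2*delta) / (1 - 2*delta)
    e = sum(comb(s, k) * 0.5**s * z**k for k in range(s + 1))  # E[z^K], exact sum
    return 2 * (1 - 2*delta)**s * e

for delta in (0.01, 0.1, 0.3, 0.45):
    z = (1 + 2*delta) / (1 - 2*delta)
    assert abs((1 - 2*delta)**2 * z - (1 - 4*delta**2)) < 1e-12            # |B| factor
    assert abs((1 - 2*delta)**2 * (1 + z**2) / 2 - (1 + 4*delta**2)) < 1e-12  # |A| factor
    assert abs((1 - 2*delta) * (1 + z) / 2 - 1) < 1e-12                    # |C| factor
    for s in (2, 5, 10):
        assert abs(pgf_term(s, delta) - 2) < 1e-8
```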
Thus, we need to compute
\begin{align*}
\mathbb{E}_{\lambda,\mu}\left[ (1+\tilde{\tau}(\lambda,\mu))^m \right]
&= \mathbb{E}_{\lambda,\mu}\left[ \left(1+4\delta^2\right)^{m\abs{A}}\left( 1-4\delta^2\right)^{m\abs{B}} \right] \\
&= \mathbb{E}_{\lambda,\mu}\left[ a^{\abs{A}} b^{\abs{B}} \right] \tag{where $a\stackrel{{\mathrm {\footnotesize def}}}{=}\left(1+4\delta^2\right)^{m}$, $b\stackrel{{\mathrm {\footnotesize def}}}{=}\left(1-4\delta^2\right)^{m}$} \\
&= \mathbb{E}_{\lambda,\mu}\left[ \expectCond{ a^{\abs{A}} b^{\abs{B}} }{ \abs{A}+\abs{B} } \right]
= \mathbb{E}_{\lambda,\mu}\left[ b^{\abs{A}+\abs{B}}\expectCond{ \left(\frac{a}{b}\right)^{\abs{A}} }{ \abs{A}+\abs{B} } \right] \\
&= \mathbb{E}_{\lambda,\mu}\left[ b^{\abs{A}+\abs{B}} \left(\frac{1+\frac{a}{b}}{2}\right)^{ \abs{A}+\abs{B} } \right] \tag{ $\abs{A} \sim \binomial{\abs{A}+\abs{B}}{\frac{1}{2}}$}\\
&= \mathbb{E}_{\lambda,\mu}\left[ \left(\frac{a+b}{2}\right)^{ \abs{A}+\abs{B} } \right]
= \mathbb{E}_{\lambda,\mu}\left[ \left(\frac{\left(1+4\delta^2\right)^{m}+\left(1-4\delta^2\right)^{m}}{2}\right)^{ \abs{A}+\abs{B} } \right].
\end{align*}
In particular, consider the following upper bound on $f(k)$, the probability that $\abs{A}+\abs{B}\geq k$: setting $s\stackrel{{\mathrm {\footnotesize def}}}{=} \frac{n}{2}$, for $0\leq k \leq s$,
\begin{align*}
f(k)=\probaOf{ \abs{A}+\abs{B}\geq k }&\leq \frac{s! 2^s}{(2s)!} \cdot \binom{s}{k}\frac{(2s-2k)!}{(s-k)! 2^{s-k}}
= \frac{2^k k!}{(2k)!}\frac{\binom{s}{k}^2}{\binom{2s}{2k}}
= \frac{2^k}{k!}\frac{\binom{2(s-k)}{s-k}}{\binom{2s}{s}} \\
&= \frac{2^k}{k!} \frac{\prod_{j=0}^{k-1} (s-j)^2}{\prod_{j=0}^{2k-1} (2s-j)}
= \frac{1}{k!} \frac{\prod_{j=0}^{k-1} (s-j)}{\prod_{j=0}^{k-1} (2s-2j-1)}
\leq \frac{1}{k!}.
\end{align*}
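The chain of equalities above can be checked numerically; in the sketch below (illustrative only; function names are ours), `f_first` is the first expression bounding $f(k)$ and `f_last` is the final product form:

```python
# Illustrative sanity check (not part of the proof) that the first expression
# for the bound on f(k) equals the final product form, and is at most 1/k!.
from math import comb, factorial, prod

def f_first(s, k):
    return (factorial(s) * 2**s / factorial(2*s)) * comb(s, k) \
           * factorial(2*s - 2*k) / (factorial(s - k) * 2**(s - k))

def f_last(s, k):
    return (prod(s - j for j in range(k))
            / prod(2*s - 2*j - 1 for j in range(k))) / factorial(k)

for s in range(1, 10):
    for k in range(s + 1):
        assert abs(f_first(s, k) - f_last(s, k)) <= 1e-9 * f_first(s, k)
        assert f_last(s, k) <= 1 / factorial(k) + 1e-12
```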
Therefore, for any $z > 1$, we have
\begin{align*}
\mathbb{E}_{\lambda,\mu}\left[ z^{ \abs{A}+\abs{B} } \right]
&= \int_0^\infty \probaOf{ z^{ \abs{A}+\abs{B} } \geq t }dt
= \int_0^\infty \probaOf{ \abs{A}+\abs{B} \geq \frac{\ln t}{\ln z} }dt \\
&= 1+\int_1^\infty \probaOf{ \abs{A}+\abs{B} \geq \frac{\ln t}{\ln z} }dt
\leq 1+\int_1^\infty \probaOf{ \abs{A}+\abs{B} \geq \flr{\frac{\ln t}{\ln z}} }dt \\
&\leq 1+\int_1^\infty \frac{dt}{\flr{\frac{\ln t}{\ln z}}!} \tag{from our upper bound on $f(k)$}
\leq 1+\int_1^\infty \frac{dt}{\Gamma\left(\frac{\ln t}{\ln z}\right)} \\
&= 1+\int_0^\infty \frac{e^u du}{\Gamma\left(\frac{u}{\ln z}\right)}.
\end{align*}
Assuming now that $1< z \leq 1+\gamma$ for some $\gamma \in(0,1)$, from $\ln z < \gamma$ and monotonicity of the Gamma function we obtain
\begin{align*}
\mathbb{E}_{\lambda,\mu}\left[ z^{ \abs{A}+\abs{B} } \right]
&\leq 1+\int_0^\infty \frac{e^u du}{\Gamma\left(\frac{u}{\gamma}\right)}
= 1+\gamma \int_{0}^\infty \frac{e^{\gamma v} dv}{\Gamma(v)}
\leq 1+\gamma \int_{0}^\infty \frac{e^{v} dv}{\Gamma(v)} \leq 1+42\gamma.
\end{align*}
Suppose now $m \leq c \frac{n}{\epsilon^2} = \frac{4c}{\delta^2}$, for some constant $c>0$ to be determined later.
Then, by monotonicity
\[
z\stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\left(1+4\delta^2\right)^{m}+\left(1-4\delta^2\right)^{m}}{2}
\leq
\frac{\left(1+4\delta^2\right)^{\frac{4c}{\delta^2}}+\left(1-4\delta^2\right)^{\frac{4c}{\delta^2}}}{2}
\leq
\frac{e^{16c}+e^{-16c}}{2} < 1+\frac{1}{42\cdot 20} \stackrel{{\mathrm {\footnotesize def}}}{=} 1+\gamma
\]
for $c < \frac{3}{1000}$. Therefore,
$$\mathbb{E}_{\lambda,\mu}\left[ (1+\tilde{\tau}(\lambda,\mu))^m \right] - 1 = \mathbb{E}_{\lambda,\mu}\left[ z^{ \abs{A}+\abs{B} } \right] -1 < \frac{1}{20} \;,$$
as desired.
\medskip
To conclude, we bound $\mathbb{E}_{\lambda,\mu}\left[ (1+\tau(\lambda,\mu))^m \right]$ combining the above with the following claim:
\begin{claim}\label{claim:binomial:removing:evenodd:conditioning}
Let $z\stackrel{{\mathrm {\footnotesize def}}}{=}\frac{1+2\delta}{1-2\delta}$ as above. Then for any two matching-orientation parameters $\lambda,\mu$, we have
\[
\prod_{\sigma\colon\abs{\sigma} \geq 4}\mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ]
\leq e^{\frac{8\epsilon^4}{n}} \cdot \mathbb{E}_{\alpha\sim \binomial{\sum_{\sigma\colon \abs{\sigma}\geq 4}\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ] \;.
\]
\end{claim}
\begin{proof}
Fix $\lambda,\mu$ as in the statement, and any cycle $\sigma$ in the resulting graph. Suppose first this is an ``even'' cycle:
\[
\mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ]
= \mathbb{E}_{\alpha\sim \binomial{\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} \mid \alpha\text{ even} ]
= \frac{1}{1/2}\cdot\frac{1}{2^{\abs{\sigma}}} \sum_{k=0}^{\abs{\sigma}/2} \binom{\abs{\sigma}}{2k} z^{2k} = \frac{(1+z)^{\abs{\sigma}}+(1-z)^{\abs{\sigma}}}{2^{\abs{\sigma}}}.
\]
Similarly, if $\sigma$ is an ``odd'' cycle,
$
\mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ]
= \frac{(1+z)^{\abs{\sigma}}-(1-z)^{\abs{\sigma}}}{2^{\abs{\sigma}}}.
$
We then obtain $\mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ]
\leq \mathbb{E}_{\alpha\sim \binomial{\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ]\left( 1+\abs{\frac{1-z}{1+z}}^{\abs{\sigma}} \right)$, from which
\[
\prod_{\sigma}\mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ]
\leq \prod_{\sigma}\mathbb{E}_{\alpha\sim \binomial{\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ]\left( 1+\abs{\frac{1-z}{1+z}}^{\abs{\sigma}} \right)
= \mathbb{E}_{\alpha\sim \binomial{\sum_{\sigma}\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ]\cdot \prod_{\sigma}\left( 1+\abs{\frac{1-z}{1+z}}^{\abs{\sigma}} \right).
\]
We now bound the last factor: since $\abs{\frac{1-z}{1+z}}=2\delta = \frac{2\epsilon}{\sqrt{n}}$ and there are at most $\frac{n}{2}$ cycles, we get
\[
\prod_{\sigma\colon \abs{\sigma} \geq 4}\left( 1+\abs{\frac{1-z}{1+z}}^{\abs{\sigma}} \right)
= \prod_{\sigma\colon \abs{\sigma} \geq 4}\left( 1+(2\delta)^{\abs{\sigma}} \right)
\leq \left( 1+16\delta^{4} \right)^{\frac{n}{2}} \leq e^{8\frac{\epsilon^4}{n}} \;,
\]
as claimed.
\end{proof}
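The even/odd conditional expectations computed in the proof can be verified directly for small cycle lengths; the following sketch (illustrative only; the function name is ours) compares brute-force conditioning against the closed forms:

```python
# Illustrative check (not part of the proof) of the conditional-expectation
# formulas: for K ~ Binomial(L, 1/2),
#   E[z^K | K even] = ((1+z)^L + (1-z)^L) / 2^L
#   E[z^K | K odd ] = ((1+z)^L - (1-z)^L) / 2^L
from math import comb

def cond_exp(L, z, parity):
    num = sum(comb(L, k) * z**k for k in range(L + 1) if k % 2 == parity)
    den = sum(comb(L, k) for k in range(L + 1) if k % 2 == parity)  # = 2^{L-1}
    return num / den

for L in (4, 6, 8):
    for z in (1.1, 1.5, 2.0):
        assert abs(cond_exp(L, z, 0) - ((1 + z)**L + (1 - z)**L) / 2**L) < 1e-9
        assert abs(cond_exp(L, z, 1) - ((1 + z)**L - (1 - z)**L) / 2**L) < 1e-9
```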
With this result in hand, we can get the conclusion we want: for any $\lambda,\mu$,
\begin{align*}
1+\tau(\lambda,\mu)
&= (1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \prod_{\sigma\text{ cycle}\colon\abs{\sigma} \geq 4} \mathbb{E}_{\alpha\sim \mathcal{B}_{\lambda,\mu}(\sigma)}[ z^{\alpha} ] \\
&\leq e^{\frac{8\epsilon^4}{n}}(1-2\delta)^n z^{\abs{B}} \mathbb{E}_{\alpha\sim \binomial{\abs{A}}{\frac{1}{2}}}[ z^{2\alpha} ] \mathbb{E}_{\alpha\sim \binomial{\sum_{\sigma\colon \abs{\sigma}\geq 4}\abs{\sigma}}{\frac{1}{2}}}[ z^{\alpha} ] \tag{by \cref{claim:binomial:removing:evenodd:conditioning}} \\
&= e^{\frac{8\epsilon^4}{n}}(1+\tilde{\tau}(\lambda,\mu)) \;,
\end{align*}
from which
\begin{align*}
\mathbb{E}_{\lambda,\mu}\left[ (1+\tau(\lambda,\mu))^m \right]
&\leq e^{\frac{8\epsilon^4m}{n}}\mathbb{E}_{\lambda,\mu}\left[ (1+\tilde{\tau}(\lambda,\mu))^m \right]
\leq e^{\frac{8\epsilon^4m}{n}}\left(1+\frac{1}{20}\right) \tag{for $m\leq \frac{cn}{\epsilon^2}$}\\
&\leq e^{8c\epsilon^2}\frac{21}{20} < 1+\frac{1}{10} \tag{as $c<\frac{3}{1000}$ and $\epsilon \leq 1$} \;,
\end{align*}
concluding the proof: by \eqref{eq:minimax:mixture:bound},
$
\normone{Q-U^{\otimes m}} \leq \frac{1}{10} \;,
$
for any $m < c\frac{n}{\epsilon^2}$.
\subsection{Identity Testing Algorithm against Non-Degenerate Bayes Nets} \label{ssec:identity-unknown-upper}
We start with the case of trees and then generalize to bounded degree.
\subsubsection{The Case of Trees}\label{ssec:identity-unknown-upper:trees}
In this section, we prove our result on testing identity of a tree structured
Bayes net with unknown topology. \newblue{Recall from~\cref{ssec:identity-unknown-lower} that a Bayes net with tree structure is said to be $c$-balanced if it satisfies $p_k\in[c,1-c]$ for all $k$. We will require the following definition of \emph{non-degeneracy} of a tree, which will be a simpler case of the definition we shall have for general Bayes nets (\cref{def:non:degeneracy}):
\begin{definition}\label{def:nondegenerate:tree}
For any $\gamma \in(0,1]$, we say a tree Bayes net $P$ over $\{0,1\}^n$ is \emph{$\gamma$-non-degenerate} if for all $i\in[n]$,
\[
\abs{ \probaCond{ X_i = 1 }{ X_{\parent{i}} = 1 } - \probaCond{ X_i = 1 }{ X_{\parent{i}} = 0 } } \geq \gamma
\]
where $X\sim P$.
\end{definition}
Roughly speaking, this definition states that the choice of the value of its parent has a significant influence on the probability of any node. With these definitions, we are ready to state and prove our result:}
\begin{restatable}{theorem}{identityunknowntreeub}\label{theo:upper:unknowntree:identity}
There exists an efficient algorithm with the following guarantees.
Given as input {(i)} a tree $\mathcal{S}$ over $n$ nodes and an \new{explicit $c$-balanced, $\gamma$-non-degenerate Bayes net} $Q$
with structure $\mathcal{S}$, where $c, \gamma = \bigOmega{1/n^a}$ for some absolute constant $a>0$;
{(ii)} parameter $\epsilon > 0$, and {(iii)} sample access to a Bayes net $P$
with unknown tree structure, the algorithm takes $\bigO{\sqrt{n}/\epsilon^2}$ samples from $P$,
and distinguishes with probability at least $2/3$ between $P=Q$ and $\normone{P-Q} > \epsilon$.
\end{restatable}
The algorithm follows a natural idea: {(1)} check that the unknown distribution $P$ indeed has, as it should, the same tree structure as the (known) distribution $Q$; {(2)} if so, invoke the algorithm of the previous section, which works under the assumption that $P$ and $Q$ have the same structure.
Therefore, to establish the theorem it is sufficient to show that {(1)} can be performed efficiently. Specifically, we will prove the following:
\begin{theorem}\label{theo:check:tree:structure}
There exists an algorithm with the following guarantees.
For $\gamma\in(0,1)$, $c\in(0,1/2)$, it takes as input
an explicit $c$-balanced, $\gamma$-nondegenerate tree Bayes net $Q$ over $\{0,1\}^n$
with structure $\mathcal{S}(Q)$, and
\[ O\left( \frac{\log^2\frac{1}{c}}{c^6\gamma^4} \log n \right) \]
samples from an arbitrary tree Bayes net $P$ over $\{0,1\}^n$ with unknown structure $\mathcal{S}(P)$.
\begin{itemize}
\item If $P=Q$, the algorithm returns $\textsf{accept}$ with probability at least $4/5$;
\item If $\mathcal{S}(P)\neq\mathcal{S}(Q)$, the algorithm returns $\textsf{reject}$ with probability at least $4/5$.
\end{itemize}
\end{theorem}
\noindent Note that the above theorem implies the desired result
as long as $\frac{\log^2\frac{1}{c}}{c^6\gamma^4} = \bigO{\frac{\sqrt{n}}{\epsilon^2 \new{\log n}}}$.
\begin{proof}[Proof of~\cref{theo:check:tree:structure}]
We start by stating and proving lemmas that will be crucial in stating and analyzing the algorithm:
\begin{fact}\label{fact:unknown:tree:structure:estimation}
Given $\tau > 0$ and sample access to a tree Bayes net $P$ over $\{0,1\}^n$, one can obtain with $O(\frac{\log n}{\tau^2})$ samples estimates $(\hat{\mu}_i)_{i\in[n]}$, $(\hat{\rho}_{i,j})_{i,j\in[n]}$ such that, with probability at least $9/10$,
\[
\max\left( \max_{i\in[n]} \lvert \hat{\mu}_i - \expect{X_i} \rvert, \max_{i,j\in[n]} \lvert \hat{\rho}_{i,j}-\expect{X_iX_j} \rvert \right) \leq \tau.
\]
\end{fact}
\begin{proof}
The fact follows immediately by an application of Chernoff bounds.
\end{proof}
\begin{lemma}\label{lemma:unknown:tree:structure:lipschitz:mutualinfo}
Let $c\in(0,1/2]$. There exist a constant $\lambda$ and a function $f$ such that
\[
\mutualinfo{X_i}{X_j} = f( \expect{X_i},\expect{X_j},\expect{X_iX_j} ) \;,
\]
for any $c$-balanced tree Bayes net $P$ over $\{0,1\}^n$ and $X\sim P$,
where $f$ is $\lambda$-Lipschitz with respect to the $\norminf{\cdot}$ norm on the domain $\Omega_c\subseteq [0,1]\times[0,1]\times[0,1]$ in which $(\expect{X_i},\expect{X_j},\expect{X_iX_j})_{i,j}$ then take values. Moreover, one can take $\lambda = 16\log\frac{1}{c}$.
\end{lemma}
\begin{proof}[Proof Sketch:]
\new{Expanding the definition of the mutual information $\mutualinfo{X}{Y}$ of two random variables,
it is not hard to write it as a function of $\expect{X},\expect{Y}$, and $\expect{XY}$ only.
This function would not be Lipschitz on its entire domain, however. The core of the proof
leverages the balancedness assumption to restrict its domain to a convenient subset
$\Omega_c\subseteq [0,1]\times[0,1]\times[0,1]$,
on which it becomes possible to bound the partial derivatives of $f$.
We defer the details of the proof to~\cref{sec:misc:proofs}.}
\end{proof}
\new{We now show the following crucial lemma: for any balanced tree Bayes net,
the mutual information between any pair of non-adjacent vertices $i, j$
is noticeably smaller than the minimum mutual information
between any pair of neighbors along the path that connects $i$ and $j$.}
\begin{lemma}\label{lemma:unknown:tree:structure:gap:mutualinfo}
Let $c\in(0,1/2]$, and fix any $c$-balanced tree Bayes net $P$ over $\{0,1\}^n$
with structure $\mathcal{S}(P)$. Then, for any distinct $i,j\in[n]$ such that $i\neq\parent{j}$ and $j\neq\parent{i}$, we have
\[
\mutualinfo{X_i}{X_j} \leq (1-2c^2)\min_{\{k,\parent{k}\}\in\operatorname{path}(i,j)} \mutualinfo{X_k}{X_{\parent{k}}} \;,
\]
where $X\sim P$ (and $\operatorname{path}(i,j)$ is the path between $i$ and $j$, of the form $i - \dots - k - \dots - j$, where each edge is of the form $(k,\parent{k})$ or $(\parent{k},k)$).
\end{lemma}
\begin{proof}
By induction and the data processing inequality, it is sufficient to prove the statement for a path of length 3, namely
\[
X_i - X_k - X_j \;.
\]
The result will follow from a version of the strong data processing inequality (see e.g.,~\cite{PW:15},
from which we borrow the notations $\eta_{\rm KL},\eta_{\rm TV}$): since $X_i \to X_k \to X_j$ forms a chain with the Markov property, we get
$\mutualinfo{X_i}{X_j} \leq \eta_{\rm KL}(P_{X_j\mid X_k}) \mutualinfo{X_i}{X_k}$ from~\cite[Equation 17]{PW:15}. Now, by~\cite[Theorem 1]{PW:15}, we have
\[
\eta_{\rm KL}(P_{X_j\mid X_k}) \leq \eta_{\rm TV}(P_{X_j\mid X_k}) = d_{\mathrm TV}( P_{X_j\mid X_k=0}, P_{X_j\mid X_k=1} ).
\]
If $k=\parent{j}$ (in our Bayes net), then $d_{\mathrm TV}( P_{X_j\mid X_k=0}, P_{X_j\mid X_k=1} ) = \abs{p_{j,0}-p_{j,1}} \leq 1-2c$ from the $c$-balancedness assumption. On the other hand, if $j=\parent{k}$, then by Bayes' rule it is easy to check that (again, from the $c$-balancedness assumption) $\probaCond{X_{\parent{k}}=1}{X_k=a}\in[c^2,1-c^2]$, and
$d_{\mathrm TV}( P_{X_j\mid X_k=0}, P_{X_j\mid X_k=1} ) = \abs{ \probaCond{X_j=1}{X_k=0} - \probaCond{X_j=1}{X_k=1} } \leq 1-2c^2$.
Therefore, we get $\mutualinfo{X_i}{X_j} \leq (1-2c^2) \mutualinfo{X_i}{X_k}$ as wanted; by symmetry, $\mutualinfo{X_i}{X_j} \leq (1-2c^2) \mutualinfo{X_j}{X_k}$ holds as well.
\end{proof}
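The contraction inequality of~\cref{lemma:unknown:tree:structure:gap:mutualinfo} can be probed numerically on a three-node chain; the sketch below (illustrative only, not a proof; function names are ours) draws random $c$-balanced conditionals for $X_i \to X_k \to X_j$ and checks $\mutualinfo{X_i}{X_j} \leq (1-2c^2)\min(\mutualinfo{X_i}{X_k},\mutualinfo{X_k}{X_j})$:

```python
# Illustrative numerical check of the contraction
#   I(Xi; Xj) <= (1 - 2c^2) * min(I(Xi; Xk), I(Xk; Xj))
# on a three-node chain Xi -> Xk -> Xj with c-balanced conditionals (c = 1/3).
import itertools, math, random

def mutual_info(joint):  # joint: dict {(x, y): probability}
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

c = 1/3
random.seed(0)
for _ in range(200):
    pi = random.uniform(c, 1 - c)                      # P[Xi = 1]
    pk = [random.uniform(c, 1 - c) for _ in range(2)]  # P[Xk = 1 | Xi = a]
    pj = [random.uniform(c, 1 - c) for _ in range(2)]  # P[Xj = 1 | Xk = b]
    jik, jij, jkj = {}, {}, {}
    for xi, xk, xj in itertools.product((0, 1), repeat=3):
        p = (pi if xi else 1 - pi) \
            * (pk[xi] if xk else 1 - pk[xi]) \
            * (pj[xk] if xj else 1 - pj[xk])
        jik[(xi, xk)] = jik.get((xi, xk), 0) + p
        jij[(xi, xj)] = jij.get((xi, xj), 0) + p
        jkj[(xk, xj)] = jkj.get((xk, xj), 0) + p
    lhs = mutual_info(jij)
    rhs = (1 - 2 * c**2) * min(mutual_info(jik), mutual_info(jkj))
    assert lhs <= rhs + 1e-12
```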
\begin{lemma}\label{lemma:unknown:tree:structure:min:mutualinfo}
Let $c\in(0,1/2], \gamma\in(0,1)$, and fix any $c$-balanced, $\gamma$-nondegenerate tree Bayes net $P$ over $\{0,1\}^n$, with structure $\mathcal{S}(P)$. Then, there exists an absolute constant $\kappa$ such that for any $i\in[n]$ one has
\[
\mutualinfo{X_i}{X_{\parent{i}}} \geq \kappa \;,
\]
where $X\sim P$. (Moreover, one can take $\kappa = \frac{c \gamma^2}{2\ln 2}$.)
\end{lemma}
\begin{proof}
Fix any such $i$, and write $X=X_i$, $Y=X_{\parent{i}}$ for convenience; and set $u \stackrel{{\mathrm {\footnotesize def}}}{=} \probaOf{Y=1}$,
$v\stackrel{{\mathrm {\footnotesize def}}}{=} \probaOf{X=1}$, $a\stackrel{{\mathrm {\footnotesize def}}}{=} \probaCond{X=1}{Y=1}$, and $\new{b}\stackrel{{\mathrm {\footnotesize def}}}{=} \probaCond{X=1}{Y=0}$. We then have
\begin{align*}
\mutualinfo{X}{Y}
&= \sum_{(x,y)\in\{0,1\}^2} \probaOf{X=x,Y=y} \log \frac{\probaOf{X=x,Y=y}}{\probaOf{X=x}\probaOf{Y=y}} \\
&= \sum_{(x,y)\in\{0,1\}^2} \probaCond{X=x}{Y=y}\probaOf{Y=y} \log \frac{\probaCond{X=x}{Y=y}}{\probaOf{X=x}} \\
&= (1-u)(1-b)\log\frac{1-b}{1-v} + (1-u)b\log\frac{b}{v} + u(1-a)\log\frac{1-a}{1-v} + ua\log\frac{a}{v} \\
&= u\varphi(a,v)+(1-u) \varphi(b,v) \;,
\end{align*}
where $\varphi(x,y) \stackrel{{\mathrm {\footnotesize def}}}{=} x\log\frac{x}{y}+(1-x)\log\frac{1-x}{1-y} \geq 0$ for $x,y\in[0,1]$ is the KL-divergence between
two Bernoulli distributions with parameters $x,y$.
From our assumptions of $c$-balanced and $\gamma$-nondegeneracy, we know that $u,v,a,b$ satisfy
\begin{align*}
c&\leq a,b,u,v\leq 1-c \\
\gamma &\leq \abs{a-b} \;,
\end{align*}
Noticing that $\abs{a-b}\geq \gamma$ implies that at least one of
$\abs{a-v}\geq \frac{\gamma}{2}$, $\abs{b-v}\geq \frac{\gamma}{2}$ holds, and that $\varphi(\cdot,v)$ is convex with a minimum at $v$, this leads to
\begin{align*}
\mutualinfo{X}{Y}
&\geq c\left( \varphi(a,v)+\varphi(b,v) \right)
\geq c\min\left( \varphi\left(v-\frac{\gamma}{2},v\right),\varphi\left(v+\frac{\gamma}{2},v\right) \right)
\geq \frac{1}{2\ln 2} c\gamma^2 \;,
\end{align*}
using the standard lower bound of $\varphi(x,y)\geq\frac{2}{\ln 2}(x-y)^2$ on the KL-divergence.
\end{proof}
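As a quick numerical sanity check of~\cref{lemma:unknown:tree:structure:min:mutualinfo} (not part of the argument above), the following Python sketch evaluates $\mutualinfo{X}{Y}$ directly from the parameters $u,a,b$ and verifies the bound $\kappa = \frac{c\gamma^2}{2\ln 2}$ over a grid of $c$-balanced, $\gamma$-nondegenerate values:

```python
import math

def mutual_info(u, a, b):
    """I(X;Y) in bits, where Y ~ Bern(u), X|Y=1 ~ Bern(a), X|Y=0 ~ Bern(b)."""
    v = u * a + (1 - u) * b            # marginal Pr[X = 1]
    info = 0.0
    for p_y, cond in ((u, a), (1 - u, b)):
        for p_x, p_x_given_y in ((v, cond), (1 - v, 1 - cond)):
            if p_x_given_y > 0:
                info += p_y * p_x_given_y * math.log2(p_x_given_y / p_x)
    return info

c, gamma = 0.3, 0.2
kappa = c * gamma ** 2 / (2 * math.log(2))   # the lemma's lower bound
grid = [t / 100 for t in range(30, 71, 5)]   # c-balanced parameter values
worst = min(mutual_info(u, a, b)
            for u in grid for a in grid for b in grid
            if abs(a - b) >= gamma)          # gamma-nondegenerate pairs only
assert worst >= kappa
```

The grid and parameter choices are ours, purely for illustration; the assertion simply confirms the lemma's bound on those instances.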
\paragraph{The Algorithm.} With these in hand, we are ready to describe and analyze the algorithm underlying~\cref{theo:check:tree:structure}:
Let $\gamma\in(0,1)$, $c\in(0,1/2)$ be fixed constants, and $Q$ be a known $c$-balanced, $\gamma$-nondegenerate tree Bayes net over $\{0,1\}^n$, with structure $\mathcal{S}(Q)$. Furthermore, let $P$ be an unknown tree Bayes net over $\{0,1\}^n$ with unknown structure $\mathcal{S}(P)$, to which we have sample access.
Let $\kappa=\kappa(c,\gamma) = \frac{c\gamma^2}{2\ln 2}$ as in~\cref{lemma:unknown:tree:structure:min:mutualinfo}, $c'\stackrel{{\mathrm {\footnotesize def}}}{=}\frac{c}{2}$, and $\lambda=\lambda(c')=16\log\frac{2}{c}$ as in~\cref{lemma:unknown:tree:structure:lipschitz:mutualinfo}. In view of applying~\cref{lemma:unknown:tree:structure:gap:mutualinfo} later to $P$, set
\[
\tau \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\kappa - (1-2{c'}^2)\kappa}{4\lambda} = \frac{1}{64\ln 2} \frac{c^3\gamma^2}{\log\frac{2}{c}}.
\]
The algorithm then proceeds as follows. (Below, $X$ denotes a random variable distributed according to $P$.)
\begin{enumerate}
\item Take $m = O\left( \frac{\log n}{\tau^2} \right)$ samples from $P$, and use them to
\begin{itemize}
\item Estimate all $2n(n-1)$ conditional probabilities $\probaCond{X_i=1}{X_j=a}$ (for $i\neq j$ and $a\in\{0,1\}$), and verify that they are all in $[c', 1-c']$ (ensuring that $P$ is $c'$-balanced), with probability at least $9/10$. Else, return \textsf{reject};
\item Estimate all $\binom{n}{2}+n$ values of $\expect{X_i}$ and $\expect{X_iX_j}$ to an additive $\tau$, with probability at least $9/10$, as in~\cref{fact:unknown:tree:structure:estimation}. (Call these estimates $\hat{\mu}_i$, $\hat{\rho}_{i,j}$.)
\end{itemize}
At the end of this step, we are guaranteed that $P$ is $c'$-balanced (or else we have rejected with probability at least $9/10$).
\item Check that all $\hat{\mu}_i$, $\hat{\rho}_{i,j}$ are within an additive $\tau$ of what they should be under $Q$. If so, return \textsf{accept}; else, return \textsf{reject}.
\end{enumerate}
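For concreteness, the two steps above can be sketched in Python as follows; this is an illustrative rendering (the function name, input encoding, and the assumption that $Q$'s moments are precomputed are ours), not an optimized implementation:

```python
import random
from itertools import combinations

def identity_test_tree(samples, mu_Q, rho_Q, tau, c_prime):
    """Two-step tester sketch: `samples` is a list of 0/1 tuples drawn from P,
    mu_Q[i] = E_Q[X_i] and rho_Q[(i, j)] = E_Q[X_i X_j] for i < j."""
    m, n = len(samples), len(samples[0])
    # Step 1: balancedness check via the empirical conditional probabilities
    for j in range(n):
        for a in (0, 1):
            sub = [s for s in samples if s[j] == a]
            if not sub:
                return "reject"
            for i in range(n):
                if i != j:
                    p = sum(s[i] for s in sub) / len(sub)
                    if not c_prime <= p <= 1 - c_prime:
                        return "reject"
    # Step 2: all first and second moments must match Q within an additive tau
    for i in range(n):
        if abs(sum(s[i] for s in samples) / m - mu_Q[i]) > tau:
            return "reject"
    for (i, j) in combinations(range(n), 2):
        if abs(sum(s[i] * s[j] for s in samples) / m - rho_Q[(i, j)]) > tau:
            return "reject"
    return "accept"

# demo with P = Q = uniform over {0,1}^4, whose moments are known exactly
rng = random.Random(0)
n, m = 4, 20000
samples = [tuple(int(rng.random() < 0.5) for _ in range(n)) for _ in range(m)]
mu_Q = [0.5] * n
rho_Q = {e: 0.25 for e in combinations(range(n), 2)}
assert identity_test_tree(samples, mu_Q, rho_Q, tau=0.05, c_prime=0.2) == "accept"
```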
Clearly, the algorithm only uses $O\left( \frac{\log^2\frac{1}{c}}{c^6\gamma^4} \log n \right)$ samples from $P$. We now establish its correctness: first, with probability at least $4/5$ by a union bound, all estimates performed in the first step are correct; we condition on that.
\begin{description}
\item[Completeness.] If $P=Q$, then $P$ is $c$-balanced, and thus \textit{a fortiori} $c'$-balanced: the algorithm does not reject in the first step. Moreover, clearly all $(\hat{\mu}_i)_i$, $(\hat{\rho}_{i,j})_{i,j}$ are then within an additive $\tau$ of the corresponding values of $P=Q$, so the algorithm returns \textsf{accept}.
\item[Soundness.] By contrapositive. If the algorithm returns \textsf{accept}, then $P$ is $c'$-balanced by the first step. Given our setting of $\tau$, by~\cref{lemma:unknown:tree:structure:lipschitz:mutualinfo} our estimates $(\hat{\mu}_i)_i$, $(\hat{\rho}_{i,j})_{i,j}$ are such that all corresponding quantities
\[
\hat{I}_{i,j}\stackrel{{\mathrm {\footnotesize def}}}{=} f(\hat{\mu}_i,\hat{\mu}_j,\hat{\rho}_{i,j})
\]
are within $\tau\lambda = \frac{\kappa - (1-2{c'}^2)\kappa}{4}$ of the mutual informations $\mutualinfo{X_i}{X_j}$ for $P$. But then, by~\cref{lemma:unknown:tree:structure:gap:mutualinfo}, this implies that the relative \emph{order} of all $\hat{I}_{i,j},\hat{I}_{i',j'}$ is the same as that of $\mutualinfo{X_i}{X_j},\mutualinfo{X_{i'}}{X_{j'}}$. This in turn implies that running the Chow--Liu algorithm on these $\hat{I}_{i,j}$'s as input would yield the same,
uniquely determined tree structure $\mathcal{S}(P)$ as running it on the actual $\mutualinfo{X_i}{X_j}$'s.
\new{To see this, we note that the Chow--Liu algorithm works
by computing a maximum-weight spanning tree (MST) with respect to the weights given by the pairwise mutual information.
The claim follows from the fact that the MST only depends on the relative ordering of the edge-weights.}
But since the $(\hat{\mu}_i)_i$, $(\hat{\rho}_{i,j})_{i,j}$ are also within an additive $\tau$ of the corresponding quantities for $Q$ (per our check in the second step), the same argument shows that running the Chow--Liu algorithm would result in the same, uniquely determined tree structure $\mathcal{S}(Q)$ as if running it on the actual mutual informations from $Q$. Therefore, $\mathcal{S}(P)=\mathcal{S}(Q)$, concluding the proof.
\end{description}
\end{proof}
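The maximum-weight spanning tree step underlying the Chow--Liu argument in the soundness analysis can be sketched as follows (a minimal Kruskal-style implementation; function and variable names are ours). The demo checks the key property used above: the tree depends only on the relative order of the weights.

```python
def max_weight_spanning_tree(weights, n):
    """Kruskal's algorithm on nodes 0..n-1.  `weights` maps undirected edges
    (i, j) with i < j to scores playing the role of the estimated mutual
    informations."""
    parent = list(range(n))        # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for (i, j) in sorted(weights, key=weights.get, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:               # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return sorted(tree)

# The output depends only on the relative order of the weights, not the values:
w = {(0, 1): 0.9, (0, 2): 0.5, (1, 2): 0.4, (1, 3): 0.7, (2, 3): 0.1}
w_cubed = {e: x ** 3 for e, x in w.items()}   # order-preserving distortion
assert max_weight_spanning_tree(w, 4) == max_weight_spanning_tree(w_cubed, 4)
```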
\subsubsection{The Case of Bounded Degree}\label{ssec:identity-unknown-upper:degreed}
In this section, we show how to test identity of unknown structure Bayesian networks with maximum in-degree $d$
under some non-degeneracy conditions. Intuitively, we want these conditions to ensure \emph{identifiability} of the structure:
that is, that any (unknown) Bayes net close to a non-degenerate Bayes net $Q$ must also share the same structure.
To capture this notion, observe that, by definition, non-equivalent Bayes net structures
satisfy different conditional independence constraints:
our non-degeneracy condition then requires that some of these possible
new conditional independence constraints be
\emph{far} from being satisfied by the non-degenerate Bayes net.
Formally, we have the following definition:
\begin{definition}[Non-degeneracy]\label{def:non:degeneracy}
For nodes $X_i$, $X_j$, set of nodes $S$, and a distribution $P$ over $\{0,1\}^n$, we say that $X_i$ and $X_j$ are \emph{$\gamma$-far from independent conditioned on $X_S$} if for all distributions $Q$ over $\{0,1\}^n$ such that $d_{\mathrm TV}(P,Q) < \gamma$, it holds that $X_i$ and $X_j$ are not independent conditioned on $X_S$.
A Bayes net $P$ is then called \emph{$\gamma$-non-degenerate with respect to structure $\mathcal{S}$ and degree $d$}
if for any nodes $X_i$, $X_j$ and set of nodes $S$ of size $|S| \leq d$ not containing $i$ or $j$ satisfying one of the following:
\begin{itemize}
\item[(i)] $X_i$ is a parent of $X_j$,
\item[(ii)] $S$ contains a node $X_k$ that is a child of both $X_i$ and $X_j$,
\item[(iii)] $X_i$ is a grandparent of $X_j$ and there is a child of $X_i$ and parent of $X_j$, $X_k$, that is not in $S$,
\item[(iv)] $X_i$ and $X_j$ have a common parent $X_k$ that is not in $S$
\end{itemize}
we have that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$ (where all relations are under structure $\mathcal{S}$).
\end{definition}
\tikzset{
treenode/.style = {align=center, inner sep=0pt, text centered,
font=\sffamily},
blacknode/.style = {treenode, circle, white, font=\sffamily\bfseries, draw=black,
fill=black, text width=1.5em}
whitenode/.style = {treenode, circle, black, draw=black,
text width=1.5em, very thick},
rednode/.style = {treenode, circle, white, font=\sffamily\bfseries, draw=red,
fill=red, text width=1.5em}
}
\begin{figure}[ht]\centering
\begin{tikzpicture}[->,>=stealth',level/.style={sibling distance = 1cm/#1, level distance = 1.5cm}]
\node[whitenode]{$X_j$}
child[<-]{ node[whitenode] {$X_i$} }
;
\end{tikzpicture}
\hspace{.1\textwidth}
\begin{tikzpicture}[->,>=stealth',level/.style={sibling distance = 1cm/#1, level distance = 1.5cm}]
\node[blacknode] {$X_k$}
child[<-]{ node[whitenode] {$X_i$} }
child[<-]{ node[whitenode] {$X_j$} }
;
\end{tikzpicture}
\hspace{.1\textwidth}
\begin{tikzpicture}[->,>=stealth',level/.style={sibling distance = 1cm/#1, level distance = 1.5cm}]
\node[whitenode] {$X_j$}
child[<-]{ node[rednode] {$X_k$}
child[<-]{ node[whitenode] {$X_i$} }
}
;
\end{tikzpicture}
\hspace{.1\textwidth}
\begin{tikzpicture}[->,>=stealth',level/.style={sibling distance = 1cm/#1, level distance = 1.5cm}]
\node[rednode] {$X_k$}
child{ node[whitenode] {$X_i$} }
child{ node[whitenode] {$X_j$} }
;
\end{tikzpicture}
\caption{The four possible conditions of~\cref{def:non:degeneracy}, from left (i) to right (iv).
The black nodes are the ones in $S$, the red ones (besides $X_i,X_j$) are not in $S$.}
\end{figure}
We shall also require some terminology: namely, the definition of the \emph{skeleton} of a Bayesian network as the underlying undirected graph of its structure. We can now state the main result of this section:
\begin{theorem} \label{thm:unknown-structure-identity}
There exists an algorithm with the following guarantees. Given the full description of a Bayes net $Q$ of degree at most $d$ which is $(c,C)$-balanced and $\gamma$-non-degenerate for $c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$, parameter $\epsilon\in(0,1]$, and sample access to a distribution $P$, promised to be a Bayes net of degree at most $d$ whose skeleton has no more edges than $Q$'s, the algorithm takes $\bigO{2^{d/2}\sqrt{n}/\epsilon^2+(2^d+ d \log n)/\gamma^2}$ samples from $P$, runs in time $\bigO{n}^{d+3}(1/\gamma^2+1/\epsilon^2)$, and distinguishes with probability at least $2/3$ between (i) $P=Q$ and (ii) $\normone{P-Q} > \epsilon$.
\end{theorem}
In~\cref{lem:struct-equiv}, we show that these non-degeneracy conditions are enough to ensure identifiability of the structure, up to equivalence. In~\cref{prop:conditional:independence:tester}, we give a test for conditional independence specialized to Bernoulli random variables.
In the last part, we provide a test for whether a non-degenerate Bayes net has a given structure using this conditional independence test, establishing~\cref{prop:struct-test}. We then combine this structure test with our test for Bayes nets with known structure to obtain~\cref{thm:unknown-structure-identity}. This structure tester, which may be of independent interest, has the following guarantees:
\begin{theorem}\label{prop:struct-test}
Let $\mathcal{S}$ be a structure of degree at most $d$ and $P$ be a Bayesian network with structure $\mathcal{S}'$ that also has degree at most $d$ and whose skeleton has no more edges than $\mathcal{S}$. Suppose that $P$ either (i) can be expressed as a Bayesian network with structure $\mathcal{S}$ that is $\gamma$-non-degenerate with degree $d$; or (ii) cannot be expressed as a Bayesian network with structure $\mathcal{S}$. Then there is an algorithm which can decide which case holds with probability $99/100$, given $\mathcal{S}$, $\gamma$, and sample access to $P$. The algorithm takes $\bigO{(2^d+ d \log n)/\gamma^2}$ samples and runs in time $\bigO{n^{d+3}/\gamma^2}$.
\end{theorem}
\noindent Using the above theorem, we can prove the main result of this section:
\begin{proof}[Proof of~\cref{thm:unknown-structure-identity}]
We first invoke the structure test given in~\cref{algo:bn:structure-test}. If it accepts, we run the known structure test given in~\cref{theo:upper:knowndegreed:identity}. We accept only if both accept.
The correctness and sample complexity now both follow from~\cref{prop:struct-test} and~\cref{theo:upper:knowndegreed:identity}.
Specifically, if the structure test accepts, then with high probability,
we have that $Q$ can be expressed as a Bayes net with the same structure as $P$,
and thus we have the pre-conditions for the known structure test. If either test rejects, then $P \neq Q$.
\end{proof}
\paragraph{Non-degeneracy and Equivalent Structures.} \label{ssec:non-degeneracy}
The motivation behind the $\gamma$-non-degeneracy condition is the following:
if $Q$ is $\gamma$-non-degenerate, then for any Bayesian network $P$ with degree at most $d$
that has $d_{\mathrm TV}(P,Q) < \gamma$ we will argue that $P$ can be described using the same structure $\mathcal{S}$
as we are given for $Q$. Indeed, the structure $\mathcal{S}'$ of $P$ will have the property that $\mathcal{S}$ and $\mathcal{S}'$
both can be used to describe the same Bayesian networks, a property known as \emph{$I$-equivalence}.
It will then remain to make this algorithmic, that is to describe how to decide whether $P$ can be described
with the same structure as $Q$ or whether $d_{\mathrm TV}(P,Q) \geq \gamma$.
Given such a decision procedure, if the former case holds we can invoke our existing known-structure tester, and if the latter case holds we can simply reject.
We will require for our proofs the following definition:
\begin{definition}[$\lor$-structure]
For a structure $\mathcal{S}$, a triple $(i,j,k)$ is a \emph{$\lor$-structure} (also known as an \emph{immorality})
if $i$ and $j$ are parents of $k$ but neither $i$ nor $j$ is a parent of the other.
\end{definition}
\noindent The following result, due to Verma and Pearl~\cite{VermaPearl:90}, will play a key role:
\begin{lemma}\label{lem:Verma-Pearl}
Two structures $\mathcal{S}$ and $\mathcal{S}'$ are $I$-equivalent if and only if
they have the same skeleton and the same $\lor$-structures.
\end{lemma}
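The criterion of~\cref{lem:Verma-Pearl} is easy to check mechanically on small examples; below is a short Python sketch (with our own encoding of a DAG as a set of directed edges $(\text{parent},\text{child})$):

```python
def skeleton(edges):
    """Undirected version of a DAG given as a set of (parent, child) pairs."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """All triples (i, j, k), i < j: both i and j are parents of k, and i, j
    are not adjacent in the skeleton (i.e., (i, j, k) is an immorality)."""
    skel = skeleton(edges)
    parents = {}
    for p, c in edges:
        parents.setdefault(c, set()).add(p)
    return {(i, j, k)
            for k, ps in parents.items()
            for i in ps for j in ps
            if i < j and frozenset((i, j)) not in skel}

def i_equivalent(e1, e2):
    """Verma--Pearl criterion: same skeleton and same v-structures."""
    return skeleton(e1) == skeleton(e2) and v_structures(e1) == v_structures(e2)

chain = {(0, 1), (1, 2)}   # 0 -> 1 -> 2
rev = {(1, 0), (2, 1)}     # 0 <- 1 <- 2: same skeleton, no immorality
vee = {(0, 2), (1, 2)}     # 0 -> 2 <- 1: one immorality
fork = {(2, 0), (2, 1)}    # 0 <- 2 -> 1: none
assert i_equivalent(chain, rev)
assert not i_equivalent(vee, fork)
```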
Note that, for general structures $\mathcal{S}$, $\mathcal{S}'$, it may be possible to represent all Bayesian networks with structure $\mathcal{S}$ as ones with structure $\mathcal{S}'$, but not vice versa. Indeed, this can easily be achieved by adding to $\mathcal{S}$ edges into any node (if any) with fewer than $d$ parents.
This is the rationale for the assumption in~\cref{prop:struct-test} that $\mathcal{S}'$ has no more edges than $\mathcal{S}$: this assumption is required to guarantee that $\mathcal{S}$ and $\mathcal{S}'$ are $I$-equivalent unless $d_{\mathrm TV}(P,Q) \geq \gamma$.
We now prove that any Bayesian network $Q$ satisfying the conditions of~\cref{prop:struct-test} that is non-degenerate with respect to a structure can in fact be expressed as having that structure.
\begin{lemma} \label{lem:struct-equiv}
Fix $\gamma > 0$. If $Q$ is a Bayesian network with structure $\mathcal{S}'$ of degree at most $d$ that is $\gamma$-non-degenerate with respect to a structure $\mathcal{S}$ with degree at most $d$ and $\mathcal{S}'$ has no more edges than $\mathcal{S}$, then $\mathcal{S}$ and $\mathcal{S}'$ are $I$-equivalent.
\end{lemma}
Note that $Q$ being $\gamma$-non-degenerate for some $\gamma > 0$ is equivalent to a set of conditional independence conditions all being false, since if $X_i$ and $X_j$ are not conditionally independent with respect to $X_S$, then there is a configuration $a$ such that $\Pr_Q[X_S =a] >0$ and $I(X_i;X_j \mid X_S=a) > 0$.
\begin{proof}
We first show that $\mathcal{S}$ and $\mathcal{S}'$ have the same skeleton and then that they have the same $\lor$-structures. We need the following:
\begin{claim} \label{clm:super-parents}
Let $S$ be the set of parents of $X_i$ in a Bayesian network $Q$ with structure $\mathcal{S}$. Let $X_j$ be a node that is neither in $S$ nor a descendant of $X_i$. Then $X_i$ and $X_j$ are independent conditioned on $X_S$.
\end{claim}
\begin{proof}
Firstly, we note that there is a numbering of the nodes, consistent with the DAG of $\mathcal{S}$, in which every node of $S\cup\{j\}$ precedes $i$: explicitly, we can move $X_i$ and all its descendants to the end of the list of nodes to obtain such a numbering.
Letting $D \stackrel{{\mathrm {\footnotesize def}}}{=} \{1,\dots, i-1\}$ (with respect to this numbering), we have, from the definition of Bayesian networks, that $\Pr_Q[ X_i=1 \mid X_{D}=b ] = \Pr_Q[ X_i=1 \mid X_S=b_S ]$ for all configurations $b$ of $D$. Then, for any configuration $a$ of $S'\stackrel{{\mathrm {\footnotesize def}}}{=} S\cup\{j\}$, we have
\begin{align*}
\Pr_Q[X_i=1 \mid X_{S'}=a] & = \sum_{b:b_{S'}=a} \Pr_Q[X_i=1 \mid X_D = b] \Pr_Q[X_D=b \mid X_{S'}=a] \\
& = \Pr_Q[X_i=1 \mid X_S=a_S] \sum_{b:b_{S'}=a} \Pr_Q[X_D=b \mid X_{S'}=a]\\
&= \Pr_Q[X_i=1 \mid X_S=a_S] \;,
\end{align*}
concluding the proof.
\end{proof}
Suppose for a contradiction that $(i,j)$ is an edge in the skeleton of $\mathcal{S}$ but not in $\mathcal{S}'$.
Without loss of generality, we may assume that $X_j$ is not a descendant of $X_i$ in $\mathcal{S}'$ (since otherwise we can swap the roles of $i$ and $j$ in the argument). Then, letting $S$ denote the set of parents of $X_i$ in $\mathcal{S}'$, we have that $X_j$ is not in $S$ (as $(i,j)$ is not an edge of $\mathcal{S}'$) nor a descendant of $X_i$, so by~\cref{clm:super-parents} $X_i$ and $X_j$ are independent conditioned on $X_S$. However, since one of $X_i$ and $X_j$ is a parent of the other in $\mathcal{S}$, condition (i) of $\gamma$-non-degeneracy gives that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$. This is a contradiction, so all edges in the skeleton of $\mathcal{S}$ must be edges of $\mathcal{S}'$. But by assumption $\mathcal{S}'$ has no more edges than $\mathcal{S}$, and so they have the same skeleton.
Next we show that $\mathcal{S}$ and $\mathcal{S}'$ have the same $\lor$-structures. Assume for the sake of contradiction that $(i,j,k)$ is a $\lor$-structure in $\mathcal{S}$ but not in $\mathcal{S}'$. Since $\mathcal{S}$ and $\mathcal{S}'$ have the same skeleton, this cannot be because $X_i$ is the parent of $X_j$ or vice versa. Therefore, it must be that at least one of $X_i$ or $X_j$ is a child of $X_k$, rather than its parent, in $\mathcal{S}'$.
As before, without loss of generality we may assume that $X_j$ is not a descendant of $X_i$ in $\mathcal{S}'$. This implies that $X_k$ cannot be a child of $X_i$ in $\mathcal{S}'$, as then $X_j$ would have to be a child of $X_k$, and hence a descendant of $X_i$. Thus $S$, the set of parents of $X_i$ in $\mathcal{S}'$, contains $X_k$ but not $X_j$; and~\cref{clm:super-parents} then implies that $X_i$ and $X_j$ are independent conditioned on $X_S$. However, in $\mathcal{S}$, $X_k$ is a child of both $X_i$ and $X_j$, and so by condition (ii) of $\gamma$-non-degeneracy we have that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$. This contradiction shows that all $\lor$-structures in $\mathcal{S}$ are $\lor$-structures in $\mathcal{S}'$ as well.
Finally, we assume for the sake of contradiction that $(i,j,k)$ is a $\lor$-structure in $\mathcal{S}'$ but not $\mathcal{S}$.
Again without loss of generality, we assume that $X_j$ is not a descendant of $X_i$ in $\mathcal{S}'$; and let $S$ be the parents of $X_i$ in $\mathcal{S}'$. Note that neither $X_k$ nor $X_j$ is in $S$ since this is a $\lor$-structure.
Now by~\cref{clm:super-parents}, $X_i$ and $X_j$ are independent conditioned on $X_S$. In $\mathcal{S}$, however, $(i,j,k)$ is not a $\lor$-structure yet $(i,k)$, $(j,k)$ (but not $(i,j)$) are in the skeleton of $\mathcal{S}$.
Thus at least one of $X_i$, $X_j$ is a child of $X_k$ in $\mathcal{S}$; if only one is a child, then the other must be $X_k$'s parent. In the case of two children, we apply condition (iv) of $\gamma$-non-degeneracy, and in the case of a parent and a child, we apply condition (iii).
Either way, we obtain that, since $X_k$ is not in $S$, $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$. This contradiction shows that all $\lor$-structures in $\mathcal{S}'$ are also $\lor$-structures in $\mathcal{S}$.
We thus have all the conditions for~\cref{lem:Verma-Pearl} to apply and conclude that $\mathcal{S}$ and $\mathcal{S}'$ are $I$-equivalent.
\end{proof}
\paragraph{Conditional Independence Tester.} \label{ssec:cond-ind-test}
We now turn to establishing the following proposition:
\begin{proposition}\label{prop:conditional:independence:tester}
There exists an algorithm that, given parameters $\gamma, \tau > 0$, a set of coordinates $S\subseteq [n]$ with $|S| \leq d$, and coordinates $i,j\in[n]\setminus S$, as well as sample access to a distribution $P$ over $\{0,1\}^n$, satisfies the following. With probability at least $1-\tau$, the algorithm accepts when $X_i$ and $X_j$ are independent conditioned on $X_S$, and rejects when no distribution $Q$ with $d_{\mathrm TV}(P,Q) < \gamma$ has this property (and may do either if neither case holds). Further, the algorithm takes $O((2^d + \log(1/\tau))/\gamma^2)$ samples from $P$ and runs in time $O((2^d + \log(1/\tau))/\gamma^2)$.
\end{proposition}
\begin{figure}[ht]
\begin{framed}
\begin{description}
\item[Input] $\gamma, \tau > 0$, coordinates $i,j \in [n]$ and $S \subseteq [n]$ with $i,j \notin S$, and sample access to a distribution $P$ on $\{0,1\}^n$.
\item[-] Take $O((2^d + \log(1/\tau))/\gamma^2)$ samples from $P$. Let $\tilde P$ be the resulting empirical distribution.
\item[For each] configuration $a \in \{0,1\}^{|S|}$ of $S$,
\begin{description}
\item[-] Compute the empirical conditional means $\mu_{i,a} = \mathbb{E}_{X \sim \tilde P}[X_i \mid X_S=a]$ and $\mu_{j,a} = \mathbb{E}_{X \sim \tilde P}[X_j\mid X_S=a]$.
\item[-] Compute the conditional covariance $\mathrm{Cov}_{\tilde P}[X_i,X_j \mid X_S=a]=\mathbb{E}_{X \sim \tilde P}[(X_i-\mu_{i,a})(X_j - \mu_{j,a}) \mid X_S=a]$.
\end{description}
\item[-] Compute the expected absolute value of the conditional covariance, $\beta = \mathbb{E}_{Y \sim \tilde P}[|\mathrm{Cov}_{\tilde P}[X_i,X_j \mid X_S=Y_S]|]$.
\item[If] $\beta \leq \gamma/3$, return $\textsf{accept}$
\item[Else] return $\textsf{reject}.$
\end{description}
\end{framed}
\caption{Testing whether $X_i$ and $X_j$ are independent conditioned on $S$ or are $\gamma$-far from being so.} \label{algo:cond:independ}
\end{figure}
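A direct Python rendering of~\cref{algo:cond:independ} may help make the statistic concrete; the sample representation and function names below are ours, and the empirical distribution $\tilde P$ is represented by the sample list itself:

```python
import random
from collections import defaultdict

def cond_cov_statistic(samples, i, j, S):
    """beta = E_Y[ |Cov[X_i, X_j | X_S = Y_S]| ] under the empirical
    distribution defined by `samples` (a list of 0/1 tuples)."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[tuple(s[k] for k in S)].append(s)
    m = len(samples)
    beta = 0.0
    for group in buckets.values():
        g = len(group)
        mu_i = sum(s[i] for s in group) / g
        mu_j = sum(s[j] for s in group) / g
        mu_ij = sum(s[i] * s[j] for s in group) / g
        beta += (g / m) * abs(mu_ij - mu_i * mu_j)   # weighted by Pr[X_S = a]
    return beta

def cond_independence_test(samples, i, j, S, gamma):
    """Accept iff the statistic is at most gamma / 3, as in the figure."""
    return "accept" if cond_cov_statistic(samples, i, j, S) <= gamma / 3 else "reject"

rng = random.Random(1)
dep = []                # X_0 = X_1: far from conditionally independent
for _ in range(5000):
    x = rng.randint(0, 1)
    dep.append((x, x, rng.randint(0, 1)))
ind = [tuple(rng.randint(0, 1) for _ in range(3)) for _ in range(5000)]
assert cond_independence_test(dep, 0, 1, [2], gamma=0.3) == "reject"
assert cond_independence_test(ind, 0, 1, [2], gamma=0.3) == "accept"
```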
\begin{proof} The algorithm is given in~\cref{algo:cond:independ}. Its sample complexity is immediate, and it is not hard to see that it runs in time linear in the number of samples. It remains to prove correctness.
To do so, define $D \stackrel{{\mathrm {\footnotesize def}}}{=} S \cup \{i,j\}$. Let $P_D,\tilde P_D$ be the distributions of $X_D$ for $X$ distributed as $P,\tilde P$ respectively. Since $P_D$ is a discrete distribution with support size at most $2^{d+2}$, by standard results the empirical $\tilde P_D$ obtained from our $ O((2^{d+2} + \log(1/\tau))/\gamma^2)$ samples is such that $d_{\mathrm TV}(P_D, \tilde P_D) \leq \gamma/10$ with probability at least $1-\tau$. We hereafter assume that this holds.
Note that the distribution $P_D$ determines whether $P$ is such that $X_i$ and $X_j$ are independent conditioned on $S$ or is $\delta$-far from being so for any $\delta$. Thus if these two nodes are $\gamma$-far from being conditionally independent in $P$, then they are $\gamma$-far in $P_D$ and therefore are $9\gamma/10$-far in $\tilde P_D$. We now need to show that the expected absolute value of the conditional covariance is a good approximation of the distance from conditional independence, which is our next claim:
\begin{claim} \label{clm:good-conditional-dependence-metric}
For a distribution $Q$ on $\{0,1\}^n$, let $\gamma$ be the largest $\gamma \geq 0$ such that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$ in $Q$. Let $\beta = \mathbb{E}_{Y \sim Q}[|\mathrm{Cov}_Q[X_i,X_j \mid X_S=Y_S]|]$. Then we have $\beta/3 \leq \gamma \leq 2\beta$.
\end{claim}
\begin{proof}
For simplicity, we assume that $|D|=n$ and that we have only coordinates $i$, $j$ and $S$.
Firstly, we show that $\beta \leq 3\gamma$. By assumption, there is a distribution $R$ with $d_{\mathrm TV}(Q,R)=\gamma$ under which $X_i$ and $X_j$ are independent conditioned on $X_S$. Thus $R$ has $|\mathrm{Cov}_R[X_i,X_j \mid X_S=a]|=0$ for all configurations $a$.
Since $0 \leq |\mathrm{Cov}_Q[X_i,X_j \mid X_S=a]| \leq 1$, it follows that $|\beta - \mathbb{E}_{Y \sim R}[|\mathrm{Cov}_R[X_i,X_j\mid X_S=Y_S]|]| \leq 3d_{\mathrm TV}(Q,R)$ \new{as $\mathrm{Cov}[X_i,X_j \mid X_S=Y_S] = \mathbb{E}[X_iX_j \mid X_S=Y_S] - \mathbb{E}[X_i \mid X_S=Y_S]\mathbb{E}[X_j \mid X_S=Y_S]$} and so $\beta \leq 3\gamma$.
Next, we show that $\gamma \leq 2\beta$. To do so, we construct a distribution $S$ on $\{0,1\}^n$ with $d_{\mathrm TV}(Q,S) = 2 \beta$ in which $X_i$ and $X_j$ are independent conditioned on $X_S$.
Explicitly, for a configuration $a$ of $S$ and $b,c \in \{0,1\}$, we set
\[
\Pr_S[X_S=a,X_i=b,X_j=c] \stackrel{{\mathrm {\footnotesize def}}}{=} \Pr_Q[X_S=a,X_i=b,X_j=c] - (-1)^{b+c} \mathrm{Cov}_Q[X_i,X_j \mid X_S=a] \Pr_Q[X_S=a] \; .
\]
For each configuration $a$, this increases two probabilities by $|\mathrm{Cov}_Q[X_i,X_j \mid X_S=a]|\Pr_Q[X_S=a]$ and decreases two probabilities by the same amount. Thus, provided that all probabilities are still non-negative (which we show below), $S$ is a distribution with $d_{\mathrm TV}(Q,S) = \sum_a 2|\mathrm{Cov}_Q[X_i,X_j \mid X_S=a]|\Pr_Q[X_S=a]=2\beta$.
Now consider the conditional joint distribution of $X_i$, $X_j$ for a given configuration $a$. Let $p_{b,c} \stackrel{{\mathrm {\footnotesize def}}}{=} \Pr_Q[X_i=b,X_j=c \mid X_S=a]$. Then the conditional covariance $\mathrm{Cov}_Q[X_i,X_j \mid X_S=a]$, which we denote by $\alpha$ for simplicity here, is
\begin{align*}
\alpha & = \mathbb{E}[X_iX_j \mid X_S=a] -\mathbb{E}[X_i\mid X_S=a] \mathbb{E}[X_j\mid X_S=a] \\
& = p_{1,1} -(p_{1,0} + p_{1,1})(p_{0,1} + p_{1,1}) \\
& = p_{1,1} (1- p_{1,0} -p_{0,1} - p_{1,1}) - p_{1,0} p_{0,1} \\
& = p_{1,1} p_{0,0} - p_{1,0} p_{0,1} \;.
\end{align*}
In $S$, these conditional probabilities change by $\alpha$: $p_{1,1}$ and $p_{0,0}$ are decreased by $\alpha$, while $p_{0,1}$ and $p_{1,0}$ are increased by it. Note that if $\alpha > 0$, then $p_{1,1}$ and $p_{0,0}$ are at least $p_{1,1} p_{0,0} \geq \alpha$; and when $\alpha<0$, $p_{0,1}$ and $p_{1,0}$ are at least $p_{1,0} p_{0,1} \geq - \alpha$. Thus all probabilities in $S$ remain in $[0,1]$, as claimed.
A similar computation of the conditional covariance in $S$ to that for $\alpha$ above yields
\begin{align*}
\mathrm{Cov}_{S}[X_i,X_j \mid X_S=a] & = (p_{1,1} - \alpha)(p_{0,0} - \alpha) - (p_{1,0} +\alpha) (p_{0,1} + \alpha) \\
& = p_{1,1} p_{0,0} - p_{1,0} p_{0,1} - (p_{0,0}+p_{1,1}+p_{0,1}+p_{1,0})\alpha \\
& = p_{1,1} p_{0,0} - p_{1,0} p_{0,1} - \alpha = 0 \;.
\end{align*}
Since $X_i$ and $X_j$ are Bernoulli random variables, the conditional covariance being zero implies that they are conditionally independent.
\end{proof}
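The local correction used in the proof of~\cref{clm:good-conditional-dependence-metric}, shifting the four conditional cells by $\alpha$, can be checked mechanically. A small Python sketch with hypothetical cell values:

```python
def zero_out_covariance(p):
    """Given the four cell probabilities p[(b, c)] = Pr[X_i=b, X_j=c | X_S=a],
    apply the shift from the proof: subtract alpha from the (1,1) and (0,0)
    cells and add it to the (1,0) and (0,1) cells."""
    alpha = p[1, 1] * p[0, 0] - p[1, 0] * p[0, 1]   # conditional covariance
    return {(1, 1): p[1, 1] - alpha, (0, 0): p[0, 0] - alpha,
            (1, 0): p[1, 0] + alpha, (0, 1): p[0, 1] + alpha}

p = {(1, 1): 0.4, (0, 0): 0.3, (1, 0): 0.2, (0, 1): 0.1}   # hypothetical values
q = zero_out_covariance(p)
cov_q = q[1, 1] * q[0, 0] - q[1, 0] * q[0, 1]
assert min(q.values()) >= 0 and abs(sum(q.values()) - 1) < 1e-12
assert abs(cov_q) < 1e-12      # zero covariance => conditional independence
```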
\begin{description}
\item[Completeness.] Suppose, by contrapositive, that the algorithm rejects, i.e., that $\beta > \gamma/3$. Then~\cref{clm:good-conditional-dependence-metric} implies that in $\tilde{P}$, $X_i$ and $X_j$ are $\gamma/9$-far from independent conditioned on $X_S$. Thus they are $\gamma/9$-far in $\tilde P_D$ and, since $d_{\mathrm TV}(P_D, \tilde P_D) \leq \gamma/10 < \gamma/9$, this implies that they are not conditionally independent in $P_D$. Thus, in $P$, $X_i$ and $X_j$ are not independent conditioned on $X_S$.
\item[Soundness.] Now suppose that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$ in $P$. Per the foregoing discussion, this implies that they are $(9\gamma/10)$-far from being so in $\tilde P_D$. Now~\cref{clm:good-conditional-dependence-metric} guarantees that $\mathbb{E}_{Y \sim \tilde P}[|\mathrm{Cov}_{\tilde P}[X_i,X_j\mid X_S=Y_S]|] \geq 9\gamma/20 > \gamma/3$, and therefore the algorithm rejects in this case. This completes the proof of correctness.
\end{description}
\end{proof}
\paragraph{Structure Tester.} \label{ssec:structure-test}
Finally, we turn to the proof of~\cref{prop:struct-test}, analyzing the structure testing algorithm described in~\cref{algo:bn:structure-test}.
\begin{figure}[ht]
\begin{framed}
\begin{description}
\item[Input] $\gamma > 0$, a structure $\mathcal{S}$ and a Bayesian network $P$
\item[-] Draw $O((2^d + d\log n)/\gamma^2)$ samples from $P$. Call this set of samples $\mathcal{T}$.
\item[For each] nodes $X_i$, $X_j$ and set $S$ of nodes with $|S| \leq d$ and $i,j \notin S$
\begin{description}
\item[If] one of the following conditions holds in structure $\mathcal{S}$
\begin{itemize}
\item[(i)] $X_i$ is the parent of $X_j$,
\item[(ii)] $S$ contains a node $X_k$ that is a child of both $X_i$ and $X_j$,
\item[(iii)] $X_i$ is a grandparent of $X_j$ and there is a child of $X_i$ and parent of $X_j$, $X_k$, that is not in~$S$,
\item[(iv)] $X_i$ and $X_j$ have a common parent $X_k$ that is not in $S$
\end{itemize}
\item[Then] run the conditional independence tester of~\cref{prop:conditional:independence:tester} (\cref{algo:cond:independ}) on the set of samples $\mathcal{T}$ to test whether $X_i$ and $X_j$ are independent conditioned on $X_S$.
\item[If] the conditional independence tester accepts, return $\textsf{reject}$.
\end{description}
\item[Otherwise] return $\textsf{accept}$.
\end{description}
\end{framed}
\caption{Testing whether $P$ has structure $\mathcal{S}$.}\label{algo:bn:structure-test}
\end{figure}
\begin{proof}[Proof of~\cref{prop:struct-test}]
We first show correctness. There are at most $n^{d+2}$ possible choices of $X_i$, $X_j$, and $S$,
and thus we run the conditional independence tester at most $n^{d+2}$ times.
With $O((2^d + d\log n)/\gamma^2)$ samples, each test gives an incorrect answer with probability
no more than $\tau=n^{-\Omega(d)}$. With appropriate choice of constants
we therefore have that all conditional independence tests are correct with probability $99/100$.
We henceforth condition on this, i.e., that all such tests are correct.
\begin{description}
\item[Completeness.] If $P$ is $\gamma$-non-degenerate with respect to structure $\mathcal{S}$ and degree $d$, then by the definition of non-degeneracy, for any $X_i$, $X_j$ and $S$ that satisfy one of conditions (i)--(iv) we have that $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$. Thus every conditional independence test rejects and the algorithm accepts.
\item[Soundness.] Now suppose by contrapositive that the algorithm accepts. For any $X_i$, $X_j$, and $S$ that satisfy one of conditions (i)--(iv), the conditional independence test must have rejected; that is, any such $X_i$ and $X_j$ are not independent conditioned on the corresponding $X_S$. Let $\gamma'$ be the minimum, over all $X_i$, $X_j$, and $S$ that satisfy one of conditions (i)--(iv) and all distributions $Q$ over $\{0,1\}^n$ such that $X_i$ and $X_j$ are independent conditioned on $X_S$ in $Q$, of the total variation distance between $P$ and $Q$. Since there are only finitely many such combinations of $X_i$, $X_j$, and $S$, this $\gamma'$ is positive. Thus $P$ is $\gamma'$-non-degenerate with respect to $\mathcal{S}$ and $d$. Since we assumed that $P$ has a structure $\mathcal{S}'$ with degree at most $d$ and whose skeleton has no more edges than that of $\mathcal{S}$, we can apply~\cref{lem:struct-equiv}, which yields that $\mathcal{S}$ and $\mathcal{S}'$ are $I$-equivalent. Thus $P$ can indeed be expressed as a Bayesian network with structure $\mathcal{S}$. This completes the proof of correctness.
\end{description}
To conclude, observe that we run the loop at most $n^{d+2}$ times, each using time at most $O((2^d + d\log n)/\gamma^2)$. The total running time is thus $O(n^{d+3}/\gamma^2)$.
\end{proof}
\section{Testing Closeness of Bayes Nets} \label{sec:closeness}
\subsection{Fixed Structure Bayes Nets} \label{sec:closeness-known}
We now establish the upper bound part of~\cref{thm:informal-identity-closeness-known} for closeness, namely testing closeness between two unknown Bayes nets
with the same (known) underlying structure.
\begin{restatable}{theorem}{closenessknowndegreedub}\label{theo:upper:knowndegreed:closeness}
There exists a computationally efficient algorithm with the following guarantees.
Given as input (i) a DAG $\mathcal{S}$ with $n$ nodes and maximum in-degree $d$,
(ii) a parameter $\epsilon > 0$, and (iii) sample access to two unknown $(c,C)$-balanced Bayes nets $P,Q$ with structure $\mathcal{S}$,
where $c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$;
the algorithm takes $\bigO{2^{d/2}\sqrt{n}/\epsilon^2}$ samples from $P$ and $Q$,
and distinguishes with probability at least $2/3$ between the cases $P=Q$ and $\normone{P-Q} > \epsilon$.
\end{restatable}
\begin{proof}
We choose $m\geq \alpha\frac{2^{d/2}\sqrt{n}}{\epsilon^2}$, where $\alpha>0$ is an absolute constant to be determined in the course of the analysis. Let $\mathcal{S}$ and $P,Q$ be as in the statement of the theorem, for \new{$c\geq \beta\frac{\log n}{ \sqrt{n} } \geq \beta\frac{\log n}{m}$ and} $C\geq \beta\frac{d+\log n}{m}$, for some other absolute constant $\beta>0$.
The algorithm proceeds as follows: first, taking $m$ samples from both $P$ and $Q$, it computes for each parental configuration $(i,a)\in[n]\times\{0,1\}^d$ the number of times $\hat{N}_{i,a}$ and $\hat{M}_{i,a}$ this configuration was observed among the samples, for respectively $P$ and $Q$. If for any $(i,a)$ it is the case that $\hat{N}_{i,a}$ and $\hat{M}_{i,a}$ are not within a factor $4$ of each other, the algorithm returns $\textsf{reject}$. (Using the same number of samples, it also estimates $p_{i,a}$ and $q_{i,a}$ within an additive $1/3$, and applies the same standard transformation as before so that we can hereafter assume $p_{i,a},q_{i,a}\leq 2/3$ for all $(i,a)$.)
Note that $\expect{\hat{N}_{i,a}} = m \probaDistrOf{P}{\Pi_{i,a}}$ and $\expect{\hat{M}_{i,a}} = m \probaDistrOf{Q}{\Pi_{i,a}}$; given the $C$-balancedness assumption and by Chernoff and union bounds, with probability at least $9/10$ we have that $\hat{N}_{i,a}$ and $\hat{M}_{i,a}$ are within a factor $2$ of their expectation simultaneously for all $n2^d$ parental configurations. We hereafter condition on this (and observe that this implies that if $P=Q$, then the algorithm rejects in the step above with probability at most $1/10$).
The algorithm now draws independently $n2^d$ values $(M_{i,a})_{(i,a)}$, where $M_{i,a}\sim\poisson{\hat{N}_{i,a}}$; and takes fresh samples from $P,Q$ until it obtains $M_{i,a}$ samples for each parental configuration $\Pi_{i,a}$ (for each of the two distributions). If at any point the algorithm takes more than $10m$ samples, it stops and returns $\textsf{reject}$.
\noindent (Again, note that by concentration (this time of Poisson random variables)\footnote{Specifically, if $X\sim\poisson{\lambda}$ then we have $\probaOf{\abs{X-\lambda} > \lambda/2} = e^{-\Omega(\lambda)}$.}, our assumption that $\hat{N}_{i,a} \geq m\probaDistrOf{P}{\Pi_{i,a}}/2 \geq mC/2 = \beta(d+\log n)$ and a union bound, the algorithm will reject at this stage with probability at most $1/10$.)
Conditioning on not having rejected, we define for each parental configuration $\Pi_{i,a}$ the quantity $U_{i,a}$ (resp. $V_{i,a}$) as the number of samples from $P$ (resp. $Q$) among the first $M_{i,a}$ satisfying $\Pi_{i,a}$ for which $X_i=1$. In particular, this implies that $U_{i,a} \sim\poisson{p_{i,a}\hat{N}_{i,a}}$, $V_{i,a} \sim\poisson{q_{i,a}\hat{N}_{i,a}}$ (and are independent), and that the random variables $W_{i,a}$ defined below:
\[
W_{i,a} \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{(U_{i,a}-V_{i,a})^2 - (U_{i,a}+V_{i,a})}{U_{i,a}+V_{i,a}}
\]
are independent. We then consider the statistic $W$:
\[
W \stackrel{{\mathrm {\footnotesize def}}}{=} \sum_{i=1}^n \sum_{a\in\{0,1\}^d} W_{i,a}.
\]
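As a numerical sanity check (not part of the proof), the following sketch verifies that each $W_{i,a}$ is centered when the two Poisson rates agree, i.e.\ in the $P=Q$ case; the rate and sample count below are arbitrary illustration values.

```python
import numpy as np

# Per-configuration statistic from the text: W = ((U-V)^2 - (U+V)) / (U+V),
# where U, V are the Poissonized counts for P and Q on one parental configuration.
def w_statistic(u, v):
    """Chi-square-type closeness statistic, applied elementwise."""
    s = u + v
    w = np.zeros_like(s, dtype=float)
    mask = s > 0          # configurations never observed contribute 0
    w[mask] = ((u[mask] - v[mask]) ** 2 - s[mask]) / s[mask]
    return w

rng = np.random.default_rng(0)
lam = 50.0                # plays the role of p_{i,a} * N-hat_{i,a}
u = rng.poisson(lam, size=200_000)
v = rng.poisson(lam, size=200_000)   # same rate: the P = Q case
w = w_statistic(u, v)
print(abs(w.mean()))      # close to 0: the statistic is unbiased when P = Q
```

Note that the zero mean holds only in expectation: a single term with $U=V$ deterministically evaluates to $-1$, and positive fluctuations of $(U-V)^2$ compensate on average.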
\begin{claim}\label{claim:general:closeness:expectation}
If $P=Q$, then $\expect{W} = 0$. Moreover, if $\normone{P-Q} > \epsilon$ then $\expect{W} > \frac{m\epsilon^2}{144}$.
\end{claim}
\begin{proof}
We start by analyzing the expectation of $W_{i,a}$, for any fixed $(i,a)\in[n]\times\{0,1\}^d$. The same argument as in the product case leads to the conclusion that
$\expect{W_{i,a}}=0$ if $P=Q$ (proving the first part of the claim), and that otherwise we have
\begin{equation}\label{eq:general:closeness:expectation}
\expect{W_{i,a}} \geq \frac{\min(1,mc)}{3}\hat{N}_{i,a} \frac{(p_{i,a}-q_{i,a})^2}{p_{i,a}+q_{i,a}} = \frac{1}{3}\hat{N}_{i,a} \frac{(p_{i,a}-q_{i,a})^2}{p_{i,a}+q_{i,a}}
\geq \frac{2}{9}\hat{N}_{i,a} \frac{(p_{i,a}-q_{i,a})^2}{(p_{i,a}+q_{i,a})(2-p_{i,a}-q_{i,a})}
\end{equation}
(since $mc \geq \beta\log n \gg 1$ and $0<p_{i,a},q_{i,a} \leq 2/3$). Summing over all $(i,a)$'s and recalling that $\hat{N}_{i,a} \geq m\probaDistrOf{P}{\Pi_{i,a}}/2$, $\hat{N}_{i,a} \geq m\probaDistrOf{Q}{\Pi_{i,a}}/2$ yields the bound:
\[
\expect{W} \geq \frac{m}{9}\sum_{(i,a)}\sqrt{\probaDistrOf{P}{\Pi_{i,a}}\probaDistrOf{Q}{\Pi_{i,a}}} \frac{(p_{i,a}-q_{i,a})^2}{(p_{i,a}+q_{i,a})(2-p_{i,a}-q_{i,a})}
\geq \frac{m}{18} \hellinger{P}{Q}^2 \geq \frac{m}{18}\left( 1-\sqrt{1-\frac{1}{4}\normone{P-Q}^2} \right)
\]
(where we relied on~\cref{lemma:hellinger:bn} for the second-to-last inequality). This gives the last part of the claim, as the RHS is at least $\frac{m\epsilon^2}{144}$ whenever $\normone{P-Q}^2 > \epsilon^2$.
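For completeness, the last numerical step follows from the elementary inequality $\sqrt{1-x}\le 1-x/2$:
\[
1-\sqrt{1-\tfrac{1}{4}\normone{P-Q}^2}
\;\ge\; 1-\Bigl(1-\tfrac{1}{8}\normone{P-Q}^2\Bigr)
\;=\;\tfrac{1}{8}\normone{P-Q}^2
\;>\;\tfrac{\epsilon^2}{8},
\]
and $\frac{m}{18}\cdot\frac{\epsilon^2}{8} = \frac{m\epsilon^2}{144}$.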
\end{proof}
\noindent We now bound the variance of our estimator:
\begin{claim}\label{claim:general:closeness:variance}
$\mathop{\textnormal{Var}}\nolimits[W] \leq n 2^{d+1} + 5\sum_{(i,a)} \hat{N}_{i,a} \frac{(p_{i,a}-q_{i,a})^2}{p_{i,a}+q_{i,a}} = O(n 2^d + \expect{W})$. In particular, if $P=Q$ then $\mathop{\textnormal{Var}}\nolimits[W] \leq n 2^{d+1}$.
\end{claim}
\begin{proof}
We follow the proof of~\cref{claim:product:closeness:variance} to analyze the variance of $W_{i,a}$, obtaining a bound of $\mathop{\textnormal{Var}}\nolimits[W_{i,a}] \leq 2+5\hat{N}_{i,a} \frac{(p_{i,a}-q_{i,a})^2}{p_{i,a}+q_{i,a}}$. Invoking \cref{eq:general:closeness:expectation} and summing over all $(i,a)\in[n]\times\{0,1\}^d$ then leads to the desired conclusion.
\end{proof}
\noindent The correctness of our algorithm then follows from the two claims above:
\begin{lemma}
Set $\tau \stackrel{{\mathrm {\footnotesize def}}}{=} \frac{\epsilon^2}{288}$. Then we have the following:
\begin{itemize}
\item If $\normone{P-Q} = 0$, then $\probaOf{ W \geq \tau m } \leq \frac{1}{10}$.
\item If $\normone{P-Q} > \epsilon$, then $\probaOf{ W < \tau m } \leq \frac{1}{10}$.
\end{itemize}
\end{lemma}
\begin{proof}
We start with the soundness case, i.e., assuming $\normone{P-Q} > \epsilon$, which by~\cref{claim:general:closeness:expectation} implies $\expect{W} > 2\tau m$. Then, by Chebyshev's inequality,
\begin{align}
\probaOf{ W < \tau m } &\leq \probaOf{ \expect{W} - W > \frac{1}{2}\expect{W} }
\leq \frac{4\mathop{\textnormal{Var}}\nolimits[W]}{\expect{W}^2} \notag\\
&\leq \frac{8n 2^d}{\expect{W}^2} + \frac{12}{5\expect{W}} \tag{\cref{claim:general:closeness:variance}} \\
&=O\!\left( \frac{n 2^d}{\epsilon^4 m^2} + \frac{1}{m\epsilon^2}\right) \notag \;.
\end{align}
We want to bound this quantity by $1/10$, for which it is enough to have
$\frac{n 2^d}{\epsilon^4 m^2} \ll 1$ and $\frac{1}{m\epsilon^2} \ll 1$, which both hold for an appropriate choice of the absolute constant $\alpha>0$
in our setting of $m$.
Turning to the completeness, we suppose $\normone{P-Q} = 0$. Then, by Chebyshev's inequality, and invoking~\cref{claim:general:closeness:variance},
\begin{align*}
\probaOf{ W \geq \tau m } &= \probaOf{ W \geq \expect{W} + \tau m }
\leq \frac{\mathop{\textnormal{Var}}\nolimits[W]}{\tau^2 m^2} = O\!\left(\frac{n 2^d}{\epsilon^4 m^2}\right)
\end{align*}
which is no more than $1/10$ for the same choice of $m$.
\end{proof}
Combining all the elements above concludes the proof, as by a union bound the algorithm is correct with probability at least $1-(\frac{1}{10}+\frac{1}{10}+\frac{1}{10}) > \frac{2}{3}$.
\end{proof}
\subsection{Unknown Structure Bayes Nets} \label{sec:closeness-unknown}
As in the case of identity testing, we give a closeness tester for balanced non-degenerate Bayes nets.
An additional assumption we require is that the ordering of the nodes in the corresponding DAGs
is known to the algorithm. Formally, we show:
\begin{theorem}\label{thm:informal-upper-closeness-unknown-nondegen}
There exists a computationally efficient algorithm with the following guarantees.
Given as input (i) a parameter $\epsilon > 0$, (ii) an ordering of nodes $\pi$, and
(iii) sample access to unknown $\gamma$-non-degenerate, $(c,C)$-balanced Bayes nets $P,Q$
such that the structures of $P$ and $Q$ give the same ordering $\pi$ to nodes,
where $c=\tildeOmega{1/\sqrt{n}}$ and $C=\tildeOmega{d\epsilon^2/\sqrt{n}}$;
the algorithm takes $N=O(2^{d/2}\sqrt{n}/\epsilon^2 + 2^d /\gamma^2+d\log(n)/\gamma^2)$ samples from $P$ and $Q$,
runs in time $n^d\mathrm{poly}(N)$, and distinguishes with probability at least $2/3$
between the cases $P=Q$ and $\normone{P-Q} > \epsilon$.
\end{theorem}
\begin{proof}
The argument's idea is the following: we first test that $P$ and $Q$ have the same skeleton. Since they have the same ordering, that suffices to show that they have the same structure. If this is the case, then we use our known-structure tester.
In more detail, given the $\gamma$-non-degeneracy assumption, for each pair of coordinates $i,j$ and set of coordinates $S$ with $\abs{S}\leq d$, we can use the conditional independence tester from~\cref{prop:conditional:independence:tester} to test whether each of $P$ and $Q$ has $X_i$ and $X_j$ conditionally independent given $X_S$, or $\gamma$-far from it, with probability of error $n^{-d-2}/100$, using $O((2^d+d\log(n))/\gamma^2)$ samples.
Running tests on the same samples for all $n^{d+2}$ combinations of $i,j,S$, we can with probability at least $99/100$ correctly classify which of the two cases holds, for all $i,j,S$ that are either conditionally independent or $\gamma$-far.
We note that by non-degeneracy, there is an edge between $i$ and $j$ in the structure defining $P$ only if $X_i$ and $X_j$ are $\gamma$-far from independent conditioned on $X_S$ for all $S$ (i.e., if there is no edge, then there must exist an $S$ such that $X_i$ and $X_j$ are conditionally independent given $X_S$).
Therefore, assuming our conditional independence testers all answered as they should, we can use this to successfully identify the set of edges in the structure of $P$ (and thus, since we know the ordering, the entire structure).
Having determined the underlying structures of $P$ and $Q$, our tester rejects if these structures differ (as using~\cref{lem:struct-equiv}, $\gamma$-non-degeneracy implies that neither can equal a Bayes net with non-equivalent structure and fewer edges). Otherwise, we run the tester from~\cref{theo:upper:knowndegreed:closeness} (since we satisfy its assumptions) and return the result.
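The structure-recovery loop just described can be sketched as follows. Here `ci_oracle` is a hypothetical stand-in for the finite-sample conditional-independence tester (in reality each call costs samples and errs with probability at most $n^{-d-2}/100$), and the path network and oracle below are toy illustration values consistent with d-separation.

```python
from itertools import combinations

def recover_edges(n, d, ci_oracle):
    """Keep edge {i,j} iff no conditioning set S of size <= d separates i and j."""
    edges = set()
    for i, j in combinations(range(n), 2):
        rest = [k for k in range(n) if k not in (i, j)]
        # All candidate conditioning sets of size 0, 1, ..., d.
        sets = [S for r in range(d + 1) for S in combinations(rest, r)]
        if not any(ci_oracle(i, j, S) for S in sets):
            edges.add((i, j))
    return edges

# Toy ground truth: a path 0 -> 1 -> 2 -> 3 (in-degree d = 1).
true_edges = {(0, 1), (1, 2), (2, 3)}

def toy_oracle(i, j, S):
    # On the path, adjacent nodes are never separated; non-adjacent nodes
    # are separated exactly when S contains an intermediate node.
    if (i, j) in true_edges:
        return False
    return any(i < k < j for k in S)

print(recover_edges(4, 1, toy_oracle))   # the three path edges
```

The loop makes at most $n^{d+2}$ oracle calls, matching the count in the running-time analysis.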
\end{proof}
\section{Identity and Closeness Testing for High-Degree Bayes Nets} \label{sec:it:ub}
Finally, in this section we give testing algorithms for identity and closeness of degree-$d$ Bayes nets with unknown structure, \emph{without} balancedness assumptions.
While the dependence on the number of nodes $n$ of the testers in this section is suboptimal compared to the testing algorithms of~\cref{thm:informal-upper-identity-unknown-nondegen} and~\cref{thm:informal-upper-closeness-unknown-nondegen} (which work under such assumptions),
they achieve the ``right'' dependence on the degree $d$ (specifically, $2^{d/2}$ for identity and $2^{2d/3}$ for closeness).
Hence, these testers achieve sub-learning sample complexity for the case that $d = \Omega(\log n)$.
\begin{theorem} \label{thm:unknown-structure-identity:informationtheoretic}
There exist two algorithms with the following guarantees:
\begin{itemize}
\item (Identity) Given the full description of a Bayes net $Q$ of degree at most $d$,
parameter $\epsilon\in(0,1]$, and sample access to a distribution $P$ promised to be a Bayes net (i) of degree at most $d$
and (ii) such that the structures of $P$ and $Q$ give the same ordering to nodes,
the first takes $N=2^{d/2}\mathrm{poly}(n/\epsilon)$ samples from $P$, runs in time $n^d\mathrm{poly}(N)$,
and distinguishes with probability at least $2/3$ between (i) $P=Q$ and (ii) $\normone{P-Q} > \epsilon$.
\item (Closeness) Given parameter $\epsilon\in(0,1]$, and sample access to two distributions $P,Q$
promised to be Bayes nets (i) of degree at most $d$
and (ii) such that the structures of $P$ and $Q$ give the same ordering to nodes,
the second takes $N=2^{2d/3}\mathrm{poly}(n/\epsilon)$ samples from $P$ and $Q$,
runs in time $n^d\mathrm{poly}(N)$, and distinguishes with probability at least $2/3$
between (i) $P=Q$ and (ii) $\normone{P-Q} > \epsilon$.
\end{itemize}
\end{theorem}
\begin{proof}
We first establish the first part of the theorem, namely the existence of an identity testing algorithm
with optimal dependence on the degree $d$. The algorithm is quite simple: it goes over each set $S\subseteq [n]$ of at most $d+1$ coordinates,
and checks that for each of them the marginal distributions $P_S, Q_S$
are equal (versus $\normone{P_S-Q_S} > \mathrm{poly}(\frac{\epsilon}{n})$).
Since $P_S$ and $Q_S$ are supported on sets of size $O(2^d)$,
and as there are only $O(n^{d+1})$ such sets to consider,
the claimed sample complexity suffices to run all tests correctly
with probability $9/10$ overall (by a union bound).
The more difficult part is to argue correctness,
that is to show that if the test accepts then one must have $\normone{P-Q} < \epsilon$.
To do so, assume (without loss of generality) that
$H(P) \leq H(Q)$: we will show that $\dkl{P}{Q}$ is small, which implies that the $L_1$ distance is small as well.
Let the ordering of $P$ be coordinates $1,2,3,\dots$. We note that $\dkl{P}{Q} = \sum_i \dkl{P_i}{Q_i \mid P_1,\dots,P_{i-1}}$ (i.e. the expectation over $P_1,\dots,P_{i-1}$ of the KL-divergence of the conditional distributions of $P_i$ and $Q_i$, conditioned on these $(i-1)$ coordinates). It thus suffices to show that each of these terms is small.
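As an illustrative numerical check of this chain rule (a toy two-node network $X_1 \to X_2$ with hand-picked conditional probability tables; not part of the argument):

```python
import math

def joint(p1, p2_given):
    """Joint over {0,1}^2 from Pr[X1=1] and Pr[X2=1 | X1=b]."""
    return {(a, b): (p1 if a else 1 - p1)
                    * (p2_given[a] if b else 1 - p2_given[a])
            for a in (0, 1) for b in (0, 1)}

def kl(P, Q):
    """KL divergence between two distributions given as dicts."""
    return sum(p * math.log(p / Q[x]) for x, p in P.items() if p > 0)

def bern_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

P = joint(0.3, {0: 0.2, 1: 0.7})
Q = joint(0.4, {0: 0.25, 1: 0.6})

# Chain rule: KL of X1's marginals, plus the P-expected KL of X2's conditionals.
chain = bern_kl(0.3, 0.4) + 0.7 * bern_kl(0.2, 0.25) + 0.3 * bern_kl(0.7, 0.6)
print(abs(kl(P, Q) - chain))   # ~0 up to floating point
```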
Let $S_i$ be the set of parents of node $i$ under $P$. We have that:
\[
\dkl{P_i}{Q_i \mid P_1,\dots,P_{i-1}} = \dkl{P_i}{Q_i \mid P_{S_i}} + \mathbb{E}_{P_1,\dots,P_{i-1}}[\dkl{Q_i\mid P_{S_i}}{Q_i \mid P_1,\dots,P_{i-1}}] \;.
\]
Further, note that the fact that the tester accepted implies that $\dkl{P_i}{Q_i \mid P_{S_i}}$ is small. Now, we have that
\begin{align*}
H(P) &= \sum_i H(P_i \mid P_1,\dots,P_{i-1}) = \sum_i H(P_i\mid P_{S_i}) \;, \\
H(Q) &= \sum_i H(Q_i\mid Q_1,\dots,Q_{i-1}) = \sum_i H(Q_i\mid Q_{S_i}) - \mutualinfo{Q_i}{Q_1,\dots,Q_{i-1} \mid Q_{S_i}} \;.
\end{align*}
But since the $(d+1)$-wise probabilities are close, we have that $H(P_i \mid P_{S_i})$ is close to $H(Q_i \mid Q_{S_i})$ (up to an additive $\mathrm{poly}(\epsilon/n)$). Therefore, for each $i$, we have that $\mutualinfo{Q_i}{Q_1,\dots,Q_{i-1} \mid Q_{S_i}} = \mathrm{poly}(\epsilon/n)$. In order to conclude, let us compare $\mutualinfo{Q_i}{Q_1,\dots,Q_{i-1} \mid Q_{S_i}}$ and $\mathbb{E}_{P_1,\dots,P_{i-1}}[\dkl{Q_i\mid P_{S_i}}{Q_i \mid P_1,\dots,P_{i-1}}]$. The former is the sum, over assignments $y\in\{0,1\}^{i-1}$ consistent with an assignment $x\in\{0,1\}^{S_i}$, of
\[
\probaOf{ Q_{S_i} = x } H(Q_i \mid Q_{S_i} = x) - \probaOf{ Q_{1,\dots,i-1} = y } H(Q_i \mid Q_{1,\dots,i-1}=y ).
\]
The latter is the sum over the same $y$'s of
\[
\probaOf{ P_{S_i} = x } H(Q_i \mid Q_{S_i} = x) - \probaOf{ P_{1,\dots,i-1} = y} H(Q_i \mid Q_{1,\dots,i-1}=y ) \;.
\]
But because of the $d$-wise probability similarities,
the terms $\probaOf{ P_{S_i} = x }$ and $\probaOf{ Q_{S_i} = x }$ are very close,
within an additive $\mathrm{poly}(\epsilon/n)$.
\emph{(Here we use the extra assumption that $P$ and $Q$ use the same ordering.)}
Denote by $T_i$ the parents of $i$ under the topology of $Q$.
Then $H(Q_i \mid Q_{1,\dots,i-1}=y)$ depends only on the values of the coordinates in $T_i$. Thus the last part
of the sum is a sum over $z$ of $\probaOf{ Q_{T_i} = z } H(Q_i \mid Q_{T_i} =z)$ and $\probaOf{ P_{T_i} = z } H(Q_i \mid Q_{T_i} =z)$,
which are also close by a similar argument.
Thus,
\[
\mathbb{E}_{P_1,\dots,P_{i-1}}[\dkl{Q_i\mid P_{S_i}}{Q_i \mid P_1,\dots,P_{i-1}}]
= \mutualinfo{ Q_i }{ Q_1,\dots,Q_{i-1} \mid Q_{S_i} } + \mathrm{poly}(\epsilon/n) = \mathrm{poly}(\epsilon/n) \;.
\]
This implies that $P,Q$ are close in KL divergence, and therefore in $L_1$.
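The last implication is Pinsker's inequality:
\[
\normone{P-Q} \;\le\; \sqrt{2\,\dkl{P}{Q}},
\]
so a KL divergence of $\mathrm{poly}(\epsilon/n)$ (with a suitably small polynomial) yields $\normone{P-Q}\le\epsilon$.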
\medskip
The second part of the theorem, asserting the existence of a closeness testing algorithm
with optimal dependence on $d$, will be very similar. Indeed, by the proof above it suffices
to check that the restrictions of $P$ and $Q$ to any set of $d+3$ coordinates are $\mathrm{poly}(\epsilon/n)$-close.
Using known results~\cite{CDVV14}, this can be done for any specific collection of $d+3$ coordinates
with $N$ samples in $\mathrm{poly}(N)$ time, and high probability of success,
implying the second part of the theorem.
\end{proof}
\bibliographystyle{alpha}
<?php defined('BASEPATH') OR exit('No direct script access allowed'); ?>
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title><?php echo $page_title; ?> - EZPZ</title>
<!-- Stylesheet -->
<link href="<?php echo base_url() ?>css/custom.css" type="text/css" rel="stylesheet">
<link href="<?php echo base_url() ?>css/restaurant-custom.css" type="text/css" rel="stylesheet">
<link href="<?php echo base_url() ?>css/bootstrap.min.css" type="text/css" rel="stylesheet">
<link href="<?php echo base_url() ?>font-awesome/css/font-awesome.min.css" rel="stylesheet">
<link href="<?php echo base_url() ?>css/multi-select.css" rel="stylesheet">
<!-- Begin Scripts -->
<script src="<?php echo base_url() ?>js/jquery-3.1.0.js"></script>
<script src="<?php echo base_url() ?>js/bootstrap.min.js"></script>
<script src="<?php echo base_url() ?>js/jquery.waypoints.min.js"></script>
<script src="<?php echo base_url() ?>js/flat-ui.min.js"></script>
<script src="<?php echo base_url() ?>js/bootstrap-typeahead.js"></script>
<script src="<?php echo base_url() ?>js/jquery.multi-select.js"></script>
<style>
@import url(https://fonts.googleapis.com/css?family=Source+Sans+Pro:200,300,400,700);
@import url(https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css);
@import url(https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css);
body
{
background-size:100% 100%;
background-repeat: no-repeat;
background-attachment: fixed;
}
.login{
background:#F0F0F2;
padding-top:4em;
border-radius:10px;
padding: 15px;
margin: 1% auto 0 auto;
}
#logoBig{
margin-top:20%;
max-height:400px;
max-width:400px;
}
.login .heading {
text-align: center;
margin-top: 1%;
}
.login .heading h3 {
font-size: 3em;
font-weight: 300;
color: black;
display: inline-block;
font-weight: bold;
padding-bottom: 5px;
}
.login form .input-group {
border-bottom: 1px solid #AAA;
border-top: 1px solid rgba(255, 255, 255, 0.1);
width: 100%;
}
.login form .input-group:last-of-type {
border-top: none;
}
.login form .input-group span {
background: transparent;
min-width: 53px;
border: none;
}
.login form .input-group span i {
font-size: 1.5em;
width: 50px;
}
.login form input.form-control {
display: block;
height: auto;
border: none;
outline: none;
box-shadow: none;
background: none;
border-radius: 0px;
padding: 10px;
font-size: 1.6em;
background: transparent;
color: black;
}
.login form input.form-control:focus {
border: none;
}
.login form button {
margin-top: 20px;
background: #27AE60;
border: none;
font-size: 1.6em;
font-weight: 300;
padding: 5px 0;
width: 100%;
border-radius: 3px;
color: #b3eecc;
border-bottom: 4px solid #1e8449;
}
.login form button:hover {
background: #30b166;
-webkit-animation: hop 1s;
animation: hop 1s;
}
.float {
display: inline-block;
-webkit-transition-duration: 0.3s;
transition-duration: 0.3s;
-webkit-transition-property: transform;
transition-property: transform;
-webkit-transform: translateZ(0);
transform: translateZ(0);
box-shadow: 0 0 1px transparent;
}
.float:hover, .float:focus, .float:active {
-webkit-transform: translateY(-3px);
transform: translateY(-3px);
}
</style>
</head>
<body>
<header>
<!--NavBar-->
<div class="container-fluid">
<nav class="navbar navbar-default navbar-fixed-top" id="navbar">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#myNavbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="<?php echo base_url('main') ?>">EZPZ</a>
</div>
<div class="collapse navbar-collapse" id="myNavbar">
<ul class="nav navbar-nav navbar-left">
<li><a href="<?php echo base_url('main') ?>" class="nav-link">Home</a></li>
<li><a href="<?php echo base_url('main/about/') ?>" class="nav-link">About Us</a></li>
<li><a href="<?php echo base_url('restaurant/cuisine/') ?>" class="nav-link">Restaurants</a></li>
<!-- Menu available in different login types -->
<?php if($this->session->userdata('type') == 'user'): ?>
<li><a href="#" class="nav-link">Top Up Wallet</a></li>
<?php elseif($this->session->userdata('type') == 'driver'): ?>
<li><a href="#" class="nav-link">Top Up Wallet</a></li>
<?php endif; ?>
<li role="separator" class="divider" style="background-color: white; height: 1px"></li>
</ul>
<?php if(!$this->session->userdata('user_id')) : ?>
<ul class="nav navbar-nav navbar-right">
<li><a href="<?php echo base_url('accounts/signup') ?>"><span class="glyphicon glyphicon-user"></span> Sign Up</a></li>
<li><a href="<?php echo base_url('accounts/') ?>"><span class="glyphicon glyphicon-log-in"></span> Login</a></li>
</ul>
<?php else : ?>
<ul class="nav navbar-nav navbar-right">
<li></li>
<li class="dropdown"><a class="dropdown-toggle" data-toggle="dropdown" href="#"><?php echo $this->session->userdata('username') ?>
<span class="caret"></span></a>
<ul class="dropdown-menu" style="background-color: #000;">
<?php if($this->session->userdata('type') == 'user' || $this->session->userdata('type') == 'driver' || $this->session->userdata('type') == 'clients'): ?>
<li><a href="<?php echo base_url(); echo $this->session->userdata('type'); ?>/complete_data" class="nav-link">Edit Profile</a></li>
<?php endif; ?>
<?php echo $this->session->userdata('type') == 'clients' ? '<li><a href="'.base_url().$this->session->userdata('type').'/menu">Edit Menu</a></li>' : ''; ?>
<li><a href="#">Top Up Wallet</a></li>
</ul>
</li>
<li><a href="<?php echo base_url('accounts/logout') ?>"><span class="glyphicon glyphicon-log-out"></span> Log Out</a></li>
</ul>
<?php endif; ?>
</div>
</div>
</nav>
</div>
</header>
<div id="main">
<?php echo $body ?>
</div>
<!--Navbar End-->
	<!-- jQuery and the Bootstrap plugins are already loaded in <head>; loading a
	     second copy of jQuery here would replace the instance those plugins are
	     bound to, so the scripts are included only once. -->
<script>
var waypoint = new Waypoint({
element: document.getElementById('main'),
handler: function(direction) {
document.getElementById('navbar').style.backgroundColor = "black";
}
});
</script>
<script>
var waypoint2 = new Waypoint({
element: document.getElementById('top'),
handler: function(direction) {
document.getElementById('navbar').style.backgroundColor = "transparent";
},
offset: '-20%'
});
</script>
</body>
</html>
The Maria-Hilf-Kirche, properly Maria Hilfe der Christen ("Mary, Help of Christians"), is a modern church from 1929 in the district of Sośnica (Sosnitza) in Gliwice (Gleiwitz).
History
Sosnitza at first belonged to the Catholic parish of St Andrew (Andreasgemeinde) in Alt-Zabrze, into which it was incorporated in 1848. In 1911 the Catholics of Sosnitza built their own provisional church in half-timbered style, which was given the name Herz-Jesu-Kirche (Sacred Heart Church). The construction of a new church was soon decided upon, at an estimated cost of 215,000 marks, but the project was not carried out because of the First World War. On 3 June 1928, construction of a new church in the New Objectivity style began next to the provisional building, following plans by the Prussian building administration. In 1929 the church was consecrated and the temporary church demolished; its bell was transferred to the new building. Since Sosnitza by then belonged to Gleiwitz, where a Herz-Jesu-Kirche already existed, the church was given the new name Marienkirche (St Mary's Church); after 1945 this became Maria-Hilf-Kirche.
In 1998 a further church was built in the district: the Hyazinthkirche (Church of St Hyacinth).
Literature
Zwei katholische Kirchen. I. Gleiwitz-Sosnitza O.-S. In: Zentralblatt der Bauverwaltung. 50/1930, No. 39, 1930, pp. 677–680.
External links
Official website of the municipality
Official website of the parish
History
References
MariaHilf
Maria-Hilf-Kirche
Built in the 1920s
Church buildings in the Diocese of Gliwice
New Objectivity church buildings
Church buildings in Europe
Source: https://www.physicsforums.com/threads/speed-of-light-and-acceleration.868481/

# B Speed of light and acceleration

1. Apr 23, 2016

### derek10

Hi,
I understand that by accelerating you get closer and closer to the speed of light relative to everything else, asymptotically, but would acceleration still behave the same way as at non-relativistic speeds (inertia, gyroscopes, etc.), even if the speed barely increases any more (for example at 0.9999c)?
Thank you

2. Apr 23, 2016

### Staff: Mentor

What do you mean by "act the same way"? If you have a rocket that moves at 0.9999c relative to Earth, you would not notice anything special in the rocket, and you can use the rocket to accelerate as usual.* That is one of the fundamental principles of physics: the laws of physics are the same in every reference frame. Why should you have to care about your speed relative to Earth? There is nothing special about Earth.

*Observers on Earth will measure a different acceleration than you do in your rocket.

3. Apr 23, 2016

### Ibix

Someone moving close to the speed of light relative to you can consider themselves at rest and you as moving close to the speed of light. You can both ignite rockets and feel a 1g acceleration. Both of you will report that you are accelerating at 1g and that the other is accelerating much less. Both of you can spin up gyroscopes and find that they look normal, but the other guy's will look odd; see the illustration halfway down http://asia.iop.org/cws/article/news/50366

4. Apr 24, 2016

### derek10

Thank you. I mean accelerating constantly away from the Earth towards c: the ship will still accelerate, but the rate at which its speed increases will slow down with respect to the Earth, though not with respect to other objects, right?

5. Apr 24, 2016

### Ibix

Yes. If you are accelerating in your direction of motion with respect to some object and you feel an acceleration $a$, then an observer in an inertial frame will observe an acceleration $a/\gamma^3$, where $\gamma=(1-v^2/c^2)^{-1/2}$ is the Lorentz gamma factor and $v$ is your speed with respect to that object. So as you approach light speed (with respect to me), I will see your acceleration drop while you will feel it remaining constant.
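A quick numerical sketch of this last formula (illustrative values only: $v = 0.9999c$ as in the thread, and a proper acceleration of 1g):

```python
import math

# Coordinate acceleration seen by an inertial observer when the rocket
# feels proper acceleration a, for motion along the direction of travel:
#   a_coord = a / gamma^3,  gamma = 1 / sqrt(1 - v^2/c^2).
beta = 0.9999            # v/c, the speed quoted in the thread
a_proper = 9.8           # about 1 g, in m/s^2

gamma = 1.0 / math.sqrt(1.0 - beta**2)
a_coord = a_proper / gamma**3

print(gamma)     # about 70.7
print(a_coord)   # tens of micro-m/s^2: the observed speed barely changes
```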
Source: http://hal.in2p3.fr/in2p3-00005339

A study of the $f_{0}(1370), f_{0}(1500), f_{0}(2000)$ and $f_{2}(1950)$ observed in centrally produced $4\pi$ final states

Abstract: The production and decay properties of the f0(1370), f0(1500), f0(2000) and f2(1950) have been studied in central pp interactions at 450 GeV/c. The dPT, phi and |t| distributions of these resonances are presented. For the J = 0 states, the f0(1370) and f0(2000) have similar dPT and phi dependences. These are different from the dPT and phi dependences of the f0(980), f0(1500) and f0(1710). For the J = 2 states, the f2(1950) has different dependences from the f2(1270) and f2'(1520). This shows that the dPT and phi dependences are not just J phenomena.

Document type: Journal article
HAL Id: in2p3-00005339, version 1

Citation: D. Barberis, F. G. Binon, F. E. Close, K. M. Danielsen, S. V. Donskov, et al. A study of the f0(1370), f0(1500), f0(2000) and f2(1950) observed in centrally produced 4π final states. Physics Letters B, Elsevier, 2000, 474, pp. 423-426. ⟨in2p3-00005339⟩
The sling (Polish: proca) is a historical projectile weapon consisting of a cord or strap with a pouch for the projectile, made of leather or fabric, at the middle of its length. Slings were also made from a forked stick and a cord. They were used to hurl stones, or projectiles of dried clay or cast lead. To launch a projectile, one holds both ends of the cord in one hand and whirls them faster and faster; once the cord has gained sufficient speed, one end is released, freeing the projectile. Accurate slinging requires considerable experience and feel. The name "sling" (proca) is also popularly applied to a weapon that works on a different principle, the elastic slingshot (proca neurobalistyczna).
Historical accounts, corroborated by modern Guinness records, describe the sling as a weapon of great range (over 400 m) and lethal striking power.
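As a rough plausibility check of the 400 m figure (a sketch under ideal assumptions: no air drag and an optimal 45-degree launch; real slings fight drag, so actual release speeds must be higher):

```python
import math

# Minimum launch speed for an ideal (drag-free) projectile to reach range R,
# using the optimal 45-degree launch angle: R = v^2 / g  =>  v = sqrt(R * g).
g = 9.81          # m/s^2, standard gravity (assumed)
R = 400.0         # metres, the range quoted above

v = math.sqrt(R * g)
print(v)          # about 63 m/s, i.e. roughly 225 km/h at release
```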
Under certain conditions it is possible to sling effectively using even a single rotation. A single or multiple rotation serves only to tension the cords and prepare for the moment of the throw, not to impart the projectile's final velocity. The preliminary rotations need not be very fast at all. What is decisive is the final portion of the swing (about 150 degrees), in which the slinger sharply accelerates the sling, much as when throwing from behind the back a long stick or club held by one end, or when casting a javelin with an atlatl.
This is not merely a passive release of one of the whirling cords, but a dynamic action of the whole body, of the kind that accompanies any ordinary throw, e.g. of a stone, a javelin or a ball. The thrower's arm (in fact the whole body) acts somewhat like the arm of a trebuchet driving the attached sling, which hangs at an angle at first and is almost fully extended at the end.
History
The sling was one of the oldest historical projectile weapons and the easiest to make. It was used as a hunting and war weapon at least from the Neolithic period until the 16th century. Various forms of the sling, made from a variety of materials, were known in practically all regions of the world except Australia, though they were not widely used everywhere. The sling was generally not adopted in regions where the bow became widespread (with the exception of Assyria).
Old slings have as a rule not survived to the present day owing to the perishability of their materials, yet the oldest known example was found in Egypt, in the tomb of Tutankhamun (14th century BC). Slings were widely used in antiquity throughout the Mediterranean basin. The most famous use of a sling, the shepherd David's defeat of the warrior Goliath with a well-aimed shot, is described in the Bible (First Book of Samuel, written around the 7th/6th century BC and referring to events of around the 10th century BC). Units of slingers were later employed in the Roman army, among others. The most famous slingers, renowned for their skill already in antiquity, were the inhabitants of the Balearic Islands, who served among others in the Castilian army. The sling was also used in the Middle Ages, although to a more limited extent after the appearance of crossbows. It was also used by the peoples of South America. The last recorded large-scale use of slings took place in 1572, in a battle with the Huguenots at Sancerre. The sling was later displaced by ever-improving firearms and was used as a weapon only sporadically. One unusual application was the use of slings to hurl small hand grenades during the Spanish Civil War in the 1930s.
Question: What is the piezoelectric property?

Solution: The piezoelectric effect is the ability of certain materials to generate an electric charge in response to applied mechanical stress. The word "piezoelectric" derives from the Greek piezein, meaning to squeeze or press. One of the unique characteristics of the piezoelectric effect is that it is reversible: materials exhibiting the direct piezoelectric effect (the generation of electricity when stress is applied) also exhibit the converse piezoelectric effect (the generation of stress when an electric field is applied). When a piezoelectric material is placed under mechanical stress, a shift of the positive and negative charge centers in the material takes place, which then results in an external electric field. Conversely, an external electric field either stretches or compresses the piezoelectric material.
\section{Introduction}
Online media currently constitute the largest share of Internet
traffic. A large part of such traffic is generated by platforms that
deliver user-generated content (UGC). This includes, among others,
YouTube and Vimeo for videos, Flickr and Instagram for images,
and all social networking platforms.
Among such services, a prominent role is played by YouTube. Founded in $2005$
by Chad Hurley, Steve Chen and Jawed Karim and acquired in $2006$ by Google,
YouTube scored in $2011$ more than $1$ trillion views (or, alternatively,
an average of $140$ video views for every person on Earth), with more
than $3$ billion hours of video watched every month and $72$ hours
of video uploaded every minute by YouTube's users\footnote{{\tt http://www.youtube.com/t/press\_statistics/}}.
Of course, not all videos posted on YouTube are equal. The key aspect
is their ``popularity'', broadly defined as the number of views they
score (also referred to as {\em viewcount}). This is relevant from a
twofold perspective. On the one hand, more popular content generates more traffic, so understanding
popularity has a direct impact on caching and replication strategy
that the provider should adopt. On the other hand, popularity has a direct economic impact. Indeed,
popularity or viewcount are often directly related to click-through
rates of linked advertisements, which constitute the basis of the
YouTube's business model.
Recently, a number of researchers have analysed the evolution of the
popularity of online media content~\cite{ChaUtube,crane2008viral,Gill07youtubetraffic,RatkiewiczBurstyPoP,ChatFirstStep,ChaTON},
with the aim of developing models for early-stage prediction of
future popularity~\cite{SzaboPop}.
Such studies have highlighted a number of phenomena that are typical of UGC
delivery. This includes the fact that a significant share of content
gets basically no views~\cite{ChaTON}, as well as the fact that popularity may see some
bursts, when content ``goes viral''~\cite{RatkiewiczBurstyPoP}.
Also, in~\cite{SzaboPop} the authors demonstrate that after an
initial phase, in which contents gain popularity through
advertisement and other marketing tools, the platform mechanisms to
induce users to access contents (re-ranking mechanisms)
are the main drivers of popularity.
In this paper, we address such phenomena, by developing a model, based
on game theoretical concepts and tools, for understanding how user's
behaviour drives the evolution of popularity of a given content. The
work is based on rational decision-making assumptions, whereby the
users have to decide whether to see a given content or not. This configures
as a game, where users seek to maximize
some expected utility based on their ``perception'' of the quality of
the content\footnote{This may come, e.g., from the name of user
who posted the content.}
and on viewcount. However, users suffer
also a cost for accessing contents of bad quality, i.e., waste of
time and possibly bandwidth, batteries, etc. In particular, in the
decision process the viewcount is used as a noisy estimator of
the quality of a content. Interestingly, this context resembles
closely the situation in the economic domain, where uninformed
customers of a firm infer the quality of its products
from the length of the queue they encounter when requesting
the firm's goods to purchase~\cite{Debo2012}.
Extensive advertising and marketing campaigns can be used to push the
viewcount of a given content up. In the decision-making process,
users do not know whether the viewcount has been ``pushed'' by such
means. Also, the decisions made by different users influence the
viewcount and consequently the decisions made by other users, a
process which suits well the usage of game theoretical machinery.
Specifically, we describe the conditions for the adoption of common
behaviors in online content access. This is inspired by findings
in social science \cite{RolfeSocNet,GraSoongJEBO1986,GranovetterAJS1978}:
results there show that emerging behaviours would propagate by a
procedure in which an individual adopts a novel behavior if the
fraction of neighbors or friends having adopted the same behavior
exceeds some threshold. In our context, the threshold would
be expressed in terms of viewcount or related metric.
In the sense of game theory, users of online media represent
non-cooperative rational players connected through some social tie,
e.g., being users of the same UGC platform. Since we consider systems composed of a very large number of users,
the customary tool to study the user behaviour is that of Wardrop
equilibria~\cite{wardrop52}. In particular, we have found a number
of conditions for which such equilibria exist and can be characterized
analytically. Explicit conditions were found for content to stay at zero views or to become
so popular that it makes sense for all users to access it the sooner the
better.
Furthermore, we identify, for the general case, conditions under which players
tend to converge to a common strategy depending on initial conditions. This is due to the
existence of a continuum of equilibria: the system will settle at any point very much depending on initial conditions imposed, for instance, by a set of forerunners which cause significant changes of the content popularity. Such conditions were identified in early works such as \cite{Hassin97equilibriumthreshold} in other contexts: there, the authors applied threshold type Nash equilibrium strategies in which one purchases priority if and only if upon arrival the queue size is larger than some threshold value. Key motivation in \cite{Hassin97equilibriumthreshold} is predictability and control of purchase priority. What motivates this work is predictability and control of online content access.
\paragraph*{Novel contribution}
in this paper, we move away from the classical analysis of social networks in the spirit of \cite{SzaboPop,RatkiewiczBurstyPoP,ChatFirstStep,ChaUtube}: instead, we provide a first analysis based on games. The aim of this paper is to provide a novel perspective where contents compete to gain popularity and are subject to the effect of user's choice. To the best of the authors' knowledge, this is the first attempt so far to describe content popularity in UGC systems using game theoretical tools.
The remainder of the work is organized as follows. In Sec.~\ref{sec:model} we introduce the system model and the notation
used throughout the paper. Results for the case when plain viewcount is
used to make decisions are presented in Sec.~\ref{sec:gt1}. When
decisions also account \fdp{for a large increasing trend} of content popularity, i.e., looking
for 'hot' content, the dynamics of the game becomes different. This
case is analysed in Sec.~\ref{sec:gt2}. In Sec.~\ref{sec:gt3} we analyze the joint effect \fdp{when
the viewcount and its trend are both relevant to the user}. Finally, in Sec.~\ref{sec:sideinfo} we model the
effect of side information when users have some measure of future content dynamics.
Sec.~\ref{sec:rel} reviews the related work and Sec.~\ref{sec:concl} concludes the paper
highlighting directions for expanding the current reach of the work.
\begin{figure}[t]
\centering
\subfigure[``President Obama Sings Sweet Home Chicago"]{\includegraphics[width=0.4\textwidth]{FIG/obama.eps}}
\subfigure[``Chris Sharma Worlds' First 5.15'']{\includegraphics[width=0.4\textwidth]{FIG/sharma.eps}}
\subfigure[``Montersino's Sacher Cake"]{\includegraphics[width=0.4\textwidth]{FIG/sacher.eps}}
\subfigure[``Shakira -- Waka-Waka"]{\includegraphics[width=0.4\textwidth]{FIG/Shakira.eps}}
\subfigure[``Bruno Mars -- Grenade"]{\includegraphics[width=0.4\textwidth]{FIG/Bruno.eps}}
\subfigure[``Adele -- Rolling in the deep"]{\includegraphics[width=0.4\textwidth]{FIG/Adele.eps}}
\caption{\fdp{Dynamics of the viewcount for six sample videos: the push dynamics can be identified with the first part of the dynamics, where labels identify some actions that are significant for the diffusion of the video; observe for cases a, b and c how a linear dynamics takes over in the last part of the dynamics. The labels tagging the first part of the dynamics mention specific events that identify the diffusion of the content on specific platforms or channels.}}\label{fig:pictures}
\end{figure}
\section{System Model}\label{sec:model}
We consider contents made available to a user by means of YouTube or a similar platform. We denote by $\tau$ the lifetime of a content, i.e.,
the time horizon during which the content bears some interest. In general, such horizon differs depending on the type of content: it can be typically
of the order of weeks to months for YouTube videos or a few days for news \cite{SzaboPop}. \fdp{A possible extension to the case of a variable time horizon is addressed in Sec.~\ref{sec:gt2}.}
We denote by $X(t)$ the viewcount attained by a given content $\theta$ at time $t$ seconds after it has been posted, for $0\leq t \leq \tau$.
As in standard UGC platforms, there are two mechanisms that coexist and \fdp{can jointly increase the viewcount:}
\begin{itemize}
\item {\em push}: the content provider exploits some preferential channels (including paid advertisement either directly on the UGC system or via social networking platforms) to make users aware of the content and to induce them to access it. We call {\em push users} the users that access the content as a reaction to the push mechanism.
\item {\em pull}: users find about the content through standard search and decide to access it based on the belief that the content is relevant for them. We call users accessing a content through the pull mechanism {\em pull users}.
\end{itemize}
\fdp{In practice, many YouTube videos are subject to the push and the pull mechanisms described above, such as
the examples that we report in Fig.~\ref{fig:pictures}. For instance, Fig.~\ref{fig:pictures}a shows the dynamics of a popular video with viewcount $X \geq 675000$. The YouTube statistics associated with the video explicitly describe a series of events happening in the first part of the dynamics of $X$. For instance, event B, which appears around 02/12/2012, is precisely the event \texttt{``First embedded on: plus.google.com''}, which indeed configures as a push towards a social network platform. After the initial push, such events vanish, and the rest of the dynamics appears to be ascribed mostly to the pull mechanism defined above, with a linear increase in the viewcount.}
\fdp{Also, some of the reported videos are representative of a specific class of online contents, which are those we will be dealing with in the rest of the paper. We refer to these, for the sake of brevity, as contents that comply with the {\em exponential-linear} model. In particular, many such contents appear to obey the following dynamics: after an initial exponential growth, the increase of the viewcount becomes linear. This behavior can be traced to the notion of push and pull mechanisms described above: the exponential growth corresponds to actions through which the source distributes the content within a basin of target push viewers. When such a basin is finite and small with respect to the content diffusion dynamics, the viewcount dynamics experiences a
saturation effect which takes over after an initial phase. However, at that stage, access to the content is due to pull users who come across the content browsing online: they do so at random from a very large basin, so that the access rate, i.e., the viewcount increase rate, is linear. These combined effects are visible in the case of the first two videos, i.e., Fig.~\ref{fig:pictures}a and Fig.~\ref{fig:pictures}b. In the case of the first video, the saturation effect is clearly visible, whereas in the case of the second one the linear increase following the saturation dominates. The example in Fig.~\ref{fig:pictures}c is a case where the whole dynamics is linear to a good approximation: as will become clear in the following, in the exponential-linear model this case arises when either the basin of push users is large or the rate at which contents are pushed is small.
\begin{remark}
Not all videos diffuse according to the proposed exponential-linear model. For instance, there are cases when the initial viewcount dynamics displays a characteristic sigmoid shape. We report in Fig.~\ref{fig:pictures}d,e,f the viewcount dynamics for three popular music videos: in those cases the dynamics resembles the logistic curve associated with the spread of epidemics. We can ascribe such similarity to the presence of a positive feedback in the push mechanism, e.g., those who access the content have some means to recommend it to others, through targeted recommendation or similar mechanisms. When a social network is present, this may happen due to the push of the content into the neighborhood of those who view it. A similar and perhaps more powerful feedback effect can happen between different channels on the same platform, e.g., YouTube channels, and across different platforms through the recommendation list presented to the platform users.\\
This also qualifies the type of exponential-linear dynamics that we consider as those for which this type of feedback does not play a significant role. In particular, in the case of Fig.~\ref{fig:pictures}a, the content is of interest at the national scale in the US, and viewers are likely driven
to the content by general search criteria (e.g., typing in a search engine). Also, in the case of Fig.~\ref{fig:pictures}c, viewers are likely those who browse for some specific recipe, whereas in the case of Fig.~\ref{fig:pictures}b viewers are interested in a niche sport, where the event is known within the reference community. In all such cases we see that the linear part of the dynamics takes over and becomes dominant.
\end{remark}
}
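The exponential-linear dynamics described above can be sketched numerically. The following Python snippet superposes a saturating push phase over a constant-rate pull phase; all parameter values are illustrative assumptions, not measurements from the videos in Fig.~\ref{fig:pictures}:

```python
import math

# Sketch of the exponential-linear viewcount model: push users come from
# a finite basin of size N (saturating exponential), while pull users
# arrive at a constant rate. All parameter values are illustrative.
N = 1000.0      # size of the push-user basin
lam_ps = 0.5    # push rate, 1/day
lam_pu = 200.0  # pull arrival rate, views/day
tau = 30        # content lifetime, days

def viewcount(t):
    """Total viewcount X(t) = X_ps(t) + X_pu(t)."""
    x_ps = N * (1.0 - math.exp(-lam_ps * t))  # exponential (push) phase
    x_pu = lam_pu * t                         # linear (pull) phase
    return x_ps + x_pu

# Early growth is dominated by the push basin; once it saturates,
# the daily increase settles to the pull rate lam_pu.
early_slope = viewcount(1) - viewcount(0)
late_slope = viewcount(tau) - viewcount(tau - 1)
```

For these values the early daily increase exceeds the late one, which approaches $\lambda_{pu}$, reproducing the initial saturation followed by linear growth discussed above.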
\subsection*{Game model}
In our model, we are interested in the uptake of the pull users. Pull users interested in the given content do not know its quality in advance. They may discover the content at random during the interval $[0,\tau]$. Their estimate of its interest/potential quality is based on the viewcount $X$. In the simplest case, contents with higher viewcount are more likely to be accessed.
We define by $X_{ps}(t)$ the number of push users accessing the content up to time $t$ as a reaction to the push mechanism and, analogously, by $X_{pu}(t)$ the number of those accessing it through the pull mechanism. Clearly, $X(t)=X_{ps}(t)+X_{pu}(t)$.
Users have beliefs about the quality of the content. We denote by $\pi_G$ the belief that a given content is good (i.e., of interest or anyway worth accessing) and, conversely, by $\pi_B=1-\pi_G$ the belief that the content is bad. We denote by ${\bm{\pi}}=(\pi_G,\pi_B)$ the corresponding distribution. Stating $\pi_G=0.75$ means that a user believes that out of every $4$ similar contents she would get $3$ good ones and $1$ bad one.
\begin{figure}[t]
\centering
\includegraphics[width=0.27\textwidth]{FIG/util.eps}
\put(-180,100){$X(t,\theta)$}
\put(-91,10){$t_\beta(\theta)$}\put(-23,10){$\tau$}\put(-65,5){$\tau-t_\beta(\theta)$}
\put(-120,84){$\beta$}
\caption{The reward or the cost of content $\theta$ for a tagged user is
represented by the time during which the content can be accessed, i.e.,
when viewcount is larger than threshold $\beta$.}\label{fig:util}
\end{figure}
The content access configures as a game where we define {\em players}, {\em strategies} and {\em utilities}.
{\em Players:} the {\em players} are pull users: based on their belief ${\bm{\pi}}$, they may access the content $\theta$ or not.
{\em Strategies:} they access $\theta$ when the viewcount is above a certain threshold, i.e., $X(t)\geq \beta \geq 0$.\footnote{We consider the reference case when players select based on the viewcount only for the sake of explanation. We will extend the model to other interesting cases in the next sections.} Hence, the {\em strategy} for a certain user is \fdp{the viewcount threshold} $\beta \geq 0$. Of course, all other players also adopt their own strategy with respect to $\theta$, and we denote by $\bm{\alpha}$ the vector of strategies of all remaining users: $\bm{\alpha}$ is a vector of viewcount thresholds for all other users.
{\em Utilities:} users face either a {\em cost} $C$ or a {\em reward} $R$ for playing strategy $\beta$: the cost or the reward is the fraction of the lifetime during which the content is in the viewcount range, i.e., when they are willing to access it. The rationale for this cost/reward is the following. Let a good content be worth one unit of reward, and a bad content one unit of cost. The user may hit several similar contents at random over time. If they are good, the fraction of those actually accessed will be proportional to $1-\frac{t_{\beta}}{\tau}$, where we define $t_{\beta}=\min\{t \,|\,\beta=X(t)\}$, i.e., $t_{\beta}$ is the smallest instant when the threshold is achieved. That is also going to be the long-term reward, or cost, for accessing similar online contents.
Formally,
\[
R(\bm{\alpha},\beta,G)=(\tau - t_{\beta}(G))^+, \quad C(\bm{\alpha},\beta,B)=(\tau - t_{\beta}(B))^+
\]
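For concreteness, the reward and cost just defined can be evaluated numerically for a linear viewcount; the rates and beliefs in this Python sketch are illustrative assumptions:

```python
# Reward/cost of playing threshold beta: the residual lifetime after the
# viewcount first crosses beta, (tau - t_beta)^+. Linear push dynamics
# X(t) = lambda_ps(theta) * t; all numbers are illustrative.
tau = 10.0
rates = {"G": 100.0, "B": 20.0}  # lambda_ps(theta), with G faster than B

def t_beta(beta, theta):
    """Smallest t such that X(t) = beta under linear dynamics."""
    return beta / rates[theta]

def residual(beta, theta):
    """(tau - t_beta(theta))^+ : reward if theta = G, cost if theta = B."""
    return max(tau - t_beta(beta, theta), 0.0)

# Expected utility of a tagged user with beliefs (pi_G, pi_B):
pi_G, pi_B = 0.6, 0.4
beta = 50.0
utility = pi_G * residual(beta, "G") - pi_B * residual(beta, "B")  # = 2.7
```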
Finally, based on their belief ${\bm{\pi}}$, players expect a utility when playing $\beta$
that amounts to
\begin{eqnarray}
U(\bm{\alpha},\beta)= \pi_G R(\bm{\alpha},\beta,G) - \pi_B C(\bm{\alpha},\beta,B) \nonumber
\end{eqnarray}
According to the above expression, the cost and the reward are a function of the
interval during which the content is above the threshold, i.e., when the users can
benefit from it, and depend on the other players' strategies. Furthermore, the action
taken by players depends on their belief about the quality of the content.
In the following we will investigate {\em symmetric equilibria}, i.e., equilibria
for which all users play $\alpha\geq 0$. We can hence adopt a simplified scalar notation and
define $t_{\alpha}=\min\{t|\alpha=X(t)\}$.
Consider a tagged user playing $\beta$ when all the remaining users play
$\alpha$: we assume that the Wardrop conditions hold. Namely,
for a large number of users any unilateral deviation of a single user
does not affect the utilities of other users. I.e., deviations due to
a single user action are negligible. Wardrop equilibria are much
easier to compute than Nash equilibria; however, the Wardrop
equilibrium is a good approximation of the latter, as in \cite{Haurie85}.\footnote{A traditional application
of Wardrop equilibria is road traffic, where users tend to settle on routes minimizing their
delay: the effect of a route change by an individual driver belonging to a flow is negligible system-wide with respect to the utilities of
other users.}
The tagged user expects to gain a certain reward $R(\alpha,\beta,G)$ for a good
content and expects to suffer a cost $C(\alpha,\beta,B)$ when the content is bad:
under which conditions is $\alpha$ the best response to itself, namely $\beta^*(\alpha)=\alpha$?
We answer this question in the next sections under different levels of knowledge of the
viewcount dynamics available to users.
Before we introduce our analysis, we recall that the utility function has the following expression for $\beta\geq \beta_{\tau,B}$
\begin{eqnarray*}
&&U(\alpha,\beta)=\left\{
\begin{array}{ccc}
0 & \mbox{ if }& \beta \geq \beta_{\tau,G} \\
\pi_G (\tau - t_\beta(G)) & \mbox{ if } & \beta_{\tau,B} \leq \beta \leq \beta_{\tau,G}
\end{array}
\right.
\end{eqnarray*}
where $\beta_{\tau,\theta}$ is the solution of the following equation
\begin{equation}
\fdp{t_{\beta_{\tau,\theta}}(\theta) =\tau }
\end{equation}
We observe that the utility function $U$ is nonincreasing for $\beta \geq\beta_{\tau,B}$; hence, the best response $\beta^*(\alpha)$ can be found in the interval $[0,\beta_{\tau,B}]$. As a result, we restrict our analysis to the case $\beta\leq \beta_{\tau,B}$, in which the utility function can be expressed as
$$
U(\alpha,\beta) = \pi_G (\tau - t_\beta(G))- \pi_B (\tau -t_\beta(B))
$$
\section{Plain Viewcount}\label{sec:gt1}
\fdp{The basic model that we introduce in this section is based on the assumption that pull
users rely on the number of hits of a content to judge whether it is worth accessing or not, i.e., they judge based on how many users accessed it. Thus, they play based on the viewcount dynamics. We hence specialize our analysis to two cases.
\subsection{Linear case}
First, we examine the case when the process of diffusion of contents is linear. This is the case when the pool of potential users is very large compared to the time scale of the content diffusion, so that no saturation occurs over the content lifetime.
\fdp{A mechanism that is able to generate such dynamics is the combined effect of an advertisement broadcast
to a very large pool of viewers, e.g., through newspapers or other general-audience media,
and of people thus made aware of the existence of the content, who decide to access it with some random delay thereafter.}
Thus, we let $X_{ps}(t,\theta)=\lambda_{ps} t \cdot \mathbbm{1}(t)$ where $\mathbbm{1}(t)$ is the unitary step function,
and $X_{pu}(t,\theta)=\lambda_{pu} (t-t_{\alpha})\cdot \mathbbm{1}(t-t_{\alpha})$\footnote{In a single source
diffusion model, for instance, $X=N(1-\exp(-\lambda t))=N\lambda t + o(t)$}.
Observe that in this case $\lambda_{ps}=\lambda_{ps}(\theta)$, whereas $\lambda_{pu}$ is independent of $\theta$. In fact, we assume
pull users judge based on viewcount only \cite{Debo2012}. However, we assume that $\lambda_{ps}(G)\geq \lambda_{ps}(B)$.
\begin{lemma}\label{lem:noinfo}
In the linear case, under the assumption $\lambda_{ps}(G)\geq \lambda_{ps}(B)$, it holds
\begin{itemize}
\item[i.] if $\frac{\pi_G}{\lambda_{ps}(G)} \geq \frac{\pi_B}{\lambda_{ps}(B)}$, then $\beta^*(\alpha)=0$.
\item[ii.] if $\frac{\pi_G}{\lambda_{ps}(G)} \leq \frac{\pi_B}{\lambda_{ps}(B)}$ but $\frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} \geq \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}$
, then $\beta^*(\alpha)=\alpha$
\item[iii.] if $\frac{\pi_G}{\lambda_{ps}(G)} \leq \frac{\pi_B}{\lambda_{ps}(B)}$ but $\frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} < \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}$
, then $\beta^*(\alpha)=\beta_{\tau,B}$
\end{itemize}
\end{lemma}
\begin{proof} We need to distinguish two cases, namely $\alpha \geq \beta$ and $\alpha \leq \beta$, determine the best response for each case, and then by comparison choose the best response $\beta^*=\beta^*(\alpha)$. The expression for the utility in the two cases follows.
If {$\alpha \geq \beta$}, then $X(t,\theta)=X_{ps}(t,\theta)$ for $0\leq t \leq t_\beta$.
Thus, we can write simply
\dm{
\[
t_\alpha=\frac{\alpha}{\lambda_{ps}(\theta)},\quad t_\beta=\frac{\beta}{\lambda_{ps}(\theta)}
\]}
and the expression for the utility
\begin{eqnarray}\label{eq:utilcase1}
U(\alpha,\beta)= \tau (\pi_G - \pi_B) - \beta \Big ( \frac{\pi_G}{\lambda_{ps}(G)} - \frac{\pi_B}{\lambda_{ps}(B)}\Big )
\end{eqnarray}
\dm{
If {$\alpha \leq \beta$}, then
$X(t,\theta)=X_{ps}(t,\theta)$ for $0\leq t \leq t_\alpha$ and $X(t,\theta)=X_{ps}(t,\theta)+X_{pu}(t,\theta)$ for $t_{\alpha}\leq t \leq t_{\beta}$.
In this case,}
\[
t_\alpha=\frac{\alpha}{\lambda_{ps}(\theta)},\quad t_\beta=\frac{\beta-\alpha}{\lambda_{ps}(\theta)+\lambda_{pu}}+t_\alpha
\]
and in turn
\begin{eqnarray}\label{eq:utilcase2}
U(\alpha,\beta)&&= \tau (\pi_G - \pi_B)- \alpha \Big ( \frac{\pi_G}{\lambda_{ps}(G)} - \frac{\pi_B}{\lambda_{ps}(B)} \Big )\nonumber \\
&& - (\beta -\alpha) \Big ( \frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} - \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}\Big )
\end{eqnarray}
Now, we can distinguish the three statements in the claim:
{i.} $\frac{\pi_G}{\lambda_{ps}(G)} \geq \frac{\pi_B}{\lambda_{ps}(B)}$: in the first case, due to linearity, $\beta=0$ maximizes
the utility; in the second case, we observe that indeed it must hold $\pi_G \geq \pi_B$, and then
\[
\pi_G\lambda_{ps}(B)-\pi_B\lambda_{ps}(G)\geq 0 \geq \lambda_{pu} (\pi_B - \pi_G)
\]
so that $\frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} \geq \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}$: in turn the utility function
is maximized again if $\beta=0$. Hence, it holds $\beta^*(\alpha)=0$.
{ii.} In the first case, it is optimal to maximize $\beta$, which brings $\beta=\alpha$. In the second
case, in turn it is optimal to minimize $\beta$, so that again $\beta=\alpha$. Hence, $\beta^*(\alpha)=\alpha$.
{iii.} In the first case, the best response is the same as in ii. In the second case, instead, the coefficient of $(\beta-\alpha)$ in (\ref{eq:utilcase2}) is negative, so the last term is positive and increasing in $\beta$: it is optimal to maximize $\beta$, i.e., $\beta=\beta_{\tau,B}$. Also, by comparison with (\ref{eq:utilcase1}), indeed $\beta^*(\alpha)=\beta_{\tau,B}$ in this case.
\end{proof}
The above results provide a characterization of the possible symmetric Wardrop equilibria of the system.
\begin{thm}\label{ea:linearWardrop}
In the linear case, the following statements hold:
\begin{itemize}
\item[i.] if $\frac{\pi_G}{\lambda_{ps}(G)} \geq \frac{\pi_B}{\lambda_{ps}(B)}$, then $0$ is a symmetric Wardrop equilibrium
\item[ii.] if $\frac{\pi_G}{\lambda_{ps}(G)} \leq \frac{\pi_B}{\lambda_{ps}(B)}$ but $\frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} \geq \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}$
,\dm{ then all $0\leq \beta \leq \beta_{\tau,B}$ are symmetric Wardrop equilibria }
\item[iii.] if $\frac{\pi_G}{\lambda_{ps}(G)} \leq \frac{\pi_B}{\lambda_{ps}(B)}$ but $\frac{\pi_G}{\lambda_{ps}(G)+\lambda_{pu}} < \frac{\pi_B}{\lambda_{ps}(B)+\lambda_{pu}}$
, \dm{ then $\beta_{\tau,B}$ is a symmetric Wardrop equilibrium }
\end{itemize}
\end{thm}
It is possible to interpret the above result as follows: $\frac{\pi_G}{\lambda_{ps}(G)}$ represents the time pace at which push users are believed to access a good content. Similarly $\frac{\pi_B}{\lambda_{ps}(B)}$ represents the time pace at which push users are believed to access a bad content. Thus, condition i. suggests that it is always convenient to anticipate the access to the content. In case ii., the situation is dictated by the uptake of pull users, because they increase the viewcount thus reinforcing the believed viewcount pace of a good content against that of a bad content. Finally, in case iii. there is no incentive in accessing the content.
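The three regimes of the theorem can be checked mechanically. The Python sketch below classifies the symmetric Wardrop equilibrium from the believed time paces $\pi_\theta/\lambda_{ps}(\theta)$; all parameter values passed to it are illustrative assumptions:

```python
def equilibrium_regime(pi_G, lam_G, lam_B, lam_pu):
    """Classify the symmetric Wardrop equilibrium of the linear game.

    pi_G: belief that the content is good; lam_G, lam_B: push rates
    lambda_ps(G), lambda_ps(B); lam_pu: pull rate.
    """
    pi_B = 1.0 - pi_G
    if pi_G / lam_G >= pi_B / lam_B:
        # Condition i.: accessing early is always convenient.
        return "beta = 0 (access immediately)"
    if pi_G / (lam_G + lam_pu) >= pi_B / (lam_B + lam_pu):
        # Condition ii.: a continuum of equilibria exists.
        return "continuum: any beta in [0, beta_tau_B]"
    # Condition iii.: no incentive to access the content.
    return "beta = beta_tau_B (no incentive to access)"

print(equilibrium_regime(0.95, 1.0, 0.1, 0.0))  # -> beta = 0 (access immediately)
```

Note how a larger pull rate $\lambda_{pu}$ can move parameters from regime iii.\ into regime ii., reflecting the reinforcement effect of pull users discussed above.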
\subsection{Exponential case: fixed time horizon}
Let us consider the content dissemination process operated by a content provider using a finite set of potential target users. After the content is posted by the provider directly to users, it will be transmitted to more and more users by using some preferential channels. In this case, we need to model the push dynamics accounting for the size $N$ of the pool of push users, i.e., we assume that the content provider disseminates the content according to
\[
{\dot X_{ps}}(t,\theta)= \lambda_{ps}(\theta)(N-X_{ps}(t,\theta)),
\]
so that
\begin{equation}\label{expo1}
X_{ps}(t,\theta)=N(1 - e^{-\lambda_{ps}(\theta) t} )\mbox{ for } t\geq 0
\end{equation}
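Inverting (\ref{expo1}) gives the threshold-crossing time $t_\beta = -\log(1-\beta/N)/\lambda_{ps}(\theta)$, defined only for $\beta<N$. A short numerical check (with illustrative values for $N$ and the rate) follows:

```python
import math

# Threshold-crossing time under the saturating push dynamics
# X_ps(t) = N * (1 - exp(-lam * t)); defined only for beta < N,
# since the push basin alone never exceeds N views. Values illustrative.
N = 1000.0
lam = 0.1

def x_ps(t):
    return N * (1.0 - math.exp(-lam * t))

def t_beta(beta):
    """Smallest t with X_ps(t) = beta; infinite if beta >= N."""
    if beta >= N:
        return math.inf
    return -math.log(1.0 - beta / N) / lam

# Round trip: the crossing time of beta indeed yields viewcount beta.
t400 = t_beta(400.0)
```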
We report in Fig.~\ref{fig:expo_utility} the shape of the utility function in the exponential case \fdp{for a fixed time horizon.}
As can be observed in case a), for smaller values of $\alpha$, i.e., $\alpha=400$, a low value of the belief $\pi_G$ causes the access to be delayed until time $\tau$, whereas for increasing values of $\pi_G$ we observe first a local maximum at $\alpha$ ($\pi_G=0.75$), and finally the strategy $\beta=0$ takes over for very large values of $\pi_G$. Indeed, such a behavior of the utility function resembles -- for a fixed $N$ -- what we observed in the linear case. However, at a closer look, namely in Fig.~\ref{fig:expo_utility}c), we see that the situation is more elaborate: in particular, the number of push users $N$ impacts the speed at which the viewcount increases. As such, a small $N$ \fdp{does not} permit passing the threshold $\alpha$, whereas a very large one \fdp{incentivizes} early access: recall that $\beta_{\max}:=\beta_{\tau,B}$ means access at time $t=0$. In between, the presence of a maximum predicts, as in the linear case, the existence of best responses that lie in the interior of $[0,\beta_{\max}]$. This intuitive numerical insight is confirmed by the theoretical results that we detail in the following.
\begin{figure*}[t]
\centering
\subfigure[Case $\alpha=400$]{\includegraphics[width=0.30\textwidth]{FIG/util_expo_alpha=400.eps}\put(-110,35){\scriptsize $\pi_G=0,0.25,0.5,0.75, 1$}}
\subfigure[Case $\alpha=700$]{\includegraphics[width=0.30\textwidth]{FIG/util_expo_alpha=700.eps}\put(-90,28){\scriptsize $\pi_G=0,0.25,0.5,0.75, 1$}}
\subfigure[$\alpha=700$, increasing $N$]{\includegraphics[width=0.30\textwidth]{FIG/util_expo_N.eps}\put(-50,45){\scriptsize $N=50000$}\put(-120,105){\scriptsize $N=1000$}\put(-97,80){\scriptsize $N=700$}}
\caption{The utility function for $N=1000$, for $\tau=10$ days, $\lambda_{ps}(G)=10^{-1}$ views/day, $\lambda_{ps}(B)=\lambda_{ps}(G)/10$. a) $\alpha=400$ views, b) $\alpha=700$ views. Increasing values of the belief $\pi_G$ determine different shapes for the utility function. c) Increasing values of $N=700,1000,50000$ for $\alpha=700$. All graphs for $\lambda_{pu} =1.5 N \lambda_{ps}(G)$.}\label{fig:expo_utility}
\end{figure*}
We distinguish two cases, namely $\alpha<\beta$ and $\beta \leq \alpha$.
If {$\beta \leq \alpha$}, we have
\[
t_{\beta}(\theta)=-\frac 1{\lambda_{ps}(\theta)}\log \Big ( 1- \frac \beta N \Big ), \;\;
t_{\alpha}(\theta)=-\frac 1{\lambda_{ps}(\theta)}\log \Big ( 1- \frac \alpha N \Big )
\]
Hence the utility becomes
\[
U(\alpha,\beta)=(\pi_G-\pi_B)\tau + \log \Big ( 1- \frac \beta N \Big ) \Big ( \frac{\pi_G}{\lambda_{ps}(G)} - \frac{\pi_B}{\lambda_{ps}(B)}\Big )
\]
\dm{Let $\beta_1^*(\alpha)$ (resp.\ $\beta_2^*(\alpha)$) be the best response to $\alpha$ in $[0,\alpha]$ (resp.\ $[\alpha,\beta_{\max}]$).}
\begin{lemma}
In the exponential case, under the assumption $\lambda_{ps}(G)>\lambda_{ps}(B)$, it holds for $\beta \leq \alpha$
\begin{itemize}
\item If $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ then $\beta^*_1 (\alpha)=\alpha$
\item If $\frac{\pi_G}{\pi_B} > \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ then $\beta^*_1 (\alpha)=0$
\item If $\frac{\pi_G}{\pi_B}= \frac{\lambda_{ps}(G)}{\lambda_{ps}(B) }$ then every $\beta^*_1 \in [0,\alpha]$ is optimal
\end{itemize}
\end{lemma}
\begin{proof} \fdp{The proof is similar to the one developed in the linear case for $\beta \leq \alpha$.}
\end{proof}
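The case distinction in the lemma reduces to the sign of $\frac{\pi_G}{\lambda_{ps}(G)}-\frac{\pi_B}{\lambda_{ps}(B)}$, which multiplies the (negative) logarithm in the utility. A minimal numerical sketch with hypothetical rates:

```python
import math

def U(beta, piG, piB, lamG, lamB, N, tau):
    """Utility for beta <= alpha in the exponential, fixed-horizon case."""
    return (piG - piB) * tau + math.log(1.0 - beta / N) * (piG / lamG - piB / lamB)

N, tau, lamG, lamB = 1000, 10.0, 0.1, 0.01    # hypothetical rates, lamG/lamB = 10
# pi_G/pi_B = 1 < 10: U increases in beta, so the best response is beta = alpha
assert U(300, 0.5, 0.5, lamG, lamB, N, tau) > U(100, 0.5, 0.5, lamG, lamB, N, tau)
# pi_G/pi_B = 19 > 10: U decreases in beta, so the best response is beta = 0
assert U(300, 0.95, 0.05, lamG, lamB, N, tau) < U(100, 0.95, 0.05, lamG, lamB, N, tau)
```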
Now, we study the second case: {$\alpha \leq \beta$}. If $t_\alpha \leq t \leq t_{\beta}$,
\begin{equation}\label{expo2}
X(t,\theta)= N(1 - \exp(-\lambda_{ps}(\theta) t)) + \lambda_{pu} (t-t_{\alpha})
\end{equation}
for which we obtain
\begin{eqnarray}
t_{\beta}&=&\frac 1{\lambda_{ps}(\theta)} \Big(W\Big( \frac{\lambda_{ps}(\theta)}{\lambda_{pu}} N \frac{e^{\frac{\lambda_{ps}(\theta)}{\lambda_{pu}} N(1-\frac \beta N)}}{1-\frac \alpha N }\Big ) \nonumber\\
&& - \log \Big ( \frac{e^{\frac{\lambda_{ps}(\theta)}{\lambda_{pu}} N\big (1-\frac \beta N \big )}}{ 1- \frac{\alpha}{N} } \Big )\Big)
\end{eqnarray}
where $W(\cdot)$ is the Lambert function \cite{CorlessLambert}. We can obtain the derivative of the above expression by letting $\xi(\beta,\theta)= \frac{e^{\zeta(\theta)(1-\frac \beta N)}}{1-\frac \alpha N }$
and $\zeta(\theta) = \frac{\lambda_{ps}(\theta)}{\lambda_{pu}} N$, so that $t_\beta=\frac 1{\lambda_{ps}(\theta)}\big[W(\zeta(\theta)\xi(\beta,\theta)) - \log \xi(\beta,\theta)\big]$:
\begin{eqnarray}
\frac d{d\beta} t_{\beta} &=&\frac 1 {\lambda_{ps}(\theta)} \frac d{d\beta} \Big[ W(\zeta(\theta)\xi(\beta, \theta)) - \log\xi(\beta,\theta)\Big]\nonumber \\
&=&\frac 1 {\lambda_{pu}} \cdot \frac 1{1 + W(\zeta(\theta)\xi(\beta, \theta))} \nonumber
\end{eqnarray}
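Since $X(t,\theta)$ is continuous and strictly increasing, $t_\beta$ can also be obtained by direct numerical inversion, which offers a convenient cross-check of the Lambert-function expression (a sketch; the helper names are ours and the parameters are illustrative, with $\lambda_{pu}=1.5N\lambda_{ps}(G)$ as in the figures):

```python
import math

def viewcount(t, lam, lam_pu, N, t_alpha):
    """Two-phase dynamics: exponential push, plus linear pull once t >= t_alpha."""
    base = N * (1.0 - math.exp(-lam * t))
    return base + (lam_pu * (t - t_alpha) if t >= t_alpha else 0.0)

def t_beta_numeric(beta, lam, lam_pu, N, alpha, t_max=1e6):
    """Invert X(t) = beta by bisection, for beta >= alpha."""
    t_alpha = -math.log(1.0 - alpha / N) / lam
    lo, hi = t_alpha, t_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if viewcount(mid, lam, lam_pu, N, t_alpha) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N, lam, lam_pu, alpha = 1000, 0.1, 150.0, 400.0
t_alpha = -math.log(1.0 - alpha / N) / lam
tb = t_beta_numeric(700.0, lam, lam_pu, N, alpha)
assert tb > t_alpha        # the level beta = 700 is reached after alpha = 400
```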
\fdp{After some cumbersome algebra, we derive}
\begin{lemma}
\fdp{In the exponential case, under the assumptions $\lambda_{ps}(G)>\lambda_{ps}(B)$ and $\lambda_{ps}(G)N \leq \lambda_{pu}$, for $\alpha \leq \beta$ it holds}
\dm{\begin{itemize}
\item If ${\pi_G}\leq \pi_B $ then $\beta^*_2(\alpha)=\beta_{\tau,B}$
\item If $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\geq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau,B}]$ then $\beta^*_2(\alpha)= \alpha$
\item If $\frac{1+W(\zeta(G)\xi(\beta_\tau, G))}{1+W(\zeta(B)\xi(\beta_\tau, B))}\leq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau,B}]$ then $\beta^*_2(\alpha)=\beta_{\tau,B}$
\item otherwise $\beta^*_2(\alpha)$ is the solution of the following equation
$$\frac{1+W(\zeta(G)\xi(\beta^*_2(\alpha), G))}{1+W(\zeta(B)\xi(\beta^*_2(\alpha), B))}=\frac{\pi_G}{\pi_B}$$
\end{itemize}}
\end{lemma}
\begin{proof}
The derivative of the utility function $U$ is
\begin{equation}
\label{derivative}
U'(\alpha, \beta) =\frac{1}{\lambda_{pu}}\Big(\frac{\pi_B}{1+W(\zeta(B)\xi(\beta, B))}-\frac{\pi_G}{1+W(\zeta(G)\xi(\beta, G))}\Big)
\end{equation}
Since $\xi(\beta, G)>\xi(\beta, B)$ and $\zeta(G)>\zeta(B)$, it is easy to check that under the condition ${\pi_G}\leq \pi_B$ we have $U'(\alpha, \beta) >0$. Hence the utility function attains a unique maximum at $\beta_{\tau,B}$.
In order to complete the proof, it is sufficient to show that the function $U$ is either non-increasing, or there is some $\bar \beta$ such that $U$ is non-decreasing for $\beta<\bar \beta$ and non-increasing for $\beta>\bar \beta$.
Assume that there exists a $\bar \beta$ such that $U'(\alpha, \bar \beta)\leq 0$. From (\ref{derivative}), it is sufficient to show that
$$
U'(\alpha, \beta)\leq 0 \;\,\;\mbox{ for all } \beta >\bar \beta
$$
We can show the above property by letting $\bar W(\beta) = \frac{1+W(\zeta(G)\xi(\beta, G))}{1+W(\zeta(B)\xi(\beta,B))}$, and it turns out that
\begin{eqnarray*}
&&\frac{\partial\bar W(\beta)}{\partial \beta}= \frac{1}{(1+W(\zeta(B)\xi(\beta,B)))^2}\\
&&\Big(\frac {\zeta(B) W(\zeta(B)\xi(\beta, B))(1+W(\zeta(G)\xi(\beta, G)))}{1+W(\zeta(B)\xi(\beta, B))}\\
&&-\frac {\zeta(G) W(\zeta(G)\xi(\beta, G))(1+W(\zeta(B)\xi(\beta, B)))}{1+W(\zeta(G)\xi(\beta, G))}\Big)
\end{eqnarray*}
To show $\frac{\partial\bar W(\beta)}{\partial \beta}\leq 0$, we impose the inequality
\begin{equation}
\label{compa}
\frac {\zeta(B) W(\zeta(B)\xi(\beta, B))}{(1+W(\zeta(B)\xi(\beta, B)))^2} \leq \frac {\zeta(G) W(\zeta(G)\xi(\beta, G))}{(1+W(\zeta(G)\xi(\beta, G)))^2}
\end{equation}
We can obtain the above inequality under the assumption $\lambda_{ps}(G)N \leq \lambda_{pu}$ by letting
$$ f(y)=\frac {y\, W\Big(y \frac{e^{y(1-\frac{\beta}{N})}}{1-\frac{\alpha}{N}}\Big)}{\Big(1+W\Big(y \frac{e^{y(1-\frac{\beta}{N})}}{1-\frac{\alpha}{N}}\Big)\Big)^2}$$
Hence the derivative of $f$
can be expressed as
\begin{equation}
\frac{\partial f}{\partial y} = W(\bar y)\frac{W^2(\bar y)+ W(\bar y) (1-y (1-\frac{\beta}{N}))+2+y (1-\frac{\beta}{N})}{(1+W(\bar y))^2}
\end{equation}
where $\bar y=y \frac{e^{y(1-\frac{\beta}{N})}}{(1-\frac{\alpha}{N})}$. In fact, it can be shown that $\frac{\partial f}{\partial y}$ is positive for $y(1-\frac{\beta}{N})\leq 1$, i.e., for $\lambda_{ps}(G)N \leq \lambda_{pu}$.
\end{proof}
\dm{Overall, the above cases are summarized in the following theorem
\begin{thm}\label{thm:expo}
Let $\lambda_{ps}(G)>\lambda_{ps}(B)$ and $\lambda_{ps}(G)N \leq \lambda_{pu}$, then in the exponential case
\begin{itemize}
\item[i)] If ${\pi_G}\leq \pi_B$ then $\beta_{\tau,B}$ is a symmetric Wardrop equilibrium
\item[ii)] If ${\pi_G}> \pi_B $ then the following cases hold
\begin{itemize}
\item[a)] If $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\geq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha,\beta_{\tau,B}]$ then all $0< \beta \leq \beta_{\tau,B}$ are symmetric Wardrop equilibria
\item[b)] If $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\beta_\tau, G))}{1+W(\zeta(B)\xi(\beta_\tau, B))}\leq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau,B}]$ then $\beta_{\tau,B}$ is a symmetric Wardrop equilibrium
\item[c)] If $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and there exists $\bar \beta$ solution of the following equation
$$\frac{1+W(\zeta(G)\xi(\bar\beta, G))}{1+W(\zeta(B)\xi(\bar \beta, B))}=\frac{\pi_G}{\pi_B}$$
then $\bar \beta$ is a symmetric Wardrop equilibrium
\end{itemize}
\item[iii)] If $\frac{\pi_G}{\pi_B} > \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)}$, then the following cases hold
\begin{itemize}
\item[a)] if $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\geq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha,\beta_{\tau,B}]$ then $0$ is a symmetric Wardrop equilibrium
\item[b)] if $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\leq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau,B}]$, then there exists a symmetric Wardrop equilibrium which is given by
\begin{eqnarray}
\left\{
\begin{array}{cc}
0 & \mbox{ if } \tau \pi_B < \pi_G t_{\beta_{\tau,B}}(G)\\
\beta_{\tau,B}& \mbox{ if } \tau \pi_B > \pi_G t_{\beta_{\tau,B}}(G)\\
\beta^* \in \{0, \beta_{\tau,B}\} & \mbox{ if } \tau \pi_B = \pi_G t_{\beta_{\tau,B}}(G)
\end{array}
\right.
\end{eqnarray}
\end{itemize}
\end{itemize}
\end{thm}}
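The conditions in the lemma and in Thm.~\ref{thm:expo} hinge on the monotonicity in $\beta$ of the ratio $\bar W(\beta)$. A minimal numerical sketch, with a stdlib Newton iteration standing in for a library Lambert $W$ and hypothetical parameters satisfying the standing assumption $\lambda_{ps}(G)N\leq\lambda_{pu}$:

```python
import math

def lambert_w(x):
    """Principal branch of the Lambert W function for x > 0, via Newton iteration."""
    w = math.log(1.0 + x)                  # reasonable starting point for x > 0
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

# Hypothetical parameters satisfying lam_G * N <= lam_pu
N, lam_G, lam_B, lam_pu, alpha = 1000, 0.1, 0.01, 150.0, 400.0

def zeta(lam):
    return lam * N / lam_pu

def xi(beta, lam):
    return math.exp(zeta(lam) * (1.0 - beta / N)) / (1.0 - alpha / N)

def bar_w(beta):
    """Ratio (1 + W(zeta(G) xi(beta,G))) / (1 + W(zeta(B) xi(beta,B)))."""
    return (1.0 + lambert_w(zeta(lam_G) * xi(beta, lam_G))) / \
           (1.0 + lambert_w(zeta(lam_B) * xi(beta, lam_B)))

assert abs(lambert_w(math.e) - 1.0) < 1e-9   # sanity: W(e) = 1
assert bar_w(900.0) < bar_w(400.0)           # the ratio decreases in beta
```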
\fdp{Theorem~\ref{thm:expo} displays a structure of the best response that is similar to the result obtained for the linear case, but
we should highlight some differences. First, the additional requirement $\lambda_{ps}(G)N \leq \lambda_{pu}$
excludes the case when the effect of the pull mechanism is negligible compared to the push mechanism.
This means that we restrict to the case when the aggregate maximum rate at which the viewcount can increase
due to the push mechanism is smaller than the increase generated by pull users once the viewcount is above threshold.
Indeed, this is the interesting case, when the content provider's aim is to attract a large basin of pull users using a
limited target audience of push users.}
\fdp{Second, we observe that the term $\frac{\pi_\theta}{\lambda_{ps}(\theta)+ \lambda_{pu}}$ that was present in the linear case is
now replaced by a term involving the Lambert function $W(\cdot)$ \cite{CorlessLambert}: this is due to the combined effect of the exponential growth and the linear growth above the threshold, accounting for the saturation of the basin of push users. In the case
when $N$ is very large or $\lambda_{ps}$ is very small, the term collapses to the condition expressed in the linear case. }
\section{Variable time horizon}\label{sec:gt2}
In this section, we are interested in the case where the time horizon \fdp{during which the content is accessed by pull users is not fixed, but is determined by the popularity of the content and by the quality perceived by users. In particular, when the popularity of a content is subject to saturation, we can model a vanishing $\dot X$ to encode the condition when a content which has been online for a long time becomes stale. Conversely, fresh contents with fast-growing viewcount will experience large values of $\dot X$ and will be preferred.} \fdp{This case fits well specific types of contents such as news or pop songs, for which the {\em trend} of the viewcount increase may be the main trigger for the users' interest in some content. Pull users still adopt a threshold strategy and browse the content if}
\begin{equation}\label{eq:threxpo}
\dot X(t,\theta) \geq \gamma_{th}
\end{equation}
Let us consider the exponential push case introduced in the previous section. Condition \eqref{eq:threxpo} determines a variable horizon to access content $\theta$:
$$
\tau(\alpha, \theta) =\dot X^{-1}( \gamma_{th})
$$
Because the time horizon $\tau=\infty$ for $ \gamma_{th}\leq \lambda_{pu}$, we restrict our analysis to the case when $\gamma_{th}>\lambda_{pu}$.
\fdp{Again, we are interested to compute the utility function for a tagged user given a certain common threshold strategy $\alpha$ played by other
users; the objective is to compute the best response $\beta$ for the tagged user as done before}. Let $X_{th} (\theta)= N-\frac{\gamma_{th}}{\lambda_{ps}(\theta)}$, $\tau_0(\theta) = \frac{1}{\lambda_{ps}(\theta)} \log\Big(\frac{\lambda_{ps}(\theta) N}{\gamma_{th}}\Big)$ and $\tau_1( \theta) = \frac{1}{\lambda_{ps}(\theta)} \log\Big(\frac{\lambda_{ps}(\theta) N}{\gamma_{th}-\lambda_{pu}}\Big)$.
\fdp{Observe that the interval of time when pull users will access the content becomes now $[\tau_0(\theta),\, \tau_1(\theta)]$: the duration of
such interval corresponds to the useful lifetime of the content as dictated by the interest of the users based on \eqref{eq:threxpo} and by the content type.}
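The window endpoints follow directly from the definitions; a numerical sketch with hypothetical values of $\gamma_{th}$ and $\lambda_{pu}$, chosen so that $\lambda_{pu}<\gamma_{th}<\lambda_{ps}(\theta)N$ and both endpoints are finite and positive:

```python
import math

# Hypothetical parameters chosen so that lam_pu < gamma_th < lam * N
N, lam, lam_pu, gamma_th = 1000, 0.1, 50.0, 60.0

tau0 = math.log(lam * N / gamma_th) / lam              # trend hits gamma_th, push only
tau1 = math.log(lam * N / (gamma_th - lam_pu)) / lam   # trend hits gamma_th with pull boost

def trend_push(t):
    """dX/dt for the pure push phase X(t) = N(1 - e^{-lam t})."""
    return lam * N * math.exp(-lam * t)

assert 0 < tau0 < tau1
assert abs(trend_push(tau0) - gamma_th) < 1e-9   # the trend equals gamma_th at tau0
```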
\fdp{We distinguish again two cases, namely $0\leq \beta \leq \alpha$ and $\beta \geq \alpha$, and denote by $\beta_1^*$ and $\beta_2^*$ the best responses in those intervals, respectively. However, we also need to account for \eqref{eq:threxpo} and to detail the utility accordingly.}
It follows that if $\beta\geq \alpha$, then
$$
U(\alpha,\beta)= \pi_G \Big(\tau_1(G) -t_\beta (G)\Big)^+ - \pi_B \Big(\tau_1(B) -t_\beta (B)\Big)^+
$$
If $\alpha > X_{th} (G)$ and $\beta\leq \alpha$
\begin{eqnarray*}
U(\alpha,\beta)&=& \pi_G \Big(\tau_0(G) -t_\beta (G)\Big)^+ + \pi_G \Big(\tau_1(G)-t_\alpha(G)\Big)^+ \\
&&- \pi_B \Big(\tau_0(B) -t_\beta (B)\Big)^+ - \pi_B \Big(\tau_1(B)-t_\alpha(B)\Big)^+
\end{eqnarray*}
If $X_{th}(B)\leq \alpha \leq X_{th} (G)$ and $\beta\leq \alpha$, then
\begin{eqnarray*}
U(\alpha,\beta)&=& \pi_G \Big(\tau_1(G) -t_\beta (G)\Big) \\
&&- \pi_B \Big(\tau_0(B) -t_\beta (B)\Big)^+ - \pi_B \Big(\tau_1(B)-t_\alpha(B)\Big)^+
\end{eqnarray*}
If $X_{th}(B)\geq \alpha$ and $\beta\leq \alpha$, then
\begin{eqnarray*}
U(\alpha,\beta)&=& \pi_G \Big(\tau_1(G) -t_\beta (G)\Big) - \pi_B \Big(\tau_1(B)-t_\alpha(B)\Big)
\end{eqnarray*}
\begin{figure*}[t]
\centering
\subfigure{\includegraphics[width=0.30\textwidth]{FIG/util_expoxxdot_alpha=0.18_Lpus.eps}}
\subfigure{\includegraphics[width=0.30\textwidth]{FIG/util_expoxxdot_alpha=0.02899_Smooth.eps}\put(-85,22){\scriptsize $\pi_G=0,0.25,0.5,0.75, 1$}}
\subfigure{\includegraphics[width=0.31\textwidth]{FIG/util_expoxxdot_alpha=0.02899_LessSmooth.eps}\put(-90,19){\scriptsize $\pi_G=0,0.25,0.5,0.75, 1$}
\put(-155,110){(c)}\put(-315,110){(b)}\put(-480,110){(a)}}
\caption{The utility function for $N=1000$, for $\tau=10$ days, $\lambda_{ps}(G)=10^{-1}$ views/day, $\lambda_{ps}(B)=\lambda_{ps}(G)/10$. a) Detail of the discontinuities of $U(\alpha,\beta)$ for $\gamma=0.01,0.1,1$, where $\alpha=0.18$. b) Extremal type of best response for $\alpha=0.029$, $\gamma=1.5$ and under increasing values of the belief $\pi_G$. c) Same as b) but for $\gamma=0.3$. The discontinuity at $\alpha$ corresponds to local maxima for $\pi_G=0.25,0.50$.}\label{fig:expo_xxdot}
\end{figure*}
\dm{With an analysis similar to that employed in the proof of Thm.~\ref{thm:expo}, we can write:}
\begin{thm}\label{thm:exp_thr}
In the exponential case, under the assumptions $\lambda_{ps}(G)>\lambda_{ps}(B)$ and $\lambda_{ps}(G)N \leq \lambda_{pu}$, it holds
\begin{itemize}
\item \fdp{If ${\pi_G}\leq \pi_B $ then $\beta_{\tau,B}$, the solution of $t_{\beta}(B)=\tau$, is a symmetric Wardrop equilibrium}
\item \dm{ If ${\pi_G}> \pi_B $, $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\geq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau_{0,B}}]$ then all values in the interval $[0, \tilde\beta] $ are symmetric Wardrop equilibria.}
\item \dm{ If ${\pi_G}> \pi_B $, $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\beta_\tau, G))}{1+W(\zeta(B)\xi(\beta_\tau, B))}\leq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha,\beta_{\tau_{0,B}}]$ then $\tilde \beta$ is a symmetric Wardrop equilibrium.}
\item \dm{ If ${\pi_G}> \pi_B $, $\frac{\pi_G}{\pi_B} < \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and there exists $ \beta_s$ solution of the following equation
$$\frac{1+W(\zeta(G)\xi(\beta_s, G))}{1+W(\zeta(B)\xi( \beta_s, B))}=\frac{\pi_G}{\pi_B}$$
then $\beta_s$ is a symmetric Wardrop equilibrium.}
\dm{\item If $\frac{\pi_G}{\pi_B} > \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\geq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha,\beta_{\tau_{0,B}}]$ then $0$ is a symmetric Wardrop equilibrium.
\item If $\frac{\pi_G}{\pi_B} > \frac{\lambda_{ps}(G)}{\lambda_{ps}(B)} $ and $\frac{1+W(\zeta(G)\xi(\alpha, G))}{1+W(\zeta(B)\xi(\alpha,B))}\leq \frac{\pi_G}{\pi_B}$ for all $\beta \in [\alpha, \beta_{\tau_{0,B}}]$ then there exists a symmetric Wardrop equilibrium which is given by}
\begin{eqnarray}
\left\{
\begin{array}{cc}
0 & \mbox{ if } \tau \pi_B < \pi_G t_{\beta_{\tau_{0,B}}}\\
\beta_{\tau_{0,B}} & \mbox{ if } \tau \pi_B > \pi_G t_{\beta_{\tau_{0,B}}}\\
\beta^* \in \{0, \beta_{\tau_{0,B}}\} & \mbox{ if } \tau \pi_B = \pi_G t_{\beta_{\tau_{0,B}}}
\end{array}
\right.
\end{eqnarray}
\end{itemize}
\end{thm}
\fdp{The overall result in Thm.~\ref{thm:exp_thr} shows a structure that is close to that obtained in Thm.~\ref{thm:expo}. We can conclude
that the presence of a selective preference expressed in terms of the viewcount trend does not affect the structure of the Wardrop
equilibria. In fact, they are of the kind determined before in the case of a fixed-length interval: either extremal ones or a continuum of such restpoints. It is interesting to notice that this holds irrespective of the fact that the utility function is linear in the ``viewing time'', i.e., the time that is useful for the viewers, while pull users' preferences depend on a non-linear function of the threshold type.}
\section{Combined effect of Trend and Viewcount}\label{sec:gt3}
In general, contents that have been online for a long time display different
popularity than contents which last only a short time \cite{SzaboPop}. As we noticed in the previous
section, when popularity saturation occurs, $\dot X$ vanishes for large $t$. \fdp{If users choose among contents with
different trend and different viewcount, they would naturally choose a content with large viewcount and large increasing trend.
To this respect $y(t)=\dot X(t) X(t)$ encodes the condition when the pull user still values the viewcount, but, she favors a large
increasing trend given two contents with the same viewcount.}
Symmetric equilibria can be determined when in the system all users adopt a strategy
\[
\alpha := y(t_\alpha), \quad 0 \leq t_\alpha \leq \tau
\]
and again we determine the best response for a user deviating using $\beta := y(t_\beta)$ as a reply, where $0 \leq t_\beta \leq \tau$.
It is easy to see that in the linear case, the model developed in the previous section
applies as long as one replaces the dynamics with the one below
\[
X_{ps}(t,\theta)=\lambda_{ps}^2(\theta)t+\lambda_{ps}(\theta), \quad X_{pu}=\lambda_{pu}^2(t-t_{\alpha})\cdot \mathbbm{1}(t\geq t_{\alpha})
\]
so that all the results can be specialized accordingly replacing $\lambda_{ps}$ and $\lambda_{pu}$ with $\lambda_{ps}^2$ and $\lambda_{pu}^2$
wherever they appear. The intuition is that when the regime of content diffusion is linear, i.e.,
when a large number of push users exists, the trend of popularity has the only effect to reinforce
the inequality $\lambda_{ps}(B)\not =\lambda_{ps}(G)$. We then move to a more interesting case.
\subsection{Exponential push case}
In the exponential case, the dynamics again is the same captured by (\ref{expo1}), (\ref{expo2}). We
can specialize the analysis to the two cases as done before. If {$\alpha \geq \beta$}, $y(t_\beta)=\beta$
implies that
\[
\beta=\lambda_{ps}(\theta) N^2(1-e^{-\lambda_{ps}(\theta) t_\beta})e^{-\lambda_{ps}(\theta) t_\beta}
\]
where the solution is such that $t_\beta=-\frac1{\lambda_{ps}(\theta)} f(\beta,\theta)$, where we let $f(\beta,\theta):=\log \Big ( \frac 12 \Big ( 1 + \sqrt{1 - \frac{4\beta}{\lambda_{ps}(\theta)N^2}} \Big ) \Big )$. Hence the utility becomes
\[
U(\alpha,\beta)=(\pi_G-\pi_B)\tau + \Big ( \frac{\pi_G}{\lambda_{ps}(G)}f(\beta,G) - \frac{\pi_B}{\lambda_{ps}(B)}f(\beta,B) \Big )
\]
After observing that $f(0,\theta)=0$ and $f(\beta,G)\leq f(\beta,B) \leq 0$, again we obtain two extremal cases:
when $\frac{\pi_G}{\lambda_{ps}(G)} \geq \frac{\pi_B}{\lambda_{ps}(B)}$ then $U(\alpha,\beta)-(\pi_G-\pi_B)\tau\leq 0$ so that
$\beta=0$ maximizes the utility. In the opposite case, namely, $\frac{\pi_G}{\lambda_{ps}(G)} \leq \frac{\pi_B}{\lambda_{ps}(B)}$, $U(\alpha,\beta)-(\pi_G-\pi_B)\tau\geq 0$, so that $\beta=\alpha$ does.
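The inversion $t_\beta=-f(\beta,\theta)/\lambda_{ps}(\theta)$ can be checked numerically (a sketch; the helper names are ours and the parameter values are hypothetical):

```python
import math

lam, N = 0.1, 1000      # hypothetical push rate and audience size

def y(t):
    """Product metric y(t) = dX/dt * X(t) for X(t) = N(1 - e^{-lam t})."""
    z = math.exp(-lam * t)
    return lam * N * N * (1.0 - z) * z

def t_beta(beta):
    """Closed-form inversion t_beta = -f(beta)/lam on the early-time branch."""
    f = math.log(0.5 * (1.0 + math.sqrt(1.0 - 4.0 * beta / (lam * N * N))))
    return -f / lam

beta = 1e4
assert abs(y(t_beta(beta)) - beta) < 1e-6   # round-trip: y(t_beta(beta)) = beta
```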
If $\alpha \leq \beta$, the condition $y(t_\beta)=\beta$ writes
\[
\beta=X(t_\beta) \Big ( \lambda_{ps}(\theta) N e^{-\lambda_{ps}(\theta) t_\beta} + \lambda_{pu} \Big )
\]
which gives
\begin{eqnarray}\label{eq:utexpxdotx}
t_\beta
=t_\alpha(\theta) - \frac N{\lambda_{pu}} \Big [1- \frac{W(f(\beta,\theta)\xi(\theta)e^{-\xi(\theta)})}{\xi(\theta)e^{-2\xi(\theta)}} \Big ]\nonumber
\end{eqnarray}
where we used the defining relation $We^W=x$ of the Lambert function and we stressed the dependence of $t_\alpha$ on $\theta$. It is important to notice that in this case $t_\beta$ is not continuous, so that in correspondence of $t_\alpha(G)$ and $t_\alpha(B)$ the utility function has possibly two discontinuities. We reported in Fig.~\ref{fig:expo_xxdot}(a) the shape of the utility function for increasing values of $\gamma=0.01,0.1,1$, where $\lambda_{pu} =\gamma N \lambda_{ps}(G)$. For larger values of $\gamma$ the effect of the discontinuities becomes negligible with respect to the shape of the utility function (indeed we are looking for the best response, i.e., the maximum of $U(\alpha,\beta)$).
In particular, we observe in Fig.~\ref{fig:expo_xxdot}(b) that for the choice of parameters there, i.e., $\gamma=1.5$, the shape of the utility function leads again to the customary extremal type of best response that we observed in the linear case. That is, access at time $t=0$, i.e., $\beta=\beta_{\max}$ for large $\pi_G$ and access at time $t_\beta=\tau$, i.e., $\beta=0$ for smaller values of $\pi_G$. However, for $\gamma=0.3$, see Fig.~\ref{fig:expo_xxdot}(c), we find Wardrop equilibria ($\beta^*(\alpha)=\alpha$) in the interior of $[0,\beta_{\max}]$. Further numerical exploration \fdp{confirmed that the equilibria form an interval. Thus, again, we find that there exist conditions (in this case, smaller $\lambda_{pu}$) when the system has a continuum of equilibria as in previous cases. }
\begin{figure}[t]
\centering
\includegraphics[width=0.30\textwidth]{FIG/betauno}
\caption{The shape of function $\beta_1(\lambda_{pu})$ for increasing values of $\lambda_{pu}$: the vertical asymptote corresponds to the value $\lambda_{pu}^s$.}\label{fig:betauno}
\end{figure}
\section{Users with side information}\label{sec:sideinfo}
\fdp{In the previous section we have considered the product of the trend and magnitude of the viewcount as
a metric: as seen there, the structure of the equilibria that we can expect resembles closely what we found in the previous
cases: either extremal Wardrop equilibria or a continuum of restpoints. We want to describe the case when potential viewers
may be provided additional information on the upcoming popularity of a certain content, e.g., relying on some predictors or some apriori information they have.
They judge whether to access or not a given content based on the product of the popularity $X$ and the
popularity trend $\dot X$. But, they only know how such metric is going to accumulate over time, i.e., the metric for a user that
approaches the content at time $t$ is }
\begin{equation}
y(t)=\int_{t}^{\tau} X(u) \dot X(u) du = \frac 12 (X^2(\tau) - X^2(t)) \nonumber
\end{equation}
\fdp{This metric can be used as a simple benchmark case: it contains information on the future dynamics of $X(\theta)$,
and it is defined by the current and the final values of the viewcount. However, the amount of such information in general is not
sufficient at time $t$ to state the type of the content. Of course, more sophisticated metrics are possible. Nevertheless,
the one at hand will do for the purpose of showing that by making the potential viewers of a
content aware of some side information, the system may experience a deep change in the structure
of the equilibria.}
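The identity $\int_t^\tau X\dot X\,du=\frac12(X^2(\tau)-X^2(t))$ holds for any dynamics; a minimal numerical check with a simple linear $X$ (hypothetical rate; midpoint rule):

```python
# Midpoint-rule check of y(t) = int_t^tau X * dX/du du = (X(tau)^2 - X(t)^2)/2,
# for the simple linear dynamics X(u) = lam * u (hypothetical rate).
lam, tau, t0 = 2.0, 10.0, 3.0

def X(u):
    return lam * u

n = 100000
h = (tau - t0) / n
integral = sum(X(t0 + (k + 0.5) * h) * lam * h for k in range(n))  # dX/du = lam
closed = 0.5 * (X(tau) ** 2 - X(t0) ** 2)
assert abs(integral - closed) < 1e-6
```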
Let all users adopt strategy
\[
\alpha := y(t_\alpha), \quad 0 \leq t_\alpha \leq \tau
\]
and in the same way as done before we want to determine the best response for a user adopting
$\beta := y(t_\beta)$ as a reply, where $0 \leq t_\beta \leq \tau$.
In the case $\beta \geq \alpha$, we recall that the dynamics is
\[
X(t,\theta)=\alpha + \lambda(\theta)(t-t_{\alpha})
\]
where $\lambda(\theta):=(\lambda_{pu} + \lambda_{ps}(\theta))$ for the sake of notation, so that
\[
\alpha+\lambda(\theta)(t_{\beta}-t_{\alpha})=\sqrt{X^2(\tau,\theta)-2\beta}
\]
which solves for $\displaystyle t_{\beta}=\frac 1{\lambda(\theta)} \Big ( \alpha \frac{\lambda_{pu}}{\lambda_{ps}(\theta)} + \sqrt{X^2(\tau,\theta)-2\beta} \Big)$.
The corresponding expression for the utility is $ U(\alpha,\beta) =$
\begin{eqnarray}
U_0(\alpha,\beta)-\left [ \frac{\pi_G \sqrt{X^2(\tau,G)-2\beta} }{\lambda(G)}- \frac{\pi_B \sqrt{(X^2(\tau,B)-2\beta)}}{\lambda(B)} \right ] \nonumber
\end{eqnarray}
where the term $U_0(\alpha,\beta)=(\pi_G-\pi_B)\tau - \alpha \lambda_{pu} \Big ( \frac{\pi_G}{\lambda_{ps}(G)\lambda(G)}- \frac{\pi_B}{\lambda_{ps}(B)\lambda(B)} \Big )$
and it turns out that
\[
\frac {dU(\alpha,\beta)}{d\beta} =\frac{\pi_G }{\lambda(G)(X^2(\tau,G)-2\beta)^{\frac 12} }- \frac{\pi_B }{\lambda(B)(X^2(\tau,B)-2\beta)^{\frac 12}}
\]
which is \fdp{decreasing with $\beta \in (-\infty,\beta_{\tau,B}]$, where $\beta_{\tau,B}:=\frac 12 X^2(\tau,B)$, as follows by comparing the ratio of the two positive terms appearing in the expression above under the assumption $X(\tau,G) \geq X(\tau,B)$}. When $\frac{\pi_G }{\lambda(G)}\not = \frac{\pi_B }{\lambda(B)}$, the function $U(\alpha,\cdot)$ attains over $\mathbb R$ a unique maximum at
\[
\beta_1=\frac 12 \frac{- X^2(\tau,G)\big ( \frac{\pi_B}{\lambda(B)} \big )^2 + X^2(\tau,B)\big ( \frac{\pi_G}{\lambda(G)} \big )^2 }{\big ( \frac{\pi_G}{\lambda(G)} \big )^2 - \big ( \frac{\pi_B}{\lambda(B)} \big )^2}
\]
so that there exists also one maximum of $U(\alpha,\beta)$ in $[t_\alpha,\tau]$.
We can distinguish three cases based on the fact that
\begin{enumerate}
\item $\beta_1\leq \alpha$: the best response in this case is $\beta^*(\alpha)=\alpha$
\item $\alpha<\beta_1<\beta_{\tau,B}$: the best response is $\beta^*(\alpha)=\beta_1$
\item $\beta_1\geq \beta_{\tau,B}$: the best response in this case is $\beta^*(\alpha)=\beta_{\tau,B}$.
\end{enumerate}
\fdp{Finally, we notice that when $\frac{\pi_G }{\lambda(G)} = \frac{\pi_B }{\lambda(B)}$, case 1) applies.}
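The three-case classification around $\beta_1$ can be illustrated numerically: the closed form for $\beta_1$ is the zero of $dU/d\beta$. A sketch with hypothetical values of $X(\tau,\theta)$ and of the ratios $\pi_\theta/\lambda(\theta)$, chosen so that $\beta_1$ falls inside the admissible range:

```python
import math

# Hypothetical values, chosen so that the stationary point lies inside the domain:
XG, XB = 100.0, 90.0        # X(tau, G) and X(tau, B)
aG, aB = 0.5, 0.4           # the ratios pi_G/lambda(G) and pi_B/lambda(B)

def dU(beta):
    """Derivative of the utility in beta (side-information case, beta >= alpha)."""
    return aG / math.sqrt(XG * XG - 2 * beta) - aB / math.sqrt(XB * XB - 2 * beta)

# Closed-form stationary point beta_1 from the text:
beta1 = 0.5 * (-XG ** 2 * aB ** 2 + XB ** 2 * aG ** 2) / (aG ** 2 - aB ** 2)
assert abs(dU(beta1)) < 1e-9          # beta_1 is indeed the zero of dU/dbeta
assert 0 < beta1 < 0.5 * XB ** 2      # and it lies inside the admissible range
```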
\fdp{In the case $\beta < \alpha$, we can derive a similar analysis starting from the dynamics $X(t,\theta)=\lambda_{ps}(\theta) t$, so that}
\[
\beta=y(t_\beta)=\frac 12 \Big ( X^2(\tau,\theta) - \lambda_{ps}^2(\theta) t_\beta^2 \Big )
\]
so that $t_\beta=\sqrt{X^2(\tau,\theta)-2\beta}$, and
\begin{eqnarray}
&&U(\alpha,\beta)=(\pi_G-\pi_B)\tau \nonumber \\
&&- \left [ \frac{\pi_G \sqrt{X^2(\tau,G)-2\beta} }{\lambda_{ps}(G)}- \frac{\pi_B \sqrt{(X^2(\tau,B)-2\beta)}}{\lambda_{ps}(B)} \right ] \nonumber
\end{eqnarray}
\fdp{In turn, we can recognize} the same structure for the best response as in the previous case, where the maximum of $U(\cdot,\beta)$ (when $\frac{\pi_G }{\lambda_{ps}(G)}\not= \frac{\pi_B }{\lambda_{ps}(B)}$), over $\mathbb R$ is attained at
\[
\beta_2=\frac 12 \frac{- X^2(\tau,G)\big ( \frac{\pi_B}{\lambda_{ps}(B)} \big )^2+ X^2(\tau,B)\big ( \frac{\pi_G}{\lambda_{ps}(G)} \big )^2 }{\big ( \frac{\pi_G}{\lambda_{ps}(G)} \big )^2 - \big ( \frac{\pi_B}{\lambda_{ps}(B)} \big )^2}
\]
and the three cases write
\begin{enumerate}
\item $\beta_2\leq 0$: the best response in this case is $\beta^*(\alpha)=0$.
\item $0 < \beta_2 < \alpha$: the best response is $\beta^*(\alpha)=\beta_2$.
\item $\beta_2 \geq \alpha$: the best response is $\beta^*(\alpha)=\alpha$.
\end{enumerate}
Again, when $\frac{\pi_G }{\lambda_{ps}(G)}= \frac{\pi_B }{\lambda_{ps}(B)}$, case 1) applies.
\fdp{Now, to complete our analysis, we need to determine the best response between the two cases: we need to detail the relation between $\beta_1$ and $\beta_2$. To do so, we can rewrite, for the sake of convenience, }
\[
\beta_1(x)=\frac 12 \frac{\pi_G^2 x^2 X^2(\tau,B) - \pi_B^2 (L+x)^2 X^2(\tau,G)}{\pi_G^2 x^2 - \pi_B^2 (L+x)^2}
\]
where $L=\lambda_{ps}(G)-\lambda_{ps}(B)$ and $x=\lambda_{ps}(B)+\lambda_{pu}$, so that $\lambda(B)=x$ and $\lambda(G)=L+x$. It can be easily shown that
\[
\frac d{dx} \beta_1(x)=\pi_G^2\pi_B^2\frac{L x (L+x) (X^2(\tau,G)-X^2(\tau,B))}{(\pi_G^2 x^2 - \pi_B^2 (L+x)^2)^2}
\]
which gives $\frac d{dx} \beta_1(x)>0$ for $x\geq 0$, with a singularity at
\[
\lambda_{pu}^s=\frac{\pi_B}{\pi_G - \pi_B}(\lambda_{ps}(G)-\lambda_{ps}(B))-\lambda_{ps}(B)
\]
The typical shape of $\beta_1$ is reported in Fig.~\ref{fig:betauno}.
We observe that $\beta_1(\lambda_{pu}=0)=\beta_2$. The asymptotic value for $\lambda_{pu}=\infty$ is
\[
\beta_1(\infty)=\frac 12 \frac{\pi_G^2X^2(\tau,B) - \pi_B^2X^2(\tau,G)}{\pi_G^2 - \pi_B^2}
\]
\fdp{It can be verified that $\beta_1(\lambda_{pu})$ is injective. Hence, the above analysis lets us state that $\beta_1(\infty) \leq \beta_1(0) = \beta_2$, which in turn leads to the following}
\begin{lemma}
For $0\leq \lambda_{pu} < \lambda_{pu}^s$, it holds $\beta_1 \geq \beta_2$, and for $\lambda_{pu} > \lambda_{pu}^s$
it holds $\beta_1 < \beta_2$.
\end{lemma}
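The lemma's ingredients can be checked numerically: the value $\beta_1(\lambda_{pu}=0)=\beta_2$ and the monotonicity of $\beta_1$ below the singularity $\lambda_{pu}^s$. A sketch with hypothetical parameter values, using the change of variable $x=\lambda_{ps}(B)+\lambda_{pu}$ so that $\lambda(B)=x$ and $\lambda(G)=L+x$:

```python
# Hypothetical parameter values for a numerical check of beta_1 and beta_2.
piG, piB = 0.6, 0.4
lamG, lamB = 0.1, 0.01
XG, XB = 100.0, 60.0        # hypothetical final viewcounts X(tau, G), X(tau, B)
L = lamG - lamB

def beta1(lam_pu):
    x = lamB + lam_pu       # x = lambda_ps(B) + lambda_pu
    num = piG ** 2 * x ** 2 * XB ** 2 - piB ** 2 * (L + x) ** 2 * XG ** 2
    den = piG ** 2 * x ** 2 - piB ** 2 * (L + x) ** 2
    return 0.5 * num / den

beta2 = 0.5 * (-XG ** 2 * (piB / lamB) ** 2 + XB ** 2 * (piG / lamG) ** 2) \
        / ((piG / lamG) ** 2 - (piB / lamB) ** 2)

lam_pu_s = piB / (piG - piB) * L - lamB      # singularity of beta_1 (= 0.17 here)
assert abs(beta1(0.0) - beta2) < 1e-6        # beta_1 collapses to beta_2 at lam_pu = 0
assert beta1(0.1) > beta1(0.0)               # increasing below the singularity
```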
Now we can combine the conditions above to derive:
\begin{thm}\label{thm:threshold}
Let $I=[0,\beta_{\tau,B}]$\\
{\noindent i.} If $\lambda_{pu} > \lambda_{pu}^s$, then
\[
W_s=[\beta_1,\beta_2]\cap I
\]
is the set of symmetric Wardrop equilibria for the system.
{\noindent ii.} If $\lambda_{pu} < \lambda_{pu}^s$ then $W_s \subseteq \{0, \beta_{\tau,B}\}$.
\end{thm}
\begin{proof}
\fdp{Case i. follows immediately observing that for $\beta_1\leq \alpha$ the best response is $\beta^*(\alpha)=\alpha$
and for $\beta_2 \geq \alpha$ the best response is $\beta^*(\alpha)=\alpha$: both conditions are satisfied simultaneously
for $\alpha \geq 0$ if and only if $\alpha \in W_s$.\\
Case ii. is proved observing that the conditions for case i fail, so that only extremal cases can hold. In particular, $W_s$ is
not always the empty set: if $\beta_2 \geq 0$, then $\beta_1\leq 0$ so that $\beta^*(0)=0$; the same holds in the opposite case, i.e., if $\beta_1 \geq \alpha=\beta_{\tau,B}$, then $\beta^*(\beta_{\tau,B})=\beta_{\tau,B}$.}\end{proof}
\fdp{The result in Thm.~\ref{thm:threshold} lets us observe a neat phase-transition effect in $\lambda_{pu}$: when the intensity of the views due to the pull mechanism is below the threshold $\lambda_{pu}^s$, only extremal Wardrop equilibria are possible; above that threshold, there can exist a continuum of equilibria where the system can settle.
Let $\mu(\cdot)$ denote the Lebesgue measure on the real line: a sufficient condition is provided in the following
\begin{cor}
$\mu(W_s)>0$ if $\lambda_{pu}>\lambda_{pu}^s$ and $\beta_2\geq 0 >\beta_1$.
\end{cor}
We can observe that $\pi_G < \pi_B$ implies $\beta_2\geq 0$ and $\lambda_{pu}>0>\lambda_{pu}^s$,
so that a stronger sufficient condition than the one just provided in turn becomes: $\pi_G < \pi_B$ and $\beta_1\leq \beta_{\tau,B}$.
}
\section{Related Works}\label{sec:rel}
The analysis of the popularity dynamics of online contents has been the subject of
several recent papers. The work \cite{Gill07youtubetraffic} provides an analysis of the YouTube system,
with a comprehensive view of the characteristics of the generated traffic.
In \cite{ChatFirstStep} the authors address the relation between metrics used
to evaluate popularity. They observed that viewcount is strongly correlated with
several such metrics, e.g., the number of comments, ratings, or favorites. However, none
of these metrics correlates with the average rating. In this paper we confine our analysis
to viewcount as the metric of interest. \cite{SzaboPop} focuses on the core problem of predicting
popularity, namely the viewcount, based on early measurements of user access. Based
on measurements of YouTube videos and Digg stories, the authors observe that contents
whose viewcount increases quickly in the early stages typically become popular later on.
The proposed empirical model, i.e., $\log N(t_r)=\log N(t_0)+\lambda_0(t_r,t_0)$, where $\lambda_0(t_r,t_0)$
is a random multiplicative noise and $N(t_r),N(t_0)$ are the viewcounts at times $t_r$ and $t_0$,
closely resembles the exponential model adopted in this work.
In \cite{RatkiewiczBurstyPoP} the authors propose a model accounting for the change of ranking
induced by UGC online platforms. The model is meant to overcome the limitations of preferential
attachment models, which cannot explain the bursty growth of content popularity;
such bursts are in turn claimed to be an inherent property of online platforms. The authors relate
bursty growth spikes to the way such systems expose popular contents to users and perform
re-ranking of existing contents, causing positive feedback loops.
The paper \cite{ChaTON} provides an analysis of power-law behavior for the rank distribution
of contents; the distribution of the most watched videos is found to be heavily skewed towards
the most popular ones.
Threshold models similar to those studied in this work were described by Granovetter~\cite{GranovetterAJS1978}
in social science. The assumption is that individuals make binary decisions (in our framework, to view or not to view a content)
according to some static internal threshold on the number of other participants. A generalization based on threshold
distributions is addressed in \cite{RolfeSocNet}.
\section{Conclusions}\label{sec:concl}
In this paper we characterized the access to online contents
by game-theoretic means, leveraging the concept of Wardrop equilibrium. We deduced the
structure of equilibria in systems where users adopt threshold-type policies to select online contents. We explored several cases:
when the plain viewcount is the metric, when the viewcount trend is, and when both are combined into a product metric. We explored
the case of a fixed time horizon dictated by the content lifetime, and we considered a case where
the time horizon is not fixed. Finally, we explored the impact of side information available
to users.
In all such cases we deduced the presence of a continuum of equilibria, which has potential implications for the
design and control of platforms for online content access. In future work, in particular, we are exploring the dynamics associated
with such sets of interior rest points, when they exist, and comparing them with typical dynamics of online contents.
However, not only equilibria are relevant: as shown in \cite{GraSoongJEBO1986}, threshold strategies,
under specific conditions, may well lead the system to be asymptotically unstable; system trajectories may then consist of cycles that
can evolve into chaotic dynamics, essentially indistinguishable from random noise.
\bibliographystyle{IEEEtran}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 7,818 |
{"url":"https:\/\/www.semanticscholar.org\/paper\/Construction-of-%24%5Cmu%24-Limit-Sets-Boyer-Delacourt\/ee2507ac021695cc93dce7bd224e8bee31517948","text":"# Construction of $\\mu$-Limit Sets\n\n\u2022 Published 2010\n\n#### Abstract\n\nThe \u00b5-limit set of a cellular automaton is a subshift whose forbidden patterns are exactly those, whose probabilities tend to zero as time tends to infinity. In this article, for a given subshift in a large class of subshifts, we propose the construction of a cellular automaton which realizes this subshift as \u00b5-limit set where \u00b5 is the uniform Bernoulli\u2026\u00a0(More)\n\n### Cite this paper\n\n@inproceedings{Boyer2010ConstructionO, title={Construction of \\$\\mu\\$-Limit Sets}, author={Laurent Boyer and Martin Delacourt and Mathieu Sablik}, year={2010} }","date":"2018-02-22 01:58:24","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6461697220802307, \"perplexity\": 3062.2214365620366}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-09\/segments\/1518891813832.23\/warc\/CC-MAIN-20180222002257-20180222022257-00792.warc.gz\"}"} | null | null |
Tableware : Pour Spouts Walmart Restaurant Style Margarita Glasses Home Goods Margarita Glasses Plastic Punch Bowl Walmart. Contemporary Margarita Glasses. Sangria Price Walmart. Margarita Mix Walmart Canada. Bottle Pourer Walmart Canada.
By B15MiLl4hcodingcolor. Tableware. At Tuesday, January 29th 2019, 22:13:52 PM. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,813 |
{"url":"http:\/\/clay6.com\/qa\/29594\/a-parallel-plate-capacitor-of-area-50-cm-2-and-plate-separation-3-mm-is-cha","text":"# A parallel plate capacitor of area $50\\: cm^2$ and plate separation $3\\: mm$ is charged initially to $80\\: \\mu C$. Due to a radioactive source nearby, the medium between the plates gets slightly conducting and the plate loses charge initially at the rate of $1.5 \\times 10^{-8}C\/s$. What is the magnitude and direction of displacement current?\n$\\begin {array} {1 1} (a)\\;\\text{Zero} \\\\ (b)\\;1.5 \\times 10^{-10}A\\text{ and perpendicular to conduction current} \\\\ (c)\\;1.5 \\times 10^{-10} \\text{ and same direction as conduction current} \\\\ (d)\\;1.5 \\times 10^{-8} \\text{ and opposite direction to that of conduction current} \\end {array}$","date":"2020-09-27 05:00:28","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7839258313179016, \"perplexity\": 183.73492693727712}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600400250241.72\/warc\/CC-MAIN-20200927023329-20200927053329-00055.warc.gz\"}"} | null | null |
Q: Fix PHP MySQL injection for login I have a PHP script that signs a user into my site when they enter the correct login credentials. However, I noticed that it is easy to inject this login by entering anything' OR 'x'='x into the password box.
How can I stop this from happening?
$query = "SELECT * FROM sm_editors WHERE email = '".$_POST['email']."' AND password = '".$_POST['password']."' AND user_type != 'reader-for-approval'";
A: You first need to sanitize the inputs to prevent this.
The function mysql_real_escape_string escapes special characters (such as quotes) in the input so they cannot terminate the string literal in your query; note that it escapes characters rather than removing them.
Take a look at the What's the best method for sanitizing user input with PHP? question for more information.
A: You can use mysql_real_escape_string function on data you pass to your query or use prepared statements and stored procedures instead of old syntax.
A: $query = "SELECT * FROM sm_editors WHERE email = '".mysql_real_escape_string($_POST['email'])."' AND password = '".mysql_real_escape_string($_POST['password'])."' AND user_type != 'reader-for-approval'";
A: First, you can use mysql_real_escape_string to escape the string.
A better way, however, is to prepare your SQL statement before executing it. Have a look at the PHP PDO MySQL connector, which provides methods to prepare your statement.
$s = $pdo->prepare('SELECT * FROM table WHERE field1 = :value');
$s->execute(array(':value' => $value));
Have a look at http://www.php.net/manual/de/pdo.prepared-statements.php
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,520 |
Q: How to resolve kafkaServices.KafkaServices() is not a constructor error in node.js? I have the following js class KafkaServices. In server.js I would like to call a method from KafkaServices to create a Kafka Connection. In server.js this is how I'm currently accomplishing this:
/**
* Event listener for HTTP server "listening" event.
*/
var kafkaServices = require('./services/kafka-services');
const kafka = require('kafka-node');
function onListening() {
var addr = server.address();
var bind = typeof addr === 'string'
? 'pipe ' + addr
: 'port ' + addr.port;
console.log('Listening on ' + bind);
logInfo({message: `Server is Listening on http://localhost:${addr.port}.`, scope: 'Server'});
kafkaTool = new kafkaServices.KafkaServices();
// Connect to kafka
kafkaTool.connect();
kafkaTool.error();
}
My KafkaServices is also below. I'm using module.exports = KafkaServices; to export the class and be able to use it elsewhere.
var express = require('express');
var kafka = require('kafka-node');
class KafkaServices{
constructor(){
this.Producer = kafka.Producer,
this.client = new kafka.Client();
this.producer = new Producer(client);
}
connect(){
// Create a connection
this.producer.on('ready', function() {
console.log('Producer is ready');
})
}
error(){
this.producer.on('error', function(err) {
console.log('Producer has the following error');
console.log(err);
})
}
publish(){
//Publish
this.producer.send(payload, function (err, data){})
}
}
module.exports = KafkaServices;
When I try to run this I get the following error message:
PS C:\Users\ENV\Projects\tool> npm run start
> settings-tool@1.0.0 start C:\Users\ENV\Projects\tool
> node ./src/server.js
Listening on port 3999
{"timestamp":"2020-07-02T18:22:27.909Z","level":"info","app_name":"tool","message":"Server is Listening on http://localhost:3999.","scope":"Server","tag":"tool"}
{"error":{},"level":"error","message":"uncaughtException: kafkaServices.KafkaServices is not a constructor\nTypeError: kafkaServices.KafkaServices is not a constructor\n at Server.onListening (C:\\Users\\ENV\\Projects\\tool\\src\\server.js:149:17)\n at Server.emit (events.js:198:13)\n at emitListeningNT (net.js:1313:10)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)\n at startup (internal/bootstrap/node.js:283:19)\n at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)","stack":"TypeError: kafkaServices.KafkaServices is not a constructor\n at Server.onListening (C:\\Users\\ENV\\Projects\\tool\\src\\server.js:149:17)\n at Server.emit (events.js:198:13)\n at emitListeningNT (net.js:1313:10)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)\n at startup (internal/bootstrap/node.js:283:19)\n at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)","exception":true,"date":"Thu Jul 02 2020 14:22:27 GMT-0400 (Eastern Daylight Time)","process":{"pid":22460,"uid":null,"gid":null,"cwd":"C:\\Users\\ENV\\Projects\\tool","execPath":"C:\\Program Files\\nodejs\\node.exe","version":"v10.16.3","argv":["C:\\Program 
Files\\nodejs\\node.exe","C:\\Users\\ENV\\Projects\\tool\\src\\server.js"],"memoryUsage":{"rss":40157184,"heapTotal":25980928,"heapUsed":16159328,"external":134547}},"os":{"loadavg":[0,0,0],"uptime":106081},"trace":[{"column":17,"file":"C:\\Users\\ENV\\Projects\\tool\\src\\server.js","function":"Server.onListening","line":149,"method":"onListening","native":false},{"column":13,"file":"events.js","function":"Server.emit","line":198,"method":"emit","native":false},{"column":10,"file":"net.js","function":"emitListeningNT","line":1313,"method":null,"native":false},{"column":19,"file":"internal/process/next_tick.js","function":"process._tickCallback","line":63,"method":"_tickCallback","native":false},{"column":11,"file":"internal/modules/cjs/loader.js","function":"Module.runMain","line":834,"method":"runMain","native":false},{"column":19,"file":"internal/bootstrap/node.js","function":"startup","line":283,"method":null,"native":false},{"column":3,"file":"internal/bootstrap/node.js","function":"bootstrapNodeJSCore","line":622,"method":null,"native":false}],"tag":"tool"}
{"timestamp": "2020-07-02T18:22:27.959Z","message":"Uncaught Exception.","scope":"ServerError","error":"TypeError. kafkaServices.KafkaServices is not a constructor", "stack":"TypeError: kafkaServices.KafkaServices is not a constructor
at Server.onListening (C:\Users\ENV\Projects\tool\src\server.js:149:17)
at Server.emit (events.js:198:13)
at emitListeningNT (net.js:1313:10)
at process._tickCallback (internal/process/next_tick.js:63:19)
at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)","origin":"undefined"}
{"timestamp":"2020-07-02T18:22:27.961Z","level":"error","app_name":"tool","message":"uncaughtException Error Message: kafkaServices.KafkaServices is not a constructor","label":{"label":"ServerError","scope":"ServerError"},"error_stack":"TypeError: kafkaServices.KafkaServices is not a constructor\n at Server.onListening (C:\\Users\\ENV\\Projects\\tool\\src\\server.js:149:17)\n at Server.emit (events.js:198:13)\n at emitListeningNT (net.js:1313:10)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)\n at startup (internal/bootstrap/node.js:283:19)\n at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)","tag":"tool"}
{"timestamp":"2020-07-02T18:22:30.943Z","level":"info","app_name":"tool","message":"Process exit event with code 1.","scope":"Server","tag":"tool"}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! settings-tool@1.0.0 start: `node ./src/server.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the settings-tool@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\ENV\AppData\Roaming\npm-cache\_logs\2020-07-02T18_22_30_969Z-debug.log
I'm not entirely sure why I'm getting this error, since KafkaServices has a constructor. Any advice would be appreciated! Thanks
A: KafkaServices constructor should be the following:
constructor(){
    this.Producer = kafka.Producer;
    this.client = new kafka.KafkaClient();
    this.producer = new kafka.Producer(this.client);
}
Also, changing server.js as follows resolved this specific error.
var KafkaServices = require('./services/kafka-services');
const kafka = require('kafka-node');
function onListening() {
var addr = server.address();
var bind = typeof addr === 'string'
? 'pipe ' + addr
: 'port ' + addr.port;
console.log('Listening on ' + bind);
logInfo({message: `Server is Listening on http://localhost:${addr.port}.`, scope: 'Server'});
kafkaTool = new KafkaServices();
// Connect to kafka
kafkaTool.connect();
kafkaTool.error();
}
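The root cause generalizes beyond kafka-node: `module.exports = KafkaServices` exports the class itself, so `require()` returns the class, not an object with a `KafkaServices` property. A dependency-free sketch of the mismatch (the class and property names here are illustrative stand-ins):

```javascript
// What kafka-services.js effectively exports with `module.exports = KafkaServices`:
class KafkaServices {
  constructor() { this.connected = false; }
  connect() { this.connected = true; }
}
const required = KafkaServices; // what `require('./services/kafka-services')` hands back

// Broken: the required value is the class itself, so it has no
// `KafkaServices` property -- `new required.KafkaServices()` throws a TypeError.
let threw = false;
try { new required.KafkaServices(); } catch (e) { threw = e instanceof TypeError; }
console.log(threw); // true

// Working: construct the required value directly.
const tool = new required();
tool.connect();
console.log(tool.connected); // true
```

The alternative is to keep `new kafkaServices.KafkaServices()` at the call site and export a namespace object instead, e.g. `module.exports = { KafkaServices };` — either side can change, but they must agree.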
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,417 |
Q: Creating a basic Input Form in Django I am trying to create a simple form in Django, but the input form is not showing in the HTML and no error appears that I can use to track down the problem.
Here is the model:
class Log(models.Model):
log_weight = models.FloatField(validators=[MinValueValidator(0)],blank=True, null=True)
log_repetitions = models.IntegerField(validators=[MinValueValidator(1)],blank=True, null=True)
class LogForm(forms.Form):
log_weight = forms.IntegerField()
log_repetitions = forms.IntegerField()
class Meta:
model = Log
fields = ['log_weight', 'log_repetitions']
Here is the views:
class workout_details(DetailView):
model = Workout
template_name = 'my_gym/start_workout.html'
context_object_name = 'workout'
def get_context_data(self, **kwargs):
exercises = Exercise.objects.filter(workout_id=self.object)
context = super().get_context_data(**kwargs)
context['exercises'] = exercises
return context
def addlog(request, id):
url = request.META.get('HTTP_REFERER') # get last url
# return HttpResponse(url)
if request.method == 'POST': # check post
form = LogForm(request.POST)
if form.is_valid():
data = Log() # create relation with model
data.log_repetitions = form.cleaned_data['log_repetitions']
data.log_weight = form.cleaned_data['log_weight']
data.workout_id = id
data.save() # save data to table
return HttpResponseRedirect(url)
return HttpResponseRedirect(url)
Here is the template:
<form
class="review-form" action="{% url 'my_gym:addlog' workout.id %}" method="post">
{% csrf_token %}
{{ form }}
</form>
Here is the url:
urlpatterns = [
path('', home.as_view(), name='home'),
path('workout/<int:pk>/', workout_details.as_view(), name='workout'),
path('workout/addlog/<int:pk>', addlog, name='addlog'),
]
My question:
What is the reason that the form is not showing on the details page? How can I fix it?
A: You forgot to pass the form to the view context. Add the following to your DetailView get_context_data method:
context['form'] = LogForm()
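The failure is silent because Django's template engine renders a missing context variable as an empty string instead of raising. A dependency-free sketch of that lookup behaviour (the `render` helper below is a crude stand-in for the real engine, not Django code):

```python
def render(variable_names, context):
    """Stand-in for template variable lookup: missing names vanish silently."""
    return "".join(str(context.get(name, "")) for name in variable_names)

# Context built by the original get_context_data(): no 'form' key.
context = {"workout": "leg day", "exercises": ["squat", "deadlift"]}
print(repr(render(["form"], context)))  # '' -- {{ form }} shows nothing, no error

# After adding context['form'] = LogForm() (represented here by a plain string):
context["form"] = "<input name='log_weight'>"
print(repr(render(["form"], context)))  # "<input name='log_weight'>"
```

This is why the page renders cleanly with no traceback: `{{ form }}` simply evaluates to nothing until the key is supplied.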
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,172 |
\section{Introduction}\label{intro}
One of the cornerstones in the asymptotic theory of operators is the Katznelson-Tzafriri theorem \cite[Theorem~1]{KT86}, which states the following.
\begin{thm}\label{KT_thm}
Let $X$ be a complex Banach space and suppose that $T\in\mathcal{B}(X)$ is power-bounded. Then
\begin{equation}\label{eq:KT}
\lim_{n\to\infty}\|T^n(I-T)\|=0
\end{equation}
if and only if $\sigma(T)\cap\mathbb{T}\subset\{1\}$.
\end{thm}
Here $\mathcal{B}(X)$ denotes the algebra of bounded linear operators on a complex Banach space $X$, $\sigma(T)$ denotes the \emph{spectrum} of the operator $T\in\mathcal{B}(X)$, and an operator $T\in\mathcal{B}(X)$ is said to be \emph{power-bounded} if $\sup_{n\ge0}\|T^n\|<\infty$. Moreover, $\mathbb{T}$ stands for the unit circle $\{\lambda\in\mathbb{C}:|\lambda|=1\}$.
Limits of the type appearing in \eqref{eq:KT} play an important role for instance in the theory of iterative methods (see \cite{Ne93}), so it is natural to ask at what \emph{speed} convergence takes place. If $\sigma(T)\cap\mathbb{T}=\emptyset$ the decay is at least exponential, with the rate determined by the spectral radius of $T$, so the real interest is in the non-trivial case where $\sigma(T)\cap\mathbb{T}=\{1\}$. Given a continuous non-increasing function $m:(0,\pi]\to[1,\infty)$ such that $\|R(\mathrm{e}^{\mathrm{i}\theta},T)\|\leq m(|\theta|)$ for $0<|\theta|\leq\pi$, it is shown in \cite[Theorem~2.11]{Se2} that, for any $c\in(0,1)$,
$$\|T^n(I-T)\|=O\big(m_\mathrm{log}^{-1}(cn)\big),\quad n\to\infty,$$
where $m_\mathrm{log}^{-1}$ is the inverse function of the map $m_\mathrm{log}$ defined by
\begin{equation}\label{mlog}
m_\mathrm{log}(\varepsilon)=m(\varepsilon)\log\left(1+\frac{m(\varepsilon)}{\varepsilon}\right),\quad0<\varepsilon\leq\pi,
\end{equation}
and where the statement $x_n=O(y_n)$, $n\to\infty$, for two sequences $(x_n)$, $(y_n)$ of non-negative terms, means that there exists a constant $C>0$ such that $x_n\le C y_n$ for all sufficiently large $n\ge0$. Moreover, this result is optimal in an important special case; see Remark~\ref*{KT_rem}\eqref{opt_rem} below.
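To make the rate concrete, consider the common case of polynomial resolvent growth; the following is a routine computation (with multiplicative constants absorbed into $\simeq$):

```latex
If $m(\varepsilon)=C\varepsilon^{-\alpha}$ for some $\alpha>0$ and $C\ge1$, then
\[
m_\mathrm{log}(\varepsilon)=C\varepsilon^{-\alpha}\log\left(1+C\varepsilon^{-\alpha-1}\right)
\simeq \varepsilon^{-\alpha}\log(1/\varepsilon),\quad \varepsilon\to0+,
\]
and inverting yields
\[
m_\mathrm{log}^{-1}(cn)\simeq\left(\frac{\log n}{n}\right)^{1/\alpha},\quad n\to\infty,
\]
so that $\|T^n(I-T)\|=O\big((\log n/n)^{1/\alpha}\big)$ in this case.
```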
The main new result of this paper, Theorem~\ref{Ing}, is a Tauberian theorem for sequences. The result is formulated for bounded vector-valued sequences but, to the knowledge of the author, is new even in the scalar-valued case. It can be viewed as a discrete analogue of Ingham's classical Tauberian theorem for functions; however, it includes an estimate on the rate of decay. This is achieved by adapting a new technique developed recently in the setting of $C_0$-semigroups in \cite{CS} and going back to \cite{BCT}. The result is then used, in Theorem~\ref{KT_quant}, to give a new proof of the quantified version of Theorem~\ref{KT_thm} discussed above. Further related results in this general area may be found in \cite{AOFR87}, \cite{Ba94b}, \cite{Du08a}, \cite{KT86}, \cite{Le14}, \cite{Ne11}, \cite{Ra88}, \cite{Rie16} and the references they contain.
\section{Main results}\label{ingham}
\subsection{Preliminaries}
Let $X$ be a complex Banach space and write $C_0(-\pi,\pi)$ for the set of continuous functions $\psi:[-\pi,\pi]\to\mathbb{C}$ which vanish in a neighbourhood of zero and satisfy $\psi(-\pi)=\psi(\pi)$. Further let $L^1_\mathrm{loc}(\mathbb{T}\backslash\{1\};X)$ denote the set of functions $F:\mathbb{T}\backslash\{1\}\to X$ such that the map $\theta\mapsto \psi(\theta)F(\mathrm{e}^{\mathrm{i}\theta})$, interpreted as taking the value zero when $\psi$ does, lies in $L^1(-\pi,\pi;X)$ for all $\psi\in C_0(-\pi,\pi)$. Let $\mathbb{E}=\{\lambda\in\mathbb{C}:|\lambda|>1\}$, the exterior of the closed unit disc. Given a holomorphic function $G:\mathbb{E}\to X$ and given $F\in L^1_\mathrm{loc}(\mathbb{T}\backslash\{1\};X)$, $F$ will be said to be a \emph{boundary function} for $G$ if
\begin{equation}\label{extension}
\lim_{r\to1+}\int_{-\pi}^\pi \psi(\theta)G\big(r\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta=\int_{-\pi}^\pi \psi(\theta)F\big(\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta
\end{equation}
for all $\psi\in C_0(-\pi,\pi)$. For $k\ge1$, let $C^k(\mathbb{T}\backslash\{1\};X)$ denote the set of functions $F:\mathbb{T}\backslash\{1\}\to X$ which are $k$-times continuously differentiable, with $\mathbb{T}\backslash\{1\}$ viewed as a one-dimensional manifold, and let $C^\infty(\mathbb{T}\backslash\{1\};X)=\bigcap_{k\ge1}C^k(\mathbb{T}\backslash\{1\};X)$.
\subsection{A quantified Tauberian theorem}
Theorem~\ref{Ing} below is the main result of this paper and can be viewed as a discrete analogue of Ingham's Tauberian theorem for functions; see \cite{Ing33} and also \cite{Ka34}. In the statement of the result, given $x\in\ell^\infty(\mathbb{Z}_+;X)$, $G_x:\mathbb{E}\to X$ denotes the holomorphic function given by
$$G_x(\lambda)=\sum_{n\geq0}\frac{x_n}{\lambda^{n+1}},\quad |\lambda|>1.$$
The theorem shows that if $x\in\ell^\infty(\mathbb{Z}_+;X)$ has uniformly bounded partial sums and if $G_x$ possesses a boundary function $F_x$, then $x\in c_0(\mathbb{Z}_+;X)$. Moreover, the result gives an estimate on the rate of decay of $\|x_n\|$ as $n\to\infty$, the quality of which depends on the smoothness and the rate of growth near the point $1$ of $F_x$. The proof uses a technique which goes back to \cite{BCT} and has been extended recently in \cite{CS}. One advantage of this approach over the contour integral method used to obtain \cite[Theorem~2.11]{Se2} is that it extends to the case in which $F_x$ is only finitely often continuously differentiable. Given a continuous non-increasing function $m:(0,\pi]\to[1,\infty)$ and $k\ge1$, define the function $m_k:(0,\pi]\to(0,\infty)$ by
\begin{equation}\label{m_k}
m_k(\varepsilon)=m(\varepsilon)\left(\frac{m(\varepsilon)}{\varepsilon}\right)^{1/k},
\end{equation}
noting that, for each $k\ge1$, $m_k$ maps bijectively onto its range.
\begin{thm}\label{Ing}
Let $X$ be a complex Banach space and let $x\in \ell^\infty(\mathbb{Z}_+;X)$ be such that
\begin{equation}\label{bdd}
\sup_{n\geq0}\bigg\|\sum_{k=0}^n x_k\bigg\|<\infty.
\end{equation}
If $G_x$ admits a boundary function $F_x\in \smash{L^1_\mathrm{loc}}(\mathbb{T}\backslash\{1\};X)$, then $x\in c_0(\mathbb{Z}_+;X)$.
Moreover, given a continuous non-increasing function $m:(0,\pi]\to[1,\infty)$, the following hold.
\begin{enumerate}[(a)]
\item\label{Ck_case} Suppose that $F_x\in C^k(\mathbb{T}\backslash\{1\};X)$ for some $k\ge1$ and that
\begin{equation}\label{Ck_dom_fun}
\| F_x^{(j)}(\mathrm{e}^{\mathrm{i}\theta})\|\le C |\theta|^{\ell-j}m(|\theta|)^{\ell+1},\quad 0<|\theta|\le\pi,\;0\le j\le\ell\le k,
\end{equation}
for some constant $C>0$. Then, for any $c>0$,
\begin{equation}\label{bound_Ck}
\|x_n\|=O\left(m_k^{-1}\big(cn\big)\right),\quad n\to\infty,
\end{equation}
where $m_k^{-1}$ is the inverse function of the map $m_k$ defined in \eqref{m_k}.
\item\label{hol_case} Suppose that $F_x\in C^\infty(\mathbb{T}\backslash\{1\};X)$ and that
\begin{equation}\label{dom_fun}
\| F_x^{(j)}(\mathrm{e}^{\mathrm{i}\theta})\|\le C j!|\theta| m(|\theta|)^{j+1},\quad 0<|\theta|\le\pi,\;j\ge0,
\end{equation}
for some constant $C>0$. Then, for any $c\in(0,1)$,
\begin{equation}\label{bound}
\|x_n\|=O\left(m_\mathrm{log}^{-1}(cn)+\frac{1}{n}\right),\quad n\to\infty,
\end{equation}
where $m_\mathrm{log}^{-1}$ is the inverse function of the map $m_\mathrm{log}$ defined in \eqref{mlog}.
\end{enumerate}
\end{thm}
\begin{rem}\label{rem0}
\begin{enumerate}[(a)]
\item Neither condition \eqref{bdd} nor the assumption that $G_x$ admits a boundary function can be dropped, even in the scalar-valued case, as can be seen by considering the sequences $x=(1,1,1,\dotsc)$ and $x=(+1,-1,+1,-1,\dotsc)$, respectively.
\item Note that if $m(\varepsilon)\ge c/\varepsilon$ for some $c>0$, then \eqref{Ck_dom_fun} is satisfied if
$$\| F_x^{(j)}(\mathrm{e}^{\mathrm{i}\theta})\|\le C m(|\theta|)^{j+1},\quad 0<|\theta|\le\pi,\;0\le j\le k,$$
for some constant $C>0$.
\item
Suppose that $m:(0,\pi]\to[1,\infty)$ is as in Theorem~\ref{Ing}. If $G_x$ has a holomorphic extension, denoted also by $G_x$, to a region containing
$$\Omega_{m,\theta}= \left\{\lambda\in\mathbb{C}:|\lambda-\mathrm{e}^{\mathrm{i}\theta}|\le\frac{1}{m(|\theta|)}\right\}$$
for $0<|\theta|\le\pi$ and if
\begin{equation*}\label{O_bd}
\|G_x(\lambda)\|\le C|\theta|m(|\theta|),\quad \lambda\in\Omega_{m,\theta},\;0<|\theta|\le\pi,
\end{equation*}
for some constant $C>0$, then a simple estimate using Cauchy's integral formula shows that \eqref{dom_fun} holds for the restriction $F_x$ of $G_x$ to $\mathbb{T}\backslash\{1\}$. This is analogous to the results for Laplace transforms in \cite{BD08} and \cite{Ma11}. Conversely, if $F_x\in C^\infty(\mathbb{T}\backslash\{1\};X)$ and \eqref{dom_fun} holds, then $F_x$ extends holomorphically to the region $\Omega_m$ given by
\begin{equation*}\label{Om}
\Omega_m= \left\{\lambda\in\mathbb{C}:|\lambda-\mathrm{e}^{\mathrm{i}\theta}|<\frac{1}{m(|\theta|)},\;0<|\theta|\le\pi\right\}.
\end{equation*}
Furthermore, if $G:\mathbb{E}\to X$ is a holomorphic function which admits a boundary function $F_x\in C^\infty(\mathbb{T}\backslash\{1\};X)$ satisfying \eqref{dom_fun}, then $G$ has a holomorphic extension which agrees with that of $F_x$ on $\Omega_m$. This follows from a standard Cayley transform argument combined with the `edge-of-the-wedge theorem'; see for instance \cite[\S2 Theorem~B]{Ru71}.
\end{enumerate}
\end{rem}
\begin{proof}[Proof of Theorem~\ref{Ing}]
Let $\psi:[-\pi,\pi]\to\mathbb{R}$ be a smooth function such that $\psi(\theta)=0$ for $|\theta|\le1$, $0\le\psi(\theta)\le1$ for $1\le|\theta|\le2$ and $\psi(\theta)=1$ for $2\le|\theta|\le\pi.$ For $\varepsilon\in(0,\pi/2]$, let $\psi_\varepsilon, \varphi_\varepsilon:[-\pi,\pi]\to\mathbb{R}$ be given by $\psi_\varepsilon(\theta)=\psi(\theta/\varepsilon)$ and $\varphi_\varepsilon(\theta)=1-\psi_\varepsilon(\theta),$ $-\pi\le\theta\le\pi$. Moreover, for $n\in\mathbb{Z}$, let
$$y^\varepsilon_n=\frac{1}{2\pi}\int_{-\pi}^\pi \mathrm{e}^{\mathrm{i} n\theta}\psi_\varepsilon(\theta)\,\mathrm{d}\theta\quad \mbox{and}\quad z^\varepsilon_n=\frac{1}{2\pi}\int_{-\pi}^\pi \mathrm{e}^{\mathrm{i} n\theta}\varphi_\varepsilon(\theta)\,\mathrm{d}\theta.$$
Then $y^\varepsilon_0=1-z_0^\varepsilon$ and $y^\varepsilon_n=-z_n^\varepsilon$ for $n\ne0$, and a simple calculation using integration by parts shows that $y^\varepsilon,z^\varepsilon\in\ell^1(\mathbb{Z})$. Let $x^\varepsilon\in\ell^\infty(\mathbb{Z};X)$ be given by $x^{\varepsilon}=x*y^{\varepsilon}$, so that $\smash{x^{\varepsilon}_n=\sum_{j\geq0}x_j y^{\varepsilon}_{n-j}}$ for $n\in\mathbb{Z}.$ Then, setting $\smash{s_n=\sum_{j=0}^n x_j}$ for $n\ge0$,
\begin{equation}\label{x-z}
x_n-x_n^\varepsilon=(x*z^\varepsilon)_n=\sum_{j\ge0}s_j\big(z_{n-j}^\varepsilon-z_{n-j-1}^\varepsilon\big),\quad n\ge0.
\end{equation}
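For completeness, we note that the identity \eqref{x-z} is obtained by summation by parts: writing $x_j=s_j-s_{j-1}$ with the convention $s_{-1}=0$, and using that $z^\varepsilon\in\ell^1(\mathbb{Z})$ while the sequence $(s_j)_{j\ge0}$ is bounded, one may rearrange the absolutely convergent sums to obtain
$$\sum_{j\ge0}x_j z^\varepsilon_{n-j}=\sum_{j\ge0}\big(s_j-s_{j-1}\big)z^\varepsilon_{n-j}=\sum_{j\ge0}s_j\big(z^\varepsilon_{n-j}-z^\varepsilon_{n-j-1}\big),\quad n\ge0.$$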
Since $\varphi_\varepsilon(\theta)=0$ for $2\varepsilon\le|\theta|\le\pi$,
\begin{equation}\label{small}
|z_n^\varepsilon-z_{n-1}^\varepsilon|=\left|\frac{1}{2\pi}\int_{-\pi}^\pi \mathrm{e}^{\mathrm{i} n\theta}\big(1-\mathrm{e}^{-\mathrm{i}\theta}\big)\varphi_\varepsilon(\theta)\,\mathrm{d}\theta\right|\lesssim \int_{-2\varepsilon}^{2\varepsilon}|\theta|\,\mathrm{d}\theta\lesssim\varepsilon^2
\end{equation}
for all $n\in\mathbb{Z}$. Here and in what follows the statement $p\lesssim q$ for real-valued quantities $p$ and $q$ means that $p\leq Cq$ for some number $C>0$ which is independent of all the parameters that are free to vary, in this case of $\varepsilon$ and $n$. Similarly, for $n\ne0$, integrating by parts twice gives
\begin{equation}\label{big}
|z_n^\varepsilon-z_{n-1}^\varepsilon|=\left|\frac{1}{2\pi n^2}\int_{-\pi}^\pi \mathrm{e}^{\mathrm{i} n\theta}\frac{\mathrm{d}^2}{\mathrm{d}\theta^2}\Big(\big(1-\mathrm{e}^{-\mathrm{i}\theta}\big)\varphi_\varepsilon(\theta)\Big)\,\mathrm{d}\theta\right|\lesssim \frac{1}{n^2}.
\end{equation}
For $n\ge0$ and $\varepsilon\in(0,\pi/2]$, let $\smash{P_{n,\varepsilon}=\{j\ge0:|j-n|\le\frac{1}{\varepsilon}\}}$ and $Q_{n,\varepsilon}=\{j\ge0:|j-n|>\smash{\frac{1}{\varepsilon}}\}$. Using \eqref{small} and \eqref{big} in \eqref{x-z}, together with the fact that $s\in\ell^\infty(\mathbb{Z}_+;X)$ by assumption \eqref{bdd}, it follows that
\begin{equation}\label{x-xe}
\|x_n-x_n^\varepsilon\|\lesssim \sum_{j\in P_{n,\varepsilon}}\varepsilon^2+\sum_{j\in Q_{n,\varepsilon}}\frac{1}{(n-j)^2} \lesssim\varepsilon,\quad n\ge0.
\end{equation}
Now, by the dominated convergence theorem, Fubini's theorem and \eqref{extension},
\begin{equation}\label{RL}
\begin{aligned}
x^{\varepsilon}_n&=\lim_{r\to1+}\sum_{j\geq0}\frac{x_j}{r^{j+1}}y^{\varepsilon}_{n-j}\\
&=\lim_{r\to1+}\frac{1}{2\pi}\sum_{j\geq0}\int_{-\pi}^\pi \frac{x_j}{r^{j+1}}\mathrm{e}^{\mathrm{i}(n-j)\theta}\psi_\varepsilon(\theta)\,\mathrm{d}\theta \\
&=\lim_{r\to1+}\frac{1}{2\pi}\int_{-\pi}^\pi\mathrm{e}^{\mathrm{i}(n+1)\theta}\psi_\varepsilon(\theta)G_x\big(r\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta\\
&=\frac{1}{2\pi}\int_{-\pi}^\pi\mathrm{e}^{\mathrm{i}(n+1)\theta}\psi_\varepsilon(\theta)F_x\big(\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta
\end{aligned}
\end{equation}
for all $n\in\mathbb{Z}$ and $\varepsilon\in(0,\pi/2]$. Hence $x^\varepsilon\in c_0(\mathbb{Z};X)$ for each $\varepsilon\in(0,\pi/2]$ by the Riemann-Lebesgue lemma, and it follows from \eqref{x-xe} that $x\in c_0(\mathbb{Z}_+;X)$.
Suppose $F_x\in \smash{C^k}(\mathbb{T}\backslash\{1\};X)$ for some $k\ge1$. Integrating by parts $k$ times in \eqref{RL} and estimating crudely by means of \eqref{Ck_dom_fun} gives
\begin{equation}\label{xe}
\|x_n^\varepsilon\|\lesssim \frac{1}{n^k}\sum_{j=0}^k m(\varepsilon)^{j+1}\lesssim \frac{m(\varepsilon)^{k+1}}{n^k}
\end{equation}
for all $n\ge1$ and all $\varepsilon\in(0,\pi/2]$. Given $c>0$ and $n\ge1$ sufficiently large, let $\varepsilon_n\in (0,\pi/2]$ be given by $\smash{\varepsilon_n=m_k^{-1}(cn)}$. Since $m_k(\varepsilon)^k=m(\varepsilon)^{k+1}/\varepsilon$ by \eqref{m_k}, this choice gives $m(\varepsilon_n)^{k+1}/n^k=c^k\varepsilon_n$, and the estimate \eqref{bound_Ck} follows from \eqref{x-xe} and \eqref{xe} on setting $\varepsilon=\varepsilon_{n}$ for sufficiently large $n\ge1$.
Now suppose that $F_x\in C^\infty(\mathbb{T}\backslash\{1\};X)$. In order to obtain the estimate \eqref{bound}, it is necessary to make explicit choices of the functions $\psi_\varepsilon,\varphi_\varepsilon:[-\pi,\pi]\to\mathbb{R}$ and hence of the sequences $y^\varepsilon,z^\varepsilon\in\ell^1(\mathbb{Z})$ of their Fourier coefficients. Thus, given $\varepsilon\in(0,\pi/2]$, let $y^\varepsilon\in \ell^1(\mathbb{Z})$ be given by $\smash{y_0^\varepsilon=1-\frac{3\varepsilon}{2\pi}}$ and
$$y^{\varepsilon}_n=\frac{\cos(2n\varepsilon)-\cos (n\varepsilon)}{\varepsilon\pi n^2},\quad n\ne0,$$
and define $z^\varepsilon\in \ell^1(\mathbb{Z})$ by $z^\varepsilon_0=1-y^\varepsilon_0$ and $z^\varepsilon_n=-y^\varepsilon_n$ for $n\ne0$. Moreover, let $x^{\varepsilon}\in\ell^\infty(\mathbb{Z};X)$ be given by $x^{\varepsilon}=x*y^{\varepsilon}$, as before. Then the function
$$\psi_\varepsilon(\theta)=\sum_{n\in\mathbb{Z}} \frac{y_n^\varepsilon}{\mathrm{e}^{\mathrm{i} n\theta}},\quad -\pi\le\theta\le\pi,$$
satisfies $\psi_\varepsilon(\theta)=0$ for $|\theta|\leq\varepsilon$, $\psi_\varepsilon(\theta)=\varepsilon^{-1}|\theta|-1$ for $\varepsilon\leq |\theta|\leq2\varepsilon$, and $\psi_\varepsilon(\theta)=1$ for $2\varepsilon\leq|\theta|\leq\pi$. Now \eqref{x-z} still holds but the above method for estimating $|z_n^\varepsilon-z_{n-1}^\varepsilon|$, $n\in\mathbb{Z}$, is no longer applicable since $\varphi_\varepsilon$ is not differentiable. Instead, consider the function $\phi:\mathbb{R}\to\mathbb{R}$ given by $\phi(0)=0$ and
$$\phi(t)=\frac{2}{\pi}\left(\frac{\cos(2t)-\cos(t)}{t^3}+\frac{\sin(2t)-\frac{1}{2}\sin(t)}{t^2}\right),\quad t\ne0.$$
Then $\phi\in L^1(\mathbb{R})$ and
$$z_{n}^\varepsilon-z_{n-1}^\varepsilon=\varepsilon\int_{\varepsilon(n-1)}^{\varepsilon n}\phi(t)\,\mathrm{d} t,\quad n\in\mathbb{Z},$$
and it follows from \eqref{bdd} and \eqref{x-z} that
\begin{equation}\label{difference}
\|x_n-x_n^\varepsilon\|\le\varepsilon\sum_{j\ge0}\int_{\varepsilon(n-j-1)}^{\varepsilon (n-j)}|\phi(t)|\,\mathrm{d} t
\lesssim \varepsilon,\quad n\ge0.
\end{equation}
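The explicit formulas above invite a numerical sanity check (an illustration of ours, not part of the proof): the sketch below verifies the stated Fourier coefficients of the piecewise-linear cutoff $\psi_\varepsilon$ and the integral identity for $z_n^\varepsilon-z_{n-1}^\varepsilon$, using a plain midpoint rule for the integrals.

```python
import math

def psi(theta, eps):
    """Piecewise-linear cutoff: 0 on [0,eps], linear on [eps,2eps], 1 beyond."""
    t = abs(theta)
    if t <= eps:
        return 0.0
    if t <= 2 * eps:
        return t / eps - 1.0
    return 1.0

def midpoint(f, a, b, steps=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def y_exact(n, eps):
    """Stated Fourier coefficients of psi_eps (n != 0)."""
    return (math.cos(2 * n * eps) - math.cos(n * eps)) / (eps * math.pi * n * n)

def phi(t):
    """The function phi from the proof, with phi(0) = 0."""
    if t == 0.0:
        return 0.0
    return (2 / math.pi) * ((math.cos(2 * t) - math.cos(t)) / t ** 3
                            + (math.sin(2 * t) - 0.5 * math.sin(t)) / t ** 2)

eps = 0.5
# y_n agrees with (1/2pi) * int e^{in theta} psi_eps(theta) dtheta (psi_eps is even):
for n in (1, 2, 5):
    num = midpoint(lambda th: math.cos(n * th) * psi(th, eps),
                   -math.pi, math.pi) / (2 * math.pi)
    assert abs(num - y_exact(n, eps)) < 1e-6
# z_n - z_{n-1} = eps * int_{eps(n-1)}^{eps n} phi(t) dt, using z_n = -y_n (n != 0):
for n in (2, 3, 7):
    lhs = -(y_exact(n, eps) - y_exact(n - 1, eps))
    rhs = eps * midpoint(phi, eps * (n - 1), eps * n)
    assert abs(lhs - rhs) < 1e-8
```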
Now, by the same argument as in \eqref{RL},
$$x_n^\varepsilon=\frac{1}{2\pi}\int_{-\pi}^\pi\mathrm{e}^{\mathrm{i}(n+1)\theta}\psi_\varepsilon(\theta)F_x\big(\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta,\quad n\in\mathbb{Z}.$$
Integrating by parts $k\ge1$ times gives
$$\begin{aligned}
x^\varepsilon_n&=A_{n,k}(-1)^k\int_{\varepsilon\leq|\theta|\leq\pi}\mathrm{e}^{\mathrm{i}(n+k+1)\theta}\psi_\varepsilon(\theta)F_x^{(k)}\big(\mathrm{e}^{\mathrm{i}\theta}\big)\,\mathrm{d}\theta\\
&\qquad+B_{n,k}\frac{(-1)^{k}}{2\pi\varepsilon\mathrm{i}}\int_{\varepsilon\leq|\theta|\leq2\varepsilon}\mathrm{e}^{\mathrm{i}(n+k)\theta}F_x^{(k-1)}\big(\mathrm{e}^{\mathrm{i}\theta}\big)\sgn\theta\,\mathrm{d}\theta\\
&\qquad +\sum_{j=0}^{k-2}C_{n,j}\frac{(-1)^{j}}{2\pi\varepsilon}\left[\mathrm{e}^{\mathrm{i}(n+j+1)\theta}F_x^{(j)}\big(\mathrm{e}^{\mathrm{i}\theta}\big)+\mathrm{e}^{-\mathrm{i}(n+j+1)\theta}F_x^{(j)}\big(\mathrm{e}^{-\mathrm{i}\theta}\big)\right]_\varepsilon^{2\varepsilon}
\end{aligned}$$
for all $n\geq0$, where
\begin{gather*}
A_{n,k}=\frac{n!}{(n+k)!}, \quad B_{n,k}=\frac{n!}{(n+k-1)!}\sum_{j=1}^k\frac{1}{n+j}\\
\mbox{and} \quad C_{n,j}=\frac{n!}{(n+j+1)!}\sum_{\ell=1}^{j+1}\frac{1}{n+\ell}
\end{gather*}
for $0\leq j\leq k-2$. Now $A_{n,k}\le n^{-k}$, $B_{n,k}\le kn^{-k}$ and $C_{n,j}\le(j+1)n^{-(j+2)}$ for $n\ge1$. Thus \eqref{dom_fun} gives
\begin{equation}\label{est}
\|x_n^\varepsilon\|\lesssim k! \frac{ m(\varepsilon)^{k+1}}{n^k}+\frac{m(\varepsilon)}{n^2}\sum_{j=0}^{k-2}(j+1)!\frac{ m(\varepsilon)^j}{n^j}, \quad n,k\ge1,
\end{equation}
for $\varepsilon\in(0,\pi/2]$. Denote the two terms on the right-hand side of \eqref{est} by $\smash{D^\varepsilon_{n,k}}$ and $\smash{E^\varepsilon_{n,k}}$, respectively. For $c\in (0,1)$, Stirling's formula implies that $\smash{k!\lesssim (k/c\mathrm{e} )^k}$ for all $k\geq0$ and hence
$$D^\varepsilon_{n,k}\lesssim m(\varepsilon)\left(\frac{k m(\varepsilon)}{c\mathrm{e} n}\right)^k,\quad n,k\ge1.$$
Let $k_{\varepsilon,n}=\lfloor cn/m(\varepsilon)\rfloor$. Then
\begin{equation}\label{D_est}
D^\varepsilon_{n,k_{\varepsilon,n}}\lesssim m(\varepsilon)\exp\left(-\frac{cn}{m(\varepsilon)}\right)
\end{equation}
for all $\varepsilon\in(0,\pi/2]$ and all $n\ge1$ such that $k_{\varepsilon,n}\ge1$. Moreover, for such values of $\varepsilon$ and $n$, the choice of $k_{\varepsilon,n}$ ensures that
\begin{equation}\label{E_est}
E^\varepsilon_{n,k_{\varepsilon,n}}\leq \frac{m(\varepsilon)}{n^2}\sum_{j=0}^{k_{\varepsilon,n}-2}c^j\lesssim \frac{m(\varepsilon)}{n^2}.
\end{equation}
Thus setting $k=k_{\varepsilon,n}$ in \eqref{est} and using \eqref{D_est} and \eqref{E_est} gives
\begin{equation}\label{x_est}
\|x_n^\varepsilon\|\lesssim m(\varepsilon)\exp\left(-\frac{cn}{m(\varepsilon)}\right)+\frac{m(\varepsilon)}{n^2}
\end{equation}
for all $\varepsilon\in(0,\pi/2]$ and $n\ge1$ as above. Let $\varepsilon_n=\smash{m_\mathrm{log}^{-1}(cn)}$ for $n\ge1$ sufficiently large to ensure that $k_{\varepsilon_n,n}\ge1$. For such values of $n$,
$$m(\varepsilon_n)\exp\left(-\frac{cn}{m(\varepsilon_n)}\right)\lesssim\varepsilon_n\quad\mbox{and}\quad \frac{m(\varepsilon_n)}{n^2}\lesssim\frac{1}{n},$$
so \eqref{bound} follows from \eqref{difference} and \eqref{x_est} on setting $\varepsilon=\varepsilon_n$.
\end{proof}
\begin{rem}\label{rem1}
\begin{enumerate}[(a)]
\item The choice of $k_{\varepsilon,n}$ before equation \eqref{D_est} is motivated by the fact that, given any constant $C>0$, the function $t\mapsto (Ct)^t$, defined on $(0,\infty)$, attains its global minimum at $t=(C\mathrm{e})^{-1}$; indeed, $\frac{\mathrm{d}}{\mathrm{d}t}\,t\log(Ct)=\log(Ct)+1$ vanishes precisely at this point.
\item Theorem~\ref{Ing} can be extended to the case of a finite number of singularities on the unit circle; see also \cite{Ma11} and \cite{Se2}.
\end{enumerate}
\end{rem}
The following example illustrates that the estimates in Theorem~\ref{Ing} generally improve with the smoothness of the boundary function if $m(\varepsilon)$ grows moderately fast as $\varepsilon\to0+$, but that the quality of these estimates can be independent of the degree of smoothness if this blow-up is very rapid.
\begin{ex}\label{poly_ex}
In Theorem~\ref{Ing}, consider the function $m:(0,\pi]\to[1,\infty)$ given by $m(\varepsilon)=(\pi/\varepsilon)^{\alpha}$, where $\alpha\ge1$. If $F_x\in C^k(\mathbb{T}\backslash\{1\};X)$ for some $k\ge1$ and \eqref{Ck_dom_fun} holds, then \eqref{bound_Ck} gives
$$\|x_n\|=O\left(n^{-\frac{k}{\alpha(k+1)+1}}\right),\quad n\to\infty,$$
and, if $F_x\in C^\infty(\mathbb{T}\backslash\{1\};X)$ and \eqref{dom_fun} holds, \eqref{bound} becomes
\begin{equation}\label{u_bd}
\|x_n\|=O\Bigg(\bigg(\frac{\log n}{n}\bigg)^{\frac{1}{\alpha}}\Bigg),\quad n\to\infty.
\end{equation}
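Indeed, for $m(\varepsilon)=(\pi/\varepsilon)^\alpha$ a direct computation from \eqref{m_k} gives
$$m_k(\varepsilon)=\pi^{\alpha(k+1)/k}\,\varepsilon^{-(\alpha(k+1)+1)/k},\qquad m_k^{-1}(s)\asymp s^{-\frac{k}{\alpha(k+1)+1}},\quad s\to\infty,$$
which yields the first of these rates.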
Thus the estimate improves with the smoothness of $F_x$. By contrast, if the assumptions of Theorem~\ref{Ing} are satisfied for $m(\varepsilon)=\exp(\varepsilon^{-\alpha})$ with $\alpha>0$, then \eqref{bound_Ck}, for any $k\ge1$, and \eqref{bound} both become
$$\|x_n\|=O\left((\log n)^{-\frac{1}{\alpha}}\right),\quad n\to\infty,$$
so in this case the quality of the estimate is unaffected by the smoothness of the boundary function.
\end{ex}
\subsection{The quantified Katznelson-Tzafriri theorem}
The purpose of this final section is to deduce from Theorem~\ref{Ing} the quantified version of Theorem~\ref{KT_thm}. This result was first obtained in \cite{Se2} by means of a contour integral argument adapted from \cite{BD08} and \cite{Ma11}. Some closely related results may be found for instance in \cite{Du08a}, \cite{Le14} and \cite{Ne11}.
\begin{thm}\label{KT_quant}
Let $X$ be a complex Banach space and let $T\in \mathcal{B}(X)$ be a power-bounded operator such that $\sigma(T)\cap\mathbb{T}=\{1\}$. Suppose there exists a continuous non-increasing function $m:(0,\pi]\to[1,\infty)$ such that
$$\|R(\mathrm{e}^{\mathrm{i}\theta},T)\|\le m(|\theta|),\quad 0<|\theta|\le\pi.$$
Then, for any $c\in(0,1)$,
\begin{equation}\label{KT}
\|T^n(I-T)\|=O\big( m_{\log}^{-1}(c n)\big),\quad n\to\infty,
\end{equation}
where $m_\mathrm{log}^{-1}$ is the inverse function of the map $m_\mathrm{log}$ defined in \eqref{mlog}.
\end{thm}
\begin{proof}
The result follows from part (b) of Theorem~\ref{Ing} applied, with $X$ replaced by $\mathcal{B}(X)$, to the sequence $x$ whose $n$-th term is $x_n=T^n(I-T)$, $n\ge0$. Indeed, the sequence $x$ is bounded since $T$ is power-bounded, and moreover
$\smash{\sum_{k=0}^n x_k=I-T^{n+1}}$ for all $n\ge0,$ so \eqref{bdd} is also satisfied, again by power-boundedness of $T$. Furthermore,
$G_x(\lambda)=(I-T)R(\lambda,T)$ for $|\lambda|>1.$ Let $G_x$ denote also the extension of this map to the resolvent set $\rho(T)=\mathbb{C}\backslash\sigma(T)$ and let $F_x$ be the restriction of $G_x$ to $\mathbb{T}\backslash\{1\}$. Note that $\|R(\lambda,T)\|\ge \dist(\lambda,\sigma(T))^{-1}\ge |1-\lambda|^{-1}$ for all $\lambda\in\rho(T)$, and hence
\begin{equation}\label{lb}
m(\varepsilon)\ge \frac{1}{|1-\mathrm{e}^{\mathrm{i}\varepsilon}|}=\frac{1}{2\sin(\varepsilon/2)}\ge\frac{1}{\varepsilon},\quad 0<\varepsilon\le\pi.
\end{equation}
Further, for $k\ge0$,
$$F_x^{(k)}(\lambda)=(-1)^k k! R(\lambda,T)^k\big(I+(1-\lambda)R(\lambda,T)\big),\quad \lambda\in\mathbb{T}\backslash\{1\},$$
and it follows from \eqref{lb} that \eqref{dom_fun} holds. Since $m_\mathrm{log}(\varepsilon)\gtrsim m(\varepsilon)\ge\varepsilon^{-1}$
for all $\varepsilon\in(0,\pi]$, and therefore $n^{-1}\lesssim m_\mathrm{log}^{-1}(cn)$ for all sufficiently large $n\ge1$, \eqref{KT} follows from \eqref{bound}.
\end{proof}
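To see the theorem at work in the simplest possible situation (an illustration of ours, not taken from the paper), consider a diagonal operator $T$ with eigenvalues $\lambda_j\in(0,1)$ accumulating only at $1$, so that $\|T^n(I-T)\|=\sup_j\lambda_j^n(1-\lambda_j)$. Maximizing $t\mapsto t^n(1-t)$ over $(0,1)$ shows that this decays like $1/n$, in line with the Hilbert-space rate mentioned in Remark~\ref{KT_rem}(a) for $m(\varepsilon)\sim C/\varepsilon$.

```python
# Illustration (not from the paper): for a diagonal contraction
# T = diag(lambda_j) with 0 < lambda_j < 1, one has
#   ||T^n (I - T)|| = sup_j lambda_j^n * (1 - lambda_j),
# and the supremum over all of (0,1) is attained near lam = n/(n+1),
# where it behaves like 1/(e*n).

def norm_Tn_I_minus_T(lambdas, n):
    """sup_j lambda_j^n * (1 - lambda_j) for a diagonal operator."""
    return max(lam ** n * (1 - lam) for lam in lambdas)

# Dense grid of eigenvalues approaching 1 (a finite stand-in for a
# spectrum meeting the unit circle only at 1).
lambdas = [1 - k / 10000 for k in range(1, 10000)]
for n in (10, 100, 1000):
    val = norm_Tn_I_minus_T(lambdas, n)
    # The grid value should be close to 1/(e*(n+1)) for n << 10000.
    assert 0.2 / n < val < 1.0 / n
```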
\begin{rem}\label{KT_rem}
\begin{enumerate}[(a)]
\item\label{opt_rem} For functions $m:(0,\pi]\to[1,\infty)$ of the form $m(\varepsilon)=C\varepsilon^{-\alpha}$ for suitable constants $C>0$, as considered in the first part of Example~\ref{poly_ex}, the right-hand side in \eqref{KT} is given by that in \eqref{u_bd}.
It is shown in \cite[Section~3]{Se2} that the logarithmic factor in this expression can be dropped if $X$ is a Hilbert space but not for general Banach spaces.
\item It is possible to obtain a `local' version of Theorem~\ref{KT_quant} from Theorem~\ref{Ing} giving, for a fixed $x\in X$, an estimate for the rate of decay of $\|T^n(I-T)x\|$ as $n\to\infty$ which depends on the behaviour of the `local' resolvent operator $R(\lambda,T)x$ as $|\lambda|\to1+$; see for instance \cite{BNR98}, \cite{BV90}, \cite{BY00}, \cite{Ch98} and \cite{To01} for related local results in the context mainly of $C_0$-semigroups. Similarly, Theorem~\ref{Ing} can be used to obtain an estimate on the rate of decay of weak orbits $\phi(T^n(I-T)x)$ as $n\to\infty$, where $x\in X$ and $\phi$ is a bounded linear functional on $X$.
\end{enumerate}
\end{rem}
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('documents', '0013_remove_document_tags'),
]
operations = [
migrations.AlterField(
model_name='document',
name='description',
field=models.TextField(),
),
]
<style type="text/css">
<!--
.style1 {
color: #FFFFFF
}
-->
</style>
<script language="javascript" type="text/javascript">
<!--
function MM_openBrWindow(theURL,winName,features) { //v2.0
window.open(theURL,winName,features);
}
function MM_swapImgRestore() { //v3.0
var i,x,a=document.MM_sr; for(i=0;a&&i<a.length&&(x=a[i])&&x.oSrc;i++) x.src=x.oSrc;
}
function MM_preloadImages() { //v3.0
var d=document; if(d.images){ if(!d.MM_p) d.MM_p=new Array();
var i,j=d.MM_p.length,a=MM_preloadImages.arguments; for(i=0; i<a.length; i++)
if (a[i].indexOf("#")!=0){ d.MM_p[j]=new Image; d.MM_p[j++].src=a[i];}}
}
function MM_findObj(n, d) { //v4.01
var p,i,x; if(!d) d=document; if((p=n.indexOf("?"))>0&&parent.frames.length) {
d=parent.frames[n.substring(p+1)].document; n=n.substring(0,p);}
if(!(x=d[n])&&d.all) x=d.all[n]; for (i=0;!x&&i<d.forms.length;i++) x=d.forms[i][n];
for(i=0;!x&&d.layers&&i<d.layers.length;i++) x=MM_findObj(n,d.layers[i].document);
if(!x && d.getElementById) x=d.getElementById(n); return x;
}
function MM_swapImage() { //v3.0
var i,j=0,x,a=MM_swapImage.arguments; document.MM_sr=new Array; for(i=0;i<(a.length-2);i+=3)
if ((x=MM_findObj(a[i]))!=null){document.MM_sr[j++]=x; if(!x.oSrc) x.oSrc=x.src; x.src=a[i+2];}
}
//-->
</script>
<table width="638" border="0" cellpadding="0" cellspacing="0">
<tr>
<td class="ant_dark" valign="middle"><table border="0" cellpadding="0" cellspacing="0">
<tr>
<td height="30" valign="middle"><div class="region1"><a href=" ../wfbInt/region_ant.html" style="color: #FFFFFF;">Antarctica</a> <strong>:: </strong><span class="region_name1">Heard Island and McDonald Islands</span> </div>
<div class="affiliation"><em>(territory of Australia)</em></div>
</td>
</tr>
</table></td>
<td width="80" align="right" valign="middle" class="ant_dark"> </td>
</tr>
</table>
<table width="638" border="0" cellspacing="0" cellpadding="0" >
<tr>
<td width="316" height="182" valign="bottom" background="../graphics/ant_flag_loc_bkgrnd.jpg" style="background-repeat:no-repeat; background-position:top left" border="0"><table width="100%" height="180" border="0" align="center" cellpadding="0" cellspacing="0">
<tr>
<td height="10" colspan="2" valign="top" class="smalltext_nav_country" style="padding-left:7px;">page last updated on November 15, 2012</td>
</tr>
<tr>
<td width="50%" align="center" valign="middle" ><table width="100%" border="0" cellspacing="3" cellpadding="0" align="center" valign="top" >
<tr>
<td height="95" align="center" valign="middle" background="../graphics/ant_smflag_bkgrnd.jpg" style="background-repeat:no-repeat; background-position: center bottom;" border="0">
<img src="../graphics/flags/newflags/hm-lgflag.gif" width="96" height="48" alt="Flag of Heard Island and McDonald Islands" title="Flag of Heard Island and McDonald Islands" class="flag_border"/>
</td>
</tr>
<tr>
<td height="10" align="center" valign="top" class="smalltext_nav" ></td>
</tr>
</table></td>
<td width="50%" align="center" valign="middle">
<a href="../maps/hm_largelocator_template.html" onClick="MM_openBrWindow('../maps/hm_largelocator_template.html','','scrollbars=no,resizable=no,width=850,height=638'); return false"> <img src="../graphics/locator/ant/hm_locator.gif" style="border: 2px double #e3dbe3; background: #FFFFFF; margin: 0px; margin-right: 6px;" title="Location of Heard Island and McDonald Islands" alt="Location of Heard Island and McDonald Islands" border="0"> </a>
</td>
</tr>
<tr>
<td height="15" colspan="2" align="center" valign="middle" class="smalltext_nav" style="color: #734d73; letter-spacing:1px; line-height: 10px;"></td>
</tr>
</table></td>
<td width="8" height="275" rowspan="3"> </td>
<td width="314" height="275" rowspan="3" valign="middle" background="../graphics/ant_map_bkgrnd.jpg"><table width="95%" height="302" border="0" align="center" cellpadding="0" cellspacing="0" >
<tr>
<td height="287" align="center" valign="middle">
<img src="../graphics/maps/newmaps/hm-map.gif" alt="Map of Heard Island and McDonald Islands" id="ant_smmapborder" title=" Map of Heard Island and McDonald Islands" border="0"/>
</td>
</tr>
<tr>
<td height="15" colspan="2" align="center" valign="bottom" class="smalltext_nav" style="color: #734d73; letter-spacing:1px; line-height: 10px; "></td>
</tr>
</table></td>
</tr>
<tr>
<td width="314" align="left" valign="top" >
<table cellpadding="0" cellspacing="0" width="290" height="123" border="0">
<tr>
<td></td>
</tr>
</table>
</td>
</tr>
</table>
<table width="638" border="0" cellpadding="2" cellspacing="2">
<tr>
</tr>
</table>
<script language="javascript" type="text/javascript">
<!--
function collapseAllSections( )
{
CollapsiblePanel1_Intro.close();
CollapsiblePanel1_Intro.setCookie("false");
CollapsiblePanel1_Geo.close();
CollapsiblePanel1_Geo.setCookie("false");
CollapsiblePanel1_People.close();
CollapsiblePanel1_People.setCookie("false");
CollapsiblePanel1_Govt.close();
CollapsiblePanel1_Govt.setCookie("false");
CollapsiblePanel1_Econ.close();
CollapsiblePanel1_Econ.setCookie("false");
CollapsiblePanel1_Comm.close();
CollapsiblePanel1_Comm.setCookie("false");
CollapsiblePanel1_Trans.close();
CollapsiblePanel1_Trans.setCookie("false");
CollapsiblePanel1_Military.close();
CollapsiblePanel1_Military.setCookie("false");
CollapsiblePanel1_Issues.close();
CollapsiblePanel1_Issues.setCookie("false");
}
function expandAllSections( )
{
CollapsiblePanel1_Intro.open();
CollapsiblePanel1_Intro.setCookie("true");
CollapsiblePanel1_Geo.open();
CollapsiblePanel1_Geo.setCookie("true");
CollapsiblePanel1_People.open();
CollapsiblePanel1_People.setCookie("true");
CollapsiblePanel1_Govt.open();
CollapsiblePanel1_Govt.setCookie("true");
CollapsiblePanel1_Econ.open();
CollapsiblePanel1_Econ.setCookie("true");
CollapsiblePanel1_Comm.open();
CollapsiblePanel1_Comm.setCookie("true");
CollapsiblePanel1_Trans.open();
CollapsiblePanel1_Trans.setCookie("true");
CollapsiblePanel1_Military.open();
CollapsiblePanel1_Military.setCookie("true");
CollapsiblePanel1_Issues.open();
CollapsiblePanel1_Issues.setCookie("true");
}
//-->
/* IMPORTANT: Put script after tooltip div or put tooltip div just before </BODY>. */
var dom = (document.getElementById) ? true : false;
var ns5 = (!document.all && dom || window.opera) ? true: false;
var ie5 = ((navigator.userAgent.indexOf("MSIE")>-1) && dom) ? true : false;
var ie4 = (document.all && !dom) ? true : false;
var nodyn = (!ns5 && !ie4 && !ie5 && !dom) ? true : false;
var origWidth, origHeight;
// avoid error of passing event object in older browsers
if (nodyn) { event = "nope" }
/////////////////////// CUSTOMIZE HERE ////////////////////
// settings for tooltip
// Do you want tip to move when mouse moves over link?
var tipFollowMouse= false;
// Be sure to set tipWidth wide enough for widest image
var tipWidth= 159;
var tipHeight= 65;
var offX= -170; // how far from mouse to show tip
var offY= -10;
var tipFontFamily= "Verdana, arial, helvetica, sans-serif";
var tipFontSize= "8pt";
// set default text color and background color for tooltip here
// individual tooltips can have their own (set in messages arrays)
// but don't have to
var tipFontColor= "#000000";
var tipBgColor= "";
var tipBorderColor= "#666666";
var tipBorderWidth= 0;
var tipBorderStyle= "none";
var tipPadding= 0;
var tipPosition = "absolute";
// tooltip content goes here (image, description, optional bgColor, optional textcolor)
var messages = new Array();
// multi-dimensional arrays containing:
// image and text for tooltip
// optional: bgColor and color to be sent to tooltip
messages[0] = new Array('../graphics/field_listing_tooltip.gif','','');
messages[1] = new Array('../graphics/google_tooltip.gif','','');
messages[2] = new Array('../graphics/intelink_tooltip.gif','','');
messages[3] = new Array('../graphics/populationpyramid_tooltip.gif','','');
//messages[2] = new Array('test.gif','Test description','black','white');
//////////////////// END OF CUSTOMIZATION AREA ///////////////////
// preload images that are to appear in tooltip
// from arrays above
if (document.images) {
var theImgs = new Array();
for (var i=0; i<messages.length; i++) {
theImgs[i] = new Image();
theImgs[i].src = messages[i][0];
}
}
// to layout image and text, 2-row table, image centered in top cell
// these go in var tip in doTooltip function
// startStr goes before image, midStr goes between image and text
var startStr = '<table border="0" width="' + tipWidth + '" height="' + tipHeight + '"><tr><td align="center"><img src="';
var midStr = '" border="0"></td></tr><tr><td valign="top">';
var endStr = '</td></tr></table>';
////////////////////////////////////////////////////////////
// initTip - initialization for tooltip.
// Global variables for tooltip.
// Set styles
// Set up mousemove capture if tipFollowMouse set true.
////////////////////////////////////////////////////////////
var tooltip, tipcss;
function initTip() {
if (nodyn)
return;
tooltip = (ie4)? document.all['tipDiv']: (ie5||ns5)? document.getElementById('tipDiv'): null;
tipcss = tooltip.style;
if (ie4||ie5||ns5) { // ns4 would lose all this on rewrites
tipcss.width = tipWidth+"px";
tipcss.fontFamily = tipFontFamily;
tipcss.fontSize = tipFontSize;
tipcss.color = tipFontColor;
tipcss.backgroundColor = tipBgColor;
tipcss.borderColor = tipBorderColor;
tipcss.borderWidth = tipBorderWidth+"px";
tipcss.padding = tipPadding+"px";
tipcss.borderStyle = tipBorderStyle;
tipcss.position = tipPosition;
}
if (tooltip&&tipFollowMouse) {
document.onmousemove = trackMouse;
}
}
/////////////////////////////////////////////////
// doTooltip function
// Assembles content for tooltip and writes
// it to tipDiv
/////////////////////////////////////////////////
var t1,t2; // for setTimeouts
var tipOn = false; // check if over tooltip link
function doTooltip(evt,num) {
if (!tooltip) return;
if (t1) clearTimeout(t1);
if (t2) clearTimeout(t2);
tipOn = true;
// set colors if included in messages array
if (messages[num][2])
var curBgColor = messages[num][2];
else
curBgColor = tipBgColor;
if (messages[num][3])
var curFontColor = messages[num][3];
else
curFontColor = tipFontColor;
if (ie4||ie5||ns5) {
var tip = startStr + messages[num][0] + midStr + '<span style="font-family:' + tipFontFamily + '; position:' + tipPosition + '; font-size:' + tipFontSize + '; color:' + curFontColor + ';">' + messages[num][1] + '</span>' + endStr;
tipcss.backgroundColor = curBgColor;
tooltip.innerHTML = tip;
}
if (!tipFollowMouse)
positionTip(evt);
else
t1=setTimeout("tipcss.visibility='visible'",100);
}
var mouseX, mouseY;
function trackMouse(evt) {
standardbody=(document.compatMode=="CSS1Compat")? document.documentElement : document.body //create reference to common "body" across doctypes
mouseX = (ns5)? evt.pageX: window.event.clientX + standardbody.scrollLeft;
mouseY = (ns5)? evt.pageY: window.event.clientY + standardbody.scrollTop;
if (tipOn)
positionTip(evt);
}
/////////////////////////////////////////////////////////////
// positionTip function
// If tipFollowMouse set false, so trackMouse function
// not being used, get position of mouseover event.
// Calculations use mouseover event position,
// offset amounts and tooltip width to position
// tooltip within window.
/////////////////////////////////////////////////////////////
function positionTip(evt) {
if (!tipFollowMouse) {
standardbody=(document.compatMode=="CSS1Compat")? document.documentElement : document.body
mouseX = (ns5)? evt.pageX: window.event.clientX + standardbody.scrollLeft;
mouseY = (ns5)? evt.pageY: window.event.clientY + standardbody.scrollTop;
}
// tooltip width and height
var tpWd = (ie4||ie5)? tooltip.clientWidth: tooltip.offsetWidth;
var tpHt = (ie4||ie5)? tooltip.clientHeight: tooltip.offsetHeight;
// document area in view (subtract scrollbar width for ns)
var winWd = (ns5)? window.innerWidth-20+window.pageXOffset: standardbody.clientWidth+standardbody.scrollLeft;
var winHt = (ns5)? window.innerHeight-20+window.pageYOffset: standardbody.clientHeight+standardbody.scrollTop;
// check mouse position against tip and window dimensions
// and position the tooltip
if ((mouseX+offX+tpWd)>winWd)
tipcss.left = mouseX-(tpWd+offX)+"px";
else
tipcss.left = mouseX+offX+"px";
if ((mouseY+offY+tpHt)>winHt)
tipcss.top = winHt-(tpHt+offY)+"px";
else
tipcss.top = mouseY+offY+"px";
if (!tipFollowMouse)
t1=setTimeout("tipcss.visibility='visible'",100);
}
function hideTip() {
if (!tooltip) return;
t2=setTimeout("tipcss.visibility='hidden'",100);
tipOn = false;
}
document.write('<div id="tipDiv" style="position:absolute; visibility:hidden; z-index:100;"></div>')
initTip();
</script>
<table width="638" border="0" cellpadding="0" cellspacing="0">
<tr>
<td>
<div id="CollapsiblePanel1_Intro" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Introduction" title="Expand/Collapse Introduction"> <span class="category" style="vertical-align:middle;padding-left:8px;">Introduction</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2028" alt="Definitions and Notes: Background" title="Definitions and Notes: Background"> Background</a>:
</div>
</td>
</tr>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">The United Kingdom transferred these uninhabited, barren, sub-Antarctic islands to Australia in 1947. Populated by large numbers of seal and bird species, the islands have been designated a nature preserve.</div>
</td>
</tr>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Geo" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Geography" title="Expand/Collapse Geography"> <span class="category" style="vertical-align:middle;padding-left:8px;">Geography</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2144" alt="Definitions and Notes: Location" title="Definitions and Notes: Location"> Location</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">islands in the Indian Ocean, about two-thirds of the way from Madagascar to Antarctica</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2011" alt="Definitions and Notes: Geographic coordinates" title="Definitions and Notes: Geographic coordinates"> Geographic coordinates</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">53 06 S, 72 31 E</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2145" alt="Definitions and Notes: Map references" title="Definitions and Notes: Map references"> Map references</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">
<a href="../graphics/ref_maps/physical/pdf/antarctic.pdf" target="_blank" class="category_data">Antarctic Region</a>
</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2147" alt="Definitions and Notes: Area" title="Definitions and Notes: Area"> Area</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category">total: <span class="category_data" style="font-weight:normal; vertical-align:bottom;">412 sq km</span></div>
<span class="category" style="padding-left:7px;">country comparison to the world:</span> <span class="category_data"> <a href="../rankorder/2147rank.html?countryName=Heard Island and McDonald Islands&countryCode=hm&regionCode=ant&rank=203#hm" onmousedown="" title="Country comparison to the world" alt="Country comparison to the world"> 203 </a> </span>
<div class="category" style="padding-top: 2px;">
land:
<span class="category_data" style="font-weight:normal; vertical-align:top;">412 sq km </span></div>
<div class="category" style="padding-top: 2px;">
water:
<span class="category_data" style="font-weight:normal; vertical-align:top;">0 sq km </span></div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2023" alt="Definitions and Notes: Area - comparative" title="Definitions and Notes: Area - comparative"> Area - comparative</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">slightly more than two times the size of Washington, DC</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2096" alt="Definitions and Notes: Land boundaries" title="Definitions and Notes: Land boundaries"> Land boundaries</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">0 km</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2060" alt="Definitions and Notes: Coastline" title="Definitions and Notes: Coastline"> Coastline</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">101.9 km</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2106" alt="Definitions and Notes: Maritime claims" title="Definitions and Notes: Maritime claims"> Maritime claims</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category">territorial sea: <span class="category_data" style="font-weight:normal; vertical-align:bottom;">12 nm</span></div>
<div class="category" style="padding-top: 2px;">
exclusive fishing zone:
<span class="category_data" style="font-weight:normal; vertical-align:top;">200 nm </span></div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2059" alt="Definitions and Notes: Climate" title="Definitions and Notes: Climate"> Climate</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">antarctic</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2125" alt="Definitions and Notes: Terrain" title="Definitions and Notes: Terrain"> Terrain</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">Heard Island - 80% ice-covered, bleak and mountainous, dominated by a large massif (Big Ben) and an active volcano (Mawson Peak); McDonald Islands - small and rocky</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2020" alt="Definitions and Notes: Elevation extremes" title="Definitions and Notes: Elevation extremes"> Elevation extremes</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category">lowest point: <span class="category_data" style="font-weight:normal; vertical-align:bottom;">Indian Ocean 0 m</span></div>
<div class="category" style="padding-top: 2px;">
highest point:
<span class="category_data" style="font-weight:normal; vertical-align:top;">Mawson Peak on Big Ben volcano 2,745 m </span></div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2111" alt="Definitions and Notes: Natural resources" title="Definitions and Notes: Natural resources"> Natural resources</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">fish</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2097" alt="Definitions and Notes: Land use" title="Definitions and Notes: Land use"> Land use</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category">arable land: <span class="category_data" style="font-weight:normal; vertical-align:bottom;">0%</span></div>
<div class="category" style="padding-top: 2px;">
permanent crops:
<span class="category_data" style="font-weight:normal; vertical-align:top;">0% </span></div>
<div class="category" style="padding-top: 2px;">
other:
<span class="category_data" style="font-weight:normal; vertical-align:top;">100% (2005) </span></div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2146" alt="Definitions and Notes: Irrigated land" title="Definitions and Notes: Irrigated land"> Irrigated land</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">0 sq km</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2021" alt="Definitions and Notes: Natural hazards" title="Definitions and Notes: Natural hazards"> Natural hazards</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">Mawson Peak, an active volcano, is on Heard Island</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2032" alt="Definitions and Notes: Environment - current issues" title="Definitions and Notes: Environment - current issues"> Environment - current issues</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">NA</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2113" alt="Definitions and Notes: Geography - note" title="Definitions and Notes: Geography - note"> Geography - note</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">Mawson Peak on Heard Island is the highest Australian mountain (at 2,745 meters, it is taller than Mt. Kosciuszko in Australia proper), and one of only two active volcanoes located in Australian territory, the other being McDonald Island; in 1992, McDonald Island broke its dormancy and began erupting; it has erupted several times since, the most recent being in 2005</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_People" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse People and Society" title="Expand/Collapse People and Society"> <span class="category" style="vertical-align:middle;padding-left:8px;">People and Society</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2119" alt="Definitions and Notes: Population" title="Definitions and Notes: Population"> Population</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">uninhabited</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Govt" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Government" title="Expand/Collapse Government"> <span class="category" style="vertical-align:middle;padding-left:8px;">Government</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2142" alt="Definitions and Notes: Country name" title="Definitions and Notes: Country name"> Country name</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category">conventional long form: <span class="category_data" style="font-weight:normal;">Territory of Heard Island and McDonald Islands</span> </div>
<div class="category" style="padding-top: 2px;">
conventional short form:
<span class="category_data" style="font-weight:normal; vertical-align:top;">Heard Island and McDonald Islands </span></div>
<div class="category" style="padding-top: 2px;">
abbreviation:
<span class="category_data" style="font-weight:normal; vertical-align:top;">HIMI </span></div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2006" alt="Definitions and Notes: Dependency status" title="Definitions and Notes: Dependency status"> Dependency status</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">territory of Australia; administered from Canberra by the Department of Sustainability, Environment, Water, Population and Communities</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2100" alt="Definitions and Notes: Legal system" title="Definitions and Notes: Legal system"> Legal system</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">the laws of Australia, where applicable, apply</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2149" alt="Definitions and Notes: Diplomatic representation in the US" title="Definitions and Notes: Diplomatic representation in the US"> Diplomatic representation in the US</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">none (territory of Australia)</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2007" alt="Definitions and Notes: Diplomatic representation from the US" title="Definitions and Notes: Diplomatic representation from the US"> Diplomatic representation from the US</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">none (territory of Australia)</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2081" alt="Definitions and Notes: Flag description" title="Definitions and Notes: Flag description"> Flag description</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">the flag of Australia is used</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Econ" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Economy" title="Expand/Collapse Economy"> <span class="category" style="vertical-align:middle;padding-left:8px;">Economy</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2116" alt="Definitions and Notes: Economy - overview" title="Definitions and Notes: Economy - overview"> Economy - overview</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">The islands have no indigenous economic activity, but the Australian Government allows limited fishing in the surrounding waters.</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
</div>
<div id="CollapsiblePanel1_Comm" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Communications" title="Expand/Collapse Communications"> <span class="category" style="vertical-align:middle;padding-left:8px;">Communications</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2154" alt="Definitions and Notes: Internet country code" title="Definitions and Notes: Internet country code"> Internet country code</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">.hm</div>
</td>
</tr>
<tr>
<td height="10"></td>
</tr>
<tr class="ant_light">
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2184" alt="Definitions and Notes: Internet hosts" title="Definitions and Notes: Internet hosts"> Internet hosts</a>:
</div>
</td>
</tr>
<tr height="22">
<td colspan="2" id="data">
<div class="category_data">102 (2012)</div>
<span class="category" style="padding-left:7px;">country comparison to the world:</span> <span class="category_data"> <a href="../rankorder/2184rank.html?countryName=Heard Island and McDonald Islands&countryCode=hm&regionCode=ant&rank=209#hm" onmousedown="" title="Country comparison to the world" alt="Country comparison to the world"> 209 </a> </span>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Trans" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Transportation" title="Expand/Collapse Transportation"> <span class="category" style="vertical-align:middle;padding-left:8px;">Transportation</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2120" alt="Definitions and Notes: Ports and terminals" title="Definitions and Notes: Ports and terminals"> Ports and terminals</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">none; offshore anchorage only</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Military" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Military" title="Expand/Collapse Military"> <span class="category" style="vertical-align:middle;padding-left:8px;">Military</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2137" alt="Definitions and Notes: Military - note" title="Definitions and Notes: Military - note"> Military - note</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">defense is the responsibility of Australia; Australia conducts fisheries patrols</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<div id="CollapsiblePanel1_Issues" class="CollapsiblePanel" style="width:638px; ">
<table border="0" cellspacing="0" cellpadding="0" width="638" height="23" style="background-image: url(../graphics/ant_medium.jpg)">
<tr>
<td><span class="category" style="vertical-align:middle;padding-left:8px;" alt="Expand/Collapse Transnational Issues" title="Expand/Collapse Transnational Issues"> <span class="category" style="vertical-align:middle;padding-left:8px;">Transnational Issues</span> ::</span><span class="region">Heard Island and McDonald Islands</span></td>
</tr>
</table>
</div>
<table border="0" cellspacing="0" cellpadding="0" class="CollapsiblePanelContent" style="width: 638px; margin-left: 0px;">
<tr>
<td>
<table width="638" border="0" align="left" cellpadding="0" cellspacing="0">
<tr class="ant_light" >
<td width="450" height="20">
<div class="category" style="padding-left:5px;" id="field">
<a href="../docs/notesanddefs.html#2070" alt="Definitions and Notes: Disputes - international" title="Definitions and Notes: Disputes - international"> Disputes - international</a>:
</div>
</td>
<tr>
<td id="data" colspan="2" style="vertical-align:middle;">
<div class="category_data">none</div>
<tr>
<td class="category_data" style="padding-bottom: 5px;"></td>
</tr>
<tr>
<td colspan="3">
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
</td>
</tr>
</table>
<table width="638" border="0" cellpadding="2" cellspacing="0">
<tr>
</tr>
</table>
<script language="javascript" type="text/javascript">
var cookieExpdate = new Date();
cookieExpdate.setDate(cookieExpdate.getDate() + 7);
// RAN: Session cookie-only change
// Note: cookieExpdate is computed but not applied below; with no
// "expires" attribute, the cookie lasts only for the browser session.
document.cookie = "LASTCRNTYCODE=" + escape("hm") + ";path=" + "/" + ";secure";
</script>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 9,259 |
Q: How to recognize a newly inserted Android USB device using C++/Windows? I am trying to implement a function like this: when an Android device is plugged into a USB port, get its PID and VID and then use them for some further processing.
Currently I use WMI to detect USB insertion and removal. But every time the device is inserted, my program receives more than one event, because other devices, such as a USB mass-storage device, are also detected.
So from what information can I recognize whether a newly inserted USB device is an Android phone?
A: I think I have found a method to solve this problem.
I listen for the Win32_PnPEntity creation event, and the Service property of Win32_PnPEntity can be used to identify the device.
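Once the creation event fires, the Win32_PnPEntity properties are plain strings, so the VID/PID can be pulled out of the DeviceID with a small helper. This is a hedged sketch: the WMI event subscription itself is Windows-specific and omitted here, and the parser only assumes the standard `VID_xxxx&PID_xxxx` format of USB device instance IDs; the example IDs in the comments are illustrative.

```cpp
#include <cstdint>
#include <optional>
#include <string>

struct UsbIds {
    uint16_t vid; // vendor ID, e.g. 0x18D1
    uint16_t pid; // product ID
};

// Parse the "VID_xxxx" / "PID_xxxx" hex fields out of a PnP device
// instance ID such as "USB\\VID_18D1&PID_4EE7\\5&1A2B3C4D&0&2".
// Returns std::nullopt when either field is missing or malformed.
std::optional<UsbIds> ParseVidPid(const std::string& deviceId) {
    const auto vidPos = deviceId.find("VID_");
    const auto pidPos = deviceId.find("PID_");
    if (vidPos == std::string::npos || pidPos == std::string::npos)
        return std::nullopt;
    try {
        UsbIds ids{};
        ids.vid = static_cast<uint16_t>(
            std::stoul(deviceId.substr(vidPos + 4, 4), nullptr, 16));
        ids.pid = static_cast<uint16_t>(
            std::stoul(deviceId.substr(pidPos + 4, 4), nullptr, 16));
        return ids;
    } catch (const std::exception&) {
        // std::stoul throws on non-hex or truncated fields.
        return std::nullopt;
    }
}
```

On the event side one would first filter Win32_PnPEntity instances by their Service property (the service name an Android vendor driver registers varies by OEM, so the exact string to match is an assumption to verify on the target machine), then apply this parser to the instance's DeviceID.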
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 1,623 |
{"url":"http:\/\/www.physicsforums.com\/showthread.php?p=3741572","text":"## Are you a ME working in the Aerospace Industry?\n\nI'm looking for guidance. I'm very interested in the Aerospace field. However, none of my area universities offer an Aerospace degree. I\u2019ve noticed that some people have opted to go ME and then do Aerospace post grad. This route would be available to me,...and if I go this route.\n\nBased on your experience\n\nMy first question is: What type of level of competitiveness does this give me when applying for jobs in the Aerospace industry?\n\nMy second question is: I\u2019m interested in either structural analysis of moving bodies in the air like planes, spacecraft, satellites, etc; and\/or propulsion (rocket or plane engines, etc). Would this be an adequate degree track to seek those types of jobs?\n\nThanks!\n PhysOrg.com science news on PhysOrg.com >> Leading 3-D printer firms to merge in $403M deal (Update)>> LA to give every student an iPad;$30M order>> CIA faulted for choosing Amazon over IBM on cloud contract\n It depends on what part of the aerospace industry you are trying to get into. In a broad sense, there are more MEs employed in the aerospace industry than there are AEs. You can do all of the job tracks you mentioned from an ME background with the right planning.\n\n Similar discussions for: Are you a ME working in the Aerospace Industry? 
Thread Forum Replies Aerospace Engineering 2 Mechanical Engineering 0 Aerospace Engineering 3 Mechanical Engineering 4","date":"2013-06-20 09:39:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.18755799531936646, \"perplexity\": 2647.3608558567616}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368711240143\/warc\/CC-MAIN-20130516133400-00067-ip-10-60-113-184.ec2.internal.warc.gz\"}"} | null | null |
Belinda Pratten
Independent Charity Advisor
View bjpratten's profile on Twitter
View belindapratten's profile on LinkedIn
Oh to be in Scotland
April 5, 2016 By Belinda Pratten
Oh to be in Scotland, now that the referendum's here…
Should charities have a voice in the forthcoming referendum on Britain's membership of the European Union? Apparently it depends on where in the UK you are based, with charity regulators in Scotland and Northern Ireland taking a more positive approach to charities getting involved in the process.
According to the Office of the Scottish Charity Regulator (OSCR): 'Charities in Scotland will want to consider the possible implications of the EU referendum and some may wish to make their voice heard during the EU referendum process.'
It has therefore produced guidance to enable charities 'to play a significant part in the process, legitimately within the requirements of Scots charity law.' Drawing on its experience of last year's Scottish independence referendum, the guidance clearly and concisely explains what a charity can and cannot do. The Charity Commission in Northern Ireland has taken a similar approach.
But that other model of a modern charity regulator, the Charity Commission for England and Wales, has adopted a very different tone in its guidance. Its view is that charity involvement in the debate should be the exception rather than the rule.
To be fair, it has qualified this since publishing its original guidance at the beginning of March. Then it was suggested that any involvement in the referendum process would be seen to be political activity and therefore restricted under charity law. The new guidance now states that: 'Such activity will amount to political activity if the engagement can reasonably be seen as influencing the outcome'.
This is welcome clarification, but the tone hasn't changed. It still appears designed to put charities off getting involved in the debate, rather than enabling them to make their voice heard. The emphasis is entirely risk averse and my personal view is that some of these risks, while very real, have been over-stated.
For example, the guidance suggests that there is a 'significant risk' that the charity might be used to promote the views of an individual trustee or staff member. The Commission is right to point out that charities should not be used in this way, but I'm not convinced that there is a more significant risk in relation to this issue than any other. I have worked with many boards whose members come from different backgrounds, have different beliefs and different political persuasions. But the banter in the kitchen is very different from the discussion around the table when those trustees come together to make decisions in the best interest of the charity. That is what good trustees do.
The guidance also makes a big deal about charities that are funded by the EU. Of course, the Charity Commission is right to say that a potential loss of funding is not a reason to get involved in the referendum debate, and charities should be transparent about where the money comes from. But again I think the Commission labours this point excessively – and, to me, the guidance seems to suggest that any charity in receipt of EU funding has an ulterior motive:
'If your charity does get involved in any political activity connected with the referendum, you should ensure that, during such involvement, you publicly acknowledge the source of your funding so that the reasons for your involvement can be fully assessed. If you do not do so, this could seriously undermine and detract from the quality of your contribution to these very important issues and may attract regulatory scrutiny by the commission.'
The idea that EU funding has created 'sock puppet' organisations whose role is to bolster support for the EU itself has been widely promoted by the Institute of Economic Affairs (IEA). But it is an argument based more on innuendo than evidence – and one which changes according to context. While EU-funded charities supposedly just roll over and ask for their tummy to be tickled, those funded by central or local government have a tendency to bite the hand that feeds them (hence the anti-advocacy clause). It would be laughable if the implications weren't so serious.
I am not suggesting that the Charity Commission has been influenced by the IEA's views on this issue but, like Caesar's wife, it needs to be above suspicion if it is to retain the confidence of the sector it regulates. It needs to stand up for the sector's legitimate right to campaign within the law at the same time as it polices the 'no go' areas of political activity.
The Charity Commission's own guidance, Speaking Out: Guidance on Campaigning and Political Activity for Charities ('CC9') gets that balance about right. And so too does the referendum guidance from OSCR and the guidance from the Charity Commission of Northern Ireland.
A healthy democracy is one where different viewpoints can be aired and different voices heard. Given that the referendum debate so far has been largely an evidence-free zone ('can't recycle teabags'), there is a crying need for independent voices who can provide well-informed, thoughtful analyses of its potential implications. Let's hear it for OSCR!
Copyright © 2023 Belinda Pratten
In differential geometry, conjugate points or focal points are, roughly, points that can almost be joined by a 1-parameter family of geodesics. For example, on a sphere, the north pole and south pole are connected by any meridian. Another viewpoint is that conjugate points tell when the geodesics fail to be length-minimizing. All geodesics are locally length-minimizing, but not globally. For example, on a sphere, any geodesic passing through the north pole can be extended to reach the south pole, and hence any geodesic segment connecting the poles is not (uniquely) globally length-minimizing. This tells us that any pair of antipodal points on the standard 2-sphere are conjugate points.
Definition
Suppose p and q are points on a Riemannian manifold, and γ is a geodesic that connects p and q. Then p and q are conjugate points along γ if there exists a non-zero Jacobi field along γ that vanishes at p and q.
Recall that any Jacobi field can be written as the derivative of a geodesic variation (see the article on Jacobi fields). Therefore, if p and q are conjugate along γ, one can construct a family of geodesics that start at p and almost end at q. In particular, if γ_s(t) is the family of geodesics whose derivative in s at s = 0 generates the Jacobi field J, then the end point of the variation, namely γ_s(1), is the point q only up to first order in s. Therefore, if two points are conjugate, it is not necessary that there exist two distinct geodesics joining them.
Examples
On the sphere S^n, antipodal points are conjugate.
On R^n, there are no conjugate points.
On Riemannian manifolds with non-positive sectional curvature, there are no conjugate points.
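The three examples above can be checked numerically. The sketch below (not from the article; the function name and parameters are ours) integrates the Jacobi equation J'' + K·J = 0 along a unit-speed geodesic in a model space of constant sectional curvature K:

```python
import math

def jacobi_field(t_end, curvature=1.0, steps=20000):
    """Integrate the Jacobi equation J'' + K*J = 0 along a unit-speed
    geodesic with initial data J(0) = 0, J'(0) = 1, using classical RK4.

    K is the constant sectional curvature: K = 1 models the unit sphere,
    K = 0 Euclidean space, and K = -1 hyperbolic space.
    """
    h = t_end / steps
    J, Jp = 0.0, 1.0

    def rhs(j, jp):
        # First-order system: J' = Jp, Jp' = -K * J
        return jp, -curvature * j

    for _ in range(steps):
        k1 = rhs(J, Jp)
        k2 = rhs(J + 0.5 * h * k1[0], Jp + 0.5 * h * k1[1])
        k3 = rhs(J + 0.5 * h * k2[0], Jp + 0.5 * h * k2[1])
        k4 = rhs(J + h * k3[0], Jp + h * k3[1])
        J += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        Jp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return J
```

For K = 1 the solution is sin(t), which vanishes again at t = π: the antipodal point is conjugate to the starting point. For K = 0 and K = -1 the solutions are t and sinh(t), which never vanish for t > 0, matching the last two examples.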
See also
Cut locus
Jacobi field
Tough Night for Baggett, Bogle at Anaheim 2 SX
By Staff, January 21, 2020
Team Rocky Mountain ATV/MC-KTM-WPS returned to Southern California's Angel Stadium in Anaheim for round three of the 2020 Monster Energy Supercross Series. The team pits were electric this weekend with the SoCal SuperTrucks/ GEAR Off-Road /JStar Jeep Gladiator on display by the Rocky Mountain ATV/MC-KTM-WPS race rig. Team riders Blake Baggett and Justin Bogle would have a tough night, as they had their hands full on the technical Anaheim track.
Blake Baggett had some work to do in his heat race. After rounding the first turn buried at the tail end of the top ten, Blake quickly found the fast line and focused on charging through the field. By the end of the race, he had worked his way up to 6th place to transfer straight to the main event. There, Blake launched out of the gate, rounding the first turn in 2nd place. He quickly gave chase to leader Ken Roczen, using the fast line around the track to get right on the Honda's rear wheel. Blake made his move in the tricky whoop section as he powered past Roczen for the lead. While fighting to keep the lead, Blake went down after he caught a square edge entering the split rhythm lane. After remounting in 14th place, Blake pulled a tear-off from his 100% goggles and got to work. He made his way inside the top 10 before getting caught in a tangle with Aaron Plessinger and Benny Bloss that sent him to the ground again. After re-entering the race at the tail end of the field, Blake raced back to a 14th place finish at the checkered flag.
"It was a tough one tonight. I had a great start in the main and I was feeling it tonight. My speed was there and had the track dialed. It was unfortunate I went down while battling for the lead, as I feel we had a chance at the win. We leave here healthy and look forward to next week at Glendale and the triple crown."
Justin Bogle timed the gate drop perfectly in his heat race and led the field down the start straight, rounding the first turn in 3rd place. Justin had the speed, but still came under attack from Jason Anderson. The two went back and forth before Justin slid back a spot. He kept his speed, though, transferring to the main with an 8th place. In the main, Justin wasn't able to get the start he needed. He was buried at the tail end of the pack early in the race. With a lot of work to do, Justin kept his head down and charged through the field to finish in 16th place at the checkered flag.
"Tonight was a struggle. I didn't have the flow all night and it's frustrating because I know I should be running at the front. We will make some changes, keep making progress and come out swinging at Glendale next weekend."
The team will now head to State Farm Stadium in Glendale, AZ for round four of the 2020 Monster Energy Supercross Series – the first triple crown race of the season.
Anaheim 2 SX Team Photo Gallery Below:
2020 Team Sponsors:
Rocky Mountain ATV/MC, KTM, WPS, Fly Racing, Palmetto Motosports, FMF Racing, Engine Ice, Alpinestars, ODI, Galfer, Motorex, ASV, Acerbis, GEAR Alloy, JStar Auto Group, SoCal SuperTrucks, HBD MotoGrafx, XYO Network, Seat Concepts, 100%, Sunstar, Regina Chains, WP Factory Services, Giant, Hinson Racing, ETS Fuels, DT-1, Dunlop, Motion Pro, Talon, Dubya, Excel, FCP, Loenbro, Ogio, Hurly, Beta Tools, ProPegs, Xtrig, San Diego Powder and Protective Coatings
source $(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/testenv.sh \
|| { echo "testenv.sh not found!" >&2; exit 1; }
create_and_cd_client
put_bazel_on_path
write_default_bazelrc
#### SETUP #############################################################
set -e
function set_up() {
mkdir -p pkg
cat > pkg/true.sh <<EOF
#!/bin/sh
exit 0
EOF
chmod 755 pkg/true.sh
cat > pkg/BUILD <<EOF
sh_test(
name = "true",
srcs = ["true.sh"],
)
EOF
}
#### TESTS #############################################################
function test_basic_progress() {
bazel test --curses=yes --color=yes pkg:true 2>$TEST_log || fail "bazel test failed"
# some progress indicator is shown
expect_log '\[[0-9,]* / [0-9,]*\]'
# something is written in green
expect_log $'\x1b\[32m'
# curses are used to delete at least one line
expect_log $'\x1b\[1A\x1b\[K'
}
function test_line_wrapping() {
bazel test --curses=yes --color=yes --terminal_columns=5 pkg:true 2>$TEST_log || fail "bazel test failed"
# curses are used to delete at least one line
expect_log $'\x1b\[1A\x1b\[K'
# something is written in green
expect_log $'\x1b\[32m'
# lines are wrapped, hence at least one line should end with backslash
expect_log '\\'$'\r''$'
}
function test_noline_wrapping_color_nocurses() {
bazel test --curses=no --color=yes --terminal_columns=5 pkg:true 2>$TEST_log || fail "bazel test failed"
# something is written in green
expect_log $'\x1b\[32m'
# no lines are deleted
expect_not_log $'\x1b\[K'
# as no line wrapping occurs, no backslash should appear before a carriage return
expect_not_log '\\'$'\r'
}
run_suite "Basic integration tests for the standard UI"
\section{Introduction}\label{intro}
The one-dimensional, time-independent Schr\"{o}dinger equation\\
\bea
-\ddx{\Psi(x)}+V(x)\Psi(x) &=& E\Psi(x), \nn \\ \nn
\eea
with $\Psi(x) \in L^2(\cC)$ ({\it i.e.} $\Psi(x)$ is square integrable over some given contour $\cC$ in the complex plane), is an eigenvalue problem for the energy $E$. Eigenvalue problems of this type are a natural extension of eigenvalue problems on the real line \cite{Bender:1992bk}, and indeed, the usual requirement that $\Psi(x) \in L^2(\cR)$ just corresponds to a particular choice of contour. \\
In recent years, there has been considerable interest in the study of such eigenvalue problems associated with Hamiltonians that have the property of $\mathcal{PT}$ symmetry \cite{MR1627442, MR1686605, MR1742959}. The $\mathcal{PT}$ operator is an anti-linear operator corresponding to parity reflection and time reversal: \\
\bea
\mathcal{P} \Phi(x)=\Phi(-x) \nn \\[0.2cm]
\mathcal{T} \Phi(x)=\Phi^*(x) \nn \\ \nn
\eea
A reason for the interest in $\pt$ symmetry is that the spectrum of $\mathcal{PT}$ symmetric Hamiltonians sometimes appears to be purely real. In general, it can be shown that the eigenvalues of these Hamiltonians will be real, or occur in complex conjugate pairs. A connection has also been recognised between these types of differential equations and the field of integrable models (the `ODE/IM correspondence') \cite{Dorey:1998pt,MR1832065,Dorey:1999uk,MR1832082,Dorey:2000kq,Dorey:2004ta}. \\ \\ \\ \\
In this paper we will consider a particular $\pt$ symmetric eigenvalue problem\\
\bea \label{eig}
\left[ -\frac{d^2}{dx^2}+x^6+\ga x^2+\frac{l(l+1)}{x^2} \right] \Psi(x) &=& \gl \Psi(x),\hspace{1cm} \Psi(x) \in L^2(\mathcal{C}) \\ \nn
\eea
This is the $M=3$ case of the family of equations\\
\bea
\left[ -\frac{d^2}{dx^2}-(ix)^{2M}-\ga (ix)^{M-1}+\frac{l(l+1)}{x^2} \right] \Psi(x) &=& \gl \Psi(x),\hspace{1cm} \Psi(x) \in L^2(\mathcal{C}) \nn \\[0.2cm] \nn
\eea
discussed in \cite{MR1857169,Dorey:2001hi}, with the contour $\cC$ joining $|x|=\infty$ in the Stokes sectors $\cS_{-1}$ and $\cS_{1}$ (Figure \ref{contour}). \\[2.8cm]
\begin{figure}[h!]
\begin{center}
\resizebox{5cm}{!}{\includegraphics{stokesx8a}}
\caption{{\bf The quantisation contour, $\mathcal{C}$ ($M=3$)} {\em The Stokes structure is shown in the limit of large $|x|$, without showing the detailed structure near the turning points.}\label{contour}}
\end{center}
\end{figure}\\
It was shown in \cite{MR1857169,Dorey:2001hi} that this equation, with these boundary conditions, has a spectrum which is:
\begin{center}
\hspace{0.8cm} {\it real} \hspace{0.2cm} if $\ga <M+1+|2l+1|$ \hspace{1cm} [Regions $B,C,D$]\\
{\it positive} if $\ga<M+1-|2l+1|$ \hspace{1cm} [Region D] .
\end{center}
However, for $\ga >M+1+|2l+1|$ (Region $A$), complex energy levels may be found. For the $M=3$ case, these four regions in the $(\ga,l)$ plane are shown in Figure \ref{domain0}, separated by the dashed lines.
\begin{figure}[h!]
\begin{center}
\resizebox{6.9cm}{!}{\includegraphics{domain0}}
\caption{{\bf Regions of the $\ga$ - $l$ plane}.\hspace{0.5cm}{\em In} \cite{MR1857169}, {\em Dorey et al proved the reality of the spectrum for $(\ga,l) \in B \cup C \cup D$, and positivity for $(\ga,l) \in D$. The dotted lines show $\ga_{\pm} \in \Z$}\label{domain0}.}
\end{center}
\end{figure}\\
Also shown in \cite{MR1857169,Dorey:2001hi} was an interesting structure to the boundary of the region, within $A$, where eigenvalues become complex (the `domain of unreality'), notably the appearance of `cusps' on this boundary (Figure \ref{scan}).\\ \\
\begin{figure}[h!]
\begin{center}
\resizebox{6cm}{!}{\includegraphics{scan}}
\caption{{\bf Domain of unreality}.\hspace{0.5cm}{\em The outline of the region where complex eigenvalues first appear, as shown in} \cite{MR1857169,Dorey:2001hi}.\label{scan}}
\end{center}
\end{figure}\\
The remainder of this paper will be structured as follows: Section \ref{wkb} deals with the complex WKB method, and how quantisation conditions derived using this method are able to account for the results found in \cite{MR1857169,Dorey:2001hi}. In particular, in \ref{wkb1}, we give a brief summary of the complex WKB method and its use. Section \ref{qc} explains how this method can be used to obtain quantisation conditions for the energy eigenvalues, and in \ref{qcprob} we describe how different quantisation conditions are applicable in this problem for different values of the parameters $E,\ga$ and $l$. Section \ref{examplesec} gives an example of calculating a quantisation condition for some particular values of the parameters. In section \ref{pos} we demonstrate that the WKB quantisation condition approach is able to explain the positivity of the spectrum in the region found in \cite{MR1857169,Dorey:2001hi}. Section \ref{rea} explains the appearance of complex eigenvalues in terms of the quantisation conditions, in the region where Dorey {\it et al.} found they could occur \cite{MR1857169,Dorey:2001hi}, and we conjecture how complex WKB methods might demonstrate reality. In section \ref{deg} we use the WKB quantisation conditions to calculate the positions of degenerate eigenvalues (and reproduce the boundary of the domain of unreality), and explain the formation of the cusps. Our results are compared to those found in \cite{MR1857169,Dorey:2001hi}. Finally, Section \ref{conc} contains conclusions and a discussion of possible future work. \\ \\
As a brief aside, in \cite{Dorey:2001hi}, Dorey {\it et al.} defined a new set of coordinates on the $(\ga,l)$ plane by
\bea
\ga_{\pm}=\frac{1}{2M+2}[\ga -M-1 \pm (2l+1)] \nn
\eea
which we will also occasionally adopt in order to allow an easier comparison of results with \cite{Dorey:2001hi}. In particular, for the M=3 case dealt with here,
\bea
\ga_{\pm}=\frac{1}{8}[\ga -4 \pm (2l+1)] \nn
\eea
so that
\bea
\ga=4(1+\ga_{+}+\ga_{-}) \mbox{ and } l=\frac{1}{2}[4(\ga_{+}-\ga_{-})-1] \nn
\eea
The dotted lines shown in Figures \ref{domain0} and \ref{scan} indicate $\ga_{\pm} \in \Z$.
\section{Complex WKB}\label{wkb}
\subsection{General Principles}\label{wkb1}
For the one-dimensional, time-independent Schr\"{o}dinger equation
\bea
-\ddx{\Psi(x)}+V(x)\Psi(x) &=& E\Psi(x), \nn
\eea
the first-order WKB approximation (see {\it e.g.} \cite{Griffiths}) states that two solutions to this equation are given asymptotically for $|x| \to \infty $ by
\begin{equation}
\label{wkbapprox}
\Psi_{\pm}(x) \sim \frac{1}{P(x)^{\frac{1}{4}}}\exp\left(\pm i \int_{x_0}^x(P(t))^{\frac{1}{2}}dt\right),\hspace{1cm} P(x)=E-V(x).
\end{equation}
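The form of (\ref{wkbapprox}) follows from a standard derivation, recalled here for completeness: substituting the ansatz $\Psi(x)=\exp(iS(x))$ into the Schr\"{o}dinger equation gives $(S^\prime)^2-iS^{\prime\prime}=P(x)$, and expanding $S=S_0+S_1+\dots$ order by order yields
\bea
(S_0^\prime)^2=P(x) &\Rightarrow& S_0=\pm\int_{x_0}^x(P(t))^{\frac{1}{2}}dt, \nn \\
2S_0^\prime S_1^\prime-iS_0^{\prime\prime}=0 &\Rightarrow& e^{iS_1}\propto(S_0^\prime)^{-\frac{1}{2}}\propto P(x)^{-\frac{1}{4}}. \nn
\eea
The first (eikonal) equation fixes the phase, and the second (transport) equation produces the $P^{-\frac{1}{4}}$ prefactor.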
The complex WKB method (see {\it e.g.} \cite{Heading:1962,Berry:1972na,Voros83}) involves treating $x$ as a complex variable, and calculating how the asymptotic form of the solutions change as they are traced around the complex plane. The lower bound of the integral $x_0$ will be a zero of $P(x)$ ({\it i.e.} a turning point of the classical problem), and the solutions are valid in some region of the complex plane around $x_0$.\\
The integral in the exponent is, in general, complex-valued. Because it will typically have an imaginary part, the solutions will contain real exponentials. If a solution contains a growing exponential term, it is said to be dominant, while a solution with a decaying exponential is called subdominant. However, if this integral is purely real, then both WKB solutions will be purely oscillatory, with neither dominating the other. Lines can be drawn in the complex plane, emanating from the turning points (zeros), marking the curves where ${\Im m}\left[\int_{x_0}^x(P(t))^{\frac{1}{2}}dt\right]=0$. These are known as {\it anti-Stokes} lines. Anti-Stokes lines mark the borders between sectors of dominant/subdominant behaviour; in a given sector one solution will be dominant, but on crossing an anti-Stokes line to the next sector this behaviour reverses and the solution will be subdominant in the new sector.\\
We can also find the regions where ${\Re e}\left[\int_{x_0}^x(P(t))^{\frac{1}{2}}dt\right]=0$. These are known as {\it Stokes} lines, and are the regions where the solutions are most dominant/subdominant. Knowing the position of these Stokes and anti-Stokes lines is crucial in applying complex WKB methods. \\ \\
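The local geometry of these lines near a turning point follows directly from these definitions. Near a simple zero $x_0$ one has $P(x)\approx C(x-x_0)$ for some constant $C$, so that
\bea
\int_{x_0}^x(P(t))^{\frac{1}{2}}dt \approx \frac{2}{3}C^{\frac{1}{2}}(x-x_0)^{\frac{3}{2}}. \nn
\eea
Writing $x-x_0=re^{i\theta}$ (with $C>0$ for illustration), the imaginary part of the integral vanishes along the three rays $\theta=0,\frac{2\pi}{3},\frac{4\pi}{3}$, and the real part along $\theta=\frac{\pi}{3},\pi,\frac{5\pi}{3}$: three anti-Stokes and three Stokes lines alternate at angular separations of $\frac{\pi}{3}$. The analogous computation for a double zero, $P(x)\approx C(x-x_0)^2$, gives four lines of each type, separated by $\frac{\pi}{4}$. \\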
Beginning with a linear combination of the two solutions at one point of the complex plane, a globally defined solution may be obtained by following several well-established rules. The most important of these for our purposes can be summarised:\\
\begin{enumerate}
\item When crossing an anti-Stokes line, the solutions `exchange dominance', {\it i.e.} a dominant solution becomes subdominant, and vice versa.\\
\item Upon crossing a Stokes line, the coefficient of the subdominant solution changes by an amount proportional to the coefficient of the dominant term.
\begin{center}
{\it i.e.} {\bf new sub. coefficient $=$ old sub. coefficient $+$ $T$ $\times$ old dom. coefficient}\\[0.4cm]
\end{center}
$T$ is called a Stokes multiplier, and depends on the nature of the turning point that the Stokes line originates from. For a Stokes line emanating from a simple turning point the value of the Stokes multiplier $T$ is $i$, for a Stokes line from a double zero, $T=\sqrt{2}i$ (for the first-order approximation).
The coefficient of the dominant term remains unchanged when a Stokes line is crossed.\\
\item The rules given in 1. and 2. refer to a WKB solution defined in terms of a particular turning point crossing a Stokes/anti-Stokes line emanating from that turning point. If it is intended to continue a solution across a line from a different turning point, the WKB solution must first be rewritten in terms of this new zero. To connect solutions defined in terms of different turning points, $x_1$ and $x_2$, use:\\
\begin{center}
$ \exp\left(\pm i \int_{x_1}^x(P(t))^{\frac{1}{2}}dt\right) = \exp\left(\pm i \int_{x_1}^{x_2}(P(t))^{\frac{1}{2}}dt\right) \exp\left(\pm i \int_{x_2}^x(P(t))^{\frac{1}{2}}dt\right).$\\
\end{center}
\end{enumerate}
Note that it is chosen to introduce the discontinuous change in the subdominant coefficient at the Stokes line, so that this discontinuity is small compared to the error in the WKB approximation. \\
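Written out, rule 2 is the linear map
\bea
c_d \to c_d, \hspace{1cm} c_s \to c_s+T\,c_d, \nn
\eea
where $c_d$ and $c_s$ denote the coefficients of the dominant and subdominant solutions. A globally defined solution is obtained by composing such maps with the dominance exchanges of rule 1 and, where turning points change, the reconnection factors of rule 3. \\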
\subsection{WKB Quantisation conditions}\label{qc}
We study the spectrum of this eigenvalue problem via quantisation conditions obtained using the complex WKB method. Briefly, this is done by starting with a WKB solution that is purely subdominant in the Stokes sector $\cS_{-1}$ (Figure \ref{contour}). This solution can then be traced around the complex plane, in an anti-clockwise direction, with new subdominant exponentials appearing as Stokes lines are crossed. The new subdominant contributions will have a coefficient of $i$ (the Stokes multiplier associated with line from a simple turning point) or $\sqrt{2}i$ (the Stokes multiplier associated with line from a second order zero) multiplied by the coefficient of the dominant exponential at that Stokes line (in accordance with the established complex WKB rules - see for instance \cite{Heading:1962, Berry:1972na}). This leads to a wave function in the Stokes sector $\cS_{1}$ that has both a subdominant and dominant component. Requiring that the coefficient of the dominant solution vanishes (to fulfil the boundary conditions) leads to a quantisation condition for the energy. This is an approach used in \cite{BenderBerry:2001wk} to investigate the appearance of complex conjugate eigenvalues in another $\pt$ symmetric problem. An explicit example of this type of calculation is shown in Section \ref{examplesec}.\\
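For orientation, it is worth recalling the outcome of this procedure in the simplest setting of two simple turning points $x_1<x_2$ on the real line with $P>0$ between them (see {\it e.g.} \cite{Heading:1962}): requiring the dominant coefficient to vanish reproduces the familiar condition
\bea
\int_{x_1}^{x_2}(P(t))^{\frac{1}{2}}dt=\left(n+\frac{1}{2}\right)\pi, \hspace{1cm} n=0,1,2,\dots \nn
\eea
For $V(x)=x^2$ the integral evaluates to $\frac{\pi E}{2}$, giving $E_n=2n+1$, the exact harmonic oscillator spectrum in these units. The present problem is more involved only in that several turning points, and hence several action integrals and Stokes multipliers, enter the condition. \\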
\subsection{WKB Quantisation conditions for this eigenvalue problem}\label{qcprob}
We now specialise our discussion to the eigenvalue problem of eq. (\ref{eig}). Complications arise in this problem because many different arrangements of turning points, and hence different topologies of Stokes and anti-Stokes lines, are found for different values of the parameters in the equation. This leads to different quantisation conditions being required for different ranges of values of $\ga$, $l$ and $E$. These different possible arrangements are summarised in Table \ref{plus} and Table \ref{minus}. Table \ref{plus} refers to positive values of $\ga$, and Table \ref{minus} to negative. Each table is split into five rows, by different ranges of values of $l$, relative to the key values $l=0$ and $l=l^\prime$. The remainder of this section will now be used to explain the origin of this $l^\prime$, and the way the different Stokes structures are organised according to the parameter values.\\ \\
As energy is increased (or decreased) from $0$, pairs of turning points move together, until an energy value is reached at which a double zero is found. The double zero then splits, and the two turning points move off again, perpendicular to their original path. (See Figure \ref{config}).\\[0.8 cm]
\begin{figure}[h!]
\begin{center}
\resizebox{17cm}{!}{\includegraphics{config1}}
\end{center}
\caption{{\bf Changes in turning point configuration occurring with increasing energy.} {\em Example shown is for $\ga =3, l=0.5$. This corresponds to the first row in Table \ref{plus}. $E^\prime$ is the energy value at which the zeros coalesce to form a double zero. $E^{\prime\prime}$ is the energy value at which the outer of the zeros in the horizontal line `line up' vertically with the outer zeros. Both $E^\prime$ and $E^{\prime\prime}$ can be expressed as a function of $\ga$ and $l$. For these values of $\ga$ and $l$ we have $E^\prime \approx 3.1075, E^{\prime\prime} \approx 3.5905$.} \label{config}}
\end{figure}\\
Different Stokes structures are obtained according to where this coalescence occurs in respect to the other turning points. {\it i.e.} new topologies are obtained depending on whether the double zeros occur within, outside of, or directly in line with the other turning points. This can be easily seen by looking at Figure \ref{configl} (which corresponds to looking down the $E=E^\prime$ column of Table \ref{plus}); In the first entry the double zeros appear inside the other four zeros, in the second they line up vertically with the other zeros and in the third the double zeros are outside the box formed by the remaining zeros. \\
\begin{figure}[h!]
\begin{center}
\resizebox{12cm}{!}{\includegraphics{configl}}
\caption{{\bf Relative positions of double zeros.} {\em $\ga >0, E=E^\prime$. This corresponds to the $E=E^\prime$ column in Table \ref{plus}. } \label{configl}}\hspace{0.5cm}
\end{center}
\end{figure}\\
For a given value of the parameter $\ga$, the value of $l$ required for the zeros that coalesce to line up with the others, (which we will denote $l^\prime$), is given by
\bea \label{l}
l^\prime= -\frac{1}{2}+\frac{\sqrt{1+\ga^2}}{2}
\eea
If $l$ is less than this $l^\prime$, turning points move together within the remaining turning points (in a horizontal sense for positive energies and vertical sense for negative energies). For $l>l^\prime$, the turning points that coalesce are positioned outside of the others (again in a horizontal sense for $E>0$ and vertical for $E<0$). \\
Another important value of $l$ where the behaviour changes is $l=0$. For $l=0$ and $l<0$, different sequences of arrangements occur, and these are shown in the second and third blocks of Tables \ref{plus} and \ref{minus}. Obviously for $l=0$ (and $l=-1$), the order $x^{-2}$ term vanishes from the potential, and there are now only six zeros, instead of eight. \\
\begin{landscape}
\thispagestyle{empty}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{table}
\small
\caption{\bf Arrangements of Turning Points, $\ga>0$ \label{plus}}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline \hline
\scriptsize Region & {\bf } & \scriptsize {\bf $E<-E^{\prime\prime}$} & \scriptsize {\bf $E=-E^{\prime\prime}$} & \scriptsize {\bf $-E^{\prime\prime}<E<-E^\prime$} & \scriptsize {\bf $E=-E^\prime$} & \scriptsize {\bf $-E^\prime<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E^\prime$} & \scriptsize {\bf $E=E^\prime$} & \scriptsize {\bf $E^\prime<E<E^{\prime\prime}$} & \scriptsize {\bf $E=E^{\prime\prime}$} & \scriptsize {\bf $E>E^{\prime\prime}$} \footnotemark[1] \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & & & & & \\ \hline
\hline
{\bf $I$} & {\scriptsize \bf $0<l<l^\prime$} & \resizebox{0.7cm}{!}{\includegraphics{top3}} & \resizebox{0.8cm}{!}{\includegraphics{top26}} & \resizebox{0.7cm}{!}{\includegraphics{top25}} & \resizebox{0.8cm}{!}{\includegraphics{top24}} & \resizebox{0.8cm}{!}{\includegraphics{top19}} &\resizebox{0.8cm}{!}{\includegraphics{top22}} & \resizebox{0.8cm}{!}{\includegraphics{top1}} & \resizebox{0.8cm}{!}{\includegraphics{top13}} &\resizebox{0.9cm}{!}{\includegraphics{top5}} &\resizebox{0.7cm}{!}{\includegraphics{top10}} & \resizebox{0.8cm}{!}{\includegraphics{top11}} \\ \hline
\scriptsize {\bf Boundary of $H/I$} & \scriptsize {\bf $l=l^\prime$} & \resizebox{0.7cm}{!}{\includegraphics{top3}} & \footnotemark[2] & \footnotemark[2] & \resizebox{0.7cm}{!}{\includegraphics{top28}} & \resizebox{0.7cm}{!}{\includegraphics{top27}} & \resizebox{0.7cm}{!}{\includegraphics{top18}} & \resizebox{0.7cm}{!}{\includegraphics{top12}} & \resizebox{0.7cm}{!}{\includegraphics{top4}} & \footnotemark[8] & \footnotemark[8] & \resizebox{0.8cm}{!}{\includegraphics{top11}} \\ \hline
{\bf $H$} & \scriptsize {\bf $l>l^\prime$}& \resizebox{0.7cm}{!}{\includegraphics{top3}} & \resizebox{0.7cm}{!}{\includegraphics{top29}} & \resizebox{0.7cm}{!}{\includegraphics{top31}} & \resizebox{0.7cm}{!}{\includegraphics{top30}} & \resizebox{0.8cm}{!}{\includegraphics{top20}} & \resizebox{0.8cm}{!}{\includegraphics{top32}} & \resizebox{0.8cm}{!}{\includegraphics{top2}} & \resizebox{0.8cm}{!}{\includegraphics{top6}} & \resizebox{0.8cm}{!}{\includegraphics{top7}} & \resizebox{0.8cm}{!}{\includegraphics{top8}} & \resizebox{0.8cm}{!}{\includegraphics{top11}} \\ \hline
\end{tabular}
\begin{tabular}{ccc}
\\
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\scriptsize Region & {\bf } & \scriptsize {\bf $E<-E^{\prime\prime\prime}$} & \scriptsize {\bf $E=-E^{\prime\prime\prime}$} & \scriptsize {\bf $-E^{\prime\prime\prime}<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E^{\prime\prime\prime}$} & \scriptsize {\bf $E=E^{\prime\prime\prime}$} & \scriptsize {\bf $E>E^{\prime\prime\prime}$} \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & \\ \hline \hline
{\bf $J$} & \scriptsize {\bf $-\frac{1}{2}<l<0$} & \resizebox{0.7cm}{!}{\includegraphics{top49}} & \resizebox{0.8cm}{!}{\includegraphics{top55}} & \resizebox{0.7cm}{!}{\includegraphics{top58}} & \resizebox{0.8cm}{!}{\includegraphics{top56}} & \resizebox{0.8cm}{!}{\includegraphics{top57}} & \resizebox{0.7cm}{!}{\includegraphics{top54}} & \resizebox{0.8cm}{!}{\includegraphics{top48}}
\\ \hline
\end{tabular}
\begin{tabular}{ccc}
\\
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\scriptsize Region & & \scriptsize {\bf $E<-E^{\prime\prime}$} & \scriptsize {\bf $E=-E^{\prime\prime}$} & \scriptsize {\bf $-E^{\prime\prime}<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E^\prime\prime$} & \scriptsize {\bf $E=E^{\prime\prime}$} & \scriptsize {\bf $E>E^{\prime\prime}$} \footnotemark[1] \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & \\ \hline \hline
\scriptsize {\bf Boundary of $I/J$} & \scriptsize {\bf $l=0$} & \resizebox{0.7cm}{!}{\includegraphics{top42}} & \resizebox{0.8cm}{!}{\includegraphics{top47}} & \resizebox{0.8cm}{!}{\includegraphics{top44}} & \resizebox{0.7cm}{!}{\includegraphics{top43}} & \resizebox{0.7cm}{!}{\includegraphics{top45}} & \resizebox{0.7cm}{!}{\includegraphics{top46}} & \resizebox{0.8cm}{!}{\includegraphics{top41}} \\ \hline
\end{tabular}
\end{center}
\end{table}
\footnotetext[2]{$-E^\prime=-E^{\prime\prime}$ for $l=l^\prime$}
\footnotetext[8]{$E^\prime=E^{\prime\prime}$ for $l=l^\prime$}
\footnotetext[1]{Change in Stokes structure actually occurs at a value of $E$ slightly larger than $E^{\prime\prime}$. Hence the quantisation condition associated with this arrangement does not come into effect exactly at $E=E^{\prime\prime}$}
\end{landscape}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{landscape}
\thispagestyle{empty}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{table}
\small
\caption{\bf Arrangements of Turning Points, $\ga<0$ \label{minus}}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline \hline
\scriptsize Region & {\bf } & \scriptsize {\bf $E<-E^{\prime\prime}$} & \scriptsize {\bf $E=-E^{\prime\prime}$} & \scriptsize {\bf $-E^{\prime\prime}<E<-E^\prime$} & \scriptsize {\bf $E=-E^\prime$} & \scriptsize {\bf $-E^\prime<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E^\prime$} & \scriptsize {\bf $E=E^\prime$} & \scriptsize {\bf $E^\prime<E<E^{\prime\prime}$} & \scriptsize {\bf $E=E^{\prime\prime}$} & \scriptsize {\bf $E>E^{\prime\prime}$} \footnotemark[1] \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & & & & &\\ \hline
\hline
{\bf $E$} & \scriptsize {\bf $0<l<l^\prime$} & \resizebox{0.7cm}{!}{\includegraphics{top3}} & \resizebox{0.7cm}{!}{\includegraphics{top29}} & \resizebox{0.7cm}{!}{\includegraphics{top31}} & \resizebox{0.7cm}{!}{\includegraphics{top16}} & \resizebox{0.7cm}{!}{\includegraphics{top33}} &\resizebox{1cm}{!}{\includegraphics{top23}} & \resizebox{1cm}{!}{\includegraphics{top34}} & \resizebox{1cm}{!}{\includegraphics{top14}} &\resizebox{1cm}{!}{\includegraphics{top9}} &\resizebox{0.8cm}{!}{\includegraphics{top8}} & \resizebox{1cm}{!}{\includegraphics{top11}} \\ \hline
\tiny {\bf Boundary of $E/G$} & \scriptsize {\bf $l=l^\prime$} & \resizebox{0.7cm}{!}{\includegraphics{top3}} & \resizebox{0.7cm}{!}{\includegraphics{top29}} & \resizebox{0.7cm}{!}{\includegraphics{top35}} & \footnotemark[3] & \footnotemark[3] & \resizebox{1cm}{!}{\includegraphics{top21}} & \footnotemark[9] & \footnotemark[9] & \resizebox{1cm}{!}{\includegraphics{top9}} & \resizebox{0.8cm}{!}{\includegraphics{top8}} & \resizebox{1cm}{!}{\includegraphics{top11}} \\ \hline
{\bf $G$ } & \scriptsize {\bf $l>l^\prime$}& \resizebox{0.7cm}{!}{\includegraphics{top3}} & \resizebox{0.7cm}{!}{\includegraphics{top29}} & \resizebox{0.7cm}{!}{\includegraphics{top31}} & \resizebox{0.7cm}{!}{\includegraphics{top30}} & \resizebox{0.7cm}{!}{\includegraphics{top20}} & \resizebox{0.7cm}{!}{\includegraphics{top32}} & \resizebox{0.7cm}{!}{\includegraphics{top2}} & \resizebox{0.7cm}{!}{\includegraphics{top6}} & \resizebox{1cm}{!}{\includegraphics{top7}} & \resizebox{0.8cm}{!}{\includegraphics{top8}} & \resizebox{1cm}{!}{\includegraphics{top11}} \\ \hline
\end{tabular}
\begin{tabular}{ccc}
\\
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\scriptsize Region & {\bf } & \scriptsize {\bf $E<-E^{\prime}$} & \scriptsize {\bf $E=-E^{\prime}$} & \scriptsize {\bf $-E^{\prime}<E<-E_0$} & \scriptsize {\bf $E=E_0$} & \scriptsize {\bf $-E_0<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E_0$} & \scriptsize {\bf $E=E_0$} & \scriptsize {\bf $E_0<E<E^{\prime}$} & \scriptsize {\bf $E=E^{\prime}$} & \scriptsize {\bf $E>E^{\prime}$} \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & & & & & \\ \hline \hline
{\bf $F$} & \scriptsize {\bf $-\frac{1}{2}<l<0$} & \resizebox{0.6cm}{!}{\includegraphics{top49}} & \resizebox{1cm}{!}{\includegraphics{top51}} & \resizebox{0.7cm}{!}{\includegraphics{top63}} & \resizebox{1cm}{!}{\includegraphics{top52}} & \resizebox{0.7cm}{!}{\includegraphics{top61}} & \resizebox{0.7cm}{!}{\includegraphics{top59}} & \resizebox{0.7cm}{!}{\includegraphics{top60}} & \resizebox{0.3cm}{!}{\includegraphics{top65}} & \resizebox{0.7cm}{!}{\includegraphics{top62}} & \resizebox{0.2cm}{!}{\includegraphics{top64}} & \resizebox{0.7cm}{!}{\includegraphics{top48}} \\ \hline
\end{tabular}
\begin{tabular}{ccc}
\\
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\scriptsize Region & {\bf } & \scriptsize {\bf $E<-E^{\prime}$} & \scriptsize {\bf $E=-E^{\prime}$} & \scriptsize {\bf $-E^{\prime}<E<0$} & \scriptsize {\bf $E=0$} & \scriptsize {\bf $0<E<E^\prime$} & \scriptsize {\bf $E=E^{\prime}$} & \scriptsize {\bf $E>E^{\prime}$} \\
\scriptsize (Figure \ref{domain1}) & & & & & & & & \\ \hline \hline
\tiny {\bf Boundary of $E/F$} & \scriptsize {\bf $l=0$} & \resizebox{0.7cm}{!}{\includegraphics{top42}} & \resizebox{0.7cm}{!}{\includegraphics{top40}} & \resizebox{0.7cm}{!}{\includegraphics{top38}} & \resizebox{0.7cm}{!}{\includegraphics{top36}} & \resizebox{0.7cm}{!}{\includegraphics{top37}} & \resizebox{0.7cm}{!}{\includegraphics{top39}} & \resizebox{0.7cm}{!}{\includegraphics{top41}} \\ \hline
\end{tabular}
\end{center}
\end{table}
\footnotetext[3]{$-E^\prime=0$ for $l=l^\prime$}
\footnotetext[9]{$E^\prime=0$ for $l=l^\prime$}
\footnotetext[1]{Change in Stokes structure actually occurs at a value of $E$ slightly larger than $E^{\prime\prime}$. Hence the quantisation condition associated with this arrangement does not come into effect exactly at $E=E^{\prime\prime}$.}
\end{landscape}
\renewcommand{\thefootnote}{\arabic{footnote}}
\noindent To summarise the organisation of these different structures: for a given $\ga$, the sequence of turning point configurations obtained depends on the value of $l$ relative to these key values. This enables us to divide the $(\ga,l)$ plane into sectors according to which sequence of turning point arrangements is valid there. (A sequence, rather than a single arrangement, is obtained because the pattern of zeros also depends on the energy.) \\
The regions of the $(\ga,l)$ plane where different turning point configurations occur are shown in Figure \ref{domain1}. This shows how the plane is divided in terms of these several key values of angular momentum, $l$. The boundaries between these regions (shown as solid lines in Figure \ref{domain1}) are given by the lines $\ga=0$, $l=0$ and $l=\pm l^\prime$. It is also apparent that the problem (\ref{eig}) is symmetric about $l=-\frac{1}{2}$. \\
Note that combinations of these sections approximate sections $A$ - $D$ of Figure \ref{domain0}, and the borders of the latter are included in Figure \ref{domain1} for comparison (dashed lines). Later, we will compare our results with those found by Dorey {\it et al.} \cite{MR1857169,Dorey:2001hi}, and refer back to Figure \ref{domain0}. (The reader may also note that although these regions do not match up perfectly with those found by Dorey {\it et al.}, all results are consistent; using our method, reality and positivity are demonstrated for smaller regions than in \cite{MR1857169,Dorey:2001hi}, so we have a weaker condition overall - this point will be returned to in sections \ref{pos} and \ref{rea}).
\begin{figure}[ht]
\begin{center}
\resizebox{10.95cm}{!}{\includegraphics{domain1}}
\caption{{\bf Regions of the $\ga$ - $l$ plane \label{domain1}}.\hspace{0.5cm}{\em The solid lines show how the plane is split up in terms of different Stokes structures occurring. The dashed lines indicate the boundaries from \cite{MR1857169,Dorey:2001hi}.}}
\end{center}
\end{figure}\\
Tables (\ref{plus}) and (\ref{minus}) are organised so that one first looks at the sign of $\ga$ to decide which table to use (Table \ref{plus} is for positive values of $\ga$), then which of the five rows to use depending on the value of $l$. Each row shows how the arrangement of zeros changes as $E$ is varied. \\
Looking along one of these rows then, it can be seen that change occurs at $E=0$, or one of four different critical energy values (labelled $E^{\prime}$, $E^{\prime\prime}$, $E^{\prime\prime\prime}$ and $E_0$). $E^\prime$ and $E_0$ refer to energies at which turning points come together to form the double zeros described earlier, while $E^{\prime\prime}$ and $E^{\prime\prime\prime}$ are energy values at which turning points line up. Expressions for each of these critical values (as a function of $\ga$ and $l$) have been obtained (though they are mostly unenlightening), and these energies can easily be calculated exactly for any particular values of the other parameters.
\subsubsection{Example of finding WKB quantisation condition}\label{examplesec}
Figure \ref{example} shows the structure of Stokes and anti-Stokes lines in the case $\ga=3$, $l=0.5$ and $E=1$. $l$ in this case is less than the critical value $l^\prime$ ($l^\prime \approx 1.081$ for $\ga=3$), so this refers to section $I$ in Figure \ref{domain1} and Table \ref{plus}. \\ \\
\begin{figure}[ht]
\begin{center}
\resizebox{12cm}{!}{\includegraphics{example2_b_1}}
\caption{{\bf Stokes structure, $\ga=3$, $l=0.5$, $E=1$\label{example}} {\em Stokes lines are shown as solid lines and anti-Stokes lines are the broken lines.}}
\end{center}
\end{figure}\\
In sector $\cS_{-1}$, the WKB wave function proportional to
\bea
(P(x))^{-\frac{1}{4}}\mbox{e}^{i\omega(x_1,x)} \nn
\eea
is subdominant, where
\bea
\omega(a,b)= \int_a^b(P(t))^{\frac{1}{2}}dt =\int_a^b(E-V(t))^{\frac{1}{2}}dt \nn
\eea
and the $x_i$ are the turning points shown in Figure \ref{example}. (In the following, dominance or subdominance of a solution is denoted by $(\bf d)$ or $(\bf s)$). We now continue this solution in an anticlockwise direction, and take account of new contributions appearing at Stokes lines and the dominance changes at anti-Stokes lines:\\ \\
\begin{math}
\displaystyle \mbox{(a)} \hspace{1cm}{ (P(x))^{-\frac{1}{4}}\mbox{e}^{i\omega(x_1,x)} \atop {\bf (s)}} \\ \\
\end{math}
Going from (a) to (b) now involves crossing an anti-Stokes line, so the subdominant solution now becomes dominant. We will suppress the $ (P(x))^{-\frac{1}{4}}$ term in the remainder of the discussion for clarity. \\ \\
\begin{math}
\displaystyle \mbox{(b)} \hspace{1cm} {\mbox{e}^{i\omega(x_1,x)} \atop {\bf (d)}} \\ \\
\end{math}
To pass from region (b) to (c), the Stokes line coming from $x_1$ is crossed. The dominant term remains unchanged, and the subdominant term gains an extra contribution equal to the dominant coefficient multiplied by the Stokes multiplier for this line ($i$).\\ \\
\begin{math}
\displaystyle \mbox{(c)} \hspace{1cm} {\mbox{e}^{i\omega(x_1,x)} \atop {\bf (d)}} {+ \atop }{i\mbox{e}^{-i\omega(x_1,x)}\atop {\bf (s)}} \\ \\
\end{math}
This solution is re-written in terms of the WKB solution from turning point $x_2$.\\ \\
\begin{math}
\displaystyle \hspace*{0.95cm}={\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x)} \atop {\bf (d)}} {+ \atop }{i\mbox{e}^{-i\omega(x_1,x_2)}\mbox{e}^{-i\omega(x_2,x)} \atop {\bf (s)}}\\ \\
\end{math}
Next a Stokes line from $x_2$ is crossed, and so the subdominant term again picks up a contribution proportional to the dominant term. \\ \\
\begin{math}
\displaystyle \mbox{(d)} \hspace{1cm} {\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x)} \atop {\bf (d)}} { + \atop } {i\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x)} \atop {\bf (s)}} \\ \\ \end{math}
This is again written in terms of the WKB solution from $x_3$, in preparation for crossing the Stokes line from that turning point.\\ \\
\begin{math}
\hspace*{0.95cm}={\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x)} \atop {\bf (d)}}{+ \atop }{i\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}\mbox{e}^{-i\omega(x_3,x)} \atop {\bf (s)}}\\ \\
\end{math}
This procedure repeats, as a further Stokes line is crossed.\\ \\
\begin{math}
\mbox{(e)} \hspace{1cm}{\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x)} \atop {\bf (d)}} {+ \atop } {i\left\{\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}+\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\right\}\mbox{e}^{-i\omega(x_3,x)} \atop {\bf (s)}} \\ \\
\hspace*{0.95cm}={\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}\mbox{e}^{i\omega(x_4,x)} \atop {\bf (d)}} \\ \\
\hspace*{1.95cm}{+ \atop }{i\left\{\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}+\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\right\}\mbox{e}^{-i\omega(x_3,x_4)}\mbox{e}^{-i\omega(x_4,x)} \atop {\bf (s)}} \\ \\
\mbox{(f)} \hspace{1cm}{\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}\mbox{e}^{i\omega(x_4,x)} \atop {\bf (d)}} \\ \\
\hspace*{1.95cm} {+ i\left[\left\{\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}+\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\right\}\mbox{e}^{-i\omega(x_3,x_4)} \right. \atop } \\ \\
\hspace*{2.95cm}{ + \atop}{\left. \mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}\right]\mbox{e}^{-i\omega(x_4,x)} \atop {\bf (s)}} \\ \\
\end{math}
Finally, in going from (f) to (g), an anti-Stokes line is crossed again, and dominance is again exchanged.\\ \\
\begin{math}
\displaystyle \mbox{(g)} \hspace{1cm}{\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}\mbox{e}^{i\omega(x_4,x)} \atop {\bf (s)}} \\ \\
\hspace*{1.95cm}{+i\left[\left\{\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}+\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\right\}\mbox{e}^{-i\omega(x_3,x_4)} \right. \atop } \\ \\
\hspace*{2.95cm}{+ \atop }{\left. \mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}\right]\mbox{e}^{-i\omega(x_4,x)} \atop {\bf (d)}} \\ \\
\end{math}
So, for the solution to be purely subdominant at (g) (sector $\cS_{1}$), and hence satisfy the boundary condition, we require
\bea
\displaystyle \left\{\left[\mbox{e}^{-i\omega(x_1,x_2)}+\mbox{e}^{i\omega(x_1,x_2)}\right]\mbox{e}^{-i\omega(x_2,x_3)}+\mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\right\}\mbox{e}^{-i\omega(x_3,x_4)} + \mbox{e}^{i\omega(x_1,x_2)}\mbox{e}^{i\omega(x_2,x_3)}\mbox{e}^{i\omega(x_3,x_4)}=0 \nn \\ \nn
\eea
Writing $\omega(x_1,x_2)=U+iV=(\omega(x_3,x_4))^*$ and $\omega(x_2,x_3)=W$, with $U$, $V$ and $W$ real ($\omega(x_2,x_3)$ can be seen to be purely real, as $x_2$ and $x_3$ are joined by an anti-Stokes line), and following the example of \cite{BenderBerry:2001wk}, this simplifies to:
\bea
E \mbox{ eigenvalue} \iff 2\cos(2U+W) + 2\mbox{e}^{-2V}\cos(W) &=& 0 \nn
\eea
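The simplification can be checked numerically. The sketch below (in JavaScript, with arbitrary sample values of $U$, $V$ and $W$ rather than values computed from the actual WKB integrals) evaluates the full expression from the quantisation condition using a minimal complex-arithmetic helper and compares it with $2\cos(2U+W)+2\mbox{e}^{-2V}\cos(W)$:

```javascript
// Check that the long expression above reduces to 2cos(2U+W) + 2e^{-2V}cos(W).
// Complex numbers are represented as [re, im]; U, V, W are arbitrary samples.
function cmul(a, b) { return [a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]]; }
function cadd(a, b) { return [a[0] + b[0], a[1] + b[1]]; }
function neg(z)     { return [-z[0], -z[1]]; }
// exp(i z) for complex z = zr + i zi equals e^{-zi} (cos zr + i sin zr)
function cexpi(z) {
  var r = Math.exp(-z[1]);
  return [r * Math.cos(z[0]), r * Math.sin(z[0])];
}

var U = 0.7, V = 0.3, W = 1.9;
var w12 = [U, V];    // omega(x1,x2) = U + iV
var w23 = [W, 0];    // omega(x2,x3) = W, purely real
var w34 = [U, -V];   // omega(x3,x4) = conjugate of omega(x1,x2)

// {[e^{-i w12} + e^{i w12}] e^{-i w23} + e^{i w12} e^{i w23}} e^{-i w34}
//   + e^{i w12} e^{i w23} e^{i w34}
var lhs = cadd(
  cmul(cadd(cmul(cadd(cexpi(neg(w12)), cexpi(w12)), cexpi(neg(w23))),
            cmul(cexpi(w12), cexpi(w23))),
       cexpi(neg(w34))),
  cmul(cmul(cexpi(w12), cexpi(w23)), cexpi(w34)));

var rhs = 2 * Math.cos(2 * U + W) + 2 * Math.exp(-2 * V) * Math.cos(W);
console.log(Math.abs(lhs[0] - rhs) < 1e-12, Math.abs(lhs[1]) < 1e-12); // true true
```

The real part reproduces the simplified condition and the imaginary part vanishes, confirming the algebra.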
As explained earlier, the form of the quantisation condition changes depending on the parameters $\ga$ and $l$, as well as changing as the energy is increased, moving along a sequence. We now define, piecewise, an overall quantisation condition function, which we will denote $Q(E,\ga,l)$; {\it e.g.}, as we have just seen, $Q(E,\ga,l)= 2\cos(2U+W) + 2\mbox{e}^{-2V}\cos(W)$ for $0<E<E^\prime$, $\ga>0$, $0<l<l^\prime$. Similarly, $Q(E,\ga,l)$ is defined to be the quantisation condition obtained for each of the other sets of parameter values. Finding eigenvalues is then a matter of finding the zeros of this function.
\subsection{Positivity of Spectra from WKB Quantisation Conditions}\label{pos}
The sections $E$ and $F$ in Figure \ref{domain1} form an area that is completely contained in the sector where positivity was proved in \cite{MR1857169} (bounded by the red lines in Figure \ref{domain1}, or section D in Figure \ref{domain0}).\\
WKB analysis of $E$ and $F$ shows that no quantisation condition can exist for negative energies. In fact, a viable quantisation condition is only found for $E>E^\prime$. For energy values lower than this, demanding that the dominant wave function vanishes in the Stokes sector $\cS_{1}$ also means that the subdominant wave function will vanish; there is no way to have a WKB solution that is purely subdominant in sectors $\cS_{-1}$ and $\cS_{1}$, and hence there are no eigenvalues.
\subsection{Conjecture on Reality of Spectra from WKB Quantisation Conditions}\label{rea}
In \cite{BenderBerry:2001wk}, the appearance of complex eigenvalues was explained in terms of a WKB quantisation condition. In the paper, Bender {\it et al.} obtained a WKB quantisation condition for the $\pt$ symmetric potential
\bea
V(x)=x^4+iAx. \nn
\eea
The condition found was:
\bea
E \mbox{ eigenvalue} \iff \cos(2U)+\frac{1}{2}\mbox{e}^{-2V}=0, \nn
\eea
\bea
\mbox{where } U=\Re e \left(\int_{x_1}^{x_3}(E-V(t))^{\frac{1}{2}}dt\right), \hspace{0.5cm} V= \Im m\left(\int_{x_1}^{x_3}(E-V(t))^{\frac{1}{2}}dt\right) \nn
\eea
This condition, consisting of the sum of an exponential term and an oscillatory term, can lead to real eigenvalues, provided that the magnitude of the exponential term does not exceed $1$. If parameters are varied, leading to an increase in magnitude of the exponential term beyond $1$, then no real solutions are possible. In effect, real eigenvalues change in value as the parameter $A$ is varied continuously, then `pair off' and disappear as the exponential term becomes larger. This can be seen explicitly by plotting the value of the quantisation condition function against energy for a particular value of the parameter $A$, then looking at how this plot changes with different values of $A$. As the value of the exponential term increases with changing $A$, local minima in the plot can be seen to be pulled up through zero, leading to degenerate eigenvalues and complex conjugate eigenvalues (see Figure \ref{qmin}).
\begin{figure}[ht]
\begin{center}
\subfigure[]
{\label{qua}
\resizebox{5.1cm}{!}{\includegraphics{Qmin4}}} \hspace{0.5cm}
\subfigure[]
{\label{qub}
\resizebox{5.1cm}{!}{\includegraphics{Qmin2}}} \hspace{0.5cm}
\subfigure[]
{\label{quc}
\resizebox{5.1cm}{!}{\includegraphics{Qmin3}}}
\caption{{\bf Quantisation condition function plotted against energy} {\em \ref{qua} corresponds to 2 real eigenvalues, \ref{qub} to real, degenerate eigenvalues and \ref{quc} to complex conjugate eigenvalues. (Referring to two lowest energy eigenvalues shown).}\label{qmin}}
\end{center}
\end{figure}\\
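The mechanism can be illustrated with a toy condition that keeps only the structure of the result in \cite{BenderBerry:2001wk}: the JavaScript sketch below uses $Q(E)=\cos(E)+c$, where the constant $c$ stands in for the exponential term (this is purely illustrative; in the actual condition $U$ and $V$ are energy-dependent integrals). Counting sign changes of $Q$ shows the real roots disappearing in pairs once the `exponential' contribution exceeds $1$:

```javascript
// Toy quantisation condition Q(E) = cos(E) + c; c mimics the exponential term.
// Count sign changes of Q on (0, 4*pi) as a proxy for real eigenvalues.
function countRoots(c) {
  var n = 0, prev = Math.cos(0) + c;
  for (var E = 0.001; E < 4 * Math.PI; E += 0.001) {
    var cur = Math.cos(E) + c;
    if (prev * cur < 0) n++;   // a sign change brackets a real root
    prev = cur;
  }
  return n;
}
console.log(countRoots(0.5)); // 4: two pairs of real roots
console.log(countRoots(1.5)); // 0: minima pulled above zero, pairs have vanished
```

As $c$ passes through $1$, each local minimum of $Q$ is pulled up through zero and a pair of real roots merges and disappears, exactly the behaviour described above.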
The WKB quantisation conditions found in regions $I$ and $J$ of Figure \ref{domain1}, (which are slightly larger than the region $A$ (Figure \ref{domain0}) discussed in \cite{MR1857169,Dorey:2001hi}), were found to be similar in form to that obtained in \cite{BenderBerry:2001wk} ({\it i.e.} containing an exponential and oscillatory term).\\
With these conditions the effect described above can occur, where pairs of real eigenvalues move together and become degenerate as the parameters change, due to the effect of the exponential term(s) in the quantisation condition. \\
An example of this is seen by looking at the quantisation conditions obtained for Region $I$ (Table \ref{plus}). In section \ref{examplesec} we showed that the quantisation condition for region $I$ ($\ga>0,0<l<l^\prime$) with $0<E<E^\prime$ is
\bea
E \mbox{ eigenvalue} \iff 2\cos(2U+W) + 2\mbox{e}^{-2V}\cos(W) &=& 0 \nn
\eea
Plotting this quantisation condition function against energy, for various values of $\ga$ and $l$, it can be seen that varying the parameters can again lead to the exponential term increasing in magnitude so that it begins to dominate the oscillatory term, and pulls the function through zero. One important difference to \cite{BenderBerry:2001wk} is that the exponential term in the condition is multiplied by a cosine term. This makes it harder than in \cite{BenderBerry:2001wk} to pin down analytically the boundary in the parameter space where degenerate eigenvalues may occur. (Hence we can only restrict the domain of unreality to the regions where particular quantisation conditions are valid, and do not, as yet, make a more detailed study of these regions - {\it i.e.} we only say that complex conjugate eigenvalues exist somewhere in this region, and do not analytically calculate the particular area inside this region in which real eigenvalues are not possible (c.f. \cite{BenderBerry:2001wk})). \\
This cosine term leads to another difference in the quantisation condition function plots; because this term varies between $\pm 1$, the exponential term in the condition will be positive for some values of $\ga$, $l$, $E$ and negative for others. Degenerate eigenvalues can now come about through a local minimum being pulled up through zero, or a local maximum being pulled down; see Figure \ref{qminmax}.\\\\
\begin{figure}[ht]
\begin{center}
\subfigure[]
{\label{qumin}
\resizebox{6.8cm}{!}{\includegraphics{Qmin1}}} \hspace{0.5cm}
\subfigure[]
{\label{qumax}
\resizebox{6.8cm}{!}{\includegraphics{Qmax1}}}
\caption{{\bf Quantisation condition function, }{\em illustrating degeneracies arising from a minimum being pulled up, and a maximum being pulled down.}\label{qminmax}}
\end{center}
\end{figure}\\\\
In all other regions, the quantisation conditions do not contain exponential terms. The most common quantisation condition found outside of I and J is of the form $\cos (U) =0$, apparently leading to an infinite number of real eigenvalues. We conjecture that the only complex eigenvalues found for this problem are the ones that come about in the manner described earlier, whereby the exponential term allows for `pulling through' of the quantisation condition function. If it is true, as seems to be the case numerically, that this is the only means of complex eigenvalues coming into existence, then any quantisation condition lacking an exponential term (or at least a sum involving two similar sized terms) will not lead to complex eigenvalues. Hence our conjecture leads to the conclusion that, in Figure \ref{domain1}, the spectrum will be entirely real below the blue lines in the upper part of the plane. This is a weaker condition than that found in \cite{MR1857169}, where reality was established for all parameter values below the red lines in this upper part of the plane. \\
(Note that there is one exception to the statement that the quantisation condition does not contain exponential terms outside of $I$ and $J$: the case $l>0$, $E>E^{\prime\prime}$, {\it i.e.} high energy values in regions $E$, $G$, $H$ and $I$. The quantisation condition found here is of the same form, $2\cos(2U+W) + 2\mbox{e}^{-2V}\cos(W) = 0$, but this condition only comes into effect for $E>E^{\prime\prime}$, where it is seen numerically that $V$ is relatively large and positive. The exponential term is then approximately zero, the pairing off of eigenvalues cannot occur, and so again only real eigenvalues are possible.)
\subsection{WKB explanation for appearance of degeneracies and cusps}\label{deg}
The pattern of cusps in the degeneracy curves described in \cite{Dorey:2001hi} is manifest from the WKB approach via the quantisation conditions. The position of degenerate eigenvalues can be found numerically by searching for the parameter values where a local maximum (minimum) in the quantisation condition passes through $0$. This is done by combining a simple maximum (minimum) finding algorithm with a bisection algorithm, to find when the maximum (minimum) value of the quantisation condition function is $0$. In this way, a plot of the locations of degenerate eigenvalues can be found (Figure \ref{cuspa}).\\
Figure \ref{cuspa} shows the pattern of curves, meeting at cusps, as seen in Figure \ref{scan}, first shown by Dorey {\it et al.} \cite{MR1857169}. Note that this figure is different to Figure \ref{scan}, as it shows the cusp pattern repeating itself as further eigenvalues join up and then split into complex conjugate pairs. The results from the complex WKB quantisation conditions are shown as the crosses, overlaying the results from the method used in \cite{MR1857169,Dorey:2001hi}. It can be seen that there is extremely good agreement between the degeneracy curves found via these two methods. We are extremely grateful to Patrick Dorey, Clare Dunning, Anna Lishman and Roberto Tateo for sharing their results prior to publication \cite{Dorey:unpub, Dorey:unpub2}.
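The numerical search just described can be sketched as follows (JavaScript, again on the illustrative toy condition $Q(E)=\cos(E)+c$ rather than the actual quantisation condition): a simple scan locates the relevant local minimum, and bisection in the parameter $c$ finds the value at which that minimum touches zero, i.e. the parameter value of the degeneracy.

```javascript
// Locate a degeneracy: bisect in the parameter c until the local minimum
// of Q(E; c) = cos(E) + c (scanned near E = pi) passes through zero.
function localMin(c) {
  var best = Infinity;
  for (var E = 2; E <= 4.5; E += 1e-4) {
    var q = Math.cos(E) + c;
    if (q < best) best = q;
  }
  return best;
}
var lo = 0, hi = 2;            // localMin(0) < 0 < localMin(2)
for (var i = 0; i < 50; i++) {
  var mid = (lo + hi) / 2;
  if (localMin(mid) < 0) lo = mid; else hi = mid;
}
console.log(lo.toFixed(6));    // 1.000000: the minimum touches zero at c = 1
```

For the real problem, the same bisection would run over $\ga$ or $l$ with $Q(E,\ga,l)$ in place of the toy function.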
\begin{figure}[!ht]
\begin{center}
\resizebox{13.8cm}{!}{\includegraphics{cuspa}}
\caption{{\bf Degenerate eigenvalues} {\em The positions of degenerate eigenvalues found using WKB quantisation conditions (crosses) and those found with the methods of \cite{MR1857169,Dorey:2001hi} (lines).}\label{cuspa}}
\end{center}
\end{figure}\\
To explain how the quantisation condition method accounts for the formation of the cusps in the diagram, consider the diamond-shaped area of the $(\ga,l)$ plane bordered by $0<\ga_{-}<1$ and $0<\ga_{+}<1$. In this region, for a fixed value of $\ga_{+}$, with $\ga_{-}$ being increased from $0$ to $1$, degeneracies are seen as a local minimum of the quantisation condition function being pulled up through zero. In the adjoining diamond-shaped area, $0<\ga_{-}<1$ and $1<\ga_{+}<2$, degeneracies occur as a local maximum is pulled down through zero. In $0<\ga_{-}<1$ and $2<\ga_{+}<3$, it is again a minimum, and so on (Figure \ref{domainf}).
\begin{figure}[!ht]
\begin{center}
\resizebox{8cm}{!}{\includegraphics{domainf}}
\caption{{\bf Degeneracies being formed from minima or maxima of $Q(E,\ga,l)$} {\em On the edge of one diamond-shaped region, degeneracies occur as either a minimum or a maximum. In the neighbouring diamond this is swapped.}\label{domainf}}
\end{center}
\end{figure}\\
On the boundaries of these regions, {\it i.e.} $\ga_{\pm} \in \Z^{+}$, the degeneracies occur as the limit of a local minimum and a local maximum of the quantisation condition function meeting at $Q(E,\ga,l)=0$, {\it i.e.} at a point of inflection of the quantisation condition function. These correspond to the cusps in \cite{Dorey:2001hi}.\\
\begin{figure}[ht]
\begin{center}
\subfigure[]
{\label{qinf1}
\resizebox{3.6cm}{!}{\includegraphics{Qinf4}}} \hspace{0.5cm}
\subfigure[]
{\label{qinf2}
\resizebox{3.6cm}{!}{\includegraphics{Qinf2}}} \hspace{0.5cm}
\subfigure[]
{\label{qinf3}
\resizebox{3.6cm}{!}{\includegraphics{Qinf3}}} \hspace{0.5cm}
\subfigure[]
{\label{qinf4}
\resizebox{3.6cm}{!}{\includegraphics{Qinf5}}}
\caption{{\bf Quantisation condition function, parameter values close to inflection point.}\label{qinf}}
\end{center}
\end{figure}\\
An example is shown in Figure \ref{qinf}. \ref{qinf1} shows the behaviour of $Q$ with parameter values approaching those for which a point of inflection occurs. It can be seen that both a local maximum and local minimum are close to $Q=0$. Figure \ref{qinf2} corresponds to a degeneracy for $\ga_{+}$ slightly less than $2$, ($\ga_{-}\approx 1$). Here the minimum has just been pulled up to $Q=0$. \ref{qinf3} corresponds to a degeneracy for $\ga_{+}$ just greater than $2$, ($\ga_{-}\approx 1$ still). Here the maximum has just been pulled down. \ref{qinf4} shows $Q$ for the parameter values $\ga_{+}\approx2,\ga_{-}\approx 1$, leading to an inflection point.\\
Numerically, by scanning the $(\ga,l)$ plane for points of inflection, a plot of the positions of these cusps, as predicted by WKB was obtained (Figure \ref{cusps}).
\begin{figure}[!ht]
\begin{center}
\resizebox{7.2cm}{!}{\includegraphics{cusps}}
\caption{{\bf Locations of cusps. \label{cusps}}}
\end{center}
\end{figure}\\
These cusp positions are in approximate agreement with those found by Dorey {\it et al.} in \cite{MR1857169,Dorey:2001hi}.
\begin{section}{Conclusions and Further Directions}\label{conc}
We have shown that the approach of complex WKB quantisation conditions can be used to explain aspects of the spectrum of (\ref{eig}). The problem appears to be more complicated due to the changing of the Stokes structures as parameters are varied, but in fact it is this changing of configurations that provides an explanation for the different behaviour in different regions of the $(\ga,l)$ plane. \\
The $M=3$ case is a useful test case to investigate, as some energy levels can be found exactly, due to the quasi-exact solvability of the related problem discussed in \cite{Dorey:2001hi}. The quantisation condition method has been shown to give excellent agreement in calculating the positions of the degenerate eigenvalues (perhaps surprisingly good, as first order complex WKB is supposed to be a more accurate approximation for higher energy levels, and the pairing off occurs between low-lying energy levels). There should be no reason why the method would not be useful in investigating the spectrum for different values of $M$. Indeed, considering smaller values of $M$ would be expected to be a simpler problem, as fewer zeros of the potential should lead to less complexity in the pattern of Stokes structures obtained. It may be possible to see how the quantisation condition approach changes for general $M$, and gain insight into the way the domain of unreality changes as $M$ is varied.
\end{section}
\begin{subsubsection}*{Acknowledgements}
The author is very grateful to Robert Weston for many helpful ideas and much advice. He would also like to thank Patrick Dorey, Clare Dunning, Chris Howls, Adri Olde Daalhuis, Michael Graham, Katie Russell and Des Johnston for useful discussions. This work was funded by a James Watt scholarship. Partial support was provided by the European TMR network EUCLID (contract number HPRN-CT-2002-00325).
\end{subsubsection}
\newpage
What the CRKN-SAGE Open Access Agreement Means for U of G
Last updated: November 11, 2022 15:35 EST
U of G is participating in a new agreement between SAGE Publishing and the Canadian Research Knowledge Network (CRKN). The CRKN-SAGE Agreement combines elements of a traditional subscription with increased open access to research articles published by authors from CRKN member institutions. The result is a reduced cost to researchers and increased access to Canadian research.
The library is committed to increasing open access options for U of G researchers and this agreement represents a transitional step towards fully open access agreements.
What does this agreement mean for U of G Authors?
The CRKN-SAGE agreement allows U of G authors to publish their work in selected SAGE Journals without cost, or at a discount.
U of G corresponding authors may publish their articles as open access in over 900 SAGE Choice Journals without paying article processing charges (APCs). Some exclusions apply.
For SAGE Gold Open Access journals, U of G authors will receive a 40 per cent discount on APCs.
Articles published through this agreement will become open access. By publishing open access, you are:
Making research more accessible for everyone.
Complying with the Tri-Agency Open Access Policy.
Helping transform the scholarly publishing system into one that is more economically sustainable.
Ensuring the widest possible readership for publicly funded research.
This agreement is available to all U of G authors until the end of 2023.
For more information about SAGE's open access publishing visit: https://us.sagepub.com/en-us/nam/open-access-at-sage.
Not publishing with SAGE? We can help you make your work open. Learn more about how we support open scholarship.
Ask Us.
Q: D3.js histogram bin size increments

I am attempting to create a histogram of ambulance response times (in seconds). The sample code from D3.js works very well. I am able to put together a nice histogram easily. It even converts the response times in seconds to mm:ss format.
What I am trying to accomplish, and need your help with, is this: how to make the bins 60 seconds (1 minute) wide. If you run the following code you will see the bins are in increments of 50 seconds, which is non-intuitive for most people. How would you specify an exact size for the bins? For response times I want the bins to be 1 minute (60 seconds), but for off-loading patients at hospital I would like the bins to be in 5-minute intervals (300 seconds). Whatever the case, I would like to ask for your assistance in making the bins precise values.
Shown below is my code with data:
<!DOCTYPE html>
body {
font: 10px sans-serif;
}
.bar rect {
fill: thistle;
shape-rendering: crispEdges;
}
.bar text {
fill: black;
}
.axis path, .axis line {
fill: none;
stroke: cornsilk;
shape-rendering: crispEdges;
}
var values = [212, 279, 264, 411, 189, 343, 207, 424, 550, 302,
              317, 315, 29, 227, 367, 163, 581, 96, 375, 313,
              548, 570, 329, 269, 953, 238, 195, 183, 384, 353,
              258, 465, 208, 273, 155, 344, 355, 354, 88, 364,
              143, 407, 207, 437, 142, 234, 234, 193, 308, 416,
              445, 327, 293, 327, 232, 319, 209, 498, 236, 427,
              241, 164, 0, 157, 295, 337, 430, 218, 390, 231,
              402, 301, 472, 349, 133, 311, 396, 452, 490, 189,
              282, 297, 296, 413, 102, 219, 190, 371, 390, 454,
              467, 302, 221, 547];
// Formatters for counts and times (converting numbers to Dates).
var formatCount = d3.format(",.0f"),
formatTime = d3.time.format("%H:%M"),
formatMinutes = function (d) {
return formatTime(new Date(2012, 0, 1, 0, d));
};
//this is the positioning of the chart
var margin = {top: 30, right: 30, bottom: 30, left: 30},
width = 960 - margin.left - margin.right,
height = 500 - margin.top - margin.bottom;
var x = d3.scale.linear()
.domain([0, 600])
.range([0, width]);
// These are the number of bins in the histogram.
var data = d3.layout.histogram()
.bins(x.ticks(10))
(values);
var y = d3.scale.linear()
.domain([0, d3.max(data, function (d) {
return d.y;
})])
.range([height, 0]);
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom")
.tickFormat(formatMinutes);
var svg = d3.select("body").append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
var bar = svg.selectAll(".bar")
.data(data)
.enter().append("g")
.attr("class", "bar")
.attr("transform", function (d) {
return "translate(" + x(d.x) + "," + y(d.y) + ")";
});
bar.append("rect")
.attr("x", 1)
.attr("width", x(data[0].dx) - 1)
.attr("height", function (d) {
return height - y(d.y);
});
//this block of code makes the tick values showing how many fall into the bin
bar.append("text")
.attr("dy", ".75em")
.attr("y", 6)
.attr("x", x(data[0].dx) / 2)
.attr("text-anchor", "middle")
.text(function (d) {
return formatCount(d.y);
});
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(0," + height + ")")
.call(xAxis);
</script>
A: I would do it like this. First, manually calculate your ticks:
var ticks = d3.range(0, x.domain()[1] + 1, 60);
This will create an array of:
[0, 60, 120, 180, 240, 300, 360, 420, 480, 540, 600]
Then feed this to d3.layout.histogram:
var data = d3.layout.histogram()
.bins(ticks)
(values);
Finally fix your axis ticks:
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom")
.tickValues(ticks)
.tickFormat(formatMinutes);
All together:
<!DOCTYPE html>
<html>
<head>
<style>
body {
font: 10px sans-serif;
}
.bar rect {
fill: thistle;
shape-rendering: crispEdges;
}
.bar text {
fill: black;
}
.axis path,
.axis line {
fill: none;
stroke: cornsilk;
shape-rendering: crispEdges;
}
</style>
<script data-require="d3@3.5.3" data-semver="3.5.3" src="//cdnjs.cloudflare.com/ajax/libs/d3/3.5.3/d3.js"></script>
</head>
<body>
<script>
var values = [212, 279, 264, 411, 189, 343, 207, 424, 550, 302,
              317, 315, 29, 227, 367, 163, 581, 96, 375, 313,
              548, 570, 329, 269, 953, 238, 195, 183, 384, 353,
              258, 465, 208, 273, 155, 344, 355, 354, 88, 364,
              143, 407, 207, 437, 142, 234, 234, 193, 308, 416,
              445, 327, 293, 327, 232, 319, 209, 498, 236, 427,
              241, 164, 0, 157, 295, 337, 430, 218, 390, 231,
              402, 301, 472, 349, 133, 311, 396, 452, 490, 189,
              282, 297, 296, 413, 102, 219, 190, 371, 390, 454,
              467, 302, 221, 547];
// Formatters for counts and times (converting numbers to Dates).
var formatCount = d3.format(",.0f"),
formatTime = d3.time.format("%H:%M"),
formatMinutes = function(d) {
return formatTime(new Date(2012, 0, 1, 0, d));
};
//this is the positioning of the chart
var margin = {
top: 30,
right: 30,
bottom: 30,
left: 30
},
width = 960 - margin.left - margin.right,
height = 500 - margin.top - margin.bottom;
var x = d3.scale.linear()
.domain([0, 600])
.range([0, width]);
// Explicit bin boundaries at exact 60-second intervals (0, 60, ..., 600).
var ticks = d3.range(0, x.domain()[1] + 1, 60);
var data = d3.layout.histogram()
.bins(ticks)
(values);
var y = d3.scale.linear()
.domain([0, d3.max(data, function(d) {
return d.y;
})])
.range([height, 0]);
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom")
.tickValues(ticks)
.tickFormat(formatMinutes);
var svg = d3.select("body").append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
var bar = svg.selectAll(".bar")
.data(data)
.enter().append("g")
.attr("class", "bar")
.attr("transform", function(d) {
return "translate(" + x(d.x) + "," + y(d.y) + ")";
});
bar.append("rect")
.attr("x", 1)
.attr("width", x(data[0].dx) - 1)
.attr("height", function(d) {
return height - y(d.y);
});
// this block of code adds a label above each bar showing how many values fall into the bin
bar.append("text")
.attr("dy", ".75em")
.attr("y", 6)
.attr("x", x(data[0].dx) / 2)
.attr("text-anchor", "middle")
.text(function(d) {
return formatCount(d.y);
});
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(0," + height + ")")
.call(xAxis);
</script>
</body>
</html>
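If you want to sanity-check the tick values and the time labels without d3, here is a plain-JS sketch of what `d3.range` and the `formatMinutes` helper compute. The function names mirror the snippet above, but these are illustrative reimplementations, not part of the chart code:

```javascript
// Plain-JS equivalent of d3.range: half-open interval [start, stop) in steps.
function range(start, stop, step) {
  var out = [];
  for (var v = start; v < stop; v += step) out.push(v);
  return out;
}

// Plain-JS equivalent of d3.time.format("%H:%M") applied to a minute count,
// e.g. 540 minutes -> "09:00".
function formatMinutes(d) {
  var h = Math.floor(d / 60), m = d % 60;
  return (h < 10 ? "0" + h : "" + h) + ":" + (m < 10 ? "0" + m : "" + m);
}

var ticks = range(0, 600 + 1, 60);
console.log(ticks);              // [0, 60, 120, ..., 600]
console.log(formatMinutes(540)); // "09:00"
```

This makes it easy to confirm that the 60-minute steps produce eleven thresholds (hence ten bins) and that each threshold formats as an hour mark on the axis.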
\section{Introduction}
The Cram\'er-Wold theorem \cite[p.~291]{CW36} states that a probability
measure $P$ on ${\mathbf R}^d$ is uniquely determined by the values it gives
to halfspaces $H_{\omega,p}=\{x\in{\mathbf R}^d;\, x\cdot\omega<p\}$ for
$\omega \in S^{d-1}$ and $p\in{\mathbf R}$. Equivalently, $P$ is uniquely
determined by its one-dimensional projections
$P\pi_{\omega}^{-1}$, where $\pi_{\omega}$ is the projection
${\mathbf R}^d \ni x \mapsto x\cdot\omega \in {\mathbf R}$ for $\omega \in S^{d-1}$.
Moreover, a sequence of probability measures $P_k$ converges weakly
to a probability measure $P$ in the sense that
$\lim_{k\to\infty}\int\varphi\, dP_k=\int\varphi\, dP$ for all bounded
continuous real-valued $\varphi$, if for every $\omega \in S^{d-1}$
$\lim_{k\to\infty}P_k(H_{\omega,p})=P(H_{\omega,p})$ for all but at most
countably many $p \in {\mathbf R}$.
In recent years there has been an interest in analogous theorems in situations where the measure $P$ is not necessarily a probability measure but may have infinite mass near the origin.
Such measures arise for instance as limits of scalings
of probability measures in multivariate extreme value theory
(see e.g.~\cite{R87}) and limit theorems for sums of random
vectors (see e.g.~\cite{MS01});
other examples include L\'evy measures
for infinitely divisible probability distributions and intensity
measures for random measures (see e.g.~\cite{DV88} and \cite{S99}).
If $P$ has infinite mass near the origin, the value of $P(H_{\w,p})$ is of course not defined when the closure of $H_{\w,p}$ contains the origin, and the problem therefore becomes to decide if $P$ is determined by its values on all closed halfspaces contained in ${\mathbf R}^d\sm\{0\}$.
We present three types of extensions of the Cram\'er-Wold theorem in that direction, and we show by examples that the assumptions made cannot be omitted.
Our measures may take both positive and negative values unless
the contrary is explicitly stated.
For a finite (signed) measure $\mu$ with a density
$f \in L^1({\mathbf R}^d)$ we can write
\begin{align}\label{halfspacerepr}
\mu(H_{\omega,p})=\int_{-\infty}^p\left(\int_{L_{\omega,r}}f ds\right)dr,
\end{align}
where $ds$ is the Euclidean surface measure on the hyperplane
$L_{\omega,p}=\{x\in{\mathbf R}^d;\,x\cdot\omega=p\}$.
The inner integral in \eqref{halfspacerepr}
is identified as the Radon transform of $f$ evaluated
at the hyperplane $L_{\omega,r}$. Cram\'er-Wold theorems are therefore equivalent to injectivity theorems for the Radon transform.
The extension of the theory of the Radon transform to {\it distributions} $f$ is easy and well known \cite{H80}. Measures are distributions of order zero, and we can therefore without difficulty form the Radon transform of the measures we need to consider.
In particular, an analogue of (\ref{halfspacerepr}) for measures is (\ref{dpmuh}) below.
Since distributions are defined as linear forms on spaces of test functions, it is natural for us to define measures as linear forms on a space of continuous test functions (see Section 2).
Working with measures in the way we do requires a very small part of distribution theory. In a few cases we shall use distributions of higher order than zero. A few facts from distribution theory that may not be well known to our readers are collected in an appendix.
In Section 2 we introduce measures as distributions of order zero and define the Radon transform and other operations on measures.
In Section 3 we present four injectivity theorems for the Radon transform, here called Theorems A -- D. Theorems B -- D treat the case --- often called the exterior Radon transform --- when a function (measure) is to be reconstructed outside a compact, convex set $K$, the case which is in focus in this article. Theorem A, the injectivity theorem for the standard Radon transform, is included for completeness. Theorem B is the well known Helgason support theorem for the Radon transform; here uniqueness is guaranteed by the assumption that the measure is {\it rapidly decaying} at infinity (see definition below). That the rapid decay assumption cannot be omitted is well known: for any integer $m\ge d$ there exist functions $f$, homogeneous of degree $-m$, with Radon transform $Rf(L)=0$ for all hyperplanes $L$ not containing the origin. On the other hand, if $f$ is homogeneous of non-integral degree, then $f$ is uniquely determined by its exterior Radon transform (Theorem C). Theorem D, finally, proves injectivity for the exterior Radon transform if the unknown measure is supported in a closed, convex cone containing no complete straight line.
In Section 4 we begin by presenting four Cram\'er-Wold type uniqueness theorems, Theorems 1 -- 4, parallel to Theorems A -- D, respectively. Theorem 3 occurs in two variants, Theorem 3a and Theorem 3b; in the latter case the unknown measure is assumed to be non-negative and therefore does not have to be assumed homogeneous. The main part of Section 4 presents four Cram\'er-Wold theorems for sequences of measures, Theorems 1$'$ -- 4$'$. The main novelties of our paper are probably Theorem 4 and Theorem 4$'$, which are analogous to Theorem D.
To some extent Theorems 3b and 3$'$ are perhaps also new, although a similar
result has been shown by Basrak, Davis and Mikosch \cite{BDM02}.
The support assumption in Theorems 4 and 4$'$ is often satisfied in applications, with the cone $Q$ being the positive orthant $\{(x_1,\ldots,x_d)\in{\mathbf R}^d;\, x_k\ge 0 \ \text{for} \ k=1,\ldots,d\}$. A particular case concerns sequences of scalings of probability measures (Corollary 2);
this result answers affirmatively the conjecture in \cite{BDM02}.
Examples showing that the assumptions in Theorems B -- D are sharp are given in Section~5. A shorter description of essentially the same examples appeared in \cite{Bo92}, page 28. Those examples show immediately that the assumptions in Theorems 2 -- 4 and 2$'$ -- 4$'$ are sharp. Moreover, choosing $f(x) = q(|x|)h(x)$, where $h$ is a non-trivial solution to $Rh(\w,p) = 0$ in $p\ne 0$ and $q(|x|)$ is a very slowly oscillating radial function, shows that the assertion of Corollary 2 is not true if $\b$ is an integer. A similar example was previously given by Hult and Lindskog \cite{HL061}.
As explained above,
this article presents extensions of the Cram{\'e}r-Wold theorem to
measures that may have infinite mass near the origin.
The extensions build on a number of results and methods concerned
with injectivity properties of the Radon transform, developed
essentially during the 1980s and 1990s.
One purpose of this article is to contribute to making known to
probabilists interesting results for the Radon transform that
have appeared in the mathematical literature.
As is well known, the Radon transform and its generalizations have
been studied extensively after the invention of Computerized Tomography
in the 1970s.
Using a few tools from distribution theory and Fourier analysis
we show that the presented injectivity results for the Radon transform lead to
Cram{\'e}r-Wold type results for measures.
The paper is self-contained and only a minimum of basic tools from
distribution theory and Fourier analysis are needed.
In particular, we have aimed to convince the reader of the usefulness
of some very basic facts from distribution theory for treating problems
occurring in applications of probability theory.
\section{Measures and their Radon transforms}
Let $C_0({\mathbf R}^d)$ be the space of continuous functions on ${\mathbf R}^d$ that tend to zero at infinity, equipped with the supremum norm $\|\cdot\|$. The dual space of $C_0({\mathbf R}^d)$,
the space of continuous linear forms on $C_0({\mathbf R}^d)$, will be denoted $M({\mathbf R}^d)$. This is the space of signed measures with finite total mass.
Throughout the paper a measure is
real-valued, as opposed to non-negative, unless stated otherwise.
The action of a linear form $\mu$ on the test function $\vf$ will be denoted $\sca{\mu}{\vf}$, and the
norm of $\mu \in M({\mathbf R}^d)$ as a linear form will be denoted $\|\mu\|_M$, that is,
$$
\|\mu\|_M = \sup\{|\sca{\mu}{\vf}|;\, \vf \in C_0({\mathbf R}^d), \ \|\vf\| \le 1 \} .
$$
This is the total mass, or total variation norm $|\mu|({\mathbf R}^d)$, of the measure $\mu$. The space $L^1({\mathbf R}^d)$ is identified with a subspace of $M({\mathbf R}^d)$ by $f\in L^1({\mathbf R}^d)$ being identified with the linear form $\vf \mapsto \int f(x) \vf(x) dx$.
The relationship between a measure
$\mu \in M({\mathbf R}^d)$ considered as a set function $\mu(E)$ defined
on the family of Borel
sets and $\mu$ considered as a linear form
$\vf \mapsto \langle\mu,\vf\rangle$
is well known and explained by
the Riesz Representation Theorem
(see e.g.~Theorem 6.19 in \cite{R66}),
which says that to every continuous linear form $\Phi$ on $C_0({\mathbf R}^d)$
there corresponds a unique Borel measure $\mu$ such that
\begin{align}\label{rrt}
\langle\Phi,\vf\rangle=\int_{{\mathbf R}^d}\vf\, d\mu,
\quad \vf\in C_0({\mathbf R}^d),
\end{align}
and $\|\Phi\|_M=|\mu|({\mathbf R}^d)$.
The integral in \eqref{rrt} is defined in any
textbook on measure and integration theory.
Conversely, if the set function $\mu(E)$ is given, then it is clear that \eqref{rrt}
defines a continuous linear form on $C_0({\mathbf R}^d)$.
Any element $\mu\in M({\mathbf R}^d)$ can be uniquely extended as a linear form to the space $C_b({\mathbf R}^d)$ of bounded continuous functions on ${\mathbf R}^d$. This is perhaps most easily seen using the expression $\int \vf\, d\mu$ considering $\mu$ as a set function. If $\mu$ is considered as a linear form on
$C_0({\mathbf R}^d)$, we take a compactly supported continuous function $\chi$ that is equal to $1$ in some neighborhood of the origin and define
\begin{equation}
\sca{\mu}{\vf} = \lim_{A\rar\i} \sca{\mu}{\chi(\cdot/A)\vf(\cdot)} , \quad \vf \in C_b({\mathbf R}^d) .
\end{equation}
It is easy to see that this definition is independent of the choice of $\chi$.
Following Laurent Schwartz \cite{S66} we shall denote by $\mathcal D({\mathbf R}^d)$ the space of $C^{\i}$ functions with compact support. Note that $\mathcal D({\mathbf R}^d)$ is dense in $C_0({\mathbf R}^d)$.
For $f\in L^1({\mathbf R}^d)$ the Radon transform $Rf$ is defined by
$Rf(L) = \int_L f\, ds$, where $ds$ is the Euclidean surface measure on the hyperplane $L$, or
\begin{equation} \label{Rf}
Rf(\w,p) = \int_{L_{\w,p}} f \, ds , \quad (\w,p) \in S^{d-1}\times{\mathbf R} ,
\end{equation}
where ${L_{\w,p}}$ is the hyperplane $\{x\in{\mathbf R}^d;\, x\cdot\w = p\}$. Note that $Rf$ is even,
$Rf(\w,p) = Rf(-\w,-p)$, since $L_{\w,p} = L_{-\w,-p}$.
If $f\in L^1({\mathbf R}^d)$, then $Rf$ is defined almost everywhere on $S^{d-1}\times{\mathbf R}$, and in fact $Rf(\w,\cdot)$ is in $L^1({\mathbf R})$ for every $\w\in S^{d-1}$. It is clear that
$\|Rf(\w,\cdot)\|_{L^1({\mathbf R})} \le \|f\|_{L^1({\mathbf R}^d)}$ for every $\w$ and that
$\|Rf\|_{L^1(S^{d-1}\times{\mathbf R})} \le \|f\|_{L^1({\mathbf R}^d)}$. Here the norm in $L^1(S^{d-1}\times{\mathbf R})$ is defined using the normalized surface measure on $S^{d-1}$, which we denote by $d\w$.
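Indeed, both estimates follow from Fubini's theorem: writing $x = p\w + y$ with $y\in L_{\w,0}$ we get
\begin{equation*}
\int_{{\mathbf R}} |Rf(\w,p)|\, dp \le \int_{{\mathbf R}} \Big( \int_{L_{\w,p}} |f|\, ds \Big) dp = \int_{{\mathbf R}^d} |f(x)|\, dx = \|f\|_{L^1({\mathbf R}^d)} ,
\end{equation*}
and integrating this inequality over $S^{d-1}$ with respect to the normalized measure $d\w$ gives the second estimate.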
For $\mu\in M({\mathbf R}^d)$ we shall define the Radon transform $R\mu$ as a measure on $S^{d-1}\times{\mathbf R}$, that is, as a linear form on the space $C_0(S^{d-1}\times{\mathbf R})$ of continuous functions on $S^{d-1}\times{\mathbf R}$ that tend to zero at infinity. To do this we need some more notation. If $\phi\in L^1(S^{d-1}\times{\mathbf R})$ and $\psi\in C_0(S^{d-1}\times{\mathbf R})$ we write
$$
\sca{\phi}{\psi} = \int_{S^{d-1}} \int_{{\mathbf R}} \phi(\w,p) \psi(\w,p) dp\, d\w .
$$
More generally, if $\phi$ is a linear form on $C_0(S^{d-1}\times{\mathbf R})$ we write
$\sca{\phi}{\psi}$ to denote the action of $\phi$ on the test function $\psi$;
thus $\phi\in L^1(S^{d-1}\times{\mathbf R})$ is identified with the linear form
$C_0(S^{d-1}\times{\mathbf R})\ni\psi\mapsto\sca{\phi}{\psi}$.
The dual Radon transform $R^*$ is defined for $\psi \in C_0(S^{d-1}\times{\mathbf R})$ by
\begin{equation} \label{R*}
R^*\psi(x) = \int_{S^{d-1}} \psi(\w, x\cdot\w) d\w .
\end{equation}
If $\psi$ is even, $\psi(\w,p) = \psi(-\w,-p)$, then $\psi$ can be considered as a function on the manifold of hyperplanes, $\psi(L_{\w,p}) = \psi(\w,p)$, and the geometric meaning of (\ref{R*}) is that $R^*\psi(x)$ is defined as the mean of $\psi(L)$ taken over all hyperplanes $L$
containing $x$. It is easy to verify that $R^*$ is the adjoint of $R$ in the sense that
\begin{equation} \label{adjoint}
\sca{R\phi}{\psi} = \sca{\phi}{R^*\psi}
\end{equation}
for $\phi\in L^1({\mathbf R}^d)$ and $\psi\in C_0(S^{d-1}\times{\mathbf R})$.
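To verify (\ref{adjoint}) one can use Fubini's theorem: for $\phi\in L^1({\mathbf R}^d)$ and $\psi\in C_0(S^{d-1}\times{\mathbf R})$,
\begin{equation*}
\sca{R\phi}{\psi} = \int_{S^{d-1}}\int_{{\mathbf R}} \Big(\int_{L_{\w,p}} \phi\, ds\Big) \psi(\w,p)\, dp\, d\w
= \int_{S^{d-1}}\int_{{\mathbf R}^d} \phi(x) \psi(\w, x\cdot\w)\, dx\, d\w = \sca{\phi}{R^*\psi} ,
\end{equation*}
where in the middle step the integrations over $ds$ and $dp$ were combined into an integral over ${\mathbf R}^d$, using that $\psi(\w,p) = \psi(\w, x\cdot\w)$ on $L_{\w,p}$.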
Therefore it is natural to define the
Radon transform $R\mu$ of $\mu\in M({\mathbf R}^d)$ by
\begin{equation} \label{Rmu1}
\sca{R\mu}{\psi} = \sca{\mu}{R^*\psi} ,\quad \psi\in C_0(S^{d-1}\times{\mathbf R}) .
\end{equation}
It is obvious that $R^*$ maps $C_0(S^{d-1}\times{\mathbf R})$ and
$C_b(S^{d-1}\times{\mathbf R})$
into $C_b({\mathbf R}^d)$, but in fact
$R^*$ maps $C_0(S^{d-1}\times{\mathbf R})$ into $C_0({\mathbf R}^d)$:
\smallskip
\noindent
{\bf Lemma 1.}
If $\psi \in C_0(S^{d-1}\times{\mathbf R})$ then $R^*\psi\in C_0({\mathbf R}^d)$ and
\begin{equation} \label{supR*}
\sup|R^*\psi| \le \sup|\psi| .
\end{equation}
\noindent
{\it Proof.}
It is obvious that $R^*\psi$ is continuous and bounded and that (\ref{supR*}) holds, so we only need to prove that $R^*\psi(x) \rar 0$ as $|x|\rar \i$. First observe that for any $A$ the measure of the set
$$
E(x,A) = \{\w\in S^{d-1};\, |x\cdot\w| < A\}
= \{\w\in S^{d-1};\, |(x/|x|)\cdot \w| < A/{|x|}\}
$$
tends to zero as $|x|\rar \i$. Choose $A$ so that $|\psi(\w,p)| < \e$ for $|p| > A$, and then choose $B$ so large that the measure of $E(x,A)$ is less than $\e$ if $|x| > B$. Then, if $|x|>B$,
\begin{equation*}
|R^*\psi(x)| \le \big|\int_{E(x,A)} \psi(\w, x\cdot\w) d\w \big|
+ \big|\int_{\complement E(x,A)} \psi(\w, x\cdot\w) d\w \big|
\le \e \sup|\psi| + \e ,
\end{equation*}
which completes the proof.
It follows from the definition that $R\mu \in M(S^{d-1}\times{\mathbf R})$, the space of measures on $S^{d-1}\times{\mathbf R}$ with finite total mass, and that
$\|R\mu\|_M \le \|\mu\|_M$, hence $R$ is a bounded operator from $M({\mathbf R}^d)$ into
$M(S^{d-1}\times{\mathbf R})$.
It follows from (\ref{adjoint}) that the definition (\ref{Rmu1}) coincides with (\ref{Rf}), if $\mu\in L^1({\mathbf R}^d)$.
The operator $R^*$ maps to zero all functions $\psi$ that are odd functions of $(\w,p)$; therefore we could equally well consider $R\mu$ as a linear form on the subspace of even functions in $C_0(S^{d-1}\times{\mathbf R})$. It follows that the measures $R\mu$ are all even. (A measure $\nu$, considered as a set function, is called even, if $\nu(E) = \nu(-E)$ for every Borel set $E$; in terms of the linear form this is equivalent to
$\sca{\nu}{\vf} = \sca{\nu}{\check{\vf}}$, where $\check{\vf}$ is defined by
$\check{\vf}(z) = \vf(-z)$.)
For sequences $\mu_k \in M({\mathbf R}^d)$ we shall consider weak convergence defined by the space $C_0({\mathbf R}^d)$ of test functions,
\begin{equation} \label{weakC0}
\lim_{k\rar\i} \sca{\mu_k}{\vf} = \sca{\mu}{\vf} \quad \mathrm{for\ all \ \ }
\vf \in C_0({\mathbf R}^d) .
\end{equation}
In mathematical literature this is often called weak* convergence, since $M({\mathbf R}^d)$ is the dual of the Banach space $C_0({\mathbf R}^d)$. We will sometimes consider the analogous convergence concept with test functions in the space $C_b({\mathbf R}^d)$,
\begin{equation} \label{weakCb}
\lim_{k\rar\i} \sca{\mu_k}{\vf} = \sca{\mu}{\vf} \quad \mathrm{for\ all \ \ }
\vf \in C_b({\mathbf R}^d) .
\end{equation}
To distinguish those concepts we shall talk about $C_0$-weak convergence and $C_b$-weak convergence, respectively. Occasionally we shall also consider $\mathcal D$-weak convergence; what this means should be obvious. It is obvious that
$C_b$-weak convergence implies $C_0$-weak convergence, which in turn implies
$\mathcal D$-weak convergence, and it is easy to see that none of those implications can be reversed.
Finally, note that \eqref{weakCb} holds if and only if the corresponding
Borel measures $\mu_k$ and $\mu$ satisfy
$\lim_{k\to\infty}\mu_k(B)=\mu(B)$ for all Borel sets $B\subset {\mathbf R}^d$
for which $|\mu|(\partial B)=0$, where $\partial B$ denotes the
boundary of the set $B$. For probability measures this equivalence
is part of the well known Portmanteau theorem (see e.g.~\cite{B68}).
The definition \eqref{Rmu1} shows that the Radon transform is $C_0$-weakly continuous in the sense that $\mu_k\rar\mu$ $C_0$-weakly implies $R\mu_k\rar R\mu$ $C_0$-weakly. The same is true if $C_0$-weakly is replaced by $C_b$-weakly.
If $\|\mu_k\|_M \le C$, then conversely $C_0$-weak convergence of $R\mu_k$ implies
$C_0$-weak convergence of $\mu_k$; this is essentially the content of Theorem 1$'$, see Remark 2 after Theorem~1$'$.
If $d$ is odd a slightly stronger statement is very easy to prove as follows.
Assume $\sca{R\mu_k}{\vf} \rar 0$ for all $\vf\in \mathcal D(S^{d-1}\times{\mathbf R})$. The formula $\psi = c R^* \p_p^{d-1} R \psi$ (see \cite{H80}), valid if $d$ is odd, helps us to write an arbitrary function $\psi \in \mathcal D({\mathbf R}^d)$ in the form $\psi = R^*\vf$ with $\vf = c\, \p_p^{d-1} R\psi \in \mathcal D(S^{d-1}\times{\mathbf R})$, hence \eqref{Rmu1} shows that
$\mu_k $ tends to zero $\mathcal D$-weakly. But this implies $\mu_k \rar 0$ $C_0$-weakly, since $\mathcal D$ is dense in $C_0$ and $\|\mu_k\|_M \le C$.
For an arbitrary $L^1$-function (or measure) $g(\w,p)$ on $S^{d-1}\times{\mathbf R}$ it is of course not possible to define a function (measure) $p\mapsto g(\w,p)$ on ${\mathbf R}$ for every $\w\in S^{d-1}$. However, for any $\mu \in M({\mathbf R}^d)$ the measure $R\mu\in M(S^{d-1}\times{\mathbf R})$ has the special property that a measure
$R\mu(\w,\cdot) \in M({\mathbf R})$ is well defined for every $\w\in S^{d-1}$. This is very easy to see if $\mu$ is viewed as a set function. Indeed,
$R\mu(\w,\cdot)$ is nothing but the push-forward $\pi_{\w,*} \mu$, where $\pi_{\w}$ is the projection ${\mathbf R}^d\ni x \mapsto x\cdot\w \in {\mathbf R}$;
here the measure $\pi_{\w,*} \mu$ is defined by $\pi_{\w,*} \mu(E) = \mu(\pi_{\w}^{-1} (E))$ for every Borel set $E\ss{\mathbf R}$. Similarly, looking at $\mu$ as a linear form we define $\pi_{\w,*} \mu$ as follows. First define the pullback $\pi_{\w}^*$ on test functions by
$\pi_{\w}^*\vf = \vf\circ \pi_{\w} \in C_b({\mathbf R}^d)$ for $\vf\in C_0({\mathbf R})$. Then define the push-forward
$\pi_{\w,*} \mu$ by
$$
\sca{\pi_{\w,*} \mu}{\vf} = \sca{\mu}{\pi_{\w}^*\vf} = \sca{\mu}{\vf(x\cdot\w)}, \quad
\vf\in C_0({\mathbf R}) .
$$
Thus the restriction $R\mu(\w,\cdot)$ to a particular $\w$ of the Radon transform $R\mu$ is the same as the push-forward $\pi_{\w,*} \mu$,
\begin{equation} \label{Rmuw}
\sca{R\mu(\w,\cdot)}{\vf} = \sca{\mu}{\vf(x\cdot\w)}, \quad \vf\in C_0({\mathbf R}) .
\end{equation}
From the expression (\ref{Rmuw}) we also see that $\sca{R\mu(\w,\cdot)}{\vf}$ is a continuous function of $\w$ for every $\vf\in C_0({\mathbf R})$.
Having extended the linear form $\mu$ to $C_b({\mathbf R}^d)$ as explained above we can define the
Fourier transform $\hat{\mu}$ of $\mu\in M({\mathbf R}^d)$ by
\begin{equation} \label{muhat}
\hat{\mu}(\xi) = \sca{\mu}{x \mapsto e^{-ix\cdot\xi}} .
\end{equation}
If $\mu\in M({\mathbf R}^d)$, then $\hat{\mu}$ is a uniformly continuous bounded function.
\smallskip
\noindent
{\bf Lemma 2.}
The one-dimensional Fourier transform of $R\mu(\w,p)$ with respect to $p$ for fixed $\w$, denoted $\hat{R\mu}(\w,\s)$, is related to the $d$-dimensional Fourier transform of $\mu$ by
\begin{equation} \label{F-slice}
\hat{R\mu}(\w,\s) = \hat \mu(\s\w) , \quad \s\in{\mathbf R}, \ \w\in S^{d-1} .
\end{equation}
\noindent
{\it Proof.}
For functions in $L^1({\mathbf R}^d)$ the proof consists just in interpreting an iterated integral
$\int \ldots ds\,dp$ as a multiple integral over ${\mathbf R}^d$. For the general case one can argue as follows. Using (\ref{Rmuw}) we see that
$$
\hat{R\mu}(\w,\s) = \sca{R\mu(\w,\cdot)}{p\mapsto e^{-i p\s}}
= \sca{\mu}{x\mapsto e^{-i(x\cdot\w)\s}} = \hat\mu(\s\w) ,
$$
which proves the claim.
\smallskip
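\noindent
{\it Example.}
As an illustration of \eqref{F-slice}, let $\mu$ have the Gaussian density $f(x) = (2\pi)^{-d/2} e^{-|x|^2/2}$. Then $\hat{\mu}(\xi) = e^{-|\xi|^2/2}$, hence $\hat{R\mu}(\w,\s) = e^{-\s^2/2}$ for every $\w\in S^{d-1}$, and inverting the one-dimensional Fourier transform gives
\begin{equation*}
R\mu(\w,p) = (2\pi)^{-1/2} e^{-p^2/2} , \quad (\w,p)\in S^{d-1}\times{\mathbf R} ;
\end{equation*}
the projection of the standard Gaussian measure on any line is the standard one-dimensional Gaussian.
\smallskip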
If $\mu\in M({\mathbf R}^d)$ and $h\in C_b({\mathbf R}^d)$ then the convolution $\mu * h$ can be defined as
$$
\mu * h (x) = \sca{\mu}{h(x - \cdot)}, \quad x\in {\mathbf R}^d ,
$$
which is easily seen to be a function in $C_b({\mathbf R}^d)$.
If $h\in C_0({\mathbf R}^d)$, then $\mu*h\in C_0({\mathbf R}^d)$.
If $\vf$ is a function in $\mathcal D({\mathbf R}^d)$ with integral equal to $1$, then $\vf_{\e}(x) = \e^{-d}\vf(x/\e)$ tends $C_b$-weakly to the Dirac measure at the origin as $\e\rar 0$, and similarly, the family of smooth functions $\mu* \vf_{\e}$ tends $C_b$-weakly to $\mu$ as $\e\rar 0$.
It is an elementary fact that if $\nu\in M({\mathbf R})$ then there exists a function $F(t)$ with bounded variation, defined up to a constant, such that
\begin{equation} \label{F(t)}
\sca{\nu}{\vf} = \int_{{\mathbf R}} \vf(t) dF(t) = - \int_{{\mathbf R}} F(t) \vf'(t) dt , \quad \vf\in\mathcal D({\mathbf R}) .
\end{equation}
If $F(t)$ is normalized by the requirement that $\lim_{t\rar-\i} F(t) = 0$, then
$$
F(t) = \nu(\{s\in{\mathbf R};\, s < t\}) \quad \text{for a.e.} \ \ t\in{\mathbf R} .
$$
The equation (\ref{F(t)}) shows that $\nu$ is the derivative of $F$ in the distribution sense, and hence $F$ can be found from $\nu$ as a primitive function of $\nu$.
If $\nu = R\mu(\w,\cdot)$, then $F(p) = \mu(H_{\w,p})$, hence
\begin{equation} \label{dpmuh}
\sca{R\mu(\w,\cdot)}{\vf} = - \int_{{\mathbf R}} \mu(H_{\w,p}) \vf'(p) dp ,
\quad \vf \in \mathcal D({\mathbf R}) ,
\end{equation}
which shows that the derivative in the distribution sense of the function (of bounded variation)
$p\mapsto \mu(H_{\w,p})$ is equal to the Radon transform $R\mu(\w,\cdot)$.
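For example, if $\mu = \delta_a$ is the Dirac measure at a point $a\in{\mathbf R}^d$, then
\begin{equation*}
\mu(H_{\w,p}) = \begin{cases} 1, & p > a\cdot\w, \\ 0, & p \le a\cdot\w, \end{cases}
\end{equation*}
and the derivative in the distribution sense of this step function is the Dirac measure $\delta_{a\cdot\w}$ on ${\mathbf R}$, which is indeed the push-forward $\pi_{\w,*}\delta_a$.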
\section{Injectivity theorems for the Radon transform}
We now turn to the injectivity theorems for the Radon transform on the space of measures. In the literature on the Radon transform statements like Theorems B - D are often called support theorems.
\smallskip
\noindent
{\bf Theorem A.}
The Radon transform is injective on $M({\mathbf R}^d)$.
\noindent
{\it Proof.}
If $R\mu(\w,p) = 0$, then $\hat{R\mu}(\w,\s) = 0$, and by the formula $\hat{R\mu}(\w,\s) = \hat \mu(\s\w)$ it follows that $\hat\mu = 0$, hence $\mu= 0$.
\smallskip
Let us say that a continuous function $f$ on ${\mathbf R}^d$ is {\it rapidly decaying at infinity} if
\begin{equation} \label{decay1}
f(x) = \mathcal O(|x|^{-m}) \quad \text{as} \quad
|x|\rar\i \text{\ for every\ } m>0.
\end{equation}
To define this property for measures we choose, for arbitrary $r>1$, a continuous function $\chi_r(x)$ on ${\mathbf R}^d$ such that $0\le \chi_r \le 1$, $\chi_r(x)=0$ for $|x|<r-1$, and $\chi_r(x)=1$ for $|x|>r$.
The product $\phi\mu$ of a measure $\mu\in M({\mathbf R}^d)$ and a bounded continuous function $\phi$ is defined by $\sca{\phi\mu}{\vf} = \sca{\mu}{\phi\vf}$ for every $\vf\in C_0({\mathbf R}^d)$.
\smallskip
\noindent
{\bf Definition 1.}
We shall say that the measure $\mu$ is {\it rapidly decaying at infinity}, if
\begin{equation*}
\|\chi_r \mu\|_M =\mathcal O(r^{-m}) \quad \text{as} \quad
r\rar\i \text{\ for every\ } m>0.
\end{equation*}
If the measure $\mu$ is defined by a continuous density $f(x)$ and $\mu$ is rapidly decaying, it is not certain that $f$ is rapidly decaying in the sense of (\ref{decay1}); in fact then $f$ does not even have to be bounded. On the other hand, any convolution of $\mu$ with a compactly supported test function must satisfy (\ref{decay1}):
\smallskip
\noindent
{\bf Lemma 3.}
If $\mu\in M({\mathbf R}^d)$ is rapidly decaying at infinity and $\phi\in\mathcal D({\mathbf R}^d)$, then the smooth function $\mu*\phi$ is rapidly decaying in the sense of (\ref{decay1}), that is,
\begin{equation} \label{decay2}
|\mu*\phi(x)| = \mathcal O(|x|^{-m}) \quad \textrm{as} \ |x|\rar \i
\text{\ for every\ } m > 0 .
\end{equation}
Moreover, every derivative of $\mu*\phi$ is rapidly decaying in the same sense.
\noindent
{\it Proof.}
Assume that $\phi$ is supported in the ball $\{x;\, |x|\le A\}$. If $|x| > r + A$, then $\chi_r$ is equal to $1$ on the support of $y\mapsto \phi(x-y)$, hence
\begin{align*}
|\mu*\phi(x)| &= |\sca{\mu}{\phi(x-\cdot)}|
= |\sca{\mu}{\chi_r(\cdot)\phi(x-\cdot)}| =
|\sca{\chi_r\mu}{\phi(x-\cdot)}|\\
&\le \| \chi_r\mu\|_M \sup|\phi| ,
\end{align*}
which proves the first claim. Using the formula $\p^{\b}(\mu*\phi) = \mu*\p^{\b}\phi$, where $\p^{\b}$ is an arbitrary mixed derivative, we obtain the second statement.
\smallskip
If $\Om$ is an open subset of ${\mathbf R}^d$ we shall
denote by $C_c(\Om)$ the set of continuous functions with compact support in $\Om$. Moreover we shall denote by $M_{\mathrm{loc}}(\Om)$ the set of linear forms on $C_c(\Om)$ that are continuous with respect to the topology of uniform convergence on compact subsets of $\Om$, that is,
\begin{equation}\label{C_K}
|\sca{\mu}{\vf}| \le C_K \|\vf\|
\quad \text{for} \ \ \vf\in C_c(\Om) \ \text{with}
\ \supp \vf \ss K
\end{equation}
with a constant $C_K$ depending on the compact set $K\ss\Om$.
If (\ref{C_K}) holds with a constant $C$ independent of $K$, then the
total mass is $\le C$ and we write $\mu\in M(\Om)$. It is clear that
the restriction of any $\mu\in M_{\mathrm{loc}}(\Om)$ to $\Om_1\ss\Om$ with closure
$\bar{\Om_1}\ss\Om$ must belong to $M(\Om_1)$.
The family of non-negative measures $\mu\in M_{\mathrm{loc}}(\Om)$ is the family of
\emph{Radon measures} on $\Om$; see Chapter 7 in \cite{F99}.
Recall that a Radon
measure on an open subset $\Om$ of ${\mathbf R}^d$ is a non-negative Borel measure
$\mu$ on $\Om$ with $\mu(K)<\infty$ for every compact $K \subset \Om$.
\smallskip
\noindent
{\bf Theorem B.}
Let $K$ be a compact, convex subset of ${\mathbf R}^d$, let $f$ be a continuous function on ${\mathbf R}^d\sm K$ decaying at infinity faster than any negative power of $|x|$, and assume that the Radon transform $Rf(L) = 0$ for all hyperplanes disjoint from $K$. Then $f=0$ on ${\mathbf R}^d\sm K$.
More generally, let $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm K)$ be a measure that is rapidly decaying at infinity and assume that the Radon transform $R\mu$ vanishes on the open set of hyperplanes not intersecting $K$. Then $\mu = 0$ on ${\mathbf R}^d\sm K$.
\noindent
{\it Proof.}
This theorem was first proved by Helgason, see \cite{H80}. Here we will give Strichartz' short proof \cite{S82}. Approximating $\mu$ by smooth functions $f_{\e}=\mu*\vf_{\e}$, where $\vf_{\e}(x)=\e^{-d}\vf(x/\e)\in\mathcal D({\mathbf R}^d)$ and $\int\vf\, dx =1$,
and using Lemma 3 we see that the second statement follows from the first and that it is sufficient to prove the first statement for smooth functions
($f_{\e}$ is defined in the complement of the closed $\e$-neighborhood $K_{\e}=K+ \{x;\, |x|\le \e\}$ of $K$, and $Rf_{\e}(L)=0$ for all $L$ not intersecting $K_{\e}$).
To simplify notation we give the proof first for the case $d=2$. Denote the coordinates in the plane by $(x,y)$. Fix an arbitrary line $L$ in ${\mathbf R}^2\sm K$ and choose coordinates such that $L$ is the $x$-axis and $K$ is contained in the halfplane $y<0$. The assumption implies that the function
$$
G(a,b) = \int_{{\mathbf R}} f(x, ax + b) dx
$$
is equal to zero for all $b\ge 0$ and all $a$ sufficiently close to $0$. Differentiating $k$ times with respect to $a$ and putting $a=0$ gives
$$
\p_a^k G(0,b) = \int_{{\mathbf R}} x^k \p_y^k f(x, b) dx
= \big(\frac {\p}{\p b}\big)^k \int_{{\mathbf R}} x^k f(x, b) dx = 0 , \quad b \ge 0 .
$$
The decay assumption implies that those integrals converge.
This shows that the expression $\int_{{\mathbf R}} x^k f(x, b) dx$ must be a polynomial function of degree at most $k-1$ in $b$ for $b\ge 0$. But the decay assumption implies that this function must tend to zero as $b\rar\i$, so it must be identically zero and in particular
$\int_{{\mathbf R}} x^k f(x, 0) dx = 0$. Since this is true for every $k$ it follows that
$f(x,0) = 0$ for all $x$, that is, $f=0$ along the line $L$. And since $L$ was arbitrary we have proved that $f=0$ outside $K$. If $d > 2$ we argue similarly assuming that $L$ is the plane $x_d=0$ and considering the function
$G(a,b) = \int_{{\mathbf R}^{d-1}} f(x', x'\cdot a + b) dx'$, where $x=(x',x_d)\in{\mathbf R}^d$ and $a\in{\mathbf R}^{d-1}$.
\smallskip
In order to state Theorem C we have to formulate what it means that a measure is homogeneous of degree $\a$. If a function $f(x)$ on ${\mathbf R}^d\sm\{0\}$ is homogeneous of degree $\a\in{\mathbf R}$, that is, $f(\lb x) = \lb^{\a} f(x)$ for all $\lb> 0$, then its action on test functions satisfies
$$
\int f(x)\vf(x/\lb) dx = \lb^{d} \int f(\lb x)\vf(x) dx = \lb^{d+\a} \int f(x)\vf(x) dx,
\quad \lb > 0
$$
with $\supp\vf\ss {\mathbf R}^d\sm\{0\}$.
Set $\vf_{\lb}(x) = \vf(x/\lb)$.
Therefore a measure (or, more generally, a distribution) $\mu$ on ${\mathbf R}^d\sm\{0\}$ is said to be homogeneous of degree $\a$ if
$$
\sca{\mu}{\vf_{\lb}} = \lb^{d+\a} \sca{\mu}{\vf} \quad \text{for all} \ \
\vf\in\mathcal D({\mathbf R}^d\sm\{0\}) \ \ \text{and all} \ \ \lb > 0 .
$$
Thus, for a measure $\mu$ with density $f \in L^1({\mathbf R}^d)$ the definition means that $\mu$ is homogeneous of degree $\a$ if and only if the function $f$ is homogeneous of degree $\a$.
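\noindent
{\it Example.}
For any real $\a$ the function $f(x) = |x|^{\a}$ defines a measure on ${\mathbf R}^d\sm\{0\}$ that is homogeneous of degree $\a$ in this sense: the substitution $x = \lb y$ gives
\begin{equation*}
\sca{f}{\vf_{\lb}} = \int |x|^{\a} \vf(x/\lb)\, dx = \lb^{d+\a} \int |y|^{\a} \vf(y)\, dy = \lb^{d+\a} \sca{f}{\vf} , \quad \vf\in\mathcal D({\mathbf R}^d\sm\{0\}) .
\end{equation*}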
\noindent
{\bf Theorem C.}
Let $K$ be a convex, compact set, $0\in K$, and let $\mu$ be a function or a measure on ${\mathbf R}^d\sm\{0\}$ that is homogeneous of degree $\a$, where $\a$ is a non-integral real number $< -d$. Assume, as in Theorem B, that $R\mu = 0$ in the set of hyperplanes disjoint from $K$. Then $\mu=0$.
As we shall see in Section 5, the assumption that $\a$ is non-integral cannot be omitted.
\smallskip
\noindent
{\it Proof of Theorem C.}
It was proved in \cite{We} that any solution of $Rf(\w,p)=0$ in $|p|>1$ must be equal to an infinite sum of functions that are homogeneous of integral degrees $\le -d$. A function that is homogeneous of non-integral degree can obviously be represented in this form only if it is identically zero.
We will also present a self-contained proof of Theorem C using the methods of this paper. To begin with, we may assume that the set $K = \{0\}$; indeed, the Radon transform $R\mu(\w,p)$ must vanish for all $p\ne 0$, since it must be homogeneous with respect to $p$.
Since $\a < -d$, the measure $\mu$ must have infinite mass near the origin (unless $\mu=0$), so we cannot take the Fourier transform of $\mu$ in the elementary sense. However,
it is known that any distribution in ${\mathbf R}^d\sm \{0\}$ that is homogeneous of non-integral degree $\a$ can be uniquely continued to a homogeneous distribution on ${\mathbf R}^d$ (see Appendix, and \cite[Theorem 3.2.3]{H03}). Let us denote this distribution also by $\mu$.
Any homogeneous distribution $f$ on ${\mathbf R}^d$ belongs to the Schwartz class $\mathcal S'({\mathbf R}^d)$ and hence has a Fourier transform defined by $\sca{\hat f}{\vf} = \sca f{\hat{\vf}}$ for $\vf\in\mathcal D({\mathbf R}^d)$, and $\hat f\in \mathcal S'({\mathbf R}^d)$. (See \cite{H03} or any text book on distribution theory.)
We claim that $\hat{\mu}\in L^1_{\mathrm{loc}}({\mathbf R}^d)$, and in fact that $\hat{\mu}$ is a continuous function. To see this, take a function $\chi\in\mathcal D({\mathbf R}^d)$, equal to $1$ in some neighborhood of the origin, and write
$\mu = \mu_0 + \mu_1$ where $\mu_0 = \chi\mu$. Then $\mu_0$ is a distribution with compact support, hence $\hat{\mu_0}$ is a $C^{\i}$ function, and $\mu_1\in M({\mathbf R}^d)$, hence $\hat{\mu_1}$ is continuous. This proves the claim.
The Radon transform $R\mu$ is a distribution on $S^{d-1}\times{\mathbf R}$ defined by $\sca{R\mu}{\vf} = \sca{\mu}{R^*\vf}$ for $\vf\in\mathcal D(S^{d-1}\times{\mathbf R})$ (see \cite{H80}), and by assumption $R\mu = 0$ on the open set $\{(\w,p);\, p \ne 0\}$. This implies that the distribution $R\mu(\w,\cdot)$ is supported at the origin for every $\w$.
Hence, by Theorem 2.3.4 in \cite{H03} this distribution is a linear combination of the Dirac measure at the origin and its derivatives, which means that its Fourier transform $\hat{R\mu}(\w,\s)$ is a polynomial in $\s$ with coefficients that depend on $\w$ (in fact are continuous functions of $\w$). By an extension of Lemma~2 to distributions we know that
$\hat{R\mu}(\w,\s) = \hat {\mu}(\s\w)$, hence
$$
\hat {\mu}(\s\w) = \sum_0^N a_k(\w) \s^k
$$
for some $N$ (in fact $N \le \max\{0,-\a - d + 1\}$).
On the other hand, it is known that the Fourier transform of a homogeneous distribution is homogeneous, in this case of non-integral degree $ -\a - d > d - d = 0$ \cite[Theorem 7.1.16]{H03}. Since the expression on the right hand side cannot be homogeneous of non-integral degree unless all $a_k(\w)=0$, it follows that $\hat{\mu} = 0$, hence $\mu=0$ and the proof is complete.
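In more detail: since $\hat{\mu}$ is homogeneous of degree $\g = -\a - d$, we have for every $t > 0$
\begin{equation*}
\sum_0^N a_k(\w) t^k \s^k = \hat{\mu}(t\s\w) = t^{\g} \sum_0^N a_k(\w) \s^k .
\end{equation*}
Comparing coefficients of $\s^k$ gives $t^k a_k(\w) = t^{\g} a_k(\w)$ for all $t>0$, which is impossible for non-integral $\g$ unless $a_k(\w) = 0$ for every $k$.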
\smallskip
\noindent
{\it Remark.}
In the proof above we took for granted that $R\mu(\w, \cdot)$ is a well defined distribution for every fixed $\w$. That this is true is in fact not quite obvious, but can be understood as follows. If $\mu = \mu_0 + \mu_1$, where $\mu_0$ and $\mu_1$ have the same meaning as in the proof, then $\mu_1\in M({\mathbf R}^d)$, and hence $R\mu_1(\w,\cdot)$ is a well defined element of $M({\mathbf R})$ for every $\w$. Since $\mu_0$ has compact support we can define $R\mu_0(\w,\cdot)$ by
\begin{equation*}
\sca{R\mu_0(\w,\cdot)}{\vf} = \sca{\mu_0}{\vf(x\cdot\w)}, \quad \vf\in\mathcal D({\mathbf R}) ,
\end{equation*}
in analogy with (2.10). The function $x\mapsto \vf(x\cdot\w)$ does not have compact support, but since $\mu_0$ has compact support we can define $\sca{\mu_0}{\psi}$ for any $\psi\in C^{\i}({\mathbf R}^d)$ as $\sca{\mu_0}{\chi\psi}$, where $\chi\in \mathcal D({\mathbf R}^d)$ and $\chi = 1$ in a neighborhood of the support of $\mu_0$.
A subset $Q$ of ${\mathbf R}^d$ will be called a \emph{cone} if $x\in Q$ implies $\lb x\in Q$ for every $\lb > 0$.
\smallskip
\noindent
{\bf Theorem D.}
Let $Q$ be a closed cone such that $Q\sm\{0\}$ is contained in some open halfspace $\{x\in{\mathbf R}^d;\, x\cdot\w > 0\}$.
Let $K$ be a convex, compact set, let $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm K)$ and assume that $\mu$ has finite mass on sets bounded away from $K$. Assume moreover that
$\supp\mu\ss Q$. Assume that $R\mu(L) = 0$ on the open set of hyperplanes disjoint from $K$. Then $\mu=0$ on ${\mathbf R}^d\sm K$.
\smallskip
This theorem is a special case of Corollary 3 in \cite{Bo92}; in the latter theorem the function (distribution) is only assumed to be rapidly decaying outside the cone $Q$, and the Radon transform is allowed to be weighted with a positive and real analytic weight function.
\noindent
{\it Proof of Theorem D.}
The idea of the proof is to make a projective transformation that maps $Q$ to a compact set and thereby reduce the problem to that of Theorem B, in fact to the special case of Theorem B when $\mu$ is compactly supported.
Write $x = (x',x_d)$, where $x'=(x_1,\ldots,x_{d-1}) \in {\mathbf R}^{d-1}$. We may choose coordinates so that $Q$ is the cone $\{x;\, x_d \ge \de|x'|\}$ for some $\de>0$.
Consider the projective transformation
\begin{equation*}
x \mapsto y = \frac x {1 + x_d} = \Psi(x) , \quad x\in{\mathbf R}^d .
\end{equation*}
Since $y_d = x_d/(1+x_d)$ it is clear that $\Psi(Q)$ is contained in the compact set
\begin{equation*}
\de|y'| \le y_d \le 1 .
\end{equation*}
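Indeed, if $x\in Q$, then $x_d \ge \de|x'| \ge 0$, so $1 + x_d \ge 1$, and
\begin{equation*}
\de|y'| = \frac{\de|x'|}{1+x_d} \le \frac{x_d}{1+x_d} = y_d \le 1 .
\end{equation*}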
Writing $\ti f = f\circ \Psi^{-1}$ and $\ti L = \Psi(L)$ our Radon transform in $x$-space is transformed as follows
\begin{equation} \label{Rtilde}
Rf(L) = \int_L f(x) ds_x = \int_{\ti{L}} \ti f(y) J(\ti L,y) ds_y .
\end{equation}
Here $ds_x$ and $ds_y$ are the Euclidean surface measures on hyperplanes in $x$ and $y$-spaces, respectively, and $J(\ti L,y)ds_y$ is the push-forward $\Psi_*(ds_x)$.
It is an important fact that, for an arbitrary projective transformation $\Psi$, the Jacobian $J(\ti L,y)$ factors into a product of a function depending only on the point $y$ and a function depending only on the hyperplane $\ti L$.
\smallskip
\noindent
{\bf Lemma 4.}
The Jacobian $J(\ti L,y)$ defined by $\Psi_*(ds_x) = J(\ti L,y)ds_y$ is a non-vanishing smooth function on the manifold $Z$ of pairs $(y,\ti L)$ of points $y$ and hyperplanes $\ti L$ for which $y\in \ti L$. It can be factored so that
\begin{equation} \label{factorability}
J(\ti L,y) = J_0(\ti L) J_1(y) .
\end{equation}
\smallskip
In the terminology introduced by Palamodov the identity \eqref{factorability} says that projective transformations are {\it factorable} with respect to the family of hyperplanes \cite[Section 3.1]{Pa3}. This property was used in an essential way in the study of Radon transforms in \cite{Pa1}, \cite{Pa2}, \cite{Bo89}, \cite{Bo92}. The factorability of projective transformations was implicit already in \cite{GGG}, where a projectively invariant Radon transform was defined, operating on sections of a certain vector bundle.
\smallskip
\noindent
{\it Sketch of proof of Lemma 4.}
Let $\Psi$ now denote the mapping
\begin{equation}
{\mathbf R}^d \ni x \mapsto (x,1)/\sqrt{1 + |x|^2} \in S^{d}
\end{equation}
from ${\mathbf R}^d$ onto the open upper half of the unit sphere $S^d$ in ${\mathbf R}^{d+1}$. Let $L$ be a hyperplane $x\cdot\w = p$ in ${\mathbf R}^d$, and let $\Psi(L)$ be the image of $L$ under $\Psi$, which is a $(d-1)$-dimensional halfsphere in $S^d$. Denote the Euclidean surface measures on $L$ and $\Psi(L)$ by $ds_L$ and $ds_{\Psi(L)}$, respectively. A straightforward calculation gives
\begin{equation}
\frac{ds_L}{ds_{\Psi(L)}} = \frac{(1 + |x|^2)^{d/2}}{(1 + p^2)^{1/2}} ,
\end{equation}
which shows that $\Psi$ is factorable with respect to the family of hyperplanes
(cf.\ \cite[Lemma~1]{Bo89} and \cite[Corollary 7.5, III]{Pa2}).
An affine transformation $T$ from ${\mathbf R}^d$ onto itself is factorable in a trivial way, because the Jacobian $ds_L/ds_{T(L)}$ depends only on the hyperplane $L$. Now, an arbitrary projective transformation can be represented as $T\Psi^{-1}A \Psi$, where $A$ is a rotation of the sphere $S^d$ and $T$ is an affine transformation, and it is obvious that a product of factorable transformations is factorable. This completes the proof.
\smallskip
\noindent
{\it End of proof of Theorem D.}
Using Lemma 4 we can write (\ref{Rtilde}) as
\begin{equation*}
Rf(L) = J_0(\ti L) \int_{\ti L} \ti f(y) J_1(y) ds_y .
\end{equation*}
Since $J_0 \ne 0$, the assumption that $Rf(L) = 0$ for all $L$ not intersecting $K$ now implies that the Radon transform of $\ti f J_1$ in the $y$-space vanishes on the set of all $\ti L$ not intersecting $\ti K = \Psi(K)$. Since $\ti f$ is compactly supported, Theorem B implies that $\ti f J_1 = 0$ outside $\ti K$, and since $J_1\ne 0$ it follows that $\ti f = 0$ outside $\ti K$, which in turn implies that $f = 0$ outside $K$.
\smallskip
\noindent
{\it Remark.}
Theorem C holds without change for distributions of arbitrary order.
Theorem B holds for distributions if rapid decay at infinity is defined for instance by \eqref{decay2} being satisfied for all $\phi\in\mathcal D({\mathbf R}^d)$. Theorems A and D are valid for distributions that decay sufficiently fast at infinity for the Radon transform to be defined.
\section{ The Cram\'er-Wold theorems }
We are now ready to state several versions of the Cram\'er-Wold theorem. We begin with four versions of ``uniqueness type'', each an immediate consequence of one of the theorems above. After that we will state four analogous versions for sequences of measures. Recall that we denote the halfspace $\{x\in{\mathbf R}^d;\, x\cdot\w < p\}$ by $H_{\w,p}$.
\smallskip
\noindent
{\bf Theorem 1.}
A measure $\mu\in M({\mathbf R}^d)$ is uniquely determined by $\mu(H_{\w,p})$ for almost all $(\w,p)\in S^{d-1}\times{\mathbf R}$.
In other words, if $\mu(H_{\w,p}) = 0$ for almost all halfspaces $H_{\w,p} \ss {\mathbf R}^d$, then $\mu = 0$.
\noindent
{\it Proof.}
As we saw above, for each $\w$ the distribution derivative of $p\mapsto \mu(H_{\w,p})$ is the Radon transform $R\mu(\w,\cdot)$ of $\mu$, evaluated at $\w$. The assertion now follows from Theorem A.
We shall use the notation $\overline{E}$ to denote the closure of a
subset $E\ss{\mathbf R}^d$. We say that $E$ is bounded away from the origin
if $\overline{E} \subset {\mathbf R}^d \setminus \{0\}$.
\smallskip
\noindent
{\bf Theorem 2.}
Assume that $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ is rapidly decaying at infinity and that
$\mu(H_{\w,p}) = 0$ for almost all halfspaces $H_{\w,p}$ for which
$\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$. Then $\mu = 0$.
\noindent
{\it Proof.}
The assumption implies that the Radon transform of $\mu$ vanishes in the open set $\{(\w,p);\, p\ne 0\}$. Application of Theorem B with $K=\{0\}$ then proves that
$\mu = 0$.
\smallskip
\noindent
{\bf Theorem 3a.}
Assume that $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$
is homogeneous of non-integral degree
$\a < -d$, and that $\mu(H_{\w,p}) = 0$ for almost all halfspaces $H_{\w,p}$ for which
$\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$. Then $\mu = 0$.
\noindent
{\it Proof.}
As in the proof of Theorem 2 the assumption implies that $R\mu = 0$ in the set
$\{(\w,p);\, p\ne 0\}$. An application of Theorem C with $K=\{0\}$ completes the proof.
\smallskip
There is a more subtle version of the previous theorem where the function
$p\mapsto \mu(H_{\w,p}) = a(\w,p)$ is assumed to be homogeneous, but the measure $\mu$ is not. In this case we need to assume that the measure $\mu$ is non-negative.
Note that the closed halfspace $\overline{H_{\w,p}}$ is contained in ${\mathbf R}^d\sm\{0\}$ if and only if $p < 0$.
\noindent
{\bf Theorem 3b.}
Assume that $a(\w,p)$ is a locally bounded function on $S^{d-1}\times\{p\in{\mathbf R};\, p<0\}$ that is homogeneous of non-integral degree $-\b < 0$ with respect to $p$. Then there exists at most one non-negative measure
$\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ with finite mass on sets bounded away from the origin, such that
\begin{equation} \label{thm3b}
\mu(H_{\w,p}) = a(\w,p) \quad \text{for almost all halfspaces $H_{\w,p}$ for which
$\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$.}
\end{equation}
The measure $\mu$, if it exists, is homogeneous of degree $-d-\b$ and satisfies $R\mu = \p_p a$ in $S^{d-1}\times \{p\in{\mathbf R};\,p< 0\}$.
\smallskip
\noindent
{\it Proof.}
Here is an outline of the proof. Using the formula $\hat{R\mu}(\w,\s) = \hat{\mu}(\s\w)$ we construct a homogeneous solution $\mu_0$ of the equation $R\mu_0 = \p_p a$; then $\mu_0$ must satisfy \eqref{thm3b}. Of course we cannot know if $\mu_0$ is non-negative, and moreover, since solutions of \eqref{thm3b} are not required to be homogeneous, $\mu_0$ is far from unique as a solution of \eqref{thm3b}. However, $\mu_0$ is the only solution of \eqref{thm3b} that can possibly be non-negative. To prove this we shall use the fact that
the measure $\nu = \mu - \mu_0$, which solves the equation $R\nu = 0$ in $p \ne 0$, must be equal to a finite sum $\sum h_k$ of distributions that are homogeneous of integral degree, each satisfying $Rh_k = 0$ in $p\ne 0$ (cf.\ the proof of Theorem C). Since none of the $h_k$ can be non-negative, and one of the $h_k$ must dominate in the expression $\mu = \mu_0 + \sum h_k$ either for small or for large $|x|$, $\mu$ cannot be non-negative unless all $h_k$ vanish.
Set $b(\w,p) = \p_p a(\w,p)$ for $p < 0$ and extend $b(\w,p)$ as an even function of $(\w,p)$ on $S^{d-1}\times{\mathbf R}$.
A homogeneous distribution $\mu_0$ on ${\mathbf R}^d$ satisfying $R\mu_0(\w,p) = b(\w,p)$ for $p\ne 0$
will now be constructed using the equation
\begin{equation} \label{mu0hat}
\hat{\mu_0}(\s\w) = \hat b(\w,\s) .
\end{equation}
But the function $p\mapsto b(\w,p)$ is not integrable at the origin (unless it is identically zero), so the Fourier transform
$\hat b(\w,\s)$ is not defined in the elementary sense.
However, since $p\mapsto b(\w,p)$ is homogeneous of non-integral order $-\b-1$, this function
can, for every $\w$, be uniquely extended to a distribution on ${\mathbf R}$ that is homogeneous of degree $-\b - 1$ (see the Appendix).
In this way we obtain a distribution on
$S^{d-1} \times {\mathbf R}$ which we shall also denote by $b(\w,p)$, and by $\hat b(\w,\s)$ we understand the $1$-dimensional Fourier transform of this distribution with respect to $p$.
Note that $\hat{\mu_0}(\s\w)$ is well defined by (\ref{mu0hat}), since $b$ and $\hat b$ are even functions of $(\w,p)$ and $(\w,\s)$, respectively. The function $\s\mapsto \hat b(\w,\s)$ is homogeneous of degree $-1 - (-\b-1) = \b$, and this makes $\hat{\mu_0}$ homogeneous of degree $\b$ and locally bounded, hence $\hat{\mu_0}$ is an element of the space $\mathcal S'({\mathbf R}^d)$ of tempered distributions. This implies that $\mu_0$ is a well defined distribution in ${\mathbf R}^d$, homogeneous of degree $-d-\b$.
Assume now that $\mu$ is a non-negative measure satisfying (\ref{thm3b}).
In order to be able to use the Fourier transform we must now prove that $\mu$ can be extended to a distribution on ${\mathbf R}^d$.
Let $\mu^{\e}$ be the restriction of $\mu$ to ${\mathbf R}^d\sm B_{\e}$, where $B_{\e}$ is the closed ball with radius $\e$ centered at the origin.
We claim that the norm of $\mu^{\e}$ satisfies an estimate
\begin{equation} \label{mueps1}
\|\mu^{\e}\|_M \le C \e^{-\b} , \quad \e > 0 .
\end{equation}
To prove this we observe that we can cover ${\mathbf R}^d\sm B_{\e}$ by $2d$ halfspaces of the form $H_{\w^j,-\e/d} \ss {\mathbf R}^d\sm\{0\}$ for suitable $\w^j$, and since $\mu\ge 0$ it then follows that
\begin{equation} \label{mueps2}
\|\mu^{\e}\|_M \le \sum_j \mu(H_{\w^j,-\e/d}) = \sum_j a(\w^j,-\e/d)
\le C(\e/d)^{-\b} = C_1 \e^{-\b} .
\end{equation}
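Here the last inequality uses the homogeneity of $a$: for $p < 0$,
\begin{equation*}
a(\w,p) = |p|^{-\b} a(\w,-1) \le C |p|^{-\b} , \quad C = \sup_{\w\in S^{d-1}} a(\w,-1) ,
\end{equation*}
the supremum being finite since $a$ is locally bounded on $S^{d-1}\times\{p\in{\mathbf R};\, p<0\}$.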
This is known to imply that $\mu$ can be extended to a distribution on ${\mathbf R}^d$ of order $< \b + 1$ (see the Appendix).
The extension is unique up to a distribution supported at the origin. Choose any of those extensions and denote it by $\ti{\mu}$.
We shall prove that $\ti{\mu} = \mu = \mu_0$ in ${\mathbf R}^d\sm\{0\}$. Set $\nu =\ti{\mu} - \mu_0$. It is clear that $R\nu(\w,p) = 0$ for $p\ne 0$.
As in the proof of Theorem C we can now use the formula $\hat{R\nu}(\w,\s) = \hat{\nu}(\s\w)$ together with the fact that $p\mapsto R\nu(\w,p)$ is supported at $p=0$ for every $\w$ to conclude that
\begin{equation*}
\hat{\nu}(\s\w) = \sum_0^N c_k(\w) \s^k , \quad \w \in S^{d-1}, \ \s\in{\mathbf R} ,
\end{equation*}
where each $c_k(\w)$ is a continuous function, even if $k$ is even, odd if $k$ is odd.
This shows that we can write
\begin{equation} \label{sumhk1}
\mu = \mu_0 + \sum_0^N h_k , \quad \text{in} \ {\mathbf R}^d\sm\{0\} ,
\end{equation}
for some $N$, where $h_k$ are homogeneous distributions of degree $-k-d$,
defined by $\hat{h_k}(\xi) = |\xi|^k c_k(\xi/|\xi|)$ for $\xi\in{\mathbf R}^d\sm\{0\}$,
which are all mapped to zero by the Radon transform.
Note that $h_k$ is even if $k$ is even, odd if $k$ is odd.
Now we are going to use the assumption that $\mu\ge 0$ to prove that all $h_k$ must vanish identically. Assuming the contrary we can choose $r$ and $s$, $0\le r \le s \le N$, such that $h_r$ and $h_s$ are not identically zero and $h_k=0$ for $k\notin[r,s]$.
For any $\vf\in\mathcal D({\mathbf R}^d)$ set $\vf_{\lb}(x) = \vf(x/\lb)$. Using the homogeneity properties of $\mu_0$ and $h_k$ we now obtain
\begin{equation} \label{sumhk}
\sca{\mu}{\vf_{\lb}}
= \lb^{-\b}\sca{\mu_0}{\vf} + \sum_{k=r}^s \lb^{-k} \sca{h_k}{\vf} , \quad
\vf\in\mathcal D({\mathbf R}^d\sm\{0\}) .
\end{equation}
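Here we have used that a distribution $u$ on ${\mathbf R}^d$ that is homogeneous of degree $a$ satisfies
\begin{equation*}
\sca u{\vf(\cdot/\lb)} = \lb^{d+a} \sca u{\vf} , \quad \lb > 0 ,
\end{equation*}
so $\mu_0$, which is homogeneous of degree $-d-\b$, contributes the factor $\lb^{-\b}$, and each $h_k$, which is homogeneous of degree $-k-d$, contributes the factor $\lb^{-k}$.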
Assume first that $\b < s$.
Since $Rh_s(\w,p) = 0$ in $p\ne 0$ we can choose $\vf\in\mathcal D({\mathbf R}^d\sm\{0\})$ such that $\vf \ge 0$ and $\sca{h_s}{\vf} < 0$. In fact, a distribution $h$ for which $\sca h{\vf} \ge 0$ for all test functions $\vf \ge 0$ is known to be a non-negative measure, so if such a $\vf$ did not exist, $h_s$ would be a non-negative measure and hence could not have vanishing Radon transform unless it were identically zero.
If $\lb$ is small, the term with $k=s$ dominates in (\ref{sumhk}), hence if
$\sca{h_s}{\vf} < 0$ we get a contradiction to $\mu\ge 0$.
Similarly, if $\b>s$, then the term with $k=r$ dominates for large $\lb$, so
if we choose $\vf$ with $\sca{h_r}{\vf} < 0$, we see that
again $\mu$ cannot be $\ge 0$. This gives a contradiction unless all $h_k$ vanish, and the theorem is proved.
By Theorem 3.2.4 in \cite{H03} a function $f$ in ${\mathbf R}^d\sm\{0\}$ that is homogeneous of {\it integral} degree $-d-m$, $m\ge 0$, can be extended to a homogeneous distribution on ${\mathbf R}^d$
if and only if
\begin{equation} \label{horm}
\int_{|x| = 1} x^{\g} f(x) ds = 0 \quad \text{for all multi-indices $\g$ with $|\g|=m$} .
\end{equation}
The same is true for measures in $M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ (and even for distributions) if the condition \eqref{horm} is interpreted appropriately.
Using this fact one can prove that the statement of Theorem~C is true under the assumption that the measure $\mu$ is even and homogeneous of degree
$\a = -d-m$ where $m$ is an odd integer. Indeed, if $\mu$ is even and $m$ is odd, then all the integrals \eqref{horm} must vanish, so $\mu$ must have an $\a$-homogeneous extension whose Fourier transform satisfies $\hat{\mu}(\s\w)=a_m(\w)\s^m$. Since $m$ is odd this contradicts the assumption that $\mu$ is even, unless $\mu=0$. Theorem 3a can be extended similarly.
The assertion of Theorem 3b, finally, is true if $\b$ is an odd integer and $\w\mapsto a(\w,p)$ is even. To prove this note first that $\w \mapsto b(\w,p)$ must then also be even, and since $(\w,p) \mapsto b(\w,p)$ is even, it follows that $p\mapsto b(\w,p)$ is even. We saw that $b(\w,p)$ is homogeneous of degree $-\b - 1$, and by assumption this is an even integer. The function $b(\w,p)$ therefore satisfies the condition \eqref{horm} (note that $d=1$ here), hence can be extended to a homogeneous distribution on ${\mathbf R}$ for every $\w$. The proof can now be finished just as the proof of Theorem~3b above after we have proved that there can be no term $h_{\b}$ in \eqref{sumhk1}. In fact, $\mu$ is assumed to be even, hence each homogeneous part of \eqref{sumhk1} must be even, in particular
$\mu_0 + h_{\b}$ must be even and $\mu_0$ is even by construction, hence $h_{\b}$ is even. But $R h_{\b} = 0$ in $p\ne 0$ and $\b$ is odd, and we saw above that this implies that $h_{\b}$ is odd. Since $h_{\b}$ is both even and odd, it follows that $h_{\b} = 0$. This completes the proof.
There are similar extensions of Theorems C, 3a, and 3b where the measure $\mu$ is assumed to be odd and $\a + d$ ($\b$, respectively) is an even integer.
\smallskip
\noindent
{\bf Theorem 4.}
Let $\mu$ be a measure in $M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ such that
$\mu\in M({\mathbf R}^d\sm B_{\e})$ for all $\e > 0$, and let
$Q$ be a closed cone such that $Q\sm\{0\}$ is contained in some open halfspace $\{x\in{\mathbf R}^d;\, x \cdot \w>0\}$. Assume moreover that $\supp\mu$ is contained in $Q$ and
that $\mu(H_{\w,p}) = 0$ for almost all halfspaces $H_{\w,p}$ for which $\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$. Then $\mu = 0$.
\noindent
{\it Proof.}
As before we know that $R\mu = 0$ in $\{(\w,p);\, p\ne 0\}$.
An application of Theorem D with $K=\{0\}$ completes the proof.
\medskip
We shall now discuss four Cram\'er-Wold theorems for sequences of measures,
analogous to the four theorems given above.
\noindent
{\bf Theorem 1$'$.}
Assume that $\mu_k\in M({\mathbf R}^d)$ is a sequence of measures with uniformly bounded norms, $\|\mu_k\|_M \le C$, and that for almost all halfspaces $H_{\w,p}$
\begin{equation} \label{a}
\lim_{k\rar\i} \mu_k(H_{\w,p}) = a(\w,p) .
\end{equation}
Then there exists a unique measure $\mu\in M({\mathbf R}^d)$ with $\|\mu\|_M \le C$ such that $\mu_k$ tends $C_0$-weakly to $\mu$, that is
\begin{equation} \label{thm1-prime}
\lim_{k\rar\i} \sca{\mu_k}{\vf} = \sca{\mu}{\vf} , \quad \vf \in C_0({\mathbf R}^d) .
\end{equation}
The measure $\mu$ is characterized by $R\mu = \p_p a$ on $S^{d-1}\times{\mathbf R}$, where the derivative is understood in the distribution sense. If, in addition, $\lim_{k\rar\i}\|\mu_k\|_M = \|\mu\|_M$, then (\ref{thm1-prime}) holds for all $\vf\in C_b({\mathbf R}^d)$.
\noindent
{\it Proof.}
Since $|\mu_k(H_{\w,p})| \le \|\mu_k\|_M \le C$ it follows from Lebesgue's theorem that
\begin{equation} \label{lebesgue}
\lim_{k\rar\i} \int_{S^{d-1}} \int_{{\mathbf R}} \mu_k(H_{\w,p}) \vf(\w,p) dp\, d\w
= \int_{S^{d-1}} \int_{{\mathbf R}} a(\w,p) \vf(\w,p) dp\, d\w
\end{equation}
for all $\vf \in \mathcal D(S^{d-1}\times{\mathbf R})$. Using the fact that the distribution derivative of $p\mapsto \mu_k(H_{\w,p})$ is equal to $R\mu_k(\w,\cdot)$ we obtain
\begin{align} \label{Rmuk}
\begin{split}
\lim_{k\rar\i} \sca{R\mu_k}{\vf}
& = - \lim_{k\rar\i} \int_{S^{d-1}} \int_{{\mathbf R}} \mu_k(H_{\w,p})\p_p\vf(\w,p) dp\, d\w \\
& = - \int_{S^{d-1}} \int_{{\mathbf R}} a(\w,p) \p_p\vf(\w,p) dp\, d\w = \sca{\p_pa}{\vf}
\end{split}
\end{align}
for all $\vf\in \mathcal D(S^{d-1}\times{\mathbf R})$.
Since $\|R\mu_k\|_M \le \|\mu_k\|_M \le C$ and $\mathcal D(S^{d-1}\times{\mathbf R})$ is dense in $C_0(S^{d-1}\times{\mathbf R})$ it follows that (\ref{Rmuk}) holds for all
$\vf\in C_0(S^{d-1}\times{\mathbf R})$, that is, $R\mu_k$ tends $C_0$-weakly to $\p_p a$.
Since $\|\mu_k\|_M$ is bounded we can find a subsequence $\mu_k'$ that is $C_0$-weakly convergent to some limit $\mu\in M({\mathbf R}^d)$. As we have seen, this implies that $R\mu_k'\rar R\mu$ $C_0$-weakly, hence $R\mu = \p_p a$. By Theorem A this condition determines $\mu$ uniquely, hence any convergent subsequence must converge to $\mu$, so the original sequence must in fact converge to $\mu$.
To prove the last statement assume that $\lim_{k\rar\i}\|\mu_k\|_M = \|\mu\|_M$. Let $\e > 0$ and take a continuous function $\chi$ such that $0 \le \chi \le 1$, $\chi = 0$ on $|x|<r$, and $\chi = 1$ on $|x| > r+1$, with $r$ so large that $\|\chi \mu\|_M < \e$.
Using the fact that $\|\nu\|_M \le \varliminf_{k\rar\i}\|\nu_k\|_M$ for any $\mathcal D$-weakly convergent sequence $\nu_k$ with limit $\nu$ we obtain
\begin{equation} \label{chimuk}
\varliminf_{k\rar\i} \|(1-\chi)\mu_k\|_M \ge \|(1-\chi)\mu\|_M \ge \|\mu\|_M - \e .
\end{equation}
By the assumption and by (\ref{chimuk}) we can choose $k_0$ so that
\begin{equation*}
\|\mu_k\|_M < \|\mu\|_M + \e , \quad \mathrm{and} \quad
\|(1-\chi)\mu_k\|_M > \|\mu\|_M - 2 \e
\end{equation*}
for $k> k_0$. Since $\chi\ge 0$ and $1 - \chi\ge 0$ we have
$\|\mu_k\|_M = \|\chi \mu_k\|_M + \|(1 - \chi) \mu_k\|_M$,
hence if $k > k_0$,
\begin{equation*}
\|\chi \mu_k\|_M = \|\mu_k\|_M - \|(1 - \chi) \mu_k\|_M < \|\mu\|_M + \e
- (\|\mu\|_M - 2 \e) = 3 \e .
\end{equation*}
For an arbitrary $\vf\in C_b({\mathbf R}^d)$ we now write
\begin{equation*}
\sca{\mu_k - \mu}{\vf} = \sca{\mu_k - \mu}{(1 - \chi)\vf} + \sca{\mu_k - \mu}{\chi\vf}
\end{equation*}
and observe that the first term on the right hand side tends to zero since $(1 - \chi)\vf \in C_0({\mathbf R}^d)$, and the second term can be estimated by $4\e \sup|\vf|$ since $\|\chi\mu_k\|_M < 3\e$ and $\|\chi\mu\|_M < \e$.
The proof is complete.
\noindent
{\it Remark 1.}
Since $R\mu(\w,\cdot)$ is a well defined measure on ${\mathbf R}$ for every $\w$ it follows in fact that $R\mu(\w,\cdot)$ is equal to $\p_pa(\w,\cdot)$ for almost every $\w$. If we also assume that (\ref{a}) holds for almost every $p$ for every $\w$, then we can conclude that
$$
R\mu(\w,\cdot) = \p_p a(\w,\cdot)
$$
for every $\w\in S^{d-1}$. To prove this, instead of (\ref{lebesgue}) we use the fact that
\begin{equation} \label{leb-prime}
\lim_{k\rar\i} \int_{{\mathbf R}} \mu_k(H_{\w,p}) \vf'(p) dp
= \int_{{\mathbf R}} a(\w,p) \vf'(p) dp
\end{equation}
for every $\w$ and all test functions $\vf(p)$ in $\mathcal D({\mathbf R})$. Then observe that the left hand side of
(\ref{leb-prime}) is equal to
$$
- \lim_{k\rar\i} \sca{R\mu_k(\w,\cdot)}{\vf}
$$
by (\ref{dpmuh}), and that the right hand side is equal to $- \sca{\p_p a(\w,\cdot)}{\vf}$.
\smallskip
\noindent
{\it Remark 2.}
The first part of Theorem 1$'$, if phrased in terms of the Radon transform $R$, says essentially that $R^{-1}$ is continuous in the following sense: if $\|\mu_k\|_M \le C$ and $R\mu_k$ is $C_0$-weakly convergent, then $\mu_k$ is $C_0$-weakly convergent.
\smallskip
If the measures $\mu_k$ are positive, then the assumption $\|\mu_k\|_M \le C$ in Theorem 1$'$ can be omitted:
\smallskip
\noindent
{\bf Corollary 1.}
Assume that $\mu_k$ is a sequence of non-negative measures in $M({\mathbf R}^d)$ and that $\lim_{k\rar\i} \mu_k(H_{\w,p})$ exists for almost all halfspaces $H_{\w,p}$ and is equal to $a(\w,p)$. Then the sequence $\mu_k$ converges $C_0$-weakly to a measure
$\mu\in M({\mathbf R}^d)$ satisfying $R\mu = \p_p a$.
\noindent
{\it Proof.}
The sequence $\mu_k(H_{\w,p})$ must be bounded for almost every $H_{\w,p}$ by (\ref{a}), so if we cover ${\mathbf R}^d$ by halfspaces, ${\mathbf R}^d = H_1 \cup H_2$, then $\|\mu_k\|_M \le \mu_k(H_1) + \mu_k(H_2) \le C$ for all $k$ since $\mu_k \ge 0$. The assertion therefore follows from Theorem 1$'$.
\noindent
{\it Remark.}
If the measures $\mu_k$ are not assumed to be positive measures, then the assumption $\|\mu_k\|_M \le C$ cannot be omitted. As an example, take a function $f \in \mathcal D({\mathbf R})$ with $\int f\,dx = 1$ and let $\mu_k$ be the measure with density $f_k(x) = k^2 f'(kx)$. Then $\lim_{k\rar\i}\mu_k(H) = 0$ for every halfaxis $H= (c,\pm\i)$ with $c \ne 0$, but the sequence $\mu_k$ is not $C_0$-weakly convergent, since $\sca{\mu_k}{\vf}$ tends to $-\vf'(0)$ for every continuously differentiable test function $\vf$ with compact support.
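The last statement follows by an integration by parts and the substitution $y = kx$:
\begin{equation*}
\sca{\mu_k}{\vf} = \int k^2 f'(kx)\vf(x)\, dx = -\int k f(kx)\vf'(x)\, dx
= -\int f(y)\vf'(y/k)\, dy \rar -\vf'(0)
\end{equation*}
as $k\rar\i$, since $\int f\,dy = 1$ and $\vf'$ is continuous.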
Since the case of positive measures is the most interesting in probability
theory, we shall only consider sequences of positive measures in
Theorems 2$'$ - 4$'$.
Recall that the family of non-negative measures in $M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$
is the family of Radon measures on ${\mathbf R}^d\sm\{0\}$.
If $\vf$ is a function on ${\mathbf R}^d$ whose support is bounded away from the
origin, then we say that $\vf$ is supported away from the origin.
\noindent
{\bf Theorem 2$'$.}
Let $\mu_k\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ be a sequence of non-negative measures satisfying
\begin{equation} \label{thm2prime}
\lim_{k\rar\i} \mu_k(H_{\w,p}) = a(\w,p)
\end{equation}
for almost all halfspaces $H_{\w,p}$ with $\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$, where $a(\w,p)$ is rapidly decaying in the sense that
\begin{equation*}
a(\w,p) = \mathcal O(|p|^{-m}) \quad \text{as}\ \ p\rar -\i \quad \text{for all} \ m .
\end{equation*}
Then there exists a unique $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$ such that
\begin{equation}
\lim_{k\rar\i} \sca{\mu_k}{\vf} = \sca{\mu}{\vf}
\end{equation}
for all $\vf\in C_b({\mathbf R}^d)$ that are supported away from the origin. The measure $\mu$ satisfies $R\mu = \p_p a$ in $S^{d-1}\times \{p\in{\mathbf R};\,p<0\}$ (note that only negative $p$ occur in (\ref{thm2prime})).
\smallskip
\noindent
{\it Proof.}
Let $\mu^{\e}_k$ be the restriction of $\mu_k$ to ${\mathbf R}^d\sm B_{\e}$. Covering ${\mathbf R}^d\sm B_{\e}$ by $2d$ halfspaces $H_{\w^j,-\e/d}$ as in the proof of Theorem 3b and choosing $k_0$ so large that
\begin{equation*}
\mu_k(H_{\w^j,-\e/d}) < a(\w^j,-\e/d) + 1
\end{equation*}
for all $j$ and all $k > k_0$ we obtain
\begin{equation*}
\|\mu^{\e}_k\|_M \le \sum_j \big(a(\w^j,-\e/d) + 1 \big) = C_{\e} .
\end{equation*}
Reasoning as in the proof of Theorem 1$'$ we can find a subsequence $\nu_k^{\e}$ that is $C_0$-weakly convergent to an element $\mu^{\e}$ in $M({\mathbf R}^d\sm B_{\e})$, which satisfies
\begin{equation} \label{Rmu-eps}
R\mu^{\e}(\w,p) = \p_p a(\w,p) \quad \text{in} \ \ p < -\e .
\end{equation}
Since $\mu^{\e} \ge 0$ and $a(\w,p)$ is rapidly decaying it is easy to see that $\mu_k^{\e}$ is uniformly rapidly decaying at infinity, and that the limit $\mu^{\e}$ is also rapidly decaying.
It follows from Theorem~2 that $\mu^{\e}$ is uniquely determined by (\ref{Rmu-eps}),
hence the original sequence $\mu_k^{\e}$ must tend $C_0$-weakly to $\mu^{\e}$.
Since ${\e}$ is arbitrary we obtain $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$, and since $\mu_k^{\e}$ is uniformly rapidly decaying (this is of course more than we need: it suffices to observe that $\|\chi_r\mu_k^{\e}\|_M$ tends uniformly to zero as $r\rar\i$ with the notation of Definition 1) we can replace $C_0$-weak convergence by $C_b$-weak convergence, and the proof is complete.
\smallskip
\noindent
{\bf Theorem 3$'$.}
Assume that $\mu_k \in M_{\textrm{loc}}({\mathbf R}^d \setminus \{0\})$ is a
sequence of non-negative measures
that satisfies
$\lim_{k\to\infty}\mu_k(H_{\omega,p})=a(\omega,p)$ for almost all
halfspaces $H_{\w,p}$ with $\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$, where
$p \mapsto a(\omega,p)$ is homogeneous of non-integral degree
$-\beta < 0$ for every $\omega$.
Then $\mu_k$ converges $C_b$-weakly to a measure
$\mu \in M_{\mathrm{loc}}({\mathbf R}^d \sm \{0\})$ in the sense that
$\lim_{k\to\infty}\langle\mu_k,\vf\rangle=\langle\mu,\vf\rangle$
for all $\vf \in C_b({\mathbf R}^d)$ supported away from the origin.
The measure $\mu$ is homogeneous of degree $-\beta-d$ and is uniquely
determined by the condition $R\mu=\p_p a$ in $S^{d-1}\times \{p\in{\mathbf R};\,p<0\}$.
\smallskip
\noindent
{\it Proof.}
As above we denote
the restriction of $\mu_k$ to ${\mathbf R}^d\sm B_{\e}$ by $\mu_k^{\e}$. As in the proof of Theorem~2$'$ we prove that the sequence of norms $\|\mu_k^{\e}\|_M$ is bounded by a constant $C_{\e}$. Therefore we can find a $C_0$-weakly convergent subsequence $\nu_k^{\e}$ with limit $\mu^{\e}$ satisfying $R\mu^{\e} = \p_p a$ in $p<-\e$. Since $\b$ is non-integral Theorem 3b tells us that $\mu^{\e}$ is uniquely determined by this condition. Since $\e$ is arbitrary we obtain $\mu$ in ${\mathbf R}^d\sm \{0\}$ such that
$\lim_{k\rar\i}\sca{\mu_k}{\vf} = \sca{\mu}{\vf}$
for all $\vf\in C_0({\mathbf R}^d)$ that are supported away from the origin. Finally, since $\mu_k\ge 0$ and $\lim_{p\rar-\i} a(\w,p) = 0$, it is clear that the total mass of $\mu_k$ outside the ball $\{x;\,|x|\le r\}$ tends uniformly to zero as $r\rar\i$, hence we obtain the same statement for $\vf\in C_b({\mathbf R}^d)$ that are supported away from the origin. The proof is complete.
\smallskip
\noindent
{\it Remark.} The assumptions of Theorem 3$'$ imply that $a(\w,p)$ must be $m$ times continuously differentiable in $\w$ if $m < \b$. To prove this we
use the fact that
$
\sca{\p_p a(\w,\cdot)}{\psi} = \sca{R\mu(\w,\cdot)}{\psi} = \sca{\mu}{\psi(x\cdot\w)}
$
and differentiate $m$ times with respect to $\w$ observing that
$x^{\g}\mu \in M({\mathbf R}^d\sm B_{\e})$ if $|\g| \le m < \b$ since $\mu$ is homogeneous of degree $-\b - d$.
\smallskip
\noindent
{\bf Theorem 4$'$.}
Let $\mu_k$ be a sequence of non-negative measures in $M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$, such that $\lim_{k\rar\i} \mu_k(H_{\w,p}) = a(\w,p)$ exists for almost all halfspaces
$H_{\w,p}$ for which $\overline{H_{\w,p}} \ss {\mathbf R}^d\sm\{0\}$.
Assume that there exists an open set $V\ss S^{d-1}$ such that $a(\w,p)=0$ for all $\w\in V$ and all $p<0$.
Then $\mu_k$ converges $C_0$-weakly to a measure
$\mu \in M_{\mathrm{loc}}({\mathbf R}^d \sm \{0\})$ in the sense that
$\lim_{k\to\infty}\langle\mu_k,\vf\rangle=\langle\mu,\vf\rangle$
for all $\vf \in C_0({\mathbf R}^d)$ supported away from the origin.
If $\lim_{p\rar-\i}a(\w,p) = 0$, then
$\lim_{k\to\infty}\langle\mu_k,\vf\rangle=\langle\mu,\vf\rangle$
for all $\vf \in C_b({\mathbf R}^d)$ supported away from the origin.
The measure $\mu$ is uniquely
determined by the condition $R\mu=\partial_p a$ in $S^{d-1}\times \{p\in{\mathbf R};\,p<0\}$.
\noindent
{\it Proof.}
Again we denote by $\mu_k^{\e}$ the restriction of $\mu_k$ to ${\mathbf R}^d\sm B_{\e}$, and as before we prove that $\|\mu_k^{\e}\|_M \le C_{\e}$ for every $\e > 0$. Let $\nu^{\e}$ be the limit of a $C_0$-weakly convergent subsequence of $\mu_k^{\e}$; as before $\nu^{\e}$ must satisfy the equation $R\nu^{\e} = \p_p a$ in $S^{d-1}\times \{p\in{\mathbf R};\,p<-\e\}$.
The assumptions on $a(\w,p)$ imply that $\nu^{\e}$ must be supported in a cone $Q$ satisfying the assumptions in Theorem 4.
It now follows from Theorem 4 that $\nu^{\e}$ is uniquely determined in ${\mathbf R}^d\sm B_{\e}$ by the condition $R\nu^{\e} = \p_p a$, hence we obtain in this way a measure $\mu \in M_{\mathrm{loc}}({\mathbf R}^d \sm \{0\})$ such that
$\lim_{k\to\infty}\langle\mu_k,\vf\rangle=\langle\mu,\vf\rangle$
for all $\vf \in C_0({\mathbf R}^d)$ supported away from the origin. To see that we can replace $\vf \in C_0({\mathbf R}^d)$ by $\vf \in C_b({\mathbf R}^d)$ here if $\lim_{p\rar-\i}a(\w,p) = 0$, we argue exactly as at the end of the proof of Theorem~2$'$.
\smallskip
To formulate the next corollary we need the notation $f_{(t)}(x) = t^d f(tx)$, $t>0$, for the scaling of a function $f(x)$ on ${\mathbf R}^d$. For a measure $\mu\in M({\mathbf R}^d)$ the analogous operation can be defined by
\begin{equation*}
\sca{\mu_{(t)}}{\vf} = \sca{\mu}{\vf(\cdot/t)} , \quad \vf\in C_0({\mathbf R}^d) ,
\end{equation*}
or $\mu_{(t)}(E) = \mu(tE)$, if $\mu$ is considered as a set function.
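As a quick sanity check, this scaling preserves total mass ($\int f_{(t)}\,dx = \int f\,dx$, take $\vf\equiv 1$ above). The following sketch verifies this numerically in dimension one for a Gaussian density; the density, integration grid and tolerance are illustrative choices only.

```python
import math

# Toy check (d = 1) that the scaling f_{(t)}(x) = t^d f(tx) preserves total
# mass: here f is a Gaussian density, so every f_{(t)} should integrate to 1.
def f(x):
    return math.exp(-x * x) / math.sqrt(math.pi)

def total_mass(t, lo=-40.0, hi=40.0, n=400_000):
    # Trapezoidal rule for the integral of t^d f(t x) over [lo, hi], d = 1.
    h = (hi - lo) / n
    s = 0.5 * (t * f(t * lo) + t * f(t * hi))
    for i in range(1, n):
        x = lo + i * h
        s += t * f(t * (lo + i * h))
    return s * h

for t in (0.5, 1.0, 3.0):
    assert abs(total_mass(t) - 1.0) < 1e-6
```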
\smallskip
\noindent
{\bf Corollary 2.} Let $\rho$ be a probability measure and
consider the family of measures
\begin{align}\label{eqcor21}
\mu_t = t^{\beta}l(t)\rho_{(t)}, \quad t > 0,
\end{align}
where $\beta>0$ and $l$ is a positive and measurable function which
satisfies $\lim_{t\to\infty}l(\lambda t)/l(t)=1$ for every $\lambda>0$.
Assume that
\begin{align}\label{eqcor22}
\lim_{t\to\infty}\mu_t(H_{\omega,-1})=b(\omega)
\end{align}
exists for all $\omega \in S^{d-1}$.
Assume moreover that
\vspace{3pt} \newline
(i)\phantom{i} \quad $\b$ is non-integral, or \newline
(ii) \quad $b(\w) = 0$ for all $\w$ in some open set $V\ss S^{d-1}$.
\vspace{3pt} \newline
Then $\mu_t$ converges $C_b$-weakly to some measure $\mu\in M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$
in the sense that
$\lim_{t\to\infty}
\langle \mu_t,\varphi \rangle=\langle \mu,\varphi \rangle$
for all $\varphi \in C_b({\mathbf R}^d)$ supported away from the origin, and $\mu$ is uniquely determined by
$R\mu(\w,p) = \b|p|^{-\b-1}b(\w)$ in $S^{d-1}\times \{p\in{\mathbf R};p<0\}$.
\noindent
{\it Proof.}
Noting that $H_{\w,\lb p} = \lb H_{\w,p}$ for $\lb>0$ we see that
\begin{align*}
\mu_t(H_{\omega,\lambda p})
&=t^{\beta}l(t)\rho_{(t)}(H_{\omega,\lambda p})
=t^{\beta}l(t)\rho_{(\lambda t)}(H_{\omega,p})\\
&=\lambda^{-\beta}\frac{l(t)}{l(\lambda t)}
(\lambda t)^{\beta}l(\lambda t)\rho_{(\lambda t)}(H_{\omega,p})
=\lambda^{-\beta}\frac{l(t)}{l(\lambda t)}
\mu_{\lambda t}(H_{\omega,p}) .
\end{align*}
With $\lb=1/|p|$ we can conclude from (\ref{eqcor22}) that
$\lim_{t\to\infty}\mu_t(H_{\omega,p})$ exists for all $\w\in S^{d-1}$ and all $p<0$ and that $\lim_{t\to\infty}\mu_t(H_{\omega,p})= |p|^{-\b}b(\w)$.
The assertion therefore follows immediately from Theorem~3$'$ if (i) holds, and from Theorem~4$'$ if (ii) holds.
\smallskip
A probability measure $\rho$ on ${\mathbf R}^d$ is said to be regularly varying
if there exist a $\beta>0$, a
positive and measurable function $l$ satisfying
$\lim_{t\to\infty}l(\lambda t)/l(t)=1$ for every $\lambda > 0$
and a non-zero Borel measure $\mu$ on ${\mathbf R}^d\setminus \{0\}$ with
finite mass on sets bounded away from the origin, such that
\begin{align*}
\lim_{t\to\infty}t^{\beta}l(t)P(tB)=\mu(B)
\end{align*}
for all Borel sets $B\subset{\mathbf R}^d$ bounded away from the origin
with $\mu(\partial B)=0$,
see e.g.~\cite{BDM02}, \cite{HL061}, \cite{HL062}, \cite{MS01} or \cite{R87}.
Equivalently (see e.g.~Theorems 2.1 and 3.1 in \cite{HL062})
$\rho$ is regularly varying if
$\mu_t$ in \eqref{eqcor21} converges $C_b$-weakly to some non-zero
$\mu\in M_{\textrm{loc}}({\mathbf R}^d\setminus \{0\})$ as $t\to\infty$,
in the sense that
$\lim_{t\to\infty}\langle \mu_t,\varphi\rangle
=\langle\mu,\varphi \rangle$ for all $\varphi\in C_b({\mathbf R}^d)$ supported
away from the origin.
Hence, Corollary 2 is a characterization of regular variation for probability
measures on ${\mathbf R}^d$.
With assumption (i) this characterization has been shown in \cite{BDM02}.
It follows from the counterexample in Section~5 that the assumptions in
Corollary 2 are sharp; if neither (i) nor (ii) are satisfied, then
the conclusion need not hold.
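As a concrete one-dimensional illustration, a Pareto law is regularly varying with $l\equiv 1$, and for it the rescaled tail measures stabilize already at finite $t$. The sketch below (illustrative parameter choices) checks this.

```python
# Toy illustration of regular variation (d = 1, l = 1): for the Pareto law
# with P(X > x) = x^(-beta) for x >= 1, the rescaled tail measure
# mu_t((x, inf)) = t^beta * P(X > t*x) equals x^(-beta) exactly once t*x >= 1,
# so the limit measure mu((x, inf)) = x^(-beta) is attained at finite t.
beta = 1.5

def pareto_tail(a):          # rho((a, inf)) for the Pareto(beta) law
    return min(1.0, a ** -beta)

def mu_t_tail(t, x):         # mu_t((x, inf)) = t^beta * rho((t*x, inf))
    return t ** beta * pareto_tail(t * x)

for t in (10.0, 100.0, 1000.0):
    for x in (0.5, 1.0, 2.0):
        assert abs(mu_t_tail(t, x) - x ** -beta) < 1e-12
```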
We now discuss two applications of the characterization of regular
variation given by Corollary 2.
The random variables considered are assumed to be defined on some
common probability space $(\Omega,\mathcal{F},\operatorname{P})$.
\smallskip
\noindent
{\it Random difference equations.}
Consider the random difference equation
\begin{align}\label{rde}
Y_n = M_nY_{n-1}+Q_n, \quad n \geq 1,
\end{align}
where $Y_n$ and $Q_n$ are ${\mathbf R}^d$-valued random variables and
$M_n$ is a random $d\times d$ matrix with ${\mathbf R}$-valued entries.
It is assumed that the pairs $(M_n,Q_n)$, $n\geq 1$,
are independent and identically distributed.
Under weak conditions (see e.g.~\cite{K73}) the series
\begin{align*}
R=\sum_{k=1}^{\infty}M_1\dots M_{k-1}Q_k
\end{align*}
converges $\operatorname{P}$-almost surely and
the probability distribution of
$Y_n$ converges $C_b$-weakly to that of $R$, independently of $Y_0$.
If $M_1$ and $Q_1$ have non-negative entries
and the weak (but technical) assumptions in Theorems 3 and 4 in
\cite{K73} are satisfied, then there exists a $\beta > 0$ such that
for each $\omega \in S^{d-1}$
\begin{align*}
\lim_{t\to\infty}t^{\beta}\operatorname{P}(\omega \cdot R > t)
\end{align*}
exists and is strictly positive for
$\omega \in S^{d-1}_+=\{\w \in S^{d-1};\, \w_k\geq 0,\, k=1,\dots,d\}$.
Since $R$ has non-negative entries it follows that
the limit is zero for $\omega \in -S^{d-1}_+$, so Corollary 2 implies that
the probability distribution $\rho$ given by
$\rho(E) = \operatorname{P}(R\in E)$ is regularly varying
with index $\beta$.
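This mechanism can be illustrated by simulation. The sketch below (one-dimensional, with illustrative parameter choices throughout) iterates the recursion for a lognormal $M$ chosen so that $\operatorname{E}[M^{\beta}]=1$ at $\beta=1$, and estimates the tail index of $R$ with a Hill estimator.

```python
import math
import random

# Monte Carlo sketch of Kesten's heavy-tail result for Y_n = M_n Y_{n-1} + Q_n
# in dimension one.  With M = exp(N(-1/2, 1)) we have E[M^s] = exp(s(s-1)/2),
# so E[M^beta] = 1 at beta = 1, and the stationary solution R should satisfy
# P(R > t) ~ c/t.  Chain length, sample size and k are illustrative only.
random.seed(12345)

def sample_R(n_steps=200):
    y = 0.0
    for _ in range(n_steps):
        m = math.exp(random.gauss(-0.5, 1.0))
        y = m * y + 1.0              # take Q_n = 1
    return y

samples = sorted(sample_R() for _ in range(10000))
k = 500                              # number of upper order statistics used
top = samples[-(k + 1):]             # top[0] is the (k+1)-th largest value
hill = k / sum(math.log(top[i] / top[0]) for i in range(1, k + 1))

# The true tail index is beta = 1; allow a generous Monte Carlo margin.
assert 0.5 < hill < 2.0
```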
\smallskip
\noindent
{\it Domains of attraction for sums.}
Consider a sequence $\{X_k\}_{k\geq 1}$ of independent and identically
distributed ${\mathbf R}^d$-valued random variables. Let $X$ denote a generic
element of the sequence and denote by $\rho$ its probability
distribution.
It is well-known that if there exist positive constants $a_n$ and
${\mathbf R}^d$-valued constants $b_n$ such that
the probability distribution $G_n$ of
\begin{align}\label{eqrescaledsum}
a_n^{-1}(X_1+\dots+X_n) - b_n
\end{align}
converges $C_b$-weakly to some non-degenerate probability measure $G$, then
$G$ is a stable distribution with characteristic exponent
$\beta \in (0,2]$. In this case, $\rho$ is said to belong to the
domain of attraction of $G$.
It is well known that the class of stable distributions coincides with
the possible non-degenerate limit distributions of scaled sums of the
type in \eqref{eqrescaledsum}.
By Theorem 4.2 in \cite{R62}, $\rho$ is in the domain of attraction of a
non-degenerate stable distribution $G$ with characteristic exponent
$\beta < 2$ if and only if $\rho$ is regularly varying with index
$\beta$.
From Corollary 2 it follows that if $X$ takes values in ${\mathbf R}^d_+$,
then $\rho$ is in the domain of attraction of a
non-degenerate stable distribution $G$ with characteristic exponent
$\beta < 2$ if and only if \eqref{eqcor22} holds with the same $\beta$
and some $l$.
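A simulation sketch of this correspondence, with illustrative choices throughout: for Pareto variables with index $\beta=1/2$ no centering is needed, $a_n=n^{1/\beta}=n^2$, and the distribution of the normalized sums should stabilize as $n$ grows.

```python
import random

# Monte Carlo sketch of the stable limit for sums of regularly varying
# variables.  X = U^(-2) with U uniform(0,1) is Pareto with index beta = 1/2
# (P(X > x) = x^(-1/2) for x >= 1), so with a_n = n^(1/beta) = n^2 and no
# centering, S_n / n^2 converges to a positive 1/2-stable law.  We check,
# crudely, that an upper quantile has stabilized between two values of n;
# all sizes and tolerances are generous illustrative choices.
random.seed(2024)

def normalized_sum(n):
    s = 0.0
    for _ in range(n):
        s += random.random() ** -2.0
    return s / (n * n)

def q90(values):
    v = sorted(values)
    return v[int(0.9 * len(v))]

a = q90([normalized_sum(200) for _ in range(1500)])
b = q90([normalized_sum(2000) for _ in range(1500)])
assert 0.4 < a / b < 2.5
```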
\section{Counterexamples}
We need counterexamples to show (1) that the rapid decay assumption in Theorem B cannot be omitted, (2) that the assumption that $\a$ is non-integral in Theorem C cannot be omitted, and (3) that the assumption that the cone $Q$ is contained in an open halfspace in Theorem D cannot be weakened very much. As we shall see, one sufficiently strong example meets all those requirements.
The following simple example for dimension $d=2$, which takes care of (1) and (2) in dimension $2$, has been known for a long time (see e.g.~\cite{H80}). Let $f(x)$ be the analytic function $1/(x_1+ix_2)^2$ in ${\mathbf R}^2\sm\{0\}$. We claim that $\int_L f\,ds = 0$ for each line $L$ not containing the origin. In fact, the complex line integral $\int _L f(z)\,dz = \int _L z^{-2} dz$, where we have written $z = x_1 + i x_2$, is equal to zero, because $-1/z$ is a primitive function of the integrand and it vanishes at infinity. And since $dz$ is equal to $ds$ multiplied by a non-zero complex constant along the path $L$, the claim follows.
Similarly we can of course take $f(z) = 1/z^k$ for any $k\ge 2$.
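The claim is also easy to check numerically. The sketch below (the particular line is an arbitrary illustrative choice) evaluates $\int_L f\,ds$ for $f=1/z^2$ by compactifying the line integral with the substitution $t=\tan\theta$.

```python
import cmath
import math

# Numerical check that f(x) = 1/(x_1 + i x_2)^2 integrates to zero over a
# line not containing the origin.  The line is parametrized as
# z(t) = z0 + t*w with |w| = 1 and t real, so ds = dt; t = tan(theta)
# compactifies the integral.  z0 and w are arbitrary illustrative choices
# (this line misses the origin).
z0 = 0.7 - 1.1j                  # a point on the line
w = cmath.exp(0.4j)              # unit direction of the line

def integrand(theta):
    t = math.tan(theta)
    sec2 = 1.0 / math.cos(theta) ** 2
    return sec2 / (z0 + t * w) ** 2

# Midpoint rule on (-pi/2, pi/2); the integrand extends continuously to the
# endpoints, so no special treatment is needed there.
n = 200_000
h = math.pi / n
total = sum(integrand(-math.pi / 2 + (j + 0.5) * h) for j in range(n)) * h

assert abs(total) < 1e-6         # exact value is 0
```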
A closer analysis shows that the only properties of the function $f(z)=1/z^2$ that are needed here are that it is homogeneous of degree $-d = -2$, even, and has mean zero over circles centered at the origin. So, let $d$ be arbitrary $\ge 2$ and let $f(x)$ be a $C^{\i}$ function on ${\mathbf R}^d\sm\{0\}$, homogeneous of degree $-d$, even, and with mean zero over spheres centered at the origin. We claim that $\int_L f\,ds = 0$ for every hyperplane $L$ not containing the origin. Let $G(x)$ be the vector field
$$
G(x) = f(x)(x_1,\ldots, x_d) .
$$
Using Euler's formula for homogeneous functions,
$\sum x_j\p f/\p x_j = -d\, f(x)$, it is easy to see that $\divv G = 0$ in ${\mathbf R}^d\sm\{0\}$. Let $L_{\w,p}$ be an arbitrary hyperplane with $p\ne 0$, and consider the region in ${\mathbf R}^d$ that is bounded by the pair of hyperplanes $L_{\w,p}$, $L_{\w,-p}$, and the sphere $|x| = \de$, where $\de < |p|$. Since $f$ is even, $\int_{L_{\w,p}} f\,ds = \int_{L_{\w,-p}} f\,ds$. Stokes' theorem now gives
$2 \int_{L_{\w,p}} f\,ds = - \int_{|x|=\de} f\, ds = 0$, which proves the claim.
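For completeness, the divergence computation behind this step reads
$$
\divv G = \sum_{j=1}^d \frac{\p}{\p x_j}\big(f(x)\,x_j\big)
= \sum_{j=1}^d x_j\frac{\p f}{\p x_j} + d\,f(x) = -d\,f(x) + d\,f(x) = 0 .
$$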
Moreover, for any mixed derivative $\p_x^{\b}$, $\b=(\b_1,\ldots,\b_d)$, the function $g=\p_x^{\b}f$ must have the same property. To see this, just observe that for given $\w$ any derivative $\p_x^{\b}$ can be written as a linear combination of derivatives of the form $D_{\w}^k D'^{\g}$, where $D_{\w}$ is the directional derivative in the direction $\w$ and $D'^{\g}$ is some derivative in the orthogonal subspace $\w^{\bot}$. Since $g=\p_x^{\b}f$ is homogeneous of degree $-d-|\b|$ we can in this way construct functions with $Rg(\w,p)=0$ for $p\ne 0$ satisfying $|g(x)|=\mathcal O(|x|^{-m})$ as $|x|\rar\i$ for arbitrarily large $m$.
Moreover, the function $f$ in the previous paragraph can be chosen with support in an arbitrarily small, open, symmetric cone $\G$ in ${\mathbf R}^d$. Thus the conclusion of Theorem D may be violated if the cone $Q$ is allowed to contain an arbitrarily small conic neighborhood of a closed halfspace.
If the dimension $d$ is $\ge 3$ we can even show that the statement of Theorem D is invalid if the cone $Q$ is assumed to be a halfspace. Indeed, take any non-zero function $h(x') = h(x_1,\ldots,x_{d-1})$ on ${\mathbf R}^{d-1}\sm\{0\}$, integrable at infinity,
with $Rh(\w,p)=0$ for $\w\in S^{d-2}$ and $p\ne 0$, for instance
$h(x')=\p_{x_1}(x_1x_2|x'|^{-d-1})$, and let $\mu$ be the measure $h(x')\de_0(x_d)$ for $(x',x_d)\in{\mathbf R}^d$, that is,
\begin{equation*}
\sca{\mu}{\vf} = \int_{{\mathbf R}^{d-1}} \vf(x',0) h(x') dx', \quad \vf\in C_0({\mathbf R}^d), \ \ \supp\vf\ss{\mathbf R}^d\sm\{0\} .
\end{equation*}
Then $\supp\mu$ is contained in the halfspace $\{x_d\le 0\}$, and the Radon transform $R\mu$ vanishes on the set of hyperplanes not containing the origin, or expressed differently, $\mu(H)=0$ for every closed halfspace $H$ contained in ${\mathbf R}^d\sm\{0\}$.
These examples obviously show that the assumptions of Theorems 2--4 are sharp in the corresponding ways.
Finally we show that the assumptions (i) and (ii) in Corollary 2 cannot be omitted. A similar example has recently been given by Hult and Lindskog \cite{HL061}. An advantage with the approach used here is that it makes the following
a very natural consequence of the examples given above.
\smallskip
\noindent
{\bf Proposition.}
For an arbitrary integer $m\ge 1$ there exists a non-negative function $g \in L^1({\mathbf R}^d)$ such that
\begin{align} \label{limit}
& \lim_{t\rar\i} t^{m+d} \int_{H_{\w,p}} g(tx) dx
\quad \text{exists for every halfspace $H_{\w,p} \ss {\mathbf R}^d\sm\{0\}$,}
\end{align}
but
\begin{align}
& \left\lbrace
\begin{array} {l} \label{nolimit}
\text{there exists $\vf\in\mathcal D({\mathbf R}^d)$ with\ } \text{$0\notin\supp\vf$ for which the limit} \\[4pt]
\lim_{t\rar\i} t^{m+d} \int_{{\mathbf R}^d} g(tx) \vf(x) dx \quad
\text{does not exist.}
\end{array}
\right .
\end{align}
The function $g$ can be chosen with support contained in an arbitrary open cone $\G$ satisfying $\G\cap(-\G)\ne\emptyset$.
\noindent
{\it Proof.}
Take $h\in C^{\i}(\{x\in{\mathbf R}^d;\,|x|>1\})$, not identically zero, homogeneous of degree $-d-m$ such that $\supp h\ss\G$ and $Rh(\w,p)=0$ for $|p|>1$, and set $h(x) = 0$ for $|x|<1$. To construct $g(x)$ we shall multiply $h(x)$ by a very
slowly oscillating radial function $q(x)$. This function will have the properties $|q(x)| \le 2$,
\begin{equation} \label{rlogr}
|\nabla q(x)| = o(|x|^{-1}) \quad \text{as} \ \ |x|\rar\i,
\end{equation}
and for some infinite sequence of numbers $R_k$ tending to infinity
\begin{align} \label{rings}
\begin{split}
& \text{\ \ each of the inequalities} \quad q(x) > 1 \quad \text{and} \quad q(x) < -1 \\
& \text{holds in infinitely many of the rings} \ \ R_k < |x| < 2R_k .
\end{split}
\end{align}
To construct such a function we can take
$q(x) = 2\sin(\log\log |x|)$ for $|x| > e$.
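Indeed, for $r=|x|>e$ we get
$$
|\nabla q(x)| = \frac{2|\cos(\log\log r)|}{r\log r} \le \frac{2}{r\log r} = o(|x|^{-1}) , \quad |x|\rar\i ,
$$
which is (\ref{rlogr}); and since $\log\log(2R)-\log\log R\rar 0$ as $R\rar\i$ while $\log\log r$ increases to infinity, $q$ is nearly constant on each ring $R<|x|<2R$ and comes arbitrarily close to both $2$ and $-2$ on infinitely many such rings, which gives (\ref{rings}).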
Set $g_0(x) = q(x)h(x)$ and $g(x) = g_0(x) + g_1(x)$, where
$g_1(x) = C/ |x|^{m+d}$ for $|x|>1$ and
$C$ is chosen so large that $g(x)\ge 0$.
Since $t^{d+m}g_1(tx) = C|x|^{-m-d}$ for $t>1$ and $|x|>1$,
it will be enough to prove (\ref{limit}) and (\ref{nolimit}) for $g = g_0$.
We first prove (\ref{nolimit}). Set $f_t(x) = t^{m+d}g_0(tx)$.
By the homogeneity property of $h$ we have
\begin{equation*}
f_t(x) = q(tx)h(x) , \quad \text{if\ \ } t|x| > 1 .
\end{equation*}
Take a non-negative function $\vf\in C_0({\mathbf R}^d)$, supported in a ball with radius $<1/2$ centered at a point on the sphere $|x|=2$, and so chosen that $h\ge 0$ in $\supp\vf$ and $\int h(x)\vf(x)dx = c > 0$. It follows immediately from property (\ref{rings}) that
$\int f_t(x)\vf(x)dx = \int q(tx) h(x)\vf(x)dx$ must take values $> c$ and $< -c$ infinitely many times as $t\rar\i$.
To prove (\ref{limit}) for $g=g_0$ we write
\begin{equation} \label{qh}
\int_{x\cdot\w>p} q(tx)h(x) dx =
\int_p^{\i} \big( \int_{x\cdot\w=u} q(tx)h(x) ds\big) du .
\end{equation}
Define a parametrization of the hyperplane $x\cdot\w=u$ by setting
$x(y) = u\w + A_{\w}y$ for $y\in{\mathbf R}^{d-1}$, where $A_{\w}$ is an isometric linear map from ${\mathbf R}^{d-1}$ to the subspace $\{x\in{\mathbf R}^d;\,x\cdot\w = 0\}$.
To estimate the integral
\begin{equation*}
\int_{x\cdot\w=u} q(tx)h(x) ds = \int_{{\mathbf R}^{d-1}} q(tx(y)) h(x(y)) dy
\end{equation*}
we shall use the following simple estimate. If $v(y)\in C^1$ and $k(y)$ are defined on ${\mathbf R}^{d-1}$ and $\int k(y)dy = 0$, then
\begin{equation*}
|\int v(y)k(y) dy| = |\int (v(y) - v(0)) k(y) dy| \le
\sup_y|\nabla v(y)| \int |y||k(y)| dy .
\end{equation*}
With $v(y) = q(tx(y))$ and $k(y)= h(x(y))$ we have for arbitrary $\e>0$ by (\ref{rlogr}) for $u>1$ and sufficiently large $t$
\begin{equation*}
\sup_y |\nabla v(y)| \le {\e t}/{t u }
= {\e}/{u} , \qquad \text{and}
\end{equation*}
\begin{equation*}
\int_{{\mathbf R}^{d-1}} |y||k(y)| dy = \int_{{\mathbf R}^{d-1}} |y||h(x(y))|dy
\le C_1 \int_{{\mathbf R}^{d-1}} \frac{|y| dy}{(u^2 + |y|^2)^{(d+m)/2}} = \frac {C_2}{u^m} .
\end{equation*}
Hence
\begin{equation*}
|\int_{x\cdot\w=u} q(tx)h(x) ds| \le C_2\e/u^{m+1} ,
\end{equation*}
which shows that the expression (\ref{qh}) is $< C_2 \e/ p^m$ if $t$ is large enough, and hence completes the proof.
\section{Appendix}
\noindent
For the convenience of those of our readers who are not familiar with distribution theory we give here a very short proof of the fact that a function or measure on ${\mathbf R}^d\sm\{0\}$ that is homogeneous of non-integral degree $\a<-d$ can be uniquely extended to a homogeneous distribution on ${\mathbf R}^d$. As we have seen above an important consequence of this fact is that the Fourier transform (in the sense of the theory of distributions) of the extended distribution becomes available.
The material in this section has been known since the 1950s \cite{GS}, and is now described in many textbooks on distribution theory, and of course also in \cite{H03}.
For arbitrary $\g\in{\mathbf R}$ we define a function $x_+^{\g}$ in $L^1_{\mathrm{loc}}({\mathbf R}\sm\{0\})$ by $x_+^{\g} = x^{\g}$ for $x>0$ and $x_+^{\g} = 0$ for $x<0$. If $\g>-1$ this function belongs to $L^1_{\mathrm{loc}}({\mathbf R})$. For any $\g \le -1$ we wish to extend $x_+^{\g}$ to a distribution on ${\mathbf R}$, that is, to define a distribution on ${\mathbf R}$ whose restriction to ${\mathbf R}\sm\{0\}$ is equal to $x_+^{\g}$. An easy way to solve this problem is to integrate $x_+^{\g}$ sufficiently many times, say $k$ times, to obtain a continuous function $F$, and then define the extended distribution as the $k$th order distribution derivative of $F$. Explicitly, choose $k$ such that $k + \g > 0$ and a continuous function
$F$ on ${\mathbf R}$ such that $F(x)=0$ for $x<0$ and
$F^{(k)}(x) = x^{\g}$ for $x>0$. Then define a distribution $\ti{x}_+^{\g}$ on ${\mathbf R}$ by
\begin{equation} \label{defx^g}
\sca{\ti{x}_+^{\g}}{\vf} = \sca{\p_x^{k} F}{\vf} = (-1)^{k} \sca F{\vf^{(k)}}
= (-1)^{k} \int_0^{\i} F(x) \vf^{(k)}(x) dx ,
\quad \vf\in\mathcal D({\mathbf R}) .
\end{equation}
Then $\ti{x}_+^{\g}$ is obviously a distribution on ${\mathbf R}$, and by partial integrations we verify that $\sca{\ti{x}_+^{\g}}{\vf} = \sca{{x}_+^{\g}}{\vf}$ for all $\vf\in\mathcal D({\mathbf R}\sm\{0\})$, that is,
$\ti{x}_+^{\g} = x_+^{\g}$ on ${\mathbf R}\sm\{0\}$.
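The homogeneity of the extension (established in the next paragraph for non-integral $\g$) can also be checked numerically. The sketch below is illustrative only; it pairs with the Gaussian $e^{-x^2}$ rather than a compactly supported test function, which is harmless here since $F$ grows only like $\sqrt{x}$, and verifies the predicted scaling for $\g=-3/2$, $k=2$.

```python
import math

# Numerical illustration of the homogeneity of the extension: for
# gamma = -3/2 take k = 2 and F(x) = -4*sqrt(x) for x > 0 (so F'' = x^(-3/2)),
# and pair the extension with phi(x) = exp(-x^2).  Homogeneity of degree
# gamma predicts  <ext, phi(./lam)> = lam^(gamma+1) * <ext, phi>.
gamma = -1.5

def F(x):
    return -4.0 * math.sqrt(x)

def phi_lam_dd(x, lam):
    # second derivative of phi(x/lam) with phi(x) = exp(-x^2)
    u = x / lam
    return (4.0 * u * u - 2.0) * math.exp(-u * u) / (lam * lam)

def pairing(lam, hi=30.0, n=300_000):
    # <ext, phi(./lam)> = (-1)^2 * integral_0^inf F(x) * phi_lam''(x) dx
    h = hi / n
    s = 0.5 * F(hi) * phi_lam_dd(hi, lam)   # F(0) = 0 kills the left endpoint
    for i in range(1, n):
        x = i * h
        s += F(x) * phi_lam_dd(x, lam)
    return s * h

ratio = pairing(2.0) / pairing(1.0)
assert abs(ratio - 2.0 ** (gamma + 1.0)) < 1e-4
```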
For any $\g\in{\mathbf R}$ the function $x_+^{\g}$ is homogeneous of degree $\g$ in ${\mathbf R}\sm\{0\}$. Moreover, if $\g$ is $< -1$ and non-integral, the definition \eqref{defx^g} is easily seen to produce a homogeneous distribution on ${\mathbf R}$. Indeed, if $\g$ is non-integral, then $F$ will have the form $F(x)=c\ x_+^{\g+k}$, where $c $ is a constant depending on $\g$ and $k$, hence $F$ is homogeneous, and the distribution derivative of a homogeneous distribution is homogeneous, hence the claim is proved.
Since any extension to ${\mathbf R}$ of
${x}_+^{\g}$ can differ from $\ti{x}_+^{\g}$ only by a linear combination of the Dirac measure $\de_0$ at the origin and its derivatives, and those distributions are homogeneous of integral degrees, $\ti{x}_+^{\g}$ must be the unique extension of ${x}_+^{\g}$ that is homogeneous of degree $\g$ on ${\mathbf R}$.
On the other hand, if $\g$ is a negative integer, then $F$ will contain a logarithmic factor, hence $F$ will not be homogeneous, so this argument does not prove that $\ti{x}_+^{\g}$ is homogeneous. Using the definition of a homogeneous distribution it is easy to check that $\ti{x}_+^{\g}$ is in fact not homogeneous as a distribution on ${\mathbf R}$. It follows that no homogeneous extension of ${x}_+^{\g}$ exists if $\g$ is a negative integer.
More generally, let $f(x)$ be a continuous function on ${\mathbf R}\sm\{0\}$ that has at most polynomial growth as $|x|\rar 0$, that is, $|f(x)| \le C|x|^{-k}$ for $0 < |x|<1$ and some $k$. If $F$ is a $k$th primitive of $f$, $F^{(k)}(x) = f(x)$, then $F$ is integrable up to the origin, so we can define an extension $\ti f\in \mathcal D'({\mathbf R})$ of $f$ as the $k$th distribution derivative of $F$. The same procedure can be applied to an arbitrary measure $\mu\in M_{\mathrm{loc}}({\mathbf R}\sm\{0\})$ for which the restriction $\mu_{\e}$ to $\e<|x|<1$ satisfies $\|\mu_{\e}\|_M \le C \e^{-m}$, because the second primitive of a measure on ${\mathbf R}$ is a continuous function.
Using spherical polar coordinates in ${\mathbf R}^d$ we will now use these simple arguments to construct extensions to ${\mathbf R}^d$ of homogeneous distributions defined in ${\mathbf R}^d\sm\{0\}$.
Let $f(x)$ be a locally integrable function on ${\mathbf R}^d\sm\{0\}$ that is homogeneous of non-integral degree $\g$, which we assume to be $<-d$. To construct an extension $\ti f \in\mathcal D'({\mathbf R}^d)$ of $f$ we observe that we can write
\begin{equation*}
f(r\w) = r^{\g} u(\w) , \quad r > 0, \ \w\in S^{d-1} ,
\end{equation*}
for some function $u\in L^1(S^{d-1})$. Let $k$ be the smallest integer such that $k+ \g + d >0$, and choose a constant $c=c_{k,\g,d}$ such that
$$
c\,\, ({\p}/{\p r})^k r^{k+\g+d-1} = r^{\g + d -1} .
$$
Then $G(r,\w) = c\, r_+^{k+\g+d-1} u(\w)$ is a locally integrable function on $S^{d-1}\times {\mathbf R}$ and we can define a distribution $\ti f$ of order $k$ on ${\mathbf R}^d$ by
\begin{equation} \label{ext-f}
\sca{\ti f}{\vf} = (-1)^k \int_{S^{d-1}}\int_0^{\i} G(r,\w) \p_r^k \vf(r\w) dr d\w , \quad \vf \in \mathcal D ({\mathbf R}^d).
\end{equation}
This distribution must be equal to $f$ in ${\mathbf R}^d\sm\{0\}$, because if $\vf$ is supported in ${\mathbf R}^d\sm\{0\}$, we can make $k$ partial integrations with respect to $r$ in the inner integral and obtain
$$
\sca{\ti f}{\vf} = \int_{S^{d-1}} \int_0^{\i} r^{\g} u(\w) \vf(r\w) r^{d-1} dr d\w
= \int_{{\mathbf R}^d} f(x) \vf(x) dx .
$$
It is easy to see that $\ti f$ satisfies $\sca{\ti f}{\vf(\cdot/\lb)} = \lb^{\g+d} \sca{\ti f}{\vf}$, which shows that $\ti f$
is homogeneous of degree $\g$.
It is easy to see that the same procedure can be applied if $f$ is a homogeneous measure or even a homogeneous distribution defined on ${\mathbf R}^d\sm\{0\}$. Similarly one can also show that a measure $\mu$ in $M_{\mathrm{loc}}({\mathbf R}^d\sm\{0\})$, whose restriction $\mu_{\e}$ to $\{x\in{\mathbf R}^d;\, \e<|x|<1\}$ satisfies $\|\mu_{\e}\|_M \le C \e^{-m}$ for some $m$, can be extended to a distribution on ${\mathbf R}^d$.
\bigskip
package com.amazonaws.services.resiliencehub.model.transform;
import javax.annotation.Generated;
import com.amazonaws.SdkClientException;
import com.amazonaws.services.resiliencehub.model.*;
import com.amazonaws.protocol.*;
import com.amazonaws.annotation.SdkInternalApi;
/**
* ListAppVersionResourcesRequestMarshaller
*/
@Generated("com.amazonaws:aws-java-sdk-code-generator")
@SdkInternalApi
public class ListAppVersionResourcesRequestMarshaller {
private static final MarshallingInfo<String> APPARN_BINDING = MarshallingInfo.builder(MarshallingType.STRING).marshallLocation(MarshallLocation.PAYLOAD)
.marshallLocationName("appArn").build();
private static final MarshallingInfo<String> APPVERSION_BINDING = MarshallingInfo.builder(MarshallingType.STRING)
.marshallLocation(MarshallLocation.PAYLOAD).marshallLocationName("appVersion").build();
private static final MarshallingInfo<Integer> MAXRESULTS_BINDING = MarshallingInfo.builder(MarshallingType.INTEGER)
.marshallLocation(MarshallLocation.PAYLOAD).marshallLocationName("maxResults").build();
private static final MarshallingInfo<String> NEXTTOKEN_BINDING = MarshallingInfo.builder(MarshallingType.STRING).marshallLocation(MarshallLocation.PAYLOAD)
.marshallLocationName("nextToken").build();
private static final MarshallingInfo<String> RESOLUTIONID_BINDING = MarshallingInfo.builder(MarshallingType.STRING)
.marshallLocation(MarshallLocation.PAYLOAD).marshallLocationName("resolutionId").build();
private static final ListAppVersionResourcesRequestMarshaller instance = new ListAppVersionResourcesRequestMarshaller();
public static ListAppVersionResourcesRequestMarshaller getInstance() {
return instance;
}
/**
* Marshall the given parameter object.
*/
public void marshall(ListAppVersionResourcesRequest listAppVersionResourcesRequest, ProtocolMarshaller protocolMarshaller) {
if (listAppVersionResourcesRequest == null) {
throw new SdkClientException("Invalid argument passed to marshall(...)");
}
try {
protocolMarshaller.marshall(listAppVersionResourcesRequest.getAppArn(), APPARN_BINDING);
protocolMarshaller.marshall(listAppVersionResourcesRequest.getAppVersion(), APPVERSION_BINDING);
protocolMarshaller.marshall(listAppVersionResourcesRequest.getMaxResults(), MAXRESULTS_BINDING);
protocolMarshaller.marshall(listAppVersionResourcesRequest.getNextToken(), NEXTTOKEN_BINDING);
protocolMarshaller.marshall(listAppVersionResourcesRequest.getResolutionId(), RESOLUTIONID_BINDING);
} catch (Exception e) {
throw new SdkClientException("Unable to marshall request to JSON: " + e.getMessage(), e);
}
}
}
\section{Introduction}
The angle $\gamma$ is currently the least precisely known of the CKM angles of the unitarity triangle. The average value of $\gamma$ obtained from
direct measurements is $(70^{+27}_{-29})^{\circ}$~\cite{CKMfitter}. A theoretically clean strategy to extract $\gamma$ is to exploit the interference
between $B^\pm$$\rightarrow$$D^0K^\pm$ and $B^\pm$$\rightarrow$$\overline{D^0}K^\pm$ decays in which the $D^0$ and $\overline{D^0}$ decay to the same final state
$F$~\cite{GGSZ}.
This method requires precise knowledge of how the strong-phase difference between the $D^0$ and $\overline{D^0}$ varies across the Dalitz plane.
Quantum-correlated data from CLEO-c provide a unique opportunity to measure this variation. This paper describes the procedure used when $F$ is either
$K^0\pi^+\pi^-$ or $K^0K^+K^-$, collectively denoted $K^0h^+h^-$. The strong phase distribution for $D$\footnote{Henceforth, $D$ indicates either
$D^0$ or $\overline{D^0}$}$\rightarrow$$K^0\pi^+\pi^-$ is in general different to that for $D$$\rightarrow$$K^0K^+K^-$, but the formalism is the same.
\section{Determination of $\gamma$ with $B^\pm$$\rightarrow$$D(K^0h^+h^-)K^\pm$}
The expression for the decay amplitude of $B^\pm$$\rightarrow$$D(K^0h^+h^-)K^\pm$ is:
\begin{equation}
A(B^\pm\rightarrow D(K^0h^+h^-)K^\pm) \propto f_D(x,y) + r_Be^{i(\delta_B\pm\gamma)}f_D(y,x)
\end{equation}
where $r_B\sim 0.1$ is the ratio of the magnitudes of the amplitudes of suppressed and favoured $B^\pm$$\rightarrow$$DK^\pm$ decays~\cite{CKMfitter}, $\delta_B$
is the CP-invariant strong-phase difference between interfering $B^\pm$ decays, $x$ and $y$ are the squared invariant masses $m^2(K^0h^+)$ and
$m^2(K^0h^-)$ respectively and $f_D(x,y)\equiv|f_D(x,y)|e^{i\delta_D(x,y)}$ is the $D$-decay amplitude. Neglecting CP violation, the square of $A$
contains the \textit{strong-phase difference} term $\Delta\delta_D\equiv\delta_D(x,y)-\delta_D(y,x)$. In order to extract $\gamma$ from the difference
in $B^\pm$ decay rates across the Dalitz plane, $\Delta\delta_D$ must be determined.
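Explicitly, squaring the amplitude gives, up to an overall normalization,
\[
|A|^2 \propto |f_D(x,y)|^2 + r_B^2|f_D(y,x)|^2 + 2r_B|f_D(x,y)||f_D(y,x)|\cos(\delta_B\pm\gamma-\Delta\delta_D(x,y)) ,
\]
so the interference term is controlled by $\Delta\delta_D$ together with $\delta_B\pm\gamma$.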
Previous studies have modelled the $D$ decay using several intermediate two-body resonances~\cite{BaBar2008,Belle2008}. This introduces a model
systematic uncertainty of $7$--$9^{\circ}$ on the value of $\gamma$, which is likely to be the main limitation at future $b$-physics experiments.
An alternative approach, which is the subject of the remainder of this paper, is to use external information on the strong-phase difference in a
binned fit to the distribution of events across the Dalitz plane~\cite{GGSZ}. A good choice of binning is to divide the Dalitz plane into regions of
similar $\Delta\delta_D$~\cite{BandP} based on a model of the $D$-decay. An incorrect model will not bias the measurement of $\gamma$, but will reduce
the statistical precision. An example of a particular binning, for $K^0_S\pi^+\pi^-$, is given in Figure~\ref{KsPiPiSPD}. This uses a model from
Ref.~\cite{BaBar2005}.
\begin{figure}[h]
\begin{minipage}[b]{10pc}
\includegraphics[width=10pc]{KsPiPiSPDbinsv2}
\end{minipage}
\hspace{1.5pc}
\begin{minipage}[b]{26pc}
\caption{\label{KsPiPiSPD}The strong-phase difference for $K^0_S\pi^+\pi^-$, binned uniformly in eight bins of $\Delta\delta_D$}
\end{minipage}
\end{figure}
The Dalitz plot is binned symmetrically about $y=x$; bins below this axis are numbered $i$ and those above $-i$. The number of events in the
$i^{\scriptsize{\textrm{th}}}$ bin of the $D$$\rightarrow$$K^0_Sh^+h^-$ Dalitz plot in the decay $B^\pm$$\rightarrow$$D(K^0_Sh^+h^-)K^\pm$ is dependent upon $c_i$ and
$s_i$ which are the average cosine and sine of the strong-phase difference across the $i^{\scriptsize{\textrm{th}}}$ bin~\cite{GGSZ}.
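In terms of these quantities, and with the amplitude convention above, the expected $B^-$ yield in bin $i$ is commonly written, up to an overall normalization, as
\[
N_i^- \propto K_i + r_B^2K_{-i} + 2\sqrt{K_iK_{-i}}\,(x_-c_i + y_-s_i) , \qquad
x_- = r_B\cos(\delta_B-\gamma) , \quad y_- = r_B\sin(\delta_B-\gamma) ,
\]
with an analogous expression for $B^+$; this shows explicitly how $c_i$ and $s_i$ enter the determination of $\gamma$.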
\section{Determination of $\Delta\delta_D$ with quantum-correlated decays of the $\psi(3770)$}
The $D$-mesons are quantum-correlated with an overall CP of -1, so knowledge of the CP state of one of the pair reveals the CP state of the other.
When one $D$ decays to $K^0h^+h^-$, the decay product of the other $D$ is denoted the opposite-side \textit{tag}.
Define $K_i$ as the number of events in the $i^{\scriptsize{\textrm{th}}}$ bin of the flavour-tagged $K^0_Sh^+h^-$ Dalitz plot. $c_i$ and $s_i$ can
then be expressed as:
\begin{equation}
c_i\equiv\frac{a_D^2}{\sqrt{K_iK_{-i}}}\int_i|f_D(x,y)||f_D(y,x)|\cos[\Delta\delta_D(x,y)]dxdy
\end{equation}
and
\begin{equation}
s_i\equiv\frac{a_D^2}{\sqrt{K_iK_{-i}}}\int_i|f_D(x,y)||f_D(y,x)|\sin[\Delta\delta_D(x,y)]dxdy
\end{equation}
where $a_D$ is a normalization factor. Analogous quantities, denoted $K_i'$, $c_i'$ and $s_i'$, exist for $K^0_Lh^+h^-$. To first order, $c_i'=c_i$
and $s_i'=s_i$, but there are second-order differences due to doubly Cabibbo suppressed contributions to the $D$$\rightarrow$$K^0_Lh^+h^-$ decay amplitude.
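The content of these definitions can be illustrated with a toy amplitude. In the sketch below the amplitude, the bin shape and the normalization are invented for illustration only (they come from no $D$-decay model); the Cauchy--Schwarz bound $c_i^2+s_i^2\le 1$, which the true parameters must also satisfy, is checked.

```python
import cmath
import math

# Toy numerical illustration of the bin-averaged strong-phase parameters
# c_i and s_i.  Real analyses integrate over Dalitz-plot bins; here a
# made-up amplitude f(x, y) = (1 + x) * exp(i(2x - y)) on the triangle
# 0 < y < x < 1 stands in for one bin i (its mirror image is bin -i).
def f(x, y):
    return (1.0 + x) * cmath.exp(1j * (2.0 * x - y))

n = 400
h = 1.0 / n
Ki = Kmi = cross_c = cross_s = 0.0
for ix in range(n):
    for iy in range(n):
        x = (ix + 0.5) * h
        y = (iy + 0.5) * h
        if y >= x:
            continue                       # keep only the bin y < x
        a, b = f(x, y), f(y, x)
        Ki += abs(a) ** 2 * h * h          # flavour-tagged yield K_i
        Kmi += abs(b) ** 2 * h * h         # mirror-bin yield K_{-i}
        dd = cmath.phase(a) - cmath.phase(b)   # strong-phase difference
        cross_c += abs(a) * abs(b) * math.cos(dd) * h * h
        cross_s += abs(a) * abs(b) * math.sin(dd) * h * h

norm = math.sqrt(Ki * Kmi)
c_i, s_i = cross_c / norm, cross_s / norm
assert c_i ** 2 + s_i ** 2 <= 1.0          # Cauchy-Schwarz bound
assert s_i > 0.0                           # here dd = 3(x - y) > 0 in the bin
```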
$c_i^{(\prime)}$ can be determined from CP-tagged $D$ decays to $K^0h^+h^-$. The number of events in the $i^{\scriptsize{\textrm{th}}}$ bin of a
$K^0_{S(L)}h^+h^-$ Dalitz plot, where the opposite-side tag is a CP eigenstate, is given by:
\begin{equation}
M_i^{(\prime)\pm}=h^{(\prime)}_{CP\pm}(K_i^{(\prime)}\pm(-1)^p\,2c_i^{(\prime)}\sqrt{K_i^{(\prime)}K_{-i}^{(\prime)}}+K_{-i}^{(\prime)})
\end{equation}
where $h^{(\prime)}_{CP\pm}$ is a normalization factor, $p=0$ for $K^0_Sh^+h^-$ and $1$ for $K^0_Lh^+h^-$.
In order to determine $s_i^{(\prime)}$, decays in which both $D$-mesons decay to $K^0h^+h^-$ are required. The number of events in the
$i^{\scriptsize{\textrm{th}}}$ Dalitz plot bin of $D$$\rightarrow$$K^0_Sh^+h^-$ and the $j^{\scriptsize{\textrm{th}}}$ of $D$$\rightarrow$$K^0_{S(L)}h^+h^-$ is:
\begin{equation}
M^{(\prime)}_{i,j}=h^{(\prime)}_{corr}(K_iK_{-j}^{(\prime)}+K_{-i}K_j^{(\prime)} -
(-1)^p\,2\sqrt{K_iK_{-j}^{(\prime)}K_{-i}K_j^{(\prime)}}(c_ic_j^{(\prime)}+s_is_j^{(\prime)}))
\end{equation}
where $h^{(\prime)}_{corr}$ is a normalization factor.
The parameters $c_i$, $s_i$, $c_i'$ and $s_i'$ are extracted by maximizing a likelihood function based on the expected and observed values of
$M_i^{(\prime)}$ and $M_{i,j}^{(\prime)}$. The quantities $(c_i-c_i')$ and $(s_i-s_i')$, predicted by the model, are used as a constraint in the fit.
The consequences of variations in the predicted values are evaluated when assigning systematic errors.
\section{Event Selection at CLEO-c}
An integrated luminosity of $(818\pm 8)\textrm{ pb}^{-1}$ of $\psi(3770)$$\rightarrow$$D^0\overline{D^0}$ data were recorded at CLEO-c. The tags selected for
$K^0K^+K^-$ and $K^0\pi^+\pi^-$ are shown in Tables~\ref{K0KKsels} and~\ref{K0PiPisels} respectively. In order to maximize statistics, a large number
of tags were selected. Approximately 23,000 events were selected for $K^0\pi^+\pi^-$ and approximately 1,900 for $K^0K^+K^-$. These yields were sufficient to populate all of the $\Delta\delta_D$ bins described above.
$K^\pm$, $\pi^\pm$, $\pi^0$ and $\eta$ particles were selected using kinematic and reconstruction quality criteria~\cite{CLEO}. Suitable invariant
mass cuts were used to select composite particles; for example, for the $\omega$$\rightarrow$$\pi^+\pi^-\pi^0$ decay, the invariant mass of the
$\pi^+\pi^-\pi^0$ system was constrained to lie within 20 MeV of the nominal $\omega$ mass.
For tags containing a $K^0_L$, a different approach was used, because $\sim97\%$ of $K^0_L$ particles escape the CLEO-c detector. All other particles
in the $K^0_L$ tag were reconstructed and the square of the missing mass, $m_{miss}^2$, was computed. Events could then be accepted or rejected
depending on where they lay in the $m_{miss}^2$ distribution.
All raw yields were efficiency-corrected and background-subtracted. Flat backgrounds were estimated using sidebands and peaking backgrounds were
estimated from Monte Carlo. The CLEO-c environment is clean; most backgrounds were low, between 1 and 10\%.
\begin{table}[h]
\caption{\label{K0KKsels}Tags selected for $K^0K^+K^-$}
\scriptsize
\begin{center}
\begin{tabular}{p{7.5pc} p{30pc}}
\br
Tag Group & Opposite-Side Tags\\
\mr
$K^0_SK^+K^-$ vs CP+ & $K^+K^-$, $\pi^+\pi^-$, $K^0_S\pi^0\pi^0$, $K^0_L\pi^0$, $K^0_L\eta(\gamma\gamma)$, $K^0_L\omega(\pi^+\pi^-\pi^0)$,
$K^0_L\eta(\pi^+\pi^-\pi^0)$, $K^0_L\eta'(\pi^+\pi^-\eta)$\\
$K^0_SK^+K^-$ vs CP- & $K^0_S\pi^0$, $K^0_S\eta(\gamma\gamma)$, $K^0_S\omega(\pi^+\pi^-\pi^0)$, $K^0_S\eta(\pi^+\pi^-\pi^0)$,
$K^0_S\eta'(\pi^+\pi^-\eta)$, $K^0_L\pi^0\pi^0$\\
$K^0_SK^+K^-$ vs $K^0h^+h^-$ & $K^0_SK^+K^-$, $K^0_LK^+K^-$, $K^0_S\pi^+\pi^-$, $K^0_L\pi^+\pi^-$\\
$K^0_SK^+K^-$ vs Flavour & $K^\pm\pi^\mp$, $K^\pm\pi^\mp\pi^0$\\
$K^0_LK^+K^-$ vs CP+ & $K^+K^-$, $\pi^+\pi^-$, $K^0_S\pi^0\pi^0$\\
$K^0_LK^+K^-$ vs CP- & $K^0_S\pi^0$, $K^0_S\eta(\gamma\gamma)$, $K^0_S\omega(\pi^+\pi^-\pi^0)$, $K^0_S\eta(\pi^+\pi^-\pi^0)$,
$K^0_S\eta'(\pi^+\pi^-\eta)$\\
$K^0_LK^+K^-$ vs $K^0h^+h^-$ & $K^0_S\pi^+\pi^-$\\
$K^0_LK^+K^-$ vs Flavour & $K^\pm\pi^\mp$, $K^\pm\pi^\mp\pi^0$\\
\br
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\caption{\label{K0PiPisels}Tags selected for $K^0\pi^+\pi^-$}
\scriptsize
\begin{center}
\begin{tabular}{p{7.5pc} p{17pc}}
\br
Tag Group & Opposite-Side Tags\\
\mr
$K^0_S\pi^+\pi^-$ vs CP+ & $K^+K^-$, $\pi^+\pi^-$, $K^0_S\pi^0\pi^0$, $K^0_L\pi^0$\\
$K^0_S\pi^+\pi^-$ vs CP- & $K^0_S\pi^0$, $K^0_S\eta(\gamma\gamma)$, $K^0_S\omega(\pi^+\pi^-\pi^0)$\\
$K^0_S\pi^+\pi^-$ vs $K^0h^+h^-$ & $K^0_S\pi^+\pi^-$, $K^0_L\pi^+\pi^-$\\
$K^0_S\pi^+\pi^-$ vs Flavour & $K^\pm\pi^\mp$, $K^\pm\pi^\mp\pi^0$, $K^\pm\pi^\mp\pi^\pm\pi^\mp$, $K^\pm e^\mp\nu_e$\\
$K^0_L\pi^+\pi^-$ vs CP+ & $K^+K^-$, $\pi^+\pi^-$\\
$K^0_L\pi^+\pi^-$ vs CP- & $K^0_S\pi^0$, $K^0_S\eta(\gamma\gamma)$\\
$K^0_L\pi^+\pi^-$ vs Flavour & $K^\pm\pi^\mp$, $K^\pm\pi^\mp\pi^0$, $K^\pm\pi^\mp\pi^\pm\pi^\mp$\\
\br
\end{tabular}
\end{center}
\end{table}
\section{Results for $D$$\rightarrow$$K^0\pi^+\pi^-$}
Fit and predicted values of $c_i$ and $s_i$ for $D^0$$\rightarrow$$K^0_S\pi^+\pi^-$ are shown in Figure~\ref{ci_fvm}. Systematic errors, including those taking
into account variations in the model, are relatively small.
\begin{figure}[h]
\begin{minipage}[b]{10pc}
\includegraphics[width=10pc]{ci_fitVsModel}
\end{minipage}
\hspace{1.5pc}
\begin{minipage}[b]{26pc}
\caption{\label{ci_fvm}Fit results and model predictions for $c_i$ and $s_i$ for $D$$\rightarrow$$K^0_S\pi^+\pi^-$. Dots indicate fit results and stars indicate
model predictions.}
\end{minipage}
\end{figure}
In order to understand the impact these results have on the $\gamma$ measurement, a toy Monte Carlo study of $B^\pm$$\rightarrow$$DK^\pm$ was performed, with
enough data so the statistical uncertainty associated with B decays was minimal. $r_B$, $\delta_B$ and $\gamma$ were fit, with initial values
respectively of 0.1, $130^{\circ}$ and $60^{\circ}$. The uncertainty on $\gamma$ which propagates through from the uncertainty on $c_i$ and $s_i$ was
found to be $1.7^{\circ}$~\cite{CLEO}. This is a large improvement on the current model-dependent determinations.
\section{Preliminary Results for $D$$\rightarrow$$K^0K^+K^-$}
Studies of $D$$\rightarrow$$K^0K^+K^-$ are ongoing. Combined $K_{(S,L)}^0K^+K^-$ CP-tagged Dalitz plots are shown in Figure~\ref{K0KKDPs} and exhibit striking
differences; this is a consequence of the entanglement of the $D^0\overline{D^0}$ system. $K^0_SK^+K^-$ against CP+ tags is in a CP- state, hence
decays predominantly via $K^0_S\phi$. Similarly, $K^0_LK^+K^-$ against CP- tags mostly decays via $K^0_L\phi$. The $\phi$ resonance is narrow so most
points lie close to $m_{K^+K^-}^2 = m_\phi^2$. This effect is not seen for the other CP-tagged data because the $K^0\phi$ channel is not present.
\begin{figure}[h]
\includegraphics[width=25pc]{KsKKDPs}
\hspace{1pc}
\begin{minipage}[b]{11.5pc}
\caption{\label{K0KKDPs}Dalitz plots for CP-tagged $K^0K^+K^-$. \\ (a): $K^0_SK^+K^-$~vs~CP+ and $K^0_LK^+K^-$~vs~CP-. \\ (b): $K^0_SK^+K^-$~vs~CP-
and $K^0_LK^+K^-$~vs~CP+. Dots indicate data points. The physical Dalitz plot boundary is delimited by a solid line.}
\end{minipage}
\end{figure}
\section*{References}
Q: Can't find correct syntax to forward SSH keys

I'm trying to build a custom container with Buildah via a Dockerfile that will run some tasks in Celery, but the tasks need access to a library available in a private repository on our local Gitlab instance. It works if I copy the library from a directory I cloned locally, but it would be best if I could just clone a copy to the container in the Dockerfile. However, I can't get the git clone to work inside the Dockerfile when trying to build it in Buildah. It doesn't seem to be able to read my SSH keys, which are stored on the host at ~/.ssh/id_rsa. I'm trying to follow this from the Buildah man page:
--ssh=default|id[=socket>|<key>[,<key>]
SSH agent socket or keys to expose to the build. The socket path can be left empty to use the
value of default=$SSH_AUTH_SOCK
To later use the ssh agent, use the --mount flag in a RUN instruction within a Containerfile:
RUN --mount=type=secret,id=id mycmd
So in my Dockerfile:
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan -t ed25519 gitlab.mycompany.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
And when I try to build it in Buildah:
buildah build --ssh=default -f celery/Dockerfile -t celery
And the error when Buildah gets to the step where it's trying to clone the git repository:
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
error building at STEP "RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library": error while running runtime: exit status 128
Finished
git clones work correctly using my default SSH keys on my host, but whatever I'm doing to access the keys when building the Dockerfile in Buildah isn't working correctly. What do I need to change to use the SSH keys inside of Buildah?
PS Buildah version, on RHEL8:
$ buildah -v
buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
EDIT: So I figured out how to get it to work via the --secret flag. Dockerfile:
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
Command line:
buildah build --secret id=id_rsa,src=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
This works, although only once. When I try to run this command next in the Dockerfile:
WORKDIR /opt/library
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git fetch --all --tags --prune
I get the following error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0755 for '/run/secrets/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/run/secrets/id_rsa": bad permissions
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Looks like I'll have to figure out how to set permissions on the secret file. But I still have no idea on how to get the --ssh flag to work correctly, which should be easier than doing all this stuff with the secret file.
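One possible workaround for the permissions issue (a sketch only — untested here, reusing the secret id and repository URL from the examples above): copy the world-readable mounted secret to a private path, chmod it to 600, use it, then remove it, all in one RUN step:

```dockerfile
# Sketch, not verified: the mounted secret at /run/secrets/id_rsa is too
# open for ssh, so copy it somewhere private, tighten the mode, clone,
# then delete the copy so it does not persist in the image layer.
RUN --mount=type=secret,id=id_rsa \
    mkdir -p -m 0700 ~/.ssh && \
    cp /run/secrets/id_rsa ~/.ssh/id_rsa && \
    chmod 600 ~/.ssh/id_rsa && \
    GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa" \
      git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
    rm ~/.ssh/id_rsa
```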
EDIT 2: And here is how I managed to run multiple commands that contact the private Gitlab repository - Dockerfile:
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa"
RUN --mount=type=secret,id=id_rsa git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
Still not as convenient as figuring out the correct syntax for the --ssh flag, but it works.
A: I eventually figured out how to format this to get the --ssh flag to work. Although I've now updated to version 1.27.2, so maybe it was a bug fix.
$ buildah -v
buildah version 1.27.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
But here is how I formatted the buildah command:
buildah build --ssh id=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
And here is the git fetch line in the Dockerfile:
RUN --mount=type=ssh,id=id git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
I don't know why --ssh=default doesn't automatically pull ~/.ssh/id_rsa, but manually specifying that file in this way works.
require 'spec_helper'
describe Function::Predicate::LessThanOrEqualTo, '.call' do
subject { object.call(left, right) }
let(:object) { described_class }
context 'when left is equal to right' do
let(:left) { 1 }
let(:right) { 1 }
it { should be(true) }
end
context 'when left is greater than right' do
let(:left) { 2 }
let(:right) { 1 }
it { should be(false) }
end
context 'when left is less than right' do
let(:left) { 1 }
let(:right) { 2 }
it { should be(true) }
end
end
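A hypothetical implementation consistent with the spec above (the real `Function::Predicate::LessThanOrEqualTo` class is not shown in this file; this is just the minimal class that would satisfy these examples):

```ruby
# Minimal sketch: the predicate delegates to Ruby's <= operator,
# which matches all three contexts in the spec above.
module Function
  module Predicate
    class LessThanOrEqualTo
      def self.call(left, right)
        left <= right
      end
    end
  end
end
```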
Los Angeles, California - News broke last week that a political data firm for the 2016 Trump campaign accessed private Facebook user data and then utilized it for targeting ads. Facebook CEO Mark Zuckerberg apologized for the "breach of trust" as a #DeleteFacebook movement began and amid reports that Cambridge Analytica was swept into Special Counsel Robert Mueller's investigation into election meddling. USC experts believe that for social media users' protection, that is not how any of this should work.
Safiya U. Noble is an expert on the ways that digital media impacts and intersects with issues of race, gender, culture, and technology design. Her new monograph on racist and sexist algorithmic bias in commercial search engines is entitled Algorithms of Oppression: How Search Engines Reinforce Racism. She is an assistant professor at the USC Annenberg School for Communication and Journalism.
Did Cambridge Analytica's use of Facebook affect the election?
Emilio Ferrara has studied how bots on Twitter were utilized by Russians to disrupt political discourse online and possibly to influence elections in the United States and France. Ferrara, a computer scientist at the Information Sciences Institute at the USC Viterbi School of Engineering, is an expert in cybersecurity and big data.
'Notice and choice' refers to the presentation of terms by a company, generally delivered to the end-user through privacy policies or terms of service. Ideally, users could elect to decline or modify the collection of data, but more often users that opt out of data collection are either denied the service or product or provided with a significantly degraded service. Opting out of social media services such as Facebook is often not a viable choice if consumers want to stay up-to-date regarding news, employment opportunities, social gatherings or shared interests.
Valerie Barreiro is an expert in social media law, entertainment law and intellectual property. She is a visiting assistant professor at the USC Gould School of Law where she directs the Intellectual Property and Technology Law Clinic.
"It is clear that trust needs to be built. In one story I saw, Facebook responded that 'when users sign up for Facebook, they agree to this.' This is really about the fine print, like when people sign up for a credit card or a bank account. But how many people actually read that and consciously accept it?
Ali Abbas is an ethics and decision-making expert who directs the Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business and USC Viterbi School. On March 30, he will host a conference at USC, "Next Generation Ethics," largely focused on business and tech ethics for issues such as driverless cars, artificial intelligence and machine learning.
Mark Marino is an expert in fake news, social media, and the cultural phenomenon of the #selfie. An associate professor at the USC Dornsife College, he directs the Humanities and Critical Code Studies Lab.
SYNONYM
#### According to
Index Fungorum
#### Published in
null
#### Original name
Rozites pallida E. Horak & G.M. Taylor
### Remarks
null
from __future__ import absolute_import
import tempfile
from . import track
from . import settings
from .hub import Hub
from . import helpers
from . import upload
from .genomes_file import GenomesFile
from .genome import Genome
from .assembly import Assembly
from .groups import GroupsFile, GroupDefinition
from .trackdb import TrackDb
from .track import BaseTrack, Track, SubGroupDefinition, CompositeTrack, \
ViewTrack, SuperTrack, AggregateTrack
from .version import version as __version__
def default_hub(hub_name, genome, email, short_label=None, long_label=None, defaultPos=None):
"""
Returns a fully-connected set of hub components using default filenames.
Parameters
----------
hub_name : str
Name of the hub
genome : str
Assembly name (hg38, dm6, etc)
email : str
Email to include with hub.
short_label : str
Short label for the hub. If None, defaults to the value of `hub_name`
long_label : str
Long label for the hub. If None, defaults to the value of `short_label`.
defaultPos : str
Default position for the hub
"""
if short_label is None:
short_label = hub_name
if long_label is None:
long_label = short_label
hub = Hub(
hub=hub_name,
short_label=short_label,
long_label=long_label,
email=email)
genome_kwargs = {}
if defaultPos:
genome_kwargs['defaultPos'] = defaultPos
genome = Genome(genome, **genome_kwargs)
genomes_file = GenomesFile()
trackdb = TrackDb()
hub.add_genomes_file(genomes_file)
genomes_file.add_genome(genome)
genome.add_trackdb(trackdb)
return hub, genomes_file, genome, trackdb
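The chain of `add_*` calls above is what makes the returned components "fully-connected": each call stores the child on the parent and sets the child's parent pointer, so the whole hub renders as one linked tree. A minimal self-contained sketch of that wiring pattern (plain stand-in classes, not the real trackhub ones) looks like this:

```python
# Stand-in for the parent/child wiring performed by add_genomes_file(),
# add_genome() and add_trackdb() above; the real classes carry much more
# state, but the linking idea is the same.
class Component:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def add_child(self, child):
        # Link both directions: child knows its parent, parent lists child.
        child.parent = self
        self.children.append(child)
        return child

hub = Component("myhub")
genomes_file = hub.add_child(Component("genomes.txt"))
genome = genomes_file.add_child(Component("hg38"))
trackdb = genome.add_child(Component("trackDb.txt"))

# Walking parent pointers from the trackdb reaches the hub.
assert trackdb.parent.parent.parent is hub
```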
Jonathan Watt (born 11 September 1937) is a former English first-class cricketer.
Born at Eastbourne, Watt made two appearances in first-class cricket for L. C. Stevens' XI against Cambridge University at Eastbourne in 1960 and 1961. He scored 69 runs across his two matches, with a high score of 34.
References
External links
1937 births
Living people
Sportspeople from Eastbourne
English cricketers
L. C. Stevens' XI cricketers | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 315 |
The Interwetten European Darts Grand Prix 2022 was a ranking tournament in darts and, after a one-year break, was again held by the Professional Darts Corporation from 20 to 22 May 2022. It was the seventh event of the European Darts Tour 2022, which in turn was part of the PDC Pro Tour 2022. It was staged for the first time in the Hanns-Martin-Schleyer-Halle in Stuttgart. The tournament had originally been due to take place at its traditional venue, the Glaspalast in Sindelfingen, which however was unavailable because it was being used to house Ukrainian refugees.
Format
The tournament was played in a knockout format. The early rounds were played as best of 11 legs, the semi-finals as best of 13 legs and the final as best of 15 legs.
Every leg was played in double-out mode.
Prize money
A total of £140,000 in prize money was awarded at the tournament, distributed among the participants as follows:
Participants
The following players qualified for the tournament:
The top 16 of the PDC Pro Tour Order of Merit as of 7 April 2022
24 winners of a Tour Card Holder Qualifier on 8 March 2022
The top 2 German players in the PDC Pro Tour Order of Merit as of 7 March 2022
2 winners of a Host Nation Qualifier on 22 April 2022
1 winner of an Associate Member Qualifier on 22 April 2022
1 winner of a PDC Nordic & Baltic Qualifier on 19 February 2022
1 winner of an East Europe Qualifier on 23 April 2022
PDC Pro Tour Order of Merit
Places 1–16
Gerwyn Price
Michael van Gerwen
José de Sousa
Peter Wright
Joe Cullen
Rob Cross
Michael Smith
Dimitri Van den Bergh
Damon Heta
Ryan Searle
Luke Humphries
Dirk van Duijvenbode
Krzysztof Ratajski
Daryl Gurney
Jonny Clayton
Brendan Dolan
Martin Schindler
Tour Card Qualifier
Ryan Meikle
Danny Jansen
Stephen Bunting
Martijn Kleermaker
Eddie Lovely
Radek Szagański
Martin Lukeman
Mickey Mansell
Jim Williams
Ryan Joyce
Danny Noppert
Nathan Rafferty
Niels Zonneveld
Joe Murnan
Kim Huybrechts
Rowby-John Rodriguez
Adrian Lewis
Andrew Gilding
Callan Rydz
John Michael
Adam Gawlas
Luke Woodhouse
Madars Razma
Ron Meulenkamp
Associate Qualifier
Jelle Klaasen
Stefan Bellmont
Highest-placed German players in the Order of Merit
Gabriel Clemens
Host Nation Qualifier
Dragutin Horvat
Lukas Wenig
Nordic & Baltic Qualifier
Johan Engström
East Europe Qualifier
Karel Sedláček
Tournament progress
Broadcast
In the German-speaking region, the streaming service DAZN broadcast the event.
External links
Report on dartn.de
Report on darts1.de
References
2022
Sporting event in Stuttgart
Hanns-Martin-Schleyer-Halle
European Darts Tour 2022
\section{Introduction}\label{intro}
For $N=1,2,3,\cdots$ a fixed positive integer let $\mathbb{M}$ denote the associative algebra
of square matrices of size $N\times N$ with complex entries.
Denote by $\mathbb{M}[x]$ the associative algebra of matrix valued polynomials.
A matrix valued weight function $W$ on some open interval $(a,b)$, with $-\infty\leq a<b\leq \infty$,
assigns to each $x\in(a,b)$ a selfadjoint matrix $W(x)\in\mathbb{M}$ (so $W(x)^{\dagger}=W(x)$),
which is positive definite (denoted $W(x)>0$) almost everywhere on $(a,b)$,
such that the matrix valued moments
\[ \int_a^b x^n W(x)dx \]
are finite (and selfadjoint) for all $n\in{\mathbb N}$.
Such a weight function defines a sesquilinear matrix valued form
\[ \langle P,Q\rangle= \int_a^b P^{\dagger}(x)W(x)Q(x)dx \]
on the polynomial algebra $\mathbb{M}[x]$.
Sesquilinear in the convention of this paper amounts to antilinear in the first and linear in the second argument.
The additional properties
\[ \langle PA,Q\rangle=A^{\dagger}\langle P,Q\rangle\;,\;\langle P,QA\rangle=\langle P,Q\rangle A\;,\;
\langle P,Q\rangle^{\dagger}=\langle Q,P\rangle \]
for all $A\in\mathbb{M}$ and $P,Q\in\mathbb{M}[x]$ are trivially checked, while
\[ \langle P,P\rangle\geq0\;,\;\langle P,P\rangle=0\Leftrightarrow P=0 \]
holds for all $P\in\mathbb{M}[x]$, since $\{A\in\mathbb{M};A^{\dagger}=A,A\geq0\}$ is a convex cone,
and for $A$ in this cone $A=0\Leftrightarrow\mathrm{tr}{A}=0$.
Observe that $\langle P,P\rangle>0$ as soon as $\det P(x)\neq0$ at some point $x\in(a,b)$.
We can apply the Gram--Schmidt orthogonalization process to the
(right module for $\mathbb{M}$) basis $\{x^n;n\in{\mathbb N}\}$ of $\mathbb{M}[x]$.
By induction on $n$ we can define monic matrix valued polynomials $M_n(x)$ of degree $n$ by
\[ M_n(x)=x^n+\sum_{m=0}^{n-1}M_m(x)C_{n,m}\;,\;\langle M_m(x),x^n \rangle+\langle M_m(x),M_m(x)\rangle C_{n,m}=0 \]
for all $m<n$. Indeed, the matrix $C_{n,m}$ can be solved, because $\langle M_m,M_m\rangle>0$ and hence is invertible.
Since $\langle M_m,M_n\rangle=0$ for $m\neq n$ by construction any matrix valued polynomial $P(x)$ has a unique expansion
\[ P(x)=\sum_n M_n(x)C_n\;,\;\langle M_n,P \rangle=\langle M_n,M_n\rangle C_n \]
in terms of the basis $\{M_n;n\in{\mathbb N}\}$ of the monic orthogonal matrix valued polynomials.
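For instance, the first step of this recursion can be written out in closed form, as a direct specialization of the formulas above. Taking $M_0(x)=1$ (the identity matrix), the defining condition $\langle M_0,x\rangle+\langle M_0,M_0\rangle C_{1,0}=0$ gives

```latex
\[ M_1(x)=x+C_{1,0}\;,\;
   C_{1,0}=-\Bigl(\int_a^b W(x)dx\Bigr)^{-1}\int_a^b xW(x)dx, \]
```

so already for $M_1$ the matrix valued nature of the weight enters through the inverse of the zeroth moment.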
The theory of matrix valued orthogonal polynomials was initiated by Krein \cite{Krein1}, \cite{Krein2},
and further developed by Geronimo \cite{Geronimo}, Duran \cite{Duran}, Gr\"{u}nbaum and Tirao \cite{Grunbaum--Tirao} and others.
In the scalar case $N=1$ with a non negative weight function $w(x)$ on the interval $(a,b)$
the system of monic orthogonal polynomials $p_n(x)$ has been the subject of an extensive study
in mathematical analysis over the past two centuries \cite{Szego}.
The classical orthogonal polynomials with weight functions
\[ w(x)=e^{-x^2/2}\;,\;w(x)=x^{\alpha}e^{-x}\;,\;w(x)=(1-x)^{\alpha}(1+x)^{\beta} \]
on the intervals $(-\infty,\infty)$, $(0,\infty)$, $(-1,1)$ for $\alpha,\beta>-1$
give rise to the Hermite, Laguerre and Jacobi polynomials respectively.
These three classes of orthogonal polynomials $p_n(x)$ are also eigenfunctions with eigenvalue $\lambda_n$
of a second order differential operator. Orthogonal polynomials with this additional
Sturm--Liouville property were characterized by Bochner \cite{Bochner},
who found besides the classical examples certain polynomials related to the Bessel function $J_{n+\frac12}(x)$.
In the matrix setting $N\geq1$ the question studied by Bochner was taken up by Duran \cite{Duran}
and further studied by Duran and Gr\"{u}nbaum \cite{Duran--Grunbaum}, and Gr\"{u}nbaum and Tirao \cite{Grunbaum--Tirao},
but a full list of matrix valued weight functions $W(x)$ with the Sturm--Liouville property
seems to be out of reach until now. Examples of matrix valued orthogonal polynomials
with the Sturm--Liouville property have been found using harmonic analysis for compact Gelfand pairs,
notably for the example $(\mathrm{SU}(2)\times\mathrm{SU}(2),\mathrm{SU}(2))$ (diagonally embedded) by Koornwinder \cite{Koornwinder}
and by Koelink, van Pruijssen and Rom\'{a}n \cite{Koelink--van Pruijssen--Roman1}, \cite{Koelink--van Pruijssen--Roman2},
and for the example $(\mathrm{SU}(3),\mathrm{U}(2))$ by Gr\"{u}nbaum, Pacharoni and Tirao
\cite{Grunbaum--Pacharoni--Tirao1}, \cite{Grunbaum--Pacharoni--Tirao2}.
The main goal of this paper is a uniform construction of a class of matrix valued orthogonal polynomials
with the Sturm--Liouville property, obtained using harmonic analysis for compact Lie groups.
More specifically, let $G$ be a compact connected Lie group, $K$ a closed connected subgroup
and $F$ a non empty face of the cone $P^+_K$ of dominant weights of $K$.
We say that $(G,K,F)$ is a multiplicity free system if for each irreducible representation $\pi^K_{\mu}$
of $K$ with highest weight $\mu\in F$ the induced representation $\mathrm{Ind}_K^G(\pi^K_{\mu})$
decomposes into a direct sum of irreducible representations $\pi^G_{\lambda}$ of G with highest weight $\lambda$,
with multiplicities
\[ m^{G,K}_{\lambda}(\mu)=[\pi^G_{\lambda}:\pi^K_{\mu}]\leq1 \]
for all $\lambda\in P_G^+$.
A necessary condition for $(G,K,F)$ to be a multiplicity free system is that the triple
$(G,K,\{0\})$ is multiplicity free, which is equivalent to $(G,K)$ being a Gelfand pair.
Henceforth, in this paper we shall assume that $(G,K)$ is a Gelfand pair of rank one.
The classification of such pairs is known from the work of Kr{\"a}mer \cite{Kramer} and Brion \cite{Brion1}.
The space $G/K$ is either a sphere ${\mathbb S}^n$ or a projective space ${\mathbb P}^n({\mathrm F})$
with $n\geq2$ for ${\mathrm F}={\mathbb R},{\mathbb C},{\mathbb H}$ and $n=2$ for ${\mathrm F}={\mathbb O}$.
If $G$ is the maximal connected group of isometries, then $(G,K)$ is a symmetric pair of rank one.
In addition there are two exceptional spheres ${\mathbb S}^7=\mathrm{Spin}(7)/\mathrm{G}_2$ and ${\mathbb S}^6=\mathrm{G}_2/\mathrm{SU}(3)$,
which are acted upon in a distance transitive way, and so the corresponding pairs
$(G,K)$ are still Gelfand pairs of rank one.
The homogeneous spaces $G/K$ are precisely the distance regular spaces as found by Wang \cite{Wang}.
For $(G,K)$ a rank one Gelfand pair the classification of multiplicity free triples
$(G,K,F)$ is given by the following theorem.
\begin{theorem}\label{classification theorem}
The full list of multiplicity free rank one triples $(G,K,F)$ is given by the Table \ref{table: mfs}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|r}
\hline
$G$ & $K$ & $\lambda_{\mathrm{sph}}$ & $\mathrm{faces}\;F$ \\ \hline
$\mathrm{SU}(n+1)$ & $\mathrm{S}(\mathrm{U}(n)\times\mathrm{U}(1))$ & $\varpi_1+\varpi_n$ & $\mathrm{any}$ \\ \hline
$\mathrm{SO}(2n+1)$ & $\mathrm{SO}(2n)$ & $\varpi_1$ & $\mathrm{any}$ \\ \hline
$\mathrm{SO}(2n)$ & $\mathrm{SO}(2n-1)$ & $\varpi_1$ & $\mathrm{any}$ \\ \hline
$\USp(2n)$ & $\USp(2n-2)\times\USp(2)$ & $\varpi_2$ & $\rk{F}\leq2$ \\ \hline
$\mathrm{F}_4$ & $\mathrm{Spin}(9)$ & $\varpi_1$ & $\rk F\le 1$ $\mathrm{or}$ \\
&&&$F={\mathbb N}\omega_1+{\mathbb N}\omega_2$ \\ \hline
$\mathrm{Spin}(7)$ & $\mathrm{G}_2$ & $\varpi_3$ & $\rk{F}\leq1$ \\ \hline
$\mathrm{G}_2$ & $\mathrm{SU}(3)$ & $\varpi_1$ & $\rk{F}\leq1$ \\ \hline
\end{tabular}
\caption{Multiplicity free systems.}\label{table: mfs}
\end{center}
\end{table}
In the third column we have given the highest weight $\lambda_{\mathrm{sph}}\in P_G^+$ of the fundamental
zonal spherical representation in the notation for root systems of Knapp \cite{Knapp},
except for case $(G,K)=(\mathrm{SO}(4),\mathrm{SO}(3))$ that $G$ is not simple and $\lambda_{\mathrm{sph}}=\varpi_1+\varpi_2\in P_G^+={\mathbb N}\varpi_1+{\mathbb N}\varpi_2$. Observe that $\lambda_{\mathrm{sph}}$ is a primitive vector in $P^{+}_{G}$.
\end{theorem}
The first three cases are well known through work of Weyl and Murnaghan \cite{Knapp}. In this paper we prove this theorem only in one direction, namely that all cases in the table give multiplicity free systems by working out the explicit branching rules in \S\S \ref{G2}, \ref{Spin7}, \ref{symplectic} and \ref{F4}. To exclude the case of the symplectic group with $\rk(F)\ge3$ we refer to \cite[Lem.~2.2.15]{van Pruijssen}, based on a result of Brion \cite[Prop.~3.1]{Brion1} or to \cite[Thm.~8.3]{He et al}.
The group $G$ for the two-point-homogeneous space $G/K$ admits a Cartan decomposition $G=KTK$ with $T\subset G$ a one dimensional torus with Lie algebra $\mathfrak{t}\subset\mathfrak{k}^{\perp}$. Denote $M=Z_{K}(T)$, the centralizer of $T$ in $K$. A triple $(G,K,F)$ is a multiplicity free system if and only if the restriction of $\pi^{K}_{\mu}$ to $M$ decomposes multiplicity free for all $\mu\in F$, which is proved in \cite[Prop.~2.2.9]{van Pruijssen} using the theory of spherical varieties. In the symmetric space examples this result goes back to Kostant and Camporesi \cite{Kostant, Camporesi1}.
For each of these triples $(G,K,F)$ we determine for all $\mu\in F$ the induced spectrum
\[ P^+_G(\mu)=\{\lambda\in P^+_G;m^{G,K}_{\lambda}(\mu)=1\} \]
explicitly through a case by case analysis.
We claim that if $\lambda\in P^+_G(\mu)$ then also $\lambda+\lambda_{\mathrm{sph}}\in P^+_G(\mu)$.
This can be derived from the Borel--Weil theorem.
Indeed, if $V^G_{\lambda}=H^0(G_c/B_c,L_{\lambda})$ denotes the Borel--Weil realization
of the finite dimensional representation of $G$ with highest weight $\lambda\in P_G^+$
then the intertwining projection
\[ V^G_{\lambda}\otimes V^G_{\lambda_{\mathrm{sph}}}\rightarrow V^G_{\lambda+\lambda_{\mathrm{sph}}} \]
onto the Cartan component of the tensor product is just realized
by the pointwise multiplication of holomorphic sections.
A spherical function of type $\mu\in F$ is a smooth map
$\Phi:G\rightarrow\mathrm{End}(V^K_{\mu})$ with the transformation rule
\begin{eqnarray}\label{trafo rule}
\Phi(kgk')=\pi^K_{\mu}(k)\Phi(g)\pi^K_{\mu}(k')
\end{eqnarray}
for all $g\in G$ and $k,k'\in K$.
The vector space $\mathcal{H}(G,K,\mu)$ of (say finite for $G$ on the left and the right)
spherical functions of type $\mu$ has a natural scalar valued Hermitian inner product
\[ \langle\Phi,\Phi'\rangle=\int_G\mathrm{tr}(\Phi(g)^{\dagger}\Phi'(g))dg \]
with the dagger coming from the (unique up to positive scalar) unitary structure on $V^K_{\mu}$, and $dg$ the normalized Haar measure on $G$.
Because $(G,K,F)$ is a multiplicity free system the elementary spherical functions
$\Phi^{\mu}_\lambda$ indexed by $\lambda\in P_G^+(\mu)$ form a basis for $\mathcal{H}(G,K,\mu)$,
which is orthogonal,
\[ \langle\Phi^{\mu}_{\lambda},\Phi^{\mu}_{\lambda'}\rangle=
\frac{(\dim\mu)^2}{\dim\lambda}\delta_{\lambda,\lambda'}, \]
as a consequence of the Schur orthogonality relations.
With $\phi=\phi_{\mathrm{sph}}$ the fundamental zonal spherical function of $(G,K)$, the product $\phi\Phi^{\mu}_{\lambda}$ is again a spherical function of type $\mu$, and therefore has an expansion
\begin{eqnarray}\label{eqn: expansion}
\phi\Phi^{\mu}_{\lambda}=\sum_{\lambda'}c_{\lambda,\lambda'}\Phi^{\mu}_{\lambda'}
\end{eqnarray}
with $\lambda'\in P_G^+(\mu)$.
For $\lambda,\lambda'\in P_G^+(\mu)$ the coefficient $c_{\lambda,\lambda'}=0$ unless
\begin{eqnarray}\label{eqn: inequality in well} \lambda-\lambda_{\mathrm{sph}}\preceq\lambda'\preceq\lambda+\lambda_{\mathrm{sph}},
\end{eqnarray}
where $\preceq$ is the usual partial ordering on $P^{+}_{G}$, and the leading coefficient $c_{\lambda,\lambda+\lambda_{\mathrm{sph}}}$ is non-zero. This allows one to define a degree $d:P^+_G(\mu)\rightarrow{\mathbb N}$ by
\[ d(\lambda+\lambda_{\mathrm{sph}})=d(\lambda)+1\;,\;\min\{d(P^+_G(\mu)\cap\{\lambda+{\mathbb Z}\lambda_{\mathrm{sph}}\})\}=0 \]
for all $\lambda\in P^+_G(\mu)$.
The bottom $B(\mu)$ of the induced spectrum $P_G^+(\mu)$ is defined as
\[ B(\mu)=\{\lambda\in P_G^+(\mu);d(\lambda)=0\} \]
giving $P_G^+(\mu)=B(\mu)+{\mathbb N}\lambda_{\mathrm{sph}}$ the structure of a well.
We have determined explicitly the structure of the bottom $B(\mu)$ with $\mu\in F$ for all
multiplicity free triples $(G,K,F)$ in the above table.
The first three lines of this table follow from a straightforward application of branching rules
going back to Weyl for the unitary group and Murnaghan for the orthogonal groups \cite{Knapp}.
The case of the symplectic group follows using the branching rule of Lepowsky \cite{Knapp,Lepowsky}, which under the
restriction $\rk{F}\leq2$ we are able to make completely explicit in \S\ref{symplectic}.
The remaining last two lines with the exceptional group of type $\mathrm{G}_2$ appearing turn out to be manageable as well and are treated in \S\S\ref{G2}, \ref{Spin7}.
The appropriate branching rules for the symmetric case $({\mathrm F}_{4},\mathrm{Spin}(9))$ are calculated in \S \ref{F4}, using computer algebra.
Behind all these explicit calculations is a general multiplicity formula for branching rules going back to Kostant \cite{Lepowsky, Vogan} and rediscovered by Heckman \cite{Heckman}.
On the basis of our explicit knowledge of the bottom $B(\mu)$ for $\mu\in F$
we are able to verify case by case the following degree inequality in \S\S \ref{G2}, \ref{Spin7}, \ref{symplectic} and \ref{F4}.
\begin{theorem}\label{degree inequality theorem}
The degree $d:P^+_G(\mu)\rightarrow{\mathbb N}$ satisfies the inequality
\[ d(\lambda)-1\leq d(\lambda')\leq d(\lambda)+1 \]
for all $\lambda'\in P^+_G(\mu)$ with $c_{\lambda,\lambda'}\neq0$.
\end{theorem}
As stated before, in all cases of our table the restriction of $\pi^K_{\mu}$ for $\mu\in F$ to the centralizer $M$ of
a Cartan circle $T$ in $G$ is multiplicity free.
Moreover, the irreducible constituents are indexed in a natural way by the bottom $B(\mu)$, as we shall explain in \S\ref{mfs}.
The restriction of the elementary spherical function $\Phi^{\mu}_{\lambda}$ to the Cartan circle $T$
takes values in $\mathrm{End}_M(V^K_{\mu})$, and so is block diagonal by Schur's Lemma: $\mathrm{End}_M(V^K_{\mu})\cong{\mathbb C}^{N_{\mu}}$
with $N_{\mu}$ the cardinality of the bottom $B(\mu)$. Operators on the left become vectors on the right.
In view of this isomorphism, $\Phi^{\mu}_{\lambda}(t)$ for $t\in T$ is identified with the function $\Psi^{\mu}_{\lambda}(t)$ taking values in ${\mathbb C}^{N_{\mu}}$. We define for $n\in{\mathbb N}$ the matrix valued spherical functions $\Psi^{\mu}_n(t)$,
whose columns are the vector valued functions $\Psi^{\mu}_{\lambda}(t)$ with $\lambda\in P_G^+(\mu)$ of degree $d(\lambda)=n$.
Observe that both rows and columns of the matrix $\Psi^{\mu}_n(t)$ are indexed by the bottom $B(\mu)$.
Finally we can define our matrix valued polynomials $P^{\mu}_n(x)\in\mathbb{M}[x]$ of size $N_{\mu}\times N_{\mu}$ as functions of a real variable $x$ by
\[ \Psi^{\mu}_n(t)=\Psi^{\mu}_0(t)P^{\mu}_n(x) \]
with $t\mapsto x$ a new variable, defined by $x=c\phi+(1-c)$ for some $c>0$ (with $\phi$ the fundamental
zonal spherical function as before) in order to make the orthogonality interval $x(T)$ equal to $[-1,1]$.
The crucial fact that $P^{\mu}_n(x)$ is a matrix valued polynomial in $x$ of degree $n$ with invertible
leading coefficient $D^{\mu}_n$ (inductively given by $D^{\mu}_n=D^{\mu}_{n+1}A^{\mu}_n$) follows
from a three term recurrence relation
\[ xP^{\mu}_n(x)=P^{\mu}_{n+1}(x)A^{\mu}_n+P^{\mu}_{n}(x)B^{\mu}_n+P^{\mu}_{n-1}(x)C^{\mu}_n \]
which is obtained using the expansion (\ref{eqn: expansion}).
Theorem \ref{degree inequality theorem} together with the ordering relation (\ref{eqn: inequality in well}) and $c_{\lambda,\lambda+\lambda_{{\mathrm{sph}}}}\ne0$ imply that the matrices $A^{\mu}_{n}$ are triangular with non-zero diagonal, and hence are invertible.
The matrix valued weight function is given by
\[ W^{\mu}(x)=(\Psi^{\mu}_0(t))^{\dagger}D^{\mu}\Psi^{\mu}_0(t)w(x) \]
with $w(x)=(1-x)^{\alpha}(1+x)^{\beta}$ the usual scalar weight function for the Cartan decomposition
$G=KTK$ and suitable $\alpha,\beta\in{\mathbb N}/2$ given in terms of root multiplicities.
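As a consistency check (using the standard fact that $\mathrm{G}_2/\mathrm{SU}(3)$ is the six sphere $S^6$): for the pair $(\mathrm{G}_2,\mathrm{SU}(3))$ of \S\ref{G2} the zonal spherical functions are the Gegenbauer polynomials on $S^6$, corresponding to the ultraspherical weight
\[ w(x)=(1-x)^{2}(1+x)^{2}\;,\;\alpha=\beta=2, \]
in accordance with $\alpha,\beta\in{\mathbb N}/2$.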
The matrix $D^{\mu}$ is diagonal with entries the dimensions of the irreducible constituents of the restriction of $\pi^K_{\mu}$ to $M$,
which as a set was indexed by the bottom $B(\mu)$, as it should be. The diagonal matrix $D^{\mu}$ arises from the identification
$\mathrm{End}_M(V^K_{\mu})\cong{\mathbb C}^{N_{\mu}}$ with the trace form of the left operator side
and the standard Hermitian form on the right vector side.
The matrix valued polynomials $P^{\mu}_n(x)$ are orthogonal with respect to the
weight function $W^{\mu}(x)$ and have diagonal square norms, since
\[ \langle P^{\mu}_n,P^{\mu}_{n'}\rangle_{\nu,\nu'}=\langle\Phi^{\mu}_{\lambda},\Phi^{\mu}_{\lambda'}\rangle \]
with $\lambda=\nu+n\lambda_{\mathrm{sph}},\lambda'=\nu'+n'\lambda_{\mathrm{sph}}\in P^+_G(\mu)=B(\mu)+{\mathbb N}\lambda_{\mathrm{sph}}$.
Finally, the monic orthogonal polynomials $M^{\mu}_n(x)=x^n+\cdots$ and the orthogonal polynomials
$P^{\mu}_n(x)=M^{\mu}_n(x)D^{\mu}_n$ are related by eliminating the invertible leading coefficient $D^{\mu}_n$.
By Lie algebraic methods the polynomials $P^{\mu}_n(x)$ are shown to be eigenfunctions
of a commutative algebra $\mathbb{D}^{\mu}\subset\mathbb{M}[x,\partial_{x}]$ of matrix valued differential operators
\[ DP^{\mu}_n=P^{\mu}_n\Lambda^{\mu}_n(D) \]
with $\Lambda^{\mu}_n(D)$ a diagonal eigenvalue matrix for all $D\in\mathbb{D}^{\mu}$.
The desired second order operator for the orthogonal polynomials with
the Sturm--Liouville property comes from the quadratic Casimir operator.
The dimension of the affine variety underlying the commutative algebra $\mathbb{D}^{\mu}$
is equal to the affine rank of the well $P^+_G(\mu)$.
Our explicit results on branching rules provide examples of the convexity theorem for
Hamiltonian actions of connected compact Lie groups on connected symplectic manifolds
with a proper moment map \cite{Heckman}, \cite{Guillemin--Sternberg2},
\cite{Guillemin--Sternberg3}, \cite{Guillemin--Sternberg4}, \cite{Kirwan}.
The multiplicities occur at the integral points in the moment polytopes
in accordance with the $[Q,R]=0$ principle of geometric quantization \cite{Guillemin--Sternberg1}.
In the next section we first discuss the pair $(G,K)=(G_2,\mathrm{SU}(3))$, which is an instructive
example to illustrate the various aspects of the representation theory and the construction
of the matrix valued orthogonal polynomials.
\textbf{Acknowledgement.} We thank Noud Aldenhoven for his help in programming certain branching rules, which gave us a good idea about the multiplicity freeness in the symplectic case. Furthermore, we thank Erik Koelink and Pablo Rom{\'a}n for fruitful discussions concerning matrix valued orthogonal polynomials.
\section{The pair $(G,K)=(\mathrm{G}_2,\mathrm{SU}(3))$}\label{G2}
In this section we take $G$ of type $\mathrm{G}_2$ and $K=\mathrm{SU}(3)$ the subgroup of type $A_2$.
Having the same rank the root systems $R_G$ of $G$ and $R_K$ of $K$ can be drawn in one picture,
and $R_K$ consists of the $6$ long roots.
The simple roots $\{\alpha_1,\alpha_2\}$ in $R_G^+$ and $\{\beta_1,\beta_2\}$ in $R_K^+$ are
indicated in Figure \ref{figure: roots for G2} and $P_G^+={\mathbb N}\varpi_1+{\mathbb N}\varpi_2$ is contained in $P_K^+={\mathbb N}\omega_1+{\mathbb N}\omega_2$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1]
\pgfmathsetmacro\ax{2}
\pgfmathsetmacro\ay{0}
\pgfmathsetmacro\bx{2 * cos(120)}
\pgfmathsetmacro\by{2 * sin(120)}
\pgfmathsetmacro\lax{2*\ax/3 + \bx/3}
\pgfmathsetmacro\lay{2*\ay/3 + \by/3}
\pgfmathsetmacro\lbx{\ax/3 + 2*\bx/3}
\pgfmathsetmacro\lby{\ay/3 + 2*\by/3}
\begin{scope}
\clip (0,0) circle (3);
\foreach \k in {1,...,12}
\draw[dashed] (0,0) -- (\k * 30 + 30:40);
\foreach \a in {-10,...,-1,1,2,3,4,5}
\draw[dotted] (\a*\lax-50*\lbx,\a*\lay-50*\lby) -- (\a*\lax+50*\lbx,\a*\lay+50*\lby);
\foreach \a in {-40,-39,...,40}
\draw[dotted] (-50*\lax+\a*\lbx,-50*\lay+\a*\lby)-- (50*\lax+\a*\lbx,50*\lay+\a*\lby);
\foreach \a in {-40,-39,...,40}
\draw[dotted] (-50*-\lax-50*\lbx+\a*\lbx,-50*-\lay-50*\lby+\a*\lby)-- (50*-\lax+50*\lbx+\a*\lbx,50*-\lay+50*\lby+\a*\lby);
\draw[thick,->] (0,0) -- (\ax,\ay) node[below] {\(\alpha_2=\beta_{2}\)};
\draw[thick,->] (0,0) -- (\bx,\by) node[left]{\(\beta_{1}\)};
\draw[thick,->] (0,0) -- (-\ax,-\ay);
\draw[thick,->] (0,0) -- (-\bx,-\by);
\draw[thick,->] (0,0) -- (\ax+\bx,\ay+\by) node[right] {\(\varpi_{2}\)};
\draw[thick,->] (0,0) -- (-\ax-\bx,-\ay-\by);
\draw[thick,->] (0,0) -- (\lax,\lay) node[right] {\(\omega_{2}\)};
\draw[thick,->] (0,0) -- (\lbx,\lby) node[above]{\(\varpi_{1}=\omega_{1}\)};
\draw[thick,->] (0,0) -- (-\lax,-\lay);
\draw[thick,->] (0,0) -- (-\lbx,-\lby);
\draw[thick,->] (0,0) -- (\lax-\lbx,\lay-\lby);
\draw[thick,->] (0,0) -- (\lbx-\lax,\lby-\lay) node[left] {\(\alpha_1\)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Roots for $\mathrm{G}_{2}$.}\label{figure: roots for G2}
\end{figure}
The branching rule from $G$ to $K$ is well known, see for example \cite{Heckman}.
In the picture below $s_1\in W_G$ is the orthogonal reflection in the mirror ${\mathbb R}\varpi_2$.
For $\lambda\in P_G^+$ the multiplicities $m_{\lambda}(\mu)$ for $\mu\in P_K^+$ are
supported in the gray region in the left picture. They have the familiar pattern of
the weight multiplicities for $\mathrm{SU}(3)$ as discussed in the various text books
\cite{Humphreys}, \cite{Fulton--Harris}. They are one on the outer hexagon, and increase by one
on each inner shell hexagon, until the hexagon becomes a triangle, and from that moment on they stabilize.
Hence the restriction to $K$ of any irreducible representation of $G$ with highest weight $\lambda\in P_G^+$
is multiplicity free on the two rank one faces ${\mathbb N}\omega_1$ and ${\mathbb N}\omega_2$ of the dominant cone $P_K^+$.
In other words, the triples $(\mathrm{G}_2,\mathrm{A}_2,F_i={\mathbb N}\omega_i)$ are multiplicity free for $i=1,2$,
which proves the last line of the table in Theorem{\;\ref{classification theorem}}.
The irreducible spherical representations of $G$ containing the trivial representation of $K$
have highest weight in ${\mathbb N}\varpi_1$, and $\lambda_{\mathrm{sph}}=\varpi_1$ is the fundamental
spherical weight. Given $\mu=n\omega_1\in F_1$ (and likewise $\mu=n\omega_2\in F_2$)
the corresponding induced spectrum of $G$ is multiplicity free by Frobenius reciprocity,
and by inversion of the branching rule has multiplicity one on the well shaped region
\[ P_G^+(\mu)=B(\mu)+{\mathbb N}\varpi_1\;,\;B(\mu)=\{k\varpi_1+l\varpi_2;k+l=n\} \]
with bottom $B(\mu)$. The bottom is given by a single linear relation.
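For instance, for $\mu=2\omega_1$ the above description gives
\[ B(2\omega_1)=\{2\varpi_1,\;\varpi_1+\varpi_2,\;2\varpi_2\}, \]
so a weight $\lambda=k\varpi_1+l\varpi_2$ lies in $P_G^+(2\omega_1)$ if and only if $l\leq2$ and $k+l\geq2$, in which case $d(\lambda)=k+l-2$.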
If we take $M$ the $\mathrm{SU}(2)$ group corresponding to the roots $\{\pm\alpha_2\}$
and denote by $p:P_G^+\rightarrow P_M^+={\mathbb N}(\tfrac12\alpha_2)$ the natural projection
along the spherical direction $\varpi_1$, then $p$ is a bijection from the bottom $B(\mu)$
onto the image $p(B(\mu))$, which is just the restricted spectrum $P_M^+(\mu)$ for $M$
of the irreducible representation of $K$ with highest weight $\mu$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.8]
\pgfmathsetmacro\ax{2}
\pgfmathsetmacro\ay{0}
\pgfmathsetmacro\bx{2 * cos(120)}
\pgfmathsetmacro\by{2 * sin(120)}
\pgfmathsetmacro\lax{2*\ax/3 + \bx/3}
\pgfmathsetmacro\lay{2*\ay/3 + \by/3}
\pgfmathsetmacro\lbx{\ax/3 + 2*\bx/3}
\pgfmathsetmacro\lby{\ay/3 + 2*\by/3}
\begin{scope}
\clip (-2,-2) rectangle (3+4*\ax,8);
\draw[thick, fill=lightgray] (0,3*\lby) -- (0,5*\lby) -- (3*\lax+5*\lbx,3*\lay+5*\lby) -- (5*\lax,5*\lay+3*\lby) -- (5*\lax,5*\lay) -- (3*\lax,3*\lay) --cycle;
\draw[dashed] (0,0) -- (0,10);
\draw[dashed] (0,0)-- (8*\lax,8*\lay+8*\lby);
\draw[dashed] (0,0) -- (5*\lax,5*\lay);
\draw[thick,->] (0,0) -- (\ax,\ay) node[right] {\(\beta_{2}\)};
\draw[thick,->] (0,0) -- (\bx,\by) node[above] {\(\beta_{1}\)};
\draw[thick,->] (0,0) -- (-\ax,-\ay);
\draw[thick,->] (0,0) -- (-\bx,-\by);
\draw[thick,->] (0,0) -- (\ax+\bx,\ay+\by);
\draw[thick,->] (0,0) -- (-\ax-\bx,-\ay-\by);
\draw(\lax,\lay-1/8) node[right]{\(\omega_{2}\)};
\draw[thick,->] (0,0) -- (\lax,\lay);
\draw[thick,->] (0,0) -- (\lbx,\lby);
\draw (\lbx+1/8,\lby+1/8) node[left]{\(\omega_{1}\)};
\draw[thick,->] (0,0) -- (-\lax,-\lay);
\draw[thick,->] (0,0) -- (-\lbx,-\lby);
\draw[thick,->] (0,0) -- (\lax-\lbx,\lay-\lby);
\draw[thick,->] (0,0) -- (\lbx-\lax,\lby-\lay);
\fill (0,3*\lby) circle (2pt);
\fill (0,5*\lby) circle (2pt);
\fill (3*\lax+5*\lbx,3*\lay+5*\lby)circle (2pt) node[above] {\(\lambda\)};
\fill (5*\lax,5*\lay+3*\lby)circle (2pt) node[right] {\(s_{1}\lambda\)};
\fill (5*\lax,5*\lay) circle (2pt);
\fill (3*\lax,3*\lay) circle (2pt);
\draw[thick] (0,3*\lby) -- (5*\lax,5*\lay+3*\lby);
\draw[thick] (0,5*\lby) -- (5*\lax,5*\lay);
\draw[thick] (3*\lax+5*\lbx,3*\lay+5*\lby) -- (3*\lax,3*\lay);
\draw[thick, fill=lightgray] (3*\ax,3*\lby) -- (3*\ax,9*\lby) -- (3*\lax+3*\ax,3*\lay+9*\lby) -- (3*\lax+3*\ax,3*\lay+3*\lby) -- cycle;
\draw[dashed] (0+3*\ax,0) -- (0+3*\ax,10);
\draw[dashed] (0+3*\ax,0)-- (15*\lax+3*\ax,15*\lay+15*\lby);
\draw[dashed] (0+3*\ax,0) -- (15*\lax+3*\ax,15*\lay);
\draw[thick,->] (0+3*\ax,0) -- (\ax+3*\ax,\ay) node[right] {\(\alpha_{2}\)};
\draw[thick,->] (0+3*\ax,0) -- (\bx+3*\ax,\by);
\draw[thick,->] (0+3*\ax,0) -- (-\ax+3*\ax,-\ay);
\draw[thick,->] (0+3*\ax,0) -- (-\bx+3*\ax,-\by);
\draw[thick,->] (0+3*\ax,0) -- (\ax+\bx+3*\ax,\ay+\by) node[right] {\(\varpi_{2}\)};
\draw[thick,->] (0+3*\ax,0) -- (-\ax-\bx+3*\ax,-\ay-\by);
\draw(\lax+3*\ax,\lay-1/8);
\draw[thick,->] (0+3*\ax,0) -- (\lax+3*\ax,\lay);
\draw[thick,->] (0+3*\ax,0) -- (\lbx+3*\ax,\lby);
\draw (\lbx+3*\ax+1/8,\lby+1/8) node[left]{\(\varpi_{1}\)};
\draw[thick,->] (0+3*\ax,0) -- (-\lax+3*\ax,-\lay);
\draw[thick,->] (0+3*\ax,0) -- (-\lbx+3*\ax,-\lby);
\draw[thick,->] (0+3*\ax,0) -- (\lax-\lbx+3*\ax,\lay-\lby);
\draw[thick,->] (0+3*\ax,0) -- (\lbx-\lax+3*\ax,\lby-\lay) node[left] {\(\alpha_{1}\)};
\fill (3*\ax,3*\lby) circle (2pt) node[left] {\(\mu\)};
\fill (3*\lax+3*\lbx+3*\ax,3*\lay+3*\lby) circle (2pt);
\draw (3/2*\lax+3*\ax,6*\lby) node {\(P_{\mathrm{G}_{2}}^{+}(\mu)\)};
\draw (3/2*\lax+3*\ax,7/2*\lby) node[below] {\(B(\mu)\)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Branching from $\mathrm{G}_{2}$ to $\mathrm{SU}(3)$ on the left and the $\mu$-well on the right.}\label{intro: figure G2A2 branching}
\end{figure}
A word of warning is in order about the choice of the various Cartan subalgebras.
In order to compute branching rules it is natural and convenient (as we did above)
to choose the Cartan subalgebra of $K$ contained in the Cartan subalgebra of $G$.
The other choice is that we start with a rank one Gelfand pair $(G,K)$,
and choose the Cartan circle group $T$ in $G$ perpendicular to $K$.
If $M$ is the centralizer of $T$ in $K$, then $MT$ is a subgroup in $G$ of full rank.
A maximal torus in $MT$ is then a maximal torus for $G$ as well.
But this maximal torus need not contain a maximal torus for $K$,
as is clear from the present example. It will only do so if the rank of $K$
is equal to the rank of $M$, which is equal to the rank of $G$ minus $1$,
and a maximal torus of $M$ is a maximal torus of $K$ as well.
\section{Multiplicity free systems}\label{mfs}
Connected compact irreducible Gelfand pairs $(G,K)$ have been classified by Kr\"{a}mer
for $G$ a simple Lie group and by Brion for $G$ a semisimple Lie group \cite{Kramer}, \cite{Brion1}.
We shall assume that $G$ and $K$ are connected, and that the connected space $G/K$ is also simply connected.
The pair $(G,K)$ is called rank one if the Hecke algebra $\mathcal{H}(G,K)$ of zonal spherical
(so bi-$G$-finite and bi-$K$-invariant) functions is a polynomial algebra ${\mathbb C}[\phi]$ with one generator,
the fundamental elementary zonal spherical function $\phi=\phi_{\mathrm{sph}}$.
We shall assume throughout this paper that $(G,K)$ is a rank one Gelfand pair, with $G/K$ simply connected.
The corresponding spaces $G/K$ are just the distance regular spaces found by Wang \cite{Wang}.
Indeed, for $K<G$ compact connected Lie groups the homogeneous space $G/K$ equipped with an invariant Riemannian metric
is distance transitive for the action of $G$ on $G/K$ if and only if the action of $K$ on the tangent space $T_{eK}G/K$
is transitive on the unit sphere. This is equivalent with the algebra $P(T_{eK}G/K)^K$ of polynomial invariants being
a polynomial algebra in a single generator (the quadratic norm), which in turn is equivalent with the Hecke algebra
$\mathcal{H}(G,K)$ being a polynomial algebra ${\mathbb C}[\phi]$ in the single generator $\phi=\phi_{\mathrm{sph}}$.
If $\mathcal{H}(G,K)={\mathbb C}[\phi]$ has a single generator then it is commutative as convolution algebra,
which is equivalent with $(G,K)$ being a Gelfand pair.
Let $\mathfrak{k}<\mathfrak{g}$ be the Lie algebras of $K<G$.
By definition the infinitesimal Cartan decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$
is the orthogonal decomposition with respect to minus the Killing form on $\mathfrak{g}$.
Since $(G,K)$ has rank one the adjoint action of $K$ on $\mathfrak{p}$ is transitive on the unit sphere of $\mathfrak{p}$.
Fix a (maximal Abelian) one dimensional subspace $\mathfrak{t}$ in $\mathfrak{p}$.
Any two such are clearly conjugate under $K$, and we let $T<G$ be the corresponding Cartan circle group.
Let $M<N$ be the centralizer and normalizer of $T$ in $K$ with Lie algebra $\mathfrak{m}$.
The Weyl group $W=N/M$ has order $2$ and acts on $T$ by $t\mapsto t^{\pm1}$.
The subgroup $MT$ has maximal rank in $G$, and choosing a maximal torus in $MT$ for $G$ defines a natural
restriction map from the weight lattice $P_G$ of $G$ to the weight lattice of the circle $T$.
The next result for symmetric pairs is just the Cartan--Helgason theorem.
\begin{proposition}\label{Cartan circle group}
Suppose $G$ is simply connected and $K$ is connected, so that $G/K$ is simply connected.
Then $T\cap K$ has order $2$, except for the Gelfand pair $(G,K)=(\mathrm{Spin}(7),G_2)$ where it has order $3$.
\end{proposition}
\begin{proof}
The crucial remark is that the highest weight $\lambda_{\mathrm{sph}}\in P_G^+$ of the fundamental zonal spherical
representation of $(G,K)$ after restriction to $T$ becomes a generator for the weight lattice of $T/(T\cap K)$.
For a symmetric pair $(G,K)$ with Cartan involution $\theta:G\rightarrow G$ we have $K=G^{\theta}$ and $\theta(t)=t^{-1}$ for $t\in T$.
Hence $T\cap K$ has order $2$ for $(G,K)$ a symmetric pair. In the remaining two cases we use the notation of Bourbaki \cite{Bourbaki}.
For $(G,K)=(\mathrm{Spin}(7),G_2)$ the weight lattice of $G$ is naturally identified with ${\mathbb Z}^3$
with basis $\epsilon_i$, and likewise the dual coroot lattice becomes ${\mathbb Z}^3$ with basis $e_i$.
The character lattice of $T/(T\cap K)$ has generator $\varpi_3=(\epsilon_1+\epsilon_2+\epsilon_3)/2$,
which takes the value $3$ on the generator $2(e_1+e_2+e_3)$ of the coroot lattice of $T$.
For $(G,K)=(G_2,\mathrm{SU}(3))$ the weight lattice of $G$ is naturally identified with
$\{\xi\in{\mathbb Z}^3;\xi_1+\xi_2+\xi_3=0\}$, and likewise the dual coroot lattice becomes $\{x\in{\mathbb Z}^3;x_1+x_2+x_3=0\}$.
The character lattice of $T/(T\cap K)$ has generator $\varpi_1=2\alpha_1+\alpha_2=-\epsilon_2+\epsilon_3$,
which takes the value $2$ on the generator $-e_2+e_3$ of the coroot lattice of $T$.
\end{proof}
In the next definition we explain the well shape of the induced spectrum
$P_G^+(\mu)=B(\mu)+{\mathbb N}\lambda_{\mathrm{sph}}$ with bottom $B(\mu)$.
This idea goes back to Kostant and Camporesi \cite{Kostant}, \cite{Camporesi1}.
\begin{definition}
For $\mu\in P_K^+$ the highest weight of an irreducible representation $\pi^K_{\mu}$ of $K$ the
induced representation $\mathrm{Ind}^G_K(\pi^K_{\mu})$ decomposes as a direct sum of irreducible
representations $\pi^G_{\lambda}$ of $G$ with branching multiplicities
\[ m^{G,K}_{\lambda}(\mu)=[\pi^G_{\lambda}:\pi^K_{\mu}] \]
for all $\lambda\in P_G^+$ by Frobenius reciprocity. We denote
\[ P_G^+(\mu)=\{\lambda\in P_G^+;m^{G,K}_{\lambda}(\mu)\geq1\} \]
for the induced spectrum. In the introduction we have explained using the Borel--Weil theorem that $\lambda\in P_G^+(\mu)$
implies $\lambda+\lambda_{\mathrm{sph}}\in P_G^+(\mu)$. In turn we see that $P_G^+(\mu)=B(\mu)+{\mathbb N}\lambda_{\mathrm{sph}}$
has the shape of a well with
\[ B(\mu)=\{\lambda\in P_G^+(\mu);\lambda-\lambda_{\mathrm{sph}}\notin P_G^+(\mu)\} \]
the bottom of the induced spectrum $P_G^+(\mu)$.
\end{definition}
To arrive at a good theory of matrix valued orthogonal polynomials we have to restrict ourselves
to multiplicity free triples $(G,K,\mu)$ and $(G,K,F)$ for $\mu\in P_K^+$ a suitable dominant weight for $K$
and $F$ a suitable facet of the dominant cone $P_K^+$ for $K$.
\begin{definition}
The triple $(G,K,\mu)$ with $\mu\in P_K^+$ a highest weight for $K$ is called multiplicity free if
the branching multiplicity $m_{\lambda}(\mu)\leq1$ for all $\lambda\in P_G^+$,
so if the induced representation $\mathrm{Ind}^G_K(\pi^K_{\mu})$ decomposes multiplicity free as a representation of $G$.
Likewise, $(G,K,F)$ is called a multiplicity free system with $F$ a facet of the dominant integral cone $P_K^+$ if $(G,K,\mu)$ is multiplicity free for all $\mu\in F$.
\end{definition}
Camporesi calculated the bottoms $B(\mu)$ of the well $P_G^+(\mu)$ explicitly in the first three examples of the table
in Theorem{\;\ref{classification theorem}} using the classical branching laws of Weyl for the unitary group
and Murnaghan for the orthogonal groups \cite{Camporesi1},\cite{Knapp}. In the fourth example of the symplectic group
he obtained partial results, because of the complexity of the branching law of Lepowsky (from $G$ to $K$)
\cite{Lepowsky} and of Baldoni Silva (from $K$ to $M$) \cite{Baldoni Silva} in that case.
However, in this symplectic case the restriction to a multiplicity free system $(G,K,F)$ is
just strong enough to find a completely explicit description of the bottom.
\begin{proposition}\label{prop: reducing to M}
Let $F$ be a facet of the dominant integral cone $P_K^+$.
Then the branching multiplicity $m^{G,K}_{\lambda}(\mu)\leq1$ for all $\mu\in F$ and all dominant weights $\lambda\in P_G^+$ if and only if the branching
multiplicity $m^{K,M}_{\mu}(\nu)\leq1$ for all $\mu\in F$ and
all dominant weights $\nu\in P_M^+$.
\end{proposition}
\begin{proof}
Let us complexify all our compact Lie groups $G,K,M,T$ to complex reductive algebraic groups $G_c,K_c,M_c,T_c$.
The statement of the proposition translates into the following geometric statement.
For $P_c$ the parabolic subgroup of $K_c$ with Levi component the stabilizer of $F$
the variety $G_c/P_c$ is spherical for $G_c$ if and only if $K_c/P_c$ is spherical for $M_c$.
Here we say that a variety with an action of a reductive group is spherical if the Borel subgroup has an open orbit.
Observe that (also for the non symmetric pairs) we have an infinitesimal Iwasawa decomposition
\[ \mathfrak{g}_c=\mathfrak{k}_c\oplus\mathfrak{t}_c\oplus\mathfrak{n}_c \]
with $\mathfrak{n}_c$ the direct sum of those root spaces $\mathfrak{g}_c^{\alpha}$ for which the restriction of
$\alpha$ to $\mathfrak{t}_c$ is a positive multiple of the restriction of $\lambda_{\mathrm{sph}}$ to $\mathfrak{t}_c$.
Taking the Borel subgroup of $G_c$ of the form $B_{M_{c}}T_cN_c$ with $B_{M_{c}}$ a Borel subgroup for $M_c$,
it follows that $G_c/P_c$ has an open orbit for $B_{M_{c}}T_cN_c$ if and only if $K_c/P_c$
has an open orbit for $B_{M_{c}}$, since the orbit of $T_cN_c$ through $K_c$ is open in $G_c/K_c$.
\end{proof}
Let us take the Cartan subalgebra of $\mathfrak{g}_c$ a direct sum of $\mathfrak{t}_c$
and a Cartan subalgebra of $\mathfrak{m}_c$, and extend a set of positive roots for
$\mathfrak{m}_c$ to a set of positive roots for $\mathfrak{g}_c$.
Let $V^G_{\lambda}$ be an irreducible representation of $G$ with highest weight $\lambda\in P_G^+$.
Because $M_cT_cN_c$ is a standard parabolic subgroup of $G_c$ the vector space
\[ (V^G_{\lambda})^{\mathfrak{n}_c}=\{v\in V^G_{\lambda};Xv=0\;\forall X\in\mathfrak{n}_c\} \]
is an irreducible representation of $M_{c}$ with highest weight $\nu\in P_M^+$.
Clearly $\nu=p(\lambda)$ with $p:P_G^+\rightarrow P_M^+$ the natural projection
along the spherical direction ${\mathbb N}\lambda_{\mathrm{sph}}$.
The Iwasawa decomposition
$\mathfrak{g}_c=\mathfrak{k}_c\oplus\mathfrak{t}_c\oplus\mathfrak{n}_c$ of the above proof gives the
Poincar\'{e}--Birkhoff--Witt factorization $U(\mathfrak{g}_c)=U(\mathfrak{k}_c)U(\mathfrak{t}_c)U(\mathfrak{n}_c)$
and we conclude that $U(\mathfrak{k}_c)(V^G_{\lambda})^{\mathfrak{n}_c}=V^G_{\lambda}$.
\begin{proposition}\label{projection from induced to restricted spectrum}
Let $(G,K,F)$ be a multiplicity free system and let $\mu\in F$.
Then the natural projection $p:P_G^+\rightarrow P_M^+$ is a surjection
from the induced spectrum $P_G^+(\mu)$ for $G$ onto the restricted spectrum
\[ P_M^+(\mu)=\{\nu\in P_M^+;m^{K,M}_{\mu}(\nu)\geq1\} \]
for $M$, and therefore $p:B(\mu)\rightarrow P_M^+(\mu)$ is a bijection.
Note that $m^{K,M}_{\mu}(\nu)\leq1$ for all $\nu\in P_M^+$ by the previous proposition.
\end{proposition}
\begin{proof}
Let $\langle\cdot,\cdot\rangle$ be a unitary structure on $V^G_{\lambda}$ for $G$.
Suppose $V$ is an irreducible subrepresentation of $K$ in the restriction of $V^G_{\lambda}$ to $K$.
If $u$ is a nonzero vector in $(V^G_{\lambda})^{\mathfrak{n}_c}$ then $\langle u,v\rangle\neq0$
for some $v\in V$. Indeed $\langle u,v\rangle=0$ for all $v\in V$ contradicts
$U(\mathfrak{k}_c)(V^G_{\lambda})^{\mathfrak{n}_c}=V^G_{\lambda}$.
Hence the restriction of $V$ to $M$ contains a copy of $V^M_{p(\lambda)}$ by Schur's Lemma.
One of the subspaces $V$ is a copy of $V^K_{\mu}$, and so $m^{K,M}_{\mu}(p(\lambda))\geq1$.
This proves that the natural projection $p$ maps the induced spectrum $P_G^+(\mu)$ of $G$
inside the restricted spectrum $P_M^+(\mu)$ of $M$.
It remains to show that
\[ p:P_G^+(\mu)\rightarrow P_M^+(\mu) \]
is onto for all $\mu\in F$. This follows from Proposition \ref{prop: asymptotics}.
\end{proof}
\begin{proposition}\label{prop: asymptotics}
Let $\lambda\in P^{+}_{G}$, $\mu\in P^{+}_{K}$ and let $p:P_G^+\rightarrow P_M^+$ be the natural projection. Then
\begin{itemize}
\item $m^{G,K}_{\lambda+k\lambda_{{\mathrm{sph}}}}(\mu)\le m^{G,K}_{\lambda+s\lambda_{{\mathrm{sph}}}}(\mu)$ if $k\le s$ and
\item $\lim_{n\to\infty}m^{G,K}_{\lambda+n\lambda_{{\mathrm{sph}}}}(\mu)=m^{K,M}_{\mu}(p(\lambda)).$
\end{itemize}
\end{proposition}
\begin{proof} Every irreducible $K$-representation that occurs in the $K$-module $V_{\lambda}$ also occurs in the $K$-module $V_{\lambda+\lambda_{{\mathrm{sph}}}}$. Indeed, let $v_{K}\in V_{\lambda_{{\mathrm{sph}}}}$ be a non-zero $K$-fixed vector and consider the composition of $V_{\lambda}\to V_{\lambda}\otimes V_{\lambda_{{\mathrm{sph}}}}:v\mapsto v\otimes v_{K}$ and the projection $V_{\lambda}\otimes V_{\lambda_{{\mathrm{sph}}}}\to V_{\lambda+\lambda_{{\mathrm{sph}}}}$. Both maps intertwine the $K$-action and the first statement follows.
For $(G,K)$ a symmetric pair (even of arbitrary rank)
the second statement is a result of Kostant \cite[Thm.~3.5]{Kostant} and Wallach \cite[Cor.~8.5.15]{Wallach}. For spherical pairs $(G,K)$ a similar stability result is shown by Kitagawa \cite[Cor.~4.10]{Kitagawa}. However, since we have control over the branching rules of the remaining non-symmetric pairs, we present our own proof.
Consider the triple $(G,K,M)=(\mathrm{G}_2,\mathrm{SU}(3),\mathrm{SU}(2))$. Let $\lambda=n_1\varpi_1+n_2\varpi_2\in P_G^+$ with $n_1$ sufficiently large, $\mu=m_1\omega_1+m_2\omega_2\in P_K^+$ and $\nu=n_2p(\varpi_2)\in P_M^+$.
On the one hand, we find
\[ m^{G,K}_{\lambda}(\mu)=\min\{m_1+1,m_2+1,m_1+m_2-n_2+1,n_2+1\} \]
as is clear from the left side of Figure \ref{intro: figure G2A2 branching}. Indeed $m_1+1$ comes from the distance
of $\mu$ to the face ${\mathbb N}\omega_1$, and similarly $m_2+1$ for the face ${\mathbb N}\omega_2$.
The expression $m_1+m_2-n_2+1$ comes from the middle linear constraint,
while $n_2+1$ comes from the middle truncation.
The other three constraints disappear as $n_1$ gets large.
On the other hand we get
\[ m^{K,M}_{\mu}(\nu)=\min\{n_2+1,\min\{m_1,m_2\}+1,m_1+m_2-n_2+1\} \]
as easily checked from the familiar branching from $\mathrm{SU}(3)$ to $\mathrm{SU}(2)$. It follows that $m^{G,K}_{\lambda}(\mu)=m^{K,M}_{\mu}(\nu)$ for large $n_{1}$.
The proof for the case $(\mathrm{Spin}(7),\mathrm{G}_{2},\mathrm{SU}(3))$ is postponed to the end of Section \ref{Spin7}, where we discuss the branching rules that are needed.
\end{proof}
\section{The pair $(G,K)=(\mathrm{Spin}(7),\mathrm{G}_2)$}\label{Spin7}
In this section we take $G=\mathrm{Spin}(7)$ with complexified Lie algebra $\mathfrak{g}$ of type $\mathrm{B}_3$.
Let $\mathfrak{t}_G\cong{\mathbb C}^3$ be a Cartan subalgebra with positive roots $R_G^+$ given by
\[ e_i-e_j,\;e_i+e_j,\;e_i \]
for $1\leq i<j\leq3$, and basis of simple roots $\alpha_1=e_1-e_2,\alpha_2=e_2-e_3,\alpha_3=e_3$.
The fundamental weights $\varpi_1=e_1,\varpi_2=e_1+e_2,\varpi_3=(e_1+e_2+e_3)/2$ are a basis
over ${\mathbb N}$ for the cone $P_G^+$ of dominant weights.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.5]
\begin{scope}
\clip (-6,-6) rectangle (8,6);
\draw[thin,lightgray] (9/2,-2) -- (3/2,-4);
\draw[thin,lightgray] (-3/2,4) -- (-9/2,2);
\draw[thin,lightgray] (-9/2,-4) -- (-27/10,-14/5);
\draw[thin,lightgray] (3/2,-4) -- (-9/2,-4) -- (-9/2,2);
\draw[thin,lightgray] (-3/2,4) -- (9/2,4);
\draw[thin,lightgray] (9/2,4) -- (9/2,-2);
\draw[thin,lightgray] (9/2,-2) -- (27/8,-2);
\draw[fill, lightgray!15] (0,0) -- (15/4,-1) -- (3,-3) -- (-3/2,-4) -- (-9/2,-1) -- (-3,3) -- (3/2,4) -- (9/2,1) -- cycle;
\draw[thin] (99/28,-11/7) -- (3,-3) -- (-3/2,-4) -- (-9/2,-1) -- (-3,3) -- (3/2,4) -- (9/2,1) ;
\draw[fill,gray!35] (0,0) -- (9/2,1) -- (3,0) -- cycle;
\draw[fill,gray!25] (0,0) -- (3,0) -- (15/4,-1) -- cycle;
\draw[fill,gray!25] (15/4,-1) -- (9/2,-2) -- (99/28,-11/7) -- cycle;
\def(15/4,-1) -- (3,0) -- (9/2,1) --cycle{(15/4,-1) -- (3,0) -- (9/2,1) --cycle}
\fill [lightgray!15] (15/4,-1) -- (3,0) -- (9/2,1) --cycle;
\draw[fill,lightgray!5] (15/4,-1) -- (9/2,1) -- (9/2,-2) --cycle;
\draw[thin] (15/4,-1) -- (3,0) -- (9/2,1) --cycle;
\draw[thin] (0,0) -- (15/4,-1);
\draw[thick,->] (0,0) -- (3,0) node[right] {\(\varpi_{1}\)};
\draw[thick,->] (0,0) -- (9/2,1) node[right] {\(\varpi_{2}=\omega_{2}\)};
\draw[thick,->] (0,0) -- (5/2,-2/3) node[below] {\(\omega_{1}\)};
\draw[thin] (9/2,1) -- (9/2,-2) -- (15/4,-1) -- (99/28,-11/7);
\draw[dotted] (0,0) -- (99/28,-11/7);
\draw[thick,->] (99/28,-11/7) -- (9/2,-2) node[right]{\(2\varpi_{3}\)};
\draw[thin,lightgray] (9/2,4) -- (3/2,2);
\draw[thin,lightgray] (-9/2,2) -- (3/2,2) -- (3/2,-4);
\draw[thin,lightgray] (-3/2,4) -- (-3/2,10/3);
\draw[dashed,lightgray] (-3/2,10/3) -- (-3/2,-2) -- (-27/10,-14/5);
\draw[dashed,lightgray] (-3/2,-2) -- (17/5,-2);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Fundamental weights for $\mathrm{Spin}(7)$ and $\mathrm{G}_{2}$.}
\end{figure}
As the Cartan subalgebra $\mathfrak{t}_K$ for $K=\mathrm{G}_2$ we shall take the orthogonal complement of $h=(-e_1+e_2+e_3)$.
The elements $e_1+e_3,e_1+e_2,e_2-e_3$ are the long positive roots in $R_K^+$, while
\[ \epsilon_1=(2e_1+e_2+e_3)/3,\;\epsilon_2=(e_1+2e_2-e_3)/3,\;\epsilon_3=(e_1-e_2+2e_3)/3 \]
are the short positive roots in $R_K^+$. The natural projection $q:R_G^+\rightarrow P_K^+$
is a bijection onto the long roots and two to one onto the short roots in $R_K^+$.
Note that $\epsilon_i=q(e_i)$ for $i=1,2,3$.
The simple roots in $R_K^+$ are $\{\beta_1=\epsilon_3,\beta_2=\epsilon_2-\epsilon_3\}$ with corresponding
fundamental weights $\{\omega_1=\epsilon_1,\omega_2=\epsilon_1+\epsilon_2\}$.
Observe that $\omega_1=q(\varpi_1)=q(\varpi_3)$ and $\omega_2=q(\varpi_2)$,
and hence $q:P_G^+\rightarrow P_K^+$ is a surjection.
Note that the natural projection $q:P_G\rightarrow P_K$ is equivariant for the action of
the Weyl group $W_M\cong\mathfrak{S}_3$ of the centralizer $M=\mathrm{SU}(3)$ in $K$ of $h$.
The Weyl group $W_G$ is the semidirect product of $C_2\times C_2\times C_2$
acting by sign changes on the three coordinates and the permutation group $\mathfrak{S}_3$.
As a set with multiplicities we have
\[ A=q(R_G^+)-R_K^+=\{\epsilon_1,\epsilon_2,\epsilon_3\} \]
whose partition function $p_A$ enters in the formula for the branching from $\mathrm{B}_3$ to $\mathrm{G}_2$.
Note that $p_A(k\epsilon_1+l\epsilon_2)=p_A(k\epsilon_1+m\epsilon_3)=k+1$ for $k,l,m\in {\mathbb N}$
and $p_A(\mu)=0$ otherwise.
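Indeed, since $\epsilon_1=\epsilon_2+\epsilon_3$ we have
\[ k\epsilon_1+l\epsilon_2=(k+l)\epsilon_2+k\epsilon_3, \]
and any expression $a\epsilon_1+b\epsilon_2+c\epsilon_3$ with $a,b,c\in{\mathbb N}$ for this weight amounts to a choice of $a\in\{0,1,\ldots,k\}$ together with $b=k+l-a$ and $c=k-a$, giving the $k+1$ partitions counted above.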
\begin{lemma}\label{branching rule lemma}
For $\lambda\in P_G^+$ and $\mu\in P_K^+$ the multiplicity $m_{\lambda}^{G,K}(\mu)\in {\mathbb N}$
with which an irreducible representation of $K$ with highest weight $\mu$ occurs in the
restriction to $K$ of an irreducible representation of $G$ with highest weight $\lambda$
is given by
\[ m_{\lambda}^{G,K}(\mu)=\sum_{w\in W_G}\det(w)p_A(q(w(\lambda+\rho_G)-\rho_G)-\mu) \]
and if we extend $m_{\lambda}^{G,K}(\mu)\in {\mathbb Z}$ by this formula for all $\lambda\in P_G$ and $\mu\in P_K$ then
\[ m_{w(\lambda+\rho_G)-\rho_G}^{G,K}(v(\mu+\rho_K)-\rho_K)=\det(w)\det(v)m_{\lambda}^{G,K}(\mu) \]
for all $w\in W_G$ and $v\in W_K$. Here $\rho_G$ and $\rho_K$ are the Weyl vectors of $R_G^+$ and $R_K^+$ respectively.
\end{lemma}
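The alternating sum over $W_G$ can be checked numerically. The sketch below (our code) realizes $W_G$ as the $48$ signed permutations of the three coordinates, writes all weights of $K$ in the basis $(\epsilon_2,\epsilon_3)$ via $\epsilon_1=\epsilon_2+\epsilon_3$, so that $q(x,y,z)=(x+y)\epsilon_2+(x+z)\epsilon_3$, and uses $\rho_G=(\tfrac52,\tfrac32,\tfrac12)$; it recovers, for instance, the decomposition of the $8$-dimensional spin representation $\lambda=001$ into the $K$-types $10$ and $00$.

```python
from fractions import Fraction
from itertools import permutations, product

def p_A(p, q):
    # partition function of A = {eps1, eps2, eps3}, eps1 = eps2 + eps3,
    # evaluated at mu = p*eps2 + q*eps3
    return min(p, q) + 1 if p >= 0 and q >= 0 else 0

EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}       # even permutations of 3 letters

def branch(k, l, m, kk, ll):
    """Multiplicity of the K-type kk*omega1 + ll*omega2 in the G-type klm."""
    lam = (Fraction(2*k + 2*l + m, 2), Fraction(2*l + m, 2), Fraction(m, 2))
    rho = (Fraction(5, 2), Fraction(3, 2), Fraction(1, 2))
    v = tuple(a + b for a, b in zip(lam, rho))             # lambda + rho_G
    mu = (kk + 2*ll, kk + ll)                  # mu in the basis (eps2, eps3)
    total = 0
    for perm in permutations(range(3)):
        sgn = 1 if perm in EVEN else -1
        for signs in product((1, -1), repeat=3):
            w = [signs[i] * v[perm[i]] for i in range(3)]  # w(lambda + rho_G)
            det = sgn * signs[0] * signs[1] * signs[2]
            x, y, z = (w[i] - rho[i] for i in range(3))
            p, q = x + y - mu[0], x + z - mu[1]  # q(w(lambda+rho)-rho) - mu
            total += det * p_A(int(p), int(q))   # p, q are always integral
    return total

# spin representation 001 restricts to the K-types 10 and 00
assert branch(0, 0, 1, 1, 0) == 1
assert branch(0, 0, 1, 0, 0) == 1
assert branch(0, 0, 1, 0, 1) == 0
assert branch(0, 0, 0, 0, 0) == 1   # trivial representation
```

The arguments of $p_A$ are always integral because all coordinates of $w(\lambda+\rho_G)$ have the same fractional part.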
This lemma was obtained in Heckman \cite{Heckman} as a direct application of the Weyl character formula.
A formula of this type, valid for any pair $K<G$ of connected compact Lie groups \cite{Heckman},
might be cumbersome for practical computations of the multiplicities, because of the (possibly large) alternating
sum over a Weyl group $W_G$ and the piecewise polynomial behaviour of the partition function.
However in the present (fairly small) example one can proceed as follows.
If $\lambda=k\varpi_1+l\varpi_2+m\varpi_3=klm=(x,y,z)$ with
\[ x=k+l+m/2,y=l+m/2,z=m/2\Leftrightarrow k=x-y,l=y-z,m=2z \]
then $\lambda$ is dominant if $k,l,m\geq0$ or equivalently $x\geq y\geq z\geq0$.
We tabulate the $8$ elements $w_1,\cdots,w_8\in W_G$ such that the projection
$q(w_i\lambda)\in{\mathbb N}\epsilon_1+{\mathbb N}\epsilon_2$ is dominant for $R_M^+$ for all $\lambda$ which are dominant for $R_G^+$.
Clearly the projection of $(x,y,z)$ is given by
\[ q(x,y,z)=x\epsilon_1+y\epsilon_2+z\epsilon_3=(x+z)\epsilon_1+(y-z)\epsilon_2 \]
and $\rho_G=\varpi_1+\varpi_2+\varpi_3=(\tfrac52,\tfrac32,\tfrac12)$ is the Weyl vector for $R_G^+$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|l|l|l|l|} \hline
$i$&$\det(w_i)$&$w_i\lambda$&$q(w_i\lambda)$&$q(w_i\rho_G-\rho_G)$\\ \hline
$1$&$+$&$(x,y,z)$&$(x+z)\epsilon_1+(y-z)\epsilon_2$&$0$\\ \hline
$2$&$-$&$(x,y,-z)$&$(x-z)\epsilon_1+(y+z)\epsilon_2$&$-\epsilon_3$\\ \hline
$3$&$+$&$(x,z,-y)$&$(x-y)\epsilon_1+(y+z)\epsilon_2$&$-\epsilon_1-\epsilon_3$\\ \hline
$4$&$-$&$(x,-z,-y)$&$(x-y)\epsilon_1+(y-z)\epsilon_2$&$-\epsilon_1-\epsilon_2-\epsilon_3$\\ \hline
$5$&$-$&$(y,x,z)$&$(y+z)\epsilon_1+(x-z)\epsilon_2$&$-\epsilon_3$\\ \hline
$6$&$+$&$(y,x,-z)$&$(y-z)\epsilon_1+(x+z)\epsilon_2$&$-\epsilon_3-\epsilon_3$\\ \hline
$7$&$+$&$(z,x,y)$&$(y+z)\epsilon_1+(x-y)\epsilon_2$&$-\epsilon_3-\epsilon_2$\\ \hline
$8$&$-$&$(-z,x,y)$&$(y-z)\epsilon_1+(x-y)\epsilon_2$&$-\epsilon_3-\epsilon_1-\epsilon_2$\\ \hline
\end{tabular}
\end{center}
\caption{Projection of $w\lambda$ in $P^{+}_{M}$.}
\end{table}
In the picture below the location of the points $q(w_i\lambda)\in P_M^+$, indicated by the number $i$,
with the sign of $\det(w_i)$ attached, is drawn.
Observe that $q(w_1\lambda)=(k+m)\omega_1+l\omega_2\in P_K^+$ for all $\lambda=klm\in P_G^+$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.6]
\pgfmathsetmacro\ax{2}
\pgfmathsetmacro\ay{0}
\pgfmathsetmacro\bx{2 * cos(120)}
\pgfmathsetmacro\by{2 * sin(120)}
\pgfmathsetmacro\lax{2*\ax/3 + \bx/3}
\pgfmathsetmacro\lay{2*\ay/3 + \by/3}
\pgfmathsetmacro\lbx{\ax/3 + 2*\bx/3}
\pgfmathsetmacro\lby{\ay/3 + 2*\by/3}
\begin{scope}
\clip (5/3,10/4) circle (5);
\draw[dashed] (0,0) -- (0,10);
\draw[dashed] (0,0)-- (5*\lax,5*\lay+5*\lby);
\draw[dashed] (0,0) -- (7*\lax,7*\lay);
\draw[thick,->] (0,0) -- (\ax,\ay);
\draw[thick,->] (0,0) -- (\bx,\by);
\draw[thick,->] (0,0) -- (-\ax,-\ay);
\draw[thick,->] (0,0) -- (-\bx,-\by);
\draw[thick,->] (0,0) -- (\ax+\bx,\ay+\by);
\draw[thick,->] (0,0) -- (-\ax-\bx,-\ay-\by);
\draw[thick,->] (0,0) -- (\lax,\lay);
\draw (\lax,\lay-1/8) node[right] {\(\epsilon_{2}\)};
\draw[thick,->] (0,0) -- (\lbx,\lby);
\draw (1/8+\lbx,\lby+1/8) node[above,left]{\(\epsilon_{1}\)};
\draw[thick,->] (0,0) -- (-\lax,-\lay);
\draw[thick,->] (0,0) -- (-\lbx,-\lby);
\draw[thick,->] (0,0) -- (\lax-\lbx,\lay-\lby);
\draw[thick,->] (0,0) -- (\lbx-\lax,\lby-\lay) node[left] {\(\epsilon_{3}\)};
\draw[thick, fill=lightgray] (4*\lbx,4*\lby) -- (4*\lbx+1/2*\lax,4*\lby-1/2*\lay) -- (2*\lax+7/2*\lbx,2*\lay+7/2*\lby) -- (2*\lax+8/2*\lbx,2*\lay+8/2*\lby) -- (1/2*\lax+11/2*\lbx,1/2*\lay+11/2*\lby) -- (0*\lax+11/2*\lbx,0*\lay+11/2*\lby) -- cycle;
\draw[fill] (0*\lax+8/2*\lbx,0*\lay+8/2*\lby) circle (2pt) node[left] {\(b+\)};
\draw[fill] (1/2*\lax+8/2*\lbx,-1/2*\lay+8/2*\lby) circle (2pt) node[below] {\(4-\)};
\draw[fill] (2*\lax,2*\lay+7/2*\lby) circle (2pt) node[below] {\(3+\)};
\draw[fill] (2*\lax+8/2*\lbx,2*\lay+8/2*\lby) circle (2pt) node[right] {\(2-\)};
\draw[fill] (1/2*\lax+11/2*\lbx,1/2*\lay+11/2*\lby) circle (2pt) node[above] {\(1+\)};
\draw[fill] (0*\lax+11/2*\lbx,0*\lay+11/2*\lby) circle (2pt) node[left] {\(a-\)};
\draw[thick] (0*\lax+8/2*\lbx,0*\lay+8/2*\lby) -- (2*\lax+8/2*\lbx,2*\lay+8/2*\lby);
\draw[thick] (1/2*\lax+8/2*\lbx,-1/2*\lay+8/2*\lby)--(1/2*\lax+11/2*\lbx,1/2*\lay+11/2*\lby);
\draw[thick] (2*\lax,2*\lay+7/2*\lby)--(0*\lax+11/2*\lbx,0*\lay+11/2*\lby);
\draw[thick, fill=lightgray] (4*\lax,4*\lay) -- (1/2*\lbx+7/2*\lax,1/2*\lby+7/2*\lay) -- (7/2*\lax+4/2*\lbx,7/2*\lay+4/2*\lby) -- (4*\lax+4/2*\lbx,4*\lay+4/2*\lby) -- (11/2*\lax+1/2*\lbx,11/2*\lay+1/2*\lby) -- (11/2*\lax+0/2*\lbx,11/2*\lay+0/2*\lby) -- cycle;
\draw[fill] (4*\lax+0/2*\lbx,4*\lay+0/2*\lby) circle (2pt) node[below] {\(d+\)};
\draw[fill] (7/2*\lax+1/2*\lbx,7/2*\lay+1/2*\lby) circle (2pt) node[left] {\(8-\)};
\draw[fill] (7/2*\lax+2*\lbx,7/2*\lay+2*\lby) circle (2pt) node[left] {\(7+\)};
\draw[fill] (4*\lax+2*\lbx,4*\lay+2*\lby) circle (2pt) node[above] {\(5-\)};
\draw[fill] (11/2*\lax+1/2*\lbx,11/2*\lay+1/2*\lby) circle (2pt) node[right] {\(6+\)};
\draw[fill] (11/2*\lax+0/2*\lbx,11/2*\lay+0/2*\lby) circle (2pt) node[below] {\(c-\)};
\draw[thick] (4*\lax+0/2*\lbx,4*\lay+0/2*\lby) -- (4*\lax+2*\lbx,4*\lay+2*\lby);
\draw[thick] (7/2*\lax+1/2*\lbx,7/2*\lay+1/2*\lby) -- (11/2*\lax+1/2*\lbx,11/2*\lay+1/2*\lby);
\draw[thick] (7/2*\lax+2*\lbx,7/2*\lay+2*\lby) -- (11/2*\lax+0/2*\lbx,11/2*\lay+0/2*\lby);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Projection of $W_{G}\lambda$ onto $P^{+}_{M}$.}\label{figure: Spin7G2 projection on M chamber}
\end{figure}
Let us denote $a=(k+l+m)\epsilon_1$ and $b=(k+l)\epsilon_1$; these two points
together with the four points $q(w_i\lambda)$ for $i=1,2,3,4$ form the vertices of
a hexagon with three pairs of parallel sides. In the picture we have drawn all six
vertices in $P_K^+$, which happens if and only if $q(w_3\lambda)=k\epsilon_1 +(l+m)\epsilon_2\in P_K^+$,
or equivalently if $k\geq(l+m)$. But in general some of the $q(w_i\lambda)\in P_M^+$
for $i=2,3,4$ might lie outside $P_K^+$. Indeed $q(w_2\lambda)=(k+l)\epsilon_1+(l+m)\epsilon_2$
lies outside $P_K^+$ if $k<m$, and $q(w_4\lambda)=k\epsilon_1+l\epsilon_2$ lies outside $P_K^+$ if $k<l$.
For fixed $\lambda\in P_G^+$ the sum $m_{\lambda}(\mu)$ of
the following six partition functions as a function of $\mu\in P_K$
\[ \sum_{i=1}^4\det(w_i)p_A(q(w_i(\lambda+\rho_G)-\rho_G)-\mu)
-p_A(a-\epsilon_2-\mu)+p_A(b-\epsilon_1-\epsilon_2-\mu) \]
is just the familiar multiplicity function for the weight multiplicities of the root system $\mathrm{A}_2$.
It vanishes outside the hexagon with vertices $a$, $b$ and $q(w_i\lambda)$ for $i=1,2,3,4$.
On the outer shell hexagon it is equal to $1$,
and it steadily increases by $1$ for each inner shell hexagon,
until the hexagon becomes a triangle,
and from that moment on it stabilizes on the inner triangle.
The two partition functions we have added corresponding to the points $a$ and $b$
are invariant as a function of $\mu$ for the action $\mu\mapsto s_2(\mu+\rho_K)-\rho_K$
of the simple reflection $s_2\in W_K$ with mirror ${\mathbb R}\omega_1$, because $s_2(A)=A$.
In order to obtain the final multiplicity function
\[ \mu\mapsto m_{\lambda}^{G,K}(\mu)=\sum_{v\in W_K}\det(v)m_{\lambda}(v(\mu+\rho_K)-\rho_K) \]
for the branching from $G$ to $K$ we have to antisymmetrize for the shifted by $\rho_K$ action of $W_K$.
Note that the two additional partition functions and their transforms under $W_K$
all cancel because of their symmetry and the antisymmetrization.
For $\mu\in P_K^+$ the only terms in the sum over $v\in W_K$ that have a nonzero contribution are
those for $v=e$ the identity element and $v=s_1$ the reflection with mirror ${\mathbb R}\omega_2$,
and we arrive at the following result.
\begin{theorem}\label{(B_3,G_2) multiplicity formula}
For $\lambda\in P_G^+$ and $\mu\in P_K^+$ the branching multiplicity from $G=\mathrm{Spin}(7)$ to $K=\mathrm{G}_2$ is given by
\begin{eqnarray}\label{eq:multB3G2}
m_{\lambda}^{G,K}(\mu)=m_{\lambda}(\mu)-m_{\lambda}(s_1\mu-\epsilon_3)
\end{eqnarray}
with $m_{\lambda}$ the weight multiplicity function of type $\mathrm{A}_2$ as given by the
above alternating sum of the six partition functions.
\end{theorem}
Indeed, we have $s_1(\mu+\rho_K)-\rho_K=s_1\mu-\epsilon_3$.
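The six-term formula is also easy to encode. In the basis $(\epsilon_2,\epsilon_3)$, with $\epsilon_1=\epsilon_2+\epsilon_3$, the reflection $s_1$ sends $p\epsilon_2+q\epsilon_3$ to $p\epsilon_2+(p-q)\epsilon_3$; the sketch below (our code and notation) implements $m_{\lambda}$ by its four Weyl terms together with the two terms for $a$ and $b$, and reproduces the $K$-spectrum of the spin representation $001$.

```python
from fractions import Fraction

def p_A(p, q):
    # partition function of A = {eps1, eps2, eps3}, eps1 = eps2 + eps3
    return min(p, q) + 1 if p >= 0 and q >= 0 else 0

def m_lam(k, l, m, p, q):
    # A_2 weight multiplicity function m_lambda at mu = p*eps2 + q*eps3
    rho = (Fraction(5, 2), Fraction(3, 2), Fraction(1, 2))
    v = tuple(Fraction(t, 2) for t in (2*k + 2*l + m + 5, 2*l + m + 3, m + 1))
    total = 0
    # the four Weyl terms w_1, ..., w_4 from the table above
    for det, w in ((1, (v[0], v[1], v[2])), (-1, (v[0], v[1], -v[2])),
                   (1, (v[0], v[2], -v[1])), (-1, (v[0], -v[2], -v[1]))):
        x, y, z = (w[i] - rho[i] for i in range(3))
        total += det * p_A(int(x + y) - p, int(x + z) - q)
    n = k + l + m
    # the two terms for a = (k+l+m)eps1 and b = (k+l)eps1
    return total - p_A(n - 1 - p, n - q) + p_A(k + l - 2 - p, k + l - 1 - q)

def m_GK(k, l, m, kk, ll):
    # branching multiplicity of the K-type kk*omega1 + ll*omega2
    p, q = kk + 2*ll, kk + ll        # mu in the basis (eps2, eps3)
    return m_lam(k, l, m, p, q) - m_lam(k, l, m, p, p - q - 1)

# the spin representation 001 has K-types 10 and 00, each with multiplicity one
assert m_GK(0, 0, 1, 1, 0) == 1
assert m_GK(0, 0, 1, 0, 0) == 1
assert m_GK(0, 0, 1, 0, 1) == 0
```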
As before, we denote $klm=k\varpi_1+l\varpi_2+m\varpi_3$ and $kl=k\omega_1+l\omega_2$ with $k,l,m\in{\mathbb N}$
for the highest weight of irreducible representations of $G$ and $K$ respectively.
For $\mu\in{\mathbb N}\omega_1$ the multiplicities $m_{\lambda}^{G,K}(\mu)$ are only governed by the first term on the right hand side of (\ref{eq:multB3G2}) with $v=e$
and so are equal to $1$ for $\mu=n0$ with $n=(k+l),\cdots,(k+l+m)$ and $0$ elsewhere.
Indeed, $\mu=n0$ has multiplicity one if and only if it is contained in the segment
from $b=(k+l)\epsilon_1$ to $a=(k+l+m)\epsilon_1$. This proves the following statement.
\begin{corollary}
The fundamental representation of $G$ with highest weight $\lambda=001$ is the spin representation
of dimension $8$ with $K$-types $\mu=10$ and $\mu=00$. It is the fundamental spherical representation
for the Gelfand pair $(G,K)$. The irreducible spherical representations of $G$
have highest weights $00m$ with $K$-spectrum the set $\{n0;0\leq n\leq m\}$.
\end{corollary}
\begin{corollary}\label{(B_3,G_2,F_1) multiplicity free}
For any irreducible representation of $G$ with highest weight $\lambda=klm$ all $K$-types
with highest weight $\mu\in F_1={\mathbb N}\omega_1$ are multiplicity free, and the $K$-type with highest
weight $\mu=n0$ has multiplicity one if and only if $(k+l)\leq n\leq (k+l+m)$.
The domain of those $\lambda=klm$ for which the $K$-type $\mu=n0$ occurs has a well shape
$P^{+}_{G}(n0)=B(n0)+\N001$ with bottom
\[ B(n0)=\{klm\in P_G^+;k+l+m=n\} \]
given by a single linear relation.
\end{corollary}
\begin{proof}
The multiplicity freeness and the bounds for $n$ follow from Theorem \ref{(B_3,G_2) multiplicity formula}, and in turn the inequalities $(k+l)\le n\le(k+l+m)$ imply the formulae for $B(n0)$ and $P^{+}_{G}(n0)$.
\end{proof}
This concludes the argument that $(G,K,F_1={\mathbb N}\omega_1)$ is a multiplicity free system.
In order to show that $(G,K,F_2={\mathbb N}\omega_2)$ is also a multiplicity free triple
we shall carry out a similar analysis.
\begin{corollary}\label{(B_3,G_2,F_2) multiplicity free}
For an irreducible representation of $G$ with highest weight $\lambda=klm$ all $K$-types
with highest weight $\mu\in F_2={\mathbb N}\omega_2$ are multiplicity free, and the $K$-type with highest
weight $\mu=0n$ has multiplicity one if and only if $\max(k,l)\leq n\leq \min(k+l,l+m)$.
The domain of those $\lambda=klm$ for which the $K$-type $\mu=0n$ occurs has a well shape
$P^{+}_{G}(0n)=B(0n)+\N001$ with bottom
\[ B(0n)=\{klm\in P_G^+;m\leq k\leq n,l+m=n\} \]
given by a single linear relation and inequalities.
\end{corollary}
\begin{proof}
Under the assumption of the first part of this corollary, $klm\in P^{+}_{G}(0n)$ implies that $kl(m+1)\in P^{+}_{G}(0n)$, and the bottom $B(0n)$ of those $klm\in P^{+}_{G}(0n)$ for which $kl(m-1)\notin P^{+}_{G}(0n)$
contains $klm$ if and only if $n=l+m$ and $k\geq m$.
It remains to show the first part of the corollary.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=3/7]
\pgfmathsetmacro\ax{2}
\pgfmathsetmacro\ay{0}
\pgfmathsetmacro\bx{2 * cos(120)}
\pgfmathsetmacro\by{2 * sin(120)}
\pgfmathsetmacro\lax{2*\ax/3 + \bx/3}
\pgfmathsetmacro\lay{2*\ay/3 + \by/3}
\pgfmathsetmacro\lbx{\ax/3 + 2*\bx/3}
\pgfmathsetmacro\lby{\ay/3 + 2*\by/3}
\begin{scope}
\clip (5/3,13/4) circle (7);
\draw[dashed] (0,0) -- (0,10);
\draw[dashed] (0,0)-- (15*\lax,15*\lay+15*\lby);
\draw[dashed] (0,0) -- (15*\lax,15*\lay);
\draw[thick,->] (0,0) -- (\ax,\ay);
\draw[thick,->] (0,0) -- (\bx,\by);
\draw[thick,->] (0,0) -- (-\ax,-\ay);
\draw[thick,->] (0,0) -- (-\bx,-\by);
\draw[thick,->] (0,0) -- (\ax+\bx,\ay+\by);
\draw[thick,->] (0,0) -- (-\ax-\bx,-\ay-\by);
\draw(\lax,\lay-1/8) node[right]{\(\epsilon_{2}\)};
\draw[thick,->] (0,0) -- (\lax,\lay);
\draw[thick,->] (0,0) -- (\lbx,\lby);
\draw (2/8+\lbx,\lby+2/8) node[above,left]{\(\epsilon_{1}\)};
\draw[thick,->] (0,0) -- (-\lax,-\lay);
\draw[thick,->] (0,0) -- (-\lbx,-\lby);
\draw[thick,->] (0,0) -- (\lax-\lbx,\lay-\lby);
\draw[thick,->] (0,0) -- (\lbx-\lax,\lby-\lay) node[left] {\(\epsilon_3\)};
\draw[thick, fill=lightgray] (3*\lbx,3*\lby) -- (7*\lbx+0/2*\lax,7*\lby+0/2*\lay) -- (1*\lax+7*\lbx,1*\lay+7*\lby) -- (3*\lax+10/2*\lbx,3*\lay+10/2*\lby) -- (3*\lax+3*\lbx,3*\lay+3*\lby) -- (2*\lax+2*\lbx,2*\lay+2*\lby) -- (1*\lax+2*\lbx,1*\lay+2*\lby)-- cycle;
\draw[thick] (3*\lbx,3*\lby) -- (7*\lbx+0/2*\lax,7*\lby+0/2*\lay) -- (1*\lax+7*\lbx,1*\lay+7*\lby) -- (3*\lax+10/2*\lbx,3*\lay+10/2*\lby) -- (7*\lax+1*\lbx,7*\lay+1*\lby) -- (7*\lax+0*\lbx,7*\lay+0*\lby) -- (3*\lax+0*\lbx,3*\lay+0*\lby) -- (2*\lax+1*\lbx,2*\lay+1*\lby) --(2*\lax+2*\lbx,2*\lay+2*\lby) -- (5*\lax+2*\lbx,5*\lay+2*\lby) -- (7*\lax+0*\lbx,7*\lay+0*\lby);
\draw[thick] (3*\lax+0*\lbx,3*\lay+0*\lby) -- (3*\lax+3*\lbx,3*\lay+3*\lby);
\draw[thick] (0*\lax+7*\lbx,0*\lay+7*\lby) -- (5*\lax+2*\lbx,5*\lay+2*\lby);
\draw[thick] (0*\lax+3*\lbx,0*\lay+3*\lby) -- (5*\lax+3*\lbx,5*\lay+3*\lby);
\draw[thick] (2*\lax+2*\lbx,2*\lay+2*\lby) -- (2*\lax+5*\lbx,2*\lay+5*\lby) -- (3*\lax+5*\lbx,3*\lay+5*\lby);
\draw[thick] (5*\lax+2*\lbx,5*\lay+2*\lby) -- (5*\lax+3*\lbx,5*\lay+3*\lby);
\draw[thick] (1*\lax+2*\lbx,1*\lay+2*\lby) -- (1*\lax+7*\lbx,1*\lay+7*\lby);
\draw[fill] (0/2*\lax+6/2*\lbx,0/2*\lay+6/2*\lby) circle (2pt) node[left] {\(b+\)};
\draw[fill] (0/2*\lax+14/2*\lbx,0/2*\lay+14/2*\lby) circle (2pt) node[left] {\(a-\)};
\draw[fill] (1*\lax+7*\lbx,1*\lay+7*\lby) circle (2pt) node[above] {\(1+\)};
\draw[fill] (3*\lax+10/2*\lbx,3*\lay+10/2*\lby) circle (2pt) node[above] {\(5-\)};
\draw[fill] (3*\lax+3*\lbx,3*\lay+3*\lby) circle (2pt);
\draw (3*\lax+3*\lbx,3*\lay+3*\lby-1/5) node [right] {\(e\)};
\draw (2*\lax+2*\lbx,2*\lay+2*\lby-1/5) node[right] {\(f\)};
\draw[fill] (2*\lax+2*\lbx,2*\lay+2*\lby) circle (2pt);
\draw[fill] (1*\lax+2*\lbx,1*\lay+2*\lby) circle (2pt) node[below] {\(4-\)};
\draw[fill] (1*\lax+3*\lbx,1*\lay+3*\lby) circle (2pt);
\draw[fill] (1*\lax+6*\lbx,1*\lay+6*\lby) circle (2pt);
\draw[fill] (2*\lax+5*\lbx,2*\lay+5*\lby) circle (2pt) node[above] {\(7+\)};
\draw[fill] (3*\lax+4*\lbx,3*\lay+4*\lby) circle (2pt);
\draw[fill] (2*\lax+3*\lbx,2*\lay+3*\lby) circle (2pt);
\draw[fill] (3*\lax+0*\lbx,3*\lay+0*\lby) circle (2pt) node[below] {\(d+\)};
\draw[fill] (14/2*\lax+0/2*\lbx,14/2*\lay+0/2*\lby) circle (2pt) node[below] {\(c-\)};
\draw[fill] (14/2*\lax+1*\lbx,14/2*\lay+1*\lby) circle (2pt) node[above] {\(6+\)};
\draw[fill] (5*\lax+3*\lbx,5*\lay+3*\lby) circle (2pt) node[above] {\(2-\)};
\draw[fill] (2*\lax+1*\lbx,2*\lay+1*\lby) circle (2pt) node[below] {\(8-\)};
\draw[fill] (3*\lax+2*\lbx,3*\lay+2*\lby) circle (2pt);
\draw[fill] (5*\lax+2*\lbx,5*\lay+2*\lby) circle (2pt) node[below] {\(3+\)};
\draw (0,5*\lby) node[left] {\(\mathbb{N}\omega_{1}\)};
\draw (5*\lax+5*\lbx,5*\lay+5*\lby) node[left] {\(\mathbb{N}\omega_{2}\)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Support of the multiplicity function $\mu\mapsto m^{G,K}_{\lambda}(\mu)$.}\label{figure: Spin7G2 support}
\end{figure}
In order to determine the $K$-spectrum associated to the highest weight $\lambda=klm\in{\mathbb N}^3$ for $G$ observe that
\[ q(w_3\lambda)=k\epsilon_1+(l+m)\epsilon_2 \]
and so the $K$-spectrum on ${\mathbb N}\omega_2$ is empty for $k>(l+m)$, while
for $k=(l+m)$ the $K$-spectrum has a unique point $k\omega_2$ on ${\mathbb N}\omega_2$.
If $k<(l+m)$ the point $q(w_3\lambda)$ moves out of the dominant cone $P_K^+$ into $P_M^+-P_K^+$,
and the support of the function $P_K^+\ni\mu\mapsto m_{\lambda}^{G,K}(\mu)$ consists of
(the integral points of) a heptagon with an additional side on ${\mathbb N}\omega_2$ from $e$ to $f$
as in the picture above. On the outer shell heptagon the multiplicity is one,
and the multiplicities increase by one for each inner shell heptagon,
until the heptagon becomes a triangle or quadrangle, after which it stabilizes.
This follows from Theorem{\;\ref{(B_3,G_2) multiplicity formula}} in a straightforward way.
Depending on whether the vertex
\[ q(w_2\lambda)=(k+l)\epsilon_1+(l+m)\epsilon_2 \]
lies in $P_K^+$ (for $k\geq m$) or in $P_M^+-P_K^+$ (for $k<m$)
we get $e=(l+m)\omega_2$ or $e=(k+l)\omega_2$ respectively.
Hence we find
\[ e=\min(k+l,l+m)\omega_2\;,\;f=\max(k,l)\omega_2 \]
by a similar consideration for
\[ q(w_4\lambda)=k\epsilon_1+l\epsilon_2 \]
as before ($f=k\omega_2$ for $k\geq l$ and $f=l\omega_2$ for $k<l$).
This finishes the proof of Corollary{\;\ref{(B_3,G_2,F_2) multiplicity free}}.
\end{proof}
Our choice of positive roots for $G=\mathrm{B}_3$ and $K=\mathrm{G}_2$ was made in such a way that
the dominant cone $P_K^+$ for $K$ was contained in the dominant cone $P_G^+$ for $G$.
In turn this implies that the set
\[ A=q(R_G^+)-R_K^+=\{\epsilon_1,\epsilon_2,\epsilon_3\}\]
lies in an open half plane, which was required for the application
of the branching rule of Lemma{\;\ref{branching rule lemma}}.
However, we now switch to a different positive system in $R_G$, or rather
we keep $R_G^+$ fixed as before, but take the Lie algebra $\mathfrak{k}$ of $\mathrm{G}_2$ to be
perpendicular to the spherical direction $\varpi_3=(e_1+e_2+e_3)/2$ instead.
Under this assumption the positive roots $R_M^+$ form a parabolic subsystem in $R_G^+$,
and so the simple roots $\{\alpha_1=e_1-e_2,\alpha_2=e_2-e_3\}$ of $R_M^+$ are also simple roots in $R_G^+$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.7]
\pgfmathsetmacro\ax{2}
\pgfmathsetmacro\ay{0}
\pgfmathsetmacro\bx{2 * cos(120)}
\pgfmathsetmacro\by{2 * sin(120)}
\pgfmathsetmacro\lax{2*\ax/3 + \bx/3}
\pgfmathsetmacro\lay{2*\ay/3 + \by/3}
\pgfmathsetmacro\lbx{\ax/3 + 2*\bx/3}
\pgfmathsetmacro\lby{\ay/3 + 2*\by/3}
\begin{scope}
\clip (-2,-2) rectangle (3+4*\ax,8);
\draw[thick, fill=lightgray] (0,0) -- (4*\lax,4*\lay) -- (0,4*\lby) -- cycle;
\draw[dashed] (0,0) -- (0,10);
\draw[dashed] (0,0)-- (8*\lax,8*\lay+8*\lby);
\draw[dashed] (0,0) -- (5*\lax,5*\lay);
\draw[thick,->] (0,0) -- (\ax,\ay) node[right] {\(\alpha_{2}\)};
\draw[thick,->] (0,0) -- (\bx,\by) node[above] {\(\alpha_{1}\)};
\draw[thick,->] (0,0) -- (-\ax,-\ay);
\draw[thick,->] (0,0) -- (-\bx,-\by);
\draw[thick,->] (0,0) -- (\ax+\bx,\ay+\by);
\draw[thick,->] (0,0) -- (-\ax-\bx,-\ay-\by);
\draw(\lax,\lay-1/8) node[right]{\(\epsilon_{2}\)};
\draw[thick,->] (0,0) -- (\lax,\lay);
\draw[thick,->] (0,0) -- (\lbx,\lby);
\draw (\lbx,\lby) node[left]{\(\epsilon_{1}\)};
\draw[thick,->] (0,0) -- (-\lax,-\lay);
\draw[thick,->] (0,0) -- (-\lbx,-\lby);
\draw[thick,->] (0,0) -- (\lax-\lbx,\lay-\lby);
\draw[thick,->] (0,0) -- (\lbx-\lax,\lby-\lay) node[left] {\(\epsilon_3\)};
\fill (0,4*\lby) circle (2pt) node[left] {\(n\epsilon_{1}\)};
\fill (4*\lax,4*\lay) circle (2pt);
\draw (4*\lax,4*\lay-1/4) node[below] {\(n\epsilon_{1}\)};
\draw[thick, fill=lightgray] (4*\lax+4*\lbx+3*\ax,4*\lay+4*\lby) -- (4*\lax+3*\ax,4*\lay) -- (0+3*\ax,4*\lby) -- cycle;
\draw[dashed] (0+3*\ax,0) -- (0+3*\ax,10);
\draw[dashed] (0+3*\ax,0)-- (15*\lax+3*\ax,15*\lay+15*\lby);
\draw[dashed] (0+3*\ax,0) -- (15*\lax+3*\ax,15*\lay);
\draw[thick,->] (0+3*\ax,0) -- (\ax+3*\ax,\ay) node[right] {\(\alpha_{2}\)};
\draw[thick,->] (0+3*\ax,0) -- (\bx+3*\ax,\by) node[above] {\(\alpha_{1}\)};
\draw[thick,->] (0+3*\ax,0) -- (-\ax+3*\ax,-\ay);
\draw[thick,->] (0+3*\ax,0) -- (-\bx+3*\ax,-\by);
\draw[thick,->] (0+3*\ax,0) -- (\ax+\bx+3*\ax,\ay+\by);
\draw[thick,->] (0+3*\ax,0) -- (-\ax-\bx+3*\ax,-\ay-\by);
\draw(\lax+3*\ax,\lay-1/8) node[right]{\(\epsilon_{2}\)};
\draw[thick,->] (0+3*\ax,0) -- (\lax+3*\ax,\lay);
\draw[thick,->] (0+3*\ax,0) -- (\lbx+3*\ax,\lby);
\draw (\lbx+3*\ax,\lby) node[left]{\(\epsilon_{1}\)};
\draw[thick,->] (0+3*\ax,0) -- (-\lax+3*\ax,-\lay);
\draw[thick,->] (0+3*\ax,0) -- (-\lbx+3*\ax,-\lby);
\draw[thick,->] (0+3*\ax,0) -- (\lax-\lbx+3*\ax,\lay-\lby);
\draw[thick,->] (0+3*\ax,0) -- (\lbx-\lax+3*\ax,\lby-\lay) node[left] {\(\epsilon_3\)};
\fill (0+3*\ax,4*\lby) circle (2pt) node[left] {\(n\epsilon_{1}\)};
\fill (4*\lax+3*\ax,4*\lay) circle (2pt);
\fill (4*\lax+4*\lbx+3*\ax,4*\lay+4*\lby) circle (2pt);
\draw (4*\lax+3*\ax,4*\lay-1/4) node[below] {\(n\epsilon_{1}\)};
\draw (4*\lax+4*\lbx+3*\ax-1/4,4*\lay+4*\lby) node[left] {\(n(\epsilon_{1}+\epsilon_{2})\)};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Projections of the bottoms $B_{n0}$ and $B_{0n}$.}\label{figure: Spin7G2 projections of bottoms}
\end{figure}
Let $p:P_G\rightarrow P_M=P_K$ be the orthogonal projection along the spherical direction.
By abuse of notation we denote (with $p(\varpi_3)=0$)
\[ \epsilon_1=p(\varpi_1)=(2,-1,-1)/3\;,\;\epsilon_2=p(\varpi_2)=(1,1,-2)/3 \]
for the fundamental weights for $P_M^+=p(P_G^+)$.
It is now easy to check that this projection
\[ p:B(n0)\rightarrow p(B(n0))\;,\;p:B(0n)\rightarrow p(B(0n)) \]
is a bijection from the bottom onto its image in $P_M^+$.
In Figure \ref{figure: Spin7G2 projections of bottoms} we have drawn the projections
\[ p(B(n0))=\{k\epsilon_1+l\epsilon_2;k+l\leq n\}\;,\;p(B(0n))=\{k\epsilon_1+l\epsilon_2;k,l\leq n,k+l\geq n\} \]
on the left and the right side respectively.
Let us now prove the remaining case of Proposition \ref{prop: asymptotics}. Consider $\lambda=klm\in P^{+}_{G}$.
We take $x=k+l+m/2,y=l+m/2,z=m/2$ with $m$ relatively large. The projections of the elements $w\lambda$ that land in $P^{+}_{M}$ are given in Table \ref{Table: projections on M chamber}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|l|l|l|l|} \hline
$i$&$w_i\lambda$&$q(w_i\lambda)$&$q(w_i\lambda)$ for $\lambda=klm$\\ \hline
$1$&$(x,y,z)$&$(x+z)\epsilon_1+(y-z)\epsilon_2$&$(k+l+m)\epsilon_1+l\epsilon_2$\\ \hline
$2$&$(x,y,-z)$&$(x-z)\epsilon_1+(y+z)\epsilon_2$&$(k+l)\epsilon_1+(l+m)\epsilon_2$\\ \hline
$3$&$(x,z,-y)$&$(x-y)\epsilon_1+(y+z)\epsilon_2$&$k\epsilon_1+(l+m)\epsilon_2$\\ \hline
$4$&$(x,-z,-y)$&$(x-y)\epsilon_1+(y-z)\epsilon_2$&$k\epsilon_1+l\epsilon_2$\\ \hline
$5$&$(y,x,z)$&$(y+z)\epsilon_1+(x-z)\epsilon_2$&$(l+m)\epsilon_1+(k+l)\epsilon_2$\\ \hline
$6$&$(y,x,-z)$&$(y-z)\epsilon_1+(x+z)\epsilon_2$&$l\epsilon_1+(k+l+m)\epsilon_2$\\ \hline
$7$&$(z,x,y)$&$(y+z)\epsilon_1+(x-y)\epsilon_2$&$(l+m)\epsilon_1+k\epsilon_2$\\ \hline
$8$&$(-z,x,y)$&$(y-z)\epsilon_1+(x-y)\epsilon_2$&$l\epsilon_1+k\epsilon_2$\\ \hline
\end{tabular}
\end{center}
\caption{Projections of $w\lambda$ in $P^{+}_{M}$.}\label{Table: projection of lambda}\label{Table: projections on M chamber}
\end{table}
As $m$ gets large the points $q(w_i\lambda)$ run to infinity except for $i=4$ and $i=8$.
This means that we should take $\nu=p(\lambda)=q(w_4\lambda)$, corresponding to the choice $i=4$.
As $m\rightarrow\infty$ the multiplicity function $m^{G,K}_{\lambda}(\mu)$ in Figure \ref{figure: Spin7G2 support}
tends, as a function of $\mu\in P_K^+={\mathbb N}\omega_1+{\mathbb N}\omega_2$, to the function that gives the multiplicity of $\mu$ in the induced
representation $\mathrm{Ind}_{M}^{K}(V^{M}_{\nu})$ from $M=\mathrm{SU}(3)$ to $K=\mathrm{G}_2$,
which by Frobenius reciprocity equals $m^{K,M}_{\mu}(\nu)$. This shows that $\lim_{m\to\infty}m^{G,K}_{\lambda}(\mu)=m^{K,M}_{\mu}(\nu)$.
The weights of the fundamental spherical representation with highest weight $\lambda_{{\mathrm{sph}}}=\varpi_{3}$ are $\frac{1}{2}(\pm e_{1}\pm e_{2}\pm e_{3})$. Expressed in terms of fundamental weights these become
$$001,(-1)01,1(-1)1, 01(-1)$$
and their negatives. It follows from Corollaries \ref{(B_3,G_2,F_1) multiplicity free} and \ref{(B_3,G_2,F_2) multiplicity free} that Theorem \ref{degree inequality theorem} holds true for this case.
\section{The pair $(G,K)=(\USp(2n),\USp(2n-2)\times\USp(2))$}\label{symplectic}
Let $G=\USp(2n)$ and $K=\USp(2n-2)\times\USp(2)$ with $n\ge3$. The weight lattices of $G$ and $K$ are equal, $P=\mathbb{Z}^{n}$, and we denote by $\epsilon_{i}$ the $i$-th basis vector. The set of dominant weights for $G$ is $P^{+}_{G}=\{(a_{1},\ldots,a_{n})\in P:a_{1}\ge\ldots\ge a_{n}\ge0\}$. The set of dominant weights for $K$ is $P^{+}_{K}=\{(b_{1},\ldots,b_{n})\in P:b_{1}\ge\ldots\ge b_{n-1}\ge0,b_{n}\ge0\}$. The branching rule from $G$ to $K$ is due to Lepowsky \cite{Lepowsky}, \cite[Thm.~9.50]{Knapp}.
\begin{theorem}[Lepowsky]\label{theorem: lepowsky1}
Let $\lambda=(a_{1},\ldots,a_{n})\in P^{+}_{G}$ and $\mu=(b_{1},\ldots,b_{n})\in P^{+}_{K}$. Define $A_{1}=a_{1}-\max(a_{2},b_{1})$, $A_{k}=\min(a_{k},b_{k-1})-\max(a_{k+1},b_{k})$ for $2\le k\le n-1$ and $A_{n}=\min(a_{n},b_{n-1})$.
The multiplicity $m^{G,K}_{\lambda}(\mu)=0$ unless all $A_{i}\ge0$ and $b_{n}+\sum_{i=1}^{n}A_{i}\in2\mathbb{Z}$. In this case the multiplicity is given by
\begin{multline}\label{eq: lepowsky}
m^{G,K}_{\lambda}(\mu)=p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_{2}+\cdots+(A_{n}-b_{n})\epsilon_{n})-\\
p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_{2}+\cdots+(A_{n}+b_{n}+2)\epsilon_{n})
\end{multline}
where $p_{\Sigma}$ is the partition function of the set $\Sigma=\{\epsilon_{i}\pm\epsilon_{n}:1\le i\le n-1\}$.
\end{theorem}
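Lepowsky's rule admits a direct brute-force implementation. In the sketch below (our code) $p_{\Sigma}$ is evaluated by enumerating the coefficients of the roots $\epsilon_{i}+\epsilon_{n}$, which determine those of $\epsilon_{i}-\epsilon_{n}$; as a check, the defining representation of $\USp(6)$ restricts to $\USp(4)\times\USp(2)$ as $(1,0;0)\oplus(0,0;1)$.

```python
from itertools import product

def p_sigma(v):
    # partition function of Sigma = {eps_i + eps_n, eps_i - eps_n : i < n}
    # at the vector v = (v_1, ..., v_n); a_i counts copies of eps_i + eps_n,
    # so that v_i - a_i copies of eps_i - eps_n are forced
    *vs, vn = v
    if any(x < 0 for x in vs):
        return 0
    return sum(1 for a in product(*(range(x + 1) for x in vs))
               if sum(2*ai - vi for ai, vi in zip(a, vs)) == vn)

def mult(a, b):
    # multiplicity of the K-type b in the restriction of the G-type a
    n = len(a)
    A = ([a[0] - max(a[1], b[0])]
         + [min(a[k], b[k-1]) - max(a[k+1], b[k]) for k in range(1, n-1)]
         + [min(a[n-1], b[n-2])])
    if min(A) < 0 or (b[n-1] + sum(A)) % 2:
        return 0
    return (p_sigma(tuple(A[:-1] + [A[-1] - b[n-1]]))
            - p_sigma(tuple(A[:-1] + [A[-1] + b[n-1] + 2])))

# the defining representation of USp(6) decomposes as (1,0;0) + (0,0;1)
assert mult((1, 0, 0), (1, 0, 0)) == 1
assert mult((1, 0, 0), (0, 0, 1)) == 1
assert mult((1, 0, 0), (0, 0, 0)) == 0   # fails the parity condition
assert mult((0, 0, 0), (0, 0, 0)) == 1
```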
\begin{theorem}
Let $\mu=x\omega_{i}+y\omega_{j}\in P^{+}_{K}$ with $i<j$ and write $\mu=(b_{1},\ldots,b_{n})$. Let $\lambda=(a_{1},\ldots,a_{n})\in P^{+}_{G}$. Let $A_{1},\ldots, A_{n}$ be defined as in Theorem \ref{theorem: lepowsky1}. Then $m^{G,K}_{\lambda}(\mu)\le1$ with equality precisely when (1) $A_{k}\ge0$ for $k=1,\ldots,n-1$, (2) $b_{n}+\sum_{k=1}^{n}A_{k}\in2\mathbb{Z}$ and (3) $\max(A_{1},\ldots,A_{n},b_{n})\le\frac{1}{2}(b_{n}+\sum_{k=1}^{n}A_{k})$.
\end{theorem}
\begin{proof}
Suppose that $m^{G,K}_{\lambda}(\mu)\ge1$. Then (1) and (2) follow from Theorem \ref{theorem: lepowsky1}. In fact, $A_{k}=0$ unless $k\in\{1,i+1,j+1\}\cap[1,n]$, because of the hypothesis on $\mu$. We evaluate (\ref{eq: lepowsky}) below, showing that $m^{G,K}_{\lambda}(\mu)\le1$ with equality precisely when (3) holds.
We distinguish 4 cases: (i) $j<n-1$, (ii) $j=n-1$, (iii) $j=n, i=n-1$, (iv) $j=n, i<n-1$. In all cases we reduce to $n=4$ and we find the following expressions for $m_{\lambda}^{G,K}(\mu)$:
\begin{itemize}
\item[(i)] $p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2+A_{3}\epsilon_{3})-p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2+A_{3}\epsilon_{3}+2\epsilon_{4})$,
\item[(ii)] $p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2+A_{4}\epsilon_{4})-p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2+(A_{4}+2)\epsilon_{4})$,
\item[(iii)] $p_{\Sigma}(A_{1}\epsilon_{1}+(A_{4}-b_{4})\epsilon_{4})-p_{\Sigma}(A_{1}\epsilon_{1}+(A_{4}+b_{4}+2)\epsilon_{4})$,
\item[(iv)] $p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2-b_{4}\epsilon_{4})-p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_2+(b_{4}+2)\epsilon_{4})$.
\end{itemize}
The cases (ii) and (iv) reduce to (i) using elementary manipulations of partition functions, see \cite[p.~588]{Knapp}. Case (iii) can also be reduced to (i), but this is not necessary as we can handle this case directly. We have $p_{\Sigma}(A_{1}\epsilon_{1}+(A_{4}-b_{4})\epsilon_{4})\le1$ with equality if and only if $A_{1}+A_{4}-b_{4}\in2\mathbb{N}$ and $A_{1}-A_{4}+b_{4}\in2\mathbb{N}$. Similarly $p_{\Sigma}(A_{1}\epsilon_{1}+(A_{4}+b_{4}+2)\epsilon_{4})\le1$ with equality if and only if $A_{1}+A_{4}+b_{4}+2\in2\mathbb{N}$ and $A_{1}-A_{4}-b_{4}-2\in2\mathbb{N}$. This implies the assertion in case (iii).
In case (i) we have
\begin{eqnarray*}
\sum_{k=1}^{3}A_{k}\epsilon_{k}=\sum_{k=1}^{3}B_{k}(\epsilon_{k}+\epsilon_{4})+\sum_{k=1}^{3}(A_{k}-B_{k})(\epsilon_{k}-\epsilon_{4})
\end{eqnarray*}
if and only if $\sum_{k=1}^{3}B_{k}=A$, where $A=\frac{1}{2}(A_{1}+A_{2}+A_{3})$. It follows that
$$p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_{2}+A_{3}\epsilon_{3})=
\#\{(B_{1},B_{2},B_{3})\in\mathbb{N}^{3}:\sum_{k=1}^{3}B_{k}=A\mbox{ and }B_{k}\le A_{k}\}$$
and similarly
\begin{multline}p_{\Sigma}(A_{1}\epsilon_{1}+A_{2}\epsilon_{2}+A_{3}\epsilon_{3}+2\epsilon_{4})=\\
\#\{(B_{1},B_{2},B_{3})\in\mathbb{N}^{3}:\sum_{k=1}^{3}B_{k}=A+1\mbox{ and }B_{k}\le A_{k}\}.
\end{multline}
Assume that $A_{1}\ge A_{2}\ge A_{3}$. We distinguish two possibilities: (1) $A_{1}\le A$ and (2) $A_{1}>A$. In case (1) we have
\[p_{\Sigma}(\sum_{i=1}^{3}A_{i}\epsilon_{i})=\#\{\mbox{lattice points in hexagon indicated in Figure \ref{figure: counting points}}\}\]
which is given by
\[p_{\Sigma}(\sum_{i=1}^{3}A_{i}\epsilon_{i})=(A+1)(A+2)/2-\sum_{i=1}^{3}(A-A_{i})(A-A_{i}+1)/2.\]
Similarly
\[p_{\Sigma}(\sum_{i=1}^{3}A_{i}\epsilon_{i}+2\epsilon_{4})=(A+2)(A+3)/2-\sum_{i=1}^{3}(A+1-A_{i})(A-A_{i}+2)/2\]
and the difference is one, as was to be shown.
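Both counts are easy to confirm by brute force. The following sketch (our code, with $A=\frac{1}{2}(A_{1}+A_{2}+A_{3})$) checks a small instance of each of the two cases.

```python
def count(A1, A2, A3, S):
    # number of (B1, B2, B3) in N^3 with B1 + B2 + B3 = S and Bk <= Ak
    return sum(1 for B1 in range(A1 + 1) for B2 in range(A2 + 1)
               for B3 in range(A3 + 1) if B1 + B2 + B3 == S)

def closed(A1, A2, A3):
    # closed formula for case (1), where A1 <= A
    A = (A1 + A2 + A3) // 2
    return ((A + 1)*(A + 2)//2
            - sum((A - Ak)*(A - Ak + 1)//2 for Ak in (A1, A2, A3)))

# case (1): (A1, A2, A3) = (3, 2, 1), A = 3
assert count(3, 2, 1, 3) == closed(3, 2, 1) == 6
assert count(3, 2, 1, 3) - count(3, 2, 1, 4) == 1   # the difference is one
# case (2): (A1, A2, A3) = (5, 2, 1), A = 4 < A1: the difference is zero
assert count(5, 2, 1, 4) == count(5, 2, 1, 5)
```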
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=.35]
\begin{scope}
\clip (-5,-5) rectangle (28,11);
\draw[thin,lightgray,->] (0,0) -- (0,10);
\draw[thin,lightgray] (0,6) -- (6,6) -- (3,4) -- (3,-2);
\draw[thin,lightgray] (6,6) -- (6,0);
\draw[thin,lightgray] (6,0) -- (3,-2);
\draw[thin,lightgray] (3,-2) -- (-3,-2);
\draw[thin,lightgray] (-3,-2) -- (-3,4) --(3,4);
\draw[thin,lightgray] (-3,4) -- (0,6);
\draw[thin,lightgray,->] (0,0) -- (-5,-10/3);
\draw[thin,lightgray,->] (0,0) -- (10,0);
\draw[thin] (0,9) node[left]{$\bar{A}$} -- (3,6) -- (-3/2,5) -- cycle;
\draw[thin] (9,0) node[below]{$\bar{A}$} -- (9/2,-1) -- (6,3) -- cycle;
\draw[thin] (-9/2,-3) node[below]{$\bar{A}$} -- (-3,1) -- (0,-2) -- cycle;
\draw[dashed] (-3,1) -- (0,-2) -- (9/2,-1) -- (6,3) -- (3,6) -- (-3/2,5) -- cycle;
\draw[thin] (6,0) node[above right]{$A_2$};
\draw[thin] (0,6) node[above right]{$A_3$};
\draw[thin] (-2.7,-2) node[left]{$A_1$};
\draw[thin,lightgray,->] (20-3,0) -- (31-3,0);
\draw[thin,lightgray,->] (20-3,0) -- (20-3,11);
\draw[thin,lightgray,->] (20-3,0) -- (67/5-3,-22/5);
\draw[thin,lightgray] (14-3,-4) -- (18-3,-4) -- (18-3,0) -- (14-3,0) -- cycle;
\draw[thin,lightgray] (24-3,0) -- (20-3,0) -- (20-3,4) -- (24-3,4) -- cycle;
\draw[thin,lightgray] (14-3,-4) -- (20-3,0) -- (24-3,0) -- (18-3,-4);
\draw[thin,lightgray] (14-3,0) -- (20-3,4) -- (24-3,4) -- (18-3,0) -- cycle;
\draw[thin] (14-3,-4+1/4) node[left]{$A_1$};
\draw[thin] (24-3,0) node[above right]{$A_2$};
\draw[thin] (20-3,4) node[above right]{$A_3$};
\draw[thin] (20-3,10) node[left]{$\bar{A}$} -- (30-3,0) node[below]{$\bar{A}$} -- (21-3,-2);
\draw[dashed] (21-3,-2) -- (15-3,-10/3) node[below]{$\bar{A}$};
\draw[thin] (21-3,-2) -- (24-3,6);
\draw[thin, lightgray] (24-3,6) -- (24-3,4) -- (26-3,4);
\draw[thin] (23-3,10/3) -- (26-3,4);
\draw[thin] (23-3,10/3) -- (17-3,2) -- (20-3,10);
\draw[dashed] (17-3,2) -- (15-3,-10/3);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Counting integral points.}\label{figure: counting points}
\end{figure}
In case (2) where $A_{1}>A$ we have
\[p_{\Sigma}(\sum_{i=1}^{3}A_{i}\epsilon_{i})=\#\{\mbox{lattice points in parallelogram in Figure \ref{figure: counting points}}\}\]
which is given by $(A_{2}+1)(A_{3}+1)$. Similarly $p_{\Sigma}(\sum_{i=1}^{3}A_{i}\epsilon_{i}+2\epsilon_{4})=(A_{2}+1)(A_{3}+1)$ and hence the difference is zero.
\end{proof}
The bottom $B(\mu)$ of the $\mu$-well $P^{+}_{G}(\mu)$ is parametrized by $P^{+}_{M}(\mu)$, where $M\cong\USp(2)\times\USp(2n-4)\times\USp(2)$. In \cite{Baldoni Silva} the branching rules for $K$ to $M$ are described. The dominant integral weights for $M$ are parametrized by $P^{+}_{M}=\{(c_{1},c_{2},\ldots,c_{n-1},c_{1}): 2c_{1}\in\mathbb{N}, c_{2}\ge\cdots\ge c_{n-1}\ge0\}\subset P$. The map $p:P^{+}_{G}\to P^{+}_{M}$ from Proposition \ref{projection from induced to restricted spectrum} is given as follows. Write $\lambda=(a_{1},\ldots,a_{n})\in P^{+}_{G}$ as
\begin{eqnarray}
\lambda=(\lambda-\frac{a_{1}+a_{2}}{2}\lambda_{{\mathrm{sph}}})+\frac{a_{1}+a_{2}}{2}\lambda_{{\mathrm{sph}}},
\end{eqnarray}
with $\lambda_{{\mathrm{sph}}}=\varpi_{2}=\epsilon_{1}+\epsilon_{2}$. Then $p(\lambda)=(\frac{1}{2}(a_{1}-a_{2}),a_{3},\ldots,a_{n},\frac{1}{2}(a_{1}-a_{2}))\in P^{+}_{M}$. The map $q:P\to P:\lambda\mapsto\lambda-(a_{1}+a_{2})\lambda_{{\mathrm{sph}}}/2$ projects onto the orthocomplement of $\lambda_{{\mathrm{sph}}}$ and the maps $p$ and $q$ differ by a Weyl group element in $W_{G}$. To determine the bottom $B(\mu)$ we have to find for each $\lambda\in P^{+}_{G}(\mu)$ the minimal $d\in\frac{1}{2}\mathbb{N}$ for which $q(\lambda)+d\lambda_{{\mathrm{sph}}}\in P^{+}_{G}(\mu)$. We distinguish two cases for the $K$-type $\mu=x\omega_{i}+y\omega_{j}=(b_{1},\ldots,b_{n})$, $i<j$: (1) $i=1$, (2) $i>1$. Assume (1). Then the relevant inequalities are $A_{1}\ge0,A_{2}\ge0$ and $A_{1}+A_{2}\ge B$, with $B$ equal to $A_{j+1}$ or $y$, depending on whether $j<n$ or $j=n$. Plugging in $\lambda=q(\lambda)+d\lambda_{{\mathrm{sph}}}$ and minimizing for $d$ yields
$$d=\max(b_{1}-c_{1},b_{2}+c_{1},\frac{1}{2}(b_{1}+B+\max(a_{3},b_{2}))),$$
where $c_{1}=(a_{1}-a_{2})/2$. The branching rules for $K$ to $M$, specialized to the specific choice of $\mu$, imply that $d=\frac{1}{2}(b_{1}+B+\max(a_{3},b_{2}))$ (see \cite{van Pruijssen Roman}). Assume (2). The relevant inequality is $A_{1}\ge0$. Since $i>1$ we have $b_{1}=b_{2}$ so $A_{1}=a_{1}-a_{2}$, which is invariant under adding multiples of $\lambda_{{\mathrm{sph}}}$. We plug in $q(\lambda)+d\lambda_{{\mathrm{sph}}}$ and write $c_{1}=(a_{1}-a_{2})/2$. Minimizing $d$ so that $A_{k}\ge0$ yields $d=c_{1}+b_{1}$.
The weights of the fundamental spherical representation of highest weight $\lambda_{{\mathrm{sph}}}=\epsilon_{1}+\epsilon_{2}$ are $\{\pm\epsilon_{i}\pm\epsilon_{j}:i<j\}\cup\{0\}$. One easily checks that Theorem \ref{degree inequality theorem} holds true for this case.
\section{The pair $(G,K)=({\mathrm F}_{4},\mathrm{Spin}(9))$}\label{F4}
In this section we take $G$ of type $F_{4}$ and $K=\mathrm{Spin}(9)$ the subgroup of type $B_{4}$. Let $H\subset K\subset G$ be the standard maximal torus and let $\mathfrak{g},\mathfrak{k},\mathfrak{h}$ denote the corresponding Lie algebras. We fix the set of positive roots of the root systems $\Delta(\mathfrak{g},\mathfrak{h})$ and $\Delta(\mathfrak{k},\mathfrak{h})$,
$$R^{+}_{K}=\{\epsilon_{i}\pm\epsilon_{j}|1\le i< j\le4\}\cup\{\epsilon_{i}|1\le i\le4\},$$
$$R^{+}_{G}=R^{+}_{K}\cup\left\{\frac{1}{2}(\epsilon_{1}\pm\epsilon_{2}\pm\epsilon_{3}\pm\epsilon_{4})\right\}.$$
The corresponding systems of simple roots are
$$\Pi_{G}=\{\alpha_{1}=\frac{1}{2}(\epsilon_{1}-\epsilon_{2}-\epsilon_{3}-\epsilon_{4}), \alpha_{2}=\epsilon_{4},\alpha_{3}=\epsilon_{3}-\epsilon_{4},\alpha_{4}=\epsilon_{2}-\epsilon_{3}\},$$
$$\Pi_{K}=\{\beta_{1}=\epsilon_{1}-\epsilon_{2},\beta_{2}=\epsilon_{2}-\epsilon_{3},\beta_{3}=\epsilon_{3}-\epsilon_{4},\beta_{4}=\epsilon_{4}\},$$
see also the Dynkin diagram in Figure \ref{figure: dynkin}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1]
\begin{scope}
\clip (0,-.30) rectangle (10.1,1);
\draw[black] (1,0.5) -- (2,0.5);
\draw[black] (3,0.5) -- (4,0.5);
\draw[black] (6,0.5) -- (8,0.5);
\draw[black] (2,0.4) -- (3,0.4);
\draw[black] (2,0.6) -- (3,0.6);
\draw[black] (8,0.4) -- (9,0.4);
\draw[black] (8,0.6) -- (9,0.6);
\draw[fill=white] (1,0.5) circle (4pt);
\draw[fill=white] (2,0.5) circle (4pt);
\draw[fill=white] (3,0.5) circle (4pt);
\draw[fill=white] (4,0.5) circle (4pt);
\draw[fill=white] (6,0.5) circle (4pt);
\draw[fill=white] (7,0.5) circle (4pt);
\draw[fill=white] (8,0.5) circle (4pt);
\draw[fill=white] (9,0.5) circle (4pt);
\draw (1,0) node {\(\alpha_{1}\)};
\draw (2,0) node {\(\alpha_{2}\)};
\draw (3,0) node {\(\alpha_{3}\)};
\draw (4,0) node {\(\alpha_{4}\)};
\draw (6,0) node {\(\beta_{1}\)};
\draw (7,0) node {\(\beta_{2}\)};
\draw (8,0) node {\(\beta_{3}\)};
\draw (9,0) node {\(\beta_{4}\)};
\draw[black] (2.6 ,0.7) -- (2.4 ,0.5) --(2.6 ,0.3);
\draw[black] (8.4 ,0.7) -- (8.6 ,0.5) --(8.4 ,0.3);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The Dynkin diagrams of ${\mathrm F}_{4}$ and $B_{4}$.}\label{figure: dynkin}
\end{figure}
\noindent The fundamental weights corresponding to $\Pi_{G}$ are given by
$$\varpi_{1}=\epsilon_{1},\varpi_{2}=\frac{1}{2}(3\epsilon_{1} + \epsilon_{2} + \epsilon_{3} + \epsilon_{4}), \varpi_{3}= 2 \epsilon_{1} + \epsilon_{2} + \epsilon_{3}, \varpi_{4}=\epsilon_{1} + \epsilon_{2}$$
and those corresponding to $\Pi_{K}$ by
$$\omega_{1}=\epsilon_{1},\omega_{2}=\epsilon_{1}+\epsilon_{2},\omega_{3}=\epsilon_{1}+\epsilon_{2}+\epsilon_{3},\omega_{4}=\frac{1}{2}(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4}).$$
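As a quick numerical check of these lists (our own illustration, not part of the text's argument), the pairings $\langle\varpi_{i},\alpha_{j}^{\vee}\rangle=\delta_{ij}$ and $\langle\omega_{i},\beta_{j}^{\vee}\rangle=\delta_{ij}$ can be verified in exact arithmetic:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pairing(lam, alpha):
    # <lam, alpha^vee> = 2 (lam, alpha) / (alpha, alpha)
    return F(2) * dot(lam, alpha) / dot(alpha, alpha)

h = F(1, 2)
# simple roots of F4 and B4 in the epsilon-basis, as listed above
Pi_G = [(h, -h, -h, -h), (0, 0, 0, F(1)), (0, 0, F(1), F(-1)), (0, F(1), F(-1), 0)]
Pi_K = [(F(1), F(-1), 0, 0), (0, F(1), F(-1), 0), (0, 0, F(1), F(-1)), (0, 0, 0, F(1))]
# the corresponding fundamental weights
varpi = [(F(1), 0, 0, 0), (F(3, 2), h, h, h), (F(2), F(1), F(1), 0), (F(1), F(1), 0, 0)]
omega = [(F(1), 0, 0, 0), (F(1), F(1), 0, 0), (F(1), F(1), F(1), 0), (h, h, h, h)]

for fw, simples in ((varpi, Pi_G), (omega, Pi_K)):
    for i, w in enumerate(fw):
        for j, a in enumerate(simples):
            assert pairing(w, a) == (1 if i == j else 0)
print("fundamental weight / simple root pairings verified")
```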
The lattices of integral weights of $G$ and $K$ are the same and equal to $P=\mathbb{Z}^{4}\cup((1/2,1/2,1/2,1/2)+\mathbb{Z}^{4})$ and the sets of dominant integral weights are denoted by $P^{+}_{G}$ and $P^{+}_{K}$.
\begin{theorem}\label{thm: multiplicity free condition}
There are three faces $F$ of $K$ such that $m^{G,K}_{\lambda}(\mu)\le1$ for all $\lambda\in P^{+}_{G}$ and all $\mu\in F$: the two-dimensional face spanned by $\{\omega_{1},\omega_{2}\}$ and two one-dimensional faces, spanned by $\omega_{3}$ and $\omega_{4}$ respectively.
\end{theorem}
This result has been obtained in \cite{He et al} as part of a classification. Another proof is given in \cite[Lem.~2.2.10]{van Pruijssen}.
The pair $(G,K)$ is a symmetric pair and choosing the maximal anisotropic torus $T\subset G$ (a circle group) as in \cite{Baldoni Silva} we have $Z_{K}(T)=M\cong\mathrm{Spin}(7)$, where the embedding $\mathrm{Spin}(7)\to\mathrm{Spin}(8)$ is twisted:
\begin{eqnarray}\label{embedding of M}
\mathfrak{so}(7,\mathbb{C})\subset\mathfrak{so}(8,\mathbb{C})\stackrel{\tau}{\to}\mathfrak{so}(8,\mathbb{C})\subset\mathfrak{so}(9,\mathbb{C}),
\end{eqnarray}
with $\tau$ the automorphism that interchanges the roots $\epsilon_{1}-\epsilon_{2}$ and $\epsilon_{3}-\epsilon_{4}$, see \cite{Baldoni Silva}. We fix the maximal torus $\mathfrak{h}_{M}=\mathfrak{m}\cap\mathfrak{h}$ and choose the positive roots $\Delta(\mathfrak{m},\mathfrak{h}_{M})$ such that the set of simple roots equals
$$\Pi_{M}=\{\delta_{1}=\epsilon_{3}-\epsilon_{4},\delta_{2}=\epsilon_{2}-\epsilon_{3},\delta_{3}=\frac{1}{2}(\epsilon_{1}-\epsilon_{2}+\epsilon_{3}+\epsilon_{4})\}.$$
The corresponding fundamental weights are given by
$$\eta_{1}=\frac{1}{2}(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}-\epsilon_{4}),\eta_{2}=\epsilon_{1}+\epsilon_{2},\eta_{3}=\frac{1}{4}(3\epsilon_{1}+\epsilon_{2}+\epsilon_{3}+\epsilon_{4}).$$
The spherical weight is $\lambda_{{\mathrm{sph}}}=\varpi_{1}$. We want to calculate the map $P^{+}_{G}\to P^{+}_{M}$, but $\lambda_{{\mathrm{sph}}}$ is not perpendicular to $P^{+}_{M}$. Hence we pass to another Weyl chamber, and project along the new spherical direction, which is perpendicular to $P^{+}_{M}$. Choose a Weyl group element $w_{M}\in W_{G}$ such that the Weyl chamber $w_{M} P^{+}_{G}$ has the following properties: (1) $w_{M}\lambda_{{\mathrm{sph}}}\perp P_{M}$ and (2) the projection along $w_{M}\lambda_{{\mathrm{sph}}}$ induces a map $w_{M}P^{+}_{G}\to P^{+}_{M}$. We ask Mathematica \cite{Mathematica} to go through the list of Weyl group elements and test for these properties. We find two Weyl group elements, $w_{M}$ and $s_{1}w_{M}$, where
\begin{eqnarray}\label{eqn: element w}
w_M=\left(
\begin{array}{cccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\
-\frac{1}{2} & \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\
-\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \\
-\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & \frac{1}{2}
\end{array}
\right)
\end{eqnarray}
with respect to the basis $\{\epsilon_{1},\epsilon_{2},\epsilon_{3},\epsilon_{4}\}$.
\begin{lemma}\label{lemma: M projection}
Let $q:P^{+}_{G}\to P^{+}_{M}$ be given by $q(\lambda)=w_{M}(\lambda)|_{\mathfrak{h}_{M}}$, where $w_{M}$ is given by (\ref{eqn: element w}). Then $q(P^{+}_{G}(\mu))=P^{+}_{M}(\mu)$ and $q(\sum_{i=1}^{4}\lambda_{i}\varpi_{i})=\lambda_{4}\eta_{1}+\lambda_{3}\eta_{2}+\lambda_{2}\eta_{3}$.
\end{lemma}
\begin{proof}
The surjectivity is implied by Proposition \ref{prop: asymptotics}. The calculation involves a base change for $w_{M}$ with basis $\{\eta_{1},\eta_{2},\eta_{3},\alpha_{1}\}$ and follows readily.
\end{proof}
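The formula of the lemma can also be checked numerically: apply $w_{M}$ and project along $w_{M}\lambda_{{\mathrm{sph}}}$, which is perpendicular to the span of the $\eta_{i}$. The following Python sketch is a verification aid (not part of the proof) using the coordinates listed above:

```python
from fractions import Fraction as F

half = F(1, 2)
# w_M in the epsilon-basis, from (eqn: element w)
wM = [[half, half, half, half],
      [-half, half, half, -half],
      [-half, half, -half, half],
      [-half, -half, half, half]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# fundamental weights of F4 (epsilon-coordinates)
varpi = {1: [F(1), F(0), F(0), F(0)],
         2: [F(3, 2), half, half, half],
         3: [F(2), F(1), F(1), F(0)],
         4: [F(1), F(1), F(0), F(0)]}
# fundamental weights of M = Spin(7) under the twisted embedding
eta = {1: [half, half, half, -half],
       2: [F(1), F(1), F(0), F(0)],
       3: [F(3, 4), F(1, 4), F(1, 4), F(1, 4)]}

sph_dir = apply(wM, varpi[1])  # w_M(lambda_sph)

def q(v):
    """Apply w_M, then project along sph_dir onto its orthocomplement
    (this realizes the restriction to h_M)."""
    w = apply(wM, v)
    c = dot(w, sph_dir) / dot(sph_dir, sph_dir)
    return [wi - c * si for wi, si in zip(w, sph_dir)]

# Lemma: q(varpi_1)=0, q(varpi_2)=eta_3, q(varpi_3)=eta_2, q(varpi_4)=eta_1
assert q(varpi[1]) == [F(0)] * 4
assert q(varpi[2]) == eta[3]
assert q(varpi[3]) == eta[2]
assert q(varpi[4]) == eta[1]
print("projection formula of the lemma verified")
```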
It follows that $\lambda=\lambda_{1}\varpi_{1}+\lambda_{2}\varpi_{2}+\lambda_{3}\varpi_{3}+\lambda_{4}\varpi_{4}\in P^{+}_{G}(\mu)$ implies that $\lambda_{4}\eta_{1}+\lambda_{3}\eta_{2}+\lambda_{2}\eta_{3}\in P^{+}_{M}(\mu)$. The branching rule $\mathrm{Spin}(9)\to\mathrm{Spin}(7)$ is described in \cite[Thm.~6.3]{Baldoni Silva} and we recall it for our special choices of $\mu$. It is basically the same as branching $B_{4}\downarrow D_{4}\downarrow B_{3}$ via interlacing, see e.g.~\cite[Thm.~9.16]{Knapp}, but on the $D_{4}$-level we have to interchange the coefficients of the first and the third fundamental weight.
\begin{proposition}\label{prop: branching spin9 -> spin7}
The spectrum $P^{+}_{M}(\mu)$ is given by the following inequalities.
\begin{itemize}\item Let $\mu=\mu_{1}\omega_{1}+\mu_{2}\omega_{2}$. Then $\lambda_{4}\eta_{1}+\lambda_{3}\eta_{2}+\lambda_{2}\eta_{3}\in P^{+}_{M}(\mu)$ if and only if
\begin{eqnarray*}
\lambda_{2}+\lambda_{3}+\lambda_{4}\le\mu_{1}+\mu_{2},\\
\lambda_{3}+\lambda_{4}\le\mu_{2}\le\lambda_{2}+\lambda_{3}+\lambda_{4}.
\end{eqnarray*}
\item Let $\mu=\mu_{3}\omega_{3}$. Then $\lambda_{4}\eta_{1}+\lambda_{3}\eta_{2}+\lambda_{2}\eta_{3}\in P^{+}_{M}(\mu)$ if and only if
\begin{eqnarray*}
\lambda_{2}+\lambda_{3}\le\mu_{3},\\
\lambda_{3}+\lambda_{4}\le\mu_{3}\le\lambda_{2}+\lambda_{3}+\lambda_{4}.
\end{eqnarray*}
\item Let $\mu=\mu_{4}\omega_{4}$. Then $\lambda_{4}\eta_{1}+\lambda_{3}\eta_{2}+\lambda_{2}\eta_{3}\in P^{+}_{M}(\mu)$ if and only if
\begin{eqnarray*}
\lambda_{3}=0,\\
\lambda_{2}+\lambda_{4}\le\mu_{4}.
\end{eqnarray*}
\end{itemize}
\end{proposition}
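The three membership tests of the proposition translate directly into code; in the following sketch the function names are made up for illustration:

```python
def in_spec_face12(lam, mu1, mu2):
    """mu = mu1*omega1 + mu2*omega2: is lam4*eta1 + lam3*eta2 + lam2*eta3
    in P_M^+(mu)?  lam = (lam1, lam2, lam3, lam4)."""
    _, l2, l3, l4 = lam
    return (l2 + l3 + l4 <= mu1 + mu2) and (l3 + l4 <= mu2 <= l2 + l3 + l4)

def in_spec_face3(lam, mu3):
    """mu = mu3*omega3."""
    _, l2, l3, l4 = lam
    return (l2 + l3 <= mu3) and (l3 + l4 <= mu3 <= l2 + l3 + l4)

def in_spec_face4(lam, mu4):
    """mu = mu4*omega4."""
    _, l2, l3, l4 = lam
    return l3 == 0 and (l2 + l4 <= mu4)

# a few sample checks of the inequalities
assert in_spec_face12((0, 1, 0, 0), 1, 1)
assert not in_spec_face4((0, 1, 1, 0), 5)   # fails because lam3 != 0
print("sample membership tests pass")
```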
Given an element $\mu\in P^{+}_{K}$ we can determine the $M$-types $\nu=\nu_{1}\eta_{1}+\nu_{2}\eta_{2}+\nu_{3}\eta_{3}\in P^{+}_{M}(\mu)$ and we know from Proposition \ref{prop: asymptotics} that for $\lambda_{1}$ large enough,
\begin{eqnarray}\label{eqn: well limit}
\lambda=\lambda_{1}\varpi_{1}+\nu_{3}\varpi_{2}+\nu_{2}\varpi_{3}+\nu_{1}\varpi_{4}\in P^{+}_{G}(\mu).
\end{eqnarray}
We proceed to determine the minimal $\lambda_{1}$ such that (\ref{eqn: well limit}) holds, in the case that $\mu$ satisfies the multiplicity free condition of Theorem \ref{thm: multiplicity free condition}.
\begin{theorem}\label{thm: F4bottom}
Let $\mu\in(\mathbb{N}\omega_{1}\oplus\mathbb{N}\omega_{2})\cup(\mathbb{N}\omega_{3})\cup(\mathbb{N}\omega_{4})$. Then $\lambda=\lambda_{1}\varpi_{1}+\lambda_{2}\varpi_{2}+\lambda_{3}\varpi_{3}+\lambda_{4}\varpi_{4}\in B(\mu)$ if and only if (i) $q(\lambda)\in P^{+}_{M}(\mu)$ and (ii)
\begin{eqnarray}
\mu_{1}+\mu_{2}=\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4} & \mbox{if} & \mu=\mu_{1}\omega_{1}+\mu_{2}\omega_{2},\label{eqn: bottom 1}\\
\mu_{3}=\lambda_{1}+\lambda_{2}+\lambda_{3} & \mbox{if} & \mu=\mu_{3}\omega_{3},\label{eqn: bottom 2}\\
\mu_{4}=\lambda_{1}+\lambda_{2}+\lambda_{4} & \mbox{if} & \mu=\mu_{4}\omega_{4}\label{eqn: bottom 3}.
\end{eqnarray}
\end{theorem}
Hence the bottom $B(\mu)$ is given by a single linear equation and the inequalities of $P^{+}_{M}(\mu)$ in all cases, except for $(G,K)=(\mathrm{SU}(n+1),\mathrm{S}(\mathrm{U}(n)\times\mathrm{U}(1)))$. We have found the inequalities of Theorem \ref{thm: F4bottom} using an implementation of the branching rule from ${\mathrm F}_{4}$ to $\mathrm{Spin}(9)$ in Mathematica and looking at some examples. Before we prove Theorem \ref{thm: F4bottom} we settle the proof of the final case of Theorem \ref{degree inequality theorem}.
\begin{corollary}
Let $\lambda\in P^{+}_{G}(\mu)$ and let $\lambda'\in P$ be a weight of the spherical representation. Then $|d(\lambda+\lambda')-d(\lambda)|\le1$ with $d:P^{+}_{G}(\mu)\to\mathbb{N}$ the degree function of Theorem \ref{degree inequality theorem}.
\end{corollary}
\begin{proof}
The weights of the spherical representation are the short roots and zero (with multiplicity two). After expressing these weights as linear combinations of fundamental weights, one easily checks the assertion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: F4bottom}.]
The proof of Theorem \ref{thm: F4bottom} is divided into two parts, corresponding to the dimension of the face. The strategy in both cases is the same.
Fix $\mu\in(\mathbb{N}\omega_{1}\oplus\mathbb{N}\omega_{2})\cup(\mathbb{N}\omega_{3})\cup(\mathbb{N}\omega_{4})$ and choose a suitable system $R^{+}_{G}$ of positive roots of $G$. Let $A=R^{+}_{G}\backslash R^{+}_{K}$ and let $p_{A}$ denote the corresponding partition function. Let $\lambda\in P^{+}_{G}$ have the property that $q(\lambda)\in P^{+}_{M}(\mu)$. This gives restrictions on $\lambda_{2},\lambda_{3},\lambda_{4}$, according to Proposition \ref{prop: branching spin9 -> spin7}. Let $\lambda_{1}$ satisfy the appropriate linear equation from the theorem.
For $w\in W_G$ define $\Lambda_{w}(\lambda,\mu)=w(\lambda+\rho)-(\mu+\rho)$. Explicit knowledge of the partition function $p_{A}$ allows us, using Mathematica, to determine for which $w\in W_G$ the quantity $p_{A}(\Lambda_{w}(\lambda,\mu))$ is zero. We end up with two elements in case $\mu\in\mathbb{N}\omega_{1}\oplus\mathbb{N}\omega_{2}$ and twelve elements in the other cases, for which $p_{A}(\Lambda_{w}(\lambda,\mu))$ is \textit{possibly} not zero. This allows us to calculate $m^{G,K}_{\lambda}(\mu)$ using Lemma \ref{branching rule lemma}. One checks that the multiplicity is one for this choice of $\lambda\in P^{+}_{G}(\mu)$.
Moreover, if $\mu\in\mathbb{N}\omega_{1}\oplus\mathbb{N}\omega_{2}$ then $p_{A}(\Lambda_{w}(\lambda-\lambda_{{\mathrm{sph}}},\mu))=0$ for all Weyl group elements. In the other cases for $\mu$ we find the same twelve Weyl group elements for which $p_{A}(\Lambda_{w}(\lambda-\lambda_{{\mathrm{sph}}},\mu))$ \textit{possibly} does not vanish. One checks that the multiplicity is zero in this case.
We conclude the proof by indicating the positive system that we chose in the various cases, a description of the partition function and lists of the Weyl group elements that may contribute in the Kostant multiplicity formula.
\paragraph{The case $\mu=\mu_{1}\omega_{1}+\mu_{2}\omega_{2}$.} Here we take the standard positive system $R^{+}_{G}$ and we have $A=R_{G}^{+}\backslash R^{+}_{K}=\{\frac{1}{2}(\epsilon_{1}\pm\epsilon_{2}\pm\epsilon_{3}\pm\epsilon_{4})\}$. Let $\Lambda=(\Lambda_{1},\Lambda_{2},\Lambda_{3},\Lambda_{4})\in P$. We claim that $p_{A}(\Lambda)>0$ if and only if $|\Lambda_{j}|\le\Lambda_{1}$ for $j=2,3,4$.
Let us denote $A=\{a_{000},\ldots,a_{111}\}$ where the binary index indicates where to put the $+$ or the $-$ sign on positions 2,3,4, e.g. $a_{100}=\frac{1}{2}(\epsilon_{1}-\epsilon_{2}+\epsilon_{3}+\epsilon_{4})$. Let
\begin{eqnarray}\label{equality 12}
\Lambda=\sum_{i=000}^{111}n_{i}a_{i}.
\end{eqnarray}
We are going to count the number of tuples $(n_{000},\ldots,n_{111})\in\mathbb{N}^{8}$ for which (\ref{equality 12}) holds. First of all, it follows from (\ref{equality 12}) that
$$\sum_{i=000}^{011}n_{i}=\Lambda_{1}+\Lambda_{2},\quad\sum_{i=100}^{111}n_{i}=\Lambda_{1}-\Lambda_{2}.$$
In other words, any linear combination (\ref{equality 12}) uses $\Lambda_{1}+\Lambda_{2}$ elements from the set $\{a_{000},\ldots,a_{011}\}$ and $\Lambda_{1}-\Lambda_{2}$ elements from the set $\{a_{100},\ldots,a_{111}\}$. Let us write $(\Lambda_{3},\Lambda_{4})=(v_{1},v_{2})+(\Lambda_{3}-v_{1},\Lambda_{4}-v_{2})$. For each such decomposition we need to count (1) the number of tuples $(n_{000},\ldots,n_{011})\in\mathbb{N}^{4}$ for which $\sum_{i=000}^{011}n_{i}a_{i}=((\Lambda_{1}+\Lambda_{2})/2,(\Lambda_{1}+\Lambda_{2})/2,v_{1},v_{2})$ and (2) the number of tuples $(n_{100},\ldots,n_{111})\in\mathbb{N}^{4}$ for which $\sum_{i=100}^{111}n_{i}a_{i}=((\Lambda_{1}-\Lambda_{2})/2,-(\Lambda_{1}-\Lambda_{2})/2,\Lambda_{3}-v_{1},\Lambda_{4}-v_{2})$. For each $(v_{1},v_{2})$ we take the product of these quantities, and summing these for the possible vectors $(v_{1},v_{2})$ yields the desired formula for $p_{A}$.
This reduces the calculation of $p_{A}$ to the following counting problem. Let $L=\mathbb{Z}^{2}\cup((\frac{1}{2},\frac{1}{2})+\mathbb{Z}^{2})$, let $A'=\{(\pm\frac{1}{2},\pm\frac{1}{2})\}$ and let $p\in\mathbb{N}$. Let us denote $A'=\{a'_{00},\ldots,a'_{11}\}$, where the binary number indicates where to put the $+$ and the $-$ signs, e.g. $a'_{10}=(-1/2,1/2)$. Given a vector $v=(v_{1},v_{2})$ we want to calculate the number of tuples $(n_{00},\ldots,n_{11})\in\mathbb{N}^{4}$ such that $\sum_{i=00}^{11}n_{i}a'_{i}=v$ and $\sum_{i=00}^{11}n_{i}=p$. It is necessary that $|v_{1}|,|v_{2}|\le p/2$. In this case, the number of tuples is $1+\frac{p}{2}-\max(|v_{1}|,|v_{2}|)$.
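The closed form $1+\frac{p}{2}-\max(|v_{1}|,|v_{2}|)$ can be confirmed by brute force. (A nonzero count also requires $v_{1},v_{2}\in\frac{p}{2}+\mathbb{Z}$; the enumeration below only tests such $v$.) This sketch is a verification aid, not part of the argument:

```python
from fractions import Fraction as F
from itertools import product

# a'_00, a'_01, a'_10, a'_11 = (+-1/2, +-1/2), with '-' where the binary index has a 1
A2 = [(F(1, 2), F(1, 2)), (F(1, 2), F(-1, 2)), (F(-1, 2), F(1, 2)), (F(-1, 2), F(-1, 2))]

def brute(v, p):
    """Number of (n00, n01, n10, n11) in N^4 with total p and weighted sum v."""
    return sum(
        1 for ns in product(range(p + 1), repeat=4)
        if sum(ns) == p
        and sum(n * a[0] for n, a in zip(ns, A2)) == v[0]
        and sum(n * a[1] for n, a in zip(ns, A2)) == v[1]
    )

def closed(v, p):
    return 1 + F(p, 2) - max(abs(v[0]), abs(v[1]))

for p in range(6):
    for i in range(p + 1):
        for j in range(p + 1):
            v = (F(2 * i - p, 2), F(2 * j - p, 2))  # all v with |v1|,|v2| <= p/2
            assert brute(v, p) == closed(v, p)
print("closed-form count verified for p <= 5")
```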
Returning to our original problem, we have
\begin{multline*}
p_{A}(\Lambda)=\sum_{v_{1},v_{2}}\left(1+\frac{\Lambda_{1}+\Lambda_{2}}{2}-\max(|v_{1}|,|v_{2}|)\right)\times \\ \left(1+\frac{\Lambda_{1}-\Lambda_{2}}{2}-\max(|\Lambda_{3}-v_{1}|,|\Lambda_{4}-v_{2}|)\right),
\end{multline*}
where $(v_{1},v_{2})$ satisfies the restrictions $|v_{1}|,|v_{2}|\le (\Lambda_{1}+\Lambda_{2})/2$ and simultaneously $|\Lambda_{3}-v_{1}|,|\Lambda_{4}-v_{2}|\le (\Lambda_{1}-\Lambda_{2})/2$. As a result, the ranges for the summations are
$$
v_{1}=\max\left(-\frac{\Lambda_{1}+\Lambda_{2}}{2},\Lambda_{3}-\frac{\Lambda_{1}-\Lambda_{2}}{2}\right),\ldots,\min\left(\frac{\Lambda_{1}+\Lambda_{2}}{2},\Lambda_{3}+\frac{\Lambda_{1}-\Lambda_{2}}{2}\right),$$
$$
v_{2}=\max\left(-\frac{\Lambda_{1}+\Lambda_{2}}{2},\Lambda_{4}-\frac{\Lambda_{1}-\Lambda_{2}}{2}\right),\ldots,\min\left(\frac{\Lambda_{1}+\Lambda_{2}}{2},\Lambda_{4}+\frac{\Lambda_{1}-\Lambda_{2}}{2}\right).
$$
In particular, $p_{A}(\Lambda)>0$ if and only if the ranges for $v_{1}$ and $v_{2}$ are both non-empty, which is equivalent to
\begin{eqnarray}
|\Lambda_{2}| & \le & \Lambda_{1}\label{ineq: A standard 1},\\
|\Lambda_{3}| & \le & \Lambda_{1}\label{ineq: A standard 2},\\
|\Lambda_{4}| & \le & \Lambda_{1}\label{ineq: A standard 3}.
\end{eqnarray}
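Both the double-sum expression for $p_{A}$ and this support condition can be cross-checked against a direct enumeration of the nonnegative integral combinations of $A$. The following sketch (ours, for small $\Lambda$ only) does so in exact arithmetic:

```python
from fractions import Fraction as F
from functools import lru_cache
from itertools import product

# A = { (1/2)(eps1 +- eps2 +- eps3 +- eps4) } in doubled integer coordinates
DA = [(1, s2, s3, s4) for s2 in (1, -1) for s3 in (1, -1) for s4 in (1, -1)]

def brute_pA(L):
    """Direct count of nonnegative integral combinations of A summing to Lambda."""
    target = tuple(int(2 * x) for x in L)

    @lru_cache(maxsize=None)
    def ways(t, i):
        if i == len(DA):
            return 1 if t == (0, 0, 0, 0) else 0
        total, n = 0, 0
        while n <= t[0]:  # every a in A has first (doubled) coordinate 1
            nt = tuple(tj - n * DA[i][k] for k, tj in enumerate(t))
            total += ways(nt, i + 1)
            n += 1
        return total

    return ways(target, 0)

def pA(L):
    """The double-sum formula for p_A from the text."""
    L1, L2, L3, L4 = (F(x) for x in L)
    lo1 = max(-(L1 + L2) / 2, L3 - (L1 - L2) / 2)
    hi1 = min((L1 + L2) / 2, L3 + (L1 - L2) / 2)
    lo2 = max(-(L1 + L2) / 2, L4 - (L1 - L2) / 2)
    hi2 = min((L1 + L2) / 2, L4 + (L1 - L2) / 2)
    total, v1 = 0, lo1
    while v1 <= hi1:
        v2 = lo2
        while v2 <= hi2:
            total += (1 + (L1 + L2) / 2 - max(abs(v1), abs(v2))) \
                   * (1 + (L1 - L2) / 2 - max(abs(L3 - v1), abs(L4 - v2)))
            v2 += 1
        v1 += 1
    return total

for L in product(range(3), range(-2, 3), range(-2, 3), range(-2, 3)):
    assert pA(L) == brute_pA(L)
    assert (pA(L) > 0) == all(abs(L[j]) <= L[0] for j in (1, 2, 3))
print("p_A formula and support condition verified")
```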
The only two Weyl group elements for which $p_{A}(\Lambda_{w}(\lambda,\mu))$ contributes to the multiplicity $m^{G,K}_{\lambda}(\mu)$, under the assumptions (\ref{eqn: bottom 1}) and $q(\lambda)\in P^{+}_{M}(\mu)$ are $e,s_{2}$. In this case $m^{G,K}_{\lambda}(\mu)=1$. Also, $m^{G,K}_{\lambda-\lambda_{{\mathrm{sph}}}}(\mu)=0$ under the same conditions, as there are no Weyl group elements for which $p_{A}(\Lambda_{w}(\lambda-\lambda_{{\mathrm{sph}}},\mu))$ is non-zero.
\paragraph{The case $\mu=\mu_{3}\omega_{3}$ and $\mu=\mu_{4}\omega_{4}$.}
Let $\mu=\mu_{3}\omega_{3}$ or $\mu=\mu_{4}\omega_{4}$ and $\lambda\in B(\mu)$ and consider $\Lambda_{w}(\lambda,\mu)=w(\lambda+\rho)-(\mu+\rho)$ for $w\in W_{G}$. Using Mathematica to check the inequalities (\ref{ineq: A standard 1},\ref{ineq: A standard 2},\ref{ineq: A standard 3}) under the condition (\ref{eqn: bottom 2}) or (\ref{eqn: bottom 3}) we find 16 Weyl group elements for which $\Lambda_{w}(\lambda,\mu)$ is possibly in the support of $p_{A}$. However, the resulting expressions for these $\Lambda_{w}(\lambda,\mu)$ are unwieldy for explicit calculations.
Instead we pass to another Weyl chamber for ${\mathrm F}_{4}$ while remaining in the same Weyl chamber for $\mathrm{Spin}(9)$. The Weyl chamber that we choose contains $\omega_{3}$ and $\omega_{4}$. The element $\widetilde{w}=s_{2}s_{1}\in W$ translates the standard Weyl chamber to the one we are looking for. The set of positive roots corresponding to the system of simple roots $\widetilde{w}\Pi_{G}$ is $R^{+}_{K}\cup B$, where
\begin{multline*}B=\left\{
\frac{1}{2}(-\epsilon_{1}+\epsilon_{2}+\epsilon_{3}\pm\epsilon_{4}),
\frac{1}{2}(\epsilon_{1}-\epsilon_{2}+\epsilon_{3}\pm\epsilon_{4}),\right.\\
\left.\frac{1}{2}(\epsilon_{1}+\epsilon_{2}-\epsilon_{3}\pm\epsilon_{4}),
\frac{1}{2}(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}\pm\epsilon_{4})\right\}
\end{multline*}
is the new set of positive roots of $G$ that are not roots of $K$. The Kostant multiplicity formula reads
$$m^{G,K}_{\lambda}(\mu)=\sum_{w\in W_{G}}\det(w)\,p_{B}\left(w(\widetilde{w}\lambda+\widetilde{\rho})-(\mu+\widetilde{\rho})\right),$$
where $\widetilde{\rho}=\frac{1}{2}(9\epsilon_{1}+7\epsilon_{2}+5\epsilon_{3}+\epsilon_{4})$ is the Weyl vector for the new system of positive roots.
Our aim is to calculate the partition $p_{B}(\Lambda)$ for $\Lambda=(\Lambda_{1},\Lambda_{2},\Lambda_{3},\Lambda_{4})\in P$. To begin with we focus on the first three coordinates. Let $\pi:P\to\mathbb{Z}^{3}\cup((\frac{1}{2},\frac{1}{2},\frac{1}{2})+\mathbb{Z}^{3})$ denote the projection on the first three coordinates. Let $C=\{c_{1},c_{2},c_{3},c_{4}\}$ with
\begin{multline*}
c_{1}=\frac{1}{2}(-\epsilon_{1}+\epsilon_{2}+\epsilon_{3}),
c_{2}=\frac{1}{2}(\epsilon_{1}-\epsilon_{2}+\epsilon_{3}),\\
c_{3}=\frac{1}{2}(\epsilon_{1}+\epsilon_{2}-\epsilon_{3}),
c_{4}=\frac{1}{2}(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}).
\end{multline*}
The number of linear combinations $\pi(\Lambda)=n_{1}c_{1}+n_{2}c_{2}+n_{3}c_{3}+n_{4}c_{4}$ with $n_{i}\in\mathbb{N}$ is non-zero if and only if
\begin{eqnarray}
0 & \le & \Lambda_{1}+\Lambda_{2},\label{ineq: B 1}\\
0 & \le & \Lambda_{1}+\Lambda_{3},\label{ineq: B 2}\\
0 & \le & \Lambda_{2}+\Lambda_{3}.\label{ineq: B 3}
\end{eqnarray}
We assume $\Lambda_{1}\ge\Lambda_{2}\ge\Lambda_{3}$. We have
\begin{eqnarray*}
(\Lambda_{1},\Lambda_{2},\Lambda_{3})&=&(\Lambda_{1}-\Lambda_{2})c_{2}+(\Lambda_{1}-\Lambda_{3})c_{3}+(\Lambda_{2}+\Lambda_{3})c_{4}\\
&=&c_{1}+(\Lambda_{1}-\Lambda_{2}+1)c_{2}+(\Lambda_{1}-\Lambda_{3}+1)c_{3}+(\Lambda_{2}+\Lambda_{3}-1)c_{4}\\
&\vdots&\\
&=&(\Lambda_{2}+\Lambda_{3})c_{1}+(\Lambda_{1}+\Lambda_{3})c_{2}+(\Lambda_{1}+\Lambda_{2})c_{3},
\end{eqnarray*}
from which we see that there are $\Lambda_{2}+\Lambda_{3}+1$ ways to write $(\Lambda_{1},\Lambda_{2},\Lambda_{3})$ as a linear combination of elements in $C$ with coefficients in $\mathbb{N}$. Every such combination uses a unique number of vectors: $2\Lambda_{1}+2r$, where $r=0,\ldots,\Lambda_{2}+\Lambda_{3}$.
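A brute-force enumeration confirms both the count $\Lambda_{2}+\Lambda_{3}+1$ and the fact that the representations use $2\Lambda_{1}+2r$ vectors for $r=0,\ldots,\Lambda_{2}+\Lambda_{3}$. The sketch below is a verification aid, not part of the argument:

```python
from itertools import product

# doubled coordinates: 2c_1, 2c_2, 2c_3, 2c_4
C2 = [(-1, 1, 1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]

def reps(L1, L2, L3):
    """All (n1, n2, n3, n4) in N^4 with n1*c1 + ... + n4*c4 = (L1, L2, L3)."""
    # each c_i has (doubled) coordinate sum >= 1, so n1+n2+n3+n4 <= 2(L1+L2+L3)
    bound = 2 * (L1 + L2 + L3)
    sols = []
    for ns in product(range(bound + 1), repeat=4):
        v = tuple(sum(n * c[k] for n, c in zip(ns, C2)) for k in range(3))
        if v == (2 * L1, 2 * L2, 2 * L3):
            sols.append(ns)
    return sols

# check the count L2+L3+1 and the vector-usage numbers 2*L1+2r for sorted triples
for (L1, L2, L3) in [(1, 1, 0), (2, 1, 1), (3, 2, 1), (2, 2, 2)]:
    sols = reps(L1, L2, L3)
    assert len(sols) == L2 + L3 + 1
    assert sorted(sum(ns) for ns in sols) == [2 * L1 + 2 * r for r in range(L2 + L3 + 1)]
print("representation count over C verified")
```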
Let $b_{i,\pm}=c_{i}\pm\frac{1}{2}\epsilon_{4}$ denote the elements in $B$ that project onto $c_{i}\in C$. Let $\Lambda=\sum s_{i,\pm}b_{i,\pm}$ be a positive integral linear combination of elements in $B$ and define $m_{i}=s_{i,+}+s_{i,-}$. Then $\pi(\Lambda)=\sum m_{i}c_{i}$ is a linear combination of elements in $C$ with coefficients in $\mathbb{N}$ and hence there is an $r\in\{0,\ldots,\Lambda_{2}+\Lambda_{3}\}$ such that $m_{1}=r,m_{2}=\Lambda_{1}-\Lambda_{2}+r,m_{3}=\Lambda_{1}-\Lambda_{3}+r$ and $m_{4}=\Lambda_{2}+\Lambda_{3}-r$. We find that $\sum_{i=1}^{4}s_{i,+}-\sum_{i=1}^{4}s_{i,-}=2\Lambda_{4}$ and $\sum_{i=1}^{4}s_{i,+}+\sum_{i=1}^{4}s_{i,-}=2\Lambda_{1}+2r$. It follows that the number of ways in which we can write $\Lambda$ as a linear combination of $2\Lambda_{1}+2r$ elements in $B$ with coefficients in $\mathbb{N}$ is equal to the number of tuples $(s_{1,+},s_{2,+},s_{3,+},s_{4,+})\in\mathbb{N}^{4}$ with $\sum_{i=1}^{4}s_{i,+}=\Lambda_{1}+\Lambda_{4}+r$ and $0\le s_{i,+}\le m_{i}$. This is the number of integral points in the intersection of the hyperrectangle $\{0\le s_{i,+}\le m_{i}\}$ and the affine hyperplane $\{s_{1,+}+s_{2,+}+s_{3,+}+s_{4,+}=\Lambda_{1}+\Lambda_{4}+r\}$ and we denote this quantity by $L((m_{1},m_{2},m_{3},m_{4}),\Lambda_{1}+\Lambda_{4}+r)$. Whenever
\begin{eqnarray}\label{ineq: B 4}
|\Lambda_{4}|\le \Lambda_{1}+\Lambda_{2}+\Lambda_{3},
\end{eqnarray}
$L((m_{1},m_{2},m_{3},m_{4}),\Lambda_{1}+\Lambda_{4}+r)>0$. Hence
$$p_{B}(\Lambda)=\sum_{r=0}^{\Lambda_{2}+\Lambda_{3}}L((r,\Lambda_{1}-\Lambda_{2}+r,\Lambda_{1}-\Lambda_{3}+r,\Lambda_{2}+\Lambda_{3}-r),\Lambda_{1}+\Lambda_{4}+r)$$
if $\Lambda_{1}\ge\Lambda_{2}\ge\Lambda_{3}$. The quantity $p_{B}(\Lambda)$ is positive if and only if the inequalities (\ref{ineq: B 1}),(\ref{ineq: B 2}),(\ref{ineq: B 3}) and (\ref{ineq: B 4}) hold. Note that these inequalities are invariant for permuting the first three coordinates of $\Lambda$.
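The formula for $p_{B}$ can be tested against a direct enumeration over the eight elements of $B$. In the sketch below (a verification aid of ours), the helper \texttt{L\_count} computes $L(m,k)$ by inclusion-exclusion over the upper bounds, a standard identity not taken from the text:

```python
from functools import lru_cache
from itertools import product
from math import comb

# doubled coordinates: 2c_1, ..., 2c_4 and the eight 2b_{i,+/-} = (2c_i, +-1)
C2 = [(-1, 1, 1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]
B2 = [c + (e,) for c in C2 for e in (1, -1)]

def L_count(m, k):
    """#{0 <= s_i <= m_i : s_1+s_2+s_3+s_4 = k}, by inclusion-exclusion."""
    total = 0
    for excl in product((0, 1), repeat=4):
        t = k - sum(e * (mi + 1) for e, mi in zip(excl, m))
        if t >= 0:
            total += (-1) ** sum(excl) * comb(t + 3, 3)
    return total

def p_B(L1, L2, L3, L4):
    """The stated formula, for integral Lambda with L1 >= L2 >= L3."""
    if L1 + L2 < 0 or L1 + L3 < 0 or L2 + L3 < 0:
        return 0
    return sum(L_count((r, L1 - L2 + r, L1 - L3 + r, L2 + L3 - r), L1 + L4 + r)
               for r in range(L2 + L3 + 1))

def brute_pB(L1, L2, L3, L4):
    """Direct count of nonnegative integral combinations of B summing to Lambda."""
    target = (2 * L1, 2 * L2, 2 * L3, 2 * L4)

    @lru_cache(maxsize=None)
    def ways(t, i):
        if i == len(B2):
            return 1 if t == (0, 0, 0, 0) else 0
        total, n = 0, 0
        while n <= t[0] + t[1] + t[2]:  # each b lowers t1+t2+t3 by at least 1
            nt = tuple(tj - n * B2[i][k] for k, tj in enumerate(t))
            total += ways(nt, i + 1)
            n += 1
        return total

    return ways(target, 0)

for L1, L2, L3 in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 1), (2, 2, 0)]:
    for L4 in range(-4, 5):
        assert p_B(L1, L2, L3, L4) == brute_pB(L1, L2, L3, L4)
print("p_B formula verified against direct enumeration")
```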
Let $\mu=\mu_{3}\omega_{3}$ or $\mu=\mu_{4}\omega_{4}$ and let $\lambda\in P^{+}_{G}$ satisfy $q(\lambda)\in P^{+}_{M}(\mu)$ and (\ref{eqn: bottom 2}) or (\ref{eqn: bottom 3}) respectively. Define $\Gamma_{w}(\lambda,\mu)=w(\widetilde{w}\lambda+\widetilde{\rho})-(\mu+\widetilde{\rho})$. For the elements $\Gamma_{w}(\lambda,\mu)$ and $\Gamma_{w}(\lambda-\lambda_{{\mathrm{sph}}},\mu)$ we check the inequalities (\ref{ineq: B 1}),(\ref{ineq: B 2}),(\ref{ineq: B 3}) and (\ref{ineq: B 4}). We get 12 Weyl group elements for which $p_{B}(\Gamma_{w}(\lambda,\mu))$ and $p_{B}(\Gamma_{w}(\lambda-\lambda_{{\mathrm{sph}}},\mu))$ are possibly non-zero. Moreover, the twelve elements are the same for $\mu=\mu_{3}\omega_{3}$ and $\mu=\mu_{4}\omega_{4}$ and we have listed them in Table \ref{table: Weyl group elements}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
$ w_{1} $ & $ w_{2} $ & $ w_{3} $ & $ w_{4} $ & $ w_{5} $ & $ w_{6} $ & $ w_{7} $ & $ w_{8} $ & $ w_{9} $ & $ w_{10} $ & $ w_{11} $ & $ w_{12}$ \\ \hline
$ e $ & $ s_{1} $ & $ s_{2} $ & $ s_{3} $ & $ s_{4} $ & $ s_{1}s_{2} $ & $ s_{1}s_{3} $ & $ s_{1}s_{4} $ & $ s_{2}s_{1} $ & $ s_{2}s_{3} $ & $ s_{2}s_{4} $ & $ s_{3} s_{2}$ \\ \hline
\end{tabular}
\caption{The Weyl group elements $w$ for which $\Gamma_{w}(\lambda,\mu)$ is possibly in the support of $p_{B}$.}\label{table: Weyl group elements}
\end{center}
\end{table}
\noindent Using the explicit description of $p_{B}$ one verifies the equalities $m^{G,K}_{\lambda}(\mu)=1$ and $m^{G,K}_{\lambda-\lambda_{{\mathrm{sph}}}}(\mu)=0$.
\end{proof}
\section{The differential equations}\label{DEs}
Our goal is to define a non-trivial commutative algebra of differential operators for the matrix valued orthogonal polynomials defined in Section \ref{intro}. Let $(G,K,F)$ be a multiplicity free system from Table \ref{table: mfs} and let $\mu\in F$. Let $\mathfrak{g}_{c},\mathfrak{k}_{c}$ denote the complexifications, let $U(\mathfrak{g}_{c})$ denote the universal enveloping algebra of $\mathfrak{g}_{c}$ and let $U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}$ denote the commutant of $\mathfrak{k}_{c}$ in $U(\mathfrak{g}_{c})$.
Let $\pi_{\mu}^{K}$ be an irreducible representation of $K$ in $V_{\mu}$ and let $\dot{\pi}^{K}_{\mu}$ denote the corresponding representation of $U(\mathfrak{k}_{c})$. Let $I(\mu)\subset U(\mathfrak{k}_{c})$ denote the kernel of $\dot{\pi}^{K}_{\mu}$ and consider the left ideal $U(\mathfrak{g}_{c})I(\mu)\subset U(\mathfrak{g}_{c})$. As in \cite[Ch.~9]{Dixmier} we define
$$\mathbb{D}(\mu)=U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}/(U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}\cap U(\mathfrak{g}_{c})I(\mu)),$$
which is an associative algebra. In fact, $\mathbb{D}(\mu)$ is commutative because it can be embedded, via an anti-homomorphism, into $U(\mathfrak{a}_{c})\otimes\mathrm{End}_{M}(V_{\mu})$ (see \cite[9.2.10]{Dixmier}), which is commutative by Proposition \ref{prop: reducing to M}. The irreducible representations of $\mathbb{D}(\mu)$ are in a 1--1 correspondence with the irreducible representations of $\mathfrak{g}_{c}$ that contain $\dot{\pi}^{K}_{\mu}$ upon restriction, see \cite[Thm.~9.2.12]{Dixmier}.
Let $D\in U(\mathfrak{g}_{c})$. The $\mu$-radial part $R(\mu,D)$ is a differential operator that satisfies
\begin{eqnarray}\label{def:radial part}
R(\mu,D)(\Phi|_{T})=D(\Phi)|_{T}
\end{eqnarray}
for all functions $\Phi:G\to\mathrm{End}(V_{\mu})$ satisfying (\ref{trafo rule}).
Following Casselman and Mili{\v{c}}i{\'c} \cite[Thm.~3.1]{CM1982} we find a homomorphism
$$R(\mu): U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}\to C(T)\otimes U(\mathfrak{t}_{c})\otimes\mathrm{End}(\mathrm{End}_{M}(V_{\mu}))$$
such that (\ref{def:radial part}) holds for all $D\in U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}$ and all $\Phi\in C^{\infty}(G,\mathrm{End}(V_{\mu}))$ satisfying (\ref{trafo rule}). For the two non-symmetric multiplicity free triples we have an Iwasawa-like decomposition $\mathfrak{g}_{c}=\mathfrak{k}_{c}\oplus\mathfrak{t}_{c}\oplus\mathfrak{n}^{+}$ and a map $\mathfrak{n}^{+}\to\mathfrak{k}_{c}$ onto the orthocomplement of $\mathfrak{m}_{c}$ in $\mathfrak{k}_{c}$. This map replaces $I+\theta$ in the symmetric case and is essential in the construction of $R(\mu)$, see \cite[Lem.~2.2]{CM1982}. The homomorphism $R(\mu)$ factors through the projection $U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}\to\mathbb{D}(\mu)$ and we obtain an injective algebra homomorphism that we denote by the same symbol, $R(\mu):\mathbb{D}(\mu)\to C(T)\otimes U(\mathfrak{t}_{c})\otimes\mathrm{End}(\mathrm{End}_{M}(V_{\mu}))$. We identify $\mathrm{End}_{M}(V_{\mu})=\mathbb{C}^{N_{\mu}}$ by Schur's Lemma with $N_{\mu}$ the cardinality of the bottom $B(\mu)$ and we write $\mathbb{M}^{\mu}=\mathrm{End}(\mathbb{C}^{N_{\mu}})$. The elementary spherical functions $\Phi^{\mu}_{\lambda}$ are simultaneous eigenfunctions for the algebra $\mathbb{D}(\mu)$. The differential operators $R(\mu,D)$ become differential operators for the functions $\Psi_{n}^{\mu}:T\to\mathbb{M}^{\mu}$ and, according to the construction, the functions $\Psi_{n}^{\mu}$ are simultaneous eigenfunctions for the operators $R(\mu,D)$ with $D\in\mathbb{D}(\mu)$. The eigenvalues are diagonal matrices $\Lambda_{n}(D)\in\mathbb{M}^{\mu}$ acting on the right, i.e.~we have $R(\mu,D)\Psi^{\mu}_{n}=\Psi^{\mu}_{n}\Lambda_{n}(D)$.
In the forthcoming paper \cite{van Pruijssen Roman} it is shown that the function $\Psi^{\mu}_{0}:T\to\mathbb{M}^{\mu}$ is pointwise invertible on $T_{\mathrm{reg}}$, the open subset of $T$ on which the restriction of the minimal spherical function, $\phi|_{T}$, is regular. The proof relies on the bispectral property that is present for the family of matrix valued functions $\{\Psi^{\mu}_{n}:n\in\mathbb{N}\}$. More precisely, the interplay between the differential operators and the three-term recurrence relation implies that the function $\Psi^{\mu}_{0}$ satisfies an ODE whose coefficients are regular on $T_{\mathrm{reg}}$. If we conjugate $R(\mu,D)$ with $\Psi^{\mu}_{0}$ and perform the change of variables $x=c\phi(t)+(1-c)$, such that $x$ runs in $[-1,1]$, then we obtain a differential operator acting on the space of matrix valued orthogonal polynomials $\mathbb{M}^{\mu}[x]$. The algebra of differential operators that is obtained in this way is denoted by $\mathbb{D}^{\mu}$. The family of matrix valued orthogonal polynomials $(P^{\mu}_{n}(x);n\in\mathbb{N})$ that we obtain from the functions $(\Psi^{\mu}_{n};n\in\mathbb{N})$ is a family of simultaneous eigenfunctions for the algebra $\mathbb{D}^{\mu}$. The algebra of differential operators $\mathbb{M}^{\mu}[x,\partial_{x}]$ acts on $\mathbb{M}^{\mu}[x]$, where the matrices act by left multiplication. Note that $\mathbb{D}^{\mu}\subset\mathbb{M}^{\mu}[x,\partial_{x}]$.
The description of the map $R(\mu)$ in \cite{CM1982} allows one to calculate explicitly the radial part of the (order two) Casimir operator $\Omega\in U(\mathfrak{g}_{c})^{\mathfrak{k}_{c}}$. An explicit expression can be found in \cite[Prop.~9.1.2.11]{Warner} for the case where $(G,K)$ is symmetric. The image of $\Omega$ in the algebra $\mathbb{D}^{\mu}$ is denoted by $\Omega^{\mu}$ and is of order two. Its eigenvalues can be calculated explicitly in terms of highest weights and they are real, which implies that $\Omega^{\mu}$ is symmetric with respect to the matrix valued inner product $\langle\cdot,\cdot\rangle_{W^{\mu}}$. These are examples of matrix valued hypergeometric differential operators \cite{Tirao}.
\section{Conclusions}
Several questions remain. We have shown the existence of families of matrix valued orthogonal polynomials, together with a commutative algebra of differential operators for which the polynomials are simultaneous eigenfunctions, mainly by working out the branching rules. The key result is that the bottom of the $\mu$-well is well behaved with respect to the weights of the fundamental spherical representation, so that the degree function has the right properties. It would be interesting to see whether one can draw the same conclusions by investigating the differential equations for the matrix valued orthogonal polynomials. This would require more precise knowledge of the algebra $\mathbb{D}(\mu)$.
On the other hand, it would be interesting to investigate whether the good properties of the degree function follow from convexity arguments that come about if we formulate matters concerning the representation theory, such as induction and restriction, in terms of symplectic or algebraic geometry. For example, in this light, it is interesting to learn more about the (spherical) spaces $G_{c}/Q$ and their $G_{c}$-equivariant line bundles, where $Q\subset K_{c}$ is the parabolic subgroup associated to $F$, for a multiplicity free system $(G,K,F)$.
The existence of multiplicity free systems $(G,K,F)$ with $(G,K)$ a Gelfand pair of rank $>1$ raises the question of whether the spectra of the induced representations have a structure similar to that of the rank one case. If the answer is affirmative, we expect that we can associate families of matrix valued orthogonal polynomials in several variables to these spectra, together with commutative algebras of differential operators that have these polynomials as simultaneous eigenfunctions. For the examples $(\mathrm{Spin}(9),\mathrm{Spin}(7),\mathbb{N}_{\omega_{1}})$ and $(\mathrm{SU}(n+1)\times\mathrm{SU}(n+1),\mathrm{diag}(\mathrm{SU}(n+1)),F)$, where $F=\omega_{1}\mathbb{N}$ or $F=\omega_{n}\mathbb{N}$, this seems to be the case. In general the branching rules will not be of great help in understanding the bottom of the $\mu$-well, as they soon become too complicated in the higher rank situations.
Q: How to select the counter value in cassandra How exactly in CQL (or more importantly, in the datastax java driver) can I query the value of the counter, like 'WHERE counter > 5'?
Thanks,
A: http://cassandra.apache.org/doc/cql3/CQL.html#counters
So you can only query by key, not by the counter value.
A: You cannot do it with counters as Mikhail posted. I also wanted to add a couple of extra points. First, this is fundamentally unsupported for architectural reasons, and not because "CQL doesn't support it yet" or some other superficial reason. And secondly, if you want to query by statistics (including counts), you need the following.
* A table that will allow querying by counter value. If you want range queries (<, > or equivalents) consider a schema where the count is the first clustering column (i.e.: primary key(some_group, counter_value, {counted object id})). This will internally sort rows by counter value and thus give you fast lookups for queries like you specified. You'll have to put some thought into what your partition key (some_group) will be. There is a tradeoff between being able to partition the data across nodes and ease of querying globally.
* A way to update the sorted table that is scalable. Two options here based on how fast data comes in:
a) if updates are really infrequent, just update the sorted table in real time. So when a counter is incremented, delete the old row and insert a new one. This gives you immediate updates, but writes will be quite expensive, so this won't scale if many rows are always changing.
b) if writes are frequent OR if you can live with slightly stale data for your query-by-counter scenarios, consider running a periodic task that will read the latest counts and update the sorted table as appropriate. If you have "big data" volumes going on, running this may have to be done in a Hadoop environment.
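For illustration, here is a minimal CQL sketch of the first option (table and column names are made up; adapt them to your model). Note that a counter column cannot be part of a primary key, which is exactly why the count has to be copied into a regular bigint column:

```sql
-- Lookup table: rows within each partition are sorted by count.
CREATE TABLE scores_by_count (
    some_group    text,
    counter_value bigint,
    object_id     uuid,
    PRIMARY KEY (some_group, counter_value, object_id)
) WITH CLUSTERING ORDER BY (counter_value DESC, object_id ASC);

-- Range query over counts, restricted to one partition:
SELECT object_id, counter_value
FROM scores_by_count
WHERE some_group = 'g1' AND counter_value > 5;
```

Since clustering columns cannot be updated in place, "updating" a count here means deleting the old row and inserting a new one, as described in (a).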
The HMB 2 (Henschel-Magnetbahn 2) is an electromagnetically levitated test vehicle. It was the passenger-carrying successor to the HMB 1 test platform and was developed in 1976 by Thyssen-Henschel in cooperation with TU Braunschweig in Germany. As with its predecessor, the purpose of the vehicle was to test the long-stator linear motor on a 60-meter test track.
Significance
The trials with the HMB 2 were very promising, and the vehicle went on to play a decisive role. On the one hand, it was the prototype for the M-Bahn later built in Berlin. On the other, the results prompted the German Federal Ministry of Research to discontinue development based on the short-stator principle in favor of the long-stator one. Krauss-Maffei, which had until then developed the short-stator Transrapid 04, subsequently founded the consortium "Magnetbahn Transrapid" together with Thyssen-Henschel and Messerschmitt-Bölkow-Blohm. As a result, a new vehicle with a long-stator linear motor, the Transrapid 05, was already demonstrated in passenger service at the IVA 1979. This also makes the HMB 2 the technical ancestor of all Transrapid vehicles developed since.
Whereabouts
Since the 2010s, the HMB 2 has been on display in the Technikmuseum Kassel on a short section of its guideway.
Literature
Stefan H. Hedrich: Transrapid. Die Magnetschwebebahn in der politischen "Warteschleife". Eisenbahn-Kurier, Freiburg 2003, ISBN 3-88255-148-8
External links
HMB 2: Großvater des Transrapid. Video report by HNA-Online, 2009.
<!--
Copyright 2019 The Chromium Authors
Use of this source code is governed by a BSD-style license that can be
found in the LICENSE file.
-->
<shape
xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle">
<corners
android:radius="4dp" />
<solid
android:color="@color/default_text_color_accent1_tint_list" />
</shape>
\section{Introduction}
It is widely known that quantum electrodynamics in $2+1$ dimensions (QED$_3$) has important applications in condensed matter physics. The predominant example is the quantum Hall effect (QHE), where a pure topological Chern-Simons (CS) term \cite{hall} is commonly added to model the response of the quantum Hall ground state to low energy perturbations as an effective theory~\cite{hall1}, but it can also be used to study the behavior of ultracold matter in optical lattices~\cite{cond}. Nevertheless, this theory also has outstanding properties from the theoretical point of view.
What is special about $2+1$ spacetime dimensions? Let us consider first the theory in the absence of fermions. By naive dimensional analysis we note a big difference with respect to the $3+1$ case. The vector potential $A_{\mu}$ has dimension 1 (in units of mass) in any $d$-dimensional spacetime. As a consequence, if we write the Lagrangian in the form
\begin{equation}
\mathcal{L}_{QED_d} = -\frac{1}{4e^2}F_{\mu\nu}F^{\mu\nu} + A_{\mu}J^{\mu},
\label{qedaction}
\end{equation}
then we realize that the coupling constant $e^2$ is dimensionless in $3+1$ but dimensionful in any other dimension $d \ne 3+1$. In particular, in $d = 2+1$, the effective dimensionless coupling would be $e'^2 = e^2/E$, where $E$ is the energy scale. In the ultraviolet (UV) regime, $E$ tends to infinity and the coupling $e'$ goes to zero, implying that the theory is superrenormalizable and always asymptotically free, i.e., this theory describes \textit{free photons in the UV}. In this sense, the UV does not matter at all. On the other hand, the theory is always \textit{strongly coupled in the infrared} (IR) because $e' \rightarrow \infty$ as $E\rightarrow 0$. Consequently, the IR limit of the theory becomes a playground for developing ideas to tackle more realistic problems such as confinement in quantum chromodynamics (QCD) \cite{herbut,grignani1,grignani2} or gapped boundary phases in topological insulators (TI) \cite{seibergwitten}.
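The dimension counting can be made explicit. Demanding that the Lagrangian density have mass dimension $d$, with $[A_{\mu}]=1$ and hence $[F_{\mu\nu}F^{\mu\nu}]=4$, gives
\begin{equation*}
[e^{-2}] + 4 = d \;\Longrightarrow\; [e^2] = 4 - d,
\end{equation*}
which vanishes only in $d=3+1$. In $d=2+1$ one finds $[e^2]=1$, so $e^2$ sets a mass scale and $e'^2 = e^2/E$ is the natural dimensionless coupling.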
Another interesting property of the theory in this dimensionality is related to the existence of magnetic monopoles. Whenever we have a $U(1)$ gauge field we have a new current
\begin{equation}
\mathcal{J}^{\mu} \propto \epsilon^{\mu\nu\rho}F_{\nu\rho},
\label{TopCurr}
\end{equation}
which is identically conserved without imposing the equations of motion, i.e., it is not a Noether current. Its conservation is equivalent to the Bianchi identity $dF = 0$, where $F$ is the two-form field strength. This follows simply from the symmetry of partial derivatives, which contribute zero when contracted with a Levi-Civita symbol if $A_\mu$ is globally well-defined. A natural question is: what is charged under the charge
\begin{equation}
\mathcal{Q} = \int d^2x \mathcal{J}^0?
\label{TopCharg}
\end{equation}
Substituting (\ref{TopCurr}) into (\ref{TopCharg}) shows that $\mathcal{Q}$ equals a magnetic flux, from which we conclude that \textit{magnetic monopoles are charged under} $\mathcal{Q}$. This charge is known as the vortex charge because Abrikosov-Nielsen-Olesen (ANO) vortices carry it when the theory is put in the Higgs phase \cite{borokhov}.
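To make this explicit, fix the proportionality constant in (\ref{TopCurr}) to the common normalization $\mathcal{J}^{\mu} = \frac{1}{2\pi}\epsilon^{\mu\nu\rho}\partial_{\nu}A_{\rho}$ (an illustrative choice; the text above leaves it free). Then
\begin{equation*}
\mathcal{Q} = \frac{1}{2\pi}\int d^2x \, \epsilon^{0ij}\partial_{i}A_{j} = \frac{1}{2\pi}\int d^2x \, F_{12} = \frac{\Phi_B}{2\pi},
\end{equation*}
i.e., $\mathcal{Q}$ counts the magnetic flux $\Phi_B$ in units of $2\pi$.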
Moreover, \textit{the vector potential} $A_{\mu}$ \textit{can be dualized to a free scalar} $\sigma$ \textit{in the UV}, known as the dual photon field. The construction of the dual theory is carried out analogously to the electric-magnetic duality of Maxwell theory in $3+1$ spacetime dimensions, namely,
\begin{equation}
Z = \int \mathcal{D}A_{\mu} \exp\left(-\int_x\frac{F^2}{4e^2}\right) \rightarrow \int \mathcal{D}\sigma\mathcal{D}F_{\mu\nu} \exp\left[\int_x \left(-\frac{F^2}{4e^2} + \frac{i}{4\pi} \sigma \epsilon^{\mu\nu\rho}\partial_{\mu}F_{\nu\rho}\right)\right],
\end{equation}
where the dual photon $\sigma$ has been introduced as a Lagrange constraint in order to be able to treat the field strength as the integration variable \cite{polchinski}. After integrating out the field strength through its equation of motion we obtain
\begin{equation}
Z_{\text{dual}} = \int D\sigma \exp\left( -\int_x \frac{e^2}{8\pi^2}(\partial\sigma)^2 \right).
\label{DualPhotonPathIntegral}
\end{equation}
It can be shown straightforwardly that the conserved Noether current of this dual theory under the shift symmetry $\sigma \rightarrow \sigma + \text{const}$, coincides with the current (\ref{TopCurr}). This in turn implies that $F_{\alpha\beta} \propto \epsilon_{\alpha\beta\mu}\partial^{\mu}\sigma$. Consequently, $\partial^{\alpha}F_{\alpha\beta} = 0$ and the theory describes free photons in accordance with our previous discussion of the UV.
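As a consistency check (a sketch with the normalization of \eqref{DualPhotonPathIntegral}), the Noether current of the shift symmetry is
\begin{equation*}
\mathcal{J}^{\mu}_{\text{shift}} = \frac{\partial}{\partial(\partial_{\mu}\sigma)}\left[\frac{e^2}{8\pi^2}(\partial\sigma)^2\right] = \frac{e^2}{4\pi^2}\,\partial^{\mu}\sigma \;\propto\; \epsilon^{\mu\nu\rho}F_{\nu\rho},
\end{equation*}
where the last step uses $F_{\alpha\beta} \propto \epsilon_{\alpha\beta\mu}\partial^{\mu}\sigma$; its conservation is precisely the free equation of motion $\Box\sigma = 0$.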
We can also add to the action (\ref{qedaction}) Chern-Simons (CS) or topological terms. Although these terms do not describe any dynamics and carry zero degrees of freedom, they can affect the degeneracy of the ground state of the theory, with interesting consequences \cite{chen}. When added, the theory is known as Maxwell-Chern-Simons (MCS) theory and it is gapped, i.e., \textit{the photon is massive}. After having understood that the theory is strongly coupled in the IR, we can, effectively, drop the Maxwell term and conclude that \textit{the theory is a topological quantum field theory} (TQFT) \textit{in the IR limit} \cite{dunne}.
Now, interesting things start happening when matter (either fermions, bosons or both) is taken into consideration. In the above-mentioned effective description of the theory in the IR, we can write
\begin{equation}
\mathcal{L} = \mathcal{L}_{\text{CS}} + \mathcal{L}_{\text{Fermion}} \quad \text{or} \quad \mathcal{L} = \mathcal{L}_{\text{CS}} + \mathcal{L}_{\text{Scalar}},
\end{equation}
because the Maxwell term disappears. Obviously, \textit{all dynamics arise from matter}. However, they are no longer TQFTs but \textit{believed to be}\footnote{In fact, computations of the IR properties of the theory using Schwinger-Dyson equations suggest that they might not be non-trivial CFTs \cite{pisarski}. However, it is common to argue that this kind of conclusion is based on truncation methods. A similar argument could be raised against the functional renormalization group (FRG) method \cite{gies}.} non-trivial conformal field theories (CFT) when their masses are tuned to zero at an IR fixed point. If this were true, there would be a possibility of studying topology-changing phase transitions by relevant deformations, e.g., mass deformations, between TQFTs,
$$TQFT_1 \xleftarrow{\text{Relev. Deform.}} CFT \xrightarrow{\text{Relev. Deform.}} TQFT_2.$$
Yet, there is no free lunch. There is a subtlety with massive fermions in $2+1$ dimensions. Their path integral description exhibits the \textit{parity anomaly}, that is, parity is a symmetry at the classical level but not at the quantum level \cite{witten}. Among the several ways this anomaly can arise, one can understand it through the $1$-loop term in the low energy approximation of the Euclidean path integral \cite{alvarez,niemi,redlich}
\begin{equation}
-\frac{1}{2} \text{Tr} \left( \frac{1}{i\gamma_{\mu}\partial_{\mu} - m_e}\gamma_{\nu}A_{\nu} \frac{1}{i\gamma_{\mu}\partial_{\mu} - m_e}\gamma_{\delta}A_{\delta}\right) = \frac{1}{2}\int \frac{d^3p}{(2\pi)^3} A_{\mu}(p)\Gamma_{\mu\nu}(p,m_e)A_{\nu}(p),
\end{equation}
with
\begin{equation}
\Gamma_{\mu\nu}(p,m_e) = -\int \frac{d^3k}{(2\pi)^3} \frac{\text{Tr}\left[\gamma_{\mu}(\gamma_{\rho}(p_{\rho}+k_{\rho})+m_e)\gamma_{\nu}(-\gamma_{\delta}k_{\delta}-m_e)\right]}{\left[(p+k)^2+m_e^2\right](k^2+m_e^2)}, \quad p \ll m_e.
\end{equation}
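In the limit $p \ll m_e$, evaluating the trace and the loop integral yields (up to regularization conventions; see \cite{redlich}) the parity-odd piece
\begin{equation*}
\Gamma_{\mu\nu}(p,m_e)\big|_{\text{odd}} = \frac{1}{4\pi}\frac{m_e}{|m_e|}\,\epsilon_{\mu\nu\rho}\,p^{\rho} + \mathcal{O}\!\left(p^2/|m_e|\right),
\end{equation*}
which, once inserted in the quadratic effective action, is precisely the origin of the induced Chern-Simons term at zero temperature.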
At zero temperature, the contribution of the anomaly to the effective action (in Minkowski signature) is of the form
\begin{equation}
S_{\text{eff}}[A,m_e]^{(T=0)} = \cdots + \frac{1}{2} \frac{1}{4\pi} \frac{m_e}{|m_e|} \int d^3x \ \epsilon^{\mu \nu \beta}A_\mu \partial_\nu A_\beta + \cdots,
\end{equation}
whereas at finite temperature, after imposing the anti-periodic conditions of Dirac fermions $\psi(0,\boldsymbol{x}) = - \psi(\beta,\boldsymbol{x})$, we obtain
\begin{equation}
S_{\text{eff}}[A,m_e]^{(T\ne0)} = \cdots + \frac{1}{2} \frac{1}{4\pi} \frac{m_e}{|m_e|} \tanh\left(\frac{|m_e|}{T}\right)\int_0^{1/T} dt\int d^2x \ \epsilon^{\mu\nu\rho}A_{\mu}\partial_{\nu}A_{\rho} + \cdots.
\end{equation}
Clearly, CS terms have arisen and the breaking of parity depends on the sign of $m_e$.
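Note also that the finite-temperature coefficient interpolates smoothly between the two regimes:
\begin{equation*}
\tanh\!\left(\frac{|m_e|}{T}\right) \xrightarrow[T\to 0]{} 1, \qquad \tanh\!\left(\frac{|m_e|}{T}\right) \simeq \frac{|m_e|}{T} \xrightarrow[T\to\infty]{} 0,
\end{equation*}
so the zero-temperature result is recovered as $T\to 0$, while the induced parity-breaking term is washed out at high temperature.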
A detailed study of all the above-mentioned points and additional exact results in lattice models, e.g., weak duality \cite{kramers,peskin}, have led to the conjecture of the existence of a web of dualities in $2+1$ spacetime dimensions, with possible connections to the realization of 3D bosonization \cite{webduality1,webduality2}. Hence, this theory deserves to be investigated further.
This work investigates some properties of QED$_3$ within the covariant operator formalism of quantum field theory. We call it the Kugo-Ojima-Nakanishi (KON) formalism \cite{kugoojima,Nakanishi,Nak1,Laut}. Firstly, we want to address the problem of a dynamical mass generation for the photon arising from the interaction with generic charged particles, that is, either bosons or fermions in a given specific representation. In fact, this phenomenon is expected to happen since in this dimension the appearance of a mass term is in accordance with the local symmetries of the theory if one considers a discrete symmetry breaking scenario, e.g., parity anomaly in the presence of massive fermions.
The standard way to gap the photon is to consider the MCS theory from the outset, in other words, by ``adding by hand'' a bare topological mass term. However, we argue that this procedure is not necessary and that QED$_3$ per se provides these terms \textit{dynamically}. From this perspective, the conventional low energy field description of the quantum Hall effect \cite{mar, tong} would arise naturally from bidimensional electrons interacting with initially massless photons. The interaction changes the dispersion relation of the photon, and the electromagnetic correlations come to fall off faster through the material medium.
This property of QED$_3$ was intensively studied within perturbation theory (PT). In this framework, however, it was uncertain whether a renormalized mass of the photon actually existed. If the Pauli-Villars regularization method was used, the photon could either acquire an effective mass or remain massless, which are two mutually contradictory results. In fact, this kind of problem appears in conventional perturbation theory when regularization techniques are wrongly applied. Indeed, in \cite{Pim2} it was shown that if Pauli-Villars regularization is correctly applied, no problem arises and the photon becomes massive. The controversy was finally completely settled (of course, only in PT) by using causal perturbation theory \cite{Pim1}, where by construction no regularization is needed.
This paper is organized as follows. In section 2 we consider, following the ideas of \cite{Nakanishi}, the ``pre-Maxwell-Chern-Simons model'' to derive the non-perturbative two-point function of the gauge field, a ``massive'' combination of field operators, an asymptotic constraint for the matter currents, and a general condition for the existence of a renormalized mass of the photon with arbitrary matter currents. In section 3 we compare our result with the one obtained from PT for the particular case of fermionic matter in the bidimensional representation. Finally, in section 4, the asymptotic structure is constructed, revealing that our ``massive'' combination has indeed a dynamically generated massive character. The conclusions and the outlook are presented in section 5. The metric signature $+--$ is used throughout.
\section{Effective mass of the photon in 2+1 dimensions}
Let us start with the following Lagrangian density within the KON formalism
\begin{equation}
\mathcal{L} = -\frac{1}{4}F_{\mu \nu}F^{\mu \nu} + \frac{m}{4}\epsilon^{\mu \nu \rho}F_{\mu\nu}A_{\rho} + B\partial^\mu A_\mu + \frac{1}{2}\alpha B^2 + J^{\mu}A_{\mu} + \mathcal{L}_M.
\label{eq:LagrangianGeneralCurrentPMCS}
\end{equation}
In the above expression, $\mathcal{L}_M$ is a generic matter Lagrangian density, $J^{\mu}$ is an \textit{arbitrary} $2+1$ dimensional matter current that breaks parity and $B$ is an auxiliary field that keeps track of the gauge fixing condition via the gauge parameter $\alpha$. Needless to say, $\mathcal{L}$ is invariant under the gauge transformations
\begin{equation}
\delta A_\mu(x) = \partial_\mu \Lambda(x), \qquad \Box \Lambda = 0, \qquad \delta B(x) = 0,
\end{equation}
wherein $\Lambda$ is a c-number.
We are interested in the behavior of this theory in the limit $m\rightarrow 0$. In $3+1$ spacetime dimensions without spontaneous symmetry breaking (SSB), the renormalized mass of the photon is constrained to vanish as the bare mass goes to zero. This follows from Johnson's theorem \cite{jon}. We want to follow this line of thought in $2+1$ dimensions in order to show that in the limit $m\rightarrow 0$ a renormalized mass $m_r$ for the photon exists. It is the aim of this paper to derive a general mathematical expression for this statement (cf. (\ref{eq:MassRenormalization})).
The Heisenberg equations of motion read
\begin{align}
\partial^{\mu}A_{\mu} + \alpha B &= 0 \label{eq:GaugeCondition}\\
\partial_{\mu} F^{\mu\nu} + m\epsilon^{\nu\mu\beta}\partial_{\mu}A_{\beta} - \partial^{\nu} B &= -J^{\nu} \label{eq:EquationForCurrent}\\
\partial_{\mu}J^{\mu} = 0. \label{eq:CurrentConserv}
\end{align}
Applying $\partial_{\nu}$ to (\ref{eq:EquationForCurrent}) and using (\ref{eq:CurrentConserv})
we determine the equation of motion for the $B$-field
\begin{equation}
\Box B = 0.
\label{eq:B-Field}
\end{equation}
Hence, as usual in the case of an Abelian theory, the subsidiary condition necessary to identify the physical space $\mathfrak{F}_{\text{phys}}$ is given by
\begin{equation}
B^+(x) | \text{phys} \rangle = 0, \quad \forall | \text{phys} \rangle \in \mathfrak{F}_{\text{phys}}.
\end{equation}
In order to give a non-perturbative description of the dynamical mass generation phenomenon, let us first determine the vacuum expectation values of the commutation relations of the Heisenberg fields $A_{\mu}$. Equal-time commutation relations, quantum equations of motion and symmetries are all we need. Although an exact answer for them in the presence of interactions is almost impossible to obtain, the spectral representation method helps us extract valuable information. In particular, it guides the construction of the asymptotic fields of the theory which represent the in/out Fock spaces $\mathfrak{F}$\footnote{We will write $\mathfrak{F}$ for both spaces under the assumption of asymptotic completeness, i.e., no bound states will emerge in the asymptotic region.}.
Since the matter current is gauge invariant it has vanishing projection with the auxiliary $B$-field, that is, $\left[ J^\mu(x), B(y) \right]=0$ or $J^{\mu}(x)|0\rangle \in \mathfrak{F}_{\text{phys}}$. From this, together with the sourced equations of motion and the zero norm character of $B(x)$, we find that (see Appendix)
\begin{equation}
\bigg( \Box^x\eta^{\alpha\nu} + m\epsilon^{\alpha\mu\nu}\partial_{\mu}^x \bigg) \bigg( \Box^y\eta^{\beta\sigma} + m\epsilon^{\beta\mu\sigma}\partial_{\mu}^y \bigg) \big[ A_{\nu}(x), A_{\sigma}(y) \big] = \left[ J^{\alpha}(x), J^{\beta}(y) \right].
\label{RelationBetweenSpectralFunctions}
\end{equation}
This result means that the spectral function of the full two-point function of the gauge field is related to the corresponding spectral function of the arbitrary matter current. In particular, the asymptotic structure of the latter imposes constraints on the former. A useful constraint can be derived by applying a trick based on reference \cite{des}.
Considering a renormalized mass $m_r$, we can find an asymptotic parity-breaking condition for the current and a purely massive physical discrete-pole excitation by means of the expression
\begin{equation}
\Big(\Box+m^2_r \Big)\mathcal{U}^{\mu}=\Big( m_rJ^{\mu}+\epsilon^{\mu \alpha \nu}\partial_\alpha J_\nu \Big).
\label{Trick}
\end{equation}
If the asymptotic field $\mathcal{U}_{\mu}$ is to describe a purely massive field then it must satisfy the Proca conditions
\begin{equation}
\quad \partial_\mu \ \mathcal{U}^{\mu}=0 \quad \text{and} \quad \left(\Box+m^2_r\right)\mathcal{U}^{\mu}=0,
\end{equation}
and must be physical in the following sense
\begin{equation}
\left[ {\cal{U}}^\mu(x), B(y) \right] = 0.
\end{equation}
Hence, an asymptotic condition for the matter current follows immediately
\begin{equation}
\epsilon_{\mu \nu \alpha}\partial^\nu J^{\alpha}_{ \text{as}}=-m_r J_\mu^{\text{as}}.
\label{AsymptoticConditionForCurrents}
\end{equation}
This constraint will help us fix some constants below, whereas the identification of the asymptotic field is deferred to section 4.
Going back to equation \eqref{RelationBetweenSpectralFunctions}, we can find a general result for the vacuum expectation value of the gauge field commutator as follows
\begin{align}
\langle 0 | \left[ A_{\mu}(x), A_{\nu}(y) \right] | 0 \rangle &= a\left(\eta_{\mu\nu} + \frac{1}{m^2}\partial_{\mu}\partial_{\nu} - \frac{1}{m}\epsilon_{\mu\nu\sigma}\partial^{\sigma} \right)\Delta(x-y;m^2) \nonumber \\
&\quad+ \left( b\partial_{\mu}\partial_{\nu} + c\epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;0) + f\partial_{\mu}\partial_{\nu}E(x-y;0) \nonumber \\
&\quad- i\int^{\infty}_{0}ds \left[ \rho(s) \left( \eta_{\mu\nu} + s^{-1}\partial_{\mu}\partial_{\nu} \right) + \widetilde{\rho}(s) \epsilon_{\mu\nu\beta}\partial^{\beta} \right] \Delta(x-y;s),
\label{eq:SpectralRepresentation1}
\end{align}
where the Green's functions are defined by the following Cauchy data
\begin{align}
\Box \Delta(x-y; s) &= -s\Delta(x-y; s), \quad \Delta(x-y; s)|_0 = 0, \quad \partial_0^x\Delta(x-y; s)|_0 = -\delta^2(x-y) \\
\big(\Box+s\big) E(x-y; s) &= \Delta(x-y; s), \quad E(x-y; s)|_0=0, \quad (\partial_0^x)^3E(x-y;s)|_0 = -\delta^2(x-y),
\label{CauchyData}
\end{align}
with the subscript $|_0$ meaning $|_{x_0=y_0}$. In fact, the first two lines in \eqref{eq:SpectralRepresentation1} belong to the kernel of the differential operator on the left-hand side of \eqref{RelationBetweenSpectralFunctions}, that is, they form the solution in the absence of matter currents. The last term is the non-homogeneous part of the solution, which arises due to the presence of matter currents; its specific form is fixed by current conservation (\ref{eq:CurrentConserv}).
By imposing the gauge fixing condition (\ref{eq:GaugeCondition}), the relation $f=-i\alpha$ is obtained. Using the initial condition $[A_k(x),\partial_0 A_l(y)]|_{0}=-i\eta_{kl}\ \delta^2(x-y)$ we have
\begin{equation}
-i = -a - i\int^\infty_{0+}ds \ \rho(s), \quad\quad \frac{a}{m^2} + b = i\int^\infty_{0}ds \ s^{-1}\rho(s),
\label{spectral1}
\end{equation}
and using $[A_k(x),A_l(y)]|_{0} = 0$ we have
\begin{equation}
c - \frac{a}{m} = i\int^\infty_{0}ds \ \tilde{\rho}(s).
\end{equation}
The results so far are completely general, but we can study particular solutions of them motivated by physical facts. Henceforth, we shall fix $a=0$, as is done in the spontaneous symmetry breaking context \cite{Nak2}, since in the MCS theory as well as in QED$_3$ there is just one asymptotic transverse physical excitation with a given mass. If $a\ne 0$, it would imply the existence of an additional asymptotic particle in the physical sector besides the radiatively generated one, namely the one arising when parity breaking matter fields are considered, in consistency with the Wilsonian perspective. However, this would violate the counting of degrees of freedom in the theory and, thus, it is not allowed. Consequently,
\begin{equation}
b = i\int^\infty_{0}ds \ s^{-1}\rho(s), \qquad c = i\int^\infty_{0}ds \ \tilde{\rho}(s), \qquad \int^{\infty}_{0^+} ds \ \rho(s) = 1.
\label{spectral2}
\end{equation}
All in all, we obtain the following non-perturbative result
\begin{align}
\langle 0 | \left[ A_{\mu}(x), A_{\nu}(y) \right] | 0 \rangle &= i \left( L\partial_{\mu}\partial_{\nu} + R\epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;0) -i\alpha\partial_{\mu}\partial_{\nu}E(x-y;0) \nonumber \\
&\quad- i\int^{\infty}_{0}ds \left[ \rho(s) \left( \eta_{\mu\nu} + s^{-1}\partial_{\mu}\partial_{\nu} \right) + \widetilde{\rho}(s) \epsilon_{\mu\nu\beta}\partial^{\beta} \right] \Delta(x-y;s)
\label{eq:CommutationAFinalResult}
\end{align}
where we have defined the quantities $L \equiv -ib$ and $R \equiv -ic$.
Starting from \eqref{eq:CommutationAFinalResult} we will derive a relation between the bare and renormalized masses below but, before proceeding, it is important to establish a non-trivial connection between the spectral functions $\rho(s)$ and $\tilde{\rho}(s)$. As usual, we shall decompose the spectral functions into their discrete and continuum parts
\begin{equation}
\rho(s) = Z \delta (s - m^2_r) + \sigma(s), \quad\quad \tilde{\rho}(s) = s^{-1/2}\tilde Z \delta (s - m^2_r) + s^{-1/2}\tilde{\sigma}(s).
\label{eq:DiscreteContinuousContributions}
\end{equation}
From equation \eqref{RelationBetweenSpectralFunctions} and its general solution \eqref{eq:SpectralRepresentation1}, it is possible to compute the vacuum expectation value for the matter current. In fact,
\begin{align}
\langle 0 | \left[ J_{\mu}(x), J_{\nu}(y) \right] | 0 \rangle = -i\int_0^{\infty} ds \ s\left(s-m^2\right) \rho_{\mu \nu}(x,y;s),
\label{MatterCurrent}
\end{align}
where we have defined the spectral density, $\rho_{\mu \nu}(x,y;s)$, of the matter current as
\begin{equation}
\rho_{\mu \nu}(x,y;s) = \left[ \rho(s) \left( \eta_{\mu\nu} + s^{-1}\partial_{\mu}\partial_{\nu} \right) + \widetilde{\rho}(s) \epsilon_{\mu\nu\beta}\partial^{\beta} \right] \Delta(x-y;s).
\end{equation}
The form of \eqref{MatterCurrent} was, of course, expected by construction. Imposing the constraint (\ref{AsymptoticConditionForCurrents}), we obtain that the following relation holds asymptotically
\begin{equation}
\epsilon^{\nu\alpha\mu} \partial_{\alpha}\langle 0 | \left[ J^{\text{as}}_{\mu}(x), J^{\text{as}}_{\nu}(y) \right] | 0 \rangle = - m_r \langle 0 | \left[ J_{\text{as}}^{\nu}(x), J^{\text{as}}_{\nu}(y) \right] | 0 \rangle.
\end{equation}
Thus, choosing only the discrete parts in \eqref{eq:DiscreteContinuousContributions} we have
\begin{equation}
\int_0^{\infty} ds \ s\left(s-m^2\right) \left[ s^{-1/2}\tilde Z\epsilon^{\nu\alpha\mu}\epsilon_{\mu\nu\beta}\partial_{\alpha}\partial^{\beta} + 2m_r Z \right] \delta(s - m^2_r) \Delta(x-y;s) = 0,
\end{equation}
from which it follows that
\begin{equation}
\tilde Z = \text{sgn}(m_r)Z.
\end{equation}
For completeness, after plugging this result back in equation (\ref{eq:DiscreteContinuousContributions}), we get from (\ref{spectral2}) that
\begin{equation*}
L = \frac{Z}{m_r^2} + \int^{\infty}_{0} ds \ \frac{\sigma(s)}{s}, \quad\quad R = \frac{Z}{m_r} + \int^{\infty}_{0} ds \ s^{-1/2}\tilde{\sigma}(s), \quad\quad 1 = Z + \int^{\infty}_{0} ds \ \sigma(s).
\end{equation*}
Now, acting with the differential operator $\Box\eta^{\mu\gamma} +m\epsilon^{\mu \beta \gamma}\partial_\beta$ on the two-point function (\ref{eq:CommutationAFinalResult}) we obtain for the left-hand side, by using the equations of motion (\ref{eq:GaugeCondition}) and (\ref{eq:EquationForCurrent}), the following result\footnote{The ellipsis is the result of the unequal-time commutator between the interacting Abelian gauge field and an arbitrary matter current. Although it is not known, we do not need the explicit result to derive equation \eqref{eq:MassRenormalization} because charged fields commute with the Abelian gauge field at equal time.}
\begin{align}
\langle 0 | \left[ (1-\alpha)\partial^{\gamma}B(x) - J^{\gamma}(x), A_{\nu}(y) \right] | 0 \rangle &= (1-\alpha) \partial^{\gamma}_x\langle 0 | \left[ B(x) , A_{\nu}(y) \right] | 0 \rangle - \langle 0 |\left[ J^{\gamma}(x) , A_{\nu}(y) \right] | 0 \rangle \nonumber \\
&= i(1-\alpha)\partial^{\gamma}\partial_{\nu}\Delta(x-y;0) + \cdots.
\end{align}
Thus, together with similar manipulations for the right-hand side, we have
\begin{align}
i(1-\alpha)\partial^{\gamma}\partial_{\nu}\Delta(x-y;0) + \cdots &= -imR\partial^{\gamma}\partial_{\nu}\Delta(x-y;0) -i\alpha\partial^{\gamma}\partial_{\nu}\Delta(x-y;0) \nonumber \\
&\quad-i\int_0^{\infty} ds\ \rho(s) \left( -s\delta^{\gamma}_{\nu} - \partial^{\gamma}\partial_{\nu} + m\epsilon_{\nu}^{~\beta\gamma}\partial_{\beta} \right)\Delta(x-y;s) \nonumber \\
&\quad-i\int_0^{\infty} ds\ \widetilde{\rho}(s) \left( -s\epsilon^{\gamma}_{~\nu\beta}\partial^{\beta} - m\partial^{\gamma}\partial_{\nu} - ms\delta^{\gamma}_{\nu} \right)\Delta(x-y;s) \nonumber.
\label{PreResult}
\end{align}
After considering the spatial components $\gamma, \nu = i, j$ at equal times and using the Cauchy data (\ref{CauchyData}), it follows that
\begin{equation}
0 = im\epsilon_{j}^{~0i}\delta^2(\Vec{x}-\Vec{y})\int_0^{\infty}ds\ \rho(s) - i\epsilon^{i}_{~j0}\delta^2(\Vec{x}-\Vec{y})\int_0^{\infty} ds\ s \tilde{\rho}(s) ,
\end{equation}
or
\begin{equation}
m = \int_0^{\infty} ds\ s \tilde{\rho}(s).
\end{equation}
Replacing \eqref{eq:DiscreteContinuousContributions} in this result we get straightforwardly that
\begin{equation}
m = Zm_r + \int^{\infty}_0 ds \ s^{1/2}\tilde{\sigma}(s).
\label{eq:MassRenormalization}
\end{equation}
This is the most important result of this paper. We interpret (\ref{eq:MassRenormalization}) as a \textit{non-perturbative, model-dependent} relation between the bare and renormalized mass of the photon. It shows a new property which is intimately related to the dimensionality of the model. In fact, in the limit of vanishing bare mass $m\rightarrow 0$, the renormalized photon mass $m_r$ \emph{does not a priori vanish}; it depends on the continuous part of the spectral function $\tilde{\rho}(s)$, which arises only because we are working in $2+1$ dimensions. It is worthwhile to mention that a similar equation relating the renormalized mass to the bare one arises in $3+1$ dimensions, the so-called Johnson's theorem. However, in that case, we conclude that in the limit $m\rightarrow 0$ the renormalized mass must vanish unless the matter current has a massless discrete spectrum. A well-known example of the latter occurs in the presence of spontaneous symmetry breaking, where gauge bosons can be massive \cite{Nakanishi}.
The next step is to identify what kind of matter current may produce a non-vanishing $\tilde \sigma(s)$. Certainly, it must break a discrete symmetry even in the limit of vanishing bare mass, since we are interested in dynamical mass generation. Although \cite{mcs} mentioned an explicit perturbative example without discrete symmetry breaking in which scalar matter has a nonvanishing $\tilde \sigma(s)$, it turns out that it is proportional to the bare mass; thus, the photon remains massless in the presence of scalars. Consequently, we are left with massive fermions in the bidimensional representation. Since the source of the parity breaking is the mass term in the Dirac Lagrangian, it is expected that the topological mass generation depends strongly on the fermion mass. In the next section, our assumptions are verified perturbatively, and in section 4 we show that we arrive at a massless discrete pole structure when considering $m_r \to 0$. In fact, the specific low energy prescription used to manipulate equations $(6)$ and $(11)$ loses its meaning in the limit $m_r \to 0$, since we cannot postulate an asymptotic excitation such as $\mathcal{U}^{\mu}(x)$ that explicitly violates parity without a discrete symmetry breaking Lagrangian. It can be perturbatively shown that without topological and fermion bare masses, neither is radiatively generated \cite{jac}. On the other hand, in the presence of either of those terms, the other is dynamically obtained. Since they break discrete symmetries, the previous discussion is in agreement with the Wilsonian perspective.
\section{Perturbation Theory}
Let us assume the following smooth limit
\begin{equation}
\lim_{m \to 0}\tilde \rho(s) = \tilde \rho (s)_{\text{QED}_3},
\end{equation}
where the right-hand side represents the desired QED$_3$ parity breaking contribution. From the computation of the vacuum polarization tensor in QED$_3$ within causal perturbation theory \cite{Pim1}, we can extract the following result
\begin{equation}
\lim\limits_{m\to 0} \int ds \ s^{1/2}\tilde{\sigma}(s) = \text{Im} \left( \frac{e^2m_e}{4\pi ^2} \int_{4m_e^2}^{\infty} ds \ s^{-3/2} \log \left( \frac{1 - \sqrt{s/4m_e^2}}{1 + \sqrt{s/4m_e^2}} \right) \right).
\end{equation}
As discussed in the previous section, the continuous part is non-vanishing in the limit $m \to 0$, due to the presence of the electron mass $m_e$, which manifests as a symmetry breaking term. Using \eqref{eq:MassRenormalization}, we obtain the one-loop result
\begin{equation}
m_r = \frac{e^2}{4\pi}\text{sgn}(m_e).
\end{equation}
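For completeness, the integral above can be evaluated in closed form. This is a sketch under the branch convention $\operatorname{Im}\log(-x) = \pi$ for $x>0$: since $s > 4m_e^2$ makes the argument of the logarithm negative, only a constant imaginary part $\pi$ survives under the $\operatorname{Im}$, and
\begin{equation*}
\text{Im} \left( \frac{e^2 m_e}{4\pi ^2} \int_{4m_e^2}^{\infty} ds \ s^{-3/2} \log \left( \frac{1 - \sqrt{s/4m_e^2}}{1 + \sqrt{s/4m_e^2}} \right) \right)
= \frac{e^2 m_e}{4\pi^2}\,\pi \int_{4m_e^2}^{\infty} ds \ s^{-3/2}
= \frac{e^2 m_e}{4\pi^2}\,\frac{\pi}{|m_e|}
= \frac{e^2}{4\pi}\,\text{sgn}(m_e),
\end{equation*}
in agreement with the stated one-loop result.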
\section{Asymptotic Structure}
Having established the non-perturbative description of the phenomenon of dynamical mass generation of a gauge field through interactions with matter in 2 + 1 dimensions, we are ready to perform an analysis of the asymptotic structure of the theory.
First, we extract the discrete spectrum of (\ref{eq:CommutationAFinalResult}) assuming asymptotic completeness \cite{mcs}
\begin{align}
\langle 0 | \left[ A_{\mu}(x), A_{\nu}(y) \right] | 0 \rangle &\xrightarrow{\text{Disc. Spectr.}} i \left( L\partial_{\mu}\partial_{\nu} - R\epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;0) -i\alpha\partial_{\mu}\partial_{\nu}E(x-y;0) \nonumber \\
&\qquad\qquad\quad- i Z \left( \eta_{\mu\nu} + \frac{1}{m_r^2}\partial_{\mu}\partial_{\nu} - \frac{1}{m_r} \epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;m_r^2).
\end{align}
We next define the asymptotic field of the Heisenberg operator $A_{\mu}$ as $A_{\mu}^{\text{as}} = Z^{-1/2}A_{\mu}$ and the renormalized gauge parameter as $\alpha_r = Z^{-1} \alpha$ in terms of which the commutator for $A_{\mu}^{\text{as}}$ reads
\begin{multline}
\big[ A_{\mu}^{\text{as}}(x), A_{\nu}^{\text{as}}(y) \big] = \\ i \left[ \left( \frac{1}{m_r^2} + Z^{-1}\int^{\infty}_{0} ds \ \frac{\sigma(s)}{s} \right) \partial_{\mu}\partial_{\nu} - \left( \frac{1}{m_r} + Z^{-1}\int^{\infty}_{0} ds \ s^{-1/2}\tilde{\sigma}(s) \right) \epsilon_{\mu\nu\beta}\partial^{\beta} \right] \Delta(x-y;0) \\
\quad-i\alpha_r\partial_{\mu}\partial_{\nu}E(x-y;0) - i \left( \eta_{\mu\nu} + \frac{1}{m_r^2}\partial_{\mu}\partial_{\nu} - \frac{1}{m_r} \epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;m_r^2).
\label{eq:CommutationAAsymptotic}
\end{multline}
In view of (\ref{eq:B-Field}) we define the asymptotic field $B^{\text{as}} = B$ because it is just a free field.
Having determined (\ref{eq:CommutationAAsymptotic}), we are in a position to distinguish between the massive and massless spectrum by decomposing $A_{\mu}^{\text{as}}$ in terms of the following fields
\begin{equation}
\mathcal{U}^{\mu} = \frac{1}{m_r} \left( \epsilon^{\mu\nu\sigma}\partial_\nu A_\sigma^{\text{as}}- \frac{\partial^\mu B^{\text{as}}}{m_r} \right) , \qquad \mathcal{A}^\mu = A^\mu_{\text{as}}- \mathcal{U}^\mu.
\end{equation}
The non-physical part $\mathcal{A}^\mu$ is purely massless, while the transverse part is physical and massive, and its commutator is given by
\begin{equation}
\big[ \mathcal{U}_{\mu}(x),\mathcal{U}_{\nu}(y) \big] = -i \left( \eta_{\mu\nu} + \frac{1}{m_r^2}\partial_{\mu}\partial_{\nu} - \frac{1}{m_r} \epsilon_{\mu\nu\beta}\partial^{\beta} \right) \Delta(x-y;m_r^2).
\end{equation}
Note that this expression recovers the physical Hilbert space of the MCS theory. Therefore, we conclude that the \textit{Chern-Simons mass term has been induced by the interaction of the photon with matter}. This result is compatible with the discussion given after equation (\ref{Trick}), since $\mathcal{U}^\mu$ represents our massive pole.
The important point of our result is that this phenomenon does not occur via an ``eating'' process. In fact, it is an intrinsic characteristic of the dimensionality and the topological properties of the model. The fermionic and gauge degrees of freedom must remain the same separately. This means that the latter cannot have both massive and massless poles if it is to preserve its degrees of freedom before and after the interaction. It is known that in 2 + 1 dimensions both the MCS and Maxwell fields have one local excitation each, due to the similarity of their Hamiltonians. We have shown that the massive excitation is physical in the sense that $ \big[ {\mathcal{U}}^\mu(x), B(y) \big]=0$. So it must represent the unique observable degree of freedom.
It is also important to mention that the emergence of a Chern-Simons term can be understood as a topological Higgs mechanism \cite{top}. It is expected since every mass generation can be expressed as a kind of Higgs phenomenon \cite{Nakanishi}.
Furthermore, we can show that in the massless limit the asymptotic field recovers the well-known discrete massless pole structure. To see this, we use the definition of the renormalized mass and the Taylor expansion given by
\begin{equation}
\Delta(x-y;m_r^2)=\Delta(x-y;0)-m_r^2\,E(x-y;0)+ \cdots.
\end{equation}
After the redefinition of variables
\begin{equation}
A_{\mu}^{\text{as}}(x)\to A_{\mu}^{\text{as}}(x)-\frac{1}{2}\left( Z^{-1}\int^{\infty}_{0} ds \ \frac{\sigma(s)}{s} \right)\partial_{\mu}B^{\text{as}}(x),
\end{equation}
we get \cite{Nakanishi}
\begin{align}
\left[ A_{\mu}^{\text{as}}(x), A_{\nu}^{\text{as}}(y) \right] =
-i\alpha_r\partial_{\mu}\partial_{\nu}E(x-y;0) - i \left( \eta_{\mu\nu}\Delta(x-y;0)-\partial_{\mu}\partial_{\nu}E(x-y;0) \right).
\end{align}
\section{Conclusion}
Throughout this work, a dynamical mass generation for QED$_3$ was verified, first by means of the Heisenberg equations of motion, valid in the whole Hilbert space. Later, we obtained this same result by studying the asymptotic two-point structure of the renormalized photon fields, whose physical part is the same as that of the Maxwell-Chern-Simons theory. This last observation allows us to speak of a dynamically generated topological mass term.
This result was previously obtained in the perturbative approach, but here we had the opportunity to make some general observations which are characteristic of the non-perturbative treatment. The appearance of this massive excitation was expected, because the Wilsonian perspective strongly indicates it: the addition of a Chern-Simons topological mass term is a natural generalization of QED in $D=2+1$ dimensions in a parity breaking scenario. We also pointed out the importance of the coupling with bidimensional massive fermions for the occurrence of the mass generation phenomenon.
The asymptotic structure was obtained, and the massive excitation recovered is the one previously found by means of the operator equations of motion. We also showed how to circumvent Johnson's theorem in order to obtain a dynamically generated renormalized mass for the photon field. The method employed is indeed consistent, since the massless structure could be continuously reached in the limit $m_r \to 0$.
Finally, we have pointed out throughout the introduction of this work that these models have interesting properties when studied in their dual language. It would be interesting to know how the notion of duality can be formulated within the KON formalism. This investigation is reserved to another paper \cite{future}.
\section*{Acknowledgments}
The authors would like to thank the referee for the comments and suggestions to improve the manuscript significantly. G. B. de Gracia and L. Rabanal thank CAPES for support, and B. M. Pimentel thanks CNPq for partial support.
The making of a California prison town.
Of all the details Abdul Khan remembers of his flight from his home country, Ghana, perhaps the clearest is the glint of light on the machetes. He was 25 years old, and his textile business was failing. There were few jobs in his isolated village in Ghana's mountainous interior, and Khan had started working for two gay men, who ran an underground male prostitution business. In Ghana, homosexuality is not tolerated. You can be imprisoned for it, and you can be killed. When Khan's association became known, gossip began circulating that he, too, was gay. One day in the fall of 2014, his uncle sat him down for a talk. Renounce that friendship, his uncle said, or die. Khan had already heard rumors that his neighbors were looking to kill him before he "infected" their children, so he took his uncle's threat seriously. One night, as he lay awake and fearful in bed, a group of men brandishing machetes approached the house. Khan jumped out of bed and escaped through a window in the back.
Khan ran to his two gay friends, the only people he trusted. They told him that Ghana was no longer safe for him — that he should flee the country — and they scraped together money for him to buy a ticket to Ecuador, which did not require a tourist visa. On Nov. 6, 2014, Khan stepped off the plane in Quito, Ecuador's capital. Before he'd even left the airport a man told him about a group of migrants, mostly from Somalia, Bangladesh and Pakistan, who were trying to reach the United States, and advised him to join them. America, the man said, was the only country where he would have rights. He introduced Khan to a smuggler who would arrange his journey to the U.S. border. Khan paid the man $800 of the $1,000 he had with him and three days later was on a bus heading north. He traveled almost 4,000 miles, passing through 10 countries via secret trails, in fishing boats and long canoes, through the uncharted jungle of the Darien Gap, through Panama, Central America and Mexico, to the border at Tijuana.
When border officials asked him why he had come to America, Khan told them he had fled Ghana and come to seek asylum. For months, all he had thought about was survival, but soon, he imagined, he would be on his way to New York, where he had family. Instead, Khan was detained. He spent his first night in the United States on a concrete floor in a cold, windowless room at the San Ysidro Port of Entry. For five days, he was passed from one detention center to the next. Finally, Khan was brought to the Adelanto Detention Facility, where he would spend the next 16 months.
Last December, almost two years later, I met Khan in New York, on a busy corner in the Bronx. Khan, whose name has been changed to protect his identity, wore dark jeans and Adidas sneakers, his boyish face framed by short curly hair and sideburns. Inside a Ghanaian restaurant, we shared a plate of fried plantains and beans, and he told me his story. It is a story that says much about the way the United States now treats asylum-seekers and immigrants, even before the Trump administration's vitriolic rhetoric and attempted bans. It tells of the rise of corporate detention centers, and their role in reshaping communities in rural areas, including the West. The moment Khan fled Ghana, his fate became intertwined with one such place: Adelanto, California, a struggling town on the edge of the Mojave Desert that has hitched itself to America's booming incarceration economy.
A hopeful slogan for the Southern California town of Adelanto, which turned to a prison and detention-center economy after its military base closed.
Adelanto sits 85 miles northeast of Los Angeles, on a flat and featureless expanse dotted with Joshua trees. U.S. Route 395 runs through the middle of town, out of Southern California and toward a line of distant ochre mountains. Trucks barrel up and down the roadway that serves as Adelanto's main thoroughfare, but there is no real center to the town. Instead, a haphazard collection of tract homes, trailer parks, warehouses, gas stations and fast-food restaurants spreads out over 56 square miles of desert. There are so many abandoned lots that the overwhelming impression is one of empty space.
Adelanto, a town of 32,000, is home to three prisons. This was not a coincidence. A century ago, orchards covered parts of the Mojave Desert. Farmers grew apples, pears, plums, grapes and alfalfa. Crops were watered by the Mojave River, which begins in the nearby San Bernardino Mountains; its water supply, farmers believed, was inexhaustible. At the western edge of the valley, past the town of Victorville, Earl Holmes Richardson, an inventor and industrialist, envisioned a "City With Unlimited Possibilities," where soldiers returning from the Great War could recuperate in the high desert's clean dry air. Richardson sold one of his patents in 1915 and bought a parcel of land for $75,000, hoping to subdivide it into one-acre plots and develop a master-planned community. But vets had no interest in living so far out in the desert, and Richardson's vision never materialized. Instead, Adelanto grew up around the orchards, gaining some renown for its fruit and cider.
As agriculture intensified throughout the Victor Valley, excessive water use and a series of dry years shrank the Mojave River. Adelanto's farmers struggled, and when the Great Depression hit, many were forced out of business. The vacant land they left behind brought the U.S. military to the town's northern edge in 1941. The Air Corps established an advanced flying school for World War II, and by 1950, George Air Force Base served as a training ground for fighter jets and bombers.
With the base came jobs and steady tax revenue, and in 1970, Adelanto incorporated, becoming the smallest city in San Bernardino County. It was almost wholly reliant on a military economy, but planners hoped for more: giant shopping malls, new homes, and new people to boost the tax base. By then, a few poultry ranches were all that remained of Adelanto's agricultural past. In 1993, however, the base closed, due to congressional realignments and closures at the end of the Cold War. People packed up and left, and property values cratered. Houses emptied and lawns died.
Adelanto got its first prison in 1991: the Adelanto Community Correctional Facility, which held inmates for the California Department of Corrections. In just 11 years, the number of prisoners in California had more than quadrupled.
That growth trend continued across rural America. In the 1960s and 1970s, about four new prisons were built in small towns and rural communities each year, according to the Agriculture Department's economic research service. During the 1980s, that figure increased to an annual average of 16. The following decade, the number jumped to 25, with a prison opening somewhere in rural America every 15 days.
By 2006, two more prisons were sited near or within Adelanto's town limits: the High Desert Detention Center, a county facility, and a gigantic federal complex on the border with the neighboring town of Victorville.
A few years later, the GEO Group, a Florida-based private prison company, offered to buy the old Adelanto Community Correctional Facility, for $28 million. Adelanto happily accepted. But the company had no plans to run an ordinary jail. Instead, GEO had its eye on the latest iteration of America's prison boom, this one targeting immigrants. The county jail would be repurposed into the Adelanto Detention Center, housing asylum-seekers and others caught in immigration bureaucracy. Adelanto's detainees are among the 40,000 people held every day in over 400 facilities nationwide by Immigration and Customs Enforcement, or ICE, pending a decision in their immigration cases or while awaiting deportation.
A fortified compound surrounded by high barbed-wire fencing, the Adelanto Detention Center sits at the end of a paved road near an industrial zone on the outskirts of Adelanto. When Khan was brought there on Dec. 11, 2014, it was the middle of the night, but he could still sense the confinement. He felt confused, he told me. This was not the America he had envisioned. Why, he wondered, was he being treated like a criminal?
The guards gave him a blue jumpsuit and escorted him to a windowless dormitory. Soon, he learned about the "segregation units" used to isolate unruly detainees. By law, immigrant detention facilities are not supposed to be punitive, but the official distinction between "detainees" and "prisoners" seemed largely meaningless. Guards conducted daily headcounts — usually five or six, each one up to an hour, during which time detainees had to remain in place by their beds. Khan had a particularly hard time with the handcuffs, which guards placed around his ankles and wrists any time he was transported outside the facility for a court appointment. He had broken no laws and had not crossed the border illegally. He had simply asked for protection. He, like many of the other asylum-seekers held in the detention center, had passed a "credible fear" interview and had no criminal record. Back in Ghana, Khan had always imagined America as a country of freedom; a country where basic human rights were protected. Why keep us locked up? he thought. If you don't want us, tell us to go back.
An inmate is seen behind the locked door at the Adelanto Detention Facility. Formerly a California state prison, it is now owned by the private prison contractor GEO Group and houses nearly 2,000 detainees of U.S. Immigration and Customs Enforcement.
A guard escorts an immigrant detainee through the Adelanto Detention Facility in Adelanto, California, where around 2,000 detainees of Immigration and Customs Enforcement await hearings on their immigration status.
Cari Thomas is the former mayor of Adelanto. She has long red hair and a no-nonsense air. An Adelanto transplant who was born in the L.A. suburbs, she saw the high desert as an affordable place to live — where land was cheap and a comfortable, middle-class life still in reach. Elected to city council in 2008, Thomas became mayor in 2010 and oversaw the GEO Group's arrival.
Adelanto had plenty of incentives to keep the detention center full, she told me last fall, over an egg breakfast at Denny's. After the base closed, the town struggled to replace the lost jobs and revenue. Houses that once sold for $80,000-$100,000 plummeted to half their value. "It was horrible," Thomas said. Throughout the mid- and late 1990s and early 2000s, money flowed into the town, as new housing developments were built — part of the nationwide housing boom. For a while, "things were good," Thomas said. But it didn't last. The 2008 recession hit, and Adelanto suffered another housing crash and another wave of sunken hopes.
Thomas had dreamed of turning Adelanto, with all its space, into a town like Rancho Cucamonga, an hour south, with its colossal malls and shining housing developments set against the San Gabriel Mountains. But the Adelanto she inherited was in dire financial straits. "It was a really bad time to get into politics," Thomas said. She spent much of the next four years just trying to balance the budget.
In the wake of 9/11, private prison companies like GEO saw a lucrative business opportunity in the government's immigration policies. Throughout rural Texas and the Southwest, new for-profit immigrant detention facilities sprang up, bolstered by more and more government contracts. Last year, for instance, GEO's revenue was over $2 billion, 18 percent of which came from ICE — the highest of any government contractor. To protect its profits, the industry developed a number of tactics, such as incorporating so-called "guaranteed minimums" into detention center contracts, ensuring the company gets paid for a certain number of beds, whether or not they're filled.
For GEO, the deal offered plenty of perks, too. The facility was already built — the company just needed bodies to fill it. As part of the sale agreement, the town was obliged to secure the government contracts that would bring immigrant detainees to the newly renamed Adelanto Detention Facility. ICE would then pay GEO money based on the number of prisoners held in the facility, with the town serving as the middleman. GEO quickly expanded the facility to hold 1,300 detainees. Its contract with ICE included a 975-bed minimum occupancy rate guaranteeing GEO roughly $40 million per year.
According to documents obtained from a state records request filed by Community Initiatives for Visiting Immigrants in Confinement (CIVIC), Adelanto was only paid a flat yearly $50,000 "administrative fee" from GEO for its initial 650 bed capacity, even though the company had expanded the facility to hold 1,300.
City Councilman John Woodard, center, tours a medical marijuana production facility — a new revenue stream the town is pursuing — being developed in Adelanto's industrial area.
Former Adelanto Mayor Cari Thomas cut the first deal to transform a former state prison into a privately run detention facility for ICE.
Current Mayor Rich Kerr, elected on an anti-prison platform, now embraces them.
Khan thought he had all the papers required to prove his identity. He had financial documents showing he had family who could support him and an uncle in New York to stay with. He assumed he would be released. Yet the ICE officers denied him parole, claiming that Khan's documents were insufficient.
This kind of detention is not uncommon. According to a recent report by Human Rights First, ICE has increasingly refused parole for asylum seekers — even when they meet the official criteria. In 2012, 80 percent of asylum seekers who passed their credible fear interview were granted parole. By 2015, the number had dropped to 47 percent. The sharp drop coincided with an influx of migrants from Guatemala, El Salvador and Honduras, many of them asylum-seekers. On June 20, 2014, Secretary of Homeland Security Jeh Johnson announced a plan to significantly expand detention capacity to detain and quickly deport Central Americans, in an attempt to "send a message" to those seeking asylum or attempting to cross the border illegally.
Caught up in that policy, Khan would have to prove his case from inside Adelanto. The prospect of indefinite detention terrified him, a fear made worse by the smaller indignities he endured. Sometimes, the meat served at mealtimes was moldy or rotten, compelling many detainees to buy much of their own food at the GEO-run commissary, but Khan had no money to spare. Often he barely ate. GEO guards barred him from praying with other Muslim inmates, denying him an important part of his religious practice, while Christian detainees were allowed to attend church three times per week. (GEO later changed its policy in response to complaints.) Khan felt powerless in the face of the discriminatory rules, but the threat of the segregation units, or SU, which mirror the solitary confinement cells used in prisons housing criminals, kept him in check. "You want to fight for your rights, but if you fight too hard, you will be put in the SU," he said.
Sometimes, entire units experienced multi-day lockdowns as group punishment for one detainee's actions. "If anything happens, they put us in our cells and locked the door," Khan said. He learned not to attract attention, to keep his anger and despair in check, to pray alone.
Due to the backlog in immigration courts, which is now more than 500,000 cases long, asylum-seekers can remain in detention for months and sometimes years while their cases are processed. Khan felt like he existed outside the law. That is not entirely wrong: Unlike criminal defendants, for example, Khan had no right to a lawyer. Like most immigration detainees and asylum-seekers, he could not afford one and would have to represent himself.
For the next six months, Khan waited to find out when he would have his asylum hearing. He tried to bolster his case, researching the repression of homosexuality in Ghana and instances where people were imprisoned or killed for aligning themselves with gay and lesbian rights, but detainees could only use the law library for an hour a day and had no access to the internet, and so Khan struggled to find information. He wanted to call friends and family to see if they could help, but he couldn't afford the high rates charged by TALTON Communications, the detention center's for-profit phone service provider.
Even if Khan had been able to pay for a lawyer, he would have had a hard time finding one. Immigration attorneys like Fagen rarely take cases involving Adelanto detainees because of the long commute; a round-trip drive from LA can take most of the day. And there's a low chance of success. Adelanto's six immigration judges are among the harshest in the country. The most lenient of them denies 75 percent of asylum cases, according to data compiled by researchers at Syracuse University. Among the two harshest, the denial rate is over 91 percent.
Early in his career, most of the immigrant detainees Fagen dealt with were in two facilities closer to downtown LA: the San Pedro Processing Center on Terminal Island and the Mira Loma Detention Center in Lancaster, run by the Los Angeles County Sheriff's Department. The shorter drives meant he could take on more clients who were detained, he said.
Terminal Island shut down in 2014 after an internal review found the facility too unsafe, and ICE ended its contract for Mira Loma in 2012, transferring detainees to Adelanto, in part because the GEO contract was cheaper — even though it raised the costs for detainees.
Demonstrators with the Caravan Against Fear, a roving protest against U.S. immigration policies, gathered outside the Adelanto Detention Facility West in April.
The proceeds from Adelanto's GEO deal temporarily plugged the town deficit but failed to generate the substantial long-term revenue that the town needed. By 2014, Adelanto was once again contemplating bankruptcy. Around that time, a pair of private developers sought out Adelanto for another private prison.
The GEO Group, meanwhile, came forward with plans to expand the Adelanto Detention Facility to 1,940 beds, making it the largest immigrant detention facility in California. Thomas supported the expansion. The town was $2.6 million in the red and needed the additional money that the additional detainees would bring in. As the November 2014 election approached, Richard Kerr, an upstart candidate, ran on a platform that included no new jails.
Kerr narrowly defeated Thomas and is still mayor today. Last fall, I met him at his office in the Adelanto City Hall, a stucco, faux Spanish-colonial style building overlooking the swath of empty desert where the GEO prisons sit. A former Marine, Kerr has a mustache and often wears jeans to work. He has, in his words, a "maverick" approach to city politics. Almost immediately following his election, the new mayor changed his mind about prisons. Once he found out he could re-negotiate the per-bed rate that GEO paid Adelanto for each detainee held in its facilities, Kerr decided that the prisons were not as bad as they were often made out to be. "We need the money in the city," he told me. According to the mayor, GEO had no problem paying a higher rate and Kerr appreciated the company's donation to the rodeo and the local Christmas fund. GEO also paid the town $175,000 to fund an additional police officer. "They're 100 percent behind us," he said.
The city council approved both the new prison and the GEO expansion. On July 1, 2015, the Adelanto Detention Facility got 640 more beds, specifically designed to house women detainees. The new beds would bring in an extra $21 million for GEO. Along with another GEO-run state prison, there were over 9,000 people behind bars within a seven-mile radius of Adelanto — almost a third of the town's total population.
Renegotiating the GEO contract, Kerr told me, means Adelanto now receives $80,000 per month from GEO in bed tax for its two facilities — an eighth of the town's total budget.
Still, like most local officials I spoke with, Kerr would rather not dwell on Adelanto's prisons or the role they play in the town's economy. Instead, much of our hour-long conversation revolved around marijuana, which Kerr believes is on the cusp of transforming Adelanto from down-and-out prison town into a haven for California's nascent medical marijuana industry.
Chain-link and razor wire surround the Adelanto Detention Facility East, one of two private facilities operated by the GEO Group in Adelanto, California, with beds contracted to Immigration and Customs Enforcement. The company is approved to build a third facility nearby.
After six months in detention, Khan still had no verdict on his case. He was eligible for a bond hearing, which offered him a chance at release, but the judge set the bond at $28,000 — far beyond what Khan could afford. And so, like many detainees with limited means, he remained in Adelanto. A couple of weeks later, in May 2015, the same judge denied his asylum case, citing lack of evidence. Unless Khan appealed the decision, he would be deported.
Khan didn't see much point in appealing. He would have to continue his fight from inside Adelanto, a process that could take years and most likely would not yield any new evidence. For Khan, remaining in Adelanto seemed even worse than what he might face back in Ghana. It was better, he told the judge, for him to go back and face the consequences. "I'm ready for anything," he said.
Khan signed his deportation order and prepared for the worst. But before he could be released, immigration officials had to obtain a travel document from Ghana — essentially a guarantee that it would accept its citizen back once the U.S. had deported the person. In the meantime, Khan waited in Adelanto. Three months passed. An ICE officer told him they were still waiting to receive the documents from Ghana, which is among the roughly two dozen countries that often delay repatriating people from the U.S.
Almost a year into Khan's detention, in October 2015, he and a group of other detainees wrote a letter to ICE, requesting to speak with Gabriel Valdez, the assistant field office director for Adelanto. They wanted to know why they were still locked up, even after many had signed their deportation orders. When their request went unacknowledged, Khan and more than 90 detainees — mostly asylum seekers — began refusing to eat. Theirs became the fourth hunger strike at U.S. immigration detention facilities in less than three weeks.
At least some immigration judges have questioned the escalating use of detention. Last October, a group of former immigration judges wrote to Johnson, the former secretary of Homeland Security, expressing concern that the expansion "comes at the expense of basic rights and due process." People eligible for protection under U.S. and international laws are kept in jail-like facilities operated by private prison companies or local jails contracted by ICE. "A shocking 86 percent of immigrants in detention are unable to obtain legal representation," the judges noted.
The system creates a deep sense of despair for the people trapped within it. During Khan's detention, eight people attempted suicide, and 115 were placed on suicide watch. In 2015, CIVIC and the Detention Watch Network chronicled numerous reports of sexual assault and abuse. The poor medical care led to two deaths.
In late March of this year, a Nicaraguan man facing deportation hanged himself. Osmar Epifanio Gonzalez-Gadba, who did not have a criminal record, had been detained in Adelanto for three months. Three weeks later, Sergio Alonso Lopez, a 55-year-old Mexican detainee, began vomiting blood and later died in hospital. He had a history of serious medical issues and had been deported to Mexico three times previously.
Abdul Khan, who fled New York out of fear of deportation after Donald Trump was sworn in, currently resides in Montreal. His name has been changed in this story to protect his identity.
One day last spring, one year, three months and three weeks into his detention, a guard told Khan he was being released under supervision. ICE had decided to let him out while the agency continued its efforts to get his travel documents. At first, Khan thought the guards were lying — but when the guards gave him back his old clothes and told him to change out of his prison uniform, he began to believe. On March 23, 2016, Khan was set free.
One of Khan's relatives, who lived in Canada, wired Pamplone $300 — enough for a bus ticket to New York, where an uncle lived. Two days after his release, Khan said goodbye to Pamplone at the Greyhound station in LA, and set off east, to endure whatever fate had in store.
Note: This story has been updated to fix a misidentified roadway. It is U.S. Route 395, not Interstate.
This coverage is supported by contributors to the High Country News Enterprise Journalism Fund.
\section{Introduction and Main Results} \label{S1}
Electromagnetic dynamics is governed by a coupled PDE system describing the behavior of an electrically conducting fluid
and the electromagnetic fields.
In the absence of viscosity, the Hall effect, and heat conductivity, the system of electromagnetic dynamics can be written as (\cite{Im,EM})
\begin{align}
&\p_t \rho +\dv(\rho\u)=0, \label{maa} \\
&\rho(\p_t\u + \u\cdot \nabla\u)+\nabla p
=\rho_\textrm{e}\i+\mu_0\j\times \H, \label{mab} \\
&\rho \theta(\p_tS+\u\cdot \nabla S)=(\j-\rho_\textrm{e}\u)\cdot (\i+\mu_0\u\times \H) ,\label{mac}\\
&\epsilon\p_t\i-\cu \H+\j=0, \label{mad}\\
&\p_t \H+\frac{1}{\mu_0 }\cu \i=0, \label{mae}\\
&\p_t (\rho_\textrm{e})+\dv \j=0,\label{maf}\\
&\epsilon\dv \i=\rho_\textrm{e},\quad \dv \H=0.\label{mag}
\end{align}
Here the unknowns $\rho,\u=(u_1,u_2,u_3)\in \mathbb{R}^3, S, \i=(E_1,E_2,E_3)\in \mathbb{R}^3,\H=(H_1,\linebreak H_2,H_3)\in \mathbb{R}^3$,
and $\rho_\textrm{e}$ denote the density, velocity, entropy, electric field, magnetic field, and
electric charge density, respectively.
The
current density $\j$ is expressed by Ohm's law, i.e.,
\begin{align}\label{ohmm}
\j-\rho_\textrm{e} \u =\sigma (\i+\mu_0\u\times \H).
\end{align}
The pressure $p$ and the entropy $S$ satisfy the Gibbs relation
\begin{equation}\label{gibbs}
\theta \mathrm{d}S=\mathrm{d}e +p\,\mathrm{d}\left(\frac{1}{\rho}\right),
\end{equation}
where $\theta$ and $e$ denote the temperature and the internal energy of the fluid.
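We remark that the constraints \eqref{mag} are propagated by the evolution equations. Indeed, taking the divergence of \eqref{mad} and using \eqref{maf}, and taking the divergence of \eqref{mae}, we find
\begin{align*}
\p_t(\epsilon\dv \i-\rho_\textrm{e})=0, \qquad \p_t \dv \H=0,
\end{align*}
so that \eqref{mag} holds for all time once it holds initially.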
To the authors' best knowledge, the only mathematical result on the system \eqref{maa}--\eqref{mag} was
obtained by Kawashima~\cite{K}, who established the global existence of smooth solutions
in the whole space $\mathbb{R}^2$ when the
initial data are a small perturbation of some given constant state.
On the other hand, as pointed out in \cite{Im}, the
assumption that the electric charge density $\rho_\textrm{e}\simeq 0$ is
physically very reasonable for the study of plasmas. In this situation, we can eliminate
the terms involving $\rho_\textrm{e}$ in \eqref{maa}--\eqref{mag} and then obtain the following non-isentropic compressible
Euler-Maxwell system:
\begin{align}
&\p_t \rho +\dv(\rho\u)=0, \label{na} \\
&\rho(\p_t\u + \u\cdot \nabla\u)+\nabla P
= \mu_0\j\times \H, \label{nb} \\
&\rho \theta(\p_tS+\u\cdot \nabla S)=\j\cdot (\i+\mu_0\u\times \H) ,\label{nc}\\
&\epsilon\p_t\i-\cu \H+\j=0,\label{nd}\\
&\p_t \H+\frac{1}{\mu_0 }\cu \i=0, \quad \dv \H=0 \label{ne}
\end{align}
with
\begin{align}
\quad \j =\sigma (\i+\mu_0\u\times \H). \label{oh}
\end{align}
Formally, if we take the dielectric constant $\epsilon =0$ in
\eqref{nd}, i.e., the displacement current is negligible, then we
obtain $\j= \cu \H$. Thanks to \eqref{oh}, we can eliminate the
electric field $\i$ in \eqref{nb},
\eqref{nc} and \eqref{ne}, and finally obtain that
\begin{align}
&\p_t \rho +\dv(\rho\u)=0, \label{nba} \\
&\rho(\p_t\u + \u\cdot \nabla\u)+\nabla P
=\mu_0 \cu \H\times \H, \label{nbb} \\
&\rho \theta(\p_tS+\u\cdot \nabla S) =\frac{1}{\sigma }|\cu \H|^2, \label{nbc}\\
&\partial_t \H -\cu(\u\times\H)= -\frac{1}{\sigma\mu_0}\cu (\cu\H ),\quad \dv\H=0.\label{nbd}
\end{align}
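In more detail, substituting $\j=\cu \H$ into Ohm's law \eqref{oh} gives
\begin{align*}
\i=\frac{1}{\sigma}\cu \H-\mu_0\u\times \H,
\end{align*}
and hence
\begin{align*}
\mu_0\j\times \H=\mu_0\cu \H\times \H,\qquad
\j\cdot(\i+\mu_0\u\times \H)=\frac{1}{\sigma}|\cu \H|^2,
\end{align*}
while \eqref{ne} becomes
\begin{align*}
\p_t \H+\frac{1}{\mu_0}\cu\Big(\frac{1}{\sigma}\cu \H-\mu_0\u\times \H\Big)
=\p_t \H-\cu(\u\times \H)+\frac{1}{\sigma\mu_0}\cu(\cu \H)=0,
\end{align*}
which are exactly the right-hand sides and the magnetic field equation in \eqref{nbb}--\eqref{nbd}.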
The equations \eqref{nba}--\eqref{nbd} are called the non-isentropic compressible magnetohydrodynamic equations with infinite
Reynolds number, which are used to describe some local processes in the cosmic system \cite{Hu87}.
The above formal derivation is usually referred to as the magnetohydrodynamic
approximation \cite{Im,EM}. In \cite{KS1,KS2}, Kawashima and
Shizuta justified this limit process rigorously for the complete magnetohydrodynamic
fluid system in $\mathbb{R}^2$ for local and global small smooth solutions (small perturbations of some given constant state), respectively.
In \cite{JL}, we studied
the magnetohydrodynamic
approximation for the isentropic electromagnetic fluid system in a three-dimensional periodic domain and obtained the
isentropic compressible magnetohydrodynamic
equations with explicit convergence rates. Recently, we extended the results in \cite{JL} to the complete magnetohydrodynamic
fluid system and obtained the full
compressible magnetohydrodynamic equations \cite{JL2}. We remark that the viscosities
(including the shear and bulk viscosities and the heat conductivity coefficient) play a crucial role in the proofs
of \cite{JL2}, and the inviscid case is left as an open problem there.
The purpose of this paper is to solve this problem and give a rigorous derivation of the
compressible magnetohydrodynamic equations \eqref{nba}--\eqref{nbd} from the
non-isentropic compressible Euler-Maxwell system \eqref{na}--\eqref{oh} as the dielectric constant $\epsilon$ tends to
zero. As in \cite{JL2}, we consider the system \eqref{na}--\eqref{oh}
in a periodic domain of $\mathbb{R}^3$, i.e., the torus
$\mathbb{T}^3=(\mathbb{R}/(2\pi \mathbb{Z}))^3$.
Below we take the harmless physical constants
$\sigma$ and $\mu_0 $ to be one for simplicity of presentation.
For the system \eqref{na}--\eqref{oh}, it is more convenient to use the pressure $p$ instead of the density $\rho$ as an unknown.
Thus we reconsider the equations of state
as functions of $S$ and $p$, i.e., $\rho =r(S,p)$ and
$\theta=\Theta(S,p)$ for some positive smooth functions $r$ and
$\Theta$ defined for all $S$ and $p>0$, and satisfying
$\frac{\partial r(S,p)}{\partial p }>0$. Moreover, in order to emphasize the unknowns depending on the small parameter
$\epsilon$, we add the superscripts $\epsilon$ to the unknowns $ (p,\u,S, \i,\H)$ and rewrite the Euler-Maxwell system \eqref{na}--\eqref{oh} as
\begin{align}
& a(S^\ep ,p^\ep)(\partial_t p^\ep+\u^\ep\cdot \nabla p^\ep)+\dv \u^\ep=0,\label{nca}\\
& r(S^\ep,p^\ep)(\partial_t \u^\ep+\u^\ep\cdot \nabla \u^\ep)+\nabla p^\ep =(\i^\ep+\u^\ep\times \H^\ep)\times \H^\ep, \label{ncb}\\
& b(S^\ep,p^\ep)(\partial_tS^\ep+\u^\ep\cdot \nabla S^\ep)=|\i^\ep+\u^\ep\times \H^\ep|^2,\label{ncc}\\
& \epsilon\p_t\i^\ep-\cu \H^\ep + (\i^\ep+\u^\ep\times \H^\ep)=0,\label{ncd}\\
&\p_t \H^\ep+\cu \i^\ep=0, \quad \dv \H^\ep=0, \label{nce}
\end{align}
where $a(S^\ep,p^\ep)$ and $b(S^\ep,p^\ep)$ are defined as
\begin{align}\label{ncf}
a(S^\epsilon,p^\epsilon)=\frac{1}{r(S^\epsilon,p^\epsilon)}\frac{\partial r(S^\epsilon,p^\epsilon)}{\partial p^\epsilon}, \quad
b(S^\epsilon,p^\ep)= r(S^\ep,p^\ep)\Theta(S^\ep,p^\ep).
\end{align}
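As a purely illustrative example, for a polytropic ideal gas with $p=R\rho\theta$, $e=c_v\theta$, $\gamma=c_p/c_v$, and entropy normalized so that $S=c_v\ln (p\rho^{-\gamma})$, the Gibbs relation \eqref{gibbs} yields
\begin{align*}
r(S,p)=p^{1/\gamma}\mathrm{e}^{-S/c_p},\quad
\Theta(S,p)=\frac{p}{R\, r(S,p)},\quad
a(S,p)=\frac{1}{\gamma p},\quad
b(S,p)=\frac{p}{R},
\end{align*}
which are positive and satisfy $\frac{\partial r(S,p)}{\partial p}>0$ for $p>0$, as assumed above.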
The system \eqref{nca}--\eqref{nce} is supplemented with the initial data
\begin{align}\label{ncg}
(p^\epsilon, \u^\epsilon, S^\epsilon, \i^\epsilon,\H^\epsilon)|_{t=0}
=( p_0^\epsilon(x), \u_0^\epsilon(x),S_0^\epsilon(x), \i_0^\epsilon(x),\H_0^\epsilon(x)), \quad x\in \mathbb{T}^3.
\end{align}
We also rewrite the target equations \eqref{nba}--\eqref{nbd} (recall that $\mu_0\equiv\sigma\equiv 1$) as
\begin{align}
& a(S^0 ,p^0)(\partial_t p^0+\u^0\cdot \nabla p^0)+\dv \u^0=0,\label{nda}\\
& r(S^0,p^0)(\partial_t \u^0+\u^0\cdot \nabla \u^0)+\nabla p^0 = \cu \H^0\times \H^0, \label{ndb}\\
& b(S^0,p^0)(\partial_tS^0+\u^0\cdot \nabla S^0)=|\cu \H^0|^2,\label{ndc}\\
& \p_t\H^0-\cu (\u^0\times \H^0)=- \cu\cu \H^0,\quad \dv \H^0=0,\label{ndd}
\end{align}
where $a(S^0,p^0)$ and $b(S^0,p^0)$ are defined through \eqref{ncf} with $(S^\epsilon,p^\epsilon)$ replaced by $(S^0,p^0)$.
The system \eqref{nda}--\eqref{ndd} is equipped with the initial data
\begin{align}\label{nde}
(p^0, \u^0,S^0, \H^0)|_{t=0}
=(p^0_0(x), \u^0_0(x), S_0^0(x), \H_0^0(x)), \quad x\in \mathbb{T}^3.
\end{align}
We remark that although the electric field $\i^0$ does not appear in the system \eqref{nda}--\eqref{ndd}, it is induced by the conducting flow moving in the magnetic field according to
the relation
\begin{equation}\label{Ohm}
\i^0=\cu\H^0 - \u^0\times\H^0.
\end{equation}
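Indeed, with $\sigma=\mu_0=1$, setting $\epsilon=0$ in \eqref{nd} gives $\j^0=\cu \H^0$, and Ohm's law \eqref{oh} then reads
\begin{align*}
\cu \H^0=\i^0+\u^0\times \H^0,
\end{align*}
which is exactly \eqref{Ohm}; with this $\i^0$, the magnetic field equation in \eqref{ndd} is precisely Faraday's law $\p_t \H^0+\cu \i^0=0$.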
Before stating our main results, we recall the local existence of smooth solutions to the problem \eqref{nda}--\eqref{nde}.
Since the system \eqref{nda}--\eqref{ndd} can be written as a symmetric hyperbolic-parabolic system, the results in \cite{VH} imply that
\begin{prop} \label{Pa} Let $s> 7/2$ be an integer and
assume that the initial data $(p^0_0, \u^0_0,S^0_0,$ $ \H_0^0)$ satisfy
\begin{gather*}
p^0_0, \u^0_0, S^0_0, \H_0^0\in H^{s+1}(\mathbb{T}^3), \ \ \dv \H^0_0 =0,\nonumber\\
0<\bar p= \inf_{x\in \mathbb{T}^3}p^0_0(x)\leq p^0_0(x)\leq
\bar {\bar p}= \sup_{x\in \mathbb{T}^3}p^0_0(x)<+\infty,\\
0<\bar S= \inf_{x\in \mathbb{T}^3}S^0_0(x)\leq S^0_0(x)\leq
\bar {\bar S}= \sup_{x\in \mathbb{T}^3}S^0_0(x)<+\infty
\end{gather*}
for some positive constants $\bar p,\, \bar{\bar p},\, \bar S$, and $\bar{\bar S}$. Then there exist positive
constants $T_*$\, $($the maximal time interval, $ 0<T_*\leq +\infty )$ and $\hat p, \tilde{p}, \hat S, \tilde{S} $, such that the problem
\eqref{nda}--\eqref{nde} has a unique classical solution $(p^0,\u^0,S^0,\H^0)$ satisfying $\dv \H^0=0$ and
\begin{gather*}
p^0, \u^0, S^0 \in C^l([0,T_*),H^{s+1-l}(\mathbb{T}^3)), \ \H^0 \in C^l([0,T_*),H^{s+1-2l}(\mathbb{T}^3)), \ \ l=0,1;\ \
\\
0<\hat p= \inf_{(x,t)\in \mathbb{T}^3 \times [0,T_*)}p^0(x,t)\leq p^0(x,t)\leq
{\tilde p}= \sup_{(x,t)\in \mathbb{T}^3 \times [0,T_*)}p^0(x,t)<+\infty,\\
0<\hat S= \inf_{(x,t)\in \mathbb{T}^3 \times [0,T_*)}S^0(x,t)\leq S^0(x,t)\leq
{\tilde S}= \sup_{(x,t)\in \mathbb{T}^3 \times [0,T_*)}S^0(x,t)<+ \infty.
\end{gather*}
\end{prop}
The main result of this paper can be stated as follows.
\begin{thm}\label{th}
Let $s>7/2$ be an integer and $(p^0, \u^0,S^0, \H^0)$ the unique classical solution to the problem
\eqref{nda}--\eqref{nde} given in
Proposition \ref{Pa}.
Suppose
that the initial data $(p^\epsilon_0, \u^\epsilon_0,S_0^\epsilon, \i_0^\epsilon,
\H_0^\epsilon)$ satisfy
$$
p^\epsilon_0, \u^\epsilon_0,S_0^\epsilon, \i^\epsilon_0, \H^\epsilon_0\in H^{s}(\mathbb{T}^3),
\ \inf_{x\in \mathbb{T}^3}
p^\epsilon_0(x)>0, \ \inf_{x\in \mathbb{T}^3}
S^\epsilon_0(x)>0, \ \dv \H^\epsilon_0 =0,
$$
and
\begin{align}
& \Vert (p^\epsilon_0-p^0_0, \u^\epsilon_0-\u^0_0, S^\epsilon_0-S^0_0,
\H_0^\epsilon-\H_0^0) \Vert_{H^s(\mathbb{T}^3)}\nonumber\\
& \qquad \qquad\qquad\quad
+ \sqrt{\epsilon} \left\Vert \i^\epsilon_0- ( \cu\H^0_0
- \u^0_0\times\H^0_0 ) \right\Vert_{H^s(\mathbb{T}^3)} \leq L_0 {\epsilon} \label{ivda}
\end{align}
for some constant $L_0>0$. Then, for any $T_0\in (0,T_* )$, there exist
a constant $L>0$ and
a sufficiently small constant $\epsilon_0>0$ such that, for any $\epsilon\in
(0,\epsilon_0]$, the problem \eqref{nca}--\eqref{ncg} has a unique smooth solution $(p^\epsilon,
\u^\epsilon, S^\epsilon, \i^\epsilon,\H^\epsilon)$ on $[0,T_0]$ enjoying
\begin{align}\label{iivda}
& \Vert (p^\epsilon-p^0, \u^\epsilon-\u^0, S^\epsilon-S^0,\H^\epsilon-\H^0)(t)
\Vert_{H^s(\mathbb{T}^3)} \nonumber\\
& \qquad \quad + \sqrt{\epsilon}\left\Vert\left\{\i^\epsilon- ( \cu\H^0
- \u^0\times\H^0 )\right\}(t)\right\Vert_{H^s(\mathbb{T}^3)} \leq L {\epsilon}, \ \ t\in [0,T_0].
\end{align}
\end{thm}
We shall prove Theorem \ref{th} by adapting the elaborate nonlinear energy method inspired by
\cite{JL,JL2}. The key point of the proof is to derive the error system (see \eqref{error1}--\eqref{error4} below) and obtain
the uniform estimates in a fixed time interval independent of $\epsilon$.
As mentioned before, the zero dielectric constant limit to the complete magnetohydrodynamic
fluid system was studied in \cite{JL2}, where the viscosity and heat conductivity terms
in the complete electromagnetic fluid system play a crucial role in the derivation of the uniform estimates.
In our case, all diffusion terms disappear, and we shall make full use of the special structure of the system \eqref{nca}--\eqref{ncd}
to obtain the desired uniform estimates. A direct but crucial observation is that there is a damping term
$ \i^\epsilon-\i^0 $ in the electric field equations which
controls the terms involving $\i^\epsilon-\i^0$ in the momentum equations, the entropy equation,
and the electric field equations. In order to obtain the desired higher-order estimates for the error system, we shall
also modify some ideas developed in \cite{MS01,JJL4}, which is quite different from the isentropic
case \cite{JL} and the viscous non-isentropic case \cite{JL2}.
\begin{rem}
The inequality \eqref{iivda} implies that the sequences $(p^\epsilon, \u^\epsilon,S^\epsilon, \H^\epsilon)$
converge strongly to $(p^0,\u^0,S^0,\H^0)$ in $L^\infty(0,T; H^{s}(\mathbb{T}^3))$ and
$\i^\epsilon$ converges strongly to $\i^0$ in $L^\infty(0,T; H^{s}(\mathbb{T}^3))$ but with different convergence rates, where
$\i^0$ is defined by \eqref{Ohm}.
\end{rem}
\begin{rem}
For the local existence of solutions $(p^0,\u^0,S^0,\H^0)$ to the problem \eqref{nda}--\eqref{nde}, the
assumption that the initial data $(p^0_0,\u^0_0,S^0_0,\H^0_0)$ belong to
$H^s(\mathbb{T}^3)$, $s>7/2$, is enough. Here we have imposed a stronger regularity assumption in Proposition \ref{Pa} to
obtain more regular solutions, which are needed in the proof of Theorem \ref{th}.
The higher regularity assumption on the target equations allows
a simpler argument in this paper. To investigate the singular limit, another way is to obtain higher-order
uniform estimates directly; see, for example, \cite{MS01} on the zero Mach number limit
of the non-isentropic Euler equations.
\end{rem}
\begin{rem}
In this paper we consider only the periodic domain case; it would be more interesting to study the same problem in a spatial domain with boundary,
which will be our future study. We remark that in this case the boundary must be analyzed very carefully;
the interested reader can refer to \cite{A,B1,B2,B,S,Sc}, among others, on the zero Mach number limit of the compressible Euler equations,
and to \cite{Rub} on singular limits of zero Alfv\'{e}n number for the equations of magneto-fluid dynamics.
In \cite{B1,B2}, some pioneering new ideas are introduced, which can be applied to the convergence study of
the singular limit in the data space; see \cite{B1,Rub,B} for the details.
\end{rem}
\begin{rem}
We point out that the zero dielectric constant limit is a singular limit
and similar to the zero Mach number limit in some sense, see \cite{A,B,JJL3,JJL4,JJLX,KM1,MS01,S,Sc} and
the references cited therein.
\end{rem}
\begin{rem}
It is obvious that if we let $\sigma\rightarrow \infty$ in \eqref{nba}--\eqref{nbd}, we formally obtain
the well-known ideal non-isentropic magnetohydrodynamic equations. It would be interesting to
establish this limit rigorously.
\end{rem}
\medskip
Before ending this introduction, we give some notations and recall some basic facts which
will be frequently used throughout this paper.
(1) We denote by $\langle \cdot,\cdot\rangle$ the standard inner product in $L^2(\mathbb{T}^3)$
with $\langle f,f\rangle=\|f\|^2$, by
$H^k$ the standard Sobolev space $W^{k,2}$ with $\|\cdot\|_{k}$ being
the corresponding norm ($\|\cdot\|_{0}\equiv\|\cdot\|)$. The notation $\|(A_1,A_2, \dots,
A_k)\|$ means the summation of $\|A_i\|,i=1,\dots,k$,
and it also applies to other norms.
For the multi-index $\alpha = (\alpha_1, \alpha_2, \alpha_3)$, we
denote $\partial_x^\alpha =\partial^{\alpha_1}_{x_1}\partial^{\alpha_2}_{x_2}
\partial^{\alpha_3}_{x_3}$ and
$|\alpha|=|\alpha_1|+|\alpha_2|+|\alpha_3|$. For the integer $l$, the symbol $D^l_x$ denotes
the summation of all terms $\partial_x^\alpha$ with the multi-index $\alpha$ satisfying $|\alpha|=l$. We use $C_i$,
$\delta_i$, $K_i$, and $K$ to denote the constants which are independent of
$\epsilon$ and may change from line to line. We also omit the spatial domain $\mathbb{T}^3$
in integrals for convenience.
(2) We shall frequently use the following Moser-type calculus
inequalities (see \cite{KM1}):
\hskip 4mm (i)\ \ For $f,g\in H^s(\mathbb{T}^3)\cap L^\infty(\mathbb{T}^3)$ and $|\alpha|\leq
s$, $s>3/2$, it holds that
\begin{align}\label{ma}
\|\partial^\alpha_x(fg)\| \leq C_s(\|f\|_{L^\infty}\|D^s_x
g\| +\|g\|_{L^\infty}\|D^s_x f\|)
\end{align}
\hskip 4mm (ii)\ \ For $f\in H^s(\mathbb{T}^3), D_x^1 f\in L^\infty(\mathbb{T}^3), g\in H^{s-1}(\mathbb{T}^3)\cap
L^\infty(\mathbb{T}^3)$ and $|\alpha|\leq s$, $s>5/2$, it holds that
\begin{align}\label{mb}
\quad \ \ \|\partial^\alpha_x(fg)-f \partial^\alpha_xg\|\leq
C_s(\|D^1_x f\|_{L^\infty}\|D^{s-1}_x g\| +\|g\|_{L^\infty}\|D^s_xf\|).
\end{align}
(3) Let $s> 3/2$, $f\in C^s(\mathbb{T}^3)$, and $u\in H^s(\mathbb{T}^3)$, then for each multi-index $\alpha$, $1\leq |\alpha| \leq s$, we have
(\cite{Mo,KM1}):
\begin{align}\label{mo}
\|\partial^\alpha_x (f(u))\| \leq C(1+\|u\|_{L^\infty}^{|\alpha|-1})\|u\|_{|\alpha|};
\end{align}
moreover, if $f(0)=0$, then (\cite{Ho97})
\begin{align}\label{ho}
\|\partial^\alpha_x(f(u))\|\leq C( \|u\|_s)\|u\|_s.
\end{align}
This paper is organized as follows. In Section \ref{S2}, we utilize the primitive system \eqref{nca}--\eqref{nce} and the
target system \eqref{nda}--\eqref{ndd} to derive an error
system and state the local existence of solutions to the error system.
In Section \ref{S3} we give \emph{a priori} energy
estimates to the error system and present the proof of Theorem \ref{th}.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
\section{Derivation of an error system and local existence} \label{S2}
In this section we first derive an error system from the original
system \eqref{nca}--\eqref{nce} and the target equations
\eqref{nda}--\eqref{ndd}. Then we state the local existence of smooth solutions to
this error system.
Setting $P^\epsilon=p^\epsilon- p^0, \U^\epsilon=\u^\epsilon-\u^0, \Phi^\epsilon= S^\epsilon- S^0,
\F^\epsilon=\i^\epsilon-\i^0, \G^\epsilon=\H^\epsilon-\H^0$ and utilizing
the system
\eqref{nca}--\eqref{nce} and the system \eqref{nda}--\eqref{ndd} with \eqref{Ohm}, we
obtain that
\begin{align}
& a(\Phi^\epsilon+S^0, P^\epsilon+p^0) \{\partial_t P^\epsilon +(\U^\epsilon+ \u^0)\cdot \nabla P^\epsilon\}+\dv \U^\epsilon =f_1^\epsilon, \label{error1} \\
& r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \{\partial_t\U^\epsilon +(\U^\epsilon+\u^0)\cdot \nabla\U^\epsilon\}
+\nabla P^\epsilon=\mathbf{f}_2^\epsilon, \label{error2}\\
& b(\Phi^\epsilon+S^0, P^\epsilon+p^0) \{\partial_t\Phi^\epsilon +(\U^\epsilon+\u^0)\cdot \nabla\Phi^\epsilon\}=\mathbf{f}_3^\epsilon, \label{error22}\\
& \epsilon \partial_t \F^\epsilon - \cu \G^\epsilon
=\mathbf{f}_4^\epsilon, \label{error3}\\
& \partial_t \G^\epsilon+\cu \F^\epsilon =0,\quad \dv \G^\epsilon=0, \label{error4}
\end{align}
where $f_1^\epsilon$, $\mathbf{f}_2^\epsilon$, $\mathbf{f}_3^\epsilon$, and $\mathbf{f}_4^\epsilon$ are defined as follows:
\begin{align*}
f_1^\epsilon =& -[a(\Phi^\epsilon+S^0, P^\epsilon+p^0)-a(S^0,p^0)][\partial_t p^0 + \u^0\cdot\nabla p^0]\nonumber\\
& -a(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon\cdot \nabla p^0), \\
\mathbf{f}_2^\epsilon =& -[r(\Phi^\epsilon+S^0, P^\epsilon+p^0)-r(S^0,p^0)][\partial_t \u^0 +\u^0\cdot\nabla\u^0]\nonumber\\
& -r(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon\cdot \nabla\u^0)\nonumber\\
& - \cu \H^0\times \H^0
+[\F^\epsilon+\u^0\times \G^\epsilon+\U^\epsilon\times \H^0]\times \H^0 \nonumber\\
& +[\F^\epsilon+\u^0\times \G^\epsilon+\U^\epsilon\times\H^0]\times \G^\epsilon+ (\U^\epsilon\times \G^\epsilon)\times (\G^\epsilon+\H^0),\\
\mathbf{f}_3^\epsilon =&-[b(\Phi^\epsilon+S^0, P^\epsilon+p^0)-b(S^0,p^0)][\partial_t S^0 +\u^0\cdot\nabla S^0]\nonumber\\
& -b(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon\cdot \nabla S^0) \nonumber\\
& + |\F^\epsilon+\U^\epsilon\times \G^\epsilon|^2+ |\u^0\times \G^\epsilon+\U^\epsilon\times \H^0|^2\nonumber\\
& + {2} (\F^\epsilon+\U^\epsilon\times \G^\epsilon)\cdot
[\cu \H^0+\u^0\times \G^\epsilon+\U^\epsilon\times \H^0]\nonumber\\
& + {2} \cu \H^0\cdot (\u^0\times \G^\epsilon+\U^\epsilon\times \H^0),\\
\mathbf{f}_4^\epsilon= &- [\F^\epsilon+\U^\epsilon\times \H^0+\u^0\times \G^\epsilon]- \U^\epsilon\times \G^\epsilon\nonumber\\
& - {\epsilon} \partial_t \cu \H^0+\epsilon\partial_t(\u^0\times \H^0).
\end{align*}
The system \eqref{error1}--\eqref{error4} is supplemented with the initial data
\begin{align}\label{error5}
& ( P^\epsilon,\U^\epsilon,\Phi^\epsilon,\F^\epsilon,\G^\epsilon)|_{t=0}=
( P^\epsilon_0,\U^\epsilon_0,\Phi^\epsilon_0,\F^\epsilon_0,\G^\epsilon_0)\nonumber\\
& \qquad := \big( p^\epsilon_0- p^0_0, \u_0^\epsilon-\u^0_0, S^\epsilon_0- S^0_0,
\i^\epsilon_0- ( \cu\H^0_0 - \u^0_0\times\H^0_0 ),\H^\epsilon_0-\H^0_0\big).
\end{align}
Denote
\begin{align*}
&\W^\epsilon=\left(\begin{array}{c}
P^\epsilon \\
\U^\epsilon\\
\Phi^\epsilon\\
\F^\epsilon \\
\G^\epsilon
\end{array}\right),
\ \
\W^\epsilon_0=\left(\begin{array}{c}
P^\epsilon_0 \\
\U^\epsilon_0\\
\Phi^\epsilon_0\\
\F^\epsilon_0\\
\G^\epsilon_0\\
\end{array}\right), \ \ \s^\epsilon(\W^\epsilon)=\left(\begin{array}{c}
f^\epsilon_1\\
\mathbf{f}^\epsilon_2\\
\mathbf{f}^\epsilon_3\\
\mathbf{f}^\epsilon_4\\
\mathbf{0}
\end{array}
\right),\\
& \D^\epsilon=\left(\begin{array}{cc}
\D^\epsilon_1 & \mathbf{0} \\
\mathbf{0} & \left(\begin{array}{cc}
\epsilon \mathbf{I}_{3} & \mathbf{0}\\
\mathbf{0} & \mathbf{I}_{3}
\end{array}
\right)
\end{array}\right), \\
& \D^\epsilon_1= \left(\begin{array}{ccc}
a(\Phi^\epsilon+S^0, P^\epsilon+p^0) & 0 & 0 \\
0 & r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \mathbf{I}_{3} & \mathbf{0}\\
0 & \mathbf{0} & b(\Phi^\epsilon+S^0, P^\epsilon+p^0)
\end{array}\right),\\
& \A^\epsilon_i=\left(\begin{array}{cc}
\left(\begin{array}{ccc}
(\U^\epsilon+\u^0)_i & e_i & 0 \\
e^\mathrm{T}_i & (\U^\epsilon+\u^0)_i \mathbf{I}_{3} & \mathbf{0}\\
0 & \mathbf{0} & (\U^\epsilon+\u^0)_i
\end{array}\right) & \mathbf{0} \\
\mathbf{0} & \left(\begin{array}{cc}
\mathbf{0}& B_{i} \\
B_{i}^\mathrm{T} & \mathbf{0}
\end{array}
\right)
\end{array}\right),
\end{align*}
where
$(e_1, e_2, e_3)$ is the canonical basis of $\mathbb{R}^3$, $\mathbf{I}_{d}$ ($d = 3,5$)
is the $d\times d$ unit matrix, $y_i$ denotes the $i$-th component of $y\in \mathbb{ R}^3$, and
\begin{align*}
B_1 =\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 1\\
0 & -1 & 0
\end{array}
\right), \quad
B_2 =\left(\begin{array}{ccc}
0 & 0 & -1 \\
0 & 0 & 0\\
1 & 0 & 0
\end{array}
\right), \quad
B_3 =\left(\begin{array}{ccc}
0 & 1 & 0 \\
-1 & 0 & 0\\
0 & 0 & 0
\end{array}
\right).
\end{align*}
Using these notations we can rewrite the problem
\eqref{error1}--\eqref{error5} as
\begin{align}\label{error6}
\left\{\begin{aligned}
& \D^\epsilon \partial_t \W^\epsilon +\sum^{3}_{i=1}\A^\epsilon_i \W^\epsilon_{x_i}
=\s^\epsilon(\W^\epsilon),\\
& \W^\epsilon|_{t=0}= \W^\epsilon_0.
\end{aligned} \right.
\end{align}
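A direct computation shows that $B_i y=y\times e_i$ for $y\in \mathbb{R}^3$, so that
\begin{align*}
\sum^{3}_{i=1}B_i\partial_{x_i}\G^\epsilon=-\cu \G^\epsilon,\qquad
\sum^{3}_{i=1}B^\mathrm{T}_i\partial_{x_i}\F^\epsilon=\cu \F^\epsilon,
\end{align*}
and hence the last six rows of the system in \eqref{error6} coincide with \eqref{error3}--\eqref{error4}; moreover, each $\A^\epsilon_i$ is symmetric and $\D^\epsilon$ is positive definite for $\epsilon>0$ and $\delta$ small.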
Obviously, the system in
\eqref{error6} is a quasilinear symmetric
hyperbolic one. Thus, we can apply the result of Majda \cite{M84} to obtain the following local existence of
smooth solutions to the problem \eqref{error6}.
\begin{prop} \label{Pb}
Let $s>7/2 $ be an integer and $( p^0_0, \u^0_0, S^0_0, \H_0^0)$ satisfy the conditions in Proposition \ref{Pa}.
Assume that the initial data $( P^\epsilon_0, \U^\epsilon_0, \Phi^\epsilon_0, \F_0^\epsilon, \G_0^\epsilon)$ satisfy
\begin{gather*}
P^\epsilon_0, \U^\epsilon_0,\Phi^\epsilon_0, \F^\epsilon_0, \G^\epsilon_0\in
H^s(\mathbb{T}^3), \quad \dv \G^\epsilon_0 =0,\\
\inf_{x\in \mathbb{T}^3} P^\epsilon_0(x)>0,\quad \inf_{x\in \mathbb{T}^3}\Phi^\epsilon_0(x)>0,\quad
\|\Phi^\epsilon_0\|_s\leq \delta,\quad
\| P^\epsilon_0\|_s\leq \delta
\end{gather*}
for some small constant $\delta>0$.
Then there exist positive constants $T^\epsilon\,(0<T^\epsilon\leq +\infty)$ and $K$, such that the problem
\eqref{error6} has a unique classical solution $( P^\epsilon,
\U^\epsilon, \Phi^\epsilon, \linebreak \F^\epsilon, \G^\epsilon)$ satisfying
\begin{gather*}
P^\epsilon,
\U^\epsilon, \Phi^\epsilon, \F^\epsilon, \G^\epsilon \in C^l([0,T^\epsilon),H^{s-l}(\mathbb{T}^3)),\ l=0,1; \, \, \, \, \dv \G^\epsilon =0; \\
\| P^\epsilon(t)\|_{s}\leq
K\delta , \quad \|\Phi^\epsilon(t)\|_{s}\leq
K\delta, \ \ \ t\in [0,T^\epsilon).
\end{gather*}
\end{prop}
Notice that for smooth solutions, the non-isentropic Euler-Maxwell
system \eqref{nca}--\eqref{nce} with initial data \eqref{ncg} is equivalent to
\eqref{error1}--\eqref{error5} or \eqref{error6} on $[0,T]$. Therefore, in order to obtain the convergence of
the Euler-Maxwell system \eqref{nca}--\eqref{nce} to the
compressible magnetohydrodynamic equations \eqref{nda}--\eqref{ndd},
we need to establish estimates of the solution to the error system \eqref{error1}--\eqref{error5} on some time interval $[0,T]$ that are uniform with respect
to the parameter $\epsilon$.
We shall present these estimates in the next section.
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\setcounter{equation}{0}
\section{Uniform energy estimates and proof of Theorem \ref{th}} \label{S3}
In this section we shall derive estimates of the solution to the problem \eqref{error1}--\eqref{error5}, uniform with respect to the parameter
$\epsilon$, and
justify rigorously the convergence of the non-isentropic Euler-Maxwell system \eqref{nca}--\eqref{nce} to the
compressible magnetohydrodynamic equations \eqref{nda}--\eqref{ndd}.
Here we shall make full use of the structure of the system \eqref{error1}--\eqref{error4} and
Proposition \ref{Pb}, and adapt some techniques developed in \cite{JL,JL2,JJL4,MS01}.
We first establish the convergence rate for the error system \eqref{error1}--\eqref{error3} by obtaining \emph{a priori} estimates
uniform in $\epsilon$. For simplicity of presentation, we define
\begin{align*}
&\|\mathcal{E}^\epsilon(t)\|^2_s\ = \|( P^\epsilon,\U^\epsilon,\Phi^\epsilon, \G^\epsilon)(t)\|^2_{s},\\
&\v \mathcal{E}^\epsilon(t)\v ^2_s =\|\mathcal{E}^\epsilon(t)\| ^2_s+ \epsilon \Vert \F^\epsilon \Vert^2_{s},\\
&\v\mathcal{E}^\epsilon\v_{s,T}\ =\sup_{0<t<T}\v\mathcal{E}^\epsilon(t)\v_s.
\end{align*}
The crucial estimate of our paper is the following uniform-in-$\epsilon$ result on the error system
\eqref{error1}--\eqref{error4}.
\begin{prop}\label{P31}
Let $s>7/2$ be an integer and assume that the initial data $( P^\epsilon_0,\U^\epsilon_0,\Phi^\epsilon_0, \F^\epsilon_0,\G^\epsilon_0)$ satisfy
\begin{align}\label{ww}
\|( P^\epsilon_0,\U^\epsilon_0,\Phi^\epsilon_0, \G^\epsilon_0) \|^2_{s}+ \epsilon\Vert
\F^\epsilon_0 \Vert^2_{s} =\v \mathcal{E}^\epsilon(t=0)\v _{s}^2\leq M_0{\epsilon}^2
\end{align}
for sufficiently small $\epsilon$ and some constant $M_0>0$ independent of $\epsilon$.
Then, for any $T_0\in (0, T_*)$, there are
two constants $ M_1 > 0$ and $\epsilon_1 > 0$ depending only on $T_0$, such that
for all $\epsilon\in (0,\epsilon_1]$, it holds that $T^\epsilon\geq T_0$
and the solution $( P^\epsilon, \U^\epsilon,\Phi^\epsilon, \F^\epsilon, \G^\epsilon)$ of the problem
\eqref{error1}--\eqref{error5}, well-defined in $[0, T_0]$, enjoys
\begin{align}\label{www}
\v \mathcal{E}^\epsilon\v _{s,T_0} \leq M_1 {\epsilon}.
\end{align}
\end{prop}
In order to prove Proposition \ref{P31}, we first derive the following \emph{a priori}
estimates on $[0,T]$ with $T \equiv T_\epsilon = \min\{ T_1, T^\epsilon \}$ for some given $\hat T<1$ and any $T_1<\hat T$ independent of $\epsilon$.
\medskip
\subsection{$L^2$ estimates}
\begin{lem}\label{La}
Under the assumptions in Proposition \ref{P31}, it holds that for all $0<t<T $ and sufficiently small $\epsilon$,
\begin{align}\label{L2}
&
\|( P^\epsilon,\U^\epsilon,\Phi^\epsilon, \G^\epsilon)(t)\|^2+\epsilon\| \F^\epsilon(t)\| ^2 +\frac32\int^t_0 \|\F^\epsilon(\tau)\|^2\dif \tau\nonumber\\
\leq &
C\Big\{ \|( P^\epsilon,\U^\epsilon,\Phi^\epsilon, \G^\epsilon)\|^2+\epsilon\| \F^\epsilon\| ^2\Big\}\Big|_{t=0}+ C_T\epsilon^2
\nonumber\\
& +\int^t_0\big\{\eta_2\|\F^\epsilon(\tau)\|^2+\eta_3\|\F^\epsilon(\tau)\|^4+\big[ \eta_1\|\F^\epsilon(\tau)\|^2_2 +C(1+\|\mathcal{E}^\epsilon(\tau)\|^2_s \nonumber\\
&+\|\mathcal{E}^\epsilon(\tau)\|^4_s+\|\mathcal{E}^\epsilon(\tau)\|^8_s)\big]
\| (P^\epsilon,\U^\epsilon,\Phi^\epsilon,\G^\epsilon )(\tau)\|^2\big\}\dif \tau,
\end{align}
where $\eta_1$, $\eta_2$, and $\eta_3$ are sufficiently small positive constants.
\end{lem}
\begin{proof}
Multiplying \eqref{error1} by $P^\epsilon$, \eqref{error2} by $\U^\epsilon$, \eqref{error22} by $\Phi^\epsilon$,
and integrating them over $\mathbb{T}^3$ respectively, we obtain that
\begin{align}\label{LU2}
&\langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0) \partial_t P^\epsilon,P^\epsilon\rangle
+\langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \partial_t \U^\epsilon, \U^\epsilon\rangle\nonumber\\
& +\langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0)\partial_t \Phi^\epsilon,\Phi^\epsilon\rangle
\nonumber\\
=
& - \left \langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla P^\epsilon , P^\epsilon\right\rangle\nonumber\\
& - \left \langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla \U^\epsilon , \U^\epsilon\right\rangle \nonumber\\
& - \left \langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla \Phi^\epsilon , \Phi^\epsilon\right\rangle\nonumber\\
& + \left \langle f^\epsilon_1,P^\epsilon\right\rangle + \left \langle \mathbf{f}^\epsilon_2,\U^\epsilon\right\rangle
+ \left \langle \mathbf{f}^\epsilon_3,\Phi^\epsilon\right\rangle .
\end{align}
Thanks to the positivity and smoothness of $a(\Phi^\epsilon+S^0, P^\epsilon+p^0)$, $r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$, and $b(\Phi^\epsilon+S^0, P^\epsilon+p^0)$,
Proposition \ref{Pb}, the regularity of $( p^0,\u^0,S^0,\H^0)$,
the Cauchy-Schwarz inequality, Sobolev's embedding theorem, and \eqref{mo},
we get directly from \eqref{error1}, \eqref{error2}, and \eqref{error22}
that
\begin{align}
&\|(\partial_{t}a(\Phi^\epsilon+S^0, P^\epsilon+p^0), \partial_{t}r(\Phi^\epsilon+S^0, P^\epsilon+p^0),\partial_{t}b(\Phi^\epsilon+S^0, P^\epsilon+p^0) )\|_{L^\infty}\nonumber\\
\leq&
\|(\partial_{t}a(\Phi^\epsilon+S^0, P^\epsilon+p^0),\partial_{t}r(\Phi^\epsilon+S^0, P^\epsilon+p^0),\partial_{t}b(\Phi^\epsilon+S^0, P^\epsilon+p^0))\|_{2}\nonumber\\
\leq &
\eta_1\|\F^\epsilon\|^2_2+ C(\|\mathcal{E}^\epsilon(t)\|^4_s+\|\mathcal{E}^\epsilon(t)\|^2_s+1) \label{wb}
\end{align}
for any $\eta_1>0$ and
\begin{align}
& \|(\nabla a(\Phi^\epsilon+S^0, P^\epsilon+p^0), \nabla r(\Phi^\epsilon+S^0, P^\epsilon+p^0),\nabla b(\Phi^\epsilon+S^0, P^\epsilon+p^0))\|_{L^\infty}\nonumber\\
\leq & C (1+\|\mathcal{E}^\epsilon(t)\|_s+\|\mathcal{E}^\epsilon(t)\|^2_s). \label{wc}
\end{align}
Thus, the first three terms
on the right-hand side of \eqref{LU2} can be estimated as follows:
\begin{align}
& \left| \langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla P^\epsilon , P^\epsilon\rangle\right|\nonumber\\
& + \left| \langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla \U^\epsilon , \U^\epsilon\rangle\right|\nonumber\\
&+ \left| \langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0)(\U^\epsilon +\u^0) \cdot \nabla \Phi^\epsilon , \Phi^\epsilon\rangle\right|\nonumber\\
\leq & C (1+\|\mathcal{E}^\epsilon(t)\|_s+\|\mathcal{E}^\epsilon(t)\|^2_s+\|\mathcal{E}^\epsilon(t)\|^4_s)
(\| P^\epsilon\|^2+\|\U^\epsilon\|^2+ \|\Phi^\epsilon\|^2). \label{wcc}
\end{align}
By the definition of $f_1^\epsilon$, $\mathbf{f}^\epsilon_2$, and $\mathbf{f}^\epsilon_3$, the regularity of $( p^0,\u^0,S^0,\H^0)$, and
the Cauchy-Schwarz inequality, we have
\begin{align}
& \left \langle f^\epsilon_1,P^\epsilon\right\rangle + \left \langle \mathbf{f}^\epsilon_2,\U^\epsilon\right\rangle
+ \left \langle \mathbf{f}^\epsilon_3,\Phi^\epsilon\right\rangle\nonumber\\
\leq & C\epsilon^2 +\eta_2\|\F^\epsilon\|^2+\eta_3\|\F^\epsilon\|^4\nonumber\\
& +C(1+\|\mathcal{E}^\epsilon(t)\|^2_s+\|\mathcal{E}^\epsilon(t)\|^4_s+\|\mathcal{E}^\epsilon(t)\|^8_s)
(\| P^\epsilon\|^2+\|\U^\epsilon\|^2+ \|\Phi^\epsilon\|^2) \label{wd}
\end{align}
for any $\eta_2>0$ and $\eta_3>0$.
Putting \eqref{wcc} and \eqref{wd} into \eqref{LU2} and noticing \eqref{wb}, we arrive at
\begin{align}
& \langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0) P^\epsilon, P^\epsilon\rangle+
\langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \U^\epsilon, \U^\epsilon\rangle\nonumber\\
&+ \langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0) \Phi^\epsilon, \Phi^\epsilon\rangle\nonumber\\
\leq & \big\{ \langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0) P^\epsilon, P^\epsilon\rangle+
\langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \U^\epsilon, \U^\epsilon\rangle\nonumber\\
&+ \langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0) \Phi^\epsilon, \Phi^\epsilon\rangle\big\}\big|_{t=0} +\int^t_0\Big\{ C\epsilon^2 +\eta_2\|\F^\epsilon\|^2+\eta_3\|\F^\epsilon\|^4 \nonumber\\
& +\big[ \eta_1\|\F^\epsilon\|^2_2+C(1+\|\mathcal{E}^\epsilon\|^2_s+\|\mathcal{E}^\epsilon\|^4_s+\|\mathcal{E}^\epsilon\|^8_s)\big]
\| (P^\epsilon,\U^\epsilon,\Phi^\epsilon)\|^2\Big\}(\tau)\dif \tau. \label{we}
\end{align}
Moreover, we have
\begin{align} \label{wf}
& \|P^\epsilon\|^2+\|\U^\epsilon\|^2+\|\Phi^\epsilon\|^2\nonumber\\
\leq&
\|(a(\Phi^\epsilon+S^0, P^\epsilon+p^0))^{-1}\|_{L^\infty}\langle a (\Phi^\epsilon+S^0, P^\epsilon+p^0)P^\epsilon,
P^\epsilon\rangle\nonumber\\
&
+ \|(r(\Phi^\epsilon+S^0, P^\epsilon+p^0))^{-1}\|_{L^\infty}\langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \U^\epsilon, \U^\epsilon\rangle\nonumber\\
&+ \|(b(\Phi^\epsilon+S^0, P^\epsilon+p^0))^{-1}\|_{L^\infty}\langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0) \Phi^\epsilon, \Phi^\epsilon\rangle\nonumber\\
\leq &C_0 \big\{\langle a(\Phi^\epsilon+S^0, P^\epsilon+p^0) P^\epsilon, P^\epsilon\rangle+
\langle r(\Phi^\epsilon+S^0, P^\epsilon+p^0) \U^\epsilon, \U^\epsilon\rangle\nonumber\\
&+ \langle b(\Phi^\epsilon+S^0, P^\epsilon+p^0) \Phi^\epsilon, \Phi^\epsilon\rangle\big\},
\end{align}
since $a(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and $r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ are uniformly bounded away from
zero.
Multiplying \eqref{error3} by $\F^\epsilon$ and \eqref{error4} by $\G^\epsilon$ respectively,
and integrating them over $\mathbb{T}^3$, we find that
\begin{align}
& \frac12 \frac{\dif}{\dif t}(\| \sqrt{\epsilon }\,\F^\epsilon\| ^2+\| \G^\epsilon\| ^2)
+\int (\cu \F^\epsilon\cdot \G^\epsilon-\cu \G^\epsilon\cdot \F^\epsilon)\dif x
+ \| \F^\epsilon\| ^2
= \left\langle \mathbf{f}_4^\epsilon , \F^\epsilon\right\rangle.\label{L2M}
\end{align}
By the regularity of $ (\u^0,\H^0)$, Cauchy-Schwarz's inequality, and Sobolev's imbedding,
the terms on the right-hand side of \eqref{L2M} can be bounded by
\begin{align*}
\frac{1}{4} \| \F^\epsilon\| ^2+C(\| \mathcal{E}^\epsilon(t)\| _{s}^2+1)\| (\U^\epsilon,\G^\epsilon)\| ^2+C\epsilon^2.
\end{align*}
From the fact that
\begin{align*}
\int (\cu \F^\epsilon\cdot \G^\epsilon-\cu \G^\epsilon\cdot \F^\epsilon)\dif x=\int \dv(\F^\epsilon\times \G^\epsilon)\dif x =0,
\end{align*}
we get
\begin{align}
& \frac12 \frac{\dif}{\dif t}(\|\sqrt{\epsilon }\,\F^\epsilon\| ^2+\| \G^\epsilon\| ^2)
+\frac{3 }{4}\| \F^\epsilon\| ^2\nonumber\\
& \qquad \qquad \qquad \leq
C(\| \mathcal{E}^\epsilon(t)\| _{s}^2+1)\| (\U^\epsilon,\G^\epsilon)\| ^2+C\epsilon^2. \label{L2M2}
\end{align}
Note that here we have used the special structure of \eqref{error3} and \eqref{error4}.
Thus, we can easily obtain \eqref{L2} by integrating \eqref{L2M} and \eqref{L2M2} over $[0,T]$ and then combining the result with \eqref{we} and \eqref{wf}.
\end{proof}
\medskip
\subsection{Higher order estimates on $\Phi^\epsilon, \F^\epsilon,$ and $\G^\epsilon$}
In order to close the estimate \eqref{L2}, we need to derive the higher order estimates of the system \eqref{error1}--\eqref{error4}. We first consider
the estimates on $\Phi^\epsilon, \F^\epsilon,$ and $\G^\epsilon$.
\begin{lem}\label{LHa}
Let the assumptions in Proposition \ref{P31} hold and the multi-index $\alpha$ satisfy $1\leq |\alpha|\leq s$. Then,
for all $0<t<T $ and sufficiently small $\epsilon$, we have
\begin{align}\label{H2a}
&\|(\partial^\alpha_x \Phi^\epsilon,\partial^\alpha_x\G^\epsilon)\|^2 +\epsilon\|\partial^\alpha_x \F^\epsilon\|^2
+ \frac{3}{2} \int^t_0
\| \pa\F^\epsilon(\tau)\| ^2\dif\tau\nonumber\\
\leq & \big\{\|(\partial^\alpha_x \Phi^\epsilon,\partial^\alpha_x\G^\epsilon)\|^2 +\epsilon\|\partial^\alpha_x \F^\epsilon\|^2\big\} (t=0) +C_T\epsilon^2\nonumber\\
& + C\int^t_0\Big
\{\gamma_1 \| \F^\epsilon\| ^4_{s} +\left(\gamma_2+\gamma_3\right)\| \F^\epsilon\| ^2_{s-1}+\|\mathcal{E}\|^{2}_s+\|\mathcal{E}\|^{2s}_s\nonumber\\
& + {C}(1+\|\mathcal{E}\|^{2(s+1)}_s)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2
\Big\}(\tau)\dif \tau,
\end{align}
where $\gamma_1$, $\gamma_2$, and $\gamma_3$ are sufficiently small positive constants.
\end{lem}
\begin{proof}
Dividing \eqref{error22} by $b(\Phi^\epsilon+S^0, P^\epsilon+p^0)$, applying operator $\partial^\alpha_x\, (1\leq|\alpha|\leq s)$ to the resulting equation,
multiplying
by $ \partial^\alpha_x \Phi^\epsilon$, and integrating over $\mathbb{T}^3$, we obtain that
\begin{align}\label{zn1}
\frac12\frac{\dif}{\dif t}\left\langle \pa \Phi ^\epsilon, \pa \Phi^\epsilon \right\rangle
= & -\left\langle \pa((\U^\epsilon+\u^0)\cdot\nabla \Phi^\epsilon),\pa \Phi^\epsilon\right\rangle\nonumber\\
&+\left\langle \partial^\alpha_x\left\{\frac{\mathbf{f}^\epsilon_3}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\},\pa \Phi^\epsilon \right\rangle.
\end{align}
Now we bound the terms on the right-hand side of \eqref{zn1}. By the regularity of $\u^0$,
Cauchy-Schwarz's inequality, and Sobolev's imbedding, we see that
\begin{align}\label{zn2}
& \ \ \ \ \langle \partial^\alpha_x([(\U^\epsilon+\u^0)\cdot \nabla] \Phi^\epsilon),\partial^\alpha_x \Phi^\epsilon\rangle\nonumber\\
& = \langle [(\U^\epsilon+\u^0)\cdot \nabla]\partial^\alpha_x \Phi^\epsilon , \partial^\alpha_x \Phi^\epsilon\rangle\
+\big\langle \mathcal{H}^{(1)},\partial^\alpha_x \Phi^\epsilon \big\rangle\nonumber\\
& = -\frac12 \langle \dv(\U^\epsilon+\u^0) \partial^\alpha_x \Phi^\epsilon , \partial^\alpha_x \Phi^\epsilon \rangle\
+\big\langle \mathcal{H}^{(1)},\partial^\alpha_x \Phi^\epsilon \big \rangle\nonumber\\
& \leq C(\| \mathcal{E}^\epsilon(t)\| _{s}+1)\| \partial^\alpha_x \Phi^\epsilon\| ^2 +\| \mathcal{H}^{(1)}\| ^2,
\end{align}
where the commutator
\begin{align*}
\mathcal{H}^{(1)} :=\partial^\alpha_x([(\U^\epsilon+\u^0)\cdot \nabla] \Phi^\epsilon)-[(\U^\epsilon+\u^0)\cdot \nabla]\partial^\alpha_x \Phi^\epsilon .
\end{align*}
We use the Moser-type and Cauchy-Schwarz's inequalities, the regularity of $\u^0$, and Sobolev's imbedding to infer that
\begin{align}\label{znc}
\big\|\mathcal{H}^{(1)} \big\| &\leq C( \| D_x^1(\U^\epsilon+\u^0)\| _{L^\infty}\| D_x^s \Phi^\epsilon\|
+\| D_x^1 \Phi^\epsilon\| _{L^\infty}\| D^{s-1}_x(\U^\epsilon+\u^0)\|) \nonumber\\
& \leq C\| \mathcal{E}^\epsilon(t)\| _{s}^2+C\| \mathcal{E}^\epsilon(t)\| _{s}.
\end{align}
By the definition of $\mathbf{f}^\epsilon_3$, the last term in \eqref{zn1} can be rewritten as
\begin{align}\label{HT1}
&\left\langle \partial^\alpha_x\left\{\frac{\mathbf{f}^\epsilon_3}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\},\pa \Phi^\epsilon \right\rangle\nonumber\\
= & -\left\langle \partial^\alpha_x\left\{\frac{[b(\Phi^\epsilon+S^0, P^\epsilon+p^0)-b(S^0,p^0)][\partial_t S^0 +\u^0\cdot\nabla S^0]}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}
\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
& -\left\langle \partial^\alpha_x (\U^\epsilon\cdot \nabla S^0) ,\pa\Phi^\epsilon\right\rangle
+\left\langle\pa\left\{ \frac{|\F^\epsilon+\U^\epsilon\times \G^\epsilon|^2}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
& +\left\langle\pa\left\{ \frac{|\u^0\times \G^\epsilon+\U^\epsilon\times \H^0|^2}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
& + \left\langle\pa\left\{\frac{2\F^\epsilon}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\cdot
[\cu \H^0+\u^0\times \G^\epsilon+\U^\epsilon\times \H^0]\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
& + \left\langle\pa\left\{\frac{2(\U^\epsilon\times \G^\epsilon)}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\cdot
[\cu \H^0+\u^0\times \G^\epsilon+\U^\epsilon\times \H^0]\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
& + \left\langle\pa\left\{\frac{2}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\cu \H^0\cdot (\u^0\times \G^\epsilon+\U^\epsilon\times \H^0)\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
: = & \sum^{7}_{i=1}\mathcal{I}^{(i)}.
\end{align}
We have to bound the terms on the right-hand side of \eqref{HT1}.
By the regularity of $ ( S^0,p^0,\u^0)$, the positivity of $b(\Phi^\epsilon+S^0, P^\epsilon+p^0)$, \eqref{ho}, and Cauchy-Schwarz's inequality, the
term $\mathcal{I}^{(1)}$ can be bounded as follows
\begin{align}\label{ht8}
\big|\mathcal{I}^{(1)} \big| \leq C(\|\mathcal{E}^\epsilon(t)\|_{s}^{2s} +1)
( \|\partial^\alpha_x \Phi^\epsilon\|^{2} +\|\partial^\alpha_x P^\epsilon\|^2).
\end{align}
Similarly, the term $\mathcal{I}^{(2)}$ can be controlled by
\begin{align}\label{ht88}
\big| \mathcal{I}^{(2)} \big| \leq C(\|\U^\epsilon(t)\|_{s}^2 + \|\partial^\alpha_x\Phi^\epsilon\|^2).
\end{align}
For the term $\mathcal{I}^{(3)}$, we rewrite it as
\begin{align*}
\mathcal{I}^{(3)} &= \left\langle\pa\left\{ \frac{1}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}|\F^\epsilon+\U^\epsilon\times \G^\epsilon|^2\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
&= \left\langle\pa\left\{ \frac{1}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}|\F^\epsilon|^2\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
&\ \ \ \ + \left\langle\pa\left\{ \frac{2}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\F^\epsilon\cdot(\U^\epsilon\times \G^\epsilon)\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
&\ \ \ \ + \left\langle\pa\left\{ \frac{1}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)} |\U^\epsilon\times \G^\epsilon|^2\right\},\pa\Phi^\epsilon\right\rangle\nonumber\\
&: =\mathcal{I}^{(3_1)}+\mathcal{I}^{(3_2)}+\mathcal{I}^{(3_3)}.
\end{align*}
By Cauchy-Schwarz's inequality and Sobolev's imbedding, the term $\mathcal{I}^{(3_1)}$ can be bounded by
\begin{align}\label{ht111}
\mathcal{I}^{(3_1)} =& \left\langle \frac{1}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\pa\left(|\F^\epsilon|^2\right),\pa\Phi^\epsilon\right\rangle\nonumber\\
& +\sum_{\beta \leq \alpha,|\beta|<|\alpha|}
\left\langle\partial_x^{\alpha-\beta}\left( \frac{1}{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)\partial_x^{\beta}(|\F^\epsilon|^2),\pa\Phi^\epsilon\right\rangle\nonumber\\
\leq & \gamma_1 \|\F^\epsilon\|^4_s+C_{\gamma_1}\|\pa\Phi^\epsilon\|^2(1+\|\mathcal{E}(t)\|^{2(s+1)}_s)
\end{align}
for any $\gamma_1>0$. For the term $\mathcal{I}^{(3_2)}$, by the positivity of $b(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and
Sobolev's imbedding, we have
\begin{align}\label{ht12}
\mathcal{I}^{(3_2)}& = 2\left\langle\partial^\alpha_x \F^\epsilon\cdot
\frac{\U^\epsilon\times\G^\epsilon }{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)},
\partial^\alpha_x\Phi^\epsilon \right\rangle
+ 2\big\langle \mathcal{H}^{(2)},
\partial^\alpha_x\Phi^\epsilon
\big\rangle \nonumber\\
& \leq \frac{1}{16} \| \partial^\alpha_x \F^\epsilon\| ^2+
C\| \mathcal{E}^\epsilon(t)\| _{s}^2
\| \partial^\alpha_x\U^\epsilon\| ^2+2 \big\langle \mathcal{H}^{(2)}, \partial^\alpha_x\Phi^\epsilon
\big\rangle,
\end{align}
where the commutator
\begin{align*}
\mathcal{H}^{(2)}:=\partial^\alpha_x\left\{
\F^\epsilon\cdot
\frac{\U^\epsilon\times\G^\epsilon }{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\}-\partial^\alpha_x \F^\epsilon\cdot
\frac{\U^\epsilon\times\G^\epsilon }{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}.
\end{align*}
By Cauchy-Schwarz's and the Moser-type inequalities, we obtain that
\begin{align}\label{ht13}
& \ \ \ 2\big|\big\langle\mathcal{H}^{(2)},
\partial^\alpha_x\Phi^\epsilon\big\rangle\big| \leq 2\| \mathcal{H}^{(2)}\| \cdot \| \partial^\alpha_x\Phi^\epsilon \| \nonumber\\
& \leq C \bigg[\left\Vert D_x^1\left(\frac{\U^\epsilon\times\G^\epsilon }{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)\right\Vert_{L^\infty}\| \F^\epsilon\| _{s-1}
\nonumber\\
&\ \ \ \ +\| \F^\epsilon\| _{L^\infty}\left\Vert\frac{\U^\epsilon\times\G^\epsilon }{b(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right\Vert_{s}\bigg]\| \partial^\alpha_x\Phi^\epsilon \| \nonumber\\
&\leq \gamma_2 \| \F^\epsilon\| ^2_{s-1}+C_{\gamma_2} (\| \mathcal{E}^\epsilon(t)\| ^2_{s}+1) \| \partial^\alpha_x
\Phi^\epsilon\| ^2
\end{align}
for any $\gamma_2>0$. For the term $\mathcal{I}^{(3_3)}$, by Cauchy-Schwarz's and the Moser-type inequalities, it can be bounded by
\begin{align}\label{ht133}
\big|\mathcal{I}^{(3_3)} \big| \leq & C(1+\|\mathcal{E}(t)\|^{2(s+1)}_s)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2.
\end{align}
By the regularity of $ (S^0,\u^0, \H^0)$, the positivity of $b(\Phi^\epsilon+S^0, P^\epsilon+p^0)$, and Cauchy-Schwarz's inequality, the
terms $\mathcal{I}^{(4)}$ and $\mathcal{I}^{(7)}$ can be bounded as follows:
\begin{align}\label{ht14}
\big| \mathcal{I}^{(4)} \big|+\big|\mathcal{I}^{(7)} \big| \leq C(\|\mathcal{E}^\epsilon(t)\|_{s}^{2s} +1)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2.
\end{align}
The term $ \mathcal{I}^{(5)}$ can be bounded, similarly to $\mathcal{I}^{(3_2)}$, by
\begin{align}\label{ht15}
\big| \mathcal{I}^{(5)}\big|\leq \gamma_3 \| \F^\epsilon\| ^2_{s-1}+C_{\gamma_3} (\| \mathcal{E}^\epsilon(t)\| ^2_{s}+1)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2
\end{align}
for any $\gamma_3>0$. Finally, similar to $\mathcal{I}^{(3_3)}$, the term $ \mathcal{I}^{(6)}$ can be controlled by
\begin{align}\label{ht16}
\big| \mathcal{I}^{(6)} \big| \leq & C(1+\|\mathcal{E}(t)\|^{2(s+1)}_s)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2.
\end{align}
Substituting \eqref{zn2}--\eqref{ht16} into \eqref{zn1}, we conclude that
\begin{align}\label{HT202a}
\!\!\! \frac12\frac{\dif}{\dif t} \langle \partial^\alpha_x\Phi^\epsilon,
\partial^\alpha_x\Phi^\epsilon \rangle
\leq &
{C}_{{\gamma}}(1+\|\mathcal{E}(t)\|^{2(s+1)}_s)
\|(\partial^\alpha_x \Phi^\epsilon, \partial^\alpha_xP^\epsilon, \partial^\alpha_x\U^\epsilon, \partial^\alpha_x\G^\epsilon)\|^2
\nonumber\\
& +\gamma_1 \| \F^\epsilon\| ^4_{s} +\left(\gamma_2+\gamma_3\right)\| \F^\epsilon\| ^2_{s-1} +C\|\mathcal{E}(t)\|^{2s}_s+
C\|\mathcal{E}(t)\|^{2}_s
\end{align}
for some constant $ {C}_{{\gamma}}>0$ depending on $\gamma_j$ ($j=1,2,3$).
Applying operator
$\partial^\alpha_x\, (1\leq|\alpha|\leq s)$ to \eqref{error3} and \eqref{error4}, multiplying the resulting equations
by $\partial^\alpha_x\F^\epsilon$ and $\partial^\alpha_x\G^\epsilon$ respectively, and then integrating over
${\mathbb T}^3$, we obtain that
\begin{align}
& \frac12\frac{\dif}{\dif t}(\epsilon\|\partial^\alpha_x\F^\epsilon\| ^2
+\| \partial^\alpha_x\G^\epsilon\| ^2) + \| \partial^\alpha_x\F^\epsilon\| ^2\nonumber\\
& +\int (\cu \partial^\alpha_x\F^\epsilon\cdot \partial^\alpha_x \G^\epsilon-\cu \partial^\alpha_x\G^\epsilon\cdot \partial^\alpha_x\F^\epsilon)\dif x\nonumber\\
= &
\left\langle [\partial^\alpha_x(\U^\epsilon\times \H^0)+\partial^\alpha_x(\u^0\times \G^\epsilon)]
-\partial^\alpha_x(\U^\epsilon\times \G^\epsilon),
\partial^\alpha_x\F^\epsilon\right\rangle\nonumber\\
&\ - \left\langle{\epsilon}\partial^\alpha_x\partial_t
\cu \H^0+\epsilon\partial^\alpha_x\partial_t(\u^0\times \H^0),
\partial^\alpha_x\F^\epsilon\right\rangle. \label{L2MMa}
\end{align}
In view of the regularity of $ (\u^0,\H^0)$, Cauchy-Schwarz's and the
Moser-type inequalities, and Sobolev's imbedding, the
terms on the right-hand side of \eqref{L2MMa} can be controlled by
\begin{align}\label{L2M1a}
\frac{1}{4} \|\partial^\alpha_x\F^\epsilon\|^2
+C(\|\mathcal{E}^\epsilon(t)\|_{s}^2+1)\|(\partial^\alpha_x\U^\epsilon,
\partial^\alpha_x\G^\epsilon)\|^2+C\epsilon^2.
\end{align}
Noticing the fact that
\begin{align*}
\int (\cu \partial^\alpha_x\F^\epsilon\cdot \partial^\alpha_x\G^\epsilon
-\cu \partial^\alpha_x\G^\epsilon\cdot \partial^\alpha_x\F^\epsilon)\dif x
=\int \dv(\partial^\alpha_x\F^\epsilon\times \partial^\alpha_x\G^\epsilon)\dif x =0,
\end{align*}
we find that
\begin{align}
&\frac12 \frac{\dif}{\dif t}(\|\sqrt{\epsilon }\,\partial^\alpha_x\F^\epsilon\| ^2+\| \partial^\alpha_x\G^\epsilon\| ^2)
+\frac{3 }{4}\| \partial^\alpha_x\F^\epsilon\| ^2 \nonumber\\
&\qquad \leq
C(\| \mathcal{E}^\epsilon(t)\| _{s}^2+1)\| (\partial^\alpha_x\U^\epsilon,\partial^\alpha_x\G^\epsilon)\| ^2+C\epsilon^2. \label{L2M2ba}
\end{align}
We should point out that here we have used the special structure of \eqref{error3} and \eqref{error4}.
Combining \eqref{HT202a} with \eqref{L2M2ba}, one obtains the estimate \eqref{H2a}.
\end{proof}
\medskip
\subsection{Higher order estimates on $P^\epsilon$ and $\U^\epsilon $}
In order to close the estimates \eqref{L2} and \eqref{H2a}, now we study the higher order derivatives of $P^\epsilon$ and $\U^\epsilon$.
We shall adapt the techniques developed in \cite{MS01,JJL4}. Set
\begin{align*}
\mathcal{A}(\Phi^\epsilon, P^\epsilon)& =\left(\begin{array}{cc}
a(\Phi^\epsilon+S^0, P^\epsilon+p^0) & 0 \\
0 & r(\Phi^\epsilon+S^0, P^\epsilon+p^0)\mathbf{I}_{3}
\end{array}\right),\nonumber\\
\mathcal{L}(\partial_x)& =\left(\begin{array}{cc}
0 & \dv \\
\nabla & 0 \end{array}\right), \qquad
\mathcal{U}^\epsilon=\left(\begin{array}{c}
P^\epsilon\\
\U^\epsilon
\end{array}
\right).
\end{align*}
Let $\mathcal{L}_{\mathcal{A}}(\partial_x ):= \{ \mathcal{A}(\Phi^\epsilon, P^\epsilon)\} ^{-1}\mathcal{L}(\partial_x)$. We have
\begin{lem}\label{LHB}
There are constants $K>0$ and $C_1>0$ such that for all $\sigma\in\{1,\dots,s\}$,
$\epsilon \in (0,1]$ and $t\in [0,T]$,
\begin{align}
& \|\mathcal{U}^\epsilon\|_\sigma \leq K\big\{\|\mathcal{L}(\partial_x) {\mathcal{U}^\epsilon}\|_{\sigma -1}
+ \|\cu \U^\epsilon \|_{\sigma-1}+\|\mathcal{U}^\epsilon\|_{\sigma-1}\big\},\label{pua}\\
&\|\mathcal{U}^\epsilon\|_\sigma \leq C_1\big\{\|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma \mathcal{U}^\epsilon\|_{0}
+\|\cu \U^\epsilon\|_{\sigma-1}+\|\mathcal{U}^\epsilon\|_{\sigma-1}\big\}.\label{pub}
\end{align}
\end{lem}
\begin{proof} Estimate \eqref{pua} is obvious. Recalling the fact that
$ a(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and $ r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ are smooth, positive, and bounded away from zero
with respect to each $\epsilon$, and applying the inequality
$$ \|\mathcal{L}(\partial_x) \mathcal{U}^\epsilon\|\leq \|\mathcal{A}\|_{L^\infty}\|\mathcal{L}_{\mathcal{A}}(\partial_x )\mathcal{U}^\epsilon\|,$$
we see that \eqref{pub} can be shown easily by induction on $\sigma$.
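For instance, in the base case $\sigma=1$, combining \eqref{pua} with the displayed inequality yields
\begin{align*}
\|\mathcal{U}^\epsilon\|_1 &\leq K\big\{\|\mathcal{L}(\partial_x) \mathcal{U}^\epsilon\|_{0}+\|\cu \U^\epsilon\|_{0}+\|\mathcal{U}^\epsilon\|_{0}\big\}\\
&\leq K\big\{\|\mathcal{A}\|_{L^\infty}\|\mathcal{L}_{\mathcal{A}}(\partial_x)\mathcal{U}^\epsilon\|_{0}+\|\cu \U^\epsilon\|_{0}+\|\mathcal{U}^\epsilon\|_{0}\big\},
\end{align*}
which gives \eqref{pub} with $\sigma=1$, the constant $C_1$ absorbing $K$ and the uniform bound on $\|\mathcal{A}\|_{L^\infty}$.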
\end{proof}
Next, we bound $\|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma {\mathcal{U}}^\epsilon\|_{0}$ and
$\|\cu \U^\epsilon\|_{\sigma-1}$ by induction. We first show the following estimate.
\begin{lem}\label{LHC}
There exist constants $\bar s>0$ and $\kappa_1>0$, such that for all $1\leq \sigma \leq s$, all
$\epsilon \in (0,1]$ and $t\in [0,T]$, it holds that
\begin{align}
&\int \big(\mathcal{A}(\Phi^\epsilon, P^\epsilon)|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma {\mathcal{U}}^\epsilon|^2\big)(t)\,\textrm{\emph{d}}x
\leq \int \big(\mathcal{A}(\Phi^\epsilon, P^\epsilon)|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma {\mathcal{U}}^\epsilon|^2\big)(0)\,\textrm{\emph{d}}x
\nonumber\\
&\quad
+C\epsilon^2 + \int^t_0 \kappa_1 \|\F^\epsilon(\tau)\|^4_s\dif \tau
+C_{\kappa_1}\int^t_0(1+\|\mathcal{E}(\tau)\|^{2\bar s}_s)\|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma \mathcal{U}^\epsilon(\tau)\|^2\dif \tau. \label{puc}
\end{align}
\end{lem}
\begin{proof} We follow the arguments in \cite{MS01,JJL4} with modifications. For simplicity, we set
$\mathcal{A}:= \mathcal{A}(\Phi^\epsilon, P^\epsilon)$.
Let $\mathcal{U}_\sigma^\epsilon:=\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma \mathcal{U}^\epsilon$,
$\sigma\in \{1,\dots, s\}.$
It is easy to verify that the operator $\mathcal{L}_{\mathcal{A}}(\partial_x )$ is
bounded from $H^{\sigma}$ to $H^{\sigma-1}$ for $\sigma \in \{1,\dots, s\}$. Note that
the equations \eqref{error1} and \eqref{error2} can be written as
\begin{align}\label{pud}
(\partial_t+(\U^\epsilon+\u^0)\cdot \nabla)\mathcal{U}^\epsilon+\frac{1}{\epsilon}\mathcal{A}^{-1}\mathcal{L}(\partial_x)\mathcal{U}^\epsilon
= \mathcal{A}^{-1}\mathcal{J}^\epsilon, \quad
{\mathcal{J}}^\epsilon=\left(\begin{array}{c}
f^\epsilon_1 \\
\mathbf{f}^\epsilon_2
\end{array}
\right).
\end{align}
For $ \sigma \geq 1$, we commute the operator $\{\mathcal{L}_{\mathcal{A}}\}^\sigma$ with \eqref{pud} and
multiply the resulting system by $\mathcal{A}$ to infer that
\begin{align}\label{pue}
\mathcal{A}(\partial_t+(\U^\epsilon+\u^0)\cdot \nabla)\mathcal{U}^\epsilon_\sigma
+\frac{1}{\epsilon}\mathcal{L}(\partial_x)\mathcal{U}^\epsilon_\sigma=\mathcal{A}({g}^\epsilon_\sigma+
\mathbf{h}^\epsilon_\sigma),
\end{align}
where
$$ {g}^\epsilon_\sigma:= [\partial_t+(\U^\epsilon+\u^0)\cdot \nabla , \{\mathcal{L}_{\mathcal{A}}\}^\sigma] \mathcal{U}^\epsilon,\qquad
\mathbf{h}^\epsilon_\sigma:= \{\mathcal{L}_{\mathcal{A}}\}^\sigma (\mathcal{A}^{-1} \mathcal{J}^\epsilon). $$
Multiplying \eqref{pue} by $ \mathcal{U}^\epsilon_\sigma$ and integrating
over $(0,t)\times\mathbb{T}^3$ with $t\leq T$, and noticing that
the singular terms cancel out since $\mathcal{L}(\partial_x)$ is skew-adjoint, we then use
the inequality \eqref{wb} and Cauchy-Schwarz's
inequality to deduce that
\begin{align}\label{puf}
\frac{1}{2}\langle \mathcal{A}(t)\mathcal{U}^\epsilon_\sigma(t), \mathcal{U}^\epsilon_\sigma(t)\rangle
\leq & \frac{1}{2}\langle \mathcal{A}(0)\mathcal{U}^\epsilon_\sigma(0),
\mathcal{U}^\epsilon_\sigma(0)\rangle
+C\int^t_0(1+\|\mathcal{E}(\tau)\|_s^4) \|\mathcal{U}^\epsilon_\sigma(\tau)\|^2 \dif\tau \nonumber\\
& +C\epsilon^2 + \int^t_0 (\|{g}^\epsilon_\sigma(\tau)\|^2_\sigma+\|\mathbf{h}^\epsilon_\sigma(\tau)\|^2_\sigma) \|\mathcal{U}^\epsilon_\sigma(\tau)\|^2 \dif \tau.
\end{align}
Following the proof process of Lemma 2.4 in \cite{MS01} and applying \eqref{wc}, we obtain that
\begin{align}\label{pug}
\|{g}^\epsilon_\sigma(t)\|\leq C(1+\|\mathcal{E}(t)\|^{2s_1}_s)
\end{align}
for some constant $s_1>0$.
Now we estimate the term $\mathbf{h}_\sigma^\epsilon$.
Noticing that there are no spatial derivatives of
$( P^\epsilon,\U^\epsilon,\Phi^\epsilon,\F^\epsilon,\G^\epsilon)$ in the definitions of $f^\epsilon_1$ and $\mathbf{f}^\epsilon_2$,
we can apply the regularity of $(p^0,\u^0,S^0,\H^0)$, \eqref{ma}, and \eqref{wc} to obtain that
\begin{align}\label{puh}
\|\mathbf{h}^\epsilon_\sigma(t)\|\leq \kappa_1\|\F^\epsilon(t)\|^2_s+C_{\kappa_1}(1+\|\mathcal{E}(t)\|^{2s_2}_s)
\end{align}
for some constant $s_2>0$ and sufficiently small $\kappa_1>0$.
Putting \eqref{pug} and \eqref{puh} into \eqref{puf} and choosing $\bar s=\max\{2,s_1,s_2\}$, we get \eqref{puc}.
\end{proof}
Finally, we derive an estimate for
$\|\cu\U^\epsilon\|_{\sigma-1}$.
Dividing \eqref{error2} by $ r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and applying the operator \emph{curl} to the resulting equations, we obtain that
\begin{align}
& [\partial_t +((\U^\epsilon+\u^0)\cdot \nabla)](\cu\U^\epsilon)=\, [(\U^\epsilon+\u^0)\cdot \nabla, \cu]\U^\epsilon \nonumber\\
&\qquad
-\cu \left(\frac{\nabla \Phi^\epsilon}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)
+\cu \left(\frac{\mathbf{f}^\epsilon_2}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right). \label{pui}
\end{align}
\begin{lem}\label{LHD} There exist constants $\bar{\bar s}>0$ and $\kappa_2>0$, such that the following inequality holds:
\begin{align} \label{puj}
\|\cu\U^\epsilon(t)\|^2_{s-1} \leq & \|\cu\U^\epsilon(0)\|^2_{s-1}+ C \epsilon^2 + \int^t_0 \kappa_2 \|\F^\epsilon(\tau)\|^4_s\dif \tau\nonumber\\
&
+C_{\kappa_2}\int^t_0(1+\|\mathcal{E}(\tau)\|^{2\bar{\bar s}}_s)\|\cu\U^\epsilon(\tau)\|_{s-1}^2\dif \tau.
\end{align}
\end{lem}
\begin{proof}
Set $\omega^\epsilon:= \cu\U^\epsilon$. Taking $\partial^\alpha_x$
$(0\leq|\alpha|\leq s-1)$ to \eqref{pui}, multiplying the resulting equations by $\partial^\alpha_x\omega^\epsilon$, and
integrating over $(0,t)\times \mathbb{T}^3$ with $t\leq T$, we infer that
\begin{align}\label{puk}
\frac{1}{2}\langle \partial^\alpha_x \omega^\epsilon(t), \partial^\alpha_x \omega^\epsilon(t)\rangle
\leq\,&\frac{1}{2}\langle \partial^\alpha_x \omega^\epsilon(0), \partial^\alpha_x\omega^\epsilon(0)\rangle
+ C\int^t_0(1+\|\mathcal{E}(\tau)\|_s^4)\|\partial^\alpha_x \omega^\epsilon(\tau)\|^2\dif\tau\nonumber \\
&+\int^t_0\left\langle [(\U^\epsilon+\u^0)\cdot \nabla, \partial^\alpha_x]\omega^\epsilon ,\partial^\alpha_x \omega^\epsilon\right\rangle\dif\tau \nonumber\\
&
-\int^t_0\left\langle \partial^\alpha_x\cu \left(\frac{\nabla \Phi^\epsilon}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right),\partial^\alpha_x \omega^\epsilon\right\rangle \dif\tau \nonumber\\
&+\int^t_0\left\langle\partial^\alpha_x\cu \left(\frac{\mathbf{f}^\epsilon_2}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right),\partial^\alpha_x \omega^\epsilon\right\rangle\dif\tau\nonumber\\
:=\,&\frac{1}{2}\langle \partial^\alpha_x \omega^\epsilon(0), \partial^\alpha_x\omega^\epsilon(0)\rangle
+ C\int^t_0(1+\|\mathcal{E}(\tau)\|_s^4)\|\partial^\alpha_x \omega^\epsilon(\tau)\|^2\dif\tau\nonumber \\
& + \int^t_0\sum_{i=1}^3 \mathcal{N}_i(\tau)\dif \tau.
\end{align}
Next, we estimate the terms $\mathcal{N}_i(\tau)$ ($i=1,2,3$) on the right-hand side of \eqref{puk}.
From Cauchy-Schwarz's inequality we get that
\begin{align*}
| \mathcal{N}_1(\tau)|\leq C\|\partial^\alpha_x\omega^\epsilon(\tau)\| \, \|\hat{\mathbf{h}}^\epsilon_\alpha(\tau)\|,
\qquad \hat{\mathbf{h}}^\epsilon_\alpha(\tau):= [(\U^\epsilon+\u^0)\cdot \nabla, \partial_x^\alpha]\omega^\epsilon.
\end{align*}
The commutator $\hat{\mathbf{h}}^\epsilon_\alpha$ is a sum of terms
$\partial^\beta_x (\U^\epsilon+\u^0)\cdot\partial^\zeta_x \omega^\epsilon$
with multi-indices $\beta$ and $\zeta$ satisfying $|\beta|+|\zeta|\leq s$, $|\beta|>0$,
and $|\zeta|>0$. Thus,
$$\|\hat{\mathbf{h}}^\epsilon_\alpha(\tau)\|\leq C(1+\|\mathcal{E}(\tau)\|^{s}_{s}),$$
where the following nonlinear Sobolev inequality has been used (see \cite{Ho97}): For all
$\alpha=(\alpha_1, \alpha_2 , \alpha_3)$, $\sigma\geq 0$, and $f,g\in H^{k+\sigma}(\mathbb{T}^3)$, $|\alpha|=k$,
it holds that
$$ \|[f,\partial^\alpha_x ]g\|_{{\sigma}}\leq C_0(\|f\|_{W^{1,\infty}}\| g\|_{{\sigma+k-1}}
+\| f\|_{{\sigma+k}}\|g\|_{L^{\infty}}).
$$
Hence, we have
\begin{align}\label{pul}
| \mathcal{N}_1(\tau)|\leq C(1+\|\mathcal{E}(\tau)\|^{2s}_{s})\|\partial^\alpha_x \omega^\epsilon(\tau)\|^2.
\end{align}
Using the basic vector identities $\cu(\psi \u)=\psi \cu \u+\nabla \psi \times \u$ and $\cu \nabla \psi =0$, the term $\mathcal{N}_2(\tau)$ can be
estimated by
\begin{align}\label{pull}
| \mathcal{N}_2(\tau)|\leq & \left\|\partial^\alpha_x\left\{\nabla \left(\frac{1}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)\times \nabla\Phi^\epsilon\right\}\right\|
\|\partial^\alpha_x \omega^\epsilon(\tau)\|\nonumber\\
\leq& C(1+\|\mathcal{E}(\tau)\|^{s_3}_{s})\|\partial^\alpha_x \omega^\epsilon(\tau)\|
\end{align}
for some $s_3>0$, where the properties of $r(\cdot,\cdot)$, Proposition \ref{Pb}, Sobolev's imbedding, \eqref{mo}, and \eqref{wc} have been used.
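Indeed, since $\cu \nabla \Phi^\epsilon=0$, taking $\psi=1/r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and $\u=\nabla \Phi^\epsilon$ in the identity $\cu(\psi \u)=\psi \cu \u+\nabla \psi \times \u$ gives
\begin{align*}
\cu \left(\frac{\nabla \Phi^\epsilon}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)
=\nabla \left(\frac{1}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)\times \nabla \Phi^\epsilon,
\end{align*}
which is exactly the term estimated in \eqref{pull}.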
By the definition of $\mathbf{f}^\epsilon_2$,
the regularity of $(p^0,\u^0,S^0,\H^0)$, Cauchy-Schwarz's
inequality, \eqref{ma}, \eqref{wc}, and \eqref{mo}, we have
\begin{align}\nonumber
\left\|\partial^\alpha_x\cu \left(\frac{\mathbf{f}^\epsilon_2}{r(\Phi^\epsilon+S^0, P^\epsilon+p^0)}\right)(t)\right\|\leq \tilde\kappa_2\|\F^\epsilon(t)\|^2_s+C_{\tilde\kappa_2}(1+\|\mathcal{E}(t)\|^{2s_4}_s)
\end{align}
for some constant $s_4>0$ and sufficiently small $\tilde\kappa_2>0$.
Thus the term $\mathcal{N}_3(\tau)$ can be bounded by
\begin{align}\label{pun}
| \mathcal{N}_3(\tau)|\leq \big( \tilde\kappa_2\|\F^\epsilon(t)\|^2_s+C_{\tilde\kappa_2}(1+\|\mathcal{E}(t)\|^{2s_4}_s)\big)\|\partial^\alpha_x \omega^\epsilon(\tau)\| .
\end{align}
Putting \eqref{puk}--\eqref{pun} together, summing over $\alpha$ with
$0\leq|\alpha|\leq s-1$, applying Cauchy-Schwarz's
inequality, and choosing $\bar{\bar{s}}=\max\{s_3,s_4\}$ and some sufficiently small $\kappa_2>0$, we obtain \eqref{puj}.
\end{proof}
With the estimates in Lemmas \ref{La}--\ref{LHD} in hand, we are in a position to prove Proposition \ref{P31}.
\begin{proof}[Proof of Proposition \ref{P31}]
As in \cite{JL,JL2,PW}, we introduce an $\epsilon$-weighted energy functional
$$
\Gamma^\epsilon(t) = \v \mathcal{E}^\epsilon(t)\v ^2_{s}.
$$
Summing up \eqref{H2a} with $1\leq|\alpha|\leq s$ and \eqref{puc} with
$1\leq \sigma\leq s-1$, combining \eqref{L2} with \eqref{puj}, using \eqref{pub} and
the fact that $ a(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ and $ r(\Phi^\epsilon+S^0, P^\epsilon+p^0)$ are smooth, positive, and bounded away from zero
with respect to each $\epsilon$, $\F^\epsilon \in C^l([0,T],H^{s-2l})\ ( l=0,1)$, and
$$\|\{\mathcal{L}_{\mathcal{A}}(\partial_x)\}^\sigma {\mathcal{U}}^\epsilon\|_{0}\leq C(1+\|\mathcal{E}(\tau)\|^{s_5}_s)\|\U^\epsilon(\tau)\|_{s}$$
for some $s_5>0$, we can choose $\eta_i \,(i=1,2,3)$,
$\gamma_i\,(i=1,2,3)$, and $\kappa_1,\kappa_2$ to be sufficiently small to deduce
that there exist a sufficiently large constant $s_0>0$ and
a small $\epsilon_0> 0$ depending only on $T$, such that for any $\epsilon\in (0,\epsilon_0]$ and $t\in [0,T]$,
\begin{align}\label{gma}
\Gamma^\epsilon(t)\leq C\Gamma^\epsilon(t = 0)+ C\epsilon^2+C\int^t_0\Big\{
\big(1+(\Gamma^\epsilon)^{s_0}\big)\Gamma^\epsilon\Big\}(\tau)\dif \tau .
\end{align}
Thus, applying Gronwall's lemma to \eqref{gma} with the assumption $\Gamma^\epsilon (t=0)\leq C\epsilon^2$
and Proposition \ref{P31}, we obtain that there exist a $0<T_1<1$ and an $\epsilon_1>0$, such that $T^\epsilon\geq T_1$
for all $\epsilon\in (0,\epsilon_1]$ and
$\Gamma^\epsilon(t)\leq C\epsilon^2$ for all $ t \in [0,T_1]$. Therefore, the desired a priori estimate
\eqref{www} holds. Moreover, by the standard continuous induction method, we can extend
$T^\epsilon\geq T_0$ for any $T_0<T_*$.
\end{proof}
Now we prove Theorem \ref{th} by applying Proposition \ref{P31}.
\begin{proof}[Proof of Theorem \ref{th}] By virtue of the definition of the error functions
$( P^\epsilon, \U^\epsilon, \Phi^\epsilon,\linebreak \F^\epsilon, \G^\epsilon)$,
the regularity of $( p^0,\u^0, S^0,\H^0)$, the error system \eqref{error1}--\eqref{error4} and the
primitive system \eqref{nca}--\eqref{nce} are equivalent on $[0,T]$ for some $T>0$.
Therefore the assumption \eqref{ivda} in Theorem \ref{th} implies the assumption \eqref{ww} in Proposition
\ref{P31}, and hence \eqref{www} gives \eqref{iivda}.
\end{proof}
{\bf Acknowledgements:}
The authors express their gratitude to Professor Hugo Beir\~{a}o da Veiga for his valuable and constructive
suggestions on the preparation of this paper.
Jiang was supported by the National Basic Research Program under the Grant 2011CB309705
and NSFC (Grant Nos. 11229101, 11371065).
Li was supported by NSFC (Grant No. 11271184), NCET-11-0227,
PAPD, and the Fundamental Research Funds for the Central Universities.
\bibliographystyle{plain}
Hagiography
He was a hermit monk, one of the numerous Italo-Greek monks of Byzantine Calabria. His birth name was Giovanni, and he enclosed himself in a monastery near the ancient city of Gerace. He often withdrew to an isolated cave on the "Settina" mountain (known as Monte San Jeiunio), not far from the town center, to practice eremitism and prolonged fasting. For this reason, Giovanni was renamed "Jeiunio" (fasting), not to be confused with John IV Nesteutes (known as the Faster). Together with Saint Anthony of Gerace (known as "del Castello"), Saint Veneranda, and the Immaculate Madonna, he is a patron of the town of Gerace in Calabria.
Notes
Bibliography
Lina Furfaro, "San Jeiunio compatrono di Gerace", Ed. Bibliotheka, Rome, 2019
Saints of the Eastern Orthodox Church
Josh Misner, PhD
Mindful Living in a Distracted World
Contact/Social
The Power of Kindness in Any Language
I didn't know what picture to insert, so here's a pretty sunset over Krakow.
Before our plane landed in Reykjavik, the first stop on our 15-country, 4-continent adventure, I had downloaded language packs for each country we had planned on visiting via Google Translate.
As we sat on the plane, with about 20 minutes before landing, I pulled up the app and immediately looked up a translation for one phrase, which I consider the most important universal symbol of kindness and connection: THANK YOU.
Fortunately, Icelandic was easy, "Takk!" Learning it didn't take much practice, as saying it is like the "tock" in "tick tock." Even more fortunately, it's the same in Norwegian, which was our second stop.
During our three days in Iceland and one day in Norway, I must have said thank you at least once to each native resident we interacted with, and with great certainty, I can report that 100% of the time, it put a smile on the face of the person receiving the phrase.
Before we arrived in Poland, however, I knew I had a challenge in front of me. My tongue didn't want to cooperate with the Polish language in general, let alone the word for "thanks."
Turns out, it's pronounced like "Jane-kwee-ay" with an extended pronunciation of the J part.
And it only took about fifteen Polish folks to remind me before it clicked.
However, as with Iceland, each and every time I said it, wrong or not, people lit up. In fact, I think they lit up TWICE as bright in Poland, probably because the language IS difficult, which likely keeps a lot of tourists from attempting to learn.
If I'm being honest here, that little, effortless touch of kindness has resulted in more smiles than I've seen in a long time – genuine smiles, like the kind we give others when they've made us feel important.
Dale Carnegie once said that the sweetest sound in any language is the sound of one's own name, but I might beg to differ.
I think the sweetest sound is that of a foreigner making a sincere effort to show residents of a culture that they matter enough to learn how to express kindness in their own language.
From here on out, I'm making it my goal to spread as much kindness as possible in any language I can.
If you like this, share it with someone you love!
Josh "Dr. J" Misner
Mindfulness researcher, author of Put the F**king Phone Down
Q: Trying to get a more verbose message for "Bus Error" from the kernel

I'm struggling with a Python program, deployed on a Raspberry Pi, that uses tshark to monitor network traffic, dumping the result to a separate file on a mounted external SSD disk.
I've tried almost every configuration possible:

* writing the output to the SD card instead of the external SSD disk
* isolating the process down to just the method that captures and writes the file
* adding several configuration flags to make it as verbose as possible

**Not tried on a separate computer, as it is a remote environment and I cannot change anything else regarding the HW.
**It is not clear whether the problem is HW-related (an SD problem?) or SW-related: https://www.wireshark.org/lists/ethereal-dev/199912/msg00027.html
Despite every test I tried, I cannot get a more verbose message than "Bus Error", which arises randomly after 2, 3 or 5 days of capturing without interruption.
The question is whether it is possible to force the OS to print more verbose messages.
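Not a kernel-level fix, but one thing worth trying in the meantime: if the capture loop lives in your own Python code, the standard-library `faulthandler` module can dump a Python-level traceback when the process receives a fatal signal such as SIGBUS, which at least pinpoints which line was executing when the bus error hit. A minimal sketch (enabling it at program start, before the capture begins):

```python
import faulthandler
import sys

# Dump a Python traceback on fatal signals (SIGSEGV, SIGFPE, SIGABRT,
# SIGBUS, SIGILL) -- SIGBUS is what surfaces as "Bus Error".
# Point `file` at a persistent log file instead of stderr if the
# process runs unattended for days.
faulthandler.enable(file=sys.stderr, all_threads=True)

print(faulthandler.is_enabled())  # → True

# ... start the tshark capture loop here ...
```

On the OS side, checking `dmesg` immediately after the crash, or enabling core dumps (`ulimit -c unlimited`) and inspecting the core with gdb, may also give more detail than the bare "Bus Error" message.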
from django.conf import settings

# Explicit relative imports: implicit relative imports are ambiguous
# and break under Python 3.
from .models import Publisher, MpttPublisher
from .mptt_support import Mptt
from .manager import PublisherManager

__all__ = ('Publisher', 'PublisherManager', 'MpttPublisher', 'Mptt', 'VERSION')
VERSION = (0, 4, 'sintab')
"""
Example: Nested Sampling for Gaussian Shells
============================================
This example illustrates the usage of the contrib class NestedSampler,
which is a wrapper of `jaxns` library ([1]) to be used for NumPyro models.
Here we will replicate the Gaussian Shells demo at [2] and compare against
NUTS sampler.
**References:**
1. jaxns library: https://github.com/Joshuaalbert/jaxns
2. dynesty's Gaussian Shells demo:
https://github.com/joshspeagle/dynesty/blob/master/demos/Examples%20--%20Gaussian%20Shells.ipynb
"""
import argparse
import matplotlib.pyplot as plt
from jax import random
import jax.numpy as jnp
import numpyro
from numpyro.contrib.nested_sampling import NestedSampler
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, DiscreteHMCGibbs
class GaussianShell(dist.Distribution):
support = dist.constraints.real_vector
def __init__(self, loc, radius, width):
self.loc, self.radius, self.width = loc, radius, width
super().__init__(batch_shape=loc.shape[:-1], event_shape=loc.shape[-1:])
def sample(self, key, sample_shape=()):
return jnp.zeros(
sample_shape + self.shape()
) # a dummy sample to initialize the samplers
def log_prob(self, value):
normalizer = (-0.5) * (jnp.log(2.0 * jnp.pi) + 2.0 * jnp.log(self.width))
d = jnp.linalg.norm(value - self.loc, axis=-1)
return normalizer - 0.5 * ((d - self.radius) / self.width) ** 2
def model(center1, center2, radius, width, enum=False):
z = numpyro.sample(
"z", dist.Bernoulli(0.5), infer={"enumerate": "parallel"} if enum else {}
)
x = numpyro.sample("x", dist.Uniform(-6.0, 6.0).expand([2]).to_event(1))
center = jnp.stack([center1, center2])[z]
numpyro.sample("shell", GaussianShell(center, radius, width), obs=x)
def run_inference(args, data):
print("=== Performing Nested Sampling ===")
ns = NestedSampler(model)
ns.run(random.PRNGKey(0), **data, enum=args.enum)
ns.print_summary()
# samples obtained from nested sampler are weighted, so
# we need to provide random key to resample from those weighted samples
ns_samples = ns.get_samples(random.PRNGKey(1), num_samples=args.num_samples)
print("\n=== Performing MCMC Sampling ===")
if args.enum:
mcmc = MCMC(
NUTS(model), num_warmup=args.num_warmup, num_samples=args.num_samples
)
else:
mcmc = MCMC(
DiscreteHMCGibbs(NUTS(model)),
num_warmup=args.num_warmup,
num_samples=args.num_samples,
)
mcmc.run(random.PRNGKey(2), **data, enum=args.enum)
mcmc.print_summary()
mcmc_samples = mcmc.get_samples()
return ns_samples["x"], mcmc_samples["x"]
def main(args):
data = dict(
radius=2.0,
width=0.1,
center1=jnp.array([-3.5, 0.0]),
center2=jnp.array([3.5, 0.0]),
)
ns_samples, mcmc_samples = run_inference(args, data)
# plotting
fig, (ax1, ax2) = plt.subplots(
2, 1, sharex=True, figsize=(8, 8), constrained_layout=True
)
ax1.plot(mcmc_samples[:, 0], mcmc_samples[:, 1], "ro", alpha=0.2)
ax1.set(
xlim=(-6, 6),
ylim=(-2.5, 2.5),
ylabel="x[1]",
title="Gaussian-shell samples using NUTS",
)
ax2.plot(ns_samples[:, 0], ns_samples[:, 1], "ro", alpha=0.2)
ax2.set(
xlim=(-6, 6),
ylim=(-2.5, 2.5),
xlabel="x[0]",
ylabel="x[1]",
title="Gaussian-shell samples using Nested Sampler",
)
plt.savefig("gaussian_shells_plot.pdf")
if __name__ == "__main__":
assert numpyro.__version__.startswith("0.10.1")
parser = argparse.ArgumentParser(description="Nested sampler for Gaussian shells")
parser.add_argument("-n", "--num-samples", nargs="?", default=10000, type=int)
parser.add_argument("--num-warmup", nargs="?", default=1000, type=int)
parser.add_argument(
"--enum",
action="store_true",
default=False,
help="whether to enumerate over the discrete latent variable",
)
parser.add_argument("--device", default="cpu", type=str, help='use "cpu" or "gpu".')
args = parser.parse_args()
numpyro.set_platform(args.device)
main(args)